Developing and Deploying AI/ML Applications on Red Hat OpenShift AI Training in South Africa

  • Learn via: Classroom / Virtual Classroom / Online
  • Duration: 4 Days
  • Price: Please contact us for booking options

Gain essential skills in developing, training, and deploying AI and Machine Learning (ML) applications using Red Hat OpenShift AI.

The course “Developing and Deploying AI/ML Applications on Red Hat OpenShift AI (AI267)” equips participants with the foundational knowledge needed to design, train, and manage AI/ML models in OpenShift’s enterprise-grade environment.
Through guided labs and hands-on projects, students learn how to build data-driven workflows and automate model deployment efficiently.

This course is based on Red Hat OpenShift® 4.16 and Red Hat OpenShift AI 2.13.


We can organize this training on a date and at a location of your choice. Contact Us!

Prerequisites

  • Experience with Git.

  • Experience in Python programming or completion of Python Programming with Red Hat (AD141).

  • Familiarity with OpenShift development or completion of Red Hat OpenShift Developer II (DO288).

  • Basic understanding of AI, ML, and Data Science is recommended.

Who Should Attend

This course is ideal for:

  • Data Scientists and AI Specialists who want to train and deploy ML models on OpenShift AI.

  • Developers building AI-enabled applications.

  • MLOps Engineers responsible for managing the ML lifecycle.

  • AI Practitioners automating end-to-end data science workflows.

What You Will Learn

  • Introduction to Red Hat OpenShift AI

  • Data Science Projects and Workbenches

  • Jupyter Notebooks for Interactive Development

  • Installing Red Hat OpenShift AI

  • Managing Users and Resource Allocations

  • Creating Custom Notebook Images

  • Introduction to Machine Learning

  • Training ML Models

  • Enhancing Model Training with RHOAI

  • Model Serving Concepts and Deployment

  • Serving ML Models in Red Hat OpenShift AI

  • Introduction to Data Science Pipelines

  • Creating and Managing Pipelines

  • Controlling Pipelines and Experiments


Organizational Impact

Modern enterprises gather massive amounts of data from multiple sources.
With Red Hat OpenShift AI, organizations can leverage this data to analyze trends, visualize insights, and predict business outcomes using advanced AI and ML techniques — all while maintaining operational security and scalability.


Individual Impact

By completing this course, you will:

  • Understand the architecture and key components of Red Hat OpenShift AI.
  • Learn to install, configure, and manage OpenShift AI and its resources.
  • Gain hands-on experience training, deploying, and serving ML models.
  • Apply machine learning best practices using RHOAI.
  • Create and manage data science pipelines for automated workflows.


Training Outline

1. Introduction to Red Hat OpenShift AI
Identify OpenShift AI’s main features, components, and overall architecture.

2. Data Science Projects
Organize and manage project code, configurations, and data connections using workbenches.

3. Jupyter Notebooks
Execute, visualize, and test code interactively through Jupyter environments.
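For illustration, a first exercise in a workbench notebook often looks like the minimal sketch below; the dataset path and column names are placeholders, not course material:

```python
# Minimal notebook-style exploration in a workbench; the CSV path and
# column names are illustrative placeholders.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("data/sales.csv")            # placeholder dataset
print(df.describe())                          # quick statistical summary

df.plot(x="month", y="revenue", kind="line")  # placeholder columns
plt.title("Monthly revenue")
plt.show()
```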

4. Installing Red Hat OpenShift AI
Install and configure OpenShift AI components for AI/ML workloads.

5. User and Resource Management
Administer user access and control compute resource allocations.

6. Custom Notebook Images
Create, customize, and import notebook images for specialized workloads.

7. Introduction to Machine Learning
Learn ML fundamentals, key algorithms, and workflow design.

8. Training Models
Train models using standard or custom workbenches within RHOAI.
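As a rough illustration, training inside a workbench can be as simple as the following scikit-learn sketch; the framework and toy dataset are examples, not a course requirement:

```python
# A minimal training sketch using scikit-learn on a bundled toy dataset;
# RHOAI does not mandate a specific ML framework.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
import joblib

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))

# Persist the model so it can later be uploaded to object storage
# (for example via an S3 data connection) and served.
joblib.dump(model, "model.joblib")
```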

9. Enhancing Model Training with RHOAI
Implement best practices in ML and data science using Red Hat OpenShift AI tools.

10. Introduction to Model Serving
Explore the principles and components needed to serve trained models.

11. Model Serving in OpenShift AI
Deploy and manage production-ready models in OpenShift AI environments.
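Once a model is deployed, client applications typically call it over HTTP. The sketch below assumes a serving runtime that exposes the KServe v2 REST inference protocol; the route URL, model name, and input shape are placeholders:

```python
# Querying a served model; assumes the runtime speaks the KServe v2 REST
# protocol. Route, model name, and input values are placeholders.
import requests

ROUTE = "https://my-model-my-project.apps.example.com"  # placeholder route
MODEL = "my-model"                                       # placeholder name

payload = {
    "inputs": [
        {
            "name": "input-0",
            "shape": [1, 4],
            "datatype": "FP32",
            "data": [5.1, 3.5, 1.4, 0.2],
        }
    ]
}

resp = requests.post(f"{ROUTE}/v2/models/{MODEL}/infer", json=payload, timeout=30)
resp.raise_for_status()
print(resp.json()["outputs"])
```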

12. Data Science Pipelines
Set up data pipelines for end-to-end automation and reproducibility.

13. Working with Pipelines
Build pipelines using Kubeflow SDK and Elyra.
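For a sense of what pipeline code looks like, here is a minimal sketch using the Kubeflow Pipelines (kfp) v2 SDK; the component logic and names are illustrative placeholders:

```python
# A minimal two-step pipeline with the kfp v2 SDK; step contents are
# placeholders for real ingestion and training logic.
from kfp import dsl, compiler

@dsl.component(base_image="python:3.11")
def ingest_data() -> str:
    # A real step would pull data from object storage via a data connection.
    return "s3://example-bucket/dataset.csv"  # placeholder URI

@dsl.component(base_image="python:3.11")
def train_model(dataset_uri: str) -> str:
    print(f"Training on {dataset_uri}")       # placeholder training logic
    return "model-v1"

@dsl.pipeline(name="example-training-pipeline")
def training_pipeline():
    data_task = ingest_data()
    train_model(dataset_uri=data_task.output)

if __name__ == "__main__":
    # Compile to YAML that can be imported into a pipeline server.
    compiler.Compiler().compile(training_pipeline, "training_pipeline.yaml")
```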

14. Controlling Pipelines and Experiments
Track metrics, artifacts, and experiments for ML lifecycle management.
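As a small illustration, the kfp SDK lets a component record metrics as an output artifact so they appear alongside the run; the accuracy value below is a placeholder:

```python
# Logging a run metric from a pipeline component (kfp v2 SDK); the accuracy
# value stands in for a real evaluation result.
from kfp import dsl
from kfp.dsl import Metrics, Output

@dsl.component(base_image="python:3.11")
def evaluate(metrics: Output[Metrics]):
    accuracy = 0.95  # placeholder value
    metrics.log_metric("accuracy", accuracy)
```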



Contact us for more details about our training courses and for all other enquiries!