AI267

Developing and Deploying AI/ML Applications on Red Hat OpenShift AI

Overview

Course Description 

Operationalize the complete life cycle of modern AI applications at scale by using Red Hat OpenShift AI.

Developing and Deploying AI/ML Applications on Red Hat OpenShift AI (AI267) provides students with the fundamental knowledge to manage the complete life cycle of modern AI applications. This course helps students build core skills for using Red Hat OpenShift AI to efficiently train, test, deploy, and monitor both predictive and generative AI models at scale. 

This course is based on Red Hat OpenShift® 4.18 and Red Hat OpenShift AI 2.25.

Course Content Summary

  • Introduction to Red Hat OpenShift AI
  • Using Workbenches for AI/ML Development
  • Fundamentals of Model Serving
  • Serving Generative and Predictive AI Models
  • Monitoring AI Models
  • Introduction to Data Science Pipelines
  • Advanced Kubeflow Pipelines Development and Experiments
  • GenAI Model Selection, Optimization, and Evaluation
  • Building GenAI Applications

Target Audience

  • ML Engineers responsible for handling the operational tasks of the MLOps/LLMOps life cycle, such as deployment, automation, and monitoring.
  • Data Scientists who train, deploy, and track their own models. 

Outline

Course Outline 

Introduction to Red Hat OpenShift AI
Describe how Red Hat OpenShift AI provides a complete MLOps and GenAIOps platform, and use it to configure data science projects for team collaboration.

Using Workbenches for AI/ML Development
Use workbench environments for AI/ML development and connect them to data sources and stores.

Fundamentals of Model Serving
Prepare, deploy, and serve models by using OpenShift AI model serving capabilities.

Serving Generative and Predictive AI Models
Deploy and serve AI models with specific runtimes, including OpenVINO for predictive models and vLLM for large language models.

Monitoring AI Models
Monitor deployed models for bias, data drift, and performance by using TrustyAI and observability tools to ensure reliable and ethical AI performance in production. 

Introduction to Data Science Pipelines
Create and manage basic data science pipelines by using Elyra and the Kubeflow Pipelines SDK to automate fundamental AI/ML workflows.

Advanced Kubeflow Pipelines Development and Experiments
Implement advanced pipeline features, including container components, artifact management, Kubernetes configuration, and systematic experimentation, for production MLOps workflows.

GenAI Model Selection, Optimization, and Evaluation
Systematically select, optimize, and evaluate large language models by using the Red Hat OpenShift AI model catalog, compression techniques, and evaluation frameworks.

Building GenAI Applications
Build production-ready GenAI applications by using industry patterns, including RAG, agentic workflows, and trustworthy AI practices, moving beyond basic model serving to deliver complete intelligent solutions.

Outcomes

Impact on the Organization

  • Organizations often see their data science efforts slowed by manual tasks and the increasing complexity of integrating AI tools, especially with Generative AI. With Red Hat OpenShift AI, organizations gain a unified platform to manage the complete life cycle of modern AI applications. This capability allows them to efficiently train, test, deploy, and monitor both predictive and generative AI models at scale, transforming experimental initiatives into reliable business outcomes.

Impact on the Individual

  • As a result of attending this course, you will be able to manage the complete life cycle of modern AI applications by efficiently training, testing, deploying, and monitoring both predictive and generative AI models at scale. You will learn to configure collaborative data science projects, efficiently use workbench environments, and assign specialized resources. You will prepare, deploy, and serve models by using specialized runtimes. Furthermore, you will automate MLOps workflows by creating advanced data science pipelines and build production-ready GenAI solutions. Finally, you will ensure reliable and ethical AI performance by monitoring deployed models for bias and data drift, and by implementing safety guardrails for generative applications.
