Red Hat AI

Learning path

RoCE multi-node AI training on Red Hat OpenShift

In this learning path, you'll learn how to run a distributed AI workload on Red Hat OpenShift using just a few nodes and GPUs. We'll start with a straightforward manual training setup to cover the basics and keep things simple, then move on to a fully automated training procedure. This gives you a solid foundation that you can expand on to tailor your infrastructure to your specific needs. This path will guide you through training the...

Video

Unlocking the Power of AI: Accessible Language Model Tuning with InstructLab

Cedric Clyburn +1

Let's take a look at how to effectively integrate generative AI into an existing application through the InstructLab project, an open source methodology and community that makes LLM tuning accessible to all! Learn about the project, how InstructLab can help train a model on domain-specific skills and knowledge, and how Podman's AI Lab allows developers to easily set up an environment for model serving and AI-enabled application development.

Video

Generative AI Applied to Application Modernization with Konveyor AI

Syed M Shaaf +1

The Konveyor community has developed "Konveyor AI" (Kai), a tool that uses generative AI to accelerate application modernization. Kai integrates large language models with static code analysis to facilitate code modifications within a developer's IDE, helping teams transition efficiently to technologies like Quarkus. This video provides a short introduction and a demo showcasing the migration of the Java EE "coolstore" application to Quarkus using Konveyor AI.

Video

Red Hat Dan on Tech: Episode 2 - What to know when writing SELinux policies

Daniel Walsh +1

Welcome back to Red Hat Dan on Tech, where Senior Distinguished Engineer Dan Walsh dives deep on all things technical, from his expertise in container technologies with tools like Podman and Buildah, to runtimes, Kubernetes, AI, and SELinux! Let's talk about tips and tricks for writing SELinux policies, and how you can use containers to your advantage! This weekly series brings in guests from around the industry to highlight innovation and things you should know. New episodes are released right here, on the Red Hat Developer channel, every Wednesday at 9 a.m. EST. Stay tuned, and see you in the next episode!

Video

Red Hat Dan on Tech: Episode 1 - Welcome to the Show

Daniel Walsh +1

Welcome to the new Red Hat Dan on Tech, where Senior Distinguished Engineer Dan Walsh dives deep on all things technical, from his expertise in container technologies with tools like Podman and Buildah, to runtimes, Kubernetes, AI, and SELinux! This weekly series brings in guests from around the industry to highlight innovation and things you should know. New episodes are released right here, on the Red Hat Developer channel, every Wednesday at 9 a.m. EST. Stay tuned, and see you in the next episode!

Video

Getting Started with AI Lab on Podman Desktop

Cedric Clyburn +1

Kickstart your generative AI application development journey with Podman AI Lab, an open source extension for Podman Desktop that lets you build applications with LLMs in a local environment. Podman AI Lab helps make AI more accessible and approachable, providing recipes for example generative AI use cases, curated models sourced from Hugging Face, model serving with integrated code snippets, and a playground environment to test and tune model performance. Learn more on Red Hat Developer at https://developers.redhat.com/product... and download Podman Desktop today to get started!

Video

Generative AI Development with Podman AI Lab, InstructLab, & OpenShift AI

Cedric Clyburn +1

Let's take a look at how you can get started with generative AI in your application development process using open source tools: Podman AI Lab (https://podman-desktop.io/extensions/...) to build and serve applications with LLMs, InstructLab (https://instructlab.ai) to fine-tune models locally on your machine, and OpenShift AI (https://developers.redhat.com/product...) to operationalize building and serving AI on an OpenShift cluster.