Udemy

LLMOps And AIOps Bootcamp With 8 End To End Projects

Enrol Now
  • 5,838 students
  • Updated 10/2025
  • Certificate available
4.3
(314 ratings)

Course Information

Enrolment period
Year-round
Course level
Learning mode
Duration
14 hours 56 minutes
Language of instruction
English
Instructors
KRISHAI Technologies Private Limited, Sudhanshu Gusain
Certificate
  • Available
  • *Certificate issuance and distribution are subject to the course provider's policies and arrangements.
Rating
4.3
(314 ratings)

Course Overview

Jenkins CI/CD, Docker, K8s, AWS/GCP, Prometheus monitoring & vector DBs for production LLM deployment with real projects

Are you ready to take your Generative AI and LLM (Large Language Model) skills to a production-ready level? This comprehensive hands-on course on LLMOps is designed for developers, data scientists, MLOps engineers, and AI enthusiasts who want to build, manage, and deploy scalable LLM applications using cutting-edge tools and modern cloud-native technologies.

In this course, you will learn how to bridge the gap between building powerful LLM applications and deploying them in real-world production environments using GitHub, Jenkins, Docker, Kubernetes, FastAPI, Cloud Services (AWS & GCP), and CI/CD pipelines.

We will walk through multiple end-to-end projects that demonstrate how to operationalize HuggingFace Transformers, fine-tuned models, and Groq API deployments with performance monitoring using Prometheus, Grafana, and SonarQube. You'll also learn how to manage infrastructure and orchestration using Kubernetes (Minikube, GKE), AWS Fargate, and Google Artifact Registry (GAR).

What You Will Learn:

Introduction to LLMOps & Production Challenges
Understand the challenges of deploying LLMs and how MLOps principles extend to LLMOps. Learn best practices for scaling and maintaining these models efficiently.

Version Control & Source Management
Set up and manage code repositories with Git & GitHub, integrate pull requests, branching strategies, and project workflows.

CI/CD Pipeline with Jenkins & GitHub Actions
Automate training, testing, and deployment pipelines using Jenkins, GitHub Actions, and custom AWS runners to streamline model delivery.

FastAPI for LLM Deployment
Package and expose LLM services using FastAPI, and deploy inference endpoints with proper error handling, security, and logging.

Groq & HuggingFace Integration
Integrate Groq API for blazing-fast LLM inference. Use HuggingFace models, fine-tuning, and hosting options to deploy custom language models.

Containerization & Quality Checks
Learn how to containerize your LLM applications using Docker. Ensure code quality and maintainability using SonarQube and other static analysis tools.

Cloud-Native Deployments (AWS & GCP)
Deploy applications using AWS Fargate, GCP GKE, and integrate with GAR (Google Artifact Registry). Learn how to manage secrets, storage, and scalability.

Vector Databases & Semantic Search
Work with vector databases like FAISS, Weaviate, or Pinecone to implement semantic search and Retrieval-Augmented Generation (RAG) pipelines.
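The retrieval step behind semantic search can be sketched in a few lines. This is a toy illustration with hand-made three-dimensional "embeddings"; a real RAG pipeline uses a vector database such as FAISS, Weaviate, or Pinecone over learned embeddings with hundreds of dimensions:

```python
# Toy sketch of the retrieval step behind semantic search / RAG:
# rank stored chunks by cosine similarity to a query embedding.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Document chunks paired with made-up embedding vectors (illustrative only).
index = [
    ("Jenkins automates CI/CD pipelines.", [0.9, 0.1, 0.0]),
    ("FAISS performs nearest-neighbour search.", [0.1, 0.9, 0.2]),
    ("Prometheus scrapes service metrics.", [0.0, 0.2, 0.9]),
]

def retrieve(query_vec, k=1):
    """Return the k chunks most similar to the query embedding."""
    ranked = sorted(index, key=lambda item: cosine(query_vec, item[1]),
                    reverse=True)
    return [text for text, _ in ranked[:k]]
```

A vector database replaces the `sorted` scan with an approximate nearest-neighbour index so retrieval stays fast over millions of chunks.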

Monitoring and Observability
Monitor your LLM systems using Prometheus and Grafana, and ensure system health with logging, alerting, and dashboards.
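Under the hood, Prometheus periodically scrapes a `/metrics` endpoint that serves plain text in its exposition format. In practice the official `prometheus_client` library renders this for you; building a couple of lines by hand (metric names here are hypothetical) shows what the scraper actually sees:

```python
# Sketch of the Prometheus text exposition format a /metrics endpoint
# serves. Metric names are illustrative; the prometheus_client library
# would normally generate this output.
def render_metrics(request_count, latency_sum, latency_count):
    lines = [
        "# HELP llm_requests_total Total inference requests served.",
        "# TYPE llm_requests_total counter",
        f"llm_requests_total {request_count}",
        "# HELP llm_latency_seconds Cumulative inference latency.",
        "# TYPE llm_latency_seconds summary",
        f"llm_latency_seconds_sum {latency_sum}",
        f"llm_latency_seconds_count {latency_count}",
    ]
    return "\n".join(lines) + "\n"
```

Grafana dashboards and alerting rules are then just PromQL queries over counters and summaries like these.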

Kubernetes & Minikube
Orchestrate containers and scale LLM workloads using Kubernetes, both locally with Minikube and on the cloud using GKE (Google Kubernetes Engine).
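The central object in such a deployment is a Kubernetes Deployment manifest. Its shape can be sketched as a plain Python dict (the image name, labels, and port below are hypothetical); `kubectl apply` normally consumes this as YAML, but it also accepts JSON:

```python
# Sketch of a Kubernetes Deployment manifest for an LLM service.
# Image name, labels, and port are hypothetical placeholders.
import json

def llm_deployment(name="llm-api",
                   image="registry.example.com/llm-api:latest",
                   replicas=2):
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,  # Kubernetes keeps this many pods running
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {
                    "containers": [{
                        "name": name,
                        "image": image,
                        "ports": [{"containerPort": 8000}],
                    }],
                },
            },
        },
    }

print(json.dumps(llm_deployment(), indent=2))
```

Scaling the service is then a one-field change (`replicas`), whether you apply it locally against Minikube or against a GKE cluster.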

Who Should Enroll?

  • MLOps and DevOps Engineers looking to break into LLM deployment

  • Data Scientists and ML Engineers wanting to productize their LLM solutions

  • Backend Developers aiming to master scalable AI deployments

  • Anyone interested in the intersection of LLMs, MLOps, DevOps, and Cloud

Technologies Covered:

Git, GitHub, Jenkins, Docker, FastAPI, Groq, HuggingFace, SonarQube, AWS Fargate, AWS Runner, GCP, Google Kubernetes Engine (GKE), Google Artifact Registry (GAR), Minikube, Vector Databases, Prometheus, Grafana, Kubernetes, and more.

By the end of this course, you’ll have hands-on experience deploying, monitoring, and scaling LLM applications with production-grade infrastructure, giving you a competitive edge in building real-world AI systems.

Get ready to level up your LLMOps journey! Enroll now and build the future of Generative AI.

Course Chapters

  • 9 chapters
  • 120 lessons
  • Chapter 1: COURSE INTRODUCTION
  • Chapter 2: AI Anime Recommender using Grafana Cloud, Minikube, ChromaDB, Langchain
  • Chapter 3: Flipkart Product Recommender using Prometheus, Grafana, Minikube, AstraDB, Langchain
  • Chapter 4: AI Travel Planner using Filebeat, ELK (Elasticsearch, Logstash, Kibana), Kubernetes
  • Chapter 5: Study Buddy AI using Minikube, Jenkins, ArgoCD, GitOps, Langchain, DockerHub
  • Chapter 6: Celebrity Detector & QA using Kubernetes, CircleCI, Groq, Llama-4, OpenCV, Flask
  • Chapter 7: Multi AI Agent using Jenkins, SonarQube, FastAPI, Langchain, Langgraph, AWS ECS
  • Chapter 8: Medical RAG Chatbot using Jenkins, Trivy, AWS, FAISS, Langchain, Flask, HTML/CSS
  • Chapter 9: AI Music Composer using GitLab CI/CD, GCP Kubernetes, Music21, Synthesizer

Course Content

  • Build and deploy real-world AI apps using Langchain, FAISS, ChromaDB, and other cutting-edge tools.
  • Set up CI/CD pipelines using Jenkins, GitHub Actions, CircleCI, GitLab, and ArgoCD.
  • Use Docker, Kubernetes, AWS, and GCP to deploy and scale AI applications.
  • Monitor and secure AI systems using Trivy, Prometheus, Grafana, and the ELK Stack.

Reviews

  • Manish Kumar (4.0)

    It will be better if you provide versions of packages in requirements.txt. Debugging the code and finding the correct version consumes a lot of time.

  • Paulo (5.0)

    Really enjoyed all the tools, and I was able to follow along and deploy all the projects to the cloud. Thank you.

  • Ayaan m (3.0)

    The content and the project ideas are very interesting. Overall, it gives good hands-on experience with LLMs and the Ops side of supporting/deploying those apps. A couple of notes to improve the course: other users have already complained about audio issues. Other than that, I would really appreciate a more in-depth explanation of the choices being made. Why use a character splitter for this text? When is it appropriate to use other splitters? A more in-depth approach towards design decisions and library selection would be greatly appreciated; it would give learners more intuition for how to solve future problems, not just create projects one time. Additionally, I would have liked to hear more about how you're handling some of the data as well.

  • Deepika N (5.0)

    Good use cases and excellent explanation. I am glad that I have registered for this course.

