Udemy

Generative AI Engineering: LLMs, RAG, and Agentic Systems

  • 4,632 students
  • Updated 1/2026
  • 4.6 (941 ratings)

Course Information

Registration period: Year-round enrollment
Course Level:
Study Mode:
Duration: 15 hour(s) 12 minute(s)
Language: English
Taught by: Rajeev Sakhuja
Rating: 4.6 (941 ratings)

Course Overview

Generative AI Engineering: LLMs, RAG, and Agentic Systems

Learn to design and implement GenAI workflows, multi-agent systems, LangChain, LangGraph, MCP, and model fine-tuning

Build Real-World Generative AI Systems with LLMs, RAG, and AI Agents

Go beyond prompts and chatbots. This course takes you on a complete, progressive journey from Generative AI fundamentals to advanced system-level techniques.

You’ll start by mastering the core concepts of LLMs, NLP, and AI model behavior, then move into applying RAG pipelines, vector search, and prompting patterns. Finally, you’ll tackle advanced topics such as agentic systems, multi-agent orchestration, LangGraph workflows, MCP, and model fine-tuning.

Learn to design and implement intelligent AI workflows and system components using multiple LLMs, LangChain, LangGraph, embeddings, and agentic reasoning—without the pressure of building full production applications.

Skip the beginner fluff—this is for engineers, architects, and technical founders who want to understand how modern GenAI systems are actually structured and engineered.


What You Will Learn

  1. Understand Generative AI foundations and how LLMs work, including OpenAI, Claude, Gemini, and Hugging Face models.

  2. Apply RAG pipelines, vector search, embeddings, and structured outputs to create robust AI workflows.

  3. Learn prompting techniques, in-context learning, and fine-tuning strategies for advanced LLM behavior.

  4. Build and test agentic and multi-agent systems using LangChain and LangGraph.

  5. Explore MCP servers and clients to integrate LLM reasoning with external tools and services.

  6. Understand system-level best practices for efficiency, scalability, cost, and responsible AI deployment.
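
The RAG pipeline named above follows an embed-retrieve-augment pattern. As a rough illustration only, here is a minimal pure-Python sketch of the retrieval step, using a bag-of-words stand-in for a real embedding model and an in-memory list instead of a vector database (all names here are illustrative, not from the course materials):

```python
# Toy retrieval step of a RAG pipeline: "embed" documents as word-count
# vectors, rank them by cosine similarity to the query, and build an
# augmented prompt. A real pipeline would use a learned embedding model
# and a vector database instead.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Bag-of-words 'embedding' (stand-in for a real embedding model)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "LangGraph models agent workflows as state machines.",
    "Embeddings map text into a vector space for semantic search.",
    "MCP standardises how LLMs call external tools.",
]
context = retrieve("semantic search with embeddings", docs, k=1)
prompt = f"Answer using this context:\n{context[0]}\n\nQuestion: ..."
```

The same retrieve-then-augment shape carries over when the toy pieces are swapped for real embedding models and vector stores.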


Hands-On Learning

This is a learning-by-doing course, focused on frameworks, patterns, and exercises, rather than fully functional apps. You will:

  • Work with multiple LLMs and open-source models to understand their behavior.

  • Implement retrieval pipelines, multi-agent patterns, and workflows in hands-on exercises.

  • Explore LangChain, LangGraph, embeddings, vector databases, and MCP integration in manageable components.

  • Gain practical, reusable code snippets and exercises without the stress of shipping a full product.
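
One recurring exercise theme above is structured outputs. As a hedged sketch (the parser and required-key set are illustrative assumptions, not the course's code), a common pattern is to request JSON from the model, then parse and validate it before use:

```python
# Sketch of handling structured output from an LLM: models often wrap
# JSON in prose, so extract the outermost object, parse it, and check
# required keys before trusting it.
import json

REQUIRED = {"title", "tags"}  # illustrative schema

def parse_llm_json(raw: str) -> dict:
    """Extract and validate a JSON object from raw LLM text."""
    start, end = raw.find("{"), raw.rfind("}")
    if start == -1 or end == -1:
        raise ValueError("no JSON object in response")
    obj = json.loads(raw[start : end + 1])
    missing = REQUIRED - obj.keys()
    if missing:
        raise ValueError(f"missing keys: {missing}")
    return obj

# A typical chatty response with JSON embedded in prose:
raw = 'Sure! Here is the result:\n{"title": "RAG intro", "tags": ["rag", "llm"]}'
record = parse_llm_json(raw)
```

Frameworks such as LangChain offer output parsers that package this same extract-and-validate step.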


Who This Course Is For

  • Software engineers & application developers learning system-level GenAI design

  • Solution and platform architects designing LLM-powered workflows and pipelines

  • Cloud, platform, and backend engineers transitioning into Generative AI engineering roles

  • Startup builders and technical founders exploring AI-native system architecture

  • Professionals preparing for Generative AI Engineer / Applied AI / Architect roles

Not for beginners expecting “easy prompts and chatbots,” or for data scientists seeking a math-heavy course.


Course Features

  • 29+ Hours of Video Content

  • Hands-On Projects and Coding Exercises

  • Real-World Examples

  • Quizzes for Learning Reinforcement

  • GitHub Repository with Solutions

  • Web-Based Course Guide


By the end of this course, you'll be well-equipped to leverage Generative AI for a wide range of applications, from natural language processing to content generation and beyond.


Recent Course Updates

  • Jan 2026 – New section on multi-agent system patterns

  • Sep 2025 – Two new sections on building agents with LangGraph

  • Aug 2025 – Added lessons on chat models (subscriber request)

  • Jul 2025 – Updated MCP content after protocol changes

  • Jun 2025 – Expanded MCP lessons (subscriber request)

  • May 2025 – Model Context Protocol (MCP) section added

  • May 2025 – Python UV environment support

  • Mar 2025 – Multiple curriculum expansions

  • Feb 2025 – LLM fine-tuning lessons added (subscriber request)



Course Content

  • 23 section(s)
  • 269 lecture(s)
  • Section 1 Introduction
  • Section 2 Setup development environment
  • Section 3 Generative AI: Fundamentals
  • Section 4 Generative AI applications
  • Section 5 Hugging Face Models: Fundamentals
  • Section 6 (Optional) Hugging Face Models: Advanced
  • Section 7 LLM challenges & prompt engineering
  • Section 8 LangChain: Prompts, Chains & LCEL
  • Section 9 Dealing with structured responses from LLMs
  • Section 10 Datasets for model training and testing
  • Section 11 Vectors, embeddings & semantic search
  • Section 12 Vector databases
  • Section 13 Conversation User Interface
  • Section 14 Advanced Retrieval Augmented Generation
  • Section 15 Agentic RAG
  • Section 16 Model Context Protocol (MCP)
  • Section 17 Building workflows & agents with LangGraph
  • Section 18 Building workflows & agents with LangGraph
  • Section 19 Multi-Agent Systems: Patterns
  • Section 20 Fine-tuning
  • Section 21 Dataset preparation for fine-tuning
  • Section 22 Pre-training & Fine-tuning with HuggingFace Trainer
  • Section 23 Quantization

What You’ll Learn

  • Master Generative AI foundations, how LLMs work, and how modern AI systems are designed and applied in real-world products.

  • Design and build end-to-end Generative AI systems using LLMs, retrieval pipelines, tools, and agentic workflows.

  • Implement Retrieval-Augmented Generation (RAG), embeddings, vector search, reranking, and advanced retrieval patterns.

  • Build AI agents, multi-step reasoning systems, and multi-agent workflows using LangChain and LangGraph.

  • Develop production-style applications with structured outputs, validation, memory, and human-in-the-loop workflows.

  • Create MCP servers and clients to connect LLMs to real tools, services, and enterprise systems.

  • Fine-tune and optimize models using Hugging Face workflows, dataset preparation, and quantization techniques.

  • Apply system-level best practices for cost, reliability, scalability, and responsible deployment of GenAI applications.
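
The agent and multi-step reasoning outcomes above boil down to a decide-act-observe loop. As a minimal pure-Python sketch (the stub "LLM" policy, tool names, and messages are illustrative assumptions, not course code), the loop lets a model call a tool and then answer from the tool's result:

```python
# Minimal agent loop: a stub "LLM" decides whether to call a tool or
# answer; the loop executes tools and feeds results back until a final
# answer appears. Frameworks like LangGraph formalise this loop as a
# graph of nodes and edges with explicit state.
def calculator(expr: str) -> str:
    # Demo-only arithmetic tool; eval is not safe for untrusted input.
    return str(eval(expr, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}

def fake_llm(messages: list[dict]) -> dict:
    """Stub policy: route arithmetic to the tool, then summarise."""
    last = messages[-1]
    if last["role"] == "user":
        return {"action": "call_tool", "tool": "calculator", "input": "6 * 7"}
    return {"action": "final", "answer": f"The result is {last['content']}."}

def run_agent(question: str) -> str:
    messages = [{"role": "user", "content": question}]
    for _ in range(5):  # cap steps to avoid infinite loops
        step = fake_llm(messages)
        if step["action"] == "final":
            return step["answer"]
        result = TOOLS[step["tool"]](step["input"])
        messages.append({"role": "tool", "content": result})
    raise RuntimeError("agent did not finish")

answer = run_agent("What is 6 * 7?")
```

Replacing the stub with a real chat-model call and a registry of real tools yields the basic single-agent pattern; multi-agent systems compose several such loops.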


Reviews

  • P
    Pravinkumar Menon
    5.0

    excellent learning

  • P
    Prabhu Arumugam
    5.0

    Nice useful knowledgeable course

  • B
    Bhikshalu Moka
    5.0

    Excellent foundational experience. This is the core, Sir. Please create a new series with exercises and projects with deployments. Code should be repeated and explained multiple times across sessions to remember it in detail; just looking at code may not help.

  • P
    Priyesh Nagaraj
    5.0

    Very much useful and more informative. Happy to be part of this learning. Thanks so much..:-)

