Udemy

Open-source LLMs: Uncensored & secure AI locally with RAG

  • 14,362 Students
  • Updated 11/2025
  • Certificate Available
4.6
(1,713 Ratings)

Course Information

Registration period: Year-round enrollment
Duration: 10 hour(s) 2 minute(s)
Language: English
Taught by: Arnold Oberleiter
Certificate: Available
*The delivery and distribution of the certificate are subject to the policies and arrangements of the course provider.
Rating: 4.6 (1,713 ratings)

Course Overview

Open-source LLMs: Uncensored & secure AI locally with RAG

Private ChatGPT Alternatives: Llama3, Mistral, and more, with Function Calling, RAG, Vector Databases, LangChain, and AI Agents

ChatGPT is useful, but have you noticed that many topics are censored, that you are nudged in certain political directions, that some harmless questions go unanswered, and that your data may not be secure with OpenAI? This is where open-source LLMs like Llama3, Mistral, Grok, Falcon, Phi3, and Command R+ can help!

Are you ready to master the nuances of open-source LLMs and harness their full potential for various applications, from data analysis to creating chatbots and AI agents? Then this course is for you!

Introduction to Open-Source LLMs

This course provides a comprehensive introduction to the world of open-source LLMs. You'll learn about the differences between open-source and closed-source models and discover why open-source LLMs are an attractive alternative. Topics such as ChatGPT, Llama, and Mistral will be covered in detail. Additionally, you’ll learn about the available LLMs and how to choose the best models for your needs. The course places special emphasis on the disadvantages of closed-source LLMs and the pros and cons of open-source LLMs like Llama3 and Mistral.

Practical Application of Open-Source LLMs

The course guides you through the simplest way to run open-source LLMs locally and what you need for this setup. You will learn about the prerequisites, the installation of LM Studio, and alternative methods for operating LLMs. Furthermore, you will learn how to use open-source models in LM Studio, understand the difference between censored and uncensored LLMs, and explore various use cases. The course also covers finetuning an open-source model with Huggingface or Google Colab and using vision models for image recognition.
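As a sketch of what "running an open-source LLM locally" looks like in practice: tools like LM Studio can expose an OpenAI-compatible HTTP server on your own machine, which you can query from Python. The URL and model name below are assumptions — the default address and the loaded model depend on your installation, so check the server settings in your tool of choice:

```python
import json
from urllib import request

# Assumed default address of a local OpenAI-compatible server (e.g. LM Studio's
# server feature); verify the host/port in your own setup.
LOCAL_LLM_URL = "http://localhost:1234/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "local-model",
                       temperature: float = 0.7) -> dict:
    """Assemble an OpenAI-style chat-completion payload for a local LLM."""
    return {
        "model": model,  # many local servers simply use whichever model is loaded
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }

def ask_local_llm(prompt: str) -> str:
    """POST the payload to the local server and return the reply text."""
    data = json.dumps(build_chat_request(prompt)).encode("utf-8")
    req = request.Request(LOCAL_LLM_URL, data=data,
                         headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# ask_local_llm("Explain RAG in one sentence.")  # requires a model loaded locally
```

Because the payload format matches the OpenAI API, the same code works against most local backends (LM Studio, Ollama's OpenAI-compatible endpoint, etc.) by changing only the URL.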

Prompt Engineering and Cloud Deployment

An important part of the course is prompt engineering for open-source LLMs. You will learn how to use HuggingChat as an interface, utilize system prompts in prompt engineering, and apply both basic and advanced prompt engineering techniques. The course also provides insights into creating your own assistants in HuggingChat and using open-source LLMs with fast LPU chips instead of GPUs.
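To make the system-prompt idea concrete, here is a minimal sketch (not the course's own code) of how a system prompt carrying a role, constraints, and an output format is combined with the user's question in the standard chat-message structure:

```python
def build_messages(system_prompt: str, user_prompt: str) -> list[dict]:
    """Combine a system prompt (persona and rules) with the user's question
    in the chat-message format that most LLM interfaces and APIs expect."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]

# A structured role prompt: persona + task constraints + output format.
system = (
    "You are a senior data analyst. "
    "Answer concisely, show your reasoning step by step, "
    "and format the answer as Markdown with a short summary table."
)
messages = build_messages(system, "Which region grew fastest last quarter?")
```

The same pattern underlies "assistants" in chat interfaces such as HuggingChat: an assistant is essentially a saved system prompt that is silently prepended to every conversation.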

Function Calling, RAG, and Vector Databases

Learn what function calling is in LLMs and how to implement vector databases, embedding models, and retrieval-augmented generation (RAG). The course shows you how to install Anything LLM, set up a local server, and create a RAG chatbot with Anything LLM and LM Studio. You will also learn to perform function calling with Llama 3 and Anything LLM, summarize data, store it, and visualize it with Python.
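The retrieval step of RAG can be illustrated with a toy example. The sketch below uses word-count vectors and cosine similarity purely for illustration — a real pipeline (e.g. Anything LLM with LM Studio) would use a trained embedding model and a vector database instead:

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    """Toy 'embedding': a bag-of-words count vector. Real RAG systems use a
    trained embedding model so that synonyms land near each other."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank stored chunks by similarity to the query — the 'R' in RAG."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "Vector databases store embeddings for fast similarity search.",
    "Llama 3 is an open-source large language model by Meta.",
    "LM Studio runs open-source LLMs locally on your machine.",
]
context = retrieve("How do I search embeddings by similarity?", docs)
# The retrieved chunk(s) are then prepended to the prompt before it is sent
# to the LLM — the 'augmented generation' step.
```

Swapping `embed` for a real embedding model and `docs` for a vector database gives you the full retrieval pipeline; the ranking logic stays conceptually the same.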

Optimization and AI Agents

For optimizing your RAG apps, you will receive tips on data preparation and efficient use of tools like LlamaIndex and LlamaParse. Additionally, you will be introduced to the world of AI agents. You will learn what AI agents are, what tools are available, and how to install and use Flowise locally with Node.js. The course also offers practical insights into creating an AI agent that generates Python code and documentation, as well as using function calling and internet access.
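At the core of both function calling and AI agents sits a simple dispatch loop: the model emits a structured "tool call", the agent runs the matching function, and the result is fed back to the model. The sketch below shows only that dispatch step; the tool name and JSON shape are illustrative assumptions, not a specific framework's API:

```python
import json

def add_numbers(a: float, b: float) -> float:
    """Example tool the model is allowed to call (hypothetical)."""
    return a + b

# Tool registry: the set of functions the agent exposes to the model.
TOOLS = {"add_numbers": add_numbers}

def dispatch(tool_call_json: str) -> str:
    """Parse a model-emitted tool call and execute the matching function."""
    call = json.loads(tool_call_json)
    fn = TOOLS[call["name"]]
    result = fn(**call["arguments"])
    # In a real agent loop, this is serialized and returned to the LLM
    # as a tool message so it can compose the final answer.
    return json.dumps({"tool": call["name"], "result": result})

# What a function-calling model might emit for "What is 2 + 3?":
model_output = '{"name": "add_numbers", "arguments": {"a": 2, "b": 3}}'
reply = dispatch(model_output)
```

Frameworks like Flowise, LangChain, and Anything LLM wrap this loop in a UI or API, but the underlying contract — JSON tool call in, serialized result out — is the same.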

Additional Applications and Tips

Finally, the course introduces text-to-speech (TTS) with Google Colab and finetuning open-source LLMs with Google Colab. You will learn how to rent GPUs from providers like Runpod or Massed Compute if your local PC isn’t sufficient. Additionally, you will explore innovative tools like Microsoft Autogen and CrewAI and how to use LangChain for developing AI agents.

Harness the transformative power of open-source LLM technology to develop innovative solutions and expand your understanding of their diverse applications. Sign up today and start your journey to becoming an expert in the world of large language models!

Course Content

  • 9 section(s)
  • 88 lecture(s)
  • Section 1 Introduction and Overview
  • Section 2 Why Open-Source LLMs? Differences, Advantages, and Disadvantages
  • Section 3 The Easiest Way to Run Open-Source LLMs Locally & What You Need
  • Section 4 Prompt Engineering for Open-Source LLMs and Their Use in the Cloud
  • Section 5 Function Calling, RAG, and Vector Databases with Open-Source LLMs
  • Section 6 Optimizing RAG Apps: Tips for Data Preparation
  • Section 7 Local AI Agents with Open-Source LLMs
  • Section 8 Finetuning, Renting GPUs, Open-Source TTS, Finding the BEST LLM & More Tips
  • Section 9 Data Privacy, Security, and What Comes Next?

What You’ll Learn

  • Why Open-Source LLMs? Differences, Advantages, and Disadvantages of Open-Source and Closed-Source LLMs
  • What are LLMs like ChatGPT, Llama, Mistral, Phi3, Qwen2-72B-Instruct, Grok, Gemma, etc.
  • Which LLMs are available and what should I use? Finding "The Best LLMs"
  • Requirements for Using Open-Source LLMs Locally
  • Installation and Usage of LM Studio, Anything LLM, Ollama, and Alternative Methods for Operating LLMs
  • Censored vs. Uncensored LLMs
  • Finetuning an Open-Source Model with Huggingface or Google Colab
  • Vision (Image Recognition) with Open-Source LLMs: Llama3, Llava & Phi3 Vision
  • Hardware Details: GPU Offload, CPU, RAM, and VRAM
  • All About HuggingChat: An Interface for Using Open-Source LLMs
  • System Prompts in Prompt Engineering + Function Calling
  • Prompt Engineering Basics: Semantic Association, Structured & Role Prompts
  • Groq: Using Open-Source LLMs with a Fast LPU Chip Instead of a GPU
  • Vector Databases, Embedding Models & Retrieval-Augmented Generation (RAG)
  • Creating a Local RAG Chatbot with Anything LLM & LM Studio
  • Linking Ollama & Llama 3, and Using Function Calling with Llama 3 & Anything LLM
  • Function Calling for Summarizing Data, Storing, and Creating Charts with Python
  • Using Other Features of Anything LLM and External APIs
  • Tips for Better RAG Apps with Firecrawl for Website Data, More Efficient RAG with LlamaIndex & LlamaParse for PDFs and CSVs
  • Definition and Available Tools for AI Agents, Installation and Usage of Flowise Locally with Node (Easier Than Langchain and LangGraph)
  • Creating an AI Agent that Generates Python Code and Documentation, and Using AI Agents with Function Calling, Internet Access, and Three Experts
  • Hosting and Usage: Which AI Agent Should You Build and External Hosting, Text-to-Speech (TTS) with Google Colab
  • Finetuning Open-Source LLMs with Google Colab (Alpaca + Llama-3 8b, Unsloth)
  • Renting GPUs with Runpod or Massed Compute
  • Security Aspects: Jailbreaks and Security Risks from Attacks on LLMs with Jailbreaks, Prompt Injections, and Data Poisoning
  • Data Privacy and Security of Your Data, as well as Policies for Commercial Use and Selling Generated Content

Reviews

  • B
    Bhushan Asolkar
    4.0

    Covered all of the basics about Large Language Models all the way from Prompt engineering to RAG technology to creating agents. I felt that a disclaimer about who the course is oriented towards would have been helpful. Different people have different needs. As a developer, I would have liked to invest some time into a hands-on course which went into basics of creating AI apps programmatically. Nevertheless, a good course that covers the basics of the new AI world and allows you to do it locally without spending big bucks or having to purchase credits. So good work, Arnie.

  • E
    Erik Zeek
    4.0

    I thoroughly enjoyed this course. Hugging Face no longer has a chat interface, making some parts a bit difficult to follow, but not too difficult to navigate. For the apps that needed to be installed, it would have been better if containers could have been referenced. That would have simplified the installation.

  • V
    Vladimir Galabov
    1.0

    This course is so outdated! It's a total waste of time! Shameful that it's still online!!!

  • D
    David Mayor Tonda
    2.0

An interesting overview of Large Language Models (LLMs) and their use both offline and online.

    Positive aspects:
    1. The instructor possesses extensive experience in utilizing LLMs and has developed substantial skills, offering intriguing tips on their application.
    2. He presents and describes numerous tools for using LLMs, RAGs, and AI agents offline.
    3. The structure of the course is well-organized, with each lesson concluding in a summary of the topics covered.

    Negative aspects:
    1. In some lessons within Chapter 3, the instructor uses Huggingchat, which was shut down long ago. There is no information provided on alternative substitutes, so these lessons cannot be followed as intended.
    2. The entire Chapter 7 relies on Flowise, which has also been outdated for some time. Many of the flows and tools discussed are no longer available, with no alternatives suggested.
    3. Unfortunately, the instructor does not address a significant number of questions posed by students. Some questions affecting multiple students remain unanswered for months, particularly those regarding the aforementioned points.

    Conclusion: I generally seek courses that have been updated recently, as IT tools frequently change, and it's unwise to learn deprecated or soon-to-be deprecated tools. Although the course description mentions "Last update June 2025," it is unclear what specific updates were made in June 2025, especially regarding Huggingchat and Flowise. It seems that the instructor primarily makes minor adjustments rather than addressing major open issues. Consequently, I do not recommend this course at its full price. However, if you're interested in using LLMs offline, a reduced-price offer could be worthwhile — though it's essential to note that the information on RAGs and AI agents from Flowise is outdated.

