5.0
(14 ratings)
5 Weeks
Cohort-based Course
Learn how to build production-grade RAG and LLM applications using AWS and GCP with FastAPI, with a focus on scale, security, and low latency.
Course overview
Welcome to the comprehensive course on advancing your skills in building sophisticated Large Language Model (LLM) applications!
We have set out to build the most advanced LLM course currently offered anywhere.
If you have already acquired knowledge about RAG, cosine similarity, vector databases, and Langchain, it's time to delve into the practical aspects of packaging and deploying these models in production environments.
This course builds upon the fundamental building blocks of LLMs and covers the following key topics:
1. Fine-tuning: Learn advanced techniques for fine-tuning LLMs (ChatGPT and Open-source LLMs) to enhance their performance and adapt them to specific tasks or domains.
2. Model merging: Explore methods to merge multiple models, optimizing their collective capabilities for more robust and versatile language processing.
3. Inference speed exploration: Understand strategies to optimize and accelerate inference speeds, ensuring efficient real-time processing of language model outputs.
4. Quantization methods: Dive into techniques for model quantization, reducing model size while maintaining performance, crucial for deployment in resource-constrained environments.
5. Model hosting and deployments: Gain insights into best practices for hosting and deploying LLMs in production settings, ensuring seamless integration into diverse applications.
6. Semantic Caching: Learn how to build semantic caching from scratch and implement it with GCP and Redis
7. Guardrails and DSPy: Implement state-of-the-art guardrails and learn how to build applications with minimal prompting
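To give a flavor of topic 6 above: the core idea of semantic caching is to serve a stored answer when a new query is semantically close to one answered before, instead of calling the LLM again. The course builds this out with GCP and Redis; the sketch below is a deliberately minimal in-memory version, and its `embed` function is a toy bag-of-words stand-in for a real embedding model.

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words embedding. A real system would use an
    embedding model (e.g. OpenAI or sentence-transformers)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class SemanticCache:
    """Return a cached answer when a new query is close enough to a
    previously seen one; otherwise return None (cache miss)."""
    def __init__(self, threshold=0.8):
        self.threshold = threshold
        self.entries = []  # list of (embedding, answer) pairs

    def get(self, query):
        q = embed(query)
        for emb, answer in self.entries:
            if cosine(q, emb) >= self.threshold:
                return answer
        return None

    def put(self, query, answer):
        self.entries.append((embed(query), answer))

cache = SemanticCache(threshold=0.8)
cache.put("what is the capital of france", "Paris")
print(cache.get("what is the capital of france?"))  # near-duplicate -> Paris
print(cache.get("how do i bake bread"))             # unrelated -> None
```

In a production version, the linear scan over entries would be replaced by a vector index (e.g. Redis vector search), and the cache would be keyed by embedding rather than stored in a Python list.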
Throughout the course, we will analyze state-of-the-art AI products, reverse-engineering some through Python.
Additionally, collaboration with the experienced software engineers on our team will provide valuable insights into integrating LLMs with Node.js for web application development.
As a bonus, you'll have access to experimental products being developed at Traversaal.ai, my startup, allowing you to stay at the forefront of cutting-edge advancements in the field.
Prerequisites for this course include proficiency in Python and a solid understanding of RAG, as well as encoder and decoder models.
If you feel the need for a more foundational course, consider checking out my other offering on LLMs: https://maven.com/boring-bot/ml-system-design
Tools utilized in this course include VS Code, UNIX terminal, Jupyter Notebooks, and Conda package management, ensuring a hands-on and practical learning experience.
01
Machine Learning Engineer exploring different techniques to scale LLM solutions
02
Researcher who would like to delve into various aspects of open-source LLMs
03
Software Engineer, looking to learn how to integrate AI into their products
Gain hands-on experience and mastery in deploying Large Language Models (LLMs) in real-world production environments, covering the entire spectrum from model packaging to seamless integration into diverse applications.
Acquire advanced techniques for fine-tuning LLMs, enabling you to adapt these models to specific tasks or domains and enhance their performance in targeted applications.
Learn the art of model merging to combine multiple models effectively, optimizing their collective capabilities for robust and versatile language processing tailored to your application's requirements.
Explore strategies for optimizing inference speed, ensuring that your language models perform efficiently in real-time scenarios, a crucial skill for deploying responsive and scalable applications.
Dive into the analysis of state-of-the-art AI products, reverse-engineering some through Python, and gain exclusive access to experimental products developed at Traversaal.ai, staying at the forefront of innovations in the field of advanced language modeling.
8 interactive live sessions
Lifetime access to course materials
8 in-depth lessons
Direct access to instructor
2 projects to apply learnings
Guided feedback & reflection
Private community of peers
Course certificate upon completion
Maven Satisfaction Guarantee
This course is backed by Maven’s guarantee. You can receive a full refund within 14 days after the course ends, provided you meet the completion criteria in our refund policy.
Advanced LLM Application Building
Week 1
Jul 6—Jul 7
Events
Sat, Jul 6, 4:00 PM - 6:00 PM UTC
Modules
Week 2
Jul 8—Jul 14
Events
Tue, Jul 9, 7:00 PM - 7:30 PM UTC
Sat, Jul 13, 4:00 PM - 6:00 PM UTC
Modules
Week 3
Jul 15—Jul 21
Events
Wed, Jul 17, 7:00 PM - 7:30 PM UTC
Sat, Jul 20, 4:00 PM - 6:00 PM UTC
Week 4
Jul 22—Jul 28
Events
Wed, Jul 24, 7:00 PM - 7:30 PM UTC
Sat, Jul 27, 4:00 PM - 6:00 PM UTC
Week 5
Jul 29—Aug 4
Events
Sat, Aug 3, 4:00 PM - 6:00 PM UTC
Post-Course
Modules
I am a Founder by day and Professor by night. My work revolves around LLMs and Multi-Modal Systems.
My startup, Traversaal.ai, was built with one vision: provide scalable LLM solutions for startups and enterprises that integrate seamlessly into existing ecosystems while remaining customizable and cost-efficient.
This course is the culmination of all my learnings and the courses I teach at other universities.
Cohort 2
$800
Dates
Payment Deadline
Don't miss out! Enrollment closes in 2 days
9:00 - 11:00am PT
Virtual Class
2-3 hours per week
Work in teams to build solutions; this requires engagement with other team members
Active hands-on learning
This course builds on live workshops and hands-on projects
Interactive and project-based
You’ll be interacting with other learners through breakout rooms and project teams
Learn with a cohort of peers
Join a community of like-minded people who want to learn and grow alongside you