Welcome to the "Multimodal AI Essentials" code repository! In this repo, we will learn how multimodal AI merges text, image, and audio for smarter models.
Much of the code in these sessions will be featured in the 2nd edition of my latest book on LLMs.
So if you're itching for more, check it out and please leave a rating/review to tell me what you thought :)
For even more, check out my Expert Playlist!
- Intermediate to Advanced Python Skills: Comfort with Python is crucial, as we'll be using it throughout the course to interact with Hugging Face tools and integrate NLP into practical examples.
- Foundational Machine Learning Knowledge: You should have an understanding of core machine learning principles, as we'll build upon these concepts when exploring advanced NLP techniques.
- Clone this repository to your local machine.
- Ensure you have set the following API keys:
- OpenAI key
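For example, one common way to supply the key inside a notebook session, assuming you have an OpenAI key on hand (`OPENAI_API_KEY` is the environment variable the OpenAI Python SDK reads by default):

```python
# Minimal sketch: set the OpenAI key for this session without pasting it into a cell.
import os
import getpass

os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API key: ")
```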
You're all set to explore the notebooks!
This project contains several Jupyter notebooks, each focusing on a specific topic:
- Intro to Multimodality: An introduction to multimodality with CLIP and SHAP-E (a minimal CLIP sketch appears just after this first group of notebooks)
- Whisper: An introduction to using Whisper for audio transcription (see the transcription sketch in the Text to Speech section below)
- Llava: Using an open-source multi-turn multimodal engine
- Multimodal Semantic Search: Using the SigLIP model to build an image search system
- Visual Q/A: This case study requires you to download the data from my Dropbox here (the code snippets can also download the data for you if that is easier). Our goal is to emulate the process behind Llama 3.2-Vision-Instruct, one of Meta's latest Llama models that can take in images.
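To give a flavor of the Intro to Multimodality notebook, here is a minimal CLIP sketch using Hugging Face `transformers`; the checkpoint is one common public choice and `image.jpg` is a placeholder, not necessarily what the notebook uses:

```python
# Minimal sketch: score candidate captions against an image with CLIP.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("image.jpg")  # placeholder image path
captions = ["a photo of a dog", "a photo of a cat"]

inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)

# logits_per_image holds image-to-caption similarity scores
probs = outputs.logits_per_image.softmax(dim=1)
print(dict(zip(captions, probs[0].tolist())))
```

The same embedding idea scales up to the SigLIP-based semantic search notebook: embed every image once, embed the query text, and rank by similarity.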
- Diffusion: Exploring diffusion models and fine-tuning techniques like Dreambooth
  - Intro to Diffusion (StableDiff + Flux): Generating images using diffusion models
  - Dreambooth: Fine-tuning a Stable Diffusion model to make images of yours truly! Ever wonder what I look like blonde? Me neither, but AI gave me some ideas of what it would look like.
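As a taste of the diffusion notebooks, here is a minimal text-to-image sketch with the `diffusers` library, assuming a CUDA GPU; the checkpoint below is one widely used Stable Diffusion release, not necessarily the exact one in the notebooks:

```python
# Minimal sketch: generate one image from a text prompt with Stable Diffusion.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed checkpoint; swap in your own
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

image = pipe("a watercolor painting of a lighthouse at dawn").images[0]
image.save("lighthouse.png")
```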
- Text to Speech: Fine-tuning text-to-speech models
  - Fine-tuning SpeechT5 to speak Turkish: An example of proper SpeechT5 fine-tuning with 150k+ high-quality audio and transcription examples
  - Fine-tuning SpeechT5 to speak like Sinan: I grab videos of myself from YouTube, extract the audio, and run it through OpenAI's Whisper to build my own dataset for training the model to sound more like me
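The transcription step in that pipeline (and in the Whisper notebook above) boils down to a few lines with OpenAI's open-source `whisper` package; the audio file name here is hypothetical:

```python
# Minimal sketch: transcribe an extracted audio clip with Whisper.
import whisper

model = whisper.load_model("base")  # larger checkpoints trade speed for accuracy
result = model.transcribe("clip_audio.mp3")  # hypothetical file from the YouTube step
print(result["text"])
```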
- Multimodal Applications
  - Multimodal Agents: Connecting an agent framework with OpenAI's DALL-E 3 diffusion model
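At the heart of that agent is a tool wrapping an image-generation call; a minimal sketch with the OpenAI Python SDK (v1+), assuming `OPENAI_API_KEY` is set:

```python
# Minimal sketch: the DALL-E 3 call an agent tool might wrap.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.images.generate(
    model="dall-e-3",
    prompt="a robot reading a book about multimodal AI",
    size="1024x1024",
)
print(response.data[0].url)  # URL of the generated image
```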
- Multimodal Evaluation + Ethics
  - Llava-Critic Demo: A multimodal LLM (LMM) as a judge (see the judging sketch just below)
  - Wav2Lip Demo: See how modern deepfakes get made. Also find out my favorite movie! If you believe the video you see, that is ;)
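To illustrate the LMM-as-a-judge pattern from the Llava-Critic demo, here is a hedged sketch using a generic public LLaVA checkpoint; the model id, image path, and rubric are illustrative, not the notebook's exact setup:

```python
# Minimal sketch: ask a multimodal model to judge an image against a caption.
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "llava-hf/llava-1.5-7b-hf"  # assumed checkpoint; the notebook uses Llava-Critic
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16
).to("cuda")

image = Image.open("candidate.jpg")  # hypothetical image under evaluation
prompt = (
    "USER: <image>\nRate how well this image matches the caption "
    "'a dog surfing a wave' on a 1-5 scale, then explain briefly.\nASSISTANT:"
)

inputs = processor(text=prompt, images=image, return_tensors="pt").to("cuda", torch.float16)
output = model.generate(**inputs, max_new_tokens=100)
print(processor.decode(output[0], skip_special_tokens=True))
```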
Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.
If you have questions, I'm available on Intro :)