my-thought-experiments/Evaluating-and-Debugging-Generative-AI

Machine learning and AI projects require managing diverse data sources, vast data volumes, model and parameter development, and numerous test and evaluation experiments. Overseeing and tracking these aspects of a project can quickly become an overwhelming task.

This course will introduce you to Machine Learning Operations (MLOps) tools that manage this workload. You will learn to use the Weights & Biases platform, which makes it easy to track your experiments, version your data and models, and collaborate with your team.

This course will teach you to:

  • Instrument a Jupyter notebook
  • Manage hyperparameter config
  • Log run metrics
  • Collect artifacts for dataset and model versioning
  • Log experiment results
  • Trace prompts and responses to LLMs over time in complex interactions (both workflows are sketched below)
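
Concretely, the first five items boil down to a handful of W&B calls. Below is a minimal sketch of that tracking workflow; the project name, hyperparameter values, loss values, and file path are placeholders rather than anything from this repository.

```python
import wandb

# Start a run; `config` records the hyperparameters for this experiment.
run = wandb.init(
    project="evaluating-debugging-genai",  # placeholder project name
    config={"learning_rate": 1e-4, "batch_size": 32, "epochs": 5},
)

for epoch in range(run.config.epochs):
    train_loss = 1.0 / (epoch + 1)  # dummy value standing in for a real training loss
    wandb.log({"epoch": epoch, "train_loss": train_loss})  # log run metrics

# Version a dataset as an artifact; the same pattern works for model checkpoints.
artifact = wandb.Artifact("training-data", type="dataset")
artifact.add_file("data/train.csv")  # placeholder path
run.log_artifact(artifact)

run.finish()
```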
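
For the last item, one way to log LLM calls with W&B is the `Trace` class from `wandb.sdk.data_types.trace_tree`, which records each call as a span with timed inputs and outputs (spans can be nested to capture chains and agents). The sketch below assumes a single call with placeholder prompt and response strings:

```python
import time
import wandb
from wandb.sdk.data_types.trace_tree import Trace

run = wandb.init(project="evaluating-debugging-genai")  # placeholder project name

start_ms = round(time.time() * 1000)  # Trace timestamps are in milliseconds
response = "..."  # placeholder: the reply from your LLM call would go here
end_ms = round(time.time() * 1000)

# One span per LLM call, with the prompt and response attached.
span = Trace(
    name="llm_call",
    kind="llm",
    status_code="success",
    start_time_ms=start_ms,
    end_time_ms=end_ms,
    inputs={"prompt": "A placeholder prompt."},
    outputs={"response": response},
)
span.log(name="prompt_trace")  # appears in the run's Traces view

run.finish()
```

Logging a span per call (or a nested tree per chain) is what makes it possible to inspect how prompts and responses evolve across a long-running interaction.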

When you complete this course, you will have a systematic workflow at your disposal to boost your productivity and accelerate your journey toward breakthrough results.


