(copy of syllabus.pdf)

Course Code MATH 598B
Meeting Times 3:30pm-4:45pm, Tuesdays/Thursdays
Location 130 Alderson Hall
Instructors Samy Wu Fung, Michael Ivanitskiy
Contact mivanits@mines.edu
Office location 269 Chauvenet Hall or Zoom by request
Office hours Time TBD, poll here: when2meet.com/?28163081-sA81e
materials github.com/mines-opt-ml/decoding-gpt
website miv.name/decoding-gpt
Credit Hours 3

Course Description

Since the public release of GPT-3 in 2020, Large Language Models (LLMs) have made rapid progress across a wide variety of tasks. However, the internal mechanisms by which these models perform such tasks are not well understood. A large fraction of machine learning researchers believe that training and deploying such models carries significant risks, ranging from mass unemployment and societal harms due to misinformation to existential risks from misaligned AI systems. This course will explore the mathematical foundations of Transformer networks, the difficulties of imparting human values onto such systems, and the current state of the art in interpretability and alignment research.

Learning outcomes

Over the duration of the course, students will gain:

  1. A solid theoretical understanding of the mechanics of transformer networks and attention heads
  2. Practical experience with implementing, training, and deploying LLMs for simple tasks
  3. Understanding of the fundamentals of the AI alignment problem, present and future risks and harms, and a broad overview of the current state of the field
  4. Familiarity with current results and techniques in interpretability research for LLMs

Prerequisites

Prerequisite Courses:

  • MATH 213 (Calculus 3)
  • MATH 332 (Linear Algebra) or MATH 500 (Linear Vector Spaces)
  • CSCI 303 (Intro to Data Science) or CSCI 470 (Intro to Machine Learning)

Note that higher-level or graduate variants of these courses are also acceptable.

Prerequisite Skills:

  • Linear Algebra: Students should have a strong grasp of linear algebra, including matrix multiplication, vector spaces, matrix decompositions, and eigenvalues/eigenvectors.
  • Machine Learning: Students should be familiar with basic Deep Neural Networks and stochastic gradient descent via backpropagation.
  • Software: Students should be very comfortable writing software in Python. Familiarity with setting up virtual environments, dependency management, and version control via git is recommended. Experience with PyTorch or another deep learning framework is highly recommended (a short sketch of the expected baseline follows this list).
  • Research Skills: Students should be comfortable finding and reading relevant papers in depth. How you read papers, whether you take notes, etc. is up to you, but you should be able to understand novel material from a paper in depth and be able to explain it to others.
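
As a rough self-check of the software prerequisites -- this snippet is purely illustrative and not part of the course materials -- the following PyTorch sketch, which takes one SGD step on a tiny network via backpropagation, should read as routine:

```python
import torch
import torch.nn as nn

# Tiny MLP fit to random data with a single step of SGD via backprop.
model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
opt = torch.optim.SGD(model.parameters(), lr=1e-2)

x = torch.randn(32, 4)  # batch of 32 random inputs
y = torch.randn(32, 1)  # random regression targets

loss = nn.functional.mse_loss(model(x), y)
opt.zero_grad()
loss.backward()  # backpropagation computes gradients
opt.step()       # SGD updates the parameters
print(f"loss after one step: {loss.item():.4f}")
```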

Course Materials

This field moves too quickly for an up-to-date textbook on interpretability and alignment for transformers to exist. Some useful introductory materials, which we will cover in part, are listed below. Reading or at least skimming some of these before the start of the course is recommended -- they are listed in rough order of priority, but feel free to skip around. We will also read a wide variety of papers throughout the course, and you will be expected to find interesting and useful ones.

Evaluation

  • Homeworks (40%): Short homeworks will be assigned periodically and will be due at the beginning of class on the due date.
  • Final Project (40%): Students working in groups will select a research topic related to the course material, design and perform novel experiments, and write a 10-15 page report on their findings. Example topics will be provided, but topic selection is flexible as long as it relates to alignment or interpretability for ML systems.
  • Paper presentations (10%): Students will select, read, and present relevant papers throughout the semester. These papers should be selected with the aim of giving background for the final projects.
  • Class participation (10%): Students are expected to attend lectures, participate in discussions, and ask questions. Allowances for absences will be made.

Tentative Course Outline

  • Background
    • neural networks
    • autodiff, backprop, and optimization theory
    • other neural network architectures
    • language modeling
  • Attention Heads & the Transformer architecture
    • attention heads (see the code sketch after this outline)
    • positional encodings, causal attention
    • transformers
  • Interpretability
    • intro to interpretability
    • circuit analysis
    • sparse autoencoders
  • Alignment
    • the AI Alignment problem
    • AI safety, ethics, and policy
  • Student Presentations
    • paper presentations
    • final project presentations
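
To preview the central object of study in the attention unit, here is a minimal, illustrative sketch (not course-provided code) of scaled dot-product attention, softmax(QKᵀ/√d)V, for a single head without masking or batching:

```python
import torch

def attention(q, k, v):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = q.shape[-1]
    scores = q @ k.transpose(-2, -1) / d**0.5  # (seq, seq) similarity scores
    weights = torch.softmax(scores, dim=-1)    # each row sums to 1
    return weights @ v                         # weighted mix of value vectors

# Toy usage: 5 tokens with 8-dimensional query/key/value vectors.
q, k, v = (torch.randn(5, 8) for _ in range(3))
out = attention(q, k, v)
print(out.shape)  # torch.Size([5, 8])
```

Causal masking and positional encodings, covered in the same unit, modify the scores before the softmax.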
