A curated collection of significant/impactful articles to be treated as a textbook, because sometimes it's just best to go straight to the source. My hope is to provide a reference for understanding important developments in the historical context that motivated them: the problems the authors were trying to solve, the features of a discovery considered especially novel or impressive when it was first published, the competing theories or techniques of the time, and so on.
Someday this will be organized better.
-
Lasso/elasticnet
- 1996 - "Regression Shrinkage and Selection via the Lasso" - Robert Tibshirani
- 2005 - "Regularization and variable selection via the elastic net" - Hui Zou, Trevor Hastie
-
Boosting
- 1990 - "The Strength of Weak Learnability" - Robert E. Schapire
-
Bagging
- 1991 - "Bagging Predictors" - Leo Breiman
-
random forest
- 2001 - "Random Forests" - Leo Breiman
-
Adaboost
- 1997 - (AdaBoost) "A Decision-Theoretic Generalization of On-Line Learning and an Application to Boosting" - Yoav Freund, Robert E Schapire
- 1999 - "Improved Boosting Algorithms Using Confidence-rated Predictions" - Yoav Freund, Robert E Schapire
-
gradient boosting
- 2016 - "XGBoost: A Scalable Tree Boosting System" - Tianqi Chen, Carlos Guestrin
-
bias-variance tradeoff
- 1997 - "Bias Plus Variance Decomposition for Zero-One Loss Functions" - Ron Kohavi, David H. Wolpert
-
non-parametric bootstrap
- 1979 - "Bootstrap Methods: Another Look at the Jackknife" - Bradley Effron
-
permutation testing (target shuffle)
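No citation here yet, so a hedged sketch of the target-shuffle idea: permute y to destroy any real X-to-y relationship, re-fit, and locate the real score within the null distribution of shuffled scores.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
y = (X[:, 0] + 0.5 * rng.normal(size=300) > 0).astype(int)  # signal lives in feature 0

real_score = cross_val_score(LogisticRegression(), X, y, cv=5).mean()

# Null distribution: shuffling the target destroys any real X->y relationship
null_scores = np.array([
    cross_val_score(LogisticRegression(), X, rng.permutation(y), cv=5).mean()
    for _ in range(200)
])

p_value = (np.sum(null_scores >= real_score) + 1) / (len(null_scores) + 1)
print(f"real accuracy = {real_score:.3f}, permutation p-value = {p_value:.3f}")
```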
-
PCA
- 1991 - "Face Recognition Using Eigenfaces" - Matthew Turk, Alex Pentland
-
nonlinear PCA variants
- 1989 - "Principal Curves" - Trevor Hastie, Werner Stuetzle
-
ICA
-
LSI/LSA
- 1988 - "Indexing by Latent Semantic Analysis" - Scott Deerwester, Susan T. Dumais, George W. Furnas, Thomas K. Landauer, Richard Harshman
-
LDA
- 2003 - "Latent Dirichlet Allocation" - David M. Blei, Andrew Ng, Michael I. Jordan
-
SVM
- 1992 - "A Training Algorithm for Optimal Margin Classifiers" - Bernhard E. Boser, Isabelle Guyon, Vladimir N. Vapnik
-
NMF
- 2003 - "Document Clustering Based On Non-negative Matrix Factorization" - Wei Xu, Xin Liu, Yihong Gon
-
random projections
- https://cseweb.ucsd.edu/~dasgupta/papers/randomf.pdf
- See also Johnson-Lindenstrauss lemma
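A minimal sketch of why this works: a random Gaussian matrix scaled by 1/sqrt(k) approximately preserves pairwise distances, which is exactly what the Johnson-Lindenstrauss lemma guarantees.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 100, 10_000, 500                 # n points in d dims, projected down to k
X = rng.normal(size=(n, d))

R = rng.normal(size=(d, k)) / np.sqrt(k)   # random Gaussian projection matrix
Z = X @ R

# Pairwise distances survive the projection up to a small distortion
i, j = 3, 42
orig = np.linalg.norm(X[i] - X[j])
proj = np.linalg.norm(Z[i] - Z[j])
print(f"original distance {orig:.2f}, projected {proj:.2f} (ratio {proj / orig:.3f})")
```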
-
MCMC - metropolis-hastings, HMC, reversible jump, NUTS
- Hamiltonian Monte Carlo (HMC)
- 1987 - "Hybrid Monte Carlo" - Simon Duane, A.D. Kennedy, Brian Pendleton, Duncan Roweth
-
SMOTE
- 2002 - "Smote: synthetic minority over-sampling technique" - Nitesh V Chawla, Kevin W Bowyer, Lawrence O Hall, W Philip Kegelmeyer
-
tSNE
- 2002 - "Stochastic Neighbor Embedding" - Geoffrey Hinton, Sam Roweis
- 2008 - "Visualizing Data using t-SNE" - Laurens van der Maaten, Geoffrey Hinton
-
UMAP
- 2018 - "UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction" - Leland McInnes, John Healy, James Melville
-
LSH
- 2014 - "LOCALITY PRESERVING HASHING" - Yi-Hsuan Tsai, Ming-Hsuan Yang
-
Feature hashing / hashing trick
- 1989 - "Fast learning in multi-resolution hierarchies" - John Moody
- 2009 - "Feature Hashing for Large Scale Multitask Learning" - Kilian Weinberger; Anirban Dasgupta; John Langford; Alex Smola; Josh Attenberg
-
the kernel trick
-
naive bayes
-
HMM
-
CRF
-
RBM - Restricted Boltzmann Machine
-
GAM - Generalized Additive Models
-
MARS - Multivariate adaptive regression splines
- 1991 - "Multivariate Adaptive Regression Splines" - Jerome H. Friedman
-
Decision Trees
- 1986 - (ID3) "Induction of Decision Trees" - J. R. Quinlan
- 1984 - (CART) "Classification and Regression Trees" - Leo Breiman, Jerome H. Friedman, Richard A. Olshen, Charles J. Stone (a book; a 2011 topic summary stands in for it)
-
KNN
- 1967 - "Nearest Neighbor Pattern Classification" - T. M. Cover, P. E. Hart
-
Benford's Law
- 1938 - "The Law of Anomalous Numbers" - Frank Benford
- 1881 - "Note on the frequency of use of the different digits in natural numbers" - Simon Newcomb
-
Gaussian KDE
-
Boruta
- 2010 - "Feature Selection with the Boruta Package" - Miron B. Kursa, Witold R. Rudnicki
-
Step-wise regression / forward selection / backwards elimination / recursive feature elimination
-
kalman filter
- 1960 - "A New Approach to Linear Filtering and Prediction Problems" - R. E. Kalman
-
Deep belief networks
-
Scree plot
- 1966 - "The Scree Test For The Number Of Factors" - Raymond B. Cattell
-
Collaborative Filtering (SVD and otherwise)
-
Market basket analysis
-
Process mining
-
self-organizing maps
- 1982 - "Self-Organized Formation of Topologically Correct Feature Maps" - Teuvo Kohonen
-
Good overview of modeling process
- 2020 - "Bayesian workflow" - Andrew Gelman, Aki Vehtari, Daniel Simpson, Charles C. Margossian, Bob Carpenter, Yuling Yao, Lauren Kennedy, Jonah Gabry, Paul-Christian Bürkner, Martin Modrák
-
poisson bootstrap
- 2012 - "Estimating Uncertainty for Massive Data Streams" - Nicholas Chamandy, Omkar Muralidharan, Amir Najmi, Siddartha Naidu
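The trick in brief (a sketch): give each observation an independent Poisson(1) weight instead of drawing an explicit resample; the weights approximate with-replacement resampling counts and can be generated in a single streaming pass.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=10_000)

n_replicates = 500
# Poisson(1) counts approximate how often each item would appear in a
# with-replacement resample of the same size
weights = rng.poisson(1, size=(n_replicates, len(x)))
boot_means = (weights * x).sum(axis=1) / weights.sum(axis=1)

lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"95% Poisson-bootstrap CI for the mean: ({lo:.3f}, {hi:.3f})")
```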
-
constrained optimization / Linear Programming
- 1947 - Simplex Algorithm (?) - George Dantzig
-
compressed sensing
- 2004 - "Compressed Sensing" - David L. Donoho
- Graph anomaly detection (enron)
- 2005 - "Scan Statistics On Enron Graphs" - Carey E. Priebe, John M. Conroy, David J. Marchette, Youngser Park
- Exponential random graphs
- modularity / louvain community detection
- 2004 - "Finding community structure in very large networks" - Aaron Clauset, M. E. J. Newman, Cristopher Moore
- 2008 - "Fast unfolding of communities in large networks" - Vincent D. Blondel, Jean-Loup Guillaume, Renaud Lambiotte, Etienne Lefebvre
- pagerank
- 1998 - "The PageRank Citation Ranking: Bringing Order to the Web" - Larry Page
- smallworld
- scale free
- "Networks of Love"
- 2004 - "Chains of Affection: The Structure of Adolescent Romantic and Sexual Networks" - Peter S. Bearman, James Moody, Katherine Stovel
- Genetic algorithms
- force-directed/spring layout (Fruchterman-Reingold, I think?)
- label propagation
- 2002 - "Learning From Labeled and Unlabeled Data With Label Propagation" - Xiaojin Zhu, Zoubin Ghahramani
- Foundational Books
- 2008 - Group theoretical methods in machine learning (thesis) - Risi Kondor
- 2021 - Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges - Michael M. Bronstein, Joan Bruna, Taco Cohen, Petar Veličković
- Expectation maximization
- Newton-raphson
- L-BFGS
- simulated annealing
- FFT
- 1965 - "An Algorithm for the Machine Calculation of Complex Fourier Series" - James Cooley, John Tukey
- Constraint Programming / queueing theory / OR
- Uh... here there be dragons. Maybe just leave some breadcrumbs here?
-
perceptron algorithm
-
SGD / backprop
- 1951 - "A Stochastic Approximation Method" - H. Robbins and S. Monro.
- 1986 - "Learning representations by back-propagating errors" - David Rumelhart, Geoffrey Hinton, Ronald Williams
- 1998 - "Efficient Backprop" - Yann LeCun, Leon Bottou, Genevieve B. Orr and Klaus-Robert Müller
-
Adagrad / RMSProp
- Probably discussed sufficiently in the Adam paper
-
Adam
- 2014 - "Adam: A Method for Stochastic Optimization" - Diederik P. Kingma, Jimmy Ba
-
reverse-mode autodiff
- see backprop
-
gradient clipping
-
learning rate scheduling
-
distributed training
- 2011 - "HOGWILD!: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent" - Feng Niu, Benjamin Recht, Christopher Re, Stephen J. Wright
-
ZeRO-Offload, Zero Redundancy Optimizer (ZeRO)
- 2019 - "ZeRO: Memory Optimizations Toward Training Trillion Parameter Models" - Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase, Yuxiong He
-
federated learning
-
K-FAC for approximating Fisher Information, Hessian
- 2015 - "Optimizing Neural Networks with Kronecker-factored Approximate Curvature" - James Martens, Roger Grosse
- sigmoid
- ReLU
- 2011 - "Deep Sparse Rectifier Neural Networks" - Xavier Glorot, Antoine Bordes, Yoshua Bengio
- See also AlexNet
- GELU
- 2016 - "Gaussian Error Linear Units (GELUs)" - Dan Hendrycks, Kevin Gimpel
- Gumbel quantization
- 2016 - "Categorical Reparameterization with Gumbel-Softmax" - Eric Jang, Shixiang Gu, Ben Poole
- Xavier/Glorot initialization - vanishing/exploding gradients
- 2010 - "Understanding the difficulty of training deep feedforward neural networks" - Xavier Glorot, Yoshua Bengio
- MLP
- convolutions (+ pooling)
- depthwise-separable convolutions (mobilenet?)
- dilated convolutions (Wavenet)
- squeeze-and-excitation block
- 2017 - "Squeeze-and-Excitation Networks" - Jie Hu, Li Shen, Samuel Albanie, Gang Sun, Enhua Wu
- LSTM
- 1997 - "LONG SHORT-TERM MEMORY" - Sepp Hochreiter, Jurgen Schmidhuber
- Residual connections - Resnets + highway networks
- 2015 - "Highway Networks" - Rupesh Kumar Srivastava, Klaus Greff, Jürgen Schmidhuber
- 2015 - "Deep Residual Learning for Image Recognition" - Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
- batchnorm
- 2015 - "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift" - Sergey Ioffe, Christian Szegedy
- additive attention
- see also Alex Graves 2013
- 2014 - "Neural Machine Translation by Jointly Learning to Align and Translate" - Dzmitry Bahdanau, KyungHyun Cho Yoshua Bengio
- self-attention / scaled dot-product attention / transformers
- 2017 - "A Structured Self-attentive Sentence Embedding" - Zhouhan Lin, Minwei Feng, Cicero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen Zhou, Yoshua Bengio
- 2017 - "Attention Is All You Need" - Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
- dropout
- 2014 - "Dropout: A Simple Way to Prevent Neural Networks from Overfitting" - Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, Ruslan Salakhutdinov
- AdaIN - see also: StyleGAN, StyleGANv2
- 2017 - "Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization" - Xun Huang, Serge Belongie
Good list here: https://spinningup.openai.com/en/latest/spinningup/rl_intro2.html#citations-below
- multi-armed bandit
- temporal differences
- 1988 - "Learning to predict by the methods of temporal differences" - Richard S. Sutton
- Q learning, experience replay
- 1989 - "Learning from Delayed Rewards" - Christopher Watkins
- 2013 - (DQN) "Playing Atari with Deep Reinforcement Learning" - Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, Martin Riedmiller
- 2015 - "Human-level control through deep reinforcement learning" - Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, Demis Hassabis
- Policy gradient
- proximal policy optimization (PPO)
- https://openai.com/research/openai-baselines-ppo
- 2017 - "Proximal Policy Optimization Algorithms" - John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, Oleg Klimov
- direct preference optimization (DPO)
- 2023 - "Direct Preference Optimization: Your Language Model is Secretly a Reward Model" - Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Ermon, Christopher D. Manning, Chelsea Finn
- random search > grid search
- 2012 - "Random Search for Hyper-Parameter Optimization" - James Bergstra, Yoshua Bengio
- bayesian / gaussian process (explore/exploit)
- 2009 - "Gaussian Process Optimization in the Bandit Setting: No Regret and Experimental Design" - Niranjan Srinivas, Andreas Krause, Sham M. Kakade, Matthias Seeger
- Population based training
- bandit/hyperband
- Architecture search using hypernetwork proxy
- 2017 - "SMASH: One-Shot Model Architecture Search through HyperNetworks" - Andrew Brock, Theodore Lim, J.M. Ritchie, Nick Weston
-
Occupancy Networks
- 2018 - "Occupancy Networks: Learning 3D Reconstruction in Function Space" - Lars Mescheder, Michael Oechsle, Michael Niemeyer, Sebastian Nowozin, Andreas Geiger
-
SIREN
- 2020 - "Implicit Neural Representations with Periodic Activation Functions" - Vincent Sitzmann, Julien N. P. Martel, Alexander W. Bergman, David B. Lindell, Gordon Wetzstein
-
Neural Radiance Fields (NeRF)
- 2020 - "NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis" - Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, Ren Ng
- 2020 - "Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains" - Matthew Tancik, Pratul P. Srinivasan, Ben Mildenhall, Sara Fridovich-Keil, Nithin Raghavan, Utkarsh Singhal, Ravi Ramamoorthi, Jonathan T. Barron, Ren Ng
-
NeRF + Triplanes
-
TensoRF
-
Video Segmentation
- 2018 - "Tracking Emerges by Colorizing Videos" - Carl Vondrick, Abhinav Shrivastava, Alireza Fathi, Sergio Guadarrama, Kevin Murphy
-
LeNet
- 1998 - "GradientBased Learning Applied to Document Recognition" - Yann LeCun, Leon Bottou, Yoshua Bengio, Patrick Haffner
-
AlexNet - demonstrated the importance of network depth (specifically stacking convolutions) and of ReLU over the then-conventional sigmoid and tanh activations
- 2012 - "ImageNet Classification with Deep Convolutional Neural Networks" - Alex Krizhevsky, Ilya Sutskever, Geoffrey E. Hinton (using the ImageNet leaderboard date; the article was published 2017)
-
GAN, DCGAN, WGAN
-
StyleGAN -> StyleGANv2 -> StyleGAN2-ADA
-
U-net
-
VGG
- 2014 - "Very Deep Convolutional Networks for Large-Scale Image Recognition" - Karen Simonyan, Andrew Zisserman
-
inception/DeepDream
- 2014 - "Understanding Deep Image Representations by Inverting Them" - Aravindh Mahendran, Andrea Vedaldi
- 2015 - "Inceptionism: Going Deeper into Neural Networks" - Alexander Mordvintsev, Christopher Olah, Mike Tyka
-
style transfer, content-texture decomposition, weight covariance transfer
-
cyclegan/discogan
-
YOLO
-
EfficientNet - Scaling laws for conv-resnets
- 2019 - "EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks" - Mingxing Tan, Quoc V. Le
-
FPN - Feature Pyramid Networks
- 2016 - "Feature Pyramid Networks for Object Detection" - Tsung-Yi Lin, Piotr Dollár, Ross Girshick, Kaiming He, Bharath Hariharan, Serge Belongie
-
Mask R-CNN
- 2017 - "Mask R-CNN" - Kaiming He, Georgia Gkioxari, Piotr Dollár, Ross Girshick
-
Mobilenet
- 2017 - "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications" - Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam
- 2018 - "MobileNetV2: Inverted Residuals and Linear Bottlenecks" - Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen
-
Generative Diffusion models
- 2015 - "Deep Unsupervised Learning using Nonequilibrium Thermodynamics" - Jascha Sohl-Dickstein, Eric A. Weiss, Niru Maheswaranathan, Surya Ganguli
- 2020 - "Denoising Diffusion Probabilistic Models" - Jonathan Ho, Ajay Jain, Pieter Abbeel
-
VQVAE/VQGAN
- 2017 - "Neural Discrete Representation Learning" - Aaron van den Oord, Oriol Vinyals, Koray Kavukcuoglu
- 2019 - "Generating Diverse High-Fidelity Images with VQ-VAE-2" - Ali Razavi, Aaron van den Oord, Oriol Vinyals
- 2020 - "Taming Transformers for High-Resolution Image Synthesis" - Patrick Esser, Robin Rombach, Björn Ommer
-
Stable Diffusion
- 2022 - "High-Resolution Image Synthesis with Latent Diffusion Models" - Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer
-
ViT
- 2020 - "An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale" - Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby
- 2022 - "Better plain ViT baselines for ImageNet-1k" - Lucas Beyer, Xiaohua Zhai, Alexander Kolesnikov
-
text-to-3d, score distillation sampling, dreamfusion
- 2022 - "DreamFusion: Text-to-3D using 2D Diffusion" - Ben Poole, Ajay Jain, Jonathan T. Barron, Ben Mildenhall
-
Gaussian Splatting
-
SAM
- 2023 - "Segment Anything" - Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C. Berg, Wan-Yen Lo, Piotr Dollár, Ross Girshick
-
ConvNeXt
- 2022 - "A ConvNet for the 2020s" - Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, Saining Xie
- BERT
- 2018 - "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding" - Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova
- RNN-LM
- 2010 - "Recurrent neural network based language model" - Tomas Mikolov, Martin Karafiat, Luka's Burget, Jan "Honza" Cernocky, Sanjeev Khudanpur
- 2014 - "Generating Sequences With Recurrent Neural Networks" - Alex Graves
- word2vec
- 2013 - "Distributed Representations of Words and Phrases and their Compositionality" - Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, Jeffrey Dean
- GloVe
- GLUE task
- fasttext
- wordpiece tokenization / BPE
- 2015 - "Neural Machine Translation of Rare Words with Subword Units" - Rico Sennrich, Barry Haddow, Alexandra Birch
- 2016 - "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" - Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, Jeffrey Dean
- Large Language Models
- ULMfit, transfer learning, llm finetuning
- 2018 - "Universal Language Model Fine-tuning for Text Classification" - Jeremy Howard, Sebastian Ruder
- GPT-2 / GPT-3
- 2020 - "Language Models are Few-Shot Learners" - Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, Dario Amodei
-
autoencoders
- 1991 - "Nonlinear principal component analysis using autoassociative neural networks" - Mark A. Kramer
- 2006 - "Reducing the Dimensionality of Data with Neural Networks" - Geoff Hinton, R. R. Salakhutdinov
-
VAE (w inference amortization)
- 2013 - "Auto-Encoding Variational Bayes" - Diederik P Kingma, Max Welling
-
siamese network
- 2015 - "Siamese Neural Networks for One-Shot Image Recognition" - Gregory Koch
-
student-teacher transfer learning, catastrophic forgetting
- see also knowledge distillation below
-
InfoNCE, contrastive learning
- 2018 - "Representation Learning with Contrastive Predictive Coding" - Aaron van den Oord, Yazhe Li, Oriol Vinyals
-
DINO
- 2021 - "Emerging Properties in Self-Supervised Vision Transformers" - Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, Armand Joulin
- 2023 - "DINOv2: Learning Robust Visual Features without Supervision" - Maxime Oquab, Timothée Darcet, Théo Moutakanni, Huy Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, Mahmoud Assran, Nicolas Ballas, Wojciech Galuba, Russell Howes, Po-Yao Huang, Shang-Wen Li, Ishan Misra, Michael Rabbat, Vasu Sharma, Gabriel Synnaeve, Hu Xu, Hervé Jegou, Julien Mairal, Patrick Labatut, Armand Joulin, Piotr Bojanowski
-
CLIP
- 2021 - "Learning Transferable Visual Models From Natural Language Supervision" - Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever
- 2022 - "Mind the Gap: Understanding the Modality Gap in Multi-modal Contrastive Representation Learning" - Weixin Liang, Yuhui Zhang, Yongchan Kwon, Serena Yeung, James Zou
-
Neural ODE
-
Neural PDE
-
seq2seq
- 2014 - "Sequence to Sequence Learning with Neural Networks" - Ilya Sutskever, Oriol Vinyals, Quoc V. Le
-
pix2pix
-
BNN
- 1995 - "Bayesian Methods for Neural Networks" - Christopher Bishop
-
The Netflix Prize
- 2007 - "The Netflix Prize" - overview of the contest and dataset
- 2007 - "The BellKor solution to the Netflix Prize" - Robert M. Bell, Yehuda Koren, Chris Volinsky
- 2008 - "The BellKor 2008 Solution to the Netflix Prize" - Robert M. Bell, Yehuda Koren, Chris Volinsky
- 2008 - "The BigChaos Solution to the Netflix Prize 2008" - Andreas Toscher, Michael Jahrer
- 2006 - "How To Break Anonymity of the Netflix Prize Dataset" - Arvind Narayanan, Vitaly Shmatikov
-
Kaggle Galaxy Zoo
-
Capsule Networks
-
BiDirectional RNN
- 1997 - "Bidirectional Recurrent Neural Networks" - Mike Schuster, Kuldip K. Paliwal
-
WaveNet
- 2016 - "WaveNet: A Generative Model for Raw Audio" - Aaron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew Senior, Koray Kavukcuoglu
-
Normalizing flows
-
AlphaFold, EvoFormer
- 2021 - "Highly accurate protein structure prediction with AlphaFold" - John Jumper, Richard Evans, Alexander Pritzel, Tim Green, Michael Figurnov, Olaf Ronneberger, Kathryn Tunyasuvunakool, Russ Bates, Augustin Žídek, Anna Potapenko, Alex Bridgland, Clemens Meyer, Simon A. A. Kohl, Andrew J. Ballard, Andrew Cowie, Bernardino Romera-Paredes, Stanislav Nikolov, Rishub Jain, Demis Hassabis
-
AlphaGo
-
IBM Watson on Jeopardy
-
Understanding the GPU compute paradigm
- 2022 - "Making Deep Learning Go Brr From First Principles" - Horace He
Learning theory / Deep learning theory / model compression / interpretability / Information Geometry
-
VC Dimension
- 1971 - "On the uniform convergence of relative frequencies of events to their probabilities" - V. Vapnik and A. Chervonenkis
- 1989 - "Learnability and the Vapnik-Chervonenkis Dimension " - Blumer, A.; Ehrenfeucht, A.; Haussler, D.; Warmuth, M. K.
-
gradient double descent
- 2019 - "Deep Double Descent: Where Bigger Models and More Data Hurt" - Preetum Nakkiran, Gal Kaplun, Yamini Bansal, Tristan Yang, Boaz Barak, Ilya Sutskever
- 2019 - "Reconciling modern machine-learning practice and the classical bias–variance trade-off" - Mikhail Belkin, Daniel Hsu, Siyuan Ma, and Soumik Mandal
- 2020 - "The generalization error of random features regression: Precise asymptotics and double descent curve - Song Mei, Andrea Montanari
- 2021 -"A Farewell to the Bias-Variance Tradeoff? An Overview of the Theory of Overparameterized Machine Learning" - Yehuda Dar, Vidya Muthukumar, Richard G. Baraniuk
- 1999 - "Generalization in a linear perceptron in the presence of noise" - Anders Krogh, John A Hertz
- 2020 - "The Neural Tangent Kernel in High Dimensions: Triple Descent and a Multi-Scale Theory of Generalization" - Ben Adlam, Jeffrey Pennington
-
neural tangent kernel
- 2018 - "Neural Tangent Kernel: Convergence and Generalization in Neural Networks" - Arthur Jacot, Franck Gabriel, Clément Hongler
-
lottery ticket hypothesis
- 2018 - "The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks" - Jonathan Frankle, Michael Carbin
-
manifold hypothesis
-
information bottleneck
- 2015 - "Deep Learning and the Information Bottleneck Principle" - Naftali Tishby, Noga Zaslavsky
-
generalized degrees of freedom
- 1998 - "On Measuring and Correcting the Effects of Data Mining and Model Selection" - Jianming Ye
-
AIC / BIC
-
dropout as ensemblification
- 2013 - "Understanding Dropout" - Pierre Baldi, Peter Sadowski
- 2017 - "Analysis of dropout learning regarded as ensemble learning" - Kazuyuki Hara, Daisuke Saitoh, Hayaru Shouno
-
knowledge distillation
- 2005 - "Model Compression" - Cristian Bucila, Rich Caruana, Alexandru Niculescu-Mizil
- 2015 - "Distilling the Knowledge in a Neural Network" - Geoffrey Hinton, Oriol Vinyals, Jeff Dean
-
model quantization
-
SGD = MAP inference
- 2017 - "Stochastic Gradient Descent as Approximate Bayesian Inference" - Stephan Mandt, Matthew D. Hoffman, David M. Blei
-
shapley scoring
-
LIME
-
adversarial examples
- 2014 - "Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images" - Anh Nguyen, Jason Yosinski, Jeff Clune
-
(AU)ROC Curve / PR curve
-
Cohen's Kappa / meta-analysis
-
PAC learning
- 1984 - "A Theory of the Learnable" - L. G. Valiant
-
L1/L2 regularization expressible as bayesian priors
- 1943 - "On the stability of inverse problems" - L2 regularization introduced by Tikhonov, original paper in Russian
-
Hilbert spaces
-
No Free Lunch
- 1997 - "No Free Lunch Theorems for Optimization" - David H. Wolpert, William G. Macready
-
Significance test for the LASSO
-
RNN's are near-sighted
- 2003 - "Gradient Flow in Recurrent Nets: the Difficulty of Learning Long-Term Dependencies" - Sepp Hochreiter, Yoshua Bengio, Paolo Frasconi, Jurgen Schmidhuber
- 2014 - "On the Properties of Neural Machine Translation: Encoder–Decoder Approaches" -
-
Relationship between logistic regression and naive bayes
- 2002 - "On Discriminative vs. Generative classifiers: A comparison of logistic regression and naive Bayes - Andrew Ng, Michael Jordan
-
understanding softmax and its relation to log-sum-exp
- 2017 - "On the Properties of the Softmax Function with Application in Game Theory and Reinforcement Learning" - Bolin Gao, Lacra Pavel
-
Word2vec approximates a matrix factorization
- 2014 - "Neural Word Embedding as Implicit Matrix Factorization" - Omer Levy, Yoav Goldberg
-
The distributional hypothesis (computational linguistics)
- 1954 - "Distributional Structure" - Zellig Harris
-
Johnson–Lindenstrauss lemma (high dim manifolds can be accurately projected onto lower dim embeddings)
- 1984 - "Extensions of Lipschitz mappings into a Hilbert space" - William B. Johnson, Joram Lindenstrauss
- 2021 - "An Introduction to Johnson-Lindenstrauss Transforms" - Casper Benjamin Freksen
- https://en.wikipedia.org/wiki/Johnson%E2%80%93Lindenstrauss_lemma
-
Empirical Risk Minimization
- 1992 - "Principles of Risk Minimization for Learning Theory" - V. Vapnik
-
Loss geometry
- 2014 - "Identifying and attacking the saddle point problem in high-dimensional non-convex optimization" - Yann Dauphin, Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, Surya Ganguli, Yoshua Bengio
-
generalization, overparameterization, effective model capacity, grokking
- 2016 - "Understanding deep learning requires rethinking generalization" - Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, Oriol Vinyals
- 2021 - "Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets" - Alethea Power, Yuri Burda, Harri Edwards, Igor Babuschkin, Vedant Misra
- 2022 - "Towards Understanding Grokking: An Effective Theory of Representation Learning" - Ziming Liu, Ouail Kitouni, Niklas Nolte, Eric J. Michaud, Max Tegmark, Mike Williams
-
Strong inductive bias in CNN structure
- 2018 - "Deep Image Prior" - Dmitry Ulyanov, Andrea Vedaldi, Victor Lempitsky
-
prediction calibration
- 2017 - "On Calibration of Modern Neural Networks" - Chuan Guo, Geoff Pleiss, Yu Sun, Kilian Q. Weinberger
-
batch-norm and dropout in tug-of-war
- 2018 - "Understanding the Disharmony between Dropout and Batch Normalization by Variance Shift" - Xiang Li, Shuo Chen, Xiaolin Hu, Jian Yang
-
deep learning model fitting process
-
Universal approximation theorem
- 1989 - "Multilayer Feedforward Networks are Universal Approximators" - Hornik, Kurt; Tinchcombe, Maxwell; White, Halbert
- 1991 - "Approximation capabilities of multilayer feedforward networks" - Kurt Hornik
- 1993 - "Multilayer Feedforward Networks With a Nonpolynomial Activation Function Can Approximate Any Function" - Moshe Leshno, Lin Vladimir Ya, Allan Pinkus, Shimon Schocken
-
Dropout as approximate bayesian inference
- 2015 - "Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning" - Yarin Gal, Zoubin Ghahramani
-
CMA-ES learns an approximation to the hessian of the loss
- 2019 - "On the covariance-Hessian relation in evolution strategies" - Ofer M. Shir, Amir Yehudayoff
-
Inductive Biases
- 1980 - "The need for biases in learning generalizations" - Tom Mitchell
- 2018 - "Relational inductive biases, deep learning, and graph networks" - Peter W. Battaglia, Jessica B. Hamrick, Victor Bapst, Alvaro Sanchez-Gonzalez, Vinicius Zambaldi, Mateusz Malinowski, Andrea Tacchetti, David Raposo, Adam Santoro, Ryan Faulkner, Caglar Gulcehre, Francis Song, Andrew Ballard, Justin Gilmer, George Dahl, Ashish Vaswani, Kelsey Allen, Charles Nash, Victoria Langston, Chris Dyer, Nicolas Heess, Daan Wierstra, Pushmeet Kohli, Matt Botvinick, Oriol Vinyals, Yujia Li, Razvan Pascanu
-
Neural networks behave like high-dimensional decision trees: each ReLU neuron adds a hyperplane, the learned manifold is heavily faceted rather than smooth, and the latent space can be datum-specific.
- 2018 - "A Spline Theory of Deep Networks" - Randall Balestriero, Richard G. Baraniuk
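A sketch of the geometric claim: each ReLU's on/off state is a half-space test, so input space is carved into convex cells (activation patterns), and within one cell the whole network is a single affine map.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 2)), rng.normal(size=16)  # 16 ReLUs, 2-D input
W2, b2 = rng.normal(size=(1, 16)), rng.normal(size=1)

def forward(x):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

def pattern(x):
    """Activation pattern: which side of each unit's hyperplane x falls on."""
    return tuple((W1 @ x + b1 > 0).astype(int))

# Count distinct linear regions a grid of inputs lands in
grid = [np.array([a, b]) for a in np.linspace(-3, 3, 60) for b in np.linspace(-3, 3, 60)]
print("distinct activation patterns on grid:", len({pattern(x) for x in grid}))

# Inside one cell the map is affine: the finite-difference slope is constant
x0, eps = np.array([0.1, 0.2]), 1e-5
g1 = (forward(x0 + [eps, 0.0]) - forward(x0)) / eps
g2 = (forward(x0 + [2 * eps, 0.0]) - forward(x0 + [eps, 0.0])) / eps
print("local slope measured twice:", g1, g2)  # identical while inside one cell
```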
-
Extrapolation
- 2021 - "Learning in High Dimension Always Amounts to Extrapolation" - Randall Balestriero, J´erˆome Pesenti, and Yann LeCun
-
Formalizing "intelligence"
- 2019 - "On the Measure of Intelligence" - François Chollet
-
PEFT: LoRA
- 2021 - "LoRA: Low-Rank Adaptation of Large Language Models" - Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen
-
Fourier features
-
NoPE - positional encodings not needed, learned implicitly
- 2023 - "The Impact of Positional Encoding on Length Generalization in Transformers" - Amirhossein Kazemnejad, Inkit Padhi, Karthikeyan Natesan Ramamurthy, Payel Das, Siva Reddy
-
Intrinsic Dimension, PEFT
- 2020 - "Intrinsic Dimensionality Explains the Effectiveness of Language Model Fine-Tuning" - Armen Aghajanyan, Luke Zettlemoyer, Sonal Gupta
-
GAN training dynamics, TTUR
- 2017 - "GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium" - Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, Sepp Hochreiter
-
The Bitter Lesson
- 2019 - "The Bitter Lesson" - Rich Sutton
-
sigma reparameterization to stabilize transformer training by mitigating "entropy collapse" (concentration of density in attention)
- 2023 - "Stabilizing Transformer Training by Preventing Attention Entropy Collapse" - Shuangfei Zhai, Tatiana Likhomanenko, Etai Littwin, Dan Busbridge, Jason Ramapuram, Yizhe Zhang, Jiatao Gu, Josh Susskind
-
buffer tokens, register tokens, attention sinks
- 2023 - "Vision Transformers Need Registers" - Timothée Darcet, Maxime Oquab, Julien Mairal, Piotr Bojanowski
- 2023 - "Efficient Streaming Language Models with Attention Sinks" - Guangxuan Xiao, Yuandong Tian, Beidi Chen, Song Han, Mike Lewis
- Connecting the Variational Renormalization Group and Unsupervised Learning
- 2014 - "An exact mapping between the Variational Renormalization Group and Deep Learning" - Pankaj Mehta, David J. Schwab
- learning a feature is equivalent to searching for a transformation that stabilizes it.
- 2015 - "Why does Deep Learning work? - A perspective from Group Theory" - Arnab Paul, Suresh Venkatasubramanian
- Entropy
- 1948 - "A Mathematical Theory of Communication" - Claude Shannon
- Fisher information
- KL divergence
- Double machine learning
- 2016 - "Double/Debiased Machine Learning for Treatment and Causal Parameters" - Victor Chernozhukov, Denis Chetverikov, Mert Demirer, Esther Duflo, Christian Hansen, Whitney Newey, James Robins
- Doubly robust inference
- Pearl's do calculus and graphical modeling / structural equation modeling
- Rubin's potential outcomes model
- model identification
- d-separation
- propensity scoring/matching
- item-response model and adaptive testing
- bandit learning for on-line experimentation
- belief propagation
- ARMA / ARIMA / ARIMAX
- sin/cos cyclical day encodings (see the sketch after this stanza)
- RNN forecasting
- 1991 - "Recurrent Networks and NARMA Modeling" - J. Connor, L. Atlas, R. Martin
- 2017 - "A Multi-Horizon Quantile Recurrent Forecaster" - (Amazon) Ruofeng Wen, Kari Torkkola, Balakrishnan Narayanaswamy, Dhruv Madeka
- FB Prophet / bayesian
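The sin/cos cyclical encoding flagged above, sketched: mapping day-of-week onto the unit circle makes Sunday and Monday adjacent, which a raw 0-6 integer encoding gets wrong.

```python
import numpy as np

day_of_week = np.arange(7)          # 0 = Monday ... 6 = Sunday
angle = 2 * np.pi * day_of_week / 7
circle = np.stack([np.sin(angle), np.cos(angle)], axis=1)

# Sunday->Monday is now the same distance as any other adjacent pair
print(np.linalg.norm(circle[6] - circle[0]))  # Sunday to Monday
print(np.linalg.norm(circle[0] - circle[1]))  # Monday to Tuesday (equal)
```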
- Utilizing optical flow to stabilize synthesized video frames
- 2016 - "Artistic style transfer for videos" - Manuel Ruder, Alexey Dosovitskiy, Thomas Brox
- Data Privacy
- See Netflix Prize
- Differential Privacy
- k-anonymity
- Dataset bias - gendered words, differential treatment of skin color, race and zipcode in legal applications
- YOLO author's resignation (blog post + reddit thread)
- CV techniques used to subjugate minorities in SE Asia and China
- Ethical issues surrounding classification of behavioral health and interventions
- Metadata deanonymization and leaks of US domestic data collection programs with corporate participation
- "fairness" algorithms
- gerrymandering and algorithmic redistricting
- Facebook's influence on elections and live-testing to influence people's emotions and behaviors w/o consent
- ML Tech Debt
- 2015 - "Hidden Technical Debt in Machine Learning Systems" - D. Sculley, Gary Holt, Daniel Golovin, Eugene Davydov, Todd Phillips, Dietmar Ebner, Vinay Chaudhary, Michael Young, Jean-Franc¸ois Crespo, Dan Dennison
-
Classifier-free Guidance (CFG)
- 2021 - "Classifier-Free Diffusion Guidance" - Jonathan Ho, Tim Salimans
-
SDEdit
- 2021 - "SDEdit: Guided Image Synthesis and Editing with Stochastic Differential Equations" - Chenlin Meng, Yutong He, Yang Song, Jiaming Song, Jiajun Wu, Jun-Yan Zhu, Stefano Ermon
-
Denoising diffusion as generic de-corruption
- 2022 - "Cold Diffusion: Inverting Arbitrary Image Transforms Without Noise" - Arpit Bansal, Eitan Borgnia, Hong-Min Chu, Jie S. Li, Hamid Kazemi, Furong Huang, Micah Goldblum, Jonas Geiping, Tom Goldstein
-
k-samplers, variance-preserving, variance exploding
- 2022 - "Elucidating the Design Space of Diffusion-Based Generative Models" - Tero Karras, Miika Aittala, Timo Aila, Samuli Laine
-
Cross Attention guidance
-
Controlnet/T2I adaptors
-
Text inversion
-
null text inversion
-
Chain of thought
- 2022 - "Chain-of-Thought Prompting Elicits Reasoning in Large Language Models" - Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, Denny Zhou
-
LLMs as role-play
- 2023 - "Role-Play with Large Language Models" - Murray Shanahan, Kyle McDonell, Laria Reynolds
-
learning to use tools
- 2023 - "Toolformer: Language Models Can Teach Themselves to Use Tools" - Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, Thomas Scialom
-
Chinchilla
- 2022 - "Training Compute-Optimal Large Language Models" - Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, Tom Hennigan, Eric Noland, Katie Millican, George van den Driessche, Bogdan Damoc, Aurelia Guy, Simon Osindero, Karen Simonyan, Erich Elsen, Jack W. Rae, Oriol Vinyals, Laurent Sifre
-
Instruct tuning, InstructGPT
- 2022 - "Training language models to follow instructions with human feedback" - Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan Lowe
-
Speculative decoding
- 2023 -"Accelerating Large Language Model Decoding with Speculative Sampling" - Charlie Chen, Sebastian Borgeaud, Geoffrey Irving, Jean-Baptiste Lespiau, Laurent Sifre, John Jumper
largely via https://twitter.com/karpathy/status/1668302116576976906
- https://arxiv.org/abs/1308.0850 - "Generating Sequences With Recurrent Neural Networks" - Alex Graves
- https://arxiv.org/abs/1409.0473 - "Neural Machine Translation by Jointly Learning to Align and Translate" - Dzmitry Bahdanau, KyungHyun Cho, Yoshua Bengio
- https://arxiv.org/abs/1410.5401 - "Neural Turing Machines" - Alex Graves, Greg Wayne, Ivo Danihelka
- Attention is all you need