This repository contains implementations of four influential research papers in machine learning. Each implementation lives in its own directory, keeping the code clear and modular.
- Title: Attention is All You Need
- Authors: Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
- Link: [Attention is All You Need](https://arxiv.org/abs/1706.03762)
- Directory: `transformerLanguageModel`
- Description: This implementation focuses on the Transformer model, which eliminates the need for recurrence in sequence modeling tasks through the use of self-attention mechanisms.
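The scaled dot-product attention at the heart of the Transformer can be sketched in a few lines of NumPy. This is a minimal illustration of the mechanism, not the code in `transformerLanguageModel`; the shapes and function name are chosen for the example.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V -- each output is a weighted mix of V rows."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                      # pairwise similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # row-wise softmax
    return weights @ V

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))                  # 4 tokens, model dimension 8
out = scaled_dot_product_attention(x, x, x)  # self-attention: Q = K = V = x
print(out.shape)                             # (4, 8)
```

In the full model this is applied per attention head, with learned projections producing Q, K, and V from the same input sequence.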
- Title: Conditional Image Generation with PixelCNN Decoders
- Authors: Aaron van den Oord, Nal Kalchbrenner, Oriol Vinyals, Lasse Espeholt, Alex Graves, Koray Kavukcuoglu
- Link: [Conditional Image Generation with PixelCNN Decoders](https://arxiv.org/abs/1606.05328)
- Directory: `pixelCNN`
- Description: This implementation covers the PixelCNN model, an autoregressive generative model that produces an image pixel by pixel, conditioning each pixel on the previously generated ones and, in the conditional variant, on extra inputs such as class labels or embeddings.
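The autoregressive ordering is enforced with masked convolutions: the kernel is zeroed so a pixel can only see pixels above it and to its left in raster order. A minimal mask constructor (illustrative only, not the repository's implementation):

```python
import numpy as np

def causal_mask(k, mask_type="A"):
    """Mask for a k x k convolution kernel enforcing raster-scan causality.
    Type "A" (first layer) also hides the centre pixel; type "B" keeps it."""
    mask = np.ones((k, k))
    c = k // 2
    mask[c, c + (mask_type == "B"):] = 0  # centre (type A) and everything right of it
    mask[c + 1:, :] = 0                   # all rows below the centre
    return mask

print(causal_mask(3, "A"))
# [[1. 1. 1.]
#  [1. 0. 0.]
#  [0. 0. 0.]]
```

Multiplying a convolution's weights by this mask before each forward pass guarantees that no pixel's prediction depends on pixels that come later in the generation order.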
- Title: Neural Discrete Representation Learning
- Authors: Aaron van den Oord, Oriol Vinyals, Koray Kavukcuoglu
- Link: [Neural Discrete Representation Learning](https://arxiv.org/abs/1711.00937)
- Directory: `vq_vae`
- Description: This implementation focuses on the VQ-VAE model, a variational autoencoder that learns discrete latent representations in an unsupervised manner by snapping each encoder output to its nearest entry in a learned codebook.
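The quantization step is a nearest-neighbour lookup into the codebook. A NumPy sketch of that lookup (shapes and names are illustrative; a real VQ-VAE also needs the straight-through gradient estimator and codebook/commitment losses, omitted here):

```python
import numpy as np

def quantize(z_e, codebook):
    """Replace each encoder output vector with its nearest codebook embedding."""
    # squared Euclidean distance from every z_e row to every codebook row
    d = ((z_e[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    codes = d.argmin(axis=1)       # discrete indices -- the learned representation
    return codebook[codes], codes

rng = np.random.default_rng(0)
codebook = rng.normal(size=(16, 4))   # K = 16 embeddings of dimension 4
z_e = rng.normal(size=(5, 4))         # 5 encoder output vectors
z_q, codes = quantize(z_e, codebook)
print(z_q.shape, codes.shape)         # (5, 4) (5,)
```

The decoder then reconstructs the input from `z_q`, so the model communicates through the discrete `codes` alone.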
- Title: Neural Machine Translation of Rare Words with Subword Units
- Authors: Rico Sennrich, Barry Haddow, Alexandra Birch
- Link: [Neural Machine Translation of Rare Words with Subword Units](https://arxiv.org/abs/1508.07909)
- Directory: `bpe`
- Description: This implementation covers Byte Pair Encoding (BPE), a segmentation technique adapted from data compression that splits rare words into frequent subword units, enabling open-vocabulary neural machine translation and other natural language processing tasks.
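The algorithm repeatedly merges the most frequent adjacent symbol pair in a word-frequency dictionary. A compact sketch in the spirit of Sennrich et al.'s reference pseudocode, using the toy corpus from the paper (this is an illustration, not the code in `bpe`):

```python
from collections import Counter

def learn_bpe(word_freqs, num_merges):
    """Learn BPE merge rules from a {word: frequency} dictionary."""
    # represent each word as a tuple of symbols plus an end-of-word marker
    vocab = {tuple(w) + ("</w>",): f for w, f in word_freqs.items()}
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for word, freq in vocab.items():
            for a, b in zip(word, word[1:]):
                pairs[a, b] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)   # most frequent adjacent pair
        merges.append(best)
        new_vocab = {}
        for word, freq in vocab.items():   # apply the merge everywhere
            out, i = [], 0
            while i < len(word):
                if i < len(word) - 1 and (word[i], word[i + 1]) == best:
                    out.append(word[i] + word[i + 1]); i += 2
                else:
                    out.append(word[i]); i += 1
            new_vocab[tuple(out)] = freq
        vocab = new_vocab
    return merges

merges = learn_bpe({"low": 5, "lower": 2, "newest": 6, "widest": 3}, 4)
print(merges)  # [('e', 's'), ('es', 't'), ('est', '</w>'), ('l', 'o')]
```

At inference time, the learned merge list is replayed in order on each new word, so unseen words decompose into known subword units.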
Each directory contains a README with specific instructions for setting up and running the code; please refer to the individual README files for details.
Contributions and improvements are welcome. Feel free to fork the repository, create a new branch, and submit a pull request.
This project is licensed under the MIT License - see the LICENSE file for details.