
# Neural Machine Translation with a Character-Level Decoder

This repository is a modularized rewrite of Stanford CS224N Assignment 5, specifically the character-level LSTM decoder for neural machine translation.

I rewrote it as a way of understanding the model's components and, eventually, the tricks used to train it. A few notebooks are lightly annotated for my own reference.
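To make the decoder's role concrete, here is a minimal sketch of a single character-level LSTM decoder with greedy decoding, written in plain NumPy. Everything here is illustrative: the tiny vocabulary, parameter names, and dimensions are hypothetical and do not match the repository's actual (PyTorch-based) modules.

```python
import numpy as np

# Illustrative only: random parameters stand in for a trained model.
rng = np.random.default_rng(0)

VOCAB = list("abc<>")          # hypothetical character vocabulary; '<'/'>' = start/end
V, E, H = len(VOCAB), 8, 16    # vocab size, char-embedding dim, hidden dim

W_emb = rng.standard_normal((V, E)) * 0.1
W_x = rng.standard_normal((4 * H, E)) * 0.1   # input-to-gates weights
W_h = rng.standard_normal((4 * H, H)) * 0.1   # hidden-to-gates weights
b = np.zeros(4 * H)
W_out = rng.standard_normal((V, H)) * 0.1     # hidden-to-vocab projection

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c):
    """One LSTM cell step; gates stacked as [input, forget, output, candidate]."""
    z = W_x @ x + W_h @ h + b
    i, f, o, g = np.split(z, 4)
    i, f, o, g = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

def greedy_decode(h0, c0, max_len=10):
    """Greedily emit characters until the end token '>' or max_len is reached."""
    h, c = h0, c0
    char = "<"                 # start-of-word token
    out = []
    for _ in range(max_len):
        x = W_emb[VOCAB.index(char)]
        h, c = lstm_step(x, h, c)
        char = VOCAB[int(np.argmax(W_out @ h))]
        if char == ">":
            break
        out.append(char)
    return "".join(out)

# In the assignment, the initial (h0, c0) comes from the word-level decoder state
# when a word must be spelled out character by character (e.g. for rare words).
word = greedy_decode(np.zeros(H), np.zeros(H))
print(word)
```

With random weights the output is meaningless; the point is only the shape of the loop: embed the previous character, advance the LSTM state, project to the character vocabulary, and pick the argmax.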

## Usage

Docker support is coming soon. Meanwhile:

1. Clone the repository.
2. Install the requirements. For the train and test tasks:

   ```sh
   pipenv install
   ```

   If you also want to browse the notebooks, install the dev dependencies instead:

   ```sh
   pipenv install --dev
   ```

3. Download the Assignment 5 data from Stanford CS224N and place it in `nmt/datasets/data/`.
4. Run tasks with:

   ```sh
   pipenv run sh tasks/<task-name>.sh
   ```

Possible tasks:

- `train_local.sh`: training on a small sample (equivalent to `train-local-q2` from the assignment)
- `test_local.sh`: testing on the small sample (equivalent to `test-local-q2` from the assignment); should produce a BLEU score of ~99.27
- `train.sh`: training on all data, on GPU
- `test.sh`: testing on all data; should produce a BLEU score of ~29.40
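The BLEU scores above come from the assignment's evaluation script. For reference, corpus-level BLEU can be sketched in a few lines; this is a simplified version (uniform 4-gram weights, one reference per hypothesis, no smoothing), not the exact scorer used by the tasks.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def corpus_bleu(references, hypotheses, max_n=4):
    """Simplified corpus-level BLEU: modified n-gram precision up to 4-grams,
    geometric mean, and a brevity penalty. Illustrative only."""
    clipped, totals = Counter(), Counter()
    ref_len = hyp_len = 0
    for ref, hyp in zip(references, hypotheses):
        ref_len += len(ref)
        hyp_len += len(hyp)
        for n in range(1, max_n + 1):
            ref_counts = Counter(ngrams(ref, n))
            for g, cnt in Counter(ngrams(hyp, n)).items():
                clipped[n] += min(cnt, ref_counts[g])  # clip to reference count
                totals[n] += cnt
    if any(clipped[n] == 0 for n in range(1, max_n + 1)):
        return 0.0  # zero precision at some order -> BLEU of 0 without smoothing
    log_prec = sum(math.log(clipped[n] / totals[n])
                   for n in range(1, max_n + 1)) / max_n
    # brevity penalty punishes hypotheses shorter than the references
    bp = 1.0 if hyp_len > ref_len else math.exp(1 - ref_len / hyp_len)
    return 100 * bp * math.exp(log_prec)

ref = "the cat sat on the mat".split()
hyp = "the cat sat on the mat".split()
print(round(corpus_bleu([ref], [hyp]), 2))  # perfect match -> 100.0
```

A perfect hypothesis scores 100; partial overlap lands in between, which is why the tiny local sample (nearly memorized) reaches ~99 while the full test set sits near 29.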

## Dependencies

- Python 3.6 (if using the Pipfile; 3.6+ if using requirements.txt)
- Pipenv

## References