fix the overfit single batch behavior to actually overfit batch, not microbatch #662

Merged
merged 1 commit into master on Jul 1, 2024

Conversation


karpathy commented Jul 1, 2024

We do this more cleanly by simply resetting the dataloader every step.

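A minimal PyTorch-flavored sketch of the idea (SimpleDataLoader and its reset()/next_batch() methods are illustrative stand-ins here, not the actual llm.c or train_gpt2.py API):

# sketch: overfit a single batch by resetting the dataloader at the top of every step
import torch

class SimpleDataLoader:
    # stand-in loader over a flat 1D token stream (illustrative, not the llm.c API)
    def __init__(self, tokens, B, T):
        self.tokens, self.B, self.T = tokens, B, T
        self.pos = 0

    def reset(self):
        # rewind to the start of the token stream
        self.pos = 0

    def next_batch(self):
        B, T = self.B, self.T
        buf = self.tokens[self.pos : self.pos + B * T + 1]
        x = buf[:-1].view(B, T)  # inputs
        y = buf[1:].view(B, T)   # targets, shifted by one token
        self.pos += B * T
        return x, y

tokens = torch.randint(0, 50257, (10_000,))  # toy token stream
loader = SimpleDataLoader(tokens, B=2, T=64)
overfit_single_batch = True
grad_accum_steps = 2  # e.g. total batch of 256 tokens / (B*T = 128)

for step in range(10):
    if overfit_single_batch:
        # key idea of this PR: rewind once per optimization step, so each step
        # trains on the exact same total batch (all grad_accum_steps microbatches),
        # rather than pinning only a single microbatch
        loader.reset()
    for micro_step in range(grad_accum_steps):
        x, y = loader.next_batch()
        # forward / loss scaled by 1/grad_accum_steps / backward would go here
    # optimizer step and zero_grad would go here

Resetting once per step (rather than caching a single microbatch) keeps the gradient accumulation semantics intact, which is why the -b 4 and -b 2 runs below agree.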

karpathy commented Jul 1, 2024

This recovers our ability to reproduce our test with our train. Example:

# reproduce our test with our train
make train_gpt2cu PRECISION=FP32
./train_gpt2cu -b 4 -t 64 -d 256 -l 0.0001 -v 200 -s 200 -a 1 -x 10 -r 0 -f 0 -e "gpt2_124M.bin"

We can also use grad accum 2:

./train_gpt2cu -b 2 -t 64 -d 256 -l 0.0001 -v 200 -s 200 -a 1 -x 10 -r 0 -f 0 -e "gpt2_124M.bin"

Same for PyTorch:

python train_gpt2.py --write_tensors 0 --batch_size 4
python train_gpt2.py --write_tensors 0 --batch_size 2

These now print the same numbers: grad accum kicks in to recover the total desired batch size, so both runs give the same results.
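For reference, the grad accum arithmetic behind the commands above (assuming, as an illustration, that the number of accumulation steps is derived as total_batch_size / (B * T)):

# hypothetical recomputation of grad accum steps from the flags above
total_batch_size = 256  # tokens, from -d 256
T = 64                  # from -t 64
for B in (4, 2):        # from -b 4 and -b 2 / --batch_size 4 and 2
    grad_accum_steps = total_batch_size // (B * T)
    print(f"B={B}: grad_accum_steps={grad_accum_steps}")  # B=4 -> 1, B=2 -> 2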

@karpathy karpathy merged commit 942fed5 into master Jul 1, 2024
26 checks passed