
pytorch_training_optimization_using_tensordict_memory_mapping

Optimizing PyTorch training by wrapping a torch.utils.data.Dataset in a tensordict.TensorDict of MemoryMappedTensors that are memory-mapped, pinned, and loaded onto an NVIDIA GPU, then feeding the resulting TensorDict into torch.utils.data.DataLoader to boost model training speed.

To run the demo:

git clone https://github.com/OriYarden/pytorch_training_optimization_using_tensordict_memory_mapping
cd pytorch_training_optimization_using_tensordict_memory_mapping
python run_demo.py

Training 1 Epoch via torch.utils.data.Dataset:

(screenshot: demo_dataloader)

Training 1 Epoch via tensordict.TensorDict.MemoryMappedTensor(torch.utils.data.Dataset):

(screenshot: demo_td_dataloader)

TensorDict Memory Mapping boosts training speed.

The one-time wrapping cost is roughly equal to the runtime of one epoch with torch.utils.data.Dataset, so it is amortized after the first epoch:

(screenshot: demo_td_wrapper)