diff --git a/04_mnist_basics.ipynb b/04_mnist_basics.ipynb
index e8b663749..a1c2400cc 100644
--- a/04_mnist_basics.ipynb
+++ b/04_mnist_basics.ipynb
@@ -4180,7 +4180,7 @@
     "\n",
     "As we saw in our discussion of data augmentation in <>, we get better generalization if we can vary things during training. One simple and effective thing we can vary is what data items we put in each mini-batch. Rather than simply enumerating our dataset in order for every epoch, instead what we normally do is randomly shuffle it on every epoch, before we create mini-batches. PyTorch and fastai provide a class that will do the shuffling and mini-batch collation for you, called `DataLoader`.\n",
     "\n",
-    "A `DataLoader` can take any Python collection and turn it into an iterator over many batches, like so:"
+    "A `DataLoader` can take any Python collection and turn it into an iterator over mini-batches, like so:"
    ]
   },
   {
@@ -4239,7 +4239,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "When we pass a `Dataset` to a `DataLoader` we will get back many batches which are themselves tuples of tensors representing batches of independent and dependent variables:"
+    "When we pass a `Dataset` to a `DataLoader` we will get back mini-batches which are themselves tuples of tensors representing batches of independent and dependent variables:"
    ]
   },
   {
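For context on the behavior these two cells describe, here is a minimal sketch of a `DataLoader` turning a plain Python collection into shuffled mini-batches, and collating a dataset of (x, y) pairs into tuples of tensors. It uses PyTorch's `torch.utils.data.DataLoader` so it is self-contained; the notebook itself uses fastai's `DataLoader`, which behaves the same way for this purpose. The toy tensors `xs` and `ys` are made up purely for illustration.

```python
# Sketch of the behavior described in the changed cells, using PyTorch's DataLoader.
# (The notebook uses fastai's DataLoader; it behaves the same for this purpose.)
import torch
from torch.utils.data import DataLoader

# Any Python collection can be batched: here, the integers 0..14.
coll = range(15)
dl = DataLoader(coll, batch_size=5, shuffle=True)
for batch in dl:
    print(batch)  # each batch is a tensor of 5 items, reshuffled on every pass

# A Dataset is a collection of (independent, dependent) pairs; the DataLoader
# collates each mini-batch into a tuple of tensors.
xs = torch.randn(15, 4)          # toy independent variables (illustration only)
ys = torch.randint(0, 2, (15,))  # toy dependent variables (labels, illustration only)
dset = list(zip(xs, ys))
dl = DataLoader(dset, batch_size=5, shuffle=True)
xb, yb = next(iter(dl))
print(xb.shape, yb.shape)        # torch.Size([5, 4]) torch.Size([5])
```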