Commit

Automated tutorials push

pytorchbot committed Jan 30, 2025
1 parent fed6e56 commit 0c2da3c
Showing 196 changed files with 13,556 additions and 12,892 deletions.
4 changes: 2 additions & 2 deletions _downloads/13cdb386a4b0dc48c626f32e6cf8681d/amp_recipe.ipynb
@@ -261,7 +261,7 @@
"# The same ``GradScaler`` instance should be used for the entire convergence run.\n",
"# If you perform multiple convergence runs in the same script, each run should use\n",
"# a dedicated fresh ``GradScaler`` instance. ``GradScaler`` instances are lightweight.\n",
"scaler = torch.cuda.amp.GradScaler()\n",
"scaler = torch.amp.GradScaler(\"cuda\")\n",
"\n",
"for epoch in range(0): # 0 epochs, this section is for illustration only\n",
" for input, target in zip(data, targets):\n",
@@ -308,7 +308,7 @@
"\n",
"net = make_model(in_size, out_size, num_layers)\n",
"opt = torch.optim.SGD(net.parameters(), lr=0.001)\n",
"scaler = torch.cuda.amp.GradScaler(enabled=use_amp)\n",
"scaler = torch.amp.GradScaler(\"cuda\" ,enabled=use_amp)\n",
"\n",
"start_timer()\n",
"for epoch in range(epochs):\n",
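The hunks above migrate the recipe from the deprecated ``torch.cuda.amp.GradScaler()`` spelling to the device-generic ``torch.amp.GradScaler("cuda")``. A minimal sketch of how the new constructor fits the recipe's loop, assuming ``net``, ``opt``, ``loss_fn``, ``data``, and ``targets`` are defined as in the recipe:

.. code-block:: python

    import torch

    scaler = torch.amp.GradScaler("cuda")  # device type is now an explicit argument

    for input, target in zip(data, targets):
        # Run the forward pass under autocast so eligible ops use float16.
        with torch.autocast(device_type="cuda", dtype=torch.float16):
            output = net(input)
            loss = loss_fn(output, target)
        scaler.scale(loss).backward()  # backprop on the scaled loss
        scaler.step(opt)               # unscales grads; skips the step on inf/NaN
        scaler.update()                # adapt the scale factor for the next iteration
        opt.zero_grad(set_to_none=True)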
4 changes: 2 additions & 2 deletions _downloads/cadb3a57e7a6d7c149b5ae377caf36a8/amp_recipe.py
@@ -150,7 +150,7 @@ def make_model(in_size, out_size, num_layers):
# The same ``GradScaler`` instance should be used for the entire convergence run.
# If you perform multiple convergence runs in the same script, each run should use
# a dedicated fresh ``GradScaler`` instance. ``GradScaler`` instances are lightweight.
scaler = torch.cuda.amp.GradScaler()
scaler = torch.amp.GradScaler("cuda")

for epoch in range(0): # 0 epochs, this section is for illustration only
for input, target in zip(data, targets):
@@ -182,7 +182,7 @@ def make_model(in_size, out_size, num_layers):

net = make_model(in_size, out_size, num_layers)
opt = torch.optim.SGD(net.parameters(), lr=0.001)
scaler = torch.cuda.amp.GradScaler(enabled=use_amp)
scaler = torch.amp.GradScaler("cuda", enabled=use_amp)

start_timer()
for epoch in range(epochs):
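The second hunk in each file passes ``enabled=use_amp`` so the same loop can be timed with and without mixed precision. A sketch of that toggle pattern, under the same assumptions as above:

.. code-block:: python

    use_amp = True  # flip to False to run the identical loop in full precision

    scaler = torch.amp.GradScaler("cuda", enabled=use_amp)

    for epoch in range(epochs):
        for input, target in zip(data, targets):
            with torch.autocast(device_type="cuda", dtype=torch.float16, enabled=use_amp):
                output = net(input)
                loss = loss_fn(output, target)
            # With enabled=False these calls are no-ops and the loop runs unscaled.
            scaler.scale(loss).backward()
            scaler.step(opt)
            scaler.update()
            opt.zero_grad(set_to_none=True)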
Binary file modified _images/sphx_glr_char_rnn_classification_tutorial_001.png
Binary file modified _images/sphx_glr_char_rnn_classification_tutorial_002.png
Binary file modified _images/sphx_glr_coding_ddpg_001.png
Binary file modified _images/sphx_glr_dqn_with_rnn_tutorial_001.png
Binary file modified _images/sphx_glr_neural_style_tutorial_004.png
Binary file modified _images/sphx_glr_pinmem_nonblock_001.png
Binary file modified _images/sphx_glr_pinmem_nonblock_002.png
Binary file modified _images/sphx_glr_pinmem_nonblock_003.png
Binary file modified _images/sphx_glr_pinmem_nonblock_004.png
Binary file modified _images/sphx_glr_reinforcement_ppo_001.png
Binary file modified _images/sphx_glr_reinforcement_q_learning_001.png
Binary file modified _images/sphx_glr_semi_structured_sparse_001.png
Binary file modified _images/sphx_glr_semi_structured_sparse_002.png
Binary file modified _images/sphx_glr_spatial_transformer_tutorial_001.png
Binary file modified _images/sphx_glr_torchvision_tutorial_002.png
42 changes: 21 additions & 21 deletions _sources/advanced/coding_ddpg.rst.txt
@@ -1634,26 +1634,26 @@ modules we need.
0%| | 0/10000 [00:00<?, ?it/s]
8%|8 | 800/10000 [00:00<00:05, 1660.50it/s]
16%|#6 | 1600/10000 [00:02<00:17, 477.55it/s]
24%|##4 | 2400/10000 [00:03<00:11, 674.49it/s]
32%|###2 | 3200/10000 [00:04<00:07, 850.24it/s]
40%|#### | 4000/10000 [00:04<00:06, 995.16it/s]
48%|####8 | 4800/10000 [00:05<00:04, 1116.81it/s]
56%|#####6 | 5600/10000 [00:05<00:03, 1208.77it/s]
reward: -2.34 (r0 = -2.00), reward eval: reward: -0.00, reward normalized=-2.59/6.17, grad norm= 147.92, loss_value= 295.62, loss_actor= 13.96, target value: -15.91: 56%|#####6 | 5600/10000 [00:06<00:03, 1208.77it/s]
reward: -2.34 (r0 = -2.00), reward eval: reward: -0.00, reward normalized=-2.59/6.17, grad norm= 147.92, loss_value= 295.62, loss_actor= 13.96, target value: -15.91: 64%|######4 | 6400/10000 [00:07<00:04, 841.61it/s]
reward: -0.11 (r0 = -2.00), reward eval: reward: -0.00, reward normalized=-1.53/5.46, grad norm= 118.19, loss_value= 194.76, loss_actor= 10.63, target value: -10.14: 64%|######4 | 6400/10000 [00:08<00:04, 841.61it/s]
reward: -0.11 (r0 = -2.00), reward eval: reward: -0.00, reward normalized=-1.53/5.46, grad norm= 118.19, loss_value= 194.76, loss_actor= 10.63, target value: -10.14: 72%|#######2 | 7200/10000 [00:08<00:03, 700.19it/s]
reward: -2.33 (r0 = -2.00), reward eval: reward: -0.00, reward normalized=-2.42/5.58, grad norm= 182.04, loss_value= 220.44, loss_actor= 13.69, target value: -16.09: 72%|#######2 | 7200/10000 [00:09<00:03, 700.19it/s]
reward: -2.33 (r0 = -2.00), reward eval: reward: -0.00, reward normalized=-2.42/5.58, grad norm= 182.04, loss_value= 220.44, loss_actor= 13.69, target value: -16.09: 80%|######## | 8000/10000 [00:10<00:03, 623.68it/s]
reward: -4.44 (r0 = -2.00), reward eval: reward: -0.00, reward normalized=-2.44/4.89, grad norm= 111.35, loss_value= 211.11, loss_actor= 15.74, target value: -15.42: 80%|######## | 8000/10000 [00:11<00:03, 623.68it/s]
reward: -4.44 (r0 = -2.00), reward eval: reward: -0.00, reward normalized=-2.44/4.89, grad norm= 111.35, loss_value= 211.11, loss_actor= 15.74, target value: -15.42: 88%|########8 | 8800/10000 [00:12<00:02, 588.63it/s]
reward: -4.96 (r0 = -2.00), reward eval: reward: -5.98, reward normalized=-2.32/4.85, grad norm= 54.44, loss_value= 165.27, loss_actor= 16.38, target value: -16.11: 88%|########8 | 8800/10000 [00:14<00:02, 588.63it/s]
reward: -4.96 (r0 = -2.00), reward eval: reward: -5.98, reward normalized=-2.32/4.85, grad norm= 54.44, loss_value= 165.27, loss_actor= 16.38, target value: -16.11: 96%|#########6| 9600/10000 [00:15<00:01, 399.47it/s]
reward: -4.86 (r0 = -2.00), reward eval: reward: -5.98, reward normalized=-3.02/4.89, grad norm= 173.10, loss_value= 234.27, loss_actor= 13.70, target value: -21.43: 96%|#########6| 9600/10000 [00:16<00:01, 399.47it/s]
reward: -4.86 (r0 = -2.00), reward eval: reward: -5.98, reward normalized=-3.02/4.89, grad norm= 173.10, loss_value= 234.27, loss_actor= 13.70, target value: -21.43: : 10400it [00:18, 364.88it/s]
reward: -4.93 (r0 = -2.00), reward eval: reward: -5.98, reward normalized=-3.38/3.91, grad norm= 120.25, loss_value= 129.44, loss_actor= 15.23, target value: -23.98: : 10400it [00:18, 364.88it/s]
8%|8 | 800/10000 [00:00<00:05, 1683.66it/s]
16%|#6 | 1600/10000 [00:02<00:17, 485.23it/s]
24%|##4 | 2400/10000 [00:03<00:11, 687.00it/s]
32%|###2 | 3200/10000 [00:04<00:07, 877.15it/s]
40%|#### | 4000/10000 [00:04<00:05, 1045.73it/s]
48%|####8 | 4800/10000 [00:05<00:04, 1188.29it/s]
56%|#####6 | 5600/10000 [00:05<00:03, 1288.59it/s]
reward: -2.38 (r0 = -3.75), reward eval: reward: -0.01, reward normalized=-2.90/6.27, grad norm= 184.34, loss_value= 346.66, loss_actor= 15.34, target value: -18.57: 56%|#####6 | 5600/10000 [00:06<00:03, 1288.59it/s]
reward: -2.38 (r0 = -3.75), reward eval: reward: -0.01, reward normalized=-2.90/6.27, grad norm= 184.34, loss_value= 346.66, loss_actor= 15.34, target value: -18.57: 64%|######4 | 6400/10000 [00:07<00:04, 876.74it/s]
reward: -0.14 (r0 = -3.75), reward eval: reward: -0.01, reward normalized=-2.44/5.85, grad norm= 49.14, loss_value= 239.65, loss_actor= 14.06, target value: -14.04: 64%|######4 | 6400/10000 [00:07<00:04, 876.74it/s]
reward: -0.14 (r0 = -3.75), reward eval: reward: -0.01, reward normalized=-2.44/5.85, grad norm= 49.14, loss_value= 239.65, loss_actor= 14.06, target value: -14.04: 72%|#######2 | 7200/10000 [00:08<00:03, 713.38it/s]
reward: -2.36 (r0 = -3.75), reward eval: reward: -0.01, reward normalized=-2.26/5.79, grad norm= 128.31, loss_value= 254.09, loss_actor= 12.59, target value: -14.58: 72%|#######2 | 7200/10000 [00:09<00:03, 713.38it/s]
reward: -2.36 (r0 = -3.75), reward eval: reward: -0.01, reward normalized=-2.26/5.79, grad norm= 128.31, loss_value= 254.09, loss_actor= 12.59, target value: -14.58: 80%|######## | 8000/10000 [00:10<00:03, 627.39it/s]
reward: -4.64 (r0 = -3.75), reward eval: reward: -0.01, reward normalized=-3.11/5.26, grad norm= 115.52, loss_value= 239.16, loss_actor= 18.21, target value: -19.89: 80%|######## | 8000/10000 [00:11<00:03, 627.39it/s]
reward: -4.64 (r0 = -3.75), reward eval: reward: -0.01, reward normalized=-3.11/5.26, grad norm= 115.52, loss_value= 239.16, loss_actor= 18.21, target value: -19.89: 88%|########8 | 8800/10000 [00:11<00:02, 582.30it/s]
reward: -4.99 (r0 = -3.75), reward eval: reward: -5.59, reward normalized=-3.11/5.35, grad norm= 68.47, loss_value= 219.28, loss_actor= 19.75, target value: -19.78: 88%|########8 | 8800/10000 [00:14<00:02, 582.30it/s]
reward: -4.99 (r0 = -3.75), reward eval: reward: -5.59, reward normalized=-3.11/5.35, grad norm= 68.47, loss_value= 219.28, loss_actor= 19.75, target value: -19.78: 96%|#########6| 9600/10000 [00:15<00:00, 404.39it/s]
reward: -5.39 (r0 = -3.75), reward eval: reward: -5.59, reward normalized=-3.21/5.31, grad norm= 204.79, loss_value= 298.20, loss_actor= 19.41, target value: -22.63: 96%|#########6| 9600/10000 [00:15<00:00, 404.39it/s]
reward: -5.39 (r0 = -3.75), reward eval: reward: -5.59, reward normalized=-3.21/5.31, grad norm= 204.79, loss_value= 298.20, loss_actor= 19.41, target value: -22.63: : 10400it [00:17, 366.15it/s]
reward: -4.68 (r0 = -3.75), reward eval: reward: -5.59, reward normalized=-3.57/4.27, grad norm= 74.57, loss_value= 203.88, loss_actor= 23.34, target value: -24.98: : 10400it [00:18, 366.15it/s]
@@ -1723,7 +1723,7 @@ To iterate further on this loss module we might consider:

.. rst-class:: sphx-glr-timing

**Total running time of the script:** ( 0 minutes 29.005 seconds)
**Total running time of the script:** ( 0 minutes 28.797 seconds)


.. _sphx_glr_download_advanced_coding_ddpg.py:
6 changes: 3 additions & 3 deletions _sources/advanced/dynamic_quantization_tutorial.rst.txt
@@ -517,9 +517,9 @@ models run single threaded.
.. code-block:: none
loss: 5.167
elapsed time (seconds): 208.2
elapsed time (seconds): 202.0
loss: 5.168
elapsed time (seconds): 115.8
elapsed time (seconds): 113.8
@@ -541,7 +541,7 @@ Thanks for reading! As always, we welcome any feedback, so please create an issu

.. rst-class:: sphx-glr-timing

**Total running time of the script:** ( 5 minutes 34.880 seconds)
**Total running time of the script:** ( 5 minutes 26.458 seconds)


.. _sphx_glr_download_advanced_dynamic_quantization_tutorial.py:
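Only regenerated timings change in this file; the speedup being measured comes from dynamically quantizing the LSTM and Linear modules of the tutorial's word-language model to int8. A minimal sketch of that step (the ``model`` variable name is an assumption for illustration):

.. code-block:: python

    import torch

    # Quantize weights of LSTM/Linear layers to int8; activations are
    # quantized dynamically at inference time, so no calibration is needed.
    quantized_model = torch.ao.quantization.quantize_dynamic(
        model, {torch.nn.LSTM, torch.nn.Linear}, dtype=torch.qint8
    )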
67 changes: 34 additions & 33 deletions _sources/advanced/neural_style_tutorial.rst.txt
@@ -410,32 +410,33 @@ network to evaluation mode using ``.eval()``.
Downloading: "https://download.pytorch.org/models/vgg19-dcbb9e9d.pth" to /var/lib/ci-user/.cache/torch/hub/checkpoints/vgg19-dcbb9e9d.pth
0%| | 0.00/548M [00:00<?, ?B/s]
4%|3 | 20.5M/548M [00:00<00:02, 215MB/s]
8%|7 | 41.5M/548M [00:00<00:02, 217MB/s]
11%|#1 | 62.6M/548M [00:00<00:02, 219MB/s]
15%|#5 | 83.9M/548M [00:00<00:02, 220MB/s]
19%|#9 | 105M/548M [00:00<00:02, 221MB/s]
23%|##3 | 126M/548M [00:00<00:02, 221MB/s]
27%|##6 | 148M/548M [00:00<00:01, 221MB/s]
31%|### | 169M/548M [00:00<00:01, 221MB/s]
35%|###4 | 190M/548M [00:00<00:01, 221MB/s]
39%|###8 | 211M/548M [00:01<00:01, 221MB/s]
42%|####2 | 232M/548M [00:01<00:01, 221MB/s]
46%|####6 | 254M/548M [00:01<00:01, 222MB/s]
50%|##### | 275M/548M [00:01<00:01, 222MB/s]
54%|#####4 | 296M/548M [00:01<00:01, 221MB/s]
58%|#####7 | 317M/548M [00:01<00:01, 221MB/s]
62%|######1 | 338M/548M [00:01<00:00, 221MB/s]
66%|######5 | 360M/548M [00:01<00:00, 221MB/s]
69%|######9 | 381M/548M [00:01<00:00, 221MB/s]
73%|#######3 | 402M/548M [00:01<00:00, 221MB/s]
77%|#######7 | 423M/548M [00:02<00:00, 221MB/s]
81%|########1 | 444M/548M [00:02<00:00, 221MB/s]
85%|########4 | 465M/548M [00:02<00:00, 221MB/s]
89%|########8 | 486M/548M [00:02<00:00, 221MB/s]
93%|#########2| 507M/548M [00:02<00:00, 221MB/s]
96%|#########6| 528M/548M [00:02<00:00, 221MB/s]
100%|##########| 548M/548M [00:02<00:00, 221MB/s]
4%|3 | 20.4M/548M [00:00<00:02, 213MB/s]
8%|7 | 41.1M/548M [00:00<00:02, 216MB/s]
11%|#1 | 62.0M/548M [00:00<00:02, 217MB/s]
15%|#5 | 82.9M/548M [00:00<00:02, 217MB/s]
19%|#8 | 104M/548M [00:00<00:02, 218MB/s]
23%|##2 | 125M/548M [00:00<00:02, 218MB/s]
27%|##6 | 146M/548M [00:00<00:01, 218MB/s]
30%|### | 166M/548M [00:00<00:01, 218MB/s]
34%|###4 | 187M/548M [00:00<00:01, 218MB/s]
38%|###8 | 208M/548M [00:01<00:01, 219MB/s]
42%|####1 | 229M/548M [00:01<00:01, 219MB/s]
46%|####5 | 250M/548M [00:01<00:01, 219MB/s]
49%|####9 | 271M/548M [00:01<00:01, 219MB/s]
53%|#####3 | 292M/548M [00:01<00:01, 219MB/s]
57%|#####7 | 313M/548M [00:01<00:01, 219MB/s]
61%|###### | 334M/548M [00:01<00:01, 219MB/s]
65%|######4 | 355M/548M [00:01<00:00, 219MB/s]
69%|######8 | 376M/548M [00:01<00:00, 219MB/s]
72%|#######2 | 397M/548M [00:01<00:00, 219MB/s]
76%|#######6 | 418M/548M [00:02<00:00, 219MB/s]
80%|######## | 439M/548M [00:02<00:00, 219MB/s]
84%|########3 | 460M/548M [00:02<00:00, 219MB/s]
88%|########7 | 481M/548M [00:02<00:00, 219MB/s]
92%|#########1| 502M/548M [00:02<00:00, 219MB/s]
95%|#########5| 523M/548M [00:02<00:00, 219MB/s]
99%|#########9| 544M/548M [00:02<00:00, 219MB/s]
100%|##########| 548M/548M [00:02<00:00, 219MB/s]
@@ -756,22 +757,22 @@ Finally, we can run the algorithm.
Optimizing..
run [50]:
Style Loss : 4.243999 Content Loss: 4.230177
Style Loss : 4.124988 Content Loss: 4.180835
run [100]:
Style Loss : 1.153091 Content Loss: 3.027826
Style Loss : 1.149009 Content Loss: 3.025290
run [150]:
Style Loss : 0.714814 Content Loss: 2.653670
Style Loss : 0.722513 Content Loss: 2.658751
run [200]:
Style Loss : 0.479303 Content Loss: 2.491420
Style Loss : 0.484897 Content Loss: 2.496019
run [250]:
Style Loss : 0.347053 Content Loss: 2.402259
Style Loss : 0.348076 Content Loss: 2.404442
run [300]:
Style Loss : 0.262986 Content Loss: 2.348989
Style Loss : 0.265312 Content Loss: 2.350962
@@ -780,7 +781,7 @@
.. rst-class:: sphx-glr-timing

**Total running time of the script:** ( 0 minutes 38.572 seconds)
**Total running time of the script:** ( 0 minutes 38.498 seconds)


.. _sphx_glr_download_advanced_neural_style_tutorial.py:
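The hunks in this file only refresh download progress and regenerated loss values. For context, the network referenced in the first hunk is VGG-19's convolutional feature extractor, loaded pretrained and switched to evaluation mode; a sketch of that step (the ``weights`` enum spelling assumes torchvision 0.13+):

.. code-block:: python

    import torch
    import torchvision.models as models

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    # ``.eval()`` matters here: some VGG layers behave differently in
    # training vs. evaluation mode.
    cnn = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.to(device).eval()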
2 changes: 1 addition & 1 deletion _sources/advanced/numpy_extensions_tutorial.rst.txt
@@ -303,7 +303,7 @@ The backward pass computes the gradient ``wrt`` the input and the gradient ``wrt
.. rst-class:: sphx-glr-timing

**Total running time of the script:** ( 0 minutes 0.602 seconds)
**Total running time of the script:** ( 0 minutes 0.568 seconds)


.. _sphx_glr_download_advanced_numpy_extensions_tutorial.py:
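The only change here is the regenerated timing, but the context line refers to a custom ``torch.autograd.Function`` whose forward and backward passes are written in NumPy. A generic sketch of that pattern, using ``sin`` for illustration rather than the tutorial's actual FFT-based example:

.. code-block:: python

    import numpy as np
    import torch

    class NumpySin(torch.autograd.Function):
        # NumPy-backed forward/backward; works on CPU tensors only.
        @staticmethod
        def forward(ctx, x):
            ctx.save_for_backward(x)
            return torch.from_numpy(np.sin(x.detach().numpy()))

        @staticmethod
        def backward(ctx, grad_output):
            (x,) = ctx.saved_tensors
            # d/dx sin(x) = cos(x), chained with the incoming gradient.
            return grad_output * torch.from_numpy(np.cos(x.detach().numpy()))

    x = torch.randn(4, dtype=torch.float64, requires_grad=True)
    NumpySin.apply(x).sum().backward()  # populates x.grad via the NumPy backward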