Commit 3ae1e8b

Automated tutorials push

pytorchbot committed Jan 29, 2025
1 parent 6209c62 commit 3ae1e8b
Showing 198 changed files with 13,186 additions and 14,598 deletions.
Binary file modified _images/sphx_glr_char_rnn_classification_tutorial_001.png
Binary file modified _images/sphx_glr_char_rnn_classification_tutorial_002.png
Binary file modified _images/sphx_glr_coding_ddpg_001.png
Binary file modified _images/sphx_glr_dqn_with_rnn_tutorial_001.png
Binary file modified _images/sphx_glr_neural_style_tutorial_004.png
Binary file modified _images/sphx_glr_pinmem_nonblock_001.png
Binary file modified _images/sphx_glr_pinmem_nonblock_002.png
Binary file modified _images/sphx_glr_pinmem_nonblock_003.png
Binary file modified _images/sphx_glr_pinmem_nonblock_004.png
Binary file modified _images/sphx_glr_reinforcement_ppo_001.png
Binary file modified _images/sphx_glr_reinforcement_q_learning_001.png
Binary file modified _images/sphx_glr_semi_structured_sparse_001.png
Binary file modified _images/sphx_glr_semi_structured_sparse_002.png
Binary file modified _images/sphx_glr_spatial_transformer_tutorial_001.png
Binary file modified _images/sphx_glr_torchvision_tutorial_002.png
42 changes: 21 additions & 21 deletions _sources/advanced/coding_ddpg.rst.txt
@@ -1634,26 +1634,26 @@ modules we need.
  0%|          | 0/10000 [00:00<?, ?it/s]
-  8%|8         | 800/10000 [00:00<00:05, 1707.64it/s]
- 16%|#6        | 1600/10000 [00:02<00:17, 488.81it/s]
- 24%|##4       | 2400/10000 [00:03<00:10, 699.38it/s]
- 32%|###2      | 3200/10000 [00:03<00:07, 897.49it/s]
- 40%|####      | 4000/10000 [00:04<00:05, 1060.15it/s]
- 48%|####8     | 4800/10000 [00:05<00:04, 1191.03it/s]
- 56%|#####6    | 5600/10000 [00:05<00:03, 1296.38it/s]
- reward: -2.14 (r0 = -2.78), reward eval: reward: -0.01, reward normalized=-2.08/6.19, grad norm= 234.64, loss_value= 349.42, loss_actor= 15.13, target value: -12.18:  56%|#####6    | 5600/10000 [00:06<00:03, 1296.38it/s]
- reward: -2.14 (r0 = -2.78), reward eval: reward: -0.01, reward normalized=-2.08/6.19, grad norm= 234.64, loss_value= 349.42, loss_actor= 15.13, target value: -12.18:  64%|######4   | 6400/10000 [00:07<00:04, 889.15it/s]
- reward: -0.19 (r0 = -2.78), reward eval: reward: -0.01, reward normalized=-2.52/5.60, grad norm= 99.21, loss_value= 254.70, loss_actor= 14.22, target value: -16.14:  64%|######4   | 6400/10000 [00:07<00:04, 889.15it/s]
- reward: -0.19 (r0 = -2.78), reward eval: reward: -0.01, reward normalized=-2.52/5.60, grad norm= 99.21, loss_value= 254.70, loss_actor= 14.22, target value: -16.14:  72%|#######2  | 7200/10000 [00:08<00:03, 722.82it/s]
- reward: -1.72 (r0 = -2.78), reward eval: reward: -0.01, reward normalized=-2.27/5.64, grad norm= 60.57, loss_value= 231.16, loss_actor= 11.37, target value: -13.73:  72%|#######2  | 7200/10000 [00:09<00:03, 722.82it/s]
- reward: -1.72 (r0 = -2.78), reward eval: reward: -0.01, reward normalized=-2.27/5.64, grad norm= 60.57, loss_value= 231.16, loss_actor= 11.37, target value: -13.73:  80%|########  | 8000/10000 [00:10<00:03, 641.42it/s]
- reward: -3.97 (r0 = -2.78), reward eval: reward: -0.01, reward normalized=-2.10/5.46, grad norm= 76.46, loss_value= 310.34, loss_actor= 16.33, target value: -14.06:  80%|########  | 8000/10000 [00:10<00:03, 641.42it/s]
- reward: -3.97 (r0 = -2.78), reward eval: reward: -0.01, reward normalized=-2.10/5.46, grad norm= 76.46, loss_value= 310.34, loss_actor= 16.33, target value: -14.06:  88%|########8 | 8800/10000 [00:11<00:02, 598.58it/s]
- reward: -4.50 (r0 = -2.78), reward eval: reward: -2.28, reward normalized=-2.83/5.53, grad norm= 143.03, loss_value= 314.16, loss_actor= 19.17, target value: -18.72:  88%|########8 | 8800/10000 [00:14<00:02, 598.58it/s]
- reward: -4.50 (r0 = -2.78), reward eval: reward: -2.28, reward normalized=-2.83/5.53, grad norm= 143.03, loss_value= 314.16, loss_actor= 19.17, target value: -18.72:  96%|#########6| 9600/10000 [00:15<00:00, 404.64it/s]
- reward: -4.74 (r0 = -2.78), reward eval: reward: -2.28, reward normalized=-3.13/4.99, grad norm= 236.04, loss_value= 236.55, loss_actor= 14.05, target value: -22.38:  96%|#########6| 9600/10000 [00:15<00:00, 404.64it/s]
- reward: -4.74 (r0 = -2.78), reward eval: reward: -2.28, reward normalized=-3.13/4.99, grad norm= 236.04, loss_value= 236.55, loss_actor= 14.05, target value: -22.38: : 10400it [00:17, 358.82it/s]
- reward: -2.65 (r0 = -2.78), reward eval: reward: -2.28, reward normalized=-2.90/4.16, grad norm= 76.19, loss_value= 143.21, loss_actor= 12.72, target value: -20.87: : 10400it [00:18, 358.82it/s]
+  8%|8         | 800/10000 [00:00<00:05, 1703.98it/s]
+ 16%|#6        | 1600/10000 [00:02<00:17, 484.88it/s]
+ 24%|##4       | 2400/10000 [00:03<00:10, 699.40it/s]
+ 32%|###2      | 3200/10000 [00:03<00:07, 901.42it/s]
+ 40%|####      | 4000/10000 [00:04<00:05, 1070.69it/s]
+ 48%|####8     | 4800/10000 [00:04<00:04, 1219.46it/s]
+ 56%|#####6    | 5600/10000 [00:05<00:03, 1336.30it/s]
+ reward: -2.08 (r0 = -3.42), reward eval: reward: 0.00, reward normalized=-2.60/6.39, grad norm= 100.00, loss_value= 431.35, loss_actor= 16.48, target value: -15.57:  56%|#####6    | 5600/10000 [00:06<00:03, 1336.30it/s]
+ reward: -2.08 (r0 = -3.42), reward eval: reward: 0.00, reward normalized=-2.60/6.39, grad norm= 100.00, loss_value= 431.35, loss_actor= 16.48, target value: -15.57:  64%|######4   | 6400/10000 [00:06<00:03, 901.23it/s]
+ reward: -0.20 (r0 = -3.42), reward eval: reward: 0.00, reward normalized=-3.16/6.03, grad norm= 379.46, loss_value= 354.63, loss_actor= 14.92, target value: -19.87:  64%|######4   | 6400/10000 [00:07<00:03, 901.23it/s]
+ reward: -0.20 (r0 = -3.42), reward eval: reward: 0.00, reward normalized=-3.16/6.03, grad norm= 379.46, loss_value= 354.63, loss_actor= 14.92, target value: -19.87:  72%|#######2  | 7200/10000 [00:08<00:03, 740.79it/s]
+ reward: -3.19 (r0 = -3.42), reward eval: reward: 0.00, reward normalized=-1.95/6.12, grad norm= 78.80, loss_value= 296.14, loss_actor= 11.28, target value: -11.78:  72%|#######2  | 7200/10000 [00:09<00:03, 740.79it/s]
+ reward: -3.19 (r0 = -3.42), reward eval: reward: 0.00, reward normalized=-1.95/6.12, grad norm= 78.80, loss_value= 296.14, loss_actor= 11.28, target value: -11.78:  80%|########  | 8000/10000 [00:10<00:03, 652.77it/s]
+ reward: -4.73 (r0 = -3.42), reward eval: reward: 0.00, reward normalized=-2.86/5.35, grad norm= 79.71, loss_value= 225.52, loss_actor= 19.22, target value: -18.85:  80%|########  | 8000/10000 [00:10<00:03, 652.77it/s]
+ reward: -4.73 (r0 = -3.42), reward eval: reward: 0.00, reward normalized=-2.86/5.35, grad norm= 79.71, loss_value= 225.52, loss_actor= 19.22, target value: -18.85:  88%|########8 | 8800/10000 [00:11<00:01, 609.61it/s]
+ reward: -5.48 (r0 = -3.42), reward eval: reward: -5.46, reward normalized=-3.25/5.19, grad norm= 207.90, loss_value= 237.13, loss_actor= 21.20, target value: -22.06:  88%|########8 | 8800/10000 [00:14<00:01, 609.61it/s]
+ reward: -5.48 (r0 = -3.42), reward eval: reward: -5.46, reward normalized=-3.25/5.19, grad norm= 207.90, loss_value= 237.13, loss_actor= 21.20, target value: -22.06:  96%|#########6| 9600/10000 [00:14<00:00, 406.16it/s]
+ reward: -5.29 (r0 = -3.42), reward eval: reward: -5.46, reward normalized=-2.87/4.98, grad norm= 54.30, loss_value= 193.69, loss_actor= 20.55, target value: -20.42:  96%|#########6| 9600/10000 [00:15<00:00, 406.16it/s]
+ reward: -5.29 (r0 = -3.42), reward eval: reward: -5.46, reward normalized=-2.87/4.98, grad norm= 54.30, loss_value= 193.69, loss_actor= 20.55, target value: -20.42: : 10400it [00:17, 363.51it/s]
+ reward: -4.67 (r0 = -3.42), reward eval: reward: -5.46, reward normalized=-3.58/4.46, grad norm= 70.11, loss_value= 183.36, loss_actor= 23.07, target value: -25.11: : 10400it [00:18, 363.51it/s]
@@ -1723,7 +1723,7 @@ To iterate further on this loss module we might consider:

.. rst-class:: sphx-glr-timing

- **Total running time of the script:** ( 0 minutes 28.713 seconds)
+ **Total running time of the script:** ( 0 minutes 28.464 seconds)


.. _sphx_glr_download_advanced_coding_ddpg.py:
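
Note on the log above: the ``loss_value``, ``loss_actor``, and ``target value`` columns come from the DDPG loss the tutorial assembles. As a reminder of what those columns measure, here is a minimal sketch of the two objectives, assuming ``actor``, ``qvalue``, and ``target_qvalue`` stand in for the tutorial's policy, value, and target networks (a full DDPG also maintains a target actor, omitted here for brevity):

.. code-block:: python

   import torch
   import torch.nn.functional as F

   def ddpg_losses(actor, qvalue, target_qvalue, batch, gamma=0.99):
       obs, action, reward, next_obs, done = batch
       # "loss_value": TD error of Q(s, a) against a frozen target network.
       with torch.no_grad():
           next_q = target_qvalue(next_obs, actor(next_obs))
           target = reward + gamma * (1.0 - done) * next_q
       loss_value = F.mse_loss(qvalue(obs, action), target)
       # "loss_actor": gradient ascent on Q under the current policy.
       loss_actor = -qvalue(obs, actor(obs)).mean()
       return loss_value, loss_actor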
6 changes: 3 additions & 3 deletions _sources/advanced/dynamic_quantization_tutorial.rst.txt
@@ -517,9 +517,9 @@ models run single threaded.
.. code-block:: none
loss: 5.167
- elapsed time (seconds): 208.2
+ elapsed time (seconds): 208.6
loss: 5.168
- elapsed time (seconds): 116.5
+ elapsed time (seconds): 115.2
@@ -541,7 +541,7 @@ Thanks for reading! As always, we welcome any feedback, so please create an issu

.. rst-class:: sphx-glr-timing

- **Total running time of the script:** ( 5 minutes 35.389 seconds)
+ **Total running time of the script:** ( 5 minutes 34.642 seconds)


.. _sphx_glr_download_advanced_dynamic_quantization_tutorial.py:
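
The roughly halved elapsed time in the hunk above is the effect the tutorial measures: dynamic quantization converts weight matrices to int8 while activations stay in floating point and are quantized on the fly. A minimal sketch of the conversion, assuming a toy LSTM (the sizes are invented for illustration; the tutorial applies this to its word-language model):

.. code-block:: python

   import torch

   model = torch.nn.LSTM(input_size=512, hidden_size=256, num_layers=2)
   # Convert LSTM and Linear weights to int8; at inference, activations are
   # quantized dynamically, which is what roughly halves the time above.
   quantized_model = torch.ao.quantization.quantize_dynamic(
       model, {torch.nn.LSTM, torch.nn.Linear}, dtype=torch.qint8
   )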
70 changes: 33 additions & 37 deletions _sources/advanced/neural_style_tutorial.rst.txt
@@ -410,36 +410,32 @@ network to evaluation mode using ``.eval()``.
Downloading: "https://download.pytorch.org/models/vgg19-dcbb9e9d.pth" to /var/lib/ci-user/.cache/torch/hub/checkpoints/vgg19-dcbb9e9d.pth
0%| | 0.00/548M [00:00<?, ?B/s]
-   3%|3         | 17.2M/548M [00:00<00:03, 181MB/s]
-   7%|6         | 35.6M/548M [00:00<00:02, 187MB/s]
-  10%|9         | 54.0M/548M [00:00<00:02, 190MB/s]
-  13%|#3        | 72.5M/548M [00:00<00:02, 191MB/s]
-  17%|#6        | 90.9M/548M [00:00<00:02, 191MB/s]
-  20%|#9        | 109M/548M [00:00<00:02, 192MB/s]
-  23%|##3       | 128M/548M [00:00<00:02, 193MB/s]
-  27%|##6       | 146M/548M [00:00<00:02, 193MB/s]
-  30%|###       | 165M/548M [00:00<00:02, 193MB/s]
-  33%|###3      | 184M/548M [00:01<00:01, 193MB/s]
-  37%|###6      | 202M/548M [00:01<00:01, 193MB/s]
-  40%|####      | 220M/548M [00:01<00:01, 193MB/s]
-  44%|####3     | 239M/548M [00:01<00:01, 193MB/s]
-  47%|####7     | 258M/548M [00:01<00:01, 192MB/s]
-  50%|#####     | 276M/548M [00:01<00:01, 191MB/s]
-  54%|#####3    | 294M/548M [00:01<00:01, 190MB/s]
-  57%|#####7    | 313M/548M [00:01<00:01, 190MB/s]
-  60%|######    | 331M/548M [00:01<00:01, 191MB/s]
-  64%|######3   | 349M/548M [00:01<00:01, 191MB/s]
-  67%|######7   | 368M/548M [00:02<00:00, 191MB/s]
-  70%|#######   | 386M/548M [00:02<00:00, 192MB/s]
-  74%|#######3  | 405M/548M [00:02<00:00, 192MB/s]
-  77%|#######7  | 423M/548M [00:02<00:00, 192MB/s]
-  81%|########  | 442M/548M [00:02<00:00, 191MB/s]
-  84%|########3 | 460M/548M [00:02<00:00, 190MB/s]
-  87%|########7 | 478M/548M [00:02<00:00, 188MB/s]
-  90%|######### | 496M/548M [00:02<00:00, 188MB/s]
-  94%|#########3| 514M/548M [00:02<00:00, 187MB/s]
-  97%|#########7| 532M/548M [00:02<00:00, 187MB/s]
- 100%|##########| 548M/548M [00:03<00:00, 191MB/s]
+   4%|3         | 20.6M/548M [00:00<00:02, 216MB/s]
+   8%|7         | 41.9M/548M [00:00<00:02, 219MB/s]
+  12%|#1        | 63.1M/548M [00:00<00:02, 221MB/s]
+  15%|#5        | 84.6M/548M [00:00<00:02, 222MB/s]
+  19%|#9        | 106M/548M [00:00<00:02, 223MB/s]
+  23%|##3       | 128M/548M [00:00<00:01, 223MB/s]
+  27%|##7       | 149M/548M [00:00<00:01, 224MB/s]
+  31%|###1      | 170M/548M [00:00<00:01, 224MB/s]
+  35%|###5      | 192M/548M [00:00<00:01, 224MB/s]
+  39%|###8      | 213M/548M [00:01<00:01, 224MB/s]
+  43%|####2     | 235M/548M [00:01<00:01, 225MB/s]
+  47%|####6     | 256M/548M [00:01<00:01, 225MB/s]
+  51%|#####     | 278M/548M [00:01<00:01, 225MB/s]
+  55%|#####4    | 299M/548M [00:01<00:01, 225MB/s]
+  59%|#####8    | 321M/548M [00:01<00:01, 225MB/s]
+  62%|######2   | 342M/548M [00:01<00:00, 225MB/s]
+  66%|######6   | 364M/548M [00:01<00:00, 225MB/s]
+  70%|#######   | 385M/548M [00:01<00:00, 225MB/s]
+  74%|#######4  | 407M/548M [00:01<00:00, 225MB/s]
+  78%|#######8  | 428M/548M [00:02<00:00, 225MB/s]
+  82%|########2 | 450M/548M [00:02<00:00, 224MB/s]
+  86%|########6 | 471M/548M [00:02<00:00, 225MB/s]
+  90%|########9 | 493M/548M [00:02<00:00, 225MB/s]
+  94%|#########3| 514M/548M [00:02<00:00, 225MB/s]
+  98%|#########7| 536M/548M [00:02<00:00, 225MB/s]
+ 100%|##########| 548M/548M [00:02<00:00, 224MB/s]
@@ -760,22 +756,22 @@ Finally, we can run the algorithm.
Optimizing..
run [50]:
- Style Loss : 4.116919 Content Loss: 4.180849
+ Style Loss : 4.103480 Content Loss: 4.095845
run [100]:
- Style Loss : 1.130904 Content Loss: 3.026069
+ Style Loss : 1.120694 Content Loss: 3.009445
run [150]:
- Style Loss : 0.709217 Content Loss: 2.653672
+ Style Loss : 0.707350 Content Loss: 2.644475
run [200]:
- Style Loss : 0.472506 Content Loss: 2.489412
+ Style Loss : 0.476449 Content Loss: 2.486541
run [250]:
- Style Loss : 0.342615 Content Loss: 2.401286
+ Style Loss : 0.344099 Content Loss: 2.402043
run [300]:
- Style Loss : 0.263712 Content Loss: 2.348886
+ Style Loss : 0.262806 Content Loss: 2.347914
@@ -784,7 +780,7 @@ Finally, we can run the algorithm.
.. rst-class:: sphx-glr-timing

- **Total running time of the script:** ( 0 minutes 38.991 seconds)
+ **Total running time of the script:** ( 0 minutes 38.521 seconds)


.. _sphx_glr_download_advanced_neural_style_tutorial.py:
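
The download log above fetches the pretrained VGG-19 checkpoint (``vgg19-dcbb9e9d.pth``) that the tutorial then freezes, as the hunk header's mention of ``.eval()`` suggests. A minimal sketch of that step, assuming a torchvision version with the weights-enum API:

.. code-block:: python

   import torch
   import torchvision.models as models

   device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
   # Download the checkpoint shown above and freeze the feature extractor with
   # .eval(), since some VGG layers behave differently in training mode.
   cnn = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features
   cnn = cnn.to(device).eval()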
2 changes: 1 addition & 1 deletion _sources/advanced/numpy_extensions_tutorial.rst.txt
@@ -303,7 +303,7 @@ The backward pass computes the gradient ``wrt`` the input and the gradient ``wrt``
.. rst-class:: sphx-glr-timing

- **Total running time of the script:** ( 0 minutes 0.633 seconds)
+ **Total running time of the script:** ( 0 minutes 0.582 seconds)


.. _sphx_glr_download_advanced_numpy_extensions_tutorial.py:
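
The hunk context above refers to a backward pass returning a gradient for each forward input; that is the ``torch.autograd.Function`` pattern this tutorial wraps around NumPy code. A self-contained sketch with a hypothetical scaling op (the tutorial's own examples use FFT and convolution instead):

.. code-block:: python

   import torch
   from torch.autograd import Function

   class NumpyScale(Function):
       @staticmethod
       def forward(ctx, input, k):
           ctx.k = k
           result = input.detach().numpy() * k  # leave PyTorch, compute in NumPy
           return torch.as_tensor(result, dtype=input.dtype)

       @staticmethod
       def backward(ctx, grad_output):
           # Gradient w.r.t. ``input``; ``None`` for the non-tensor argument ``k``.
           grad = grad_output.detach().numpy() * ctx.k
           return torch.as_tensor(grad, dtype=grad_output.dtype), None

   x = torch.randn(4, requires_grad=True)
   NumpyScale.apply(x, 3.0).sum().backward()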