
Actions: pytorch/torchtune

Recipe Tests

5,993 workflow runs

Add a "division by zero" check in chunked loss handling in kd_losses.py
Recipe Tests #6381: Pull request #2239 synchronize by insop
January 11, 2025 01:02 32m 17s insop:insop/2225
Fix issue #2243, update the document to show correct usage
Recipe Tests #6380: Pull request #2252 opened by insop
January 11, 2025 00:48 32m 36s insop:insop/2243
Update the e2e flow tutorial to fix errors of generate
Recipe Tests #6379: Pull request #2251 opened by iseeyuan
January 10, 2025 23:48 32m 20s iseeyuan:docfix
Remove example inputs from aoti_compile_and_package
Recipe Tests #6378: Commit c152248 pushed by facebook-github-bot
January 10, 2025 22:43 36m 30s main
Adds validation loss to LoRA fine tune single device
Recipe Tests #6377: Pull request #2238 synchronize by MaxFrax
January 10, 2025 21:55 Action required MaxFrax:add_validation_loss_lora_singledevice
profiling ops on xpu
Recipe Tests #6376: Pull request #2249 synchronize by songhappy
January 10, 2025 21:16 33m 24s songhappy:profiler
Log grad norm aggregated over all ranks, not just rank zero (#2248)
Recipe Tests #6375: Commit f47f633 pushed by ebsmothers
January 10, 2025 20:39 34m 3s main
llama 3.1 has correct max_seq_len for all versions (#2203)
Recipe Tests #6374: Commit 262122b pushed by RdoubleA
January 10, 2025 20:02 32m 31s main
profiling ops on xpu
Recipe Tests #6373: Pull request #2249 opened by songhappy
January 10, 2025 19:44 33m 14s songhappy:profiler
Multi-tile support in vision rope
Recipe Tests #6371: Pull request #2247 opened by RdoubleA
January 10, 2025 18:12 33m 1s RdoubleA:no_tile_emb
llama 3.1 has correct max_seq_len for all versions
Recipe Tests #6370: Pull request #2203 synchronize by ebsmothers
January 10, 2025 18:04 32m 26s akashc1:llama3_1-max_seq_len
Adds clip_grad_norm to all recipe config that supports it (#2220)
Recipe Tests #6368: Commit b68cddd pushed by ebsmothers
January 10, 2025 17:43 33m 21s main
[Small fix] Update CUDA version in README (#2242)
Recipe Tests #6361: Commit baae232 pushed by ebsmothers
January 10, 2025 01:00 32m 58s main
Remove example inputs from aoti_compile_and_package
Recipe Tests #6360: Pull request #2244 opened by angelayi
January 10, 2025 00:52 32m 58s angelayi:export-D67998952
Add a "division by zero" check in chunked loss handling in kd_losses.py
Recipe Tests #6359: Pull request #2239 synchronize by insop
January 10, 2025 00:37 32m 48s insop:insop/2225
[Small fix] Update CUDA version in README
Recipe Tests #6358: Pull request #2242 synchronize by acisseJZhong
January 9, 2025 23:35 33m 4s acisseJZhong:update_cuda_version
[EZ] Fix config bug where interpolation happens too early
Recipe Tests #6356: Pull request #2236 synchronize by EugenHotaj
January 9, 2025 19:58 32m 49s EugenHotaj:patch-2