Issues: Lightning-AI/lightning-thunder
Pinned: Label tracking meta-issue (edit me to get automatically CC'ed...) · #72 · opened Mar 25, 2024 by carmocca
- fusion_type="dataflow" can lead to invalid trace · labels: fusion logic · #1858 · opened Mar 7, 2025 by kshitij12345
- [reporting] Utility to save failing repros for a FXReport · labels: enhancement, reporting · #1856 · opened Mar 7, 2025 by kshitij12345
- [reporting] Saved repro may have incorrect indentation. · labels: reporting · #1855 · opened Mar 7, 2025 by kshitij12345
- test_dynamo.py::test_thundercompiler_optim_step broken · labels: dynamo, thunderfx · #1821 · opened Mar 1, 2025 by t-vi
- Feed fp16/bf16 friendly values to test cases · labels: enhancement, testing · #1803 · opened Feb 25, 2025 by riccardofelluga
- cuDNN SDPA executor does not support batched broadcast across attention heads mask · labels: cudnn · #1799 · opened Feb 25, 2025 by IvanYashchuk
- Sharing memory pools in CUDAGraphs has non-trivial constraints for memory reuse · labels: cudagraphs · #1792 · opened Feb 24, 2025 by ali-alshaar7
- A mask conversion for cuDNN execution of scaled dot product attention should be fused · labels: cudnn, enhancement, sdpa · #1789 · opened Feb 24, 2025 by IvanYashchuk
- [Reporting] Automated bisection to identify significant changes in performance · labels: reporting · #1781 · opened Feb 18, 2025 by mruberry
- _interpret_call could reuse names in a trace when it's called in the lookaside of torch.autograd.Function · labels: autograd · #1776 · opened Feb 18, 2025 by crcrpar
- Infinite loop in inplace functionalization · labels: bug, in-place · #1770 · opened Feb 17, 2025 by beverlylytle
- Taking requires_grad propagation seriously in Thunder · labels: autograd, design required, enhancement · #1768 · opened Feb 17, 2025 by IvanYashchuk
- High memory consumption without dataflow-based fusion · labels: fusion logic, thunderfx · #1762 · opened Feb 11, 2025 by riccardofelluga
- Apply set_execution_file to specific traces such as forward/backward execution traces · labels: developer efficiency, enhancement · #1760 · opened Feb 11, 2025 by crcrpar
- Support packing multiple sequences with Flash Attention without cross-contamination · labels: enhancement, high priority, nemo, thunderfx · #1758 · opened Feb 10, 2025 by IvanYashchuk
- more general inplace support (index_copy_ in litgpt fails to trace) · labels: enhancement, in-place · #1743 · opened Feb 4, 2025 by ali-alshaar7
- Thunder slower than eager for PEFT LoRA configs with small input sizes · labels: peft · #1738 · opened Feb 4, 2025 by riccardofelluga
- thunderfx produces results with incorrect requires_grad · labels: autograd · #1733 · opened Feb 1, 2025 by jjsjann123
- Grad Transform generates inconsistent saved_for_backward between forward and backward trace. · labels: autograd · #1732 · opened Feb 1, 2025 by jjsjann123
- dividing a float16 tensor by a python float is inaccurate with nvfuser · labels: numerical accuracy, nvfuser · #1724 · opened Jan 30, 2025 by beverlylytle
- Run LitGPT benchmarking with a custom Attention implementation priority. · labels: benchmarking · #1714 · opened Jan 29, 2025 by wprazuch
- Input upcast is missing in Thunder's implementation of torch.nn.functional.rms_norm · labels: operators · #1713 · opened Jan 29, 2025 by IvanYashchuk