Releases: tridao/flash-attention-wheels
v2.3.5.post7
Add PyTorch 2.2.dev
v2.3.5.post6
Try MAX_JOBS=4
v2.3.5.post5
Try removing the 2-thread limit on nvcc
v2.3.5.post4
Try MAX_JOBS=2
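
The MAX_JOBS experiments above tune how many compilation jobs run in parallel during the wheel build. Below is a minimal sketch of a setup.py that respects this variable; the package, extension, and source names are hypothetical, but torch.utils.cpp_extension does honor MAX_JOBS when building with ninja:

```python
# Minimal sketch: capping parallel compilation with MAX_JOBS.
# torch.utils.cpp_extension honors this variable when building with
# ninja; fewer jobs means a slower build but lower peak memory on CI.
import os
from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CUDAExtension

# Equivalent to running `MAX_JOBS=4 python setup.py bdist_wheel`.
os.environ.setdefault("MAX_JOBS", "4")

setup(
    name="flash_attn_wheels_demo",           # hypothetical package name
    ext_modules=[
        CUDAExtension(
            name="flash_attn_cuda_demo",     # hypothetical extension name
            sources=["flash_attn_demo.cu"],  # hypothetical source file
        )
    ],
    cmdclass={"build_ext": BuildExtension},
)
```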
v2.3.5.post3
Set up swap space, use 2 threads with nvcc
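
For the nvcc thread cap introduced in post3 (and dropped again in post5), here is a minimal sketch of passing nvcc's `--threads` flag (available since CUDA 11.2) through `extra_compile_args`; the extension and source names are hypothetical:

```python
# Minimal sketch: limiting nvcc to 2 internal threads per invocation.
# --threads lets nvcc compile for multiple GPU architectures in
# parallel; capping it at 2 trades speed for lower peak memory.
from torch.utils.cpp_extension import CUDAExtension

ext = CUDAExtension(
    name="flash_attn_cuda_demo",     # hypothetical extension name
    sources=["flash_attn_demo.cu"],  # hypothetical source file
    extra_compile_args={
        "cxx": ["-O3"],
        "nvcc": ["-O3", "--threads", "2"],
    },
)
```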
v2.3.5.post2
Only build for cxx11_abi = False
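
cxx11_abi refers to the libstdc++ dual ABI: official PyTorch pip wheels are built with the pre-C++11 ABI, so building only the cxx11_abi=False variant halves the wheel matrix without breaking pip-installed torch. A minimal sketch of pinning the extension's ABI flag to match the installed torch build:

```python
# Minimal sketch: matching the extension's C++ ABI to the installed
# torch build. Pip wheels of torch report False here, which is why
# only the cxx11_abi=False variant is built.
import torch

abi = torch._C._GLIBCXX_USE_CXX11_ABI
flag = f"-D_GLIBCXX_USE_CXX11_ABI={int(abi)}"

extra_compile_args = {"cxx": ["-O3", flag], "nvcc": ["-O3", flag]}
```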
v2.3.5.post1
Update; compile only for CUDA 11.8 and 12.2
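
Narrowing the toolkit matrix to CUDA 11.8 and 12.2 cuts the number of wheel variants to build. A minimal sketch of the kind of matrix filter this implies; the helper is hypothetical, not code from this repo:

```python
# Minimal sketch (hypothetical helper): skip wheel-matrix entries for
# CUDA toolkits these releases no longer target.
SUPPORTED_CUDA = {"11.8", "12.2"}

def should_build(cuda_version: str) -> bool:
    return cuda_version in SUPPORTED_CUDA

assert should_build("11.8") and not should_build("12.1")
```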
v2.0.6.post8
Fix exclude for Python 3.1
v2.0.6.post7
Bump version
v2.0.6.post4
Switch to CUTLASS 3.1, disable nvcc --threads