ci: 👷 release, test-release, dep actions and pyproject.toml added #43
Conversation
Signed-off-by: Onuralp SEZER <thunderbirdtr@gmail.com>

Force-pushed from 805166e to 3e1893f
Current test results:

```
py39: OK (66.17=setup[32.69]+cmd[33.48] seconds)
py310: OK (57.87=setup[26.58]+cmd[31.29] seconds)
py311: OK (55.35=setup[25.66]+cmd[29.69] seconds)
py312: OK (61.36=setup[30.26]+cmd[31.10] seconds)
congratulations :) (240.78 seconds)
```

It also confirms the installation works properly.
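For reference, a minimal sketch of how these results could be reproduced locally; the exact tox environment names are an assumption inferred from the output above:

```bash
# Hypothetical invocation; assumes the project's tox envlist matches the
# py39-py312 results shown above.
pip install tox
tox -e py39,py310,py311,py312
```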
Signed-off-by: Onuralp SEZER <thunderbirdtr@gmail.com>

Force-pushed from e38869b to 4110122
Signed-off-by: Onuralp SEZER <thunderbirdtr@gmail.com>

Force-pushed from 4110122 to 734980d
Hi @SkalskiP 👋, I've completed most of the actions for Maestro and replaced the setup.py and requirements.txt files with a pyproject.toml, leveraging setuptools. Additionally, I wrote actions for both macOS and Ubuntu to ensure that all installations and tests pass. I manually added a FlashAttention wheel for a clean installation in the test CI, because the PyPI version of flash-attention only provides a tarball that requires compilation. This complicates a one-liner installation (pip install maestro), since users would need to either set up a proper build environment or install flash-attention manually alongside Torch.
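For illustration, a rough sketch of the manual install path users would face until flash-attention ships installable wheels on PyPI. This is an assumption, not the project's official instructions; the wheel URL mirrors the one used in the CI below and only fits that specific Python/CUDA/torch combination:

```bash
# Sketch of the manual install path (assumed): torch first, then a prebuilt
# flash-attn wheel matching the Python/CUDA/torch combo, then maestro itself.
pip install torch
pip install https://github.com/Dao-AILab/flash-attention/releases/download/v2.6.3/flash_attn-2.6.3+cu123torch2.4cxx11abiTRUE-cp310-cp310-linux_x86_64.whl
pip install maestro
```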
Signed-off-by: Onuralp SEZER <thunderbirdtr@gmail.com>

Force-pushed from 5183d76 to d1919fc
- 📝 Added a new CHANGELOG.md file to document changes.
- 📝 Updated README.md with new information and corrections.
- 🔧 Modified pyproject.toml to update project URLs and correct the package excludes.
- 🔧 Added package keywords to pyproject.toml.

Signed-off-by: Onuralp SEZER <thunderbirdtr@gmail.com>
Wheel package content list:

```
adding 'maestro/__init__.py'
```
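A quick way to double-check the wheel contents locally; these commands are an assumed local check, not part of this PR:

```bash
# Build the wheel and list its contents (hypothetical local verification).
pip install build
python -m build --wheel
unzip -l dist/maestro-*.whl
```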
```bash
pip install torch
if [[ "${{ matrix.os }}" == "ubuntu-latest" ]]; then
  if [[ "${{ matrix.python-version }}" == "3.10" ]]; then
    pip install https://github.com/Dao-AILab/flash-attention/releases/download/v2.6.3/flash_attn-2.6.3+cu123torch2.4cxx11abiTRUE-cp310-cp310-linux_x86_64.whl
  fi
fi
```
Will this break when torch is bumped to the next version, given that the flash-attn wheel is pinned to torch 2.4?
Yes, we'd need to update it here, or I can pin PyTorch here as well and we can bump both whenever we want. But the source installation was breaking in the action; I checked the flash-attention documentation, and it requires other packages and the CUDA toolkit to be present, and the pre-built wheels are locked to specific versions, as you can see in the wheel filename.
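As a hedge, the pinning could look something like this; a sketch only, assuming we want torch held at the 2.4 build the wheel filename advertises:

```bash
# Hypothetical pin: keep torch on the version the prebuilt flash-attn wheel
# targets (torch 2.4 per the filename), so a torch bump can't break the install.
pip install "torch==2.4.*"
pip install https://github.com/Dao-AILab/flash-attention/releases/download/v2.6.3/flash_attn-2.6.3+cu123torch2.4cxx11abiTRUE-cp310-cp310-linux_x86_64.whl
```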
Description