
Replace readme paths with urls #1097

Merged
merged 2 commits into from
Jan 29, 2025
Conversation

kylesayrs
Collaborator

@kylesayrs kylesayrs commented Jan 24, 2025

Purpose

  • Files with the .md extension are not listed in MANIFEST.in, so they are not included in the LLM Compressor PyPI package. As a result, references to these files are left dangling for users who installed from PyPI. Rather than including .md files in the package and also having to ship the large image files associated with them, this PR replaces the references with URLs hosted on GitHub
    • While the GitHub URL paths may change between versions, this solution works in lieu of a dedicated Read the Docs build for each version
    • This solution also aligns with the practice of other libraries, which point to hosted URLs rather than file paths
  • Note that this does not apply to references within files which are themselves .md files, as those files will not be included in the PyPI distribution anyway
    • src/llmcompressor/transformers/finetune/README.md
    • src/llmcompressor/pipelines/sequential/README.md
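The kind of replacement described above can be illustrated with a minimal sketch. The file path below (examples/README.md) is a hypothetical example, not necessarily one of the references changed in this PR:

```python
# Hypothetical illustration of the change: a repo-relative path that
# dangles in a PyPI install is replaced by a GitHub-hosted URL, which
# resolves regardless of how llm-compressor was installed.

before = "See examples/README.md for usage"
after = (
    "See https://github.com/vllm-project/llm-compressor"
    "/blob/main/examples/README.md for usage"
)

# The relative path breaks when .md files are excluded by MANIFEST.in;
# the URL works even though no .md files ship in the package.
print(after)
```

Note the trade-off mentioned above: the URL pins to a branch path that may drift between releases, which a versioned docs build would avoid.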

Changes

  • Replace README file paths with URLs
  • Small change to DisableQuantization so that quantization is reliably re-enabled even when exceptions, such as tracing exceptions, are raised inside the context

Testing

  • N/A

Signed-off-by: Kyle Sayers <kylesayrs@gmail.com>

👋 Hi! Thank you for contributing to llm-compressor. Please add the ready label when the PR is ready for review.

Note: This is required to complete the testing suite; please only add the label once the PR is code complete and local testing has been performed.

Signed-off-by: Kyle Sayers <kylesayrs@gmail.com>
@kylesayrs kylesayrs requested a review from rahul-tuli January 24, 2025 20:00
@kylesayrs kylesayrs added the ready When a PR is ready for review label Jan 24, 2025
try:
    model.apply(disable_quantization)
    yield
finally:
    model.apply(enable_quantization)
Collaborator
model.apply(enable_quantization) will always run regardless of whether the try block succeeds or fails. Is this intended?

Also, if it fails I would add a log.

Collaborator Author

model.apply(enable_quantization) will always run regardless of whether the try block succeeds or fails. Is this intended?

Yes, that is exactly the intention of this change

Also, if it fails I would add a log.

If an exception is raised, the user will see the raised exception

Collaborator
Sorry, why do we want to run model.apply(enable_quantization) if an exception is raised? Wouldn't it just exit?

And as long as the exception contains the info we can get from this function, it should be OK, e.g. logging the input args that caused the error.

Collaborator Author
@kylesayrs kylesayrs Jan 27, 2025

Sorry, why do we want to run model.apply(enable_quantization) if an exception is raised? Wouldn't it just exit?

For example, if the sequential pipeline fails, an exception will be thrown and then caught in order to fall back to the layer-wise sequential pipeline. This is an example of where an exception is raised but caught, and I do not want to have to think about the weird interactions that would result from this context not being properly exited.

This "try finally" pattern is, in general, good practice for contexts. It allows the user to not have to worry about whether the context was exited properly, reducing mental load.
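The pattern under discussion can be sketched as a minimal, self-contained example. ToyModule and the enable/disable helper functions below are stand-ins for illustration, not the actual llm-compressor implementation:

```python
from contextlib import contextmanager

class ToyModule:
    """Stand-in for a model; tracks a quantization flag."""
    def __init__(self):
        self.quantization_enabled = True

    def apply(self, fn):
        fn(self)

def disable_quantization(mod):
    mod.quantization_enabled = False

def enable_quantization(mod):
    mod.quantization_enabled = True

@contextmanager
def DisableQuantization(model):
    try:
        model.apply(disable_quantization)
        yield
    finally:
        # Runs whether the body succeeds or raises, so quantization is
        # re-enabled even when a caller catches the exception and falls
        # back to another pipeline.
        model.apply(enable_quantization)

model = ToyModule()
try:
    with DisableQuantization(model):
        assert not model.quantization_enabled
        raise RuntimeError("simulated tracing failure")
except RuntimeError:
    pass
assert model.quantization_enabled  # restored despite the exception
```

Without the finally clause, the simulated failure above would leave quantization disabled even after the caller recovered from the exception.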

Collaborator

Could we move this code change to a separate PR? Ideally I would like the scope of this PR to be limited to updating README paths.

@kylesayrs kylesayrs self-assigned this Jan 27, 2025
@mgoin mgoin merged commit a76563a into main Jan 29, 2025
8 checks passed
@mgoin mgoin deleted the kylesayrs/update-guide-pointers branch January 29, 2025 23:12
Labels
ready When a PR is ready for review
4 participants