
Implement aliasable mixin and alias activation ordering #213

Merged
merged 5 commits into main from kylesayrs/actorder-aliases
Nov 29, 2024

Conversation

kylesayrs
Contributor

@kylesayrs kylesayrs commented Nov 25, 2024

Purpose

  • Add aliases for activation ordering options to improve usability for researchers familiar with autogptq

Changes

  • Implement an AliasableEnum mixin that allows enum members to be aliased
  • Add AliasableEnum to ActivationOrdering with the following alias map:
{
    "dynamic": "group",
    "static": "weight",
}
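For reference, a minimal sketch of what such a mixin could look like. This is illustrative only and may differ in structure from the merged implementation; equality and hashing resolve each member's value through the alias map before comparing:

```python
from enum import Enum


class AliasableEnum(Enum):
    """Sketch of an aliasable enum mixin: members whose values appear in
    `aliases` compare equal to the member holding the canonical value."""

    @classmethod
    def aliases(cls) -> dict:
        return {}

    def _canonical(self) -> str:
        # Resolve an alias value to its canonical value
        return self.aliases().get(self.value, self.value)

    def __eq__(self, other) -> bool:
        other_value = other.value if isinstance(other, Enum) else str(other)
        return self._canonical() == self.aliases().get(other_value, other_value)

    def __hash__(self) -> int:
        # Hash the canonical value so aliased members hash identically
        return hash(self._canonical())


class ActivationOrdering(AliasableEnum):
    WEIGHT = "weight"
    GROUP = "group"
    DYNAMIC = "dynamic"  # alias of GROUP
    STATIC = "static"    # alias of WEIGHT

    @classmethod
    def aliases(cls) -> dict:
        return {"dynamic": "group", "static": "weight"}
```

With this sketch, `ActivationOrdering.DYNAMIC == ActivationOrdering.GROUP` and `ActivationOrdering.STATIC == "weight"` both hold, which is the usability goal described above.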

Testing

  • Added passing tests in tests/test_quantization/test_quant_args.py
  • Successfully quantized a model using dynamic actorder and tested end-to-end with vLLM
llama3.py
from accelerate import cpu_offload
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.transformers import oneshot

# Select model and load it.
# MODEL_ID = "meta-llama/Meta-Llama-3-8B-Instruct"
MODEL_ID = "meta-llama/Llama-3.2-1B-Instruct"

model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    device_map="cuda:0",
    torch_dtype="auto",
)
# cpu_offload(model)
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# Select calibration dataset.
DATASET_ID = "HuggingFaceH4/ultrachat_200k"
DATASET_SPLIT = "train_sft"

# Select number of samples. 512 samples is a good place to start.
# Increasing the number of samples can improve accuracy.
NUM_CALIBRATION_SAMPLES = 285  # 2048
MAX_SEQUENCE_LENGTH = 2048

# Load dataset and preprocess.
ds = load_dataset(DATASET_ID, split=DATASET_SPLIT)
ds = ds.shuffle(seed=42).select(range(NUM_CALIBRATION_SAMPLES))


def preprocess(example):
    return {
        "text": tokenizer.apply_chat_template(
            example["messages"],
            tokenize=False,
        )
    }


ds = ds.map(preprocess)


# Tokenize inputs.
def tokenize(sample):
    return tokenizer(
        sample["text"],
        padding=False,
        max_length=MAX_SEQUENCE_LENGTH,
        truncation=True,
        add_special_tokens=False,
    )


ds = ds.map(tokenize, remove_columns=ds.column_names)

# Configure the quantization algorithm to run.
from compressed_tensors.quantization import (
    ActivationOrdering,
    QuantizationArgs,
    QuantizationScheme,
    QuantizationStrategy,
    QuantizationType,
)
recipe = GPTQModifier(
    targets="Linear",
    config_groups={
        "config_group": QuantizationScheme(
            targets=["Linear"],
            weights=QuantizationArgs(
                num_bits=4,
                type=QuantizationType.INT,
                strategy=QuantizationStrategy.GROUP,
                group_size=128,
                symmetric=True,
                dynamic=False,
                actorder="dynamic",
            ),
        ),
    },
    ignore=["lm_head"],
    dampening_frac=0.5
)

# Apply algorithms.
oneshot(
    model=model,
    dataset=ds,
    recipe=recipe,
    max_seq_length=MAX_SEQUENCE_LENGTH,
    num_calibration_samples=NUM_CALIBRATION_SAMPLES,
)

# Confirm generations of the quantized model look sane.
print("\n\n")
print("========== SAMPLE GENERATION ==============")
input_ids = tokenizer("Hello my name is", return_tensors="pt").input_ids.to("cuda")
output = model.generate(input_ids, max_new_tokens=100)
print(tokenizer.decode(output[0]))
print("==========================================\n\n")

# Save to disk compressed.
SAVE_DIR = MODEL_ID.split("/")[1] + "-W4A16-G128"
model.save_pretrained(SAVE_DIR, save_compressed=True)
tokenizer.save_pretrained(SAVE_DIR)

Signed-off-by: Kyle Sayers <kylesayrs@gmail.com>
@kylesayrs kylesayrs self-assigned this Nov 26, 2024
@horheynm
Member

LGTM.
For aliases on an enum, is this the only way? It looks pretty complicated, with setting hashes for dicts.

horheynm previously approved these changes Nov 26, 2024
@kylesayrs
Contributor Author

@horheynm To support comparisons natively, at least these two methods (`__eq__` and `__hash__`) must be overloaded
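As a hedged illustration of why both methods go together (the class and names below are hypothetical, not the PR's code): Python sets `__hash__` to None on any class that defines `__eq__`, so alias-aware equality must be paired with an alias-aware hash for members to behave consistently as set elements and dict keys:

```python
# Hypothetical alias map, mirroring the one described in this PR
ALIASES = {"dynamic": "group", "static": "weight"}


class Ordering:
    def __init__(self, value: str):
        self.value = value

    def _canonical(self) -> str:
        return ALIASES.get(self.value, self.value)

    def __eq__(self, other) -> bool:
        other_value = other.value if isinstance(other, Ordering) else str(other)
        return self._canonical() == ALIASES.get(other_value, other_value)

    def __hash__(self) -> int:
        # Without this, defining __eq__ alone would make instances unhashable;
        # hashing the canonical value keeps aliased members in the same bucket
        return hash(self._canonical())


assert Ordering("dynamic") == Ordering("group")
assert Ordering("dynamic") in {Ordering("group")}  # requires matching hashes
```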

Member

@rahul-tuli rahul-tuli left a comment


LGTM, pending response to comments!

Review comment on src/compressed_tensors/utils/helpers.py (outdated, resolved)
Signed-off-by: Kyle Sayers <kylesayrs@gmail.com>
Contributor

@dsikka dsikka left a comment


This LGTM. It seems like our changes to quant args are backwards compatible, but I just wanted to confirm that this is the case?

@kylesayrs
Contributor Author

@dsikka Yes, I am confident that any quantization config produced prior to these changes will be compatible with this new definition.

@dsikka dsikka merged commit 724d5ce into main Nov 29, 2024
1 check passed
@dsikka dsikka deleted the kylesayrs/actorder-aliases branch November 29, 2024 18:07
dsikka added a commit that referenced this pull request Nov 29, 2024
@dsikka dsikka restored the kylesayrs/actorder-aliases branch November 29, 2024 20:20
dsikka added a commit that referenced this pull request Nov 29, 2024
@dsikka
Contributor

dsikka commented Nov 29, 2024

This is causing llm-compressor tests to fail: https://github.com/vllm-project/llm-compressor/actions/runs/12090087246/job/33716500170?pr=945
It has been reverted: #217

@kylesayrs
Contributor Author

The failing tests pass locally with this branch, which points to the issue being environment-related; it may be a bug present in specific pydantic versions

@kylesayrs
Contributor Author

kylesayrs commented Nov 29, 2024

I was able to replicate this on Python 3.9

Python package environment:
$ python3 -m pip list
Package                   Version     Editable project location
------------------------- ----------- --------------------------------
annotated-types           0.7.0
attrs                     24.2.0
beautifulsoup4            4.12.3
black                     22.12.0
bleach                    6.2.0
certifi                   2024.8.30
charset-normalizer        3.4.0
click                     8.1.7
compressed-tensors        0.8.0       /home/ksayers/compressed-tensors
defusedxml                0.7.1
exceptiongroup            1.2.2
fastjsonschema            2.21.0
filelock                  3.16.1
flake8                    7.1.1
fsspec                    2024.10.0
huggingface-hub           0.26.3
idna                      3.10
importlib_metadata        8.5.0
iniconfig                 2.0.0
isort                     5.8.0
Jinja2                    3.1.4
jsonschema                4.23.0
jsonschema-specifications 2024.10.1
jupyter_client            8.6.3
jupyter_core              5.7.2
jupyterlab_pygments       0.3.0
MarkupSafe                3.0.2
mccabe                    0.7.0
mistune                   3.0.2
mpmath                    1.3.0
mypy-extensions           1.0.0
nbclient                  0.10.1
nbconvert                 7.16.4
nbformat                  5.10.4
networkx                  3.2.1
numpy                     2.0.2
nvidia-cublas-cu12        12.4.5.8
nvidia-cuda-cupti-cu12    12.4.127
nvidia-cuda-nvrtc-cu12    12.4.127
nvidia-cuda-runtime-cu12  12.4.127
nvidia-cudnn-cu12         9.1.0.70
nvidia-cufft-cu12         11.2.1.3
nvidia-curand-cu12        10.3.5.147
nvidia-cusolver-cu12      11.6.1.9
nvidia-cusparse-cu12      12.3.1.170
nvidia-nccl-cu12          2.21.5
nvidia-nvjitlink-cu12     12.4.127
nvidia-nvtx-cu12          12.4.127
packaging                 24.2
pandocfilters             1.5.1
pathspec                  0.12.1
pip                       23.0.1
platformdirs              4.3.6
pluggy                    1.5.0
pycodestyle               2.12.1
pydantic                  2.10.2
pydantic_core             2.27.1
pyflakes                  3.2.0
Pygments                  2.18.0
pytest                    8.3.3
python-dateutil           2.9.0.post0
PyYAML                    6.0.2
pyzmq                     26.2.0
referencing               0.35.1
regex                     2024.11.6
requests                  2.32.3
rpds-py                   0.21.0
safetensors               0.4.5
setuptools                58.1.0
six                       1.16.0
soupsieve                 2.6
sympy                     1.13.1
tinycss2                  1.4.0
tokenizers                0.20.3
tomli                     2.2.1
torch                     2.5.1
tornado                   6.4.2
tqdm                      4.67.1
traitlets                 5.14.3
transformers              4.46.3
triton                    3.1.0
typing_extensions         4.12.2
urllib3                   2.2.3
webencodings              0.5.1
wheel                     0.45.1
zipp                      3.21.0
Python 3.9.20 (main, Sep  7 2024, 18:35:25) 
[GCC 11.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from compressed_tensors.quantization import ActivationOrdering
>>> ActivationOrdering.DYNAMIC == ActivationOrdering.GROUP
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/ksayers/compressed-tensors/src/compressed_tensors/utils/helpers.py", line 144, in __eq__
    self.aliases.get(self.value, self.value)
TypeError: 'staticmethod' object is not callable

Potentially related to https://bugs.python.org/issue43682
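A minimal reduction of the failure (the class below is hypothetical, not the actual compressed-tensors code): bpo-43682 made staticmethod objects directly callable in Python 3.10, so on 3.9 a staticmethod retrieved without going through the descriptor protocol raises exactly this TypeError:

```python
import sys


class Aliases:
    @staticmethod
    def mapping():
        return {"dynamic": "group", "static": "weight"}


# Attribute access goes through the descriptor protocol and works on all versions
assert Aliases.mapping()["dynamic"] == "group"

# Grabbing the staticmethod object out of __dict__ bypasses the descriptor;
# calling it directly only works on Python >= 3.10 (bpo-43682)
raw = Aliases.__dict__["mapping"]
if sys.version_info < (3, 10):
    try:
        raw()
    except TypeError as err:
        print(err)  # 'staticmethod' object is not callable
else:
    assert raw()["static"] == "weight"
```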
