Serialize Config from Model #7
Merged
+234 −19
14 commits
9aae8e8  Apply quantization config implementation
8465015  add TODO
24e04b6  integrate full lifecycle support, QuantizationStatus updates, add tin…
b5a07c4  fix comment
7142a71  initial implementation
23e9ae8  add unit test
dd77890  Merge branch 'main' into serialize_config
b9c9530  cleanup is_quantized
845bfb9  clean up targets and ignore lists
1a7984c  global compression ratio and docstrings
faa93c9  make sure scale/zp on correct device
caeab7d  helper for model quantization
e7e6f43  Merge branch 'fix_device_mismatch' into serialize_config
ec2ef84  Merge branch 'main' into serialize_config
bfineran
@@ -0,0 +1,16 @@
# Copyright (c) 2021 - present / Neuralmagic, Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# flake8: noqa
from .helpers import *
@@ -0,0 +1,117 @@
# Copyright (c) 2021 - present / Neuralmagic, Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from typing import Generator, Tuple

import torch
from torch.nn import Module
from tqdm import tqdm


__all__ = [
    "is_module_quantized",
    "is_model_quantized",
    "iter_named_leaf_modules",
    "module_type",
    "calculate_compression_ratio",
]


def is_module_quantized(module: Module) -> bool:
    """
    Check if a module is quantized, based on the existence of a non-empty
    quantization scheme

    :param module: pytorch module to check
    :return: True if module is quantized, False otherwise
    """
    if not hasattr(module, "quantization_scheme"):
        return False

    if module.quantization_scheme.weights is not None:
        return True

    if module.quantization_scheme.input_activations is not None:
        return True

    if module.quantization_scheme.output_activations is not None:
        return True

    return False


def is_model_quantized(model: Module) -> bool:
    """
    Check if any modules in a model are quantized, based on the existence of a
    non-empty quantization scheme in at least one module

    :param model: pytorch model
    :return: True if model is quantized, False otherwise
    """
    for _, submodule in iter_named_leaf_modules(model):
        if is_module_quantized(submodule):
            return True

    return False


def module_type(module: Module) -> str:
    """
    Gets a string representation of a module type

    :param module: pytorch module to get type of
    :return: module type as a string
    """
    return type(module).__name__


def iter_named_leaf_modules(
    model: Module,
) -> Generator[Tuple[str, Module], None, None]:
    # yields modules that do not have any submodules
    # TODO: potentially expand to add list of allowed submodules such as observers
    for name, submodule in model.named_modules():
        if len(list(submodule.children())) == 0:
            yield name, submodule


def calculate_compression_ratio(model: Module) -> float:
    """
    Calculates the quantization compression ratio of a pytorch model, based on the
    number of bits needed to represent the total weights in compressed form. Does not
    take into account activation quantizations.

    :param model: pytorch module to calculate compression ratio for
    :return: compression ratio of the whole model
    """
    total_compressed = 0.0
    total_uncompressed = 0.0
    for _, submodule in tqdm(
        iter_named_leaf_modules(model),
        desc="Calculating quantization compression ratio",
    ):
        # iterate the submodule's own parameters, not the whole model's,
        # so each weight is counted exactly once
        for parameter in submodule.parameters():
            try:
                uncompressed_bits = torch.finfo(parameter.dtype).bits
            except TypeError:
                uncompressed_bits = torch.iinfo(parameter.dtype).bits
            compressed_bits = uncompressed_bits
            if is_module_quantized(submodule):
                compressed_bits = submodule.quantization_scheme.weights.num_bits
            num_weights = parameter.numel()
            total_compressed += compressed_bits * num_weights
            total_uncompressed += uncompressed_bits * num_weights

    return total_uncompressed / total_compressed
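As a sanity check on the ratio computed above, a quick back-of-the-envelope calculation in plain Python (no torch needed; the layer sizes are made up for illustration): a model whose weights are quantized from fp32 (32 bits) to 4 bits should report a ratio close to 8x, pulled slightly below 8 by any parameters left in fp32.

```python
# (num_weights, uncompressed_bits, compressed_bits) per leaf module - hypothetical sizes
layers = [
    (1024 * 1024, 32, 4),   # quantized linear layer weights, fp32 -> 4-bit
    (1024, 32, 32),         # unquantized bias/norm parameters stay fp32
]

# same accumulation as calculate_compression_ratio, without the model traversal
total_uncompressed = sum(n * u for n, u, _ in layers)
total_compressed = sum(n * c for n, _, c in layers)
ratio = total_uncompressed / total_compressed
# ratio is approximately 7.95: just under the ideal 8x because the fp32
# parameters contribute the same bit count to both totals
```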
See TODO comment about allowing for exceptions in leaf nodes for observers. This will be relevant for non-frozen quantized models.
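The TODO the reviewer points at could be addressed by treating a module as a leaf when its only children are observers. A minimal sketch of that idea, using hypothetical `FakeModule`/`FakeObserver` stand-ins for `torch.nn.Module` and observer classes so it runs without torch; the name-based `"Observer"` check is just one possible filter, not the project's actual API:

```python
from typing import Iterator, Tuple


class FakeModule:
    """Minimal stand-in for torch.nn.Module (hypothetical, for illustration)."""

    def __init__(self, **children):
        self._children = children

    def children(self):
        return list(self._children.values())

    def named_modules(self, prefix=""):
        yield prefix, self
        for name, child in self._children.items():
            full = f"{prefix}.{name}" if prefix else name
            yield from child.named_modules(full)


class FakeObserver(FakeModule):
    """Stand-in for a quantization observer attached to a layer."""


def iter_named_leaf_modules(model) -> Iterator[Tuple[str, object]]:
    # a module counts as a leaf if all of its children are observers;
    # observers themselves are attachments, not leaves
    for name, submodule in model.named_modules():
        if "Observer" in type(submodule).__name__:
            continue
        non_observers = [
            c for c in submodule.children()
            if "Observer" not in type(c).__name__
        ]
        if not non_observers:
            yield name, submodule


model = FakeModule(
    linear=FakeModule(weight_observer=FakeObserver()),
    act=FakeModule(),
)
leaves = [name for name, _ in iter_named_leaf_modules(model)]
# "linear" counts as a leaf even though it still holds an observer child
```

With the original strict `len(children()) == 0` rule, a non-frozen quantized layer would be skipped (it has an observer child) while the observer itself would be yielded, which is the situation the TODO describes.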