Update compute_type_is_set attribute for Linear4bit #1623
Merged
+1 −1
Issue: If the input to Linear4bit's forward is torch.float32 and compute_dtype is set to torch.bfloat16, the matmul executes in torch.float32 instead of the requested torch.bfloat16. This reproduces on CPU and HPU (Intel Gaudi).
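A minimal reproducer sketch, assuming the standard bnb.nn.Linear4bit constructor; the layer sizes and device setup are illustrative and backend details for CPU/HPU may vary:

```python
import torch
import bitsandbytes as bnb

# A 4-bit linear layer with an explicitly requested compute dtype.
layer = bnb.nn.Linear4bit(64, 64, compute_dtype=torch.bfloat16)
layer = layer.to("cpu")  # quantizes the weights; "hpu" works analogously

# Feed a float32 input: before this PR, the first forward call overwrote
# compute_dtype with the input's dtype, so the matmul ran in float32.
x = torch.randn(1, 64, dtype=torch.float32)
out = layer(x)
```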
Fix: During initialization, compute_type_is_set is unconditionally set to False, so on the first forward pass compute_dtype is overwritten with the input's dtype, even when the user passed compute_dtype explicitly. Initializing compute_type_is_set as updated in this PR resolves the issue (and lets us drop unnecessary casting operations).
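A toy sketch (not the bitsandbytes code) of how the flag gates the dtype inference in forward; the Linear4bitSketch class is hypothetical and only models the flag logic:

```python
import torch

class Linear4bitSketch(torch.nn.Module):
    """Toy stand-in (not the bitsandbytes class) showing the flag's role."""

    def __init__(self, compute_dtype=None):
        super().__init__()
        self.compute_dtype = compute_dtype
        # Before the PR: always False, so the first forward() call below
        # overwrites compute_dtype with the input's dtype.
        # After the PR: True whenever compute_dtype was passed explicitly.
        self.compute_type_is_set = compute_dtype is not None

    def set_compute_type(self, x):
        # Models the inference step: pick the compute dtype from the input.
        self.compute_dtype = x.dtype

    def forward(self, x):
        if not self.compute_type_is_set:
            self.set_compute_type(x)
            self.compute_type_is_set = True
        return x.to(self.compute_dtype)

m = Linear4bitSketch(compute_dtype=torch.bfloat16)
print(m(torch.randn(2, 2)).dtype)  # bfloat16: the explicit setting survives
```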
Details (a small runnable sketch contrasting the two cases follows the list):
Case I: without this change
a) First, the weights are dequantized; the output is in bfloat16.
b) Then the dequantized weights are cast to the input's dtype (float32).
c) Finally, torch.nn.functional.linear runs with both operands in float32, ignoring compute_dtype.
Case II: with this change
a) First, the weights are dequantized; the output is in bfloat16.
b) Then the dequantized weights are cast to the input's dtype; with the fix, the input has already been cast to compute_dtype (bfloat16), so this cast is a no-op.
c) Finally, torch.nn.functional.linear runs with both the input and the weights in bfloat16.
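A standalone sketch of the two cases using plain torch ops (not the library code); deq_weight stands in for the dequantized 4-bit weights:

```python
import torch
import torch.nn.functional as F

# Stand-ins: a float32 activation and already-dequantized bfloat16 weights.
x = torch.randn(1, 64, dtype=torch.float32)
deq_weight = torch.randn(64, 64, dtype=torch.bfloat16)

# Case I (without the fix): compute_dtype was reset to the input dtype, the
# weights are cast up to float32, and the matmul runs in float32.
out_i = F.linear(x, deq_weight.to(x.dtype))
assert out_i.dtype == torch.float32

# Case II (with the fix): the input is first cast to the requested
# compute_dtype, the weight cast is then a no-op, and the matmul runs
# in bfloat16.
x_bf16 = x.to(torch.bfloat16)
out_ii = F.linear(x_bf16, deq_weight.to(x_bf16.dtype))
assert out_ii.dtype == torch.bfloat16
```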