Replies: 9 comments
-
Hi, thanks for your question. I must say, I'm not sure I follow what you mean by "convert from the original weights and the integer weight". Could you elaborate a bit more?
-
What kind of conversion are you trying to achieve?
-
From …
-
When you call …
-
So my question is about a function to convert from …
-
Convert from … Might I ask what you're trying to achieve with this function? That could help me answer your question better.
-
Essentially I want to do federated learning (FL) with Brevitas models, but the averaging step needs to be done in integers (with rounding, of course). Then I want to evaluate using pure integer values. Finally, I will convert the averaged model back to float weights so I can load it into the PyTorch model and train again in the next round. Is this possible?
-
It depends on what averaging you are doing. If the scale factors of the integer tensors you are averaging are all the same, then you can apply the dequantization formula to go back to floating point and load that back into PyTorch:

deq_w = (quant_w - zero_point) * scale

`scale` and `zero_point` are quantization metadata that you can find within the QuantTensor. If the scale factors of the tensors you are averaging are different, then I'm not entirely sure how to proceed.
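To make the suggestion above concrete, here is a minimal sketch of the two steps: averaging integer weights with rounding, then dequantizing the result with the shared `scale` and `zero_point`. This is plain Python for illustration, not a Brevitas API; in practice you would read `scale` and `zero_point` from the QuantTensor and operate on torch tensors.

```python
def average_int_weights(client_weights):
    """Average per-client integer weights elementwise, with rounding,
    staying entirely in the integer domain."""
    n = len(client_weights)
    return [round(sum(ws) / n) for ws in zip(*client_weights)]

def dequantize(quant_w, scale, zero_point):
    """Map integer weights back to floating point:
    deq_w = (quant_w - zero_point) * scale."""
    return [(q - zero_point) * scale for q in quant_w]

# Example: two clients sharing scale 0.1 and zero_point 0.
clients = [[10, -4, 7], [12, -2, 9]]
avg_int = average_int_weights(clients)   # [11, -3, 8]
avg_float = dequantize(avg_int, 0.1, 0)  # approximately [1.1, -0.3, 0.8]
```

The key constraint, as noted above, is that this only works when every client's tensor shares the same quantization metadata; otherwise the integer values are not directly comparable.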
-
I have this model
After training the network, if I do
I will get
However, I want a way to convert between the original floating-point weight, e.g. model.fc1.weight, and the integer weight (model.fc1.quant_weight().int()). How would I do so?
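The two directions of that conversion can be sketched with the standard per-tensor affine quantization formulas. This is a hypothetical illustration, not a Brevitas API: it assumes a single scale `s` and zero point `z` (which in Brevitas you would read from the QuantTensor returned by `quant_weight()`).

```python
def float_to_int(w, s, z):
    """Quantize: round(w / s) + z gives the integer representation,
    analogous to what quant_weight().int() exposes."""
    return [round(x / s) + z for x in w]

def int_to_float(q, s, z):
    """Dequantize: (q - z) * s recovers an approximation of the
    original floating-point weight (e.g. model.fc1.weight)."""
    return [(x - z) * s for x in q]

# Example with an assumed scale of 0.05 and zero point 0.
w = [0.31, -0.12, 0.07]
q = float_to_int(w, 0.05, 0)       # integer weights
w_back = int_to_float(q, 0.05, 0)  # approximation of the original weights
```

Note that the round trip is lossy: `w_back` differs from `w` by up to half a scale step per element, which is exactly the quantization error.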