
NM Transformers v1.3

Pre-release
@bfineran released this 04 Nov 15:44 · 3230 commits to main since this release · 46f78c1
Upgrade to transformers release v4.23.1 (#62)

* Update trainer and model flows to accommodate sparseml

Disable FP16 on QAT start (#12)

* Override LRScheduler when using LRModifiers

* Disable FP16 on QAT start

* Keep the wrapped scaler object for training after disabling (sketched below)
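
For context on #12: when a SparseML recipe's quantization modifier activates QAT, fake-quantization observers go into the graph, and FP16 grad scaling conflicts with them, so training drops to full precision at that point while keeping the scaler object alive for the rest of the run. A minimal sketch of that pattern, assuming PyTorch's `GradScaler`; the wrapper class and method layout are illustrative, not the fork's exact code:

```python
import torch
from torch.cuda.amp import GradScaler

class DisableableScaler:
    """Wraps a GradScaler so scaling can be switched off at QAT start
    while later training code can still call the same methods (sketch)."""

    def __init__(self):
        self.scaler = GradScaler()  # kept around even after disabling
        self.enabled = True

    def disable(self):
        # Keep the wrapped scaler object so code written against this
        # interface keeps working; calls just become passthroughs.
        self.enabled = False

    def scale(self, loss):
        return self.scaler.scale(loss) if self.enabled else loss

    def step(self, optimizer):
        if self.enabled:
            self.scaler.step(optimizer)
        else:
            optimizer.step()

    def update(self):
        if self.enabled:
            self.scaler.update()
```

At the step where the recipe starts QAT, the trainer would call `disable()` once and continue in FP32 without touching any other call sites.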

Using QATMatMul in DistilBERT model class (#41)
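
For #41: DistilBERT's attention multiplies two activation tensors, so there is no weighted module for quantization to hook into; routing the op through a small `Module` makes it observable during QAT. A sketch of the shape of such a class; the `wrap_qat` marker convention is my recollection of SparseML's approach and should be read as an assumption:

```python
import torch

class QATMatMul(torch.nn.Module):
    """Lifts a bare activation-activation matmul into a Module so a
    quantization modifier can attach observers to it (sketch)."""

    # assumed SparseML convention: mark this module for QAT wrapping
    wrap_qat = True
    qat_wrapper_kwargs = {"num_inputs": 2}

    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        return torch.matmul(a, b)
```

In the model class, a call like `torch.matmul(query, key.transpose(-1, -2))` then becomes something like `self.attention_scores_matmul(query, key.transpose(-1, -2))`.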

Removed double quantization of the context layer output (#45)

Fix DataParallel validation forward signatures (#47)

* Fix: DataParallel validation forward signatures

* Update: generalize forward_fn selection (sketched below)
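
For #47: `torch.nn.DataParallel.forward` has the generic signature `(*inputs, **kwargs)`, so validation code that inspects `forward` to decide which batch keys to pass breaks once the model is wrapped. A sketch of the generalized selection; the helper names are hypothetical:

```python
import inspect
import torch

def signature_forward_fn(model: torch.nn.Module):
    # DataParallel's own forward hides the real argument names, so
    # inspect the wrapped module's forward instead.
    if isinstance(model, torch.nn.DataParallel):
        return model.module.forward
    return model.forward

def filter_inputs(model: torch.nn.Module, batch: dict) -> dict:
    # Drop batch entries the model does not accept; the actual call
    # still goes through `model` so DataParallel can scatter inputs.
    accepted = inspect.signature(signature_forward_fn(model)).parameters
    return {name: value for name, value in batch.items() if name in accepted}
```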

Best model after epoch (#46)

Fix scaler check for non-FP16 mode in trainer (#38)
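
For #38: the trainer only builds a grad scaler when FP16 is enabled, so an unconditional `scaler` call fails in full-precision runs. The fix is a guard along these lines, with assumed attribute names:

```python
def backward_step(trainer, loss):
    # Route the loss through the scaler only when AMP/FP16 is active;
    # in FP32 mode the scaler may be None or disabled.
    if getattr(trainer, "use_amp", False) and getattr(trainer, "scaler", None):
        trainer.scaler.scale(loss).backward()
    else:
        loss.backward()
```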

MobileBERT QAT (#55)

* Remove duplicate quantization of vocabulary.

Enable a QATWrapper for non-parameterized matmuls in BERT self-attention (#9); a sketch follows below

* Utils and auxiliary changes
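
For #9: rather than editing every call site, a generic wrapper can fake-quantize the inputs of any non-parameterized op, here the attention-score and context matmuls in BERT self-attention. An illustrative sketch, not the fork's exact class:

```python
import torch
from torch.quantization import FakeQuantize

class QATWrapper(torch.nn.Module):
    """Fake-quantizes the inputs of a weight-free op such as an
    attention matmul during QAT (illustrative sketch)."""

    def __init__(self, forward_fn, num_inputs: int = 2):
        super().__init__()
        self.forward_fn = forward_fn
        self.input_fake_quants = torch.nn.ModuleList(
            FakeQuantize() for _ in range(num_inputs)
        )

    def forward(self, *inputs: torch.Tensor) -> torch.Tensor:
        quantized = (fq(x) for fq, x in zip(self.input_fake_quants, inputs))
        return self.forward_fn(*quantized)

# e.g.: scores = QATWrapper(torch.matmul)(query, key.transpose(-1, -2))
```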

Update Zoo stub loading for the SparseZoo 1.1 refactor (#54)
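
For #54: the SparseZoo 1.1 refactor moved stub resolution behind a `Model` object that downloads artifacts lazily. A hedged sketch from my reading of the 1.1 API; the stub string is a placeholder, not a real model:

```python
from sparsezoo import Model

# Placeholder stub; a real one identifies a model in the SparseZoo.
stub = "zoo:domain/sub_domain/architecture/..."

zoo_model = Model(stub)      # resolve the stub against the zoo
local_path = zoo_model.path  # download (if needed) and return the local directory
print(local_path)
```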

Add flag to signal that the NM integration is active (#32)
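
For #32: downstream tooling such as SparseML needs a cheap way to tell this fork apart from upstream transformers, and a module-level flag does that. A sketch of the detection side; the flag name is illustrative, not necessarily what the fork exports:

```python
# Feature-detect the fork without parsing version strings.
try:
    from transformers import NM_INTEGRATED  # assumed flag set by the fork
    nm_active = bool(NM_INTEGRATED)
except ImportError:
    nm_active = False  # plain upstream transformers: no flag present

print(f"NM integration active: {nm_active}")
```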

Add recipe_name to file names

* Fix errors introduced in manual cherry-pick upgrade

Co-authored-by: Benjamin Fineran <bfineran@users.noreply.github.com>