Releases: macrocosm-os/pretraining
Release v4.6.4
Announcing release v4.6.4
Here are the main changes:
Changes
- Bittensor dependency version was bumped to 8.5.1.
The only action needed for validators is to pull this new release and reinstall dependencies.
NOTES TO VALIDATORS
- Please also make sure to rerun pip install to ensure updated dependencies.
python -m pip install -e .
Release v4.6.3
Announcing release v4.6.3
Here are the main changes:
Changes
1. Bittensor version bumped to 8.4.3
We have noticed a higher success rate for weight setting with this version than with 6.9.4.
2. Changed S3 bucket URL for the stack v2-dedup dataset
The official documentation provides two URLs to access the softwareheritage S3 bucket:
https://docs.softwareheritage.org/user/using_data/index.html#contents-on-s3
The URL we used before seems to fail in some regions. The other URL, https://softwareheritage.s3.amazonaws.com/content/<sha1>,
seems to be more robust, so we updated the dataloader to use it.
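As a sketch, the dataloader's URL construction under the new scheme looks roughly like this (the function name is illustrative, not the repo's actual code):

```python
# Illustrative only: build the bucket-style S3 URL for a file's content,
# addressed by its sha1, per the softwareheritage documentation linked above.
def content_url(sha1: str) -> str:
    return f"https://softwareheritage.s3.amazonaws.com/content/{sha1}"
```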
NOTES TO VALIDATORS
- IMPORTANT: The newly added code dataset the-stack-v2-dedup
requires a Hugging Face access token and S3 secret and access keys. You can learn how to obtain and configure those tokens in our validator documentation here.
- Please also make sure to rerun pip install to ensure updated dependencies.
python -m pip install -e .
Release v4.6.2
Announcing release v4.6.2
Here are the main changes:
Changes
1. Increasing code proportion in the data mix
Activation block: 4_453_709
We are replacing the-stack-dedup
with the-stack-v2-dedup
in the 14B-star competition and increasing the code proportion in the validation dataset from ~5% to ~15%.
Datasets used during evaluation for the 14B-star competition are as follows:
- HuggingFaceFW/fineweb-edu-score-2 (85%)
- bigcode/the-stack-v2-dedup (15%)
Datasets for the 3B and 14B competitions are left unchanged.
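The 14B-star mix above can be sketched as a weighted draw per validation page (illustrative only; the real sampling logic lives in the repo's dataloaders):

```python
import random

# The 14B-star evaluation mix described above (assumption: pages are
# drawn from the two datasets in these proportions).
MIX = {
    "HuggingFaceFW/fineweb-edu-score-2": 0.85,
    "bigcode/the-stack-v2-dedup": 0.15,
}

def pick_dataset(rng: random.Random) -> str:
    """Choose which dataset the next validation page is drawn from."""
    names = list(MIX)
    weights = [MIX[n] for n in names]
    return rng.choices(names, weights=weights, k=1)[0]
```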
2. New epsilon lower bounds and decay intervals
Activation block 4_453_709
The epsilon decay interval and bounds will be updated for all competitions as follows:
- 3B competition:
  Current: decays from 0.005 to 0.0005 over 7 days
  Updated: decays from 0.005 to 0.0002 over 4 days
- 14B and 14B-star competitions:
  Current: decays from 0.005 to 0.0005 over 7 days
  Updated: decays from 0.005 to 0.0002 over 5 days
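The updated 3B schedule above can be sketched as a simple linear decay (an assumption for illustration; the authoritative schedule is defined in the repo's competition configuration):

```python
# Linear epsilon decay, assuming the decay is linear over the interval.
# Defaults reflect the updated 3B schedule: 0.005 -> 0.0002 over 4 days.
def epsilon_3b(elapsed_days: float, start: float = 0.005,
               floor: float = 0.0002, decay_days: float = 4.0) -> float:
    if elapsed_days >= decay_days:
        return floor
    return start - (start - floor) * (elapsed_days / decay_days)
```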
3. Updated emission distribution for competitions
Activation block 4_453_709
- 3B → 20%
- 14B → 40%
- 14B-star → 40%
4. Pinned all package versions in requirements.txt
To avoid installation issues and package incompatibilities, we have pinned all dependency package versions in the requirements.txt
file. The installation experience should be smoother now.
NOTES TO VALIDATORS
- IMPORTANT: The newly added code dataset the-stack-v2-dedup
requires a Hugging Face access token and S3 secret and access keys. You can learn how to obtain and configure those tokens in our validator documentation here.
- Please also make sure to rerun pip install to ensure updated dependencies.
python -m pip install -e .
Release v4.6.1
This release fixes a bug where models that are winning in 14B* but are non-competitive in 14B would not be kept for evaluation.
Release v4.6.0
Announcing release v4.6.0
Here are the main changes:
Changes
1. 14B Multi-Dataset Competition.
Activation block: 4_252_646
Datasets used during evaluation:
- HuggingFaceFW/fineweb-edu-score-2 (95%)
- bigcode/the-stack-dedup (5%)
2. Retiring the 700M competition
Activation block: 4_252_646
We will sunset the 700M competition at the same time the new 14B competition rises. The 14B multi-dataset competition will take over the 700M competition's emissions.
3. New epsilon lower bound and decay interval
Activation block 4_252_646
We will slightly increase the lower bound for epsilon from 0.0001 to 0.0005 and decrease the decay interval from 10 to 7 days.
4. Evaluation data syncing
Effective immediately.
Validation batches will now be synced across all validators. We will also introduce a delay in picking up models for validation to prevent exploits by training on the exact upcoming batch.
5. Deduplicating evaluation data
Effective immediately.
This issue was raised on Discord. Picking a random offset when sampling batches could result in two validation pages overlapping when the offset difference is less than the number of samples pulled at each offset position. Although this is very rare with large datasets, the issue has now been fixed for FineWeb-Edu2 and will be generalized to all loaders in a following release.
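The overlap condition and a rejection-sampling fix can be sketched as follows (names and numbers are illustrative assumptions, not the repo's implementation):

```python
import random

# Two pages overlap when their offsets are closer than the number of
# samples pulled per offset -- the condition described above.
def pages_overlap(offset_a: int, offset_b: int, samples_per_offset: int) -> bool:
    return abs(offset_a - offset_b) < samples_per_offset

def sample_disjoint_offsets(rng: random.Random, dataset_len: int,
                            n_pages: int, samples_per_offset: int) -> list:
    """Draw page offsets, rejecting any that would overlap a previous page."""
    offsets: list = []
    while len(offsets) < n_pages:
        candidate = rng.randrange(dataset_len - samples_per_offset)
        if all(not pages_overlap(candidate, o, samples_per_offset) for o in offsets):
            offsets.append(candidate)
    return offsets
```

Rejection sampling is cheap here because, as noted, collisions are rare when the dataset is much larger than the total samples drawn.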
6. Setting repo visibility at model upload
Effective immediately.
Added the --update_repo_visibility
argument to upload_model.py
to enable changing the Hugging Face repo visibility at model upload rather than setting it manually.
7. New emission distribution
3B → 29%
14B → 57%
14B Multi-Dataset → 14%
NOTES TO VALIDATORS
- The newly added code dataset The Stack V1-dedup
requires a Hugging Face access token. You can learn how to obtain one in our validator documentation here.
- Please also make sure to rerun pip install to ensure updated dependencies.
python -m pip install -e .
Release v4.5.3
Announcing release v4.5.3
Here are the main changes:
Changes
- This fix first filters out all models that can't beat all earlier models at full epsilon decay, before computing win rates (and incentive weights). This ensures that:
  - All clones are filtered out before win rates are computed, so clones can no longer provide an artificially high win rate.
  - Any model that is not better than the fully-decayed top model but was submitted at a later block is discarded from the evaluation process.
- Validator version bumped to 3.4.0
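Under assumed data structures (each entry carries a loss and a submission block; lower loss wins, and a later model must beat earlier ones by the fully-decayed epsilon margin), the pre-filtering step might look like this hypothetical sketch, not the repo's actual implementation:

```python
# Keep a model only if it beats every earlier kept model at full epsilon
# decay. Field names and the exact margin rule are assumptions.
def filter_at_full_decay(models: list, epsilon_full: float) -> list:
    kept: list = []
    for model in sorted(models, key=lambda m: m["block"]):
        if all(model["loss"] < prev["loss"] * (1 - epsilon_full) for prev in kept):
            kept.append(model)
    return kept
```

A clone submitted at a later block has (nearly) identical loss, so it cannot clear the epsilon margin against the original and is dropped before win rates are computed.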
Release v4.5.2
Announcing release v4.5.2
Here are the main changes:
Changes
- Bumped validator version to force all validators to re-evaluate models that had been excluded due to 'inf' loss caused by the low compute loss TTL in 4.5.0.
- Decay period for 14B has been temporarily extended from 7 to 10 days, so models that had been excluded can have a second chance to get a fair ranking.
Release v4.5.1
Announcing release v4.5.1
Here are the main changes:
Changes
- Updated to the latest taoverse version v1.0.6 which correctly gets the newest instead of the oldest filetime in a directory.
- Bumped the grace period for downloading models to 10 minutes. This should provide additional buffer for those with slower connections when downloading 14B models.
- Reduced the number of validation pages from 18 to 11.
- Increased the compute loss TTL from 400s to 430s (although all current 14B models complete validation in under 400 seconds).
- Reduced the epsilon decay period from two weeks to one for 14B models.
Note to validators
Please also make sure to rerun pip install to ensure updated dependencies.
python -m pip install -e .
Release v4.5.0
Announcing release v4.5.0
This release applies the following changes:
Changes
Immediate effect:
- Model copy exploit fix: binary pairwise win rates are no longer applied; models are instead ranked by average loss per validation page.
Activation on block 4_001_017:
- Roll back to sample packing.
- Fast epsilon decay from 0.005 to 0.0001 instead of 0.001.
Release v4.4.0
Announcing release v4.4.0
This release includes the following changes.
Changes
- Includes the deactivation block for sunsetting the 7B competition (competition ID 0): block 3849722.
- The 0.15 reward previously attributed to the 7B competition will be added to the 14B competition, whose reward will total 0.57.
- Minor code clean-up, including the removal of the epsilon experiment code.
- No action to be taken by validators other than pip install -e . to update to the new version.