Request for new features: stage-specific parameters for explore and fp, finetune/iter-initial-model distinguishing training parameters #216

Open
Vibsteamer opened this issue Apr 26, 2024 · 1 comment
Labels
enhancement New feature or request

Comments

@Vibsteamer

REQUEST 1:

Expect the following parameters to gain additional structure so they can be assigned per exploration stage:

  1. explore/convergence (all parameters within)
  2. explore/max_numb_iter
  3. explore/fatal_at_max
  4. fp/task_max

e.g. (a hypothetical consolidated per-stage sketch follows these two examples):

  • in stage_0:
    ...
    "explore": {
        "type": "lmp",
        "config": {
            "command": "lmp -var restart 0",
            "impl": "pytorch"
        },
        "convergence": {
            "type": "adaptive-lower",
            "conv_tolerance": 0.005,
            "rate_candi_f": 0.15,
            "level_f_hi": 5.0,
            "n_checked_steps": 3
        },
        "max_numb_iter": 2,
        "fatal_at_max": false,
        ...
    "fp": {
        "task_max": 4000,
        ...

  • in stage_3:
    ...
    "explore": {
        "type": "lmp",
        "config": {
            "command": "lmp -var restart 0",
            "impl": "pytorch"
        },
        "convergence": {
            "type": "adaptive-lower",
            "conv_tolerance": 0.005,
            "numb_candi_f": 4000,
            "level_f_hi": 5.0,
            "n_checked_steps": 3
        },
        "max_numb_iter": 20,
        "fatal_at_max": true,
        ...
    "fp": {
        "task_max": 4000,
        ...
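Taken together, one possible shape for the requested feature is sketched below. This is a minimal illustration only: the stages_config list, the fp_task_max placement, and all numeric values are hypothetical and not part of the current dpgen2 schema.

    "explore": {
        "type": "lmp",
        "config": {
            "command": "lmp -var restart 0",
            "impl": "pytorch"
        },
        // hypothetical key: one override entry per exploration stage
        "stages_config": [
            {
                // stage_0: debiasing stage, few iterations, not expected to converge
                "convergence": {"type": "adaptive-lower", "conv_tolerance": 0.005, "rate_candi_f": 0.15, "level_f_hi": 5.0, "n_checked_steps": 3},
                "max_numb_iter": 2,
                "fatal_at_max": false,
                "fp_task_max": 4000
            },
            ...
            {
                // stage_3: production stage, expected to actually converge
                "convergence": {"type": "adaptive-lower", "conv_tolerance": 0.005, "numb_candi_f": 4000, "level_f_hi": 5.0, "n_checked_steps": 3},
                "max_numb_iter": 20,
                "fatal_at_max": true,
                "fp_task_max": 4000
            }
        ]
    }

Any equivalent structure (for instance, per-stage blocks under fp for task_max) would serve the same purpose.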

REQUEST 2:

Expect support for different ending_pref_e/f/v for the initial_finetune from multi-task pre-trained models and for the successive init_model trainings from the finetuned_initial_model.

Currently train/config supports only start parameters but no end parameters, e.g. only "init_model_start_pref_e" but no "init_model_end_pref_e". Instead, the end prefs are inherited from the limit_prefs of the single training script defined in train/config/templated_script.

Maybe this needs support for two scripts via train/config/templated_script, or new init_model_end_pref_e/f/v parameters added in train/config.
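For illustration only, a minimal sketch of the second option, assuming new init_model_end_pref_e/f/v keys that mirror the existing init_model_start_pref_e/f/v (the end-pref keys and all numeric values here are hypothetical, not part of the current train/config schema):

    "train": {
        ...
        "config": {
            ...
            "init_model_start_pref_e": 0.1,
            "init_model_start_pref_f": 100,
            "init_model_start_pref_v": 0.0,
            // hypothetical new keys: would replace the limit_pref_e/f/v otherwise inherited from the template script
            "init_model_end_pref_e": 1.0,
            "init_model_end_pref_f": 1.0,
            "init_model_end_pref_v": 0.0,
            ...
        }
    }

With such keys, the initial finetune from the multi-task pre-trained model and the later init_model trainings could end at different loss prefactors without maintaining two template scripts.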

Scenario giving rise to REQUEST 1

In practice, when DP-GEN2 is initiated from pre-trained models, multiple successive exploration stages are used to enhance the exploration efficiency on a complex sample space.

The sample space consists of derivatives of (1) many markedly different initial configurations, (2) both trivial dynamics images and significant low-probability instances, and (3) successors of the low-probability instances, which are likewise trivial yet markedly different from their initial/parent configurations.

(1) suffers from species bias after pre-training (and finetuning), which leads to over-sampling of full trajectories of specific configurations far away from the pre-training data.
(2) is our central target.
(3) suffers from conformer bias after pre-training (and finetuning), which leads to over-sampling of these trivial successor configurations.

Thus stage_0 and stage_1 are used to debias (1) and (3) by randomly selecting candidates from a broader model_devi range.
No final exploration convergence is expected for these two stages.
stage_2 is the one actually meant to converge, targeting (2), and its related parameters would differ from those of the debiasing stages.

Scenario giving rise to REQUEST 2

Tests showed different parameter preferences for the trainings in the two stages.

@Vibsteamer
Author

BTW, due to some compatibility limitations, I'm using the branch https://github.com/zjgemi/dpgen2/tree/deepmd-pytorch, from and with thanks to @zjgemi.

@njzjz added the enhancement (New feature or request) label on Jun 24, 2024