v0.1.0 - Accelerating Finite-Temperature DFT with DNN
Released by RandomDefaultUser on 07 Jul, 11:27
First alpha release of MALA. This code accompanies the publication of the same name (https://doi.org/10.1103/PhysRevB.104.035120).
Features:
- Preprocessing of QE data using the LAMMPS interface and parsers
- Networks can be created and trained using PyTorch
- Hyperparameter optimization using Optuna
- Experimental: orthogonal array tuning and neural architecture search without training
- Postprocessing using the QE total energy module (available as a separate repository)
Test data repository version: v0.1.0