This repository provides a Dockerized solution for efficiently backing up LAMMPS projects hosted on GitHub. By leveraging Docker and Docker Compose, it establishes a reliable environment equipped with essential tools for comprehensive Git and GitHub operations, ensuring seamless and accessible backups of critical data.
The Dockerfile in this repository sets up the environment by including essential packages:
- GitHub CLI: For interacting with GitHub repositories.
- Java Runtime Environment (JRE): Required to run Java applications such as BFG Repo-Cleaner.
- BFG Repo-Cleaner: A tool for cleaning up Git repository history.
These packages ensure that the environment is well-equipped for comprehensive Git and GitHub operations.
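A minimal sketch of such a Dockerfile, assuming a Debian base image; the package names, the BFG version, and the download URL are assumptions and may differ from the actual Dockerfile in this repository:

```dockerfile
# Sketch only: base image, BFG version, and URL are assumptions.
FROM debian:bookworm-slim

# Git, GitHub CLI, and a Java runtime for BFG Repo-Cleaner
RUN apt-get update && \
    apt-get install -y --no-install-recommends git gh default-jre-headless wget ca-certificates && \
    rm -rf /var/lib/apt/lists/*

# BFG Repo-Cleaner is distributed as a single JAR; wrap it in a small launcher
RUN wget -O /usr/local/lib/bfg.jar \
      https://repo1.maven.org/maven2/com/madgag/bfg/1.14.0/bfg-1.14.0.jar && \
    printf '#!/bin/sh\nexec java -jar /usr/local/lib/bfg.jar "$@"\n' > /usr/local/bin/bfg && \
    chmod +x /usr/local/bin/bfg

CMD ["bash"]
```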
The docker-compose.yml file is configured to run the GitHub backup service with the following key components:
- Environment Variables: Loads configuration parameters from the .env file.
- Volume Mounts: Maps the current directory and a specified project directory into the container.
These configurations ensure a seamless and secure environment for backing up LAMMPS projects from GitHub.
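A minimal sketch of what the docker-compose.yml could look like; the service name, mount paths, and the PROJECT_DIR variable are assumptions for illustration (the image and container names are taken from the commands later in this README):

```yaml
# Sketch only: service name, mount targets, and PROJECT_DIR are assumptions.
services:
  github-backup:
    image: lammps-github-backup-cli:2.3
    container_name: lammps-github-backup-env
    env_file:
      - .env                      # configuration parameters
    volumes:
      - .:/workspace              # current directory
      - ${PROJECT_DIR}:/project   # a specified project directory from .env (hypothetical variable)
    working_dir: /workspace
    stdin_open: true
    tty: true
```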
[[toc]]
To get started with this setup, follow these steps:
- Clone the repository and navigate to the project's root directory.
- Create a .env file by copying the existing .env-sample file and modifying it as needed.
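For example, from the project root:

```bash
cp .env-sample .env
# then edit .env with your preferred editor
```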
Once you've completed these steps, you can proceed with the following commands:
Navigate to the root directory of this project, where the Dockerfile is stored. In your terminal, run the following command to build the Docker image:
docker build -t lammps-github-backup-cli:2.3 .
Then start the container in detached mode:
docker compose up --detach
To stop and remove the container, use:
docker compose down
To open an interactive shell inside the running container, use:
docker exec -it lammps-github-backup-env bash
Note
Ensure your local repository is accessible in the WSL2 environment. Open the WSL2 terminal and navigate to your repository to run Docker commands seamlessly.
For interactive rebase operations, install a text editor such as Vim on the Linux system:
apt install vim
Navigate to a local repository and run BFG to replace sensitive text using the following command:
bfg --replace-text /home/passwords.txt --no-blob-protection
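Each line of the file passed to --replace-text is, per BFG's expression syntax, a literal string to scrub (replaced with ***REMOVED*** by default); a replacement can be given after ==> and a regex: prefix switches to regular-expression matching. A hypothetical /home/passwords.txt, with placeholder values, might look like:

```text
hunter2
my-api-token==>[REDACTED]
regex:ghp_[A-Za-z0-9]+
```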
You can use the ./mkdir-for-lammps-results.sh script to create a new folder with a formatted name, which helps in organizing your LAMMPS nanocutting-SiC results efficiently.
This script prompts you to input various parameters of your simulation, such as:
- Cutting Speed: Choose a value between 1 and 3.
- Groove Depth: Specify "no" for defect-free or select among 3, 6, or 9.
- Groove Shape: Select between isosceles acute (a) or isosceles right (r) if a groove depth is specified.
Based on these inputs, the script generates and creates a new folder with a name that accurately reflects the chosen simulation parameters.
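A minimal sketch of how such a script might work; the exact prompt wording and the folder-naming scheme of the real script are assumptions:

```bash
#!/bin/bash
# Sketch only: prompts and naming scheme are assumptions, not the script's actual format.
read -rp "Cutting speed (1-3): " speed
read -rp "Groove depth (no/3/6/9): " depth

if [ "$depth" = "no" ]; then
    name="speed${speed}_no-groove"
else
    read -rp "Groove shape - isosceles acute (a) or isosceles right (r): " shape
    name="speed${speed}_groove${depth}_${shape}"
fi

mkdir -p "$name" && echo "Created folder: $name"
```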
To start using git in a folder, you need to initialize it as a repository. This creates a hidden .git directory that contains the files and metadata git needs to track your changes.
To initialize git in a folder, open a terminal, navigate to the folder you want to use, and run the ../setup-git.sh script. It executes the git init command and sets the git user name and email locally.
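Conceptually, the script does something like the following; the name and email values are placeholders:

```bash
#!/bin/bash
# Sketch of what ../setup-git.sh does, per the description above.
git init
git config --local user.name  "Your Name"        # placeholder
git config --local user.email "you@example.com"  # placeholder
```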
After git is initialized and configured in your folder, you can run the ../new-gitHub-repo.sh script to set up the remote repository. The script will prompt you to choose a file type and then create a new private repository on GitHub using the GitHub CLI, with a name that reflects the file type and the simulation parameters.
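A hedged sketch of the idea; the prompt wording and the repository-naming convention are assumptions:

```bash
#!/bin/bash
# Sketch only; the real script's prompts and naming convention may differ.
read -rp "File type (e.g. dump/log/restart): " filetype
repo_name="${filetype}-$(basename "$PWD")"   # assumed: the folder name carries the simulation parameters

# Create a private repository on GitHub and set it as the remote of the current folder
gh repo create "$repo_name" --private --source=. --remote=origin
```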
You can use the ./explore-github-repo.sh script to carry out the following tasks:
- Open the GitHub repository in the browser
- Display the description and the README
- Check the disk usage
- Update the description
This script will first prompt you to enter the file type and the simulation parameters. It will resolve the repository name and then ask for the action you want to take.
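The gh subcommands behind those actions appear in the GitHub CLI section below; a rough sketch of the action menu might look like this, with the repository-name resolution simplified to a placeholder:

```bash
#!/bin/bash
# Sketch only; repo-name resolution and menu wording are assumptions.
repo="<owner>/<resolved-repo-name>"   # derived from file type + simulation parameters in the real script

select action in "Open in browser" "Show description and README" "Check disk usage" "Update description"; do
    case $action in
        "Open in browser")             gh repo view "$repo" --web ;;
        "Show description and README") gh repo view "$repo" ;;
        "Check disk usage")            gh repo view "$repo" --json diskUsage ;;
        "Update description")          read -rp "New description: " d
                                       gh repo edit "$repo" --description "$d" ;;
    esac
    break
done
```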
You can use the ../git-readme-and-logfiles.sh script to create commits for the README and log files in the folder that contains the simulation results.
This script will create the "Add README file" and "Add log files" commits.
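In essence the script stages and commits those files in two steps; a sketch, with the log-file glob as an assumption:

```bash
#!/bin/bash
# Sketch; the actual file patterns used by the script are assumptions.
git add README.md
git commit -m "Add README file"

git add log.*        # assumed LAMMPS log-file naming
git commit -m "Add log files"
```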
You can use the ../commit-results-files.sh script to create commits for the result files.
This script will prompt you to enter the cutting speed parameter. It then uses a loop to sequentially create commits for the indexed files in multiple batches. This produces a set of commits that can be pushed to the remote repository in smaller chunks, reducing the risk of conflicts and errors.
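A hedged sketch of the batching idea; the file-naming pattern and batch size are assumptions:

```bash
#!/bin/bash
# Sketch only: the indexed-file pattern and batch size are assumptions.
read -rp "Cutting speed (1-3): " speed

batch_size=10
files=( dump.speed${speed}.* )      # assumed naming of the indexed result files

for (( i = 0; i < ${#files[@]}; i += batch_size )); do
    git add "${files[@]:i:batch_size}"
    git commit -m "Add result files, batch $(( i / batch_size + 1 ))"
done
```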
You can use the ../progressive-push-latest-commits.sh script in the folder that contains the simulation results.
Enter the number of revisions before HEAD at the prompt. For example, entering 10 starts from HEAD~10 and pushes the commits to the remote one by one with a 5-minute break in between.
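The core of such a progressive push can be expressed with git rev-list and the <commit>:<branch> push syntax shown in the git commands section below; a sketch, assuming the remote is origin and the branch is main:

```bash
#!/bin/bash
# Sketch only; remote and branch names are assumptions.
read -rp "Number of revisions before HEAD: " n

# Walk from the oldest of the last n commits up to HEAD, pushing one commit at a time
for commit in $(git rev-list --reverse "HEAD~${n}..HEAD"); do
    git push origin "${commit}:main"
    echo "Pushed ${commit}; sleeping 5 minutes..."
    sleep 300
done
```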
You can use the ./ovito.sh script to quickly open the OVITO program on Windows and preload the state files specified in the script.
If you prefer to use PowerShell on Windows, run the ./ovito.ps1 script instead.
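A minimal sketch of what ./ovito.sh could look like when launched from WSL2, assuming OVITO accepts a state file as a command-line argument; the install path and state-file name are placeholders:

```bash
#!/bin/bash
# Sketch only; the OVITO install path and state file are placeholders.
OVITO_EXE="/mnt/c/Program Files/OVITO Basic/ovito.exe"   # assumed Windows install location
STATE_FILE="results.ovito"                               # assumed session state file

# Convert the WSL path to a Windows path before handing it to the Windows executable
"$OVITO_EXE" "$(wslpath -w "$STATE_FILE")" &
```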
source /usr/share/bash-completion/completions/git
git log --oneline
git rm -r pre_cut/ precut/
git commit -m "Remove pre_cut files"
git commit --amend --date="........"
git reset --soft HEAD^
git rebase -i --root
git push origin <commit ref>:<remote branch>
git checkout -b <new branch name>
git cherry-pick <commit ref>
git rebase branch-A branch-B
Official manual: https://cli.github.com/manual/
gh auth login
Manual: https://cli.github.com/manual/gh_auth_login
gh repo list [<owner>] [flags]
Manual: https://cli.github.com/manual/gh_repo_list
gh repo clone <repository>
Manual: https://cli.github.com/manual/gh_repo_clone
A shallow clone allows you to clone a repository with a limited history, which can save time and space.
gh repo clone <repository> -- --depth 1
A partial clone allows you to clone a repository without downloading all file contents (blobs) up front; missing blobs are fetched on demand, which can save time and space.
gh repo clone <repository> -- --filter=blob:none
Sparse checkout allows you to check out only a subset of files from the repository.
- Clone the repository:
gh repo clone <repository> -- --no-checkout
- Enable sparse checkout:
git sparse-checkout init --cone
- Define the directories or files you want to include:
git sparse-checkout set <directory-or-file>
To efficiently clone a large repository with many binary files, you can combine shallow clone, partial clone, and sparse checkout:
gh repo clone <repository> -- --depth 1 --filter=blob:none --no-checkout
cd <repository>
git sparse-checkout init --cone
git sparse-checkout set <directory-or-file>
This approach minimizes the amount of data transferred and stored locally by only including the necessary files and excluding large binary files.
gh repo create <owner>/<new-name> --private --template=<repository>
Manual: https://cli.github.com/manual/gh_repo_create
Manual: https://cli.github.com/manual/gh_repo_view
gh repo view [<repository>] --web
gh repo view [<repository>] --json description,repositoryTopics
gh repo view [<repository>] --json diskUsage
Manual: https://cli.github.com/manual/gh_repo_edit
gh repo edit [<repository>] --add-topic <strings>