ThreeCopies.com is a hosted service that regularly archives your server-side resources. We create three copies: hourly, daily and weekly.
What's interesting is that the entire product will be written in EO,
a truly object-oriented programming language.
The logo is made by Freepik from flaticon.com, licensed by CC 3.0 BY.
Each script is a bash scenario, which you design yourself. ThreeCopies just starts it regularly and records its output. These are some recommendations on how to design the script. There are three parts: input, package, and output. First, you collect some data from your data sources (input). Then, you compress and encrypt the data (package). Finally, you store the package somewhere (output).
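For example, a complete script might look like this (a minimal sketch; the BACKUP_PASSPHRASE variable and the bucket name are our own placeholders, and the s3cmd and mysqldump configuration is explained further below):
# input: dump the data source
mysqldump --host=127.0.0.1 --user=username --password=password \
  --databases dbname > dump.sql
# package: compress, then encrypt symmetrically with gpg
# (--pinentry-mode loopback is needed by GnuPG 2.1+ for a non-interactive passphrase)
tgz="${period}-$(date "+%Y-%m-%d-%H-%M").tgz"
tar czf "${tgz}" dump.sql
gpg --batch --yes --pinentry-mode loopback \
  --passphrase "${BACKUP_PASSPHRASE}" --symmetric "${tgz}"
# output: ship the encrypted archive to S3
s3cmd --no-progress put "${tgz}.gpg" "s3://backup.example.com/${tgz}.gpg"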
We start your script inside the yegor256/threecopies Docker container; here is the Dockerfile.
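If you want to try your script locally before registering it, you can run it in the same image (a sketch; we assume your script is saved as script.sh and that the image allows overriding the command with bash):
docker run --rm -e period=hour \
  -v "$(pwd)/script.sh:/script.sh:ro" \
  yegor256/threecopies /bin/bash /script.sh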
If you don't want your script to be executed too frequently, you may put this code in front of it (to skip hourly executions, for example):
if [ "${period}" == "hour" ]; then exit 0; fi
To retrieve the data from a MySQL database, use mysqldump:
mysqldump --lock-tables=false --host=www.example.com \
--user=username --password=password \
--databases dbname > mysql.sql
Since this would require opening your MySQL port to the internet, which is not advisable from a security perspective, you should probably use an SSH tunnel:
cat > file.key <<EOT
-----BEGIN RSA PRIVATE KEY-----
<your ssh private key here>
-----END RSA PRIVATE KEY-----
EOT
chmod 600 file.key
ssh -Nf -i file.key -L3306:localhost:3306 your_user@www.example.com
rm file.key
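Here, -N tells ssh not to execute a remote command and -f sends it to the background once the tunnel is up; after that the key file can be safely removed from disk.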
and then run the same mysqldump as above through the tunnel (note that the MySQL client treats localhost as a Unix socket, so use 127.0.0.1 to force a TCP connection):
mysqldump --lock-tables=false --host=127.0.0.1 ...same as above
To download an entire FTP directory, use wget:
wget --mirror --tries=5 --quiet --output-file=/dev/null \
--ftp-user=username --ftp-password=password \
ftp://ftp.example.com/some-directory
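The --mirror option turns on recursion and time-stamping, so repeated runs only fetch files that are new or have changed since the previous backup.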
To package a directory, use tar:
tgz="${period}-$(date "+%Y-%m-%d-%H-%M").tgz"
tar czf "${tgz}" some-directory
We recommend using exactly that name for your .tgz archives. The ${period} environment variable is provided by our server to your Docker container; it will be set to either hour, day, or week.
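For example, an hourly run started at noon on July 4th, 2017 would produce a file named hour-2017-07-04-12-00.tgz.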
To upload a file to Amazon S3, use s3cmd:
echo "[default]" > ~/.s3cfg
echo "access_key=AKIAICJKH*****CVLAFA" >> ~/.s3cfg
echo "secret_key=yQv3g3ao654Ns**********H1xQSfZlTkseA0haG" >> ~/.s3cfg
s3cmd --no-progress put "${tgz}" "s3://backup.example.com/${tgz}"
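You can verify the upload by listing the bucket with the same ~/.s3cfg credentials:
s3cmd ls "s3://backup.example.com/"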
The tc-scripts table contains all registered scripts:
fields:
login/H: GitHub login of the owner
name/R: Unique name of the script
bash: Bash script
hour: Epoch-sec when its most recent hourly log was scheduled
day: Epoch-sec when its most recent daily log was scheduled
week: Epoch-sec when its most recent weekly log was scheduled
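For example, with the AWS CLI a single script can be fetched by its composite key like this (a sketch; the login and name values are made up):
aws dynamodb get-item --table-name tc-scripts \
  --key '{"login": {"S": "yegor256"}, "name": {"S": "test"}}'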
The tc-logs table contains all recent logs:
fields:
group/H: Concatenated GitHub login and script name, e.g. "yegor256/test"
finish/R: Epoch-msec of the script finish (or MAX_LONG if still running)
login: GitHub login of the owner
period: Either "hour", "day", or "week"
ocket: S3 object name for the log
ttl: Epoch-sec when the record has to be deleted (by DynamoDB)
start: Epoch-msec time of the start
container: Docker container name
exit: Bash exit code (error if not zero)
mine (index):
login/H
finish/R
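The mine index lets us list all logs of one user, most recently finished first, like this (a sketch with the AWS CLI; the login value is made up):
aws dynamodb query --table-name tc-logs --index-name mine \
  --key-condition-expression "login = :l" \
  --expression-attribute-values '{":l": {"S": "yegor256"}}' \
  --scan-index-forward false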
Just submit a pull request. Make sure mvn -Pqulice install passes.