Here are the opinionated conventions for Bash scripts.
Use #!/usr/bin/env bash instead of #!/bin/bash.
The former searches the user's PATH to find the bash binary, while the latter assumes bash is always installed at /bin/, which can cause issues on systems where it is not.
NOTE: There are times when one may have a good reason to use #!/bin/bash or another direct path to the binary.
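For example, a minimal script header using the portable shebang (the echo line is only for illustration):

#!/usr/bin/env bash
# prints which bash binary was found on the PATH
echo "Using bash at: $(command -v bash)"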
All variables should be in ALL_CAPS format and should not contain any special characters except underscore (_), to improve readability; characters such as dash (-) or plus (+) are not valid in Bash variable names anyway.
USER=$(whoami)
LIST_OF_FILES=$(ls -la)
The benefit of this naming convention is that all the variables in a Bash script stand out at a first glance.
Almost every Linux distro comes with Bash out-of-the-box.
So it's generally safe to rely on Bash-specific syntax, such as its conditions, loops, etc.
For example, we can use Bash-specific syntax on Azure DevOps Pipelines, because every Hosted Agent comes with Bash installed.
But when deploying Bash scripts in a Docker container, we can't always rely on Bash being present.
Ubuntu, CentOS, Debian, etc. Docker images have Bash, but more minimal Docker images, such as Alpine, don't.
Either we build from an Ubuntu, CentOS, or Debian based Docker image (around 600MB) that ships with Bash, or we build from an Alpine Docker image (around 6MB).
Unfortunately, we can't use Bash-specific syntax on Alpine Docker images out of the box, because Bash is not pre-installed on the image.
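If the image must stay Alpine-based, one option is to install Bash explicitly; a minimal sketch (version pinning left out for brevity):

# inside an Alpine-based image build or container, Bash can be added via apk
apk add --no-cache bash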
Always avoid using the function keyword; it is a non-portable Bash extension and reduces compatibility with other shells.
Use
do_something() {
  echo "doing something"
}
Instead of
function do_something() {
  echo "doing something"
}
All scripts should follow the DRY principle: if part of a script can be reused elsewhere, extract it into a function and call that function from multiple places.
doit() {
  echo "doing it"
}

echo "DoIt now"
doit
echo "DoIt again"
doit
In Bash, you can't call a function before it has been declared, so all functions should be placed at the top of the script file.
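For example, the following fails with a "command not found" error because greet is called before it is defined:

greet   # error: greet: command not found
greet() {
  echo "hello"
}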
In Bash, you can also write a function definition and its body on a single line, but this should be avoided because it hurts readability.
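For example, this one-liner form is valid but discouraged:

do_something() { echo "doing something"; }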
If you need to return a value from a Bash function, do it on the very last line of the function.
According to the Bash documentation, return can only pass back a single numeric value from a function (an exit status between 0 and 255).
sum() {
  return $(($1 + $2))
}

sum 1 3
RESULT=$?   # read the return value from the exit status (limited to 0-255)
In most cases it's not enough to return only one value from a function.
In those cases, echo a JSON payload to the console at exactly one point in the function and consume it at the call site.
sum() {
  echo "{ \"sum\": $(($1 + $2)), \"avg\": $((($1 + $2) / 2)) }"
}

RESULT=$(sum 1 3)
Afterwards, we can use the jq command-line JSON processor to parse the payload:
SUM=$(echo "$RESULT" | jq '.sum')
AVG=$(echo "$RESULT" | jq '.avg')
Since the exit keyword terminates the entire script, it's best to use return instead of exit inside functions.
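A minimal sketch of the difference (the function names are hypothetical):

fail_with_return() {
  return 1   # hands control back to the caller; the script keeps running
}

fail_with_exit() {
  exit 1     # terminates the whole script immediately
}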
Prefer local variables within functions over global variables, so always declare variables inside a function with the local keyword.
sum() {
  local SUM=$(($1 + $2))
  return $SUM
}
If a Bash script requires parameters to run, always check if those parameters are set.
if [ -n "$1" ]; then
  # use first parameter
  echo "first parameter is $1"
else
  echo "First parameter cannot be blank"
  exit 10
fi
In some cases we need to check whether all the required parameters are set; in those cases we can check the number of parameters using the $# syntax.
NUMBER_OF_PARAMETERS=$#
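For example, a minimal sketch that requires at least two parameters (the usage message is illustrative):

if [ "$#" -lt 2 ]; then
  echo "Usage: $0 <first> <second>"
  exit 1
fi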
If we need to iterate over the parameters, we can use the $@ syntax:
for var in "$@"; do
  echo "current parameter $var"
done
In the Linux world, almost every option of a native command has an abbreviated form; in scripts it's good to avoid the abbreviations and spell out the long options for readability.
# use
rm --recursive --force
# instead of
rm -rf
The following will run the given command and keep it running even after the terminal or SSH connection is terminated. All output is discarded.
(nohup "$@" &>/dev/null &)
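A minimal sketch wrapping this pattern in a function (the name run_detached is hypothetical):

run_detached() {
  (nohup "$@" &>/dev/null &)
}

# keeps running even if the terminal is closed
run_detached sleep 300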
Use the ShellCheck tool to find bugs in shell scripts.
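For example, assuming ShellCheck is installed locally:

shellcheck myscript.sh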
To perform a syntax check of your Bash script without executing it (a dry run), use the -n
argument:
bash -n myscript.sh
To print every script line as it is read, use the -v
argument:
bash -v myscript.sh
To produce a trace of every command as it is executed, use the -x
argument:
bash -x myscript.sh
-v
and -x
can also be made permanent by adding set -o verbose
and set -o xtrace
to the script prologue. This can be useful if the script is run on a remote machine, e.g. a build bot, and you are logging the output for remote inspection.
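A minimal sketch of such a prologue:

#!/usr/bin/env bash
set -o verbose   # same as set -v: print script lines as they are read
set -o xtrace    # same as set -x: trace commands as they are executed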