This project offers a collection of scripts to ease operations & development. All of them expect to be run from the project root.
If a script accepts the options `[GATE]`, `[SERVICE]`, `[TYPE]`, `[VERSION]`, and `[DEPLOY]`, either none or all of them have to be passed.
In some cases, a temporary file is used to pass this information; it is populated via an interactive prompt.
If no information is passed, the script iterates over all services; otherwise, it is only executed on the specified one.
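For illustration, an invocation could look like this - the script name and option values are placeholders, not actual project values:

```bash
# Pass none of the options - the script iterates over all services:
./scripts/operations/deploy.sh

# ...or pass all of them - only the specified service is affected:
./scripts/operations/deploy.sh some-gate some-service micro 1.0.0 latest
```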
This requires `kubectl` to be connected to a cluster.
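To verify the connection before running any of these scripts, something like this works:

```bash
# Show the currently active context - fails if no kubeconfig is set up.
kubectl config current-context

# Confirm the cluster is actually reachable.
kubectl cluster-info
```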
Install all necessary controllers to the given cluster (a manual install is sketched below):
- `ingress-nginx`: the official `nginx` kubernetes controller, maintained by kubernetes
- `kubegres`: the `postgres` controller
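For illustration, installing both controllers by hand could look roughly like this - the manifest URLs and versions are assumptions based on the upstream documentation, and the script may pin different versions:

```bash
# ingress-nginx - the official kubernetes nginx ingress controller.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.2/deploy/static/provider/cloud/deploy.yaml

# kubegres - the postgres controller.
kubectl apply -f https://raw.githubusercontent.com/reactive-tech/kubegres/v1.16/kubegres.yaml
```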
Install all necessary elements of the given environment as well as all gates - see the kubernetes documentation for details.
If no service was specified, this affects all services.
If `current_service` was specified, `./scripts/data/current_service` is used as input.
If this file doesn't exist, `./scripts/development/switch_current_service.sh` is called to create it.
If `GATE` and `SERVICE` were specified, the specified microservice is affected.
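A minimal sketch of this selection logic, assuming `jq` is available and that `data/services.json` holds an array of `{gate, service}` objects (the JSON shape is an assumption):

```bash
#!/bin/bash
# Sketch only: pick the affected services.
if [[ -n "$1" && -n "$2" ]]; then
  services=("$1/$2")                                   # GATE and SERVICE were passed.
elif [[ -f ./scripts/data/current_service ]]; then
  services=("$(cat ./scripts/data/current_service)")   # Use the temporary file.
else
  # Assumed JSON shape: [{"gate": ..., "service": ...}, ...]
  mapfile -t services < <(jq -r '.[] | "\(.gate)/\(.service)"' data/services.json)
fi

for service in "${services[@]}"; do
  echo "Deploying ${service}..."
done
```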
This applies the manifests for the given service - see the kubernetes documentation for details.
If the `development` or `integration` environment was chosen, some more steps are executed:
- The affected containers get rebuilt
- The deployment gets a rolling restart (see the `kubectl` sketch below)
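The rolling restart can be triggered with `kubectl`; the deployment and namespace names below are placeholders:

```bash
# Trigger a rolling restart and wait for it to finish.
kubectl rollout restart deployment/some-service --namespace some-namespace
kubectl rollout status deployment/some-service --namespace some-namespace
```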
This can be used to refresh the TLS certificate.
The contact email address has to be set as an environment variable (`KHALEESI_EMAIL`), as well as the affected domain (`KHALEESI_DOMAIN`).
The certificate will be put into the `letsencrypt` folder - note that this folder is listed in `.gitignore` so as not to leak any private keys.
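A sketch of such a refresh using `certbot` - whether the script actually uses `certbot`, and which challenge type, is an assumption:

```bash
# Both variables have to be set beforehand.
export KHALEESI_EMAIL='admin@example.com'
export KHALEESI_DOMAIN='example.com'

# Request a certificate and keep all state inside ./letsencrypt,
# which is listed in .gitignore so no private keys get committed.
certbot certonly --standalone \
  --email "${KHALEESI_EMAIL}" --agree-tos \
  --domain "${KHALEESI_DOMAIN}" \
  --config-dir ./letsencrypt --work-dir ./letsencrypt --logs-dir ./letsencrypt
```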
If no service was specified, this affects all services.
If `current_service` was specified, `./scripts/data/current_service` is used as input.
If this file doesn't exist, `./scripts/development/switch_current_service.sh` is called to create it.
If `GATE` and `SERVICE` were specified, the specified microservice is affected.
This will first build the test containers. Afterwards, it will run all configured tests (sketched below) - this includes:
- Unit and integration tests
- Linting
- Static type checking (for backgates and microservices)
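For a python-based service, those steps could look roughly like this - the exact tools and paths are assumptions:

```bash
# Unit and integration tests.
python -m pytest tests/

# Linting.
python -m pylint src/

# Static type checking (backgates and microservices only).
python -m mypy src/
```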
This will generate the code for all proto files located in `/proto`.
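Assuming the python `grpcio-tools` package is used for generation (an assumption - the output directory `generated` is a placeholder):

```bash
# Generate message classes and gRPC stubs for every proto file.
mkdir -p generated
python -m grpc_tools.protoc \
  --proto_path=proto \
  --python_out=generated \
  --grpc_python_out=generated \
  proto/*.proto
```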
This will prompt the user to select the `current_service` that is used by `./development/test.sh` and `./operations/deploy.sh`.
This only exists in `current_service` mode.
It will prompt the user for the app name for which migrations should be created, and then copy them into the correct migrations folder.
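With `django`, this could look roughly like the following - the app name and target path are placeholders:

```bash
# Let django generate the migration files for the chosen app...
python manage.py makemigrations some_app

# ...then copy them into the service's migrations folder (path is an assumption).
cp some_app/migrations/*.py backend/some_service/some_app/migrations/
```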
This will create a new service. It will prompt the user for some information:
- The `gate` this new service is for
- The `type` of service:
  - gate
  - micro
- If micro was specified as type, the `service` name will be prompted for
Afterwards, it will do the following:
- Add the new service to `data/services.json` (a hypothetical entry is sketched below)
- Add kubernetes manifests for the service
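The shape of a `data/services.json` entry is not documented here, so the fields below are pure guesses for illustration:

```bash
# Hypothetical: append a new entry with jq - all field names are assumptions.
jq '. += [{"gate": "some-gate", "type": "micro", "name": "some-service"}]' \
  data/services.json > data/services.json.tmp \
  && mv data/services.json.tmp data/services.json
```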
For gates, it will additionally create the following:
- A skeleton `react` project to hold the frontgate code
- A skeleton `django` project to hold the backgate code
For microservices, it will additionally create the following:
- A skeleton `django` project to hold the code
Of course, you should thoroughly review the `git` diff before committing anything to source control.
The `templates` folder contains the data necessary for this script to work and is documented here.
These scripts are not intended to be called directly, but are used by the other scripts.
This will build the containers in either `development` or `production` mode.
If no service was specified, it will do so for all services; otherwise, only for the specified one.
Afterwards, any dangling images get pruned.
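A sketch of one iteration of that build step using `docker` - the image tag and build target are assumptions:

```bash
# Build the image for one service in the requested mode...
docker build --target development --tag some-service:latest .

# ...and clean up any dangling images afterwards.
docker image prune --force
```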
This can be used to recreate the kubernetes secret holding the TLS certificate (e.g. if the namespace gets set up from scratch).
If no service was specified, the function gets executed on all services; otherwise, it gets executed only on the specified one.
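Recreating a TLS secret with `kubectl` could look like this - the secret name, namespace, and certificate paths are placeholders (the paths follow certbot's default layout):

```bash
# Delete the old secret if it still exists, then recreate it from the cert files.
kubectl delete secret some-tls-secret --namespace some-namespace --ignore-not-found
kubectl create secret tls some-tls-secret --namespace some-namespace \
  --cert=letsencrypt/live/example.com/fullchain.pem \
  --key=letsencrypt/live/example.com/privkey.pem
```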
This will parse the raw input to extract service information from the json object.
This will parse the raw input to extract environment information from the json object.
This validates that the provided environment exists.
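A minimal sketch of such helpers with `jq`, under the same assumed JSON shape as above (the environments folder path is also an assumption):

```bash
# Sketch: pull gate and service name out of one raw JSON object.
parse_service() {
  local raw="$1"
  gate=$(echo "${raw}" | jq -r '.gate')
  service=$(echo "${raw}" | jq -r '.service')
}

# Sketch: validate that manifests exist for the provided environment.
validate_environment() {
  local environment="$1"
  if [[ ! -d "kubernetes/environment/${environment}" ]]; then
    echo "Invalid environment: ${environment}" >&2
    exit 1
  fi
}
```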