The container-app-operator is an operator that reconciles `Capp` CRs.

`Capp` (or ContainerApp) provides a higher-level abstraction for deploying containerized Serverless workloads, making it easier for end-users to deploy workloads on Kubernetes without being well-versed in Kubernetes concepts, while adhering to the standards required by the infrastructure and platform teams without placing any extra burden on the users.
The operator uses open-source projects, such as knative-serving, logging-operator, nfspvc-operator and provider-dns, to create an abstraction for containerized workloads.

The container-app-operator project can work as a standalone solution, but it is mostly used together with the rcs-ocm-deployer project, which allows deploying `Capp` workloads in a multi-cluster set-up using the OCM (Open Cluster Management) open-source project.
- The `capp controller` reconciles the `Capp` CRs in the cluster and creates (if needed) a `Knative Service` (`ksvc`) CR, a `DomainMapping` CR, and `Flow` & `Output` CRs for every `Capp` (see the example after this list).
- The `knative controller` reconciles the `ksvc` CRs in the cluster and controls the lifecycle of an autoscaler and of the pods relevant to the `ksvc`.
- The `nfspvc-operator controller` reconciles the `NFSPVC` CRs in the cluster and creates `PVC`s and `PV`s with an external NFS storage configuration (bring your own NFS).
- The `provider-dns` is a `Crossplane Provider` which reconciles the DNS Record CRs in the cluster and creates DNS Records in the pre-configured DNS provider (bring your own DNS provider).
- The `cert-external-issuer` reconciles `Certificate` CRs in the cluster and creates certificates using the Cert API.
- The `logging-operator controller` reconciles the `Flow` and `Output` CRs in the cluster, collects logs from the pods' `stdout` and sends them to a pre-existing `Elasticsearch` index (bring your own indexes).
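As an illustration of this resource chain, after a `Capp` is reconciled you can list the derived objects in its namespace. This is a hedged sketch: the `capp-sample` namespace is hypothetical, and each resource kind is only recognized by `kubectl` once the corresponding project from the list above is installed.

```bash
# List the Capp and the objects created for it (namespace is illustrative).
$ kubectl get capp,ksvc,domainmapping,flow,output --namespace capp-sample
```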
- Support for an autoscaler (`HPA` or `KPA`) according to the chosen `scaleMetric` (`concurrency`, `rps`, `cpu`, `memory`) with default settings.
- Support for HTTP/HTTPS `DomainMapping` for accessing applications via `Ingress`/`Route`.
- Support for `DNS Records` lifecycle management based on the `hostname` API field.
- Support for `Certificate` lifecycle management based on the `hostname` API field.
- Support for all `Knative Serving` configurations.
- Support for exporting logs to an `Elasticsearch` index.
- Support for changing the state of `Capp` from `enabled` (workload is in a running state) to `disabled` (workload is not in a running state); see the sketch after this list.
- Support for external NFS storage connected to `Capp` by using `volumeMounts`.
- Support for `CappRevisions` to keep track of changes to `Capp` in a different CRD (up to 10 `CappRevisions` are saved for each `Capp`).
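The `state` toggle and `CappRevisions` can be exercised directly with `kubectl`. A minimal sketch, assuming a `Capp` named `capp-sample` in the `capp-sample` namespace (both names are illustrative):

```bash
# Move the workload out of the running state by setting spec.state to disabled.
$ kubectl patch capp capp-sample --namespace capp-sample --type merge --patch '{"spec":{"state":"disabled"}}'

# Inspect the revision history kept for the Capp (up to 10 entries).
$ kubectl get capprevisions --namespace capp-sample
```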
- A Kubernetes cluster (you can use KinD).
- `knative-serving` installed on the cluster (you can use the quickstart).
- `nfspvc-operator` installed on the cluster (you can use the `install.yaml`).
- `provider-dns` and `Crossplane` installed on the cluster (you can follow the instructions for the provider and for Crossplane).
- `certificate-external-issuer` installed on the cluster (you can use the `install.yaml`).
- `logging-operator` installed on the cluster (you can use the Helm Chart).
Everything can also be installed by running:

```bash
$ make prereq
```

Behind the scenes, this uses a Helmfile which is available at the charts/capp-prereq-helmfile.yaml file in this repository.
The Helmfile defines, in a single YAML file, all the Helm Charts which are installed as prerequisites for `Capp`. To run the Helmfile directly, use:

```bash
$ helmfile apply -f charts/capp-prereq-helmfile.yaml
```
Helmfile, similarly to Helm, allows setting values for the installed Charts, either using a state values file (`--state-values-file`) or using individual key-value pairs (`--state-values-set`). For example, to change the Chart values of `provider-dns`, which is defined in the Helmfile, you can use:

```bash
$ helmfile apply -f charts/capp-prereq-helmfile.yaml --state-values-set providerDNSRealmName=<value>
```
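For the file-based variant, the same override can live in a small state values file. A hedged sketch: the file name `values.yaml` and the realm value are placeholders, while `providerDNSRealmName` is the key used in the example above:

```bash
# Write the override to a state values file (file name is illustrative).
$ cat > values.yaml <<EOF
providerDNSRealmName: EXAMPLE-REALM.COM
EOF

# Apply the Helmfile with the state values file.
$ helmfile apply -f charts/capp-prereq-helmfile.yaml --state-values-file values.yaml
```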
You can also pass variables to the `Makefile` to control the underlying values in the dependent Charts. For example, to install `provider-dns` with certain Chart values, run:

```bash
$ make prereq PROVIDER_DNS_REALM=<value> PROVIDER_DNS_KDC=<value> PROVIDER_DNS_POLICY=<value> PROVIDER_DNS_NAMESERVER=<value> PROVIDER_DNS_USERNAME=<value> PROVIDER_DNS_PASSWORD=<value>
```
| Value Name | Default Value | Explanation |
|---|---|---|
| `PROVIDER_DNS_REALM` | `DANA-DEV.COM` | Defines the name of the Kerberos Realm to use in the provider. |
| `PROVIDER_DNS_KDC` | `dana-wdc-1.dana-dev.com` | Defines the name of the Kerberos Key Distribution Center server. |
| `PROVIDER_DNS_POLICY` | `ClusterFirst` | Defines the `dnsPolicy` of the `provider-dns` deployment. If used, it should be set to `None`. |
| `PROVIDER_DNS_NAMESERVER` | `8.8.8.8` | The nameserver to use in the `dnsConfig` of the `provider-dns` deployment if `dnsPolicy` is set to `None`. |
| `PROVIDER_DNS_USERNAME` | `dana` | Defines the username to connect to the KDC with. |
| `PROVIDER_DNS_PASSWORD` | `passw0rd` | Defines the password to connect to the KDC with. |
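For example, to point the provider at a lab KDC while overriding the DNS policy, you could run the following; every value here is a placeholder, and note that `PROVIDER_DNS_NAMESERVER` only takes effect when `PROVIDER_DNS_POLICY` is set to `None`, per the table above:

```bash
$ make prereq PROVIDER_DNS_REALM=EXAMPLE.COM PROVIDER_DNS_KDC=kdc.example.com PROVIDER_DNS_POLICY=None PROVIDER_DNS_NAMESERVER=10.0.0.53 PROVIDER_DNS_USERNAME=admin PROVIDER_DNS_PASSWORD=s3cret
```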
Use Helm to deploy `Capp` with all the needed resources; the Chart is available in the charts/container-app-operator directory of this repository. Only deploy it after installing the prerequisites:

```bash
$ helm upgrade --install capp-operator --namespace capp-operator-system --create-namespace oci://ghcr.io/dana-team/helm-charts/container-app-operator --version <release>
```

Alternatively, deploy directly with `make`:

```bash
$ make deploy IMG=ghcr.io/dana-team/container-app-operator:<release>
```

To build and push your own image, use:

```bash
$ make docker-build docker-push IMG=<registry>/container-app-operator:<tag>
```
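To verify the deployment, you can check that the controller pod is running; a hedged example, assuming the `capp-operator-system` namespace used in the Helm command above:

```bash
$ kubectl get pods --namespace capp-operator-system
```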
The `autoscaleConfig` is defined within a `CappConfig` CRD named `capp-config` in the namespace of the controller. The `autoscaleConfig` section of the `CappConfig` CRD specifies the scale metric types and their target values. To modify the target values for the autoscaler, edit the existing `CappConfig` resource in the `capp-operator-system` namespace with the desired values.
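For example, to raise the CPU target, you can patch the resource in place. A minimal sketch, assuming the resource is reachable as `cappconfig/capp-config` (`kubectl` accepts the singular form of the CRD name):

```bash
# Set the CPU scale target to 90; other autoscaleConfig fields are left untouched by the merge patch.
$ kubectl patch --namespace capp-operator-system cappconfig/capp-config --type merge --patch '{"spec":{"autoscaleConfig":{"cpu":90}}}'
```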
`Capp` enables using a custom hostname for the application. This in turn creates a `DomainMapping` object, a DNS Record object and, if `TLS` is desired, a `Certificate` object.

To correctly create these resources, the operator must be provided with the DNS configuration of the zone where the application is exposed. This is done using the `dnsConfig` section of the `CappConfig` CRD called `capp-config`, which needs to be created in the operator namespace.

Note the trailing `.` which must be added to the zone name:
```yaml
apiVersion: rcs.dana.io/v1alpha1
kind: CappConfig
metadata:
  name: capp-config
  namespace: capp-operator-system
spec:
  autoscaleConfig:
    rps: 200
    cpu: 80
    memory: 70
    concurrency: 10
    activationScale: 3
  dnsConfig:
    zone: "capp-zone.com."
    cname: "ingress.capp-zone.com."
    provider: "dns-default"
    issuer: "cert-issuer"
```
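Assuming the manifest above is saved as `capp-config.yaml` (file name is illustrative), create it with:

```bash
$ kubectl apply -f capp-config.yaml
```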
In order to use `volumeMounts` in `Capp`, `Knative Serving` needs to be configured to support volumes. This is done by adding the following lines to the `ConfigMap` named `config-features` in the `Knative Serving` namespace:

```yaml
kubernetes.podspec-persistent-volume-claim: enabled
kubernetes.podspec-persistent-volume-write: enabled
```
It's possible to use the following one-liner:

```bash
$ kubectl patch --namespace knative-serving configmap/config-features --type merge --patch '{"data":{"kubernetes.podspec-persistent-volume-claim": "enabled", "kubernetes.podspec-persistent-volume-write": "enabled"}}'
```
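To confirm the patch took effect, you can read one of the flags back; a hedged one-liner (note the escaped dots in the JSONPath key):

```bash
$ kubectl get --namespace knative-serving configmap/config-features --output jsonpath='{.data.kubernetes\.podspec-persistent-volume-claim}'
```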
Below is an example `Capp` resource making use of these features:

```yaml
apiVersion: rcs.dana.io/v1alpha1
kind: Capp
metadata:
  name: capp-sample
  namespace: capp-sample
spec:
  configurationSpec:
    template:
      spec:
        containers:
          - env:
              - name: APP_NAME
                value: capp-env-var
            image: 'ghcr.io/dana-team/capp-gin-app:v0.2.0'
            name: capp-sample
            volumeMounts:
              - name: test-nfspvc
                mountPath: /data
  routeSpec:
    hostname: capp.dev
    tlsEnabled: true
  volumesSpec:
    nfsVolumes:
      - server: test
        path: /test
        name: test-nfspvc
        capacity:
          storage: 200Gi
  logSpec:
    type: elastic
    host: 10.11.12.13
    index: main
    user: elastic
    passwordSecret: es-elastic-user
  scaleMetric: concurrency
  state: enabled
```
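Assuming the manifest above is saved as `capp-sample.yaml` (file name is illustrative), apply it and inspect the resource:

```bash
# The capp-sample namespace must exist beforehand.
$ kubectl apply -f capp-sample.yaml
$ kubectl get capp capp-sample --namespace capp-sample
```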