change prepareResource for domainMapping, aRecordSet and Certificate
With this PR, the name and fields of the DomainMapping, ARecordSet and
Certificate objects are all based on the provided DNS Zone (now parsed from
a ConfigMap) rather than only on the Capp Route Hostname. This ensures that
the resources are created for the correct DNS Zone, even if the hostname
does not explicitly indicate the Zone. The tests have been updated
accordingly, and the README.md has been modified.

Signed-off-by: mzeevi <meytar80@gmail.com>
mzeevi committed Jun 4, 2024
1 parent dc873cd commit a2204cc
Showing 23 changed files with 264 additions and 133 deletions.
28 changes: 23 additions & 5 deletions README.md
@@ -45,15 +45,17 @@ The `container-app-operator` project can work as a standalone solution, but is m

1. A Kubernetes cluster (you can [use KinD](https://kind.sigs.k8s.io/docs/user/quick-start/)).

2. `Knative Serving` installed on the cluster (you can [use the quickstart](https://knative.dev/docs/getting-started/quickstart-install/)).
2. `knative-serving` installed on the cluster (you can [use the quickstart](https://knative.dev/docs/getting-started/quickstart-install/)).

3. `NFSPVC Operator` installed on the cluster (you can [use the `install.yaml`](https://github.com/dana-team/nfspvc-operator/releases)).
3. `nfspvc-operator` installed on the cluster (you can [use the `install.yaml`](https://github.com/dana-team/nfspvc-operator/releases)).

4. `External DNS` installed on the cluster (you can [use the Helm Chart](https://artifacthub.io/packages/helm/external-dns/external-dns)).
4. `provider-dns` and `Crossplane` installed on the cluster (you can [follow the instructions](https://github.com/dana-team/provider-dns) for the provider and [for Crossplane](https://docs.crossplane.io/latest/software/install/)).

5. `Logging Operator` installed on the cluster (you can [use the Helm Chart](https://kube-logging.dev/docs/install/#deploy-logging-operator-with-helm)).
5. `certificate-operator` installed on the cluster (you can [use the `install.yaml`](https://github.com/dana-team/certificate-operator/releases)).

`Knative Serving`, `Logging Operator`, `NFSPVC Operator` and `External DNS` can also be installed by running:
5. `logging-operator` installed on the cluster (you can [use the Helm Chart](https://kube-logging.dev/docs/install/#deploy-logging-operator-with-helm)).

Everything can also be installed by running:

```bash
$ make prereq
@@ -109,6 +111,22 @@ It's possible to use the following one-liner:
$ kubectl patch --namespace knative-serving configmap/config-features --type merge --patch '{"data":{"kubernetes.podspec-persistent-volume-claim": "enabled", "kubernetes.podspec-persistent-volume-write": "enabled"}}'
```

### Using a Custom Hostname

`Capp` enables using a custom hostname for the application. Using one in turn creates `DomainMapping` and `ARecordSet` objects, as well as a `Certificate` object if `TLS` is desired.

To create the resources correctly, the operator must be provided with the `DNS Zone` in which the application is exposed. This is done via a `ConfigMap` called `dns-zone`, which must be created in the operator namespace. Note the trailing `.`, which must be added to the zone name:

```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: dns-zone
  namespace: capp-operator-system
data:
  zone: "capp-zone.com."
```
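Equivalently, the ConfigMap can be created imperatively. This one-liner is a convenience sketch, assuming the operator runs in the default `capp-operator-system` namespace:

```shell
# Create the dns-zone ConfigMap that the operator reads the DNS Zone from.
# Note the trailing dot in the zone value.
kubectl create configmap dns-zone \
  --namespace capp-operator-system \
  --from-literal=zone="capp-zone.com."
```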

## Example Capp

```yaml
16 changes: 0 additions & 16 deletions config/default/kustomization.yaml
@@ -41,24 +41,8 @@ patches:
# 'CERTMANAGER' needs to be enabled to use ca injection
#- path: webhookcainjection_patch.yaml

configMapGenerator:
- name: dns-zone
literals:
- zone="dana-team-zone."

# [CERTMANAGER] To enable cert-manager, uncomment all sections with 'CERTMANAGER' prefix.
# Uncomment the following replacements to add the cert-manager CA injection annotations
replacements:
- source:
kind: ConfigMap
name: dns-zone
fieldPath: data.zone
targets:
- select:
kind: Deployment
name: controller-manager
fieldPaths:
- spec.template.spec.containers.[name=manager].env.[name=CAPP_DNS_ZONE].value
# - source: # Add cert-manager annotation to ValidatingWebhookConfiguration, MutatingWebhookConfiguration and CRDs
# kind: Certificate
# group: cert-manager.io
3 changes: 0 additions & 3 deletions config/manager/manager.yaml
@@ -90,9 +90,6 @@ spec:
port: 8081
initialDelaySeconds: 5
periodSeconds: 10
env:
- name: CAPP_DNS_ZONE
value: $(ZONE)
# TODO(user): Configure the resources accordingly based on the project requirements.
# More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
resources:
25 changes: 7 additions & 18 deletions internal/kinds/capp/resourcemanagers/arecordset.go
@@ -4,9 +4,9 @@ import (
"context"
"fmt"
"net"
"os"
"reflect"

"github.com/dana-team/container-app-operator/internal/kinds/capp/utils"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/apimachinery/pkg/selection"

@@ -29,7 +29,6 @@ const (
eventCappARecordSetCreationFailed = "ARecordSetCreationFailed"
eventCappARecordSetCreated = "ARecordSetCreated"
cappNamespaceKey = "rcs.dana.io/parent-capp-ns"
zoneEnvKey = "CAPP_DNS_ZONE"
xpProviderDNSConfigRefName = "dns-default"
localhostAddress = "127.0.0.1"
)
@@ -41,18 +40,6 @@ type ARecordSetManager struct {
EventRecorder record.EventRecorder
}

// getZoneForRecord returns the zone to be used for the record from an environment variable.
func getZoneForRecord() (string, error) {
zone, ok := os.LookupEnv(zoneEnvKey)
if !ok {
return zone, fmt.Errorf("%s environment variable not set", zoneEnvKey)
} else if zone == "" {
return zone, fmt.Errorf("%s environment variable is empty", zoneEnvKey)
}

return zone, nil
}

// getAddressesForRecord performs a DNS lookup for the given URL and returns a slice of IP address strings.
func getAddressesForRecord(url string) ([]*string, error) {
ips, err := net.LookupIP(url)
@@ -74,12 +61,14 @@ func (r ARecordSetManager) prepareResource(capp cappv1alpha1.Capp) (dnsv1alpha1.
placeholderAddress := localhostAddress
addresses := []*string{&placeholderAddress}

name := capp.Spec.RouteSpec.Hostname
zone, err := getZoneForRecord()
zone, err := utils.GetZoneFromConfig(r.Ctx, r.K8sclient)
if err != nil {
return dnsv1alpha1.ARecordSet{}, err
}

resourceName := utils.GenerateResourceName(capp.Spec.RouteSpec.Hostname, zone)
recordName := utils.GenerateRecordName(capp.Spec.RouteSpec.Hostname, zone)

// under normal behavior, it can be assumed that this condition will eventually be satisfied,
// since another reconciliation loop will run after the Capp status is updated
if capp.Status.KnativeObjectStatus.URL != nil {
@@ -92,15 +81,15 @@
recordset := dnsv1alpha1.ARecordSet{
TypeMeta: metav1.TypeMeta{},
ObjectMeta: metav1.ObjectMeta{
Name: capp.Spec.RouteSpec.Hostname,
Name: resourceName,
Labels: map[string]string{
CappResourceKey: capp.Name,
cappNamespaceKey: capp.Namespace,
},
},
Spec: dnsv1alpha1.ARecordSetSpec{
ForProvider: dnsv1alpha1.ARecordSetParameters{
Name: &name,
Name: &recordName,
Zone: &zone,
Addresses: addresses,
},
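The diff relies on two helpers from the `utils` package, `GenerateResourceName` and `GenerateRecordName`, whose bodies are not part of this commit. The sketch below is a purely illustrative assumption of their behavior, inferred from how `ObjectMeta.Name` and `ForProvider.Name` are used above: the resource name qualifies the hostname with the zone (minus its trailing dot), while the record name is the hostname relative to the zone. It is not the actual implementation.

```go
package main

import (
	"fmt"
	"strings"
)

// generateResourceName qualifies the Capp hostname with the DNS zone,
// dropping the zone's trailing dot (hypothetical sketch).
func generateResourceName(hostname, zone string) string {
	z := strings.TrimSuffix(zone, ".")
	if hostname == z || strings.HasSuffix(hostname, "."+z) {
		return hostname // already fully qualified
	}
	return hostname + "." + z
}

// generateRecordName returns the hostname relative to the zone, i.e. the
// label(s) a DNS provider expects for a record inside that zone
// (hypothetical sketch).
func generateRecordName(hostname, zone string) string {
	z := strings.TrimSuffix(zone, ".")
	return strings.TrimSuffix(strings.TrimSuffix(hostname, "."+z), ".")
}

func main() {
	fmt.Println(generateResourceName("my-app", "capp-zone.com.")) // my-app.capp-zone.com
	fmt.Println(generateRecordName("my-app.capp-zone.com", "capp-zone.com.")) // my-app
}
```

Under these assumptions, both a bare hostname (`my-app`) and an already-qualified one (`my-app.capp-zone.com`) resolve to the same object name, which is what makes the resources land in the correct zone regardless of how the hostname was written.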
25 changes: 19 additions & 6 deletions internal/kinds/capp/resourcemanagers/certificate.go
@@ -6,6 +6,7 @@ import (

certv1alpha1 "github.com/dana-team/certificate-operator/api/v1alpha1"
cappv1alpha1 "github.com/dana-team/container-app-operator/api/v1alpha1"
"github.com/dana-team/container-app-operator/internal/kinds/capp/utils"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
@@ -35,11 +36,17 @@ type CertificateManager struct {
}

// prepareResource prepares a Certificate resource based on the provided Capp.
func (c CertificateManager) prepareResource(capp cappv1alpha1.Capp) certv1alpha1.Certificate {
return certv1alpha1.Certificate{
func (c CertificateManager) prepareResource(capp cappv1alpha1.Capp) (certv1alpha1.Certificate, error) {
zone, err := utils.GetZoneFromConfig(c.Ctx, c.K8sclient)
if err != nil {
return certv1alpha1.Certificate{}, err
}
resourceName := utils.GenerateResourceName(capp.Spec.RouteSpec.Hostname, zone)

certificate := certv1alpha1.Certificate{
TypeMeta: metav1.TypeMeta{},
ObjectMeta: metav1.ObjectMeta{
Name: capp.Spec.RouteSpec.Hostname,
Name: resourceName,
Namespace: capp.Namespace,
Labels: map[string]string{
CappResourceKey: capp.Name,
@@ -48,10 +55,10 @@ func (c CertificateManager) prepareResource(capp cappv1alpha1.Capp) certv1alpha1
Spec: certv1alpha1.CertificateSpec{
CertificateData: certv1alpha1.CertificateData{
Subject: certv1alpha1.Subject{
CommonName: capp.Spec.RouteSpec.Hostname,
CommonName: resourceName,
},
San: certv1alpha1.San{
DNS: []string{capp.Spec.RouteSpec.Hostname},
DNS: []string{resourceName},
},
Form: certificateForm,
},
@@ -61,6 +68,8 @@ func (c CertificateManager) prepareResource(capp cappv1alpha1
},
},
}

return certificate, nil
}

// CleanUp attempts to delete the associated Certificate for a given Capp resource.
@@ -96,7 +105,11 @@ func (c CertificateManager) Manage(capp cappv1alpha1.Capp) error {

// create creates a Certificate resource.
func (c CertificateManager) create(capp cappv1alpha1.Capp) error {
certificateFromCapp := c.prepareResource(capp)
certificateFromCapp, err := c.prepareResource(capp)
if err != nil {
return fmt.Errorf("failed to prepare Certificate: %w", err)
}

certificate := certv1alpha1.Certificate{}
resourceManager := rclient.ResourceManagerClient{Ctx: c.Ctx, K8sclient: c.K8sclient, Log: c.Log}

20 changes: 16 additions & 4 deletions internal/kinds/capp/resourcemanagers/domainmapping.go
@@ -5,6 +5,8 @@ import (
"fmt"
"reflect"

"github.com/dana-team/container-app-operator/internal/kinds/capp/utils"

rclient "github.com/dana-team/container-app-operator/internal/kinds/capp/resourceclient"

cappv1alpha1 "github.com/dana-team/container-app-operator/api/v1alpha1"
@@ -39,11 +41,17 @@ type KnativeDomainMappingManager struct {
}

// PrepareKnativeDomainMapping creates a new DomainMapping for a Knative service.
func (k KnativeDomainMappingManager) prepareResource(capp cappv1alpha1.Capp) knativev1beta1.DomainMapping {
func (k KnativeDomainMappingManager) prepareResource(capp cappv1alpha1.Capp) (knativev1beta1.DomainMapping, error) {
zone, err := utils.GetZoneFromConfig(k.Ctx, k.K8sclient)
if err != nil {
return knativev1beta1.DomainMapping{}, err
}
resourceName := utils.GenerateResourceName(capp.Spec.RouteSpec.Hostname, zone)

knativeDomainMapping := &knativev1beta1.DomainMapping{
TypeMeta: metav1.TypeMeta{},
ObjectMeta: metav1.ObjectMeta{
Name: capp.Spec.RouteSpec.Hostname,
Name: resourceName,
Namespace: capp.Namespace,
Labels: map[string]string{
CappResourceKey: capp.Name,
@@ -64,7 +72,7 @@ func (k KnativeDomainMappingManager) prepareResource(capp cappv1alpha1.Capp) kna
}
}

return *knativeDomainMapping
return *knativeDomainMapping, nil
}

// CleanUp attempts to delete the associated DomainMapping for a given Capp resource.
@@ -100,7 +108,11 @@ func (k KnativeDomainMappingManager) Manage(capp cappv1alpha1.Capp) error {

// createOrUpdate creates or updates a DomainMapping resource.
func (k KnativeDomainMappingManager) createOrUpdate(capp cappv1alpha1.Capp) error {
domainMappingFromCapp := k.prepareResource(capp)
domainMappingFromCapp, err := k.prepareResource(capp)
if err != nil {
return fmt.Errorf("failed to prepare DomainMapping: %w", err)
}

domainMapping := knativev1beta1.DomainMapping{}
resourceManager := rclient.ResourceManagerClient{Ctx: k.Ctx, K8sclient: k.K8sclient, Log: k.Log}

7 changes: 3 additions & 4 deletions internal/kinds/capp/resourcemanagers/ksvc.go
@@ -25,8 +25,7 @@ const (
danaAnnotationsPrefix = "rcs.dana.io"
cappDisabledState = "disabled"
cappEnabledState = "enabled"
DefaultAutoScaleCM = "autoscale-defaults"
CappNS = "capp-operator-system"
defaultAutoScaleCM = "autoscale-defaults"
KnativeServing = "knativeServing"
eventCappKnativeServiceCreationFailed = "KnativeServiceCreationFailed"
eventCappKnativeServiceCreated = "KnativeServiceCreated"
@@ -77,8 +76,8 @@ func (k KnativeServiceManager) prepareResource(capp cappv1alpha1.Capp, ctx conte
knativeService.Spec.Template.Spec.Volumes = append(knativeService.Spec.Template.Spec.Volumes, volumes...)

defaultCM := corev1.ConfigMap{}
if err := k.K8sclient.Get(k.Ctx, types.NamespacedName{Namespace: CappNS, Name: DefaultAutoScaleCM}, &defaultCM); err != nil {
k.Log.Error(err, fmt.Sprintf("could not fetch configMap: %q", CappNS))
if err := k.K8sclient.Get(k.Ctx, types.NamespacedName{Namespace: utils.CappNS, Name: defaultAutoScaleCM}, &defaultCM); err != nil {
k.Log.Error(err, fmt.Sprintf("could not fetch configMap from namespace %q", utils.CappNS))
}

knativeService.Spec.Template.ObjectMeta.Annotations = utils.MergeMaps(knativeServiceAnnotations, autoscale.SetAutoScaler(capp, defaultCM.Data))
25 changes: 16 additions & 9 deletions internal/kinds/capp/status/route.go
@@ -4,6 +4,7 @@ import (
"context"

rmanagers "github.com/dana-team/container-app-operator/internal/kinds/capp/resourcemanagers"
"github.com/dana-team/container-app-operator/internal/kinds/capp/utils"

certv1alpha1 "github.com/dana-team/certificate-operator/api/v1alpha1"
cappv1alpha1 "github.com/dana-team/container-app-operator/api/v1alpha1"
@@ -18,17 +19,22 @@
func buildRouteStatus(ctx context.Context, kubeClient client.Client, capp cappv1alpha1.Capp, isRequired map[string]bool) (cappv1alpha1.RouteStatus, error) {
routeStatus := cappv1alpha1.RouteStatus{}

domainMappingStatus, err := buildDomainMappingStatus(ctx, kubeClient, capp, isRequired[rmanagers.DomainMapping])
zone, err := utils.GetZoneFromConfig(ctx, kubeClient)
if err != nil {
return routeStatus, err
}

aRecordSetStatus, err := buildARecordSetStatus(ctx, kubeClient, capp, isRequired[rmanagers.ARecordSet])
domainMappingStatus, err := buildDomainMappingStatus(ctx, kubeClient, capp, isRequired[rmanagers.DomainMapping], zone)
if err != nil {
return routeStatus, err
}

certificateStatus, err := buildCertificateStatus(ctx, kubeClient, capp, isRequired[rmanagers.Certificate])
aRecordSetStatus, err := buildARecordSetStatus(ctx, kubeClient, capp, isRequired[rmanagers.ARecordSet], zone)
if err != nil {
return routeStatus, err
}

certificateStatus, err := buildCertificateStatus(ctx, kubeClient, capp, isRequired[rmanagers.Certificate], zone)
if err != nil {
return routeStatus, err
}
@@ -42,13 +48,13 @@ func buildRouteStatus(ctx context.Context, kubeClient client.Client, capp cappv1

// buildDomainMappingStatus partly constructs the Route Status of the Capp object in accordance to the
// status of the corresponding DomainMapping object.
func buildDomainMappingStatus(ctx context.Context, kubeClient client.Client, capp cappv1alpha1.Capp, isRequired bool) (knativev1beta1.DomainMappingStatus, error) {
func buildDomainMappingStatus(ctx context.Context, kubeClient client.Client, capp cappv1alpha1.Capp, isRequired bool, zone string) (knativev1beta1.DomainMappingStatus, error) {
if !isRequired {
return knativev1beta1.DomainMappingStatus{}, nil
}

domainMapping := &knativev1beta1.DomainMapping{}
domainMappingName := capp.Spec.RouteSpec.Hostname
domainMappingName := utils.GenerateResourceName(capp.Spec.RouteSpec.Hostname, zone)
if err := kubeClient.Get(ctx, types.NamespacedName{Namespace: capp.Namespace, Name: domainMappingName}, domainMapping); err != nil {
return knativev1beta1.DomainMappingStatus{}, err
}
@@ -58,13 +64,13 @@ func buildDomainMappingStatus(ctx context.Context, kubeClient client.Client, cap

// buildCertificateStatus partly constructs the Route Status of the Capp object in accordance to the
// status of the corresponding Certificate object.
func buildCertificateStatus(ctx context.Context, kubeClient client.Client, capp cappv1alpha1.Capp, isRequired bool) (certv1alpha1.CertificateStatus, error) {
func buildCertificateStatus(ctx context.Context, kubeClient client.Client, capp cappv1alpha1.Capp, isRequired bool, zone string) (certv1alpha1.CertificateStatus, error) {
if !isRequired {
return certv1alpha1.CertificateStatus{}, nil
}

certificate := &certv1alpha1.Certificate{}
certificateName := capp.Spec.RouteSpec.Hostname
certificateName := utils.GenerateResourceName(capp.Spec.RouteSpec.Hostname, zone)
if err := kubeClient.Get(ctx, types.NamespacedName{Namespace: capp.Namespace, Name: certificateName}, certificate); err != nil {
return certv1alpha1.CertificateStatus{}, err
}
@@ -74,13 +80,14 @@ func buildCertificateStatus(ctx context.Context, kubeClient client.Client, capp

// buildARecordSetStatus partly constructs the Route Status of the Capp object in accordance to the
// status of the corresponding ARecordSet object.
func buildARecordSetStatus(ctx context.Context, kubeClient client.Client, capp cappv1alpha1.Capp, isRequired bool) (dnsv1alpha1.ARecordSetStatus, error) {
func buildARecordSetStatus(ctx context.Context, kubeClient client.Client, capp cappv1alpha1.Capp, isRequired bool, zone string) (dnsv1alpha1.ARecordSetStatus, error) {
if !isRequired {
return dnsv1alpha1.ARecordSetStatus{}, nil
}

aRecordSet := &dnsv1alpha1.ARecordSet{}
if err := kubeClient.Get(ctx, types.NamespacedName{Name: capp.Spec.RouteSpec.Hostname}, aRecordSet); err != nil {
aRecordSetName := utils.GenerateResourceName(capp.Spec.RouteSpec.Hostname, zone)
if err := kubeClient.Get(ctx, types.NamespacedName{Name: aRecordSetName}, aRecordSet); err != nil {
return dnsv1alpha1.ARecordSetStatus{}, err
}

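`utils.GetZoneFromConfig` itself is not shown in this diff. Below is a simplified, dependency-free sketch of the parsing it presumably performs on the `dns-zone` ConfigMap data; the real helper fetches the ConfigMap with the Kubernetes client first, and the validation rules here are assumptions modeled on the README's trailing-dot note and the env-var checks removed in `arecordset.go` above.

```go
package main

import (
	"fmt"
	"strings"
)

// getZoneFromData extracts and validates the DNS zone from the data map of
// the `dns-zone` ConfigMap (illustrative sketch, not the actual helper).
func getZoneFromData(data map[string]string) (string, error) {
	zone, ok := data["zone"]
	if !ok || zone == "" {
		return "", fmt.Errorf("zone key missing or empty in dns-zone ConfigMap")
	}
	// The README requires the zone to be written with a trailing dot,
	// e.g. "capp-zone.com.".
	if !strings.HasSuffix(zone, ".") {
		return "", fmt.Errorf("zone %q must end with a trailing dot", zone)
	}
	return zone, nil
}

func main() {
	zone, err := getZoneFromData(map[string]string{"zone": "capp-zone.com."})
	if err != nil {
		panic(err)
	}
	fmt.Println(zone) // capp-zone.com.
}
```

Note how `buildRouteStatus` fetches the zone once and threads it into every status builder, so the three lookups agree on the same generated names.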