Apps
The management cluster starts with the same setup as the managed cluster; on top of that we add some additional apps.
Rancher
Let's install Rancher.
Thankfully we can use Helm. Let's add the repo:
helm repo add rancher-stable https://releases.rancher.com/server-charts/stable
helm repo update
Next, let's create the namespace:
kubectl create namespace cattle-system
And finally we can install Rancher:
It is recommended to set up the DNS A/AAAA records before installing Rancher, so that cert-manager can issue the required certificates.
It is possible to do this at a later step, but it is definitely more work.
Further details for the install can be found here.
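Before proceeding, you can verify that the record resolves, e.g. with dig (using the example host from below):
# SHOULD PRINT THE ADDRESS THE RECORD POINTS TO
dig +short A myhost.com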
# THESE ARE THE RECOMMENDED VALUES, BUT THEY CAN BE ADAPTED AS NEEDED
export RANCHER_VERSION=2.6.9
export RANCHER_HOST=myhost.com
export RANCHER_LE_EMAIL=iot@nimbit.de
export RANCHER_CHART_REPO=stable
export RANCHER_PASSWORD=admin
helm install rancher rancher-$RANCHER_CHART_REPO/rancher \
--namespace cattle-system \
--set hostname=$RANCHER_HOST \
--set replicas=3 \
--version=$RANCHER_VERSION \
--set bootstrapPassword=$RANCHER_PASSWORD \
--set ingress.tls.source=letsEncrypt \
--set letsEncrypt.email=$RANCHER_LE_EMAIL \
--set letsEncrypt.ingress.class=nginx \
--set ingress.extraAnnotations.'kubernetes\.io/ingress\.class'=nginx
Now we can wait for it to be rolled out:
kubectl -n cattle-system rollout status deploy/rancher
After the rollout is finished, you can run the following to verify the system is running:
kubectl -n cattle-system get deploy rancher
You should see something like:
NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
rancher   3         3         3            3           3m
Finishing Up
You should now be able to open the URL you defined as the host in your web browser and get started with Rancher.
If this is the initial setup, you will be asked to create a new password.
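If you did not note down the bootstrap password, it can be read back from the cluster; a sketch based on the command from the Rancher documentation:
# READ THE BOOTSTRAP PASSWORD BACK FROM THE CLUSTER
kubectl get secret --namespace cattle-system bootstrap-secret \
  -o go-template='{{.data.bootstrapPassword|base64decode}}{{"\n"}}'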
ArgoCD
Getting ArgoCD up and running is very straightforward.
# CREATE NAMESPACE
kubectl create namespace argocd
Install the latest stable version:
# Install stable
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
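To log in as the admin user you will need the initial password, which the installation stores in the argocd-initial-admin-secret secret:
# RETRIEVE THE INITIAL ADMIN PASSWORD (USER: admin)
kubectl -n argocd get secret argocd-initial-admin-secret \
  -o jsonpath="{.data.password}" | base64 -d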
If we want to make ArgoCD accessible from outside the cluster, we need to create an ingress for it:
# HOST (DNS NAME) should be set here
export HOST=argocd.azure.nimbit.de
kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: tls-argocd-secret
  namespace: argocd
spec:
  secretName: tls-argocd-secret
  issuerRef:
    kind: ClusterIssuer
    name: le-clusterissuer
  commonName: $HOST
  dnsNames:
    - $HOST
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-server-ingress
  namespace: argocd
  annotations:
    cert-manager.io/cluster-issuer: le-clusterissuer
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    # If you encounter a redirect loop or are getting a 307 response code
    # then you need to force the nginx ingress to connect to the backend using HTTPS.
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  rules:
    - host: $HOST
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: argocd-server
                port:
                  name: https
  tls:
    - hosts:
        - $HOST
      secretName: tls-argocd-secret
EOF
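You can then check that the certificate has been issued and the ingress exists (resource names as defined above):
# CERTIFICATE SHOULD REPORT READY=True ONCE ISSUED
kubectl -n argocd get certificate tls-argocd-secret
kubectl -n argocd get ingress argocd-server-ingress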
Kustomized Helm
To be able to use Kustomize to modify Helm charts, we can define a new config management plugin via a ConfigMap. Note that the $ signs in the plugin arguments are escaped so that the ArgoCD build-environment variables survive the heredoc:
kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
  labels:
    app.kubernetes.io/part-of: argocd
data:
  configManagementPlugins: |
    - name: kustomized-helm
      init:
        command: ["/bin/sh", "-c"]
        args: ["helm dependency build || true"]
      generate:
        command: ["/bin/sh", "-c"]
        args: ["helm template . --name-template \$ARGOCD_APP_NAME --namespace \$ARGOCD_APP_NAMESPACE --include-crds > all.yaml && kustomize build"]
EOF
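To actually use the plugin, an Application has to reference it by name. A minimal sketch; the name, repo URL, and chart path are placeholders and need to be adapted:
kubectl apply -f - <<EOF
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app            # PLACEHOLDER NAME
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://example.com/my/repo.git   # PLACEHOLDER REPO
    targetRevision: HEAD
    path: charts/my-app                        # PLACEHOLDER PATH
    plugin:
      name: kustomized-helm
  destination:
    server: https://kubernetes.default.svc
    namespace: default
EOF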
Monitoring
Monitoring can be set up via Rancher. For detailed documentation on how to use Rancher monitoring, see Rancher Monitoring.
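Once monitoring has been enabled (assuming the default rancher-monitoring chart), you can verify that the stack is running:
# PODS OF THE RANCHER MONITORING STACK
kubectl -n cattle-monitoring-system get pods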