Managed Cluster setup
The managed cluster setup is a step-by-step guide for bringing up a new Kubernetes cluster.
Ideally the cluster should be managed by Argo CD, but there is still some initial setup to be done so that the cluster can communicate with the outside world.
We assume that you already have the cluster of your choice set up. To make sure you are working with a fresh cluster, run:
kubectl get namespaces
On a fresh cluster the output should only show the default namespaces and look roughly like this (the ages will differ):
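NAME              STATUS   AGE
default           Active   1m
kube-node-lease   Active   1m
kube-public       Active   1m
kube-system       Active   1m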
Now that we have a clean cluster, it's time to get started.
Nginx & Load Balancer v1.5.1
Most clusters need to allow users to access at least certain parts of them from outside the cluster. While we could expose specific node ports directly, a load balancer makes more sense and gives us central control over our routes.
Load balancers expose the cluster to the outside world in a safe and manageable way while providing a lot of additional features.
While there are a few different options out there, we will be using the NGINX ingress controller (ingress-nginx) together with the load balancer it provisions.
We will install it from the official static manifests, which are the easiest way to get it up and running. The installation differs slightly per cloud provider:
- Azure
- GKE
- AWS
Azure

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.5.1/deploy/static/provider/cloud/deploy.yaml

GKE

On GKE you first need to grant your account cluster-admin permissions:
kubectl create clusterrolebinding cluster-admin-binding \
--clusterrole cluster-admin \
--user $(gcloud config get-value account)
Then, the ingress controller can be installed like this:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.5.1/deploy/static/provider/cloud/deploy.yaml

AWS
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.5.1/deploy/static/provider/aws/deploy.yaml
If we want to increase the replica count of our ingress controller, we can scale the deployment:
kubectl scale --replicas=2 deployment ingress-nginx-controller -n ingress-nginx
Your ingress controller and load balancer should now be set up with all the bells and whistles.
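To verify that the cloud provider has provisioned a load balancer, you can check that the ingress-nginx-controller service received an external IP (the exact address and how long provisioning takes will differ per provider):

kubectl get service ingress-nginx-controller -n ingress-nginx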
Cert Manager
cert-manager is responsible for creating certificates for your apps and making sure they are kept up to date.
We will be using Let's Encrypt as our certificate authority and helm to deploy cert-manager.
Add the jetstack helm repo and update it:
helm repo add jetstack https://charts.jetstack.io
helm repo update
If an older cert-manager release already exists, you can remove it first using:
helm uninstall cert-manager --namespace cert-manager
Now let's install cert-manager:
helm install cert-manager jetstack/cert-manager --namespace cert-manager --version v1.10.0 --create-namespace --set installCRDs=true
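To verify the installation, check that the cert-manager pods (controller, cainjector and webhook) are all running:

kubectl get pods -n cert-manager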
After setting up cert-manager we need to create a cluster issuer. A ClusterIssuer is a cluster-scoped resource that can issue certificates in all namespaces:
kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  # ClusterIssuers are cluster-scoped, so no namespace is needed
  name: le-clusterissuer
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: iot@nimbit.de
    privateKeySecretRef:
      name: le-clusterissuer
    solvers:
    - http01:
        ingress:
          class: nginx
EOF
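To actually request a certificate from this issuer, an Ingress only needs the cert-manager.io/cluster-issuer annotation and a tls section. A minimal sketch, assuming a hypothetical my-app service listening on port 80 and the host app.example.com:

kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    # tells cert-manager to issue a certificate for the hosts in the tls section
    cert-manager.io/cluster-issuer: le-clusterissuer
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - app.example.com
    secretName: my-app-tls
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app
            port:
              number: 80
EOF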
Redis Cluster Issuer
For Redis we issue certificates from our own CA: a self-signed issuer bootstraps the CA, and a CA issuer then signs certificates using the secret redis-tls-ca-cert.
kubectl apply -f - <<EOF
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned-issuer
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: redis-tls-ca
spec:
  ca:
    secretName: redis-tls-ca-cert
EOF
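The redis-tls-ca issuer expects a CA certificate and key in the secret redis-tls-ca-cert (read from the namespace cert-manager runs in, cert-manager by default). If that secret does not exist yet, one way to bootstrap it is a self-signed CA Certificate; this is only a sketch, the common name, key algorithm and duration are assumptions:

kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: redis-tls-ca
  namespace: cert-manager
spec:
  isCA: true
  commonName: redis-tls-ca
  secretName: redis-tls-ca-cert
  duration: 87600h # roughly 10 years
  privateKey:
    algorithm: ECDSA
    size: 256
  issuerRef:
    name: selfsigned-issuer
    kind: ClusterIssuer
    group: cert-manager.io
EOF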
TCP & UDP ConfigMaps
The NGINX ingress controller can also proxy plain TCP and UDP traffic. It reads the port-to-service mappings from the tcp-services and udp-services ConfigMaps (provided the controller is started with the --tcp-services-configmap and --udp-services-configmap flags pointing at them), so we create both ConfigMaps, empty for now:
kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: udp-services
  namespace: ingress-nginx
EOF
kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
EOF
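As an example of how these ConfigMaps are used: each entry maps an external port to a service in the form "<port>": "<namespace>/<service>:<service port>". The following sketch (with a hypothetical redis service in the default namespace) would forward port 6379 through the ingress controller:

kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # hypothetical example: forward external port 6379 to the redis service in the default namespace
  "6379": "default/redis:6379"
EOF

Keep in mind that the corresponding port also has to be exposed on the ingress-nginx-controller service (and its load balancer) before any traffic can reach it.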