In this section we’re going to deploy resources using GitOps.
Make sure you’re in your repository:
cd aws-gitops-multicloud
Create a pod using GitOps:
cp ../gitops-cluster-management/examples/k8s/nginx.yaml flux-mgmt/nginx.yaml
git add flux-mgmt
git commit -m 'deploy nginx pod'
git push
Shortly afterwards, you should find the pod running in the default namespace:
kubectl get pod
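For reference, the copied nginx.yaml is most likely a plain Pod manifest along these lines (a sketch only; the actual file in gitops-cluster-management may use a different image tag or metadata):

```yaml
# Hypothetical sketch of examples/k8s/nginx.yaml -- the real file may differ.
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  namespace: default
spec:
  containers:
    - name: nginx
      image: nginx:stable
      ports:
        - containerPort: 80
```

The point of the exercise is that you never run kubectl apply yourself: Flux notices the new file in the repo and applies it for you.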
Next, let’s deploy Redis using a HelmRelease:
cp ../gitops-cluster-management/examples/k8s/helm/redis.yaml flux-mgmt/redis.yaml
git add flux-mgmt
git commit -m 'deploy redis helmrelease'
git push
Validate that it worked by checking that the HelmRelease and the Redis pod are running:
kubectl get hr
kubectl get pod
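A HelmRelease is a custom resource that tells Flux’s Helm controller to install and reconcile a Helm chart. The copied redis.yaml probably resembles the sketch below; this is an assumption, and the apiVersion, chart source, and values all depend on the Flux version installed in the workshop cluster:

```yaml
# Hypothetical sketch of examples/k8s/helm/redis.yaml -- the real file may differ.
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: redis
  namespace: default
spec:
  interval: 5m               # how often Flux reconciles the release
  chart:
    spec:
      chart: redis
      sourceRef:
        kind: HelmRepository # assumes a HelmRepository source already exists
        name: bitnami
```

Either way, committing the file is the whole deployment step: the Helm install itself is driven by the controller, not by a helm command you run.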
Next, let’s deploy two custom operators along with an example deployment and secret:
cp -R ../gitops-cluster-management/examples/k8s/custom-operators/ flux-mgmt
cp ../gitops-cluster-management/examples/k8s/all-ns-deployment.yaml flux-mgmt
cp ../gitops-cluster-management/examples/k8s/all-ns-secret.yaml flux-mgmt
git add flux-mgmt
git commit -m 'deploy custom operators'
git push
Now any secret or deployment in the default namespace that carries the label secret-copier: "yes"
will be copied across namespaces.
Validate that it worked by checking that the example secret and deployment have been copied into every namespace:
kubectl get secret -A | grep copy-me
kubectl get deployment -A | grep memcached
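The copy mechanism keys off that label, so a secret eligible for copying would look roughly like the sketch below. The name copy-me comes from the grep above; the data fields are illustrative, and deployments are presumably handled analogously by the deployment-copier operator:

```yaml
# Hypothetical example of a secret the secret-copier operator would replicate.
apiVersion: v1
kind: Secret
metadata:
  name: copy-me
  namespace: default
  labels:
    secret-copier: "yes"   # the label the operator watches for
type: Opaque
stringData:
  message: hello           # illustrative payload
```
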
For more info, you can go through the shell-operator source code for secret-copier and deployment-copier in gitops-cluster-management/operators.
Let’s create two EC2 clusters using Cluster API (CAPI).
cp -R ../gitops-cluster-management/examples/k8s/clusters/ flux-mgmt
Modify the two cluster files, ec2-cluster-1.yaml and ec2-cluster-2.yaml, as follows:
- Set AWSCluster.spec.region to us-west-2
- Set AWSCluster.spec.sshKeyName to weaveworks-workshop
- Set AWSMachineTemplate.spec.template.spec.sshKeyName to weaveworks-workshop
(There should be two machine templates per cluster: one for the control-plane instances and one for the worker instances.)
Finally, let’s push our changes to git:
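Before pushing, it’s worth sanity-checking the edits. Schematically, the changed sections of each cluster file should end up looking like this (only the relevant fields are shown; the apiVersion depends on the CAPA version installed, so treat it as an assumption):

```yaml
# Excerpt: the fields to change in ec2-cluster-1.yaml / ec2-cluster-2.yaml.
# apiVersion shown is illustrative and varies by CAPA release.
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: AWSCluster
spec:
  region: us-west-2
  sshKeyName: weaveworks-workshop
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: AWSMachineTemplate
spec:
  template:
    spec:
      sshKeyName: weaveworks-workshop
```

Remember there are two AWSMachineTemplate sections per cluster file, and sshKeyName needs updating in both.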
git add flux-mgmt
git commit -m 'create two ec2 clusters'
git push
We can monitor the cluster creation process:
kubectl get clusters -w
kubectl get machines -w
kubectl logs --tail 100 -f -n capa-system deploy/capa-controller-manager -c manager
Let’s test CAPI’s self-healing by deleting a machine. First, list the machines:
kubectl get machines
Then delete one of them:
kubectl delete machine <MACHINE-NAME>
Watch what happens
kubectl get machines -w
CAPI should take care of destroying the AWS EC2 instance and provisioning a new one to replace it.
You can scale the clusters up or down by increasing or decreasing the number of replicas for the control-plane or worker nodes in the YAML files.
Let’s bump the replicas from 3 to 5 for the worker nodes, then push the change:
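Assuming the worker nodes are defined by a standard CAPI MachineDeployment (the control plane would instead use the replicas field of its control-plane resource), the change is a one-line edit; the resource name and apiVersion below are illustrative:

```yaml
# Excerpt: scaling worker nodes in a CAPI MachineDeployment.
# Name and apiVersion are hypothetical -- match them to your cluster files.
apiVersion: cluster.x-k8s.io/v1alpha3
kind: MachineDeployment
metadata:
  name: ec2-cluster-1-md-0
spec:
  replicas: 5   # was 3
```
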
git add flux-mgmt
git commit -m 'scale up'
git push
Watch what happens
kubectl get machines -w