Deploying a Sample Voting App Using Kubeadm and Argo CD

Overview

In this guide, we’ll walk through the process of deploying a sample voting application using Kubeadm and Argo CD. This setup will involve launching an AWS EC2 instance, setting up a Kubernetes cluster with Kubeadm, and using Argo CD to manage and deploy your application. By the end, you'll have a working setup of a sample voting app, managed effortlessly through Argo CD.

Step 1: Launch an AWS EC2 Instance

Why AWS EC2?

Amazon EC2 (Elastic Compute Cloud) provides scalable virtual servers, which is perfect for our Kubernetes cluster. We’ll start by creating an EC2 instance where we’ll install and configure Kubeadm.

How to Launch

  1. Log in to AWS Console: Go to the AWS Management Console and log in.

  2. Create a New Instance:

    • Navigate to the EC2 Dashboard.

    • Click on "Launch Instance."

    • Choose an Amazon Machine Image (AMI). For this guide, use the latest Ubuntu Server AMI.

    • Select an instance type. Kubeadm requires at least 2 vCPUs and 2 GB of RAM per node, so choose t2.medium or larger (a t2.micro will fail kubeadm's preflight checks).

    • Configure instance details, add storage, and configure security groups (allow SSH, HTTP/HTTPS, and the Kubernetes ports you'll need, such as 6443 for the API server).

  3. Launch and Connect:

    • Launch the instance and download the key pair.

    • Use SSH to connect to your instance: ssh -i your-key.pem ubuntu@your-ec2-public-dns.
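
If you prefer the AWS CLI over the console, launching the instance looks roughly like the sketch below. The AMI ID, key pair name, and security group ID are placeholders you must replace with your own values:

     # Placeholders: substitute your own Ubuntu AMI ID, key pair, and security group
     aws ec2 run-instances \
       --image-id ami-xxxxxxxx \
       --instance-type t2.medium \
       --key-name your-key \
       --security-group-ids sg-xxxxxxxx \
       --count 1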

Step 2: Set Up Kubeadm Using a Shell Script

Why Kubeadm?

Kubeadm simplifies the process of setting up a Kubernetes cluster. It’s ideal for creating a robust and scalable Kubernetes environment.

Shell Script for Setup

We’ll use a shell script to automate the setup of the Kubernetes master and worker nodes.

  1. Create the Shell Script:

     #!/bin/bash
     # **************** Run on BOTH the control plane (master) and worker nodes ****************
     # Execute this script as root, e.g. sudo bash setup.sh

     # Load kernel modules required by containerd and Kubernetes networking
     echo "overlay" >> /etc/modules-load.d/containerd.conf
     echo "br_netfilter" >> /etc/modules-load.d/containerd.conf
     modprobe overlay
     modprobe br_netfilter

     # Enable bridged traffic filtering and IP forwarding
     echo "net.bridge.bridge-nf-call-ip6tables = 1" >> /etc/sysctl.d/kubernetes.conf
     echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.d/kubernetes.conf
     echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.d/kubernetes.conf
     sysctl --system

     # Install containerd from the Docker repository
     apt-get update
     apt-get install -y curl gnupg2 software-properties-common apt-transport-https ca-certificates
     curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /etc/apt/trusted.gpg.d/docker.gpg
     add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
     apt-get update
     apt-get install -y containerd.io

     # Configure containerd to use the systemd cgroup driver (required by the kubelet)
     containerd config default | tee /etc/containerd/config.toml >/dev/null 2>&1
     sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
     systemctl restart containerd
     systemctl enable containerd

     # Install kubelet, kubeadm, and kubectl from the Kubernetes v1.28 repository
     mkdir -p /etc/apt/keyrings
     curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
     echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | tee /etc/apt/sources.list.d/kubernetes.list
     apt-get update
     apt-get install -y kubelet kubeadm kubectl
     apt-mark hold kubelet kubeadm kubectl
    
#******************* Run only on the control plane (master) node ******************

kubeadm init --control-plane-endpoint=<control-plane-ip-or-dns>

# Configure kubectl for your user, as instructed by the kubeadm init output
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.0/manifests/calico.yaml
  2. Run the Script:

    • Upload the script to each EC2 instance and execute it on both the control plane and worker nodes.
  3. Add Worker Nodes:

    • After initializing the control plane, run the kubeadm join command printed at the end of kubeadm init on each worker node (the format is sketched below).
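
The join command is unique to your cluster; the endpoint, token, and hash below are placeholders, so copy the exact command from your own kubeadm init output:

     sudo kubeadm join <control-plane-ip>:6443 \
       --token <token> \
       --discovery-token-ca-cert-hash sha256:<hash>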

Step 3: Voting App Deployment and Service Manifests

Here are the Kubernetes manifest files for the voting app components.

DB Deployment (PostgreSQL)

# db deployment manifest file

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: db
  name: db
spec:
  replicas: 1
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - image: postgres:15-alpine
        name: postgres
        env:
        - name: POSTGRES_USER
          value: postgres
        - name: POSTGRES_PASSWORD
          value: postgres
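        # demo-only plaintext credentials; use a Kubernetes Secret in production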
        ports:
        - containerPort: 5432
          name: postgres
        volumeMounts:
        - mountPath: /var/lib/postgresql/data
          name: db-data
      volumes:
      - name: db-data
        emptyDir: {}
# db service manifest file

apiVersion: v1
kind: Service
metadata:
  labels:
    app: db
  name: db
spec:
  type: ClusterIP
  ports:
  - name: "db-service"
    port: 5432
    targetPort: 5432
  selector:
    app: db
# redis deployment manifest file
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: redis
  name: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - image: redis:alpine
        name: redis
        ports:
        - containerPort: 6379
          name: redis
        volumeMounts:
        - mountPath: /data
          name: redis-data
      volumes:
      - name: redis-data
        emptyDir: {}
# redis service manifest file

apiVersion: v1
kind: Service
metadata:
  labels:
    app: redis
  name: redis
spec:
  type: ClusterIP
  ports:
  - name: "redis-service"
    port: 6379
    targetPort: 6379
  selector:
    app: redis
# result deployment manifest file 

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: result
  name: result
spec:
  replicas: 3
  selector:
    matchLabels:
      app: result
  template:
    metadata:
      labels:
        app: result
    spec:
      containers:
      - image: darshif5/voting-app-2024:result
        name: result
        ports:
        - containerPort: 80
          name: result
# result service manifest file

apiVersion: v1
kind: Service
metadata:
  labels:
    app: result
  name: result
spec:
  type: NodePort
  ports:
  - name: "result-service"
    port: 5001
    targetPort: 80   # assumes the result image listens on 80, matching the containerPort in the deployment above
    nodePort: 31001
  selector:
    app: result
# vote deployment manifest file 

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: vote
  name: vote
spec:
  replicas: 1
  selector:
    matchLabels:
      app: vote
  template:
    metadata:
      labels:
        app: vote
    spec:
      containers:
      - image: darshif5/voting-app-2024:voting
        name: vote
        ports:
        - containerPort: 80
          name: vote
# vote service manifest file 

apiVersion: v1
kind: Service
metadata:
  labels:
    app: vote
  name: vote
spec:
  type: NodePort
  ports:
  - name: "vote-service"
    port: 5000
    targetPort: 80
    nodePort: 31002
  selector:
    app: vote
# worker deployment manifest file 

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: worker
  name: worker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: worker
  template:
    metadata:
      labels:
        app: worker
    spec:
      containers:
      - image: darshif5/voting-app-2024:worker
        name: worker
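
If you want to sanity-check these manifests before wiring up Argo CD, save each one as a separate file (for example in a k8s-specifications/ directory, a name assumed here) and apply the whole folder from the control plane node:

     kubectl apply -f k8s-specifications/
     kubectl get pods,svc   # verify the pods and services come up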

Step 4: Install and Configure Argo CD

Why Argo CD?

Argo CD is a declarative, GitOps continuous delivery tool for Kubernetes. It keeps your cluster in sync with manifests stored in Git and provides a powerful UI to manage your applications and their configurations, making deployments straightforward.

Installation Steps

  1. Install Argo CD:

     kubectl create namespace argocd
     kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
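     # Wait until all Argo CD pods are ready before continuing
     kubectl wait --for=condition=Ready pods --all -n argocd --timeout=300s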
    
  2. Access the Argo CD UI:

    • Forward the Argo CD API server port:

        kubectl port-forward svc/argocd-server -n argocd 8443:443
      
    • Open https://localhost:8443 in your browser and accept the self-signed certificate warning.
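
Note that port-forwarding binds to localhost on the machine where you run kubectl. If that machine is the EC2 instance itself, an alternative is to expose the Argo CD server as a NodePort and browse to the instance's public IP on the assigned port (your security group must allow it):

        kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "NodePort"}}'
        kubectl get svc argocd-server -n argocd   # note the assigned port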

  3. Login to Argo CD:

    • Retrieve the initial admin password:

        # Retrieve the Argo CD initial admin password
        kubectl get secret -n argocd argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d && echo
      
    • Use the username admin and the password you retrieved to log in.
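
If you prefer the command line, the optional argocd CLI can log in through the same port-forward. This assumes you have installed the CLI from the Argo CD releases page; --insecure skips verification of the self-signed certificate:

        argocd login localhost:8443 --username admin --password <password> --insecure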

Step 5: Connect and Manage Your Kubernetes Cluster with Argo CD

Create a Sample Voting App

  1. Prepare Your Application Repository:

    • Create a Git repository containing the Kubernetes manifests from Step 3.
  2. Create an Application in Argo CD:

    • Access the Argo CD UI.

    • Click on "New App."

    • Fill in the details: name, project, repository URL, path, and destination cluster (a declarative alternative is sketched after this list).

    • Click "Create."

  3. Sync and Monitor:

    • Argo CD will sync your application automatically if automated sync is enabled; otherwise, click "Sync" in the UI.

    • Monitor the deployment through the Argo CD UI.
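
As an alternative to the UI, you can define the application declaratively as an Argo CD Application resource. The repository URL and path below are hypothetical placeholders; point them at your own repo and the folder holding the Step 3 manifests, then apply the file with kubectl apply -f:

     apiVersion: argoproj.io/v1alpha1
     kind: Application
     metadata:
       name: voting-app
       namespace: argocd
     spec:
       project: default
       source:
         repoURL: https://github.com/<your-user>/voting-app-manifests.git   # hypothetical repo
         targetRevision: HEAD
         path: k8s-specifications   # folder containing the Step 3 manifests
       destination:
         server: https://kubernetes.default.svc
         namespace: default
       syncPolicy:
         automated:
           prune: true
           selfHeal: true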

Conclusion

Deploying a sample voting application using Kubeadm and Argo CD streamlines the process of managing and deploying Kubernetes applications. By leveraging AWS EC2, Kubeadm, and Argo CD, you can set up a scalable Kubernetes cluster and efficiently manage your application deployments. This approach not only simplifies your deployment pipeline but also enhances your operational efficiency.

Feel free to experiment with this setup and modify it according to your needs. Happy deploying!

Connect and Follow Me on Social Networks

LINKEDIN | GITHUB | TWITTER