Three-Tier Architecture CI/CD on Kubernetes — Azure DevOps


A single-tier application is one in which the user interface, backend logic, and database are all located on the same server. It entails hosting all of the essential components of a software application on a single server or platform.

All components reside on the same server.

Presentation → Business → Data Access layers in a single 📦 software package.

1-tier

A two-tier system consists of a server and a client. The database is stored on the server, and the interface installed on the client is used to access it.


  1. Client Application
  2. Database
2-tier

The 3-tier architecture, also known as the three-layer architecture, is a client-server software architecture that separates an application into three distinct layers, or tiers.

  1. Client tier
  2. Server tier
  3. Data tier
3-tier

An enterprise application (EA) is a large software system platform typically designed to operate in a corporate environment such as a business or government.

A virtual machine, commonly shortened to VM, behaves like any other physical computer, such as a laptop. It is an isolated computing environment created by abstracting resources from a physical machine.

Docker is a software platform that allows you to build, test, and deploy applications quickly.

Docker packages software into standardized units called containers that have everything the software needs to run including libraries, system tools, code, and runtime.

A Docker network is a virtual network created by Docker to enable communication between Docker containers. If two containers are attached to the same network on the same host, they can communicate with each other without exposing ports to the host machine. Docker ships with several network drivers (a minimal example follows the list below):

  1. bridge
  2. host
  3. none
  4. overlay
  5. ipvlan
  6. macvlan
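A minimal sketch of that behavior (the container and network names here are illustrative, not from the lab): create a user-defined bridge network, attach two containers to it, and reach one from the other by name, with no published ports.

docker network create --driver bridge demo-net                      # user-defined bridge network
docker run -d --name web --network demo-net nginx                   # first container, no -p needed
docker run --rm --network demo-net busybox wget -qO- http://web     # second container resolves "web" by name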

Resource management is the process of forecasting, scheduling, and utilizing resources for successful project delivery.

Container security is the process of implementing security tools and policies to ensure that everything in your container runs as intended, including protecting the infrastructure, the software supply chain, the runtime, and everything in between.

High availability (HA) is the elimination of single points of failure to enable applications to continue to operate even if one of the IT components it depends on, such as a server, fails.

IT professionals eliminate single points of failure to ensure continuous operation and uptime of at least 99.99% annually.

Fault tolerance refers to a system’s ability to continue operating despite failures or malfunctions: it keeps running without interruption even when one or more of its components fail.

Service discovery provides a mechanism for keeping track of the available instances and distributing requests across available instances.

Kubernetes automates the operational tasks of container management and includes built-in commands for deploying applications, rolling out changes, scaling applications up and down to fit changing needs, monitoring applications, and more, making application management easier.

Scalability is the measure of a system’s ability to increase or decrease in performance and cost in response to changes in application and system processing demands.

Load balancing is the method of distributing network traffic equally across a pool of resources that support an application.

Kubernetes orchestration allows you to build application services that span multiple containers, schedule containers across a cluster, scale those containers, and manage their health over time.

Kubernetes eliminates many of the manual processes involved in deploying and scaling containerized applications.

Kubernetes provides a host of dynamic services for running, connecting, scaling, and managing complex, multi-container workloads.

Kubernetes Architecture

Kubernetes is an architecture that offers a loosely coupled mechanism for service discovery across a cluster. A Kubernetes cluster has one or more control plane nodes and one or more worker (compute) nodes.

👉 Kubernetes architecture 👈

Kube API Server

The API server validates and configures data for API objects, which include Pods, Services, ReplicationControllers, and others. The API server services REST operations and provides the frontend to the cluster’s shared state through which all other components interact.
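As an illustration of that REST frontend (not a lab step), kubectl can issue raw requests straight at the API server:

kubectl get --raw /api/v1/namespaces/default/pods                 # raw REST GET through kubectl
kubectl proxy --port=8001 &                                       # local proxy to the API server
curl -s http://127.0.0.1:8001/api/v1/namespaces/default/services  # the same shared state over HTTP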

Kube Scheduler

The Kubernetes scheduler is a control plane process which assigns Pods to Nodes. The scheduler determines which Nodes are valid placements for each Pod in the scheduling queue according to constraints and available resources.

The scheduler then ranks each valid Node and binds the Pod to a suitable Node. Multiple different schedulers may be used within a cluster; kube-scheduler is the reference implementation. See scheduling for more information about scheduling and the kube-scheduler component.
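A minimal sketch of such constraints (the Pod name and values are hypothetical): resource requests and a nodeSelector are typical inputs kube-scheduler evaluates when filtering and ranking nodes.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: scheduling-demo        # hypothetical name
spec:
  nodeSelector:
    kubernetes.io/os: linux    # only Linux nodes are valid placements
  containers:
  - name: app
    image: nginx
    resources:
      requests:                # scheduler filters out nodes without this free capacity
        cpu: "250m"
        memory: "128Mi"
EOF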

Kube Controller Manager

The Kubernetes Controller Manager (also called kube-controller-manager) is a daemon that acts as a continuous control loop in a Kubernetes cluster.

The controller monitors the current state of the cluster via calls made to the API Server, and changes the current state to match the desired state described in the cluster’s declarative configuration.

Kube-controller-manager is a collection of different Kubernetes controllers. Its main task is to watch for changes in the state of objects; it is responsible for the reconciliation tasks around the state of Kubernetes objects.
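To see reconciliation in action (illustrative commands, not part of the lab), delete a pod owned by a Deployment and watch the controllers recreate it to match the desired replica count:

kubectl create deployment demo --image=nginx --replicas=3   # desired state: 3 replicas
kubectl delete pod -l app=demo --wait=false                 # disturb the current state
kubectl get pods -l app=demo -w                             # watch replacement pods appear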

ETCD

etcd is a strongly consistent, distributed key-value store that provides a reliable way to store data that needs to be accessed by a distributed system or cluster of machines.

It is the store used to hold and manage the critical information that distributed systems need to keep running.
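For illustration, this is what the key-value model looks like against a stand-alone etcd using its etcdctl client (AKS manages etcd for you, so you never touch it directly in this lab):

etcdctl put /demo/key "hello"   # write a key
etcdctl get /demo/key           # strongly consistent read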

Pre-requisites

  • Resource Group
  • AKS Kubernetes Cluster
  • Container image registry (ACR)
  • SQL Server
  • SQL Database

Setting up the environment

The following Azure resources need to be configured for this lab:

Infrastructure provisioning

Create a folder → copy the script below into it → give the script execute permission (chmod +x).


#!/bin/bash
REGION="westus"
RGP="day11-demo-rg"
CLUSTER_NAME="day11-demo-cluster"
ACR_NAME="day11demoacr"
SQLSERVER="day11-demo-sqlserver"
DB="mhcdb"


# Create resource group
az group create --name $RGP --location $REGION

# Deploy AKS
az aks create --resource-group $RGP --name $CLUSTER_NAME --enable-addons monitoring --generate-ssh-keys --location $REGION

# Deploy ACR
az acr create --resource-group $RGP --name $ACR_NAME --sku Standard --location $REGION

# Authenticate AKS to pull from ACR
az aks update -n $CLUSTER_NAME -g $RGP --attach-acr $ACR_NAME

# Create SQL server and DB (demo-only credentials)
az sql server create -l $REGION -g $RGP -n $SQLSERVER -u sqladmin -p P2ssw0rd1234

az sql db create -g $RGP -s $SQLSERVER -n $DB --service-objective S0

Create a folder and update the script

It will take 10 to 15 minutes.

Or, create the resources by running each command individually:

az group create --name day11-demo-rg --location westus

az aks create --resource-group day11-demo-rg --name day11-demo-cluster --enable-addons monitoring --generate-ssh-keys --location westus

az acr create --resource-group day11-demo-rg --name day11demoacrs --sku Standard --location westus

az aks update -n day11-demo-cluster -g day11-demo-rg --attach-acr day11demoacrs

az sql server create -l westus -g day11-demo-rg -n day11-demo1-sqlserver -u sqladmin -p P2ssw0rd1234

az sql db create -g day11-demo-rg -s day11-demo1-sqlserver -n mhcdb --service-objective S0
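Once the commands finish, a quick sanity check (using the same resource names as above) confirms everything provisioned:

az resource list --resource-group day11-demo-rg --output table
az aks show -g day11-demo-rg -n day11-demo-cluster --query provisioningState -o tsv   # expect "Succeeded"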

In the meantime, create a new project on the Azure DevOps platform.

Health Clinic repo → copy the clone URL into Azure DevOps

Import a repository

Create a new pipeline

Pipeline → Create pipeline → Azure Repos Git → Demo11-AKS

Pre-requisites

Make sure the below Azure DevOps extensions are installed and enabled in your organization:

  • Replace Tokens
  • Kubernetes extension

Install the Replace Tokens extension from the Marketplace

Add the variable

Change the service connection and map it to the Azure container registry.

Add the variable values

Azure Pipeline code

# Starter pipeline
# Start with a minimal pipeline that you can customize to build and deploy your code.
# Add steps that build, run tests, deploy, and more:
# https://aka.ms/yaml

trigger:
- main

pool:
  vmImage: ubuntu-latest

steps:
- task: replacetokens@4
  displayName: 'Replace tokens in appsettings.json'
  inputs:
    rootDirectory: '$(build.sourcesdirectory)/src/MyHealth.Web'
    targetFiles: 'appsettings.json'
    encoding: 'auto'
    tokenPattern: 'rm'
    writeBOM: true
    escapeType: 'none'
    actionOnMissing: 'warn'
    keepToken: false
    actionOnNoFiles: 'continue'
    enableTransforms: false
    enableRecursion: false
    useLegacyPattern: false
    enableTelemetry: true

- task: replacetokens@3
  displayName: 'Replace tokens in mhc-aks.yaml'
  inputs:
    targetFiles: 'mhc-aks.yaml'
    escapeType: none
    tokenPrefix: '__'
    tokenSuffix: '__'

- task: DockerCompose@0
  displayName: 'Run services'
  inputs:
    containerregistrytype: 'Azure Container Registry'
    azureSubscription: 'Pay-As-You-Go(1)(f30deb63-a417-4fa4-afc1-813a7d3920bb)'
    azureContainerRegistry: '{"loginServer":"day11demoacrs.azurecr.io", "id" : "/subscriptions/f30deb63-a417-4fa4-afc1-813a7d3920bb/resourceGroups/day11-demo-rg/providers/Microsoft.ContainerRegistry/registries/day11demoacrs"}'
    dockerComposeFile: 'docker-compose.ci.build.yml'
    action: 'Run services'
    detached: false

- task: DockerCompose@0
  displayName: 'Build services'
  inputs:
    containerregistrytype: 'Azure Container Registry'
    azureSubscription: 'Pay-As-You-Go(2)(f30deb63-a417-4fa4-afc1-813a7d3920bb)'
    azureContainerRegistry: '{"loginServer":"day11demoacrs.azurecr.io", "id" : "/subscriptions/f30deb63-a417-4fa4-afc1-813a7d3920bb/resourceGroups/day11-demo-rg/providers/Microsoft.ContainerRegistry/registries/day11demoacrs"}'
    dockerComposeFile: 'docker-compose.yml'
    dockerComposeFileArgs: 'DOCKER_BUILD_SOURCE='
    action: 'Build services'
    additionalImageTags: '$(Build.BuildId)'

- task: DockerCompose@0
  displayName: 'Push services'
  inputs:
    containerregistrytype: 'Azure Container Registry'
    azureSubscription: 'Pay-As-You-Go(1)(f30deb63-a417-4fa4-afc1-813a7d3920bb)'
    azureContainerRegistry: '{"loginServer":"day11demoacrs.azurecr.io", "id" : "/subscriptions/f30deb63-a417-4fa4-afc1-813a7d3920bb/resourceGroups/day11-demo-rg/providers/Microsoft.ContainerRegistry/registries/day11demoacrs"}'
    dockerComposeFile: 'docker-compose.yml'
    dockerComposeFileArgs: 'DOCKER_BUILD_SOURCE='
    action: 'Push services'
    additionalImageTags: '$(Build.BuildId)'

- task: DockerCompose@0
  displayName: 'Lock services'
  inputs:
    containerregistrytype: 'Azure Container Registry'
    azureSubscription: 'Pay-As-You-Go(1)(f30deb63-a417-4fa4-afc1-813a7d3920bb)'
    azureContainerRegistry: '{"loginServer":"day11demoacrs.azurecr.io", "id" : "/subscriptions/f30deb63-a417-4fa4-afc1-813a7d3920bb/resourceGroups/day11-demo-rg/providers/Microsoft.ContainerRegistry/registries/day11demoacrs"}'
    dockerComposeFile: 'docker-compose.yml'
    dockerComposeFileArgs: 'DOCKER_BUILD_SOURCE='
    action: 'Lock services'
    outputDockerComposeFile: '$(Build.StagingDirectory)/docker-compose.yml'

- task: CopyFiles@2
  displayName: 'Copy Files'
  inputs:
    Contents: |
      **/mhc-aks.yaml
      **/*.dacpac
    TargetFolder: '$(Build.ArtifactStagingDirectory)'

- task: PublishBuildArtifacts@1
  displayName: 'Publish Artifact'
  inputs:
    ArtifactName: deploy

Successfully ran the job

The publish keyword publishes (uploads) a file or folder as a pipeline artifact that other jobs and pipelines can consume.

The publish step is supported on Azure DevOps Services only. If you use it on Azure DevOps Server, you’ll receive an error message similar to “Pipeline Artifact Task is not supported in on-premises.”

Found 3 yaml files.

Build and Release Pipeline

You can create your pipeline by following along with the video or by editing the existing pipeline. The below details need to be updated in the pipeline:

  • Azure service connection
  • Token pattern
  • Pipeline variables
  • The kubectl version should be the latest in the release pipeline
  • Secrets should be updated in the deployment step
  • ACR details in the pipeline should be updated

Add artifact → Current project

Add Stage → Dev deployment

Add an agent job → add 3 tasks:

  1. Azure SQL Dacpac Task
  2. Kubectl apply
  3. Kubectl set

Add another task → Kubectl apply

apply

Secret

Tick the latest version of kubectl

Add Variable

Add another task → Kubectl set

command

Latest version
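Roughly, the two kubectl tasks boil down to the commands below (the deployment/container name mhc-front and the image name myhealth.web come from the lab’s manifests; treat them as assumptions if your fork differs):

kubectl apply -f mhc-aks.yaml   # create/refresh the Kubernetes objects from the tokenized manifest
kubectl set image deployments/mhc-front mhc-front=day11demoacrs.azurecr.io/myhealth.web:<BuildId>   # roll out the freshly pushed image tag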

Create a release pipeline

The pipeline has started

Click deploy

Started

The tasks complete one by one

The release pipeline succeeded

Change the Firewall settings of the SQL server

Connect to the cluster; first get the credentials.

Open the Cloud Shell and run the commands below.
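A sketch of those commands (the firewall rule name is illustrative; the 0.0.0.0–0.0.0.0 range is the special rule that lets Azure services through):

az sql server firewall-rule create -g day11-demo-rg -s day11-demo1-sqlserver -n AllowAzureServices --start-ip-address 0.0.0.0 --end-ip-address 0.0.0.0
az aks get-credentials --resource-group day11-demo-rg --name day11-demo-cluster   # merge cluster credentials into kubeconfig
kubectl get pods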

Check the pods

How to fix the ‘back-off restarting failed container’ error

Try the commands below:

kubectl logs <pod-name>                        # container logs from the failing pod
kubectl describe pod <pod-name>                # events and state transitions
kubectl get pod <pod-name> -o yaml             # full object spec and status
kubectl get deployments --all-namespaces=true  # check deployments across namespaces

The mhc-front pod isn’t ready; let’s fix the issue.

Fixed the issue

Finally, the pod is running

Get services

Load balancer IP → paste it into the browser
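For example (mhc-front is the front-end service name from the lab’s manifest):

kubectl get service mhc-front --watch   # wait until EXTERNAL-IP is populated
# then open http://<EXTERNAL-IP> in the browser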

More Azure DevOps projects: https://github.com/Microsoft/azuredevopslabs/tree/master/labs/vstsextend/kubernetes/

Note: Don’t forget to delete the resources.
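Deleting the resource group removes everything created for this lab in one shot:

az group delete --name day11-demo-rg --yes --no-wait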

Thank you 🙏 for taking the time to read our blog.
