Azure Terraform Pipeline — DevOps

Always learning
16 min read · Feb 17, 2024


Terraform is an open-source infrastructure as code (IAC) tool that allows users to define and deploy infrastructure resources, such as servers, storage, and networking, using simple, human-readable configuration files.

Azure Provider

The Azure Provider can be used to configure infrastructure in Microsoft Azure using the Azure Resource Manager APIs.

Docs: https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs

Authentication method → a Service Principal with a Client Secret


Log in to the Azure CLI from the VS Code terminal. The command automatically redirects to the browser, where you enter your Azure credentials.

az login

Successfully logged in to Azure via the VS Code terminal

If you run the next command from Git Bash (MSYS) on Windows, disable automatic path conversion first; otherwise the --scopes argument gets rewritten as a Windows path.

export MSYS_NO_PATHCONV=1

Organizations can use subscriptions to manage costs and the resources that are created by users, teams, and projects.

az ad sp create-for-rbac --role="Contributor" --scopes="/subscriptions/20000000-0000-0000-0000-000000000000"

An Azure service principal is an identity created for use with applications, hosted services, and automated tools to access Azure resources. This access is restricted by the roles assigned to the service principal, giving you control over which resources can be accessed and at which level.

Create a Service Principal

These values map to the Terraform variables like so:

  • appId is the client_id defined above.
  • password is the client_secret defined above.
  • tenant is the tenant_id defined above.
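
For reference, the command prints JSON along these lines (all GUIDs and the secret below are placeholders, not real values):

{
  "appId": "00000000-0000-0000-0000-000000000000",
  "displayName": "azure-cli-2024-02-17",
  "password": "<client-secret>",
  "tenant": "00000000-0000-0000-0000-000000000000"
}

Then sign in with the service principal:
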
az login --service-principal -u CLIENT_ID -p CLIENT_SECRET --tenant TENANT_ID

Successfully logged in with the Service Principal account

How Terraform Works

Terraform creates and manages resources on cloud platforms and other services through their application programming interfaces (APIs).

main.tf: This is our main configuration file where we are going to define our resource definition.

variables.tf: This is the file where we are going to define our variables.

outputs.tf: This file contains output definitions for our resources.

backend.tf: Defines where Terraform stores its state data files for the current infrastructure.

Terraform keeps track of the managed resources. This state can be stored locally or remotely.

This state file contains the full details of the resources in our Terraform code. When you modify the code and apply it to the cloud, Terraform compares the new configuration against the state file and works out which changes have to be made to the infrastructure.

When you run the terraform apply command to create infrastructure in the cloud, Terraform creates a state file called “terraform.tfstate”.
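
Once the state file exists, you can inspect what Terraform is tracking with two standard commands:

terraform state list   # names of all resources recorded in the state
terraform show         # full attributes recorded for each resource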

version.tf: Pins which versions of the Terraform binary (and of the providers) are allowed to execute the configuration.
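
A minimal sketch of such a file (the version constraints here are illustrative assumptions):

terraform {
  required_version = ">= 1.0"

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0"
    }
  }
}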

terraform.tfvars: Allows us to manage variable assignments systematically in a file with the extension .tfvars or .tfvars.json.

Terraform core → Core is responsible for life cycle management of infrastructure.

Terraform Provider → A plugin for Terraform that makes a collection of related resources available.

Initialize the Terraform repo

Initializes the Terraform working directory, downloading any necessary provider plugins.

Run Terraform plan

Terraform plan command creates an execution plan, which lets you preview the changes that Terraform plans to make to your infrastructure.

When Terraform creates a plan it → Reads the current state of any already-existing remote objects to make sure that the Terraform state is up-to-date.

Run Terraform apply

Create the Azure resources defined in your Terraform configuration

Import the below repository into Azure DevOps for Terraform configuration

(OR) Create a file manually

Displays directory paths and (optionally) files in each subdirectory: tree
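
A sketch of what the working directory can look like at this point (file names follow the descriptions above; yours may differ):

.
├── main.tf
├── provider.tf
├── terraform.tfvars
└── variables.tf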

main.tf: This is our main configuration file where we are going to define our resource definition.

resource "azurerm_resource_group" "example" {
name = "${var.prefix}-rg"
location = var.location
}

resource "azurerm_virtual_network" "main" {
name = "${var.prefix}-network"
address_space = ["10.0.0.0/16"]
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
}

resource "azurerm_subnet" "internal" {
name = "internal"
resource_group_name = azurerm_resource_group.example.name
virtual_network_name = azurerm_virtual_network.main.name
address_prefixes = ["10.0.2.0/24"]
}

resource "azurerm_network_interface" "main" {
name = "${var.prefix}-nic"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name

ip_configuration {
name = "testconfiguration1"
subnet_id = azurerm_subnet.internal.id
private_ip_address_allocation = "Dynamic"
}
}

resource "azurerm_virtual_machine" "main" {
name = "${var.prefix}-vm"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
network_interface_ids = [azurerm_network_interface.main.id]
vm_size = "Standard_DS1_v2"

# Uncomment this line to delete the OS disk automatically when deleting the VM
# delete_os_disk_on_termination = true

# Uncomment this line to delete the data disks automatically when deleting the VM
# delete_data_disks_on_termination = true

storage_image_reference {
publisher = "Canonical"
offer = "0001-com-ubuntu-server-jammy"
sku = "22_04-lts"
version = "latest"
}
storage_os_disk {
name = "myosdisk1"
caching = "ReadWrite"
create_option = "FromImage"
managed_disk_type = "Standard_LRS"
}
os_profile {
computer_name = "hostname"
admin_username = "testadmin"
admin_password = "Password1234!"
}
os_profile_linux_config {
disable_password_authentication = false
}
tags = {
environment = "staging"
}
}

provider.tf: Contains the terraform block, the backend definition, provider configurations, and aliases.

provider "azurerm" {
features {}
}

terraform.tfvars: Used to store variable definitions. This allows you to externalize your variable definitions and makes it easier to manage them, especially if you have a large number of variables or need to use the same variables in multiple environments.

location = "West Europe"
prefix = "demo"

variables.tf: Define the variables that must have values in order for your Terraform code to validate and run. You can also define default values for your variables in this file.

variable "prefix" {

}

variable "location" {

}

init: Initializes a working directory, downloads the necessary provider plugins and modules, and sets up the backend for storing your infrastructure's state.

terraform init

Reinitialize your working directory. Terraform reports that it has been successfully initialized, and you may now begin working with it.

Finding the latest azurerm provider version

.terraform.lock.hcl captures the versions of all the Terraform providers you are using.

It is generated by Terraform when you run the terraform init command.

This file serves as a reference point across all executions, aiding in the evaluation of compatible dependencies with the current configuration.
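
A sketch of what a provider entry in .terraform.lock.hcl looks like (the version, constraint, and hash values are placeholders):

provider "registry.terraform.io/hashicorp/azurerm" {
  version     = "3.91.0"
  constraints = "~> 3.0"
  hashes = [
    "h1:placeholder-hash=",
  ]
}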

terraform fmt

The terraform fmt command is used to rewrite Terraform configuration files to a canonical format and style. This command applies a subset of the Terraform language style conventions, along with other minor adjustments for readability.

terraform validate runs checks that verify whether a configuration is syntactically valid and internally consistent, regardless of any provided variables or existing state.

terraform validate

The terraform plan command creates an execution plan, which lets you preview the changes that Terraform plans to make to your infrastructure.

When Terraform creates a plan it → Reads the current state of any already-existing remote objects to make sure that the Terraform state is up-to-date.

terraform plan

To find out how many resources the plan will create, pipe the output through the grep command.
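
A minimal sketch, assuming the default English wording of the plan output:

terraform plan -no-color | grep -c "will be created"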

Apply the Configuration

Create the Azure resources defined in your Terraform configuration.

Executes the actions proposed in a Terraform plan.

terraform apply (OR)
terraform apply --auto-approve

It asks for confirmation from the user before making any changes, unless it was explicitly told to skip approval (--auto-approve). Terraform then creates the resources.

Check in the Azure portal whether the resources were created.

This state is stored by default in a local file named “terraform.tfstate” → Terraform uses state to determine which changes to make to your infrastructure.

HashiCorp recommends storing it remotely (for example in Terraform Cloud) to version, encrypt, and securely share it with your team; in this walkthrough we use an Azure storage account instead.

An Azure storage account contains all of your Azure Storage data objects: blobs, files, queues, and tables. The storage account provides a unique namespace for your Azure Storage data that is accessible from anywhere in the world over HTTP or HTTPS.

Create storage account

A container organizes a set of blobs, similar to a directory in a file system, and can store any amount of data.

Storage container to host the tfstate file
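
The same setup can also be scripted with the Azure CLI (a sketch; the names match the backend.tf below, adjust them to your subscription):

az group create --name demo-rg --location westeurope
az storage account create --name ibrahimsi --resource-group demo-rg --sku Standard_LRS
az storage container create --name prod-tfstate --account-name ibrahimsi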

When you build infrastructure with a Terraform configuration, a state file named terraform.tfstate is created automatically in the local workspace directory.

This tfstate file holds information about the provisioned infrastructure that Terraform manages.

Whenever we make changes to the configuration file, Terraform uses the state file to determine which parts of the configuration are already created and which need to be changed.

A backend.tf file defines where Terraform stores its state data files. Terraform uses persisted state data to keep track of the resources it manages.

terraform {
  backend "azurerm" {
    resource_group_name  = "demo-rg"
    storage_account_name = "ibrahimsi"
    container_name       = "prod-tfstate"
    key                  = "prod.terraform.tfstate"
  }
}

Displays directory paths and (optionally) files in each subdirectory: tree

Initialize the providers again. If a local tfstate already exists, Terraform offers to migrate it to the new backend.

Inside of prod-tfstate there are no files yet.

Before running init, the terraform.tfstate file is still local.

init: Initializes a working directory, downloads the necessary provider plugins and modules, and sets up the backend for storing your infrastructure's state.

terraform init

The local state file is migrated to the remote backend.
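
If Terraform does not prompt for the migration on its own, you can request it explicitly with a standard flag of terraform init:

terraform init -migrate-state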

Check the container in the Azure portal.

The state file is now referenced remotely every time you run the init command.

Finally destroy the resources

terraform destroy (OR)
terraform destroy --auto-approve

Azure DevOps CI/CD Pipeline

Hashicorp Terraform is an open-source IaC (Infrastructure-as-Code) tool for configuring and deploying cloud infrastructure. It codifies infrastructure in configuration files that describe the desired state for your topology.

Terraform enables the management of any infrastructure — such as public clouds, private clouds, and SaaS services — by using Terraform providers.

Create a New project

Import the repository: https://github.com/Ibrahimsi/Terraform-AzureDevOps-Sample.git

Successfully imported the GitHub code

Create a new file named terraform.tfvars and write the content below:

location = "Canada Central"
prefix = "demo"

Azure Pipelines automatically builds and tests code projects. It supports all major languages and project types and combines continuous integration, continuous delivery, and continuous testing to build, test, and deliver your code to any destination. Set up the build pipeline.

Starter Project simplifies the setup of an entire continuous integration (CI) and continuous delivery (CD) pipeline to Azure with Azure DevOps. You can start with existing code or use one of the provided sample applications. Select the starter pipeline.

Copy the pipeline code:

trigger:
- main

stages:
- stage: Build
  jobs:
  - job: Build
    pool:
      vmImage: 'ubuntu-latest'
    steps:

Enable the terraform extension

Terraform enables the definition, preview, and deployment of cloud infrastructure.

Go to Organization settings → Extensions → Browse marketplace

Add Terraform Extension

Install the Terraform extension at the organization level.

Proceed to the organization

Finally, check whether the Terraform extension was added or not.

Go to the Terraform_Pipeline project.

Add Terraform tasks

Pipeline variables → Values that can be set and modified during a pipeline run.

Authorize the subscription → the pipeline must be authorized to use the Azure subscription's service connection.

Confirm the subscription. The command is set to init mode.

Create a storage account in the Azure portal. Go to the backend.tf and confirm the storage account name.

The storage account provides a unique namespace for your Azure Storage data that's accessible from anywhere in the world over HTTP or HTTPS. Create a storage account.

Go to resource → Create a new container

Fill out the Terraform pipeline.

Add the display name to the code. Search for Terraform again → select the validate command.

tf validate successfully added

Add another task → fmt

Terraform plan → Creates an execution plan, which lets you preview the changes that Terraform plans to make to your infrastructure.

Tf Plan Pipeline Code

Archive build → it is the overall package.

YAML is a human-readable data serialization language that is often used for writing configuration files.

Publish build artifacts → uploads the build artifacts, which can later be downloaded with the Download Build Artifacts task.

Artifacts are files created as part of a build process that often contain metadata about that build’s jobs like test results, security scans, etc.

Full pipeline code. 6 tasks are added:

  1. Build
  2. Tf init
  3. Tf validate
  4. fmt
  5. Tf plan
  6. Archive files

trigger:
- main

stages:
- stage: Build
  jobs:
  - job: Build
    pool:
      vmImage: 'ubuntu-latest'
    steps:
    - task: TerraformTaskV4@4
      displayName: Tf init
      inputs:
        provider: 'azurerm'
        command: 'init'
        backendServiceArm: 'Pay-As-You-Go(f30deb63-a417-4fa4-afc1-813a7d3920bb)'
        backendAzureRmResourceGroupName: 'demo-resources'
        backendAzureRmStorageAccountName: 'ibrahimsi'
        backendAzureRmContainerName: 'prod-tfstate'
        backendAzureRmKey: 'prod.terraform.tfstate'
    - task: TerraformTaskV4@4
      displayName: Tf validate
      inputs:
        provider: 'azurerm'
        command: 'validate'
    - task: TerraformTaskV4@4
      displayName: Tf fmt
      inputs:
        provider: 'azurerm'
        command: 'custom'
        customCommand: 'fmt'
        outputTo: 'console'
        environmentServiceNameAzureRM: 'Pay-As-You-Go(f30deb63-a417-4fa4-afc1-813a7d3920bb)'
    - task: TerraformTaskV4@4
      displayName: Tf plan
      inputs:
        provider: 'azurerm'
        command: 'plan'
        commandOptions: '-out $(Build.SourcesDirectory)/tfplanfile'
        environmentServiceNameAzureRM: 'Pay-As-You-Go(f30deb63-a417-4fa4-afc1-813a7d3920bb)'
    - task: ArchiveFiles@2
      displayName: Archive files
      inputs:
        rootFolderOrFile: '$(Build.SourcesDirectory)/'
        includeRootFolder: false
        archiveType: 'zip'
        archiveFile: '$(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip'
        replaceExistingArchive: true
    - task: PublishBuildArtifacts@1
      inputs:
        PathtoPublish: '$(Build.ArtifactStagingDirectory)'
        ArtifactName: '$(Build.BuildId)-build'
        publishLocation: 'Container'

Save and run the code

Build job → the pipeline is used to generate artifacts out of the source code.

Job approval needed → the pipeline needs permission to access the service connection before it can run.

Click on view → Permit → Permit access

Successfully built the job

Go to the release pipeline

A Release Pipeline consumes the Artifacts and conducts follow-up actions within a multi-staging system.

New pipeline → Select empty job

Stage 1 → Deployment

Azure Artifacts enables developers to efficiently manage all their dependencies from one place. Add an artifact.

Add build artifact

Add the trigger

Go to the deployment task

Add another task

Add the Terraform installer task

Add another task → Extract files

Modify the destination folder

Another task → terraform init

Terraform apply

Save the setting

Add one more stage → Clone the stage

Change the name

Only one task is modified (the Terraform command changes from apply to destroy).

Save the setting

A deployment job is a special type of job. It’s a collection of steps to run sequentially against the environment.
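
For reference, a minimal sketch of what a deployment job looks like in YAML (the environment name and script step are illustrative assumptions, not taken from this project's pipeline):

jobs:
- deployment: DeployTerraform
  displayName: Deploy with Terraform
  environment: 'production'   # assumed environment name
  pool:
    vmImage: 'ubuntu-latest'
  strategy:
    runOnce:
      deploy:
        steps:
        - script: echo "terraform apply would run here"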

Pre-deployment approvals: a user must manually approve the deployment before the release proceeds to that stage.

Post-deployment approvals: the team wants to ensure there are no active issues in the work item or problem management system before promoting a build to the next stage.

Add approval stage before destroy stage → Click Pre-deployment conditions

Select the members

Make a change in the Git repo and let it trigger the pipeline end to end.

The pipeline starts automatically

Build pipeline is running

Successful

Go to the release pipeline; once the build pipeline finishes, the release starts automatically.

job is running

Successfully ran the release pipeline

Check in the Azure portal whether the resources were created.

Destroying the resources needs to be approved; once approved, they are destroyed.

Approval needed → release pipeline

Successful

Destroy → removes every resource that the Terraform configuration manages.

The resources are deleted automatically in the Azure portal.

Thank you 🙏 for taking the time to read our blog.
