Hi, in this post we are going to create an Azure Kubernetes Service (AKS) cluster using Terraform. This is the second post in my Terraform series. If you want to create virtual machines in Azure using Terraform, you can read my earlier post about creating a virtual machine in Azure with Terraform.
You can find the source code for this post on my GitHub repository.
At TurkNet, we receive a lot of VM and Kubernetes cluster requests. When processing these requests, we take advantage of the power of Terraform as an automation tool.
Terraform is an open-source infrastructure-as-code tool that, as described on its website, enables you to safely and predictably create, change, and improve infrastructure.
If you want to code your infrastructure, Terraform is one of the options, and it is a great one. In this post, we are going to code our Azure Kubernetes Service Cluster with Terraform. Let’s begin.
If you want to automate some of your workloads on Azure, you will need a Service Principal (SP) account. We will use an SP for our automation.
Because the only way for Terraform to work with Azure is to connect to it, we will connect Terraform to Azure via a Service Principal. Let's create our SP.
NOTE: If you’ve followed my previous post and created your Service Principal, you can use it in this example too.
You will need your subscription ID for this SP. You can get your subscription ID from the Azure portal, or after logging in with the Azure CLI by running the command below:
az account show --subscription <subscription_name> --query id
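If you are not logged in yet, or you want the bare ID without the JSON quoting, something like the following should also work (the -o tsv output format strips the quotes):
# log in interactively first if needed
az login
# print only the subscription ID, without quotes
az account show --subscription <subscription_name> --query id -o tsv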
After getting your subscription ID, we can create our SP with the command below:
az ad sp create-for-rbac --name <service_principal_name> --role Contributor --scopes /subscriptions/<subscription_id>
Now, after creating our SP, its details will be printed to STDOUT like below:
{
"appId": "someappid",
"displayName": "sp-diplay-name",
"password": "superstrongpassword",
"tenant": "tenantid"
}
Take note of this information; we will use it when configuring Terraform.
After our SP is ready, now we can get to the Terraform part.
First we need to configure the Azure connection information for Terraform. This can be done in multiple ways. I prefer to create environment variables and pass them to Terraform at runtime. The other way is to write the connection information into the providers.tf file (I don't recommend it).
Let's create a file named .connection.env (the dot at the beginning means it is a hidden file) with the configuration below:
export ARM_CLIENT_ID="xxx"        # appId
export ARM_CLIENT_SECRET="xxx"    # password
export ARM_SUBSCRIPTION_ID="xxx"  # subscription ID
export ARM_TENANT_ID="xxx"        # tenant
After creating the file, we can source it to create our environment variables.
source .connection.env
Now, our Terraform configuration will be able to connect to Azure.
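You can quickly verify that the variables are exported in your current shell:
env | grep '^ARM_'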
Our file structure will be like below:
└── azure
├── .connection.env
├── main.tf
├── outputs.tf
├── providers.tf
├── terraform.tfvars
└── variables.tf
Let’s create this file structure one by one.
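If you like, you can scaffold the empty files up front (a small convenience, assuming a POSIX shell; adjust the directory name to your taste):
mkdir -p azure && cd azure
touch .connection.env main.tf outputs.tf providers.tf terraform.tfvars variables.tf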
Creating providers.tf file
Terraform will know which provider to use from our providers.tf file. Let's create a file named providers.tf with the configuration below:
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=2.84.0"
    }
  }
  required_version = ">= 1.1.3"
}

provider "azurerm" {
  features {}
}
Creating variables.tf file
We will declare our variables in the variables.tf file. Here we define variables such as the agent count, admin username, cluster name, DNS prefix of the cluster, resource group name, resource group location, SSH public key location, and the AKS service principal values. Note that we provide blank default values for the AKS service principal; we will supply these values in another file.
We will name our cluster "demok8s" and set the cluster node count to 3.
NOTE: I will leave some of the variables, such as the Log Analytics workspace ones, as comments below because I will not use those services with AKS.
NOTE: You will need an SSH key. If you don't have one under "~/.ssh/", you must create it.
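If you need to generate a key, something like the following should work (it writes id_rsa and id_rsa.pub to ~/.ssh/, matching the default ssh_public_key variable below):
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa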
variable "agent_count" {
default = 3
}# The following two variable declarations are placeholder references.
# Set the values for these variable in terraform.tfvars
variable "aks_service_principal_app_id" {
default = ""
}
variable "aks_service_principal_client_secret" {
default = ""
}
variable "admin_username" {
default = "demo"
}
variable "cluster_name" {
default = "demok8s"
}
(Video) How to create Azure Kubernetes Service using Terraformvariable "dns_prefix" {
default = "demok8s"
}
# # Refer to https://azure.microsoft.com/global-infrastructure/services/?products=monitor for available Log Analytics regions.
# variable "log_analytics_workspace_location" {
# default = "West Europe"
# }
# variable "log_analytics_workspace_name" {
# default = "testLogAnalyticsWorkspaceName"
# }
# # Refer to https://azure.microsoft.com/pricing/details/monitor/ for Log Analytics pricing
# variable "log_analytics_workspace_sku" {
# default = "PerGB2018"
# }
variable "resource_group_location" {
default = "West Europe"
description = "Location of the resource group."
}
variable "resource_group_name" {
default = "demo-terraform-kubernetes-RG"
description = "Resource group name that is unique in your Azure subscription."
}
variable "ssh_public_key" {
default = "~/.ssh/id_rsa.pub"
}
Creating outputs.tf file
We will collect the outputs generated by Terraform in the outputs.tf file. In this case we are exposing information about our AKS cluster, such as the kubeconfig of the cluster admin account.
NOTE: We set sensitive = true because these values are private and we don't want Terraform to print them to STDOUT.
output "client_certificate" {
value = azurerm_kubernetes_cluster.k8s.kube_config[0].client_certificate
sensitive = true
}output "client_key" {
value = azurerm_kubernetes_cluster.k8s.kube_config[0].client_key
sensitive = true
}
output "cluster_ca_certificate" {
value = azurerm_kubernetes_cluster.k8s.kube_config[0].cluster_ca_certificate
sensitive = true
}
output "cluster_password" {
value = azurerm_kubernetes_cluster.k8s.kube_config[0].password
sensitive = true
}
output "cluster_username" {
value = azurerm_kubernetes_cluster.k8s.kube_config[0].username
sensitive = true
}
output "host" {
value = azurerm_kubernetes_cluster.k8s.kube_config[0].host
sensitive = true
}
output "kube_config" {
value = azurerm_kubernetes_cluster.k8s.kube_config_raw
sensitive = true
}
output "resource_group_name" {
value = azurerm_resource_group.rg.name
}
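Once terraform apply has finished, any of these outputs can be read back individually; the -raw flag prints the bare value, which is handy for the sensitive ones:
terraform output resource_group_name
terraform output -raw host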
Creating main.tf file
Terraform will run based on the main.tf file. We will add our AKS configuration as well as the Azure resource configurations to this file.
We will create our cluster with the "kubenet" network plugin, our node pool will consist of "Standard_D2_v2" VMs, and we will tag the cluster with "Environment=Development".
NOTE: Again, I will leave some of the resources as comments because I will not be using them, but they stay as a good example.
resource "azurerm_resource_group" "rg" {
location = var.resource_group_location
name = var.resource_group_name
}# resource "random_id" "log_analytics_workspace_name_suffix" {
# byte_length = 8
# }
# resource "azurerm_log_analytics_workspace" "test" {
# location = var.log_analytics_workspace_location
# # The WorkSpace name has to be unique across the whole of azure;
# # not just the current subscription/tenant.
# name = "${var.log_analytics_workspace_name}-${random_id.log_analytics_workspace_name_suffix.dec}"
# resource_group_name = azurerm_resource_group.rg.name
# sku = var.log_analytics_workspace_sku
# }
# resource "azurerm_log_analytics_solution" "test" {
# location = azurerm_log_analytics_workspace.test.location
# resource_group_name = azurerm_resource_group.rg.name
# solution_name = "ContainerInsights"
# workspace_name = azurerm_log_analytics_workspace.test.name
# workspace_resource_id = azurerm_log_analytics_workspace.test.id
# plan {
# product = "OMSGallery/ContainerInsights"
# publisher = "Microsoft"
# }
# }
resource "azurerm_kubernetes_cluster" "k8s" {
location = azurerm_resource_group.rg.location
name = var.cluster_name
resource_group_name = azurerm_resource_group.rg.name
dns_prefix = var.dns_prefix
tags = {
Environment = "Development"
}
default_node_pool {
name = "agentpool"
vm_size = "Standard_D2_v2"
node_count = var.agent_count
}
linux_profile {
admin_username = var.admin_username
ssh_key {
key_data = file(var.ssh_public_key)
}
}
network_profile {
network_plugin = "kubenet"
load_balancer_sku = "standard"
}
service_principal {
client_id = var.aks_service_principal_app_id
client_secret = var.aks_service_principal_client_secret
}
}
Creating terraform.tfvars file
We will provide some of the required variables with the terraform.tfvars file. These are the service principal values.
aks_service_principal_app_id = "your-principal-app-id"
aks_service_principal_client_secret = "your-principal-client-secret"
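If you'd rather not keep the secret on disk at all, Terraform also reads variables from environment variables named TF_VAR_<variable_name>, so the same values could be supplied like this instead:
export TF_VAR_aks_service_principal_app_id="your-principal-app-id"
export TF_VAR_aks_service_principal_client_secret="your-principal-client-secret"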
After our structure is done, we can begin to create our AKS cluster.
Now, while in the folder where main.tf resides, we can initialize Terraform with the command:
terraform init
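Optionally, you can check the formatting and validate the configuration at this point:
terraform fmt
terraform validate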
After initialization:
terraform plan
Terraform will plan the creation and output this plan to STDOUT. If you want to keep this plan in a file, you can use the command:
terraform plan -out myaksplan
With this command, Terraform will create a file named myaksplan and keep the plan inside it. If you want to apply this config later, you can use the plan file.
After planning, we can apply this plan with:
terraform apply
and this command will apply the last planned Terraform code. If you want to apply the config that you planned and wrote to a file, use:
terraform apply <filename>
After these commands, Terraform will create your resources in a short time. You can check the resources on the Azure portal.
NOTE: You can use the -auto-approve argument to skip the prompt when Terraform asks for your approval. But this argument has its downsides: if your Terraform configuration changed without your knowledge, applying it without approval can cause unpredictable changes to your infrastructure.
After the creation is finished, Terraform will output some of the variables that we mentioned in the outputs.tf file. These variables are:
- Client Certificate
- Client Key
- Cluster CA Certificate
- Cluster Username
- Cluster Password
- Hostname
- Kubeconfig
- Resource Group Name
To connect to our AKS cluster, we will need the kubeconfig file, and to get that kubeconfig file from Terraform we can use:
terraform output -raw kube_config > aks_kubeconfig
and then to connect to AKS cluster:
export KUBECONFIG=aks_kubeconfig
After that we can use kubectl commands on our CLI. (Assuming you already have the kubectl package installed.)
NOTE: We are providing a relative path to KUBECONFIG. You should be working in the directory where you wrote the kube_config output from Terraform, or you can give the absolute path.
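For example, a quick check that the cluster is reachable and the three agent nodes are ready:
kubectl get nodes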
After your experiment is done, you can remove your resources created by Terraform with a Terraform command.
You can plan your destroy operation and then use that plan to destroy, or simply use the destroy command.
To plan destroy operation and see what will be destroyed:
terraform plan -destroy
To destroy resources:
terraform destroy
This command is the equivalent of:
terraform apply -destroy
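As with apply, the destroy can be planned to a file first and then applied; for example (the plan file name here is arbitrary):
terraform plan -destroy -out myaksdestroyplan
terraform apply myaksdestroyplan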
In this post, we looked at how to create an Azure Kubernetes Service cluster with Terraform. Terraform is a strong infrastructure automation tool, and it is widely used for cloud infrastructure workloads. I would suggest that you learn it.
This is the second post about Terraform that I wrote. You can find the first post about Terraform here: How to create VMs in Azure with Terraform
If you have any questions feel free to reach out to me.
Thank you all for reading, have a good day!