Set Up a Kubernetes Cluster on Azure (Self-Hosted)

Setting up Kubernetes Master Node

 

In this guide we will set up 1 master node and 1 worker node using the Kubernetes image from the Azure marketplace.  You can deploy as many worker nodes as you need. We will then deploy an NGINX containerized web app as a test.

 

The first step is to deploy two servers using the Kubernetes image and log in to both servers via an SSH terminal.

 

First, let's log in to the server that is going to be your master node.

 

We now need to initialize the Kubernetes master node. In this example I'm going to use the private CIDR address space 192.168.0.0/16 for the Pod network. Make sure the address range you use isn't already in use in your environment:

sudo kubeadm init --pod-network-cidr=192.168.0.0/16 

This will then run for a few minutes and start setting up the master node:

[Screenshot: kubeadm init output]

 

Your output should look similar to the above.

 

The output gives us a kubeadm join command that we will need to use later to join our worker node(s) to the master node. So, take note of this command for later.
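If you lose this output, you don't need to re-initialize; a fresh join command can be printed on the master node at any time (standard kubeadm behaviour):

sudo kubeadm token create --print-join-command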

 

The output from above also advises us to run several commands as a regular user to start using the Kubernetes cluster. Run those three commands on the master node:

 

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config

Check cluster status:

kubectl cluster-info
[Screenshot: kubectl cluster-info output]

Add additional Master nodes to cluster

 

If you want to add additional master nodes, simply deploy another Kubernetes server using the image from the marketplace (Azure image, AWS image, GCP image) and run the following kubeadm join command using the output you got from setting up the first master node. Notice I've added --control-plane at the end.

kubeadm join 10.0.1.5:6443 --token wwmfhs.2t1u8fdrlfd0j12x \
    --discovery-token-ca-cert-hash sha256:69c9e61969a63201fbf3682e8d4a76bae375127d3720eb21df37b3fc3aeeb529 \
   --control-plane

Install Azure CNI network plugin on Master node

 

On the master node we now need to set up the networking by installing a network plugin. Azure CNI is an open-source plugin that integrates Kubernetes Pods with an Azure Virtual Network (also known as a VNet), providing network performance on par with VMs. Pods can connect to peered VNets and to on-premises networks over ExpressRoute or site-to-site VPN, and are also directly reachable from those networks. Pods can access Azure services, such as Storage and SQL, that are protected by Service Endpoints or Private Link. You can use VNet security policies and routing to filter Pod traffic. The plugin assigns VNet IPs to Pods by utilizing a pool of secondary IPs pre-configured on the network interface of a Kubernetes node.

 

The Azure CNI plugin script is pre-installed. We need to complete a few configuration steps to get it working in your environment.

 

Find out the latest Azure CNI plugin version

Find out the latest CNI version
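If you're unsure where to look, the latest release tags can be queried from GitHub. This is just a quick sketch; it assumes curl is available and that Azure/azure-container-networking and containernetworking/plugins are the repositories behind the plugin and CNI versions used by this image:

curl -s https://api.github.com/repos/Azure/azure-container-networking/releases/latest | grep tag_name
curl -s https://api.github.com/repos/containernetworking/plugins/releases/latest | grep tag_name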

 

As of writing, and in this example, I will use the following versions:

 

PLUGIN_VERSION="v1.2.8"

CNI_VERSION="v0.9.1"

 

Run the following commands to install the plugin on the master node:

 

cd /bin/azure-vnet-cnm/

sudo bash ./install-cni-plugin.sh v1.2.8 v0.9.1

[Screenshot: Azure CNI plugin install script output]

Outbound Connectivity from pods

 

You have to add the following iptables command to allow outbound (internet) connectivity from pods. The <vnet_address_space> is the subnet address space where your master and worker nodes are located on your VNet, so in my example it's 10.0.1.0/24:

 

sudo iptables -t nat -A POSTROUTING -m addrtype ! --dst-type local ! -d 10.0.1.0/24 -j MASQUERADE
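To confirm the rule is in place, you can list the NAT POSTROUTING chain (an optional sanity check, plain iptables):

sudo iptables -t nat -L POSTROUTING -n -v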

Azure CNI Network Configuration

 

By default the Azure CNI network configuration should work straight out of the box.  Detailed instructions can be found here:

 

Azure CNI Specifications

 

The default location for the configuration file is /etc/cni/net.d/10-azure.conflist

Here is the default config:


{
   "cniVersion":"0.3.0",
   "name":"azure",
   "plugins":[
      {
         "type":"azure-vnet",
         "mode":"transparent",
         "ipsToRouteViaHost":["169.254.20.10"],
         "ipam":{
            "type":"azure-vnet-ipam"
         }
      },
      {
         "type":"portmap",
         "capabilities":{
            "portMappings":true
         },
         "snat":true
      }
   ]
}
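If you want to confirm the configuration on your node matches the above, simply print the file (there is nothing to change by default):

cat /etc/cni/net.d/10-azure.conflist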


Add Azure Secondary IP Configurations to Nodes for Pods

 

In order for Pods to communicate on Azure and within your environment, we need to add secondary IP configurations to your master and worker node NICs. Here is a screenshot of my NIC IP configuration:

 

I've added 3 extra secondary IP configurations on both nodes. The IPAM plugin will automatically detect these new IP configurations and assign them to newly created Pods, so you will need to add enough IPs for the number of Pods you plan to create.

 

Master Node

[Screenshot: master node NIC secondary IP configurations]

Worker Node

[Screenshot: worker node NIC secondary IP configurations]
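If you prefer scripting this over clicking through the portal, a secondary IP configuration can also be added with the Azure CLI. This is only a sketch; the resource group, NIC name, and IP address below are placeholders for your environment:

az network nic ip-config create --resource-group <resource-group> --nic-name <node-nic> --name ipconfig2 --private-ip-address 10.0.1.6

Repeat with ipconfig3, ipconfig4 and so on for as many Pod IPs as you need, on both the master and worker node NICs.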

Enable IP Forwarding

 

We now need to enable IP forwarding on all our node NICs, so that Azure can route traffic to our Pod IPs. This change will require a reboot.

 

[Screenshot: enabling IP forwarding on the NIC]
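The same setting can be applied with the Azure CLI if you prefer (again a sketch, with placeholder resource group and NIC names):

az network nic update --resource-group <resource-group> --name <node-nic> --ip-forwarding true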

Now run the following command to see the status of your pods.

 

You may need to run it a few times until the status for all pods says 'Running'. If the coredns pods are still stuck in a creating state after a long time, it may be because you need to add more secondary IP configurations to your nodes' NICs, as described above.

kubectl get pods --all-namespaces

 

Then after a minute or two, when I run the command I can see they are now all running:

[Screenshot: kubectl get pods --all-namespaces showing all pods Running]

 

Azure CNI Network Logs

 

If you have issues with networking, review the Azure CNI Network logs at the following locations:

 

Logs generated by azure-vnet plugin are available in /var/log/azure-vnet.log

Logs generated by azure-vnet-ipam plugin are available in /var/log/azure-vnet-ipam.log
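While reproducing a networking issue, it can help to follow both logs live. This is plain tail, nothing specific to the plugin:

sudo tail -f /var/log/azure-vnet.log /var/log/azure-vnet-ipam.log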

Before adding workers, confirm the master node is ready:
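kubectl get nodes

The master node should report a Ready status once the CNI plugin is installed and its pods are running.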

Add Kubernetes Worker Nodes

 

Now that the master node (control plane) has been set up, you are ready to set up worker nodes (which run your scheduled workloads). Simply deploy as many worker nodes as you require, using the Kubernetes image from the marketplace (Azure image, AWS image, GCP image).

Log in to your worker nodes and run the kubeadm join command from the output we got when setting up the master node:

sudo kubeadm join 10.0.1.5:6443 --token wwmfhs.2t1u8fdrlfd0j12x \
    --discovery-token-ca-cert-hash sha256:69c9e61969a63201fbf3682e8d4a76bae375127d3720eb21df37b3fc3aeeb529

This should be the output:

[Screenshot: kubeadm join output on the worker node]

Worker Node Networking Setup

 

On your worker nodes you will need to set up the Azure CNI plugin, following the exact same steps you did above for the master node, in order for the nodes to communicate in the cluster (a consolidated sketch of these commands follows the list below):

 

  1. Run the Azure CNI Configuration script (instructions above)
  2. Allow outbound connectivity from pods (instructions above)
  3. Add secondary IP configs to worker node NIC (instructions above)
  4. Enable IP Forwarding. (instructions above)
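For convenience, here is a consolidated sketch of those four steps on a worker node. It assumes the same plugin versions and subnet as the master node example above; the resource group, NIC name, and IP address are placeholders, and the last two commands use the Azure CLI from wherever you have it installed:

# 1. Install the Azure CNI plugin (same versions as on the master node)
cd /bin/azure-vnet-cnm/
sudo bash ./install-cni-plugin.sh v1.2.8 v0.9.1

# 2. Allow outbound (internet) connectivity from pods
sudo iptables -t nat -A POSTROUTING -m addrtype ! --dst-type local ! -d 10.0.1.0/24 -j MASQUERADE

# 3. Add secondary IP configurations to the worker node NIC (repeat per extra Pod IP)
az network nic ip-config create --resource-group <resource-group> --nic-name <worker-nic> --name ipconfig2 --private-ip-address 10.0.1.7

# 4. Enable IP forwarding on the worker node NIC (requires a reboot)
az network nic update --resource-group <resource-group> --name <worker-nic> --ip-forwarding true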

Now if we check the status on the master node (control plane), we should see the new nodes in the cluster. In my demo I've only added 1 worker:

 

kubectl get nodes

 

It should now show both nodes as ready:

 

[Screenshot: kubectl get nodes showing both nodes Ready]

Add application to Kubernetes Cluster

 

Our cluster is now fully set up and you can start deploying applications. First, let's validate that the cluster is working by deploying an application. I will deploy an NGINX web server.

Creating an NGINX server

 

I'm now going to build an NGINX server just to confirm our deployment is working. I will be using the YAML from the Kubernetes examples.

 

This is what the YAML file consists of:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80

1. Create a deployment based on the YAML file. I will be running this on the master node:
kubectl apply -f https://k8s.io/examples/application/deployment.yaml

2. Display the information about the deployment:

kubectl describe deployment nginx-deployment

The output is similar to this:

[Screenshot: kubectl describe deployment nginx-deployment output]

3. List the Pods created by the deployment:

kubectl get pods -l app=nginx

The output is similar to this:

[Screenshot: kubectl get pods -l app=nginx output]

4. Create a service for the Nginx deployment

sudo touch nginx-service.yaml

sudo nano nginx-service.yaml

Now we want to add the following config into our new YAML file, so I'm going to copy and paste the config below into the nginx-service.yaml file we just created:

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  type: NodePort
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80

Now, let’s create the service.

kubectl create -f nginx-service.yaml

Check the service is created successfully.

kubectl get svc

You will notice the service has been assigned a NodePort, which in my case is 30260. So to access the new NGINX web server we browse to http://<node IP address>:30260

 

[Screenshot: kubectl get svc output]

 

Now if I browse to that address, I can see the web server:

 

[Screenshot: NGINX welcome page in the browser]
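You can also verify from the terminal on any machine that can reach the node (use whatever NodePort kubectl get svc reported, 30260 in my case):

curl http://<node IP address>:30260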
