In this guide, we will set up one master node and one worker node using the Kubernetes image from the Azure Marketplace. You can deploy as many worker nodes as you need. We will then deploy a containerized NGINX web app as a test.
The first step is to deploy two servers using the Kubernetes image and log in to both via an SSH terminal.
First, let's log in to the server that is going to be your master node.
We now need to initialize the Kubernetes master node. In this example I'm going to use the private CIDR address space 192.168.0.0/16 for the pod network. Make sure the address range you choose isn't already in use in your environment:
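For the 192.168.0.0/16 pod network CIDR, the standard kubeadm initialization command looks like this:

sudo kubeadm init --pod-network-cidr=192.168.0.0/16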
This will run for a few minutes while the master node is set up:
Your output should look similar to the above.
The output gives us a kubeadm join command that we will need to use later to join our worker node(s) to the master node. So, take note of this command for later.
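The token and certificate hash in the output are unique to your cluster, but the join command has this general shape (placeholder values shown):

sudo kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>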
The output above also advises us to run several commands as a regular user to start using the Kubernetes cluster. Run those three commands on the master node:
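These are the standard commands printed by kubeadm, which copy the admin kubeconfig into your home directory so kubectl can talk to the cluster:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config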
If you want to add additional master nodes, simply deploy another Kubernetes server using the image from the marketplace (Azure image, AWS image, GCP image) and run the following kubeadm join command using the output you got from setting up the first master node. Notice I've added --control-plane at the end.
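With placeholder values, that looks like:

sudo kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash> --control-plane

Depending on your kubeadm version, joining an additional control-plane node may also require a --certificate-key value, which kubeadm init --upload-certs prints.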
On the master node we now need to set up the networking by installing a network plugin. Azure CNI is an open-source plugin that integrates Kubernetes pods with an Azure Virtual Network (also known as a VNet), providing network performance on par with VMs. Pods can connect to peered VNets and to on-premises networks over ExpressRoute or a site-to-site VPN, and are also directly reachable from these networks. Pods can access Azure services, such as Storage and SQL, that are protected by service endpoints or Private Link. You can use VNet security policies and routing to filter pod traffic. The plugin assigns VNet IPs to pods by utilizing a pool of secondary IPs pre-configured on the network interface of a Kubernetes node.
The Azure CNI plugin script is pre-installed. We need to complete a few configuration steps to get it working in your environment.
As of writing, and in this example, I will use the following versions:
PLUGIN_VERSION="v1.2.8"
CNI_VERSION="v0.9.1"
Run the following commands on the master node to install the plugin:
cd /bin/azure-vnet-cnm/
sudo bash ./install-cni-plugin.sh v1.2.8 v0.9.1
Outbound Connectivity from Pods
You have to add the following iptables rule to allow outbound (internet) connectivity from pods. The <vnet_address_space> is the subnet address space where your master and worker nodes are located on your VNet, so in my example it's 10.0.1.0/24:
sudo iptables -t nat -A POSTROUTING -m addrtype ! --dst-type local ! -d 10.0.1.0/24 -j MASQUERADE
Azure CNI Network Configuration
By default the Azure CNI network configuration should work straight out of the box. Detailed instructions can be found here:
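If you want to confirm what the install script put in place, the CNI network configuration conventionally lives under /etc/cni/net.d/, which you can inspect directly (the exact file name depends on the plugin version):

ls /etc/cni/net.d/
sudo cat /etc/cni/net.d/*.conflist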
Add Azure Secondary IP Configurations to Node NICs for Pods
In order for pods to communicate on Azure and within your environment, we need to add secondary IP configurations to the NIC of each master and worker node. Here is a screenshot of my NIC IP configuration:
I've added 3 extra secondary IP configurations on both nodes. The IPAM plugin will automatically detect these new IP configurations and assign them to newly created pods, so you will need to add as many secondary IPs as the number of pods you plan to create on each node.
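If you prefer the Azure CLI to the portal, adding a secondary IP configuration looks roughly like this (the resource group, NIC name, and IP config name are placeholders for your environment):

az network nic ip-config create --resource-group <resource-group> --nic-name <node-nic-name> --name ipconfig2

Repeat with ipconfig3, ipconfig4, and so on, once per extra pod IP you need on that node.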
Master Node
Worker Node
Enable IP Forwarding
We now need to enable IP forwarding on all our node NICs, so that Azure can route traffic to the pod IPs. This change will require a reboot.
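You can do this from the portal on each NIC, or with the Azure CLI (resource names are placeholders):

az network nic update --resource-group <resource-group> --name <node-nic-name> --ip-forwarding true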
Now run the following command to see the status of your pods:

kubectl get pods --all-namespaces

You may need to run it a few times until the status for all pods says 'Running'. If the coredns pods are still stuck creating after a long time, it may be because you need to add more secondary IP configurations to your nodes' NICs, as described above.
Then after a minute or two, when I run the command again, I can see they are all running:
Azure CNI Network Logs
If you have issues with networking, review the Azure CNI Network logs at the following locations:
Logs generated by the azure-vnet plugin are available in /var/log/azure-vnet.log
Logs generated by the azure-vnet-ipam plugin are available in /var/log/azure-vnet-ipam.log
Confirm the master node is ready:
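The usual check is:

kubectl get nodes

The master should report a STATUS of Ready.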
Add Kubernetes Worker Nodes
Now that the master node (control plane) has been set up, you are ready to set up worker nodes (which run your scheduled workloads). Simply deploy as many worker nodes as you require, using the Kubernetes image from the marketplace (Azure image, AWS image, GCP image).
Log in to your worker nodes and run the kubeadm join command from the output we got when setting up the master node (the same command noted earlier, without the --control-plane flag):
On your worker nodes you will need to set up the Azure CNI plugin, following the exact same steps as for the master node, so that the nodes can communicate in the cluster:
Run the Azure CNI install script (instructions above)
Allow outbound connectivity from pods (instructions above)
Add secondary IP configurations to the worker node NIC (instructions above)
Enable IP forwarding (instructions above)
Now if we check the status on the master node (control plane), we should see the new nodes in the cluster. In my demo I've only added 1 worker:
kubectl get nodes
It should now show both nodes as ready:
Add an Application to the Kubernetes Cluster
Our cluster is now fully set up, and you can start deploying applications. To validate that the cluster is working, I will deploy an NGINX web server.
Creating an NGINX server
I'm now going to build an NGINX server just to confirm our deployment is working. I will be using the YAML from the Kubernetes examples.
Create a new file called nginx-service.yaml, then copy and paste the following config into it:
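As a sketch, here is the standard NGINX Deployment from the Kubernetes examples, plus a simple NodePort Service to match the file name (the Service name and type are illustrative assumptions):

# Standard NGINX Deployment from the Kubernetes examples
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
---
# Illustrative Service so the NGINX pods are reachable; name and type are assumptions
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80

Save the file, then apply it and watch the pods come up:

kubectl apply -f nginx-service.yaml
kubectl get pods
kubectl get svc nginx-service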