How to Set Up Docker Swarm Load Balancing using Nginx on Ubuntu 20.04

In this post, we introduce Docker and container load balancing, cover the advantages, and then explain how to set up Docker Swarm load balancing using Nginx on Ubuntu 20.04.

Let's get started.

Load Balancing in Docker

Generally, load balancing efficiently distributes incoming network traffic across a group of backend servers.

High-traffic websites must serve thousands of concurrent requests from users or clients, and they have to return the correct images, text, video, or application data quickly and reliably.

Meeting these volumes in line with modern computing best practices generally requires more servers. A load balancer sits in front of your servers and routes client requests across all of them, maximizing speed and capacity utilization while preventing any single server from being overloaded. But how does load balancing work in Docker?

What are the different benefits of container load balancing? Let’s find out.

What Is Container Load Balancing?

Container load balancing delivers traffic management services for containerized applications.

Developers use containers to test, deploy, and scale applications through continuous integration and delivery (CI/CD). But the stateless and transient nature of container-based applications requires traffic control for optimal performance.

Orchestration platforms like Kubernetes, Mirantis, Amazon ECS, etc., deploy and manage containers. A dedicated load balancer sitting in front of the Docker engine results in higher scalability and availability of client requests.

It also ensures the uninterrupted performance of the microservices-based applications that run inside the containers. Load balancing Docker containers is what makes it possible to update a microservice without disruption.

When you deploy containers across multiple servers, multiple containers can be made accessible on the same host port, thanks to the load balancer in front of the Docker containers.

How Does Container Load Balancing Work?

Docker is a platform that runs applications in containers. Hence, it also covers the distribution, packaging, and management of independent container applications.

In addition to these management tasks, container load balancing provides the infrastructure services that keep applications secure and running effectively.

But it’s only possible once the following points are true:

  • You deploy containers across a cluster of servers.
  • Containers are continuously updated without disruption.
  • Different containers on a single host are made accessible through the same port.
  • There’s secure container communication.

These factors are vital in achieving Docker’s container load balancing benefits.

Benefits of Container Load Balancing

Out of its multiple benefits, here are a few key benefits of container load balancing:

Balanced distribution

A user-defined load balancing algorithm distributes traffic evenly across the healthiest and most available backends in the endpoint group.

Better visibility and security

Depending on the required granularity, visibility is available at the pod or container level. Also, preserving IP sources results in the simple tracking of traffic sources.

Accurate health checks

Direct health checks of the pods are an accurate way to determine the health of the backends.

Best Practices for Load Balancing Docker Containers

Here are the best practices for load balancing Docker containers with an Application Delivery Controller (ADC), including load balancer, WAF, and GSLB.

Support for automated service discovery

Service discovery is a vital feature of container environments because it allows dynamic application scaling.

Containers and their health status can be discovered automatically, so traffic can be redirected to optimize application performance. Because of this automation, the process is faster than manual configuration.

Your Application Delivery Controller should support automated service discovery. If not, you will have to rely on manual processes to connect your ADCs to your containers, and to scale your ADCs as your container use fluctuates.

Use an ADC that supports DNS-based service discovery using Service (SRV) records, which carry the port, weight, and target host of each service instance.
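For reference, an SRV record carries a priority, weight, port, and target host. A hypothetical pair of records for a service (the names here are illustrative, not part of this tutorial) looks like this in DNS zone-file syntax:

```
; _service._proto.name          TTL class SRV priority weight port target
_http._tcp.backend.example.com. 300 IN    SRV 10       60     8080 node1.example.com.
_http._tcp.backend.example.com. 300 IN    SRV 10       40     8080 node2.example.com.
```

An ADC that understands these records can send roughly 60% of traffic to node1 and 40% to node2, both on port 8080, without any manual endpoint configuration.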

Service mesh awareness

Cloud-native applications composed of microservices that run across containers require a service mesh: a dedicated infrastructure layer that handles the vital service-to-service communication between containerized microservices.

Why is this needed? As applications are decomposed into loosely coupled microservices running in container environments, the services must communicate with each other without impacting application performance.

Management, Observability, and Scalability

Once deployed into the container environment, the ADC provides detailed Layer 7 telemetry, like latency and HTTP error rates. This telemetry is easy to access and act on, so you know the traffic conditions and how the apps are performing.

You also need to manage your ADC instances. In a large-scale containerized deployment, you might have thousands of ADCs, making it difficult to manage them individually.

Using these practices, you can increase the efficiency of container load balancing in Docker. Now that you understand the nitty-gritty of load balancing in Docker, you can use it effectively to manage your ecosystem.

Set Up Docker Swarm Load Balancing using Nginx on Ubuntu 20.04

Prerequisites

  • Three servers running the Ubuntu 20.04 operating system, along with SSH access.
  • A root password or a user with sudo privileges.
  • A minimum of 4 GB of RAM and 2 CPU cores on each server.

Install Docker Engine on All Nodes

Before starting, you will need to install the Docker CE package on all nodes. To install the latest Docker version, you will need to set up the Docker repository on your system.

First, install the required dependencies using the following command:

    apt-get install apt-transport-https ca-certificates curl software-properties-common -y

Next, add Docker’s GPG key using the command below:

    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -

Next, add the Docker repository to the APT sources file.

    add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable"

Finally, install the latest version of Docker using the command below:

    apt-get install docker-ce -y

Once Docker is installed, verify the Docker version using the following command:

    docker --version

Sample output:

    Docker version 20.10.8, build 3967b7d

Set up Docker Swarm Cluster

First, you will need to set up a Docker Swarm cluster on two of the three nodes. To do so, log in to the master node and run the following command to initialize the Swarm cluster, replacing master-ip with the IP address of your master node.

    docker swarm init --advertise-addr master-ip

You will get the following output:

    Swarm initialized: current node (c1403ozs6dqtfczwifoxds0whme3) is now a manager.

    To add a worker to this swarm, run the following command:

        docker swarm join --token SWMTKN-1-3zircoouq3j30rpsbx6lg2ui7w4zmbsv6zzfgsmjd7w9lp-nxx72k07z9e2q3y08r7m3iv34 192.168.1.11:2377

    To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
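As a side note, if you lose this output, you can print the worker join command again at any time from the master node (a sketch, assuming the Swarm initialized above is running):

```shell
# Print the full "docker swarm join" command (including the token) for workers
docker swarm join-token worker

# Rotate the token if it has been exposed; previously issued tokens stop working
docker swarm join-token --rotate worker
```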

After initializing the Docker Swarm, you can run the following command to verify the Swarm cluster.

    docker info

You should see the following output:

    Client:
     Context:    default
     Debug Mode: false

    Server:
     Containers: 0
      Running: 0
      Paused: 0
      Stopped: 0
     Images: 1
     Server Version: 20.10.7
     Storage Driver: overlay2
      Backing Filesystem: extfs
      Supports d_type: true
      Native Overlay Diff: true

Add Worker Node to Swarm Cluster

Next, you will need to add the Worker node to the Swarm cluster. To do so, log in to the Worker node and run the following command:

    docker swarm join --token SWMTKN-1-3zircoouq3j30rpsbx6lg2ui7w4zmbsv6zzfgsmjd7w9lp-nxx72k07z9e2q3y08r7m3iv34 master-ip:2377

Once the Worker node is added to the Swarm cluster, you can verify it with the following command:

    docker node ls

You should get something like this:

    ID                            HOSTNAME      STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
    czwifhme3vc2a2v03ozs6dqtf *   master        Ready     Active         Leader           20.10.12
    ltp7u6zh8zkqmrtk7p2pi3lqz     worker        Ready     Active                          20.10.12
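A useful related command, sketched here for reference: if you later need to take the worker out of rotation for maintenance, Swarm can drain it so its tasks move to other nodes (run on the master; worker is the hostname shown in the output above):

```shell
# Stop scheduling new tasks on the node and move its existing tasks elsewhere
docker node update --availability drain worker

# Put the node back into rotation when maintenance is done
docker node update --availability active worker
```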

Deploy Nginx Service on Docker Swarm Cluster

Next, you will need to deploy the Nginx service on the master node and scale it across both nodes. Go to the master node and run the following command to create an Nginx service:

    docker service create --name backend --replicas 2 --publish 8080:80 nginx

You should see the following output:

    doxiikvbo6rx0bcbju1lq9zkr
    overall progress: 2 out of 2 tasks
    1/2: running   [==================================================>]
    2/2: running   [==================================================>]
    verify: Service converged

Next, verify your Nginx service using the following command:

    docker service ls

You should get something like this:

    ID             NAME            MODE         REPLICAS   IMAGE          PORTS
    bcbju1lq9zkr   backend         replicated   2/2        nginx:latest   *:8080->80/tcp
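Optionally, you can check which node each replica landed on and scale the service up or down later; a sketch, run from the master node:

```shell
# List the individual tasks of the service and the node each one runs on
docker service ps backend

# Scale from 2 to 4 replicas; Swarm spreads them across the available nodes
docker service scale backend=4
```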

Set up a Load Balancer

To set up a load balancer, you will need to initialize a separate single-node Swarm on the load balancer node. Log in to the load balancer node and initialize the new Swarm with the following command, replacing load-balancer-ip with the node's IP address:

    docker swarm init --advertise-addr load-balancer-ip

Next, create a directory for the load balancer configuration with the following command:

    mkdir -p /data/loadbalancer

Next, create a configuration file with the following command:

    nano /data/loadbalancer/default.conf

Add the following configuration, replacing master-ip and worker-ip with the IP addresses of your master and worker nodes:

    server {
       listen 80;
       location / {
          proxy_pass http://backend;
       }
    }
    upstream backend {
       server master-ip:8080;
       server worker-ip:8080;
    }
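Nginx balances the two upstream servers round-robin by default. As an optional refinement (not required for this tutorial), the upstream block accepts standard Nginx directives for passive health checks and alternative balancing methods:

```
upstream backend {
   least_conn;                                          # prefer the server with the fewest active connections
   server master-ip:8080 max_fails=3 fail_timeout=30s;  # skip a server for 30s after 3 failed attempts
   server worker-ip:8080 max_fails=3 fail_timeout=30s;
}
```

With max_fails and fail_timeout set, Nginx temporarily stops sending traffic to a backend that is not responding, instead of letting client requests fail.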

Save and close the file, then create the load balancer container and publish it on port 80.

    docker service create --name loadbalancer --mount type=bind,source=/data/loadbalancer,target=/etc/nginx/conf.d --publish 80:80 nginx

The above command will create an Nginx container and allow connections to the web services hosted by your Docker Swarm.

Next, open your web browser and verify the load balancer using the URL http://load-balancer-ip. You should see the default Nginx test page.
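You can also verify from the command line. A quick check, assuming curl is installed and load-balancer-ip is replaced with your load balancer's address:

```shell
# Fetch only the response headers; expect "HTTP/1.1 200 OK" and "Server: nginx"
curl -I http://load-balancer-ip/

# Send a few requests; Nginx forwards them round-robin to the two backend nodes
for i in 1 2 3 4; do
  curl -s -o /dev/null -w "%{http_code}\n" http://load-balancer-ip/
done
```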

Thank you for reading How to Set Up Docker Swarm Load Balancing using Nginx on Ubuntu 20.04. We shall conclude.

How to Set Up Docker Swarm Load Balancing using Nginx on Ubuntu 20.04 Conclusion

In this post, we set up a Docker Swarm cluster on two nodes and deployed the Nginx service. Then, we set up a load balancer on the third node to access the Nginx server hosted on the Docker Swarm cluster. You can now add more worker nodes to the Docker Swarm cluster to achieve high availability.

Would you like to read more about Docker? Then please click here.

Hitesh Jethva

I am a fan of open source technology and have more than 10 years of experience working with Linux and Open Source technologies. I am one of the Linux technical writers for Cloud Infrastructure Services.
