How to Set Up an NGINX Server on Ubuntu in Azure/AWS/GCP

To set up NGINX Open Source on an Ubuntu Linux server on any of the cloud platforms (Azure, AWS, GCP), the easiest way is to use the template available in each marketplace via the links below.  The template image fully sets up an Ubuntu server running NGINX Open Source, ready to use in the cloud.  NGINX is open source software for web serving, reverse proxying, caching, load balancing, media streaming, and more. NGINX is also highly scalable, meaning that its service grows along with its clients' traffic.

Nginx on Linux Ubuntu Server

Table of Contents

Nginx Server features

Nginx Reverse Proxy

A reverse proxy server can act as a “traffic cop,” sitting in front of your backend servers and distributing client requests across a group of servers in a manner that maximizes speed and capacity utilization while ensuring no one server is overloaded, which can degrade performance. If a server goes down, the load balancer redirects traffic to the remaining online servers.

Web Acceleration

Reverse proxies can compress inbound and outbound data, as well as cache commonly requested content, both of which speed up the flow of traffic between clients and servers. They can also perform additional tasks such as SSL encryption to take load off of your web servers, thereby boosting their performance.

Security and Anonymity

By intercepting requests headed for your backend servers, a reverse proxy server protects their identities and acts as an additional defense against security attacks. It also ensures that multiple servers can be accessed from a single record locator or URL regardless of the structure of your local area network.

Nginx Load Balancing

NGINX Open Source can be configured as an application load balancer, distributing traffic to several application servers to improve performance.  Load balancing across multiple application instances is a commonly used technique for optimizing resource utilization, maximizing throughput, reducing latency, and ensuring fault-tolerant configurations.  You can use NGINX as an HTTP load balancer to distribute traffic across several application servers and to improve the performance, scalability and reliability of web applications.

Application Server Load Balancer

Load Balancer Options with NGINX

Round-robin — Requests to the application servers are distributed in a round-robin fashion.

Least-connected — Next request is assigned to the server with the least number of active connections.

IP-hash — A hash-function is used to determine what server should be selected for the next request (based on the client’s IP address).

Session persistence – With ip-hash, the client’s IP address is used as a hashing key to determine what server in a server group should be selected for the client’s requests. This method ensures that the requests from the same client will always be directed to the same server.

Weighted load balancing – It is also possible to influence the NGINX load balancing algorithms even further by using server weights.
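The options above can be sketched in a single upstream block. This is a minimal sketch: the srv1–srv3 hostnames are hypothetical backends, and only one balancing-method directive should be active in a given upstream block.

```nginx
http {
    upstream backend {
        # Default method is round-robin; uncomment one of the
        # following lines to switch methods:
        # least_conn;   # least-connected
        # ip_hash;      # IP-hash, which also gives session persistence

        # Weights skew the distribution: srv1 receives roughly
        # 3 of every 5 requests here
        server srv1.example.com weight=3;
        server srv2.example.com;
        server srv3.example.com;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://backend;
        }
    }
}
```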

Load balancing with in-band health checks

NGINX includes in-band (or passive) server health checks: it continually tests your HTTP upstream servers, avoids the servers that have failed, and gracefully adds recovered servers back into the load-balanced group.

Nginx Load Balancer Health Checks
  • Passive Health Checks
  • Health check a URI
  • Define Custom Conditions
  • Test your TCP upstream servers
  • UDP Health Checks
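Passive health checks are configured per server with the max_fails and fail_timeout parameters. A minimal sketch, assuming two hypothetical backends:

```nginx
upstream backend {
    # After 3 failed attempts within 30 seconds, the server is
    # considered unavailable for the next 30 seconds, then retried
    server srv1.example.com max_fails=3 fail_timeout=30s;
    server srv2.example.com max_fails=3 fail_timeout=30s;
}
```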

Nginx Mail Proxy

NGINX can proxy the IMAP, POP3 and SMTP protocols to one of the upstream mail servers that host mail accounts, and can thus be used as a single endpoint for email clients. This brings a number of benefits, such as:

 

  • Easy scaling of the number of mail servers
  • Choosing a mail server based on different rules, for example, choosing the nearest server based on a client’s IP address
  • Distributing the load among mail servers
Cloud Mail Proxy

Mail Proxy Server Features

  • POP3/SMTP/IMAP over SSL/TLS
  • Optimised SSL/TLS for mail Proxy – Faster & more secure
  • Load balance mail server traffic
  • Load balancer in-band health checks
  • Reverse proxy support
  • Test your TCP mail upstream servers
  • Keep-alive and pipelined connections support
  • IP-based access control
  • Response rate limiting
  • HTTP authentication (LDAP)
  • STARTTLS support
  • TLS/SSL with SNI and OCSP stapling support, via OpenSSL

 

NGINX can also be used as a cache for static and dynamic content, increasing access speed for users.

Getting Started with Nginx server

Once your NGINX server has been deployed, the following links explain how to connect to a Linux VM:

 

 

Once connected and logged in, the following section explains how to configure NGINX as per your requirements.

NGINX on Ubuntu Server

This solution is built using NGINX Open Source.

 

The configuration files/modules can be found in /etc/nginx.

Controlling NGINX Processes at Runtime

Master and Worker Processes

NGINX has one master process and one or more worker processes. If caching is enabled, the cache loader and cache manager processes also run at startup.

 

The main purpose of the master process is to read and evaluate configuration files, as well as maintain the worker processes.

 

The worker processes do the actual processing of requests. NGINX relies on OS-dependent mechanisms to efficiently distribute requests among worker processes. The number of worker processes is defined by the worker_processes directive in the nginx.conf configuration file and can either be set to a fixed number or configured to adjust automatically to the number of available CPU cores.
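For example, in nginx.conf the directive might look like this (auto matches the worker count to the available CPU cores):

```nginx
# in /etc/nginx/nginx.conf
worker_processes auto;   # one worker per available CPU core
# or pin a fixed number instead:
# worker_processes 4;
```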

Controlling NGINX

To reload your configuration, you can stop or restart NGINX, or send signals to the master process. A signal can be sent by running the nginx command (invoking the NGINX executable) with the -s argument, as follows:

    nginx -s <SIGNAL>

The <SIGNAL> can be one of the following:

 

  • quit – Shut down gracefully (the SIGQUIT signal)
  • reload – Reload the configuration file (the SIGHUP signal)
  • reopen – Reopen log files (the SIGUSR1 signal)
  • stop – Shut down immediately (or fast shutdown, the SIGTERM signal)

 

The kill utility can also be used to send a signal directly to the master process.
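For example, assuming the default pid file location of /run/nginx.pid (this path can differ depending on how NGINX was built or packaged), a reload signal can be sent directly:

```shell
# send SIGHUP (reload configuration) straight to the master process
sudo kill -s HUP "$(cat /run/nginx.pid)"
```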

HTTP Load Balancing

Load balancing across multiple application instances is a commonly used technique for optimizing resource utilization, maximizing throughput, reducing latency, and ensuring fault‑tolerant configurations.

 

Refer to the following NGINX Docs on setting up HTTP Load Balancing.

https://docs.nginx.com/nginx/admin-guide/load-balancer/http-load-balancer/

Content Caching

Cache both static and dynamic content from your proxied web and application servers, to speed delivery to clients and reduce the load on the servers.  When caching is enabled, NGINX saves responses in a disk cache and uses them to respond to clients without having to proxy requests for the same content every time.
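A minimal caching sketch, assuming a hypothetical backend on localhost:8080 (the cache path, zone name and sizes are illustrative):

```nginx
http {
    # store cached responses on disk, with cache keys held in a
    # 10 MB shared memory zone named "mycache"
    proxy_cache_path /var/cache/nginx keys_zone=mycache:10m
                     max_size=1g inactive=60m;

    server {
        listen 80;

        location / {
            proxy_cache mycache;              # enable caching here
            proxy_pass  http://localhost:8080;
        }
    }
}
```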

 

Refer to the following guide on setting up and enabling caching:

https://docs.nginx.com/nginx/admin-guide/content-cache/content-caching/

Web Server

You can configure NGINX as a web server with support for virtual server multi-tenancy, URI and response rewriting, variables, and error handling.

 

Refer to the following guide for setting up NGINX as a web server:

https://docs.nginx.com/nginx/admin-guide/web-server/web-server/

Reverse Proxy

Configure NGINX as a reverse proxy for HTTP and other protocols, with support for modifying request headers and fine-tuned buffering of responses.  Proxying is typically used to distribute the load among several servers, seamlessly show content from different websites, or pass requests for processing to application servers over protocols other than HTTP.

 

Refer to the following guide on setting up a reverse proxy:

https://docs.nginx.com/nginx/admin-guide/web-server/reverse-proxy/

Configure Reverse Proxy for Web Server

In this example we show the steps to set up an NGINX reverse proxy that redirects traffic to a web server on your internal network.

We need to create a virtual host configuration file that will be used to configure your reverse proxy, so run the following command:

    cd /etc/nginx/conf.d

Create Configuration File

    sudo nano web.conf

In your NGINX configuration file, this is where you set up your config. The following is a basic NGINX reverse proxy example. NGINX is set to listen on port 80 for all traffic accessing the domain name of your NGINX server. You can use the server's IP address instead if your NGINX server isn’t associated with a domain name.

 

The proxy_pass directive sends all traffic received on port 80 to http://another_server. Just change http://another_server to the location of your choice, and NGINX will intercept client requests and route them to the location you specify. Once you’ve finished, save the file and exit.

 

Refer to the following guide on setting up more options on your reverse proxy:

https://docs.nginx.com/nginx/admin-guide/web-server/reverse-proxy/

    server {
        listen 80;
        listen [::]:80;

        server_name nginx_server;

        location / {
            proxy_pass http://another_server;
        }
    }

As a test, I redirected traffic from the IP address of our NGINX server to our website:

    server {
        listen 80;
        listen [::]:80;

        server_name 54.159.4.236;

        location / {
            proxy_pass https://cloudinfrastructureservices.co.uk;
        }
    }

Test and Reload Nginx

Run the following command to test your Nginx configuration file syntax is correct:

    sudo nginx -t

Run the following command to reload Nginx:

    sudo nginx -s reload

If you open a new browser and access the domain name or IP address of your NGINX server, it should now serve the content from the address you put in your proxy_pass.

Security Controls

Monitoring / Logging

Capture detailed information about errors and request processing in log files, either locally or via syslog.

 

Refer to the following setup instructions on configuring logging:

https://docs.nginx.com/nginx/admin-guide/monitoring/logging/

Mail Proxy

Simplify your email service and improve its performance with NGINX Open Source as a proxy for the IMAP, POP3, and SMTP protocols.  Once you’ve deployed the Cloud Mail Proxy on any of the platforms (Azure, AWS, GCP), refer to the following guide to set up the NGINX mail proxy.

 

The prerequisites are already installed, so skip to ‘Configuring SMTP/IMAP/POP3 Mail Proxy Servers’ in the following documentation:

https://docs.nginx.com/nginx/admin-guide/mail-proxy/mail-proxy/#configuring-smtpimappop3-mail-proxy-servers
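As a rough sketch of what such a configuration looks like (the hostnames and the auth_http endpoint are assumptions; the linked guide covers the details):

```nginx
mail {
    server_name mail.example.com;
    # HTTP service that authenticates users and returns the
    # upstream mail server to use for each connection
    auth_http   localhost:9000/cgi-bin/nginxauth.cgi;

    server {
        listen   25;
        protocol smtp;
    }
    server {
        listen   143;
        protocol imap;
    }
    server {
        listen   110;
        protocol pop3;
    }
}
```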

Nginx Firewall Ports

NGINX on Ubuntu has the following ports already enabled. Depending on what you want to use NGINX for, you will need to enable any additional required ports if you are using a firewall or Network Security Groups.

 

  • TCP 80
  • TCP 443
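On Ubuntu with ufw as the host firewall, for example, the equivalent rules could be opened as follows (skip this if you only use the cloud provider's security groups):

```shell
sudo ufw allow 80/tcp    # HTTP
sudo ufw allow 443/tcp   # HTTPS
```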

 

To set up AWS firewall rules, refer to – AWS Security Groups

To set up Azure firewall rules, refer to – Azure Network Security Groups

To set up Google GCP firewall rules, refer to – Creating GCP Firewalls

Support

If you require any help with the installation of NGINX or are experiencing any issues, leave a comment below or contact us directly.

 

Disclaimer: This solution is built using open source software from Nginx, Inc. and its contributors. This solution is licensed under the 2-clause BSD license. THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS “AS IS” AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

Andrew Fitzgerald

Cloud Solution Architect. Helping customers transform their business to the cloud. 20 years experience working in complex infrastructure environments and a Microsoft Certified Solutions Expert on everything Cloud
