Setup Ollama with Open WebUI on Azure/AWS/GCP

Set up and install Ollama with Open WebUI on Ubuntu 24.04 and Docker: a self-hosted LLM AI platform in the cloud on Azure, AWS, or Google GCP. Integrates with OpenAI-compatible APIs. The ultimate ChatGPT-style user interface.

Self Host Ollama and Open WebUI

  • Deploy Ollama + Open WebUI on Ubuntu 24.04 in Azure
  • Deploy Ollama + Open WebUI on Ubuntu 24.04 in AWS
  • Deploy Ollama + Open WebUI on Ubuntu 24.04 in GCP

Getting Started with Ollama and Open WebUI

Once your Ollama server has been deployed, the following links explain how to connect to a Linux VM:

Once connected and logged in, the following section explains how to start using Ollama with Open WebUI.

Ollama runs locally on port 11434, and Open WebUI runs as a local Docker container on port 8080.
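Once both services are up, you can confirm they are listening from the VM itself. A quick sketch, assuming the default ports above (`/api/version` is a standard endpoint of Ollama's REST API):

```shell
# Default local endpoints (assumes a stock install as described above)
OLLAMA_URL="http://127.0.0.1:11434"
WEBUI_URL="http://127.0.0.1:8080"

# Ollama answers with a short JSON version string when it is running
curl -s "$OLLAMA_URL/api/version" || echo "Ollama is not reachable yet"

# Open WebUI returns an HTTP status code once its container has started
curl -s -o /dev/null -w "%{http_code}\n" "$WEBUI_URL" || echo "Open WebUI is not reachable yet"
```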

Installing Ollama LLM Models

Once logged in via SSH, the first step is to decide which Ollama LLMs (large language models) you would like to install. For example, to install Meta Llama 3 (described by Meta as the most capable openly available LLM to date), run the following command:

    ollama run llama3

To exit an LLM chat session and return to your terminal, press Ctrl + D.
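Besides the interactive prompt, installed models can also be queried over Ollama's local REST API on port 11434, which is the same interface Open WebUI uses behind the scenes. A minimal sketch, assuming the llama3 model from the command above:

```shell
# Build a simple non-streaming request for the /api/generate endpoint
# (the model name assumes llama3 was installed in the previous step)
PAYLOAD='{"model": "llama3", "prompt": "Why is the sky blue?", "stream": false}'

# Send the prompt to the local Ollama API and print the JSON response
curl -s http://127.0.0.1:11434/api/generate -d "$PAYLOAD" || echo "Ollama is not reachable yet"
```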

To list your installed LLMs, run the following command:

    ollama list

Login to Open WebUI Interface

Open WebUI runs as a Docker container. To check the status of the Open WebUI container, run the following command:

    sudo docker ps

It may take a few minutes for the container to fully power up.
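If the container is not listed or keeps restarting, its logs usually explain why. A sketch, assuming the container was started under the commonly used name `open-webui` (substitute the NAME column shown by `sudo docker ps`):

```shell
# Container name is an assumption; replace it with the name shown by `sudo docker ps`
CONTAINER="open-webui"

# Show the most recent log lines from the Open WebUI container
sudo docker logs --tail 20 "$CONTAINER" || echo "Container $CONTAINER not found yet"
```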

Once you’ve installed your LLMs, you can use Open WebUI as your ChatGPT-style interface. Browse to the following URL:

http://youripaddress:8080

 

On the login screen, you first need to create a username and password by selecting Sign Up.


Once logged in, you can select your installed model from the drop-down menu, and you’re ready to start using the chat window. The speed will depend on the size of the VM you selected during deployment; you can upgrade your VM size if you want to improve performance.

Ollama and Open WebUI Firewall Ports

Ollama runs locally on the following port:

  • TCP 11434 (it runs on http://127.0.0.1:11434)

Open WebUI runs in a local Docker container on the following port:

  • TCP 8080

The links below explain how to create or modify firewall rules, depending on which cloud platform you are using.

To set up AWS firewall rules refer to – AWS Security Groups

To set up Azure firewall rules refer to – Azure Network Security Groups

To set up Google GCP firewall rules refer to – Creating GCP Firewalls
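For reference, opening TCP 8080 from the command line looks roughly like the following on each platform. This is a sketch with placeholder resource names (security group ID, resource group, NSG name, rule name); the guides linked above cover the portal equivalents:

```shell
# AWS: allow inbound TCP 8080 on an existing security group (placeholder group ID)
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 8080 --cidr 0.0.0.0/0

# Azure: add an NSG rule for TCP 8080 (placeholder resource group and NSG names)
az network nsg rule create \
  --resource-group myResourceGroup --nsg-name myNsg \
  --name AllowOpenWebUI --priority 1000 \
  --protocol Tcp --access Allow --direction Inbound \
  --destination-port-ranges 8080

# GCP: create an ingress firewall rule allowing TCP 8080 (placeholder rule name)
gcloud compute firewall-rules create allow-open-webui \
  --direction INGRESS --allow tcp:8080
```

In practice, restrict the source range to your own IP address rather than leaving these ports open to 0.0.0.0/0.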

Documentation and Support

Documentation and support for Ollama can be found at:

https://github.com/ollama/ollama

 

Documentation and support for Open WebUI can be found at:

https://docs.openwebui.com/

 

Disclaimer: Ollama is licensed under the MIT license. No warranty of any kind, express or implied, is included with this software. Use it at your own risk; responsibility for any damages resulting from the use of this software rests entirely with the user. The author is not responsible for any damage its use could cause.

Andrew Fitzgerald

Cloud Solution Architect. Helping customers transform their business to the cloud. 20 years' experience working in complex infrastructure environments and a Microsoft Certified Solutions Expert on everything Cloud.
