Local AI · 7 min read · May 2, 2026

How to Install Open WebUI: The Local ChatGPT Interface 2026

Complete step-by-step guide to install Open WebUI locally. Works with Ollama, LM Studio, and vLLM. No coding needed. Perfect for beginners running local AI models.

If you've installed Ollama or LM Studio and you're tired of using the command line, Open WebUI is the game-changer you need.

Open WebUI is a free, open-source web interface that transforms your local AI into a ChatGPT-like experience. Instead of typing commands in Terminal, you get a beautiful chat interface, conversation history, model switching, and more.

The best part: It works with any local model runner (Ollama, LM Studio, vLLM) and takes 5 minutes to set up.


What Is Open WebUI?

Open WebUI is a sleek, web-based UI for running local large language models (LLMs). It gives you:

  • ChatGPT-like interface — Clean chat window, conversation history, easy model switching
  • Multi-model support — Run Llama 2, Mistral, Neural Chat, Qwen, and 100+ open models
  • Works locally — Everything runs on your computer; no cloud needed
  • Free and open-source — No subscriptions, no API keys required
  • Admin panel — Manage users, models, and settings
  • Markdown support — Formatted code blocks, equations, lists
  • Conversation export — Save and share your chats as JSON or markdown

System requirements:

  • 4GB RAM minimum (8GB recommended)
  • 2-core CPU (more cores = better performance)
  • 500MB disk space
  • Any OS: Windows, macOS, Linux

Step 1: Install Ollama (If You Haven't Already)

Open WebUI needs a model runner backend. Ollama is the easiest option.

For Windows

  1. Go to ollama.ai
  2. Click "Download for Windows"
  3. Run the installer
  4. Launch Ollama (it runs in the background)

For macOS

  1. Visit ollama.ai
  2. Click "Download for Mac"
  3. Open the downloaded .dmg file
  4. Drag Ollama to Applications
  5. Open Applications → Ollama

For Linux

curl -fsSL https://ollama.com/install.sh | sh

After installation, verify Ollama is running:

ollama --version

You should see the version number. Good — Ollama is ready.
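Beyond checking the version, you can confirm the Ollama API server itself is answering. Ollama listens on port 11434 by default, and these two endpoints are part of its standard REST API:

```shell
# Ask the local Ollama server for its version over HTTP.
curl -s http://localhost:11434/api/version

# List the models Ollama currently has available locally.
curl -s http://localhost:11434/api/tags
```

If either call fails with "connection refused", start the server with `ollama serve`.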


Step 2: Pull a Model into Ollama

Open your Terminal (or PowerShell on Windows) and run:

ollama pull mistral

This downloads Mistral (a fast, lightweight model). The first pull takes 5–10 minutes depending on your internet.

Alternative models you can try:

ollama pull llama2              # Meta's Llama 2 (7B by default)
ollama pull neural-chat         # Fast, optimized chat model
ollama pull openchat            # Fast alternative

To list models you've downloaded:

ollama list
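To confirm the model actually answers before wiring up a UI, you can hit Ollama's generate endpoint directly (a quick sketch; swap `mistral` for whichever model you pulled):

```shell
# One-off, non-streaming generation request against the local Ollama API.
curl -s http://localhost:11434/api/generate -d '{
  "model": "mistral",
  "prompt": "Say hello in one sentence.",
  "stream": false
}'
```

The response is JSON, with the model's reply in the "response" field.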

Step 3: Install Open WebUI via Docker

The easiest way to run Open WebUI is with Docker.

Install Docker First

  • Windows/macOS: Download Docker Desktop and install
  • Linux: Run sudo apt install docker.io (Ubuntu/Debian)

Verify Docker is installed:

docker --version

Run Open WebUI Container

Once Docker is installed, run this command in Terminal:

docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main

This command:

  • -d = Run in background
  • -p 3000:8080 = Access Open WebUI at localhost:3000
  • --add-host=host.docker.internal:host-gateway = Lets the container reach Ollama running on your host machine
  • -v open-webui:/app/backend/data = Save data persistently
  • --name open-webui = Container name for easy management

Installation takes 2–3 minutes. Docker downloads the Open WebUI image automatically.
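A few standard Docker CLI commands you'll use to manage the container afterwards (nothing Open WebUI specific):

```shell
docker ps                     # confirm the container is running
docker logs -f open-webui     # follow the startup/runtime logs
docker stop open-webui        # stop it
docker start open-webui       # start it again (your data persists in the volume)

# Grab a newer image before re-creating the container:
docker pull ghcr.io/open-webui/open-webui:main
```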


Step 4: Connect Ollama to Open WebUI

  1. Open your browser and go to http://localhost:3000
  2. You'll see a login screen. Click "Sign Up" to create an account (this is your admin account)
  3. Create a username and password
  4. After login, click the Settings icon (gear, top-right)
  5. Look for Admin Settings or Backend Settings
  6. Under "Ollama API URL", enter: http://host.docker.internal:11434
  7. Click "Save"

Note: The URL http://host.docker.internal:11434 tells Open WebUI where Ollama is running on your computer.
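If the model list stays empty, you can check that the container can actually reach Ollama through `host.docker.internal` (this assumes `curl` is available inside the image; if it isn't, the same check from the host against `localhost:11434` is a decent proxy):

```shell
# From inside the Open WebUI container, hit the Ollama API on the host.
docker exec open-webui curl -s http://host.docker.internal:11434/api/version

# Same check from the host itself:
curl -s http://localhost:11434/api/version
```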


Step 5: Select Your Model & Start Chatting

  1. Back in the chat window, look for the Model Selector (dropdown near the top)
  2. Click it and select Mistral (or whichever model you pulled)
  3. Type a message: "Hello, how are you?"
  4. Press Enter

You now have a local ChatGPT.


Alternative: Install Open WebUI Without Docker

If Docker is too complicated, you can install Open WebUI directly via Python.

Requirements

  • Python 3.11+
  • Node.js 20+
  • Ollama running in the background

Steps

  1. Clone the Open WebUI repository:

    git clone https://github.com/open-webui/open-webui.git
    cd open-webui
    
  2. Install dependencies:

    npm install
    pip install -r backend/requirements.txt
    
  3. Build the frontend:

    npm run build
    
  4. Start the backend:

    cd backend
    bash start.sh
    
  5. Open http://localhost:8080 in your browser

This method works, but Docker is simpler for most users.
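There is also a middle ground worth knowing about: Open WebUI is published on PyPI, so a plain pip install avoids both Docker and a manual frontend build (a sketch; requires Python 3.11+):

```shell
# Install the packaged Open WebUI (bundles the pre-built frontend).
pip install open-webui

# Start the server; by default it listens on http://localhost:8080
open-webui serve
```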


How to Use Open WebUI (Basic Features)

Start a New Chat

Click the + button (top-left) to create a new conversation.

Switch Models Mid-Chat

Click the Model Selector dropdown and switch to a different model. Your conversation history stays.

Change Model Settings

Click Settings → Model Parameters to adjust:

  • Temperature (creativity) — Lower (0.1–0.3) = focused answers; Higher (0.7–1.0) = creative answers
  • Top P — Diversity of responses
  • Max Tokens — Length of responses

For most users, the defaults are fine.
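For a feel of what these knobs do outside the UI, the same parameters can be passed per-request through Ollama's API (illustrative values; `num_predict` is Ollama's name for the max-tokens setting):

```shell
# Low temperature for a focused answer, capped at 128 tokens.
curl -s http://localhost:11434/api/generate -d '{
  "model": "mistral",
  "prompt": "Name three uses for a paperclip.",
  "stream": false,
  "options": {
    "temperature": 0.2,
    "top_p": 0.9,
    "num_predict": 128
  }
}'
```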

Upload Files (PDFs, text)

Click the Paperclip icon to upload a file. Ask Open WebUI questions about it:

  • "Summarize this PDF"
  • "Extract the key points"

Export Conversations

Click Menu (three dots) → Export as Markdown or JSON to download your chat history.


Troubleshooting

"Connection refused" or "Can't connect to Ollama"

Solution:

  1. Make sure Ollama is running: ollama list in Terminal
  2. If Ollama isn't running, start it: ollama serve
  3. Restart Open WebUI: docker restart open-webui
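The checks above can be rolled into one quick diagnostic (a sketch using the default Ollama port and the container name from this guide):

```shell
#!/bin/sh
# Is Ollama answering on its default port?
if curl -sf http://localhost:11434/api/version > /dev/null; then
  echo "Ollama: OK"
else
  echo "Ollama: not reachable - run 'ollama serve'"
fi

# Is the Open WebUI container up?
if docker ps --format '{{.Names}}' | grep -q '^open-webui$'; then
  echo "Open WebUI container: running"
else
  echo "Open WebUI container: not running - try 'docker restart open-webui'"
fi
```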

"No models available"

Solution:

  1. Pull a model: ollama pull mistral
  2. Verify: ollama list
  3. Refresh Open WebUI in your browser (Cmd+R or Ctrl+R)

Open WebUI won't start

Solution:

  1. Make sure Docker is running
  2. Check if port 3000 is in use:
    • Windows: netstat -ano | findstr :3000
    • macOS/Linux: lsof -i :3000
  3. If in use, stop the container: docker stop open-webui then docker rm open-webui, then re-run the docker command above
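If something else legitimately owns port 3000 (a dev server, for example), you can also just map Open WebUI to a different host port instead of freeing it up:

```shell
# Same container as before, exposed on host port 3001 instead of 3000.
docker run -d -p 3001:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
```

Then browse to http://localhost:3001 instead.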

Slow responses

Solution:

  1. Reduce model size: Use Mistral (7B) instead of Llama 2 (13B)
  2. Add more RAM to Docker: Docker Settings → Resources → Memory (set to 6–8GB)
  3. Use Ollama quantization for faster models

FAQ: Open WebUI for Beginners

Q: Do I need an internet connection? A: Only for the initial install and model downloads. After that, everything runs locally.

Q: Can I use Open WebUI with GPT-4 or Claude? A: Yes, if you add an API key: besides local models, Open WebUI can connect to OpenAI-compatible APIs in its settings. Keep in mind those requests do leave your machine.

Q: Is Open WebUI secure? A: Yes. When you use local models, your data stays on your computer and nothing is sent to the cloud.

Q: Can I use Open WebUI on my phone? A: If your phone is on the same WiFi network, yes. Visit http://<your-computer-ip>:3000 from your phone's browser.
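Finding that IP address depends on your OS (typical commands; your network interface name may differ):

```shell
# macOS (Wi-Fi is usually en0):
ipconfig getifaddr en0

# Linux:
hostname -I

# Windows (PowerShell): look for the "IPv4 Address" line
ipconfig
```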

Q: What if I want to run Open WebUI without Docker? A: Use the Python installation method (see Alternative section above).

Q: Can I add more models later? A: Yes! Just run ollama pull <model-name> and it appears in Open WebUI.

Q: Does Open WebUI use much CPU/GPU? A: Only while generating responses. Otherwise, it's idle.

Q: Can I customize the Open WebUI interface? A: Yes. In Admin Settings, you can change themes, language, and default model.


Next Steps

  1. Try different models — Experiment with Llama 2, Neural Chat, Mistral to find your favorite
  2. Learn prompt engineering — Ask Open WebUI better questions to get better answers (see our Terminal Beginners Guide for more command-line tips)
  3. Integrate with your workflow — Use Open WebUI for writing, brainstorming, debugging code
  4. Explore advanced features — Add custom models, set up RAG (retrieval-augmented generation), integrate with other tools

The Bottom Line

Open WebUI transforms your local AI setup from "command-line only" to "professional chat interface" in 5 minutes. If you've got Ollama running, you're 3 steps away from a ChatGPT-like experience with zero cloud costs.

Get started today at github.com/open-webui/open-webui.

Good luck! 🚀

Alex the Engineer

Founder & AI Architect

Senior software engineer turned AI Agency owner. I build massive, scalable AI workflows and share the exact blueprints, financial models, and code I use to generate automated revenue in 2026.
