How to Install Open WebUI: The Local ChatGPT Interface 2026
Complete step-by-step guide to install Open WebUI locally. Works with Ollama, LM Studio, and vLLM. No coding needed. Perfect for beginners running local AI models.

If you've installed Ollama or LM Studio and you're tired of using the command line, Open WebUI is the game-changer you need.
Open WebUI is a free, open-source web interface that transforms your local AI into a ChatGPT-like experience. Instead of typing commands in Terminal, you get a beautiful chat interface, conversation history, model switching, and more.
The best part: It works with any local model runner (Ollama, LM Studio, vLLM) and takes 5 minutes to set up.
What Is Open WebUI?
Open WebUI is a sleek, web-based UI for running local large language models (LLMs). It gives you:
- ChatGPT-like interface — Clean chat window, conversation history, easy model switching
- Multi-model support — Run Llama 2, Mistral, Neural Chat, Qwen, and 100+ open models
- Works locally — Everything runs on your computer; no cloud needed
- Free and open-source — No subscriptions, no API keys required
- Admin panel — Manage users, models, and settings
- Markdown support — Formatted code blocks, equations, lists
- Conversation export — Save and share your chats as JSON or markdown
System requirements:
- 4GB RAM minimum (8GB recommended)
- 2-core CPU (more cores = better performance)
- 500MB disk space
- Any OS: Windows, macOS, Linux
Step 1: Install Ollama (If You Haven't Already)
Open WebUI needs a model runner backend. Ollama is the easiest option.
For Windows
- Go to ollama.ai
- Click "Download for Windows"
- Run the installer
- Launch Ollama (it runs in the background)
For macOS
- Visit ollama.ai
- Click "Download for Mac"
- Open the downloaded .dmg file
- Drag Ollama to Applications
- Open Applications → Ollama
For Linux
curl https://ollama.ai/install.sh | sh
After installation, verify Ollama is running:
ollama --version
You should see the version number. Good — Ollama is ready.
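If you'd rather check from a script, you can confirm Ollama is listening by hitting its local HTTP endpoint, which answers "Ollama is running" on the default port 11434. A minimal Python sketch using only the standard library:

```python
import urllib.request
import urllib.error


def ollama_is_running(host: str = "localhost", port: int = 11434) -> bool:
    """Return True if an Ollama server responds on the given host/port."""
    try:
        # Ollama's root endpoint replies with "Ollama is running"
        with urllib.request.urlopen(f"http://{host}:{port}/", timeout=2) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False


print("Ollama running:", ollama_is_running())
```

This returns False instead of raising if the server is down, so it's safe to drop into a startup script.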
Step 2: Pull a Model into Ollama
Open your Terminal (or PowerShell on Windows) and run:
ollama pull mistral
This downloads Mistral (a fast, lightweight model). The first pull takes 5–10 minutes depending on your internet.
Alternative models you can try:
ollama pull llama2 # Meta's Llama 2 (7B by default; pull llama2:13b for the larger version)
ollama pull neural-chat # Fast, optimized chat model
ollama pull openchat # Fast alternative
To list models you've downloaded:
ollama list
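If you want that model list inside a script, you can capture and parse the command's output. A small Python sketch; the column layout in SAMPLE mimics typical ollama list output but is an assumption, so check it against your own version:

```python
# The sample below mimics typical `ollama list` output; to capture the real
# thing, use: subprocess.run(["ollama", "list"], capture_output=True, text=True)
SAMPLE = """\
NAME              ID            SIZE    MODIFIED
mistral:latest    61e88e884507  4.1 GB  2 days ago
llama2:latest     78e26419b446  3.8 GB  5 days ago
"""


def model_names(listing: str) -> list[str]:
    """Extract the first column (model name) from each non-header row."""
    lines = listing.strip().splitlines()[1:]  # skip the header row
    return [line.split()[0] for line in lines if line.strip()]


print(model_names(SAMPLE))  # → ['mistral:latest', 'llama2:latest']
```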
Step 3: Install Open WebUI via Docker
The easiest way to run Open WebUI is with Docker.
Install Docker First
- Windows/macOS: Download Docker Desktop and install
- Linux: Run
sudo apt install docker.io (Ubuntu/Debian)
Verify Docker is installed:
docker --version
Run Open WebUI Container
Once Docker is installed, run this command in Terminal:
docker run -d -p 3000:8080 \
--add-host=host.docker.internal:host-gateway \
-v open-webui:/app/backend/data \
--name open-webui \
ghcr.io/open-webui/open-webui:latest
This command:
- -d = Run in the background
- -p 3000:8080 = Access Open WebUI at localhost:3000
- --add-host=host.docker.internal:host-gateway = Docker can access Ollama on your computer
- -v open-webui:/app/backend/data = Save data persistently
- --name open-webui = Container name for easy management
Installation takes 2–3 minutes. Docker downloads the Open WebUI image automatically.
Step 4: Connect Ollama to Open WebUI
- Open your browser and go to http://localhost:3000
- You'll see a login screen. Click "Sign Up" to create an account (this is your admin account)
- Create a username and password
- After login, click the Settings icon (gear, top-right)
- Look for Admin Settings or Backend Settings
- Under "Ollama API URL", enter: http://host.docker.internal:11434
- Click "Save"
Note: The URL http://host.docker.internal:11434 tells Open WebUI where Ollama is running on your computer.
Step 5: Select Your Model & Start Chatting
- Back in the chat window, look for the Model Selector (dropdown near the top)
- Click it and select Mistral (or whichever model you pulled)
- Type a message: "Hello, how are you?"
- Press Enter
You now have a local ChatGPT.
Alternative: Install Open WebUI Without Docker
If Docker is too complicated, you can install Open WebUI directly via Python.
Requirements
- Python 3.9+
- Node.js 16+
- Ollama running in the background
Steps
- Clone the Open WebUI repository:
git clone https://github.com/open-webui/open-webui.git
cd open-webui
- Install dependencies:
pip install -r requirements.txt
npm install
- Build the frontend:
npm run build
- Start the backend:
python -m backend.main
- Open http://localhost:8000 in your browser
This method works, but Docker is simpler for most users.
How to Use Open WebUI (Basic Features)
Start a New Chat
Click the + button (top-left) to create a new conversation.
Switch Models Mid-Chat
Click the Model Selector dropdown and switch to a different model. Your conversation history stays.
Change Model Settings
Click Settings → Model Parameters to adjust:
- Temperature (creativity) — Lower (0.1–0.3) = focused answers; Higher (0.7–1.0) = creative answers
- Top P — Diversity of responses
- Max Tokens — Length of responses
For most users, the defaults are fine.
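These same knobs exist in Ollama's REST API, which Open WebUI talks to behind the scenes. Here's a hedged Python sketch of how such a request payload is shaped; the options field follows Ollama's /api/generate interface (num_predict is Ollama's name for max tokens), and the values are purely illustrative:

```python
import json


def build_generate_request(model: str, prompt: str,
                           temperature: float = 0.7,
                           top_p: float = 0.9,
                           num_predict: int = 256) -> dict:
    """Build a payload for Ollama's /api/generate endpoint.

    temperature: lower = focused answers, higher = creative answers
    top_p: nucleus-sampling cutoff (diversity of responses)
    num_predict: maximum tokens to generate (length of responses)
    """
    return {
        "model": model,
        "prompt": prompt,
        "stream": False,
        "options": {
            "temperature": temperature,
            "top_p": top_p,
            "num_predict": num_predict,
        },
    }


payload = build_generate_request("mistral", "Hello, how are you?", temperature=0.2)
print(json.dumps(payload, indent=2))
```

POSTing this to http://localhost:11434/api/generate gives you the same low-temperature, focused behavior you'd get by turning the slider down in Open WebUI.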
Upload Files (PDFs, text)
Click the Paperclip icon to upload a file. Ask Open WebUI questions about it:
- "Summarize this PDF"
- "Extract the key points"
Export Conversations
Click Menu (three dots) → Export as Markdown or JSON to download your chat history.
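If you export as JSON, you can post-process chats however you like. A minimal sketch that flattens a list of messages into a markdown transcript; the role/content schema below is a hypothetical simplification, not Open WebUI's exact export format, so inspect your own export file first:

```python
def chat_to_markdown(messages: list[dict]) -> str:
    """Render [{'role': ..., 'content': ...}, ...] as a markdown transcript."""
    parts = []
    for msg in messages:
        speaker = "You" if msg["role"] == "user" else "Assistant"
        parts.append(f"**{speaker}:** {msg['content']}")
    return "\n\n".join(parts)


chat = [
    {"role": "user", "content": "Hello, how are you?"},
    {"role": "assistant", "content": "I'm doing well, thanks!"},
]
print(chat_to_markdown(chat))
```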
Troubleshooting
"Connection refused" or "Can't connect to Ollama"
Solution:
- Make sure Ollama is running: run ollama list in Terminal
- If Ollama isn't running, start it: ollama serve
- Restart Open WebUI: docker restart open-webui
"No models available"
Solution:
- Pull a model: ollama pull mistral
- Verify: ollama list
- Refresh Open WebUI in your browser (Cmd+R or Ctrl+R)
Open WebUI won't start
Solution:
- Make sure Docker is running
- Check if port 3000 is in use:
  - Windows: netstat -ano | findstr :3000
  - macOS/Linux: lsof -i :3000
- If in use, stop the container: docker stop open-webui, then docker rm open-webui, then re-run the docker run command above
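If you'd rather not remember which of netstat or lsof your OS uses, the same port check works from Python on any platform. A small stdlib-only sketch:

```python
import socket


def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(1)
        # connect_ex returns 0 when the connection succeeds (port is taken)
        return sock.connect_ex((host, port)) == 0


print("Port 3000 in use:", port_in_use(3000))
```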
Slow responses
Solution:
- Reduce model size: use a 7B model like Mistral instead of a 13B model like llama2:13b
- Add more RAM to Docker: Docker Settings → Resources → Memory (set to 6–8GB)
- Use Ollama quantization for faster models
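A rough rule of thumb explains why smaller and quantized models respond faster: the memory needed just to hold the weights is roughly parameter count times bytes per parameter. A back-of-the-envelope sketch (weights only; KV cache and runtime overhead add more, so treat these as lower bounds):

```python
def approx_model_gb(params_billions: float, bits_per_param: float) -> float:
    """Rough GB of memory for the model weights alone (no KV cache/overhead)."""
    bytes_total = params_billions * 1e9 * bits_per_param / 8
    return round(bytes_total / 1e9, 1)


# A 7B model at common precisions
print(approx_model_gb(7, 16))  # fp16      → 14.0 GB
print(approx_model_gb(7, 4))   # 4-bit     → 3.5 GB
# A 13B model at 4-bit
print(approx_model_gb(13, 4))  # 4-bit     → 6.5 GB
```

This is why a 4-bit 7B model fits comfortably in the 6–8GB Docker memory budget suggested above, while a 13B model leaves much less headroom.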
FAQ: Open WebUI for Beginners
Q: Do I need an internet connection? A: Only for the initial install and model downloads. After that, everything runs locally.
Q: Can I use Open WebUI with GPT-4 or Claude? A: Open WebUI is built for open-source local models, though it can also connect to OpenAI-compatible APIs if you add an API key in the settings. For the simplest GPT-4 or Claude experience, use ChatGPT.com or Claude's website directly.
Q: Is Open WebUI secure? A: Yes. Your data stays on your computer. Nothing is sent to the cloud.
Q: Can I use Open WebUI on my phone?
A: If your phone is on the same WiFi network, yes. Visit http://<your-computer-ip>:3000 from your phone's browser.
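To find your computer's IP for that URL, a common Python trick is to open a UDP socket toward a public address and read back the local address the OS picks for the route (no packets are actually sent by a UDP connect). A sketch with a loopback fallback in case there's no network route:

```python
import socket


def lan_ip() -> str:
    """Best-effort local network IP; falls back to loopback if offline."""
    try:
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.connect(("8.8.8.8", 80))  # UDP connect sends no traffic
            return sock.getsockname()[0]
    except OSError:
        return "127.0.0.1"


print(f"On your phone, open: http://{lan_ip()}:3000")
```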
Q: What if I want to run Open WebUI without Docker? A: Use the Python installation method (see Alternative section above).
Q: Can I add more models later?
A: Yes! Just run ollama pull <model-name> and it appears in Open WebUI.
Q: Does Open WebUI use much CPU/GPU? A: Only while generating responses. Otherwise, it's idle.
Q: Can I customize the Open WebUI interface? A: Yes. In Admin Settings, you can change themes, language, and default model.
Next Steps
- Try different models — Experiment with Llama 2, Neural Chat, Mistral to find your favorite
- Learn prompt engineering — Ask Open WebUI better questions to get better answers (see our Terminal Beginners Guide for more command-line tips)
- Integrate with your workflow — Use Open WebUI for writing, brainstorming, debugging code
- Explore advanced features — Add custom models, set up RAG (retrieval-augmented generation), integrate with other tools
The Bottom Line
Open WebUI transforms your local AI setup from "command-line only" to "professional chat interface" in 5 minutes. If you've got Ollama running, you're 3 steps away from a ChatGPT-like experience with zero cloud costs.
Get started today at open-webui.com.
Good luck! 🚀

Alex the Engineer • Founder & AI Architect
Senior software engineer turned AI agency owner. I build massive, scalable AI workflows and share the exact blueprints, financial models, and code I use to generate automated revenue in 2026.