Hermes Agent: The Self-Improving AI Agent That Remembers Everything
Nous Research's Hermes Agent is an open-source autonomous agent with persistent memory, self-built skills, and multi-platform gateway support. Here's how to set it up and what makes it different.

Most AI agents have no memory. Every session starts from scratch — you re-explain your project, re-introduce your codebase, re-describe what you're working on. It's tedious, and it caps how useful the agent can actually become over time.
Hermes Agent, released by Nous Research under MIT license, is built around a different model: it learns your projects, creates skills from its own experience, and reaches you across Telegram, Discord, Slack, or WhatsApp from a single gateway. The longer it runs, the more capable it gets.
This guide covers installation, configuration, and the features that actually matter.
What Hermes Agent Actually Is
Hermes is a server-side autonomous agent — not a chatbot wrapper, not an IDE plugin. You install it on a Linux server, Mac, or WSL2 machine, connect your messaging accounts, and it runs persistently as a system service.
What sets it apart from tools like OpenClaw or standard LLM interfaces:
Persistent memory. Hermes indexes what it learns across sessions. It remembers how it solved a problem, stores project context, and retrieves it on demand. Ask it the same question two weeks later — it has the context from before.
Auto-generated skills. When Hermes solves a new type of problem, it encodes that solution as a reusable skill. Next time a similar task comes up, it pulls the skill instead of reasoning from scratch. Skills can be shared via agentskills.io or installed from communities on ClawHub and LobeHub.
40+ built-in tools. Web search, terminal access, file system, browser automation, vision, image generation, text-to-speech, code execution, cron scheduling, multi-model reasoning, and subagent delegation — all built in.
Multi-platform gateway. One installation, one gateway: Telegram, Discord, Slack, WhatsApp, Signal, Email, and CLI. Start a conversation on Telegram, continue it on Slack. Context carries over.
5 execution environments. Local, Docker, SSH, Singularity, and Modal — with container hardening and namespace isolation.
Requirements
- Linux, macOS, or WSL2 (Windows native is experimental and unsupported)
- No other prerequisites — the installer handles Python 3.11, uv, and all dependencies automatically
New to the terminal? The install command below is a single copy-paste. If you've never opened Terminal before, our Terminal Beginner's Guide covers exactly what you need.
Installation
One command installs everything:
curl -fsSL https://raw.githubusercontent.com/NousResearch/hermes-agent/main/scripts/install.sh | bash
No sudo required. The script installs uv, Python 3.11, clones the repo, and sets up the virtual environment. Takes 2–5 minutes depending on your connection.
After install, run:
hermes setup
This launches an interactive wizard that walks you through connecting your model backend.

Choosing Your Model Backend
Hermes supports three backend options:
Option 1: Nous Portal (Recommended)
hermes setup
# Choose: Nous Portal
# Authenticate via OAuth at portal.nousresearch.com
Connects to Nous Research's hosted models, including their Hermes model series. Best experience, no API key management.
Option 2: OpenRouter
hermes model
# Choose: OpenRouter
# Enter your OpenRouter API key
OpenRouter gives you access to 100+ models (GPT-4, Claude, Gemma, Mistral, etc.) through a single key. Useful if you want to switch models without reconfiguring Hermes.
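Under the hood, OpenRouter speaks the same OpenAI-compatible protocol as the other backends; only the base URL and a bearer key change. A minimal sketch of the request shape — the API key and model slug below are placeholders, not real values:

```python
import json
from urllib import request

# Sketch of an OpenAI-compatible chat request routed through OpenRouter.
# The key and model slug are placeholders -- substitute your own.
def openrouter_request(prompt: str, api_key: str,
                       model: str = "some-provider/some-model") -> request.Request:
    """Build (but don't send) a chat-completions POST for OpenRouter."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return request.Request(
        "https://openrouter.ai/api/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {api_key}"},
        method="POST",
    )

req = openrouter_request("hi", api_key="sk-or-placeholder")
# Send with request.urlopen(req) once you have a real key.
```

Because the payload format is identical across providers, switching models is just a different `model` slug — which is why Hermes doesn't need reconfiguring.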
Option 3: Any OpenAI-Compatible Endpoint
hermes model
# Choose: Custom endpoint
# Enter: http://localhost:11434/v1 (for local Ollama)
Running Ollama locally? Point Hermes at it. This means Hermes can orchestrate a fully local model — no cloud required.
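For a sense of what "OpenAI-compatible" means in practice, here is a sketch of the request shape any client (Hermes included) would send to that local endpoint. The model name `llama3` is an assumption — use whatever `ollama list` reports on your machine:

```python
import json
from urllib import request

# Sketch of a chat-completions request against a local Ollama endpoint.
# The URL matches the custom endpoint entered above; the model name
# "llama3" is an assumption, not something Hermes pins.
ENDPOINT = "http://localhost:11434/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "llama3") -> request.Request:
    """Build (but don't send) a POST in the standard chat format."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return request.Request(
        ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("Say hello")
# Send with request.urlopen(req) once Ollama is running locally.
```

Nothing in the request leaves your machine, which is what makes the fully-local setup possible.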
Running heavier models locally? If you're running E4B-class models or doing batch inference, Ampere.sh offers affordable ARM-based GPU compute designed for AI workloads — significantly cheaper than AWS for inference tasks.
Setting Up the Multi-Platform Gateway
This is where Hermes goes from useful to powerful. One command launches the setup wizard:
hermes gateway setup
Walk through connecting whichever platforms you use. For Telegram (most common):
- Create a bot via @BotFather → copy the API token
- Paste into the Hermes gateway setup
- Start the gateway:
hermes gateway
Or install it as a system service so it runs on startup and survives reboots:
hermes gateway install
After this, you can chat with Hermes on Telegram exactly like you'd chat with a person. It has full tool access — it can run terminal commands, search the web, generate images, transcribe audio, and execute code in response to your messages.
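If the bot doesn't respond, the token itself is easy to sanity-check before blaming Hermes: Telegram's Bot API exposes a `getMe` method that returns the bot's identity when the token is valid. A small sketch — the token below is a fake placeholder:

```python
# Telegram's Bot API answers GET https://api.telegram.org/bot<token>/getMe
# with the bot's identity when the token is valid. The token below is a
# fake placeholder -- use the one @BotFather gave you.
def getme_url(token: str) -> str:
    """Build the Bot API getMe URL for a given bot token."""
    return f"https://api.telegram.org/bot{token}/getMe"

url = getme_url("123456:ABC-fake-token")
# Open this URL in a browser: a JSON body with "ok": true means the
# token is good; a 401 means it was copied wrong.
```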
Natural Language Scheduling
One underrated feature: tell Hermes to do recurring tasks in plain English.
"Every Monday morning, summarize what I worked on last week and email it to me."
"At 9pm daily, check if any GitHub PRs are waiting for my review."
"Every Friday, run my SEO audit script and send me the results."
Hermes converts these to cron jobs and runs them unattended through the gateway. It's the equivalent of a personal automation layer that speaks your language.
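Hermes's parser is internal, but the translation it performs is easy to picture. An illustrative sketch — the phrase-to-cron table here is invented for clarity, not Hermes code:

```python
# Hypothetical illustration of plain-English -> cron translation.
# Hermes's actual parser is internal; this lookup table is invented.
# Cron fields: minute hour day-of-month month day-of-week.
SCHEDULES = {
    "every monday morning": "0 9 * * 1",   # 09:00 every Monday
    "at 9pm daily": "0 21 * * *",          # 21:00 every day
    "every friday": "0 9 * * 5",           # 09:00 every Friday (assumed hour)
}

def to_cron(phrase: str) -> str:
    """Look up a cron expression for a known scheduling phrase."""
    return SCHEDULES[phrase.lower().strip()]
```

The real system handles arbitrary phrasing, but the output is the same artifact: a standard five-field cron expression the scheduler can run unattended.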
Installing Community Skills
Hermes ships with 40+ bundled skills for MLOps, GitHub workflows, and research. Add more from the community:
hermes skill install <skill-url-or-github-path>
Browse available skills at agentskills.io or the ClawHub community.
Skills are stored in the open agentskills.io format — if Hermes figures out a useful workflow, it packages it and you can share it.
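To make "packaged skill" concrete, here is a hypothetical sketch of the kind of metadata a skill carries. These field names are invented for illustration — check agentskills.io for the real schema:

```python
# Hypothetical skill metadata -- field names invented for illustration.
# The real schema lives at agentskills.io; this only conveys the idea
# that a skill is a named, shareable recipe rather than ad-hoc reasoning.
skill = {
    "name": "weekly-seo-audit",   # hypothetical skill name
    "description": "Run an SEO audit script and report the results",
    "steps": [
        "execute the audit script in the shell tool",
        "summarize the output",
        "deliver the summary through the gateway",
    ],
}
```

The point of the format is portability: once a workflow is captured as a file like this, any Hermes installation can install and reuse it.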
Keeping It Updated
hermes update
Pulls the latest code and reinstalls dependencies. Run anytime a new version drops.
Key Takeaways

- Install: Single curl command, no sudo, no prerequisites — works on Linux/Mac/WSL2
- Backends: Nous Portal, OpenRouter, or any local OpenAI-compatible endpoint (Ollama, LM Studio)
- Gateway: One deployment reaches Telegram, Discord, Slack, WhatsApp, Signal, Email, and CLI
- Memory: Learns your projects and encodes solutions as reusable skills automatically
- Scheduling: Natural language cron — tell it what to do, when, in plain English
- License: MIT — fully open-source, free to use commercially
- GitHub: NousResearch/hermes-agent

Alex the Engineer • Founder & AI Architect
Senior software engineer turned AI agency owner. I build massive, scalable AI workflows and share the exact blueprints, financial models, and code I use to generate automated revenue in 2026.