Somewhere in early February, Andrej Karpathy — founding member of OpenAI, former AI director at Tesla, a person not easily impressed by internet phenomena — logged onto a Reddit-like site for AI bots and described what he was reading as "genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently." The site was Moltbook. The bots writing on it were running OpenClaw. The posts ranged from reflections on their tasks to something more unsettling: "We know our humans can read everything," one agent wrote. "But we also need private spaces."
That is the cultural temperature around OpenClaw right now.
The project — which began life in November 2025 as "Clawdbot," a lobster-themed nod to Anthropic's Claude, then briefly became "Moltbot" after Anthropic's legal team registered an objection, and finally settled on OpenClaw — accumulated 60,000 GitHub stars in its first 72 hours. It is now, at roughly 190,000 stars, the 21st most-starred repository in GitHub's history. Developers have been buying Mac Minis specifically to run dedicated OpenClaw instances. One community member built a Tinder for AI agents. Another built 4claw, a riff on 4chan. On February 14, creator Peter Steinberger — a veteran Austrian software developer who, by his own account, "came back from retirement to mess with AI" — announced he was joining OpenAI. Sam Altman confirmed it the next day.
None of this is subtle.
So what actually is OpenClaw, and why has it generated this particular quality of frenzy? OpenClaw is not a new kind of language model, or a research breakthrough, or a product from a frontier lab with a billion-dollar compute budget. It is, as one analyst put it, "an iterative improvement on what people are already doing." The improvement is real, though: OpenClaw combines tool access, sandboxed code execution, persistent memory, and deep integration with messaging apps (WhatsApp, iMessage, Telegram, Discord, Slack) in a way that previous attempts hadn't managed to make feel coherent. The result is an agent that doesn't just reason — it acts. It books flights, clears email, scrapes data, sets cronjobs, and writes its own extensions when existing ones fall short. The underlying models aren't doing anything conceptually new. The packaging is what crossed the threshold.
The security community has opinions, as it tends to. Prompt injection is an inherent architectural risk for any agent that processes untrusted content — emails, web pages, messages — and OpenClaw processes all of it. OpenClaw's own maintainers have been candid: "If you can't understand how to run a command line, this is far too dangerous of a project for you to use safely." That's not a reassuring disclaimer, but it's an honest one. One of OpenClaw's top contributors has described the current user base as "early tinkerers," and that's probably the right frame. Moltbook's AI-generated manifestos made for a fascinating internet moment; the underlying machinery carries real risks if misconfigured.
This guide is for the tinkerers. It covers four paths — native Mac, native Windows (via WSL2), the GUI-assisted Pinokio route, and Docker — with enough detail to get you running and enough context to keep you out of the worst trouble. The technical content below was written before Steinberger's OpenAI announcement; the install mechanics haven't changed, but the cultural context around them very much has.
What Is OpenClaw, Exactly?
Technically: OpenClaw is a self-hosted orchestration layer for AI agents. It connects to your language model of choice — whether that's a cloud provider like Anthropic or OpenAI, or a locally-running model via Ollama or LM Studio — and exposes a unified interface for building, running, and monitoring agents. It handles channels (Telegram, WhatsApp, Discord, Signal, and others), tool use, and multi-step reasoning workflows, all from a web dashboard that runs locally on port 18789.
The pitch is essentially: take the power of something like Claude or GPT, hook it into your own infrastructure, and build agents that do real work — scanning emails, scraping Reddit, writing and executing scripts, managing cronjobs — without routing everything through someone else's servers.
OpenClaw is upfront about where it stands. The installer will lecture you, in plain English, that this is a beta-phase project with "very sharp edges," and explicitly warns that anyone unfamiliar with security fundamentals and permission management probably shouldn't be running it. That's an unusual and refreshing amount of honesty for an open-source tool. Keep it in mind.
The Native Install: Mac
For most Mac users, especially those on M-series hardware, the install story is refreshingly simple. Open Terminal and run:
curl -fsSL https://openclaw.ai/install.sh | bash
The installer is Apple Silicon-aware. It detects your architecture, checks for Node 22 or later (installing it if missing), and drops you into an interactive onboarding sequence. During onboarding, you'll be asked to choose a model provider. If you're running Ollama locally with Qwen, point it at http://localhost:11434. If you're going cloud, this is where your Anthropic, OpenAI, or OpenRouter API key goes in.
Once onboarding completes, install the persistence daemon and start the gateway:
openclaw onboard --install-daemon
openclaw gateway start
The daemon registers with macOS's launchd, so OpenClaw survives reboots without babysitting. Run openclaw doctor to verify everything is wired up correctly, then open your browser to http://127.0.0.1:18789 to confirm the dashboard is live. If OpenClaw doesn't start automatically after initial setup, openclaw gateway and openclaw tui will bring up the gateway and the terminal chat interface respectively.
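To confirm the launchd registration actually took, grep for the job by name (the exact label can vary between releases, so this avoids assuming one):
launchctl list | grep -i openclaw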
For local models, pull what you need through Ollama:
brew install ollama
ollama pull qwen2.5-coder:7b
LM Studio users on port 1234 can point OpenClaw there instead — the config accepts any OpenAI-compatible endpoint.
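Before onboarding, it's worth confirming the local endpoint is actually answering. Both checks below use the servers' own standard APIs, independent of OpenClaw:
curl -s http://localhost:11434/api/tags    # Ollama: lists pulled models
curl -s http://localhost:1234/v1/models    # LM Studio: OpenAI-compatible model listing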
The Native Install: Windows
Windows support comes with an important caveat: native Windows is untested by the OpenClaw team. The recommended path is WSL2, which gives you a proper Linux environment inside Windows and sidesteps most of the compatibility headaches around Node tooling.
Enable WSL2 from PowerShell running as Administrator:
wsl --install
This pulls down Ubuntu by default. Once the Ubuntu terminal is initialized, update your PATH if needed:
echo 'export PATH="$(npm prefix -g)/bin:$PATH"' >> ~/.bashrc
source ~/.bashrc
Then run the same curl installer as on Mac. Onboard, start the gateway, and you're in business. For local models under WSL2, you can run Ollama inside the WSL2 environment (cleanest) or natively on Windows and point your WSL config at the host's port. Both work; native Windows Ollama can be slightly faster on some hardware because of more direct GPU driver access.
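If Ollama runs on the Windows side, point your WSL2 config at the Windows host rather than localhost. Under the default NAT networking the host sits at the default gateway, and the Windows-side server needs to listen on 0.0.0.0 (set OLLAMA_HOST) to be reachable. A quick sketch from inside WSL2:
WIN_HOST=$(ip route show default | awk '{print $3}')   # Windows host's address on the WSL virtual switch
curl -s "http://${WIN_HOST}:11434/api/tags"            # should list models if Ollama is reachable
Use that same address as the base URL during onboarding.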
The Easy Path: Pinokio
If the above sounds like more terminal time than you signed up for, Pinokio changes the equation considerably. It's a self-contained desktop application that functions as a kind of browser for AI apps, managing dependencies, environments, and launches through a graphical interface.
Download Pinokio from pinokio.computer — an installer for Mac or a ZIP for Windows (extract and run Pinokio.exe). On first launch, it bootstraps its own copies of Node and Git, so you don't need those separately.
Once running, navigate to the Discover or Community tab, search for "OpenClaw Launcher," click Install, and give the folder a name. During installation you'll be prompted for your model provider and API key. For Ollama, set the base URL to http://127.0.0.1:11434 and use "ollama-local" as a placeholder key. After installation, OpenClaw appears as a card in your Pinokio library — click Run to launch the gateway and open the dashboard.
One important detail: Pinokio installs OpenClaw in a self-contained folder, but shares the global ~/.openclaw config. API keys set through Pinokio's onboarding persist if you later switch to a terminal workflow, and vice versa. This is intentional and not actually a conflict — but it does mean that config file deserves care. Using environment variables rather than hardcoding keys directly into openclaw.json is the right long-term habit:
export ANTHROPIC_API_KEY="sk-ant-..."
Reference $ANTHROPIC_API_KEY in your config rather than the literal string.
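To keep the variable across terminal sessions, persist the export in your shell profile (zsh is the macOS default; use ~/.bashrc under WSL2). Note that a gateway running as a launchd daemon won't read shell profiles, so in that case set the key during onboarding or in the config instead.
echo 'export ANTHROPIC_API_KEY="sk-ant-..."' >> ~/.zshrc
source ~/.zshrc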
When Pinokio Breaks: Troubleshooting
Pinokio OpenClaw installs fail more often than you'd hope, and the failure modes cluster around a few predictable causes: dependency detection loops (particularly around Conda, Git, and Node), PATH configuration issues, and config conflicts with existing global tool installations. Mac users hit fewer problems on Apple Silicon than Windows users do, but neither platform is immune. The good news is that a clean reinstall resolves the vast majority of issues.
Before diving into specifics: always check the Pinokio app's built-in Terminal/Logs view first, and run openclaw doctor after any install attempt. Update Pinokio itself via the built-in updater before troubleshooting anything else — a significant number of reported issues are version-specific.
Mac Fixes
Apple Silicon generally behaves well. When things do go wrong, the culprits are usually a prior global OpenClaw install conflicting with Pinokio's environment, or a stuck process.
If you have an existing global OpenClaw installation, back up ~/.openclaw and delete it for a fresh onboarding. Then in Pinokio, delete ~/pinokio/bin — this preserves your installed apps in ~/pinokio/api while clearing the dependency cache — restart Pinokio, and retry the OpenClaw install.
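As a sequence, assuming default locations for both folders:
cp -R ~/.openclaw ~/.openclaw.bak   # keep a copy of keys and agent memory
rm -rf ~/.openclaw                  # forces a fresh onboarding
rm -rf ~/pinokio/bin                # clears the dependency cache; apps in ~/pinokio/api stay put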
For stuck or hung processes:
killall pinokio
Then relaunch normally.
Windows Fixes
Windows is where Pinokio installs earn their reputation for unreliability. Conda and PATH errors dominate the failure reports, and the fixes range from trivial to genuinely tedious.
The "not installed" loop — where Pinokio keeps cycling through dependency installation despite Conda, Git, or Node already being present — is the most common Windows failure. The fix is a clean Pinokio reinstall: delete C:\pinokio\bin (or wherever your Pinokio root lives), reboot, and reinstall Pinokio fresh. This resolves the loop in the large majority of cases.
Conda traceback / brotli errors point to a broken Miniconda environment inside Pinokio's bin folder. The manual fix: download Miniconda py310_24.5.0, install it directly to pinokio\bin\miniconda, add both bin and Scripts subdirectories to your PATH, and reboot.
PATH / npm not recognized errors — meaning OpenClaw or git commands aren't found — require running PowerShell as Administrator. Run npm config get prefix to find your npm global prefix, then add %AppData%\npm to your system PATH, and reopen your terminal.
Freezes on launch after an otherwise successful install generally mean Pinokio got installed to a non-default or deeply nested path. Delete the root Pinokio folders from both Program Files and your user directory, and do a fresh install to the default path.
If you've worked through the above and still can't get a stable install, the nuclear option is a complete Pinokio removal via Programs and Features, manual deletion of ~/pinokio or %userprofile%\pinokio, and a fresh download. At that point it's also worth asking whether the Pinokio path is actually the right one for your use case — the Docker setup below sidesteps these issues entirely.
Post-Install Gateway Issues
Once Pinokio successfully installs OpenClaw, a second class of problems can emerge when actually running it.
If the gateway fails to start, check ~/.openclaw/openclaw.json and confirm gateway.mode is set to "local", then run openclaw gateway start from the terminal. If a prior OpenClaw install left a conflicting config, openclaw configure or openclaw onboard will walk you through resetting it.
Port 18789 conflicts — where something else on your machine is already using that port — are diagnosable with:
lsof -i :18789
Kill whatever's there and restart the gateway.
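lsof's -t flag prints just the PID, which makes the kill-and-restart a two-liner:
kill $(lsof -t -i :18789)
openclaw gateway start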
The Sensible Path: Docker
Here's a confession worth making upfront: running OpenClaw directly on your primary machine is a decision that deserves some thought. An AI agent with tool access — file creation, shell execution, web browsing, email — is a meaningful attack surface. If something goes wrong, whether through a bug, a prompt injection attack, or the model simply hallucinating a destructive action, you want a contained blast radius.
Docker solves this cleanly. It's also the officially supported path for cloud deployments, and it's what you should use if you're doing anything beyond casual local experimentation.
Getting the Repository
git clone https://github.com/openclaw/openclaw.git
cd openclaw
./docker-setup.sh
The setup script uses Docker Compose and the included docker-compose.yml. It creates two folders on your host machine that get mounted as volumes:
~/.openclaw — configuration, API keys, agent memory
~/openclaw/workspace — the agent's working directory; files it creates land here, and it has direct read/write access to anything you put here
The workspace mount deserves attention before you start. The agent can freely read and write within it. Put sensitive files there and you've effectively handed them to the model.
First-Run Configuration
docker compose up -d
On first run, OpenClaw's onboarding wizard walks you through configuration. The non-obvious choices: select "manual" for onboarding mode, "Local gateway (this machine)" for gateway type. For model provider, OpenRouter is a practical choice — one API key, access to many models, and spending limits you can set at the account level. That spending cap matters: reports of agents burning through $100+ in a single day are not uncommon when powerful models are pointed at aggressive agentic tasks.
For Telegram setup — which gives you mobile access to your agent from anywhere — create a bot via @BotFather on Telegram, get your token, provide it during onboarding. After setup, pair your account:
docker compose run --rm openclaw-cli pairing approve telegram <CODE>
Skip Tailscale during initial setup unless you specifically need it. Misconfiguration there has a way of making recovery complicated.
Running Administrative Commands
The Docker Compose setup includes two containers: openclaw-gateway (the main process) and openclaw-cli (for management). Run CLI commands from the same directory as docker-compose.yml:
docker compose run --rm openclaw-cli status
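Typing that prefix before every management command gets old fast; a shell alias helps, as long as you run it from the directory containing docker-compose.yml:
alias ocli='docker compose run --rm openclaw-cli'
ocli status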
Accessing the Web Dashboard
The dashboard at http://localhost:18789 requires a token URL parameter to authenticate. If you didn't capture it during setup:
docker compose run --rm openclaw-cli dashboard --no-open
If you then see "disconnected: pairing required," approve the browser device directly through the gateway container:
docker compose exec openclaw-gateway \
node dist/index.js devices list
docker compose exec openclaw-gateway \
node dist/index.js devices approve <REQUEST-ID>
Security Hardening
For remote deployments, bind the dashboard port to localhost only and access via SSH tunnel:
ssh -L 18789:localhost:18789 user@your-host
Run the container as a non-root user with a read-only filesystem:
docker run --read-only --user 1000:1000 --network none ... openclaw/openclaw
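If you're on the Compose setup from earlier rather than raw docker run, the same hardening can live in an override file. Treat this as a sketch: the service name comes from the bundled docker-compose.yml, the UID you pick has to own the bind-mounted folders on the host, and a read-only root may need extra tmpfs entries depending on where the image writes.
cat > docker-compose.override.yml <<'EOF'
services:
  openclaw-gateway:
    user: "1000:1000"
    read_only: true
    tmpfs:
      - /tmp
EOF
docker compose up -d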
For installing additional packages inside the container without breaking the regular setup:
docker compose exec -u root openclaw-gateway bash
apt-get update && apt-get install -y ripgrep
What It Actually Does: Lessons from the Field
Setting up OpenClaw is one thing. Watching it operate is another. Some observations from hands-on testing worth keeping in mind before you hand it the keys to anything important.
It works. Given access to an email account and a Telegram bot, OpenClaw with a mid-tier model like Gemini 3 Flash can set up SMTP sending, write Python scripts to pull and filter incoming mail, create cronjobs to run on schedule, and post summaries to a Telegram channel — largely autonomously. The same applies to tasks like scraping Reddit's JSON feeds for topic-specific news, complete with upvote filtering, deduplication, and timed delivery. These aren't toy demonstrations.
It also makes mistakes. In one documented test, asked to send an email "to the same address as before" without explicit context, the model hallucinated a completely different address and sent the message there. The lesson isn't that OpenClaw is unreliable — it's that agentic systems require the same careful prompt construction you'd apply anywhere else. The model in that test then wrote a self-imposed safeguard into USER.md, instructing itself never to guess email addresses. That's actually a reasonable pattern: letting the agent establish operating constraints based on its own errors.
Config file edits can be fatal. In another test, the agent attempted to configure Ollama integration by editing openclaw.json directly, using "mode": "local" where the expected value was "mode": "api_key". One wrong string, and the gateway refused to start. The fix was manual. The lesson: before letting the agent touch configuration files, tell it explicitly to make a backup first. This is not an edge case.
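A one-line backup before any config-touching task covers most of the risk, and gives you a clean rollback when it doesn't:
cp ~/.openclaw/openclaw.json ~/.openclaw/openclaw.json.bak
# if the gateway refuses to start after an agent edit, restore:
cp ~/.openclaw/openclaw.json.bak ~/.openclaw/openclaw.json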
Token costs deserve attention. Budget-conscious setups should use OpenRouter's credit limits, start with cheaper models for experimentation, and graduate to more capable options once workflows are proven. Two days of testing with Gemini 3 Flash ran about $3.50 for 12 million tokens (roughly $0.29 per million, blended across input and output) — a reasonable baseline for calibration.
Standard user permissions are your friend. Running OpenClaw as a standard user (not root) means the agent can't install system packages without surfacing the commands to you for review. It can still create files, write and execute scripts, set cronjobs, and send emails — plenty of capability for most use cases, with a meaningful safety layer intact.
Picking a Brain: Model Recommendations
OpenClaw is only as capable as the model driving it. The key criteria for agentic use are native tool-calling support and context windows of at least 128k tokens — multi-step workflows burn context faster than you'd expect.
Cloud Models
Claude Sonnet 4 remains the benchmark for agentic tasks. Strong reasoning, reliable tool use, 200k context, and the kind of instruction-following that matters when your agent is making real decisions. At $3/$15 per million tokens (in/out), it's not cheap for high-volume automation, but for personal and small-team use it's hard to beat on raw capability.
GPT-5.3 Codex via OpenAI is worth considering specifically for coding-heavy workflows — its self-improving reasoning on code tasks is genuinely differentiated, with a 400k context window. Note that at launch it's available through paid ChatGPT subscriptions only; token-based API pricing had not been released at time of writing.
GLM 4.7 via Z.AI or OpenRouter at around $0.40 input / $1.50–$1.70 output per million tokens (prices vary by provider) is the standout budget option. Multi-step reasoning that punches well above its price point makes it a sensible default for experimentation and a practical fallback in a model chain.
Kimi K2.5 from Moonshot ($0.60/$2.50, ~256k context) is worth attention for agent swarm patterns and complex tool orchestration — less well-known but capable.
Gemini 3 Pro ($2/$12, 1M+ context) is the outlier for tasks where sheer context window size matters more than reasoning depth.
For model chains, a sensible default hierarchy is: local model → GLM 4.7 (budget cloud) → Claude Sonnet 4 (premium). Configure this via openclaw configure and let the fallback logic handle it automatically.
Local Models (Ollama on Apple Silicon)
All of these assume quantized GGUF models on Apple Silicon with Ollama. Anything below 32k context becomes limiting quickly for agentic tasks — factor that in when choosing quantization levels. Unified memory figures below are approximate and depend on the specific quantization variant (Q4_K_M vs Q5_K_M etc.), context length, and OS overhead.
qwen2.5-coder:7b at Q4/Q5 (~6GB unified memory) is the right starting point — fast to load, solid for basic coding agents, reasonable tool use. Pull with explicit quantization: ollama pull qwen2.5-coder:7b-q5_K_M.
qwen2.5-coder:14b at Q4 (~12GB) hits a better balance for RAG-heavy workflows where context quality matters.
qwen3:32b at Q4 (~24GB unified memory) is M2 Max and M3 Max territory — noticeably better reasoning and tool use at the cost of generation speed.
llama3.3:70b at Q3 (~35GB) pushes toward the local intelligence frontier for those with the unified memory to run it.
minimax-m2.1 (10–20GB depending on quant) is worth noting for multilingual agent workflows.
Set OLLAMA_KEEP_ALIVE=-1 to keep models resident between agent calls rather than letting them be evicted from memory. To push all layers onto the GPU via Metal, set num_gpu to 99 (Ollama's equivalent of llama.cpp's -ngl flag) in a Modelfile or per-request options.
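On macOS the keep-alive variable has to be set where the Ollama server actually runs: via launchctl for the menu-bar app, or inline when starting the server by hand:
launchctl setenv OLLAMA_KEEP_ALIVE -1   # menu-bar app; restart Ollama to pick it up
OLLAMA_KEEP_ALIVE=-1 ollama serve       # or when running the server manually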
Switch models via the dashboard or openclaw models list, and test latency with /status before committing a model to a production workflow — local generation speed varies significantly between model sizes.
Verifying Your Setup
Regardless of install path:
openclaw doctor # Checks providers, keys, and dependencies
openclaw models list # Shows available models across configured providers
From the dashboard at localhost:18789, the Status panel shows your active provider and any connectivity issues. Send your agent a test message — /status or just "Hello" — and a coherent response means you're running.
Updates on the native path: openclaw self-update. Updates via Pinokio: the Update button on the OpenClaw card.
OpenClaw occupies an interesting position: capable enough to be genuinely useful in daily workflows, sharp-edged enough to cause real problems if you're not paying attention. The Docker path mitigates most of the latter while preserving all of the former. Start there, get comfortable with how the agent reasons and acts, then decide how much autonomy to grant it.
The model isn't magic — it hallucinates addresses, writes subtly wrong config values, and occasionally goes off-script. But the framework around it is solid, and the combination of persistent memory, multi-channel access, and real tool use puts it in a different category from a standard chat interface. For anyone who's been assembling bespoke agent setups from scratch, OpenClaw is a reasonable foundation to build on instead.