Running OpenClaw on Your Synology NAS — Hardened for Always-On Homelab Use

TL;DR: OpenClaw is upfront about its security model: it's a personal agent that trusts its operator, and it tells you to harden it before exposing it. This post implements those recommendations as a Portainer stack on a Synology NAS, adding a Squid egress proxy to whitelist outbound domains, Docker secrets for credential isolation, and a read-only root filesystem.
Why This Post Exists
OpenClaw is genuinely useful. It's an open-source, self-hosted AI gateway that connects your messaging apps — WhatsApp, Telegram, Discord — to an AI agent that can actually do things: run shell commands, manage files, browse the web, send you alerts. It's what you get when you give an LLM hands.
To their credit, the OpenClaw team is transparent about the security implications. The very first thing you see during setup is a security warning that says, in plain English: "OpenClaw is a personal agent: one trusted operator boundary. A bad prompt can trick it into doing unsafe things." It tells you to sandbox tools, keep secrets out of the agent's reachable filesystem, and run openclaw security audit --deep regularly. If you're not comfortable with security hardening, it tells you not to run it. That honesty is refreshing.
The problem is that a lot of people skip that screen. Censys tracked over 21,000 publicly exposed instances in January 2026 alone, many leaking plaintext API keys. CVE-2026-25253 (CVSS 8.8) showed that a single malicious link could steal an auth token and achieve remote code execution. Researchers found over 1,000 malicious packages in the ClawHub skill marketplace, including macOS credential stealers. The attack surface is real, and OpenClaw themselves will tell you so.
What I couldn't find was a guide that takes OpenClaw's own security recommendations and implements them specifically for homelabbers running always-on NAS hardware. Most guides target developers on laptops or VPS deploys with Nginx bolted on. Neither addresses the fundamental question for a Synology setup: how do you give an AI agent the ability to execute arbitrary commands while limiting what it can actually reach?
That's what we're building here.
What We're Building
OpenClaw's setup screen recommends sandboxed, least-privilege tools, keeping secrets out of the agent's reachable filesystem, and pairing/allowlists. The problem isn't who can reach OpenClaw — it's on your LAN, that's fine. The problem is what OpenClaw can reach. It can execute shell commands, install skills from a marketplace that's been targeted by supply chain attacks, and by default it has unrestricted outbound network access. A compromised or malicious skill can talk to any server on the internet.
The stack has four components, all defined in a single Portainer stack (docker-compose) file:
- OpenClaw gateway — the main service, locked down with a read-only root filesystem, dropped capabilities, and a non-root user
- Squid egress proxy — a forward proxy that whitelists exactly which domains OpenClaw can reach outbound (your LLM API provider, nothing else)
- An internal Docker network — marked internal: true, giving OpenClaw zero direct internet access
- Docker secrets — for your API keys, so they never appear in environment variables or compose files
The whole thing runs in Portainer on Synology's Container Manager, which means you get a UI for stack management, log access, and container health — no SSH required for day-to-day operations.
Prerequisites
You'll need:
- A Synology NAS running DSM 7.2+ with Container Manager installed
- Portainer CE installed (either via Container Manager or as a standalone container — I run it on port 9443)
- An Anthropic API key (or another LLM provider key — OpenClaw supports several)
- SSH access to your NAS for initial directory and permission setup
Step 1: Directory Structure and Permissions
SSH into your NAS and create the working directories. This is the step that trips up most Synology users because OpenClaw's container runs as UID 1000, but Synology's Docker process doesn't default to that.
# SSH into your Synology
ssh your-admin-user@your-nas-ip
# Create a dedicated shared folder or use an existing one.
# I use /volume1/docker/openclaw — adjust to match your setup.
sudo mkdir -p /volume1/docker/openclaw/config
sudo mkdir -p /volume1/docker/openclaw/workspace
sudo mkdir -p /volume1/docker/openclaw/squid-cache
sudo mkdir -p /volume1/docker/openclaw/secrets
# OpenClaw runs as UID 1000 inside the container.
# If you skip this, your config will silently revert on every restart.
sudo chown -R 1000:1000 /volume1/docker/openclaw/config
sudo chown -R 1000:1000 /volume1/docker/openclaw/workspace
# Squid runs as its own user
sudo chown -R 13:13 /volume1/docker/openclaw/squid-cache

The silent revert problem deserves emphasis. OpenClaw writes its configuration to /home/node/.openclaw inside the container. If that bind mount isn't writable by UID 1000, the write fails without an error message. You'll configure OpenClaw through its onboarding flow, everything will look fine, and then the container restarts and you're back to defaults. I lost an hour to this before checking docker logs carefully enough to spot a permissions warning buried in the output.
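Since the failure is silent, it's worth verifying ownership before first boot. Here's a minimal sketch — `check_owner` is my own hypothetical helper, and the paths and UIDs are the ones this post assumes (DSM's busybox `stat` supports `-c` on the models I've used, but verify on yours):

```shell
#!/bin/sh
# check_owner: confirm a bind-mount directory is owned by the UID the
# container will run as. Hypothetical helper — not part of OpenClaw.
check_owner() {
  dir="$1"; want_uid="$2"
  got_uid=$(stat -c '%u' "$dir") || return 2
  if [ "$got_uid" = "$want_uid" ]; then
    echo "OK: $dir is owned by UID $want_uid"
  else
    echo "MISMATCH: $dir is owned by UID $got_uid, expected $want_uid"
    return 1
  fi
}

# On the NAS (UID 1000 = OpenClaw, UID 13 = Squid's proxy user):
# check_owner /volume1/docker/openclaw/config 1000
# check_owner /volume1/docker/openclaw/workspace 1000
# check_owner /volume1/docker/openclaw/squid-cache 13
```

Run it once after the chown step and again after your first container restart — if the second run reports a mismatch, something is rewriting ownership underneath you.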
Step 2: Create Docker Secrets
OpenClaw's own setup recommends keeping secrets out of the agent's reachable filesystem. Most tutorials pass the API key as an environment variable:
# Don't do this.
environment:
  - ANTHROPIC_API_KEY=sk-ant-api03-YOUR-KEY-HERE

That key is now visible in docker inspect, in Portainer's container details, in any process listing, and in any log that dumps environment variables. If a malicious skill reads process.env, it gets your key.
Docker secrets are better. They mount the secret as a file inside the container at /run/secrets/<name>, readable only by the container's process. They don't appear in inspect output or environment variable listings.
# Still on your NAS via SSH.
# Write your API key to a file — we'll reference it in the stack.
# (sudo tee, because the secrets directory was created by root —
# a plain `echo >` redirect would be denied.)
echo "sk-ant-api03-YOUR-ACTUAL-KEY-HERE" | sudo tee /volume1/docker/openclaw/secrets/anthropic_api_key > /dev/null
# Lock it down.
sudo chmod 600 /volume1/docker/openclaw/secrets/anthropic_api_key
sudo chown 1000:1000 /volume1/docker/openclaw/secrets/anthropic_api_key

A note on Synology and Docker secrets: Swarm-mode secrets (docker secret create) aren't available unless you initialise a single-node swarm, which adds complexity most homelabbers don't need. File-based secrets in Compose v2 work fine and keep things simple. The trade-off is that the file exists on disk — so your NAS's own encryption and access controls matter. If you're running SHR with volume encryption enabled, you're in good shape.
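Because the protection is just file permissions, it's worth a periodic check that nothing in the secrets directory has drifted looser than mode 600. A small sketch — `audit_secrets` is my own hypothetical helper, and the path is the one this post assumes:

```shell
#!/bin/sh
# audit_secrets: list any files under the secrets directory that are
# NOT exactly mode 600. No output means everything is locked down.
audit_secrets() {
  dir="$1"
  find "$dir" -type f ! -perm 600
}

# On the NAS — any output here is a problem worth investigating:
# audit_secrets /volume1/docker/openclaw/secrets
```

This is a one-liner you could drop into a DSM scheduled task alongside your other health checks.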
Step 3: The Squid Egress Proxy Configuration
Here's the core idea: OpenClaw sits on an internal-only Docker network with no route to the internet. The only way out is through a Squid proxy that also sits on a second, external network. Squid whitelists exactly the domains OpenClaw needs.
Create the Squid config:
cat > /volume1/docker/openclaw/squid.conf << 'EOF'
# Minimal Squid config for OpenClaw egress control
# Only allow HTTPS CONNECT to approved API endpoints
# ACL definitions
acl localnet src 172.16.0.0/12 # Docker internal networks
acl SSL_ports port 443
acl CONNECT method CONNECT
# Whitelist: only these domains can be reached
acl allowed_domains dstdomain .anthropic.com
acl allowed_domains dstdomain .api.anthropic.com
# Add other providers if needed:
# acl allowed_domains dstdomain .api.openai.com
# acl allowed_domains dstdomain .generativelanguage.googleapis.com
# Rules — order matters
# Allow the container's own healthcheck to query the cache manager.
# (localhost and manager are ACLs Squid predefines; without this,
# the deny all below would fail the mgr:info healthcheck.)
http_access allow localhost manager
http_access deny manager
http_access allow CONNECT SSL_ports allowed_domains
http_access allow localnet allowed_domains
http_access deny all
# Listener
http_port 3128
# Don't cache API responses — they're unique every time
cache deny all
# Minimal logging
access_log stdio:/dev/stdout
cache_log stdio:/dev/stderr
# Reduce information leakage
via off
forwarded_for delete
reply_header_access X-Cache deny all
reply_header_access X-Cache-Lookup deny all
EOF

What this does: OpenClaw can only make HTTPS connections to *.anthropic.com. Everything else — every other domain, every other port — gets a hard deny. If a compromised skill tries to exfiltrate data to an attacker's server, it hits the proxy and gets rejected.
The via off and forwarded_for delete lines strip headers that would advertise the proxy's existence to upstream servers. Not strictly necessary for API calls, but good hygiene.
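One detail worth internalising before you edit the whitelist: a leading dot in a dstdomain ACL matches the base domain and all of its subdomains, which is why `.anthropic.com` alone already covers `api.anthropic.com` (the second ACL line above is redundant but harmless). Here's a pure-shell sketch of that matching rule — my own illustration, not Squid code:

```shell
#!/bin/sh
# matches_dstdomain: mimic Squid's dstdomain matching for illustration.
# A pattern with a leading dot matches the base domain and any subdomain;
# a pattern without a dot must match the host exactly.
matches_dstdomain() {
  host="$1"; pattern="$2"
  case "$pattern" in
    .*)
      base="${pattern#.}"
      # The base domain itself matches...
      [ "$host" = "$base" ] && return 0
      # ...and so does any true subdomain (note the literal dot —
      # evilanthropic.com does NOT match .anthropic.com).
      case "$host" in
        *."$base") return 0 ;;
      esac
      return 1
      ;;
    *)
      [ "$host" = "$pattern" ]
      ;;
  esac
}

# matches_dstdomain api.anthropic.com  .anthropic.com   -> match (subdomain)
# matches_dstdomain anthropic.com      .anthropic.com   -> match (base domain)
# matches_dstdomain evilanthropic.com  .anthropic.com   -> no match
```

The practical takeaway: one dotted entry per provider is enough, and a lookalike domain that merely ends in the same characters won't slip through.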
Step 4: The Portainer Stack
In Portainer, go to Stacks → Add Stack. Give it a name like openclaw and paste this compose file:
version: "3.8"
secrets:
anthropic_api_key:
file: /volume1/docker/openclaw/secrets/anthropic_api_key
services:
# --- Egress proxy: the only service with internet access ---
egress-proxy:
image: ubuntu/squid:latest
container_name: openclaw-egress-proxy
restart: unless-stopped
networks:
- openclaw-internal # talks to OpenClaw
- openclaw-egress # talks to the internet (outbound only)
volumes:
- /volume1/docker/openclaw/squid.conf:/etc/squid/squid.conf:ro
- /volume1/docker/openclaw/squid-cache:/var/spool/squid
read_only: true
tmpfs:
- /var/run:size=10M
- /var/log/squid:size=50M
mem_limit: 128m
cpus: 0.25
healthcheck:
test: ["CMD-SHELL", "squidclient -h localhost mgr:info | grep -q 'Squid Object Cache'"]
interval: 30s
timeout: 5s
retries: 3
start_period: 10s
# --- OpenClaw gateway: NO direct internet access ---
openclaw:
image: ghcr.io/openclaw/openclaw:latest
container_name: openclaw-gateway
restart: unless-stopped
depends_on:
egress-proxy:
condition: service_healthy
networks:
- openclaw-internal # egress via Squid — no direct internet
ports:
- "18789:18789" # gateway API — LAN access only
environment:
- HOME=/home/node
- TERM=xterm-256color
- TZ=Europe/London # adjust to your timezone
# Route ALL outbound traffic through the egress proxy
- HTTP_PROXY=http://egress-proxy:3128
- HTTPS_PROXY=http://egress-proxy:3128
- NO_PROXY=localhost,127.0.0.1
# Tell OpenClaw where to find the secret (not the key itself)
- ANTHROPIC_API_KEY_FILE=/run/secrets/anthropic_api_key
secrets:
- anthropic_api_key
volumes:
- /volume1/docker/openclaw/config:/home/node/.openclaw
- /volume1/docker/openclaw/workspace:/home/node/.openclaw/workspace
read_only: true
tmpfs:
- /tmp:size=500M
- /home/node/.cache:size=200M
security_opt:
- no-new-privileges:true
cap_drop:
- ALL
cap_add:
- NET_BIND_SERVICE # needed for the gateway port
mem_limit: 2g
cpus: 1.0
healthcheck:
test: ["CMD-SHELL", "wget -q --spider http://localhost:18789/health || exit 1"]
interval: 30s
timeout: 10s
retries: 3
start_period: 30s
networks:
openclaw-internal:
driver: bridge
internal: true # <-- no default gateway. No internet access.
openclaw-egress:
driver: bridge # normal bridge — Squid's route to the internetLet's unpack what's happening here, because the interesting bits aren't obvious:
The internal: true network. OpenClaw sits on openclaw-internal, which Docker creates as a bridge network with no default gateway. The container literally cannot route packets to the internet. It can only reach other containers on the same network — in this case, the egress proxy.
The proxy environment variables. HTTP_PROXY and HTTPS_PROXY are the conventional way to tell OpenClaw to send all outbound HTTP/S traffic to egress-proxy:3128 on the internal network; Squid then decides whether to forward each request to the internet or deny it. One caveat: the Node.js runtime itself doesn't honour these variables — most HTTP client libraries and well-behaved applications do. That's acceptable here because the variables are a convenience, not the enforcement: anything that ignores them still has no route to the internet from the internal network.
Squid's position. The egress proxy sits on both openclaw-internal (where it receives proxied requests from OpenClaw) and openclaw-egress (a normal bridge network where it can reach the internet). It's the only container that bridges the gap between OpenClaw and the outside world.
read_only: true with targeted tmpfs mounts. The root filesystem is immutable. OpenClaw can write to its config and workspace bind mounts, and to the tmpfs mounts for /tmp and cache directories. It cannot modify its own binaries, install packages, or write anywhere unexpected. If a skill attempts to drop a payload on disk, it fails.
cap_drop: ALL then cap_add: NET_BIND_SERVICE. This is the minimal capability set. We drop every Linux capability — including things like SYS_ADMIN, NET_RAW, DAC_OVERRIDE — and add back only what's needed to bind the gateway port. This limits what a container escape could do.
no-new-privileges prevents the process from gaining additional privileges through setuid binaries or capability escalation. Belt and braces.
Resource limits. The mem_limit and cpus constraints prevent a runaway agent from consuming your entire NAS. Adjust these based on your hardware — 2GB RAM and 1 CPU core is comfortable for a single-user OpenClaw instance on a Synology DS920+ or similar.
Step 5: Deploy and Verify
Click Deploy the stack in Portainer. Watch the logs — both containers should come up healthy within about 60 seconds.
Verify the egress controls work
OpenClaw provides openclaw security audit --deep for checking your instance's security posture, and you should run that too. But we also need to confirm that the network-level controls are working — that OpenClaw genuinely cannot reach the internet directly, and that the proxy whitelist is doing its job.
# Exec into the OpenClaw container
docker exec -it openclaw-gateway /bin/sh
# This should FAIL — no direct internet access
# (clear the proxy variables so we're testing the raw network path)
http_proxy= https_proxy= HTTP_PROXY= HTTPS_PROXY= wget -q --spider https://example.com && echo "OPEN" || echo "BLOCKED"
# Expected: BLOCKED
# This should SUCCEED — goes through the proxy to an allowed domain
curl -x http://egress-proxy:3128 https://api.anthropic.com/v1/models
# Expected: JSON response (or 401 if key isn't loaded yet)
# This should FAIL — proxy denies non-whitelisted domains
curl -x http://egress-proxy:3128 https://evil-exfiltration-server.com
# Expected: 403 Forbidden from Squid
exit

If the first test shows "OPEN" instead of "BLOCKED", the internal network isn't working correctly. Check that internal: true is set on the openclaw-internal network and that the OpenClaw service isn't attached to openclaw-egress.
Check Squid's access log
docker logs openclaw-egress-proxy

You should see CONNECT api.anthropic.com:443 entries for allowed traffic and denied entries (result code TCP_DENIED) for anything else. This log is your audit trail — you can see exactly which external endpoints OpenClaw has contacted and when.
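For a quick audit, filter the log down to just the blocked requests — TCP_DENIED is Squid's standard result code for requests refused by http_access rules. A small sketch (`denied_only` is my own hypothetical helper; it reads from stdin so it composes with `docker logs`):

```shell
#!/bin/sh
# denied_only: keep only the access-log lines Squid marked as denied.
# TCP_DENIED is the result code Squid logs for blocked requests.
denied_only() {
  grep 'TCP_DENIED'
}

# On the NAS:
# docker logs openclaw-egress-proxy 2>&1 | denied_only
```

An empty result is the happy path. Anything that shows up here is either a whitelist entry you forgot, or a skill phoning somewhere it shouldn't.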
Step 6: Configure OpenClaw
With the stack running, open http://your-nas-ip:18789 in your browser. OpenClaw's onboarding flow will walk you through connecting your messaging channels and choosing your AI model.
A few Synology-specific notes:
If the onboarding asks for your API key interactively: The Docker secret is mounted at /run/secrets/anthropic_api_key. OpenClaw should pick this up via the ANTHROPIC_API_KEY_FILE environment variable. If your version of OpenClaw doesn't support the _FILE suffix convention, you'll need a small entrypoint wrapper that reads the secret file and exports it as ANTHROPIC_API_KEY. I'll show this in the appendix.
Persistence across DSM updates. Synology occasionally restarts Docker during DSM updates. The unless-stopped restart policy handles this, but test it after your next DSM update. I've had one case where Container Manager lost track of a Portainer-managed stack after a major DSM version upgrade — re-importing the stack from Portainer's backup fixed it immediately.
Firewall rules. In DSM's Control Panel → Security → Firewall, add a rule to allow inbound traffic on port 18789 only from your LAN subnet (e.g., 192.168.1.0/24). There's no reason for this port to be reachable from the internet.
What We Gained (And What It Cost)
Let's be honest about the trade-offs. OpenClaw's defaults are designed for a single trusted operator on a local machine, and the project is upfront about that. What we've done here is implement their recommended hardening for a specific deployment scenario: an always-on Synology NAS on a home network.
What this stack adds beyond the defaults:
The API key isn't in an environment variable or a compose file. A compromised skill can't read process.env.ANTHROPIC_API_KEY and exfiltrate it — the variable isn't there. The key is in a file that's only readable by the container's own process.
Outbound network access is explicitly controlled. OpenClaw can reach api.anthropic.com and nothing else. Data exfiltration to an attacker-controlled server, DNS tunnelling to unexpected domains, phone-home behaviour from malicious skills — all blocked at the proxy layer. You have a log of every outbound connection.
The container can't modify itself. The read-only root filesystem means a compromised agent can't install tools, modify binaries, or persist a backdoor. It can write to its workspace (that's the point — it's a tool-using agent), but it can't alter the container image.
Resource limits prevent a runaway agent from taking down your NAS. Your Plex server, your Home Assistant instance, your file shares — they keep working even if OpenClaw goes into an infinite loop.
What it costs:
Complexity. This is a two-service stack across two networks instead of a single docker run command. If Squid goes down, OpenClaw can't reach its API provider. You need to understand forward proxy ACLs to modify the whitelist.
If OpenClaw adds a new external dependency — say, a web browsing feature that needs to reach arbitrary URLs — you'll need to update the Squid whitelist. This is a feature, not a bug, but it does mean you need to check release notes when you upgrade.
The Docker secret approach has a gap: it protects against environment variable enumeration and casual exposure, but the file is still on your NAS's filesystem. Your NAS's own security posture matters — volume encryption, strong passwords, disabled default admin account. Defence in depth applies to the host, not just the container.
Appendix: Entrypoint Wrapper for _FILE Secret Support
If your version of OpenClaw doesn't natively support the _FILE suffix convention for secrets, add this entrypoint wrapper:
#!/bin/sh
# /entrypoint-wrapper.sh
# Reads Docker secrets from files and exports them as env vars
if [ -f /run/secrets/anthropic_api_key ]; then
  export ANTHROPIC_API_KEY="$(cat /run/secrets/anthropic_api_key)"
fi

# Hand off to the original entrypoint
exec "$@"

Then update the compose file:
openclaw:
  # ... everything else stays the same
  entrypoint: ["/bin/sh", "/entrypoint-wrapper.sh"]
  command: ["node", "gateway.js"]  # or whatever the default CMD is
  volumes:
    # Add the wrapper script
    - /volume1/docker/openclaw/entrypoint-wrapper.sh:/entrypoint-wrapper.sh:ro
    # ... existing volume mounts

What I'd Do Differently Next Time
I'd look at running OpenClaw's workspace directory on a separate, space-limited volume to prevent an agent from filling the NAS.
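One way to do that without repartitioning is a fixed-size loopback image, formatted as ext4 and mounted over the workspace path: the agent can fill the image, but never the underlying /volume1. A sketch — the 2 GB figure and the paths are assumptions, and the mkfs/mount steps need root on the NAS:

```shell
#!/bin/sh
# make_quota_img: create a fixed-size image file that will later be
# formatted and loop-mounted over the workspace directory.
make_quota_img() {
  img="$1"; size_mb="$2"
  dd if=/dev/zero of="$img" bs=1M count="$size_mb" 2>/dev/null
}

# On the NAS (run once, as root):
# make_quota_img /volume1/docker/openclaw/workspace.img 2048
# mkfs.ext4 /volume1/docker/openclaw/workspace.img
# mount -o loop /volume1/docker/openclaw/workspace.img /volume1/docker/openclaw/workspace
# chown 1000:1000 /volume1/docker/openclaw/workspace
```

The loop mount won't survive a reboot on its own — you'd need a boot-time task in DSM's Task Scheduler to remount it, which is the kind of extra moving part that kept this out of the main stack.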
If you're building your own OpenClaw image with custom skills — or any multi-arch image for that matter — and you're doing it from an Apple Silicon Mac targeting your x86 Synology, I wrote up the specific pitfalls in Multi-Platform Docker Builds from an M1 Mac Using Your Synology NAS as a Remote Builder. The short version: Synology's Docker daemon is stuck on API v1.43, which creates a version mismatch that can't be resolved with DOCKER_API_VERSION alone. The fix is running BuildKit as a standalone container and connecting to it with the remote buildx driver — it bypasses the old daemon entirely. That post covers the SSH setup gotchas on Synology too (home directories disabled by default, PermitUserEnvironment needed), which you'll hit the moment you try to automate anything over SSH.
The broader principle is one I keep coming back to with homelab services: defaults are designed for the common case, and the common case for OpenClaw is a developer running it on their own machine. An always-on NAS is a different environment. It has IoT devices with known vulnerabilities on the same network, guest devices you don't control, and kids' tablets running who-knows-what. OpenClaw tells you this during setup — "if you're not comfortable with security hardening and access control, don't run OpenClaw." This post is about being comfortable with it.
If you're running other AI agents or LLM tools in your homelab, the egress proxy pattern works for all of them. Swap the whitelist, reuse the architecture. One proxy, multiple internal networks, each service only reaching the domains it needs. That's the real takeaway.
Solution Architect with 30 years in cloud infrastructure, security, identity, and .NET engineering.