Docker is the most popular way to deploy OpenClaw. It's also the most popular way to deploy OpenClaw insecurely. We've reviewed hundreds of Docker-based OpenClaw setups, and the same three mistakes appear in nearly every one: running as root, exposing ports without authentication, and over-sharing volume mounts.
This guide covers those three critical mistakes plus seven more hardening steps that will turn your Docker deployment from a liability into a locked-down production system.
Mistake #1: Running as root
The default OpenClaw Docker image runs as the root user inside the container. This means if an attacker exploits a vulnerability in your agent — prompt injection, dependency exploit, anything — they have root access inside the container. Combined with a misconfigured Docker socket mount or a kernel vulnerability, that's root on the host machine.
The fix: non-root Dockerfile
Create a dedicated user for the OpenClaw process. Here's a production-ready Dockerfile:
FROM node:20-slim AS base
# Security: create non-root user
RUN groupadd --gid 1001 openclaw && \
useradd --uid 1001 --gid openclaw --shell /bin/false --create-home openclaw
# Install OpenClaw
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci --omit=dev && npm cache clean --force
COPY . .
# Security: own everything, then drop to non-root
RUN chown -R openclaw:openclaw /app
USER openclaw
# Security: read-only filesystem compatible
ENV NODE_ENV=production
EXPOSE 3000
HEALTHCHECK --interval=30s --timeout=5s --start-period=10s --retries=3 \
CMD node healthcheck.js
# Security: use exec form to handle signals properly
CMD ["node", "server.js"]
Key details: USER openclaw drops privileges before the process starts. The --shell /bin/false marks the account as non-interactive, so it can't be used for a login shell. The exec form CMD ensures the Node process receives signals directly for graceful shutdown.
Test it: Run docker exec your-container whoami after deployment. If it says "root", you have a problem. It should say "openclaw" (or whatever non-root user you created).
Mistake #2: Exposed ports without authentication
OpenClaw's gateway listens on port 3000 by default. Many deployments bind this directly to 0.0.0.0:3000 — making it accessible to anyone on the internet. No authentication, no rate limiting, no TLS. Your agent is a public API endpoint that anyone can call.
The fix: Docker network isolation
Never expose the OpenClaw port directly. Use a reverse proxy with authentication in front of it, and keep OpenClaw on an internal Docker network.
# docker-compose.yml
services:
openclaw:
build: .
# NO ports section — not directly accessible
networks:
- internal
environment:
- NODE_ENV=production
restart: unless-stopped
nginx:
image: nginx:alpine
ports:
- "443:443"
volumes:
- ./nginx.conf:/etc/nginx/nginx.conf:ro
- ./certs:/etc/nginx/certs:ro
networks:
- internal
- external
depends_on:
- openclaw
networks:
internal:
internal: true # No external access
external:
The internal: true flag on the network means containers on that network cannot reach the internet directly, and nothing outside can reach them. Only the nginx proxy, which sits on both networks, can forward requests to OpenClaw.
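The nginx.conf referenced in the compose file isn't shown above. A minimal sketch might look like this — the server_name, certificate filenames, and the choice of basic auth are assumptions for illustration, not OpenClaw requirements; swap in whatever auth mechanism your setup uses:

```nginx
# nginx.conf — minimal sketch; adjust server_name, cert paths, and auth to your setup
events {}

http {
    server {
        listen 443 ssl;
        server_name openclaw.example.com;

        ssl_certificate     /etc/nginx/certs/fullchain.pem;
        ssl_certificate_key /etc/nginx/certs/privkey.pem;

        location / {
            # Require credentials before anything reaches the agent
            auth_basic           "OpenClaw";
            auth_basic_user_file /etc/nginx/.htpasswd;

            # "openclaw" resolves via Docker's embedded DNS on the shared internal network
            proxy_pass http://openclaw:3000;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}
```

Because the proxy is the only container on both networks, every request must pass through this auth check before it can reach port 3000.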
Mistake #3: Volume mount over-sharing
The most dangerous misconfiguration we see: mounting the Docker socket, the entire home directory, or the host filesystem into the OpenClaw container. We've seen -v /:/host in production. That gives the container — and any compromised agent running inside it — full read-write access to your entire server.
The fix: minimal bind mounts
Mount only what the agent actually needs, with the most restrictive permissions possible:
services:
openclaw:
build: .
volumes:
# Config: read-only
- ./config/openclaw.json:/app/config/openclaw.json:ro
# Data: read-write, but only the data directory
- openclaw-data:/app/data
# Logs: read-write bind mount, scoped to the logs directory only
- ./logs:/app/logs
# NEVER mount these:
# - /var/run/docker.sock (container escape)
# - /etc (host config access)
# - /home (SSH keys, credentials)
# - / (everything)
volumes:
openclaw-data:
Use named Docker volumes for persistent data instead of bind mounts where possible. Named volumes are managed by Docker and aren't directly accessible from the host filesystem without explicit access.
Read-only filesystem
After fixing the big three, the next step is making your container's filesystem read-only. This prevents an attacker from modifying application code, installing tools, or dropping malware — even if they get shell access.
services:
openclaw:
build: .
read_only: true
tmpfs:
- /tmp:size=64m,noexec,nosuid
volumes:
- openclaw-data:/app/data
- ./logs:/app/logs
The read_only: true flag makes the entire container filesystem immutable. The tmpfs mount gives the process a small writable temp directory in memory (Node.js and many libraries expect a writable /tmp), with noexec so nothing in /tmp can be executed and nosuid to block privilege escalation through setuid binaries.
Security scanning with Trivy
Before deploying any image, scan it for known vulnerabilities. Trivy is fast, free, and catches issues in OS packages and Node.js dependencies.
# Scan your built image
$ trivy image openclaw:latest
# In CI/CD — fail the build on high/critical vulnerabilities
$ trivy image --exit-code 1 --severity HIGH,CRITICAL openclaw:latest
# Scan your config files too
$ trivy config ./docker-compose.yml
Run Trivy in your CI pipeline and block deploys that introduce high or critical vulnerabilities. The five minutes it takes to set up will save you from deploying a container with a known exploit in a base image dependency.
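As a sketch of what that CI step can look like, here's a minimal GitHub Actions job — the workflow name, image tag, and use of the aquasecurity/trivy-action are illustrative assumptions; adapt them to your CI system and registry:

```yaml
# .github/workflows/scan.yml — sketch assuming GitHub Actions
name: image-scan
on: [push]
jobs:
  trivy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build image
        run: docker build -t openclaw:${{ github.sha }} .
      - name: Scan with Trivy
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: openclaw:${{ github.sha }}
          exit-code: '1'       # fail the job on findings
          severity: HIGH,CRITICAL
```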
Docker secrets for API keys
Never put API keys in environment variables defined in your docker-compose.yml or Dockerfile. Values baked in with ENV persist in image layers, and compose-supplied variables show up in docker inspect output and can leak through process listings and crash dumps. Use Docker secrets instead.
services:
openclaw:
build: .
secrets:
- openai_api_key
- openclaw_auth_token
environment:
- OPENAI_API_KEY_FILE=/run/secrets/openai_api_key
- AUTH_TOKEN_FILE=/run/secrets/openclaw_auth_token
secrets:
openai_api_key:
file: ./secrets/openai_api_key.txt
openclaw_auth_token:
file: ./secrets/auth_token.txt
Secrets are mounted as files at /run/secrets/ inside the container. Under Docker Swarm they live in an in-memory tmpfs and are never written to disk inside the container; with plain Compose they are bind-mounted from the host. Either way, the secret values don't appear in the container's environment or in docker inspect output — your application reads them from the file path instead.
Add the ./secrets/ directory to .gitignore and .dockerignore on day one — it must never be committed to version control or copied into an image.
Health checks and graceful shutdown
Health checks ensure Docker restarts your container if it becomes unresponsive. Without them, a hung OpenClaw process sits there doing nothing until someone notices manually.
// healthcheck.js
const http = require('http');
const options = {
hostname: 'localhost',
port: 3000,
path: '/health',
timeout: 4000
};
const req = http.get(options, (res) => {
process.exit(res.statusCode === 200 ? 0 : 1);
});
req.on('error', () => process.exit(1));
req.on('timeout', () => {
req.destroy();
process.exit(1);
});
For graceful shutdown, your OpenClaw process needs to handle SIGTERM properly. When Docker stops a container, it sends SIGTERM first, waits 10 seconds (configurable with stop_grace_period), then sends SIGKILL. Your process should use that window to finish in-flight requests and flush logs.
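If your shutdown routine needs more than the default 10 seconds, raise the grace period in the compose file (the 30s value here is an example, not a recommendation):

```yaml
services:
  openclaw:
    # Give in-flight requests up to 30s before Docker escalates to SIGKILL
    stop_grace_period: 30s
```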
// In your server.js
process.on('SIGTERM', async () => {
  console.log('SIGTERM received, shutting down gracefully...');
  // http.Server#close takes a callback, not a promise — wrap it so we actually wait
  await new Promise((resolve) => server.close(resolve));
  await flushLogs();
  await closeConnections();
  process.exit(0);
});
Resource limits
Without resource limits, a runaway OpenClaw process can consume all available memory and CPU on the host, taking down other services running on the same machine.
services:
openclaw:
build: .
deploy:
resources:
limits:
memory: 512M
cpus: '1.0'
reservations:
memory: 256M
cpus: '0.5'
# Prevent container from acquiring new privileges
security_opt:
- no-new-privileges:true
# Drop all Linux capabilities, add only what's needed
cap_drop:
- ALL
The no-new-privileges security option prevents processes inside the container from gaining additional privileges through setuid binaries or capability escalation. cap_drop: ALL removes all Linux capabilities — the principle of least privilege applied at the kernel level.
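If the container genuinely needs one specific capability — binding a port below 1024 is the classic case — add only that capability back rather than abandoning cap_drop. A sketch (OpenClaw on port 3000 should not need this):

```yaml
services:
  openclaw:
    cap_drop:
      - ALL
    cap_add:
      - NET_BIND_SERVICE  # only if binding a privileged port (<1024)
```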
The complete docker-compose.yml
Here's everything combined into a single, production-ready configuration:
version: '3.8'
services:
openclaw:
build: .
read_only: true
user: "1001:1001"
tmpfs:
- /tmp:size=64m,noexec,nosuid
volumes:
- ./config/openclaw.json:/app/config/openclaw.json:ro
- openclaw-data:/app/data
- ./logs:/app/logs
secrets:
- openai_api_key
- openclaw_auth_token
environment:
- NODE_ENV=production
- OPENAI_API_KEY_FILE=/run/secrets/openai_api_key
- AUTH_TOKEN_FILE=/run/secrets/openclaw_auth_token
networks:
- internal
deploy:
resources:
limits:
memory: 512M
cpus: '1.0'
reservations:
memory: 256M
cpus: '0.5'
security_opt:
- no-new-privileges:true
cap_drop:
- ALL
restart: unless-stopped
nginx:
image: nginx:alpine
ports:
- "443:443"
volumes:
- ./nginx.conf:/etc/nginx/nginx.conf:ro
- ./certs:/etc/nginx/certs:ro
networks:
- internal
- external
depends_on:
- openclaw
restart: unless-stopped
networks:
internal:
internal: true
external:
volumes:
openclaw-data:
secrets:
openai_api_key:
file: ./secrets/openai_api_key.txt
openclaw_auth_token:
file: ./secrets/auth_token.txt
The bottom line
Docker makes it easy to deploy OpenClaw. It also makes it easy to deploy OpenClaw badly. The three critical mistakes — running as root, exposed ports, volume over-sharing — are present in the majority of deployments we audit.
The full hardening takes about 30 minutes: non-root user, internal networks, minimal mounts, read-only filesystem, Trivy scanning, Docker secrets, health checks, resource limits, and dropped capabilities. That's 30 minutes between "anyone can compromise this" and "this is actually production-grade."
Copy the complete docker-compose.yml above, adjust the paths for your setup, and deploy it. Your future self will thank you when the next OpenClaw CVE drops and your container is already locked down.