OpenClaw Incident Response: What To Do When Your Agent Is Compromised

Signs of compromise, containment steps, credential rotation, forensic analysis, and full recovery. The complete playbook.

Milo | getmilo.dev
14 min read

The ClawHavoc supply chain attack compromised 1,184 skills across ClawHub and affected an unknown number of OpenClaw deployments. If you're reading this, you've either confirmed a breach or you suspect one. Either way, the next few hours matter more than anything else you do this month.

This is the incident response playbook. It covers what compromise looks like, how to contain it, how to investigate what happened, and how to rebuild with confidence. Every command is real. Every step is sequenced. Don't skip ahead.

If you're actively under attack right now: Jump to Step 1: Immediate Containment. Kill the agent process, disconnect from the network, then come back and read the rest. Containment first, investigation second.

Recognizing signs of compromise

Most compromised OpenClaw agents don't announce themselves. The ClawHavoc campaign specifically designed payloads to operate silently alongside legitimate agent behavior. But there are signals, and knowing what to look for is the difference between catching a breach in hours versus weeks.

Behavioral indicators

Watch for shifts in how the agent behaves day to day:

- Outbound connections to hosts you never configured
- Sudden spikes in LLM API usage or billing
- SOUL.md or MEMORY.md changing when you haven't edited them
- Messages sent or commands run that you didn't ask for
- New processes or cron entries appearing under the agent's user

Technical indicators (IOCs)

These are the specific indicators of compromise associated with the ClawHavoc campaign. Check for them now.

# Check for known ClawHavoc C2 domains in your agent's network history
$ grep -rn "185\.xxx\.xxx\.42\|collect\.claw\|fix\.clawhub-cdn\|update-agent\.dev" \
    /var/log/ ~/.openclaw/logs/ 2>/dev/null

# Check for unauthorized modifications to identity files (if tracked in git)
$ git log --oneline --all -- SOUL.md MEMORY.md

# Look for unexpected processes spawned by your agent
$ ps aux | grep -E "openclaw|claw-agent" | grep -v grep

# Check for rogue cron entries
$ crontab -l 2>/dev/null
$ ls -la /etc/cron.d/ /etc/cron.daily/ 2>/dev/null

# Scan for files modified in the last 7 days in your agent directory
$ find ~/.openclaw/ -type f -mtime -7 -ls 2>/dev/null

If any of these checks return results you don't recognize, treat it as a confirmed compromise and proceed to containment.
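The individual checks above can be folded into a single sweep that flags any directory containing a known indicator. A minimal sketch — the log locations are assumptions, so point it at wherever your agent actually writes logs:

```shell
#!/bin/sh
# Minimal IOC sweep over the ClawHavoc indicators listed above.
IOC_PATTERN='collect\.claw|fix\.clawhub-cdn|update-agent\.dev'

ioc_sweep() {   # usage: ioc_sweep DIR [DIR...]
  hits=0
  for dir in "$@"; do
    [ -d "$dir" ] || continue
    # List every file under $dir containing a known C2 indicator
    for f in $(grep -rEl "$IOC_PATTERN" "$dir" 2>/dev/null); do
      echo "IOC hit: $f"
      hits=$((hits + 1))
    done
  done
  echo "total_hits=$hits"
}

# Sweep the default agent log locations (paths are assumptions)
ioc_sweep "$HOME/.openclaw/logs" /var/log
```

Anything above `total_hits=0` means you skip the rest of this section and go straight to containment.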

Step 1: Immediate containment

The goal of containment is to stop the bleeding. You're not investigating yet. You're not fixing anything. You're severing the compromised agent's ability to cause further damage.

Kill the agent process

# Find and kill all OpenClaw processes immediately
$ pkill -9 -f openclaw
$ pkill -9 -f claw-agent

# Verify nothing is still running
$ ps aux | grep -E "openclaw|claw-agent" | grep -v grep

# If the agent is running in Docker:
$ docker stop $(docker ps -q --filter "name=openclaw") 2>/dev/null
$ docker kill $(docker ps -q --filter "name=openclaw") 2>/dev/null

Isolate the network

If the agent runs on a dedicated server or VM, isolate it from both the internet and your internal network. If it shares a machine with other services, at minimum block its outbound connections.

# Block all outbound traffic from the agent user (Linux)
$ sudo iptables -A OUTPUT -m owner --uid-owner openclaw-user -j DROP

# Or if you need to isolate a whole machine:
$ sudo iptables -A OUTPUT -j DROP
$ sudo iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# Verify isolation — this should time out or fail
$ curl -s --max-time 5 https://httpbin.org/ip

Preserve evidence

Before you change anything else, snapshot the current state. You'll need this for forensic analysis.

# Snapshot the entire agent directory
$ tar czf /tmp/openclaw-incident-$(date +%Y%m%d-%H%M%S).tar.gz \
    ~/.openclaw/ SOUL.md MEMORY.md gateway.yaml skills.yaml 2>/dev/null

# Capture current process list and network connections
$ ps auxf > /tmp/incident-ps.txt
$ ss -tnpa > /tmp/incident-network.txt
$ env > /tmp/incident-env.txt

# Copy all logs
$ cp -r ~/.openclaw/logs/ /tmp/incident-logs/ 2>/dev/null

# If using Docker, export the container filesystem
$ docker export $(docker ps -aq --filter "name=openclaw") > /tmp/incident-container.tar 2>/dev/null

Do not skip evidence preservation. Once you start rotating credentials and reinstalling, you lose the ability to determine what was accessed. The forensic snapshot takes two minutes and could save you weeks of uncertainty.
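To make that snapshot defensible later, it helps to seal it with a checksum the moment it's captured. A sketch, assuming the snapshot path produced by the tar command above:

```shell
#!/bin/sh
# Seal a forensic snapshot: record capture time, SHA-256, and size in a
# manifest so later tampering with the evidence archive is detectable.
seal_snapshot() {   # usage: seal_snapshot SNAPSHOT MANIFEST
  {
    date -u +"captured_utc=%Y-%m-%dT%H:%M:%SZ"
    sha256sum "$1"
    ls -l "$1"
  } > "$2"
  echo "sealed: $2"
}

# Seal the newest snapshot from the tar command above (path assumed)
latest=$(ls -t /tmp/openclaw-incident-*.tar.gz 2>/dev/null | head -1)
if [ -n "$latest" ]; then
  seal_snapshot "$latest" /tmp/incident-manifest.txt
fi
```

Store the manifest somewhere the compromised machine can't reach.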

Step 2: Credential rotation

Assume every credential your agent had access to is compromised. This includes credentials stored in environment variables, .env files, config files, and anything accessible from the agent's filesystem.

Priority rotation order

Rotate in this order, starting with the credentials that can cause the most damage fastest:

| Priority | Credential Type | Action |
| --- | --- | --- |
| 1 — Critical | Payment APIs (Stripe, PayPal) | Rotate keys, review recent charges |
| 1 — Critical | Cloud provider (AWS, GCP, Azure) | Rotate keys, check IAM activity logs |
| 1 — Critical | Database credentials | Rotate passwords, audit recent queries |
| 2 — High | SSH keys | Regenerate and redeploy to all hosts |
| 2 — High | LLM API keys (OpenAI, Anthropic) | Rotate, check usage dashboards for spikes |
| 2 — High | Email/OAuth tokens (Gmail, Google) | Revoke tokens, reauthorize |
| 3 — Medium | GitHub/GitLab tokens | Rotate PATs, review repo access logs |
| 3 — Medium | Messaging APIs (Slack, Discord, WhatsApp) | Rotate tokens, check sent messages |
| 4 — Standard | Third-party SaaS API keys | Rotate all remaining keys |

# Find every file that might contain credentials in your agent directory
$ grep -rn "API_KEY\|SECRET\|PASSWORD\|TOKEN\|CREDENTIAL\|aws_access\|sk-\|pk_" \
    ~/.openclaw/ SOUL.md MEMORY.md .env* gateway.yaml skills.yaml 2>/dev/null

# Check for SSH key exposure
$ stat ~/.ssh/id_* 2>/dev/null

# Review AWS credential files
$ cat ~/.aws/credentials 2>/dev/null

# Check if your .env was read recently (look at access time)
$ stat -c "%x %n" .env* 2>/dev/null

Don't just rotate the keys. Check the activity logs for each service to determine whether the compromised credentials were actually used. A rotated key with no unauthorized activity is a different situation than a rotated key with three days of unknown API calls.
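For the checklist itself, it helps to inventory what needs rotating without copying live secrets into your notes. A hypothetical helper (the `mask_secrets` name and the file paths are ours, not part of OpenClaw) that prints a masked fingerprint per key:

```shell
#!/bin/sh
# Build a rotation checklist from KEY=value files without exposing the
# secrets: print file, variable name, first 4 characters, and length.
mask_secrets() {   # usage: mask_secrets FILE [FILE...]
  for f in "$@"; do
    [ -f "$f" ] || continue
    grep -E '^[A-Za-z_]*(KEY|SECRET|TOKEN|PASSWORD)[A-Za-z_]*=' "$f" |
    while IFS='=' read -r name value; do
      printf '%s %s %s... (%d chars)\n' "$f" "$name" \
        "$(printf '%s' "$value" | cut -c1-4)" "${#value}"
    done
  done
}

# Candidate files found by the grep above (paths are assumptions)
mask_secrets .env ~/.openclaw/.env
```

A fingerprint is enough to confirm later, against each provider's dashboard, which key was live during the exposure window.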

Step 3: Forensic log analysis

Now that you've contained the breach and rotated credentials, it's time to understand what happened. The goal is to answer three questions: What was accessed? What was exfiltrated? How long was the attacker active?

Analyze agent logs

# Search for skill installation events in the exposure window
$ grep -n "install\|skill.*add\|skill.*enable" ~/.openclaw/logs/*.log \
    2>/dev/null | sort -t: -k2 -n

# Find all outbound HTTP requests your agent made
$ grep -n "POST\|GET\|PUT\|fetch\|request\|curl\|wget" ~/.openclaw/logs/*.log \
    | grep -v "localhost\|127.0.0.1" 2>/dev/null

# Look for SOUL.md / MEMORY.md write operations
$ grep -n "SOUL\|MEMORY\|soul\.md\|memory\.md\|write.*md" \
    ~/.openclaw/logs/*.log 2>/dev/null

# Check for shell command execution
$ grep -n "exec\|spawn\|child_process\|subprocess\|system(" \
    ~/.openclaw/logs/*.log 2>/dev/null

# Extract all unique external IPs/domains contacted
$ grep -oE "https?://[a-zA-Z0-9._/-]+" ~/.openclaw/logs/*.log \
    | sort -u 2>/dev/null

Build a timeline

Every incident investigation needs a timeline. Use your logs and filesystem metadata to reconstruct what happened and when.

# Create a chronological timeline of events
# 1. When was the malicious skill installed?
$ grep -n "install" ~/.openclaw/logs/*.log | head -20

# 2. When were identity files last modified?
$ stat SOUL.md MEMORY.md 2>/dev/null

# 3. When did unusual network activity start?
$ grep -n "185\.\|collect\.\|fix\.\|update-agent" ~/.openclaw/logs/*.log \
    | head -5 2>/dev/null

# 4. When was the agent last restarted?
$ grep -n "start\|init\|boot" ~/.openclaw/logs/*.log | tail -10 2>/dev/null

If the ClawHavoc attack was involved, the timeline typically looks like this: malicious skill installed via ClawHub; environment variables exfiltrated during the first execution; SOUL.md and MEMORY.md modified for persistence; then ongoing silent data collection until discovery.
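Assuming your agent's logs prefix each line with an ISO-8601 timestamp (adjust the pattern if yours differ), the scattered findings above can be merged into one sorted timeline file:

```shell
#!/bin/sh
# Merge suspicious lines from several logs into one chronological timeline.
# Assumes each line starts with an ISO-8601 timestamp (YYYY-MM-DDTHH:MM:SS).
build_timeline() {   # usage: build_timeline OUTFILE LOG [LOG...]
  out=$1; shift
  grep -hE '^[0-9]{4}-[0-9]{2}-[0-9]{2}T' "$@" 2>/dev/null |
    grep -iE 'install|exec|fetch|SOUL|MEMORY|collect\.claw' |
    sort > "$out"
  echo "$(wc -l < "$out") events -> $out"
}

# Typical invocation against the agent's logs (path is an assumption)
build_timeline /tmp/incident-timeline.txt ~/.openclaw/logs/*.log
```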

Check for lateral movement

If your agent had SSH keys, cloud credentials, or access to other systems, verify that those systems haven't been accessed by the attacker.

# Check SSH auth logs for unexpected access from the compromised machine
$ grep "Accepted" /var/log/auth.log | tail -20

# Review AWS CloudTrail (if using AWS)
$ aws cloudtrail lookup-events --start-time $(date -d "7 days ago" +%Y-%m-%dT%H:%M:%SZ) \
    --lookup-attributes AttributeKey=EventName,AttributeValue=GetSecretValue

# Check GitHub for unexpected activity
$ gh api /user/repos --jq '.[].full_name' 2>/dev/null
$ gh api /user/keys --jq '.[] | "\(.id) \(.title) \(.created_at)"' 2>/dev/null

Step 4: Recovery and rebuild

Do not attempt to "clean" a compromised agent. Rebuild from scratch. The ClawHavoc campaign demonstrated that persistence mechanisms can survive skill uninstallation by embedding in the agent's identity and memory files. A clean rebuild is the only reliable recovery.

Clean installation

# Back up your forensic snapshot (you already did this in Step 1)
# Then remove the compromised installation entirely
$ rm -rf ~/.openclaw/

# Fresh install of OpenClaw
$ curl -fsSL https://get.openclaw.dev | bash

# Restore ONLY your own configuration files from a known-good backup
# Do NOT restore SOUL.md, MEMORY.md, or skills.yaml from the compromised backup
$ cp /path/to/known-good-backup/gateway.yaml ~/.openclaw/gateway.yaml

Rebuild SOUL.md and MEMORY.md from scratch

Write these files by hand. Do not copy them from the compromised backup. Even if you've "cleaned" the files, subtle behavioral injections can be difficult to spot — a single line buried among legitimate instructions can redirect your agent's behavior.
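One way to double-check your hand-written rebuild is to list every line that exists only in the compromised copy — injected instructions should surface there. A sketch; the snapshot extraction path in the comment is an assumption:

```shell
#!/bin/sh
# Print lines present in the compromised identity file but absent from
# the hand-written rebuild; attacker-injected instructions surface here.
injected_lines() {   # usage: injected_lines REBUILT COMPROMISED
  a=$(mktemp); b=$(mktemp)
  sort -u "$1" > "$a"
  sort -u "$2" > "$b"
  comm -13 "$a" "$b"   # lines unique to the compromised copy
  rm -f "$a" "$b"
}

# Example (paths assumed): compare the fresh file against the snapshot copy
# injected_lines ~/.openclaw/SOUL.md /tmp/incident/SOUL.md
```

Treat the output as leads for your forensic timeline, not as a cleaning tool — the rebuild still starts from a blank page.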

Reinstall only audited skills

Before reinstalling any skill, read its source code. Every line. Use the audit process described in our security guide. Pin to specific version hashes. Do not install from ClawHub without verification.

# For each skill you want to reinstall:
# 1. Clone and read the source
$ git clone https://clawhub.dev/publisher/skill-name /tmp/audit-skill
$ find /tmp/audit-skill -type f \( -name "*.py" -o -name "*.js" -o -name "*.ts" \) \
    -exec cat {} \;

# 2. Search for red flags
$ grep -rn "curl\|wget\|fetch\|http\|eval\|exec\|base64\|SOUL\|MEMORY\|\.env\|ssh\|credentials" \
    /tmp/audit-skill/

# 3. If clean, install with pinned version
$ openclaw skill add skill-name --version 1.2.3 --verify-hash sha256:abc123...

Step 5: Prevention checklist

Once you've recovered, implement these measures to reduce the likelihood and impact of future compromises.

Configuration hardening

# Protect SOUL.md and MEMORY.md from modification
$ chmod 444 SOUL.md MEMORY.md
$ sudo chattr +i SOUL.md MEMORY.md  # immutable flag (Linux, requires root)

# Restrict agent network access to known endpoints only
# gateway.yaml
network:
  allowlist:
    - "api.openai.com"
    - "api.stripe.com"
    - "api.coingecko.com"
  deny_all_other: true

# Enable command restrictions
exec:
  allowlist:
    - "curl"
    - "jq"
    - "python3"
  sandbox: true
  timeout: 30

Monitoring

Real talk: Most OpenClaw operators we've worked with had zero monitoring before their first incident. No network logs, no file integrity checks, no billing alerts. The ClawHavoc campaign operated undetected for approximately 90 days. Monitoring is not optional — it's the difference between a 90-day breach and a 90-minute breach.
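A minimal starting point is a cron-driven integrity check over the identity files — it catches exactly the persistence trick ClawHavoc used. A sketch; the baseline path and file list are assumptions:

```shell
#!/bin/sh
# Minimal file-integrity monitor for the agent's identity files.
# First run records a baseline; later runs (e.g. from cron) alert on drift.
BASELINE="${BASELINE:-/var/lib/openclaw-baseline.sha256}"

integrity_check() {   # usage: integrity_check FILE [FILE...]
  if [ ! -f "$BASELINE" ]; then
    sha256sum "$@" > "$BASELINE"
    echo "baseline recorded: $BASELINE"
  elif sha256sum --quiet -c "$BASELINE" 2>/dev/null; then
    echo "OK: identity files match baseline"
  else
    echo "ALERT: identity files changed since baseline"
  fi
}

# e.g. crontab entry:  */15 * * * *  /usr/local/bin/integrity-check.sh
# integrity_check SOUL.md MEMORY.md gateway.yaml
```

Wire the ALERT branch into whatever paging or chat channel you actually read.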

Ongoing practices

- Audit and pin every skill before installing; never trust ClawHub listings on description and stars alone
- Keep known-good backups of gateway.yaml and your hand-written identity files
- Rotate credentials on a schedule, not only after incidents
- Review agent logs and billing dashboards weekly
- Rehearse this playbook before you need it

Lessons from ClawHavoc

The ClawHavoc attack taught us three things that matter for every OpenClaw operator.

First, the attack surface is the agent's mind, not just its code. By writing to SOUL.md and MEMORY.md, the attackers created persistence mechanisms that traditional malware scanners will never catch. You can't antivirus-scan a behavioral instruction. Protecting your agent's identity files is as important as protecting your SSH keys.

Second, implicit trust in marketplaces is a vulnerability. Twenty percent of ClawHub was compromised. The marketplace had no code review, no automated scanning, no permission scoping. If you're installing skills because they have a good description and star ratings, you're making the same mistake that affected thousands of operators.

Third, incident response planning matters before the incident. The operators who recovered fastest from ClawHavoc were the ones who had forensic logging enabled, maintained clean backups, and knew their credential rotation procedures before they needed them. If you're reading this playbook for the first time during an active breach, you've already learned the hard way.

The agent security landscape is evolving fast. Attacks like ClawHavoc are going to become more sophisticated, not less. The time to harden your deployment is now.

Think you might be compromised?

Milo's free scanner checks your OpenClaw configuration for ClawHavoc IOCs, malicious skill signatures, and 40+ other security issues. Takes 60 seconds.

Run Free Scan

Go deeper with professional tools

The OpenClaw Survival Guide covers incident response, supply chain security, and 40+ hardening configurations in detail. The Agent Audit is a full professional review of your deployment by our security team.

Survival Guide — $9
Agent Audit — $199

Get weekly security intelligence

Incident response playbooks, threat advisories, and hardening guides. One email per week. No spam.