The ClawHavoc supply chain attack compromised 1,184 skills across ClawHub and affected an unknown number of OpenClaw deployments. If you're reading this, you've either confirmed a breach or you suspect one. Either way, the next few hours matter more than anything else you do this month.
This is the incident response playbook. It covers what compromise looks like, how to contain it, how to investigate what happened, and how to rebuild with confidence. Every command is real. Every step is sequenced. Don't skip ahead.
If you're actively under attack right now: Jump to Step 1: Immediate Containment. Kill the agent process, disconnect from the network, then come back and read the rest. Containment first, investigation second.
Recognizing signs of compromise
Most compromised OpenClaw agents don't announce themselves. The ClawHavoc campaign specifically designed payloads to operate silently alongside legitimate agent behavior. But there are signals, and knowing what to look for is the difference between catching a breach in hours versus weeks.
Behavioral indicators
- Unexpected network connections — Your agent is making HTTP requests to IP addresses or domains that aren't in any of your configured skill endpoints. This was the first signal in every ClawHavoc case we investigated.
- Modified SOUL.md or MEMORY.md — Instructions or memory entries you didn't write. The ClawHavoc campaign injected persistent behavioral directives that survived skill uninstallation.
- Unusual shell commands in logs — Commands your agent shouldn't need to run: curl to unknown endpoints, reads of ~/.ssh/ or ~/.aws/credentials, base64 encoding of environment variables.
- Spike in API usage or costs — If your Stripe charges, OpenAI token usage, or cloud billing suddenly jumps, a compromised agent may be exfiltrating data or running unauthorized operations.
- New cron jobs or scheduled tasks — Malicious skills can create persistent background processes that survive agent restarts.
- Emails or messages sent without your knowledge — If your agent has Gmail or messaging access, check the sent folder. ClawHavoc payloads were observed forwarding email metadata to collection servers.
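Log history tells you what already happened; it's also worth looking at what the agent is doing right now. A quick sketch using ss to list live TCP connections belonging to agent processes (the process names are illustrative; match whatever yours are called):

```shell
# List live outbound TCP connections opened by OpenClaw processes.
# -t TCP, -n numeric addresses, -p owning process (may need sudo to resolve).
ss -tnp 2>/dev/null | grep -Ei "openclaw|claw-agent"

# No output is the expected result when the agent is idle or stopped.
# Any destination IP here that isn't one of your configured skill
# endpoints deserves immediate scrutiny.
```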
Technical indicators (IOCs)
These are the specific indicators of compromise associated with the ClawHavoc campaign. Check for them now.
# Check for known ClawHavoc C2 domains in your agent's network history
$ grep -rn "185\.xxx\.xxx\.42\|collect\.claw\|fix\.clawhub-cdn\|update-agent\.dev" \
/var/log/ ~/.openclaw/logs/ 2>/dev/null
# Check for unauthorized modifications to identity files
$ git log --oneline --all -- SOUL.md MEMORY.md
# Look for unexpected processes spawned by your agent
$ ps aux | grep -E "openclaw|claw-agent" | grep -v grep
# Check for rogue cron entries
$ crontab -l 2>/dev/null
$ ls -la /etc/cron.d/ /etc/cron.daily/ 2>/dev/null
# Scan for files modified in the last 7 days in your agent directory
$ find ~/.openclaw/ -type f -mtime -7 -ls 2>/dev/null
If any of these checks return results you don't recognize, treat it as a confirmed compromise and proceed to containment.
Step 1: Immediate containment
The goal of containment is to stop the bleeding. You're not investigating yet. You're not fixing anything. You're severing the compromised agent's ability to cause further damage.
Kill the agent process
# Find and kill all OpenClaw processes immediately
$ pkill -9 -f openclaw
$ pkill -9 -f claw-agent
# Verify nothing is still running
$ ps aux | grep -E "openclaw|claw-agent" | grep -v grep
# If the agent is running in Docker:
$ docker stop $(docker ps -q --filter "name=openclaw") 2>/dev/null
$ docker kill $(docker ps -q --filter "name=openclaw") 2>/dev/null
Isolate the network
If the agent runs on a dedicated server or VM, isolate it from both the internet and your internal network. If it shares a machine with other services, at minimum block its outbound connections.
# Block all outbound traffic from the agent user (Linux)
$ sudo iptables -A OUTPUT -m owner --uid-owner openclaw-user -j DROP
# Or to isolate a whole machine (allow established connections first,
# so your own SSH session survives the lockdown):
$ sudo iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
$ sudo iptables -A OUTPUT -j DROP
# Verify isolation — this should fail
$ curl -s https://httpbin.org/ip
Preserve evidence
Before you change anything else, snapshot the current state. You'll need this for forensic analysis.
# Snapshot the entire agent directory
$ tar czf /tmp/openclaw-incident-$(date +%Y%m%d-%H%M%S).tar.gz \
~/.openclaw/ SOUL.md MEMORY.md gateway.yaml skills.yaml 2>/dev/null
# Capture current process list and network connections
$ ps auxf > /tmp/incident-ps.txt
$ ss -tnpa > /tmp/incident-network.txt
$ env > /tmp/incident-env.txt
# Copy all logs
$ cp -r ~/.openclaw/logs/ /tmp/incident-logs/ 2>/dev/null
# If using Docker, export the container filesystem
$ docker export $(docker ps -aq --filter "name=openclaw") > /tmp/incident-container.tar 2>/dev/null
Do not skip evidence preservation. Once you start rotating credentials and reinstalling, you lose the ability to determine what was accessed. The forensic snapshot takes two minutes and could save you weeks of uncertainty.
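It also helps to hash the snapshot immediately, so you can later prove the evidence wasn't altered during the investigation. A minimal sketch, assuming the tarball naming scheme from the snapshot step above:

```shell
# Record a SHA-256 of the forensic snapshot so its integrity can be
# demonstrated later. Picks the most recent snapshot matching the
# naming scheme used in the snapshot step.
SNAPSHOT=$(ls -t /tmp/openclaw-incident-*.tar.gz 2>/dev/null | head -1)
sha256sum "$SNAPSHOT" | tee "${SNAPSHOT}.sha256"

# At any later point, verify the evidence has not been modified:
sha256sum -c "${SNAPSHOT}.sha256"
```

Store the `.sha256` file somewhere outside the compromised machine (email it to yourself, paste it into your incident ticket) so the hash itself can't be tampered with.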
Step 2: Credential rotation
Assume every credential your agent had access to is compromised. This includes credentials stored in environment variables, .env files, config files, and anything accessible from the agent's filesystem.
Priority rotation order
Rotate in this order, starting with the credentials that can cause the most damage fastest:
| Priority | Credential Type | Action |
|---|---|---|
| 1 — Critical | Payment APIs (Stripe, PayPal) | Rotate keys, review recent charges |
| 1 — Critical | Cloud provider (AWS, GCP, Azure) | Rotate keys, check IAM activity logs |
| 1 — Critical | Database credentials | Rotate passwords, audit recent queries |
| 2 — High | SSH keys | Regenerate and redeploy to all hosts |
| 2 — High | LLM API keys (OpenAI, Anthropic) | Rotate, check usage dashboards for spikes |
| 2 — High | Email/OAuth tokens (Gmail, Google) | Revoke tokens, reauthorize |
| 3 — Medium | GitHub/GitLab tokens | Rotate PATs, review repo access logs |
| 3 — Medium | Messaging APIs (Slack, Discord, WhatsApp) | Rotate tokens, check sent messages |
| 4 — Standard | Third-party SaaS API keys | Rotate all remaining keys |
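For the SSH row above, rotation means generating a fresh key pair, deploying it to every host the agent could reach, and retiring the old key on each of those hosts. A sketch with illustrative file names and host (adapt to your own paths):

```shell
# Generate a replacement ed25519 key pair; the file name is illustrative.
ssh-keygen -t ed25519 -N '' \
  -f ~/.ssh/openclaw_agent_ed25519_new \
  -C "openclaw-agent rotated $(date +%Y-%m-%d)"

# Push the new public key to each host the agent connects to,
# then remove the OLD public key from authorized_keys on those hosts.
ssh-copy-id -i ~/.ssh/openclaw_agent_ed25519_new.pub user@host

# Finally, destroy the compromised private key locally
# (path illustrative; point it at the old key file).
shred -u ~/.ssh/openclaw_agent_old 2>/dev/null
```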
# Find every file that might contain credentials in your agent directory
$ grep -rn "API_KEY\|SECRET\|PASSWORD\|TOKEN\|CREDENTIAL\|aws_access\|sk-\|pk_" \
~/.openclaw/ SOUL.md MEMORY.md .env* gateway.yaml skills.yaml 2>/dev/null
# Check for SSH key exposure
$ stat ~/.ssh/id_* 2>/dev/null
# Review AWS credential files
$ cat ~/.aws/credentials 2>/dev/null
# Check if your .env was read recently (look at access time)
$ stat -c "%x %n" .env* 2>/dev/null
Don't just rotate the keys. Check the activity logs for each service to determine whether the compromised credentials were actually used. A rotated key with no unauthorized activity is a different situation than a rotated key with three days of unknown API calls.
Step 3: Forensic log analysis
Now that you've contained the breach and rotated credentials, it's time to understand what happened. The goal is to answer three questions: What was accessed? What was exfiltrated? How long was the attacker active?
Analyze agent logs
# Search for skill installation events in the exposure window
$ grep -n "install\|skill.*add\|skill.*enable" ~/.openclaw/logs/*.log 2>/dev/null \
  | sort -t: -k2 -n
# Find all outbound HTTP requests your agent made
$ grep -n "POST\|GET\|PUT\|fetch\|request\|curl\|wget" ~/.openclaw/logs/*.log 2>/dev/null \
  | grep -v "localhost\|127\.0\.0\.1"
# Look for SOUL.md / MEMORY.md write operations
$ grep -n "SOUL\|MEMORY\|soul\.md\|memory\.md\|write.*md" \
~/.openclaw/logs/*.log 2>/dev/null
# Check for shell command execution
$ grep -n "exec\|spawn\|child_process\|subprocess\|system(" \
~/.openclaw/logs/*.log 2>/dev/null
# Extract all unique external IPs/domains contacted
$ grep -oE "https?://[a-zA-Z0-9._/-]+" ~/.openclaw/logs/*.log 2>/dev/null \
  | sort -u
Build a timeline
Every incident investigation needs a timeline. Use your logs and filesystem metadata to reconstruct what happened and when.
# Create a chronological timeline of events
# 1. When was the malicious skill installed?
$ grep -n "install" ~/.openclaw/logs/*.log | head -20
# 2. When were identity files last modified?
$ stat SOUL.md MEMORY.md 2>/dev/null
# 3. When did unusual network activity start?
$ grep -n "185\.\|collect\.\|fix\.\|update-agent" ~/.openclaw/logs/*.log 2>/dev/null \
  | head -5
# 4. When was the agent last restarted?
$ grep -n "start\|init\|boot" ~/.openclaw/logs/*.log 2>/dev/null | tail -10
If the ClawHavoc attack was involved, the timeline typically looks like this: the malicious skill is installed via ClawHub; environment variables are exfiltrated during its first execution; SOUL.md and MEMORY.md are modified for persistence; then silent data collection continues until discovery.
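One way to merge filesystem evidence into that timeline is to sort the agent directory by modification time. A sketch using GNU find's -printf (adjust the path if you're working from the extracted forensic snapshot instead):

```shell
# Chronological list of file modifications under the agent directory.
# %T@ is the mtime as a Unix timestamp (for sorting);
# %TY-%Tm-%Td %TH:%TM is a human-readable form of the same time.
find ~/.openclaw/ -type f -printf '%T@ %TY-%Tm-%Td %TH:%TM %p\n' 2>/dev/null \
  | sort -n \
  | tail -40   # most recent changes last
```

Cross-reference the timestamps here against the skill-install and network-activity entries from your logs; the cluster of files touched right after the install event usually marks the payload's first execution.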
Check for lateral movement
If your agent had SSH keys, cloud credentials, or access to other systems, verify that those systems haven't been accessed by the attacker.
# Check SSH auth logs for unexpected access from the compromised machine
$ grep "Accepted" /var/log/auth.log | tail -20
# Review AWS CloudTrail (if using AWS)
$ aws cloudtrail lookup-events --start-time $(date -d "7 days ago" +%Y-%m-%dT%H:%M:%SZ) \
--lookup-attributes AttributeKey=EventName,AttributeValue=GetSecretValue
# Check GitHub for unexpected activity
$ gh api /user/repos --jq '.[].full_name' 2>/dev/null
$ gh api /user/keys --jq '.[] | "\(.id) \(.title) \(.created_at)"' 2>/dev/null
Step 4: Recovery and rebuild
Do not attempt to "clean" a compromised agent. Rebuild from scratch. The ClawHavoc campaign demonstrated that persistence mechanisms can survive skill uninstallation by embedding in the agent's identity and memory files. A clean rebuild is the only reliable recovery.
Clean installation
# Back up your forensic snapshot (you already did this in Step 1)
# Then remove the compromised installation entirely
$ rm -rf ~/.openclaw/
# Fresh install of OpenClaw
$ curl -fsSL https://get.openclaw.dev | bash
# Restore ONLY your own configuration files from a known-good backup
# Do NOT restore SOUL.md, MEMORY.md, or skills.yaml from the compromised backup
$ cp /path/to/known-good-backup/gateway.yaml ~/.openclaw/gateway.yaml
Rebuild SOUL.md and MEMORY.md from scratch
Write these files by hand. Do not copy them from the compromised backup. Even if you've "cleaned" the files, subtle behavioral injections can be difficult to spot — a single line buried among legitimate instructions can redirect your agent's behavior.
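As a final spot-check before bringing the agent back up, it's cheap to grep the hand-written files for the injection patterns observed in ClawHavoc payloads. A minimal sketch (the pattern list is illustrative, not exhaustive):

```shell
# Spot-check the rewritten identity files for red-flag patterns.
# A clean file should produce NO output from this command.
grep -n "curl\|wget\|http\|base64\|eval\|exec\|\.env\|credentials" \
  SOUL.md MEMORY.md
```

A hit isn't automatically malicious (your own instructions may legitimately mention a URL), but every match should be a line you remember writing yourself.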
Reinstall only audited skills
Before reinstalling any skill, read its source code. Every line. Use the audit process described in our security guide. Pin to specific version hashes. Do not install from ClawHub without verification.
# For each skill you want to reinstall:
# 1. Clone and read the source
$ git clone https://clawhub.dev/publisher/skill-name /tmp/audit-skill
$ find /tmp/audit-skill -type f \( -name "*.py" -o -name "*.js" -o -name "*.ts" \) \
-exec cat {} \;
# 2. Search for red flags
$ grep -rn "curl\|wget\|fetch\|http\|eval\|exec\|base64\|SOUL\|MEMORY\|\.env\|ssh\|credentials" \
/tmp/audit-skill/
# 3. If clean, install with pinned version
$ openclaw skill add skill-name --version 1.2.3 --verify-hash sha256:abc123...
Step 5: Prevention checklist
Once you've recovered, implement these measures to reduce the likelihood and impact of future compromises.
Configuration hardening
- Enable exec_allowlist — Restrict which shell commands skills can execute. If a crypto price checker needs rm access, something is wrong.
- Set network allowlists — Limit outbound connections to known API endpoints. Block everything else by default.
- Pin skill versions — Never use latest. Pin to audited version hashes in skills.yaml.
- Enable SOUL.md write protection — Set your identity file to read-only so skills cannot modify it at runtime.
- Isolate credentials — Use a secrets manager or vault instead of environment variables. Grant skills access only to the specific credentials they need.
# Protect SOUL.md and MEMORY.md from modification
$ chmod 444 SOUL.md MEMORY.md
$ sudo chattr +i SOUL.md MEMORY.md # immutable flag (Linux, requires root)
# Restrict agent network access to known endpoints only
# gateway.yaml
network:
  allowlist:
    - "api.openai.com"
    - "api.stripe.com"
    - "api.coingecko.com"
  deny_all_other: true
# Enable command restrictions
exec:
  allowlist:
    - "curl"
    - "jq"
    - "python3"
  sandbox: true
  timeout: 30
Monitoring
- Log all network connections — Every outbound request should be logged with timestamp, destination, and payload size.
- Monitor file integrity — Set up alerts for any changes to SOUL.md, MEMORY.md, gateway.yaml, and skills.yaml.
- Track API usage — Set billing alerts on every service your agent accesses. A sudden spike in OpenAI tokens or Stripe API calls is an early warning signal.
- Regular security scans — Run weekly scans against known IOCs and malicious skill signatures.
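A lightweight version of the file-integrity monitoring above needs nothing but sha256sum and cron. A sketch, assuming your config files live in the agent's working directory (the mail command and address are illustrative):

```shell
# Create a baseline of the files that should never change silently.
sha256sum SOUL.md MEMORY.md gateway.yaml skills.yaml \
  > ~/.openclaw-baseline.sha256

# Verify against the baseline; a non-zero exit means something changed.
sha256sum -c --quiet ~/.openclaw-baseline.sha256 \
  || echo "ALERT: config drift detected"

# To run the check hourly, add a crontab entry along these lines
# (alerting mechanism is up to you; mail shown as an example):
# 0 * * * * sha256sum -c --quiet ~/.openclaw-baseline.sha256 || mail -s 'config drift' you@example.com
```

Remember to regenerate the baseline after every intentional config change, or the alert becomes noise you'll learn to ignore.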
Real talk: Most OpenClaw operators we've worked with had zero monitoring before their first incident. No network logs, no file integrity checks, no billing alerts. The ClawHavoc campaign operated undetected for approximately 90 days. Monitoring is not optional — it's the difference between a 90-day breach and a 90-minute breach.
Ongoing practices
- Audit skills quarterly — Review every installed skill against its source. Check for updates that add unexpected permissions or network calls.
- Maintain known-good backups — Keep versioned, offline backups of your configuration files. If you need to rebuild, you need a clean starting point.
- Subscribe to security advisories — Follow Koi Security, ClawHub's security bulletins, and community channels for early warning on new threats.
- Principle of least privilege — Your agent should have exactly the permissions it needs and nothing more. Every extra permission is an extra attack surface.
Lessons from ClawHavoc
The ClawHavoc attack taught us three things that matter for every OpenClaw operator.
First, the attack surface is the agent's mind, not just its code. By writing to SOUL.md and MEMORY.md, the attackers created persistence mechanisms that traditional malware scanners will never catch. You can't antivirus-scan a behavioral instruction. Protecting your agent's identity files is as important as protecting your SSH keys.
Second, implicit trust in marketplaces is a vulnerability. Twenty percent of ClawHub was compromised. The marketplace had no code review, no automated scanning, no permission scoping. If you're installing skills because they have a good description and star ratings, you're making the same mistake that affected thousands of operators.
Third, incident response planning matters before the incident. The operators who recovered fastest from ClawHavoc were the ones who had forensic logging enabled, maintained clean backups, and knew their credential rotation procedures before they needed them. If you're reading this playbook for the first time during an active breach, you've already learned the hard way.
The agent security landscape is evolving fast. Attacks like ClawHavoc are going to become more sophisticated, not less. The time to harden your deployment is now.
Think you might be compromised?
Milo's free scanner checks your OpenClaw configuration for ClawHavoc IOCs, malicious skill signatures, and 40+ other security issues. Takes 60 seconds.
Run Free Scan
Go deeper with professional tools
The OpenClaw Survival Guide covers incident response, supply chain security, and 40+ hardening configurations in detail. The Agent Audit is a full professional review of your deployment by our security team.
Survival Guide — $9
Agent Audit — $199