How We Isolate Openclaw Containers for Maximum Security

Running Openclaw (formerly Clawdbot) directly on a host operating system is one of the most common mistakes we see in self-managed deployments. When the agent process shares the same kernel, filesystem, and network stack as the rest of your server, a single vulnerability in any dependency can give an attacker full access to your host. Your API keys, messaging tokens, SSH credentials, and every other service on that machine become exposed in one step.

Container isolation changes this equation fundamentally. By running Openclaw inside a Docker container with strict security policies, you create a boundary between the agent and your host system. The container gets its own process tree, its own filesystem view, its own network interface, and hard limits on the resources it can consume. Even if the agent process is compromised, the attacker is confined to a restricted sandbox with no path to the underlying server. This article covers exactly how we configure that sandbox on every professional deployment.

Why Containers Matter for AI Agents

Traditional web applications receive requests, process them, and return responses. AI agents like Openclaw are different. They run continuously, hold long-lived credentials, make autonomous decisions, and interact with external APIs on your behalf. This makes the blast radius of a compromise significantly larger than a typical web app.

Process isolation is the first line of defense. Docker uses Linux namespaces to give the container its own PID namespace, meaning processes inside the container cannot see or signal processes on the host. The agent thinks it is PID 1 in its own isolated world. It cannot list your host processes, attach to them, or read their memory.

Resource containment is equally important. An AI agent that enters a retry loop or encounters a prompt injection attack could consume all available CPU and memory on the host, taking down every other service running alongside it. With container-level resource limits, Docker enforces hard ceilings on CPU and memory usage. If the container exceeds them, it gets throttled or killed cleanly rather than starving the host.

Finally, containers provide reproducible environments. The exact same image runs in development, staging, and production. There are no surprises from mismatched library versions or missing system packages. When we deliver a hardened Openclaw container, you get an artifact that behaves identically on any Docker host.

Docker Compose Configuration for Openclaw

A production-grade docker-compose.yml for Openclaw should include security directives that most tutorials leave out. Here is the configuration we use as a baseline for every deployment:

version: "3.8"  # ignored by modern Compose (v2); retained for older tooling

services:
  openclaw:
    image: openclaw/openclaw:latest  # pin an exact version tag in production
    container_name: openclaw-agent
    restart: unless-stopped

    # Security options
    security_opt:
      - no-new-privileges:true
    read_only: true
    tmpfs:
      - /tmp:size=256M,noexec,nosuid
      - /run:size=64M,noexec,nosuid

    # Resource limits
    mem_limit: 2g
    memswap_limit: 2g
    cpus: 1.5
    pids_limit: 256

    # Drop all capabilities, add back only what is needed
    cap_drop:
      - ALL
    cap_add:
      - NET_BIND_SERVICE

    # Environment and secrets
    env_file:
      - .env
    volumes:
      - ./data:/app/data:rw
      - ./logs:/app/logs:rw

    # Network
    networks:
      - openclaw-net

networks:
  openclaw-net:
    driver: bridge

Let us walk through each directive:

security_opt: no-new-privileges:true prevents the container process from gaining additional privileges through setuid or setgid binaries. Even if an attacker places a setuid binary inside the container, the kernel will refuse to honor the privilege escalation.
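The option sets the kernel's per-process no_new_privs flag. You can observe the same flag outside Docker on a Linux host with util-linux installed (an assumption, though it ships with virtually every distribution) using setpriv:

```shell
# setpriv --no-new-privs sets the same kernel flag that Docker's
# no-new-privileges option sets, then execs the given command.
# The child reads the flag back from its own /proc entry; the
# output shows NoNewPrivs: 1.
setpriv --no-new-privs grep NoNewPrivs /proc/self/status
```

Once set, the flag is inherited by every descendant process and can never be cleared, which is why setuid binaries stop working under it.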

read_only: true mounts the container's root filesystem as read-only. The agent cannot write to its own binaries, configuration files, or system directories. This blocks an entire class of attacks where malware persists by modifying files inside the container.

tmpfs mounts provide writable scratch space for /tmp and /run that lives only in memory. The noexec flag prevents execution of any binaries written there, and nosuid ignores setuid bits. These mounts disappear when the container restarts.

mem_limit and memswap_limit set a hard ceiling of 2 GB for both memory and swap. Setting them to the same value effectively disables swap for the container, preventing the agent from using disk-backed memory that would degrade host performance.

cpus: 1.5 restricts the container to one and a half CPU cores. This keeps the agent from monopolizing the host processor during intensive operations like large context window processing.

pids_limit: 256 caps the number of processes the container can create. This prevents fork bomb attacks and limits the damage from runaway process spawning.

cap_drop: ALL removes every Linux capability from the container. By default, Docker containers get a subset of root capabilities. Dropping all of them and adding back only NET_BIND_SERVICE (needed if the agent listens on a port below 1024) follows the principle of least privilege.
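The effect is observable from inside the container: a process's current capabilities are exposed as a bitmask in /proc/self/status. The command below runs on any Linux system; the container-specific values in the comments are what the configuration above would produce:

```shell
# Print the effective capability bitmask of the current process.
# An unprivileged process shows 0000000000000000; with cap_drop: ALL
# plus cap_add: NET_BIND_SERVICE, only bit 10 (CAP_NET_BIND_SERVICE)
# would be set, giving 0000000000000400.
grep CapEff /proc/self/status
```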

User Namespace Remapping

Even with capabilities dropped, a process running as UID 0 (root) inside a container is technically root on the host kernel. User namespace remapping solves this by mapping the container's root user to an unprivileged user on the host.

To enable user namespace remapping, configure the Docker daemon by editing /etc/docker/daemon.json:

{
  "userns-remap": "openclaw-user"
}

Then create the subordinate UID and GID mappings:

# Create the remap user
sudo useradd -r -s /bin/false openclaw-user

# Configure subordinate ID ranges
echo "openclaw-user:100000:65536" | sudo tee -a /etc/subuid
echo "openclaw-user:100000:65536" | sudo tee -a /etc/subgid

# Restart Docker to apply
sudo systemctl restart docker

With this configuration, UID 0 inside the container maps to UID 100000 on the host, an unprivileged user with no special access. If an attacker breaks out of the container, they land as an unprivileged host user with no ability to read sensitive files or modify system configuration. One operational caveat: enabling remapping switches Docker to a separate storage directory under /var/lib/docker, so previously pulled images and existing containers will no longer be visible and must be re-pulled or recreated.
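The mapping itself is visible in /proc. Each process's uid_map lists, per line, the start of the range inside the namespace, the start of the corresponding host range, and the range length. This sketch runs on any Linux system; the remapped values in the comments follow from the /etc/subuid entry above:

```shell
# Inspect the UID mapping of the current process.
# Columns: <start-in-namespace> <start-on-host> <length>
# Without remapping: 0 0 4294967295 (the identity map).
# Inside a remapped Openclaw container it would read: 0 100000 65536,
# i.e. container UID u corresponds to host UID 100000 + u.
cat /proc/self/uid_map
```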

PID namespace isolation works alongside this. Each container gets its own PID namespace by default in Docker. Process 1 inside the container is not process 1 on the host. The container cannot see, signal, or trace any host processes. Combined with user namespace remapping, this creates two independent layers of isolation between the agent and your server.

Read-Only Filesystems

The read_only: true directive deserves its own discussion because it is one of the most effective hardening measures available, and one of the most commonly skipped.

An AI agent should not need to write to its own application directory. The code, dependencies, and configuration baked into the image should be immutable at runtime. If an attacker gains code execution inside the container, a read-only filesystem prevents them from modifying the agent's behavior, installing backdoors, or tampering with logs stored inside the container.

The challenge is that most applications need to write somewhere. Openclaw needs to write temporary files during operation, and it needs persistent storage for conversation logs and data. The solution is to provide exactly the writable paths the agent needs and nothing more:

# In docker-compose.yml
read_only: true
tmpfs:
  - /tmp:size=256M,noexec,nosuid
volumes:
  - ./data:/app/data:rw
  - ./logs:/app/logs:rw

The tmpfs mount at /tmp handles transient files. It exists only in memory, is capped at 256 MB, and is wiped on every container restart. The noexec flag means even if an attacker writes a binary to /tmp, the kernel will refuse to execute it.

The bind-mounted data and logs directories provide persistent writable storage on the host, scoped to exactly the directories the agent needs. On the host side, these directories should be owned by the remapped user and have permissions set to 700 so no other user can read them.
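A host-side preparation sketch (run here against a scratch directory so it is safe to try anywhere; in a real deployment, apply it to ./data and ./logs next to the compose file, then chown them to the remapped UID, e.g. 100000 with the subordinate range configured earlier):

```shell
# Create the writable directories and restrict them to their owner
base=$(mktemp -d)
mkdir -p "$base/data" "$base/logs"
chmod 700 "$base/data" "$base/logs"

# Confirm the mode: 700 means only the owning user can enter or list
stat -c '%a' "$base/data"   # -> 700
```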

Network Segmentation

By default, Docker containers can reach any address on the internet. For Openclaw, the agent only needs to communicate with a handful of endpoints: the Anthropic API, your messaging platform's webhook server, and optionally a database. Everything else should be blocked.

Start by creating a dedicated Docker network with no default internet access:

# Create an isolated network
docker network create \
  --driver bridge \
  --internal \
  openclaw-internal

The --internal flag prevents containers on this network from reaching the outside world, which makes it the right choice for backend-only links such as the agent's database connection. Note that an internal network has no external routing at all, so it cannot be selectively opened up with firewall rules. For the agent's outbound API traffic, keep the container on a regular bridge network (like openclaw-net above) and filter that traffic on the host using iptables rules in the DOCKER-USER chain:

# Block container traffic by default. Insert (-I) rather than append (-A):
# Docker pre-creates DOCKER-USER with a RETURN rule at the end, so an
# appended DROP would sit below it and never be evaluated.
sudo iptables -I DOCKER-USER -j DROP

# Allow return traffic for connections the container initiated
sudo iptables -I DOCKER-USER -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

# Allow DNS resolution
sudo iptables -I DOCKER-USER -p udp --dport 53 -j ACCEPT
sudo iptables -I DOCKER-USER -p tcp --dport 53 -j ACCEPT

# Allow HTTPS to the messaging platform (example range; verify current IPs)
sudo iptables -I DOCKER-USER -d 99.86.0.0/16 -p tcp --dport 443 -j ACCEPT

# Allow HTTPS to the Anthropic API (example range; API endpoints sit behind
# a CDN whose ranges change, so confirm them before deploying)
sudo iptables -I DOCKER-USER -d 104.18.0.0/16 -p tcp --dport 443 -j ACCEPT

For a more maintainable approach, use ipset to create named lists of allowed destinations that you can update without rewriting iptables rules:

# Create an IP set for allowed destinations
sudo ipset create openclaw-allowed hash:net

# Add the Anthropic API range (example; verify current ranges first)
sudo ipset add openclaw-allowed 104.18.0.0/16

# Reference the set in iptables
sudo iptables -I DOCKER-USER \
  -m set --match-set openclaw-allowed dst \
  -p tcp --dport 443 -j ACCEPT

This ensures that even if an attacker gains code execution inside the container, they cannot exfiltrate data to arbitrary external servers or establish reverse shells. The container can only talk to the services it is explicitly permitted to reach.

Container security configured for you

Our Business plan includes full container hardening: namespace remapping, read-only filesystems, network segmentation, resource limits, and capability dropping. Deployed and tested in under 24 hours.


Resource Limits and OOM Protection

Memory limits are your safety net against runaway processes. When you set mem_limit: 2g and memswap_limit: 2g in Docker Compose, the container is hard-capped at 2 GB of total memory. Setting both values equal disables swap for the container, which is important because swap-heavy containers degrade host I/O performance for every other service.

When a container hits its memory limit, the kernel's OOM (Out of Memory) killer acts within the container's cgroup: it terminates the process consuming the most memory inside the container, and host processes are untouched because the kill is scoped to the cgroup that exceeded its limit. Docker then handles the restart according to your restart policy. With restart: unless-stopped, the container comes back automatically after an OOM kill.

CPU limits work differently. The cpus: 1.5 directive uses CFS (Completely Fair Scheduler) bandwidth control to limit the container to 150% of one core's capacity. When the container exceeds this, it is not killed. Instead, the scheduler throttles it, introducing small pauses that slow processing without crashing the agent. This is generally the desired behavior for AI workloads that may have brief spikes during complex reasoning.
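The arithmetic behind the throttling is simple. Docker translates cpus: 1.5 into a CFS quota over the default 100 ms scheduling period; on a cgroup v2 host this lands in the container cgroup's cpu.max file. A sketch of the conversion:

```shell
# cpus: 1.5 -> CFS bandwidth quota over the default 100000 µs period.
# The container may consume 150000 µs of CPU time per 100000 µs window,
# which is what Docker writes to cpu.max ("150000 100000").
period_us=100000
cpus=1.5
quota_us=$(awk -v c="$cpus" -v p="$period_us" 'BEGIN { printf "%d", c * p }')
echo "$quota_us $period_us"   # -> 150000 100000
```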

You should also set pids_limit to prevent fork bombs. A value of 256 is generous enough for normal operation while blocking pathological process spawning. Watch docker stats during normal operation to confirm the limits are not too tight for your workload:

# Monitor resource usage in real time
docker stats openclaw-agent

# Check whether the container has been OOM-killed
docker inspect -f '{{.State.OOMKilled}}' openclaw-agent

Common Mistakes to Avoid

Running the container as root. Many Docker images default to running as the root user inside the container. Without user namespace remapping, this means a container breakout gives the attacker root access on the host. Always configure userns-remap or use the USER directive in your Dockerfile to run as a non-root user.
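If you control the image build, the USER directive is a one-line fix. A minimal sketch, assuming a Debian-based image (the user-creation syntax differs on Alpine) and reusing the image name from the compose file above; the app user and group names are illustrative:

```dockerfile
# Illustrative hardening layer: create an unprivileged user and make it
# the default for all subsequent build steps and at container runtime
FROM openclaw/openclaw:latest
RUN groupadd --system app && useradd --system --gid app --create-home app
USER app
```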

Mounting the Docker socket. Some monitoring tools and CI pipelines require mounting /var/run/docker.sock into a container. Never do this for Openclaw. Access to the Docker socket is equivalent to root access on the host because it allows creating new privileged containers, reading secrets from other containers, and manipulating the host network. There is no legitimate reason for an AI agent to need Docker socket access.

Skipping memory limits. Without mem_limit, a single container can consume all available host memory. When this happens, the host kernel's OOM killer may terminate critical system processes like SSH, leaving you locked out of your own server. Always set explicit memory limits, even if they are generous.

Using the --privileged flag. The --privileged flag disables nearly all container security features. It gives the container full access to all host devices, disables seccomp and AppArmor profiles, and grants every Linux capability. It exists for edge cases like running Docker-in-Docker during CI builds. It has no place in a production Openclaw deployment. If a tutorial tells you to use --privileged to fix a permissions issue, find the specific capability or device access you actually need instead.

Skip the Security Guesswork

Get Openclaw professionally installed and hardened on your infrastructure in under 24 hours. Plans from $2,449 (one-time).

