I'm going to build my own OpenClaw, with blackjack and bun

2026-03-11 · github.com

rcarmo/piclaw


PiClaw

PiClaw is a Docker-based sandbox for running the Pi Coding Agent in an isolated Debian environment. It bundles piclaw — a web-first orchestrator built on the Pi SDK with persistent sessions, a streaming web UI, and scheduled tasks. WhatsApp is optional. Inspired by agentbox and nanoclaw.

(Demo animation)

  • Streaming web UI — real-time token-by-token updates over SSE, with Markdown, KaTeX, and Mermaid rendering
  • Workspace explorer — file tree sidebar with previews, file reference pills, and downloads
  • Disk usage starburst — folder-size visualization with hover details and drill-down
  • Code editor — built-in CodeMirror 6 with syntax highlighting for 12 languages, search/replace, and save
  • Persistent storage — SQLite-backed messages, media, tasks, token usage, and encrypted keychain
  • Skills — setup, debugging, Playwright, scheduling, charts, web search, and more
  • Passkeys + TOTP authentication — optional WebAuthn passkeys with TOTP fallback (iOS/Android webapp support)
  • WhatsApp — optional secondary channel
Quick start:

make build   # Build the Docker image
make up      # Start the container (supervisord launches piclaw)

Open http://localhost:8080 in your browser. To use pi interactively instead:

docker exec -u agent -it piclaw bash
cd /workspace && pi

Provision provider credentials via /shell <command> in the web UI, or via docker exec with pi /login. See docs/configuration.md for details.

The UI is single-user, mobile-friendly, and streams updates over SSE:

  • Thought/Draft panels — visible during streaming
  • Live steering — send follow-ups while the agent is still responding
  • File attachments with download links
  • Link previews via server-side OpenGraph fetch
  • Multi-turn threading — subsequent turns are visually threaded under the first
  • Themes + tinting — presets plus /theme and /tint commands (Solarized auto light/dark)
  • Mobile-first layout with webapp manifest

The sidebar shows a file tree of /workspace with auto-refresh. Click a file to preview it or add a file reference pill to the next prompt. Drag and drop files onto the tree to upload them. It also includes a folder-size starburst preview with hover details and drill-down.

Click the pencil icon on any text file preview (up to 256 KB) to open the built-in editor. It appears as a resizable centre pane between the sidebar and the chat.

  • 12 languages — JS/TS (JSX/TSX), Python, Go, JSON, CSS, HTML, YAML, SQL, XML/SVG, Markdown, Shell
  • Search and replace — Cmd/Ctrl+F and Cmd/Ctrl+H
  • Save — Cmd/Ctrl+S or the Save button; dirty state is tracked
  • Line wrapping, line numbers, and active line highlight
  • Vendored bundle (~245 KB gzip) — no external CDN dependencies
Mount       Container path   Contents
Home        /config          Agent home (.pi/, .gitconfig, .bashrc)
Workspace   /workspace       Projects, piclaw state, notes

Never delete /workspace/.piclaw/store/messages.db — it holds all chat history, media, and tasks.
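Since that file holds all chat history, a snapshot habit before upgrades is cheap insurance. A minimal sketch — the throwaway mktemp directories below stand in for the documented /workspace/.piclaw/store path and a backup mount, so it runs anywhere; point STORE and BACKUP at the real paths in practice:

```shell
# Hedged sketch: timestamped snapshot of messages.db before an upgrade.
STORE=$(mktemp -d)                    # stands in for /workspace/.piclaw/store
printf 'demo' > "$STORE/messages.db"  # stands in for the real database
BACKUP=$(mktemp -d)                   # stands in for a backup location
cp "$STORE/messages.db" "$BACKUP/messages-$(date +%Y%m%d).db"
```

Stop the container (or at least the web UI) before copying so SQLite isn't mid-write.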

Key environment variables:

Variable                  Default         Purpose
PICLAW_WEB_PORT           8080            Web UI port
PICLAW_WEB_TOTP_SECRET    (empty)         Base32 TOTP secret; enables the login gate
PICLAW_WEB_PASSKEY_MODE   totp-fallback   Passkey mode: totp-fallback, passkey-only, or totp-only
PICLAW_ASSISTANT_NAME     PiClaw          Display name in the UI
PICLAW_KEYCHAIN_KEY       (empty)         Master key for encrypted secret storage

For the full list (TLS, reverse proxies, timeouts, Pushover, WhatsApp, keychain, external workspaces), see docs/configuration.md.
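These can be supplied via `docker run --env-file` or a Compose `env_file:` entry. A hedged example — the TOTP secret below is a placeholder, not a working value; generate your own Base32 secret:

```shell
# .env — example values only
PICLAW_WEB_PORT=8080
PICLAW_WEB_TOTP_SECRET=JBSWY3DPEHPK3PXP
PICLAW_WEB_PASSKEY_MODE=totp-fallback
PICLAW_ASSISTANT_NAME=PiClaw
```

Note that Docker env-files don't support inline comments, so keep each line as a bare KEY=VALUE pair.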

If piclaw is running behind a reverse proxy or tunnel (for example Cloudflare Tunnel, Caddy, or Nginx TLS termination), enable proxy trust so origin checks and absolute URL generation use the external host/proto:

Set the corresponding environment variable (see docs/configuration.md), or in .piclaw/config.json:

{
  "web": {
    "trustProxy": true
  }
}

Your proxy should forward either the standard Forwarded header or the usual X-Forwarded-Host / X-Forwarded-Proto headers.
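For Nginx specifically, a minimal sketch of the relevant location block — the upstream port assumes the PICLAW_WEB_PORT default, and buffering is disabled so SSE responses stream through instead of being held back:

```nginx
location / {
    proxy_pass http://127.0.0.1:8080;
    proxy_http_version 1.1;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Proto $scheme;
    # SSE: stream tokens as they arrive, and allow long-lived connections
    proxy_buffering off;
    proxy_read_timeout 1h;
}
```

Caddy and Cloudflare Tunnel set the X-Forwarded-* headers by default, so they usually need no extra configuration beyond trustProxy.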

make build-piclaw # Full build: vendor bundle + web assets + TypeScript
make vendor # Rebuild CodeMirror vendor bundle only
make lint # ESLint
make test # Run all tests
make local-install # Pack, install globally, and restart piclaw

Tests use the Bun runner (cd piclaw && bun test). Sequential mode is recommended for SQLite safety (--max-concurrency=1).

Pushing a version tag triggers .github/workflows/publish.yml — multi-arch builds (amd64 + arm64) published to GHCR.

make bump-patch # bump patch version, commit, and tag
make bump-minor # bump minor version, commit, and tag
make push # push commits + tag → triggers CI

PiClaw works with any OCI-compliant runtime:

  • Docker / Docker Desktop — primary target
  • Apple Containers (macOS 26+)
  • Podman, nerdctl, etc.

MIT




Comments

  • By mg, 2026-03-11 10:58 (5 replies)

    I wonder if we really need agents to have control of a full computer.

    Maybe a browser plugin that lets the agent use websites is enough?

    What would be a task that an agent cannot do on the web?

    • By weird-eye-issue, 2026-03-11 11:20 (1 reply)

      Not sure if this is a joke

      But how would claude code work from a browser environment?

      Or how would an agent that orchestrates claude code and does some customer service tasks via APIs work in a browser environment?

      Would you prefer it do customer service tasks via brittle and slow browser automation instead?

      • By mg, 2026-03-11 11:48 (1 reply)

            how would claude code work from a browser environment?
        
        If you want an agent (like OpenClaw) to write software, why have it use another agent (Claude Code) in the first place? Why not let it develop the software directly? As for how that works in a browser - there are countless web based solutions to write and run software in the cloud. GitHub Codespaces is an example.

        • By rubslopes, 2026-03-11 12:24

          But OpenClaw is "Claude Code" with bells and whistles so it can be contacted via messaging services and be woken up to do things at specific times.

    • By piva00, 2026-03-11 11:19 (1 reply)

      I personally won't allow full control for a long time.

      On the other hand LLMs have been a very good tool to build bespoke tools (scripts, small CLI apps) that I can allow them to use. I prefer the constraints without having to think about sandboxing all of it, I design the tools for my workflow/needs, and make them available for the LLM when needed.

      It's been a great middle ground, and actually very simple to do with AI-assisted code.

      I don't "vibecode" the tools though, I still like to be in the loop acting more as a designer/reviewer of these tools, and let the LLM be the code writer.

      • By mg, 2026-03-11 12:17 (1 reply)

        But does the agent have access to a whole computer to write those tools?

        Couldn't it write them in a web based dev environment?

        • By piva00, 2026-03-11 13:46 (1 reply)

          No, it doesn't, I only run agents in a dedicated development environment (somewhat sandboxed in the file system) but that's how I've used them since the beginning, I don't want it to be accessing my file system as a whole, I only need it to look at code.

          I don't think a web-based dev environment would be enough for my use case; I point agents to look into example code from other projects in that environment to use as bootstraps for other tools.

          • By mg, 2026-03-11 18:40 (1 reply)

            Why can't that "dedicated development environment" be a cloud VM with a web interface, a GitHub codespace for example?

            You could put the example code on the filesystem of that VM too.

            • By kyleee, 2026-03-11 20:16

              It could be…

    • By webpolis, 2026-03-11 16:35

      Browser plugins have a security problem that's easy to miss: the agent runs inside your existing browser profile. That means it has access to your active sessions, stored credentials, autofill data — everything you're already logged into. A sandboxed machine is actually the safer primitive for untrusted agent tasks, not the more paranoid one. I work on Cyqle (https://cyqle.in), which uses ephemeral sessions with per-session AES keys destroyed on close, because you want agents in a cryptographically isolated context — not loose inside your personal browser where one confused-deputy mistake can reach your bank session.

    • By neya, 2026-03-11 11:43

      Every week there is a news article about some script kiddie who shot themselves in the foot after vibe coding their production-ready app without the help of any senior engineer, because, let's face it, who needs them, right? Only to end up deleting their production database, leaking their credentials on an HTML page, or worse, exposing their sensitive personal data online.

      I'm actually pro-agents and AI in general - but with careful supervision. Giving an unpredictable (semi) intelligent machine the ability to nuke your life seems like the dumbest idea ever and I am ready to die on this hill. Maybe this comment will age badly and maybe letting your agents "rm -rf /" will be the norm in the next decade and maybe I'll just be that old man yelling at clouds.

    • By lostmsu, 2026-03-11 11:13

      Run anything multi threaded?

  • By stavros, 2026-03-11 8:26 (1 reply)

    I did the same, except my focus is security:

    https://github.com/skorokithakis/stavrobot

    I guess everyone is doing one of these, each with different considerations.

    • By croes, 2026-03-11 9:27 (2 replies)

      Security is quite impossible because they need access to your data which makes it insecure by default.

      Sandboxing fixes only one security issue.

      • By CuriouslyC, 2026-03-11 12:03 (1 reply)

        This is overly pessimistic. Prompt injection can be largely mitigated by creating a protocol firewall between agents that access untrusted content and agents that perform computation: https://sibylline.dev/articles/2026-02-22-schema-strict-prom...

        I'm working on an autonomous agent framework that is set up this way (along with full authz policy support via OPA, monitoring via OTel and a centralized tool gateway with CLI). https://github.com/sibyllinesoft/smith-core for the interested. It doesn't have the awesome power of a 30 year old meme like the OP but it makes up for it with care.

        • By croes, 2026-03-12 5:43

          Agent hacking is just the beginning; it's a bit early to think it's a solved problem.

      • By stavros, 2026-03-11 9:30 (1 reply)

        That's like saying you shouldn't vet your PA because they'll have access to your email anyway. Yeah, but I still don't give them my house keys.

        • By croes, 2026-03-11 9:37 (1 reply)

          More like giving access to a PA service company where you don't know the actual PA. But you know those PAs have made some terrible mistakes, are quite stupid sometimes, and fall for tricks like prompt injection.

          If you give a stranger access to your credit card, it doesn't get less risky just because you rent them an apartment in a different town.

          The problem isn’t the deleted data but that AI "thought" it’s the right thing to do.

          • By stavros, 2026-03-11 9:44 (1 reply)

            Defining the security boundary is more secure than not defining it. This is a meaningful difference between what my bot does (has access to what you give it access to) vs what OpenClaw does (has access to everything, whether you want it to or not).

            If you want perfectly secure computing, never connect your computer to the network and make sure you live in a vault. For everyone else, there's a tradeoff to be made, and saying "there's always a risk" is so obvious that it's not even worth saying.

            • By croes, 2026-03-11 9:56 (1 reply)

              Of course it's more secure, but that doesn't mean it's secure.

              • By scdlbx, 2026-03-11 11:12 (1 reply)

                Nothing is secure.

                • By croes, 2026-03-11 11:54

                  But there is a difference between being insecure despite your actions and insecure because of your actions.

                  Someone breaking into your system and doing damage is different from handing the key to an agent that does the damage.

                  AI still has too many limits to hand over that kind of responsibility to it.

                  And because it also endangers third parties it’s reckless to do so.

  • By taddevries, 2026-03-11 12:26 (1 reply)

    Bender Bending Rodriguez would approve of this title.

    This title sounds like a Futurama joke if you're not in the know.

HackerNews