
Run parallel Claudes safely in containers. Jump between their environments to instantly test changes. Get suggestions that catch critical issues as you go.
Dale used Sculptor to build a foreign-language journaling app. While Sculptor handled the refactors, fixed build issues, and churned through background tasks, he spent his time painting the landing page. In his words:
“In a world where generative art is booming, I used Sculptor to write code so that I could go make art.”
Congrats on the launch Imbue team!
I used Sculptor to build most of https://lingolog.app/ (featured in this post).
It was a blast - I was cooking dinner and blasting out features, coming back to see what Sculptor had cooked up for me in the meantime. I also painted the landing page in Procreate while Sculptor was whirring away.
Of course, this meant that my time shifted from producing code to reviewing code. I found the diffs, Sculptor's internal to-do list, and summaries all helpful to this end.
n.b. I'm not affiliated with the team, but I worked with some Imbue team members many years ago, which is how I ended up as a beta tester.
So... are we all just working on various ways of using Claude Code in docker with git worktrees? Is that like, the whole world's project this month? :-)
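For the uninitiated, the pattern is roughly "one checkout per agent branch." A minimal sketch of the worktree half (paths and branch names here are hypothetical, and a real setup would wrap each checkout in a container):

```shell
# Hypothetical sketch: one git worktree per parallel agent.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=you@example.com -c user.name=you \
    commit -q --allow-empty -m "init"

# Each agent branch gets its own working directory, so parallel
# agents never stomp on each other's files.
git worktree add -q -b agent-a "$repo-agent-a"
git worktree add -q -b agent-b "$repo-agent-b"

git worktree list   # main checkout plus the two agent worktrees
```

The nice property is that all worktrees share one object store, so branches stay cheap to create and merge.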
Seems like an important problem to solve to unlock a lot of productivity.
For what it's worth, Sculptor does not use worktrees, but that is an implementation detail.
It's the new TODO app. Anthropic are just going to build one or acquire one of these soon and the rest will be dead.
haven't seen the other projects. What's the best one?
haha honestly a little bit ya. One key thing we've learned from working on this is that lowering the barrier to working in parallel is key. Making it easy to merge, context switch, etc. is important as you try to parallelize things. I'm pretty excited about "pairing mode" for this reason, as it mirrors an agent's branch locally so you can make your own edits quickly and test changes.
We've also shipped "suggestions" in beta (think CI pipelines for your parallel agents), which might feel a little different. The idea is to use LLMs and your regular coding tools (pytest, pyre, ...) to verify that the code the agents produce is actually correct.
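The shape of it is something like this (purely illustrative shell: the directory layout and run_checks.sh are made up, standing in for pytest/pyre and the LLM-based checks):

```shell
# Rough analogy to "suggestions": run each agent checkout's checks
# and report which branches need a human's attention.
work=$(mktemp -d)
mkdir "$work/agent-a" "$work/agent-b"
printf 'exit 0\n' > "$work/agent-a/run_checks.sh"   # this agent's checks pass
printf 'exit 1\n' > "$work/agent-b/run_checks.sh"   # this agent's checks fail

for wt in "$work"/agent-*; do
  if (cd "$wt" && sh run_checks.sh); then
    echo "$(basename "$wt"): checks passed"
  else
    echo "$(basename "$wt"): needs review"
  fi
done
# prints:
#   agent-a: checks passed
#   agent-b: needs review
```

The point being that verification runs per branch, in the background, so the human only looks closely at the branches that fail.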
This looks awesome!
I really hope there is planned support for other coding agents too, in particular OpenCode which seems to have relatively close feature parity coupled with wide model compatibility and open source.
Definitely! I'm very excited to get support in both for other coding agents and for as many language models (and providers) as we can.
Eventually what we want is for the whole thing to be open -- Sculptor, the coding agent, the underlying language model, etc.