
stingraycharles

Karma: 11898
Created: 2009-07-13

Recent Activity

  • That is not normal. Small scripts should launch in milliseconds, not several seconds.

  • I don’t think running these commands in a Docker container is the standard way of doing this; I’ve seen “npx” et al. being used way more often.

    Furthermore, the “docker” part wouldn’t even be the most resource-wasteful piece if you consider the general computational costs of LLMs.

    The selling point of MCP servers is that they are composable and plug into any AI agent. A monolith doesn’t achieve that, unless I’m misunderstanding things.

    What I find annoying is that it’s very unpredictable when exactly an LLM will actually invoke an MCP tool function. Different LLM providers’ models behave differently, and even within the same provider different models behave differently.

    E.g. it’s surprisingly difficult to get an AI agent to actually use a language server to retrieve relevant information about source code, and it’s even more difficult to figure out a prompt for all language server functions that works reliably across all models.

    And I guess that’s because of the fuzzy nature of it all.

    I’m waiting to see how this all matures; I have the highest expectations of Anthropic with this. OpenAI seems to be doing their own thing (although ChatGPT supposedly will come with MCP support soon). Google’s models appear to be the most eager to actually invoke MCP functions, but they invoke them way too much, in turn wasting a lot of context / adding token noise.
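The composable tool-call flow described above can be sketched with plain JSON-RPC 2.0, the wire format MCP uses; the `get_weather` tool and its handler here are hypothetical stand-ins, not part of any real server, and a real server would speak this over stdio or HTTP rather than in-process:

```python
import json

# Hypothetical in-memory tool registry; real MCP servers expose the same
# shape (name, description, inputSchema) over tools/list.
TOOLS = {
    "get_weather": {
        "description": "Return the weather for a city.",
        "inputSchema": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
        # stand-in handler; a real tool would call an external API here
        "handler": lambda args: f"Sunny in {args['city']}",
    },
}

def handle(request: dict) -> dict:
    """Dispatch the two core MCP methods an agent relies on."""
    if request["method"] == "tools/list":
        result = {"tools": [
            {"name": name, "description": t["description"],
             "inputSchema": t["inputSchema"]}
            for name, t in TOOLS.items()
        ]}
    elif request["method"] == "tools/call":
        tool = TOOLS[request["params"]["name"]]
        text = tool["handler"](request["params"]["arguments"])
        result = {"content": [{"type": "text", "text": text}]}
    else:
        return {"jsonrpc": "2.0", "id": request["id"],
                "error": {"code": -32601, "message": "method not found"}}
    return {"jsonrpc": "2.0", "id": request["id"], "result": result}

reply = handle({"jsonrpc": "2.0", "id": 1, "method": "tools/call",
                "params": {"name": "get_weather",
                           "arguments": {"city": "Oslo"}}})
print(reply["result"]["content"][0]["text"])  # Sunny in Oslo
```

Because any agent that speaks this protocol can list and call the same tools, the server composes with different hosts without modification, which is the selling point mentioned above.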

  • Yeah. I would like multiple agents because each can be primed with a different system prompt and a “clean” context. This has been proven to work, e.g. with Aider’s “architect” vs “editor” models / agents working together.

    As for people who want parallel agents so stuff “happens faster”: I am convinced most of them don’t really read (nor probably understand) the code these agents produce.
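The architect/editor split mentioned above can be sketched as two calls with separate system prompts, each starting from a fresh context; `complete()` is a hypothetical stand-in for a real LLM call, and the prompts are illustrative only:

```python
# Each role gets its own system prompt; neither call sees the other's history,
# which is the "clean context" point made above.
ARCHITECT_PROMPT = "You plan code changes; describe them, do not write code."
EDITOR_PROMPT = "You apply a given plan as concrete code edits."

def complete(system: str, user: str) -> str:
    # stand-in for a real LLM call; echoes its role for illustration
    return f"[{system.split(';')[0]}] response to: {user}"

def run_task(task: str) -> str:
    plan = complete(ARCHITECT_PROMPT, task)   # context #1: sees only the task
    edits = complete(EDITOR_PROMPT, plan)     # context #2: sees only the plan
    return edits
```

The editor never sees the original conversation, only the architect's plan, so each model can be chosen and prompted for its specific role.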

  • There are a dozen PRs that are not getting accepted. I’m using a custom Aider build and tested its MCP client support, but it’s just not getting merged or reviewed.

  • Yeah, that’s what I’m experimenting with, but I think it’s overengineered, especially with the whole dogmatic SPARC approach. I’m personally more of a minimalist; I would prefer it to be natively integrated into the app, with the ability to define exactly the (system) prompts for each of the agents.

HackerNews