MCP is dead; long live MCP

2026-03-14 19:32 · chrlschn.dev

Understanding the social media zeitgeist around CLIs and the premature death of MCP

  • There is currently a social media and industry zeitgeist dialed in on CLIs…just as there was a moment for MCP only a few short months ago
  • While it is true that there are token savings to be had by using a CLI, many folks have not considered how agents using custom CLIs run into the same context problem as MCP, except now without structure and many other sacrifices
  • In much of the discourse, there is a lack of distinction between local MCP over stdio versus server MCP over HTTP; the latter is a very different use case
  • Many folks are also only familiar with MCP tools, but overlook MCP prompts and resources as an important org- and enterprise-level mechanism for moving from cowboy vibe-coding to organizationally aligned agentic engineering.
  • The importance of MCP auth is also commonly misunderstood as is the role and importance of telemetry in understanding org-wide tool use
  • For enterprise and org-level use cases, MCP is the present and future and teams need to be able to cut through the hype of the moment.

Just 6 months back, Model Context Protocol (MCP) was all that anyone could talk about and it seemed that everyone was in a frenzy to ship MCP-related offerings and tools. At Motion, where I work, we were in the thick of it with seemingly every single vendor trying to get us to buy or use their MCP product.

It’s just an API; why do I need a wrapper around that when I can just call the API directly?

This was my common response to these vendors as they tried to pitch usage of their MCP offerings (often at a premium!). My guard was up as I looked on from the sidelines at the hype surrounding MCP with skepticism; we entirely skipped the MCP hype cycle because it wasn’t clear what the added value was.

But in just 6 short months, it seems as if the industry discourse around MCP has completely shifted. What happened?

First, most folks realized that in most use cases, MCP is just overhead on top of calling APIs directly; MCP as a thin wrapper for APIs does not make sense. Instead of using MCP, we simply wrote small tool wrappers around REST API endpoints.
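
To make that concrete, here is a minimal sketch of what one of those wrappers might look like. The endpoint, parameters, and function names are all hypothetical:

```python
import json
import urllib.request
from urllib.parse import urlencode

API_BASE = "https://api.example.com"  # hypothetical endpoint


def build_flight_request(origin: str, destination: str, date: str,
                         api_key: str) -> dict:
    # Build the request the wrapper would send; split out so the shape
    # is easy to inspect (and test) without any network access.
    query = urlencode({"origin": origin, "destination": destination, "date": date})
    return {
        "url": f"{API_BASE}/flights?{query}",
        "headers": {"Authorization": f"Bearer {api_key}"},
    }


def search_flights(origin: str, destination: str, date: str, api_key: str) -> dict:
    # The agent-facing "tool": a plain function with a clear signature,
    # no MCP server or schema registration required.
    spec = build_flight_request(origin, destination, date, api_key)
    req = urllib.request.Request(spec["url"], headers=spec["headers"])
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

The point is the simplicity: the agent sees one function signature and one docstring's worth of context per endpoint it actually needs.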

Second, it is instructive to step back and understand just how much of this field is now driven by individuals marketing their companies and marketing themselves. So much of the AI landscape requires creating a sense of FOMO and hype and has been that way for years (while those of us adopting and adapting these tools on a daily basis often look on scratching our heads). These influencers constantly need some content to rant about to stay relevant and push themselves, their products, their companies, or their services so we see constant re-orientation around the zeitgeist of the moment.

In that sense, I view many of these influencers (and even people that should know better like Garry Tan and Andrew Ng) as no different than influencers peddling Ivermectin as a cure-all or anti-vax conspiracies; it’s simple ignorance.

The influencer-driven discourse has now shifted towards hating on MCP and praising CLIs as the current zeitgeist. Not without reason (to be clear) because in many cases, it is true that the CLI makes much more sense as an interface for agentic tool use…except when it does not.

Understanding the Misunderstanding

Before we dive any deeper, I would like to make clear that I think MCP is indeed the wrong choice for a class of use cases, but as the title might suggest, there is a deep misunderstanding of where and how CLI tools can yield savings.

There are two topics at the core of the CLI-vs-MCP debate that form the foundation of much of the discussion:

  1. Token savings and efficiency with CLIs.
  2. The bloat, complexity, and alleged uselessness of MCP.

Do Token Savings Exist?

Yes, they do. There are two kinds of token savings to be had, but they might not be as dramatic as social media would have you believe.

First, CLI utilities in the models’ training dataset (jq, curl, git, grep, psql, aws, s3, gcloud, etc.) benefit tremendously from the underlying models already having encountered innumerable examples of how to use them. Because of this, the agent does not need additional instruction, schemas (not true when calling custom REST APIs, though), or context on how to use these tools; it can simply one-shot the tool in many cases. This can be a significant savings over MCP because MCP tools must be declared up front in the tools/list response.

For CLI tools that will already be in the agent’s training dataset, absolutely always prefer them over MCP (for custom REST APIs, that’s a “maybe”).
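
For reference, that up-front declaration happens via MCP's tools/list exchange. A simplified sketch of the request/response shape (abbreviated; see the spec for the full schema):

```js
// Client -> server (JSON-RPC 2.0)
{ jsonrpc: "2.0", id: 1, method: "tools/list" }

// Server -> client: every tool, with its full schema, up front
{
  jsonrpc: "2.0",
  id: 1,
  result: {
    tools: [
      {
        name: "searchFlights",
        description: "Search for available flights",
        inputSchema: { /* full JSON Schema for the parameters */ }
      }
      // ...one entry per tool
    ]
  }
}
```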

However, this is not true of a custom, bespoke CLI tool. The LLM has no way of knowing which CLI to use and how it should use it…unless each tool is listed with a description somewhere either in AGENTS|CLAUDE.md or a README.md. You must provide the LLM some instruction on how and when to use a bespoke CLI tool that it has never seen before.

It is possible to point it to a directory called /cli-tools and rely on descriptive naming to have the agent pick and choose the tool, but anyone that works with agents day in and day out already knows that agents are often not very good at this without more explicit instructions. It will make mistakes and you will have to update your AGENTS|CLAUDE.md or other docs somewhere and fill it with more and more descriptions each time you find the agent misbehaving with your bespoke CLI tools.

Aside from that, even with a tool like curl, any token savings are lost the moment the agent has to understand a bespoke OpenAPI schema to correctly call the API, since the entire OpenAPI schema may need to be loaded into context (or extensive examples given to the agent) to instruct it on how to use the API. Oops; there go all of your hyped-up context savings.
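
For a sense of scale, here is what a minimal OpenAPI fragment for a hypothetical flight-search endpoint might look like; this (or extensive examples) is what curl-based usage would need in context. The endpoint and field names are made up for illustration:

```yaml
# Hypothetical OpenAPI fragment the agent needs in context
# to call the endpoint correctly via curl
paths:
  /flights:
    get:
      summary: Search for available flights
      parameters:
        - { name: origin, in: query, required: true,
            schema: { type: string }, description: Departure city }
        - { name: destination, in: query, required: true,
            schema: { type: string }, description: Arrival city }
        - { name: date, in: query, required: true,
            schema: { type: string, format: date }, description: Travel date }
```

Token for token, this is no leaner than the equivalent MCP tool declaration.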

Progressive Context Consumption

It is true that CLIs allow for progressive context consumption as an agent tries to deduce usage. Whereas MCPs will state the toolset and schema up front (the “bloat”), CLI tools can progressively load their --help into context.

// Sample from the Model Context Protocol site
{
  name: "searchFlights",
  description: "Search for available flights",
  inputSchema: {
    type: "object",
    properties: {
      origin: { type: "string", description: "Departure city" },
      destination: { type: "string", description: "Arrival city" },
      date: { type: "string", format: "date", description: "Travel date" }
    },
    required: ["origin", "destination", "date"]
  }
}

But nonetheless, the reality is that unless it is a well-known tool in the LLM’s training dataset, the agent will need to progressively (and over multiple turns) descend the CLI tool’s help content to understand the available commands, sub-commands, and parameters.

There is simply no getting around this because the agent has no way of knowing how to use a bespoke CLI tool otherwise.

# What it might look like as --help output

command: searchFlights    Search for available flights
  input: JSON object with origin, destination, date
  example:
    {
      origin: "(string; required) departure city",
      destination: "(string; required) arrival city",
      date: "(date:yyyy-MM-dd; required) travel date"
    }

I don’t know about you, but this sure looks like the MCP schema…just without any structure.

It is true that this could be progressively loaded instead by first listing all of the commands and then having the agent --help only the desired command to disclose the costly payload descriptor for that tool:

# Progressively loading instead of loading the entire schema

For usage:

  flights <command> [--help]

commands:
  searchFlights    Search for available flights
  bookFlight       Book a flight
  ...

However, I would make four points here:

  1. For a sufficiently complex flow, the agent will end up traversing most of the tree, regardless.
  2. The likely context savings will end up pretty minimal if the MCP toolset is smartly designed in the first place; the agent just ends up taking more turns with a CLI while progressively discovering commands and parameter descriptors.
  3. Without giving the agent the full schema up front, the chance of the agent using the toolset correctly will go down. In the same way that Vercel found agent usage of docs improved when they placed the full doc index into AGENTS.md, our intuition should tell us that if the agent is aware of all of the tools and parameters at the outset, it will be better equipped to select the right one.
  4. Don’t give your agent complex, useless MCP tools in the first place. CLI and MCP are not mutually exclusive; be selective in both cases.
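
Point 2 can be sketched with some back-of-the-envelope arithmetic. The token counts below are assumptions for illustration, not measurements:

```python
# Rough illustration: with a small, well-designed toolset, the up-front
# MCP declaration and the progressive --help descent cost a comparable
# amount of context. All numbers are assumed, not measured.

TOKENS_PER_TOOL_SCHEMA = 120   # assumed size of one MCP tool declaration
TOKENS_PER_HELP_SCREEN = 100   # assumed size of one --help output
TOKENS_COMMAND_INDEX = 80      # assumed size of the top-level command list


def mcp_upfront_cost(num_tools: int) -> int:
    # MCP: every tool schema lands in context once, at session start.
    return num_tools * TOKENS_PER_TOOL_SCHEMA


def cli_progressive_cost(tools_actually_used: int) -> int:
    # CLI: one command index, then one --help screen per tool the agent
    # ends up needing (each discovery also costing an extra turn).
    return TOKENS_COMMAND_INDEX + tools_actually_used * TOKENS_PER_HELP_SCREEN


# A lean 5-tool server vs. an agent that ends up using 4 of those tools:
print(mcp_upfront_cost(5))      # 600
print(cli_progressive_cost(4))  # 480
```

Under these assumptions the CLI saves a modest slice of context while spending extra turns, and the gap shrinks further as the agent touches more of the toolset.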

If you’re still not convinced that a lot of this discourse lacks nuance and is just hype, congrats on buying into the current AI-influencer FOMO hype cycle; see you in 6 months when the influencers move on to the next revelation of the moment to stay relevant and get your eyeballs and dollars.

The Duality of MCP

When the MCP hype first started, I was as much a skeptic as anyone. Motion, at the time, was building out our “AI Employees” (like every other hype-chasing team) and building integrations.

While vendors were trying to sell us on their MCP implementations of integrations, we simply wrote tool wrappers around REST API calls and passed API keys or auth info in headers using standard Bearer tokens. The hype seemed entirely unjustified and I looked on with skepticism that MCP had any future whatsoever.

Especially in stdio mode, MCP felt excessive and useless. Indeed, in most use cases, MCP over stdio is probably not needed and adds complexity over writing a simple CLI.

Distinction between MCP over stdio and streamable HTTP

But MCP over streamable HTTP? This is an absolute game changer and will be a key linchpin in organizational and enterprise adoption shifting from vibe-coding to agentic engineering.

Why Centralization is Key

(I should preface this by saying that it is primarily relevant for orgs and enterprises; it has little relevance for individual vibe-coders.)

Most influencers talking about MCP fail to distinguish between MCP communicating locally over stdio versus over streamable HTTP.

In stdio mode, the MCP server runs locally with the agent and indeed, why bother with this over writing a simple CLI?

But when accessed over the streamable HTTP transport, it is possible to run that same logic on a centralized server, and there are several unlocks when the tooling is deployed and accessed by agents in this modality.

Richer Underlying Capabilities

What if the use case would benefit from having access to a Postgres instance? With Apache AGE enabled for Cypher graph queries over indexed context? This is very straightforward when the tooling is centralized on a server and accessed by a thin client. It is possible to implement richer platform capabilities for the tooling because distribution is as simple as pointing an agent to an HTTP endpoint and adding an auth token.

Yes, a local database like SQLite may also do the trick, but there is a limit to what’s possible and sharing state across an org becomes more difficult.
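
Concretely, "distribution" can be as small as one entry in the agent's MCP client config. The exact shape varies by client, and the URL and env var here are placeholders:

```json
{
  "mcpServers": {
    "org-tools": {
      "type": "http",
      "url": "https://mcp.internal.example.com/mcp",
      "headers": { "Authorization": "Bearer ${MCP_TOKEN}" }
    }
  }
}
```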

Ephemeral Agent Runtimes

Remote MCP servers over HTTP also offer big advantages depending on the runtime context.

For example, consider exposing remote tools and APIs to agents running in GitHub. Here, MCP makes it trivially easy to use tools that could require complex backends, with no install and without the restrictions of operating in the ephemeral GitHub Actions runtime environment.

Copilot Agent MCP configuration

It offloads management of stateful workloads in stateless, ephemeral contexts to a centralized server.

Auth and Security

Moving workloads into a server also improves the story around auth and security. CLIs that need to access secured API endpoints using curl, as an example, require that every developer have access to keys for those APIs (or proxy calls through some server). It’s easy to see why this is bad and a pain in the ass for ops teams.

Centralizing this behind MCP allows each developer to authenticate to the MCP server via (more or less) bog-standard OAuth, while sensitive API keys and secrets remain behind the server like any other bog-standard REST API. The exposure of secrets is controlled, limited, and easy to audit.

An engineer leaves your team? Revoke their OAuth token and access to the MCP server; they never had access to other keys and secrets to start with.
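
A toy sketch of that server-side split (this is not a real MCP server; every name, token, and key here is made up for illustration):

```python
# Illustrative sketch only: the server verifies the developer's OAuth
# bearer token, then calls upstream APIs with keys that only the
# server holds. All names and data here are fabricated.

UPSTREAM_KEYS = {"flights-api": "server-held-secret"}  # never leaves the server
ACTIVE_TOKENS = {"tok-alice": "alice"}                 # issued via the OAuth flow


def handle_tool_call(bearer_token: str, tool: str) -> dict:
    user = ACTIVE_TOKENS.get(bearer_token)
    if user is None:
        # Revoking a departing engineer is one deletion from ACTIVE_TOKENS;
        # they never had the upstream keys to begin with.
        return {"status": 401, "error": "invalid or revoked token"}
    upstream_key = UPSTREAM_KEYS["flights-api"]
    # ...call the upstream API with upstream_key and audit-log `user`...
    return {"status": 200, "user": user, "tool": tool}
```

The upstream keys live in exactly one place, and every call is attributable to a user.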

Telemetry and Observability

For teams, another big win with centralized streamable MCP is the story around telemetry and observability. Which tools are having an impact? Which agent runtimes are the team using? Which tools are low value? Where and how are tools failing? Without centralized, standardized telemetry, it is exceedingly difficult as an engineering org to understand what’s working and what’s not.

With a centralized MCP server, this is simply a matter of emitting OpenTelemetry traces and metrics and collecting them using standard, off-the-shelf tooling.
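
A sketch of the kind of per-tool-call attributes such a server might emit. The attribute names here are illustrative, not an official OpenTelemetry semantic convention; with the real SDK this would be a tracer span with set_attribute calls:

```python
import time

# Illustrative attribute set for one tools/call span; names are made up.


def tool_call_span(tool_name: str, agent: str, duration_ms: float, ok: bool) -> dict:
    return {
        "name": "mcp.tools/call",
        "attributes": {
            "mcp.tool.name": tool_name,        # which tools get used?
            "mcp.client.agent": agent,         # which runtimes call them?
            "mcp.call.duration_ms": duration_ms,
            "mcp.call.ok": ok,                 # where are tools failing?
        },
        "timestamp": time.time(),
    }
```

Aggregating these four attributes alone answers most of the questions above.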

Datadog dashboard showing OTEL metrics

While this is achievable with CLI tools, it is far easier when deploying a single server: local delivery requires consumers to update, and a lot of the scaffolding that happens once centrally would instead have to be reproduced in each CLI tool a team ships.

Standardized, Instant Delivery of Up-To-Date Content

Distributed tooling (e.g. via packages) has much the same problem as working with distributed apps: keeping deployments up to date with the latest release and API compatibility. While it is relatively straightforward to update a server to add new capabilities, if tool capabilities are delivered as local CLI tools that interact with those APIs, API version compatibility becomes a concern.

MCP includes provisions for this: subscriptions and notifications, which allow servers to call back to clients to notify them of updates.

While most folks are aware of MCP tools, relatively fewer are aware of MCP prompts and MCP resources.

From the MCP specification page

Look carefully and you can see:

  • MCP Prompts are effectively server-delivered SKILL.md
  • MCP Resources are effectively server-delivered /docs

Why would an org, team, or enterprise want this?

Well, it’s easy to see several benefits.

  1. Dynamic content. While *.md files in a repo are static text files that need to be manually updated, synced, and maintained, server produced prompts and resources have no such restrictions. It is possible to dynamically generate the text on the fly to construct a skill. What about docs? What if some docs are useful in some contexts, but not others? What if docs could benefit from dynamic injection of content/context without a tool call (pricing data, current system status, etc.)?
  2. Automatic and consistent updates. When *.md files are delivered as part of the repo or as part of a package import, the downside is that they can become out of sync and require explicit syncs on the local system. This is not the case with server-delivered skills and resources via MCP prompts as these are always up to date. What about third party, official sources of docs and skills? Do you manually reproduce and update these in your repo? Or would it be easier to simply proxy it through a server?
  3. Org-wide knowledge. Some content applies org-wide; for example, standard best practices for security or telemetry in apps, or infrastructure deployment considerations. What about orgs that use microservices, where one team may need the docs for another team's service? Or what if a service team could provide skills for their service dynamically each time they ship? Does it make sense to reproduce these docs in every repo? How does an org keep them up-to-date and in sync?

MCP is the answer to all of these.
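
A sketch of what dynamically generated resources might look like server-side. The URIs, doc contents, and simplified response shapes are all made up; a real server would speak the full resources/list and resources/read protocol:

```python
import datetime

# Sketch: the server assembles the doc index per request, so clients
# are always current. Everything here is illustrative.

DOCS = {
    "docs://security/best-practices": "Org-wide security guidance ...",
    "docs://telemetry/standards": "How we instrument our services ...",
}


def list_resources() -> dict:
    # resources/list: a "virtual" index generated on the fly
    return {
        "resources": [{"uri": uri, "name": uri.split("//")[1]} for uri in sorted(DOCS)],
        "generatedAt": datetime.date.today().isoformat(),
    }


def read_resource(uri: str) -> dict:
    # resources/read: content can be injected dynamically (current
    # system status, pricing, etc.) without a tool call
    return {"contents": [{"uri": uri, "text": DOCS[uri]}]}
```

Swap the DOCS dict for a database, a docs pipeline, or another team's service and the clients never notice; they just always see the current index.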

Here is an MCP delivered prompt in OpenCode:

MCP Prompt

And a set of MCP delivered resources:

MCP Prompt (OpenCode)

The same in Claude Code CLI:

MCP Prompt (Claude)

Each of these resources is a dynamically generated, “virtual” index of documents that are available (similar to Vercel’s index in AGENTS.md) and we have the benefit of bringing these documents into any project, always up-to-date, and with full server telemetry on which documents are being accessed.

In all cases, delivery of this capability requires only configuring the MCP client; it’s fire and forget with no need to keep things updated on the client.

Closing Thoughts

I get it; the industry moves fast now and these social media influencers need to keep chasing something new to keep their audience engaged. 6 months ago, MCP was the hot ticket. Now it’s a has-been, blamed for context bloat by critics who often don’t even consider the tradeoffs and similar traps with custom CLIs. The masses following along have seemingly lost all critical thought insofar as engineering discipline goes.

But a simple thought exercise about how teams can move engineering orgs from vibe-coding towards agentic engineering lands one pretty squarely on the design and mission of the Model Context Protocol. In any scenario beyond a solo vibe-coder, MCP offers telemetry, simplified security management, automatic content synchronization, a schema- and standards-based approach, and ease of observability (how can you tell which tools are effective otherwise?). Teams that buy into the current zeitgeist will make a mistake when selecting an approach for delivering the scaffolding that enables agentic engineering.

Amazon's recent challenges with AI assisted code

We are still in the relatively early days of AI agents taking a leading role in software engineering, and because the field moves quickly, there is an emphasis on speed at all costs. But as we’ve seen with Amazon’s recent challenges in their AWS division, teams eventually have to operationalize and maintain the software systems produced by AI agents. And for that, we still need an engineering discipline that ensures consistency, high quality, and correctness, even when the producer of that software is an AI agent. Organizations need architectures and processes that move beyond cowboy vibe-coding culture to organizationally aligned agentic engineering practices. And for that, MCP is the right tool for orgs and enterprises.

Long live MCP!

All content was human written; see the file history in the repo.



Comments

  • By 0xbadcafebee · 2026-03-14 21:03 · 4 replies

    MCP is a fixed specification/protocol for AI app communication (built on top of an HTTP CRUD app). This is absolutely the right way to go for anything that wants to interoperate with an AI app.

    For a long time now, SWEs seem to have been bamboozled into thinking that the only way you can connect different applications together is "integrations" (tightly coupling your app into the bespoke API of another app). I'm very happy somebody finally remembered what protocols are for: reusable communications abstractions that are application-agnostic.

    The point of MCP is to be a common communications language, in the same way HTTP is, FTP is, SMTP, IMAP, etc. This is absolutely necessary since you can (and will) use AI for a million different things, but AI has specific kinds of things it might want to communicate with specific considerations. If you haven't yet, read the spec: https://modelcontextprotocol.io/specification/2025-11-25

    • By tptacek · 2026-03-14 22:25 · 5 replies

      Why is this the right way to go? It's not solving the problem it looks like it's solving. If your challenge is that you need to communicate with a foreign API, the obvious solution to that is a progressively discoverable CLI or API specification --- the normal tool developers use.

      The reason we have MCP is because early agent designs couldn't run arbitrary CLIs. Once you can run commands, MCP becomes silly.

      There is a clear problem that you'd like an "automatic" solution for, but it's not "we don't have a standard protocol that captures every possible API shape", it's "we need a good way to simulate what a CLI does for agents that can't run bash".

      • By jabber86 · 2026-03-15 2:18

        I am the creator of HasMCP (my response could have a little bias). Not everyone has a home/work computer, mostly by preference. I know a lot of people who just use an iPad or Android tablet in addition to their phone. They still use applications to get things done. This is not a small number of people. They need to access open-world data or service-specific data. This is where MCP is still one of the best ways.

        It tries to standardize the auth, messaging, and feedback loop in ways an API can't alone. A CLI app can do this for sure, but we are talking about a standard; maybe the answer is something like an mcpcli that you can install on your phone, but still, would you really prefer installing a bunch of applications on your personal device?

        Some points that MCP is still not good as of today:

        - It does not have a standard way to manage context well. You have to find your own hack; the most accepted one is a search plus add/remove tool. Another is cataloging the tools.

        - Lack of client tooling to support elicitation on many clients (it really hurts productivity, but this is not solved with a CLI either)

        - lack of mcp-ui adoption (mcp-ui vs openai mcp app)

        I would suggest you keep building what helps you and your users. I am not a sponsor of MCP, just sharing my personal opinion. I am also the creator of HasCLI, but am, kindly, biased towards MCP over CLI in terms of coverage and standardization.

      • By phpnode · 2026-03-15 2:27

        no, it's all about auth. MCP lets less-technical people plug their existing tools into agents. They can click through the auth flow in about 10 seconds and everything just works. They cannot run CLIs because they're not running anything locally, they're just using some web app. The creator of the app just needed to support MCP and they got connectivity with just about everything else that supports MCP.

      • By oneseventwonine · 2026-03-15 0:06 · 1 reply

        For the agent to use a CLI, don't we have to install the CLI in the runtime environment first? Instead, for MCP over streamable HTTP we don't have to install anything and just specify the tool call in the context, isn't it?

        • By tptacek · 2026-03-15 0:10

          This rolls up to my original point. I get that if you stipulate the agent can't run code, you need some kind of systems solution to the problem of "let the agent talk to an API". I just don't get why that's a network protocol coupling the agent to the API and attempting to capture the shape of every possible API. That seems... dumb.

      • By harrall · 2026-03-14 23:07 · 1 reply

        CLI doesn’t work for your coworkers that aren’t technical.

        Have you tried to use a random API before? It’s a process of trial and error.

        With the MCP tools I use, it works the first time and every time. There is no “figuring out.”

        • By mkoubaa · 2026-03-14 23:24

          If you can't write a good CLI I doubt you could write a good MCP

      • By isbvhodnvemrwvn · 2026-03-14 22:32 · 1 reply

        It's significantly more difficult to secure random clis than those apis. All llm tools today bypass their ignore files by running commands their harness can't control.

        • By tptacek · 2026-03-14 23:33

          I'm fuzzy when we're talking about what makes an LLM work best because I'm not really an expert. But, on this question of securing/constraining CLIs and APIs? No. It is not easier to secure an MCP than it is a CLI. Constraining a CLI is a very old problem, one security teams have been solving for at least 2 decades. Securing MCPs is an open problem. I'll take the CLI every time.

    • By simianwords · 2026-03-14 21:17 · 3 replies

      > This is absolutely necessary since you can (and will) use AI for a million different things

      the point is, is it necessary to create a new protocol?

      • By hannasanarion · 2026-03-14 21:38

        Exactly this. I've made some MCP servers and attached tons of other people's MCP servers to my llms and I still don't understand why we can't just use OpenAPI.

        Why did we have to invent an entire new transport protocol for this, when the only stated purpose is documentation?

      • By CharlieDigital · 2026-03-14 21:40

        By and large, it is a very simple protocol and if you build something with it, you will see that it is just a series of defined flows and message patterns. When running over streamable HTTP, it is more or less just a simple REST API over HTTP with JSON RPC payload format and known schema.

        Even the auth is just OAuth.

      • By paulddraper · 2026-03-14 21:58 · 1 reply

        It’s not a new protocol.

        It’s JSON-RPC plus OAuth.

        (Plus a couple bits around managing a local server lifecycle.)

        • By drdaeman · 2026-03-14 22:55

          The world would surely be a saner place if instead of “MCP vs CLI” people would talk about “JSON-RPC vs execlp(3)”.

          Not accurate, but at least it makes one think of the underlying semantics. Because, really, what matters is some DSL to discover and describe action invocations.

    • By ambicapter · 2026-03-14 21:16 · 5 replies

      If AI is AI, why does it need a protocol to figure out how to interact with HTTP, FTP, etc.? MCP is a way to quickly get those integrations up and running, but purely because the underlying technology has not lived up to its hyped abilities so far. That's why people think of MCP as a band-aid fix.

      • By 8note · 2026-03-14 21:45 · 1 reply

        Why the desire to reinvent the wheel every time? Agents can do it accurately, but you have to wait for them to figure it out every time, and waste tokens on non-differentiated work

        The agents are writing the MCPs, so they can figure out those HTTP and FTP calls. MCP makes it so they don’t have to every time they want to do something.

        I wouldn’t hire a new person to read a manual and then make a bespoke JSON to call an HTTP server every single time I want to make a call, and that’s not a knock on the person’s intelligence. It’s just a waste of time doing the same work over and over again. I want the results of calling the API, not to spend all my time figuring out how to call the API.

        • By theptip · 2026-03-14 22:07

          It’s simply about making standard, centralized plugins available. Right now Claude benefits from a “link GitHub Connector” button with a clear manifest of actions.

          Obviously if the self-modifying, Clawd-native development thing catches on, any old API will work. (Preferably documented but that’s not a hard requirement.)

          For now though, Anthropic doesn’t host a clawd for you, so there isn’t yet a good way for it to persist custom integrations.

      • By avereveard · 2026-03-14 22:01 · 1 reply

        Each AI needs context management per conversation; this is something that would be very clunky to replicate on top of HTTP or FTP (as in requiring side-channel information for session and conversation management).

        Everyone looks at APIs, and sure, MCP seems redundant there, but look at an agent driving a browser: the get-DOM method depends on all the actions performed since the window opened, and it needs to be per agent, per conversation.

        Can you do that as REST? Sure, sneak a session and conversation into a parameter or cookie, but then the protocol is not really just HTTP, is it? It’s all this clunky coupling that comes with a side of unknowns (when is a conversation finished? did the client terminate, or are we just between messages?), and as you solve these for the hundredth time you’d start itching for standardization.

        • By superturkey650 · 2026-03-14 22:46

          All MCP adds is a session token. How is that not already a solved problem?

      • By CharlieDigital · 2026-03-14 21:19 · 1 reply

        Because protocols provide structure that increases correctness.

        It is not a guarantee (as we see with structured output schemas), but it significantly increases compliance.

        • By ambicapter · 2026-03-14 21:30 · 2 replies

          You're interacting with an LLM, so correctness is already out the window. So model-makers train LLMs to work better with MCP to increase correctness. So the only reason correctness is increased with MCP is because LLMs are specifically trained against it.

          So why MCP? Are there other protocols that will provide more correctness when trained? Have we tried? Maybe a protocol that offers more compression of commands will overall take up more context, thus offering better correctness.

          MCP seems arbitrary as a protocol, because it kinda is. It doesn't >>cause<< the increase in correctness in of itself, the fact that it >>is<< a protocol is the reason it may increase correctness. Thus, any other protocol would do the same thing.

          • By fartfeatures · 2026-03-14 21:36

            > You're interacting with an LLM, so correctness is already out the window.

            With all due respect if you are prompting correctly and following approaches such as TDD / extensive testing then correctness is not out the window. That is a misunderstanding likely caused by older versions of these models.

            Correctness can be as complete as any other new code, I've used the AI to port algorithms from Python to Rust which I've then tested against math oracles and published examples. Not only can I check my code mathematically but in several instances I've found and fixed subtle bugs upstream. Even in well reviewed code that has been around for many years and is well used. It is simply a tool.

          • By CharlieDigital · 2026-03-14 21:37

                > So why MCP? ...  MCP seems arbitrary as a protocol
            
            You're right, it is an arbitrary protocol, but it's one that is supported by the industry.

            See the screencaps at the end of the post that show why this protocol. Maybe one day, we will get a better protocol. But that day is not today; today we have MCP.

      • By nonethewiser · 2026-03-14 21:32

        If AI is AI why does it need me to prompt it?

      • By re-thc · 2026-03-14 23:25

        > If AI is AI

        That "AI" got renamed to "AGI"

    • By ekropotin · 2026-03-15 0:25

      I mean, CLI tool is also “reusable communication abstraction”, innit?

  • By codemog · 2026-03-14 20:48 · 7 replies

    As soon as MCP came out I thought it was over-engineered crud and didn’t invest any time in it. I have yet to regret this decision. Same thing with LangChain.

    This is one key difference between experienced and inexperienced devs; if something looks like crud, it probably is crud. Don’t follow or do something because it’s popular at the time.

    • By fartfeatures · 2026-03-14 20:56 · 3 replies

      All the code I work on now has an MCP interface so that the LLM can debug more easily. I'd argue it is as important as the UI these days. The amount of time it has saved me is unreal. It might be worth investing a very small amount of your time in it to see if it is a good fit. Even a poor protocol can provide useful functionality.

      • By kybernetikos · 2026-03-14 22:04 · 1 reply

        I've just been discovering this pattern too. It's made a huge difference. Trying to get Claude to remote control an app for testing via the various other means was miserable and unreliable.

        I got it to build an MCP server into the app that supported sending commands to allow Claude to interact with it as if it was a user, including keypresses and grabbing screenshots, and the difference was immediate and really beneficial.

        Visual issues were previously one of the things it would tend to struggle with.

        • By behehebd · 2026-03-14 23:10 · 1 reply

          How does it compare to my go-to: a test suite that uses Playwright?

          > Claude implement plan.md until all unit and browser tests pass

          • By kybernetikos 2026-03-14 23:38

            I assume that this is dependent on app, and it's quite possible that your approach is best in some cases.

            In my case I started with something somewhat like Playwright, and claude had a habit of interacting with the app more directly than a user would be able to and so not spotting problems because of it. Forcing it to interact by pressing keys rather than delving into the dom or executing random javascript helped. In particular I wanted to be able to chat with it as it tried things interactively. This is more to help with manual tests or exploratory testing rather than classic automated testing.

            My current app is a desktop app, so playwright isn't as applicable.

      • By moralestapia 2026-03-14 21:14 (5 replies)

        Our workflows must be massively different.

        I code in 8 languages, regularly, for several open source and industry projects.

        I use AI a lot nowadays, but have never ever interacted with an MCP server.

        I have no idea what I'm missing. I am very interested in learning more about what you use it for.

        • By Kaliboy 2026-03-14 21:49

          I've managed to ignore MCP servers for a long time as well, but recently I found myself creating one to help the LLM agents with my local language (Papiamentu) in the dialect I want.

          I made a Prolog program that knows the valid words and spellings along with sentence composition rules.

          Via the MCP server, a translated text can be verified. If it's not faultless, the agent enters a feedback loop until it is.

          The nice thing is that it's implemented once and I can use it in opencode and claude without having to explain how to run the prolog program, etc.

        • By CharlieDigital 2026-03-14 21:25 (2 replies)

              > I have no idea what I'm missing.
          
          The questions I'd ask:

              - Do you work in a team context of 10+ engineers?
              - Do you all use different agent harnesses?
              - Do you need to support the same behavior in ephemeral runtimes (GH Agents in Actions)?
              - Do you need to share common "canonical" docs across multiple repos?
              - Is it your objective to ensure a higher baseline of quality and output across the eng org?
              - Would your workload benefit from telemetry and visibility into tool activation?
          
          If none of those apply, then it's not for you. Server-hosted MCP over streamable HTTP benefits orgs and teams and has virtually no benefit for individuals.

          • By monsieurbanana 2026-03-14 23:22

            What I want to know is what's the difference between a remote mcp and an api with an openapi.json endpoint for self-discovery? It's just as centralized

          • By fartfeatures 2026-03-14 21:29

            MCP is useful for the above. I work on my own more often than not and the utility of MCP goes far beyond the above. (see my other comment above).

        • By fartfeatures 2026-03-14 21:28

          I can't go into specifics about exactly what I'm doing but I can speak generically:

          I have been working on a system using a Fjall datastore in Rust. I haven't found any tools that directly integrate with Fjall, so even getting insight into what data is there, being able to remove it, etc. is hard. So I have used https://github.com/modelcontextprotocol/rust-sdk to create a thin CRUD MCP. The AI can use this to create fixtures, check if things are working how they should, or debug things; e.g., if a query is returning incorrect results and I tell the AI, it can quickly check to see if it is a datastore issue or a query layer issue.
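          The commenter's version uses the Rust SDK against Fjall; as a language-agnostic sketch of what a "thin CRUD MCP" amounts to, here is a Python tool dispatcher with an in-memory dict standing in for the datastore (all names are illustrative):

```python
# Thin CRUD dispatcher: one branch per datastore verb. An in-memory dict
# stands in for the real store; in the commenter's case this would be
# backed by Fjall via the MCP Rust SDK.
STORE = {}

def handle_tool_call(name, args):
    """Dispatch an MCP tools/call request to a CRUD operation."""
    if name == "put":
        STORE[args["key"]] = args["value"]
        return {"ok": True}
    if name == "get":
        return {"value": STORE.get(args["key"])}
    if name == "delete":
        return {"ok": STORE.pop(args["key"], None) is not None}
    if name == "scan":
        # Prefix scan so the agent can see what data is actually there.
        prefix = args.get("prefix", "")
        return {"keys": sorted(k for k in STORE if k.startswith(prefix))}
    raise ValueError("unknown tool: " + name)

handle_tool_call("put", {"key": "user:1", "value": {"name": "test"}})
print(handle_tool_call("scan", {"prefix": "user:"}))
```

          The point is how little code this is: the value is not the wrapper itself but that the agent can now create fixtures and check the datastore layer on its own.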

          Another example is I have a simulator that lets me create test entities and exercise my system. The AI with an MCP server is very good at exercising the platform this way. It also lets me interact with it using plain english even when the API surface isn't directly designed for human use: "Create a scenario that lets us exercise the bug we think we have just fixed and prove it is fixed, create other scenarios you think might trigger other bugs or prove our fix is only partial"

          One more example: I have an Overmind-style task runner that reads a file, starts up every service in a microservice architecture, can restart them, can see their log output, can check if they can communicate with the other services, etc. Not dissimilar to how the AI can use Docker, but without Docker, to get max performance both during compilation and usage.

          Last example is using off-the-shelf MCPs for VCS servers like GitHub or GitLab. It can look at issues, update descriptions, comment, code review. This is very useful for your own projects but even more useful for other people's: "Use the MCP tool to see if anyone else is encountering similar bugs to what we just encountered"

        • By 8note 2026-03-14 22:02 (1 reply)

          It's very similar to the switch from a text editor + command line to having an IDE with a debugger.

          The AI gets to do two things:

          - expose hidden state

          - do interactions with the app, and see before/after/errors

          It gives more time where the LLM can verify its own work without you needing to step in. It's also a bit more integration-test-y than unit.

          If you were to add one MCP, make it Playwright or some similar browser automation MCP. Very little adds more value than just being able to control a browser.

          • By CPLX 2026-03-14 22:26

            I’ve been using Chrome DevTools MCP a lot for this purpose and have been very happy with it.

        • By winrid 2026-03-14 21:18 (1 reply)

          Many products provide MCP servers to connect LLMs. For example, I can have Claude examine things through my Ahrefs account without me using the UI, etc.

          • By 8n4vidtmkvmk 2026-03-14 21:42 (1 reply)

            That's also one of the things that worries me the most. What kind of data is being sent to these random endpoints? What if they go rogue or change their behavior?

            A static set of tools is safer and more reliable.

            • By 8note 2026-03-14 22:06

              MCP is generally a static set of tools, where auth is handled by deterministic code and not exposed to the agent.

              The agent sees tools as allowed or not by the harness/your MCP config.

              For the most part, the same company that you're connecting to is providing the MCP, so your data isn't going to random places, but you can also just write your own. It's a fairly thin wrapper: a bit of code to call the remote service, and a bit of documentation of when/what/why to do so.
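              That split (deterministic code holds the credential, the agent only sees the tool) can be made concrete. The service, endpoint, and env var below are hypothetical, and the HTTP call is injected so the sketch stays self-contained:

```python
import os

def make_issue_search_tool(http_get):
    """Build a tool handler wrapping a hypothetical issue-tracker API.

    `http_get(url, headers)` is injected (requests.get, urllib, ...) so the
    wrapper is testable. The token is read from the environment inside
    deterministic code; the agent only ever sees tool arguments and results,
    never the credential.
    """
    token = os.environ.get("TRACKER_TOKEN", "")

    def search_issues(args):
        url = "https://tracker.example.com/api/issues?q=" + args["query"]
        return http_get(url, headers={"Authorization": "Bearer " + token})

    return search_issues

# Exercise the wrapper with a stub in place of a real HTTP client.
calls = []
def fake_get(url, headers):
    calls.append((url, headers))
    return {"issues": []}

tool = make_issue_search_tool(fake_get)
tool({"query": "timeout"})
```

              Because the credential never appears in a tool argument or result, it never enters the model's context at all.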

      • By mlnj 2026-03-14 20:58

        You are right.

        Although I have been a skeptic of MCPs, it has been an immense help with agents. I do not have an alternative at the moment.

    • By ph4rsikal 2026-03-14 21:05 (1 reply)

      LangChain is not over-engineered; it's not engineered at all. Pure Chaos.

      • By embedding-shape 2026-03-14 21:18 (1 reply)

        Much like how "literally" doesn't literally mean "literally" anymore, "over-engineered" in most cases doesn't mean "too much engineering happened" but "wrong design/abstractions", which of course translates to "designs/abstractions I don't like".

        • By fartfeatures 2026-03-14 21:39

          Under-engineered is a much better term.

    • By jamesrom 2026-03-14 22:32

      What part of MCP do you think is over-engineered?

      This is quite literally the opposite opinion I and many others had when first exploring MCP. It's so _obviously_ simple, which is why it gained traction in the first place.

    • By tptacek 2026-03-14 22:26

      I still don't really understand what LangChain even is.

    • By gtirloni 2026-03-14 23:25

      What are you investing time in instead?

    • By whattheheckheck 2026-03-14 20:53 (1 reply)

      So let's say you have a RAG LLM chat API connected to an enterprise's document corpus.

      Do you not expose an MCP endpoint? Literally every VS Code or opencode instance gets it for free (a small JSON snippet in their mcp.json config) if you do auth right.
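      For reference, the "small JSON snippet" is roughly this shape; the server name and URL are placeholders, and the exact keys vary a little between clients (VS Code's `.vscode/mcp.json` shown here, other harnesses use similar files):

```json
{
  "servers": {
    "docs-rag": {
      "type": "http",
      "url": "https://mcp.example.com/mcp"
    }
  }
}
```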

      • By CharlieDigital 2026-03-14 21:00 (1 reply)

        Not only editors, but also different runtime contexts like GitHub Agents running in Actions.

        We can plug in MCP almost anywhere with just a small snippet of JSON, and because we're serving it from a server, we get very clear telemetry regardless of tooling and environment.

        • By chatmasta 2026-03-14 21:12 (2 replies)

          What are you using for hosting and deploying the MCP servers? I’d like something low friction for enterprise teams to be able to push their MCP definitions as easily as pushing a Git repo (or ideally, as part of a Git repo, kinda like GitHub pages). It’s obviously not sustainable for every team to host their own MCP servers in their own way.

          So what’s the best centralized gateway available today, with telemetry and auth and all the goodness espoused in this blog post?

          • By CharlieDigital 2026-03-14 21:22

            We built our own (may open source eventually).

            MCP is effectively "just another HTTP REST API"; OAuth and everything. The key part of the protocol is the communication shape and sequence with the client, which most SDKs abstract for you.

            The MCP SDKs make building one very straightforward now, and I would recommend experimenting with them. It is as easy to deploy as any REST API.
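            To see how thin the protocol layer is: a streamable-HTTP MCP session is ordinary JSON-RPC 2.0 bodies POSTed to one endpoint. A sketch of the two key client messages, with payloads abbreviated and the protocol version being whatever revision the client and server negotiate:

```python
import json

def initialize_request(request_id, client_name):
    """Opening handshake: the client introduces itself and negotiates."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "initialize",
        "params": {
            "protocolVersion": "2025-03-26",  # whichever revision is negotiated
            "clientInfo": {"name": client_name, "version": "0.1"},
            "capabilities": {},
        },
    }

def tool_call_request(request_id, tool, arguments):
    """Invoking a tool is just another JSON-RPC method: tools/call."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    }

print(json.dumps(initialize_request(1, "example-client"), indent=2))
```

            Everything else (OAuth, headers, routing) is the same machinery you already have for any HTTP API.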

          • By whattheheckheck 2026-03-14 22:00

            ROSA

            https://docs.aws.amazon.com/whitepapers/latest/overview-depl...

            It should be part of your app and coordinated in a way that everyone in the enterprise can find all the available MCPs; like Backstage or something.

    • By kubanczyk 2026-03-14 21:33

      > if something looks like crud, it probably is crud

      Yes, technically, but you've probably meant cruft here.

  • By jswny 2026-03-14 21:11 (1 reply)

    MCP is fine, particularly remote MCP, which is the lowest-friction way to get access to some hosted service with auth handled for you.

    However, MCP is context bloat and, mechanically, not as good as CLIs + skills. With a CLI you get the ability to filter/pipe (regular Unix bash) without having to expand the entire tool call every single time in context.

    CLIs also let you use heredoc for complex inputs that are otherwise hard to escape.

    CLIs can easily generate skills from the --help output, and add agent-specific instructions on top. That means you can give the agent all the instructions it needs to know what tools exist and how to use them, lazily loaded, without bloating the context window with all the tools upfront (yes, I know tool search in Claude partially solves this).

    CLIs also don’t have to run persistent processes like MCP does, but can if needed.
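    The "--help to skill" step can be sketched as a small generator: capture the CLI's help text and wrap it, plus agent-specific notes, into a markdown doc the agent loads lazily. The layout below is illustrative, not any harness's official skill format:

```python
def help_to_skill(tool_name, help_text, agent_notes=""):
    """Wrap a CLI's --help output into a lazily-loadable skill doc.

    The markdown layout is made up for illustration. The idea: the help
    text already documents the tool, so the skill costs nothing to produce
    and nothing in context until the agent actually opens it.
    """
    parts = [
        "# Skill: " + tool_name,
        "## Usage (from --help)",
        "```\n" + help_text.strip() + "\n```",
    ]
    if agent_notes:
        parts += ["## Agent notes", agent_notes.strip()]
    return "\n\n".join(parts)

sample_help = "usage: mytool [--json] QUERY\n  --json  emit machine-readable output"
print(help_to_skill("mytool", sample_help, "Prefer --json output and pipe through standard Unix tools."))
```

    Regenerating the skill whenever the CLI changes keeps the agent docs from drifting out of sync with the tool.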

    • By simianwords 2026-03-14 21:23 (2 replies)

      But you need to _install_ a CLI. With MCP, you just configure!

      • By jswny 2026-03-15 2:06

        Plenty of MCPs require you to install and run them locally; like I said, remote MCP has a real advantage over CLIs, though.

      • By charcircuit 2026-03-14 21:57 (1 reply)

        You just paste in a web link to a skill. Your agent is smart enough to know how to use it or save it.

HackerNews