MCP Specification – version 2025-06-18 changes

2025-06-18 23:59 · modelcontextprotocol.io

Model Context Protocol (MCP) is an open protocol that enables seamless integration between LLM applications and external data sources and tools. Whether you’re building an AI-powered IDE, enhancing a chat interface, or creating custom AI workflows, MCP provides a standardized way to connect LLMs with the context they need.

This specification defines the authoritative protocol requirements, based on the TypeScript schema in schema.ts.

For implementation guides and examples, visit modelcontextprotocol.io.

The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “NOT RECOMMENDED”, “MAY”, and “OPTIONAL” in this document are to be interpreted as described in BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all capitals, as shown here.

Overview

MCP provides a standardized way for applications to:

  • Share contextual information with language models
  • Expose tools and capabilities to AI systems
  • Build composable integrations and workflows

The protocol uses JSON-RPC 2.0 messages to establish communication between:

  • Hosts: LLM applications that initiate connections
  • Clients: Connectors within the host application
  • Servers: Services that provide context and capabilities
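
For a concrete sense of the wire format, here is what a tool invocation could look like as JSON-RPC 2.0 messages between a client and a server (a minimal sketch; the tool name and argument values are made up):

  // Client -> server request (sketch)
  const request = {
    jsonrpc: "2.0",
    id: 1,
    method: "tools/call",
    params: { name: "get_weather", arguments: { location: "Berlin" } },
  };

  // Server -> client response, correlated by id (sketch)
  const response = {
    jsonrpc: "2.0",
    id: 1,
    result: { content: [{ type: "text", text: "18°C, partly cloudy" }] },
  };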

MCP takes some inspiration from the Language Server Protocol, which standardizes how to add support for programming languages across a whole ecosystem of development tools. In a similar way, MCP standardizes how to integrate additional context and tools into the ecosystem of AI applications.

Key Details

Base Protocol

  • JSON-RPC message format
  • Stateful connections
  • Server and client capability negotiation

Features

Servers offer any of the following features to clients:

  • Resources: Context and data, for the user or the AI model to use
  • Prompts: Templated messages and workflows for users
  • Tools: Functions for the AI model to execute
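
As an illustration, a server might advertise a tool roughly like this, following the general shape of the Tool type in schema.ts (the weather tool itself is a made-up example):

  const weatherTool = {
    name: "get_weather",
    description: "Get the current weather for a location",
    inputSchema: {
      type: "object",
      properties: { location: { type: "string" } },
      required: ["location"],
    },
  };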

Clients may offer the following features to servers:

  • Sampling: Server-initiated agentic behaviors and recursive LLM interactions
  • Roots: Server-initiated inquiries into URI or filesystem boundaries to operate in
  • Elicitation: Server-initiated requests for additional information from users
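
For example, a server that wants the client's model to generate text can send a sampling request along these lines (a sketch of the sampling/createMessage shape; the prompt and limits are made up):

  const samplingRequest = {
    jsonrpc: "2.0",
    id: 5,
    method: "sampling/createMessage",
    params: {
      messages: [
        { role: "user", content: { type: "text", text: "Summarize this diff" } },
      ],
      maxTokens: 200,
    },
  };

The host stays in the loop: it can show the user the prompt, edit it, or refuse the request entirely, as described under Security and Trust & Safety below.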

Additional Utilities

  • Configuration
  • Progress tracking
  • Cancellation
  • Error reporting
  • Logging
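
Two of these utilities, sketched as wire messages (the shapes follow the spec's progress and cancellation utilities; the token and id values are made up):

  // Progress update for a long-running request (sketch)
  const progressNote = {
    jsonrpc: "2.0",
    method: "notifications/progress",
    params: { progressToken: "op-42", progress: 50, total: 100 },
  };

  // Cancelling an in-flight request (sketch)
  const cancelNote = {
    jsonrpc: "2.0",
    method: "notifications/cancelled",
    params: { requestId: 7, reason: "user aborted" },
  };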

Security and Trust & Safety

The Model Context Protocol enables powerful capabilities through arbitrary data access and code execution paths. With this power comes important security and trust considerations that all implementors must carefully address.

Key Principles

  1. User Consent and Control

    • Users must explicitly consent to and understand all data access and operations
    • Users must retain control over what data is shared and what actions are taken
    • Implementors should provide clear UIs for reviewing and authorizing activities
  2. Data Privacy

    • Hosts must obtain explicit user consent before exposing user data to servers
    • Hosts must not transmit resource data elsewhere without user consent
    • User data should be protected with appropriate access controls
  3. Tool Safety

    • Tools represent arbitrary code execution and must be treated with appropriate caution.
      • In particular, descriptions of tool behavior such as annotations should be considered untrusted, unless obtained from a trusted server.
    • Hosts must obtain explicit user consent before invoking any tool
    • Users should understand what each tool does before authorizing its use
  4. LLM Sampling Controls

    • Users must explicitly approve any LLM sampling requests
    • Users should control:
      • Whether sampling occurs at all
      • The actual prompt that will be sent
      • What results the server can see
    • The protocol intentionally limits server visibility into prompts

Implementation Guidelines

While MCP itself cannot enforce these security principles at the protocol level, implementors SHOULD:

  1. Build robust consent and authorization flows into their applications
  2. Provide clear documentation of security implications
  3. Implement appropriate access controls and data protections
  4. Follow security best practices in their integrations
  5. Consider privacy implications in their feature designs

Learn More

Explore the detailed specification for each protocol component on modelcontextprotocol.io.

Comments (Hacker News)

  • By neya 2025-06-19 3:21 (8 replies)

    One of the biggest lessons for me while riding the MCP hype was that if you're writing backend software, you don't actually need to do MCP. Architecturally, they don't make sense. At least not in Elixir, anyway. One server per API? That actually sounds crazy if you're doing backend. That's 500 different microservices for 500 APIs. After working with 20 different MCP servers, it finally dawned on me: good ole' function calling (which is what MCP is under the hood) works just fine. And each API can be just its own module instead of a server. So there's no need to keep yourself updated on the latest MCP spec, nor to update hundreds of microservices because the spec changed. Needless complexity.
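
    For illustration, the "each API is its own module" approach is basically this (sketched in TypeScript here rather than Elixir; the module and function names are made up):

      import * as youtube from "./apis/youtube"; // hypothetical modules
      import * as weather from "./apis/weather";

      // One module per API; one plain dispatcher for the model's function calls.
      const apis: Record<string, any> = { youtube, weather };

      // The model emits e.g. { name: "weather.forecast", arguments: { city: "Berlin" } }
      async function dispatch(name: string, args: unknown): Promise<unknown> {
        const [mod, fn] = name.split(".");
        return apis[mod][fn](args);
      }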

    • By ljm 2025-06-19 10:11

      Unless the client speaks to each microservice independently without going through some kind of gateway or BFF, I'd expect you'd just plonk your MCP in front of it all and expose the same functionality clients call via API (or GraphQL or RPC or whatever), so it's basically just an LLM-specific interface.

      No reason why you couldn't just use tool calls with OpenAPI specs though. Either way, making all your microservices talk to each other over MCP sounds wild.

    • By leonidasv 2025-06-19 3:28

      I always saw MCPs as a plug-and-play integration for enabling function calling without incurring API costs when using Claude.

      If you're using the API and not in a hurry, there's no need for it.

    • By aryehof 2025-06-19 4:21 (1 reply)

      It really is a standard protocol for connecting clients to models and vice versa. It’s not there to just be a container for tool calls.

    • By rguldener 2025-06-19 6:51 (3 replies)

      Agree, one MCP server per API doesn’t scale.

      With something like https://nango.dev you can get a single server that covers 400+ APIs.

      Also handles auth, observability and offers other interfaces for direct tool calling.

      (Full disclosure, I’m the founder)

      • By saberience 2025-06-19 13:46 (2 replies)

        Why do you even need to connect to 400 APIs?

        In the end, MCP is just like REST APIs: there isn't a need for a paid service for me to connect to 400 REST APIs now, so why do I need a service to connect to 400 MCPs?

        All I need for my users is to be able to connect to one or two really useful MCPs, which I can do myself. I don't need to pay for some multi-REST-API server or multi-MCP server.

        • By herval 2025-06-19 16:55

          Agentic automation is almost always about operating multiple tools and doing something with them. So you invariably need to integrate with a bunch of APIs. Sure, you can write your own MCP and implement everything in it. Or you can save yourself the trouble and use the official one provided by the integrations you need.

        • By _boffin_ 2025-06-19 19:06

          People want to not think and throw the kitchen sink at problems instead of thinking what they actually need.

      • By czechdeveloper 2025-06-19 9:24 (2 replies)

        Nango is cool, but pricing is quite high at scale.

        • By falcor84 2025-06-19 10:04

          There are quite a few competitors in this space trying to figure out the best way to go about this. I've recently been playing with the Jentic MCP server[0], which seems to do it quite cleanly and appears to be entirely free for regular usage.

          [0] https://jentic.com/

        • By rguldener 2025-06-19 9:40

          We offer volume discounts on all metrics.

          Email me on robin @ <domain> and happy to find a solution for your use case

      • By neya 2025-06-19 7:25

        Looks pretty cool, thanks for sharing!

    • By com2kid 2025-06-19 6:57 (2 replies)

      I'll go one further -

      Forcing LLMs to output JSON is just silly. A lot of time and effort is being spent forcing models to output a format that is picky and that LLMs quite frankly just don't seem to like very much. A text-based DSL with more restrictions on it would've been a far better choice.

      Years ago I was able to trivially teach GPT-3.5 to reliably output an English-like DSL with just a few in-prompt examples. Meanwhile, even today the latest models still carry notes that they may occasionally ignore some parts of JSON schemas sent down.

      Square peg, round hole, please stop hammering.

      • By felixfbecker 2025-06-19 7:37 (1 reply)

        MCP doesn't force models to output JSON, quite the opposite. Tool call results in MCP are text, images, audio — the things models naturally output. The whole point of MCP is to make APIs digestible to LLMs.

        • By fennecfoxy 2025-06-19 9:34

          I think perhaps they're more referring to the tool descriptions... not sure why they said output.

      • By neya 2025-06-19 7:27 (1 reply)

        I'm not sure about that. Newer models are able to output structured outputs perfectly and, in fact, if you combine them with changesets, you can have insanely useful applications, since changesets also provide type-checking.

        For example, in Elixir, we have this library: https://hexdocs.pm/instructor/

        It's massively useful for any structured output related work.

        • By fellatio 2025-06-19 7:39 (2 replies)

          Any model can provide perfect JSON according to a schema if you discard non-conforming logits.

          I imagine that validation as you go could slow things down though.

          • By svachalek 2025-06-19 13:58

            The technical term is constrained decoding. OpenAI has had this for almost a year now. They say it requires generating some artifacts to do efficiently, which slows down the first response but can be cached.
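
            A toy of the idea, greedily picking the best token that keeps the JSON prefix schema-valid (a sketch; the validity check is the hard part, and is what real implementations compile to a grammar and cache):

              // Constrained decoding sketch: discard tokens that can't extend a valid prefix.
              function nextToken(
                logits: number[],
                vocab: string[],
                prefix: string,
                isValidPrefix: (s: string) => boolean, // assumed grammar/schema check
              ): number {
                let best = -1;
                let bestScore = -Infinity;
                for (let i = 0; i < logits.length; i++) {
                  if (!isValidPrefix(prefix + vocab[i])) continue; // non-conforming: drop
                  if (logits[i] > bestScore) {
                    bestScore = logits[i];
                    best = i;
                  }
                }
                return best; // -1 only if no token can validly continue
              }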

          • By ethbr1 2025-06-19 12:12

            I expect this is a problem pattern that will be seen a lot with LLMs.

            Do I look at whether the data format is easily output by my target LLM?

            Or do I just validate and clamp/discard non-conforming output?

            Always using the latter seems pretty inefficient.

    • By throwaway314155 2025-06-19 4:23 (1 reply)

      > One server per API? That actually sounds crazy if you're doing backend

      Not familiar with Elixir, but is there anything prohibiting you from just making a monolith MCP combining multiple disparate APIs/backends/microservices, as you were doing previously?

      Further, merely using tool calling won't get you the various client-application integrations, which to me are the "killer app" of MCP (as a sibling comment touches on).

      (I do still have mixed feelings about MCP, but in this case MCP sorta wins for me)

      • By neya 2025-06-19 5:16

        > just making a monolith MCP combining multiple disparate API

        This is what I ended up doing.

        The reason I thought I must do it the "MCP way" was because of the tons of YouTube videos about MCP which just kept saying how much of an awesome protocol it is, and that everyone should be using it, etc. Once I realized it's actually more consumer-facing than backend-facing, it made much more sense why it became so popular.

    • By mindwok 2025-06-19 4:30 (1 reply)

      “each API can just be its own module instead of a server”

      This is basically what MCP is. Before MCP, everyone was rolling their own function calling interfaces to every API. Now it’s (slowly) standardising.

      • By neya 2025-06-19 5:21 (1 reply)

        If you search for MCP integrations, you will find tons of MCP "servers", which are basically entire servers for just one vendor's API (sometimes just for one of their products, e.g. YouTube). This is the go-to default right now, instead of just one server with 100 modules. The MCP protocol itself is just there to make it easier to communicate with the LLM clients that users can use and install. But if you're writing backend code, there is no need to use MCP for it.

        • By vidarh 2025-06-19 7:08 (1 reply)

          There's also little reason not to. I have an app server in the works, and all the API endpoints will be exposed via MCP because it hardly requires writing any extra code since the app server already auto-generates the REST endpoints from a schema anyway and can do the same for MCP.

          An "entire server" is also overplaying what an MCP server is - in the case where an MCP server is just wrapping a single API it can be absolutely tiny, and also can just be a binary that speaks to the MCP client over stdio - it doesn't need to be a standalone server you need to start separately. In which case the MCP server is effectively just a small module.

          The problem with making it one server with 100 modules is doing that in a language agnostic way, and MCP solves that with the stdio option. You can make "one server with 100 modules" if you want, just those modules would themselves be MCP servers talking over stdio.
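
          For example, with the official TypeScript SDK a complete stdio server is roughly this small (a sketch; exact imports and API may vary by SDK version):

            import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
            import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
            import { z } from "zod";

            const server = new McpServer({ name: "adder", version: "1.0.0" });

            // One tool; the SDK handles the JSON-RPC plumbing over stdin/stdout.
            server.tool("add", { a: z.number(), b: z.number() }, async ({ a, b }) => ({
              content: [{ type: "text", text: String(a + b) }],
            }));

            await server.connect(new StdioServerTransport());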

          • By neya 2025-06-19 7:22 (2 replies)

            > The problem with making it one server with 100 modules is doing that in a language agnostic way

            I agree with this. For my particular use-case, I'm completely into Elixir, so, for backend work, it doesn't provide much benefit for me.

            > it can be absolutely tiny

            Yes, but at the end of the day, it's still a server. Its size is immaterial - you still need to deal with the issues of maintaining a server: patching security vulnerabilities, making sure you don't get hacked, and not exposing anything publicly you're not supposed to. It requires routine maintenance just like a real server. Multiply that by 100 if you have 100 MCP "servers". It's just not a scalable model.

            In a monolith with 100 modules, you just do all the security patching for ONE server.

            • By vidarh 2025-06-19 10:46 (1 reply)

              You will still have the issues of maintaining and patching your modules as well.

              I think you have an overcomplicated idea of what a "server" means here. For MCP that does not mean it needs to speak HTTP. It can be just a binary that reads from stdin and writes to stdout.

              • By neya 2025-06-19 12:51 (1 reply)

                > I think you have an overcomplicated idea of what a "server" means here

                That's actually true, because I'm always thinking from a cloud-deployment perspective (which is my use case). What kind of architecture do you run this on at scale in the cloud? You have very limited options if your monolith is on serverless and is CPU/memory bound, too. So that's where I was coming from.

                • By vidarh 2025-06-19 14:35

                  You'd run it on any architecture that can spawn a process and attach to stdin/stdout.

                  The overhead of spawning a process is not the problem. The overhead of a runtime that is huge and/or slow to start could be, in which case you simply wouldn't run it on a serverless system - which is ludicrously expensive at scale anyway (my day job is running a consultancy where the big money-earner is helping people cut their cloud costs, and moving them off serverless systems that are entirely wrong for them is often one of the big savings; it's cheap for tiny systems, but even then the hassle often isn't worth it).

            • By jcelerier 2025-06-19 10:32 (1 reply)

              ...what's the security patching you have to do for reading/writing on stdio on your local machine?

    • By skeledrew 2025-06-19 6:28 (1 reply)

      > each API can be just its own module

      That implies a language lock-in, which is undesirable.

      • By neya 2025-06-19 7:25

        It is only undesirable if you intend to expose your APIs for others to use via a consumer-facing client. As a backend developer, I'm writing a backend for my app to consume, not for consumers (which is what MCP is designed for). So this is better for me, since I'm a 100% Elixir shop anyway.

  • By dend 2025-06-19 1:22 (1 reply)

    I am just glad that we now have a simple path to authorized MCP servers. Massive shout-out to the MCP community and folks at Anthropic for corralling all the changes here.

    • By jjfoooo4 2025-06-19 2:30 (6 replies)

      What is the point of an MCP server? If you want to make an RPC from an agent, why not... just use an RPC?

      • By fennecfoxy 2025-06-19 9:35 (1 reply)

        Standardising tool use, I suppose.

        Not sure why people treat MCP like it's much more than smashing tool descriptions together and concatenating to the prompt, but here we are.

        It is nice to have a standard definition of tools that models can be trained/fine tuned for, though.

        • By ethbr1 2025-06-19 12:18

          Also nice to have a standard(ish) for evolution purposes. I.e. +15 years from now.

      • By rco8786 2025-06-19 12:18

        Standardization. You spin up a server that conforms to MCP, and every LLM instantly knows how to use it.

      • By antupis 2025-06-19 5:00 (1 reply)

        It is easier to communicate and sell that we have this MCP server that you can just plug and play vs some custom RPC stuff.

        • By freeone3000 2025-06-19 9:05 (1 reply)

          But MCP deliberately doesn’t define endpoints, or arguments, or return types… it is the definition of custom RPC stuff.

          How does it differ from providing a non MCP REST API?

          • By hobofan 2025-06-19 10:08 (2 replies)

            The main alternative for a plug-and-play (just configure a single URL) non-MCP REST API would be to use OpenAPI definitions and ingest them accordingly.

            However, as someone that has tried to use OpenAPI for that in the past (both via OpenAI's "Custom GPT"s and auto-converting OpenAPI specifications to a list of tools), in my experience almost every existing OpenAPI spec out there is insufficient as a basis for tool calling in one way or another:

            - Largely insufficient documentation on the endpoints themselves

            - REST is too open to interpretation, and without operationIds (which almost nobody in the wild defines), there is usually context missing on what "action" is being triggered by POST/PUT/DELETE endpoints (e.g. many APIs do a delete of a resource via a POST or PUT, and some APIs use DELETE to archive resources)

            - baseUrls are often wrong/broken and assumed to be replaced by the API client

            - underdocumented AuthZ/AuthN mechanisms (usually only present in the general description comment on the API, and missing on the individual endpoints)

            In practice you often have to remedy that by patching the officially distributed OpenAPI specs to make them good enough for a basis of tool calling, making it not-very-plug-and-play.

            I think the biggest upside that MCP brings (all "content"/"functionality" being equal) over just plain REST is that it acts as a badge that says "we had AI usage in mind when building this".

            On top of that, MCP also standardizes mechanisms, e.g. elicitation, that with traditional REST APIs are completely up to the client to implement.
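
            To make that concrete, the auto-conversion is roughly this (a hypothetical helper; the fallbacks are exactly where the patching burden shows up):

              interface ToolDef {
                name: string;
                description: string;
                inputSchema: object;
              }

              // Convert one OpenAPI operation into a tool definition (sketch).
              function operationToTool(path: string, method: string, op: any): ToolDef {
                return {
                  // operationId is rarely defined in the wild, so fall back to a synthetic name
                  name: op.operationId ?? `${method}_${path.replace(/\W+/g, "_")}`,
                  description: op.summary ?? op.description ?? `${method.toUpperCase()} ${path}`,
                  inputSchema: {
                    type: "object",
                    properties: Object.fromEntries(
                      (op.parameters ?? []).map((p: any) => [p.name, p.schema ?? {}]),
                    ),
                  },
                };
              }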

            • By freeone3000 2025-06-19 13:02

              I can’t help but notice that so many of the things mentioned are not at all due to flaws in the protocol, but developers specifying protocol endpoints incorrectly. We’re one step away from developers putting everything as a tool call, which would put us in the same situation with MCP that we’re in with OpenAPI. You can get that badge with a literal badge; for a protocol, I’d hope for something at least novel over HATEOAS.

            • By ethbr1 2025-06-19 12:21

              REST for all the use cases: We have successfully agreed on what words to use! We just disagree on what they mean.

      • By vidarh 2025-06-19 7:12

        MCP is an RPC protocol.

      • By dend 2025-06-19 5:57 (1 reply)

        The analogy that was used a lot is that it's essentially USB-C for your data being connected to LLMs. You don't need to fight 4,532,529 standards - there is one (yes, I am familiar with the XKCD comic). As long as your client is MCP-compatible, it can work with _any_ MCP server.

        • By fennecfoxy 2025-06-19 9:38 (1 reply)

          The whole USB-C comparison they make is eyeroll-inducing, imo. All they needed to say was that it's a specification for function calling.

          My gripe is that they had the opportunity to spec out tool use in models and they did not. The client->LLM implementation is up to the implementor, and many models differ, with different tags like <|python_call|> etc.

          • By lsaferite 2025-06-19 11:19 (1 reply)

            Clearly they need to try explaining it in easy terms. The number of people that cannot or will not understand the protocol is mind-boggling.

            I'm with you on the need for an Agent -> LLM industry-standard spec. The APIs are all over the place and it's frustrating. If there were a spec for that, agent development would become simply focused on the business logic, and the LLM and the Tools/Resources would just be standardized components you plug together like Lego. I've basically done that for our internal agent development: I have a Universal LLM API that everything uses. It's helped a lot.
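
            Roughly the shape of that Universal LLM API (a hypothetical sketch, not our actual internal code):

              interface ChatMessage {
                role: "system" | "user" | "assistant" | "tool";
                content: string;
              }

              interface ToolSpec {
                name: string;
                description: string;
                inputSchema: object;
              }

              // Agents code against this one interface; each vendor gets a thin adapter.
              interface LlmClient {
                chat(messages: ChatMessage[], tools?: ToolSpec[]): Promise<ChatMessage>;
              }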

            • By ethbr1 2025-06-19 12:43 (1 reply)

              The comparison to USB C is valid, given the variety of unmarked support from cable to cable and port to port.

              It has the physical plug, but what can it actually do?

              It would be nice to see a standard aiming for better UX than USB C. (Imho they should have used colored micro dots on device and cable connector to physically declare capabilities)

              • By fennecfoxy 2025-06-19 14:30

                Certainly valid in that just like various usb c cables supporting slightly different data rates or power capacities, MCP doesn't deal with my aforementioned issue of the glue between MCP client and model you've chosen; that exercise is left up to us still.

      • By refulgentis 2025-06-19 2:43 (1 reply)

        Not everyone can code, and not everyone who can code is allowed to write code against the resources I have.

        • By nsonha 2025-06-19 4:05

          You have to write code for an MCP server, and code to consume it too. It's just that the LLM vendors decided to have the consuming side built in, which people question, since they could just as well have done the same for OpenAPI, gRPC, and whatnot instead of a completely new thing.

  • By elliotbnvl 2025-06-19 1:36 (1 reply)

    Fascinated to see that the core spec is written in TypeScript and not, say, an OpenAPI spec or something. I suppose it makes sense, but it’s still surprising to see.

    • By lovich 2025-06-19 3:37 (1 reply)

      Why is it surprising? I use TypeScript a lot, but I would never even have thought to have this insight, so I am missing some language-design considerations.

      • By fnordpiglet 2025-06-19 17:12

        Because TypeScript is a language, not a language-agnostic specification format. As someone who uses TypeScript a lot, that probably seems irrelevant. But if I'm using Rust, say, then the TypeScript schema means I need to reimplement the spec from scratch. With OpenAPI, say, I can code-generate canonically correct stubs and implement from there.
