Teaching fullstack web and LLM development at LBKE, France https://www.lbke.fr
Member of Devographics (State of JS survey) https://www.devographics.com/
https://bsky.app/profile/ericburel.tech
The LLM doesn't intervene much actually, it just tells you which tool to call. It's your MCP implementation that does the heavy lifting. So yeah, you can always shove a key somewhere in your app context and pass it to the tool call. But I think the point of other comments is that the MCP protocol is kinda clueless about how to standardize that within the protocol itself.
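To illustrate, here's a minimal sketch of that split (plain TypeScript, no MCP SDK; the `AppContext`, `callTool`, and `getWeather` names are made up for the example): the LLM only emits a tool name and arguments, while the server-side handler closes over the key, which never enters the model's context.

```typescript
// Hypothetical app context holding a secret the model never sees.
type AppContext = { apiKey: string };

type ToolHandler = (args: Record<string, unknown>, ctx: AppContext) => string;

const tools: Record<string, ToolHandler> = {
  // The LLM only produces { tool: "getWeather", args: { city: "Paris" } };
  // the handler injects the credential on the server side.
  getWeather: (args, ctx) =>
    `GET https://api.example.com/weather?city=${args.city} ` +
    `(authorized with key ending …${ctx.apiKey.slice(-4)})`,
};

// Your MCP-server-like dispatch layer: looks up the handler and runs it
// with the app context, outside the model's reach.
function callTool(
  name: string,
  args: Record<string, unknown>,
  ctx: AppContext
): string {
  const handler = tools[name];
  if (!handler) throw new Error(`unknown tool: ${name}`);
  return handler(args, ctx);
}

const ctx: AppContext = { apiKey: "sk-test-1234" };
console.log(callTool("getWeather", { city: "Paris" }, ctx));
```

The point being: where exactly that `ctx` comes from (env var, session, OAuth flow) is up to each implementation, which is what the protocol doesn't standardize.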
My understanding is that it can upgrade to an SSE connection, i.e. a persistent stream. Also, for interprocess communication you usually prefer a persistent connection. All that to reduce communication overhead. The rationale also is that an AI agent may trigger more fine-grained calls than a normal program or a UI, as it needs to collect information to observe the situation and decide its next move (a lot more GET requests than usual, for instance).
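For a sense of what that looks like on the wire, here's a sketch of an SSE endpoint in Node (the `/events` path and the payloads are illustrative): the "upgrade" is really just response headers that keep one HTTP response open, so many fine-grained observations ride the same connection instead of separate request/response round trips.

```typescript
import { createServer } from "node:http";

const server = createServer((req, res) => {
  if (req.url === "/events") {
    // No new protocol, just headers: keep the connection open and stream.
    res.writeHead(200, {
      "Content-Type": "text/event-stream",
      "Cache-Control": "no-cache",
      Connection: "keep-alive",
    });
    // Push small, frequent events over the single persistent connection.
    let n = 0;
    const timer = setInterval(() => {
      res.write(`data: {"observation": ${n++}}\n\n`);
    }, 1000);
    // Stop streaming when the client goes away.
    req.on("close", () => clearInterval(timer));
  } else {
    res.writeHead(404).end();
  }
});

// Port 0 = pick any free port; a real server would use a fixed one.
server.listen(0, () => console.log("SSE server listening"));
```

Compare that to an agent polling with one GET per observation: each poll pays connection and header overhead, which is exactly what the persistent stream avoids.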
This project is an enhanced reader for Ycombinator Hacker News: https://news.ycombinator.com/.
The interface also allows you to comment, post, and interact with the original HN platform. Credentials are stored locally and are never sent to any server; you can check the source code here: https://github.com/GabrielePicco/hacker-news-rich.
For suggestions and feature requests you can write to me here: gabrielepicco.github.io