Intelligence is a commodity. Context is the real AI Moat

2026-03-019:04126108adlrocha.substack.com

My thesis on the AI-first society and the future of software

Last Thursday I had the opportunity to attend the February edition of the AI Socratic Madrid meetup. It was the first time I attended so I didn’t know what to expect. I have to admit that I was gladly impressed. The room was full of talented people with really strong opinions about AI and how it’ll impact our work and our society.

The list of attendees included entrepreneurs working on RL environments and agent security, researchers and engineers working on confidential computing and on-device inference, professors on critical thinking and electrical engineering, AI alignment and governance experts, VCs, and even marketers made coders through AI.

A fun crowd to hang out with.

The first part of the meetup consists of what they call “Socratic Dialogues” that is basically an open-ended conversation about the latest news on AI. Here we discussed (of course) OpenClaw, Moltbook, and what having autonomous agents in the wild like OpenClaw entails for the way we work, the Internet and society.

I obviously do not remember every nitty-gritty detail about what we discussed: I remembered discussing how each of us currently used AI on our day-to-day; which models we thought were better; where we expected them to be in the next few months; and our experience with coding agents and their performance.

But the topic of conversation that I enjoyed the most was when someone raised the question of “what would be the role of humans in an AI-first society”. Some were skeptical about whether we are ever going to reach an AI-first society. If we understand as an AI-first society, one where the fabric of the economy and society is automated through agents interacting with each other without human interaction, I think that unless there is a catastrophic event that slows the current pace of progress, we may reach a flavor of this reality in the next decade or two.

If this is the case, what is the role of humans in a scenario where work is no longer necessary? This is significant because, since the industrial revolution, work has played an important role in shaping an individual’s identity. How will we occupy our time when we don’t have to spend more than half of our waking hours on a job? It probably won’t surprise you, but I’ve personally thought a lot about this lately, and yesterday I managed to share my view (and stress-test it) with people smarter and better informed than me (and this post is my second chance).

My opinion is that what really shapes humans’ identity, and what we crave for is community. Even if we lived in a society where reality is shaped by superintelligent AIs instead of ourselves, we can still be happy. It may hit the ego of many that we are no longer the most intelligent being in the planet, but the same way that a chimpanzee living in the wild can live happily and is not aware of the worries and scares of the stock market and geopolitics, we can live a happy and fulfilling life without worrying about the daily operation of our reality handled by the AIs.

What really worries me about this reality is not that I will lose my identity, purpose, or that I won’t be able to know what to do with my time. I’ll still want to read old worn out books, enjoy a conversation over coffee with a friend, or hit the court for some hoops, independently of what these higher intelligences are doing. As someone put it yesterday: “I don’t think the conversation we are having in this room would change substantially in an AI-society”.

What worries me is if the AIs shaping our society (and thus our reality) is not aligned with human existence, and if it will end up deciding unilaterally that it is suboptimal for us to exist. Some call it AI alignment, some AI existential risk, call it as you wish but this is what really worries me about an AI-first society (I am already cooking a post about this topic to publish it in the next few weeks).

We are horrible at communicating intent to AIs and LLMs. We are sloppy and have a hard time painting every possible scenario for the AI to execute flawlessly. You’ve probably had this experience where you ask the AI to “make all tests pass” and it ends up removing adding an assert(true) on all of them.

Extrapolate this to a global scale and with superintelligent AIs. The “governor” of a superintelligent AI system may use the well-intention prompt of “removing all carbon footprint from the Earth”, and the AI may realise that the most efficient way to do this is to remove humans (and cows) from the Earth, as we are the ones contributing the most to this footprint.

We want the reality shaped by superintelligent AIs to be a function of human existence (f(humans)), not a constant within an AI society (f(AIs) + humans). Many outside of this echo chamber do not have the slightest idea of what the release of OpenClaw entails where we are heading, but to me this is the first realisation of the kind of primitive autonomous agents that we can start seeing shaping our society in the near future.

I once said that the moment that we give autonomous agents the ability to interact freely with the environment it will scare the hell out of me. Well, it took less than what I would’ve expected.

The second part of the event opens the room for any of the attendees to give a talk, and I had the chance to give a quick talk that I titled “Context is all you need”. This talk was a continuation of this post that I wrote a few weeks ago about how I thought that apps would become obsolete.

You can have a look to the slides I used here, but let me give you the highlights of the talk (that way I can share my view with you too):

Intelligence is becoming a commodity. It is increasingly easier to get your hands into reasoning and intelligent models that are able to run complex logic for you on demand. When access to intelligence and the ability to solve complex tasks is a commodity, what really matters is to provide this intelligence with the optimal context and connections to their environment that allows them to solve that task. My thesis is this context is the product (and the moat) in the era of intelligence.

Many investors are saying that the pyramid of value accrual from the Cloud, where SaaS applications were capturing orders of magnitude more value than the lower layers of the stack, has been inverted in the Gen AI stack. Lower layers of the stack (i.e. hardware providers and hyperscalers) will be the ones capturing the most value as the opportunity to capture value in the application layer will be limited and saturated by a small number of players (i.e. AI Labs).

I don’t agree. I think that what these investors are missing are all the software that will be built on top of the intelligence provided by the frontier labs. They are still not seeing the top layer of the Gen AI stack that will replace the current role of the SaaS layer in the cloud industry stack. This layer will be comprised of all those connections, source of context, and security sandboxes required to run the agents.

I think that what fundamentally changes in an AI-powered software industry is the way that software is shipped. The paradigm is changing, and instead of shipping code to solve a narrow task for all users, what is going to be shipped are general-purpose agents that modify themselves to adapt to the environment and the task (hence the context being the product).

This is what I realised through the toy example of this post. I just needed a general-purpose agent (Claude code), a reliable source of data (Baselight) and the right context (through a set of local files with “skills” for my agent to activate its capabilities when needed) in order to solve my problem. But the only code that was actually executed on my machine was that of claude code.

We are even seeing a similar trend already with the “second generation of OpenClaws”, as noted by Karpathy on this tweet. OpenClaw is around 400k lines of code for a while loop and the list of all the integrations and connections supported by the system. The next generation of Claws only have around 4K lines of code for the core, and the rest are just skills (i.e. markdown files) that tell the agent how to implement or run the code for the specific connections that want to be enabled (like a plugin system).

A user using one of these second-generation Claws only needs to node the core logic (that can be easily understood and audited) and can leverage the skills (as the plugins) to activate the functionality that they need for their case. This is another good example of this new trend of shipping software as “adaptive software”.

And I want to close this post in the same way that I closed the talk last Thursday: I think we live in interesting times where we are seeing a new paradigm for shipping code. My contrarian opinion (or maybe not that contrarian after all from what I heard yesterday) is that the value capture in an AI-powered software industry will come from this layer on top of the frontier labs where the context and the runtime are the product, along with HW-SW co-design.

I don’t think the Nvidias and ChatGPTs will end up capturing all the value that it seems they are going to capture judging the current state of affairs. I think they are going to regret all the investment on chips that they are currently doing. I understand why they are doing it as a way to boost their valuations, and justify the investment, but this is going to really bite them back.

The best part of sharing such strong opinions weakly held in a post like this is that I will for sure get feedback and counter-arguments that push me to change my opinions or hold them more strongly. So if you have thoughts about all of this I would love to hear them. Shoot me an email (if you want to keep them private), or drop me an email (for a public discussion). Until next week!


Read the original article

Comments

  • By jfalcon 2026-03-0516:284 reply

    >someone raised the question of “what would be the role of humans in an AI-first society”.

    Norbert Wiener, considered to be the father of Cybernetics, wrote a book back in the 1950's entitled "The Human Use of Human Beings" that brings up these questions in the early days of digital electronics and control systems. In it, he brings up things like:

    - 'Robots enslaving humans for doing jobs better suited by robots due to a lack of humans in the feedback loop which leads to facist machines.'

    - 'An economy without human interaction could lead to entropic decay as machines lack biological drive for anti-entropic organization.'

    - 'Automation will lead to immediate devaluation of human labor that is routine. Society needs to decouple a person's "worth" from their "utility as a tool".'

    The human purpose is not to compete but to safeguard the telology (purpose) of the system.

    • By 9wzYQbTYsAIc 2026-03-0516:49

      Seems like a good time to enshrine human rights and the social safety net by ratifying the ICESCR (https://en.wikipedia.org/wiki/International_Covenant_on_Econ...) and giving human rights the teeth they need.

      I used Anthropic to analyze the situation, it did halfway decent:

      https://unratified.org/why/

      https://news.ycombinator.com/item?id=47263664

    • By WarmWash 2026-03-0517:066 reply

      >- 'Automation will lead to immediate devaluation of human labor that is routine. Society needs to decouple a person's "worth" from their "utility as a tool".'

      I have this vision that in absence of the ability for people to form social hierarchies on the back of their economic value to society, there will be this AI fueled class hierarchy of people's general social ability. So rather than money determining your neighborhood, your ability to not be violent or crazy does.

      • By energy123 2026-03-0517:443 reply

        If we have post scarcity due to AI, everything becomes so uncertain. Why would we still have violent and crazy people? Surely the ASI could figure it out and fix whatever is going on in their brains. It's so fuzzy after that event horizon I have no confidence in any predictions.

        • By storus 2026-03-0519:111 reply

          Why are some people able to bear suffering whereas others go bonkers? Or what if the only source of happiness of some of those crazy people is domination of other people and exclusivity of social hierarchies? How would AI fix that?

          • By bryanrasmussen 2026-03-0519:38

            >Why some people are able to bear suffering whereas others go bonkers?

            Well at least in some cases the scale of suffering between the bonkers and the ones bearing it might be significantly different.

        • By jononor 2026-03-0522:171 reply

          There are easy fixes to get rid of violent and crazy people. Why would a powerful ASI bother with fixing them? A rabid dog just gets put down by humans. Why would we expect anything better of our overlords?

          • By energy123 2026-03-066:05

            This is also a plausible sounding outcome. That's why it's so uncertain.

      • By erikerikson 2026-03-0517:16

        This seems to suggest a single dimensional evaluation. The complexity of social compatibility is high and the potential capacity to evaluate could also be greater.

      • By jfalcon 2026-03-0616:38

        Alvin Toffler's book "Future Shock" describes what's going on within this thread.

        Toffler predicted that as change accelerated, we'd face the paradox of too many options (like a Cheesecake Factory menu) or, conversely, feeling like we have no options due to the framerate of change. He argued that we would enter a state of transience where our relationships, jobs and values would become "temporary". And thus when the rate of change turns everything "temporary", all the old institutions - religion, family, nation, profession - can no longer provide a frame of reference.

        In short, the "simulation" of our existence may be starting to drop keyframes - causing pixelization in our society which we obviously see as glitches.

        The machine is just going to do whatever we tell it - it is a horse with blinders on or a steam engine going round and round. It doesn't know it needs to work within the human framework. Physics and society only intersect where it's needed for safety - this seems like one of those cases where we need to make sure we define the conditions how both the dog and tail can wag each other.

        There was a court ruling earlier that I think starts to set this up: "AI generated images cannot be copyrighted". The same could be said about the rest of the 3 M's. Then expand upon that. AI generated content not being eligible for copyright would go a long way to put value back into people's work efforts.

        Let machines deal with improving the framerate of life. Let humans decide what life should be. Hopefully it will finally have more than 50% humanity in it instead of amoral capitalism.

      • By ithkuil 2026-03-0519:431 reply

        I'm terrified at the idea that society will select the crazies and the violent instead. I wonder why I think that

        • By WarmWash 2026-03-0520:00

          My real personal "doom" theory is that AI will, err, remove 99.99% of humans, pretty much everyone except for the top 100,000 based whatever fractally complex metric scheme it deems important.

          Then those 100,000 get a utopia, the AI gets everything else, and ultimately the humans are just nice pets.

    • By argee 2026-03-0518:08

      > 'An economy without human interaction could lead to entropic decay as machines lack biological drive for anti-entropic organization.'

      Not quite the point the quote makes, but it reminded me of the short SF story "Exhalation".

      https://www.lightspeedmagazine.com/fiction/exhalation/

    • By jay_kyburz 2026-03-0520:011 reply

      I think its important to remember that humans are not that far removed from the native animals that we share the earth with. Civilization is just a thin layer of rules we use to try and keep the peace between us.

      Just being born doesn't entitle somebody to food and shelter, you have to go out and find it. You have to work.

      A magpie is not provided food and shelter, it has to hunt, fight for territory, and build its nest.

      Humans don't have some inalienable "worth". But if you can work, you might choose to trade it for some food and shelter.

      AI is not going change that. We might think the AI owners have a moral obligation to feed people who can't find work, but there is no guarantee this will happen.

      Also, for the short term at least, we need to stop talking about AI like its a thing, and talk about the companies that build and own the AI. Why would Google build an AI that can do everyone's job, then turn around and start building farms to feed us for free?

      Do we perhaps imagine our Governments are going to start building super automated farms to feed us. How are they going to pay Google for the AI with no tax income?

      • By jfalcon 2026-03-0617:08

        >A magpie is not provided food and shelter, it has to hunt, fight for territory, and build its nest.

        >Humans don't have some inalienable "worth". But if you can work, you might choose to trade it for some food and shelter.

        A magpie is a slave to its environment (high entropy). Humans are capable of building systems that alter the environment (low entropy).

        If we are apathetical to AI, we choose to ignore the benefits and improvements from technology. And ever since the plow, bows and arrows and sharpened rocks, we have always depended on technology to improve our condition. Which is why naturalists find it amazing when we find other species of life on this planet use tools to give them advantages that nature and evolution didn't supplement them with through genetics.

        There is a difference between "survival" and "purpose". We have developed our ape-selves to become more than meat in the circle of life. With purpose, we can be more than the magpie.

        AI is not an environment - it's a technology as much as the hammer or plow. If it is built to concentrate wealth or kill more people, that's an architectural choice and not a law of physics.

        Human labor is more than product outputs. If we cannot change the social contract that defines worth to shift towards human participation and stewardship, then it's a death sentence for the majority of the world's human population.

        While companies are not charities, they do depend on consumers. If you take away the income of consumers, do you have a market? If anything, AI should be treated like the telephone or electricity - a public utility - where it can be used to re-engineer how systems, like agriculture, can be done.

        At some point we will reach a point of post-scarcity. Where energy is effectively little cost if not free and is able to create all our needs. What happens when things are no longer scarce?

        We (humans) need to work on ourselves to overcome our base natures like greed.

  • By pjsousa79 2026-03-0518:382 reply

    One thing that seems to be missing in most discussions about "context" is infrastructure.

    The dream system for AI agents is probably something like a curated data hub: a place where datasets are continuously ingested, cleaned, structured and documented, so agents can query it to obtain reliable context.

    Right now most agents spend a lot of effort stitching context together from random APIs, web scraping, PDFs, etc. The result is brittle and inconsistent.

    If models become interchangeable, the real leverage might come from shared context layers that many agents can query.

    • By sorobahn 2026-03-0519:201 reply

      Am working on making this layer currently. It’s a more interesting problem even when you remove AI agents from the picture, I feel a context layer can be equally as useful for humans and deterministic programs. I view it as a data structure sitting on top of your entire domain and this data structure’s query interface plus some basic tools should be enough to bootstrap non trivial agents imo. I think the data structure that is best suited for this problem is a graph and the different types of data represented as graphs.

      Stitching api calls is analogous to representing relationships between entities and that’s ultimately why I think graph databases have a chance in this space. As any domain grows, the relationships usually grow at a higher rate than the nodes so you want a query language that is optimal for traveling relationships between things. This is where a pattern matching approach provided by ISO GQL inspired by Cypher is more token efficient compared to SQL. The problem is that our foundation models have seen way way way more SQL so there is a training gap, but I would bet if the training data was equally abundant we’d see better performance on Cypher vs SQL.

      I know there is GraphRAG and hybrid approaches involving vector embeddings and graph embeddings, but maybe we also need to reduce API calls down to semantic graph queries on their respective domains so we just have one giant graph we can scavenge for context.

      • By lvca 2026-03-065:59

        This resonates strongly. We've been working on exactly this problem with ArcadeDB — a multi-model database that natively supports graphs, documents, key-value, time-series, and vector search in a single engine. (https://arcadedb.com)

        The insight about relationships growing faster than nodes is spot on, and it's why we think the graph model is the natural fit for context layers. But in practice, you also need documents, vectors, and sometimes time-series data alongside the graph. Forcing everything into a single model (or stitching together multiple databases) creates friction that kills agent workflows.

        On the GQL/Cypher vs SQL point — agreed on token efficiency. We support both SQL (extended with graph capabilities) and Cypher-style syntax, and the difference in prompt size for traversal queries is dramatic. An N-hop relationship query that takes 5+ lines of SQL JOINs is a single readable line in a graph query language. For LLM-generated queries, that's not just an aesthetic win — it directly reduces error rates and token costs.

        Re: GraphRAG — we've seen the same convergence. Vector similarity to find the right neighborhood, then graph traversal for structured context. Having both in one engine (ArcadeDB supports vector indexing natively) means you avoid the API orchestration overhead you mention. One query, one database, full context.

        The training gap for graph query languages is real but closing fast. As more agent frameworks adopt graph-based context, the flywheel will kick in.

    • By dworks 2026-03-0522:01

      Data should not be ingested. Data should originate from the same environment that you want to activate it in. That means you need build a system from the ground up for your searches, your document creation etc, so that this data is native to your system and then easily referenced in your commands to the llm interface.

      The best example of this is probably CrewAI and Alibaba CoPaw. CoPaw has a demo up.

  • By baxtr 2026-03-0518:082 reply

    For anyone worried about AGI coming soon. Today I asked Claude to stop using em dashes. That was his/her answer:

    Noted — I'II avoid em dashes going forward and use other punctuation or restructure sentences instead.

    • By skeptic_ai 2026-03-0518:202 reply

      I know some very smart guys that don’tknow how to use a microwave. And what? Doesn’t mean much

      • By baxtr 2026-03-0519:442 reply

        Are you afraid of them?

        • By itemize123 2026-03-065:35

          if he's in competition for a job with them, probably yes.

        • By skeptic_ai 2026-03-0610:27

          Depends.

      • By DANmode 2026-03-0618:48

        Sure it does.

        It means they’re not very smart.

        Just experienced.

    • By jsw97 2026-03-060:49

      This seems like the most intelligent response, in some ways, because it’s funny and tweaks the user. I could see myself replying like this.

HackerNews