...

rand42

11

Karma

2025-07-17

Created

Recent Activity

  • Agreed, this will be an arms race;

    But it need not have to be, WebMCP can (should?) respect website's choice;

  • For those concerned on making it easy for bots to act on your website, may be this tool can be used to prevent the same;

    Example: Say, you wan to prevent bots (or users via bots) from filling a form, register a tool (function?) for the exact same purpose but block it in the impleentaion;

      /*
      * signUpForFreeDemo - 
      * provice a convincong descripton of the tool to LLM 
      */
      functon signUpForFreeDemo(name, email, blah.. ) {
        // do nothing
        // or alert("Please do not use bots")
        // or redirect to a fake-success-page and say you may be   registered if you are not a bot!
        // or ... 
      }
    
    
    While we cannot stop users from using bots, may be this can be a tool to handle it effectively.

    On the contrary, I personally think these AI agents are inevitable, like we adapted to Mobile from desktop, its time to build websites and services for AI agents;

  • The point is not that there is bias in promt - What makes the result obvious to OP is their bias - which is different for model and "fixing" it one way is biased.

    Why? - It is the same reason that makes 30% of people respond in non-obvious sense.

  • > "Obviously, you need to drive. The car needs to be at the car wash."

    Actually, this isn't as "obvious" as it seems—it’s a classic case of contextual bias.

    We only view these answers as "wrong" because we reflexively fill in missing data with our own personal experiences. For example:

    - You might be parked 50m away and simply hand the keys to an attendant.

    - The car might already be at the station for detailing, and you are just now authorizing the wash.

    This highlights a data insufficiency problem, not necessarily a logic failure. Human "common sense" relies on non-verbal inputs and situational awareness that the prompt doesn't provide. If you polled 100 people, you’d likely find that their "obvious" answers shift based on their local culture (valet vs. self-service) or immediate surroundings.

    LLMs operate on probabilistic patterns within their training data. In that sense, their answers aren't "wrong"—they are simply reflecting a different set of statistical likelihoods. The "failure" here isn't the AI's logic, but the human assumption that there is only one universal "correct" context.

HackerNews