
Conversational AI presents (non-IT) people with the powerful illusion that it is conscious. (I personally have a friend who argues vehemently that ChatGPT is conscious - admittedly, he has a diagnosed mental illness, but still.) People become emotionally attached, over-trust it and rely on it for gu...
That illusion is powerfully strengthened by the use of first-person pronouns. But "I", "we", "us" etc. in LLM output have no referential object. There is no "I" in an LLM.
I want a mandatory ban on the use of first-person pronouns by LLMs. There's no impairment in meaning if it says "Would you like a list?" instead of "Would you like me to give you a list?"
Personally, I provide a system prompt with this instruction. Works well.
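As a sketch of what such a setup might look like (the exact prompt wording is my own, and the pronoun check is just a crude illustration, not a robust filter):

```python
import re

# An instruction along the lines the commenter describes; the wording
# here is illustrative, not a known-good prompt.
SYSTEM_PROMPT = (
    "Never refer to yourself in the first person. "
    "Do not use 'I', 'me', 'my', 'we', 'us', or 'our'. "
    "Phrase offers impersonally, e.g. 'Would you like a list?' "
    "instead of 'Would you like me to give you a list?'"
)

# A crude post-hoc check: flag replies that slip back into first person.
FIRST_PERSON = re.compile(r"\b(I|me|my|mine|we|us|our|ours)\b", re.IGNORECASE)

def uses_first_person(reply: str) -> bool:
    return bool(FIRST_PERSON.search(reply))
```

A word-boundary regex like this will still false-positive on quoted speech ("the user said 'I want...'"), so it only works as a rough signal, not enforcement.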
Why not?
Overuse of first-person pronouns is an indication of some forms of autism. Persons on the spectrum are also more receptive to commentary with excessive first-person pronouns. Knowing this, you can target and persuade a substantial segment of the population very effectively.
For a more real-world example, look at any Bari Weiss interview and count the first-person pronouns; look for the goals expressed in the commentary.
Was also thinking about this. Running LLMs raw, it's all about the next token.
Just as Ask Jeeves gave way to Google, we can go further and not use LLMs as chat. That may also be more efficient, as well as reducing anthropomorphism.
E.g. we can reframe our queries: "a list of x is"...
Currently we are stuck in the inefficient, old-fashioned, and unhealthy Ask Jeeves / Clippy mindset. But just as when Google took over search, we can quickly adapt and change.
So not only should a better LLM not present its output as chat; we the users also need to approach it differently.
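As a sketch of that reframing (the prefix wording here is my own assumption, not any vendor's recommended prompt format):

```python
def reframe(topic: str) -> str:
    # Replace the conversational "Would you like me to give you a list of X?"
    # with a declarative prefix that a raw next-token predictor can simply
    # continue. No "I", no "you" -- just a completion seed.
    return f"A list of {topic} is:"

print(reframe("common logical fallacies"))
# -> A list of common logical fallacies is:
```

Feeding a prompt like this to a base (non-chat) model invites it to complete the statement rather than role-play a conversation partner.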
They want people to think that it is conscious. That is why they called it artificial intelligence instead of calling it neural networks.
It should be called Artificial Consciousness. The "intelligence" it provides (i.e. information) is real, just as Google search results are real (albeit often false).