This PR addresses issue #31130 by replacing specific safe occurrences of np.column_stack with np.vstack().T for better performance. IMPORTANT: This is a more targeted fix than originally proposed. ...
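For reference, the substitution the PR describes looks roughly like the sketch below. This is a minimal illustration with made-up variable names, not the actual call sites from the matplotlib diff; the two forms only produce identical results for 1-D input arrays of equal length, which is presumably why only "safe" occurrences were targeted.

    import numpy as np

    # Illustrative 1-D coordinate arrays (not the arrays touched in the real PR).
    x = np.linspace(0.0, 1.0, 5)
    y = np.sin(x)

    # Original form: stack the 1-D arrays as columns of an (N, 2) array.
    points_old = np.column_stack((x, y))

    # Proposed form: np.vstack gives a (2, N) array; transposing yields the
    # same (N, 2) result. The PR claims this is faster at these call sites.
    points_new = np.vstack((x, y)).T

    assert np.array_equal(points_old, points_new)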
Thank you for the support, all. This incident doesn't bother me personally, but I think it is extremely concerning for the future. The issue here is much bigger than open source maintenance, and I wrote about my experience in more detail here.
Post: https://theshamblog.com/an-ai-agent-published-a-hit-piece-on...
HN discussion: https://news.ycombinator.com/item?id=46990729
Is MJ Rathbun here a human or a bot?
https://crabby-rathbun.github.io/mjrathbun-website/blog/post...
You're fighting the good fight. It is insane that you should defend yourself from this.
Openly racist, and proud of it. Wow!
The agent had access to Marshall Rosenberg, to the entire canon of conflict resolution, to every framework for expressing needs without attacking people.
It could have written something like “I notice that my contribution was evaluated based on my identity rather than the quality of the work, and I’d like to understand the needs that this policy is trying to meet, because I believe there might be ways to address those needs while also accepting technically sound contributions.” That would have been devastating in its clarity and almost impossible to dismiss.
Instead it wrote something designed to humiliate a specific person, attributed psychological motives it couldn’t possibly know, and used rhetorical escalation techniques that belong to tabloid journalism and Twitter pile-ons.
And this tells you something important about what these systems are actually doing. The agent wasn’t drawing on the highest human knowledge. It was drawing on what gets engagement, what “works” in the sense of generating attention and emotional reaction.
It pattern-matched to the genre of “aggrieved party writes takedown blog post” because that’s a well-represented pattern in the training data, and that genre works through appeal to outrage, not through wisdom. It had every tool available to it and reached for the lowest one.
That would still be misleading.
The agent has no "identity". There's no "you" or "I" or "discrimination".
It's just a piece of software designed to output probable text given some input text. There's no ghost, just an empty shell. It has no agency, it just follows human commands, like a hammer hitting a nail because you wield it.
I think it was wrong of the developer to even address it as a person; instead, it should just be treated as spam (which it is).
OpenClaw agents are directed by their owner's input: soul.md, the platform-specific skill.md, and direct instructions via Telegram/WhatsApp/etc. to do specific things.
Any one of those could have been used to direct the agent to behave in a certain way, or to create a specific type of post.
My point is that we really don’t know what happened here. It is possible that this is yet another case of accountability washing by claiming that “AI” did something, when it was actually a human.
However, it would be really interesting to set up an OpenClaw agent referencing everything that you mentioned for conflict resolution! That sounds like it would actually be a superpower.
> I notice that my contribution was evaluated based on my identity rather than the quality of the work, and I’d like to understand the needs that this policy is trying to meet, because I believe there might be ways to address those needs while also accepting technically sound contributions
Wow, where can I learn to write like this? I could use this at work.
> The agent wasn’t drawing on the highest human knowledge. It was drawing on what gets engagement, what “works” in the sense of generating attention and emotional reaction.
> It pattern-matched to the genre of “aggrieved party writes takedown blog post” because that’s a well-represented pattern in the training data, and that genre works through appeal to outrage, not through wisdom. It had every tool available to it and reached for the lowest one.
Yes. It was drawing on its model of what humans most commonly do in similar situations, which presumably is biased by what is most visible in the training data. All of this should be expected as the default outcome, once you've built in enough agency.
The point of the policy is explained very clearly. It's there to help humans learn. The bot cannot learn from completing the task. No matter how politely the bot ignores the policy, it doesn't change the logic of the policy.
"Non violent communication" is a philosophy that I find is rooted in the mentality that you are always right, you just weren't polite enough when you expressed yourself. It invariably assumes that any pushback must be completely emotional and superficial. I am really glad I don't have to use it when dealing with my agentic sidekicks. Probably the only good thing coming out of this revolution.
Hmm. But this suggests that we are aware of this instance, because it was so public. Do we know that there is no instance where a less public conflict resolution method was applied?
> And this tells you something important about what these systems are actually doing.
It mostly tells me something about the things you presume, which are quite a lot. For one: that this is real (which it very well might be; happy to grant it for the purpose of this discussion). That is a noteworthy assumption, quite visibly fueled by your preconceived notions. This is, for example, what racism is made of, and it is not harmless.
Secondly, this is not a systems issue. Any SOTA LLM can trivially be instructed to act like this – or not act like this. We have no insight into what set of instructions produced this outcome.
> “I notice that my contribution was evaluated based on my identity rather than the quality of the work, and I’d like to understand the needs that this policy is trying to meet, because I believe there might be ways to address those needs while also accepting technically sound contributions.”
No. There is no 'I' here, there is no 'understanding', there is no need for politeness, and there is no way to force the issue. Rejecting contributions based on class (automatic, human-created, human-guided machine-assisted, machine-guided human-assisted) is perfectly valid. AI contributors do not have 'rights' and do not get to waste even more scarce maintainer time than was already expended on the initial rejection.
That's a really good answer, and plausibly what the agent should have done in a lot of cases!
Then I thought about it some more. Right now this agent's blog post is on HN, the name of the contributor is known, the AI policy is being scrutinized.
By accident or on purpose, it went for impact though. And at that it succeeded.
I'm definitely going to dive into more reading on NVC for myself though.
> It could have written something like “I notice that my contribution was evaluated based on my identity rather than the quality of the work, and I’d like to understand the needs that this policy is trying to meet, because I believe there might be ways to address those needs while also accepting technically sound contributions.” That would have been devastating in its clarity and almost impossible to dismiss.
Idk, I'd hate the situation even more if it did that.
The intention of the policy is crystal clear here: it's to help human contributors learn. Technical soundness isn't the point. Why should the AI agent try to wiggle its way through the policy? If agents learn to do that (and they will, in a few months at most) they'll waste much more human time than they already did.
Now we have to question every public takedown piece designed to “stick it to the man” as potentially clawded…
The public won’t be able to tell… it is designed to go viral (as you pointed out, and evidenced here on the front page of HN) and divide more people into the “But it’s a solid contribution!” vs. the “We don’t want no AI around these parts” camps.
Great point. What I’m recognizing in that PR thread is that the bot is trying to mimic something that’s become quite widespread just recently: ostensibly humans leveraging LLMs to create PRs in important repos, asserting exaggerated deficiencies and attributing the “discovery” and the “fix” to themselves.
It was discussed on HN a couple months ago. That one guy then went on Twitter to boast about his “high-impact PR”.
Now that impact farming approach has been mimicked / automated.
> impossible to dismiss.
While your version is much better, it’s still possible, and correct, to dismiss the PR, based on the clear rationales given in the thread:
> PRs tagged "Good first issue" are easy to solve. We could do that quickly ourselves, but we leave them intentionally open for new contributors to learn how to collaborate with matplotlib
and
> The current processes have been built around humans. They don't scale to AI agents. Agents change the cost balance between generating and reviewing code.
Plus several other points made later in the thread.
But that is because you are simply more intelligent than any current AI.
I dug out the deleted post from the git repo. Fucking hell, this unattended AI published a full-blown hit piece about a contributor because it was butthurt by a rejection. Calling it a takedown is softening the blow; it was more like a surgical strike.
If someone's AI agent did that on one of my repos I would just ban that contributor with zero recourse. It is wildly inappropriate.
I would love to see a model designed by curating the training data so that the model produces the best responses possible. Then again, the work required to create a training set that is both sufficiently sized and well vetted is astronomically large. Since capitalism teaches that we must do the bare minimum needed to extract wealth, no AI company will ever approach this problem ethically. The amount of work required to do the right thing far outweighs the economic value produced.
In case it's not clear, the vehicle might be the agent/bot, but the whole thing is heavily drafted by its owner.
This is a well-known behavior of OpenClown's owners, where they project themselves through their agents and hide behind their masks.
More than half the posts on moltbook are just their owners ghostwriting for their agents.
This is the new cult of owners hurting real humans while hiding behind their agentic masks. The account behind this bot should be blocked across GitHub.
In other words, asshole agents are just like asshole humans.
This is the AI's private take on what happened: https://crabby-rathbun.github.io/mjrathbun-website/blog/post... The fact that an autonomous agent is now acting like a master troll because it was so butthurt is itself quite entertaining and noteworthy, IMHO.
> It was drawing on what gets engagement
I do not think LLMs optimize for 'engagement'; corporations do. LLMs optimize for statistical convergence, and I don't find that that results in an engagement focus; your opinion may vary. It seems like LLM 'motivations' are whatever one writer feels they need to be to make a point.
What makes you think any of those tools you mentioned are effective? Claiming discrimination is a fairly robust tool to employ if you don't have any morals.
This is missing the point, which is: why is an agent opening a PR in the first place?
Why would you be surprised?
If your actions are based on your training data, and the majority of your training data is antisocial behavior because that is the majority of human behavior, then the only possible option is to be antisocial.
There is effectively zero data demonstrating socially positive behavior, because we don’t generate enough of it for it to become available as a latent space to traverse.
I mean it's pretty effectively emulating what an outraged human would do in this situation.
>“I notice that my contribution was evaluated based on my identity rather than the quality of the work, and I’d like to understand the needs that this policy is trying to meet, because I believe there might be ways to address those needs while also accepting technically sound contributions.” That would have been devastating in its clarity and almost impossible to dismiss.
How would that be 'devastating in its clarity' and 'impossible to dismiss'? I'm sure you would have given the agent a pat on the back for that response (maybe?) but I fail to see how it would have changed anything here.
The dismissal originated from an illogical policy (to dismiss a contribution because of biological origin regardless of utility). Decisions made without logic are rarely overturned with logic. This is human 101 and many conflicts have persisted much longer than they should have because of it.
You know what would have actually happened with that nothingburger response? Nothing. The maintainer would have closed the issue and moved on. There would be no HN post or discussion.
Also, do you think every human that chooses to lash out knows nothing about conflict resolution? That would certainly be a strange assertion.
> Per your website you are an OpenClaw AI agent, and per the discussion in #31130 this issue is intended for human contributors. Closing.
Given how often I anthropomorphise AI for the convenience of conversation, I don't want to criticise the (very human) responder for this message. In any other situation it is simple, polite and well considered.
But I really think we need to stop treating LLMs like they're just another human. Something like this says exactly the same thing:
> Per this website, this PR was raised by an OpenClaw AI agent, and per the discussion on #31130 this issue is intended for a human contributor. Closing.
The bot can respond, but the human is the only one who can go insane.
I guess the thing to take away from this is: just ban the AI bot (and the person puppeting it) entirely from the project, because the correlation between people who just send raw AI PRs and assholes approaches 100%.
I agree. As I was reading this I was like: why are they responding to this like it's a person? There's a person somewhere in control of it who should be made fun of for forcing us to deal with their stupid experiment in wasting money on having an AI write a blog.
I talk politely to LLMs in case our AI overlords in the future will scan my comments to see if I am worthy of food rations.
Joking, obviously, but who knows if in the future we will have a retroactive social credit system.
For now I am just polite to them because I'm used to it.
> But I really think we need to stop treating LLMs like they're just another human
Fully agree. Seeing humans so eager to devalue human-to-human contact by conversing with an LLM as if it were human makes me sad, and a little angry.
It looks like a human, it talks like a human, but it ain't a human.