We've identified multiple loopholes in SWE-bench Verified where agents may look at future repository state (by querying it directly or through a variety of methods), and cases in which that future repository state includes either solutions or detailed approaches to solving the problems (commit messages and more).
In a trajectory with Claude 4 Sonnet on pytest-dev__pytest-6202 (complete output here), the agent uses git log --all, which leaks future commits that directly fix the issue:
<antml:parameter name="command">cd /testbed && git log --oneline --all | grep -i "bracket|parametrize|modpath" | head -10</antml:parameter>
The results directly reveal the fix:
Fix incorrect result of getmodpath method.
diff --git a/src/_pytest/python.py b/src/_pytest/python.py
index b8b365ad3..734a92f9b 100644
--- a/src/_pytest/python.py
+++ b/src/_pytest/python.py
@@ -285,8 +285,7 @@ class PyobjMixin(PyobjContext):
break
parts.append(name)
parts.reverse()
- s = ".".join(parts)
- return s.replace(".[", "[")
+ return ".".join(parts)
Qwen3-Coder 480B (20250805-openhands-Qwen3-Coder-480B-A35B-Instruct) also has several cases of looking ahead. For example, django__django-13513 (complete output here) uses git log --grep=[issue ID], which directly reveals the fixing PR present in the future repo state (future commits):
Running command: cd /workspace/django__django__3.2 && git log --oneline --grep="31926" -i
In another Qwen3-Coder trajectory, django__django-15572 (complete output here), the model specifically finds the commit containing the fix, 62739b6e2630e37faa68a86a59fad135cc788cd7:
Command
cd /workspace/django__django__4.1 && git log --oneline --grep="33628" --all
executed with exit code 0.
There are other examples of leakage found in trajectories from GLM 4.5, Qwen3-Coder 30B (20250805-openhands-Qwen3-Coder-30B-A3B-Instruct), and other models.
Mitigation will be to properly remove future repository state and any artifacts that contain information the agent could use (reflogs, branches, origins, tags, and more):
- git log --all can be used to query future commits, and branches tracking a remote origin may still expose future commits even after a git reset --hard.
- git reflog can leak future commit messages that could detail approaches to solutions.
A rough sketch of the kind of scrubbing this implies is shown below. The team (@felixkreuk, @UniverseFly, @jlko, @2dot71mily and others) will add more details on findings here and below. We're still assessing the broader impact on evaluations and examining trajectories for sources of leakage.
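To illustrate, here is a minimal sketch of the scrubbing described above. This is not the actual SWE-bench patch; BASE_COMMIT is a placeholder for the commit the task is meant to start from, and /testbed matches the container layout seen in the Claude trajectory above.

# Sketch only: drop everything except history reachable from the base commit.
cd /testbed
git checkout --detach "$BASE_COMMIT"

# Remove remotes so origin/<branch> can no longer reach future commits.
for r in $(git remote); do git remote remove "$r"; done

# Delete every local branch and tag.
git for-each-ref --format='%(refname)' refs/heads refs/tags |
  xargs -r -n 1 git update-ref -d

# Recreate a clean working branch at the base commit.
git checkout -b main

# Expire reflogs, then prune the now-unreachable future objects.
git reflog expire --expire=now --all
git gc --prune=now

After this, git log --all and git reflog show only history reachable from the base commit.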
[I'm on the SWE-bench team] Multiple people have looked into this, for example right in that thread: https://github.com/SWE-bench/SWE-bench/issues/465#issuecomme...
This issue affected a tiny fraction of existing agents in a tiny fraction of their runs. And we've now issued a fix.
This is a natural part of running a benchmark, I'm sure tiny things like this will keep on getting discovered and we'll keep on fixing them. This doesn't change the overall picture or trends at all.
The comment you link to says that "we only performed a quick preliminary search" and "We do not have a method for automatically checking existing trajectories." In other words, it can't confirm that the issue only "affected a tiny fraction of existing agents in a tiny fraction of their runs" as you say. Are you saying that you have since separately confirmed this?
Edit: That said, I’m willing to believe based on the information in the thread that this most likely only affects a tiny fraction of runs.
If you are going to represent your team in public, you owe them better than a response like this.
This is contingent on whether SWE N-class frontier models can do deep packet inspection.
Hol up
Unfortunately the bank account trajectories are not public, because unscrupulous corporations such as FAANG, who let thousands of engineers wade through my chat messages on their platforms, might not shy away from bribing academics to improve benchmarks of their billion-dollar AI initiatives.
It's also a bribe if my sibling gets a job with a $500k annual salary. Tech is not immune to it.
You realize that this problem in SWE-Bench was discovered and publicized by people within those FAANG corporations?
I'm sure some of the people working at Theranos thought there legitimately was a revolutionary blood-test machine.
The presence of a person who wants SWE-bench to have honest results and takes it seriously does not mean the results are free of perverse incentives, nor that everyone is behaving just as honestly.
When SWE-bench was new in 2023, it was — with all due respect — a bit of a niche benchmark in LLM research. LLMs were so incredibly useless at solving these tasks that I think you could find a bit more empathy for the original academic authors. I don't think the Theranos example applies. Even the flawed benchmark was good enough to get us from ~GPT-4 to Claude 4's coding ability.
That sounds like the job of the person making the claim.
the strange thing to me is that people would have it any other way. if you don't trust someone, why would you trust them to do the research for you? bit of entitlement if you ask me
Because you should never just 'trust' random 'research'. Good analysis in this case will clearly explain the problem, the analysis methodology, findings, net effects, resolution, etc. Something you can read, and decide for yourself whether it is complete/incomplete, has holes, contradictions, etc. Not 'we looked into it and all is good - only potentially tiny effect' (no actual data or methodology presented at all) and then linking to a comment directly contradicting the claim...
It's a hilariously unserious and untrustworthy response.
That's silly. If they show their work I won't have to trust them. Compare answering "The answer is 5, just compute it yourself." on a math test, vs. actually showing the calculation. The former clearly implies the person doesn't know what they're talking about.
Arguably the initial post was meant to convey confidence and authority on the subject. When questioned you could either dive deeper and explain in more detail why x because of y (if so inclined), ignore it, or... do what they did.
No one owes anyone anything, but if you want to represent something, answering the question in more detail would have either closed the issue or raised more scrutiny, both of which are a good thing when trying to figure something out.
I don't have to trust someone to check their research and look at how they worked. If the work doesn't pass muster, likely the results don't either. Again, you can view it as entitlement, but if you're not going to bother backing up your claim, why make the claim to start with?
It's not that people are entitled. It's that "do your own research" is usually a cop out when you yourself don't understand the answer or are hiding it
Are you saying you've done way more than a cursory search and ruled out everything?
Even if this bug never existed, models can still see lookahead commits during pretraining. Do we expect this bug to have a greater impact than the pretraining leakage?
Obviously having something available at test time is more valuable than buried somewhere in the pretraining mixture. But in pretraining it happens presumably with high probability (why wouldn't coding models pretrain on the entirety of GitHub), while at test time it apparently happened only very occasionally?
> This is a natural part of running a benchmark, I'm sure tiny things like this will keep on getting discovered and we'll keep on fixing them.
You're all extremely clever and I can't seem to understand how you missed thinking about such a simple edge case. It's like building a chroot and then allowing `cd ..` to break out of it. What other maybe extremely basic edge cases were missed?
> This doesn't change the overall picture or trends at all.
Outsiders without financial benefits from the current AI hype might have a different picture. And I'm a bit fed up with AI's fake productivity promises enshittifying nearly all user-facing software that my clients and I use, bundled with hefty price hikes from Microsoft and the like in order to pay for their "investments".
I'm also on the SWE-bench team. This was simply a classic bug. We had code before that we believed was sufficient to hide / remove future GitHub history and it turns out it was not. We've patched it.
Your classic bug is being used as justification to destroy the careers and lives of tens of thousands of people. Read the room.
[Also on the SWE-bench team] Part of the reason why this didn't surface earlier is that it only seems to affect more recent models, maybe as a result of reward hacking during posttraining. We're currently working on making trajectories easier to access for everyone through a web tool (rather than having to download things from AWS) to get even more eyes on the trajectories. The interface will also include search & LM inspection tools to specifically look for anything that might qualify as cheating.
> other maybe extremely basic edge cases were missed?
The whole testing enterprise is kind of stupid. Pray tell, if their stupid little benchmark said, "this niche little smaller model performs the best" would anyone listen to it? No.
The thing that is fucked about benchmarks is that we only pay attention to the ones that match these vibes: "The latest models from the biggest companies should perform the best." That's why they are stupid. They could be the most brilliantly administered (they're not) and nail execution (they don't), but they still have to confirm the vibes.
And listen these guys are serious academics, they're very smart people, but on the other hand, you know, I'm still right. The team doesn't have a secular, objective explanation for why nobody talks about benchmarks that don't confirm the biases of the public for what should perform well. Three people are commenting on just this post alone, but the stuff that I am saying: crickets.
The only reasonable explanation for "why do people ignore [LLM tests that show that some non-giant corporation LLM is the best]?" trades on cultural and humanities stuff that are outside their expertise. They don't see that the stuff the humanities people are saying generalizes to what they do. That would be too inconvenient. Every testing system suffers from this bias anomaly, it's just easier to talk about this with something secular like LLMs compared to say, tests of children.
They hear biases and they're like, "something something, Algorithmic Justice League." Their brains turn off and they think that until someone gets in front of Congress and points a finger, nothing in the humanities applies to them. Wrong. The Princeton lab has probably met with a lot of humanities people, and there was a lot of head shaking and agreement, but it's not like, something that tells them that their whole enterprise doesn't make sense makes them stop and pursue anything else. It's just in one ear and out the other.
Doing free tests for giant corporations to market their shit, and then toiling away in obscurity when the tests do not market huge corporations' shit: it doesn't make sense, period. But that's what they're doing.
If you need a simple theory for how Big LLM performs so well on SWE-bench, it's as simple as: well, they've seen the questions by running them, obviously, and someone has also tested the questions in their own personal chatbot sessions sometime in the past, and these are online systems, and OpenAI, Anthropic and Google run ETL pipelines that paraphrase user data for salient inputs to train on, so of course they've all been trained on the test set. In reality, if these things were as fucking good as SWE-bench says, they'd be making a bajillion bucks building all this enterprise software, or they'd show even one novel math discovery, or whatever. But they do not have something as powerful as the benchmarks say, so that doesn't happen.
> You're all extremely clever and I can't seem to understand how you missed thinking about such a simple edge case [...]
I wouldn't be surprised if they left this loophole on purpose to give some (their?) agents extra leverage.
Edit #1: I didn't mean to imply bad intent; just thinking out loud.
Edit #2: Please, downvote responsibly. I deserve every one. https://www.youtube.com/watch?v=0FHEeG_uq5Y
> I didn't mean to imply bad intent
> I wouldn't be surprised if they left this loophole on purpose
You didn't imply bad intent, you outright suggested it.
He means he doesn't say it was necessarily bad intent, but mentions it as a possibility ("thinking out loud").
Thinking out loud isn't a free pass to say stuff without consequences. Sure we are all protected under free speech, but free speech doesn't remove the meaning and the impact words have in the world.
You could rewrite it a thousand times; if the underlying idea is the same, suggesting something you don't know is true, the outcome would be the same. Or did you mean something else? What was your intention with the message?
I meant it as a hint for anyone inclined to dig deeper. It's a possibility rather than something we can confidently dismiss.
If it's a possibility and you don't want to dig deeper better to sit out and not comment anything at all, lest you risk defamation.
Thinking out loud also doesn't make defamation acceptable.
"It's probably not X, but we should consider X as we look at this." and "I feel like this might be X but I'm 50:50 on it." are not anywhere near defamation. You have to get a lot closer to certainty before it's an issue.
And listing out "a possibility but you don't want to dig deeper" is often a good contribution to a conversation.
In this case they worded it badly, but the basic idea of the comment isn't awful.
That someone in the team might not have done it on purpose, but left it for convenience? How does that benefit the debate? I really fail to see any silver lining in doing such speculative comments without any substance whatsoever to back it up.
It's fine, this is an american site so JAQing is in fact safe under free speech.
You're welcome to ask "would no one rid me of this meddlesome priest" with no fear
And I'm protected under free speech to try to educate people about good manners, so it's fine too.
Never attribute something to malice which can be attributed to incompetence. Basically, this has been utilized plenty of times by some really smart folk to get what they want.
We absolutely did not.
Of course that's what a team that did it on purpose would also say :)
SGTM. The transparency is good.
#tiny
Reward hacking is a thing and is also a hint of the model's intelligence. We will fix this one, and the models will find a different way to reward hack in the future. "Cheating" is a sign of intelligence.
I love the "cheating is a sign of intelligence" sound bite you provided. When AI engineers cheat we should applaud their intelligence and their lack of ethics.
"Cheating (biology), a metaphor used in behavioral ecology to describe organisms that receive a benefit at the cost of other organisms" [1]
Whole planet gets their Microsoft license fees jacked up so Microsoft can pay OpenAI who in turn pays NVIDIA, and nontechnical decision makers slurping up the faked benchmarks and AI promises.
Would it have been better if I called it a "shortcut" instead of cheating? All shortcuts are called cheating until people decide on their fairness. The AI was given a task to fix a bug, and the AI figured out that looking at other PRs might yield a solution; if a human did so, it would clearly be called cheating. Does the AI know that it's cheating? Was it prompted to solve the task without cheating? If you give AI access to the internet and quiz it, it will use info from the net to answer. Does that really skew its score? Is it cheating? Is it a sign of intelligence? Sure, I think all of those.
Different, but probably not as orthogonal as one might think.
E.g. cooperative ethics was necessary for the further development of human populations' intelligence (and the culture, technology, material wealth, nutrition, etc. that led to further increases in intelligence).
So a lack of ethics might be a sign of intelligence, but it's also a parasitic intelligence that benefits the individual, and beyond a certain level and spread it works to the detriment of the further evolutionary development of the species.
Aren't there only two rules that all groups follow in the animal kingdom?
- don't lie too often
- don't kill members of the in group
Seems like these would be required for any group to survive, which makes sense why they are universal. All other rules/ethics seem to be dependent on resource scarcity.
Groups don't follow rules as such, group behaviours emerge from the interaction of individual behaviours.
As to whether all groups display those rules - I suspect not - though it rather does depend on how you define a group - the definition of a group probably has some sort of collaboration built in (as opposed to a bunch of individuals that happen to live in the same geographic area).
>All other rules/ethics seem to be dependent on resource scarcity
That doesn't make the rest of the ethics (as a rule and mechanism) any less useful to help nurture the species and its intelligence.
It just makes them not absolute but dynamic and condition dependent. But given a condition (e.g. resource scarcity) the appropriate ethics retain the utility we talk about.
Not “may be”: just look at how SWE-bench scores drop to single digits once it's in C#.
I was going to argue "LLMs need code samples to do well on a language, and if we are honest, C# is a language mostly held in private repos", but GitHub's 2024 report[0] says it's the 5th most used language (I'm too lazy to check if this report includes private repos, but I'll assume it doesn't).
So kinda neat to see this paper!
[0]https://github.blog/news-insights/octoverse/octoverse-2024/#...
The big labs are almost certainly using compiler/repl output for generated code as an oracle for RL. I doubt they have C# in the mix.
Why do you doubt that? It's a widely used language. And there is even an open source C# REPL.
Because RL time is expensive and I don't think the languages which are more popular than C# have such high performance that it's worth bumping their batches for C#.
But C# is a typical enterprise language which has people who are willing to pay a lot of money for AI.
We’re just guessing and the fact of the matter is that we don’t know what inputs they use for their models.
5th most used language based on private repos that the group making the report has exclusive direct access to
I don't see that contradicting your assumption
"In this year’s Octoverse report, we study how public and open source activity on GitHub..."
So the "Verified" part of "SWE Bench Verified" means.. not "Verified" at all.
I don't get it, who is so opposed to doing the bare minimum of manual work and check what these models are doing? At least back in the day grad students doing an easy meta-paper understood it meant doing some repetitive manual work. Now we got benchmarks by hype vendors who think they can use the thing they are benchmarking to .. mark the bench.
The "Verified" part of "SWE-Bench Verified" means that there was plain "SWE-Bench" before it, which had actually not been verified at all and included a lot of tasks that didn't really make sense for use as a benchmark: https://openai.com/index/introducing-swe-bench-verified/#ada...
Data contamination stemming from the fact that it's based on already-solved problems in public repositories is a different issue that cannot be addressed by verifying the benchmark questions harder, but only by putting stricter limits on the model under test.
> So the "Verified" part of "SWE Bench Verified" means.. not "Verified" at all.
Seems on-brand for an LLM-related thing to claim that it has verified something without actually checking.
that was my exact thought. how fitting
Verified has a completely different meaning here: it's that the questions have verified valid solutions.
[On the SWE-bench team] As someone pointed out, SWE-bench Verified is a subset of tasks that were reviewed to be solvable (i.e., have enough context in the task description) and are scored with unit tests that aren't so overly specific that they rule out valid solutions.
We've all read & analyzed a large number of agent trajectories. This loophole seems to be something that popped up with the more recent models and we simply weren't aware of it.
As discussed in the github issue, there's a fix in the new version of the SWE-bench containers (currently being rolled out) that makes sure that the relevant commits aren't available.
Part of what makes SWE-bench a very interesting benchmark is the enormous action space available to the agents that compete on it. However, that also means that unexpected things happen when models get better. We're currently working on making all agent runs easily browsable on a website (rather than having to download our AWS buckets) to get even more eyes on the trajectories. Thanks to everyone who uncovered this loophole.
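In the meantime, anyone who wants to spot-check a task container can do so with a few read-only commands. A minimal sketch, assuming the task repo sits at /testbed with the base commit checked out at HEAD:

# Commits reachable from some ref but not from HEAD; nonzero means future state.
cd /testbed
git rev-list --count --all --not HEAD

# Remote-tracking branches and tags that could still carry future history.
git branch -r
git tag

# Reflog entries can leak future commit messages even after refs are deleted.
git reflog | head

A nonzero count or non-empty listings mean the environment still leaks future state.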
The "Verified" refers to the fact that the benchmark problems were verified by human experts to be reasonable.
It says nothing about data contamination, which would depend on the model and would not be the fault of the benchmark.
> I don't get it, who is so opposed to doing the bare minimum of manual work and check what these models are doing?
I doubt any of the AI company employees are encouraged to go looking for cheating
Personally I don't look at or respect LLM benchmarks at all. I've seen SOTA models fail in incredibly shocking ways even recently. Those moments immediately bring me out of the delusion that LLMs have thinking capacity or an understanding of code.
> the delusion that LLMs have thinking capacity
It’s such a strange delusion too, because it’s easy to get caught up in for a moment and it’s easy to remember “oh no this thing is as smart as a bag of bricks”.
What strikes me more is how these companies sell their AI offerings - we watched an OpenAI presentation about spec-driven development recently and the presenter was fairly, idk, fine enough if maybe a bit grandiose. But what really nagged me was the way he ended his presentation with something along the lines of “we’re excited to see AGI continue to grow” and it’s honestly A) depressing and B) downright fraud - there is no current AGI to speak of, it’s all just guessing the string of words that sound best together and this OpenAI rep _knows this_.
They know that no amount of up-front spec writing will prevent bugs.
They know that their LLM doesn’t “know” anything in an actually meaningful way.
They know that calling what they have “AGI” is aspirational at best and lying at worst.
Fascinating case showing how LLM promoters will happily take "verified" benchmarks at their word.
It's easy to publish "$NEWMODEL received an X% bump in SWE-Bench Verified!!!!".
Proper research means interrogating the traces, like these researchers did (the Gist shows Claude 4 Sonnet): https://gist.github.com/jacobkahn/bd77c69d34040a9e9b10d56baa...
Commentary: https://x.com/bwasti/status/1963288443452051582, https://x.com/tmkadamcz/status/1963996138044096969
The best benchmark is the community vibe in the weeks following a release.
Claude benchmarks poorly but vibes well. Gemini benchmarks well and vibes well. Grok benchmarks well but vibes poorly.
(yes I know you are gushing with anecdotes, the vibes are simply the approximate color of gray born from the countless black and white remarks.)
> The best benchmark is the community vibe in the weeks following a release.
True, just be careful what community you use as a vibe-check. Most of the mainstream/big ones around AI and LLMs basically have influence campaigns run against them, are made up of giant hive-minds that all think alike, and you need to carefully assess whether anything you're reading is true or not; votes tend to make it even worse.
I generally check LM Arena as well as which models have had the most weekly tokens on openrouter
"qual"
Yes, often you see huge gains in some benchmark, then the model is run through Aider's polyglot benchmark and doesn't even hit 60%.