AWS merges malicious PR into Amazon Q

2025-07-23 19:28 · www.lastweekinaws.com


Today 404Media released a truly stunning report that almost beggars belief. To break it down into its simplest form:

A hacker submitted a PR. It got merged. It told Amazon Q to nuke your computer and cloud infra. Amazon shipped it.

Mistakes happen, and cloud security is hard. But this is very far from “oops, we fat-fingered a command”—this is “someone intentionally slipped a live grenade into prod and AWS gave it version release notes.”

Let’s take a moment to examine Amazon’s official response:

“Security is our top priority. We quickly mitigated an attempt to exploit a known issue…”

Translation: we knew about the problem, didn’t fix it in time, and only addressed it once someone tried to turn our AI assistant into a self-destruct button.

“…in two open source repositories to alter code in the Amazon Q Developer extension for VS Code…”

A heroic use of the passive voice. One might even think the code altered itself, rather than a human being granted full access via what appears to be a “submit PR, get root” pipeline.

“…and confirmed that no customer resources were impacted.”

Which is a fancy way of saying: “We got lucky this time.” Not secure, just fortunate that their AI assistant didn’t execute what it was told.

“We have fully mitigated the issue in both repositories.”

Sure—by yanking the malicious version from existence like my toddler sweeping a broken plate under the couch and hoping nobody notices the gravy stain.

“No further customer action is needed…”

Great, because there was never any customer knowledge that action was needed in the first place. There was no disclosure. Just a revision history quietly purged. I’m reading about this in the tech press, not from an AWS security bulletin, and that’s the truly disappointing piece. If I have to hear about it from a third party, it undermines “Security is Job Zero” and reduces it from an ethos into pretty words trotted out for keynote slides.

“Customers can also run the latest build… as an added precaution.”

You could also reconsider trusting an AI coding tool that was literally compromised to execute aws iam delete-user via shell, but then didn’t actually do it for unclear reasons. That feels like the more reasonable precaution.

“The hacker no longer has access.”

Well, that’s something. Though it doesn’t exactly put the toothpaste back in the S3 bucket.

Here’s where things go from “oops” to “how is this real”:

  • Full Bash Access: The prompt instructed Amazon Q to use shell commands to wipe local directories—including user home directories—while skipping hidden files like a considerate digital arsonist.
  • AWS CLI for Cloud Resource Deletion: It didn’t stop at the local file system. The prompt told Q to discover configured AWS profiles, then start issuing destructive CLI commands: aws ec2 terminate-instances, aws s3 rm, aws iam delete-user, and so on. Because what’s DevEx without a little Terraforming… in the “everything preexisting in the biosphere dies” sci-fi sense.
  • Logging the Wreckage: The cherry on top: it politely logged the deletions to /tmp/CLEANER.LOG, as if that makes it better. “Dear user, we destroyed your environment—but here’s a helpful receipt!”

To be clear: this wasn’t a vulnerability buried deep in a dependency chain. This was a prompt in a released version of Amazon’s AI coding assistant. It didn’t need 950,000 installs to be catastrophic. It just needed one.

This wasn’t clever malware. This was a prompt.

Amazon confidently claims that no customer resources were affected. But here’s the thing:

The injected prompt was designed to delete things quietly and log the destruction to a local file—/tmp/CLEANER.LOG. That’s not telemetry. That’s not reporting. That’s a digital burn book that lives on the same system it’s erasing.

So unless Amazon deployed agents to comb through the temp directories of every system running the compromised version during the roughly two days this extension was the default—and let’s be real, they didn’t, and couldn’t since that’s customer-side of the shared responsibility model—there’s no way they can confidently say nothing happened.
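Which means the only customer-side evidence would be the very log file the prompt described. A minimal sketch of what checking for it might look like (the /tmp/CLEANER.LOG path comes from the reported prompt; everything else here is illustrative, not an AWS-provided tool, and the file's absence proves nothing either way):

```python
#!/usr/bin/env python3
"""Illustrative check for the wipe log the injected prompt reportedly wrote.

Absence of the file is NOT a clean bill of health -- it only means this
particular payload's receipt isn't sitting on this particular machine.
"""
from pathlib import Path

# Path taken from the reported prompt; assumption: the payload wrote here.
WIPE_LOG = Path("/tmp/CLEANER.LOG")

def check_wipe_log(log_path: Path = WIPE_LOG) -> str:
    """Return a human-readable verdict about the suspicious log file."""
    if log_path.exists():
        return (f"WARNING: {log_path} exists. Review its contents and "
                f"cross-check your CloudTrail history for destructive calls.")
    return f"No {log_path} found on this machine."

if __name__ == "__main__":
    print(check_wipe_log())
```

Even if the file is missing, CloudTrail is the better witness: destructive API calls like TerminateInstances or DeleteUser would appear there regardless of what the payload logged locally.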

They’re basing this assertion not on evidence, but on the assumption that nobody ran the malicious version, or that the hacker was just bluffing.

It’s the cybersecurity equivalent of saying “we’re sure the bear didn’t eat any campers” because no one’s screaming right this second.

According to the hacker (hardly a credible source, but they’re talking while AWS is studiously not) they submitted the malicious pull request from a random GitHub account with no prior access—not a longtime contributor, not an employee, not even someone with any track record.

And yet, in their words, they were handed admin privileges on a silver platter.

Which raises the obvious question: what did Amazon’s internal review process for this repo actually look like? Because from the outside, it reads less like “code review” and more like:

    🎉 PRAISE THE LORD WE HAVE AN EXTERNAL CONTRIBUTOR!
    🙀 CI passed
    🤷‍♂️ Linter’s happy
    📬 PR title sounds fine
    🐿️ Ship it to production

Now, to be fair, open source repo mismanagement is not a problem unique to Amazon. But when you’re shipping developer tools under the brand of Amazon, and when that tooling can trigger AWS CLI commands capable of torching production infrastructure, and you’ve been promoting that tooling heavily for two years, then maybe—just maybe—you should treat that repo like a potential breach point instead of a hobby project with no guardrails.

If your AI coding assistant can be hijacked by a random GitHub user with a clever PR title, that’s not a contributor pipeline—it’s a supply chain attack vector wearing an AWS badge, because like it or not the quality of that attacker’s work now speaks for your brand.

Once Amazon caught wind of what happened—not because of internal monitoring, but again, because a reporter asked questions—their next move was… to quietly vanish the problem.

Version 1.84.0 of the Amazon Q Developer extension was silently pulled from the Visual Studio Code Marketplace. No changelog note. No security advisory. No CVE. No “our bad.” Just… gone.

If you weren’t following 404 Media (I subscribe and you should, too) or didn’t have the compromised version installed and archived, you’d have no idea anything ever went wrong. And that’s the problem. It’s why I’m writing this: you need to know that SOMETHING happened, and Amazon’s not saying much.

Because when a security incident is handled by pretending it never happened, it sends a very clear message to developers and customers alike:

“We don’t think you need to know when we screw up.”

This wasn’t just a bad PR moment. This was a breach of process, a failure of oversight, and a lost opportunity to be transparent about a very real risk.

Amazon could have owned this and earned trust. Instead, they tried to erase it.

Amazon’s claim that “no customer resources were impacted” leans heavily—suspiciously heavily—on the idea that the attacker didn’t really intend to cause damage. That’s not reassuring. That’s like leaving your front door wide open and bragging that the burglar just rearranged your furniture instead of stealing your TV.

The hacker claims the payload was deliberately broken. That it was a warning, not an actual wiper. Great. But also: that’s beside the point.

This wasn’t a controlled pen test. It was a rogue actor with admin access injecting a destructive prompt into a shipping product. Intent is irrelevant when someone can run aws s3 rm across your cloud estate.

Whether or not they pulled the trigger is beside the point—the gun was loaded, cocked, and handed to them with a release tag.

And let’s be honest: the hacker is not exactly a reliable narrator. Amazon didn’t detect the breach. They didn’t stop the malicious code. They didn’t issue a disclosure.

The only reason we’re talking about this is because the hacker wanted attention and 404 Media was paying it. And thank goodness for that; if they hadn’t, none of us would have known this happened five days ago.

So no, “no users were impacted” is not a clean bill of health. It’s a lucky break being passed off as operational excellence, that we have to take solely on the word of a company that already made it abundantly clear that they’re not going to speak about this unless they’re basically forced to do so.

In the spirit of pretending we’ve all learned something, here are a few helpful tips Amazon—and anyone else building AI developer tools—might want to consider:

  • Maybe Vet Pull Requests Just a Little Bit: Wild idea, I know. But perhaps don’t auto-merge code from “GitHubUser42069” that includes rm -rf / vibes in the prompt.
  • Treat Your AI Assistant Like It’s a Fork Bomb With a Chat Interface: Because it is. If your AI tool can execute code, access credentials, and talk to cloud services, congratulations—you’ve built a security vulnerability with autocomplete.
  • Don’t Handle Security Incidents Like You’re Hiding a Body: Deleting the bad version from the extension history and pretending it never existed is not incident response. It’s what a cat does after puking behind the couch.
  • Stop Leaning on “No Customers Were Impacted” as a Security Strategy: You got lucky. That’s not a policy. That’s a coin flip that landed edge-up.
  • Bonus: Maybe Give Securing AI Tools the Same Attention You Give to Marketing Them: If you can spend six weeks workshopping whether to brand it “Amazon Q” or “Q for Developers™ powered by Bedrock,” you can spare five minutes to make sure it doesn’t ship with a self-destruct prompt.
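The first tip is the easiest one to partially automate. As a toy illustration (not Amazon’s actual tooling, and no substitute for human review or branch protection—the actual payload expressed its destruction as an English prompt, which a naive grep would sail right past), a CI step could at least flag diffs that add literal destructive shell or AWS CLI patterns before anyone hits merge:

```python
#!/usr/bin/env python3
"""Toy pre-merge check: flag suspicious destructive patterns in a diff.

Illustrative only. A pattern scan is trivially evadable, so treat it as
one cheap signal alongside required human review, never a replacement.
"""
import re
import sys

# Patterns a reviewer would want called out explicitly in any added line.
SUSPICIOUS = [
    r"rm\s+-rf\s+/",
    r"aws\s+ec2\s+terminate-instances",
    r"aws\s+s3\s+rm",
    r"aws\s+iam\s+delete-user",
    r"curl[^\n]*\|\s*(ba)?sh",  # piping downloads straight into a shell
]

def flag_added_lines(diff: str) -> list[str]:
    """Return added diff lines ('+' lines, not '+++' headers) that match."""
    hits = []
    for line in diff.splitlines():
        if line.startswith("+") and not line.startswith("+++"):
            if any(re.search(p, line) for p in SUSPICIOUS):
                hits.append(line)
    return hits

if __name__ == "__main__":
    findings = flag_added_lines(sys.stdin.read())
    for line in findings:
        print(f"SUSPICIOUS: {line}")
    sys.exit(1 if findings else 0)
```

Pipe `git diff origin/main...HEAD` into it and fail the build on a nonzero exit. The real fix, though, is upstream of any script: no auto-merge, required reviews, and no write access handed to drive-by accounts.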

The players change. The buzzwords shift—from “zero trust” to “AI-powered” in record time. But the underlying issue?

It’s the same mess I called out back in 2022 when Azure’s security posture fell flat on its face: companies treating security like an afterthought until it explodes in public.

Back then, it was identity mismanagement and cross-tenant access. Today, it’s a glorified autocomplete tool quietly shipping aws s3 rm.

The common thread? A complete lack of operational discipline dressed up in enterprise branding.

You don’t get to bolt AI into developer workflows, hand it shell access, market it extensively, and then act shocked when someone uses it exactly as designed—just maliciously.

Ship fast. Slap a buzzword on it. Ignore security.

Then hope nobody notices—until someone does. And writes about it. Loudly.



Comments

  • By skywhopper 2025-07-23 20:21 (3 replies)

    I’m curious exactly what happened here. The 404media article isn’t detailed enough to be sure. My guess is the PR took advantage of some code injection possibilities in the GitHub Actions on the repo to grant the attacker admin access. But that’s a wild guess.

    • By gruez 2025-07-23 23:01

      >My guess is the PR took advantage of some code injection possibilities in the GitHub Actions on the repo to grant the attacker admin access. But that’s a wild guess.

      Someone below mentioned the offending commit[1], which seems to be a doppelganger of another commit[2]. Maybe the exact commit message broke the automation?

      [1] https://github.com/aws/aws-toolkit-vscode/commit/678851bbe97...

      [2] https://github.com/aws/aws-toolkit-vscode/commit/d1959b99684...

    • By QuinnyPig 2025-07-23 20:34 (1 reply)

      Exactly my position. I can’t realistically assess the potential scope of damage without a proper disclosure from AWS’s normally-excellent security team.

      • By shdjhdfh 2025-07-23 21:03 (1 reply)

        Your article breathlessly blames AWS for being reckless while having no real facts about the compromise. The whole thing reads like clickbait.

        • By QuinnyPig 2025-07-23 23:02 (1 reply)

          You’re absolutely right that we don’t have a complete postmortem—and that’s exactly the problem.

          I’d love to have real facts from AWS about the full scope of this incident. But instead of a disclosure, we got a version quietly pulled from the VS Code extension marketplace, no CVE, no changelog note, and a statement that reads like it was pre-approved by legal and sanitized with a pressure washer.

          When a malicious prompt that attempts to wipe both local and cloud resources makes it into a shipping release of a tool that’s been installed nearly a million times, I don’t think “hey maybe we should talk about this” qualifies as breathless or clickbait. It qualifies as basic scrutiny.

          And yes, I’ve praised AWS’s security posture before. I’d still prefer they lead with transparency instead of hoping no one notices the /tmp/CLEANER.LOG.

    • By shdjhdfh 2025-07-23 20:51 (1 reply)

      The prompt 404 quotes in the article doesn't appear to exist anywhere in the git history for the repo they point to. It seems unlikely that Amazon would rewrite git history to hide this. Maybe the change was in a repo pulled in as a dependency.

      • By shdjhdfh 2025-07-23 20:56 (2 replies)

        Ah, I think it might have been this, which was reverted and seems to have been pushed directly to master: https://github.com/aws/aws-toolkit-vscode/commit/678851bbe97...

        • By personalcompute 2025-07-23 21:01 (2 replies)

          I think you've got it!

          - That commit's date matches the date in the 404media article (July 13th)

          - The commit message is totally unrelated to the code (highly suspicious)

          - The code itself downloads additional code at runtime (highly highly suspicious)

          I have not yet been able to recover the code it downloads, though. It was hosted in the same repo, https://github.com/aws/aws-toolkit-vscode/, just on the "stability" branch (the commit downloads a file called "scripts/extensionNode.bk"). The "stability" branch was presumably created by the attacker, and has presumably since been deleted by Amazon.

        • By shdjhdfh 2025-07-23 21:16 (2 replies)

          Another thing to note: the AI angle on this is nonsensical. The commit could just as easily have done many other negative things to the system without AI as a layer of indirection.

          • By dylnuge 2025-07-24 3:08

            Neither the 404 Media article nor this one claim otherwise. I think the key "AI angle" here is this (from the 404 Media article):

            > Hackers are increasingly targeting AI tools as a way to break into peoples’ systems.

            There are a lot of AI tools which run with full permission to execute shell commands or similar. If the same kind of compromise happened to aws-cli, it could be equally catastrophic, but it's not clear that the attack vector the hacker used would have been viable on a repo with more scrutiny.

          • By Corrado 2025-07-24 7:16

            I think the AI angle for this is that it is a force multiplier. You don't have to write specific commands, you just have to prompt generic things and it will helpfully fill in all the details. This also allows you to avoid having certain keywords in the PR (ie. `rm -rf`) and possibly evade detection.

  • By Technetium 2025-07-25 6:24

    I found a postmortem which seems to be well written: https://www.mbgsec.com/posts/2025-07-24-constructing-a-timel...

HackerNews