Detect and crash Chromium bots

2025-05-07 15:01 blog.castle.io

Disclaimer: If you're here for the holy grail of bot detection, this may not be it, unless your UX strategy involves surprise popups and your marketing strategy involves blocking Google crawlers.

We recently stumbled across a bug on the Chromium bug tracker where a short JavaScript snippet can crash headless Chromium browsers, like those driven by Puppeteer and Playwright. Sounds like a dream bot signal, right? Detect the bots, crash their browsers, all from client-side JS, no server needed. If you’re lucky, you may even be able to cause memory leaks on their servers!

Maybe. Maybe not. In this post, we'll break down the bug, explore how it could be weaponized for detection, and finally explain why it's probably not a good idea to use in production.

Analyzing the bug report

Bug trackers aren’t just for frustrated engineers — they’re gold mines for bot hunters. Every headless quirk or automation bug is a potential detection signal. If it's broken in Puppeteer but fine in Chrome, it’s probably worth a closer look.

This one's beautifully simple. Call contentWindow.open on an iframe with certain arguments, and the browser crashes. Fully reproducible in both Puppeteer and Playwright:

// Attach an iframe whose document comes from a data: URL
const iframe = document.createElement("iframe");
iframe.src = "data:text/html,<body></body>";
document.body.appendChild(iframe);
// Calling open() from the iframe with a window-features string like
// "top=9999" crashes automated Chromium; a regular browser just opens a popup
iframe.contentWindow.open("", "", "top=9999");

To illustrate, here’s a Playwright bot navigating to Hacker News, taking a screenshot, then detonating the crash:

import { chromium } from "playwright";

(async () => {
    const browser = await chromium.launch({ headless: false });
    const context = await browser.newContext();
    const page = await context.newPage();
    
    await page.goto('https://news.ycombinator.com');
    
    await page.waitForTimeout(1000);
    await page.screenshot({ path: 'screenshot.png' });
    
    try {
        await page.evaluate(() => {
            const iframe = document.createElement("iframe");
            iframe.src = "data:text/html,<body></body>";
            document.body.appendChild(iframe);
            iframe.contentWindow.open("", "", "top=9999"); // browser crashes here
        });
    } catch (error) {
        // Never reached: the crash hangs evaluate() instead of rejecting it.
        console.log(error);
    }
    
    await browser.close();
})();

Note that the try/catch block accomplishes nothing: no exception is thrown. The call to page.evaluate just hangs, and the browser dies silently. browser.close() is never reached, which can cause memory leaks over time.
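
None of this is fatal if the automation side plans for it. As a minimal defensive sketch of our own (not from the bug report; the 5-second budget is arbitrary), a bot operator can race page.evaluate against a timeout and force cleanup when the call never settles:

import { chromium } from "playwright";

// Race a possibly-hanging promise against a timeout so the harness can
// still attempt cleanup.
const withTimeout = (promise, ms) =>
    Promise.race([
        promise,
        new Promise((_, reject) =>
            setTimeout(() => reject(new Error(`timed out after ${ms}ms`)), ms)
        ),
    ]);

(async () => {
    const browser = await chromium.launch();
    const page = await browser.newPage();
    try {
        await withTimeout(page.evaluate(() => {
            // potentially crashing payload goes here
        }), 5000);
    } catch (error) {
        console.log("evaluate failed or hung:", error.message);
    } finally {
        // close() may itself hang against a dead browser, so bound it too
        // and fall back to killing the Node process.
        await withTimeout(browser.close(), 5000).catch(() => process.exit(1));
    }
})();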

Creating the ultimate bot detection signal?

Notice the question mark. Let’s not get too excited.

Here’s the code for botCheckMate, our not-so-perfect detector:

function botCheckMate() {
	const iframe = document.createElement("iframe");
	iframe.src = "data:text/html,<body></body>";
	document.body.appendChild(iframe);
	iframe.contentWindow.open("", "", "top=9999");
	
	// After this point, if the code didn't crash, then you're human
	return false;
}

let isBot = botCheckMate();

If you're human, this returns false. If you're a Chromium-based bot, you crash, and we save a return value! #EfficiencyMatters

You can verify this by running the snippet in your browser's devtools: it will return false. Run it with Puppeteer or Playwright (with Chromium) and the browser will crash.
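
One wrinkle: a crashed browser can't report anything, so by itself the crash is invisible to your backend. Here's a hedged sketch of how you might make it observable (the /api/bot-checkmate endpoint and the sessionId plumbing are hypothetical, not part of the original snippet): beacon before triggering the check and again after it, and let the server treat sessions that only ever sent the first beacon as probable crashes.

// Hypothetical reporting endpoint; the server pairs "start" and "survived"
// events per session and flags sessions that never sent "survived".
const REPORT_URL = "/api/bot-checkmate";

function botCheckMateWithReporting(sessionId) {
    navigator.sendBeacon(REPORT_URL, JSON.stringify({ sessionId, phase: "start" }));

    const iframe = document.createElement("iframe");
    iframe.src = "data:text/html,<body></body>";
    document.body.appendChild(iframe);
    iframe.contentWindow.open("", "", "top=9999"); // automated Chromium dies here

    // Only survivors (humans, or non-Chromium bots) reach this line.
    navigator.sendBeacon(REPORT_URL, JSON.stringify({ sessionId, phase: "survived" }));
    return false;
}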

Why this is a terrible idea in production

While the tone of this article is clearly tongue-in-cheek, there’s a serious takeaway: not all detection signals are fit for production use. This one, in particular, comes with a host of drawbacks that far outweigh its novelty.

For starters, triggering a popup for human users is rarely a good idea. Most people don’t expect (or want) unsolicited windows opening in their face. It breaks user expectations, interrupts their flow, and is almost guaranteed to degrade the user experience. And let’s be honest: your CMO probably won’t be thrilled either.

Then there’s the issue of side effects. One of the foundational principles in building bot detection systems, especially the way we approach it at Castle, is minimizing impact. We prefer signals that are quiet and unnoticeable, that don’t log noisy events, spike CPU, or trigger console warnings. This detection method? It’s the digital equivalent of shouting.

Another major concern is how tightly this ties detection and response. It’s tempting to merge the two, especially when the response is so dramatically satisfying, but it’s rarely the right approach. Good bot detection means separating detection from the action you take. You might want to block a user, shadowban them, flag their account for review, or do nothing at all. But once you crash their browser, the choice is already made.
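
To make that separation concrete, here's a deliberately simplified sketch (hypothetical names throughout, not Castle's actual pipeline): detection only produces a verdict, and a separate policy layer decides what happens next.

// Detection: gather evidence and score it. No side effects here.
function detect() {
    const signals = { webdriver: navigator.webdriver === true /* , ... */ };
    return signals.webdriver ? "suspicious" : "clean";
}

// Response: a separate policy layer that can change without touching
// detection (block, shadowban, flag for review, or do nothing).
function respond(verdict) {
    if (verdict === "suspicious") {
        console.log("flag account for review");
    } else {
        console.log("no action");
    }
}

respond(detect());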

Also, since this entire strategy is executed on the client side, you lose access to most of the useful metadata you might otherwise use for decision-making. You can’t store bot signatures server-side, manage allow lists for Googlebot or your own QA tools, or tailor the response to the threat level.

And finally, bots evolve. The moment a bot author figures out what’s causing the crash, they’ll override the open() method or sanitize the parameters. Game over. You're back in the detection arms race. Want to go deeper and detect overrides? We’ve got you covered with techniques like those in our canvas randomization article. But then you’re stepping into a full-fledged cat-and-mouse game, with all the maintenance that comes with it.
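
For a sense of how cheap that countermeasure is, here's a sketch of what a bot author might add to their existing Playwright setup (assuming init scripts also reach the data: iframe; Playwright documents that they run in child frames):

// Bot-side countermeasure sketch: strip the window-features string before
// any page script can pass the crashing "top=9999" to open(). Assumes
// `page` is an existing Playwright Page inside an async function.
await page.addInitScript(() => {
    const originalOpen = window.open;
    window.open = function (url, target, _features) {
        // Drop the features string entirely: geometry hints are lost, but
        // the crash trigger never reaches the browser.
        return originalOpen.call(this, url, target, "");
    };
});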

So yes, this signal works. But not without caveats, and certainly not without cost.

Conclusion

On paper, this kind of detection looks irresistible. A few lines of JavaScript, and poof, bot browser gone. It’s clean, dramatic, and weirdly satisfying. But when it comes to real-world use, the story is more complicated.

The best detection signals don’t just work, they work quietly. They don’t degrade performance or user trust. They let you make decisions based on context, not just trigger an irreversible action the moment a condition is met. And most importantly, they’re resilient to adaptation.

This signal, while hilarious and powerful in demos, checks none of those boxes. It’s loud. It’s invasive. It’s brittle.

So enjoy this bug. Keep it in your toolkit. Laugh at the bots you crash in test environments. But maybe don’t deploy it in production. Especially not where Googlebot can see it.

Unless you’re already off Google’s search index. Then sure, go wild.

Comments (from Hacker News)

  • By oefrha 2025-05-10 10:42

    > The call to page.evaluate just hangs, and the browser dies silently. browser.close() is never reached, which can cause memory leaks over time.

    Not just memory leaks. Since a couple months ago, if you use Chrome via playwright etc. on macOS, it will deposit a copy of Chrome (more than 1GB) into /private/var/folders/kd/<...>/X/com.google.Chrome.code_sign_clone/, and if you exit without a clean browser.close(), the copy of Chrome will remain there. I noticed after it ate up ~50GB in two days. No idea what's the point of this code sign clone thing, but I had to add --disable-features=MacAppCodeSignClone to all my invocations to prevent it, which is super annoying.

    • By closewith 2025-05-10 12:06

      That's an open bug at the minute, but the one saving grace is that they're APFS clones so don't actually consume disk space.

      • By oefrha 2025-05-10 12:36

        Interesting, IIRC I did free up quite a bit of disk space when I removed all the clones, but I also deleted a lot of other stuff that time so I could be mistaken. du(1) being unaware of APFS clones makes it hard to tell.

  • By chrismorgan 2025-05-10 9:35

    Checking https://issues.chromium.org/issues/340836884, I’m mildly surprised to find the report just under a year old, with no attention at all (bar a me-too comment after four months), despite having been filed with priority P1, which I understand is supposed to mean “aim to fix it within 30 days”. If it continues to get no attention, I’m curious if it’ll get bumped automatically in five days’ time when it hits one year, given that they do something like that with P2 and P3 bugs, shifting status to Available or something, can’t quite remember.

    I say only “mildly”, because my experience on Chromium bugs (ones I’ve filed myself, or ones I’ve encountered that others have filed) has never been very good. I’ve found Firefox much better about fixing bugs.

  • By wraptile 2025-05-11 0:43

    I find the "don't let googlebot see this" kinda funny considering how top google results are often much worse. The captcha/anti-bot is getting so bad I had to move to Kagi to block some domains specifically as browsing contemporary web is almost impossible at times. Why isn't google down ranking this experience?