pnpm has a new setting to stave off supply chain attacks

2025-09-18 7:12 · pnpm.io

Minor Changes

There have been several incidents recently where popular packages were successfully attacked. To reduce the risk of installing a compromised version, we are introducing a new setting that delays the installation of newly released dependencies. In most cases, such attacks are discovered quickly and the malicious versions are removed from the registry within an hour.

The new setting is called minimumReleaseAge. It specifies the number of minutes that must pass after a version is published before pnpm will install it. For example, setting minimumReleaseAge: 1440 ensures that only packages released at least one day ago can be installed.

If you set minimumReleaseAge but need to disable this restriction for certain dependencies, you can list them under the minimumReleaseAgeExclude setting. For instance, with the following configuration, pnpm will always install the latest version of webpack, regardless of its release time:

minimumReleaseAgeExclude:
  - webpack
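
Putting the two settings together, a complete configuration might look like this (a sketch in pnpm-workspace.yaml; the one-day value and the webpack exclusion are just the examples from above):

```yaml
# Only install versions published at least one day (1440 minutes) ago...
minimumReleaseAge: 1440
# ...except for these packages, which are always installed at their latest version.
minimumReleaseAgeExclude:
  - webpack
```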

Related issue: #9921.

Advanced dependency filtering with finder functions

Added support for finders.

In the past, pnpm list and pnpm why could only search for dependencies by name (and optionally version). For example, running pnpm why minimist prints the chain of dependencies to any installed instance of minimist:

verdaccio 5.20.1
├─┬ handlebars 4.7.7
│ └── minimist 1.2.8
└─┬ mv 2.1.1
  └─┬ mkdirp 0.5.6
    └── minimist 1.2.8

What if we want to search by other properties of a dependency, not just its name? For instance, find all packages that have react@17 in their peer dependencies?

This is now possible with "finder functions". Finder functions can be declared in .pnpmfile.cjs and invoked with the --find-by=<function name> flag when running pnpm list or pnpm why.

Let's say we want to find any dependencies that have React 17 in peer dependencies. We can add this finder to our .pnpmfile.cjs:

module.exports = {
  finders: {
    react17: (ctx) => {
      return ctx.readManifest().peerDependencies?.react === "^17.0.0";
    },
  },
};

Now we can use this finder function by running:

pnpm why --find-by=react17

pnpm will find all dependencies that have React 17 in their peer dependencies and print their exact locations in the dependency graph.

@apollo/client 4.0.4
├── @graphql-typed-document-node/core 3.2.0
└── graphql-tag 2.12.6

It is also possible to print out some additional information in the output by returning a string from the finder. For example, with the following finder:

module.exports = {
  finders: {
    react17: (ctx) => {
      const manifest = ctx.readManifest();
      if (manifest.peerDependencies?.react === "^17.0.0") {
        return `license: ${manifest.license}`;
      }
      return false;
    },
  },
};

Every matched package will also print out the license from its package.json:

@apollo/client 4.0.4
├── @graphql-typed-document-node/core 3.2.0
│   license: MIT
└── graphql-tag 2.12.6
    license: MIT

Related PR: #9946.

Patch Changes

  • Fix deprecation warning printed when executing pnpm with Node.js 24 #9529.
  • Throw an error if nodeVersion is not set to an exact semver version #9934.
  • pnpm publish should be able to publish a .tar.gz file #9927.
  • Canceling a running process with Ctrl-C should make pnpm run return a non-zero exit code #9626.


Comments

  • By postepowanieadm 2025-09-18 8:05 · 7 replies

    If everyone is going to wait 3 days before installing the latest version of a compromised package, it will take more than 3 days to detect an incident.

    • By acdha 2025-09-18 12:05 · 3 replies

      Think about how the three major recent incidents were caught: not by individual users installing packages but by security companies running automated scans on new uploads flagging things for audits. This would work quite well in that model, and it’s cheap in many cases where there isn’t a burning need to install something which just came out.

      • By kjok 2025-09-18 15:54 · 1 reply

        I think there's some confusion here. No automated scan was able to catch the attack. It was an individual who notified these startups.

        • By acdha 2025-09-18 17:15

          Quite possibly - there have been several incidents recently and a number of researchers working together so it’s not clear exactly who found something first and it’s definitely not as simple to fix as tossing a tool in place.

          The CEO of socket.dev described an automated pipeline flagging new uploads for analysts, for example, which is good but not instantaneous:

          https://news.ycombinator.com/item?id=45257681

          The Aikido team also appear to be suggesting they investigated a suspicious flag (apologies if I’m misreading their post), which again needs time for analysts to work:

          https://www.aikido.dev/blog/npm-debug-and-chalk-packages-com...

          My thought was simply that these were caught relatively quickly by security researchers rather than by compromised users reporting breaches. If you didn’t install updates within a relatively short period of time after they were published, the subsequent response would keep you safe. Obviously that’s not perfect, and a sophisticated, patient attack like the one liblzma suffered would likely still be possible, but there really does seem to be value in having something like Debian’s unstable/stable divide, where researchers and thrill-seekers get everything ASAP but most people give it some time to be tested. What I’d really like to see is a community model for funding that, and especially supporting independent researchers.

      • By nikanj 2025-09-18 18:14 · 1 reply

        Automated scans have detected 72251 out of the previous 3 supply-chain attacks

      • By davidpfarrell 2025-09-18 20:45 · 2 replies

        Wow so couldn't said security co's establish their own registry that we could point to instead and packages would only get updated after they reviewed and approved them?

        I mean I'd prolly be okay paying yearly fee for access to such a registry.

        • By davidshepherd7 2025-09-19 6:04

          IIUC chainguard is this, but only for python, java, and docker images so far. https://www.chainguard.dev/libraries

        • By getcrunk 2025-09-19 0:36 · 1 reply

          I think it would be a no brainer for npm to offer this but idk why they haven’t

          • By phatfish 2025-09-19 9:11

            Probably because they would expose themselves legally? Not sure what the current situation is exactly, but I assume it's "at your own risk".

    • By anematode 2025-09-18 8:20 · 4 replies

      A lot of people will still use npm, so they'll be the canaries in the coal mine :)

      More seriously, automated scanners seem to do a good job already of finding malicious packages. It's a wonder that npm themselves haven't already deployed an automated countermeasure.

      • By kjok 2025-09-18 16:01 · 3 replies

        > automated scanners seem to do a good job already of finding malicious packages.

        That's not true. This latest incident was detected by an individual researcher, just like many similar attacks in the past. Time and again, it's been people who flagged these issues, later reported to security startups, not automated tools. Don't fall for the PR spin.

        If automated scanning were truly effective, we'd see deployments across all major package registries. The reality is, these systems still miss what vigilant humans catch.

        • By kelnos 2025-09-19 0:14

          > This latest incident was detected by an individual researcher

          So that still seems fine? Presumably researchers are focusing on latest releases, and so their work would not be impacted by other people using this new pnpm option.

        • By hobofan 2025-09-18 16:26 · 1 reply

          > If automated scanning were truly effective, we'd see deployments across all major package registries.

          No we wouldn't. Most package registries are run by either bigcorps at a loss or by community maintainers (with bigcorps again sponsoring the infrastructure).

          And many of them barely go beyond the "CRUD" of package publishing due to lack of resources. The economic incentives of building up supply chain security tools into the package registries themselves are just not there.

          • By kjok 2025-09-18 17:13 · 1 reply

            You're right that registries are under-resourced. But, if automated malware scanning actually worked, we'd already see big tech partnering with package registries to run continuous, ecosystem-wide scanning and detection pipelines. However, that isn't happening. Instead, we see piecemeal efforts from Google with assurance artifacts (SLSA provenance, SBOMs, verifiable builds), Microsoft sponsoring OSS maintainers, Facebook donating to package registries. Google's initiatives stop short of claiming they can automatically detect malware.

            This distinction matters. Malware detection is, in the general case, an undecidable problem (think halting problem and Rice theorem). No amount of static or dynamic scanning can guarantee catching malicious logic in arbitrary code. At best, scanners detect known signatures, patterns, or anomalies. They can't prove absence of malicious behavior.

            So the reality is: if Google's assurance artifacts stop short of claiming automated malware detection is feasible, it's a stretch for anyone else to suggest registries could achieve it "if they just had more resources." The problem space itself is the blocker, not just lack of infra or resources.

            • By motorest 2025-09-19 6:26

              > But, if automated malware scanning actually worked, we'd already see big tech partnering with package registries to run continuous, ecosystem-wide scanning and detection pipelines.

              I think this sort of thought process is misguided.

              We do see continuous, ecosystem-wide scanning and detection pipelines. For example, GitHub does support DependaBot, which runs supply chain checks.

              https://github.com/dependabot

              What you don't see is magical rabbits being pulled out of top hats. The industry has decades of experience with anti-malware tools in contexts where said malware runs in spite of not being explicitly provided deployment or execution permissions. And yet it deploys and runs. What do you expect if you make code intentionally installable and deployable, and capable of sending HTTP requests to send and receive any kind of data?

              Contrary to what you are implying, this is not a simple problem with straightforward solutions. The security model has been highly reliant on the role of gatekeepers, on both the producer and consumer sides. However, the last batch of popular supply chain attacks circumvented the only failsafe in place. Beyond this point, you just have a module that runs unspecified code, just like any other module.

        • By anematode 2025-09-18 18:17

          The latest incident was detected first by an individual researcher (haven't verified this myself, but trusting you here) -- or maybe s/he was just the fastest reporter in the west. Even simple heuristics like the sudden addition of high-entropy code would have caught the most recent attacks, and obviously there are much better methods too.

      • By mcintyre1994 2025-09-18 9:59 · 1 reply

        In the case of the chalk/debug etc hack, the first detection seemed to come from a CI build failure it caused: https://jdstaerk.substack.com/p/we-just-found-malicious-code...

        > It started with a cryptic build failure in our CI/CD pipeline, which my colleague noticed

        > This seemingly minor error was the first sign of a sophisticated supply chain attack. We traced the failure to a small dependency, error-ex. Our package-lock.json specified the stable version 1.3.2 or newer, so it installed the latest version 1.3.3, which got published just a few minutes earlier.

        • By DougBTX 2025-09-18 12:44 · 3 replies

          > Our package-lock.json specified the stable version 1.3.2 or newer

          Is that possible? I thought the lock files restricted to a specific version with an integrity check hash. Is it possible that it would install a newer version which doesn't match the hash in the lock file? Do they just mean package.json here?

          • By streptomycin 2025-09-18 12:50 · 1 reply

            If they were for some reason doing `npm install` rather than `npm ci`, then `npm install` does update packages in the lock file. Personally I always found that confusing, and yarn/pnpm don't behave that way. I think most people do `npm ci` in CI, unless they are using CI to specifically test if `npm install` still works, which I guess maybe would be a good idea if you use npm since it doesn't like obeying the lock file.

            • By Rockslide 2025-09-18 13:13 · 3 replies

              How does this get repeated over and over, when it's simply not true? At least not anymore. npm install will only update the lockfile if you make changes to your package.json. Otherwise, it will install the versions from the lockfile.

              • By mirashii 2025-09-18 15:50 · 1 reply

                > How does this get repeated over and over, when it's simply not true?

                Well, for one, the behavior is somewhat insane.

                `npm install` with no additional arguments does update the lockfile if your package.json and your lockfile are out of sync with one another for any reason, and so to get a guarantee that it doesn't change your lockfile, you must do additional configuration or guarantee by some external mechanism that you don't ever have an out of date package.json and lock. For this reason alone, the advice of "just don't use npm install, use npm ci instead" is still extremely valid, you'd really like this to fail fast if you get out of sync.

                `npm install additional-package` also updates your lock file. Other package managers distinguish these two operations, with the one to add a new dependency being called "add" instead of "install".

                The docs add to the confusion. https://docs.npmjs.com/cli/v11/commands/npm-install#save suggests that writing to package-lock.json is the default and you need to change configuration to disable it. The notion that it won't change your lock file if you're already in sync between package.json and package-lock.json is not actually spelled out clearly anywhere on the page.

                > At least not anymore.

                You've partially answered your own question here.

                • By Rockslide 2025-09-18 16:16 · 1 reply

                  > You've partially answered your own question here.

                  Is that the case? If it were ever true (outside of outright bugs in npm), it must have been many many years and major npm releases ago. So that doesn't justify brigading outdated information.

                  • By chowells 2025-09-18 18:38

                    I mean, it's my #1 experience using npm. I never once have used `npm install` and had a result other than it changing the lockfile. Maybe you want to blame this on the tools I used, but I followed the exact installation instructions of the project I was working on. If it's that common to get it "wrong", it's the tool that is wrong.

              • By streptomycin 2025-09-18 17:50 · 1 reply

                My bad, it really annoyed me when npm stopped respecting lockfiles years ago so I stopped using it. That's great news that they eventually changed their mind.

                However in rare cases where I am forced to use it to contribute to some npm-using project, I have noticed that the lockfile often gets updated and I get a huge diff even though I didn't edit the dependencies. So I've always assumed that was the same issue with npm ignoring the lockfile, but maybe it's some other issue? idk

                • By Rockslide 2025-09-18 18:51

                  Well there are other lockfile updates as well, which aren't dependency version changes either. e.g. if the lockfile was created with an older npm version, running npm install with a newer npm version might upgrade it to a newer lockfile format and thus result in huge diffs. But that wouldn't change anything about the versions used for your dependencies.

              • By cluckindan 2025-09-18 15:47 · 1 reply

                Are you 100% on that?

                • By Rockslide 2025-09-18 16:13

                  Yes. As someone who's using npm install daily, and given the update cadence of npm packages, I would end up with dirty lock files very frequently if the parent statement were true. It just doesn't happen.

          • By hobofan 2025-09-18 16:28

            Since nobody else answers your question:

            > Do they just mean package.json here?

            Yes, most likely. A package-lock.json always specifies an exact version with hash and not a "version X or newer".

          • By Mattwmaster58 2025-09-18 13:10

            > Is that possible?

            This comes up every time npm install is discussed. Yes, npm install will "ignore" your lockfile and install the latest dependencies it can that satisfy the constraints of your package.json. Yes, you should use npm clean-install. One shortcoming is that the implementation insists on deleting the entire node_modules folder, so package installs can actually take quite a bit of time, even when all the packages are being served from the npm disk cache: https://github.com/npm/cli/issues/564

      • By vasachi 2025-09-18 8:39

        If only there was a high-ranking official at Microsoft, who could prioritize security[1]! /s

        [1] https://blogs.microsoft.com/blog/2024/05/03/prioritizing-sec...

    • By kibwen 2025-09-18 12:22

      Also, if everyone is going to wait 3 days before installing the latest version of a compromised package, it will take more than 3 days to broadly disseminate the fix for a compromise in the wild. The knife cuts both ways.

    • By kaelwd 2025-09-19 8:40

      The chalk+debug+error-ex maintainer probably would have noticed a few hours later when they got home and saw a bunch of "Successfully published" emails from npm that they didn't trigger.

    • By singulasar 2025-09-18 9:35

      Not really, app sec companies scan npm constantly for updated packages to check for malware. Many attacks get caught that way. e.g. the debug + chalk supply chain attack was caught like this: https://www.aikido.dev/blog/npm-debug-and-chalk-packages-com...

    • By blamestross 2025-09-18 10:55

      1) Checks and audits will still happen (if they are happening at all)

      2) Real chances for owners to notice they have been compromised

      3) Adopt early before that commons is fully tragedy-ed.

    • By djdjsjejb 2025-09-19 12:00

      thats what npm is for, so they install it first. cannon fodders.

  • By omnicognate 2025-09-18 8:26 · 3 replies

    Should have included the units in the name or required a choice of unit to be selected as part of the value. Sorry, just a bugbear of mine.

    • By homebrewer 2025-09-18 9:57 · 1 reply

      The new setting is consistent with the old ones, which is more important IMHO:

      https://pnpm.io/settings#modulescachemaxage

      • By rtpg 2025-09-18 10:23 · 1 reply

        the name could have included it though right?

        • By TheRoque 2025-09-18 13:58

          If the others don't include it, it could be another inconsistency though

    • By zokier 2025-09-18 8:35 · 2 replies

      Or just use ISO8601 standard notation (e.g. "P1D" for one day)

      • By 1oooqooq 2025-09-18 12:21 · 1 reply

        or PT1400M or P0.5DT700M?

        oh, you can use commas too.

        and if you're still not thinking this is fun, here's a quote from Wikipedia "But keep in mind that "PT36H" is not the same as "P1DT12H" when switching from or to Daylight saving time."

        just add a unit to your period parameters. sigh.

        • By brewmarche 2025-09-18 21:47

          Maybe surprising for days (technically it could also happen with minutes because of leap seconds :-)), but for months and years this is more apparent due to month ends and leap years.

      • By johanyc 2025-09-18 16:07

        TIL ISO8601 also standardizes duration

    • By fzeindl 2025-09-18 8:39 · 2 replies

      ISO8601 durations should be used, like PT3M.

      • By mort96 2025-09-18 11:25 · 2 replies

        Oh wow, never looked at ISO8601 durations before and I had no idea they were this ugly. Please, no, don't make me deal with ISO8601. I'd rather write a number of seconds or a format like 'X weeks' or 'Y hours Z minutes'. ISO8601 looks exclusively like a data interchange format.

        • By johanyc 2025-09-18 16:05 · 1 reply

          https://docs.digi.com/resources/documentation/digidocs/90001...

          It's pretty simple actually.

          > 'X weeks' or 'Y hours Z minutes'

          PxW, PTyHzM. So simple that I learned it in a few seconds.

          • By mort96 2025-09-18 20:19

            I learned it too. It's just ugly. Before I looked at the relevant section of the ISO 8601 duration format yesterday, I didn't know it. After I looked at it, I now know it, and I strongly dislike it.

        • By cluckindan 2025-09-18 15:32

          When in doubt, use only seconds. PT86400S
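
The notations traded in this thread (PT3M, P1D, PT36H, PT86400S) can be decoded with a toy parser sketch; it deliberately treats 1 day as exactly 24 hours and 1 week as 7 days, so it ignores the DST caveat raised above (months and years are omitted entirely):

```javascript
// Toy sketch of the ISO 8601 duration notation discussed in this thread.
// Not a full implementation: calendar-exact days/months are ignored.
function durationToMinutes(iso) {
  const m = iso.match(
    /^P(?:(\d+)W)?(?:(\d+)D)?(?:T(?:(\d+)H)?(?:(\d+)M)?(?:(\d+)S)?)?$/
  );
  if (!m) throw new Error(`unsupported duration: ${iso}`);
  const num = (x) => (x ? Number(x) : 0);
  const [, weeks, days, hours, minutes, seconds] = m;
  return (
    num(weeks) * 7 * 24 * 60 +
    num(days) * 24 * 60 +
    num(hours) * 60 +
    num(minutes) +
    num(seconds) / 60
  );
}
```

Under this reading PT3M is 3 minutes, P1D and PT86400S both come out to 1440, and PT36H equals P1DT12H precisely because calendar effects are ignored.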

      • By aa-jv 2025-09-18 8:50

        Should be easy, just add the ISO8601-duration package to your project ..

        /s

  • By the_mitsuhiko 2025-09-18 10:08 · 1 reply

    I think uv should get some credit for being an early supporter of this. They originally added it as a hidden way to create stable fixtures for their own tests, but it has become a pretty popular flag to use.

    This for instance will only install packages that are older than 14 days:

    uv sync --exclude-newer $(date -u -v-14d '+%Y-%m-%dT%H:%M:%SZ')

    It's great to see this kind of stuff being adopted in more places.
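
Note that the -v-14d flag above is BSD/macOS date syntax; on Linux, a GNU coreutils equivalent would be a sketch like this (the computed cutoff feeds the same --exclude-newer option; here the final command is only printed, not run):

```shell
# Build the same 14-day cutoff with GNU date (-d instead of BSD's -v-14d),
# then show the resulting uv invocation.
CUTOFF="$(date -u -d '14 days ago' '+%Y-%m-%dT%H:%M:%SZ')"
echo "uv sync --exclude-newer ${CUTOFF}"
```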

    • By mcintyre1994 2025-09-18 10:15 · 2 replies

      Nice, but I think the config file is a much better implementation for protecting against supply chain attacks, particularly those targeting developers rather than runtime. You don’t want to rely on every developer passing a flag every time they install. This does suffer from the risk of using `npm install` instead of `pnpm install` though.

      It would also be nice to have this as a flag so you can use it on projects that haven't configured it, though; I wonder if that could be added too.

      • By ramses0 2025-09-18 12:45

        Just Minimum Version Selection in conjunction with "Minimum non-Vulnerable Version" (and this "--minAge") would do a lot, and effectively suss out a lot of poorly/casually maintained packages (eg: "finished" ones).

        https://research.swtch.com/vgo-mvs#upgrade_timing

        MVS makes tons of sense that you shouldn't randomly uptake "new" packages that haven't been "certified" by package maintainers in their own dependencies.

        In the case of a vulnerable sub-dependency, you're effectively having to "do the work" to certify that PackageX is compatible with PackageY, and "--minAge" gives industry (and maintainers) time to scan before insta-pwning anyone who is unlucky that day.

      • By cap11235 2025-09-18 12:14 · 2 replies

        You can put the uv setting in pyproject.toml or uv.toml.

        • By fainpul 2025-09-18 13:20

          But then you have to hardcode a timestamp, since this is not gonna work in uv.toml:

            exclude-newer = $(date -uv -14d '+%Y-%m-%dT%H:%M:%SZ')

        • By mcintyre1994 2025-09-18 12:45

          Nice, supporting both definitely seems ideal.

HackerNews