
TL;DR: Dependency cooldowns are a free, easy, and incredibly effective way to mitigate the large majority of open source supply chain attacks. More individual projects should apply cooldowns (via tools like Dependabot and Renovate) to their dependencies, and packaging ecosystems should invest in first-class support for cooldowns directly in their package managers.
“Supply chain security” is a serious problem. It’s also seriously overhyped, in part because dozens of vendors have a vested financial interest in convincing you that their framing of the underlying problem1 is (1) correct, and (2) worth your money.
What’s consternating about this is that most open source supply chain attacks have the same basic structure:
1. An attacker compromises a popular open source project, typically via a stolen credential or CI/CD vulnerability (such as “pwn requests” in GitHub Actions).
2. The attacker introduces a malicious change to the project and uploads it somewhere that will have maximum effect (PyPI, npm, GitHub releases, &c., depending on the target).
   - At this point, the clock has started, as the attacker has moved into the public.
3. Users pick up the compromised version of the project via automatic dependency updates or a lack of dependency pinning.
4. Meanwhile, the aforementioned vendors are scanning public indices as well as customer repositories for signs of compromise, and provide alerts upstream (e.g. to PyPI).
   - Notably, vendors are incentivized to report quickly and loudly upstream, as this increases the perceived value of their services in a crowded field.
5. Upstreams (PyPI, npm, &c.) remove or disable the compromised package version(s).
6. End-user remediation begins.
The key thing to observe is that the gap between (1) and (2) can be very large2 (weeks or months), while the gap between (2) and (5) is typically very small: hours or days. This means that, once the attacker has moved into the actual exploitation phase, their window of opportunity to cause damage is pretty limited.

We can see this with numerous prominent supply chain attacks over the last 18 months3:
| Attack | Approx. Window of Opportunity | References |
|---|---|---|
| xz-utils | ≈ 5 weeks4 | Source |
| Ultralytics (phase 1) | 12 hours | Source |
| Ultralytics (phase 2) | 1 hour | Source |
| tj-actions | 3 days | Source |
| chalk | < 12 hours | Source |
| Nx | 4 hours | Source |
| rspack | 1 hour | Source |
| num2words | < 12 hours | Source |
| Kong Ingress Controller | ≈ 10 days | Source |
| web3.js | 5 hours | Source |
(Each of these attacks had significant downstream effects, of course, but only within their windows of opportunity. Subsequent compromises from each, like Shai-Hulud, represent new windows of opportunity where the attackers regrouped and pivoted onto the next set of compromised credentials.)
My takeaway from this: some windows of opportunity are bigger, but the majority of them are under a week long. Consequently, ordinary developers can avoid the bulk of these types of attacks by instituting cooldowns on their dependencies.
A “cooldown” is exactly what it sounds like: a window of time between when a dependency is published and when it’s considered suitable for use. The dependency is public during this window, meaning that “supply chain security” vendors can work their magic while the rest of us wait any problems out.
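To make that concrete, here is a minimal sketch (mine, not any particular tool’s implementation) of what a cooldown check boils down to, using PyPI’s public JSON API to compare a release’s upload times against a 7-day window:

```python
# a minimal sketch of a cooldown check against PyPI's JSON API
# (https://pypi.org/pypi/<name>/<version>/json); illustrative only, not how
# Dependabot or Renovate implement it
import json
import urllib.request
from datetime import datetime, timedelta, timezone


def is_cooled_down(name: str, version: str, days: int = 7) -> bool:
    """Return True if every file of this release has been public for at least `days`."""
    url = f"https://pypi.org/pypi/{name}/{version}/json"
    with urllib.request.urlopen(url) as resp:
        release = json.load(resp)
    cutoff = datetime.now(timezone.utc) - timedelta(days=days)
    uploads = [
        # upload_time_iso_8601 ends in "Z"; normalize so fromisoformat accepts it
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for f in release["urls"]
    ]
    return bool(uploads) and max(uploads) <= cutoff


# e.g. is_cooled_down("requests", "2.32.3") becomes True once the release is a week old
```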
I love cooldowns for several reasons:
They’re empirically effective, per above. They won’t stop all attackers, but they do stymie the majority of high-visibility, mass-impact supply chain attacks that have become more common.
They’re incredibly easy to implement. Moreover, they’re literally free to implement in most cases: most people can use Dependabot’s functionality, Renovate’s functionality, or the functionality built directly into their package manager5.
This is how simple it is in Dependabot:
```yaml
version: 2
# update once a week, with a 7-day cooldown
updates:
  - package-ecosystem: github-actions
    directory: /
    schedule:
      interval: weekly
    cooldown:
      default-days: 7
```
(Rinse and repeat for other ecosystems as needed.)
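Renovate’s equivalent is, as far as I know, the `minimumReleaseAge` setting; a minimal `renovate.json` looks something like this (double-check the option name against Renovate’s docs before copying):

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "minimumReleaseAge": "7 days"
}
```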
Cooldowns enforce positive behavior from supply chain security vendors: vendors are still incentivized to discover and report attacks quickly, but are not as incentivized to emit volumes of blogspam about “critical” attacks on largely underfunded open source ecosystems.
In the very small sample set above, 8/10 attacks had windows of opportunity of less than a week. Setting a cooldown of 7 days would have prevented the vast majority of these attacks from reaching end users (and causing knock-on attacks, which several of these were). Increasing the cooldown to 14 days would have prevented all but 1 of these attacks6.
Cooldowns are, obviously, not a panacea: some attackers will evade detection, and delaying the inclusion of potentially malicious dependencies by a week (or two) does not fundamentally alter the fact that supply chain security is a social trust problem, not a purely technical one. Still, an 80-90% reduction in exposure through a technique that is free and easy seems hard to beat.
Related to the above, it’s unfortunate that cooldowns aren’t baked directly into more packaging ecosystems: Dependabot and Renovate are great, but even better would be if the package manager itself (as the source of ground truth) could enforce cooldowns directly (including for dependencies not introduced or bumped through automated flows).
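(For what it’s worth, part of the primitive already exists in some tools: uv, for example, has an `exclude-newer` setting that makes resolution ignore anything published after a given timestamp. A rolling cooldown still has to be scripted on top of it, so treat this as a sketch of the building block rather than a full solution.)

```toml
# pyproject.toml: resolve as if nothing published after this instant exists
# (the timestamp is an arbitrary example; a wrapper script would advance it daily)
[tool.uv]
exclude-newer = "2025-11-10T00:00:00Z"
```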
People in this thread are worried that they are significantly vulnerable if they don't update right away. However, this is mostly not an issue in practice. A lot of software doesn't have continuous deployment, but instead has customer-side deployment of new releases, which follows a slower rhythm of several weeks or months, barring emergencies. They are fine. Most vulnerabilities that aren't supply-chain attacks are only exploitable under special circumstances anyway. The thing to do is to monitor your dependencies and their published vulnerabilities, and for critical vulnerabilities to assess whether your product is affected by it. Only then do you need to update that specific dependency right away.
> for critical vulnerabilities to assess whether your product is affected by it. Only then do you need to update that specific dependency right away.
This is indeed what's missing from the ecosystem at large. People seem to be under the impression that if a new release of a piece of software/library/OS/application comes out, you need to move to it today. They don't seem to actually look through the changes, only doing that if anything breaks, and then proceed to upgrade because "why not" or "it'll only get harder in the future", neither of which feels like a solid choice considering the trade-offs.
While we seem to have already known that staying at the bleeding edge of version numbers introduces massive churn and unneeded work, it seems like we're waking up to the realization that it is a security tradeoff as well. Sadly, not enough tooling seems to take this into account (yet?).
At my last job, we only updated dependencies when there was a compelling reason. It was awful.
What would happen from time to time was that an important reason did come up, but the team was now many releases behind. Whoever was unlucky enough to sign up for the project that needed the updated dependency now had to do all those updates of the dependency, including figuring out how they affected a bunch of software that they weren't otherwise going to work on. (e.g., for one code path, I need a bugfix that was shipped three years ago, but pulling that into my component affects many other code paths.) They now had to go figure out what would break, figure out how to test it, etc. Besides being awful for them, it creates bad incentives (don't sign up for those projects; put in hacks to avoid having to do the update), and it's also just plain bad for the business because it means almost any project, however simple it seems, might wind up running into this pit.
I now think of it this way: either you're on the dependency's release train or you jump off. If you're on the train, you may as well stay pretty up to date. It doesn't need to be every release the minute it comes out, but nor should it be "I'll skip months of work and several major releases until something important comes out". So if you decline to update to a particular release, you've got to ask: am I jumping off forever, or am I just deferring work? If you think you're just deferring the decision until you know if there's a release worth updating to, you're really rolling the dice.
(edit: The above experience was in Node.js. Every change in a dynamically typed language introduces a lot of risk. I'm now on a team that uses Rust, where knowing that the program compiles and passes all tests gives us a lot of confidence in the update. So although there's a lot of noise with regular dependency updates, it's not actually that much work.)
I think it also depends on the community as well. Last time I touched Node.js and Javascript-related things, every time I tried to update something, it practically guaranteed something would explode for no reason.
While my recent legacy Java project migration from JDK 8 -> 21 & a ton of dependency upgrades has been a pretty smooth experience so far.
Yeah, along with any community's attitudes to risk and quality, there is also a varying, er, chronological component.
I'd prefer to upgrade around the time most of the nasty surprises have already been discovered by somebody else, preferably with workarounds developed.
At the same time, you don't want to be so far back that upgrading uncovers novel migration problems, or issues that nobody else cares about anymore.
Yeah, the JavaScript/Node.js ecosystem is pain. Lots of tooling (ORMs, queue/workflow frameworks, templating) is new-ish or quickly changing. I've also had minor updates cause breakages; semver is best-effort at best.
I don't like Java but sometimes I envy their ecosystem.
There is a reason most stable companies use Java - stability. Outside of startups and SV, there are few reasons to avoid such a robust system.
Plus you can find an endless stream of experienced devs for it, who are more stable job-wise than those who come & go every 6-12 months. Stability. Top management barely cares for anything else from IT.
Yes, I’ve had exactly the same experience. Once you get off the dependency train, it’s almost impossible to get back on.
I don’t think this is specific to any one language or environment, it just gets more difficult the larger your project is and the longer you go without updating dependencies.
I’ve experienced this with NPM projects, with Android projects, and with C++ (neglecting to merge upstream changes from a private fork).
It does seem likely that dynamic languages make this problem worse, but I don’t think very strict statically typed languages completely avoid it.
> I'm now on a team that uses Rust, where knowing that the program compiles and passes all tests gives us a lot of confidence in the update.
That's been my experience as well. In addition, the ecosystem largely holds to semver, which means a non-major upgrade tends to be painless, and conversely, if there's a major upgrade, you know not to put it off for too long because it'll involve some degree of migration.
It's a similar experience in Go, especially because imports are done by URL and major versions higher than v1.x are forced to change it to add a suffix `/vN` at the end.
Although this is true, any large ecosystem will have some popular packages not holding to semver properly. Also, the biggest downside is when your `>=v1` depends - indirectly usually - on a `v0` dependency which is allowed to do breaking changes.
You can choose to either live at the slightly-bleeding edge (as determined by “stable” releases, etc), or to live on the edge of end-of-life, as discussed here: <https://news.ycombinator.com/item?id=21785399>
OP wisely said that critical vulnerabilities are where the actual exposure needs to be assessed, in order to make an exception to a rule like “install the latest release that’s been published for at least X length of time.”
For instance if you use a package that provides a calendar widget and your app uses only the “western” calendar and there is a critical vulnerability that only manifests in the Islamic calendar, you have zero reason to worry about an exploit.
I see this as a reasonable stance.
That's indeed reasonable, but the opposite can happen just as well: there is a vulnerability in the western calendar, but I'm on an old major.minor version that receives no security patches anymore. So now I have to upgrade that dependency, potentially triggering an avalanche of incompatibilities with other packages, leading to further upgrades and associated breakages. Oopsie.
This is why I don't use dependencies that break backwards compatibility.
If you break my code I'm not wasting time fixing what you broke, I'm fixing the root cause of the bug: finding your replacement.
My current employer publishes "staleness" metrics at the project level. It's imperfect because it weights all the dependencies the same, but it's better than nothing.
I wonder, are there tools to help you automate this? I.e. to assign a value to the staleness of each package instead of a simple "outdated" boolean, and also a weight to each package.
E.g. something like:
pkg_staleness = (1 * missed_patch + 5 * missed_minor + 10 * missed_major) * (month_diff(installed_release_date, latest_release_date)/2)^2
proj_staleness = sum(pkg.staleness * pkg.weight for pkg in all_packages)
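As a rough Python sketch of that idea (the weights and the month-based penalty are just the made-up numbers from above, and the fields are hypothetical; a real tool would pull them from the registry or lockfile):

```python
from dataclasses import dataclass


@dataclass
class Package:
    # hypothetical fields; a real tool would derive these from registry metadata
    missed_patch: int
    missed_minor: int
    missed_major: int
    months_behind: int   # month_diff(installed_release_date, latest_release_date)
    weight: float = 1.0  # how much this package matters to the project


def pkg_staleness(p: Package) -> float:
    """Staleness score for one package, per the formula above."""
    version_lag = 1 * p.missed_patch + 5 * p.missed_minor + 10 * p.missed_major
    return version_lag * (p.months_behind / 2) ** 2


def proj_staleness(packages: list[Package]) -> float:
    """Weighted sum over all packages in the project."""
    return sum(pkg_staleness(p) * p.weight for p in packages)


# example: one package two minors and six months behind, one fully up to date
print(proj_staleness([Package(0, 2, 0, 6, weight=2.0), Package(0, 0, 0, 0)]))  # 180.0
```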
My current org uses this: update at least quarterly so dependencies don't go stale and become super hard to update.
I fought off the local imposition of Dependabot by executive fiat about a year ago by pointing out that it maximizes vulnerabilities to supply chain attacks if blindly followed or used as a metric excessively stupidly. Maximizing vulnerabilities was not the goal, after all. You do not want to harass teams with the fact that DeeplyNestedDepen just went from 1.1.54-rc2 to 1.1.54-rc3 because the worst case is that they upgrade just to shut the bot up.
I think I wouldn't object to "Dependabot on a 2-week delay" as something that at least flags. However working in Go more than anything else it was often the case even so that dependency alerts were just an annoyance if they aren't tied to a security issue or something. Dynamic languages and static languages do not have the same risk profiles at all. The idea that some people have that all dependencies are super vital to update all the time and the casual expectation of a constant stream of vital security updates is not a general characteristic of programming, it is a specific characteristic not just of certain languages but arguably the community attached to those languages.
(What we really need is capabilities, even at a very gross level, so we can all notice that the supposed vector math library suddenly at version 1.43.2 wants to add network access, disk reading, command execution, and cryptography to the set of things it wants to do, which would raise all sorts of eyebrows immediately, even perhaps in an automated fashion. But that's a separate discussion.)
It seems like some of the arguments in favor of doing frequent releases apply at least a little bit for dependency updates?
Doing updates on a regular basis (weekly to monthly) seems like a good idea so you don't forget how to do them and the work doesn't pile up. Also, it's easier to debug a problem when there are fewer changes at once.
But they could be rescheduled depending on what else is going on.
> Doing updates on a regular basis (weekly to monthly)
This lessens, but doesn't eliminate supply side vulns. You can still get a vulnerable new release if your schedule happens to land just after the vuln lands.
TFA proposes a _delay_ in a particular dependency being pulled in. You can still update every day/hour/microsecond if you want, you just don't get the "new" thing until it's baked a bit.
Yes, understood. Not arguing against cooldowns.
Dependabot only suggests upgrades when there are CVEs, and even then it just alerts and raises PRs, it doesn’t force it on you. Our team sees it as a convenience, not a draconian measure.
I use a dependabot config that buckets security updates into a separate pull than other updates. The non-security update PRs are just informational (can disable but I choose to leave them on), and you can actually spend the time to vet the security updates
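For reference, a sketch of what that kind of split looks like in `dependabot.yml`, assuming the `groups` / `applies-to` keys behave the way I remember from the docs:

```yaml
version: 2
updates:
  - package-ecosystem: npm
    directory: /
    schedule:
      interval: weekly
    groups:
      # security fixes land in their own PR so they can be vetted separately
      security:
        applies-to: security-updates
        patterns: ["*"]
      # everything else is batched into a single, mostly informational PR
      everything-else:
        applies-to: version-updates
        patterns: ["*"]
```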
That's because the security industry has been captured by useless middle manager types who can see that "one dependency has a critical vulnerability", but could never in their life scrounge together the clue to analyze the impact of that vulnerability correctly. All they know is the checklist fails, and the checklist cannot fail.
(Literally at one place we built a SPA frontend that was embedded in the device firmware as a static bundle, served to the client and would then talk to a small API server. And because these NodeJS types liked to have libraries reused for server and frontend, we would get endless "vulnerability reports" - but all of this stuff only ever ran in the client's browser!)
You could use this funky tool from oss-rebuild which proxies registries so they return the state they were at a past date: https://github.com/google/oss-rebuild/tree/main/cmd/timewarp
That... looks like it would let me do this exactly the way I want to for npm and python. For my C++ stuff, it's all vendored and manual anyway.
I had not seen that tool. Thanks for pointing it out.
> "it'll only get harder in the future"
that's generally true, no?
of course waiting a few days/weeks should be the minimum unless there's a CVE (or equivalent) that applies
I once almost managed to get back on the release train. I was tasked with adding new features to a piece of software originally developed years ago for internal use at a university. It was running PHP 5 on Debian Jessie. The first hurdle was that no Docker image existed for the production environment (PHP 5). After patching all the code and getting it to run on PHP 8, another issue surfaced: the MSSQL server it needed to communicate with only supported TLS 1.0, which had been removed from the OpenSSL version included in Debian 12—the base image for my PHP 8 setup.
In the end, I decided to implement a lightweight PHP 5 relay to translate SQL requests so the MSSQL server could still be accessed. The university staff were quite satisfied with my work. But I really felt guilty for the next guy who will have to touch this setup. So I still didn’t quite make it back onto the release train… as that PHP 5 relay counts.
Jumping straight to the new release because it fixed one security bug has always struck me as a roundabout way of trying to achieve security through obscurity, especially when the releases include tons of other changes. Yes, this release fixed CVE-123, but how many new ones were added?
This is a valid security strategy tho, always shifting the ground beneath the attacker's feet. As the code author, you might not know where there are vulnerabilities in your code, but someone targeting you does. You will never have bug free code, so better to just keep it in constant flux than allow an attacker to analyze an unchanging application over months and years.
> Sadly, not enough tooling seems to take this into account
Most tooling (e.g. Dependabot) allows you to set an interval between version checks. What more could be done on that front exactly? Devs can already choose to check less frequently.
The check frequency isn't the problem, it's the latency between release and update. If a package was released 5 minutes before dependabot runs and you still update to it, your lower frequency hasn't really done anything.
What are the chances of that, though? The same could happen if you wait X amount of days for the version to "mature" as well. A security issue could be found five minutes after you update.
EDIT: Github supports this scenario too (as mentioned in the article):
https://github.blog/changelog/2025-07-01-dependabot-supports...
https://docs.github.com/en/code-security/dependabot/working-...
> What are the chances of that, though?
The whole premise of the article is that they’re substantially lower, because some time for the ecosystem of dependency scanners and users to detect and report is better than none.
>The thing to do is to monitor your dependencies and their published vulnerabilities, and for critical vulnerabilities to assess whether your product is affect by it. Only then do you need to update that specific dependency right away.
The practical problem with this is that many large organizations have a security/infosec team that mandates a "zero CVE" posture for all software.
Where I work, if our infosec team's scanner detects a critical vulnerability in any software we use, we have 7 days to update it. If we miss that window we're "out of compliance", which triggers a whole process that no one wants to deal with.
The path of least resistance is to update everything as soon as updates are available. Consequences be damned.
You view this as a burden, but (at least if you operate in the EU) I’d argue you’re actually looking at a competitive advantage that hasn't cashed out yet.
Come 2027-12, the Cyber Resilience Act enters full enforcement. The CRA mandates a "duty of care" for the product's lifecycle, meaning if a company blindly updates a dependency to clear a dashboard and ships a supply-chain compromise, they are on the hook for fines up to €15M or 2.5% of global turnover.
At that point, yes, there is a sense in which the blind update strategy you described becomes a legal liability. But don't miss the forest for the trees, here. Most software companies are doing zero vetting whatsoever. They're staring at the comet tail of an oncoming mass extinction event. The fact that you are already thinking in terms of "assess impact" vs. "blindly patch" already puts your workplace significantly ahead of the market.
The CRA, unfortunately, also has language along the lines of "don't ship with known vulnerabilities", without defining who determines what is a vulnerability and how, so I fully expect this no-thoughts-only-checkboxes approach to increase with it (there's already a bunch of other standards which can be imposed on organizations from various angles which essentially force updates without any consideration of the risk of introducing new vulnerabilities or supply-chain attacks).
My previous job we did continuous deployment and had a weekly JIRA ticket where an engineer would merge dependabot PRs. We scanned everything in our stack with Trivy to be aware of security vulnerabilities and had processes to ensure they were always patched within 2 weeks.
I really dislike that approach. We're by now evaluating high-severity CVEs ASAP in a group to figure out if we are affected, and if mitigations apply. Then there is the choice of crash-patching and/or mitigating in parallel, updating fast, or just prioritizing that update more.
We had like 1 or 2 crash-patches in the past - Log4Shell was one of them, and blocking an API no matter what in a component was another one.
In a lot of other cases, you could easily wait a week or two for directly customer facing things.
> The practical problem with this is that many large organizations have a security/infosec team that mandates a "zero CVE" posture for all software.
The solution is to fire those teams.
This isn’t a serious response. Even if you had the clout to do that, you’d then own having to deal with the underlying pressure which led them to require that in the first place. It’s rare that this is someone waking up in the morning and deciding to be insufferable, although you can’t rule that out in infosec, but they’re usually responding to requirements added by customers, auditors needed to get some kind of compliance status, etc.
What you should do instead is talk with them about SLAs and validation. For example, commit to patching CRITICAL within x days, HIGH with y, etc. but also have a process where those can be cancelled if the bug can be shown not to be exploitable in your environment. Your CISO should be talking about the risk of supply chain attacks and outages caused by rushed updates, too, since the latter are pretty common.
Aren't some of these government regulations for cloud, etc.?
SOC2/FIPS/HIPAA/etc don't mandate zero-CVE, but a zero-CVE posture is an easy way to dodge all the paperwork that would be involved in exhaustively documenting exactly why each flagged CVE doesn't actually apply to your specific scenario (and then potentially re-litigating it all again in your annual audit).
So it's more of a cost-cutting/cover-your-ass measure than an actual requirement.
There are several layers of translation between public regulations, a company's internal security policy and the processes used to enforce those policies.
Let's say the reg says you're liable for damages caused by software defects you ship due to negligence, giving you broad leeway how to mitigate risks. The corporate policy then says "CVEs with score X must be fixed in Y days; OWASP best practices; infrastructure audits; MFA; yadda yadda". Finally the enforcement is then done by automated tooling like sonarqube, prisma, dependabot, burpsuite, ... and any finding must be fixed with little nuance because the people doing the scans lack the time or expertise to assess whether any particular finding is actually security-relevant.
On the ground the automated, inflexible enforcement and friction then leads to devs choosing approaches that won't show up in scans, not necessarily secure ones.
As an example I witnessed recently: A cloud infra scanning tool highlighted that an AppGateway was used as TLS-terminating reverse proxy, meaning it used HTTP internally. The tool says "HTTP bad", even when it's on an isolated private subnet. But the tool didn't understand Kubernetes clusters, so a public unencrypted ingress, i.e. public HTTP, didn't show up. The former was treated as a critical issue that must be fixed asap or the issue will get escalated up the management chain. The latter? Nobody cares.
Another time I got pressure to downgrade from Argon2 to SHA2 for password hashing because Argon2 wasn't on their whitelist. I resisted that change but it was a stressful bureaucratic process with some leadership being very unhelpful and suggesting "can't you just do the compliant thing and stop spending time on this?".
So I agree with GP that some security teams barely correlate with security, sometimes going into the negative. A better approach would be to integrate software security engineers into dev teams, but that'd be more expensive and less measurable than "tool says zero CVEs".
Sure I'll go suggest that to my C-suite lol
Someone has the authority to fix the problem. Maybe it’s them.
I think the main question is: do your app get unknown input (i.e. controlled by other people).
Browsers get a lot of unknown input, so they have to update often.
A Weather app is likely to only get input from one specific site (controlled by the app developers), so it should be relatively safe.
Fun part is people are worried about 0-days, but in reality most problems come from 300- or 600-day-old unpatched vulns.
"The thing to do is to monitor your dependencies and their published vulnerabilities, and for critical vulnerabilities to assess whether your product is affect by it."
Yes
"Only then do you need to update that specific dependency right away."
Big no. If you do that it is guaranteed one day you miss a vulnerability that hurts you.
To frame it differently: What you propose sounds good in theory but in practice the effort to evaluate vulnerabilities against your product will be higher than the effort to update plus taking appropriate measures against supply chain attacks.
And it needs to be said that you generally cannot tell if a vulnerability is critical for a given application except by evaluating the vulnerability in the context of said application. One that I've seen is some critical DoS vulnerability due to a poorly crafted regex. That sort of vulnerability is only relevant if you are passing untrusted input to that regex.
> People in this thread are worried that they are significantly vulnerable if they don't update right away
Most of them assume that because they are working on some publicly accessible website, 99% of the people and orgs in the world must also be running nothing but publicly accessible websites.
A million times this. You update a dependency when there are bug fixes or features that you need (and this includes patching vulnerabilities!). Those situations are rare. Otherwise you're just introducing risk into your system - and not that you're going to be caught in some dragnet supply chain attack, but that some dependency broke something you relied on by accident.
Dependencies are good. Churn is bad.
I'd rather have excellent quality test suites, and pipelines I can have a high degree of confidence in to catch these issues and mitigate the risk, than dependencies that basically never get updated and then become very difficult to do so, which just introduce different classes of risk.
Vulnerabilities are not rare, the two most popular programming languages, javascript (node) and python, have 1000+ CVEs in their official docker images. I.e. in practice useless and shouldn't be used by anyone for anything.
Also, if you are updating "right away" it is presumably because of some specific vulnerability (or set of them). But if you're in an "update right now" mode you have the most eyes on the source code in question at that point in time, and it's probably a relatively small patch for the targeted problem. Such a patch is the absolute worst time for an attacker to try to sneak anything in to a release, the exact and complete opposite of the conditions they are looking for.
Nobody is proposing a system that utterly and completely locks you out of all updates if they haven't aged enough. There is always going to be an override switch.
I agree. Pick your poison. My poison is waiting before upgrades and assessing zero-days case by case.
The Debian stable model of having a distro handle common dependencies with a full system upgrade every few years looks more and more sane as years pass.
It's a shame some ecosystems move waaay too fast, or don't have a good story for having distro-specific packages. For example, I don't think there are Node.js libraries packaged for Debian that allow you to install them from apt and use it in projects. I might be wrong.
Never mistake motion for action.
An ecosystem moving too quickly, when it isn't being fundamentally changed, isn't a sign of a healthy ecosystem, but of a pathological one.
No one can think that js has progressed substantially in the last three years, yet trying to build any three-year-old project without updates is so hard that a rewrite is a reasonable solution.
> No one can think that js has progressed substantially in the last three years
Are we talking about the language, or the wider ecosystem?
If the latter, I think a lot of people would disagree. Bun is about three years old.
Other significant changes are Node.js being able to run TypeScript files without any optional flags, or being able to use require on ES Modules. I see positive changes in the ecosystem in recent years.
That is motion not action.
The point of javascript is to display websites in the browser.
Ask yourself, in the last three years has there been a substantial improvement in the way you access websites? Or have they gotten even slower, buggier and more annoying to deal with?
> The point of javascript is to display websites in the browser.
I don't follow. JavaScript is a dynamic general purpose programming language. It is certainly not limited to displaying websites, nor is it a requirement for that. The improvements I mentioned in the previous post aren't things you'd get the benefit of inside a web browser.
No but the devs can push slower, buggier, more annoying websites to prod, faster!
And after all, isn’t developer velocity (and investor benefits) really the only things that matter???
/sssss
> modern websites
You are comparing the js ecosystem and bad project realizations/designs.
> Action vs motion
I think the main difference you mean is the motivation behind changes: is it a (re)action to achieve a measurable goal, is this a fix for a critical CVE, or just some dev having fun and pumping up the numbers?
GP mentioned the recent feature of executing ts, which is a reasonable goal imo, with a lot of beneficial effects down the line, but in the present just another hassle to take care of. So is this a pointless motion or a worthy action? Both statements can be correct, depending on your goals.
> For example, I don't think there are Node.js libraries packaged for Debian that allow you to install them from apt and use it in projects
Web search shows some: https://packages.debian.org/search?keywords=node&searchon=na... (but also shows "for optimizing reasons some results might have been suppressed" so might not be all)
Although probably different from other distros, Arch for example seems to have none.
Locally, you can do:
apt-cache showpkg 'node-*' | grep ^Package:
which returns 4155 results, though 727 of them are type packages. Using these in commonjs code is trivial; they are automatically found by `require`. Unfortunately, system-installed packages are yet another casualty of the ESM transition ... there are ways to make it work but it's not automatic like it used to be.
> there are ways to make it work but it's not automatic like it used to be
Out of curiosity, what would you recommend? And what would be needed to make them work automatically?
Some solutions include: add a hook to node on the command line, fake a `node_modules`, try to make an import map work, just hard-code the path ... most of this is quite intrusive unlike the simple `require` path.
I really don't have a recommendation other than another hack. The JS world is hacks upon hacks upon hacks; there is no sanity to be found anywhere.
> Unfortunately, system-installed packages are yet another casualty of the ESM transition ...
A small price to pay for the abundant benefits ESM brings.
Honestly, I don't see the value gain, given how many other problems the JS tooling has (even ignoring the ecosystem).
In particular, the fact that Typescript makes it very difficult to write a project that uses both browser-specific and node-specific functionality is particularly damning.
It is possible to work with rust, using debian repositories as the only source.
The stable model usually implies that your app has to target both the old and the new distro version for a while. That is a bit too much to ask for some, unfortunately
There's a tradeoff and the assumption here (which I think is solid) is that there's more benefit from avoiding a supply chain attack by blindly (by default) using a dependency cooldown vs. avoiding a zero-day by blindly (by default) staying on the bleeding edge of new releases.
It's comparing the likelihood of an update introducing a new vulnerability to the likelihood of it fixing a vulnerability.
While the article frames this problem in terms of deliberate, intentional supply chain attacks, I'm sure the majority of bugs and vulnerabilities were never supply chain attacks: they were just ordinary bugs introduced unintentionally in the normal course of software development.
On the unintentional bug/vulnerability side, I think there's a similar argument to be made. Maybe even SemVer can help as a heuristic: a patch version increment is likely safer (less likely to introduce new bugs/regressions/vulnerabilities) than a minor version increment, so a patch version increment could have a shorter cooldown.
If I'm currently running version 2.3.4, and there's a new release 2.4.0, then (unless there's a feature or bugfix I need ASAP), I'm probably better off waiting N days, or until 2.4.1 comes out and fixes the new bugs introduced by 2.4.0!
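If I'm reading Dependabot's cooldown settings right, that heuristic can already be expressed with per-semver-level cooldowns; the key names below are from memory, so treat it as a sketch:

```yaml
version: 2
updates:
  - package-ecosystem: pip
    directory: /
    schedule:
      interval: weekly
    cooldown:
      default-days: 7
      semver-patch-days: 3    # patch releases: shortest wait
      semver-minor-days: 7
      semver-major-days: 14   # major releases: give 2.4.1 time to land first
```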
Yep, that's definitely the assumption. However, I think it's also worth noting that zero-days, once disclosed, do typically receive advisories. Those advisories then (at least in Dependabot) bypass any cooldown controls, since the thinking is that a known vulnerability is more important to remediate than the open-ended risk of a compromised update.
> I'm sure the majority of bugs and vulnerabilities were never supply chain attacks: they were just ordinary bugs introduced unintentionally in the normal course of software development.
Yes, absolutely! The overwhelming majority of vulnerabilities stem from normal accidental bug introduction -- what makes these kinds of dependency compromises uniquely interesting is how immediately dangerous they are versus, say, a DoS somewhere in my network stack (where I'm not even sure it affects me).
Could a supply chain attacker simulate an advisory-remediating release somehow, i.e., abuse this feature to bypass cooldowns?
Of course. They can simply wait to exploit their vulnerability. If it is well hidden, then it probably won't be noticed for a while, so they can wait until it is running on the majority of their target systems before exploiting it.
From their point of view it is a trade-off between volume of vulnerable targets, management impatience and even the time value of money. Time to market probably wins a lot of arguments that it shouldn't, but that is good news for real people.
You should also factor in that a zero-day often isn’t surfaced enough to be exploitable if you are using the onion model, with other layers that need to be penetrated together. In contrast, a supply chain compromise is designed to actively make outbound connections through any means possible.
Thank you. I was scanning this thread for anyone pointing this out.
The cooldown security scheme appears like some inverse "security by obscurity": nobody could see a backdoor, therefore we can assume security. This scheme stands and falls with the assumed timelines. Once this assumption tumbles, picking a cooldown period becomes guesswork. (Or another compliance box ticked.)
On the other side, the assumption can very well be sound, maybe ~90% of future backdoors can be mitigated by it. But who can tell. This looks like the survivorship bias, because we are making decisions based on the cases we found.
I’d estimate the vast majority of CVEs in third-party source are not directly or indirectly exploitable. The CVSS scoring system assumes the worst-case scenario the module is deployed in. We still have no good way to automate adjusting the score or even just figuring out false positives.
Defaults are always assumptions. Changing them usually means that you have new information.
The big problem is the Red Queen's Race nature of development in rapidly-evolving software ecosystems, where everyone has to keep pushing versions forward to deal with their dependencies' changes, as well as any actual software developments of their own. Combine that with the poor design decisions found in rapidly-evolving ecosystems, where everyone assumes anything can be fixed in the next release, and you have a recipe for disaster.