This is not an opinion, this is not an opinion... it's an opinion, isn't it?
There was a recent incident where Microsoft allegedly blocked the mailbox of a sanctioned individual. Any organization highly dependent on MS products that might come into the crosshairs should ask: can this happen to me? What would be the cost? How much should I invest in preventing this scenario? In this article I try to get the facts straight and use a return on security investment calculation to judge this situation in a rational way. Let’s grab our tinfoil hats and find out if it’ll be fine.
I don’t like to cover recent news. But I do want to take a moment and consider a couple of implications of the recent alleged blocking of Microsoft services for the ICC.
For better or for worse, availability and business continuity usually fall under the security umbrella. That’s why I want to have a clear reference of this situation and some organized thoughts.
I will try to stick to the facts and hypotheticals. If you catch me having and voicing an opinion, please let me know and I’ll try to adjust accordingly. Alternatively, feel free to take a drink whenever you spot an opinion and then blame your drunkenness on this article. For legal reasons, that’s a joke.
I have these questions:
What happened?
What is the probability of an MS cutoff?
What is the cost of an MS cutoff?
How much should I invest to prevent this?
I’ll try to answer each of those in a separate section.
The story, summarized to fit a tweet, is basically: Trump introduced sanctions against the ICC and Microsoft disabled the account of at least one such official, locking them out of their work mailbox.
Sources supporting the claim:
Sources disputing the claim:
Microsoft didn’t cut services to International Criminal Court, its president says (Politico, 04-06-2025)
Here, the claim is that only a single person was targeted, rather than the whole court. Note that the article does not dispute that this single person was cut off.
Critical part:
Microsoft declined to comment further in response to questions regarding the exact process that led to Khan's email disconnection, and exactly what it meant by “disconnection.” The ICC declined to comment.
Other sources:
The chain of events observed is roughly:
USA imposes sanctions
Microsoft complies with said sanctions by blocking their customer.
I am not a lawyer, so I won’t go into any detail of whether this is legal. I’d rather argue that by the time legality is established, the damage has already been done. No doubt that plenty of requests will be perfectly legal.
I have to talk about the current US president, since it seems that he has massive power over this process.
Trump’s politics are, for lack of a better publishable word, unpredictable. This is even worse than if he were straight-up hostile to everyone, because then it would be easier to justify a very costly change. Check this for further explanation, especially:
Ironically, Trump’s additional goal of “reshoring” manufacturing to the U.S. is undercut by his own tariff unpredictability. Both domestic and foreign investors are loath to invest in reshored or new U.S. industries when there is no way to know what the specific tariffs will be for each product and each country.
There seems to be no clear way of knowing whether you or your company will be at odds with Trump (and, by extension, with the USA) in the near future.
OK, so a decision to impose sanctions was made. Can it be executed? It seems that yes.
Surely, this is temporary - after all, it’s his second term and that will end one day. Or not - Trump Says He Will ‘Negotiate’ Third Term Because He’s ‘Entitled’ To It. That will be decided in 2028, which is sort of far away.
If you’d like to oppose the observation that Trump will impose sanctions in a wildly unpredictable manner, be my guest. I am not saying that it will happen every day, I am saying that it takes one bad public remark from one of the company leaders and the whole company might be a target (example 1, example 2 (from 2018!)).
There is some silver lining though: this isn’t happening very often. That’s not a small thing to observe, as we’ll see. Should this mechanism prove effective, we might see an acceleration; should Microsoft push back more, we might not.
Continuing our hypothetical, the sanctions go live; let’s see what MS would do.
This is the real story, in my opinion (I know, I know - erasing that, getting back to facts).
It’s no secret that MS has billions of USD contracted with various US government departments. Example contract with a ceiling of 9 billion dollars, further commentary by The Register.
If I wanted to convince Microsoft to do something, I would threaten to pull my contracts with them. If you think this is irrational, please refer to the previous section.
MS now has a decision to make. No matter what they do, they are bound to anger someone. Moreover, it can be hard to predict who, how, and for how long (rarely indefinitely). What’s easier to predict is whether you are losing some amount of money or possibly an absolutely ludicrous amount of money.
That’s why I argue it’s likely that MS will comply with requests from the US government. And let’s not forget that plenty of those requests might be perfectly legal and binding.
Again, a silver lining - you’d argue that this rarely happens, and you’re right. The chances of this happening are very slim, but the consequences are dire. I have no doubt that a vast majority of companies will be able to sit through this with no harm at all.
Let me offer another perspective and that is that the software world has changed. Here I’ll argue that software companies have a new ability (and responsibility) to enable/disable their software. Remember keygens?
You see, if I had walked up to Microsoft in 1999 with a demand to shut down a company, it might not have been possible to do so. That company would have had its own email servers, backups, and MS software that would continue working offline, as there was no server to disable it.
Today, everybody knows that Microsoft can cut the lifeline to all of their products, completely disabling them in the process. MS 365 is the most obvious example, anything Azure a close second. A final example here is a feature I was looking forward to for some time - Python in Excel. I was horrified to learn that there’s an Azure container behind every cell of a spreadsheet executing the Python code instead of… you know, my PC doing the work.
To provide the opposite view, there are cases when Microsoft absolutely should refuse to provide their services. I am surprised that law enforcement isn’t asking them to pull the plug more often (at least I don’t know of such cases).
And to conclude with a number, there are more than 2 million companies in the world that use Microsoft 365 products. So let’s just say that your chance of being selected in a given year is 1 in 2 million. Of course, this probability might go up and down, but let’s say it’s not likely to jump by several orders of magnitude.
Two steps again, let’s find out:
how companies depend on MS,
what a full MS outage looks like.
To give credit to Microsoft, rarely do their services fail completely. There isn’t a whole lot of data for these cases (AFAIK, please send some if you have them).
To me, this is a no-brainer (is that an opinion?). It’s a good place to mention the famous Embrace, Extend, Extinguish slogan of MS. I was trying quite hard to find any kind of sources on this, but they are surprisingly scarce (reddit agrees).
I thought that OS market share would showcase this well, but no, Android wins with ~46% to Windows’ ~25% (did you know that there are more mobile phone owners than toothbrush owners?). The last trick I have is to go to any PC vendor. Nearly all have Windows pre-installed - check this very limited list of non-Windows PC vendors.
IT is not only the workstations. If you imagine a typical MS-enabled IT department, you’d find:
Communication through MS Exchange and MS Teams,
Intranet in MS Sharepoint,
Documents in MS Office,
MS Active Directory variant for identity management and auth,
Backups in MS Azure and/or MS OneDrive,
Windows on employee workstations.
This dependency on MS services further increases when using their MS 365 suite, which provides cloud SaaS backend for the above.
From these, the email point seems to be the most critical. In 2025, you really want a big provider to handle your emails - this point is so non-controversial that you have hosting companies advocating against hosting your own email. While it seems that the Android phones strike again (you pretty much need a Gmail account to use them), Outlook is in second place for business usage (also quite hard to find good sources).
Imagine that suddenly none of this works. Good luck getting any work done. Arguing that workers need to communicate and process documents for a company to function is out of scope for this article.
As a recent example, the CrowdStrike incident of July 2024 lasted about a day and cost the average Fortune 500 company 44 million dollars. Since it was a Windows-based solution, I’d say it closely resembles a full MS outage. You might say it’s not a lot of money, but imagine if the issue stayed for a week, a month, or perhaps longer.
Another source points at a conservative estimate of 1,670 USD per minute of outage for SMBs and up to 16,700 USD per minute per server (!) for enterprises (yes, that’s 100k USD per hour and 1 million USD per hour respectively, I saw that). A 2014 study by Gartner puts the average at 5,600 USD per minute, which is aligned with the previous numbers.
If MS decided to cut you off, it would take some time to regroup. Try to be honest with yourself: can you build a whole new IT stack in a company in 14 days? Let’s say that yes. The cost is still ridiculous - 33,667,200 USD for our SMB example, around 113 million USD with the Gartner estimate. For huge enterprises? The sky is the limit.
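To make the arithmetic explicit, here is the same back-of-the-envelope calculation as a quick Python sketch (the per-minute rates are the estimates cited above):

```python
# Back-of-the-envelope cost of a hypothetical 14-day full MS cutoff.
minutes = 14 * 24 * 60                   # 20,160 minutes of outage

smb_rate = 1_670                         # USD per minute (SMB estimate)
gartner_rate = 5_600                     # USD per minute (2014 Gartner average)

print(f"SMB estimate:     {smb_rate * minutes:,} USD")      # 33,667,200 USD
print(f"Gartner estimate: {gartner_rate * minutes:,} USD")  # 112,896,000 USD
```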
I finally have an excuse to write about return on security investment (ROSI). Businesses love the return on investment (ROI) formula, and ROSI is an attempt to modify it to be more applicable for security purposes. The issue is that security doesn’t generate any profits whatsoever. We are in the business of loss prevention. If we achieve greater loss reduction than the security investment costs, we’re winning.
To fully utilize the formula, we need to define a couple of terms:
Single Loss Expectancy: The thing happens once, how much does it cost? Let’s start with the lower bound of 34 million USD.
Annual Rate of Occurrence: How often does the thing happen? Previously we guesstimated 1 / 2 000 000.
Annual Loss Expectancy: Multiply the above together and you’re getting how much you’re losing on average. In our case that’s 17 USD a year (hold on…).
Mitigation Ratio: How effective is the change that we’ve introduced? Say that we’ve migrated to Linux and this risk is now zero. That means the mitigation ratio is 100%, which translates to 1 in such formulas. In reality, the chance of your company failing to maintain a fleet of Linux machines is probably higher than 1 in 2 million.
Solution Cost: The cost of the security solution. Usually an input, but since in this case it’s quite hard to calculate, I’ll turn the formula around and solve for this variable and we’ll get a maximum effective budget.
Return on Security Investment: Finally, we’ll use all of the data above to calculate ROSI = (Annual Loss Expectancy × Mitigation Ratio − Solution Cost) / Solution Cost.
Turning the formula around, solving for solution cost: Solution Cost = (Annual Loss Expectancy × Mitigation Ratio) / (ROSI + 1).
Let’s assume that ROSI isn’t -100% (i.e., -1 in the formula) - that would mean that either the annual loss expectancy is 0 (nothing to do) or the solution mitigated nothing.
Let’s also say that we want a ROSI of 100% - the solution should pay for itself in a year. We’re getting a measly 8.50 USD as a maximum budget for a perfect SMB solution. But hey, the MS licenses and service fees we’ve cut should be added on top of it!
Enterprises might have some more room to maneuver. Recall the 16,700 USD per minute per server, imagine a deployment of 1,000 servers, and you’re looking at a maximum budget of about 84,168 USD a year. That’s… one DevOps engineer? Maybe two? Handling all of the IT of a large enterprise? I don’t think that’s going to work either.
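To make the whole chain reproducible, here it is as a small Python sketch; every input is a guesstimate from above, not measured data:

```python
# ROSI worked example with the guesstimates from this article.
def max_budget(sle, aro, mitigation_ratio, target_rosi):
    """Solve ROSI = (ALE * mitigation - cost) / cost for the solution cost."""
    ale = sle * aro  # Annual Loss Expectancy
    return ale * mitigation_ratio / (target_rosi + 1)

ARO = 1 / 2_000_000    # guesstimated chance of a cutoff in a given year
MITIGATION = 1.0       # assume the migration eliminates the risk entirely
TARGET_ROSI = 1.0      # 100%: the solution should pay for itself in a year

# SMB: the rounded 34M USD lower bound from above
print(max_budget(34_000_000, ARO, MITIGATION, TARGET_ROSI))      # 8.5 USD

# Enterprise: 16,700 USD/min/server * 1,000 servers * 14 days
enterprise_sle = 16_700 * 1_000 * 14 * 24 * 60
print(max_budget(enterprise_sle, ARO, MITIGATION, TARGET_ROSI))  # 84,168.0 USD
```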
But wait! We forgot to add the MS services and licenses we’re dumping. Now we have something to work with - there is a single report noting that Walmart spent 580 million USD on MS services (see - this is approximately 5% of US government spend on MS). With that budget, you can certainly try to build your own cloud. You’d still have to be more efficient than Microsoft though - that’s a challenge. And train all of your users to use something else. Good luck.
The issue with this approach is that it’s only as good as the data you put in. Tweaking the variables gets you nowhere, really - the probability of this occurring is so small that, unless you have some confidence that the US government would be out to get you through MS products, it will drown out any cost.
Getting good data is tough. Take a good look at how many assumptions we’ve made along the way. There is not a single solid number anywhere to be found in the process. This issue is bigger than you’d think, and I hate to conclude that these calculations are practically impossible. Blessed are the lawyers, for they shall inherit cybersecurity (go read this, seriously).
What does this say about security? We just can’t seem to find good enough data to model the situations we’re facing. On one hand, it shows the immaturity of the field. Arsonists have been burning buildings for thousands of years, while the internet is not even a hundred years old! On the other hand, we all know that the field is changing rapidly and it’s hard to keep up with all the advances. Of course it’s hard to get a solid dataset when the attackers shift their tactics by the month.
Taking a look at the single incident cost - it doesn’t feel right to ignore it either. A single incident costing millions of dollars is a death sentence for most small companies. Alas, ignoring it is the rational thing to do. And that’s why risk management is hard and non-intuitive. Leave your emotions at the door and embrace the cold touch of numbers and logic.
Look, when I read the news, I nearly jumped from my chair. Surely, I thought, MS went too far this time. Surely the rational customer would realize that the risk this causes is unacceptable, and surely there’s a method to prove it precisely.
It seems that none of the above is supported by facts. It is even more important to approach the issue rationally, with a clear head, unless you want to recommend an insane solution to a problem that doesn’t exist in the eyes of other people. It doesn’t matter that you’re right; it matters that you’re seemingly insane.
To give more credit to MS, the Embrace, Extend, Extinguish strategy is working well. If you’ve ever tried to teach Linux to a family member, you know what I am talking about. The cost of switching away is so prohibitive that Microsoft can do pretty much anything they like.
I’ve got one last twist for you. Not all organizations care about costs. The Danish government is seriously considering dumping Microsoft. For a government agency, there are values other than profit. That’s good.
It’ll be fine - Ça Ira. Yes, I am not passing an opportunity to recommend this iconic Gojira performance (metal ahead):
The trick with Microsoft is to very carefully separate the good parts from the bad ones.
Labeling all of Microsoft as banned is really constraining your technology options. This is a gigantic organization with a very diverse set of people in it.
There aren't many things like .NET, MSSQL and Visual Studio out there. The debugger experience in VS is the holy grail if you have super nasty real world technology situations. There's a reason every AAA game engine depends on it in some way.
Azure and Windows are where things start to get bad with Microsoft.
> There aren't many things like .NET, MSSQL and Visual Studio out there. The debugger experience in VS is the holy grail if you have super nasty real world technology situations. There's a reason every AAA game engine depends on it in some way.
The reason all the AAA games are on it is because they're on the Windows platform, and more importantly their customers are on the Windows platform.
If 95% of gamers ran MacOS instead of Windows, you'd see a very different tech stack among game developers.
Game customers are on Windows because DirectX has been superior to OpenGL - development wise - for what, 30 years?
> OpenGL
OpenGL is legacy tech, just as DirectX
Vulkan is the new shader thing, and has been for at least a decade by now.
DirectX12 is still much better to use than Vulkan.
Natively supports Xbox and PC. Can run on Linux with Proton. The Playstation API functionally resembles DX12.
Vulkan is extension management hell (but has gotten much better, I concede)
> has been
I was talking about the past.
No, they're on Windows because it was the only viable gaming desktop environment during the 90's and 00's. Apple was all but dead and hardware was limited, Linux was in its infancy, Unix vendors didn't care about normal desktop users, etc...
In the early days of 3D gaming, there were studios that used OpenGL over DirectX on Windows. ID Software were the best known example of choosing OpenGL over DirectX.
Of course excellent OpenGL products exist (ID software is the worst example because they were... geniuses), but from the developer point of view, DirectX was the full package.
Yes, that's the essential reason. Even though that barrier has been lifted with Vulkan matching DX12 in many ways, the accumulated mass moves slowly.
Has there been a game where Vulkan performance has been better than DX12? Whenever they are side by side, Vulkan always performs worse in my experience.
There have been cases like that, e.g., RDR2, but I think it mostly comes down to implementation quality.
Everything feeds everything else. If Apple had a stack and a business model that worked for game developers, you’d see a different stack.
Microsoft is where it is because they are viciously competitive at different layers of the stack. Apple wants a piece of every nickel, Microsoft wants a piece of every computer. They license Windows for every Mac user in a company.
How do you separate the good from the bad? What do you do when Microsoft changes the good things into bad things?
My take is that Microsoft consistently makes bad things and makes "good" things into "bad" things; so, I don't have much expectation or faith that anything that I currently think is "good" will stay that way.
Services are bad - that is what the first part of the story is about.
However I do not think it is different for any online service. Any American company would have to cut off services to an individual (or organisation) subject to sanctions (the main example given). The same might apply to other countries for various reasons. There are various reasons a service might fail, or cut off a particular customer (lots of reasons, lots of examples in previous HN discussion).
What has changed is that the typical MS customer is a lot more dependent on MS services - MS 365, Python in Excel ONLY works in the cloud, people use hosted email instead of their own Exchange installation... That means MS cutting off a customer would mean all their IT would cease working. They can just shut down any organisation with that level of dependency if they are ordered to, or decide to, do so.
> How do you separate the good from the bad?
Developer tools and enterprise stuff good (mostly). Consumer products bad.
MS office is 30 years ahead of open office.
For whom? Microsoft?
I don't know which of their developer tools I would consider good. Or less worse than the competition
I consider C# / .NET to be one of the best options for application development.
Many would consider both VS Code and Visual Studio pretty good. There might be better alternatives, but generally I'd say they are more good than bad. GitHub is also a good product. Maybe not exactly a developer tool, but Power BI is also fairly good.
Borderline developer / enterprise solution: SQL Server is great to work with. Maybe not the best relational database server, but it's every bit as capable as MariaDB and I'd prefer it over Oracle.
"There aren't many things like .NET, MSSQL and Visual Studio out there. The debugger experience in VS is the holy grail if you have super nasty real world technology situations. There's a reason every AAA game engine depends on it in some way."
I'm not interested in writing AAA game engines and nor is most of the world. If that is it, then you have damned MS with (very) faint praise.
Bah, leaving out .NET like this is ignorance, considering the amount of custom applications every company has written on it.
RAD was a game changer and I think you don't know the extent and penetration of .NET in the enterprise
Well, this is clearly just an example of a hard problem that MS tools are good for.
The MOST common developers on the MS stack work in business apps and web, data, integration stuff.
That is a much better fit for MS, and there is NO good counterpart on OSX or Linux.
One of the major shocks I get when starting to work on OSX is how much less developed EVERYTHING is outside the MS stack.
The only reason you have a good life working on OSX, and less so on Linux, is because the web levels the playing field.
But if this were a contest of "native" vs "native", it's clear the MS stack is ahead.
(Much more so before, because of course the web changed the equation, so you can claim things FOR THE WEB are better on Linux and even OSX)
I think you misunderstand- game engines are complex beasts and visual studio and/or .Net (in any of its incarnations) have the best debugging workflow I've seen.
Typescript is also Microsoft. So is ONNX.
What makes it better than say IntelliJ? Is there some feature which helps you more with debugging?
It's been a few years since I've used Visual Studio, but for longest time its support for debugging multithreaded and GPU code was unmatched. This is one of the main reasons game developers loved VS. It also had good support for mixed language debugging which is very useful when your C# code calls a C++ library for example.
"I think you misunderstand- game engines are complex beasts and visual studio and/or .Net (in any of its incarnations) have the best debugging workflow I've seen."
I think you misunderstand: the market, ie the number of people who actually care about developing game engines, is tiny.
How many games developers do you know as a subset of the people you know of?
OP only managed to find a niche product area for MS to shine in and maintain traction - the moat thing. Nothing else apparently.
I for one would not miss MS one jot. I wasted so much time with things like autoexec.bat and config.sys back in the day. I got good at it - Novell gave me a T shirt on Cool Solutions for a boot floppy image that managed to try several popular NIC drivers (3c595, 3c905, 3c509, ne1000 and a few others) and get you to a network connection for imaging or whatever. Later on I get to ignore SFC /SCANNOW answers to searches. Do you remember WINS? What about the horror of time sync? The PDC emulator FSMO role is basically a NT domain controller. AD was a bodge from day one, tacked onto ...
Sorry, got carried away there.
Again, Typescript is cared about by whom and what on earth is ONNX?
A game engine is often an example of a 'complex beast'.
No one is arguing that developing game engines specifically is common.
Thanks for trying to expound on my expounding on the original. But the response indicates they don't know and actively avoid learning. Thus, nothing would change their mind.
PS: to throw some shade- I'm surprised they didn't (mis)spell it M$- after all everything they mentioned is making me nostalgic for phpBB based tech forums in 2004.
ONNX is a format that allows you to run AI models without Python, in any language that implements ONNX - there's even an ONNX implementation in Go. That means you can squeeze more performance out of AI models and waste drastically fewer resources (Go, Rust, C++, Zig, C, D, etc. could be used). Think of how Java produces a JAR file: an ONNX file can likewise be run by any runtime built for it. Another reasonable analogy would be WebAssembly, to a degree.
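For a concrete feel, here's roughly what running an ONNX model looks like from Python with the onnxruntime package (the file name "model.onnx" and the input shape are placeholders, not something from the comment above):

```python
import numpy as np
import onnxruntime as ort  # pip install onnxruntime

# "model.onnx" is a placeholder; any exported ONNX model loads the same way.
session = ort.InferenceSession("model.onnx")
input_name = session.get_inputs()[0].name  # models declare their own input names

# Example input: a single 224x224 RGB image batch (adjust to your model).
batch = np.random.rand(1, 3, 224, 224).astype(np.float32)
outputs = session.run(None, {input_name: batch})
print(outputs[0].shape)
```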
Typescript is used by web developers over the world and ONNX for deploying deep neural networks. Two huge markets.
To paint a picture: I’ve worked with Microsoft technologies almost exclusively for decades but recently I was forced to pick up some Node.js, Docker, and Linux tooling for a specific app.
I can’t express in words what a giant step backwards it is from ASP.NET and Visual Studio. It’s like bashing things with open source rocks after working in a rocket manufacturing facility festooned with Kuka robots.
It’s just… end-to-end bad. Everything from critical dependencies developed by one Russian kid that’s now getting shot at in Ukraine so “maintenance is paused” to everything being wired up with shell scripts that have fifty variants, no standards, and none of them work. I’ve spent more time just getting the builds and deployments to work (to an acceptable standard) for Node.js than I’ve spent developing entire .NET applications! [1]
I have had similar experiences every few years for decades. I touched PHP once and recoiled in horror. I tried to get a stable build going for some Python ML packages and learnt that they have a half-life measured in days or hours after which they become impossible to reproduce. Etc…
Keep on assuming “Microsoft is all bad” if you like. You’re tying both hands behind your back and poking the keyboard with your nose.
PS: The dotnet SDK is open source and works fine on Linux, and the IntelliJ Rider IDE is generally very good and cross-platform. You're not forced to use Windows.
[1] The effort required to get a NestJS app to have barely acceptable performance is significantly greater than the effort to rewrite it in .NET 9 which will immediately be faster and have a far bigger bag of performance tuning tools and technologies available if needed.
Thanks for writing this. I couldn't agree any more. We've worked with .NET for decades and it really just works or lets you debug easily. Then we've started working on projects with Angular, React and Docker and it's just a nightmare to get a stable version.
Comparing .Net to Angular and Docker is comparing apples to oranges.
Now if you moved from .Net to say Linux and the Java ecosystem (Maven, Intellij, etc.) that would be something you can compare.
There was a study comparing the “half life” of code in different codebases as a measure of “churn”. Linux unsurprisingly is pretty stable. Meanwhile Angular is the worst with code lasting just months before it’s rewritten. Then again months later. And again. And again.
This is why it has so many breaking changes.
I have a lot of respect for organizations that get a lot done with Microsoft technologies. I think your perspective could be thought of as the benefits of vertical integration and vendor lock in. These do help people get things done!
In the academic and open source world those things are fought against because you don't want to be at the mercy of the software developer in the context of certain rights.
I think for every negative you mention on either side a positive could be found on either side. And like many things on the net, you're not wrong but not necessarily talking about the same kinds of things.
My remaining complaints about Microsoft are the inflexibility of their solutions, which command abstractions that just don't work for many organizations, and the viral nature of software sales in general, of which they are one of many with similar issues - however, Oracle is the worst, of course.
Perfectly valid points. I've worked in academia, and their insistence on non-Microsoft technologies was helpful in certain fields where openness and long-term reproducibility is critical.
The downside is that this produces a microcosm of obscure technologies that can have... strange effects on industry. Some FAANG-like companies have a habit of hiring only recent graduates, so their entire staff is convinced that what they saw at their University is how everybody else does things.
It leads to Silicon Valley clique that has a fantastically distorted perspective of the rest of the world.
Some comments I've seen here on HN are downright hilarious to anyone from the "rest of the world", such as:
"Does anyone still use Windows Server!?" -- yes, at least 60% of all deployed servers world wide, and over 80% in many industries.
"Supports all popular directory servers such as OpenLDAP, ApacheDS, Kopano, ..." -- hello!? Active Directory! Have you heard of it!? It's something like 95% of all deployed LDAP deployments no matter how you count it! The other 5% is Oracle Directory and/or Novell eDirectory and then all of the rest put together is a rounding error.
I agree with this; I see AD as critical. Do you have a source for these numbers, please? I would love to include it in the article.
Everything you describe has more to do with the state of JavaScript development than MS vs. Linux tooling.
I wouldn't touch .NET for ideological reasons (and fear of a rug pull) but I also wouldn't touch any server side JS because I value my sanity.
I tried developing an MS .NET app and it's indescribably bad. The deployment story is non-existent, monitoring, tracing, alarming is barely there. You have to work with MS libraries that are on life-support with glaring bugs still present.
> The deployment story is non-existent
Wrong! It is as simple as executing `dotnet publish`, zipping up the output folder and sending that package somewhere using whichever protocol and shell utility you like.
> monitoring, tracing, alarming is barely there
Also wrong. OpenTelemetry is fully supported by first-class packages and the dotnet runtime itself exposes a lot of counters. There are a lot of tools to monitor and collect traces of running dotnet processes [1]
> You have to work with MS libraries that are on life-support with glaring bugs still present
You don't have to. Every Microsoft.* library follows strict semantic versioning and is clearly labeled when it is deprecated. If you don't have a plan in place on how to manage your dependencies then this is on you.
[1] https://learn.microsoft.com/en-us/dotnet/core/diagnostics/to...
I feel you. Having done sane programming before I tried it, all the .NET stuff didn't feel right. Like they made Java even uglier and brought nothing new to the table that works outside of their weird ecosystem. (At least that was the state 10-15 years ago, coding .NET on/for Linux.)
It’s wildly different now.
The first release that would run at all on Linux was 9 years ago and was essentially a beta, or maybe just a proof of concept.
The current .NET 9 version has trivialised my development projects to the point that I feel bad for taking the customer’s money.
The only problem I’ve had with .NET is that it doesn’t have the same breadth of third party libraries that Java has. If you need something really obscure it either exists in Java or nowhere else.
That sounds magical. What makes it so superior tho? You mean in terms of ready made libraries doing the heavy lifting? If so would nodejs or rails not be even easier?
Or do you mean specific on desktop applications? I have no idea about that field
ASP.NET Just Works and has Batteries Included.
As an example, just over the last few days, I hit all of these common issues with Node.js apps (that don't happen with ASP.NET):
1) Node apps are typically a "transpiled language on a language" because JavaScript is unusable for large-scale software development so everyone uses TypeScript compiled to JavaScript instead. But this is a hack! A clever, useful hack, but a hack nonetheless. You can't go two steps without finding the rough edges, such as "any" everywhere or the use of "monkeypatching" which makes static analysis impossible for either tools or humans. (This reminds me of the earliest days of C++ where compilers like Cfront transpiled C++ to C.)
2) It's single-threaded! Sure, this is simpler, right up until it's not. It means you need about one process per core. Which means that on typical hardware you get a max of 4-8 GB of memory per process, because that's what most hardware has per core. This means in-memory caching is generally too inefficient. (I finally understand why Redis is so popular!)
2b) Okay, let's take a look at Redis! What do you mean it doesn't properly support multiple databases per cluster!? It's single threaded too!? Wat!? Is this a database for children?
3) It takes minutes to start! I hope you never have an emergency update that needs to go out right now!. ASP.NET takes seconds at most. This is largely because it's precompiled and ships as a small number of large binary files instead of millions (literally!) of tiny files that are very slow on almost all server-grade storage. There's now ahead-of-time (AoT) compilation modes for ASP.NET that make it comparable to C++, Rust, or Go in startup performance!
4) I'm sure Node people have heard of certificates and HTTPS, but I'm fairly certain they think it's a fad and it'll just "go away" eventually.
5) NPM libraries are under constant churn. Just updating packages requires minutes of 100% computer power to resolve the dependency graph logic... which has changed. In a breaking way. Either way, it can be mathematically impossible to disentangle the mess before the heat death of the universe. I'm not kidding! It's possible to get into a situation where "error: timed out" doesn't quite do it justice.
6) In .NET land there's basically only two ORMs used: Entity Framework from Microsoft and Dapper from StackOverflow. They work fine. Someone at $dayjob picked "typeorm" for Node. Is it the best? Who knows! There's dozens to pick from! None of them work properly, of course. I do know that typeorm doesn't allow me to pick my own database driver. Why? Because they're too busy, according to the GitHub issue tickets. Entity Framework uses a pluggable interface with dozens of well-supported implementations. This is because the entire platform, all of its database support, and the ORM on top were written by one vendor in a coordinated way and is pluggable via interfaces in the standard library instead of a hodge-podge of random code thrown together by literal children. [2]
Etc, etc...
[1] Under-funded is the more generous reason.
[2] A very significant portion of NPM packages were written by people under the age of 18. This is either commendable or horrifying depending on your perspective. It's hard to prove though, because contributions are effectively anonymous.
That's really an insightful answer I enjoyed reading. Really brings me back!
I dislike nodejs for the same reasons. But do get the feeling that rust and go, maybe even something more exotic like elixir would be good alternatives as well for your use case.
I have barely anything to compare to your requirements tho. I personally would get panic attacks and couldn't sleep anymore if my dependencies weren't open source and I depended on a company for any reason. But that's just me; it definitely sounds very mature.
Unless you found yourself in some bizarre dark corner of a huge ecosystem of products, that's just not true.
Deployments are just "file copy". You don't even need Docker, because Windows isn't Linux, it has stable user-land APIs so apps are portable.
Not to mention that the dotnet sdk can create container images directly without even needing Docker installed: https://learn.microsoft.com/en-us/dotnet/core/containers/sdk...
There are pre-built Linux and Windows ASP.NET base docker images: https://learn.microsoft.com/en-us/aspnet/core/host-and-deplo...
Visual Studio's ASP.NET templates all have a literal checkbox for "Docker support" which is all it takes to have a hot-reload debugging/editing experience.
The dotnet runtime has very good Docker support, including automatic memory usage tuning to prevent it getting killed by Kubernetes or whatever.
The underlying "App Host" below ASP.NET has fantastic support for layered configuration, which by default supports environment variables, command line parameters, named environment configuration files, and "user secrets" in IDEs. All of it is strongly typed and supports runtime refresh instead of Linux style "restart the process and interrupt user file uploads to get a new config". There's plugins for Key Vault, AWS KMS, App Configuration, feature flags, and on-and-on.
Open Telemetry is fully supported and now the default: https://learn.microsoft.com/en-us/dotnet/core/diagnostics/ob...
Everything in ASP.NET uses the standard built-in ILogger interface, so wiring up any kind of audit logging or custom observability is a piece of cake: https://learn.microsoft.com/en-us/dotnet/api/microsoft.exten...
The really fancy logging uses the high-performance ActivitySource APIs, which are used for lower-level tracing of dependencies and the like. Again, these are standardised throughout not just Microsoft libraries but most third-party packages too: https://learn.microsoft.com/en-us/dotnet/api/system.diagnost...
Aspire.NET can orchestrate multiple cloud emulators, multiple apps, Node.js front-end apps, and wire up the whole thing with Open Telemetry and a free local trace viewer (with span support) with zero config: https://learn.microsoft.com/en-us/dotnet/aspire/fundamentals...
Windows GUI App deployments use standardised installer packages (MSI) that have simple devops pipeline tooling: https://github.com/wixtoolset Now... name the one package format that you can use to distribute client apps to all Linux distros!
When you run "dotnet build", the result is built, unlike Node.js where you end up with 150K tiny little files that need to be rebuilt again "in production" because oh-my-god it's a scripting language with C code blended in randomly, so it doesn't... actually... build. I just had the fun of trying to figure out why PM2 doesn't like musl or running under non-root user accounts, why starting a Node.js app takes frigging minutes whereas ASP.NET starts in milliseconds, and on and on.
All that, and finally, PowerShell - literally light years ahead of everything Linux has to offer. I have PTSD from bash and friends. It is so good that I rarely even write C# nowadays for most of the critical government stuff; I simply run smallish scripts as services and change them on the server in Notepad when intervention is needed, in a couple of minutes, while my colleagues are still warming up their full-blown Visual Studio.
I love it like it is the hottest wife that has a great job, does the dishes and cooks like a grandma (I am bad at this :))
> change them on the server when intervention is needed in notepad
How is this any different than a Linux setup where you can just ssh into a box and edit your scripts in the shell using something like nano or vim if you're into that sort of thing?
It's not different. I am not talking about the method, I'm talking about the language and ecosystem. Package manager for scripts? Yeah. Standardized names and params? Fuck yeah. Drop to .NET in funky corners? Shut up and take my money.
Put pwsh on linux (I do, in all of them) and I will use ssh and vim no problem.
And now there is going to be support for `dotnet run app.cs` which, in my opinion, will replace most of my powershell scripts.
Yeah, that is a new thing, but honestly it existed a way back; it's just that it is now officially supported.
But you are missing a point here. It's about the language, ecosystem and practicality. In a shell, you need the correct abstractions that let you work in a fast and efficient way and interact quickly when debugging is needed. What is done in 10 lines of C# can be done in a single line of pwsh. In my book, a lower amount of code, ideally no code, is the most important aspect of development. The majority of things are not constrained by performance, so pwsh is usually a good fit.
People used Ruby, Python, etc. for infrastructure development a long time ago and it was/is awkward.
> What is done using c# in 10 lines can be done in a single line in pwsh
Mostly, yes. The problem is that the moment I need something more than what a single cmdlet or bash utility can provide, now I have to use an awkward looking scripting language (bash is the worst offender here). Almost every time I found myself having to write a somewhat long script file, I wish I could just do it with a C like language instead.
For simpler tasks, I fully agree. It is better to use something immediately available like an OS shell utility over coding one myself.
> Deployments are just "file copy".
Ah yes how the "works on my machine" meme started!
Microsoft, for all their warts, has the absolute best documentation for every public API in Windows. I'd go so far as to say it's better on average than manpages in Linux and BSD and light years better than the actively hostile bullshit from Apple.
Submitting a bug report though, you gotta know people or know where to ask.
> Microsoft, for all their warts, has the absolute best documentation for every public API in Windows.
That is true in some areas of MS's output, but far from all. Some of their documentation is concise but understandable, complete, and up-to-date. Some of it is auto-generated garbage that is only of use if you already know what you are doing and are looking for a reminder of a detail.
Some of it is absolutely awful, I've run into numerous issues with Azure related documentation. This is in part because that side of things is rapidly evolving, but sometimes the new information isn't even there, and sometimes it is faff to identify it from information about the previous couple of iterations that are now deprecated. One recent example: installing some of their SQL Server and Azure storage access tooling on the latest Ubuntu LTS release (24.04, now over a year old). The repos are there, maintained, and supported, but the documentation doesn't mention anything beyond 22.04. Yes it is easy to work out what to change, mostly just substitute 24.04 for 22.04, but the docs should be updated. Also, instructions from different documents, all from MS, put their public keys for package signing in different places, which can cause confusion (not an issue for someone like me familiar with apt & related tooling, but I can imagine it being very frustrating to someone less experienced with those parts).
The old documentation was the best. The new stuff is a mix of barely acceptable and absolute crap, and some of it is even AI-generated. Here's a recent funny:
https://learn.microsoft.com/en-us/windows-hardware/drivers/u...
"The characteristics of the endpoint determine the size of each packet is fixed and determined by the characteristics of the endpoint."
It really depends how far you go. The basics - they're pretty good. But for the more complicated things they just ignore all context and pretty much restate the names of functions/arguments without explaining how/why things work. See for example https://learn.microsoft.com/en-us/windows/win32/api/tsvirtua... and https://learn.microsoft.com/en-us/windows/win32/api/tsvirtua... What does the terminal services renderer do? "It renders bitmaps you dummy, just look at those arguments!"
Some of the advanced stuff has great docs. E.g. SQL Server and its wire protocol. When you need to write client to a language that doesn't have one, the TDS doc from Microsoft is amazing. Compare that to e.g. Oracle and you know what I mean.
In general SQL Server is such a great product. If you cannot choose PostgreSQL for some reason, make sure your buying manager plays golf with the Microsoft sales people, not with Oracle.
Reading your comment is like listening to a cult member who's cult leader has been caught committing a horrible crime. "The trick is to separate his good deeds from his bad deeds."
Why do you feel you have to defend a mutli-billion dollar corporation? Do you have some kind of psychological dependency?
Azure has some things about it that I don't like (compared to AWS), but it wins over AWS for Azure App Services. Essentially, IIS (webserver) as a service (PaaS), with autoscaling, auto-deployment, hot swap slots, auto-recovery, backups, etc. At its core, it's basically a managed Docker container (either Windows or Linux) with IIS, so you can customize it quite a bit like a familiar VM, but unlike a VM, updates and security are all managed for you.
Beanstalk is a joke compared to AAS, and I'm more than happy to stay far away from Docker/K8s until that complexity is actually required, which it usually isn't until an entire department handles your K8s clusters/EKS.
Setting aside the debugger, visual studio has to be the worst IDE I've used. There are so many rough edges it is astounding.
On an ancient project, among other things I've been editing JavaScript code both in js files and inline script tags in aspx files. The indentation auto-formatting appears to choose new levels of indentation using a random number generator.
You can't add a new file to the project while it is running, or even create a file through the context menu, but it can detect when the files have changed externally and recommend restarting the project.
There's a thousand little things, but the indentation auto-formatting abomination is a constant burr under my saddle.
This isn't a coincidence. Microsoft have spent a lot of money, including billions in acquisitions, to maintain some positive developer mindshare.
If you're not doing C++ gdb is pretty good, most people just don't know how to use it.
Maybe it's baby duck syndrome, but I've gone out of my way to use gdb over VS's debugger. A simple .gdbinit has more expressive scripting power than a GUI can ever allow.
I still find it hard to believe that so many people and companies are prepared to use Microsoft's online/cloud services.
Not only is this a single point of failure but it's one they've no control over whatsoever. Same goes for Google/YouTube etc. It's as risky as flying a passenger jet with only one engine.
What are they thinking, why are they prepared to risk everything?
It boggles my mind.
Most companies enter into a contract with Microsoft. That is infinitely better than using a 2 person startup that runs out of a garage. Contracts come with strict terms of service, SLAs, service expectations and such.
If you had a restaurant, would you source your produce from your trusty friend who grows vegetables as a hobby or from an established mega-farming-company?
Ironically appropriate example. Many of the most famous restaurants in the world, like Noma [1], are famous precisely for sourcing ingredients that bypass mega-farming. At Noma many of the dishes are based on the produce provided from local foraging.
And contrary to what you might expect from its presentation/reputation, the place itself is just a building surrounded by green houses and a guy growing and harvesting most of his own stuff. It's an extreme example, but the issue is fairly typical at nice restaurants.
Yeah, and many of these best restaurants in the world barely make a profit, compared to Olive Garden or McDonald's.
But yes, I'm with you here. I also like Noma way more than Olive Garden.
Kind of a tangent, but a lot of brick and mortar business is far less profitable than most think. A McDonalds franchise owner is looking at ~$150k/year profit on average. And with lots of other fun stuff like the fact you don't even own the property, it's rented from McDonalds. And that's going to likely trend downward as McDonalds continues to put the squeeze on franchisees and labor costs continue to rise.
And far from passive income, there's a joke that buying a franchise is basically buying a job and not just any job - but a stressful, thankless job with terrible working hours. And the price tag for this new life of luxury starts at around a million dollars.
> That is infinitely better than using a 2 person startup that runs out of a garage.
The big advantages with the 2 person startup are
1. To a small business you are a customer who matters to them, and you will get better service
2. You can get terms such as having control of backups, hosting of your choice, and access to systems so you can get someone else to maintain things
> Contracts come with strict terms of service, SLAs, service expectations and such.
How close do these come to covering the consequential losses of an extended outage?
I would sure want to dine in a restaurant where vegetables were grown out of love and not as a profit-making machine above all else.
I'm not really sure this is true. Big companies find themselves with a big problem by the nature of their own weight. To simply exist they need to see revenue in the millions, if not billions of dollars. So everything rapidly becomes about money. That, in turn, equally rapidly leads to rent-seeking as a goal, which just generally turns everything into a dystopia from inception to production to launch.
but are they? on average? how do you measure this?
it's pretty easy to talk to a solo-dev or gardener
It's easy to talk to Microsoft employees too.
Example?
No, I'd never use a 2-person startup, that's silly and irresponsible. I'd keep my services in-house and use multiple companies to store backups as I've done for decades—as we all used to do before the renting/leasing software (ripoff) model.
Nor would I ever use software that lives on a remote server that I've no direct control over.
Let's hope Trump does more blocking, it's the only way to wake up a lazy sleepy world.
BTW, isn't 'infinitely' somewhat of an exaggeration?
> If you had a restaurant, would you source your produce from your trusty friend who grows vegetables as a hobby or from an established mega-farming-company?
You sure you like this analogy? Every ambitious restaurant (Michelin stars, World's 50 Best, that type) uses small farmers to try obtain higher quality produce.
It's chain restaurants and shitty family restaurants that use the large suppliers.
You go to those restaurants for a boutique experience. You can't run an enterprise based on whatever your suppliers have available on the day.
It's cheap and it works well. Also integrates into everything related you'd need.
Also, if you're a small business without a dedicated tech team, what are your options that don't involve relying on a single big company?
Speaking for myself, running a bakery, I chose MS 365/Teams with regret but accepting that there's nothing else out there with the same value proposition except maybe Google workspace.
They have regional pricing so we get everything for the equivalent of $3.50 per user. Basically no other apps offer regional pricing - Slack alone would cost about $8 a user.
This includes chat, calls, messaging, 1 TB of OneDrive space per user, calendar, planner, emails, office, plus loads more.
Sure, it's janky but it basically works. The only thing I've found with a close value proposition (still slightly more expensive even if I limit to just a few gb of space per user) is self-hosted Nextcloud, which is about the same level of janky and requires a tech person or team to set up.
> The only thing I've found with a close value proposition (still slightly more expensive even if I limit to just a few gb of space per user) is self-hosted Nextcloud, which is about the same level of janky and requires a tech person or team to set up.
I would not imagine storage to be a major cost with something like Nextcloud. The major cost is going to be the tech person to set it up, and you do not need that many hours to do it. Mostly the initial one-off cost will be high, and big upgrades might be a lot of work, but maintenance should not be.
I am not sure if a company that doesn't need a technical team even needs that technical overhead?
No reason to not just host email with any domain provider and manage the rest with a small NAS in the office.
It's not only cheaper, but also takes away Microsoft's update obligations, which I am sure kill more productivity than managing a Synology server.
> No reason to not just host email with any domain provider and manage the rest with a small NAS in the office.
Sounds like even more of a single point of failure, just on your domain provider (who's much more likely to go out of business) than Microsoft. And one with no chat, or phones, or conference calling, or shared calendars, or endpoint management, or SSO, etc, etc.
Just sticking all your data on a cheap NAS in the office works for a few people, although it becomes a PITA to do granular permissions when you don't have any proper central authentication. But then it's also a massive single point of failure, so you need to implement a backup solution, and then a way to share files outside of the organisation, and then a VPN so that people can work remotely, and then some monitoring so that you know when a disk fails... and that's getting way beyond what a non-technical person can manage.
It's fine if you're just using it for your hobby. But building your business on top of something like that is very likely to come back and bite you in the arse.
What do people consider a NAS? A network hard drive? If you buy a "normal" Synology NAS it comes with shared calendars, office, VPN control, several backup options including AWS and Azure, and a lot more. A typical set-up-and-forget setup, thanks to their high package quality.
And I am sure there are even better options than a household Synology.
But putting everything in a cloud and fully depending on a single provider that for the majority of people is in a foreign (and politically dangerous) country is definitely not the obviously better option.
Well yeah, that's what a NAS is. What you're talking about is just self-hosting an all-in-one server, like people used to do with Windows Small Business Server, with all the problems and limitations that come with it.
And plenty of small businesses and hobbyists do that, and then after they "setup and forget" it they get compromised or lose their data a few years down the line.
Yes and no. I've been in companies with Windows business servers that were a constant pain to manage. Whereas the modern NAS, building on stable open source software, mostly offers the 'just works' experience people are looking for, plus business-grade documentation.
Why would using a NAS (or small server) mean ignoring any basic logic (and business requirements) and not having off-site backups?
SBS server "just works" if you set it up once and then ignore it, your requirements never change, and you don't do basic things like maintenance and installing updates.
People absolutely should be setting up offsite backups. And more importantly, testing them so that they can prove that they work. But if they have no technical team then neither of those things are going to happen.
Storage is far from the highest priority. I do store quite a bit of files for marketing but those only need to be accessed by a few people.
Also, much as I wish otherwise, very little happens via email in this business. It's all chat apps now.
The important part is cross platform real time communications, calendar, and office. Some kind of kanban board is a nice bonus.
Synology does all that, they even have some kind of office suite (haven't tried, I would just use libre and a central storage)
A Synology is also typically set up once and then keeps running for years.
It's just an example tho. I just don't see any need for a Microsoft cloud solution for a small company (or anyone really)
There's a demo on their website:
It's a simple opportunity cost calculation. The service is there and provides value. Creating a replacement is not realistic. Paying for another replacement gives you potential headaches from using a less popular service. So when choosing between not doing a thing or doing a thing with the risk of a SPOF, it's often reasonable to go with those services.
Do you consider the same single point of failure to use AWS?
There's a pretty significant lower bound on size below which you can't reasonably avoid single points of failure. And "oh well, if you use this stack you could theoretically move at any time" isn't really the same thing as being multi-homed. I've been at places where this has been a concern of the leadership, but the economics of it have never really worked out compared to spending your time on anything else related to the business.
If it's locked within AWS and you have no way of moving out fast I definitely would consider it a single point of failure.
The outlined risk is not solely a Microsoft risk. It’s a “contract out to another irreplaceable service” risk.
Make your technology fungible and risks disappear.
We (the EU) should have a reasonable response for monopolistic or significant-market-share suppliers that may fall under the control of foreign governments, one that could mitigate this issue. Otherwise, as I noted:
(the EU) doesn't need to throw out the baby with the (US-controlled) bathwater. The EU should present Microsoft with an ultimatum similar to what China might: set up a non-controlled European licensee to own and manage all MS & Azure infrastructure in the region, or have some legislators force a similar structure on them. Complete control, full source code, EU-only support/access - as a condition for corp HQ being allowed to have a monopolistic market share. Either way, nothing the US might decide to do should have any effect on "EU Microsoft", short of severing US Microsoft off completely, in which case EU MS just becomes fully autonomous and bye-bye US. Clearly, a US-controlled Microsoft without this structure is a deep security risk to Europe now.
> Make your technology fungible and risks disappear.
Not wrong in principle, but the failure mode here is that the political winds have changed. Tech alone cannot help you if you do not hedge against that risk.