Yo. Firstly, thanks for the trip down memory lane - well written, engaging, fun. My mind is still stuck in those days even after finishing the article, as you can tell from my anachronistic greeting.
Secondly, as someone who spent 15 years working with Lotus Notes, I can assure you that you can run it standalone. Obviously it makes no real sense for a Groupware product, but it can be done. To the Notes client, opening a database locally or on a mail server is largely the same.
The main issue is that people used Notes to communicate and collaborate. So you could just create new Address Books, Discussion databases, Document Libraries and so on, but what exactly are you proving with that? It'd be like firing up the Microsoft Mail client and only looking at the address book...
Whilst I'm aware that there's plenty in Notes that people didn't like, I do think that there are some gems hidden in there which it would have been nice to have kept. The Notes dialect of Rich Text had a couple of niceties (programmable buttons, collapsible/expandable Sections). The database engine itself was unparalleled at the time, and in some ways it still hasn't been bettered.
But the issue remains that you'd need to set up a Notes/Domino Server (depending on your version - 4.5 onwards it's called Domino), and a small network. And that's a ball-ache that nobody wants. It can speak IPX/SPX and NetBIOS, so it doesn't have to be as complicated as TCP/IP, but it's still a lot of prep work before you even get to start looking at the usage. :-(
That having been said, I was a Principal Certified Lotus Professional on the Sysadmin track for about three versions of Notes, from 4.6 to 6, and can definitely help if you ever did want to do that. Feel free to email me at phil [at] philipstorry.net if you're ever so lacking in subjects that you feel forced into this last resort.
Not a bad article - thanks!
Others are pointing out that you cannot understand everything - and that's true enough.
But you only need to understand what's important. The experience of a good expert helps you to find that out.
As a systems administrator, I'd say the recent AWS outage in the Middle East is the best example. There will be roughly three types of companies, separated by their understanding:
- Don't Understand - these companies thought that the cloud would handle this kind of thing for them, and are probably going to be doing a lot of finger-pointing in the near future.
- Do Understand, Don't Care - these companies did understand that high availability meant going multi-region, but decided against it for whatever reason. Probably cost vs perceived likelihood. These companies know that they've made a mistake. Short term they're wondering how to survive it, long term they'll be re-assessing their risk acceptance. Many may decide to stay single-region, but at least understand why.
- Do Understand, Do Care - these companies will simply be checking that their procedures worked for any manual parts of their failover, plus possibly looking at any improvements they can make given the real-life experience they've gained.
An LLM is just going to tell you how to implement it. It's not going to ask itself "what sort of availability do we require?", and it may never start that conversation unless explicitly prompted. And even then it's going to return consensus opinions, which may not be what you want when evaluating risk.
I'd love to think a lot of companies will be looking at this event and updating their own risk register or justifying their existing risk decisions for hosting. But let's be honest - most won't even have thought about it, and won't until it goes wrong.
Quite the nostalgia blast for me!
I'm honestly not sure I had a machine with more than 2 fixed disks until well into the days of Windows 7 and SATA. The exception would be logical disks such as Stacker or similar compressed volumes - but I wasn't using them until later either.
If I recall correctly, before SATA we had IDE, which only supported two devices (master & slave) per controller, and usually only two controllers (primary & secondary) on a motherboard, so four devices in total. Given the physical size of disks back then, you'd probably just have a boot disk, maybe a data disk, and then perhaps two optical drives. So it's absolutely believable that nobody found the bug simply because nobody had a machine configured that way.
Sure, you could have SCSI for more disks. But if you did, then you were probably doing something that required a lot of CPU grunt, at which point you might just leave the PC behind and go to a UNIX workstation anyway.
OK, now I'm starting to get flashbacks to just how bad SCSI support was on the PC, and it's stripping the rose-tint from my glasses. Time to go!