Floppinux – An Embedded Linux on a Single Floppy, 2025 Edition


December 21, 2025

FLOPPINUX was released in 2021. After four years people still find it helpful. Because of that I decided to revisit FLOPPINUX in 2025 and make an updated tutorial. This brings a bunch of updates like the latest kernel and persistent storage.

Main Project Goals

Think of this as Linux From Scratch, but for making a single-floppy distribution.

It is meant to be a full workshop (tutorial) that you can follow easily and modify to your needs. It is a learning exercise. Some basic Linux knowledge is needed.

The final distribution is very simple and consists of only a minimum of tools and hardware support. As a user you will be able to boot any PC with a floppy drive to a Linux terminal, edit files, and create simple scripts. There is 264KB of space left for your newly created files.

Core features:

  • Fully working distribution booting from a single floppy
  • Latest* Linux kernel
  • Supporting all 32-bit x86 CPUs since Intel 486DX
  • Have a working text editor (Vi) and basic file manipulation commands (move, rename, delete, etc.)
  • Support for simple scripting
  • Persistent storage on the floppy to actually save files (264KB)
  • Works on real hardware and emulation

Minimum Hardware Requirements:

  • Intel 486DX 33MHz
  • 20MB RAM
  • Internal floppy disk

Linux Kernel

The Linux kernel drops i486 support in 6.15 (released May 2025), so 6.14 (released March 2025) is the latest version with full compatibility.

64-bit Base OS

This time I will do everything on Omarchy Linux, a 64-bit operating system based on Arch Linux. The instructions should work on all POSIX systems; the only difference is how you get the needed packages.

Working Directory

Create a directory where you will keep all the files.

mkdir ~/my-linux-distro/
BASE=~/my-linux-distro/
cd $BASE

Host OS Requirements

You need supporting software to build things. This exact list may vary depending on the system you have.

Install needed software/libs. On Arch/Omarchy 3.1:

sudo pacman -S ncurses bc flex bison syslinux cpio

Cross-compiler:

wget https://musl.cc/i486-linux-musl-cross.tgz
tar xvf i486-linux-musl-cross.tgz
rm i486-linux-musl-cross.tgz

Emulation

For emulation I will be using QEMU.

86Box is also good but slower. Bochs is the best for debugging, but that is not needed here.

Kernel

Get the sources for the latest compatible kernel 6.14.11:

git clone --depth=1 --branch v6.14.11 https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git
cd linux

Now that you have them in the linux/ directory, let's configure and build our custom kernel. First create the tiniest base configuration:
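
The command itself did not survive here; assuming the standard kernel build targets, the "tiniest base configuration" is presumably produced by the tinyconfig target:

```shell
# Presumed step (standard kernel Kconfig target): generate a minimal .config
make ARCH=x86 tinyconfig
```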

This is a bootstrap with absolute minimum features. Just enough to boot the system. We want a little bit more.

Add additional config settings on top of it:
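
The invocation is missing here; presumably this is the standard menu-based configurator, layered on top of the base configuration:

```shell
# Presumed step: open the kernel configuration menu
make ARCH=x86 menuconfig
```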

Important: Do not uncheck anything in the options unless told to. Some of those options are important. You can uncheck them, but at your own risk.

From menus choose those options:

  • General Setup
    • Configure standard kernel features (expert users)
      • Enable support for printk
    • Initial RAM filesystem and RAM disk (initramfs/initrd)
      • Support initial ramdisk/ramfs compressed using XZ and uncheck everything else
  • Processor type and features
    • x86 CPU resources control support
    • Processor family
  • Enable the block layer
  • Executable file formats
    • Kernel support for ELF binaries
    • Kernel support for scripts starting with #!
  • Device Drivers
    • Block devices
      • Normal floppy disk support
      • RAM block device support
        • Default number of RAM disks: 1
    • Character devices
  • File systems
    • DOS/FAT/EXFAT/NT Filesystems
    • Pseudo filesystems
      • /proc file system support
      • sysfs file system support
    • Native language support
  • Library routines
    • XZ decompression and uncheck everything under it

Exit configuration (yes, save settings to .config).

Time for compiling!

Compile Kernel

make ARCH=x86 bzImage -j$(nproc)

This will take a while depending on the speed of your CPU. In the end the kernel will be created in arch/x86/boot/ as bzImage file.

Move kernel to our main directory and go back to it:

mv arch/x86/boot/bzImage ../
cd ..

Without tools the kernel will just boot and you will not be able to do anything. One of the most popular lightweight toolsets is BusyBox. It replaces the standard GNU utilities with much smaller but still functional alternatives, perfect for embedded needs.

Get the 1.36.1 version from busybox.net or the GitHub mirror. Download the file, extract it, and change directory:

Remember to be in the working directory.

wget https://github.com/mirror/busybox/archive/refs/tags/1_36_1.tar.gz
tar xzvf 1_36_1.tar.gz
rm 1_36_1.tar.gz
cd busybox-1_36_1/

As with the kernel, you need to create a starting configuration:

make ARCH=x86 allnoconfig

You may skip the following fix if you are building on Debian/Fedora.

Fix for Arch Linux based distributions:

sed -i 's/main() {}/int main() {}/' scripts/kconfig/lxdialog/check-lxdialog.sh

Now the fun part. You need to choose what tools you want. Each menu entry shows how many more KB it will take if you choose it. So choose wisely :) For your first build, use my selection.

Run the configurator:
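
The command is missing here; for BusyBox this is presumably the same menu-based configurator as for the kernel:

```shell
# Presumed step: open the BusyBox configuration menu
make ARCH=x86 menuconfig
```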

Choose the following options. Remember not to uncheck anything unless stated here.

  • Settings
    • Support files
    • Build static binary (no shared libs)
  • Coreutils
    • cat
    • cp
    • df
    • echo
    • ls
    • mkdir
    • mv
    • rm
    • sync
    • test
  • Console Utilities
  • Editors
  • Init Utilities
    • init
      • uncheck everything else (keep [*] only on init on this page)
  • Linux System Utilities
    • mdev
    • mount
      • Support lots of -o flags
      • uncheck everything else
    • umount
  • Miscellaneous Utilities
  • Shells
    • Choose alias as (ash)
    • ash
    • Optimize for size instead of speed
    • Alias support

Now exit with save config.

Cross Compiler Setup

Our target system needs to be 32-bit. To compile it on a 64-bit system we need a cross compiler. You can set this up by hand in the menuconfig, or just copy and paste these four lines.

Setup paths:

sed -i "s|.*CONFIG_CROSS_COMPILER_PREFIX.*|CONFIG_CROSS_COMPILER_PREFIX=\"${BASE}/i486-linux-musl-cross/bin/i486-linux-musl-\"|" .config

sed -i "s|.*CONFIG_SYSROOT.*|CONFIG_SYSROOT=\"${BASE}/i486-linux-musl-cross\"|" .config

sed -i "s|.*CONFIG_EXTRA_CFLAGS.*|CONFIG_EXTRA_CFLAGS=-I$BASE/i486-linux-musl-cross/include|" .config

sed -i "s|.*CONFIG_EXTRA_LDFLAGS.*|CONFIG_EXTRA_LDFLAGS=-L$BASE/i486-linux-musl-cross/lib|" .config
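
If you want to sanity-check what these substitutions do before touching the real .config, here is a throwaway demo (the paths are illustrative, not the real build tree). Each sed line replaces the whole matching CONFIG_* line with the fully expanded value:

```shell
# Throwaway demo of the substitution on a fake one-line .config
BASE=/tmp/demo-base
printf 'CONFIG_CROSS_COMPILER_PREFIX=""\n' > /tmp/demo.config
sed -i "s|.*CONFIG_CROSS_COMPILER_PREFIX.*|CONFIG_CROSS_COMPILER_PREFIX=\"${BASE}/i486-linux-musl-cross/bin/i486-linux-musl-\"|" /tmp/demo.config
cat /tmp/demo.config
```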

Compile BusyBox

Build the tools and create the base filesystem (“install”). It will ask about a few options; just press Enter to accept the default for each.

make ARCH=x86 -j$(nproc) && make ARCH=x86 install

This will create a filesystem with all the files at _install/. Move it to our main directory; I like to rename it to filesystem/.

Lastly, go to that new directory.

mv _install ../filesystem
cd ../filesystem

Filesystem

You have the kernel and basic tools, but the system still needs some additional directory structure.

The commands below create the minimum viable directory structure to satisfy the basic requirements of a Linux system.

Remember to be in the filesystem/ directory.

mkdir -pv {dev,proc,etc/init.d,sys,tmp,home}
sudo mknod dev/console c 5 1
sudo mknod dev/null c 1 3
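
The numbers after c are the character device's major and minor numbers: 5,1 is the kernel console and 1,3 is the null device. You can cross-check them against your host's own nodes, where ls -l prints major, minor in place of the file size:

```shell
# /dev/null on any Linux host shows major 1, minor 3
ls -l /dev/null
```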

The next step is to add minimal configuration files. The first one is a welcome message that will be shown after booting.

Here is the first real opportunity to go wild and make this your own signature.

cat >> welcome << EOF
Your welcome message or ASCII art.
EOF

Or download my welcome file.

wget https://krzysztofjankowski.com/floppinux/downloads/0.3.1/welcome

It looks like this:

$ cat welcome

                _________________
               /_/ FLOPPINUX  /_/;
              / ' boot disk  ' //
             / '------------' //
            /   .--------.   //
           /   /         /  //
          .___/_________/__//   1440KiB
          '===\_________\=='   3.5"

_______FLOPPINUX_V_0.3.1 __________________________________
_______AN_EMBEDDED_SINGLE_FLOPPY_LINUX_DISTRIBUTION _______
_______BY_KRZYSZTOF_KRYSTIAN_JANKOWSKI ____________________
_______2025.12 ____________________________________________

Back to serious stuff. Inittab tells the system what to do in critical states like starting, exiting and restarting. It points to the initialization script rc that is the first thing that our OS will run before dropping into the shell.

Create an inittab file:

cat >> etc/inittab << EOF
::sysinit:/etc/init.d/rc
::askfirst:/bin/sh
::restart:/sbin/init
::ctrlaltdel:/sbin/reboot
::shutdown:/bin/umount -a -r
EOF

And the init rc script:

cat >> etc/init.d/rc << EOF
#!/bin/sh
mount -t proc none /proc
mount -t sysfs none /sys
mdev -s
ln -s /proc/mounts /etc/mtab
mkdir -p /mnt /home
mount -t msdos -o rw /dev/fd0 /mnt
mkdir -p /mnt/data
mount --bind /mnt/data /home
clear
cat welcome
cd /home
/bin/sh
EOF

Make the script executable and change the owner of all files to root:

chmod +x etc/init.d/rc
sudo chown -R root:root .

Compress this directory into one file, then go back to the working directory.

find . | cpio -H newc -o | xz --check=crc32 --lzma2=dict=512KiB -e > ../rootfs.cpio.xz
cd ..

Create the boot configuration.

Another place to tweak parameters for your variant. The text after SAY is displayed on screen first, usually the name of the OS.

The tsc=unstable option is useful on some (real) computers to get rid of randomly shown warnings about the Time Stamp Counter.

Remember to be in the working directory.

cat >> syslinux.cfg << EOF
DEFAULT floppinux
LABEL floppinux
SAY [ BOOTING FLOPPINUX VERSION 0.3.1 ]
KERNEL bzImage
INITRD rootfs.cpio.xz
APPEND root=/dev/ram rdinit=/etc/init.d/rc console=tty0 tsc=unstable
EOF


Create a sample file

To make the system a little more user friendly, I like to have a sample file that the user can read and edit. You can put anything you want in it. A simple help file would also be a good idea to include.

cat >> hello.txt << EOF
Hello, FLOPPINUX user!
EOF
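
Since the kernel was built with #! script support and BusyBox provides ash, the system can also run simple scripts. A hypothetical example (you could create it the same way once booted, or drop it into the data directory now):

```shell
# Create a tiny demo script and run it; the same works on FLOPPINUX's ash
cat > hi.sh << 'EOF'
#!/bin/sh
echo "Hello from FLOPPINUX"
EOF
chmod +x hi.sh
./hi.sh
```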

Filesystem is ready. Final step is to put this all on a floppy!

Boot Image

First we need an empty file of the exact size of a floppy disk. Then we format it and make it bootable.

Create empty floppy image:

dd if=/dev/zero of=floppinux.img bs=1k count=1440
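
The numbers work out to the raw capacity of a 3.5" HD floppy: 1440 blocks × 1024 bytes = 1,474,560 bytes (1440 KiB). You can verify the size of the created image (shown here with a temporary path):

```shell
# Create the blank image and check its exact size in bytes
dd if=/dev/zero of=/tmp/floppy-demo.img bs=1k count=1440 2>/dev/null
stat -c %s /tmp/floppy-demo.img
```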

Format it and create bootloader:

mkdosfs -n FLOPPINUX floppinux.img
syslinux --install floppinux.img

Mount it and copy syslinux, kernel, and filesystem onto it:

sudo mount -o loop floppinux.img /mnt
sudo mkdir /mnt/data
sudo cp hello.txt /mnt/data/
sudo cp bzImage /mnt
sudo cp rootfs.cpio.xz /mnt
sudo cp syslinux.cfg /mnt
sudo umount /mnt

Done!

Test in emulator

It’s good to test before spending time burning a real floppy.

Boot the new OS in qemu:

qemu-system-i386 -fda floppinux.img -m 20M -cpu 486

If it worked, you have successfully created your own distribution! Congratulations!

The floppinux.img image is ready to burn onto a floppy and boot on real hardware!

Floppy Disk

<!> Important <!>

Change XXX to the floppy drive's device name on your system. In my case it is sdb. Choosing wrongly will NUKE YOUR PARTITION and REMOVE all of your files! Think twice. Or use a GUI application for this.

sudo dd if=floppinux.img of=/dev/XXX bs=512 conv=notrunc,sync,fsync oflag=direct status=progress

After 5 minutes I had a freshly burned floppy.

  • FLOPPINUX: 0.3.1 (December 2025)
  • Linux Kernel: 6.14.11
  • Busybox: 1.36.1
  • Image size: 1440KiB / 1.44MiB
  • Kernel size: 881KiB (bzImage)
  • Tools: 137KiB (rootfs.cpio.xz)
  • Free space left (df -h): 253KiB

File & Directory Manipulation

  • cat - display file contents
  • cp - copy files and directories
  • mv - move/rename files and directories
  • rm - remove files and directories
  • ls - list directory contents
  • mkdir - creates directory
  • df -h - display filesystem disk space usage
  • sync - force write of buffered data to disk - use this after any changes to the floppy filesystem
  • mount - mount filesystems
  • umount - unmount filesystems

Text Processing & Output

  • echo - display text output
  • more - page through text output

Utilities

  • clear - clear terminal screen
  • test - evaluate conditional expressions


Comments

  • By sockbot 2026-02-03 5:28

    Over Christmas I tried to actually build a usable computer from the 32-bit era. Eventually I discovered that the problem isn't really the power of the computer. Computers have been powerful enough for productivity tasks for 20 years, excepting browser-based software.

    The two main problems I ran into were 1) software support at the application layer, and 2) video driver support. There is a herculean effort on the part of package maintainers to build software for distros, and no one has been building 32-bit versions of software for years, even if it is possible to build from source. There is only a very limited set of software you can use, even CLI software, because so many things are built with 64-bit dependencies. Secondly, old video card drivers are being dropped from the kernel. This means all you have is basic VGA "safe-mode" level support, which isn't even fast enough to play an MPEG2. My final try was to install Debian 5, which was period correct and had support for my hardware, but the live CDs of the time were not hybrid so the ISO could not boot from USB. I didn't have a burner so I finally gave up.

    So I think these types of projects are fun for a proof of concept, but unfortunately are never going to give life to old computers.

    • By tombert 2026-02-03 6:29

      > Computers have been powerful enough for productivity tasks for 20 years

      It baffles me how usable Office 97 still is. I was playing with it recently in a VM to see if it worked as well as I remembered, and it was amazing how packed with features it is considering it's nearing thirty. There's no accounting for taste but I prefer the old Office UI to the ribbon, there's a boatload of formatting options for Word, there's 3D Word Art that hits me right in the nostalgia, Excel 97 is still very powerful and supports pretty much every feature I use regularly. It's obviously snappy on modern hardware, but I think it was snappy even in 1998.

      I'm sure people can enumerate here on the newer features that have come in later editions, and I certainly do not want to diminish your experience if you find all the new stuff useful, but I was just remarkably impressed by how much cool stuff was packed into the software.

      • By flomo 2026-02-03 7:16

        I think MS Word was basically feature-complete with v4.0 which ran on a 1MB 68000 Macintosh. Obviously they have added lots of UI and geegaws, but the core word processing functionality hasn't really changed at all.

        (edit to say I'm obviously ignoring i18n etc.)

        • By blackhaz 2026-02-03 7:42

          My dad used to run a whole commercial bank on MS Office 4.0 and a 386. (A small one, but still!)

          • By 2b3a51 2026-02-03 8:39

            Small, medium and large colleges in the UK ran on Novell servers and 386 client machines with windows for workgroups and whatever Office they came with. I think the universities were using unixy minicomputers then though. Late 80s early 90s. Those 386 machines were built like tanks and survived the tender ministrations of hundreds of students (not to mention some of the staff).

          • By hilti 2026-02-03 8:44

            I love this story where a C64 in Poland runs an auto repair shop.

            https://www.popularmechanics.com/technology/gadgets/a23139/c...

            • By cbdevidal 2026-02-03 8:55

              I still use Office 2010 to this day and feel like absolutely nothing is missing that I truly need. The only issues are Alt-Tab and multiple monitors have bugs. But functionality? 100%.

      • By MrGilbert 2026-02-03 7:31

        It's wild to remember that I basically grew up with this type of software. I was there, when the MDI/SDI (Multi-Document Interface / Single-Document Interface) discussion was ongoing, and how much backlash the "Ribbon"-interface received. It also shows that writing documents hasn't really changed in the past 30 years. I wonder if that's a good or bad development.

        With memory prices skyrocketing, I wonder if we will see a freeze in computer hardware requirements for software. Maybe it's time to optimize again.

        • By anthk 2026-02-03 10:18

          Sadly Electron developers will be fired, and C++ and even Rust ones will be highly praised. QT5/6 will be king for tons of desktop software.

          • By krzyk 2026-02-03 12:16

            One can dream.

            • By anthk 2026-02-03 15:11

              Ram shortages are not dreams.

              • By kelnos 2026-02-04 3:34

                I think GP was dreaming about Electron developers being fired, not suggesting that RAM shortages weren't happening.

                But perhaps I'm just projecting. Ugh, Electron.

              • By arcanemachiner 2026-02-03 16:17

                They're not permanent either.

        • By hnlmorg 2026-02-03 8:14

          Consumer laptops have been frozen on 8GB of RAM for a while already.

          Yeah you can get machines which are higher specced easily enough, but they’re usually at the upper end of the average consumers budget.

      • By blackhaz 2026-02-03 7:41

        I have MS Office 4.0 installed on my 386DX-40 with 4 MB of RAM and 210 MB HDD, running Windows 3.1, and it is good. Most of the common features are there, it's a perfectly working office setup. The major thing missing is font anti-aliasing. Office 95 and 97 are absolutely awesome.

        • By hilti 2026-02-03 8:47

          Totally agree! I'd definitely pay $300 (lifetime license) for a productivity suite like the Windows 95 design and Office 95 with no bloatware and ads. Just pure speed and productivity.

        • By aidenn0 2026-02-03 14:58

          I do remember running Word on an Am386DX-40 and later an i486DX2-66 and there was an issue that wouldn't be a problem with faster hardware; the widow/orphan control happened live so if you made an edit, then hit print, there was a race condition where you could end up with a duplicated line or missing line across page boundaries. Since later drafts tended to have fewer edits, I once turned in a final draft of a school paper with such an error.

        • By kraai 2026-02-03 15:02

          Then again, if you'd also run it at low res on an old CRT it might not or barely benefit from anti-aliasing anyway.

          • By blackhaz 2026-02-03 15:07

            Oh, right! 800x600 was pretty sharp on a 14", and 1024x768 on 15", and when ClearType came out it actually was blurring things on CRTs.

      • By justapassenger 2026-02-03 7:08

        Last true step change in computer performance for general home computing tasks was SSD.

        • By Cthulhu_ 2026-02-03 13:03

          I'd add multicore processors as well, which makes multiprocess computing viable. And as a major improvement, Apple's desktop CPUs which are both fast, energy efficient and cool - my laptop fan never turns on. At one point I was like "do they even work?" so I ran a website that uses CPU and GPU to the max, and... still nothing, stuff went up to 90 degrees but no fan action yet. I installed a fan control app to demonstrate that my system does in fact have fans.

          Meanwhile my home PC starts blowing whenever I fire up a video game.

        • By johnisgood 2026-02-03 8:42

          In 20 years? That is nothing.

      • By mikepurvis 2026-02-03 6:52

        It's crazy too to realise how much of the multi-application interop vision was realized in Office 97 too. Visual Basic for Applications had rich hooks into all the apps, you could make macros and scripts and embed them into documents, you could embed documents into each other.

        It's really astonishing how full-featured it all was, and it was running on those Pentium machines that had a "turbo" button to switch between 33 and 66 MHz and just a few MBs of RAM.

      • By lproven 2026-02-03 17:35

        > I was playing with it recently in a VM

        With the small caveat that I only use Word, it runs perfectly in WINE and has done for over a decade. I use it on 64-bit Ubuntu, and it runs very well: it's also possible to install the 3 service releases that MS put out, and the app runs very quickly even on hardware that is 15+ years old.

        The service packs are a good idea. They improve stability, and make export to legacy formats work.

        WINE works better than a VM: it takes less memory, there's no VM startup/shutdown time, and host integration is better: e.g. host filesystem access and bidirectional cut and paste.

        • By tombert 2026-02-03 18:44

          I had trouble getting the Office 97 installer working with Wine. Not claiming it’s impossible but I figured just to play with it I could spin up Qemu.

          • By lproven 2026-02-04 11:28

            I have used this on half a dozen machines with precisely zero special config.

            Step by step:

            1. Install WINE, all defaults from OS package manager.

            2. Open terminal. Change to directory with Office 97 install files.

            3. Run `wine setup`

            4. For me: turn off everything except the essential bits of Word. Do not install OS extensions, as they won't work. No bits that plug into other apps. No WordMail, no FastFind, no Quicklaunch toolbar, no Office Assistant.

            5. Enter product key: 11111-1111111

            6. Allow to complete.

            7. Install SRs.

            8. Run and use app.

      • By deafpolygon 2026-02-03 8:06

        it’s also proof that Microsoft hasn’t done much with office in decades… except add bloat, tracking, spyware…

      • By goalieca 2026-02-03 15:00

        > but I think it was snappy even in 1998.

        It definitely was snappy. I used it on school computers that were Pentium (1?) with about as much RAM as my current L2 cache (16MB). Dirty rectangles and win32 primitives. Very responsive. It also came with VB6 where you could write your own interpreted code very easily to do all kinds of stuff.

      • By rkagerer 2026-02-03 8:18

        The curse-ed ribbon was a huge productivity regression. I still use very old versions of Word and Excel (the latter at least until the odd spreadsheet exceeds size limits) because they're simply better than the newer drivel. Efficient UI, proper keyboard shortcuts with unintrusive habit-reinforcing hints, better performance, not trying to siphon all my files up to their retarded cloud. There is almost nothing I miss in terms of newer features from later versions.

        • By speed_spread 2026-02-03 19:50

          The ribbon thing was a taste of things to come in the degradation of UI standards. Take something that works great and looks ok, replace it with something flashy that gives marketing people something to say. Break the workflow of existing users. Repeat every 10 years.

          • By direwolf20 2026-02-04 12:19

            IIRC the Ribbon had real UX testing behind it. All the most common features were truly easier to access, but it was harder to find a certain feature when you needed it. In other words they optimized for the wrong thing.

            My favorite was that Paste was a giant button while Cut and Copy were small because the UX research found that people paste more than they cut or copy...

      • By nunobrito 2026-02-05 23:33

        Office 97 was fantastic and the one that followed in 2000 was peak Microsoft quality all the way up to the 2003 edition.

        Still remember it was possible to perfectly mimic existing documents that had long stopped being printed, with such quality in replication.

        The introduction of ribbons was a cruel mistake. It gets harder and harder to know where anything is located nowadays because ribbons hide options too often.

      • By dfex 2026-02-03 11:32

        This! I have the 14-core M4 Macbook Pro with 48GB of RAM, and Word for Mac (Version 16 at this time) runs like absolute molasses on large documents, and pegs a single core between 70 and 90% for most of the time, even when I'm not typing.

        I am now starting to wonder how much of it has to do with network access to Sharepoint and telemetry data that most likely didn't exist in the Office 97 dial-up era.

        Features-wise - I doubt there is a single feature I use (deliberately) today in Excel or Word that wasn't available in Office 97.

        I'd happily suffer Clippy over Co-Pilot.

        • By lproven 2026-02-03 17:37

          > I'd happily suffer Clippy

          It's an optional install. You can just click Custom, untick "Office Assistant" and other horrid bits of bloat like "Find Fast" and "Word Mail in Outlook" and get rid of that stuff.

        • By musicale 2026-02-04 4:02

          What Intel/AMD/(and now)Apple giveth, Microsoft taketh away.

      • By pjmlp 2026-02-03 8:23

        Except for Internet surfing, a plain Amiga 500 would be good enough for what many folks do at home, between gaming, writing letters, basic accounting and the occasional flyers for party invitations.

        • By flomo 2026-02-03 8:56

          Total nostalgia talk. Those machines were just glacially slow at launching apps and really everything, like spell check, go get a coffee. I could immediately tell the difference between a 25Mhz Mac IIci and a 25Mhz Mac IIci with a 32KB cache card. That's how slow they were.

          • By pjmlp 2026-02-03 9:46

            Some of us do actually use such machines every now and then.

            The point being made was that for many people whose lives doesn't circle around computers, their computing needs have not changed since the early 1990's, other than doing stuff on Internet nowadays.

            For those people, using digital typewriter hardly requires more features than Final Writer, and for what they do with numbers in tables and a couple of automatic updated cells, something like Superplan would also be enough.

            • By kelnos 2026-02-04 3:36

              > their computing needs have not changed since the early 1990's, other than doing stuff on Internet nowadays.

              So in other words, their computer needs have changed significantly.

              You can't do most modern web-related stuff on a machine from the 90s. Assuming you could get a modern browser (with a modern TLS stack, which is mandatory today) compiled on a machine from the 90s, it would be unusably slow.

              • By anthk 2026-02-04 12:19

                Amigans are already using AmiSSL and AmiGemini (and some web browsers) perfectly fine in m68k CPU's recreated with FPGA's.

                You can do modern TLS stuff with a machine from the 90's if you cut own the damn JavaScript and run services from https://farside.link or gemini://gemi.dev proxying the web to Gemini.

              • By pjmlp 2026-02-04 5:33

                Not everyone is all the time on the Internet; for some folks their computer needs have stayed pretty much the same.

                If they want to travel they go to an agency, they still go to the local bank branch to do their stuff, news only what comes up on radio and TV, music is what is on radio, CDs and vinyl, and yet manage to have a good life.

            • By flomo 2026-02-03 10:09

              Yeah, I just posted that a lot of that software was amazing and pretty 'feature-complete', all while running on very limited old personal computers.

              Just please don't gaslight us with some alternate Amiga bullshit history. All that shit was super slow, you were begging for +5Mhz or +25KB of cache. If Amiga had any success outside of teenage gamers, that stuff would have all been historical, just like it was on the Mac.

              • By Gormo 2026-02-03 12:47

                The Amiga had huge success outside of "teenage gamers", even if in niche markets. Amigas were extremely important in TV and video production throughout the 1990s. I remember a local Amiga repair shop in South Florida that stayed in business until about 2007, mainly by servicing Amigas still in service in the local broadcast industry -- all of the local cable providers in particular had loads of them, since they were used for the old Prevue Guide listings, along with lots of other stuff.

              • By pjmlp 2026-02-03 11:34

                Goes both ways, Mac was hardly something to write home about outside US, and they did not follow Commodore footsteps into bankruptcy out of sheer luck.

                • By anthk 2026-02-03 16:27

                  The Mac was just an expensive toy for people working on different media. No one used it at home, even less at school. Ever.

              • By anthk 2026-02-03 16:24

                The Mac didn't exist in Europe except for expensive A/V production machines and the printing world (books, artists, movie posters, covers and the like).

                If you were from Humanities and worked for a newspaper design layout you would use a Mac at work. That's it.

                • By lproven 2026-02-03 17:49

                  > The Mac didn't exist in Europe

                  That is absolutely not a valid generalisation.

                  I worked on Macs from the start of my career in 1988. They were the standard computer for state schools in education here in the Isle of Man in the late 1980s and early 1990s.

                  The Isle of Man's national travel company ran on a Mac database, Omnis, and later moved to Windows to keep using Omnis.

                  It's still around:

                  https://www.omnis.net/

                  I supported dozens of Mac-using clients in London through the 1990s and they were the standard platform in some businesses. Windows NT Server had good MacOS support from the very first version, 3.1, and Macs could access Windows NT Server shares over the built-in Appleshare client, and store Mac files complete with their Resource Forks on NTFS volumes. From 1993 onwards this made mixed Mac/PC networks much easier.

                  I did subcontracted Mac support for a couple of friends of mine's consultancy businesses because they were Windows guys and didn't "speak Mac".

                  Yes, they were very strong in print, graphics, design, photography, etc. but not only in those markets. Richer types used them as home computers. I also worked on Macs in the music and dance businesses and other places.

                  Macs were always there.

                  Maybe you didn't notice but they always were. Knowing PC/Mac integration was a key career skill for me, and the rise of OS X made the classic MacOS knowledge segue into more general Unix/Windows integration work.

                  Some power users defected to Windows NT between 1993 and 2001 but then it reversed and grew much faster: from around 2001, PowerMacs started to become a credible desktop workstation for power users because of OS X. From 2006, Macintel boxes became more viable in general business use because the Intel chips meant you could run Windows in a VM at full speed for one or two essential Windows apps. They ran IE natively and WINE started to make OS X feasible for some apps with no need for a Windows licence.

                  In other words, the rise of OS X coincided with the rise of Linux as a viable server and GUI workstation.

                  • By pjmlp 2026-02-04 5:42

                    In Portugal there was only one single shop for the whole country, Interlog, located in Lisbon.

                    Wanted to get a Mac, needed to travel there, or order by catalogue, from magazine ads.

                    At my university there were about 5 LCs in a single room for student use, while the whole campus was full of PCs, and of UNIX green/amber phosphor terminals connected to DG/UX, in rooms in all the major buildings.

                    Besides that single room, there were two more on the IT department, and that was about it.

                    When Apple was going down, choosing between buying Be or NeXT as a last survival decision, the university was discussing whether to keep those Macs around.

                  • By anthk 2026-02-04 7:26

                    >Yes, they were very strong in print, graphics, design, photography, etc. but not only in those markets. Richer types used them as home computers. I also worked on Macs in the music and dance businesses and other places.

                    So, A/V production, something I said too. My point still stands. Macs in Europe were seen as something fancy for media production people and that's it. Something niche for the arts/press/TV/cinema world.

                    • By lproven 2026-02-04 11:25

                      Nope. Wrong. My own extensive personal experience, travelling and working in multiple countries. Not true, never was.

                      Like I said, and you missed: but not only there.

                      People often mistake "Product A dominates in market B" -- meaning A outsells all others in B -- for "A only sells in market B."

                      Macs were expensive. Clone PCs were cheap. Yeah, cheap products outsell expensive ones. Doesn't mean that the expensive ones are some kind of fancy designer brand only used by the idle rich.

                      • By anthk 2026-02-04 11:54

                        Yes, it was. I'm from Spain. The Macs were for media people, not for the common worker in a boring office, where MS dominated. At home, Macs were a thing for maybe a rounding-error percentage of kids living in a loaded neighbourhood.

                        No one got Macs at school either. First DOS, then Windows 95/98. Maybe some universities used MacBooks well into the OSX era, as reliable Unix machines to compile legacy scientific stuff; and even in those environments GNU/Linux began to work perfectly well, recompiling everything from Sparcs and the like at a much cheaper price.

                        Forget about pre-OSX machines in Spain outside of a newspaper/publishing/AV production office. Also, by the time XP and 2000 were reliable enough against OSX (w9x was hell), that OS was replaced by much cheaper PC alternatives.

                        I mean, if w2k/wxp could handle big loads without BSODing every few hours, that was a success. And as the Pentium 4s with SSE2 and the Core Duos happened, suddenly the G4s and G5s weren't that powerful any more.

                        • By lproven 2026-02-05 10:19

                          So, not a conspicuously wealthy country, then?

                          Whereas I lived (and am back, sadly) in an offshore tax haven.

                          The rich used Macs. Musicians used Macs. They were not some dedicated tool only found in certain places. Entire industries, big important industries, ran on them.

                          What killed Commodore and Atari was that in the end although they had niches, they didn't conquer whole sectors.

                          This is why Sinclair Research tried to push into the business market with the QL. Sir Clive knew that the home/games sector was about thin margins and price battles; in rich America you could get fat on it, but you can't in Europe.

                          He carved out an early niche as the cheapest home computers that were good enough and were competitive, but it was low-margin/high-unit-count.

                          The business market will pay for good tools. Bits of it paid extra for Macs for decades because they were good at some things.

                          That is a viable long-term market: "the best cheap home computer for the money" is not.

              • By 2000UltraDeluxe 2026-02-03 12:03

                Amiga was big in Europe. No doubt they were slow though; most computers of the time were.

          • By bombcar 2026-02-03 14:17

            Those machines could be pretty darn fast - if you get one and run the earliest software that still worked on it. DOS-based apps would fly on a 486, even as Windows 95 would be barely usable.

        • By hilti 2026-02-03 8:42

          Or controlling the heating and AC systems at 19 schools under its jurisdiction using a system that sends out commands over short-wave radio frequencies

          https://www.popularmechanics.com/technology/infrastructure/a...

      • By boznz 2026-02-03 18:34

        My crappy old 2018 Chromebook is still just about usable with 2GB, but it has gone from a snappy system to a lethargic snail.. and it's getting slower with every update.. Yay for progress!

        • By b00ty4breakfast 2026-02-03 18:57

          Maybe with the price of memory going up, we'll start seeing a more conservative use of resources in consumer software.

          A fella can dream, anyways.

        • By starkparker 2026-02-03 20:38

          eMMC Chromebooks are notorious for storage-related slowdowns. If it's an option, booting a ChromeOS variant or similar distro off a high-speed microSD, over USB, or (least likely with a Chromebook) via PXE might confirm it.

      • By jama211 2026-02-03 18:26

        “Powerful enough for productivity tasks” is very variable depending on what you need to be productive in. Office sure. 3D modelling? CAD? Video editing? Ehhhhh not so sure.

        • By dsr_ 2026-02-03 18:32

          I hate to tell you this, but people were doing CAD and CNC work on PCs back when a 33MHz 80386 with 8MB of RAM was an expensive computer.

          And they did video editing on Amigas with an add-on peripheral called a Video Toaster.

          • By tombert 2026-02-03 18:43

            I don’t know enough about CAD to comment but video editing is considerably more expensive now for a bunch of reasons and I don’t think an Amiga could handle it now.

            Video compression is a lot more computationally complex now than it was in the 90s, and it is unlikely that an Amiga with a 68k or old PowerPC would be able to handle 4k video with H265 or ProRes. Even if you had specialized hardware to decode it, I’m not 100% sure that an Amiga has enough memory to hold a single decompressed frame to edit against.

            Don’t get me wrong, Video Toaster is super awesome, but I don’t think it’s up to modern tasks.

          • By jama211 2026-02-05 18:07

            And you’re aware of all the reasons why that hardware wouldn’t handle modern workflows, right? Show me an amiga that will play 4K video let alone render it.

      • By nxobject 2026-02-03 10:00

        > old Office UI to the ribbon

        Truly, I do not miss the swamp of toolbar icons without any labels. I don't weep for the old interface.

    • By jsdevrulethewr 2026-02-03 8:02

      > Eventually I discovered that the problem isn't really the power of the computer.

      Nope, that’s a modern problem. That’s what happens when the js-inmates run the asylum. We get shitty bloated software and 8300 copies of a browser running garbage applications written by garbage developers.

      I can’t wait to see what LLMs do with that being the bulk of their training.

      Exciting!

      • By dariosalvi78 2026-02-03 8:43

        not gonna disagree with you, but, as a solo developer who needs to reach audiences of all sorts, from mobile to powerful servers, the most reasonable choice today is JavaScript. JS, with its "running environments" (Chrome, Node, etc.), has done what Java was supposed to do in the 90s. It's a pity that Java didn't keep its promises, but the blame falls entirely on the companies that ran the show back then (and those running it now).

        • By hilti 2026-02-03 8:54

          Javascript is not the problem at all.

          Rookie developers who pull in hundreds of node modules or huge CSS frameworks are ruining performance and hurting the environment with bloated software that wastes energy and people's time.

          • By direwolf20 2026-02-04 12:56

            JavaScript is another, separate, problem.

    • By zokier 2026-02-03 7:29

      > There is a herculean effort on the part of package maintainers to build software for distros, and no one has been building 32 bit version of software for years, even if it is possible to build from source. There is only a very limited set of software you can use, even CLI software because so many things are built with 64 bit dependencies

      That seems odd? Debian 12 Bookworm (oldstable) has a fully supported i386 port. I would expect it to run reasonably well on late-32-bit-era systems (Pentium 4/Athlon XP).

      • By jabl 2026-02-03 8:50

        AFAIU the Debian i386 port has effectively required i686-level CPUs for quite a long time (CMOV etc.)? So if he has an older CPU like the Pentium it might not work?

        But otherwise, yes, Debian 12 should work fine as you say. Not so long ago I installed it on an old Pentium M laptop I had lying around. It did take some tweaking; it turned out that the wifi card didn't support the WPA2/3 mixed mode I had configured on my AP, so I had to downgrade security for the experiment. But video was hopeless, it couldn't even play 144p videos on youtube without stuttering. Maybe the video card (some Intel thing, using the i915 driver) didn't have HW decoding for whatever video codec youtube uses nowadays (AV1?), or whatever.

        • By UncleSlacky 2026-02-03 9:30

          You can force YouTube to use H264 instead (via extensions like H264ify), that should reduce the processing load.

          • By 2000UltraDeluxe 2026-02-03 12:05

            Were there actually Pentium M chipsets that could decode anything but MPEG2?

            The CPU will be struggling with most modern video formats including h.264.

            • By dmitrygr 2026-02-03 18:12

              we were decoding 480x320 MP4 on PalmOS 5 devices in the early 2000s. Those were single-core in-order 200MHz ARM devices with no accelerators at all. A Pentium M outperforms those easily and thus can do it too.

              • By anthk 2026-02-03 18:42

                Mp4 is the container. H264 is the video codec.

                • By dmitrygr 2026-02-03 23:03

                  got me, it was DivX and XviD which are indeed newer and fancier than MPEG2

                  • By anthk 2026-02-04 7:24

                    And still much easier to play than h264. A Pentium II with NetBSD was more than enough.

                    Nowadays on an n270-based netbook I use mpv and yt-dlp capped to 480p, even if I can play 720p@30FPS.

          • By jabl 2026-02-03 10:17

            Good point. Though too late in this particular case, since the battery was also busted, I ended up e-wasting the machine.

    • By amne 2026-02-03 11:18

      I used to run a cs1.6 server on an AMD 800MHz with 256MB of RAM in the 2000s. These days I'm looking to get a Mac mini, and while thinking that 16GB will not be enough I remembered that server. It was a NAT gateway too, and also had a webserver with hit stats for the cs server. And it was a popular 16v16 type of server too. What happened? How did we get to 16GB minimum, and 32GB just to make you not sad?

      • By genewitch 2026-02-05 23:32

        I ran my whole house network off a laptop with the specs of a Raspberry Pi 2 for a really long time. I finally broke and moved it to a VM because the laptop's built-in port and USB were finally too slow to route traffic, 11mbit USB! It took a decade+[1] of "innovation" in the US before I could finally buy internet faster than 11mbit. IIRC I switched to VM-based IPCop in ~2007.

        [1] My first broadband connection was in 1998 at 768/768 kbit symmetrical. My first megabit-speed connection was in 2006 or 2007. In 2010 or 2011 we got VDSL and it was 16 whole megabits. Now I have 300mbit on a good day, and 150mbit on a bad day.

        I literally wrote the guide on how to use old hardware with VM tech to route your house, first with IPCop[2], then generically[3], and just this week I wrote a guide on how to get IPv6 working with Starlink and DD-WRT[4].

        I've been in this a long time.

        [2]https://web.archive.org/web/20220323223325/https://www.dslre...

        [3]https://web.archive.org/web/20131214075417/https://www.dslre...

        and the dd-wrt starlink one from this week:

        [4]https://nextcloud.projectftm.com/index.php/s/4iScqZbrfYiNcKy

        ETA: it is hilarious how much pushback I got about doing all of this in a VM, just scant years before "you should just use a VM for that" became the default answer, and a decade before "just put it in a k8s cluster and pay someone a quarter million a year to babysit it" became a thing...

        Also, IPCop booted and installed off a single floppy forever.

    • By 1313ed01 2026-02-03 6:40

      NetBSD is probably what would make most sense to run on that old hardware.

      Alternatively, you may have accidentally built a great machine for installing FreeDOS to run old DOS games/applications. It does install from USB, but it needs a BIOS, so you can't run it on modern PC hardware.

      • By iberator 2026-02-03 7:51

        NetBSD is the only modern Unix still running like a charm on 32-bit hardware. OpenBSD is second, with great wifi support.

    • By littlecranky67 2026-02-03 7:54

      I was on Linux as my main driver in the early 2000s and we did watch movies back then, even DVDs. Of course, the formats were not HD; it was DivX or DVD ISOs. I remember running Gentoo and optimizing build flags for mplayer to get it working, at a time when I had a 500MHz Pentium III, later 850MHz. And I also remember having to tweak the mplayer output driver params to get good and smooth playback, but it was possible (mplayer -vo xv for Xvideo support). IIRC I got DVD .iso playback to run even on the framebuffer without X running at all (mplayer -vo fb). Also the "-framedrop" flag came in handy (you can do with a bit less than 25fps when under load). Also, you definitely needed compile-time support for SSE/SSE2 in the CPU. I am not even sure I ever had a GPU that had video decoding support.

      • By anthk 2026-02-03 10:21

        mpv and yt-dlp will fix that today.

    • By leidenfrost 2026-02-03 15:55

      Try Plop Boot Manager: https://www.plop.at/en/bootmanagers.html

      It can boot from a floppy or from a CD drive, and it lets you chainload into a live usb even on old computers.

      I used it to boot a CD from a floppy on an old Pentium MMX and it worked great (although slowly, of course)

    • By endgame 2026-02-03 6:47

      You might have some luck applying isohybrid(1) to the period-correct .iso image, making it bootable by other means: https://manpages.debian.org/stretch/syslinux-utils/isohybrid...

    • By 2b3a51 2026-02-03 8:33

      My 32-bit laptop is a Thinkpad T42 from 2005 which has a functioning CDROM, and which can run a Slackware 15 stable 32-bit install OKish, so I haven't tried any of this, but:

      My first thought: How about using a current computer to run qemu then mounting the Lenny iso as an image and installing to a qemu hard drive? Then dd the hard drive image to your 32bit target. (That might need access to a hard drive caddy depending on how you can boot the 32bit target machine, so a 'hardware regress' I suppose).

      My second thought: If the target machine is bootable from a more recent live Linux, try a debootstrap install of a minimal Lenny with networking (assuming you can connect the target machine to a network, I'm guessing with a cable rather than wifi). Reboot and install more software as required.

      • By wink 2026-02-03 8:41

        I have OpenBSD running on my old 2004 Centrino notebook (I might be lagging 2-3 versions behind, I don't really use it, just play around with it) and it's fine until you start playing YouTube videos, that is kinda hard on the CPU.

        • By 2b3a51 2026-02-03 9:21

          Yes, NetBSD and OpenBSD work fine on the 2005 T42 but as you say video performance is low. Recent OpenBSD versions have had to reduce the range of binary packages (i.e. outside of the base and installed with pkg_add) on i386 because of the difficulty of compiling them (e.g. Firefox, Seamonkey needing dependencies that are hard to compile on i386, a point the poster up thread made).

        • By anthk 2026-02-03 16:30

          My ~/yt-dlp.conf:

              # start of file
              --format=bestvideo[height<=?480][fps<=?30]+bestaudio/best
              # end of file

          My ~/.config/mpv/config:

              # start of file
              ytdl-format=bestvideo[height<=?480][fps<=?30]+bestaudio/best
              ao=sndio
              vo=gpu,xv
              audio-pitch-correction=no
              quiet=yes
              pause=no
              profile=fast
              vd-lavc-skiploopfilter=all
              #demuxer-cache-wait=yes
              #demuxer-max-bytes=4MiB
              # end of file

          Usage: mpv $YOUTUBE_URL

          Upgrade ASAP.

    • By forinti 2026-02-03 12:00

      I have a P166 under my desk and once in a blue moon I try to run something on it.

      My biggest obstacles are that it doesn't have an ethernet port and that it doesn't have BIOS USB support (although it does have a card with two USB ports).

      I've managed to run some small Linux distros on it (I'll definitely try this one), but, you're right, I haven't really found anything useful to run on it.

      • By dosk 2026-02-06 8:33

        Could you share the motherboard vendor and model? I will check your options.

        I have a P1 90MHz and a P2 500MHz, and I'm typing from a P4 just now :P

        I think the biggest limit will be missing SSE2/PAE/POPCNT; modern distros need these.

    • By fuzzfactor 2026-02-06 9:27

      The way an ISO is supposed to be made to boot from USB (or HDD, SSD) is to set up the BIOS to boot from the proper type of device (or let you select from a boot menu).

      Start with a conventional MBR and active FAT32 partition, and make sure it will boot to MS-DOS, this only requires the 3 DOS OS files to be present when the bootsector is a DOS bootsector (which seeks IO.SYS).

      Once that's done, then (optionally) copy the DOS bootsector to a file on that FAT32 volume, name the (512 byte) file BOOTSECT.DOS. A disk editor can do this, or carefully use dd in Linux.

      I then boot to Windows and use its CLI to run SYSLINUX.EXE (v6.03 on virgin media), to "Syslinux" (verb) the FAT32 volume. You can alternatively do this from Linux. This replaces the DOS bootsector with a Syslinux bootsector that will seek a Syslinux folder instead of seeking IO.SYS. Also writes ldlinux.sys and ldlinux.c32 to the FAT volume.

      You do have to be consistent with your Syslinux version: the .C32 files in use must be from the same version of Syslinux that you use to "Syslinux" the FAT volume, and must match the version of Isolinux used to make the ISO. To find out which version of Isolinux was originally used on the ISO, open the ISO in a disk editor (ISOs have large sectors); about the third sector down will be some readable text with the Isolinux version number.

      Then copy all the files & folders from the mounted ISO to the FAT volume, change the name of the isolinux folder to syslinux, in the syslinux folder change the name of isolinux.cfg to syslinux.cfg.

      A properly prepared distro distributed in ISO form should then boot normally the way it is intended when stored on a FAT filesystem instead.

      Show-stoppers can still arise when some live distros have .CFG bootstrings within their Isolinux folder that specify CDROM or other hardcoded deficiencies, for USB you can sometimes specify REMOVABLE after you change the foldername to Syslinux. You can also specify a chosen volume in case it's not picked up by default.

      You may need to look at every .CFG file in the Syslinux folder, they are all usually linked, ideally there is only syslinux.cfg but some people make it more complicated than that. Back them up before editing but they are just text files.
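      For what it's worth, the sequence above condenses into a rough Linux-only sketch. This is a minimal sketch, not a tested recipe: the device name sdX, the ISO filename, and the mbr.bin path are assumptions (the path varies per distro), and the Syslinux/Isolinux version-matching caveat above still applies.

      ```shell
      # DANGER: sdX is a placeholder; writing to the wrong device destroys data.
      # 1. MBR partition table with one bootable FAT32 partition.
      parted --script /dev/sdX mklabel msdos mkpart primary fat32 1MiB 100% set 1 boot on
      mkfs.vfat -F 32 /dev/sdX1

      # 2. Install the Syslinux boot sector + ldlinux files on the FAT volume,
      #    and a generic MBR on the drive itself (path is distro-dependent).
      syslinux --install /dev/sdX1
      dd bs=440 count=1 conv=notrunc if=/usr/lib/syslinux/bios/mbr.bin of=/dev/sdX

      # 3. Copy the ISO contents, then rename isolinux -> syslinux.
      mount /dev/sdX1 /mnt
      mount -o loop distro.iso /media
      cp -r /media/. /mnt/
      mv /mnt/isolinux /mnt/syslinux
      mv /mnt/syslinux/isolinux.cfg /mnt/syslinux/syslinux.cfg
      umount /media /mnt
      ```

      (This skips the DOS/BOOTSECT.DOS detour, which is optional as noted above.)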

    • By mrighele 2026-02-03 12:31

      It seems that both OpenBSD [1] and NetBSD [2] still support i386, for example here [3] you can find the image for a USB stick.

      I expect at least the base system (including X) to work without big issues (if your hardware is supported), for extra packages you may need a bit of luck.

      [1] https://www.openbsd.org/plat.html

      [2] https://wiki.netbsd.org/ports/

      [3] https://wiki.netbsd.org/ports/i386/

    • By 1vuio0pswjnm7 2026-02-04 0:53

      "There is a herculean effort on the part of package maintainers to build software for distros, and no one has been building 32 bit version of software for years, even if it is possible to build from source."

      This statement must be Linux-only

      Pre-compiled packages for i386 are still available for all versions of NetBSD including the current one

      I still compile software for i386 from pkgsrc

      https://ftp.netbsd.org/pub/pkgsrc/current/

      NB. I'm not interested in graphical software, I prefer VGA textmode

    • By svilen_dobrev 2026-02-03 17:02

      I had an original 7" eeepc from 2007, running archlinux-32 from ~2017, with Xfce and all that, and a few months ago I updated it.. it took me almost a day, going through various rabbit-holes, like 1-2 static-built pacmans and python, and manually picking and combining various versions. The result was okay but somehow took more space than before (it has a 4G ssd, of which I used to have 2GB free, now only 1.5). But maybe that machine is not old enough..

    • By iberator 2026-02-03 7:55

      You can always run Linux off the DOS partition with a vmlinux loader. Or the Slackware DOS version (forgot its name).

      Don't lose hope. You can boot it one way or other :)

    • By anthk 2026-02-03 10:17

      The last release of NetBSD still has drivers.

    • By b00ty4breakfast 2026-02-03 19:01

      >Computers have been powerful enough for productivity tasks for 20 years

      Little known fact; before 2006 all we did was play Pong and make beep-boop noises on our computers.

  • By mlacks 2026-02-03 15:07

    Reminds me of my first Linux distro, Damn Small Linux. I think this was used as a first attempt to port Linux to the GameCube, but the main team driving the effort ended up going with Gentoo instead.

    From the main page:

    As with most things in the GNU/Linux community, this project continues to stand on the shoulders of giants. I am just one guy without a CS degree, so for now, this project is based on antiX 23 i386. AntiX is a fantastic distribution that I think shares much of the same spirit as the original DSL project. AntiX shares pedigree with MEPIS and also leans heavily on the geniuses at Debian. So, this project stands on the shoulders of giants. In other words, DSL 2024 is a humble little project!

    Though it may seem comparably ridiculous that 700MB is small in 2024 when DSL was 50MB in 2002, I’ve done a lot of hunting to find small footprint applications, and I had to do some tricks to get a workable desktop into the 700MB limit. To get the size down the ISO currently reduced full language support for German, English, French, Spanish, Portuguese and Brazilian Portuguese (de_DE, en_AU, en_GB, en_US, es_ES, fr_FR, es_ES, pt_PT, & pt_BR ). I had to strip the source codes, many man pages, and documentation out. I do provide a download script that will restore all the missing files, and so far, it seems to be working well.

    https://www.damnsmalllinux.org/

    • By sudobash1 2026-02-04 16:52

      > Though it may seem comparably ridiculous that 700MB is small in 2024 when DSL was 50MB in 2002...

      It really depends on what you are looking at. This is a bit of an apples to oranges comparison, but OpenWrt happily works with 16MB of disk space, and can go down to 8MB if you squeeze it. It includes a modern Linux kernel, shell, networking stack, ssh server, package manager, text editor, web server with dynamic pages, etc...

      Part of its trick is that it aggressively pares down the hardware support, such that you normally download an OpenWrt image customized to your exact router. But of course the biggest difference is that it doesn't include a graphics stack or any GUI applications.

      I work in embedded Linux, and it's a whole different world here of trimming the fat on Linux to keep the BOM prices low. But you'd be surprised how lean we can get it.

    • By alsetmusic 2026-02-03 15:18

      I was just reacquainting myself with Puppy Linux, DSL, and TinyCoreLinux a couple weeks ago to sandbox an LLM agent in a VM. Good stuff.

      For those who are curious, Alpine was the recommended distro as I went through various reviews. I don't know how reliable that advice is.

      • By zamadatix 2026-02-03 17:05

        Alpine is great, especially for anything single purposed and headless (be it physical, VM, or container) so long as that thing isn't too tied to glibc. Been around a long time with a stable community (who are mostly using it for containers). It also defaults to a typical versioned release scheme but has the ability to switch to rolling just by changing the repo if you know you need the latest versions.

        I once tried to use it as a GUI daily driver on my work laptop (since I was already using it for containers and VMs at work) and found that stretched it a bit too far out of its speciality. It definitely had the necessary packages, just with a lot of rough edges and an increased rate of problems (separate from glibc, systemd, or other expected compatibility angles). Plus the focus on having things be statically linked makes really wide (lots of packages) installs negate any space-efficiency gains it had.

  • By Fiveplus 2026-02-03 5:25

    The persistence strategy described here (mount -t msdos -o rw /dev/fd0 /mnt) combined with a bind mount to home is a clever touch for saving space.

    I don't know if that's also true for data integrity on physical magnetic media. FAT12 is not a journaling filesystem. On a modern drive, a crash during a write is at best annoying, while on a 3.5" floppy with a 33MHz CPU, a write operation blocks for a perceptible amount of time. If the user hits the power switch or the kernel panics while the heads are moving or the FAT is updating, that disk is gone. The article mentions sync, but sync on a floppy drive is an agonizingly slow operation that users might interrupt.

    Given the 253KiB free space constraint, I wonder if a better approach would be treating the free space as a raw block device or a tiny appended partition using a log-structured filesystem designed for slow media (like a stripped down JFFS2 or something), though that might require too many kernel modules.

    Has anyone out there experimented with appending a tar archive to the end of the initramfs image in place for persistence, rather than mounting the raw FAT filesystem? It might be safer to serialize writes only on shutdown. Would love more thoughts on this.
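    For reference, the mount-plus-bind flow being discussed can be sketched in a few shell commands. This is a minimal sketch under the thread's assumptions (/dev/fd0 as the floppy, /mnt and /home as illustrative mount points), not the article's exact script:

    ```shell
    # Hypothetical persistence flow: mount the FAT12 floppy read-write,
    # then bind-mount a directory on it over /home so writes land on disk.
    mount -t msdos -o rw /dev/fd0 /mnt
    mkdir -p /mnt/home
    mount -o bind /mnt/home /home

    # ... edit files under /home ...

    sync                          # flush the page cache; slow on a floppy
    umount /home && umount /mnt   # detach cleanly before powering off
    ```

    The integrity risk raised above is exactly the window between a buffered write and that final sync/umount.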

    • By userbinator 2026-02-03 5:54

      Controversial position: journaling is not as beneficial as commonly believed. I have been using FAT for decades and never encountered much in the way of data corruption. It's probably found in far more embedded devices than PCs these days.

      • By Skunkleton 2026-02-03 6:36

        If you make structural changes to your filesystem without a journal, and you fail midway, there is a 100% chance your filesystem is not in a known state, and a very good chance it is in a non-self-consistent state that will lead to some interesting surprises down the line.

        • By ars 2026-02-03 6:57

          FAT has two allocation tables, the main one and a backup. So if you shut it off while manipulating the first one you have the backup. You are expected to run a filesystem check after a power failure.

        • By userbinator 2026-02-03 7:01

          No, it is very well known what will happen: you can get lost cluster chains, which are easily cleaned up. As long as the order of writes is known, there is no problem.

          • By dezgeg 2026-02-03 9:24

            Better hope you didn't have a rename in progress with the old name removed without the new name in place. Or a directory entry written pointing to a FAT chain not yet committed to the FAT.

            Yes, soft-updates-style write ordering can help with some of the issues, but the Linux driver doesn't do that. And some of the issues are essentially unavoidable, requiring a full fsck on each unclean shutdown.

            • By M95D 2026-02-03 15:11

              I don't know how Linux driver updates FAT, but if it doesn't do it the way DOS did, then it's a bug that puts data at risk.

              1) Allocate space in FAT#2, 2) Write data in file, 3) Allocate space in FAT#1, 4) Update directory entry (file size), 5) Update free space count.

              Rename in FAT is an atomic operation. Overwrite old name with new name in the directory entry, which is just 1 sector write (or 2 if it has a long file name too).

              • By dezgeg 2026-02-03 16:21

                No, the VFAT driver doesn't do anything even slightly resembling that.

                In general "what DOS did" doesn't cut it for a modern system with page and dentry caches and multiple tasks accessing the filesystem, without completely horrible performance. I would be really surprised if Windows handled all those cases right with disk caching enabled.

                While rename can be atomic in some cases, it cannot be in the case of cross directory renames or when the new filename doesn't fit in the existing directory sector.

                • By M95D 2026-02-06 10:53

                  > No, the VFAT driver doesn't do anything even slightly resembling that.

                  Which driver? DOS? FreeDOS? Linux? Did you study any of them?

                  > While rename can be atomic in some cases, it cannot be in the case of cross directory renames or when the new filename doesn't fit in the existing directory sector.

                  That's a "move". Yes, you would need to write 2-6 sectors in that case.

                  For short filenames, the new filename always fits in the directory, because short file names are fixed 8.3 characters, pre-allocated. A long file name can occupy up to 10 consecutive directory entries out of the 16 fixed entries each directory sector (512B) has. So an in-place rename of an LFN writes 2 sectors maximum (or 1KB).

                  Considering that all current drives use at least 4KB sectors (a lot larger if you consider the erase block of an SSD), the rename operation is still atomic in 99% of cases. Only one physical sector is written.

                  The most complicated rename operation would be if the LFN needs an extra cluster for the directory, or is shorter and one cluster is freed. In that case, there are usually 2 more 1-sector writes to the FAT tables.

                  Edit: I corrected some sector vs. cluster confusion.

    • By M95D 2026-02-03 10:53 | 1 reply

      FAT can be made crash-tolerant from the driver side, just like a journaled FS:

        1) mark blocks allocated in first FAT
        If a crash occurs here, then data written is incomplete, so write FAT1 with data from FAT2 discarding all changes.
        
        2) write data in sectors
        If a crash occurs here, same as before, keep old file size.
        
        3) update file size in the directory
        This step is atomic - it's just one sector to update. If a crash occurs here (file size matches FAT1), copy FAT1 to FAT2 and keep the new file size.
        
        4) mark blocks allocated in the second FAT
        If a crash occurs here, write is complete, just calculate and update free space.
        
        5) update free space
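The five steps above form an ordered-write protocol in which the two FAT copies act as a tiny journal: on mount, comparing them against the directory entry tells you which phase a crash interrupted. A minimal sketch of that recovery decision, with hypothetical function and return-value names:

```python
def recover(fat1, fat2, size_updated):
    """Decide the recovery action after a crash, per the 5-step protocol.

    fat1, fat2:   the two allocation tables (e.g. lists of cluster marks)
    size_updated: whether the directory entry already shows the new size
    """
    if fat1 == fat2:
        # No write was in progress, or the crash hit after step 4:
        # both FATs agree, so only the free-space count may be stale.
        return "recount free space"
    if not size_updated:
        # Crash during steps 1-2: data is incomplete, so roll back by
        # restoring FAT1 from the untouched FAT2.
        return "copy FAT2 over FAT1 (discard changes)"
    # Crash after step 3: the new size matches FAT1, so roll forward
    # by propagating FAT1 to FAT2.
    return "copy FAT1 over FAT2 (commit)"
```

The key property is ordering: FAT2 is only updated after the directory entry, so a FAT1/FAT2 mismatch plus the old file size always means "abort", and a mismatch plus the new size always means "commit".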

      • By ale42 2026-02-03 12:02 | 1 reply

        Is this something the FAT driver in Linux can do?

        • By dezgeg 2026-02-03 16:20

          No. There are proprietary implementations which can, though not in 100% of the cases.

    • By iberator 2026-02-03 7:58 | 1 reply

      PS: In the good old days there was no initrd or other ramdisk stuff: you read the entire system straight from the disk. Slackware 8 was like that for sure, and NetBSD (even the newest release) still does it by default.

      • By anthk 2026-02-03 22:37

        Slackware has several kernels to choose from.

    • By zx8080 2026-02-03 5:54

      > If the user hits the power switch or the kernel panics while the heads are moving or the FAT is updating, that disk is gone.

      Makes sense, great point. I would rather use a second drive for the writable disk space, if possible (I know how rare it is now to have two floppy drives, but still).

    • By M95D 2026-02-03 15:21

      OpenWrt on some devices such as the Turris Omnia writes the squashfs (mounted as the RO root fs) to the "root" partition and then, immediately after it in the same partition, writes a jffs2 (mounted as a RW overlayfs). So it can be done.

    • By ars 2026-02-03 6:58

      > If the user hits the power switch or the kernel panics while the heads are moving or the FAT is updating, that disk is gone.

      This isn't true. As I commented lower in the thread, FAT keeps a backup table, and you can use that to restore the disk.

HackerNews