The lost cause of the Lisp machines

2025-11-19 19:44 · www.tfeb.org

2025-11-18 :: lisp, stupidity, stories

I am just really bored by Lisp Machine romantics at this point: they should go away. I expect they never will.

History

Symbolics went bankrupt in early 1993. In the way of these things various remnants of the company lingered on for, in this case, decades. But 1993 was when the Lisp Machines died.

The death was not unexpected: by the time I started using mainstream Lisps in 1989 everyone knew that special hardware for Lisp was a dead idea. The common idea was that the arrival of RISC machines had killed it, but in fact machines like the Sun 3/260 in its ‘AI’ configuration were already hammering nails into its coffin. In 1987 I read a report showing the Lisp performance of an early RISC machine, using Kyoto Common Lisp (not a famously fast implementation of CL), beating a Symbolics on the Gabriel benchmarks [PDF link].

1993 is 32 years ago. The Symbolics 3600, probably the first Lisp machine that sold in more than tiny numbers, was introduced in 1983, ten years earlier. People who used Lisp machines other than as historical artefacts are old today.

Lisp machines were both widely available and offered the best performance for Lisp for a period of about five years which ended nearly forty years ago. They were probably never competitive in terms of performance for the money.

It is time, and long past time, to let them go.

But still the romantics — some of them even old enough to remember the Lisp machines — repeat their myths.

‘It was the development environment’

No, it wasn’t.

The development environments offered by both families of Lisp machines were seriously cool, at least for the 1980s. I mean, they really were very cool indeed. Some of the ways they were cool matter today, but some don’t. For instance in the 1980s and early 1990s Lisp images were very large compared to available memory, and machines were also extremely slow in general. So good Lisp development environments did a lot of work to hide this slowness, and in general to make sure you only very seldom had to restart everything, which took significant fractions of an hour, if not more. None of that matters today, because machines are so quick and Lisps so relatively small.

But that’s not the only way they were cool. They really were just lovely things to use in many ways. But, despite what people might believe, this did not depend on the hardware: there is no reason at all why a development environment that cool could not be built on stock hardware. Perhaps (perhaps) that was not true in 1990: it is certainly true today.

So if a really cool Lisp development environment doesn’t exist today, it is nothing to do with Lisp machines not existing. In fact, as someone who used Lisp machines, I find the LispWorks development environment at least as comfortable and productive as they were. But, oh no, the full-fat version is not free, and no version is open source. Neither, I remind you, were they.

‘They were much faster than anything else’

No, they weren’t. Please, stop with that.

‘The hardware was user-microcodable, you see’

Please, stop telling me things about machines I used: believe it or not, I know those things.

Many machines were user-microcodable before about 1990. That meant that, technically, a user of the machine could implement their own instruction set. I am sure there are cases where people even did that, and a much smaller number of cases where doing that was not just a waste of time.

But in almost all cases the only people who wrote microcode were the people who built the machine. And the reason they wrote microcode was because it is the easiest way of implementing a very complex instruction set, especially when you can’t use vast numbers of transistors. For instance if you’re going to provide an ‘add’ instruction which will add numbers of any type, trapping back into user code for some cases, then by far the easiest way of doing that is going to be by writing code, not building hardware. And that’s what the Lisp machines did.

Of course, the compiler could have generated that code for hardware without that instruction. But with the special instruction the compiler’s job is much easier, and code is smaller. A small, quick compiler and small compiled code were very important with slow machines which had tiny amounts of memory. Of course a compiler not made of wet string could have used type information to avoid generating the full dispatch case, but wet string was all that was available.
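
To make that concrete, here is an illustrative sketch (mine, not the post’s) of roughly the type dispatch a fully generic add implies on stock hardware, and of how type information collapses it. The function names are invented for the example:

    ;; Illustrative only: roughly the dispatch behind a generic add,
    ;; which a microcoded 'add' instruction performed in a single opcode.
    (defun generic-add (x y)
      (cond ((and (typep x 'fixnum) (typep y 'fixnum))
             (+ x y))                   ; fast path (may still overflow to bignum)
            ((and (numberp x) (numberp y))
             (+ x y))                   ; bignums, ratios, floats, complexes
            (t (error "~S and ~S cannot be added" x y))))

    ;; With type information the dispatch disappears: any modern CL
    ;; compiler turns this into a handful of machine instructions.
    (defun fixnum-add (x y)
      (declare (fixnum x y) (optimize speed))
      (the fixnum (+ x y)))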

What microcodable machines almost never meant was that users of the machines would write microcode.

At the time, the tradeoffs made by Lisp machines might even have been reasonable. CISC machines in general were probably good compromises given the expense of memory and how rudimentary compilers were: I can remember being horrified at the size of compiled code for RISC machines. But I was horrified because I wasn’t thinking about it properly. Moore’s law was very much in effect in about 1990 and, among other things, it meant that the amount of memory you could afford was rising exponentially with time: the RISC people understood that.

‘They were Lisp all the way down’

This, finally, maybe, is a good point. They were, and you could dig around and change things on the fly, and this was pretty cool. Sometimes you could even replicate, later, the things you’d done. I remember playing with sound on a 3645, which was really only possible because you could get low-level access to the disk from Lisp, as the disk could just marginally provide data fast enough to stream sound.

On the other hand they had no isolation and thus no security at all: people didn’t care about that in 1985, but if I was using a Lisp-based machine today I would certainly be unhappy if my web browser could modify my device drivers on the fly, or poke and peek at network buffers. A machine that was Lisp all the way down today would need to ensure that things like that couldn’t happen.

So maybe it would be Lisp all the way down, but you absolutely would not have the kind of ability to poke around in and redefine parts of the guts that you had on Lisp machines. Maybe that’s still worth it.

Not to mention that I’m just not very interested in spending a huge amount of time grovelling around in the guts of something like an SSL implementation: those things exist already, and I’d rather do something new and cool. I’d rather do something that Lisp is uniquely suited for, not reinvent wheels. Well, maybe that’s just me.

Machines which were Lisp all the way down might, indeed, be interesting, although they could not look like 1980s Lisp machines if they were to be safe. But that does not mean they would need special hardware for Lisp: they wouldn’t. If you want something like this, hardware is not holding you back: there’s no need to endlessly mourn the lost age of Lisp machines, you can start making one now. Shut up and code.

And now we come to the really strange arguments, the arguments that we need special Lisp machines either for reasons which turn out to be straightforwardly false, or because we need something that Lisp machines never were.

‘Good Lisp compilers are too hard to write for stock hardware’

This mantra is getting old.

The most important thing is that we have good stock-hardware Lisp compilers today. As an example, today’s CL compilers are not far from Clang/LLVM for floating-point code. I tested SBCL and LispWorks: it would be interesting to know how many times more work has gone into LLVM than into them for such a relatively small improvement. I can’t imagine a world where these two CL compilers would not be at least comparable to LLVM if similar effort were spent on them.

These things are so much better than the wet-cardboard-and-string compilers the LispMs had that it’s not funny. In particular, if some mythical ‘dedicated Lisp hardware’ made it possible to write a Lisp compiler which generated significantly faster code, then code from Lisp compilers would comprehensively outperform C and Fortran compilers: does that seem plausible? I thought not.

A large amount of work is also going into compilation for other dynamically-typed, interactive languages which aim at high performance. That means on-the-fly compilation and recompilation of code where both the compilation and the resulting code must be quick. Example: Julia. Any of that development could be reused by Lisp compiler writers if they needed to or wanted to (I don’t know if they do, or should).

Ah, but then it turns out that that’s not what is meant by a ‘good compiler’ after all. It turns out that ‘good’ means ‘compilation is fast’.

All these compilers are pretty quick: the computational resources used by even a pretty hairy compiler have not scaled anything like as fast as those needed for the problems we want to solve (that’s why Julia can use LLVM on the fly). Compilation is also not an Amdahl bottleneck as it can happen on the node that needs the compiled code.

Compilers are so quick that a widely-used CL implementation exists where EVAL uses the compiler, unless you ask it not to.
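
The post doesn’t name the implementation, but SBCL behaves this way; a quick check at a stock SBCL REPL (my example, not the post’s):

    ;; SBCL's EVAL goes through the native compiler by default,
    ;; so even evaluated code comes back compiled:
    (compiled-function-p (eval '(lambda (x) (* x x))))  ; => T
    ;; You have to ask for the interpreter explicitly:
    ;; (setf sb-ext:*evaluator-mode* :interpret)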

Compilation options are also a thing: you can ask compilers to be quick, fussy, sloppy, safe, produce fast code and so on. Some radically modern languages also allow this to be done in a standardised (but extensible) way at the language level, so you can say ‘make this inner loop really quick, and I have checked all the bounds so don’t bother with that’.
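
That ‘radically modern’ language is presumably Common Lisp itself, whose standard declarations have expressed exactly this tradeoff since the 1980s. A minimal sketch (the function is invented for illustration):

    ;; Per-function compilation policy in standard CL: full speed, no
    ;; runtime checks, plus a promise about the argument's type.
    (defun sum-doubles (v)
      (declare (type (simple-array double-float (*)) v)
               (optimize (speed 3) (safety 0)))  ; "I have checked the bounds"
      (let ((acc 0d0))
        (declare (double-float acc))
        (dotimes (i (length v) acc)
          (incf acc (aref v i)))))

Swap in (safety 3) (speed 1) and the same compiler becomes the fussy, fully-checked one.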

The tradeoff between a fast Lisp compiler and a really good Lisp compiler is imaginary, at this point.

‘They had wonderful keyboards’

Well, if you didn’t mind the weird layouts: yes, they did. And that has exactly nothing to do with Lisp.

And so it goes on.

Bored now

There’s a well-known syndrome amongst photographers and musicians called GAS: gear acquisition syndrome. Sufferers from this pursue an endless stream of purchases of gear — cameras, guitars, FX pedals, the last long-expired batch of a legendary printing paper — in the strange hope that the next camera, the next pedal, that paper, will bring out the Don McCullin, Jimmy Page or Chris Killip in them. Because, of course, Don McCullin and Chris Killip only took the pictures they did because they had the right cameras: it was nothing to do with talent, practice or courage, no.

GAS is a lie we tell ourselves to avoid the awkward reality that what we actually need to do is practice, a lot, and that even if we did that we might not actually be very talented.

Lisp machine romanticism is the same thing: a wall we build ourselves so that, somehow unable to climb over it or knock it down, we never have to face the fact that the only thing stopping us is us.

There is no purpose to arguing with Lisp machine romantics because they will never accept that the person building the endless barriers in their way is the same person they see in the mirror every morning. They’re too busy building the walls.

As a footnote, I went to a talk by an HPC person in the early 90s (so: after the end of the cold war and when the HPC money had gone) where they said that HPC people needed to be aiming at machines based on what big commercial systems looked like, as nobody was going to fund dedicated HPC designs any more. At the time that meant big cache-coherent SMP systems. Those hit their limits and have really died out now: the bank I worked for had dozens of fully-populated big SMP systems in 2007; it perhaps still has one or two it can’t get rid of because of some legacy application. So HPC people now run on enormous shared-nothing farms of close-to-commodity processors with very fat interconnect and are wondering about / using GPUs. That’s similar to what happened to Lisp systems, of course: perhaps, in the HPC world, there are romantics who mourn the lost glories of the Cray-3. Well, if I were giving a talk to people interested in the possibilities of hardware today I’d be saying that in a few years there are going to be a lot of huge farms of GPUs going very cheap, if you can afford the power. People could be looking at whether those can be used for anything more interesting than the huge neural networks they were designed for. I don’t know if they can.


Read the original article

Comments

  • By Animats 2025-11-20 6:53, 5 replies

    As someone who used Franz LISP on Sun workstations while someone else nearby used a Symbolics 3600 refrigerator-sized machine, I was never all that impressed with the LISP machine. The performance wasn't all that great. Initially garbage collection took 45 minutes, as it tried to garbage-collect paged-out code. Eventually that was fixed.

    The hardware was not very good. Too much wire wrap and slow, arrogant maintenance.

    I once had a discussion with the developers of Franz LISP. The way it worked was that it compiled LISP source files and produced .obj files. But instead of linking them into an executable, you had to load them into a run-time environment. So I asked, "could you put the run time environment in another .obj file, so you just link the entire program and get a standalone executable"? "Why would you want to do that?" "So we could ship a product." This was an alien concept to them.

    So was managing LISP files with source control, like everything else. LISP gurus were supposed to hack.

    And, in the end, 1980s "AI" technology didn't do enough to justify that hardware.

    • By e40 2025-11-20 15:55, 1 reply

      I worked on Franz Lisp at UCB. A couple of points:

      The ".obj" file was a binary file that contain machine instructions and data. It was "fast loaded" and the file format was called "fasl" and it worked well.

      The issue of building an application wasn't an issue because we had "dumplisp", which took the image in memory and wrote it to disk. The resulting image could be executed to create a new instance of the program as of the time dumplisp was run. Emacs called this "unexec" and it did approximately the same thing.

      Maybe your discussions with my group predated me and predated some of the above features, I don't know. I was in Fateman's group from '81-84.

      I assume your source control comments were about the Lisp Machine and not Franz Lisp. RCS and SCCS were a thing in the early 80's, but they didn't really gain steam until after I arrived at UCB. I was the one (I think... it was a long time ago) that put Franz Lisp under RCS control.

      • By Animats 2025-11-20 19:22, 2 replies

        I was doing this in 1980-1983. Here's some code.[1] It's been partly converted to Common LISP, but I was unable to get some of the macros to work.

        This is the original Oppen-Nelson simplifier, the first SAT solver. It was modified by them under contract for the Pascal-F Verifier, a very early program verifier.

        We kept all the code under SCCS and built with make, because the LISP part was only part of the whole system.

        [1] https://github.com/John-Nagle/pasv/tree/master/src/CPC4

        • By specialgoodness 2025-11-21 4:26

          The Nelson-Oppen simplifier is a great piece of work, but it is not the first SAT solver. Boyer and Moore published their formally verified SAT solver in their 1979 A Computational Logic, the first book on the Boyer-Moore Theorem Prover, though it was first implemented I believe in 1973. This algorithm, based on IF-normalization and lifting, was also a core part of the original Boyer-Moore prover. One interesting note is that it actually was almost an earlier discovery of BDDs - they have the core BDD data structure and normalization algorithm but were just missing memoization and the fact that orderings on variables induce canonicity for checking boolean equivalence! But in any case, Boyer-Moore had a (formally verified, even!) implemented and used SAT solver long before Nelson and Oppen.

        • By e40 2025-11-20 21:00, 1 reply

          Do you remember who you discussed it with? It had to be either Sklower or Foderaro, unless you talked with Fateman.

          Were the macros originally from another dialect of Lisp?

          • By Animats 2025-11-20 21:23

            Franz LISP had, I think, MacLISP macros, while Common LISP has a different system.

            I talked to Fateman at some point. Too long ago to remember about what.

    • By varjag 2025-11-20 8:02, 3 replies

      Lisp Machines had versioning file systems IIRC. Kinda like on VMS. Was SCCS really that far ahead?

      • By kragen 2025-11-20 19:00

        Yes, because on VMS (and presumably Genera) 20 versions of a file took 20× as much disk space as one version, so you wouldn't keep unlimited versions. In SCCS the lines that didn't change are only stored once, so 20 versions might be 2× or 1.1× or 1.01× the original file size.

      • By johnisgood 2025-11-20 13:43

        You are correct, see: https://en.wikipedia.org/wiki/Versioning_file_system#LMFS.

        Also: https://hanshuebner.github.io/lmman/pathnm.xml

        It is worth mentioning that while it is not versioning per se, APFS and ZFS support instantaneous snapshots and clones as well.

        Btrfs supports snapshots, too.

        HAMMER2 in DragonFlyBSD has the ability to store revisions in the filesystem.

      • By rst 2025-11-20 17:01

        Ummmm... yes. The problem with versioning file systems is that they only kept the last few versions; for files under active development, it was usually difficult to recover state older than a week or two.

        (SCCS handled collaborative development and merges a lot worse than anything current, but... versioning file systems were worse there, too; one war story I heard involved an overenthusiastic developer "revising" someone else's file with enough new versions that by the time the original author came back to it, their last version of the code was unrecoverable.)

    • By rjsw 2025-11-20 14:02, 1 reply

      Franz Lisp could create standalone executables from very early in the project: the compiler is one.

      • By e40 2025-11-20 15:59

        Correct. To continue the puns, it was called Liszt.

    • By MangoToupe 2025-11-20 14:54

      > The hardware was not very good.

      The hardware was never very interesting to me. It was the "lisp all the way down" that I found interesting, and the tight integration with editing-as-you-use. There's nothing preventing that from working on modern risc hardware (or intel, though please shoot me if I'm ever forced back onto it).

    • By raverbashing 2025-11-20 7:59, 5 replies

      > "So we could ship a product." This was an alien concept to them.

      This mentality seems to have carried over to (most) modern FP stacks

      • By whstl 2025-11-20 8:26, 4 replies

        Nah, it carried over to scripting languages.

        Most of them still require a very specific, very special, very fragile environment to run, and require multiple tools and carefully run steps just to do the same thing you can do with a compiled executable linked to the OS.

        They weren't made for having libraries, or being packaged to run on multiple machines, or being distributed to customers to run on their own computers. Perhaps JS was the exception, but only to the last part.

        Sure, it mostly works today, but a lot of people put in a lot of effort so we can keep shoving square pegs into round holes.

        • By jacquesm 2025-11-20 9:09, 4 replies

          Don't get me started. I tried to use a very simple Python program the other day, to talk to a bluetooth module in a device I'm building. In the end I gave up and wrote the whole thing in another language, but not before fighting the Python package system for a couple of hours, thinking the solution was right around the corner if only I could get rid of one more little conflict. Python is funny that way: it infantilized programming but then required you to become an expert at resolving package manager conflicts.

          For a while Conda seemed to have cracked this, but there too I now get unresolvable conflicts. It really boggles the mind how you could get this so incredibly wrong and still have the kind of adoption that Python has.

          • By foobarian 2025-11-20 13:51

            You and me both. These days I don't even try, just docker pull python and docker run -v .:/app python /app/foo.py

          • By eternityforest 2025-11-21 8:00

            I don't generally see this kind of issue with UV, at least not with the ultra popular libraries.

            With the exception of Gstreamer. I use some awful hacks to break out of virtual environments and use the system Gstreamer, because it's not on PyPi....

          • By arethuza 2025-11-20 14:26, 1 reply

            I thought that was just me - I really rather liked Python the language but was completely confused at how the package system seemed to work.... Mind you this was 12 years ago or so but it was enough to put me off using it ever again.

            • By iLemming 2025-11-20 16:04

              Yeah, it's still shitty. So often I have to go through some weird hoops to even just run the tests for a project with commits made last week. I can't even ask Claude to explain something about any given repo: it naively tries to run the tests, only to hit the wall. The number of different linters and checkers we have to run on CI just to make sure things are in a good state, yet every time I clone something and try to get it running, almost always some kind of bullcrap. Why the fuck do we even keep trying to write things in Python, I just don't get it.

            • By ErroneousBosh 2025-11-20 9:25, 3 replies

            Or, you know, it might just be that you're not very good at computers.

            Instead of jamming in thing after thing after thing blindly hoping it's going to work, try reading the error messages and making sense of why it's doing what it's doing.

            This is such Gen Z behaviour - it doesn't work first time so throw a strop and fling stuff.

              • By iLemming 2025-11-20 16:12, 1 reply

              > you're not very good at computers.

              Yup, I guess I am not. Been coding for over 20 years, went through over a dozen different PLs and only Python - the best fucking friend who doesn't sugar coat it, tells you without stuttering - "you suck at this, buddy"

                   # PEP 9001: Constructive Computing Feedback
              
                   ## Abstract
              
                   This PEP proposes a standardized error message for situations where Python interpreters shall inform the user of insufficient computational competence.
              
                   ## Specification
              
                   When a user attempts to execute Python code that supposedly should work but it doesn't, the interpreter shall emit:
              
                       You're suck at computers

                • By ErroneousBosh 2025-11-20 19:31, 1 reply

                > Yup, I guess I am not. Been coding for over 20 years

                Wow, nearly as long ago as my first kernel module.

                Then you ought to be able to figure out how to install one thing then, eh?

                Just out of interest, what was it that was causing such a hassle? I expect it's not really relevant now, but it would be interesting to see how hard it would be to fix.

                  • By iLemming 2025-11-20 19:40, 1 reply

                    It's not hard. It's just annoying to deal with this shit on a constant basis. Like just the other day, the tests wouldn't pass locally while passing on CI. I was scratching my head for some time; turns out there was a breaking change in csv.QUOTE_STRINGS or something between 3.12 and 3.13 of Python. How the fuck did they manage to fix/improve fucking csv logic by introducing a breaking change?

                    • By whstl 2025-11-21 15:06

                    I'm always suspicious of people who go "this is easy" as a way to put others down.

                      ...especially when it's about problems that are universally accepted as not being trivial, and that often require entire teams and ecosystems (Docker, Poetry, uv) to solve at scale.

              • By jacquesm 2025-11-20 9:27, 1 reply

              This is such a hilarious comment.

              Thank you for making my day.

                • By DonHopkins 2025-11-20 9:41, 2 replies

                Hey Gen Z, as long as I have you on the line, could you please explain 67 to me?

                I've heard of "68 and I'll owe you one", so is 67 about owing you two?

                  • By jacquesm 2025-11-20 9:45, 1 reply

                  I'm having a hard time coping with my social media addiction while doing some fairly hardcore development on an STM32 based platform so sorry :)

                  Incidentally, when will you (multiple) come and visit?

                  It's been too long.

                    • By DonHopkins 2025-11-20 9:48

                    I owe you at least one or two! Maybe we can test your drones out on that Russian guy with the GoFundMe campaign, then I'll owe you three! ;)

                  • By ux266478 2025-11-20 15:18

                  thats a gen alpha thing sorry unc

              • By s0sa 2025-11-20 11:03, 1 reply

              Oh yeah? Well the jerk store called, and they’re running out of you!

                • By whstl 2025-11-21 15:08

                His wife is in a coma

        • By graemep 2025-11-20 12:49, 3 replies

          TCL has good solutions for this, but it's not made it a success.

          Where I see Python used is in places where you do not need it packaged as executables:

          1. Linux - where the package manager solves the problem. I use multiple GUI apps written in python

          2. On servers - e.g. Django web apps, where the environment is set up per application

          3. Code written for specific environments - even for specific hardware

          4. One-off installs - again, you have a specified target environment.

          In none of the above cases do I find the environment to be fragile. On the other hand, if you are trying to distribute a Windows app to a large number of users I would expect it to be problematic.

          • By whstl 2025-11-20 13:33

            You don't find the environment to be fragile because millions of human hours have been spent fixing those problems or working around them.

            Which is significantly more than was needed for different technologies to achieve similar results.

          • By mr_toad 2025-11-20 14:09

            But people start by hacking away with one-off installs written for their specific environments, get it to the point where it’s useful to others, and then expect others to install all the tools and dependencies needed to install it.

            Quick start guide: works on my machine.

          • By throwaway81523 2025-11-21 5:36

            It seems the other way to me, maintaining environment consistency is such a pain that even a 5 line Python script ends up getting packaged in its own Docker container.

        • By raverbashing 2025-11-20 8:42

          You are correct unfortunately

        • By logicprog 2025-11-20 9:11

          Yeah, anytime I see a useful tool, and then find out it's written in Python, I want to kms — ofc, unless it happens to work with UV, but they don't always

      • By rmunn 2025-11-20 8:35, 1 reply

        Not the ones I've used. Haskell compiles to executables, F# compiles to the same bytecode that C# does and can be shipped the same way (including compiling to executables if you need to deploy to environments where you don't expect the .NET runtime to be already set up), Clojure compiles to .jar files and deploys just like other Java code, and so on.

        I'll grant that there are plenty of languages that seemed designed for research and playing around with cool concepts rather than for shipping code, but the FP languages that I see getting the most buzz are all ones that can ship working code to users, so the end users can just run a standard .exe without needing to know how to set up a runtime.

        • By raverbashing 2025-11-20 8:44, 2 replies

          True, but some still want me to understand what a monofunctor is, or something that sounds like a disease, to do things like print to screen or get a random number

          I feel that is the biggest barrier to their adoption nowadays (and also silly things like requiring ;; at the end of the line)

          Pure functions are a good theoretical exercise but they can't exist in practice.

          • By jacquesm 2025-11-20 9:11, 2 replies

            > Pure functions are a good theoretical exercise but they can't exist in practice.

            Well, they can. But not all the way up to the top level of your program. But the longer you can hold off from your functions having side effects the more predictable and stable your codebase will be, with as an added benefit fewer bugs and less chance of runtime issues.

            • By DonHopkins 2025-11-20 10:01, 1 reply

              Yes, but they're "Hello world!" hostile, so traditional programming language pedagogy doesn't work well.

              Q: How many Prolog programmers does it take to change a lightbulb?

              A: Yes.

              • By mchaver 2025-11-20 10:52

                I imagine LLMs have already thrown traditional programming language pedagogy out the window.

            • By raverbashing 2025-11-20 11:18, 2 replies

              Yes I agree, pure functions are good building blocks (for the most part), but I don't think the current abstractions and ways of bridging the FP and Procedural world are good enough

              Also have you managed to eliminate the side effect of your IP register changing when your program is running? ;)

              • By mr_toad 2025-11-20 14:12

                > but I don't think the current abstractions and ways of bridging the FP and Procedural world are good enough

                I find that both Python and Javascript allow you to use functional code when appropriate, without forcing you to use it when it isn’t.

              • By dreamcompiler 2025-11-20 14:10, 1 reply

                I love FP but at the end of the day registers are global variables. Half of modern compiler theory consists of workarounds for this sad truth.

                • By lucas_membrane 2025-11-20 20:09

                  A functional program is a self-contained expression -- an isolated system following its own rules. The foremost example we have of such a thing is the universe itself, but the universe is not a good example in this discussion, because we have plenty of reasons to think that the universe contains pure (not pseudo-) randomness. Beyond that, isolation, when it matters, is not an easily proven proposition, and is a deplorable fantasy when assumed in many of the other science and engineering disciplines.

          • By roryc89 2025-11-20 9:22

            In most FP languages it is simple to print to screen and get a random number.

            Pure functions often exist in practice and are useful for preventing many bugs. Sure, they may not be suitable for some situations but they can prevent a lot of foot guns.

            Here's a Haskell example with all of the above:

              import System.Random (randomRIO)
            
              main :: IO ()
              main = do
                num <- randomRIO (1, 100)
                print $ pureFunction num
            
              pureFunction :: Int -> Int
              pureFunction x = x * x + 2 * x + 1

      • By iLemming 2025-11-20 15:08

        There's 'FP stacks' and "FP stacks", and some aren't exactly similar. Volumes of money/data get handled by FP stacks - Jane Street famously uses OCaml; Cisco runs their entire cybersec backend on Clojure; Nubank covers all of Latin America and is about to spread into the US, running on Clojure and Elixir; Apple has their payment system, Walmart their billing, and Netflix their analytics on Clojure; Funding Circle in Europe and Splash in the US; etc. etc. There are tons of actual working products built on FP stacks. Just because your object-oriented brain can't pattern-match the reality, it doesn't mean it's not happening.

      • By dbtc 2025-11-20 9:18

        Wouldn't the whole system be the product then? There's tradeoffs, but that's just integration.

      • By vrighter 2025-11-21 9:15

        python comes to mind here. I have almost never had a deployment go smoothly.

  • By rmunn 2025-11-20 6:13, 3 replies

    Time to dig up a classic story about Tom Knight, who designed the first prototype of the Lisp Machine at MIT in the mid-70's. It's in the form of a classic Zen koan. This copy comes from https://jargondb.org/some_ai_koans but I've seen plenty of variations floating around.

    A novice was trying to fix a broken Lisp machine by turning the power off and on.

    Knight, seeing what the student was doing, spoke sternly: “You cannot fix a machine by just power-cycling it with no understanding of what is going wrong.”

    Knight turned the machine off and on.

    The machine worked.

    • By DonHopkins 2025-11-20 10:05

      That's one of the funniest and most enlightening classic AI Koans, originally from the ITS file "AI:HUMOR;AI KOANS".

      Here's another Moon story from the humor directory:

      https://github.com/PDP-10/its/blob/master/doc/humor/moon's.g...

      Moon's I.T.S. CRASH PROCEDURE document from his home directory, which goes into much more detail than just turning it off and on:

      https://github.com/PDP-10/its/blob/master/doc/moon/klproc.11

      And some cool Emacs lore:

      https://github.com/PDP-10/its/blob/master/doc/eak/emacs.lore

      Reposting this from the 2014 HN discussion of "Ergonomics of the Symbolics Lisp Machine":

      https://news.ycombinator.com/item?id=7878679

      http://lispm.de/symbolics-lisp-machine-ergonomics

      https://news.ycombinator.com/item?id=7879364

      eudox on June 11, 2014

      Related: A huge collections of images showing Symbolics UI and the software written for it:

      http://lispm.de/symbolics-ui-examples/symbolics-ui-examples

      agumonkey on June 11, 2014

      Nice, but I wouldn't confuse static images with the underlying semantic graph of live objects that's not visible in pictures.

      DonHopkins on June 14, 2014

      Precisely! When Lisp Machine programmers look at a screen dump, they see a lot more going on behind the scenes than meets the eye.

      I'll attempt to explain the deep implications of what the article said about "Everything on the screen is an object, mouse-sensitive and reusable":

      There's a legendary story about Gyro hacking away on a Lisp Machine, when he accidentally trashed the function cell of an important primitive like AREF (or something like that -- I can't remember the details -- do you, Scott? Or does Devon just make this stuff up? ;), and that totally crashed the operating system.

      It dumped him into a "cold load stream" where he could poke around at the memory image, so he clambered around the display list, a graph of live objects (currently in suspended animation) behind the windows on the screen, and found an instance where the original value of the function pointer had been printed out in hex (which of course was a numeric object that let you click up a menu to change its presentation, etc).

      He grabbed the value of the function pointer out of that numeric object, poked it back into the function cell where it belonged, pressed the "Please proceed, Governor" button, and was immediately back up and running where he left off before the crash, like nothing had ever happened!

      Here's another example of someone pulling themselves back up by their bootstraps without actually cold rebooting, thanks to the real time help of the networked Lisp Machine user community:

      ftp://ftp.ai.sri.com/pub/mailing-lists/slug/900531/msg00339.html

      Also eudox posted this link:

      Related: A huge collections of images showing Symbolics UI and the software written for it:

      http://lispm.de/symbolics-ui-examples/symbolics-ui-examples....

    • By f1shy 2025-11-20 6:24, 1 reply

      Everybody knows, you have to wait at least 5 tau.

      • By kragen 2025-11-20 18:04

        This puts the koan in a completely different light. Thank you.

  • By seanhunter 2025-11-20 20:10

    A few years ago I was learning lisp and I mentioned it to my uncle who had been an inspiration to me getting into programming. It turns out he wrote a tcp/ip stack for the symbolics lisp machine when he worked at Xerox. They had some sort of government contract that had to be done in lisp on the symbolics and deep in a very long contract it said that the interface had to be tcp/ip which the symbolics didn’t support out of the box. He said to me his boss came to him one day and the conversation went something like this:

    Boss: Hey there, you like learning new things right?

    Him (sensing a trap): Errr, yes.

    Boss: But you don’t program in lisp do you?

    Him (relieved, thinking he’s getting out of something): No.

    Boss: Good thing they sent these (gesturing at a literal bookshelf full of manuals that came with the symbolics).

    So he had to write a tcp stack. He said it was really cool because it had time-travel debugging: the ability to hit a breakpoint, walk the execution backwards, change variables, resume, etc. This was in the 1980s. Way ahead of its time.
