Lucent 7 R/E 5ESS Telephone Switch Rescue (2024)

2025-11-18 23:59 · kev009.com


I am still recovering from the fairly challenging logistical project of saving a Lucent 5ESS. This is a whale of a project, and I am still in a state of disbelief that I have gotten to this point. Thanks to my wife, my brother, and a few friends for their help, and to the University of Arizona, which has a very dedicated and professional Information Technology Services staff.

It started when I saw some telephone history enthusiasts post about a construction bid at the University of Arizona. It turns out U of A installed the 5ESS in the late 1980s, a rather forward-thinking move that netted a phone system that handled the growth of the University, medium-speed data anywhere a phone might be located (ISDN BRI or PRI), and copper and fiber plant that will continue to be used indefinitely.

At peak, it served over 20,000 lines. They've done their own writeup, The End of An Era in Telecommunications, which is worth a read. In particular, the machine had an uptime of approximately 35 years, including two significant retrofits to newer technology, culminating in the current Lucent-dressed 7 R/E configuration that includes an optical packet-switched core called the Communications Module 3 (CM3) or Global Messaging Server 3 (GMS3).

5ESS diagram

Moving 40 frames of equipment required a ton of planning and muscle. The whole package took up two 26' straight trucks; together that is just 1' short of an entire standard US semi-trailer.

Coming from the computing and data networking world, I found the construction of the switch quite bewildering at first. It is physically made up of standard frames which are interconnected into rows, not unlike datacenter equipment, but the frames are integrated into an overhead system for cable management. Internally, they are usually wired up within the row, and quite a few cables route horizontally between frames, but some connections have to transit up and over to other rows.

Line Trunk Peripherals hook up to a Switching Module Controller (SMC) directly or to an OXU (Optical Cross Connect Unit), which hooks up to an SMC and reduces the amount of copper cabling going between rows. Alarm cables run directly to an OAU (Office Alarm Unit) or form rings between rows that eventually end at the OAU. Optical connections go from OXUs to SMCs and then to the CM; copper test circuits home-run to a Metallic Test Service Unit shelf. Communications cables come out the top and route toward the wire frame, usually in large 128-wire cables but occasionally in smaller quantities for direct or cross connect of special services. A pair of Power Distribution Frames distributes -48V throughout the entire footprint, with redundancy at every level. A rough model of this topology is sketched below.
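For readers coming from data networking, one way to picture the interconnect is as a graph. The sketch below models the units named above in Python; the unit names come from the post, but the adjacency is a simplification for illustration, not wiring documentation.

```python
# Hypothetical model of the 5ESS interconnect described above.
# Unit names follow the post; edges are a simplification, not wiring docs.
INTERCONNECT = {
    "LTP": ["SMC", "OXU"],       # Line Trunk Peripherals: direct to SMC, or via OXU
    "OXU": ["SMC"],              # OXU cuts down copper runs between rows
    "SMC": ["CM"],               # optical links from SMCs into the Communications Module
    "alarm ring": ["OAU"],       # alarm cables terminate at the Office Alarm Unit
    "test circuit": ["MTSU"],    # copper test pairs home-run to the MTSU shelf
}

def feeds(unit, graph=INTERCONNECT):
    """Everything that eventually feeds `unit` (naive reverse reachability)."""
    direct = {src for src, dsts in graph.items() if unit in dsts}
    for src in list(direct):
        direct |= feeds(src, graph)
    return direct

print(feeds("CM"))  # {'LTP', 'OXU', 'SMC'}
```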

All of this was neatly cable-laced with waxed string. Moving a single frame required hundreds of distinct actions, varying from quick (cutting cable lace) to time-consuming (removing copper connections and bolts in all directions).

5ESS move

We were able to complete the removal in a single five-day workweek, and over the weekend I unloaded it in two days at my receiving area, where it now safely resides.

The next step will be to acquire some AC and DC power distribution equipment, which will have to wait for my funds to recover.

I should be able to boot the Administrative Module (AM), a 3B21D computer, relatively soon by acquiring a smaller DC rectifier. That alone will be very interesting, as it is the only use I know of for the DMERT or UNIX-RTR operating system, a fault-tolerant microkernel real-time UNIX from Bell Labs.

3B21D

The system came with a full set of manuals and schematics which will help greatly in rewiring and reconfiguring the machine. After the AM is up, I need to "de-grow" the disconnected equipment and I will eventually add back in an assortment of line, packet, and service units so that I can demonstrate POTS as well as ISDN voice and data. In particular, I am looking forward to interoperating with other communication and computing equipment I have.

I will have to reduce the size of the system quite a bit for power and space reasons, so I will have spare parts to sell or trade.

Additional pictures are available here until I have a longer-term project page established.

This is too much machine for one man, and it is part of a broader project I am working on to build a computing and telecommunications museum. If you are interested in working on the system with me, please feel free to reach out.

5ESS receiving



Comments

  • By luckyturkey 2025-11-19 8:37 (1 reply)

    This is such a stark contrast with how "critical infrastructure" is built now.

    A university bought a 5ESS in the 80s, ran it for ~35 years, did two major retrofits, and it just kept going. One physical system, understandable by humans with schematics, that degrades gracefully and can be literally moved with trucks and patience. The whole thing is engineered around physical constraints: -48V, cable management, alarm loops, test circuits, rings. You can walk it, trace it, power it.

    Modern telco / "UC" is the opposite: logical sprawl over other people's hardware, opaque vendor blobs, licensing servers, soft switches that are really just big Java apps hoping the underlying cloud doesn't get "optimized" out from under them. When the vendor loses interest, the product dies no matter how many 9s it had last quarter.

    The irony is that the 5ESS looks overbuilt until you realize its total lifecycle cost was probably lower than three generations of forklifted VoIP, PBX, and UC migrations, plus all the consulting. Bell Labs treated switching as a capital asset with a 30-year horizon. The industry now treats it as a revenue stream with a 3-year sales quota.

    Preserving something like this isn't just nostalgia, it's preserving an existence proof: telephony at planetary scale was solved with understandable, serviceable systems that could run for decades. That design philosophy has mostly vanished from commercial practice, but it's still incredibly relevant if you care about building anything that's supposed to outlive the current funding cycle.

  • By jakedata 2025-11-19 0:50 (2 replies)

    Visiting Bletchley Park and seeing step-by-step telephone switching equipment repurposed for computing reinforced my appreciation for the brilliance of the telecommunication systems we created over the past 150 years. Packet switching was inevitable and IP-everything makes sense in today's world, but something was lost in that transition too. I am glad to see that enthusiasts with the will and means are working to preserve some of that history. -Posted from SC2025-

    • By dekhn 2025-11-19 1:23 (1 reply)

      I wanted to learn more about computer hardware in college, so I took a class called "Cybernetics" (taught by D. Huffman). I thought we were going to focus on modern stuff, but instead it was a tour of information theory, which included various mathematical routing concepts (kissing spheres/spherical codes, Karnaugh maps). At the time I thought it was boring, but a couple of decades later, when working on Clos topologies, it came in handy.

      Other interesting notes: the invention of telegraphy and improvements to the underlying electrical systems really helped me understand communications in the 1800s better. And reading/watching The Cuckoo's Egg (with the German relay-based telephones) made me appreciate modern digital transistor-based systems.

      Even today, when I work on electrical projects in my garage, I am absolutely blown away with how much people could do with limited understanding and technology 100+ years ago compared to what I'm able to cobble together. I know Newton said he saw farther by standing on the shoulders of giants, but some days I feel like I'm standing on a giant, looking backwards and thinking "I am not worthy".

      • By Animats 2025-11-19 5:42 (3 replies)

        When the Bell System broke up, the old guys wrote a 3-volume technical history of the Bell System.[1] So all that is well documented.

        The history of automatic telephony in the Bell System is roughly:

        - Step-by-step switches. 1920s. Very reliable in terms of failure, but about 1% misdirected or failed calls. Totally distributed. You could remove any switch, and all it would do is reduce the capacity of the system slightly. Too much hardware per line.

        - Panel. 1930s. Scaled better, to large-city central offices. Less hardware per line. Beginnings of common control. Too complex mechanically. Lots of driveshafts, motors, and clutches.

        - Crossbar. 1940s. #5 Crossbar was a big dumb switch fabric controlled by a distributed set of microservices, all built from relays. Most elegant architecture. All reliable wire relays, no more motors and gears. If you have to design high-reliability systems, it is worth knowing how #5 Crossbar worked.

        - 1ESS - first US electronic switching. 1960s. Two mainframe computers (one spare) controlling a big dumb switch fabric. Worked, but clunky.

        - 5ESS - good US electronic switching. 1980s. Two or more minicomputers controlling a big dumb switch fabric. Very good.

        The Museum of Communications in Seattle has step by step, panel, and crossbar systems all working and interconnected.

        In the entire history of electromechanical switching in the Bell System, no central office was ever fully down for more than 30 minutes for any reason other than natural disasters and, in one case, a fire in the cable plant. That record has not been maintained in the computer era. It is worth understanding why.

        [1] https://archive.org/details/bellsystem_HistoryOfEngineeringA...

        • By kev009 2025-11-19 8:42

          The more I study the 5E, the more I see it as a multicomputer or distributed system. The minicomputers were responsible for OAM and for orchestrating the symphony over time, but the communications happen across the CM, which implements the Time/Space/Time fabric, and a sea of microcontrollers. I think this clarification is worthwhile because it drives home your point about faults in the computer era, and by extension the (micro)services era, even more -- the 5E is much less a mainframe and more a distributed system than commonly chronicled, which can be a harder problem, especially with the tooling back then.
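          To make the fabric concrete: below is a minimal Time/Space/Time (TST) sketch in Python. The slot count, maps, and two-link layout are invented for illustration and are not the CM3's actual design; the point is just the slot-reorder, link-swap, slot-restore pipeline.

          ```python
          # Minimal Time/Space/Time (TST) fabric sketch. Slot counts, maps, and
          # names are invented for illustration; not the CM3's actual design.

          N = 8  # time slots per frame on each link (real fabrics use far more)

          def tsi(frame, slot_map):
              """Time-Slot Interchange: buffer a frame, read slots out in a new order."""
              return [frame[slot_map[i]] for i in range(N)]

          def space(frames, crosspoints):
              """Space stage: for each time slot, connect each output link to an input link."""
              return [[frames[crosspoints[t][out]][t] for t in range(N)]
                      for out in range(len(frames))]

          # Two incoming links, each carrying 8 samples per frame.
          links = [[f"A{t}" for t in range(N)], [f"B{t}" for t in range(N)]]

          rev = [N - 1 - i for i in range(N)]   # reverse slot order (arbitrary demo map)
          swap = [[1, 0]] * N                   # space stage swaps the two links each slot

          # Input TSI moves samples to internal slots, the space stage moves them
          # between links, and the output TSI restores the intended outgoing slot.
          out = [tsi(f, rev) for f in space([tsi(f, rev) for f in links], swap)]

          print(out[0])  # link B's samples, delivered on link 0 in original slot order
          ```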

        • By Aloha 2025-11-19 15:11

          It's actually an 8-volume history (I have all 8 on my shelf); 3 were just on switching systems. You left out the parallel development to Panel: Rotary.

          The museum in Seattle also has a working 3ESS (likely the only one left in the world), and has recently added a DMS-10 as well.

        • By palmotea 2025-11-19 8:15 (1 reply)

          > That record has not been maintained in the computer era. It is worth understanding why.

          Go on.

          • By Animats 2025-11-19 20:19 (1 reply)

            Briefly,

            The big dumb switch fabric of #5 Crossbar has no processing power at all, but it has persistent state. The units that have processing power all go down to their ground state at the end of each call processing event, and have no state that persists over transactions. The various processing units (markers, junctors, senders, originating registers, etc.) are all at least duplicated, and usually there's a pool of them. Requests "seize" a unit at random from a pool, the unit does its thing, and the unit is quickly released.

            Units have self-checking, and if they fail, they drop out of their pool and raise an alarm. The call capacity or connection speed of the exchange is reduced but it keeps working. Everything has short hardware stall timers which will prevent some unit failure from hanging the exchange.

            #5 Crossbar has almost no persistent memory. End offices (for connecting subscriber lines) did not log call info. Toll offices did, but that used an output-only paper tape punch. There's so little state in the switch that matching up call start and call end events was done later in a billing office where the paper tape was read.

            The combination of statelessness and resource pools prevented total failure. Errors and unit failures happened occasionally but could not take down the whole switch.

            There's plenty of info about #5 Crossbar online, but 1950s telephony jargon is so different from 2020s server jargon that it's not obvious that #5 Crossbar is a microservices architecture.
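            A minimal Python sketch of the pattern described above (pools of stateless, self-checking units, seized at random, dropping out on failure), with all names and failure rates invented for illustration; the hardware stall timers are left out:

            ```python
            import random

            class Unit:
                """A stateless processing unit (marker, sender, register, ...)."""
                def __init__(self, uid):
                    self.uid = uid

                def self_check(self):
                    return random.random() > 0.01   # pretend ~1% of seizures find a bad unit

                def process(self, call):
                    return f"unit {self.uid} set up {call}"   # no state survives the call

            class Pool:
                """Seize a unit at random, use it briefly, release it to ground state."""
                def __init__(self, size):
                    self.free = [Unit(i) for i in range(size)]

                def handle(self, call):
                    while self.free:
                        unit = self.free.pop(random.randrange(len(self.free)))  # seize
                        if not unit.self_check():
                            print(f"ALARM: unit {unit.uid} dropped from pool")  # capacity
                            continue                                            # shrinks only
                        try:
                            return unit.process(call)   # short transaction...
                        finally:
                            self.free.append(unit)      # ...then quick release
                    raise RuntimeError("no healthy units left")  # pool exhausted, alarm upstream

            markers = Pool(size=4)
            for n in range(6):
                print(markers.handle(f"call-{n}"))
            ```

            A failed self-check shrinks capacity but never takes down the office, which is exactly the failure mode described above.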

            • By Animats 2025-11-20 5:17

              Thinking about it, this is why Erlang, designed for phone switches, is built around small processes which can fail and be restarted.

  • By hasbot 2025-11-19 13:13

    My first development job was as a software developer at Bell Labs in Naperville working on the 5E. I started at the end of 5E4 (the 4th revision) and then worked on 5E5 and 5E6. I went from writing maybe 1000-line programs in school to maintaining and enhancing a system comprising millions of lines of code, built by hundreds of developers. Most of the code itself was very simple, but it was the interactions between modules and switching features that were very complex.

HackerNews