Good insight, but if you discount the visual elements (tabs, buttons, etc), you're limiting TUI to CLI, and I think that's unwarranted. The value proposition of both TUI and GUI is two-fold: you see the available action options, and you see the effect of your actions. So, yes, TUI and GUI _are_ closely related: who cares whether we're displaying pixels or character blocks.
Unfortunately, they are often artificially differentiated by the style of the UX interaction: TUIs promote keyboard actions, while GUIs favor the mouse without corresponding keyboard shortcuts. Worse for GUIs, their designers are often so enamored with WIMP that they omit keyboard shortcuts entirely or make them awkward. I hate it when, even if the ACTION button is reachable by keyboard traversal at all, it requires some unknown number of widget traversals instead of being one Tab away.
Since the keyboard is almost always used for textual data anyway, it makes sense to me to always enable it for command execution too. Well-designed GUIs and TUIs provide both WIMP and keyboard UX, which sadly is not the norm today, so here's my vote for making them larp as each other more.
Any competent computer engineer can design a much better ISA than RISC-V.
Hello, my fellow bitter old man! I have to respectfully disagree, though. Firstly, RISC-V was actually designed by competent academic designers with four preceding RISC projects under their belt. The tenet of the RISC philosophy is that the ISA is designed by careful measurement and simulation: the decisions are not supposed to be based on gut feeling or familiarity, but on optimizing the choices, which they arguably did.

Specifically, about detecting overflow: the familiar, classic approach of a hardware overflow (V) flag is well known to be suboptimal because of its effect on speculative and OoO implementations. RISC-V has enough primitives to handle explicit overflow checking, and they play well with performance techniques such as branch prediction and macro-op fusion, to the point of having asymptotically vanishing cost--there need be no performance penalty. Even better, RISC-V code that does NOT care about overflow can skip these checks entirely.
A lot of computer users are domain experts in something like chemistry or physics or material science. Computing to them is just a tool in their field, e.g. simulating molecular dynamics, or radiation transfer. They dot every i and cross every t _in_their_competency_domain_, but the underlying code may be a horrible FORTRAN mess. LLMs potentially can help them write modern code using modern libraries and tooling.
My go-to analogy is assembly language programming: it used to be an essential skill, but is now largely delegated to compilers outside of a few specialized niches. I think LLMs will be seen as the compiler technology of the next wave of computing.
Greg mentions discipline and vision as determinants of successful software, which is correct, but I think he misses another aspect of vision: the ability to attract and crystallize a community around a project. Arguably, most successful software projects thrive in the long term because they have a team of people who inspire each other, bring complementary talents, and provide continuity.