We asked some of the world’s foremost experts an impossible question. Amazingly, they answered.
In the course of reporting this series on the impact of artificial intelligence on science and math, Quanta writers interviewed close to 100 experts — computer scientists, biologists, physicists, mathematicians and many others. Just about everyone reported seeing the impact of AI in their professional lives, some for an obvious reason: They had some kind of hand in building the technology itself. Others, a step or two away, have been so moved or pushed by AI’s potential that they have made professional changes as a result. They are modifying how they approach experiments, perhaps, or seeking out new collaborations, or asking new types of questions entirely. Careers that had been moving one way now go another.
At the end of each interview, we asked one impossible question: In the next five to 10 years, where do you see all of this going? The question is impossible not just because of the old quip about how it’s hard to make predictions, especially about the future, but because this particular future is being dreamed up, reconfigured and dreamed again nearly every day. Artificial intelligence as we currently think of it appeared to erupt suddenly out of nowhere just a few short years ago, blanketing the world in a fine, disruptive ash. Scientists and mathematicians are going to be shoveling through the consequences of this explosion, and perhaps more, for many years. They have our utmost gratitude for sharing their thoughts, hopes and fears with us. If anyone can see a coherent future through the haze, it might be them.
I did a biophysics PhD, and I do think the main value of AI in academia will be rapid bespoke scripting. Most of my code in grad school was little one-off scripts to perform a new experiment or show a new result. The code is not really the goal; in fact, it is frequently annoying and in the way of your actual goal. I would've killed for a tool that could style my figure for a talk, or perform a rolling average on a time trace.
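For a flavor of what those one-off scripts look like, here's a minimal sketch of a rolling average on a time trace in Python with NumPy. The window size and the synthetic noisy trace are made up for illustration:

```python
import numpy as np

def rolling_average(trace, window=50):
    """Smooth a 1D time trace with a simple moving average."""
    kernel = np.ones(window) / window
    # mode="valid" keeps only fully overlapped windows, avoiding edge artifacts
    return np.convolve(trace, kernel, mode="valid")

# Illustrative use: smooth a noisy 5 Hz sine wave sampled at 1 kHz
t = np.linspace(0, 1, 1000)
noisy = np.sin(2 * np.pi * 5 * t) + 0.3 * np.random.randn(t.size)
smoothed = rolling_average(noisy, window=25)
```

That's the whole "experiment analysis" in a dozen lines, and it's exactly the kind of thing that used to eat an afternoon.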
This applies to a lot of professional jobs that involve programming.
Yes, exactly. AI will soon be treated like yet another technology, not a sentient alien, and will be used largely to accomplish mundane, well-trodden tasks.
It's analogous to how social media founders dreamily promised global democracy at first; in reality, we got an app to sell a used monitor, complain about a new thing, and look at some cat pictures.
HN's bubble/biases are pretty obvious here, and we should expect them. But, as someone who uses code but is not a coder, I'll confirm this.
Nearly none of my coworkers were hired as coders. Yet we all code in some small way or another. As such, 100% of our code is really bad. No, it's okay, we know it, it really is bad.
To echo the GP too: I had a friend in grad school who was trying to do some neuroscience experiments and analyze the data. He wanted my help with some Matlab code and I said, sure, I'll sit down with you for a six-pack. After the 11th nested if-statement, I upped the price to a case.
Like, most of the people I work with do not care at all about the code itself. They care about the result. I know much of HN does care about the code, and I'm not calling you out on it. Your feelings are quite valid! But so are mine and my coworkers'.
LLMs that can, and very much do, code for us? That is the thing I think HN is really missing, understandably so. The power of AI is not going to be in figuring out an existing code base; from what I hear on here, it's bad at that. The power I see in my life is that suddenly, things are possible that we never thought we'd be able to do. And most of those things are under 200 lines of code, probably under 15 lines, really.
I tend to think of AI as a wheelchair or other mobility aid. For a lot of people I know, AI/LLMs let us get moving at all. The dark ages where we just sat there, knowing we weren't smart enough to write the code we wanted to get the results we needed? Those days are over! It's so nice!
I find LLM coding reduces the "activation energy" required to get started. It's like a "knowledge catalyst" for these kinds of tasks. I've leveraged them to make a few scripts (talking 100 lines or less) that added a lot of value to our little data processing team.
Funnily enough, a lot of the time some huge expensive software tool is purchased or even built to order, but all you really need from it can be done with some small scripts. It completely changes the economics of small tasks.
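For example, the kind of small script I mean might just total a numeric column per category from a CSV export, which is exactly the sort of thing teams buy big reporting tools for. A minimal sketch; the file name and column names here are hypothetical:

```python
import csv
from collections import defaultdict

# Hypothetical one-off report: sum an "amount" column per "category"
# from a CSV export, instead of buying a reporting tool to do it.
totals = defaultdict(float)
with open("export.csv", newline="") as f:
    for row in csv.DictReader(f):
        totals[row["category"]] += float(row["amount"])

for category, total in sorted(totals.items()):
    print(f"{category}: {total:.2f}")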
I see a lot of people here replying on the assumption that AI=LLMs, which I don't think will last for very long. LLMs have unlocked a primitive level of AI faster than many people expected, but it is only that. Where AI is going is surely going to be more complex/structured ANN-based architectures, built for the job (i.e. cognitive architectures), not simplistic pass-thru transformers, which were never intended to be more than a seq-2-seq architecture.
I don't see any reason to suppose that we won't achieve human-level/human-like AGI, and do so fairly soon. Transformers may or may not be part of it, but I think we've now seen enough of what different ANN architectures can do, and of the "unexpected" power of prediction, and we have sufficient compute, that the joke of AI/AGI always being 50(?) years away no longer applies.
I think achieving real human-level AGI is now within grasp as more of an engineering challenge (and not such a big one!) than an open-ended research problem. Of course (was it Chollet who said this?) LLMs have sucked all the oxygen/funding out of the room, so it may be a while until we see a radical "cognitive architecture" direction shift from any of the big players, although who knows what Sutskever or anyone else operating in stealth mode is working on?!
So, I think the interesting way to interpret the question of "where is this all going" is to assume that we do achieve this, and then ask what does that look like?
One consequence would seem to be that the vast majority of white collar jobs (including lawyers, accountants, managers - not just tech jobs) will be done by computers, at least in countries where salaries are/were high enough to justify this, probably leading to the need for some type of universal basic income and a big reduction in income for this segment of society. One could dream of an idyllic future where we're all working far less and pursuing hobbies, but it seems more likely that we're headed for a dystopian future where the masses live poorly and at the grace of the wealthy elite who profit from the AI labor, and who only vote for UBI to the extent of preventing the mass riots that would threaten their own existence.
As white collar and intellectual jobs disappear, and likely become devalued as the realm of computers rather than as what makes humans special, it seems that manual and human-touch jobs/skills may become more valued (until those fall to AI too), regarded as the new "what makes us special".
Over time even emotions and empathy will likely fall to AI, since these are easy to understand at a mechanical level in terms of how they operate in the brain, although it would take massive advances in robotics for machines to deliver the warm, soft touch of a human.