I would summarize the central claim of the paper as follows: the widespread use of AI to mediate human interaction will rob people of agency, understanding, and skill development, and will destroy the social links necessary to maintain and improve institutions, while at the same time allowing powerful, unaccountable actors (an AI cabal) to insert themselves into those relations and impose their own institutional goals. By "institution" the authors mean a shared set of beneficial social rules, not merely an organization tasked with promoting them, e.g. "justice" vs. the "US justice system".
The authors then break down the mechanisms by which AI achieves these outcomes (mechanisms that seem quite reductive and dated compared to the frontier; for example, they take it as granted that AI cannot be creative, that it can only work prospectively and can't react to new situations and events, etc.), and illustrate those mechanisms already at work in a few areas like journalism and academia.
AI is by its nature an entropy machine.
And I think that's about right. Despite the marketing, I think AI (especially if the hyped capabilities arrive) will be one of the most destructive technologies ever invented. It only looks good to blinkered and deluded technocrats.
We should be more worried about what AI will do to the ability of the average human to think.
Not that I think there is a lot of thinking going on now anyway, thanks to our beloved smartphones.
But just think about a time when human ability to reason has atrophied globally. AI might even give us true Idiocracy!
You think smartphones are the cause of atrophy ?
No sir, there was nothing there to begin with. If you read recent history, you'll see that it's full of stupidity, and of a few rabble-rousers leading entire nations by the nose.
With the mollification of the smartphone, we've merely taken the edge off this killing machine.
> We should be more worried about what AI will do to the ability of the average human to think.
I had a wake-up call on this yesterday. After a recent HN thread about the Zed editor, I decided to give it another try, so I loaded it up, disabled AI, and tried writing some code from scratch. No AI completion, no IntelliSense. Two things came to mind. First, my editor seems so much more peaceful when it isn't telling me what to do. Second, it was a bit scary how lost I felt. It was obvious that my own ability to communicate through code had declined since I began using AI coding assistants. It turns out that, as expected, coding assistants really are competitive cognitive artifacts. After that experience, I've decided to do at least part of my coding with all completions turned off. Unfortunately, at work you are paid to produce quickly, so I think my AI-free editor will have to be reserved for personal projects.
Further to your statement about thought: the hallucinations persist. Just last night I got a response about '80s pop culture that was over 50% bullshit. Imagine what intentional persuasion through LLMs will do to society. Independent thought has never been more important.
Similar rhetoric has always accompanied new technologies: calculators, radio, cameras, phones, computers, smartphones, social networks...
Regulation does more harm than letting people learn from their mistakes.
Yes, we have had this before, and every time it was correct.
>That we became idiots with new tools?
Yes, a little bit each time. But AI will finish the job.
Because earlier it was writing skills or attention span that were at stake.
This time it is literally the ability to think.
Pray tell, what makes you impervious to the atrophy and mental decline caused by these inventions? Do you just not use "calculators, radio[s], cameras, phones, computers, smartphones, [or] social networks"? And so, you have avoided the trap of technologies through defiance?
I mean we were seeing this even before AI. It's the same type of person. To slop is human.
It's like we somehow assumed that some good percentage of us aren't just tribal worker drones who fundamentally want fats, sugars, salts, dopamine, and serotonin. People actively vote against things like UBI, higher corporate taxes, and making utilities public. People actively choose to believe misinformation because it suits their own personal tribal narratives.
This is the way "AI" will deliver on the promise to become more intelligent than humans. Or at least than humans who believe in it.
Just from reading the abstract, it feels like the authors didn't even attempt to be objective. It's hard to take what they're saying seriously when the language is so loaded and full of judgments: the kind of language you'd expect in an op-ed, not a research paper.
I think you may be confused. This is not a research paper, it's an op-ed in a law journal.
SSRN is where most draft law review/journal articles are published, which may be the source of confusion.
For most other fields, it is a source of draft/published science papers, but for law, it's pretty much any kind of article that is going to show up in a law review/journal.
Ah okay, thanks for explaining it! Just based on the name, journal, and metadata, it seemed like a research paper, and I was honestly a bit surprised. But I obviously don't publish law research :))
From what you're saying, it seems this is clear to an insider. I guess that makes more sense, then.
It is literally called “Boston Univ. School of Law Research Paper No. 5870623”
It's also a submission to the UC Hastings law journal, as it also says right before that?
The automated tagging with a BUSL ID is just how BUSL's system for papers of any sort works.
For reference: I did my first year of law school at BUSL, so I'm very familiar with how it all works there :)
This is also very common elsewhere. Everything IBM used to release got tagged with a technical report number, for example, whether it was a technical report or not.
In any case, it is clearly a piece meant as persuasive writing rather than deep research.
Law journals contain a mix of essentially op-eds and deeper research papers or factual expositions. They are mostly not like scientific journals, though some exist that are basically all op-ed, or no op-ed at all.
Compare something like:
https://repository.uclawsf.edu/cgi/viewcontent.cgi?article=2...
which is a piece in the UC law journal meant as an informative piece cataloguing how California courts adjudicate false-advertising law. It does not really take a position.
with
https://repository.uclawsf.edu/cgi/viewcontent.cgi?article=3...
which is a piece in the UC Hastings law journal meant as, essentially, an op-ed arguing that dog sniff tests are bullshit.
I picked both of these at random from pieces in the UC Hastings law journal that had been cited by the Supreme Court of California. There are things that are even more factual and take zero positions, and things that are even more persuasive and less researchy, than either of these, but they are reasonable representatives, I think.
It's an essay. Being opinionated is a feature.