It depends on your plan, but Google[1] and Anthropic[2] at least provide indemnity against this. Haven't checked the others. Still not a situation you want to find yourself in, though.
[1] https://cloud.google.com/blog/products/ai-machine-learning/p...
[2] https://www.anthropic.com/news/expanded-legal-protections-ap...
I don't know about worse, but I think the situations are very similar. It's inaccurate to think the Luddites just hated technological advancement for the sake of it. They were happy to use machines; why wouldn't they be, if they had a back-breaking and monotonous job and the machine made it easier?
The issue is not the technology per se, it's how it's applied. If it eliminates vast swathes of jobs and drives wages down for those left, then people start to have a problem with it. That was true in the time of the Luddites and it's true today with AI.
That does the Luddites a bit of a disservice:
> But the Luddites themselves “were totally fine with machines,” says Kevin Binfield, editor of the 2004 collection Writings of the Luddites. They confined their attacks to manufacturers who used machines in what they called “a fraudulent and deceitful manner” to get around standard labor practices. “They just wanted machines that made high-quality goods,” says Binfield, “and they wanted these machines to be run by workers who had gone through an apprenticeship and got paid decent wages. Those were their only concerns.”[1]
[1] https://www.smithsonianmag.com/history/what-the-luddites-rea...
I agree that trying to produce this sort of spec for the entire project is probably a fool's errand, but I still see the value for critical components of the system: formally verifying the correctness of balance calculation from a ledger, or that database writes are always persisted to the write-ahead log, for example.
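To make the ledger-balance example concrete, here's a minimal sketch of the kind of invariant such a spec would pin down. This is randomized checking rather than true formal verification, and every name in it (balance, apply_entry, check_invariant) is hypothetical, not from any real system:

```python
import random

def balance(entries):
    """Spec: the balance is exactly the sum of all ledger entries."""
    return sum(entries)

def apply_entry(ledger, cached_balance, amount):
    """Implementation under test: append an entry and update a cached balance."""
    ledger.append(amount)
    return cached_balance + amount

def check_invariant(trials=1000):
    """Check that the cached balance always equals a full replay of the ledger."""
    for _ in range(trials):
        ledger, cached = [], 0
        for _ in range(random.randint(0, 50)):
            cached = apply_entry(ledger, cached, random.randint(-100, 100))
        # Invariant: incremental bookkeeping must agree with the spec.
        assert cached == balance(ledger)
    return True
```

A tool like TLA+ or a proof assistant would establish this for all possible ledgers rather than a random sample, but even a property check like this captures the spec precisely enough to catch regressions.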