A new bill in New York would require disclaimers on AI-generated news content

2026-02-06 9:56 · www.niemanlab.org

A new bill in the New York state legislature would require news organizations to label AI-generated material and mandate that humans review any such content before publication. On Monday, Senator Patricia Fahy (D-Albany) and Assemblymember Nily Rozic (D-NYC) introduced the bill, called The New York Fundamental Artificial Intelligence Requirements in News Act — The NY FAIR News Act for short.

“At the center of the news industry, New York has a strong interest in preserving journalism and protecting the workers who produce it,” said Rozic in a statement announcing the bill.

A closer look at the bill shows a few regulations, mostly centered around AI transparency, both for the public and in the newsroom. For one, the law would demand that news organizations put disclaimers on any published content that is “substantially composed, authored, or created through the use of generative artificial intelligence.”

AI disclaimers for readers have been hotly debated in the news industry, with some critics arguing that such labels alienate audiences, even when generative AI is only used as an assistive tool. The bill contains a carve-out that would allow copyrightable material to be excluded from the law. (The U.S. Copyright Office has ruled that works solely generated by AI systems are not eligible for copyright, but allows leeway for works that show signs of “human authorship.”)

The bill also requires that news organizations disclose to journalists and other media professionals in their newsrooms when AI is being used and how. Any news content created using generative AI must also be reviewed by a human employee “with editorial control” before publication. That goes not just for news articles but also for audio, images, and other visuals.

In addition, the bill contains language that requires news organizations to create safeguards that protect confidential material — mainly, information about sources — from being accessed by AI technologies.

State lawmakers highlighted two main reasons for proposing the NY FAIR News Act. First, they say, AI-generated content may be “false or misleading.” Second, they argue, AI-generated content “plagiarizes” by deriving content from original sources “without permission or proper citation.”

“Perhaps one of the industries at most risk from the use of artificial intelligence is journalism and as a result, the public’s trust and confidence in accurate news reporting,” said Sen. Fahy in a statement. “More than 76% of Americans are concerned about AI stealing or reproducing journalism and local news stories.”

The proposed bill was announced with broad endorsements from unions across the news industry, including WGA-East, SAG-AFTRA and the DGA.

Jennifer Sheehan, a spokesperson for the NewsGuild of New York, confirmed that the NewsGuild has been meeting with this labor coalition to discuss shared concerns around AI adoption and working to get the bill off the ground.

Notably, the bill would cement some labor protections for newsroom workers — including restrictions on firing journalists or reducing their work, pay, or benefits due to generative AI adoption. Similar language has been negotiated into individual newsroom union contracts across the country over the past couple of years.

In December, the NewsGuild launched a nationwide campaign called “News Not Slop” to advocate for more guardrails on AI usage in newsrooms. In New York City, the Business Insider union held a rally in the Financial District to protest an editorial pilot that was publishing AI-generated news stories with an “AI byline.”

“Our union is deeply concerned about media companies implementing artificial intelligence in ways that damage the credibility of our members’ journalism,” Sheehan said, “as well as the impact such technology has had and will have on jobs.”


Comments

  • By padolsey 2026-02-06 11:08 | 12 replies

    I'm surprised to see so little coverage of AI legislation news here tbh. Maybe there's an apathy and exhaustion to it. But if you're developing AI stuff, you need to keep on top of this. This is a pretty pivotal moment. NY has been busy with RAISE (frontier AI safety protocols, audits, incident reporting), S8420A (must disclose AI-generated performers in ads), GBL Article 47 (crisis detection & disclaimers for AI chatbots), S7676B (protects performers from unauthorized AI likenesses), NYC LL144 (bias audits for AI hiring tools), SAFE for Kids Act [pending] (restricts algorithmic feeds for minors). At least three of those are relevant even if your app only _serves_ people in NY. It doesn't matter where you're based. That's just one US state's laws on AI.

    It's kinda funny, the oft-held animosity towards the EU's heavy-handed regulations, when navigating US state law is a complete minefield of its own.

    • By raincole 2026-02-06 11:25 | 11 replies

      > I'm surprised to see so little coverage of AI legislation news here tbh.

      Because no one believes these laws or bills or acts or whatever will be enforced.

      But I actually believe they will be enforced, in the worst way possible: honest players will be punished disproportionately.

      • By padolsey 2026-02-06 11:53 | 4 replies

        > Because no one believes these laws or bills or acts or whatever will be enforced.

        Time will tell. Texas sat on its biometric data act quite quietly, then hammered Meta with a $1.4B settlement some 20 years after the bill's enactment. Once these laws are enacted, they lie quietly until someone has a big enough bone to pick with someone else. There are already many traumatic events occurring downstream from slapdash AI development.

        • By Ajedi32 2026-02-06 14:24 | 2 replies

          That's even worse, because then it's not really a law, it's a license for political persecution of anyone disfavored by whoever happens to be in power.

          • By dylan604 2026-02-06 15:58

            Never mind the damage that was willfully allowed to happen in the meantime, the very damage the bill was supposed to prevent.

          • By ls612 2026-02-07 18:41 | 1 reply

            Every law is like this. Only fools and schoolchildren believe that the rule of law means anything other than selective punishment of those who displease the ruling class.

            • By AngryData 2026-02-09 2:48 | 1 reply

              I agree that is how it currently is in the US, but I don't believe it is universally true or that nothing can be done to change it if enough people resisted.

              • By ls612 2026-02-09 4:59 | 1 reply

                My statement has nothing to do with contemporary politics and is not unique in the slightest to the US. For an example you are likely sympathetic to, consider the experience of Pavel Durov since late 2024.

                • By Ajedi32 2026-02-09 14:45 | 1 reply

                  "Every law" seems like a huge exaggeration. Assuming for a moment we agree Pavel is a victim of selective prosecution, notice they're not charging him with a clear, straightforward crime like murder, they're charging him with things like[1] failing to prevent illicit activity on Telegram, and "provision of cryptology services [...] without a declaration of conformity". Those laws seem far more prone to abuse as a tool for selective prosecution than most others. (Some of the things he's charged with don't even sound to me like they should be illegal in the first place.)

                  [1]: https://x.com/fs0c131y/status/1828103712964817389

                  • By ls612 2026-02-09 16:14

                    Every law, in the sense that cumulatively, the ‘rule of law’ system has the same property of “show me the man and I’ll show you the crime” that Beria’s system did.

        • By vulcan01 2026-02-06 14:19 | 1 reply

          Meta made $60B in Q4 2025. A one-time $1.4B fine, 20 years after enactment, is not "getting hammered".

          • By Retric 2026-02-06 17:11 | 3 replies

            They didn’t make $60B in Q4 2025 in Texas. $1.4B was essentially 100% of their profit from Texas for years; that’s a big fine.

            • By vulcan01 2026-02-07 14:17

              I see this as roughly equivalent to amortized big O complexity. If I push to a vector repeatedly, sometimes I will incur a significant cost O(n) of reallocation, but most of the time it's still O(1).

              Similarly, if Meta violates the law, and is infrequently fined a small fraction of their revenue by a small number of governments, in general it will not be a big deal for them.
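              The amortized-cost analogy above can be made concrete. The following toy Python model (illustrative only, not from the thread; the function name is made up) counts element copies for a doubling dynamic array and shows that the per-push average stays bounded by a constant even though individual pushes are occasionally O(n):

```python
# Toy model of the doubling dynamic array behind the analogy.
# Each push is O(1) except when capacity is exhausted, which
# triggers an O(n) copy -- yet total work stays linear in n.

def total_copy_work(n_pushes: int) -> int:
    """Count element copies performed across n_pushes appends."""
    capacity, size, copies = 1, 0, 0
    for _ in range(n_pushes):
        if size == capacity:
            copies += size      # reallocate: copy every element over
            capacity *= 2
        size += 1
    return copies

n = 1_000_000
print(total_copy_work(n) / n)   # amortized copies per push: below 2
```

              The total copy work is a geometric series (1 + 2 + 4 + ...), which is less than 2n, so the average cost per push is O(1) despite the occasional expensive reallocation.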

            • By saalweachter 2026-02-06 18:22

              You also have to ask "how much is the specific thing in the lawsuit worth to Meta?"

              I don't know how much automatically opting everyone in to automatic photo tagging made Meta, but I assume it's "less than 100% of their revenue".

              Barring the point of contention being integral to the business's revenue model, or management of the company being infected with oppositional defiant disorder, a lawsuit is just an opportunity for some middle manager + team to get praised for making a revenue-negative change that reduces the risk of future fines.

              Work like that is a gold mine; several people will probably get promoted for it.

            • By ninalanyon 2026-02-06 18:00 | 1 reply

              Big for Texas, not for Meta.

              • By Retric 2026-02-06 20:35 | 1 reply

                It’s under 5 hours of GDP for Texas. It’s a big fine, but not a huge deal for either party.

                • By abustamam 2026-02-07 1:27 | 2 replies

                  So what's the point? If neither party is really affected by a penalty (no discernible benefit or loss to either), then is it all just performative?

                  Maybe I just answered my own question.

                  • By Retric 2026-02-07 2:00 | 1 reply

                    Things don’t need to be huge deals to influence behavior or be a net gain.

                    I bet you’ve taken a shortcut to save less than 1h for example.

                    • By abustamam 2026-02-07 6:46

                      I think time is different because it's finite. I admit I'll still opt for store brand to save a few bucks even making an engineering salary. But I'll also do something "illegal" (like parking at a metered spot without paying) to save time or otherwise do what I want and just deal with whatever financial cost incurred if I know it won't break me.

                      A saying I've heard is that if the punishment for a crime is financial, then it is only a deterrent for those who lack the means to pay. Small business gets caught doing bad stuff, a $30k fine could mean shutting down. Meta gets caught doing bad stuff, a billion dollar fine is almost a rounding error in their operational expenses.

        • By jandrese 2026-02-06 16:12

          That sounds like it will be in the courts for ages before Facebook wins on selective prosecution.

        • By OGEnthusiast 2026-02-06 15:07

          > Texas' sat on its biometric data act quite quietly then hammered meta with a $1.4B settlement 20 years after the bill's enactment.

          Sounds like ignoring it worked fine for them then.

      • By SAI_Peregrinus 2026-02-06 15:46 | 5 replies

        Or it'll end up like California cancer warnings: every news site will put the warning on, just in case, making it worthless.

        • By Wistar 2026-02-06 16:45 | 2 replies

          … or the sesame seed labeling law that resulted in sesame seeds being added to everything.

          https://apnews.com/article/sesame-allergies-label-b28f8eb3dc...

          • By sebastiennight 2026-02-06 20:18

            Wow, it's always amazing to me how the law of unintended consequences (with capitalistic incentives acting as the Monkey's Paw) strikes every time some well-intended new law gets passed.

          • By nemo 2026-02-06 19:49 | 1 reply

            As someone who is allergic to sesame, that is insanely annoying.

            • By account42 2026-02-09 10:18

              I don't like the opposite any more though, i.e. commercial food being effectively limited to the lowest common denominator of allergens and other dietary as well as religious restrictions. I see that happen a lot more than this one example and it doesn't even need any laws to cause it.

        • By clickety_clack 2026-02-06 18:40 | 2 replies

          There just can’t be a way to discriminate on the spectrum from “we use AI to tidy up the spelling and grammar” to “we just asked ChatGPT to write a story on x”, so the disclaimer will make it look like everyone just asked ChatGPT.

          • By slg 2026-02-06 20:48 | 1 reply

            >There just can’t be a way to discriminate on the spectrum from “we use AI to tidy up the spelling and grammar” to “we just asked ChatGPT to write a story on x”

            Why though? Did the AI play the role of an editor or did it play the role of a reporter seems like a clear distinction to me and likely anyone else familiar enough with how journalism works.

            • By clickety_clack 2026-02-07 2:27 | 1 reply

              People know what it _should_ mean, but if you say that it’s fine to have an AI editor, then there will be a bunch of people saying something like “my reporting is that x is a story, and my editor, ChatGPT, just tidied that idea up into a full story”. There’s all sorts of hoops people can jump through like that. So you end up putting a banner on all AI, or only penalizing the honest people who follow the distinction that’s supposed to exist.

              • By slg 2026-02-07 7:29

                Fair enough, but my main response to that is that people need to support independent journalism. It's entirely possible I'm paying some fraud(s), but as someone who certainly spends more than the average person on online journalism, I trust the people I support at the very least know that putting their byline on an AI written article would be a career destroying scandal in the eyes of their current audience.

          • By account42 2026-02-09 10:21

            I'm fine with that. I want neither AI-hallucinated stories nor AI-expanded fluff. If it's not worth it for a real human editor it's probably not worth reading.

        • By sodapopcan 2026-02-06 19:54 | 1 reply

          I just came across this for the first time. I ordered a precision screwdriver kit and it came with a cancer warning on it. I was really taken aback and then learned about this.

          • By mrandish 2026-02-06 21:18 | 1 reply

            Some legislation that sounds good in concept and is well-intended ends up having little to no positive impact in practice. But it still leaves businesses with ongoing compliance costs/risks, taxpayers footing the bill for an enforcement bureaucracy forever, and consumers with either annoying warning interruptions or yet more 'warning message noise'.

            It's odd that legislators seem largely incapable of learning from the rich history of past legislative mistakes. Regulation needs to be narrowly targeted and clearly defined, and someone smart needs to actually think through how the real world will implement compliance, as well as identify likely unintended consequences and perverse incentives. Another net improvement would be for any new regs to carry an automatic sunset provision, requiring renewal a few years later under a process that makes it easy to revise or relax certain provisions.

            • By thesmtsolver2 2026-02-07 3:14

              It makes sense once you understand law makers generally care about their careers more than the state/country/citizens.

              Most of it is performative law making.

        • By _blk 2026-02-06 19:25 | 2 replies

          Yup. Or like "necessary cookies" that aren't all that necessary when it works just fine without.

          • By charcircuit 2026-02-06 22:52 | 1 reply

            Just because you don't notice that it's not working properly doesn't mean you haven't broken anything.

            • By account42 2026-02-09 10:25

              If you don't notice then it was probably not something you considered essential. Breaking the tracking of you and your personal information is kind of the point.

          • By subscribed 2026-02-06 21:27

            Well, they're necessary if you're spying on your visitors.

        • By vablings 2026-02-06 19:00 | 2 replies

          Known by the state of cancer to cause California. Jokes aside, I do think P65 warnings are pretty useful for the most part.

          • By 8cvor6j844qw_d6 2026-02-06 19:30 | 2 replies

            Essentially useless if everyone slaps on that label. Kinda like hospital alarm fatigue.

            But this is just my uninformed opinion; perhaps those who work in the health industry think differently.

            • By datsci_est_2015 2026-02-06 20:00 | 1 reply

              Maybe it’s not a fair comparison, but I think it’s been shown that tobacco warnings are effective even though they’re so common as to be “fatigued”.

              • By DrinkingRedStar 2026-02-06 20:14 | 1 reply

                I do believe this is an unfair comparison. With tobacco the warnings are always true, but with Prop 65 the product might not contain any cancer-causing ingredients; the warning is there just in case.

                It's much easier to tell yourself Prop 65 doesn't have to be avoided because "it's probably just there to cover their asses," while tobacco products have real warnings that definitely mean danger (though there are people who convince themselves otherwise).

                • By SAI_Peregrinus 2026-02-09 16:44

                  Also even if there's a prop 65 warning because there are cancer-causing ingredients, those ingredients may not be user-accessible or may be in tiny enough quantities that they'd statistically never result in cancer even with lifetime use by every human on the planet. E.g. lead in a circuit board inside an IP-68 rated sealed device would require a prop 65 warning even though it won't pose any cancer risk to the user unless they grind up the device & ingest or inhale the lead.

            • By account42 2026-02-09 10:24

              But that is because the requirement is binary - warning vs. no warning. This problem doesn't happen if the requirement is to disclose what was used although it could still lead to other issues.

          • By bigstrat2003 2026-02-07 5:08

            I don't know of anyone (seriously not one person) who actually believes those labels. And the reason why is precisely because the government was foolish enough to put them on everything under the sun. Now nobody listens to them because the seriousness got diluted.

      • By Galanwe 2026-02-06 12:19 | 1 reply

        How about a pop-up on websites, next to the tracking-cookie ones, to consent to reading AI-generated text?

        I see a bright future for the internet

      • By cheschire 2026-02-06 11:40 | 1 reply

        Yeah, it’s like that episode of Schoolhouse Rock about how a bill becomes a law, except now it takes place in Squid Game.

        • By razingeden 2026-02-06 22:23

          > But I wonder who that sad little scrap of 8,523 pieces of paper is?

      • By mmooss 2026-02-06 18:41

        The primary obstacle is discussions like this one. It will be enforced if people insist it's enforced; the power comes from the voters. If a large portion of the population, especially the informed population, represented to some extent here on HN, thinks it's hopeless, then it will be. If they believe they will get together to make it succeed, it will. It's that simple: whatever people believe is the number one determinant of the outcome. Why do you think so many invest so much in manipulating public opinion?

        Many people here love SV hackers who have done the impossible, like Musk. Could you imagine this conversation at an early SpaceX planning meeting? That was a much harder task, requiring inventing new technology and enormous sums of money.

        Lots of regulations are enforced and effective. Your food, drugs, highways, airplane flights, etc. are all pretty safe. Voters compelling their representatives is commonplace.

        It's right out of psyops to get people to despair - look at messages used by militaries targeted at opposing troops. If those opposing this bill created propaganda, it would look like the comments in this thread.

      • By crimsonsupe 2026-02-06 11:56 | 8 replies

        > Because no one believes these laws or bills or acts or whatever will be enforced.

        That’s because they can’t be.

        People assume they’ve already figured out how AI behaves and that they can just mandate specific "proper" ways to use it.

        The reality is that AI companies and users are going to keep refining these tools until they're indistinguishable from human work whenever they want them to be.

        Even if the models still make mistakes, the idea that you can just ban AI from certain settings is a fantasy because there’s no technical way to actually guarantee enforcement.

        You’re essentially passing laws that only apply to people who volunteer to follow them, because once someone decides to hide their AI use, you won't be able to prove it anyway.

        • By chrisjj 2026-02-06 13:39 | 2 replies

          > the idea that you can just ban AI from certain settings is a fantasy because there’s no technical way to actually guarantee enforcement.

          By that token bans on illegal drugs are fantasy. Whereas in fact, enforcement doesn't need to be guaranteed to be effective.

          There may be little technical means to distinguish at the moment. But could that have something to do with lack of motivation? Let's see how many "AI" $$$ suddenly become available to this once this law provides the incentive.

          • By amanaplanacanal 2026-02-06 14:12

            > By that token bans on illegal drugs are fantasy.

            I think you have this exactly right. They are mostly enforced against the poor and political enemies.

          • By raw_anon_1111 2026-02-06 19:48 | 1 reply

            Well considering how ineffective the War on Drugs has been - is that really a great analogy?

            • By chrisjj 2026-02-06 20:31 | 1 reply

              > considering how ineffective the War on Drugs has been

              Relative to no war on drugs? Who knows.

              • By raw_anon_1111 2026-02-06 20:53 | 1 reply

                Has there ever been a single person who wants an illegal drug that couldn’t get one because it was illegal?

                Just a quick Google search estimates that less than 3% of drugs are intercepted by the government.

                • By subscribed 2026-02-06 21:17 | 1 reply

                  Me. There are four I want. All very safe.

                  I always wanted to try two specific ones, but the first cannot be had in the safest form because of a specific precursor ban, and all of them suffer from an insane (to me) risk of adulteration.

                  In twenty minutes I could probably find 10 "reputable" shops/markets, but still with 0 guarantee I won't get the specific thing laced with something for strength.

                  Even if I wanted pot (I don't; I found it repetitive and extremely boring, except for one experience), I would have to grow it myself (stench!), but then... where would I find sane seeds (healthy ratio of CBD to THC)?

                  Similarly, I wouldn't buy the moonshine from someone risking prosecution to make and sell it. It's guaranteed this risk is offset.

                  So ... I can't get what I want because there's extremely high chance of getting hurt. An example being poisoning with pills sold as mdma - every music festival, multiple people hurt. Not by Molly, by additives.

                  • By raw_anon_1111 2026-02-06 21:30 | 1 reply

                    I’m absolutely positive that someone in your 1st degree or 2nd degree social circle can get you weed if you wanted it.

                    • By subscribed 2026-02-08 1:06

                      I don't want a random weed. I can easily get it myself on the street (there are several places with the distinct smell), and I know at least 3-4 people who I know smoke.

                      But I want safe (not pcp/fentanyl sprinkled) and sane (not engineered for a 'kick').

                      I don't know anyone who's a cultivator themselves :)

        • By rconti 2026-02-06 17:14

          Sure they can be enforced. Your comment seems to be based on the idea of detecting AI writing from the output. But you can enforce this law based on the way content is created. The same way you can enforce food safety laws from conditions of the kitchen, not the taste of the food. Child labor laws can be enforced. And so on.

          Unless you're trying to tell me that writers won't report on their business that's trying to replace them with AI.

        • By 6LLvveMx2koXfwn 2026-02-06 12:45

          > You’re essentially passing laws that only apply to people who volunteer to follow them . .

          Like every law passed forever (not quite but you get the picture!) [1]

          1. https://en.wikipedia.org/wiki/Consent_of_the_governed

        • By Forgeties79 2026-02-06 13:58

          The idea that you can just ban drinking and driving is a fantasy because there’s no technical way to actually guarantee enforcement.

          I know that sounds ridiculous but it kind of illustrates the problem with your logic. We don’t just write laws that are guaranteed to have 100% compliance and/or 100% successful enforcement. If that were the case, we’d have way fewer laws and little need for courts/a broader judicial system.

          The goal is getting most AI companies to comply and making sure that most of those that don’t follow the law face sufficient punishment to discourage them (and others). Additionally, you use that opportunity to undo what damage you can, be it restitution or otherwise for those negatively impacted.

        • By delaminator 2026-02-06 12:40

          C2PA-enabled cameras (Sony Alpha range, Leica, and the Google Pixel 10) sign the digital images they record.

          So legislators, should they so choose, could demand source material be recorded on C2PA enabled cameras and produce the original recordings on demand.
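          As a rough illustration of the tamper-evidence idea behind such capture-time signing: real C2PA embeds a signed manifest backed by X.509 certificates, whereas this toy Python sketch uses a shared-key HMAC and made-up names (`CAMERA_KEY`, `sign_capture`) purely to show why any later edit breaks verification.

```python
# Toy provenance sketch (NOT the actual C2PA format): the camera
# binds a signature to the image bytes at capture time, so any
# later modification of the bytes fails verification.
import hashlib
import hmac

CAMERA_KEY = b"secret-key-burned-into-camera"   # hypothetical key

def sign_capture(image_bytes: bytes) -> str:
    """Produce a hex signature over the raw capture."""
    return hmac.new(CAMERA_KEY, image_bytes, hashlib.sha256).hexdigest()

def verify(image_bytes: bytes, signature: str) -> bool:
    """Constant-time check that the bytes match the signature."""
    return hmac.compare_digest(sign_capture(image_bytes), signature)

photo = b"\x89PNG...raw sensor data..."
sig = sign_capture(photo)
print(verify(photo, sig))                 # True: untouched original
print(verify(photo + b"edited", sig))     # False: provenance broken
```

          The real scheme uses public-key signatures precisely so verifiers don't need the camera's secret; the HMAC here just keeps the sketch dependency-free.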

        • By wwfn 2026-02-06 12:35 | 1 reply

          > passing laws that only apply to people who volunteer to follow them

            That's a concerning lens through which to view regulations. It's obviously true, but it's true for all laws. Regulations don't apply only to immediately observable offenses.

          There are lots of bad actors and instances where the law is ignored because getting caught isn't likely. Those are conspiracies! They get harder to maintain with more people involved and the reason for whistle-blower protections.

          VW's Dieselgate[1] comes to mind albeit via measurable discrepancy. Maybe Enron or WorldCom (via Cynthia Cooper) [2] is a better example.

          [1]: https://en.wikipedia.org/wiki/Volkswagen_emissions_scandal [2]: https://en.wikipedia.org/wiki/MCI_Inc.#Accounting_scandals

          • By hsuduebc2 2026-02-07 0:37 | 1 reply

            But most regulations are, and can be, enforced because the perpetrator can simply be caught. That’s the difference. This is not enforceable in any meaningful way. The only way it could change anything would be through whistleblowers, for example someone inside a major outlet like the New York Times reporting to authorities that AI was being used. On the contrary, if you systematically create laws that are, by their nature, impossible to enforce, you weaken trust in the law itself by turning it into something that exists more on paper than in reality.

            • By wwfn 2026-02-07 13:22

              * I suspect many existing and reasonable regulations do not meet that "simply caught" classification. @rconti's comment above[1] gives some examples of regulations on process that are not observed in the output (food, child labor). I'll add accounting, information control (HIPAA, CUI, etc.), and environmental protections.

              * Newsroom staff are incentivized to enforce the regulation. It protects their livelihood. From the article:

              > Notably, the bill would cement some labor protections for newsroom workers

              * Mandatory AI labeling is not impossible to enforce. At worst, it requires random audits (who was paid to write this story, and do they attest to doing so). At best, it encourages preemptive provenance tracking (which could even be accessible to the news consumer! I'd like that).

              One reason for the regulation is that we fear hallucinations slipping into the public record, even if most LLM usage is useful/harmless. Legal restrictions ideally prevent this, but also give a mechanism for recourse when it does happen.

              Say a news story goes off the rails and reports a police officer turned into a frog [2] or makes up some law[3]. Someone thinks that's odd and alerts whatever authority. The publisher can be investigated, reprimanded, and ideally motivated to provide better labeling or QC on their LLM usage.

              [1]: https://news.ycombinator.com/item?id=46915463 [2]: https://www.wate.com/news/ai-generated-police-report-says-of... [3]: https://www.reuters.com/legal/litigation/judge-fines-lawyers...

        • By conartist6 2026-02-06 12:27

          Indistinguishable, no. Not these tools.

          Without emotion, without love and hate and fear and struggle, only a pale imitation of the human voice is or will be possible.

        • By songodongo 2026-02-06 11:58 | 2 replies

          And you can easily prompt your way out of the typical LLM style. “Written in the style of Cormac McCarthy’s The Road”

          • By capnrefsmmat 2026-02-06 12:27

            No, that doesn't really work so well. A lot of the LLM style hallmarks are still present when you ask them to write in another style, so a good quantitative linguist can find them: https://hdsr.mitpress.mit.edu/pub/pyo0xs3k/release/2

            That was with GPT-4, but my own work with other LLMs shows they have very distinctive styles even if you specifically prompt them with a chunk of human text to imitate. I think instruction-tuning with tasks like summarization predisposes them to certain grammatical structures, so their output is always more information-dense and formal than humans'.
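            As a toy illustration of the kind of surface features such stylometric analysis might start from (the linked study uses far richer grammatical features; the function name here is made up for the sketch), compare average sentence length and vocabulary diversity between a terse and a dense passage:

```python
# Minimal stylometric sketch: two cheap surface features that
# often differ between terse human prose and dense, formal text.
import re

def style_features(text: str) -> dict:
    """Average sentence length (words) and type-token ratio."""
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s]
    words = re.findall(r"[A-Za-z']+", text.lower())
    return {
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "type_token_ratio": len(set(words)) / max(len(words), 1),
    }

terse = "Cats sleep. Dogs bark. Birds sing."
dense = ("Moreover, the multifaceted implications of this development "
         "underscore a broader transformation across the industry.")
print(style_features(terse))
print(style_features(dense))
```

            Real forensic work would add part-of-speech distributions, function-word frequencies, and per-model slop-phrase lists, but even these two numbers separate the samples above.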

          • By Der_Einzige 2026-02-06 20:15

            This still doesn't remove all the slop. You need sampler or fine-tuning tricks for it. https://arxiv.org/abs/2510.15061

      • By tedggh 2026-02-06 17:28

        Probably worse than that. I can totally see it being weaponized: a media company critical of a particular group or individual being scrutinized and fined. I haven’t looked at any of these laws, but I bet their language gives plenty of room for interpretation and enforcement, perhaps even if you are not generating any content with AI.

      • By AbstractH24 2026-02-07 3:36

        > Because no one believes these laws or bills or acts or whatever will be enforced.

        This

        I still regularly see job postings with no salary here in NYC. I've never heard of any enforcement.

      • By cucumber3732842 2026-02-06 15:29

        >But I actually believe they'll be. In the worst way possible: honest players will be punished disproportionally.

        As with everything else, BigCo with their legal team will explain to the enforcers why their "right up to the line, if not over it" solution is compliant, while MediumCo and SmallCo will be the ones getting fined, or forced to waste money staying far from the line, or paying a third party to do what BigCo's legal team does in-house.

      • By just_once 2026-02-06 11:48 (2 replies)

        What does that look like? Can you describe your worst case scenario?

        • By jandrese 2026-02-06 16:13 (1 reply)

          Highly selective enforcement along partisan lines to suppress dissent. Government officials forcing you to prove that your post is not AI generated if they don't like it. Those same officials claiming that it is AI generated regardless of the facts on the ground to have it removed and you arrested.

          • By idle_zealot 2026-02-06 17:21

            If you assume the use of law will be that capricious in general, then any law at all would be considered too dangerous for fear of use as a partisan tool.

            Why accuse your enemies of using AI-generated content in posts? Just call them domestic terrorists for violently misleading the public via the content of their posts and send the FBI or DHS after them. A new law or lack thereof changes nothing.

        • By amelius 2026-02-06 11:54 (1 reply)

          Worst case? Armed officers entering your home without warrant, taking away your GPU card?

          • By just_once 2026-02-06 12:17

            They can do that anyway. What does that have to do with the content of the proposed law?

      • By sumeno 2026-02-06 13:12 (1 reply)

        Who are the honest players generating AI slop articles?

        • By chrisjj 2026-02-06 13:41

          The ones honestly labelling their articles e.g. "AI can make mistakes". Full marks to Google web search for leading the way!

    • By toofy 2026-02-06 23:25

      > SAFE for Kids Act [pending] (restricts algorithmic feeds for minors).

      i personally would love to see something like this but changed a little:

      for every user (not just minors) require a toggle: an upfront, not buried, always-in-your-face toggle to turn off algorithmic feeds, where you’ll only see posts from people you follow, in the order they posted. again, no dark patterns; once a user toggles to a non-algorithmic feed, it should stick.

      this would do a lot to restore trust. i don’t really use the big social medias much any more, but when i did i can not tell you how many posts i missed because the algorithms are kinda dumb af. i missed friends’ anniversary celebrations, events that were right up my alley, community projects, etc… because the algorithms didn’t think the posts announcing them would be addictive enough for me.

      no need to force it “for the kids” when they can just give everyone the choice.

    • By mbreese 2026-02-06 15:18 (2 replies)

      None of those bills/laws involve legislating publishing, though. This bill would require a disclaimer on something published. That’s a freedom-of-speech issue, so it’s going to be tougher to enforce and to keep from being overturned in the courts. The question here is what limits the government can place on what a company publishes, regardless of how the content is generated.

      IMO, it’s a much tougher problem (legally) than protecting actors from AI infringement on their likeness. AI services are easier to regulate; published AI-generated content, much more difficult.

      The article also mentions efforts by news unions and guilds. This might be a more effective mechanism. If a union or guild required members to add a tagline to their content/articles, it would have a similar effect: showing what is and is not AI content without restricting speech.

      • By tempodox 2026-02-06 15:35 (1 reply)

        > This bill would require a disclaimer on something published. That’s a freedom of speech issue

        They can publish all they want, they just have to label it clearly. I don’t see how that is a free speech issue.

        • By mothballed 2026-02-06 15:48 (1 reply)

          Because compelled speech is an insult to free speech just as censored speech is.

          • By fwip 2026-02-06 19:45 (1 reply)

            How do you feel about the fact that manufacturers need to list the ingredients of the food they sell you?

            • By mothballed 2026-02-06 21:51 (1 reply)

              Not thrilled about it, and I personally would rather see them repealed. I will concede compelled speech impositions have been interpreted more generously when they are commercial. I don't necessarily agree with it, but even if we concede they can happen, I hope that distinction is made for commercial vs non-commercial content. Though I'm not thrilled with it happening for either.

              • By oblio 2026-02-08 23:15

                I will never understand free speech absolutists.

                Ads should be marked, ingredients and components should be listed.

                What possible reason would there be for a viewer/customer to not be allowed to find out such obvious info?

      • By HanShotFirst 2026-02-06 16:09 (1 reply)

        Is AI-generated text speech?

        • By frumplestlatz 2026-02-06 17:03 (1 reply)

          It is when a human publishes it. Which is why they're also liable for it.

          • By _blk 2026-02-06 19:27 (1 reply)

            I agree in general, and that should be the position, but it's probably more nuanced than this in practice: who is the publisher when a dev writes a script that just spits junk into the wild or reinforces someone else's troll-speech?

            • By mbreese 2026-02-06 19:59

              In general, I think LLM content has been found not to be copyrightable, but it would still be speech when it's published. It would be the speech of the company publishing it, not the dev who wrote the script. So ai-junk-news.com is still publishing some kind of speech, even if an LLM wrote it. At least, that would be my interpretation.

    • By tencentshill 2026-02-06 18:21

      I'll bet AI is going to be simply outlawed for hiring, and possibly algorithmic hiring practices altogether. You can't audit a non-deterministic system unless you train the AI from scratch, which is an expense only the wealthiest companies can take on.

    • By Balinares 2026-02-06 13:22 (1 reply)

      Don't ding the amusingly scoped animosity, it's very convenient: we get to say stuff like "Sure, our laws may keep us at the mercy of big corps unlike these other people, BUT..." and have a ready rationalization for why our side is actually still superior when you look at it. Imagine what would happen if the populace figured it's getting collectively shafted in a way others may not.

      • By rubyfan 2026-02-06 13:50

        >Imagine what would happen if the populace figured it's getting collectively shafted in a way others may not.

        They already believe that and it’s used to keep us fighting each other.

    • By totetsu 2026-02-06 13:08

      AI View from Simmons & Simmons is a very good newsletter on the topic of AI regulation: https://www.simmons-simmons.com/en/publications/clptn86e8002...

    • By venkat223 2026-02-06 13:37

      All video and other content should have an AI stamp, as most of YouTube is AI-generated. Almost like memes.

    • By dyauspitr 2026-02-07 13:34

      I believe it’s because it will be impossible to enforce. It might have some teeth with LLMs that add watermarks to their images but otherwise you could have one human in the loop for 10,000 articles and not call it AI.

    • By snickerbockers 2026-02-06 22:06

      I honestly just don't see any point in these laws, because they're all predicated on the people who own the AIs acting in good faith. In a way I actually think they're a net negative, because they seem to give a false impression that these problems have an obvious solution.

      One of the most persistent, and also dumbest, opinions I keep seeing, both among laymen and people who really ought to know better, is that we can solve the deepfake problem by mandating digital watermarks on generated content.

    • By vasco 2026-02-06 11:12 (1 reply)

      ~Everything will use AI at some point. This is like requiring a disclaimer for using JavaScript back when it was introduced. It's unfortunate, but I think it's ultimately a losing battle.

      Plus, if you want to mandate it, hidden markers (steganography) emitted directly by the model, so people can verify which model generated the text and independently check whether an article was written by a human, are probably the only feasible way. But it's not like humans are impartial when writing news anyway, so I don't even see the point of that.
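The "hidden markers" idea has a concrete statistical form: published "green list" watermarking schemes (Kirchenbauer et al.) bias the sampler toward a pseudorandom subset of the vocabulary keyed on the previous token, and a detector later tests for that bias. A toy, stdlib-only sketch of the detection side; the hashing scheme and word-level "vocabulary" here are simplifications for illustration, not any production design:

```python
import hashlib
import math
import random

def green_set(prev_token: str, vocab: list, fraction: float = 0.5) -> set:
    """Seed an RNG from the previous token and mark a fixed fraction of
    the vocabulary 'green' (the subset a watermarking sampler favors)."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(vocab, int(len(vocab) * fraction)))

def z_score(tokens: list, vocab: list, fraction: float = 0.5) -> float:
    """Count how many tokens fall in the green set keyed on their
    predecessor, as a z-score against the null hypothesis that unmarked
    text hits the green set with probability `fraction`."""
    hits = sum(t in green_set(p, vocab, fraction)
               for p, t in zip(tokens, tokens[1:]))
    n = len(tokens) - 1
    expected = n * fraction
    sd = math.sqrt(n * fraction * (1 - fraction))
    return (hits - expected) / sd
```

Text generated by always sampling from the predecessor's green set scores z on the order of the square root of its length, while ordinary text hovers near zero; the caveat in the comment stands, since only the model operator can embed the bias and paraphrasing can strip it.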

      • By layer8 2026-02-06 11:28 (4 replies)

        It would make sense to have a more general law about accountability for the contents of news. If news is significantly misleading or plagiarized, it shouldn’t matter whether that is due to the use of AI; the human editorship should be liable either way.

        This is a concept in at least some EU countries: there always has to be one person responsible, in terms of press law, for what is published.

        • By terminalshort 2026-02-06 16:54 (1 reply)

          That's government censorship, and it's not allowed here, unlike in the EU. As for plagiarism, every single major news outlet is guilty of it in basically every article. Have you ever seen the NYT cite a source?

          • By layer8 2026-02-06 23:01

            You’re still allowed to say virtually anything you want if you make it clear that it’s an opinion and not news reporting.

            Not citing sources doesn’t imply plagiarism, as long as you don’t misrepresent someone else’s research as your own (such as in an academic paper). Giving an account of news that you heard elsewhere in your own words isn’t plagiarism. The hurdles for plagiarism are generally relatively high.

        • By mothballed 2026-02-06 12:49

          If a news person in the USA publishes something that's actually criminal, the corporate veil can be pierced. If an editor printed CSAM they would be in prison lickety-split. Unless they have close connections to the executive.

          Most regulations around disclaimers in the USA are just civil and the corporate veil won't be pierced.

        • By vasco 2026-02-06 12:42

          I agree with that the most. That's why I added the bit about humans. In the end, if what you're writing is not sourced properly or is too biased, it shouldn't matter whether AI is involved. The truth is what matters most with news.

        • By RobotToaster 2026-02-06 12:43 (1 reply)

          That would bankrupt every news organisation in the USA.

          • By _blk 2026-02-06 19:28

            Seems like a good idea then

    • By jMyles 2026-02-06 11:16

      > I'm surprised to see so little coverage of AI legislation news here tbh.

      I think the reason is that most people don't believe, at least on sufficiently long time scales, that legacy states are likely to be able to shape AI (or, for that matter, the internet). The legitimacy of the US state appears to be in a sort of free-fall, for example.

      It takes a long time to fully (or even mostly) understand the various machinations of legislative action (let alone executive discretion, and then judicial interpretation), and in that time, regardless of what happens in various capitol buildings, the tests pass and the code runs - for better and for worse.

      And even amidst a diversity of views on the future of the state, there seems to be near consensus on the underlying impetus: obviously humans and AI are distinct, and hearing the news from a human, particularly a human with a strong web-of-trust connection in your local society, is massively more credible. What's not clear is whether states have a role to play in lending clarity to the situation, or whether that will happen of the internet's own accord.

  • By jfengel 2026-02-06 15:12 (4 replies)

    What I'd really like to see is a label on original reporting.

    Even beyond AI, the vast majority of news is re-packaging information you got from somewhere else. AI can replace the re-writers, but not the original journalists, people who spoke to primary sources (or who were themselves eyewitnesses).

    Any factual document should reference its sources. If not, it should be treated skeptically, regardless of whether AI or a human is doing that.

    An article isn't automatically valueless just because it's synthesized. It can focus and contextualize, regardless of whether it's human or AI written. But it should at the very least be able to say "This is the actual fact of the matter", with a link to it. (And if AI has hallucinated the link, that's a huge red flag.)

    • By foxbarrington 2026-02-06 16:10 (4 replies)

      A common reaction I get to https://forty.news is that the stories “need sources,” which I always find funny. I don’t hear the same demand for sources of every other news outlet (I find it extra weird because all FN’s stories are 40 years old, simple to verify, and can’t push an agenda the same way).

      Totally agree with you: all newspapers should cite sources. What’s silly to me is how selectively people care—big outlets get to hand-wave the “trust me” part even when a piece is basically a lightly rewritten press release, thinly sourced, or reflecting someone’s incentives more than reality.

      • By squeaky-clean 2026-02-07 00:18

        For Forty News I don't think the "need sources" requests are about the contents of the news stories. It's about where these stories came from. How can I know they were ever actually published? As it stands, I can't tell whether they were pulled from real newspapers or AI-generated as a simulation of what the story might have looked like condensed to 10 sentences.

      • By CodingJeebus 2026-02-06 22:04 (1 reply)

        > all newspapers should cite sources.

        You'd lose a lot of valid sourcing if you made this a requirement. For example, the Catholic Church scandal investigation would never have seen the light of day if the key legal sources corroborating the story had to give up their identity as part of the process. Speaking off the record is often where a lot of those kinds of stories come together.

        And the reaction around the world to that story, the thousands of victims that came forward, resoundingly confirmed what people were saying on background.

        • By jfengel 2026-02-07 13:58 (1 reply)

          You can say "an anonymous source". That's a standard journalistic practice.

          What shouldn't be standard is "we read this newspaper article and we're going to repeat it as if we originated the story". Which is what AI news always is, in addition to the vast majority of news outlets.

          The origin of most news is wire services, repeating what local reporters write. Newspapers usually at least cite the wire service. Blogs, TV shows, and others speak as if they were the originators.

          • By CodingJeebus 2026-02-07 17:26

            Oh I see what you're saying now, totally agree with that.

      • By Spivak 2026-02-06 21:38

        Well, yeah: investigative journalism and original reporting, outside the spectacle of buying a plane ticket to a warzone or weather disaster so the reporter can have a dramatic backdrop, is too expensive when people come to you in droves with literally pre-written articles you can rubber-stamp and publish.

        Which, by the way, is how to get in the paper if you ever want to; it's super easy. AI will help you learn to write in the right tone/voice for news if you don't know how.

    • By ntnsndr 2026-02-06 19:04

      For example, the Colorado Sun has labels on every story for the nature of reporting that went into it: https://coloradosun.com/

      Some may find it surprising that this is left over from the Sun's early support from the crypto journalism project Civil.

    • By stahorn 2026-02-06 15:45

      Just like we want to know where the food we eat comes from, we want to know where information comes from. Of course there's the limit of journalists having to keep their sources secret in many cases. But naming the original publisher should, I think, be possible.

    • By carlosjobim 2026-02-06 16:17

      There's already such a label: "exclusive!"

  • By Llamamoe 2026-02-06 10:57 (9 replies)

    Ideally, trying to pass off anything AI-generated as human-made content would be illegal, not just news, but it's a good start.

    • By xnorswap 2026-02-06 11:04 (7 replies)

      That could do more harm than good.

      Like how California's law about cancer warnings is useless, because it makes it look like everything is known to the State of California to cause cancer, which in turn makes people ignore and tune out the warnings: they deliver no signal, only noise. This in turn harms people when they think, "How bad can tobacco be? Even my aloe vera plant has a warning label."

      Keep it to generated news articles, and people might pay more attention to them.

      Don't let the AI lobby insist on anything that's touched an LLM getting labelled, because if the label gets slapped on anything that's even passed through a spell-checker or been saved in Notepad (somehow this is contaminated, lol), then it'll become a useless warning.

      • By Llamamoe 2026-02-06 15:39 (1 reply)

        > That could do more harm than good.

        The downside to having labels on AI-written political comments, stellar reviews of bad products, speeches by a politician, or supposed photos of wonderful holiday destinations in ads targeted at old people is what, exactly?

        Are you really arguing that putting a label on AI-generated content could do more harm than leaving it (approximately) indistinguishable from the real thing?

        I'm not arguing that we need to label anything that used gen AI in any capacity, but past the point of, e.g., minor edits, yeah, it should be labeled.

        • By terminalshort 2026-02-06 17:00

          None of those AI written political comments will have the label added because it's unprovable, and those propaganda shops are based well outside of the necessary jurisdiction anyway. It will just be a burden on legitimate actors and a way for the government to harass legitimate media outlets that it doesn't like with expensive "AI usage investigations."

      • By elric 2026-02-06 19:05 (2 replies)

        I bought a piece of wooden furniture some time ago. It came with a label saying that the state of California knows it to be a carcinogen. I live in Belgium. It was weird.

        • By frm88 2026-02-07 09:50

          The Proposition 65 warnings apply to carcinogenic materials used on furniture surfaces, which can be released into the air or accumulate in dust. None of these substances is a conditio sine qua non; there are alternatives. https://www.p65warnings.ca.gov/fact-sheets/furniture-product...

          The same warnings and labels are used in the EU, for example for formaldehyde, which will be severely limited in its use starting in August 2026. https://easecert.com/blogs/insights/formaldehyde-emission-li...

          It may look weird, but personally I prefer a warning to being subjected to toxic substances without my knowledge.

        • By bogwog 2026-02-06 19:52 (2 replies)

          Just an observation, but this California meme seems to be the go-to talking point for the anti-AI-regulation crowd lately.

          • By turtlesdown11 2026-02-06 21:33

            It's not even a good argument. Studies have demonstrated that the law reduces toxic chemicals in the body and deters companies from using toxic chemicals in their products.

          • By elric 2026-02-07 08:53

            That's a weird comparison, hadn't heard that one yet.

            I'm very much in favour of regulating (and heavily taxing) AI. But I very much dislike silly warning labels that miss the point. Owning wooden furniture is not carcinogenic. Inhaling tons of wood dust (e.g. from sanding wood in a poorly ventilated room) could be carcinogenic. But putting such warning labels on furniture is just ridiculous scaremongering.

      • By turtlesdown11 2026-02-06 21:33

        > Like how California's bylaw about cancer warnings are useless

        Californians have measurably lower concentrations of toxic chemicals in their bodies than non-Californians, so yes, very useless!

      • By cardanome 2026-02-06 14:52

        > Don't let the AI lobby insist on anything that's touched an LLM getting labelled, because if it gets slapped on anything that's even passed through a spell-checker or saved in Notepad

        People have been writing articles without the help of an LLM for decades.

        You don't need an LLM for grammar and spell checking; arguably an LLM is less efficient and currently worse at it anyway.

        The biggest help an LLM can provide is with research, but that is only because search engines have been artificially enshittified these days. Even there the usefulness is very limited because of hallucinations, so you might be better off without.

        There is no proof that LLMs can significantly improve the workflow of a professional journalist when it comes to creating high quality content.

        So no, don't believe the hype. There will still be enough journalists not using LLMs at all.

      • By SkyBelow 2026-02-06 12:06

        It is worse, even less than useless. In the California case, there is very little to gain by lying and not putting a sticker on items that should have one. With AI-generated content, as the models get to the point where we can't tell anymore whether it is fake, there are plenty of reasons to pass off a fake as real, and conditioning people to expect an AI warning will make them more likely to fall for content that ignores the law and doesn't label itself.

      • By direwolf20 2026-02-06 11:19 (1 reply)

        Imagine selling a product with the tagline: "Unlike Pepsi, ours doesn't cause cancer."

        • By groovy2shoes 2026-02-07 17:33

          "Good prices, no rats! That's the Fairsley Difference™!"

    • By driverdan 2026-02-06 13:40 (1 reply)

      What does that mean, though? Photos taken with mobile camera apps are processed using AI. Many Photoshop tools now use AI.

      • By Llamamoe 2026-02-06 15:41

        Obviously it should not apply to anything using machine-learning-based algorithms in any way, just to content made with generative AI, with exceptions for minor applications and/or a separate label for smaller edits.

    • By reliabilityguy 2026-02-06 12:10 (1 reply)

      How do we know what’s AI-generated vs. sloppy human work? Of course in some situations it is obvious (e.g., video), but text? Audio?

      • By FeteCommuniste 2026-02-06 13:39

        And of course you can even ask AI to add some "human sloppiness" as part of the prompt (spelling mistakes, run-on sentences, or whatever).

    • By ppeetteerr 2026-02-06 18:15

      Publishing is more than just authoring. You have research, drafts, edits, source verification, voice, formatting, multiple edits for different platforms and mediums. Each one of those steps could be done by AI. It's not a single-shot process.

    • By pezgrande 2026-02-06 11:46

      Where do we put the line between AI-generated and AI-assisted (aka Photoshop and other tools)?

    • By sekai 2026-02-06 13:00 (1 reply)

      > Ideally, trying to pass anything AI-generated as human-made content would be illegal, not just news, but it's a good start.

      Does photoshop fall under this category?

      • By hermannj314 2026-02-06 13:43 (2 replies)

        Spell check, autocomplete, grammar editing, A/B tests for bylines and photo use, related stories, “viewers also read,” tag generation…

        I guess you have to disclose every single item on your news site that does anything like this. Any byte that touches a stochastic process is tainted forever.

        • By catlifeonmars 2026-02-07 04:26

          Colloquially “AI” means LLMs and generative art. If you’re trying to make an argument by absurdity and you don’t want it to fall flat, maybe keep it relevant and don’t attack the straw man you just fabricated?

        • By b40d-48b2-979e 2026-02-06 15:19

          None of those things are "AI" (LLMs). We had those things before, we'll have them after.

    • By jacquesm 2026-02-06 11:00

      Fully agreed.

    • By infecto 2026-02-06 13:18

      Please, no. I don’t want that kind of future. It’s going to be California cancer warnings all over again.

      I don’t like AI slop, but this kind of legislation does nothing. Look at the low-quality garbage that already exists; do we really need another step in the flow to catch whether it’s AI?

      You can’t legislate these problems away.

    • By patrick451 2026-02-06 13:59 (1 reply)

      Ideally, we would just ban AI content altogether.

      • By Llamamoe 2026-02-06 15:42

        I don't think there's any way for that to happen, and IF we could create a solid legislative framework, AI could definitely (at some point in the future) contribute more good than bad to society.

HackerNews