Court filings allege Meta downplayed risks to children and misled the public

2025-11-23 15:18 · time.com

Court filings allege Meta tolerated sex trafficking, hid harms to teens, and prioritized growth over user safety for years.

Sex trafficking on Meta platforms was both difficult to report and widely tolerated, according to a court filing unsealed Friday. In a plaintiffs’ brief filed as part of a major lawsuit against four social media companies, Instagram’s former head of safety and well-being Vaishnavi Jayakumar testified that when she joined Meta in 2020 she was shocked to learn that the company had a “17x” strike policy for accounts that reportedly engaged in the “trafficking of humans for sex.” 

“You could incur 16 violations for prostitution and sexual solicitation, and upon the 17th violation, your account would be suspended,” Jayakumar reportedly testified, adding that “by any measure across the industry, [it was] a very, very high strike threshold.” The plaintiffs claim that this testimony is corroborated by internal company documentation.

The brief, filed by plaintiffs in the Northern District of California, alleges that Meta was aware of serious harms on its platform and engaged in a broad pattern of deceit to downplay risks to young users. According to the brief, Meta was aware that millions of adult strangers were contacting minors on its sites; that its products exacerbated mental health issues in teens; and that content related to eating disorders, suicide, and child sexual abuse was frequently detected, yet rarely removed. According to the brief, the company failed to disclose these harms to the public or to Congress, and refused to implement safety fixes that could have protected young users.  

“Meta has designed social media products and platforms that it is aware are addictive to kids, and they’re aware that those addictions lead to a whole host of serious mental health issues,” says Previn Warren, the co-lead attorney for the plaintiffs in the case. “Like tobacco, this is a situation where there are dangerous products that were marketed to kids,” Warren adds. “They did it anyway, because more usage meant more profits for the company.” 

The following allegations against Meta come from the brief filed in an unprecedented multidistrict litigation. More than 1,800 plaintiffs—including children and parents, school districts, and state attorneys general—have joined together in a suit alleging that the parent companies behind Instagram, TikTok, Snapchat, and YouTube “relentlessly pursued a strategy of growth at all costs, recklessly ignoring the impact of their products on children’s mental and physical health,” according to their master complaint. The newly unsealed allegations about Meta are just one small part of the sprawling suit. (TIME filed a motion to intervene in the case to ensure public access to court records; the motion was denied.)

The plaintiffs’ brief, first reported by TIME, purports to be based on sworn depositions of current and former Meta executives, internal communications, and company research and presentations obtained during the lawsuit’s discovery process. It includes quotes and excerpts from thousands of pages of testimony and internal company documents. TIME was not able to independently view the underlying testimony or research quoted in the brief, since those documents remain under seal.  

Read More: The Lawyer Suing Social Media Companies On Behalf of Kids

But the brief still paints a damning picture of the company’s internal research and deliberations about issues that have long plagued its platforms. Plaintiffs claim that since 2017, Meta has aggressively pursued young users, even as its internal research suggested its social media products could be addictive and dangerous to kids. Meta employees proposed multiple ways to mitigate these harms, according to the brief, but were repeatedly blocked by executives who feared that new safety features would hamper teen engagement or user growth.

“We strongly disagree with these allegations, which rely on cherry-picked quotes and misinformed opinions in an attempt to present a deliberately misleading picture,” a Meta spokesperson said in a statement to TIME. “The full record will show that for over a decade, we have listened to parents, researched issues that matter most, and made real changes to protect teens – like introducing Teen Accounts with built-in protections and providing parents with controls to manage their teens’ experiences. We’re proud of the progress we’ve made and we stand by our record.”

In the years since the lawsuit was filed, Meta has implemented new safety features designed to address some of the problems described by plaintiffs. In 2024, Meta unveiled Instagram Teen Accounts, which defaults any user between 13 and 18 into an account that is automatically private, limits sensitive content, turns off notifications at night, and doesn’t allow messaging from unconnected adults. “We know parents are worried about their teens having unsafe or inappropriate experiences online, and that’s why we’ve significantly reimagined the Instagram experience for tens of millions of teens with new Teen Accounts,” a Meta spokeswoman told TIME in June. “These accounts provide teens with built-in protections to automatically limit who’s contacting them and the content they’re seeing, and teens under 16 need a parent’s permission to change those settings. We also give parents oversight over their teens’ use of Instagram, with ways to see who their teens are chatting with and block them from using the app for more than 15 minutes a day, or for certain periods of time, like during school or at night.”

And yet the plaintiffs’ brief suggests that Meta resisted safety changes like these for years. 

The brief quotes testimony from Brian Boland, Meta’s former vice president of partnerships who worked at the company for 11 years and resigned in 2020. “My feeling then and my feeling now is that they don’t meaningfully care about user safety,” he allegedly said. “It’s not something that they spend a lot of time on. It’s not something they think about. And I really think they don’t care.”

After the plaintiffs’ brief was unsealed late Friday night, Meta did not immediately respond to TIME’s requests for comment. 

Here are some of the most notable allegations from the plaintiffs’ omnibus brief: 

Allegation: Meta had a high threshold for "sex trafficking" content—and no way to report child sexual content

Despite Instagram’s “zero tolerance” policy for child sexual abuse material, the platform did not offer users a simple way to report child sexual abuse content, according to the brief. Plaintiffs allege that Jayakumar raised the issue multiple times when she joined Meta in 2020, but was told it would be too difficult to address. Yet Instagram allowed users to easily report far less serious violations, like “spam,” “intellectual property violation” and “promotion of firearms,” according to plaintiffs.

Jayakumar was even more shocked to learn that Instagram had a disturbingly high tolerance for sex trafficking on the platform. According to the brief, she testified that Meta had a “17x” strike policy for accounts that reportedly engaged in the “trafficking of humans for sex,” meaning an account could rack up 16 violations and would be suspended only upon the 17th.

“Meta never told parents, the public, or the Districts that it doesn’t delete accounts that have engaged over fifteen times in sex trafficking,” the plaintiffs wrote.

A Meta spokesperson disputed this allegation to TIME, saying the company has for years removed accounts immediately if it suspects them of human trafficking or exploitation and has made it easier over time for users to report content that violates child-exploitation policies.

Allegation: Meta "lied to Congress" about its knowledge of harms on the platform

For years, plaintiffs allege, Meta’s internal research had found that teenagers who frequently use Instagram and Facebook have higher rates of anxiety and depression. 

In late 2019, according to the brief, Meta designed a “deactivation study,” which found that users who stopped using Facebook and Instagram for a week showed lower rates of anxiety, depression, and loneliness. Meta halted the study and did not publicly disclose the results, stating that the research study was biased by the “existing media narratives around the company.” (A Meta spokesperson told TIME that the study was initially conceived as a pair of one-week pilots, and researchers declined to continue it because it found that the only reductions in feelings of depression, anxiety, and loneliness were among people who already believed Facebook was bad for them.)

At least one Meta employee was uncomfortable with the implications of this decision: “If the results are bad and we don’t publish and they leak,” this employee wrote, according to the brief, “is it going to look like tobacco companies doing research and knowing cigs were bad and then keeping that info to themselves?”

Indeed, in December 2020, when the Senate Judiciary Committee asked the company in a set of written questions whether it was “able to determine whether increased use of its platform among teenage girls has any correlation with increased signs of depression” and “increased signs of anxiety,” the company offered only a one-word answer: “No.”

To the plaintiffs in the case, the implication is clear: “The company never publicly disclosed the results of its deactivation study. Instead, Meta lied to Congress about what it knew.”

Allegation: The company knew Instagram was letting adult strangers connect with teenagers

For years Instagram has had a well-documented problem of adults harassing teens. Around 2019, company researchers recommended making all teen accounts private by default in order to prevent adult strangers from connecting with kids, according to the plaintiffs’ brief. Instead of implementing this recommendation, Meta asked its growth team to study the potential impact of making all teen accounts private. The growth team was pessimistic, according to the brief, and responded that the change would likely reduce engagement. 

By 2020, the growth team had determined that a private-by-default setting would result in a loss of 1.5 million monthly active teens a year on Instagram. The plaintiffs’ brief quotes an unnamed employee as saying: “taking away unwanted interactions… is likely to lead to a potentially untenable problem with engagement and growth.” Over the next several months, plaintiffs allege, Meta’s policy, legal, communications, privacy, and well-being teams all recommended making teen accounts private by default, arguing that the switch “will increase teen safety” and was in line with expectations from users, parents, and regulators. But Meta did not launch the feature that year. 

Safety researchers were dismayed, according to excerpts of an internal conversation quoted in the filing. One allegedly grumbled: “Isn’t safety the whole point of this team?” 

“Meta knew that placing teens into a default-private setting would have eliminated 5.4 million unwanted interactions a day,” the plaintiffs wrote. Still, Meta didn’t make the fix. Instead, inappropriate interactions between adults and kids on Instagram skyrocketed to 38 times the rate on Facebook Messenger, according to the brief. The launch of Instagram Reels allegedly compounded the problem. It allowed young teenagers to broadcast short videos to a wide audience, including adult strangers.

Read More: The AG Putting Big Tech On Trial.

An internal 2022 audit allegedly found that Instagram’s Accounts You May Follow feature recommended 1.4 million potentially inappropriate adults to teenage users in a single day. By 2023, according to the plaintiffs, Meta knew that it was recommending minors to potentially suspicious adults and vice versa.

It wasn’t until 2024 that Meta rolled out default privacy settings to all teen accounts. In the four years it took the company to implement its own safety recommendations, teens experienced billions of unwanted interactions with strangers online. Inappropriate encounters between teens and adults were common enough, according to the brief, that the company had an acronym for them: “IIC,” or “inappropriate interactions with children.”

A Meta spokesperson said the company has defaulted teens under 16 to private accounts since 2021, began defaulting teens under 18 into private accounts with the introduction of its Teen Accounts program, and has taken steps to protect users from online predators.

Allegation: Meta aggressively pursued young users, including children under 13

Meta feared young users would abandon Facebook and Instagram for their competitors. Acquiring and keeping young users became a central business goal. Meta CEO Mark Zuckerberg suggested that “teen time spent be our top goal of 2017,” according to a company executive quoted in the brief. That has remained the case, plaintiffs allege; internal company documents from 2024 stated that “acquiring new teen users is mission critical to the success of Instagram.” (A Meta spokesperson said time spent on its platforms is not currently a company goal.)

Meta launched a campaign to connect with school districts and paid organizations like the National Parent Teacher Association and Scholastic to conduct outreach to schools and families. Meanwhile, according to the brief, Meta used location data to push notifications to students in “school blasts,” presumably as part of an attempt to increase youth engagement during the school day. As one employee allegedly put it: “One of the things we need to optimize for is sneaking a look at your phone under your desk in the middle of Chemistry :)”.

Though Meta aggressively pursued young users, it may not have known exactly how old those new users were. Whistleblower Jason Sattizahn recently testified to Congress that Meta does not reliably know the age of its users. (Meta pushed back on Sattizahn’s testimony, saying in a statement to NBC that his claims were “nonsense” and “based on selectively leaked internal documents that were picked specifically to craft a false narrative.”) In 2022, according to the plaintiffs’ brief, there were 216 million users on Meta platforms whose age was “unknown.”

Federal law requires social media platforms to observe various data-privacy safeguards for users under 13, and Meta policy states that users under 13 are not allowed on its platforms. Yet the plaintiffs’ court filing claims Meta knew that children under 13 used the company’s products anyway. Internal research cited in the brief suggested there were 4 million users under 13 on Instagram in 2015; by 2018, the plaintiffs claim, Meta knew that roughly 40% of children aged 9 to 12 said they used Instagram daily.

The plaintiffs allege that this was a deliberate business strategy. The brief describes a coordinated effort to acquire young users that included studying the psychology and digital behavior of “tweens” and exploring new products designed for “users as young as 5-10.” 

Internally, some employees expressed disgust at the attempt to target preteens. “Oh good, we’re going after <13 year olds now?” one wrote, according to the brief. “Zuck has been talking about that for a while...targeting 11 year olds feels like tobacco companies a couple decades ago (and today). Like we’re seriously saying ‘we have to hook them young’ here.”

Allegation: Meta's executives initially shelved efforts to make Instagram less toxic for teens

To combat toxic “social comparison,” in 2019 Instagram head Adam Mosseri announced a new product feature that would “hide” likes on posts. Meta researchers had determined that hiding likes would make users “significantly less likely to feel worse about themselves,” according to the plaintiffs’ brief. The initiative was code-named Project Daisy.

But after a series of tests, Meta backtracked on Project Daisy. It determined the feature was “pretty negative to FB metrics,” including ad revenue, according to the plaintiffs’ brief, which quotes an unnamed employee on the growth team insisting: “It’s a social comparison app, fucking get used to it.” 

A similar debate took place over the app’s beauty filters. Plaintiffs claim that an internal review concluded beauty filters exacerbated the “risk and maintenance of several mental health concerns, including body dissatisfaction, eating disorders, and body dysmorphic disorder,” and that Meta knew that “children are particularly vulnerable.” Meta banned beauty filters in 2019, only to roll them back out the following year after the company realized that banning beauty filters would have a “negative growth impact,” according to the plaintiffs’ brief. 

Other company researchers allegedly built an AI “classifier” to identify content that would lead to negative appearance comparison, so that Meta could avoid recommending it to vulnerable kids. But Mosseri allegedly killed the project, disappointing developers who “felt like they had a solution” to “a big problem.”

While Meta developed AI tools to monitor the platforms for harmful content, the company didn’t automatically delete that content even when it determined with “100% confidence” that it violated Meta’s policies against child sexual-abuse material or eating-disorder content. Meta’s AI classifiers did not automatically delete posts that glorified self-harm unless they were 94% certain they violated platform policy, according to the plaintiffs’ brief. As a result, most of that content remained on the platform, where teenage users often discovered it. In a 2021 internal company survey cited by plaintiffs, more than 8% of respondents aged 13 to 15 reported having seen someone harm themselves, or threaten to do so, on Instagram during the past week.
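
To make the mechanism concrete, here is a purely illustrative Python sketch of a confidence-threshold rule of the kind the brief describes. The policy names, numbers, and routing below are assumptions for illustration only, not Meta's actual moderation pipeline.

    # Hypothetical illustration of a confidence-threshold moderation rule;
    # policy names and values are assumptions, not Meta's real system.
    AUTO_DELETE_THRESHOLDS = {
        "self_harm": 0.94,           # per the brief, ~94% confidence was required
        "child_sexual_abuse": None,  # per the brief, not auto-deleted even at 100%
        "eating_disorder": None,
    }

    def route(policy: str, confidence: float) -> str:
        """Decide what a threshold-based pipeline does with one flagged post."""
        threshold = AUTO_DELETE_THRESHOLDS.get(policy)
        if threshold is not None and confidence >= threshold:
            return "auto_delete"
        # Flagged but below the bar: the post stays up unless a human reviewer acts.
        return "human_review"

    print(route("self_harm", 0.93))  # human_review: the post remains visible for now
    print(route("self_harm", 0.96))  # auto_delete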

Read More: ‘Everything I Learned About Suicide, I Learned On Instagram.’

A Meta spokesperson said the company reports more child sexual-abuse material than any other service and uses an array of tools to proactively find that content, including photo and video-matching technologies as well as machine learning. The spokesperson said human reviewers assess flagged content before it is deleted to ensure it violates policies, prevent mistakes that could affect users, and maintain the integrity of the company's detection databases.

Allegation: Meta knew its products were addictive, but publicly downplayed the harms

The addictive nature of the company’s products wasn’t a secret internally. “Oh my gosh yall IG is a drug,” one of the company’s user-experience researchers allegedly wrote to a colleague. “We’re basically pushers.” 

Meta does not officially study addiction to its products, plaintiffs allege; it studies “problematic use.” In 2018, company researchers surveyed 20,000 Facebook users in the U.S. and found that 58% had some level of “problematic use”—55% mild, and 3.1% severe. But when Meta published an account of this research the following year, only the smaller number of users with “severe” problematic use was mentioned. “We estimate (as an upper bound) that 3.1% of Facebook users in the U.S. experience problematic use,” wrote the researchers. The other 55% of users are not mentioned anywhere in the public report. 

Plaintiffs allege that Meta’s safety team proposed features designed to lessen addiction, only to see them set aside or watered down. One employee who helped develop a “quiet mode” feature said it was shelved because Meta was concerned that this feature would negatively impact metrics related to growth and usage.

Around the same time, another user-experience researcher at Instagram allegedly recommended that Meta inform the public about its research findings: “Because our product exploits weaknesses in the human psychology to promote product engagement and time spent,” the researcher wrote, Meta needed to “alert people to the effect that the product has on their brain.” 

Meta did not. 

This story has been updated to reflect additional comments from Meta.



Comments

  • By kryogen1c 2025-11-2316:172 reply

    > a Meta spokesperson said in a statement to TIME. "The full record will show that for over a decade, we have listened to parents, researched issues that matter most, and made real changes to protect teens

    Omegalol. Cigarette maker introduces filter, cares about your health.

    • By mtillman 2025-11-2317:103 reply

      Every cig exec lied under oath and only received monetary fines.

      • By kryogen1c 2025-11-2317:391 reply

        The comparison was not accidental. I expect a similar, meaningless outcome for poisoning children.


      • By flag_fagger 2025-11-2319:062 reply

        Cigarette makers were a dying cry of the old aristocracy. Silicon Valley is the rallying cry of the new aristocracy.

        While I don’t quite believe they’ll achieve their feudal dreams in the near-to-medium future, I do expect the US to transition to a much more explicitly oligarchic republic at large, with the pretense of “Government of the people, by the people, for the people” largely pushed to the side.

        Only solution seems to be to drop out of society to whatever degree possible.

        • By samdoesnothing 2025-11-2320:202 reply

          The government and massive corporations being in bed with each other is nothing new. Different breed same species. Except tech execs think they're a lot smarter than they are.

          • By pclmulqdq 2025-11-2321:16

            Pretty much all execs throughout time have thought that.

          • By Refreeze5224 2025-11-245:021 reply

            It's nothing new, as it is essentially the only logical outcome of capitalism. It's not an aberration, it's an intended feature. Capital is power, and law and government is how that power is expressed and enacted over those without capital.

            • By samdoesnothing 2025-11-246:341 reply

              It's actually the logical outcome of any system with a consolidated monopoly on political power (the government). Blaming capitalism is ridiculous because alternative systems suffer from the exact same issue.

              • By Refreeze5224 2025-11-247:451 reply

                Sort of. Capitalism cannot exist without a monopoly of coercive state power backing it. So it makes sense to criticize it when that's what's actually happening. Other systems can work without coercive state power, are in fact intended to, and result in more freedom for the members of the resulting society, so I agree with your take on government generally.

                • By samdoesnothing 2025-11-2420:35

                  Free trade and private property rights can exist without a monopoly on political power, but as with stuff like this I don't really know what's meant by "capitalism".

                  That being said I don't think at the present moment it's possible to have a society without some form of government, so then the question becomes "what do we do about it", and I think the answer is to limit the scope of political intervention and power as much as possible.

        • By Loughla 2025-11-2320:301 reply

          Cashless payments, always-connected software and devices, required app use for basic services like power, water, and heat, and the extreme data collection that exists today make dropping out of society more difficult than ever.

          While his crimes were atrocious, Ted Kaczynski might be right in some ways. The industrial and technological revolutions have improved life dramatically for many humans and we live in a time of astonishing abundance, but at what cost?

          aaaaannndd now I'm on a list somewhere.

          • By sznio 2025-11-2414:10

            the appetite of the rich always grows. the fast technological growth created more wealth than they could consume. once that runs out they'll take back what accidentally "trickled down"

            most innovation since 2012 seems to be not in the technology, but the financial sector. not ways to create value, but to squeeze more from the same thing.

    • By nielsbot 2025-11-244:43

      I bet it's true but misleading:

      > listened to parents

      ...but not taken significant actions

      > researched issues that matter most

      ...but ignored the results of the research

      > made real changes to protect teens

      ...sure, insignificant changes

  • By binarymax 2025-11-2316:2712 reply

    Look, most of us here know that meta is a terrible company that has done terrible things. But what is actually being done about it? So far just some token fines and petty wrist slaps. What’s really the plan here? Because they’re not going to stop.

    • By pksebben 2025-11-2323:17

      At the surface, it's an antitrust issue (at Meta's scale, the company doesn't have the capacity to behave better, so it doesn't). This, like so many other things, can be traced back to a broken system of governance at the root level.

      Our system of incentives, operating within a system of governmental authority baked in an age where gunpowder was the new hotness, leads to a place where the movement of individual bits of law or policy doesn't matter. The forces at work will roll back whatever you do to make the social situation better, if they are antithetical to the interests of capital. Fix healthcare, and the insurance companies will find ways to twist it to their profit. Fix housing, and the banks and real estate developers will find ways to charge rent anyway.

      The coupling between decision making and the vox populi is weak and must be strengthened. The coupling between decision making and capital is strong and must be broken. Unless we can accomplish either, any change we make is cosmetic.

      I think what we need is a dissolution of representatives in favor of a more direct form of democracy, but most dismiss this as looney/impossible. I'm inclined to agree about the impossibility but that just kind of lands us back at 'what the hell do we do about it'.

      Ranked choice is a good start, perhaps. Might not 'fix it' but maybe it's a foot in the door.

    • By ThinkBeat 2025-11-2323:52

      For a while I worked for a company that was doing some shady and unethical things, but just within the law.

      It took me a while to understand how things worked, and when I did, I found a different job.

      Now, this enterprise I left could never have done what it did if it were not for the developers who made it possible.

      When we talk about the giants of social media, it is us, the developers, who make it possible for them to do what they do.

      If you are frustrated about how they are not being stopped from doing what they do, encourage people to leave. The money is great, but does it make it worth it?

      From the other side, let us say that the US shut down Meta and the rest of the social media beasts, how many developers would be out on the street?

    • By Aurornis 2025-11-2316:4216 reply

      > But what is actually being done about it?

      Serious question: What exactly do you want to see done? I mean real specifics, not just the angry mob pitchfork calls for corporate death penalty or throwing Mark Zuckerberg in jail.

      • By slg 2025-11-2318:207 reply

        Amend Section 230 so that it does not apply to content that is served algorithmically. Social media companies can either allow us to select what content we want to see by giving us a chronological feed of the people/topics we follow or they can serve us content according to some algorithm designed to keep us on their platform longer. The former is neutral and deserves protection, but the latter is editorial. Once they take on that editorial role of deciding what content we see, they should become liable for the content they put in front of us.

        • By Manuel_D 2025-11-2319:235 reply

          So Hacker News should lose section 230 protection?

          Because the content served here isn't served in chronological order. The front page takes votes into account and displays hotter posts higher in the feed.

          • By slg 2025-11-2319:444 reply

            Technically sorting by timestamp is an "algorithm" too, so I was just speaking informally rather than drafting the exact language of a piece of legislation. I would define the categories as something like algorithms determined by direct proactive user decisions (following, upvoting, etc) versus algorithms that are determined by other factors (views, watch time, behavior by similar users, etc). Basically it should always be clear why you're being served what you're being served, either because the user chose to see it or because everyone is seeing it. No more nebulous black box algorithms that give every user an experience individually designed to keep them on the platform.

            This will still impact HN because of stuff like the flame war downranker they use here. However, that doesn't automatically mean HN loses Section 230 protection. HN could respond by simplifying its ranking algorithm to maintain 230 protections.

            • By rafabulsing 2025-11-2320:152 reply

              I think the best way to put it is, users with the same user picked settings should see the same things, in the same order.

              That's a given on HackerNews, as there's only one frontpage. On Reddit that would be, users subscribed to the same subreddits would always see the same things on their frontpages. Same as users on YouTube subscribed to the same channels, users on Facebook who liked the same pages, and so on.

              The real problem starts when the algorithm takes into account implicit user actions. E.g., two users are subscribed to the same channels, and both click on the same video. User A watches the whole video, user B leaves halfway through. If the algorithm takes that into account, now user A will see different suggestions than user B.

              That's what gets the ball rolling into hyper specialized endless feeds which tend to push you into extremes, as small signals will end up being amplified without the user ever taking an explicit action other than clicking or not suggestions in the feed.

              As long as every signal the algorithm takes into account is either a global state (user votes, total watch time, etc), or something the user explicitly and proactively has stated is their preference, I think that would be enough to curb most of the problems with algorithmic feeds.

              Users could still manually configure feeds that provide hyper personalized, hyper specific, and hyper addictive content. But I bet the vast majority of users would never go beyond picking 1 specific sport, 2 personal hobbies and 3 genres of music they're interested in and calling it a day. Really, most would probably never even go that far. That's the reason platforms all converged on using those implicit signals, after all: they work much better than the user's explicit signals (if your ultimate goal is maximizing user retention/addiction, and you don't care at all about the collateral damage resulting from that).
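
              A minimal sketch of what I mean, in Python (my own toy example, not any platform's real code; the names, the followed-topics setting, and the hotness formula are all made up): ranking is a pure function of explicit user choices plus global post stats, so two users with the same settings get the same feed in the same order.

                from dataclasses import dataclass, field

                @dataclass
                class Post:
                    post_id: str
                    topic: str
                    total_votes: int   # global state, same for every viewer
                    age_hours: float   # global state

                @dataclass
                class ExplicitSettings:
                    followed_topics: set = field(default_factory=set)  # only things the user opted into

                def rank_feed(posts, settings):
                    # Deterministic: depends only on explicit settings and global stats.
                    # No watch time, dwell time, or other implicit per-user signals.
                    candidates = [p for p in posts if p.topic in settings.followed_topics]
                    return sorted(candidates,
                                  key=lambda p: p.total_votes / (p.age_hours + 2.0),
                                  reverse=True)

                posts = [Post("a", "cycling", 120, 5.0),
                         Post("b", "cooking", 300, 8.0),
                         Post("c", "cycling", 40, 1.0)]
                alice, bob = ExplicitSettings({"cycling"}), ExplicitSettings({"cycling"})
                # Same explicit choices -> same feed, regardless of past behavior.
                assert ([p.post_id for p in rank_feed(posts, alice)]
                        == [p.post_id for p in rank_feed(posts, bob)])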

              • By slg 2025-11-2320:461 reply

                Yes, you said it better than me. Fundamentally if you know a user's choices, you should be able to reproduce what that user sees.

                • By Manuel_D 2025-11-2321:062 reply

                  But Meta's content ranking would conform to this too: in theory a user that had the exact same friends, is a member of the exact same groups, had the exact same watch history, etc. would be served the same content. I'm pretty sure there's at least some degree of randomization, but putting that aside, it remains unclear how you're constructing a set of criteria that spares Hacker News, and plenty of other sites, but not Meta.

                  • By rafabulsing 2025-11-2321:241 reply

                    Even that, I don't think is entirely true. I'm pretty sure they use signals as implicit as how long you took to scroll past an autoplaying video, or if you even hovered your mouse pointer over the video but ultimately didn't click on it.

                    Same with friends, even if you have the exact same friends, if you message friend A more than friend B, and this otherwise identical account does the opposite, then the recommendation engine will give you different friend-related suggestions.

                    Then there's geolocation data, connection type/speeds, OS and browser type, account name (which, if they are real names such as on Facebook, can be used to infer age, race, etc), and many others, which can also be taken into account for further tailoring suggestions.

                    You can say that, oh, but some automated system that sent the exact same signals on all these fronts would end up with the same recommendations, which I guess is probably true, but it's definitely not reasonable. No two (human) users would ever be able to achieve such state for any extended period of time.

                    That's why we are arguing that only explicit individual actions should be allowed into these systems. You can maybe argue what would count as an explicit action. You mention adding friends, I don't think that should count as an explicit action for changing your content feed, but I can see that being debated.

                    Maybe the ultimate solution could be legislation requiring that any action that influences recommendation engines to be explicitly labeled as such (similar to how advertising needs to be labeled), and maybe require at least a confirmation prompt, instead of working with a single click. Then platforms would be incentivized to ask as little as possible, as otherwise confirming every single action would become a bit vexing.

                  • By slg 2025-11-2321:311 reply

                    >had the exact same watch history

                    Watching the content that is being served to you is a passive decision. It's totally different from clicking a button that says you want to see specific content in the future. You show me something that enrages me, I might watch it, but I'll never click a button saying "show me more stuff that enrages me". It's the platform taking advantage of human psychology and that is a huge part of what I want to stop.

                    >it remains unclear how you're constructing a set of criteria that does Hacker News, and plenty of other sites but not Meta.

                    I already said "This will still impact HN because of stuff like the flame war downranker...". I don't know why this and your reply to my other comment seem to imply that I think HN is perfect and untouchable. My proposal would force HN to make a choice on whether to change or lose 230 protections. I'm fine with that.

              • By jefftk 2025-11-242:171 reply

                > I think the best way to put it is, users with the same user picked settings should see the same things, in the same order. That's a given on HackerNews, as there's only one frontpage.

                Are you sure? The algorithm isn't public, but putting a tiny fraction of "nearly ready for the frontpage" posts on the front page for randomly selected users would be a good way to get more votes on them without subjecting everyone to /new

                • By rafabulsing 2025-11-2412:41

                  That's a good point. As I pointed out, I'm ok with global state (total votes, how recent is a post, etc). Randomness could be thought as a kind of global state, even if it's not reproducible. As long as it's truly random, and not something where user A is more likely to see it than user B for any reason, then I'm fine with it.

            • By BrenBarn 2025-11-2321:42

              Another possibility would be to somehow require publishing the algorithm and providing some kind of "under the hood" view that reveals to people what determined what they're seeing. Part of the issue currently is that everything is opaque. If Facebook could not change their algorithm without some kind of public registration process, well, it might not make things better but it might make it get worse a bit slower.

            • By Manuel_D 2025-11-2320:061 reply

              So a simple "most viewed in last month" page would trigger a loss of protection? Because that ranking is determined by number of views, rather than a proactive user decision like upvoting.

              • By slg 2025-11-2320:481 reply

                >So a simple "most viewed in last month" page would trigger a loss of protection?

                The key word there is "page". I have no problem with news.ycombinator.com/active, but that is a page that a user must proactively seek out. It's not the default or even possible to make it the default. Every time a user visits it, it is because they decided to visit it. The page is also the same for everyone who visits it.

                • By Manuel_D 2025-11-2321:03

                  To be clear, even the front page of Hacker News is not just a simple question of upvotes. Views, comments, time since posting, political content down-ranking, etc. all factor into the ordering of posts.

            • By terminalshort 2025-11-243:11

              But what makes this neutral vs editorial? The definition there has nothing to do with whether it is personalized or universal content selection.

          • By ryandrake 2025-11-2321:153 reply

            This is an unpopular opinion here, but I think in general the whole "immunity for third-party content" thing in 230 was a big mistake overall. If you're a web site that exercises editorial control over the content you publish (such as moderating, manually curating, algorithmically curating, demoting or promoting individual contents, and so on), then you have already shown that you are the ones controlling the content that gets published, not end users. So you should take responsibility for what you publish. You shouldn't be able to hide behind "But it was a third party end user who gave it to me!" You've shown (by your moderation practices) that you are the final say in what gets posted, not your users. So you should stand behind the content that you are specifically allowing.

            If a web site makes a good faith effort to moderate things away that could get them in trouble, then they shouldn't get in trouble. And if they have a policy of not moderating or curating, then they should be treated like a dumb pipe, like an ISP. They shouldn't be able to have their cake (exercise editorial control) and eat it too (enjoy liability protection over what they publish).

            • By Manuel_D 2025-11-2323:58

              Moderating and ranking content is distinct from editorial control. Editorial control refers to editing the actual contents of posts. Sites that exercise editorial control are liable for their edits. For instance if a user posts "Joe Smith is not a criminal" and the website operators delete the word "not", then the company can be held liable for defaming Joe Smith. https://en.wikipedia.org/wiki/Section_230#Application_and_li...

            • By SoftTalker 2025-11-241:231 reply

              I’d go farther and say that any content presented to the public should be exempt from protection. If it’s between individuals (like email) then the email provider is a dumb pipe. If it’s a post on a public website the owner of the site should be ultimately responsible for it. Yes that means reviewing everything on your site before publishing it. This is what publishers in the age of print have always had to do.

              • By terminalshort 2025-11-243:171 reply

                I don't want a law that requires a gatekeeper for communication between members of the public.

                • By SoftTalker 2025-11-2417:13

                  Then stand up your own website.

                  If you're using someone else to do it, they should have a say in what is published under their name, and some responsibility for it.

            • By bostik 2025-11-2321:39

              The thing that's missing is the difference between unsolicited external content (i.e. pay-for-play stuff) and directly user-supplied content.

              If you're doing editorial decisions, you should be treated like a syndicator. Yep, that means vetting the ads you show, paid propaganda that you accept to publish, and generally having legal and financial liability for the outcomes.

              User-supplied content needs moderation too, but with them you have to apply different standards. Prefiltering what someone else can post on your platform makes you a censor. You have to do some to prevent your system from becoming a Nazi bar or an abuse demo reel, but beyond that the users themselves should be allowed to say what they want to see and in what order of preference. Section 230 needs to protect the latter.

              The thing I would have liked to see long time ago is for the platforms / syndicators to have obligation to notify their users who have been subjected to any kind of influence operations. Whether that's political pestering, black propaganda or even out-and-out "classic" advertising campaign, should make no difference.

          • By erentz 2025-11-2322:41

            Can’t you just limit scope for section 230 by revenue or users?

            E.g. it only applies to companies with revenue <$10m. Or services with <10,000 active users. This allows blogs and small forums to continue as is, but once you’re making meaningful money or have a meaningful user base you become responsible for what you’re publishing.

          • By armada651 2025-11-2322:51

            I think the biggest problem is when we're all served a uniquely personalized feed. Everyone on Hacker News gets the same front page, but on Facebook users get one specifically tailored to them.

            If Hacker News filled their front page with hate speech and self-harm tutorials there would be public outcry. But Facebook can serve that to people on their timeline and no one bats an eye, because Facebook can algorithmically serve that content only to people who engage with it.

          • By sleight42 2025-11-2319:341 reply

            And there's also moderator control?

            Yup. Accountable.

            • By Manuel_D 2025-11-2319:391 reply

              So to be clear, anything other than a 4chan-like unmoderated chronological feed results in loss of section 230 protection?

              Heck, even 4chan wouldn't qualify, because despite considerably looser content rules they still actually do perform moderation.

              • By sokoloff 2025-11-241:09

                Most sites that accept user-generated-content are forced to do some level of moderation, lest they become a cesspit of one form or another (CSAM, threats, hate speech, exposing porn to underage users, stolen credit card sales, etc...)

        • By ares623 2025-11-2318:28

          That’s the first reasonable take I’ve seen on this. Thanks for explaining it, I will use it for offline discussions on the subject. It’s been hard to explain.

        • By techblueberry 2025-11-2320:17

          Yeah, I wonder if the rules should basically state something like: everything must be topical, and you must opt in to certain topics (adult, politics, etc). People can request recommendations, but they must be requested; no accidental pro-Ana content. If you want to allow hate speech, fine, but people have to opt in to every offensive category/slur explicitly. (We can call it “potentially divisive” for all the “one person’s hate speech is another person’s love rap” folks, or whatever.)

        • By phyzix5761 2025-11-2417:56

          Go after the specific companies and executives you believe are doing wrong. Blanket regulations raise costs for smaller competitors and end up entrenching giants like Meta, Google, and Apple because they can afford compliance while smaller competitors can’t. These rules are a big reason the largest firms are more dominant than ever and have become, effectively, monopolies in their markets. And the irony is that many of these regulations are influenced or supported by the big companies themselves, since a small "investment" in shaping the rules helps them secure even more market share.

        • By worik 2025-11-2319:26

          They could use transparent adjustable algorithms

          I would like to tweak my own feed

        • By FloorEgg 2025-11-2319:17

          This would be a huge step in the right direction.

        • By parineum 2025-11-2319:43

          Chronological is an algorithm

      • By bluefirebrand 2025-11-2320:48

        I think with the harm that these companies are doing, the angry pitchfork mobs are a serious suggestion and not just hyperbole anymore

        Keep in mind that not very long ago some random person assassinated an insurance CEO and many people's reaction was along the lines of "awesome, that fat cat got what he deserved"

        Don't underestimate how much of society absolutely loathes the upper class right now.

        I would bet that many people are one layoff away from calling for execs to get much worse than jail

      • By __MatrixMan__ 2025-11-2319:111 reply

        Other shareholders in jail also.

        If my dog bites somebody, I'm on the hook. It should be no different with companies.

        We have to create incentives to not invest in troublesome companies. Fines are inadequate, they incentivize buying shares in troublesome companies and then selling them before the harm comes to light.

        • By mitthrowaway2 2025-11-2321:083 reply

          Where will you find a jail big enough to simultaneously imprison everyone who invested in S&P 500?

          • By __MatrixMan__ 2025-11-240:17

            You just go after the top four or five. It's not about proportional punishment, but about ensuring that those with enough power to actually affect outcomes feel a sense of responsibility over those outcomes whether or not they later divest.

            Blindly letting a CEO commit crimes should itself be a crime, but only if there's something you could've done to prevent it--that's not most shareholders.

          • By paulddraper 2025-11-2321:34

            Thanks, I have the same question.

          • By bdangubic 2025-11-2321:46

            just 500?

      • By knuppar 2025-11-2318:05

        > angry mob pitchfork calls

        > corporate death penalty

        I don't know man these don't seem very specific. From your whole comment I do agree Mark should be in jail

      • By BrenBarn 2025-11-2321:44

        I don't really get why corporate death penalty and Zuck in jail is not a good idea. It might not be the best idea, but I think it would absolutely be better than what we have now. Even a random-chainsaw-esque destruction of Facebook, Google, Amazon, and Apple would be better than what we have now.

      • By fakedang 2025-11-2322:20

        For one, I'd like the EU to use this as evidence to straight up ban Meta apps. If countries can ban TikTok, why not extend the same privilege to Meta?

        But then again, the EU are a bunch of vacuous chicken shits incapable of pulling their heads out of their arses, never mind safeguarding their own children.

      • By dkdcio 2025-11-2316:463 reply

        ban digital advertisement at a federal level and 95% of the underlying problems are solved at the incentive level

        • By Aurornis 2025-11-2316:562 reply

          > ban digital advertisement at a federal level

          This is what I meant by angry mob pitchfork ideas. This isn’t a real idea, it’s just rage venting.

          It’s also wrong, as anyone familiar with the problems in pay-to-play social video games for kids, which are not ad supported, can tell you. These platforms have just as many problems if not more, yet advertising has nothing to do with it. I bet you could charge $10/month for Instagram and the same social problems would exist. It’s a silly suggestion.

          • By dkdcio 2025-11-2316:591 reply

            literally the opposite of a pitchfork idea; quite simple, relatively easy to implement, and immediately effective. incentives from advertising is the underlying issue with the addictive nature of these platforms (and much more)

            • By Aurornis 2025-11-2317:033 reply

              > literally the opposite of a pitchfork idea

              The mere fact that commenters think banning advertising is a simple and realistic idea, without any constitutional road blocks or practical objections, is what I mean when I say these comment sections are just angry bloviating with unrealistic expectations.

              If you think banning all advertising is “simple” then I don’t know what to say, but there isn’t a real conversation here.

              • By dkdcio 2025-11-2317:122 reply

                so is it a pitchfork idea? I want Mark’s head? or is it impractical? you’ve changed your objection to my idea twice in two comments

                constitutional roadblock…to banning digital advertisement? please do explain!

                I didn’t claim it’s easy to get it done in the real world, but it’s not a reactive/vindictive pitchfork idea. it’s really not that hard, if people wanted it we’ve banned plenty of things at the federal level in this country over the years (the hard part is of course people realizing how detrimental digital advertising is)

                it’s a simple solution that’s very effective. obviously any large-scale change, to fix a large-scale problem, is not “simple” to implement, but it’s also not fucking rocket science on this one mate

                you’re clearly not having a conversation in good faith. you asked, I answered, I’m done with this

                • By Supermancho 2025-11-241:051 reply

                  > so is it a pitchfork idea?

                  What constitutes an advertisement is not a simple proposition. eg Is a paragraph describing some facts (phrased carefully) about a product or company an advertisement?

                  The extent to which speech would have to be controlled to enforce this is unthinkable. While some handwaving is necessary, as anyone can agree (since even the simplest legislation would be corrupted by the US political class), "banning advertising" is not a practical goal.

                  • By dkdcio 2025-11-2412:25

                    payment. exchange of goods or services worth $$$

                    it’s quite a simple definition of what is or is not advertisement. run it through real world examples, it’s trivial to say whether something is or isn’t an advertisement

                    as with any broad regulation there would be grey areas, continued cat and mouse games with bad actors, etc.

                    but it is not a remotely insurmountable obstacle to define what is and is not advertisement in relation to free speech

                    (as an aside it’s really funny to me anyone would consider being paid to say something free speech, but I get it)

                • By Aurornis 2025-11-2317:191 reply

                  I’ve not changed anything, I was asking for realistic suggestions. You’re throwing out unrealistic suggestions.

                  Why stop there? Why not just shut down the whole internet? Simple and effective. Ban cell phones. Simple and effective.

                  These are just silly ways of thinking about the world.

                  • By dkdcio 2025-11-2317:361 reply

                    you’re just doing ad hominems and strawmans. I’m not suggesting banning anything other than digital advertisement. you’re not open to having a productive discussion about it, just misdirection and whataboutism

                    please stop ascribing intent I do not have and words I did not say in your juvenile attempt to win an argument

                    p.s. still would love to hear your constitutional argument against it! banning digital advertisement at the federal level is not unrealistic and if you've actually given it the thought you’re pretending to and still reach that conclusion, I do have an ad hominem to throw back at you

              • By ChrisMarshallNY 2025-11-2318:302 reply

                Just FYI. For a very long time, strong alcohol ads were banned on TV, and the same with tobacco.

                I don't watch regular TV, anymore, so I don't know if it still is in place.

                Mentioning "banning advertising" on HN is bound to draw downvotes. A significant number of HN members make money directly, or indirectly, from digital advertising.

                It's like walking into a mosque, and demanding they allow drinking.

                Won't end well.

                • By integralid 2025-11-2319:571 reply

                  There's a large difference between banning strong alcohol ads, and instantly collapsing a whole huge advertisement economy (that indirectly funds most of the free things people take for granted).

                  Either I misunderstand something or I'm baffled how anyone can consider that easy.

                  • By array_key_first 2025-11-2414:17

                    Not easy per se, but definitely doable. It's a relatively new economy, there's no blood oath anywhere saying we have to allow it.

                    We've banned literally all tobacco ads and its... fine. I mean not for the tobacco company, but who cares?

                    I'm not gonna advocate making the world worse so some people stay employed. That's so counter productive. Who knows - maybe in a less shitty world, new jobs will emerge!

                • By fn-mote 2025-11-2318:571 reply

                  In this case, the suggestion of banning advertising is drawing downvotes from me because I see it as politically unrealistic.

                  At least in my state, there isn’t even a ban on advertising online gambling!! It is quite a stretch to think we could move from there to banning any kind of advertising.

                  It has nothing to do with the fact that a bunch of HN readers make money from ads. I don’t.

                  • By cycomanic 2025-11-2319:58

                    Somewhat meta question, do you believe that down voting opinions we don't like is a good way of engaging with one another on HN?

                    I wish we could discuss the issue here; I would have liked to hear from you why you think it is a politically unrealistic proposal, and what your criteria are for deeming something politically unrealistic.

              • By flag_fagger 2025-11-2318:44

                > The mere fact that commenters think banning advertising is a simple and realistic idea, without any constitutional road blocks

                Of course not, clearly you just need a captured congress and an EO. Can’t be too hard to find a reason to turn Trump against Zuckerberg.

          • By squigz 2025-11-2316:591 reply

            Banning advertisement seemed to work for smoking.

            https://tobaccocontrol.bmj.com/content/early/2025/01/22/tc-2...

            Why do you think it would be ineffective here?

            I'm also curious on how you think we might tackle these issues.

            • By Aurornis 2025-11-2317:021 reply

              The parent comment called for banning all advertising, not for banning ads promoting social media platforms.

              They don’t want anyone to be able to advertise anything. Not even your local contractors trying to advertise their businesses that you want to find, because that’s advertising.

              The tobacco ad ban isn’t relevant to what was claimed.

              • By squigz 2025-11-2319:36

                > The parent comment called for banning all advertising, not for banning ads promoting social media platforms.

                This wasn't my reading of it, but it does appear that's what GP meant. I don't agree with that. Even so, if you were interested in having a good faith discussion about solutions here, you might have responded to both interpretations.

                You may consider this me putting forth the suggestion as an answer to your question, if you must.

        • By justapassenger 2025-11-2316:514 reply

          We can also solve global warming problems by banning oil, coal and cows, and solve hunger by banning having kids.

          • By Aurornis 2025-11-2316:572 reply

            “Just ban everything I don’t like as long as it won’t impact anything I do like” is a frequent take on HN these days.

            Then when states start doing things like adding ID requirements for websites it’s shock and rage as the consequences of banning things (even for under 18s) encounter the realities of what happens when you “just ban” things.

            • By opan 2025-11-2317:05

              I think we can separate the banning of things which affect personal freedom from the rest. Like if oil were "banned", I'm imagining it's not illegal to possess oil, but rather oil companies wouldn't be able to drill it up and sell it anymore. A bit like phasing out asbestos. The ordinary people with asbestos tiles in their basement don't get into trouble, but new house builds can't/won't use that tile anymore.

              ID requirements, by contrast, put the main burden on ordinary people instead of corporations, and by extension seem clearly bad.

              • By Aurornis 2025-11-2317:56

                > Like if oil were "banned", I'm imagining it's not illegal to possess oil, but rather oil companies wouldn't be able to drill it up and sell it anymore.

                What does that have to do with anything?

                It doesn’t matter where in the chain you ban it: if you turn off oil overnight, a lot of people are stranded without their jobs, sectors of the economy collapse, and unemployment spirals out of control.

                Banning things like this is just fantasy talk that only makes sense to people who can’t imagine the consequences or who think they don’t care. I guarantee you would change your mind very quickly about banning oil overnight as soon as the consequences became obvious.

                • By fzeroracer 2025-11-2320:38

                  I'm curious: where do you draw the line? For example, leaded gas improved car performance and was arguably key to economic performance. But it was also incredibly neurotoxic and damaging to society. Do you believe banning it was a bad idea because it resulted in a lot of people losing their jobs?

                  • By warkdarrior 2025-11-241:16

                    > Do you believe banning [leaded gas] was a bad idea because it resulted in a lot of people losing their jobs?

                    Who lost their job when leaded gas was banned? A web search did not give me any examples.

                  • By mrguyorama 2025-11-2422:33

                    > For example, leaded gas improved car performance and was arguably key to economic performance

                    This is not true. We currently use ethanol to boost octane, and that alternative was known at the time to the company that invented TEL; they did not use it because they could not control the market for ethanol the way they could control the market for a new, patented chemical.

                    TEL was never actually necessary, and we poisoned ourselves for most of a century to enrich a corporation. Large-scale ethanol (as beer) production was one of humanity's earliest industries.

                    Indeed, after we banned leaded gas, we tried using yet another stupid poison additive, MTBE, for a decade or so, and that continued to poison people because gas tanks leak and that chemical was toxic. Most of Asia actually still uses MTBE, to their detriment.

                    Ethanol has never had this problem. Arguably, when Bush required all US gasoline to include 10-20% ethanol, he wasn't even trying to fix the poison problem of MTBE; he might have just been greenwashing and kicking more subsidies to corn growers. But it definitely solved the poisonous-additive problem for octane boosters.

                    Indeed, no octane-boosting additives are "required" at all. You can produce high-octane gasoline just by choosing different refined components, but this results in less gasoline produced per barrel of oil.

                • By squigz 2025-11-2319:39

                  Who suggested "turning oil off overnight"? What does that even mean?

                  GP (and I) have given you several examples of stuff society learned was harmful and then phased out with regulations/legislation. No, it didn't and does not happen overnight.

                  Why are you acting in such bad faith, trying to dismiss people you don't agree with as "not being able to imagine consequences"?

            • By terminalshort 2025-11-2319:40

              It really has turned into a bitter losers bitch fest in here.

          • By opan 2025-11-2317:01

            I was on board until the end. If we don't have kids, we're wiping ourselves out even faster than with climate change. I also wonder with oil if we'd need it for some things still, though maybe it's fine if it's made from something else. Gasoline has some obvious alternatives in most areas, but oil seems to be more than fuel. It's also a lubricant.

        • By vincnetas 2025-11-2316:54

          There is a substantial opposing force to that "US$790 billion ad market for 2024"

          • By dkdcio 2025-11-2317:57

            yep! it’d be hard, but we’re already at most people nodding their head when you say “social media is addictive, detrimental to individual mental health, and overall negative for society”

            you just got to get enough people to nod at “…and this is caused by the underlying incentives from digital advertisement” then to “and the most effective course of action is to ban digital advertisement”

            I truly don’t believe it’s a big leap, especially after a few more years of all this

      • By Refreeze5224 2025-11-245:04

        Why is the corporate death penalty or Zuckerberg in jail reduced to angry mob ideas? I think both are valid responses to the social harms that Facebook and social media generally have caused.

      • By wyre 2025-11-2316:50

        Larger fines, more robust methods for Meta to keep children off their platforms, more robust methods to stop the spread of propaganda and spam on their platforms, for Meta to start prioritizing connection between others instead of attention.

        • By loosescrews 2025-11-2318:28

          If you want a company to do something, you do need to ensure that the fine is bigger than the amount of money they made or will make by doing the thing you are trying to discourage. You need there to be a real downside. I don't think any of the fines that have been discussed are anywhere close to the levels that I am talking about.

          • By nandomrumber 2025-11-2319:53

            Don’t corporate fines often come with requirements that the company also discontinue certain activities, start certain other ones, and be able to prove this or that to a regulator?

      • By tehjoker 2025-11-2320:08

        Why not? Those are effective ideas; they’re just impractical because our political system is so insulated from public input.

      • By GOD_Over_Djinn 2025-11-2320:09

        > throwing Mark Zuckerberg in jail.

        …why not?

      • By flag_fagger 2025-11-2319:16

        Isn’t this what we have RICO for?

        > she was shocked to learn that the company had a “17x” strike policy for accounts that reportedly engaged in the “trafficking of humans for sex.”

        There’s no way in hell this didn’t tacitly incentivize the facilitation of trafficking activities through the site.

      • By worik 2025-11-2319:27

        > Serious question: What exactly do you want to see done?

        Confiscate their wealth

    • By PessimalDecimal 2025-11-2316:31

      Are there any serious attempts to enact a "corporate death penalty" in the US? Is there even a viable route to getting something like that in the current regime?

      • By dragonwriter 2025-11-2317:46

        Charter revocation is, I think, technically on the books in every state, but it’s not used for a variety of reasons, one of which is that while it destroys the corporate entity, it mostly punishes the people least responsible for any wrongdoing (it can sometimes be accompanied by real punishment for the responsible actors, but those are separate processes that don’t require charter revocation, such as individual criminal prosecution or a civil process that ends with fines, being barred from serving as a corporate officer, etc.)

      • By binarymax 2025-11-2316:34

        My opinion is that if corporate personhood is OK, then the corporation should face the same consequences as people do when they break the law. So facilitation of human trafficking should go to criminal court.

        • By dragonwriter 2025-11-2317:49

          Corporate crimes can and sometimes do go to criminal court (PG&E, for instance, was convicted of 84 counts of involuntary manslaughter for the 2018 Camp Fire, of obstruction and various criminal pipeline safety violations in the 2010 San Bruno pipeline explosion, and of various other crimes at other times), but aside from fines most criminal punishments don’t apply to corporations. You can’t imprison a corporation as such, nor can you execute it except metaphorically. So, ultimately, that’s largely just a higher-standard-of-proof route to fines than civil court (though probation restrictions are also a thing.)

          • By vlovich123 2025-11-2318:50

            You can prosecute and imprison officers and/or the board. A corporation isn’t a magical immunity shield for them - for some reason prosecutors have shied away from piercing the corporate veil.

            • By dragonwriter 2025-11-2319:46

              > You can prosecute and imprison officers and/or the board.

              Right, that's just normal individual criminal prosecution; it doesn't require prosecuting the corporation.

              Of course, it's possible for the corporation to be guilty of a crime without any individual officer or board member being guilty.

              > A corporation isn’t a magical immunity shield for them - for some reason prosecutors have shied away from piercing the corporate veil.

              Piercing the corporate veil is holding shareholders liable for certain debts (including criminal or civil judgements) of the corporation. It has nothing directly to do with criminal prosecution of corporate officers or board members for crimes committed in the context of the corporation (though there are certainly cases where both things are relevant to the same circumstances.)

            • By tpmoney 2025-11-2321:11

              I imagine that prosecutors don’t pierce the corporate veil for public companies often because “DA charges 87 year old grandmother with stake in Evil Corp in her pension fund with manslaughter, local union members brace for additional charges against members” doesn’t make for good headlines or good justice either.

              The officers and board of the company aren’t protected by the corporate veil concerning their own actions. They retain some degree of protection from the actions of others within the corporation, provided they did not have (or did not have reason to suspect) knowledge of those activities. But to my knowledge that’s not special to officers; it applies to any employee, which is why the rank-and-file Enron employees didn’t get prosecuted.

        • By Aurornis 2025-11-2317:00

          > So facilitation of human trafficking should go to criminal court.

          Be careful what you wish for.

          Who else should go to criminal court for facilitating human trafficking? The airlines, because statistically speaking they sold flights to these people? What if they used a messaging app you use, like Signal? Should the government shut that down or ban it too? I have a feeling these calls to regulate platforms don’t extend to platforms actually used by commenters; they just want certain platforms they don’t use shut down, and they don’t care how much the law is bent to make it happen, as long as it isn’t stretched against things they do like.

          • By crtasm 2025-11-2317:20

            Knowing facilitation. Facebook knows about specific users; it's not a case of statistics.

            • By twen_ty 2025-11-2319:24

              This. I've reported scammers so many times on Facebook. It's so obvious, but clearly it's not a priority for them.

          • By rfrey 2025-11-2319:59

            This is a solved problem with humans. Mens rea is usually required for a successful prosecution. A taxi driver who takes a fare to a bank, where the fare then robs the bank, is not prosecuted. A getaway driver is.

            To avoid nitpicking: the OP probably should have said "knowingly facilitates", but this is conversation, not legislation, and 99% of readers probably understood that.

          • By knuppar 2025-11-2317:20

            Nah, I'd feel pretty okay with more regulation. In your two examples, predictable crimes happened on these platforms. An airline should most definitely be liable for enabling that, just like they are liable for letting people without visas board a flight. Signal should also be liable for enabling a crime, but realistically all they could do in an investigation is hand over e2e-encrypted logs with some timestamps.

        • By benoau 2025-11-2316:49

          Exactly. And the same for games for children that somehow don't detect pedophiles spending hundreds and thousands of dollars to lure children. And the same for the platforms taking immense fees from Meta and from such games while remaining suspiciously unaware of what's going on.

        • By parineum 2025-11-2319:49

          The entire point of corporate personhood is to be able to hold corporations liable for their actions.

      • By llbbdd 2025-11-2317:17

        Only among the terminally unserious

    • By nalekberov 2025-11-2316:31

      For them these fines are just the cost of doing business. Apparently politicians don't care either; for them, imposing fines is all about bringing in extra money from time to time.

    • By chad_c 2025-11-240:07

      Capital is orders of magnitude more powerful than labor. Until that changes, this story will be repeated.

    • By nccn67 2025-11-2320:11

      It's basically like the history of money before banks got regulated and central banks emerged to regulate money printing. In this case it's all about attention, which is functioning exactly like currency.

    • By kspacewalk2 2025-11-2316:37

      Specifically when it comes to children, lots of jurisdictions are enacting actual non-bullshit age verification to ensure children aren't on social media. In my opinion this is real, substantive change.

    • By webdoodle 2025-11-2316:50

      They aren't going to stop, because LifeLog was a DARPA project before they found a private stooge to build it for the military. Remember, it's only dystopian to spy on every aspect of a person's life if YOU'RE THE GOVERNMENT. Private entities in the U.S. basically can do anything they want, especially now that they can rent a President to pardon it away.

    • By kiba 2025-11-2317:39

      They're monopolies. Break them up, heavily regulate, or tax their economic rent privileges.

      Georgism offers a good lens on these kinds of issues. All of a sudden, late-stage capitalism starts looking like monopolies.

    • By blactuary 2025-11-2316:34

      We have to vote with our dollars/attention and stop using their products. That includes pressuring our friends and family to stop using them.

      • By binarymax 2025-11-2316:36

        Has that ever really worked? And considering Meta has billions of users on not just Facebook but also WhatsApp and Instagram, I’m skeptical. I know people who hate Meta but can’t shake Instagram.

        • By blactuary 2025-11-2317:14

          Then we are well and truly cooked, if we are so addicted to a specific photo-sharing platform that we will let this abuse continue

        • By armchairhacker 2025-11-2318:23

          It protects you and your friends+family from the negative effects of using Meta platforms.

          • By Fargren 2025-11-2320:31

            It does not. Social media platforms have had massive societal impact. From language, to social movements, to election results, social media has had effects, positive or negative, that impact the lives of even those who do not use them.

      • By tjpnz 2025-11-2317:51

        When my kids were born I told my family I wouldn't be posting their pictures on any Meta owned platform. That was all I needed to move the family group, photos etc. to another app.

        • By ares623 2025-11-2318:32

          Which app did you move to?

  • By blaufast 2025-11-2317:00

    So much of this audience already knows the job is to collect comprehensive analytics and never run the analyses on your product’s externalities.

    To be obvious enough to downplay, it must be impossible to miss while looking the other way. To be impossible to miss, it must be inextricably linked to the profits.

    • By dalka 2025-11-2317:07

      It's even more egregious in this case because Meta's employees were turning a blind eye to child sexual exploitation that they knew full well their work was enabling.

      Maybe those fat bonuses and generous stock options wiped away the feelings of guilt, if these Silicon Valley sociopaths even felt any in the first place.

      • By fn-mote 2025-11-2319:29

        > You could incur 16 violations for prostitution and sexual solicitation

        So although this is being spun as “trafficking”, that doesn’t seem accurate.

        This classification sounds like it includes selling “your own services”.

      • By olelele 2025-11-2323:16

        Similar to the Roblox debate raging now, no?

      • By blaufast 2025-11-2318:58

        We’re all just trying to get our nut.

HackerNews