X blocks Taylor Swift searches after fake AI videos go viral

2024-01-28 14:22 · www.ft.com


Comments

  • By maxbond 2024-01-29 1:26 (6 replies)

    If you believe AI has tremendous potential to do good - you should sit upright and pay attention when it does harm.

    Because it's up to people like you to demonstrate to society that it does more good than it does harm. And if your response is to roll your eyes and say that everything is fine because people could do this in Photoshop - you are making yourself irrelevant. You are telegraphing that your support of AI is untethered to the harms normal everyday people care about. People aren't dumb and they're going to notice that you aren't in their corner. And then they're going to start deciding what to do about AI without you.

    I am reminded of one of my favorite HN comments, made in response to Amazon automatically recommending "suicide kits" to people.

    > As a developer, as a data scientist, as a designer, as a business leader, you are responsible for things like this. Do not make excuses. Do not disclaim liability. Acknowledge the mistake and fix it. We can do better.

    > -- 'kitanata

    https://news.ycombinator.com/item?id=33142941

    • By roenxi 2024-01-29 6:17 (2 replies)

      The harm here isn't AI, or if it is, the harm is minimal. Media companies have routinely organised vicious smear campaigns since forever. If you spend time really watching politics, you'll see grave injustices peddled by insensitive mobs. I expect we can date that sort of activity back to Roman times.

      AI is going to make life pretty miserable for good-looking women in the public eye. Maybe. I think we can probably social-out the interest as this becomes widespread; it isn't like this actually reflects badly on Taylor Swift. But that has always been a serious hazard of public life.

      We've had this sort of thing happen before [0]; IMO this could be argued as harm reduction. I'm not sure the argument would be carried, but there is a case to be made. The important thing is to acknowledge this is easy to do and everyone agrees that nudes should be ignored.

      [0] https://en.wikipedia.org/wiki/2014_celebrity_nude_photo_leak

      • By deafpolygon 2024-01-29 6:24 (1 reply)

        I think the real harm here is that someone like Taylor Swift can have her searches scrubbed. What about Jane next door? She has no such recourse.

        • By oceanplexian 2024-01-29 15:15 (1 reply)

          It’s actually the opposite. X is making these changes on their own. An individual out of the spotlight could probably sue.

          In the law, public figures are not protected the way private individuals are. When you sell out and willingly make millions of dollars exploiting your likeness, you don't get to choose whether people make a parody of that likeness.

          • By deafpolygon 2024-01-29 15:54

            Yes, X is doing it because Taylor Swift is an important figure for their platform. Jane could sue, but X would claim no responsibility and there would be no loss to X if she left the platform.

      • By maxbond 2024-01-29 6:48 (2 replies)

        > The harm here isn't AI, or if it is the harm is minimal.

        > AI is going to make life pretty miserable for good looking women in the public eye.

        Could you help me understand how you reconcile these two statements? Is the harm minimal, or will women in the public eye be miserable? Or is making women in the public eye miserable not something to take seriously - and if so, why?

        I'd also be curious if you think the harm will stop there - could you foresee maybe, high schoolers using deepfake porn to bully each other, con artists using cloned voices to commit fraud, deepfake evidence being used to frame someone for a crime, etc? Is there a reason I shouldn't be concerned about these prospects?

        • By roenxi 2024-01-29 7:44 (1 reply)

          Being in the public eye is already miserable. The public eye appears to be a close relation to the Eye of Sauron. These people smile a lot, but they don't look like they're having much fun.

          The existence of AI will make life a bit worse for some of them. But life being better or worse doesn't really reach my standard of 'more harm' here. I suppose what I was thinking is a bit like if someone cuts in front of me in a queue. I might be worse off, certainly could feel annoyance, but I wouldn't say I'm harmed. I just don't like it. For harm to have happened I would actually have to be ... I dunno, harmed.

          Ms. Swift already has to deal with literal stalkers. The baseline of what public people are expected to put up with is already high. I'm not sure this thing is really moving the bar.

          > I'd also be curious if you think...

          Two-thirds of those (bullying, miscarriage of justice) are already major problems. The legal system is already a crapshoot; it does its best but makes a lot of mistakes. I could see the net effect of AI being helpful in both cases; I'm more excited about neutral AI judges than I am worried about evidence quality dropping off.

          Con artists, we'll see what happens. The major protection I've relied on to date has been rollbacks and balances rather than preventing fraud up front. Social engineering is hard to stop.

          You can feel concerned about whatever you like, of course.

          • By maxbond 2024-01-29 18:19 (1 reply)

            To be frank, this is nonsense. People in the public eye are miserable, so it doesn't matter if they're more miserable? Making life worse isn't harm? I don't even know what harm would mean, then.

            I'm always bemused when people counter with the observation that something is already a problem. Okay? So then we should agree that more of it is also a problem, right? It's like saying the invention of the machine gun wasn't a factor in WW2 because we already had guns, and before that, crossbows. There was a phase change with the machine gun and things were quite different.

            Similarly, if people are able to produce deepfakes at an industrial scale, things will be very different.

            • By roenxi 2024-01-30 1:33 (1 reply)

              > I don't even know what harm would mean, then.

              Suffering some sort of detectable injury, for example. I'm hedging a bit because there could easily be something obvious I'm not seeing, but I don't really see how the Taylor Swift of 2024 is worse off than the one of 2023 here. She hasn't suffered any physical harm. I don't think this harms her reputation. It isn't going to cause her financial prospects to drop. There is emotional harm, but that has always been a bit of a cop-out that people use when they just don't like something, and I don't see why it has to cause emotional harm. There doesn't have to be any stigma around this; it looks like it is just going to be one of those things that happens.

              Automated fake porn is obviously different and I can see why she might be upset. But it isn't so clear she - or anyone else - has actually been harmed in any serious way. I get upset by a bunch of things that are frankly more materially impactful on me than this is on her and it isn't appropriate to say they harm me. Tax springs to mind as a light example.

              And speaking of WWII, we'll note that although war became much more horrific, it became so much more horrific that we seem to have stopped doing so much of it. So although objectively the machine gun makes fighting much worse, it actually led to less killing overall.

              • By maxbond 2024-01-30 8:32 (1 reply)

                > There is emotional harm...

                I agree. Things like this are potentially traumatic. This reduces to a form of cyber bullying. Cyber bullying has gotten people killed.

                > [T]hat has always been a bit of a cop-out that people use when they just don't like something...

                I disagree. Furthermore, I would describe saying that there wasn't harm, and then admitting that there was but that it doesn't count, as a cop-out or equivocation.

                > Tax springs to mind as a light example.

                I shouldn't need to point out why comparing harassment and taxation is apples to oranges.

                > [A]lthough war became much more horrific, it became so much more horrific that we seem to have stopped doing so much of it...

                We do a huge amount of war. I'll wager you're not aware of all the conflicts categorized as a "war"/"major war" in the list below (I wasn't when I came across it a few days ago).

                https://en.m.wikipedia.org/wiki/List_of_ongoing_armed_confli...

                I'm not even sure how to respond to the idea that war getting more horrific is a boon except to say that you should think on that again.

                • By roenxi 2024-01-30 23:21

                  Bullying is targeted though. I doubt that this instance is someone trying to annoy Swift. She could theoretically not have noticed the episode at all and the faker would still be happy. I don't see why this has to make her feel like she is under some sort of attack. She isn't; most of the people looking at this stuff think she is a very respectable human being. Some fanatically think she is one of the best.

                  > I shouldn't need to point out why harassment and taxation is an apples to oranges comparison.

                  Sure. But you can see how (1) me being worse off and (2) me being upset about it are not sufficient for people to be persuaded that harm is being done to me. I mean, technically there is a point there but most people don't consider it appropriate to describe taxes as "harmful".

                  * War and Peace

                  There is a lot of war in the world, but if you look at the overall trends we're actually in a period of remarkable peace [0]. The absolute number of wars is way down, and death rates due to war are way down (note that population growth is exponential, so absolute numbers being a little higher actually means rates are dropping).

                  [0] https://ourworldindata.org/conflict-measures-how-do-research...

        • By bvvg 2024-01-30 5:08

          > ...deepfake evidence being used to frame someone for a crime, etc? Is there a reason I shouldn't be concerned about these prospects?

          This is already happening, I'm reasonably certain, and until something of substance is done about it on a systemic, legal level, "deepfake" proliferation will lead to further systemic 'othering' of already-marginalized people at the expense of our own humanity.

          Eventually, some of us (humans) won't be able to worry so much about engineering heaven-on-demand, as it were, when suddenly we find ourselves limited in ways we never could've imagined or ever hope to escape from.

          People will continue to point and laugh by way of social media, but for how long and at what, or more importantly, whose expense? These are serious questions, which the general public had been plainly ignoring until recently, and that's being overly favorable.

    • By squigz 2024-01-29 2:49 (2 replies)

      I fully agree with this sentiment - but what is the solution? I think most people do agree that such uses of this technology are horrible, but what can we really do about it? We can't put the cat back in the bag, and while efforts to censor the more popular services are commendable (if it's even possible to do so), that's not really going to stop bad actors from doing it.

      • By maxbond 2024-01-29 3:12 (1 reply)

        I don't pretend to have the answers, my point is more that if you want to be a part of the discussion of what a solution is, you need to have more than a shallow dismissal. We're still looking at the tip of the iceberg. Deepfake-enabled harassment, fraud and propaganda haven't seen extensive use yet. When and if they do, "you could do this with Photoshop" is going to sound increasingly out of touch.

        This is a social problem enabled by technology, so the solution is necessarily a social one with technology to support it.

        I don't know if this fits existing legal frameworks or if we'll have to put new laws on the books, but we should treat this as harassment. We should make it clear to younger generations that this isn't okay and why.

        On the flip side, I imagine people will get more defensive about their biometric data. We'll probably start to regard photos and voice recordings as being very private. It probably won't be as socially acceptable to post them. It will probably be more normalized to use an avatar and synthesized voice on things like Zoom calls.

        But this is all very speculative, so I won't be too surprised if things go very differently than I anticipate.

        • By 15457345234 2024-01-29 11:03 (1 reply)

          > It will probably be more normalized to use an avatar and synthesized voice on things like Zoom calls.

          Which creates a subsequent issue of it being completely impossible to authenticate the person you're talking to. Which will inevitably lead to companies being completely cored out by exploiters who, over the long term, infiltrate networks and give instructions to subordinates 'as the boss' until the company fails.

          • By maxbond 2024-01-29 21:02

            I don't really follow that chain of reasoning.

      • By jstarfish 2024-01-29 4:35

        Sue everyone involved for defamation. No more immunity for hosting user-generated content because people abused the privilege. Problem solved.

        The bad keeps happening because there is no financial incentive to change. Platforms are all too happy to look the other way because there's no risk to them. Change that.

        As much as I find them distasteful, I'll agree that deepfakes should be outlawed as harassment the same day someone holds teenaged girls spreading rumors to the same standard.

    • By Gunax 2024-01-29 6:18 (1 reply)

      Well I am going to bluntly disagree, sorry. I am dismissive. But in the end, I will be right and you will be wrong. That's the pattern in technology.

      1. Some technology is invented.
      2. It changes the status quo, directly or indirectly, in society.
      3. There is some degree of backlash, regulation, and outrage.
      4. Everyone accepts the new reality and forgets what it was even like before.

      Step 3 is often forgotten and considered quaint.

      The reality is this is an unsolvable situation. Porn deepfakes are here. There is nothing that will change that. Twenty years from now, celebrities will just accept that they are going to exist.

      I'm not even arguing that the technology is good (I'm not sure it is) or that I support it.

      I am arguing that it's inevitable.

      You think I am going to be irrelevant, but I think it's the opposite.

      In the last century we have invented numerous technologies responsible for megadeaths (cars, air pollution, carpet bombing). If those couldn't be stopped after killing thousands every year, then neither can this.

      • By maxbond 2024-01-29 6:34 (2 replies)

        Your step four is incomplete. In addition to accepting a new normal, we put limits on technology and address the harms. We didn't roll over and accept, say, the destruction of the ozone layer. We put restrictions on the use of CFCs, and we invented new refrigerants that were much less harmful. Though there has been some recent backsliding due to rogue emitters, these measures were effective at stopping and eventually reversing the degradation of the ozone layer.

        Some more examples include unleaded gasoline, brakes without asbestos, paint and cosmetics that don't include heavy metals, light bulbs that are much more efficient, etc. We've also figured out how to make fuels that don't contain as much sulfur, thus avoiding issues with acid rain.

        This is an attitude I see on HN a lot. Technological innovation is seen as unstoppable and all powerful - until it's time to innovate new ways to address the problems we've created, and then there's apparently no possibility of coming up with innovative solutions. We're to accept ourselves as passive actors, caught up in the currents of history, with no agency to alter course.

        An example that I find haunting is OceanGate and Stockton Rush. Rush was all for being innovative when it came to novel submersible construction. But when his engineers wanted to inspect the hull for degradation - can't be done, the hull is too thick to scan. You might expect someone who views themselves as a fearless innovator to see that as an opportunity to develop a new methodology or technology. But Rush was only a fearless innovator when it pleased him, and so he took himself and his customers to their deaths.

        Frankly I think this attitude is a bad excuse, and not one that society will tolerate forever. This is the path to crippling regulations and trust busting. We can't just foist our externalities on the rest of society forever and not expect them to eventually decide we're incapable of self regulating.

        • By veeti 2024-01-29 7:15

          I can't help but see a parallel to the 90's crypto wars and end-to-end encryption debate. The technology and models to make these fakes are already out there and freely downloadable by anybody, and unlike asbestos brakes or incandescent light bulbs, they aren't going to naturally expire at any point.

          Even if the most crippling legislation were passed to ban such technology for law-abiding citizens, criminals would still have their E2E messaging and the deepfakers would have AI at their disposal. Hate to be a doomer, but the genie is already out of the bottle; what can you do?

        • By Gunax 2024-01-29 7:07 (2 replies)

          I didn't write this because it was too long already, but it depends on the technology.

          I agree that CFC bans are a good example of when something was done.

          But I can also name many situations in which nothing effective was done. And even in the face of overwhelming effort to halt it, the resistance amounted to little difference.

          The difference all comes down to the technology.

          So, my argument is that there are some technologies which can be addressed, and others which cannot.

          Solvable problems tend to be those with few decision makers, low incentives to defect, monitorable adherence, internal costs, and where the problem exists at the top level of large organizations.

          Unsolvable problems tend to be those with lots of decision makers, high incentives to defect, hard-to-detect violators, external costs, and where the problem is personal.

          CFCs fall more into the former--there just aren't that many CFC manufacturers (apparently 18 companies accounted for > 99% of production). Consider for a second an alternate world where everyone builds their own refrigerants in their garage, and each individual decides whether to use CFCs or an alternative. Do you think we would have solved CFCs by now?

          This particular issue is very much the latter. The tech is already out in the open. The decision to use it responsibly or not is left to individuals.

          You mentioned foisting externalities on the rest of society. I agree that this is happening. And I agree that society will not stand for it. But my point is not that society is going to accept it, it's that society cannot do anything about it. Oh, people will shout and stomp and raise hell. Hearings will be held. People will be lambasted in front of Congress. Voters will demand that politicians do something. And efforts will be attempted. But in the end, it will all be for naught.

          I realize this is a doomer outlook. Please don't mistake what I am saying as an endorsement.

          • By brenschluss 2024-01-29 7:42 (2 replies)

            Here's a scenario:

            It's 2030 and deepfakes run rampant: political deepfakes, celebrity fake porn, CSAM, fake revenge porn, etc.

            Apple builds on their Spatial Photo feature, and their newest smartphones allow POR, or Proof of Reality.

            Proof of Reality: data from multiple cameras and a LIDAR sensor are stitched together to generate a 3D depth map + color map that can validate that a photo or video was shot without post-production manipulation. This processing is done on-chip, on a Secure Enclave-like chip, and cryptographically embedded in the resulting photo. Each raw capture starts as the first block of a blockchain; further images or videos created from this raw data are understood to be downstream of this first block.

            A piece of Proof-of-Reality media, edited together from multiple clips or images, can be cryptographically verified to be composed of individual Proof-of-Reality media, like a Merkle tree.
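            The provenance chain in this scenario can be sketched in a few lines of Python. Everything here is hypothetical illustration (the record format and function names are invented, not any real POR or C2PA API): each derived clip commits to its own content hash plus its parents' hashes, so any upstream substitution breaks verification downstream.

```python
import hashlib

def h(data: bytes) -> str:
    """SHA-256 hex digest of raw bytes."""
    return hashlib.sha256(data).hexdigest()

def capture_record(raw_bytes: bytes) -> dict:
    # In the scenario, this hash would be signed on-device by a secure
    # enclave at capture time; here we just record the content hash.
    return {"kind": "capture", "hash": h(raw_bytes), "parents": []}

def derived_record(edited_bytes: bytes, parents: list[dict]) -> dict:
    # A derived clip commits to its own content plus its parents' hashes
    # (Merkle-style), so swapping any upstream clip changes this hash.
    parent_hashes = [p["hash"] for p in parents]
    payload = h(edited_bytes) + "".join(parent_hashes)
    return {"kind": "derived", "hash": h(payload.encode()),
            "parents": parent_hashes}

def verify(record: dict, content: bytes, parents: list[dict]) -> bool:
    # Recompute the commitment and compare it to the claimed hash.
    if record["kind"] == "capture":
        return record["hash"] == h(content)
    payload = h(content) + "".join(p["hash"] for p in parents)
    return record["hash"] == h(payload.encode())
```

            The missing piece in practice is binding the root capture hashes to a trusted sensor; that is what the scenario's on-chip signing would provide, since the hash chain alone only proves internal consistency.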

            Apple pioneers the first fully Deepfake-proof media workflow. Consumers can watch news media or social media while being cryptographically assured that it wasn't AI-generated.

            2031: Proof-Of-Reality (POR) starts to catch on in public. Samsung gets on the bandwagon, and develops their own version (or joins a POR consortium). Soon, 40% of media is POR-validated, following the usual smartphone & OS update statistics.

            2032: A particularly egregious deepfake scandal from a non-POR source drives the rush towards POR standardization. Apple and/or the POR consortium partners begin to produce more professional-level POR camera equipment. Content blockers that block non-POR media are developed.

            2033: Certain social media websites begin to place warning labels on non-POR media. POR media makes up 70% of all news & social media.

            2034: News media companies fully switch over to a POR-workflow. Browsers start adopting non-POR labels for content, like Twitter's 'Community Notes'.

            2035: Deepfakes as we know them are mostly hidden from the public eye, but continue to evolve and change in unexpected ways...

            • By Gunax 2024-01-29 7:53 (1 reply)

              I love it! Though I am not an expert, I can at least see this.

              However, I hate to be a nitpicker, but I think this is solving a separate issue. I don't think this issue is about authenticating the legitimacy of deep fake porn. Rather, the mere existence of it is the issue.

              That is, people don't care that it's fake. People don't want to buy the Apple proof of reality because they don't want reality.

              • By brenschluss 2024-01-29 8:13

                Good point! Then:

                2036: Due to increasing amounts of deepfake CSAM, the US Congress passes a law against nonconsensual deepfake porn, requiring "websites of a sexual nature" to be POR-compliant or be shut down. Porn web companies, ISPs/hosting providers, and credit card processors alike are legally liable.

                Pornhub welcomes this change with cheeky 'PORnhub' branding, but the reality is that change is necessary or they will be sued out of oblivion.

                Prosumer platforms like OnlyFans welcome POR validation with open arms, because it bolsters their image of authenticity. Exploiting the ban on deepfake porn, "softfake porn", where celebrity look-alikes create porn, becomes mildly popular.

                2037: Eventually, it is ISPs, hosting providers, and credit card processors that instigate the change. Much like SESTA/FOSTA's impact on sex workers in the early 2020s, payment processors and ISPs refuse to work with POR-unvalidated porn sites. Porn sites shift towards POR compliance and create new niches.

                Of course, underground deepfake porn still exists, if you know where to look. But by now, its association with CSAM makes it very inaccessible and disdained.

                Gone are the days of rampant deepfakes in the late 2020s and early 2030s. Mainstream media and politics call this a success, but a minority are angry, saying that deepfakes are a creative act and that the effective ban on POR-noncompliant material is a further restriction on creative liberties...

            • By hanselot 2024-01-29 16:24

              2030: Deepfakes are rampant, causing significant issues in politics, entertainment, and personal privacy. However, instead of technological solutions, there is a growing trend of regulatory capture. Large corporations and governments begin to argue that deepfakes are an inevitable part of the digital landscape. The cost-effectiveness of creating deepfake content compared to traditional media production becomes a significant talking point.

              2031: As deepfake technology becomes more sophisticated and cheaper, it starts to replace traditional media production methods. Major studios and media companies lobby for and receive regulatory approval to use deepfakes as a legitimate form of content creation. This shift is justified by the reduced cost and logistical ease of using AI-generated characters instead of real actors.

              2032: A scandal arises involving a particularly damaging deepfake, but instead of driving a push towards authenticity verification technologies, it leads to further normalization of deepfakes. The argument is made that since distinguishing between real and fake content is increasingly difficult, society should adapt to accepting deepfake content as a new norm.

              2033: Social media platforms and news outlets begin to openly embrace deepfake technology, citing cost reduction and the ability to generate more engaging content. Traditional media actors and creators are increasingly marginalized, with deepfake creators dominating the market.

              2034: Regulatory bodies, heavily influenced by big tech and media conglomerates, begin to actively promote deepfake content. New regulations make it easier for deepfake content to be produced and disseminated, while traditional media production is bogged down by increased costs and regulatory hurdles.

              2035: The public gradually accepts deepfakes as the primary form of digital content. Traditional media, with real actors and genuine locations, becomes a niche market due to its higher production costs and complexity. Deepfakes evolve in unexpected ways, permeating every aspect of digital media and blurring the line between reality and AI-generated content.

          • By maxbond 2024-01-29 7:36

            I understand you are not offering an endorsement, and I appreciate that we can disagree without being opposed.

            I think our disagreement is quite narrow. I subscribe to the view that everything in history is contingent and nothing is inevitable. I gather you believe some things are inevitable, and that the particulars of a technology might make certain things inevitable. Thank you for elaborating.

            Sometimes we're not able to mitigate the harms of technology; fossil fuels come to mind. But I would argue this has next to nothing to do with fuel technology. It's because there are powerful people in our society who have successfully slow-walked a transition to alternatives. But they haven't managed to stop it altogether - solar, wind, and EVs are seeing widespread deployment.

            The tech industry has been able to do this to some extent, but I don't think they have nearly the political capital of the fossil fuel industry. We're seeing increasing calls in the US to regulate the tech industry and to break up the larger companies. While the two parties aren't able to cooperate on nearly any legislation at this time, this attitude is bipartisan. The EU has already begun to regulate the tech industry. (I can't comment usefully on the rest of the world.)

            I do agree that the properties you enumerated make a problem easier or more difficult to solve, and that there's no way to stop people from making deepfake porn altogether. But that doesn't mean there isn't anything to be done about it, even if you and I might not be smart enough or positioned correctly to understand what that might be.

            My goal is to convince people not to give up on the idea that something could be done, and not to become entrenched in a certain position. E.g., if someone doesn't like that X responded by disallowing searches, I don't want to have yet another argument about whether or not this is a free speech issue. I want to hear that person offering a critique I haven't heard before, asking interesting questions, suggesting bad or partly formed ideas about what to do differently. And I don't want to see the rest of us start a flamewar beneath their comment; I really want to see the rest of us pointing out weaknesses in a proposed solution and how we might address them. I want to see this community being creative and constructive about this, and I believe that really could make a difference.

            If we set aside doomerism, the worst thing that can happen is that we're wrong. That's not so bad; let's chance it.

    • By RecycledEle 2024-01-29 11:16 (2 replies)

      Ohhh nooo...A celebrity got mocked online...how horrible!!1I! /s

      I assume the deep fakes were mild compared to what goes on in some of her fans' minds.

      • By maxbond 2024-01-29 18:16

        I wouldn't say she was mocked, I would call this harassment. You don't stop having rights when you become a celebrity. I also don't believe it's safe to assume this sort of thing will only happen to celebrities going forward.

        I think you know the difference between imagining something and publishing something, and that you're being facetious.

      • By Zetobal 2024-01-29 19:42

        Send me a face portrait and the address of your employer and some of your friend group and I will mock you as well. Just so you personally experience it and see what a load of shit your comment is.

    • By dryanau 2024-01-29 2:31 (2 replies)

      What on earth are you on about? I just built some classifiers to run in industrial settings; why is it my job to do or say anything about Taylor Swift deepfakes?

      > Because it's up to people like you to demonstrate to society that

      Condescending as shit.

      • By dang 2024-01-29 21:03

        Yikes, please don't break the site guidelines like this, regardless of how bad another comment is or you feel it is. We have to ban accounts that post this aggressively, and I don't want to ban you.

        If you wouldn't mind reviewing https://news.ycombinator.com/newsguidelines.html and taking the intended spirit of the site more to heart, we'd be grateful.

        Edit: Fortunately your other comments seem fine so this shouldn't be too hard to fix.

      • By maxbond 2024-01-29 2:52 (1 reply)

        I'm sorry that I made you feel patronized.

        If you just wanna make your classifiers and tend to your own knitting, then I have no problem with that. There are a lot of people in this community who vociferously advocate for AI, to the point that I view them as political activists for AI, and it's food for thought for them. But that's a mantle they've chosen to take on, it isn't mandatory.

        • By jacquesm 2024-01-29 6:15

          It's interesting how 'any use of the tool' gets mixed up with 'responsible use of the tool' and that people who engage in responsible uses of the tool somehow feel addressed when it is very clear that you didn't have them in mind when you wrote that.

    • By skybrian 2024-01-29 1:52

      This is not how blame works. You have no idea who is reading your message.

      • By jacquesm 2024-01-29 6:01

        Blame and personal responsibility are not necessarily connected. Blame is imposed from outside, personal responsibility is uniquely yours: you either accept responsibility for your actions or you do not and if you do then chances are that that will modify your behavior.

        • By skybrian 2024-01-29 22:23

          It seems like writing about how someone else should take responsibility is blaming by this definition, since it comes from the outside?

          Or maybe a vague exhortation to do better doesn't count.

      • By maxbond 2024-01-29 2:00

        I have no interest in blaming anyone. I'm talking about an attitude I see in this thread, which I believe is shortsighted.

        • By skybrian 2024-01-29 3:28

          Your post is warning about how other people will be held in judgement:

          > People aren't dumb and they're going to notice that you aren't in their corner. And then they're going to start deciding what to do about AI without you.

          I find it rather dubious. Who are these people who will remember what some random Hacker News commenters said? Why would they be listening to them in the first place? It seems like an empty threat.

          • By maxbond 2024-01-29 3:34

            It's not a threat, that's a pretty uncharitable reading. Society will realize that everything is not fine, and that the group of people who insist that it is are to be ignored. I don't imagine that will involve singling out individual HN users.

            If this is a subject you want to discuss then I'm here to discuss it. If you want to dissect my language further, you're free to do so. But I'm not interested in arguing about semantics or tone and this is my final word on it.

  • By Khaine 2024-01-28 22:11

    Fake celebrity porn has been a thing for the longest time. In the 90s people used to airbrush the face/head of a celebrity onto the body of a nude model. Prior to that, people used their imaginations.

    What has changed since then? I guess technology now makes this easier; have fakes like this become more socially unacceptable?

    • By dharmab 2024-01-28 22:18

      The change is now it takes about 5 seconds and a nice-ish computer instead of hours of skilled work.

      • By _heimdall 2024-01-28 22:30

        So is the problem that fake porn is being created, or the ease with which it can be created?

        The latter almost makes it sound like fake celebrity porn is more of an art form that should be appreciated and guarded from cheap knock-offs.

        • By anonymouskimmer 2024-01-28 23:37

          I'm skimming the comments here and barely anyone is mentioning the increased ease of distribution. It has evolved from upload to a single BBS, to upload to Usenet, to place on your own website, to place on a forum, to distribute like mad with retweets.

          > The latter almost makes it sound like fake celebrity porn is more of an art form that should be appreciated and guarded from cheap knock offs.

          Cheap, high-quality forgeries are still forgeries. Yes, forged currency did become an item more mentioned in the news when inkjet printers came into common use. This doesn't mean the old-school forgeries were "works of art worthy of appreciation and protection", or were ignored by the local news media when pertinent to the location they reported on.

          • By _heimdall 2024-01-28 23:48

            > I'm skimming the comments here and barely anyone is mentioning the increased ease of distribution. It has evolved from upload to a single BBS, to upload to Usenet, to place on your own website, to place on a forum, to distribute like mad with retweets.

            This is where I'm stuck: I don't really know where I land on whether it's a problem we need solved or just a fact of how some people will use new tech. I wouldn't want someone posting fake nudes (or any photos for that matter) of me online, but I'm not sure if I have a right to stop them.

            Most aren't free speech absolutists, though I would hope that most would take issue with the idea of further limiting free speech based on new technologies that we don't have direct control over. Are there certain things I can write in a book but not write online? Or things I can write on my own site but not post to Twitter? I had no say in Twitter's creation, how does it have an impact on my personal freedoms?

            Edit: in addition - if I take issue with the current legal limit for free speech, can I simply develop a new form of distribution and claim that we now need to further limit free speech in response? That feels like a dangerous backdoor to attack constitutional rights without going through the courts or congress.

            • By UncleMeat 2024-01-29 0:35

              What is the purpose of free speech? It allows for the undisturbed transmission of ideas, criticism of the government, and engagement with politics and political leaders. Widely distributing fake porn using somebody's likeness without their permission is absolutely not providing any sort of social good and is damn far from the key forms of speech that are important for a healthy social system.

              This isn't a backdoor to attack your constitutional rights. Not even close.

              • By _heimdall 2024-01-29 2:09

                Your initial definition doesn't mention social good, though you follow it up with an assumption that social good is some kind of check on where free speech ends. Where does this come from? I know it isn't in the constitution, but is there some legal precedent set that more clearly defines this line?

                > This isn't a backdoor to attack your constitutional rights. Not even close.

                I'd be really curious to hear more here. My specific question and proposed backdoor was specifically in response to the idea that free speech may somehow be limited based on a measure of how easily current technology allows said speech to be distributed. What did I miss there and how is it not a backdoor?

              • By anonymouskimmer 2024-01-29 0:39

                It also allows the spread of enjoyable pastimes and a whole host of other non-political speech that some may find objectionable. There are a whole host of interesting pre-Code Hollywood films for instance, that aren't pornographic, but still had important or humorous messages.

                • By UncleMeat 2024-01-29 0:48

                  Sure, we can expand the values of free speech as far as you want. Making and distributing porn of somebody using their likeness without their consent is unethical behavior that is not some pillar of any social good derived from the broad principle of free speech.

                  If you want to fight for free speech there's plenty of other actually useful battles.

                  I'm serious. Just once, I'd love a free speech absolutist in these situations to talk about all the work they are doing to prevent retaliatory arrest or expand the speech rights of students or try to get Thomas (who wants to overturn Tinker) or Alito (who voted by himself in Snyder v Phelps) kicked off the bench.

                  • By _heimdall 2024-01-29 2:18

                    There must be more behind this social good metric that you keep bringing up, care to cite where that comes into play?

                    What merits social good, and who decides it? For example, if I go to a train station and tell people horses have purple hair, is that not free speech? I wouldn't consider that ridiculous statement a social good, it really serves no value and doesn't better society. Am I breaking a law or ethical standard if I do this?

                  • By thfuran 2024-01-29 1:10

                    >Making and distributing porn of somebody using their likeness without their consent is unethical behavior

                    I suspect not everyone agrees with that. Are you willing to trade away some of the things you think should be permitted but they don't for the ban?

                    • By anonymouskimmer 2024-01-29 1:20

                      First we should see whether it's possible to outlaw non-consensual distribution of deepfake porn on its own. If that's possible then we don't have to consider the alternative of a more general ban.

                      • By oceanplexian 2024-01-29 15:25

                        What about lookalikes? Should we make it illegal for a porn star to say, dress up as a famous actor or actress? That has been going on for decades.

                        What about drag shows where they impersonate a famous figure and do a sexually suggestive performance? How much protection should the “likeness” of ultra-wealthy superstars get?

                        • By anonymouskimmer 2024-01-29 19:19

                          Famous people have fewer rights against parody than do non-famous people. I'm not in favor of making it illegal, per se, but depending on the limits of the law I might be in favor of requiring licensing (i.e. legal permission) to do so for all but parody (which would necessarily take some effort to be obvious as parody).

                          This article has a good discussion on the lookalike topic of which I'm excerpting some: https://www.publicethics.org/post/the-ethics-of-deepfake-por...

                          : Suppose I have an indistinguishable look-alike named ‘Danny’ in the pornography business. Danny’s videos are not deepfakes and do not depict me, but people who watch Danny’s videos mistakenly think they depict me. Consequently, the videos affect me in the ways just mentioned.

                          : I suspect that the most fundamental morally relevant difference between distributing deepfake pornography of me and distributing Danny’s pornography is that the deepfake pornography depicts me while Danny’s pornography, despite appearances, does not.

                          : Now, if you use that picture to make a deepfake, then that deepfake depicts me, because it’s based on an image that depicts me.

                          : Just as it would be wrong for someone to nonconsensually distribute a stick figure illustration of me having sex to students in my classroom, so it’s wrong for someone to nonconsensually distribute deepfake pornography of me on the internet.

                        • By thfuran 2024-01-29 17:42

                          >Should we make it illegal for a porn star to say, dress up as a famous actor or actress?

                          And what about unflattering but non-pornographic impersonation?

                      • By thfuran 2024-01-29 15:20

                        Why is your opinion of what should be allowed more important than theirs? And do you really want to make everything unethical illegal?

                        • By anonymouskimmer 2024-01-29 19:22

                          > Why is your opinion of what should be allowed more important than theirs?

                          Least harm.

                          > And do you really want to make everything unethical illegal?

                          By definition, YES! Or at least actionable civil torts.

                          https://www.dictionary.com/browse/unethical

                          : lacking moral principles; unwilling to adhere to proper rules of conduct.

                          : not in accord with the standards of a profession

                          • By thfuran 2024-01-29 19:44

                            Have you ever eaten pork or beef or perhaps chicken parmesan or even an omelette? There are many who would say that any one of those acts shows a lack of moral principles. You want all of these things to be illegal?

                            • By anonymouskimmer 2024-01-29 20:15

                              It's arguable, but at this point I'm just going to state that you are going out of your way for a "gotcha" on a reduction to the absurd.

                              When you mentioned "ethics" as opposed to "morality" I naively assumed interhuman behaviors such as business ethics and the like.

                              So yes, I believe that anything that can harm a fellow human (and many things that can harm other life) should be civilly actionable. We can argue about some of the edge cases, but non-consensual deepfake porn is not one of those edge cases to me.

                  • By _heimdall 2024-01-29 2:30

                    > Making and distributing porn of somebody using their likeness without their consent is unethical behavior

                    How do you propose we define likeness here? Sure, if it's shared and claims to be of a named celebrity that must meet the bar, but how closely must it resemble the original person to become unethical?

                    If a person finds a celebrity attractive and bases a fictitious character on them, is that a problem? What if the haircut is the same, or the eyes and nose are similar? If it goes to court, how personal can the defendant get to show discrepancies between the fake porn and the actual person?

                    Don't get me wrong I really wish people wouldn't use tech for this kind of stupid shit. I just don't think we will ever be able to draw clear and predictable lines around what does and does not break the law.

                    • By allturtles 2024-01-29 4:36

                      What you're describing is just how law works. It's all written in natural language terms that are inherently ambiguous. It's not computer code. Courts decide the boundary cases. Laws around "likeness," in particular are nothing new: https://en.wikipedia.org/wiki/Personality_rights

                      • By _heimdall 2024-01-29 13:25

                        And you don't have any problem with laws being ambiguous enough that you can't tell beforehand whether you're breaking the law?

                        In my experience with jury cases the jury is given as little room to interpret law as possible. In this example prosecutors would say "here is what 'likeness' means" and that would be that. Jurors would be expected and instructed to use that definition when determining if the defendant broke the law.

                        To say that it's expected and even a good thing for our elected officials to pass ambiguous laws and our unelected courts to determine what they should mean with little to no input from the public is very confusing IMO. The government is there to work for us, not to define rules and enforce punishment in ways that are completely outside our democratic process of elections and representation.

                        • By anonymouskimmer 2024-01-29 19:29

                          > And you don't have any problem with laws being ambiguous enough that you can't tell beforehand whether you're breaking the law?

                          As long as you can tell beforehand whether you're getting close to breaking the law this is fine. The closer you get to possibly breaking the law the more exculpatory evidence you should be collecting to present to a court.

                          > To say that its expected and even a good thing for our elected officials to pass ambiguous laws and our unelected courts to determine what that should mean with little to no input from the public is very confusing IMO. The government is there to work for us, not to define rules and enforce punishment in ways that are completely outside our democratic process of elections and representation.

                          Sure. So what's your alternative? No laws at all? Only laws that can be precisely defined in a completely non-ambiguous manner (good luck with that [1])? I favor making sure that people know about jury nullification prior to becoming jurors. And also favor elected courts (which exist in many jurisdictions).

                          [1] - I argued elsewhere that even the Constitutional requirement that the President be 35 years old or older is highly ambiguous (as per how the length of a year is defined, leap years, sidereal years, age at running for office, election to office, or assumption of office, etcetera), but hasn't been adjudicated because we haven't had any edge cases. Genuinely non-ambiguous laws don't exist, because every term can be argued over.

              • By cwillu 2024-01-29 1:36

                And yet the law will be written that refers to obscene materials, and then later someone will find a graphic criticism of someone else's religion or favourite politician to be obscene. It will happen, because it always happens. It doesn't (only) matter what the intent of the law is, any more than it matters what the intent of strcpy is. It (also) matters what purposes the law can be twisted to.

                • By anonymouskimmer 2024-01-29 1:59

                  al_borland (https://news.ycombinator.com/item?id=39167506) posted this link earlier. It looks like the laws are being appropriately tailored not to "obscene" materials but toward "intimate" materials (where "intimate" supposes a right of privacy, regardless of whether fake or not).

                  https://arstechnica.com/tech-policy/2024/01/sharing-deepfake...

                  : the “Preventing Deepfakes of Intimate Images Act,” which seeks to "prohibit the non-consensual disclosure of digitally altered intimate images." Under the proposed law, anyone sharing deepfake pornography without an individual's consent risks damages that could go as high as $150,000 and imprisonment of up to 10 years if sharing the images facilitates violence or impacts the proceedings of a government agency.

                  • By _heimdall 2024-01-29 2:15

                    "Obscene" versus "intimate" both fall into the same trap of requiring more clear definition.

                    In the first case one has to make a judgement call on whether they personally perceive something as obscene, and whether that means the other party was at fault for creating it. In the case of intimate, you have to define the word clearly and draw a line between what would and would not be considered intimate.

                    For example, if a fake porn image was made to include a scene of the act happening in a large gathering or public place, does that dodge the word "intimate"? And can the viewer of the image decide whether it is intimate, or would that distinction depend on the feelings of the image's subject at the time it was taken? If the latter, how can that exist when the image was entirely generated and had no living subjects to question?

                    • By anonymouskimmer 2024-01-29 3:15

                      They helpfully define the term in the text of the proposed law.

                      https://www.congress.gov/bill/118th-congress/house-bill/3106...

                      “(a) Definitions.—In this section:

                      “(1) CONSENT.—The term ‘consent’ has the meaning given such term in section 1309.

                      “(2) DEPICTED INDIVIDUAL.—The term ‘depicted individual’ means an individual who, as a result of digitization or by means of digital manipulation, appears in whole or in part in an intimate digital depiction and who is identifiable by virtue of the person’s face, likeness, or other distinguishing characteristic, such as a unique birthmark or other recognizable feature, or from information displayed in connection with the digital depiction.

                      “(3) DIGITAL DEPICTION.—The term ‘digital depiction’ means a realistic visual depiction, as that term is defined in section 2256(5) of title 18, United States Code, of an individual that has been created or altered using digital manipulation.

                      “(4) DISCLOSE.—The term ‘disclose’ has the meaning given such term in section 1309.

                      “(5) INTIMATE DIGITAL DEPICTION.—The term ‘intimate digital depiction’ means a digital depiction of an individual that has been created or altered using digital manipulation and that depicts—

                      “(A) the uncovered genitals, pubic area, anus, or postpubescent female nipple of an identifiable individual;

                      “(B) the display or transfer of bodily sexual fluids—

                      “(i) onto any part of the body of an identifiable individual; or

                      “(ii) from the body of an identifiable individual; or

                      “(C) an identifiable individual engaging in sexually explicit conduct.

                      “(6) SEXUALLY EXPLICIT CONDUCT.—The term ‘sexually explicit conduct’ has the meaning given the term in subparagraphs (A) and (B) of section 2256(2) of title 18, United States Code.

                      • By _heimdall 2024-01-29 4:07

                        > (2) DEPICTED INDIVIDUAL.—The term ‘depicted individual’ means an individual who, as a result of digitization or by means of digital manipulation, appears in whole or in part in an intimate digital depiction and who is identifiable by virtue of the person’s face, likeness, or other distinguishing characteristic, such as a unique birthmark or other recognizable feature, or from information displayed in connection with the digital depiction

                        "And who is identifiable" is doing some heavy lifting here and could be legally ambiguous. If I see someone rob a convenience store and believed the person to look like Al Pacino, does that make the person identifiable? What if I actually thought it was him, not just that it looked like him?

                        In the case of fake porn, the creator can't know if one viewing it will think the person was modeled after Taylor Swift or if their intent was to actually create porn that would be identified by viewers as Swift.

                        Is this morally and ethically a problem? Absolutely, in my opinion. But can this be codified into law that is clear enough to make sure you don't break the law? Absolutely not, as long as the law depends on whether a viewer believes the fake porn looks like Swift or is Swift.

                        A person can't be reasonably expected to stay on the right side of a law that is defined by how someone will judge the creation later. Obviously it's entirely different if the creator attempts to pass it off as the original person, in this case if someone created or distributed fake porn and claimed it actually was Swift.

                        • By anonymouskimmer 2024-01-29 4:40

                          > "And who is identifiable" is doing some heavy lifting here and could be legally ambiguous.

                          Sure, that's what a jury and/or judge is for. The defendant has a lot of potential defenses against a civil claim. The criminal portion of the law is tighter:

                          : “(a) Offense.—Whoever, in or affecting interstate or foreign commerce, discloses or threatens to disclose an intimate digital depiction—

                          :: “(1) with the intent to harass, annoy, threaten, alarm, or cause substantial harm to the finances or reputation of the depicted individual; or

                          :: “(2) with actual knowledge that, or reckless disregard for whether, such disclosure or threatened disclosure will cause physical, emotional, reputational, or economic harm to the depicted individual,

                          The legal definition of "reckless": https://www.law.cornell.edu/wex/reckless_disregard

                          : reckless action is distinguished from negligent action in that the actor consciously disregards a substantial and unjustified risk, as opposed to merely being unreasonable. For example, in State v. Olson, a 1990 South Dakota Supreme Court decision, the court did not find a tractor driver who turned left at 5-15 mph and hit another car reckless because the prosecution could not prove that he was aware that there was an oncoming car.

                          • By _heimdall 2024-01-29 15:58

                            Based on how today's courts operate, juries aren't offered the opportunity to interpret law. Lawyers will present a specific interpretation of laws they deem relevant to the case, they will provide facts and evidence of the charge, and ask juries to decide based solely on that.

                            Ultimately it is left to judges to interpret law. Meaning that if laws are passed with ambiguous definitions unelected judges get to decide how they think the blanks should be filled in. Our elected officials can then, if they choose, pass laws with the intent of leaving them ambiguous to allow unelected judges to effectively write laws completely unbeholden to the very voters that they are meant to serve.

                            • By anonymouskimmer 2024-01-29 19:10

                              For this law juries don't need to interpret the law, merely deciding whether the clauses of the law have been met or not is enough. Saying that they don't think the plaintiff or prosecution met the burden of proof is fine.

                              In the US jury nullification is also a right. If the jury thinks that the law is too draconian for the facts of the case they can vote to acquit.

              • By jrm4 2024-01-29 1:16

                I agree with you in theory, but at the end of it all -- which is to say, this looks like this could be a question of "what are we willing to spend resources to enforce?"

                It's really hard for me to see the utility in going after THIS vs. letting people say garbage like "If I see a black man flying a plane I'm going to worry." with near complete impunity.

                Edit: And I don't think we should necessarily spend resources to go after the latter either, I'm just saying, as a black man, if I had to pick...yeah.

                • By anonymouskimmer 2024-01-29 1:42

                  It might be informative talking to your black sisters.

                  • By jrm4 2024-01-29 3:03

                    Weird of you to assume I don't? What precisely are you suggesting here?

                    • By anonymouskimmer 2024-01-29 3:11

                      I meant "sisters" in the metaphorical sense.

                      I'm suggesting asking black women which one they'd prioritize. Deepfake porn impacts women a whole lot more than it impacts men. Just as racist assumptions of violence or danger impacts men a whole lot more than it impacts women.

                      Edit to add: If something prevents or inhibits someone from engaging in the public sphere to the extent of cutting back on their employment and employment opportunities, or their consumption and consumption opportunities, for instance, then there is also a cost in not policing that something.

              • By oceanplexian 2024-01-29 15:00

                We don’t measure Freedom of Speech by some test of how much Social Good is created by various types of speech.

                These rights were enumerated in a time when people distributed pamphlets that led to a bloody revolutionary war. War is a hell of a lot worse for the social fabric than “some fake nudes got distributed on the Internet”.

                • By UncleMeat 2024-01-29 16:54

                  The founders passed the Alien and Sedition Acts and various other laws that we'd now consider extreme limitations on political speech. Whatever free speech means today, it isn't derived from the core beliefs of the founders.

            • By staticautomatic 2024-01-29 0:39

              We already have “false light” laws to address this, and they’re rooted in the common law right of privacy rather than the constitutional right.

            • By anonymouskimmer 2024-01-29 0:09

              Speech about people needs to be truthful if possible, and also limited to the domain in which those people are salient. Free speech is bad when it means Joe Blow across the street is wrongfully called a pedophilic arsonist on the front page of the New York Times. And it's still kind of crappy when Bo Jellow's actual theft of $50 in Missouri is shamed in Europe.

              Twitter has mass play. It's effectively a worldwide public forum. There should be limits to what speech occurs on it, if only to keep the noise down so that genuinely important speech is less drowned out.

              > I wouldn't want someone posting fake nudes (or any photos for that matter) of me online, but I'm not sure if I have a right to stop them.

              What the law allows depends on where you live.

              > though I would hope that most would take issue with the idea of further limiting free speech based on new technologies that we don't have direct control over

              People have been asking for limits on public posting of even non-fake "revenge" porn for years.

              Edit:

              > can I simply develop a new form of distribution and claim that we now need to further limit free speech in response? That feels like a dangerous backdoor to attack constitutional rights without going through the courts or congress.

              You can claim this while only positing a new form of distribution. Talking about policies and the limits of freedoms is one of the fundamental freedoms necessary to a well functioning democracy or republic. It's no attack, as the speech does not create law, or even policy, only the courts or congress or duly appointed regulators (pre-Chevron overturning[1]) can do that.

              [1] - https://www.scotusblog.com/2024/01/supreme-court-likely-to-d...

              • By _heimdall 2024-01-29 2:23

                > Twitter has mass play. It's effectively a worldwide public forum. There should be limits to what speech occurs on it, if only to keep the signal-to-noise ratio down so that genuinely important speech is less drowned out.

                This stipulation would be a problem. For one thing, we need to be able to distinguish what is genuinely important speech. For another, we would need some kind of check to prevent someone from flooding the system with speech that technically meets that bar, tipping the scales away from less important speech that would have otherwise been protected. Effectively, this makes free speech a sliding scale based on how much "genuinely important speech" others are sharing.

                • By anonymouskimmer 2024-01-29 4:42

                  No. I'm just saying don't air private grievances on the world stage. Keep it to a local forum if possible. If not possible, then generalize it for the public forum.

        • By ender341341 2024-01-28 22:51

          > or the ease in which it can be created?

          It's the volume of it, which comes from the ease.

          • By _heimdall 2024-01-28 23:22

            So some fake celebrity porn is fine, but there's a line where we have too much?

            • By calamari4065 2024-01-28 23:39

              Literally nobody but you has even suggested this.

              • By _heimdall 2024-01-292:251 reply

                I'm not proposing it at all, I'm trying to clarify the comment chain I was replying to.

                It sure read to me as though the issue was with the ease with which fake porn can be created and distributed, not the porn itself. I wouldn't agree with that at all and wanted to make sure I understood them right before arguing against something I had simply misread or misunderstood.

                • By ender341341 2024-01-298:21

                  I’m not saying it was alright before, but it was more contained and now the volume is making it a bigger issue that more people are exposed to.

          • By DonsDiscountGas 2024-01-2823:04

            Also the speed with which they spread.

        • By calamari4065 2024-01-2823:371 reply

          If a thing is bad, is it better or worse if you can make many, many more of them much, much faster?

          • By Ferret7446 2024-01-302:06

            In this case, probably better. I don't see any meaningful increase in harm going from, say, a million porn pictures to 100 million. But it does collapse the industry of paying more money to create the pictures.

            Here's a strawman example: AI child porn would probably be better for the world, because there would be no/far lower demand to actually abuse children, all else being equal (i.e., AI child porn still being illegal).

        • By pretendgeneer 2024-01-2823:24

          I'd say it's both.

          The old stuff: human-created fakes have always drawn criticism, but the amount and distribution were so limited that it was easy for people not directly affected to ignore them or not know about them.

          Now the new generated stuff is so prevalent and easy to produce that it's too hard to ignore.

        • By gqcwwjtg 2024-01-2822:582 reply

          Kanye West did a music video with wax figures of nude celebrities including Taylor Swift and that had barely any backlash. Is it because he’s known as an artist? The price of the recreation? The way it’s distributed? Is creating fake images of someone nude in a somewhat sexual context meaningfully different from creating images of them actively engaging in sex acts?

          It’s almost like the problem is nobody claiming it as art they created.

          • By anonymouskimmer 2024-01-2823:42

            I remember some backlash about that.

            > Is creating fake images of someone nude in a somewhat sexual context meaningfully different from creating images of them actively engaging in sex acts?

            Yes.

          • By _heimdall 2024-01-2823:25

            Yeah, maybe it's the anonymous creation of it; I could see that being part of the difference at least. It is much easier to get angry at some random, anonymous person than at a known entity. Maybe it also helps that you can't, and therefore don't have to, make the decision of whether to sue the creator for using your likeness.

    • By jrflowers 2024-01-293:13

      >Fake Celebrity porn has been a thing for the longest time.

      This is a good point. This software that enables near instantaneous massive production of nonconsensual pornography of any human being on earth is strictly an issue for celebrities, and as such nothing has changed since the days of dodging and burning in a darkroom

    • By jacquesm 2024-01-296:12

      Technology changed the scale so many more people can do this, spread it around the globe in the blink of an eye and the fidelity is so much better that it is far less obvious that something is faked.

      Fakes have always been 'socially unacceptable'; it's, one would hope, a minority that engages in these things. But at scale the social norms get overwhelmed, and then you won't be able to easily deal with the consequences. For instance: the legal system would be a venue to seek redress, but if it happens millions of times then the legal system breaks down. So 'at scale' is a substantial modifier for any process.

    • By 2OEH8eoCRo0 2024-01-291:301 reply

      > Fake Celebrity porn has been a thing for the longest time.

      Then where was the flood of photoshops making headline news?

      > What has changed since then?

      Would the recent flooding of social media have been possible in the past?

      • By devbent 2024-01-297:411 reply

        > Then where was the flood of photoshops making headline news?

        It was news several decades back, then everyone just ignored it.

        • By 2OEH8eoCRo0 2024-01-2916:19

          I didn't say news, I said headline news. Almost everything is news.

          > everyone just ignored it.

          Did they ignore it because it wasn't headline news? Or because the harm a single motivated individual could inflict was small and theoretical, rather than large and real as we have seen today?

HackerNews