Meta’s AI smart glasses and data privacy concerns

2026-03-02 22:32 · www.svd.se

Bank details, sex and naked people who seem unaware they are being recorded. Behind Meta’s new smart glasses lies a hidden workforce, uneasy about peering into the most intimate parts of other people’s lives.

The advertisement is everywhere. The ice hockey player Peter Forsberg is trying on a pair of black glasses. In the viral clip he talks to the glasses, asking who Sweden’s greatest hockey player of all time is.

They are not just any glasses.

They are Facebook owner Meta’s new AI glasses.

The glasses are marketed as an all-in-one assistant that helps the wearer excel at work, capture beautiful sunsets, act as a travel guide and translate foreign languages in real time.

So powerful that they are meant to compete with smartphones, while the user remains in control of their privacy.

Reality would prove to be different.

It is stuffy at the top of the hotel in Nairobi, Kenya. The grey sky presses the heat against the windows. The man in front of us is nervous. If his employer finds out that he is here, he could lose everything.

He is one of the people few even realise exist – a flesh-and-blood worker in the engine room of the data industry. What he has to say is explosive.

“In some videos you can see someone going to the toilet, or getting undressed. I don’t think they know, because if they knew they wouldn’t be recording.”

In Svenska Dagbladet and Göteborgs-Posten’s investigation, the people behind Meta’s smart glasses testify to the hidden stream of privacy-sensitive data that is fed straight into the tech giant’s systems.

It begins on the other side of the world.

September 2025 in Menlo Park, the heart of Silicon Valley. Mark Zuckerberg, founder of Meta, the company behind Facebook, Instagram and WhatsApp, is about to present the initiative he hopes will define the company’s future. On gigantic screens, the audience can see him sitting backstage, leaning over a script and rehearsing.

Mark Zuckerberg presenting what he hopes will be the future of Meta. Photo: Nic Coury/AP

They lie in front of him on the table.

“Meta Ray-Ban Glasses”.

He stands up after a while, and puts the glasses on.

The perspective shifts – on the screens, the audience sees the world through his eyes. Zuckerberg walks through the corridors, towards the stage. On the way, he is met with cheers, fist bumps and a nod from the international music star Diplo.

On stage, Zuckerberg preaches. He explains that his revolutionary glasses are to be a kind of all-in-one assistant with everything from live translations to facial recognition.

He concludes by thanking his American team. But what is shown in Menlo Park is just as much the result of a completely different type of work, far away from Silicon Valley.

The people doing that work are employed by a company called Sama, a subcontractor to Meta. Here in Kenya’s capital, thousands of people train AI systems, teaching them to recognise and interpret the world.

They are called data annotators, and they are the manual labourers of the AI revolution. On the screens they draw boxes around flower pots and traffic signs, follow contours, register pixels and name objects: cars, lamps, people. Every image must be described, labelled and quality assured.
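As a concrete illustration of what such a labelling task produces, a single annotation record might look like the following. This is a hypothetical sketch: the field names and the quality check are invented for illustration, not Sama's actual format or tooling.

```python
def bbox_area(bbox):
    """Area of an axis-aligned box given as [x_min, y_min, x_max, y_max]."""
    x0, y0, x1, y1 = bbox
    return max(0, x1 - x0) * max(0, y1 - y0)

# One annotated video frame: each drawn box gets pixel coordinates and a label.
annotation = {
    "image_id": "frame_000123",  # invented identifier
    "objects": [
        {"label": "car",          "bbox": [34, 120, 210, 220]},
        {"label": "traffic sign", "bbox": [400, 60, 440, 130]},
        {"label": "person",       "bbox": [250, 90, 310, 230]},
    ],
}

# A minimal quality-assurance pass: every box needs a label and positive area.
for obj in annotation["objects"]:
    assert obj["label"] and bbox_area(obj["bbox"]) > 0
```

Thousands of records like this one, produced and checked by hand, are what "training an AI system" looks like at the annotator's desk.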

All to make the next generation of smart glasses a little more intelligent – a little more human.

It is an uncomfortable truth for tech giants: the AI revolution is to a large extent built on labor in low-income countries. What we call “machine learning” is often the result of human hands.

In Nairobi, a city of millions, SvD and GP meet Sama workers at a nondescript hotel, at a safe distance from the company’s offices. Some come straight from a night shift, others are preparing for a ten-hour shift in front of the screens.

The employees have signed extensive confidentiality agreements – if they break them they can lose their jobs – and be thrown back into a life without income, often to the slums. Therefore we publish no names.

The workers in Kenya say that it feels uncomfortable to go to work. They tell us about deeply private video clips, which appear to come straight out of Western homes, from people who use the glasses in their everyday lives.

Several describe video material showing bathroom visits, sex and other intimate moments.

Another worker talks about people coming out of bathrooms.

“Someone may have been walking around with the glasses, or happened to be wearing them, and then the person’s partner was in the bathroom, or they had just come out naked”, an employee says.

Do you sometimes feel that you are looking straight into other people’s private lives?

“When you see these videos, it feels that way. But since it is a job, you have to do it. You understand that it is someone’s private life you are looking at, but at the same time you are just expected to carry out the work. You are not supposed to question it. If you start asking questions, you are gone.”

“We see everything – from living rooms to naked bodies. Meta has that type of content in its databases. People can record themselves in the wrong way and not even know what they are recording. They are real people like you and me”.

The workers describe videos where people’s bank cards are visible by mistake, and people watching porn while wearing the glasses. Clips that could trigger “enormous scandals” if they were leaked.

“There are also sex scenes filmed with the smart glasses – someone is wearing them having sex. That is why this is so extremely sensitive. There are cameras everywhere in our office, and you are not allowed to bring your own phones or any device that can record”, an employee says.

The data annotators also work with transcriptions, where they are to check that the AI assistant in Meta’s glasses has answered users’ questions correctly.

“It can be about any topics at all. We see chats where someone talks about crimes or protests. It is not just greetings, it can be very dark things as well”, one of the workers says.

Another recounts a text where a man described a woman he wanted to have sex with:

“He commented on her body and said that he liked her breasts.”

2025 becomes a breakthrough year for Meta Ray-Ban, manufactured in collaboration with the eyewear giant EssilorLuxottica. From two million smart glasses sold in 2023 and 2024 combined, sales more than triple to seven million units.

In Sweden, Synsam is one of the major retailers, as is the chain Synoptik. Some independent opticians also carry the glasses.

Reporters Ahmed Abdigadir and Julia Lindblom outside Synsam’s flagship store in Gothenburg. Photo: Olof Ohlsson

Throughout the autumn of 2025, we visit ten retailers in Stockholm and Gothenburg to ask the sales staff how the data from the Meta glasses is processed. Several of the sales people give us reassuring answers. We are told that we ourselves can choose exactly what data is shared with Meta.

“Nothing is shared with them (Meta). That was a big concern for me as well. Are they going to get access to my data, that is a bit scary, but you have full control”, says an employee at a Synsam store.

Others are more uncertain.

“To be completely honest, I don’t know where the data goes, or if they take data at all”, says a shop assistant at an independent optician. Another salesperson points out that the customer can always choose not to share their data:

“No, it is completely fine – everything stays locally in the app.”

We buy our own pair of glasses at Synsam’s flagship store in Gothenburg.

At the Göteborgs-Posten newsroom we begin installing them. The glasses are to be connected to an app called Meta AI. Only after several approvals in the app is it possible to get started with the AI function. One of the steps concerns whether we want to share extra data with Meta to help improve their products. We choose “no”.

The AI functions are activated with the voice command “Hey Meta”. Within ten minutes of the package being opened we begin asking questions. The glasses answer immediately, in English.

Together with a system developer at Svenska Dagbladet we try to find out whether the salesperson’s claim – that we can choose not to share our data with Meta – is correct. We try to use the glasses with the internet connection turned off.

But that makes it impossible to get help interpreting what we see. The glasses urge us to turn on the connection. When we then analyse the network traffic from the app, we see that the phone has frequent contact with Meta servers in Luleå in Sweden, and in Denmark.
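The observation above boils down to tallying which destinations the phone contacts while the app is in use. A minimal sketch of that tallying step follows; the hostnames are invented stand-ins, and a real capture would come from standard network-inspection tooling rather than a hard-coded list.

```python
from collections import Counter

# Hypothetical capture: destination hostnames observed while the app was in use.
# These names are illustrative, not the actual hosts observed in the investigation.
observed_hosts = [
    "graph.facebook.com", "edge-mqtt.facebook.com", "graph.facebook.com",
    "api.spotify.com", "edge-mqtt.facebook.com", "graph.facebook.com",
]

# Domains known to be operated by Meta (illustrative subset).
META_DOMAINS = {"facebook.com", "meta.com", "instagram.com", "whatsapp.com"}

def registrable_domain(host: str) -> str:
    """Naive eTLD+1 extraction: keep the last two labels of the hostname."""
    return ".".join(host.split(".")[-2:])

def meta_contact_counts(hosts):
    """Count connections per registrable domain, keeping only Meta domains."""
    tally = Counter(registrable_domain(h) for h in hosts)
    return {d: n for d, n in tally.items() if d in META_DOMAINS}

print(meta_contact_counts(observed_hosts))  # {'facebook.com': 5}
```

Grouping subdomains under one registrable domain is what makes the pattern visible: individually rare hostnames still add up to frequent contact with one company.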

In order to answer questions and interpret what the camera sees, the glasses require that data be processed via Meta’s infrastructure – it is not possible to interact with the AI solely locally on the phone.

What the salespeople say about nothing being shared onwards does not appear to be correct.

We contact Synsam and Synoptik for an interview about what training the sales staff receive and how the answers they give can differ so much. Synsam responded in writing that its role is to inform customers about the applicable terms and to provide internal training, but that responsibility for complying with Swedish law and Meta’s terms ultimately rests with the wearer. Synoptik responded in similar terms, saying its staff are trained in ethics, and emphasised the user’s responsibility.

With the glasses we bought there is also a manual with a QR code that leads to Meta’s privacy policy for wearable products. This in turn links to other pages, such as the Terms of Use for Meta’s AI services.

At first glance, it appears that we have significant control over our data. It states that voice recordings may only be saved and used for improvement or training of other Meta products if the user actively agrees.

But for the AI assistant to function, voice, text, image and sometimes video must be processed and may be shared onwards. This data processing is done automatically and cannot be turned off.

We read further on in the Terms of Use for Meta’s AIs. The terms state that “in some cases, Meta will review your interactions with AIs, including the content of your conversations with or messages to AIs, and this review can be automated or manual (human).”

It also states that the AIs may store and use information shared with them, and that the user should not share information “that you don’t want the AIs to use and retain, such as information about sensitive topics”.

The user is given no choice; it is mandatory to participate.

It is not specified how much data may be analysed or for how long it may be stored. Nor is it specified who is given access to the data.

Data experts we contact in Sweden and abroad question how aware users really are that their data may be used to train Meta’s AI.

The experts point to an unclear boundary between what is shared voluntarily and what is collected automatically – a boundary that can be difficult to detect.

When Meta offers services within the EU, the company is subject to the General Data Protection Regulation (GDPR), which requires transparency about how personal data is processed and where that processing takes place.

Kleanthi Sardeli is a data protection lawyer at None Of Your Business (NOYB), a non-profit organisation in Vienna that has brought several legal cases against Meta. They are currently reviewing the new smart glasses.

She says there is a clear transparency problem: users may not realise that the camera is recording when they begin speaking to the AI assistant.

Kleanthi Sardeli. Photo: Private

“If this happens in Europe, both transparency and a legal basis for the processing are lacking,” she says. She believes that explicit consent should be required when data is used to train artificial intelligence.

“Once the material has been fed into the models, the user in practice loses control over how it is used,” Sardeli says.

Petter Flink is an IT and security specialist at IMY, the Swedish Authority for Privacy Protection. It is the authority that is to protect Swedes’ personal data and privacy.

According to him, few people truly consider what they are agreeing to when they start using services such as Meta’s glasses.

“The user really has no idea what is happening behind the scenes”, says Petter Flink.

Petter Flink. Photo: Daniel Larsson

At the same time, the technology has become both more accessible and more enticing, with new functions that quickly reach a broad audience.

He emphasises that the data Meta collects is more valuable than the glasses themselves. The more details that can be extracted from the user’s everyday life, the more accurately advertising and services can be targeted at the person.

"I think few people would want to share the details of their daily lives to that extent. But when it is presented in a fun and appealing way, it becomes harder to see the risks”, says Petter Flink.

The Swedish Authority for Privacy Protection has not reviewed the Meta glasses.

“Therefore we cannot comment on where the data ends up”, says Petter Flink.

According to our sources, sensitive data is not intended to be used to train the AI models.

Even so, it can still happen.

“As soon as the device ends up in the hands of users, they do whatever they want with it”, says one of the former Meta employees.

According to the former Meta employees, faces that appear in annotation data are automatically blurred.

However, data annotators in Kenya told SvD and GP that the anonymisation does not always work as intended. Faces that are to be covered are sometimes visible. We ask one of the former Meta employees how this is possible.

“The algorithms sometimes miss. Especially in difficult lighting conditions, certain faces and bodies become visible”.

Where do the images come from? Can private videos from Sweden end up on screens in Kenya? Those who appear in the images, have they consented to appearing in this way?

We contact Meta repeatedly for an open interview about how the company informs users about the glasses, what filters are used to prevent private material from reaching annotators, how the chain of subcontractors is audited, and why content showing extremely private situations appears.

We also ask how long voice recordings and video clips are stored, how the possibility for consumers to object works in practice, and whether the video clips can come from Swedish users.

After two months, we receive an email from Meta’s spokesperson in London, Joyce Omope. The letter does not directly answer our questions, but explains how data is transferred from the glasses to the user’s mobile app.

Instead, Meta refers to its AI terms of use and privacy policy. These do not specify where the data ends up, but they do state that it may be subject to human review.

We asked Meta to elaborate on how sharing highly private material with subcontractors such as Sama in Kenya can be reconciled with its privacy policy. We posed the same questions to Sama. There was no response.

We receive no additional answers from Meta either and have to make do with what Meta’s spokesperson Joyce Omope first wrote:

“When live AI is being used, we process that media according to the Meta AI Terms of Service and Privacy Policy.”

A European Meta executive, who asked not to be named, says it does not matter where the data is processed as long as the data protection rules are equivalent to those in Europe.

“Many believe that data must be stored within the EU to be protected. But under GDPR it does not matter where the server is located – as long as the country meets the EU’s requirements. If it does not, data may not be sent there”.

They continue:

“Technically, we have data centres in Sweden, Denmark and Ireland, but the physical location is actually less relevant. The legal responsibility lies with Meta Ireland, which is the European entity. Where the data is actually processed – in Europe or in the US – does not change the regulatory framework”.

There is currently no EU decision recognising Kenya as providing an adequate level of protection, but the EU and Kenya began a dialogue on the matter in May 2024. It is expected to take time before an agreement is in place.

Meta themselves write in their privacy policy that they must transfer, store and process user data globally, since “Meta is a company that operates globally”, and that they share information both internally between offices and data centres and externally with partners, third parties and service providers. Meta explicitly writes that this applies to interactions that people have with AI at Meta, for example content and messages.

Petra Wierup, a lawyer at the Swedish Authority for Privacy Protection, IMY, says that if Meta is the data controller under GDPR, then they have a responsibility for Swedes’ personal data collected when the glasses are used.

Petra Wierup. Photo: Press image

“For it to be permitted to use a service provider in a third country (outside the EU), it is required that robust agreements with instructions are in place. It must also be ensured that there is legal support for the transfers, so that the data that is transferred receives continued strong and equivalent protection when it is processed in a third country. The protection must therefore not become weaker when it is processed by subcontractors”, says Petra Wierup.

At one end, the glasses are marketed as an everyday assistant – a voice in the frame that tells you what you are seeing. At the other end, people in Nairobi sit annotating the most intimate moments the camera captures: open-plan offices, living rooms, bedrooms, bathrooms.

One annotator sums it up:

“You think that if they knew about the extent of the data collection, no one would dare to use the glasses”.




Comments

  • By zmmmmm, 2026-03-03 0:32 (21 replies)

    I do think it's completely unacceptable if Meta makes the glasses unable to be used for routine functions without (a) other humans reviewing your private content and (b) AI training on your content. There needs to be total transparency to people when this is happening - these are absolutes.

    But I'm a bit confused by the article because it describes things that seem really unlikely given how the glasses work. They shine a bright light whenever recording. Are people really going into bathrooms, having sex, sharing rooms with people undressed while this light is on? Or is this deliberate tampering, malfunctioning, or Meta capturing footage without activating the light (hard to believe even Meta would do this intentionally).

    • By losvedir, 2026-03-03 0:39 (2 replies)

      Agreed. I'm confused trying to map what the article is saying to what's happening at a technical level. For example, obviously it's not doing on-device inference, so it's unsurprising that it won't work without a network connection, but this is totally distinct from your recordings ending up getting labeled. It talks about being able to opt into that, which is one thing. But I guess I don't understand if you don't opt in, if the data still gets sent out for labeling.

      I feel like this article is either a bombshell, or totally confused.

      • By cmiles74, 2026-03-03 13:05 (1 reply)

        My reading was that as soon as you enable the "AI" functionality you are opted into having your recordings labeled.

        "But for the AI assistant to function, voice, text, image and sometimes video must be processed and may be shared onwards. This data processing is done automatically and cannot be turned off."

        • By losvedir, 2026-03-03 14:24 (1 reply)

          Right, that's the section I was confused by because it was in the context of an experiment trying to use the AI stuff without an Internet connection, which obviously won't work. The article is using the "shared onwards" terminology to refer to at least inference. But the inference part is uninteresting to me, and the data labeling is. The article doesn't really separate those out.

          • By LorenPechtel, 2026-03-04 0:20

            I would figure if there is AI labeling that some things will confuse the system and will be sent to a human. And some things will randomly be sent to a human for error checking. Same thing with Alexa, I figure there's always a low probability chance that anything I say to her will end up reaching a human. She's not always listening as some people fear (the data use would have been detected long ago if she were), but humans occasionally trigger her accidentally--and such errant triggers will be more likely to be sent to a human because they are not going to make sense.
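The review pipeline this comment speculates about can be sketched as a toy routing rule. The threshold, sample rate and function names below are entirely hypothetical, not any vendor's actual logic.

```python
import random

REVIEW_SAMPLE_RATE = 0.01   # hypothetical: random QA audit rate
CONFIDENCE_FLOOR = 0.80     # hypothetical: below this, the model is "confused"

def needs_human_review(model_confidence: float, rng=random.random) -> bool:
    """Route an item to a human if the model is unsure, or at random for QA."""
    if model_confidence < CONFIDENCE_FLOOR:
        return True                    # confusing input: always escalate
    return rng() < REVIEW_SAMPLE_RATE  # otherwise: small random audit sample
```

Under a scheme like this, confusing clips are over-represented in what humans see, which matches the intuition above about errant triggers being the ones most likely to reach a reviewer.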

      • By noduerme, 2026-03-03 9:19

        >> but this is totally distinct from your recordings ending up getting labeled

        The distinction here occurs wherever the data is processed, and it sounds as if the difference between using your video for labeling versus privately processing it through an AI is deliberately confusing and obscured to the user by the way the terms of service are written. Once the video is uploaded, which is necessary for the basic function, it's unclear how or whether it can be separated from other streams that do go through labeling. This confusion also seems to be an intentional dark pattern.

    • By _ink_, 2026-03-03 7:40 (2 replies)

      I do believe people do all of that with the light on. And then there are also people who tamper with the device to deactivate the light. You can find guides for that online.

      • By stronglikedan, 2026-03-03 21:52

        The funny thing about the light is that it doesn't even matter when surreptitious recording devices are trivial to make these days. You can never know when you're being recorded, even when no one is wearing glasses.

      • By qnleigh, 2026-03-03 9:58 (1 reply)

        Also some people probably tape over the light for whatever reason.

        • By Sebguer, 2026-03-03 10:03 (3 replies)

          my understanding is that the light is resistant to simply taping over it, and recording can't happen in this case. you have to intentionally modify the glasses to be able to surreptitiously record.

          • By NekkoDroid, 2026-03-03 10:14 (3 replies)

            > my understanding is that the light is resistant to simply taping over it, and recording can't happen in this case.

            I remember when the glasses came out and this was tested: if you tape it over before starting the recording it refuses, but if you tape it over after starting, it will happily continue to record. I don't know if they have changed it, but that is how it used to be.

            • By weli, 2026-03-03 10:32 (3 replies)

              Still works like that.

              The glasses have in the same hole a led light and a small light sensor (similar to the ones used in monitors to set up auto-brightness).

              On start recording the glasses check if the light sensor is above a certain threshold, if it is then it starts recording and turns on the led light.

              So, if you start recording and then cover the hole, it keeps recording because the check only happens on start. Even if they wanted to fix this by making the light sensor do a constant check it wouldn't work as the privacy led light indicator is triggering the same sensor, which is a terrible design choice.

              And to disable the light is as easy as using a small drill bit and breaking either the light sensor module or the led light. They can detect if it's been tampered with and they put a giant notice saying the privacy light is not working but they still let you record anyways lol.
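The start-only check described in this comment can be reconstructed as a toy model. This is purely illustrative pseudologic based on the comment, not Meta's actual firmware; the threshold value and attribute names are invented.

```python
class Glasses:
    """Toy model of the start-only privacy check described above."""

    LIGHT_THRESHOLD = 0.2  # hypothetical ambient-light cutoff

    def __init__(self, sensor_light_level: float):
        self.sensor = sensor_light_level  # light reaching the sensor hole
        self.recording = False
        self.led_on = False

    def start_recording(self) -> bool:
        # The sensor is consulted ONLY here, when recording starts.
        if self.sensor < self.LIGHT_THRESHOLD:  # hole covered: refuse
            return False
        self.recording = True
        self.led_on = True
        return True

    def cover_hole(self):
        self.sensor = 0.0  # tape over LED + sensor; nothing re-checks it

g = Glasses(sensor_light_level=0.9)
g.start_recording()   # passes the one-time check
g.cover_hole()        # covering afterwards goes unnoticed
print(g.recording)    # True: recording continues with the light hidden
```

The flaw is structural: because the check runs once at start rather than continuously, the order of operations (start first, cover second) defeats the safeguard.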

              • By mofeien, 2026-03-03 11:12 (1 reply)

                > Even if they wanted to fix this by making the light sensor do a constant check it wouldn't work as the privacy led light indicator is triggering the same sensor,

                The privacy led light could just turn off for a couple of milliseconds (or less) while the light sensor performs its check.

                • By nomel, 2026-03-03 20:07

                  Or, just buy any of the many pages of hidden cam devices that exist on Amazon, which also aren't limited to only 3 minute videos.

              • By weli, 2026-03-03 11:21 (2 replies)

                > The privacy led light could just turn off for a couple of milliseconds (or less) while the light sensor performs its check.

                True but then that would mean a blinking led light instead of a constant turned on led light, which is a different product requirement from what it currently does.

                • By ethbr1, 2026-03-03 12:40 (1 reply)

                  Parent's point was that you can likely do it at a high enough frequency that the blinking would be imperceptible to the human eye.

                  • By weli, 2026-03-03 14:00 (1 reply)

                    I don't think the cheap light sensor would have a fast enough polling rate for that. And if you increase the polling rate I will just put a phosphorescent sticker that absorbs and reflects the light coming out of the led with a good enough afterglow that the photoresistor will still pick up as some value and still allow for recording.

                    Also what is the implication here? If you cover the hole accidentally for one microsecond do you invalidate the whole recording? Does it need to be covered for more than one second, two seconds, ten?

                    All of that for what? So that in 2 years we can have chinese off-brand clones for 50 dollars that offer no security mechanisms anyways?

                    We all need to understand this is the new normal, being able to be recorded anywhere anytime. Just like you can get punched in the street anywhere anytime. We only act on things that can be proven to have caused you prejudice in court.

                    • By wongarsu, 2026-03-03 17:38

                      We successfully shamed people out of wearing Google glasses. We also mostly have social norms about when recording with your smart phone is ok. We don't need to accept defeat about these glasses just yet

                • By NekkoDroid, 2026-03-03 12:42

                  I feel like it was pretty common to have the red light blinking on and off every second when recording. In that time where it is off during that cycle it would make sense to preform the sensor checks.

              • By lazide, 2026-03-03 13:00

                Sounds like it would be pretty easy to fake out with a custom circuit too, for those that are willing to go beyond ‘whoops how did that happen’ levels.

            • By jacyro, 2026-03-06 13:32

              You can buy sticker packs on amazon that specifically allow enough light to enter so recording functions while blocking the LED indicator.

            • By nomel, 2026-03-03 20:05

              This is somewhat handled by the max recording time of 3 minutes.

          • By tw04, 2026-03-03 13:23

            I’ve never seen a light in my life that can get through electricians tape. You would need a high powered laser to burn through it.

          • By shevy-java, 2026-03-03 11:02 (1 reply)

            Taping can not be done? But if there are guides on the www for this, is this a true statement? To me it is a difficult statement because ... taping can be done in many ways. I don't see how light can magically pass through it?

            • By ricardobeat, 2026-03-03 12:19

              The device detects that the light has been covered, and prevents recording.

    • By hananova, 2026-03-03 6:15 (1 reply)

      But there is total transparency though? Meta is using all your data, always. And the harder they say they’re not, the sneakier they’re doing it.

      • By jcgrillo, 2026-03-03 6:24 (1 reply)

        This is historically what we've had consumer protection regulations for. When they put lead, radium, asbestos, arsenic, or other poisons in consumer products the regulators step in and put a stop to it. It should be pretty clear at this point these consumer tech companies are no different--they're just producing poison. And it's not like there weren't signs, it's been like this for damn near a quarter century.

        • By edgyquant, 2026-03-03 12:01

          Our country is too divided to focus on things like this.

    • By jsbisviewtiful, 2026-03-04 18:03 (1 reply)

      > (hard to believe even Meta would do this intentionally)

      Are you referring to the same company that runs Facebook, WhatsApp and Instagram? Meta has, for well over a decade, been caught multiple times -as recently as 2 years ago caught for the third time I know of- burrowing into areas of phones that their apps weren't directly given access to. Android phones have been highly susceptible to this kind of snooping.

      https://www.techradar.com/pro/security/meta-halts-phone-and-...

      • By lelandfe, 2026-03-05 4:15

        Other 2 are Onavo Protect & Project Atlas

    • By InsideOutSanta, 2026-03-03 9:38 (4 replies)

      I'm going to guess that people are intentionally recording themselves having sex, assuming that they are creating a local recording that is not sent to Meta. Does the light mean "camera is recording" or "cloud services are involved"?

      • By yard2010, 2026-03-03 14:11 (1 reply)

        Follow up question - can we jailbreak this device and install an open source alternative that makes meta irrelevant?

        • By beeflet, 2026-03-03 15:58

          maybe once before they patch it

      • By cmiles74, 2026-03-03 13:08

        The article isn't clear on this point, I believe because Meta isn't clear on this themselves. Other bits of this piece highlight third parties reviewing the responses of the AI assistant; it's possible that people are recording and some sound they make triggers the AI assistant which, in turn, leads to the video being reviewed.

        OTOH, Meta could just be desperate for training content and they're just slurping up all recordings by people who've opted into the AI function. It would be great for them to clarify how this works.

      • By shevy-java, 2026-03-03 11:04 (2 replies)

        I am very much confused. People recorded sex way before the meta-spy-glasses.

        I mean, not as if I were to visit such sites, right ... but video recordings can be done in numerous ways. Also on small devices. I mean the smartphones are fairly small.

        • By antiframe, 2026-03-04 1:06

          Right, but most people didn't send their sex tapes to a multinational corporation.

      • By phyrex, 2026-03-03 10:30

        Camera is recording/taking a picture

    • By Sabinus, 2026-03-03 5:30 (1 reply)

      If you're not paying a subscription for Meta to AI process your audio and video then they're going to get value out of it some way. It's just like any other 'free' digital service

    • By Findecanor, 2026-03-03 11:55

      I find the root issue to be that what the glasses see is described as "content" in the first place.

    • By mentalgear, 2026-03-03 9:29 (1 reply)

      The Zuck being the Zuck, I wouldn't put it past him collecting data even if the cosmetic light is not on.

    • By lazide, 2026-03-03 12:58

      People absolutely will (and are) modding them to hide that light. But even if they weren’t, a lot of people won’t notice.

      And regardless of any privacy policy or the like, you still have to worry about Room 641A scenarios [https://en.wikipedia.org/wiki/Room_641A].

      Can you imagine a Stasi that has a large portion of the population also wearing pervasive surveillance tech? Amazing!

    • By krunck, 2026-03-03 16:39 (1 reply)

      It is absolutely within possibility that all "camera is on" lights are software controlled, just like the camera, and independently of it. They are meant to tell the user that they are using the camera. They are not meant to tell anyone that the owner of the device's back-end is using the camera.

      • By vizzah, 2026-03-04 21:33

        they are not. their design is done well to make it hard to circumvent the light.

    • By NickC25 2026-03-0323:581 reply

      >(hard to believe even Meta would do this intentionally)

      Hard to believe they would?

      This is Mark Zuckerberg we are talking about.

      It's hard to believe they wouldn't.

    • By pyrale 2026-03-0314:33

      It is also completely unacceptable to capture the public space without oversight and consent from third parties. If glasses users are fine with that, why wouldn't they accept the same treatment for themselves?

    • By alterom 2026-03-032:424 reply

      >hard to believe even Meta would do this intentionally).

      Hahahahahahahaha

      ZUCK: yea so if you ever need info about anyone at harvard

      ZUCK: just ask

      ZUCK: i have over 4000 emails, pictures, addresses, sns

      FRIEND: what!? how’d you manage that one?

      ZUCK: people just submitted it

      ZUCK: i don’t know why

      ZUCK: they “trust me”

      ZUCK: dumb fucks

      Actual quote, BTW [1].

      [1] https://www.newyorker.com/magazine/2010/09/20/the-face-of-fa...

      • By drawnwren 2026-03-033:0719 reply

        As much as this is a damning quote, it is perhaps also damning that any time someone wants to smear zuck they have to reach 20 years into the past.

        • By ElProlactin 2026-03-0311:092 reply

          It's not "smearing" to use Zuckerberg's own words in a discussion of his character, and this is far from the only example of things he's done or said in the past 20 years that would lead a reasonable person to call into question his moral fiber.

          It remains, however, a popular point of reference because:

          1. It's fast and easy to read and digest.

          2. The blunt language leaves little room for speculation about his feelings and intent at the time.

          3. A lot of people understand that as Zuckerberg's wealth exploded, he surrounded himself with people (coaches, stylists, PR professionals, etc.) who are paid handsomely to rehabilitate and manage his image. Therefore, his pre-wealth behavior gives insight into who he really is.

          • By Gooblebrai 2026-03-0312:223 reply

            > his pre-wealth behavior gives insight into who he really is

            "No man ever steps in the same river twice, for it's not the same river and he's not the same man."

            Not defending Zuck but it reflects a rigid mindset to assume that people cannot change.

            • By ElProlactin 2026-03-0314:361 reply

              People can change, but based on Facebook's actions vis-à-vis privacy, mental health, etc., there's little evidence that Zuckerberg has gone from treating his users like "dumb f...." to treating them like human beings.

              If we're going to talk about quotes, here's one: "money amplifies who you are".

              • By alex1138 2026-03-042:06

                WhatsApp is one of the only corporate acquisitions I can think of where the acquired side has lashed out at the acquirer this much ("It's time. Delete Facebook").

                You're talking about someone who changed privacy settings out from under users; who was told that people were being automatically added to groups, with posts on their walls outing them as gay, and dismissed it. Or "graph search". He doesn't think people deserve any respect when it isn't about him?

            • By godelski 2026-03-0319:57

              When a man changes, it is on him to prove that he has changed. Has Zuck atoned in any way? Has Meta?

              I'm a big believer in second chances and letting people rehabilitate, but there's no evidence that Meta or Zuck have changed for the better. Meanwhile, *there is plenty of evidence suggesting he has only become more uncaring and deceptive, as Meta has only become more invasive over time*, the article itself being one such example.

              So I do believe Zuck has changed, but not in the direction that we should applaud and/or forgive him. I've only seen him change in the way that should make us more concerned and further justify the hatred. A man may change, but he does not always change for the better.

            • By rangerelf 2026-03-0316:58

              I think there's more than enough evidence that Zuck has not grown to see others as human beings.

          • By edgyquant 2026-03-0312:073 reply

            It doesn't, though. No one is the same person they were 20 years ago, and every young person makes a ton of mistakes.

            • By ethbr1 2026-03-0312:441 reply

              You're suggesting a ton of money and power made Zuckerberg more empathetic?

              • By edgyquant 2026-03-0313:321 reply

                No, I didn't suggest that. I'm stating a fact: kids say stupid stuff all the time.

                • By godelski 2026-03-0320:00

                  No, you didn't suggest that. You suggested that the quote is not representative of who he is now.

                  We'd need a lot more context (and words) for us to understand that sentence as anything other than defending him. At best you're giving him the benefit of doubt.

            • By Integrape 2026-03-0314:32

              You're right, he's much worse now.

            • By ElProlactin 2026-03-0314:33

              I think his actions speak for themselves. Facebook, effectively completely controlled by Zuckerberg, has consistently taken actions that erode privacy and degrade mental health.

              And no, not every young person has the attitude that Zuckerberg demonstrated in his "dumb f...s" comment. If my son or daughter was behaving like that in their late teens/early twenties I would be ashamed and feel like a failure as a parent.

        • By duskdozer 2026-03-034:212 reply

          There's a big difference between "someone said something stupid as a kid"... "but now has changed and is a totally different person" and "is doing the same things but now knows how not to say the quiet part out loud"

          • By asadotzler 2026-03-0319:17

            He wasn't even a kid. He was like 20 years old at university.

          • By echelon 2026-03-036:33

            Exactly.

            Show us how Meta is a moral player in society.

            All I can see are lots of evil behaviors.

        • By hattmall 2026-03-034:07

          >they have to reach 20 years into the past.

          Well, they don't, but this is a particularly damning statement, and its age is more of a feature than a flaw because it shows a long history of anti-social disdain for humanity.

        • By jrussino 2026-03-0317:03

          I hear this rebuttal a lot; here's why it doesn't work for me:

          I'm the exact same age as Zuckerberg. When I first read this quote, it struck me as a really gross mindset and a point of view that I could neither relate to nor have sympathy for. I would not have said (or thought) those things when I was his age. Fundamentally, this is a demonstration of poor character.

          Now, people do grow and change. We've all said or done things that we regret. Life can be really hard, at times, for most of us, and more often than not young arrogant guys eventually learn some humility and grace and empathy after they confront the real world and experience the inevitable ups and downs of life.

          But Zuckerberg had no such experience. His life during and after the time when he said this was one of accelerating material success and validation. The scam he was so heartlessly bragging about in that statement actually worked, and he became one of the richest men in the world. So my expectation of the likelihood that he matured away from this mindset is much lower than it would be for someone like you or me.

          (And, as others have said in this thread, there's ample evidence from his subsequent decisions to support this)

        • By zephyreon 2026-03-033:54

          Learning to choose your words more wisely as you age does not necessarily indicate your underlying value system has evolved.

        • By sillyfluke 2026-03-036:531 reply

          >it is perhaps also damning that any time someone wants to smear zuck they have to reach 20 years into the past.

          It is perhaps not, and perhaps a bit disingenuous to claim so in good faith, as if it exceeds your abilities to search the list of Facebook scandals in the decades following and see that the behavior is often consistent with this quote. Even if you choose to ignore all that, it's also not very reasonable to expect troves of juicier quotes after all the C-suites, lawyers, and HR departments showed up and locked everything down with corporate speak. I'm sure that if Facebook were so kind as to leak all the messages and audio of Zuck's internal comms since that time, people would have many other juicy quotes to work with.

          It is often referenced because it's the best quote that represents the trailblazing era of preying on users' undying thirst for convenience in order to package their private data as a product.

          • By alterom 2026-03-038:36

            Thank you for saying this. I could not have worded the response better myself.

            "It is perhaps not, and perhaps a bit disingenuous to claim so in good faith, as if it exceeds your abilities to search for the list of facebook scandals in the decades following and see that the behavior is often consistent with this quote.

            It is often referenced because it's the best quote that represents the trailblazing era of preying on users' undying thirst for convenience in order to package their private data as a product."

            These sentences are deliciously delightful to read in this era of writing whose blandness and sloppiness are only amplified by LLM-driven "assistance".

            It is difficult to be pithy without being bitter, but your writing achieves it within the span of a single comment. If you have a blog, I hope you share it!

        • By shakna 2026-03-037:29

          Okay, how about a settlement from just last year, about how Meta does nothing but violate privacy? [0]

          [0] https://www.bbc.com/news/articles/cx2jmledvr3o

        • By alterom 2026-03-038:47

          >As much as this is a damning quote, it is perhaps also damning that any time someone wants to smear zuck they have to reach 20 years into the past.

          "Smear" is a word that's not applicable here. It implies that the allegations so labeled are false and unjust.

          This is not the case here.

        • By DonHopkins 2026-03-0310:17

          Not as self-damning as you trying to defend what he said 20 years ago, with full knowledge of how he's acted in those intervening 20 years.

          Congratulations, you've just smeared yourself with your own contemporary words.

        • By noduerme 2026-03-039:21

          I'd say once someone reveals their true character, you should believe it.

        • By nkrisc 2026-03-0311:47

          You would have a good point if what Meta is doing now wasn’t far worse than what Zuck himself is describing in those comments, all while Zuck has remained at the helm the entire time.

        • By LogicFailsMe 2026-03-039:19

          Or just quote anything out of the much more recent book Careless People.

        • By aaa_aaa 2026-03-0310:02

          Character almost never changes.

        • By jcgrillo 2026-03-033:15

          or more recently the times he lied to Congress, all the layoffs, the "metaverse", etc

        • By datatrashfire 2026-03-038:08

          or just at any point in the last 20 years to the present works too

        • By bdangubic 2026-03-0311:17

          you are who you are

      • By jcgrillo 2026-03-033:14

        This is a very important window into how the industry, by and large, views users and the concept of privacy. It's not merely authoritarian and predatory; to them, users are subhuman.

      • By pembrook 2026-03-039:174 reply

        Now if only we could look up everything you said in chatrooms as a 19 year old and post the most inflammatory stuff on HN.

        I’m sure you’ve never said anything callous or snarky, and were a bastion of morality as a teenager.

        • By 542354234235 2026-03-0313:161 reply

          I've tried to learn and grow from the stupid comments of my youth. I haven't been involved in a long list of scandals directly related to the ideas those comments expressed, and if I was, it would be pretty clear that I didn't learn or grow at all.

          • By pembrook 2026-03-0410:091 reply

            You haven't been involved in a long list of public scandals because you've never done anything in your life that's publicly notable.

            By tricking yourself into believing you sit on a higher moral pedestal you're simply easing the pain of comparison.

            When high school girls spread gossip that the pretty, popular girl has loose morals, they aren't performing this service out of the goodness of their hearts. They're hoping to elevate themselves by tearing down the competition.

            • By alterom 2026-03-0920:41

              >You haven't been involved in a long list of public scandals because you've never done anything in your life that's publicly notable.

              That's funny.

              You genuinely think that doing something "publicly notable" is necessary and sufficient for being involved in multiple public scandals, as if notable people who aren't slimy asshats didn't exist.

              It's a fine argument too. You can keep narrowing down what counts as "publicly notable" until it only includes "founding Meta" when counterexamples are pointed out to you.

              That's how you can be so confident in saying "you've never done anything in your life that's publicly notable" without knowing who you're talking to.

              >By tricking yourself into believing you sit on a higher moral pedestal you're simply easing the pain of comparison

              What a beautiful example of moving the goalposts with a personal attack while saying absolutely nothing of discernible meaning.

              Easing the pain of comparison, huh?

              It's not painful to compare an asshat who brags about betraying the trust of people who thought he was a decent human being with anyone who finds that repulsive.

              Particularly in the context of discussing how trustworthy that person is.

              It's not about "morals", see.

              It's that Mark Zuckerberg is the highest authority when it comes to talking about Mark Zuckerberg ...

              ... and he explicitly said that you'd be a dumb fuck to trust him with your personal data, which is what you do when you wear Meta's AI glasses.

              These are the concrete, specific facts, not contrived examples about high school girls (on whose behalf you can't speak either).

        • By matkoniecz 2026-03-0311:43

          Yes, I posted some stupid stuff as a teenager, and later.

          I have never in my life mocked or made fun of other people for trusting me, or anything equivalent.

          I have also never run a company that knowingly ruined a multitude of lives, and social interactions in general.

          > snarky

          Snark is not a problem that people have with Mr. Zuckerberg.

        • By alterom 2026-03-042:18

          >Now if only we could look up everything you said in chatrooms as a 19 year old and post the most inflammatory stuff on HN.

          Sure. When I was in college, I didn't have the idea of snooping on other students and exploiting them as "dumb fucks" who were stupid enough to trust me.

          Most of my public online history starts at around that time too.

          And one of my first comments on Slashdot was expressing concern about Facebook violating people's privacy by introducing the feed back in 2006.

          https://slashdot.org/comments.pl?sid=195861&cid=16054826

          I was 19 then.

        • By DonHopkins 2026-03-0310:231 reply

          [flagged]

          • By pembrook 2026-03-0311:13

            Hilarious.

            Before you posted this I actually edited my comment to remove a sentence at the end where I said "Now please proceed to call me a bootlicker while not rebutting my point."

            I thought it would be too flame-war-y. Guess it was actually needed, however! US politics getting hysterical has been like the eternal September for HN. This place is so braindead and predictable and uninteresting now.

      • By lucasfin000 2026-03-0415:07

        The worst part isn't even that quote; it's that nothing structurally has changed one bit since then. The business model still requires users as the product. Glasses that upload video to Meta's servers are the entire point.

    • By cmiles74 2026-03-0313:01

      This was one of the first hits on Kagi. 404 has a similar article (I think) but it's behind a paywall.

      "The demand for this ‘Ray-Ban hack’ has been steadily increasing, with the hobbyist’s waiting list growing longer by the day. This demonstrates a clear desire among Ray-Ban owners to exercise more control over their privacy and mitigate concerns about unknowingly recording others."

      https://bytetrending.com/2025/10/28/ray-ban-hack-disabling-t...

    • By benrapscallion 2026-03-045:28

      It is Facebook. I expected exactly this from smart glasses made by them. You can’t change the DNA of a company.

    • By techpression 2026-03-038:23

      If anyone were to record even when the light is not shining, it would be Meta. This would not surprise me at all; they have everything to gain and nothing to lose, and no country would fine them anything remotely relevant compared to the value of the data they'd be getting.

    • By lofaszvanitt 2026-03-0415:26

      Strange Days - the movie - scenario.

    • By GJim 2026-03-0311:092 reply

      > There needs to be total transparency to people when this is happening

      This is why WE have the GDPR: to outlaw and prevent exploitation such as this.

      • By GJim 2026-03-0316:40

        Presumably the 'drive-by' downvotes are coming from the ad-tech industry, who would prefer the population to simply bend over and grab their ankles with both hands the moment it requests our personal data?

    • By ccppurcell 2026-03-038:322 reply

      I mean, laptop webcams also shine a light when they're recording, but obviously you don't just trust the light to come on, right?

      • By cjrp 2026-03-039:38

        I'd say the incentives are different. Laptop manufacturers see no upside in having the light stay off, whereas for Meta it might be the opposite.

      • By jcgrillo 2026-03-0318:12

        The more trustworthy ones have a physical door you can close.

    • By a_ba 2026-03-0919:43

      The light is a mere technicality. Zuck himself was smart enough to cover his laptop camera with tape 10 years ago[0].

      [0] https://www.bbc.com/news/technology-36596070

  • By chwahoo 2026-03-031:5317 reply

    I'll confess that I like my Meta Ray-Ban glasses: I love using them to listen to podcasts at the pool or beach and while riding my bike, and it's cool to snap a quick picture of my kids without pulling out my phone.

    I wish this article (or Meta) were a bit clearer about the specific connection between the device settings and use and when humans get access to the images.

    My settings are:

    - [OFF] "Share additional data" - Share data about your Meta devices to help improve Meta products.

    - [OFF] "Cloud media" - Allow your photos and videos to be sent to Meta's cloud for processing and temporary storage.

    I'm not sure whether my settings would prevent my media from being used as described in the article.

    Also, it's not clear which data is being used for training:

    - random photos / videos taken

    - only use of "Meta AI" (e.g., "Hey Meta, can you translate this sign")

    As much as I've liked my Meta Ray-Bans, I'm going to need clarity here before I continue using them.

    TBH, if it were only use of Meta AI, I'd "get it" but probably turn that feature off (I barely use it as-is).

    • By greentea23 2026-03-035:446 reply

      I don't understand how a parent can be OK with non-consensually uploading pictures of their children's real faces to an ad-driven AI company famous for abusing people's data and manipulating children on its platforms.

      • By Findecanor 2026-03-0311:59

        It is because they don't understand the scope of the problem. People are inclined to think that other people who have treated them kindly also mean well in the long term.

      • By tim333 2026-03-0314:051 reply

        Probably the majority of the planet shares family photos on Facebook, Messenger, WhatsApp or Instagram - all Meta properties. On the whole, nothing much bad happens.

        • By croes 2026-03-0510:01

          Pretty short-sighted.

      • By echelon 2026-03-036:348 reply

        I don't get how private businesses allow these. It's as creepy as Google Glass, yet we don't see the same pushback.

        Is it because younger people don't care about privacy anymore?

        • By OrangeMusic 2026-03-0310:062 reply

          Why would they disallow cameras on glasses but not cameras on phones, where it's just as easy to take pictures discreetly?

          Not to mention, hidden miniature cameras have existed for decades.

          • By dwighttk 2026-03-0311:571 reply

            People using hidden miniature cameras should be shamed and punished when that is discovered just like the people using these glasses.

            • By OrangeMusic 2026-03-0317:161 reply

              All right here's my position:

              - filming people without their consent is wrong

              - the vast majority of people are not creeps and are not discreetly filming random people

              - the vast majority of people are not interesting, and nobody is filming them

              - today, in a public space, everybody already has lots of cameras pointed at them (e.g. anyone with a phone), with no way to know whether they're being filmed. So this is not a new 'problem'.

              - banning smart glasses doesn't make sense if you're not also banning all devices that can film discreetly (so, smartphones)

              - 'creeps' use hidden miniature cameras, not glasses with an obvious camera right there on their very face

              • By kakacik 2026-03-0320:421 reply

                Only your first point holds; the rest is fluff.

                Try taking a photo of somebody with your phone. The usage will definitely look like you are snapping a picture; nobody walks around holding a phone straight up. So when you take pics with a phone, it is most often obvious. When you insult people by not asking, they see it and react negatively.

                When you point smart glasses at people, nobody knows, and that seems to be the point. Or do they beep and blink an LED to make everybody aware? I don't think so.

                Also, we live in a society where a smart doorbell from a major manufacturer uploaded pics to the cloud even though that shouldn't have been technically possible without a subscription. Security assurances are a moot point; quadruple that for Facebook/Meta, who are consistent assholes about breaking security and privacy to scoop up every possible data point for further advertising. The slaps on the wrist they receive are just the cost of doing business.

                • By filoleg 2026-03-0321:491 reply

                  > Try taking a photo of somebody with your phone. Usage will definitely look like you are snapping a picture, nobody walks around with phones straight up.

                  I urge you to visit any big city and see for yourself how wrong you are. I see it every single day, just during my barely 20-25 minute subway commute to work.

                  And that's the most unremarkable, most uninteresting place and scenario. Any big park, any even remotely touristy location, any public square, any concert or sports venue, and even an overwhelmingly large proportion of restaurants are like that.

                  • By chucksta 2026-03-0418:381 reply

                    > I see it at least every time every day just during my barely 20-25min subway commute to work.

                    Guess it's not that subtle, then.

          • By shinycode 2026-03-0312:11

            Discreet is not the same thing as embedded on your face, hands-free and indistinguishable from regular glasses.

        • By dataflow 2026-03-038:38

          As creepy? It's way creepier than Google Glass.

        • By NitpickLawyer 2026-03-039:451 reply

          > It's as creepy as Google Glass, yet we don't see the same pushback.

          Didn't it come out that the pushback against Google Glass was driven in part by PR companies on behalf of its competition? I remember reading something along those lines.

        • By dwayne_dibley 2026-03-038:18

          Yes, the window has shifted considerably since Google Glass.

        • By monegator 2026-03-037:02

          They are Ray-Bans. Glassholes were ugly and quirky.

        • By zer0zzz 2026-03-037:205 reply

          The youths literally do not care from what I observe.

          How many people under 25 do you interact with on a day to day basis?

          • By pastage 2026-03-0310:011 reply

            There are bubbles, and you are obviously in one if you do not know any privacy-conscious under-25s. I know 15-year-olds who are extreme privacy freaks. Then again, I care about this, so it might be easier for me to find those people. I do find that the people I'd think are least likely are often the most extreme.

            • By zer0zzz 2026-03-0323:01

              You make a good point. I know a couple in their late 20s with kids who are pretty apathetic about their own privacy but who refuse to let Google or iCloud sync photos of their kids.

          • By bigyabai 2026-03-037:27

            Don't forget the older people, many of whom gladly use Facebook or WhatsApp without a second thought.

            Us HN weirdos are some of the last who care, and even we disagree on which tech is creepy. Hard to blame the average Joe for giving up.

          • By riffraff 2026-03-039:591 reply

            I'm pretty sure they care who takes pictures or videos of them. Try going on a train and taking pictures of a young woman or man. The only difference is that these glasses are less noticeable.

            • By zer0zzz 2026-03-040:13

              > and even we disagree on which tech is creepy.

              I think this is a huge point of constant bickering here. Makes it impossible to take most privacy centric discussion seriously.

          • By rangerelf 2026-03-0317:06

            On the other hand, EVERY young person in my circle (my kids and their friends) is insanely privacy-aware. Does all of that mean ... we're not part of the young people anymore?

          • By throwaway290 2026-03-038:18

            I know about 20, and 2 of them are without socials and even without smartphones. It's a counterculture.

            HN is an echo chamber that can't imagine not using some tech. Normal people can...

        • By kstrauser 2026-03-0313:53

          I’ve banned them from our office, for the same reason that I’d tell someone deliberately aiming their phone camera at the screen all day to knock it off. In an office setting, you have to treat these as industrial espionage tools, either by choice of the wearer or of a remote person controlling them.

        • By RobotToaster 2026-03-036:491 reply

          Google Glass was more a victim of its time; normies weren't used to everyone carrying a camera everywhere back then.

          • By dmschulman 2026-03-0312:18

            Google Glass failed because it made the user look like they were wearing a high-tech computer on their face, à la Dragon Ball Z. It looked odd. Meta and Snap learned from this, but it had nothing to do with smartphone cameras not being part of daily life.

            The first iPhone was 2007. Google Glass came out in 2013.

      • By IncreasePosts 2026-03-0316:36

        My 5-year-old didn't consent to going to kindergarten today. But I sent him anyway. I am quite the monster, it seems.

      • By losvedir 2026-03-0320:17

        You don't get how parents share pictures of their children with their friends and family on Facebook?

      • By chucksmash 2026-03-0312:474 reply

        Non-consensual? Abuse?

        The terminology you chose is tasteless, loaded, and detracts from your point.

        • By datsci_est_2015 2026-03-0320:051 reply

          It’s not a controversial viewpoint that a child can’t consent to their information being uploaded permanently to the internet, even by a parent. This is because, as an adult, I can’t retroactively remove my presence from the internet. Seems silly in trivial cases (school website), but is quite severe in others (bathtub photos).

          It’s also not controversial to paint the harmful, profit-seeking actions of companies upon minors as “abusive” (e.g. tobacco firms).

          If anything, your knee-jerk response at their rhetoric raises eyebrows: why would you go to bat for a company who by nearly all public measures is fundamentally evil in aim and structure?

          • By chucksmash 2026-03-0323:271 reply

            If there's something wrong with how we've organized our society, then we need to fix it at a societal level.

            Evoking what the comment in question evokes over uploading pictures of your kid to the internet is not the way to convince people. It takes the thing you want people to care about and exaggerates it in a way that makes your viewpoint trivial to dismiss.

            I say this from the place of someone who deactivated their social media accounts over similar concerns. This is not the way to convince people.

            • By datsci_est_2015 2026-03-041:121 reply

              > This is not the way to convince people.

              Idk, agree to disagree in this case. Sometimes people do need to hear the stark words of those they disagree with to reconsider their weakly, or even deeply, held positions. Especially in this forum, where so many people of what I would figure is “higher intelligence” continue to turn a blind eye to the clearly unethical actions of their employers because $$$. Some of them even convince themselves that what they’re doing is somehow not unethical!

              • By chucksmash 2026-03-041:501 reply

                Consider the US in the late 2010s and where we are now. Making the (oversaturating) argument that X is basically Y is how we got here. The people who agree with you directionally nod along (because of course it is) and you alienate the ones who don't.

                • By datsci_est_2015 2026-03-044:321 reply

                  > X is basically Y is how we got here

                  This is abuser rhetoric that’s become increasingly common in conservative circles, akin to “You’re making me do this to you!”

                  “Woke” individuals (i.e. people who are well-read and critically observant) have been sounding the alarm about warning signs for years, but their message was often twisted and lampooned, leaving an easy out for less critically-observant individuals to mark it as hysteria: “X is basically Y”.

                  You can find plenty of moderate “woke” voices dating back to the Bush administration warning about objectively concerning trends, especially with regards to the surveillance state and rights to privacy, which is why this thread exists in the first place.

                  • By yunwal 2026-03-05 4:25 · 1 reply

                    >> X is basically Y is how we got here

                    > This is abuser rhetoric

                    Oh come on, this has nothing to do with being an abuser. You're doing the online millennial version of calling someone a dork. It's the way an entire generation of "left"ists (with no actual leftist principles) learned to bully the people they have a distaste for. Just call them an abuser, a fascist, etc. etc. until the words mean nothing anymore and actual abusers and fascists can get away with it in broad daylight.

        • By kakacik 2026-03-03 20:36

          Tasteless to you, factually correct to me. Both correct, actually.

          Look, raise your kids however you like; literally nobody in the world cares how great or messed-up they turn out. The result always matches the process, so it's pretty obvious.

          But your freedom to do whatever stops when you start infringing on the rights of me and my family. The right to privacy is, where I live and in most sane places, enforceable by law. Also, it's called not being an asshole, or similar rougher terms.

        • By teejmya 2026-03-03 20:07

          The child did not consent.

          Facebook is currently being sued for targeting children with "sexual exploitation, solicitation, sextortion and human trafficking."

          However, you have chosen to directly attack the above commenter based on your own views. This is tasteless, loaded, and detracts from your point.

        • By iso-logi 2026-03-04 1:00

          How exactly can a child consent to having their face analyzed and tracked, both by Facebook and its 10,000 ad partners, including automatic ingestion into government databases, then used in countless AI algorithms which may act against them?

          They are simply not of sound mind to understand the consequences of such a transaction.

    • By SoftTalker 2026-03-03 4:38 · 1 reply

      Those settings are IMO likely not doing what you think they are. Or might be doing strictly, precisely what they say they are.

      [OFF] "Share data about your Meta devices to help improve Meta products." doesn't preclude sharing data for other purposes.

      [OFF] "Allow your photos and videos to be sent to Meta's cloud for processing and temporary storage." doesn't preclude sending them to Meta's cloud for permanent storage.

      • By sheiyei 2026-03-03 9:55

        And nothing is actually preventing Meta from doing what you refuse anyway. They might get caught, but what are you going to do?

    • By dbreunig 2026-03-03 4:36 · 1 reply

      Last year they pushed out an update stating that if any "Meta AI" feature is left on, they can access image data for training.

      I turned the AI off and used them as headphones and for taking videos while biking. After a couple of rides, I couldn't bring myself to put them on anymore, because people started to recognize them and I realized I didn't want to be associated with them (people are right to assume Meta has access to what they see).

      Meta Ray Bans, if kept simple, could have been a great product. They ruined them.

      • By sheiyei 2026-03-03 9:58

        I think public shaming of that spyware should be a social norm.

    • By levanten 2026-03-03 6:46

      After all that has been revealed to us over the past 15 years, it is really disheartening to see people still thinking that setting a few toggles will prevent these companies from abusing them.

      Just continues to prove that if you solve a bit of inconvenience for them, people will let you exploit them and their families.

    • By matsemann 2026-03-03 9:58 · 1 reply

      I'll confess I look at Meta Glasses the same as Google Glasses: A big sign saying "punch me in the face". If you enter some premises I'm in while wearing those, I'm either leaving or they will have to come off your face somehow.

      Wearing these glasses is just as obnoxious as walking around putting your phone in people's faces while recording.

      • By nedt 2026-03-03 21:54 · 1 reply

        If it says "punch me in the face", then you have bigger problems. And after you get recorded acting on what it says to you, they might be growing. Tell them what you think, but don't forget "Pretty, I feel pretty, ..." - just in case.

        • By matsemann 2026-03-04 6:50 · 1 reply

          Your thinly veiled threat of using the glasses to record and then publish interactions to harass people is exactly why lots of people have issues with these glasses...

          • By nedt 2026-03-04 7:10 · 1 reply

            They can just take the glasses off and stop being a threat. What about you?

            • By matsemann 2026-03-04 10:26

              I have honestly no idea what you're on about.

    • By voidUpdate 2026-03-03 8:09

      Bone conduction headphones let you listen to things while keeping your ears free, and don't upload your children's photos to The Algorithm.

    • By zhubert 2026-03-03 2:32 · 1 reply

      So you believe that you are in control?

      • By chwahoo 2026-03-03 2:49 · 2 replies

        I think the most likely case is: this company is labeling images from Meta AI use, from people who opted in to share their data with Meta.

        It's certainly possible that it's something much more surprising / sinister, but there is a fairly logical combination of settings that I could see a company arguing lets them use the data for training.

        I'm also very certain that few users with these settings would expect the images to be shown to actual people, so I'm not defending Meta.

        • By roughly 2026-03-03 3:10 · 1 reply

          What in Meta's history would lead you to give them the benefit of the doubt like this?

          • By chwahoo 2026-03-03 3:40 · 5 replies

            Perhaps I'm ignorant.

            I know some of the criticism of Meta: many people don't like the way their products are optimized for engagement. I've heard about their weird AI bots interacting on their platform as if they were people. And I know people of all political stripes have had complaints about content moderation and their algorithm.

            But all of that is within the bounds of the law and their terms of service.

            None of it would remotely approach something like: bypassing the well-advertised features in the glasses that show when the camera is in use and secretly recording things to train AI. It's hard to imagine any company's lawyers approving something like that. (this sounds like what many commenters believe is happening)

            FWIW, I suspect this is the relevant section of the Privacy policy:

            > "When you use the Meta AI service on your AI Glasses (if available for your device), we use your information, like Media and audio recordings of your voice to provide the service."

            from: https://www.meta.com/legal/privacy-policy/

            if so, "to provide the service" is doing a lot of work

            • By macNchz 2026-03-03 4:09

              Meta has consistently and repeatedly shown an absolute lack of respect for user privacy for basically as long as they’ve existed as a company. I’m honestly not certain there’s anything fully out of the question as far as things they might do, regardless of what their policies might say.

              Two examples that are top of mind…

              They exploited browser vulnerabilities not unlike malware to track users’ behavior across the web: https://www.eff.org/deeplinks/2025/06/protect-yourself-metas...

              They bought a “privacy” VPN app and used it to harvest data, then abused Apple’s enterprise app deployments to continue to ship the app after it was banned from the app store: https://en.wikipedia.org/wiki/Onavo

            • By 1shooner 2026-03-03 4:29

              I'm not an expert on all of Meta's historical criminal activity, but just going back a few months:

              https://arstechnica.com/tech-policy/2025/08/jury-finds-meta-...

            • By rebolek 2026-03-03 5:39

              Optimized for engagement? I guess heroin is "optimized for engagement" too.

            • By kuschku 2026-03-03 8:52

              You missed the cases where the Facebook app ran a local webserver on your smartphone, which Facebook's ad trackers would send data to in order to track you across all websites, breaking GDPR and circumventing browsers' third-party cookie controls?

            • By awesomeusername 2026-03-03 3:57 · 1 reply

              Please disclose your affiliations

              • By chwahoo 2026-03-03 4:13

                Hah, not meta or anything related

        • By rytis 2026-03-03 3:50

          > there is a fairly logical combination of settings

          I think it's anything but logical, if users (like yourself) have no idea what those settings are, as evident from your previous post.

    • By shriek 2026-03-03 4:06

      A simple on/off toggle isn't going to prevent them from using your data. If your data is in their server then it's going to be used one way or another. Whether in an anonymous way or shipped to where there are no privacy laws.

    • By Lio 2026-03-03 9:27

      I don’t know how anyone has the balls to wear them in public.

      They are creepy as fuck.

      I’m embarrassed to wear my non-Meta Raybans now. That logo is a liability.

    • By whiplash451 2026-03-03 6:36

      Your "Cloud media" setting is off until the company arbitrarily turns it on for you. Seems crazy now; it won't ten years from now. They're just boiling the frog all the way.

    • By alex1sa 2026-03-09 14:57

      The core issue here is that "to provide the service" in privacy policies has become a catch-all that can justify almost anything. I work on web products in the EU and we had to redesign our entire data pipeline for GDPR compliance. The key principle is "data minimization": you collect only what's strictly necessary and delete it after processing. Meta's approach seems to be the opposite: collect everything, process in the cloud, and use vague language to keep the door open for secondary uses like labeling and training.

      The fact that turning off "Cloud media" might not actually prevent your data from being sent to Meta's servers for inference is a textbook dark pattern. Users see a toggle and assume they have control. In practice, the toggle only controls one specific processing path while others remain active.

      Under GDPR, this would likely fail the "informed consent" test: consent must be specific, unambiguous, and freely given. But enforcement is slow and fines are just a cost of doing business at Meta's scale.
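      The "toggle gates only one path" pattern can be sketched in a few lines. This is a hypothetical illustration, not Meta's actual code; the function and setting names are invented:

```python
def handle_capture(media: bytes, settings: dict) -> list[str]:
    """Return the processing paths a captured photo actually takes.

    Hypothetical sketch of a dark-pattern toggle: the user-visible
    setting gates only one path, while other paths run regardless.
    """
    paths = []

    # The toggle the user sees ("cloud_media") gates only this branch...
    if settings.get("cloud_media", False):
        paths.append("cloud_storage")

    # ...while these paths run unconditionally, toggle or no toggle.
    paths.append("cloud_inference")  # sent to servers to answer the AI query
    paths.append("telemetry")        # usage data "to provide the service"
    return paths

# A user who switched "cloud_media" off still sees the photo leave the device:
print(handle_capture(b"...", {"cloud_media": False}))
# prints ['cloud_inference', 'telemetry']
```

      Switching "cloud_media" off changes nothing about the inference or telemetry paths; only a setting that gates every network path would give users the control the toggle implies.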

    • By croes 2026-03-05 10:00

      In the future your kids could sue you for uploading their pictures.

    • By alliao 2026-03-03 22:03

      I don't trust Zuck at all, and I'm not naive about any of this. I'm sure the words used above are watertight in a court of law, but I bet you there are shenanigans in places where the light doesn't reach.

    • By tux1968 2026-03-03 4:13

      You might enjoy these conveniences now, but this is just the pre-enshittification stage. Soon enough, to take advantage of those features you will have advertisements integrated into your view, and your data will be scraped for whatever it's worth to Meta.

    • By huddert 2026-03-03 3:08 · 3 replies

      Do you take them off in the bathroom? Or if the wife is feeling spontaneous?

  • By notyetmachine 2026-03-03 8:00

    Ghanaian authorities are seeking the arrest of a Russian national who was using the glasses to record himself picking up, and sleeping with, women in Ghana and Kenya. He uploaded the videos to social media and Telegram. It was quite the story on African tech Twitter last month.

    https://www.bbc.com/news/articles/c9wn5p299eko

HackerNews