Every company building your AI assistant is now an ad company

2026-02-20 18:55 · juno-labs.com

Every company building an AI assistant is funded by advertising. They are also building hardware that listens and watches 24/7. Local on-device inference is the only way out.

Pre-orders for the Juno Pioneer Edition are now open -- reserve your Juno today!

On January 16, OpenAI quietly announced that ChatGPT would begin showing advertisements. By February 9th, ads were live. Eight months earlier, OpenAI spent $6.5 billion to acquire Jony Ive's hardware startup io. They're building a pocket-sized, screenless device with built-in cameras and microphones -- "contextually aware," designed to replace your phone.

But this isn't a post about OpenAI. They're just the latest. The problem is structural.

Every single company building AI assistants is now funded by advertising. (We can quibble about Apple.) And every one of them is building hardware designed to see and hear everything around you, all day, every day. These two facts are on a collision course, and local on-device inference is the only way off the track.

The always-on future is inevitable

Before we talk about who's building it, let's be clear about what's being built.

Every mainstream voice assistant today works behind a gate. You say a magic word -- "Hey Siri," "OK Google," "Alexa" -- and only then does the system listen. Everything before the wake word is theoretically discarded.

This was a reasonable design in 2014. It is a dead end for where AI assistance needs to go.
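
That gate is easy to picture in code. A minimal sketch of wake-word gating, purely illustrative (real engines operate on audio frames, not word tokens; `WAKE_WORD`, `pre_roll`, and every name here are made up):

```python
from collections import deque

WAKE_WORD = "hey_assistant"  # made-up trigger token

def gated_listen(tokens, pre_roll=3):
    """Keep only a tiny rolling pre-roll buffer; everything earlier is
    discarded until the wake word is heard."""
    buffer = deque(maxlen=pre_roll)  # older speech silently falls off
    captured = []
    triggered = False
    for token in tokens:
        if triggered:
            captured.append(token)
        elif token == WAKE_WORD:
            triggered = True
            captured.extend(buffer)  # keep only the short pre-roll as context
        else:
            buffer.append(token)  # pre-trigger speech never leaves the buffer
    return captured

# Most of "are we out of eggs" never reaches the assistant:
stream = ["are", "we", "out", "of", "eggs", "hey_assistant", "set", "a", "timer"]
print(gated_listen(stream))  # ['out', 'of', 'eggs', 'set', 'a', 'timer']
```

Everything said before the trigger, minus a few seconds of pre-roll, is simply gone -- which is the design's whole point, and its limitation.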

Here's what happens in a real kitchen at 6:30am (anonymized from one of our test homes; the real version was messier and included a toddler screaming about Cheerios):

"Are we out of eggs again? I'm thinking frittata tonight but we also need to -- oh wait, did the school email about Thursday? I think there's a early release. Anyway, if we don't have eggs, I'll get them from Target and also that dish soap, the blue one."

Nobody is going to preface that with a wake word. The information is woven into natural speech between two flustered parents getting the family ready to leave the house. The moment you require a trigger, you lose the most valuable interactions -- the ones that happen while people are living their lives, not thinking of how to give context to an AI assistant.

You cannot build proactive assistance behind a wake word. The AI has to be present in the room, continuously, accumulating context over days and weeks and months, to build the understanding that makes proactive help possible.

This is where every major AI company is heading. Not just audio -- vision, presence detection, wearables, multi-room awareness. The next generation of AI assistants will hear and see everything. Some will be on your face or in your ears all day. They will be always on, always sensing, always building a model of your life.

The question is not whether always-on AI will happen. It's who controls the data it collects. And right now, the answer to that question is: advertising companies.

Policy is a promise. Architecture is a guarantee.

Here's where the industry's response gets predictable. "We encrypt the data in transit." "We delete it after processing." "We anonymize everything." "Ads don't influence the AI's answers." "Read our privacy policy."

With cloud processing, every user is trusting:

• The company's current privacy policy
• The company's future privacy policy
• Every employee with production access
• Every third-party vendor in the processing pipeline
• Every government that can issue a subpoena or national security letter
• Every advertiser partnership that hasn't been announced yet

OpenAI's own ad announcement includes this language: "OpenAI keeps conversations with ChatGPT private from advertisers, and never sells data to advertisers." It sounds reassuring. But Google scanned every Gmail for ad targeting for thirteen years before quietly stopping in 2017. Policies change. Architectures don't.

Policy is a promise. Architecture is a guarantee.

When a device processes data locally, the data physically cannot leave the network. There is no API endpoint to call. There is no telemetry pipeline. There is no "anonymized usage data" that somehow still contains enough signal to be useful for ad targeting. The inference hardware sits inside the device or in the user's home, on their network.
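
The difference can be made concrete. A minimal sketch of the structural point (not any particular product's code): a service bound to the loopback interface is unreachable from outside the machine, no matter what a policy document says.

```python
import socket

# An inference endpoint bound to loopback is structurally unreachable
# from outside this machine; there is nothing for a policy to protect.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))  # loopback only; port 0 = pick any free port
server.listen()
host, port = server.getsockname()
server.close()
```

The same idea scales up to binding only on a LAN interface behind the user's own router.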

Your email is sensitive. A continuous audio and visual feed of your home is something else entirely. It captures arguments, breakdowns, medical conversations, financial discussions, intimate moments, parenting at its worst -- the completely unguarded version of people that exists only when they believe nobody is watching. (We wrote a deep dive on our memory system in Building Memory for an Always-On AI That Listens to Your Kitchen.)

Amazon already showed us what happens. They eliminated local voice processing. They planned to feed Alexa conversations to advertisers. They partnered Ring with a surveillance network accessible to federal law enforcement. What happens when those same economic incentives are applied to devices that capture everything?

The edge inference stack is ready

The counterargument is always the same: "Local models aren't good enough." Three years ago, that was true. It is no longer true.

You can run a complete ambient AI pipeline today -- real-time speech-to-text, semantic memory, conversational reasoning, text-to-speech -- on a device that fits next to a cable box (remember those?). No fan noise. A one-time hardware purchase with no per-query fee and no data leaving the building. New model architectures, better compression, and open-source inference engines have converged to make this possible, and the silicon roadmap points in one direction: more capability per watt, every year. (We've been running always-on prototypes in five homes. The complaints we get are about the AI misunderstanding context, not about raw model capability. That's a memory architecture problem, not a model size problem.)
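
A sketch of what that pipeline's skeleton could look like; every function here is a made-up stub standing in for a local model, not a real API:

```python
# Illustrative shape of the pipeline only: each function is a stub standing
# in for a local model, and none of them makes a network call.

def speech_to_text(audio: bytes) -> str:
    return audio.decode()  # stand-in for a local STT model

def recall(utterance: str, memory: list[str]) -> list[str]:
    # naive semantic memory: return entries sharing any word with the utterance
    return [m for m in memory if any(word in m for word in utterance.split())]

def reason(utterance: str, context: list[str]) -> str:
    return f"Heard: {utterance!r}; relevant memory: {context}"  # local LLM stand-in

def text_to_speech(reply: str) -> bytes:
    return reply.encode()  # stand-in for a local TTS model

def ambient_step(audio: bytes, memory: list[str]) -> bytes:
    text = speech_to_text(audio)
    context = recall(text, memory)
    memory.append(text)  # context accumulates on-device, nowhere else
    return text_to_speech(reason(text, context))

memory: list[str] = ["we are out of eggs"]
out = ambient_step(b"add eggs to the list", memory)
```

The point of the sketch is the data flow: audio in, speech out, and the memory list never leaves the function boundary, let alone the building.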

Are local models as capable as the best cloud models? No. But we're usually not asking our smart speaker to re-derive the Planck constant.

What's needed is hardware that runs inference on-device. Models that process audio and video locally and never transmit them. A business model based on selling the hardware and software, not the data the hardware collects. An architecture where the company that makes the device literally cannot access the data it processes, because there is no connection to access it through.

The most helpful AI will also be the most intimate technology ever built. It will hear everything. See everything. Know everything about the family. The only architecture that keeps that technology safe is one where it is structurally incapable of betraying that knowledge. Not policy. Not promises. Not a privacy setting that can be quietly removed in a March software update.

Choose local. Choose edge. Build the AI that knows everything but phones home nothing.



Comments

  • By paxys 2026-02-20 22:41 · 10 replies

    This spiel is hilarious in the context of the product this company (https://juno-labs.com/) is pushing – an always on, always listening AI device that inserts itself into your and your family’s private lives.

    “Oh but they only run on local hardware…”

    Okay, but that doesn't mean every aspect of our lives needs to be recorded and analyzed by an AI.

    Are you okay with private and intimate conversations and moments (including of underage family members) being saved for replaying later?

    Have all your guests consented to this?

    What happens when someone breaks in and steals the box?

    What if the government wants to take a look at the data in there and serves a warrant?

    What if a large company comes knocking and makes an acquisition offer? Will all the privacy guarantees still stand in the face of the $$$?

    • By zmmmmm 2026-02-20 23:11 · 4 replies

      The fundamental problem with a lot of this is that the legal system is absolute: if information exists, it is accessible. If the courts order it, nothing you can do can prevent the information being handed over, even if that means a raid of your physical premises. Unless you encrypt it in a manner resistant to any way you can be compelled to decrypt it, the only way to have privacy is for information not to exist in the first place. It's a bit sad as the potential for what technology can do to assist us grows that this actually may be the limit on how much we can fully take advantage of it.

      I do sometimes wish it would be seen as an enlightened policy to legislate that personal private information held in technical devices is legally treated the same as information held in your brain. Especially for people for whom assistive technology is essential (deaf, blind, etc). But everything we see says the wind is blowing the opposite way.

      • By ajuhasz 2026-02-20 23:24 · 2 replies

        Agreed -- while we've tried to think through this and build in protections, we can't pretend that there is a magical perfect solution. We do have strong conviction that doing this inside the walls of your home is much safer than doing it within any company's datacenter (I accept that some just don't want this to exist, period, and we won't be able to appease them).

        Some of our decisions in this direction:

          - Minimize how long we have "raw data" in memory
          - Tune the memory extraction to be very discriminating and err on the side of forgetting (https://juno-labs.com/blogs/building-memory-for-an-always-on-ai-that-listens-to-your-kitchen)
          - Encrypt storage with hardware protected keys (we're building on top of the Nvidia Jetson SOM)
        
        We're always open to criticism on how to improve our implementation around this.

        • By sixtyj 2026-02-26 18:23

          The device should be accompanied by plenty of examples so people are really aware of how stored data could be misused. Alexa or any other similar device -- their users are technically illiterate. Do you remember the leaks of movie stars' iPhone images? Multiply it by thousands... Court orders, burglars, hackers -- all bad actors imaginable...

          For you, as the producer, those situations can be a nightmare if not well described in the operating conditions. And devices should not come pre-set-up (don't be "Google-evil" -- they track everything if you don't set it up differently, and the setting is always hidden deep in a third-level menu behind 2-step verification).

        • By bossyTeacher 2026-02-21 14:06

          > - Minimize how long we have "raw data" in memory

          I believe you should allow people to set how long the raw data is stored, as well as dead man's switches.

      • By HWR_14 2026-02-21 3:29 · 4 replies

        > Unless you encrypt it in a manner resistant to any way you can be compelled to decrypt it,

        In the US, it is not legal to compel you to turn over a password; that's a violation of your Fifth Amendment rights. In the UK, you can be jailed until you turn over the password.

        • By eel 2026-02-21 15:08 · 2 replies

          At Amazon, their travel trainings always recommended giving out your laptop password if asked by law enforcement or immigration, regardless of whether it was legal in the jurisdiction. Then you were to report the incident as soon as possible afterwards, and you'd have to change your password and possibly get your laptop replaced.

          That kind of policy makes sense for the employee's safety, but it definitely had me thinking how they might approach other tradeoffs. What if the Department of Justice wants you to hand over some customer data that you can legally refuse, but you are simultaneously negotiating a multi-billion dollar cloud hosting deal with the same Department of Justice? What tradeoff does the company make? Totally hypothetical situation, of course.

          • By ratorx 2026-02-22 8:21

            You can make it so employees don’t have ambient access to data, and require multi-party approval for all actions that require user data. Giving away a user password should be treated as a routine risk.

            I’m not saying that’s how it actually works, or that this process doesn’t have warts, but the idea of individual employees not having direct access is not novel.

          • By DANmode 2026-02-21 15:39

            Totally.

        • By SpicyLemonZest 2026-02-21 17:05 · 1 reply

          There are many jurisdictions in the US (not all!) where you can't be compelled to turn over a password in a criminal case that's specifically against yourself. But that's a narrow exception to the general principle that a court can order you to give them whatever information they'd like.

          • By HWR_14 2026-02-21 20:35 · 1 reply

            It's a federal constitutional protection to not be compelled to turn over your password. If you think a jurisdiction can compel you, I would like references.

        • By lesuorac 2026-02-21 19:30 · 1 reply

          Well, currently sure.

          However, back when the constitution was amended the 5th amendment also applied to your own papers. (How is using something you wrote down not self-incrimination!?).

          It only matters what the rule is one year in the future, because the moment the interpretation changes, all that back data becomes immediately allowed.

          • By HWR_14 2026-02-21 21:00 · 1 reply

            Papers were covered under the 4th amendment. It's always been the case that a warrant could let the government access your journal.

            • By lesuorac 2026-02-21 23:22

              > See United States v. Hubbell. In Boyd v. United States,[60] the U.S. Supreme Court stated that "It is equivalent to a compulsory production of papers to make the nonproduction of them a confession of the allegations which it is pretended they will prove".

              https://en.wikipedia.org/wiki/Fifth_Amendment_to_the_United_...

              This opinion hasn't lasted the test of time, but historically your own documents could not be used against you. Eventually the Supreme Court decided that since corporations weren't people, their documents could be used against them -- and later that people weren't protected by their own documents either.

        • By rrr_oh_man 2026-02-21 13:10 · 1 reply

          There’s an interesting loophole for Face ID…

          • By estimator7292 2026-02-21 16:45 · 1 reply

            In the US, law enforcement is specifically allowed to compel biometric scans to unlock personal devices.

            • By schrodinger 2026-02-21 17:56

              FYI -- Because of this, Apple made a feature where if you click the power button 5 times, your phone goes into "needs the passcode to unlock" mode.

              Whenever I'm approaching a border crossing (e.g. in an airport), I'm sure to discreetly click power 5 times. You also get haptic feedback on the 5th click so you can be sure it worked even from within your pocket.

      • By Sharlin 2026-02-21 17:24 · 1 reply

        > nothing you can do can prevent the information being handed over

        I'm being a bit flippant here, but thermite typically works fine.

        • By DontForgetMe 2026-02-22 1:54

          Tricky to take data off the cloud, even with thermite

    • By drdaeman 2026-02-21 4:01 · 5 replies

      > Are you okay with private and intimate conversations and moments (including of underage family members) being saved for replaying later?

      Is this somehow fundamentally different from having memories?

      Because I thought about it, and decided that personally I do - with one important condition, though. I do because my memories are not as great as I would like them to be, and they decline with stress and age. If a machine can supplement that in the same way my glasses supplement my vision, or my friend's hearing aid supplements his hearing - that'd be nice. That's why we have technology in the first place, to improve our lives, right?

      But, as I said, there is an important condition. Today, what's in my head stays in there, and is only directly available to me. The machine-assisted memory aid must provide the same guarantees. If any information leaves the device without my direct instruction - that's a hard "no". If someone with physical access to the device can extract the information without a lot of effort - that's also a hard "no". If someone can too easily impersonate me to the device and improperly gain access - that's another "no". Maybe there are a few more criteria, but I hope you got the overall idea.

      If a product passes those criteria, then it - by design - cannot violate others' privacy - no more than I can do myself. And then - yeah - I want it, wish there'd be something like that.

      • By encom 2026-02-21 5:38 · 3 replies

        >That's why we have technology in the first place, to improve our lives, right?

        No, we have technology to show you more and more ads, sell you more and more useless crap, and push your opinions on Important Matters toward the state approved ones.

        Of course indoor plumbing, farming, metallurgy and printing were great hits, but technology has had a bit of a dry spell lately.

        If "An always-on AI that listens to your household" doesn't make you recoil in horror, you need to pause and rethink your life.

        • By drdaeman 2026-02-21 6:40 · 1 reply

          > you need to pause and rethink your life.

          If you can't think of an always-on AI that listens but doesn't cause any horrors (even though it's improbable such a thing would get to market in the world we live in), I urge you to exercise your imagination. Surely it's possible to think of an optimistic scenario?

          Even more so, if you think technology is here to unconditionally screw us up no matter what. Honestly - when the world is so gloomy, seek something nice, even if a fantasy.

          • By encom 2026-02-21 7:45 · 1 reply

            Not only is it improbable, it's a complete fantasy. It's not going to happen. And personally, I'm of the opinion that having AI be a constant presence in your life and relying on it to assist you with every minor detail or major decision is dystopian in the extreme, and that's not even factoring in the inevitable Facebook-esque monetisation.

            >when the world is so gloomy, seek something nice, even if a fantasy

            I don't need fantasy to do that. My something nice is being in nature. Walking in the forest. Looking at and listening to the ocean by a small campfire. An absence of stimulation. Letting your mind wander. In peace, away from technology. Which is a long winded way to say "touch grass", but - and I say this sincerely without any snark - try actually doing it. You realise the alleged gloom isn't even that bad. It's healing.

            • By drdaeman 2026-02-21 10:05 · 1 reply

              > I'm of the opinion that having AI be a constant presence in your life and relying on it to assist you with every minor detail or major decision is dystopian in the extreme

              Could that be because you're putting some extra substance in what you call an "AI"? Giving it some properties that it doesn't necessarily have?

              Because when I'm thinking about "AI" all I'm giving to it is "a machine doing math at scale that allows us to have meaningful relation with human concepts as expressed in a natural language". I don't put anything extra in it, which allows me to say "AI can do good things while avoiding bad things". Surely, a machine can be made to crunch numbers and put words together in a way that helps me rather than harms me.

              Oh, and if anything - I don't want "AI" to save me from thinking. It cannot do that for me anyway, in principle. I want it to help me do things machines finally start to do acceptably well: remember and relate things together. This said, yeah, I guess I was generous with just a single requirement - now I can see that a personal "AI" also needs its classifications (interpretations) to match the individual user's expectations as closely as possible at all times.

              > It's not going to happen.

              I can wholeheartedly agree as far as "it is extremely unlikely to happen", but... to say "it is not going to happen", after the last five years of "that wasn't on my bingo card"? How can you be so sure? How do we know there won't be some more weird twists of history? Call me naive, but I'd rather imagine something nice happening for a change. And it's not beyond fathomable that something crashes and the resulting waves bring us towards a somewhat better world.

              Touching grass is important, and it helps a lot, but as soon as you're back - nothing has gone anywhere in the meanwhile. The society with all its mess doesn't disappear while we stop looking. So seeking an optimistic possibility is also important, even if it may seem utterly unrealistic. I guess one just has to have something to believe in?

              • By duskdozer 2026-02-21 15:26

                I can imagine a lot of ways we could be using the new tech advancements of the last decade or two in really great ways, but unfortunately I've seen things go in very bad directions almost every time, and I do not have faith that this trend will stop in the future.

        • By lukan 2026-02-21 12:40

          I really hope that before I get old and fragile, I will get my smart robotic house, with a (local!) AI assistant always listening to my wishes and then executing them.

          The alternative is the horror of being old and forgotten in half-hearted care, like most old people right now. AI and robots can bring empowerment. And it is up to us whether we let ad companies serve them to us from the cloud, or run local models in the basement.

        • By schrodinger 2026-02-21 18:09 · 2 replies

          I don't think that ads _have_ to be evil.

          When I look at Google, I see a company that is fully funded by ads, but provides me a number of highly useful services that haven't really degraded over 20 years. Yes, the number of search results that are ads grew over the years, but by and large, Google search and Gmail are tools that serve me rather benevolently. And if you're about to disagree with this, ask yourself whether you're using Gmail, and why.

          Then I look at Meta or X, and I see a cesspool of content that's driven families apart and created massive societal divides.

          It makes me think that ads aren't the root of the problem, though maybe a "necessary but not sufficient" component.

          • By encom 2026-02-21 22:30 · 1 reply

            Google is almost cartoonishly evil these days. I think that's pretty much an established fact at this point.

            I'm not using Gmail, and I don't understand why anyone would voluntarily. It was the worst email client I'd ever used, until I had to use Outlook at my new job.

            The only Google products I use are YouTube, because that's where the content is, and Android, because iOS is garbage and Apple is only marginally less evil than Google.

            • By schrodinger 2026-02-21 23:27

              I’ve recently begun using my personal domain as my primary email address, with it forwarding to gmail so I can “get out” easily if I ever had a reason. That said, I’ve found Gmail’s service great, their spam filtering highly effective, (although I haven’t surveyed the competition lately so it’s possible their huge advantage no longer exists) and their features pretty user-friendly (eg the one-click unsubscribe as well as a page to view all your subs in one place). I have never felt like they _abused_ the immense amount of data they have about me nor used it for “evil” purposes; only to profit on relevant ads that are at least clearly marked and unobtrusive. I don’t like that they have so much data on me, but I’ve felt like they’ve been transparent about it, so it’s been on me for making a decision eyes wide open. As opposed to Meta and the shady shit they’ve been caught doing...

              That said, I’m open-minded and obviously thinking about this given moving to my own domain.

              What’s the evil behavior you’ve experienced? I’m down to move off if I’m oblivious to something…

          • By whattheheckheck 2026-02-22 15:17

            Yeah, the question is what the optimal feedback loop between producers and consumers is, and what the appropriate communication channels are that respect human rights and that we can all agree on.

      • By dbtc 2026-02-21 5:10 · 1 reply

        This will not augment memory the way glasses do for sight, this will replace memory the way a wheelchair replaces legs.

        • By estimator7292 2026-02-21 17:01 · 1 reply

          So do you think disabled people deserve to participate in society or not?

          • By walt_grata 2026-02-21 17:19 · 1 reply

            Yes, but able-bodied people shouldn't decide to use a wheelchair until their legs atrophy and become useless.

            • By drdaeman 2026-02-21 19:30 · 1 reply

              I understand the rationale, but don’t you see how this idea contradicts autonomy of decisions for able-minded people? Such good intentions tend to be a pavement on roads to bad places.

              I’d rather suggest to inform about all the potential benefits and drawbacks, but leave decisions with the individual.

              Especially given that it’s not something irreversibly permanent.

              • By walt_grata 2026-02-21 20:12

                I'd agree, but we're closer to being forced into the chair than to making a decision that's right for us.

      • By shevy-java 2026-02-21 5:48 · 1 reply

        Memories are usually private. People can make them public via a blog.

        AI feels more like an organized sniffing tool here.

        > If a product passes those criteria, then it - by design - cannot violate others' privacy

        A product can most assuredly violate privacy. Just look at how Facebook gathered offline data to connect people to real-life data points, without their consent - and without them knowing. That's why I call it Spybook.

        Ever since the USA became hostile to Canadians and Europeans this has also become much easier to deal with anyway - no more data is to be given to US companies.

        • By drdaeman 2026-02-21 6:14

          > AI feels more like an organized sniffing tool here.

          "AI" on its own is an almost meaningless word, because all it tells is that there's something involving machine learning. This alone doesn't have any implied privacy properties, the devil is always in the untold details.

          But, yeah, sure, given the current trends I don't think this device will be privacy-respecting, not to say truly private.

          > A product can most assuredly violate privacy.

          That depends on the design and implementation.

        • By drdaeman 2026-02-22 9:51

          I’m not sure I understand the moral of the story. Would you share yours?

          A crudest summary of my understanding is that it’s a tale of some dude with eidetic memory who - as a consequence of it - develops a conlang with a huge vocabulary but without abstract concepts.

          It’s a stretch for sure, but all I could think of it, is that it’s possibly a tale of how a person with an eidetic memory may find the sheer volume of available information so overwhelming it may even hurt their information processing, like the formation of associative memories. Or something like that, I don’t think I know how it works.

          Given that, my idea of how machine-assisted memory is supposed to work is the opposite: it should provide limited but relevant information, with a lot of classifications and references to go further. Like an encyclopedia with an extra-fancy natural-language querying mechanism. Its whole point is to give awareness of anything the user wants to know, faster and more comprehensively than regular diaries, but focused on just what matters for an inquiry.

          Funes, in my understanding, wouldn’t have an idea of a “key” but only “that front door key on a silver keychain” or “the smaller mailbox key with a deep scratch on the right side”. If I were querying external memory through a natural language interface, it’d be doing the opposite of that, heavily relying on abstract ideas as classifiers. A machine that cannot connect “mail”, “key” and “location” into a meaningful query would be useless. A computer “AI” assistant is not an eidetic memory (at least until we start to consider BMI), it’s only a personal encyclopedia at one’s fingertips.

      • By qotgalaxy 2026-02-21 6:39

        [dead]

    • By BoxFour 2026-02-20 23:23 · 2 replies

      It’s definitely a strange pitch, because the target audience (the privacy-conscious crowd) is exactly the type who will immediately spot all the issues you just mentioned. It's difficult to think of any privacy-conscious individual who wouldn't want, at bare minimum, a wake word (and more likely just wouldn't use anything like this period).

      The non privacy-conscious will just use Google/etc.

      • By yndoendo 2026-02-21 1:57 · 1 reply

        A good example of this is what one of my family member's partners said: "Isn't it creepy that you just talked about something and now you're seeing ads for it? Guess we just have to accept it."

        My response was that no, I don't get any of that, because I disable that technology: it is always listening and can never be trusted. There is no privacy in those services.

        They did not like that response.

        • By dotancohen 2026-02-21 4:45

          I used to be considered a weirdo and creep because I would answer the question of why don't I have WhatsApp with the answer "I do not accept their terms of service". Now people accept this answer.

          I don't know what changed, but the general public is starting to figure out that they actually can disagree with large tech companies.

      • By bandrami 2026-02-21 9:15

        I want a hardware switch for the microphone. If it can hear the wake word it's already listening.

    • By com2kid 2026-02-20 23:50 · 3 replies

      > Are you okay with private and intimate conversations and moments (including of underage family members) being saved for replaying later?

      Typically that's not how these things work. Speech is processed using ASR (automatic speech recognition) and then run through a prompt that checks for appropriate tool calls.

      I've been meaning to basically make this myself but I've been too lazy lately to bother.

      I actually want a lot more functionality from a local only AI machine, I believe the paradigm is absurdly powerful.

      Imagine an AI reminding you that you've been on HN too long and offering to save off the comment you're working on for later and then moving the browser window to a different tab.

      Having idle thoughts in the car of things you need to do and being able to just say them out loud and know important topics won't be forgotten about.

      I understand that for people who aren't neurodivergent, forgetting to do something incredibly critical to one's health and well-being isn't something that happens (often), but for plenty of other people a device that just helps them remember important things can be dramatically life-changing.

      • By ramenbytes 2026-02-21 1:33 · 1 reply

        > Imagine an AI reminding you that you've been on HN too long and offering to save off the comment you're working on for later and then moving the browser window to a different tab.

        > Having idle thoughts in the car of things you need to do and being able to just say them out loud and know important topics won't be forgotten about.

        > I understand for people who aren't neurodiverse that the idea of just forgetting to do something that is incredibly critical to ones health and well-being isn't something that happens (often) but for plenty of other people a device that just helps people remember important things can be dramatically life changing.

        Those don't sound like things that you need AI for.

        • By jcgrillo 2026-02-21 3:15

          > > Imagine an AI reminding you that you've been on HN too long and offering to save off the comment you're working on for later and then moving the browser window to a different tab.

          This would be its death sentence. Nuked from orbit:

            sudo rm -rfv /
          
          Or maybe if there's any slower, more painful way to kill an AI then I'll do that instead. I can only promise the most horrible demise I can possibly conjure is that clanker's certain end.

      • By dotancohen 2026-02-21 4:46 | 1 reply

          > Having idle thoughts in the car of things you need to do and being able to just say them out loud and know important topics won't be forgotten about.
        
        I push a button on the phone and then say them. I've been doing this for over twenty years. The problem is ever getting back to those voice notes.

        • By JustinLeung 2026-02-23 13:43

          Absolutely. There are fundamentally two sides to this problem: capturing the intent and making it actionable.

      • By reilly3000 2026-02-21 1:12 | 1 reply

        It really is a prosthetic for minds that struggle to organize themselves.

    • By tzs 2026-02-21 4:14

      > Are you okay with private and intimate conversations and moments (including of underage family members) being saved for replaying later?

      Maybe I missed it, but I didn't see anything there that said it saves conversations. It sounds like it processes them as they happen and then takes actions that it thinks will help you achieve whatever goals of yours it can infer from the conversation.

    • By SkyPuncher 2026-02-21 1:09 | 2 replies

      I agree. I also don't really have an ambient assistant problem. My phone is always nearby and Siri picks up wake words well (or I just hold the power button).

      My problem is Siri doesn't do any of this stuff well. I'd really love to just get it out of the way so someone can build it better.

      • By ajuhasz 2026-02-21 1:16

        Some of the more magical moments we've had with Juno are automatic shopping list creation (saying "oh no, we are out of milk and eggs" out loud becomes a shopping list, without having to remember to tell Siri) and event tracking around the kids ("Don't forget next Thursday is early pickup"). A nice freebie is moving the wake word to the end: "What's the weather Juno today?" is much more natural than a prefixed wake word.
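
        Moving the wake word off the front can be sketched like this: instead of gating on a prefix, scan each finished utterance for the wake word and treat the rest as the command. This is an illustrative assumption about the approach, not Juno's actual implementation.

```python
# Sketch of a non-prefixed wake word: the wake word may appear anywhere
# in the utterance, and everything else becomes the command. Hypothetical
# logic, not any product's real code.

WAKE_WORD = "juno"

def extract_command(utterance: str):
    """Return the command if the wake word appears anywhere in the
    utterance, else None (the utterance is simply ignored)."""
    words = utterance.lower().rstrip(".!?").split()
    if WAKE_WORD not in words:
        return None
    # Drop the wake word wherever it lands ("what's the weather juno today").
    return " ".join(w for w in utterance.rstrip(".!?").split()
                    if w.lower() != WAKE_WORD)

print(extract_command("What's the weather Juno today?"))
```

        The trade-off is that the device must buffer and transcribe speech before knowing whether it was addressed, which is exactly why the streaming, discard-by-default design matters.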

      • By DontForgetMe 2026-02-22 2:15

        Honestly this has been my main issue with the tech privacy issue for years.

        I love smart gadgets. I really wanted to go all in and automate my life, and the whole 'personal data' thing seemed like a really fair trade off for what was promised.

        Only, they took all the data and never really delivered the convenience.

        I spent about 10 years trying to figure out why WearOS needed to collect all my data, all the time, even when I wasn't wearing a watch, and yet when it crashed every few weeks, there was no way to restore anything from a backup. Had to start again from scratch every time (or ADB). What's the point in collecting all that data when I couldn't usefully access any of it?

        Same thing with Google home, more or less. I wasn't a big fan of the terms and conditions, but hey, it's super convenient just being able to announce 'ok Google I need to get out of bed soon' and have it turn on the lights, play music etc.

        Only, some mornings it wouldn't do that. It wouldn't even remember that I'd set an alarm. And alarms kinda need to be reliable: if they work 19 times out of 20, that's not actually good enough to rely on. With dumb alarm clocks, or phones, you can be pretty sure the alarm will go off. So, not much point using Google for morning routines and alarms. And so, not much point giving it full access to everything I say at any time.

        I would give it all my data if it could reliably remember to play preset alarms, or give a basic backup and restore option. Hell, I'd probably give Google access to all my photos if the UI wasn't so ugly.

        I still don't really understand big tech's reasoning here. If data is the new gold and everyone is dying for more ways to track us all and harvest our data, why not just build a decent product? If phone batteries lasted for days, people would spend more time on their phones; isn't that what the tech companies want? If competent people worked on making Gmail efficient, light, user friendly, and not crawling with bugs, more people would use it, and there would be more data.

        It's like the oligarchs trying to take over the world will do literally anything, anything to win, other than paying people to develop decent, reliable products.

    • By ajuhasz 2026-02-20 23:13

      > Are you okay with private and intimate conversations and moments (including of underage family members) being saved for replaying later?

      One of our core architecture decisions was to use a streaming speech-to-text model. At any given time about 80ms of actual audio is in memory, and about 5 minutes of transcribed audio (text) is in memory (this helps the STT model know the context of the audio, for higher transcription accuracy).

      Of these 5-minute transcripts, those that don't become memories are forgotten, so only selected extracted memories are durably stored. Currently we store the transcript with the memory (this was a request from our prototype users, to help them build confidence in transcription accuracy), but we'll continue to iterate based on feedback on whether this is the correct decision.
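
      The rolling-buffer design described here can be sketched roughly as below. The segment counts and the memory-selection flag are illustrative assumptions (the comment describes the windows by duration, not implementation):

```python
from collections import deque

# Rough sketch of the described pipeline: transcribed text lives in a
# bounded rolling buffer (approximating the ~5-minute context window),
# and only segments flagged as memories reach the durable store.
# The selection flag stands in for whatever model makes that call.

MAX_SEGMENTS = 60  # e.g. roughly 5 minutes at one segment every 5 seconds

class TranscriptBuffer:
    def __init__(self):
        self.segments = deque(maxlen=MAX_SEGMENTS)  # old text falls off the end
        self.memories = []  # the only durable store

    def add_segment(self, text: str, is_memory: bool):
        self.segments.append(text)
        if is_memory:
            # Store the transcript alongside the memory, as described,
            # so users can audit transcription accuracy.
            self.memories.append({"memory": text, "transcript": text})

buf = TranscriptBuffer()
buf.add_segment("idle chatter about the weather", is_memory=False)
buf.add_segment("we're out of milk and eggs", is_memory=True)
print(len(buf.memories))  # 1
```

      Everything outside `memories` is transient by construction: once a segment scrolls off the deque, no copy of it exists anywhere.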

    • By throwaway5465 2026-02-21 1:35

      They seem quite honest about who they are and how they do what they do.

    • By peyton 2026-02-21 13:36

      I’m 99% sure this article is AI generated. Regardless, people will gravitate to the tool that solves their problems. If their problem is finding a local plumber or a restaurant they like, advertising will be involved.

  • By thundergolfer 2026-02-21 0:14

    I agree with the core premise that the big AI companies are fundamentally driven towards advertising revenue and other antagonistic but profit-generating functionality.

    Also agree with paxys that the social implications here are deep and troubling. Having ambient AI in a home, even if it's caged to the home, has tricky privacy problems.

    I really like the explorations of this space done in Black Mirror's The Entire History of You[1] and Ted Chiang's short story The Truth of Fact, the Truth of Feeling[2].

    My bet is that the home and other private spaces almost completely yield to computer surveillance, despite the obvious problems. We've already seen this happen with social media and home surveillance cameras.

    Just as in Chiang's story, where spaces were 'invaded' by writing, AI will fill the world, and those opting out will occupy the same marginal positions as dumb phone users and people without home cameras or televisions.

    Interesting times ahead.

    1. https://en.wikipedia.org/wiki/The_Entire_History_of_You
    2. https://en.wikipedia.org/wiki/The_Truth_of_Fact,_the_Truth_o...

  • By 0xbadcafebee 2026-02-21 6:03 | 2 replies

    > The always-on future is inevitable

    Not if you use open source. Not if you pay for services that contractually will not mine your data. Not if you support start-ups that commit to privacy and to banning ads.

    I said on another thread recently that we need to kill Android, that we need a new mobile Linux that gives us total control over what our devices and our software do. Not controlled by a corporation. Not with some bizarre "store" that floods us with millions of malware-ridden apps, yet bans perfectly valid ones. We have to take control of our own destiny, not keep handing it over to someone else for convenience's sake. And it doesn't end at mobile. We need to find, and support, the companies that are actually ethical. And we need to stop using services that are conveniently free.

    Vote with your dollars.

    • By bigyabai 2026-02-21 18:00

      We have mobile Linux. It's only supported on less than a dozen handsets and runs like shit, but we have it already.

      The reason nobody uses mobile Linux is that it has to compete with AOSP-derived OSes like LineageOS and GrapheneOS, which don't suck or run like shit. This is what it looks like when people vote with their dollars: people want the status quo we have (despite the horrible economic damage).

    • By ponector 2026-02-21 21:20 | 1 reply

      >> that gives us total control over what our devices do

      Like a rooted Android phone, which is useless for regular folks because many critical apps (like banking) don't work.

      • By Gander5739 2026-02-21 23:58

        I have a rooted Android phone and my banking app works fine, with relatively little effort to get it working. Rooting is quite niche, though, so it's easy enough for banks to disallow it completely in their apps. If root access were as common on mobile devices as on desktop, I doubt there would be any problems at all.

HackerNews