Google to provide Pentagon with AI agents

2026-03-11 9:30 · www.bloomberg.com

Alphabet Inc.’s Google is introducing artificial intelligence agents across the Pentagon’s three million-strong workforce to automate routine jobs, according to a senior defense official.

Google’s Gemini AI agents, which can undertake work independently on behalf of a user who sets them tasks, will initially operate on unclassified networks, said Emil Michael, the under secretary of defense for research and engineering.



Comments

  • By cmiles8 2026-03-11 11:15 · 6 replies

    Companies are getting desperate to show AI adoption as right now the numbers just don’t add up.

    Not surprisingly, companies are willing to get into bed with more and more questionable use cases if it helps show some desperately needed AI adoption revenue.

    • By aurareturn 2026-03-11 11:41 · 2 replies

        Companies are getting desperate to show AI adoption as right now the numbers just don’t add up.
      
      All compute companies say they don't have enough compute to meet demands. Why do you think there isn't enough AI adoption to justify the investment?

      • By cmiles8 2026-03-11 11:44 · 4 replies

        “Demand” is mostly their training of models, which they’ve yet to demonstrate is a profitable business.

        Just because you’re struggling to get raw materials for your business doesn’t make it a good business. Without strong enterprise adoption ASAP (which is what’s seriously suffering) things are going to hit the fan real quick.

        • By couchdb_ouchdb 2026-03-11 15:25

          With respect, I don't think you've used the latest models or seen Anthropic's hockey-stick enterprise revenue numbers. They are so busy outfitting the Fortune 500 that you can't even get someone in sales to respond to emails. I've been waiting for months, and so have others.

        • By lancebeet 2026-03-11 11:50 · 2 replies

          This will sound snarky, so forgive me, but I honestly don't know the answer. Is this actually true? Is there a reliable source containing statistics on LLM compute usage that includes training vs inference for the whole market?

          • By seanmcdirmid 2026-03-11 15:46 · 1 reply

            I don’t understand why people don’t just use Gemini or some other AI web search to get an answer to these kinds of questions quickly (I excluded the sources; you can get them if you ask the same question).

            > While AI training is often the most intense and expensive process for a single model, the majority of total AI compute usage (approximately 90%) is used for inference.

            > Here is the breakdown of why this is the case:

            > Inference as High-Volume Activity: Inference occurs every time a user interacts with an AI model (e.g., asking ChatGPT a question, using image recognition, or generating code). While a model is trained once (or updated infrequently), it runs millions or billions of inferences continuously.

            > Cost Scaling: Training is a massive, one-time upfront cost, while inference is an ongoing, daily operational cost. As the number of AI users grows, the demand for inference compute scales faster than the need for training new, large models.

            > The Shift to Efficiency: While early AI hype focused on the immense compute needed for training, the industry has shifted toward making inference cheaper and faster through specialized hardware and techniques like optimization, quantization, and small language models (SLMs).
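            The quoted split can be sanity-checked with back-of-envelope arithmetic. The sketch below uses the common approximations of ~6·N·D FLOPs for training and ~2·N FLOPs per generated token for inference; the model size, user count, and usage figures are made-up round numbers for illustration, not figures from the thread.

```python
# Back-of-envelope: why ongoing inference compute can dwarf one-time training compute.
# Assumptions (hypothetical round numbers):
#   - a 100B-parameter model trained on 10T tokens
#   - 100M daily users, each consuming ~2,000 generated tokens/day, for a year
#   - standard approximations: training ~ 6*N*D FLOPs, inference ~ 2*N FLOPs/token

params = 100e9        # N: model parameters
train_tokens = 10e12  # D: training tokens

train_flops = 6 * params * train_tokens  # one-time upfront cost

daily_users = 100e6
tokens_per_user_per_day = 2000
days = 365

# ongoing operational cost, scaling with users and usage
inference_flops = 2 * params * daily_users * tokens_per_user_per_day * days

inference_share = inference_flops / (inference_flops + train_flops)
print(f"training FLOPs:  {train_flops:.2e}")
print(f"inference FLOPs: {inference_flops:.2e} (one year)")
print(f"inference share: {inference_share:.0%}")
```

            With these particular numbers inference already accounts for roughly 70% of total compute after one year, and the share keeps growing with users and time while the training cost stays fixed.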

            • By ajam1507 2026-03-11 20:57 · 1 reply

              Gemini is not a reliable source. You posted the only part of the AI response that isn't useful in verifying whether it is true.

              • By seanmcdirmid 2026-03-11 21:27

                Sure, I guess. I asked Gemini to give me some markdown of citations and the claims made that address the question:

                https://share.google/aimode/v3Y9P3rYIx1oj9VI2

                And I finally figured out how to get links to answers instead of just inlining the content as before. Anyways, there it is. We live in a time where questions like "Does inference or training use more compute?" can be answered quickly by just pasting it into a search box.

          • By concats 2026-03-11 11:57

            The revenue numbers are public for the major AI companies. That's probably the best estimate for "inference for the whole market" we have, since most of that inference is billed in either API usage or subscriptions, and it won't include any in-house usage such as training.

        • By seanmcdirmid 2026-03-11 15:43

          Most of the compute is actually used for inference (90% if Gemini is to be trusted).

        • By aurareturn 2026-03-11 12:45 · 1 reply

          Do you have source?

          • By tempodox 2026-03-11 20:19

            Just ask the slop generator of your choice. Making sense of all that smoke and mirrors without AI support has become impossible.

      • By duskdozer 2026-03-11 12:09

        "enough compute" will be when there is no more hardware for use outside of their walled garden, at which point they can control what they want

    • By Tklaaaalo 2026-03-11 12:11 · 2 replies

      Google has enough money, still has positive revenue, and still invests in AI and DeepMind.

      Google doesn't need to do anything to make any other numbers work.

      Gemini 3.1 Pro is really good, and Meta just signed a deal with Google for their TPUs.

      Nano Banana 2 Pro is also very good.

      OpenAI's numbers might not add up and Anthropic might burn through cash, but not Google.

      And it doesn't matter anyway: as long as Google can afford it, Microsoft HAS TO do this too, and Microsoft can also afford it. The same goes for Amazon.

      Microsoft invests in OpenAI and Amazon invests in Anthropic.

      • By cmiles8 2026-03-11 12:36 · 1 reply

        Worth remembering that Amazon is now taking out loans to help pay for it all. That says a lot.

        • By Tklaaaalo 2026-03-11 14:47 · 1 reply

          That honestly says nothing.

          Even a company like Amazon doesn't just have billions sitting in a bank account.

          They make enough profit to easily afford this.

          It's easier to get a loan than to raise a lump sum through other means.

          • By cmiles8 2026-03-11 18:23 · 1 reply

            Amazon isn’t broke, but their AI ambitions are now bigger than even a highly profitable company can self-fund. That’s not a nothing-burger. When you’re structurally dependent on debt markets to execute your core strategy, ‘we make plenty of profit’ stops being a complete answer. With all things AI looking a bit shaky at the moment, taking out massive loans to keep running faster is raising eyebrows.

            • By Tklaaaalo 2026-03-13 12:24

              But if you look at their numbers, they can still pay it back in a short period of time, like 1-3 years.

              And their core business is super stable even with AI.

              So the only real risk is increasing operational costs, but if they pay off the investment, they could literally just stop the hardware from running and reduce operational costs if there is no demand.

      • By cermicelli 2026-03-11 12:28

        Amazon now likely has as much invested in OpenAI as Microsoft does.

        Given that Anthropic is also funded by them, either they are desperate not to lose or they really don't think Anthropic has a moat.

    • By JKCalhoun 2026-03-11 12:56

      "Not surprisingly companies are willing to get into bed with more and more questionable use cases…"

      But not all companies as we have seen over the last week or so.

      Regardless, all companies doing so will have to balance the ethics of their choices against the public perception of their company, as all of us are free to make choices that align with our own personal ethics.

      (In short, they don't get to hide behind "everyone else is doing it".)

    • By nxobject 2026-03-11 11:21

      And, in a post-ZIRP era, guess where all of the easy money for growth is coming from? Yup, deficit-funded defense spending.

    • By jasonfrost 2026-03-11 14:06

      Questionable use cases like hyperscalers housing confidential data of military operations? The use case is the same: private companies supporting military operations, as they have for ages.

    • By dotancohen 2026-03-11 11:58 · 2 replies

      The Pentagon is a questionable use case?

      • By pjc50 2026-03-11 12:28 · 1 reply

        The most questionable of all! You just know it's going to be used for increasingly inappropriate "generate me a list of targets in Iran" stuff.

        • By dotancohen 2026-03-11 16:45 · 1 reply

          I don't "just know that". However you came to "just know that", I think you should verify your sources before spreading it around.

          • By array_key_first 2026-03-11 21:36 · 1 reply

            Considering the DOD's stance on using AI for questionable means with Anthropic, we can very safely assume it.

            This is further compounded by the fact the DOD, and the administration at large, is headed by some of the most incompetent and morally bankrupt individuals imaginable. I wouldn't trust Hegseth to change a goddamn light bulb, let alone run the DOD.

            • By dotancohen 2026-03-12 2:32 · 1 reply

              I certainly prefer the US administration's morals over those of the Islamic Republic whom this war is against. I do not perceive the US administration to be either incompetent or morally bankrupt, as you say.

              I do think it's those who bully people defending the US online who are the morally bankrupt ones.

      • By cmiles8 2026-03-11 12:03 · 1 reply

        I’m OK with it, but the fact that this is news highlights that many others don’t like it

        • By dotancohen 2026-03-11 16:46 · 1 reply

          The fact that this is news highlights that there is an effort to discredit US institutions. We are meant to believe that others don't like it.

          • By UncleMeat 2026-03-11 16:49 · 1 reply

            US institutions are discrediting themselves.

            • By dotancohen 2026-03-12 2:34 · 1 reply

              No, there are targeted attacks online aimed at discrediting US institutions. People are afraid of defending the US and Israel online due to downvoting brigades.

              • By sebastiennight 2026-03-12 12:41

                > People are afraid of defending the US and Israel online

                Do you sincerely believe this? This claim seems as wild as the ones from those vastly popular podcasters who constantly claim they're being "silenced".

  • By SecureVillage27 2026-03-11 10:44 · 1 reply

    Sounds sketchy as hell, but the article suggests it's for unclassified work, like "drafting meeting notes, creating action items, and breaking large projects into step-by-step plans".

    I think I'd be more annoyed if my government weren't using tools to make BS work more efficient.

    • By duskdozer 2026-03-11 12:10

      It does those things poorly.

  • By free652 2026-03-11 11:17 · 2 replies

    >The DOD’s workforce of more than 3 million people will now be able to use a no-code or low-code tool called Agent Designer to create their own digital assistants for repetitive administrative tasks.

    • By _DeadFred_ 2026-03-11 15:40

      As someone who moved from software companies to IT management, I've watched this move to fully embrace "everything in Excel": basically undefined business use cases and processes moved into software ad hoc, without validation. It's going to be interesting to see how this plays out, especially for companies that have outsourced IT and expect software to encode defined, tested business processes in supported systems.

      In-house IT is going to be huge in a couple of years sorting out this mess. I would never have guessed the future would be all custom Excel spreadsheets, except instead of Excel it's random code in random languages with random data stores.

    • By coffeefirst 2026-03-11 11:51 · 1 reply

      Oh this is dumb.

      So the problem is that filling out forms is too onerous, but rather than fix the process, we create a device that fills the form with slop and then another device that approves or rejects the slop form.

      I could have sworn I signed up for the other future, the one without quite this much stupid.

      • By JKCalhoun 2026-03-11 13:00

        Had the film "Brazil" been written today, AI would no doubt be a significant plot element.

HackerNews