
Alphabet Inc.’s Google is introducing artificial intelligence agents across the Pentagon’s three million-strong workforce to automate routine jobs, according to a senior defense official.
Google’s Gemini AI agents, which can undertake work independently on behalf of a user who sets them tasks, will initially operate on unclassified networks, said Emil Michael, the under secretary of defense for research and engineering.
Companies are getting desperate to show AI adoption as right now the numbers just don’t add up.
Not surprisingly companies are willing to get into bed with more and more questionable use cases if it helps show some desperately needed AI adoption revenue.
All compute companies say they don't have enough compute to meet demands. Why do you think there isn't enough AI adoption to justify the investment?

“Demand” is mostly their training of models, which they’ve yet to demonstrate is a profitable business.
Just because you’re struggling to get raw materials for your business doesn’t make it a good business. Without strong enterprise adoption ASAP (which is what’s seriously suffering) things are going to hit the fan real quick.
With respect, I don't think you've used the latest models or seen Anthropic's hockey-stick enterprise revenue numbers. They are so busy outfitting the Fortune 500 that you can't even get someone in sales to respond to emails. I've been waiting for months, and so have others.
This will sound snarky, so forgive me, but I honestly don't know the answer. Is this actually true? Is there a reliable source containing statistics on LLM compute usage that includes training vs inference for the whole market?
I don’t understand why people don’t just use Gemini or some other AI web search to get an answer to these kinds of questions quickly (I excluded the sources, you can get them if you ask the same question).
> While AI training is often the most intense and expensive process for a single model, the majority of total AI compute usage (approximately 90%) is used for inference.
> Here is the breakdown of why this is the case:
> Inference as a High-Volume Activity: Inference occurs every time a user interacts with an AI model (e.g., asking ChatGPT a question, using image recognition, or generating code). While a model is trained once (or updated infrequently), it runs millions or billions of inferences continuously.
> Cost Scaling: Training is a massive, one-time upfront cost, while inference is an ongoing, daily operational cost. As the number of AI users grows, the demand for inference compute scales faster than the need for training new, large models.
> The Shift to Efficiency: While early AI hype focused on the immense compute needed for training, the industry has shifted toward making inference cheaper and faster through specialized hardware and techniques like optimization, quantization, and small language models (SLMs).
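The cost-scaling claim in that quote can be sketched with a toy calculation (every constant below is made up for illustration, not a real figure for any model): training is a fixed one-time cost, while inference cost accumulates with every query, so inference's share of cumulative compute only grows as usage does.

```python
# Back-of-envelope sketch: all numbers are hypothetical, purely illustrative.
TRAIN_FLOPS = 1e25       # assumed one-time training cost of a large model
FLOPS_PER_QUERY = 1e13   # assumed compute per inference request

def inference_share(daily_queries: float, days: float) -> float:
    """Fraction of cumulative compute spent on inference over the period."""
    inference = FLOPS_PER_QUERY * daily_queries * days
    return inference / (inference + TRAIN_FLOPS)

# The training cost stays fixed, so inference's share keeps rising with usage.
for days in (10, 100, 1000):
    print(days, round(inference_share(1e9, days), 3))
```

With these made-up constants, inference matches the one-time training cost after about 1,000 days at a billion queries per day, and dominates from there; the point is the shape of the curve, not the specific numbers.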
Gemini is not a reliable source. You posted the only part of the AI response that isn't useful in verifying whether it is true.
Sure, I guess. I asked Gemini to give me some markdown of citations and the claims made that address the question:
https://share.google/aimode/v3Y9P3rYIx1oj9VI2
And I finally figured out how to get links to answers instead of just inlining the content as before. Anyways, there it is. We live in a time where questions like "Does inference or training use more compute?" can be answered quickly by just pasting it into a search box.
The revenue numbers are public for the major AI companies. That's probably the best estimate for "inference for the whole market" we have, since most of that inference is billed in either API usage or subscriptions, and it won't include any in-house usage such as training.
Most of the compute is actually used for inference (90% if Gemini is to be trusted).
Do you have source?
Just ask the slop generator of your choice. Making sense of all that smoke and mirrors without AI support has become impossible.
"enough compute" will be when there is no more hardware for use outside of their walled garden, at which point they can control what they want
Google has enough money, still has positive revenue, and still invests in AI + DeepMind.
Google doesn't need to do anything to make any other numbers work.
Gemini 3.1 Pro is really good; Meta just signed a deal with Google for their TPUs.
Nano Banana 2 Pro is also very good.
OpenAI's numbers might not add up, and Anthropic might burn through cash, but not Google.
And it doesn't matter anyway, because as long as Google can afford it, Microsoft HAS TO do this too, and Microsoft can also afford it. The same with Amazon.
Microsoft invests in OpenAI and Amazon invests in Anthropic.
Worth remembering that Amazon is now taking out loans to help pay for it all. That says a lot.
That honestly says nothing.
Even a company like Amazon doesn't just have billions sitting in a bank account.
They make enough profit to easily afford this.
It's easier to get a loan than to raise a lump sum through other means.
Amazon isn’t broke, but their AI ambitions are now bigger than even a highly profitable company can self-fund. That’s not a nothing-burger. When you’re structurally dependent on debt markets to execute your core strategy, ‘we make plenty of profit’ stops being a complete answer. With all things AI looking a bit shaky at the moment, taking out massive loans to keep running faster is raising eyebrows.
But if you look at their numbers, they still can pay it back in a short period of time. Like 1-3 years.
And their core business is super stable even with AI.
So the only real risk is increasing operational costs. But if the investment pays off, they could literally just stop running the hardware and cut operational costs if there is no demand.
Amazon now likely has as much invested in OpenAI as Microsoft does.
Given Anthropic is also funded by them, either they are desperate to not lose or they really don't think Anthropic has a moat.
"Not surprisingly companies are willing to get into bed with more and more questionable use cases…"
But not all companies as we have seen over the last week or so.
Regardless, all companies doing so will have to balance the ethics of their choices against the public perception of their company, as all of us are free to make choices that align with our own personal ethics.
(In short, they don't get to hide behind "everyone else is doing it".)
And, in a post-ZIRP era, guess where all of the easy money for growth is coming from? Yup, deficit-funded defense spending.
Questionable use cases like hyperscalers housing confidential data of military operations? Use case is the same, private companies supporting military operations, as they have for ages.
The pentagon is a questionable use case?
The most questionable of all! You just know it's going to be used for increasingly inappropriate "generate me a list of targets in Iran" stuff.
I don't "just know that". However you think you "just know that", I think you should verify your sources before spreading that around.
Considering the DOD's dispute with Anthropic over using AI for questionable means, we can very safely assume it.
This is further compounded by the fact the DOD, and the administration at large, is headed by some of the most incompetent and morally bankrupt individuals imaginable. I wouldn't trust Hegseth to change a goddamn light bulb, let alone run the DOD.
I certainly prefer the US administration's morals over those of the Islamic Republic whom this war is against. I do not perceive the US administration to be either incompetent or morally bankrupt, as you say.
I do think those who bully those who defend the US online are the morally bankrupt ones.
I’m OK with it, but the fact that this is news highlights that many others don’t like it
The fact that this is news highlights that there is an effort to discredit US institutions. We are meant to believe that others don't like it.
No, there are targeted attacks online aimed at discrediting US institutions. People are afraid of defending the US and Israel online due to downvoting brigades.
> People are afraid of defending the US and Israel online
Do you sincerely believe this? This claim seems as wild as those vastly popular podcasters who constantly claim they're being "silenced".
Sounds sketchy as hell, but the article suggests it's for unclassified work, like "drafting meeting notes, creating action items, and breaking large projects into step-by-step plans".
I think I'd be more annoyed if my government weren't using tools to make BS work more efficient.
It does those things poorly.
>The DOD’s workforce of more than 3 million people will now be able to use a no-code or low-code tool called Agent Designer to create their own digital assistants for repetitive administrative tasks.
As someone who moved from software companies to IT management, it's going to be interesting to see how this plays out: a full embrace of "everything in Excel", with basically undefined business use cases and processes moved into software ad hoc and without validation. Especially for companies that have outsourced IT and expect software to embody defined, tested business processes in supported systems.
In-house IT is going to be huge in a couple of years, sorting out this mess. I would never have guessed the future would be all custom Excel spreadsheets, except instead of Excel it's random code in random languages with random data stores.
Oh this is dumb.
So the problem is filling out forms is too onerous, but rather than fix the process, create a device that fills the form with slop and then another device that approves or rejects the slop form.
I could have sworn I signed up for the other future, the one without quite this much stupid.
Had the film "Brazil" been written today, AI no doubt would be a significant plot-element.