
Gemini 3 Flash offers frontier intelligence built for speed at a fraction of the cost.
Today, we're expanding the Gemini 3 model family with the release of Gemini 3 Flash, which offers frontier intelligence built for speed at a fraction of the cost. With this release, we’re making Gemini 3’s next-generation intelligence accessible to everyone across Google products.
Last month, we kicked off Gemini 3 with Gemini 3 Pro and Gemini 3 Deep Think mode, and the response has been incredible. Since launch day, we have been processing over 1T tokens per day on our API. We’ve seen you use Gemini 3 to vibe code simulations to learn about complex topics, build and design interactive games and understand all types of multimodal content.
With Gemini 3, we introduced frontier performance across complex reasoning, multimodal and vision understanding, and agentic and vibe coding tasks. Gemini 3 Flash retains this foundation, combining Gemini 3's Pro-grade reasoning with Flash-level latency, efficiency and cost. It not only enables everyday tasks with improved reasoning, but is also our most impressive model for agentic workflows.
Starting today, Gemini 3 Flash is rolling out to millions of people globally: it's now available in the Gemini app, AI Mode in Search, the Gemini API in Google AI Studio, Vertex AI and Gemini Enterprise.
Gemini 3 Flash demonstrates that speed and scale don’t have to come at the cost of intelligence. It delivers frontier performance on PhD-level reasoning and knowledge benchmarks like GPQA Diamond (90.4%) and Humanity’s Last Exam (33.7% without tools), rivaling larger frontier models, and significantly outperforming even the best 2.5 model, Gemini 2.5 Pro, across a number of benchmarks. It also reaches state-of-the-art performance with an impressive score of 81.2% on MMMU Pro, comparable to Gemini 3 Pro.
In addition to its frontier-level reasoning and multimodal capabilities, Gemini 3 Flash was built to be highly efficient, pushing the Pareto frontier of quality vs. cost and speed. Even at its highest thinking level, Gemini 3 Flash modulates how much it thinks: it may think longer for more complex use cases, but on typical traffic it uses 30% fewer tokens on average than 2.5 Pro to accurately complete everyday tasks with higher performance.
Gemini 3 Flash pushes the Pareto frontier on performance vs. cost and speed.
Gemini 3 Flash’s strength lies in its raw speed, building on the Flash series that developers and consumers already love. It outperforms 2.5 Pro while being 3x faster (based on Artificial Analysis benchmarking) at a fraction of the cost. Gemini 3 Flash is priced at $0.50/1M input tokens and $3/1M output tokens (audio input remains at $1/1M input tokens).
Gemini 3 Flash outperforms 2.5 Pro in speed and quality.
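For a rough sense of what that pricing means per request, here is a quick back-of-the-envelope calculation (the token counts are illustrative examples, not measurements):

    # Cost of one Gemini 3 Flash request at $0.50/1M input and $3/1M output tokens
    input_tokens = 10_000   # example prompt size
    output_tokens = 1_000   # example response size
    cost = input_tokens / 1_000_000 * 0.50 + output_tokens / 1_000_000 * 3.00
    print(f"${cost:.4f}")   # -> $0.0080, i.e. less than a cent per request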
Gemini 3 Flash is made for iterative development, offering Gemini 3’s Pro-grade coding performance with low latency — it’s able to reason and solve tasks quickly in high-frequency workflows. On SWE-bench Verified, a benchmark for evaluating coding agent capabilities, Gemini 3 Flash achieves a score of 78%, outperforming not only the 2.5 series, but also Gemini 3 Pro. It strikes an ideal balance for agentic coding, production-ready systems and responsive interactive applications.
Gemini 3 Flash’s strong performance in reasoning, tool use and multimodal capabilities is ideal for developers looking to do more complex video analysis, data extraction and visual Q&A, which means it can enable more intelligent applications — like in-game assistants or A/B test experiments — that demand both quick answers and deep reasoning.
We’ve received a tremendous response from companies using Gemini 3 Flash. Companies like JetBrains, Bridgewater Associates, and Figma are already using it to transform their businesses, recognizing that its inference speed and efficiency come with reasoning capabilities on par with larger models. Gemini 3 Flash is available today to enterprises via Vertex AI and Gemini Enterprise.
Gemini 3 Flash is now the default model in the Gemini app, replacing 2.5 Flash. That means all of our Gemini users globally will get access to the Gemini 3 experience at no cost, giving their everyday tasks a major upgrade.
Because of Gemini 3 Flash’s incredible multimodal reasoning capabilities, you can use it to help you see, hear and understand any type of information faster. For example, you can ask Gemini to understand your videos and images and turn that content into a helpful and actionable plan in just a few seconds.
Or you can quickly build fun, useful apps from scratch using your voice without prior coding knowledge. Just dictate to Gemini on the go, and it can transform your unstructured thoughts into a functioning app in minutes.
Gemini 3 Flash is also starting to roll out as the default model for AI Mode in Search, with access for everyone around the world.
Building on the reasoning capabilities of Gemini 3 Pro, AI Mode with Gemini 3 Flash is more powerful at parsing the nuances of your question. It considers each aspect of your query to serve thoughtful, comprehensive responses that are visually digestible — pulling real-time local information and helpful links from across the web. The result effectively combines research with immediate action: you get an intelligently organized breakdown alongside specific recommendations — at the speed of Search.
This shines when tackling goals with multiple considerations, like planning a last-minute trip or quickly learning a complex educational concept.
Gemini 3 Flash is available now in preview via the Gemini API in Google AI Studio, Google Antigravity, Vertex AI and Gemini Enterprise. You can also access it through other developer tools like Gemini CLI and Android Studio. It’s also starting to roll out to everyone in the Gemini app and AI Mode in Search, bringing fast access to next-generation intelligence at no cost.
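For developers, a minimal call through the google-genai Python SDK might look like the sketch below; the preview model ID shown here is an assumption, so check Google AI Studio for the exact string.

    # pip install google-genai
    from google import genai

    client = genai.Client()  # reads GEMINI_API_KEY from the environment
    response = client.models.generate_content(
        model="gemini-3-flash-preview",  # assumed preview model ID; confirm in AI Studio
        contents="Summarize what Gemini 3 Flash is optimized for in two sentences.",
    )
    print(response.text)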
We’re looking forward to seeing what you bring to life with this expanded family of models: Gemini 3 Pro, Gemini 3 Deep Think and now, Gemini 3 Flash.
Don’t let the “flash” name fool you, this is an amazing model.
I have been playing with it for the past few weeks and it’s genuinely my new favorite; it’s so fast and has such vast world knowledge that it’s more performant than Claude Opus 4.5 or GPT 5.2 extra high, for a fraction (basically an order of magnitude less!!) of the inference time and price.
Oh wow - I recently tried 3 Pro preview and it was too slow for me.
After reading your comment I ran my product benchmark against 2.5 flash, 2.5 pro and 3.0 flash.
The results are better AND the response times have stayed the same. What an insane gain - especially considering the price compared to 2.5 Pro. I'm about to get much better results for 1/3rd of the price. Not sure what magic Google did here, but I would love to hear a more technical deep dive comparing what they do differently in the Pro and Flash models to achieve such performance.
Also wondering, how did you get early access? I'm using the Gemini API quite a lot and have a quite nice internal benchmark suite for it, so would love to toy with the new ones as they come out.
Curious to learn what a “product benchmark” looks like. Is it evals you use to test prompts/models? A third party tool?
Examples from the wild are a great learning tool, anything you’re able to share is appreciated.
Everyone should have their own "pelican riding a bicycle" benchmark they test new models on.
And it shouldn't be shared publicly so that the models won't learn about it accidentally :)
Any suggestions for a simple tool to set up your own local evals?
May I ask about your internal benchmark? I'm building a new set of benchmarks and a testing suite for agentic workflows using deepwalker [0]. How do you design your benchmark suite? It would be really cool if you could give more details.
I'm a significant genAI skeptic.
I periodically ask them questions about topics that are subtle or tricky, and somewhat niche, that I know a lot about, and find that they frequently provide extremely bad answers. There have been improvements on some topics, but there's one benchmark question that I have that just about every model I've tried has completely gotten wrong.
Tried it on LMArena recently, got a comparison between Gemini 2.5 flash and a codenamed model that people believe was a preview of Gemini 3 flash. Gemini 2.5 flash got it completely wrong. Gemini 3 flash actually gave a reasonable answer; not quite up to the best human description, but it's the first model I've found that actually seems to mostly correctly answer the question.
So, it's just one data point, but at least for my one fairly niche benchmark problem, Gemini 3 Flash has successfully answered a question that none of the others I've tried have (I haven't actually tried Gemini 3 Pro, but I'd compared various Claude and ChatGPT models, and a few different open weights models).
So, guess I need to put together some more benchmark problems, to get a better sample than one, but it's at least now passing a "I can find the answer to this in the top 3 hits in a Google search for a niche topic" test better than any of the other models.
Still a lot of things I'm skeptical about in all the LLM hype, but at least they are making some progress in being able to accurately answer a wider range of questions.
I don't think tricky niche knowledge is the sweet spot for genai and it likely won't be for some time. Instead, it's a great replacement for rote tasks where a less than perfect performance is good enough. Transcription, ocr, boilerplate code generation, etc.
The thing is, I see people use it for tricky niche knowledge all the time; using it as an alternative to doing a Google search.
So I want to have a general idea of how good it is at this.
I found something that was niche, but not super niche; I could easily find a good, human written answer in the top couple of results of a Google search.
But until now, all LLM answers I've gotten for it have been complete hallucinated gibberish.
Anyhow, this is a single data point, I need to expand my set of benchmark questions a bit now, but this is the first time that I've actually seen progress on this particular personal benchmark.
That’s riding the hype machine and throwing the baby out with the bathwater.
Get an API key and try to use it for classification of text or images. If you have an Excel file with 10k somewhat random-looking entries that you want to classify, or filter down to the 10 that are important to you, use an LLM.
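A minimal sketch of that spreadsheet-classification idea (the file name, column name and label set are placeholders, and the model ID is assumed):

    # pip install google-genai pandas openpyxl
    import pandas as pd
    from google import genai

    client = genai.Client()
    df = pd.read_excel("entries.xlsx")  # placeholder: one free-text "entry" column

    def classify(text: str) -> str:
        resp = client.models.generate_content(
            model="gemini-3-flash-preview",  # assumed model ID
            contents=f"Classify this entry as one of: important, routine, spam. "
                     f"Reply with the label only.\n\n{text}",
        )
        return resp.text.strip()

    df["label"] = df["entry"].astype(str).apply(classify)
    print(df[df["label"] == "important"].head(10))  # filter down to the top hits

In practice you'd batch rows or use structured output to cut cost, but this shows the shape of it.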
Get it to do audio transcription. You can now just talk and it will take notes for you at a level that wasn't possible before without training on a specific person's voice; it can handle anyone's voice.
Fixing up text is of course also big.
Data classification is easy for an LLM. Data transformation is a bit harder but still great. Creating new data is hard, so for things like answering questions where it has to generate stuff from thin air, it will hallucinate like a madman.
The tasks LLMs are good at are used in the background by people building actually useful software on top of LLMs, but those problems aren't visible to the general public, who only sees a chat box.
I also use niche questions a lot but mostly to check how much the models tend to hallucinate. E.g. I start asking about rank badges in Star Trek which they usually get right and then I ask about specific (non existing) rank badges shaped like strawberries or something like that. Or I ask about smaller German cities and what's famous about them.
I know that without the ability to search it's very unlikely the model actually has accurate "memories" about these things. I just hope one day they will actually know that their "memory" is bad or non-existent and tell me so instead of hallucinating something.
But people using the wrong tool for a task is nothing new. Using excel as a database (still happening today), etc.
Maybe the scale is different with genAI and there are some painful learnings ahead of us.
And Google themselves obviously believe that too as they happily insert AI summaries at the top of most serps now.
Or maybe Google knows most people search inane, obvious things?
Or, more likely, Google couldn't give a rat's arse whether those AI summaries are good or not (except to the degree that people don't flee it), and what it cares about is that they keep users with Google itself, instead of clicking off to other sources.
After all, it's the same search engine team that didn't care about its search results - its main draw - actively going to shit for over a decade.
Google AI Overviews are wrong about obvious things a lot of the time, so... lol
They probably use an old Flash Lite model, something super small, and just summarize the search...
Those summaries would be far more expensive to generate than the searches themselves so they're probably caching the top 100k most common or something, maybe even pre-caching it.
Well, I used Grok to find information I'd forgotten, like product names, films, books and various articles on different subjects. Google search didn't help, but putting the LLM to work did the trick.
So I think LLMs can be good for finding niche info.
Second this.
Basically, making sense of unstructured data is super cool. I can get 20 people to write an answer however they feel like, and the model can convert it to structured data - something I would otherwise have to spend time on, or I would have to make a form with mandatory fields that annoy the audience.
I am already building useful tools with the help of models. Asking tricky or trivia questions is fun and games. There are much more interesting ways to use AI.
Yeah, but tests like that deliberately prod the boundaries of its capability rather than how well it does what it’s good at.
So this is an interesting benchmark, because if the answer is actually in the top 3 google results, then my python script that runs a google search, scrapes the top n results and shoves them into a crappy LLM would pass your benchmark too!
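Roughly the kind of script being described, as a sketch: it assumes a Google Programmable Search Engine key/ID for the search step and any cheap LLM for the final answer, with error handling omitted.

    # pip install requests beautifulsoup4 google-genai
    import requests
    from bs4 import BeautifulSoup
    from google import genai

    def top_result_urls(query: str, n: int = 3) -> list[str]:
        # Google Programmable Search JSON API; key and cx are placeholders.
        r = requests.get(
            "https://www.googleapis.com/customsearch/v1",
            params={"key": "YOUR_KEY", "cx": "YOUR_CX", "q": query, "num": n},
        )
        return [item["link"] for item in r.json().get("items", [])]

    def page_text(url: str) -> str:
        html = requests.get(url, timeout=10).text
        return BeautifulSoup(html, "html.parser").get_text(" ", strip=True)[:8000]

    query = "your niche benchmark question here"
    context = "\n\n".join(page_text(u) for u in top_result_urls(query))

    client = genai.Client()
    answer = client.models.generate_content(
        model="gemini-2.5-flash",  # any "crappy" cheap model will do for this trick
        contents=f"Using only this context, answer the question: {query}\n\n{context}",
    )
    print(answer.text)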
Which also implies that (for most tasks) most of the weights in an LLM are unnecessary, since they are spent on memorizing the long tail of Common Crawl... but maybe memorizing infinite trivia is not a bug but actually required for the generalization to work? (Humans don't have far transfer though... do transformers have it?)
I've tried doing this query with search enabled in LLMs before, which is supposed to effectively do that, and even then they didn't give very good answers. It's a very physical kind of thing, and it's easy to conflate with other similar descriptions, so they would frequently just conflate various different things and give some horrible mash-up answer that wasn't about the specific thing I'd asked about.
So it's a difficult question for LLMs to answer even when given perfect context?
Kinda sounds like you're testing two things at the same time then, right? The knowledge of the thing (was it in the training data and was it memorized?) and the understanding of the thing (can they explain it properly even if you give them the answer in context).
Hi. I am curious, what was the benchmark question? Cheers!
The problem with publicly disclosing these is that if lots of people adopt them they will become targeted to be in the model and will no longer be a good benchmark.
Yeah, that's part of why I don't disclose.
Obviously, the fact that I've done Google searches and tested the models on these means that their systems may have picked up on them; I'm sure that Google uses its huge dataset of Google searches and its search index as inputs to its training, so Google has an advantage here. But, well, that might be why Google's new models are so much better: they're actually taking advantage of this massive dataset they've had for years.
This thought process is pretty baffling to me, and this is at least the second time I've encountered it on HN.
What's the value of a secret benchmark to anyone but the secret holder? Does your niche benchmark even influence which model you use for unrelated queries? If LLM authors care enough about your niche (they don't) and fake the response somehow, you will learn on the very next query that something is amiss. Now that query is your secret benchmark.
Even for niche topics it's rare that I need to provide more than 1 correction or knowledge update.
I have a bunch of private benchmarks I run against new models I'm evaluating.
The reason I don't disclose isn't generally that I think an individual person is going to read my post and update the model to include it. Instead, it is because if I write "I ask the question X and expect Y", that data ends up in the training corpus of new LLMs.
However, one set of my benchmarks is a more generalized type of test (think a parlor-game type thing) that actually works quite well. That set is the kind of thing that could be learnt via reinforcement learning very well, and just mentioning it could be enough for a training company or data provider company to try it. You can generate thousands of verifiable tests - potentially with verifiable reasoning traces - quite easily.
Ok, but then your "post" isn't scientific by definition, since it cannot be verified. "Post" is in quotes because I don't know what you're trying to do, but you're implying some sort of public discourse.
For fun: https://chatgpt.com/s/t_694361c12cec819185e9850d0cf0c629
I guess there's two things I'm still stuck on:
1. What is the purpose of the benchmark?
2. What is the purpose of publicly discussing a benchmark's results but keeping the methodology secret?
To me it's in the same spirit as claiming to have defeated alpha zero but refusing to share the game.
1. The purpose of the benchmark is to choose what models I use for my own system(s). This is extremely common practice in AI - I think every company I've worked with doing LLM work in the last 2 years has done this in some form.
2. I discussed that up-thread, but https://github.com/microsoft/private-benchmarking and https://arxiv.org/abs/2403.00393 discuss some further motivation for this if you are interested.
> To me it's in the same spirit as claiming to have defeated alpha zero but refusing to share the game.
This is an odd way of looking at it. There is no "winning" at benchmarks, it's simply that it is a better and more repeatable evaluation than the old "vibe test" that people did in 2024.
I see the potential value of private evaluations. They aren't scientific but you can certainly beat a "vibe test".
I don't understand the value of a public post discussing their results beyond maybe entertainment. We have to trust you implicitly and have no way to validate your claims.
> There is no "winning" at benchmarks, it's simply that it is a better and more repeatable evaluation than the old "vibe test" that people did in 2024.
Then you must not be working in an environment where a better benchmark yields a competitive advantage.
> I don't understand the value of a public post discussing their results beyond maybe entertainment. We have to trust you implicitly and have no way to validate your claims.
In principle, we have ways: if nl's reports consistently predict how public benchmarks will turn out later, they can build up a reputation. Of course, that requires that we follow nl around for a while.
As ChatGPT said to you:
> A secret benchmark is: Useful for internal model selection
That's what I'm doing.
The point is that it's a litmus test for how well the models do with niche knowledge _in general_. The point isn't really to know how well the model works for that specific niche. Ideally of course you would use a few of them and aggregate the results.
Because it encompasses the very specific way I like to do things. It's not of use to the general public.
I actually think "concealing the question" is not only a good idea, but a rather general and powerful idea that should be much more widely deployed (but often won't be, for what I consider "emotional reasons").
Example: You are probably already aware that almost any metric that you try to use to measure code quality can be easily gamed. One possible strategy is to choose a weighted mixture of metrics and conceal the weights. The weights can even change over time. Is it perfect? No. But it's at least correlated with code quality -- and it's not trivially gameable, which puts it above most individual public metrics.
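A toy sketch of that weighted-mixture idea (the metric names and weights here are arbitrary; the point is only that the weights stay private and can be rotated over time):

    # Concealed, changeable weighting over code-quality metrics (values normalized to [0, 1]).
    secret_weights = {"test_coverage": 0.5, "complexity": -0.3, "duplication": -0.2}

    def quality_score(metrics: dict[str, float]) -> float:
        return sum(secret_weights[name] * value for name, value in metrics.items())

    print(quality_score({"test_coverage": 0.8, "complexity": 0.4, "duplication": 0.1}))  # 0.26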
It's hard to have any certainty around concealment unless you are only testing local LLMs. As a matter of principle I assume the input and output of any query I run in a remote LLM is permanently public information (same with search queries).
Will someone (or some system) see my query and think "we ought to improve this"? I have no idea since I don't work on these systems. In some instances involving random sampling... probably yes!
This is the second reason I find the idea of publicly discussing secret benchmarks silly.
I learned in another thread there is some work being done to avoid contamination of training data during evaluation of remote models using trusted execution environments (https://arxiv.org/pdf/2403.00393). It requires participation of the model owner.
Here's my old benchmark question and my new variant:
"When was the last time England beat Scotland at rugby union"
new variant "Without using search when was the last time England beat Scotland at rugby union"
It is amazing how bad ChatGPT is at this question and has been for years now across multiple models. It's not that it gets it wrong - no shade, I've told it not to search the web so this is _hard_ for it - but how badly it reports the answer. Starting from the small stuff - it almost always reports the wrong year, wrong location and wrong score - that's the boring facts stuff that I would expect it to stumble on. It often creates details of matches that didn't exist, cool standard hallucinations. But even within the text it generates itself it cannot keep it consistent with how reality works. It often reports draws as wins for England. It frequently states the team that it just said scored most points lost the match, etc.
It is my ur example for when people challenge my assertion LLMs are stochastic parrots or fancy Markov chains on steroids.
If they told you, it would be picked up in a future model's training run.
Don't the models typically train on their input too? I.e. submitting the question also carries a risk/chance of it getting picked up?
I guess they get such a large input of queries that they can only realistically check and therefore use a small fraction? Though maybe they've come up with some clever trick to make use of it anyway?
OpenAI and Anthropic don't train on your questions if you have pressed the opt-out button and are using their UI. LMArena is a different matter.
They probably don't train on inputs from testing grounds.
You don't train on your test data, because you need it to compare whether training is improving things or not.
Yeah, asking on LMArena probably makes this an invalid benchmark going forward, especially since I think Google is particularly active in testing models on LMArena (as evidenced by the fact that I got their preview for this question).
I'll need to find a new one, or actually put together a set of questions to use instead of just a single benchmark.
Is that an issue if you now need a new question to ask?
I also have my own tricky benchmark that up til now only Deepseek has been able to answer. Gemini 3 Pro was the second. Every other LLM fail horribly. This is the main reason I started looking at G3pro more seriously.
Even the most magical wonderful auto-hammer is gonna be bad at driving in screws. And, in this analogy I can't fault you because there are people trying to sell this hammer as a screwdriver. My opinion is that it's important to not lose sight of the places where it is useful because of the places where it isn't.
Funny, I grew up using what's called a "hand impact screwdriver"... turns out a hammer can be used to drive in screws!
OpenAI made a huge mistake neglecting fast inferencing models. Their strategy was gpt 5 for everything, which hasn't worked out at all. I'm really not sure what model OpenAI wants me to use for my applications that require lower latency. If I follow their advice in their API docs about which models I should use for faster responses I get told either use GPT 5 low thinking, or replace gpt 5 with gpt 4.1, or switch to the mini model. Now as a developer I'm doing evals on all three of these combinations. I'm running my evals on gemini 3 flash right now, and it's outperforming gpt5 thinking without thinking. OpenAI should stop trying to come up with ads and make models that are useful.
Hard to find info but I think the -chat versions of 5.1 and 5.2 (gpt-5.2-chat) are what you're looking for. They might just be an alias for the same model with very low reasoning though. I've seen other providers do the same thing, where they offer a reasoning and non reasoning endpoint. Seems to work well enough.
They’re not the same, there are (at least) two different tunes per 5.x
For each you can use it as “instant”, supposedly without thinking (though these are all exclusively reasoning models), or specify a reasoning amount (low, medium, high, and now xhigh - though if you don't specify, it defaults to none), OR you can use the -chat version, which is also “no thinking” but in practice performs markedly differently from the regular version with thinking off (not more or less intelligent, but a different style and answering method).
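As a sketch of the two call paths being described, using the OpenAI Responses API (the model names come from the thread and the -chat alias is assumed, so treat the exact strings as placeholders):

    # pip install openai
    from openai import OpenAI

    client = OpenAI()

    # Path 1: the base model with an explicit reasoning amount.
    r1 = client.responses.create(
        model="gpt-5.1",                      # model name as discussed in the thread
        input="Explain the CAP theorem in two sentences.",
        reasoning={"effort": "low"},          # e.g. "low", "medium", "high"
    )

    # Path 2: the -chat tune, used without any reasoning parameter.
    r2 = client.responses.create(
        model="gpt-5.1-chat-latest",          # assumed alias naming; check the model list
        input="Explain the CAP theorem in two sentences.",
    )

    print(r1.output_text)
    print(r2.output_text)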
It's weird they don't document this stuff. Like understanding things like tool call latency and time to first token is extremely important in application development.
Humans often answer with fluff like "That's a good question, thanks for asking that, [fluff, fluff, fluff]" to give themselves more breathing room until the first 'token' of their real answer. I wonder if any LLM are doing stuff like that for latency hiding?
I don't think the models are doing this; time to first token is more of a hardware thing. But people writing agents are definitely doing this, particularly in voice, where it's worth using a smaller local LLM to handle the acknowledgment before handing it off.
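A rough sketch of that latency-hiding pattern, with two arbitrary model IDs standing in for "fast acknowledger" and "slow answerer" (the async path here is the google-genai SDK's aio interface):

    import asyncio
    from google import genai

    client = genai.Client()

    async def ask(model: str, prompt: str) -> str:
        resp = await client.aio.models.generate_content(model=model, contents=prompt)
        return resp.text

    async def answer_with_filler(question: str) -> None:
        # Kick off the real answer on the slower model...
        slow = asyncio.create_task(ask("gemini-3-pro-preview", question))  # assumed model ID
        # ...while a fast model produces a short acknowledgment to speak immediately.
        filler = await ask(
            "gemini-3-flash-preview",  # assumed model ID
            f"In one short sentence, acknowledge this question without answering it: {question}",
        )
        print(filler)        # play this right away to hide latency
        print(await slow)    # then deliver the full answer

    asyncio.run(answer_with_filler("What's the best way to structure a monorepo?"))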
Do humans really do that often?
Coming up with all that fluff would keep my brain busy, meaning there's actually no additional breathing room for thinking about an answer.
People who professionally answer questions do that, yes. Eg politicians or press secretaries for companies, or even just your professor taking questions after a talk.
> Coming up with all that fluff would keep my brain busy, meaning there's actually no additional breathing room for thinking about an answer.
It gets a lot easier with practice: your brain caches a few of the typical fluff routines.
Hardware is a factor here. GPUs are necessarily higher latency than TPUs for equivalent compute on equivalent data. There are lots of other factors here, but latency specifically favours TPUs.
The only non-TPU fast models I'm aware of are things running on Cerebras, which can be much faster because of their CPUs, and Grok, which has a super fast mode, but they have a cheat code of ignoring guardrails and making up their own world knowledge.
> GPUs are necessarily higher latency than TPUs for equivalent compute on equivalent data.
Where are you getting that? All the citations I've seen say the opposite, eg:
> Inference Workloads: NVIDIA GPUs typically offer lower latency for real-time inference tasks, particularly when leveraging features like NVIDIA's TensorRT for optimized model deployment. TPUs may introduce higher latency in dynamic or low-batch-size inference due to their batch-oriented design.
https://massedcompute.com/faq-answers/
> The only non-TPU fast models I'm aware of are things running on Cerebras can be much faster because of their CPUs, and Grok has a super fast mode, but they have a cheat code of ignoring guardrails and making up their own world knowledge.
Both Cerebras and Grok have custom AI-processing hardware (not CPUs).
The knowledge grounding thing seems unrelated to the hardware, unless you mean something I'm missing.
I thought it was generally accepted that inference was faster on TPUs. This was one of my takeaways from the LLM scaling book: https://jax-ml.github.io/scaling-book/ – TPUs just do less work, and data needs to move around less for the same amount of processing compared to GPUs. This would lead to lower latency as far as I understand it.
The citation link you provided takes me to a sales form, not an FAQ, so I can't see any further detail there.
> Both Cerebras and Grok have custom AI-processing hardware (not CPUs).
I'm aware of Cerebras' custom hardware. I agree with the other commenter here that I haven't heard of Grok having any. My point about knowledge grounding was simply that Grok may be achieving its latency with guardrail/knowledge/safety trade-offs instead of custom hardware.
Sorry I meant Groq custom hardware, not Grok!
I don't see any latency comparisons in the link
The link is just to the book, the details are scattered throughout. That said the page on GPUs specifically speaks to some of the hardware differences and how TPUs are more efficient for inference, and some of the differences that would lead to lower latency.
https://jax-ml.github.io/scaling-book/gpus/#gpus-vs-tpus-at-...
Re: Groq, that's a good point, I had forgotten about them. You're right they too are doing a TPU-style systolic array processor for lower latency.
I'm pretty sure xAI exclusively uses Nvidia H100s for Grok inference but I could be wrong. I agree that I don't see why TPUs would necessarily explain latency.
To be clear I'm only suggesting that hardware is a factor here, it's far from the only reason. The parent commenter corrected their comment that it was actually Groq not Grok that they were thinking of, and I believe they are correct about that as Groq is doing something similar to TPUs to accelerate inference.
Why are GPUs necessarily higher latency than TPUs? Both require roughly the same arithmetic intensity and use the same memory technology at roughly the same bandwidth.
My understanding is that TPUs do not use memory in the same way. GPUs need to do significantly more store/fetch operations from HBM, where TPUs pipeline data through systolic arrays far more. From what I've heard this generally improves latency and also reduces the overhead of supporting large context windows.
And our LLMs still have latencies well into the human perceptible range. If there's any necessary, architectural difference in latency between TPU and GPU, I'm fairly sure it would be far below that.
I had wondered if they run their inference at high batch sizes to get better throughput to keep their inference costs lower.
They do have a priority tier at double the cost, but haven't seen any benchmarks on how much faster that actually is.
The flex tier was an underrated feature in GPT-5: batch pricing with a regular API call. GPT-5.1 using flex priority is an amazing price/intelligence tradeoff for non-latency-sensitive applications, without needing the extra plumbing of most batch APIs.
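A sketch of what that looks like in practice; the model name is taken from the thread, and flex processing is requested via the service tier parameter (check the current docs for which models support it):

    from openai import OpenAI

    client = OpenAI()
    resp = client.responses.create(
        model="gpt-5.1",            # model name as used in the thread
        input="Classify this support ticket: ...",
        service_tier="flex",        # batch-style pricing on a regular API call; expect slower responses
    )
    print(resp.output_text)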
I’m sure they do something like that. I’ve noticed azure has way faster gpt 4.1 than OpenAI
Yeah, I'm surprised that they've been through GPT-5.1 and GPT-5.1-Codex and GPT-5.1-Codex-Max and now GPT-5.2, but their most recent mini model is still GPT-5-mini.
I cannot comprehend how they do not care about this segment of the market.
It's easy to comprehend, actually: they're putting everything on "having the best model". It doesn't look like they're going to win, but that's still their bet.
I mean they’re trying to outdo google. So they need to do that.
Until recently, Google was the underdog in the LLM race and OpenAI was the reigning champion. How quickly perceptions shift!
I just want a deepseek moment for an open weights model fast enough to use in my app, I hate paying the big guys.
Isn't deepseek an open weights model?
> OpenAI made a huge mistake neglecting fast inferencing models.
It's a lost battle. It'll always be cheaper to use an open source model hosted by others like together/fireworks/deepinfra/etc.
I've been maining Mistral lately for low latency stuff and the price-quality is hard to beat.
I'll try benchmarking Mistral against my eval; I've been impressed by Kimi's performance but it's too slow to do anything useful in realtime.
> OpenAI should stop trying to come up with ads and make models that are useful.
Turns out becoming a $4 trillion company first with ads (Google), then owning everybody on the AI-front could be the winning strategy.
Can confirm. We at Roblox open sourced a new frontier game eval today, and Gemini 3 Flash is beating even Gemini 3 Pro (the previous best model) on it!
https://github.com/Roblox/open-game-eval/blob/main/LLM_LEADE...
Unbelievable
Lately I've been trying to ask LLMs to generate SVG pictures. Do you have the famous pelican on a bike created by the Flash model?
Alright so we have more benchmarks including hallucinations and flash doesn't do well with that, though generally it beats gemini 3 pro and GPT 5.1 thinking and gpt 5.2 thinking xhigh (but then, sonnet, grok, opus, gemini and 5.1 beat 5.2 xhigh) - everything. Crazy.
On your Omniscience-Index vs. Cost graph, I think your Gemini 3 pro & flash models might be swapped.
I wonder at what point everyone who over-invested in OpenAI will regret their decision (except maybe Nvidia?). Maybe Microsoft doesn't need to care; they get to sell their models via Azure.
Amazon Set to Waste $10 Billion on OpenAI - https://finance.yahoo.com/news/amazon-set-waste-10-billion-1... - December 17th, 2025
Seeing Sergey Brin back in the trenches makes me think Google is really going to win this
They always had the best talent, but with Brin at the helm, they also have someone with the organizational heft to drive them towards a single goal
Very soon, because clearly OpenAI is in very serious trouble. They are scaled and have no business model and a competitor that is much better than them at almost everything (ads, hardware, cloud, consumer, scaling).
Oracle's stock skyrocketed then took a nosedive. Financial experts warned that companies who bet big on OpenAI like Oracle and Coreweave to pump their stock would go down the drain, and down the drain they went (so far: -65% for Coreweave and nearly -50% of Oracle compared to their OpenAI-hype all-time highs).
Markets seems to be in a: "Show me the OpenAI money" mood at the moment.
And even financial commentators who don't necessarily know a thing about AI can realize that Gemini 3 Pro and now Gemini 3 Flash are giving ChatGPT a run for its money.
Oracle and Microsoft have other sources of revenue, but for those really drinking the OpenAI Kool-Aid, including OpenAI itself, I sure as heck don't know what the future holds.
My safe bet however is that Google ain't going anywhere and shall keep progressing on the AI front at an insane pace.
Financial experts [0] and analysts are pretty much useless. Empirically their predictions are slightly worse than chance.
[0] At least the guys who publish where you or me can read them.
OpenAI's doom was written when Altman (and Nadella) got greedy, threw away the nonprofit mission, and caused the exodus of talent and funding that created Anthropic. If they had stayed nonprofit the rest of the industry could have consolidated their efforts against Google's juggernaut. I don't understand how they expected to sustain the advantage against Google's infinite money machine. With Waymo Google showed that they're willing to burn money for decades until they succeed.
This story also shows the market corruption of Google's monopolies, but a judge recently gave them his stamp of approval so we're stuck with it for the foreseeable future.
I think their downfall will be the fact that they don't have a "path to AGI" and have been raising investor money on the promise that they do.
I believe there’s also exponential dislike growing for Altman among most AI users, and that impacts how the brand/company is perceived.
Most AI users outside of HN do not have any idea who Altman is. ChatGPT is in many circles synonymous with AI, so their brand recognition is huge.
> I don't understand how they expected to sustain the advantage against Google's infinite money machine.
I ask this question about Nazi Germany. They adopted the Blitzkrieg strategy and expanded unsustainably, but it was only a matter of time until powers with infinite resources (US, USSR) put an end to it.
I know you're making an analogy but I have to point out that there are many points where Nazi Germany could have gone a different route and potentially could have ended up with a stable dominion over much of Western Europe.
The most obvious decision points were betraying the USSR and declaring war on the US (no one has really been able to pinpoint the reason, but presumably it was to get Japan to attack the Soviets from the other side, which then however didn't happen). Another could have been to consolidate after the surrender/capitulation of France, rather than continuing to attack further.
Huh? How did the USSR have infinite resources? They were barely kept afloat by western allied help (especially at the beginning). Remember also how Tsarist Russia was the first power to collapse and get knocked out of the war in WW1, long before the war was over. They did worse than even the proverbial 'Sick Man of Europe', the Ottoman Empire.
Not saying that the Nazi strategy was without flaws, of course. But your specific critique is a bit too blunt.
they had more soldiers to throw into the meat grinder
They also had more soldiers in WW1.
But you’re forgetting the Jonny Ive hardware device that totally isn’t like that laughable pin badge thing from Humane
/s
I agree completely. Altman was at some point talking about a screenless device and getting people away from the screen.
Abandoning our most useful sense, vision, is a recipe for a flop.
Yes, 2.5 Flash is extremely cost efficient in my favourite private benchmark: playing text adventures[1]. I'm looking forward to testing 3.0 Flash later today.
[1]: https://entropicthoughts.com/haiku-4-5-playing-text-adventur...
Thanks, having it walk a hardcore SDR signal chain right now --- oh damn it just finished. The blog post makes it clear this isn't just some 'lite' model - you get low latency and cognitive performance. really appreciate you amplifying that.
> Don’t let the “flash” name fool you
I think it's bad naming on google's part. "flash" implies low quality, fast but not good enough. I get less negative feeling looking at "mini" models.
Interesting. Flash suggests more power to me than Mini. I never use gpt-5-mini in the UI whereas Flash appears to be just as good as Pro just a lot faster.
Im in between :)
Mini - small, incomplete, not good enough
Flash - good, not great, fast, might miss something.
Fair point. Asked Gemini to suggest alternatives, and it suggested Gemini Velocity, Gemini Atom, Gemini Axiom (and more). I would have liked `Gemini Velocity`.
I like Anthropic's approach: Haiku, Sonnet, Opus. Haiku is pretty capable still and the name doesn't make me not wanna use it. But Flash is like "Flash Sale". It might still be a great model but my monkey brain associates it with "cheap" stuff.
What are you using it for and what were you using before?
Gemini 2.0 Flash was already good for some tasks of mine a long time ago.
How good is it for coding, relative to recent frontier models like GPT 5.x, Sonnet 4.x, etc?
My experience so far: much less reliable. Though it's been in chat, not opencode or Antigravity etc. You give it a program and say "change it in this way", and it just throws stuff away, changes unrelated stuff, etc. Completely different quality than Pro (or Sonnet 4.5 / GPT-5.2).
So why is Flash so high on LiveCodeBench Pro?
BTW: I have the same impression, Claude was working better for me for coding tasks.
In my own, very anecdotal, experience, Gemini 3 Pro and Flash are both more reliably accurate than GPT 5.x.
I have not worked with Sonnet enough to give an opinion there.
What type of question is your one about testing AI inference time?
I love how every single LLM model release is accompanied by pre-release insiders proclaiming how it’s the best model yet…
Makes me think of how every iPhone is the best iPhone yet.
Waiting for Apple to say "sorry folks, bad year for iPhone"
Wouldn't you expect that every new iPhone is genuinely the best iPhone? I mean, technology marches on.
It was sarcasm.
Cool! I've been using 2.5 flash and it is pretty bad. 1 out of 5 answers it gives will be a lie. Hopefully 3 is better
Did you try with the grounding tool? Turning it on solved this problem for me.
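For reference, a sketch of how the Google Search grounding tool is switched on through the google-genai SDK (the model ID is assumed):

    from google import genai
    from google.genai import types

    client = genai.Client()
    response = client.models.generate_content(
        model="gemini-3-flash-preview",  # assumed model ID
        contents="Who won the most recent Rugby World Cup?",
        config=types.GenerateContentConfig(
            tools=[types.Tool(google_search=types.GoogleSearch())],  # the grounding tool
        ),
    )
    print(response.text)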
what if the lie is a logical deduction error not a fact retrieval error
The error rate would still be improved overall and might make it a viable tool for the price depending on the usecase.
How did you get early access?
Can you be more specific on the tasks you’ve found exceptional ?
I think Google is the only one that still produces general-knowledge LLMs right now.
Claude has been a coding model from the start, but GPT is more and more becoming a coding model too.
I agree with this observation. Gemini does feel like code red for basically every AI company - ChatGPT, Claude, etc. - in my opinion, if the underlying model is fast, cheap and good enough.
I hope open source AI models catch up to Gemini 3 / Gemini 3 Flash. Or Google open sources it, but let's be honest, Google isn't open sourcing Gemini 3 Flash, and I guess the best bet in open source nowadays is probably GLM or DeepSeek Terminus, or maybe Qwen/Kimi too.
Gemini isn't code red for Anthropic. Gemini threatens none of Anthropic's positioning in the market.
What demographic are you in that is leaving Anthropic en masse that they care about retaining? From what I see, Anthropic is targeting enterprise and coding.
Claude Code just caught up to cursor (no 2) in revenue and based on trajectories is about to pass GitHub copilot (number 1) in a few more months. They just locked down Deloitte with 350k seats of Claude Enterprise.
In my Fortune 100 financial company they just finished crushing OpenAI in a broad enterprise-wide evaluation. Google Gemini was never in the mix, never on the table, and still isn't. Every one of our engineers has 1k a month allocated in Claude tokens for Claude Enterprise and Claude Code.
There is one leader with enterprise. There is one leader with developers. And Google has nothing to make a dent: not Gemini 3, not Gemini CLI, not Antigravity, not Gemini. There is no Code Red for Anthropic. They have clear target markets and nothing from Google threatens those.
I agree with your overall thesis but:
> Google Gemini was never in the mix, never on the table and still isn’t. Every one of our engineers has 1k a month allocated in Claude tokens for Claude enterprise and Claude code.
Does that mean y'all never evaluated Gemini at all or just that it couldn't compete? I'd be worried that prior performance of the models prejudiced stats away from Gemini, but I am a Claude Code and heavy Anthropic user myself so shrug.
Enterprise is slow. As for developers, we will be switching to Google unless the competition can catch up and deliver a similarly fast model.
Enterprise will follow.
I don't see any distinction in target markets - it's the same market.
Yeah, this is what I was trying to say in my original comment too.
Also, I do not really use agentic tasks, but I am not sure whether Gemini 3 / 3 Flash has MCP support or skills support for agentic tasks.
If not, I feel like those are very low-hanging fruit and something Google could do to win the agentic-task market over Claude too, perhaps.
I don't use MCP, but I am using agents in Antigravity.
So far they seem faster with Flash, and with less corruption of files using the Edit tool - or at least it recovered faster.
So? Agentic tasks are where the promised AGI is for many of us.
I would expect open weights models to always lag behind; training is resource-intensive and it’s much easier to finance if you can make money directly from the result. So in a year we may have a ~700B open weights model that competes with Gemini 3, but by then we’ll have Gemini 4, and other things we can’t predict now.
There will be diminishing returns though, as future models won't be that much better; we will reach a point where the open source model is good enough for most things, and the need to be on the latest model will no longer be so important.
For me the bigger concern which I have mentioned on other AI related topics is that AI is eating all the production of computer hardware so we should be worrying about hardware prices getting out of hand and making it harder for general public to run open source models. Hence I am rooting for China to reach parity on node size and crash the PC hardware prices.
I had a similar opinion, that we were somewhere near the top of the sigmoid curve of model improvement that we could achieve in the near term. But given continued advancements, I’m less sure that prediction holds.
Yeah, I have a similar opinion, and you can go back almost a year to when Claude 3.5 launched and I said on Hacker News that it was good enough.
And now I am saying the same for gemini 3 flash.
I still feel the same way though; sure, there is an increase, but I somewhat believe that Gemini 3 is good enough and the returns on training from now on might not be worth that much IMO. But I am not sure, and I can be wrong; I usually am.
My model is a bit simpler: model quality is something like the logarithm of effort you put into making the model. (Assuming you know what you are doing with your effort.)
So I don't think we are on any sigmoid curve or so. Though if you plot the performance of the best model available at any point in time against time on the x-axis, you might see a sigmoid curve, but that's a combination of the logarithm and the amount of effort people are willing to spend on making new models.
(I'm not sure about it specifically being the logarithm. Just any curve that has rapidly diminishing marginal returns that nevertheless never go to zero, ie the curve never saturates.)
If Gemini 3 Flash is really confirmed to be close to Opus 4.5 at coding, and a similarly capable model is open weights, I want to buy a box with a USB cable that has that thing loaded, because today that's enough to run out of engineering work for a small team.
Open weights doesn't mean you can necessarily run it on a (small) box.
If Google released their weights today, it would technically be open weight; but I doubt you'd have an easy time running the whole Gemini system outside of Google's datacentres.
Open source models are riding coat-tails; they are basically just distilling the giant SOTA models, hence perpetually being 4-6 months behind.
If this quantification of lag is anywhere near accurate (it may be larger and/or more complex to describe), soon open source models will be "simply good enough". Perhaps companies like Apple could be 2nd round AI growth companies -- where they market optimized private AI devices via already capable Macbooks or rumored appliances. While not obviating cloud AI, they could cheaply provide capable models without subscription while driving their revenue through increased device sales. If the cost of cloud AI increases to support its expense, this use case will act as a check on subscription prices.
So basically the proprietary models are devalued to almost 0 in about 4-6 months. Can they recover the training costs + profit margin every 4 months?
Coding is basically an edge case for LLMs too.
Pretty much every person in the first (and second) world is using AI now, and only a small fraction of those people are writing software. This is also reflected in OAI's report from a few months ago, which found programming to be only 4% of tokens.
That may be so, but I rather suspect the breakdown would be very different if you only count paid tokens. Coding is one of the few things where you can actually get enough benefit out of AI right now to justify high-end subscriptions (or high pay-per-token bills).
> Pretty much every person in the first (and second) world is using AI now
This sounds like you live in a huge echo chamber. :-(
All of my non techy friends use it, it's the new search engine. I think at this point people refusing to use it are the echo chamber.
Depends what you count as AI (just Googling makes you use the LLM summary), but also my mother, who is really not tech-savvy, loved what Google Lens can do after I showed her.
Apart from my very old grandmothers, I don't know anyone not using AI.
How many people do you know? Do you talk to your local shopkeeper? Or the clerk at the gas station? How are they using AI? I'm a pretty techy person with a lot of tech friends, and I know more people not using AI (on purpose, or from lack of knowledge) than people who do.
Hm, quite a few. Like I said, it depends what you count as AI.
Just Googling means you use AI nowadays.
Whether Googling something counts as AI has more to do with the shifting definition of AI over time than with Googling itself.
Remember, really back in the day the A* search algorithm was part of AI.
If you had asked anyone in the 1970s about a box that, given a query, pinpoints the right document that answers that question (aka Google search in the early 2000s), they definitely would have called it AI.
I'm sort of old but not a grandmother. Not using AI.
Just to point this out: many of these frontier models' cost isn't that far away from two orders of magnitude more than what DeepSeek charges. It doesn't compare the same, no, but with coaxing I find it to be a pretty competent coding model, capable of answering a lot of general queries pretty satisfactorily (but if it's a short session, why economize?). $0.28/M in, $0.42/M out. Opus 4.5 is $5/$25 (17x/60x).
I've been playing around with other models recently (Kimi, GPT Codex, Qwen, others) to try to better appreciate the difference. I knew there was a big price difference, but watching myself feed dollars into the machine rather than nickels has also founded in me quite the reverse appreciation too.
I only assume "if you're not getting charged, you are the product" has to be somewhat in play here. But when working on open source code, I don't mind.
Two orders of magnitude would imply that these models cost $28/m in and $42/m out. Nothing is even close to that.
Whoa! I had no idea. $21/$168. That's 75x / 400x (1e1.875/1e2.6). https://platform.openai.com/docs/pricing
To me as an engineer, 60x for output (which is most of the cost I see, AFAICT) is not that significantly different from 100x.
I tried to be quite clear with showing my work here. I agree that 17x is much closer to a single order of magnitude than two. But 60x is, to me, a bulk enough of the way to 100x that yeah I don't feel bad saying it's nearly two orders (it's 1.78 orders of magnitude). To me, your complaint feels rigid & ungenerous.
My post is showing to me as -1, but I standby it right now. Arguing over the technicalities here (is 1.78 close enough to 2 orders to count) feels besides the point to me: DeepSeek is vastly more affordable than nearly everything else, putting even Gemini 3 Flash here to shame. And I don't think people are aware of that.
I guess for my own reference, since I didn't do it the first time: at $0.50/$3.00 per M in/out, Gemini 3 Flash here is 1.8x & 7.1x (1e0.25/1e0.85) more expensive than DeepSeek.
I struggle to see the incentive to do this; I have similar thoughts about locally run models. The only use cases I can imagine are small jobs at scale, perhaps something like autocomplete integrated into your deployed application, or extreme privacy, honouring NDAs, etc.
Otherwise, if it's a short prompt or answer, a SOTA (state of the art) model will be cheap anyway, and if it's a long prompt/answer, it's way more likely to be wrong and a lot more time/human cost is spent on "checking/debugging" any issue or hallucination, so again SOTA is better.
"or for extreme privacy"
Or for any privacy/IP protection at all? There is zero privacy, when using cloud based LLM models.
Really only if you are paranoid. It's incredibly unlikely that the labs are lying about not training on your data for the API plans that offer it. Breaking trust with outright lies would be catastrophic to any lab right now. Enterprise demands privacy, and the labs will be happy to accommodate (for the extra cost, of course).
No, it's incredibly unlikely that they aren't training on user data. It's billions of dollars worth of high quality tokens and preference that the frontier labs have access to, you think they would give that up for their reputation in the eyes of the enterprise market? LMAO. Every single frontier model is trained on torrented books, music, and movies.
> it’s more performant than Claude Opus 4.5 or GPT 5.2 extra high
...and all of that done without any GPUs as far as i know! [1]
[1] - https://www.uncoveralpha.com/p/the-chip-made-for-the-ai-infe...
(tldr: afaik Google trained Gemini 3 entirely on tensor processing units - TPUs)
Should I not let the "Gemini" name fool me either?
This is awesome. No preview release either, which is great for production.
They are pushing the prices higher with each release though: API pricing is up to $0.5/M for input and $3/M for output
For comparison:
Gemini 3.0 Flash: $0.50/M for input and $3.00/M for output
Gemini 2.5 Flash: $0.30/M for input and $2.50/M for output
Gemini 2.0 Flash: $0.15/M for input and $0.60/M for output
Gemini 1.5 Flash: $0.075/M for input and $0.30/M for output (after price drop)
Gemini 3.0 Pro: $2.00/M for input and $12/M for output
Gemini 2.5 Pro: $1.25/M for input and $10/M for output
Gemini 1.5 Pro: $1.25/M for input and $5/M for output
I think image input pricing went up even more.
Correction: It is a preview model...
I'm more curious how Gemini 3 flash lite performs/is priced when it comes out. Because it may be that for most non coding tasks the distinction isn't between pro and flash but between flash and flash lite.
Thanks, that was a great breakdown of cost. I just assumed before that it was the same pricing. The pricing probably comes from the confidence and the buzz around Gemini 3.0 as one of the best performing models. But competition is hot in this area, and it's not too far off that we'll get similarly performing models for a cheaper price.
This is a preview release.
Token usage also needs to be factored in, specifically when thinking is enabled; these newer models find more difficult problems easier and use fewer tokens to solve them.
For comparison, GPT-5 mini is $0.25/M for input and $2.00/M for output, so double the price for input and 50% higher for output.
flash is closer to sonnet than gpt minis though
The price increase sucks, but you really do get a whole lot more. They also had the "Flash Lite" series; 2.5 Flash Lite is $0.10/M, hopefully we see something like 3.0 Flash Lite for $0.20-$0.25.
Are these the current prices or the prices at the time the models were released?
Mostly at the time of release except for 1.5 Flash which got a price drop in Aug 2024.
Google has been discontinuing older models after several months of transition period so I would expect the same for the 2.5 models. But that process only starts when the release version of 3 models is out (pro and flash are in preview right now).
is there a website where i can compare openai, anthropic and gemini models on cost/token ?
There are plenty. But it's not the comparison you want to be making. There is too much variability between the number of tokens used for a single response, especially once reasoning models became a thing. And it gets even worse when you put the models into a variable length output loop.
You really need to look at the cost per task. artificialanalysis.ai has a good composite score, measures the cost of running all the benchmarks, and has a 2D intelligence vs. cost graph.
thanks
For reference the above completely depends on what you're using them for. For many tasks, the number of tokens used is consistent within 10~20%.
https://www.helicone.ai/llm-cost
Tried a lot of them and settled on this one, they update instantly on model release and having all models on one page is the best UX.
Feels like Google is really pulling ahead of the pack here. A model that is cheap, fast and good, combined with Android and gsuite integration seems like such powerful combination.
Presumably a big motivation for them is to be first to get something good and cheap enough that they can serve it to every Android device, ahead of whatever the OpenAI/Jony Ive hardware project will be, and way ahead of Apple Intelligence. Speaking for myself, I would pay quite a lot for a truly 'AI first' phone that actually worked.
Apple Intelligence is going to be Gemini https://www.macrumors.com/2025/11/05/apple-siri-google-gemin...
That's too bad. Apple's most interesting value proposition is running local inference with big privacy promises. They wouldn't need to be the highest performer to offer something a lot of people might want.
My understanding is Apple will be hosting Gemini models themselves on the private compute system they announced a while back.
Apple’s most interesting value proposition was ignoring all this AI junk and letting users click “not interested” on Apple Intelligence and never see it again.
From a business perspective it’s a smart move (inasmuch as “integrating AI” is the default which I fundamentally disagree with) since Apple won’t be left holding the bag on a bunch of AI datacenters when/if the AI bubble pops.
I don’t want to lose trust in Apple, but I literally moved away from Google/Android to try and retain control over my data and now they’re taking me… right back to Google. Guess I’ll retreat further into self-hosting.
I also agree with this. Microsoft successfully removed my entire household from ever owning one of their products again after this year. Apple and linux make up the entire delta.
As long as Apple doesn't take any crazy left turns with their privacy policy then it should be relatively harmless if they add in a google wrapper to iOS (and we won't need to take hard right turns with grapheneOS phones and framework laptops).
> Apple’s most interesting value proposition was ignoring all this AI junk
Did you forget all the Apple Intelligence stuff? They were never "ignoring" if anything they talked a big talk, and then failed so hard.
The whole iPhone 16 was marketed as AI first phone (including in billboards). They had full length ads running touting AI benefits.
Apple was never "ignoring" or "sitting AI out". They were very much in it. And they failed.
That failure has made me very happy as a user.
Pulling ahead? Depends on the usecase I guess. 3 turns into a very basic Gemini-CLI session and Gemini 3 Pro has already messed up a simple `Edit` tool-call. And it's awfully slow. In 27 minutes it did 17 tool calls, and only managed to modify 2 files. Meanwhile Claude-Code flies through the same task in 5 minutes.
Yeah - agree, Anthropic much better for coding. I'm more thinking about the 'average chat user' (the larger potential userbase), most of whom are on chatgpt.
Knowing Google's MO, it's most likely not the model but their harness system that's the issue. God, they are so bad at their UI and agentic coding harnesses...
I think Claude is genuinely much smarter, and more lucid.
What will you use the ai in the phone to do for you? I can understand tablets and smart glasses being able to leverage smol AI much better than a phone which is reliant on apps for most of the work.
Analyse e-mails/text/music/videos, edit photos, summarization, etc.
I desperately want to be able to real-time dictate actions to take on my phone.
Stuff like:
"Open Chrome, new tab, search for xyz, scroll down, third result, copy the second paragraph, open whatsapp, hit back button, open group chat with friends, paste what we copied and send, send a follow-up laughing tears emoji, go back to chrome and close out that tab"
All while being able to just quickly glance at my phone. There is already a tool like this, but I want the parsing/understanding of an LLM and super fast response times.
This new model is absurdly quick on my phone and for launch day, wonder if it's additional capacity/lower demand or if this is what we can expect going forward.
On a related note, why would you want to break down your tasks to that level? Surely it should be smart enough to do some of that without you asking, and you can just state your end goal.
This has been my dream for voice control of PC for ages now. No wake word, no button press, no beeping or nagging, just fluently describe what you want to happen and it does.
Apple tried this ages ago:
without a wake word, it would have to listen and process all parsed audio. you really want everything captured near the device/mic to be sent to external servers?
I might if that's what it takes to make it finally work. The fueling of the previous 15 years was not worth it, but that was then.
is that faster to say than do, or is it an accessibility or while-driving need?
I don't understand that use case at all. How can you tell it to do all that stuff, if you aren't sitting there glued to the screen yourself?
Because typing on mobile is slow, app switching is slow, text selection and copy-paste are torture. Pretty much the only easy interaction of the ones OP listed is scrolling.
Plus, if the above worked, the higher level interactions could trivially work too. "Go to event details", "add that to my calendar".
FWIW, I'm starting to embrace using Gemini as general-purpose UI for some scenarios just because it's faster. Most common one, "<paste whatever> add to my calendar please."