Nvidia Stock Crash Prediction

2026-01-20 15:56 · entropicthoughts.com

To keep things computationally simple, we are going to use a binomial model for the price of the underlying Nvidia stock. We don’t know the daily volatility, so we’ll keep that as a variable we call \(\sigma\). We will pretend that each day, the Nvidia stock price can either grow by a factor of \(e^\sigma\) or shrink by a factor of \(e^{-\sigma}\). This is a geometric binomial walk; we could transform everything in the reasoning below with the logarithm and get an additive walk in log-returns.

Thus, on day zero, the Nvidia stock trades for $184. On day one, it can take one of two values:

  • \(184e^\sigma\) because it went up, or
  • \(184e^{-\sigma}\) because it went down.

On day two, it can have one of three values:

  • \(184e^{2\sigma}\) (went up both in the first and second day),
  • \(184e^{\sigma - \sigma} = 184\) (went up and then down, or vice versa), or
  • \(184e^{-2\sigma}\) (went down both days).
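The enumeration above can be sketched in a few lines of Python: after \(n\) days with \(k\) up-moves, the price is \(184\,e^{\sigma(2k-n)}\). The value \(\sigma = 0.04\) here is an assumption, borrowed from the 4 % example used later in the article.

```python
import math

S0 = 184.0    # spot price on day zero
sigma = 0.04  # assumed daily volatility (the 4 % used later in the article)

def node_prices(n, S0=S0, sigma=sigma):
    """Possible closing prices on day n of the geometric binomial walk.

    After k up-moves and (n - k) down-moves the price is
    S0 * exp(sigma * k) * exp(-sigma * (n - k)) = S0 * exp(sigma * (2k - n)).
    """
    return [S0 * math.exp(sigma * (2 * k - n)) for k in range(n + 1)]

print(node_prices(2))  # the three possible day-two prices, lowest to highest
```

Note that day two has three distinct prices rather than four, because an up-then-down path and a down-then-up path land on the same node.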

If it’s easier, we can visualise this as a tree. Each day, the stock price branches into two possibilities, one where it rises, and one where it goes down. In the graph below, each column of bubbles represents the closing value for a day.

nvidia-prediction-03.png

This looks like a very crude approximation, but it actually works if the time steps are fine-grained enough. The uncertainties involved in some of the other estimations we’ll do dwarf the inaccuracies introduced by this model. Even for fairly serious use, I wouldn’t be unhappy with daily time steps when the analysis goes a year out.

It is important to keep in mind that the specific numbers in the bubbles depend on which number we selected for the daily volatility \(\sigma\). Any conclusion we draw from this tree is a function of the specific \(\sigma\) chosen to construct the tree.

When we have chosen an initial \(\sigma\) and constructed this tree, we can price an option using it. Say we have a call option expiring on day three, with a strike price of $180. On day four, the last day in the tree, the option has already expired, so it is worth nothing. We’ll put that into the tree.

nvidia-prediction-04.png

We have already seen what the value of the option is on the day it expires: it’s the profit we would make from exercising it. If the stock is valued at $191, the option is worth $11, the difference between the stock value and the strike price. On the other hand, if the stock is valued at $177, it is worth less than the strike price of the option, so we will not exercise the option, instead letting it expire.

nvidia-prediction-05.png

The day before the expiration day is when we have the first interesting choice to make. We can still exercise the option, with the exercise value of the option calculated the same way.

Or we could hold on to the option. If we hold on to the option for a day, the value of the option will either go up or down, depending on the value of the underlying stock price. We will compute a weighted average of these movement possibilities as

\[\tilde{p} V_u + (1 - \tilde{p}) V_d\]

where \(V_u\) and \(V_d\) are the values the option will have on the next day when the underlying moves up or down in the tree, respectively. Then we’ll discount this with a safe interest rate to account for the fact that by holding the option, we are foregoing cash that could otherwise be used to invest elsewhere. The general equation for the hold value of the option at any time before the expiration day is

\[e^{-r} \left[ \tilde{p} \; V_u + (1 - \tilde{p}) V_d \right].\]
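This backward step is one line of Python. The sketch below leaves \(\tilde{p}\) as a parameter, since the article only pins its value down further on; the function names are mine, not the article’s.

```python
import math

def hold_value(V_u, V_d, p_tilde, r):
    """One backward step of the tree: the discounted weighted average
    of the two next-day option values. r is the per-step safe interest
    rate; p_tilde is the weight fixed later by the arbitrage argument."""
    return math.exp(-r) * (p_tilde * V_u + (1 - p_tilde) * V_d)

# The $199 node from the article: next-day option values 26.97 and 11.36.
print(round(hold_value(26.97, 11.36, 0.491, 0.0001), 2))
```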

Let’s look specifically at the node where the stock value is $199. We’ll assume a safe interest rate of 3.6 % annually, which translates to 0.01 % daily. In the texts I’ve read, 4 % is commonly assumed, but more accurate estimates can be derived from US Treasury bills and similar extremely low-risk interest rates. The value of holding on to the option is then

\[0.9999 \left[ \tilde{p} \; 26.97 + (1 - \tilde{p}) 11.36 \right]\]

and now we only need to know what \(\tilde{p}\) is. That variable looks and behaves a lot like a probability, but it’s not. There’s an arbitrage argument that fixes the value of \(\tilde{p}\) to

\[\tilde{p} = \frac{e^r - e^{-\sigma}}{e^\sigma - e^{-\sigma}}\]

where \(\sigma\) is the same time-step volatility we assumed when creating the tree – in our case, 4 %. This makes \(\tilde{p} = 0.491\), and with this, we can compute the hold value of the option when the underlying is $199:

  • Hold value: $19.03
  • Exercise value: $19.01
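The value of \(\tilde{p}\) is easy to check numerically, plugging in the article’s \(\sigma = 0.04\) and the 0.01 % daily safe rate:

```python
import math

sigma = 0.04     # the daily volatility used to build the tree
r = 0.036 / 360  # 3.6 % annual safe rate as a daily rate (0.01 %)

# The arbitrage-fixed weight from the formula above.
p_tilde = (math.exp(r) - math.exp(-sigma)) / (math.exp(sigma) - math.exp(-sigma))
print(round(p_tilde, 3))  # 0.491
```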

The value of the option at any point in time is the maximum of the hold value and the exercise value. So we replace the stock value of $199 in the tree with the option value of $19.03. We perform the same calculation for the other nodes on day two.

nvidia-prediction-06.png

and then we do the same for the day before that, then before that, etc., until we get to day zero.

nvidia-prediction-07.png

We learn that if someone asks us on day zero to buy a call option with a strike price of $180 and expiry three days later, when the underlying stock currently trades for $184, and has an expected daily volatility of 0.04, then we should be willing to pay $7.38 for that option.
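The whole procedure can be rolled into one function. This is a sketch, not the article’s code: the article’s worked example rounds the bubble values to whole dollars along the way, so an unrounded computation lands near, rather than exactly on, the $7.38 above.

```python
import math

def binomial_american_call(S0, K, sigma, r, n_steps):
    """Price an American call on a geometric binomial tree.

    S0: spot price, K: strike, sigma: per-step volatility,
    r: per-step safe interest rate, n_steps: steps (days) to expiry.
    """
    up, down = math.exp(sigma), math.exp(-sigma)
    p = (math.exp(r) - down) / (up - down)  # the arbitrage-fixed weight
    disc = math.exp(-r)

    # Option values on the expiration day: exercise value only.
    values = [max(S0 * up**k * down**(n_steps - k) - K, 0.0)
              for k in range(n_steps + 1)]

    # Walk backwards: each node is max(hold value, exercise value).
    for day in range(n_steps - 1, -1, -1):
        values = [max(disc * (p * values[k + 1] + (1 - p) * values[k]),
                      S0 * up**k * down**(day - k) - K)
                  for k in range(day + 1)]
    return values[0]

price = binomial_american_call(184, 180, 0.04, 0.036 / 360, 3)
print(round(price, 2))
```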

What’s weird is that this number has nothing to do with the probability we assign to up or down movements. Go through the calculations again: we never involved any probability in the calculation of the price. Although I won’t go through the argument – see Shreve’s excellent Stochastic Calculus for Finance I: The Binomial Asset Pricing Model (Springer, 2005) for that – this price for the option is based on what it would cost to hedge the option with a portfolio of safe investments, borrowing, and long or short positions in the underlying stock.

Even without going through the detailed theory, we can fairly quickly verify that this is indeed how options are priced. Above, we made educated guesses as to the safe interest rate, a reasonable volatility, etc. We calculated with a spot price of $184, a strike price of $180, and expiry three days out. We got an option price of $7.38.

At the time of writing, the Nvidia stock trades at $184.94. It has options that expire in four days. The ones with a strike price of $180 currently sell for $6.20. That’s incredibly close, given the rough estimations and the slight mismatch in duration. The main inaccuracy comes from the volatility we used to construct the tree: the actual volatility of the Nvidia stock over such short time periods and small differences in price is lower.



Comments

  • By _fat_santa 2026-01-20 16:41 (25 replies)

    This article goes more into the technical analysis of the stock rather than the underlying business fundamentals that would lead to a stock dump.

    My 30k ft view is that the stock will inevitably slide as AI datacenter spending goes down. Right now Nvidia is flying high because datacenters are breaking ground everywhere but eventually that will come to an end as the supply of compute goes up.

    The counterargument to this is that the "economic lifespan" of an Nvidia GPU is 1-3 years depending on where it's used, so there's a case to be made that Nvidia will always have customers coming back for the latest and greatest chips. The problem I have with this argument is that it's simply unsustainable to be spending that much every 2-3 years, and we're already seeing this as Google and others are extending their depreciation of GPUs to something like 5-7 years.

    • By agentcoops 2026-01-20 18:23 (6 replies)

      I hear your argument, but short of major algorithmic breakthroughs I am not convinced the global demand for GPUs will drop any time soon. Of course I could easily be wrong, but regardless I think the most predictable cause for a drop in the NVIDIA price would be that the CHIPS act/recent decisions by the CCP leads a Chinese firm to bring to market a CUDA compatible and reliable GPU at a fraction of the cost. It should be remembered that NVIDIA's /current/ value is based on their being locked out of their second largest market (China) with no investor expectation of that changing in the future. Given the current geopolitical landscape, in the hypothetical case where a Chinese firm markets such a chip we should expect that US firms would be prohibited from purchasing them, while it's less clear that Europeans or Saudis would be. Even so, if NVIDIA were not to lower their prices at all, US firms would be at a tremendous cost disadvantage while their competitors would no longer have one with respect to compute.

      All hypothetical, of course, but to me that's the most convincing bear case I've heard for NVIDIA.

      • By reppap 2026-01-20 23:10 (1 reply)

        People will want more GPUs but will they be able to fund them? At what points does the venture capital and loans run out? People will not keep pouring hundreds of billions into this if the returns don't start coming.

        • By gadflyinyoureye 2026-01-21 1:35

          Money will be interesting the next few years.

          There is a real chance that the Japanese carry trade will close soon, with the BoJ seeing rates move up to 4%. This means liquidity will drain from the US markets back into Japan. On the US side there is going to be a lot of inflation between money printing, refund checks, amortization changes and a possible war footing. Who knows?

        • By agentcoops 2026-01-21 18:37

          Yeah, that's the bull case for sure. Chinese firms might not accept training setbacks even given CCP regulations that they dogfood X homegrown chip.

      • By tracker1 2026-01-20 23:15 (1 reply)

        Doesn't even necessarily need to be CUDA compatible... there's OpenCL and Vulkan as well, and likely China will throw enough resources at the problem to bring various libraries into closer alignment to ease of use/development.

        I do think China is still 3-5 years from being really competitive, but still even if they hit 40-50% of NVidia, depending on pricing and energy costs, it could still make significant inroads with legal pressure/bans, etc.

        • By bigyabai 2026-01-21 1:56 (3 replies)

          > there's OpenCL and Vulkan as well

          OpenCL is chronically undermaintained & undersupported, and Vulkan only covers a small subset of what CUDA does so far. Neither has the full support of the tech industry (though both are supported by Nvidia, ironically).

          It feels like nobody in the industry wants to beat Nvidia badly enough, yet. Apple and AMD are trying to supplement raster hardware with inference silicon; both of them are afraid to implement a holistic compute architecture a-la CUDA. Intel is reinventing the wheel with OneAPI, Microsoft is doing the same with ONNX, Google ships generic software and withholds their bespoke hardware, and Meta is asleep at the wheel. All of them hate each other, none of them trust Khronos anymore, and the value of a CUDA replacement has ballooned to the point that greed might be their only motivator.

          I've wanted a proper, industry-spanning CUDA competitor since high school. I'm beginning to realize it probably won't happen within my lifetime.

          • By zozbot234 2026-01-21 2:56 (1 reply)

            The modern successor to OpenCL is SYCL and there's been some limited convergence with Vulkan Compute (they're still based on distinct programming models and even SPIR-V varieties under the hood, but the distance is narrowing somewhat).

            • By pjmlp 2026-01-22 16:01

              Which is basically Intel for practical purposes.

          • By robmay 2026-01-21 13:04

            Lemurian Labs is working on this https://www.lemurianlabs.com/

          • By Balinares 2026-01-21 7:45

            Ask Claude, HN tells me that it can implement the things that you ask.

      • By laughing_man 2026-01-21 5:39 (2 replies)

        I suspect major algorithmic breakthroughs would accelerate the demand for GPUs instead of making it fall off, since the cost to apply LLMs would go down.

        • By MaxBarraclough 2026-01-21 19:58

          Sounds like the Jevons paradox. From https://en.wiktionary.org/wiki/Jevons_paradox :

          > The proposition that technological progress that increases the efficiency with which a resource is used tends to increase (rather than decrease) the rate of consumption of that resource.

          See also Wikipedia: https://en.wikipedia.org/wiki/Jevons_paradox

        • By nroets 2026-01-21 6:36 (1 reply)

          Some changes to the algorithms and implementations will allow cheaper commodity hardware to be used.

          • By Rover222 2026-01-21 11:45

            There will always be an incentive to scale data centers. Better algorithms just mean more bang per gpu, not that “well, that’s enough now, we’ve done it”.

      • By iLoveOncall 2026-01-20 19:23 (4 replies)

        > short of major algorithmic breakthroughs I am not convinced the global demand for GPUs will drop any time soon

        Or, you know, when LLMs don't pay off.

        • By unsupp0rted 2026-01-20 22:00 (2 replies)

          Even if LLMs didn't advance at all from this point onward, there's still loads of productive work that could be optimized / fully automated by them, at no worse output quality than the low-skilled humans we're currently throwing at that work.

          • By pvab3 2026-01-20 22:53 (3 replies)

            Inference requires a fraction of the power that training does. According to the Villalobos paper, the median date is 2028. At some point we won't be training bigger and bigger models every month. We will run out of additional material to train on, things will continue commodifying, and then the amount of training happening will significantly decrease unless new avenues open for new types of models. But our current LLMs are much more compute-intensive than any other type of generative or task-specific model.

            • By SequoiaHope 2026-01-21 5:15 (2 replies)

              Run out of training data? They’re going to put these things in humanoids (they are weirdly cheap now) and record high resolution video and other sensor data of real world tasks and train huge multimodal Vision Language Action models etc.

              The world is more than just text. We can never run out of pixels if we point cameras at the real world and move them around.

              I work in robotics and I don’t think people talking about this stuff appreciate that text and internet pictures is just the beginning. Robotics is poised to generate and consume TONS of data from the real world, not just the internet.

              • By DoctorOetker 2026-01-21 19:28

                While we may run out of human written text of value, we won't run out of symbolic sequences of tokens: we can trivially start with axioms and do random forward chaining (or random backward chaining from postulates), and then train models on 2-step, 4-step, 8-step, ... correct forward or backward chains.

                Nobody talks about it, but ultimately the strongest driver for terascale compute will be for mathematical breakthroughs in cryptography (not bruteforcing keys, but bruteforcing mathematical reasoning).

              • By vintermann 2026-01-21 9:40 (1 reply)

                Yeah, another source of "unlimited data" is genetics. The human reference genome is about 6.5 GB, but these days, they're moving to pangenomes, wanting to map out not just the genome of one reference individual, but all the genetic variation in a clade. Depending on how ambitious they are about that "all", they can be humongous. And unlike say video data, this is arguably a language. We're completely swimming in unmapped, uninterpreted language data.

                • By boppo1 2026-01-21 15:05

                  Can you say more?

            • By yourapostasy 2026-01-20 23:17

              Inference leans heavily on GPU RAM and RAM bandwidth for the decode phase where an increasingly greater amount of time is being spent as people find better ways to leverage inference. So NVIDIA users are currently arguably going to demand a different product mix when the market shifts away from the current training-friendly products. I suspect there will be more than enough demand for inference that whatever power we release from a relative slackening of training demand will be more than made up and then some by power demand to drive a large inference market.

              It isn’t the panacea some make it out to be, but there is obvious utility here to sell. The real argument is shifting towards the pricing.

            • By zozbot234 2026-01-20 23:09 (2 replies)

              > We will run out of additional material to train on

              This sounds a bit silly. More training will generally result in better modeling, even for a fixed amount of genuine original data. At current model sizes, it's essentially impossible to overfit to the training data so there's no reason why we should just "stop".

              • By _0ffh 2026-01-21 0:51 (1 reply)

                You'd be surprised how quickly improvement of autoregressive language models levels off with epoch count (though, admittedly, one epoch is a LOT). Diffusion language models otoh indeed keep profiting for much longer, fwiw.

                • By zozbot234 2026-01-21 9:33 (1 reply)

                  Does this also apply to LLM training at scale? I would be a bit surprised if it does, fwiw.

                  • By _0ffh 2026-01-21 12:48

                    Yup, as soon as data is the bottleneck and not compute, diffusion wins. Tested following the Chinchilla scaling strategy from 7M to 2.5B parameters.

                    https://arxiv.org/abs/2507.15857

              • By pvab3 2026-01-20 23:31 (1 reply)

                I'm just talking about text generated by human beings. You can keep retraining with more parameters on the same corpus

                https://proceedings.mlr.press/v235/villalobos24a.html

                • By x-complexity 2026-01-21 1:35

                  > I'm just talking about text generated by human beings.

                  That in itself is a goalpost shift from

                  > > We will run out of additional material to train on

                  Where it is implied "additional material" === "all data, human + synthetic"

                  ------

                  There's still some headroom left in the synthetic data playground, as cited in the paper linked:

                  https://proceedings.mlr.press/v235/villalobos24a.html ( https://openreview.net/pdf?id=ViZcgDQjyG )

                  "On the other hand, training on synthetic data has shown much promise in domains where model outputs are relatively easy to verify, such as mathematics, programming, and games (Yang et al., 2023; Liu et al., 2023; Haluptzok et al., 2023)."

                  With the caveat that translating this success outside of these domains is hit-or-miss:

                  "What is less clear is whether the usefulness of synthetic data will generalize to domains where output verification is more challenging, such as natural language."

                  The main bottleneck for this area of the woods will be (X := how many additional domains can be made easily verifiable). So long as (the rate of X) >> (training absorption rate), the road can be extended for a while longer.

          • By SchemaLoad 2026-01-20 22:09 (2 replies)

            How much of the current usage is productive work that's worth paying for vs personal usage / spam that would just drop off after usage charges come in? I imagine flooding youtube and instagram with slop videos would reduce if users had to pay fair prices to use the models.

            The companies might also downgrade the quality of the models to make it more viable to provide as an ad supported service which would again reduce utilisation.

            • By unsupp0rted 2026-01-20 22:20 (1 reply)

              For any "click here and type into a box" job for which you'd hire a low-skilled worker and give them an SOP to follow, you can have an LLM-ish tool do it.

              And probably for the slightly more skilled email jobs that have infiltrated nearly all companies too.

              Is that productive work? Well if people are getting paid, often a multiple of minimum wage, then it's productive-seeming enough.

              • By greree 2026-01-21 4:24

                Another bozo making fun of other job classes.

                Why are there still customer service reps? Shouldn’t they all be gone by now due to this amazing technology?

                Ah, tumbleweed.

            • By bethekidyouwant 2026-01-21 3:24

              Who is generating videos for free?

        • By stingraycharles 2026-01-20 22:11 (1 reply)

          Exactly, the current spend on LLMs is based on extremely high expectations and the vendors operating at a loss. It’s very reasonable to assume that those expectations will not be met, and spending will slow down as well.

          Nvidia’s valuation is based on the current trend continuing and even increasing, which I consider unlikely in the long term.

          • By bigyabai 2026-01-20 22:27 (1 reply)

            > Nvidia’s valuation is based on the current trend continuing

            People said this back when Folding@Home was dominated by Team Green years ago. Then again when GPUs sold out for the cryptocurrency boom, and now again that Nvidia is addressing the LLM demand.

            Nvidia's valuation is backstopped by the fact that Russia, Ukraine, China and the United States are all tripping over themselves for the chance to deploy it operationally. If the world goes to war (which is an unfortunate likelihood) then Nvidia will be the only trillion-dollar defense empire since the DoD's Last Supper.

            • By matthewdgreen 2026-01-20 22:34 (1 reply)

              China is restricting purchases of H200s. The strong likelihood is that they're doing this to promote their own domestic competitors. It may take a few years for those chips to catch up and enter full production, but it's hard to envision any "trillion dollar" Nvidia defense empire once that happens.

              • By bigyabai 2026-01-20 23:00 (1 reply)

                It's very easy to envision. America needs chips, and Intel can't do most of this stuff.

                • By zozbot234 2026-01-20 23:12 (1 reply)

                  Intel makes GPUs.

                  • By bigyabai 2026-01-21 0:09 (1 reply)

                    Intel's GPU designs make AMD look world-class by comparison. Outside of transcode applications, those Arc cards aren't putting up a fight.

                    • By irishcoffee 2026-01-21 6:00 (1 reply)

                      ...if you can't be with the one you love, love the one you're with?

                      • By pjmlp 2026-01-22 16:02

                        Intel's GPU story all their life.

        • By MichaelRo 2026-01-21 7:59

          > short of major algorithmic breakthroughs I am not convinced the global demand for GPUs will drop any time soon

          >> Or, you know, when LLMs don't pay off.

          Heh, exactly the observation that a fanatic religious believer cannot possibly foresee. "We need more churches! More priests! Until a breakthrough in praying technique will be achieved I don't foresee less demand for religious devotion!" Nobody foresaw Nietzsche and the decline in blind faith.

          But then again, like an atheist back in the day, the furious zealots would burn me at the stake if they could, for saying this. Sadly no longer possible so let them downvotes pour instead!

        • By selfhoster11 2026-01-20 19:48 (2 replies)

          They already are paying off. The nature of LLMs means that they will require expensive, fast hardware that's a large capex.

          • By kortilla 2026-01-20 19:53 (1 reply)

            They aren’t yet because the big providers that paid for all of this GPU capacity aren’t profitable yet.

            They continually leapfrog each other and shift around customers, which indicates that the current capacity is already higher than what is required for what people actually pay for.

            • By MrDarcy 2026-01-20 20:49 (5 replies)

              Google, Amazon, and Microsoft aren’t profitable?

              • By notyourwork 2026-01-20 21:02 (2 replies)

                I assume the reference was that AI use cases are not profitable. Those companies are subsidizing them, and OpenAI/Grok are burning money.

                • By lossyalgo 2026-01-21 0:00

                  Yeah but OpenAI is adding ads this year for the free versions, which I'm guessing is most of their users. They are probably hedging on taking a big slice of Google's advertising monopoly-pie (which is why Google is also now all-in on forcing Gemini opt-out on every product they own, they can see the writing on the wall).

                • By onion2k 2026-01-20 23:53 (1 reply)

                  Google, Amazon, and Microsoft do a lot of things that aren't profitable in themselves. There is no reason to believe a company will kill a product line just because it makes a loss. There are plenty of other reasons to keep it running.

                  • By notyourwork 2026-01-21 16:44

                    I didn't imply anything about what big-tech would do.

              • By wolfram74 2026-01-20 22:20

                Do you think it's odd you only listed companies with already existing revenue streams and not companies that started with and only have generative algos as their product?

              • By josefx 2026-01-20 21:15

                Aren't all Microsoft products OpenAI based? OpenAI has always been burning money.

              • By dangus 2026-01-20 22:25

                How many business units have Google and Microsoft shut down or ceased investment for being unprofitable?

                I hear Meta is having massive VR division layoffs…who could have predicted?

                Raw popularity does not guarantee sustainability. See: Vine, WeWork, MoviePass.

          • By Forgeties79 2026-01-20 20:30 (1 reply)

            Where? Who’s in the black?

      • By kelseyfrog 2026-01-21 17:33

        Algorithmic breakthroughs (increases in efficiency) risk Jevons Paradox. More efficient processes make deploying them even more cost effective and increases demand.

    • By lairv 2026-01-20 18:03 (2 replies)

      NVIDIA stock tanked in 2025 when people learned that Google used TPUs to train Gemini, which everyone in the community has known since at least 2021. So I think it's very likely that NVIDIA stock could crash for non-rational reasons.

      edit: 2025* not 2024

      • By readthenotes1 2026-01-20 19:27 (3 replies)

        It also tanked to ~$90 when Trump announced tariffs on all goods for Taiwan except semiconductors.

        I don't know if that's non-rational, or if people can't be expected to read the second sentence of an announcement before panicking.

        • By Loudergood 2026-01-20 19:52 (3 replies)

          The market is full of people trying to anticipate how other people are going to react and exploit that by getting there first. There's a layer aimed at forecasting what that layer is going to do as well.

          It's guesswork all the way down.

        • By gertlex 2026-01-20 20:34 (1 reply)

          This was also on top of claims (Jan 2025) that Deepseek showed that "we don't actually need as much GPU, thus NVidia is less needed"; at least it was my impression this was one of the (now silly-seeming) reasons NVDA dropped then.

          • By readthenotes1 2026-01-24 19:04

            It had already recovered from the DeepSeek head fake iirc

        • By mschuster91 2026-01-20 21:22

          > I don't know if that's non-rational, or if people can't be expected to read the second sentence of an announcement before panicking.

          These days you have AI bots doing sentiment based training.

          If you ask me... all these excesses are a clear sign for one thing, we need to drastically rein in the stonk markets. The markets should serve us, not the other way around.

      • By Der_Einzige 2026-01-20 20:59 (4 replies)

        Google did not use TPUs for literally every bit of compute that led to Gemini. GCP has millions of high end Nvidia GPUs and programming for them is an order of magnitude easier, even for googlers.

        Any claim from Google that all of Gemini (including previous experiments) was trained entirely on TPUs is a lie. What they are truthfully saying is that the final training run was done entirely on TPUs. The market shouldn’t react heavily to this, but instead should react positively to the fact that Google is now finally selling TPUs externally and their fab yields are better than expected.

        • By djsjajah 2026-01-20 21:23

          > including all previous experiments

          How far back do you go? What about experiments into architecture features that didn’t make the cut? What about pre-transformer attention?

          But more generally, why are you so sure that the team that built Gemini didn’t exclusively use TPUs while they were developing it?

          I think that one of the reasons that Gemini caught up so quickly is because they have so much compute at fraction of the price of everyone else.

        • By notyourwork 2026-01-20 21:03

          Why should it not react heavily? What’s stopping this from being a start of a trend for google and even Amazon?

        • By imtringued 2026-01-21 7:35

          JAX is very easy to use. Give it a try.

        • By gregorygoc 2026-01-21 7:19

          They are not lies.

    • By mnky9800n 2026-01-20 17:17 (9 replies)

      I really don't understand the argument that nvidia GPUs only work for 1-3 years. I am currently using A100s and H100s every day. Those aren't exactly new anymore.

      • By mbrumlow 2026-01-20 18:52 (3 replies)

        It’s not that they don’t work. It’s how businesses handle hardware.

        I worked at a few data centers on and off in my career. I got lots of hardware for free or on the cheap simply because the hardware was considered “EOL” after about 3 years, often when support contracts with the vendor ends.

        There are a few things to consider.

        Hardware that ages produces more errors, and those errors cost, one way or another.

        Rack space is limited. A perfectly fine machine that consumes 2x the power for half the output costs you. It’s cheaper to upgrade a perfectly fine working system simply because the replacement performs better per watt in the same space.

        Lastly. There are tax implications in buying new hardware that can often favor replacement.

        • By fooker 2026-01-20 18:56 (3 replies)

          I’ll be so happy to buy an EOL H100!

          But no, there are none to be found; it is a four-year-old, two-generations-old machine at this point, and you can’t buy one used at a rate cheaper than new.

          • By pixl97 2026-01-20 20:26 (1 reply)

            Well demand is so high currently that it's likely this cycle doesn't exist yet for fast cards.

            For servers I've seen where the slightly used equipment is sold in bulk to a bidder and they may have a single large client buy all of it.

            Then around the time the second cycle comes around it's split up in lots and a bunch ends up at places like ebay

            • By lancekey 2026-01-20 22:31

              Yeah, looking at the 60-day moving average on computeprices.com, H100s have actually gone UP in cost recently, at least to rent.

              A lot of demand out there for sure.

          • By aswegs8 2026-01-20 19:25 (5 replies)

            Not sure why this "GPUs obsolete after 3 years" gets thrown around all the time. Sounds completely nonsensical.

            • By belval 2026-01-20 19:42

              Especially since AWS still has p4 instances, which are six-year-old A100s. Clearly, even for hyperscalers, these have a useful life longer than 3 years.

            • By tuckerman 2026-01-20 22:39 (1 reply)

              I agree that there is a lot of hyperbole thrown around here, and it's possible to keep using some hardware for a long time, or to sell it and recover some cost. But my experience planning compute at large companies is that spending money on hardware upgrades can often save money long term.

              Even assuming your compute demands stay fixed, it's possible that a future generation of accelerator will be sufficiently more power- and cooling-efficient for your workload that upgrading is a positive return on investment, more so when you take into account that you can start depreciating the hardware again.

              If your compute demands aren't fixed, you have to work around limited floor space/electricity/cooling capacity/network capacity/backup generators/etc., so moving to the next generation is required to meet demand without extremely expensive (and often slow) infrastructure projects.

              • By zozbot234 2026-01-20 23:19 (1 reply)

                Sure, but I don't think most people here are objecting to the obvious "3 years is enough for enterprise GPUs to become totally obsolete for cutting-edge workloads" point. They're just objecting to the rather bizarre notion that the hardware itself might physically break in that timeframe. Now, it would be one thing if that notion was supported by actual reliability studies drawn from that same environment - like we see for the Backblaze HDD lifecycle analyses. But instead we're just getting these weird rumors.

                • By tuckerman 2026-01-21 17:14

                  I agree that it is a strange notion that would require some evidence, and I do see it in some other threads. But looking up the chain of parent comments, it seems people are discussing economic usefulness, so that is what I'm responding to.

            • By thworp 2026-01-21 15:47 (1 reply)

              A toy example: NeoCloud Inc builds a new datacenter full of the new H800 GPUs. It rents out a rack of them for $10/minute while paying $6/minute for electricity, interest, loan repayment, rent and staff.

              Two years later, the H900 is released at a similar price, but it performs twice as many TFLOPs/watt. Now any datacenter using H900s can offer the same performance as NeoCloud Inc at $5/minute, taking all their customers.

              [all costs reduced to $/minute to make a point]
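
              The arithmetic of this toy example can be sketched in a few lines; all figures are the illustrative numbers above, not real costs, and the rival's $3/minute all-in cost is an added assumption (that doubled efficiency roughly halves the power-dominated cost):

```python
# Toy numbers from the example above, in $ per rack-minute; illustrative only.
neocloud_cost = 6.0    # electricity, interest, loan repayment, rent, staff
neocloud_price = 10.0

# Hypothetical H900 rival: 2x TFLOPs/watt, so assume the all-in cost of
# delivering the same output drops to roughly half. The 3.0 is an assumption.
rival_cost = 3.0
rival_price = 5.0      # undercuts NeoCloud while matching its performance

print(neocloud_price - neocloud_cost)  # NeoCloud margin at its own price: 4.0
print(rival_price - rival_cost)        # rival margin while undercutting: 2.0
print(rival_price - neocloud_cost)     # NeoCloud margin if forced to match: -1.0
```

              The last line is the point of the example: NeoCloud cannot match the rival's price without operating at a loss.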

              • By fooker 2026-01-22 12:17

                It really depends on how long `NeoCloud` takes to recoup their capital expenditure on the H800s.

                Current estimates are about 1.5-2 years, which not-so-suspiciously coincides with your toy example.

            • By bmurphy1976 2026-01-20 20:00 (2 replies)

              It's because they run 24/7 in a challenging environment. They will start dying at some point, and if you aren't replacing them, you will have a big problem when they all die en masse.

              These things are like cars, they don't last forever and break down with usage. Yes, they can last 7 years in your home computer when you run it 1% of the time. They won't last that long in a data center where they are running 90% of the time.

              • By zozbot234 2026-01-20 20:50 (1 reply)

                A makeshift cryptomining rig is absolutely a "challenging environment", and the vast majority of GPUs that went through one are just fine. The idea that the hardware might just die after 3 years' usage is bonkers.

                • By Der_Einzige 2026-01-20 21:02

                  Crypto miners undervolt GPUs for efficiency, and in general crypto mining is extremely lightweight on GPUs compared to AI training or inference at scale.

              • By Der_Einzige 2026-01-20 21:01

                With good enough cooling they can run indefinitely! The vast majority of failures are either at the beginning due to defects or at the end due to cooling. It's as if the idea that hardware with no moving parts (except the HVAC) is somehow unreliable came out of thin air!

            • By jpfromlondon 2026-01-21 14:10

              Economically obsolete, not physically obsolete; I suspect this is in line with standard depreciation.

          • By SequoiaHope 2026-01-20 21:59 (1 reply)

            There’s plenty on eBay? But at the end of your comment you say “a rate cheaper than new” so maybe you mean you’d love to buy a discounted one. But they do seem to be available used.

            • By fooker 2026-01-21 0:55

              > so maybe you mean you’d love to buy a discounted one

              Yes. I'd expect 4 year old hardware used constantly in a datacenter to cost less than when it was new!

              (And just in case you did not look carefully: most of the eBay listings are scams. The actual products pictured in those are A100 workstation GPUs.)

        • By JMiao 2026-01-20 19:45

          Do you know how support contract lengths are determined? Seems like a path to force hardware refreshes with boilerplate failure data carried over from who knows when.

        • By aorloff 2026-01-21 7:12

          > Rack space is limited.

          Rack space and power (and cooling) in the datacenter drive what hardware stays in the datacenter.

      • By linkregister 2026-01-20 17:24 (3 replies)

        The common factoid raised in financial reports is that GPUs used in model training lose their thermal insulation due to high utilization, and the GPUs then ostensibly fail. I have heard anecdotal reports of GPUs used for cryptocurrency mining having similar wear patterns.

        I have not seen hard data, so this could be an oft-repeated but false fact.

        • By Melatonic 2026-01-20 17:38 (2 replies)

          It's the opposite, actually: most GPUs used for mining are run at a consistent temperature and load, which is good for long-term wear. Peaky loads, where the GPU goes from cold to hot and back, lead to more degradation because of changes in thermal expansion. This has been known for some time now.

          • By Yizahi 2026-01-20 18:26 (4 replies)

            That's a commonly repeated idea, but it doesn't take into account the countless mining farms smaller than a datacenter: anything from a single motherboard with 8 cards to a small shed full of rigs, all of which tend to disregard common engineering practices and run hardware into the ground to maximize output until the next police raid or difficulty bump. There are plenty of photos on the internet of crappy rigs like that, and no one can guarantee where any given used GPU came from.

            Another commonly forgotten issue is that many electrical components are rated by hours of operation, and cheaper boards tend to use components with smaller tolerances. That rated lifetime is actually a curve, where hours decrease with higher temperature. There have been instances of whole batches of cards failing due to failing MOSFETs, for example.

            • By Melatonic 2026-01-20 20:44

              While I'm sure there are small amateur setups done poorly that push cards to their limits, this seems like a rarer and less efficient use. GPUs (even used) are expensive, and running them at maximum would mean large costs and time spent replacing them regularly, not to mention the increased cost of cooling and power.

              Not sure I understand the police raid mentality: why are the police raiding amateur crypto mining setups?

              I can totally see cards used by casual amateurs being very worn, though, especially your example of single-mobo miners who were likely also using the card for gaming and other tasks.

              I would imagine that anyone purposely running hardware into the ground would be running cheaper, more efficient ASICs rather than expensive Nvidia GPUs, since they are much easier and cheaper to replace. I would still be surprised, however, if most were not prioritising temps and cooling.

            • By whaleofatw2022 2026-01-20 19:05 (1 reply)

              Let's also not forget the set of miners who either overclock or don't really care about the long term in how they set up their thermals.

              • By belval 2026-01-20 19:44 (2 replies)

                Miners usually don't overclock, though. If anything, underclocking is the best way to improve your ROI, because it significantly reduces power consumption while retaining most of the hashrate.

                • By Melatonic 2026-01-20 20:40 (1 reply)

                  Exactly, and more specifically, undervolting: you want the minimum voltage going to the card while it still performs decently.

                  Even in amateur setups the amount of power used is a huge factor (because of the huge draw from the cards themselves and the AC units cooling the room), so minimising heat is key.

                  From what I remember, most cards (and CPUs as well) hit peak efficiency when undervolted and running somewhere around 70-80% of max load (this also depends on the cooling setup). The first thing to wear out would probably be the fan/cooler itself (occasional repasting would of course help, as thermal paste dries out with both time and heat).

                  • By bluGill 2026-01-20 23:07

                    The only amateurs I know doing this are trying to heat their garage for free. So long as the heat gain is paid for, they can afford to heat an otherwise unheated building.

                • By zozbot234 2026-01-20 20:53

                  Wouldn't the exact same considerations apply to AI training/inference shops, seeing as gigawatts are usually the key constraint?

            • By coryrc 2026-01-20 19:10

              Specifically, we expect a halving of lifetime per 10 K increase in temperature.
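
              This rule of thumb (a crude Arrhenius-style approximation, not a datasheet value) is easy to sketch; the baseline temperature and rated hours below are made-up illustrative numbers:

```python
# Lifetime halves per 10 K rise in temperature (rough rule of thumb).
# base_temp_c and base_hours are illustrative, not from any datasheet.
def lifetime_hours(temp_c, base_temp_c=60.0, base_hours=50_000.0):
    return base_hours * 2.0 ** (-(temp_c - base_temp_c) / 10.0)

print(lifetime_hours(60))  # 50000.0 hours at the baseline temperature
print(lifetime_hours(70))  # 25000.0 hours: half the life, 10 K hotter
print(lifetime_hours(80))  # 12500.0 hours
```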

            • By WalterBright 2026-01-20 23:11

              Why would police raid a shed housing a compute center?

          • By mbesto 2026-01-20 19:23

            Source?

        • By zozbot234 2026-01-20 17:32

          > I have heard anecdotal reports of GPUs used for cryptocurrency mining having similar wear patterns.

          If this were anywhere close to a common failure mode, I'm pretty sure we'd know it already, given how crypto mining GPUs were usually run to the max in makeshift settings with woefully inadequate cooling and environmental control. The overwhelming anecdotal evidence from people who have bought them is that even a "worn" crypto GPU is absolutely fine.

        • By munk-a 2026-01-20 17:29

          I can't confirm that fact, but it's important to acknowledge that consumer usage is very different from the high continuous utilization in mining and training. It is credible that the wear on cards under such extreme usage is as high as reported: consumers may use their cards at peak maybe 5% of waking hours, and if endurance only drops off by about 3x when a card is used near 100%, that is a believable scale for wear.

      • By denimnerd42 2026-01-20 17:46

        1-3 years is too short, but they aren't making new A100s. There are 8 in a server, and when one goes bad, what do you do? You won't be able to renew a support contract. If you want to DIY, you eventually have to start consolidating pick-and-pulls. Maybe the vendors will buy them back from people who want to upgrade, and resell them. This is the issue we are seeing with A100s, and we are trying to see what our vendor will offer for support.

      • By iancmceachern 2026-01-20 17:18 (3 replies)

        They're no longer energy-competitive, i.e. the power they draw per unit of compute exceeds what is available now.

        It's like if your taxi company could buy taxis that were more fuel-efficient every year.

        • By bob1029 2026-01-20 17:21 (3 replies)

          Margins are typically not so razor thin that you cannot operate with technology from one generation ago. 15 vs 17 mpg is going to add up over time, but for a taxi company it's probably not a lethal situation to be in.

          • By SchemaLoad 2026-01-20 22:11

            At least with crypto mining this was the case: hardware from 6 months ago became useless e-waste because the new generation was more power-efficient. It all depends on how expensive the hardware is vs the cost of power.

          • By iancmceachern 2026-01-20 20:08 (2 replies)

            Tell that to the airline industry

            • By hibikir 2026-01-20 21:59

              And yet they aren't running planes and engines all from 2023 or later: see the MD-11 that crashed in Louisville; nobody has made a new MD-11 in over 20 years. Planes move to less competitive routes, change carriers, and eventually might even stop carrying people and switch to cargo, but a plane doesn't drop to zero value when the new one comes out. An airline will want to replace its planes, but a new plane isn't fully amortized in a year or three: it still has value for quite a while.

            • By bob1029 2026-01-20 20:10

              I don't think the airline industry is a great example from an IT perspective, but I agree with regard to the aircraft.

        • By echelon 2026-01-20 17:26

          Nvidia has plenty of time and money to adjust. They're already buying out upstart competitors to their throne.

          It's not like the CUDA advantage is going anywhere overnight, either.

          Also, if Nvidia invests in its users and in the infrastructure layouts, it gets to see upside no matter what happens.

        • By mikkupikku 2026-01-20 17:22 (4 replies)

          If a taxi company did that every year, they'd be losing a lot of money. Of course new cars and cards are cheaper to operate than old ones, but is that difference enough to offset buying a new one every one to three years?

          • By gruez 2026-01-20 17:39

            >If a taxi company did that every year, they'd be losing a lot of money. Of course new cars and cards are cheaper to operate than old ones, but is that difference enough to offset buying a new one every one to three years?

            That's where the analogy breaks. There are massive efficiency gains from new process nodes, which new GPUs use. Efficiency improvements for cars are glacial, aside from "breakthroughs" like hybrid/EV cars.

          • By philwelch 2026-01-20 17:46

            If there was a new taxi every other year that could handle twice as many fares, they might. That’s not how taxis work but that is how chips work.

          • By dylan604 2026-01-20 17:35 (2 replies)

            >offset buying a new one every one to three years?

            Isn't that precisely how leasing works? Also, don't companies prefer not to own hardware for tax purposes? I've worked for several places where they leased compute equipment with upgrades coming at the end of each lease.

            • By mikkupikku 2026-01-20 19:03 (3 replies)

              Who wants to buy GPUs that were redlined for three years in a data center? Maybe there's a market for those, but most people already seem wary of lightly used GPUs from other consumers, let alone GPUs that were burning in a crypto farm or AI data center for years.

              • By dylan604 2026-01-20 21:02 (1 reply)

                > Who wants to buy

                Who cares? That's the beauty of the lease: once it's over, the old and busted gets replaced with the new and shiny. What the leasing company does with it is up to them; deprecated equipment becomes one of those "YP, not MP" (your problem, not my problem) situations.

                • By bluGill 2026-01-20 23:15

                  The leasing company cares: the lease terms depend on the answer. That is why I can lease a car for 3 years for (more or less) the same payment as a 6-year loan: the leasing company expects someone will want the car afterwards. If there is no market for it, they will still lease it, but the cost goes up.

              • By coryrc 2026-01-20 19:14

                Depends on the price, of course. I'm wary of paying 50% of new for something that was run hard for 3 years. Seems an NVIDIA H100 is going for $20k+ on eBay; I'm not taking that risk.

              • By pixl97 2026-01-20 20:40

                Depending on the discount, a lot of people.

            • By gowld 2026-01-20 18:26

              That works either because someone wants to buy the old hardware from the manufacturer/lessor, or because the hardware is EOL in 3 years anyway but it's easier to let the lessor deal with recycling / valuable-parts recovery.

          • By wordpad 2026-01-20 17:30 (1 reply)

            If your competitor refreshes their cards and you don't, they will win on margin.

            You kind of have to.

            • By lazide 2026-01-20 17:35 (2 replies)

              Not necessarily if you count capital costs vs operating costs/margins.

              Replacing cars every 3 years vs a couple % in efficiency is not an obvious trade off. Especially if you can do it in 5 years instead of 3.

              • By zozbot234 2026-01-20 17:40 (1 reply)

                You can sell the old, less efficient GPUs to folks who will be running them with markedly lower duty cycles (so, less emphasis on direct operational costs), e.g. for on-prem inference or even just typical workstation/consumer use. It ends up being a win-win trade.

                • By lazide 2026-01-20 19:27 (1 reply)

                  Then you’re dealing with a lot of labor to do the switches (and arrange sales of used equipment), plus capital float costs while you do it.

                  It can make sense at a certain scale, but it's a non-trivial amount of cost and effort for potentially marginal returns.

                  • By pixl97 2026-01-20 20:43 (1 reply)

                    Building a new data center and getting power takes years if you want to double your capacity. Swapping out a rack for one that is twice as fast takes very little time in comparison.

                    • By lazide 2026-01-20 21:09 (1 reply)

                      Huh? What do your statements have to do with what I'm saying?

                      I'm just pointing out that changing it out at 5 years is likely cheaper than at 3 years.

                      • By pixl97 2026-01-20 21:44 (1 reply)

                        Depends on the rate of growth of the hardware. If your data center is full and fully booked, and hardware is doubling in speed every year, it's cheaper to switch it out every couple of years.

                        • By lazide 2026-01-21 10:30

                          So many goal posts being changed constantly?

              • By iancmceachern 2026-01-20 22:41 (1 reply)

                You highlight the exact dilemma.

                Company A has taxis that are 5 percent less efficient and for the reasons you stated doesn't want to upgrade.

                Company B just bought new taxis, and they are undercutting company A by 5 percent while paying their drivers the same.

                Company A is no longer competitive.

                • By Dylan16807 2026-01-20 23:47 (1 reply)

                  The debt company B took on to buy those new taxis means they're no longer competitive either if they undercut by 5%.

                  The scenario doesn't add up.

                  • By iancmceachern 2026-01-21 0:43 (1 reply)

                    But Company A also took on debt for theirs, so that's a wash. You assume only one of them has debt to service?

                    • By Dylan16807 2026-01-21 0:49 (1 reply)

                      Both companies bought a set of taxis in the past. Presumably at the same time if we want this comparison to be easy to understand.

                      If company A still has debt from that, company B has that much debt plus more debt from buying a new set of taxis.

                      Refreshing your equipment more often means that you're spending more per year on equipment. If you do it too often, then even if the new equipment is better you lose money overall.

                      If company B wants to undercut company A, their advantage from better equipment has to overcome the cost of switching.

                      • By iancmceachern 2026-01-21 3:15 (1 reply)

                        You are assuming something again.

                        They both refresh their equipment at the same rate.

                        • By Dylan16807 2026-01-21 4:13

                          > They both refresh their equipment at the same rate.

                          I wish you'd said that upfront. Especially because the comment you replied to was talking about replacing at different rates.

                          So your version, if company A and B are refreshing at the same rate, then that means six months before B's refresh company A had the newer taxis. You implied they were charging similar amounts at that point, so company A was making bigger profits, and had been making bigger profits for a significant time. So when company B is able to cut prices 5%, company A can survive just fine. They don't need to rush into a premature upgrade that costs a ton of money, they can upgrade on their normal schedule.

                          TL;DR: six months ago company B was "no longer competitive" and they survived. The companies are taking turns having the best tech. It's fine.

      • By mbesto 2026-01-20 18:11 (2 replies)

        Not saying you're wrong. A few things to consider:

        (1) We simply don't know what the useful life is going to be, because AI-focused GPUs used for training and inference are such a recent development.

        (2) Warranties and service. Most enterprise hardware has service contracts tied to purchases. I haven't seen anything publicly disclosed about what these contracts look like, but the speculation is that they are much more aggressive (3 years or less) than typical enterprise hardware contracts (Dell, HP, etc.). Past those contracts, extended support can get really pricey.

        (3) Power efficiency. If new GPUs are more power-efficient, the energy savings could be large enough to justify upgrades.

        • By epolanski 2026-01-20 18:59 (2 replies)

          Nvidia is moving to a 1-year release cycle for data center parts, and in Jensen's words, once a new generation is released you lose money by being on the older hardware. It no longer makes financial sense to run it.

          • By aurareturn 2026-01-21 15:13

            Do you not see the bad logic?

            Companies must buy new Nvidia GPUs because their older Nvidia GPUs are obsolete. However, the old GPUs are only obsolete if companies buy the new Nvidia GPUs.

          • By pixl97 2026-01-20 20:45

            That will come back to bite them in the ass if money leaves the AI race.

        • By pvab3 2026-01-20 23:14

          Based on my napkin math, an H200 needs to run for about 4 years straight at maximum power (10.2 kW) to consume its own $35k price in energy (at 10 cents per kWh).
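
          Taking the comment's figures at face value (10.2 kW is closer to a full multi-GPU server than to a single H200 card, but the arithmetic is the same either way), the napkin math checks out:

```python
# Hours of continuous max-power draw before energy spend equals purchase price.
# All inputs are the comment's napkin figures, not verified prices.
price_usd = 35_000.0
power_kw = 10.2
usd_per_kwh = 0.10

hours = price_usd / (power_kw * usd_per_kwh)  # ~34,314 hours
years = hours / (24 * 365)

print(round(years, 1))  # roughly 3.9, i.e. about 4 years straight
```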

      • By swalsh 2026-01-20 20:25

        If power is the bottleneck, it may make business sense to rotate to a GPU that better utilizes the same power if the newer generation gives you a significant advantage.

      • By linuxftw 2026-01-20 17:39

        I think the story is less about the GPUs themselves, and more about the interconnects for building massive GPU clusters. Nvidia just announced a massive switch for linking GPUs inside a rack. So the next couple of generations of GPU clusters will be capable of things that were previously impossible or impractical.

        This doesn't mean much for inference, but for training, it is going to be huge.

      • By legitster 2026-01-20 17:44

        From an accounting standpoint, it probably makes sense to have their depreciation be 3 years. But yeah, my understanding is that either they have long service lives, or the customers sell them back to the distributor so they can buy the latest and greatest. (The distributor would sell them as refurbished)

      • By savorypiano 2026-01-20 17:30

        You aren't trying to support ad-based demand like OpenAI is.

    • By nospice 2026-01-20 17:19 (3 replies)

      > My 30k ft view is that the stock will inevitably slide as AI datacenter spending goes down.

      Their stock trajectory started with one boom (cryptocurrencies) and then seamlessly progressed to another (AI). You're basically looking at a decade of "number goes up". So yeah, it will probably come down eventually (or the inflation will catch up), but it's a poor argument for betting against them right now.

      Meanwhile, the investors who were "wrong" anticipating a cryptocurrency revolution and who bought NVDA have not much to complain about today.

      • By mysteria 2026-01-20 17:34 (1 reply)

        Personally, I wonder whether, even if the LLM hype dies down, we'll get a new boom in AI for robotics and the "digital twin" technology Nvidia has been hyping to train robots. That's going to need GPUs for both the ML component and 3D visualization. Robots haven't yet had their SD 1.1 or GPT-3 moment; we're still in the early days of Pythia, GPT-J, AI Dungeon, etc., in LLM terms.

        • By iwontberude 2026-01-20 19:28 (1 reply)

          Exactly, they will pivot back to AR/VR

          • By mysteria 2026-01-20 19:57 (1 reply)

            That would tank the stock price, though, as AR/VR is a much smaller market than AI, even if it wouldn't kill the company. Hence I'm talking about something like robotics, which has a lot of opportunity to grow and make use of all those chips and datacenters they're building.

            There is one AR/VR application that might need this kind of infrastructure, though: AI-driven games, or Holodeck-like experiences, where frames are generated rather than modeled and rendered traditionally.

            • By bigyabai 2026-01-20 20:04

              Nvidia's not your average bear, they can walk and chew bubblegum at the same time. CUDA was developed off money made from GeForce products, and now RTX products are being subsidized by the money made on CUDA compute. If an enormous demand for efficient raster compute arises, Nvidia doesn't have to pivot much further than increasing their GPU supply.

              Robotics is a bit of a "flying car" application that gets people to think outside the box. Right now, both Russia and Ukraine are using Nvidia hardware in drones and cruise missiles and C2 as well. The United States will join them if a peer conflict breaks out, and if push comes to shove then Europe will too. This is the kind of volatility that crazy people love to go long on.

      • By munk-a 2026-01-20 17:26 (1 reply)

        That's the rub - it's clearly overvalued and will readjust... the question is when. If you can figure out when precisely then you've won the lottery, for everyone else it's a game of chicken where for "a while" money that you put into it will have a good return. Everyone would love if that lasted forever so there is a strong momentum preventing that market correction.

        • By jama211 2026-01-20 18:06 (2 replies)

          It was overvalued when crypto was happening too, but another boom took its place. Of course, lightning rarely strikes twice and all that, but it shows that "overvalued" doesn't mean the price is guaranteed to go down. Predicting the future is hard.

          • By pixl97 2026-01-20 20:48 (1 reply)

            As they say, the market can remain irrational far longer than you can remain solvent.

          • By sidrag22 2026-01-20 18:49 (1 reply)

            If there was anything I was going to bet against between 2019 and now, it was Nvidia... and wow, it feels wild how far in the opposite direction it went.

            I do wonder what reasoning people back then would have imagined for it to increase in value this much; they'd probably still assume it was crypto-related.

            • By jama211 2026-01-21 4:25

              It's not impossible they could've seen AI investment coming, but it would've been very hard.

      • By ericmcer 2026-01-20 18:04 (1 reply)

        Crypto and AI can both be linked to a broader trend, though: we need processors capable of running compute on massive sets of data quickly. I don't think that demand will ever go down, whether some new tech emerges or we just continue shoveling LLMs into everything. Imagine the compute needed to allow every person on earth to run a couple million tokens through a model like Anthropic's Opus every day.
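
        A back-of-envelope sketch of that last scenario; every number here (population served, tokens per person, per-accelerator throughput) is a made-up assumption for illustration:

```python
# Hypothetical demand: everyone on earth runs a couple million tokens per day.
population = 8_000_000_000
tokens_per_person_per_day = 2_000_000
tokens_per_day = population * tokens_per_person_per_day  # 1.6e16 tokens/day

# Assume a single high-end accelerator sustains ~1,000 tokens/s on a large
# model -- a hypothetical round number, not a benchmark.
tokens_per_gpu_per_day = 1_000 * 86_400  # 8.64e7 tokens/day

gpus_needed = tokens_per_day / tokens_per_gpu_per_day
print(f"{gpus_needed:,.0f} accelerators running flat out")
```

        Even if the throughput assumption is off by an order of magnitude, the answer stays in the tens to hundreds of millions of accelerators, which is the comment's point about demand.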

        • By pixl97 2026-01-20 20:47

          Agreed, single thread performance increases are dead and things are moving to massively parallel processing.

    • By JakeSc 2026-01-20 22:21 (2 replies)

      Agree on looking at the company-behind-the-numbers. Though presumably you're aware of the Efficient Market Hypothesis. Shouldn't "slowed down datacenter growth" be baked into the stock price already?

      If I'm understanding your prediction correctly, you're asserting that the market thinks datacenter spending will continue at this pace indefinitely, and you yourself uniquely believe that to be untrue. Right? I wonder why the market (including hedge fund analysts _much_ more sophisticated than us) should be so misinformed.

      Presumably the market knows that the whole earth can't be covered in datacenters, and thus has baked that into the price, no?

      • By testdelacc1 2026-01-20 22:27

        I saw a $100 bill on the ground. I nearly picked it up before I stopped myself. I realised that if it was a genuine currency note, the Efficient Market would have picked it up already.

      • By matthewdgreen 2026-01-20 22:30

        The EMH does not mean that markets are free of over-investment and asset bubbles, followed by crashes.

    • By TacticalCoder 2026-01-20 23:39

      > This article goes more into the technical analysis of the stock rather than the underlying business fundamentals that would lead to a stock dump. My 30k ft view is that the stock will inevitably slide as AI

      Actually "technical analysis" (TA) has a very specific meaning in trading: TA is using past prices, volume of trading and price movements to, hopefully, give probabilities about future price moves.

      https://en.wikipedia.org/wiki/Technical_analysis

      But TFA doesn't do that at all: it goes into detail on one pricing model/method for options pricing. In the typical options pricing model, all you use is the current price (of the underlying, say NVDA), the strike price (of the option), the expiration date, the current interest rate, and IV (implied volatility: influenced by recent price movements, but independent of any technical analysis).

      Be it Black-Scholes-Merton (european-style options), Bjerksund-Stensland (american-style options), binomial as in TFA, or other open options pricing model: none of these use technical analysis.

      Here's an example (for european-style options) where one can see the parameters:

      https://www.mystockoptions.com/black-scholes.cfm

      You can literally compute entire options chains with these parameters.
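
      As an illustration of how few inputs these models need, here is a minimal Black-Scholes sketch for a European call using exactly the parameters listed above (my own sketch, not code from TFA or the linked calculator):

```python
import math

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(spot, strike, years, rate, vol):
    # Black-Scholes price of a European call option.
    d1 = (math.log(spot / strike) + (rate + 0.5 * vol**2) * years) / (vol * math.sqrt(years))
    d2 = d1 - vol * math.sqrt(years)
    return spot * norm_cdf(d1) - strike * math.exp(-rate * years) * norm_cdf(d2)

# Textbook sanity check: S=100, K=100, T=1y, r=5%, sigma=20% -> about 10.45.
print(round(bs_call(100, 100, 1.0, 0.05, 0.20), 2))
```

      Looping `bs_call` over a list of strikes and expiries gives exactly the "entire options chain" computation mentioned here.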

      Now it's known for a fact that many professional trading firms have their own options pricing methods and will arb when they think they've found incorrectly priced options. I don't know whether some of them use actual forms of TA that they then mix with their options pricing models or not.

      > My 30k ft view is that the stock will inevitably slide as AI datacenter spending goes down.

      No matter if you're right or not, I'd argue you're doing what's called fundamental analysis (but I may be wrong).

      P.S.: I'm not debating the merits of TA and whether it's reading tea leaves or not. What I'm saying is that options pricing using the binomial method cannot be called "technical analysis", because TA is something else.

    • By AnotherGoodName 2026-01-20 17:57

      I'll also point out there were insane takes a few years ago before nVidia's run up based on similar technical analysis and very limited scope fundamental analysis.

      Technical analysis fails completely when there's an underlying shift that moves the line. You can't look at the past and say "nvidia is clearly overvalued at $10 because it was $3 for years earlier" when they suddenly and repeatedly 10x earnings over many quarters.

      I couldn't get through to the idiots on reddit.com/r/stocks about this when there was non-stop negativity on nvidia based on technical analysis and very narrow-scoped fundamental analysis. They showed a 12x gain in quarterly earnings at the time, but the PE (which looks at past quarters only) was 260x due to this sudden change in earnings, and pretty much all of reddit couldn't get past this.

      I did well on this yet there were endless posts of "Nvidia is the easiest short ever" when it was ~$40 pre-split.

    • By KeplerBoy 2026-01-20 16:51 · 3 replies

      Also there's no way Nvidia's market share isn't shrinking. Especially in inference.

      • By gpapilion 2026-01-20 17:09 · 1 reply

        The large api/token providers and large consumers are all investing in their own hardware. So, they are in an interesting position where the market is growing, and NVIDIA is taking the lion's share of enterprise, but is shrinking on the hyperscaler side (Google is a good example as they shift more and more compute to TPUs). So, they have a shrinking market share, but it's not super visible.

        • By zozbot234 2026-01-20 18:04

          > The large api/token providers, and large consumers are all investing in their own hardware.

          Which is absolutely the right move when your latest datacenter's power bill is literally measured in gigawatts. Power-efficient training/inference hardware simply does not look like a GPU at a hardware design level (though admittedly, it looks even less like an ordinary CPU), it's more like something that should run dog slow wrt. max design frequency but then more than make up for that with extreme throughput per watt/low energy expense per elementary operation.

          The whole sector of "neuromorphic" hardware design has long shown the broad feasibility of this (and TPUs are already a partial step in that direction), so it looks like this should be an obvious response to current trends in power and cooling demands for big AI workloads.

      • By dogma1138 2026-01-20 17:14

        Market share can shrink but if the TAM is growing you can still grow.

      • By blackoil 2026-01-20 17:05

        But will the whole pie grow or shrink?

    • By baxtr 2026-01-20 17:16 · 3 replies

      I'm no AI fanboy at all. I think there won't be AGI anytime soon.

      However, it’s beyond my comprehension how anyone would think that we will see a decline in demand growth for compute.

      AI will conquer the world like software or the smartphone did. It’ll get implemented everywhere, more people will use it. We’re super early in the penetration so far.

      • By Ekaros 2026-01-20 17:22 · 2 replies

        At this point computation is in essence a commodity. And commodities have demand cycles. If other economic factors slow down or companies go out of business, they stop using compute or start fewer new products that use compute. Thus it is entirely realistic to me that demand for compute might go down. Or that we are just now over-provisioning compute in the short or medium term.

        • By galaxyLogic 2026-01-20 17:44 · 3 replies

          I wonder, is the quality of AI answers going up over time or not? Last weekend I spent a lot of time with Perplexity trying to understand why my SeqTrack device didn't do what I wanted it to do, and it seems Perplexity had a wrong idea of how the buttons on the device are laid out, so it gave me wrong or confusing answers. I spent literally hours trying to feed it different prompts to get an answer that would solve my problem.

          If it had given me the right, easy-to-understand answer right away, I would have spent 2 minutes of both MY time and ITS time. My point is, if AI improves, we will need less of it to get our questions answered. Or perhaps AI usage goes up if it improves its answers?

          • By jama211 2026-01-20 18:10

            Always worth trying a different model, especially if you’re using a free one. I wouldn’t take one data point too seriously either.

            The data is very strongly showing the quality of AI answers is rapidly improving. If you want a good example, check out the Sixty Symbols video by Brady Haran, where they revisited getting AI to answer a quantum physics exam after trying the same thing 3 years ago. The improvement is IMMENSE and unavoidable.

          • By zozbot234 2026-01-20 17:53 · 1 reply

            If the AI hasn't specifically learned about SeqTracks as part of its training it's not going to give you useful answers. AI is not a crystal ball.

            • By SchemaLoad 2026-01-20 22:19 · 2 replies

              The problem is its inability to say "I don't know". As soon as you reach the limits of the model's knowledge it will readily start fabricating answers.

              • By galaxyLogic 2026-01-20 23:11

                Both true. Perplexity knows a lot about SeqTrack, I assume it has read the UserGuide. But some things it gets wrong, seems especially things it should understand by looking at the pictures.

                I'm just wondering if there's a clear path for it to improve, and on what timetable. The fact that it does not tell you when it is "unsure" of course makes things worse for users. (It is never unsure.)

              • By CamperBob2 2026-01-21 0:22

                That's nowhere near as true as it was as recently as a year ago.

          • By lorddumpy 2026-01-20 21:33

            With vision models (SOTA models like Gemini and ChatGPT can do this), you can take a picture/screenshot of the button layout, upload it, and have it work from that. Feeding it current documentation (eg a pdf of a user manual) helps too.

            Referencing outdated documentation or straight-up hallucinating answers is still an issue. It is getting better with each model release, though.

        • By wordpad 2026-01-20 17:32 · 1 reply

          So...like Cisco during dot com bust?

          • By Ekaros 2026-01-20 17:38

            I was thinking more of oil, copper, and now silver. Prices for all of them follow demand. All have had varying prices at different times. Compute should not really be that different.

            But yes. Cisco's value dropped when there was not the same amount to spend on networking gear. Nvidia's value will drop when there is not the same amount of spend on their gear.

            Other impacted players in an actual economic downturn could be Amazon with AWS and MS with Azure. And even more so those now betting on AI computing. At least general-purpose computing can run web servers.

      • By marricks 2026-01-20 17:21 · 2 replies

        > I'm no AI fanboy at all.

        While thinking computers will replace human brains soon is rabid fanaticism, this statement...

        > AI will conquer the world like software or the smartphone did.

        Also displays a healthy amount of fanaticism.

        • By jwoods19 2026-01-20 18:11

          Even suggesting that computers will replace human brains brings up a moral and ethical question. If the computer is just as smart as a person, then we need to potentially consider that the computer has rights.

          As far as AI conquering the world. It needs a "killer app". I don't think we'll really see that until AR glasses that happen to include AI. If it can have context about your day, take action on your behalf, and have the same battery life as a smartphone...

        • By xenospn 2026-01-20 18:27 · 1 reply

          I don’t see this as fanaticism at all. No one could predict a billion people mindlessly scrolling TikTok in 2007. This is going to happen again, only 10x: faster and more addictive, with content generated on the fly to be so addictive you won’t be able to look away.

      • By Ronsenshi 2026-01-20 17:36 · 1 reply

        What if its penetration ends up being on the same level as modern crypto? The average person doesn't seem to particularly care about meme coins or bitcoin; it is not being actively used in a day-to-day setting, and there are no signs of this status improving.

        Doesn't mean that crypto is not being used, of course. Plenty of people do use things like USDT, gamble on bitcoin, or try to scam people with new meme coins, but this is far from what crypto enthusiasts and NFT moguls promised us in their feverish posts back in the mid-2010s.

        So imagine that AI is here to stay, but the absolutely unhinged hype train slows down and we settle into some kind of equilibrium of practical use.

        • By infecto 2026-01-20 17:46 · 1 reply

          I have still been unable to see how folks connect AI to Crypto. Crypto never connected with real use cases. There are some edge cases and people do use it but there is not a core use.

          AI is different and businesses are already using it a lot. Of course there is hype, it’s not doing all the things the talking heads said but it does not mean immense value is not being generated.

          • By Ronsenshi 2026-01-20 18:13 · 2 replies

            It's an analogy, it doesn't have to map 1:1 to AI. The point is that the current situation around AI looks kind of similar to the situation and level of hype around crypto when it was still growing: all the "ledger" startups, promises of decentralization, NFTs in video games, and so on. We are somewhere around that point when it comes to AI.

            • By infecto 2026-01-21 0:57

              No, it’s an absolutely ridiculous comparison that people continue to make, even though AI has blown well past the usefulness of crypto, and at an alarming speed. AI has unlocked so many projects my team would never have tackled before.

            • By lorddumpy 2026-01-20 21:38 · 2 replies

              I agree about all the startups, but AI is already much more useful in everyday tasks vs crypto.

              Eg: A chatbot assistant is much more tangible to the regular joe than blockchain technology

              • By Ronsenshi 2026-01-21 6:49

                I agree that AI is much more useful than crypto ever was, but it's not as useful as AI hype valuation would like to paint it.

              • By theshackleford 2026-01-21 6:41

                Anecdotally, many non-technical users or "regular joes" as it were that I know who were very enthusiastic about AI a year ago are now disengaging. With the rate really picking up the last couple of months.

                Their usage has declined primarily with OpenAI and Gemini tools, no one has mentioned Anthropic based models but I don't think normies know they exist honestly.

                The disengagement seems to be that with enough time and real-world application, the shortcomings have become more noticeable, and the patience they once had for incorrect or unreliable output has effectively evaporated. In some cases, to the point where it's starting to outweigh any gains they get.

                Not all of the normies I know, to be fair, but a surprising number given the strange period of quiet in between "This is amazing!" and "eh, it's not as good as I thought it was at first."

    • By amelius 2026-01-21 12:44

      > My 30k ft view is that the stock will inevitably slide as AI datacenter spending goes down.

      This is like saying Apple stock will inevitably slide once everybody owns a smartphone.

    • By ramijames 2026-01-21 7:13

      This seems to take for granted that China and their foundries and engineering teams will never catch up. This seems foolish. I'm working under the assumption that sometime in the next ten years some Chinese company will have a breakthrough and either meet Nvidia's level or leapfrog them. Then the market will flood with great, cheap chips.

    • By m12k 2026-01-20 22:13 · 1 reply

      I think the way to think about the AI bubble is that we're somewhere in 97-99 right now, heading toward the dotcom crash. The dotcom crash didn't kill the web, it kept growing in the decades that followed, influencing society more and more. But the era where tons of investments were uncritically thrown at anything to do with the web ended with a bang.

      When the AI bubble bursts, it won't stop the development of AI as a technology. Or its impact on society. But it will end the era of uncritically throwing investments at anyone that works "AI" into their pitch deck. And so too will it end the era of Nvidia selling pickaxes to the miners and being able to reach soaring heights of profitability born on wings of pretty much all investment capital in the world at the moment.

      • By enos_feedler 2026-01-20 22:17

        Bubble or not, it’s simply strange to me that people confidently put a timeline on it. Naming the phases of the bubble and calling when they will collapse just seems counterintuitive to what a bubble is. Brad Gerstner was the first “influencer” I heard making these claims of a bubble timeline. It just seems downright absurd.

    • By cortesoft 2026-01-20 18:45 · 1 reply

      > The problem I have with this argument is that it's simply unsustainable to be spending that much every 2-3 years

      Isn’t this entirely dependent on the economic value of the AI workloads? It all depends on whether AI work is more valuable than that cost. I can easily see arguments why it won’t be that valuable, but if it is, then that cost will be sustainable.

      • By alfalfasprout 2026-01-20 18:53 · 1 reply

        100% this. All of this spending is predicated on a stratospheric ROI on AI investments at the proposed investment levels. If that doesn't pan out, we'll see a lot of people left holding the bag, including chip fabs, designers like Nvidia, and of course anyone that ponied up for that much compute.

        • By jiggawatts 2026-01-20 22:49

          Chip fabs will be fine. The demand for high end processors will remain because of the likes of Apple and AMD.

    • By richardw 2026-01-20 19:39 · 1 reply

      I’m sad about Grok going to them, because the market needs the competition. But ASIC inference seems to require a simpler design than training does, so it’s easier for multiple companies to enter. It seems inevitable that competition emerges. And eg a Chinese company will not be sold to Nvidia.

      What’s wrong with this logic? Any insiders willing to weigh in?

      • By bigyabai 2026-01-20 19:55 · 1 reply

        I'm not an insider, but ASICs come with their own suite of issues and might be obsolete if a different architecture becomes popular. They'll have a much shorter lifespan than Nvidia hardware in all likelihood, and will probably struggle to find fab capacity that puts them on equal footing in performance. For example, look at the GPU shortage that hit crypto despite hundreds of ASIC designs existing.

        The industry badly needs to cooperate on an actual competitor to CUDA, and unfortunately they're more hostile to each other today than they were 10 years ago.

        • By zozbot234 2026-01-20 23:32

          You can build ASICs to be a lot more energy efficient than current GPUs, especially if your power budget is heavily bound by raw compute as opposed to data movement bandwidth. The tradeoff is much higher latency for any given compute throughput, but for workloads such as training or even some kinds of "deep thinking inference" you don't care much about that.

    • By pjmlp 2026-01-22 15:56

      Even though I like CUDA, I think the question is when compute centers reach the point that they can run their workloads on other vendors' hardware, or custom accelerators.

    • By jwoods19 2026-01-20 17:05 · 2 replies

      “In a gold rush, sell shovels”… Well, at some point in the gold rush everyone already has their shovels and pickaxes.

      • By krupan 2026-01-20 17:13 · 1 reply

        Or people start to realize that the expected gold isn't really there and so stop buying shovels

      • By gopher_space 2026-01-20 18:17 · 1 reply

        The version I heard growing up was "In a gold rush, sell eggs."

        • By FergusArgyll 2026-01-20 19:28

          Selling jeans is the one that actually worked

    • By WalterBright 2026-01-20 23:00

      > technical analysis of the stock

      AKA pictures in clouds

    • By throwaway85825 2026-01-20 22:26

      It's not flat growth that's currently priced in, but continuing high growth. Which is impossible.

    • By kqr 2026-01-20 19:04 · 3 replies

      Fundamental analysis is great! But I have trouble answering concrete questions of probability with it.

      How do you use fundamental analysis to assign a probability to Nvidia closing under $100 this year, and what probability do you assign to that outcome?

      I'd love to hear your reasoning around specifics to get better at it.
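      (For comparison, the article's binomial-walk machinery answers this question mechanically, under strong assumptions: each day the price moves up by e^sigma or down by e^-sigma with equal probability, and both the daily sigma — 3% here — and the 250-day horizon are plugged in by hand. A sketch:)

```python
from math import comb, log

def prob_below(spot, threshold, sigma, days):
    # After `days` steps with k up-moves, price = spot * exp((2k - days) * sigma),
    # so price < threshold  <=>  k < (days + log(threshold / spot) / sigma) / 2.
    cutoff = (days + log(threshold / spot) / sigma) / 2
    # symmetric walk (assumption): each of the 2^days paths is equally likely
    return sum(comb(days, k) for k in range(days + 1) if k < cutoff) / 2 ** days

# made-up inputs: $184 spot, $100 threshold, 3% daily vol, 250 trading days
print(round(prob_below(184, 100, 0.03, 250), 3))
```

      The answer is only as good as the sigma you feed in, which is exactly the gap fundamental analysis would have to fill.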

      • By esafak 2026-01-20 19:47 · 1 reply

        Don't you need a model for how people will react to the fundamentals? People set the price.

        • By kqr 2026-01-20 21:11

          Possibly? I don't know -- hence the question!

          GP was presenting fundamental analysis as an alternative to the article's method for answering the question, but then never answered the question.

          This is a confusion I have around fundamental analysis. Some people appear to do it very well (Buffett?) but most of its proponents only use it to ramble about possibilities without making any forecasts specific enough to be verifiable.

          I'm curious about that gap.

      • By djeastm 2026-01-20 21:20

        I think the idea of fundamental analysis is that you focus on return on equity and see if that valuation is appreciably more than the current price (as opposed to assigning a probability).

    • By cheschire 2026-01-20 18:07

      Well, not to be too egregiously reductive… but when the M2 money supply spiked in the 2020 to 2022 timespan, a lot of new money entered the middle class. That money was then funneled back into the hands of the rich through “inflation”. That left the rich with a lot of spare capital to invest in finding the next boom. Then AI came along.

      Once the money dries up, a new bubble will be invented to capture the middle class income, like NFTs and crypto before that, and commissionless stocks, etc etc

      It’s not all pump-and-dump. Again, this is a pretty reductive take on market forces. I’m just saying I don’t think it’s quite as unsustainable as you might think.

    • By stego-tech 2026-01-20 18:46

      Add in the fact companies seriously invested in AI (and like workloads typically reliant on GPUs) are also investing more into bespoke accelerators, and the math for nVidia looks particularly grim. Google’s TPUs set them apart from the competition, as does Apple’s NPU; it’s reasonable to assume firms like Anthropic or OpenAI are also investigating or investing into similar hardware accelerators. After all, it’s easier to lock-in customers if your models cannot run on “standard” kit like GPUs and servers, even if it’s also incredibly wasteful.

      The math looks bad regardless of which way the industry goes, too. A successful AI industry has a vested interest in bespoke hardware to build better models, faster. A stalled AI industry would want custom hardware to bring down costs and reduce external reliance on competitors. A failed AI industry needs no GPUs at all, and an inference-focused industry definitely wants custom hardware, not general-purpose GPUs.

      So nVidia is capitalizing on a bubble, which you could argue is the right move under such market conditions. The problem is that they’re also alienating their core customer base (smaller datacenters, HPC, gaming market) in the present, which will impact future growth. Their GPUs are scarce and overpriced relative to performance, which itself has remained a near-direct function of increased power input rather than efficiency or meaningful improvements. Their software solutions - DLSS frame-generation, ray reconstruction, etc - are locked to their cards, but competitors can and have made equivalent-performing solutions of their own with varying degrees of success. This means it’s no longer necessary to have an nVidia GPU to, say, crunch scientific workloads or render UHD game experiences, which in turn means we can utilize cheaper hardware for similar results. Rubbing salt in the wound, they’re making cards even more expensive by unbundling memory and clamping down on AIB designs. Their competition - Intel and AMD primarily - are happily enjoying the scarcity of nVidia cards and reaping the fiscal rewards, however meager they are compared to AI at present. AMD in particular is sitting pretty, powering four of the five present-gen consoles, the Steam Deck (and copycats), and the Steam Machine, not to mention outfits like Framework; if you need a smol but capable boxen on the (relative) cheap, what used to be nVidia + ARM is now just AMD (and soon, Intel, if they can stick the landing with their new iGPUs).

      The business fundamentals paint a picture of cannibalizing one’s evergreen customers in favor of repeated fads (crypto and AI), and years of doing so has left those customer markets devastated and bitter at nVidia’s antics. Short of a new series of GPUs with immense performance gains at lower price and power points, with availability to meet demand, my personal read is that this is merely Jensen Huang’s explosive send-off before handing the bag over to some new sap (and shareholders) once the party inevitably ends, one way or another.

    • By bArray 2026-01-20 18:38

      > My 30k ft view is that the stock will inevitably slide as AI datacenter spending goes down. Right now Nvidia is flying high because datacenters are breaking ground everywhere but eventually that will come to an end as the supply of compute goes up.

      Exactly, it is currently priced as though infinite GPUs are required indefinitely. Eventually most of the data centres and the gamers will have their GPUs, and demand will certainly decrease.

      Before that, though, the data centres will likely fail to be built in full. Investors will eventually figure out that LLMs are still not profitable, no matter how many data centres you build. People are only interested in the derived products at a lower price than it costs to run them. The math ain't mathin'.

      The longer it takes to get them all built, the more exposed they all are. Even if it turns out to be profitable, taking three years to build a data centre rather than one year is significant, as profit for these high-tech components falls off over time. And how many AI data centres do we really need?

      I would go further and say that these long and complex supply chains are quite brittle. In 2019, a 13 minute power cut caused a loss of 10 weeks of memory stock [1]. Normally, the shops and warehouses act as a capacitor and can absorb small supply chain ripples. But now these components are being piped straight to data centres, they are far more sensitive to blips. What about a small issue in the silicon that means you damage large amounts of your stock trying to run it at full power through something like electromigration [2]. Or a random war...?

      > The counterargument to this is that the "economic lifespan" of an Nvidia GPU is 1-3 years depending on where it's used so there's a case to be made that Nvidia will always have customers coming back for the latest and greatest chips. The problem I have with this argument is that it's simply unsustainable to be spending that much every 2-3 years and we're already seeing this as Google and others are extending their depreciation of GPU's to something like 5-7 years.

      Yep. Nothing about this adds up. Existing data centres with proper infrastructure are being forced to extend use for previously uneconomical hardware because new data centres currently building infrastructure have run the price up so high. If Google really thought this new hardware was going to be so profitable, they would have bought it all up.

      [1] https://blocksandfiles.com/2019/06/28/power-cut-flash-chip-p...

      [2] https://www.pcworld.com/article/2415697/intels-crashing-13th...

    • By jpadkins 2026-01-20 20:44

      How much did you short the stock?

    • By clownpenis_fart 2026-01-20 17:13

      [dead]

  • By reflexe 2026-01-20 18:48 · 3 replies

    According to nvidia’s 2025 annual report [1], 34% of their sales for 2025 comes from just 3 customers.

    Additionally, they mentioned that customers can cancel purchases with little to no penalty and notice [2].

    This is not unique to hardware companies, but it means all it takes is one company cutting back to bring their sales down by 12% ($14B).

    To cut to the point, my guess is that Nvidia's revenue is not sustainable, and at some point one or more of these big customers won't be able to keep up with the big orders, which will cause Nvidia to miss their earnings, and then it will burst. But maybe I'm wrong here.

    [1] https://s201.q4cdn.com/141608511/files/doc_financials/2025/a..., page 155: > Sales to direct Customers A, B and C represented 12%, 11% and 11% of total revenue, respectively, for fiscal year 2025.

    [2] same, page 116: > Because most of our sales are made on a purchase order basis, our customers can generally cancel, change, or delay product purchase commitments with little notice to us and without penalty.

    • By smw 2026-01-20 19:44 · 1 reply

      I have lots of skepticism about everything involved in this, but on this particular point:

      It's a bit like TSMC: you couldn't buy space on $latestGen fab because Apple had already bought it all. Many companies would have very much liked to order H200s and weren't able to, as they were all pre-sold to hyperscalers. If one of them stopped buying, it's very likely they could sell to other customers, though there might be more administrative overhead?

      Now there are some interesting questions about Nvidia creating demand by investing huge amounts of money in cloud providers that will order nv hardware, but that's a different issue.

      • By CoolestBeans 2026-01-20 21:24

        It's probably not very likely that if a large buyer pulled out, NVIDIA could just sell to other customers. If a large buyer pulls out, that's a massive signal to everyone else to begin cutting costs as well. The large buyer either knows something everyone else doesn't, or knows something that everyone else has already figured out. Either way, the large buyer pulling out signals "I don't think the overall market is large enough to support this amount of compute at these prices at current interest rates," and everybody is doing the same math too.

    • By DrProtic 2026-01-21 18:10

      Or they might build another factory and fulfill all the orders they were previously unable to fulfill and increase their share even more.

      Or the US administration suddenly allows exports of top-tier chips to China and they get more whales on their order book.

      It's all guesswork; that's why their share price is high.

    • By blindriver 2026-01-21 4:51 · 1 reply

      None of those customers can afford to cancel their orders. OpenAI, Google and Meta cannot afford to get cheap on GPUs when presumably they believe AGI is around the corner. The first company to achieve AGI will win, because at that point all gains will become exponential.

      All the AI companies are locked in a death loop where they must spend as much money as possible, otherwise everything they invested will immediately become zero. No one is going to pay for an LLM when the competitor has AGI. So it's a death loop for everyone that has become involved in this race.

      • By cake-rusk 2026-01-21 14:15 · 1 reply

        I don't know why you are being downvoted. What you said makes sense to me but I understand I know very little about how companies think. Can someone with a differing point of view elaborate?

        • By reflexe 2026-01-21 22:08

          No idea why the downvotes, these are valid points. I still don’t fully agree with it:

          1. There are alternatives to Nvidia: these 3 companies are probably developing their own alternatives to NVIDIA; at some point they will switch to their own solutions or to competitors (for example: Google used TPUs to train Gemini 3 [1], with no Nvidia GPUs, despite being a pretty large Nvidia customer).

          2. The market seems to be consolidating: for example, Apple has decided to use Google Gemini for their new Siri [2]. I'm not an expert (or fortune teller), but I think it increases the chance that other companies might follow and get out of the AI race.

          3. I am sure that OpenAI and related companies would want to sustain these kinds of orders, but I am not sure it is possible without more and more funding, and I don't know if even Sam himself knows how to estimate how many GPUs they will be able to buy from Nvidia in 2026.

          [1] https://x.com/JeffDean/status/1886852442815652188

          [2] https://blog.google/company-news/inside-google/company-annou...

  • By NewCzech 2026-01-20 16:37 · 3 replies

    He doesn't really address his own question.

    He's answering the question "How should options be priced?"

    Sure, it's possible for a big crash in Nvidia just due to volatility. But in that case, the market as a whole would likely be affected.

    Whether Nvidia specifically takes a big dive depends much more on whether they continue to meet growth estimates than general volatility. If they miss earnings estimates in a meaningful way the market is going to take the stock behind the shed and shoot it. If they continue to exceed estimates the stock will probably go up or at least keep its present valuation.

    • By Shocka1 2026-01-20 21:55

      I've been selling options for almost a decade now, including running trading algorithms, and was laughing a bit to myself because it was basically just the math in an everyday option chain. As you already know, anyone can look at the strike they are talking about, with the IV already cooked into it, on platforms like Think Or Swim or even Yahoo Finance. Some of the stuff can be pretty useful though in backtesting and exploration.

      All that aside, I'm impressed it made it to the HN front page.

    • By dsr_ 2026-01-20 16:49

      > Sure, it's possible for a big crash in Nvidia just due to volatility. But in that case, the market as a whole would likely be affected.

      Other way around: if NVidia sinks, it likely takes a bunch of dependent companies with it, because the likely causes of NVidia sinking all tell us that there was indeed an AI bubble and it is popping.

    • By weslleyskah 2026-01-20 17:29 · 1 reply

      Indeed, the market as a whole would be affected. But is not NVIDIA more of a software company than a hardware one? This bugs the shit out of me.

      They are maintaining this astronomical growth through data center margins from the design of their chips, and all of that started from graphics related to video games.

      • By coffeebeqn 2026-01-20 17:35 · 1 reply

        > But is not NVIDIA more of a software company than a hardware one?

        No? That’s why they have almost no competition. Hardware starting costs are astronomical

        • By weslleyskah 2026-01-20 17:42 · 2 replies

          But the actual manufacturing foundry is TSMC, no? And they create the whole software environment based on their chips.

          • By mr_toad 2026-01-21 5:48

            Apple doesn’t make hardware either, but they make a lot of money selling it.

          • By immibis 2026-01-20 20:20

            It costs eight figures to create the masks (patterns) to use in the process of creating a modern chip. Just because it doesn't cost the eleven figures of the factory itself doesn't make it cheap.

HackerNews