During the 2008 financial crisis the housing bubble burst because of mortgages to unqualified borrowers, complex financial products like mortgage-backed securities (MBS), and lax lending standards. In other words, a faulty system that depended on us, the people, paying off mortgages and loans that were in fact not being paid back. Seems a logical cause for a bubble to burst.
Before that, the dotcom bubble burst because of extreme overvaluation of companies that were not performing up to expectations, so the revenue simply wasn't there.
Now there is obviously an AI bubble, as has been pointed out many, many times, but currently the companies involved are still meeting their expected revenue goals (looking at NVIDIA, Meta, and Google; even though the latter is not strictly an AI company, its current valuation is also driven by its AI developments). Of course, investing in each other and buying each other's products, causing stocks to rise, is super inflationary, but so far it is not being punished. It seems.
Now, a geopolitical conflict that left a certain chipmaker unable to produce would likely pop the bubble overnight. Given the current geopolitical situation and the people involved, this is not unlikely in the coming years. But as long as it doesn't happen, it appears to be business as usual, and the AI race will continue.
Comparing this to earlier bubbles, the pattern is similar. An industry is pumped to the moon, a bunch of people make an insane amount of money, the bubble bursts, and most people get screwed over with a few winners. The question is always: how high will it go while the companies are profitable, and how deep will the lows be?
As a retail investor who is not trading daily and just has a regular 9-to-5 job, this situation is extremely difficult to predict. I know I won't be able to predict it, so it becomes a risk analysis: will current valuations turn out to be the future lows, or are big companies with P/E ratios of 50 already a selling sign for the retail investor? This even applies to ETFs like VWRL, since their NVDA weighting is also high. The whole market will likely go down when this bubble bursts, just some companies more than others. Given the earlier arguments, I feel like going short here is stupid. On top of that, world governments are hedging against inflation (buying loads of gold), which also has geopolitical implications. I believe in the mantra that time in the market beats timing the market, but probably needing the money in 3 years or so makes the current situation a spicy sauce. It seems like hedging against inflation (e.g. buying gold and funds like Berkshire) is not a bad move.
I keep hearing it will be, it won't be, it might be...
What are the chances? Putting money in seems aggressive without money coming out.
I feel the real winners of the AI boom won't be the AI companies themselves, but the infrastructure that supports them. Instead of betting on long-term, uncertain plays like nuclear or small modular reactors, I'm leaning toward power grids and cooling systems. It's less about hype and more about owning the essential backbone that everything depends on, and about being first in line for orders. My picks are SMCI, BE, NVT, NBIS; what do you guys think?
I scroll through Reddit and post after post calls AI a bubble. I open Bloomberg or NYT and read an article about the AI bubble. I go out to dinner with friends and they’re convinced AI is a bubble. My favorite podcasters and YouTubers hem and haw about the AI bubble.
It appears the entire world is convinced AI is a bubble. If this is the prevailing sentiment, why would this not be priced in?
I thought the hallmark of a bubble was that it by and large wasn't the prevailing sentiment and wasn't expected.
Edit: Thanks everyone! I’ve decided to double down on AI. The bubble is already priced in.
I'm sorry for my short-sightedness, and I understand that a lot of things right now are AI related. But what would be the tangible impact (other than job loss, I think) that would cause large-scale issues if the AI bubble burst?
TIA.
https://www.commondreams.org/news/ocasio-cortez-ai-bailout
Seeing an increase in "what if the AI bubble pops" posts lately. So I did some digging. The oldest post I could find on the question was two years ago (link below). Since then, VTI has grown 42% and VOO 47%.
Those who stayed on the sidelines or sold out of fear missed out on incredible growth. Understand the recency bias at play today. It's possible there's a bubble. It's possible a correction is coming. No one knows its timing, depth, or breadth. We Bogle because even those corrections are compensated by periods like the last two years. Staying out of the market means you might miss the bad times, but you're definitely going to miss the good times.
https://www.reddit.com/r/Bogleheads/s/2gAsAlkWEj
People keep comparing today’s AI market to the Dotcom bubble, but the structure is fundamentally different. Back then, the market was dominated by hundreds of small, non-viable companies with no revenue and no real product. Today, the core of the AI build-out is driven by the most profitable, cash-rich companies on the planet: Microsoft, Google, Amazon, Apple, Meta, NVIDIA, Broadcom, and the hyperscalers. These firms have actual products, real demand, and business models that already scale.
What is similar to the Dotcom era is the valuation stretch and the expectation curve. We are in a CapEx Supercycle where hyperscalers are pouring unprecedented amounts of money into GPUs, data centers, power infrastructure, and model development. This phase cannot grow linearly forever. At some point, build-out slows, ROI expectations tighten, and the market will reprice.
When that happens, here’s what to expect:
Winners: diversified hyperscalers, cloud platforms, chip manufacturers with real moats, and software ecosystems that can monetize AI at scale.
Survivors but volatile: model labs, foundation model vendors, and second-tier hardware companies that depend on hyperscaler demand cycles.
Casualties: AI “feature startups,” companies without defensible tech, firms relying on perpetual GPU scarcity, and anything whose valuation implies perfect execution for a decade.
This isn't a bubble waiting to burst into nothingness but a massive, front-loaded investment cycle that will normalize once infrastructure saturation and cost pressures kick in. The technology is real, the demand is real, and the winners will be even larger, but the path there won't be a straight line.
Edit: Thank you all very much for your posts and discussion. This seems to be a very controversial topic, but this is also something where everyone can learn.
Everyone is talking about the fabled "AI Bubble," but I see a much different problem altogether. Here's what I see, in phases. Just my DD.
Phase 1: AI demand will keep exploding for years, like a river that keeps widening no matter how many dams you build. Look at the Micron news recently, or the Genesis Executive Order. The ruling class and the stock market will surf the early AI wave, even if the economy is doing crap (it is).
Phase 2: Job cuts are more apparent as college grads already can’t get entry level jobs. Right now, AI layoffs feel like cutting “cost fat,” but if they keep going, you’re actually sawing into the bone that keeps the body alive. Who is going to buy shit from Amazon when they are broke? Who will buy a house when they can’t establish credit or a down payment? You see how this trickles?
Phase 3: Structural unemployment doesn’t bounce back… AI is on track to genuinely replace occupations in many industries. I’m talking entry level positions in finance, HR, hospitality, etc, etc. Once AI can reliably handle tasks like entry-level coding, customer support, basic analysis, HR screening, or content production, companies redesign workflows around machines instead of humans, so those roles don’t come back in the same numbers or at the same pay. Over time, this creates a gap: a huge pool of workers trained for jobs that no longer exist, while new AI-era jobs (prompt engineers, system architects, AI ops, etc.) require skills, credentials, or geography that many of them don’t have. What bailout can the Fed provide if we happen to have thousands of graduates in debt that also don’t have a career option in their major?
*This is kind of a doomer post, so I want to reiterate I may be and hopefully am wrong. I hope this country can thrive, but I think this AI revolution upon us will have some sour aftertastes coming.
Regardless of what you think about the tech behind AI (given what sub this is, I can safely assume that most people here are deeply sceptical), you can do some simple math to show why the spending on AI has to blow up. Regardless of whether or not the AI industry becomes profitable (it's not anywhere close currently), it is almost impossible to justify the current spending on the AI bubble. Note: there are really two aspects of the AI bubble: (1) a bunch of start-ups with no path to profitability, and (2) insanely irresponsible capex spending on data centers by big tech. I am only really focusing on the latter in this post, because it is what has turned the AI bubble from an industry problem into a systemic risk.
First, just ask the question: how much revenue would it take to justify the capex spending on AI data centers? I'll use ballpark round numbers for 2025 to make my point, but I think these numbers are directionally correct. In 2025 there has been an expected 400 billion dollars of capex spending on AI data centers. An AI data center is a rapidly depreciating asset: the chips become obsolete in 1-3 years, cooling and other ancillary systems last about 5 years, and the building itself becomes obsolete in about 10 years due to changing layouts caused by frequent hardware innovations. I'll average this out and say a data center depreciates almost all of its value in 5 years. Which means the AI data centers of 2025 depreciate by 80 billion dollars every year.
How much profit do AI companies need to make in order to justify this cost? I'll be extremely generous and say that AI companies will actually become profitable soon, with a gross margin of 25%. Why 25%? I don't know; it just seems like a reasonable number for an asset-heavy industry. Note: the AI industry actually has a gross margin of about -1900% as of 2025, so, like I said, I am being very generous with my math here. Assuming a 25% gross margin, the AI industry needs to earn 320 billion dollars in revenue just to break even on the data center build-out of 2025. Just 2025, by the way. This is not accounting for the data centers of 2024 or 2026.
Let's assume that in 2026 there is twice the capex spend on data centers as in 2025. That means, again assuming this actually becomes profitable, the AI industry will need close to a trillion dollars in revenue just to break even on two years of capex spending. What if there is even more capex spending in 2027 or '28?
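If you want to sanity-check that arithmetic yourself, here it is as a few lines of Python. All the inputs are the same ballpark assumptions from above, not real data:

```python
# Rough reconstruction of the break-even math above (ballpark figures from the post).
capex_2025 = 400e9          # assumed 2025 AI data center capex, in dollars
depreciation_years = 5      # blended useful life assumed above
gross_margin = 0.25         # generously assumed margin

annual_depreciation_2025 = capex_2025 / depreciation_years           # $80B/year
revenue_to_cover_2025 = annual_depreciation_2025 / gross_margin      # $320B/year

capex_2026 = 2 * capex_2025                                          # assumed doubling in 2026
annual_depreciation_2026 = capex_2026 / depreciation_years           # $160B/year
revenue_to_cover_both = (annual_depreciation_2025 + annual_depreciation_2026) / gross_margin

print(f"Revenue to cover the 2025 build-out alone: ${revenue_to_cover_2025 / 1e9:.0f}B/year")
print(f"Revenue to cover the 2025+2026 build-outs: ${revenue_to_cover_both / 1e9:.0f}B/year")  # ~$960B
```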
In conclusion, even assuming that AI becomes profitable in the near term, it will rapidly become impossible to justify the spending that is being done on data centers. The AI industry as a whole will need to be making trillions of dollars a year in revenue by 2030 to justify the current build-out. If the industry is still unprofitable by 2030, it will probably become literally impossible to ever recoup the spending on data centers. This is approaching the point where even the US government can't afford to waste that much money trying to save this sinking ship.
Hey y'all! This will be a dump of some ideas and thoughts I had on various arguments I've heard regarding everyone's favorite topic: the AI bubble.
tl;dr: Not sure about stock prices, but the tech supporting them is solid, and profitability may come faster than expected.
Disclaimer: I'm just a random dude who mostly buys VT (and won't change that), who happens to work in an AI lab now (top 500) and worked at a cloud company before (top 50), but not one of the hyped companies. I started my AI journey ~5 years before ChatGPT was first released. I have zero predictive power about whether we're in a bubble, and won't touch P/E ratios and whatnot, but hopefully I know a bit more about running LLMs than <insert consulting company of choice> :)
First: GPU lifetime. Despite Jensen Huang's jokes (1), new GPUs (aka "AI accelerators") are not only good for 1 year. Many cloud companies still have and offer V100 GPUs, which were released in 2018 and are worse in every metric than RTX 5090 GPUs. The V100 might be getting old, but the slightly newer models (L40S, A100, ...) are still very popular, to the point that I've hit AWS availability limits when trying to use them at work. So the systems that house these GPUs remain useful for many years, partly due to insane demand, and partly because of their usefulness (more on that later).
Second: Running costs - GPU compute & power consumption. Despite Sam Altman losing money on $200 subscriptions (2), the cost of creating (training) and running (inference) AI models (mainly Large Language Models, LLMs) is dropping way faster than most people realize. Jensen's joke definitely had some truth to it. No matter what metric you use (bandwidth, TFLOPS, VRAM, tokens), the H100 was a huge leap over the A100, both in the raw metric and the power-normalized metric. A B200 is another step upwards in both raw metrics and efficiency. The VRAM-related gains in particular are the biggest improvement, because they mean inference and training become that much faster (relevant term: batching).
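To illustrate what "power-normalized" means, here is a tiny sketch using approximate spec-sheet numbers (FP16 tensor throughput without sparsity, SXM parts); treat them as ballpark figures I'm assuming, not exact values:

```python
# Rough perf-per-watt comparison using approximate public spec-sheet numbers.
# Exact figures vary by source and configuration; this only illustrates the idea.
gpus = {
    "A100 SXM": {"fp16_tflops": 312, "tdp_watts": 400},
    "H100 SXM": {"fp16_tflops": 990, "tdp_watts": 700},
}

for name, spec in gpus.items():
    perf_per_watt = spec["fp16_tflops"] / spec["tdp_watts"]
    print(f"{name}: ~{perf_per_watt:.2f} TFLOPS per watt")

# Raw throughput roughly triples, and even per watt the newer part is well ahead,
# which is why a "faster but hungrier" chip can still lower the cost per token.
```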
Third: Running costs - algorithmic advances (I'll just dump terms, use Google or your favorite LLM as needed). Up until 2-3 years ago, all LLMs were dense models, and many of their implementation details meant that they were really slow and expensive to run. That has changed dramatically. Today we have advances in attention mechanisms, batching strategies, parallelism (tensor, pipeline, expert, data, context, probably more), multi-tier KV caching, networking & distributed training/inference, Mixture-of-Experts models (iirc, the first big one was Mistral's "Mixtral"; now almost all big ones are MoE models), speculative decoding, quantization, training in FP8/FP4, and more. Advances like these have not stopped happening, even if posts about new models may seem sparse at times. In recent technical reports from Qwen (one of Alibaba's AI teams), they mention up to a 10x reduction in training costs (or 90%, I guess?) just by using "new" (or old, by AI-world standards) techniques. Add other improvements such as training methodologies (optimization algorithms, multiple training stages, reinforcement learning), and that cost drops even more, or abilities increase without much extra cost. Speaking of abilities...
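To make just one of those terms concrete, here is the (very simplified) weight-memory arithmetic behind quantization, for a hypothetical 70B-parameter model; real deployments also need KV cache and activation memory on top:

```python
# Simplified: model weight footprint = parameter count * bytes per parameter.
# Ignores KV cache, activations, and framework overhead, so real needs are higher.
def weight_memory_gb(num_params: float, bits_per_param: int) -> float:
    return num_params * (bits_per_param / 8) / 1e9

params = 70e9  # hypothetical 70B-parameter model
for label, bits in [("FP16", 16), ("FP8", 8), ("FP4", 4)]:
    print(f"{label}: ~{weight_memory_gb(params, bits):.0f} GB of weights")

# FP16: ~140 GB, FP8: ~70 GB, FP4: ~35 GB. Quantization alone decides whether a
# model needs one GPU or a whole node, before any of the other tricks kick in.
```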
Fourth: Running costs - abilities & quality. Even small models today are surprisingly capable. There are models you can run on a 5-year-old smartphone that can produce grammatically and syntactically correct sentences in multiple languages, even if the logic is not always there. However, if you move a step up to a single mid-tier consumer GPU (5060 Ti, anyone?) in a mid-tier gaming PC (32-48GB RAM, anyone?), you can run models (for free) that are better and likely faster than the original ChatGPT, and it hasn't even been 3 years since then. And obviously, if you took a cluster of servers, each filled with 8x H100 GPUs, and compared the cost of serving the same number of users with the original ChatGPT versus an equivalent small model of today, you would see a cost reduction somewhere in the range of 95-99%, likely even more. In under 3 years. We're unlikely to see another drop like that, but the costs will keep dropping.
Fifth: Running costs - costs & profits. With the three previous points in mind, you might be wondering how these companies can still burn so much money as their revenues rise and costs drop. A few reasons. Cheaper training doesn't mean cheap training, and they are not aiming for "equivalent" abilities but for State-Of-The-Art models. The intense competition between the various AI companies means they need to keep competing, or people will just move to a competitor's LLM. Migrating your stuff from one company to another has never been easier, as pretty much everyone (including Anthropic, Google, open-source projects) uses OpenAI-compatible APIs, MCP servers, and the like. If for whatever reason the need to constantly train new models goes away, then a major cost for these companies goes away. And speaking of major costs going away, all those fat salaries will likely go away too, because who needs to pay millions of dollars for a single IC when you don't have intense competition going on? Subscriptions and free plans will likely be fine-tuned the way companies in other sectors are doing it (cough Netflix cough), but API pricing (which is what most enterprise customers use, although at a discount) is already profitable for them and, as we saw previously, it's only getting cheaper. Also, as these AI companies build their own infrastructure (which may or may not be a waste of money, I can't tell), they need to rely less on intermediaries (AWS, Azure, GCP, OCI) for their compute, and as such they'll keep more of the profits. So, dear Bain consultants, I don't think they'll need $2 trillion in annual revenue to be profitable :)
Intermission: personal anecdotes from my homelab & job. I already had a homelab, because I can, but also because I worked in IT. Now that I'm back to working in the AI field after a small break, it made sense to me to build an LLM server. Using a modestly capable small dense model, slightly older and used components, paying consumer electricity prices in an expensive country, as a single user, the cost is roughly ~$0.0445/1M input tokens (non-cached, 0 cached) and ~$2.161/1M output tokens (3). With 4 concurrent users, the cost per 1M output tokens would drop to roughly $0.617, and with 16 users we'd be down to just ~$0.24. At my job, about a year ago, using enterprise hardware from the same generation (~2020) and factoring in corporate discounts, I ran ~3x bigger models and calculated the cost at roughly $0.1-0.2 per 1M output tokens, without any attempts at optimization. I cannot imagine big AI companies, using all the nice stuff I mentioned in points 2-5, doing worse than what I did.
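For anyone curious how such a number comes together, here is a rough sketch. The power draw, electricity price, and throughputs are assumptions I picked so the output lands near the figures above; they are not my exact measurements:

```python
# Back-of-the-envelope electricity cost per 1M output tokens for a self-hosted LLM server.
# All inputs below are assumptions chosen to roughly reproduce the figures in the post.

def cost_per_million_tokens(power_watts: float, price_per_kwh: float, tokens_per_second: float) -> float:
    """Electricity cost (in $) to generate 1M tokens at a given aggregate throughput."""
    cost_per_hour = (power_watts / 1000) * price_per_kwh   # $/hour while generating
    tokens_per_hour = tokens_per_second * 3600
    return cost_per_hour / tokens_per_hour * 1_000_000

POWER_W = 900      # assumed: 4x power-limited 3090s plus the host system
PRICE_KWH = 0.35   # assumed consumer electricity price in an expensive country

# Batching is the whole trick: aggregate throughput grows with concurrent users
# while the power draw barely changes, so the per-token cost collapses.
for users, aggregate_tps in [(1, 40), (4, 140), (16, 365)]:
    cost = cost_per_million_tokens(POWER_W, PRICE_KWH, aggregate_tps)
    print(f"{users:>2} user(s): ~${cost:.2f} per 1M output tokens")
```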
Sixth: Usefulness - project success rates (4). Most projects fail. The vast majority of projects today fail because of senseless hype that comes either from braindead developers (like myself, but there are lots of us) who want to try new stuff for the fun of it, or from braindead management (middle, upper) who heard their friends talk about the "year of AI agents" half a year after it became a thing, and now want to do something with it just to show their bosses or shareholders. I'm not even joking; if you have dev friends, ask them how believable that sentence is. Now, even before the hype, AI projects (which were likely called Machine Learning (ML) projects) were often unsuccessful too. The fundamental premise is complex and problematic (5), and projects can fail even if you do nothing wrong. But some projects succeed. Most of the successful projects still fail to generate a positive ROI (outdated models, high development costs, the next big thing), but we are getting better at it over time (MLOps standardization, internal tools, etc). However, some projects are also wildly successful. Over the past few years, after working on ~4-5 projects that mostly ended up in the recycling bin, the one I'm working on now has a cost of $1-2M but is estimated to result in cost cutting (6) in the low to mid tens of millions per year. I doubt I'll work on another such project ever again in my life, but hey, it happens! My colleagues have similar stats, but likely better because they're smarter.
Seventh: Usefulness - ease of use. LLMs (and related research) really redefined what's possible, and what's easy. Let's say you wanted to make an app that counted how many unique people visited your shop per day. Just 5 years ago you'd need a highly capable data scientist working on this for weeks or months. Today your cheap junior developer from Lidl can call an LLM API and it will likely work okay. Or maybe you want to track security vulnerabilities in your company data center. Instead of having a dedicated human IT person look up stuff every day, you can ask your unpaid intern to feed a CVE feed into an LLM and have it automatically create a bunch of tickets for your dev team to ignore. Jokes aside, I think there are many applications of LLMs (especially those integrating vision and sound) that haven't been tried yet, simply because people haven't yet had enough time to test this technology with everything, or because the quality is only 90% of the way there instead of the (for example) expected >95%.
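As a concrete (and heavily simplified) sketch of that CVE-feed idea: the endpoint, model name, and ticket format below are placeholders I made up, and any OpenAI-compatible server or provider would work the same way:

```python
# Minimal sketch of the "CVE feed -> LLM -> tickets" idea, via an OpenAI-compatible API.
# Endpoint, model name, CVE entries, and ticket format are placeholders, not real data.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed-locally")

cve_entries = [
    "CVE-2025-12345: Remote code execution in ExampleWebServer 2.4 via crafted headers.",
    "CVE-2025-67890: Privilege escalation in ExampleAgent 1.1 on Linux hosts.",
]

for cve in cve_entries:
    response = client.chat.completions.create(
        model="my-local-model",  # placeholder model name
        messages=[
            {"role": "system", "content": "You draft short, actionable tickets for a dev team."},
            {"role": "user", "content": f"Turn this CVE into a ticket with a title, severity guess, and next step:\n{cve}"},
        ],
    )
    print(response.choices[0].message.content, "\n---")
```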
Eighth: Ongoing research. This also ties into project success rates and running costs, but deserves its own point. The amount of ongoing research is insane. It is impossible even for a team of people to keep track of everything in the AI field. It is probably impossible for teams to even track a subset of AI such as "only LLMs". An insane number of papers gets published every day (btw, I think China recently reached #1 on arXiv), and even though most of them get lost and may be less useful or notable than others, one thing remains true even after you filter out the ones you don't find relevant: it's nearly impossible to adopt even half of the research into your AI systems. A great example is hallucinations. They are indeed a big problem with LLMs. Did you know that there are hundreds of research papers with detection methods, prevention methods, mitigation methods, investigations into causes, and more? Do you think the RAG system your dev department built over the weekend does any of that, or is it more likely that they watched a YouTube tutorial, maybe read some docs, and called it a day? Research is currently not slowing down, even if model releases seem fewer/sparser (in total they aren't; we have more companies/labs releasing open/closed models, just maybe not the big names).
So, yeah, we may be in a valuation bubble, an investment bubble, and whatever other bubble there is, and the hype is annoying to no end, but there are clear paths to profitability for most of the currently unprofitable companies, based on the cost reductions coming from the still-booming tech "underlay". And yes, I can appreciate the irony (or stupidity) of me almost preaching LLMs and AI, yet not using an LLM to write this.
NVDA to the moon. Please crash so I can buy cheap H100 servers.
(1) In the keynote where Jensen presented the new Blackwell chips (I think it was GTC 2025), the first time he spoke of these chips his tongue slipped and he said something along the lines of "you can give your H100s away", supposedly because the new ones are so much faster. Then the marketing/sales department probably reminded him they have a big stock of H100/H200 GPUs, so I think he corrected that later.
(2) Which we trust 100%, because we believe non-public companies would never lie, right? And since it supposedly happens to OpenAI, it must happen to all other companies too, right?
(3) Mistral-Small-3.2-24B (not a MoE model, no speculative decoding) at FP16 with vLLM, on a 4x3090 system with power limit set to 200-225W (automatically adjusted based on temperature), each running on PCIe 4.0 x4, assuming full & constant load.
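(For the curious, here is a minimal sketch of roughly how such a setup can be spun up with vLLM's Python API; the model id and sampling settings are placeholders, not my exact config.)

```python
# Minimal vLLM sketch approximating the footnote's setup: an FP16 dense ~24B model
# sharded across 4 GPUs with tensor parallelism. The model id is a placeholder.
from vllm import LLM, SamplingParams

llm = LLM(
    model="mistralai/Mistral-Small-Instruct-placeholder",  # replace with the actual model id
    dtype="float16",
    tensor_parallel_size=4,  # one shard per GPU (4x 3090 in the footnote's setup)
)

outputs = llm.generate(
    ["Explain speculative decoding in one paragraph."],
    SamplingParams(max_tokens=256, temperature=0.7),
)
print(outputs[0].outputs[0].text)
```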
(4) I'm probably biased since I liked "AI" for a while (way before LLMs were a thing), and because I work in the field, and because now I'm really tired of all the bullshit hype from inside and outside the company I work at. Bite me. But not too hard, please, I haven't touched grass, let alone the gym card, in months.
(5) The premise behind the whole thing is that some process has some pattern(s), and a model (be it linear regression, tree-based models, genetic algorithms, reinforcement learning, LLMs) can somehow "make sense" of that pattern and perform or predict it. You likely have seen most successful applications, but you haven't seen the mountain of dead projects that died because a problem was just too hard or too weird to reliably model, or the model was too slow, too inconsistent, or otherwise problematic. Not to mention successful models that are just getting outdated after a while. We probably have some quants here that build and use a bunch of models too ("AI" or otherwise), maybe you folks can tell us how many models you discard or need to finetune?
(6) Yes, replacing human work. No, not replacing QA or tech support or sales, but highly paid and highly qualified professionals in a high profile domain. No, quality won't be worse because many smart professionals, scientists, engineers, plus a dumb Redditor, worked on this for a year, and humans will still be involved.
Edit: Here's a summary of the discussion in the comments so far, in case you want to jump to specific topics of interest:
Small models are nice (Nvidia paper, also comments here, here, here)
Liquidity, concentration, and other risks, even if costs keep falling fast, AI companies might simply not survive until then (here, here, here)
Results may be good but might not meet expectations (here)
Further discussions about GPU lifetime, positive & negative (here, here, and also a post on the localllama sub asking why we don't see GPUs from 4+ years ago being decommissioned or dropping in price)
Counter-argument about inference costs rising, not falling here.
AI startups face competition with companies having moats in data & expertise (here)
LLMs, "true intelligence", AGI, ASI, and the whole "promises vs results" debate (too many comments to list)
Environmental, political, infrastructure, and other concerns not under "finance" or "tech" (mainly here (definitely give this a read), a bit here, probably a few more) (not really my focus, since I only tried to address tech stuff not mentioned in past critiques, but still important -and correct- if you want an overall picture and the "other" side)
Accountants rejoice, you're safe because you can go to jail! (here)
Radiologists are safe too (here) (I still don't get how this ties to the post, as I never mentioned anything about "replacing all jobs", I only mentioned replacing a specific job in a specific, unnamed domain, in footnote 6. The dumb Redditor working on the project is me, in case it wasn't clear)
Notable mentions: the all-rounder comment, the Amazon experience™ (also good points, e.g. about reading papers), "this could've been an email" (it's a common joke, at least in dev circles, about having too many useless meetings, no need to downvote the guy!)
I keep seeing this comparison on Reddit, but I believe it's wrong to compare the two and expect a crash as big as the one in 2000.
The big difference is that the companies today actually make money. Microsoft, Google, Nvidia, and Meta all have real products, billions in profits, and millions of users. Think of Google Maps, YouTube, Windows, Office, Android, Instagram, etc. These are all solid products with billions in profits and massive user bases. Can you replace them? No. Will they go away anytime soon? Also no. Even if the AI trend slows down, these companies still have strong foundations and cash flow from their non-AI products.
During the dot com bubble, most companies had no profits and were running on hype alone. Not to mention how technology wasn’t even a big part of people’s lives back then. Most people were just getting online for the first time, and the internet was still something new and weird, used mostly by nerds.
Sure, valuations are high and there’s some overexcitement, but that doesn’t mean the market is heading for a total meltdown. A correction? Probably. A repeat of 2000? Highly unlikely.