As a retail investor who is not trading daily, this situation is extremely difficult and hard to predict, especially when you also have a regular 9 to 5 job. The question to ask is: is it something you even need to worry about? Markets have major downturns for all sorts of reasons. If not an "AI bubble," it will be something else. Accept that as reality, acknowledge your time horizon, and invest accordingly. If you build a strong ship, you don't need to worry about when the next wave will come. Answer from therealjerseytom on reddit.com
r/stocks on Reddit: Everybody talks about the AI bubble going to burst, but how? And what are the implications for the small investor?
1 month ago -

During the 2008 financial crisis, the housing bubble burst because of mortgages to unqualified borrowers, complex financial products like mortgage-backed securities (MBS), and lax lending standards. In short, a faulty system that depended on us, the people, paying off mortgages and loans that were in fact not being paid back. Seems a logical cause for a bubble to burst.

Before that, the dotcom bubble burst because of extreme overvaluation of companies that were not performing up to expectations; the revenue simply wasn't there.

Now there is obviously an AI bubble, as has been pointed out many, many times, but currently the companies involved are still meeting their expected revenue goals (looking at NVIDIA, Meta, and Google; even though Google is not strictly an AI company, its current valuation is also due to its AI developments). Of course, investing in each other and buying each other's products, causing stocks to rise, is highly inflationary, but so far it has not been punished. It seems.

Now, a geopolitical conflict that left a certain chipmaker unable to produce would likely pop the bubble overnight. Given the current geopolitical situation and the people involved, this is not unlikely in the coming years. But as long as it doesn't happen, it appears to be business as usual, and the AI race will continue.

Now, comparing this to earlier bubbles, the pattern is similar. An industry is pumped to the moon, a bunch of people make an insane amount of money, the bubble bursts and most people get screwed over with a few winners. The question is always: how high will it go when the companies are profitable and how deep will the lows be?

As a retail investor who is not trading daily, this situation is extremely difficult and hard to predict, especially when you also have a regular 9 to 5 job. I know I won't be able to predict it, so it comes down to a risk analysis: will current valuations turn out to be the future lows, OR are big companies with PE ratios of 50 already a sell signal for the retail investor? This would even apply to ETFs like VWRL, since their NVDA weighting is also high. The whole market will likely go down when this bubble bursts, just some companies more than others.

Given the earlier arguments, I feel like going short here is stupid. Meanwhile, world governments are hedging against inflation (buying loads of gold), which also has geopolitical implications. Now, I believe in the mantra that time in the market beats timing the market, but probably needing the money in 3 years or so, the current situation is a spicy sauce. It seems like hedging against inflation (e.g. buying gold and funds like Berkshire) is not a bad move.

ELI5 what is the ai bubble and the ai bubble burst? I've searched up stuff about it but still struggle to understand : r/explainlikeimfive
December 6, 2025 - If the general public knew how much each Google search or simple ChatGPT query costs (or had to pay for it themselves), the bubble would have burst before it started. ... it's an illusion that AI is very good.
r/stocks on Reddit: Ai bubble who wins?
5 days ago -

I feel the real winners of the AI boom won't be the AI companies themselves, but the infrastructure that supports them. Instead of betting on long-term, uncertain plays like nuclear or small modular reactors, I'm leaning toward power grids and cooling systems. It's less about hype and more about owning the essential backbone that everything depends on, as well as being first in line for orders. My picks are SMCI, BE, NVT, NBIS; what do you guys think?

r/Amd on Reddit: AMD CEO Lisa Su Says Concerns About an AI Bubble Are Overblown
December 4, 2025 - Demand might be real now, but that doesn't necessarily exclude a bubble. ... The demand for hardware is real, which is where Micron and AMD both land on the supply chain, everyone is building data centers. That has nothing to do with market demand or capitalization or ROI for the end product which is much further down the supply chain. Even OpenAI is spending spending spending without making much in the way of returns. And what are they buying? Hardware. They are selling AI, and that's the question without an answer.
r/investing on Reddit: How can AI be a bubble when everyone is convinced it’s one?
November 21, 2025 -

I scroll through Reddit and post after post calls AI a bubble. I open Bloomberg or NYT and read an article about the AI bubble. I go out to dinner with friends and they’re convinced AI is a bubble. My favorite podcasters and YouTubers hem and haw about the AI bubble.

It appears the entire world is convinced AI is a bubble. If this is the prevailing sentiment, why would this not be priced in?

I thought the hallmark of a bubble was that it by and large wasn't the prevailing sentiment and wasn't expected.

Edit: Thanks everyone! I’ve decided to double down on AI. The bubble is already priced in.

r/OutOfTheLoop on Reddit: What’s the deal with the AI bubble? What are the implications and ramifications of an AI bubble burst?
November 19, 2025 -

I'm sorry for my short-sightedness, and I understand that a lot of things right now are AI-related. But what would be the tangible impact (other than job loss, I think) that would cause large-scale issues if the AI bubble burst?

TIA.

https://www.commondreams.org/news/ocasio-cortez-ai-bailout

Top answer
Answer: There is a general belief that AI investment is in a state where the entire market is being propped up and is heavily overvalued. Those who support AI claim that the investment will pay off incredibly and change the world as we know it, akin to getting in on the ground floor of the internet. However, the investment in AI is so massive that anything short of a complete societal revolution, where every claim is completely right, would be a failure. At this point, so much money is being bet on it that a new tool that's really useful, that everyone uses sometimes, would be a failure.

Imagine if someone said they'd build the best house ever, and raised money by saying it would be the most valuable house ever. Then they raised billions of dollars to build the house. Well, the house could be really good. It could be amazing. But unless that house sells for even more billions of dollars, it's going to be a failure and those investors lose out. And the house isn't looking like a palace at the moment.

But this extends further. People are investing in the investors, and so on. If the AI bubble bursts, first the main companies likely fail as they run out of money. But then the companies invested in those companies fail. Then the companies invested in THOSE ones fail. This ripples out through the market, where investment accounts, retirement accounts, etc. all keep failing.

Going back to the house metaphor, it's like if the house was promised to be so good that a bunch of businesses cropped up to make a town supporting it. People were hired to build stores, start businesses to support the mega-house, and even support people working in industries that support the mega-house. If the house fails to be the best thing ever, that entire town, built up entirely on the expectation of the house being the best, will fail. And everyone invested in it is going to suffer for it.

Even if you hate the house and think it's stupid, you might have a retirement fund through your job. That fund invested its money into Bob's Painting Company because it was doing super well. And it was doing super well because it invested in a new branch, and that branch invested in the House because they figured they'd make a ton of money being hired to paint and repaint it. When the house fails, the branch fails, Bob's stock crashes, and your retirement fund loses a lot of money.
Answer: Finance journalists have always been racing to predict the next market downturn, and they have been specifically trying to predict "bubbles," i.e. downturns in specific business areas, ever since the dot-com crash in 2000. The cloud computing bubble was supposed to burst in the early 2010s. The housing bubble was supposed to burst in the late 2000s. As those examples show, sometimes pundits get things right, while at other times they get things wrong: the housing market really did crash in 2007, while cloud computing is here to stay. Everyone who's talking about the AI bubble bursting is just guessing. It might be that AI turns out not to be as useful or profitable as expected. It could turn out more useful than people expected. It may be that AI will be very successful, just not in the hands of the current biggest players (which was the case with the dot-com bubble). No one really knows.
r/Bogleheads on Reddit: 2 years since first “AI Tech Bubble” fear post
2 weeks ago -

Seeing an increase in "what if the AI bubble pops" posts lately. So I did some digging. The oldest post I could find on the question was from two years ago (link below). Since then, VTI has grown 42% and VOO 47%.

Those who stayed on the sidelines or sold out of fear missed out on incredible growth. Understand the recency bias at play today. It's possible there's a bubble. It's possible a correction is coming. No one knows its timing, depth, or breadth. We Bogle because even those corrections are compensated for by periods like the last two years. Staying out of the market means you might miss the bad times, but you're definitely going to miss the good times.

https://www.reddit.com/r/Bogleheads/s/2gAsAlkWEj

r/ArtificialInteligence on Reddit: There is no “AI Bubble.” What we’re living through is an AI CapEx Supercycle.
December 4, 2025 -

People keep comparing today’s AI market to the Dotcom bubble, but the structure is fundamentally different. Back then, the market was dominated by hundreds of small, non-viable companies with no revenue and no real product. Today, the core of the AI build-out is driven by the most profitable, cash-rich companies on the planet: Microsoft, Google, Amazon, Apple, Meta, NVIDIA, Broadcom, and the hyperscalers. These firms have actual products, real demand, and business models that already scale.

What is similar to the Dotcom era is the valuation stretch and the expectation curve. We are in a CapEx Supercycle where hyperscalers are pouring unprecedented amounts of money into GPUs, data centers, power infrastructure, and model development. This phase cannot grow linearly forever. At some point, build-out slows, ROI expectations tighten, and the market will reprice.

When that happens, here’s what to expect:

Winners: diversified hyperscalers, cloud platforms, chip manufacturers with real moats, and software ecosystems that can monetize AI at scale.

Survivors but volatile: model labs, foundation model vendors, and second-tier hardware companies that depend on hyperscaler demand cycles.

Casualties: AI “feature startups,” companies without defensible tech, firms relying on perpetual GPU scarcity, and anything whose valuation implies perfect execution for a decade.

This isn't a bubble waiting to burst into nothingness but a massive, front-loaded investment cycle that will normalize once infrastructure saturation and cost pressures kick in. The technology is real, the demand is real, and the winners will be even larger, but the path there won't be a straight line.

Edit: Thank you all very much for your posts and discussion. This seems to be a very controversial topic, but this is also something where everyone can learn.

r/technology on Reddit: ‘Absolutely' a market bubble: Wall Street sounds the alarm on AI-driven boom as investors go all in
November 14, 2025 - Being able to ask questions of a database like IBM Watson medical AI is bound to increase productivity with reduced workload. Thousands of industries are looking to automate or reduce payroll while offering extended operating hours. All these companies will invest billions to be the ones with the edge. People compare it to the dot-com bubble; the internet, as far as I know, generates trillions in value each year.
r/stocks on Reddit: The Notorious “AI Bubble”, Why It is Likely Not What Many Think, Yet Can Still Be A Slippery Slope
October 31, 2025 -

Everyone is talking about the fabled "AI Bubble," but I see a much different problem altogether. Here's what I see, in phases. Just my DD.

Phase 1: AI demand will keep exploding for years, like a river that keeps widening no matter how many dams you build. Look at the Micron news recently, or the Genesis Executive Order. The ruling class and the stock market will surf the early AI wave, even if the economy is doing crap (it is).

Phase 2: Job cuts are more apparent as college grads already can’t get entry level jobs. Right now, AI layoffs feel like cutting “cost fat,” but if they keep going, you’re actually sawing into the bone that keeps the body alive. Who is going to buy shit from Amazon when they are broke? Who will buy a house when they can’t establish credit or a down payment? You see how this trickles?

Phase 3: Structural unemployment doesn’t bounce back… AI is on track to genuinely replace occupations in many industries. I’m talking entry level positions in finance, HR, hospitality, etc, etc. Once AI can reliably handle tasks like entry-level coding, customer support, basic analysis, HR screening, or content production, companies redesign workflows around machines instead of humans, so those roles don’t come back in the same numbers or at the same pay. Over time, this creates a gap: a huge pool of workers trained for jobs that no longer exist, while new AI-era jobs (prompt engineers, system architects, AI ops, etc.) require skills, credentials, or geography that many of them don’t have. What bailout can the Fed provide if we happen to have thousands of graduates in debt that also don’t have a career option in their major?

*This is kind of a doomer post, so I want to reiterate I may be and hopefully am wrong. I hope this country can thrive, but I think this AI revolution upon us will have some sour aftertastes coming.

r/news on Reddit: No firm is immune if AI bubble bursts, Google CEO tells BBC
October 14, 2025 - Exactly, this is just Google/big tech saying "our investment money is drying up, so if a country wants a strong AI industry we want government handouts." I suspect it's also a hint that big tech growth might be slowing down, and a lot of the US stock market relies on big tech growth to make the overall picture look good. I am not sure what can be done about it. Does he mean the bubble should be supported forever, just in case?
r/Futurology on Reddit: Some simple math to show why the AI bubble has to burst. (AI/Economics)
October 25, 2025 -

Regardless of what you think about the tech behind AI (given what sub this is, I can safely assume most people here are deeply sceptical), you can do some simple math to show why the spending on AI has to blow up. Regardless of whether or not the AI industry becomes profitable (it's not anywhere close to profitable currently), it is almost impossible to justify the current spending on the AI bubble. Note: there are really two aspects of the AI bubble: (1) a bunch of start-ups with no path to profitability, and (2) insanely irresponsible capex spending on data centers by big tech. I am only focusing on the latter in this post, because it is what has turned the AI bubble from an industry problem into a systemic risk.

First, just ask the question: how much revenue would it take to justify the capex spending on AI data centers? I'll use ballpark round numbers for 2025 to make my point, but I think these numbers are directionally correct. In 2025 there has been an expected 400 billion dollars of capex spending on AI data centers. An AI data center is a rapidly depreciating asset: the chips become obsolete in 1-3 years, cooling and other ancillary systems last about 5 years, and the building itself becomes obsolete in about 10 years due to layout changes driven by frequent hardware innovations. I'll average this out and say a data center depreciates almost all of its value in 5 years. Which means the AI data centers of 2025 depreciate by 80 billion dollars every year.

How much profit do AI companies need to make in order to justify this cost? I'll be extremely generous and say that AI companies will actually become profitable soon, with a gross margin of 25%. Why 25%? I don't know; it just seems like a reasonable number for an asset-heavy industry. Note: the AI industry actually has a gross margin of about -1900% as of 2025, so, like I said, I am being very generous with my math here. Assuming a 25% gross margin, the AI industry needs to earn 320 billion dollars in revenue just to break even on the data center buildout of 2025. Just 2025, by the way; this is not accounting for the data centers of 2024 or 2026.

Let's assume 2026 sees twice the capex spend on data centers as 2025. In that case, again assuming the industry actually becomes profitable, it will need close to a trillion dollars in revenue just to break even on two years of capex spending. What if there is even more capex spending in 2027 or 2028?
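The arithmetic above can be sketched in a few lines. All the figures below are the post's own round numbers (the $400B capex, 5-year depreciation, 25% margin), not independent estimates:

```python
# Break-even arithmetic for AI data-center capex, using the post's round figures.
CAPEX_2025 = 400e9        # assumed 2025 AI data-center capex, USD
DEPRECIATION_YEARS = 5    # blended lifetime: chips ~1-3y, cooling ~5y, building ~10y
GROSS_MARGIN = 0.25       # generous assumed margin for an asset-heavy industry

annual_depreciation = CAPEX_2025 / DEPRECIATION_YEARS      # $80B per year
breakeven_revenue = annual_depreciation / GROSS_MARGIN     # $320B per year

# If 2026 capex doubles, the combined two-year build-out implies:
capex_2026 = 2 * CAPEX_2025
combined_depreciation = (CAPEX_2025 + capex_2026) / DEPRECIATION_YEARS  # $240B/yr
combined_breakeven = combined_depreciation / GROSS_MARGIN               # $960B/yr
```

That last figure is where the post's "close to a trillion dollars" comes from: $1.2T of cumulative capex, depreciated over 5 years, divided by a 25% margin.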

In conclusion, even assuming AI becomes profitable in the near term, it will rapidly become impossible to justify the spending on data centers. The AI industry as a whole will need to be making trillions of dollars a year in revenue by 2030 to justify the current build-out. If the industry is still unprofitable by 2030, it will probably become literally impossible to ever recoup the spending on data centers. This is approaching the point where even the US government can't afford to waste that much money trying to save a sinking ship.

r/technology on Reddit: How Does The AI Bubble Compare To Dotcom Fever?
December 2, 2025 - I see a lot of financially illiterate replies, so let me dumb this down: the main difference between the dot-com bust and the AI bubble is that these companies actually have cash flow and revenue, whereas in the dot-com bust, it was purely speculation.
r/technology on Reddit: AOC warns we may be in a 'massive' AI bubble with '2008-style threats to economic stability'
November 20, 2025 - That will inevitably involve massive price increases, and I think that a lot of ai usage will disappear at that point. ... The fear isn't that LLMs can actually replace people, but that rich assholes think it can and do it anyway. ... That’s when the bubble bursts and the economy takes a big hit.
r/investing on Reddit: AI Bubble, from a tech perspective
October 13, 2025 -

Hey y'all! This will be a dump of some ideas and thoughts I had on various arguments I've heard regarding everyone's favorite topic: the AI bubble.

tl;dr: Not sure about stock prices, but the tech supporting them is solid, and profitability may come faster than expected.

Disclaimer: I'm just a random dude who mostly buys VT (and won't change that), who happens to work in an AI lab now (top 500) and worked at a cloud company before (top 50), but not at one of the hyped companies. I started my AI journey ~5 years before ChatGPT was first released. I have zero predictive power about whether we're in a bubble, and won't touch PE ratios and whatnot, but hopefully I know a bit more about running LLMs than <insert consulting company of choice> :)

First: GPU lifetime. Despite Jensen Huang's jokes (1), new GPUs (aka "AI accelerators") are not only good for 1 year. Many cloud companies still have and offer V100 GPUs, which were released in 2018 and are worse in every metric than RTX 5090 GPUs. The V100 might be getting old, but the slightly newer models (L40S, A100, ...) are still very popular, to the point that I've hit AWS availability limits when trying to use them at work. So the systems that house these GPUs remain useful for many years, partly due to insane demand, and partly because of their usefulness (more on that later).

Second: Running costs - GPU compute & power consumption. Despite Sam Altman losing money on $200 subscriptions (2), the cost of creating (training) and running (inference) AI models (mainly Large Language Models (LLMs)) is dropping way faster than most people realize. Jensen's joke definitely had some truth to it. No matter what metric you use (bandwidth, TFLOPS, VRAM, tokens), the H100 was a huge leap over the A100, both in the raw metric and the power-normalized metric. The B200 is another step upward in both raw metrics and efficiency. The VRAM-related gains in particular are the biggest improvement, because they mean inference and training become that much faster (relevant term: batching).

Third: Running costs - algorithmic advances (I'll just dump terms; use Google or your favorite LLM as needed). Up until 2-3 years ago, all LLMs were dense models, and many of their implementation details meant they were really slow and expensive to run. That has changed dramatically. Today we have advances in attention mechanisms, batching strategies, parallelism (tensor, pipeline, expert, data, context, probably more), multi-tier KV caching, networking & distributed training/inference, Mixture-of-Experts models (iirc, the first was Mistral's "Mixtral"; now almost all big ones are MoE models), speculative decoding, quantization, training in FP8/FP4, and more. Advances like these have not stopped happening, even if posts about new models may seem sparse at times. In recent technical reports from Qwen (one of Alibaba's AI teams), they mention up to a 10x reduction in training costs (or 90%, I guess?) just by using "new" (or old, by AI-world standards) techniques. Add other improvements such as training methodologies (optimization algorithms, multiple training stages, reinforcement learning), and that cost drops even more, or abilities increase without much extra cost. Speaking of abilities...
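To make one of those levers concrete, consider quantization: weight memory shrinks roughly in proportion to bits per weight. A minimal sketch (the function and the 70B model size are illustrative assumptions, not figures from the post):

```python
# Approximate weight-only memory footprint of an LLM at different precisions.
# Illustrative only: real deployments also need KV-cache and activation memory.
def weight_memory_gb(params_billions: float, bits_per_weight: int) -> float:
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

# A hypothetical 70B-parameter model:
for bits in (16, 8, 4):   # FP16, FP8, FP4, as mentioned above
    print(f"{bits}-bit: {weight_memory_gb(70, bits):.1f} GB")
# -> 16-bit: 140.0 GB, 8-bit: 70.0 GB, 4-bit: 35.0 GB
```

Halving the precision halves the VRAM needed for weights, which is one reason the same hardware can serve ever larger (or ever more) models over time.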

Fourth: Running costs - abilities & quality. Even the small models of today are surprisingly capable. There are models you can run on a 5-year-old smartphone that produce grammatically & syntactically correct sentences, in multiple languages, even if the logic is not always there. However, if you move a step up to a single mid-tier consumer GPU (5060 Ti anyone?) in a mid-tier gaming PC (32-48GB RAM anyone?), you can run models (for free) that are better and likely faster than the original ChatGPT, and it hasn't even been 3 years since then. And if you took a cluster of servers, each filled with 8x H100 GPUs, and compared the cost of serving the same number of users with the original ChatGPT versus an equivalent small model today, you would see a cost reduction somewhere in the range of 95-99%, likely even more. In under 3 years. We're unlikely to see another drop like that, but costs will keep falling.

Fifth: Running costs - costs & profits. With the three previous points you might be wondering how these companies can still burn so much money as their revenues rise and costs drop. A few reasons. Cheaper training doesn't mean cheap training, and they are not aiming for "equivalent" abilities but for state-of-the-art models. The intense competition between the various AI companies means they need to keep competing, or people will just move to a competitor's LLM. Migrating your stuff from one company to another has never been easier, as pretty much everyone (including Anthropic, Google, open-source projects) uses OpenAI-compatible APIs, MCP servers, and the like. If for whatever reason the need to constantly train new models goes away, then a major cost for these companies goes away. And speaking of major costs going away, all those fat salaries will likely go away too, because who needs to pay millions of dollars for a single IC when there's no intense competition going on? Subscriptions and free plans will likely be fine-tuned the way companies in other sectors do it (cough Netflix cough), but API pricing (which is what most enterprise customers get, although at a discount) is already profitable for them and, as we saw previously, it is only getting cheaper. Also, as these AI companies build their own infrastructure (which may or may not be a waste of money, I can't tell), they need to rely less on intermediaries (AWS, Azure, GCP, OCI) for their compute, and so they'll keep more of the profits. So, dear Bain consultants, I don't think they'll need $2 trillion in annual revenue to be profitable :)

Intermission: personal anecdotes from my homelab & job. I already had a homelab, because I can, but also because I worked in IT. Now that I'm back working in the AI field after a small break, it made sense to build an LLM server. Using a modestly capable small dense model, slightly older & used components, paying consumer electricity prices in an expensive country, as a single user, the cost is roughly ~$0.0445/1M input tokens (non-cached; $0 cached) and ~$2.161/1M output tokens (3). With 4 concurrent users, 1M output tokens would drop to roughly $0.617, and with 16 users we'd be down to just ~$0.24. At my job, about a year ago, using enterprise hardware from the same generation (~2020) and factoring in corporate discounts, I ran ~3x bigger models and put the cost at roughly $0.1-0.2 / 1M output tokens, without any attempt at optimization. I cannot imagine big AI companies, using all the nice stuff I mentioned in points 2-5, doing worse than I did.
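For anyone who wants to run a back-of-the-envelope version of this, the electricity-cost math has a simple shape. The throughput, wall power, and electricity price below are assumed placeholders for illustration, not the poster's measured setup:

```python
# Electricity cost to generate 1M output tokens on a self-hosted LLM server.
# All inputs are assumptions, not measurements from the post above.
def cost_per_million_output_tokens(tokens_per_sec: float,
                                   system_watts: float,
                                   price_per_kwh: float) -> float:
    seconds = 1e6 / tokens_per_sec                 # time to emit 1M tokens
    kwh = (system_watts / 1000) * seconds / 3600   # energy consumed in that time
    return kwh * price_per_kwh

# Assumed example: 50 tok/s on a ~1.1 kW power-limited 4-GPU box at $0.35/kWh
print(round(cost_per_million_output_tokens(50, 1100, 0.35), 2))  # -> 2.14
```

This is also why the per-user cost falls so sharply with concurrency: batching amortizes the same wall power over several simultaneous token streams, so aggregate tokens/sec rises while watts stay roughly flat.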

Sixth: Usefulness - project success rates (4). Most projects fail. The vast majority of projects today fail because of senseless hype that comes either from braindead developers (like myself, but there are lots of us) who want to try new stuff for the fun of it, or from braindead management (middle, upper) who heard their friends talk about the "year of AI agents" half a year after it became a thing, and now want to do something with it just to show their bosses or shareholders. I'm not even joking; if you have dev friends, ask how believable that sentence is. Now, even before the hype, AI projects (which were likely called Machine Learning (ML) projects) were often unsuccessful too. The fundamental premise is complex and problematic (5), and projects can fail even if you do nothing wrong. But some projects succeed. Most of the successful projects still fail to generate a positive ROI (outdated models, high development costs, the next big thing), but we are getting better at it over time (MLOps standardization, internal tools, etc.). However, some projects are also wildly successful. Over the past few years, after working on ~4-5 projects that mostly ended up in the recycling bin, the one I'm working on now has a cost of $1-2M but is estimated to result in cost cutting (6) in the low-to-mid tens of millions per year. I doubt I'll work on another such project ever again in my life, but hey, it happens! My colleagues have similar stats, but likely higher because they're smarter.

Seventh: Usefulness - ease of use. LLMs (and related research) really redefined what's possible, and what's easy. Let's say you wanted to make an app that counted how many unique people visited your shop per day. Just 5 years ago you'd need a highly capable data scientist working on this for weeks or months. Today your cheap junior developer from Lidl can call an LLM API and it will likely work okay. Or maybe you want to track security vulnerabilities in your company data center. Instead of having a dedicated human IT person look up stuff every day, you can ask your unpaid intern to feed a CVE feed into an LLM and have it automatically create a bunch of tickets for your dev team to ignore. Jokes aside, I think there are many applications of LLMs (especially those integrating vision and sound) that haven't been tried yet, simply because people haven't yet had enough time to test this technology with everything, or because the quality is only 90% of the way there instead of the (for example) expected >95%.

Eighth: Ongoing research. This also ties into project success rates and running costs, but deserves its own point. The amount of ongoing research is insane. It is impossible even for a team of people to keep track of everything in the AI field. It is probably impossible for teams to track even a subset of AI such as "only LLMs". An insane number of papers gets published every day (btw, I think China recently reached #1 on Arxiv), and even though most of them get lost and may be less useful or notable than others, one thing remains true even after you filter out the ones you don't find relevant: it's nearly impossible to adapt even half of the research into your AI systems. A great example is hallucinations. It is indeed a big problem with LLMs. Did you know that there are hundreds of research papers with detection methods, prevention methods, mitigation methods, investigations into causes, and more? Do you think the RAG system your dev department built over the weekend does any of that, or is it more likely that they watched a YouTube tutorial, maybe read some docs, and called it a day? Research is currently not slowing down, even if model releases seem fewer / sparser (in total they aren't; we have more companies/labs releasing open/closed models, just maybe not the big names).

So, yeah, we may be in a valuation bubble, an investment bubble, and whatever other bubble there is, and the hype is annoying to no end, but there are clear paths to profitability for most of the companies that currently aren't profitable, based on cost reductions coming from the still-booming tech "underlay". And yes, I can appreciate the irony (or stupidity) of me almost preaching LLMs and AI, yet not using an LLM to write this.

NVDA to the moon Please crash so I can buy cheap H100 servers.


(1) In the keynote where Jensen presented the new Blackwell chips (I think it was GTC 2025), the first time he spoke of these chips his tongue slipped and he said something along the lines of "you can give your H100s away," supposedly because the new ones are so much faster. Then the marketing/sales department probably reminded him they have a big stock of H100/H200 GPUs, so I think he corrected it later.

(2) Which we trust 100%, because we believe non-public companies would never lie, right? And since it supposedly happens to OpenAI, it must happen to all other companies too, right?

(3) Mistral-Small-3.2-24B (not a MoE model, no speculative decoding) at FP16 with vLLM, on a 4x3090 system with power limit set to 200-225W (automatically adjusted based on temperature), each running on PCIe 4.0 x4, assuming full & constant load.

(4) I'm probably biased since I liked "AI" for a while (way before LLMs were a thing), and because I work in the field, and because now I'm really tired of all the bullshit hype from inside and outside the company I work at. Bite me. But not too hard, please, I haven't touched grass, let alone the gym card, in months.

(5) The premise behind the whole thing is that some process has some pattern(s), and a model (be it linear regression, tree-based models, genetic algorithms, reinforcement learning, or LLMs) can somehow "make sense" of that pattern and perform or predict it. You have likely seen the most successful applications, but you haven't seen the graveyard of projects that died because a problem was just too hard or too weird to reliably model, or the model was too slow, too inconsistent, or otherwise problematic. Not to mention successful models that simply go stale after a while. We probably have some quants here that build and use a bunch of models too ("AI" or otherwise); maybe you folks can tell us how many models you discard or need to finetune?
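
A hedged toy illustration of that premise — and of models going stale: fit a linear model to one regime of a synthetic process, then watch its error blow up when the underlying pattern drifts. All numbers are made up for the sketch; real drift is rarely this clean.

```python
import numpy as np

rng = np.random.default_rng(0)

# Regime A: the process follows y = 2x + 1 (plus noise); a linear fit "makes sense" of it.
x = rng.uniform(0, 10, 200)
y_a = 2 * x + 1 + rng.normal(0, 0.1, 200)
slope, intercept = np.polyfit(x, y_a, 1)

# In-regime error is tiny: the pattern was captured.
err_a = np.mean(np.abs((slope * x + intercept) - y_a))

# Regime B: the process drifts to y = -x + 5; same model, no refit.
y_b = -x + 5 + rng.normal(0, 0.1, 200)
err_b = np.mean(np.abs((slope * x + intercept) - y_b))

print(f"error in regime A: {err_a:.2f}, after drift: {err_b:.2f}")
assert err_b > 10 * err_a  # the stale model is now badly wrong
```

Same story for fancier models: the mountain of discarded or constantly refit models is invisible from the outside, but it's a real, recurring cost.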

(6) Yes, replacing human work. No, not replacing QA or tech support or sales, but highly paid and highly qualified professionals in a high-profile domain. No, quality won't be worse, because many smart professionals, scientists, and engineers, plus a dumb Redditor, worked on this for a year, and humans will still be involved.


Edit: Here's a summary of the discussion in the comments so far, in case you want to jump to specific topics of interest:

  • Small models are nice (Nvidia paper, also comments here, here, here)

  • Liquidity, concentration, and other risks: even if costs keep falling fast, AI companies might simply not survive until then (here, here, here)

  • Results may be good but might not meet expectations (here)

  • Further discussions about GPU lifetime, positive & negative (here, here, and also a post on the localllama sub asking why we don't see GPUs from 4+ years ago being decommissioned or dropping in price)

  • Counter-argument about inference costs rising, not falling (here)

  • AI startups face competition with companies having moats in data & expertise (here)

  • LLMs, "true intelligence", AGI, ASI, and the whole "promises vs results" debate (too many comments to list)

  • Environmental, political, infrastructure, and other concerns not under "finance" or "tech" (mainly here (definitely give this a read), a bit here, probably a few more). Not really my focus, since I only tried to address tech points not covered in past critiques, but still important, and correct, if you want the overall picture and the "other" side.

  • Accountants rejoice, you're safe because you can go to jail! (here)

  • Radiologists are safe too (here) (I still don't get how this ties to the post, as I never mentioned anything about "replacing all jobs", I only mentioned replacing a specific job in a specific, unnamed domain, in footnote 6. The dumb Redditor working on the project is me, in case it wasn't clear)

  • Notable mentions: the all-rounder comment, the Amazon experience™ (also good points, e.g. about reading papers), "this could've been an email" (it's a common joke, at least in dev circles, about having too many useless meetings, no need to downvote the guy!)

Top answer · 1 of 5 · 197
A few things; it's not just an AI bubble. Liquidity in markets is draining, rapidly. Borrowers' ability to fund their debt repayments is deteriorating, and every job removed, whether from automation or government policy, is a step toward tighter liquidity conditions. Eventually it will cascade.

The major AI companies in the US are competing against each other, sure, but increasingly also against models developed overseas. China's access to chips has been restricted, which forces them to develop more efficient models. US AI companies are pouring hundreds of billions into systems that China is matching for a fraction of the price, with comparable results. It's this discrepancy in capex that is going to cause issues: if a model from China can produce the same results for a fraction of the price, then US companies will have to charge more to make their investors whole. That capex spread is the bubble.

On a side note, LLMs aren't true intelligence; intelligence is reasoning and problem solving combined with the widest possible variety of inputs. Nothing more.
Answer 2 of 5 · 99
I want to add to that as a software developer: until 6 or so months ago I was kind of forcing myself to experiment with AI tools every once in a while to keep up with the progress. Things have improved so dramatically that for the last 2 or so months most of my code has been written by AI. I'm still doing a lot of the thinking, design, and architecture, but mostly not writing any code. It usually gets most things right on the first try, writes cleaner code than I do, follows the codebase's patterns very well, and sometimes points out edge cases I didn't think about. I cannot reason about valuations, liquidity, etc., but the value is very much there already; this is definitely not a dotcom situation.
r/finance on Reddit: The question isn’t whether the AI bubble will burst – but what the fallout will be
December 1, 2025 - The biggest 'dot com type' company, Google, was not public during the dot com bubble, and therefore was not an investment option. META did not exist yet either. You're implying that if you had just invested in all the public dot com companies equally, you would've picked 'winners' by default, but that is completely untrue - most of the 'winners' didn't exist yet. The same is likely true for the AI bubble - the companies you're seeing today are most likely going to be losers and the winners will be born out of the rubble.
r/investing on Reddit: Unpopular opinion but I don't think the AI bubble is anything like the dot com bubble
November 5, 2025 -

I keep seeing this comparison on Reddit, but I believe it's wrong to compare the two and expect a crash as big as the one in 2000.

The big difference is that today's companies actually make money. Microsoft, Google, Nvidia, and Meta all have real products, billions in profits, and massive user bases: think Google Maps, YouTube, Windows, Office, Android, Instagram, etc. Can you replace them? No. Will they go away anytime soon? Also no. Even if the AI trend slows down, these companies still have strong foundations and cash flow from their non-AI products.

During the dot com bubble, most companies had no profits and were running on hype alone. Not to mention how technology wasn’t even a big part of people’s lives back then. Most people were just getting online for the first time, and the internet was still something new and weird, used mostly by nerds.

Sure, valuations are high and there’s some overexcitement, but that doesn’t mean the market is heading for a total meltdown. A correction? Probably. A repeat of 2000? Highly unlikely.

r/dataisbeautiful on Reddit: [OC] S&P 500 Comparing Dotcom and AI Bubbles with Two Scales
November 18, 2025 - Either way, this is a very bubbly situation, much more so than the dot-com boom. It's also a once-in-history situation: AI is unique in that it promises infinite improvement and growth at an exponential rate.