Forget the talk about bubbles and corrections. Can someone explain to me the rationale of investing in a product, marketing it, seeing that it drives consumers away from your product and erodes trust, and then you continue to invest at an accelerating rate? Good business would have driven us very far away from this point years ago. This is very deep in the "because we can" territory. It's not FOMO.
> Can someone explain to me the rationale of investing in a product, marketing it, seeing that it drives consumers away from your product and erodes trust, and then you continue to invest at an accelerating rate?
Sure!
Google began investing heavily in AI (LLMs, actually) to catch up to the other frontier labs, which had already produced a product that was going to eviscerate Google Search (and therefore, Google ad revenue). They recognized this, and set about becoming a leader in the emerging field.
Is it not better to be a leader in the nascent industry that is poised to kill your profitability?
This is the same approach that Google took with smartphones. They saw Apple as a threat not because they had a product that was directly competing, but because they recognized that allowing Apple to monopolize mobile computing would put them in a position to take Google’s ad revenue — or allow them to extract rent in the form of payments to ensure Apple didn’t direct their users to a competing service. Android was not initially intended to be a revenue source, at least not most importantly. It was intended to limit the problem that Apple represented. Later, once Google had a large part of the market, they found ways to monetize the platform via both their ad network and an app store.
AI is no different. If Google does nothing, they lose. If they catch up and take the lead, they limit the size of the future threat and if all goes well, will be able to monetize their newfound market share down the road - but monetization is a problem for future Google. Today’s Google’s problem is getting the market share.
This is how I've framed a lot of the expenditure despite the lack of immediate substantial new revenue. Everyone, including Google, is driven to protect current revenues from prospective disruption. And the vulnerability AI creates for Google is one that other companies would find worth positioning themselves to exploit if Google falls behind and loses chunks of market share.
Yeah, like imagine if the LLMs don't advance that much, the agentic stuff doesn't really take off, etc.
Even in this conservative case, ChatGPT could seriously erode Google Search revenues. That alone would be a massive disruption and Google wants to ensure they end up as the Google in that scenario and not the Lycos, AltaVista, AskJeeves etc. etc.
> frontier labs, which had already produced a product that was going to eviscerate Google Search (and therefore, Google ad revenue)
> If Google does nothing, they lose.
Is any of that actually true though? In retrospect, had Google done nothing, their search product would still work. Currently it's pretty profoundly broken, at least from a functional standpoint--no idea how that impacts revenue, if at all. To me it seems like Google in particular took the bait and went after a paper tiger, and in doing so damaged their product.
There are at least a few stories from the 90s where companies that readily could have invested in “getting online” instead decided that it would only harm their existing business. The hype at the time was extraordinary to be sure, but after the dust settled the internet did change the shape of the world.
Nobody can really know what things will look like in 10 years, but if you have the capital to deploy and any conviction at all that this might be a sea-change moment, it seems foolish to not pursue it.
> Can someone explain to me the rationale of investing in a product, marketing it, seeing that it drives consumers away from your product and erodes trust, and then you continue to invest at an accelerating rate?
I'll take a stab at this. It's not 100% clear to me which product you're referring to, so I'll try to answer as if the product is something that already has customers, and the maker of the product is shoving AI into it. The rationale is that the group you're trying to convince that you're doing a good job is your shareholders or investors, not your actual customers. You can justify some limited customer attrition by noting that your competitors are doing the same thing, and that maybe if you shove the _right_ AI features into the product, you'll win those customers back.
I'm not saying it's a _good_ rationale, but that seems to be what's at play in many cases.
Eastman Kodak tried your implied strategy of ignoring technological developments that undermine the core product. It didn't go so well. Naturally, technology companies have learned from this and other past mistakes.
I don't know which product you're even talking about.
If you mean AI Overview, you really need to cite the source of this claim:
> seeing that it drives consumers away from your product
Because every single source I can find claims that Google search grew in 2024[0]. HN is not a good focus group for a product that targets billions of people.
If you're talking about all of AI with your statement, I think you may need to reconcile that opinion with the fact that ChatGPT alone has almost 1 billion daily users. Clearly lots of people derive enormous value from AI.
If there's something more specific and different you were referring to I'd love to hear what it is.
> Clearly lots of people derive enormous value from AI.
I don’t really see this. Lots of people like freebies, but the value aspect is less clear. AFAIK, none of these chatbots are profitable. You would not see nearly as many users if they had to actually pay for the thing.
> "It doesn't matter whether you want to be a teacher [or] a doctor. All those professions will be around, but the people who will do well in each of those professions are people who learn how to use these tools."
Bullshit. Citation very much needed. It's a shame--a shameful stain on the profession--that journalists don't respond critically to such absurd nonsense and ask the obvious question: are you fucking lying? It is absolutely not true that AI tools make doctors more effective, or teachers, or programmers. It would be very convenient for people like Pichai and Scam Altman, but that don't make it so.
I’m probably an outlier: I use ChatGPT/Gemini for specific purposes, but AI summaries on e.g. Google Search or YouTube give me negative value (I never read them and they take up space).
I can't say I find them 100% useless - though I'd rather they not pop up by default - and I understand why people like them so much. They let people type in a question and get a confident and definitive answer all in natural language, which is what it seems like the average person has tried to do all along with search engines. The issue is that they think whatever it spits out is 100% true and factual, which is scary.
> Can someone explain to me the rationale of investing in a product, marketing it, seeing that it drives consumers away from your product and erodes trust, and then you continue to invest at an accelerating rate?
Hey now, Google Plus was more than a decade ago. I didn't like it either, but maybe it's time to move on? I think they learned their lesson.
What evidence do you have that it's driving consumers away from the product? The people who bother to say anything on the internet are the extreme dedicated minority and are often not representative of a silent majority. Unless you have access to analytics, you can't make this inference.
The golden goose is not you or I. It is our boss who will buy this junk for us and expect us to integrate it into our workflows or be shown the door. It is the broccoli headed kids who don’t even have to crack open cliffnotes to shirk their academic responsibilities anymore. It is universities that are trying to “keep up” by forcing an AI prompting class as a prerequisite for most majors. These groups represent a lot of people and a lot of money.
It doesn’t have to work. It just has to sell and become entrenched enough. And by all metrics, that is what is happening. A self-fulfilling prophecy, no different than your org buying redundant enterprise software from all the major vendors today.
anyway i totally agree with your reasoning. one might as well ask "why is MS Teams so bad? it's bloated, slow, buggy, nasty to use from a UX pov... yet it's everywhere"
this shitware -- ms teams, llm slopguns, whatever -- never had to work, they just have to sell.
> McKinsey says, while quoting an HR executive at a Fortune 100 company griping: "All of these copilots are supposed to make work more efficient with fewer people, but my business leaders are also saying they can't reduce head count yet."
The problem becomes that eventually all these people who are laid off are not going to find new roles.
Who is going to be buying the products and services if no-one has money to throw around?
I don't even know what the selling point of AI is for regular people. In the 60s it was possible for a man to work an ordinary job, buy a house, settle down with a wife and support two or three children. That's completely out of the realm of reality for many young people now and the plummeting birth rates show it.
The middle class have financially benefited very little from the past 20+ years of productivity gains.
Social media is driving society apart, making people selfish, jealous, and angry.
Do people really think more technology is going to be the path to a better society? Because to me it looks like the opposite. It will just be used to stomp on ordinary people and create even more inequality.
> That's completely out of the realm of reality for many young people now and the plummeting birth rates show it.
I'm skeptical of this explanation for falling birthrates just because birthrates are falling across the world and there seems to be no correlation between fertility and financial security. America has low birthrates. Scandinavia (usually considered to have generous welfare states) has low birthrates. Hungary, where the government gives massive tax breaks (IIRC they spend around ~6% of their GDP on child incentives), has low birthrates. Europe, East Asia, India, the Middle East, the Americas, basically the whole world except for central Asia and Sub-Saharan Africa (which are catching up), has low birth rates. Obviously the economic conditions across basically all the countries in the world vary wildly, but there isn't a consistent relationship between those conditions and fertility.
Also within countries, the number of children people have is not always correlated with wealth (and at times in the past 60 years it has been negatively correlated).
Anyway, I find your argument intuitive, but it doesn't seem to align with the data we have.
I mean, of the countries I know of firsthand, just the US and Japan. "Possible" being a low bar that just means I've seen it at least once.
I don't think data with all of those factors (household income, number of earners per household, gender of the earners, home ownership, and number of children) exists for any country. Do you have data like that for 1960s America or is your argument based on extrapolations from watching Leave it to Beaver?
But if we abstract your hypothesis slightly to: fertility is lower now than in 1960 because people are less financially secure now than they were in 1960, I don't think the data we have supports this.
I have seen it all across the EU. It's pretty doable (granted, you need a university degree). But you can absolutely buy a home and have a couple of children who will have absolutely everything they need.
Yea, because the average Joe totally has a university degree.
However, in Germany a lot of poor people have many children while a lot of academics have fewer [0].
"Doable" also doesn't mean it's pleasant. I checked the rural housing market recently, and for a somewhat acceptable house you will easily pay ~3k per month, given you have a somewhat big starting capital. Not sustainable if one person loses their job for a while. Not to say it was that much easier back in the day; the housing market is just beyond fucked for most ordinary people.
Ordinary men have wives and two children in all those countries. You are also projecting the American lifestyle expectation that "buying a house without family help is necessary" onto countries like Hungary, where this was not an expectation for a really, really long time. Like, generations.
- women don't want to leave the workforce because one salary cannot support a family
- yet women remaining in the workforce, since a single salary is infeasible, doubles the supply of workers and lowers salaries, which itself is what makes it infeasible to support a family on a single income
Not to pick on women; as a feminist, if you ask me, all modern men should have to be houseboys serving their feminine masters. It does suck, but it is necessary to benefit modern women (who did not suffer) by making modern men suffer, to make amends for the suffering of all women throughout history at the hands of all historical men, neither of whom are alive today.
Well that's the point, men are refusing to suffer.
There is little incentive to walk into a contract where you are working all the time, with no appreciation, love, gratitude, or even a thank you. All the while being made to feel like you are not measuring up, and that they'd rather be with somebody else. On top of that, you also come back from work and do all the chores you would if you had remained single.
And if a few years later the other party decides to break the contract, they take your home, get monthly pensions (with raises), and get to start the process all over again with somebody else at your expense.
Plus these days kids don't stay back with aging parents to care for them, so having kids appears pointless as well.
By and large, far from an incentive, marriage and children seem to be a massive negative for men. Hence I wouldn't be surprised by low marriage and birth rates all over the world.
Why would you want to do all this? When you can work, keep the money, and spend it for your pleasure by staying single?
Except that it is men who complain constantly about wanting to marry and have kids, while women are much more content being single and having friends.
You don't have to pay alimony if the wife worked the whole time. That complaint is funny in the context of men demanding a return to a time when the alimony arrangement was a necessary protection.
Even in marriage, it is more often women who initiate divorce, and they report higher happiness after the divorce. Men report lower happiness and are more likely to want to marry again.
A generous welfare state (like the Nordics or Switzerland) does not necessarily mean that the middle class is well off with lots of resources for kids. Usually it's the middle class (+upper class) that pays for the generous welfare state, but gets almost none of the benefits. You don't get/need the welfare, if you earn enough to be considered middle class.
Birth rates correlate negatively with education of women. I read somewhere that this is one of the most robust findings in all of social science (and when I asked Gemini just now whether there was such a correlation, it said the same).
There’s a (positive or negative) correlation between birth rates and dozens of factors, because over the period birth rates have been falling, the world has changed dramatically. The issue is that we don’t know what is causal. Education also correlates with all kinds of other factors, like income, type of work, marital status, and political views, meaning birth rates are also likely correlated with all of those factors.
Is it really true that this is not known? Although I only claimed correlation (and am thus surprised that I was downvoted twice, as that claim is obviously true), based on the famous "robustness" of this observation, I strongly suspect that confounding factors like those you mention have already been analysed to death, and found not to eliminate the explanatory power of women's education.
At least, checking these confounders seems an obviously valuable and interesting avenue to explore. If it hasn't been done yet, I wonder what social scientists are doing instead.
Thanks for pointing out this skewed view of economic history common in North America.
The short period of boom in the 50s/60s US and Canada was driven by WW2 devastation everywhere else. We can see the economic crises in the US first in the 70s/80s as Europe and Japan rebounded, then again in the 90s/00s as China and East Asia grew, and now again with the rest of the world growing (esp. Latin America, India, Indonesia, Nigeria, the Philippines, etc.). Unless the US physically invades and devastates China, India, or Brazil, the competition will keep getting exponentially higher. It's a shame that the US didn't invest all that prosperity into social capital that could have helped create high-value jobs.
In short, it's easier to have high standards of living on your secure isolated island when the rest of the world (including the historical industrial powers) is completely decimated by war.
> In short, it's easier to have high standards of living on your secure isolated island when the rest of the world (including the historical industrial powers) is completely decimated by war.
I assume the idea is more money could've been invested into bringing the bottom rungs of American society up and created a more skilled and educated workforce in the process.
The US has pushed a shit ton of money into education. I mean an unreasonable amount of it went to administrators. But the goal and the intent was certainly there.
Education is part of it. But a lot of the social capital which makes societies prosperous is separate from what we usually consider to be education. On an individual behavior level that includes things like knowing how to show up for work on time, sober, and properly dressed, and follow management instructions without arguing or taking things personally. These are skills that people in the middle and upper classes take for granted but they forget that there are a large number of fellow citizens in the economically disconnected underclass who never had a good opportunity to learn those basics. As a society we've never done a good job of lifting those people up.
> On an individual behavior level that includes things like knowing how to show up for work on time, sober, and properly dressed, and follow management instructions without arguing or taking things personally. These are skills that people in the middle and upper classes take for granted
I don't see your point.
Those rules do not apply to the upper class, and middle-class workers have way more leeway in that regard than the lower class.
This seems to be saying that a large fraction of poor people are poor only because of bad habits, which they have only because nobody taught them any better?
What's your point? I didn't make any claims about averages. We could do a lot more to improve opportunities and social mobility for people caught in the permanent underclass.
But we have. The underclass today has much better lives in many aspects than the highest class from many decades ago. The absolute level of wealth has increased, it's simply that the delta between the high and the low is widening.
Would you rather live equally in poverty or live comfortably with others who are way more wealthy than you? Surprisingly people do seem to prefer the former, though I'd prefer the latter
> I mean an unreasonable amount of it went to administrators. But the goal and the intent was certainly there.
This is wrong.
The increase in administrator pay began well after the crises cited in OP.
You could cite spending on the sciences (and thus Silicon Valley), but the spending by the US did not accrue to administrators; and further, federal money primarily goes to grants and loans, but GP is citing a time over which there were relatively low increases in tuition.
Edit: Not at home, but even a cursory serious search will turn up reports like this one that indicate the lack of clarity in the popular uprising against money "[going] to administrators"
> For universities, yes. But not for primary education. Administrative bloat is the worst in K-12.
First, where is your data?
Second, this discussion is clearly about post-secondary education ("the idea is more money could've been invested into bringing the bottom rungs of American society up and created a more skilled and educated workforce in the process.")
Cheaper education, free/subsidized healthcare, free/subsidized childcare, cultural norms around family support, etc.
Things that let workers focus on innovation. IT workers in cheaper countries have it much easier, while we have to juggle a rising cost of living and cyclical layoffs here. And ever since companies started hiring workers directly and paying 30-50% (compared to 10-15% during the GCC era), the quality is almost on par with the US.
>>> It's a shame that US didn't invest all that prosperity into social capital that could have helped create high value jobs.
>> What does this sentence mean?
> Cheaper education, free/subsidized healthcare, free/subsidized childcare, cultural norms around family support, etc.
Except for free/subsidized healthcare, didn't the US already have those things during the post-war boom?
Cheaper education? Public K-12 schools, the GI bill, generous state subsidies of higher education (such that you could pay for college with the money you made working a summer job).
Free/subsidized childcare, cultural norms around family support? Wages high enough to raise a family on a single income, allowing for stay-at-home moms to provide childcare.
> Except for free/subsidized healthcare, didn't the US already have those things during the post-war boom?
Yes, but the education system is being dismantled piece by piece at all levels. I work in edutech and our goal is to cut costs faster than revenue. Enrollments are down, students are overburdened with student loans, and new grads can't compete in the market.
Also, do you think kids going to K-12 in the US can compete with kids who go to international schools in China and India? High end schools in those countries combine the Asian grind mindset with western education standards.
> Wages high enough to raise a family on a single income, allowing for stay-at-home moms to provide childcare.
This was the special period of post-war prosperity that I mentioned. It was unnatural, and the world has reset back to the norm, where a nuclear family needs societal/governmental support to raise kids, or needs two 6-figure jobs. "It takes a village to raise a child" is a common western idiom based on centuries of observations. Just because there were 20-30 years of unnatural economic growth doesn't make it the global or historical norm.
Education is a tough one. Like healthcare, it's highly subject to Baumol's Cost Disease. Technology holds some potential but fundamentally we still need a certain ratio of teachers to students, and those teachers get more expensive every year.
Education should be well funded. But at the same time, taxpayers are skeptical because increasing funding doesn't necessarily improve student outcomes. Students from stable homes with aspirational parents in safe neighborhoods will tend to do well even with meager education funding, and conversely students living in shitholes will tend to do badly regardless of how good the education system is. If we want to improve their lot then we need to fix broader social issues that go beyond just education. Anyone who has gotten involved with a large school district has seen the enormous waste that goes to paying multiple levels of administrators, and education "consultants" chasing the latest ineffective fad. Much of it is just a grift.
>> Except for free/subsidized healthcare, didn't the US already have those things during the post-war boom?
> Yes, but the education system is being dismantled piece by piece at all levels.
So? That's not really relevant to the historical period you were referring to when you said: "It's a shame that US didn't invest all that prosperity into social capital that could have helped create high value jobs."
At the time, Americans already had many of the things you're saying they should've invested in to get. How were they supposed to predict things would change and agitate for something different without the hindsight you enjoy?
> This was a special period of post war prosperity that I mentioned. It was unnatural and the world has reset back to the norm where a nuclear family needs societal/governmental support to raise kids, or need to have two 6 figure jobs.
Exactly why do you think it is unnatural?
I think you should be more explicit about how you think things should be for families, because going on and on about how the times when things were easier were "unnatural" may create the wrong impression.
Also keep in mind we're talking about human society here; the concept of "natural" has very little to do with any of it. What we're really talking about is the consequence of the internal logic of this or that set of artificial cultural practices.
> How were they supposed to predict things would change and agitate for something different without the hindsight you enjoy?
By comparing themselves to their counterparts in other countries. By 1955 there should have been alarm bells ringing as Europe re-industrialized. Same with the 70s oil crisis, but the best the US could do was cripple Japan with the Plaza Accords.
Americans even now have a mindset that nothing exists beyond their borders, one could assume it was worse back then.
> Exactly why do you think it is it unnatural?
Because only two industrialized countries were left standing after WW2 and those two countries enjoyed unnatural growth until others caught up - first the historical powers in Europe then Asia.
> By comparing themselves to their counterparts in other countries. ... Americans even now have a mindset that nothing exists beyond their borders, one could assume it was worse back then.
That's not realistic, except in hindsight. Most people everywhere pay more attention to their immediate environment and living their lives, not speculating about what the global economy is going to look like in 50 years and how those changes would affect them personally.
You're talking about stuff only some PhD at RAND would be doing (or would have the ability to do) in the 1960s.
Without the democratic pressure of common people either 1) having a need or 2) seeing things get worse, no changes like you describe would happen.
> Because only two industrialized countries were left standing after WW2 and those two countries enjoyed unnatural growth until others caught up - first the historical powers in Europe then Asia.
What's natural?
And more importantly: how do you think things should be for families.
Europe is quite innovative on a per-capita basis. Not like the US, but the workers there have much happier lives, and their societies don't have the extreme inequality and resulting violence of the US.
China is arguably more innovative than all and has terrible work life balance, but their society is stable and you won't go from millionaire to homeless just because you had to get cancer treatment.
GCC = global consulting companies, the bane of innovation. Outsourcing of all kinds (even domestic C2C) should be banned.
Were there a lot of imports at that time in terms of materials or labor or food? If not, I don’t really see how money flowing in from abroad actually changes the economy in this area. If the wood is harvested in America and the workers are in America and the wood and workers are available, then any amount of money value generated by everyone else will be sufficient to pay them, unless there is a significant stream of imports that need to be paid for (which I’m not aware of in this time period).
What could have made a big difference is if foreign competition arose for American materials and land, which it did. But that is under our control, we collectively can choose whether to allow them to buy it or not, and whether to let people in at a rate that outpaces materials discovery and harvesting capabilities.
We also restricted materials harvesting quite a bit during this time period, for example I believe a lot of forestry protections were not in place yet.
So you're saying that working-class living standards are a zero-sum competition across capitalist countries, even negative-sum as competing national economies grow their total output and hourly productivity?
> Thanks for pointing out this skewed view of economic history common in North America....
> In short, it's easier to have high standards of living on your secure isolated island when the rest of the world (including the historical industrial powers) is completely decimated by war.
So, what's your point? That the plebs shouldn't expect that much comfort?
A common maxim across all cultures is to "manage expectations" for happiness.
And when comparing societal standards, expand the time horizon to 100 years rather than nitpicking one specific unnatural era of history.
An automotive engineer in Detroit in 1960 was a globally competitive worker because most of his counterparts in other countries were either dead, disabled or their companies bankrupt.
The equivalent in today's world would be aerospace engineers, AI researchers, quantum engineers, robotics engineers, etc who arguably have the same standard of living as the automotive engineer in 1960s Detroit.
Economic and technological standards evolve - societies should invest in human capital to evolve with them or risk stagnation.
> An automotive engineer in Detroit in 1960 was a globally competitive worker because most of his counterparts in other countries were either dead, disabled or their companies bankrupt.
> The equivalent in today's world would be aerospace engineers, AI researchers, quantum engineers, robotics engineers, etc who arguably have the same standard of living as the automotive engineer in 1960s Detroit.
You know, we're not really talking about top-end positions like automotive engineers in Detroit in 1960. I think we're talking more about automotive factory workers in Detroit in 1960.
> Economic and technological standards evolve - societies should invest in human capital to evolve with them or risk stagnation.
You need to be more explicit about how you think things should be for the common man.
I hope you understand the concept of relative prosperity - the current equivalent would be a factory worker at Boeing. In the 60s, cars were innovative in the US; now Nigeria can outcompete China in cars.
Times change, standards rise, competition increases. If America wants to remain competitive globally, you need to work in the top 1% of fields like you did back in the 60s, not expect $25 per hour for flipping burgers (which should have been automated with robots by now).
>The short period of boom in 50s/60s US and Canada was driven by WW2 devastation everywhere else.
The US just renamed "Department of Defense" to "Department of War" and they seem willing to go to any extreme to "Make America Great Again". Threatening to take over Canada, Greenland, and Panama already in the first few months of the current administration. Using US military on US soil. There's no line they won't cross. WW3 isn't off the table at all, unfortunately.
Yeah if you bar over 50% of your workforce from working at market clearing wages then naturally the other 50% are going to get paid at their expense. When you underpay minorities and often outright ban women from working formal employment, it's not hard to see how wages for the others remain high.
Do you want to take a 20% pay cut so I can take the marginal benefit? Who wants to volunteer to be barred from working so I can negotiate better salary?
Life has improved for nearly everyone on nearly every metric. But if one myopically focuses on house purchasing as the only thing that matters and takes the anomalous post-WW2 period as the baseline, then sure, things are bad (ignoring the fact that housing space and quality plus amenities improved dramatically, but hey, who cares about nuance, we just love to complain!)
Instead of making this dream true for all the people who were previously excluded, we have pursued equality by making this dream accessible to NO ONE.
> Well, this probably why statistics exist.
Like the statistics on plummeting mental health and happiness, an obesity epidemic, increases in "deaths of despair", and plateauing or decreasing life expectancy?
You're both right. I take your point to mean something similar to the disastrous outcome of the "No Child Left Behind" act. I do agree with you, but people didn't seriously _intend_ for the result to be everyone lowered to a shit position.
Or maybe you're saying that's always how these initiatives turn out? It can't be helped?
I think there is something to be valued about historical accuracy.
> Like the statistics on plummeting mental health and happiness, an obesity epidemic, increases in "deaths of despair", and plateauing or decreasing life expectancy?
In the 60s, suicide rates went UP. They peaked around 1970, and we have not reached those levels since.
Long-term statistics about alcoholism rates and drug use are also a real thing. We know the cirrhosis death rate was rising through the 60s into the 70s, then peaked and went down. That was when the drinking-and-driving campaigns started.
Current drug use is nowhere near what it was a generation ago.
Because one party wants to return to those times with the exact same social norms. So it's a dangerous line of thinking to forget that women were walled out of many jobs, or had a huge wage gap when they were let in, and that minorities only barely started to get the same opportunities after a lot of struggle.
>Like the statistics on plummeting mental health and happiness, an obesity epidemic, increases in "deaths of despair", and plateauing or decreasing life expectancy?
Yes. It's only when it affects the majority that we start to pay attention.
A lot of the people who admire the caricatured midcentury economy are probably actually just nostalgic for the '90s. Case-Shiller was much lower, gas was cheap, college was still relatively affordable. The biggest economic complaints of the present day were not as serious then. (There were still affordable parts of the Bay Area!) The subjection of black people and women that existed in the 60s obviously wasn't necessary for those things to be possible.
But each decade's economy is the product of decades past. The policies of the 90s brought us to the present. So we don't want to repeat the mistakes of the 90s, and the 80s are associated with the iniquities of the Reagan administration. Thus you get this misplaced nostalgia for the 50s-70s without really understanding the problems or the progress that society made even as the highest levels of government seemed to drift off course.
The main problem there is soaring housing costs which have nothing to do with technology and everything to do with extremely restrictive planning regulations that make it impossible for the housing supply to keep up with population growth.
This is an excellent metaphor, so don't take this as criticism merely an observation, but it skews heavily towards the techno-utopian narrative that scam artists like Altman and Pichai keep harping on. Your techno-dystopia makes the same fatal assumption that tech matters much at all. The internet has become television. That's it. It's not nothing but it damn sure ain't everything, and it's just not all that important to most folks.
> Do people really think more technology is going to be the path to a better society? Because to me it looks like the opposite. It will just be used to stomp on ordinary people and create even more inequality.
The problem isn't "more technology" (nor is the solution "less"), but rather a change in who controls it and who benefits. We shouldn't surrender in advance to the idea that the stompers will definitely own it.
> In the 60s it was possible for a man to work an ordinary job, buy a house, settle down with a wife and support two or three children. That's completely out of the realm of reality for many young people now and the plummeting birth rates show it.
Most of the people I see working in tech can easily afford this. Maybe not private schools or McMansions, but the basics are pretty easy. Sure, if you're a humanities major with health problems, it's tough.
This is far from true. Aside from Valley pay, which also has Valley housing costs, a "tech job" will barely pay for healthcare and housing for one, much less healthcare and housing for four.
To me this all just looks like a big frothy chemical reaction playing out far beyond any one person's control.
With that view, many things oscillate over time, including game theory patterns (average interaction intentions of win-win, win-lose, lose-lose), and integration / mitosis (unions, international treaties, civil wars),etc.
So my optimistic view is that inevitably we will get more tech whether we want it or not, and it will probably make things worse for many for a while, but then it will simultaneously enable and force a restructuring at some level that starts a new cycle of prosperity. On the other side, it will be clear that all this tech directly enables a better (more free, more diverse, more rewarding, more sustainable) way of life.
I believe this because from studying history it seems this pattern plays out over and over and over again to varying degrees.
I don't have time to be precise, but I'll do my best to be more specific.
New system better at organizing human behavior -> increases prosperity -> more capacity for invention -> new technologies disrupt power dynamics -> greed and power-law dynamics tilt the system away from broad prosperity (the most powerful switch from win-win to win-lose) -> majority becomes unsatisfied with the system -> economics break down (too much debt, not enough education, technology increasingly and disproportionately benefits the wealthy) -> trust breaks down -> average pattern of behavior tilts toward lose-lose dynamics -> technology keeps advancing -> new technologies disrupt old power structures -> restructuring of the world-power order at the highest levels (often through conflict) -> new system established, incorporating lessons learned from the old (more fair, more inclusive) -> trust reestablished, shift back to win-win dynamics (cycle repeats)
In reality it's more messy than this. Also the geographical location of this cycle and the central power can move around. Some places may sit out one or more cycles and get stuck.
The majority of people are already doing ”bullshit jobs” and many of them know it too. Using AI to automate the bullshit and capture the value leaves them with nothing.
The AI evangelists generally overlook that one of the primary things that capitalism does is fill people’s lives with busywork, in part as an exercise in power and in another part because if given their time back, those people would genuinely have absolutely no idea what to do with it.
What's crazy is that people will jump all over themselves to say "well you could totally live like that at a 1960s level" like that's even a viable possibility today (in the US).
What's that about the falcon and the falconer? The center cannot hold..
People do make it work in the US with tiny incomes and a better standard of living than you’d see in a typical 1960’s household.
I know people raising a family of 4 on 1 income well below the median wage, without a college degree. They do get significant help from government assistance programs for healthcare, but their lifestyle is way better than what was typical in the 1960s.
Granted, they aren't doing this in an ultra-expensive US city, but on the flip side they're living in a 3-bedroom house (huge by 1960s standards) with a massive backyard.
> "In the 60s it was possible for a man to work an ordinary job, buy a house, settle down with a wife and support two or three children."
In the 1930s, it wasn't possible so what's your point? (History time: What happened on October 24, 1929?) Choosing the 1960s as a baseline is artificially cherry-picking an era of economic growth (at the expense of the rest of post-WW2 Europe and Asia who were rebuilding) instead of an era of decline or normalcy.
The thing is, a 1960s standard of living would be totally unacceptable by almost everyone today. Single car max, no air conditioning, small house or apartment, multiple children sharing bedrooms, no cellphones, no Internet, no cable, no streaming. Local calls only. Children allowed outside by themselves.
I think you're out of touch with what "almost everyone" considers an acceptable standard of living. I know plenty of people who have a single car or none at all, live in apartments living pay check to pay check with no kids at all because they are afraid they can't afford them. They would love to have what you described, minus the no cell phones/internet.
A random idea I had a few years ago was, what if someone started a “recent modern Amish” community, where they just intentionally keep the community’s tech usage either fixed at 1960s or 1990s, or maybe a fixed number of years in the past like 30 or 50 (meaning, the time target moves forward by a year each year).
So the kids growing up now might be playing the original Nintendo NES, or maybe an N64, they’d have phones and even computers, etc.
It could even be a little more nuanced like, the community could vote in certain classes of more modern goods.
I feel there is something unsound about that comparison, because you could also apply it to the kings of history, simply by listing things that were technologically unavailable or unaffordable to them.
Imagine transmigrating King Louis XVI (pre-revolution) into some working class professional with a refrigerator, a car, cable TV, etc. I don't think it's a given that he'd consider the package an upgrade.
The "issue" is the comparison is much more complex than people may be led to believe. It's not a simple "adjust the dollars to be the same" calculation.
There are a lot of assumptions that go into making that calculation.
Say I tell you that the value of a dollar you hold has gone down or up this year versus last year because of the price fluctuation of an item you never have or never will purchase, such as hermit crabs in New Zealand.
Would you believe your dollar is worth more or less? What if the price of a good you do spend your dollars on has an inverse relationship with the price of hermit crabs in New Zealand? Or what if the prices of the items you do buy haven't moved at all?
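A toy illustration of the point above: the "value of a dollar" answer depends entirely on which basket of goods you measure it against and how you weight them. All prices and weights below are invented:

```python
# Toy illustration: the "value of a dollar" depends entirely on which
# basket of goods you measure it against. All numbers are invented.

def inflation(basket, last_year, this_year):
    """Weighted percent change in the cost of a basket of goods."""
    cost_then = sum(basket[item] * last_year[item] for item in basket)
    cost_now = sum(basket[item] * this_year[item] for item in basket)
    return (cost_now - cost_then) / cost_then

# Only one price moved: hermit crabs doubled.
last_year = {"rent": 1000.0, "groceries": 300.0, "hermit_crab": 5.0}
this_year = {"rent": 1000.0, "groceries": 300.0, "hermit_crab": 10.0}

# A basket that weights hermit crabs heavily shows inflation...
crab_heavy = {"rent": 0.1, "groceries": 0.1, "hermit_crab": 100.0}
# ...while a basket matching your actual spending shows none.
my_spending = {"rent": 1.0, "groceries": 4.0, "hermit_crab": 0.0}

print(inflation(crab_heavy, last_year, this_year))   # > 0: "your dollar shrank"
print(inflation(my_spending, last_year, this_year))  # 0.0: nothing changed for you
```

Same prices, opposite conclusions, purely from the weighting assumptions baked into the index.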
> I don't even know what the selling point of AI is for regular people.
AI healthcare, for example. Have an entity that can diagnose you in a week at most, instead of being sent from specialist to specialist for months, each one being utterly uninterested in your problem.
A lot of this stuff about baby boomers vs. now is based on how we remember things. The data is more complicated. Example: the average home in 1960 was like 1,600 sq ft; now it's like 2,800 sq ft. Sometimes we are comparing apples to oranges.
I am not trying to blunt social criticism. The redistribution of wealth is a real thing that started in the tax policies of the 1980s that we just can't seem to back away from.
But a lot of people are pushing gambling, crypto, options that are telling people that they have no hope of getting ahead just by working and saving. That's not helpful.
> The average home in 1960 was like 1,600 sq ft; now it's like 2,800 sq ft.
Statements like this are not particularly meaningful unless there is actually a supply of 1600 sqft houses that are proportionally cheaper, otherwise you're just implying a causal relationship with no evidence.
Supply is driven by demand unless there is a monopoly in house building (there isn't). If this weren't the case, one could quickly become a billionaire by starting the first company to build the small houses that are supposedly in demand but not provided by the market.
All this means is there are enough buyers who can afford 2,800 sqft houses to keep builders from wasting a lot on a 1,600 sqft house. There could be a lot more people who want a cheaper 1,600 sqft house (including some of the 2,800 sqft house buyers!) than who want 2,800 sqft houses, but the market will keep delivering the latter as long as the return is better (for the return to improve for 1,600 sqft houses, see about convincing towns and cities to allow smaller lots, smaller setbacks, et c).
You're still presupposing that there's a linear (or at least linear enough to be significant amongst the myriad other factors involved) relationship between the square footage of a house and its cost, and that that relationship extends arbitrarily downwards as you reduce the square footage.
It's one of the main factors. And it can be reduced to almost nothing if a small single-family housing zone is turned into a skyscraper providing accommodation for thousands.
> The problem becomes that eventually all these people who are laid off are not going to find new roles.
> Who is going to be buying the products and services if no-one has money to throw around?
I've wondered about this myself. People keep talking about the trades as a good path in the post-AI world, but I just don't see it. If huge swaths of office workers are suddenly and permanently unemployed, who's going to be hiring all these tradesmen?
If I were unemployed long-term, the one upside is that I would suddenly have the time to do a lot of the home repairs that I've been hiring contractors to take care of.
The other thing I worry about is the level of violence we're likely to see if a significant chunk of the population is made permanently unemployed. People bring up Universal Basic Income as a potential solution, but I think that only addresses part of the issue. Despite the bluster and complaints you might hear at the office, most people want the opportunity to earn a living; they want to feel like they're useful to their fellow man and woman. I worry about a world in which large numbers of young people are looking at a future with no job prospects and no real place for them other than to be made comfortable by government money and consumer goods. To me that seems like the perfect recruiting ground for all manner of extremist organizations.
> If huge swaths of office workers are suddenly and permanently unemployed, who's going to be hiring all these tradesmen?
"Professionals were 57.8 percent of the total workforce in 2023, with 93 million people working across a wide variety of occupations" [1]. A reasonable worst-case scenario leaves about half of the workforce intact as is. We'd have to assume AI creates zero new jobs or industries, and that none of these people can pivot into doing anything socially useful, to expect them to be rendered unemployable.
> if a significant chunk of the population is made permanently unemployed
They won't. They never have. We'd have years to debate this in the form of unemployment insurance extensions.
>We'd have to assume AI creates zero new jobs or industries
Zero American jobs, sure. It's clear that these american industries don't want to invest in America.
>They won't. They never have.
Not permanent, but the trends don't look good. It doesn't remain permanent because mass unemployment becomes a huge political issue at some point, as it is now among Gen Z, who have completely pivoted in the course of a year.
Increased production has always just led to more stuff being made, not more people unemployed. When even our grandparents were kids, a new shirt was something you'd take care of, since you didn't get a new one very often. Now we head to Target and throw 5 into our cart on a whim.
Were there fewer weavers once machines were doing the job (or whatever)? Sure. But it balances out. It's just bumpy.
The big change here is that it’s hitting so many industries at once, but that already happened with the personal computer.
UBI correctly identifies the problem (people can’t afford housing/clothing/food without money) but is an inefficient solution imo. If we want people to have those things, we should simply give them to them.
How much of them, which ones, to who, at what price, who is forced to provide them, how much do they get, what about other needs...
Or we could just give people money and let them do as they wish with it, and trade off between their needs and wants as they see fit (including the decision of whether they want to work to obtain more of their wants).
The right answer to this is not a number, but rather a feedback loop that converges on the right number. When everyone is laid off without production of goods slowing down, the result is deflation; when everyone gets too much money relative to production of goods, the result is inflation. So that means you can use the CPI inflation as a feedback variable, and adjust the UBI amount until the CPI is stable.
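That feedback loop can be sketched as a simple proportional controller. Everything below is a toy: the "economy" model, the gain, and every constant are invented purely to show the mechanism converging on a payment level:

```python
# Sketch of the feedback loop described above: adjust a UBI payment until
# CPI inflation stabilizes near a target. The "economy" here is a toy
# model (inflation rises linearly with money paid out relative to goods
# produced); the gain and all constants are invented for illustration.

TARGET = 0.02   # aim for ~2% CPI inflation
GAIN = 500.0    # adjustment aggressiveness (error halves each step here)

def toy_cpi_inflation(ubi, goods_produced=1_000_000.0, recipients=1000):
    """Invented economy: more money chasing the same goods -> higher inflation."""
    money_paid_out = ubi * recipients
    return money_paid_out / goods_produced - 0.78

ubi = 500.0  # arbitrary starting monthly payment
for _ in range(100):
    error = toy_cpi_inflation(ubi) - TARGET
    # Proportional control: pay out less when inflation runs hot,
    # more when it runs cold (deflation).
    ubi -= GAIN * error

print(round(ubi))                         # settles near 800
print(round(toy_cpi_inflation(ubi), 4))  # settles near TARGET (0.02)
```

The point is only the shape of the mechanism: the "right" UBI amount falls out of the loop rather than being legislated as a fixed number.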
If the plan was to give people the full set of housing/clothing/food then use the poverty line calculation for amount of money. Or the social security calculation.
We can iterate on the exact amount. There are difficulties with UBI but figuring out the amount is a pretty minor one.
> Who is going to be buying the products and services if no-one has money to throw around?
The same people who are buying products and services right now. Just 10% of the US population is responsible for nearly 50% of consumption.
We are just going to bifurcate even more into the haves and have-nots. Maybe that 10% now becomes responsible for 70+% of consumption and everyone else is fighting for scraps.
It won't be sustainable and we need UBI. A bunch of unemployed, hungry citizens with nothing left to lose is a combo that equals violent revolution.
The top 10% of households start at about $212k in income. Plenty of software developers don't make that, but if they have a spouse with a $70k job, they are in the top 10%. However, many software jobs are now in HCOL areas, so they probably don't feel like they are in the top 10%.
Pretty much yeah, I believe it's around $200k/year puts you in that bracket.
If all jobs evaporate, then only asset owners will have money to spend, everyone else is left to fight for scraps so we either all die off or we get mad max.
Or maybe the type of labor desired will be more complex, interesting, and valuable, as it was when we gave up hunting and gathering for farms, and again when we mechanized farming and left for factories and offices.
"All of these copilots are supposed to make work more efficient with fewer people, but my business leaders are also saying they can't reduce head count yet."
Duh, if they reduce headcount, then they will have fewer people in their department, which will negatively affect their chances for promotion and the desirability of their resume. That's why they actually offshore the jobs to India and Southeast Asia; it lets them have 3x+ the headcount for the same budget.
If you want to have them actually reduce headcount, make org size the denominator in their performance reviews, so a director with 150 people needs to be 15x more productive than a manager with 10, who needs to be 10x more productive than the engineer IC. I guarantee that you will see companies collapse from ~150,000 employees to ~150, and profit/employee numbers in the millions (and very likely, 90% unemployment and social revolution). This is an incentive issue, not a productivity issue. Most employees and their employers are woefully unproductive because of Parkinson's Law.
You'll never see a manager or even a managing-CEO propose this, though, because it'll destroy their own marketability in the management job market. Only an owner-CEO would do it - which some have, eg. Valve, Nintendo, Renaissance Technologies. But by definition, these are minority employers that are hard to get into, because their business model is to employ only a few hundred people and pay them each millions of dollars.
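For illustration, the denominator scheme described above can be written out directly. The names, value figures, and org sizes below are all hypothetical:

```python
# Hypothetical scoring rule from the comment above: divide output by org
# size, so a bigger org must justify every head. All figures are invented.

def productivity_score(value_delivered, org_size):
    """Value produced per person across the reviewee's whole org."""
    return value_delivered / org_size

ic       = productivity_score(value_delivered=1.0,  org_size=1)    # solo engineer
manager  = productivity_score(value_delivered=8.0,  org_size=10)   # 10 reports
director = productivity_score(value_delivered=90.0, org_size=150)  # 150 reports

# On absolute output the director "wins" (90 > 8 > 1). With org size as
# the denominator, the ranking inverts, so growing headcount without
# proportional output now hurts the review.
print(ic, manager, director)  # 1.0 0.8 0.6
```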
Intuitively, the whole economy cannot be B2B SaaS companies funded by VCs. At some point you need to provide value to consumers or the government. If those consumers don't have any money and/or aren't willing to spend a paycheck making Studio Ghibli profile pics, you have a problem. I guess Sam Altman has been asking for a government bailout, so maybe he is going for the B2G option in a backwards sort of way.
>Who is going to be buying the products and services if no-one has money to throw around?
Let me answer your question with another question: if the population pyramid is inverted and the birth rate is something like 1.1 babies per 2 adults, then how is any market going to grow? Seems to me all markets will halve, on top of what you pointed out. Or I suppose it's a happy accident if our workforce halves as our work halves; but still, the consumer market has halved. It does make me wonder under what reality one would fathom that the stock market would go up long term.
The narrative going around in AI skeptic circles at least is that these layoffs are not tied to AI but to covid-era over-hiring, and that companies have an incentive to blame the layoffs on AI rather than admit underperformance/bad planning.
AGI succeeds and there are mass layoffs, money is concentrated further in the hands of those who own compute.
OR
AI bubble pops and there are mass layoffs, with bailouts going to the largest players to prevent a larger collapse, which drives inflation and further marginalizes asset-less people.
I honestly don't see a third option unless there is government intervention, which seems extremely far fetched given it's owned by the group of people who would benefit from either scenario presented above.
> I honestly don't see a third option unless there is government intervention
Bailouts are government intervention. The third option is an absence of government intervention, at least at the business level. By all means intervene in the form of support for impacted individuals, e.g. making sure people have food on the table. Stop intervening to save businesses that fail.
Does the USA even have enough money to rescue the tech giants at this point? We could be talking multiple trillions of dollars at worst. And the AI-only companies like OpenAI and Anthropic would be the most vulnerable compared to, say, Google or Microsoft, because they have no fallback and no sustainability without investor money.
And Nvidia would be left in a weird place where the vast majority of their profits are coming from AI cards and demand would potentially dry up entirely.
There is talk of a bailout, but first, is it even possible? And second, how long would it postpone the issue? A massive increase in government debt used for a bailout likely leads to more inflation, which leads to higher interest rates, making that debt much more expensive. And at some point the credibility of that debt, and of the dollar in general, will be gone.
Of course, this does lead to ever-increasing paper valuations. So maybe that is the win they are after.
Tbh, the answer is simple: if we truly get AGI, the government would nationalize it because it's a matter of national security and prosperity for that matter. Everything will change forever. Agriculture, Transportation, Health... Breakthrough after breakthrough after breakthrough. The country would hold the actual key to solve almost any problem.
when you write it out like that, it sounds unfathomably… silly.
I'm not a tinfoil-hat skeptic, and I'd like to think I can accept the rationale behind the possibility. But I don't think we're remotely as close as people seem to think.
As technology changes over history, governments tend to emerge that reflect the part of the population that can maintain a monopoly of violence.
In the Classical Period, it was the citizen soldiers of Rome and Greece, at least in the west. These produced the ancient republics and proto-democracies.
These were later replaced by professional standing armies under people like Alexander and the Caesars. This allowed kings and emperors.
In the Early to Mid Medieval period, those were replaced by knights, elites who allowed a few men to defeat commoners many times their number. This caused feudalism.
Near the end of the period, pikes and crossbows and improved logistic systems shifted power back to central governments, primarily kings/emperors.
Then, with rifles, this swung the pendulum all the way back to citizen soldiers between the 18th and early 20th century, which brought back democracies and republics.
Now the pendulum is going in the opposite direction. Technology and capital distribution has already effectively moved a lot of power back to an oligarchic elite.
And if full AGI is combined with robots more physically capable than humans, it could swing all the way. In principle, a single monarch could gain a monopoly on violence over an entire country.
Do not take for granted that our current understanding of what the government is will stay the same.
Some kind of merger between capital and power seems likely, where democratic elections quickly become completely obsolete.
Once the police and military have been mostly automated, I don't think our current system is going to last very long.
The same could be said for the nuclear arms race. The problem is that you can't afford to let a competitor/foreign country own this technology. You must invest. The problems have to be figured out later.
> problem ... laid off are not going to find new roles
Not necessarily. If AI improves productivity (which it hasn't very much yet), there is the option to make more stuff rather than the same output with fewer people.
The Luddites led on to Britain being the workshop of the world, not to everyone sitting around unemployed, at least not for a while.
Depends how absolute one takes the statement "no-one has money to throw around".
Taken loosely, we have seen previous developments which make a large fraction of a population redundant in short periods, and it goes really badly, even though the examples I know of are nowhere near the entire population.
I'm not at all sure how much LLMs or other GenAI are "it" for any given profession: while they impress me a lot, they are fragile and weird, and I assume that if all development stopped today, the current shininess would tarnish fast enough.
But on the other hand, I just vibe coded 95% of a game that would have taken me a few months back when I was a fresh graduate, in a few days, using free credit. Similar for the art assets.
How much money can I keep earning for that last 5% the LLM sucks at?
It's not even about reducing headcount but about offshoring too. I see it in my industry. Major orgs are all hiring in Bangalore now. Life is good if you are in Bangalore or Hyderabad. AI is seen as something to smooth over the previous language/skill/culture gaps that may have been plugging the dam so far.
> Who is going to be buying the products and services if no-one has money to throw around?
Here's hoping we figure that out soon, otherwise we're going to see how long it takes the poor to reinvent the guillotine.
Personally, I'm kind of hoping for sooner rather than later. The greed and vice of the upper stratosphere of society is wildly out of control and needs to be brought to heel.
Google, Meta, Microsoft, and Amazon will get through easily. They don't have excessive debt. They can afford to lose their investments into AI. Their valuations will take a hit. Nvidia will lose revenue and profits, stock will go down by 60% or more, but it will also survive.
Oracle will likely fail. It funded its AI pivot with debt. The Debt-to-Revenue ratio is 1.77, the Debt-to-Equity ratio D/E is 520, and it has a free cash flow problem.
OpenAI, Anthropic, and others will be bought for cents on the dollar.
Microsoft is in a pickle. They put AI lipstick on top of decades of unfixed tech debt and their relationship with their userbase isn't great. Their engineering culture is clearly not healthy. For their size and financial resources, their position in the market right now is very delicate.
I think that's the impression you get if you focus on Microsoft as an OS vendor. It's not that anymore; that's why their OS has sucked for many years now. Their main business is B2B: cloud services and Azure. I think they are pretty safe from OpenAI. Plus they have invested big in OpenAI as well.
Windows is hard to replace in large organizations. Are there actually any real AI competitors in that stack? Well, Google, maybe. The whole Windows+Office+AD+Exchange (and now Azure) stack is unlikely to go away any time soon, however badly they screw it up.
True. Basically any medium to large scale business is reliant on Windows/Office/AD. While there are open source alternatives to Windows/Office, I can't think of a good open source alternative to AD/Group Policy/etc
I disagree. They're the one place that can get away without investing in frontier model research and still win in the enterprise.
Google is only place that serves the enterprise (Workspace for productivity, Cloud for IT, Devices for end users) AND conducts meaningful AI research.
AWS doesn't (they can sell cloud effectively, but don't have any meaningful in-house AI R&D). Meta doesn't (they don't cover the enterprise and, frankly, nobody trusts Zuck... and they're flaky).
Oracle doesn't. They have grown their cloud business rapidly by 1) easy button for Oracle on-prem to move to OCI, and 2) acting like a big colo for bare metal "cloud" infra. No AI.
OpenAI has fundamental research and is starting to have products, but it's still niche. Same for Anthropic. They're not in the same ball game as the others, and they're going to continue paying the hyperscalers billions annually for infra, too.
This is Google's game to lose, imho, but the biggest loser will be AWS (not Azure/Microsoft).
They are one of the few companies actually making money with AI, as they have intelligently leveraged Office 365's position in companies to sell Copilot. Their AI investment plans are, well, plans, which could be scaled down easily. The worst-case scenario for them is their investment in OpenAI becoming worthless.
It would hurt, but it's hardly life-threatening. Their revenue driver is clearly their position at the heart of enterprise IT, and they are pretty much untouchable there.
> Worst case scenario for them is their investment in OpenAI becoming worthless.
And even then, if that happens when the bubble pops, they'll likely just acquire OpenAI on the cheap. Thanks to the current agreement, it already runs on Azure, they already have access to OpenAI's IP, and Microsoft has already developed all their Copilots on top of it. It would be near-zero cost for Microsoft at that point to just absorb them and continue on as they are today.
Microsoft isn't going anywhere, for better or for worse.
Despite them pissing off users with Windows, what HN forgets is that those users aren't Microsoft's customer. The individual user/consumer never was. We may not want what MS is selling, but their enterprise customers definitely do.
I cry for Elon, that precious jewel of a human being.
Tesla (P/E: 273, PEG: 16.3): the car maker without the robots and robotaxis justifies less than 15% of Tesla's valuation at best. When the AI hype dies, the selloff starts, and negative sentiment hits, we have a below-$200B market cap company.
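A rough back-of-the-envelope version of that claim. The market cap figure below is an assumption for illustration; the 15% share comes from the comment above:

```python
# Back-of-the-envelope version of the claim above. The market cap is an
# assumption for illustration; the 15% figure comes from the comment.

market_cap = 1.3e12        # assumed Tesla market cap, ~$1.3T
car_business_share = 0.15  # at best, per the comment; the rest is robot/robotaxi hype

# Strip the hype premium and value only the car business.
implied_cap = market_cap * car_business_share
print(f"${implied_cap / 1e9:.0f}B")  # below $200B
```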
You will be able to rent a whole Meta datacenter with thousands of NVIDIA B200 for $5/hour. AWS will become unprofitable due to abundance of capacity...
> Google, Meta, Microsoft, and Amazon will get through easily. They don't have excessive debt. They can afford to lose their investments into AI.
Survive, yes. I don't think anybody ever questioned this.
I wonder if they will be able to remain "growth stocks", however. These companies are allergic to being seen as mature companies, with more modest growth profiles, profit sharing, etc.
> OpenAI, Anthropic, and others will be bought for cents on the dollar.
OpenAI is an existential threat to all of big tech, including Meta, Google, Microsoft, and Apple. Hence, they're all spending lavishly right now so as not to get left behind.
Meta --> GenAI content creation can disrupt Instagram. ChatGPT likely has more data on a person than Instagram does by now for ad targeting. ChatGPT is already at 800 million daily active users.
Google --> Cash cow search is under threat from ChatGPT.
Microsoft --> Productivity/work is fundamentally changed with GenAI.
Apple --> OpenAI can make a device that runs ChatGPT as the OS instead of relying on iOS.
I'm betting that OpenAI will emerge bigger than current big tech in ~5 years or less.
> Apple --> OpenAI can make a device that runs ChatGPT as the OS instead of relying on iOS.
Yeah... no, they can't. I don't agree with any of your "disruptions," but this one is just comically incorrect. There was a post on HN somewhat recently showing a computer simulated by LLMs, and it was unusable.
Not to mention you would need an order of magnitude improvement in on-device inference speed to make this feasible at current smartphone costs. Or they could offload it and sell an insecure overpriced-subscription laggy texting device that bricks when you don’t have cell service…
I find myself doing more and more inside ChatGPT. When ChatGPT inevitably can generate GUIs on the fly, book me an uber, etc. I don't see why iOS wouldn't have competition.
They have <a really expensive> infrastructure that serves 800 million monthly active <but non-paying> users.
Even worse, they train their model(s) on the interactions of those non-paying users, which makes the model(s) less useful for paying customers. It's a case of "you cannot charge Porsche prices if you only satisfy the needs of a typical Dacia owner".
> They have <a really expensive> infrastructure that serves 800 million monthly active <but non-paying> users.
I don't pay Meta any money too. Yet, Meta is one of the most profitable companies in the world.
I give more of my data to OpenAI than to Meta. ChatGPT knows so much about me. Don't you think they can easily monetize their 800 million (close to 1 billion by now) users?
Clearly, a lot of people here disagree with you. Doesn't mean you cannot be right, but in general, the HN crowd is a pretty good predictor of the trends in the tech industry.
Nobody was predicting the dotcom or financial-crisis bubbles. The fact that everyone and their grandma is calling this a bubble makes me think that it simply can't be one.
Weird example to trot out as a bubble when at any point in its history, if you held for a few years or so you’d be pretty far ahead on your investment. It clearly shows people are awful at calling out bubbles.
OpenAI has yet to make a single, solitary thing that works well. It's nothing but Sam Altman hyping things. They aren't an existential threat to anyone.
ChatGPT 3 and 4 were impressive and kind of kicked off the current AI boom/bubble. Since then, though, Altman changing the non-profit OpenAI into a kind of for-profit Closed AI seems to have led to a lot of talent leaving.
> Apple --> OpenAI can make a device that runs ChatGPT as the OS instead of relying on iOS.
Or, instead of spending billions training models that are nearly all the same, they could take advantage of all the datacenters full of GPUs, and of the AI companies frantically trying to make a profit (many most likely crashing and burning in the process), to pay relative pennies to use the top, nearly commoditized, model of the month?
Then, maybe someday, starting late and taking advantage of the latest research and training methods that shave years off training time, they could save billions on a foundation model of their own?
I don't think it makes sense for Apple to be an AI company. It makes sense for them to use AI, but I don't see why everyone needs their own model right now, during all the churn. It's nearly a commodity already. In-house doesn't make sense to me.
No, LLMs are an existential threat. OpenAI is a heavily leveraged proprietary-model company selling inference, which often has a model a few months ahead of its competitors'.
AI isn’t bullshit, but selling access to a proprietary model has certainly not been proven as a business model yet.
> He told the BBC that the company owns what he called a “full stack” of technologies, from chips to YouTube data to models and frontier science research. This integrated approach, he suggested, would help the company weather any market turbulence better than competitors.
I guess, but is it better for an investor to own 2 shares of Google, or 1 share of OpenAI and 1 share of TSMC?
I have no doubt that being vertically integrated as a single company has lots of benefits, but one can also create a trust that invests across the vertical as well.
There may be firm-specific risk, etc., but there is also the concept of double marginalization: monopolies stacked across the vertical layers of a production chain are less efficient than a single integrated monopoly, because with one monopoly you only get a single layer of deadweight loss rather than multiple.
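The double-marginalization point above can be made concrete with a textbook toy model. This is a minimal sketch with made-up demand and cost parameters (linear demand P = a - bQ, constant upstream marginal cost c), not a claim about any real company:

```python
# Double marginalization: one integrated monopolist vs. two stacked monopolists.
# Hypothetical parameters: linear demand P = a - b*Q, upstream marginal cost c.
a, b, c = 100.0, 1.0, 20.0

# Integrated monopolist: maximize (a - b*Q - c) * Q over Q.
q_int = (a - c) / (2 * b)            # output: 40 units
p_int = a - b * q_int                # consumer price: 60
profit_int = (p_int - c) * q_int     # profit: 1600

# Stacked monopolists: upstream sets wholesale price w, then downstream
# maximizes (a - b*Q - w) * Q, giving Q = (a - w) / (2b).
w = (a + c) / 2                      # upstream's optimal wholesale price: 60
q_chain = (a - w) / (2 * b)          # output: 20 units (half the integrated level)
p_chain = a - b * q_chain            # consumer price: 80 (higher than integrated)
profit_chain = (w - c) * q_chain + (p_chain - w) * q_chain  # 800 + 400 = 1200
```

The stacked chain ends up with a higher consumer price, half the output, and lower combined profit than the single integrated monopolist, which is exactly the extra layer of deadweight loss the comment describes.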
Well, if AI goes poof, the equity markets take a really big, bad hit. So I would probably move out of equities into something more concrete, and reinvest if you can time the market bottom.
Nvidia's earnings tomorrow will be the litmus test for whether things are going to topple over.
It might bring in the schedules, but since it probably wouldn't cause there to be an actual hole, it's really more about long-term fab build plans than anything else.
Agreed, it's when. They're hoping to stave it off or maybe stretch out the pop into a correction by all hedging together with all these incestuous deals, but you can't hold back the tide. They debuted this tech way too early, promised way too much, and now the market is wary about buying AI products until more noise settles out of the system.
> They debuted this tech way too early, promised way too much,
Finally, some rational thought amid the AI insanity. The entire "fake it til you make it" aspect of this is ridiculous. Sadly, in the world we live in, you can't build a product and hold its release until it works; you have to be first to release even if it's not working as advertised. You can keep brushing off critiques with "it's on the roadmap," and those who are not as tuned in will just assume it works and that nothing nefarious is going on. For as long as we've had paid LLM apps, I'm still amazed at the number of people who do not know that the output is still not 100% accurate. There are also people who use words like "thinking" to describe getting a response, and misleading phrases like "searching the web..." when, on this forum, we all know it's not a live search.
> sadly, the world we live in means that you can't build a product and hold its release until it works. you have to be first to release even if it's not working as advertised.
You absolutely can and it's an extremely reliable path to success. The only thing that's changed is the amount of marketing hype thrown out by the fake-it vendors. Staying quiet and debuting a solid product is still a big win.
> I'm still amazed at the number of people that do not know that the output is still not 100% accurate.
This is the part that "scares" me: people who do not understand the tool thinking these things are ACTUALLY INTELLIGENT. Not only are they not intelligent, they're not even ACTUALLY language models, because few LLMs are trained on only language data and none work on language units (letters, words, sentences); tokens are abstractions of those. They're OUTPUT modelers. And they're absolutely not even close to being ready to be let loose unattended on important things. There are already people losing careers over AI messes, like lawyers using AI to appeal the sanctions they got for having AI write a motion. Etc.
And I think that was ultimately the biggest unforced error of these AI companies and the ultimate reason for the coming bubble crash. They didn't temper expectations at all, the massive gap between expectation and reality is already costing companies huge amounts of money, and it's only going to get worse. Had they started saying, "these work well, but use them carefully as we increase reliability" they'd be in a much better spot.
In the past 2 years I've been involved in several projects trying to leverage AI, and all but one have failed. The most spectacular failure was Microsoft's Dragon Copilot. We piloted it with 100 doctors; after a few months we had a 20% retention rate, and by the end of a year, ONE doctor still liked it. We replaced it with another tool that WORKS, docs love it, and at 12.6% of the cost it was literally about an eighth of the price. MS was EXTREMELY unhappy we canceled after a year and tried to throw discounts at us, but ultimately we had to say, "the product does not work nearly as well as the competition."
I think they already have that confirmation. When we bailed the banks out in '08, we basically said, "If you're big enough that we'd be screwed without you, then take whatever risks you like with impunity."
That's a reduction of complexity, of course, but the core of the lesson is there. We have actually kept on with all the practices that led to the housing crash (MBS, predatory lending, Mixing investment and traditional banking).
> "If you're big enough that we'd be screwed without you then take whatever risks you like with impunity"
I know financially it will be bad because number not go up and number need go up.
But do we actually depend on generative/agentic AI in any meaningful way? I'm pretty sure all LLMs could be Thanos-snapped away and there would be near-zero material impact. If the studies are at all reliable, all the programmers would actually be more efficient. Maybe we'd be better off because there wouldn't be so much AI slop.
It is very far from clear that there is any real value being extracted from this technology.
The government should let it burn.
Edit: I forgot about “country girls make do”. Maybe gen AI is a critical pillar of the economy after all.
> I’m pretty sure all LLMs could be Thanos snapped away and there would be near zero material impact.
I mostly agree, but I don't think it's the model developers that would get bailed out. OpenAI & Anthropic can fail, and should be let to fail if it comes to that.
Nvidia is the one that would get bailed out. As would Microsoft, if it came to that.
I also think they should be let to fail, but there's no way the US GOV ever allows them to.
Why would Nvidia need a bailout? They have $10 billion of debt and $60 billion of cash... Or have they finally abandoned any trust in the market and are just propping up valuations? That would lead to inevitable doom.
> Nvidia is the one that would get bailed out. As would Microsoft, if it came to that.
> I also think they should be let to fail, but there's no way the US GOV ever allows them to.
There are different ways to fail, though: liquidation, versus a reorganization that wipes out the shareholders.
OpenAI could be liquidated and all its technology thrown in to the trash, and I wouldn't shed a tear, but Microsoft makes (some) stuff (cough, Windows) that has too much stuff dependent on it to go away. The shareholders can eat it (though I think broad-based index funds should get priority over all other shareholders in a bankruptcy).
I expect the downvotes to come from this as they always seem to do these days, but I know from my personal experience that there is value in these agents.
Not so much for the work I do for my company, but having these agents has been a fairly huge boon in some specific ways personally:
- search replacement (beats google almost all of the time)
- having code-capable agents means my pet projects are getting along a lot more than they used to. I check in with them in moments of free time and give them large projects to tackle that will take a while (I've found that having them do these in Rust works best, because it has the most guardrails)
- it's been infinitely useful to be able to ask questions when I don't know enough to know what terms to search for. I have a number of meatspace projects that I didn't know enough about to ask the right questions, and having LLMs has unblocked those 100% of the time.
Economic value? I won't make an assessment. Value to me (and I'm sure others)? Definitely would miss them if they disappeared tomorrow. I should note that given the state of things (large AI companies with the same shareholder problems as MAANG) I do worry that those use cases will disappear as advertising and other monetizing influences make their way in.
Slop is indeed a huge problem. Perhaps you're right that it's a net negative overall, but I don't think it's accurate to say there's not any value to be had.
I'm glad you had positive experiences using this specific technology.
Personally, I had the exact opposite experience: Wrong, deceitful responses, hallucinations, arbitrary pointless changes to code...
It's like that one junior I requested to be removed from the team after they peed in the codebase one too many times.
On the slop I have two sentiments: lots of slop = higher demand for my skills to clean it up. But also lots of slop = worse software on probably most things, impacting not just me but also friends, family, and the rest of humanity. At least it's not only a downside :/
It all depends on whether MAGA survives as a single community. One of the few things MAGA understands correctly is that AI is a job-killer.
Trump going all out to rescue OpenAI or Anthropic doesn't feel likely. Who actually needs it, as a dependency? Who can't live without it? Why bail out entities you can afford to let go to the wall (and maybe then corruptly buy out in a fire sale)?
Similarly, can you actually see him agreeing to bail out Microsoft without taking an absurd stake in the business? MAGA won't like it. But MS could be broken up and sold; every single piece of that business has potential buyers.
Nvidia, now that I can see. Because Trump is surrounded by crypto grifters and is dependent on crypto for his wealth. GPUs are at least real solid products and Nvidia still, I think, make the ones the crypto guys want.
Google, you can see, are getting themselves ready to not be bailed out.
> One of the few things MAGA understands correctly is that AI is a job-killer
Trump (and by extension MAGA) has the worst job growth of any President in the past 50 years. I don't think that's their brand at all. They put a bunch of concessions to AI companies in the Big Beautiful Bill, and Trump is not running again. He would completely bail them out, and MAGA will believe whatever he says, and congress will follow whatever wind is blowing.
If Meta or Google disappeared overnight, it would be, at worst, a minor annoyance for most of the world. Despite the fact that both companies are advertising behemoths, marketing departments everywhere would celebrate their end.
Then they would just use another Messenger or fall back on RCS/SMS.
The only reason WhatsApp is so popular is that so many people are on it, but you have all you need (their phone number) to contact them elsewhere anyway.
So if WhatsApp had an outage, but you needed to communicate to someone, you wouldn't be able to? Don't you have contacts saved locally, and other message apps available?
In most of Asia, Latin America, Africa, and about half of Europe?
You’d be pretty stuck. I guess SMS might work, but it wouldn’t for most businesses (they use the WhatsApp business functionality, there is no SMS thing backing it).
Most people don't even use text anymore. China has its own apps, but everyone else uses WhatsApp exclusively at this point.
Brazil has repeatedly had judges punish WhatsApp by blocking it nationwide, and every time that happened, Telegram gained hundreds of thousands of new users.
The bubble may well burst when the corporations are denied the enormous quantity of energy that they claim they need "to innovate". From TFA:
"""
Mr Pichai said action was needed, including in the UK, to develop new sources of energy and scale up energy infrastructure.
"You don't want to constrain an economy based on energy, and I think that will have consequences," he said.
He also acknowledged that the intensive energy needs of its expanding AI venture meant there was slippage on the company's climate targets, but insisted Alphabet still had a target of achieving net zero by 2030 by investing in new energy technologies.
"""
"Slippage" in this context probably means, "We no longer care about climate change but we don't feel that mere citizens are ready to hear us say it."
They got enough slush money to make this go on for a couple of years.
I am shocked that they know it is a bubble and are doing nothing to amortize it. Which means they expect the government to step in and save their butts.
I've been trying to grok this idea of when a bubble pops. In theory, if everyone knows it's a bubble, that should cause it to pop, because people should be making their way to the exits, playing musical chairs to get their money out early.
But as I try to build a narrative of bubbles and bursts, one thing I realize is that for a bubble to burst, people essentially have to want it to burst (or, put the other way, have to stop wanting to keep it going).
Take Bernie Madoff: he got caught because he couldn't keep paying dividends in his Ponzi scheme, and people started withdrawing money. But in theory, even if everyone knew, as long as no one withdrew their money (or told the SEC), he could have kept using current deposits to pay dividends for a few more years. The Ponzi scheme didn't _have_ to end; the bubble didn't have to pop.
So I've been wondering: if everyone knows AI is a bubble, what has to happen for it to collapse? If a price is whatever people are willing to pay, then for Tesla to collapse, people have to decide they no longer want to pay $400 for Tesla shares. If they keep paying $400, it will continue to be worth $400.
So, in the simplest terms: as long as people perceive AI companies to have the biggest returns, and don't want to move their money somewhere with higher returns (similar to TSLA bulls), the bubble won't pop.
And I guess that can keep happening as long as the economy keeps growing. If circular deals are causing the stock market to keep rising, can they just go on like this forever?
The downside, of course, is the starvation of investment in other parts of the economy, and giving up what may be better gains. It's game theory: as long as no one decides to stop playing, and, say, pulls all their money out and puts it into, I dunno, bonds or GME, the music keeps playing?
You're overcomplicating something that is very simple. The stock market reflects people's sentiments: greed, excitement, FOMO, despair…
A bubble doesn’t need a grand catalyst to collapse. It only needs prices to slip below the level where investors collectively decide the downside risk outweighs the upside hope. Once that threshold is crossed, selling accelerates, confidence unravels, and the fall feeds on itself.
It's important to keep in mind the difference between the stock market and the economy.
Economically, AI is a bubble, and lots of startups whose current business model is "UI in front of the OpenAI API" are likely doomed. That's just economic reality - you can't run on investor money forever. Eventually you need actual revenue, and many of these companies aren't generating very much of it.
That being said, most of these companies aren't publicly traded right now, and their demise would currently be unlikely to significantly affect the stock market. Conversely, the publicly traded companies who are currently investing a lot in AI (Google, Apple, Microsoft, etc) aren't dependent on AI, and certainly wouldn't go out of business over it.
The problem with the dotcom bubble was that there were a lot of publicly traded companies that went bankrupt. This wiped out trillions of dollars in value from regular investors. Doesn't matter how much you may irrationally want a bubble to continue - you simply can't stay invested in a company that doesn't exist anymore.
On the other hand, the AI bubble bursting is probably going to cost private equity a lot of money, but not so much regular investors unless/until AI startups (startups dependent on AI for their core business model) start to go public in large numbers.
I think the targeted-ad revenue all the LLM providers will get from using everyone's regular chat data plus credit card datasets for training is going to be insanely good.
Plus, the information they can provide to the state on user sentiment is also going to be greatly valued.
Didn't Perplexity make only something like $27K from ad revenue? They're going to have to actively compete for Google and Facebook ad dollars as Google and Facebook develop competing products.
Eventually the money to invest will run out. If the companies' earnings don't catch up, we'll reach a point where stock prices peak, expected future returns are limited, and then it'll pop when there's a better opportunity for the money.
Imagine interest rates go up and you can get 5% from a savings account. One big player pulls out cash, triggering a minor drop in AI stocks. Panic selling follows as everyone tries not to be the last one out the door, margin calls hit, etc.
You're assuming cash will never stop flowing in and driving up prices. It will. The only way this goes on forever is if the companies end up being wildly profitable.
This one? When China commits to subsidising and releasing cutting-edge open-source models. What BYD did to Tesla's FSD fee dreams, Beijing could do to American AI's export ambitions.
I'm not. A few podcasts I've listened to recently (mostly Odd Lots) explored how a pop is often preferable to a protracted downturn because it weeds out the losers quickly and allows the economy to begin the recovery aspect sooner. A protracted downturn risks poorly managed assets limping along for years instead of having capital reallocated to better investments.
which is kind of sad to think about.
The US could have used all that money to actually invest in its infrastructure, schools, hospitals, and the general wellbeing of its workforce, to make the economy thrive.
It's not "the US" who's investing the money. This is the same problem people run into when they say, "we should just put money into more trains and buses rather than self driving cars".
Private actors are the ones who are investing into AI, and there's no real way for them to invest into public infrastructure, or to eventually profit from it, the way investors reasonably expect to do when they put up their money for something.
It's the government that can choose to invest in infrastructure, and it's us voters who can choose to vote for politicians who will make that choice. But we haven't done that. So many people want to complain endlessly about government and corporations (not entirely without merit, of course), but are then quick to let voters off the hook.
I think the economic background has changed. 2008 came after a big run-up in wealth, so the reversion wasn't so bad; there was some fat to cut. Since then, people have been ground down to the breaking point, and another 2008-style wipeout will cut into the bone. I do think this time it could be different.
Sort of? My thoughts are that there's something of an AI arms race and the US doesn't want to lose that race to another country... so if the AI bubble pops too fiercely, there may likely be some form of intervention. And any time the government intervenes, all bets are off the table. Who knows what they will do and what the impact will be.
I can see them intervening to preserve AI R&D of some sort, but many of the current companies are running consumer oriented products. Why care if some AI art generation website goes bust?
It feels as if every CEO used ChatGPT once and said “wow this is incredible, pivot hard to make our product use AI”, and that’s about all the thought has matured to.
Most bubbles occur due to excessive levels of credit offered too cheaply, resulting in a whole bunch of defaults happening at the same time. All the major AI players have borrowed money to buy GPUs and build data centers, and have used Special Purpose Vehicles (SPVs) so the debt doesn't fall on their own balance sheets, probably using a certain amount of stock as collateral. If an SPV defaults, could that trigger a big sell-off?
If they’ve securitized and sold their data center buildout, will the big clouds and AI labs actually face any severe impact? While the sums are huge, most of these companies have the cash on hand to pay down the debt. The big AI labs have said their models earn enough to cover the cost to train themselves, just not the next one. This means they could at any time walk away from the compute spend for training.
With the heavy securitization of all these deals, will the “bubble pop” just hurt the financial industry?
If a company like CoreWeave sees their SPV for a Microsoft-specific data center go bankrupt, that means MSFT decided to walk away from the deal. A red flag for the industry, but also a sign of fiscal restraint. Someone else can swoop in and buy the DC for cheap, while MSFT avoids the opex hit. The losers seem likely to be whoever bought that SPV debt, which probably isn't a tech company.
Right, insurance companies are the new "financial dark matter". The next financial crisis will probably be triggered when a few large life and property insurers fail because they purchased debt assets which were highly rated but turn out to be junk. (Medical and auto insurers aren't exposed here because they operate on much shorter timeframes.)
It feels like AI investment and product focus is now a religion or cult. You have to be so fully invested, blind from any data, and throwing billions of dollars at it, otherwise you’re not “in”.
Meanwhile, no one in my sphere of tech and non-tech people wants "more AI". They see the pros and cons of ChatGPT and use it as a kind of fancy Google search.
Where’s the “killer app” that’ll generate literally trillions in revenue to offset the costs? How do the economics work, especially when GPUs are depreciating?
I think it will pop but not in the way everyone thinks it will pop. There's plenty that's not going to go away / anywhere, but I'm sure lots of startups will fail and close their doors.
What way do you envision it popping? Nvidia has tons of investments on their books in smaller companies. If a couple of them start showing poor earnings, it could cause a death spiral for NVDA because 1) their investment just tanked, and 2) those companies are no longer buying chips from them therefore reducing revenue.
Nvidia also makes up ~7% of the S&P 500 so if their stock price falls substantially, that's a big chunk of capital just... gone for a lot of people.
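As a first-order sanity check of what a ~7% index weight implies (assuming pure cap-weighting, all other constituents flat, and a purely hypothetical 50% drawdown in the stock):

```python
# First-order arithmetic for a cap-weighted index: if one constituent with
# weight w falls by fraction d while everything else is flat, the index
# falls by roughly w * d. The 7% weight is the figure from the comment;
# the 50% drop is a hypothetical stress scenario, not a prediction.
w = 0.07   # Nvidia's approximate S&P 500 weight
d = 0.50   # hypothetical drawdown in the stock
index_move = w * d
print(f"{index_move:.1%}")  # prints "3.5%"
```

So even in isolation, a halving of one mega-cap shaves several percent off the whole index, before counting any correlated selloff in the other AI-exposed names.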
Is it really a bubble about to burst when literally everyone is talking about AI being in a bubble and maybe bursting soon?
To me, we're clearly not at peak AI exuberance. AI agents are just getting started and getting so freaking good. Just the other day, I used Vercel's v0 to build a small business website for a relative in 10 minutes. It looked fantastic and very mobile-friendly. I fed the website to ChatGPT 5.1 and asked it to improve the marketing text, then fed those improvements back to v0. Finished in 15 minutes. In the past it would have taken me at least a week to do a design, code it, test it for desktop/mobile, and write the copy.
The way AI has disrupted software building in 3 short years is astonishing. Yes, code is uniquely great for LLM training due to open source code and documentation but as other industries catch up on LLM training, they will change profoundly too.
It's not that the AI models or products don't work.
It's how much money is being poured into it, how much of the money that is just changing hands between the big players, the revenue, and the valuations.
Well, do you have a model for this? Or are you just regurgitating the mass-media line that it's a bubble?
If hyperscalers keep buying GPUs and Chinese companies keep saying they don't have enough GPUs, especially advanced ones, why should we believe someone calling it a bubble based on "feel"?
> because leaders in the space also keep saying it?
They have a lot of reasons for saying that, including to give themselves cover in the event of a crash.
What’s happening now is a classic land grab. You’re always going to get inflated prices in that situation, and it’s always going to correct at some point. It’s difficult to predict when, though.
This is a very biased example.
Also, it is possible only because right now the tools you've used are heavily subsidised by investors' money. A LOT of it.
Nobody questions the utility of what you just mentioned, but nobody stops to ask whether this would be viable if you had to pay the actual cost of these models, nor what it means for the 99.9% of other jobs that AI companies claim can be automated but that in reality are not even close to being displaced by this technology.
So what if it's subsidized and companies are in market share grab? Is it going to cost $40 instead of $20 that I paid? Big deal. It still beats the hell out of $2k - $3k that it would have taken before and weeks in waiting time.
100x cheaper, 1000x faster delivery. Furthermore, v0 and ChatGPT together surely did much better than the average web designer and copywriter would have.
Lastly, OpenAI has already stated a few times that they are "very profitable" on inference. There was an analysis posted on HN showing that inference for open-source models like DeepSeek is also profitable on a per-token basis.
We don't know what AI should cost, but if you look at the numbers, 2x more expensive is much too low.
Think about the pricing. OpenAI anchored everyone's prices at free and/or roughly the cost of a Netflix subscription, which in turn was originally pinned to the cost of a cable TV subscription. These prices were made up to sound good, not chosen based on sane business modelling.
Then everyone had to follow. So Anthropic launched Claude Code at the same price point, before realizing that was deadly and overnight the price went up by an order of magnitude. From $20 to $200/month, and even that doesn't seem to be enough.
If the numbers leaked to Ed Zitron are true, then they aren't profitable on inference. But even if they were, so what? It's a meaningless statement, just another way of saying they're still under-pricing their models. Inference and model licensing are their only revenue streams! That has to cover everything: training, staff costs, data licensing, lawsuits, support, office costs, etc.
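The point that "profitable on inference" ignores everything else can be made with a back-of-envelope sketch. Every figure below is an invented round number for illustration only, not OpenAI's (or anyone's) actual financials:

```python
# Back-of-envelope: gross-profitable inference vs. overall profitability.
# All numbers are hypothetical, chosen only to illustrate the argument.
inference_revenue = 10_000_000_000   # annual revenue from selling inference
inference_compute = 6_000_000_000    # serving costs: GPUs, power, hosting
gross_margin = inference_revenue - inference_compute  # "profitable on inference"

training = 5_000_000_000    # frontier-model training runs
staff = 2_000_000_000       # research and engineering payroll
other = 1_000_000_000       # data licensing, legal, support, offices

net = gross_margin - (training + staff + other)
print(gross_margin)  # positive: each token sold earns more than it costs to serve
print(net)           # negative: inference revenue still has to fund everything else
```

With these assumed numbers, inference throws off a healthy gross margin while the company as a whole still loses billions, which is exactly why "profitable on inference" says little on its own.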
Maybe OpenAI can launch an ad network soon. That's their only hope of salvation but it's risky because if they botch it users might just migrate to Grok or Gemini or Claude.
> Then everyone had to follow. So Anthropic launched Claude Code at the same price point, before realizing that was deadly and overnight the price went up by an order of magnitude. From $20 to $200/month, and even that doesn't seem to be enough.
Maybe it was because demand was so high that they didn't have enough GPUs to serve? Hence, the insane GPU demand?
I've wondered if it makes sense to buy Intel along with Cerebras, in order to use Intel's newest nodes while they're still under development to fab the Cerebras wafer-scale inference chips, which are more tolerant of defects. Overall that seems like the cheapest way to perform inference, if you have $100B.
If it's subsidized, that's a problem, because we're not talking about Uber trying to disrupt a clearly flawed system of transportation.
We're talking about companies whose entire promise is an industrial revolution of a scale we've never seen before. That is the level of the bet.
The fact they did much better than the average professional is also your own take and assessment that is purely self evident.
Also, your example has fundamentally no value. You mentioned a marginal use case that doesn't scale. Personal websites will be quicker to make because you can take whatever the AI spews your way: you have basically infinite flexibility and the only constraints are "getting it done" and "looking ok/good".
That is not how larger businesses work, at all. So there is a massive scalability issue here.
Finally, OpenAI "states" a lot of things, and a lot of them have been proven to be flat-out lies, because the company is led by a man who has been shown, many times over, to be a pathological narcissistic liar.
Yet you keep drinking the Kool-Aid, including about inference. There are, by the way, reports that, with data in hand, argue quite convincingly that "being profitable on inference" is math gymnastics, and not at all the financial reality of OpenAI.
The vast majority of highly valuable tech companies in the last 35 years have subsidized their products or services in the beginning as they grew. Why should OpenAI be any different? In particular the tokenomics is already profitable.
I think you are missing the fundamental point here. The question is not really whether AI has some value. That much is obvious, and the example you give, increasing developer productivity, is a good one.
The question is: is the value generated by AI aligned with the market projected value as currently priced in AI companies valuation? That's what's more difficult to assess.
The gap between fundamental financial data and valuations is very large. The risk is a brutal reassessment of these prices. That's what people call a bubble bursting and it doesn't mean the underlying technology has no value. The internet bubble burst yet the internet is probably the most significant innovation of the past twenty years.
Well, it all started with the usual SV-style "growth hacking" (price dumping as a SaaS): OpenAI's "gather users now, monetize later", which only works if you attain a virtual monopoly, i.e. dominance over a segment of the market with the competition not really competing with you.
The problem is that no one attained that position, price expectations are now set, and the wishful thinking that the cost of running the models would drop by orders of magnitude hasn't been fruitful.
Is AI useful? Of course.
Are the real costs of it justified? In most cases, no.
There's also a lot of debate over how long a GPU lasts. If a GPU loses most of its value after 2 years because a newer model is much better/cheaper, that destroys the economics of the companies who have spent billions on now-obsolete hardware.
> The question is: is the value generated by AI aligned with the market projected value as currently priced in AI companies valuation? That's what's more difficult to assess.
I agree it is difficult to assess. Right now, competitive pressure is causing big players to go all in or get left behind.
That said, I don't think the bubble is done growing nor do I think it is about to burst.
I personally think we're at the equivalent of 1995 in the dotcom bubble. When it finally bursts, the bubble will be much bigger than it is in November 2025.
> Is it really a bubble about to burst when literally everyone is talking about AI being in a bubble and maybe bursting soon?
Yes, that is even one of the necessary components. Everybody is twitchy, afraid of the pop, but the immediate returns are too tempting, so they keep money in. The bubble pops when something happens and they all start panicking at the same time. They all need to be sufficiently stressed for that mass run to happen.
There's a weird gleefulness, both in this thread and in the world at large, about AI being a bubble that'll pop any day now. I find most of the predictions about the result of such a bubble popping to sound highly exaggerated.
I think it’s because a lot of people feel like AI is being pushed from the top a lot with all kinds of spectacular claims / expectations, while it’s actually still difficult to apply to a lot of problems.
The AI bubble bursting would be somewhat of an “I told you so!” moment for a lot of people.
And there’s also a large group that’s genuinely afraid to lose their job because of it, so for that group it’s very much understandable.
Whether the predictions of the bubble popping are exaggerated or not, I cannot tell; it feels like most companies investing in AI know that the expectations are huge and potentially unrealistic (AGI/ASI in the next few years), but they have money to burn and the cost of missing out on this opportunity would be tremendous. I know that this is the position that both Meta and Google shared with their investors.
So it all seems to be a very well calculated risk.
I do agree that there seems to be a bubble; imo it's largely in the application space, with the likes of Cursor being valued at $23B+. But I don't see GPU purchases going down anytime soon, and I don't even see usage going down. If these overhyped apps fail, it seems like something else will take their place. The power that LLMs provide is just too promising. Those predicting things like a global economic crisis, or the new big model providers like OpenAI going to zero, seem to think AI is like NFTs, with no real intrinsic value.
There are equal weight S&P ETFs, which avoid having a handful of stock dominating. However, they do have to do a lot more rebalancing to keep things in line.
Do you live in a home you own with no mortgage? Do you have a fully electrified home, only EVs, and enough solar to run those things? You can make real concrete capital investments instead of abstract financial ones to reduce your required living/"operating" expenses, insulating you somewhat from the state of financial markets.
It does if you're paying for that housing (either rent or mortgage payments). People invest into stocks while simultaneously holding a liability (e.g. they need to somehow come up with payments to continue living somewhere, or to continue having heating/cooling/lights). If you think all of the financial investments available to you might crash, and your source of income may evaporate in a correlated event, you can instead put all of your money into minimizing your liabilities. The goal is not to see your home value increase--you're not trying to sell it. It's to secure everything you need to have the standard of living you want by owning those things.
Same thing as always. Stick with your plan and rebalance if you need to. If your plan is 80% stock 20% bond (or whatever ratios), and the increased stock prices are putting you significantly out of balance, then sell your stock funds and buy bonds to put it back to where it should be. If the crash happens, sell your now too-high bonds and buy stocks. Or just buy into one of those funds that does all this for you, or hire a fiduciary financial advisor to do it for you.
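As a toy illustration of that rebalancing step (the asset names and dollar amounts below are made up, and this ignores taxes and transaction costs):

```python
# Given current holdings and a target allocation, compute the dollar
# trades (+ buy / - sell) that restore the target. Numbers are invented.

def rebalance(holdings: dict[str, float], targets: dict[str, float]) -> dict[str, float]:
    total = sum(holdings.values())
    return {asset: targets[asset] * total - value
            for asset, value in holdings.items()}

# An 80/20 plan that has drifted to roughly 87/13 after a stock run-up:
trades = rebalance({"stocks": 130_000, "bonds": 20_000},
                   {"stocks": 0.80, "bonds": 0.20})
print({k: round(v, 2) for k, v in trades.items()})
# {'stocks': -10000.0, 'bonds': 10000.0}
```

Selling $10k of the stock fund and buying $10k of bonds restores the 80/20 split; the balanced funds and advisors mentioned above do essentially this on your behalf.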
With even the SP500 being super concentrated in AI-exposed companies, probably a combination of bonds and foreign equities. But hedging does mean being OK with watching any (perceived or real) bubble madness continue. I wanted to put all my wealth into Apple circa 2005, but chose not to because blah blah blah diversification. Obviously I wish I did, but I'm ok with the perfectly sensible decision I made - and I'd be retired many times over had I done it.
Personally speaking, as somebody that was 100% in equities until earlier this year (I'm in my early 40s and had most of my wealth in VOO), I shifted to a 60-40 portfolio - there are ETFs that maintain the balance for you. I did this knowing full well that this could attenuate my upside, but I figured it's worth it than being so concentrated in a single part of an industry (AI within tech) and so much upside was already acquired up until that point. Also, I figured the chances of the 2nd Trump term adding to volatility weren't going to help tamper volatility. On top of that, my income is tied to tech, so diversifying away further from it is sensible (especially the equity parts of my compensation).
But if you're in your 20s, your nest egg is likely small enough that I'd just continue plugging away in automatic contributions. Investing at all is far more important than anything else at that stage.
It's more complex than that, a summary of my highly subjective understanding:
1. AI companies manage to build AGI and achieve takeoff. I have no idea how to hedge against that.
2. The market is not allowed to crash. There will likely be some lag between economic weakness and money printing. Safer option is probably to buy split 50% SPY and 50% bonds. A riskier option is trying to time the market.
3. The market is allowed to crash. Bonds, cash, etc.
Depending on what you believe will happen and risk appetite you can blend between the strategies or add a short component. I am holding #2 with no short positions in post-tax accounts and full SPY in tax advantaged accounts.
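One way to read "blend between the strategies" is as a probability-weighted mix of the per-scenario allocations. A minimal sketch, where the probabilities and allocations are entirely made up:

```python
# Blend per-scenario target allocations, weighted by how likely you
# judge each scenario. All probabilities and weights are illustrative.

def blend(scenarios):
    """scenarios: list of (probability, {asset: weight}) pairs."""
    assert abs(sum(p for p, _ in scenarios) - 1.0) < 1e-9
    mix = {}
    for p, alloc in scenarios:
        for asset, w in alloc.items():
            mix[asset] = mix.get(asset, 0.0) + p * w
    return mix

mix = blend([
    (0.2, {"equities": 1.0}),                # 1: takeoff -- stay long
    (0.5, {"equities": 0.5, "bonds": 0.5}),  # 2: not allowed to crash
    (0.3, {"bonds": 0.7, "cash": 0.3}),      # 3: allowed to crash
])
print({k: round(v, 2) for k, v in mix.items()})
# {'equities': 0.45, 'bonds': 0.46, 'cash': 0.09}
```

The output is just the expected allocation under your subjective scenario odds; shifting probability mass toward scenario 3 pushes the blend toward bonds and cash.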
You can't hedge against a whole market. And you can't time bubble pop events anyway. You can dump NVDA today, sure, because it's overvalued at $180. And most of us agree. But that won't prevent it from going to $300 before it pops (which is totally reasonable too!), so dumping it today might hurt as much as it helps. Run-ups at the end of a speculative bubble are by definition irrational and produce in-hindsight-ridiculous numbers.
If you're young and invested for the long term, just leave all your junk in broad index securities. You can't do better than that, you just have to ride the bumps.
On the other hand, I'm approaching retirement and looking seriously at when to pull the trigger. The aggregate downside to me of a large market drop or whatever is much higher than it is to a 20-something, because losing out on (to make a number up) an extra 30% of net worth is minor when compared to "now you have to work another three years before retiring" (or alternate framings like "you have to retire in Houston and not Miami", etc...).
So most of my assets are moving out of volatiles entirely.
A few bond funds, but frankly just a lot of money market cash in the short term. Most of our guts say that the crash is imminent and if it is the extra fees and hassle won't be worth it.
There's a really funny thing going on right now, in that everyone is forecasting an AI bubble pop. It feels like every single human is saying it, from the heads of tech companies, in comments thinly veiled for bankers, to everyone on the street.
It reminds me of the time that everyone said the economy was going to tank and somehow everyone had it wrong a couple years ago.
It feels implausible that it isn't overbuilt, but it also feels really strange for everyone to be pushing this narrative that it's a bubble, with people taking very public short bets. It feels like the contrarian bet is that it's going to keep running hot. Nvidia earnings tomorrow are a big litmus test.
If it was the people actually investing in AI all saying it's a bubble, implying that they are holding back, not all-in, for fear of it crashing, then it'd have room to run further (until they were all-in, and leveraged to the eyeballs, cf subprime housing crash liar loans, dot-com crash investor margin accounts).
However, it seems more like the people pumping billions into AI are all still "this is going to the moon" gung-ho, and unless they are investing billions of CASH, then I guess they are borrowing to do so.
I don't know how this financing works - maybe no fear of having it pulled like a foreclosure on a subprime mortgage holder, or a broker margin call, but it's not going to end well if these investments start to fail and the investors start running for the door.
Peter Thiel's recent exit from NVidia should be a bit concerning given his good record on macro bets and timing.
Why trade individual stocks anyway? Cost-averaging into ETFs is a proven way to build wealth. The S&P goes down 20%, you average down, it recovers, and you get another 2-3 years of growth. This goes on until civilization collapses.
If you buy ETFs, you basically hold some stocks you don't want.
For example, stock from war profiteering companies (lockheed, raytheon).
Note that investing in war profiteers is a proven way to build wealth. I just don't want to do that.
This argument not only applies to evil companies, but also dumb ones. For example, I have no interest in investing in IBM or Oracle, even though both of those are also money makers.
That only works if you wear horse blinders and subconsciously ignore or make up excuses for evil by association. There is absolutely no way to invest anything ethically.
Buying index funds (either mutual funds or ETFs) has been an effective approach for retail investors. But the concern now is that some US stock index funds are so heavily weighted to the "Magnificent 7" stocks that much of the previous benefit of diversification has been lost. The Mag 7 are all highly correlated with each other so if one falls then usually the others do as well.
There are other index funds which are equal weighted rather than market weighted. Those have underperformed lately but might be less volatile if the AI bubble pops.
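The difference between the two weighting schemes is easy to see with toy numbers (the market caps below are invented, not real Mag 7 figures):

```python
# Cap-weighted vs equal-weighted index concentration, with toy numbers.

def cap_weights(caps: dict[str, float]) -> dict[str, float]:
    total = sum(caps.values())
    return {name: cap / total for name, cap in caps.items()}

caps = {"MegaA": 3000, "MegaB": 2500, "Small1": 100, "Small2": 100, "Small3": 100}
w = cap_weights(caps)
top2 = w["MegaA"] + w["MegaB"]
print(f"cap-weighted top-2 share:   {top2:.0%}")           # 95%
print(f"equal-weighted top-2 share: {2 / len(caps):.0%}")  # 40%
```

In the cap-weighted version, two names dominate the "diversified" fund; the equal-weighted version caps each name at 1/N, which is exactly why it needs the constant rebalancing mentioned upthread.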
Assuming this is irrational and must come back to reality at some point, I'm not convinced this is connected to the common man economy as other bubbles in the past were. This round of investment is mostly being funded by exuberant cash flow accumulated over the years that was otherwise used as stock buybacks by a small number of stocks and some private credit deals that are not that accessible to the general public. This is looking more like a crypto crash kind of effect rather than a 2008 one.
However 'normal people' buying into the stock market via 401ks or otherwise likely (and arguably sensibly) will be in index funds, that of course are exposed to the bubble via (grossly?) inflated tech stocks. Effectively their current pension/savings contributions are being clipped by whatever the delta is between now and post bubble. Time in the market and all that, but still it might be a hefty haircut.
Very cool and healthy for the CEO of a company investing massive amounts into a given technology to casually refer to it as a "bubble" at the same time. I guess he softens the statement a bit by calling it "an AI bubble" instead of the "the AI bubble", but it's still interesting to see everyone involved in this economic mess start to acknowledge it.
Unironically agreed that it's good for a CEO to remain relatively level headed and clear eyed.
The comparison made to the dotcom bubble is apt. It was a bubble, but that didn't mean that all the internet and e-commerce ideas were wrong, it was more a matter of investing too much too early. When the AI bubble pops or deflates, progress on AI models will continue on.
Hasn't it been pretty widely acknowledged that AI funding has created a whirlpool of money cycling between a few players: cloud/datacenter hosts and operators (Oracle), GPUs (Nvidia), and model operators (OpenAI)?
To pile on, there's hardly a product being developed that doesn't integrate "AI" in some way. I was trying to figure out why my brand new laptop was running slowly, and (among other things) noticed 3 different services running: Microsoft Copilot, Microsoft 365 Copilot (not the same as the first, naturally), and the laptop manufacturer's "chat" service. That same day, I had no fewer than 5 other programs all begging me to try their AI integrations.
Job boards for startups are all filled with "using AI" fluff because that's the only thing investors seem to want to put money into.
Makes one think that this was the plan all along. I think they saw how SVB went down and realize that if they're reckless and irresponsible at a big enough scale they can get the government to just transfer money to them. It's almost like this is their new business model "we're selling exposure to the $XX trillion dollar bailout industry."
I don't think it's very difficult to imagine that the usgov is trying to put pressure on industry to make "number go up". Given the general competency level in usgov these days, I also wouldn't be particularly surprised if nobody knew or cared about whether the "up" of the number was real or meaningful, or whether there would be consequences.
Current admin really, really wants the number going up, and is either incapable of considering, or simply ignorant of, any notion of consequences for any actions of any kind.
This is the thing that worries me the most. The market is long past due for a correction, yet this government is willing to burn everything down for short-term gains.
> They really are shameless aren't they? Makes one think that this was the plan all along.
Not really. Sundar is still pretty bullish on GenAI, just not the investor excitement around it (bubble).
Pichai described AI as "the most profound technology" humankind has worked on. "We will have to work through societal disruptions," he said, adding that the technology would "create new opportunities" and "evolve and transition certain jobs." He said people who adapt to AI tools "will do better" in their professions, whatever field they work in.
The context makes it clear that it's not any sort of implied threat. Pichai made his statement in response to an interview question about whether Google might be so well positioned that they're immune to the impact of an AI bubble. (But I don't blame you for being misled - like most headlines these days, this would have been intensely optimized for virality over accuracy, and making tech CEOs sound like supervillains is great for virality.)
I mean, this commonly happens in business and economies. Businesses that are dirty can make more money, at least temporarily, out-competing those around them. If they play it right they can drive their good competitors out of business or buy them up. What's more, the crash at the end of a bubble will just as likely drive the good businesses out as the bad.
Also, it seems like Wall Street's greedy bankers might have another subprime crisis on their hands at the same time... I wonder which one will be saved again...
And there we have the reason for all of these interdependent deals between these firms: they're all hedging with each other so they can keep this set of plates spinning.
Keiretsu are ways to hedge against loss, you form interlocking relationships that spread both risk and success around. In this case no one is sending actual money, they're sharing obligations with each other.
After COVID they were still making a killing, but axed 12k people anyway. So if someone starts doing layoffs and the market reacts well, profitable companies will do layoffs as well.
Every CEO is reading from the same script right now. It might be a bubble but it’s just like the internet, it’s still going to be relevant and it’s just the crap companies and grifters who will die.
I'm not seeing that happening. Unlike banking and housing there's not much systemic or political risk in letting these companies crash. It's mostly going to hit a very small number of high net worth people who don't have a lot of clout and are oddly disconnected from the rest of the economy.
Virtually everyone's 401k is overexposed to these companies due to their insane market caps and the hype around them. If they go, every S&P 500 and total US market ETF goes with them, right as the Boomers start retiring en masse.
Even Vanguard's Total World Index, VT, is roughly 15% MAG 7.
That's not even getting into who's financing whom for what, and to whom that debt may be sold.
This is incorrect. A lot of these companies are raising debt to pay for these datacenter build outs. And that debt has already been sold to pension funds. The risk has already been spread. See Blue Owl Capital and how Meta is financing its Hyperion datacenter. They raised 30 billion in debt. Main street is already exposed as those bonds are in funds offered by the usual players BlackRock, Invesco, Pimco etc.
Ever literally blow bubbles? You never really know how big each one will be.
My biggest worry is that what will be left standing after all of this is the organizations quietly pumping out all the AI slop everywhere, be it on the normal web or on YouTube.
Investor in leading frontier LLM that automates software production says stuff to try to reduce funding for LLM startups...
Forget the talk about bubbles and corrections. Can someone explain to me the rationale of investing in a product, marketing it, seeing that it drives consumers away from your product and erodes trust, and then you continue to invest at an accelerating rate? Good business would have driven us very far away from this point years ago. This is very deep in the "because we can" territory. It's not FOMO.
> Can someone explain to me the rationale of investing in a product, marketing it, seeing that it drives consumers away from your product and erodes trust, and then you continue to invest at an accelerating rate?
Sure!
Google began investing heavily in AI (LLMs, actually) to catch up to the other frontier labs, which had already produced a product that was going to eviscerate Google Search (and therefore, Google ad revenue). They recognized this, and set about becoming a leader in the emerging field.
Is it not better to be a leader in the nascent industry that is poised to kill your profitability?
This is the same approach that Google took with smartphones. They saw Apple as a threat not because they had a product that was directly competing, but because they recognized that allowing Apple to monopolize mobile computing would put them in a position to take Google’s ad revenue — or allow them to extract rent in the form of payments to ensure Apple didn’t direct their users to a competing service. Android was not initially intended to be a revenue source, at least not most importantly. It was intended to limit the problem that Apple represented. Later, once Google had a large part of the market, they found ways to monetize the platform via both their ad network and an app store.
AI is no different. If Google does nothing, they lose. If they catch up and take the lead, they limit the size of the future threat and if all goes well, will be able to monetize their newfound market share down the road - but monetization is a problem for future Google. Today’s Google’s problem is getting the market share.
This has been how I've framed a lot of the expenditure, despite the lack of immediate substantial new revenue. Everyone, including Google, is driven to protect current revenues from prospective disruption. And the vulnerability AI created for Google is one that other companies find worth positioning themselves to exploit, should Google fall behind and lose chunks of market share.
Yeah, like imagine if the LLMs don't advance that much, the agentic stuff doesn't really take off, etc.
Even in this conservative case, ChatGPT could seriously erode Google Search revenues. That alone would be a massive disruption and Google wants to ensure they end up as the Google in that scenario and not the Lycos, AltaVista, AskJeeves etc. etc.
> frontier labs, which had already produced a product that was going to eviscerate Google Search (and therefore, Google ad revenue)
> If Google does nothing, they lose.
Is any of that actually true though? In retrospect, had google done nothing their search product would still work. Currently it's pretty profoundly broken, at least from a functional standpoint--no idea how that impacts revenue if at all. To me it seems like google in particular took the bait and went after a paper tiger, and in doing so damaged their product.
I like the duck duck go AI summaries...
There are at least a few stories from the 90s where companies that readily could have invested in “getting online” instead decided that it would only harm their existing business. The hype at the time was extraordinary to be sure, but after the dust settled the internet did change the shape of the world.
Nobody can really know what things will look like in 10 years, but if you have the capital to deploy and any conviction at all that this might be a sea-change moment, it seems foolish to not pursue it.
> Nobody can really know what things will look like in 10 years,
100% true. But do not invest trillions on such an uncertainty; in the long term, that is always a bad investment.
> but if you have the capital to deploy and any conviction at all that this might be a sea-change moment, it seems foolish to not pursue it.
Gamblers argument: "But what if this is the winning ticket?"
Then aren't most forms of investment based on a "gambler's argument"?
A risky investment is obviously akin to gambling, but to get things built and make a profit you have no other choice.
Pascal's Wager for tech.
> Can someone explain to me the rationale of investing in a product, marketing it, seeing that it drives consumers away from your product and erodes trust, and then you continue to invest at an accelerating rate?
I'll take a stab at this. It's not 100% clear to me which product you're referring to, so I'll try to answer as if the product is something that already has customers, and the maker of the product is shoving AI into it. The rationale is that the group you're trying to convince that you're doing a good job is your shareholders or investors, not your actual customers. You can justify some limited customer attrition by noting that your competitors are doing the same thing, and that maybe if you shove the _right_ AI features into the product, you'll win those customers back.
I'm not saying it's a _good_ rationale, but that seems to be what's at play in many cases.
Correct. More succinctly, this what happens when the share price becomes the product.
i like this reason, and the 'dont do nothing' sibling comment. both make sense to me.
Eastman Kodak tried your implied proposed strategy, of ignoring technological developments that undermine their core product. It didn't go so well. Naturally technology companies have learned from this and other past mistakes.
I don't know which product you're even talking about.
If you mean AI Overview, you really need to cite the source of this claim:
> seeing that it drives consumers away from your product
Because every single source I can find claims that Google search grew in 2024[0]. HN is not a good focus group for a product that targets billions of people.
[0]: For example https://www.seroundtable.com/google-search-growth-39040.html but feel free to provide a more credible source that claims the opposite.
One can also interpret the growth as a failure from a product point of view, even though it drives views.
With AI summaries being hit or miss, it may be that users now need 20% more searches to find what they are looking for.
Numbers aren't always a good proxy for customer satisfaction.
This was literally their policy. Make the searches worse so people search more.
https://pluralistic.net/2024/04/24/naming-names/
If you're talking about all of AI with your statement, I think you need to reconcile that opinion with the fact that ChatGPT alone has almost 1 billion daily users. Clearly lots of people derive enormous value from AI.
If there's something more specific and different you were referring to I'd love to hear what it is.
> Clearly lots of people derive enormous value from AI.
I don’t really see this. Lots of people like freebies, but the value aspect is less clear. AFAIK, none of these chatbots are profitable. You would not see nearly as many users if they had to actually pay for the thing.
In the article Pichai is quoted saying:
> "It doesn't matter whether you want to be a teacher [or] a doctor. All those professions will be around, but the people who will do well in each of those professions are people who learn how to use these tools."
Bullshit. Citation very much needed. It's a shame--a shameful stain on the profession--that journalists don't respond critically to such absurd nonsense and ask the obvious question: are you fucking lying? It is absolutely not true that AI tools make doctors more effective, or teachers, or programmers. It would be very convenient to people like Pichai and Scam Altman, but that don't make it so.
I’m probably an outlier: I use ChatGPT/Gemini for specific purposes, but AI summaries on e.g. Google Search or YouTube give me negative value (I never read them and they take up space).
I agree about the summaries! I think AI is applied in a lot of bad ways ATM although TBH I've heard some people like the summaries
I can't say I find them 100% useless - though I'd rather they not pop up by default - and I understand why people like them so much. They let people type in a question and get a confident and definitive answer all in natural language, which is what it seems like the average person has tried to do all along with search engines. The issue is that they think whatever it spits out is 100% true and factual, which is scary.
> seeing that it drives consumers away from your product and erodes trust
Where is your evidence for this part of your claim?
> Can someone explain to me the rationale of investing in a product, marketing it, seeing that it drives consumers away from your product and erodes trust, and then you continue to invest at an accelerating rate?
Hey now, Google Plus was more than a decade ago. I didn't like it either, but maybe it's time to move on? I think they learned their lesson.
While you & I may find shoehorning LLMs into every nook & cranny distasteful, I worry Marl may think differently
https://open.substack.com/pub/nothinghuman/p/the-tyranny-of-...
What evidence do you have that it's driving consumers away from the product? The people who bother to say anything on the internet are the extreme dedicated minority and are often not representative of a silent majority. Unless you have access to analytics, you can't make this inference.
Because the investor class is drunk on their own wine.
The golden goose is not you or I. It is our boss who will buy this junk for us and expect us to integrate it into our workflows or be shown the door. It is the broccoli headed kids who don’t even have to crack open cliffnotes to shirk their academic responsibilities anymore. It is universities that are trying to “keep up” by forcing an AI prompting class as a prerequisite for most majors. These groups represent a lot of people and a lot of money.
It doesn’t have to work. It just has to sell and become entrenched enough. And by all metrics that is what is happening. A self fulfilling prophecy, no different than your org buying redundant enterprise software from all the major vendors today.
> broccoli headed kids
now that there's a prime quality rare insult
anyway i totally agree with your reasoning. one might as well ask "why is MS Teams so bad? it's bloated, slow, buggy, nasty to use from a UX pov... yet it's everywhere"
this shitware -- ms teams, llm slopguns, whatever -- never had to work, they just have to sell.
What I think:
- get customers used to using AI by “gently” cough nudging them towards this in all products.
- get experience and data for your LLMs and AI dev teams regarding what works and what doesn’t
- Bet on AI becoming good enough that people will use this as their entry to the Web, Media and software
- become the largest player of these AI apps, so that you can inject ads again
People are scared of change which threatens their occupation, hobbies, and sense of human purpose.
Even if that change is wildly profitable for megacorps in the long run.
Interestingly, I think that even if AI succeeds at the level a lot of these CEOs hope, we're not much better off either.
And the sentiment that goes around is more: reduce the number of people needed to do the same amount of work:
https://www.theregister.com/2025/10/09/mckinsey_ai_monetizat...
> McKinsey says, while quoting an HR executive at a Fortune 100 company griping: "All of these copilots are supposed to make work more efficient with fewer people, but my business leaders are also saying they can't reduce head count yet."
The problem becomes that eventually all these people who are laid off are not going to find new roles.
Who is going to be buying the products and services if no-one has money to throw around?
I don't even know what the selling point of AI is for regular people. In the 60s it was possible for a man to work an ordinary job, buy a house, settle down with a wife and support two or three children. That's completely out of the realm of reality for many young people now and the plummeting birth rates show it.
The middle class have financially benefited very little from the past 20+ years of productivity gains.
Social media is driving society apart, making people selfish, jealous, and angry.
Do people really think more technology is going to be the path to a better society? Because to me it looks like the opposite. It will just be used to stomp on ordinary people and create even more inequality.
> That's completely out of the realm of reality for many young people now and the plummeting birth rates show it.
I'm skeptical of this explanation for falling birthrates just because birthrates are falling across the world and there seems to be no correlation between fertility and financial security. America has low birthrates. Scandinavia (usually considered to have generous welfare states) has low birthrates. Hungary, where the government gives massive tax breaks (IIRC they spend around ~6% of their GDP on child incentives), has low birthrates. Europe, East Asia, India, the Middle East, the Americas, basically the whole world except for central Asia and Sub-Saharan Africa (which are catching up) has low birth rates. Obviously the economic conditions across basically all the countries in the world vary wildly, but there isn't a consistent relationship between those conditions and fertility.
Also within countries, the number of children people have is not always correlated with wealth (and at times in the past 60 years it has been negatively correlated).
Anyway, I find your argument intuitive, but it doesn't seem to align with the data we have.
In which of those countries is it possible for a man to work an ordinary job, buy a house, settle down with a wife and support two or three children?
I mean that I know of first hand, just the US and Japan. "Possible" being a low bar that just means that I've seen it at least once.
I don't think data with all of those factors (household income, number of earners per household, gender of the earners, home ownership, and number of children) exists for any country. Do you have data like that for 1960s America or is your argument based on extrapolations from watching Leave it to Beaver?
But if we abstract your hypothesis slightly to: fertility is lower now than in 1960 because people are less financially secure now than they were in 1960, I don't think the data we have supports this.
I have seen it all across the EU. It's pretty doable (granted, you need a university degree). You can absolutely buy a home and have a couple of children who will have absolutely everything they need.
Yeah, because the average Joe totally has a university degree. Besides, in Germany a lot of poor people have many children while a lot of academics have fewer [0]. And "doable" doesn't mean it's pleasant. I checked the rural housing market recently, and for a somewhat acceptable house you will easily pay ~3k per month, assuming you have fairly large starting capital. Not sustainable if one person loses their job for a while. Not to say it was that much easier back in the day; the housing market is just beyond fucked for most ordinary people.
[0] https://en.wikipedia.org/wiki/Demographics_of_Germany
Ordinary men have wives and two children in all those countries. You are also projecting the American expectation that "buying a house without family help is necessary" onto countries like Hungary, where this was not an expectation for a really, really long time. Like, generations.
It's a simple catch-22
- women don't want to leave the workforce because one salary cannot support a family
- yet women remaining in the workforce (since a single salary is infeasible) doubles the supply of workers, lowering salaries, which in turn keeps a single-income family infeasible
Not to pick on women; I'm a feminist if you ask me, and all modern men should have to be houseboys serving their feminine masters. It does suck, but it is necessary to benefit modern women (who did not suffer) by causing modern men to suffer, to make amends for the suffering of all women throughout history at the hands of all historical men, neither of whom are alive today.
Well that's the point, men are refusing to suffer.
There is little incentive to walk into a contract where you are working all the time with no appreciation, love, gratitude, or even a thank you. All the while being made to feel like you are not measuring up, and that they'd rather be with somebody else. On top of that, you come back from work and do all the chores you would if you had remained single.
And if a few years later the other party decides to break the contract, they take your home, get monthly pensions (with raises), and get to start the process all over again with somebody else at your expense.
Plus these days kids don't stay back with aging parents to care for them, so having kids appears pointless as well.
By and large, let alone an incentive, marriage and children seem to be a massive negative for men. Hence I wouldn't be surprised by low marriage and birth rates all over the world.
Why would you want to do all this? When you can work, keep the money, and spend it for your pleasure by staying single?
Except that it is men who complain constantly about wanting to marry and have kids, while women are much more content being single and having friends.
You don't have to pay alimony if the wife worked the whole time. That complaint is funny in the context of men demanding a return to a time when the alimony arrangement was a necessary protection.
Even within marriage, it is mostly women who initiate divorce, and they report higher happiness after the divorce. Men report lower happiness and are more likely to want to marry again.
This documentary goes into a lot of detail on the causes worldwide: The Birth Gap - https://www.youtube.com/watch?v=m2GeVG0XYTc
A generous welfare state (like the Nordics or Switzerland) does not necessarily mean that the middle class is well off with lots of resources for kids. Usually it's the middle class (+upper class) that pays for the generous welfare state, but gets almost none of the benefits. You don't get/need the welfare, if you earn enough to be considered middle class.
If by ”Nordics” you mean the rich oil kingdom of Norway, sure. Everyone else has been cutting back on welfare for the last 20 years.
Birth rates correlate negatively with education of women. I read somewhere that this is one of the most robust findings in all of social science (and when I asked Gemini just now whether there was such a correlation, it said the same).
There’s a (positive or negative) correlation between birth rates and dozens of factors, because over the period birth rates have been falling, the world has changed dramatically. Issue is we don’t know what is causal. Education also correlates with all kinds of other factors, like income, type of work, marital status, and political views, meaning birth rates are also likely correlated with all of these factors.
>Issue is we don’t know what is causal.
Is it really true that this is not known? Although I only claimed correlation (and am thus surprised that I was downvoted twice, as that claim is obviously true), based on the famous "robustness" of this observation, I strongly suspect that confounding factors like those you mention have already been analysed to death, and found not to eliminate the explanatory power of women's education.
At least, checking these confounders seems an obviously valuable and interesting avenue to explore. If it hasn't been done yet, I wonder what social scientists are doing instead.
> In the 60s it was possible for a man to work an ordinary job, buy a house, settle down with a wife and support two or three children.
That wasn't true in the rest of the world.
The US had a unique position due to WW2 that was bound to erode.
I find it funny to compare horror stories from my parents/grandparents to this...
> In the 60s it was possible for a man to work an ordinary job, buy a house, settle down with a wife and support two or three children.
Every kind of a man, or woman?
> Do people really think more technology is going to be the path to a better society? Because to me it looks like the opposite.
Well, this is probably why statistics exist.
Thanks for pointing out this skewed view of economic history common in North America.
The short period of boom in the 50s/60s US and Canada was driven by WW2 devastation everywhere else. We can see the economic crises in the US first in the 70s/80s as Europe and Japan rebounded, then again in the 90s/00s with China and East Asia growing, and now again with the rest of the world growing (esp. Latin America, India, Indonesia, Nigeria, the Philippines, etc.). Unless the US physically invades and devastates China, India or Brazil, the competition will keep getting exponentially higher. It's a shame that US didn't invest all that prosperity into social capital that could have helped create high value jobs.
In short, it's easier to have high standards of living on your secure isolated island when the rest of the world (including the historical industrial powers) is completely decimated by war.
> It's a shame that US didn't invest all that prosperity into social capital that could have helped create high value jobs.
Are you aware of the Marshall Plan?
> In short, it's easier to have high standards of living on your secure isolated island when the rest of the world (including the historical industrial powers) is completely decimated by war.
Don't give them any ideas.
> It's a shame that US didn't invest all that prosperity into social capital that could have helped create high value jobs.
What does this sentence mean?
I assume the idea is more money could've been invested into bringing the bottom rungs of American society up and created a more skilled and educated workforce in the process.
So "social capital" == "education"?
The US has pushed a shit ton of money into education. I mean an unreasonable amount of it went to administrators. But the goal and the intent was certainly there.
Education is part of it. But a lot of the social capital which makes societies prosperous is separate from what we usually consider to be education. On an individual behavior level that includes things like knowing how to show up for work on time, sober, and properly dressed, and follow management instructions without arguing or taking things personally. These are skills that people in the middle and upper classes take for granted but they forget that there are a large number of fellow citizens in the economically disconnected underclass who never had a good opportunity to learn those basics. As a society we've never done a good job of lifting those people up.
> On an individual behavior level that includes things like knowing how to show up for work on time, sober, and properly dressed, and follow management instructions without arguing or taking things personally. These are skills that people in the middle and upper classes take for granted
I don't see your point.
Those rules do not apply to the upper class, and middle-class workers have way more leeway there than the lower class.
This seems to be saying that a large fraction of poor people are poor only because of bad habits, which they have only because nobody taught them any better?
The existence of an upper class necessitates the existence of a lower class. You can't just pull everyone up to be above average.
What's your point? I didn't make any claims about averages. We could do a lot more to improve opportunities and social mobility for people caught in the permanent underclass.
But we have. The underclass today has much better lives in many aspects than the highest class from many decades ago. The absolute level of wealth has increased, it's simply that the delta between the high and the low is widening.
Would you rather live equally in poverty, or live comfortably alongside others who are way more wealthy than you? Surprisingly, people do seem to prefer the former, though I'd prefer the latter.
> I mean an unreasonable amount of it went to administrators. But the goal and the intent was certainly there.
This is wrong.
The increase in administrator pay began well after the crises cited in OP.
You could cite spending on the sciences (and thus Silicon Valley), but the spending by the US did not accrue to administrators; and further, federal money primarily goes to grants and loans, but GP is citing a time over which there were relatively low increases in tuition.
Edit: Not at home, but even a cursory search will turn up reports like this one, which indicate the lack of clarity in the popular uprising against money "[going] to administrators":
https://www.investigativeeconomics.org/p/who-to-believe-on-u...
For universities, yes. But not for primary education. Administrative bloat is the worst in K-12.
> For universities, yes. But not for primary education. Administrative bloat is the worst in K-12.
First, where is your data?
Second, this discussion is clearly about post-secondary education ("the idea is more money could've been invested into bringing the bottom rungs of American society up and created a more skilled and educated workforce in the process.")
Cheaper education, free/subsidized healthcare, free/subsidized childcare, cultural norms around family support, etc.
Things that let workers focus on innovation. IT workers in cheaper countries have it much easier, while we have to juggle rising cost of living and cyclical layoffs here. And ever since companies started hiring workers directly and paying 30-50% (compared to 10-15% during the GCC era), the quality is almost on par with the US.
>>> It's a shame that US didn't invest all that prosperity into social capital that could have helped create high value jobs.
>> What does this sentence mean?
> Cheaper education, free/subsidized healthcare, free/subsidized childcare, cultural norms around family support, etc.
Except for free/subsidized healthcare, didn't the US already have those things during the post-war boom?
Cheaper education? Public K-12 schools, the GI bill, generous state subsidies of higher education (such that you could pay for college with the money you made working a summer job).
Free/subsidized childcare, cultural norms around family support? Wages high enough to raise a family on a single income, allowing for stay-at-home moms to provide childcare.
> Except for free/subsidized healthcare, didn't the US already have those things during the post-war boom?
Yes, but the education system is being dismantled piece by piece at all levels. I work in edutech and our goal is to cut costs faster than revenue. Enrolments are down, students are overburdened with student loans, and new grads can't compete in the market.
Also, do you think kids going to K-12 in the US can compete with kids who go to international schools in China and India? High-end schools in those countries combine the Asian grind mindset with Western education standards.
> Wages high enough to raise a family on a single income, allowing for stay-at-home moms to provide childcare.
This was the special period of postwar prosperity that I mentioned. It was unnatural, and the world has reset back to the norm where a nuclear family needs societal/governmental support to raise kids, or needs two six-figure jobs. "It takes a village to raise a child" is a common idiom based on centuries of observation. Just because there were 20-30 years of unnatural economic growth doesn't make them the global or historical norm.
Education is a tough one. Like healthcare, it's highly subject to Baumol's Cost Disease. Technology holds some potential but fundamentally we still need a certain ratio of teachers to students, and those teachers get more expensive every year.
https://www.unesco.org/en/articles/baumols-cost-disease-long...
Education should be well funded. But at the same time, taxpayers are skeptical because increasing funding doesn't necessarily improve student outcomes. Students from stable homes with aspirational parents in safe neighborhoods will tend to do well even with meager education funding, and conversely students living in shitholes will tend to do badly regardless of how good the education system is. If we want to improve their lot then we need to fix broader social issues that go beyond just education. Anyone who has gotten involved with a large school district has seen the enormous waste that goes to paying multiple levels of administrators, and education "consultants" chasing the latest ineffective fad. Much of it is just a grift.
>> Except for free/subsidized healthcare, didn't the US already have those things during the post-war boom?
> Yes, but the education system is being dismantled piece by piece at all levels.
So? That's not really relevant to the historical period you were referring to when you said: "It's a shame that US didn't invest all that prosperity into social capital that could have helped create high value jobs."
At the time, Americans already had many of the things you're saying they should've invested in to get. How were they supposed to predict things would change and agitate for something different without the hindsight you enjoy?
> This was the special period of postwar prosperity that I mentioned. It was unnatural, and the world has reset back to the norm where a nuclear family needs societal/governmental support to raise kids, or needs two six-figure jobs.
Exactly why do you think it is unnatural?
I think you should be more explicit about how you think things should be for families, because going on and on about how the times when things were easier were "unnatural" may create the wrong impression.
Also keep in mind we're talking about human society here; the concept of "natural" has very little to do with any of it. What we're really talking about is the consequence of the internal logic of this or that set of artificial cultural practices.
> How were they supposed to predict things would change and agitate for something different without the hindsight you enjoy?
By comparing themselves to their counterparts in other countries. By 1955 there should have been alarm bells ringing as Europe re-industrialized. Same with the 70s oil crisis, but the best the US could do was cripple Japan with the Plaza Accords.
Americans even now have a mindset that nothing exists beyond their borders; one can assume it was worse back then.
> Exactly why do you think it is unnatural?
Because only two industrialized countries were left standing after WW2 and those two countries enjoyed unnatural growth until others caught up - first the historical powers in Europe then Asia.
> By comparing themselves to their counterparts in other countries. ... Americans even now have a mindset that nothing exists beyond their borders, one could assume it was worse back then.
That's not realistic, except in hindsight. Most people everywhere pay more attention to their immediate environment and to living their lives, not to speculating about what the global economy is going to look like in 50 years and how those changes would affect them personally.
You're talking about the kind of analysis only some PhD at RAND would have been doing (or would have had the ability to do) in the 1960s.
Without the democratic pressure of common people either 1) having a need or 2) seeing things get worse, no changes like you describe would happen.
> Because only two industrialized countries were left standing after WW2 and those two countries enjoyed unnatural growth until others caught up - first the historical powers in Europe then Asia.
What's natural?
And more importantly: how do you think things should be for families.
The US is not perfect by any measure, but your argument that the US doesn't have innovative nor "high-value" jobs is absurd beyond belief.
Right, because Europe is so innovative.
Necessity, not comfort, is proverbially the mother of invention.
Ultimately, increased levels of competition should lead to higher levels of innovation.
Btw, what is "the GCC era" a reference to?
Europe is quite innovative on a per-capita basis. Not at US levels, but the workers there have much happier lives and their societies don't have the extreme inequality and resulting violence of the US.
China is arguably more innovative than either and has terrible work-life balance, but their society is stable and you won't go from millionaire to homeless just because you had to get cancer treatment.
GCC = global consulting companies, the bane of innovation. Outsourcing of all kinds (even domestic C2C) should be banned.
Is GCC an acronym you just now came up with, or does it commonly mean “global consulting company” in your part of the world?
I ask because, when I do a Google search, the two most common meanings for that term are “Global Capability Center” and “Gulf Cooperation Council”.
Were there a lot of imports at that time in terms of materials or labor or food? If not, I don’t really see how money flowing in from abroad actually changes the economy in this area. If the wood is harvested in America and the workers are in America and the wood and workers are available, then any amount of money value generated by everyone else will be sufficient to pay them, unless there is a significant stream of imports that need to be paid for (which I’m not aware of in this time period).
What could have made a big difference is if foreign competition arose for American materials and land, which it did. But that is under our control, we collectively can choose whether to allow them to buy it or not, and whether to let people in at a rate that outpaces materials discovery and harvesting capabilities.
We also restricted materials harvesting quite a bit during this time period, for example I believe a lot of forestry protections were not in place yet.
So you're saying that working-class living standards are a zero-sum competition across capitalist countries, even negative-sum as competing national economies grow their total output and hourly productivity?
That sounds like a really shitty system.
> Thanks for pointing out this skewed view of economic history common in North America....
> In short, it's easier to have high standards of living on your secure isolated island when the rest of the world (including the historical industrial powers) is completely decimated by war.
So, what's your point? That the plebs shouldn't expect that much comfort?
A common maxim across all cultures is to "manage expectations" for happiness.
And when comparing societal standards, expand the time horizon to 100 years; don't nitpick one specific unnatural era of history.
An automotive engineer in Detroit in 1960 was a globally competitive worker because most of his counterparts in other countries were either dead, disabled or their companies bankrupt.
The equivalent in today's world would be aerospace engineers, AI researchers, quantum engineers, robotics engineers, etc who arguably have the same standard of living as the automotive engineer in 1960s Detroit.
Economic and technological standards evolve - societies should invest in human capital to evolve with them or risk stagnation.
> An automotive engineer in Detroit in 1960 was a globally competitive worker because most of his counterparts in other countries were either dead, disabled or their companies bankrupt.
> The equivalent in today's world would be aerospace engineers, AI researchers, quantum engineers, robotics engineers, etc who arguably have the same standard of living as the automotive engineer in 1960s Detroit.
You know we're not really talking about top-end positions like automotive engineers in Detroit in 1960. I think we're talking more about automotive factory workers in Detroit in 1960.
> Economic and technological standards evolve - societies should invest in human capital to evolve with them or risk stagnation.
You need to be more explicit about how you think things should be for the common man.
I hope you understand the concept of relative prosperity. The current equivalent would be a factory worker at Boeing. In the 60s cars were innovative in the US; now Nigeria can outcompete China in cars.
Times change, standards rise, competition increases. If America wants to remain competitive globally, you need to work in the top 1% of fields like you did back in the 60s, not expect $25 per hour for flipping burgers (which should have been automated with robots by now).
You need to be more explicit about how you think things should be for the common man.
>The short period of boom in 50s/60s US and Canada was driven by WW2 devastation everywhere else.
The US just renamed "Department of Defense" to "Department of War" and they seem willing to go to any extreme to "Make America Great Again". Threatening to take over Canada, Greenland, and Panama already in the first few months of the current administration. Using US military on US soil. There's no line they won't cross. WW3 isn't off the table at all, unfortunately.
Yeah, if you bar over 50% of your workforce from working at market-clearing wages, then naturally the other 50% are going to get paid at their expense. When you underpay minorities and often outright ban women from formal employment, it's not hard to see how wages for the others remain high.
Well congratulations! We have succeeded in having stagnating wages and stagnating standard of living for everyone now!
Do you want to take a 20% pay cut so I can take the marginal benefit? Who wants to volunteer to be barred from working so I can negotiate better salary?
I have no full time job, so I already took the cut. You're welcome.
Lemme guess, we should bring back Bretton Woods?
I originally upvoted the parent comment. But I changed it.
"The good ol' days" ... yeah, but good for who?
The good old days... that never were!
Life has improved for nearly everyone on nearly every metric. But if one myopically focuses on house purchasing as the only thing that matters and takes the anomalous post-WW2 period as the baseline, then sure, things are bad (ignoring the fact that housing space and quality plus amenities improved dramatically, but hey, who cares about nuance, we just love to complain!)
> Every kind of a man, or woman?
Why do so many people miss the point on this?
Instead of making this dream true for all the people who were previously excluded, we have pursued equality by making this dream accessible to NO ONE.
> Well, this is probably why statistics exist.
Like the statistics on plummeting mental health and happiness, an obesity epidemic, increases in "deaths of despair", and plateauing or decreasing life expectancy?
You're both right. I take your point to be similar to the disastrous outcome of the "No Child Left Behind" act. I do agree with you, but people didn't seriously _intend_ for the result to be that everyone gets lowered to a shit position.
Or maybe you're saying that's always how these initiatives turn out? That it can't be helped?
I think there is something to be valued about historical accuracy.
> Like the statistics on plummeting mental health and happiness, an obesity epidemic, increases in "deaths of despair", and plateauing or decreasing life expectancy?
In the 60s, suicide rates went UP. They peaked around 1970 and we have not reached those levels since.
Long-term statistics on alcoholism rates and drug use are also a real eye-opener. We know the cirrhosis death rate was rising through the 60s into the 70s, then peaked and went down. That was when the drunk-driving campaigns started.
Current drug use is nowhere near what it was a generation ago.
>Why do so many people miss the point on this?
Because one party wants to return to those times with the exact same social norms. So it's a dangerous line of thinking to forget that women were walled out of many jobs, or faced a huge wage gap when they were let in, and that minorities only barely started to get the same opportunities after a lot of struggle.
>Like the statistics on plummeting mental health and happiness, an obesity epidemic, increases in "deaths of despair", and plateauing or decreasing life expectancy?
Yes. It's only when it affects the majority that we start to pay attention.
A lot of the people who admire the caricatured midcentury economy are probably actually just nostalgic for the '90s. Case-Shiller was much lower, gas was cheap, college was still relatively affordable. The biggest economic complaints of the present day were not as serious then. (There were still affordable parts of the Bay Area!) The subjection of black people and women that existed in the 60s obviously wasn't necessary for those things to be possible.
But each decade's economy is the product of decades past. The policies of the 90s brought us to the present. So we don't want to repeat the mistakes of the 90s, and the 80s are associated with the iniquities of the Reagan administration. Thus you get this misplaced nostalgia for the 50s-70s without really understanding the problems or the progress that society made even as the highest levels of government seemed to drift off course.
> Well, this is probably why statistics exist.
How are statistics going to answer this question? Statistics are used to measure things. They don't tell you what things you should be measuring.
I'm not going to engage with you on a debate because you aren't acting in good faith.
The main problem there is soaring housing costs which have nothing to do with technology and everything to do with extremely restrictive planning regulations that make it impossible for the housing supply to keep up with population growth.
We're disappearing up our own assholes made of misery, loneliness and consumerism.
If you want a picture of the future: imagine a robotic boot, stamping on a human face that's begging for more techno-stamping, forever.
This is an excellent metaphor, so don't take this as criticism merely an observation, but it skews heavily towards the techno-utopian narrative that scam artists like Altman and Pichai keep harping on. Your techno-dystopia makes the same fatal assumption that tech matters much at all. The internet has become television. That's it. It's not nothing but it damn sure ain't everything, and it's just not all that important to most folks.
> Do people really think more technology is going to be the path to a better society? Because to me it looks like the opposite. It will just be used to stomp on ordinary people and create even more inequality.
The problem isn't "more technology" (nor is the solution "less) but rather a change in who controls it and benefits. We shouldn't surrender-in-advance to the idea that the stompers will definitely own it.
> In the 60s it was possible for a man to work an ordinary job, buy a house, settle down with a wife and support two or three children. That's completely out of the realm of reality for many young people now and the plummeting birth rates show it.
Most of the people I see working in tech can easily afford this. Maybe not private schools or McMansions, but the basics are pretty easy. Sure, if you're a humanities major with health problems it's tough.
This is far from true. Aside from Valley pay, which also has Valley housing costs, a "tech job" will barely pay for healthcare and housing for one, much less healthcare and housing for four.
The median dev salary in the USA is $133k, most likely with healthcare included. You can easily afford a mortgage on a median home, which runs about $24k/yr.
https://www.bls.gov/ooh/computer-and-information-technology/...
https://www.zillow.com/home-values/102001/united-states/
https://www.calculator.net/mortgage-calculator.html?chousepr...
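For what it's worth, that ballpark roughly checks out against the standard fixed-rate amortization formula. A minimal sketch in Python; the home price, down payment, and rate are my assumptions, not figures taken from the links:

```python
# Back-of-the-envelope check of the ~$24k/yr mortgage figure above.
# Assumed inputs (mine, not from the linked sources): ~$360k median home,
# 20% down, 30-year fixed at ~7%.

def monthly_payment(principal: float, annual_rate: float, years: int) -> float:
    """Standard fixed-rate amortization: P * r / (1 - (1 + r)^-n)."""
    r = annual_rate / 12   # monthly interest rate
    n = years * 12         # total number of payments
    return principal * r / (1 - (1 + r) ** -n)

principal = 360_000 * 0.80                  # price minus a 20% down payment
pay = monthly_payment(principal, 0.07, 30)
print(f"${pay:,.0f}/mo, ${pay * 12:,.0f}/yr")  # roughly $1,900/mo, ~$23k/yr
```

Property tax and insurance would push the real monthly cost higher, so treat this as a floor rather than the full cost of ownership.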
The median dev doesn’t work in a city with a median cost of living. The jobs are concentrated where there’s a high cost of living.
To me this all just looks like a big frothy chemical reaction playing out far beyond any one person's control.
With that view, many things oscillate over time, including game theory patterns (average interaction intentions of win-win, win-lose, lose-lose) and integration/mitosis (unions, international treaties, civil wars), etc.
So my optimistic view is that we will inevitably get more tech whether we want it or not, and it will probably make things worse for many people for a while, but then it will simultaneously enable and force a restructuring at some level that starts a new cycle of prosperity. On the other side it will be clear that all this tech directly enables a better (more free, more diverse, more rewarding, more sustainable) way of life.
I believe this because from studying history it seems this pattern plays out over and over and over again to varying degrees.
Either that or the AI robots kill us all.
Could go either way.
My guess is that whatever actually happens it will be very different than what the average person has imagined could happen (including me).
When you say that this pattern plays out, can you be specific?
I don't have time to be precise, but I'll do my best to be more specific.
New system better at organizing human behavior -> increases prosperity -> more capacity for invention -> new technologies disrupt power dynamics -> greed and power-law dynamics tilt system away from broad prosperity (most powerful switch from win-win to win-lose) -> majority become unsatisfied with system -> economics break down (too much debt, not enough education, technology increasingly and disproportionately benefits wealthy) -> trust breaks down -> average pattern of behavior tilts towards lose-lose dynamics -> technology keeps advancing -> new technologies disrupt old power structures -> restructuring of world-power order at highest levels (often through conflict) -> new system established, incorporating lessons learned from the old (more fair, more inclusive) -> trust reestablished, shift back to win-win dynamics (cycle repeats)
In reality it's more messy than this. Also the geographical location of this cycle and the central power can move around. Some places may sit out one or more cycles and get stuck.
The majority of people are already doing ”bullshit jobs” and many of them know it too. Using AI to automate the bullshit and capture the value leaves them with nothing.
The AI evangelists generally overlook that one of the primary things that capitalism does is fill people’s lives with busywork, in part as an exercise in power and in another part because if given their time back, those people would genuinely have absolutely no idea what to do with it.
> The middle class have financially benefited very little from the past 20+ years of productivity gains.
More like the last 50 years.
https://www.pewresearch.org/short-reads/2018/08/07/for-most-...
"For most U.S. workers, real wages have barely budged in decades"
The TL;DR is that in 1964 the average hourly wage was $20.27. As of 2018, average hourly wage was $22.65.
1974 median individual income was $28k; 2024 median individual income was $45k.
Over 50 years that’s a decent amount of growth. Obviously it could be better but it’s not nothing.
Source: https://fred.stlouisfed.org/series/MEPAINUSA672N
Sure, but with your 1974 $28k you could buy a nice house, whereas with your (equivalent) 2024 $45k you can buy an OK car, not a house.
In theory this graph is already inflation-adjusted.
In practice, I think this shows why economic statistics are borderline lies.
What's crazy is that people will jump all over themselves to say "well you could totally live like that at a 1960s level" like that's even a viable possibility today (in the US).
What's that about the falcon and the falconer? The center cannot hold...
People do make it work in the US with tiny incomes and a better standard of living than you’d see in a typical 1960’s household.
I know people raising a family of 4 on 1 income well below the median wage, without a college degree. They do get significant help from government assistance programs for healthcare, but their lifestyle is way better than what was typical in the 1960s.
Granted, they aren't doing this in an ultra-expensive US city, but on the flip side they're living in a huge-by-1960s-standards 3-bedroom house with a massive backyard.
Finally a rational comment and not blind hating-complaining. Thank you
I'd say the most mundane thing I use ChatGPT for is to tell me which deeply nested menu some option is in, for software I don't use very often.
I feel like most people would get value out of that.
Most recent example from a few days ago 'How do I change settings around spacing for "heading 2" in a document?'
> Do people really think more technology is going to be the path to a better society?
It has been for the last few thousand years.
That ended in around 2010, IMHO.
Now technology mostly divides and manipulates us, enshittifies things, and works to turn billionaires into trillionaires.
> "In the 60s it was possible for a man to work an ordinary job, buy a house, settle down with a wife and support two or three children."
In the 1930s, it wasn't possible so what's your point? (History time: What happened on October 24, 1929?) Choosing the 1960s as a baseline is artificially cherry-picking an era of economic growth (at the expense of the rest of post-WW2 Europe and Asia who were rebuilding) instead of an era of decline or normalcy.
> cherry-picking an era of economic growth
But we already did the growth. We didn't shrink back. So we should still be able to get those results.
According to the census bureau median family income in 1960 was $5,600, which is $58,946.59 in Jan 2024 dollars. It's $83,730 in 2024.
For individual males, in 1960 it was $4,100 ($43,157.33 Jan 2024 dollars) and $71,090 in 2024.
Sources:
https://www.census.gov/library/publications/1962/demo/p60-03...
https://www.census.gov/library/publications/2025/demo/p60-28...
https://data.bls.gov/cgi-bin/cpicalc.pl
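The inflation adjustment behind those figures is just a CPI ratio. The CPI-U levels below (roughly 29.3 in January 1960 and 308.417 in January 2024) are my assumed readings of the BLS series, so treat them as illustrative:

```python
def to_jan_2024_dollars(amount, cpi_then, cpi_jan_2024=308.417):
    """Scale a historical dollar amount by the ratio of CPI levels."""
    return amount * cpi_jan_2024 / cpi_then

# 1960 median family income of $5,600, with CPI-U near 29.3 in Jan 1960:
real_1960_family = to_jan_2024_dollars(5_600, 29.3)
# comes out around $58,900, consistent with the $58,946.59 figure above
```

Of course, as the sibling comments point out, the whole debate is about whether a single CPI basket is the right deflator in the first place.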
The thing is, a 1960s standard of living would be totally unacceptable by almost everyone today. Single car max, no air conditioning, small house or apartment, multiple children sharing bedrooms, no cellphones, no Internet, no cable, no streaming. Local calls only. Children allowed outside by themselves.
I think you're out of touch with what "almost everyone" considers an acceptable standard of living. I know plenty of people who have a single car or none at all, live in apartments living pay check to pay check with no kids at all because they are afraid they can't afford them. They would love to have what you described, minus the no cell phones/internet.
A random idea I had a few years ago was, what if someone started a “recent modern Amish” community, where they just intentionally keep the community’s tech usage either fixed at 1960s or 1990s, or maybe a fixed number of years in the past like 30 or 50 (meaning, the time target moves forward by a year each year).
So the kids growing up now might be playing the original Nintendo NES, or maybe an N64, they’d have phones and even computers, etc.
It could even be a little more nuanced like, the community could vote in certain classes of more modern goods.
I feel there is something unsound with that comparison, because you could also apply it to the kings of history, simply by listing things that were technologically unavailable or unaffordable to them.
Imagine transmigrating King Louis XVI (pre-revolution) into some working class professional with a refrigerator, a car, cable TV, etc. I don't think it's a given that he'd consider the package an upgrade.
“Pre-revolution” is doing a lot of work in that sentence
The point is to compare the daily life the monarch would have considered "normal."
"Pre-revolution" is just to preempt any "Hurr hurr he'd be ecstatic not to have his head chopped off LOL."
https://en.wikipedia.org/wiki/Execution_of_Louis_XVI
now tell us how they calculate those inflation adjusted dollars-- what basket of goods? prices in which markets?
They've cited and linked their sources. What's the issue?
The "issue" is the comparison is much more complex than people may be led to believe. It's not a simple "adjust the dollars to be the same" calculation.
There are a lot of assumptions that go into making that calculation.
Suppose I tell you that the value of a dollar you hold has gone down or up this year versus last year because of the price fluctuation of an item you never have and never will purchase, such as hermit crabs in New Zealand. Would you believe your dollar is worth more or less? What if the price of a good you do spend your dollars on has an inverse relationship with the price of hermit crabs in New Zealand? Or what if the prices of the items you do buy haven't moved at all?
The issue is that it doesn't support his preconceived notion that everyone is doing terribly.
> I don't even know what the selling point of AI is for regular people.
AI healthcare, for example. Have an entity that can diagnose you in a week at most, instead of being sent from specialist to specialist for months, each one being utterly uninterested in your problem.
A lot of this stuff about baby boomers vs now is based on how we remember things. The data is more complicated. Example: the average home in 1960 was like 1,600 sq ft; now it's like 2,800 sq ft. Sometimes we are comparing apples to oranges.
I am not trying to blunt social criticism. The redistribution of wealth is a real thing that started in the tax policies of the 1980s that we just can't seem to back away from.
But a lot of people are pushing gambling, crypto, and options, telling people that they have no hope of getting ahead just by working and saving. That's not helpful.
> The average home in 1960 was like 1600 sq ft, now its like 2800 sq ft.
Statements like this are not particularly meaningful unless there is actually a supply of 1600 sqft houses that are proportionally cheaper, otherwise you're just implying a causal relationship with no evidence.
Supply is driven by demand unless there is a monopoly in house building (there isn't). If this weren't the case, one could quickly become a billionaire by starting the first company that builds the small houses that are supposedly in demand but not provided by the market.
This is developers maximizing profit per lot.
All this means is there are enough buyers who can afford 2,800 sqft houses to keep builders from wasting a lot on a 1,600 sqft house. There could be a lot more people who want a cheaper 1,600 sqft house (including some of the 2,800 sqft house buyers!) than who want 2,800 sqft houses, but the market will keep delivering the latter as long as the return is better (for the return to improve for 1,600 sqft houses, see about convincing towns and cities to allow smaller lots, smaller setbacks, etc.).
> All this means is there are enough buyers who can afford 2,800 sqft houses to keep builders from wasting a lot on a 1,600 sqft house.
So there's a limited supply of lots (or of permission to use those lots).
Exactly. Zoning laws affect lot sizes, remove them as in Houston or most other countries and the problem disappears.
Rural America often doesn’t have “zoning laws”. That hasn’t stopped my home price from more than doubling since 2012.
> Supply is driven by demand unless there is a monopoly in house building (there isn't).
There is, however, a monopoly on issuing building permits.
Bingo! And that's on government. Remove (or relax) those now by decree and rent prices drop tomorrow
Government? Or the people? These are local government issues, they're not that far detached from the people who elect them.
You're still presupposing that there's a linear (or at least linear enough to be significant amongst the myriad other factors involved) relationship between square footage of house and cost. And that that relationship extends arbitrarily downwards as you reduce the square footage.
It's one of the main factors. And it can be reduced to almost nothing if a small single family housing zone is turned into a skyscraper providing accommodation for thousands
We will be able to build even bigger super yachts for billionaires now though. Bezos can have his own personal cruise ship.
> The problem becomes that eventually all these people who are laid off are not going to find new roles.
> Who is going to be buying the products and services if no-one has money to throw around?
I've wondered about this myself. People keep talking about the trades as a good path in the post-AI world, but I just don't see it. If huge swaths of office workers are suddenly and permanently unemployed, who's going to be hiring all these tradesmen?
If I were unemployed long-term, the one upside is that I would suddenly have the time to do a lot of the home repairs that I've been hiring contractors to take care of.
The other thing I worry about is the level of violence we're likely to see if a significant chunk of the population is made permanently unemployed. People bring up Universal Basic Income as a potential fix, but I think that only addresses part of the issue. Despite the bluster and complaints you might hear at the office, most people want the opportunity to earn a living; they want to feel like they're useful to their fellow man and woman. I worry about a world in which large numbers of young people are looking at a future with no job prospects and no real place for them other than to be made comfortable by government money and consumer goods. To me that seems like the perfect recruiting ground for all manner of extremist organizations.
> If huge swaths of office workers are suddenly and permanently unemployed, who's going to be hiring all these tradesmen?
"Professionals were 57.8 percent of the total workforce in 2023, with 93 million people working across a wide variety of occupations" [1]. A reasonable worst-case scenario leaves about half of the workforce intact as is. We'd have to assume AI creates zero new jobs or industries, and that none of these people can pivot into doing anything socially useful, to expect them to be rendered unemployable.
> if a significant chunk of the population is made permanently unemployed
They won't. They never have. We'd have years to debate this in the form of unemployment insurance extensions.
[1] https://www.dpeaflcio.org/factsheets/the-professional-and-te...
>We'd have to assume AI creates zero new jobs or industries
Zero American jobs, sure. It's clear that these american industries don't want to invest in America.
>They won't. They never have.
Not permanent, but the trends don't look good. It doesn't remain permanent because mass unemployment becomes a huge political issue at some point. As it is now among Gen Z, who've completely pivoted in the course of a year.
Increased production has always just led to more stuff being made, not more people unemployed. When even our grandparents were kids, a new shirt was something you'd take care of, since you didn't get a new one very often. Now we head to Target and throw 5 into our cart on a whim.
Were there fewer weavers once machines were doing the job (or whatever)? Sure. But it balances out. It's just bumpy.
The big change here is that it’s hitting so many industries at once, but that already happened with the personal computer.
I heard Spain has 20% unemployment among the young and the violence problem did not happen. Didn’t check it though.
UBI correctly identifies the problem (people can’t afford housing/clothing/food without money) but is an inefficient solution imo. If we want people to have those things, we should simply give them to them.
How much of them, which ones, to who, at what price, who is forced to provide them, how much do they get, what about other needs...
Or we could just give people money and let them do as they wish with it, and trade off between their needs and wants as they see fit (including the decision of whether they want to work to obtain more of their wants).
How much money should everyone get?
The right answer to this is not a number, but rather a feedback loop that converges on the right number. When everyone is laid off without production of goods slowing down, the result is deflation; when everyone gets too much money relative to production of goods, the result is inflation. So that means you can use the CPI inflation as a feedback variable, and adjust the UBI amount until the CPI is stable.
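That feedback loop can be sketched as a simple proportional controller: cut the UBI when measured inflation runs above target, raise it when inflation runs below. The target and gain values here are arbitrary placeholders, not a claim about the right tuning:

```python
def adjust_ubi(ubi, inflation, target=0.02, gain=0.5):
    """One step of the feedback loop: move the UBI payment against
    the gap between measured CPI inflation and the target rate."""
    return ubi * (1 - gain * (inflation - target))

payment = 1_000.0                                # hypothetical monthly UBI
payment = adjust_ubi(payment, inflation=0.05)    # hot CPI -> payment shrinks
payment = adjust_ubi(payment, inflation=-0.01)   # deflation -> payment grows
```

In practice you'd need damping and lag handling, since CPI responds slowly, but the sign of the correction is the whole idea.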
If the plan was to give people the full set of housing/clothing/food then use the poverty line calculation for amount of money. Or the social security calculation.
We can iterate on the exact amount. There are difficulties with UBI but figuring out the amount is a pretty minor one.
the main problem with UBI is it makes people even more dependent on the state, and therefore more easy to control by said state.
Well, that and the utterly insane cost (and therefore inflation).
> The other thing I worry about is the level of violence we're likely to see if a significant chunk of the population is made permanently unemployed.
No worries, they'll just make AI robots to shoot people.
> Who is going to be buying the products and services if no-one has money to throw around?
The same people who are buying products and services right now. Just 10% of the US population is responsible for nearly 50% of consumption.
We are just going to bifurcate even more into the haves and have-nots. Maybe that 10% now becomes responsible for 70+% of consumption and everyone else is fighting for scraps.
It won't be sustainable and we need UBI. A bunch of unemployed, hungry citizens with nothing left to lose is a combo that equals violent revolution.
The top 10% income bracket of the US is broad enough to include basically all US software developers, isn't it?
If all jobs evaporate, what does the economy look like when just based on interest and dividend payments?
Top 10% of households is $212k. Plenty of software developers don't make that, but if they have a spouse with a $70k job, they are in the top 10%. However, many software jobs are in HCOL areas now, so they probably don't feel like they are in the top 10%.
> The top 10% income bracket of the US is broad enough to include basically all US software developers, isn't it?
I wish! My salary is a bit below the median US household income.
Pretty much yeah, I believe it's around $200k/year puts you in that bracket.
If all jobs evaporate, then only asset owners will have money to spend, everyone else is left to fight for scraps so we either all die off or we get mad max.
It looks like Mad Max.
Or maybe the type of labor desired will be more complex, interesting, and valuable, as it was when we gave up hunting and gathering for farms, and as we mechanized farming and left for factories and offices.
> Maybe that 10% now becomes responsible for 70+% of consumption and everyone else is fighting for scraps.
Or everyone else starts fighting that 10% once they get tired of scraps.
I posit that the consumption is the problem.
"All of these copilots are supposed to make work more efficient with fewer people, but my business leaders are also saying they can't reduce head count yet."
Duh, if they reduce headcount then they will have fewer people in their department, which will negatively affect their chances for promotion and desirability of their resume. That's why they actually offshore the jobs to India and Southeast Asia; it lets them have 3x+ the headcount for the same budget.
If you want to have them actually reduce headcount, make org size the denominator in their performance reviews, so a director with 150 people needs to be 15x more productive than a manager with 10, who needs to be 10x more productive than the engineer IC. I guarantee that you will see companies collapse from ~150,000 employees to ~150, and profit/employee numbers in the millions (and very likely, 90% unemployment and social revolution). This is an incentive issue, not a productivity issue. Most employees and their employers are woefully unproductive because of Parkinson's Law.
You'll never see a manager or even a managing-CEO propose this, though, because it'll destroy their own marketability in the management job market. Only an owner-CEO would do it - which some have, eg. Valve, Nintendo, Renaissance Technologies. But by definition, these are minority employers that are hard to get into, because their business model is to employ only a few hundred people and pay them each millions of dollars.
> The problem becomes that eventually all these people who are laid off are not going to find new roles.
At least one sci-fi author has gamed this out:
https://marshallbrain.com/manna1
Intuitively, the whole economy cannot be B2B SAAS companies funded by VCs. At some point you need to provide value to consumers or the government. If those consumers don’t have any money and/or aren’t willing to spend a paycheck making studio ghibli profile pics, you have a problem. I guess Sam Altman has been asking for a government bailout so maybe he is going for the b2g option in a backwards sort of way.
>Who is going to be buying the products and services if no-one has money to throw around?
Let me answer your question with another question: if the population pyramid is inverted and the birth rate is like 1.1 babbies per 2 adults, then how is any market going to grow? Seems to me all markets will halve. On top of what you pointed out. Or I suppose it's a happy accident if our workforce halves as our work halves, but still, the consumer market has halved. It does make me wonder under what reality one would fathom that the stock market would go up long term.
When AI and robots take care of everything there is more time to make babbies
The narrative going around in AI skeptic circles at least is that these layoffs are not tied to AI but to covid-era over-hiring, and that companies have an incentive to blame the layoffs on AI rather than admit underperformance/bad planning.
Realistically I think there are two outcomes:
AGI succeeds and there are mass layoffs, money is concentrated further in the hands of those who own compute.
OR
AI bubble pops and there are mass layoffs, with bailouts going to the largest players to prevent a larger collapse, which drives inflation and further marginalizes asset-less people.
I honestly don't see a third option unless there is government intervention, which seems extremely far fetched given it's owned by the group of people who would benefit from either scenario presented above.
> with bailouts going to the largest players
> I honestly don't see a third option unless there is government intervention
Bailouts are government intervention. The third option is an absence of government intervention, at least at the business level. By all means intervene in the form of support for impacted individuals, e.g. making sure people have food on the table. Stop intervening to save businesses that fail.
Third option: no bailouts of any type. Don't socialize losses. The board resets itself again, and let entrepreneurs, small businesses flourish again.
It's slim but the 3rd option is the administration falling apart Nixon-style. Be it by death, conviction, or resignation.
The slimmer part is any potential successors suddenly responding to the people, but maybe that happens in congress in 2027. Still up in the air.
Does the USA even have enough money to rescue the tech giants at this point? We could be talking multiple trillions of dollars at worst. And the AI-only companies like OpenAI and Anthropic would be the most vulnerable compared to, say, Google or Microsoft, because they have no fallback and no sustainability without investor money.
And Nvidia would be left in a weird place where the vast majority of their profits are coming from AI cards and demand would potentially dry up entirely.
There is talk about a bailout, but first, is it even possible? Second, how long would it postpone the issue? A massive increase in government debt used in a bailout likely leads to more inflation, which leads to higher interest rates, making that debt much more expensive. And at some point the credibility of that debt, and of the dollar in general, will be gone.
Ofc, this does lead to ever increasing paper valuations. So maybe that is the win they are after.
That is a huge problem.
We were not pushing 40 trillion in debt at the time of the great financial crisis.
The TARP was a max of 700 billion and we didn't even disburse all the funds.
Trying to do a bailout of this size could easily cause a crisis in the treasury market. Then we really are in huge trouble.
None of these companies has piles of debt. So why would they need a bailout at all?
Microsoft, Google, etc are printing money despite the bubble, not because.
Don't see why they would need rescuing.
If the AI bubble pops it's just another bubble like the many ones that have occurred throughout history and the market eventually recovers.
In the AGI Succeeds scenario, the situation is unprecedented and it's not clear how it ever gets better.
Tbh, the answer is simple: if we truly get AGI, the government would nationalize it because it's a matter of national security and prosperity for that matter. Everything will change forever. Agriculture, Transportation, Health... Breakthrough after breakthrough after breakthrough. The country would hold the actual key to solve almost any problem.
when you write it out like that, it sounds unfathomably… silly.
I'm not a tinfoil hat skeptic, and i'd like to think i can accept the rationale behind the possibility. But I don't think we're remotely close as people seem to think.
As technology changes over history, governments tend to emerge that reflect the part of the population that can maintain a monopoly of violence.
In the Classical Period, it was the citizen soldiers of Rome and Greece, at least in the west. These produced the ancient republics and proto-democracies.
Later replaced by professional standing armies under people like Alexander and the Caesars. This allowed kings and emperors.
In the Early to Mid Medieval period, they were replaced by knights, elites who allowed a few men to defeat commoners many times their number. This caused feudalism.
Near the end of the period, pikes and crossbows and improved logistic systems shifted power back to central governments, primarily kings/emperors.
Then, with rifles, this swung the pendulum all the way back to citizen soldiers between the 18th and early 20th century, which brought back democracies and republics.
Now the pendulum is going in the opposite direction. Technology and capital distribution has already effectively moved a lot of power back to an oligarchic elite.
And if full AGI combined with robots more physically capable than humans, it can swing all the way. In principle a single monarch could gain monopoly of violence over an entire country.
Do not take for granted that our current understanding of what the government is, is going to stay the same.
Some kind of merger between capital and power seems likely, where democratic elections quickly become completely obsolete.
Once the police and military have been mostly automated, I don't think our current system is going to last very long.
The government only nationalises losses, not wins.
The key to typing out the solution to any problem, not actually solving it.
Looking at the current state of politics around the world...you really think that would be the outcome?
AGI? Absolutely. If your country gets there, would anyone relinquish that type of power and knowledge?
Bailouts for chip fabrication and nothing else.
The same could be said for the nuclear arms race. The problem is that you can't afford to let a competitor/foreign country own this technology. You must invest. The problems have to be figured out later.
> problem ... laid off are not going to find new roles
Not necessarily. If AI improves productivity, which it hasn't much yet, there is the option to make more stuff rather than the same output with fewer people.
The Luddites led on to Britain being the workshop of the world, not to everyone sitting around unemployed, at least not for a while.
> Who is going to be buying the products and services if no-one has money to throw around?
We have no basis for seriously considering this hypothetical when it comes to LLMs.
Depends how absolute one takes the statement "no-one has money to throw around".
Taken loosely, we have seen previous developments which make a large fraction of a population redundant in short periods, and it goes really badly, even though the examples I know of are nowhere near the entire population.
I'm not at all sure how much LLMs or other GenAI are "it" for any given profession: while they impress me a lot, they are fragile and weird, and I assume that if all development stopped today the current shininess would tarnish fast enough.
But on the other hand, I just vibe coded 95% of a game that would have taken me a few months back when I was a fresh graduate, in a few days, using free credit. Similar for the art assets.
How much money can I keep earning for that last 5% the LLM sucks at?
There's a karma element too
Maybe I can make things more efficient by getting rid of you and replacing you with AI, but how long until my boss has the same idea?
It's not even about reducing headcount but offshoring too. I see that in my industry. Major orgs are all hiring in Bangalore now. Life is good if you are in Bangalore or Hyderabad. AI is seen as something to smooth over the previous language/skill/culture gaps that may have been plugging the dam so far.
Tax AI.
> Who is going to be buying the products and services if no-one has money to throw around?
Here's hoping we figure that out soon otherwise we're going to see how long it takes the poor to reinvent the guillotine
Personally I'm kind of hoping for sooner than later. The greed and vice of the upper stratosphere in society is wildly out of control and needs to be brought to heel
Google, Meta, Microsoft, and Amazon will get through easily. They don't have excessive debt. They can afford to lose their investments into AI. Their valuations will take a hit. Nvidia will lose revenue and profits, stock will go down by 60% or more, but it will also survive.
Oracle will likely fail. It funded its AI pivot with debt. The Debt-to-Revenue ratio is 1.77, the Debt-to-Equity ratio D/E is 520, and it has a free cash flow problem.
OpenAI, Anthropic, and others will be bought for cents on the dollar.
>Debt-to-Equity ratio D/E is 520
It's actually 520% or 5.2 - still high but 520 would be crazy.
> OpenAI, Anthropic, and others will be bought for cents on the dollar.
And just like 23andme, so will all that data be sold for dimes.
If it's just like 23andme, that sale would be to Sam Altman.
Google, Meta, Microsoft and Amazon might get through easily as companies. I don't think all G/M/M/A staff will get through easily.
Microsoft is in a pickle. They put AI lipstick on top of decades of unfixed tech debt and their relationship with their userbase isn't great. Their engineering culture is clearly not healthy. For their size and financial resources, their position in the market right now is very delicate.
I think that's the impression you get if you focus on Microsoft as an OS vendor. It's not that anymore; that's why their OS has sucked for many years now. Their main business is B2B: cloud services and Azure. I think they are pretty safe from OpenAI. Plus they have invested big in OpenAI as well.
Windows is hard to replace in large organizations. Is there actually any real AI competitors in the stack? Well Google, maybe. The whole Windows+Office+AD+Exchange and now Azure stack is unlikely to go any time soon. However badly they screw it up.
True. Basically any medium to large scale business is reliant on Windows/Office/AD. While there are open source alternatives to Windows/Office, I can't think of a good open source alternative to AD/Group Policy/etc
M365 is arguably far worse than Office 97. OneDrive/SharePoint is confusing, and Teams is especially broken.
Azure is a product all right, but there’s nothing particularly better there than anywhere else.
SharePoint has been a dog’s breakfast since forever.
M365 is inarguably worse than Office 97
I disagree. They're the one place that can get away without investing in frontier model research and still win in the enterprise.
Google is the only place that serves the enterprise (Workspace for productivity, Cloud for IT, Devices for end users) AND conducts meaningful AI research.
AWS doesn't (they can sell cloud effectively, but don't have any meaningful in-house AI R&D). Meta doesn't (they don't cover enterprise and, frankly, nobody trusts Zuck... and they're flaky).
Oracle doesn't. They have grown their cloud business rapidly by 1) easy button for Oracle on-prem to move to OCI, and 2) acting like a big colo for bare metal "cloud" infra. No AI.
OpenAI has fundamental research and is starting to have products, but it's still niche. Same with Anthropic. They're not in the same ball game as the others, and they're going to continue paying the hyperscalers billions annually for infra, too.
This is Google's game to lose, imho, but the biggest loser will be AWS (not Azure/Microsoft).
I don't think so.
They are one of the few companies actually making money with AI, as they have intelligently leveraged Office 365's position inside companies to sell Copilot. Their AI investment plans are, well, plans, which could be scaled down easily. The worst-case scenario for them is their investment in OpenAI becoming worthless.
It would hurt, but it's hardly life-threatening. Their revenue driver is clearly their position at the heart of enterprise IT, and they are pretty much untouchable there.
> Worst case scenario for them is their investment in OpenAI becoming worthless.
And even then, if that happens when the bubble pops, they'll likely just acquire OpenAI on the cheap. Thanks to the current agreement, it already runs on Azure, they already have access to OpenAI's IP, and Microsoft has already developed all their Copilots on top of it. It would be near-zero cost for Microsoft at that point to just absorb them and continue on as they are today.
Microsoft isn't going anywhere, for better or for worse.
Despite them pissing off users with Windows, what HN forgets is that those users aren't Microsoft's customers. The individual user/consumer never was. We may not want what MS is selling, but their enterprise customers definitely do.
I cry for Elon, that precious jewel of a human being.
Tesla (P/E: 273, PEG: 16.3) the car maker, without the robots and robotaxis, accounts for at best 15% of Tesla's valuation. When the AI hype dies, the selloff starts and negative sentiment hits, we're left with a below-$200B market cap company.
It will hurt Elon mentally. He will need a hug.
He's gonna need a lot of ketamine in the aftermath that's for sure.
The fanboys obsessively buy any dip. It should have been back at a $200 billion market cap countless times, but it never gets there.
Never bet against TSLA. Elon will just start selling tickets to the Mars colony.
Then show us your puts, Mr. Buffett.
Buffett isn't a put buyer but did well investing in Tesla's rival BYD.
lol - yea…
You will be able to rent a whole Meta datacenter with thousands of NVIDIA B200s for $5/hour. AWS will become unprofitable due to the abundance of capacity...
> Google, Meta, Microsoft, and Amazon will get through easily. They don't have excessive debt. They can afford to lose their investments into AI.
Survive, yes. I don't think anybody ever questioned this.
I wonder if they will be able to remain "growth stocks", however. These companies are allergic to being seen as mature companies, with more modest growth profiles, profit sharing, etc.
Meta --> GenAI content creation can disrupt Instagram. ChatGPT likely has more data on a person than Instagram does by now for ad targeting. ChatGPT already has 800 million daily active users.
Google --> Cash cow search is under threat from ChatGPT.
Microsoft --> Productivity/work is fundamentally changed with GenAI.
Apple --> OpenAI can make a device that runs ChatGPT as the OS instead of relying on iOS.
I'm betting that OpenAI will emerge bigger than current big tech in ~5 years or less.
> Apple --> OpenAI can make a device that runs ChatGPT as the OS instead of relying on iOS.
Yeah... No they can't. I don't agree with any of your "disruptions," but this one is just comically incorrect. There was a post on HN somewhat recently that was a simulated computer using LLMs, and it was unusable.
I think it should be obvious just from thinking about how much more there is to an OS beyond the UI for launching programs.
Not to mention you would need an order of magnitude improvement in on-device inference speed to make this feasible at current smartphone costs. Or they could offload it and sell an insecure overpriced-subscription laggy texting device that bricks when you don’t have cell service…
It isn't going to happen soon. Maybe 4-6 years from now.
But it is clearly the direction. Apple will try to stave off this move by turning iOS more into an LLM as well.
I find myself doing more and more inside ChatGPT. When ChatGPT inevitably can generate GUIs on the fly, book me an Uber, etc., I don't see why iOS wouldn't have competition.
OpenAI has no technical moat: others can do what they do and generate the same content, and everyone has the same data.
OpenAI does not expect to be cash-flow positive until 2029. When no new capital comes in, it can't continue.
OpenAI can't survive any kind of price competition.
They consistently have the best or second best models.
They have infrastructure that serves 800 million monthly active users.
Investors are lining up to give them money. When they IPO, they'll easily be worth over $1 trillion.
There's price competition right now. They're still surviving. If there is price competition, they're the most likely to survive.
They have <a really expensive> infrastructure that serves 800 million monthly active <but non-paying> users.
Even worse, they train their model(s) on the interactions of those non-paying customers, which makes the model(s) less useful for paying customers. It's kind of "you cannot charge Porsche prices if you only satisfy the needs of a typical Dacia owner".
I give more of my data to OpenAI than to Meta. ChatGPT knows so much about me. Don't you think they can easily monetize their 800 million (close to 1 billion by now) users?
Meta has the giant advantage that other people interact with your data. I think that is wildly more valuable than what chat engines have.
Given that OpenAI has publicly stated that they're working on monetizing free users (ads), I think they can make ads targeting as good as Meta can.
This is why Meta is all in on AI by the way. With nearly 1 billion users, ChatGPT is a huge threat to Meta's ad empire.
> Investors are lining up to give them money. When they IPO, they'll easily be worth over $1 trillion.
Your premise is that there is no bubble. We are talking about what happens when the bubble bursts. If investor money doesn't dry up, there is no burst.
I think we are in 1995 of the dotcom bubble for AI.
Clearly, a lot of people here disagree with you. Doesn't mean you cannot be right, but in general, the HN crowd is a pretty good predictor of the trends in the tech industry.
The masses are usually wrong at predicting these kinds of events. I don't see why HN is any different from Reddit groupthink.
Nobody was predicting the dotcom or financial-crisis bubbles. The fact that everyone and their grandma is calling this a bubble makes me think that it simply can't be one.
Bitcoin is going to be the next universal payment system anytime now...
Weird example to trot out as a bubble when at any point in its history, if you held for a few years or so you’d be pretty far ahead on your investment. It clearly shows people are awful at calling out bubbles.
More like 1998
What if investors stop giving them money before they IPO?
I’ll HAPPILY bet that it won’t. $10,000 to a charity of each other’s choosing?
OpenAI has yet to make a single, solitary thing that works well. It's nothing but Sam Altman hyping things. They aren't an existential threat to anyone.
GPT-3 and GPT-4 were impressive and kind of kicked off the current AI boom/bubble. Since then, though, Altman turning the non-profit OpenAI into a kind of for-profit Closed AI seems to have led to a lot of talent leaving.
> Apple --> OpenAI can make a device that runs ChatGPT as the OS instead of relying on iOS.
Or, instead of spending billions training models that are nearly all the same, they could take advantage of all the datacenters full of GPUs, and of AI companies frantically trying to make a profit (many most likely crashing and burning in the process), and pay relative pennies to use the top, nearly commoditized, model of the month.
Then, maybe someday, starting late and taking advantage of the latest research and training methods that shave off years of training time, they could save billions on a foundation model of their own.
I don't think it makes sense for Apple to be an AI company. It makes sense for them to use AI, but I don't see why everyone needs to have their own model right now, during all the churn. It's already nearly a commodity. In-house doesn't make sense to me.
> I'm betting that OpenAI will emerge bigger than current big tech in ~5 years or less.
I seriously doubt it. If this bubble pops, the best OpenAI can hope for is they just get absorbed into Microsoft.
>Apple --> OpenAI can make a device that runs ChatGPT as the OS instead of relying on iOS.
Ah yes, PromptOS will go down in the history books for sure.
No, LLMs are an existential threat. OpenAI is a heavily leveraged prop-model company selling inference, which often has a model a few months ahead of its competitors.
AI isn’t bullshit, but selling access to a proprietary model has certainly not been proven as a business model yet.
> He told the BBC that the company owns what he called a “full stack” of technologies, from chips to YouTube data to models and frontier science research. This integrated approach, he suggested, would help the company weather any market turbulence better than competitors.
I guess but is it better for an investor to own 2 shares of Google or 1 share of OpenAI and 1 share of TSMC?
Like I have no doubt that being vertically integrated as a single company has lot of benefits but one can also create a trust that invests vertically as well.
There may be firm-specific risk etc., but there is also the concept of double marginalization: monopolies stacked across the vertical layers of a production chain are less efficient than a single integrated monopoly, because with one monopoly you only get a single layer of deadweight loss rather than several.
https://en.wikipedia.org/wiki/Double_marginalization?wprov=s...
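The effect is easy to see with a toy linear-demand model (a sketch; the demand curve and numbers are made up purely for illustration):

```python
# Toy model of double marginalization. Demand: P = a - Q; upstream marginal
# cost is c. Compare one integrated monopolist with an upstream monopolist
# selling through an independent downstream monopolist.

def integrated(a: float, c: float):
    # Single monopolist maximizes (a - q - c) * q  =>  q = (a - c) / 2
    q = (a - c) / 2
    return a - q, q  # (retail price, quantity)

def double_marginalized(a: float, c: float):
    # Downstream maximizes (a - q - w) * q given wholesale price w,
    # so q = (a - w) / 2. Upstream anticipates this and maximizes
    # (w - c) * (a - w) / 2  =>  w = (a + c) / 2.
    w = (a + c) / 2
    q = (a - w) / 2
    return a - q, q

# With a=100, c=20: the integrated firm charges 60 and sells 40, while the
# stacked monopolies charge 80 and sell only 20 (two markups, less output).
print(integrated(100, 20), double_marginalized(100, 20))
```

Consumers pay more and total industry profit is lower under the stacked monopolies, which is why merging the layers into one firm is the more efficient of the two bad options.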
Well, if AI goes poof, the equity markets take a really big, bad hit. So I would probably move out of equities and into something more concrete, and reinvest if you can time the market bottom.
Nvidia earnings tomorrow will be the litmus test if things are going to topple over.
OpenAI is privately held. Regular retail investors can't buy shares.
OpenAI going poof would have a negative impact on TSMC demand (revenue), right?
Yeah, TSMC demand might go down from 300% to 100%.
So yes, it would have an effect; even with your imaginary numbers that'd be a 3x drawdown
It might pull in the schedules, but since it probably wouldn't cause an actual hole, it's really more about long-term fab build plans than anything else.
> since it probably wouldn't cause there to be an actual hole, its really more about long term fab build plans than anything else
Equities are forward looking. TSMC's valuation doesn't make sense if it doesn't have a backlog to grow into.
With those 2 shares of Google you are also buying a piece of their money printer, i.e. the advertising business.
Does anyone really think it’s “if” and not “when” ?
Agreed, it's when. They're hoping to stave it off or maybe stretch out the pop into a correction by all hedging together with all these incestuous deals, but you can't hold back the tide. They debuted this tech way too early, promised way too much, and now the market is wary about buying AI products until more noise settles out of the system.
> They debuted this tech way too early, promised way too much,
Finally, some rational thought amid the AI insanity. The entire "fake it til you make it" aspect of this is ridiculous. Sadly, in the world we live in, you can't build a product and hold its release until it works; you have to be first to release, even if it's not working as advertised. You can keep brushing off critiques with "it's on the roadmap", and those who are not as tuned in will just assume it's working and that nothing nefarious is going on. For as long as we've had paid LLM apps, I'm still amazed at the number of people who do not know that the output is still not 100% accurate. There are also people who describe waiting for a response as the model "thinking", and misleading labels like "searching the web..." when on this forum we all know it's not a live search.
> sadly, the world we live in means that you can't build a product and hold its release until it works. you have to be first to release even if it's not working as advertised.
You absolutely can and it's an extremely reliable path to success. The only thing that's changed is the amount of marketing hype thrown out by the fake-it vendors. Staying quiet and debuting a solid product is still a big win.
> I'm still amazed at the number of people that do not know that the output is still not 100% accurate.
This is the part that "scares" me. People who do not understand the tool think these things are ACTUALLY INTELLIGENT. Not only are they not intelligent, they're not even ACTUALLY language models: few LLMs are trained on only language data, and none work on language units (letters, words, sentences); tokens are abstractions of those. They're OUTPUT modelers. And they're absolutely not even close to being ready to be let loose unattended on important things. There are already people losing careers over AI messes, like lawyers using AI to appeal the sanctions they got for having AI write a motion. Etc.
And I think that was ultimately the biggest unforced error of these AI companies and the ultimate reason for the coming bubble crash. They didn't temper expectations at all, the massive gap between expectation and reality is already costing companies huge amounts of money, and it's only going to get worse. Had they started saying, "these work well, but use them carefully as we increase reliability" they'd be in a much better spot.
In the past two years I've been involved in several projects trying to leverage AI, and all but one has failed. The most spectacular failure was Microsoft's Dragon Copilot. We piloted it with 100 doctors; after a few months we had a 20% retention rate, and by the end of a year, ONE doctor still liked it. We replaced it with another tool that WORKS, docs love it, and it was 12.6% of the cost, literally an eighth of the price. MS was EXTREMELY unhappy we canceled after a year and tried to throw discounts at us, but ultimately we had to say "the product does not work nearly as well as the competition."
It’s going to pop as soon as they get confirmation the govt will bail them out. Until then they’re going to give it their all to keep it growing.
I think they already have that confirmation. When we bailed the banks out in 08 we basically said "If you're big enough that we'd be screwed without you then take whatever risks you like with impunity".
That's an oversimplification, of course, but the core of the lesson is there. We have actually kept on with all the practices that led to the housing crash (MBS, predatory lending, mixing investment and traditional banking).
> "If you're big enough that we'd be screwed without you then take whatever risks you like with impunity."
I know financially it will be bad because number not go up and number need go up.
But do we actually depend on generative/agentic AI in any meaningful way? I'm pretty sure all LLMs could be Thanos-snapped away and there would be near-zero material impact. If the studies are at all reliable, all the programmers would be more efficient. Maybe we'd be better off because there wouldn't be so much AI slop.
It is very far from clear that there is any real value being extracted from this technology.
The government should let it burn.
Edit: I forgot about “country girls make do”. Maybe gen AI is a critical pillar of the economy after all.
> I’m pretty sure all LLMs could be Thanos snapped away and there would be near zero material impact.
I mostly agree, but I don't think it's the model developers that would get bailed out. OpenAI & Anthropic can fail, and should be let to fail if it comes to that.
Nvidia is the one that would get bailed out. As would Microsoft, if it came to that.
I also think they should be let to fail, but there's no way the US GOV ever allows them to.
Why would Nvidia need a bailout? They have $10 billion of debt and $60 billion of cash... Or would it be the government finally throwing away any trust in the market and just propping up valuations? That would lead to inevitable doom.
> Nvidia is the one that would get bailed out. As would Microsoft, if it came to that.
> I also think they should be let to fail, but there's no way the US GOV ever allows them to.
There's different ways to fail, though: liquidation, and a reorganization that wipes out the shareholders.
OpenAI could be liquidated and all its technology thrown in to the trash, and I wouldn't shed a tear, but Microsoft makes (some) stuff (cough, Windows) that has too much stuff dependent on it to go away. The shareholders can eat it (though I think broad-based index funds should get priority over all other shareholders in a bankruptcy).
I expect the downvotes to come from this as they always seem to do these days, but I know from my personal experience that there is value in these agents.
Not so much for the work I do for my company, but having these agents has been a fairly huge boon in some specific ways personally:
- search replacement (beats Google almost all of the time)
- having code-capable agents means my pet projects are getting along a lot more than they used to. I check in with them in moments of free time and give them large projects to tackle that will take a while (I've found that having them do these in Rust works best, because it has the most guardrails)
- it's been infinitely useful to be able to ask questions when I don't know enough to know what terms to search for. I have a number of meatspace projects that I didn't know enough about to ask the right questions, and having LLMs has unblocked those 100% of the time.
Economic value? I won't make an assessment. Value to me (and I'm sure others)? Definitely would miss them if they disappeared tomorrow. I should note that given the state of things (large AI companies with the same shareholder problems as MAANG) I do worry that those use cases will disappear as advertising and other monetizing influences make their way in.
Slop is indeed a huge problem. Perhaps you're right that it's a net negative overall, but I don't think it's accurate to say there's not any value to be had.
I'm glad you had positive experiences using this specific technology.
Personally, I had the exact opposite experience: Wrong, deceitful responses, hallucinations, arbitrary pointless changes to code... It's like that one junior I requested to be removed from the team after they peed in the codebase one too many times.
On the slop I have two sentiments: lots of slop = higher demand for my skills to clean it up. But also lots of slop = worse software on probably most things, impacting not just me but also friends, family, and the rest of humanity. At least it's not only a downside :/
Which of these firms is too big to fail though?
It all depends on whether MAGA survives as a single community. One of the few things MAGA understands correctly is that AI is a job-killer.
Trump going all out to rescue OpenAI or Anthropic doesn't feel likely. Who actually needs it, as a dependency? Who can't live without it? Why bail out entities you can afford to let go to the wall (and maybe then corruptly buy out in a fire sale)?
Similarly, can you actually see him agreeing to bail out Microsoft without taking an absurd stake in the business? MAGA won't like it. But MS could be broken up and sold; every single piece of that business has potential buyers.
Nvidia, now that I can see. Because Trump is surrounded by crypto grifters and is dependent on crypto for his wealth. GPUs are at least real solid products and Nvidia still, I think, make the ones the crypto guys want.
Google, you can see, are getting themselves ready to not be bailed out.
> One of the few things MAGA understands correctly is that AI is a job-killer
Trump (and by extension MAGA) has the worst job growth of any President in the past 50 years. I don't think that's their brand at all. They put a bunch of concessions to AI companies in the Big Beautiful Bill, and Trump is not running again. He would completely bail them out, and MAGA will believe whatever he says, and congress will follow whatever wind is blowing.
If Meta or Google disappeared overnight, it would be, at worst, a minor annoyance for most of the world. Despite both companies being advertising behemoths, marketing departments everywhere would celebrate their end.
> If Meta or Google disappeared overnight, it would be, at worst, a minor annoyance for most of the world.
I'll grant you Meta, but losing Google in that way would be highly disruptive because so many people have their primary email account on it.
Watching this on my Android on a chrome browser.
Hard disagree
Considering how much of the world runs on WhatsApp alone, that’s laughably wrong.
Then they would just use another messenger or fall back on RCS/SMS.
The only reason WhatsApp is so popular is that so many people are on it, but you have all you need (their phone number) to contact them elsewhere anyway.
So if WhatsApp had an outage, but you needed to communicate to someone, you wouldn't be able to? Don't you have contacts saved locally, and other message apps available?
In most of Asia, Latin America, Africa, and about half of Europe?
You’d be pretty stuck. I guess SMS might work, but it wouldn’t for most businesses (they use the WhatsApp business functionality, there is no SMS thing backing it).
Most people don't even use text anymore. China has its own apps, but everyone else uses WhatsApp exclusively at this point.
Brazilian judges have several times punished WhatsApp by blocking it in Brazil, and every time that happened, Telegram gained hundreds of thousands of new users.
Nobody uses WhatsApp Business in Germany, Austria, or Switzerland in a way that would leave you stuck without it.
The bubble may well burst when the corporations are denied the enormous quantity of energy that they claim they need "to innovate". From TFA:
"""
Mr Pichai said action was needed, including in the UK, to develop new sources of energy and scale up energy infrastructure.
"You don't want to constrain an economy based on energy, and I think that will have consequences," he said.
He also acknowledged that the intensive energy needs of its expanding AI venture meant there was slippage on the company's climate targets, but insisted Alphabet still had a target of achieving net zero by 2030 by investing in new energy technologies.
"""
"Slippage" in this context probably means, "We no longer care about climate change but we don't feel that mere citizens are ready to hear us say it."
They've got enough slush money to keep this going for a couple of years.
I am shocked by the part where they know it is a bubble and are doing nothing to amortize it. Which means they expect the government to step in and save their butts.
... Well, not that shocked.
They're floating 40 year bonds for technology with a three year lifecycle. They do not have the actual cash for this.
Yet another [1] AI winter [2]
[1] https://en.wikipedia.org/wiki/AI_winter#The_setbacks_of_1974
[2] https://en.wikipedia.org/wiki/AI_winter#AI_winter_of_the_199...
I've been trying to grok this idea of when a bubble pops. In theory, if everyone knows it's a bubble, that should cause it to pop, because people should be making their way to the exits, playing musical chairs to get their money out early.
But as I try to build a narrative of the ideas behind bubbles and bursts, one thing I realize is that for a bubble to burst, people essentially have to want it to burst (or, conversely, have to stop wanting to keep it going).
But consider that Bernie Madoff got caught because he couldn't keep paying dividends in his Ponzi scheme and people started withdrawing money. In theory, even if everyone knew, if no one withdrew their money (and no one told the SEC), and he was able to use current deposits to pay dividends for a few more years, the Ponzi scheme didn't _have_ to end; the bubble didn't have to pop.
So I've been wondering, like if everyone knows AI is a bubble, what has to happen to have it collapse? Like if a price is what people are willing to pay, in order for Tesla to collapse, people have to decide they no longer want to pay $400 for Tesla shares. If they keep paying $400 for tesla shares, then it will continue to be worth $400.
So I've been trying to think, in the simplest terms, about what would have to happen for the AI bubble to pop. Basically, as long as people perceive AI companies to have the biggest returns, and they don't want to move their money somewhere with higher returns (similar to TSLA bulls), the bubble won't pop.
And I guess that can keep happening as long as the economy keeps growing. And if circular deals are causing the stock market to keep rising, can they just go on like this forever?
The downside, of course, being the starvation of investment in other parts of the economy and giving up what may be better gains. It's game theory: as long as no one decides to stop playing the game and, say, pull all their money out and put it into, I dunno, bonds or GME, the music keeps playing?
You’re over complicating something that is very simple. The stock market reflects people’s sentiments: greed, excitement, FOMO, despair…
A bubble doesn’t need a grand catalyst to collapse. It only needs prices to slip below the level where investors collectively decide the downside risk outweighs the upside hope. Once that threshold is crossed, selling accelerates, confidence unravels, and the fall feeds on itself.
It's important to keep in mind the difference between the stock market and the economy.
Economically, AI is a bubble, and lots of startups whose current business model is "UI in front of the OpenAI API" are likely doomed. That's just economic reality - you can't run on investor money forever. Eventually you need actual revenue, and many of these companies aren't generating very much of it.
That being said, most of these companies aren't publicly traded right now, and their demise would currently be unlikely to significantly affect the stock market. Conversely, the publicly traded companies who are currently investing a lot in AI (Google, Apple, Microsoft, etc) aren't dependent on AI, and certainly wouldn't go out of business over it.
The problem with the dotcom bubble was that there were a lot of publicly traded companies that went bankrupt. This wiped out trillions of dollars in value from regular investors. Doesn't matter how much you may irrationally want a bubble to continue - you simply can't stay invested in a company that doesn't exist anymore.
On the other hand, the AI bubble bursting is probably going to cost private equity a lot of money, but not so much regular investors unless/until AI startups (startups dependent on AI for their core business model) start to go public in large numbers.
I think the targeted-ad revenue all of the LLM providers will get by using everyone's regular chat data plus credit card datasets for training is going to be insanely good.
Plus the information they can provide to the State on the sentiment of users is also going to be greatly valued
Didn't Perplexity make only like $27K from ad revenue? They're going to have to actively compete for Google and Facebook ad dollars as Google and Facebook develop competing products.
Eventually, money to invest will run out. If the companies' earnings don't catch up, we'll reach a point where stock prices peak, expected future returns are limited, and then it'll pop when there's a better opportunity for the money.
Imagine if interest rates go up and you can get 5% from a savings account. One big player pulls out cash, triggering a minor drop in AI stocks. Panic selling follows as everyone tries not to be the last one out the door, margin calls hit, etc.
You're assuming cash will never stop flowing in driving up prices. It will. The only way it goes on forever is if the companies end up being wildly profitable
> when does a bubble pop
This one? When China commits to subsidising and releasing cutting-edge open-source models. What BYD did to Tesla's FSD fee dreams, Beijing could do to American AI's export ambitions.
More like: not if, but when it happens, and how big the pop will be.
It's also possible it'll be more of a deflation than a pop.
That's what I'm personally hoping for anyway, would rather the economy avoid a big recession.
I'm not. A few podcasts I've listened to recently (mostly Odd Lots) explored how a pop is often preferable to a protracted downturn because it weeds out the losers quickly and allows the economy to begin the recovery aspect sooner. A protracted downturn risks poorly managed assets limping along for years instead of having capital reallocated to better investments.
Better to rip the bandaid off and begin anew.
Without AI bubble the economy is already mostly in a recession.
Which is kind of sad to think about. The US could have invested all that money in its infrastructure, schools, hospitals, and the general wellbeing of its workforce to make the economy thrive.
It's not "the US" who's investing the money. This is the same problem people run into when they say, "we should just put money into more trains and buses rather than self driving cars".
Private actors are the ones who are investing into AI, and there's no real way for them to invest into public infrastructure, or to eventually profit from it, the way investors reasonably expect to do when they put up their money for something.
It's the government who can choose to invest into infrastructure, and it's us voters who can choose to vote for politicians who will make that choice. But we haven't done that. So many people want to complain endlessly about government and corporations -- not entirely without merit, of course -- but then are quick to let voters off the hook.
Without the AI bubble, most of that money would probably still flow to some other sector of the economy. It wouldn't just disappear.
It'll be fine. When the banks went bust in 2008, they were gifted $7 trillion to make up the shortfall and life went on for the rich.
This time they'll be gifted $70 trillion to make up for the shortfall, and life shall continue on for the rich.
It's win-win for them, there's no risk at all
I think the economic background has changed, in 2008 it was after a big run up in wealth so the reversion wasn’t so bad, there was some fat to cut. Since then people have been ground down to the breaking point, another 2008 wipeout will cut into the bone. I do think this time it could be different.
The money system breaking is one thing, companies that build chatbots going to the wall is another.
privatize profit, socialize risk. same as it always was
It's called "AI Winter" because it's a cycle that repeats. Just like the seasons.
See y'all in the spring!
Sort of? My thoughts are that there's something of an AI arms race and the US doesn't want to lose that race to another country... so if the AI bubble pops too fiercely, there will likely be some form of intervention. And any time the government intervenes, all bets are off. Who knows what they will do and what the impact will be.
I can see them intervening to preserve AI R&D of some sort, but many of the current companies are running consumer oriented products. Why care if some AI art generation website goes bust?
It feels as if every CEO used ChatGPT once and said “wow this is incredible, pivot hard to make our product use AI”, and that’s about all the thought has matured to.
Most bubbles occur due to excessive levels of credit offered too cheaply, resulting in a whole bunch of defaults happening at the same time. All the major AI players have borrowed money to buy GPUs and build data centers and have used Special Purpose Vehicles to do it so the debt doesn't fall on their own balance sheet, probably using a certain amount of stock as collateral. If the SPV defaults, could that trigger a big sell-off?
The question is, a sell off for who?
If they’ve securitized and sold their data center buildout, will the big clouds and AI labs actually face any severe impact? While the sums are huge, most of these companies have the cash on hand to pay down the debt. The big AI labs have said their models earn enough to cover their own training costs, just not the next model's. This means they could walk away from the training compute spend at any time.
With the heavy securitization of all these deals, will the “bubble pop” just hurt the financial industry?
If a company like CoreWeave sees their SPV for a Microsoft-specific data center go bankrupt, that means MSFT decided to walk away from the deal. Red flag for the industry, but also a sign of fiscal restraint. Someone else can swoop in and buy the DC for cheap, while MSFT avoids the Opex hit. Seems like the losers will be whoever bought that SPV debt, which probably isn’t a tech company.
It’s an insurance company so basically pensions.
Right, insurance companies are the new "financial dark matter". The next financial crisis will probably be triggered when a few large life and property insurers fail because they purchased debt assets which were highly rated but turn out to be junk. (Medical and auto insurers aren't exposed here because they operate on much shorter timeframes.)
It's more disturbing than you think:
America’s Retirement Savings in Bermuda entities that lose US protections while making opaque, complex bets: https://www.bloomberg.com/graphics/2025-america-insurance-pa... https://archive.ph/lhZv9
> It’s an insurance company
What is?
It feels like AI investment and product focus is now a religion or cult. You have to be so fully invested, blind from any data, and throwing billions of dollars at it, otherwise you’re not “in”.
Meanwhile, no one in my sphere of tech and non-tech people wants “more AI”; they see the pros and cons of ChatGPT and use it as a kind of fancy Google search.
Where’s the “killer app” that’ll generate literally trillions in revenue to offset the costs? How do the economics work, especially when GPUs are depreciating?
I think it will pop but not in the way everyone thinks it will pop. There's plenty that's not going to go away / anywhere, but I'm sure lots of startups will fail and close their doors.
What way do you envision it popping? Nvidia has tons of investments on their books in smaller companies. If a couple of them start showing poor earnings, it could cause a death spiral for NVDA because 1) their investment just tanked, and 2) those companies are no longer buying chips from them therefore reducing revenue.
Nvidia also makes up ~7% of the S&P 500 so if their stock price falls substantially, that's a big chunk of capital just... gone for a lot of people.
Is it really a bubble about to burst when literally everyone is talking about AI being in a bubble and maybe bursting soon?
To me, we're clearly not peak AI exuberance. AI agents are just getting started and getting so freaking good. Just the other day, I used Vercel's v0 to build a small business website for a relative in 10 minutes. It looked fantastic and very mobile friendly. I fed the website to ChatGPT5.1 and asked it to improve the marketing text. I fed those improvements back to v0. Finished in 15 minutes. Would have taken me at least one week in the past to do a design, code it, test it for desktop/mobile, write the copy.
The way AI has disrupted software building in 3 short years is astonishing. Yes, code is uniquely great for LLM training due to open source code and documentation but as other industries catch up on LLM training, they will change profoundly too.
It's not that the AI models or products don't work.
It's how much money is being poured into it, how much of the money that is just changing hands between the big players, the revenue, and the valuations.
Well, do you have a model for this? Or are you just regurgitating the mass media line that it's a bubble?
If hyperscalers keep buying GPUs and Chinese companies keep saying they don't have enough GPUs, especially advanced ones, why should we believe someone that it's a bubble based on "feel"?
> why should we believe someone that it's a bubble based on "feel"?
because leaders in the space also keep saying it? And then making financial moves that us plebs can't even dream of, which back that up?
This whole "the media keeps reporting it" as a point against the credibility of something is utterly silly and illogical.
The same people who are saying it are also the same people increasing capex?
> because leaders in the space also keep saying it?
They have a lot of reasons for saying that, including to give themselves cover in the event of a crash.
What’s happening now is a classic land grab. You’re always going to get inflated prices in that situation, and it’s always going to correct at some point. It’s difficult to predict when, though.
The economics of it is what's the problem, not the power of LLMs.
So tell us the economics of it?
The vast majority of AI doomers in the mass media have never used tools like v0 or Cursor. How would they know that AI is overvalued?
Are the companies making a profit on every prompt I give it (factoring in the cost of upgrading GPUs every year or two)?
According to OpenAI, yes. They're "very profitable" on inference. It is the training and capex (driven by competition) that is driving loss.
https://www.wheresyoured.at/oai_docs/ offers a different point of view on OAI's profitability on inference.
This is a very biased example. It is also possible only because the tools you've used are heavily subsidised by investors' money right now. A LOT of it. Nobody questions the utility of what you just mentioned, but nobody stops to ask whether this would be viable if you had to pay the actual cost of these models, nor what it means for 99.9% of all the other jobs that AI companies claim can be automated but in reality are not even close to being displaced by their technology.
Why is it biased?
So what if it's subsidized and companies are in a market-share grab? Is it going to cost $40 instead of the $20 I paid? Big deal. It still beats the hell out of the $2k - $3k it would have cost before, and the weeks of waiting time.
100x cheaper, 1000x faster delivery. Furthermore, v0 and ChatGPT together surely did much better than the average web designer and copywriter.
Lastly, OpenAI has already stated a few times that they are "very profitable" on inference. There was an analysis posted on HN showing that inference for open-source models like Deepseek is also profitable on a per-token basis.
LLMs are particularly good at web development (granted, that's a big market), probably because a lot of the training material is exactly that.
We don't know what AI should cost but if you look at the numbers then 2x more expensive is much too low.
Think about the pricing. OpenAI fixed everyone's prices at free and/or roughly the cost of a Netflix subscription, which in turn was (originally) pinned to the cost of a cable TV subscription. These prices were made up to sound good to his friends; they weren't chosen based on sane business modelling.
Then everyone had to follow. So Anthropic launched Claude Code at the same price point, before realizing that was deadly and overnight the price went up by an order of magnitude. From $20 to $200/month, and even that doesn't seem to be enough.
If the numbers leaked to Ed Zitron are true, then they aren't profitable on inference. But even if they were, so what? It's a meaningless statement, just another way of saying they're still under-pricing their models. Inference and model licensing are their only revenue streams! That has to cover everything: training, staff costs, data licensing, lawsuits, support, office costs, etc.
Maybe OpenAI can launch an ad network soon. That's their only hope of salvation but it's risky because if they botch it users might just migrate to Grok or Gemini or Claude.
I’ve wondered if it makes sense to buy Intel along with Cerebras in order to use Intel's newest nodes, while still under development, to fab the Cerebras wafer-scale inference chips, which are more tolerant of defects. Overall that seems like the cheapest way to perform inference - if you have $100B.
If it's subsidized, it's a problem, because we're not talking about Uber trying to disrupt a clearly flawed system of transportation. We're talking about companies whose entire promise is an industrial revolution of a scale we've never seen before. That is the level of the bet.

The claim that they did much better than the average professional is also your own take, presented as self-evident. And your example has fundamentally no value: it's a marginal use case that doesn't scale. Personal websites are quicker to make because you can accept whatever the AI spews your way; you have basically infinite flexibility, and the only constraints are "getting it done" and "looking ok/good". That is not how larger businesses work, at all. So there is a massive issue of scalability here.

Finally, OpenAI "states" a lot of things, and a lot of them have been proven to be flat-out lies, because the company is led by a man who has been proven to be a pathological narcissistic liar many times over. Yet you keep drinking the kool-aid, including about inference. There are, by the way, reports that, data in hand, argue quite convincingly that "being profitable on inference" is math gymnastics, not the actual financial reality of OpenAI.
The vast majority of highly valuable tech companies in the last 35 years have subsidized their products or services in the beginning as they grew. Why should OpenAI be any different? In particular the tokenomics is already profitable.
I think you are missing the fundamental point here. The question is not really if AI has some value. That much is obvious and the exemple you give, increasing developer productivity, is a good one.
The question is: is the value generated by AI aligned with the market projected value as currently priced in AI companies valuation? That's what's more difficult to assess.
The gap between fundamental financial data and valuations is very large. The risk is a brutal reassessment of these prices. That's what people call a bubble bursting and it doesn't mean the underlying technology has no value. The internet bubble burst yet the internet is probably the most significant innovation of the past twenty years.
Well, it all started with the usual SV-style "growth hacking" (price dumping as a SaaS) of "gather users now, monetize later" by OpenAI - which only works if you attain a virtual monopoly (dominance over a segment of the market, with the competition not really competing with you).
The problem is that no one attained that position, price expectations are now set, and the wishful thinking about reducing the cost of running the models by orders of magnitude hasn't been fruitful.
Is AI useful? of course.
Are the real costs of it justified? in most cases no.
There's also a lot of debate over how long a GPU lasts. If a GPU loses most of its value after two years because a newer model is much better/cheaper, that destroys the economics of the companies that have spent billions on now-obsolete hardware.
That said, I don't think the bubble is done growing nor do I think it is about to burst.
I personally think we are in 1995 of the dotcom bubble equivalent. When it bursts, it will still be much bigger than in November 2025.
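The GPU-lifespan point above is plain straight-line depreciation arithmetic, but the assumed useful life swings the annual expense line enormously. A toy sketch (the fleet cost and both lifespans are made-up figures, not any company's actual accounting):

```python
def annual_depreciation(cost: float, useful_life_years: float) -> float:
    """Straight-line depreciation: the same expense is booked every year."""
    return cost / useful_life_years

# A hypothetical $10B GPU fleet, depreciated over 6 years (the optimistic
# assumption) vs. 2 years (if newer chips really obsolete it that fast):
fleet_cost = 10e9
print(annual_depreciation(fleet_cost, 6))  # about $1.67B/year
print(annual_depreciation(fleet_cost, 2))  # $5B/year
```

Tripling the yearly expense like that is the difference between a data center pencilling out and not.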
Let him cook
> Is it really a bubble about to burst when literally everyone is talking about AI being in a bubble and maybe bursting soon?
Yes, it's even one of the necessary components. Everybody is twitchy, afraid of the pop, but the immediate returns are too tempting, so they keep their money in. The bubble pops when something happens and they all start panicking at the same time. They all need to be sufficiently stressed for that mass run to happen.
So after the bubble pops, do you think the AI market will still be bigger than it is in November 2025?
In other words, do you think we're in the 1995 of the dotcom era, or the 2000?
There's a weird gleefulness about AI being a bubble that'll pop any day now, both in this thread and in the world at large. I find most of the predictions about what happens when it pops highly exaggerated.
I think it’s because a lot of people feel like AI is being pushed from the top a lot with all kinds of spectacular claims / expectations, while it’s actually still difficult to apply to a lot of problems.
The AI bubble bursting would be somewhat of an “I told you so!” moment for a lot of people.
And there’s also a large group that’s genuinely afraid to lose their job because of it, so for that group it’s very much understandable.
Whether or not the predictions of the bubble popping are exaggerated or not, I cannot tell; it feels like most companies investing in AI know that the expectations are huge and potentially unrealistic (AGI/ASI in the next few years), but they have money to burn and the cost of missing out on this opportunity would be tremendous. I know that this is the position that both Meta and Google shared with their investors.
So it all seems to be a very well calculated risk.
I do agree that there seems to be a bubble, imo it's largely in the application space with the likes of cursor being valued at $23B+, but I don't see GPU purchases going down anytime soon and I don't even see usage going down. If these overhyped apps fail then it seems like something else will take their place. The power that LLMs provide is just too promising. It seems like those predicting things like a global economic crisis or the new big model-providers like OpenAI going to 0 just think that AI is like NFTs with no real intrinsic value.
> I don’t even see usage going down
Depends on the price to use these cloud LLMs.
>There's a weird gleefulness about AI being a bubble that'll pop any day now both in this thread and in the world at large
All i want is some cheap RAM for my PC.
I'm curious what HN is doing with their portfolios right about now. I'd be dumping NVDA and reallocating to more bonds for the time being.
Broad, global diversification with a long term time horizon. So I'm doing exactly nothing
I'm not able to predict what the overall market is going to do short or medium term
What makes you think your guess is better than the rest of the money in the market, most of it acting with better information than you?
Currently? Wishing there was an S&P 500 that banned tech stocks.
The SPXT [0]/XMAG [1] ETFs are exactly what you're looking for.
[0] https://www.proshares.com/our-etfs/strategic/spxt (S&P minus tech stocks)
[1] https://www.defianceetfs.com/xmag/ (S&P minus "Magnificent 7")
Am I misreading, or does SPXT still hold >20% GOOG, TSLA, META, AMZN?
"Information technology" apparently just means Microsoft and Apple.
Welp, time to see if my 401k provider supports them.
There are equal weight S&P ETFs, which avoid having a handful of stock dominating. However, they do have to do a lot more rebalancing to keep things in line.
Same thing I've been doing for 15 years - VTI and chill.
This sub is the most important question in the thread. Where do you put your money to hedge against an AI market crash?
Do you live in a home you own with no mortgage? Do you have a fully electrified home, only EVs, and enough solar to run those things? You can make real concrete capital investments instead of abstract financial ones to reduce your required living/"operating" expenses, insulating you somewhat from the state of financial markets.
This doesn't seem to work very well in economies where housing isn't constantly appreciating like crazy (I'm not from the US)...
It does if you're paying for that housing (either rent or mortgage payments). People invest into stocks while simultaneously holding a liability (e.g. they need to somehow come up with payments to continue living somewhere, or to continue having heating/cooling/lights). If you think all of the financial investments available to you might crash, and your source of income may evaporate in a correlated event, you can instead put all of your money into minimizing your liabilities. The goal is not to see your home value increase--you're not trying to sell it. It's to secure everything you need to have the standard of living you want by owning those things.
Same thing as always. Stick with your plan and rebalance if you need to. If your plan is 80% stock 20% bond (or whatever ratios), and the increased stock prices are putting you significantly out of balance, then sell your stock funds and buy bonds to put it back to where it should be. If the crash happens, sell your now too-high bonds and buy stocks. Or just buy into one of those funds that does all this for you, or hire a fiduciary financial advisor to do it for you.
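The rebalancing described above is simple arithmetic; a minimal sketch (the 80/20 split and dollar amounts are just the example figures from the comment):

```python
def rebalance(holdings, targets):
    """Dollar trades needed to restore target weights.

    holdings: asset -> current market value
    targets:  asset -> target weight (should sum to 1.0)
    Returns asset -> amount to buy (positive) or sell (negative).
    """
    total = sum(holdings.values())
    return {a: targets[a] * total - holdings[a] for a in holdings}

# A stock run-up has pushed an 80/20 plan out to 88/12:
trades = rebalance({"stocks": 88_000, "bonds": 12_000},
                   {"stocks": 0.80, "bonds": 0.20})
print(trades)  # sell $8,000 of stocks, buy $8,000 of bonds
```

The same function handles the post-crash case: bonds end up over-weight, so it tells you to sell bonds and buy stocks.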
Land/housing/property, as directly as possible and reasonable. Just make sure you don't do it in an overheated place like New York.
> Just make sure you don't do it in an overheated place like New York
If the stock market crashes, New York property probably sings. Stock market crash means ZIRP. And ZIRP means lots of money sloshing through New York.
> lots of money sloshing through New York.
That's kinda the problem, I'd expect it to be a bit… volatile. I guess it's a valid target to gamble on if that matches your risk profile.
> I'd expect it to be a bit… volatile
Technically yes, but only because something monotonically increasing in price is volatile.
In a crash everything gets positively correlated for a while. You can go to cash temporarily but of course no one can consistently time the market.
With even the SP500 being super concentrated in AI-exposed companies, probably a combination of bonds and foreign equities. But hedging does mean being OK with watching any (perceived or real) bubble madness continue. I wanted to put all my wealth into Apple circa 2005, but chose not to because blah blah blah diversification. Obviously I wish I did, but I'm ok with the perfectly sensible decision I made - and I'd be retired many times over had I done it.
Personally speaking, as somebody who was 100% in equities until earlier this year (I'm in my early 40s and had most of my wealth in VOO), I shifted to a 60-40 portfolio - there are ETFs that maintain the balance for you. I did this knowing full well that it could attenuate my upside, but I figured it's worth it compared to being so concentrated in a single part of an industry (AI within tech), and so much of the upside had already been captured by that point. I also figured a 2nd Trump term wasn't going to temper volatility. On top of that, my income is tied to tech, so diversifying further away from it is sensible (especially given the equity parts of my compensation).
But if you're in your 20s, your nest egg is likely small enough that I'd just continue plugging away in automatic contributions. Investing at all is far more important than anything else at that stage.
AAPL is chronically undervalued, and at the same time derives no revenue from generative AI.
It's more complex than that, a summary of my highly subjective understanding:
1. AI companies manage to build AGI and achieve takeoff. I have no idea how to hedge against that.
2. The market is not allowed to crash. There will likely be some lag between economic weakness and money printing. Safer option is probably to buy split 50% SPY and 50% bonds. A riskier option is trying to time the market.
3. The market is allowed to crash. Bonds, cash, etc.
Depending on what you believe will happen and risk appetite you can blend between the strategies or add a short component. I am holding #2 with no short positions in post-tax accounts and full SPY in tax advantaged accounts.
Commodities.
Cash or bonds I assume
US government bonds
You can't hedge against a whole market. And you can't time bubble pop events anyway. You can dump NVDA today, sure, because it's overvalued at $180. And most of us agree. But that won't prevent it from going to $300 before it pops (which is totally reasonable too!), so dumping it today might hurt as much as it helps. Run-ups at the end of a speculative bubble are by definition irrational and produce in-hindsight-ridiculous numbers.
If you're young and invested for the long term, just leave all your junk in broad index securities. You can't do better than that, you just have to ride the bumps.
On the other hand, I'm approaching retirement and looking seriously at when to pull the trigger. The aggregate downside to me of a large market drop or whatever is much higher than it is to a 20-something, because losing out on (to make a number up) an extra 30% of net worth is minor when compared to "now you have to work another three years before retiring" (or alternate framings like "you have to retire in Houston and not Miami", etc...).
So most of my assets are moving out of volatiles entirely.
into? bonds?
A few bond funds, but frankly just a lot of money market cash in the short term. Most of our guts say that the crash is imminent, and if it is, the extra fees and hassle won't be worth it.
There's a really funny thing going on right now - everyone is forecasting an AI bubble pop. It feels like every single human is saying it, from the heads of tech companies (in comments veiled for bankers) to everyone on the street.
It reminds me of the time that everyone said the economy was going to tank and somehow everyone had it wrong a couple years ago.
It feels implausible that it isn't overbuilt, but it also feels really strange for everyone to be pushing the narrative that it's a bubble - and for people to be taking very public short bets. It feels like the contrarian bet is that it's going to keep running hot. Nvidia earnings tomorrow are a big litmus test.
If it was the people actually investing in AI all saying it's a bubble, implying that they are holding back, not all-in, for fear of it crashing, then it'd have room to run further (until they were all-in, and leveraged to the eyeballs, cf subprime housing crash liar loans, dot-com crash investor margin accounts).
However, it seems more like the people pumping billions into AI are all still "this is going to the moon" gung-ho, and unless they are investing billions of CASH, then I guess they are borrowing to do so.
I don't know how this financing works - maybe no fear of having it pulled like a foreclosure on a subprime mortgage holder, or a broker margin call, but it's not going to end well if these investments start to fail and the investors start running for the door.
Peter Thiel's recent exit from NVidia should be a bit concerning given his good record on macro bets and timing.
Why trade individual stocks anyway? Cost-averaging into ETFs is a proven way to build wealth. The S&P goes down 20%, you average down, it recovers, and you get another 2-3 years of growth. This goes on until civilization collapses.
If you buy ETFs, you basically hold some stocks you don't want.
For example, stock from war profiteering companies (lockheed, raytheon).
Note that investing in war profiteers is a proven way to build wealth. I just don't want to do that.
This argument applies not only to evil companies but also to dumb ones. For example, I have no interest in investing in IBM or Oracle, even though both of those are also money makers.
That only works if you wear horse blinders and subconsciously ignore or make up excuses for evil by association. There is absolutely no way to invest anything ethically.
You could buy ETFs and then short the stocks you don't like more, I suppose.
Ok?
Buying index funds (either mutual funds or ETFs) has been an effective approach for retail investors. But the concern now is that some US stock index funds are so heavily weighted to the "Magnificent 7" stocks that much of the previous benefit of diversification has been lost. The Mag 7 are all highly correlated with each other so if one falls then usually the others do as well.
https://www.fidelity.com/learning-center/smart-money/magnifi...
There are other index funds which are equal weighted rather than market weighted. Those have underperformed lately but might be less volatile if the AI bubble pops.
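The concentration effect is easy to see with toy numbers (the tickers and market caps below are entirely hypothetical): in a cap-weighted index, one mega-cap's drawdown dominates the index return, while equal weighting dilutes it.

```python
def index_return(weights, returns):
    """Index return as the weighted sum of constituent returns."""
    return sum(weights[t] * returns[t] for t in weights)

# Hypothetical market caps: two mega-caps dwarf the rest.
caps = {"MEGA1": 4000, "MEGA2": 3500, "SM1": 250, "SM2": 250}
total = sum(caps.values())
cap_weighted = {t: c / total for t, c in caps.items()}     # MEGA1 alone is 50%
equal_weighted = {t: 1 / len(caps) for t in caps}          # each is 25%

# MEGA1 falls 20%; everything else is flat.
returns = {"MEGA1": -0.20, "MEGA2": 0.0, "SM1": 0.0, "SM2": 0.0}
print(index_return(cap_weighted, returns))    # -10% for the cap-weighted index
print(index_return(equal_weighted, returns))  # -5% for the equal-weighted one
```

The correlation point compounds this: if the mega-caps all fall together, the cap-weighted loss scales with their combined weight, not just one name's.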
I'm in a global tracker, and the sheer amount in these big stocks is scary.
Every cent poured into this boom is building Google’s future competitors.
Of course he’s nervous - what else would you expect him to say?
That language feels like it came from a fever dream of Alan Greenspan.
Assuming this is irrational and must come back to reality at some point, I'm not convinced it is as connected to the common man's economy as past bubbles were. This round of investment is mostly funded by the excess cash flow a small number of companies accumulated over the years (otherwise used for stock buybacks), plus some private credit deals that aren't that accessible to the general public. This looks more like a crypto-crash kind of effect than a 2008 one.
However 'normal people' buying into the stock market via 401ks or otherwise likely (and arguably sensibly) will be in index funds, that of course are exposed to the bubble via (grossly?) inflated tech stocks. Effectively their current pension/savings contributions are being clipped by whatever the delta is between now and post bubble. Time in the market and all that, but still it might be a hefty haircut.
will gamers be eatin' good?
i hope so, it's been a while
Obviously, at least in the US the AI bubble is the only thing keeping the economy afloat. If it wasn't for the bubble the US would be in a recession.
Not sure how the situation is in Europe and Asia, but I would guess about the same.
Is there any significant AI investment outside the US?
(I know of a few companies, but they’re tiny tiny minnows compared to the big AI companies listed in the US).
Tons of companies survived the dot com bubble pop. Corrections are when the market does some sorting.
The strongest few did. The likes of pets.com did not.
Wow, this is such a unique and beautiful insight literally nobody has ever heard before. Good job!
10x more useful than your retort.
Sounds like "We're too big to fail. If we go down, everyone goes down. It is your choice."
But unlike 08 crisis, we're getting a heads up to bring out the lube.
I hope it goes down. Its really not a very powerful threat.
isn't that basically the definition of a bubble, a boom with elements of irrationality?
Very cool and healthy for the CEO of a company investing massive amounts into a given technology to casually refer to it as a "bubble" at the same time. I guess he softens the statement a bit by calling it "an AI bubble" instead of the "the AI bubble", but it's still interesting to see everyone involved in this economic mess start to acknowledge it.
Unironically agreed that it's good for a CEO to remain relatively level headed and clear eyed.
The comparison made to the dotcom bubble is apt. It was a bubble, but that didn't mean that all the internet and e-commerce ideas were wrong, it was more a matter of investing too much too early. When the AI bubble pops or deflates, progress on AI models will continue on.
People predicting a crash and pulling back due to a bubble indicates the beginning of the bubble.
If I were the gambling man, I would grab my PETS.AI stock and sit tight.
I'm inclined to agree.
I don't think we're anywhere near the 2000 peak.
At least let OpenAI IPO, and then a few of the LLM wrapper companies to IPO with a ton of hype first.
We're calling the top without an OpenAI IPO first? That seems crazy to me.
About twice a day I think about how I am basically reinforcement learning my replacement. Thank goodness I am 53.
Sounds like something an extortionist would say in a movie. “We’re all dirty here!”
Hasn't it been pretty widely acknowledged that AI funding has created a whirlpool of money cycling between a few players - cloud/datacenter hosts and operators (Oracle), GPUs (Nvidia), and model operators (OpenAI)?
To pile on, there's hardly a product being developed that doesn't integrate "ai" in some way. I was trying to figure out why my brand new laptop was running slowly, and (among other things) noticed 3 different services running- microsoft copilot, microsoft 365 copilot (not the same as the first, naturally) and the laptop manufacturer's "chat" service. That same day, I had no fewer than 5 other programs all begging me to try their AI integrations.
Job boards for startups are all filled with "using AI" fluff because that's the only thing investors seem to want to put money into.
We really are all dirty here.
They really are shameless aren't they?
Makes one think that this was the plan all along. I think they saw how SVB went down and realize that if they're reckless and irresponsible at a big enough scale they can get the government to just transfer money to them. It's almost like this is their new business model "we're selling exposure to the $XX trillion dollar bailout industry."
I don't think it's very difficult to imagine that the usgov is trying to put pressure on industry to make "number go up". Given the general competency level in usgov these days, I also wouldn't be particularly surprised if nobody knew or cared about whether the "up" of the number was real or meaningful, or whether there would be consequences.
Current admin really, really wants the number going up, and is also incapable of considering or is ignorant to any notion of consequence for any actions of any kind.
This is the thing that worries me the most. The market is past due for a market correction. Yet this government is willing to burn down everything for short term gains.
> They really are shameless aren't they? Makes one think that this was the plan all along.
Not really. Sundar is still pretty bullish on GenAI, just not the investor excitement around it (bubble).
Yeah the profundity of the slop churned out by Sora is really something to behold. Veritably the pinnacle of millennia of human art and creativity.
Yeah, exactly: it is an implied threat: "If I go, I am taking you down with me"
The context makes it clear that it's not any sort of implied threat. Pichai made his statement in response to an interview question about whether Google might be so well positioned that they're immune to the impact of an AI bubble. (But I don't blame you for being misled - like most headlines these days, this would have been intensely optimized for virality over accuracy, and making tech CEOs sound like supervillains is great for virality.)
I mean, this commonly happens in businesses/economies. Businesses that are dirty can make more money, at least temporarily, out-competing those around them. If they play it right, they can drive their good competitors out of business or buy them up. More to the point, the crash at the end of a bubble is just as likely to drive the good businesses out as the bad.
"But everybody's doing it!"
I'm okay with being victim to RAM and NVME prices returning to pre-skyrocket levels.
And GPUs being available again for normal prices.
Now they are talking like the greedy Wall Street bankers back in the subprime crisis.
Also it seems like those greedy Wall Street bankers might have another subprime crisis on their hands at the same time... I wonder which one will be saved again...
And there we have the reason for all of these interdependent deals between all these firms: they're all hedging with each other so they can keep this set of plates spinning.
They can't, not forever. Bubbles pop.
How is what they’re doing a ‘hedge’? It seems more like alternative financing, or keiretsu.
Keiretsu are ways to hedge against loss, you form interlocking relationships that spread both risk and success around. In this case no one is sending actual money, they're sharing obligations with each other.
Most of all of Big Tech, especially Google are doing just fine, making $100B a quarter.
Startups and other unprofitable companies however...
After COVID they were still making a killing, but axed 12k people anyway. So if someone starts doing layoffs and the market reacts well, profitable companies will do layoffs as well.
You do have to question the usefulness of Big Tech burning all that profit to the ground in order to buy GPUs though.
Every CEO is reading from the same script right now. It might be a bubble but it’s just like the internet, it’s still going to be relevant and it’s just the crap companies and grifters who will die.
I wonder who’s writing the script.
It’s not a matter of if, it’s a matter of when.
...says the man guilty of said irrationality.
if only there were some way for THE ENTIRE MARKET to not have quite so much exposure to hype bubbles
In other words - "We will be firing many of you when the bubble bursts."
Silicon Valley! Unprofitable debt and marketing all the way up until you get bailed out by the taxpayers, apparently.
I'm not seeing that happening. Unlike banking and housing there's not much systemic or political risk in letting these companies crash. It's mostly going to hit a very small number of high net worth people who don't have a lot of clout and are oddly disconnected from the rest of the economy.
Virtually everyone's 401k is overexposed to these companies due to their insane market caps and the hype around them. If they go every S&P 500 and total US market ETF goes with them, right as the Boomers start retiring en-masse.
Even Vanguard's Total World Index, VT, is roughly 15% MAG 7.
That's not even getting into who's financing whom for what, and to whom that debt may be sold.
This is incorrect. A lot of these companies are raising debt to pay for these datacenter build-outs. And that debt has already been sold to pension funds. The risk has already been spread. See Blue Owl Capital and how Meta is financing its Hyperion datacenter. They raised 30 billion in debt. Main Street is already exposed, as those bonds are in funds offered by the usual players: BlackRock, Invesco, Pimco, etc.
I think AI is a bubble, but I don't think we're close to the peak yet.
Ever literally blow bubbles? You never really know how big each one will be.
My biggest worry is that what will be left standing after all of this is the organizations that are quietly churning out all the AI slop everywhere, be it the normal web or YouTube.
It’s not a bubble until it bursts
Gave our kids a bath last night, can confirm, a bubble is a bubble even before it pops.
Except, yes, they will.
Not immune, maybe, but pretty well off if they didn't buy in.