> Likewise, a METR report found that AI coding tools, which are meant to be the most promising application for generative AI, actually slow developers down. Both studies cited the same issue, “hallucinations”.
> AI hallucinations are one of the best bits of PR ever. The term reframes critical errors to anthropomorphise the machine, as that is essentially what an AI hallucination is: the machine getting it significantly and repeatedly wrong. Both MIT and METR found that the effort and cost required to look for, identify, and rectify these errors was almost always significantly larger than the effort the AI reduced.
> In other words, for AI (specifically generative AI) to be even remotely useful in the real world and have a hope in hell of generating revenue by augmenting workers at scale, let alone replacing them like it has promised to, it needs to cut “hallucinations” down to basically zero.
As someone who uses Claude 4.5 in Cursor every workday, this rings extremely hollow. I find myself thinking daily, "I would never have had time to do this before."
Have an idea for a script? You don't have to lose a day building it. Wanna explore a feature? Make a worktree and let the agent go. It's fundamentally changed my workflow for the better and I don't wanna go back, hallucinations and all.
> Or, to put it another way, OpenAI’s 2025 revenue is on track to only be $3.1 billion more than last year, while its annual operational costs are set to be $24.1 billion more than last year. So, for every dollar of revenue growth OpenAI has, it is costing them $7.77!
>
> I cannot stress how unprecedentedly dreadful that is. It shows that the promised future investors were piling their money into is a fairy tale. This is a money black hole.
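The quoted ratio does follow from the quoted figures; here is a minimal arithmetic check, using only the two numbers given in the quote (in billions of USD):

```python
# Figures taken from the quoted passage, in billions of USD (2025 vs. 2024).
revenue_growth = 3.1   # projected increase in annual revenue
cost_growth = 24.1     # projected increase in annual operational costs

# Cost growth incurred per dollar of revenue growth.
cost_per_revenue_dollar = cost_growth / revenue_growth
print(f"${cost_per_revenue_dollar:.2f} of new cost per new revenue dollar")  # → $7.77
```

Whether those two projections are themselves accurate is exactly what later comments in this thread dispute.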
Isn't this the standard "startup blueprint" for tech companies? Uber, Airbnb, Amazon, etc.
More importantly, isn't AI dominance worth the losses, given the size of the reward?
When I read these sorts of articles I ask myself whether I would invest today if given the opportunity. Currently the answer is still yes.
They have barely even monetized users. I think it's possible the bubble pops and OpenAI still continues to win.
So much of this article is copium, pretending the world is not radically changing. Even if progress stops today, massive numbers of jobs are being, and will be, replaced. I wish it weren't true, but what I wish has no bearing on reality.
> But here is the thing: OpenAI’s revenue growth is slowing down dramatically. In 2023, they increased their revenue by 169% over 2022, and in 2024, they increased their revenue by 250% over 2023. In 2025, they are set to increase revenue by only 56% over 2024.
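Taking the quoted growth percentages at face value, converting them to year-over-year multiples makes the claimed slowdown concrete (a sketch using only the figures in the quote):

```python
# Year-over-year revenue growth, as percentages, from the quoted passage.
growth_pct = {2023: 169, 2024: 250, 2025: 56}

# A g% increase over the prior year is a (1 + g/100)x multiple.
for year, pct in growth_pct.items():
    multiple = 1 + pct / 100
    print(f"{year}: {multiple:.2f}x the prior year's revenue")
```

By these numbers, the multiple falls from 2.69x to 3.50x to 1.56x; the reply below disputes the 2025 figure itself.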
They are set to increase revenue by 3-4x in 2025.[0] So already, this article is based on false data.
Furthermore, losing $8b in the first half to buy GPUs isn't a big deal when you're growing 3-4x and there are investors lining up to give you money.
The rest of the article is mostly exaggerated AI doomer opinions that are often dispelled here in the HN comments. For example, the author cites the MIT AI report snippet that says 95% of companies are failing at agentic AI. But the actual report is far more positive on AI's impact in the workforce.[1]
These doomer articles always fail to grasp two things:
1. Major Silicon Valley companies have always lost a huge amount of money before becoming profitable. OpenAI is just the next one, at a bigger scale (because tech is far bigger in 2025 than before). Despite countless examples of tech companies losing a lot of money early on and becoming hugely profitable later, people still get hung up on the fact that OpenAI isn't profitable in 2025.
2. They always think that AI is as good as it gets now with little to no improvements coming. But we're still on an exponential curve.[2]
I don’t think Sam Altman believes (2) is true. In fact, it seems all the AI insiders are starting to signal that neither (1) nor (2) will happen. Why is it only outsiders saying this now?
Also, the article, like all the doomer articles I read, addresses your points directly.
The problem I see is that the entire US stock market has been rising only because of a small handful of tech companies, Nvidia being among them. If this bubble pops, and maybe "if" isn't the right word--when this bubble pops, it's going to hurt.
[0] https://finance.yahoo.com/news/openai-cfo-we-will-more-than-...
[1] https://mlq.ai/media/quarterly_decks/v0.1_State_of_AI_in_Bus...
[2] https://metr.org/blog/2025-03-19-measuring-ai-ability-to-com...