Who profits from AI? Not OpenAI, says think tank
Findings from a new study by Epoch AI, a non-profit research institute, appear to poke major holes in the notion that AI firms, and specifically OpenAI, will eventually become profitable.
The research paper, written by Jaime Sevilla, Hannah Petrovic, and Anson Ho, suggests that while running an AI model may generate enough revenue to cover its own R&D costs, any profit is outweighed by the cost of developing the next big model. So, it said, “despite making money on each model, companies can lose money each year.”
The paper seeks to answer three questions: How profitable is running AI models? Are models profitable over their lifecycle? Will AI models become profitable?
To answer the first question, the researchers built a case study around what they called the GPT-5 bundle: all of OpenAI’s offerings available during GPT-5’s lifetime as the flagship model, including GPT-5 and GPT-5.1, GPT-4o, ChatGPT, and the API. They then estimated the revenue from, and the costs of, running that bundle. The numbers were drawn from sources including claims by OpenAI and its staff, and reporting by media outlets, primarily The Information, CNBC, and the Wall Street Journal.
The revenue estimate, they said, “is relatively straightforward”. Since the bundle included all of OpenAI’s models, it was the company’s total revenue over GPT-5’s lifetime from August to December last year: $6.1 billion.
And, they pointed out, “at first glance, $6.1 billion sounds healthy, until you juxtapose it with the costs of running the GPT-5 bundle.” These costs come from four main sources, the report said, the first of which is inference compute at a cost of $3.2 billion. That number is based on public estimates of OpenAI’s total inference compute spend in 2025, and assumes that the allocation of compute during GPT-5’s tenure was proportional to the fraction of the year’s revenue generated in that period.
The other costs are staff compensation ($1.2 billion), sales and marketing ($2.2 billion), and legal, office, and administrative costs ($0.2 billion).
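Taken at face value, those line items can be tallied against the period’s revenue. The sketch below simply sums the four cost figures quoted above and compares the total with the $6.1 billion revenue estimate; the numbers are the article’s rounded estimates, not OpenAI’s own accounting.

    # Rough tally of the reported GPT-5-bundle running costs (August-December),
    # using the rounded figures cited above. These are outside estimates,
    # not OpenAI's own accounting.
    costs_billions = {
        "inference compute": 3.2,
        "staff compensation": 1.2,
        "sales and marketing": 2.2,
        "legal, office, and administrative": 0.2,
    }
    revenue_billions = 6.1

    total_costs = sum(costs_billions.values())            # roughly 6.8
    net_of_running_costs = revenue_billions - total_costs # roughly -0.7

    print(f"Total running costs: ${total_costs:.1f}B")
    print(f"Revenue minus all running costs: ${net_of_running_costs:.1f}B")

On these rounded figures, the running costs alone exceed the period’s revenue once everything beyond inference is counted, which is why the calculation the paper turns to next matters.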
It’s all in the calculation
As for options for calculating profit, the paper stated, “one option is to look at gross profits. This only counts the direct cost of running a model, which in this case is just the inference compute cost of $3.2 billion. Since the revenue was $6.1 billion, this leads to a profit of $2.9 billion, or gross profit margin of 48%, and in line with other estimates. This is lower than other software businesses, but high enough to eventually build a business on.”
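For readers who want to check the arithmetic, here is a minimal sketch of that gross-margin calculation, using only the two figures the paper counts at this step, the rounded revenue and inference-compute estimates quoted above:

    # Gross-margin calculation as described above: only the direct cost of
    # serving the models (inference compute) is counted against revenue.
    revenue = 6.1            # $ billions, GPT-5-bundle revenue, August-December
    inference_compute = 3.2  # $ billions, estimated inference spend in that window

    gross_profit = revenue - inference_compute    # roughly 2.9
    gross_margin = gross_profit / revenue          # roughly 0.48, i.e. 48%

    print(f"Gross profit: ${gross_profit:.1f}B, gross margin: {gross_margin:.0%}")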
In short, they stated, “running AI models is likely profitable in the sense of having decent gross margins.”
However, that’s not the full story.
The paper stated that if you accept the argument that only gross margins should be considered when assessing profitability, then “on those terms, it was profitable to run the GPT-5 bundle. But was it profitable enough to recoup the costs of developing it? In theory, yes — you just have to keep running them, and sooner or later you’ll earn enough revenue to recoup these costs. But in practice, models might have too short a lifetime to make enough revenue. For example, they could be outcompeted by products from rival labs, forcing them to be replaced.”
The crux, the authors stated, lies in comparing that gross profit, the nearly $3 billion, against the firm’s R&D costs: “To evaluate AI products, we need to look at both profit margins in inference as well as the time it takes for users to migrate to something better. In the case of the GPT-5 bundle, we find that it’s decidedly unprofitable over its full lifecycle, even from a gross margin perspective.”
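The lifecycle test the authors describe boils down to a single comparison. A minimal sketch follows; the function name is ours, and the development cost is left as a placeholder because the article does not quote the paper’s R&D estimate.

    # Lifecycle test as described above: did the gross profit earned while GPT-5
    # was the flagship exceed the cost of developing it?
    # The ~$2.9B gross profit comes from the article; the development cost is a
    # placeholder, since the paper's R&D estimate is not quoted here.
    def recoups_development(gross_profit_billions: float,
                            development_cost_billions: float) -> bool:
        return gross_profit_billions >= development_cost_billions

    # Example usage (development cost deliberately left symbolic):
    # recoups_development(2.9, development_cost_billions=<paper's R&D estimate>)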
As for the big question of whether AI models will become profitable, the paper stated, “the most crucial point is that these model lifecycle losses aren’t necessarily cause for alarm. AI models don’t need to be profitable today, as long as companies can convince investors that they will be in the future. That’s standard for fast-growing tech companies.”
The bottom line, said the trio of authors, is that profitability is very possible because “compute margins are falling, enterprise deals are stickier, and models can stay relevant longer than the GPT-5 cycle suggests.”
Asked whether the markets will stay irrational for long enough for OpenAI to become solvent, Jason Andersen, VP and principal analyst at Moor Insights & Strategy, said, “it’s possible, but there is no guarantee. I believe in 2026 you will see refinements in strategy from these firms. In my brain, there are three levers that OpenAI and other general-purpose AIs can use to improve their financial position (or at least slow the burn).”
The first, he said, is pacing, “and I think that is happening already. We saw major model drops at a slower pace last year. So, by slowing down a bit, they can reduce some of their costs or at the very least spread them out better. Frankly, customers need to catch up anyway, so they can plausibly slow down, so the market can catch up to what they already have.”
The second, said Andersen, is to diversify their offerings, and the third involves capturing revenue from other software vendors.
As to whether OpenAI and others can keep going long enough for AI to become truly effective, he said, “OpenAI and Anthropic have the best chance of going long and staying independent. But, that said, I also want to be cautious about what ‘truly effective’ means. If you mean truly effective means achieving AGI, it’s theoretical, so probably not without major breakthroughs in hardware and energy. But if ‘effective’ means reaching profitability over a period of years, then yes, those two have a shot.”
The trick on the road to profits, he said, “will be finding a way to compete and win against companies that have welded their future to AI. Notably, Google, Microsoft, and X have now made their models inextricable to their other products and offerings. So, is there enough time and diversification opportunities to compete with them? My guess is that a couple pure plays will do well and maybe even disrupt the market, but many others won’t make it.”
Describing the paper’s findings as “very linear” and based on short-term analysis, Scott Bickley, advisory fellow at Info-Tech Research Group, said that OpenAI has been “pretty open about the fact they are not profitable currently. What they pivot to is this staggering chart of how revenues are going to grow exponentially over the next three plus years, and that’s why they are trying to raise $200 billion now to build up infrastructure that’s going to support hundreds of billions of dollars of business a year.”
Many fortunes tied to OpenAI
He estimated that OpenAI’s overall financial commitments, as a result of agreements with Nvidia and hyperscalers as well as data center buildouts, now total $1.4 trillion, and said, “They’re trying to make themselves too big to fail, to buy the long runway they’re going to need for these investments to hopefully pay off over the course of years, or even decades.”
Right now, he said, the company is “shoring up the balance sheet. They’re trying to build everything they can to buy runway ahead. But either they wildly succeed beyond any of our imagination, and they come up with applications that I can’t envision are realistic today, or they fail miserably, and they’re guaranteed that everyone can buy a chunk of the empire for pennies on the dollar or something to that effect. But I think it’s either boom or bust. I don’t see a middle road.”
As it currently stands, said Bickley, all major vendors have “tied their fortunes to OpenAI, which is exactly what Sam Altman wanted to have happen. He’s going to force the biggest players in the space to help him be successful.”
In the event the company did end up failing, he predicted, the impact on companies that have bought into AI initiatives built on its technology would be minimal. “Regardless of what happens to the commercial entity of OpenAI, the intellectual property that’s been developed, the models that are there, are going to be there. They’ll fall under someone’s control and continue to be used. They’re not in any danger of not being available.”