What is the business model for generative AI given what we know about the technology and the market today?
I’ve spent some time in my articles talking about the technical and resource limitations of generative AI, and it’s very interesting to see how these issues are becoming more apparent and relevant to the industry that has sprung up around this technology.
However, I think the natural next question is: what exactly is the business model for generative AI? What can we realistically expect, and what is just hype? What is the gap between the promise of this technology and its practical reality?
Is generative AI a feature or a product?
I’ve talked to a few people about this and heard it discussed quite a bit in the media. The difference between a technology being a feature and being a product is essentially whether it holds enough value on its own for people to purchase access to it by itself, or whether it really only demonstrates most or all of its value when combined with other technologies. We now see “AI” tacked on to many existing products, from text/code editors to search and browsers, and these applications are examples of “generative AI as a feature”. (I’m writing this very text in Notion, and it keeps prompting me to do something with AI.) On the other hand, we have Anthropic, OpenAI, and various other companies trying to sell products where generative AI is the central component, such as ChatGPT or Claude.
This distinction can start to get a little blurry, but the key factor, I think, is that for the “generative AI as a product” crowd, if the generative AI doesn’t meet the customer’s expectations, whatever those may be, then they will stop using the product, and their payments to the provider stop with it. On the other hand, if someone finds (understandably) that Google’s AI search summaries are rubbish, they can complain, disable them, and keep using Google search as before. The core business value proposition isn’t built around the AI; it’s just an additional potential selling point. This translates into far less risk for the business as a whole.
The way Apple has approached much of the generative AI space is a good example of conceptualizing generative AI as a feature rather than a product, and in my opinion their apparent strategy is the more promising one. At the most recent WWDC, Apple announced a partnership with OpenAI that will let Apple users access ChatGPT through Siri. There are a few key components to this. First, Apple isn’t paying OpenAI anything to create this connection: Apple brings access to its highly economically attractive user base, and OpenAI gets the chance to convert those users into paying ChatGPT subscribers, if it can. Apple takes on no risk in the relationship. Second, nothing stops Apple from making other generative AI offerings, from the likes of Anthropic or Google, available to its user base in the same way. They’re clearly not betting on one particular horse in the larger generative AI arms race, even though OpenAI’s is the first partnership to be announced. Apple is, of course, also working on Apple Intelligence, its own generative AI solution, but they’re plainly aiming for these offerings to complement their existing and future product lines, making your iPhone more useful, rather than selling the models as stand-alone products.
All of this is to say that there are many ways to think about how generative AI can and should fit into a business strategy, and building the technology itself is no guarantee of being the most successful with it. When we look back a decade from now, I doubt the companies we’ll consider the “big winners” of the generative AI business space will be the ones that actually developed the core technology.
What business strategy makes sense to develop?
OK, you might think, but someone has to build it if the features are valuable enough, right? If money isn’t invested in actually building generative AI capabilities, will we have those capabilities? Will their full potential be realized?
I certainly can’t deny that many tech investors genuinely believe there is a lot of money to be made in generative AI, which is why they have already sunk many billions of dollars into OpenAI and its ilk. However, as I’ve written in several previous articles, even with those billions I strongly suspect that what we’ll see from here is only minor, incremental improvement in generative AI performance, rather than a continuation of the seemingly exponential technological progress of 2022-2023. (In particular, the limits on the amount of human-generated data available for training mean the promised progress can’t simply be bought by throwing money at the problem.) So I’m not convinced that generative AI is going to get much more useful or “smarter” than it is right now.
With all that said, and whether you agree with me or not, we should remember that having a highly advanced technology is very different from being able to build a product out of that technology that people will buy, and from building a sustainable, renewable business model around it. You can invent a cool new thing, but as any product team at any startup or tech company will tell you, that’s not the end of the process. Figuring out how real people can and will use your cool new thing, communicating that, and convincing people that your cool new thing is worth a sustainable price is extremely difficult.
We’re certainly seeing lots of ideas for this pitched through lots of channels, but some of them aren’t living up to expectations. OpenAI’s new beta search engine, announced last week, was already producing serious errors in its results. Anyone who has read my previous articles on how LLMs work won’t be surprised. (Personally, I was just surprised they didn’t anticipate this obvious problem when they designed the product in the first place.) Even the ideas that are somewhat appealing can’t merely be “nice to haves” or luxuries; they have to be necessities, because the price required to make this business sustainable has to be very high. When the burn rate is $5 billion a year, then to become profitable and self-sustaining, your paying user base has to be astronomical and/or the price those users pay has to be eye-watering.
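For a rough sense of that scale, here’s a back-of-envelope sketch in Python. The $5 billion annual burn rate is the figure cited above; the $20-per-month subscription price is my own illustrative assumption, roughly in line with today’s consumer chatbot plans.

```python
# Back-of-envelope: how many paying subscribers are needed just to
# cover a given annual burn rate, before a cent of profit is made?

ANNUAL_BURN_USD = 5_000_000_000    # the ~$5B/year burn rate cited above
MONTHLY_PRICE_USD = 20             # assumed subscription price (illustrative)

annual_revenue_per_user = MONTHLY_PRICE_USD * 12          # $240/year
breakeven_subscribers = ANNUAL_BURN_USD / annual_revenue_per_user

print(f"Break-even paying subscribers: {breakeven_subscribers:,.0f}")
# Prints: Break-even paying subscribers: 20,833,333
```

Over twenty million people paying full price just to break even, and every price cut or free tier pushes that number higher.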
Isn’t research still inherently valuable?
This puts the people most interested in pushing the technological frontier in a difficult position. Research for research’s sake has always existed in some form, even when the results aren’t immediately useful. But capitalism doesn’t really have a good channel for supporting this kind of work, especially when participating in that research costs unimaginably huge amounts of money. The United States has been draining resources away from academic institutions for decades, so scientists and researchers in academia have little or no chance of even taking part in this kind of research without private investment.
I think that’s a real shame, because academia is where this kind of research could be done with proper oversight. Questions of ethics, safety, and security can be taken seriously and studied in academia in ways that simply aren’t a priority in the private sector. Research culture and the norms of science can place knowledge above money, but when all the research is happening inside private enterprises, those choices change. The people our society trusts to do “cleaner” research don’t have access to the resources needed to participate meaningfully in the generative AI boom.
Now what?
Of course, there’s a significant chance that even these private companies lack the resources to sustain the mad dash to train more and bigger models, which brings us back to the quote I started this article with. Because of the economic model driving our technological progress, we may be missing out on real opportunities: generative AI applications that make sense but won’t generate the billions needed to pay the GPU bills may never be researched in depth, while socially harmful, silly, or useless applications attract investment because they open up bigger profit opportunities.