As I have written, Large Language Models do not have a business moat. Training them, however, is expensive: Guido Appenzeller estimates that GPT-3 training costs “range from $500,000 to $4.6 million, depending on hardware assumptions”. So it did not surprise me that some have already speculated that OpenAI might be bankrupt by the end of 2024. I believe, however, that OpenAI understands what it needs to do to create its business moat.
User Experience
No doubt, ChatGPT has a moat. More than 100 million consumers now take their questions to ChatGPT instead of Google. That many users is a moat in itself. The question, however, is how to protect it, since, as Google’s Eric Schmidt famously put it: “Competition is just a click away.”
The answer is that OpenAI will improve the user experience. LLMs are not friction-free. They can easily frustrate users with lengthy, irrelevant, or even incorrect answers. Often this comes down to proper prompt design, but that in itself is a UX challenge. OpenAI has started to iterate on ChatGPT’s interface to improve the user experience and flow. See below: for a while now, they have offered ‘pre-written’ queries, alongside a warning about the tool’s limitations. The easier the access to ChatGPT, the stickier the product. The stickier the product, the stronger the moat.
OpenAI recommends pre-defined prompts.
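To make the idea concrete, here is a minimal sketch of how an application could surface such pre-written queries and send the chosen one to the model. It assumes the pre-1.0 openai Python SDK; the suggested prompts and function names are hypothetical, not OpenAI’s actual implementation.

```python
import openai  # pip install "openai<1.0"; assumes OPENAI_API_KEY is set

# Hypothetical suggestion chips, modeled on ChatGPT's home screen.
SUGGESTED_PROMPTS = [
    "Explain quantum computing in simple terms",
    "Got any creative ideas for a 10 year old's birthday?",
]

def ask(prompt: str) -> str:
    """Send a single user prompt to the chat model and return its reply."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# One click on a suggestion replaces the blank text box with a good prompt.
print(ask(SUGGESTED_PROMPTS[0]))
```

The design point is that the user never faces an empty text box; a curated prompt removes the prompt-design burden that creates friction.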
Feedback Loop
How good is the LLM, or the chatbot powered by one? What do users want to know? Did the chatbot answer their question?
Google has spent a lot of effort finding this out. They know when an answer was not helpful: once a user repeats the search query, Google knows the previous result was insufficient. But OpenAI? They have a trained model that is not even up to date. How does OpenAI know whether an answer has created user delight? (User delight is the idea that interacting with a device or interface creates a positive emotional effect on the user.) Currently, OpenAI does not know. They don’t have a feedback loop. They offer a clumsy thumbs-up/down button, but I wonder how many users will actually use it.
One of the challenges for OpenAI will be to create user journeys that allow them to measure how good an answer is. Only then will OpenAI be able to create a moat!
There is no good feedback loop between the user and OpenAI
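What could such a feedback loop look like? Below is a minimal sketch of the implicit signal Google relies on: if a user quickly rephrases essentially the same question, the previous answer probably missed. The thresholds and names here are hypothetical assumptions, not anything OpenAI has documented.

```python
import time
from difflib import SequenceMatcher

# Hypothetical tuning knobs for the "repeated query" heuristic.
REPHRASE_SIMILARITY = 0.6   # prompts this similar are likely rephrasings
REPHRASE_WINDOW_SEC = 120   # only count quick follow-ups

def is_implicit_thumbs_down(prev_prompt, prev_time, new_prompt, new_time):
    """A fast, similar follow-up prompt suggests the last answer missed."""
    if new_time - prev_time > REPHRASE_WINDOW_SEC:
        return False
    similarity = SequenceMatcher(
        None, prev_prompt.lower(), new_prompt.lower()
    ).ratio()
    return similarity >= REPHRASE_SIMILARITY

# Example: the user immediately rewords the same question.
t0 = time.time()
print(is_implicit_thumbs_down(
    "How do I cancel my flight?", t0,
    "how can i cancel a flight booking", t0 + 30))  # -> True
```

A signal like this costs the user nothing, which is exactly what makes it stronger than a thumbs-up/down button that almost nobody clicks.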
Workflow Integration
Feedback loops work best when they are integral to an existing workflow. Overall, we will see more and more workflows become “supercharged” with Large Language Models. Take, for example, Google Docs. I showed in this mini video how Google uses LLMs to help write documents. It is seamless and does not require me to switch to a separate tool such as Jasper.ai (and yes, they laid off people as their business started to decline). Once OpenAI is integrated into such workflows, it is much easier to see when and whether a user accepts a change or a suggestion.
OpenAI will need to invest more effort in workflows and products to become more than just a generic language interface. Specialization will be the norm, and many different specializations will create a moat.
Initially, OpenAI took the fast route by tagging along with Microsoft. Copilot offers integrations into many of Microsoft’s core tools. Whether those integrations will create a competitive moat is hard to say; it all depends on the feedback loop. Additionally, OpenAI will offer an API-driven feedback loop for all the integrations offered via its plugins.
Microsoft gives OpenAI’s models access to your enterprise data
Data Access
LLMs can create human-like sentences. However, to create value from text, the content of the text matters. In the article “Access to Data Will Change the World Power Structure”, I argue that access to datasets is where the real value lies. Companies like Legal.OS (listen to my interview with Torben, their CTO) or Qatalog (listen to my interview with Tariq, the founder of Qatalog) use LLMs over enterprise data to create this value. OpenAI has therefore smartly chosen to work with Microsoft so its LLMs can work on top of any dataset that Microsoft stores. This pattern, retrieving relevant documents and handing them to the model as context, is called retrieval-augmented generation (RAG); if you wonder how RAG works, please listen to our podcast, where we explain it.
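Here is a minimal sketch of the RAG pattern, again assuming the pre-1.0 openai Python SDK; the documents, question, and function names are hypothetical stand-ins for enterprise data.

```python
import numpy as np
import openai  # pip install "openai<1.0"; assumes OPENAI_API_KEY is set

# Hypothetical in-house documents standing in for enterprise data.
DOCS = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support is available Monday to Friday, 9am to 5pm CET.",
    "Enterprise customers get a dedicated account manager.",
]

def embed(texts):
    """Turn a list of strings into embedding vectors."""
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=texts)
    return np.array([d["embedding"] for d in resp["data"]])

def answer(question: str) -> str:
    doc_vecs = embed(DOCS)
    q_vec = embed([question])[0]
    # Cosine similarity: pick the document closest to the question.
    sims = doc_vecs @ q_vec / (
        np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec)
    )
    context = DOCS[int(np.argmax(sims))]
    # Hand the retrieved document to the model as context.
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": f"Answer using this context: {context}"},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

print(answer("Can I return a product after two weeks?"))
```

The moat-relevant point is that the model is interchangeable, while whoever holds the documents (in OpenAI’s case, Microsoft) controls the value.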
This collaboration creates a moat, but it is more Microsoft’s moat than OpenAI’s. How and whether OpenAI will build its own business moat for enterprise applications remains to be seen.
Regulatory Protection
Last but not least, regulation can create a moat. Sam Altman has been on tour with policymakers, advocating for more oversight and control and warning them about the very tool he has built. Additionally, OpenAI now lets website owners block its scraper so that text from webpages carrying a “NoAI” tag is not used. But note that this tag is not honored retroactively: data OpenAI has already used to train its tools stays in; the block only applies going forward.
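In practice, OpenAI’s documented opt-out mechanism works through a robots.txt rule for its GPTBot crawler. A minimal example that keeps the crawler off an entire site:

```
# robots.txt at the site root; blocks OpenAI's GPTBot crawler entirely
User-agent: GPTBot
Disallow: /
```

As noted above, this only stops future crawls; anything already ingested remains in the trained models.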
I will refrain from discussing whether these are valuable efforts, as that requires a more nuanced discussion. However, both efforts would create a moat for OpenAI: (1) regulators would permit OpenAI as a tool while being more restrictive toward newcomers, and (2) newcomers could no longer scrape data carrying a “NoAI” tag, even though that same data was used to create OpenAI’s tools.
In summary, while the models themselves do not have a moat (see the discussion above), OpenAI is not doomed at all. They are focusing on the right areas to protect their business investment. It remains to be seen whether they can build out a moat in the areas where Microsoft has built its own.
Lutz Finger has built data products for LinkedIn, Google, Snap, and Marpai Health. He teaches “Designing Data Products” and AI strategy at Cornell’s Johnson Graduate School of Management. Views are his own.