Hey there, folks! Here’s a mind-blowing fact for you: every time you ask ChatGPT a question, the company behind it, OpenAI, spends about 4 cents (roughly 0.29 Chinese yuan) answering it! Now, 4 cents may not seem like much, but what if we multiply that by 13 million? Yeah, you heard me right.
According to a report by UBS Group, ChatGPT hit a whopping 100 million monthly active users in February, with around 13 million unique visitors every day. So, let’s do a little math here. If each of those visitors asks just one question per day, ChatGPT’s daily operating cost comes to at least $520,000. But hold on, there’s more! One analyst cited by Insider put ChatGPT’s daily cost as high as $700,000. Whoa!
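To make that math concrete, here’s a quick back-of-the-envelope sketch in Python. The per-question cost, the visitor count, and the one-question-per-visitor figure are all just the rough assumptions quoted above, not measured numbers, so treat the result as an order-of-magnitude estimate.

```python
# Back-of-the-envelope estimate of ChatGPT's daily serving cost.
# Every figure here is an assumption taken from the estimates above, not a measured value.

cost_per_query_usd = 0.04           # rough estimate of compute cost per question
daily_unique_visitors = 13_000_000  # UBS estimate of daily unique visitors
queries_per_visitor = 1             # deliberately conservative assumption

daily_cost = cost_per_query_usd * daily_unique_visitors * queries_per_visitor
print(f"Estimated daily cost: ${daily_cost:,.0f}")
# -> Estimated daily cost: $520,000
```

Nudge queries_per_visitor up by about a third and you land right around that $700,000-a-day figure.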
Now, when it comes to the cost of running AI services, hardware is the biggest chunk of the bill. I mean, let’s face it, chips are crucial for the tech industry, and AI companies feel those sky-high hardware costs more than anyone, which is exactly why they’re so tempted to develop their own, my friends.
When AI companies hit a certain stage of development, they gotta start thinking about chip manufacturing. It’s just something they can’t avoid.
According to Reuters, word on the street is that OpenAI is exploring the idea of designing its own AI chips. Smart move, if you ask me. It’s all about cutting down those operating costs. Just look at OpenAI’s biggest competitor in the AI field, Google: they plan to drop Broadcom from their AI chip supplier roster by 2027, which could save them billions of dollars each year! Now that’s a lot of dough, my friends.
Reducing costs isn’t the only reason OpenAI wants to go down the self-designed chip route. They also want to break away from depending on other companies. You see, the current AI chip market is dominated by Nvidia, which holds about 80% of the global market share. With AI booming left and right, demand for chips is skyrocketing, and Nvidia’s supply capacity is struggling to keep up. Even if Nvidia raises H100 production to 1.5 to 2 million units in 2024, that still won’t be enough to meet demand. OpenAI’s CEO, Sam Altman, has even publicly voiced his frustration about the chip shortage and how it’s slowing the company’s expansion. Tough times, my friends.
Now, folks, “making chips” ain’t no easy task. It requires a hefty investment of time and money. Alex White, the General Manager of AI company SambaNova Systems, said that “designing and manufacturing chips isn’t something you can accomplish overnight. It demands a vast amount of expertise and increasingly scarce resources. OpenAI took more than five years to develop GPT-4. So, if they spend a similar amount of time on hardware, I wouldn’t be surprised at all.”
Even after pouring in manpower, money, and time, there’s no guarantee OpenAI will end up with a chip that meets its requirements. That’s the challenge of self-designed chips, my friends. To reduce costs and improve the odds of success, acquiring an established chip company might just be the best option. And guess what? OpenAI is currently conducting due diligence on potential acquisition targets. Ooh, mysterious!
But wait, there’s more! It’s not just OpenAI. Meta and Microsoft have been putting years of effort into chip development too, my friends.
According to Reuters, Meta has developed a custom chip of its own, but the effort has been plagued with problems, forcing the company to scrap some of its AI chips. But fear not! Mark Zuckerberg is determined to build an upgraded chip that can support every type of AI workload in the Meta ecosystem. Talk about ambition!
Now, Microsoft’s progress in this area seems to be sailing a bit more smoothly. Way back in 2019, they started secretly developing an AI chip codenamed Athena. Reports suggest that Microsoft plans to start using this chip more broadly within Microsoft and OpenAI next year. But hold your horses, folks! Microsoft doesn’t expect Athena to replace Nvidia chips in the short term; the main goal is to reduce its dependence on Nvidia. Smart move, Microsoft!
So, my friends, it’s crystal clear that giants like OpenAI and Microsoft don’t want to be at the mercy of other companies and are looking to reduce that dependence. That is one of the primary motivations behind their “chip-making” endeavors. Reuters speculates that OpenAI’s pursuit of self-designed chips might be the latest sign of the company drifting apart from its partner, Microsoft. Interesting stuff!
Now, let’s be real here: OpenAI’s chip-making plan is still in its early stages, and it’ll be a few years before the market sees the fruits of that labor. Until then, OpenAI has only two choices: shell out big bucks for Nvidia chips or lean heavily on Microsoft. Tough decisions!