
OpenAI is spending $700,000 per day to keep ChatGPT running.
According to a report from The Information, maintaining ChatGPT's enormous popularity and capability is a financially staggering task, with OpenAI reportedly shelling out as much as $700,000 per day to sustain its infrastructure. The figure comes from research firm SemiAnalysis.
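To put the reported figure in perspective, a quick back-of-envelope calculation (illustrative arithmetic only, not from the report) annualizes the daily cost:

```python
# Illustrative arithmetic: annualizing the reported daily infrastructure cost.
DAILY_COST_USD = 700_000  # figure attributed to SemiAnalysis

annual_cost = DAILY_COST_USD * 365
print(f"~${annual_cost:,} per year")  # roughly a quarter of a billion dollars
```

At that rate, infrastructure alone would run to roughly $255 million per year.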
Dylan Patel, chief analyst at SemiAnalysis, stated that the bulk of this expenditure is due to the exorbitant cost of the servers needed to support ChatGPT's capabilities.
Running AI systems, particularly those that function as search engines and conversational interfaces, can be immensely expensive due to the specialized and costly chips that power them. This isn't an issue exclusive to ChatGPT.
To address this problem, Microsoft, which has made significant investments in OpenAI, has been developing its own AI chip named “Athena” since 2019. The Information reports that the chip is now available to a limited number of Microsoft and OpenAI employees.
Microsoft aims to use Athena to replace the current Nvidia graphics processing units it employs, which are less efficient and more costly. The switch to Athena could result in substantial cost savings for the company.
Patel told The Information that if Athena proves to be a viable alternative, it could lower the cost per chip by as much as one-third compared with Nvidia's current offerings.
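A sketch of what a one-third per-chip saving would mean in practice (the Nvidia chip price below is an assumed placeholder for illustration, not a figure from the article):

```python
# Hypothetical savings if Athena cuts cost per chip by one-third.
nvidia_chip_cost = 30_000  # assumed example price in USD, not from the article

athena_chip_cost = nvidia_chip_cost * (1 - 1 / 3)
saving_per_chip = nvidia_chip_cost - athena_chip_cost

print(f"Athena cost per chip: ${athena_chip_cost:,.0f}")
print(f"Saving per chip:      ${saving_per_chip:,.0f}")
```

Across the tens of thousands of chips a large AI deployment requires, even a one-third reduction per unit would compound into substantial savings.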
While Microsoft’s development of its own AI chip marks a significant entry into the field of AI hardware, the company is unlikely to completely replace Nvidia’s AI chips, as the two companies have recently agreed to collaborate on AI projects for several years to come.
It’s worth noting that Microsoft is playing catch-up with its rivals Google and Amazon, which have already established their own in-house AI chips.
If the rumors surrounding Athena are true, then the chip’s arrival cannot come soon enough.
In a recent statement, OpenAI CEO Sam Altman expressed his belief that we are approaching the end of the era of “giant AI models,” as the benefits of their massive size may be reaching a point of diminishing returns. OpenAI’s latest GPT-4 model, reportedly with over one trillion parameters, may already be approaching the practical limits of scalability, according to the company’s own analysis.
Although larger AI models have traditionally meant greater power and expanded capabilities, Patel's analysis suggests that the additional scale will likely result in higher costs.
However, considering the overwhelming success of ChatGPT, it’s unlikely that OpenAI is struggling financially.