🚨 BREAKING: OpenAI is officially working on GPT-5 (or, as some call it, superintelligence)
OpenAI, the creator of ChatGPT, plans to secure more financial backing from Microsoft, its largest investor, as it continues to develop artificial general intelligence (AGI). In an interview, CEO Sam Altman discussed the strong partnership with Microsoft and the expectation of further investments to support the high costs of developing sophisticated AI models. Microsoft had already invested $10bn in OpenAI, valuing it at $29bn.
Altman highlighted OpenAI’s revenue growth but noted the company remains unprofitable due to high training expenses. He emphasized the mutual benefits of the partnership with Microsoft.
OpenAI recently announced new tools and upgrades to GPT-4, aiming to build a business model similar to Apple’s App Store. These include custom ChatGPT versions and a GPT Store for developers and companies. Altman views these as channels to their primary product, which he describes as “intelligence, magic intelligence in the sky.”
The company is hiring executives like Brad Lightcap to expand its enterprise business. Altman splits his time between research on superintelligence and building computing power for AGI. OpenAI is also developing more autonomous agents capable of tasks like coding and making payments.
Plans for GPT-5 are underway, requiring more training data from various sources. OpenAI is seeking new data sets, particularly in long-form writing or conversations. The exact capabilities of GPT-5 remain uncertain until it’s developed.
OpenAI uses Nvidia’s advanced H100 chips for training its models, a crucial component in AI development. However, with other companies like Google and AMD entering the AI chip market, dependence on Nvidia may decrease.
OpenAI DevDay Breakout Sessions Are Out
Great news for devs who weren't at OpenAI DevDay: the breakout sessions are now live on YouTube! 🎉
Nvidia unveils H200, its newest high-end chip for training AI models
Nvidia recently introduced the H200, a new graphics processing unit (GPU) designed for AI model training and deployment, significantly enhancing the capabilities seen in the generative AI sector. This GPU, an upgrade from the H100 used by OpenAI for training GPT-4, features a substantial increase in memory, boasting 141GB of advanced “HBM3e” memory. This enhancement is crucial for performing tasks like inference, where the GPU utilizes trained models to generate text, images, or predictions.
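To make the training/inference distinction concrete: inference just means running an already-trained model forward to produce new output. Here is a deliberately tiny, purely illustrative sketch using a hand-built bigram table standing in for "trained weights" (all names and data are hypothetical; real LLM inference runs billions of matrix multiplications on GPUs like the H100/H200):

```python
import random

# Toy "trained model": a bigram table mapping each token to the
# tokens that may follow it. In a real LLM these would be learned
# weights, not a hand-written dictionary.
bigram = {
    "magic": ["intelligence"],
    "intelligence": ["in"],
    "in": ["the"],
    "the": ["sky"],
}

def generate(start, max_tokens=5, seed=0):
    """Sample next tokens from the bigram table until no continuation exists."""
    random.seed(seed)
    tokens = [start]
    for _ in range(max_tokens):
        candidates = bigram.get(tokens[-1])
        if not candidates:
            break  # no known continuation: stop generating
        tokens.append(random.choice(candidates))
    return " ".join(tokens)

print(generate("magic"))  # -> magic intelligence in the sky
```

The point of the sketch: "inference" consumes the frozen model (the table) to generate output token by token, which is exactly the memory-bandwidth-bound workload the H200's larger HBM targets.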
The H200 is expected to deliver outputs nearly twice as fast as its predecessor, the H100, a claim supported by tests using Meta’s Llama 2 LLM. Priced between $25,000 and $40,000, the demand for such chips is high among major companies, startups, and government agencies, with thousands needed to train the largest models.
Nvidia’s stock has soared by over 230% in 2023, with the company anticipating a 170% surge in sales this quarter, amounting to approximately $16 billion. The H200 is set to be compatible with existing H100 systems, offering flexibility in upgrading without the need for significant system or software changes. It will be available in configurations of four or eight GPUs and will be part of Nvidia’s HGX complete systems, as well as the GH200, which pairs the H200 GPU with an Arm-based processor.
However, Nvidia is already planning its next big step with the introduction of the B100 chip, based on the upcoming Blackwell architecture, expected to be announced and released in 2024. This move is a part of Nvidia’s strategy to shift from a two-year to a one-year architecture release cadence, driven by the high demand for its GPUs.
A tip for building custom GPTs
Prompt of the Day 🍭: The Startup Idea Alchemist
Click on the image to load the full prompt in ChatGPT (requires Superpower ChatGPT)
Hope you enjoyed today’s newsletter
⚡️ Join over 200,000 people using the Superpower ChatGPT extension on Chrome and Firefox.