GPT-4 Full Access to Office: Opportunities and Challenges for the Chip Industry

Since OpenAI launched ChatGPT at the end of last year, the AI field has continued to heat up. First, OpenAI followed ChatGPT with GPT-4, its more powerful large-scale multimodal model. Now Microsoft has announced that GPT-4 will be integrated across its entire Office suite, which has set the industry abuzz.

All signs indicate that the application of large models is about to enter thousands of households, and the development of the artificial intelligence industry has reached a new peak. As we all know, whether it is ChatGPT, Wenxin Yiyan (ERNIE Bot), or the training and deployment of other generative AI models, the requirements for computing power are high.

Microsoft 365 Copilot – GPT-4

Opportunities and Challenges for High-Performance Compute Chips

Currently, the training of large models around the world runs mostly on NVIDIA GPUs. Both ChatGPT, previously launched by OpenAI, and the recently released GPT-4 were trained on large numbers of NVIDIA A100s. Microsoft's Azure cloud service is said to have built an AI computing cluster of more than 10,000 NVIDIA A100 GPUs for ChatGPT.

In the long run, the development and deployment of large models is an inevitable trend, and each large model's training and deployment is supported by tens of thousands of GPUs behind the scenes. As research and applications in this area proliferate, market demand for general-purpose GPUs can be expected to explode. This will be a huge opportunity for GPU companies.

Opportunities and Challenges for High-Speed Interface IP

As mentioned above, training large models requires a huge number of GPUs; ChatGPT was reportedly trained on some 10,000 of NVIDIA's high-end GPUs. From a training perspective, even with chips as computationally capable as the A100, training a model like ChatGPT could take hundreds of years if those chips could not be clustered together. The training of large AI models is therefore both a huge challenge and a huge opportunity for high-speed interface IP.
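The "hundreds of years" claim can be checked with a rough back-of-envelope calculation. The figures below (total training compute, per-GPU throughput, sustained utilization) are illustrative estimates for this sketch, not official numbers:

```python
# Rough estimate of large-model training time, single GPU vs. cluster.
# All figures are illustrative assumptions, not official numbers.

SECONDS_PER_YEAR = 365 * 24 * 3600

total_flops = 3.14e23        # published estimate of GPT-3-scale training compute (FLOPs)
a100_peak_flops = 312e12     # NVIDIA A100 peak BF16 tensor throughput (FLOP/s)
utilization = 0.3            # assumed fraction of peak actually sustained

effective_flops = a100_peak_flops * utilization

single_gpu_years = total_flops / effective_flops / SECONDS_PER_YEAR
print(f"Single A100: ~{single_gpu_years:.0f} years")

cluster_size = 10_000        # the scale reportedly used for ChatGPT
cluster_days = total_flops / (effective_flops * cluster_size) / 86400
print(f"{cluster_size} A100s (ideal scaling): ~{cluster_days:.1f} days")
```

Even with generous assumptions and ideal (loss-free) scaling, one GPU would need on the order of a century, while a 10,000-GPU cluster finishes in days, which is why clustering is non-negotiable.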

First is the on-package interconnect, that is, Die-to-Die interface IP such as UCIe, which expands the computing capacity of a single chip. Second is Chip-to-Chip interface IP, including SerDes, PCIe, and CXL, which accelerates interconnection and data exchange between chips to meet the demand for higher bandwidth. In addition, there is memory interface IP, which feeds compute chips the enormous data throughput that training requires. Therefore, from a training perspective, the explosion of ChatGPT-like applications can bring very large demand for interface IP.
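Why interconnect bandwidth matters so much can also be sketched numerically. In data-parallel training, every step ends with an all-reduce of the gradients, and a ring all-reduce moves roughly twice the gradient payload through each GPU's links. The parameter count and link bandwidths below are rough assumptions chosen only to illustrate the gap between interconnect classes:

```python
# Illustrative comparison of gradient all-reduce time over two interconnects.
# Parameter count and link bandwidths are rough assumptions for this sketch.

params = 175e9               # assumed parameter count (GPT-3 scale)
bytes_per_param = 2          # fp16 gradients
payload = params * bytes_per_param            # ~350 GB of gradients

# A ring all-reduce moves ~2 * (N-1)/N * payload per GPU; ~2x for large N.
traffic_per_gpu = 2 * payload

pcie4_x16_bps = 32e9         # ~32 GB/s usable, PCIe 4.0 x16 (approx.)
nvlink3_bps = 600e9          # ~600 GB/s aggregate, NVLink 3 on A100 (approx.)

pcie_time = traffic_per_gpu / pcie4_x16_bps
nvlink_time = traffic_per_gpu / nvlink3_bps
print(f"PCIe 4.0 x16: ~{pcie_time:.1f} s per step just moving gradients")
print(f"NVLink 3:     ~{nvlink_time:.2f} s per step")
```

Under these assumptions the slower link spends tens of seconds per step purely on communication, an order of magnitude more than the faster one, which is exactly the gap that higher-bandwidth interface IP is meant to close.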

Opportunities and Challenges for Server Vendors

Training and deployment of large models cannot be done without server support. Nidhi Chappell, general manager of Azure AI infrastructure at Microsoft, said that they have built a system architecture that can operate reliably at very large scale, which is a big reason for ChatGPT's success. Cloud services rely on thousands of different components, including servers, cooling pipes, and various metals and minerals.

In recent years, amid the global wave of digitalization and intelligent applications, smartphones, autonomous driving, data centers, image recognition, and other workloads have driven rapid growth in the AI server market. According to IDC data, the global AI server market reached $14.5 billion in 2021 and is expected to exceed $26 billion in 2025.


Obviously, with the release of large language models such as ChatGPT, GPT-4, and Wenxin Yiyan, and the coming deployment of large models across various fields, people are getting closer to the long-awaited general AI. The development and deployment of large models will also bring unprecedented opportunities to industries such as compute chips, interface IP, and servers. At the same time, China's domestic players have their own unique advantages in these fields and also face many challenges. This will be a protracted battle.
