GPT-4 and New Hardware Requirements

As AI technology continues to evolve and be commercialized, computing hardware, the foundation for large AI models, will continue to benefit. GPT-4 is moving from a pure language model to a multimodal model, which will open up more diverse application scenarios.

At the same time, GPT-4's larger model demands more computing power, and its multimodal features increase demand for additional encoding and decoding modules.

A multimodal model must handle three basic types of input: first, images, and video composed of multiple frames of images; second, audio; and third, text.

Therefore, as multimodality develops, hardware will need stronger video and audio encoding and decoding capabilities. Video places the highest demands on computing power and IP complexity, followed by audio. This creates an opportunity to extend GPU computing with peripheral encoding and decoding IP, such as VPU modules.
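To see why video dominates the encode/decode workload, a back-of-the-envelope comparison of raw data rates is useful. The figures below (720p 30 fps RGB video, 16 kHz 16-bit mono audio, roughly 150 words per minute of text) are illustrative assumptions, not numbers from the article:

```python
def video_bytes(seconds, fps=30, width=1280, height=720, bytes_per_pixel=3):
    """Uncompressed RGB video size in bytes."""
    return seconds * fps * width * height * bytes_per_pixel

def audio_bytes(seconds, sample_rate=16_000, bytes_per_sample=2):
    """Uncompressed 16-bit mono PCM audio size in bytes."""
    return seconds * sample_rate * bytes_per_sample

def text_bytes(seconds, words_per_minute=150, bytes_per_word=6):
    """Rough size of transcribed text covering the same duration."""
    return int(seconds * words_per_minute / 60 * bytes_per_word)

minute = 60
v, a, t = video_bytes(minute), audio_bytes(minute), text_bytes(minute)
# One minute: ~5.0 GB of video vs ~1.9 MB of audio vs ~0.9 kB of text
print(f"video: {v / 1e9:.1f} GB, audio: {a / 1e6:.1f} MB, text: {t / 1e3:.1f} kB")
```

Under these assumptions, one minute of raw video is roughly three orders of magnitude larger than the corresponding audio, and six orders of magnitude larger than the text, which is why dedicated video codec hardware matters most.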

According to media reports, TSMC, the industry leader in wafer foundry, has seen an increase in orders from Nvidia for A100 and H100 GPUs, as well as for the A800 series GPUs designed for the Chinese market.

It is worth noting that the Nvidia A100, H100, and A800 series GPUs are all products designed for data centers. The increase in orders for these three GPUs is likely related to recent hot products such as ChatGPT.

In a report last month, the media noted that the world may once again face a GPU shortage, driven by the training and serving of products such as ChatGPT, and that the shortage may arrive earlier than expected.

To create and maintain the massive database of AI training data needed for ChatGPT, developer OpenAI used 10,000 Nvidia GPUs for training. To support practical applications and meet server requirements, OpenAI has already deployed about 25,000 Nvidia GPUs, a number expected to grow as demand increases.

The increase in Nvidia's orders at TSMC may not be a recent development. Last month, foreign media reported that TSMC's 5nm and 4nm process utilization rates had rebounded thanks to strong demand for AI chips and server processors with short delivery times. Although Nvidia's orders were not mentioned explicitly at the time, those AI chip and server processor orders likely include Nvidia's products.
