As the company enters a critical phase of its transformation, Intel has officially signaled its entry into the GPU (graphics processing unit) market. On Tuesday, at the Cisco AI Summit in San Francisco, Intel CEO **Lip-Bu Tan** announced that the company will begin producing the type of chip that Nvidia has made famous.


Core Strategy: Heavy Investment and Key Executive Hires

Lip-Bu Tan confirmed at the summit that Intel is assembling an elite engineering team to execute the GPU strategy:

  • Project leadership: The effort is led by **Kevork Kechichian**, who joined Intel from Arm last September and currently serves as Executive Vice President and General Manager of the Data Center Group.

  • Chief architect hire: Lip-Bu Tan revealed that the company has recruited an "extremely outstanding" chief GPU architect. According to industry sources, **Eric Demers**, a veteran engineer who spent 13 years at Qualcomm, joined in January of this year, adding key momentum to Intel's GPU development.

Strategic Shift: From Traditional CPU to AI Inference GPU

Although Intel had once said it would refocus on its core CPU business, Lip-Bu Tan quickly broadened that scope in the face of the AI wave's insatiable demand for computing power. The newly announced GPU will focus on artificial intelligence model training and inference, and in particular on the increasingly severe memory bottleneck. Lip-Bu Tan noted that current GPUs consume large amounts of memory, and that Intel will shape its strategy around customer needs, offering differentiated solutions built on advanced packaging technologies.

Industry Context: Breaking Through the "Memory Crunch"

Lip-Bu Tan offered a clear assessment of the state of AI infrastructure at the summit: he predicted that the shortage of memory chips will persist until 2028 and urged companies to modernize their processes before pursuing AI at scale. Intel's move into this space is aimed not only at challenging Nvidia's dominant share of more than 80% of the AI accelerator market, but also at building a complete foundry and product ecosystem around its 18A process node.