
Hardware Designed for Machine Intelligence
Bow IPU Processor
The AI processor delivering unprecedented performance and power efficiency for current and future ML innovation
The Bow IPU is the first processor in the world to use Wafer-on-Wafer (WoW) 3D stacking technology, taking the proven benefits of the IPU to the next level.
Featuring groundbreaking advances in compute architecture, silicon implementation, communication and memory, each Bow IPU delivers up to 350 teraFLOPS of AI compute: a 40% leap in performance and up to 16% better power efficiency compared to the previous generation IPU.
TSMC has worked closely with Graphcore as a leading customer for our breakthrough SoIC-WoW (Wafer-on-Wafer) solution, as their pioneering designs in cutting-edge parallel processing architectures make them an ideal match for our technology. Graphcore has fully exploited the ability to add power delivery directly connected via our WoW technology to achieve a major step up in performance, and we look forward to working with them on further evolutions of this technology.
Paul de Bot, General Manager
TSMC Europe
The IPU is a completely new kind of massively parallel processor, co-designed from the ground up with the Poplar® SDK to accelerate machine intelligence.
Its compute and memory architecture is designed for AI scale-out. The hardware is developed together with the software, delivering a platform that is easy to use and excels at real-world applications.
How the IPU compares:

CPU – Processors: designed for scalar processes.
      Memory: off-chip memory.
GPU – Processors: SIMD/SIMT architecture, designed for large blocks of dense contiguous data.
      Memory: model and data spread across off-chip memory, a small on-chip cache and shared memory.
IPU – Processors: massively parallel MIMD, designed for fine-grained, high-performance computing.
      Memory: model and data tightly coupled, with large locally distributed SRAM.
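The SIMD-versus-MIMD distinction in the comparison above can be loosely illustrated in plain Python. This is a conceptual sketch only, not IPU or Poplar code; the function and variable names are illustrative inventions:

```python
# Conceptual sketch (not IPU/Poplar code) of the two execution styles.

# SIMD-style: one instruction stream applied in lockstep to a large,
# dense, contiguous block of data (the GPU row of the table).
def simd_scale(block, factor):
    # Every element undergoes the exact same operation.
    return [x * factor for x in block]

# MIMD-style: many independent instruction streams ("tiles"), each
# running its own program on its own locally held data (the IPU row).
def mimd_run(tiles):
    # Each tile is a (program, local_data) pair; programs may differ.
    return [program(local_data) for program, local_data in tiles]

dense = simd_scale([1, 2, 3, 4], 10)      # same op everywhere
tiles = [
    (lambda d: sum(d), [1, 2, 3]),        # tile 0: reduction
    (lambda d: max(d), [7, 2, 5]),        # tile 1: maximum
    (lambda d: sorted(d), [3, 1, 2]),     # tile 2: sort
]
sparse = mimd_run(tiles)                  # independent programs per tile
```

The point of the sketch is the shape of the work, not the arithmetic: fine-grained, irregular workloads map naturally onto many independent programs with local data, which is what the IPU's MIMD design targets.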
Ideal for exploring, the Bow Pod16 gives you all the power, performance and flexibility you need to fast-track IPU prototyping and leap from pilot to production.
Ramp up your AI projects, speed up production and see faster time to business value. Bow Pod64 is the powerful, flexible building block for world-leading AI performance.
When you're ready to grow your processing capacity to supercomputing scale, choose Bow Pod256 for production deployment in your enterprise datacenter, private or public cloud.