GigaIO SuperNODE™
The World’s First 32 GPU Single-node AI Supercomputer for Next-Gen AI and Accelerated Computing
A New Era of Disaggregated Computing
Technologies that reduce the number of required node-to-accelerator data transfers are crucial to delivering the raw horsepower a robust AI infrastructure demands.
The GigaIO SuperNODE can connect up to 32 AMD or NVIDIA GPUs to a single node at the same latency and performance as if they were physically located inside the server box. The power of all these accelerators, seamlessly connected by GigaIO’s transformative PCIe memory fabric, FabreX, can now be harnessed to drastically speed up time to results.
The SuperNODE is a simplified system capable of scaling multiple accelerator technologies such as GPUs and FPGAs without the latency, cost, and power overhead required for multi-CPU systems.
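To give a rough sense of what this looks like from software, here is a minimal sketch, assuming a Linux host with PyTorch installed and NVIDIA GPUs attached to the node (for example, through a FabreX-composed accelerator enclosure). The probe function and the tiny matrix-multiply check are illustrative placeholders, not GigaIO software; the point is simply that composed accelerators are enumerated like ordinary local devices.

```python
# Minimal sketch: composed GPUs appear to frameworks as ordinary local devices.
# Assumes a Linux host with PyTorch and NVIDIA GPUs visible to this single node.
import torch

def probe_local_gpus() -> None:
    count = torch.cuda.device_count()          # every GPU visible to this one node
    print(f"GPUs visible to this node: {count}")
    for i in range(count):
        props = torch.cuda.get_device_properties(i)
        # Run a tiny matrix multiply on each device as a sanity check.
        x = torch.randn(1024, 1024, device=f"cuda:{i}")
        y = x @ x
        torch.cuda.synchronize(i)
        print(f"  cuda:{i} {props.name} "
              f"{props.total_memory / 2**30:.0f} GiB ok={y.isfinite().all().item()}")

if __name__ == "__main__":
    probe_local_gpus()
```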
- We Significantly Shorten LLM Development Time: Developers can focus on model creation without the hassle of scaling across multiple servers, speeding up the deployment of LLMs (see the single-node sketch after this list). "It's EASY to scale with GigaIO!"
- We Break the 8 GPU Server Limit: We overcome traditional limitations by providing a seamless, scalable computing environment, free from the complexities and high costs of InfiniBand.
- We Deliver Leadership Price-Performance: Cost-effective, high-performance AI computing makes advanced technology more accessible and more profitable.
- We Are Ready for Immediate Deployment: Our solution is available now, allowing clients to leverage these benefits without delay.
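Because all of the accelerators sit in a single node, standard single-node data-parallel tooling is enough; no multi-server orchestration or InfiniBand fabric is required. Below is a minimal, hedged sketch using PyTorch's nn.DataParallel with a toy model and random data; the model, shapes, and training loop are placeholders for illustration, not GigaIO software.

```python
# Minimal single-node data-parallel sketch: one process, one node, many GPUs.
# The toy model, batch shape, and dummy objective are placeholders only.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4096, 4096), nn.ReLU(), nn.Linear(4096, 4096))
if torch.cuda.is_available():
    model = nn.DataParallel(model.cuda())      # replicates across every visible GPU
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

for step in range(10):
    x = torch.randn(256, 4096)
    if torch.cuda.is_available():
        x = x.cuda()
    loss = model(x).pow(2).mean()              # dummy objective for the sketch
    opt.zero_grad()
    loss.backward()
    opt.step()
    print(f"step {step} loss {loss.item():.4f}")
```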
See For Yourself
Made Possible by FabreX
The GigaIO SuperNODE is powered by FabreX, GigaIO’s transformative high-performance AI memory fabric. In addition to enabling unprecedented device-to-node configurations, FabreX is also unique in making possible node-to-node and device-to-device communication across the same high-performance PCIe memory fabric. FabreX can span multiple servers and multiple racks to scale up single-server systems and scale out multi-server systems, all unified via the FabreX software.
Resources normally located inside of a server — including accelerators such as GPUs and FPGAs, storage, and even memory — can now be pooled in accelerator or storage enclosures, where they are available to all of the servers in the system. These resources and servers continue to communicate over the FabreX native PCIe memory fabric for the lowest possible latency and highest possible bandwidth performance, just as they would if they were still plugged into the server motherboard.
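One way to see that pooled devices are presented as native PCIe devices is to enumerate them through the standard Linux sysfs PCI interface. The sketch below is a generic Linux example, not a GigaIO utility: it lists devices whose PCI base class marks them as display or 3D controllers, which is where composed GPUs would appear alongside locally installed ones.

```python
# Generic Linux sketch: enumerate PCIe display/3D controllers via sysfs.
# Accelerators composed onto the node over a PCIe fabric show up here like
# any locally installed device. Illustration only, not a GigaIO tool.
from pathlib import Path

PCI_ROOT = Path("/sys/bus/pci/devices")
DISPLAY_CLASS_PREFIX = "0x03"   # PCI base class 0x03 = display controllers (VGA, 3D)

def list_accelerators() -> None:
    for dev in sorted(PCI_ROOT.iterdir()):
        pci_class = (dev / "class").read_text().strip()
        if pci_class.startswith(DISPLAY_CLASS_PREFIX):
            vendor = (dev / "vendor").read_text().strip()
            device = (dev / "device").read_text().strip()
            print(f"{dev.name}  class={pci_class}  vendor={vendor}  device={device}")

if __name__ == "__main__":
    list_accelerators()
```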
Unprecedented Compute Capability Available Now
Available today for emerging AI and accelerated computing workloads, the SuperNODE engineered solution, part of the GigaPod family, delivers unprecedented accelerated computing power when you need it, and maximum flexibility and accelerator utilization when your workloads require only a few GPUs.
What will YOU discover?
Resources
- Solution Brief
- Introduction Video
- Demo Video

Information
- BrightTalk Webinar
- TechTalk Podcast