GigaIO Launches the Revolutionary SuperNODE MI300X, Setting New Standards in AI Infrastructure
Scaling significantly larger AI models now made simple and seamless.
Carlsbad, California, May 9, 2024 – GigaIO, the award-winning provider of open workload-defined infrastructure for AI and accelerated computing, today announced that its flagship product, the 32-GPU single-node server SuperNODE™, is now shipping with AMD Instinct™ MI300X accelerators. MI300X series accelerators are designed to deliver leadership performance for Generative AI workloads and HPC applications. The MI300X’s massively expanded memory capacity, combined with the SuperNODE’s ability to put 32 GPUs into a single server, allows users to effortlessly accommodate and train significantly larger AI models, which is essential for today’s data-intensive Generative AI applications.
GigaIO’s trailblazing FabreX™ AI memory fabric, working with AMD’s innovative Infinity Fabric™ technology, equips the SuperNODE MI300X with industry-leading low latency, high bandwidth, and advanced congestion control capabilities. This unrivaled performance empowers the system to seamlessly handle the most demanding tensor parallelism workloads for next-generation AI model training.
“The SuperNODE means less time messing with infrastructure and faster time to running and optimizing LLMs,” said Greg Diamos, Co-founder & CTO of Lamini. The enterprise AI platform, which recently raised $25M to help enterprises turn proprietary expertise into the next generation of LLM capabilities, has been utilizing SuperNODEs in the TensorWave cloud.
The SuperNODE significantly simplifies the process of deploying and managing AI infrastructure. Traditional setups often involve intricate networking and the synchronization of several servers, which can be both technically challenging and time-consuming. In contrast, the SuperNODE streamlines this process with 32 GPUs in a single server. This simplicity accelerates deployment times and reduces technical barriers, allowing organizations to focus more on innovation and less on infrastructure complexities.
“We’re consistently hearing from customers that AI infrastructure is hard, but SuperNODE makes it easy, with no performance penalty and no need to change a single line of code,” said Alan Benjamin, CEO of GigaIO. “SuperNODE is all about ease of use and performance — it streamlines the process of getting AI models up and running compared to dealing with multiple complex InfiniBand-networked server configurations, and provides better performance than any other solution available today.”
GigaIO SuperNODEs with AMD MI300X GPUs are available now. Stop by the AMD/GigaIO booth (#F19) at ISC High Performance 2024 for a live demo of the SuperNODE with AMD MI300X GPUs running LLMs on a single node, or visit www.gigaio.com to learn more.
About GigaIO
GigaIO provides workload-defined infrastructure through its universal dynamic memory fabric, FabreX, which seamlessly composes rack-scale resources and integrates natively into industry-standard tools. The SuperNODE and the SuperDuperNODE are “impossible servers,” fully engineered to “Just Work” for AI and accelerated computing. These solutions allow users to deploy systems in hours instead of months and run more workloads at lower cost through higher utilization of resources and more agile deployment. Visit www.gigaio.com, or follow on Twitter (X) and LinkedIn.
Contact
Danica Yatko
760-487-8395
danica@xandmarketing.com