Test Results for GigaIO SuperNODE™

Sign up below and you’ll be the first to know of the latest SuperNODE test results.

AI-Specific Test Results

The video below showcases the power of 32 GPUs on a SuperNODE: summarizing 82 articles per second using the MK1 Flywheel inference engine! 👇

When running training workloads such as Llama and ResNet-50, SuperNODE continues to exhibit excellent scaling.

Llama 7B is part of the LLaMA (Large Language Model Meta AI) family of autoregressive large language models (LLMs), released by Meta AI starting in February 2023. “7B” refers to 7 billion parameters.

The graph below shows the improvement in time-to-results when adding more GPUs to a single node: from 19 minutes with a standard 8-GPU server configuration to less than five minutes on a SuperNODE with 32 GPUs.

GigaIO SuperNODE Testing: LLaMA 7B Training Runtime Compared on a Single Node
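
As a quick back-of-the-envelope check of that claim (our arithmetic, using only the runtimes quoted above and taking "less than five minutes" as exactly five):

    # Scaling check for the LLaMA 7B runtimes quoted above.
    # Assumes: 8-GPU baseline = 19 minutes, 32-GPU SuperNODE = 5 minutes.
    baseline_gpus, baseline_minutes = 8, 19
    supernode_gpus, supernode_minutes = 32, 5

    speedup = baseline_minutes / supernode_minutes   # ~3.8x faster
    ideal = supernode_gpus / baseline_gpus           # 4x with perfect scaling
    print(f"speedup: {speedup:.1f}x, efficiency: {speedup / ideal:.0%}")
    # -> speedup: 3.8x, efficiency: 95%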

ResNet stands for Residual Network and is a specific type of convolutional neural network (CNN) introduced in the 2015 paper “Deep Residual Learning for Image Recognition” by Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. CNNs are commonly used to power computer vision applications.

ResNet-50 is a 50-layer convolutional neural network (48 convolutional layers, one MaxPool layer, and one average-pool layer). Residual neural networks are a type of artificial neural network (ANN) built by stacking residual blocks.
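
To make the idea of a residual block concrete, here is a minimal sketch in PyTorch. It is an illustrative simplification, not GigaIO's benchmark code: it shows the two-convolution "basic" block, whereas ResNet-50 itself stacks three-layer bottleneck blocks, but the skip connection works the same way.

    import torch
    import torch.nn as nn

    class ResidualBlock(nn.Module):
        """Minimal residual block: output = relu(F(x) + x).

        The skip connection ("+ x") is what lets very deep networks
        such as ResNet-50 train without vanishing gradients.
        """
        def __init__(self, channels):
            super().__init__()
            self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
            self.bn1 = nn.BatchNorm2d(channels)
            self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
            self.bn2 = nn.BatchNorm2d(channels)
            self.relu = nn.ReLU()

        def forward(self, x):
            out = self.relu(self.bn1(self.conv1(x)))
            out = self.bn2(self.conv2(out))
            return self.relu(out + x)  # the residual (skip) connection

    # Quick check: shapes are preserved, so blocks can be stacked freely.
    block = ResidualBlock(64)
    print(block(torch.randn(1, 64, 56, 56)).shape)  # torch.Size([1, 64, 56, 56])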

The graph below shows the increase in images per second as GPUs are added to a SuperNODE.

GigaIO SuperNODE Testing, ResNet50 Performance on a Single Node

HPC-Specific Test Results

Over the weekend I got to test #FluidX3D on the world's largest #HPC #GPU server, #GigaIOSuperNODE. Here is one of the largest #CFD simulations ever, the Concorde for 1s at 300km/h landing speed. 40 *Billion* cells resolution. 33h runtime on 32 @AMDInstinct MI210, 2TB VRAM.
🧵1/5 pic.twitter.com/it6EPsrr1g

— Dr. Moritz Lehmann (@ProjectPhysX) August 1, 2023
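
Those figures also permit a rough sanity check of the memory footprint. The arithmetic below is ours, not from the tweet; it simply divides the quoted 2 TB of pooled VRAM by the quoted 40 billion cells:

    # Rough memory-per-cell check for the FluidX3D run quoted above.
    cells = 40e9        # 40 billion lattice cells
    vram = 2e12         # 2 TB of VRAM pooled across the GPUs
    gpus = 32           # AMD Instinct MI210 accelerators

    print(f"{vram / cells:.0f} bytes per cell")          # ~50 bytes/cell
    print(f"{cells / gpus / 1e9:.2f}B cells per GPU")    # ~1.25B cells/GPU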

Initial Validation Test Results

GigaIO’s SuperNODE system was tested with 32 AMD MI210 GPUs on a Supermicro 1U server with dual AMD Milan processors.

  • Hashcat: Workloads that utilize GPUs independently, such as Hashcat, show perfectly linear scaling all the way to the 32 GPUs tested.
  • ResNet50: For workloads that utilize GPU Direct RDMA or peer-to-peer, such as ResNet50, the scale factor is slightly reduced as the GPU count rises. There is a one percent degradation per GPU, and at 32 GPUs, the overall scale factor is 70 percent.

These results demonstrate significantly improved scalability compared to the legacy alternative of scaling the number of GPUs using MPI to communicate between multiple nodes. When testing a multi-node model, GPU scalability is reduced to 50 percent or less (a simple model of the per-GPU degradation is sketched below).
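
One simple way to model the peer-to-peer result is a fixed per-GPU efficiency loss that compounds as GPUs are added. The sketch below is our toy model, not GigaIO's methodology, but it lands in the same ballpark as the measured numbers:

    # Toy model: each added GPU costs ~1% efficiency, compounding.
    def scale_factor(n_gpus, per_gpu_loss=0.01):
        """Fraction of ideal linear scaling retained at n_gpus."""
        return (1 - per_gpu_loss) ** (n_gpus - 1)

    for n in (8, 16, 32):
        print(f"{n:2d} GPUs: {scale_factor(n):.0%} of ideal")
    # -> 8 GPUs: 93%, 16 GPUs: 86%, 32 GPUs: 73% (vs. ~70% measured),
    #    compared with the 50% or less observed for multi-node MPI scaling.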

The following charts show real-world examples of these two use cases:

GigaIO SuperNODE Hashcat test results
GigaIO SuperNODE Testing, ResNet50 Performance on a Single Node

Several benchmarks used in Top500 supercomputer performance testing also demonstrated extraordinary performance:

  • HPL-MxP showed excellent scaling of reduced-precision compute on the SuperNODE, achieving 99.7% of ideal theoretical scaling.
  • HPL testing achieved 95.2% of ideal theoretical scaling.
  • HPCG showed 88% scaling, an excellent result for a memory-bandwidth-bound benchmark. (The scaling-efficiency formula behind these percentages is sketched after the charts below.)
SuperNODE Testing: HPL-MxP Scaling on the GigaIO SuperNODE
SuperNODE Testing: HPL Scaling on the GigaIO SuperNODE
SuperNODE Testing: HPCG Scaling Results on the GigaIO SuperNODE
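
For readers unfamiliar with the metric, "percent of ideal theoretical scaling" compares measured performance against perfect linear scaling from a single-unit baseline. The sketch below states that definition in code; the throughput numbers in the example are hypothetical, not GigaIO's measurements:

    # "Percent of ideal theoretical scaling" as used above (our phrasing of
    # the standard definition; the numbers below are hypothetical).
    def scaling_efficiency(perf_n, perf_1, n):
        """Measured performance at n units vs. perfect linear scaling."""
        return perf_n / (n * perf_1)

    # Hypothetical example: 1 GPU -> 10 TFLOPS, 32 GPUs -> 305 TFLOPS.
    print(f"{scaling_efficiency(305, 10, 32):.1%}")  # 95.3% of ideal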

Democratizing Access to AI and HPC’s Most Expensive Resources

Off-the-shelf alternatives that match SuperNODE’s accelerator-centric performance are impractical, if not cost-prohibitive, for most organizations.

  • SuperNODE drastically reduces AI costs: A 32-GPU deployment with a standard 4-GPU-per-server configuration would require a total of eight servers, at an average cost of $25,000 apiece; consolidating onto a SuperNODE eliminates seven of them, saving roughly $175,000 in server costs alone, not including the cost of the GPUs (see the arithmetic sketch after this list). Eliminating per-node software licensing costs yields additional savings.
  • SuperNODE delivers significant savings on power consumption and rack space: Eliminating seven servers saves approximately 7 kW, with additional power savings in the associated networking equipment, all while increasing system performance. Compared to 4-GPU servers, SuperNODE delivers a nearly 30% reduction in physical rack space (23U vs. 32U).
  • SuperNODE keeps code simple: An eight-server, 32-GPU system would require significant rewrites of higher-order application code to scale data operations, adding complexity and cost to deployment.
  • SuperNODE shortens time-to-results: Replacing multiple servers connected via legacy networks and MPI with native intra-server peer-to-peer capabilities delivers significantly better performance.
  • SuperNODE provides the ultimate in flexibility: When workloads need only a few GPUs, or when several users need accelerated computing, SuperNODE GPUs can easily be distributed across several servers.
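
The cost, power, and space bullets above reduce to simple arithmetic. The sketch below reproduces the quoted figures; the roughly 1 kW per eliminated server is our inference from the stated 7 kW total:

    # Savings arithmetic for the comparison above:
    # one 32-GPU SuperNODE vs. eight 4-GPU servers.
    servers_eliminated = 8 - 1        # seven servers removed
    cost_per_server = 25_000          # average server cost (USD), per the text

    print(f"server savings: ${servers_eliminated * cost_per_server:,}")  # $175,000
    print(f"power savings: ~{servers_eliminated * 1.0:.0f} kW")          # ~7 kW
    supernode_u, legacy_u = 23, 32    # rack units used
    print(f"rack space reduction: {1 - supernode_u / legacy_u:.0%}")     # ~28%
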
Download the PDF
Learn More

Additional benchmark results will continue to be posted on this page.

Our engineering team continues to work with technology partners to run and validate benchmark tests that demonstrate the benefits of SuperNODE across a range of applications. Sign up below if you’d like to be the first to hear when new benchmarks are published.


Sign up for GigaIO News

