GigaIO Introduces New Scalability for AI Workloads with FabreX™ 2.2 For Dynamically Configured Rack-scale Architectures
Showcasing Scaling Intel® Optane™ at WWT’s Advanced Technology Center
CARLSBAD, CA (April 30, 2021) – GigaIO, the creators of next-generation data center rack-scale architecture for AI and High-Performance Computing solutions, today announced FabreX release 2.2, the industry’s first native PCI Express (PCIe) Gen4 universal dynamic fabric, which supports NVMe-oF, GDR, MPI, and TCP/IP. This new release introduces an industry first in scalability over a PCIe fabric for AI workloads, by enabling the creation of composable GigaPods™ and GigaClusters™ with cascaded and interlinked switches. In addition, FabreX 2.2 delivers performance improvements of up to 30% across all server-to-server communications through new and improved DMA implementations.
In an upcoming virtual roundtable on April 27th, Intel, WWT and GigaIO will discuss how the new scalability challenges for AI workloads can be met through next-gen high-performance solutions like Optane and FabreX. Register here for the event, “AI and the New Scalability of Storage Architecture: How to get there from here”, which will be moderated by HPC expert Addison Snell of Intersect360 Research.
By breaking the barrier of the server box, GigaIO’s technology enables the entire rack to be treated as the compute unit. All I/O resources normally located inside the server (GPUs, FPGAs, Optane high-performance storage) can now be pooled in accelerator or storage appliances where they are available to all the servers in the rack. Components such as Intel Optane SSDs continue to communicate over native PCIe (and CXL in the future), as they would if they were still plugged into the server motherboard, for the lowest possible latency and highest performance.
Now with FabreX 2.2, the basic compute unit, the GigaCell™, can be extended to several racks. This enables faster time to result since workloads run as if they were using components inside one server, but harness the power of many nodes, all communicating within one seamless universal fabric. Leaf-and-spine, dragonfly, and other scale-out topologies are fully supported.
“The GigaIO FabreX environment with Intel Optane SSDs is enabling scalable performance with significantly lower latency than other options for NVMe. The combination of the native PCIe fabric with Optane SSDs unleashes the potential of both solutions, and IT organizations can realize performance benefits and reduced storage bottlenecks across the network,” said Kristie Mann, Senior Director of Product Management for Intel Optane.
Data center managers save the extra cost, setup complexity, and maintenance burden of running several networks (such as InfiniBand and Ethernet) in a rack with a “sea of NICs”, instead deploying PCIe outside the server using only PCIe switches. FabreX’s dynamic disaggregation and composability capability means they can “pay as they grow”, scaling resources independently, instead of overprovisioning to plan their infrastructure years ahead.
Recognizing the breakthrough nature of this innovation and the benefits to their customers deploying AI in their data centers, World Wide Technology (WWT), one of the largest system integrators in the world with over $13 billion in revenues, has now deployed GigaIO’s platform in their Advanced Technology Center (ATC), where customers can test drive the new scalability with GigaIO’s platform deployed with Intel Optane SSDs. “WWT customers can now experience for themselves the ability and benefit to network Optane high-performance storage across several servers without any performance or security penalty — all thanks to the FabreX PCIe universal fabric,” said Earl J. Dodd, Global HPC Business Practice leader at WWT.
“With our revolutionary technology, a true rack-scale system can be created with only PCIe as the network. The implication for HPC and AI workloads, which consume large amounts of accelerators and high-speed storage like Intel Optane SSDs to minimize time to results, is much faster computation, and the ability to run workloads which simply would not have been possible in the past,” said Alan Benjamin, CEO of GigaIO.
GigaIO further breaks one of the barriers to larger-scale adoption of disaggregated infrastructure – vendor lock-in – by demonstrating a commitment to open standards: several off-the-shelf software options are available from a number of vendors to compose resources in FabreX, instead of yet another proprietary single pane of glass with its licensing fees.
Industry analyst Addison Snell of Intersect360 Research confirms the need for a composable universal fabric. “With analytics, AI, and new technologies to consider, organizations are finding their IT infrastructure needs to span new dimensions of scalability: across different workloads, incorporating new processing and storage options, following multiple standards, at full performance,” said Snell. “The data-centric composability of FabreX is aimed at solving this challenge, now and into the future.”
For more details and to learn how this new scalability can help data scientists get to results faster, while controlling budgets and making Optane performance available to more users, register for GigaIO’s April 27th Roundtable, read the primer on WWT’s website on “Rack-scale Composable Infrastructure with Intel Optane SSDs”, and/or go to www.gigaio.com to schedule a demo.
GigaIO has invented the first truly composable cloud-class software-defined universal infrastructure fabric, empowering users to accelerate workloads on-demand, using industry-standard PCI Express technology. The company’s patented network technology optimizes cluster and rack system performance, and greatly reduces total cost of ownership. With the innovative GigaIO FabreX™ open architecture, data centers can scale up or scale out the performance of their systems, enabling their existing investment to flex as workloads and business change over time. For more information, contact email@example.com or visit www.gigaio.com.