Fabric Switch
Hyper-Performance Network

Highlights
Through an all-new architecture, GigaIO™ offers a hyper-performance network that enables a unified, software-driven composable infrastructure. Disaggregation and composability meet the demands of new data-intensive applications by dynamically assigning resources to match changing workloads.
The GigaIO Fabric Switch is the fundamental building block of GigaIO’s AI fabric, enabling true Software-Defined Infrastructure (SDI).
The Fabric Switch communicates with host drivers to identify and coordinate resources required by the hosts, then quickly connects the respective resources. Choose from a variety of switch software topologies to provide the cluster configurations, management, and control you need.
Connections between compute, storage, and application accelerator resources in GigaIO’s AI fabric are implemented with the robust, packetized communication protocol of industry-standard PCI Express (PCIe).
Fabric management is administered using the DMTF’s open-standard Redfish® RESTful APIs, which provide an easy-to-use interface for configuring computing clusters on the fly.
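As an illustrative sketch only (not GigaIO’s documented workflow), the snippet below queries a Redfish service root and the standard Redfish CompositionService resources using Python’s requests library; the switch address and credentials are placeholders.

```python
# Hypothetical example: browsing a Redfish service with the "requests" library.
# The host address and credentials below are placeholders, not real defaults.
import requests

FABRIC = "https://fabric-switch.example.com"   # hypothetical switch address
AUTH = ("admin", "password")                   # placeholder credentials

# Every Redfish service exposes a discoverable service root document.
root = requests.get(f"{FABRIC}/redfish/v1/", auth=AUTH, verify=False).json()
print(root.get("Name"), root.get("RedfishVersion"))

# Standard Redfish composability: list the resource blocks that are
# available to be composed into a logical system.
blocks = requests.get(
    f"{FABRIC}/redfish/v1/CompositionService/ResourceBlocks",
    auth=AUTH, verify=False,
).json()
for member in blocks.get("Members", []):
    print(member["@odata.id"])
```

The CompositionService, ResourceBlocks, and service-root paths shown are part of the published Redfish schema; the specific composition workflow exposed by the Fabric Switch is not detailed in this document.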
The non-blocking ports feature latency of less than 130 ns, delivering higher throughput and the lowest latency in the industry.
Each port can be configured as a single x8 link (256 Gb/s), or two ports can be aggregated into a single x16 link (512 Gb/s).
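A quick back-of-the-envelope check of the stated link rates, assuming PCIe Gen5 signaling at 32 GT/s per lane (the PCIe generation is an assumption and is not stated in this datasheet):

```python
# Illustrative arithmetic only: raw link bandwidth versus lane count,
# assuming PCIe Gen5 signaling at 32 GT/s (~32 Gb/s) per lane.
GT_PER_LANE = 32  # assumed Gen5 raw signaling rate per lane

for lanes in (8, 16):
    print(f"x{lanes} link: {lanes * GT_PER_LANE} Gb/s raw")
# -> x8 link: 256 Gb/s raw
# -> x16 link: 512 Gb/s raw
```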
Upgrade or add compute, storage, and application accelerators at the component level, with plug-and-play integration into your environment. Every major subsystem can now operate on its own upgrade cycle.
The Fabric Switch can unite a far greater variety of resources, connecting GPUs, TPUs, FPGAs, and SoCs to other compute elements or PCIe endpoint devices such as NVMe drives, PCIe-native storage, and other I/O resources. Span multiple servers and multiple racks to scale up single-host systems and scale out multi-host systems, all unified via the Fabric Switch.
GigaIO’s AI fabric allows an individual server to directly access the system memory of every other server in the cluster fabric, creating the industry’s first in-memory network. This unique capability enables Load and Store memory semantics across the interconnect, as sketched below.
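The following is a conceptual sketch of what Load/Store semantics look like from an application’s point of view: remote memory is mapped into the local address space and accessed with ordinary reads and writes. The device path is hypothetical, and the actual mechanism for exposing a remote server’s memory window is not described in this datasheet.

```python
# Conceptual sketch only: Load/Store access to a mapped remote-memory window.
# "/dev/fabric_window0" is a hypothetical device node used for illustration.
import mmap
import os

WINDOW_PATH = "/dev/fabric_window0"   # hypothetical mapped-memory window
WINDOW_SIZE = 4096

fd = os.open(WINDOW_PATH, os.O_RDWR)
try:
    # Map the remote memory window into this process's address space.
    window = mmap.mmap(fd, WINDOW_SIZE)

    # A plain store: bytes written here land in the remote server's
    # system memory, with no send/receive calls involved.
    window[0:4] = b"ping"

    # A plain load: read the same region back through the interconnect.
    print(window[0:4])

    window.close()
finally:
    os.close(fd)
```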