RS4024 – Top of Rack PCIe Switch (Gen 4)
Hyper-Performance Network
Highlights
Through an all-new architecture, GigaIO™ offers a hyper-performance network that enables a unified, software-driven composable infrastructure. Disaggregation and composability meet the demands of new data-intensive applications and dynamically assign resources to match changing workloads.
The FabreX™ Switch is the fundamental building block of the FabreX network for true Software Defined Infrastructure (SDI). FabreX Gen 4 doubles the capacity of FabreX Gen 3 and is backward compatible with all Gen 3.0 hosts and devices.
The Switch communicates with FabreX host drivers to identify and coordinate resources required by the hosts, then quickly connects the respective resources. Choose from a variety of switch software packages to provide the cluster configurations, management and control you need.
Connections between compute, storage and application accelerator resources in the GigaIO FabreX network are implemented with the rugged, packetized communication protocol of industry-standard PCI-Express.
FabreX networking is administered using DMTF open-source Redfish® APIs that provide an easy-to-use interface for configuring computing clusters on-the-fly.
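As a sketch of what Redfish-based administration looks like, the snippet below builds a standard DMTF Redfish zone-composition request that binds a set of fabric endpoints (for example, a host port and a GPU) into one zone. The service-root layout follows the Redfish convention (`/redfish/v1/Fabrics/...`); the host name, fabric ID, and endpoint IDs are illustrative assumptions, not documented GigaIO values.

```python
import json

# Hypothetical Redfish service root for a FabreX switch (assumed host name).
BASE = "https://fabrex-switch.example.com/redfish/v1"

def compose_zone_request(fabric_id, zone_name, endpoint_ids):
    """Build the URL and JSON body for a POST that creates a fabric zone
    linking the given endpoints together (standard Redfish Zone schema)."""
    url = f"{BASE}/Fabrics/{fabric_id}/Zones"
    body = {
        "Name": zone_name,
        "Links": {
            "Endpoints": [
                {"@odata.id": f"{BASE}/Fabrics/{fabric_id}/Endpoints/{eid}"}
                for eid in endpoint_ids
            ]
        },
    }
    return url, json.dumps(body)

# Example: compose a host and a GPU into one zone (IDs are placeholders).
url, payload = compose_zone_request("PCIe", "gpu-host-zone", ["Host1", "GPU3"])
print(url)
```

The request would be sent with an authenticated HTTP POST; tearing the zone down again is a DELETE on the created resource, which is what makes on-the-fly recomposition scriptable.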
The non-blocking ports deliver port-to-port latency of under 110 ns, for higher throughput and the lowest latency in the industry.
This new generation with PCIe Gen 4.0 delivers up to 512 Gbit/s transmission rates per port at full duplex, soon to scale up to 1024 Gbit/s with PCIe Gen 5.0.
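The per-port figures above can be checked back-of-the-envelope, assuming a x16 port and quoting the raw signaling rate (before 128b/130b encoding overhead); the lane count and the full-duplex convention are assumptions made explicit here.

```python
# Raw full-duplex bandwidth of a x16 PCIe port, per generation.
GEN4_GT_PER_LANE = 16  # PCIe Gen 4.0: 16 GT/s per lane
GEN5_GT_PER_LANE = 32  # PCIe Gen 5.0: 32 GT/s per lane
LANES = 16             # x16 port (assumed)
DIRECTIONS = 2         # full duplex counts both directions

gen4_full_duplex = GEN4_GT_PER_LANE * LANES * DIRECTIONS  # Gbit/s
gen5_full_duplex = GEN5_GT_PER_LANE * LANES * DIRECTIONS  # Gbit/s
print(gen4_full_duplex, gen5_full_duplex)  # → 512 1024
```

Effective payload bandwidth is slightly lower once 128b/130b encoding and protocol overhead are subtracted.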
Every port of the FabreX Switch interfacing with the Host is equipped with DMA engines for full-duplex data traffic. Virtual channels and traffic classes with egress port arbitration contribute to QoS features of the FabreX network.
Upgrade or add compute, storage and application accelerators at the component level that plug and play with your environment. Every major subsystem can now operate on its own upgrade cycle.
The Switch can unite a far greater variety of resources, connecting GPUs, TPUs, FPGAs and SoCs to other compute elements or PCI endpoint devices, such as NVMe, PCIe native storage, and other I/O resources. Span multiple servers and multiple racks to scale up single-host systems and scale out multi-host systems, all unified via the FabreX Switch.
The FabreX network allows for direct memory access by an individual server to system memories of all other servers in the cluster fabric, for the industry’s first in-memory network. This unique capability enables Load and Store memory semantics across the interconnect.