GigaIO offers the only cloud-class, enterprise-class, open-standards composable infrastructure solution. While other vendors offer proprietary hardware and software, or “open” solutions that require yet another pane of glass and proprietary software licenses, GigaIO is committed to open standards and contributes to Linux, the PCIe standard, and the CXL Consortium. With FabreX you get freedom of choice: freedom to choose the orchestration software that best fits your needs, and freedom to build your infrastructure using your preferred vendor and model for servers, GPUs, FPGAs, storage, and any other PCIe resource in your rack.
FabreX Universal Dynamic Fabric Overview
FabreX is the only fabric that enables complete disaggregation and composition of all the resources in your rack. In addition to composing resources to servers, only FabreX can compose your servers over PCIe (and CXL in the future), avoiding the cost, complexity, and latency penalty of switching to Ethernet or InfiniBand within the rack.
Rack-Scale Computing Made Simple
The FabreX universal dynamic fabric delivers on the promise of rack-scale computing by breaking the server chassis barrier. You can share and compose all the components in a rack, and beyond, on the fly, based on the demands of individual workloads. For ease of deployment, the GigaPod Engineered Solutions include NVIDIA Bright Cluster Manager and everything you need to painlessly scale from a basic GigaPod™ to GigaClusters™ and beyond, delivering the agility of the cloud at a fraction of the cost.
One of the key advantages of a FabreX-powered composable infrastructure is freedom of choice in how you disaggregate and recompose your rack resources. For the utmost in control, our robust CLI (Command Line Interface) and Redfish interfaces drop straight into your existing DevOps workflow. For a unified, single pane of glass, Bright Cluster Manager or SuperCloud Composer lets you manage your entire infrastructure seamlessly from a GUI with advanced, automated resource scheduling. You get to choose the tool best suited to your environment.
Our host software enables server-to-server communication over FabreX for protocols such as NVMe-oF, MPI, Libfabric, and TCP/IP. It is open source, supports all popular Linux distributions, and can be readily downloaded from our support portal.
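Because these are standard interfaces, applications need no fabric-specific changes. As a minimal sketch, assuming a two-node cluster with mpi4py installed, an ordinary MPI ping-pong measures latency over whatever transport MPI is configured to use, FabreX included:

    # Minimal MPI ping-pong between two ranks, written with mpi4py.
    # Nothing here is FabreX-specific: standard MPI code runs over the
    # fabric unchanged, which is the point of a standards-based design.
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    buf = np.zeros(1024, dtype=np.uint8)   # 1 KiB payload
    iters = 1000

    comm.Barrier()
    start = MPI.Wtime()
    for _ in range(iters):
        if rank == 0:
            comm.Send(buf, dest=1, tag=0)
            comm.Recv(buf, source=1, tag=0)
        elif rank == 1:
            comm.Recv(buf, source=0, tag=0)
            comm.Send(buf, dest=0, tag=0)
    elapsed = MPI.Wtime() - start

    if rank == 0:
        # One iteration is a full round trip (two messages).
        print(f"average one-way latency: {elapsed / iters / 2 * 1e6:.2f} us")

Run with, for example, mpirun -np 2 python pingpong.py (the filename is ours); whether the bytes move over FabreX or any other MPI transport is invisible to the script.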
Our switch software engine drives the performance and dynamic composability of GigaIO’s composable disaggregated infrastructure (CDI) for enterprise data centers and high-performance computing environments.
The FabreX Switch is the fundamental building block of the FabreX network for true CDI. Our newest Gen4 switch features per-port bandwidth of up to 256 Gb/s with 140 ns port-to-port latency.
Administer your FabreX network using DMTF open-source Redfish® APIs, which provide an easy-to-use interface for configuring computing clusters on the fly.
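As a hedged illustration of what driving the fabric through Redfish can look like, the Python sketch below walks a service root using the standard requests library; the management address, credentials, and which collections the switch actually exposes are assumptions, so consult the FabreX Redfish documentation for the real schema:

    # Illustrative Redfish walk using the standard 'requests' library.
    # The address, credentials, and which collections appear are
    # placeholders, not the documented FabreX schema.
    import requests

    BASE = "https://fabrex-switch.example.com"   # hypothetical switch address
    AUTH = ("admin", "password")                 # placeholder credentials

    def get(path):
        # BMC-style endpoints commonly use self-signed certificates,
        # hence verify=False in this sketch.
        resp = requests.get(f"{BASE}{path}", auth=AUTH, verify=False)
        resp.raise_for_status()
        return resp.json()

    # Every Redfish service exposes /redfish/v1 as its service root.
    root = get("/redfish/v1")
    print("Service:", root.get("Name"), "Redfish", root.get("RedfishVersion"))

    # Composable resources typically hang off collections defined by the
    # DMTF schemas, such as Fabrics, CompositionService, or Chassis.
    for key in ("Fabrics", "CompositionService", "Chassis"):
        if key in root:
            print(key, "->", root[key]["@odata.id"])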
FabreX PCIe Network Adapter Card and Cables
The FabreX Network Adapter is the high-performance, cabled interface to cluster subsystems across the FabreX hyper-performance network. The adapter card supports both host and target (PCIe I/O) modes. With the card installed, applications can access remote PCIe devices as if they were attached to the local system.
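Because composed devices show up as ordinary local PCIe devices, no special tooling is needed to find them. A minimal sketch, assuming a Linux host, lists them straight from sysfs:

    # List PCIe devices via Linux sysfs. A GPU or NVMe drive composed to
    # this host over FabreX appears here exactly like a locally plugged
    # card, so ordinary enumeration is all that is needed.
    from pathlib import Path

    for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
        vendor = (dev / "vendor").read_text().strip()
        device = (dev / "device").read_text().strip()
        pci_class = (dev / "class").read_text().strip()
        print(f"{dev.name}  vendor={vendor}  device={device}  class={pci_class}")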
All elements on the FabreX network are interconnected using standardized, robust, and easy-to-use copper and active optical cables, with supported connection lengths ranging from 1 m to 100 m.
Managed Accelerator Pooling Appliance
This expansion chassis is a rack-mount, disaggregated compute accelerator enclosure with space and power for up to 10 PCIe Gen 3 or Gen 4 x16 accelerator cards (GPUs, FPGAs, or custom ASICs). The appliance's advanced management features (Gen4 only) enable per-slot power control and out-of-band telemetry from the cards.
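As a sketch of what collecting that telemetry out-of-band might look like, the code below reads power draw through the standard DMTF Redfish Chassis/Power schema; the appliance address, credentials, and exact resource layout are our assumptions, not the documented product interface:

    # Sketch: read per-chassis power telemetry through the DMTF Redfish
    # Chassis/Power schema. Address, credentials, and resource layout are
    # assumptions; the appliance's actual schema may differ.
    import requests

    BASE = "https://pooling-appliance.example.com"  # hypothetical BMC address
    AUTH = ("admin", "password")                    # placeholder credentials

    def get(path):
        resp = requests.get(f"{BASE}{path}", auth=AUTH, verify=False)
        resp.raise_for_status()
        return resp.json()

    chassis = get("/redfish/v1/Chassis")
    for member in chassis["Members"]:
        cid = member["@odata.id"]
        power = get(f"{cid}/Power")                 # standard Power resource
        for ctrl in power.get("PowerControl", []):
            print(f"{cid}: {ctrl.get('PowerConsumedWatts')} W consumed")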
Managed High Performance Storage Pooling Appliance
This expansion chassis is ideal for building a high-performance NVMe flash array or JBOF (Just a Bunch of Flash), or even a pool of computational storage units. It holds up to 32 2.5-inch drives, with 1+1 redundant 1000 W high-efficiency 80 PLUS Titanium PSUs, delivering high throughput, low latency, and high availability for resource sharing.