A July 2020 Gartner study found that upwards of 18% of surveyed IT decision makers viewed composable infrastructure as “just hype”. Does that describe how you view the “Dynamic Composable Infrastructure” that many vendors are touting (or CDI, Composable Disaggregated Infrastructure, for the complete mouthful)?
Let’s first define what composability means for the IT professional. Broadly speaking, it promises to improve efficiency and “infrastructure agility”. What does that even mean? Gartner describes it as “an emerging architecture for server, storage and network hardware that creates physical systems from shared pools of disaggregated resources using an API”. Imagine all your data center resources as so many Lego pieces you can arrange, and rearrange, as your users’ workloads require. Instead of only being accessible as part of the many static silos in traditional data center racks or pods, each SSD, each CPU node and each GPU or FPGA sits on a virtual shelf, waiting to be picked up and assembled in whichever configuration the current workload requires. So, what’s in it for you?
- Much lower TCO (Total Cost of Ownership). Our TCO modeling tool allows you to plug in your existing rack resources and find out just how much you can save by implementing composability in your racks; savings of around 50% are typical;
- Higher efficiency from existing resources, by coaxing a higher utilization rate from what you already own. Expensive resources like GPUs are only used about 15% to 20% of the time, yet your users clamor for more of them to enjoy the huge time savings and reduced time to solution that accelerators deliver. By composing your existing GPUs, you can effectively share them across all of your users and get away with much lower CapEx and OpEx;
- Immediate resource availability for episodic workloads. Composability happens on the fly, as needed, and almost instantaneously. No need to overprovision for the peaks: share the pooled resources across users;
- You are not hamstrung by intermittent funding cycles: you can upgrade individual pieces of infrastructure as and when required, and simply recompose your resources as needed;
- Flexibility, aka agility: racks can flex to meet workloads of all types and sizes.
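The utilization argument above can be made concrete with a back-of-the-envelope calculation. The figures below (server count, utilization rate, peak headroom) are illustrative assumptions in the spirit of the 15%–20% number quoted above, not measurements:

```python
import math

# Illustrative math: how many pooled GPUs serve the same demand as one
# dedicated GPU per server?  All figures are example assumptions.
servers = 20          # servers whose users occasionally need a GPU
utilization = 0.20    # each dedicated GPU is busy ~20% of the time
headroom = 1.5        # extra pooled capacity to absorb demand peaks

# Aggregate demand, expressed in "GPU-equivalents" of busy time.
demand = servers * utilization                  # 4.0 GPU-equivalents

# Pooled GPUs required, rounded up after adding peak headroom.
pooled_gpus = math.ceil(demand * headroom)      # 6 GPUs instead of 20

print(f"Dedicated GPUs:  {servers}")
print(f"Pooled GPUs:     {pooled_gpus}")
print(f"CapEx reduction: {1 - pooled_gpus / servers:.0%}")
```

Even with 50% headroom for peaks, pooling turns twenty dedicated GPUs into six shared ones in this sketch, which is where the CapEx and OpEx savings come from.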
What’s the catch? Broadly, two-fold:
- With most vendors’ implementations of composable infrastructure, you are stuck with proprietary technology and hardware to realize the full benefits – what Gartner calls “vendor-specific solutions, resulting in another data center silo”;
- In most cases, you pay a “composability tax”: hardware latency accumulates at each hop between building blocks, degrading overall system performance to the point of diminishing returns.
With GigaIO you avoid those pitfalls:
- Our platform is open and based on the Redfish API, not another proprietary software package you need to pay license fees for and administer as yet another pane of glass. The Redfish standard is designed to deliver simple and secure management for converged, hybrid IT and the Software-Defined Data Center (SDDC). Several enterprise-class composability and orchestration software options are available from a variety of vendors, depending on your needs;
- We are completely hardware and software agnostic: use your favorite server, accelerator, storage, container, or Hyper-Converged Infrastructure supplier;
- There is no performance penalty or composability tax with FabreX, because ours is the only native PCIe fabric throughout the rack, including node to node. The ultra-low latency inherent in PCIe remains throughout the disaggregation and re-aggregation process, because we never need to switch to InfiniBand or Ethernet within the rack, even to communicate server to server. As a result, the full end-to-end latency remains below a microsecond.
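To make the open-standard point concrete, here is a minimal sketch of what a “specific composition” request looks like under the DMTF Redfish composability model: the client picks disaggregated resource blocks and asks the service to bind them into one composed system. The host name and resource-block IDs are illustrative placeholders, not real endpoints; consult your orchestration software or the Redfish specification for the exact flow:

```python
import json

# Illustrative Redfish service root (placeholder, not a real endpoint).
REDFISH = "https://fabric-manager.example.com/redfish/v1"

def composition_payload(name, block_ids):
    """Build the JSON body for a Redfish specific-composition request.

    Each resource block (a CPU node, a GPU, an SSD shelf...) is referenced
    by its @odata.id under the CompositionService.
    """
    return {
        "Name": name,
        "Links": {
            "ResourceBlocks": [
                {"@odata.id": f"/redfish/v1/CompositionService/ResourceBlocks/{b}"}
                for b in block_ids
            ]
        },
    }

# Compose a hypothetical training node from one CPU block and two GPU blocks.
payload = composition_payload("ml-train-01",
                              ["ComputeBlock1", "GPUBlock3", "GPUBlock4"])
print(json.dumps(payload, indent=2))
# An HTTP client would POST this body to f"{REDFISH}/Systems"; tearing the
# composed system down again is a DELETE on the system's URI.
```

Because the interface is a published standard rather than a proprietary control plane, any Redfish-aware orchestration tool can drive this compose/decompose cycle.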
Find out more about how to easily share expensive resources between multiple servers, or request a demo to see how you too could compute more and spend less.