System architects and DevOps staff are being bombarded with a slew of acronyms, all purporting to make their lives easier, their bosses happier, and their users quieter. From IaaS to SDC, the alphabet soup only grows by the day. Have you yet heard about CDI, aka Composable Disaggregated Infrastructure, the promise to transform silos in data centers into infinitely movable pieces, as easy to assemble and reassemble as a Lego set? Frost & Sullivan described it as follows: “A Composable Infrastructure enables you to manage your infrastructure resources (physical, virtual, general-purpose, application-optimized, on-premises, and cloud) to deliver a better mix of performance, security, scalability, and cost for your workloads. It’s as if a child’s set of Lego bricks came with the ability to replicate blocks as needed and programmed instructions to configure those blocks into a ninja temple today and a working race car tomorrow”.
Today let’s examine the “D” part of the acronym. It turns out that even if you don’t plan or need to modify your data center configuration on the fly to accommodate various workloads, you might still benefit from the capability to free up resources trapped in a server, or to pool resources located in physically separate racks.
What is disaggregation? Simply put, it is the ability to individually address your accelerator, memory, and storage resources, and to virtually or physically separate them from the server or rack where they are housed. Why might you want to do that?
- To extend the product life cycle of your most expensive resources. Accelerators or SSDs are not trapped within a server – upgrading or replacing an individual component no longer implies throwing the entire server out;
- To scale your resources independently. You might have plenty of computing power but need more storage, or GPUs, without necessarily needing more CPUs;
- To pay as you grow. One thing you know for sure is that your users will have intermittent peaks in demand over the next planning horizon. Instead of overprovisioning to accommodate those peaks, add just the resources you need when you need them;
- To decouple purchasing decisions. With a disaggregated rack you can refresh individual resources as needed – for example, adding the latest GPU model to an existing cluster;
- To modify your CPU to GPU ratio as needed. Different workloads may need different combinations of compute resources;
- To give your users access to different types of GPUs. For compute-heavy tasks you might want to give them access to a set of V100s, for example, but for visualization a different model (like the RTX 8000) might be a better fit;
- To manage your software license expenditures more efficiently. Instead of paying for another core and another license when all you need is more accelerators, just add GPUs to your existing CPU license;
- To improve serviceability. GPUs may fail more often than CPUs, and with a disaggregated GPU bank in a JBOG (Just a Bunch of GPUs), there is no need to open up the servers to swap a failed card;
- Finally, to keep it simple. By moving the GPUs and FPGAs, which can use a lot of power and generate lots of heat, to an enclosure outside the server, you get back the robustness and simplicity of a workhorse server.
If you’d like to find out more about how GigaIO can help with any of the above situations, request a demo. You might be surprised at how easy FabreX is to deploy.