Smackdown! Does HPC Need Composable Infrastructures Now?
PANEL
TUESDAY, 1:30-3:00pm (CST)
ROOM C147-148-154

In this fast-paced, lively, and highly interactive format, two HPC industry analysts will lead “Pro” and “Con” teams, with the audience deciding who wins. A typical debate format will feature opening statements, rebuttals, and tricky questions aimed at tripping up the other team. There will also be plenty of time for audience questions and comments.

Organized by GigaIO
Team PRO
  • Dan Olds, Intersect360 Research (team captain)
  • Alan Sill, Texas Tech University (TTU)
  • Dan Stanzione, Texas Advanced Computing Center (TACC)
  • Maria del Carmen Ruiz Varela, AMD
  • Frank Würthwein, San Diego Supercomputer Center (SDSC)
vs
Team CON
  • Addison Snell, Intersect360 Research (team captain)
  • Gary Grider, Los Alamos National Lab (LANL)
  • Andrew Jones, Microsoft Azure
  • Glenn Lockwood, Microsoft Azure
  • Ruth Marinshaw, Stanford University
Why Have a Debate About Composable Infrastructures?

AI is driving heterogeneous compute, with specific workflows needing radically different hardware configurations. Composable infrastructure purports to eliminate the restrictions imposed by traditional static architectures by allowing hardware resources to be dynamically assigned to workloads rather than being tied to physical servers. The promise is higher efficiency for high-cost components and the ability to build otherwise “impossible servers”. But can it scale, and is the added cost really recouped through increased flexibility and utilization?
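
For readers new to the concept, here is a minimal sketch, in plain Python, of what “composing” a server means: a workload claims devices from a shared, fabric-attached pool, uses them, and returns them so the next workload can be given a different mix. The classes and device names below are purely illustrative assumptions and do not represent GigaIO’s (or any vendor’s) actual API.

```python
# Illustrative model only -- these classes are hypothetical and do not
# correspond to any real composability API.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Device:
    kind: str    # e.g. "gpu", "fpga", "nvme"
    model: str   # placeholder model names are used below

@dataclass
class ResourcePool:
    free: list[Device] = field(default_factory=list)

    def claim(self, kind: str, count: int) -> list[Device]:
        """Detach `count` free devices of the requested kind from the pool."""
        picked = [d for d in self.free if d.kind == kind][:count]
        if len(picked) < count:
            raise RuntimeError(f"only {len(picked)} free {kind} device(s) available")
        for d in picked:
            self.free.remove(d)
        return picked

    def release(self, devices: list[Device]) -> None:
        """Return devices to the pool for the next workload."""
        self.free.extend(devices)

# A small pool of fabric-attached accelerators.
pool = ResourcePool([Device("gpu", "accel-a"), Device("gpu", "accel-a"),
                     Device("gpu", "accel-a"), Device("fpga", "accel-b")])

# Compose a GPU-heavy node for a training workflow...
training_node = pool.claim("gpu", 3)
# ... run the workload ...
pool.release(training_node)            # devices go back to the pool

# ...then recompose a different mix for an inference workflow.
inference_node = pool.claim("gpu", 1) + pool.claim("fpga", 1)
pool.release(inference_node)
```

The “pro” position is essentially that this kind of on-demand recomposition raises utilization of expensive devices; the “con” position is that the fabric and management layer it requires add cost and complexity of their own.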

Each team has been crafting persuasive arguments, evidence, and proof points, and setting its debate strategy to develop the points and counterpoints that will win its side of the debate. In brief, the two positions can be summed up as:

The Case For Composable Infrastructures (Pro)
  • Composable infrastructures can configure systems to match the needs of particular workloads and be dynamically changed on the fly.
  • New accelerators (GPUs, IPUs, DPUs, FPGAs, etc.) are constantly being introduced, and composability allows users to mix and match the right numbers of various accelerators to efficiently accelerate workloads.
  • With composable infrastructure, HPC users will see higher performance, higher utilization rates, and a better return on investment.
The Case Against Composable Infrastructures (Con)
  • HPC systems are already highly utilized and composability won’t improve the situation.
  • Adding composability is an expensive and complex solution in search of a problem that doesn’t exist in today’s HPC.
  • Does the added expense of including composability outweigh its expected benefits?
  • Can it scale enough to do any good?
And The Winner Is…

At the conclusion of the debate, the audience will be asked to vote on:

  • Which team won the debate?
  • Did the conversation form, change, or solidify your opinion?
  • How relevant was the discussion to the problems you are currently trying to solve?

Make sure you’re in the room to help decide the outcome!

Requires SC22 Technical Program Badge to attend.

Curious to see the results?

Moderators
Dan Olds
Chief Research Officer, Intersect360 Research

Dan Olds is a veteran of the HPC industry with more than 25 years of experience in the high-end server market and as an industry analyst. As Chief Research Officer of Intersect360 Research, Dan leads the demand-side and supplier-driven data analysis practice for the firm’s forward-looking market intelligence subscription service. He also supports a range of client-specific services including custom research studies and strategic consulting. In addition to server, storage, and network technologies, Dan closely follows the HPC, AI/ML, and cloud markets. Dan co-hosts the mildly popular Radio Free HPC podcast, is the go-to person for coverage and analysis of the supercomputing industry’s Student Cluster Competitions, and is an organizer of the Winter Classic student cluster competitions.

Addison Snell
Co-Founder and CEO, Intersect360 Research

Addison Snell is a veteran of the high performance computing industry. He launched his company in 2007 as Tabor Research, a division of Tabor Communications, and took it independent in 2009 as Intersect360 Research together with his partner, Christopher Willard, Ph.D. Under his leadership, Intersect360 Research has become a premier source of market information, analysis, and consulting for the high-performance computing (HPC) and hyperscale industries worldwide. Addison was named one of 2010’s “People to Watch” by HPCwire. Prior to Intersect360 Research, Addison was an HPC industry analyst for IDC. He originally gained industry recognition as a marketing leader and spokesperson for SGI’s supercomputing products and strategy.

Panelists
Gary Grider – High Performance Computing Division Leader, Los Alamos National Lab

Gary Grider is the Leader of the High Performance Computing (HPC) Division at Los Alamos National Laboratory. Los Alamos’ HPC Division operates one of the largest supercomputing centers in the world focused on US National Security for the US/DOE National Nuclear Security Administration. As Division Leader, Gary is responsible for all aspects of High Performance Computing technologies and deployment at Los Alamos. Additionally, Gary is responsible for managing the R&D portfolio for keeping the new technology pipeline full to provide solutions to problems in the Lab’s HPC environment, through funding of university and industry partners. Gary has 27 granted patents and over 16 pending in the data storage area and has been working in HPC and HPC related storage since 1984.

Andrew Jones – Future Capabilities for HPC & AI, Azure Engineering & Product Group, Microsoft Azure

Andrew Jones is planning future HPC & AI capabilities for Azure as part of the corporate engineering & product group. He joined Microsoft in early 2020, after nearly 25 years of experience in supercomputing. Andrew has been an HPC end-user, researcher, software developer, HPC service manager, and impartial consultant. He has been a trusted voice on HPC strategy, technology evaluation and benchmarking, metrics, cost/value models, and more. He has been lucky to have had rare exposure to the state of practice in a wide range of HPC services and facilities across industry, government, and academia around the world. Andrew is active on Twitter as @hpcnotes.

Glenn Lockwood – Principal Product Manager, HPC Storage, Microsoft Azure

Glenn K. Lockwood is a product manager at Microsoft, where he develops HPC storage strategy for Microsoft Azure. His background is in system architecture and I/O performance analysis, and he previously held roles in storage, R&D, and system operations at the National Energy Research Scientific Computing Center at Lawrence Berkeley National Laboratory. Glenn has a formal research background in silica surface chemistry, and he holds a Ph.D. in materials science and a B.S. in ceramic engineering, both from Rutgers University.

Ruth Marinshaw – CTO, Research Computing at Stanford University

Ruth Marinshaw is CTO, Research Computing at Stanford University, where she spends a lot of time with faculty and campus research IT leaders to learn about current and emerging needs, and creates partnerships to evolve cyberinfrastructure (data, storage, visualization, computation, networking and consultation) services to accelerate research at Stanford. Ruth has a background in high performance computing, scientific and statistical application support, statistical programming, and system administration. She is active in various national HPC communities and projects and is also currently co-chair of the National Science Foundation’s Advisory Committee on Cyberinfrastructure.

Alan Sill – Managing Director, High Performance Computing Center at Texas Tech University (TTU)

Alan Sill is the Managing Director of the High Performance Computing Center at Texas Tech University, where he is also adjunct professor of physics. He also co-directs the US National Science Foundation Industry/University Cooperative Research Center for Cloud and Autonomic Computing (CAC). Dr. Sill holds a PhD in particle physics from American University and has an extensive track record of work in scientific computing. He has published in topics spanning cloud and grid computing, scientific computing, particle and nuclear physics, cosmic ray physics and radioisotope analysis. He serves as President of the Open Grid Forum, an international computing standards organization. He is an active member of IEEE, the Distributed Management Task Force, and other computing standards working groups.

Dan Stanzione – Executive Director, Texas Advanced Computing Center (TACC)

Dr. Dan Stanzione, Associate Vice President for Research at The University of Texas at Austin since 2018 and Executive Director of the Texas Advanced Computing Center (TACC) since 2014, is a nationally recognized leader in high performance computing. He is the principal investigator (PI) for several projects including a multimillion-dollar National Science Foundation (NSF) grant to acquire and deploy Frontera, which will be the fastest supercomputer at a U.S. university. Stanzione is also the PI of TACC’s Stampede2 and Wrangler systems, supercomputers for high performance computing and for data-focused applications, respectively. He served for six years as the co-director of CyVerse, a large-scale NSF life sciences cyberinfrastructure in which TACC is a major partner. In addition, Stanzione was a co-principal investigator for TACC’s Ranger and Lonestar supercomputers, large-scale NSF systems previously deployed at UT Austin.

Maria del Carmen Ruiz Varela – HPC Engineer and Senior Member of Technical Staff, AMD

Maria del Carmen Ruiz Varela is an HPC Engineer, formerly with Intel, where she was responsible for RAS system validation for the US DOE’s ALCF Aurora exascale supercomputer (A21). She has experience in cluster validation, integration, and execution in HPC, and extensive software engineering experience supporting mission- and safety-critical applications for the automotive industry in the US and Mexico. Maria has published research on fault tolerance for massively parallel, large-scale systems and on emerging non-volatile memories for embedded systems. She is a member of the SC21 and SC22 Inclusivity Committees. At SC21 she hosted the Hispanic-Latinx Affinity Group and was part of the Beowulf Bash panel on composable computing. Maria holds an M.Sc. in Computer Science from the University of Delaware.

Frank Würthwein – Director of the San Diego Supercomputer Center (SDSC) and Executive Director, Open Science Grid

Frank Würthwein is Director of the San Diego Supercomputer Center and Executive Director of the Open Science Grid, a national cyberinfrastructure to advance the sharing of resources, software, and knowledge. He is also a physics professor at UC San Diego. Frank received his Ph.D. from Cornell in 1995, and after holding appointments at Caltech and MIT, he joined the UC San Diego faculty in 2003. Frank’s research focuses on experimental particle physics and distributed high-throughput computing. His primary physics interests lie in searching for new phenomena at the high energy frontier with the CMS detector at the Large Hadron Collider. Frank’s topics of interest include the search for dark matter, supersymmetry, and electroweak symmetry breaking. As an experimentalist, Frank is interested in instrumentation and data analysis. In the last few years, this meant developing, deploying, and now operating a worldwide distributed computing system for high-throughput computing with large data volumes.


This SC22 Panel was organized by GigaIO.

Chime in to the conversation on Twitter using #SC22Smackdown

