HECN Work at SC19

December 20, 2019

Paul Lang next to the HECN-built equipment

SESDA’s HECN team, working with the Mid-Atlantic Crossroads (MAX), the Naval Research Laboratory (NRL), the 100-GigE Ciena Testbed, the StarLight national and international optical network exchange facility, CenturyLink, Internet2, and SCinet, created a multi-100-GigE network topology for live demos at SuperComputing 2019 in Denver, CO.  This included 4×100-GigE network paths between NASA Goddard and SC19, 2×100-GigE network paths between StarLight and SC19, and 2×100-GigE network paths between NASA Goddard and StarLight. 

At SC19 the HECN team demonstrated the use of NVMe over Fabrics over TCP (NVMe-oF/TCP) technology across a 4×100-GigE wide-area network (WAN) infrastructure as a SCinet Network Research Experiment (NRE). The experiments showcased very high-performance disk-to-memory and disk-to-disk network data transfers between a single high-performance 4×100-GigE NVMe server at SC19 and a single high-performance 2×100-GigE NVMe client at NASA Goddard, with only a moderate level of system CPU utilization on the server. The top result was an aggregate throughput of nearly 200 Gigabits per second (Gbps) on 64 reads across 16 NVMe drives using two 100-GigE WAN links. When doing full disk-to-disk network data transfers using NVMe-oF/TCP to read a remote NVMe drive and then write to a local NVMe drive, the aggregate throughput dropped to about 120 Gbps. This drop may be due to inadequate buffering to handle the slower NVMe write speeds and requires further investigation. The team also performed more traditional network data transfers using the normal Linux TCP/IP network stack; these achieved an aggregate throughput of almost 200 Gbps but consumed all of the available system CPU resources.
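For readers unfamiliar with NVMe-oF/TCP, the sketch below illustrates the general pattern the demos rely on: attach a remote NVMe namespace over TCP with nvme-cli, then stream data from it to a local drive. It is a minimal illustration only; the target address, NQN, device paths, and block size are hypothetical placeholders, and this is not the tooling, configuration, or transfer software used in the SC19 demonstrations.

#!/usr/bin/env python3
# Minimal sketch of an NVMe-oF/TCP disk-to-disk transfer (requires root and nvme-cli).
# The address, NQN, device paths, and sizes below are hypothetical placeholders.
import subprocess

TARGET_ADDR = "192.0.2.10"                # hypothetical NVMe-oF/TCP target address
TARGET_NQN = "nqn.2019-11.example:demo"   # hypothetical subsystem NQN
REMOTE_DEV = "/dev/nvme1n1"               # remote namespace as it appears after connecting
LOCAL_DEV = "/dev/nvme0n1"                # local NVMe drive to write to
BLOCK_SIZE = 4 * 1024 * 1024              # 4 MiB per read/write

def connect_target():
    # Attach the remote NVMe namespace over TCP using nvme-cli (4420 is the standard NVMe/TCP port).
    subprocess.run(
        ["nvme", "connect", "-t", "tcp",
         "-a", TARGET_ADDR, "-s", "4420", "-n", TARGET_NQN],
        check=True,
    )

def copy_disk_to_disk(src_path, dst_path, length):
    # Stream `length` bytes from the remote block device to the local one in large blocks.
    with open(src_path, "rb", buffering=0) as src, open(dst_path, "r+b", buffering=0) as dst:
        remaining = length
        while remaining > 0:
            chunk = src.read(min(BLOCK_SIZE, remaining))
            if not chunk:
                break
            dst.write(chunk)
            remaining -= len(chunk)

if __name__ == "__main__":
    connect_target()
    # Copy the first 1 GiB as a smoke test; a real benchmark would use many parallel streams.
    copy_disk_to_disk(REMOTE_DEV, LOCAL_DEV, 1024 ** 3)

A single buffered copy loop like this would not approach 100 Gbps on its own; the reported rates depend on many parallel streams, multiple NVMe drives, and the kernel-level NVMe-oF/TCP block path.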

Network topology used for the demonstrations at the SuperComputing 2019 (SC19) event in Denver, CO, in November 2019
