Leveraging faster speeds and innovative In-Network Computing, NVIDIA ConnectX smart adapters achieve extreme performance and scale. NVIDIA ConnectX lowers cost per operation, increasing ROI for high-performance computing (HPC), machine learning, advanced storage, clustered databases, low-latency embedded I/O applications, and more.
- High-performance silicon for applications requiring high bandwidth, low latency, and high message rates
- Smart interconnect for x86, Power, Arm, and GPU-based compute and storage platforms
- Advanced performance in virtualized overlay networks (NVGRE and GENEVE)
- Efficient I/O consolidation, lowering data center costs and complexity
- Virtualization acceleration
- Power efficiency
- Scalability to tens of thousands of nodes
I/O virtualization
ConnectX-4 EN SR-IOV technology provides dedicated adapter resources and guaranteed isolation and protection for virtual machines (VMs) within the server. I/O virtualization with ConnectX-4 EN gives data center administrators better server utilization while reducing cost, power, and cable complexity, allowing more VMs and more tenants to share the same hardware.
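As an illustration of how an administrator might instantiate virtual functions (VFs) on a Linux host, the minimal sketch below writes a VF count to the physical function's sriov_numvfs sysfs attribute. The PCI address and VF count shown are placeholders, not values specific to any particular system.

```c
#include <stdio.h>

int main(int argc, char **argv)
{
    /* The PCI address below is a placeholder: substitute the
       adapter's physical function address as shown by lspci. */
    const char *path = (argc > 2) ? argv[2]
        : "/sys/bus/pci/devices/0000:03:00.0/sriov_numvfs";
    const char *numvfs = (argc > 1) ? argv[1] : "8";

    FILE *f = fopen(path, "w");
    if (!f) { perror(path); return 1; }

    /* Writing N asks the driver to instantiate N virtual functions,
       each of which can be passed through to a VM (e.g. via VFIO). */
    fprintf(f, "%s\n", numvfs);
    if (fclose(f) != 0) { perror("fclose"); return 1; }

    printf("requested %s VFs via %s\n", numvfs, path);
    return 0;
}
```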
Overlay networks
In order to better scale their networks, data center operators often create overlay networks that carry traffic from individual virtual machines over logical tunnels in encapsulated formats such as NVGRE and GENEVE. While this solves network scalability issues, it hides the inner TCP packets from the NIC's standard offloading engines, placing a higher load on the host CPU. ConnectX-4 EN effectively addresses this by providing advanced NVGRE and GENEVE hardware offloading engines that encapsulate and de-capsulate the overlay protocol headers, enabling the traditional offloads to be performed on the encapsulated traffic.
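To check whether encapsulated-traffic offloads are active on a Linux interface, a host tool can read the netdev feature bits through the ethtool ioctl. The sketch below is illustrative, not a vendor utility: the interface name defaults to eth0, and it reports only the tx-udp_tnl-segmentation bit, which governs segmentation offload of UDP-tunneled (e.g. GENEVE) traffic.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>

int main(int argc, char **argv)
{
    const char *ifname = (argc > 1) ? argv[1] : "eth0"; /* assumed name */
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct ifreq ifr;
    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);

    /* 1. Ask how many feature names the kernel exposes. */
    struct ethtool_sset_info *sset = calloc(1, sizeof(*sset) + sizeof(__u32));
    sset->cmd = ETHTOOL_GSSET_INFO;
    sset->sset_mask = 1ULL << ETH_SS_FEATURES;
    ifr.ifr_data = (char *)sset;
    if (ioctl(fd, SIOCETHTOOL, &ifr) < 0) { perror("GSSET_INFO"); return 1; }
    __u32 n = sset->data[0];

    /* 2. Fetch the feature name strings. */
    struct ethtool_gstrings *names =
        calloc(1, sizeof(*names) + (size_t)n * ETH_GSTRING_LEN);
    names->cmd = ETHTOOL_GSTRINGS;
    names->string_set = ETH_SS_FEATURES;
    names->len = n;
    ifr.ifr_data = (char *)names;
    if (ioctl(fd, SIOCETHTOOL, &ifr) < 0) { perror("GSTRINGS"); return 1; }

    /* 3. Fetch the feature state bitmaps (32 feature bits per block). */
    __u32 blocks = (n + 31) / 32;
    struct ethtool_gfeatures *feat =
        calloc(1, sizeof(*feat) + blocks * sizeof(feat->features[0]));
    feat->cmd = ETHTOOL_GFEATURES;
    feat->size = blocks;
    ifr.ifr_data = (char *)feat;
    if (ioctl(fd, SIOCETHTOOL, &ifr) < 0) { perror("GFEATURES"); return 1; }

    /* 4. Report the UDP-tunnel segmentation offload bit. */
    for (__u32 i = 0; i < n; i++) {
        const char *name = (const char *)names->data + (size_t)i * ETH_GSTRING_LEN;
        if (strcmp(name, "tx-udp_tnl-segmentation") == 0)
            printf("%s: %s\n", name,
                   (feat->features[i / 32].active & (1U << (i % 32))) ? "on" : "off");
    }

    free(sset); free(names); free(feat); close(fd);
    return 0;
}
```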
RDMA over Converged Ethernet (RoCE)
ConnectX-4 EN supports the RoCE specifications, delivering low latency and high performance over Ethernet networks. Leveraging data center bridging (DCB) capabilities as well as ConnectX-4 EN's advanced congestion-control hardware mechanisms, RoCE provides efficient low-latency RDMA services over Layer 2 and Layer 3 networks.
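Applications reach RoCE through the standard verbs API (libibverbs). As a minimal starting point, the sketch below enumerates RDMA-capable devices and prints a few queried limits; compile with -libverbs, and note that reported device names and limits will vary by adapter and driver.

```c
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num = 0;
    struct ibv_device **list = ibv_get_device_list(&num);
    if (!list) { perror("ibv_get_device_list"); return 1; }

    for (int i = 0; i < num; i++) {
        struct ibv_context *ctx = ibv_open_device(list[i]);
        if (!ctx) continue;

        /* Query a few capability limits that an RDMA application
           would size its resources against. */
        struct ibv_device_attr attr;
        if (ibv_query_device(ctx, &attr) == 0)
            printf("%-12s max_qp=%d max_cq=%d max_mr=%d\n",
                   ibv_get_device_name(list[i]),
                   attr.max_qp, attr.max_cq, attr.max_mr);
        ibv_close_device(ctx);
    }
    ibv_free_device_list(list);
    return 0;
}
```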
Storage acceleration
Storage applications will see improved performance with the high bandwidth that ConnectX-4 EN delivers. Moreover, standard block and file access protocols can leverage RoCE for high-performance storage access. A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks.
Signature handover
ConnectX-4 EN supports hardware checking of T10 Data Integrity Field/Protection Information (T10-DIF/PI), reducing the CPU overhead and accelerating delivery of data to the application. Signature handover is handled by the adapter on ingress and/or egress packets, reducing the load on the CPU at the initiator and/or target machines.
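For reference, the sketch below lays out the 8-byte T10-DIF tuple and computes the guard CRC in software (polynomial 0x8BB7, initial value 0). It is an illustration only: with signature handover this computation and verification happen in the adapter, and the block size and tag values here are arbitrary examples.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* T10-DIF/PI tuple: 8 bytes appended to each logical block
   (commonly 512 B or 4 KiB of data). */
struct t10_pi_tuple {
    uint16_t guard;   /* CRC16 of the block data, polynomial 0x8BB7 */
    uint16_t app_tag; /* application-defined tag */
    uint32_t ref_tag; /* typically the low 32 bits of the LBA */
};

/* Bitwise CRC16 with the T10-DIF polynomial, initial value 0. */
static uint16_t t10_crc16(const uint8_t *buf, size_t len)
{
    uint16_t crc = 0;
    for (size_t i = 0; i < len; i++) {
        crc ^= (uint16_t)buf[i] << 8;
        for (int b = 0; b < 8; b++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x8BB7)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}

int main(void)
{
    uint8_t block[512];
    memset(block, 0xA5, sizeof(block)); /* dummy block contents */

    struct t10_pi_tuple pi = {
        .guard   = t10_crc16(block, sizeof(block)),
        .app_tag = 0,
        .ref_tag = 1234, /* example LBA */
    };
    printf("guard=0x%04x app_tag=0x%04x ref_tag=0x%08x\n",
           pi.guard, pi.app_tag, pi.ref_tag);
    return 0;
}
```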
Host management
Mellanox host management and control capabilities include NC-SI over MCTP over SMBus and NC-SI over MCTP over PCIe as Baseboard Management Controller (BMC) interfaces, as well as PLDM for Platform Monitoring and Control (DSP0248) and PLDM for Firmware Update (DSP0267).
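As a rough illustration of the PLDM traffic a BMC exchanges over these transports, the sketch below hand-builds a GetSensorReading request from DSP0248. The header packing follows the generic PLDM layout in DSP0240, but the instance and sensor IDs are arbitrary example values, and a real management stack would use a library such as libpldm rather than hand-built buffers.

```c
#include <stdint.h>
#include <stdio.h>

/* Generic PLDM message header (DSP0240), simplified for illustration. */
struct pldm_msg_hdr {
    uint8_t rq_d_inst; /* bit7 Rq=1 (request), bit6 D=0, bits 4-0 instance ID */
    uint8_t type;      /* bits 7-6 header version, bits 5-0 PLDM type */
    uint8_t command;   /* command code within that type */
};

int main(void)
{
    /* GetSensorReading request: PLDM type 0x02 (Platform Monitoring
       and Control, DSP0248), command 0x11, payload = sensorID
       (uint16, little-endian) + rearmEventState (bool8). */
    uint8_t msg[sizeof(struct pldm_msg_hdr) + 3];
    struct pldm_msg_hdr *hdr = (struct pldm_msg_hdr *)msg;
    hdr->rq_d_inst = 0x80 | 0x01; /* request, example instance ID 1 */
    hdr->type      = 0x02;
    hdr->command   = 0x11;
    msg[3] = 0x01; msg[4] = 0x00; /* example sensorID = 0x0001 */
    msg[5] = 0x00;                /* rearmEventState = false */

    for (size_t i = 0; i < sizeof(msg); i++)
        printf("%02x ", msg[i]);
    printf("\n");
    return 0;
}
```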