Maximizing Performance for Distributed Deep Learning with NVIDIA SHARP (HGX)

Modern machine learning data centers demand complex computation and fast, efficient data delivery. The NVIDIA Scalable Hierarchical Aggregation and Reduction Protocol (SHARP) takes advantage of the in-network computing capabilities of the NVIDIA Quantum switch, dramatically improving the performance of distributed machine learning workloads.
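The core operation SHARP offloads is the allreduce that distributed training performs on gradients: partial results are aggregated up a hierarchy of switches and the global sum is returned to every rank, rather than being shuttled between servers. As a minimal sketch (plain Python, not the SHARP or NCCL API; the function name and `fanout` parameter are illustrative), the data flow looks like this:

```python
# Illustrative model of a hierarchical allreduce. In a real deployment
# SHARP performs this aggregation inside the switch fabric; here we
# model the same data flow in software to show what is being offloaded.

def hierarchical_allreduce(rank_values, fanout=2):
    """Sum the contributions of all ranks by aggregating up a tree of
    'switch' nodes, then deliver the global sum back to every rank."""
    level = list(rank_values)
    # Aggregate upward: each aggregation node sums the partial results
    # of up to `fanout` children until a single root value remains.
    while len(level) > 1:
        level = [sum(level[i:i + fanout])
                 for i in range(0, len(level), fanout)]
    # The root result is broadcast back down to every rank.
    return [level[0]] * len(rank_values)

# Example: four ranks each contribute one gradient value.
print(hierarchical_allreduce([1.0, 2.0, 3.0, 4.0]))  # every rank gets 10.0
```

Because the reduction happens in the network rather than on the hosts, the aggregation latency scales with the depth of the switch hierarchy instead of the number of servers.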

https://developer.nvidia.com/networking/hpc-x
#InfiniBand #ISC21 #Networking
