Penguin Computing Announces NVIDIA Tesla V100-based Servers to Drive Deep Learning, Artificial Intelligence

Penguin Computing®, provider of high performance computing, enterprise datacenter and cloud solutions, today announced strategic support for the field of artificial intelligence through availability of its servers based on the highly advanced NVIDIA® Tesla® V100 GPU accelerator, powered by the NVIDIA Volta GPU architecture.

“Deep learning, machine learning and artificial intelligence are vital tools for addressing the world’s most complex challenges and improving many aspects of our lives,” said William Wu, Director of Product Management, Penguin Computing. “Our breadth of products covers configurations that accelerate various demanding workloads – maximizing performance, minimizing P2P latency of multiple GPUs and providing minimal power consumption through creative cooling solutions.”

NVIDIA Tesla V100 GPUs join an expansive GPU server line that covers Penguin Computing’s Relion® servers (Intel®-based) and Altus servers (AMD-based) in both 19” and 21” Tundra® form factors. Penguin Computing will debut a high-density 21” Tundra 1OU GPU server supporting 4x Tesla V100 SXM2, and a 19” 4U GPU server supporting 8x Tesla V100 SXM2, with optional NVIDIA NVLink™ interconnect technology in a single root complex.

The NVIDIA Volta architecture is bolstered by pairing NVIDIA CUDA® cores and NVIDIA Tensor Cores within a unified architecture. A single server with Tesla V100 GPUs can replace hundreds of CPU servers for AI. Equipped with 640 Tensor Cores, Tesla V100 delivers 125 TeraFLOPS of deep learning performance. That’s 12X Tensor FLOPS for deep learning training, and 6X Tensor FLOPS for deep learning inference when compared to NVIDIA Pascal™ GPUs.
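As a rough sanity check, the 125 TeraFLOPS figure follows from the per-core Tensor Core throughput. Assuming NVIDIA's published V100 specifications (not stated in this article): each of the 640 Tensor Cores executes a 4×4×4 matrix multiply-accumulate per clock (64 fused multiply-adds, i.e. 128 floating-point operations), at a boost clock of roughly 1,530 MHz:

```python
# Back-of-the-envelope check of the Tesla V100's 125 TFLOPS deep learning figure.
# Assumptions (taken from NVIDIA's public V100 specs, not from this article):
#   - 640 Tensor Cores
#   - each Tensor Core performs a 4x4x4 matrix multiply-accumulate per clock,
#     i.e. 64 fused multiply-adds = 128 floating-point operations
#   - boost clock of ~1530 MHz

tensor_cores = 640
flops_per_core_per_clock = 4 * 4 * 4 * 2  # 64 FMAs -> 128 FLOPs
boost_clock_hz = 1.53e9                   # assumed ~1530 MHz boost clock

peak_tflops = tensor_cores * flops_per_core_per_clock * boost_clock_hz / 1e12
print(f"~{peak_tflops:.0f} TFLOPS")       # ~125 TFLOPS
```

The multiplication lands on roughly 125 TFLOPS, consistent with the figure quoted above; the exact value depends on the assumed clock speed.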

“Penguin Computing continues to demonstrate leadership by providing…

