GTC 2019 – Nvidia, the datacenter company, and the role of data
“I am AI – Protector, Guardian, Navigator, Scientist, Healer, Composer.” Nvidia’s CEO Jensen Huang opened his keynote with this refrain, highlighting the pervasiveness of Artificial Intelligence and Deep Learning in our lives, work, and play.
It was a marathon of a keynote and clearly established Nvidia as a Datacenter company and an undisputed leader in the AI/ML space, pioneering Accelerated Computing. The Mellanox acquisition further strengthened the case.
SwiftStack was at the event in full force, showcasing our distributed, massively parallel object storage and multi-cloud data management for end-to-end AI/ML workflows. Anyone working in AI/ML should be interested in Nvidia’s vision for the role of data, and SwiftStack looks forward to complementing and augmenting this vision with our own strategy.
Nvidia’s vision and strategy
- Nvidia has clearly repositioned itself as a datacenter company.
- Showcased leadership not just in GPU computing but also in the software stack and system offerings built around it.
- Announced strong ecosystem partnerships with OS, system, and cloud vendors and consulting companies.
- Showcased solutions across several vertical domains.
- … and a portfolio of products across the HPC, Enterprise, Hyperscale and Developer markets, each with a clear Go-To-Market strategy associated with it.
Accelerated computing (GPU computing in the broader sense), pioneered by Nvidia, was defined with the PRADA framework: PRogrammable (software-defined) Acceleration of multiple Domains with one Architecture. Huang showed that the CUDA-X accelerated-library framework can run on RTX (computer graphics), DGX (datacenter GPU servers), HGX (hyperscale GPU servers) and AGX (autonomous-vehicle GPU computers), with the domain-specific libraries distributed as containers through NGC.
Huang’s keynote was divided into three chapters, each focusing on different swim lanes:
Chapter 1 – Computer graphics for the Gaming and M&E verticals
In addition to announcements like Omniverse, positioned as Google Docs for 3D design, there are the 8U RTX 8000 server (ray tracing on the Turing architecture) and the RTX POD, which clearly place computer graphics in the datacenter. Augmented Reality / Virtual Reality use cases center on efficient access to data, and with the POD comes the need for massively parallel, concurrent, distributed storage. Nvidia made several ecosystem announcements around Unity, Unreal, Vulkan, and GeForce Now, supporting rendering and 10,000 concurrent gamers per POD.
Chapter 2 – Deep Learning, the fastest growing field of data science
Huang defined the deep learning pipeline as ingest and data analytics, followed by feature engineering and predictive modeling for inference.
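To make these stages concrete, here is a minimal, hypothetical sketch of that pipeline in plain Python. The data, the least-squares model, and every function name are toy placeholders for illustration, not Nvidia tooling; the point is only the shape of the flow: ingest, feature engineering, training, inference.

```python
def ingest():
    # Stage 1: ingest raw records (hard-coded samples stand in for real data).
    return [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]

def engineer_features(records):
    # Stage 2: feature engineering - split records into inputs and targets.
    xs = [x for x, _ in records]
    ys = [y for _, y in records]
    return xs, ys

def train(xs, ys):
    # Stage 3: predictive modeling - fit a least-squares line y = a*x + b.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

def infer(model, x):
    # Stage 4: inference on new data.
    a, b = model
    return a * x + b

xs, ys = engineer_features(ingest())
model = train(xs, ys)
print(infer(model, 5.0))  # ≈ 9.85
```

In a real deployment each stage would be a distributed system of its own, which is exactly why every stage needs fast, concurrent access to shared data.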
Use cases around healthcare, telecom and conversational search were showcased. All of them underscored the importance of, and the need for, parallel and concurrent access to data.
- The Clara AI toolkit for radiologists has 13 pre-trained models and is used by leading research hospitals.
- A network operator uses predictive analytics on LTE tower data to determine WiFi endpoint placement, with real-time visualization in OmniSci, a GPU-accelerated distributed SQL database.
- A conversational-search demo built on Microsoft’s Bing engine showed the complexity of data pipelines and the role of data.
The distinction between scale-up with supercomputers and scale-out with hyperscalers clearly showed the rationale behind the product announcements. Both approaches reiterated the need for massively parallel, concurrent, distributed storage.
DGX PODs represented the scale-up approach, whereas the announcements of workstations for data scientists and T4 GPU-based enterprise servers for data science represented the scale-out approach. The scale-up ecosystem is built around Kubernetes and containers, while the scale-out (hyperscaler) ecosystem is built around Hadoop, Spark and the GPU-accelerated RAPIDS framework. Ecosystem announcements covered RAPIDS adoption by Accenture, Google Cloud, Databricks and others. Huang also made the case for the Mellanox acquisition, pointing to the increased east-west traffic that comes with the scale-out approach.
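Part of RAPIDS's appeal to that scale-out ecosystem is that cuDF, its dataframe library, deliberately mirrors the pandas API. The sketch below uses plain pandas with hypothetical tower-telemetry columns of our own invention; on a GPU-equipped system, essentially the same code runs under RAPIDS by importing cudf in place of pandas.

```python
import pandas as pd  # on a GPU system: import cudf as pd (near-identical API)

# Hypothetical per-tower telemetry, standing in for real LTE measurements.
df = pd.DataFrame({
    "tower_id": ["A", "A", "B", "B"],
    "throughput_mbps": [10.0, 14.0, 7.0, 9.0],
})

# A typical analytics step: average throughput per tower.
avg = df.groupby("tower_id")["throughput_mbps"].mean()
print(avg)  # A: 12.0, B: 8.0
```

The same groupby/aggregate pattern is what RAPIDS accelerates on GPUs at datacenter scale, which is where the east-west traffic Huang cited comes from.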
Chapter 3 – Robotics, Autonomous machines and Autonomous vehicles
The $99 Jetson Nano running the CUDA-X stack and the Isaac SDK powering autonomous machines show the potential of the intelligent edge. They also speak to data growth, the need for training at the core (the datacenter), and the use of TensorRT, now integrated into Microsoft’s ONNX Runtime, as the inference compiler.
Nvidia’s efforts in the autonomous vehicle industry are impressive. Its DRIVE AV Release 9 software running on autonomous vehicles is built around perception, localization, mapping, and planning. Nvidia’s path planning with Safety Force Field software is built on predictive modeling running on the Drive Constellation platform.
Availability of the Drive Constellation platform was announced for simulation and re-simulation, a virtual AV test fleet, and hardware-in-the-loop (HIL) testing.
Huang’s “I am AI” represents a very profound vision and Nvidia is effectively executing on it. SwiftStack is proud to be part of Nvidia’s AI/ML ecosystem and we are extremely enthusiastic about the many vertical domains and use cases for AI/ML data pipelines.