(Image: iStock/Andrew Ostrovsky)

New York University’s Center for Data Science has teamed with NVIDIA to develop the next generation of deep learning applications and algorithms for large-scale GPU-accelerated systems.

Founded by Yann LeCun, a professor at NYU's Courant Institute of Mathematical Sciences and director of AI Research at Facebook, NYU’s Center for Data Science (CDS) is one of several institutions NVIDIA works with to push GPU-based deep learning forward.

Tomorrow’s advances in deep learning—self-driving cars, computers that detect tumors, real-time speech translation—rely on new, more sophisticated algorithms.

These algorithms, in turn, demand powerful computing technologies.

Among these are GPU accelerators, considered by researchers to be the go-to technology for deep learning because they reduce the time it takes to train neural networks—the process by which computers first analyze and then “learn” to use data in order to drive a car, spot a medical affliction, or translate languages.

But until now, many researchers have worked on systems with only one GPU, limiting the number of training parameters and the size of the models they can develop.

By distributing the deep learning training process among many GPUs, researchers can increase the size of the models that can be trained and the number of models that can be tested—resulting in more accurate models and new classes of applications.
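
The article does not say which software NYU will run on ScaLeNet, so the following is only a minimal illustrative sketch of the idea described above—data-parallel training—using PyTorch's nn.DataParallel as an assumed stand-in framework: the model is replicated on each device, every batch is split across the available GPUs, and the results are combined on the primary GPU.

```python
# Illustrative only: the article does not specify NYU's framework or models.
# A minimal data-parallel training step with PyTorch, splitting each batch
# across however many GPUs are visible (falling back to CPU otherwise).
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(1024, 512),
    nn.ReLU(),
    nn.Linear(512, 10),
)

# Replicate the model across all visible GPUs; nn.DataParallel scatters each
# input batch, runs the replicas in parallel, and gathers the outputs.
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# One training step on a random batch (placeholder data, not a real dataset).
inputs = torch.randn(256, 1024, device=device)
targets = torch.randint(0, 10, (256,), device=device)

optimizer.zero_grad()
loss = loss_fn(model(inputs), targets)
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.4f}")
```

Larger multi-node clusters typically graduate from this single-process approach to fully distributed training, but the principle is the same: more GPUs per training run means larger models and faster experiments.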

Recognizing this, NYU is deploying a new deep learning computing system called “ScaLeNet”: an eight-node Cirrascale cluster with 32 top-of-the-line NVIDIA® Tesla® K80 dual-GPU accelerators.
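
For a sense of scale, the numbers in the article imply the following (a back-of-the-envelope tally that assumes the 32 boards are spread evenly across the eight nodes; each Tesla K80 board carries two GPUs):

```python
# Rough tally of ScaLeNet's GPU resources from the figures quoted above.
nodes = 8
k80_boards = 32
gpus_per_board = 2  # each Tesla K80 pairs two GPUs on one board

boards_per_node = k80_boards // nodes      # -> 4 K80 boards per node (assumed even split)
total_gpus = k80_boards * gpus_per_board   # -> 64 GPUs across the cluster

print(f"{boards_per_node} K80 boards per node, {total_gpus} GPUs in total")
```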

Available to NYU researchers later this spring, the new high-performance system will let them take on bigger challenges and build deep learning models that allow computers to perform human-like perceptual tasks.

“Multi-GPU machines are a necessary tool for future progress in AI and deep learning. Potential applications include self-driving cars, medical image analysis systems, real-time speech-to-speech translation, and systems that can truly understand natural language and hold dialogs with people,” says LeCun.

ScaLeNet will be used by faculty members, research scientists, postdoctoral fellows, and graduate students for research projects and educational programs at CDS.

“CDS has research projects that apply machine and deep learning to the physical, life and social sciences,” LeCun says. “This includes Bayesian models of cosmology and high-energy physics, computational models of the visual and motor cortex, deep learning systems for medical and biological image analysis, as well as machine-learning models of social behavior and economics.”

He hopes the work at NYU can serve as a model for advancing the field of deep learning and training the next generation of AI experts.

Press Contact

James Devitt
(212) 998-6808