NYU Greene Supercomputer
Greene is NYU's newest high-performance computing cluster. It's the most powerful supercomputer in the New York metropolitan area, one of the top ten most powerful supercomputers in higher education, and one of the top 100 greenest supercomputers in the world.
Learn More
Watch the video to hear from top researchers about how Greene will help confront some of the biggest issues facing society today.
Get to Know Greene
Performance
Greene's performance, measured using the Linpack benchmark, is 2.088 petaflops. Its overall performance, including the contribution of GPUs, exceeds 4 petaflops: four quadrillion (4 x 10^15) calculations per second, roughly ten times the power of its predecessor, the NYU High Performance Computing (HPC) Prince cluster, and 1,000 times that of NYU's 2005 supercomputer.
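To make those ratios concrete, here is a minimal back-of-the-envelope sketch in Python. Only the ~4 petaflop figure comes from this page; the Prince and 2005 numbers are not published here and are simply implied by the quoted 10x and 1,000x comparisons.

```python
# Back-of-the-envelope sketch: what the quoted ratios imply in raw FLOP/s.
# Only the ~4 petaflop figure comes from this page; the others are derived.

GREENE_FLOPS = 4e15  # ~4 petaflops = 4 x 10^15 calculations per second

prince_flops_implied = GREENE_FLOPS / 10    # ~0.4 petaflops (10x comparison)
flops_2005_implied = GREENE_FLOPS / 1000    # ~4 teraflops (1,000x comparison)

print(f"Greene:                 {GREENE_FLOPS:.1e} FLOP/s")
print(f"Prince (implied):       {prince_flops_implied:.1e} FLOP/s")
print(f"2005 machine (implied): {flops_2005_implied:.1e} FLOP/s")
```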
Expanding Research
The Greene HPC cluster bolsters research at NYU across a wide range of disciplines, including biomolecular genetics, the political ramifications of social media behavior, artificial intelligence, and much more. Greene supports a variety of job types and sizes, including jobs requiring multiple CPU cores, multiple GPU cards, or terabytes of memory, as well as single-core serial jobs.
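As a rough illustration of those job shapes, the sketch below generates resource requests for a hypothetical batch scheduler. It assumes a Slurm-style interface (directives such as --cpus-per-task, --mem, and --gres), which this page does not name, and all job names and sizes are made up.

```python
# Hypothetical sketch of the job shapes mentioned above, assuming a Slurm-style
# scheduler (not specified on this page). All names and sizes are illustrative.

def batch_script(job_name: str, cpus: int = 1, gpus: int = 0,
                 mem: str = "4GB", hours: int = 1,
                 command: str = "./my_program") -> str:
    """Compose a minimal Slurm-style batch script for one job."""
    lines = [
        "#!/bin/bash",
        f"#SBATCH --job-name={job_name}",
        "#SBATCH --nodes=1",
        f"#SBATCH --cpus-per-task={cpus}",
        f"#SBATCH --mem={mem}",
        f"#SBATCH --time={hours}:00:00",
    ]
    if gpus:
        lines.append(f"#SBATCH --gres=gpu:{gpus}")  # request GPU cards
    lines.append(command)
    return "\n".join(lines)

# A single-core serial job, a multi-core job, and a multi-GPU job.
print(batch_script("serial-job"))
print(batch_script("multicore-job", cpus=24, mem="180GB", hours=12))
print(batch_script("gpu-job", cpus=8, gpus=4, mem="300GB", hours=48))
```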
Sustainability
Greene is a power-efficient and environmentally friendly HPC cluster. Its CPU nodes are water-cooled using Lenovo's Neptune liquid cooling technology, eliminating the need for air conditioning. The new supercomputer's greater efficiency and lower energy requirements also mean that the cost of computing is reduced.
High Speed Research Network
The High Speed Research Network (HSRN) is dedicated to research community needs and is separate from the 40 gigabit/second academic NYU Campus Ethernet Network (NYU-Net). Individual computers can be configured to connect to the HSRN via copper or optical fiber. Building-level HSRN connections are made via dual 400 gigabit/second links to the network core. The phased HSRN implementation will scale to connect additional buildings and research labs throughout the University, and will provide additional access to high-performance computing resources, centralized storage for backup, disaster recovery, and University-level digital archiving/data management and repository services.
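To give a feel for those link speeds, the short sketch below estimates raw transfer times for a 1 TB dataset at the 40 Gb/s campus rate and at a single 400 Gb/s building link; it ignores protocol overhead, congestion, and storage throughput, so real transfers will take longer.

```python
# Rough transfer-time arithmetic for the network speeds mentioned above.
# Ignores protocol overhead, congestion, and storage bottlenecks.

def transfer_seconds(data_bytes: float, link_gbps: float) -> float:
    """Time (seconds) to move data_bytes over a link of link_gbps gigabits/s."""
    bytes_per_second = link_gbps * 1e9 / 8
    return data_bytes / bytes_per_second

one_terabyte = 1e12  # bytes

for gbps in (40, 400):
    t = transfer_seconds(one_terabyte, gbps)
    print(f"1 TB over a {gbps} Gb/s link: ~{t:.0f} s (~{t / 60:.1f} min)")
```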
Technical Specifications
The new Greene HPC Cluster consists of 4 login nodes, 524 “standard” compute nodes with 192GB RAM and dual CPU sockets, 40 “medium memory” nodes with 384GB RAM and dual CPU sockets, and 4 “large memory” nodes each with 3TB RAM and quad CPU sockets. All cluster nodes are equipped with 24-core Intel Cascade Lake Platinum 8268 chips.
- The “standard” and “medium memory” compute nodes (a total of 564 nodes with 27,072 processing cores) are Direct Water Cooled (DWC) via two Cooling Distribution Units (CDUs). DWC allows the CPUs to run at a Turbo frequency of 3.7GHz while all processing cores remain in operation.
- The cluster includes 73 compute nodes each equipped with 4 NVIDIA RTX8000 GPUs (292 RTX8000 GPUs in total) and 10 nodes each equipped with 4 NVIDIA V100 GPUs (40 V100 GPUs in total), for a combined 332 NVIDIA GPUs. Each GPU-equipped node has 384GB of RAM and two CPU sockets. Across all cluster nodes there are roughly 32,000 CPU processing cores and 145 terabytes of RAM (a rough tally of these totals appears after this list).
- All cluster components are interconnected with an InfiniBand (IB) fabric in a non-blocking fat-tree topology, consisting of 20 core switches and 29 leaf switches.
- All switches are 200Gb/s HDR IB switches, and each compute node connects to the fabric using an HDR-100 adapter. The cluster comes with 7.3 petabytes of usable data storage running the GPFS file system.
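The headline totals above can be roughly reconstructed from the per-node figures. The tally below is a sketch of that arithmetic; it assumes the login nodes are dual-socket like the standard nodes, so it lands near, rather than exactly on, the quoted ~32,000 cores.

```python
# Rough tally of the node counts listed above (24-core CPUs, so 48 cores per
# dual-socket node and 96 per quad-socket node). Login nodes are assumed to be
# dual-socket; the result approximates, not exactly matches, the quoted totals.

CORES_PER_CPU = 24

nodes = {
    # name: (node count, CPU sockets per node, GPUs per node)
    "login":         (4,   2, 0),
    "standard":      (524, 2, 0),
    "medium memory": (40,  2, 0),
    "large memory":  (4,   4, 0),
    "RTX8000 GPU":   (73,  2, 4),
    "V100 GPU":      (10,  2, 4),
}

total_cores = sum(count * sockets * CORES_PER_CPU
                  for count, sockets, _ in nodes.values())
total_gpus = sum(count * gpus for count, _, gpus in nodes.values())

print(f"CPU cores: {total_cores:,}  (page quotes roughly 32,000)")
print(f"GPUs:      {total_gpus}      (page quotes 332)")
```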
HPC-Powered Research at NYU
- Mapping the Design of Rice: Advancing Rice Research Using High Performance Computing at NYU (NYU IT's the Download)
- David Holland: Modeling polar temperature fluctuations and water flow (NYU Abu Dhabi News)
- DETER Tracks Touch: Debra Laefer and Thomas Kirchner Lead Research Team to Track Touch Behavior (NYU IT's the Download)