NYU Greene Supercomputer
Greene is NYU's newest high-performance computing cluster. It's the most powerful supercomputer in the New York metropolitan area, one of the top ten most powerful supercomputers in higher education, and one of the top 100 greenest supercomputers in the world.
The NYU Greene high performance computing (HPC) cluster bolsters research across a wide range of disciplines, from genomics and biomolecular genetics to the political ramifications of social media behavior to artificial intelligence. It is a general-purpose cluster that supports a variety of job types and sizes, from jobs requiring multiple CPU cores, multiple GPU cards, or terabytes of memory down to single-core serial jobs.
- The performance of Greene, measured using the Linpack benchmark, is 2.088 petaflops.
- The overall performance of Greene, including the contribution of GPUs, exceeds 4 petaflops.
- It can do four quadrillion (4 x 10^15) calculations per second – ten times more powerful than its predecessor, the NYU High Performance Computing (HPC) Prince cluster, and 1,000 times faster than NYU's 2005 supercomputer.
Find out more about some of the current research using Greene.
- Most of the cluster compute nodes (standard memory and medium memory nodes, see below for a description of nodes) are water-cooled using Lenovo's Neptune liquid cooling technology.
- The remaining cluster nodes are air-cooled and deployed in a heat-containment area, reducing the need for ambient air cooling in the data center.
- The liquid cooling of most cluster nodes, the heat-containment arrangement, and the data center's Power Usage Effectiveness (PUE) of 1.35 make Greene a power-efficient and environmentally friendly cluster.
Greene consists of 4 login nodes, 524 "standard" compute nodes with 192GB RAM and dual CPU sockets, 40 "medium memory" nodes with 384GB RAM and dual CPU sockets, and 4 "large memory" nodes each with 3TB RAM and quad CPU sockets. All cluster nodes are equipped with 24-core Intel Xeon Platinum 8268 (Cascade Lake) processors.
- The "standard" and "medium memory" compute nodes (564 nodes with 27,072 processing cores in total) are Direct Water Cooled (DWC) via two Cooling Distribution Units (CDUs). DWC allows the CPUs to run at a Turbo frequency of 3.7GHz while all processing cores remain in operation.
- The cluster includes 73 compute nodes each equipped with 4 NVIDIA RTX8000 GPUs (292 RTX8000 GPUs in total) and 10 nodes each equipped with 4 NVIDIA V100 GPUs (40 V100 GPUs in total), for 332 NVIDIA GPUs overall. Each GPU node has 384GB of RAM and two CPU sockets. Across all cluster nodes, the totals are roughly 32,000 CPU processing cores and 145 terabytes of RAM.
- All cluster components are interconnected with an Infiniband (IB) fabric in a non-blocking Fat-tree topology, consisting of 20 core switches and 29 leaf switches.
- All switches are 200Gbps HDR IB switches, and each compute node connects to the fabric through an HDR-100 adapter. The cluster includes 7.3 petabytes of usable data storage running the GPFS file system.
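Greene, like most modern HPC clusters, schedules work through a batch system. As an illustration of the job sizes described above, a Slurm batch script for a multi-core job on a "standard" node might look like the sketch below. This is a hedged example, not official NYU configuration: the resource values, the GPU gres name `rtx8000`, and the program name are assumptions; check the NYU HPC Support site for the exact options in use.

```shell
#!/bin/bash
#SBATCH --job-name=example-job    # job name shown in the queue
#SBATCH --nodes=1                 # one "standard" node (dual-socket, 48 cores)
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8         # multi-core job; standard nodes offer up to 48 cores
#SBATCH --mem=32GB                # standard nodes have 192GB RAM total
#SBATCH --time=01:00:00           # wall-clock time limit

# For a GPU job, one would instead add a request such as
# (gres name is an assumption; verify against the cluster's configuration):
##SBATCH --gres=gpu:rtx8000:1

module purge                      # start from a clean software environment
srun ./my_program                 # my_program is a placeholder for your executable
```

The commented-out `--gres` line shows how the same script would be adapted to request one of the RTX8000 or V100 cards on the GPU nodes.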
- Greene is available to NYU researchers, faculty, and staff, as well as faculty-sponsored students (students require sponsorship from a full-time NYU faculty member to get an HPC user account).
- A valid NYU HPC user account is required in order to access and use the Greene cluster.
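Once an account has been granted, access is typically over SSH followed by batch-job submission with Slurm. The commands below are an illustrative sketch: the hostname should be verified against the NYU HPC documentation, `<NetID>` is a placeholder for your NYU NetID, and `myjob.sbatch` is a hypothetical batch script.

```shell
# Log in to a Greene login node (from the NYU network or VPN)
ssh <NetID>@greene.hpc.nyu.edu

# On a login node, submit and monitor batch jobs via Slurm
sbatch myjob.sbatch    # myjob.sbatch is a placeholder batch script
squeue -u $USER        # list your queued and running jobs
```

Note that login nodes are shared; compute-intensive work should be submitted as batch jobs rather than run directly on them.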
- For additional information on using Greene, including code and commands, see the NYU HPC Support site (Google Sites).