Powering Groundbreaking Research

The Greene cluster is named after Greene Street in SoHo, a neighborhood in Lower Manhattan near NYU. The cluster also has "green" characteristics: most of its nodes are water cooled, and it is deployed in a power-efficient data center.


Greene at a Glance

The NYU Greene high performance computing (HPC) cluster is a powerful cluster that bolsters research across a wide range of disciplines, from genomics and biomolecular genetics to the political ramifications of social media behavior to artificial intelligence. It is a general purpose cluster that supports a variety of job types and sizes, from jobs requiring multiple CPU cores, multiple GPU cards, or terabytes of memory, down to single-core serial jobs.

Highlights and Hardware
  • Greene is the primary NYU HPC cluster
  • General purpose HPC cluster, suitable for the majority of computing and data analytics tasks, such as numerical simulations and AI.
  • Measured performance (from CPUs only) using LinPack: 2.088 PF
  • Ranked #271 on the June 2020 edition of the top500 list (based on an extrapolated performance of 1.731 PF)
  • Uses Slurm for job submission (see the example submission sketch after this list)
  • Number of nodes: 665
  • CPU cores:  31,584 Intel cores
  • NVIDIA GPUs:  332 (V100 and RTX8000)
  • Total RAM: 163 TB
  • Total Disk space:  9 PB 
  • GPFS file systems
  • HDR Infiniband interconnect
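
As a rough illustration of how work reaches the scheduler, the sketch below composes a minimal Slurm batch script in Python and hands it to sbatch. The resource values, the module name, and the my_analysis.py script are illustrative placeholders rather than Greene-specific settings; the NYU HPC Support site documents the actual modules, partitions, and accounting options.

    # Minimal sketch: build a Slurm batch script and submit it with sbatch.
    # Resource values and the module name below are illustrative placeholders,
    # not Greene-specific settings; my_analysis.py is a hypothetical user script.
    import subprocess
    from pathlib import Path

    batch_script = """#!/bin/bash
    #SBATCH --job-name=example
    #SBATCH --nodes=1
    #SBATCH --ntasks=1
    #SBATCH --cpus-per-task=4
    #SBATCH --mem=8GB
    #SBATCH --time=01:00:00
    #SBATCH --output=example_%j.out

    module purge
    module load python             # placeholder; check `module avail` on the cluster
    python my_analysis.py          # hypothetical user script
    """

    Path("example.sbatch").write_text(batch_script)

    # sbatch prints the assigned job ID on success.
    result = subprocess.run(["sbatch", "example.sbatch"],
                            capture_output=True, text=True, check=True)
    print(result.stdout.strip())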

Performance

  • The performance of Greene, measured using the Linpack benchmark, is 2.088 petaflops.
  • The overall performance of Greene, including the contribution of GPUs, exceeds 4 petaflops.
  • It can perform four quadrillion (4 x 10^15) calculations per second, making it ten times more powerful than its predecessor, the NYU High Performance Computing (HPC) Prince cluster, and 1,000 times faster than NYU's 2005 supercomputer.

Expanding Research

Greene bolsters research at NYU across a wide range of disciplines including biomolecular genetics, the political ramifications of social media behavior, artificial intelligence, and so much more. Greene supports a variety of job types and sizes, including jobs requiring multiple CPU cores, multiple GPU cards, terabytes of memory, as well as single core serial jobs.

Find out more about some of the current research using Greene.

Sustainability

  • Most of the cluster compute nodes (standard memory and medium memory nodes, see below for a description of nodes) are water-cooled using Lenovo's Neptune liquid cooling technology.
  • The remaining cluster nodes are air-cooled and deployed in a heat-containment area, reducing the need for ambient air cooling in the data center.
  • The liquid cooling technology used for most of the cluster nodes, the heat-containment arrangement, and the data center's Power Usage Effectiveness (PUE) of 1.35 make Greene a power-efficient and environmentally friendly cluster (see the short example below for what the PUE figure means).
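
PUE is the ratio of total facility power to the power delivered to the computing equipment itself, so a PUE of 1.35 means roughly 0.35 W of cooling and facility overhead for every watt used for computation. The short sketch below spells out that arithmetic; the 1,000 kW IT load is a hypothetical figure used only for illustration.

    # PUE (Power Usage Effectiveness) = total facility power / IT equipment power.
    # With the data center at PUE = 1.35, every watt delivered to the cluster
    # implies roughly 0.35 W of additional cooling and facility overhead.
    pue = 1.35
    it_load_kw = 1000.0                     # hypothetical IT load for illustration
    total_facility_kw = pue * it_load_kw    # total power drawn by the facility
    overhead_kw = total_facility_kw - it_load_kw
    print(f"Facility draw: {total_facility_kw:.0f} kW, overhead: {overhead_kw:.0f} kW")
    # -> Facility draw: 1350 kW, overhead: 350 kW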

Technical Specifications

Greene consists of 4 login nodes, 524 "standard" compute nodes with 192GB RAM and dual CPU sockets, 40 "medium memory" nodes with 384GB RAM and dual CPU sockets, and 4 "large memory" nodes, each with 3TB RAM and quad CPU sockets. All cluster nodes are equipped with 24-core Intel Cascade Lake Platinum 8268 processors. A sketch of how this node layout can be inspected through Slurm appears after the list below.

  • The "standard" and "medium memory" compute nodes (a total of 564 nodes with 27,072 processing cores) are Direct Water Cooled (DWC) nodes by operating two Cooling Distribution Units (CDUs). DWC allows us to run the CPU at Turbo frequency of 3.7GHz nodes while we maintain operation of all processing cores.
  • The cluster includes 73 compute nodes each equipped with 4 NVIDIA RTX8000 GPUs (a total of 292 RTX8000 GPUs) and 10 nodes each equipped with 4 NVIDIA V100 GPUs (a total of 40 V100 GPUs), for a total of 332 NVIDIA GPUs. Each GPU-equipped node has 384GB of RAM and two CPU sockets. Across all cluster nodes there are approximately 32,000 CPU processing cores and 145 terabytes of RAM.
  • All cluster components are interconnected with an Infiniband (IB) fabric in a non-blocking Fat-tree topology, consisting of 20 core switches and 29 leaf switches.
  • All switches are 200Gbps HDR IB switches, while each compute node connects to the fabric through an HDR-100 adapter. The cluster comes with 7.3 petabytes of usable data storage running the GPFS file system.
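
To see how the node types described above look from the scheduler's point of view, the sketch below asks Slurm for per-node core, memory, and GPU (GRES) counts. The sinfo format fields used here are standard Slurm options, but the node names and GRES labels shown in the example output are only illustrative; the actual labels on Greene come from the NYU HPC configuration.

    # Sketch: list per-node CPU, memory, and GPU (GRES) information from Slurm.
    # sinfo format fields: %N node name, %c CPUs, %m memory (MB), %G generic resources.
    import subprocess

    result = subprocess.run(
        ["sinfo", "--Node", "--format=%N %c %m %G"],
        capture_output=True, text=True, check=True,
    )

    for line in result.stdout.splitlines():
        print(line)
    # Illustrative output lines (node names and GRES labels are assumptions):
    #   cs001  48 180000 (null)           <- "standard" node: 48 cores, 192GB class
    #   gv001  48 380000 gpu:v100:4       <- V100 GPU node
    #   gr001  48 380000 gpu:rtx8000:4    <- RTX8000 GPU node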

Access and Use

  • Greene is available to NYU researchers, faculty, and staff, as well as faculty-sponsored students (students require sponsorship from a full-time NYU faculty member to obtain an HPC user account).
  • A valid NYU HPC user account is required in order to access and use the Greene cluster.
  • For additional information on using Greene, including code and commands, see the NYU HPC Support site (Google Sites).

Hudson HPC/AI Cluster: Now Part of Greene

Highlights and Hardware

The Hudson cluster was incorporated into the Greene cluster in April 2023.

In 2020, AMD and technology partner Penguin Computing Inc donated AMD GPU nodes in order to support COVID-19 research. This included a donation to NYU, which became the Hudson AMD cluster.

As of April 2023, these nodes have been incorporated into the Greene cluster to consolidate the expanding AMD computational ecosystem at NYU. A sketch for checking GPU visibility from a job on these nodes follows the hardware list below.

  • 20 nodes each equipped with 8 GPUs
  • AMD GPUs: 160 MI50
  • CPU Cores: 960 EPYC Rome 
  • Total RAM: 10 TB
  • HDR Infiniband interconnect 
  • Same filesystems as Greene cluster
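
From inside a job running on one of these nodes, the visible GPUs can be checked with a few lines of Python. The sketch below assumes a ROCm build of PyTorch, which exposes AMD GPUs through the same torch.cuda interface; PyTorch itself is an assumption here, not something the cluster requires.

    # Sketch: report the GPUs visible to the current job.
    # Assumes a ROCm build of PyTorch, which presents AMD GPUs (such as the MI50)
    # through the standard torch.cuda interface.
    import torch

    if torch.cuda.is_available():
        for i in range(torch.cuda.device_count()):
            print(f"GPU {i}: {torch.cuda.get_device_name(i)}")
    else:
        print("No GPU visible to this job")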