Center for Research Computing
The Center for Research Computing (CRC) provides support and administration of Rice’s shared computing clusters; on-premises cloud, research storage, and visualization facilities; and related services. Rice’s CRC is staffed by 18 professionals supported by the university and external funding, who provide research computing support and services and manage Rice’s shared cluster infrastructure. These resources are available to faculty, students, and staff affiliated with Rice University and to their collaborators.

Shared Research Computing Resources (ShaReCoRe)
The CRC manages four high-performance computing clusters, listed in the table below. Access is granted through sponsorship by a faculty member, and work is scheduled through the Slurm job scheduler; a minimal submission sketch in Python follows the table.

Cluster | Nodes | Cores  | CPU   | FLOPS  | Total RAM | Vendor     | GPU Accelerators
NOTS    | 235   | 12,096 | Intel | 752 TF | 48 TB     | HPE & Dell | 32 NVIDIA V100 & 48 NVIDIA L40S
RAPID   | 10    | 448    | AMD   | 248 TF | 5 TB      | Atipa      | 16 NVIDIA A100 & 64 NVIDIA A40
ARIES   | 24    | 1,360  | AMD   | 1.3 PF | 18 TB     | Penguin    | 152 AMD MI50 & 24 AMD MI100
RANGE   | 12    | 544    | Intel | 5.5 PF | 18 TB     | Cambridge  | 64 NVIDIA H200 & 16 NVIDIA H100
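
As a rough illustration of how work reaches these clusters through Slurm, the sketch below writes a minimal batch script and submits it with sbatch. The partition name, resource requests, and job payload are hypothetical placeholders rather than actual CRC queue names; consult the CRC documentation for real values.

```python
#!/usr/bin/env python3
"""Minimal sketch of submitting a batch job to a Slurm-scheduled cluster.

The partition name, resource limits, and job contents are hypothetical;
real queue names and limits come from the CRC documentation.
"""
import subprocess
import tempfile

BATCH_SCRIPT = """#!/bin/bash
#SBATCH --job-name=demo
#SBATCH --partition=commons      # hypothetical partition name
#SBATCH --nodes=1
#SBATCH --ntasks=4
#SBATCH --time=01:00:00
#SBATCH --mem=8G

srun hostname
"""

def submit(script: str) -> str:
    """Write the batch script to a temporary file and submit it with sbatch."""
    with tempfile.NamedTemporaryFile("w", suffix=".sbatch", delete=False) as f:
        f.write(script)
        path = f.name
    # On success, sbatch prints e.g. "Submitted batch job 12345".
    result = subprocess.run(["sbatch", path], capture_output=True, text=True, check=True)
    return result.stdout.strip()

if __name__ == "__main__":
    print(submit(BATCH_SCRIPT))
```

Once submitted, the assigned job ID can be monitored with squeue until the scheduler dispatches the job to available nodes.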

Each cluster is attached to one or more of our storage systems: a 600 TB VAST file system for HPC scratch, and the Research High-Capacity Facility (RHF) and Research Data Facility (RDF) file systems for computational scratch and temporary project workspace. Funding for the different systems comes from a combination of grants, industry donations, faculty-purchased nodes, and university investments.

To support needs beyond the campus-wide shared infrastructure, the CRC operates a “condominium” computing business model that allows a principal investigator to become a “condo owner” and receive an enhanced quality of service, including customized queuing behavior on the condo infrastructure. The condominium model enables the CRC to deliver dedicated service while still leveraging the operational efficiencies of managing shared cyberinfrastructure.

Private Cloud
The Owl Research Infrastructure Open-Nebula (ORION) system, deployed in 2018, serves users and use cases that need more interactive computing or a virtual server infrastructure. ORION fills a catch-all role and plays a key part in enabling broad, interactive computational exploration at a smaller scale.
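
For flavor, here is a minimal sketch of how a user might launch a virtual machine on an OpenNebula-based cloud like ORION from the command line. The template and VM names are hypothetical; actual templates and quotas are provisioned by the CRC.

```python
"""Sketch: launching a virtual machine on an OpenNebula-based private
cloud such as ORION. The template and VM names are hypothetical.
"""
import subprocess

# Instantiate a VM from a predefined template (template name is hypothetical).
subprocess.run(
    ["onetemplate", "instantiate", "ubuntu-22.04-base", "--name", "my-analysis-vm"],
    check=True,
)

# List this user's VMs to confirm the new instance is booting.
subprocess.run(["onevm", "list"], check=True)
```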

Secure Computing Environment
The Center has developed and manages the Virtual Research Desktop Environment (VRDE), a multi-tenant, regulated storage and computing platform for confidential or restricted data. CRC research facilitators collaborate with the OIT security and client services offices to provide ongoing support and consultation for VRDE, including security compliance, reporting, feature enhancements, process improvements, and capacity planning.

Storage
To accommodate the growing need to store and manage large data sets and to facilitate collaboration among research groups, the CRC deployed and operates the Research Data Facility (RDF), funded by Rice. RDF combines cloud-based and local storage that is robust, secure, and flexible enough to meet a wide range of uses, as well as scalable to meet future storage needs. Additionally, the CRC was awarded an NSF CC* storage award in August 2023; this award added an S3-compatible object storage resource to our campus cyberinfrastructure, providing needed capacity for multi-terabyte data in active use. The resource will also become part of the Open Storage Network (OSN), making data available for local or remote computational analysis and shareable with collaborators or the public via the network. OSN supports research and education that requires data storage and transfer at scale by simplifying and accelerating access to data in active use. The CRC is committed to providing 20% of the available storage to the common OSN allocation pool, which is available through the NSF ACCESS program. In December 2024, we launched our new 5 PB performant storage resource, the Research High-Capacity Facility (RHF), designed to support active research using large datasets.
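
As a hedged sketch of how a researcher might interact with an S3-compatible object store such as this one, the example below uses Python’s boto3 client. The endpoint URL, credentials, and bucket name are hypothetical placeholders; real values are supplied by the CRC when an allocation is provisioned.

```python
"""Sketch: reading and writing an S3-compatible object store with boto3.

The endpoint URL, access keys, and bucket name are hypothetical.
"""
import boto3

# Point the client at the campus object store rather than AWS.
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.example.rice.edu",  # hypothetical endpoint
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

# Upload a local result file into a project bucket.
s3.upload_file("results.csv", "my-project-bucket", "runs/2024/results.csv")

# List the objects now stored in the bucket.
for obj in s3.list_objects_v2(Bucket="my-project-bucket").get("Contents", []):
    print(obj["Key"], obj["Size"])
```

Because the store speaks the standard S3 API, existing S3 tooling works against it by simply overriding the endpoint URL.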

Data Center
Rice’s off-site scalable data center provides 8,000 sq. ft. of 48-inch raised floor, capable of supporting 6 megawatts of utility power, with 4 megawatts of 2N generator backup installed, 1.5 megawatts of N+1 uninterruptible power supply capacity, and 700 tons of 2N cooling.

Networking
The campus network provides gigabit connectivity to desktops, with multi-gigabit backbone links to a network core with more than 1.2 terabits per second of aggregate bandwidth. The NSF-funded Science DMZ supports moving data between Rice and collaborators at 100 Gbps without the friction typically imposed by a university’s campus network. The Science DMZ supports several Data Transfer Nodes that leverage Globus (globus.org).
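
To illustrate how the Data Transfer Nodes are typically used, the sketch below drives the Globus CLI from Python to move a directory between two endpoints. Both endpoint UUIDs and the paths are hypothetical placeholders, and a prior globus login is assumed.

```python
"""Sketch: moving data through a Data Transfer Node with the Globus CLI.

Endpoint UUIDs and paths are hypothetical; real endpoint IDs come from
the Globus web app or the CRC. Assumes the user has already run
`globus login`.
"""
import subprocess

SRC_ENDPOINT = "00000000-0000-0000-0000-000000000000"  # hypothetical lab endpoint
DST_ENDPOINT = "11111111-1111-1111-1111-111111111111"  # hypothetical Rice DTN

# Recursively transfer a directory; Globus manages retries and integrity checks.
subprocess.run(
    [
        "globus", "transfer",
        f"{SRC_ENDPOINT}:/data/experiment1/",
        f"{DST_ENDPOINT}:/scratch/experiment1/",
        "--recursive",
        "--label", "experiment1-to-rice",
    ],
    check=True,
)
```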

Website: https://researchcomputing.rice.edu/

Source: Melissa Cragin, Associate Vice President, Information Technology

Last updated: December 2025