Cyberinfrastructure Facilities

Rice provides researchers with individual password-protected e-mail and research storage accounts, all of which are backed up daily. Rice also provides secure electronic project-management and collaboration tools so that researchers can efficiently upload, download, store, track, and secure research data. Rice's Office of Information Technology (OIT) can recommend computing system solutions, provide necessary training, and troubleshoot computing issues as they arise, thereby minimizing downtime and helping investigators fulfill the University's research, education, outreach, and engagement missions.

Ken Kennedy Institute

The Ken Kennedy Institute brings together the Rice community to collaboratively solve critical global challenges by fostering innovations in computing and harnessing the transformative power of data.

The Institute enables new conversations, promotes interdisciplinary research in AI and data science, develops new technology to serve society, advances the current and future workforce, works closely with industry to bring promising ideas to market, and develops academic, industry, and community partnerships in the computational sciences.

The Ken Kennedy Institute is the virtual home of over two hundred faculty members and senior researchers at Rice University spanning computer science, mathematics, statistics, engineering, natural sciences, humanities, social sciences, business, architecture and music.

Shared Facilities Supporting Research Cyberinfrastructure

Rice has been a model for providing computational resources in an integrated way through the partnership of the Ken Kennedy Institute with the Center for Research Computing (CRC) in OIT. This approach has allowed Rice to achieve significant economies of scale, with the majority of Rice faculty now using shared computing resources and services both on and off campus.

Current shared facilities supporting research cyberinfrastructure at Rice include a state-of-the-art data center; a service center supporting high-performance computing (HPC) and high-throughput computing (HTC) resources; high-availability clustered storage; a service center supporting large-scale data storage and on-premises VM infrastructure; high-bandwidth network connectivity; and a repository management platform that includes institutional repositories for research and education data as well as access to global repositories.

Data Center:

Rice’s off-site scalable data center provides 8,000 sq. ft. of 48-inch raised floor, the capacity to support 6 megawatts of utility power (with 4 megawatts of 2N generator backup installed), 1.5 megawatts of N+1 uninterruptible power supply capacity, and 700 tons of 2N cooling.

Networking:

The new campus network provides gigabit connectivity to desktops, with multi-gigabit backbone links to a network core with over 1 terabit of aggregate bandwidth. The NSF-funded Science DMZ supports moving data between Rice and its collaborators at 100 Gbps without the friction typically imposed by a university's general-purpose network. The Science DMZ supports several Data Transfer Nodes (DTNs) that leverage globus.org.
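
As an illustration, a transfer between DTN endpoints can be scripted with the Globus Python SDK. The following is a minimal sketch only; the client ID, endpoint UUIDs, and paths are placeholders rather than actual Rice identifiers.

import globus_sdk

# Placeholders -- substitute a registered Globus app client ID and the
# UUIDs of the source and destination endpoints (e.g., a campus DTN).
CLIENT_ID = "YOUR-NATIVE-APP-CLIENT-ID"
SRC_ENDPOINT = "SOURCE-ENDPOINT-UUID"
DST_ENDPOINT = "DESTINATION-ENDPOINT-UUID"

# Interactive native-app login to obtain a Globus Transfer access token.
auth_client = globus_sdk.NativeAppAuthClient(CLIENT_ID)
auth_client.oauth2_start_flow()
print("Log in at:", auth_client.oauth2_get_authorize_url())
tokens = auth_client.oauth2_exchange_code_for_tokens(input("Auth code: ").strip())
transfer_token = tokens.by_resource_server["transfer.api.globus.org"]["access_token"]

# Build and submit an asynchronous transfer task between the two endpoints.
tc = globus_sdk.TransferClient(authorizer=globus_sdk.AccessTokenAuthorizer(transfer_token))
tdata = globus_sdk.TransferData(source_endpoint=SRC_ENDPOINT,
                                destination_endpoint=DST_ENDPOINT,
                                label="Example DTN transfer")
tdata.add_item("/path/on/source/dataset/", "/path/on/destination/dataset/", recursive=True)
print("Submitted task:", tc.submit_transfer(tdata)["task_id"])

The transfer then runs asynchronously on the DTNs, and its status can be monitored from the Globus web interface or through the same SDK.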

Research Data Facility:

To accommodate the growing need to store and manage large data sets and facilitate collaboration among research groups, CRC deployed and is operating the Research Data Facility (RDF) funded by Rice. RDF consists of a combination of cloud-based and local storage that is robust, secure, and flexible enough to meet a wide range of uses, as well as scalable to meet future storage needs.

Cyberinfrastructure Plan:

Long-term planning for IT strategies is guided by Rice’s Cyberinfrastructure Plan. The Vice President for Information Technology and Chief Information Officer are responsible for the plan and its implementation.

Shared Research Computing Infrastructure:

Rice’s CRC, a group of eight professionals supported by the university, provides research computing support and currently manages Rice’s shared cluster infrastructure, described in the table below. User access is granted through sponsorship by a faculty member, with jobs submitted through the SLURM scheduler (a sample submission is sketched after the table). A modest user fee, charged to sponsors based on users and usage, defrays marginal recurring operational costs.

Name    Nodes   Cores   Processor   Vendor              Accelerators
NOTS    294     7,040   Intel       HPE & Dell          4 NVIDIA K80s & 32 V100s
ARIES   21      976     AMD         Penguin Computing   152 AMD MI50s

The systems are attached to GPFS or Lustre file systems that provide computational scratch and temporary project work space. Funding for the different systems comes from a combination of grants, industry donations, faculty-purchased nodes, and university investments.
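
To illustrate the access model described above, the sketch below shows how a sponsored user might submit a batch job through SLURM and run it from scratch space. It is a minimal example only; the resource requests, scratch path, and application are placeholders, not actual cluster settings.

import subprocess
import textwrap

# Hypothetical SLURM batch script; the resource requests, the "$SCRATCH"
# path, and ./my_simulation are placeholders for illustration only.
batch_script = textwrap.dedent("""\
    #!/bin/bash
    #SBATCH --job-name=example-run
    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=8
    #SBATCH --time=01:00:00
    #SBATCH --mem=16G

    # Run from scratch space on the shared GPFS/Lustre file systems.
    cd "$SCRATCH/my_project"
    srun ./my_simulation --input params.txt
""")

with open("job.sbatch", "w") as fh:
    fh.write(batch_script)

# Submit through the SLURM scheduler; sbatch reports the assigned job ID.
result = subprocess.run(["sbatch", "job.sbatch"],
                        capture_output=True, text=True, check=True)
print(result.stdout.strip())  # e.g. "Submitted batch job 123456"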

Non-HPC/HTC Research Computing:

The Owl Research Infrastructure Open-Nebula (ORION) environment, deployed in 2018, serves users and use cases that need more interactive computing and server infrastructure. ORION is a general-purpose resource and plays a key role in enabling broad, interactive computational exploration at smaller scale.

Source:  Erik Engquist, Center for Research Computing.

Last Updated:  August 2021.