2021 Research Technologies

TECHNOLOGY FOR WORLD-CLASS RESEARCH

Research Technologies provides key technology infrastructure and services to support Arizona’s world-class researchers.

Compute Capacity and Streamlined Efficiencies Get Even Better

In FY21, continued investment in our research community resulted in a new high performance computing system assigned the moniker “Puma.” With this added system, available research compute time has more than doubled. El Gato, Ocelote, and Puma together provide each campus researcher with 113,000 CPU hours of compute per month at no charge—nearly 40 times the computational power of a laptop.

Researchers can also submit jobs for additional “windfall” hours that are available whenever there is any idle capacity.

The High Performance Computing team spent time streamlining and standardizing access to and use of these resources. They created a common scheduler for submitting jobs and made storage and applications shared across all three systems. This makes it much simpler for researchers to get trained on Research Data Center resources and gives them greater flexibility when using the systems.
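For illustration, submitting a job to the common scheduler could look like the following minimal Python sketch. It assumes the scheduler is SLURM (a common choice at university HPC centers) and that its sbatch command is available; the partition names, resource requests, and simulation command are hypothetical placeholders rather than the actual Research Data Center configuration.

    import subprocess
    import textwrap

    # Hypothetical batch script: "standard" stands in for time charged to the
    # monthly allocation, "windfall" for runs on otherwise idle capacity.
    batch_script = textwrap.dedent("""\
        #!/bin/bash
        #SBATCH --job-name=example_sim
        #SBATCH --ntasks=94
        #SBATCH --time=12:00:00
        #SBATCH --partition=standard
        ##SBATCH --partition=windfall

        srun ./run_simulation input.cfg
        """)

    # Write the script to disk and hand it to the scheduler.
    with open("example_sim.slurm", "w") as handle:
        handle.write(batch_script)
    subprocess.run(["sbatch", "example_sim.slurm"], check=True)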

The three systems are now interconnected and can exchange data quickly, so researchers can run millions of small calculations in parallel, such as simulating 12 million galaxies over 400 million years.
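As a simplified sketch of that many-small-independent-calculations pattern on a single node, using Python’s standard multiprocessing module (the evolve_galaxy function and its workload are hypothetical placeholders, not the researchers’ actual code):

    from multiprocessing import Pool

    def evolve_galaxy(galaxy_id: int) -> float:
        # Placeholder for one small, independent calculation, such as
        # advancing a single model galaxy by one time step.
        return galaxy_id * 0.001

    if __name__ == "__main__":
        # Each item is independent, so the work spreads across every core the
        # scheduler grants; on a cluster the same idea scales across nodes.
        with Pool() as pool:
            results = pool.map(evolve_galaxy, range(1_000_000))
        print(f"finished {len(results)} calculations")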

Mapping Bennu for OSIRIS-REx

In FY21, the OSIRIS-REx mission reached a milestone supported by the power of the University’s supercomputing systems. Landing the spacecraft on the rough surface of the asteroid Bennu required complex computational modeling to select the right landing site. Lunar and Planetary Laboratory research professor Mike Nolan realized that the landing safety algorithm would not run quickly enough on a laptop to meet the mission’s time window. He suggested the team move the project to the Ocelote supercomputer in the Research Data Center, where they could run multiple simulations in parallel and at higher resolution.

This greater processing capability allowed the team to take the sites identified as the best places to collect samples, obtain even higher-resolution photos, and re-run the calculations. With the safest approach pre-calculated, OSIRIS-REx successfully captured a sample from the primary sample site, “Nightingale,” in Hokioi crater on Bennu on October 20, 2020, and is now headed back to Earth.

Dark Matter Discoveries

Arizona theoretical astrophysicists have discovered a new way to study space. Using a supercomputer to create a model of what theory says a phenomenon should look like, they can see whether observations match the prediction. This approach was borne out in the first photograph of a black hole and has now been applied to dark matter.

Since dark matter can’t be seen with the human eye, associate professor Gurtina Besla, doctoral student Nicolás Garavito-Camargo, and their team turned to computational modeling. They used the popular “cold dark matter” theory to predict what happened when the Large Magellanic Cloud (LMC) traveled through the outer edges of the Milky Way galaxy, then used Research Data Center supercomputing to illustrate the wake the LMC would leave behind as the Milky Way’s dark matter dragged on it.

Harvard astronomers have been making observations of the area and confirmed what the Arizona computer model predicted. The Besla Group will continue their studies using different models for the dark matter particle in simulations on the new Puma system.

 

Learn more about HPC at rc.arizona.edu

  • Total Sponsored Research Expenditures by Investigators Using HPC: $395M
  • Top Principal Investigators Using HPC: 78%

Research Data Center Usage

  • Principal Investigators Using HPC Systems: 449
  • Active Root Awards Using HPC Systems: 1,791
  • Active Researchers (All Users) Using HPC: 1,545

Supercomputing Capacity

  • Total Cores of All HPC Systems: 38.9K
  • Number of Puma Cores: 23.6K
  • Total Memory in Puma: 128 TB

Research and Faculty Computing

  • Hours per Month Faculty Compute Allocation: 113K
  • CPU Hours per Month per Faculty Researcher Across All Clusters: 13.8K
  • Yearly Faculty Compute Hours Allocation: 1.356M

Services

  • Supercomputing (HPC)
  • Regulated Research Environment
  • Research Support Services
  • UAVITAE