Harnessing Supercomputers for Computational Materials Science

A new grant for more than $1.8 million will help materials scientists complete time-intensive quantum mechanical calculations by connecting multiple computational libraries

Duke University researchers and colleagues from the University of California, Berkeley have secured more than $1.8 million from the National Science Foundation to help materials scientists around the world solve a high school math problem in linear algebra.

The problem in question is solving a system of many equations, a so-called eigenvalue problem. As long as there are only a couple of variables and a few equations, the answer can be found by anyone with an undergraduate background in linear algebra using a pencil and a piece of paper. But when the complexity escalates to millions of variables and equations, it can take the world’s fastest computers days to produce a set of solutions.
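
To put the problem in symbols (an illustration, not taken from the article): for an n-by-n matrix A built from the atomic model, the task is to find the numbers λ and vectors v that satisfy

    A v = λ v.

Standard dense algorithms for this require on the order of n³ arithmetic operations, which is why matrices with millions of rows and columns can occupy even the fastest machines for days.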

This is the challenge faced by materials scientists like Volker Blum, associate professor of mechanical engineering and materials science and of chemistry at Duke, who want to use models of atomic structure to predict a new material’s properties.

“We’re never going to be able to accurately model all of the atoms in a solar cell,” said Blum, who is the lead investigator on the new grant. “But there are ways of modeling a few thousand atoms in a system and extracting sensible results for a material on a scale relevant to the real world.”

These methods are implemented in a handful of computational libraries built by researchers around the world. With names like the EigensoLvers for Petascale Applications (ELPA) library, the Orbital Minimization Method (OMM), or the Pole EXpansion and Selective Inversion (PEXSI) library, these open-source packages use different mathematical tricks and programming techniques to reduce the computational time required to find a decent answer.
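
For a sense of the baseline these libraries improve on, the sketch below is an illustration only: it uses SciPy's general-purpose dense routine scipy.linalg.eigh, not any of the project's libraries, to solve a small symmetric eigenvalue problem directly. The cost of this direct approach grows roughly with the cube of the matrix size, which is what makes the specialized libraries necessary at realistic scales.

    import numpy as np
    from scipy.linalg import eigh

    # Build a small random symmetric matrix standing in for the kind of
    # matrices that arise when modeling a material's atomic structure.
    n = 1000
    A = np.random.rand(n, n)
    A = (A + A.T) / 2  # symmetrize

    # Dense direct solve: fine at this size, but the cost grows roughly
    # as n**3, which is what the specialized libraries are designed to
    # reduce or spread across a supercomputer.
    eigenvalues, eigenvectors = eigh(A)
    print(eigenvalues[:5])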

The challenge for researchers, then, is to figure out which library will work best for their particular problem and to plug their equations into it.

Blum and his colleagues—including Jianfeng Lu, assistant professor of mathematics, chemistry and physics at Duke, Lin Lin of UC Berkeley, Chao Yang of Lawrence Berkeley National Laboratory, Alvaro Vazquez-Mayagoitia of Argonne National Laboratory and a large group of already committed “stakeholder” projects around the world—are looking to automate that challenge.

With the new grant, the researchers will build a platform that ties together the different computational libraries, making it easier for materials scientists to quickly solve their large eigenvalue problems.

“We'll provide the infrastructure so our colleagues around the world don’t have to go in and adjust the nitty-gritty details each time they want to run one of these programs,” said Blum. “Researchers can put in the system they need solved, and our platform will determine which library will work best and control enough of the process to make people’s lives easier.”
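
The article does not describe the platform's programming interface; purely to illustrate the idea of a single entry point that picks a solver behind the scenes, a toy sketch might look like the following, with invented names and an invented selection rule.

    import numpy as np
    from scipy.linalg import eigh

    # Hypothetical illustration only: one front door that inspects the
    # problem and decides which back-end to use. The names and the
    # selection rule are invented for this example; the real platform's
    # interface is not shown in the article.
    def solve(matrix, states_needed):
        n = matrix.shape[0]
        if states_needed < n // 20:
            backend = "sparse/iterative library"  # only a few states wanted
        else:
            backend = "dense library"             # most of the spectrum wanted
        # This toy version always falls back to SciPy's dense routine;
        # a real platform would dispatch to libraries such as ELPA,
        # OMM or PEXSI instead.
        values, vectors = eigh(matrix)
        return backend, values[:states_needed], vectors[:, :states_needed]

    if __name__ == "__main__":
        A = np.random.rand(200, 200)
        A = (A + A.T) / 2
        picked, vals, vecs = solve(A, 10)
        print(picked, vals)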

The project will also try to keep pace with the future of computing by retaining the ability to add new libraries and use new technologies. Through partnerships with industry leaders NVIDIA and Intel, the platform will be able to extend to emerging computational architectures like graphics processing units (GPUs) and Many Integrated Core (MIC) processors, which should further decrease computing times in the future.

The project will have close ties to a separate “Electronic Structure Library” community effort, backed by Europe’s “Centre Européen de Calcul Atomique et Moléculaire” (CECAM) organization. Through these connections and the new four-year project, Blum hopes to bring back to the United States some of the specialized computing expertise that has shifted to Europe over the past two decades.

“With this project, we hope that we can, to some extent, build a new leg to stand on,” said Blum. “It won’t be the only leg by any means, but hopefully it’s going to be an important one.”