Vivek Sarkar, Ph.D.

Parallel Computing
Georgia Institute of Technology
Recruited: 2017

There’s a stunning statistic Vivek Sarkar likes to share with his computer engineering students: over the course of his career, he’s seen computer processor speeds increase by a factor of 1,000. And he expects that, over the course of his students’ careers, they will instead see the number of processors in a single computer increase by the same factor.

One of the world’s leading experts in parallel computing, Sarkar is tackling the engineering challenges posed by multi-processor machines. He develops programming models aimed at unlocking computing speeds that are out of reach today.

When Sarkar tells his students about the exponential increase in processor speed, he’s referring to the combined effect of “Moore’s Law” and “Dennard scaling,” which together meant that computer chips roughly doubled in performance about every 18 months. The trend held steady for four decades, enabling the computing revolution that transformed the world.

But in the early 2000s, the trend began to slow. Today, single-processor speeds have largely plateaued, and Moore’s Law is effectively dead. To build more powerful computers, engineers need a different approach, and one major avenue is the move from sequential computing to parallel computing. Instead of faster processors, computers will have more of them, and those processors will become increasingly specialized.

Parallel computing requires engineers and programmers to rethink how a computer solves a problem. Until the early 2000s, most programming meant breaking a problem into minute steps for the computer to execute one after another. Now, programmers look for ways to solve problems in parallel, putting all the processors to work at the same time. In effect, the computer becomes a team, and like any team it needs coordination; without it, tasks run in the wrong order and bugs creep in.
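To make the contrast concrete, here is a minimal sketch in Java (an illustrative example, not code from Sarkar’s group): the same array sum computed first in sequential steps, then split across two workers whose partial results are combined at the end.

    import java.util.Arrays;

    public class SumDemo {
        public static void main(String[] args) throws InterruptedException {
            long[] data = new long[1_000_000];
            Arrays.fill(data, 1L);

            // Sequential: one processor walks the array step by step.
            long seqTotal = 0;
            for (long x : data) seqTotal += x;

            // Parallel: each worker sums its own half into its own slot,
            // and the partial results are combined afterward. Dividing the
            // work and merging the answers is the coordination the team
            // analogy describes.
            long[] partial = new long[2];
            Thread left = new Thread(() -> {
                for (int i = 0; i < data.length / 2; i++) partial[0] += data[i];
            });
            Thread right = new Thread(() -> {
                for (int i = data.length / 2; i < data.length; i++) partial[1] += data[i];
            });
            left.start();
            right.start();
            left.join();   // wait for both workers before reading their results
            right.join();

            System.out.println(seqTotal == partial[0] + partial[1]); // prints true
        }
    }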

One potential bug is called a “data race,” a problem Sarkar and his team are working to solve. A data race occurs when one processor is supposed to write a piece of information to memory and another is supposed to read it. If the reading processor gets there first, it finds stale data. Because the winner of the race depends on unpredictable timing, the bug is hard to pin down: the program can produce correct results on some runs and wrong ones on others. Sarkar is developing techniques for testing for data races, as well as programming models that rule them out in the first place.
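A minimal Java sketch shows the hazard (again an illustration, not code from Sarkar’s lab): two threads increment a shared counter with no coordination, so updates can be lost and the printed total changes from run to run.

    public class RaceDemo {
        static int counter = 0;   // shared memory that both threads touch

        public static void main(String[] args) throws InterruptedException {
            Runnable work = () -> {
                for (int i = 0; i < 100_000; i++) {
                    counter++;    // read-modify-write is not atomic: two threads
                }                 // can interleave and overwrite each other's work
            };
            Thread a = new Thread(work);
            Thread b = new Thread(work);
            a.start();
            b.start();
            a.join();
            b.join();
            // Expected 200000, but a racy run usually prints something smaller,
            // and the exact value varies from one run to the next.
            System.out.println(counter);
        }
    }

Declaring the counter as a java.util.concurrent.atomic.AtomicInteger, or guarding the increment with a synchronized block, removes this particular race; finding such bugs automatically in large programs is the hard problem that testing techniques like Sarkar’s target.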

Another example involves a concept called locality. Picture a human team working in the same room: each person’s workstation should sit near the tools that person needs, or everyone wastes time and energy walking around. Likewise, the more parallel processors a computer has, the more it matters where data is stored. For maximum efficiency, the data a processor accesses most frequently should be physically closest to it, so reads and writes are fast. The software, in other words, has to be written with the physical layout of the hardware in mind.
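A small Java sketch makes the idea tangible (illustrative only): both loops below compute the same sum over a grid, but the first visits memory in the order it is laid out, while the second jumps to a different row on every step. On most machines the cache-friendly loop runs noticeably faster.

    public class LocalityDemo {
        public static void main(String[] args) {
            int n = 2_000;
            double[][] grid = new double[n][n];
            double sum = 0;

            long t0 = System.nanoTime();
            for (int i = 0; i < n; i++)        // row by row: the next value the
                for (int j = 0; j < n; j++)    // loop needs is already close by
                    sum += grid[i][j];
            long rowMajorNs = System.nanoTime() - t0;

            long t1 = System.nanoTime();
            for (int j = 0; j < n; j++)        // column by column: every access
                for (int i = 0; i < n; i++)    // lands in a different row, far
                    sum += grid[i][j];         // away in memory
            long colMajorNs = System.nanoTime() - t1;

            System.out.println("row-major ns:    " + rowMajorNs);
            System.out.println("column-major ns: " + colMajorNs + " (sum=" + sum + ")");
        }
    }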

The programming leaps that Sarkar and his team have made are already influencing industry standards. The committees that govern major programming languages such as Java and C++ are applying Sarkar’s ideas, adding new language constructs that programmers can learn and use.
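Java’s parallel streams, added in Java 8, are one concrete example of the kind of construct this describes (shown here as a general illustration, not as Sarkar’s specific contribution): the programmer states what to compute, and the runtime divides the work across the available cores, handling the splitting and combining automatically.

    import java.util.stream.LongStream;

    public class StreamDemo {
        public static void main(String[] args) {
            long total = LongStream.rangeClosed(1, 1_000_000)
                                   .parallel()   // opt in to parallel execution
                                   .sum();       // the runtime splits the range
                                                 // across cores and combines it
            System.out.println(total);           // prints 500000500000
        }
    }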

Similarly, Sarkar and his team work on compilers, the programs that translate human-written code into instructions a machine can execute. The same piece of source code can run in very different ways depending on how it is compiled, so improving compilers can help code run better on parallel processors. Companies and open-source teams that build compilers apply Sarkar’s research and ideas to improve their products.

Sarkar’s career includes both prestigious appointments at universities and almost 20 years as a senior-level research leader at IBM. There, he led the development of X10, a proof-of-concept parallel programming language designed as a sort of sandbox for research and testing. The framework was hailed by peers as a major breakthrough.

Sarkar also established the Habanero Extreme Scale Software Research Laboratory, first at Rice University and now at Georgia Tech, which brings together researchers working toward the next major milestone in supercomputing: exascale computing, or a quintillion calculations per second. (A quintillion is a billion billions.) Today’s supercomputers operate at the petascale, a quadrillion calculations per second; exascale means three more zeroes, a thousandfold leap.

In his new role as co-director of Georgia Tech’s Center for Research into Novel Computing Hierarchies, Sarkar will also strengthen Tech’s strategic partnership with Oak Ridge National Laboratory, currently home to the fastest supercomputer in the world. Sarkar also serves on the U.S. Department of Energy’s Advanced Scientific Computing Advisory Committee (ASCAC). His group's research is supported by several government agencies and labs as well as private industry.

More powerful computers would enable profound, world-changing advances in countless fields: renewable energy, nuclear power, personalized medicine, drug discovery, urban design, space exploration and more. With the work of Sarkar and his colleagues, the next computing revolution gets closer every day.

Research

  • Exascale computing
  • Programming languages for parallel platforms
  • Locality and parallelism
  • Compilers and runtime systems
  • Machine learning

Straight from the Scholar

“My work has mainly focused on software for high-end computing. With the shift to Georgia Tech, where a number of researchers are working on hardware at the Center for Research into Novel Computing Hierarchies, I look forward to being more directly involved with evolving future hardware so that it can be used by future software. Being co-located with these researchers, my team can more easily anticipate future hardware trends.”