Jack Dongarra’s road to the Turing Award

Jack Dongarra has won the coveted Turing Award for 2021. The Association for Computing Machinery (ACM) said his work has driven high-performance computing and, in turn, influenced areas such as artificial intelligence, computer graphics, analytics, and deep learning. Often dubbed the Nobel Prize of computing, the award comes with a cash prize of USD 1 million.

ACM president Gabriele Kotsis said Dongarra’s trailblazing work goes back to 1979 and that he remains one of the foremost and most actively engaged leaders in the HPC community.

Background

Jack Dongarra got his bachelor’s degree in mathematics from Chicago State University and a master’s degree in computer science from the Illinois Institute of Technology. His doctorate is in applied mathematics from the University of New Mexico. He is now the University Distinguished Professor of Computer Science in the Electrical Engineering and Computer Science Department at the University of Tennessee and a Distinguished Research Staff member in the Computer Science and Mathematics Division at Oak Ridge National Laboratory (ORNL). Dongarra is also a Turing Fellow at the University of Manchester.

Dongarra has created open-source software libraries and standards that use linear algebra as an intermediate language. His libraries also introduced important innovations such as autotuning, mixed precision arithmetic, and batch computations. His major contributions include:

  • EISPACK is a collection of Fortran subroutines that compute the eigenvalues and eigenvectors of nine classes of matrices. These include complex general, complex Hermitian, real general, real symmetric, real symmetric banded, real symmetric tridiagonal, special real tridiagonal, generalized real, and generalized real symmetric matrices.
  • LINPACK was written in Fortran by Jack Dongarra, Jim Bunch, Cleve Moler (under whom Dongarra did his PhD) and Gilbert Stewart. It is a software library for performing numerical linear algebra on digital computers. LINPACK makes use of the BLAS (Basic Linear Algebra Subprograms) libraries for performing basic vector and matrix operations. Initially, the LINPACK benchmarks appeared as part of the LINPACK user’s manual.
  • Basic Linear Algebra Subprograms (BLAS) is a specification of routines that provide standard building blocks for performing basic vector and matrix operations. The Level 1 BLAS perform scalar, vector and vector-vector operations, the Level 2 BLAS perform matrix-vector operations, and the Level 3 BLAS perform matrix-matrix operations (a short sketch of the three levels follows this list).
  • Linear Algebra Package (LAPACK): Written in Fortran 90, LAPACK provides routines for solving systems of simultaneous linear equations, least-squares solutions of linear systems of equations, eigenvalue problems, and singular value problems (see the SciPy-based sketch after this list).
  • ScaLAPACK is a library of high-performance linear algebra routines for parallel distributed-memory machines. It solves dense and banded linear systems, least-squares problems, eigenvalue problems, and singular value problems. It is designed for heterogeneous computing and is portable to any computer that supports the Message Passing Interface (MPI) or Parallel Virtual Machine (PVM).
  • The TOP500 project was launched in 1993, and Dongarra has played an active role in compiling the list since its inception. The ranking uses the LINPACK benchmark to evaluate the performance of supercomputers (a toy version of such a measurement is sketched below).
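
To make the three BLAS levels concrete, here is a minimal Python sketch using SciPy’s low-level wrappers around the reference BLAS (the double-precision, d-prefixed routines). It is illustrative only, not a performance recipe.

    import numpy as np
    from scipy.linalg.blas import daxpy, dgemv, dgemm

    n = 4
    x = np.random.rand(n)
    y = np.random.rand(n)
    A = np.random.rand(n, n)
    B = np.random.rand(n, n)

    # Level 1 (vector-vector): z = a*x + y, the classic AXPY operation
    # (note: this low-level wrapper may update y in place).
    z = daxpy(x, y, a=2.0)

    # Level 2 (matrix-vector): w = alpha * A @ x, the GEMV operation.
    w = dgemv(1.0, A, x)

    # Level 3 (matrix-matrix): C = alpha * A @ B, the GEMM operation.
    C = dgemm(1.0, A, B)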
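
LAPACK itself is written in Fortran, but SciPy’s scipy.linalg module calls it under the hood, so the problem classes listed above can be sketched as below. The LAPACK driver names in the comments reflect typical defaults and are stated here as an assumption, not something from the article.

    import numpy as np
    from scipy import linalg

    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 5))
    b = rng.standard_normal(5)

    # Solve the square linear system A x = b (LU-based, GESV-style driver).
    x = linalg.solve(A, b)

    # Least-squares solution of an overdetermined system (GELSD by default).
    M = rng.standard_normal((8, 3))
    y = rng.standard_normal(8)
    coef, residues, rank, sv = linalg.lstsq(M, y)

    # Eigenvalues and eigenvectors of a symmetric matrix (SYEV family).
    S = A + A.T
    w, V = linalg.eigh(S)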
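
Finally, a toy LINPACK-flavored timing of the kind the TOP500 entry refers to: solve one dense system via LU factorization and convert the elapsed time to Gflop/s using the standard 2/3·n³ + 2·n² operation count. The problem size is arbitrary and tiny compared with a real HPL run.

    import time
    import numpy as np

    n = 2000
    rng = np.random.default_rng(42)
    A = rng.standard_normal((n, n))
    b = rng.standard_normal(n)

    t0 = time.perf_counter()
    x = np.linalg.solve(A, b)              # LU factorization + triangular solves
    elapsed = time.perf_counter() - t0

    flops = (2.0 / 3.0) * n ** 3 + 2.0 * n ** 2
    print(f"n={n}: {elapsed:.3f} s, {flops / elapsed / 1e9:.2f} Gflop/s")
    print("relative residual:", np.linalg.norm(A @ x - b) / np.linalg.norm(b))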

Autotuning

Dongarra worked on methods for automatically finding algorithmic parameters that produce linear algebra kernels of near-optimal efficiency, often outperforming vendor-supplied codes. He also pioneered the use of multiple precisions of floating-point arithmetic to obtain accurate solutions more quickly.
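
As a rough illustration of the autotuning idea (and not of Dongarra’s actual autotuning machinery), the sketch below times a naive blocked matrix multiply for a few candidate block sizes and keeps the fastest. Real autotuners explore far larger spaces of kernel parameters; the sizes here are arbitrary.

    import time
    import numpy as np

    def blocked_matmul(A, B, bs):
        """Naive blocked matrix multiply with block size bs (illustrative only)."""
        n = A.shape[0]
        C = np.zeros((n, n))
        for i in range(0, n, bs):
            for j in range(0, n, bs):
                for k in range(0, n, bs):
                    C[i:i+bs, j:j+bs] += A[i:i+bs, k:k+bs] @ B[k:k+bs, j:j+bs]
        return C

    n = 512
    rng = np.random.default_rng(0)
    A, B = rng.standard_normal((n, n)), rng.standard_normal((n, n))

    best = None
    for bs in (32, 64, 128, 256):          # candidate tuning parameters
        t0 = time.perf_counter()
        blocked_matmul(A, B, bs)
        elapsed = time.perf_counter() - t0
        if best is None or elapsed < best[1]:
            best = (bs, elapsed)
    print(f"best block size: {best[0]} ({best[1]:.3f} s)")

A single call to an optimized GEMM would of course beat this toy kernel outright; the point is only the empirical search-and-select loop that autotuning automates.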

HPL-AI benchmark

In 2019, Jack Dongarra, Piotr Luszczek, and Azzam Haidar proposed the High-Performance Linpack – Accelerator Introspection (HPL-AI) benchmark. The same year, the trio released its reference implementation, which evaluates supercomputers using the mixed-precision (16- or 32-bit) arithmetic common in data science.

Traditional HPC focuses on simulation runs for modeling phenomena in physics, chemistry, and biology, and the mathematical models used for these computations mostly require 64-bit accuracy. Machine learning models, by contrast, obtain the results they need at 32-bit or even lower floating-point precision. The HPL-AI benchmark therefore sits at the intersection of high-performance computing (HPC) and artificial intelligence (AI) workloads.
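
The sketch below illustrates the mixed-precision technique underlying HPL-AI rather than the reference implementation itself: solve in 32-bit arithmetic first, then iteratively refine the solution with 64-bit residuals until it reaches roughly double-precision accuracy. The test matrix and tolerance are arbitrary choices for the example.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 500
    A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned test matrix
    b = rng.standard_normal(n)

    A32 = A.astype(np.float32)
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)  # low-precision solve

    for _ in range(10):                                # iterative refinement in float64
        r = b - A @ x                                  # residual in high precision
        if np.linalg.norm(r) / np.linalg.norm(b) < 1e-12:
            break
        d = np.linalg.solve(A32, r.astype(np.float32)) # correction from low-precision solve
        x += d.astype(np.float64)

    print("relative residual:", np.linalg.norm(A @ x - b) / np.linalg.norm(b))

A production code would factor the matrix once in low precision and reuse those LU factors for every refinement step; the repeated solve calls above are only for brevity.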
