Communication overlapping Krylov subspace methods for distributed memory systems

Material type: Text. Language: en. Publication details: Bangalore: Indian Institute of Science, 2022. Description: xv, 176 p.; e-Thesis; col. ill.; 29.1 cm × 20.5 cm; 2.962 MB. Subject(s): DDC classification:
  • 004 TIW
Online resources: Dissertation note: PhD; 2022; Computational and Data Sciences. Summary: Many high performance computing applications in computational fluid dynamics, electromagnetics, etc., need to solve a linear system of equations $Ax=b$. For linear systems where $A$ is typically large and sparse, Krylov Subspace Methods (KSMs) are used. In this thesis, we propose communication overlapping KSMs. We start with the Conjugate Gradient (CG) method, which is used when $A$ is sparse symmetric positive definite. Recent variants of CG include the Pipelined CG (PIPECG) method, which overlaps the allreduce in CG with independent computations, i.e., one preconditioner application (PC) and one sparse matrix-vector product (SPMV). As we move towards the exascale era, the time for global synchronization and communication in the allreduce grows with the large number of cores available in exascale systems, and the allreduce time becomes the performance bottleneck, leading to poor scalability of CG. It therefore becomes necessary to reduce the number of allreduces in CG and to overlap the larger allreduce time with more independent computations than PIPECG provides. Towards this goal, we have developed PIPECG-OATI (PIPECG-One Allreduce per Two Iterations), which reduces the number of allreduces from three per iteration to one per two iterations and overlaps it with two PCs and two SPMVs. For better scalability with more overlapping, we also developed the Pipelined s-step CG method, which reduces the number of allreduces to one per s iterations and overlaps it with s PCs and s SPMVs. We compared our methods with state-of-the-art CG variants on a variety of platforms and demonstrated that our methods give 2.15x–3x speedup over the existing methods. We have also extended our research beyond the parallelization of CG on multi-node CPU systems in two directions.
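The synchronization cost the abstract refers to can be seen in the structure of standard CG itself. The following is a minimal serial NumPy sketch (not the thesis implementation): in a distributed-memory run, each dot product over a partitioned vector becomes an MPI allreduce, and there are multiple such reductions per iteration, which is exactly the cost the pipelined variants hide behind the SPMV and preconditioner work.

```python
import numpy as np

def cg(A, b, tol=1e-10, max_iter=1000):
    """Standard unpreconditioned Conjugate Gradient for SPD A.

    Each np.dot below would be an MPI allreduce in a distributed
    run; the SPMV (A @ p) is the independent local computation that
    pipelined variants overlap with those reductions.
    """
    x = np.zeros_like(b)
    r = b - A @ x              # initial residual
    p = r.copy()               # initial search direction
    rs_old = np.dot(r, r)      # reduction
    for _ in range(max_iter):
        Ap = A @ p                       # SPMV: local rows + halo exchange
        alpha = rs_old / np.dot(p, Ap)   # reduction
        x += alpha * p
        r -= alpha * Ap
        rs_new = np.dot(r, r)            # reduction (also convergence check)
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x
```

For a small SPD system such as A = [[4, 1], [1, 3]], b = [1, 2], the iteration converges to the same solution as a direct solve.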
Firstly, we have developed communication overlapping variants of KSMs other than CG, including the Conjugate Residual (CR), Minimum Residual (MINRES) and BiConjugate Gradient Stabilised (BiCGStab) methods, for matrices with different properties. The pipelined variants give up to 1.9x, 2.5x and 2x speedup over the state-of-the-art MINRES, CR and BiCGStab methods, respectively. Secondly, we developed communication overlapping CG variants for GPU-accelerated nodes, where we proposed and implemented three hybrid CPU-GPU execution strategies for the PIPECG method. The first two strategies achieve task parallelism and the third achieves data parallelism. Our experiments on GPUs showed that our methods give 1.45x–3x average speedup over existing CPU- and GPU-based implementations. The third method gives up to 6.8x speedup for problems that cannot fit in GPU memory. We also implemented GPU-related optimizations for the PIPECG-OATI method and showed performance improvements over other GPU implementations of PCG and PIPECG on multiple nodes with multiple GPUs.
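The overlap principle common to all of these pipelined variants can be illustrated without MPI. In the toy sketch below (an illustration only, with hypothetical helper names), a background Python thread stands in for a nonblocking MPI_Iallreduce: the reduction is started, the independent SPMV runs while it is "in flight", and the result is awaited only when actually needed (the MPI_Wait analogue).

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def pipelined_step(A, p, r):
    """One illustrative pipelined step: launch the global reduction
    for two dot products, overlap it with the independent SPMV, and
    collect the reduction result only when it is needed.
    """
    with ThreadPoolExecutor(max_workers=1) as pool:
        # Stand-in for MPI_Iallreduce: the dot products proceed
        # "in the background" while the SPMV executes.
        reduction = pool.submit(lambda: (np.dot(r, r), np.dot(p, r)))
        Ap = A @ p                    # independent SPMV, overlapped
        rr, pr = reduction.result()   # wait, i.e. the MPI_Wait analogue
    return Ap, rr, pr
```

In the actual methods the overlapped work is one or more preconditioner applications and SPMVs per reduction, and the reduction spans all MPI ranks rather than a local thread.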
Holdings
Item type: E-BOOKS; Current library: JRD Tata Memorial Library; Status: Available; Barcode: ET00004

Includes bibliographical references and index


