Efficient Solutions for High-Dimensional PDEs Using Low-Rank Tensor Decompositions and Their Implementation on GPUs

Researcher(s)

  • James Mou, Mathematics, University of Delaware
  • Issac Castro, Mathematics, University of Delaware

Faculty Mentor(s)

  • Jingmei Qiu, Department of Mathematical Sciences, University of Delaware
  • William Sands, Department of Mathematical Sciences, University of Delaware

Abstract

Solving high-dimensional partial differential equations (PDEs) is a formidable challenge across many scientific computing domains, as the cost of grid-based solvers grows exponentially with dimension. Our goal is to explore low-rank tensor decompositions as a way to solve PDEs efficiently when the solution exhibits low-rank structure. We apply a novel low-rank space-time decomposition to the heat equation, using a Krylov method over the 2D spatial and 1D temporal dimensions. This approach reduces memory and computational requirements compared with traditional solvers, and our numerical results demonstrate its scalability and efficiency.
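
To make the idea concrete, the sketch below evolves a 2D heat equation while keeping the solution in factored low-rank form U ≈ XYᵀ, re-compressing the rank after each step with a QR + SVD truncation. This is a minimal NumPy illustration, not the solver described above: a simple explicit Euler step stands in for the 2D-space and 1D-time Krylov method, and the grid size, time step, and tolerance are illustrative choices.

```python
import numpy as np

def lap1d(n, h):
    """Second-order finite-difference 1D Laplacian with Dirichlet boundaries."""
    return (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
            + np.diag(np.ones(n - 1), -1)) / h**2

def truncate(Xs, Ys, tol=1e-8):
    """Re-compress U = Xs @ Ys.T to low rank via QR of the factors + SVD."""
    Qx, Rx = np.linalg.qr(Xs)
    Qy, Ry = np.linalg.qr(Ys)
    u, s, vt = np.linalg.svd(Rx @ Ry.T)
    r = max(1, int(np.sum(s > tol * s[0])))
    return Qx @ (u[:, :r] * s[:r]), Qy @ vt[:r].T

n = 128                      # interior grid points per dimension (illustrative)
h = 1.0 / (n + 1)
dt = 1e-5                    # small enough for explicit-Euler stability
D = lap1d(n, h)
x = np.linspace(h, 1.0 - h, n)

# Rank-1 initial condition u0(x, y) = sin(pi x) sin(pi y): U = X @ Y.T.
X = np.sin(np.pi * x)[:, None]
Y = np.sin(np.pi * x)[:, None]

for _ in range(200):
    # One Euler step of u_t = u_xx + u_yy in factored form:
    # U + dt*(D @ U + U @ D.T) = [X, dt*D@X, dt*X] @ [Y, Y, D@Y].T
    X, Y = truncate(np.hstack([X, dt * (D @ X), dt * X]),
                    np.hstack([Y, Y, D @ Y]))

print("numerical rank after 200 steps:", X.shape[1])
```

Because the true solution stays (numerically) rank one here, the truncation keeps the factors small, so only O(n·r) numbers are stored instead of the full n × n grid; that storage saving is what the low-rank approach exploits.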

We also conducted comprehensive comparisons of CPU and GPU implementations of basic linear algebra operations on the Perlmutter supercomputer. The results underscore the advantage of modern GPU hardware, particularly for dense linear algebra. Because the low-rank method has modest storage demands, it is well suited to GPU acceleration, making it a promising candidate for solving large-scale PDEs efficiently.
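
The abstract does not name the libraries used in the benchmark, so the following sketch assumes NumPy for the CPU path and CuPy for the GPU path (Perlmutter's GPU nodes carry NVIDIA A100s, on which CuPy dispatches dense matrix multiplies to cuBLAS). The matrix size is an illustrative choice.

```python
import time
import numpy as np

try:
    import cupy as cp   # GPU path; needs CUDA (e.g. the A100s on Perlmutter)
    HAVE_GPU = True
except ImportError:
    HAVE_GPU = False

n = 4096
A = np.random.rand(n, n)
B = np.random.rand(n, n)

t0 = time.perf_counter()
A @ B                                         # dense GEMM on the CPU (BLAS)
t_cpu = time.perf_counter() - t0
print(f"CPU GEMM ({n}x{n}): {t_cpu:.3f} s")

if HAVE_GPU:
    A_d, B_d = cp.asarray(A), cp.asarray(B)   # host-to-device transfer
    cp.cuda.Stream.null.synchronize()         # exclude transfer from the timing
    t0 = time.perf_counter()
    A_d @ B_d                                 # dense GEMM on the GPU (cuBLAS)
    cp.cuda.Stream.null.synchronize()         # kernels launch asynchronously
    t_gpu = time.perf_counter() - t0
    print(f"GPU GEMM ({n}x{n}): {t_gpu:.3f} s, speedup {t_cpu / t_gpu:.1f}x")
```

Synchronizing before and after the GPU multiply matters because CuPy launches kernels asynchronously; without it, the timer would measure only the kernel launch overhead rather than the computation itself.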

Looking forward, we aim to extend our approach to 3D Tucker decompositions, further exploring the potential of low-rank tensor methods in high-dimensional settings. Our ongoing efforts focus on enhancing GPU acceleration, optimizing key algorithm components, and demonstrating the practical advantages of low-rank tensor decompositions for complex scientific computing problems within the GEMS project.
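
As background for that extension, a Tucker decomposition writes a 3D tensor as a small core contracted with one orthonormal factor matrix per mode. Below is a minimal sketch of the classical truncated higher-order SVD (HOSVD), one standard way to compute such a decomposition; it is illustrative and not necessarily the algorithm planned for the project.

```python
import numpy as np

def hosvd(T, ranks):
    """Truncated HOSVD: one SVD per mode unfolding of the 3D tensor T."""
    factors = []
    for mode, r in enumerate(ranks):
        unfold = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        U, _, _ = np.linalg.svd(unfold, full_matrices=False)
        factors.append(U[:, :r])
    core = T
    for U in factors:
        # Contract the current leading mode with its factor; the compressed
        # mode is appended at the back, so after all three contractions the
        # core has shape (r1, r2, r3).
        core = np.tensordot(core, U, axes=(0, 0))
    return core, factors

# Usage: a random multilinear rank-(2, 2, 2) tensor is recovered exactly.
rng = np.random.default_rng(0)
T = np.einsum('ir,jr,kr->ijk',
              rng.standard_normal((20, 2)),
              rng.standard_normal((30, 2)),
              rng.standard_normal((40, 2)))
core, (U1, U2, U3) = hosvd(T, (2, 2, 2))
approx = np.einsum('abc,ia,jb,kc->ijk', core, U1, U2, U3)
print(np.linalg.norm(T - approx) / np.linalg.norm(T))  # ~1e-15
```

For a tensor of size n per dimension and Tucker ranks r, this stores O(r³ + 3nr) numbers instead of n³, which is the storage reduction that motivates pursuing the 3D extension on GPUs.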