COMP 5704: Parallel Algorithms and Applications in Data Science


School of Computer Science
Carleton University, Ottawa, Canada


Project Title: A Parallel Jacobi Gauss-Seidel Method with Dynamic Parallelization

Name: Nirav C. Pansuriya

E-Mail: niravchhaganbhaipan@cmail.carleton.ca


Project Outline: Computer architecture has become increasingly parallel in recent years, and modern GPUs, such as the NVIDIA GeForce GTX 280, provide enormous parallelism. The focus of researchers has therefore shifted from continual improvements in single-unit computation speed to exploiting parallelism. Systems of linear equations arise regularly in physics, mathematics, and engineering, and many scientific simulations require solving large-scale linear systems. However, solving such systems demands a significant amount of time and resources, such as memory, so finding algorithms that solve large-scale linear systems faster and with fewer resources remains an active research topic. The Jacobi and Gauss-Seidel (GS) methods are two well-known iterative methods for solving systems of linear equations. The Jacobi method is extremely simple to implement in a parallel environment, but it requires an excessive number of iterations to solve a large system of linear equations. The GS method is an improved version of the Jacobi method that can solve a large system in a small number of iterations; however, because it is sequential in nature, it is extremely difficult to implement in a parallel environment. In short, one method parallelizes easily but converges slowly, while the other converges quickly but resists parallelization. To address this issue, the authors of the base paper for this research work introduced the Parallel Jacobi-Embedded Gauss-Seidel (PJG) method, which can solve large systems of linear equations in a small number of iterations and is also simple to implement in a parallel environment.
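The Jacobi/Gauss-Seidel trade-off described above can be illustrated with a small NumPy sketch (illustrative only; this is not the PJG method from the base paper, and the test matrix is a hypothetical diagonally dominant system chosen so that both methods converge):

```python
import numpy as np

def jacobi(A, b, tol=1e-8, max_iter=10_000):
    """Jacobi iteration: every component of x is updated from the previous
    iterate, so all n updates are independent (easy to parallelize)."""
    x = np.zeros_like(b)
    D = np.diag(A)                 # diagonal entries
    R = A - np.diagflat(D)         # off-diagonal part
    for k in range(1, max_iter + 1):
        x_new = (b - R @ x) / D    # one fully data-parallel update
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new, k
        x = x_new
    return x, max_iter

def gauss_seidel(A, b, tol=1e-8, max_iter=10_000):
    """Gauss-Seidel iteration: each component update immediately uses the
    newest values, which speeds convergence but serializes the sweep."""
    n = len(b)
    x = np.zeros_like(b)
    for k in range(1, max_iter + 1):
        x_old = x.copy()
        for i in range(n):         # inherently sequential inner loop
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            return x, k
    return x, max_iter

# Hypothetical strictly diagonally dominant test system.
rng = np.random.default_rng(0)
n = 200
A = rng.standard_normal((n, n)) + n * np.eye(n)
b = rng.standard_normal(n)

x_j, it_j = jacobi(A, b)
x_gs, it_gs = gauss_seidel(A, b)
print("Jacobi iterations:      ", it_j)
print("Gauss-Seidel iterations:", it_gs)
```

On a system like this, Gauss-Seidel typically needs fewer sweeps than Jacobi, but its inner loop over i is inherently sequential, whereas the entire Jacobi update is a single data-parallel expression, which is the tension the PJG method is designed to resolve.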
The primary objective of this research is to further enhance the performance of the PJG method by applying the concept of dynamic parallelization. A second aim is to compare the proposed method (the PJG method with dynamic parallelization) against the Jacobi, GS, and PJG methods. I plan to implement all of these algorithms in CUDA and execute them on a GPU to compare their performance with the proposed technique.

Startup Paper(s): A Parallel Jacobi-Embedded Gauss-Seidel Method [PDF]

Deliverables:

Relevant References: