Volume 21, pp. 47-65, 2005.

Communication balancing in parallel sparse matrix–vector multiplication

Rob H. Bisseling and Wouter Meesen

Abstract

Given a partitioning of a sparse matrix for parallel matrix–vector multiplication, which determines the total communication volume, we try to find a suitable vector partitioning that balances the communication load among the processors. We present a new lower bound for the maximum communication cost per processor, an optimal algorithm that attains this bound for the special case where each matrix column is owned by at most two processors, and a new heuristic algorithm for the general case that often attains the lower bound. This heuristic algorithm tries to avoid raising the current lower bound when assigning vector components to processors. Experimental results show that the new algorithm often improves upon the heuristic algorithm that is currently implemented in the sparse matrix partitioning package Mondriaan. Trying both heuristics combined with a greedy improvement procedure solves the problem optimally in most practical cases. The vector partitioning problem is proven to be NP-complete.
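
To make the general idea concrete, the following is a minimal Python sketch of a greedy vector-partitioning heuristic in the spirit described above. It is not the authors' algorithm or the Mondriaan implementation; the function name, input format, and simplified sends-plus-receives cost model are assumptions for illustration only.

    from collections import defaultdict

    def greedy_vector_partition(column_owners):
        """Assign each vector component v_j to one processor owning column j.

        column_owners: dict mapping column index j to the set of processor ids
        that own at least one nonzero in column j.
        Returns: dict mapping j to the chosen processor.

        Simplified cost model (an assumption, not the paper's exact metric):
        the owner of v_j sends it to the other |P_j| - 1 owners, each of which
        receives one word; we track sends + receives per processor and greedily
        try not to raise the current maximum.
        """
        load = defaultdict(int)   # communication words charged to each processor
        assignment = {}
        # place the most contested columns (largest owner sets) first
        for j in sorted(column_owners, key=lambda j: -len(column_owners[j])):
            owners = column_owners[j]
            if len(owners) <= 1:
                # no communication needed; assign to the sole owner, if any
                if owners:
                    assignment[j] = next(iter(owners))
                continue
            # pick the owner whose current load is smallest
            p = min(owners, key=lambda q: load[q])
            assignment[j] = p
            load[p] += len(owners) - 1    # p sends v_j to the other owners
            for q in owners:
                if q != p:
                    load[q] += 1          # q receives v_j
        return assignment

    # small usage example: columns 0 and 1 are each shared by two processors
    print(greedy_vector_partition({0: {0, 1}, 1: {1, 2}, 2: {2}}))
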


Key words

vector partitioning, matrix–vector multiplication, parallel computing, sparse matrix, bulk synchronous parallel

AMS subject classifications

05C65, 65F10, 65F50, 65Y05

ETNA articles which cite this article

Vol. 37 (2010), pp. 263-275. Joachim Georgii and Rüdiger Westermann: A streaming approach for sparse matrix products and its application in Galerkin multigrid methods
