Computational Science Technical Note CSTN-096

Hypercubic Storage Layout and Transforms in Arbitrary Dimensions using GPUs and CUDA

K. A. Hawick and D. P. Playne

Archived June 2009

Abstract

Many simulations in the physical sciences are expressed in terms of rectilinear arrays of variables. It is attractive to develop such simulations for use in one, two, three or an arbitrary number of physical dimensions, and in a manner that supports exploitation of data-parallelism on fast modern processing devices. We report on data layouts and transformation algorithms that support both conventional and data-parallel memory layouts. We present our implementations expressed both in conventional serial C code and in NVIDIA's Compute Unified Device Architecture (CUDA) concurrent programming language for use on General Purpose Graphics Processing Units (GPGPUs). We discuss: general memory layouts; specific optimisations possible for dimensions that are powers of two; and common transformations such as inverting, shifting and crinkling. We present performance data for some illustrative scientific applications of these layouts and transforms using several current GPU devices, and discuss the code and performance scalability of this approach.

Keywords: data-parallelism; GPUs; CUDA; shifting; crinkling; hypercubic indexing.

Full Document Text: PDF version.

Citation Information: BibTeX database for CSTN Notes.

BibTeX reference:

@ARTICLE{CSTN-096,
  author = {K. A. Hawick and D. P. Playne},
  title = {{Hypercubic Storage Layout and Transforms in Arbitrary Dimensions
	using GPUs and CUDA}},
  journal = {Concurrency and Computation: Practice and Experience},
  year = {2011},
  volume = {23},
  pages = {1027--1050},
  number = {10},
  month = {July},
  doi = {10.1002/cpe.1628},
  institution = {Computer Science, Massey University},
  timestamp = {2009.09.06}
}

