Computational Science Technical Note CSTN-162

Directions in Multiple Device Parallel Acceleration for High Performance Applications

K. A. Hawick and D. P. Playne

July 2012

Abstract

Graphical Processing Units (GPUs) and other accelerator devices have proved a valuable means of speeding up many applications, using a data-parallel programming model to augment a conventional CPU. It is becoming economically viable to host multiple GPUs on a single CPU host node, and clusters built from such multi-GPU-assisted nodes are becoming prevalent. We are experimenting with hosting up to eight GPUs on a single CPU using PCI bus extension technology, and report on the attainable performance of a range of simulation and complex-systems applications. We are exploring various combinations of multi-core CPUs running different thread-management software systems to manage their multiple GPU accelerators. We anticipate that this approach offers significant flexibility and scalability, and believe it has major implications for future-generation HPC systems, including clusters and supercomputer facilities.

Keywords: GPU; multi-core; data-parallelism; multiple device.

Full Document Text: PDF version.

Citation Information: BibTeX database for CSTN Notes.

BibTeX reference:

@TECHREPORT{CSTN-162,
  author = {K. A. Hawick and D. P. Playne},
  title = {Directions in Multiple Device Parallel Acceleration for High Performance
	Applications},
  institution = {Computer Science, Massey University},
  year = {2012},
  number = {CSTN-162},
  address = {Albany, North Shore 102-904, Auckland, New Zealand},
  month = {July},
  note = {Presented at NZ eResearch Symposium, Victoria Univ. Wellington, 4-6
	July 2012},
  timestamp = {2012.12.01}
}

