Enabling a High Throughput Real Time Data Pipeline for a Large Radio Telescope with GPUs

Richard G. Edgar, Mike A. Clark, Kevin Dale, Daniel A. Mitchell, Stephen M. Ord, Randall B. Wayth, Hanspeter Pfister, and Lincoln J. Greenhill.

Computer Physics Communications, 2010.

Abstract

The Murchison Widefield Array (MWA) is a next-generation radio telescope currently under construction in the remote Western Australia Outback. Raw data will be generated continuously at 5 GiB/s, grouped into 8 s cadences. This high throughput motivates the development of on-site, real time processing and reduction in preference to archiving, transport and off-line processing. Each batch of 8 s data must be completely reduced before the next batch arrives. Maintaining real time operation will require a sustained performance of around 2.5 TFLOP/s (including convolutions, FFTs, interpolations and matrix multiplications). We describe a scalable heterogeneous computing pipeline implementation, exploiting both the high computing density and FLOP-per-Watt ratio of modern GPUs. The architecture is highly parallel within and across nodes, with all major processing elements performed by GPUs. Necessary scatter-gather operations along the pipeline are loosely synchronized between the nodes hosting the GPUs. The MWA will be a frontier scientific instrument and a pathfinder for planned peta- and exascale facilities.
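To give a flavour of the kind of GPU work the abstract refers to (FFTs performed on each 8 s batch of data), the following is a minimal illustrative sketch, not the authors' pipeline code: it runs a batch of 1D complex-to-complex FFTs on the GPU with cuFFT. The transform length and batch count are hypothetical placeholders, chosen only to make the example self-contained.

    // Illustrative sketch only, not the MWA pipeline: a batched FFT stage
    // of the kind such a GPU pipeline performs on each data cadence.
    // fftSize and batch are hypothetical placeholder values.
    #include <cstdio>
    #include <cuda_runtime.h>
    #include <cufft.h>

    int main() {
        const int fftSize = 1024;   // hypothetical transform length
        const int batch   = 4096;   // hypothetical number of transforms per batch

        // Allocate and zero device memory for the complex samples.
        cufftComplex* d_data = nullptr;
        cudaMalloc(&d_data, sizeof(cufftComplex) * fftSize * batch);
        cudaMemset(d_data, 0, sizeof(cufftComplex) * fftSize * batch);

        // Plan a batch of 1D complex-to-complex transforms.
        cufftHandle plan;
        cufftPlan1d(&plan, fftSize, CUFFT_C2C, batch);

        // Execute the transforms in place on the GPU and wait for completion.
        cufftExecC2C(plan, d_data, d_data, CUFFT_FORWARD);
        cudaDeviceSynchronize();

        cufftDestroy(plan);
        cudaFree(d_data);
        printf("Ran %d FFTs of length %d on the GPU\n", batch, fftSize);
        return 0;
    }

In a real pipeline of the sort described, stages like this would be chained with the convolution, interpolation and matrix-multiplication steps on each node, with scatter-gather between nodes handled by the host code.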
