HPCMP Users Group Conference

Abstract

This paper introduces a new parallel programming model motivated by: 1) the concept that computation should move to, and execute near, the global data that it accesses; 2) a set of extended memory semantics that provide fine-grained global synchronization; 3) architectural support for fast, lightweight thread creation/destruction/migration; and 4) the need for a high-performance language that gives the programmer transparency into the generated code while protecting against low-level errors. Using pseudocode examples, we compare this new model to several other high-performance languages (Chapel, Fortress, and UPC) in terms of 1) expressibility of parallel structures, 2) facility in synchronizing communication to avoid race conditions, and 3) ability to diagnose and resolve performance issues that result from the mapping of these structures to hardware and system software. The new model, combined with appropriate architectural support, provides equal potential for expressibility and safety while giving the programmer more direct insight into the code that ultimately executes.
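
The second motivation, extended memory semantics for fine-grained global synchronization, is in the spirit of per-word full/empty tagging. As an illustrative analogue only, and not the paper's actual model or API, the C11 sketch below emulates a single full/empty-synchronized cell with an atomic flag; the names em_cell, writeef, and readfe are hypothetical.

/*
 * Minimal sketch (not from the paper): emulating full/empty-style
 * extended memory semantics with C11 atomics and threads. The flag
 * plays the role of a per-word full/empty bit: readfe() blocks until
 * the word is full, consumes it, and marks it empty; writeef() blocks
 * until it is empty, then fills it. All names here are hypothetical.
 */
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>
#include <threads.h>

typedef struct {
    _Atomic int full;   /* 0 = empty, 1 = full */
    int64_t     value;
} em_cell;

/* Block until the cell is empty, then store a value and mark it full. */
static void writeef(em_cell *c, int64_t v)
{
    while (atomic_load_explicit(&c->full, memory_order_acquire) != 0)
        thrd_yield();                       /* spin until empty */
    c->value = v;
    atomic_store_explicit(&c->full, 1, memory_order_release);
}

/* Block until the cell is full, then load the value and mark it empty. */
static int64_t readfe(em_cell *c)
{
    while (atomic_load_explicit(&c->full, memory_order_acquire) != 1)
        thrd_yield();                       /* spin until full */
    int64_t v = c->value;
    atomic_store_explicit(&c->full, 0, memory_order_release);
    return v;
}

static em_cell cell = { 0, 0 };

static int producer(void *arg)
{
    (void)arg;
    for (int64_t i = 1; i <= 5; ++i)
        writeef(&cell, i);                  /* each write waits for the consumer */
    return 0;
}

int main(void)
{
    thrd_t t;
    thrd_create(&t, producer, NULL);
    for (int i = 0; i < 5; ++i)
        printf("consumed %lld\n", (long long)readfe(&cell));
    thrd_join(t, NULL);
    return 0;
}

On hardware with native support for such semantics, the full/empty check would be part of the memory operation itself rather than a software spin loop, which is what allows the synchronization to stay fine-grained without the overhead shown here.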