Abstract
Optimizing collective I/O operations is of paramount importance for many data-intensive high performance computing applications. Despite the large number of algorithms published in the field, most current approaches tune each application scenario separately and do not offer a consistent, automatic method for identifying the internal parameters of collective I/O algorithms. Most notably, prior work has targeted the number of processes that actually access the file, the so-called aggregators. This paper introduces a novel runtime approach that determines the number of aggregator processes to be used in a collective I/O operation based on the file view, the process topology, the per-process write saturation point, and the actual amount of data written in a collective write operation. The algorithm is evaluated on two different file systems using multiple benchmarks. In more than 80\% of the test cases, our algorithm delivers performance close to the best performance obtained by hand-tuning the number of aggregators for each scenario.