Developing parallel algorithms efficiently requires careful management of concurrency across diverse hardware architectures. C++ executors provide a standardized interface that simplifies this development, allowing developers to write portable and uniform code. In some cases, however, they do not fully leverage hardware capabilities or optimally allocate resources for a specific workload, leading to performance inefficiencies. Our earlier conference paper [Adaptively Optimizing the Performance of HPX's Parallel Algorithms] introduced a preliminary strategy, based on core counts and chunking (workload partitioning), that dynamically optimizes workload distribution and resource allocation from runtime metrics and measured overheads, and integrated it into HPX's executor API. Building on that work, this paper introduces a more detailed model of the strategy and evaluates the efficiency of its implementation (as an HPX executor) across a wide range of compute-bound and memory-bound workloads, on different architectures, and with different algorithms. The results show consistent speedups across all tests, configurations, and workloads studied, delivering improved performance through the familiar and user-friendly C++ executor API. The paper also highlights how runtime-driven executor adaptation can simplify performance optimization without increasing the complexity of algorithm development.
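To illustrate the kind of interface the abstract refers to, the following minimal sketch shows how an HPX parallel algorithm is driven through the standard executor/execution-policy API, first with a fixed chunk size and then, in commented form, with a hypothetical adaptive executor. The name adaptive_core_chunk_executor is illustrative only and is not the paper's actual API; the HPX calls shown (hpx::for_each, hpx::execution::par.on(...).with(...), hpx::execution::static_chunk_size) are standard HPX facilities.

```cpp
#include <hpx/algorithm.hpp>
#include <hpx/execution.hpp>
#include <hpx/init.hpp>

#include <vector>

int hpx_main()
{
    std::vector<double> data(1'000'000, 1.0);

    // Conventional approach: a parallel executor with a manually chosen,
    // fixed chunk size. Picking good values is workload- and machine-specific.
    hpx::execution::parallel_executor exec;
    hpx::for_each(
        hpx::execution::par.on(exec).with(hpx::execution::static_chunk_size(1024)),
        data.begin(), data.end(),
        [](double& x) { x *= 2.0; });

    // Adaptive approach (hypothetical name, shown for illustration only):
    // an executor that selects the number of cores and the chunk size at
    // runtime from measured metrics and overheads would plug into the same
    // policy interface, leaving the algorithm call unchanged:
    //
    // hpx::for_each(hpx::execution::par.on(adaptive_core_chunk_executor{}),
    //               data.begin(), data.end(),
    //               [](double& x) { x *= 2.0; });

    return hpx::finalize();
}

int main(int argc, char* argv[])
{
    return hpx::init(argc, argv);
}
```

The point of the sketch is that adaptation happens behind the executor boundary: user code keeps the familiar C++ executor API while core and chunk decisions move into the runtime.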