Parallelization via Static and Dynamic Analyses for HLS

High-level synthesis (HLS) can be used to create hardware accelerators for compute-intensive software parts such as loop structures. Usually, this process requires a significant amount of user interaction to steer kernel selection and optimizations, which can be tedious and time-consuming. In this article, we present an approach that fully autonomously finds independent loop iterations and reductions to create parallelized accelerators. We combine static analysis with information available only at runtime to maximize the parallelism exploited by the created accelerators. For loops where we see potential for parallelism, we create fully parallelized kernel implementations. If static information does not suffice to deduce independence, then we assume independence at compile time. We verify this assumption with statically created checks that are evaluated dynamically at runtime, before the optimized kernel is used. Evaluating our approach, we achieve speedups for five out of seven benchmarks. With four loop iterations running in parallel, we achieve ideal speedups of up to 4x and an average speedup of 2.27x, both in comparison to an unoptimized accelerator.
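To make the idea of optimistic parallelization guarded by a runtime check more concrete, the following C sketch illustrates the general pattern. It is not taken from the paper's implementation; the kernel and check names (kernel_parallel, kernel_sequential, ranges_independent) are hypothetical, and the software loop unrolling only stands in for the hardware parallelism an HLS tool would generate.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical sequential kernel: the unoptimized baseline. */
static void kernel_sequential(int *dst, const int *src, size_t n)
{
    for (size_t i = 0; i < n; i++)
        dst[i] = src[i] * 2;
}

/* Hypothetical parallelized kernel: in hardware, four loop iterations
 * would execute concurrently; the unrolled body below merely models that. */
static void kernel_parallel(int *dst, const int *src, size_t n)
{
    size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        dst[i]     = src[i]     * 2;
        dst[i + 1] = src[i + 1] * 2;
        dst[i + 2] = src[i + 2] * 2;
        dst[i + 3] = src[i + 3] * 2;
    }
    for (; i < n; i++)
        dst[i] = src[i] * 2;
}

/* Statically generated check, evaluated at runtime: iterations are treated
 * as independent if the read and written address ranges do not overlap. */
static int ranges_independent(const int *dst, const int *src, size_t n)
{
    uintptr_t d = (uintptr_t)dst, s = (uintptr_t)src;
    uintptr_t bytes = (uintptr_t)(n * sizeof(int));
    return (d + bytes <= s) || (s + bytes <= d);
}

/* Dispatch: use the optimistic parallel kernel only if the runtime
 * check confirms the compile-time independence assumption. */
void run_kernel(int *dst, const int *src, size_t n)
{
    if (ranges_independent(dst, src, n))
        kernel_parallel(dst, src, n);    /* independence confirmed */
    else
        kernel_sequential(dst, src, n);  /* safe fallback */
}

int main(void)
{
    int src[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    int dst[8];
    run_kernel(dst, src, 8);
    for (size_t i = 0; i < 8; i++)
        printf("%d ", dst[i]);
    printf("\n");
    return 0;
}
```

The design choice mirrored here is that the check itself is derived statically (from the memory accesses of the loop), so at runtime only a cheap comparison decides between the parallelized and the fallback kernel.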


Publication

  • Florian Dewald, Johanna Rohde, Christian Hochberger and Heiko Mantel. Improving Loop Parallelization by a Combination of Static and Dynamic Analyses in HLS. In ACM Transactions on Reconfigurable Technology and Systems, 2022.
    [ BibTeX entry ]

Supplementary Material

  • Implementation and documentation: Download