We revisit parallel-innermost term rewriting as a model of parallel computation on inductive data structures and provide a corresponding notion of runtime complexity that is parametric in the size of the start term. We propose automatic techniques to derive both upper and lower bounds on the parallel complexity of rewriting that enable a direct reuse of existing techniques for sequential complexity. Our approach to finding lower bounds requires confluence of the parallel-innermost rewrite relation; thus, we also provide effective sufficient criteria for proving confluence. The applicability and the precision of the method are demonstrated by the relatively light effort needed to extend the program analysis tool AProVE and by experiments on numerous benchmarks from the literature.
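To give a flavor of the model of computation (a minimal sketch using a standard textbook TRS for addition and doubling, not taken from the article), consider the rules

$$\mathsf{plus}(\mathsf{0}, y) \to y \qquad \mathsf{plus}(\mathsf{s}(x), y) \to \mathsf{s}(\mathsf{plus}(x, y)) \qquad \mathsf{double}(x) \to \mathsf{plus}(x, x).$$

In the term $\mathsf{plus}(\mathsf{double}(\mathsf{s}(\mathsf{0})), \mathsf{double}(\mathsf{0}))$, both occurrences of $\mathsf{double}$ are innermost redexes, so a single parallel-innermost step contracts them simultaneously, whereas a sequential innermost derivation needs two steps for the same work. Counting such simultaneous contractions as one step is what makes the parallel notion of runtime complexity potentially strictly smaller than the sequential one.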
This article describes the *Confluence Framework*, a novel framework for proving and disproving confluence using a divide-and-conquer modular strategy, and its implementation in CONFident. Using this approach, we are able to automatically prove and disprove confluence of *Generalized Term Rewriting Systems*, where (i) only selected arguments of function symbols can be rewritten and (ii) a rather general class of conditional rules can be used. This includes, as particular cases, several variants of rewrite systems, such as (context-sensitive) *term rewriting systems*, *string rewriting systems*, and (context-sensitive) *conditional term rewriting systems*. The divide-and-conquer modular strategy allows us to combine, in a single proof tree, different techniques for proving confluence (modular decompositions, checking joinability of (conditional) critical pairs and variable pairs, transformations, etc.) together with the auxiliary tasks they require, such as proving joinability of terms or of conditional pairs.
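As a small illustration of one such joinability check (a standard textbook example, not from the article), consider the terminating TRS

$$\mathcal{R} = \{\, \mathsf{b} \to \mathsf{a},\ \ \mathsf{b} \to \mathsf{c},\ \ \mathsf{c} \to \mathsf{a} \,\}.$$

Overlapping the first two rules at the root yields the critical pair $\langle \mathsf{a}, \mathsf{c} \rangle$, which is joinable because $\mathsf{c} \to \mathsf{a}$; since $\mathcal{R}$ is terminating, joinability of all critical pairs implies confluence. Checks of this kind are among the auxiliary tasks (joinability of terms, joinability of conditional pairs) that the framework combines with the main confluence techniques in a proof tree.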