Prescriptive Process Monitoring (PresPM) recommends interventions during business processes to optimize key performance indicators (KPIs). In realistic settings, interventions are rarely isolated: organizations need to align sequences of interventions to jointly steer the outcome of a case. Existing PresPM approaches fall short in this respect. Many focus on a single intervention decision, while others treat multiple interventions independently, ignoring how they interact over time. Methods that do address these dependencies rely on either simulation or data augmentation to approximate the process for training a Reinforcement Learning (RL) agent, which can create a reality gap and introduce bias. We introduce SCOPE, a PresPM approach that learns aligned sequential intervention recommendations. SCOPE employs backward induction to estimate the effect of each candidate intervention action, propagating its impact from the final decision point back to the first. By leveraging causal learners, our method can utilize observational data directly, unlike methods that require constructing process approximations for reinforcement learning. Experiments on both an existing synthetic dataset and a new semi-synthetic dataset show that SCOPE consistently outperforms state-of-the-art PresPM techniques in optimizing the KPI. The novel semi-synthetic setup, based on a real-life event log, is provided as a reusable benchmark for future work on sequential PresPM.
PresPM uses machine learning to provide case-specific recommendations at different decision points during the execution of business processes. These recommendations concern interventions, such as managerial escalations or customer communications, that aim to improve KPIs, for example, throughput time or cost efficiency. PresPM holds the potential to move organizations from merely describing and predicting process behavior towards actively steering process executions [16]. In this paper, we use the terms intervention for any controllable action taken on an ongoing case, and intervention recommendation for the action suggested by a PresPM method at a decision point.
In many processes, intervention decisions are not isolated. Typically, a process contains multiple, interdependent decision points that jointly determine the outcome of a case. The effect of an earlier intervention depends on which interventions will be applied later in the same case. Optimizing intervention decisions one by one is therefore insufficient, because actions that look beneficial locally can undermine overall KPI performance. For example, in a marketing process aimed at maximizing revenue, the decision to offer a client a discount may depend on an earlier choice to send a promotional email and the client's response to that email. Together, this sequence of decisions shapes the revenue. Optimizing each decision independently, without considering the future, might make sending a promotional email appear only moderately valuable for some clients, even though following it with a discount could be highly effective. Methods for PresPM thus need to reason over sequences of decisions and their combined effect on the final KPI.
Most existing PresPM approaches do not yet offer such sequential support. First, many methods address only a single intervention scenario, even when multiple opportunities to intervene exist [3,4,9,23,24,25,26]. These approaches may improve performance for a single intervention scenario, but they do not coordinate multiple decisions over the full case. Second, some approaches handle sequential decisions but optimize each decision point in isolation. They focus on the immediate effect of the next intervention, for instance, on the remaining processing time, without explicitly aligning interventions across decision points. This can still lead to suboptimal end-to-end outcomes with respect to the final KPI [18]. Third, other methods for sequential intervention recommendations rely on process approximations, such as Markov decision processes (MDP) or data augmentation [1,5], to train an RL agent, and the resulting policies inherit any misspecification in these approximations, which may lead to biased or underperforming recommendations in practice.
To address these limitations, this paper makes two key contributions:
We propose SCOPE, a PresPM approach that combines causal learners with backward induction to learn causally grounded sequential intervention policies that are aligned across multiple decision points. For each decision point, a causal model estimates the effect of alternative interventions on the target KPI, given the observed process execution history. Backward induction then propagates the impact of later intervention decisions back to earlier ones by starting from the final decision point and recursively deriving the recommended action and its expected outcome at each preceding decision point. Operating directly on observational event logs, SCOPE uses causal learning to identify actions that were rarely chosen historically but are likely to improve the target KPI, without requiring process-specific simulators or log augmentation.
We provide an empirical evaluation on existing synthetic data and a new semi-synthetic setup based on a real-life event log. Our code and the novel semi-synthetic benchmark are made publicly available as a reusable resource for sequential PresPM in our GitHub repository. The results demonstrate that SCOPE consistently outperforms state-of-the-art PresPM techniques and highlight the importance of combining causal learning with backward induction for KPI optimization.
The remainder of this paper is structured as follows. Section 2 provides background and related work. Section 3 outlines the methodology. Section 4 presents the experiments and discussion. Finally, Section 5 concludes the paper.
Current PresPM approaches are generally based on two streams of research. The first is Causal Inference (CI), which aims to estimate the effects of potential interventions from observational data and use these estimates to guide decisions. This is achieved through causal learners, which are model setups designed to predict outcomes or causal effects from observational data. For example, an S-learner fits a single model that predicts the outcome using the intervention action as an input in addition to other features. PresPM approaches typically adapt causal learners to the process context by aggregating event data for use with standard CI setups [9], or by leveraging sequential models like LSTMs [30]. The second stream is RL-based, where an agent interacts with an environment that represents/approximates the real process, observes states, selects actions, and receives rewards, with the goal of learning a policy that maximizes cumulative rewards over time [23].
Similar to CI, PresPM typically relies on offline observational event logs, rather than data from controlled experiments. This approach avoids the cost and risk associated with randomized experiments, such as randomized controlled trials (RCTs) or an online RL agent exploring a real-life environment, which are often not feasible (e.g., when testing a loan assignment strategy). However, observational data introduces challenges. Because the data reflects an existing historical decision policy (e.g., a bank’s current loan strategy), treatment/intervention assignment is not random as it would be in controlled trials. This complicates the estimation of optimal actions due to potential imbalance between treated and untreated cases or strong confounding factors affecting both treatment assignment and outcomes [22]. For example, if a bank offers optional financial literacy workshops (intervention) to improve repayment rates (KPI), individuals who choose to attend may already have stronger financial habits (confounder), making it difficult to isolate the workshop’s true effect.
This subsection reviews existing PresPM approaches, highlighting their key features and limitations. Table 1 summarizes how SCOPE compares to these methods, with the discussion below detailing the distinguishing dimensions.
Single-Interventional Approaches. Most existing PresPM methodologies focus on the impact of a single intervention scenario. For example, Bozorgi et al. [9] use causal effect estimation to identify which cases would benefit from one specific intervention decision, applying CI techniques after encoding event data into a suitable format. In their work, an example of such an intervention would be skipping a particular activity. Similarly, Shoush et al. [26] propose a white-box framework that integrates predictive, causal, and survival models to recommend single intervention decisions. In their case,
an example intervention would be offering either multiple loan options to a client or just one. Yet, real-world business processes often require multiple (interdependent) interventions rather than a single one. For instance, in a loan application process, a bank may want to optimize the sequential intervention decisions of requesting additional documents, performing a credit check, scheduling an interview, and escalating the case.
In Table 1, the first column shows whether a method is multi-interventional or not.
Multi-Interventional Approaches. Certain PresPM approaches do tackle multiple interventions, usually by recommending the next best activity to optimize a given KPI. One methodology is to use predictive models at each decision point to estimate the KPI for possible actions. For example, de Leoni et al. [18] identify possible next events using a transition system, predict the KPI for each event, and choose the one expected to maximize the KPI. This methodology, however, predicts which action is best at the current moment, without considering what might happen at later decision points, potentially leading to suboptimal results. To illustrate, we return to the previously introduced example of a marketing process aimed at maximizing revenue: initially deciding whether to send a promotional email (decision point 1), followed by the decision of whether to offer a discount (decision point 2). If a model predicts the revenue for both actions at decision point 1 without accounting for what might happen at decision point 2, then these predictions are averages over the distribution of actions taken in the historical data at decision point 2. For some clients, the model might make sending the email appear only moderately valuable, even though following it with a discount could be highly effective. In other words, the model is blind to the sequence of actions: it cannot capture the cumulative effect of multiple actions taken over time because it treats future steps as unknown and averages over the historical distribution. To make optimal decisions, it is necessary to consider possible sequences of actions across all decision points. In Table 1, the second column shows whether a method aligns interventions across decision points, or only at one decision point, as in [18].

A different line of work that addresses sequential interventions is presented by Abbasi et al. [1] and Branchi et al. [5]. Both aim to recommend the next best activity while explicitly accounting for all possible sequences of future actions. Branchi et al. achieve this by (1) applying KMeans clustering to group process prefixes in an event log; (2) defining an MDP where states are represented by the cluster label and previous activity, rewards are defined as the average reward observed for that state in the log, and transition probabilities are obtained by replaying the log; and (3) applying offline RL (Q-learning specifically) to derive a next-best-action policy [5]. Because Q-learning optimizes cumulative return over a full case, the resulting recommendations are aligned across decision points³. Abbasi et al. take a slightly different route by also applying offline RL, but combined with data augmentation [1]. They modify case timestamps and remove non-critical activities, arguing that these transformations help diversify and focus the training data. However, both approaches rely on approximations of the real process through simulation (in the form of an MDP) or data augmentation. These approximations may lead to policies that underperform in practice. For example, in Branchi et al., important prefix information may be lost when aggregating cases prior to clustering, and KMeans may struggle with high-dimensional or categorical data. In Abbasi et al., augmentation is constrained by manually crafted rules: timestamps are only varied within 10%, and three business rules (precedence, co-occurrence, conflict) guide modifications.
Some processes may require different temporal bounds or a far richer set of constraints. In such situations, the learned policy may underperform. In Table 1, the last column indicates whether a method requires an approximation (e.g., a simulation or augmentation) of the process, or can directly use the data as-is⁴.
In summary, as shown in Table 1, current approaches exhibit a notable gap that SCOPE seeks to address.
Sequential Decision-Making & Causal Inference. Related to sequential PresPM are sequential decision-making approaches in CI, commonly studied under the framework of Dynamic Treatment Regimes (DTR). DTR methods aim to identify optimal treatment regimes (or policies) to maximize long-term outcomes, typically in medical settings. These methods rely on backward induction and causal reasoning to optimize long-term (medical) outcomes. For instance, in [14], the authors use Q-learning, essentially a form of backward induction, to determine the optimal treatment regime. Another approach models the DTR objective as a classification problem [32], using causal learners and a classifier to recommend the best treatment at each step. Inspired by these techniques, we frame the sequential PresPM problem as finding the optimal policy, as in DTRs, and leverage ideas from DTR approaches to identify intervention recommendations that maximize outcomes in sequential decision settings.
This section introduces key definitions, notation, and a detailed explanation of SCOPE.
Definition 1 (Event, Event Log, Trace, Prefix). An event is a tuple e = (c, o, t, d, s), where c ∈ C is a case identifier, o ∈ O is the activity label, t ∈ R+ is the timestamp, d = (d_1, …, d_{m_d}) are optional event-specific attributes, and s = (s_1, …, s_{m_s}) are static case-level attributes. An event log L is a collection of observed event instances L = {e_i}_{i=1}^{N}, where N is the number of events. A trace σ is a sequence of all events belonging to the same case in ascending order of their timestamps: σ = ⟨e_1, e_2, …, e_{|σ|}⟩. The first l events of a trace form a prefix, denoted σ_{:l} = ⟨e_1, …, e_l⟩.
Definition 2 (Decision point, Intervention action, Policy, Outcome). A decision point is denoted by an integer k ∈ N representing a prespecified or discovered point in the process where an intervention may occur, with 1 ≤ k ≤ K. Each decision point corresponds to a fixed prefix length l_k. Let a_k ∈ A_k represent a potential intervention action one could take at decision point k, with A_k being the prespecified or discovered feasible action space at that decision point. The historically observed intervention action is then a_k^{obs}. The feasible action space A_k always corresponds to one of the attributes in the event log (i.e., the activity label or any other event-specific attribute from d). This means that the prefix up until k, σ_{:l_k}, represents all information available, including earlier intervention actions, for recommending the next intervention action, which takes effect in the subsequent event e_{l_k+1}. A policy π is then defined as π = ⟨π_1, …, π_K⟩ = ⟨π_1(σ_{:l_1}), …, π_K(σ_{:l_K})⟩, where each π_k is a function that maps a prefix σ_{:l_k} to an action in A_k. Finally, let y denote the (continuous, binary, or multi-class) outcome corresponding to the KPI of a trace, observed at the end. An event log L can then be encoded into a dataset D = {(σ^{(c)}_{:l_k}, a_k^{obs,(c)}, y^{(c)})}, where each sample corresponds to a specific case c and a specific decision point k.
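To make this encoding concrete, the following is a minimal sketch, not the authors' implementation, of how an event log stored as a pandas DataFrame could be turned into the dataset D. The column names case_id, activity, timestamp, and outcome, as well as the choice of the activity label as the action attribute, are assumptions for illustration.

    import pandas as pd

    def encode_log(log: pd.DataFrame, decision_lengths, action_col="activity"):
        """One sample (prefix, observed action, outcome) per case and decision point."""
        samples = []
        for case_id, trace in log.sort_values("timestamp").groupby("case_id"):
            events = trace.to_dict("records")
            outcome = events[-1]["outcome"]            # KPI observed at the end of the case
            for k, l_k in enumerate(decision_lengths, start=1):
                if len(events) <= l_k:                 # decision point not reached/skipped
                    continue
                samples.append({"case_id": case_id, "k": k,
                                "prefix": events[:l_k],              # sigma_{:l_k}
                                "a_obs": events[l_k][action_col],    # action takes effect in event l_k + 1
                                "y": outcome})
        return pd.DataFrame(samples)

    # Example: two decision points after prefix lengths 2 and 4
    # D = encode_log(log, decision_lengths=[2, 4])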
Objective. The goal is to find the optimal policy π^{opt} = {π_1^{opt}, …, π_K^{opt}}, given by π^{opt} = argmax_π E[y(π) | σ_{:l_k}], where y(π) is the potential outcome under the policy π [20].
Below, we explain how SCOPE maximizes a KPI (for minimization, replace any max with min). First, we outline the theory of backward induction. Next, we describe the integration of causal learners into this procedure. Finally, we present the full SCOPE algorithm.
Backward Induction. We now explain how backward induction (theoretically) identifies the optimal sequential intervention policy. Following CI and PresPM literature, we begin with three standard assumptions: sequential ignorability, the stable unit treatment value assumption (SUTVA), and positivity⁵. Under these assumptions, the optimal policy π^{opt} is identifiable from observational data through backward induction, meaning it can, in principle, be obtained (see [21] for a detailed discussion on these assumptions and the identifiability).
Backward induction iterates through all decision points, starting from the last one (K). At K, we define the Q-function and value function as follows:

    Q_K(σ_{:l_K}, a_K) = E[y | σ_{:l_K}, a_K],      V_K(σ_{:l_K}) = max_{a_K ∈ A_K} Q_K(σ_{:l_K}, a_K).      (1)
If we then continue recursively, we define the Q-function and value function at decision point k ∈ {K-1, …, 1} as

    Q_k(σ_{:l_k}, a_k) = E[V_{k+1}(σ_{:l_{k+1}}) | σ_{:l_k}, a_k],      V_k(σ_{:l_k}) = max_{a_k ∈ A_k} Q_k(σ_{:l_k}, a_k).      (2)
Then, under a perfectly known Q-function, the optimal actions and policy are given by

    a_k^{opt} = π_k^{opt}(σ_{:l_k}) = argmax_{a_k ∈ A_k} Q_k(σ_{:l_k}, a_k),  for k = 1, …, K,  and  π^{opt} = ⟨π_1^{opt}, …, π_K^{opt}⟩,      (3)
which concludes the backward induction.
The value function can alternatively be expressed in a regret-based form as:

    V_k(σ_{:l_k}) = V_{k+1}(σ_{:l_{k+1}}) + [ max_{a_k ∈ A_k} Q_k(σ_{:l_k}, a_k) - Q_k(σ_{:l_k}, a_k^{obs}) ],      (4)
with V_{K+1} = y. This formulation yields the same optimal policy. Intuitively, backward induction works by starting at the last decision point K and asking: 'Given a current prefix σ_{:l_K}, what is the expected outcome of each possible action?'. This gives us the Q-function at K. Using the standard formulation (eq. 1), the value function at K then tells us the best outcome achievable from σ_{:l_K} by selecting the highest Q-value. We then move one step backward to decision point K-1. Here, the Q-function asks: 'If I take action a_{K-1} now for prefix σ_{:l_{K-1}}, and then act optimally at K (with the new prefix σ_{:l_K} resulting from a_{K-1}), what outcome can I expect?'. This is answered by the value function at K for σ_{:l_K}. The value function at K-1 again picks the action with the highest Q-value. We repeat this process all the way to the first decision point. In general, the Q-function answers: 'How good is it to take a specific action now, assuming we act optimally afterward?', while the value function answers: 'How good is it to have this prefix, assuming optimal actions from here on?'. By applying this procedure recursively, we can compute all Q-functions and thus determine the optimal policy.

Causal learners. While backward induction provides the theoretically optimal solution, applying it in practice requires estimation. This is challenging because we rely on observational event logs, and, as discussed in Section 2, confounding makes estimation difficult. To address this, we draw inspiration from the DTR approaches in [31,32] and use causal learners to estimate the Q-function at each decision point and, consequently, the optimal action. Causal learners are specifically designed to estimate outcomes and/or causal effects under different treatments (here, actions) using observational data [2,8,17]. These techniques try to explicitly reduce dependence on the historical decision policy (in contrast to [29], see footnote 4). In this paper, we apply three causal learners widely adopted in CI: an S-learner, a T-learner, and an RA-learner [2,8,17], though any causal learner expected to perform well on a given dataset could be used. Each of these learners is model-agnostic: they can flexibly incorporate any suitable base predictive model (e.g., a neural network).
We adopt the regret-based formulation (eq. 4) of the value function, as it is more robust under model misspecification (e.g., incorrect model assumptions) [13]. Under this formulation, we must estimate two key components at each decision point k: the Q-function Q_k and the corresponding optimal action a_k^{opt}. Once these are obtained, computing the value function becomes straightforward. Note that, while in the theoretical setting with a perfectly known Q-function the optimal action is always the action that maximizes Q_k (see eq. 3), in practical approximation settings the optimal action may be better estimated using alternative strategies to compensate for prediction error and model uncertainty, as we do below with the RA-learner.
S-learner. The S-learner estimates Q_k using a single predictive model [2,17]. The model (e.g., an LSTM) takes as input the features in σ_{:l_k} together with the historically taken action a_k^{obs}. At inference time, we query the model once for every possible action a_k at decision point k, producing estimates of Q_k(σ_{:l_k}, a_k). The chosen action is then simply the one with the highest predicted value.
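As an illustration, here is a minimal S-learner sketch for a single decision point. It uses scikit-learn's GradientBoostingRegressor as a stand-in base model (the experiments later use, e.g., XGBoost or LSTMs) and assumes the prefix is already encoded as a feature matrix and actions are numerically encoded; these are assumptions, not the paper's implementation.

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    def fit_s_learner(X_prefix, a_obs, y):
        # Single model over [encoded prefix features, observed action]
        model = GradientBoostingRegressor()
        model.fit(np.column_stack([X_prefix, a_obs]), y)
        return model

    def s_learner_recommend(model, X_prefix, actions):
        # Query once per candidate action; pick the action with the highest predicted Q
        q = np.column_stack([model.predict(np.column_stack([X_prefix, np.full(len(X_prefix), a)]))
                             for a in actions])
        return np.array(actions)[q.argmax(axis=1)], q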
T-learner. The T-learner builds a separate model for each possible action at decision point k [2,17]. Each model takes as input only the features in σ_{:l_k} and is trained on the cases where its corresponding action was observed. At inference, we query each model to obtain the predicted outcome under its corresponding action. These predictions serve as estimates of Q_k(σ_{:l_k}, a_k), and the best action is again the one with the largest predicted score.
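A corresponding T-learner sketch under the same assumptions (encoded prefix features, numerically encoded actions, scikit-learn stand-in base model): one outcome model per action instead of a shared model with an action feature.

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    def fit_t_learner(X_prefix, a_obs, y, actions):
        # One outcome model per action, fit only on the cases where that action was observed
        return {a: GradientBoostingRegressor().fit(X_prefix[a_obs == a], y[a_obs == a])
                for a in actions}

    def t_learner_recommend(models, X_prefix, actions):
        # Each model predicts the outcome under its own action; choose the best per case
        q = np.column_stack([models[a].predict(X_prefix) for a in actions])
        return np.array(actions)[q.argmax(axis=1)], q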
RA-learner. The RA-learner works in two steps [2]. First, it estimates Q_k in the same way as the S-learner. Then, it constructs a pseudo-outcome for each possible action. Intuitively, this pseudo-outcome represents the estimated causal effect of choosing action a_k on the KPI, given the observed prefix. For a given prefix, observed action a_k^{obs}, decision point k, and action of interest a_k, the pseudo-outcome is:

where f_k runs over all actions other than a_k, and b_k is an arbitrary baseline action. These pseudo-outcomes (which still contain the observed y in eq. 5, and thus cannot serve as estimates for new data) act as targets in a second predictive model, which is trained to estimate the causal effect of each action at decision point k. The final action estimate is the one with the largest predicted causal effect from this second model. The RA-learner thus adds an explicit second step to better estimate causal effects before choosing the final action.
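To illustrate the two-step structure, the sketch below covers only the binary-action special case (actions coded 0/1), using the binary RA-learner pseudo-outcome of Curth and van der Schaar [2]; the paper's multi-action formulation with baseline b_k is analogous but not reproduced here. Inputs and base models are the same assumptions as in the previous sketches.

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    def ra_learner_binary(X_prefix, a_obs, y):
        # Step 1: S-learner-style outcome model, queried under both actions
        mu = GradientBoostingRegressor().fit(np.column_stack([X_prefix, a_obs]), y)
        mu0 = mu.predict(np.column_stack([X_prefix, np.zeros(len(X_prefix))]))
        mu1 = mu.predict(np.column_stack([X_prefix, np.ones(len(X_prefix))]))
        # Step 2: regression-adjusted pseudo-outcome (still contains the observed y),
        # regressed on the prefix features to estimate the effect of action 1 vs. action 0
        pseudo = a_obs * (y - mu0) + (1 - a_obs) * (mu1 - y)
        effect_model = GradientBoostingRegressor().fit(X_prefix, pseudo)
        return effect_model  # recommend action 1 where the predicted effect is positive (KPI to maximize)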
Algorithm. In summary, SCOPE combines regret-based backward induction with causal learning to derive effective policies from observational data. Algorithm 1 provides an overview of the full training and inference procedure for an S-learner variant, referred to as SCOPE-S. The steps where the Q-function and the optimal action at decision point k are estimated (lines 4 and 7) are the ones that change most substantially when adopting a T- or RA-learner, being replaced by the corresponding estimation procedures described above. Full algorithms for these other learners are also provided in our repository (see footnote 7).
During training, the algorithm performs backward induction, as described above, starting from the last decision point. At each decision point, it uses a causal learner (here, an S-learner) to estimate the Q-function (line 4), and subsequently the optimal action for every case (line 7). Then, we compute the value (= output of the value function) V_k^{(c)} for every case using the regret-based formulation, producing the targets for the preceding decision point (lines 8-10). Inference proceeds forward. Starting from the first decision point and given the prefix σ^{(c)}_{:l_1} for every case, the algorithm uses the trained models to estimate the optimal action (line 17). After recommending this action, each case evolves to the next decision point (line 16), where the same procedure is repeated until all decisions are made⁶.

Algorithm 1: SCOPE-S (training and inference)

    Training (backward induction)
     1: V_{K+1}^{(c)} ← y^{(c)} for all cases c                                ▷ Initialize targets with observed outcomes
     2: M ← ⟨⟩                                                                 ▷ Trained models
     3: for k = K, K-1, …, 1 do                                                ▷ Iterate backwards
     4:    Train M_k on (σ^{(c)}_{:l_k}, a_k^{obs,(c)}) with targets V_{k+1}^{(c)} on all c   ▷ S-learner to estimate Q-function
     5:    Append M_k to M
     6:    for each case c = 1, …, C do
     7:       â_k^{opt,(c)} ← argmax_{a_k ∈ A_k} M_k(σ^{(c)}_{:l_k}, a_k)      ▷ Estimated optimal action
     8:       Q̂_k^{obs,(c)} ← M_k(σ^{(c)}_{:l_k}, a_k^{obs,(c)})               ▷ Q-value of observed action
     9:       Q̂_k^{opt,(c)} ← M_k(σ^{(c)}_{:l_k}, â_k^{opt,(c)})               ▷ Q-value of optimal action
    10:       V_k^{(c)} ← V_{k+1}^{(c)} + Q̂_k^{opt,(c)} - Q̂_k^{obs,(c)}        ▷ Regret-based value (eq. 4)
    11:    end for
    12: end for
    Inference (forward)
    13: Get σ^{(c)}_{:l_1} for each c
    14: for k = 1, 2, …, K do                                                  ▷ Iterate forwards
    15:    for each case c = 1, …, C do
    16:       Get σ^{(c)}_{:l_k} (given previous action if k > 1)
    17:       â_k^{opt,(c)} ← argmax_{a_k ∈ A_k} M_k(σ^{(c)}_{:l_k}, a_k)      ▷ Estimated optimal action
    18:    end for
    19: end for
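To make the procedure concrete, below is a compact Python sketch of the SCOPE-S training and inference loops. It is written against hypothetical inputs (X[k]: encoded prefix features at decision point k, a_obs[k]: observed actions, y: final outcomes, actions[k]: numerically encoded feasible actions) with a scikit-learn base model as a stand-in for XGBoost or an LSTM; it assumes every case passes through all K decision points and abstracts away how prefixes evolve after a recommended action (line 16). It illustrates the regret-based backward induction, not the repository implementation.

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    def q_predict(model, X, a):
        # S-learner query: predicted Q for one fixed action a, for all cases
        return model.predict(np.column_stack([X, np.full(len(X), a)]))

    def scope_s_train(X, a_obs, y, actions):
        """Backward induction (Algorithm 1, training): returns one model per decision point."""
        K = len(X)
        models = [None] * K
        V = y.astype(float).copy()                            # V_{K+1} = y
        for k in range(K - 1, -1, -1):                        # iterate backwards (lines 3-12)
            m = GradientBoostingRegressor()
            m.fit(np.column_stack([X[k], a_obs[k]]), V)       # S-learner for Q_k (line 4)
            models[k] = m
            q = np.column_stack([q_predict(m, X[k], a) for a in actions[k]])
            a_opt_idx = q.argmax(axis=1)                      # estimated optimal action (line 7)
            q_opt = q[np.arange(len(V)), a_opt_idx]           # Q-value of optimal action (line 9)
            q_obs = m.predict(np.column_stack([X[k], a_obs[k]]))  # Q-value of observed action (line 8)
            V = V + q_opt - q_obs                             # regret-based targets for k-1 (line 10)
        return models

    def scope_s_recommend(models, X_k, actions_k, k):
        # Forward inference at decision point k (lines 13-19): action with the highest estimated Q
        q = np.column_stack([q_predict(models[k], X_k, a) for a in actions_k])
        return np.array(actions_k)[q.argmax(axis=1)]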
This section first presents the (semi-)synthetic data and methods for comparison. Next, we describe the experimental procedure, followed by results and their implications⁷.
Synthetic and semi-synthetic data are widely used in causal machine learning because real-world datasets do not provide ground-truth counterfactuals. Given an offline dataset, we cannot always directly verify whether the model's recommendation would have led to a better outcome for a test case. We only observe what actually happened under the historical decision policy, not what would have happened under a different intervention action [12]. To our knowledge, SimBank is the only simulator designed specifically for PresPM, which we use in our evaluation [10]. We also introduce a new semi-synthetic setup based on the BPIC17_W event log [11] and release it publicly to support further research in sequential PresPM.
SimBank. SimBank models a bank's loan application process and has been thoroughly validated. The main KPI to optimize is the total profit generated from loans in a test set. Among other options, we opt for the following two sequential interventions for this research: (1) choose procedure: decide between a standard or a priority procedure; and (2) set interest rate: choose one of three possible interest rate levels. The effects of these interventions vary across clients, making the decisions complex. The two decision points are highly interdependent. Selecting a standard procedure reduces costs, but increases the likelihood that the client ultimately refuses the offer. This refusal probability directly influences the optimal choice of interest rate, which must balance affordability for the client (to reduce refusals) against profitability (to cover costs, which are higher for priority procedures). The first decision thus affects both client refusal and costs, which are critical considerations in the second decision. SimBank allows us to vary the level of confounding in the training data by adjusting the percentage of confounded data versus full RCT data. As shown in previous studies, higher confounding typically reduces the performance of PresPM methods [9,10].
We additionally introduce SimBPIC17, a semi-synthetic simulation based on the BPIC17_W event log. In this simulation, we simplify the control flow (by deleting 3 activities) and introduce an intervention on the existing activity 'call_incomplete_files'. At each decision point, the choice is whether to call the client to follow up on incomplete files or to wait for the client to provide the information themselves. The KPI to minimize is the total cost incurred while completing the files: cost_files = cost_tpt * tpt_files + n_calls * cost_call, where tpt_files is the throughput time of completing the files, which is sampled from a uniform distribution, and cost_tpt and cost_call are fixed constants. There is a trade-off: waiting is cheaper but increases throughput time, whereas calling is more expensive but shortens the throughput time. The (causal) effect of a call on the throughput time depends on two factors: the loan type (with 'car' and 'loan takeover' having larger effects) and the average duration of all previous activities (longer durations increase the potential time saved by calling). This dependency on average duration creates strong inter-decision-point interactions: a call at decision point 1 reduces throughput time but also lowers the average duration of activities after decision point 1, which in turn reduces the effect of a call at decision point 2. To generate datasets, variables other than activity labels and activity durations are sampled from the original dataset. We then define the simplified control flow, the function determining the effect of a call, the function calculating the KPI based on the actions taken, and the bank policy, which governs whether the bank chooses to call or wait. The bank calls if the loan type is 'car' or 'loan takeover' and the average activity duration exceeds 4025 units. Unlike SimBank, SimBPIC17 makes it easy to vary the number of decision points by increasing the number of (obligatory) choices between call or wait. Similar to SimBank, defining the bank policy allows us to mimic an observational event log, and we can vary confounding levels by adjusting the probability of choosing a random action versus following the bank policy; this is equivalent to how we vary confounding in SimBank.
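The cost KPI above is a simple function of throughput time and number of calls; a minimal sketch follows, where the default constant values are placeholders rather than the simulator's actual parameters.

    def cost_files(tpt_files: float, n_calls: int,
                   cost_tpt: float = 1.0, cost_call: float = 50.0) -> float:
        # KPI to minimize: cost_files = cost_tpt * tpt_files + n_calls * cost_call
        # Trade-off: calling raises n_calls but can shorten tpt_files; waiting does the opposite
        return cost_tpt * tpt_files + n_calls * cost_call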
We compare SCOPE with two primary alternative approaches: those of Branchi et al. [5] and de Leoni et al. [18]. The approach of Branchi et al. requires building an MDP approximation of the process, which is then used to train a Q-learning algorithm. To do so, KMeans clustering is used, representing each state with the last executed activity and the cluster label assigned to the prefix at hand. In the original work, only the control flow is considered when defining states. However, the method can be extended to include additional variables by using them as inputs to the clustering procedure, which is how we apply it. We refer to this approach as KMeans-Q⁸. The method of de Leoni et al. uses an S-learner (just not by name) at each decision point separately. As a result, the action chosen at one decision point does not consider the impact of subsequent actions at later decision points. We refer to this approach as SEP-S, indicating the use of separate S-learners. More generally, we use SEP to describe any strategy in our experiments that trains separate models for each decision point of intervention. We also include a simple random policy as a baseline, and an upper bound representing the best achievable KPI. The upper bound is computed by taking, for each test case, the maximum KPI obtainable across all possible action sequences (exhaustive search).
Cases are truncated at the decision points, producing prefixes that end immediately before the intervention. In SCOPE and SEP, LSTMs use tensor encoding [28], with case-level features added at the final layer. Other models use last-event encoding for time features and aggregation encoding for the other features, following [27]. Categorical features are one-hot encoded and continuous features are standardized. Depending on the learner, an action feature may be included (S- or RA-learner) or not (T-learner). For KMeans-Q, we adopt the original preprocessing [5] for control-flow features and add non-control-flow features using last-event and aggregation encoding as before.
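For the non-LSTM models, the following is a rough sketch of the last-event plus aggregation encoding used here (in the spirit of [27]); the split of columns into time, numeric, and categorical roles is an assumption about the event-log schema, and one-hot encoding and standardization are applied afterwards.

    import pandas as pd

    def encode_prefix(prefix: pd.DataFrame, time_cols, num_cols, cat_cols):
        feats = {f"last_{c}": prefix[c].iloc[-1] for c in time_cols}        # last-event encoding for time features
        feats.update({f"mean_{c}": prefix[c].mean() for c in num_cols})     # aggregation (mean) for numeric features
        for c in cat_cols:                                                  # aggregation (frequency) for categorical features
            feats.update({f"count_{c}_{v}": n for v, n in prefix[c].value_counts().items()})
        return feats                                                        # one flat feature vector per prefix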
Hyperparameters are tuned via Bayesian optimization, using 20% of the training set for validation. Models are then retrained on the full training set. For evaluation, we use 10,000 test cases for SimBank and 1,000 for SimBPIC17 (due to the exponential growth of possible action sequences with the number of decision points: 2^(number of decision points)). Both SCOPE and SEP can use any base model, tuned identically for a fair comparison. In KMeans-Q, we jointly tune the KMeans and RL components using a normalized metric that combines silhouette score and average reward, allowing twice the tuning budget compared to SCOPE and SEP due to the dual-model setup⁹.
To evaluate SCOPE, we use the Gain, which measures the percentage improvement in total KPI achieved by a method's policy over the bank's historical decision policy on the test set. We also vary key simulation parameters relevant to sequential PresPM, and do this especially in SimBank, as it is thoroughly validated in [10], contains more variables, and has a more complex causal structure than SimBPIC17 (where we focus on the number of decision points)¹⁰. In SimBank, we adjust:
- Confounding level δ (0.9-0.99): reflects real-world observational data, which is usually heavily confounded (as there is generally already a decision policy in place in business processes). Varying this helps assess whether a method is robust to biases introduced by the historical decision policy (e.g., the bank policy) and whether it can still select effective actions even when those actions are rarely observed in the data.
In SimBPIC17, the focus is on evaluating performance over more than 2 decision points. We vary the number of decision points from 2 to 6 and run experiments under 3 levels of confounding (0.9, 0.95, 0.99) to assess robustness across longer decision horizons (see also footnote 10). Each setting is run using 10 different random seeds.
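As a concrete reading of the Gain metric (our own sketch, not necessarily the exact implementation in the repository), see below; the minimize flag handles KPIs such as the SimBPIC17 cost, where improvement means a lower total.

    def gain(total_kpi_method: float, total_kpi_bank: float, minimize: bool = False) -> float:
        # Percentage improvement of a method's policy over the bank's historical policy on the test set
        diff = (total_kpi_bank - total_kpi_method) if minimize else (total_kpi_method - total_kpi_bank)
        return 100.0 * diff / abs(total_kpi_bank)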
Figure 1 shows method performance across confounding levels and training sizes on SimBank. SCOPE and SEP are implemented as S-learners with XGBoost (SEP thus corresponds to de Leoni et al.'s SEP-S). SCOPE outperforms the other methods in nearly all settings, except for the (1K, 0.99) case, demonstrating its strong ability to learn sequential intervention policies. Performance generally declines with higher confounding, except for KMeans-Q at 1K and in the 50K condition, where all methods benefit from more data. As expected, larger training sets improve performance. SCOPE's advantage grows with training size, as each model uses the outputs of the models at later decision points as targets through the value function. With more data, model errors decrease and propagate less across decision points. KMeans-Q performs the worst, suggesting that approximating the process for RL while still aligning all interventions can be less effective than treating interventions independently, as SEP(-S) does.
Figure 2 compares SCOPE and SEP across confounding levels and learner types on SimBank (10K training cases, XGBoost). SCOPE consistently outperforms SEP, except for the (T-learner, 0.99) setting. Overall, the S-learner performs best, the T-learner worst, and the RA-learner's performance declines least under high confounding, consistent with prior CI findings [8,17]. Figure 3 shows performance across confounding levels and base models (10K training cases, S-learner)¹¹. SCOPE-S again outperforms SEP-S. XGBoost achieves the best overall performance, while Random Forest performs worst. Both plots suggest that aligning interventions across decision points is generally more effective than treating them independently, regardless of learner or base model type. Figure 4 shows method performance across different numbers of decision points and confounding levels on SimBPIC17 (10K training cases; SCOPE and SEP are again S-learners with XGBoost). SCOPE generally outperforms the other methods, except in the (0.99, 2 decision points) setting. The performance gap between SCOPE and the others grows with more decision points, as these introduce more interdependent decisions that SCOPE can effectively align. Both SCOPE and SEP-S gain more over the bank policy as the number of decision points increases, but also move further from the upper bound: for SCOPE, due to error accumulation across models; for SEP, due to treating each decision point independently. KMeans-Q struggles with more decision points, likely because increasing the number of decision points causes the KMeans model to aggregate the data more heavily compared to its original form, reducing the quality of the MDP approximation for Q-learning.
In summary, SCOPE demonstrates clear advantages across a range of settings. Its performance improves with larger training sets by reducing error accumulation in backward induction, and it scales well with more decision points by aligning interdependent interventions, something separate optimization fails to exploit. SCOPE consistently outperforms separate optimization across learner types and base models. Practitioners should choose the learner-model combination expected to perform best: S-learners suffice for our datasets, but RA-learners appear more robust under strong confounding, and T-learners struggle. While RL-based approaches that approximate the entire process as an MDP (like KMeans-Q) aim for joint optimization, they can perform worse than treating decision points independently. SCOPE does rely on the predictive accuracy of the underlying models: a highly unpredictable KPI can lead to strong error accumulation. However, this challenge affects all methods, and SCOPE is still likely to retain its advantage by leveraging observational event logs directly and considering the interdependent decision point structure.
Limitations. SCOPE models sequential dependencies within individual process cases but assumes no interference between interventions across different cases, as required by the SUTVA assumption (Section 3). Similarly, sequential ignorability may not always hold in business processes. Despite these assumptions, which are common to all PresPM approaches, our experiments show that SCOPE can still identify effective sequential policies under realistic levels of confounding typical in business processes.
We introduce SCOPE, a method that combines backward induction with causal learning to optimize sequential interventions in business processes. Using a series of experiments, including a newly developed semi-synthetic simulator to support further research in sequential PresPM, we show that SCOPE outperforms existing sequential PresPM methods. Existing approaches either handle each intervention independently or rely on approximating the process to train an RL agent, both of which can limit their effectiveness. Future work could explore extending SCOPE to account for interference between cases or relax the sequential ignorability assumption, while still leveraging backward induction and causal learning. This might involve exploring causal methods for handling interference [7,15] and using instrumental variables to address hidden confounders [19].
Note that in [5], Branchi et al. extend their earlier work [6] in a log-agnostic manner, removing process-specific modelling assumptions and improving generalizability, which is why we focus on [5] here.
We also note the approach by Weinzierl et al. [29], which predicts the most likely future suffix and identifies similar historical cases to select actions with favorable KPIs. This method heavily relies on behaviors seen under the historical decision policy that generated the training data, as it only considers the neighborhood of the most likely suffix, which may miss better, but rarely observed, actions. Therefore, we do not consider this method further in this paper.
Sequential ignorability: given a prefix, there are no unmeasured confounders; SUTVA: units do not affect each other's outcomes and treatments have well-defined versions; positivity: every treatment has a nonzero chance of occurring for all relevant prefixes.
Algorithm 1: 1) the action space can be predefined or discovered, e.g., using the transition system in [18]. 2) Due to variable control flow in business processes, historical cases may skip one or more decision points. Such cases are excluded from the training data of the models at the skipped decision points. Backward induction still applies: if decision point k is skipped, the model at k-1 obtains its targets using the model at decision point k+1 instead of k, or directly from the outcome if k = K. At inference time, if a recommended action causes a decision point to be skipped, the corresponding model is simply not invoked.
The code, algorithms, and simulators are available at https://github.com/JakobDeMoorKULstudent/SCOPE.
Abbasi et al.'s FORLAPS [1] (see Section 2) also supports sequential interventions by approximating the process using data augmentation for RL, but defines states in its MDP solely by observed activity labels. In contrast to KMeans-Q, where we can easily incorporate additional variables in the clustering, applying FORLAPS would require either expanding the state space, resulting in an impractically large Q-table, or redesigning the state representation, fundamentally altering the method. Therefore, we exclude it from our comparisons.
The original paper tuned KMeans first using clustering metrics (e.g., silhouette score) and then tuned the RL model. In our experiments, this approach performed poorly, as optimizing KMeans for clustering quality alone does not support effective intervention decisions.
Note that in our repository, we also provide results using different learners and base models for SimBPIC17, ensuring consistency of results across datasets.
For the right-most plot, we use an MLP for the early decision point and an LSTM for the later decision point, where sequential structure becomes important.