
How well can a large language model explain business processes as perceived by users?

Published 23 Jan 2024 in cs.AI | arXiv:2401.12846v4

Abstract: LLMs are trained on a vast amount of text to interpret and generate human-like textual content. They are becoming a vital vehicle in realizing the vision of the autonomous enterprise, with organizations today actively adopting LLMs to automate many aspects of their operations. LLMs are likely to play a prominent role in future AI-augmented business process management systems, catering functionalities across all system lifecycle stages. One such system's functionality is Situation-Aware eXplainability (SAX), which relates to generating causally sound and human-interpretable explanations. In this paper, we present the SAX4BPM framework developed to generate SAX explanations. The SAX4BPM suite consists of a set of services and a central knowledge repository. The functionality of these services is to elicit the various knowledge ingredients that underlie SAX explanations. A key innovative component among these ingredients is the causal process execution view. In this work, we integrate the framework with an LLM to leverage its power to synthesize the various input ingredients for the sake of improved SAX explanations. Since the use of LLMs for SAX is also accompanied by a certain degree of doubt related to its capacity to adequately fulfill SAX along with its tendency for hallucination and lack of inherent capacity to reason, we pursued a methodological evaluation of the perceived quality of the generated explanations. We developed a designated scale and conducted a rigorous user study. Our findings show that the input presented to the LLMs aided with the guard-railing of its performance, yielding SAX explanations having better-perceived fidelity. This improvement is moderated by the perception of trust and curiosity. More so, this improvement comes at the cost of the perceived interpretability of the explanation.


Summary

  • The paper presents SAX4BPM, a framework that leverages LLMs and knowledge graphs to automate context-aware explanations in business process management.
  • The framework integrates services such as process mining, causal discovery, and feature importance analysis to derive actionable insights from event logs.
  • Evaluation across multiple domains shows that incorporating process knowledge enhances fidelity but may compromise interpretability, indicating a nuanced trade-off.

Explainability of Business Processes Using LLMs: An Analysis

In this paper, the authors present a framework called SAX4BPM that automates the generation of Situation-Aware eXplainability (SAX) explanations for business processes using LLMs. The research aims to enhance the explainability of AI-augmented Business Process Management Systems (ABPMS) by using LLMs to create causally sound, human-interpretable explanations grounded in process context.

SAX4BPM Framework

The SAX4BPM suite is composed of services and a central knowledge repository designed to produce explanations by synthesizing various knowledge constituents of a business process.

  • Knowledge Graphs: At the heart of SAX4BPM is a knowledge graph (KG), implemented as a Labeled Property Graph (LPG) in a Neo4j database. This graph serves as the backbone for storing and querying business process data (Figure 1).

    Figure 1: Knowledge graph schema shows data storage and relationships in SAX4BPM.

  • Services Invocation: SAX4BPM provides several services, including Mining4Process for process model discovery, Causal4Process for eliciting causal relationships, ContextEnrichment for adding contextual information, and X4Process for feature importance analysis. These services process event logs to produce JSON inputs for the LLM (Figure 2).

    Figure 2: SAX4BPM services invocation illustrates the integration of SAX components.
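
The data flow above can be pictured as follows. This is a minimal sketch, not the paper's actual schema or payload format: the node labels, property keys, and JSON field names are illustrative assumptions, and a plain Python dictionary stands in for the Neo4j-backed LPG.

```python
import json

# Stand-in for the labeled property graph (LPG) that SAX4BPM keeps in Neo4j:
# nodes carry labels and properties, edges carry a relationship type.
graph = {
    "nodes": [
        {"id": "a1", "labels": ["Activity"], "props": {"name": "Create Fine"}},
        {"id": "a2", "labels": ["Activity"], "props": {"name": "Send Fine"}},
    ],
    "edges": [
        {"src": "a1", "dst": "a2", "type": "CAUSES", "props": {"strength": 0.9}},
    ],
}

def build_llm_payload(graph):
    """Collect knowledge ingredients into one JSON input for the LLM.

    In the framework each ingredient would come from a dedicated service
    (Mining4Process, Causal4Process, X4Process); here we derive trivial
    stand-ins from the toy graph.
    """
    activities = [n["props"]["name"] for n in graph["nodes"]
                  if "Activity" in n["labels"]]
    causal_pairs = [(e["src"], e["dst"]) for e in graph["edges"]
                    if e["type"] == "CAUSES"]
    return json.dumps({
        "process_model": {"activities": activities},
        "causal_view": {"edges": causal_pairs},
    })

payload = build_llm_payload(graph)
```

In the actual suite, the graph would be populated and queried via Cypher rather than dictionary traversal; the point here is only the shape of the hand-off from services to the LLM.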

Illustrative Example: Parking Fines

An example scenario showcasing SAX4BPM involves the management of parking fines. The framework operates on event data from a process in which fines are issued for parking violations.

  • Mining4Process: Extracts process models showing sequences of tasks, such as verifying handicap permits and determining whether parking was hazardous (Figure 3).

    Figure 3: Process model discovery exposes the procedural workflow for handling parking violations.

  • Causal Discovery: Identifies causal links between activities, revealing execution dependencies that clarify which actions influence the fines (Figure 4).

    Figure 4: Causal graph discovery for identifying execution dependencies in the parking fines scenario.

  • XAI Feature Importance: Determines how various features affect decision-making, e.g., the impact of choosing different towing companies or location regions (Figure 5).

    Figure 5: XAI feature importance highlights factors influencing the processing of parking fines.
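
The first two steps can be sketched on a toy event log. The activity names below are illustrative, and the simple alpha-miner-style causality relation (a textbook process-mining notion) is only a stand-in for the richer discovery performed by Mining4Process and the framework's causal service:

```python
from collections import defaultdict

# Toy event log for the parking-fines scenario: each trace is the ordered
# activity sequence of one case.
log = [
    ["Create Fine", "Send Fine", "Add Penalty", "Payment"],
    ["Create Fine", "Send Fine", "Payment"],
    ["Create Fine", "Payment"],
]

def directly_follows(log):
    """Count directly-follows pairs, the raw material of model discovery."""
    df = defaultdict(int)
    for trace in log:
        for a, b in zip(trace, trace[1:]):
            df[(a, b)] += 1
    return df

def causal_relation(df):
    """Alpha-style causality: a -> b iff a directly precedes b and never
    the reverse. Far simpler than true causal discovery, but it shows how
    execution dependencies are read off the log."""
    return {(a, b) for (a, b) in df if (b, a) not in df}

df = directly_follows(log)
causes = causal_relation(df)
```

Feature importance (the third step) would operate on case attributes such as the towing company or region rather than on the control flow, and is omitted from this sketch.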

Hypotheses and Evaluation

The paper hypothesizes that equipping LLMs with additional knowledge about a business process, such as process and causal execution dependencies, improves the perceived quality of the explanations generated for its activities.
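
One way to picture this guard-railing is as prompt assembly: the knowledge ingredients are serialized into the prompt so that the LLM's answer is constrained by them. The template wording and function below are hypothetical illustrations, not the paper's actual prompts:

```python
# Hedged sketch: fold process and causal knowledge into the prompt that
# guard-rails the LLM. All wording here is assumed for illustration.
PROMPT_TEMPLATE = (
    "You are explaining a business process to a user.\n"
    "Known process flow: {process_view}\n"
    "Known causal dependencies: {causal_view}\n"
    "Question: Why did '{activity}' occur?\n"
    "Answer strictly using the knowledge above."
)

def build_prompt(process_view, causal_view, activity):
    return PROMPT_TEMPLATE.format(
        process_view=" -> ".join(process_view),
        causal_view="; ".join(f"{a} causes {b}" for a, b in causal_view),
        activity=activity,
    )

prompt = build_prompt(
    ["Create Fine", "Send Fine", "Add Penalty"],
    [("Send Fine", "Add Penalty")],
    "Add Penalty",
)
```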

  • Research Methodology: A mixed experimental design was applied across three domains (pizza delivery, parking fines, and loan approval), using an online survey to measure perceived explanation quality along several constructs, including fidelity and interpretability (Figure 6).

    Figure 6: Experiment model shows the relationship between input knowledge types and perceived explanation quality.

  • Findings: Results indicated that while additional knowledge enhances perceived fidelity, it may compromise perceived interpretability. This effect was domain-specific and moderated by the users' trust and curiosity.

Conclusion

The integration of LLMs into explainable business process insights presents both opportunities and challenges. While SAX explanations showed a promising improvement in users' perception of fidelity, the balance between comprehensibility and informativeness remains delicate. This research invites further exploration of knowledge-synthesis methods and their impact on automated explainability in business systems. Future work includes further automating the input narratives via dynamic templating and fine-tuning LLMs for more contextualized outputs.
