
Automatic extraction of cause-effect-relations from requirements artifacts (2312.06986v1)

Published 12 Dec 2023 in cs.SE

Abstract: Background: The detection and extraction of causality from natural language sentences has shown great potential in various fields of application. The field of requirements engineering is eligible for multiple reasons: (1) requirements artifacts are primarily written in natural language, (2) causal sentences convey essential context about the subject of requirements, and (3) extracted and formalized causality relations can be used for a (semi-)automatic translation into further artifacts, such as test cases. Objective: We aim to understand the value of interactive causality extraction based on syntactic criteria in the context of requirements engineering. Method: We developed a prototype of a system for automatic causality extraction and evaluated it by applying it to a set of publicly available requirements artifacts, determining whether the automatic extraction reduces the manual effort of requirements formalization. Result: During the evaluation we analyzed 4457 natural language sentences from 18 requirements documents, 558 of which were causal (12.52%). In the best case, on average 48.57% of the cause-effect graphs of a requirements document could be extracted automatically, which demonstrates the feasibility of the approach. Limitation: The feasibility of the approach has been shown in principle, but scaling it up for practical use has not yet been explored. Evaluating the applicability of the automatic causality extraction for a requirements engineer is left for future research. Conclusion: A syntactic approach to causality extraction is viable in the context of requirements engineering and can aid a pipeline towards the automatic generation of further artifacts from requirements artifacts.
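To make the idea of syntactic causality extraction concrete, the sketch below shows a deliberately simplified, cue-phrase-based splitter that detects a causal marker in a requirement sentence and separates it into a cause and an effect. The cue list, function name, and splitting heuristic are assumptions made for this illustration only; the paper's actual prototype relies on richer syntactic criteria (e.g., dependency structure) and builds full cause-effect graphs rather than simple pairs.

```python
import re
from typing import Optional, Tuple

# Illustrative cue phrases only; the paper's system uses syntactic criteria,
# not just surface patterns like these.
CAUSAL_CUES = [
    r"\bif\b", r"\bbecause\b", r"\bwhen\b", r"\bas soon as\b",
    r"\bin case of\b", r"\bdue to\b",
]


def extract_cause_effect(sentence: str) -> Optional[Tuple[str, str]]:
    """Return a (cause, effect) pair if a causal cue is found, else None.

    Handles two common surface patterns:
      "<cue> <cause>, <effect>"   e.g. "If X, the system shall Y."
      "<effect> <cue> <cause>"    e.g. "The system shall Y because X."
    """
    for cue in CAUSAL_CUES:
        match = re.search(cue, sentence, flags=re.IGNORECASE)
        if not match:
            continue
        before = sentence[: match.start()].strip(" ,.")
        after = sentence[match.end():].strip(" ,.")
        if before:
            # Effect precedes the cue: "<effect> because <cause>"
            return after, before
        # Cue-initial sentence: "If <cause>, <effect>"
        parts = after.split(",", 1)
        if len(parts) == 2:
            return parts[0].strip(), parts[1].strip()
        return after, ""
    return None


if __name__ == "__main__":
    print(extract_cause_effect(
        "If the temperature exceeds 90 degrees, the system shall shut down."
    ))
    # -> ('the temperature exceeds 90 degrees', 'the system shall shut down')
```

Once a cause and an effect are isolated in this way, they can serve as nodes of a cause-effect graph, which is the intermediate representation the paper targets for downstream artifacts such as test cases.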

Authors (4)
  1. Julian Frattini (26 papers)
  2. Maximilian Junker (2 papers)
  3. Michael Unterkalmsteiner (73 papers)
  4. Daniel Mendez (63 papers)
Citations (11)
