Analyzing Operator States and the Impact of AI-Enhanced Decision Support in Control Rooms: A Human-in-the-Loop Specialized Reinforcement Learning Framework for Intervention Strategies (2402.13219v1)

Published 20 Feb 2024 in cs.AI, cs.HC, cs.LG, cs.MA, cs.SY, and eess.SY

Abstract: In complex industrial and chemical process control rooms, effective decision-making is crucial for safety and efficiency. The experiments in this paper evaluate the impact and applications of an AI-based decision support system integrated into an improved human-machine interface, using dynamic influence diagrams, a hidden Markov model, and deep reinforcement learning. The enhanced support system aims to reduce operator workload, improve situational awareness, and provide intervention strategies adapted to the current state of both the system and human performance. Such a system can be particularly useful in cases of information overload, when many alarms and inputs arrive within the same time window, or for junior operators during training. A comprehensive cross-data analysis was conducted, involving 47 participants and a diverse range of data sources such as smartwatch metrics, eye-tracking data, process logs, and questionnaire responses. The results offer insights into the effectiveness of the approach in aiding decision-making, decreasing perceived workload, and increasing situational awareness for the scenarios considered. Additionally, the results allow comparison of individual participants' information-gathering styles when using the system. These findings are particularly relevant for predicting, in real time from process and human-machine interaction logs, an individual participant's overall performance and capacity to handle a plant upset and its associated alarms. Such predictions enable the development of more effective intervention strategies.
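
The operator-state modeling described in the abstract (a hidden Markov model over multimodal interaction and physiological signals, feeding an intervention policy) can be illustrated with a minimal sketch. The snippet below is not the authors' implementation: the hmmlearn-based Gaussian HMM, the feature names, the two-state interpretation, and the state-to-intervention mapping are all illustrative assumptions.

import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(seed=0)

# Hypothetical per-window features (placeholders, not the study's variables):
# [active alarms, mean fixation duration (s), pupil diameter (mm), HMI clicks/min]
X = rng.normal(loc=[3.0, 0.4, 3.5, 10.0], scale=[1.0, 0.1, 0.3, 2.0], size=(500, 4))

# Two latent operator states, read here as "nominal" vs. "overloaded" (an assumption)
model = GaussianHMM(n_components=2, covariance_type="full", n_iter=100, random_state=0)
model.fit(X)
states = model.predict(X)  # most likely hidden state for each time window

def intervention_for(state: int) -> str:
    """Toy mapping from a decoded operator state to a support action."""
    return {0: "passive monitoring", 1: "prioritized alarm guidance"}[state]

print(intervention_for(states[-1]))

In the paper's framework this decoded state would instead condition a dynamic influence diagram and a deep reinforcement learning agent; the sketch only shows the state-inference step and a hard-coded stand-in for the intervention choice.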
