
An optimization framework for herbal prescription planning based on deep reinforcement learning (2304.12828v1)

Published 25 Apr 2023 in cs.AI

Abstract: Treatment planning for chronic diseases is a critical task in medical artificial intelligence, particularly in traditional Chinese medicine (TCM). However, generating optimized sequential treatment strategies for patients with chronic diseases across different clinical encounters remains a challenging issue that requires further exploration. In this study, we propose a TCM herbal prescription planning framework based on deep reinforcement learning for chronic disease treatment (PrescDRL). PrescDRL is a sequential herbal prescription optimization model that focuses on long-term effectiveness rather than achieving maximum reward at every step, thereby ensuring better patient outcomes. We constructed a high-quality benchmark dataset for sequential diagnosis and treatment of diabetes and evaluated PrescDRL against this benchmark. Our results showed that PrescDRL achieved a higher curative effect, with the single-step reward improving by 117% and 153% compared to doctors. Furthermore, PrescDRL outperformed the benchmark in prescription prediction, with precision improving by 40.5% and recall improving by 63%. Overall, our study demonstrates the potential of using artificial intelligence to improve clinical intelligent diagnosis and treatment in TCM.
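The abstract's key point, that PrescDRL optimizes long-term discounted return rather than greedy per-step reward, can be illustrated with a minimal sketch. The toy MDP below is hypothetical (its states, actions, and rewards are not from the paper); it only shows why a policy maximizing cumulative return can disagree with one that maximizes each step's immediate reward:

```python
# Hedged sketch: a tiny deterministic MDP where the greedy per-step choice
# differs from the choice that maximizes long-term discounted return.
# States: 0 = initial encounter, 1/2 = states after each prescription, 3 = terminal.
# Actions at state 0: 0 = high immediate reward but poor follow-up,
#                     1 = low immediate reward but better long-term outcome.
# transitions[s][a] = (next_state, reward); all values here are illustrative.
transitions = {
    0: {0: (1, 1.0), 1: (2, 0.2)},
    1: {0: (3, 0.0), 1: (3, 0.0)},
    2: {0: (3, 2.0), 1: (3, 2.0)},
}

GAMMA = 0.9  # discount factor for future reward

def q_value(s, a):
    """Exact Q-value by recursion (the MDP is deterministic and acyclic)."""
    next_state, reward = transitions[s][a]
    if next_state == 3:  # terminal: no future reward
        return reward
    return reward + GAMMA * max(q_value(next_state, b) for b in transitions[next_state])

# Greedy policy: maximize this step's reward only.
greedy_action = max(transitions[0], key=lambda a: transitions[0][a][1])
# Long-term policy: maximize discounted cumulative return.
optimal_action = max(transitions[0], key=lambda a: q_value(0, a))

print(greedy_action, optimal_action)  # greedy picks action 0; long-term picks action 1
```

Here the greedy choice earns 1.0 total, while the long-term choice earns 0.2 + 0.9 × 2.0 = 2.0, mirroring the paper's motivation for sequencing prescriptions by eventual outcome rather than per-visit gain.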

Authors (10)
  1. Kuo Yang (21 papers)
  2. Zecong Yu (1 paper)
  3. Xin Su (67 papers)
  4. Xiong He (3 papers)
  5. Ning Wang (300 papers)
  6. Qiguang Zheng (2 papers)
  7. Feidie Yu (1 paper)
  8. Zhuang Liu (63 papers)
  9. Tiancai Wen (1 paper)
  10. Xuezhong Zhou (6 papers)
