Reinforcement Learning Based Minimum State-flipped Control for the Reachability of Boolean Control Networks (2304.04950v2)

Published 11 Apr 2023 in eess.SY and cs.SY

Abstract: This paper proposes model-free reinforcement learning methods for minimum-cost state-flipped control in Boolean control networks (BCNs). We tackle two questions: 1) finding the flipping kernel, namely the flip set with the smallest cardinality ensuring reachability, and 2) deriving optimal policies that minimize the number of flipping actions for reachability based on the obtained flipping kernel. For question 1), we demonstrate Q-learning's capability to determine reachability. To expedite convergence, we incorporate two improvements: i) proving that previously reachable states remain reachable after elements are added to the flip set, and then employing transfer learning, and ii) initiating each episode from special initial states whose reachability to the target state set is currently unknown. Question 2) requires optimal control with terminal constraints, whereas Q-learning handles only unconstrained problems. To bridge the gap, we propose a reward scheme based on BCN characteristics and prove its optimality. For large-scale BCNs, questions 1) and 2) are addressed by employing small-memory Q-learning, which reduces memory usage by recording only visited action-values. An upper bound on memory usage is provided to assess the algorithm's feasibility. To expedite convergence for question 2) in large-scale BCNs, we introduce adaptive variable rewards based on the known maximum number of steps needed to reach the target state set without cycles. Finally, the effectiveness of the proposed methods is validated on both small- and large-scale BCNs.
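
As a rough illustration of the small-memory Q-learning idea described in the abstract, the sketch below runs tabular Q-learning for reachability on a toy 3-node BCN, storing action-values in a dictionary so that only state-action pairs actually visited consume memory. The network dynamics, the flip-set encoding, and the sparse +1/0 reward are illustrative assumptions, not the paper's exact setup: the paper's reward scheme is derived from BCN characteristics and proved optimal, and its episodes start from states whose reachability is still unknown rather than from random states.

```python
# Minimal sketch: dictionary-backed ("small memory") Q-learning for
# reachability on a toy Boolean control network. The 3-node dynamics,
# flip set, and reward are assumptions made for illustration only.
import random

N = 3                       # number of Boolean state variables
TARGET = {(1, 1, 1)}        # hypothetical target state set
FLIP_SET = (0, 2)           # hypothetical flipping kernel: nodes we may flip

def step(state, control, flips):
    """Toy BCN update: apply the flipping action, then a fixed Boolean rule."""
    x = list(state)
    for i in flips:         # flipping action: negate the selected nodes
        x[i] ^= 1
    x0, x1, x2 = x
    # Arbitrary illustrative dynamics with one Boolean control input.
    return (x1 & control, x0 | x2, x0 ^ x1)

def actions():
    """Enumerate (control, flip-subset) pairs over the flip set."""
    subsets = [(), (0,), (2,), (0, 2)]
    return [(u, f) for u in (0, 1) for f in subsets]

Q = {}   # small-memory table: only (state, action) pairs actually taken are stored
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.2

def qval(s, a):
    """Unvisited pairs read as 0.0 without being stored."""
    return Q.get((s, a), 0.0)

def train(episodes=5000, horizon=20):
    acts = actions()
    for _ in range(episodes):
        # Assumption: random initial states; the paper instead prioritizes
        # states whose reachability to the target set is unknown.
        s = tuple(random.randint(0, 1) for _ in range(N))
        for _ in range(horizon):
            if random.random() < EPS:
                a = random.choice(acts)             # explore
            else:
                a = max(acts, key=lambda b: qval(s, b))  # exploit
            s2 = step(s, *a)
            done = s2 in TARGET
            # Assumption: sparse reward (+1 at the target, 0 elsewhere);
            # the paper uses a BCN-characteristics-based reward instead.
            r = 1.0 if done else 0.0
            best_next = 0.0 if done else max(qval(s2, b) for b in acts)
            Q[(s, a)] = qval(s, a) + ALPHA * (r + GAMMA * best_next - qval(s, a))
            if done:
                break
            s = s2

train()
print(f"stored entries: {len(Q)} (full table would need {2**N * len(actions())})")
```

The closing print contrasts the dictionary's size with the full state-action table, which is the quantity the paper's memory-usage upper bound controls; for realistic large-scale BCNs the gap between the two grows exponentially in the number of nodes.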

Citations (1)
