
Tensor Low-rank Approximation of Finite-horizon Value Functions (2405.17628v1)

Published 27 May 2024 in cs.LG and cs.AI

Abstract: The goal of reinforcement learning is estimating a policy that maps states to actions and maximizes the cumulative reward of a Markov Decision Process (MDP). This is oftentimes achieved by estimating first the optimal (reward) value function (VF) associated with each state-action pair. When the MDP has an infinite horizon, the optimal VFs and policies are stationary under mild conditions. However, in finite-horizon MDPs, the VFs (hence, the policies) vary with time. This poses a challenge since the number of VFs to estimate grows not only with the size of the state-action space but also with the time horizon. This paper proposes a non-parametric low-rank stochastic algorithm to approximate the VFs of finite-horizon MDPs. First, we represent the (unknown) VFs as a multi-dimensional array, or tensor, where time is one of the dimensions. Then, we use rewards sampled from the MDP to estimate the optimal VFs. More precisely, we use the (truncated) PARAFAC decomposition to design an online low-rank algorithm that recovers the entries of the tensor of VFs. The size of the low-rank PARAFAC model grows additively with respect to each of its dimensions, rendering our approach efficient, as demonstrated via numerical experiments.
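
To make the idea concrete, the following is a minimal sketch, not the authors' implementation: assuming a single discrete state index, illustrative sizes, and a Q-learning-style semi-gradient update, the tensor of VFs indexed by (time, state, action) is modeled with rank-R PARAFAC factors that are refined from sampled transitions.

```python
import numpy as np

# Illustrative sketch only: a rank-R PARAFAC (CP) model of the time-indexed
# Q-tensor, Q[h, s, a] ~= sum_r T_fac[h, r] * S_fac[s, r] * A_fac[a, r],
# fitted online from sampled rewards. Sizes, names, step size, and the
# Q-learning-style target are assumptions, not the paper's exact algorithm.

rng = np.random.default_rng(0)
H, nS, nA, R = 10, 50, 4, 5                   # horizon, #states, #actions, PARAFAC rank
T_fac = 0.1 * rng.standard_normal((H, R))     # time factor
S_fac = 0.1 * rng.standard_normal((nS, R))    # state factor
A_fac = 0.1 * rng.standard_normal((nA, R))    # action factor

def q_value(h, s, a):
    """Reconstruct Q(h, s, a) from the three PARAFAC factors."""
    return float(np.sum(T_fac[h] * S_fac[s] * A_fac[a]))

def update(h, s, a, reward, s_next, alpha=0.01):
    """Stochastic factor update from one sampled transition at time step h."""
    # Finite-horizon target: bootstrap from step h+1, no bootstrap at the horizon.
    if h + 1 < H:
        target = reward + max(q_value(h + 1, s_next, b) for b in range(nA))
    else:
        target = reward
    err = q_value(h, s, a) - target
    # Gradient of 0.5 * err**2 w.r.t. the sampled factor rows (target held fixed).
    gT = err * (S_fac[s] * A_fac[a])
    gS = err * (T_fac[h] * A_fac[a])
    gA = err * (T_fac[h] * S_fac[s])
    T_fac[h] -= alpha * gT
    S_fac[s] -= alpha * gS
    A_fac[a] -= alpha * gA
```

In this sketch the model has (H + nS + nA)·R parameters instead of the H·nS·nA entries of the full tensor of VFs, which is the additive growth the abstract refers to. The authors' own implementation is available at https://github.com/sergiorozada12/fhtlr-learning.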

Authors (2)
  1. Sergio Rozada (12 papers)
  2. Antonio G. Marques (78 papers)
Citations (1)

