Managing Temporal Resolution in Continuous Value Estimation: A Fundamental Trade-off (2212.08949v3)

Published 17 Dec 2022 in cs.LG, cs.SY, eess.SY, and stat.ML

Abstract: A default assumption in reinforcement learning (RL) and optimal control is that observations arrive at discrete time points on a fixed clock cycle. Yet, many applications involve continuous-time systems where the time discretization can, in principle, be managed. The impact of time discretization on RL methods has not been fully characterized in existing theory, but a more detailed analysis of its effect could reveal opportunities for improving data efficiency. We address this gap by analyzing Monte-Carlo policy evaluation for LQR systems and uncover a fundamental trade-off between approximation and statistical error in value estimation. Importantly, these two errors respond differently to the time discretization, leading to an optimal choice of temporal resolution for a given data budget. These findings show that managing the temporal resolution can provably improve policy evaluation efficiency in LQR systems with finite data. Empirically, we demonstrate the trade-off in numerical simulations of LQR instances and standard RL benchmarks for non-linear continuous control.
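
The trade-off described in the abstract can be illustrated with a minimal numerical sketch. This is not the authors' code: the one-dimensional LQR instance, the fixed linear policy gain, and the Euler-Maruyama rollout scheme below are illustrative assumptions. With a fixed budget of simulated transitions, a coarser step size h allows more Monte-Carlo rollouts (lower statistical error) but a cruder approximation of the continuous-time return (higher approximation error), and vice versa.

```python
# Hypothetical sketch: Monte-Carlo value estimation for a 1-D continuous-time
# LQR instance at different time discretizations h, under a fixed data budget
# measured in total simulated transitions.
import numpy as np

# Continuous-time dynamics dx = (a*x + b*u) dt + sigma dW,
# running cost q*x^2 + r*u^2, fixed linear policy u = -k*x, finite horizon T.
a, b, sigma = -0.5, 1.0, 0.3
q, r, k = 1.0, 0.1, 0.8
x0, T = 1.0, 5.0

def mc_value_estimate(h, n_rollouts, rng):
    """Euler-Maruyama rollouts with step size h; returns the mean sampled return."""
    n_steps = int(T / h)
    returns = np.empty(n_rollouts)
    for i in range(n_rollouts):
        x, cost = x0, 0.0
        for _ in range(n_steps):
            u = -k * x
            cost += (q * x**2 + r * u**2) * h  # rectangle-rule approximation of the cost integral
            x += (a * x + b * u) * h + sigma * np.sqrt(h) * rng.standard_normal()
        returns[i] = cost
    return returns.mean()

rng = np.random.default_rng(0)
budget = 50_000  # total simulated transitions allowed
for h in [1.0, 0.5, 0.1, 0.02, 0.005]:
    n_rollouts = max(1, budget // int(T / h))  # coarser h => more rollouts for the same budget
    est = mc_value_estimate(h, n_rollouts, rng)
    print(f"h={h:<6} rollouts={n_rollouts:<6} value estimate={est:.4f}")
```

Varying h in this sketch shows the two error sources moving in opposite directions, which is the mechanism behind the paper's claim that an intermediate temporal resolution is optimal for a given data budget.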

Authors (7)
  1. Zichen Zhang (30 papers)
  2. Johannes Kirschner (17 papers)
  3. Junxi Zhang (6 papers)
  4. Francesco Zanini (5 papers)
  5. Alex Ayoub (7 papers)
  6. Masood Dehghan (16 papers)
  7. Dale Schuurmans (112 papers)
Citations (2)
