
Adapting to Non-stationarity with Growing Expert Ensembles (1103.0949v2)

Published 4 Mar 2011 in stat.ML, cs.LG, physics.data-an, and stat.ME

Abstract: When dealing with time series with complex non-stationarities, low retrospective regret on individual realizations is a more appropriate goal than low prospective risk in expectation. Online learning algorithms provide powerful guarantees of this form, and have often been proposed for use with non-stationary processes because of their ability to switch between different forecasters or "experts". However, existing methods assume that the set of experts whose forecasts are to be combined are all given at the start, which is not plausible when dealing with a genuinely historical or evolutionary system. We show how to modify the "fixed shares" algorithm for tracking the best expert to cope with a steadily growing set of experts, obtained by fitting new models to new data as it becomes available, and obtain regret bounds for the growing ensemble.
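The core mechanism the abstract describes, tracking the best forecaster with fixed-share style weight updates while the expert pool grows as new models are fitted to new data, can be illustrated with a short sketch. This is a minimal illustration only, assuming squared loss, a uniform-share mixing step, and an arbitrary rule that grants each newly added expert a fixed fraction of the current weight mass; the class name `GrowingFixedShare` and the parameters `eta`, `alpha`, and `new_weight` are hypothetical and are not the paper's notation or its exact weight-initialization scheme.

```python
import numpy as np

class GrowingFixedShare:
    """Sketch of a fixed-share style forecaster whose expert pool can grow over time."""

    def __init__(self, eta=1.0, alpha=0.05, new_weight=0.1):
        self.eta = eta                # learning rate for the exponential-weights step
        self.alpha = alpha            # share rate: mass redistributed uniformly each round
        self.new_weight = new_weight  # fraction of total mass handed to a newly added expert (illustrative rule)
        self.experts = []             # callables: history (list of floats) -> point forecast
        self.weights = np.array([])

    def add_expert(self, expert):
        """Add a newly fitted expert, giving it a slice of the existing weight mass."""
        if self.weights.size == 0:
            self.experts.append(expert)
            self.weights = np.array([1.0])
            return
        total = self.weights.sum()
        self.experts.append(expert)
        self.weights = np.append((1.0 - self.new_weight) * self.weights,
                                 self.new_weight * total)

    def predict(self, history):
        """Return the weighted ensemble forecast and the individual expert forecasts."""
        preds = np.array([f(history) for f in self.experts])
        w = self.weights / self.weights.sum()
        return float(np.dot(w, preds)), preds

    def update(self, preds, outcome):
        """Exponential-weights update on squared loss, then a uniform 'share' mixing step."""
        losses = (preds - outcome) ** 2
        v = self.weights * np.exp(-self.eta * losses)
        self.weights = (1.0 - self.alpha) * v + self.alpha * v.sum() / len(v)
```

A toy run, again purely illustrative: the series' mean shifts partway through, and a new expert fitted to recent observations is added once enough data has accumulated, after which the share step lets the ensemble transfer weight to it.

```python
rng = np.random.default_rng(0)
ens = GrowingFixedShare()
ens.add_expert(lambda h: 0.0)  # constant baseline expert available from the start
history = []
for t in range(200):
    if t == 100:  # "fit" a new model once more data has become available
        ens.add_expert(lambda h: float(np.mean(h[-20:])) if len(h) >= 20 else 0.0)
    forecast, preds = ens.predict(history)
    y = (0.0 if t < 100 else 3.0) + rng.normal(scale=0.5)
    ens.update(preds, y)
    history.append(y)
```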

Authors (4)
  1. Cosma Rohilla Shalizi (32 papers)
  2. Abigail Z. Jacobs (21 papers)
  3. Kristina Lisa Klinkner (3 papers)
  4. Aaron Clauset (49 papers)
Citations (26)