Tracking Changing Probabilities via Dynamic Learners (2402.10142v3)

Published 15 Feb 2024 in cs.LG and cs.AI

Abstract: Consider a predictor, a learner, whose input is a stream of discrete items. The predictor's task, at every time point, is probabilistic multiclass prediction, i.e., to predict which item may occur next by outputting zero or more candidate items, each with a probability, after which the actual item is revealed and the predictor updates. To output probabilities, the predictor keeps track of the proportions of the items it has seen. The stream is unbounded (lifelong), and the predictor has finite, limited space. The task is open-ended: the set of items is unknown to the predictor, and their totality can also grow unbounded. Moreover, there is non-stationarity: the underlying frequencies of items may change, substantially, from time to time. For instance, new items may start appearing and a few recently frequent items may cease to occur again. The predictor, being space-bounded, need only provide probabilities for those items which, at the time of prediction, have sufficiently high frequency, i.e., the salient items. This problem is motivated in the setting of Prediction Games, a self-supervised learning regime where concepts serve as both the predictors and the predictands, and the set of concepts grows over time, resulting in non-stationarities as new concepts are generated and used. We design and study a number of predictors, sparse moving averages (SMAs), for the task. One SMA adapts the sparse exponentiated moving average, and another is based on queuing a few counts, keeping dynamic per-item histories. Evaluating the predicted probabilities, under noise and non-stationarity, presents challenges, and we discuss and develop evaluation methods, one based on bounding log-loss. We show that a combination of ideas, supporting dynamic predictand-specific learning rates, offers advantages in terms of faster adaptation to change (plasticity), while also supporting low variance (stability).
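The abstract only names the techniques, so the following is a minimal Python sketch of the general sparse-moving-average idea it describes: per-item exponentiated-moving-average weights, with small weights pruned to bound space. The class name, parameters (rate, drop_threshold), and update rule are illustrative assumptions for this sketch, not the paper's SMA variants, and the paper's dynamic per-item learning rates are omitted.

```python
class SparseEMA:
    """Space-bounded sparse moving-average predictor for a discrete stream.

    A sketch of the general technique (not the paper's exact SMA variants):
    each tracked item's weight decays by (1 - rate) at every step, the
    observed item's weight gets +rate, and items whose weight falls below
    drop_threshold are pruned, keeping the map sparse and space-bounded.
    """

    def __init__(self, rate=0.05, drop_threshold=0.005):
        self.rate = rate                      # EMA learning rate
        self.drop_threshold = drop_threshold  # prune weights below this
        self.weights = {}                     # item -> weight (sparse map)

    def predict(self):
        """Probabilities over the currently salient (tracked) items."""
        total = sum(self.weights.values())
        return {x: w / total for x, w in self.weights.items()} if total else {}

    def update(self, observed):
        """Decay all weights, boost the observed item, prune tiny entries."""
        for x in list(self.weights):
            self.weights[x] *= 1.0 - self.rate
            if self.weights[x] < self.drop_threshold:
                del self.weights[x]           # bounds space use
        self.weights[observed] = self.weights.get(observed, 0.0) + self.rate


# Example: the predictor adapts when the underlying frequencies shift.
sma = SparseEMA(rate=0.1, drop_threshold=0.01)
for item in ["a"] * 30 + ["b", "c"] * 15:     # regime change midway
    probs = sma.predict()                     # predict before observing
    sma.update(item)
print(sma.predict())                          # "b" and "c" dominate; "a" has faded
```

With a single global rate, this sketch trades plasticity against stability through one knob: a larger rate adapts faster after a change but yields noisier estimates for stable items. The paper's contribution, per the abstract, is to make the learning rate dynamic and predictand-specific so that both properties can be obtained at once.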

Authors (1)

Omid Madani
