
Recovering Markov Models from Closed-Loop Data

Published 20 Jun 2017 in math.OC | (1706.06359v4)

Abstract: Situations in which recommender systems are used to augment decision making are becoming prevalent in many application domains. Almost always, these prediction tools (recommenders) are created with a view to effecting behavioural change. Clearly, successful applications that actuate behavioural change affect the original model underpinning the predictor, leading to an inconsistency. This feedback loop is often not considered in standard so-called Big Data learning techniques, which rely on machine learning/statistical learning machinery. The objective of this paper is to develop tools that recover unbiased user models in the presence of recommenders. More specifically, we assume that we observe a time series which is a trajectory of a Markov chain ${R}$ modulated by another Markov chain ${S}$; i.e., the transition matrix of ${R}$ is unknown and depends on the current state of ${S}$. The transition matrix of the latter is also unknown. In other words, at each time instant, ${S}$ selects a transition matrix for ${R}$ from a given set consisting of known and unknown matrices. The state of ${S}$, in turn, depends on the current state of ${R}$, thus introducing a feedback loop. We propose an Expectation-Maximization (EM) type algorithm that estimates the transition matrices of ${S}$ and ${R}$. Experimental results are given to demonstrate the efficacy of the approach.
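The generative model described in the abstract can be sketched as follows. This is a minimal simulation, not the paper's estimation algorithm: the two-state chains, the transition matrices, and the function names are all made up for illustration. It shows the coupling structure only: $S$ selects which transition matrix $R$ uses, while $S$'s own transition law depends on the current state of $R$, closing the feedback loop. Only the trajectory of $R$ is observed; $S$ is latent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-state example. P_R[s] is the transition matrix that R
# uses when the modulating chain S is in state s.
P_R = [np.array([[0.9, 0.1],
                 [0.2, 0.8]]),
       np.array([[0.3, 0.7],
                 [0.6, 0.4]])]

# P_S[r] is the transition matrix that S uses when R is in state r,
# which is what introduces the feedback loop.
P_S = [np.array([[0.95, 0.05],
                 [0.10, 0.90]]),
       np.array([[0.50, 0.50],
                 [0.30, 0.70]])]

def simulate(T, r0=0, s0=0):
    """Generate a length-T observed trajectory of R; S stays latent."""
    r, s = r0, s0
    traj = [r]
    for _ in range(T - 1):
        # S transitions according to the matrix selected by R's state...
        s = rng.choice(2, p=P_S[r][s])
        # ...then R transitions according to the matrix selected by S.
        r = rng.choice(2, p=P_R[s][r])
        traj.append(r)
    return traj

obs = simulate(1000)
```

An EM-type algorithm of the kind the paper proposes would take only `obs` as input and recover estimates of `P_R` and `P_S`, treating the trajectory of `S` as hidden data.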
