Context-aware Bayesian Mixed Multinomial Logit Model (2210.05737v2)

Published 11 Oct 2022 in stat.ML and cs.LG

Abstract: The mixed multinomial logit model assumes that a decision-maker's preference parameters remain constant across choice situations, an assumption that may be too strong for certain choice modelling applications. This paper proposes an effective approach to model context-dependent intra-respondent heterogeneity, introducing the context-aware Bayesian mixed multinomial logit model, in which a neural network maps contextual information to interpretable shifts in the preference parameters of each individual at each choice occasion. The proposed model offers several key advantages. First, it supports both continuous and discrete variables, as well as complex non-linear interactions between the two. Second, each context specification is considered jointly as a whole by the neural network, rather than each variable being considered independently. Finally, since the neural network parameters are shared across all decision-makers, the model can leverage information from other decision-makers to infer the effect of a particular context on a particular decision-maker. Although the context-aware Bayesian mixed multinomial logit model allows for flexible interactions between attributes, the increase in computational complexity over the mixed multinomial logit model is minor. We illustrate the concept and interpretation of the proposed model in a simulation study. We furthermore present a real-world case study from the travel behaviour domain: a bicycle route choice model based on a large-scale, crowdsourced dataset of GPS trajectories comprising 119,448 trips made by 8,555 cyclists.
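
To make the model's mechanism concrete, the following is a minimal sketch of the core idea, not the authors' implementation: the paper fits a Bayesian model with priors over the individual-level parameters, whereas this sketch trains by plain maximum likelihood in PyTorch. All class names, layer choices, and dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ContextAwareMMNL(nn.Module):
    """Illustrative sketch of a context-aware mixed multinomial logit:
    each decision-maker n has a preference vector beta_n, and a shared
    neural network maps the context of choice occasion t to a shift,
    so beta_nt = beta_n + f(context_nt). Sizes and architecture are
    assumptions, not taken from the paper."""

    def __init__(self, n_individuals, n_params, context_dim, hidden_dim=32):
        super().__init__()
        # One preference vector per decision-maker (the "mixed" part);
        # the paper places a Bayesian prior over these instead.
        self.beta = nn.Parameter(torch.zeros(n_individuals, n_params))
        # Shared network: because its weights are common to everyone,
        # a context's effect observed for some decision-makers informs
        # the inferred effect for others.
        self.context_net = nn.Sequential(
            nn.Linear(context_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, n_params),
        )

    def forward(self, person_idx, context, attributes):
        # person_idx: (batch,) indices of decision-makers
        # context:    (batch, context_dim) context of each choice occasion
        # attributes: (batch, n_alts, n_params) alternative attributes
        shift = self.context_net(context)            # (batch, n_params)
        beta_nt = self.beta[person_idx] + shift      # context-shifted betas
        utility = (attributes * beta_nt.unsqueeze(1)).sum(dim=-1)
        return torch.log_softmax(utility, dim=-1)    # log choice probabilities
```

Training then amounts to minimising the negative log-likelihood of the chosen alternatives, e.g. `nn.NLLLoss()(model(person_idx, context, attributes), chosen)`. The shift added to `beta_n` is directly interpretable as a context-dependent change in that individual's preference parameters, which is the interpretability property the abstract emphasises.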
