Selecting suitable reward features for IRL in continuous state spaces
Determine which feature functions should represent reward functions in inverse reinforcement learning over continuous state spaces, under the standard linear model in which the reward is a weighted sum of features, so that the chosen feature set can capture and reproduce expert policies.
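Under the linear model referenced above, the reward takes the form R(s) = w · φ(s) for a feature map φ and weight vector w. A minimal sketch of one common (hypothetical) feature choice for a 1-D continuous state, radial basis functions, illustrates what "selecting features" means in practice; the centers, width, and weights below are illustrative assumptions, not values from the source:

```python
import numpy as np

def rbf_features(s, centers, width=0.5):
    """Radial-basis feature vector phi(s) for a scalar state s."""
    return np.exp(-((s - centers) ** 2) / (2 * width ** 2))

# Hypothetical feature centers spread over a [0, 1] state space,
# and example reward weights (in IRL these would be learned).
centers = np.linspace(0.0, 1.0, 5)
w = np.array([0.1, -0.3, 0.8, -0.2, 0.4])

def reward(s):
    # Linear reward model: R(s) = w . phi(s)
    return float(w @ rbf_features(s, centers))
```

The open problem is precisely the step this sketch hard-codes: choosing φ (here, the number, placement, and width of the basis functions) so that some weight vector w can represent a reward explaining the expert's behavior.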
References
A central open challenge in inverse reinforcement learning is the choice of suitable features to represent the reward.
— Automated Feature Selection for Inverse Reinforcement Learning
(arXiv:2403.15079, Baimukashev et al., 2024), Figure 1 caption (Introduction)