
Personalized Agents from Human Feedback

Updated 20 February 2026
  • PAHF is a class of machine learning systems that incorporates real-time, individualized human feedback to align agent policies with shifting user preferences.
  • It operationalizes personalization through per-user memory modules, user embeddings, and dual-channel feedback, enabling efficient adaptation in non-stationary settings.
  • Empirical studies show that approaches like reflective reward modeling and activation steering improve sample efficiency and personalization performance across diverse tasks.

Personalized Agents from Human Feedback (PAHF) are a class of machine learning systems that explicitly incorporate individualized human preferences at test time, leveraging direct and ongoing user feedback to continually adjust agent behavior and decision-making. These frameworks extend classical reinforcement learning from human feedback (RLHF) by treating each user as a distinct source of ground-truth reward, often in the presence of reward heterogeneity, ambiguity, and dynamic preference drift. The core technical challenge is to efficiently extract and operationalize concept-level or fine-grained preferences from limited interactions, utilizing them to align autonomous policies to each user's implicit objectives, even in non-stationary, open-world settings (Peng et al., 2023, Liang et al., 18 Feb 2026, Li et al., 2024, Park et al., 2024, Blair et al., 21 Jun 2025).

1. Formalization of the Personalized Learning Problem

PAHF formalizes control or decision-making tasks as Markov decision processes (MDPs) $\mathcal{M} = \langle \mathcal{S}, \mathcal{A}, \mathcal{T}, \mathcal{R} \rangle$, where the agent policy $\pi_\theta$ must adapt not simply to environment shifts, but to target an evolving, user-specific reward $\mathcal{R}'$ at deployment. Ambiguity arises from state and reward shifts: test-time scenarios may differ in latent task structure or user goals, and naive aggregation of human data obscures individual value systems. The objective is to find a policy $\pi^*$ that maximizes the expected user-aligned reward under the test-time distribution of states and user intent:

$$\pi^* = \arg\max_\pi \; \mathbb{E}_{(s,a)\sim p_\text{new}}\left[\mathcal{R}'(s,a)\right]$$

where $p_\text{new}$ reflects both the observed state shift and the hidden reward shift (Peng et al., 2023).
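The objective above can be made concrete with a toy, illustrative experiment (not any cited paper's method): a small discrete-action agent tracks a user reward $\mathcal{R}'$ that switches partway through deployment. All numbers, the noise model, and the constant-step-size update are assumptions chosen for illustration; the constant step size is what keeps estimates recency-weighted so the agent can follow the hidden reward shift.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy non-stationary setting: 4 actions, and a user-specific reward R'
# that switches halfway through deployment (a single preference drift).
n_actions = 4
reward_before = np.array([0.1, 0.9, 0.2, 0.3])  # R' before the switch
reward_after = np.array([0.8, 0.1, 0.2, 0.3])   # R' after the switch

est = np.zeros(n_actions)   # running estimate of R'(a) from user feedback
eps, alpha = 0.1, 0.1       # exploration rate, recency weighting

for t in range(2000):
    true_r = reward_before if t < 1000 else reward_after
    # Epsilon-greedy action selection against the current estimate.
    a = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(est))
    r = true_r[a] + 0.05 * rng.normal()           # noisy scalar user feedback
    est[a] += alpha * (r - est[a])                # exponential recency weighting

print(int(np.argmax(est)))   # the greedy action has tracked the switched reward
```

With a plain running average instead of the constant step size, the pre-switch history dominates and the agent keeps recommending the stale action; recency weighting is the minimal fix for drift in this sketch.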

Personalization is operationalized via several mechanism classes, detailed in the sections below.

2. Human-in-the-Loop Personalization Mechanisms

Various PAHF algorithms instantiate a tight feedback loop: the agent alternates between seeking user clarification when ambiguity is detected, grounding decisions in recalled preferences, and integrating post-hoc corrections after observing user feedback. For each interaction round, the agent:

  1. Retrieves candidate preferences from an explicit per-user memory or user embedding.
  2. Queries the user for clarification when the instruction is ambiguous or confidence is low.
  3. Augments its memory or model in response to explicit accept/reject feedback or corrective demonstrations (Liang et al., 18 Feb 2026, Peng et al., 2023).
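The three-step loop above can be sketched in code. `ToyAgent` and its rule-based methods are hypothetical stand-ins for a learned policy and real user I/O; only the control flow (recall, pre-action clarification, post-action correction) mirrors the loop described in the text.

```python
class ToyAgent:
    def detect_ambiguity(self, instruction, prefs):
        # Treat an instruction as ambiguous if no stored preference mentions it.
        return not any(instruction in p for p in prefs)

    def ask_user(self, question):
        return "prefers concise answers"          # simulated clarification reply

    def act(self, instruction, prefs):
        style = "concise" if any("concise" in p for p in prefs) else "verbose"
        return f"{style} reply to: {instruction}"

    def get_feedback(self, action):
        # Simulated post-action signal: accept concise replies, reject others.
        return None if action.startswith("concise") else "reject: too verbose"


def interaction_round(agent, user_id, instruction, memory):
    prefs = list(memory.get(user_id, []))          # 1. recall per-user memory
    if agent.detect_ambiguity(instruction, prefs): # 2. pre-action clarification
        prefs.append(agent.ask_user("Concise or detailed?"))
    action = agent.act(instruction, prefs)
    feedback = agent.get_feedback(action)          # 3. post-action correction
    memory[user_id] = prefs + ([feedback] if feedback else [])
    return action


memory = {}
agent = ToyAgent()
print(interaction_round(agent, "u1", "summarize report", memory))
```

After one round, the clarification is persisted in the per-user memory, so later rounds for the same user can be grounded in it without re-asking.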

A typical realization is the counterfactual human-in-the-loop design: the agent generates concept-edited counterfactuals to help the user distinguish between task-relevant and task-irrelevant aspects, then performs targeted data augmentation on user-identified task-irrelevant factors, dramatically reducing the demonstration cost for fine-tuning (Peng et al., 2023).

Some frameworks collect human preference data via pairwise comparisons, argumentation dialogues, or rubrics mapping actions to high-level value dimensions. In proactive assistant and conversational domains, implicit behavioral signals (typing speed, sentiment, engagement) are fused with explicit feedback (likes, ratings) to construct latent user profiles (Makridis et al., 4 Sep 2025, Kim et al., 26 Sep 2025).

3. User Modeling and Adaptation Algorithms

The core adaptation algorithms in PAHF systems span several approaches:

  • User-conditioned Reward and Policy Models: Policies $\pi_p(y \mid x, u)$ are conditioned on a learned user embedding $e_u = f_P(u; \theta_u)$, with joint training by personalized reward modeling (P-RM) or direct preference optimization (P-DPO), supporting scaling to many users via clusters or factorized models (Li et al., 2024, Park et al., 2024).
  • Online Dual-Channel Feedback Integration: Agents interleave pre-action (clarification) and post-action (correction) feedback channels, each critical under partial observability and preference drift. Theoretical analysis demonstrates that omitting either channel yields linear regret, while dual-channel schemes achieve $O(K+\gamma)$ regret for $K$ preference switches and ambiguity rate $\gamma$ (Liang et al., 18 Feb 2026).
  • Reflective Verbal Reward Modeling: LLM-based dialogue scaffolds help users articulate value judgments; the resulting personalized dialogue histories are passed as in-context input to a frozen reward LM at evaluation time—yielding per-user reward models without retraining (Blair et al., 21 Jun 2025).
  • Activation-Based Steering and On-Device Adaptation: For resource-constrained settings, user feedback dynamically adjusts activation vectors at inference time, facilitating privacy-preserving, real-time personalization without parameter updates or cloud-side data transfer (Xuan et al., 3 Feb 2026).
  • Aggregation and Clustering: For multi-user scenarios, sample-efficient clustering assigns users to representative preference heads, or social-welfare-driven aggregation merges diverse reward functions using utilitarian or Leximin pooling, with DSIC mechanism design for strategic robustness (Park et al., 2024).
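A minimal numerical sketch of the user-conditioned reward model (the first bullet above) under simplifying assumptions: linear response features, a reward $r(x,u) = x \cdot e_u$, and a simulated annotator. The feature dimension, learning rate, and annotator model are all illustrative choices, not details from the cited papers; the per-user embedding is fit by gradient ascent on the Bradley-Terry pairwise likelihood.

```python
import numpy as np

rng = np.random.default_rng(0)

d, n_users = 8, 3
user_emb = 0.01 * rng.normal(size=(n_users, d))   # learned embeddings e_u
true_taste = rng.normal(size=(n_users, d))        # hidden per-user preferences

def reward(x, u):
    # User-conditioned reward: response features scored against user u's embedding.
    return x @ user_emb[u]

lr = 0.5
for _ in range(3000):
    u = int(rng.integers(n_users))
    a, b = rng.normal(size=d), rng.normal(size=d)  # two candidate responses
    # Simulated annotator: user u prefers the response their hidden taste favors.
    win, lose = (a, b) if a @ true_taste[u] > b @ true_taste[u] else (b, a)
    # Bradley-Terry model: p(win > lose) = sigmoid(r_win - r_lose).
    p = 1.0 / (1.0 + np.exp(-(reward(win, u) - reward(lose, u))))
    user_emb[u] += lr * (1.0 - p) * (win - lose)   # gradient of log-likelihood

# Each learned e_u should now rank unseen pairs the way user u's taste does.
hits = 0
for _ in range(200):
    a, b = rng.normal(size=d), rng.normal(size=d)
    hits += (reward(a, 0) > reward(b, 0)) == (a @ true_taste[0] > b @ true_taste[0])
print(hits / 200)
```

The same structure extends to clustered or factorized user models by sharing embedding components across users rather than learning each $e_u$ independently.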

4. Evaluation Protocols and Empirical Results

PAHF research has established rigorous evaluation protocols using both simulation and human-subject studies:

  • Benchmarks: Tasks include embodied manipulation, online shopping, adaptive user interfaces, language modeling, and dialogue (Liang et al., 18 Feb 2026, Gaspar-Figueiredo et al., 29 Apr 2025, Li et al., 2024).
  • Metrics: Personalization error (sum of incorrect actions relative to ground truth), user satisfaction (Likert/QUIS), task success rates, feedback frequency, and adaptation speed to preference drift (Liang et al., 18 Feb 2026, Gaspar-Figueiredo et al., 29 Apr 2025).
  • Empirical Findings: Dual feedback channels improve both initial personalization and adaptation under preference drift, outperforming baselines lacking explicit user memory or single feedback streams. In language tasks, individualized or cluster-based models yield statistically significant improvements (e.g., personalized DPO exceeds vanilla baselines by ~1–2% absolute accuracy) (Li et al., 2024). Reflective verbal reward modeling reaches higher accuracy and sample efficiency than supervised MLP/CNN baselines (9–12% improvement with 4–6 examples per user) (Blair et al., 21 Jun 2025). Proactive assistants with activation steering achieve performance within 2% of cloud-based RLHF even under on-device resource constraints (Xuan et al., 3 Feb 2026).

5. Challenges and Limitations

Despite empirical gains, PAHF frameworks face limitations:

  • Concept Extraction and Editability: Many algorithms presuppose access to a disentangled concept abstraction and generative editor, which remains challenging in complex, real-world sensory domains (Peng et al., 2023).
  • Scalability: Training per-user RL agents or storing large explicit per-user memory can pose bottlenecks for population-scale deployment; clustering and low-rank user modeling partially address this (Li et al., 2024, Park et al., 2024).
  • Reward and Preference Drift: Handling noisy, ambiguous, or adversarial feedback, as well as multi-turn clarification and rapidly shifting persona dynamics, remains an open technical problem (Liang et al., 18 Feb 2026).
  • Standardization and Explainability: The lack of standard benchmark datasets, demonstrated domain transfer, and explainability in RL-based adaptive user interfaces (AUIs) hinders generalization; future work aims to develop interpretable adaptation modules and field-realistic evaluation (Gaspar-Figueiredo et al., 29 Apr 2025).

6. Theoretical Foundations and Future Directions

Recent theoretical work formalizes the sample complexity of PAHF via multi-task representation learning and robust policy optimization. Sample complexity bounds depend on preference diversity, dimension of shared representations, and number of users (Park et al., 2024). Mechanism-design-based aggregation ensures truthful reporting and fairness in multi-user settings. Ongoing directions include:

  • Richer multimodal and longitudinal feedback integration.
  • Memory compression and federated meta-learning for scalability and privacy.
  • Automatic discovery of new preference dimensions beyond fixed taxonomies.
  • Deployment and validation in long-horizon, real-world tasks, with dynamic user bases and evolving social norms (Xuan et al., 3 Feb 2026, Blair et al., 21 Jun 2025, Liang et al., 18 Feb 2026).

7. Comparative Overview of PAHF Methodologies

| Mechanism                      | Adaptation Target  | Feedback Modality             |
|--------------------------------|--------------------|-------------------------------|
| Counterfactual Concept Editing | RL policies        | Corrective demo + binary      |
| User-Embedding RLHF            | LLM responses      | Pairwise/preference data      |
| Explicit Per-User Memory       | General policies   | Pre- & post-action feedback   |
| Reflective Verbal Reward       | Value alignment    | Dialogue/rich explication     |
| Activation Steering            | Device-resident LM | Accept/reject + rating        |
| Clustering/Aggregation         | Multi-user models  | Reward/probabilistic opinions |

Distinct approaches trade off sample efficiency, interpretability, privacy, computational footprint, and adaptation speed. Empirical and theoretical research converges on the insight that real-time, per-user feedback, efficiently operationalized via explicit memory, user modeling, or in-context learning, is essential for robust, adaptive personalization in modern AI agents (Liang et al., 18 Feb 2026).
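The activation-steering row above can be sketched with a toy numerical example, under illustrative assumptions: a fixed unit "trait" direction in activation space, a scalar coefficient as the only per-user state, and a simulated rating signal. No model weights are updated and no data leaves the device in this scheme; the direction, target, and step size here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 16
steer_dir = rng.normal(size=d)
steer_dir /= np.linalg.norm(steer_dir)   # e.g. a unit "formality" direction

def forward(hidden, alpha):
    # Inference-time intervention: shift one layer's activation along the trait.
    return hidden + alpha * steer_dir

alpha = 0.0                              # per-user steering coefficient
target = 1.0                             # hidden user preference for the trait

for _ in range(50):
    h = rng.normal(size=d)               # stand-in for a layer activation
    out = forward(h, alpha)
    # Simulated rating: +1 ("more of this trait") when under-expressed, else -1.
    feedback = 1.0 if out @ steer_dir < target else -1.0
    alpha += 0.1 * feedback              # only the scalar coefficient adapts

print(round(alpha, 2))                   # alpha settles near the user's target
```

Because adaptation touches only `alpha`, the footprint is a single scalar per user per trait, which is what makes this style of personalization attractive for privacy-preserving, resource-constrained deployment.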
