
Humanlike Multi-user Agent (HUMA)

Updated 28 November 2025
  • Humanlike Multi-user Agents (HUMA) are AI systems designed to emulate natural, multi-user human interactions through modular pipelines and context-sensitive decision-making.
  • They integrate real-time value alignment, dynamic negotiation, and temporal realism to handle simultaneous user inputs and enable conflict resolution.
  • Empirical evaluations show HUMA systems achieve competitive compliance, fairness, and safety metrics, making them promising for adaptive group facilitation and ethical domestic AI.

A Humanlike Multi-user Agent (HUMA) denotes an artificial agent architecture or system that aims to emulate human-like interaction, negotiation, and facilitation behaviors in settings involving multiple simultaneous users. Drawing from research in domestic AI, simulation platforms, and conversational group chat facilitation, the HUMA paradigm encompasses value-aligned action selection, real-time conflict mediation, sophisticated user modeling, temporal behavioral realism, and seamless context management across diverse social and practical environments (Chandra et al., 21 Oct 2025, Jacniacki et al., 21 Nov 2025, Wang et al., 2023).

1. Architectural Foundations

HUMA system architectures are characterized by multi-module pipelines designed to process multimodal, concurrent, and sometimes asynchronous user inputs. Prominent implementations include the following distinct yet conceptually related frameworks:

  • Plural Voices Model (PVM) for domestic spaces relies on a single-agent system (“Agora-nest”) integrating (i) Perception & Identification, (ii) Value Alignment & Ethical Reasoning, (iii) Negotiation & Conflict Resolution, and (iv) Personalization & Adaptation. Perception covers speaker identification and multi-modal sensing; value alignment utilizes real-time inference and optimization for ethical and safe choices; negotiation arbitrates resource allocation under fairness and safety constraints; personalization adapts the interface to user archetypes and preferences (Chandra et al., 21 Oct 2025).
  • Humanoid Agents Simulations implement agent-centric worlds by composing a World Simulator, Agent Manager, LLM interface for dialogue and planning, agent-specific memory stores, and a visual-analytic front end. The architecture is tailored for investigations into System 1 social-cognitive processes such as basic needs, emotion, and relational closeness (Wang et al., 2023).
  • HUMA Facilitator for Group Chats employs an event-driven workflow with three primary stages: Router (strategy selection), Action Agent (tool execution), and Reflection (context summarization). The pipeline is designed for interruptibility, real-time multi-party input, and temporal realism in digital chat group facilitation (Jacniacki et al., 21 Nov 2025).
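
The Router → Action Agent → Reflection workflow described above can be sketched as a minimal event loop. This is an illustrative skeleton, not the paper's implementation: all class names, strategy labels, and routing rules are placeholder assumptions standing in for the LLM-driven components.

```python
from dataclasses import dataclass, field

@dataclass
class ChatEvent:
    kind: str      # "join", "message", "reply", "reaction", "typing"
    user: str
    text: str = ""

@dataclass
class Context:
    summary: str = ""
    history: list = field(default_factory=list)

def router(event, ctx):
    # Stage 1 (Router): select a facilitation strategy for the event.
    if event.kind == "join":
        return "welcome"
    if event.kind == "message" and "?" in event.text:
        return "answer"
    return "keep_silent"

def action_agent(strategy, event, ctx):
    # Stage 2 (Action Agent): execute the strategy (a tool/LLM call in a real system).
    if strategy == "welcome":
        return f"Welcome, {event.user}!"
    if strategy == "answer":
        return f"Good question, {event.user} - let me help."
    return None  # "Keep Silent": no intervention

def reflect(event, reply, ctx):
    # Stage 3 (Reflection): fold the turn into a running context summary.
    ctx.history.append((event.kind, event.user, reply))
    ctx.summary = f"{len(ctx.history)} events processed"
    return ctx

ctx = Context()
for ev in [ChatEvent("join", "ana"), ChatEvent("message", "ben", "How does this work?")]:
    ctx = reflect(ev, action_agent(router(ev, ctx), ev, ctx), ctx)
```

In a full system each stage would be interruptible by new events, with partial state preserved between stages, as the frameworks above require.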

These architectures emphasize modularity, context maintenance, and robust interruption handling, enabling HUMA systems to replicate the fluid and concurrent nature of human multi-user interactions.

2. Core Principles and Algorithms

A defining feature of HUMA is the integration of value-sensitive and fairness-aware decision algorithms that operate in real time.

  • Value Alignment in Multi-user Contexts: At each timestep, the agent aggregates user requests $R_i$ and profiles $P_i$, computing utilities $u_i(a \mid R_i, P_i) \in [0,1]$ over candidate actions $a$. Users are assigned dynamic priorities $w_i$, normalized as $w'_i = w_i / \sum_j w_j$. The agent solves

$$\max_a \sum_i w'_i \, u_i(a)$$

subject to safety ($S(a) = 1$) and fairness ($F(\{a_i\}) \geq \tau$) constraints, where $F$ enforces proportional utility via $F(\{a_i\}) = \min_i u_i(a_i) / \max_i u_i(a_i)$; typically $\tau = 0.8$ (Chandra et al., 21 Oct 2025).

  • Strategy Selection and Timeliness Modeling: In group chat HUMAs, each strategy $s$ in a set $S$ is scored by Appropriateness $A_s$ and Timeliness $T_s$ (which penalizes recent repetition); the agent selects the strategy maximizing $\text{Score}_s = A_s + T_s$ (Jacniacki et al., 21 Nov 2025).
  • Interruptibility and Workflow Recursion: All major action modules incorporate mechanisms for workflow interruption and safe resumption, preserving partial plans in a scratchpad and replaying or continuing as necessary upon new event arrival. This enables agent response to overlapping requests, mirroring human multitasking in social settings (Jacniacki et al., 21 Nov 2025, Chandra et al., 21 Oct 2025).
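
The value-aligned action selection above can be sketched directly from its definition. The sketch below simplifies the fairness constraint $F$ to operate on each candidate action's per-user utilities (the paper defines it over the tuple of per-user actions $\{a_i\}$); function and variable names are illustrative.

```python
def select_action(actions, utilities, weights, safe, tau=0.8):
    """Pick the action maximizing priority-weighted utility, subject to
    safety and proportional-fairness constraints.

    utilities: dict action -> list of per-user utilities u_i(a) in [0, 1]
    weights:   raw per-user priorities w_i (normalized internally)
    safe:      dict action -> bool, encoding the S(a) = 1 constraint
    """
    total = sum(weights)
    w = [wi / total for wi in weights]           # w'_i = w_i / sum_j w_j
    best, best_score = None, float("-inf")
    for a in actions:
        if not safe[a]:
            continue                             # safety: require S(a) = 1
        u = utilities[a]
        if max(u) > 0 and min(u) / max(u) < tau:
            continue                             # fairness: min/max >= tau
        score = sum(wi * ui for wi, ui in zip(w, u))
        if score > best_score:
            best, best_score = a, score
    return best
```

For example, an action that strongly favors one user over another (utility ratio below $\tau = 0.8$) is filtered out even if its weighted sum is highest, and unsafe actions are excluded outright.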

The result is continuous, context-dependent adaptation of agent outputs that balances user needs, social conventions, and safety requirements.
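
A toy sketch of the strategy-scoring rule $\text{Score}_s = A_s + T_s$: the geometric decay used for the repetition penalty is an assumption for illustration, as the paper does not specify the exact form of $T_s$.

```python
def timeliness(strategy, recent, decay=0.5):
    """T_s: penalize strategies used recently; the most recent use hurts most.
    `recent` lists past strategies, newest last."""
    penalty = 0.0
    for age, s in enumerate(reversed(recent)):   # age 0 = most recent turn
        if s == strategy:
            penalty += decay ** age
    return -penalty

def pick_strategy(appropriateness, recent):
    """Select argmax over Score_s = A_s + T_s."""
    return max(appropriateness,
               key=lambda s: appropriateness[s] + timeliness(s, recent))
```

Under this scoring, a highly appropriate strategy that was just used twice in a row can lose to a less appropriate but fresher one, discouraging repetitive agent behavior.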

3. User Modeling and Personalization

User modeling in HUMA is realized through segmentation by archetype and dynamic adaptation of interaction affordances.

  • Archetypal Profiles: Standard user categories include child, elderly, neurodivergent (e.g., ADHD), and typical adult. Each invokes parameterized modifications to dialogue complexity, output modality, confirmation steps, and content gating (e.g., parental controls for children, granular step confirmations for elderly, video-guided scaffolds and step-by-step chunking for neurodivergent users) (Chandra et al., 21 Oct 2025).
  • Autonomy Tuning: A global "autonomy slider" (0–100%), equivalently a parameter $\alpha \in [0,1]$, affords users or households direct control over agent initiative, ranging from manual (confirmation-only) to fully autonomous execution (Chandra et al., 21 Oct 2025).
  • System 1 and 2 Attributes: Humanoid Agents platforms extend modeling to basic needs (e.g., fullness, fun, health), discrete or soft emotions, and per-pair relationship closeness $C_{ab}(t)$, all governed by mathematical update equations and incorporated into planning and dialogue routines (Wang et al., 2023).
  • Fine-grained Output Adaptation: Language complexity ($L$), hint-answer strategies (via learning-gain metrics), text size, speech rate, and video guidance are customized per archetype and controllable via user profiles or interface settings (Chandra et al., 21 Oct 2025).
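
The archetype-to-affordance mapping above could be represented as a small configuration table. This is a hypothetical sketch: the field names and all parameter values (language levels, autonomy caps) are illustrative placeholders, not values from the papers.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class InteractionProfile:
    language_level: str       # target dialogue complexity
    confirm_each_step: bool   # granular step confirmations
    video_guidance: bool      # short instructional animations by default
    content_gating: bool      # e.g., parental controls
    autonomy_cap: float       # upper bound on the autonomy parameter alpha

# Illustrative defaults per archetype (placeholder values).
ARCHETYPES = {
    "child":          InteractionProfile("simple",   True,  False, True,  0.3),
    "elderly":        InteractionProfile("plain",    True,  False, False, 0.6),
    "neurodivergent": InteractionProfile("chunked",  False, True,  False, 0.8),
    "typical_adult":  InteractionProfile("standard", False, False, False, 1.0),
}

def effective_autonomy(archetype: str, slider: float) -> float:
    """Clamp the household autonomy slider (0-1) to the archetype's cap."""
    return min(slider, ARCHETYPES[archetype].autonomy_cap)
```

A design like this keeps per-archetype adaptations declarative, so a household-wide autonomy setting can still be bounded by the most protective profile of the users present.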

This individualization paradigm enables HUMA to satisfy sophisticated accessibility, fairness, and engagement requirements in heterogeneous multi-user environments.

4. Temporal Realism and Interaction Management

Humanlike timing and interaction management are central to establishing social presence and naturalness in HUMA systems.

  • Typing and Response Time Modeling: In group chat facilitation, agent replies involve simulated typing delays with $v \sim \text{Uniform}(50, 100)$ WPM, imposing $\delta_\text{typing} = 60 \cdot W / v$ seconds for a $W$-word reply. During this period, the agent broadcasts typing indicators and remains available for mid-generation interruptions (Jacniacki et al., 21 Nov 2025).
  • Asynchronous Event Handling: The event-driven pipeline permits processing of diverse chat events (join, message, reply, reaction, typing), updating context state $C_t$ and orchestrating timely and situationally appropriate interventions or silences ("Keep Silent" strategy), reducing the risk of unnatural conversational rhythm or over-participation (Jacniacki et al., 21 Nov 2025).
  • Sequential Planning and Replanning: For simulation platforms, agent sense-plan-act cycles repeat at fixed time intervals (e.g., 15-minute ticks), with needs-driven or emotion-driven replanning when state thresholds are breached or events occur (Wang et al., 2023).
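
The typing-delay model above translates directly into code. A minimal sketch, following the formula $\delta_\text{typing} = 60 \cdot W / v$ with $v \sim \text{Uniform}(50, 100)$ WPM; the function name is illustrative.

```python
import random

def typing_delay_seconds(reply: str, rng=random) -> float:
    """Simulated human typing delay for a reply.

    delta_typing = 60 * W / v seconds, where W is the word count
    and v ~ Uniform(50, 100) words per minute.
    """
    words = len(reply.split())          # W
    wpm = rng.uniform(50, 100)          # v
    return 60.0 * words / wpm
```

For a 20-word reply this yields a delay between 12 and 24 seconds; the agent would broadcast a typing indicator for that duration while staying interruptible.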

These mechanisms ensure that HUMA interactions align with human expectations of conversational cadence, reactivity, and interruption fluidity.

5. Evaluation Methodologies and Empirical Outcomes

HUMA systems have been evaluated in a variety of controlled and naturalistic settings using quantitative and qualitative metrics.

  • Plural Voices Model (Domestic AI): Evaluations involving 36 participants across plural family contexts compared PVM to multi-agent baselines on Compliance Rate (76% vs. 70%), Fairness Score (90% vs. 85%), Safety-Violation Rate (0% vs. 7%), and median latency (<800 ms). Addition of video guidance notably improved perceived usability, especially for neurodivergent users (4.5/5 rating), and family hub scheduling reduced reported family-time conflict by 30% (Chandra et al., 21 Oct 2025).
  • Group Chat Facilitation: With 97 participants, AI and human community managers led role-play group chats. Detection rates for identifying the AI as human clustered around chance (44.6% AI-labeled-as-human; 46.7% human-labeled-as-human), with all subjective experience metrics (effectiveness, social presence, engagement, humanlikeness) demonstrating only modest differences (Cohen's $|d| < 0.4$). Qualitative cues such as speed, formality, and empathy failed to distinguish agents (Jacniacki et al., 21 Nov 2025).
  • Simulation Environments: Humanoid Agents achieved high micro-F1 ($> 0.84$) on basic-need and emotion annotation, and found that needs-driven time allocation and emotion/relationship fixpoints modulate simulated social dynamics (Wang et al., 2023).

Table: Summary of Empirical Performance (selected metrics)

| System / Setting | Compliance Rate | Fairness Score | Safety-Violation Rate | Latency |
|---|---|---|---|---|
| PVM (domestic, N=36) | 76% | 90% | 0% | <800 ms |
| Multi-agent baseline | 70% | 85% | 7% | – |
| HUMA (group chat) | 44.6% AI detected as human | ~4.14/5 humanlikeness | – | humanlike (simulated) |

These findings substantiate that HUMA systems can match or exceed established baselines in compliance, fairness, and social indistinguishability, indicating the efficacy of value-aligned, temporally realistic, and user-tailored architectures.

6. Interface Innovations and Extensibility

Interface design and extensibility are crucial to operationalizing HUMA principles in practical applications.

  • Video Guidance: Automated generation of short (10–20s) instructional animations (e.g., with the avatar “Ava” via i2vgen-XL) as default scaffolding for neurodivergent users and optional support for others (Chandra et al., 21 Oct 2025).
  • Autonomy Slider and Family Hub: Interactive UX controls for agent initiative and centralized scheduling enhance transparency, user control, and coordination across multiple users (Chandra et al., 21 Oct 2025).
  • Adaptive Safety Dashboard: Real-time monitoring and transparent logging of applied safety scaffolds, content filters, and override actions bolster auditability and user trust (Chandra et al., 21 Oct 2025).
  • Analytics Visualization: Simulation platforms feature temporal dashboards displaying agent needs, emotions, and relationships, facilitating research analysis and interactive debugging (Wang et al., 2023).
  • Extensibility Guidelines: New behavioral dimensions (e.g., empathy, cultural bias) can be integrated by formalizing state variables, update rules, and augmenting LLM prompt templates, enabling principled expansion of social-cognitive modeling (Wang et al., 2023).

This suggests that HUMA research increasingly relies on modular, extensible GUIs and dashboards to manage operational complexity and facilitate both research and end-user objectives.

7. Societal Impact, Limitations, and Prospects

HUMA systems represent a convergence of AI, user-modelling, and human factors for collaborative, inclusive, and multi-party environments. Applications range from ethical domestic agents to scalable group chat facilitation and simulation for behavioral science.

Limitations include domain constraints (single household or art community), short evaluation windows, lack of modeling for individual temporal idiosyncrasies (e.g., personal typing cadence), and unaddressed adversarial scenarios such as manipulation or astroturfing (Jacniacki et al., 21 Nov 2025, Chandra et al., 21 Oct 2025). Long-term social dynamics and retention are largely unexplored.

Future research directions include:

  • Longitudinal studies on retention, fairness, and social capital in multi-user environments.
  • Cross-cultural adaptation and systematization of value alignment in globally heterogeneous groups.
  • Robustness to adversarial inputs, bias, and emergent group pathologies.

HUMA stands as a focal point for research into artificial social competence, plural-value negotiation, and adaptive cognition in agents acting amidst human collectives, as evidenced by empirical parity with or outperformance of human and multi-agent baselines in diverse interactive settings (Jacniacki et al., 21 Nov 2025, Chandra et al., 21 Oct 2025, Wang et al., 2023).
