
Human–LLM Interaction Patterns

Updated 3 December 2025
  • Human–LLM Interaction Patterns are defined by distinct communication and collaboration methods that combine exploratory and exploitative cognitive engagement.
  • They evolve from terse, goal-oriented queries to context-rich requests, reflecting quantifiable user adaptation across text, voice, and embodied interfaces.
  • Empirical insights drive design guidelines that optimize nonverbal cues, conversation flow, failure recovery, and multi-agent synergy for robust interactions.

Human–LLM interaction patterns describe the distinctive ways in which humans collaborate, communicate, and co-create with LLMs across a spectrum of tasks and embodiments. These patterns are shaped by cognitive modes, interaction workflows, user adaptation strategies, linguistic and nonverbal expectations, the allocation of initiative between human and model, and the dynamics of synergy and breakdown. Research spanning robotics, HCI, cognitive augmentation, programming, negotiation, writing assistance, and voice-based agents reveals consistent, quantifiable trends in engagement style, conversational flow, failure recovery, and the evolution of user behavior around LLM capabilities.

1. Cognitive and Interaction Dimensions

Foundational frameworks organize human–LLM interactions along at least two axes: cognitive activity (exploration vs. exploitation) and cognitive engagement (constructive vs. detrimental) (Holstein et al., 3 Apr 2025). Exploration is characterized by divergent questioning (“what if,” “give alternatives”), while exploitation involves refining or implementing a known solution. Constructive engagement entails critical integration, challenge of model outputs, and explicit connection to domain knowledge, whereas detrimental interaction is typified by passive acceptance or superficial refinement.

Formally, this yields four empirically distinct quadrants:

  • Constructive Exploration: Generating and critically refining novel ideas.
  • Constructive Exploitation: Deepening or implementing a chosen approach with active oversight.
  • Detrimental Exploration: Unfocused brainstorming, minimal scrutiny.
  • Detrimental Exploitation: Outsourcing problem-solving, uncritical prompt acceptance.

Quantitative session analysis defines the segment-wise exploration rate $E = (1/N)\sum_i e_i$ and constructive engagement rate $C = (1/N)\sum_i c_i$, where $e_i$ and $c_i$ are binary exploration and constructive-engagement labels over $N$ session segments.
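These two rates can be computed directly from labeled session segments. A minimal sketch (field names and labels are illustrative, not the paper's annotation scheme):

```python
# Compute segment-wise exploration rate E = (1/N) * sum_i e_i and
# constructive engagement rate C = (1/N) * sum_i c_i from binary labels.

def session_rates(segments):
    """segments: list of dicts with binary labels
    'explore' (e_i) and 'constructive' (c_i)."""
    n = len(segments)
    if n == 0:
        raise ValueError("empty session")
    e_rate = sum(s["explore"] for s in segments) / n
    c_rate = sum(s["constructive"] for s in segments) / n
    return e_rate, c_rate

# Toy session: 3 of 4 segments exploratory, 2 of 4 constructive.
segments = [
    {"explore": 1, "constructive": 1},
    {"explore": 1, "constructive": 0},
    {"explore": 1, "constructive": 1},
    {"explore": 0, "constructive": 0},
]
E, C = session_rates(segments)
# E = 0.75, C = 0.5
```

A session's trajectory through the four quadrants can then be read off by thresholding E and C per time window.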

2. Taxonomies and Evolution of Request Patterns

Request-making evolves from terse, goal-only ([R]) imperatives toward rich, contextually embedded ([R]+[C]) queries, increasingly leveraging role assignments ([role]) and expressions (EXP) (Zhu et al., 2 Aug 2025). Segmenting chat queries reveals user lifecycle phases—exploration (R-only, 45% of initial dialogs), context enrichment ([R]+[C], rising from 15% to 35% over 20 dialogs), and convergence toward habitual structure (per-utterance embedding distance $d_i^{(1)}$ drops by 0.20).

Table: Four Segments in User Requests (from (Zhu et al., 2 Aug 2025))

| Segment | Definition | Example/Role |
| --- | --- | --- |
| Request ([R]) | Minimal task/goal | "Summarize this article…" |
| Context ([C]) | Input data/background | Additional paragraphs, tables |
| Role ([role]) | Assigned persona/expertise | "Act as editor…" |
| Expression (EXP) | Polite/conventional markers | "Please," "Hi," "Thanks" |

Major LLM upgrades trigger temporary increases in diversity and query length, with community-wide re-convergence on “good practices.”
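The four-segment taxonomy lends itself to simple automatic tagging. The sketch below is a naive keyword heuristic for illustration only; the marker lists and patterns are assumptions, not the paper's annotation procedure:

```python
# Heuristic tagger for the four request segments: [R], [C], role, EXP.
import re

EXP_MARKERS = {"please", "hi", "hello", "thanks", "thank you"}
ROLE_PATTERN = re.compile(r"\b(act as|you are a|as an?)\b", re.IGNORECASE)

def tag_segments(query: str) -> set:
    tags = set()
    lowered = query.lower()
    # Expression (EXP): polite/conventional markers.
    if any(m in lowered for m in EXP_MARKERS):
        tags.add("EXP")
    # Role: explicit persona assignment.
    if ROLE_PATTERN.search(query):
        tags.add("role")
    # Every query carries at least a core request [R].
    tags.add("R")
    # Very rough proxy for context [C]: multi-line or long queries.
    if "\n" in query or len(query.split()) > 30:
        tags.add("C")
    return tags

tags = tag_segments("Please act as an editor and summarize this article.")
# tags includes "R", "role", and "EXP"
```

Tagging a user's query stream this way makes lifecycle trends (e.g., the rise of [R]+[C] queries) measurable per dialog.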

3. Task-Specific Interaction Patterns and Embodiment Effects

Empirical studies expose how interaction patterns are strongly modulated by task and embodiment (Kim et al., 6 Jan 2024). For a robot powered by GPT-3.5 (vs. text/voice agents), observed patterns include:

  • Execute (stepwise instruction): Shortest inputs, lowest failure rate, highest satisfaction. Multimodal multitasking and dialog scaffolding are enhanced by robot gaze/head tilt.
  • Negotiate (multi-turn bargaining): Moderate input length/failure; physical presence supports rapport, while text embodiment feels transactional.
  • Choose (checklist selection): Longest text inputs, higher failures in voice/robot, frustrated by logical inconsistency and verbose detours.
  • Generate (collaborative storywriting): Highest input length, maximum failure counts in voice/robot, social pressure and interruption reduce creative depth.

Mismatch between sophisticated language output and impoverished nonverbal cues reduces trust (“creepy” impression), especially when conversational intelligence is not synchronized with physical action. Robot-embodied agents set higher user expectations for synchronized gestures, timing, and expressiveness.

4. Linguistic Divergence, Style Adaptation, and Interface Strategies

Users adapt communication style according to the recipient (LLM vs. human) (Zhang et al., 3 Oct 2025), exhibiting:

  • Lower grammatical fluency, politeness, and lexical diversity in initial turns addressed to LLMs than to humans (ΔG = +5.3%, ΔP = +14.5%, ΔL = +1.4% in favor of human recipients; all p < 0.001).
  • No significant differences in informativeness, clarity, or emotion intensity.

Adaptation strategies for improved robustness include training on stylistically diverse datasets (D₄ = D₁∪D₂∪D₃ yields +2.9% intent accuracy), outperforming inference-time message reformulation. Continuous post-launch log collection and synthetic style variation are necessary as user mental models and style preference drift.
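The dataset-union strategy (D₄ = D₁∪D₂∪D₃) amounts to pooling stylistically distinct example sets before training. A hedged sketch with made-up data and field layout:

```python
# Pool stylistically diverse (text, intent) training sets, deduplicating
# on text, before fine-tuning an intent classifier on the union.

def pool_datasets(*datasets):
    """Union of (text, intent) examples, deduplicated on text."""
    seen, pooled = set(), []
    for ds in datasets:
        for text, intent in ds:
            if text not in seen:
                seen.add(text)
                pooled.append((text, intent))
    return pooled

d1 = [("book a flight", "travel")]                        # terse style
d2 = [("could you please book a flight?", "travel")]      # polite style
d3 = [("book a flight", "travel"),
      ("i need tickets", "travel")]                       # colloquial style
d4 = pool_datasets(d1, d2, d3)
# len(d4) == 3 (the duplicate "book a flight" is dropped)
```

The point of the union is that the classifier sees the same intent expressed in several registers, rather than relying on reformulating user messages at inference time.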

5. Conversation Flow, Breakdown Recovery, and Nonverbal Dynamics

In voice-driven settings, human–LLM interaction patterns span opening/closing signals, robust factual Q→A flows (76% of turns), egocentric (second-person) response framing, wait-state management (filler in 76%; small talk in 5.4%), and task-modulated response attributes (Mahmood et al., 2023; Chan et al., 29 Aug 2024). LLM-powered VAs absorb 81% of intent recognition failures due to context retention.

A three-stage analytical framework maps behavioral characteristics—verbal (queries, clarification, commands) and nonverbal (eye contact, gesture, tone change, spatial behavior)—onto interaction stages: exploration, conflict, and integration. Conflict triggers nonverbal elevation (eye contact, gestures, increased volume), with transitions to integration via reframed queries or repeated system awakening. Regression is rare but possible following unexpected errors.
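The stage progression can be pictured as a small state machine. The transition rules below are an illustrative assumption based on the description above, not the framework's formal specification:

```python
# Minimal state-machine sketch of the exploration -> conflict ->
# integration stages, with rare regression on unexpected errors.

TRANSITIONS = {
    ("exploration", "breakdown"): "conflict",
    ("conflict", "reframed_query"): "integration",
    ("conflict", "system_awakening"): "integration",
    ("integration", "unexpected_error"): "conflict",  # rare regression
}

def next_stage(stage: str, event: str) -> str:
    """Return the next interaction stage; unknown events keep the stage."""
    return TRANSITIONS.get((stage, event), stage)

stage = "exploration"
for event in ["breakdown", "reframed_query"]:
    stage = next_stage(stage, event)
# stage == "integration"
```

Mapping observed verbal and nonverbal cues onto such events would let a session log be replayed as a stage trajectory.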

6. Collaboration Patterns, Synergy, and Multi-Agent Dynamics

Systematic mapping procedures (k-means clustering on collaboration intensity and creative contribution) reveal four principal interaction categories (Li et al., 6 Apr 2024):

  • Processing Tool (human-led, low creativity)
  • Analysis Assistant (opinion/suggestion, moderate creativity)
  • Creative Companion (co-equal collaboration, high creativity)
  • Processing Agent (AI-driven automation, supervisor role)
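The mapping procedure clusters sessions in a two-dimensional space of collaboration intensity and creative contribution. A tiny pure-Python k-means sketch on made-up coordinates (the data, seed, and k are illustrative, not the study's):

```python
# Cluster sessions by (collaboration intensity, creative contribution)
# with a minimal 2-D k-means: assign to nearest center, recompute means.
import random

def kmeans(points, k, iters=50, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest center (squared distance).
        labels = [min(range(k),
                      key=lambda j: (p[0] - centers[j][0]) ** 2 +
                                    (p[1] - centers[j][1]) ** 2)
                  for p in points]
        # Recompute each center as its cluster's mean.
        for j in range(k):
            cluster = [p for p, l in zip(points, labels) if l == j]
            if cluster:
                centers[j] = (sum(p[0] for p in cluster) / len(cluster),
                              sum(p[1] for p in cluster) / len(cluster))
    return labels, centers

# Two obvious groups: low/low (processing-tool-like sessions) vs.
# high/high (creative-companion-like sessions).
pts = [(0.1, 0.1), (0.2, 0.15), (0.9, 0.85), (0.95, 0.9)]
labels, _ = kmeans(pts, k=2)
# The first two points share one label; the last two share the other.
```

With k = 4 on real session features, the resulting clusters correspond to the four interaction categories above.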

In multi-agent deliberation (Sheffer et al., 15 Jun 2025), human–LLM pairs and human trios consistently achieve accuracy improvement post-discussion (Δ_acc of +5.9pp and +28.6pp, respectively), surpassing the pre-discussion best performer. This is attributed to knowledge diversity ($D = 7.3\%$ in mixed pairs), enabling confidence-guided answer switching and metacognitive synergy. Pure LLM groups show little improvement due to homogenized knowledge states.

7. Design Implications and Guidelines

Across these studies, consistent design recommendations emerge: mixing human initiative and agent autonomy, scaffolding critical review, explicitly managing task and completion state, and maintaining transparency. These implications are domain-transferable and pivotal to maximizing human cognition, trust, and collective output quality in human–LLM interaction.
