Human–LLM Interaction Patterns
- Human–LLM interaction patterns are classified by taxonomies spanning collaboration modes, prompting strategies, and session structures.
- They adapt dynamically to variations in user expertise and model configurations, influencing productivity and learning outcomes.
- Empirical studies reveal that structured interaction patterns enhance performance metrics while also posing challenges in terms of cognitive load and system alignment.
LLMs have introduced intricate new dynamics into human–AI collaboration, with research uncovering a diverse array of interaction patterns that reflect both the operational properties of LLMs and the evolving strategies of human users. These patterns are not only a function of prompt design or task structure, but also of user expertise, model configuration, and the socio-technical context in which interactions unfold.
1. Taxonomies and Structural Typologies
Human–LLM interaction patterns are commonly categorized along several axes, including request type, prompting strategy, collaboration mode, and session structure. In programming tasks, four primary request-type patterns have been identified: learning/exploration (conceptual inquiry or tutorial), solution-oriented (code generation), error-correcting (debugging and fixing), and unrelated/social (off-topic conversational turns) (Etsenake et al., 2024). Each is accompanied by canonical prompting strategies, ranging from zero- and few-shot prompting and chain-of-thought prompting to multi-turn, stepwise, and rephrasing approaches.
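As a rough illustration, such a request-type taxonomy could be operationalized as a rule-based labeler over user turns. The keyword cues and function names below are invented for the sketch; the cited studies rely on human annotation of transcripts, not keyword matching:

```python
from enum import Enum

class RequestType(Enum):
    LEARNING = "learning/exploration"
    SOLUTION = "solution-oriented"
    ERROR_CORRECTING = "error-correcting"
    UNRELATED = "unrelated/social"

# Hypothetical keyword cues per category; a real coding scheme would use
# trained annotators or a fine-tuned classifier rather than substrings.
CUES = {
    RequestType.ERROR_CORRECTING: ("error", "traceback", "fix", "bug", "doesn't work"),
    RequestType.LEARNING: ("what is", "explain", "why does", "how does"),
    RequestType.SOLUTION: ("write", "implement", "generate", "create a function"),
}

def classify_request(turn: str) -> RequestType:
    """Label a user turn with the most plausible request type."""
    lowered = turn.lower()
    for rtype, cues in CUES.items():  # error cues are checked first
        if any(cue in lowered for cue in cues):
            return rtype
    return RequestType.UNRELATED
```

Checking error-correcting cues first reflects the observation that debugging requests often embed solution-like verbs ("fix this, then write a test") yet belong to the error-correcting pattern.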
A broader systematization clusters interaction patterns into modes reflecting the division of agency and creativity:
| Cluster Name | AI Role | Human Role |
|---|---|---|
| Processing Tool | Deterministic | Supervisory, Decision |
| Analysis Assistant | Synthesizing | Steering, Judgment |
| Creative Companion | Autonomous, Creative | Co-author |
| Processing Agent | Autonomous, Structured | Overseeing, Validator |
This typology delineates patterns from tool-like use to high-autonomy creative partnership, corroborated across diverse empirical studies (Li et al., 2024). In collaborative writing, for instance, empirical decompositions via principal component analysis reveal seven "PATHs", with dominant behaviors including request revision, text exploration, question posing, content adjustment, and feedback (Mysore et al., 21 May 2025).
Multi-turn coding collaborations show further structural distinctions: linear (sequential single path), star (branching from a central prompt), and tree (multi-branch recursive dependencies), each associated with specific task types and success characteristics (Zhang et al., 11 Dec 2025).
2. Dynamics of Interaction and Adaptation
Empirical studies reveal that human–LLM interaction patterns are inherently adaptive, shaped by both user learning and the stochastic, non-deterministic outputs of the models. For example, users progressively shift from simple directive prompts to more elaborate contextual or example-driven forms as they gain experience with LLMs; there is rapid convergence in individual prompting style across sessions, as traced in query segmentation and diversity analyses (Zhu et al., 2 Aug 2025).
Conversational alignment exhibits strong modality effects: in referential communication games, human–human and AI–AI pairs readily form conventions that optimize for brevity and consistency, whereas human–LLM pairs struggle to achieve similar alignment, even when surface-level features of LLM output are human-like. This persistent gap is attributed to a lack of shared interpretative bias in heterogeneous dyads and is not fully overcome by prompting manipulations (Jones et al., 9 Feb 2026).
Turn-taking, revision loops, and escalation strategies (such as providing additional context or requesting model self-verification) are widespread. Complex tasks induce longer iterative cycles, especially for error correction and multi-file debugging, and are affected by factors including prompt length, model temperature, and model size (Etsenake et al., 2024).
3. Cognitive and Performance Outcomes
The impact of human–LLM interaction patterns is quantitatively expressed across metrics for time productivity, learning outcome, correctness, code security, and cognitive load. LLM assistance generally reduces program completion time (up to 45%) and boosts learning outcomes in conceptual exploration (mean test Δscore = +12%), but can foster dependency and reduce manual recall of syntax (–8% in forced manual post-test) (Etsenake et al., 2024).
Performance effects are heterogeneous:
- Collaborative correctness improves in most studies (+17% unit test pass rates), but LLM use can degrade performance in tasks demanding deep domain expertise or integration across files.
- Code security and readability show mixed patterns, with some reduction in code smells (–20%) and increases in insecure patterns (+12%) (Etsenake et al., 2024).
- Satisfaction and instruction compliance in coding are sensitive to session structural patterns: tree-structured interactions correlate with the lowest compliance and satisfaction, while linear and star patterns are more robust (Zhang et al., 11 Dec 2025).
Cognitive engagement dimensions—constructive versus detrimental—are tractable through transcript segmentation. Balanced, constructive exploration correlates with superior solution quality, while uncritical exploitation signals a risk of "cognitive erosion" (Holstein et al., 3 Apr 2025).
4. Determinants: Human, Model, and Interactional Factors
Observed patterns are shaped by individual expertise, domain knowledge, model configuration, and the iterative prompt–response feedback loop. Novice users realize greater time savings but exhibit greater dependence and propensity to skip code review, while domain experts drive correctness through intensive vetting and contextualization (β₁ ≈ +0.25 tasks/hour per novice level for time savings; turn-taking frequency decreases with increasing domain familiarity) (Etsenake et al., 2024).
Model size and configuration (e.g., 20B vs. 6B) affect correctness (+10%) but at the cost of elevated hallucination rates and review time; lower generation temperatures reduce syntax error rates by 18% but constrain creative variation. Prompt length and the number of iterative turns correlate positively with correctness and quality (r ≈ +0.32 for length vs. correctness; ≥5 turns yields +12% solution quality) (Etsenake et al., 2024).
Community-wide prompt diversity and style shift dynamically with the introduction of new LLM versions, producing collective adaptation effects traceable through lexical and structural diversity indices (MTLD) (Zhu et al., 2 Aug 2025).
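MTLD (McCarthy and Jarvis's Measure of Textual Lexical Diversity) underlies such diversity indices. A simplified sketch follows: it measures the mean length of token runs that keep the type-token ratio above the conventional 0.72 threshold. The full measure averages a forward and a backward pass; this version keeps only the forward pass:

```python
def mtld_forward(tokens: list[str], ttr_threshold: float = 0.72) -> float:
    """One-directional MTLD: total tokens divided by the number of
    'factors', i.e. segments whose type-token ratio stays above
    `ttr_threshold`. Higher values indicate more lexical diversity."""
    factors = 0.0
    types: set[str] = set()
    count = 0
    for token in tokens:
        count += 1
        types.add(token.lower())
        ttr = len(types) / count
        if ttr <= ttr_threshold:   # segment exhausted: count a factor, reset
            factors += 1
            types.clear()
            count = 0
    if count > 0:                  # credit the unfinished partial factor
        ttr = len(types) / count
        factors += (1 - ttr) / (1 - ttr_threshold)
    if factors == 0:
        return float(len(tokens))  # fully diverse text never closed a factor
    return len(tokens) / factors
```

Tracking this index over a prompt corpus before and after a model release is one concrete way to observe the collective adaptation effects described above.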
5. Socio-Technical Role Archetypes and Collaborative Design
In domains such as decision support and open-ended problem solving, recurring "archetypes" codify the functional allocation between humans and LLMs. Seventeen distinct archetypes have been identified (e.g., Role Taker, Model, Communicator, Explainer, Knowledge Checker, Decision Scaffolder, Counterargument, Consensus Generator), each governing agency, autonomy, and control in decision pipelines (Chappidi et al., 12 Feb 2026). Clinical case studies confirm that even with identical underlying LLMs and tasks, choice of archetype can significantly alter prediction accuracy, agreement levels, and explanation style.
Design trade-offs inherent in archetype selection span autonomy (risk of automation bias), external versus internal knowledge reliance, cognitive easing versus forcing, social positioning (expert, peer, or subordinate), and group consensus mechanisms. Archetypes such as Decision Scaffolder and Implicit Reasoner are favored where human oversight and reflection are prioritized, while Model and Role Taker types support more automated or streamlined implementations (Chappidi et al., 12 Feb 2026).
6. Guidelines and Interface Patterns
Optimizing human–LLM interaction requires deliberate mapping of interaction types and patterns to interface designs and workflow structures. Strategies include:
- Matching request type to user objective: begin with exploratory prompts to establish domain clarity, then transition to solution-oriented prompts as certainty grows.
- Iterative elaboration over single, monolithic prompts, leveraging modular re-asking, paraphrasing, and task transformation (Etsenake et al., 2024).
- Interface affordances such as plan-and-preview cycles, granular edit and feedback features, clarification prompting, and hierarchical output structures, grounded in cooperative communication theory (Gricean maxims as reinterpreted for LLMs) (Kim et al., 2 Mar 2025).
- Role- and context-aware presetting (persona, output format) and dynamic context tracking (memory dashboards) that support user control over task framing and memory injection (Kim et al., 2 Mar 2025).
- Monitoring of cognitive engagement quality and activity mode, with real-time targeting of interventions to mitigate detrimental exploitation or prompt constructive reasoning (Holstein et al., 3 Apr 2025).
- In group or workflow settings, explicit selection and documentation of role archetypes, and assignment of decision control to prevent overreliance on or subordination to model-generated outputs (Chappidi et al., 12 Feb 2026).
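Several of these guidelines (plan-and-preview cycles, granular feedback, revision over restarting) compose into a single interaction loop. The class and state names below are an illustrative composite under stated assumptions, not a design taken from the cited papers; `llm` stands in for any callable that maps a prompt string to a response string:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class InteractionLoop:
    """Hypothetical plan -> preview -> feedback loop for an LLM interface."""
    history: list[tuple[str, str]] = field(default_factory=list)

    def submit(self, request: str, llm: Callable[[str], str]) -> str:
        # 1. Ask the model for a plan first, so the user can redirect early.
        plan = llm(f"Outline a plan before answering: {request}")
        self.history.append(("plan", plan))
        # 2. Generate the full response as a preview grounded in that plan.
        answer = llm(f"Execute this plan: {plan}")
        self.history.append(("preview", answer))
        return answer

    def refine(self, feedback: str, llm: Callable[[str], str]) -> str:
        # 3. Granular feedback revises the last preview instead of restarting.
        last = self.history[-1][1]
        revised = llm(f"Revise: {last}\nFeedback: {feedback}")
        self.history.append(("revision", revised))
        return revised
```

Keeping the plan, preview, and revisions in `history` also supports the memory-dashboard affordance: the user can inspect and prune what context the model is carrying forward.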
7. Open Challenges and Future Directions
Persistent challenges include achieving robust conversational alignment between humans and LLMs, especially in mixed-initiative or collaborative settings, where superficial mimicry of human style fails to produce genuine convention formation (Jones et al., 9 Feb 2026). The co-evolution of user prompting behavior with model upgrades raises new questions in community adaptation, prompt diversity, and collective learning (Zhu et al., 2 Aug 2025).
Designing interaction protocols and benchmarking tools that accommodate the spectrum from deterministic tool-like use to fully autonomous creative companionship remains a priority, especially as user expectations, interface modalities, and LLM architectures continue to evolve.
Emerging research agendas center on:
- Measurement and enhancement of creative co-production and semantic exploration in collaborative settings (Fundal et al., 18 Dec 2025).
- Expanded frameworks for analyzing decision-making dynamics with archetype-specific effect estimation (Chappidi et al., 12 Feb 2026).
- Systematic, longitudinal studies of real-world human–LLM ecosystems to chart the ongoing mutual adaptation of human strategies and LLM capabilities.