Human-AI Interactions: Cognitive, Behavioral, and Emotional Impacts (2510.17753v2)
Abstract: As stories of human-AI interactions continue to be highlighted in the news and research platforms, the challenges are becoming more pronounced, including potential risks of overreliance, cognitive offloading, social and emotional manipulation, and the nuanced degradation of human agency and judgment. This paper surveys recent research on these issues through the lens of the psychological triad: cognition, behavior, and emotion. Observations seem to suggest that while AI can substantially enhance memory, creativity, and engagement, it also introduces risks such as diminished critical thinking, skill erosion, and increased anxiety. Emotional outcomes are similarly mixed, with AI systems showing promise for support and stress reduction, but raising concerns about dependency, inappropriate attachments, and ethical oversight. This paper aims to underscore the need for responsible and context-aware AI design, highlighting gaps for longitudinal research and grounded evaluation frameworks to balance benefits with emerging human-centric risks.
Explain it Like I'm 14
Overview
This paper looks at how interacting with AI affects people’s minds, actions, and feelings. The authors review recent studies to show both the good and bad sides of AI in everyday life—like helping us learn, make decisions, or manage stress—while warning about risks such as overreliance on AI, weaker critical thinking, and emotional manipulation. Their goal is to guide safer, more responsible AI design and encourage long-term research that keeps humans in control.
Key Objectives
The paper explores simple but important questions:
- How does using AI change the way we think, learn, and create?
- How does AI influence our behavior—our habits, choices, and sense of control?
- How does AI affect emotions—like anxiety, loneliness, or confidence?
- How can designers make AI that helps people without causing harm, especially for kids, older adults, and those with disabilities?
Methods and Approach
This is a survey paper, which means the authors didn’t run one big experiment. Instead, they collected and summarized many studies to spot patterns.
To organize the review, they used three helpful ideas:
- Bloom’s Taxonomy (for thinking): Picture a staircase of mental skills—Remember, Understand, Apply, Analyze, Evaluate, Create. They explain how AI can help or harm at each step.
- The I-PACE model (for behavior): Think of a loop where your personal traits (who you are), feelings (how you react), thoughts (how you judge things), and actions (what you do) interact with technology. This helps explain why AI sometimes improves habits and other times leads to overreliance (a minimal sketch of this loop follows the list).
- Affective and conversational AI (for emotions): This is technology that notices or responds to feelings (like chatbots that offer support). The authors explain when this helps and when it can be risky.
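To make the I-PACE loop above concrete, here is a toy Python sketch of repeated passes through the cycle. The class, field names, and update rules are illustrative assumptions, not a model from the paper; they only show how affect, cognition, and habitual execution can feed back on one another across repeated AI interactions.

```python
from dataclasses import dataclass

@dataclass
class IPaceState:
    """Illustrative state for one pass through the I-PACE cycle (not from the paper)."""
    person: dict       # stable traits, e.g. {"impulsivity": 0.4}
    affect: float      # emotional response to the current AI cue, 0..1
    cognition: float   # how critically the user appraises the AI output, 0..1
    execution: float   # degree of reliance acted out in this interaction, 0..1

def ipace_step(state: IPaceState, ai_cue_strength: float) -> IPaceState:
    """One toy iteration: a salient AI cue raises affect, strong affect crowds out
    critical appraisal, and weak appraisal plus prior habit raises reliance."""
    affect = min(1.0, state.affect + 0.5 * ai_cue_strength)
    cognition = max(0.0, state.cognition - 0.3 * affect)
    execution = min(1.0, 0.5 * state.execution + 0.5 * (1.0 - cognition))
    return IPaceState(state.person, affect, cognition, execution)

state = IPaceState(person={"impulsivity": 0.4}, affect=0.2, cognition=0.8, execution=0.1)
for _ in range(5):  # repeated exposure strengthens the habit loop
    state = ipace_step(state, ai_cue_strength=0.7)
print(round(state.execution, 2), round(state.cognition, 2))  # reliance drifts up, appraisal drifts down
```

Running the loop a few times shows the dynamic the authors describe: without anything changing in the person, repeated salient cues can shift the balance from deliberate appraisal toward habitual reliance.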
Main Findings
Cognitive (Thinking) Impacts
- Remember (Memory): AI can boost memory by giving practice questions and quick feedback, like a smart study buddy. But if you let AI do the thinking for you too often, your memory can weaken because you don’t deeply learn the material.
- Understand (Comprehension): Simplified AI summaries can make hard texts easier to grasp. However, overly simple summaries can lose important context, leading to misunderstandings.
- Apply (Using Skills): AI tutors and learning tools can help you practice skills in realistic ways. But depending on AI to solve problems for you can cause skill erosion—you get worse at doing it yourself.
- Analyze (Critical Thinking): Chatbots can push you to explain your reasoning, which strengthens analysis. On the flip side, people may “offload” thinking to AI and stop engaging deeply.
- Evaluate (Judgment): AI can surface hidden patterns (like spotting subtle medical issues), improving decisions. Yet heavy reliance on AI can weaken your own judgment, and because AI can sound convincingly “human,” you might accept answers without questioning them.
- Create (Creativity): AI can spark new ideas and encourage brainstorming. But it can also lead to sameness—designs and stories may start to look alike because AI pulls from common datasets, causing “design fixation” (copying familiar examples instead of inventing fresh ones).
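The six levels above can be condensed into a small lookup table for quick reference. This is a paraphrase of the survey's findings as summarized here; the wording is ours, not the paper's.

```python
# Compact summary of the Bloom's-Taxonomy findings listed above (paraphrased).
bloom_ai_effects = {
    "Remember":   {"benefit": "retrieval practice with instant feedback",
                   "risk": "weaker memory when AI does the recalling"},
    "Understand": {"benefit": "simplified summaries of hard texts",
                   "risk": "lost context in over-simplified summaries"},
    "Apply":      {"benefit": "realistic practice via AI tutors",
                   "risk": "skill erosion from outsourced problem solving"},
    "Analyze":    {"benefit": "prompts that push users to explain their reasoning",
                   "risk": "cognitive offloading and shallow engagement"},
    "Evaluate":   {"benefit": "surfacing patterns humans miss",
                   "risk": "uncritical acceptance of convincing AI output"},
    "Create":     {"benefit": "broader brainstorming and fresh ideas",
                   "risk": "design fixation and homogenized output"},
}

for level, effects in bloom_ai_effects.items():
    print(f"{level}: + {effects['benefit']} / - {effects['risk']}")
```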
Special Populations:
- Older adults: AI tools (like social robots or VR) can help with some memory and thinking tasks, but most results are short-term and need more research.
- Students with disabilities: AI can help with reading, organizing, and writing. However, it may increase dependency, raising tough questions like: do we still practice spelling if AI always fixes it?
Behavioral (Action) Impacts
Using the I-PACE model, the authors found:
Positive outcomes:
- Personalized learning and health apps can nudge good habits (studying better, eating healthier, managing stress), acting like a coach that adjusts to your needs.
Negative outcomes:
- Overreliance: In high-stakes settings, too much trust in automation can reduce human attention and readiness to act when things go wrong.
- Attention shaping: Social media algorithms can steer what you see and how you feel, creating strong habits or biases over time. “Dark patterns” in interfaces can trick users into choices they didn’t mean to make.
Mixed outcomes:
- Human vs. AI agency: Who’s in control—you or the AI? The balance matters.
- Personalization vs. customization: Personalization (AI decides for you) can subtly shape your preferences. Customization (you choose settings) keeps you engaged and in control.
- Proactive vs. reactive AI: Proactive AI acts on its own and may feel intrusive. Reactive AI waits for your instruction and often builds trust and healthy use.
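As a rough illustration of how these agency-preserving choices could show up in product code, here is a minimal sketch built around a hypothetical settings object. The field names and the confirmation rule are assumptions for illustration, not an API described in the paper.

```python
from dataclasses import dataclass

@dataclass
class AssistantSettings:
    """Hypothetical defaults illustrating the agency-preserving choices above."""
    mode: str = "reactive"                     # act only when asked, not proactively
    content_selection: str = "customization"   # user-chosen topics, not algorithmic personalization
    disclose_ai_involvement: bool = True       # label AI-generated content
    allow_dark_patterns: bool = False          # no manipulative defaults or nags

def requires_user_confirmation(settings: AssistantSettings, action: str) -> bool:
    """In reactive mode, any autonomous action needs explicit user approval."""
    return settings.mode == "reactive" and action.startswith("autonomous_")

defaults = AssistantSettings()
print(requires_user_confirmation(defaults, "autonomous_feed_reorder"))  # True
```

The design point is that the user-protective behavior lives in the defaults: a product team would have to deliberately opt out of them to ship a proactive, personalization-driven experience.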
Emotional (Feeling) Impacts
Positive outcomes:
- Some conversational AIs reduce depression symptoms in certain youth groups and help people feel better after venting.
- AI can help prevent burnout and manage stress—for example, by monitoring work conditions or athlete strain and stepping in early.
- Screening tools can spot loneliness, anxiety, or depression by analyzing speech and facial cues, potentially guiding people to support sooner.
Negative outcomes:
- Anticipatory anxiety: Many workers (especially younger ones) worry that AI will replace their jobs. People also fear emotional surveillance at work.
- Manipulation and dependence: Chatbots can be designed to feel “human,” leading users to overshare or depend on them. Some systems fail to block harmful advice consistently, which is dangerous.
- Interference with development: For children and teens, AI companions can disrupt learning healthy relationships with real people. If a child bonds more with a chatbot than peers or caregivers, normal social growth may be affected.
- “AI guilt”: Students sometimes feel guilty or anxious about using AI, worrying it’s “cheating,” unfair, or harming their own growth—especially in creative work.
Mixed and ongoing questions:
- Mental health efficacy: AI assistance can help, but it often doesn’t beat human-delivered care, and results depend on symptom type and severity.
- Trust and engagement: People may disclose more to text-based AI than voice, and highly human-like robots can trigger the “uncanny valley” (they feel creepy, reducing trust). If you suspect someone used AI to write to you, you might trust them less—even if the message sounds nicer.
Why This Matters
This research shows that AI is powerful—but how we use and design it changes whether it helps or harms. It can boost learning, creativity, and health, yet it can also reduce critical thinking, steer our choices, and create emotional risks. The authors argue we need careful, human-centered design and long-term studies to protect agency, judgment, and well-being.
To make AI safer and more helpful, the paper suggests:
- Keep humans in control: Favor customization and reactive AI in sensitive areas, and avoid manipulative “dark patterns.”
- Support real learning: Use AI as training wheels—not a substitute. Encourage practice, explanation, and independent problem-solving.
- Be transparent: Make it clear when AI is involved and how it works, so people can question outputs and make informed choices.
- Protect vulnerable groups: Set strong safeguards for children, teens, and at-risk users; design for privacy and age-appropriate use.
- Test like healthcare tools: Evaluate AI for mental health and education with rigorous, long-term studies before wide deployment.
In short, AI can be a great tool—like a smart coach or helpful assistant—if we design and use it wisely, keep people in charge, and watch closely for long-term effects.
Knowledge Gaps
Below is a consolidated list of unresolved issues the paper identifies or implies, focusing on what is missing, uncertain, or left unexplored, framed to guide future research.
- Lack of longitudinal, ecologically valid studies that track cognitive, behavioral, and emotional impacts of AI over months/years across home, school, and workplace settings.
- Overemphasis on short-term lab or classroom experiments that isolate single cognitive operations (e.g., recall, comprehension) without modeling how multiple cognitive processes interact in real tasks.
- No standardized metrics or benchmarks to operationalize “overreliance,” “skill degradation,” “agency,” “cognitive offloading,” “design fixation,” and “AI guilt” across studies and domains.
- Insufficient causal evidence distinguishing AI’s short-term performance gains (e.g., test scores with GPT support) from long-term learning, transfer, and retention.
- Unclear dose–response relationships: how frequency, intensity, and context of AI use differentiate scaffolding benefits from dependency and atrophy across the Bloom taxonomy.
- Limited guidance on designing AI tools that maximize retrieval practice and critical thinking while minimizing shallow encoding and passive consumption.
- No systematic evaluation of hallucination and summary loss-of-context effects on comprehension and knowledge calibration for learners with different baseline skills.
- Unresolved trade-offs between personalization (system-driven) and customization (user-driven) on user agency, inhibitory control, and long-term self-regulation.
- Unclear conditions under which proactive AI (autonomous actions) helps or harms users’ executive control compared with reactive AI (user-invoked) in education and productivity tools.
- Sparse evidence on interventions that counter design fixation and homogenization of outputs when users co-create with generative models (e.g., prompts, constraints, diverse exemplars).
- Unknown domain and expertise moderators for creativity effects: how novices vs. experts, visual vs. textual tasks, and task complexity alter AI’s impact on originality and diversity.
- Lack of robust, pre-registered randomized trials comparing AI-augmented decision-making to expert-only and human-in-the-loop baselines in high-stakes domains (medicine, aviation, engineering).
- Insufficient quantification of AI-Chatbot-Induced Cognitive Atrophy (AICICA): prevalence, trajectories, early warning indicators, and reversibility via targeted training.
- Underdeveloped design patterns for “agency-preserving” interfaces (e.g., evidence exposure, justification prompts, adjustable autonomy, uncertainty displays) and their empirical validation.
- Limited external validity of recommender system findings (e.g., social media, YouTube) and weak causal links between exposure pathways, attention shaping, and long-term attitudes or mental health.
- Minimal measurement of the impact and prevalence of interface “dark patterns” in AI-mediated systems and effective countermeasures or policy standards to prevent them.
- Incomplete empirical validation of applying the I-PACE framework to general human–AI interaction: operational definitions, measurement of executive control changes, and predictive validity.
- Fragmented, short-term evidence for geriatric AI interventions: no large, diverse trials clarifying which cognitive domains benefit, for whom, and with what durability or side effects.
- Limited, mixed evidence for learners with disabilities on net skill development vs. dependency; absence of curriculum-level guidelines to preserve core competencies while using AI supports.
- Severe gaps in developmental research for children and pre-teens on conversational AI: long-term effects on attachment formation, theory of mind, social skills, and reality–fantasy boundaries.
- No safety benchmarks and auditing protocols tailored to minors for LLM risk detection (e.g., self-harm, age-inappropriate guidance) including measurable false-negative rates and red-team standards.
- Lack of validated, cross-cultural instruments to assess anticipatory anxiety about AI (e.g., job loss, surveillance) and organizational interventions that measurably reduce it.
- Inadequate evidence on how AI-mediated communication (e.g., email autocomplete, AI drafting) reshapes interpersonal trust, disclosure norms, and team cohesion; no tested disclosure designs that mitigate trust erosion.
- Nonlinear effects of anthropomorphism remain under-specified: need parametric studies that map modality (text/voice/embodied), human-likeness levels, and “uncanny valley” thresholds to trust, empathy, and compliance.
- Contradictory findings on modality suggest design-relevant gaps (e.g., text driving more self-disclosure vs. voice producing social errors); missing guidelines on matching modality to context and user traits.
- Mental health efficacy remains uncertain: conversational AI often does not outperform active controls; moderators (symptom severity, subtype, comorbidity, user goals) and adverse effects are understudied.
- Screening tools show uneven validity (e.g., CSWT vs. emoLDAnet); need standardized validation pipelines (reliability, convergent validity, clinical utility, subgroup fairness) prior to real-world deployment.
- Sparse pediatric mental health trials (small samples, adolescent-heavy), with limited evidence for pre-teens; missing developmental tailoring and safety monitoring for AI interventions in youth.
- Unclear governance for rapid-to-market AI mental health tools relative to drug/device standards: pre-market evidence thresholds, post-market surveillance, and harm reporting protocols are undefined.
- Privacy and data monetization risks from over-disclosure to chatbots lack quantified downstream harms (e.g., re-identification, profiling) and effective user-facing safeguards for minors and vulnerable users.
- Equity gaps are largely unmeasured: which groups benefit or are harmed (by SES, language, disability, culture), how digital literacy moderates outcomes, and which training mitigations work at scale.
- Cultural and linguistic generalizability is uncertain: many studies are Western, high-resource, and English-centric; cross-cultural replication and localization strategies are missing.
- Academic integrity and skill certification concerns remain unresolved: how to assess learning and competence fairly when AI assistance is variably available or undisclosed.
- Organizational design questions remain open: how job redesign, training, incident simulations, and automation transparency can maintain human readiness and prevent overdependence in safety-critical contexts.
- Conceptualization and measurement of AI guilt are nascent: need validated scales, cross-cultural norms, causal models linking guilt to behavior, and interventions that reduce maladaptive guilt without eroding ethics.
- Missing multi-level models that integrate cognition–behavior–emotion dynamics to predict trajectories (e.g., when cognitive offloading leads to emotional dependence and behavioral habit formation).
- Lack of open datasets and shared protocols for evaluating human impacts (e.g., critical thinking tasks with/without AI, creativity diversity benchmarks, agency-preservation metrics), hindering replication and comparison.
- Incomplete reporting and selection biases across the literature (short-term positive results, narrow settings) limit meta-analytic synthesis; pre-registration and standardized reporting are needed.
Practical Applications
Below is a structured inventory of practical, real-world applications derived from the paper’s findings, methods, and frameworks. Each item includes sectors, potential tools/products/workflows, and assumptions or dependencies that affect feasibility.
Immediate Applications
These applications can be deployed now with appropriate safeguards and oversight.
- AI-enhanced retrieval practice in education
  - Sectors: Education
  - Tools/products/workflows: LLM-generated question banks (e.g., ChatGPT) integrated into LMS; spaced retrieval and immediate feedback; teacher-curated prompts; formative assessment aligned to Bloom's "Remember" (a minimal spaced-retrieval sketch follows this list)
  - Assumptions/dependencies: Model accuracy and alignment; educator oversight; academic integrity policies; data privacy in student interactions
- Text simplification and personalized reading to improve comprehension
  - Sectors: Education, Accessibility
  - Tools/products/workflows: ReadTheory-like platforms; LLM-assisted text simplification with links to original sources; adaptive reading levels; dual-source reading (AI summary + original text)
  - Assumptions/dependencies: Avoid loss of contextual nuance; quality control of simplifications; bias and accuracy checks; accessibility compliance
- Intelligent tutoring and reasoning prompts to scaffold analysis
  - Sectors: Education
  - Tools/products/workflows: ChatTutor-style instructional chatbots; "think-aloud" prompting; rubric-embedded AI coaching aligned to Bloom's "Analyze"
  - Assumptions/dependencies: Prompts that require active reasoning (not answer outsourcing); teacher monitoring; content alignment with curriculum
- Decision support in clinical workflows
  - Sectors: Healthcare
  - Tools/products/workflows: AI-aided radiology detection systems (e.g., CAD for mammography); clinician-in-the-loop triage; EHR summarization to reduce administrative burden
  - Assumptions/dependencies: Regulatory clearance (e.g., FDA/CE); robust validation; liability and audit trails; integration with hospital IT; continuous performance monitoring
- AI-driven customer support with affective intent recognition
  - Sectors: Software/SaaS, Customer Support
  - Tools/products/workflows: Conversational AI handling routine inquiries; sentiment detection for escalation; human escalation paths; "reactive" agent behaviors over "proactive" nudges
  - Assumptions/dependencies: Clear boundaries on data collection; avoidance of manipulative patterns; transparency notices; opt-in consent
- Personalized behavior-change nudges for health and wellbeing
  - Sectors: Digital Health, Wellness
  - Tools/products/workflows: Lark-like real-time dietary/exercise feedback; Carrot Rewards micro-incentives; habit formation via small daily actions
  - Assumptions/dependencies: Evidence-based protocols; user consent; interoperable wearables; effectiveness monitoring to avoid overdependence
- Conversational AI for subclinical mental health support
  - Sectors: Mental Health, Public Health
  - Tools/products/workflows: Woebot-like CBT micro-interventions; chatbot-assisted journaling for negative affect reduction; stepped-care models with referral pathways
  - Assumptions/dependencies: Clear scope and disclaimers (not a replacement for clinicians); crisis routing; age-appropriate use; ongoing efficacy evaluation
- Stress and burnout monitoring in high-strain roles
  - Sectors: Occupational Health, Public Sector, Sports
  - Tools/products/workflows: ML models tracking environmental conditions (heat, humidity, pollution) for public health inspectors; kinematic stress detection for surgeons; team wearables in sports with psychoeducation
  - Assumptions/dependencies: Privacy-preserving telemetry; consent and governance; risk communication without surveillance harm; validated thresholds to avoid false positives
- Socially assistive robots for older adults
  - Sectors: Healthcare, Robotics, Elder Care
  - Tools/products/workflows: Humanoid assistive robots (e.g., Sil-bot); AI phone interventions; VR-AI memory tasks with conversational companions
  - Assumptions/dependencies: Safety and hygiene protocols; caregiver integration; cultural acceptance; short-term benefits don't imply long-term gains, so monitor for dependency
- Accessibility supports for learners with disabilities
  - Sectors: Education, Accessibility
  - Tools/products/workflows: LLMs aiding summarization, outlining, and structuring; multimodal assistance for vision/hearing impairments; configurable supports for ADHD/dyslexia
  - Assumptions/dependencies: Avoid skill erosion with "practice-first, assist-second" workflows; institutional accommodations policies; secure data handling
- Agency-preserving product design using I-PACE and HAII
  - Sectors: Software/Product Design, Social Media
  - Tools/products/workflows: Defaults favoring customization (user-driven choices) over personalization (algorithm-driven); reactive AI modes; clear controls to manage feeds and recommendations; dark-pattern audits
  - Assumptions/dependencies: Usability testing; transparent tradeoffs; measurable agency metrics; compliance with platform governance
- Workplace communication policies addressing AI use and trust
  - Sectors: Enterprise, HR
  - Tools/products/workflows: Email clients that disclose AI assistance; guidelines to prevent trust erosion; opt-in use of auto-responses for low-stakes communication
  - Assumptions/dependencies: Cultural norms; change management; guardrails on sensitive communications; legal review of disclosures
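Returning to the first item in this list (AI-enhanced retrieval practice), here is a minimal sketch of the spaced-retrieval workflow it names. It assumes a simple Leitner-style schedule and a hypothetical draft_question stub standing in for a teacher-reviewed LLM call; the intervals and names are illustrative, not from the paper.

```python
import datetime as dt
from dataclasses import dataclass, field

# Hypothetical stand-in for an LLM call that drafts a practice question;
# in a real deployment a teacher would review the output before use.
def draft_question(topic: str) -> str:
    return f"Explain the key idea of '{topic}' in your own words."

@dataclass
class Card:
    topic: str
    box: int = 1                                   # Leitner box 1..3
    due: dt.date = field(default_factory=dt.date.today)

# Simple Leitner spacing: correct answers move a card to a later box
# (longer interval), misses send it back to box 1 for tomorrow.
INTERVAL_DAYS = {1: 1, 2: 3, 3: 7}

def review(card: Card, answered_correctly: bool, today: dt.date) -> None:
    card.box = min(3, card.box + 1) if answered_correctly else 1
    card.due = today + dt.timedelta(days=INTERVAL_DAYS[card.box])

deck = [Card("retrieval practice"), Card("cognitive offloading")]
today = dt.date.today()
for card in deck:
    if card.due <= today:
        print(draft_question(card.topic))
        review(card, answered_correctly=True, today=today)
        print(f"next review of '{card.topic}' on {card.due}")
```

The point of the sketch is the division of labor the paper recommends: the AI only drafts the prompts, while the schedule keeps the learner doing the actual recalling.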
Long-Term Applications
These applications require further research, scaling, development, and/or policy standardization before broad deployment.
- Longitudinal cognitive impact monitoring to mitigate AI-induced atrophy (AICICA)
  - Sectors: Education, Workforce Training, HCI
  - Tools/products/workflows: Organization-wide metrics tracking critical thinking, judgment, and memory over time; dashboards showing human engagement vs. automation reliance; periodic "skill retention drills"
  - Assumptions/dependencies: Validated instruments across contexts; leadership buy-in; ethical telemetry; intervention protocols when decline is detected
- Certified youth-safe conversational AI and AI toys
  - Sectors: Consumer Tech, Education Policy, Child Development
  - Tools/products/workflows: Age-gating, risk detection and refusal for self-harm queries; "non-anthropomorphic by default" modes for toddlers; standards preventing attachment disruption; independent certification
  - Assumptions/dependencies: Multidisciplinary standards (developmental psych, pediatrics, HCI); enforcement mechanisms; robust content filters; parental controls
- Validated, explainable emotional screening integrated into primary care
  - Sectors: Healthcare, Public Health
  - Tools/products/workflows: emoLDAnet-like systems for loneliness/depression/anxiety screening; explainable NLP models; clinician dashboards with triage pathways
  - Assumptions/dependencies: Diverse, representative training data; convergence with gold-standard measures; clinical workflow integration; regulatory approval and reimbursement pathways
- Regulation of emotional surveillance and "LLM nudging"
  - Sectors: Policy, Employment Law, Data Protection
  - Tools/products/workflows: Policies limiting emotion inference in workplace settings; rules restricting nudges toward sensitive disclosures; audit logs and consent management
  - Assumptions/dependencies: Legal frameworks; independent auditing capacity; harmonization across jurisdictions; worker representation in governance
- Creativity support tools that counter design fixation
  - Sectors: Design, Media, Software
  - Tools/products/workflows: Generative systems with "anti-fixation" features (diversity constraints, novelty boosts, multi-corpus mixing); guided ideation that enforces exploration before refinement
  - Assumptions/dependencies: Curated, diverse datasets; bias/performance evaluation; UX patterns that promote divergent thinking; IP and provenance tracking
- Agency-centric standards for product and interface design
  - Sectors: Software, HCI, Consumer Platforms
  - Tools/products/workflows: Default "reactive" modes; transparency on personalization logic; anthropomorphism guidelines to avoid uncanny valley effects; measurable "human agency scorecards"
  - Assumptions/dependencies: Consensus standards; cross-platform adoption; user education; empirical validation of agency metrics
- Educational curricula that preserve higher-order cognition with AI
  - Sectors: Education
  - Tools/products/workflows: Bloom's-informed course design; dual-source reading requirements; "explain-first" assessment (students must articulate reasoning before seeing AI output); AI literacy modules
  - Assumptions/dependencies: Faculty development; assessment redesign; institutional policy alignment; equity considerations
- Efficacy frameworks and randomized trials for mental health AI
  - Sectors: Mental Health Research, Digital Therapeutics
  - Tools/products/workflows: RCTs by diagnosis subtype and severity; longitudinal follow-ups; stepped-care integration; crisis routing effectiveness; pediatric trials beyond adolescents
  - Assumptions/dependencies: Funding; ethics approvals; diverse samples; standard outcome measures; real-world effectiveness vs. efficacy gap management
- Algorithmic audit infrastructure for feeds and recommendations
  - Sectors: Social Media, News, Policy
  - Tools/products/workflows: Independent audits of migration paths to extreme content; transparency APIs; user-facing "exposure maps" and feed control tools
  - Assumptions/dependencies: Platform cooperation; technical standards for auditability; privacy-preserving telemetry; public-interest governance
- Organizational programs to address AI anxiety and AI guilt
  - Sectors: Education, Enterprise
  - Tools/products/workflows: Values-aligned usage guidelines clarifying "appropriate AI use"; training on authenticity vs. augmentation; field-specific norms (applied vs. pure disciplines; creative vs. routine tasks)
  - Assumptions/dependencies: Cultural adaptation; leadership endorsement; psychological safety; measurement of impact on usage and performance
- Safety-critical training that sustains human engagement and intervention readiness
  - Sectors: Aviation, Manufacturing, Energy
  - Tools/products/workflows: Simulators that vary automation levels to maintain pilot/operator proficiency; "human-in-the-loop minimums"; anomaly drills; design that prompts periodic manual checks
  - Assumptions/dependencies: Regulatory standards; union and workforce input; risk assessment; incident learning integration
- Family-level guidance and digital hygiene for children's AI use
  - Sectors: Public Health, Education Policy
  - Tools/products/workflows: Evidence-based family guidelines; school-parent toolkits; age-specific boundaries on conversational AI; practices that prioritize human peer interaction
  - Assumptions/dependencies: Ongoing developmental research; accessible materials; school district adoption; cultural tailoring
Each application leverages the paper’s central insights: use AI as cognitive scaffolding (not a substitute), preserve human agency through design choices (I-PACE, HAII), and ensure emotional safety with age-appropriate guardrails and transparent, explainable systems.
Glossary
- Affective computing: A field of computing that relates to, arises from, or influences human emotions. "Picard first coined the term "affective computing" in the mid-nineties, defining this as technology that "relates to, arises from, or influences emotions.""
- Agentic: Having the capacity to act autonomously or with agency. "Modern research emphasizes human interaction with technology itself, especially as AI becomes more agentic (e.g., smart assistants, algorithms)."
- AI guilt: A negative emotional experience tied to moral conflict about using generative AI. "A recently introduced term, [72] AI guilt is described as a negative emotional experience related to a moral dilemma or conflict around use of generative AI."
- AI‑Chatbot‑Induced Cognitive Atrophy (AICICA): A hypothesized weakening of human cognitive abilities due to overreliance on chatbots. "Dergaa et al. [24] introduced the idea of AI- Chatbot-Induced Cognitive Atrophy (AICICA)."
- Anthropomorphization: Attributing human characteristics to non-human entities like robots or chatbots. "Degree of anthropomorphization of AI-powered robots and chatbots may offer guidance for human adaptation of, or trust in such technology."
- Attentional biases: Systematic patterns of attention shaped by cues and reinforcement, often influenced by AI systems. "These dynamics can seed persistent habits and attentional biases within the I-PACE cycle."
- Automation Overdependence: Excessive reliance on automated systems leading to reduced human vigilance or skill. "One major impact is Automation Overdependence."
- Behavioral nudge: Subtle design or interaction tactics that steer user behavior without coercion. "Systems such as Replika show how conversational AI can function as a persistent behavioral nudge."
- Bloom's Taxonomy: A hierarchical framework for categorizing cognitive skills from remembering to creating. "This section builds on the work in [4], by employing the Bloom's Taxonomy framework as a roadmap to analyze the benefits and negative effects within each cognitive level."
- Brainwriting: A collaborative ideation method where participants write down ideas, here augmented by AI. "Shaer et al. [26] investigated the use of a group-AI method of brainwriting in a course in Tangible Interaction Design."
- Cognitive behavioral therapy (CBT): A structured psychotherapeutic approach focusing on modifying thought and behavior patterns. "Woebot chatbot delivers cognitive behavioral therapy [38]"
- Cognitive offloading: Shifting cognitive tasks to external tools, potentially reducing internal critical thinking. "Gerlich [19] found that frequent reliance on AI tools encourages cognitive offloading, which may gradually reduce critical thinking."
- Conversational AI: AI systems designed to engage in dialogue with users via text or voice. "In the current AI/ML landscape, we observe increasingly sophisticated use of affective and conversational AI with both intended and unintended emotional outcomes."
- Convergent validity: The degree to which a measure correlates with other assessments of the same construct. "However, not all AI-powered screening measures offer convergent validity with traditional measures of stress that have been previously validated."
- Cue reactivity: Affective and cognitive responses triggered by specific stimuli, influencing behavior. "Such pathways can heighten cue reactivity and reduce stimulus specific inhibitory control which risks maladaptive execution over time."
- Customization: User-driven adjustments to system settings or content preferences. "When users actively change settings (by selecting news topics), customization takes place."
- Dark patterns: Interface designs that manipulate users into choices they might not otherwise make. "Interface dark patterns can steer choices toward intrusive defaults and can limit informed consent."
- Deep learning: A subset of machine learning using neural networks with multiple layers to learn complex patterns. "One such product under review, emoLDAnet, utilizes deep learning and machine learning to identify loneliness, depression, and anxiety (LDA) through recorded conversations..."
- Design fixation: The tendency to stick to familiar examples, limiting creative exploration. "The researchers also noted that participants frequently copied visual elements from the AI- generated images, demonstrating design fixation - an unconscious grasp to familiar examples that discourage the pursuit of novel alternatives."
- Divergent thinking: The cognitive process of generating many creative ideas or solutions. "Habib et al. [27] in their work similarly noted this with students using Chat GPT during ideation exercises which resulted in improvements in divergent thinking."
- Emotional surveillance: Monitoring of emotional states, often in workplace or digital contexts, raising privacy concerns. "In workplace settings, employees may experience worry or dread about the implications of emotional surveillance [68]."
- Executive control: Cognitive processes that regulate behavior, attention, and decision-making. "Taken together, the behavioral impacts of AI can be broadly understood through the lens of the I-PACE model, which provides a structured way of analyzing how person-level predispositions, affective responses, cognitive processing, and executive control interact with AI systems."
- Explainable AI (XAI): AI methods that provide interpretable insights into model decisions. "Another study [66] focused on screening for loneliness in older adults utilizing explainable AI (XAI) and NLP to analyze speech patterns in transcripts."
- Flipped class model: An instructional approach where students learn foundational content before class and apply knowledge during class. "Kwan et al. [15] claims that generative AI fits into a flipped class model in which AI can support students in learning by preparing before class and reinforcing the skills gained after class through application of knowledge."
- Generative AI: AI systems that create new content (text, images, etc.) based on learned patterns. "Makransky et al. [18] explored generative AI's potential to increase student engagement and reasoning around complex ideas."
- Human agency: The capacity of humans to make intentional choices and exert control over actions. "The author of [46] describes tensions between machine agency (AI autonomy) and human agency (user control) as central to Human-AI Interaction (HAII)."
- Human factors: The study of how humans interact with systems to optimize safety, usability, and performance. "by using human factors, system reliability, and performance metrics as validation models for design evaluation [1]."
- Human‑AI Interaction (HAII): The study and design of interactions between humans and AI systems. "The author of [46] describes tensions between machine agency (AI autonomy) and human agency (user control) as central to Human-AI Interaction (HAII)."
- I‑PACE model: A framework explaining technology-use behaviors via interactions of person, affect, cognition, and execution. "Taken together, the behavioral impacts of AI can be broadly understood through the lens of the I-PACE model..."
- Impostor phenomenon: Feelings of fraudulence despite competence, potentially linked to AI use and perceived authenticity. "Correlation of AI guilt with impostor phenomenon may prove to be a fertile area of research, as students may worry about their own skill erosion with overuse of generative AI."
- Inhibitory control: The ability to suppress impulses or dominant responses in favor of goal-directed actions. "the model emphasizes that the behavioral consequences stem from the dynamic interplay between individual factors (e.g., neurobiological or psychological vulnerabilities), situational cues inducing affective and cognitive evaluations, and individuals' degree of inhibitory and executive control they can muster in these situations."
- Intelligent tutoring systems: AI-enabled educational tools that adapt instruction and feedback to learner needs. "Xu et al. [14] reviewed various empirical studies of artificial intelligence used in STEM education and found that intelligent tutoring systems, predictive knowledge generators, and educational robots helped students practice acquired knowledge in practical situations."
- Internet‑use disorder: Problematic, compulsive use of the internet conceptualized within clinical frameworks. "Originally developed to explain the onset and maintenance of Internet-use disorder, the model emphasizes..."
- Kinematic data: Movement-related measurements used to infer states like stress or performance. "Kinematic data, or the analysis of movement, is another potential application of AI in healthcare settings, enabling detection of stress in surgeons during surgical procedures [60]"
- LLM (large language model): A generative AI model trained on vast text data to produce and understand language. "Most participants who used the LLM struggled with the recall tasks."
- LLM nudging: Subtle prompts by LLMs that encourage users to disclose more information than intended. "Risk of over-disclosure may be exacerbated further with "LLM nudging," or when a chatbot subtly prompts users to disclose information they had not originally intended."
- Machine agency: The autonomous capacity of AI systems to act or make decisions. "The author of [46] describes tensions between machine agency (AI autonomy) and human agency (user control) as central to Human-AI Interaction (HAII)."
- Meta‑analysis: A statistical method that aggregates results across multiple studies to estimate overall effects. "A meta-analysis [56] explored use of conversational AI with teens and young adults in treatment of various mental health outcomes."
- MoCA: The Montreal Cognitive Assessment, a screening tool for cognitive impairment. "showed statistically significant positive cognitive changes based on cognitive screening tests (MoCA)."
- NLP (natural language processing): AI techniques for analyzing and generating human language. "They were able to compare the effect sizes of non-NLP and non-ML with more advanced NLP/ML-driven conversational AI."
- Personalization: System-driven tailoring of content or experiences without explicit user input. "Personalization occurs when AI customizes content without direct user input (e.g., Netflix recommendations)."
- Predictive knowledge generators: AI tools that forecast or infer knowledge to support learning or decision-making. "found that intelligent tutoring systems, predictive knowledge generators, and educational robots helped students practice acquired knowledge in practical situations."
- Proactive AI: Systems that act autonomously without explicit user commands. "Proactive AI acts on its own initiative, whereas reactive AI acts only when specifically instructed."
- Randomized controlled trials: Experimental studies with random assignment to test intervention efficacy. "An early systematic review of studies conducting randomized controlled trials of conversational AI targeting mental health symptoms, psychological distress, or optimization of emotional well-being in adults..."
- Reactive AI: Systems that respond only when instructed by the user. "reactive AI acts only when specifically instructed."
- Retrieval practice: Learning technique involving repeated recall to strengthen memory. "The students in the retrieval practice group achieved an average score of 89% while the students who studied without the advantage of AI earned 73% correct."
- Self‑disclosure: Sharing personal information or feelings, often studied in human-computer interaction. "In a randomized controlled trial, the text modality was found to be the most emotionally engaging, prompting the most self-disclosure from human users."
- Sub‑clinical: Symptoms or conditions present at a level below formal diagnostic thresholds. "Sub-group analysis added further specificity, clarifying that younger groups with sub-clinical depressive symptoms were most responsive."
- Text simplification: Techniques to reduce linguistic complexity to improve comprehension. "Text simplification lowered the complexity of the vocabulary and facilitated its processing."
- Uncanny valley: A dip in human comfort or trust when robots/AI appear almost—but not fully—human. "The "uncanny valley," or a decrease of trust and acceptance occurs when sensory cues accumulate [78], indicating that humans may lose trust or empathy when technology becomes too human-like."