AI and Worker Well-Being
- Worker well-being in the context of AI use is a multidimensional concept encompassing physical safety, psychological health, and social connectedness in modern workplaces.
- Empirical findings show AI boosts job enjoyment and mental health, yet can also lead to reduced compensation and risks of exploitation under opaque algorithmic management.
- Effective integration of AI requires participatory governance, transparent design, and tailored interventions to optimize benefits and mitigate negative impacts.
AI use and worker well-being constitute an emergent area of quantitative and qualitative research at the intersection of algorithmic management, organizational design, occupational psychology, and human-computer interaction. In contemporary workplaces and digital platforms, AI systems function both as productivity tools and as active mediators of the labor process, with measurable impacts—positive and negative—on physical, psychological, and social dimensions of worker well-being. These impacts are conditional on technical design, implementation context, demographic and occupational variables, and the surrounding legal-institutional framework.
1. Conceptual Foundations: Defining Well-Being in the Context of AI
Worker well-being in relation to AI is multidimensional, encompassing physical safety, psychological health, job satisfaction and meaningfulness, social connectedness, and autonomy. Several core frameworks inform this area:
- Job Decency: Following the International Labour Organization, “decent work” denotes fair compensation, reasonable hours, job security, developmental opportunities, and safe, supportive conditions (Ghosh et al., 20 Jun 2024).
- Job Meaningfulness: This is operationalized via constructs such as personal meaning, visible social impact, autonomy, skill variety, challenge, supportive relationships, and recognition (Ghosh et al., 20 Jun 2024).
- Well-being Indices: Large-scale survey work codes outcomes such as safety, pay, autonomy, mobility, and job security on Likert scales to form composite indices (Armstrong et al., 30 Sep 2024). Occupational health research adds metrics like self-determination (autonomy, competence, relatedness), motivation (enjoyment, effort, pressure, autonomy), and stress/fatigue as quantifiable outcomes (Yang et al., 9 Jun 2025).
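The composite-index approach above can be sketched concretely. In this minimal illustration (all worker responses are hypothetical), each Likert-coded dimension is standardized across respondents and the standardized items are averaged per worker:

```python
from statistics import mean, stdev

# Hypothetical Likert responses (1-5) per worker across the five
# dimensions named in the text: safety, pay, autonomy, mobility, job security.
workers = [
    {"safety": 4, "pay": 3, "autonomy": 5, "mobility": 2, "job_security": 4},
    {"safety": 2, "pay": 2, "autonomy": 3, "mobility": 3, "job_security": 1},
    {"safety": 5, "pay": 4, "autonomy": 4, "mobility": 4, "job_security": 5},
]
dimensions = ["safety", "pay", "autonomy", "mobility", "job_security"]

def zscores(values):
    """Standardize raw scores to mean 0, standard deviation 1."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

# Standardize each dimension across workers, then average the
# standardized scores within each worker to form the composite index.
standardized = {d: zscores([w[d] for w in workers]) for d in dimensions}
composites = [mean(standardized[d][i] for d in dimensions)
              for i in range(len(workers))]
print(composites)
```

Standardizing before averaging keeps any one dimension's scale from dominating the composite, which is the usual rationale for z-scoring survey items.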
AI systems touch these dimensions via explicit design goals (e.g., supporting ergonomics, intelligent workload management), indirect effects on social organization (algorithmic management, workflow adaptation), and emergent phenomena (reduced credit for AI-assisted work).
2. Empirical Evidence on AI’s Impacts on Worker Well-Being
2.1 Aggregate Effects and Heterogeneity
Cross-national survey analysis indicates that AI use is robustly associated with improved well-being: mental health (+11.3 percentage points), job enjoyment (+20.0 pp), and physical health/safety (+8.0 pp) (Nakavachara, 14 Nov 2025). Disaggregation reveals systematic heterogeneity:
| Demographic | Mental Health Δ | Job Enjoyment Δ | Physical Health Δ |
|---|---|---|---|
| Generation Y | +17.4 pp | +21.9 pp | +13.1 pp |
| Generation X | +7.0 pp | +20.8 pp | — |
| Generation Z | — | +15.2 pp | — |
| Boomers | +10.7 pp | — | — |
| Female | +11.9 pp | +19.3 pp | — |
| Male | +9.7 pp | +20.3 pp | +8.9 pp |
The strongest and most widespread gains accrue to Generation Y workers and, for mental health, to women, while men see more pronounced physical-safety benefits. Generation Z shows gains only in job enjoyment, likely reflecting digital nativity and concentration in entry-level roles.
2.2 Nuanced Worker Perspectives
A 9,000-worker, nine-country survey (Armstrong et al., 30 Sep 2024) found that perceived benefits of AI outstrip costs on average; positivity is highest among those in complex, problem-solving jobs, employees with high trust and job satisfaction, and those who feel valued. Education correlates negatively with optimism about automation’s effects, contrary to skill-biased-technical-change predictions.
Regression estimates show job satisfaction (β = 0.10), feeling valued (β = 0.09), and self-identification as a “technology champion” (β = 0.21) are strong predictors of positive attitudes towards workplace AI. Workers in roles that blend routine and cognitive tasks exhibit the greatest optimism regarding pay, autonomy, and upward mobility.
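The regression logic can be illustrated with a small synthetic exercise. The data, noise level, and recovery procedure below are assumptions for illustration only; the coefficients merely mirror the magnitudes reported in the text:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic standardized predictors: job satisfaction, feeling valued,
# and "technology champion" self-identification.
X = rng.standard_normal((n, 3))
true_beta = np.array([0.10, 0.09, 0.21])  # magnitudes from the text
y = X @ true_beta + 0.5 * rng.standard_normal(n)  # attitude outcome + noise

# Ordinary least squares with an intercept column.
X1 = np.column_stack([np.ones(n), X])
beta_hat, *_ = np.linalg.lstsq(X1, y, rcond=None)
print(beta_hat[1:])  # slope estimates should roughly recover true_beta
```

With standardized predictors, the betas are directly comparable, which is why the "technology champion" coefficient (0.21) can be read as the strongest of the three predictors.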
2.3 Direct Experimental Measures
Interventions show that financial incentives—such as performance bonuses tied to AI tool use—increase reported job-security optimism (+12 pp), while increased opportunity for worker input does not measurably shift attitudes (Armstrong et al., 30 Sep 2024).
3. Organizational Contexts and Mechanisms
3.1 AI in Gig and Platform Work
Stakeholder-centered co-design demonstrates that exposing gig workers to interactive “data probes” (personal plus city-level visualizations of earnings, hours, safety risks, and algorithmic features) surfaces trade-offs (financial vs. physical/psychological well-being), unveils structural precarity, and enables drivers to articulate design needs (Zhang et al., 2023).
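A minimal, purely illustrative sketch of the kind of predictive planning such data probes can support: every rate and threshold below is a hypothetical assumption, not a value from the cited work.

```python
# Hypothetical planning model for a gig driver weighing financial
# against physical well-being. All constants are illustrative.
HOURLY_GROSS = 28.0       # expected gross earnings per driving hour ($)
PLATFORM_CUT = 0.25       # platform commission (fraction of gross)
COST_PER_HOUR = 6.0       # fuel, maintenance, insurance ($/hour)
FATIGUE_THRESHOLD = 8     # hours after which fatigue risk rises sharply

def net_earnings(hours: float) -> float:
    """Projected net take-home pay for a shift of the given length."""
    return hours * (HOURLY_GROSS * (1 - PLATFORM_CUT) - COST_PER_HOUR)

def shift_summary(hours: float) -> dict:
    """Pair the financial projection with a physical well-being flag."""
    return {
        "hours": hours,
        "net": round(net_earnings(hours), 2),
        "fatigue_flag": hours > FATIGUE_THRESHOLD,
    }

for h in (6, 8, 10):
    print(shift_summary(h))
```

Pairing the earnings projection with an explicit fatigue flag mirrors the trade-off framing of the probes: longer shifts raise net pay linearly while physical risk rises past the threshold.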
Platform-level harms are rooted in information asymmetry, with drivers voicing distress over opaque incentives, fare calculations, assignment logic, and algorithmic “nudges” (surge pricing, quests, performance tiers, etc.) (Rao et al., 16 Jun 2024). Policy proposals targeting well-being include mandatory transparency reports exposing 50+ indicators spanning the ride, driver, algorithm, and policy domains.
3.2 Algorithmic Management and Silent Exploitation
Experimental laboratory evidence indicates that algorithmic managers can systematically reduce pay (by up to 40%) without provoking demotivation or perceptions of unfairness; workers react less emotionally to impersonal AI evaluations (“just the system”) than to those from humans (Dong et al., 27 May 2025). Only excessively punitive AI managers (“tree-based” models) trigger demotivation. This finding demonstrates a “silent exploitation” risk: AI’s perceived impartiality can suppress the social checks that normally prevent extractive labor practices.
3.3 Compensation and Perceived Deservingness
Across over 3,800 participants, robust evidence demonstrates an “AI penalization effect”: workers using AI tools receive less compensation (–$12 to –$26 per task/bonus, ∼25–50% reduction) than peers producing identical output unaided (Kim et al., 22 Jan 2025). The mediator is perceived credit/deservingness, with the effect attenuated when strong contractual protections exist. Freelancers and gig workers—lacking formal wage floors—are most exposed.
4. Technological Modalities: From Emotion AI to Multi-Agent Systems
AI systems designed for workplace well-being range from biosensor-integrated, HRL-driven multi-agent platforms that optimize task and break timing based on individual cognitive/physical states (K et al., 4 Jan 2025), to sector-specific, conversational AI with distinct expert/peer personas for high-risk environments (Yang et al., 9 Jun 2025). Evaluations demonstrate simultaneous usability and well-being gains—e.g., multi-agent construction worker support yields +18% SUS, +40% self-determination, +60% trust and social presence relative to baseline chatbots (Yang et al., 9 Jun 2025).
Emotion AI in organizational settings leverages multimodal environmental, biometric, and behavioral inputs to drive real-time, personalized feedback and individual dashboards. Worker reactions are generally positive where personal benefit is concrete, but anxieties around privacy, surveillance, and downstream data use persist. Trust is a function of transparency, consent, benefit-sharing, and strict anonymization (Piispanen et al., 12 Dec 2024).
Ethical deployment balances data richness with strict access controls, opt-out provisions, and ongoing participatory dialogue.
5. Social, Group, and Relational Dimensions
Beyond individual interventions, AI agents mediate group-level social dynamics, targeting participation equity, psychological safety, and affective climate (Hamada et al., 2022). Interventions range from chatbots that prompt positive message exchange to peripheral robots that steer conversational turn-taking (measured in entropy). These systems, when well-calibrated, deliver measurable improvements in team cohesion, positive affect (PANAS, WHO-5), and participation rates (up to +18%).
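The entropy measure of turn-taking mentioned above is standard Shannon entropy over the distribution of speaking turns; the meeting data below is illustrative:

```python
from collections import Counter
from math import log2

def turn_entropy(turns):
    """Shannon entropy (bits) of the speaking-turn distribution.

    Higher entropy means more evenly shared participation; the maximum
    for k speakers is log2(k), reached when all speak equally often.
    """
    counts = Counter(turns)
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

# Illustrative meetings: one dominated by a single speaker, one balanced.
dominated = ["A"] * 8 + ["B", "C"]
balanced = ["A", "B", "C"] * 3 + ["A"]
print(round(turn_entropy(dominated), 3))
print(round(turn_entropy(balanced), 3))
```

An agent steering turn-taking can use the gap between observed entropy and the log2(k) maximum as a participation-equity signal, nudging quieter members when the gap widens.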
Ethical and technical challenges specific to group-level agents include dynamic subgroup adaptation, privacy (continuous sensing), and the trade-off between behavioral nudges and autonomy. Participatory and stakeholder-driven approaches—where groups co-own intervention logic and data trail—mitigate these risks.
6. Design, Governance, and Policy Implications
AI’s effects on well-being are fundamentally shaped by implementation strategy and governance:
- Human-centric frameworks reject “task atomization” in favor of network/process approaches that treat expertise, coordination, and knowledge as co-produced (Willems et al., 8 Apr 2025). Automation assessments must address not only “Can we automate?” but “Should we?” and “What relational/social function does this task fulfill?”
- Organizational justice demands that transparent, explainable models underpin critical HR processes, with iterative employee voice in design and feedback (Sadeghi, 6 Dec 2024).
- Explicit credit-allocation and wage-protection schemas are necessary to inoculate against “AI penalization,” with collective bargaining serving as a bulwark for vulnerable worker cohorts (Kim et al., 22 Jan 2025).
- Targeted upskilling, participatory governance, and continuous psychosocial/environmental outcome tracking are required to sustain well-being and trust in the AI-augmented workplace.
7. Future Research and Open Challenges
Current evidence demonstrates both clear opportunities and stark risks. Open problems include:
- Isolating causal pathways by which AI mediates stress, motivation, burnout, and disengagement, especially across different task types and workflow architectures.
- Evaluating efficacy and equity of “well-being AI” in large-scale, cross-sector deployments, with longitudinal tracking.
- Understanding demographic and occupational stratifications in benefit distribution—ensuring that AI augments, rather than exacerbates, health or wage inequalities (Nakavachara, 14 Nov 2025).
- Codifying universal standards for algorithmic fairness, transparency, and worker-data control, especially in jurisdictions with divergent labor regimes.
A plausible implication is that successful alignment of AI systems and worker well-being requires a holistic, contextually attuned approach that spans technical design, workplace process integration, participatory governance, and continual empirical evaluation. Failure to do so risks offsetting productivity or efficiency gains with hidden psychological, financial, or social costs, especially for marginalized or unprotected workers.