
Cyber Humanism in Education

Updated 22 December 2025
  • Cyber humanism in education is a framework that repositions human agency in digital learning by treating AI as a co-author of knowledge.
  • It employs key pillars such as reflexive competence, algorithmic citizenship, and dialogic design to foster critical and ethical learning.
  • Current applications include AI-integrated curricula, VR experiences, and serious games that enhance digital literacy and ethical reasoning.

Cyber humanism in education refers to a set of design principles, theoretical frameworks, and pedagogical practices that reposition the human as the central agent within technology-rich learning ecosystems. Rather than viewing AI and digital infrastructure as either mere tools or existential threats, cyber humanist models conceptualize humans and machines as co-authors of knowledge, advocate for the preservation of human agency in the face of epistemic automation, and foreground critical, creative, and ethical dimensions of learning. These initiatives span from K–12 code literacy and serious gaming through university-level AI integration, fostering epistemic agency, reflexive competence, ethical reasoning, and solidarity in a digital context (Adorni, 18 Dec 2025, Pitts et al., 10 Jun 2025, Salas, 2017, Vehrer et al., 28 Aug 2025, Selitskiy et al., 27 Oct 2025, Henda, 2017, Melo et al., 2017, Zahir et al., 2015).

1. Foundational Definitions and Theoretical Lineage

Cyber humanism in education is formally defined as “a framework for reclaiming and strengthening human agency in AI-mediated learning environments by treating AI systems as cognitive infrastructures co-authored by humans and machines, and by positioning educators and learners as epistemic agents and algorithmic citizens with both the right and the responsibility to understand, interrogate, and shape those infrastructures” (Adorni, 18 Dec 2025). The lineage includes:

  • Humanist traditions: Humboldtian ideals (academic freedom, self-cultivation, unity of teaching and research), Deweyan progressive education, and postwar European digital humanism (Selitskiy et al., 27 Oct 2025).
  • Constructivist and sociocultural learning: Piagetian assimilation/accommodation, Vygotskian Zone of Proximal Development, and activity-system methodologies that integrate thought, communication, and collaborative action (Selitskiy et al., 27 Oct 2025, Henda, 2017).
  • Semiotic and technological traditions: Semiotic empowerment (code as a system of symbols), “concrétisation” (Simondon), and theories of techno-social convergence (human–information–machine triads) (Salas, 2017, Henda, 2017).
  • Ethical and algorithmic citizenship: Brey’s disclosive method, algorithmic governance, and participatory curriculum design (Adorni, 18 Dec 2025, Melo et al., 2017).

2. Key Pillars and Models of Cyber Humanist Educational Design

Cyber humanism in education is operationalized through the following pillars and architectures (Adorni, 18 Dec 2025).

Pillar | Definition | Sample Implementation
Reflexive Competence | Critical examination of AI’s role in cognition; extends metacognition to probe AI affordances and limits | Reflection logs, model comparison
Algorithmic Citizenship | Rights/responsibilities in algorithmic infrastructures; co-governance of policies and practices | Class/school policy co-creation
Dialogic Design | Multi-voiced dialogues with AI as interlocutor, not oracle; modeling epistemic conflict and uncertainty | Prompt sequences, debate tasks
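
As a purely illustrative reading of the “reflection logs, model comparison” implementation above, the sketch below models a single reflexive-competence log entry in which a learner records two AI outputs for the same prompt alongside their own critical judgment. The ReflectionEntry structure and all field names are hypothetical; the cited sources do not prescribe a schema.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelOutput:
    """One AI system's response to the shared prompt (hypothetical schema)."""
    model_name: str
    response: str
    perceived_limits: list[str] = field(default_factory=list)  # learner-noted gaps or errors

@dataclass
class ReflectionEntry:
    """A reflexive-competence log entry comparing two models on the same task."""
    logged_on: date
    prompt: str
    outputs: list[ModelOutput]
    learner_judgment: str  # which answer the learner trusts, and why

entry = ReflectionEntry(
    logged_on=date.today(),
    prompt="Explain why correlation does not imply causation.",
    outputs=[
        ModelOutput("model_a", "Correlation can arise from confounding...", ["no worked example"]),
        ModelOutput("model_b", "Two variables may co-vary by chance...", ["conflates chance and confounding"]),
    ],
    learner_judgment="Model A is more precise; both omit directionality.",
)
print(entry.learner_judgment)
```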

These principles align with international competence frameworks, including DigComp 3.0, UNESCO AI-Teacher, and DigCompEdu, promoting digital sovereignty, socio-technical negotiation, and dialogic pedagogy (Adorni, 18 Dec 2025).

The Biocybernetic Learning Model (BLM) instantiates the cyber-humanist triad—human (h), information (i), machine (m)—as a dynamic ecosystem:

\text{Ecosystem} = \{h + i + m\}

where cognitive states $h(t)$ evolve under the influence of information vectors $\vec{i}(t)$ and machine mediation $m$, with cybernetic control by educators (Salas, 2017).
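
The sources give the BLM only in the set-level form above; the toy simulation below is an assumed, minimal dynamical reading of it, in which a scalar cognitive state $h(t)$ is updated by machine-mediated information and nudged toward a target by an educator acting as the cybernetic controller. The update rule, gain values, and target are illustrative assumptions, not part of Salas (2017).

```python
# Toy discrete-time reading of the {h + i + m} ecosystem (assumed dynamics).
def simulate_blm(steps=10, h0=0.2, target=1.0, machine_gain=0.3, educator_gain=0.2):
    h = h0                    # learner's cognitive state h(t), a scalar for illustration
    trajectory = [round(h, 3)]
    for t in range(steps):
        i_t = 1.0                                         # information made available at step t
        machine_term = machine_gain * i_t * (1 - h)       # m mediates how much of i(t) is absorbed
        educator_term = educator_gain * (target - h)      # cybernetic control exercised by the educator
        h = h + machine_term + educator_term
        trajectory.append(round(h, 3))
    return trajectory

print(simulate_blm())  # h(t) rises toward the target as machine and educator terms act together
```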

3. Human–AI Co-Production: Trust, Agency, and Socio-Technical Infrastructures

A central construct is the notion of “human–AI trust” in educational settings, distinct from both interpersonal (human–human) and traditional technology trust. Human–AI trust is defined as “a student’s willingness to rely on an AI system’s guidance, feedback, and information under conditions of uncertainty and potential vulnerability,” acknowledging both social (empathy, dialogue) and system (reliability, functionality) affordances (Pitts et al., 10 Jun 2025).

Empirical PLS-SEM models reveal the following (a numerical reading of the coefficients is sketched after this list):

  • Human-like trust (ability, benevolence, integrity) strongly drives trusting intention ($\beta = 0.539$).
  • System-like trust (functionality, helpfulness, reliability) dominates behavioral intention ($\beta = 0.714$) and perceived usefulness ($\beta = 0.618$).
  • Both forms comparably influence perceived enjoyment (Pitts et al., 10 Jun 2025).
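
To illustrate how these standardized path coefficients read, the sketch below propagates a one-standard-deviation change in a latent trust construct through the reported paths. It uses only the three coefficients quoted above; the full PLS-SEM model in Pitts et al. (10 Jun 2025) contains additional paths and controls not shown here.

```python
# Standardized path coefficients quoted above (Pitts et al., 10 Jun 2025).
PATHS = {
    ("human_like_trust", "trusting_intention"): 0.539,
    ("system_like_trust", "behavioral_intention"): 0.714,
    ("system_like_trust", "perceived_usefulness"): 0.618,
}

def predicted_shift(predictor: str, outcome: str, delta_sd: float = 1.0) -> float:
    """Predicted change in the outcome (in SD units) for a delta_sd change in the
    predictor, holding the rest of the structural model constant."""
    return PATHS[(predictor, outcome)] * delta_sd

# A one-SD increase in system-like trust predicts ~0.71 SD higher behavioral intention.
print(predicted_shift("system_like_trust", "behavioral_intention"))
```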

Effective cyber humanist design thus requires:

  • Enhancing functional/system trust for reliable, transparent AI.
  • Cultivating human-like trust through explainability, privacy, and demonstrable benevolence.
  • Fostering AI literacy and calibrated trust to sustain critical engagement, not blanket acceptance or rejection (Pitts et al., 10 Jun 2025).

Socio-technical infrastructures are conceptualized as co-authored spaces:

S = \{H, A \mid \text{mutual shaping of epistemic norms and practices}\}

placing agency, accountability, and governance at the human–machine interface (Adorni, 18 Dec 2025).

4. Cyber Humanist Pedagogies: Curriculum, Methods, and Assessment

Cyber humanist pedagogy encompasses:

  • Prompt-based learning and AI-mediated Socratic method: Students craft and critique prompts, analyze AI suggestions, and negotiate class norms in domains from social sciences to embedded systems (Adorni, 18 Dec 2025); a minimal sketch of such a prompt sequence follows this list.
  • Immersive, dialogic digital experiences: VR and AI-enhanced environments present philosophy or abstract domains in spatially navigable 3D, mediating empathetic and conceptual understanding (e.g., Walter’s Cube for philosophy) (Vehrer et al., 28 Aug 2025).
  • Constructivist, inquiry-driven approaches: Emphasis on interactivity, group problem solving, and co-construction of evaluation criteria, with assessments favoring portfolios, open-book projects, and peer review (Selitskiy et al., 27 Oct 2025).
  • Cyber-ethics and algorithmic literacy: Embedding Brey’s disclosive method of ethical deliberation, reflection on algorithmic decision-making, and critical analysis of socio-technical systems across the SE curriculum (Melo et al., 2017).
  • Game-based learning and inclusion: Serious games such as “Protection and Deception” teach cyber literacy and game-theoretic concepts to diverse audiences, using embodied, hands-on metaphors and rigorous balancing to sustain engagement (Zahir et al., 2015).
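
To make the prompt-based, dialogic pattern above concrete, the sketch below stages a Socratic-style exchange as an ordered sequence of prompt moves, each paired with the reflective question the learner must answer before advancing. The stage names and questions are hypothetical illustrations of dialogic design, not a sequence prescribed by the cited work.

```python
# Hypothetical staging of an AI-mediated Socratic prompt sequence (dialogic design).
SOCRATIC_SEQUENCE = [
    {"stage": "elicit",     "prompt_to_ai": "State the strongest case for claim X.",
     "learner_reflection": "Which premises does the AI rely on?"},
    {"stage": "challenge",  "prompt_to_ai": "Now argue against claim X as persuasively as you can.",
     "learner_reflection": "Where do the two answers genuinely conflict?"},
    {"stage": "probe",      "prompt_to_ai": "What evidence would change your assessment?",
     "learner_reflection": "Is the AI expressing calibrated uncertainty or false confidence?"},
    {"stage": "synthesize", "prompt_to_ai": None,  # the learner, not the AI, writes the synthesis
     "learner_reflection": "Write your own position, citing where you accept or reject the AI's reasoning."},
]

for step in SOCRATIC_SEQUENCE:
    print(f"[{step['stage']}] ask AI: {step['prompt_to_ai']!r} -> reflect: {step['learner_reflection']}")
```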

5. Impact, Limitations, and Critical Assessment

Empirical results underline several outcomes:

  • In a VR-AI philosophy course, 80% of undergraduates achieved good or excellent exam grades, with 98.7% expressing willingness to adopt the method elsewhere (Vehrer et al., 28 Aug 2025).
  • In game-based cyber literacy interventions, participants exhibited rapid uptake of technical vocabulary and strategic reasoning, regardless of prior expertise (Zahir et al., 2015).
  • Prompt-based, AI-assisted learning fostered epistemic agency, algorithmic citizenship, and critical dialogue, but required substantial scaffolding to prevent cognitive offloading or uncritical acceptance of AI outputs (Adorni, 18 Dec 2025).

Critical assessments highlight open problems:

  • Risks of equity gaps, as access to high-quality AI and educator training is uneven (Adorni, 18 Dec 2025).
  • Institutional support is needed to manage educator workload and to sustain the development of cyber humanist design practices (Adorni, 18 Dec 2025).
  • The absence of large-scale, longitudinal studies on the cognitive transfer and broader socio-cultural impacts of cyber humanist interventions remains a research gap (Henda, 2017).

6. Future Directions and Challenges

Cyber humanism in education faces several enduring design and policy challenges:

  • Operationalizing critical competencies: No single metric for skills like “cognitive sovereignty,” “collaborative innovation,” or “algorithmic citizenship” has yet achieved consensus; plausible proxies include reflection journals and project-based rubrics (Selitskiy et al., 27 Oct 2025).
  • Embedding ethics and critical inquiry: Continuous curriculum reviews, faculty development, interdisciplinary teamwork, and participatory governance models must be institutionalized for sustained impact (Melo et al., 2017, Adorni, 18 Dec 2025).
  • Equity and inclusion: Policy must address resource allocation, access disparities, and teacher professionalization to prevent new digital divides (Henda, 2017, Adorni, 18 Dec 2025).
  • Ecological and ethical stewardship: Given the energy and opacity costs of current LLM-based AI, human-centered pedagogical methods and data governance must remain priorities (Selitskiy et al., 27 Oct 2025).
  • Cross-disciplinary scaling: Expanding cyber humanist design patterns from STEM and philosophy into broader humanities and creative domains is both feasible and encouraged by current research (Vehrer et al., 28 Aug 2025).

In sum, cyber humanism in education reconfigures the digital learning environment as a site for the deliberate cultivation of agency, criticality, and co-constructed meaning, harnessing the complementary strengths of humans and machines while insisting on the primacy of ethical, creative, and communal values (Adorni, 18 Dec 2025, Selitskiy et al., 27 Oct 2025, Salas, 2017, Pitts et al., 10 Jun 2025).
