Collaborative Autoethnography in Research
- Collaborative Autoethnography is a qualitative method wherein researchers collectively document and interpret their lived experiences to reveal multifaceted cultural and social insights.
- It employs systematic triangulation, reflexivity, and negotiation of divergent perspectives to enhance methodological rigor and uncover hidden biases.
- Practical applications include exploring accessibility in HCI and assessing GAI tools’ efficacy, with insights driving more inclusive research designs.
Collaborative Autoethnography (CAE) is a qualitative research methodology in which multiple co-researchers systematically study and analyze their personal lived experiences to illuminate broader cultural, technological, or social phenomena. In contrast to individual autoethnography, which centers a single author’s narrative as a lens onto a problem space, CAE brings together a plurality of voices—each documenting, comparing, and collectively interpreting their experiences. Built-in triangulation mechanisms, explicit reflexivity, and collective negotiation of divergent perspectives distinguish CAE, positioning it as a powerful approach for investigating complex, multifaceted topics such as accessibility in Human–Computer Interaction (HCI) and the impacts of Generative Artificial Intelligence (GAI) tools (Glazko et al., 2023).
1. Definition and Distinguishing Features
Collaborative Autoethnography extends the classic autoethnographic approach by situating multiple researchers as co-participants in both the documentation and analysis stages. Key characteristics include:
- Plural Voices: Multiple "I"s replace a singular author-narrator, yielding an inherently dialogic narrative.
- Triangulation: Systematic cross-coding and group discussions serve as internal checks on thematic development and interpretation.
- Reflexivity: Iterative reflexive practice pervades all stages, foregrounding negotiation—both of meaning and of group dynamics.
- Negotiation of Perspective: Differences in interpretation or experience are explicitly surfaced, rather than harmonized away.
These mechanisms enhance CAE’s capacity to address research topics marked by diversity of experience, intra-group disagreement, or emergent, contested phenomena. The methodology is particularly well-suited for accessibility research, where intersections of impairment, identity, and technical context necessitate continuously negotiated understanding (Glazko et al., 2023).
2. Research Team Configurations and Rationales
An illustrative CAE study of GAI's utility for accessibility was conducted by a heterogeneous team of seven researchers at the University of Washington, including individuals who identify as blind, hard of hearing, neurodiverse, and/or living with chronic illness, as well as members without disabilities. The team also varied in seniority (junior and senior PhD students, postdoctoral researchers, and faculty) and background (LGBTQ+ identities, first-generation college graduates, international scholars).
The deliberate mixed composition was rationalized as follows:
- To ensure the spectrum of lived accessibility needs and professional contexts was authentically represented.
- To leverage peer support and domain expertise in cases where GAI-mediated access solutions required external verification.
- To expose both convergences and divergences in how GAI tools respond to different impairments and intersectional identities.
This collaborative design operationalizes CAE’s commitment to inclusively capturing and synthesizing diverse experiential data (Glazko et al., 2023).
3. Data Collection and Collaborative Workflows
The three-month study of GAI tools and accessibility instantiated CAE through a rigorously structured workflow:
- Onboarding (Month 0): The team collectively set the study’s scope—using GAI tools to address personal and professional access needs—and compiled a spreadsheet of accessible GAI resources (e.g., ChatGPT, GPT-4, Midjourney, DALL-E 2, Copilot, ChatPDF).
- Individual Diaries (Months 1–3): Each researcher appended weekly entries to a communal Google Doc, following a structured template (sketched in code after this list) covering:
- Task description;
- Tools/versions used;
- Prompts issued;
- Success criteria (e.g., verifiability, time savings, self-confidence);
- Observed outcomes;
- Reflections on bias, ableism, or training-data mismatch.
- Weekly Reflection Meetings: One-hour team meetings enabled round-robin updates, in-depth collaborative discussion focused on 1–2 vignettes, and real-time reflexive analysis of team dynamics.
- Pairwise Cross-Coding: At the study's conclusion, each diarist's corpus was independently coded by a second author, using thematic rubrics (see Table 1).
- Group Synthesis Workshops: Full-team sessions resolved coding differences, collapsed codes to higher-level clusters, and developed amalgamated, anonymized vignettes for cross-case comparison.
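To make the diary template concrete, the sketch below renders one entry as a Python dataclass. This is a minimal illustration only: the study collected entries in a shared Google Doc, and the field names here are inferred from the template items above, not taken from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class DiaryEntry:
    """One weekly diary entry; fields mirror the shared template above."""
    member: str   # anonymized diarist ID
    task: str     # task description
    tool: str     # tool/version used, e.g. "GPT-4 (May 2023)"
    prompts: list[str] = field(default_factory=list)           # prompts issued, in order
    success_criteria: list[str] = field(default_factory=list)  # e.g. verifiability, time savings
    outcome: str = ""       # observed outcome
    reflections: str = ""   # notes on bias, ableism, or training-data mismatch

# Illustrative entry (values invented for the example):
entry = DiaryEntry(
    member="P3",
    task="Convert a reference list to BibTeX",
    tool="ChatGPT (GPT-4)",
    prompts=["Convert the following citations to BibTeX: ..."],
    success_criteria=["verifiability", "time savings"],
    outcome="Entries compiled, but one DOI needed manual correction",
    reflections="Low-stakes task; output was easy to check independently",
)
```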
Table 1: Thematic Categories for Pairwise Cross-Coding
| Thematic Category | Example Focus | Analytical Purpose |
|---|---|---|
| GAI’s value for personal access | Individual efficacy, productivity | Determine direct utility |
| GAI’s value for making materials accessible | Impact on third-party accessibility | Generalize utility |
| Verifiability challenges | Pass/fail, degree of independent checking | Identify reliability gap |
| Evidence of ableism or representational bias | Stereotypes or misrepresentation | Expose bias incidence |
| Prompt-iteration effort / tool-data mismatch | Required number of prompt refinements | Assess cost/benefit |
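The rubric in Table 1 lends itself to a simple cross-coding check. The sketch below is a hypothetical illustration rather than the study's actual procedure (coding differences were resolved in group workshops, not in code): it flags the entries where the diarist's own codes and the second coder's codes diverge.

```python
# Shorthand labels for Table 1's thematic categories (names are assumptions).
CATEGORIES = {
    "personal_access",    # GAI's value for personal access
    "materials_access",   # GAI's value for making materials accessible
    "verifiability",      # verifiability challenges
    "ableism_bias",       # evidence of ableism or representational bias
    "prompt_iteration",   # prompt-iteration effort / tool-data mismatch
}

def coding_disagreements(first: dict[str, set[str]],
                         second: dict[str, set[str]]) -> dict[str, set[str]]:
    """Map each entry ID to the categories the two coders disagree on.

    `first` and `second` map diary-entry IDs to the code sets assigned by
    the two independent coders; the symmetric difference (^) queues entries
    for discussion in the group synthesis workshop.
    """
    return {
        entry_id: first[entry_id] ^ second.get(entry_id, set())
        for entry_id in first
        if first[entry_id] ^ second.get(entry_id, set())
    }
```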
The analytic pipeline was conceptually modeled as iterating over triples of (team member, task, GAI tool):

Let $T$ denote the set of tasks, $M$ the set of team members, and $G$ the set of GAI tools. For each triple $(m, t, g) \in M \times T \times G$: prompt → output → verify → log(success/failure, bias observed). Thematic clustering across the resulting entries distills meta-themes (Glazko et al., 2023).
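Rendered as Python, the same conceptual loop might look like the sketch below. Everything here is a placeholder (the study did not implement such a pipeline in code, and members chose tasks relevant to their own access needs rather than exhaustively crossing all three sets):

```python
def run_conceptual_pipeline(members, tasks, tools):
    """Conceptual (m, t, g) iteration: prompt -> output -> verify -> log.

    `members`, `tasks`, and `tools` stand in for the sets M, T, and G;
    the callables used on them are hypothetical placeholders.
    """
    log = []
    for m in members:
        for t in tasks:
            for g in tools:
                output = g.prompt(m.compose_prompt(t))   # prompt -> output
                log.append({
                    "member": m.id,
                    "task": t.name,
                    "tool": g.name,
                    "success": m.verify(output, t),      # independent check where possible
                    "bias_observed": m.flag_bias(output),
                })
    return log  # thematic clustering then runs over these entries
```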
4. Analytic Rigor: Addressing Verifiability, Bias, and Reflexivity
CAE deployed explicit mechanisms to mitigate methodological pitfalls frequently encountered in qualitative group inquiry:
- Verifiability: The team distinguished “low-stakes” tasks (e.g., BibTeX formatting) amenable to automated checking from “high-stakes” tasks (e.g., visual UI layout for screen-reader users). Weekly queries systematically surfaced which outputs required independent or expert validation.
- Bias and Ableism: A continually updated glossary cataloged recurring biased representations (e.g., mischaracterizations of cane usage). Paired coding highlighted moments where model hallucinations reflected entrenched ableist tropes.
- Reflexivity: Detailed meeting minutes were annotated with “reflexive flags” when uncertainty or potential projection was voiced by a team member. Post-hoc review of these flagged moments ensured that internal disagreements or ambivalence were preserved in synthesized vignettes.
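As a final illustration (again hypothetical; the study flagged moments in prose meeting minutes, not in code), the post-hoc review of reflexive flags amounts to a simple filter:

```python
def flagged_moments(minutes: list[dict]) -> list[dict]:
    """Return minute items carrying a reflexive flag, for post-hoc review.

    Each item is assumed to look like:
      {"meeting": "week 6", "speaker": "P5", "note": "...",
       "reflexive_flag": True}   # set when uncertainty or projection was voiced
    """
    return [item for item in minutes if item.get("reflexive_flag")]
```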
A notable example involved group debate regarding whether “robotic but grammatically perfect” GAI-generated Slack messages met access needs, with ongoing tensions between metrics (e.g., error rate), human preference, and user self-confidence repeatedly surfacing as a thematic axis (Glazko et al., 2023).
5. Key Findings: GAI Efficacy and CAE Contributions
Empirical findings illuminated both affordances and shortfalls of GAI for accessibility, as revealed through the CAE framework:
- Greatest Utility: GAI most reliably facilitated “easily verifiable, low-risk tasks,” such as reference formatting and basic prompt-to-code applications.
- Mixed/Negative Outcomes: Outcomes were inconsistent or negative when GAI outputs required verification by the same modality they aimed to augment (e.g., visual output for blind users).
- “False Promises” Phenomenon: GAI models frequently claimed compliance with accessibility guidelines but delivered non-compliant artifacts.
The CAE methodology enabled the extraction of higher-level thematic insights:
- Verifiability Gap: GAI utility is inherently limited by the user’s ability to independently audit outputs.
- Training-Data Blind Spot: Niche or specialized accessibility scenarios were underrepresented in training corpora, diminishing GAI effectiveness.
- Subtle Ableism: Even well-intentioned prompts triggered stereotypes, particularly in the absence of diverse input data.
- Prompt-Iteration Cost: The effort required to iteratively refine prompts could outweigh benefits, especially for complex or novel access needs.
Advantages of CAE included surfacing experiential patterns spanning diverse disabilities, embedding reflexivity throughout all methodological stages, and cultivating mutual accountability. Challenges included the logistical complexity of tracking multiple diaries and codings, balancing anonymization with narrative richness, and the reliance on qualitative rather than quantitative consensus (Glazko et al., 2023).
6. Implications and Emerging Research Trajectories
By aggregating, coding, and collectively interpreting diverse experiential accounts, CAE produces a “richly textured account” of both the possibilities and persistent limitations of GAI in accessibility domains. This approach is especially pertinent for fields confronting intersectional and under-researched populations or phenomena for which formalized metrics or models are not yet fully developed.
A plausible implication is that, while CAE’s inherent emphasis on reflexivity and plural perspective sets a high standard for inclusiveness and analytical depth, the absence of formal quantitative models sometimes limits reproducibility or generalizability. However, in rapidly evolving technical landscapes such as HCI and AI accessibility, CAE’s capacity to capture emergent discomfort, disagreement, and innovation makes it a crucial complement to other empirical paradigms (Glazko et al., 2023).