Critical Technical Practice in Tech Research

Updated 28 February 2026
  • Critical Technical Practice is an approach that integrates social critique, reflexivity, and normative inquiry into technical work to surface underlying values and power structures.
  • It operationalizes methods such as AI sprints, Critical Systems Heuristics, and structured dataset stewardship to iteratively audit technical outputs.
  • CTP challenges conventional tech ethics by embedding ethical scrutiny throughout design and development, advocating for distributed governance and participatory oversight.

Critical Technical Practice (CTP) is an approach that systematically integrates social critique, reflexivity, and normative inquiry directly into technical work. CTP recognizes that technical systems are always already shaped by, and productive of, particular values, assumptions, and power relations. Rather than treating design, engineering, or computational research as value-neutral or purely technical, CTP surfaces and interrogates the boundary judgments, normative claims, and institutional incentives that structure technological practice. It emerges in response to both the conceptual challenges of “ethics-washing” and the practical limitations of mainstream tech ethics frameworks, insisting instead that ethical reasoning, stakeholder critique, and structural alternatives be operationalized throughout the design, development, and deployment of computational systems (Green, 2021).

1. Genealogies and Definitions

CTP descends from traditions in Science and Technology Studies (STS), Critical Systems Thinking (CST), and digital methods, and was explicitly articulated in HCI by Dourish et al. (2004) as an “engineering stance that deliberately weaves social critique into technical work” (Duboc et al., 2019). Early formulations by Feenberg framed CTP as an antidote to technological determinism and as a means of democratizing technical expertise by opening participation to wider publics. In contemporary AI and data science, CTP now appears as a direct response to the abstraction of ethical concerns into ineffectual checklists, compliance regimes, and superficial norm-setting by powerful industry actors (Green, 2021, Jin et al., 12 Oct 2025).

A paradigmatic instantiation of CTP is the “AI sprint” methodology: time-boxed, intensive research sessions structured as iterative, dialogic loops between a human researcher and an LLM. This hybrid form recasts “book sprints” and “data sprints” as single human–LLM dyads, emphasizing explicit critical facilitation, reflexive workflow tracking, and vigilance against epistemic delegation or monological augmentation (Berry, 13 Dec 2025).

2. Principles and Theoretical Foundations

CTP is grounded in the recognition that technical artifacts are sociotechnical systems—outcomes of layered cycles of normative reflection, institutional incentives, and concrete materialization (Green, 2021, Jin et al., 12 Oct 2025). It juxtaposes traditional technical rationality with critical systems heuristics, critical code studies, and intersectional frameworks. Key axioms include:

  • Reflexivity: Systematic surfacing of boundary judgments (who/what is included) and value judgments at every design and implementation stage.
  • Power-awareness: Explicit examination of how power structures (corporate, regulatory, epistemic) mediate what counts as valid knowledge, problem scope, or “success.”
  • Iterative critical feedback: Integration of descriptive (“is”) and normative (“ought”) judgments—enacting critique through iterative cycles of reflection and documentation (Duboc et al., 2019).
  • Integration, not isolation, of ethics: Embedding ethical scrutiny as part of technical, infrastructural, evaluative, and community-centered practice—eschewing a treatment of ethics as either external audit or discretionary add-on (Green, 2021, Jin et al., 12 Oct 2025).

3. Methodologies and Exemplary Workflows

CTP is operationalized through practical methods that embed critical inquiry into established research or engineering lifecycles. These include:

  • Critical Systems Heuristics (CSH): A twelve-question heuristic probing motivation, control, knowledge, and legitimation—the “client,” “purpose,” “criteria,” “control agents,” “resources,” “constraints,” etc.—with every cycle iteratively eliciting both actual (“is”) and ideal (“ought”) boundary judgments. Mapping these reflections into requirements artifacts ensures that hidden and competing values remain explicit throughout design (Duboc et al., 2019).
  • AI Sprints: Structured as four-stage cycles—preparation, initial LLM prompt loop, iterative critique/versioning of generative outputs, and synthesis/public reporting. CTP within AI sprints is maintained by explicit tracking of cognitive delegation, productive augmentation, and cognitive overhead, combined with systematic documentation of prompt logs and failure modes (Berry, 13 Dec 2025).
  • Critical Dataset Stewardship: Applying structured templates (e.g., “Datasheets for Datasets” by Gebru et al.) across dataset collection, cleaning, annotation, release, and deprecation; continuously exposing assumptions, labor, representational choices, and power asymmetries in dataset curation and deployment. Practical checklists, iterative audits, and open sharing of data provenance are central (Ciston et al., 26 Jan 2025).
  • Power Audits and Limitations Analyses: Requiring for each technical decision a systematic mapping of beneficiaries, risks, institutional incentives, and structural “red-lines”; integrating critical/limitation reviews of key assumptions, failure modes, and societal risk matrices at each project stage (Jin et al., 12 Oct 2025).
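The CSH workflow above—twelve boundary questions, each answered twice, once descriptively (“is”) and once normatively (“ought”)—can be sketched as a small data structure. The category and role labels below follow Ulrich's standard CSH groupings and the article's examples, but the exact wording and the `elicit_cycle` helper are illustrative, not prescribed by Duboc et al.:

```python
from dataclasses import dataclass

# Four CSH categories, each probed by three boundary questions
# (labels are illustrative; wording varies across CSH presentations).
CSH_QUESTIONS = {
    "motivation": ["client", "purpose", "measure of improvement"],
    "control": ["decision-maker", "resources", "constraints"],
    "knowledge": ["expert", "expertise", "guarantor"],
    "legitimation": ["witness", "emancipation", "worldview"],
}

@dataclass
class BoundaryJudgment:
    """One 'is'/'ought' pair for a single CSH question."""
    category: str
    role: str
    is_judgment: str      # what the system currently assumes
    ought_judgment: str   # what stakeholders argue it should assume

def elicit_cycle(answers: dict) -> list[BoundaryJudgment]:
    """Map {(category, role): (is, ought)} answers into explicit
    requirements artifacts; unanswered questions stay visible."""
    artifacts = []
    for category, roles in CSH_QUESTIONS.items():
        for role in roles:
            is_j, ought_j = answers.get(
                (category, role), ("unexamined", "unexamined"))
            artifacts.append(BoundaryJudgment(category, role, is_j, ought_j))
    return artifacts

# Example: one answered question from a hypothetical eldercare critique
judgments = elicit_cycle({
    ("motivation", "client"): ("care provider", "residents and their families"),
})
# Divergent is/ought pairs are exactly the hidden value conflicts
# that CSH aims to keep explicit in the requirements.
divergent = [j for j in judgments if j.is_judgment != j.ought_judgment]
```

Keeping unanswered questions as explicit “unexamined” entries, rather than omitting them, mirrors the CSH aim of making silent boundary judgments visible.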

4. Cognitive Modes, Power Structures, and Feedback

A salient contribution to CTP analysis is the formal modeling of cognitive and power dynamics in human–AI collaboration and technical systems governance.

  • Cognitive Delegation: The risk that interpretive control and critical judgment are offloaded to computational agents, leading to black-boxing of latent theoretical or ontological assumptions (formally, w_h → 0 as human input vanishes) (Berry, 13 Dec 2025).
  • Productive Augmentation: Retaining strategic human oversight while delegating routine or scale-intensive operations to AI; optimizing for mixed workload regimes where the locus of theoretical directionality remains with the human (α > β, where C_human = α, C_AI = β, and α + β = C_total) (Berry, 13 Dec 2025).
  • Cognitive Overhead: Recognition of the human effort required for context management, output version-control, and managing computationally-induced feature creep. Beyond a certain threshold, increased overhead can negate the benefits of augmentation (Berry, 13 Dec 2025).
  • Power Structures in AI: Diagnosis that dysfunction in the translation of ethical intent to practice typically results from unchecked or unjust power structures, rather than from individual malfeasance or technical error. Rebalancing requires a theory and method for making power explicable, explicit, and subject to contestation at each technical and organizational layer (Jin et al., 12 Oct 2025).
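The three cognitive modes above can be combined into a toy classifier. Normalizing C_total to 1.0, the function below checks Berry's conditions in order: a vanishing human share (w_h → 0), overhead that swamps the human contribution, and the α > β regime of productive augmentation. The specific thresholds and category names are assumptions for illustration, not values from the source:

```python
def augmentation_regime(alpha: float, beta: float, overhead: float) -> str:
    """Classify a human-AI workload split.

    alpha    -- human cognitive share (C_human)
    beta     -- AI share (C_AI), with alpha + beta = C_total = 1.0
    overhead -- human effort spent on context management and
                output version-control rather than the task itself

    Thresholds are illustrative, not taken from the cited work.
    """
    assert abs(alpha + beta - 1.0) < 1e-9, "shares must sum to C_total = 1"
    if alpha <= 0.05:                 # w_h -> 0: human input vanishes
        return "cognitive delegation"
    if overhead >= alpha:             # overhead negates the human share
        return "overhead-dominated"
    if alpha > beta:                  # human retains theoretical direction
        return "productive augmentation"
    return "at-risk delegation"
```

For example, a 70/30 split with modest overhead lands in productive augmentation, while the same split with overhead exceeding the human share is overhead-dominated, reflecting the threshold effect described above.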

5. Operationalization Across Domains

CTP is domain-independent, but concrete enactments differ by field:

  • Requirements Engineering: CSH is integrated with standard templates (e.g., Volere), systematically populating and revising requirements based on iterative boundary critiques, and surfacing “unrealistic aims,” silent stakeholders, and latent trade-offs (e.g., autonomy vs. safety in eldercare systems) (Duboc et al., 2019).
  • Machine Learning and Dataset Work: Critical field guides specify stepwise best practices encompassing selection/origins, preprocessing, annotation, sharing, maintenance, and deprecation; every stage is scrutinized for epistemic, ethical, and representational contingencies (Ciston et al., 26 Jan 2025).
  • Human-AI Collaboration: The AI sprint model details time-boxed, reflective, and thoroughly documented iterative loops. Strategic best practices are established for checking LLM hallucinations, ensuring anonymization, documenting all prompt pipelines, and embedding the full raw prompt–output archive in project appendices (Berry, 13 Dec 2025).

6. Systemic Challenges and Transformative Impact

CTP surfaces systemic failures of “mainstream tech ethics,” including:

  • Vagueness and Insufficient Enforcement: High-level ethical principles proliferate without binding authority or mechanisms for trade-off adjudication, enabling reputational risk management instead of substantive reform (Green, 2021).
  • Individualization of Responsibility: Overemphasis on toolkits, fair ML checklists, or virtue ethics diverts attention from structural incentives, strategic decision-making, or data economies driving harm (Green, 2021).
  • Ethics-washing and Co-option: Adoption of ethical language serves corporate self-protection, enabling strategies to marginalize substantive critique, disempower internal ethics roles, and—via feedback loops—continually reconfigure material outcomes and principles (Green, 2021).

In response, CTP calls for:

  • Distributed and Participatory Governance: Shifting the locus of ethical authority from individual contributors to cross-functional, cross-institutional, and stakeholder-inclusive deliberative bodies with real veto power and public accountability (Green, 2021, Jin et al., 12 Oct 2025).
  • Integration of Critical and Scientific Norms: Commitment to critical examination and limitation analysis as co-equal with performance adjudication; reframing ethics as a constructive adversary, not a bureaucratic constraint (Jin et al., 12 Oct 2025).
  • Reflexive, Transparent Publication: Systematic logging and sharing of process artifacts (prompt protocols, “ideal maps,” PowerAudit tables), thus enabling external scrutiny and reproducibility and making CTP itself an object of ongoing collective inquiry (Berry, 13 Dec 2025, Ciston et al., 26 Jan 2025).

7. Guidelines and Best Practices

Consistent recommendations for enacting CTP across technical fields include:

  • Convene power-mapping and boundary-critique workshops at project inception, explicitly identifying beneficiaries, risk-bearers, and unexamined assumptions (Jin et al., 12 Oct 2025, Duboc et al., 2019).
  • Implement continuous power-audit and limitation-review checklists for every major architectural or analytical decision (Jin et al., 12 Oct 2025).
  • Conduct open, public documentation of ethical claims, artifacts, and failures; expose the full sociotechnical system, not just the technical surface (Berry, 13 Dec 2025, Ciston et al., 26 Jan 2025).
  • Codify and periodically reevaluate structural “red-lines” (absolute bans, audit requirements, participatory checkpoints) in both project and organizational governance (Green, 2021).
  • Treat every technical output as provisional, subject to re-interpretation, reframing, and contestation within a broader civic and institutional feedback loop (Berry, 13 Dec 2025).
  • Build and maintain coalitions of technical practitioners, ethicists, domain experts, and impacted communities to contest, revise, and co-govern the boundary work of technical practice (Jin et al., 12 Oct 2025).
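The power-mapping, continuous-audit, and red-line recommendations above can be condensed into a per-decision checklist. The category names and the completeness rule below are illustrative assumptions; the cited works call for systematic power audits but do not fix a schema:

```python
# Audit categories drawn from the guidelines above; names are
# illustrative, not prescribed by the cited works.
REQUIRED_FIELDS = ("beneficiaries", "risk_bearers",
                   "institutional_incentives", "red_lines", "limitations")

def audit_decision(decision: str, audit: dict) -> dict:
    """Return the audit record plus a list of unexamined categories,
    so gaps are surfaced rather than silently passed."""
    missing = [f for f in REQUIRED_FIELDS if not audit.get(f)]
    return {
        "decision": decision,
        "audit": audit,
        "unexamined": missing,
        "complete": not missing,
    }

# Hypothetical example: an architectural decision with one gap.
report = audit_decision("deploy face-matching module", {
    "beneficiaries": ["security vendor"],
    "risk_bearers": ["misidentified individuals"],
    "institutional_incentives": ["contract renewal"],
    "red_lines": [],          # empty: no bans recorded -> flagged
    "limitations": ["demographic error-rate skew"],
})
```

Treating an empty category as “unexamined” rather than “not applicable” enforces the guideline that red-lines and limitations must be positively codified, not merely left blank.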

Empirical case studies—ranging from machine learning dataset curation (e.g., community audits, dataset deprecation protocols) and requirements engineering for eldercare (CSH mapping iterations) to systematic analyses of Explainable AI in medical imaging (template-based PowerAudits)—exemplify the concrete operationalization and transformative potential of Critical Technical Practice across contemporary computational research domains (Berry, 13 Dec 2025, Duboc et al., 2019, Ciston et al., 26 Jan 2025, Jin et al., 12 Oct 2025).
