Sense-Giving Strategies
- Sense-giving strategies are systematic approaches that influence collective meaning-making by filtering, framing, and amplifying information in uncertain contexts.
- They are evaluated through methodologies like social network analysis, content coding, and quantitative regression models to measure engagement and credibility.
- Practical applications include hybrid media amplification, scaffolded Q&A frameworks, and layered ML explanation designs that enhance interpretability and action.
Sense-giving strategies are systematic approaches through which actors—such as media organisations, educators, community moderators, and technical system designers—shape how audiences or participants interpret ambiguous phenomena, events, explanations, or research findings. In diverse socio-technical contexts, sense-giving is operationalized as a deliberate effort to structure collective meaning-making, influence action, and foster coherence by filtering, framing, and amplifying information. This notion is theoretically coupled to sense-making, the endogenous process by which individuals or groups extract cues and negotiate shared understandings from complex or uncertain situations (Marx et al., 2020). Sense-giving strategies have been empirically studied in crisis communication, collaborative Q&A environments, STEM education, and participatory design for machine learning interpretability.
1. Theoretical Foundations of Sense-Giving
Sense-giving is formally defined as “the attempt to influence or change the way others perceive a situation and steer actions towards a favourable direction” (Pratt 2000; Giuliani 2016, as cited in Marx et al., 2020). It operates in tandem with collective sense-making, whereby individuals seek cues, share raw observations, and construct mutual interpretations (Weick et al. 2005). While sense-makers are primarily oriented toward seeking and sharing unprocessed information, sense-givers act as filters or amplifiers, actively shaping the information environment to promote coordinated understanding and action.
In machine learning interpretability, explanation strategies serve as an “empirical-analytical lens explicating how technical explanations mediate the contextual preferences concerning people’s interpretations” (Benjamin et al., 2021). Stakeholders deploy abductive reasoning—inferring likely causes or meanings from prior experience—when encountering technical explanations or artifacts, and their sense-giving strategies mediate the usability and resonance of these artifacts in situated contexts.
2. Taxonomies of Sense-Giving Strategies
Empirical research identifies distinct classes of sense-giving strategies across domains:
- Media Organisation Communication in Disasters (Marx et al., 2020):
- Retweeting of Local In-House Outlets (“Popularity Arbitrage”): National outlets redirect attention to regional branches via exclusive retweeting, with proxy amplification but minimal crisis-specific original content.
- Bound Amplification of Organisation-Associated Journalists: Retweeting is restricted to employed journalists who offer quality-assured, real-time updates, consolidated into curated feeds.
- Open Message Amplification: Retweets are inclusive, involving private eyewitnesses, emergency management agencies (EMAs), and NGOs, with low filtering barriers and calls for user-generated content.
- Community Q&A Science Sensemaking (He et al., 2023):
- Question-Title Strategies: Use of significance signs (impact, timeliness, high-profile venues, researcher background), rigorous descriptions (hedging, data use), and eye-catching narratives (quotes, emotional arousal, counter-intuitive claims); a cue-tagging sketch follows this list.
- Question-Description Strategies: Supplementary resources (paper links, news, visuals), comprehensive methods/results, structured presentation (ledes, indentation, bulleting).
- Machine Learning Explanation Strategies (Benjamin et al., 2021):
- Paradigmatic Strategies: Utility (features as lookup tools) and contrast (relating ML output to domain language); hierarchy justification with cluster boundaries.
- Conceptual Strategies: Adopting an algorithmic perspective and interrogating coherence between model outputs and stakeholder expectations.
- Presuppositional Strategies: Organizational hierarchy mapping and anchoring meaning in socio-material relations.
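As a concrete illustration of how such title cues could be operationalized for coding, the following sketch tags question titles with coarse strategy indicators. This is a minimal sketch: the cue lexicons and the `tag_title_strategies` helper are hypothetical simplifications, not the validated codebook of He et al. (2023).

```python
import re

# Hypothetical cue lexicons; real codebooks are far richer and are
# validated against trained human coders.
HEDGING_CUES = {"may", "might", "suggests", "appears", "preliminary", "could"}
EMOTION_CUES = {"breakthrough", "stunning", "shocking", "surprising", "amazing"}
SIGNIFICANCE_CUES = {"nature", "science", "nobel", "first", "landmark"}

def tag_title_strategies(title: str) -> dict:
    """Tag a question title with coarse sense-giving strategy cues."""
    tokens = set(re.findall(r"[a-z']+", title.lower()))
    return {
        "rigorous_description": bool(tokens & HEDGING_CUES),
        "eye_catching_narrative": bool(tokens & EMOTION_CUES),
        "significance_signs": bool(tokens & SIGNIFICANCE_CUES),
        "has_quote": '"' in title or "\u201c" in title,
    }

print(tag_title_strategies(
    'How to interpret the "stunning" Nature paper that suggests '
    'room-temperature superconductivity?'
))
```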
3. Methodologies for Identifying and Evaluating Strategies
Sense-giving strategies are identified and evaluated using domain-specific methodologies:
- Social Network Analysis (SNA) (Marx et al., 2020): Construction of retweet networks, in-degree and betweenness centrality metrics, cascade analytics, and diversity indices to quantify amplification practices and network effects (see the network sketch after this list).
- Content Analysis: Coding tweets, questions, and answers with multi-category codebooks (e.g., Mayring 2000/2014), inter-rater reliability tests (Krippendorff’s α), and thematic analysis to distinguish strategy types (a reliability sketch follows this list).
- Quantitative Regression Models (He et al., 2023): Poisson and Beta regression to link strategy indicators to engagement (views, answers) and answer quality (on-topic, evidence, reasoning, social), controlling for topic, asker features, and question metadata (see the regression sketch after this list).
- Participatory Design Workshops (Benjamin et al., 2021): Data capture via video/audio, annotation, physical model construction, and staged interpretive coding to extract explanation strategy classes.
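To make the network measures concrete, here is a minimal sketch of a retweet-network analysis using the networkx library. The toy edge list and account names are illustrative assumptions, not data from Marx et al. (2020).

```python
import networkx as nx

# Toy retweet network: an edge (u, v) means account u retweeted account v,
# so attention flows toward v. Account names are illustrative.
retweets = [
    ("user1", "journalist_a"), ("user2", "journalist_a"),
    ("user3", "journalist_a"), ("user1", "regional_outlet"),
    ("national_outlet", "regional_outlet"), ("user2", "eyewitness_x"),
]
G = nx.DiGraph(retweets)

# In-degree centrality: who gets amplified most often.
in_degree = nx.in_degree_centrality(G)
# Betweenness centrality: who brokers information between network regions.
betweenness = nx.betweenness_centrality(G)

for node in sorted(G, key=in_degree.get, reverse=True):
    print(f"{node:16s} in-degree={in_degree[node]:.2f} "
          f"betweenness={betweenness[node]:.2f}")
```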
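Inter-rater reliability for such coding is commonly reported as Krippendorff’s α. Below is a minimal sketch for the simplest case (two coders, nominal categories, no missing data), computed from the coincidence matrix; published studies typically rely on a vetted library implementation, and the example labels here are invented.

```python
import numpy as np

def krippendorff_alpha_nominal(coder_a, coder_b):
    """Krippendorff's alpha for two coders, nominal data, no missing values.

    alpha = 1 - D_o / D_e, from the coincidence matrix. A minimal sketch,
    not a general-purpose implementation.
    """
    values = sorted(set(coder_a) | set(coder_b))
    idx = {v: i for i, v in enumerate(values)}
    o = np.zeros((len(values), len(values)))
    for a, b in zip(coder_a, coder_b):
        o[idx[a], idx[b]] += 1  # each unit contributes both ordered pairs
        o[idx[b], idx[a]] += 1
    n = o.sum()            # pairable values (2 * number of units)
    n_v = o.sum(axis=0)    # per-category marginal totals
    observed_disagreement = n - np.trace(o)      # off-diagonal mass
    expected_pairs = n * n - (n_v ** 2).sum()
    return 1 - (n - 1) * observed_disagreement / expected_pairs

# Two coders labelling ten tweets with strategy categories (invented data).
a = ["bound", "open", "open", "arbitrage", "bound",
     "open", "bound", "open", "bound", "open"]
b = ["bound", "open", "bound", "arbitrage", "bound",
     "open", "bound", "open", "bound", "open"]
print(f"alpha = {krippendorff_alpha_nominal(a, b):.3f}")
```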
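The regression step can be sketched as a Poisson GLM in statsmodels, where exponentiated coefficients read as incidence rate ratios (IRRs). The variable names, simulated data, and effect sizes below are illustrative assumptions, not the actual features or findings of He et al. (2023).

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500

# Simulated question-level strategy indicators (hypothetical names).
df = pd.DataFrame({
    "emotional_arousal": rng.integers(0, 2, n),
    "hedging": rng.integers(0, 2, n),
    "paper_link": rng.integers(0, 2, n),
})
# Simulate answer counts with a known positive effect of emotional arousal.
lam = np.exp(1.0 + 0.5 * df["emotional_arousal"] + 0.1 * df["paper_link"])
df["answers"] = rng.poisson(lam)

X = sm.add_constant(df[["emotional_arousal", "hedging", "paper_link"]])
model = sm.GLM(df["answers"], X, family=sm.families.Poisson()).fit()

# Exponentiated coefficients are incidence rate ratios (IRRs):
# IRR > 1 means the cue is associated with more answers.
print(np.exp(model.params))
```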
4. Efficacy and Trade-Offs of Sense-Giving Approaches
Comparative empirical findings indicate varying degrees of efficacy:
- In disaster contexts, bound amplification yields the highest credibility and the most responsive situational updates (cascade sizes averaging 500–1,500 retweets per journalist) (Marx et al., 2020), whereas open amplification maximizes coverage but introduces noise and demands active moderation. Popularity arbitrage maintains engagement but is less effective at disseminating timely information.
- In Q&A science communication, strategies leveraging emotional arousal and counter-intuitive framing substantially increase engagement (incidence rate ratios (IRRs) for answers > 1.5) but do not improve answer quality; cues of rigor (hedging, paper links) elevate evidence- and reasoning-based answers but do not attract wide participation (He et al., 2023).
- Rigidly prescribed prompts in STEM problem solving do not reliably induce sustained conceptual sensemaking; procedural discussion dominates instead (mean conceptual discussion of 9%, reaching 38% in the best segments) (Martinuk et al., 2011). This suggests that superficial sense-giving (e.g., check-box prompts) is insufficient without deeper framing interventions and modeling.
A plausible implication is that the most effective sense-giving strategies are context-sensitive, balancing inclusivity, credibility, and participatory engagement, and requiring ongoing moderation and adaptive signal amplification.
5. Practical and Design Recommendations
Cross-domain findings underpin several actionable recommendations for practitioners:
- Media Organisations in Crisis (Marx et al., 2020):
- Adopt hybrid models blending bound and verified open amplification.
- Pre-define qualification criteria for external amplification.
- Monitor network metrics (in-degree, cascade growth, diversity index) for real-time prioritization (see the monitoring sketch after this list).
- Incentivize and moderate user participation with structured hashtag campaigns and rapid fact-checking workflows.
- Community Q&A Platforms (He et al., 2023):
- Implement dual-track feedback (knowledge tags, upvotes) and AI-based answer filtering.
- Scaffold question-asking with real-time prompts for paper links and hedging.
- Enhance collaborative editing with badges, structured edit reasons, and mentorship mechanisms.
- Directly incentivize expert involvement via personalized invitations.
- Explanation Interfaces for Non-ML Experts (Benjamin et al., 2021):
- Support layering and juxtaposition of multiple explanation methods (a layering sketch follows this list).
- Integrate contextual cues to anchor interpretations in lived professional environments.
- Surface algorithmic uncertainty visualization to foster deeper engagement.
- Design interactive modes that allow users to enact and reflect on ML pipelines.
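One of the monitored quantities, the diversity index, can be operationalized as Shannon diversity over the types of accounts an organisation amplifies within a monitoring window. A minimal sketch follows; the source-type labels are illustrative assumptions rather than the exact index used by Marx et al. (2020).

```python
import math
from collections import Counter

def shannon_diversity(source_types):
    """Shannon diversity of amplified source types; higher values mean the
    organisation is retweeting a broader mix of account types."""
    counts = Counter(source_types)
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total) for c in counts.values())

# Source types of accounts retweeted in the last monitoring window
# (labels are illustrative).
window = ["journalist", "journalist", "eyewitness", "ema", "journalist",
          "ngo", "eyewitness", "journalist"]
print(f"diversity index = {shannon_diversity(window):.3f}")
```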
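As a rough sketch of layering and uncertainty surfacing, the example below juxtaposes three explanation layers for a single prediction: class probabilities (uncertainty), global feature importances (the utility strategy’s “lookup tool”), and a nearest training case (supporting contrast). The model, dataset, and layer choices are illustrative assumptions, not the interface design studied by Benjamin et al. (2021).

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

# Train a small model on a stock dataset purely for illustration.
data = load_iris()
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(data.data, data.target)

x = data.data[0:1]

# Layer 1: class probabilities, surfacing uncertainty rather than a bare label.
proba = clf.predict_proba(x)[0]
print("predicted:", data.target_names[proba.argmax()])
print("class probabilities:", dict(zip(data.target_names, proba.round(3))))

# Layer 2: global feature importances, usable as a "lookup tool".
for name, imp in zip(data.feature_names, clf.feature_importances_):
    print(f"{name}: {imp:.3f}")

# Layer 3: the most similar other training case, grounding the output in a
# concrete example the stakeholder can contrast with.
nearest = np.linalg.norm(data.data - x, axis=1).argsort()[1]
print("most similar training case:", data.data[nearest],
      "->", data.target_names[data.target[nearest]])
```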
6. Limitations, Controversies, and Directions for Further Research
Empirical studies highlight limitations and persistent challenges:
- Prompting vs. Framing (Martinuk et al., 2011): Prescribed strategies in educational contexts do not reliably shift epistemological framing toward sensemaking; instead, prompts serve as external cues for procedural compliance.
- Engagement-Quality Trade-Offs (He et al., 2023): Strategies that maximize public participation often underperform on knowledge construction metrics, with rigorous or hedged content suffering lower visibility.
- Contextual Mediation (Benjamin et al., 2021): Explanation strategies are mediated by stakeholders' prior experience, organizational hierarchies, and socio-material relations, requiring participatory research to surface invisible frames.
Research directions include:
- Employing technological mediation theories (post-phenomenology, actor-network theory) to analyze sense-giving in human-technology interfaces.
- Validating typologies of explanation and sense-giving strategies across domains with ethnographic and grounded theory methods.
- Designing interventions that actively engage participants in combined conceptual, procedural, and contextual framing, rather than relying exclusively on structural prompts or amplification metrics.
7. Summary Table: Exemplary Sense-Giving Strategies
| Domain | Strategy Class | Concrete Feature |
|---|---|---|
| Media Disaster Comm | Bound Amplification | Retweeting employed journalists |
| Community Q&A | Emotional Framing | Eye-catching narrative with emotional arousal |
| ML Interpretability | Paradigmatic Explanation | Utility of feature explanations as lookup tools |
Each strategy’s effectiveness is domain-contingent and tethered to mechanisms of credibility, engagement, and multi-level framing by the sense-giver. Comprehensive application requires adaptive measurement, iterative moderation, and contextual sensitivity to participant needs and organizational environments.