Student Expectations in Learning Analytics (SELAQ)

Updated 15 January 2026
  • SELAQ is a psychometric instrument that measures students’ ‘ideal’ (desired) and ‘expected’ (predicted) outcomes regarding learning analytics systems.
  • It utilizes dual-item responses and factor analysis to reveal significant gaps and cluster students based on privacy and service feature expectations.
  • The instrument informs the design of learning analytics dashboards by mapping user expectations to system features and data governance practices.

The Student Expectation of Learning Analytics Questionnaire (SELAQ) is a psychometric instrument developed to systematically capture higher education students’ expectations and preferences concerning the operation, features, and data practices of Learning Analytics (LA) systems. It is anchored in the dual recognition that LA adoption depends on both technical efficacy and stakeholder acceptance, especially with respect to privacy, transparency, and actionable feedback. SELAQ is used internationally for formative system design, user segmentation, and evaluation of LA deployment in educational settings, and has seen adaptations and analytic applications across a range of European and global higher education contexts (Brdnik et al., 2022, Aadmi-Laamech et al., 2024, Asatryan et al., 8 Jan 2026).

1. Conceptual Foundation and Purpose

SELAQ operationalizes student attitudes toward LA systems by distinguishing between “desires” (ideal features or practices) and “expectations” (anticipations about what will actually be delivered). This dyadic structure allows for empirical estimation of gaps between student preference and perceived institutional capability or trustworthiness. The instrument addresses two principal domains:

  • Data protection and governance (consent workflows, data security, anonymization, and use-limitation)
  • Core LA service features and staff involvement (progress dashboards, peer comparison, goal-tracking, feedback, and obligation/competence of teaching staff in interpreting LA outputs)

The theoretical foundation of SELAQ is linked to the broader goals of LA dashboards, such as supporting self-reflection, data-driven decision making, and fostering self-regulated learning (Brdnik et al., 2022, Aadmi-Laamech et al., 2024, Asatryan et al., 8 Jan 2026).

2. Questionnaire Structure, Item Content, and Administration

SELAQ has been implemented in both 12- and 10-item variants, with items typically structured along either a 7-point (strongly disagree to strongly agree) or 5-point Likert scale. Each item is administered in two forms: “ideal” (what the student wants) and “expected” (what they think will occur). An exemplar SELAQ-12 configuration divides items between “Ethical and Privacy Expectations” (Q1–Q3, Q5–Q6) and “Service Feature Expectations” (Q4, Q7–Q12), as detailed below (Brdnik et al., 2022, Asatryan et al., 8 Jan 2026):

Item #  Sample Item Description                    Example Factor
Q1      Data will be stored securely               Ethics/Privacy
Q2      Data will be kept anonymous when shared    Ethics/Privacy
Q3      Transparency about data processing         Ethics/Privacy
Q4      Up-to-date learning progress               Service Feature
Q5      Data not used for other purposes           Ethics/Privacy
Q6      Consent for any new use                    Ethics/Privacy
Q7      Clear explanation of analytics features    Service Feature
...     ...                                        ...
Q12     Ability to set/track personal goals        Service Feature
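The dual-item design above lends itself to a simple per-item gap score (mean “ideal” rating minus mean “expected” rating). A minimal sketch, assuming responses on the 7-point scale; the item keys and ratings here are illustrative, not real survey data:

```python
from statistics import mean

# Each SELAQ item is answered twice: once in its "ideal" form and once
# in its "expected" form (7-point Likert). Ratings are illustrative.
responses = {
    "Q1_secure_storage": {"ideal": [7, 6, 7, 6], "expected": [5, 5, 6, 4]},
    "Q4_progress_updates": {"ideal": [6, 7, 5, 6], "expected": [4, 5, 4, 4]},
}

def expectation_gap(item):
    """Mean ideal rating minus mean expected rating for one item."""
    r = responses[item]
    return mean(r["ideal"]) - mean(r["expected"])

gaps = {item: expectation_gap(item) for item in responses}
```

A positive gap indicates that students want more from the feature than they believe the institution will actually deliver.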

Alternative SELAQ-10 itemizations, as in (Aadmi-Laamech et al., 2024), focus more explicitly on data governance (consent/security/etc.), analytics support for decision making, progress visualization, and teaching-staff involvement.

Administration protocols differ by study. For example, in (Aadmi-Laamech et al., 2024), SELAQ was administered during 20-minute workshop segments both before and after prototype use, using individual-item means ± SD as the analytic grain. In (Brdnik et al., 2022), survey translations and back-translations ensured semantic fidelity, and careless responses were excluded by removing the fastest 10% of completions.
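The careless-response screen used in (Brdnik et al., 2022), dropping the fastest 10% of completions, can be sketched as follows; the record layout and timings are assumptions for illustration:

```python
def drop_fastest_decile(records):
    """records: (respondent_id, completion_seconds) pairs. Removes the
    fastest 10% of completions (rounded down) and returns the rest,
    ordered by completion time."""
    ordered = sorted(records, key=lambda r: r[1])
    n_drop = len(ordered) // 10
    return ordered[n_drop:]

# 20 illustrative respondents with completion times from 60 s upward
sample = [(i, 60 + 5 * i) for i in range(20)]
kept = drop_fastest_decile(sample)  # the two fastest records are removed
```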

3. Psychometric Structure and Statistical Analysis

SELAQ’s psychometric properties have been probed via exploratory and, in the original development, confirmatory factor analyses. In (Brdnik et al., 2022), separate EFAs (principal-axis, oblimin rotation) for ideal and predicted subscales (N = 276) yielded two interpretable factors:

  • “Ethical and Privacy Expectations” (Q1–Q3, Q5–Q6)
  • “Service Feature Expectations” (Q4, Q7–Q12)

These two factors accounted for 46.57% and 53.74% of variance in the ideal and predicted forms, respectively. All inter-item correlations met the >0.30 criterion, and KMO measures were >0.88, indicating sampling adequacy. No CFA was reported for these validated translations, and Cronbach’s α values were not provided, although α is typically calculated as:
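The >0.30 inter-item correlation criterion mentioned above can be checked directly from the response matrix. A minimal numpy sketch; the response matrix is fabricated:

```python
import numpy as np

def min_interitem_correlation(scores):
    """scores: respondents x items matrix. Returns the smallest
    off-diagonal Pearson correlation between items."""
    r = np.corrcoef(np.asarray(scores, dtype=float), rowvar=False)
    return r[~np.eye(r.shape[0], dtype=bool)].min()

# Fabricated 6-respondent, 3-item matrix with strongly co-varying items
scores = [[5, 5, 6], [6, 6, 7], [4, 4, 5], [7, 7, 7], [3, 3, 4], [6, 6, 6]]
lowest_r = min_interitem_correlation(scores)
```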

\alpha = \frac{N}{N-1}\left(1 - \frac{\sum_{i=1}^{N}\sigma_i^2}{\sigma_\text{total}^2}\right)

Original validation work, as described in (Asatryan et al., 8 Jan 2026), establishes high reliability (α = .88–.90 for Data Protection, α = .85–.89 for LA General, α = .82–.86 for LA Staff) and configural/metric invariance across countries. In some studies (e.g., (Aadmi-Laamech et al., 2024)), no factor analysis or reliability metrics were reported.
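The α formula above maps directly to code. A minimal numpy sketch, using toy data rather than SELAQ responses:

```python
import numpy as np

def cronbach_alpha(scores):
    """scores: respondents x items matrix of Likert ratings.
    Implements alpha = N/(N-1) * (1 - sum of item variances / variance
    of the summed total scores)."""
    scores = np.asarray(scores, dtype=float)
    n = scores.shape[1]                          # number of items N
    item_vars = scores.var(axis=0, ddof=1)       # per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return (n / (n - 1)) * (1 - item_vars.sum() / total_var)
```

As a sanity check, a matrix whose items are perfectly consistent across respondents yields α = 1.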

4. Empirical Findings and Interpretive Clustering

Analysis of SELAQ responses consistently reveals that students’ “ideal” expectations substantially exceed their “predicted” or “actual” expectations for most items, with measured mean differences of roughly one Likert point in larger samples (Brdnik et al., 2022). Within-sample SDs generally range from 1.3 to 1.7. Advanced statistical techniques, such as k-means clustering and post-hoc decision tree explanation, have been used to segment students into attitudinal clusters (Asatryan et al., 8 Jan 2026):

Cluster       DP Desire  DP Expect  LA Desire  LA Expect  Interpretation
Enthusiasts   6.34       6.14       6.01       5.64       High desire/expectation
Realists      6.34       4.95       5.76       3.73       Desire > expectation gap
Cautious      6.19       5.64       3.68       3.75       Strong DP, low LA
Indifferents  3.55       4.02       4.24       3.91       Low engagement
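The clustering step can be illustrated with a minimal Lloyd's k-means over the four aggregated scores (DP/LA × desire/expectation). The student rows below are fabricated around the reported cluster means, and the deterministic initialization is a simplification of what a full pipeline would do:

```python
import numpy as np

def kmeans(points, init_centroids, iters=25):
    """Minimal Lloyd's algorithm: returns (centroids, labels)."""
    cent = np.array(init_centroids, dtype=float)
    labels = np.zeros(len(points), dtype=int)
    for _ in range(iters):
        # distance from every point to every centroid, then reassign
        dists = np.linalg.norm(points[:, None, :] - cent[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(len(cent)):
            if (labels == j).any():
                cent[j] = points[labels == j].mean(axis=0)
    return cent, labels

rng = np.random.default_rng(0)
# columns: DP desire, DP expectation, LA desire, LA expectation (1-7 scale)
profiles = np.array([
    [6.3, 6.1, 6.0, 5.6],   # "Enthusiast"-like
    [6.3, 5.0, 5.8, 3.7],   # "Realist"-like
    [6.2, 5.6, 3.7, 3.8],   # "Cautious"-like
    [3.6, 4.0, 4.2, 3.9],   # "Indifferent"-like
])
students = np.clip(
    np.repeat(profiles, 25, axis=0) + rng.normal(0, 0.3, size=(100, 4)), 1, 7)
cent, labels = kmeans(students, init_centroids=students[[0, 25, 50, 75]])
```

With well-separated profiles, the four recovered centroids land close to the generating means, mirroring how the published cluster table is derived from raw responses.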

Disciplinary differences are observed: students in sustainability-related majors skew toward high engagement, whereas architecture and business students are more likely to fall in the “Indifferent” and “Realist” clusters. A plausible implication is that LA interventions and communications may need to be discipline-sensitive to maximize acceptance.

5. Instrumental Use in System and Interface Design

SELAQ responses directly inform the participatory design of student-facing LA dashboards and systems. In (Brdnik et al., 2022), synthesized expectation data and follow-up focus group insights determined the feature set of a dashboard prototype for an engineering course, mapping desired features to implemented interface widgets:

  • Peer comparison (anonymized percentile displays)
  • Historical performance and course pass rates (timeline and visualizations)
  • Early-grade prediction via classification (98% precision for pass/fail after one month) and regression (MAE = 11.2 points on a 0–100 scale)
  • Behavior-trend visualizations (active days, click counts)
  • Self-regulation support (goal tracking, “progress toward goal” pills)
  • Explainable AI (SHAP-based explanations for grade predictions)
  • Notifications/reminders and user-facing consent toggles (GDPR compliance, partially implemented)
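The headline evaluation figures in the list above (precision for pass/fail classification, MAE for grade regression) follow standard definitions. A small self-contained sketch; the model outputs are made up for illustration:

```python
def precision(y_true, y_pred, positive="pass"):
    """Fraction of predicted positives that are truly positive."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    predicted_pos = sum(p == positive for p in y_pred)
    return tp / predicted_pos if predicted_pos else 0.0

def mean_absolute_error(y_true, y_pred):
    """Mean absolute difference, here on a 0-100 grade scale."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# Made-up outcomes and model predictions for four students
outcomes = ["pass", "fail", "pass", "pass"]
predicted = ["pass", "pass", "pass", "fail"]
grades, predicted_grades = [82, 55, 74, 90], [75, 60, 70, 88]
p = precision(outcomes, predicted)
mae = mean_absolute_error(grades, predicted_grades)
```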

These applications demonstrate a closed-loop cycle: student expectations assessed by SELAQ are mapped into actual feature allocation, with system evaluation (e.g., pre/post intervention in (Aadmi-Laamech et al., 2024)) tracking whether delivered functionality aligns with evolving expectations.

6. Limitations, Reporting Practices, and Recommendations

Several reported SELAQ deployments omit confirmatory factor analysis and criterion validation, and report incomplete or no reliability indices (e.g., Cronbach’s α is often referenced but not computed; (Brdnik et al., 2022, Aadmi-Laamech et al., 2024)). Many studies focus exclusively on student informants, omitting teaching staff or administrator perspectives. Some sampling strategies (e.g., focus groups or opportunistic recruitment) may limit generalizability or introduce moderator bias. Additional psychometric validation, stakeholder triangulation, and large-scale cross-course implementations are repeatedly recommended. Further, the emotional and ethical risks involved in automated prediction and feedback dissemination call for careful ethical review and piloting. A plausible implication is that future SELAQ research will benefit from more systematic reporting of validation statistics and from multi-stakeholder sampling protocols.

7. Implications for Learning Analytics Adoption and Future Directions

SELAQ facilitates institutional introspection about LA readiness and guides feature prioritization in LA tool design by empirically surfacing the divergence between what students desire and expect from analytics services. Its systematic use allows segmentation of the user base, diagnosis of trust gaps, and ongoing refinement of both service and privacy assurance mechanisms. Emergent directions include the extension of SELAQ beyond student stakeholders, integration with context-specific needs (e.g., well-being dashboards (Aadmi-Laamech et al., 2024)), and incorporation of multi-modal LA environments. The sustained absence of aggregate subscale reporting or robust psychometric stratification in some implementations suggests an ongoing need for consensus standards in LA expectation measurement and reporting (Brdnik et al., 2022, Aadmi-Laamech et al., 2024, Asatryan et al., 8 Jan 2026).
