
Design Science Research (DSR) Methodology

Updated 17 November 2025
  • Design Science Research (DSR) Methodology is an artifact-oriented approach that systematically develops and evaluates innovative solutions for complex organizational and technical challenges.
  • It integrates multiple cycles—relevance, design, and rigor—to ensure artifacts are aligned with real-world problems and grounded in theory.
  • DSR is applied across domains such as cloud services, healthcare, and security, utilizing structured processes for artifact creation, assessment, and iterative refinement.

Design Science Research (DSR) Methodology is a systematic, artifact-oriented research paradigm primarily developed in information systems and software engineering for the rigorous creation, demonstration, and evaluation of problem-solving artifacts. DSR explicitly targets those research domains where organizational or technical challenges cannot be addressed through explanatory, theory-focused science alone but require the synthesis, implementation, and assessment of innovative methods, models, artifacts, or systems.

1. Theoretical Foundations and Definitions

DSR originates in Simon’s concept of the "sciences of the artificial," emphasizing the construction of purposeful artifacts to transform real-world contexts. In contemporary IS/SE, its core deliverable is the artifact—defined as any engineered object (method, model, tool, process, or system) designed and investigated in a context for the purpose of adding value through intervention. The canonical definition is:

\text{DSR} := \{(A, C)\ |\ A \text{ designed} \leftrightarrow C \implies \text{value-adding change}\}

with $A$ as the artifact and $C$ as the problem context (social and knowledge-related) (Pastor et al., 13 Jul 2024). DSR is underpinned by three interlocking cycles:

  • Relevance cycle (problem–context alignment and translation into requirements)
  • Design cycle (artifact construction and internal validation)
  • Rigor cycle (theoretical grounding, external evaluation, and communication).

2. Process Models and Methodological Steps

The most widely adopted DSRM (Design Science Research Methodology) process is the six-phase model by Peffers et al. (2007) (Benali et al., 2021, Peffers et al., 2020):

  1. Problem Identification and Motivation
  2. Objectives for a Solution
  3. Artifact Design and Development
  4. Demonstration
  5. Evaluation
  6. Communication

Each phase builds on the previous, allowing for iterative refinement. Researchers may enter at different phases, but the process is typically presented as sequential for clarity in reporting (Peffers et al., 2020).
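
As an illustration only (not drawn from the cited papers), the sequential-but-iterative character of these phases can be sketched as a small driver loop; the function names, utility threshold, and stopping rule below are hypothetical.

```python
from typing import Callable, Dict, List

# The six DSRM phases of Peffers et al. (2007). The driver below is a
# hypothetical sketch of how iteration between design, demonstration,
# and evaluation might be organized; it is not from the cited papers.
PHASES: List[str] = [
    "Problem Identification and Motivation",
    "Objectives for a Solution",
    "Artifact Design and Development",
    "Demonstration",
    "Evaluation",
    "Communication",
]

def run_dsrm(design: Callable[[Dict], Dict],
             evaluate: Callable[[Dict], float],
             objectives: Dict,
             utility_threshold: float = 0.8,   # assumed acceptance criterion
             max_iterations: int = 5) -> Dict:
    """Iterate phases 3-5 until the artifact meets the assumed utility threshold."""
    artifact: Dict = {}
    for _ in range(max_iterations):
        artifact = design(objectives)          # Phase 3: design and development
        utility = evaluate(artifact)           # Phases 4-5: demonstration and evaluation
        if utility >= utility_threshold:
            break
        objectives["feedback"] = utility       # iterative refinement of the objectives
    return artifact                            # Phase 6 (communication) happens outside the loop
```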

Further methodological enhancements have been proposed, such as the dual-cycle model (design vs. empirical cycles) (Pastor et al., 13 Jul 2024), and domain-adapted variants for emerging, low-theory contexts, which interpose explicit ontological and knowledge-consolidation steps (SCOA: Scoping, Conceptual modeling, Ontology, Artifact) (Thuan et al., 2016).

3. Artifact Types, Formalisms, and Decision Criteria

Artifact types span methods, models, tools, processes, and complete systems, per the definition above.

Selection of artifact type and development approach is contingent on: problem character (design vs knowledge question), stakeholder needs, theoretical landscape, implementation feasibility, and evaluability (Pastor et al., 13 Jul 2024).

Feature models and dynamic Software Product Line (SPL) models are used to specify system variability, while MAPE-K (Monitor–Analyze–Plan–Execute–Knowledge) loops operationalize self-adaptation (Benali et al., 2021). Ontologies are central for domain knowledge consolidation in settings with fragmented experience (Thuan et al., 2016).
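
A minimal sketch of a MAPE-K loop, assuming a load-threshold adaptation rule and hypothetical sensor/actuator callables (the knowledge-base fields and the scale-out plan are illustrative, not the platform described in the cited work):

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Knowledge:
    """Shared knowledge base (the K in MAPE-K): context model plus adaptation history."""
    context: Dict = field(default_factory=dict)
    history: List[Dict] = field(default_factory=list)

def mape_k_loop(read_sensor: Callable[[], Dict],
                actuate: Callable[[Dict], None],
                iterations: int = 3) -> Knowledge:
    kb = Knowledge(context={"load_threshold": 0.75})       # assumed threshold
    for _ in range(iterations):
        kb.context.update(read_sensor())                   # Monitor: merge fresh readings
        overloaded = kb.context.get("load", 0.0) > kb.context["load_threshold"]  # Analyze
        if overloaded:
            plan = {"action": "scale_out",                 # Plan: illustrative reconfiguration
                    "replicas": len(kb.history) + 2}
            actuate(plan)                                  # Execute the adaptation
            kb.history.append(plan)                        # update Knowledge for traceability
    return kb

# Example: kb = mape_k_loop(lambda: {"load": 0.9}, print)
```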

4. Evaluation: Metrics, Validity, and Evidence

Evaluation in DSR is multifaceted. The dominant validation framework organizes claims and validity as follows (Larsen et al., 12 Mar 2025, Kroop, 16 Feb 2025):

High-level claim types:

  • Criterion claims (does the artifact deliver intended utility?)
  • Causal claims (what aspects of the design cause observed effects?)
  • Context claims (in which settings do claims still hold?)

Subtypes and Validity Metrics:

  • Criterion efficacy: Measured performance vs a reference, e.g., $\mathrm{Eff}(A) \geq \mathrm{Eff}(R) - \varepsilon$
  • Criterion characteristic: Comparison on non-performance attributes (theory, model, method, or instance validity)
  • Causal efficacy: Experimental ablation, $\mathrm{Eff}(A) - \mathrm{Eff}(A_{\setminus p}) \geq \delta$ for component $p$ (see the sketch after this list)
  • Contextual/ecological/external validity: Evaluation in real-world deployment or replication across settings
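
A minimal sketch of the efficacy and ablation checks formalized above (the tolerance values and example numbers are assumptions, not figures from the cited framework):

```python
def criterion_efficacy(eff_artifact: float, eff_reference: float,
                       epsilon: float = 0.05) -> bool:
    """Criterion efficacy: Eff(A) >= Eff(R) - epsilon against a reference artifact R."""
    return eff_artifact >= eff_reference - epsilon

def causal_efficacy(eff_full: float, eff_without_p: float,
                    delta: float = 0.10) -> bool:
    """Causal efficacy via ablation: removing component p must cost at least delta."""
    return eff_full - eff_without_p >= delta

# Illustrative values only.
assert criterion_efficacy(0.82, 0.85)   # within epsilon of the reference
assert causal_efficacy(0.82, 0.70)      # component p contributes at least delta
```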

Additional essential validity types—instrument, technical, design, purpose, and generalization—must be addressed explicitly to ensure measurement rigor and artifact credibility (Kroop, 16 Feb 2025).

Metrics may include task completion time, error rates, coverage ratios, satisfaction scales (often Likert or System Usability Scale), and qualitative thematic counts (Peffers et al., 2020, Miranda et al., 2021).
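
For instance, the System Usability Scale cited above has a fixed scoring rule (odd-numbered items contribute score minus 1, even-numbered items 5 minus score, summed and multiplied by 2.5); the sketch below assumes ten responses on a 1-5 Likert scale:

```python
from typing import Sequence

def sus_score(responses: Sequence[int]) -> float:
    """System Usability Scale score (0-100) from ten 1-5 Likert responses."""
    if len(responses) != 10 or any(not 1 <= r <= 5 for r in responses):
        raise ValueError("SUS expects ten responses on a 1-5 scale")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)   # items 1, 3, ..., 9 are positively worded
                for i, r in enumerate(responses))
    return total * 2.5

print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))     # illustrative responses -> 85.0
```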

5. Practical Application Patterns and Case Studies

DSR has been systematically applied in several domains:

  • Cloud services: Dynamic SPL and MAPE-K-driven platforms for context-aware, runtime reconfiguration (Benali et al., 2021).
  • Healthcare and mHealth: User-centered architectures for clinical evidence retrieval (Miranda et al., 2021), scalable mHealth apps evaluated against utility and composite-score metrics $\Phi = \alpha \cdot \mathrm{Usability} + \beta \cdot \mathrm{Effectiveness} + \gamma \cdot \mathrm{Efficiency}$ (Jat et al., 28 Aug 2024); see the sketch after this list.
  • Security: Provider-controlled digital watermarking in cloud storage environments, rigorously evaluated for robustness (PSNR ≥ 30 dB) under “management attacks” (Cusack et al., 2016).
  • Decision support: Crowdsource-decision DSS through ontologically grounded consolidation of experience in emergent, low-theory domains (Thuan et al., 2016).
  • Sustainability: SDG-driven requirements elicitation pipelines integrating Delphi structuring for reproducibility (Brooks, 2020).
  • Scientific computing infrastructure: Pilot-abstraction and adaptive resource management artifacts validated via both analytical and empirical scaling models (Luckow et al., 2020).
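
To make two of the metrics above concrete, here is a minimal sketch of the composite score Φ and a PSNR robustness check; the weights, pixel values, and function names are illustrative assumptions, while the 30 dB threshold is the figure reported above.

```python
import math
from typing import Sequence

def composite_score(usability: float, effectiveness: float, efficiency: float,
                    alpha: float = 0.4, beta: float = 0.3, gamma: float = 0.3) -> float:
    """Phi = alpha*Usability + beta*Effectiveness + gamma*Efficiency (weights are illustrative)."""
    return alpha * usability + beta * effectiveness + gamma * efficiency

def psnr(original: Sequence[float], processed: Sequence[float],
         max_value: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two equal-length pixel sequences."""
    mse = sum((o - p) ** 2 for o, p in zip(original, processed)) / len(original)
    return float("inf") if mse == 0 else 10 * math.log10(max_value ** 2 / mse)

print(composite_score(0.8, 0.9, 0.7))                    # -> 0.80
print(psnr([120, 130, 140], [121, 129, 141]) >= 30.0)    # robustness check -> True
```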

Best practices emphasized across studies include modular, iterative design; stakeholder involvement throughout (especially for context claims); explicit artifact and context definitions; and systematic documentation and communication of validation evidence.

6. Teaching, Adoption Barriers, and Best Practices

Evidence from surveys and educational interventions shows DSR is most often initiated at the doctoral level, with adoption strongly shaped by supervisory guidance (Pastor et al., 13 Jul 2024). Typical obstacles include distinguishing design vs. knowledge problems, artifact/context misalignment, and inconsistencies in evaluating and reporting validity.

Recommended teaching and methodological practices (Pastor et al., 13 Jul 2024, Knauss, 2020) include:

  • Early and precise definition of artifact and context
  • Clear mapping of research questions to DSR cycles and tasks
  • Explicit selection and justification of evaluation frameworks and validity criteria
  • Iterative feedback with structured reporting templates
  • Comprehensive documentation of design rationales, lessons learned, and limitations

7. Open Issues and Directions

Current DSR frameworks excel in embedding purpose validity but commonly lack explicit, systematic protocols for instrument, design, and generalization validity—introducing risks to artifact reliability and external applicability (Kroop, 16 Feb 2025). Revised frameworks advocate integrating checks for all five validity types as part of evaluation and reporting pipelines.

Several studies emphasize that rich knowledge bases (e.g., feature or context models, adaptation logs) enhance artifact traceability and form the foundation for reliable generalization and empirical replication (Benali et al., 2021). Formalization of these models, along with the release of artifact and validation toolchains, is recommended to advance cumulative science.

The Q-method offers an additional route to rigor by directly eliciting and quantifying stakeholder subjectivity, supporting artifact definition, design, evaluation, and persona-based communication (Nurhas et al., 2019). Its integration into standard DSR has the potential to bridge the gap between quantitative evaluation and human-centered innovation.

In sum, DSR provides a mature, structured approach for engineering and evaluating artifacts that address critical, under-theorized, or rapidly evolving challenges—contingent on rigorous articulation of goals, transparent validation, and sustained engagement with both academic and stakeholder communities.
