Cloud Models and Interval Data for MCGDM
- Cloud models and interval data are integrated formalisms that meld probabilistic structures with fuzzy semantic uncertainty to represent expert judgment ranges in MCGDM.
- They employ three parameters—expectation, entropy, and hyper-entropy—to convert interval data into a nuanced, Gaussian-based uncertainty model with practical aggregation methods.
- The framework integrates bilevel optimization and an extended TOPSIS approach to yield efficient, data-driven weighting and reliable ranking under real-world ambiguity.
Cloud models and interval data constitute an integrated formalism for representing and manipulating uncertainty within multicriteria group decision-making (MCGDM). Interval-valued data facilitate flexible expert assessments by expressing judgments as ranges, thereby encoding both inter-expert and intra-expert uncertainty. Cloud models synthesize the probabilistic structure of probability distributions and the graded semantic uncertainty of fuzzy membership, ultimately providing a compact, three-parameter characterization of uncertain concepts. Together, these constructs underpin a robust, computationally efficient framework for MCGDM under uncertainty, supporting aggregation of interval judgments, objective data-driven weighting, and ranking via an extension of the technique for order of preference by similarity to ideal solution (TOPSIS), with validation by both simulation and domain-specific case studies (Khorshidi et al., 2020).
1. Interval Data and Representational Roles
Interval data enable experts to articulate their evaluations as intervals $[a, b]$, thereby reflecting both a best guess and an admissible variation. This approach natively incorporates the uncertainty and imprecision present in expert judgments for each alternative-criterion pair. Instead of presuming a uniform likelihood within each interval, the methodology adopts a Gaussian membership function over $[a, b]$, leading to a more nuanced representation of uncertainty. The interval representation preserves maximal information by capturing subjective tolerance around the nominal value.
Cloud models operationalize these interval-based preferences in MCGDM by bridging the qualitative descriptors (e.g., "high risk," "low cost") and their quantitative realization. Every normal cloud model is parametrized by expectation (Ex), entropy (En), and hyper-entropy (He), succinctly encoding the centroid, spread, and higher-order uncertainty of the underlying concept.
2. Mathematical Formulation of the Normal Cloud Model
A normal cloud model for a qualitative concept over a numerical universe $U$ is defined by the triple $(Ex, En, He)$. The stochastic–fuzzy mechanism for generating cloud drops consists of:
- Sampling an entropy realization $En' \sim N(En, He^2)$.
- Sampling a drop position $x \sim N(Ex, En'^2)$.
- Assigning membership $\mu(x) = \exp\!\left(-\dfrac{(x - Ex)^2}{2\,En'^2}\right)$.
Cloud drops $(x, \mu(x))$ provide simultaneous numerical and semantic information. Generation (CG) and parameter estimation (CG⁻¹) algorithms allow efficient forward and backward mapping between empirical data and cloud parameters. Specifically, given cloud drops $x_1, \dots, x_n$ with sample mean $\bar{x}$ and sample variance $S^2$, one recovers
$$\hat{Ex} = \bar{x}, \qquad \hat{En} = \sqrt{\frac{\pi}{2}}\,\frac{1}{n}\sum_{i=1}^{n} \left|x_i - \bar{x}\right|, \qquad \hat{He} = \sqrt{S^2 - \hat{En}^2}.$$
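The forward (CG) and backward (CG⁻¹) generators described above can be sketched in a few lines; this is a minimal illustration using the standard normal cloud formulas, not the paper's specific implementation (function names are our own):

```python
import numpy as np

def forward_cloud(Ex, En, He, n, seed=None):
    """Forward normal cloud generator (CG): produce n cloud drops (x, mu)."""
    rng = np.random.default_rng(seed)
    En_prime = rng.normal(En, He, n)           # second-order sampling of the entropy
    x = rng.normal(Ex, np.abs(En_prime))       # drop positions, one per En' realization
    mu = np.exp(-(x - Ex) ** 2 / (2 * En_prime ** 2))  # certainty degree of each drop
    return x, mu

def backward_cloud(x):
    """Backward cloud generator (CG^-1): estimate (Ex, En, He) from drops x."""
    Ex_hat = x.mean()
    # First-absolute-moment estimator of the entropy
    En_hat = np.sqrt(np.pi / 2) * np.abs(x - Ex_hat).mean()
    S2 = x.var(ddof=1)
    # Clip to zero: sampling noise can make S2 - En^2 slightly negative
    He_hat = np.sqrt(max(S2 - En_hat ** 2, 0.0))
    return Ex_hat, En_hat, He_hat
```

Running `backward_cloud` on drops produced by `forward_cloud` approximately recovers the generating triple, which is exactly the forward/backward mapping the text describes.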
3. Aggregation of Interval Judgments via Cloud Models
Each expert's interval $[a_k, b_k]$ is translated into a Gaussian with $Ex_k = (a_k + b_k)/2$ and $En_k = (b_k - a_k)/6$; by the three-sigma rule, 99.73% of the mass then falls within the interval. Aggregation across experts yields the group normal cloud model $(Ex, En, He)$ in closed form: $Ex$ averages the expert expectations, $En$ pools the expert-level entropies $En_k$, and $He$ captures their inter-expert dispersion.
This approach leverages both expert-level uncertainty (via $En$) and inter-expert dispersion (via $He$), encapsulating collective uncertainty in three parameters. The aggregation operator is direct, non-iterative, and scales linearly with the number of experts.
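A minimal sketch of this interval-to-cloud aggregation follows. The per-expert mapping uses the three-sigma rule from the text; the specific group-level combination shown here (means of $Ex_k$ and $En_k$, spread of the $En_k$ as hyper-entropy) is an illustrative assumption, as the exact closed forms are given in Khorshidi et al. (2020):

```python
import numpy as np

def aggregate_intervals(intervals):
    """Aggregate expert intervals [a_k, b_k] into one group cloud (Ex, En, He).

    Assumed combination: group Ex/En as means of the expert parameters,
    He as the inter-expert spread of the En_k (illustrative, not the
    paper's exact closed form).
    """
    a, b = np.asarray(intervals, dtype=float).T
    Ex_k = (a + b) / 2          # expert expectations (interval midpoints)
    En_k = (b - a) / 6          # three-sigma rule: 99.73% of mass inside [a, b]
    Ex = Ex_k.mean()            # group expectation
    En = En_k.mean()            # pooled expert-level uncertainty
    He = En_k.std(ddof=0)       # assumed proxy for inter-expert dispersion
    return Ex, En, He
```

Note the operator is a single vectorized pass over the experts, matching the text's claim of direct, non-iterative aggregation with linear scaling.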
4. Bilevel Optimization for Objective Criterion Weighting
In the MCGDM setting with $n$ criteria and $m$ alternatives, every evaluation of alternative $i$ on criterion $j$ is a normal cloud $C_{ij} = (Ex_{ij}, En_{ij}, He_{ij})$, so each alternative is described by a cloud vector and the decision data form an $m \times n$ cloud matrix. To privilege criteria where experts display greater consensus (lower hyper-entropy), the weights $w_1, \dots, w_n$ are optimized through a bilevel program:
- Upper level (leader): minimizes the weighted aggregate hyper-entropy across criteria, concentrating weight where expert consensus is strongest.
- Lower level (follower): constrains the weights to be consistent with the structure of the cloud decision matrix.
- Joint constraints: normalization and nonnegativity of the weights, $\sum_{j=1}^{n} w_j = 1$ and $w_j \geq 0$.
This optimization can be reformulated as a single-level linear program, yielding unique weights reflecting both data structure and expert consensus, rather than relying on manual assignment or subjective scoring (Khorshidi et al., 2020).
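To illustrate the flavor of the single-level reduction, the toy sketch below minimizes the weighted mean hyper-entropy subject to normalization and a lower bound on each weight. The floor constraint `w_min` is our own stand-in for the follower-level constraints (which are problem-specific in the paper); with only these constraints the LP optimum has a simple closed form, so no solver is needed:

```python
import numpy as np

def consensus_weights(He_matrix, w_min=0.05):
    """Illustrative single-level LP for consensus-driven criterion weights.

    Minimize sum_j w_j * mean_i He_ij  s.t.  sum_j w_j = 1, w_j >= w_min.
    For this simplified constraint set the optimum holds every weight at
    the floor and assigns the remaining mass to the lowest-cost (most
    consensual) criterion. The floor w_min is an assumed stand-in for the
    paper's follower-level constraints.
    """
    cost = np.asarray(He_matrix, dtype=float).mean(axis=0)  # mean hyper-entropy per criterion
    n = cost.size
    w = np.full(n, w_min)
    w[np.argmin(cost)] += 1.0 - n * w_min                   # remaining mass to best criterion
    return w
```

The point of the sketch is the mechanism, not the exact program: low hyper-entropy (high consensus) attracts weight, and the data alone determine the result without manual scoring.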
5. Extension of TOPSIS to Interval Cloud Frameworks
Cloud-weighted scores generalize matrix normalization for TOPSIS in uncertain environments. Positive and negative ideal clouds for each criterion $j$ are identified as follows:
- Higher-the-better: $C_j^+ = \left(\max_i Ex_{ij},\ \min_i En_{ij},\ \min_i He_{ij}\right)$, $C_j^- = \left(\min_i Ex_{ij},\ \max_i En_{ij},\ \max_i He_{ij}\right)$.
- Lower-the-better: the roles of $\max_i Ex_{ij}$ and $\min_i Ex_{ij}$ are reversed.
A cloud-to-cloud distance metric $d(\cdot,\cdot)$ satisfying all key distance properties (non-negativity, symmetry, and the triangle inequality) compares each evaluation with the ideal clouds. For each alternative $i$, the weighted distances are aggregated as $D_i^+ = \sum_{j=1}^{n} w_j\, d(C_{ij}, C_j^+)$ and $D_i^- = \sum_{j=1}^{n} w_j\, d(C_{ij}, C_j^-)$, and the ranking index is the relative closeness $CC_i = \dfrac{D_i^-}{D_i^+ + D_i^-}$, where higher $CC_i$ denotes greater preference.
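The ranking pipeline above can be sketched end to end. This is an illustrative implementation, with a plain Euclidean distance on the $(Ex, En, He)$ triples standing in for the paper's dedicated cloud distance:

```python
import numpy as np

def cloud_topsis(Ex, En, He, w, benefit):
    """Rank alternatives with a cloud-extended TOPSIS (illustrative sketch).

    Ex, En, He : (m alternatives x n criteria) arrays of cloud parameters.
    w          : criterion weights summing to 1.
    benefit    : boolean per criterion, True for higher-the-better.
    A Euclidean distance on (Ex, En, He) stands in for the paper's
    dedicated cloud-to-cloud distance metric.
    """
    Ex, En, He = (np.asarray(a, dtype=float) for a in (Ex, En, He))
    benefit = np.asarray(benefit)
    # Ideal clouds: extreme Ex per criterion orientation, minimal (positive)
    # or maximal (negative) uncertainty parameters.
    pos = (np.where(benefit, Ex.max(axis=0), Ex.min(axis=0)),
           En.min(axis=0), He.min(axis=0))
    neg = (np.where(benefit, Ex.min(axis=0), Ex.max(axis=0)),
           En.max(axis=0), He.max(axis=0))

    def dist(ideal):
        iEx, iEn, iHe = ideal
        return np.sqrt((Ex - iEx) ** 2 + (En - iEn) ** 2 + (He - iHe) ** 2)

    D_pos = (w * dist(pos)).sum(axis=1)      # weighted distance to positive ideal
    D_neg = (w * dist(neg)).sum(axis=1)      # weighted distance to negative ideal
    return D_neg / (D_pos + D_neg)           # closeness index, higher is better
```

An alternative that coincides with the ideal clouds on every criterion attains the maximal closeness of 1, mirroring the behavior of classical TOPSIS.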
6. Empirical Validation and Applications
Monte Carlo simulations evaluated the aggregation technique on 100 problems, each with 2–10 experts and 50 draws per expert. The overlapping-ratio similarity metric yielded mean values exceeding 0.95, with no significant bias in means, indicating faithful retention of statistical properties from intervals to clouds.
The methodology was applied to cybersecurity vulnerability assessment involving 38 experts, 14 system components, and 7 criteria evaluated with interval ratings. Aggregation produced a $14 \times 7$ cloud matrix. The linear program assigned highest importance to "frequency" and lowest to "interaction." The cloud-enhanced TOPSIS ranking of system components by risk of successful attack demonstrated the framework's practical utility.
Robustness analyses compared the cloud-based TOPSIS with alternative distance measures (Euclidean-vector and Hamming-type), yielding Spearman correlations of 0.891 and 0.767 with the primary method. Comparative evaluation against interval-valued intuitionistic fuzzy number and type-2 fuzzy set algorithms gave Spearman correlations of 0.621 and 0.942, respectively, evidencing alignment with information-preserving benchmarks and superior computational efficiency (lowest average CPU time by a Kruskal–Wallis test).
7. Synthesis and Significance
By combining interval data representation, normal cloud model aggregation, bilevel linear programming for weight optimization, and cloud-specific extensions of TOPSIS, this framework effectively models multi-dimensional uncertainty within MCGDM. The approach offers low complexity, automatic data-driven weighting, and robust, information-preserving rankings, validated in both simulated and operational contexts. This methodology provides a principled mechanism to incorporate granular uncertainty from disparate human estimates, yielding reliable group decision outcomes under real-world ambiguity and complexity (Khorshidi et al., 2020).