
Cloud Models and Interval Data for MCGDM

Updated 26 November 2025
  • Cloud models and interval data are integrated formalisms that meld probabilistic structures with fuzzy semantic uncertainty to represent expert judgment ranges in MCGDM.
  • They employ three parameters—expectation, entropy, and hyper-entropy—to convert interval data into a nuanced, Gaussian-based uncertainty model with practical aggregation methods.
  • The framework integrates bilevel optimization and an extended TOPSIS approach to yield efficient, data-driven weighting and reliable ranking under real-world ambiguity.

Cloud models and interval data constitute an integrated formalism for representing and manipulating uncertainty within multicriteria group decision-making (MCGDM). Interval-valued data facilitate flexible expert assessments by expressing judgments as ranges, thereby encoding both inter-expert and intra-expert uncertainty. Cloud models synthesize the probabilistic structure of probability distributions and the graded semantic uncertainty of fuzzy membership, ultimately providing a compact, three-parameter characterization of uncertain concepts. Together, these constructs underpin a robust, computationally efficient framework for MCGDM under uncertainty, supporting aggregation of interval judgments, objective data-driven weighting, and ranking via an extension of the technique for order of preference by similarity to ideal solution (TOPSIS), validated by both simulation and domain-specific case studies (Khorshidi et al., 2020).

1. Interval Data and Representational Roles

Interval data enable experts to articulate their evaluations as intervals $I_k = [a_k, b_k]$, thereby reflecting both a best guess and an admissible variation. This approach natively incorporates the uncertainty and imprecision present in expert judgments for each alternative-criterion pair. Instead of presuming a uniform likelihood within each interval, the methodology adopts a Gaussian membership function over $I_k$, leading to a more nuanced representation of uncertainty. The interval provides maximal information by capturing subjective tolerance around the nominal value.

Cloud models operationalize these interval-based preferences in MCGDM by bridging the qualitative descriptors (e.g., "high risk," "low cost") and their quantitative realization. Every normal cloud model is parametrized by expectation (Ex), entropy (En), and hyper-entropy (He), succinctly encoding the centroid, spread, and higher-order uncertainty of the underlying concept.

2. Mathematical Formulation of the Normal Cloud Model

A normal cloud model for a qualitative concept $T$ over a numerical universe $U$ is defined by the triple $y = (\mathrm{Ex}, \mathrm{En}, \mathrm{He})$. The stochastic–fuzzy mechanism for generating cloud drops consists of:

  • Sampling $\mathrm{En}' \sim \mathcal{N}(\mathrm{En}, \mathrm{He}^2)$.
  • Sampling $x \sim \mathcal{N}(\mathrm{Ex}, (\mathrm{En}')^2)$.
  • Assigning membership using the same sampled entropy $\mathrm{En}'$:

$$\mu(x) = \exp\!\left(-\frac{(x-\mathrm{Ex})^2}{2(\mathrm{En}')^2}\right)$$

Cloud drops $(x, \mu(x))$ provide simultaneous numerical and semantic information. The cloud generation (CG) and backward parameter-estimation (CG$^{-1}$) algorithms allow efficient forward and backward mapping between empirical data and cloud parameters. Specifically, given $N$ cloud drops $\{x_i\}$, one recovers

$$\mathrm{Ex} = \frac{1}{N}\sum_{i=1}^N x_i,\qquad S^2 = \frac{1}{N}\sum_{i=1}^N (x_i-\mathrm{Ex})^2,$$

$$\mathrm{En} = \sqrt{\frac{\pi}{2}}\cdot\frac{1}{N}\sum_{i=1}^N |x_i-\mathrm{Ex}|,\qquad \mathrm{He} = \sqrt{\left|S^2 - \mathrm{En}^2\right|}.$$
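The forward (CG) and backward (CG⁻¹) mappings above are straightforward to implement. The sketch below is illustrative, not the paper's code; function names are hypothetical, and the backward estimator uses the standard $\sqrt{\pi/2}$ correction relating mean absolute deviation to entropy:

```python
import numpy as np

def cloud_generator(Ex, En, He, n=1000, rng=None):
    """Forward cloud generator (CG): draw n cloud drops (x, mu) from (Ex, En, He)."""
    rng = np.random.default_rng(rng)
    En_prime = np.abs(rng.normal(En, He, size=n))   # En' ~ N(En, He^2)
    x = rng.normal(Ex, En_prime)                    # x  ~ N(Ex, (En')^2)
    mu = np.exp(-(x - Ex) ** 2 / (2 * En_prime ** 2))
    return x, mu

def backward_cloud(x):
    """Backward generator (CG^-1): estimate (Ex, En, He) from drops x."""
    x = np.asarray(x, dtype=float)
    Ex = x.mean()
    S2 = ((x - Ex) ** 2).mean()
    En = np.sqrt(np.pi / 2) * np.abs(x - Ex).mean()
    He = np.sqrt(abs(S2 - En ** 2))
    return Ex, En, He
```

Round-tripping drops generated from known parameters through `backward_cloud` recovers (Ex, En, He) up to sampling noise, which is the sense in which CG and CG⁻¹ are mutually inverse.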

3. Aggregation of Interval Judgments via Cloud Models

Each expert's interval $I_k = [a_k, b_k]$ is translated into a Gaussian $\mathcal{N}(x_k, \sigma_k^2)$ with $x_k = (a_k + b_k)/2$ and $\sigma_k = (b_k - a_k)/6$ (by the three-sigma rule, 99.73% of the mass falls within the interval). For aggregation across $K$ experts, the following closed-form expressions yield the group normal cloud model $Y = (\mathrm{Ex}, \mathrm{En}, \mathrm{He})$:

$$\mathrm{Ex} = \frac{1}{K}\sum_{k=1}^K x_k$$

$$\mathrm{En} = \sum_{k=1}^K \sigma_k + \frac{1}{K}\sum_{k=1}^K |x_k - \mathrm{Ex}|$$

$$\mathrm{He} = \sqrt{\left|\frac{1}{K}\sum_{k=1}^K (x_k - \mathrm{Ex})^2 - \left(\sum_{k=1}^K (\sigma_k - \mathrm{En})\right)^2\right|}$$

This approach leverages both expert-level uncertainty ($\sigma_k$) and inter-expert dispersion ($|x_k - \mathrm{Ex}|$), encapsulating collective uncertainty in three parameters. The aggregation operator is direct, non-iterative, and scales linearly with the number of experts.
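The closed-form aggregation can be transcribed directly into code. The sketch below implements the expressions exactly as printed in this section; the function name `aggregate_intervals` is illustrative:

```python
import numpy as np

def aggregate_intervals(intervals):
    """Aggregate K expert intervals [a_k, b_k] into a group cloud (Ex, En, He),
    transcribing the closed-form expressions of this section."""
    I = np.asarray(intervals, dtype=float)
    x = I.mean(axis=1)                  # x_k = (a_k + b_k) / 2
    sigma = (I[:, 1] - I[:, 0]) / 6.0   # sigma_k via the three-sigma rule
    Ex = x.mean()
    En = sigma.sum() + np.abs(x - Ex).mean()
    He = np.sqrt(abs(((x - Ex) ** 2).mean() - (sigma - En).sum() ** 2))
    return Ex, En, He
```

For instance, three experts rating an alternative as [40, 60], [45, 55], and [50, 70] yield Ex ≈ 53.33, with En combining the within-interval spreads and the disagreement of the interval midpoints.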

4. Bilevel Optimization for Objective Criterion Weighting

In the MCGDM setting with $M$ criteria and $N$ alternatives, every evaluation is a cloud vector $y_{ij} = (\mathrm{Ex}_{ij}, \mathrm{En}_{ij}, \mathrm{He}_{ij})$. To privilege criteria where experts display greater consensus (lower hyper-entropy), weights are optimized through a bilevel program:

  • Upper Level (Leader):

$$\min_{w,\delta}\ \delta$$

  • Lower Level (Follower) Constraints:

$$\left|\mathrm{He}_{ij}\,w_j - \mathrm{He}_{ij_0}\,w_{j_0}\right| \leq \delta,\qquad \forall i,\ \forall j \neq j_0$$

  • Joint Constraints:

$$\sum_{j=1}^M w_j = 1,\qquad w_j \geq 0$$

This optimization can be reformulated as a single-level linear program, yielding unique weights $\{w_j\}$ reflecting both data structure and expert consensus, rather than relying on manual assignment or subjective scoring (Khorshidi et al., 2020).
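A minimal sketch of the single-level reformulation, assuming a fixed reference criterion $j_0$ and using SciPy's `linprog` (HiGHS backend): each absolute-value constraint is split into two linear inequalities in the decision vector $(w_1, \dots, w_M, \delta)$. The function name `consensus_weights` is hypothetical:

```python
import numpy as np
from scipy.optimize import linprog

def consensus_weights(He, j0=0):
    """Single-level LP: minimize the largest gap delta between He_ij * w_j and
    He_i,j0 * w_j0 over all i and j != j0, subject to sum(w) = 1, w >= 0."""
    He = np.asarray(He, dtype=float)
    N, M = He.shape
    c = np.zeros(M + 1)
    c[-1] = 1.0                                   # objective: minimize delta
    A_ub, b_ub = [], []
    for i in range(N):
        for j in range(M):
            if j == j0:
                continue
            row = np.zeros(M + 1)                 #  He_ij w_j - He_ij0 w_j0 <= delta
            row[j], row[j0], row[-1] = He[i, j], -He[i, j0], -1.0
            A_ub.append(row)
            b_ub.append(0.0)
            row2 = np.zeros(M + 1)                # -(He_ij w_j - He_ij0 w_j0) <= delta
            row2[j], row2[j0], row2[-1] = -He[i, j], He[i, j0], -1.0
            A_ub.append(row2)
            b_ub.append(0.0)
    A_eq = [np.append(np.ones(M), 0.0)]           # weights sum to one
    res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (M + 1), method="highs")
    return res.x[:M], res.x[-1]
```

With two criteria whose hyper-entropies stand in a 2:1 ratio for every alternative, the LP drives delta to zero by assigning the noisier criterion half the weight of the more consensual one.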

5. Extension of TOPSIS to Interval Cloud Frameworks

Cloud-weighted scores $\tilde y_{ij} = w_j \cdot y_{ij}$ generalize matrix normalization for TOPSIS in uncertain environments. Positive and negative ideal clouds for each criterion are identified as follows:

  • Higher-the-better: $\tilde y_j^+ = (\max_i \mathrm{Ex}_{ij},\ \max_i \mathrm{En}_{ij},\ \max_i \mathrm{He}_{ij})$ and $\tilde y_j^- = (\min_i \mathrm{Ex}_{ij},\ \min_i \mathrm{En}_{ij},\ \min_i \mathrm{He}_{ij})$;
  • Lower-the-better: roles reversed.

The cloud-to-cloud distance metric is

$$d(y_1, y_2) = |\mathrm{Ex}_1-\mathrm{Ex}_2| + |\mathrm{En}_1-\mathrm{En}_2| + |\mathrm{He}_1-\mathrm{He}_2|$$

satisfying all key distance properties. For each alternative $i$, the distances to the ideal clouds are aggregated as $D_i^+ = \sum_j d(\tilde y_{ij}, \tilde y_j^+)$ and $D_i^- = \sum_j d(\tilde y_{ij}, \tilde y_j^-)$, and the ranking index is $\mathrm{RS}_i = D_i^- / (D_i^+ + D_i^-)$, where higher $\mathrm{RS}_i$ denotes greater preference.
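The distance metric and ranking index above can be sketched compactly. Function names are hypothetical, and weighting is taken to multiply all three cloud parameters by $w_j$, as in the cloud-weighted scores:

```python
import numpy as np

def cloud_distance(y1, y2):
    """d(y1, y2) = |Ex1 - Ex2| + |En1 - En2| + |He1 - He2|."""
    return abs(y1[0] - y2[0]) + abs(y1[1] - y2[1]) + abs(y1[2] - y2[2])

def cloud_topsis(Y, w, benefit):
    """Rank N alternatives from an N x M array of clouds Y[i][j] = (Ex, En, He).
    w: criterion weights; benefit[j]: True when higher-the-better."""
    Yw = np.asarray(Y, dtype=float) * np.asarray(w)[None, :, None]  # weighted clouds
    N, M, _ = Yw.shape
    hi, lo = Yw.max(axis=0), Yw.min(axis=0)
    mask = np.asarray(benefit)[:, None]
    pos = np.where(mask, hi, lo)                  # positive ideal cloud per criterion
    neg = np.where(mask, lo, hi)                  # negative ideal cloud per criterion
    Dp = np.array([sum(cloud_distance(Yw[i, j], pos[j]) for j in range(M)) for i in range(N)])
    Dn = np.array([sum(cloud_distance(Yw[i, j], neg[j]) for j in range(M)) for i in range(N)])
    return Dn / (Dp + Dn)                         # RS_i: higher means preferred
```

For a single higher-the-better criterion, the alternative whose weighted cloud coincides with the positive ideal receives RS = 1 and the one at the negative ideal receives RS = 0.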

6. Empirical Validation and Applications

Monte Carlo simulations evaluated the aggregation technique on 100 problems, each with 2–10 experts and 50 draws per expert. The overlapping-ratio similarity metric ($\mathrm{SOR}$) yielded mean values exceeding 0.95 and no significant bias in means ($p > 0.05$), indicating faithful retention of statistical properties from intervals to clouds.

The methodology was applied to cybersecurity vulnerability assessment involving 38 experts, 14 system components, and 7 criteria, with interval ratings in $[0, 100]$. Aggregation produced a $14 \times 7$ cloud matrix. The linear program assigned highest importance to "frequency" and lowest to "interaction." The cloud-enhanced TOPSIS ranking of system components by risk of successful attack demonstrated the framework's practical utility.

Robustness analyses compared the cloud-based TOPSIS with alternative distance measures (Euclidean-vector, Hamming-type), yielding Spearman correlations of 0.891 and 0.767 with the primary method. Comparative evaluation against interval-valued intuitionistic fuzzy numbers and type-2 fuzzy set algorithms gave Spearman correlations of 0.621 and 0.942, respectively, evidencing alignment with information-preserving benchmarks and superior computational efficiency (lowest average CPU time, Kruskal–Wallis test $p < 0.01$).

7. Synthesis and Significance

By combining interval data representation, normal cloud model aggregation, bilevel linear programming for weight optimization, and cloud-specific extensions of TOPSIS, this framework effectively models multi-dimensional uncertainty within MCGDM. The approach offers low complexity, automatic data-driven weighting, and robust, information-preserving rankings, validated in both simulated and operational contexts. This methodology provides a principled mechanism to incorporate granular uncertainty from disparate human estimates, yielding reliable group decision outcomes under real-world ambiguity and complexity (Khorshidi et al., 2020).
