Multi-view Candidate Matching

Updated 4 July 2025
  • Multi-view candidate matching is a methodology that aggregates candidate scores across multiple criteria and expert assessments while managing uncertainty.
  • It employs frameworks such as the Transferable Belief Model and Qualitative Possibility Theory to fuse evidence and handle conflicting opinions through discounting and weighted aggregation.
  • This approach enables practical applications in areas like personnel selection and multi-source evaluation by synthesizing quantitative and qualitative inputs for robust candidate ranking.

Multi-view candidate matching is the methodological backbone for inferring, integrating, and ranking candidate solutions in scenarios where information is distributed across multiple criteria and judged by multiple experts, typically with qualitative, uncertain, or incomplete data. The paradigmatic frameworks for such problems, as presented in Smets & Kennes’s Transferable Belief Model (TBM) and Dubois & Prade’s Qualitative Possibility Theory (QPT), provide structured means to aggregate evidence, assess candidate quality, handle conflicting opinions, and reflect varying confidence and importance across dimensions.

1. Frameworks for Multi-View Aggregation

Transferable Belief Model (TBM) operates in a quantitative domain, representing expert opinions as belief functions. It allows for partial belief, explicit ignorance, and nuanced aggregation using Dempster’s rule. TBM accommodates both uncertainty and imprecision, and yields scalar rankings via pignistic probability for final candidate assessment.
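
As a minimal illustration of this pipeline, the sketch below fuses two invented expert mass functions on a toy three-level scale with Dempster's rule and converts the result to a pignistic probability; all names and numbers are assumptions for the example.

```python
from itertools import product

# Toy three-level scale for one criterion; focal elements are frozensets.
SCALE = frozenset({"bad", "ok", "good"})

def dempster_combine(m1, m2):
    """Fuse two mass functions (dict: frozenset -> mass) with Dempster's rule."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # mass falling on the empty set
    # Normalize by the non-conflicting mass.
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

def pignistic(m):
    """Spread each focal element's mass uniformly over its singletons."""
    betp = dict.fromkeys(SCALE, 0.0)
    for focal, w in m.items():
        for x in focal:
            betp[x] += w / len(focal)
    return betp

# Hypothetical expert opinions: partial belief plus explicit ignorance (mass on SCALE).
m_expert1 = {frozenset({"good"}): 0.6, SCALE: 0.4}
m_expert2 = {frozenset({"ok", "good"}): 0.7, SCALE: 0.3}

fused = dempster_combine(m_expert1, m_expert2)
print(pignistic(fused))  # scalar scores usable for ranking candidates
```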

Qualitative Possibility Theory (QPT) uses symbolic, ordinal scales, modeling expert judgements as possibility distributions. Aggregation is performed with min, max, and weighted min/max operators, fitting settings where only the ordering of outcomes is meaningful, not their numerical difference.
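
A minimal sketch of these ordinal operators on integer ranks 1–5; the scores, weights, and the negation convention are assumptions for the example.

```python
# Ordinal ranks on a hypothetical 5-level scale: 1 = "very bad" ... 5 = "very good".
# Only the ordering is used; sums and averages would be meaningless here.
TOP = 5

def conjunctive(scores):
    """Pessimistic fusion: a candidate is only as good as its worst score."""
    return min(scores)

def disjunctive(scores):
    """Optimistic fusion: the best score dominates."""
    return max(scores)

def weighted_max(scores, weights):
    """Weighted disjunction max_j min(w_j, s_j): a weight caps a source's influence."""
    return max(min(w, s) for w, s in zip(weights, scores))

def weighted_min(scores, weights):
    """Weighted conjunction min_j max(neg(w_j), s_j), with the order-reversing
    negation neg(w) = TOP + 1 - w used only to invert ranks."""
    return min(max(TOP + 1 - w, s) for w, s in zip(weights, scores))

scores = [4, 2, 5]    # three expert judgements for one criterion
weights = [5, 3, 1]   # reliability of each expert, on the same scale
print(weighted_max(scores, weights))  # -> 4
print(weighted_min(scores, weights))  # -> 3
```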

Both frameworks are equipped to:

  • Aggregate multiple criteria (each possibly discretized, e.g., "very bad" to "very good") and experts (each with its own confidence/reliability).
  • Allow expert self-confidence and decision-maker trust to directly influence the aggregation process through discounting or weighting mechanisms.
  • Employ fusion rules (e.g., Dempster’s rule, weighted disjunction/conjunction) capable of managing both agreement and conflict.

2. Candidate Evaluation and Mathematical Formulation

Evaluation Workflow:

  1. Scoring per Criterion: Each candidate $K$ receives a score $c_i(K)$ for each criterion $i$, where $c_i(K) \in L_i$ (an ordinal scale).
  2. Expert Assessment and Self-Confidence: Each expert $j$ provides possibly imprecise or interval-valued input for every criterion, together with an associated confidence level $Y_{ij}$.
  3. Aggregation Across Experts: Inputs for each criterion are combined after confidence/reliability adjustments (see the sketch after this list).
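
To make the workflow concrete, here is a minimal sketch of the input structures; the field names, the shared scale, and the sample values are illustrative assumptions, not notation from the frameworks.

```python
from dataclasses import dataclass

# Hypothetical shared 5-level ordinal scale L_i; indices 0..4 point into it.
SCALE = ("very bad", "bad", "medium", "good", "very good")

@dataclass
class Assessment:
    """One expert's input for one criterion of one candidate."""
    expert: str
    criterion: str
    low: int         # interval-valued score: lowest level the expert allows
    high: int        # highest level the expert allows (low <= high)
    confidence: int  # self-confidence Y_ij, on its own ordinal scale

assessments = [
    Assessment("expert1", "experience", low=3, high=4, confidence=3),
    Assessment("expert1", "skills",     low=2, high=2, confidence=4),
    Assessment("expert2", "experience", low=1, high=3, confidence=1),
]
# Step 3 groups these by criterion, discounts each by its confidence,
# and fuses them with the rules described in Sections 2 and 3.
```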

Mathematical Details:

  • In TBM, belief is assigned over subsets of criterion scores, adjusted by confidence, and merged via Dempster’s rule. For a given criterion:

$$pl_{L_i;[C_{ij}]}(c_i \in A) = \max_{x \in A,\, a \in C_{ij}} \Pi([a,a] \mid c_i = x)$$

After discounting, all expert beliefs are fused, mapped to a Goodness Score through criterion importance, and propagated across criteria—culminating in a pignistic transformation and candidate ranking.

  • In QPT, per-expert possibility distributions are discounted and merged with weighted max:

$$\Pi_i(s) = \bigvee_j \left[\, w_{ij} \wedge \pi^{(j)}_i(s) \,\right], \quad s \in L_i$$

Aggregation across criteria depends on the decision attitude (e.g., conjunctive for strict, compensatory by Sugeno integrals for trade-off-friendly).
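
A sketch of both stages on integer ranks 0–4; the distributions, expert weights, and the simplified capacity passed to the Sugeno integral are assumptions for the example.

```python
# Ordinal levels 0..4; min and max play the roles of the lattice meet and join.
TOP = 4

def fuse_experts(distributions, weights):
    """Weighted disjunction across experts, per level s:
    Pi_i(s) = max_j min(w_ij, pi_i^(j)(s))."""
    return [max(min(w, d[s]) for w, d in zip(weights, distributions))
            for s in range(TOP + 1)]

def sugeno_integral(scores, capacity):
    """Compensatory aggregation across criteria: max_k min(score_(k), capacity[k]),
    with scores sorted best-first; capacity[k] is the (ordinal) importance of the
    set of the k+1 best criteria and must be nondecreasing."""
    ranked = sorted(scores, reverse=True)
    return max(min(s, mu) for s, mu in zip(ranked, capacity))

# Two hypothetical experts' possibility distributions over the 5 levels.
pi_e1 = [0, 1, 3, 4, 2]
pi_e2 = [0, 0, 2, 4, 4]
print(fuse_experts([pi_e1, pi_e2], weights=[4, 2]))  # -> [0, 1, 3, 4, 2]

# Per-criterion candidate scores, merged with a trade-off-friendly attitude.
print(sugeno_integral([4, 2, 3], capacity=[2, 3, 4]))  # -> 3
```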

3. Representation of Expert Competence and Reliability

TBM Discounting:

  • Both expert self-confidence ($Y_{ij}$) and the decision maker’s trust ($a_j$) are mapped to numerical scales.
  • Discounting factor:

$$d_{ij} = 1 - 0.95 \times (g_{ij}/3) \times \left(0.75 + 0.25 \times (s_j - 1)/3\right)$$

  • A portion of the mass is reassigned to the total ignorance set, reflecting doubt in the assessment.
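
The excerpt leaves $g_{ij}$ and $s_j$ undefined; the sketch below assumes they are the self-confidence and trust grades on $0..3$ and $1..4$ respectively (consistent with the divisors in the formula) and applies classical mass discounting.

```python
def discount_factor(g_ij, s_j):
    """Discount rate d_ij from the formula above; assumed ranges: g_ij in 0..3
    (expert self-confidence grade), s_j in 1..4 (decision-maker trust grade)."""
    return 1.0 - 0.95 * (g_ij / 3.0) * (0.75 + 0.25 * (s_j - 1.0) / 3.0)

def discount_mass(m, frame, d):
    """Classical discounting: keep (1 - d) of every focal mass and move the
    remaining d onto the whole frame, i.e., onto total ignorance."""
    out = {focal: (1.0 - d) * w for focal, w in m.items()}
    out[frame] = out.get(frame, 0.0) + d
    return out

frame = frozenset({"bad", "ok", "good"})
m = {frozenset({"good"}): 0.8, frame: 0.2}  # a fairly committed assessment
d = discount_factor(g_ij=1, s_j=2)          # unsure expert, modest trust
print(round(d, 3), discount_mass(m, frame, d))
```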

QPT Discounting:

  • Confidence is folded into the possibility distribution for each expert via:

$$\pi^{(j)\,*}_i(s) = \pi^{(j)}_i(s) \vee \neg Y_{ij}$$

  • Combined expert/DM confidence yields a weight for fusion:

$$w_{ij} = Y_{ij} \otimes a_j$$

  • Aggregation across experts is accomplished by weighted disjunction (max-min).
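
A sketch of both discounting steps on ranks 0–4, assuming the standard order-reversing negation $\neg a = \mathrm{TOP} - a$ and min for the conjunction $\otimes$:

```python
TOP = 4  # ordinal levels 0..4

def neg(a):
    """Order-reversing negation of the scale (assumed convention)."""
    return TOP - a

def discount_distribution(pi, y_ij):
    """pi*(s) = pi(s) v neg(Y_ij): low self-confidence lifts the whole
    distribution toward total possibility, i.e., toward ignorance."""
    return [max(p, neg(y_ij)) for p in pi]

def fusion_weight(y_ij, a_j):
    """w_ij = Y_ij (x) a_j, taking min as the ordinal conjunction (assumed)."""
    return min(y_ij, a_j)

pi_expert = [0, 1, 2, 4, 3]                         # possibility of each level
pi_star = discount_distribution(pi_expert, y_ij=3)  # neg(3) = 1 lifts the floor
w = fusion_weight(y_ij=3, a_j=2)
print(pi_star, w)  # [1, 1, 2, 4, 3] 2
```

The resulting weight $w_{ij}$ then enters the weighted disjunction over experts shown in Section 2.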

4. Use and Importance of Qualitative Scales

Qualitative scales are fundamental in both frameworks.

  • Satisfaction, confidence, reliability, and importance are each mapped onto discrete, often linearly ordered sets (e.g., $\{1,2,3,4,5\}$ for satisfaction).
  • All discounting, fusion, aggregation, and final selection operators are defined in the context of these ordinal scales.
  • The frameworks maintain symbolic/ordinal reasoning throughout, only collapsing to numerical outputs (probabilities, expectations) at the point of final decision—thus preserving interpretability and suitability for subjective judgement.
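
For instance, the scales might be declared as ordered tuples whose indices are used only for comparison (a hypothetical layout):

```python
# Scales as ordered tuples; indices are compared, never added or averaged.
SATISFACTION = ("very bad", "bad", "medium", "good", "very good")  # maps to {1..5}
CONFIDENCE   = ("none", "low", "medium", "high")
IMPORTANCE   = ("marginal", "useful", "important", "critical")

def at_least(label, threshold, scale):
    """Purely ordinal test: is `label` at or above `threshold` on `scale`?"""
    return scale.index(label) >= scale.index(threshold)

print(at_least("good", "medium", SATISFACTION))  # -> True
```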

5. Comparative Analysis of Frameworks

| Aspect | Transferable Belief Model (TBM) | Qualitative Possibility Theory (QPT) |
| --- | --- | --- |
| Type | Quantitative belief functions | Qualitative possibility distributions |
| Fusion of expert opinions | Dempster's rule + discounting | Weighted disjunction (max-min) |
| Criteria aggregation | Compensatory via Goodness mapping/expectation | Conjunctive or compensatory |
| Uncertainty handling | Belief/plausibility, explicit ignorance | Ordinal possibility/necessity |
| Explainability | Moderate (numerical) | Strong (symbolic, logic-based) |
| Best suited for | Quantitative/mixed data, trade-offs | Qualitative judgements, strict policies |

TBM is amenable to nuanced trade-offs, can express reinforcement, and accommodates partial belief and ignorance, but it requires numerically commensurate scales and incurs higher computational cost. QPT achieves maximal explainability and is robust in strictly ordinal, poorly quantified domains, but it tends toward conjunctive (non-compensatory) aggregation and may lack reinforcement effects.

6. Computational Considerations and Practical Issues

  • Both methodologies support the integration of imprecise, incomplete, or conflicting inputs across multiple views (criteria, experts).
  • TBM’s computational complexity grows with the number of mass assignments (exponential in the scale length and expert count), particularly when fusing a large number of qualitative assessments with high uncertainty.
  • QPT, being symbolic, avoids most metric computations, but careful design of aggregation and discounting operators is critical to preserving the desired decision attitude.

7. Applicability and Deployment Strategies

  • For applications where only the ranking or cut-off is required, TBM’s pignistic probability or QPT’s possibility/necessity degrees suffice.
  • When explanations to stakeholders are demanded, QPT’s symbolic chain of inference is preferable.
  • Both frameworks extend to scenarios with missing data, subjective or unreliable input, and allow scenario-specific weighting of criteria and experts.
  • Final deployment may involve mapping symbolic ranks to action thresholds, aggregating or partitioning candidate lists, or embedding unary/binary qualitative attributes into weighted structures for final summarization.

The integration of TBM and QPT for multi-view candidate matching establishes a rigorous, interpretable, and flexible foundation for candidate evaluation in complex, uncertain multi-criteria settings, supporting both symbolic and quantitative data aggregation, variable expert reliability, and diverse decision attitudes. The frameworks’ distinct aggregation and fusion rules enable their selection based on domain data type, explainability needs, and computational constraints, providing principled guidance for real-world applications in personnel selection, multi-source evaluation, and other judgment-laden domains.