Multi-view Candidate Matching
- Multi-view candidate matching is a methodology that aggregates candidate scores from various criteria and expert assessments to manage uncertainty.
- It employs frameworks such as Transferable Belief Model and Qualitative Possibility Theory to fuse evidence and handle conflicting opinions using discounting and weighted aggregation.
- This approach enables practical applications in areas like personnel selection and multi-source evaluation by synthesizing quantitative and qualitative inputs for robust candidate ranking.
Multi-view candidate matching is the methodological backbone for inferring, integrating, and ranking candidate solutions in scenarios where information is distributed across multiple criteria and judged by multiple experts, typically with qualitative, uncertain, or incomplete data. The paradigmatic frameworks for such problems, as presented in Smets & Kennes’s Transferable Belief Model (TBM) and Dubois & Prade’s Qualitative Possibility Theory (QPT), provide structured means to aggregate evidence, assess candidate quality, handle conflicting opinions, and reflect varying confidence and importance across dimensions.
1. Frameworks for Multi-View Aggregation
Transferable Belief Model (TBM) operates in a quantitative domain, representing expert opinions as belief functions. It allows for partial belief, explicit ignorance, and nuanced aggregation using Dempster’s rule. TBM accommodates both uncertainty and imprecision, and yields scalar rankings via pignistic probability for final candidate assessment.
Qualitative Possibility Theory (QPT) uses symbolic, ordinal scales, modeling expert judgements as possibility distributions. Aggregation is performed with min, max, and weighted min/max operators, fitting settings where only the ordering of outcomes is meaningful, not their numerical difference.
Both frameworks are equipped to:
- Aggregate multiple criteria (each possibly discretized, e.g., "very bad" to "very good") and experts (each with own confidence/reliability).
- Allow expert self-confidence and decision-maker trust to directly influence the aggregation process through discounting or weighting mechanisms.
- Employ fusion rules (e.g., Dempster’s rule, weighted disjunction/conjunction) capable of managing both agreement and conflict.
2. Candidate Evaluation and Mathematical Formulation
Evaluation Workflow:
- Scoring per Criterion: Each candidate c receives a score s_j(c) for each criterion j, where s_j(c) ∈ L = {l_0, …, l_k} (an ordinal scale).
- Expert Assessment and Self-Confidence: Each expert e provides possibly imprecise or interval-valued input for every criterion and an associated confidence level α_e.
- Aggregation Across Experts: Inputs for each criterion are combined after confidence/reliability adjustments.
Mathematical Details:
- In TBM, belief is assigned over subsets of criterion scores, adjusted by confidence, and merged via Dempster’s rule. For a given criterion, each expert’s mass function m is discounted by a reliability factor α ∈ [0, 1]:

  m^α(A) = α · m(A) for A ≠ Ω,   m^α(Ω) = α · m(Ω) + (1 - α)

After discounting, all expert beliefs are fused via Dempster’s rule, mapped to a Goodness Score through criterion importance, and propagated across criteria, culminating in a pignistic transformation, BetP(x) = Σ_{A: x ∈ A} m(A) / |A|, and candidate ranking.
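The per-criterion TBM pipeline (discounting, Dempster fusion, pignistic transform) can be sketched in a few lines of Python. The frame of discernment, the two expert mass assignments, and the confidence value 0.9 below are illustrative assumptions, not values from the source frameworks:

```python
from itertools import product

FRAME = frozenset({"bad", "medium", "good"})  # illustrative ordinal frame

def discount(m, alpha):
    """Shafer discounting: keep a fraction alpha of each mass and move
    the remainder to total ignorance (the whole frame)."""
    md = {A: alpha * v for A, v in m.items()}
    md[FRAME] = md.get(FRAME, 0.0) + (1.0 - alpha)
    return md

def dempster(m1, m2):
    """Dempster's rule: conjunctive combination, renormalized by conflict."""
    out, conflict = {}, 0.0
    for (A, a), (B, b) in product(m1.items(), m2.items()):
        C = A & B
        if C:
            out[C] = out.get(C, 0.0) + a * b
        else:
            conflict += a * b
    return {A: v / (1.0 - conflict) for A, v in out.items()}

def pignistic(m):
    """Pignistic transform: spread each focal mass evenly over its elements."""
    p = {}
    for A, v in m.items():
        for x in A:
            p[x] = p.get(x, 0.0) + v / len(A)
    return p

# Two experts assess one criterion; expert 1 is discounted by confidence 0.9.
m1 = discount({frozenset({"good"}): 0.8, FRAME: 0.2}, 0.9)
m2 = {frozenset({"good", "medium"}): 0.6, FRAME: 0.4}
p = pignistic(dempster(m1, m2))
```

Here the fused pignistic distribution ranks "good" first, with the discounted ignorance spread across the frame; ranking candidates then reduces to comparing these scalar probabilities.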
- In QPT, per-expert possibility distributions π_i are discounted and merged with weighted max:

  π(u) = max_i min(w_i, π_i(u))
Aggregation across criteria depends on the decision attitude (e.g., conjunctive for strict, compensatory by Sugeno integrals for trade-off-friendly).
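Both decision attitudes can be written as one-liners over an ordinal scale. The sketch below assumes an integer scale 0..4 standing in for the symbolic labels; the weighted min is the strict (conjunctive) attitude with importance weights, and the weighted max its compensatory counterpart:

```python
K = 4  # top of the assumed ordinal scale 0..4 ("very bad" .. "very good")

def neg(v):
    """Order-reversing map on the scale 0..K."""
    return K - v

def weighted_min(scores, importances):
    """Prioritized conjunctive aggregation across criteria: a criterion of
    low importance w cannot drag the overall result below neg(w)."""
    return min(max(s, neg(w)) for s, w in zip(scores, importances))

def weighted_max(scores, weights):
    """Disjunctive counterpart: one well-scored, well-weighted criterion
    suffices to lift the result."""
    return max(min(s, w) for s, w in zip(scores, weights))
```

For example, a candidate scoring [4, 2] under importances [4, 1] gets weighted_min 3: the poor score sits on an unimportant criterion, so it is only partially penalizing, whereas under importances [4, 4] the result drops to 2.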
3. Representation of Expert Competence and Reliability
TBM Discounting:
- Both expert self-confidence (α_e) and the decision maker’s trust (τ_e) are mapped to numerical scales.
- Discounting factor: a combined reliability, e.g., α = α_e · τ_e, determines how much of each mass assignment is retained.
- A portion of the mass is reassigned to the total ignorance set, reflecting doubt in the assessment.
QPT Discounting:
- Confidence is folded into the possibility distribution for each expert via: π'_e(u) = max(π_e(u), n(λ_e)), where λ_e is the expert’s confidence and n the order-reversing map of the scale.
- Combined expert/DM confidence yields a weight for fusion, e.g., w_e = min(λ_e, τ_e).
- Aggregation across experts is accomplished by weighted disjunction (max-min).
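QPT discounting and the max-min fusion across experts can be sketched as follows, again assuming an integer ordinal scale 0..4 and illustrative distributions and weights:

```python
K = 4  # assumed ordinal scale 0..4

def qpt_discount(pi, confidence):
    """Fold confidence into a distribution: pi'(u) = max(pi(u), K - confidence),
    so a low-confidence expert's distribution drifts toward total ignorance."""
    return {u: max(v, K - confidence) for u, v in pi.items()}

def fuse_experts(pis, weights):
    """Weighted disjunction (max-min) across experts."""
    labels = pis[0].keys()
    return {u: max(min(w, pi[u]) for pi, w in zip(pis, weights)) for u in labels}

pi1 = {"bad": 0, "good": 4}   # expert 1: confidently rates the candidate good
pi2 = {"bad": 4, "good": 1}   # expert 2: leans bad, but is less trusted
fused = fuse_experts([qpt_discount(pi1, 4), qpt_discount(pi2, 2)], [4, 2])
```

The fully trusted expert dominates: "good" stays fully possible (4), while the dissenting, discounted opinion leaves "bad" only partially possible (2).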
4. Use and Importance of Qualitative Scales
Qualitative scales are fundamental in both frameworks.
- Satisfaction, confidence, reliability, and importance are each mapped onto discrete, often linearly ordered sets (e.g., {very bad, bad, medium, good, very good} for satisfaction).
- All discounting, fusion, aggregation, and final selection operators are defined in the context of these ordinal scales.
- The frameworks maintain symbolic/ordinal reasoning throughout, only collapsing to numerical outputs (probabilities, expectations) at the point of final decision—thus preserving interpretability and suitability for subjective judgement.
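The ordinal discipline described above amounts to a small algebra on labels: only comparison, min/max, and the order-reversing negation are defined, never addition or averaging. A minimal sketch (the label names are the illustrative scale from the text):

```python
SCALE = ["very bad", "bad", "medium", "good", "very good"]  # linearly ordered

def idx(label):
    return SCALE.index(label)

def neg(label):
    """Order-reversing map: the only 'arithmetic' an ordinal scale supports."""
    return SCALE[len(SCALE) - 1 - idx(label)]

def lmin(a, b):  # symbolic min: the worse of two labels
    return a if idx(a) <= idx(b) else b

def lmax(a, b):  # symbolic max: the better of two labels
    return a if idx(a) >= idx(b) else b
```

All discounting and fusion operators in the QPT setting reduce to compositions of these four functions, which is what keeps the inference chain symbolic and explainable.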
5. Comparative Analysis of Frameworks
| Aspect | Transferable Belief Model (TBM) | Qualitative Possibility Theory (QPT) |
| --- | --- | --- |
| Type | Quantitative belief functions | Qualitative possibility distributions |
| Fusion of expert opinion | Dempster's rule + discounting | Weighted disjunction (max-min) |
| Criteria aggregation | Compensatory via Goodness mapping/expectation | Conjunctive or compensatory |
| Uncertainty handling | Belief/plausibility, explicit ignorance | Ordinal possibility/necessity |
| Explainability | Moderate (numerical) | Strong (symbolic, logic-based) |
| Best suited for | Quantitative/mixed data, trade-offs | Qualitative judgements, strict policies |
TBM is amenable to nuanced trade-offs, can express reinforcement, and accommodates partial belief and ignorance, but it requires numerically commensurate scales and incurs higher computational cost. QPT achieves maximal explainability and is robust in strictly ordinal, poorly quantified domains, but it tends toward conjunctive (non-compensatory) aggregation and may lack reinforcement effects.
6. Computational Considerations and Practical Issues
- Both methodologies support the integration of imprecise, incomplete, or conflicting inputs across multiple views (criteria, experts).
- TBM’s computational complexity grows with the number of mass assignments (exponential in the scale length and expert count), particularly when fusing a large number of qualitative assessments with high uncertainty.
- QPT, being symbolic, avoids most metric computations, but careful design of aggregation and discounting operators is critical to preserving the desired decision attitude.
7. Applicability and Deployment Strategies
- For applications where only the ranking or cut-off is required, TBM’s pignistic probability or QPT’s possibility/necessity degrees suffice.
- When explanations to stakeholders are demanded, QPT’s symbolic chain of inference is preferable.
- Both frameworks extend to scenarios with missing data, subjective or unreliable input, and allow scenario-specific weighting of criteria and experts.
- Final deployment may involve mapping symbolic ranks to action thresholds, aggregating or partitioning candidate lists, or embedding unary/binary qualitative attributes into weighted structures for final summarization.
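Mapping final scores to action thresholds, the last deployment step above, can be as simple as an ordered cut-off table. The threshold values and action names below are hypothetical placeholders for a real policy:

```python
def decide(score, cutoffs):
    """Map a final pignistic/possibility score to an action.
    `cutoffs` is a list of (threshold, action) pairs, highest threshold first."""
    for threshold, action in cutoffs:
        if score >= threshold:
            return action
    return "reject"

POLICY = [(0.7, "shortlist"), (0.4, "second review")]  # illustrative thresholds
```

A candidate with pignistic score 0.85 would be shortlisted, one at 0.5 routed to a second review, and the rest rejected; the same scheme works unchanged on ordinal necessity degrees.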
The integration of TBM and QPT for multi-view candidate matching establishes a rigorous, interpretable, and flexible foundation for candidate evaluation in complex, uncertain multi-criteria settings, supporting both symbolic and quantitative data aggregation, variable expert reliability, and diverse decision attitudes. The frameworks’ distinct aggregation and fusion rules enable their selection based on domain data type, explainability needs, and computational constraints, providing principled guidance for real-world applications in personnel selection, multi-source evaluation, and other judgment-laden domains.