Actual Fusion in Virtual Fusion (AFVF)
- AFVF is a unifying fusion paradigm that integrates classical and modern theories, such as Bayesian, DST, and DSmT, within a common formal framework.
- It employs dynamic fusion rules and proportional conflict redistribution techniques to adaptively manage inconsistent multisource data.
- AFVF is effectively applied in image fusion and target tracking, enhancing real-time performance in cyber-physical and cognitive environments.
Actual Fusion within Virtual Fusion (AFVF) is a generalized and evolving principle for information fusion that synchronizes and unifies disparate fusion models, rules, and processes within a flexible formal framework. AFVF captures the act of operationalizing (“actual fusion”) the results of “virtual” or synthesized fusion spaces, modalities, or representations—thereby yielding actionable outputs directly within complex engineered, cyber-physical, or cognitive systems. The concept has been articulated and formalized in the context of unifying evidence theories, handling conflicts in data fusion, parameterizing virtual sensor setups, and achieving seamless interoperability between the virtual, physical, and cognitive domains.
1. Unification of Fusion Theories and Models
The AFVF paradigm is grounded in the consolidation and extension of classical and modern fusion theories. These include Bayesian probability, Dempster-Shafer Theory (DST), Dezert-Smarandache Theory (DSmT), Yager’s rule, and the Transferable Belief Model (TBM). AFVF situates these as specific cases within a global formalism that uses a common fusion space—often founded on Boolean algebra or “super–power sets”—closed under operations such as union, intersection, and complement. This allows for both exclusive and non-exclusive hypotheses to coexist.
The construction follows a layered hierarchy:
- For probability: P(A)+P(B)=1
- In DST: m_DS(A)+m_DS(B)+m_DS(A∪B)=1
- In DSmT: m_DSm(A)+m_DSm(B)+m_DSm(A∪B)+m_DSm(A∩B)=1
- In the Unified Fusion Theory (UFT): masses are assigned over the whole super-power set S^Θ, closed under ∪, ∩, and complement, with Σ_{X∈S^Θ} m_UFT(X)=1
This categorical generalization incorporates fuzzy and neutrosophic logics, with operators such as T-norms and T-conorms (and their neutrosophic N-norm/N-conorm analogues) to handle degrees of truth, indeterminacy, and falsehood. In so doing, the AFVF framework is mathematically equipped to represent the uncertainty, imprecision, and conflicts found in real-world multisource data.
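The layered normalization constraints above can be checked mechanically. A minimal sketch in Python, using illustrative mass values (chosen here only for demonstration, not taken from the source):

```python
# Mass assignments for two hypotheses A, B at each layer of the hierarchy.
# Values are illustrative; each layer must sum to 1 over its own frame.

prob = {"A": 0.6, "B": 0.4}                                # Bayesian: singletons only
m_ds = {"A": 0.5, "B": 0.3, "A∪B": 0.2}                    # DST: power set
m_dsm = {"A": 0.4, "B": 0.3, "A∪B": 0.2, "A∩B": 0.1}       # DSmT: hyper-power set

def is_normalized(masses, tol=1e-9):
    """A valid basic belief assignment sums to 1 over its frame."""
    return abs(sum(masses.values()) - 1.0) < tol

for frame in (prob, m_ds, m_dsm):
    assert is_normalized(frame)
```

Each successive frame simply admits more focal elements while keeping the same unit-mass constraint, which is what lets AFVF treat the earlier theories as special cases.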
2. Fusion Rules, Conflict Redistribution, and Procedures
AFVF comprises a collection of flexible rules that govern the combination and redistribution of evidence or measurements:
- The conjunctive rule applies when all sources are reliable, combining via set intersection.
- The disjunctive rule is invoked when only partial reliability is known, using set unions.
- An exclusive disjunctive rule acts when exactly one reliable source is assumed.
- A mixed conjunctive–disjunctive rule facilitates cases of partial source reliability.
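The conjunctive and disjunctive rules above can be sketched for two sources, with focal elements represented as frozensets (the masses are illustrative; empty intersections are kept in the result so that a later rule can redistribute them):

```python
from itertools import product

def combine(m1, m2, op):
    """Combine two basic belief assignments; op is set intersection or union."""
    out = {}
    for (x, mx), (y, my) in product(m1.items(), m2.items()):
        z = op(x, y)
        out[z] = out.get(z, 0.0) + mx * my
    return out

A, B = frozenset("A"), frozenset("B")
m1 = {A: 0.7, A | B: 0.3}
m2 = {B: 0.6, A | B: 0.4}

conj = combine(m1, m2, frozenset.intersection)  # all sources reliable: use ∩
disj = combine(m1, m2, frozenset.union)         # partial reliability: use ∪
```

Note that `conj` assigns mass 0.42 to the empty set (the conflict between A and B), whereas `disj` never produces conflict, at the price of less specific conclusions.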
Conflict redistribution is central: masses resulting from contradictory sources (e.g., assignments to empty intersections) can be transferred to their union or redistributed among the contributing hypotheses proportionally. Such redistribution is formalized in the proportional conflict redistribution (PCR) rules; for two sources, PCR5 returns each conflicting product to the hypotheses that generated it:
m_PCR5(A) = m_∩(A) + Σ_{X: X∩A=∅} [ m_1(A)²m_2(X)/(m_1(A)+m_2(X)) + m_2(A)²m_1(X)/(m_2(A)+m_1(X)) ]
where m_∩ denotes the conjunctive mass. A general unification formula expresses the resultant fusion mass schematically as a parameterized blend of the conjunctive and disjunctive parts together with redistributed conflict,
m(A) = K·[ α·m_∩(A) + β·m_∪(A) + (redistributed conflict terms) ]
where α and β are problem-dependent parameters and K is a normalization factor.
These rules are algorithmically instantiated using a recipe-like “logical chart” that adapts dynamically based on changes in knowledge about source reliability, underlying world assumptions (open vs. closed world), and the evolving nature of sensor relationships.
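The PCR5 redistribution step can be sketched as a two-source routine over frozenset focal elements (toy masses; each conflicting product m_1(X)·m_2(Y) with X∩Y=∅ is split back to X and Y in proportion to the masses that generated it):

```python
from itertools import product

def pcr5_two_sources(m1, m2):
    """Proportional Conflict Redistribution rule no. 5 for two sources.

    Non-conflicting products go to the intersection (conjunctive part);
    conflicting products are redistributed to their generators X and Y
    proportionally to m1(X) and m2(Y)."""
    out = {}
    for (x, mx), (y, my) in product(m1.items(), m2.items()):
        z = x & y
        if z:                        # non-empty intersection: keep conjunctively
            out[z] = out.get(z, 0.0) + mx * my
        else:                        # total conflict: redistribute proportionally
            total = mx + my
            if total > 0:
                out[x] = out.get(x, 0.0) + mx * mx * my / total
                out[y] = out.get(y, 0.0) + my * my * mx / total
    return out

A, B = frozenset("A"), frozenset("B")
m = pcr5_two_sources({A: 0.6, B: 0.4}, {A: 0.5, B: 0.5})
assert abs(sum(m.values()) - 1.0) < 1e-9   # PCR5 preserves total mass
```

Unlike Dempster's normalization, no mass is silently discarded: the conflict is reinvested in the very hypotheses that caused it.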
3. Embedded Application Domains: Image Fusion and Target Tracking
AFVF is operationalized in applications where multimodal, multi-source information must be fused for real-time, high-accuracy outputs:
- Image Fusion: Pixels are mapped to neutrosophic triplets (truth, indeterminacy, falsehood). Fusion uses N-norms and N-conorms (the neutrosophic analogues of T-norms and T-conorms) to combine data, with conflict management guided by fuzzy/neutrosophic logic. Examples include denoising, segmentation (using watershed methods), and multi-modal image combination.
- Target Tracking: AFVF extends to filter fusion in dynamic systems, proposing to unify Kalman, alpha-beta, particle, and other filtering algorithms through a nonlinear recurrent sequence framework. The approach adapts in real time to the changing reliability of sensor/measurements, dynamically selecting and fusing multiple filtering methods to enhance tracking accuracy and resilience.
In both cases, the core fusion rules are embedded as processing modules, so that actual fusion happens as data traverses the image analysis or tracking chain, with context-appropriate conflict handling and dynamic adjustment as application state evolves.
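A minimal illustration of this run-time adaptation idea for tracking (not the published algorithm; it merely switches the gain of a scalar smoothing filter when the innovation suggests the sensor is momentarily unreliable):

```python
def adaptive_track(measurements, alpha_trust=0.8, alpha_cautious=0.3, gate=2.0):
    """Scalar tracking sketch: use a high gain when the innovation is small
    (sensor looks reliable), fall back to a cautious gain when it is large,
    mimicking AFVF-style switching between fusion behaviours at run time."""
    estimate = measurements[0]
    history = [estimate]
    for z in measurements[1:]:
        innovation = z - estimate
        alpha = alpha_trust if abs(innovation) < gate else alpha_cautious
        estimate += alpha * innovation
        history.append(estimate)
    return history

track = adaptive_track([1.0, 1.1, 1.2, 9.0, 1.3])  # 9.0 is an outlier
```

The outlier at step 4 is damped by the cautious gain rather than followed at full strength; a full AFVF implementation would instead select among whole filters (KF, EKF, UKF, particle) by the same reliability logic.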
4. Algorithmic Recipes and Modular Implementation
AFVF’s computational instantiation involves stepwise, modular procedures:
- Source reliability is assessed; sources may be discounted or selected for combination.
- When all sources are deemed reliable, intersection (conjunctive) rules are preferred. Ambiguous reliability leads to disjunctive or mixed rule adoption.
- Post-fusion, explicit conflict masses are redistributed using rules such as PCR5.
- For filtering applications, the algorithm adaptively selects among candidate filters (KF, EKF, UKF, particle filter) in real time.
- In image scenarios, images are transformed to the neutrosophic domain and component-wise processed with relevant norms; conflict redistribution is performed within this transformed space before output normalization.
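The image-side steps above can be sketched end to end on a toy grey-level signal; the (T, I, F) mapping and the min/max norm pairing below are illustrative assumptions, not the specific transforms used in the source:

```python
def to_neutrosophic(pixel, max_val=255.0):
    """Map a grey-level pixel to an illustrative (T, I, F) triplet:
    T = normalized intensity, F = 1 - T, I largest at mid-grey."""
    t = pixel / max_val
    i = 1.0 - abs(2.0 * t - 1.0)   # most indeterminate at mid-grey
    return (t, i, 1.0 - t)

def fuse_triplets(p, q):
    """Component-wise fusion: min (a T-norm) on T and F, max (a T-conorm)
    on I, so disagreement between sources surfaces as indeterminacy."""
    return (min(p[0], q[0]), max(p[1], q[1]), min(p[2], q[2]))

img1, img2 = [0, 128, 255], [255, 128, 0]
fused = [fuse_triplets(to_neutrosophic(a), to_neutrosophic(b))
         for a, b in zip(img1, img2)]
```

In a real pipeline the fused triplets would then pass through conflict redistribution and be normalized back to the image domain before output.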
Algorithmically, a generic two-source fusion recipe takes the schematic form:
m(A) = Σ_{X ⊛ Y = A} N_d(m_1(X), m_2(Y)) + (conflict terms weighted by α, β)
where ⊛ denotes the combination operator, d is the operator's degree, N_d is the relevant norm operator (a T-norm for conjunctive, a T-conorm for disjunctive combination), and α, β encapsulate proportionality parameters.
5. Modular and Evolutionary Unification Scenario
The AFVF framework is inherently modular and continually incorporates advances from the broader fields of fusion theory, engineering, and cognitive computation:
- It is structured like a “cooking recipe” or “programmer’s logical chart” detailing choices of fusion space, rule, and algorithmic workflow based on situational demands.
- The system is agnostic to the specific fusion theory adopted, dynamically selecting or evolving the model as data characteristics change (e.g., if sensor cross-correlation structure changes, if application constraints redefine permissible intersections).
- The unification scenario incorporates continual evidence updates; new domain knowledge or modeling discoveries can be assimilated by updating the logical chart and corresponding computational steps.
6. Spectrum of Real-World and Emerging Applications
AFVF is motivated by and continues to expand its role in a range of mission-critical applications:
- Military and Surveillance: Multisensor fusion for robust target tracking using heterogeneous data (video, radar, sonar) in environments with unreliable and/or contradictory signals.
- Image Processing and Robotics: Enhanced image denoising, segmentation, and combination from diverse modalities; improving robot perception and actuation in unpredictable environments through dynamically selected fusion and filtering.
- Control and Autonomous Systems: Filtering frameworks for dynamic sensor selection in navigation, SLAM, and control, adapting fusion logic based on setup and real-time reliability change.
- Uncertain (Paraconsistent) Data Environments: Wherever fusion of inconsistent, incomplete, or paraconsistent information is required, with systematic discounting and conflict redistribution to achieve robust consensus.
- The modular and extensible nature of AFVF supports the integration of new domains and fusion paradigms as they emerge, enabling increasingly complex, context-adaptive, and real-time fusion scenarios.
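The systematic discounting mentioned above is classically done by scaling each focal mass by the source's reliability and moving the remainder to total ignorance (Shafer discounting; the reliability value below is illustrative):

```python
def discount(masses, ignorance, reliability):
    """Shafer discounting: scale every focal mass by the source's
    reliability and transfer the remaining belief to total ignorance."""
    out = {focal: mass * reliability for focal, mass in masses.items()}
    out[ignorance] = out.get(ignorance, 0.0) + (1.0 - reliability)
    return out

A, B = frozenset("A"), frozenset("B")
m = discount({A: 0.7, B: 0.3}, ignorance=A | B, reliability=0.9)
# mass: A -> 0.63, B -> 0.27, A∪B -> 0.10
```

Discounted sources then conflict less when combined, which is why discounting is typically applied before the conjunctive and PCR steps.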
7. Conclusion
Actual Fusion within Virtual Fusion (AFVF) constitutes a holistic unification principle blending diverse data fusion theories, flexible and adaptive conflict resolution, and algorithmic recipes suitable for high-stakes, real-world applications. By leveraging generalized fusion spaces and flexible, scenario-dependent rules, AFVF facilitates the translation of abstract, virtual data constructs and relationships into actionable, robust outputs; it further allows continual adaptation and incorporation of new scientific advances and is designed to support evolving engineering, perception, and cognitive fusion paradigms (Smarandache, 2015).