Cross-scene Knowledge Sharing Preference (CKSP)
- Cross-scene Knowledge Sharing Preference (CKSP) is a principled methodology that partitions transferable and private knowledge across heterogeneous scenes to prevent negative transfer.
- It employs selective sharing strategies—such as cross-connection matrices in recommendation, uncertainty-based weighting in hyperspectral classification, and client-specific adjustments in federated graph learning—to maintain domain-specific features.
- Empirical studies demonstrate that CKSP improves key performance metrics, including Hit-Rate@10, overall accuracy, and stability across various multi-domain scenarios.
Cross-scene Knowledge Sharing Preference (CKSP) is a principled methodology for selective parameter and knowledge transfer across structurally or semantically heterogeneous domains (“scenes”) in machine learning. CKSP addresses domain-structural shift, semantic misalignment, and user- or scene-specific preference retention by orchestrating which features, representations, or parameters are globally shared and which are reserved for local adaptation. Across the contemporary literature, CKSP has been instantiated in cross-domain recommendation, hyperspectral image classification, and federated graph learning, each reflecting a different mechanism for negotiating the shared-vs-private knowledge boundary.
1. Motivation and Definition
CKSP targets scenarios where naive or unfiltered sharing of parameters, features, or user preferences between domains induces negative transfer, representation conflict, or loss of essential domain-specific phenomena. The central question is how to determine and enforce the optimal division between knowledge that should be transferred (shared) and knowledge that must remain localized (private) to capture idiosyncratic scene or user traits.
In cross-domain recommendation, CKSP formalizes the hypothesis that certain preference traits (e.g., aesthetics) are stable across product domains and can be globally shared (Liu et al., 2019). In cross-scene hyperspectral classification, CKSP identifies semantically relevant shared features and down-weights or excludes source-only semantics to prevent semantic drift and negative transfer when label spaces differ (Huo et al., 8 Dec 2025). Federated graph learning systems adopt CKSP by restricting parameter sharing to spectrally generic GNN modules while enabling local, scene-specific message-passing biases (Tan et al., 26 Oct 2024).
2. Core Mechanisms and Mathematical Formalism
CKSP is operationalized via architectural and algorithmic mechanisms for feature/parameter sharing, selective transfer, and preference alignment.
2.1 CKSP in Cross-domain Recommendation
The Aesthetic preference Cross-Domain Network (ACDN) shares a user's latent aesthetic trait $\Theta_{aes}$ and user embedding matrix $P$ across domains, encoding all item images through a common feature extractor. Sparse, learnable cross-connection matrices $H^{\ell}$ couple the two domain-specific networks, enabling dual knowledge flow:
- Aesthetic extractor: $a_v = \Theta_{aes}(v)$, a shared network mapping each item image $v$ to an aesthetic feature vector.
- User embedding: a single matrix $P$ (shared), so each user $u$ has one vector $p_u$ in both domains.
- Fused input: $x_{ui} = [\,p_u \,\|\, q_i \,\|\, a_i\,]$, concatenating the shared user embedding, the domain-specific item embedding $q_i$, and the item's aesthetic feature $a_i$.
- Cross-connection: $a_A^{\ell+1} = \sigma\!\left(W_A^{\ell} a_A^{\ell} + H^{\ell} a_B^{\ell}\right)$, and symmetrically for domain $B$.
- Joint objective: minimization of the two domains' binary cross-entropy losses plus an $\ell_1$ regularizer on the $H^{\ell}$ that keeps the cross-connections sparse: $\mathcal{L} = \mathcal{L}_A + \mathcal{L}_B + \lambda \sum_{\ell} \lVert H^{\ell} \rVert_1$.
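As a concrete illustration, the sketch below couples two small MLP towers with bias-free linear maps standing in for the $H^{\ell}$ matrices. This is a minimal sketch, not the released ACDN code: the layer sizes, ReLU activations, and the treatment of the fused input as a precomputed vector are assumptions.

```python
import torch
import torch.nn as nn

class CrossConnectedTowers(nn.Module):
    """Two domain-specific MLP towers coupled layer-wise by learnable,
    bias-free cross-connection matrices (one per direction and layer)."""

    def __init__(self, in_dim=64, hidden=(32, 16)):
        super().__init__()
        dims = (in_dim,) + tuple(hidden)
        n = len(hidden)
        self.w_a = nn.ModuleList(nn.Linear(dims[i], dims[i + 1]) for i in range(n))
        self.w_b = nn.ModuleList(nn.Linear(dims[i], dims[i + 1]) for i in range(n))
        # Cross-connections H^l: an L1 penalty keeps them sparse.
        self.h_ab = nn.ModuleList(nn.Linear(dims[i], dims[i + 1], bias=False) for i in range(n))
        self.h_ba = nn.ModuleList(nn.Linear(dims[i], dims[i + 1], bias=False) for i in range(n))
        self.out_a = nn.Linear(hidden[-1], 1)
        self.out_b = nn.Linear(hidden[-1], 1)

    def forward(self, x_a, x_b):
        # x_a, x_b: fused inputs [user emb || item emb || aesthetic feature].
        a, b = x_a, x_b
        for wa, wb, hab, hba in zip(self.w_a, self.w_b, self.h_ab, self.h_ba):
            a, b = torch.relu(wa(a) + hba(b)), torch.relu(wb(b) + hab(a))  # dual flow
        return torch.sigmoid(self.out_a(a)), torch.sigmoid(self.out_b(b))

    def l1_cross_penalty(self):
        return sum(h.weight.abs().sum() for h in [*self.h_ab, *self.h_ba])
```

Calling `model(torch.randn(8, 64), torch.randn(8, 64))` yields interaction probabilities for both domains; adding `lam * model.l1_cross_penalty()` to the loss enforces the sparsity prior on the cross-connections.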
2.2 CKSP in Hyperspectral Image Classification
The Cross-scene Knowledge Integration (CKI) framework incorporates a Source Similarity Mechanism (SSM) that assigns sample-wise transfer weights:
- Domain-similarity scoring: a discriminator learns the probability $p_i \in [0,1]$ that feature $f_i$ is source-like.
- Uncertainty scoring: normalized entropy $u_i = -\tfrac{1}{\log C} \sum_{c=1}^{C} \hat{y}_{i,c} \log \hat{y}_{i,c}$, where $\hat{y}_i$ is the $C$-class softmax prediction.
- Preference weight: $\omega_i = p_i \,(1 - u_i)$, combining the similarity and confidence signals.
Weighted source loss: $\mathcal{L}_{src} = \frac{1}{N_s} \sum_{i=1}^{N_s} \omega_i \, \ell\big(f(x_i^{s}), y_i^{s}\big)$.
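A short PyTorch rendering of this weighting scheme is given below. The helper names and tensor shapes are assumptions, as are the discriminator and classifier that would produce `disc_logits` and `cls_logits`; only the formulas mirror the ones above.

```python
import math
import torch
import torch.nn.functional as F

def preference_weights(disc_logits, cls_logits):
    """Sample-wise transfer weights: the discriminator's source-likeness
    score p_i times (1 - u_i), where u_i is normalized prediction entropy."""
    p = torch.sigmoid(disc_logits)                            # p_i in [0, 1]
    probs = F.softmax(cls_logits, dim=-1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=-1)
    u = entropy / math.log(cls_logits.shape[-1])              # divide by log C
    return p * (1.0 - u)

def weighted_source_loss(cls_logits, labels, weights):
    """Per-sample cross-entropy on source data, scaled by the (detached)
    preference weights so they act as fixed coefficients."""
    per_sample = F.cross_entropy(cls_logits, labels, reduction="none")
    return (weights.detach() * per_sample).mean()
```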
2.3 CKSP in Federated Graph Learning
FedSSP decomposes GNN knowledge into:
- Generic spectral encoders $\theta_i^{gen}$: globally aggregated to capture transfer-invariant spectral properties.
- Personalized preference vector $\delta_i$: client-specific adjustment of graph-level feature vectors.
- Local loss (with global consensus regularizer): $\mathcal{L}_i = \mathcal{L}_{task}(\theta_i, \delta_i) + \tfrac{\mu}{2} \lVert \theta_i^{gen} - \theta_g^{gen} \rVert^2$, where $\theta_g^{gen}$ denotes the server-aggregated generic parameters.
Only the generic spectral parameters are communicated globally; personalization remains client-local.
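The decomposition can be sketched as follows. The `(1 + delta)` rescaling, the proximal consensus term, and all dimensions are illustrative assumptions; graph encoding is abstracted to a plain feature map rather than an actual GNN.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ClientModel(nn.Module):
    """Client model split per CKSP: a generic spectral encoder that is
    aggregated globally, plus private parts (preference vector delta and
    classification head) that never leave the client."""

    def __init__(self, in_dim=16, feat_dim=32, n_classes=5):
        super().__init__()
        self.spectral = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())  # shared
        self.delta = nn.Parameter(torch.zeros(feat_dim))  # private preference vector
        self.head = nn.Linear(feat_dim, n_classes)        # private head

    def forward(self, x):
        h = self.spectral(x)                       # graph-level features (abstracted)
        return self.head(h * (1.0 + self.delta))   # client-specific adjustment

def local_loss(model, global_spectral, x, y, mu=0.1):
    """Task loss plus a proximal term pulling the client's generic (spectral)
    parameters toward the server-aggregated copy."""
    ce = F.cross_entropy(model(x), y)
    prox = sum((p - g.detach()).pow(2).sum()
               for p, g in zip(model.spectral.parameters(),
                               global_spectral.parameters()))
    return ce + 0.5 * mu * prox
```

In each communication round, only `model.spectral.state_dict()` would be uploaded for averaging, while `delta` and `head` remain local.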
3. Implementations Across Representative Domains
| Domain | Shared via CKSP | Non-shared/Personalized |
|---|---|---|
| Recommendation (Liu et al., 2019) | User aesthetic extractor, embeddings, $H^{\ell}$ | Item embeddings, domain-specific MLPs |
| HSI Classification (Huo et al., 8 Dec 2025) | Aligned feature encoders, high-weighted semantics from source | Target-private teacher branch, complementary distillation |
| Federated GNN (Tan et al., 26 Oct 2024) | Spectral encoder/filter parameters | Convolution weights, preference vector, classification head |
CKSP instantiations are governed by domain and task structure: aesthetic preference transfer in recommendation, semantic-similarity weighting for class transfer in HSI, and spectral vs. structural parameter separation in GNN federated optimization.
4. Algorithmic Strategies for CKSP
CKSP requires both architectural innovation and training-time procedures to enforce the preferred sharing scheme.
- Selective parameter sharing: Dual-network architectures, cross-connections, and modularization (e.g., spectral modules) isolate transferable features for joint optimization.
- Sample- or semantics-aware transfer: Soft weights ($\omega_i$) for source samples, driven by discriminator and entropy measures, prevent negative transfer from out-of-range categories.
- Orthogonality constraints: Partial distance correlation and ensemble-based strategies yield distinct shared and private feature branches, mitigating redundancy and overfitting to frequent source classes (Huo et al., 8 Dec 2025); a simplified decorrelation sketch follows this list.
- Personalized adaptation: Learnable adjustment modules (e.g., δ_i in FedSSP) enable fine-tuning transfer representations to local idiosyncrasies without impeding shared module optimization.
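As referenced in the orthogonality bullet above, a simple stand-in for such constraints is a squared cross-correlation penalty between the two branches. This swaps in plain linear decorrelation for the partial distance correlation used in the cited work, purely for illustration.

```python
import torch

def decorrelation_penalty(shared, private, eps=1e-8):
    """Penalize linear correlation between shared and private feature
    branches (a simplified stand-in for partial distance correlation)."""
    s = (shared - shared.mean(dim=0)) / (shared.std(dim=0) + eps)
    p = (private - private.mean(dim=0)) / (private.std(dim=0) + eps)
    corr = (s.T @ p) / shared.shape[0]  # cross-correlation between branches
    return (corr ** 2).mean()           # zero iff branches are uncorrelated
```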
5. Empirical Effects and Ablation Evidence
CKSP implementations demonstrate measurable gains versus naive or fully-shared transfer across several empirical studies.
- In cross-domain recommendation (Clothing→Home Improvement), ACDN improves Hit-Rate@10 from 0.2230 (CoNet) to 0.2289, with comparable gains in NDCG and MRR, and ablation confirms the largest gain arises from shared aesthetic features (Liu et al., 2019).
- In HSI classification, CKSP's SSM module consistently delivers +1 to +3 percentage points of overall accuracy over ASC-only baselines, with gains sustained under severe target-label scarcity (Huo et al., 8 Dec 2025). In ADGKT, the staged introduction of GradVac, LogitNorm, and disagreement/ensemble strategies yields cumulative increases in OA (e.g., from a 74.27% baseline to 87.52% with full CKSP-driven ADGKT) (Huo et al., 8 Dec 2025).
- In federated graph learning, FedSSP with full CKSP outperforms state-of-the-art baselines by 1–5.5 points across single-, double-, and multi-domain splits. GSKS and PGPA each improve performance in isolation, but their combination under CKSP produces the strongest results and the greatest stability across domain-shifting scenarios (Tan et al., 26 Oct 2024).
6. Significance and Theoretical Implications
CKSP marks a shift from monolithic or adversarial alignment strategies toward nuanced, preference-sensitive knowledge transfer. CKSP frameworks reduce the risk of negative transfer, preserve domain-private diversity, and enable robust, data-efficient adaptation in settings with label non-overlap, structural heterogeneity, or user/personality-specific traits across scenes.
A plausible implication is that CKSP may serve as a general paradigm for multi-scene, multi-agent, or multi-modal machine intelligence, where sharing preference (i.e., “who should learn what from whom, and why”) is fundamental to scalable and reliable transfer. Application contexts span recommendation, remote sensing, federated graph reasoning, and potentially language and vision foundation models.
7. Open Questions and Future Directions
Outstanding problems include: formal optimality criteria for what constitutes “generic” vs. “scene-specific” knowledge; automated mechanisms for preference separation and shareability assessment; scaling CKSP to continual, multi-domain, or massively distributed scenarios under communication, privacy, and computation constraints; and deeper understanding of how CKSP impacts fairness, privacy, and robustness guarantees in federated and multi-task learning systems.