
AVP-Pro: Multi-domain Advanced Framework

Updated 23 January 2026
  • The paper introduces a novel two-stage deep neural framework leveraging hierarchical feature fusion and contrastive learning for antiviral peptide identification and subtype classification.
  • It achieves outstanding performance with accuracy up to 0.9531 and macro-ACC improvements over baselines, demonstrating robust biomarker discovery.
  • The system integrates ESM-2 embeddings with ten handcrafted descriptors through parallel CNN and BiLSTM branches, ensuring precise and interpretable predictions.

AVP-Pro is a designation used for several distinct systems in unrelated research contexts. In recent peer-reviewed literature indexed on arXiv, the term "AVP-Pro" (or closely related architecture names) denotes frameworks in deep learning for antiviral peptide identification and in semantic multi-sensor SLAM for autonomous valet parking; "AVP" also appears as an abbreviation in research evaluating the Apple Vision Pro mixed-reality device. The following entry provides technical and analytic coverage of all key usages, with primary focus on AVP-Pro in bioinformatics (Wen et al., 16 Jan 2026), while contextualizing related systems in automated vehicle perception (AVP) and spatial video.

1. Definition and Scope

AVP-Pro denotes a two-stage deep neural framework for comprehensive antiviral peptide (AVP) identification and functional subtype classification (Wen et al., 16 Jan 2026). The system is engineered to address limitations in sequence-dependency modeling and in discriminating among high-similarity peptide samples. It implements hierarchical multi-modal feature fusion and contrastive learning—incorporating both handcrafted descriptors and pretrained protein language model embeddings—with Online Hard Example Mining (OHEM) enhanced by BLOSUM62-driven augmentation. The architecture enables both general AVP discrimination and granular classification of viral subclasses, outperforming state-of-the-art AVP predictors.

The term "AVP-Pro" also appears as an informal system name for high-precision, multi-sensor valet parking localization frameworks derived from AVM-SLAM (Li et al., 2023). In consumer electronics, the "AVP" acronym is frequently associated with Apple Vision Pro and assessed in spatial video (Izadimehr et al., 6 Jun 2025) and eye-tracking studies (Huang et al., 2024); however, these latter uses employ the term "AVP" as a product abbreviation, not as a model or system acronym.

2. AVP-Pro Bioinformatics Architecture

System Workflow and Feature Engineering

AVP-Pro is designed as a sequential two-stage inference and transfer learning pipeline. The foundational pipeline components are:

  • Stage 1 (General AVP Identification):
    • Input: Peptide sequences labeled according to AVP activity or as negative (two reference negative sets).
    • Data Augmentation: BLOSUM62-guided in silico mutations, insertions, and deletions, ensuring that augmented variants remain biologically plausible.
    • Feature Extraction: Construction of a "panoramic" descriptor by concatenating ESM-2-predicted embeddings with ten handcrafted feature sets: AAC, DPC, CKSAAGP, DistancePair, PAAC, QSOrder, Z-Scale, GTPC, binary encoding, DDE.
    • Hierarchical Fusion Network: Parallel CNN and BiLSTM branches process local motifs and long-range dependencies respectively; outputs are merged by self-attention and a learnable adaptive gating module.
    • Training: Joint minimization of focal classification loss, OHEM-driven contrastive loss, and consistency regularization.
  • Stage 2 (Functional Subtype Prediction):
    • Transfer Fine-Tuning: Weights from the above feature extractor are transferred; only the classification head is retrained to recognize subclasses (six viral families, eight specific viruses) under limited-data constraints.
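The BLOSUM62-guided augmentation step above can be sketched as follows. This is a minimal illustration, not the authors' code: the `BEST_SUBSTITUTE` table is a hand-picked subset of conservative substitution pairs (each residue mapped to its highest-scoring off-diagonal BLOSUM62 partner), not the full 20×20 matrix, and only substitutions are shown, not insertions or deletions.

```python
import random

# Illustrative subset of conservative substitutions: for each residue,
# a substitute with a high off-diagonal BLOSUM62 score. The real matrix
# covers all 20 amino acids.
BEST_SUBSTITUTE = {
    "I": "V",  # BLOSUM62(I, V) = 3
    "V": "I",
    "K": "R",  # BLOSUM62(K, R) = 2
    "R": "K",
    "D": "E",  # BLOSUM62(D, E) = 2
    "E": "D",
    "F": "Y",  # BLOSUM62(F, Y) = 3
    "Y": "F",
}

def augment(seq: str, n_mutations: int = 1, rng: random.Random = None) -> str:
    """Return a variant of `seq` with up to `n_mutations` conservative
    substitutions; positions without a known substitute stay unchanged."""
    rng = rng or random.Random(0)
    chars = list(seq)
    candidates = [i for i, c in enumerate(chars) if c in BEST_SUBSTITUTE]
    for i in rng.sample(candidates, min(n_mutations, len(candidates))):
        chars[i] = BEST_SUBSTITUTE[chars[i]]
    return "".join(chars)
```

Because every substitute in the table is biochemically similar to the original residue, augmented variants remain plausible peptides, which is the property the paper relies on for generating contrastive positives.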

A schematic workflow is as follows:

Sequence Data
    ↓
Data Augmentation (BLOSUM62)
    ↓
[Feature Engineering: Panoramic → X]
    ↓
[Parallel CNN → Self-attention]
[Parallel BiLSTM → Self-attention]
    ↓
Adaptive Gating → E_final
    ↓
MLP Classifier → Prediction (Stage 1: AVP vs non-AVP; Stage 2: subtype)
    ↓
Loss: L_focal + λ₁L_contrastive + λ₂L_consistency
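The joint objective on the last line of the schematic combines three terms. A minimal NumPy sketch of the binary focal term and the weighted sum is given below; the λ values are placeholders, not the paper's reported settings.

```python
import numpy as np

def focal_loss(p, y, gamma=2.0):
    """Binary focal loss: down-weights easy examples via (1 - p_t)^gamma.
    p: predicted probability of class 1; y: labels in {0, 1}."""
    pt = np.where(y == 1, p, 1.0 - p)  # probability assigned to the true class
    return float(np.mean(-((1.0 - pt) ** gamma) * np.log(pt)))

def total_loss(l_focal, l_contrastive, l_consistency, lam1=0.5, lam2=0.1):
    """L = L_focal + lambda1 * L_contrastive + lambda2 * L_consistency."""
    return l_focal + lam1 * l_contrastive + lam2 * l_consistency
```

The (1 − p_t)^γ factor is what distinguishes focal loss from plain cross-entropy: confidently correct examples contribute almost nothing, so gradient signal concentrates on the hard, high-similarity samples the paper targets.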

Individual classic descriptors are precisely defined in the referenced work; for example, DPC (Dipeptide Composition): x_{DPC} \in \mathbb{R}^{400}, calculated as relative dipeptide frequencies across the input peptide.
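As an illustration of one such descriptor, DPC can be computed as below. This is a straightforward sketch of the standard definition, not the authors' implementation:

```python
import itertools
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
# All 400 ordered amino-acid pairs, with a fixed index for each.
DIPEPTIDES = ["".join(p) for p in itertools.product(AMINO_ACIDS, repeat=2)]
INDEX = {dp: i for i, dp in enumerate(DIPEPTIDES)}

def dpc(seq: str) -> np.ndarray:
    """Dipeptide Composition: relative frequency of each of the 400 ordered
    pairs over the L-1 overlapping dipeptide windows of `seq`."""
    x = np.zeros(400)
    pairs = [seq[i:i + 2] for i in range(len(seq) - 1)]
    for p in pairs:
        x[INDEX[p]] += 1
    return x / max(len(pairs), 1)
```

The resulting vector sums to 1 for any non-trivial sequence, so descriptors from peptides of different lengths are directly comparable.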

Adaptive Fusion, Attention, and Gating

  • CNN Module: Three parallel 1D convolutions (kernels of 3/5/7, 128 filters each) extract multi-scale motifs. Feature maps are concatenated into M_{cnn}.
  • BiLSTM Module: Two-layer, 256-units-per-direction bidirectional LSTM captures global context; outputs concatenated.
  • Self-Attention: Applied on both CNN and BiLSTM outputs; attention weights \alpha_t are computed per hidden timestep to yield context vectors V_{cnn} and V_{bilstm}.
  • Adaptive Gating: Gate g = \sigma(W_g [V_{cnn}; V_{bilstm}] + b_g) determines the weighted sum for the final embedding E_{final} = g \cdot V_{cnn} + (1-g) \cdot V_{bilstm}.
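The gating step above can be sketched in NumPy as follows. Dimensions and the random initialization are illustrative only; in the real model W_g and b_g are learned and the context vectors come from the attention-pooled CNN and BiLSTM branches.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # embedding width (illustrative; the paper's dimensions differ)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Stand-ins for the attention-pooled context vectors of the two branches.
V_cnn = rng.normal(size=d)
V_bilstm = rng.normal(size=d)

# Gate parameters (learned in the real model; random here).
W_g = rng.normal(size=(d, 2 * d)) * 0.1
b_g = np.zeros(d)

# g = sigmoid(W_g [V_cnn; V_bilstm] + b_g): per-dimension mixing weights.
g = sigmoid(W_g @ np.concatenate([V_cnn, V_bilstm]) + b_g)

# E_final = g * V_cnn + (1 - g) * V_bilstm: convex combination per dimension.
E_final = g * V_cnn + (1.0 - g) * V_bilstm
```

Because g lies strictly in (0, 1), each component of E_final is a convex combination of the two branch outputs, letting the network interpolate per dimension between local-motif and long-range evidence.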

Loss Functions and Contrastive Learning

Contrastive learning is anchored by batches of anchors, positives (BLOSUM-augmented), and OHEM-selected hard negatives, with the final formulation:

L_{contrastive} = -\frac{1}{N} \sum_{i=1}^{N} \log \frac{\exp(\operatorname{sim}(x_a^i, x_p^i)/\tau)}{\exp(\operatorname{sim}(x_a^i, x_p^i)/\tau) + \sum_{k=1}^{K} \exp(\operatorname{sim}(x_a^i, n_k^i)/\tau)}

Positives are constructed via amino acid substitutions with the highest "second-best" BLOSUM62 scores; negatives are mined by selecting, for each anchor, the candidates most similar to the current positive prototype (the hardest negatives).
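A direct NumPy transcription of the formulation above, assuming cosine similarity as sim and the shapes noted in the comments:

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.1):
    """Contrastive (InfoNCE-style) loss over a batch.
    anchor, positive: (N, d) embeddings; negatives: (N, K, d) hard negatives
    per anchor; tau: temperature."""
    def cos(a, b):
        return np.sum(a * b, axis=-1) / (
            np.linalg.norm(a, axis=-1) * np.linalg.norm(b, axis=-1))

    pos = np.exp(cos(anchor, positive) / tau)               # (N,)
    neg = np.exp(cos(anchor[:, None, :], negatives) / tau)  # (N, K)
    return float(np.mean(-np.log(pos / (pos + neg.sum(axis=1)))))
```

The loss is near zero when each anchor is aligned with its positive and far from its negatives, and grows as hard negatives approach the anchor, which is exactly the pressure OHEM exploits.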

3. Evaluation, Performance, and Practical Deployment

Experimental Results

  • Stage 1 (General AVP):
    • On the Set 1-nonAVP benchmark, achieved accuracy 0.9531 and MCC 0.9064; perfect performance on Set 2-nonAMP.
    • Outperformed AVP-IFT by +3.1% ACC, +6.8% MCC.
  • Stage 2 (Subtype Prediction):
    • Viral family classification, ACC range 0.8872–0.9887; AUROC 0.9063–0.9870.
    • Specific virus classification, ACC up to 1.0; AUROC up to 1.0.
    • Macro-ACC substantially higher than prior work (0.9656 vs 0.8837).

Table: Summary of AVP-Pro Performance (test sets)

| Task | ACC | MCC | AUROC (max) |
|---|---|---|---|
| General AVP (Set 1) | 0.9531 | 0.9064 | -- |
| General AVP (Set 2) | 1.0000 | 1.0000 | -- |
| Viral Family | 0.8872–0.9887 | -- | 0.9870 |
| Specific Virus | 0.9041–1.0000 | -- | 1.0000 |
  • Operational: The framework runs as a web service permitting FASTA sequence upload, with visualization of attention heatmaps and detailed CSV export (Wen et al., 16 Jan 2026).

Interpretability and Regularization

  • The system yields both residue-level attention maps from CNN and BiLSTM paths and provides gating coefficients for user interpretability.
  • Regularization techniques include BLOSUM62-consistent test-time augmentation and dropout.
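One common form of the consistency regularization mentioned in the workflow is a penalty on the divergence between predictions for a sequence and its augmented view; the sketch below uses a squared difference between the two predictive distributions (the paper may use a different divergence).

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def consistency_loss(logits_orig, logits_aug):
    """Mean squared difference between the predictive distributions for a
    sequence and its BLOSUM62-consistent augmented variant."""
    p, q = softmax(logits_orig), softmax(logits_aug)
    return float(np.mean((p - q) ** 2))
```

Minimizing this term pushes the model toward predictions that are invariant under biologically plausible mutations, complementing the contrastive objective.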

4. Contrast with Other AVP-Pro/AVP Systems

Although AVP-Pro as described above pertains to bioinformatics, several other distinct AVP-related "Pro" systems exist in other domains:

  • Automated Valet Parking (Visual Perception and SLAM):
    • "AVP-Pro" designates a robust, multi-sensor, BEV-based SLAM system fusing four fisheye cameras, IMU, and wheel encoders for localization in challenging, GPS-denied environments (Li et al., 2023).
    • Key innovations include CNN-based flare removal in BEV mosaics and a Semantic Pre-Qualification (SPQ) module for robust loop detection.
    • Measured accuracy: RMSE 0.785 m in an underground garage; superior to AVP-SLAM and BEV Edge SLAM baselines.
  • Consumer Mixed Reality—Apple Vision Pro:
    • "AVP" abbreviates Apple Vision Pro, not a model or analysis.
    • AVP is assessed for eye-tracking accuracy (mean error 1.11° in MR, 0.93° in immersive VR; SUS ≈ 73.2), spatial video capture and encoding (dual 2200×2200 px@30 fps stereo, MV-HEVC encoding), and spatial dataset generation (Izadimehr et al., 6 Jun 2025, Huang et al., 2024).
    • The term "AVP-Pro" does not denote an analytic model or research system in this consumer context.

5. Limitations and Future Directions

  • Bioinformatics AVP-Pro:
    • Relies exclusively on sequence-derived features; lacks 3D structural input (e.g., AlphaFold2 predictions).
    • Limited generalization in subclasses with very sparse training data (<80 samples).
    • Proposed future enhancements: joint structural integration, active learning for data prioritization, peptide cocktail modeling.
  • Automated Valet Parking—SLAM AVP-Pro:
    • Dependent on clear road markings; struggles with novel graffiti or degraded paint.
    • Uses fixed homography for BEV mapping; variable ground planes (ramps) can require online planar adaptation.
    • Proposals include adaptive warping, transformer-based segmentation, differentiable EKF, and sparse 3D lidar fusion.

A general implication is that "AVP-Pro" systems, across both bioinformatics and vehicle perception, are characterized by tightly coupled multi-modal data fusion, modular deep feature extraction, dynamic adaptation to input structure, and explicit mechanisms to handle dataset or environment-specific challenges.

6. Data Availability and Broader Impact

  • Bioinformatics AVP-Pro:
    • Web server and source code: https://wwwy1031-avp-pro.hf.space, supporting single and batch mode inference, full interpretability outputs (Wen et al., 16 Jan 2026).
    • Facilitates high-throughput AVP screening for antiviral drug discovery, potentially impacting peptide therapeutics and related molecular design.

The AVP-Pro designation thus spans high-impact emerging tools in sequence bioinformatics, automated vehicle mapping, and immersive computing, unified by strong emphasis on multi-modal fusion, robust feature representation, and state-of-the-art empirical validation.
