
Bona-Fide Presentation Classification Error Rate

Updated 10 April 2026
  • BPCER is defined as the proportion of bona-fide biometric presentations misclassified as attacks, which is key for assessing PAD system usability.
  • It is calculated using a decision threshold on PAD scores, comparing false rejections to the total number of genuine presentations according to ISO/IEC 30107-3.
  • Evaluating BPCER alongside APCER via DET/ROC curves helps visualize trade-offs between security and user convenience in biometric systems.

The Bona-Fide Presentation Classification Error Rate (BPCER) is an error metric established by ISO/IEC 30107-3 and widely used in biometric presentation attack detection (PAD) systems. BPCER quantifies the proportion of genuine (bona-fide) user attempts that are incorrectly classified as attacks—i.e., the system's false-rejection rate on legitimate presentations. BPCER, in conjunction with the Attack Presentation Classification Error Rate (APCER), forms the foundational means for empirically characterizing a PAD system’s trade-off between usability for legitimate users and security against spoof or attack presentations.

1. Formal Definition and Mathematical Expression

BPCER is defined as the fraction of bona-fide presentations misclassified as attacks by a PAD system at a specified decision threshold. Let $N_{\mathrm{BF}}$ be the total number of bona-fide presentations, and $N_{\mathrm{BF} \to \mathrm{Attack}}$ the number of those presentations labeled as attacks (i.e., false rejects). The canonical formula is:

$$\mathrm{BPCER} = \frac{N_{\mathrm{BF} \to \mathrm{Attack}}}{N_{\mathrm{BF}}} \times 100\%$$

This definition appears verbatim across multiple works, including deep learning-based systems for hand (1809.04364), fingerprint (Adami et al., 2023), finger-vein (Singh et al., 2019), and fingerphoto PAD (Li et al., 2024), as well as iris and document PAD (Chen et al., 2021; Dowling et al., 18 Mar 2026). In implementations based on thresholded detector scores, let $\mathrm{score}(x)$ denote the PAD output for presentation $x$ and $\tau$ the decision boundary. Presentations with $\mathrm{score}(x) < \tau$ are classified as attacks, and the formula operationalizes as:

$$\mathrm{BPCER}(\tau) = \frac{|\{x \in \mathcal{X}_{\mathrm{BF}} : \mathrm{score}(x) < \tau\}|}{|\mathcal{X}_{\mathrm{BF}}|} \times 100\%$$

where $\mathcal{X}_{\mathrm{BF}}$ is the bona-fide evaluation set.
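The thresholded formulation can be computed directly from a set of PAD scores. The following is a minimal sketch, not tied to any particular system; the score convention (higher score means more likely bona fide) is an assumption:

```python
import numpy as np

def bpcer(bona_fide_scores, tau):
    """BPCER(tau): percentage of bona-fide presentations whose PAD score
    falls below the decision threshold and is thus classified as an attack.
    Assumes higher scores indicate bona-fide presentations."""
    scores = np.asarray(bona_fide_scores, dtype=float)
    return 100.0 * np.count_nonzero(scores < tau) / scores.size

# Toy example: 2 of 8 bona-fide scores fall below tau = 0.5
print(bpcer([0.9, 0.8, 0.7, 0.6, 0.55, 0.52, 0.4, 0.3], 0.5))  # → 25.0
```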

2. Relationship to APCER, EER, and DET/ROC Curves

BPCER is paired with APCER, the fraction of attack presentations misclassified as bona fide:

$$\mathrm{APCER} = \frac{N_{\mathrm{Attack} \to \mathrm{BF}}}{N_{\mathrm{Attack}}} \times 100\%$$

The operating trade-off between BPCER and APCER is controlled by the decision threshold $\tau$. Varying $\tau$ traces out the Detection Error Tradeoff (DET) or ROC curves, enabling the visualization of trade-offs between security (minimizing APCER) and usability (minimizing BPCER). The Equal Error Rate (EER) is attained at the threshold $\tau_{\mathrm{EER}}$ such that $\mathrm{APCER}(\tau_{\mathrm{EER}}) = \mathrm{BPCER}(\tau_{\mathrm{EER}})$ (Chen et al., 2021; Singh et al., 2019). DET and ROC curves are standard in both biometric and document PAD, and fixed operating points are typically reported as "APCER at BPCER = x%" or conversely (Dowling et al., 18 Mar 2026). This convention is mandated by ISO/IEC 30107-3.
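The DET sweep and EER search can be sketched with a brute-force pass over observed score values. This is an illustrative implementation, not the interpolation-based estimators used in production toolkits; the score convention (higher means bona fide) is again assumed:

```python
import numpy as np

def bpcer(bona_fide_scores, tau):
    s = np.asarray(bona_fide_scores, dtype=float)
    return 100.0 * np.count_nonzero(s < tau) / s.size

def apcer(attack_scores, tau):
    # Attack presentations accepted as bona fide: score >= tau.
    s = np.asarray(attack_scores, dtype=float)
    return 100.0 * np.count_nonzero(s >= tau) / s.size

def det_points(bona_fide_scores, attack_scores):
    """(APCER, BPCER) pairs swept over all observed scores: a discrete DET curve."""
    taus = np.unique(np.concatenate([bona_fide_scores, attack_scores]))
    return [(apcer(attack_scores, t), bpcer(bona_fide_scores, t)) for t in taus]

def eer(bona_fide_scores, attack_scores):
    """Approximate EER: evaluate at the candidate threshold where
    |APCER - BPCER| is smallest, then average the two rates."""
    taus = np.unique(np.concatenate([bona_fide_scores, attack_scores]))
    t = min(taus, key=lambda t: abs(apcer(attack_scores, t) - bpcer(bona_fide_scores, t)))
    return (apcer(attack_scores, t) + bpcer(bona_fide_scores, t)) / 2.0

# Perfectly separable toy scores give EER = 0
print(eer([0.9, 0.8, 0.7, 0.6], [0.4, 0.3, 0.2, 0.1]))  # → 0.0
```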

3. Experimental Protocols and Threshold Selection

The standard PAD evaluation protocol involves splitting data into bona-fide and attack sets, computing PAD scores, and choosing a threshold $\tau$ to set BPCER (or APCER) at a predetermined policy level. For example:

  • Choose $\tau$ on the (training or validation) bona-fide set such that exactly x% of bona-fide samples fall below $\tau$ (i.e., BPCER = x%).
  • Apply $\tau$ to the attack test set; the APCER measured at this threshold quantifies system vulnerability under the same user-convenience constraint.
  • Reporting is thus summarized as “APCER @ BPCER = x%,” e.g., APCER @ BPCER = 1% (Dowling et al., 18 Mar 2026).
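The fixed-BPCER protocol above can be sketched as follows. Using `np.percentile` to pick the threshold on bona-fide scores is an illustrative convenience (it interpolates between order statistics), not a procedure prescribed by ISO/IEC 30107-3; the higher-is-bona-fide score convention is assumed:

```python
import numpy as np

def threshold_at_bpcer(bona_fide_scores, target_bpcer_pct):
    """Choose tau so that about target_bpcer_pct of bona-fide scores
    fall below it (assuming higher scores indicate bona fide)."""
    return np.percentile(np.asarray(bona_fide_scores, dtype=float), target_bpcer_pct)

def apcer_at_bpcer(bona_fide_scores, attack_scores, target_bpcer_pct):
    """'APCER @ BPCER = x%': fix tau on bona-fide data, then measure the
    percentage of attacks still accepted (score >= tau) at that threshold."""
    tau = threshold_at_bpcer(bona_fide_scores, target_bpcer_pct)
    attacks = np.asarray(attack_scores, dtype=float)
    return 100.0 * np.count_nonzero(attacks >= tau) / attacks.size

# Synthetic scores: bona fide spread over [0, 1], attacks over [0, 0.5]
bona = np.linspace(0.0, 1.0, 101)
attacks = np.linspace(0.0, 0.5, 51)
print(threshold_at_bpcer(bona, 10))       # tau ≈ 0.10
print(apcer_at_bpcer(bona, attacks, 10))  # APCER at the BPCER = 10% point
```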

This framework is implemented for leave-one-attack-type-out open-set protocols (iris PAD) (Dowling et al., 18 Mar 2026), document PAD under unseen printing/imaging devices (Chen et al., 2021), diffusion-model-based fingerphoto PAD (Li et al., 2024), and contactless fingerprint (Adami et al., 2023). In some studies, the threshold is directly the softmax class boundary (e.g., hand PAD (1809.04364)).

4. Role in PAD System Performance and Impact

BPCER is interpreted as the empirical false-reject rate for genuine users, with low BPCER being crucial for user acceptance, minimizing inconvenience or lockout of legitimate users. In deployments, one generally sets a policy upper bound on tolerable BPCER (e.g., 1%, 5%, 10%) and seeks to minimize APCER at this operating point. The balanced summary is sometimes reflected by the Average Classification Error Rate (ACER):

$$\mathrm{ACER} = \frac{\mathrm{APCER} + \mathrm{BPCER}}{2}$$

However, ACER is only meaningful with known constituent rates; reporting BPCER (and APCER) separately is standard (Adami et al., 2023).
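For completeness, the ACER summary is a one-line computation over the two constituent rates:

```python
def acer(apcer_pct, bpcer_pct):
    """Average Classification Error Rate: unweighted mean of APCER and BPCER,
    both given as percentages."""
    return (apcer_pct + bpcer_pct) / 2.0

print(acer(4.0, 2.0))  # → 3.0
```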

Empirically, the literature reports BPCER values spanning from 0.12% (Adami et al., 2023) to 12.89% (Li et al., 2024). Zero BPCER is occasionally achieved, notably when thermal cues perfectly distinguish genuine from spoof presentations (1809.04364, Singh et al., 2019). Nonzero BPCER is a persistent challenge in systems encountering domain shift, limited spoof diversity, or unseen acquisition conditions (Li et al., 2024, Chen et al., 2021).

5. Comparative Results and Trade-off Insights

The following table summarizes representative BPCER values as reported across different PAD domains and datasets:

| Domain | BPCER (minimum–maximum) | Condition/Note |
|---|---|---|
| Iris (VISER; Dowling et al., 18 Mar 2026) | 1% (fixed) | Saliency-guided, open-set; APCER @ 1% BPCER |
| Fingerphoto (DDPM; Li et al., 2024) | 0–12.89% (intra); up to 96% (cross) | DDPM model on diverse PAIs; cross-domain BPCER rises sharply |
| Hand (thermal; 1809.04364) | 0% | Thermal as PAD cue, open-set |
| Fingerprint (Adami et al., 2023) | 0.12% | Contactless scenario, unseen attacks |
| Finger-vein (Singh et al., 2019) | 0–5% | SVM score fusion, EER/fixed APCER points; BPCER = 0% in optimal cases |

Results show that BPCER is highly sensitive to sensor modality, feature design, environmental mismatch, and domain generalization complexity. For instance, VISER’s saliency-driven eye-tracking models reduce APCER by over 10 percentage points at a fixed BPCER of 1%, but domain shifts in fingerphoto PAD cause BPCER to exceed 80% (Li et al., 2024).

6. Regulatory and Reporting Standards

ISO/IEC 30107-3 prescribes explicit reporting of BPCER at fixed APCER or vice versa. This practice is ubiquitous in the biometrics PAD literature (Chen et al., 2021, Dowling et al., 18 Mar 2026, Li et al., 2024). Fixed-BPCER reporting not only standardizes cross-comparisons of PAD solutions but aligns with deployment needs for limiting false rejections, particularly in high-security or user-facing environments.

Industry guidance further justifies this protocol, as it quantifies the practical impact of PAD thresholding policies on both system robustness and genuine-user experience. Reporting BPCER (and APCER) at multiple fixed points (e.g., 0.1%, 1%, 5%, 10%) across diverse datasets is recognized as the de facto characterization of biometric PAD systems.

7. Practical Considerations and Open Challenges

BPCER is influenced by the diversity of bona-fide training data, variation in acquisition conditions, PAD score calibration, and attack vector novelty. Systems that generalize poorly may exhibit large increases in BPCER under unseen test environments or sensor types (Li et al., 2024). Conversely, architectures leveraging robust features (e.g., thermal, 3D shape, eye-tracking saliency) are capable of achieving BPCER approaching zero on well-matched data (1809.04364, Singh et al., 2019, Dowling et al., 18 Mar 2026).

A persistent challenge is maintaining a low BPCER in conjunction with a low APCER across operationally realistic, open-set scenarios and heterogeneous data conditions. A plausible implication is that effective PAD system deployment requires training with diverse bona-fide samples and careful selection of operating thresholds to ensure both security (low APCER) and usability (low BPCER).


For comprehensive benchmarking and comparison, BPCER must always be reported alongside APCER (and optionally ACER), at standardized, policy-relevant operating points, and under explicit cross-validation or domain-generalization protocols (Chen et al., 2021, Dowling et al., 18 Mar 2026, Li et al., 2024, Adami et al., 2023, Singh et al., 2019, 1809.04364).
