Erasure-Based Analysis: Methods and Models

Updated 29 September 2025
  • Erasure-based analysis is a formal methodology in information security focused on securely erasing sensitive data, formalized through user and system properties such as well-formedness and secret confinement.
  • It defines key properties such as secret singularity, secret confinement, and stream ability to prevent post-erasure exposure of secret data.
  • The composite erasure theorem demonstrates that combining an erasure-friendly user with an input-erasing system guarantees indistinguishable post-erasure behavior.

Erasure-based analysis encompasses a range of methodologies, definitions, and protocols centered on the removal or secure handling of sensitive information such that no subsequent behavior or output of a system reveals any retrievable trace of the erased data. In computer science and information security, erasure-based analysis has acquired formal significance for evaluating systems, protocols, and user interactions where strict confidentiality, time-limited data usage, regulatory compliance, or compositional security is required. Key technical developments in this domain include the formalization of erasure properties, compositional frameworks for joint erasure, and precise user/system models that ensure no secret-dependent channel persists after erasure.

1. Formalization of the User Model

The foundational framework presented by Hunt and Sands in "A User Model for Information Erasure" models the user as a composition of a behavioral component and a memory component. The behavioral component, $U$, is defined as a labeled transition system (LTS) with state set $\mathcal{S} = \{ u_0, u_1, \ldots \}$ and transitions labeled by interaction actions, notably including $i?v$, denoting the act of the user reading the $i$-th secret value from a local memory store $\delta : \mathbb{N}^+ \rightarrow \mathbb{V}$. Thus, a user instance is denoted by $U(\delta)$, where $\delta$ supplies the concrete secret values.

This abstraction supports modeling the interactive sequences by which a user may supply secrets to a system, as well as enabling the tracking of which secrets are in use at every protocol step. The explicit separation between the behavioral LTS and the secret memory is critical: it allows the analysis to distinguish between protocol mechanics and the propagation of actual sensitive data, a prerequisite for rigorous erasure-based correctness arguments.
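
A minimal sketch of this separation is given below. It is illustrative only: the names (User, UserInstance, read_secret) and the extensional, finite representation of the LTS are assumptions made for this example, not identifiers or definitions from the paper.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

# Sketch of the user model: a behavioral LTS kept separate from the
# secret store delta. Labels are plain strings standing in for actions
# such as "a?BE", "i?v" (read the i-th secret), and "a!v" (emit a value).

State = str
Label = str


@dataclass
class User:
    """Behavioral component: a labeled transition system."""
    states: List[State]
    # transitions[s] lists the (label, successor) pairs enabled in state s
    transitions: Dict[State, List[Tuple[Label, State]]]
    initial: State

    def instantiate(self, delta: Dict[int, str]) -> "UserInstance":
        """Pair the behavioral LTS with concrete secrets, giving U(delta)."""
        return UserInstance(behavior=self, delta=dict(delta))


@dataclass
class UserInstance:
    """A user instance U(delta): the LTS plus a concrete secret store."""
    behavior: User
    delta: Dict[int, str]  # index i -> secret value, fetched by i?v actions

    def read_secret(self, i: int) -> str:
        """The local memory fetch performed by an i?v action."""
        return self.delta[i]
```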

2. Erasure Friendliness: A Set of Sufficient User Properties

Central to practical erasure-based analysis is the characterization of user behaviors that can be safely composed with erasure-enforcing systems. The paper identifies four formal properties that a user $U$ must satisfy to be considered erasure friendly (denoted $EF(U)$):

  1. Well-Formedness
    • Upon receipt of a “begin erasure” (BE) signal, the user immediately performs a local memory fetch ($i?v$) and emits the appropriate output, with no computation or dynamic feedback interleaved. This is formally encapsulated as $u \xrightarrow{a?BE} u_1 \xrightarrow{i?v} u_2 \xrightarrow{a!v} u'$, ensuring deterministic input of secrets.
  2. Secret Singularity
    • Each secret indexed in $\delta$ may be used at most once per execution. That is, for any execution trace $t \in T(U)$ and index $i$, events labeled $i?v$ occur at most once. This precludes reuse or correlation of secrets across erasure sessions and protocols.
  3. Secret Confinement
    • All observable behavior after an erasure zone is independent of the secret supplied within that zone. For the “erasure frontier” $E_0(u)$ (the set of user states immediately following an erasure), the set of traces $T(u_v)$ is required to equal $T(u_w)$ for all secret values $v, w$, axiomatizing the confinement of secret-dependent actions.
  4. Stream Ability
    • During erasure, output behavior must depend only on preselected secrets (via $\delta$) and not be dynamically varied based on interaction with the system. Formally, the output traces during an erasure zone must be “output equivalent,” meaning they reflect only data from $\delta$ and no system-provided input. This prevents adversarial channels resulting from users reflecting or storing data mixed with secrets.

These properties are necessary because—even with an erasing system—an uninformed or malicious user could undo the security guarantees of erasure (for example, by leaking the secret in a post-erasure phase or recycling the same secret across sessions).
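
As a concrete illustration, the sketch below checks two of these conditions, secret singularity and secret confinement, over explicit finite trace sets. The encoding (a trace as a tuple of (label, payload) events, with "read_secret" standing in for $i?v$) is an assumption made for this example rather than the paper's formalism.

```python
from collections import Counter
from typing import Iterable, Set, Tuple

# Assumed encoding for the example: a trace is a tuple of (label, payload)
# events; the label "read_secret" plays the role of the i?v action.
Event = Tuple[str, object]
Trace = Tuple[Event, ...]


def secret_singularity(traces: Iterable[Trace]) -> bool:
    """Each secret index is read at most once in every trace."""
    for trace in traces:
        reads = Counter(payload for label, payload in trace
                        if label == "read_secret")
        if any(count > 1 for count in reads.values()):
            return False
    return True


def secret_confinement(post_traces_v: Set[Trace],
                       post_traces_w: Set[Trace]) -> bool:
    """Post-erasure trace sets must coincide for any two secret values."""
    return post_traces_v == post_traces_w


# Reading index 1 once is fine; reading it twice violates singularity.
assert secret_singularity([(("read_secret", 1), ("emit", "x"))])
assert not secret_singularity([(("read_secret", 1), ("read_secret", 1))])
```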

3. Composition with Input-Erasing Systems: The Composite Erasure Theorem

An input-erasing system $S$ is defined as an input-enabled, deterministic LTS that uses BE/EE (begin/end erasure) delimiters to mark erasure sessions. $S$ satisfies input erasure (denoted $E(S)$) if, for any input secret $v$ and any alternative value $w$, traces beyond the erasure session are observationally equivalent: the system’s later behavior is provably independent of the value provided.
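
A schematic rendering of this condition is shown below. It is a paraphrase rather than the paper's formula; the projection $\mathrm{obs}_{>EE}$, which retains only the observations made after the end-of-erasure event, is assumed notation.

```latex
% Schematic only: post-erasure observations of S do not depend on which
% value was supplied inside the BE/EE session. obs_{>EE} is assumed
% notation for the projection onto events occurring after EE.
\[
  E(S) \;\approx\;
  \forall v, w \in \mathbb{V}.\;\;
  \mathrm{obs}_{>EE}\!\left(S \text{ given } v\right)
  \;=\;
  \mathrm{obs}_{>EE}\!\left(S \text{ given } w\right)
\]
```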

When a user $U$ with $EF(U)$ is composed in parallel with such a system $S$, with the respective synchronizations on interactive actions, the resulting system $U|S$ is proven to satisfy composite erasure ($EC(U|S)$):

$E(S) \wedge EF(U) \Longrightarrow EC(U|S)$

Explicitly, for any trace where $U(\delta)|S$ supplies a secret $v$ via $i?v$, there exists a corresponding trace in $U(\delta')|S$ (where $\delta'$ differs only at index $i$, with value $w$ instead of $v$) such that all observable behavior after the erasure session remains identical. This result is nontrivial: it relies on secret singularity to ensure protocol prefix invariance, stream ability for locally deterministic output during erasure, and secret confinement for complete “forgetting” after erasure.
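
For finite, executable toy models, this condition can be tested directly by running the composition twice, as in the sketch below. Here run_composition is an assumed driver, not an API from the paper: it executes $U(\delta)|S$ to completion and returns only the observations made after the erasure session.

```python
from typing import Callable, Dict, List, Tuple

Observation = Tuple[str, object]

# Assumed driver: given a secret store delta, execute U(delta)|S and return
# the observable events that occur after the end-of-erasure event.
RunComposition = Callable[[Dict[int, str]], List[Observation]]


def composite_erasure_at(run_composition: RunComposition,
                         delta: Dict[int, str],
                         index: int,
                         alternative: str) -> bool:
    """Check EC(U|S) at one index: replacing delta[index] with an
    alternative value must leave post-erasure observations unchanged."""
    delta_prime = dict(delta)
    delta_prime[index] = alternative  # delta' differs from delta only here
    return run_composition(delta) == run_composition(delta_prime)
```

Iterating this check over all indices and a finite value domain gives a brute-force test of composite erasure for such a toy model.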

4. Necessity and Strengthening of Formal User Requirements

Relative to prior informal discussions, the present formalism demonstrates that erasure-friendliness is strictly stronger than “do not copy nor echo secrets” intuitions. Secret singularity rules out inadvertent correlations (such as across multiple sessions), while secret confinement ensures the system cannot “stitch together” post-hoc inferences from protocol output. Stream ability stands in contrast to “safe wallet” heuristics, which might permit adversarial systems to covertly channel information out via user reflections.

This sharpening of the user model is essential: even securely designed erasing systems can be compromised without explicit user-side invariants, as users may themselves serve as side-channels or persistence surfaces for the secret data.

5. The Soundness Theorem and Its Implications

The main theorem of the paper formally asserts:

If $S$ satisfies input erasure ($E(S)$) and $U$ is erasure-friendly ($EF(U)$) and satisfies a suitable liveness condition, then $U|S$ satisfies composite erasure ($EC(U|S)$).
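
Schematically, writing the liveness side condition abstractly as $\mathrm{live}(U)$ (its exact formulation is not reproduced here), the theorem reads:

```latex
% live(U) abbreviates the paper's liveness side condition, left abstract.
\[
  E(S) \;\wedge\; EF(U) \;\wedge\; \mathrm{live}(U)
  \;\Longrightarrow\;
  EC(U \mid S)
\]
```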

The proof proceeds by leveraging the independence of protocol prefix traces (by well-formedness and singularity), the deterministic, stream-like output during erasure (by stream ability), and the erasure of all secret effects in successor states (by secret confinement). The composite system thus guarantees that secret changes in $\delta$ produce indistinguishable traces past erasure, regardless of adversarially constructed interleavings.

This result lays the formal foundation for engineering secure interactive systems—such as “secure wallets” or “trusted brokers”—which can mediate user–system interactions so that both adhere to their requisite constraints, extending beyond theoretical models to practical architectures.

6. Synthesis and Future Directions

The erasure-based analysis achieved here is characterized by the explicit dual modeling of user and system, a compendium of precise invariants for erasure-friendliness, and a compositional theorem that rigorously specifies how such systems interact without leaking secret data post-erasure. The requirements—well-formedness, singularity, confinement, and stream ability—collectively serve as both correctness conditions and design guidelines.

Potential developments building on this work include:

  • Generalizations to multi-level erasure and privilege schemes, where secrets may be erased at different “strengths” or isolation levels.
  • Tool-supported verification for automated checking of erasure-friendliness in protocol implementations.
  • Mediator agents that enforce $EF(U)$ on behalf of minimally trusted users, opening practical deployment avenues for compliant erasure-based architectures.

This formal yet tractable approach unifies protocol design, compositional verification, and practical system construction in the domain of information erasure, setting precise technical standards for what it means for systems and users to collectively “forget” sensitive data.
