Symbolic verification of Apple's Find My location-tracking protocol
Abstract: Tracking devices, while designed to help users find their belongings in case of loss or theft, raise new questions about privacy and surveillance, not just for their own users but, in the case of crowd-sourced location tracking, also for others only tangentially associated with these platforms. Apple's Find My is perhaps the most ubiquitous such system, running on millions of devices worldwide and able to locate even devices that have no cellular connectivity or GPS. Apple claims that the system is private and secure, but the code is proprietary, so such claims must be taken on faith. It is well known that, even with perfect cryptographic guarantees, logical flaws can creep into protocols and enable undesirable attacks. In this paper, we present a symbolic model of the Find My protocol together with a precise formal specification of its desirable properties, and provide automated, machine-checkable proofs of these properties in the Tamarin prover.
Explain it Like I'm 14
Overview
This paper checks whether Apple’s “Find My” system is really private and secure using math-based proofs. Find My helps you locate lost Apple devices (and accessories like AirTags) by asking nearby Apple devices to quietly send in where they saw it. Apple says this is designed to protect privacy, but the actual code is secret. The authors build a detailed, logic-based model of how Find My works and use an automated tool called Tamarin to prove that key security and privacy promises hold in their model.
What questions does the paper try to answer?
To make the big ideas easier to follow, here’s what the researchers wanted to know:
- Does the system keep the owner’s secret keys safe from attackers?
- If an attacker learns one of the temporary “rolling” keys, can they figure out past or future keys (or is each time window protected by “perfect forward secrecy”)?
- Are the actual locations of lost items kept secret, even though they are shared through other people’s phones and Apple’s servers?
- Could there be logical mistakes (not just weak encryption) that let an attacker trick the system?
How did they test it? (Approach in everyday language)
Think of Find My like a network of helpful phones. A lost device sends out a Bluetooth “beacon” using a public code that changes every 15 minutes (like changing the combination on a lock regularly). Nearby Apple devices (the “finders”) use that beacon to create a secret shared code and then upload an encrypted location report to Apple’s servers. Only the owner, who knows the special private codes, can later unlock those reports.
The researchers didn’t run Apple’s real code. Instead, they made a careful, step‑by‑step blueprint of the protocol and fed it to Tamarin, a tool that does symbolic verification. Here’s what that means in simple terms:
- Symbolic verification: Imagine every message in the system is a sealed envelope. The tool can “play out” all possible ways messages could be sent, intercepted, or altered—without breaking the envelope seals (encryption). It assumes the locks (crypto) are perfect, but the mail routes (the network) are controlled by a clever attacker.
- Attacker model: The attacker can read, block, and reorder any message, and pretend to be anyone. But they can’t open a sealed envelope unless they have the exact key.
- Protocol rules: The team wrote rules for the main steps:
- Pairing: The owner and device start with a “master beacon key” (their shared secret).
- Beacons: The lost device rotates keys every 15 minutes (“epochs”), like a car key fob that changes codes regularly.
- Finder reports: A nearby phone makes a fresh, temporary key, performs a secret “handshake” (ECDH) using the beacon, and encrypts the location with authenticated encryption (AES‑GCM). Think of it as putting the location in a safe that has both a lock and a tamper-proof seal.
- Owner access: The owner later matches the right beacon and uses their private key to recreate the handshake and open the safe.
- Proofs: The team asked Tamarin to automatically check logical properties—like “the attacker never learns the master secret unless it’s explicitly leaked”—and either produce a proof or a counterexample.
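To make the finder flow above concrete, here is a heavily simplified sketch. It stands in classic modular Diffie-Hellman for ECDH over the P-224 curve, a SHA-256 stretch for the ANSI X9.63 KDF, and an XOR-keystream-plus-HMAC toy cipher for AES-GCM; none of these are the real primitives, and the parameters and labels are placeholders chosen only to preserve the shape of the message flow:

```python
import hashlib
import hmac
import secrets

P = 2**127 - 1      # toy prime group; stands in for the NIST P-224 curve
G = 3

def keygen():
    """Generate a (private scalar, public value) pair in the toy group."""
    d = secrets.randbelow(P - 2) + 2
    return d, pow(G, d, P)

def kdf(shared: int, beacon: int) -> tuple[bytes, bytes]:
    """Stand-in for the X9.63 KDF: derive a 32-byte key and a 16-byte IV."""
    material = hashlib.sha256(
        shared.to_bytes(16, "big") + beacon.to_bytes(16, "big")).digest()
    stretch = material + hashlib.sha256(material).digest()
    return stretch[:32], stretch[32:48]

def toy_aead_enc(key: bytes, iv: bytes, plaintext: bytes):
    """NOT AES-GCM: XOR keystream plus an HMAC tag, purely illustrative."""
    stream = hashlib.sha256(key + iv).digest()
    ct = bytes(a ^ b for a, b in zip(plaintext, stream))
    tag = hmac.new(key, iv + ct, hashlib.sha256).digest()
    return ct, tag

def toy_aead_dec(key: bytes, iv: bytes, ct: bytes, tag: bytes) -> bytes:
    """Reject tampered ciphertexts before decrypting (tamper-proof seal)."""
    expected = hmac.new(key, iv + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed")
    stream = hashlib.sha256(key + iv).digest()
    return bytes(a ^ b for a, b in zip(ct, stream))

# Lost device advertises its epoch public key p_i in a BLE beacon.
d_i, p_i = keygen()
# Finder: fresh ephemeral key, DH handshake with the beacon, encrypt location.
d_f, p_f = keygen()
key, iv = kdf(pow(p_i, d_f, P), p_i)
ct, tag = toy_aead_enc(key, iv, b"lat=48.85,lon=2.35")
# Owner recreates the handshake from p_f using the epoch private key d_i.
key2, iv2 = kdf(pow(p_f, d_i, P), p_i)
location = toy_aead_dec(key2, iv2, ct, tag)
```

Both sides derive the same `(key, iv)` because the handshake is commutative, so only the owner, who holds the private scalar behind the beacon, can open the report; anyone who flips a bit of the ciphertext trips the authentication check.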
What did they find, and why does it matter?
Here are the key results in plain language:
- Master secrets stay secret: The private parts of the owner-device shared key were proven to remain hidden from attackers, unless those secrets are explicitly leaked.
- Rolling keys protect the future: Even if an attacker somehow learns one code used in a specific 15‑minute window, they still can’t figure out future codes without the original master secrets. This is called “perfect forward secrecy.”
- Locations are kept confidential: The encrypted location reports stayed secret in the model; only the owner with the right keys could read them.
- A couple of proofs were tricky: Some proofs about the chain of changing symmetric keys (the “update” step) in later time windows took too long to finish. This looks like a technical limitation of the tool with that specific kind of “keys that depend on previous keys” setup, not a discovered attack.
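The rolling-key result can be pictured with a small sketch. The labels and exact derivation below are invented for illustration (the real protocol uses the X9.63 KDF and derives elliptic-curve scalars); the point it demonstrates is that the per-epoch beacon key needs both the rolling key and the master secret, so leaking one rolling key does not expose other epochs:

```python
import hashlib

def kdf(*parts: bytes) -> bytes:
    """Toy one-way key derivation: hash the labelled inputs together."""
    return hashlib.sha256(b"|".join(parts)).digest()

master = kdf(b"master beacon key (toy)")   # shared secret set up at pairing
sk = kdf(master, b"sk0")                   # first rolling symmetric key

epoch_keys = []
for i in range(4):                         # four 15-minute epochs
    # Each epoch's beacon key mixes in BOTH the master secret and sk_i.
    epoch_keys.append(kdf(master, sk, b"beacon"))
    sk = kdf(sk, b"update")                # one-way rolling update

# An attacker who learns a single sk_i can run the update chain forward,
# but without `master` cannot derive any epoch's beacon key.
```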
Why this matters:
- Apple has billions of devices, so even small weaknesses could affect a lot of people.
- These results build confidence that Find My’s design resists logical tricks, not just brute‑force hacking.
- The work shows how to mathematically check real-world systems, even when the source code is closed.
What’s the broader impact?
This research helps everyone—users, researchers, and companies—trust that crowd‑sourced tracking can be private when designed carefully. It also gives a reusable blueprint other teams can use to check similar tracking systems. The authors note some next steps:
- Checking “privacy by indistinguishability” (for example, making sure an attacker can’t tell which device is which) may require different tooling or modes.
- Modeling a fully malicious server and more complex scenarios could deepen the analysis.
- Exploring post‑quantum versions (crypto that stays safe even if powerful quantum computers exist) could future‑proof the protocol.
In short, the paper shows that with careful design and math‑based proofs, a huge, crowd‑powered tracking network like Find My can be both helpful and private.
Knowledge Gaps
Knowledge gaps, limitations, and open questions
Below is a focused list of what remains missing, uncertain, or unexplored, framed as concrete, actionable directions for future research.
- Faithfulness to Apple’s proprietary implementation: Validate the reconstructed protocol (KDF parameters, AEAD choices, message formats, epoch timing, curve choice) against empirical traces or additional reverse‑engineering to ensure the symbolic abstraction matches production Find My behavior.
- Key size inconsistency: The paper states a 32‑bit symmetric key in pairing while later using 32 bytes from KDF for AES‑GCM. Confirm the actual key lengths used by Apple (likely 256 bits) and adjust the model accordingly.
- AEAD semantics and tag handling: The custom AEADenc/AEADauthdec/AEAD_dec functions do not encode GCM’s tag generation and verification (and treat IV as AAD). Model AES‑GCM with explicit nonce (IV), AAD, ciphertext, and tag; include failure paths; and prove integrity/authenticity properties and resistance to malleability.
- Server authorization and access control: The owner’s authenticated retrieval from the server is not modeled (no server identity or policy). Add server keys and authenticated channels; specify and verify that only the legitimate owner can obtain location reports for their devices.
- Malicious server/finder behavior: Because the server and finder lack cryptographic identities in the model, collusion or compromise cannot be analyzed. Introduce identities/keys for server and finders; study report poisoning, suppression, replay, and selective disclosure attacks.
- Unlinkability and indistinguishability: Privacy claims (e.g., that beacons and reports cannot be linked across epochs or devices) are not verified due to tool constraints. Formalize and check these equivalence properties using Tamarin’s diff mode (despite usability challenges), ProVerif, or other tools supporting observational equivalence.
- Non-termination for SKi properties: Secrecy and PFS for SKi in inductive epochs timed out. Develop alternative proof strategies (e.g., explicit induction invariants, ghost variables, well‑founded orderings), lemma restructuring, or switch tools to conclusively determine whether these properties hold or identify counterexamples.
- Epoch modeling limitations: The absence of a global epoch state led to bespoke L_1/L_2 rules. Design synchronized yet privacy‑preserving epoch tracking (e.g., ghost state known only to O and L) to capture skew, overlaps, and concurrency more faithfully.
- BLE metadata and side identifiers: The model omits BLE address randomization, proprietary headers, and other metadata that may enable linkability. Incorporate these fields (or adversary access to them) and verify privacy claims under realistic BLE behavior.
- Replay and freshness guarantees: Timestamps are generated but replay protections are not proven. Formalize server‑side acceptance criteria and verify that owners reject stale/replayed location reports.
- Finder privacy: Ensure finders cannot be tracked via ephemeral key reuse or other report artifacts. Add properties that guarantee finder unlinkability and test whether any modeled fields leak finder identity.
- iCloud keychain and at-rest compromise: The model assumes the server does not store master beacon keys in the clear but does not analyze iCloud keychain compromise. Add events modeling at-rest key exposure and evaluate how secrecy/PFS/location privacy degrade under partial compromises.
- Fabrication and suppression of reports: Study attacks where adversaries upload bogus locations or suppress legitimate ones. Add integrity bindings between reports and beacons (e.g., cryptographic linkage) and verify detection/mitigation.
- Owner query privacy: Analyze whether owners’ queries reveal ownership or device linkage to Apple or network observers. Model query policies and explore privacy-preserving retrieval (e.g., PIR-like batching), then verify.
- Post-quantum readiness: Recast the model using PQC (e.g., ML‑KEM for ECDH replacement, PQ-safe KDFs) and re‑verify properties under appropriate equational theories; assess migration impacts on rolling keys and unlinkability.
- Multi-device, large-scale interactions: Extend to multiple owners/LTAs/finders concurrently to test for cross‑device collisions in h(p_i), server indexing correctness, and scalability of secrecy/PFS proofs under aggregation.
- Safety alerts and stalking mitigations: Integrate Apple’s “Item Safety Alerts” logic (thresholds, proximity patterns, clone detection) into the symbolic model and analyze whether known stalking attacks persist or are mitigated.
- ECC algebra fidelity: The “sec” restriction equates SS_fn(df, pk(di)) and SS_fn(di, pk(df)) but abstracts curve arithmetic. Evaluate whether richer equational theories (or DH algebra plugins) are needed to capture subtle algebraic/linkability risks.
- Authenticity/binding of reports: Prove that an owner accepts only reports cryptographically bound to the intended beacon p_i and that an active attacker cannot cause acceptance of a report bound to a different beacon.
- Data retention and deletion policies: Model server-side retention/expiry and verify that old beacons/reports cannot be used to reconstruct long-term movement histories beyond policy limits.
- Auditing and detectability: Add events modeling logging/audit and formulate properties ensuring that server or finder misuse is detectable (e.g., accountability for report handling), supporting operational transparency.
- Validation against empirical attacks: Cross-check the symbolic model’s guarantees against attacks documented in HSKH21 and MFBM21 (e.g., linkability, AirTag clones) to ensure the model captures the relevant threat patterns or identify modeling gaps.
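The ECC-algebra point above is worth unpacking: the "sec" restriction encodes ordinary Diffie-Hellman commutativity, i.e. that both parties compute the same shared secret. A toy modular-arithmetic check of SS_fn(df, pk(di)) = SS_fn(di, pk(df)), with arbitrary illustrative scalars, makes the equated terms concrete:

```python
# DH commutativity: (g^di)^df = (g^df)^di in the group.
# Toy parameters only; the protocol itself uses NIST P-224 point arithmetic.
P, G = 2**127 - 1, 3
di, df = 2**40 + 15, 2**41 + 7                 # arbitrary private scalars

pk = lambda d: pow(G, d, P)                    # public key from a scalar
ss = lambda d, pub: pow(pub, d, P)             # shared secret SS_fn(d, pub)

assert ss(df, pk(di)) == ss(di, pk(df))        # the "sec" restriction's equation
```

A richer equational theory would additionally capture point addition, cofactors, and invalid-curve behaviour, which this single equation abstracts away.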
Glossary
- AEAD_dec: A decryption function in authenticated encryption with associated data that returns the plaintext without checking associated data; used in formal modeling. Example: "we define three new functions, namely AEADenc, AEADauthdec, and AEAD_dec"
- AEADauthdec: An authenticated decryption function that verifies both the key and the associated data before recovering the plaintext. Example: "we define three new functions, namely AEADenc, AEADauthdec, and AEAD_dec"
- AEADenc: An encryption function for authenticated encryption with associated data that produces a ciphertext bound to both a key and associated data. Example: "we define three new functions, namely AEADenc, AEADauthdec, and AEAD_dec"
- AES-GCM authentication tag: The integrity tag output by AES-GCM that allows verification of ciphertext authenticity. Example: "the AES-GCM authentication tag (in the clear)"
- AirTags: Apple’s small tracking tokens that integrate with the Find My network. Example: "Apple acknowledged explicit concerns about the malicious use of their AirTags"
- ANSI X.963 KDF: A standardized key derivation function (X9.63) often used with elliptic-curve parameters and hash functions. Example: "the ANSI X.963 KDF, with SHA-256, and a generator G of the NIST P-224 curve"
- Authenticated encryption with associated data (AES-GCM): An encryption mode that provides both confidentiality and authenticity, binding ciphertext to associated data. Example: "F then encrypts its location using e' and IV using a type of authenticated encryption with associated data (AES-GCM)."
- Bluetooth Low Energy (BLE): A low-power wireless technology used for broadcasting beacons in proximity. Example: "broadcasts Bluetooth Low Energy (BLE) advertisements"
- Diff mode: A verification mode in Tamarin for checking equivalence properties across two protocol variants. Example: "Tamarin has a 'diff' mode"
- Dolev-Yao model: A symbolic attacker model assuming perfect cryptography and a fully controlled adversarial network. Example: "Tamarin operates over the standard Dolev-Yao model"
- EDHOC: An authenticated key exchange protocol designed for constrained environments. Example: "key exchange protocols like EDHOC"
- Elliptic Curve Diffie-Hellman (ECDH) key exchange: A method for two parties to derive a shared secret using elliptic-curve keys. Example: "we use elliptic curve cryptography here, and therefore, one tool at our disposal is ephemeral Elliptic Curve Diffie-Hellman (ECDH) key exchange"
- Ephemeral public key: A one-time-use public key generated per session or message to enhance privacy. Example: "an ephemeral public key pf"
- Equational theory: A set of algebraic equations defining the behavior of cryptographic operations in a symbolic model. Example: "Verification in Tamarin occurs in the presence of an equational theory"
- Equivalence properties: Security properties comparing two protocol executions, such as privacy via indistinguishability. Example: "Indistinguishability properties fall under the class of equivalence properties"
- Epoch: A fixed-duration interval used for rotating keys and emitting beacons. Example: "During an epoch (of duration 15 minutes), devices emit one beacon every two seconds"
- Event annotations (actions): Rule-level markers in Tamarin used to specify and verify properties tied to protocol events. Example: "Event annotations, called actions in Tamarin, allow us to state properties which talk about when a particular rewrite rule has been fired."
- First order logic: A formal logic used to specify properties over protocol executions, including quantification and timestamps. Example: "The property language is a fragment of first order logic equipped with timestamps"
- Generator G: A base point on an elliptic curve used to derive public keys from private scalars. Example: "a generator G of the NIST P-224 curve"
- iCloud keychain: Apple’s secure storage for keys and credentials synced across devices. Example: "the decryption key for which is stored in the iCloud keychain belonging to the owner."
- In (Tamarin keyword): A Tamarin fact indicating that a message has been received from the network. Example: "indicated by the In keyword in Tamarin"
- Initialization vector IV: A nonce-like value required by certain encryption modes to ensure semantic security. Example: "the remaining 16 bytes are used as an initialization vector IV."
- Kerberos: A network authentication protocol using tickets for secure identity verification. Example: "authentication protocols like Kerberos"
- Key derivation function (KDF): A function that derives new cryptographic keys from existing material. Example: "constructed using a key derivation function (more precisely, the ANSI X.963 KDF, with SHA-256, and a generator G of the NIST P-224 curve)."
- Key establishment: The process by which parties create or agree on shared keys for later use. Example: "a key establishment happens"
- LTA (location tracking accessory): A Find My-enabled device that can be tracked (e.g., AirTags). Example: "often called an LTA, standing for 'location tracking accessory'"
- Man-in-the-middle attacks: Adversarial interception and modification of communications between parties. Example: "man-in-the-middle attacks, for instance"
- Master beacon key: The foundational key material shared between owner and device used to derive rolling beacons. Example: "Together, these form the master beacon key."
- Multiset rewrite rules: The operational semantics in Tamarin describing protocol transitions via facts consumed and produced. Example: "In Tamarin, protocols are written using multiset rewrite rules"
- NIST P-224 elliptic curve: A specific elliptic curve standardized by NIST used for key generation and ECDH. Example: "using the NIST P-224 elliptic curve"
- Out (Tamarin keyword): A Tamarin fact indicating that a message is sent to the adversary-controlled network. Example: "indicated by the keyword Out"
- Partial blind signatures: Signatures where some message content is hidden from the signer while allowing controlled binding. Example: "uses a cryptographic primitive called partial blind signatures"
- Perfect forward secrecy (PFS): A property ensuring past session keys remain secure even if long-term keys are compromised later. Example: "Perfect forward secrecy (PFS) of master beacon key:"
- Persistent facts: Tamarin facts that are not consumed by rules and remain available throughout the execution. Example: "facts come in two flavours, temporary and persistent"
- Post-quantum cryptography (PQC) model: A security framework considering adversaries with quantum capabilities. Example: "under a PQC model"
- ProVerif: An automated tool for symbolic verification of security protocols, including equivalence properties. Example: "These include ProVerif, SAPIC+, and Tamarin"
- Quasi order: A reflexive and transitive relation used to model event timestamps without strict total ordering. Example: "Timestamps respect the usual quasi order"
- Rolling keys: Periodically changing keys to prevent long-term linkability of beacons and reports. Example: "based on the concept of rolling keys"
- SAPIC+: A protocol modeling language/framework used with symbolic verification tools. Example: "These include ProVerif, SAPIC+, and Tamarin"
- SHA-256: A cryptographic hash function used for identifiers and integrity. Example: "stored using an ID which is the SHA-256 hash of the pi"
- Signal: An end-to-end encrypted messaging protocol often analyzed with symbolic techniques. Example: "messaging protocols like Signal"
- Symbolic analysis: A verification approach modeling perfect cryptography to detect logical flaws without execution. Example: "Symbolic analysis is orthogonal to cryptographic analysis"
- Tamarin prover: An automated tool for interactive symbolic verification of security protocols. Example: "provide automated, machine-checkable proofs of these properties in the Tamarin prover."
- Temporary facts: Tamarin facts consumed when rules fire, modeling state transitions. Example: "Temporary facts get 'consumed' when a rewrite rule involving them gets fired"
- TLS: The Transport Layer Security protocol used for secure communications over networks. Example: "network protocols like TLS"
Practical Applications
Immediate Applications
Below are directly deployable applications that leverage the paper’s verified findings, abstractions, and modelling techniques.
- Industry (software/consumer electronics): Integrate a “formal verification gate” into CI/CD for Find My-related firmware and app updates
- What: Use the provided Tamarin model as a regression test suite to automatically re-verify secrecy and perfect forward secrecy (PFS) after protocol or implementation changes (e.g., key rotation cadence, report metadata).
- Tools/workflows: Tamarin-based automated checks; pre-release security gate; artifact generation for internal audits.
- Assumptions/dependencies: Fidelity of the model to evolving proprietary code; Dolev–Yao perfect-crypto assumption; access to accurate protocol details when features change.
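One way such a gate could be wired into CI is sketched below. The `tamarin-prover --prove` invocation and the summary-line format being parsed are assumptions based on typical Tamarin output, and `gate`/`parse_summary` are hypothetical helpers, not part of any published artifact:

```python
import re
import subprocess

# Matches summary lines such as:
#   secrecy_master (all-traces): verified (42 steps)
SUMMARY_RE = re.compile(r"^\s*(\w+)\s+\((?:all-traces|exists-trace)\):\s+(\w+)")

def parse_summary(output: str) -> dict[str, str]:
    """Map lemma name -> result ('verified', 'falsified', ...)."""
    results = {}
    for line in output.splitlines():
        m = SUMMARY_RE.match(line)
        if m:
            results[m.group(1)] = m.group(2)
    return results

def gate(model_path: str, required: list[str]) -> bool:
    """Return True iff every required lemma is re-verified (CI gate sketch)."""
    out = subprocess.run(
        ["tamarin-prover", "--prove", model_path],
        capture_output=True, text=True, check=True).stdout
    results = parse_summary(out)
    return all(results.get(lemma) == "verified" for lemma in required)
```

A pre-release pipeline would call `gate("findmy.spthy", ["secrecy_master", "pfs_master", "location_secrecy"])` and block the release on `False`, archiving the raw prover output as the audit artifact.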
- Accessory ecosystem (hardware/software, MFi vendors): Pre-certification compliance testing for Find My-enabled accessories
- What: Build a vendor-facing “conformance runner” that loads the model and validates that accessory behaviour preserves secrecy of the master beacon key, location secrecy, and PFS properties before submission to Apple’s certification.
- Tools/workflows: Turn the model and lemmas into a test harness; generate signed machine-checkable proof summaries for certification.
- Assumptions/dependencies: Stable interface between model parameters and accessory firmware; acceptance of symbolic proofs as part of certification evidence.
- Academia (security research & education): Teaching and reproducible research modules on symbolic verification of real-world protocols
- What: Course labs and research benchmarks using the Find My Tamarin model to teach BLE-based protocols, rolling keys, AEAD modelling, and ECDH constraints; comparative studies vs. cryptographic proofs (e.g., Blind My).
- Tools/workflows: Tamarin sessions, proof scripting, model extension assignments; cross-tool exercises with ProVerif.
- Assumptions/dependencies: Availability of the artifact repository; students trained in automated reasoning tools.
- Policy and compliance (consumer protection, certification bodies): Evidence-backed privacy assurance for crowd-sourced location tracking
- What: Use the verified properties as “minimum technical due diligence” for privacy claims; include formal verification artifacts in certification/audit checklists for trackers.
- Tools/workflows: Audit templates mapping claims (secrecy, PFS, location confidentiality) to machine-checkable lemmas; procurement requirements referencing formal proofs.
- Assumptions/dependencies: Regulators willing to accept symbolic proofs; alignment with legal standards for privacy-by-design.
- Security operations (industry/research): Rapid assessment of cloned or modified devices
- What: Leverage the model to assess whether third-party or tampered devices preserve the secrecy and PFS properties (e.g., anti-stalking mitigations hold under logical attacker actions).
- Tools/workflows: Model-driven “red team” playbooks; proof-guided vulnerability triage.
- Assumptions/dependencies: Ability to map clone behaviour to model states; precise threat model fit.
- Software engineering (protocol design library): Reusable modelling components for AEAD and ECDH constraints
- What: Adopt the paper’s AEAD function/equation design and ECDH equality restriction as ready-made building blocks to model other BLE/ECC-based protocols.
- Tools/workflows: Internal protocol modelling library for IoT stacks; standardized modelling patterns.
- Assumptions/dependencies: Teams use Tamarin or compatible tools; similar equational theories suffice.
- Enterprise IT (asset tracking workflows; healthcare, logistics): Policy-aligned adoption of privacy-preserving tracking
- What: Use the paper’s properties to justify deploying Find My-like systems for assets in hospitals, warehouses, or campuses where location privacy and unlinkability are critical.
- Tools/workflows: Risk assessments referencing secrecy/PFS proofs; integration playbooks for BLE trackers.
- Assumptions/dependencies: Enterprise requirements match the Dolev–Yao threat model; local policies allow cloud-assisted tracking.
Long-Term Applications
Below are applications that require further research, scaling, or development (e.g., new tooling, specification changes, broader adoption).
- Standards and certification (Bluetooth SIG, ISO/IEC): “Formally Verified Tracker” standard/profile for crowd-sourced location protocols
- What: Establish baseline machine-checkable properties (secrecy, PFS, indistinguishability) as certification criteria; require published models and proof artifacts for all compliant products.
- Tools/workflows: Conformance test suites; standardized proof formats and continuous compliance auditing.
- Assumptions/dependencies: Industry-wide buy-in; reference specifications transparent enough for formalization.
- Privacy equivalence verification pipeline (software/security): Indistinguishability/unlinkability proofs in practice
- What: Extend beyond secrecy to equivalence properties (e.g., unlinkability across epochs and devices) via ProVerif/Tamarin-diff workflows; generate end-to-end privacy claims.
- Tools/workflows: Cross-tool verification pipeline; diff-mode proof orchestration and reporting; continuous monitoring of privacy regressions.
- Assumptions/dependencies: Tooling maturity for equivalence proofs; careful abstraction of identities to avoid trivial failures.
- Post-quantum migration planning (industry/academia): Quantum-ready Find My protocol evaluation
- What: Model and verify PQC variants (e.g., Kyber-based KEM for shared secrets; PQ-safe KDF/AEAD combinations) preserving secrecy/PFS and location confidentiality under PQ adversaries.
- Tools/workflows: PQC equational theories; hybrid ECC–PQC modelling; migration playbooks and performance studies.
- Assumptions/dependencies: Availability of well-specified PQ primitives and APIs; consensus on PQ transition timelines.
- Stronger adversary modelling (industry/research): Malicious server/finder analyses and mitigations
- What: Introduce explicit server/finder keys and message footprints to analyze and mitigate server-side or finder collusion attacks; drive protocol changes to bound server knowledge.
- Tools/workflows: Model revisions adding server identities; new lemmas for server privacy constraints; guidance for architectural changes (e.g., blind signatures, split knowledge).
- Assumptions/dependencies: Changes to protocol/spec to expose necessary identities/messages; willingness to adopt more complex cryptography.
- Protocol innovation (product R&D): “Blind My 2.0” combining cryptographic indistinguishability and symbolic safety
- What: Co-design a next-generation tracker protocol that marries partial blind signatures with a Tamarin-verified logical model; deliver both crypto proofs and symbolic safety guarantees.
- Tools/workflows: Dual proof pipelines (game-based + symbolic); test deployments; performance/privacy trade-off analysis.
- Assumptions/dependencies: Engineering tolerance for added cryptographic complexity; availability of safe parameter choices.
- Sector-wide IoT privacy stacks (healthcare, energy, finance, robotics): Privacy-preserving location frameworks for regulated industries
- What: Adapt verified Find My principles to build compliant IoT location networks (e.g., medical assets, grid equipment, high-value devices) with formal privacy assurances.
- Tools/workflows: Sector-specific protocol profiles; audit-ready proof bundles; integration toolkits.
- Assumptions/dependencies: Regulatory alignment (HIPAA, NERC CIP, PCI DSS); sector acceptance of formal-methods evidence.
- Verification developer tooling (software): Human-in-the-loop proof engineering and “proof diff” dashboards
- What: Build developer-facing tools that visualize lemmas, counterexamples, and proof diffs between protocol versions; recommend model refactorings to avoid non-termination (e.g., SK inductive case).
- Tools/workflows: IDE plug-ins; proof health metrics; heuristic oracles to guide Tamarin search.
- Assumptions/dependencies: Investment in verification UX; extension hooks in existing tools.
- Public safety and policy (consumer protection, law enforcement oversight): Formal evaluation of anti-stalking features
- What: Use models to test safety-alert logic (e.g., thresholds, timing, adversarial behaviours like cloned tags); inform policy on minimum safeguards and disclosure.
- Tools/workflows: Scenario modelling; “misuse case” libraries; policy recommendations tied to formal outcomes.
- Assumptions/dependencies: Access to operational parameters; stakeholder cooperation; careful mapping from model events to real-world behaviours.
- Education and workforce development (academia/industry): Capstone kits and competitions on verified protocol design
- What: Create end-to-end curriculum and contests where teams extend or harden Find My-like protocols and produce machine-checkable privacy/security proofs.
- Tools/workflows: Hosted model repositories; automated grading of proofs; cross-institution challenges.
- Assumptions/dependencies: Sustained academic–industry partnerships; accessible tooling and documentation.
Notes on global assumptions/dependencies shared across items:
- The paper’s threat model (Dolev–Yao with perfect cryptography) evaluates logical flaws rather than cryptographic breakage; real-world claims depend on correct, secure cryptographic implementations.
- The current model is a reconstruction from public sources; fidelity to Apple’s proprietary implementation can affect conclusions and should be revisited as new details emerge.
- Some properties (e.g., SK inductive proofs) require tooling advances or model refinements to ensure termination.
- Equivalence (privacy/unlinkability) properties need additional tooling or model changes (e.g., ProVerif or Tamarin diff mode) to be reliably verified and should be treated as future work in high-stakes deployments.