
Cloud-Native Privacy-Preserving Architecture

Updated 18 December 2025
  • Cloud-native privacy-preserving architecture protects data through client-side encryption, using an untrusted cloud only as a synchronizer, thereby ensuring confidentiality and integrity.
  • It employs per-row AES encryption and asymmetric key wrapping (RSA/EC) to secure data, effectively enforcing fine-grained access controls and forward security.
  • The design features scalable performance with minimal overhead, rigorous grant-and-revoke permission mechanisms, and supports offline operations for resilient data management.

A cloud-native privacy-preserving architecture is an end-to-end system that applies cryptographic and systems-level controls to protect user data confidentiality and data sharing integrity, even when operating over untrusted cloud storage or compute. The “iPrivacy” architecture represents a prototypical instance where the trust boundary and data control are delegated to the client side, while the cloud acts only as an untrusted synchronizer and network relay. The architecture implements fine-grained (per-row) symmetric encryption, a rigorous grant-and-revoke capability model for shared data, and a minimal stateless cloud synchronization service, thereby achieving strong confidentiality and forward security without resorting to trusted hardware or centralization of plaintext data (Damiani et al., 2015).

1. System Architecture and Component Topology

The system consists of three principal components: (1) Client agents, each embedding a local in-memory relational database (RDBMS, specifically HyperSQL in MEMORY mode) and a public/private key pair; (2) an untrusted Synchronizer process running in the cloud, accessible via RMI, holding only encrypted blobs and encryption key-shares; (3) a communication network facilitating RPC and data exchange.

The data placement follows these rules:

  • Each client agent holds the data it owns in cleartext, and only locally. Data shared by others is always encrypted at rest locally.
  • The Synchronizer never observes any cleartext dossier or unencrypted symmetric key.
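These placement rules can be made concrete with a minimal sketch of the synchronizer's three stores as plain key-value maps. This is an illustrative Python sketch: the method names (`deposit_dossier`, `deposit_decoding_key`, `fetch`) are hypothetical stand-ins, not the iPrivacy RMI interface, and all stored values are opaque bytes — the synchronizer never handles plaintext or unwrapped keys.

```python
# Minimal sketch of the untrusted Synchronizer: three stores holding only
# opaque bytes (encrypted blobs, wrapped keys, public keys). Method names
# are illustrative, not the actual iPrivacy RMI interface.
class Synchronizer:
    def __init__(self):
        self.pending_dossiers = {}   # dossier_id -> encrypted blob
        self.decoding_keys = {}      # (dossier_id, recipient) -> E_pk(k)
        self.public_keys = {}        # user_id -> public key bytes

    def deposit_dossier(self, dossier_id, blob):
        self.pending_dossiers[dossier_id] = blob

    def deposit_decoding_key(self, dossier_id, recipient, wrapped_key):
        self.decoding_keys[(dossier_id, recipient)] = wrapped_key

    def fetch(self, dossier_id, recipient):
        # Relay only: returns ciphertext plus a wrapped key that only the
        # recipient's private key can unwrap.
        return (self.pending_dossiers[dossier_id],
                self.decoding_keys[(dossier_id, recipient)])

sync = Synchronizer()
sync.deposit_dossier("d1", b"...aes-ciphertext...")
sync.deposit_decoding_key("d1", "alice", b"...E_pk_alice(k)...")
blob, wrapped = sync.fetch("d1", "alice")
```

Because the stores are pure relays over opaque values, the cloud process stays stateless with respect to access policy: deciding who may decrypt happens entirely in client logic.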

Textual component map:

[Client₁:LocalAgent]
 ├─ Local In-Memory DB (clear & encrypted rows)
 ├─ Private key, public key pair
 └─ Synchronizer-RMI client
[Client₂:LocalAgent] … (identical)
[Synchronizer]
 ├─ PendingDossiersStore (encrypted blobs)
 ├─ DecodingKeysStore (ciphertexts E_{pkᵢ}(k))
 └─ PublicKeysStore

Encrypted data and the cryptographic material required for decryption are synchronized by the cloud service. All operations (grant, share, receive, revoke) are mediated by explicit RPCs, but access control is enforced entirely on the client side.

2. Data Storage, Encryption, and Key Management

Each agent’s RDBMS stores dossiers in-memory with two extra columns:

  • id_pending_row: NULL for owned data; identifies pending share for shared/encrypted data
  • encrypted_row: stores HEX(AESₖ(serialized-tuple)) for each encrypted row

On startup, the ScriptReader module parses .script/.log entries:

  • Rows in clear go directly into the in-memory tables.
  • Rows marked “id@HEX…” are decrypted after fetching the appropriate key from the synchronizer.

On update/shutdown, the ScriptWriter intercepts rows with non-NULL id_pending_row, encrypts, and emits “id@HEX…” tags.
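The “id@HEX…” tagging round-trip can be sketched as follows. This is a hedged illustration of the tag format only: `emit_tag` and `parse_tag` are hypothetical helper names, and a stand-in passthrough takes the place of the real AES encryption so the example stays self-contained.

```python
import binascii

# Sketch of the "id@HEX..." row tagging used in the .script/.log files.
# Real rows are AES-encrypted before hex-encoding; here the ciphertext is
# passed in directly so only the tag format is demonstrated.
def emit_tag(pending_id: str, ciphertext: bytes) -> str:
    # ScriptWriter side: serialize a non-NULL id_pending_row entry.
    return f"{pending_id}@{binascii.hexlify(ciphertext).decode()}"

def parse_tag(tag: str):
    # ScriptReader side: recover the pending id and ciphertext bytes.
    pending_id, hexpart = tag.split("@", 1)
    return pending_id, binascii.unhexlify(hexpart)

tag = emit_tag("42", b"serialized-tuple")
pid, ct = parse_tag(tag)
assert (pid, ct) == ("42", b"serialized-tuple")
```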

    Symmetric key management:

    • Each shared row or group is encrypted under a freshly generated AES session key k, with payload encryption in CBC mode and PKCS#7 padding.
    • The encryption key is distributed per recipient using asymmetric ciphers (RSA/EC-based): the symmetric key is encrypted as C = E_{pk_r}(k) and signed by the owner.
    • The Synchronizer retains only these encrypted keys; clients fetch and decrypt with their local secret key as needed.

    Key derivation and distribution are performed per share and per row. Example key assignment (for a particular grant):

    
    k ← KDF(user_id ∥ nonce)  # Example; any CSPRNG source may be used
    DKeyCipher = RSA_enc(PKᵣ, k)
    SignedDKey = Sign_{SKₒwner}(DKeyCipher)
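A runnable sketch of the derivation step, assuming PBKDF2 as the KDF (the text only requires CSPRNG-quality key material, so drawing k directly from a CSPRNG would be equally valid). The wrapping and signing steps need asymmetric keys and are noted rather than implemented, to keep the sketch within the standard library.

```python
import hashlib
import secrets

# Sketch of k <- KDF(user_id || nonce). PBKDF2-HMAC-SHA256 stands in as
# the KDF; any CSPRNG-derived 256-bit key would satisfy the scheme.
def derive_session_key(user_id: bytes, nonce: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", user_id, nonce, 10_000, dklen=32)

nonce = secrets.token_bytes(16)   # fresh per grant
k = derive_session_key(b"alice", nonce)
assert len(k) == 32               # 256-bit AES session key

# Wrapping (DKeyCipher = RSA_enc(PK_r, k)) and signing (Sign_SK_owner)
# require asymmetric key pairs; with a library such as `cryptography`,
# one would RSA-OAEP-encrypt k per recipient and sign the ciphertext.
```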

    3. Permission Model: Grant, Revoke, and Enforcement

    Grant and revoke are owner-privileged operations, performed via the synchronizer. Mechanisms:

    Granting access: Owner creates a symmetric key, encrypts it for each recipient, and deposits it in the synchronizer. Owner then encrypts the relevant row (or delta) and instructs the synchronizer to deliver to each recipient.

    Revoking access: The owner sends a delete RPC for the decoding key; the synchronizer removes the key material. All future attempts by the recipient to fetch or decrypt that row will fail. However, previously cached or decrypted plaintext that the recipient already obtained cannot be forcibly wiped—offline caching is a known limitation.

    Pseudocode samples:

    
    k ← random_symmetric_key()
    cipheredKey ← EncryptAsym(PK[u_r], k)
    signedKey ← Sign(SK[u_o], cipheredKey)
    Synchronizer.depositDecodingKey(d_id, u_r, signedKey)

    Synchronizer.deleteDecodingKey(d_id, u_r)

    All shared and key-wrapped items are cryptographically signed by the owner; recipients verify signatures against the owner’s public key.
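The revoke semantics can be exercised in a short simulation. This is a sketch under the assumption that the decoding-key store behaves as a simple map (class and method names are illustrative): once the owner deletes the wrapped key, every subsequent fetch by the recipient fails, which is exactly the forward-security property claimed below.

```python
# Sketch of grant/revoke enforcement: after deleteDecodingKey, the
# recipient can no longer obtain the wrapped key, so future decryptions
# are impossible. Names mirror the pseudocode above but are illustrative.
class DecodingKeysStore:
    def __init__(self):
        self._keys = {}

    def deposit(self, dossier_id, recipient, wrapped_key):
        self._keys[(dossier_id, recipient)] = wrapped_key

    def delete(self, dossier_id, recipient):
        self._keys.pop((dossier_id, recipient), None)

    def fetch(self, dossier_id, recipient):
        try:
            return self._keys[(dossier_id, recipient)]
        except KeyError:
            raise PermissionError("no decoding key: revoked or never granted")

store = DecodingKeysStore()
store.deposit("d1", "bob", b"E_pk_bob(k)")
assert store.fetch("d1", "bob") == b"E_pk_bob(k)"   # grant in effect
store.delete("d1", "bob")                           # owner revokes
try:
    store.fetch("d1", "bob")
except PermissionError:
    pass  # all future decryption attempts fail
```

Note what this does not model: plaintext Bob decrypted and cached before revocation, which, as the text states, cannot be forcibly wiped.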

    4. Security Model and Threat Analysis

    Adversary: The Synchronizer (cloud provider), the persisted client RDBMS files (.script/.log), and all network channels are considered honest-but-curious and potentially compromised.

    Capabilities Addressed:

    • All plaintext/ciphertext separation is enforced; no single component ever co-locates both data and keys.
    • Confidentiality: Only owner and authorized recipients may obtain the decryption key for a shared dossier.
    • Integrity and authenticity: All shared blobs and distributed keys are signed; recipients always verify via the owner’s public key.
    • Forward security: Once a key is revoked, future decryptions are impossible for that user.

    Limitations: Offline copies, or malicious recipients who have already decrypted material, can evade revocation in principle. More advanced anti-leakage methods (e.g., watermarking, trusted hardware) are required for full provenance support.

    5. Performance Analysis and Scalability

    Experiments were performed on a platform with HyperSQL MEMORY tables and up to 500,000 dossiers:

    • Overhead: At N=100,000 dossiers, encrypted rows yield ≈10% extra operation time; at 500,000 dossiers, overhead falls below 5%. The overhead is dominated by per-row AES for writes and one-time startup decrypts for reads.
    • Cost Model: Each grant triggers one RSA encryption per recipient and one AES encryption per row; each read requires one AES decryption per shared row, followed by in-memory RDBMS access.
    • Latency Scaling: All operations exhibit linear scaling with dataset size. Write overhead is constant per shared row; read overhead is mostly amortized at load time.
    • Summary Table:

    | #Dossiers | % Overhead (f=20%) | % Overhead (f=40%) |
    |-----------|--------------------|--------------------|
    | 100,000   | ~10%               | ~10%               |
    | 500,000   | <5%                | <5%                |
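The cost model above reduces to simple operation counts. The sketch below (function names are hypothetical) tallies the cryptographic operations implied by a grant of R rows to M recipients and by a read touching S shared rows; it counts operations only, not timings.

```python
# Back-of-envelope operation counts from the cost model: a grant costs
# one RSA wrap per recipient plus one AES encryption per row; a read
# costs one AES decryption per shared row (amortized at load time).
def grant_ops(rows: int, recipients: int) -> dict:
    return {"rsa_enc": recipients, "aes_enc": rows}

def read_ops(shared_rows: int) -> dict:
    return {"aes_dec": shared_rows}

# Example: sharing a 1,000-row dossier with 3 recipients.
assert grant_ops(rows=1000, recipients=3) == {"rsa_enc": 3, "aes_enc": 1000}
assert read_ops(shared_rows=200) == {"aes_dec": 200}
```

Since RSA cost scales with recipients and AES cost with rows, total work is linear in dataset size, consistent with the latency-scaling observation above.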

    Offline use is fully supported: agents can work on their local datasets (clear for owned, encrypted for shared) and synchronize when online.

    6. Design Guidelines and Operational Best Practices

    • Always co-locate sensitive business logic and encryption in client-trusted environments; never process or store cleartext in the cloud.
    • Store only ciphertext and wrapped key material in untrusted cloud services.
    • Use a mailbox-style synchronizer (minimal untrusted relay) for distributing encrypted data and keys.
    • Apply per-row (or fine-grained fragment) symmetric key encryption; share keys via public-key encryption per recipient.
    • Authenticate and sign all exchanged payloads.
    • Lightweight grant-and-revoke RPCs: access control resides in client logic; the cloud is always stateless.
    • Design for offline support and synchronization resumption.
    • Address leftover leakage via client-side cache policy, watermarking, or trusted execution environments for high-value data.

    7. Context and Significance

    This architecture achieves strong privacy by redistributing trust from centralized cloud services to the client edge. Unlike monolithic or server-centric cryptographic databases, iPrivacy’s model demonstrates that practical, fine-grained access control and forward secure sharing can be instantiated in a cloud-native system without relying on any trusted infrastructure components in the cloud. The prototype’s efficient per-row encryption and key wrapping—combined with a simple, stateless synchronizer—results in minimal operational overhead and robust, cryptographically enforced policies that are resilient to cloud misbehavior or compromise (Damiani et al., 2015).
