Performance Model for Similarity Caching (2309.12149v1)

Published 21 Sep 2023 in cs.NI and cs.PF

Abstract: Similarity caching allows requests for an item to be served by a similar item. Applications include recommendation systems, multimedia retrieval, and machine learning. Recently, many similarity caching policies have been proposed, like SIM-LRU and RND-LRU, but the performance analysis of their hit rate is still wanting. In this paper, we show how to extend the popular time-to-live approximation in classic caching to similarity caching. In particular, we propose a method to estimate the hit rate of the similarity caching policy RND-LRU. Our method, the RND-TTL approximation, introduces the RND-TTL cache model and then tunes its parameters in such a way to mimic the behavior of RND-LRU. The parameter tuning involves solving a fixed point system of equations for which we provide an algorithm for numerical resolution and sufficient conditions for its convergence. Our approach for approximating the hit rate of RND-LRU is evaluated on both synthetic and real world traces.
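The paper extends the time-to-live (TTL) approximation from classic caching to similarity caching. As a rough, hypothetical illustration of that classic starting point (not the paper's RND-TTL model itself), the sketch below solves the standard TTL/Che fixed-point equation sum_i (1 - e^{-lambda_i T}) = C by bisection and uses the resulting characteristic time T to estimate per-item hit rates for an LRU cache under Poisson (IRM) traffic. All function names, parameters, and the Zipf example are illustrative assumptions; the paper's actual RND-TTL system additionally couples hit probabilities across similar items and tunes its parameters to mimic RND-LRU.

```python
import numpy as np

def che_characteristic_time(arrival_rates, cache_size, tol=1e-9, max_iter=10_000):
    """Solve for the characteristic time T in the classic TTL (Che)
    approximation: sum_i (1 - exp(-lambda_i * T)) = cache_size.
    Bisection works because expected occupancy is monotone increasing in T.
    arrival_rates are assumed Poisson request rates (IRM traffic)."""
    lam = np.asarray(arrival_rates, dtype=float)
    if cache_size >= lam.size:
        raise ValueError("cache must be smaller than the catalog")
    lo, hi = 0.0, 1.0
    # Grow the upper bracket until expected occupancy exceeds the cache size.
    while np.sum(1.0 - np.exp(-lam * hi)) < cache_size:
        hi *= 2.0
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        occupancy = np.sum(1.0 - np.exp(-lam * mid))
        if abs(occupancy - cache_size) < tol:
            return mid
        if occupancy < cache_size:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def ttl_hit_rates(arrival_rates, cache_size):
    """Per-item hit-rate estimates under the TTL approximation for LRU:
    h_i = 1 - exp(-lambda_i * T)."""
    lam = np.asarray(arrival_rates, dtype=float)
    T = che_characteristic_time(lam, cache_size)
    return 1.0 - np.exp(-lam * T)

if __name__ == "__main__":
    # Illustrative example: Zipf-like request rates over 1000 items, cache of 100.
    ranks = np.arange(1, 1001)
    rates = 1.0 / ranks**0.8
    hits = ttl_hit_rates(rates, cache_size=100)
    print("estimated aggregate hit rate:", np.sum(rates * hits) / np.sum(rates))
```

In this classic setting the fixed point involves a single unknown T; the paper's contribution is a fixed-point system with more parameters, an algorithm for its numerical resolution, and sufficient conditions for convergence.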

Authors (4)
  1. Younes Ben Mazziane
  2. Sara Alouf
  3. Giovanni Neglia
  4. Daniel S. Menasche
