
Rankitect: Ranking Architecture Search Battling World-class Engineers at Meta Scale (2311.08430v1)

Published 14 Nov 2023 in cs.LG, cs.AI, and cs.IR

Abstract: Neural Architecture Search (NAS) has demonstrated its efficacy in computer vision and its potential for ranking systems. However, prior work focused on academic problems, which are evaluated at small scale under well-controlled, fixed baselines. In industrial systems, such as the ranking system at Meta, it is unclear whether NAS algorithms from the literature can outperform production baselines because of: (1) scale - Meta ranking systems serve billions of users; (2) strong baselines - the baselines are production models optimized by hundreds to thousands of world-class engineers over years since the rise of deep learning; (3) dynamic baselines - engineers may establish new and stronger baselines during the NAS search; and (4) efficiency - the search pipeline must yield results quickly, in alignment with the productionization life cycle. In this paper, we present Rankitect, a NAS software framework for ranking systems at Meta. Rankitect seeks to build brand-new architectures by composing low-level building blocks from scratch. Rankitect implements and improves state-of-the-art (SOTA) NAS methods for comprehensive and fair comparison within the same search space, including sampling-based NAS, one-shot NAS, and Differentiable NAS (DNAS). We evaluate Rankitect by comparing it to multiple production ranking models at Meta. We find that Rankitect can discover new models from scratch that achieve a competitive tradeoff between Normalized Entropy loss and FLOPs. When utilizing a search space designed by engineers, Rankitect can generate better models than engineers, achieving positive offline evaluation and online A/B test results at Meta scale.
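The abstract frames the search as a DNAS-style composition of low-level building blocks, evaluated on the tradeoff between Normalized Entropy (NE) loss and FLOPs. The following is a minimal sketch, not Rankitect's actual code, of how such a differentiable relaxation and combined objective can look in PyTorch. All names (MixedBlock, flops_weight, the candidate block list, and the FLOPs constants) are illustrative assumptions, and the joint training of weights and architecture parameters is a simplification of the bilevel optimization typically used in DNAS.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedBlock(nn.Module):
    """Soft mixture over candidate low-level building blocks (DNAS relaxation)."""
    def __init__(self, dim):
        super().__init__()
        self.candidates = nn.ModuleList([
            nn.Identity(),                      # cheap: skip connection
            nn.Linear(dim, dim),                # mid-cost: dense projection
            nn.Sequential(nn.Linear(dim, dim),  # expensive: 2-layer MLP
                          nn.ReLU(),
                          nn.Linear(dim, dim)),
        ])
        # Rough per-candidate FLOPs estimates (illustrative constants).
        self.register_buffer("flops", torch.tensor([0.0, float(dim * dim), 2.0 * dim * dim]))
        self.alpha = nn.Parameter(torch.zeros(len(self.candidates)))  # architecture params

    def forward(self, x, tau=1.0):
        w = F.gumbel_softmax(self.alpha, tau=tau)           # soft one-hot over candidates
        out = sum(wi * cand(x) for wi, cand in zip(w, self.candidates))
        expected_flops = (w * self.flops).sum()             # differentiable cost term
        return out, expected_flops

def normalized_entropy(logits, labels, eps=1e-8):
    """NE: binary cross-entropy normalized by the entropy of the empirical positive rate."""
    bce = F.binary_cross_entropy_with_logits(logits, labels)
    p = labels.mean().clamp(eps, 1 - eps)
    base = -(p * p.log() + (1 - p) * (1 - p).log())
    return bce / base

# Toy usage: optimize model weights and architecture params on the combined objective.
dim, flops_weight = 16, 1e-6
block, head = MixedBlock(dim), nn.Linear(dim, 1)
opt = torch.optim.Adam(list(block.parameters()) + list(head.parameters()), lr=1e-2)
x, y = torch.randn(256, dim), torch.randint(0, 2, (256, 1)).float()
for _ in range(10):
    h, flops = block(x)
    loss = normalized_entropy(head(h).squeeze(-1), y.squeeze(-1)) + flops_weight * flops
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The flops_weight coefficient is the knob that traces out the NE-versus-FLOPs tradeoff curve the abstract refers to; sweeping it yields candidate architectures at different cost budgets.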

Authors (22)
  1. Wei Wen (49 papers)
  2. Kuang-Hung Liu (3 papers)
  3. Igor Fedorov (24 papers)
  4. Xin Zhang (904 papers)
  5. Hang Yin (77 papers)
  6. Weiwei Chu (7 papers)
  7. Kaveh Hassani (20 papers)
  8. Mengying Sun (14 papers)
  9. Jiang Liu (143 papers)
  10. Xu Wang (319 papers)
  11. Lin Jiang (24 papers)
  12. Yuxin Chen (195 papers)
  13. Buyun Zhang (9 papers)
  14. Xi Liu (83 papers)
  15. Dehua Cheng (10 papers)
  16. Zhengxing Chen (20 papers)
  17. Guang Zhao (12 papers)
  18. Fangqiu Han (6 papers)
  19. Jiyan Yang (32 papers)
  20. Yuchen Hao (11 papers)
Citations (2)

