Cortex: A Compiler for Recursive Deep Learning Models (2011.01383v2)

Published 2 Nov 2020 in cs.LG and cs.DC

Abstract: Optimizing deep learning models is generally performed in two steps: (i) high-level graph optimizations, such as kernel fusion, and (ii) low-level kernel optimizations, such as those found in vendor libraries. This approach often leaves significant performance on the table, especially for recursive deep learning models. In this paper, we present Cortex, a compiler-based approach to generate highly efficient code for recursive models for low-latency inference. Our compiler approach and low reliance on vendor libraries enable us to perform end-to-end optimizations, leading to up to 14X lower inference latencies over past work, across different backends.
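To make the problem setting concrete, below is a minimal sketch (not from the paper) of the kind of recursive model Cortex targets: a TreeRNN-style cell applied bottom-up over an input-dependent tree. The cell, the names (`TreeNode`, `tree_rnn`), and the combine rule are illustrative assumptions; the point is that every internal node triggers its own small kernel calls, which is the per-node overhead an end-to-end compiler can fuse away.

```python
# Illustrative sketch only: a TreeRNN-style recursive model, not Cortex's API.
import numpy as np

HIDDEN = 256
W_left = np.random.randn(HIDDEN, HIDDEN).astype(np.float32) * 0.01
W_right = np.random.randn(HIDDEN, HIDDEN).astype(np.float32) * 0.01

class TreeNode:
    def __init__(self, left=None, right=None, embedding=None):
        self.left, self.right = left, right
        self.embedding = embedding  # set on leaves only

def tree_rnn(node: TreeNode) -> np.ndarray:
    """Recursively compute a hidden state for each tree node.

    Each internal node issues two matmuls and an elementwise tanh;
    with a vendor-library backend every such op is a separate kernel
    call, which is the overhead a whole-model compiler can remove
    (e.g., by fusing ops across the recursion)."""
    if node.embedding is not None:      # leaf: return its embedding
        return node.embedding
    h_l = tree_rnn(node.left)           # recurse on children
    h_r = tree_rnn(node.right)
    return np.tanh(W_left @ h_l + W_right @ h_r)

# Example: a 3-leaf tree. The tree shape (and hence the kernel launch
# sequence) depends on the input, unlike a fixed feed-forward graph.
leaf = lambda: TreeNode(embedding=np.random.randn(HIDDEN).astype(np.float32))
root = TreeNode(TreeNode(leaf(), leaf()), leaf())
print(tree_rnn(root).shape)  # (256,)
```

Because the control flow is data-dependent, fixed per-op vendor kernels leave fusion and scheduling opportunities unexploited, which is why end-to-end compilation of the whole recursive computation pays off for low-latency inference.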

Authors (4)
  1. Pratik Fegade (4 papers)
  2. Tianqi Chen (77 papers)
  3. Phillip B. Gibbons (28 papers)
  4. Todd C. Mowry (10 papers)
Citations (27)
