
TileLang Tensor Compiler

Updated 16 November 2025
  • TileLang Tensor Compiler is a composable programming model designed to generate highly optimized, tile-oriented tensor operations for AI and scientific workloads.
  • It decouples logical tile-based dataflow from explicit hardware scheduling, enabling flexible kernel tuning on CPUs, GPUs, and distributed systems.
  • Empirical results show that automated layout inference and advanced pipelining techniques can deliver competitive performance compared to state-of-the-art libraries.

TileLang Tensor Compiler refers to a composable programming and compilation model specialized for generating highly optimized, tile-oriented tensor operations, especially in the context of AI and scientific workloads. TileLang and its direct descendants enable clear separation between logical dataflow (operations on tiles) and the scheduling/mapping required for effective deployment on modern CPUs, GPUs, and distributed systems. These systems combine high-level abstractions for expressing tiled computations with domain-specific scheduling primitives, and downstream code generators that produce hardware-efficient kernels matching or exceeding state-of-the-art baselines.

1. Conceptual Foundations and Motivation

TileLang originates from the observation that contemporary AI and HPC kernels—such as GEMM, attention, and various tensor contractions—are fundamentally structured around the movement and processing of tiles (fixed-size tensor blocks) across multiple levels of modern memory and processor hierarchies. Unifying principles include explicit representation of sub-tensors (tiles), deliberate mapping of loop and thread axes to hardware, and design of dataflow pipelines with software-managed staging (e.g., DRAM-to-SRAM-to-registers as in GPUs).

The TileLang approach decouples dataflow—expressed using high-level tensor operations on tiles—from the scheduling space, which encapsulates thread binding, memory layout, tensorization (e.g., mapping to hardware tile/fused instructions), and pipelining. This separation enables flexible, programmable kernels that can be transparently tuned for diverse architectures, while avoiding the verbosity and fragility of hand-written, target-specific code (Wang et al., 24 Apr 2025, Zhang et al., 22 Apr 2025).

2. Core Abstractions: Syntax, Tiles, and Operators

TileLang systems typically embed their DSL in Python, leveraging function annotations and a set of primitive objects:

  • Tiles: First-class statically shaped subarrays of tensors, realized via constructs such as T.alloc_shared, T.alloc_fragment, or slices of host tensors. Tiles may reside in different hardware scopes (global, shared, registers).
  • Tile-Operators: High-level operations (e.g., T.copy, T.gemm, T.reduce, T.atomic) expose block-oriented compute, explicit tile movement, and reduction/elementwise semantics. Each tile-operator implements methods for layout inference (how indices map to addresses) and lowering (generating loops, threads, hardware instructions).
  • Scheduling Primitives: Annotations such as T.Parallel, T.Pipelined, T.annotate_layout, and external calls (T.call_extern, T.ptx) allow both automatic and user-driven specification of mapping, layout, and pipeline properties. In Hexcute, the mapping of threads to tile indices is inferred using a type-inference system (Zhang et al., 22 Apr 2025).
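
As a small illustration of these primitives before the full GEMM example, the sketch below uses T.Parallel in an elementwise tile kernel. It follows the published TileLang Python DSL, but the kernel itself and its sizes are illustrative, and exact signatures may differ across TileLang versions:

import tilelang
import tilelang.language as T

M, N, block_M, block_N = 1024, 1024, 128, 128  # illustrative sizes

@tilelang.jit
def Add(A: T.Tensor((M, N), "float16"),
        B: T.Tensor((M, N), "float16"),
        C: T.Tensor((M, N), "float16")):
    with T.Kernel(T.ceildiv(N, block_N), T.ceildiv(M, block_M), threads=128) as (bx, by):
        # T.Parallel declares the iterations independent; the compiler binds
        # them to the block's threads and vectorizes where legal.
        for i, j in T.Parallel(block_M, block_N):
            C[by * block_M + i, bx * block_N + j] = (
                A[by * block_M + i, bx * block_N + j]
                + B[by * block_M + i, bx * block_N + j]
            )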

A generic kernel may be structured as:

import tilelang
import tilelang.language as T

M, N, K = 1024, 1024, 1024              # problem sizes (must divide evenly here)
block_M, block_N, block_K = 128, 128, 32
threads = 128

@tilelang.jit
def Matmul(A: T.Tensor((M, K), "float16"),
           B: T.Tensor((K, N), "float16"),
           C: T.Tensor((M, N), "float16")):
    # One program instance per (block_M, block_N) output tile.
    with T.Kernel(N // block_N, M // block_M, threads=threads) as (bx, by):
        A_s = T.alloc_shared((block_M, block_K), "float16")   # DRAM -> shared staging
        B_s = T.alloc_shared((block_K, block_N), "float16")
        C_f = T.alloc_fragment((block_M, block_N), "float")   # register accumulator
        T.clear(C_f)
        # Two-stage software pipeline: copies for iteration k+1 overlap the GEMM of k.
        for k in T.Pipelined(K // block_K, num_stages=2):
            T.copy(A[by * block_M, k * block_K], A_s)
            T.copy(B[k * block_K, bx * block_N], B_s)
            T.gemm(A_s, B_s, C_f)
        T.copy(C_f, C[by * block_M, bx * block_N])
This structure decouples schedule (block/thread assignment, pipeline stages) from the dataflow logic (tile movement and computation) (Wang et al., 24 Apr 2025).
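
On the host side, a kernel compiled with @tilelang.jit behaves like an ordinary Python callable over device tensors. The snippet below is a sketch that assumes TileLang's PyTorch interoperability and the Matmul kernel above with M = N = K = 1024; launch details may vary across TileLang versions:

import torch

# Illustrative host-side launch (assumes CUDA and the Matmul kernel above).
A = torch.randn(1024, 1024, device="cuda", dtype=torch.float16)
B = torch.randn(1024, 1024, device="cuda", dtype=torch.float16)
C = torch.empty(1024, 1024, device="cuda", dtype=torch.float16)
Matmul(A, B, C)  # launches the generated kernel on the current stream
torch.testing.assert_close(C, A @ B, rtol=1e-2, atol=1e-2)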

3. Formalism: Multi-dimensional Tiling and Scheduling Semantics

Formally, TileLang-like compilers define multi-dimensional tiling as the mapping

$$A_{\text{tile}}[I_1,\ldots,I_n,\, j_1,\ldots,j_n] = A[I_1 \cdot b_1 + j_1,\ \ldots,\ I_n \cdot b_n + j_n]$$

where $I_d$ indexes the tile block, $j_d$ the position within the tile, and $b_d$ is the tile extent along dimension $d$. Scheduling transformations are functions on the iteration space, such as:

  • Thread binding: $f_{\text{thread}}: (I_1,\ldots,I_n) \to (\text{block/thread axes})$
  • Pipelining stage: $f_{\text{pipe}}: k \mapsto \text{stage} = k \bmod N_{\text{stages}}$
  • Tensorization: $f_{\text{tnsr}}$ maps selected loops to fused tensor instructions

These scheduling spaces are composed, and the final schedule $S$ is expressed as $S = f_{\text{call\_extern}} \circ f_{\text{tnsr}} \circ f_{\text{pipe}} \circ f_{\text{thread}}$ (Wang et al., 24 Apr 2025). Scheduling primitives serve as IR annotations, which are mapped to loop/launch/fusion constructs and, where supported, to asynchronous copies and fused tensor operations.
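
The tiling map at the top of this section can be checked mechanically. The following numpy sketch (an illustration added here, not code from either paper) builds $A_{\text{tile}}$ for a 2-D array by reshaping into block and intra-tile axes:

import numpy as np

M, N, bM, bN = 8, 6, 4, 3              # array and tile sizes (divide evenly)
A = np.arange(M * N).reshape(M, N)

# A_tile[I1, I2, j1, j2] == A[I1*bM + j1, I2*bN + j2]
A_tile = A.reshape(M // bM, bM, N // bN, bN).transpose(0, 2, 1, 3)

I1, I2, j1, j2 = 1, 0, 2, 1
assert A_tile[I1, I2, j1, j2] == A[I1 * bM + j1, I2 * bN + j2]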

In Hexcute, thread-value layouts $f$, mapping $(\text{tid}, \text{vid}) \to (i, j)$, are themselves types. Layout and mapping constraints for all tile-primitive calls are collected and solved via type inference; anchor points are fixed by performance models or user hints, then propagated to adjacent tiles and operators (Zhang et al., 22 Apr 2025).
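
For intuition, a thread-value layout can be written as an ordinary function from $(\text{tid}, \text{vid})$ to tile coordinates. The sketch below encodes the accumulator layout of Nvidia's m16n8 mma fragment, as documented in the PTX ISA; it illustrates the kind of mapping Hexcute treats as a type, not Hexcute's internal representation:

def m16n8_acc_layout(tid: int, vid: int) -> tuple[int, int]:
    """Map (thread id in warp, value id in fragment) -> (row, col) of a 16x8 tile.

    Each of the 32 threads holds 4 accumulator values: values 0-1 cover rows 0-7,
    values 2-3 cover rows 8-15, two adjacent columns per thread.
    """
    row = tid // 4 + (vid // 2) * 8
    col = (tid % 4) * 2 + (vid % 2)
    return row, col

# Every (row, col) of the 16x8 tile is covered exactly once:
coords = {m16n8_acc_layout(t, v) for t in range(32) for v in range(4)}
assert coords == {(r, c) for r in range(16) for c in range(8)}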

4. Compiler Pipeline: IR Lowering, Layout Inference, and Code Generation

TileLang systems follow a multi-phase pipeline:

  1. Parsing and AST Extraction: Python ASTs are parsed into TileLang/Hexcute-specific ASTs, identifying annotated kernels.
  2. Intermediate Representation (IR) Construction: High-level tile operators are mapped to an IR such as TVM TensorIR (TileLang) or Hidet IR (Hexcute), preserving tile and layout semantics.
  3. Layout and Mapping Inference:
    • Memory layout functions are inferred for each buffer/tensor, including swizzles/padding where needed for bank-conflict avoidance (see the swizzle sketch after this list).
    • Thread binding is established via a top-down or type-inference process, composing mapping constraints imposed by tile primitives (Zhang et al., 22 Apr 2025).
    • Tensorization lowering matches block patterns to hardware instructions (e.g., mma, dp4a) or calls out to external templates (CUTLASS, etc.).
    • Pipelines are inferred; for instance, asynchronous copy stages with cp.async/mbarrier sequences may be inserted on Ampere/Hopper GPUs (Wang et al., 24 Apr 2025).
  4. Graph Optimizations: Loop fusion, tail handling for dynamic shapes, and constant propagation are applied.
  5. Code Generation: IR is lowered to device code (LLVM/CUDA/HIP), optionally inlining PTX or hard-coding external kernel calls. The output is a complete, hardware-adapted kernel.
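
As a concrete instance of the layout inference in step 3, a standard bank-conflict-avoiding trick XORs a tile's column-vector index with its row index so that same-column accesses from different rows land in different banks. The sketch below is a generic illustration, not TileLang's actual swizzle implementation:

def xor_swizzle(row: int, col: int, vec: int = 8, banks: int = 8) -> int:
    """Map a logical (row, col) element to a swizzled column index.

    Columns are grouped into vectors of `vec` contiguous elements; XOR-ing the
    vector index with the row spreads same-column accesses across banks.
    """
    vec_idx = col // vec
    swizzled_vec = vec_idx ^ (row % banks)
    return swizzled_vec * vec + col % vec

# A column walk (fixed col, varying row) now touches 8 distinct vectors:
assert len({xor_swizzle(r, 0) // 8 for r in range(8)}) == 8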

Hexcute distinguishes itself by fully synthesizing the thread-value layouts, block-vs-thread maps, and shared/register layouts via its type-inference system, using a cost-driven search among valid mapping solutions (Zhang et al., 22 Apr 2025).

5. Performance and Empirical Results

TileLang-based compilers are empirically validated across multiple AI and scientific kernels. Performance data include:

  • General GEMM: TileLang on RTX 4090 / A100 / H100 / MI300X achieves 1.10× / 0.97× / 1.00× / 1.04× that of vendor libraries, and up to 1.13×–1.25× over Triton (Wang et al., 24 Apr 2025).
  • Dequantized Matmul (mixed precision): Up to 7.65× faster than cuBLAS FP16 on A100 for INT2×INT8 inputs.
  • FlashAttention: TileLang outperforms FlashAttention-3 by 1.36× and Triton by 1.41× on H100; on MI300X, up to 2.10× for linear attention kernels.
  • MoE (Mixture-of-Experts) layers: Hexcute fuses 256 experts into a single kernel, yielding up to 7.89× speedup over Triton (Zhang et al., 22 Apr 2025).
  • Mixed-type GEMM (FP16×INT4): Hexcute delivers up to 2.9× speedup over Triton, 2.1× over Ladder, and matches hand-optimized Marlin (Zhang et al., 22 Apr 2025).
  • End-to-end large model inference (vLLM): Up to 2.91× lower latency compared to Triton/Marlin/vendor kernels for 100-token generations on 8×H100 (Zhang et al., 22 Apr 2025).

Ablation experiments confirm that fully automated layout/thread mapping synthesis is a significant contributor—disabling these mechanisms reduced performance by up to 52.5% (Zhang et al., 22 Apr 2025).

6. Comparison with Contemporary Tensor Compilers

TileLang and its derivatives differ from prior and contemporary tensor compiler systems in key respects:

| System | Tile-level abstraction | Automated layout/mapping | Fine-grained pipelines | Custom intrinsics | Expressiveness (packed/quantized ops) |
|--------|------------------------|--------------------------|------------------------|-------------------|---------------------------------------|
| Triton | Partial | Manual | Limited | Partial | Low for quantized/mixed-type |
| TVM/Ladder | Loop-based | Manual or template | Loop transformation | External only | Moderate |
| CUTLASS | C++ templates | Manual | High (per case) | Yes | High for coded cases, inflexible |
| Hidet | Block IR | Manual | Partial | Yes | Moderate |
| Hexcute | Full | Automatic (type inference) | Yes | Yes (auto map) | High; dynamic mixed-type, swizzle |

TileLang exposes abstractions for on-chip and register tiling, explicit layout inference, composition of scheduling primitives, and flexible extension to new AI operator classes. Hexcute further automates the synthesis of thread, tile, and memory mappings, folding instruction-level constraints directly into its inference process.

7. Programming Model: Trade-offs, Usability, and Extensibility

A core design trade-off of the TileLang model is the separation of data-centric, tile-focused computation from the explicit scheduling space.

  • Developers may write only dataflow operators, relying on the compiler to infer mapping and layout.
  • For expert tuning or hardware adaptation, explicit scheduling annotations and custom external calls can override or extend the base system.
  • This modularity both reduces the effort required to port kernels to new hardware and exposes essential levers (layout, swizzle, pipelining) for performance tuning (Wang et al., 24 Apr 2025, Zhang et al., 22 Apr 2025).
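
Extensibility follows the tile-operator contract described in Section 2: a new operator supplies layout inference and lowering. The plain-Python sketch below shows the shape of such an extension; the class and method names are hypothetical, not TileLang's actual interfaces:

class TileOperator:
    """Illustrative contract: an operator knows its layouts and its lowering."""

    def infer_layout(self, args, target):
        # Return an index -> address mapping for each buffer the op touches.
        raise NotImplementedError

    def lower(self, args, target):
        # Emit the loop nest / hardware instructions realizing the op.
        raise NotImplementedError

class RowMax(TileOperator):
    """A hypothetical row-wise reduction over a register tile."""

    def infer_layout(self, args, target):
        src, dst = args
        # Keep the source fragment's layout; collapse the reduced axis
        # (layout objects and reduce_axis are placeholders for illustration).
        return {src: src.layout, dst: src.layout.reduce_axis(1)}

    def lower(self, args, target):
        # Lower to per-thread partial maxima followed by a warp-shuffle tree.
        return ["thread_partial_max", "warp_shuffle_reduce"]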

The system supports dynamic shapes, mixed-precision compute, advanced pipeline patterns, and hardware-specific kernel selection (e.g., mma.sync.aligned.m16n8k16 for Nvidia GPUs) without requiring repeated low-level loop or thread-mapping surgery.

In summary, TileLang Tensor Compiler systems unify explicit tile-based dataflow with compositional scheduling and mapping spaces, providing competitive or superior performance across a wide range of AI and numerical workloads, while substantially reducing the engineering burden of kernel development for modern parallel accelerators (Wang et al., 24 Apr 2025, Zhang et al., 22 Apr 2025).
