
Fused-MBConv in EfficientNetV2

Updated 8 February 2026
  • Fused-MBConv is an operator that fuses 1×1 expansion and k×k depthwise convolution into a single operation to enhance training throughput and parameter efficiency.
  • It streamlines the convolutional block by using a fused k×k convolution followed by batch normalization, activation, optional squeeze-and-excitation, and projection.
  • Empirical results indicate that using Fused-MBConv in early network stages reduces training time while slightly increasing parameter counts and boosting overall accuracy.

Fused-MBConv is an operator introduced in the EfficientNetV2 convolutional architecture, designed to optimize training throughput and parameter efficiency by rethinking the structure of the canonical MBConv (Mobile Inverted Bottleneck Convolution) block. Fused-MBConv eliminates the traditional separation between the $1\times1$ expansion and the $k\times k$ depthwise convolution by merging them into a single $k\times k$ convolution with increased output channels, followed by batch normalization, activation, optional squeeze-and-excitation (SE), a projection, and a skip connection (when applicable). This fused structure trades slightly higher parameter counts for significantly improved hardware utilization and faster throughput, especially in the early, narrow layers of modern convolutional neural networks (Tan et al., 2021).

1. Block Structure and Mathematical Formulation

The standard MBConv block consists of a sequence of expansion ($1\times1$ convolution), depthwise spatial ($k\times k$) convolution, squeeze-and-excitation, projection ($1\times1$ convolution), and residual addition under shape constraints. In contrast, Fused-MBConv executes the expansion and spatial filtering in a single $k\times k$ convolution. The block's structure can be precisely described as:

  • Fused Conv: $k\times k$ Conv, input channels $C_{\mathrm{in}}$, output channels $t\,C_{\mathrm{in}}$, stride $s$.
  • BatchNorm and Activation
  • Squeeze-and-Excitation (optional):
    • $u = \mathrm{GlobalAvgPool}(Z_1)$
    • $e = \sigma(W_2\,\mathrm{Act}(W_1\,u))$
    • $Z_2 = Z_1 \odot e$
  • Projection: $1\times1$ Conv, $t\,C_{\mathrm{in}} \to C_{\mathrm{out}}$, BN.
  • Residual: If $s=1$ and $C_{\mathrm{in}}=C_{\mathrm{out}}$, add the input.

Mathematically, for input $X \in \mathbb{R}^{H\times W\times C_{\mathrm{in}}}$, expansion ratio $t$, kernel size $k$, stride $s$, and output channels $C_{\mathrm{out}}$:

$$\begin{aligned} Z_1 &= \mathrm{BN}(\mathrm{Act}(\mathrm{Conv}_{k\times k}^{s}(W_{\mathrm{fuse}}, X))), \quad W_{\mathrm{fuse}} \in \mathbb{R}^{k\times k\times C_{\mathrm{in}}\times (t\,C_{\mathrm{in}})} \\ Z_2 &= Z_1 \odot e \quad \text{(optional SE; otherwise } Z_2 = Z_1\text{)} \\ Y &= \mathrm{BN}(\mathrm{Conv}_{1\times 1}^{1}(W_{\mathrm{proj}}, Z_2)), \quad W_{\mathrm{proj}} \in \mathbb{R}^{1\times 1\times (t\,C_{\mathrm{in}})\times C_{\mathrm{out}}} \\ Y &\leftarrow Y + X \quad \text{if } s=1 \text{ and } C_{\mathrm{in}} = C_{\mathrm{out}} \end{aligned}$$

This architecture eliminates the separate expansion $1\times1$ convolution and depthwise convolution, integrating both into a single dense $k\times k$ convolution, which results in improved hardware efficiency (Tan et al., 2021).

2. Implementation Workflow and Pseudocode

The Fused-MBConv block can be instantiated with the following pseudocode that details tensor shapes and parameterization:

def Fused_MBConv(X, C_in, C_out, t, k, s, use_se, r=4):
    # 1. Fused k×k convolution ('same' padding, so spatial dims shrink only by stride s)
    Z1 = Conv2D(X, in_channels=C_in, out_channels=C_in*t, kernel_size=k, stride=s, padding='same', bias=False)
    Z1 = BatchNorm(Z1)
    Z1 = Activation(Z1)  # e.g., SiLU/Swish

    # 2. Optional SE operation (r = channel reduction ratio)
    if use_se:
        u = GlobalAvgPool(Z1)                       # shape (t*C_in,)
        v1 = FullyConnected(u, out_dim=(t*C_in)//r)
        v1 = Activation(v1)
        v2 = FullyConnected(v1, out_dim=t*C_in)
        e = Sigmoid(v2).reshape(1, 1, t*C_in)       # broadcast over H×W (channels-last layout)
        Z2 = Z1 * e
    else:
        Z2 = Z1

    # 3. Project back to C_out with a 1×1 convolution
    Y = Conv2D(Z2, in_channels=C_in*t, out_channels=C_out, kernel_size=1, stride=1, bias=False)
    Y = BatchNorm(Y)

    # 4. Residual connection (only when shapes match)
    if (s == 1) and (C_in == C_out):
        Y = Y + X

    return Y
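The pseudocode above can be sketched as a minimal runnable NumPy forward pass. This is an illustrative re-implementation, not the paper's code: it uses channels-last tensors, omits BatchNorm and SE, and substitutes ReLU for the SiLU/Swish activation used in EfficientNetV2.

```python
import numpy as np

def conv2d_same(x, w, stride=1):
    """Naive 'same'-padded 2D convolution.
    x: (H, W, C_in), w: (k, k, C_in, C_out), odd k."""
    k, _, _, c_out = w.shape
    H, W, _ = x.shape
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (pad, pad), (0, 0)))
    Ho, Wo = -(-H // stride), -(-W // stride)  # ceil division
    out = np.zeros((Ho, Wo, c_out))
    for i in range(Ho):
        for j in range(Wo):
            patch = xp[i*stride:i*stride+k, j*stride:j*stride+k, :]
            out[i, j] = np.tensordot(patch, w, axes=([0, 1, 2], [0, 1, 2]))
    return out

def fused_mbconv(x, w_fuse, w_proj, stride=1):
    """Fused k×k conv -> activation -> 1×1 projection -> optional residual."""
    z = np.maximum(conv2d_same(x, w_fuse, stride), 0.0)  # ReLU stand-in for SiLU
    y = conv2d_same(z, w_proj, 1)                        # 1×1 projection
    if stride == 1 and x.shape[-1] == y.shape[-1]:
        y = y + x                                        # skip connection
    return y

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8, 24))
w_fuse = rng.standard_normal((3, 3, 24, 96)) * 0.05  # t = 4 expansion
w_proj = rng.standard_normal((1, 1, 96, 24)) * 0.05
y = fused_mbconv(x, w_fuse, w_proj, stride=1)
print(y.shape)  # (8, 8, 24)
```

With stride 2 the residual branch is skipped and the spatial dimensions halve, matching the shape rules in the pseudocode.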

For example, in Stage 1 of EfficientNetV2-S, with $C_{\mathrm{in}}=24$, $t=1$, $k=3$, $s=1$, the fused conv is $3\times3\colon 24\to24$ (5,184 parameters) and the projection $1\times1\colon 24\to24$ (576 parameters), totaling approximately 5,760 parameters per block. Later stages with $t=4$ (e.g., fused $3\times3\colon 24\to96$, 20,736 parameters, plus projection, 4,608 parameters) total approximately 25,344 per block (Tan et al., 2021).
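These counts can be reproduced with a short helper. This is an illustrative sketch: convolutions are bias-free, BN parameters are ignored, and $C_{\mathrm{out}}=48$ is assumed for the later-stage example so the projection matches the 4,608 figure.

```python
def fused_mbconv_params(c_in, c_out, t, k):
    """Parameter count of a Fused-MBConv block (fused k×k conv + 1×1 projection)."""
    fused = k * k * c_in * (t * c_in)  # fused k×k convolution weights
    proj = (t * c_in) * c_out          # 1×1 projection weights
    return fused, proj, fused + proj

# Stage-1 block of EfficientNetV2-S: C_in = C_out = 24, t = 1, k = 3
print(fused_mbconv_params(24, 24, 1, 3))  # (5184, 576, 5760)

# A t = 4 block: fused 24 -> 96, projection to an assumed C_out = 48
print(fused_mbconv_params(24, 48, 4, 3))  # (20736, 4608, 25344)
```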

3. Computational Complexity and Parameter Analysis

Fused-MBConv and MBConv differ in their FLOPs and parameter composition. Let $k$ be the kernel size and $H, W$ the spatial dimensions (each multiply–accumulate counted as 2 FLOPs, hence the factor $2HW$ below).

  • MBConv:
    • Expand $1\times1$: $2HW\,[C_{\mathrm{in}}\,(t\,C_{\mathrm{in}})]$
    • Depthwise $k\times k$: $2HW\,[k^2\,(t\,C_{\mathrm{in}})]$
    • Project $1\times1$: $2HW\,[(t\,C_{\mathrm{in}})\,C_{\mathrm{out}}]$
    • $\mathrm{FLOPs}_{\mathrm{MB}} = 2HW\,(C_{\mathrm{in}}\,tC_{\mathrm{in}} + k^2\,tC_{\mathrm{in}} + tC_{\mathrm{in}}\,C_{\mathrm{out}})$
    • Parameters: $C_{\mathrm{in}}(tC_{\mathrm{in}}) + k^2\,tC_{\mathrm{in}} + (tC_{\mathrm{in}})C_{\mathrm{out}}$
  • Fused-MBConv:
    • Fused $k\times k$: $2HW\,[k^2\,C_{\mathrm{in}}\,(t\,C_{\mathrm{in}})]$
    • Project $1\times1$: $2HW\,[(t\,C_{\mathrm{in}})\,C_{\mathrm{out}}]$
    • $\mathrm{FLOPs}_{\mathrm{Fuse}} = 2HW\,(k^2\,C_{\mathrm{in}}\,tC_{\mathrm{in}} + tC_{\mathrm{in}}\,C_{\mathrm{out}})$
    • Parameters: $k^2\,C_{\mathrm{in}}(tC_{\mathrm{in}}) + (tC_{\mathrm{in}})C_{\mathrm{out}}$

Although Fused-MBConv generally incurs a greater parameter and FLOP cost than standard depthwise-separable convolution, the increase is limited in the early stages (where CinC_{\mathrm{in}} is small) and is offset by improved throughput due to better accelerator utilization (Tan et al., 2021).
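As an illustrative sketch of this tradeoff, the FLOP formulas above can be evaluated for a narrow versus a wide channel width. The spatial size 56×56 and the channel values below are placeholders, not EfficientNetV2 stage settings.

```python
def mbconv_flops(H, W, c_in, c_out, t, k):
    # expand 1×1 + depthwise k×k + project 1×1
    return 2 * H * W * (c_in * t * c_in + k * k * t * c_in + t * c_in * c_out)

def fused_mbconv_flops(H, W, c_in, c_out, t, k):
    # fused k×k + project 1×1
    return 2 * H * W * (k * k * c_in * t * c_in + t * c_in * c_out)

for c in (24, 256):  # narrow early-stage width vs wide late-stage width
    mb = mbconv_flops(56, 56, c, c, 4, 3)
    fu = fused_mbconv_flops(56, 56, c, c, 4, 3)
    print(c, round(fu / mb, 2))  # 24 -> 4.21, 256 -> 4.91
```

The relative FLOP overhead of fusing grows with channel width, which is consistent with restricting Fused-MBConv to the narrow early stages.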

Empirical data comparing an EfficientNet-B4 baseline against a variant with Fused-MBConv in the early stages:

| Configuration             | Params | FLOPs | Top-1 | Images/sec (TPUv3) |
|---------------------------|--------|-------|-------|--------------------|
| No Fused (all MBConv)     | 19.3M  | 4.5B  | 82.8% | 262                |
| Fused in Stages 1–3 only  | 20.0M  | 7.5B  | 83.1% | 362                |

Fully replacing all MBConv blocks increases parameter count substantially (e.g., 132M) and degrades training efficiency, motivating a hybrid approach (Tan et al., 2021).

4. Neural Architecture Search and Block Selection

Fused-MBConv arose from training-aware neural architecture search (NAS) utilizing a stage-wise, factorized search over operator type, kernel size, expansion ratio, and repeat count. The search reward for a configuration $m$ is $R(m) = A(m) \times S(m)^w \times P(m)^v$, where $A$ is Top-1 accuracy, $S$ is normalized step time, $P$ is parameter count, $w=-0.07$, and $v=-0.05$.
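A direct transcription of this reward is straightforward; the candidate accuracy, step-time, and parameter values below are made-up placeholders for illustration, not numbers from the paper.

```python
def nas_reward(accuracy, step_time, params, w=-0.07, v=-0.05):
    """Training-aware NAS reward: R(m) = A(m) * S(m)^w * P(m)^v.
    Negative exponents penalize slow training steps and large models."""
    return accuracy * (step_time ** w) * (params ** v)

# Hypothetical candidates: (Top-1 accuracy, normalized step time, params in millions)
fast_small = nas_reward(0.830, 1.0, 20.0)
slow_big = nas_reward(0.835, 1.4, 26.0)
print(fast_small > slow_big)  # True: the cheaper model wins despite lower accuracy
```

The small exponents mean accuracy still dominates the reward, with speed and size acting as mild penalties, which matches the search's preference for faster-training blocks only where they cost little accuracy.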

The search space included the operator choice $\in \{$MBConv, Fused-MBConv$\}$, kernel size $\in \{3\times3, 5\times5\}$, and expansion ratio $t \in \{1, 4, 6\}$. Empirical search observations demonstrated:

  • Early stages (1–3) consistently favor Fused-MBConv for improved throughput.
  • Later stages (4–7), where $C_{\mathrm{in}}$ is larger, favor conventional MBConv to maintain parameter and FLOP efficiency and exploit depthwise separation.

This hybrid pattern, adopting Fused-MBConv in early blocks and MBConv in later, high-channel-depth blocks, was computationally validated to offer better speed-accuracy tradeoffs than homogeneous block choices (Tan et al., 2021).

5. Empirical Results and Practical Impact

Empirical evaluation of the Fused-MBConv operator within EfficientNetV2 demonstrates:

  • Training step time is reduced by 30–40% when Fused-MBConv is used in early network stages (e.g., EfficientNetV2-S reaches ~20 ms/step at 83.9% Top-1, compared to EfficientNet (V1) at 45 ms/step for similar accuracy).
  • Selective use of Fused-MBConv in stages 1–3 increases throughput by 38% and Top-1 accuracy by ~0.3 percentage points compared to an all-MBConv baseline.
  • End-to-end model comparison (EfficientNetV2-S: 83.9% Top-1, 22M params, 8.8B FLOPs, 7 h train time) shows superior efficiency relative to earlier architectures (EfficientNet-B7: 84.7% Top-1, 66M params, 38B FLOPs, 139 h train time), with EfficientNetV2-M matching or exceeding B7 accuracy at roughly 11× faster training and ~20% fewer parameters.
  • Overuse of Fused-MBConv (in later, wide stages) severely increases parameter count and can degrade accuracy, justifying its selective adoption (Tan et al., 2021).

6. Significance and Architectural Implications

Fused-MBConv represents an evolution in mobile and resource-aware convolutional block design. By merging the expand and depthwise operations into a single dense convolution, it addresses accelerator memory-access bottlenecks prevalent in depthwise kernels for early-stage, low-channel layers. The block's design enables modern GPUs/TPUs to operate at higher throughput on these stages with only minor overhead in parameter count, as confirmed by NAS-informed block selection. The significance is particularly evident in training efficiency; EfficientNetV2 models with selectively integrated Fused-MBConv blocks train 3–11× faster end-to-end while preserving or even improving state-of-the-art accuracy across diverse datasets (Tan et al., 2021). The block thus provides a principled, empirically grounded operator that supports both speed and accuracy targets in contemporary convolutional architectures.

References

  • Tan, M., and Le, Q. V. (2021). EfficientNetV2: Smaller Models and Faster Training. Proceedings of the 38th International Conference on Machine Learning (ICML).