mmWave Radar for Sleep Bruxism Detection

Updated 14 December 2025
  • The paper demonstrates that mmWave radar using FMCW technology efficiently extracts minute mandibular micro-motions for robust sleep bruxism recognition in realistic settings.
  • System architecture employs a 60–64 GHz Texas Instruments IWR6843 with FFT-based spatial filtering and precise phase extraction to isolate jaw movements.
  • Advanced feature engineering paired with Random Forest classification achieves over 96% accuracy, outperforming traditional earable approaches while preserving user privacy.

Bruxism, an oromandibular movement disorder characterized by teeth grinding and clenching, poses significant diagnostic challenges due to the discomfort and privacy concerns associated with traditional monitoring techniques. Millimeter-wave (mmWave) radar provides a contactless, privacy-preserving approach for sleep bruxism (SB) recognition, leveraging the radar’s sensitivity to minute facial micro-motions induced by mandibular activity. Recent work demonstrates that frequency-modulated continuous-wave (FMCW) mmWave radar operating at 60–64 GHz achieves high accuracy in SB detection by extracting and classifying distinct signal features from radar echoes, thus validating its feasibility for robust bruxism recognition in real-world environments (Shen et al., 7 Dec 2025).

1. System Architecture and Signal Acquisition

Millimeter-wave radar-based SB recognition systems utilize specialized FMCW radar hardware. A representative setup applies a Texas Instruments IWR6843 operating in the 60–64 GHz band, with a maximum modulation bandwidth $B = 4$ GHz, yielding a range resolution $\Delta R = c/(2B) \approx 3.75$ cm. The system employs a multiple-input, multiple-output (MIMO) array (3 Tx, 4 Rx), though only a single Tx–Rx channel is utilized for SB detection. The radar aperture is positioned perpendicular to the user’s face at a fixed 55 cm distance to ensure optimal sensitivity while minimizing near-field artifacts. The data, consisting of 16-bit signed integer I/Q samples, is acquired in an indoor office setting with environmental clutter to emulate realistic use conditions (Shen et al., 7 Dec 2025).
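The range-resolution figure above follows directly from the chirp bandwidth; a minimal sketch of the calculation (the function name is illustrative, not from the paper):

```python
# Range resolution of an FMCW radar from its chirp bandwidth.
C = 299_792_458.0  # speed of light, m/s

def range_resolution(bandwidth_hz: float) -> float:
    """Delta R = c / (2B), in metres."""
    return C / (2.0 * bandwidth_hz)

delta_r = range_resolution(4e9)   # B = 4 GHz, as in the setup above
print(f"{delta_r * 100:.2f} cm")  # prints "3.75 cm"
```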

The raw signal is organized in a complex matrix $X \in \mathbb{C}^{N_c \times N_s}$ (with 16-bit integer I/Q components), where $N_s$ is the number of samples per chirp (fast-time, range domain) and $N_c$ is the number of chirps (slow-time, temporal domain).

2. Signal Processing Pipeline

Signal preprocessing unfolds as a multi-stage workflow:

  1. Range-Domain Reorganization: Each chirp’s I/Q stream is stacked to form $x_n[m]$, $m = 0, \dots, N_s-1$, $n = 1, \dots, N_c$, generating $X_{\text{raw}} \in \mathbb{C}^{N_c \times N_s}$.
  2. Spatial Filtering via FFT: A 1D FFT is computed along the fast-time axis for each chirp:

$$X_n(k) = \sum_{m=0}^{N_s-1} x_n[m]\, e^{-j2\pi k m/N_s}, \quad k = 0, \dots, N_s-1$$

Incoherent integration produces a power spectrum $P(k) = \sum_{n=1}^{N_c} |X_n(k)|^2$. The target range bin $k^*$ is selected as $k^* = \arg\max_{k \in [k_{\min}, k_{\max}]} P(k)$, where $[k_{\min}, k_{\max}]$ corresponds to the expected facial range.

  3. Phase Extraction and Differencing: The wrapped phase at $k^*$ for each chirp is extracted:

$$\varphi_n = \operatorname{atan2}\big(\operatorname{Im}\{X_n(k^*)\},\ \operatorname{Re}\{X_n(k^*)\}\big), \quad n = 1, \dots, N_c$$

Phase unwrapping is applied to avoid $2\pi$ discontinuities. Phase differences $\Delta\varphi_n = \varphi_n - \varphi_{n-1}$ are then computed, serving as a high-pass filter to suppress slow trends from respiration or head drift. This sequence $\{\Delta\varphi_n\}_{n=2}^{N_c}$ is the primary signal for feature extraction (Shen et al., 7 Dec 2025).
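The three-stage pipeline above can be sketched compactly in NumPy. This is a minimal illustration on a simulated target, not the authors' implementation; the function name, array sizes, and the toy phase modulation are assumptions:

```python
import numpy as np

def phase_diff_signal(X_raw: np.ndarray, k_min: int, k_max: int) -> np.ndarray:
    """Range FFT, range-bin selection, and phase differencing.

    X_raw : complex array of shape (Nc, Ns) -- chirps x fast-time samples.
    Returns the phase-difference sequence of length Nc - 1.
    """
    # Spatial filtering: 1-D FFT along the fast-time axis for each chirp.
    X = np.fft.fft(X_raw, axis=1)                # shape (Nc, Ns)
    # Incoherent integration over slow time -> power spectrum P(k).
    P = np.sum(np.abs(X) ** 2, axis=0)
    # Target bin: strongest return within the expected facial range.
    k_star = k_min + int(np.argmax(P[k_min:k_max + 1]))
    # Wrapped phase at k* per chirp, then unwrap and difference.
    phi = np.unwrap(np.angle(X[:, k_star]))
    return np.diff(phi)                          # high-pass filtered micro-motion

# Toy example: a simulated target in range bin 5 with a small phase wobble.
Nc, Ns = 64, 32
n = np.arange(Nc)[:, None]
m = np.arange(Ns)[None, :]
mod = 0.05 * np.sin(2 * np.pi * 0.1 * n)         # slow phase modulation
X_raw = np.exp(1j * (2 * np.pi * 5 * m / Ns + mod))
dphi = phase_diff_signal(X_raw, k_min=2, k_max=10)
print(dphi.shape)  # (63,)
```

Because the FFT of the toy signal concentrates all energy in bin 5, the recovered phase-difference sequence equals the first difference of the injected modulation.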

3. Feature Engineering

SB recognition is anchored on 11 features drawn from the sequence $\{\Delta\varphi_n\}$, encompassing time-domain, frequency-domain, and structural descriptors.

Time-Domain Statistics

  • Absolute mean ($\mu_{\text{abs}}$): Captures the average magnitude of micro-motions.
  • Variance ($\sigma^2$): Quantifies high-frequency jaw oscillations.
  • Kurtosis ($\kappa$): Sensitive to impulsive grinding spikes.
  • Time-domain entropy ($H_t$): Measures randomness in micro-motion dynamics.

Frequency-Domain Measures

  • Spectral entropy ($H_s$): Indicates spectral complexity, typically elevated during stochastic grinding activity.
  • Spectral variance ($\mathrm{Var}_s$): Quantifies the spread of energy in the frequency domain.
  • Band energy (5–10 Hz, $E_{5\text{–}10}$): Focuses on the mandibular oscillation band.

Structural Descriptors

  • Number of local maxima/minima in $\Delta\varphi_n$: Counts peaks and troughs denoting grinding impulses.
  • Counts above/below thresholds ($\Delta\varphi_n > 0.04$ rad, $\Delta\varphi_n < -0.04$ rad): Isolates extreme muscle-bulge events (Shen et al., 7 Dec 2025).
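A sketch of this feature set over a phase-difference window, assuming NumPy. The paper does not specify the slow-time sampling rate or the entropy binning, so `fs = 50.0` Hz and the 16-bin histogram are illustrative assumptions; only the 0.04 rad thresholds and the 5–10 Hz band come from the text:

```python
import numpy as np

def bruxism_features(dphi: np.ndarray, fs: float = 50.0) -> dict:
    """Compute the 11 descriptors listed above for one window of dphi.

    fs and the 16-bin entropy histogram are assumptions for illustration.
    """
    x = np.asarray(dphi, dtype=float)
    feats = {}
    # --- Time-domain statistics ---
    feats["abs_mean"] = np.mean(np.abs(x))
    feats["variance"] = np.var(x)
    mu, sd = np.mean(x), np.std(x)
    feats["kurtosis"] = np.mean(((x - mu) / sd) ** 4) if sd > 0 else 0.0
    counts, _ = np.histogram(x, bins=16)
    p = counts / counts.sum()
    feats["time_entropy"] = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    # --- Frequency-domain measures ---
    psd = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    q = psd / psd.sum()
    feats["spectral_entropy"] = -np.sum(q[q > 0] * np.log2(q[q > 0]))
    feats["spectral_variance"] = np.var(psd)
    band = (freqs >= 5.0) & (freqs <= 10.0)
    feats["band_energy_5_10"] = psd[band].sum()
    # --- Structural descriptors ---
    interior = x[1:-1]
    feats["n_local_max"] = int(np.sum((interior > x[:-2]) & (interior > x[2:])))
    feats["n_local_min"] = int(np.sum((interior < x[:-2]) & (interior < x[2:])))
    feats["n_above"] = int(np.sum(x > 0.04))   # threshold from the paper
    feats["n_below"] = int(np.sum(x < -0.04))
    return feats

rng = np.random.default_rng(0)
f = bruxism_features(rng.normal(0.0, 0.03, 250))
print(len(f))  # 11
```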

4. Classification Methodology

A Random Forest classifier is employed, utilizing $M = 90$ trees with Gini impurity ($I_G = 1 - \sum_{c=1}^{2} p_c^2$) as the split criterion. Each split considers a subset of features of size $\sqrt{d}$, adhering to standard ensemble learning principles. The classifier is trained over a balanced dataset of 180 samples (90 grinding, 90 non-grinding), each of 5 s duration. A 10-fold cross-validation regime is adopted, with grid-search tuning performed over tree count ($M \in \{50, 90, 150\}$) and tree depth (max_depth $\in \{\text{None}, 10, 20\}$) (Shen et al., 7 Dec 2025).
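This configuration maps directly onto scikit-learn's estimator and grid-search APIs. A sketch on synthetic stand-in data (the 180×11 matrix below is randomly generated, not the paper's dataset):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Synthetic stand-in for the 180 x 11 feature matrix (90 per class).
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0.0, 1.0, (90, 11)),   # non-grinding
               rng.normal(1.0, 1.0, (90, 11))])  # grinding
y = np.repeat([0, 1], 90)

# Gini split criterion and sqrt(d) features per split, as described above;
# the grid matches the paper's search space for tree count and depth.
grid = GridSearchCV(
    RandomForestClassifier(criterion="gini", max_features="sqrt",
                           random_state=0),
    param_grid={"n_estimators": [50, 90, 150],
                "max_depth": [None, 10, 20]},
    cv=10,
)
grid.fit(X, y)
print(grid.best_params_)
```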

5. Evaluation Metrics and Performance

Classification results are evaluated using TP, TN, FP, FN from the aggregated confusion matrix, with metrics computed as follows:

  • Accuracy: $\dfrac{\mathrm{TP} + \mathrm{TN}}{\mathrm{TP} + \mathrm{TN} + \mathrm{FP} + \mathrm{FN}}$
  • Precision: $\dfrac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FP}}$
  • Recall: $\dfrac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FN}}$
  • F1-score: $2 \cdot \dfrac{\text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}}$

Performance metrics on the test set (mean ± std over CV):

Metric          Non-Grinding    Grinding
Precision       0.9560          0.9663
Recall          0.9556          0.9667
F1-score        0.9609          0.9613
Accuracy        96.1% ± 1.2% (overall)
Training Acc.   99.8% (overall)

The confusion matrix aggregated over all folds indicates $\mathrm{TN} = 87$, $\mathrm{FP} = 4$, $\mathrm{FN} = 3$, $\mathrm{TP} = 86$ (Shen et al., 7 Dec 2025).
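The metric definitions above can be checked against the reported confusion matrix; the aggregated accuracy works out to $173/180 \approx 96.1\%$:

```python
def metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Binary classification metrics from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "precision": precision,
        "recall": recall,
        "f1": 2 * precision * recall / (precision + recall),
    }

# Counts from the aggregated confusion matrix reported above.
m = metrics(tp=86, tn=87, fp=4, fn=3)
print(f"accuracy = {m['accuracy']:.1%}")  # prints "accuracy = 96.1%"
```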

Low-frequency interference from respiration or head drift is mitigated via high-pass filtering of the phase-difference signal ($\Delta\varphi_n$). Spectral features exclude the 0.5–1.5 Hz band to avoid overlap with masseter or respiratory activity, and structural descriptors (extreme counts) provide robustness against random facial twitches.

6. Comparative Methodologies and Limitations

Wearable recognition using earable in-ear IMU platforms achieves up to 88% accuracy for grinding detection in controlled environments and 76% in realistic “in-the-wild” conditions. IMU-based models utilize gyroscope signals processed with time-domain and frequency-domain features (e.g., MFCCs, spectral centroid, zero-crossing rate), classified with SVM or Random Forest algorithms. The in-ear approach is less robust to quasi-static clenching motions and sensor noise, and its controlled-condition datasets differ from actual sleep-bruxism signals (Bondareva et al., 2021).

Millimeter-wave radar surpasses earable IMU approaches in accuracy and privacy, offering fully non-contact sensing. However, current radar-based systems face limitations in dataset size (3 subjects, 180 samples), testing confined to a single indoor environment, and susceptibility to confounding facial micro-motions (e.g., talking, swallowing), necessitating further validation and advances in signal separation.

7. Future Directions

Research directions for mmWave radar-based SB recognition focus on:

  • Expanding datasets to encompass larger, demographically diverse populations in varied environments, including home and clinical settings.
  • Enhancing signal processing algorithms with adaptive clutter removal or blind source separation to isolate SB-specific micro-movements.
  • Integrating deep sequence models (e.g., LSTM, temporal CNN) for direct, end-to-end feature learning enabling real-time operation.
  • Pursuing multi-modal fusion (e.g., low-resolution thermal imaging or ultrasound) to further attenuate non-bruxism artifact signals (Shen et al., 7 Dec 2025).

A plausible implication is that with increased sample diversity, algorithmic advancements, and multimodal fusion, millimeter-wave radar may be established as a standard, unobtrusive tool for longitudinal SB monitoring.

