mmWave Radar for Sleep Bruxism Detection
- The paper demonstrates that mmWave radar using FMCW technology efficiently extracts minute mandibular micro-motions for robust sleep bruxism recognition in realistic settings.
- System architecture employs a 60–64 GHz Texas Instruments IWR6843 with FFT-based spatial filtering and precise phase extraction to isolate jaw movements.
- Advanced feature engineering paired with Random Forest classification achieves over 96% accuracy, outperforming traditional earable approaches while preserving user privacy.
Bruxism, an oromandibular movement disorder characterized by teeth grinding and clenching, poses significant diagnostic challenges due to the discomfort and privacy concerns associated with traditional monitoring techniques. Millimeter-wave (mmWave) radar provides a contactless, privacy-preserving approach for sleep bruxism (SB) recognition, leveraging the radar’s sensitivity to minute facial micro-motions induced by mandibular activity. Recent work demonstrates that frequency-modulated continuous-wave (FMCW) mmWave radar operating at 60–64 GHz achieves high accuracy in SB detection by extracting and classifying distinct signal features from radar echoes, thus validating its feasibility for robust bruxism recognition in real-world environments (Shen et al., 7 Dec 2025).
1. System Architecture and Signal Acquisition
Millimeter-wave radar-based SB recognition systems utilize specialized FMCW radar hardware. A representative setup applies a Texas Instruments IWR6843 operating in the 60–64 GHz band with a maximum modulation bandwidth of 4 GHz, yielding a range resolution of approximately 3.75 cm. The system employs a multiple-input, multiple-output (MIMO) array (3 Tx, 4 Rx), though only a single Tx–Rx channel is utilized for SB detection. The radar aperture is positioned perpendicular to the user’s face at a fixed 55 cm distance to ensure optimal sensitivity while minimizing near-field artifacts. The data, consisting of 16-bit signed integer I/Q samples, is acquired in an indoor office setting with environmental clutter to emulate realistic use conditions (Shen et al., 7 Dec 2025).
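The stated band implies the range resolution directly: for an FMCW sweep of bandwidth $B$, the resolution is $c / (2B)$. A minimal numerical check, assuming the full 4 GHz of the 60–64 GHz band is swept:

```python
# FMCW range resolution: dR = c / (2 * B).
C = 3.0e8            # speed of light (m/s)
B = 4.0e9            # sweep bandwidth (Hz); full 60-64 GHz band assumed
range_res = C / (2 * B)
print(f"range resolution: {range_res * 100:.2f} cm")  # 3.75 cm
```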
The raw signal is organized in a complex matrix $\mathbf{S} \in \mathbb{C}^{N \times M}$, where $N$ is the number of samples per chirp (fast-time, range domain) and $M$ is the number of chirps (slow-time, temporal domain).
2. Signal Processing Pipeline
Signal preprocessing unfolds as a multi-stage workflow:
- Range-Domain Reorganization: Each chirp’s I/Q stream $s_m[n]$, $n = 0, \dots, N-1$, $m = 0, \dots, M-1$, is stacked to form the matrix $\mathbf{S}[n, m] = s_m[n]$.
- Spatial Filtering via FFT: A 1D FFT is computed along the fast-time axis for each chirp: $R[k, m] = \sum_{n=0}^{N-1} \mathbf{S}[n, m]\, e^{-j 2\pi k n / N}$. Incoherent integration produces a power spectrum $P[k] = \frac{1}{M} \sum_{m=0}^{M-1} |R[k, m]|^2$. The target range bin is selected as $k^* = \arg\max_{k \in \mathcal{K}} P[k]$, where $\mathcal{K}$ corresponds to the expected facial range.
- Phase Extraction and Differencing: The wrapped phase at $k^*$ for each chirp is extracted as $\phi[m] = \angle R[k^*, m]$. Phase unwrapping is applied to avoid discontinuities, and phase differences $\Delta\phi[m] = \phi[m+1] - \phi[m]$ are then computed, serving as a high-pass filter to suppress slow trends from respiration or head drift. This sequence is the primary signal for feature extraction (Shen et al., 7 Dec 2025).
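The three stages above can be sketched end-to-end. This is a minimal NumPy sketch rather than the authors' code; the function name and the range-gate arguments `k_min`/`k_max` (bounding the expected facial range bins) are assumptions:

```python
import numpy as np

def extract_phase_sequence(S, k_min, k_max):
    """Sketch of the radar signal-processing pipeline.

    S: complex I/Q matrix of shape (N, M) = (samples per chirp, chirps).
    k_min, k_max: range-bin window covering the expected facial distance.
    Returns the phase-difference sequence at the strongest target bin.
    """
    R = np.fft.fft(S, axis=0)                    # 1D FFT along fast-time -> range profiles
    P = np.mean(np.abs(R) ** 2, axis=1)          # incoherent integration over chirps
    k_star = k_min + np.argmax(P[k_min:k_max])   # strongest bin within the facial window
    phi = np.unwrap(np.angle(R[k_star, :]))      # wrapped phase -> unwrapped phase
    return np.diff(phi)                          # differencing acts as a high-pass filter
```

On a synthetic target placed at a known range bin with a sinusoidal phase modulation, the function recovers the differenced modulation exactly.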
3. Feature Engineering
SB recognition is anchored on 11 features drawn from the sequence $\Delta\phi[m]$, encompassing time-domain, frequency-domain, and structural descriptors.
Time-Domain Statistics
- Absolute mean: Captures the average magnitude of micro-motions.
- Variance: Quantifies high-frequency jaw oscillations.
- Kurtosis: Sensitive to impulsive grinding spikes.
- Time-domain entropy: Measures randomness in micro-motion dynamics.
Frequency-Domain Measures
- Spectral entropy: Indicates spectral complexity, typically elevated during stochastic grinding activity.
- Spectral variance: Quantifies the spread of energy in the frequency domain.
- Band energy (5–10 Hz): Focuses on the mandibular oscillation band.
Structural Descriptors
- Number of local maxima/minima in $\Delta\phi[m]$: Counts peaks and troughs denoting grinding impulses.
- Counts above/below amplitude thresholds: Isolates extreme muscle-bulge events (Shen et al., 7 Dec 2025).
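The 11 descriptors above might be computed from $\Delta\phi[m]$ as in the sketch below. The exact estimators, histogram bin count, and the threshold `thr` are assumptions, as the summary does not specify them:

```python
import numpy as np

def bruxism_features(dphi, fs, thr):
    """Sketch of the 11 features from the phase-difference sequence.

    fs: slow-time (chirp) sampling rate in Hz; thr: extreme-event
    amplitude threshold. Both are assumed parameters.
    """
    x = np.asarray(dphi, dtype=float)
    feats = {}
    # --- time-domain statistics ---
    feats["abs_mean"] = np.mean(np.abs(x))
    feats["variance"] = np.var(x)
    mu, sd = x.mean(), x.std()
    feats["kurtosis"] = np.mean(((x - mu) / sd) ** 4)
    p, _ = np.histogram(x, bins=32)
    p = p / p.sum()
    p = p[p > 0]
    feats["time_entropy"] = -np.sum(p * np.log2(p))
    # --- frequency-domain measures ---
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    ps = spec / spec.sum()
    ps_nz = ps[ps > 0]
    feats["spectral_entropy"] = -np.sum(ps_nz * np.log2(ps_nz))
    centroid = np.sum(freqs * ps)
    feats["spectral_variance"] = np.sum(((freqs - centroid) ** 2) * ps)
    band = (freqs >= 5) & (freqs <= 10)
    feats["band_energy_5_10"] = spec[band].sum()
    # --- structural descriptors ---
    d = np.diff(x)
    feats["n_local_max"] = int(np.sum((d[:-1] > 0) & (d[1:] < 0)))
    feats["n_local_min"] = int(np.sum((d[:-1] < 0) & (d[1:] > 0)))
    feats["n_above"] = int(np.sum(x > thr))
    feats["n_below"] = int(np.sum(x < -thr))
    return feats
```

A 7 Hz sinusoid, for instance, concentrates nearly all of its energy in the 5–10 Hz mandibular band.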
4. Classification Methodology
A Random Forest classifier is employed, using an ensemble of decision trees with Gini impurity as the split criterion. Each split considers a random feature subset of size $\sqrt{d}$ (with $d = 11$ features), adhering to standard ensemble learning principles. The classifier is trained on a balanced dataset of 180 samples (90 grinding, 90 non-grinding), each of 5 s duration. A 10-fold cross-validation regime is adopted, with grid-search tuning over the tree count and the tree depth (max_depth ∈ [None, 10, 20]) (Shen et al., 7 Dec 2025).
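The training setup can be sketched with scikit-learn on synthetic stand-in features, since the real 180×11 feature matrix is not public. The `n_estimators` grid is an assumption; the depth grid follows the reported values:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, StratifiedKFold

rng = np.random.default_rng(0)
# Synthetic stand-in for the 180x11 feature matrix (90 grinding, 90 non-grinding).
X = np.vstack([rng.normal(0.0, 1.0, (90, 11)),
               rng.normal(1.5, 1.0, (90, 11))])
y = np.array([0] * 90 + [1] * 90)

param_grid = {
    "n_estimators": [50, 100, 200],   # tree-count grid: an assumption
    "max_depth": [None, 10, 20],      # depth grid as reported
}
search = GridSearchCV(
    RandomForestClassifier(criterion="gini", max_features="sqrt", random_state=0),
    param_grid,
    cv=StratifiedKFold(n_splits=10, shuffle=True, random_state=0),
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```

`best_score_` here is the mean 10-fold cross-validation accuracy of the best grid point, mirroring how the paper's 96.1% figure is aggregated.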
5. Evaluation Metrics and Performance
Classification results are evaluated using TP, TN, FP, FN from the aggregated confusion matrix, with metrics computed as follows:
- Accuracy: $\mathrm{Acc} = \dfrac{TP + TN}{TP + TN + FP + FN}$
- Precision: $\mathrm{Prec} = \dfrac{TP}{TP + FP}$
- Recall: $\mathrm{Rec} = \dfrac{TP}{TP + FN}$
- F1-score: $F_1 = \dfrac{2 \cdot \mathrm{Prec} \cdot \mathrm{Rec}}{\mathrm{Prec} + \mathrm{Rec}}$
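These definitions can be checked numerically. The counts in the usage below are illustrative values consistent with the reported per-class recalls over 90 + 90 samples, not figures quoted from the paper:

```python
def classification_metrics(tp, tn, fp, fn):
    """Standard binary-classification metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Illustrative fold-aggregated counts (grinding as the positive class).
acc, prec, rec, f1 = classification_metrics(tp=87, tn=86, fp=4, fn=3)
print(f"acc={acc:.4f} prec={prec:.4f} rec={rec:.4f} f1={f1:.4f}")
```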
Performance metrics on the test set (mean ± std over CV):
| Metric | Non-Grinding | Grinding |
|---|---|---|
| Precision | 0.9560 | 0.9663 |
| Recall | 0.9556 | 0.9667 |
| F1-score | 0.9609 | 0.9613 |
| Accuracy (both classes) | 96.1% ± 1.2% | |
| Training Acc. (both classes) | 99.8% | |
The confusion matrix aggregated over all folds yields $TP = 87$, $TN = 86$, $FP = 4$, $FN = 3$ (treating grinding as the positive class), consistent with the reported 96.1% accuracy (Shen et al., 7 Dec 2025).
Low-frequency interference from respiration or head drift is mitigated via high-pass filtering of the phase-difference signal $\Delta\phi[m]$. Spectral features exclude the 0.5–1.5 Hz band to avoid overlap with masseter or respiratory activity, and structural descriptors (extreme counts) provide robustness against random facial twitches.
6. Comparative Methodologies and Limitations
Earable (in-ear) IMU platforms achieve up to 88% accuracy for grinding detection in controlled environments and 76% in realistic “in-the-wild” conditions. IMU-based models process gyroscope signals with time-domain and frequency-domain features (e.g., MFCCs, spectral centroid, zero-crossing rate), classified with SVM or Random Forest algorithms. The in-ear approach is less robust to quasi-static clenching motions and sensor noise, and controlled datasets differ from actual sleep-bruxism signals (Bondareva et al., 2021).
Millimeter-wave radar surpasses earable IMU recognition approaches in accuracy and privacy, offering fully non-contact sensing. However, current radar-based systems face limitations in dataset size (3 subjects, 180 sessions), single indoor testing, and susceptibility to facial micro-expressions (e.g., talking, swallowing), necessitating further validation and signal separation advances.
7. Future Directions
Research directions for mmWave radar-based SB recognition focus on:
- Expanding datasets to encompass larger, demographically diverse populations in varied environments, including home and clinical settings.
- Enhancing signal processing algorithms with adaptive clutter removal or blind source separation to isolate SB-specific micro-movements.
- Integrating deep sequence models (e.g., LSTM, temporal CNN) for direct, end-to-end feature learning enabling real-time operation.
- Pursuing multi-modal fusion (e.g., low-resolution thermal imaging or ultrasound) to further attenuate non-bruxism artifact signals (Shen et al., 7 Dec 2025).
A plausible implication is that with increased sample diversity, algorithmic advancements, and multimodal fusion, millimeter-wave radar may be established as a standard, unobtrusive tool for longitudinal SB monitoring.
References
- Bruxism Recognition via Wireless Signal (Shen et al., 7 Dec 2025)
- Earables for Detection of Bruxism: a Feasibility Study (Bondareva et al., 2021)