MSBG Hearing Loss & NAL-R Simulations

Updated 4 September 2025
  • The paper presents a unified probabilistic model that simulates sensorineural hearing loss using MSBG and prescriptive NAL-R frameworks to enhance personalization.
  • It employs Forney-style factor graphs and message passing techniques, analogous to Kalman filtering, for real-time dynamic gain adjustment.
  • The approach is validated through data-driven model fitting and Bayesian evaluation using a custom Julia toolbox for adaptive hearing aid design.

MSBG (Moore, Stone, Baer, and Glasberg) hearing loss models and NAL-R (National Acoustic Laboratories' Revised) fitting prescriptions represent foundational paradigms in the simulation and compensation of sensorineural hearing deficits. Both frameworks serve critical roles in research and clinical practice: MSBG enables detailed simulation of perceptual impairment by parameterizing the auditory pathway's degradations, while NAL-R formalizes prescriptive gain settings for hearing aids based on audiometric profiles. Recent advances integrate probabilistic modeling, deep learning, and factor graph–based inference to implement these paradigms in unified frameworks, yielding improved personalization, analytical tractability, and real-time processing capabilities.

1. Probabilistic Generative Modeling Frameworks

The probabilistic modeling approach to hearing loss compensation introduces a generative model that ties observed log-power levels (s), latent compensation gain (g), and underlying tuning parameters (θ) via an explicit hearing loss function, such as Zurek's saturating model. The joint probability over model variables is given by

p(g, s, \theta, m) = p(g_0)\, p(\theta) \prod_{k=1}^{n} p(s_k | g_k, \theta)\, p(g_k | g_{k-1}, \theta)

where g is the gain time series, s the observed log-power inputs, θ encapsulates the loss-related parameters (slope α, offset β, and the transition and observation noise variances), and m denotes the model structure choice.

The hearing loss model maps input-plus-gain (s_k + g_k) to a perceptually “audible” level using a three-region piecewise linear transform:

L(s_k + g_k;\, \alpha,\, \beta) = \begin{cases} 0 & s_k + g_k < -\frac{\beta}{\alpha} \\ \alpha(s_k + g_k) + \beta & -\frac{\beta}{\alpha} \le s_k + g_k < -\frac{\beta}{\alpha-1} \\ s_k + g_k & \text{otherwise} \end{cases}
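
For concreteness, the following minimal Julia sketch implements this three-region transform. The function name and the numeric values are illustrative choices of ours (picked so that the two region boundaries are ordered), not parameters taken from the paper.

```julia
# Zurek-style saturating hearing loss: three-region piecewise-linear transform.
# `alpha` is the slope of the middle region and `beta` its offset.
function hearing_loss(x, alpha, beta)
    if x < -beta / alpha
        return zero(x)               # lower region: clamped to zero (inaudible)
    elseif x < -beta / (alpha - 1)
        return alpha * x + beta      # middle region: linear with slope alpha
    else
        return x                     # upper region: identity (perceived as-is)
    end
end

# Illustrative values only: alpha = 2, beta = -40 place the region boundaries
# at 20 and 40 on the log-power axis.
@show hearing_loss(10.0, 2.0, -40.0)   # 0.0   (below the lower boundary)
@show hearing_loss(30.0, 2.0, -40.0)   # 20.0  (attenuated middle region)
@show hearing_loss(50.0, 2.0, -40.0)   # 50.0  (unchanged)
```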

The observation model is expressed as

p(s_k | g_k, \alpha, \beta, \vartheta) \propto \mathcal{N}(s_k | L(s_k + g_k; \alpha, \beta),\, \vartheta)

and gain evolution is enforced by a Gaussian random walk:

p(g_k | g_{k-1}, \gamma) = \mathcal{N}(g_k | g_{k-1}, \gamma^{-1})

For MSBG simulation, the parameters α and β and their priors are set to replicate typical audiometric degradation, especially loss profiles with frequency-specific gain reductions and compression. For NAL-R, prescriptive gain–input relations are encoded by selecting priors matching the standard NAL-R rule. Thus, both approaches are implemented within a single unified generative framework, tunable by parameter selection and prior specification.
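
Because the observation model pulls L(s_k + g_k) toward s_k (see also the gain update in Section 2), the gain for a constant input level s settles at the value g* that solves L(s + g*) = s. The sketch below (our illustration, not the paper's toolbox) scans input levels and recovers this steady-state input–gain curve with a coarse grid search; the parameter values are again only illustrative.

```julia
# Steady-state compensation gain implied by the generative model: for each input
# level s, find the gain g that makes the impaired output L(s + g) match s.

function hearing_loss(x, alpha, beta)            # same transform as above
    x < -beta / alpha       && return zero(x)
    x < -beta / (alpha - 1) && return alpha * x + beta
    return x
end

function steady_state_gain(s, alpha, beta; gains = 0.0:0.1:60.0)
    errs = abs.(hearing_loss.(s .+ gains, alpha, beta) .- s)
    return gains[argmin(errs)]                   # coarse grid search over g
end

alpha, beta = 2.0, -40.0                         # illustrative values only
for s in 10.0:10.0:60.0
    println("input ", s, " dB  ->  gain ", steady_state_gain(s, alpha, beta), " dB")
end
# Low-level inputs receive large gain, high-level inputs none: the level-dependent
# (DRC-like) input-gain behavior referred to later emerges directly from the model.
```

Different choices of α, β and their priors reshape this curve toward MSBG-style degradation profiles or NAL-R-style prescriptive gains, as described above.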

2. Message Passing and Factor Graph Inference

Efficient inference in these generative models is achieved by representing them as Forney-style factor graphs. The full joint distribution is factorized over the graph, and sum-product message passing yields recursive posterior updates of the form:

p(g_k | s^{1:k}, \theta, m) \propto \int p(g_k, s^{1:k}, \theta, m)\; d\{g_0, \dots, g_{k-1}\}

Updates are analogous to Kalman filter iterations. Specifically, the Kalman-like gain is

K_k = a_k \vartheta_{u, k} / (\vartheta + a_k^2 \vartheta_{u, k}),

where a_k denotes the local slope of L at the current linearization point and the predicted gain variance is

\vartheta_{u, k} = \gamma^{-1} + \vartheta_{g, k-1}

The gain estimate is then updated as

\hat{g}_k = \hat{g}_{k-1} + K_k \left[ s_k - L(s_k + \hat{g}_{k-1}) \right]

Covariance estimates are similarly updated, capturing the dynamic range compression (DRC) effect that is core to both MSBG and NAL-R approaches.
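
The recursion above can be written in a few lines of Julia. In this sketch (ours, not the toolbox implementation), a_k is taken to be the local slope of L at the linearization point s_k + ĝ_{k-1}, and the covariance update ϑ_{g,k} = (1 - K_k a_k) ϑ_{u,k} is the standard extended-Kalman form; the paper only states that covariances are "similarly updated". All numeric settings are illustrative.

```julia
# Online Kalman-like tracking of the compensation gain (Section 2 recursion).

function hearing_loss(x, alpha, beta)            # same transform as in Section 1
    x < -beta / alpha       && return zero(x)
    x < -beta / (alpha - 1) && return alpha * x + beta
    return x
end

# Local slope of L, used as the linearization coefficient a_k (our assumption).
function loss_slope(x, alpha, beta)
    x < -beta / alpha       && return 0.0
    x < -beta / (alpha - 1) && return alpha
    return 1.0
end

function track_gain(s; alpha = 2.0, beta = -40.0, obs_var = 1.0,
                    gamma = 10.0, g0 = 0.0, v0 = 100.0)
    g, v  = g0, v0                               # posterior mean and variance of g_k
    gains = similar(s)
    for k in eachindex(s)
        vu = inv(gamma) + v                      # predict: random-walk variance
        a  = loss_slope(s[k] + g, alpha, beta)   # linearization slope a_k
        K  = a * vu / (obs_var + a^2 * vu)       # Kalman-like gain K_k
        g += K * (s[k] - hearing_loss(s[k] + g, alpha, beta))
        v  = (1 - K * a) * vu                    # assumed EKF-style covariance update
        gains[k] = g
    end
    return gains
end

# A soft passage followed by a loud one: the gain rises, then relaxes toward zero.
s = vcat(fill(25.0, 50), fill(55.0, 50))
println(round.(track_gain(s)[[1, 25, 50, 75, 100]]; digits = 2))
```

Only additions, multiplications, and a division per sample are involved, which is the basis for the low-power deployment claim in Section 4.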

Parameter estimation (fitting) is accomplished via forward–backward or variational message passing, solving for the posterior

p(\theta | \mathcal{D}, m) \propto p(\mathcal{D}, \theta, m)

using a dataset of preferred input–output pairs \mathcal{D} = \{(\hat{s}, \hat{g})\}, which can include patient or simulation data. The Bayesian framework naturally incorporates user feedback for continual refinement.
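
As one simplified, concrete reading of this fitting step, the sketch below scores a grid of (α, β) candidates by how well the steady-state gain curve each implies predicts a handful of preferred (ŝ, ĝ) pairs under a Gaussian error model, then normalizes to a posterior and reports a grid-approximated log evidence. The synthetic data, grids, noise level, and flat prior are our illustrative assumptions; the paper's forward–backward and variational message passing schemes are more general.

```julia
# Grid approximation of p(theta | D, m) for theta = (alpha, beta), using synthetic
# preferred (input, gain) pairs. All numbers are illustrative placeholders.

function hearing_loss(x, alpha, beta)
    x < -beta / alpha       && return zero(x)
    x < -beta / (alpha - 1) && return alpha * x + beta
    return x
end

# Gain the model settles on for input level s (cf. the steady-state sketch above).
compensating_gain(s, alpha, beta; grid = 0.0:0.1:60.0) =
    grid[argmin(abs.(hearing_loss.(s .+ grid, alpha, beta) .- s))]

# Synthetic preferred pairs, roughly consistent with alpha = 2, beta = -40.
D = [(10.0, 14.6), (20.0, 10.3), (30.0, 4.8), (50.0, 0.2)]

alphas, betas, sigma = 1.5:0.25:3.0, -60.0:5.0:-20.0, 1.0
loglik(a, b) = sum(-0.5 * ((g - compensating_gain(s, a, b)) / sigma)^2 -
                   log(sigma * sqrt(2pi)) for (s, g) in D)
logp = [loglik(a, b) for a in alphas, b in betas]     # flat prior over the grid

post = exp.(logp .- maximum(logp)); post ./= sum(post)
best = argmax(post)                                   # CartesianIndex into the grid
println("MAP estimate: alpha = ", alphas[best[1]], ", beta = ", betas[best[2]])

# Grid-approximated log evidence log p(D | m); feeds the Bayes factor in Section 3.
log_evidence = maximum(logp) + log(sum(exp.(logp .- maximum(logp)))) - log(length(logp))
println("log evidence (grid approx.): ", round(log_evidence; digits = 2))
```

In the full model the noise precisions ϑ and γ would be inferred as well, and incoming user feedback would simply extend \mathcal{D}.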

3. Model Fitting and Bayesian Performance Evaluation

The joint probabilistic structure enables three key improvements:

  • Data-driven fitting: Rather than relying solely on predetermined gain rules, the model fits (α, β, ϑ, γ) to maximize congruence with individual MSBG profiles or NAL-R targets, capturing personalized hearing loss characteristics.
  • Principled model comparison: Bayesian metrics such as the Bayes factor,

BF = \frac{p(\mathcal{D} | m_1)}{p(\mathcal{D} | m_2)},

allow objective comparison between models encoding, for example, MSBG vs. NAL-R approaches, or models with and without gain transition dynamics (see the sketch after this list).

  • Unified parameter adjustment: Feedback from real-world listening trials can directly update beliefs over θ, enabling continuous personalization rather than discrete adjustment.
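
As referenced in the list above, a minimal sketch of the comparison step: given the log evidences of two candidate models (e.g., produced by the grid approximation in the previous sketch, or by message passing), the Bayes factor is a single subtraction in log space. The numeric values here are placeholders, not results from the paper.

```julia
# Bayes factor from two log model evidences.
log_evidence_m1 = -12.7    # placeholder: model with gain transition dynamics
log_evidence_m2 = -19.4    # placeholder: model without gain dynamics
log_bf = log_evidence_m1 - log_evidence_m2
println("log BF = ", log_bf, "   BF = ", round(exp(log_bf); digits = 1))
# log BF > 0 favors m1; working in log space avoids underflow for small evidences.
```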

4. Simulation Tools and Real-Time Deployment

The factor graph–based formulation is implemented using a custom Julia toolbox, which supports construction, simulation, and inference for the generative models described above. Outputs include steady-state input–gain functions, dynamic gain adjustment behaviors, and model comparison metrics.

A crucial implication is feasibility for real-time deployment on ultra–low–power DSP hardware. Message passing and Kalman-like recursions require only simple arithmetic, making them executable on embedded hearing aid hardware. The toolbox’s simulation results show automatic emergence of DRC and dynamic gain fitting, as well as objective evaluation capabilities (uniquely, Bayesian model scores), supporting adaptive and personalized algorithm development for clinical devices.

5. Integration with MSBG and NAL-R Simulation Paradigms

In MSBG simulations, the hearing loss model parameters are tailored to match typical sensorineural loss profiles, with the generative model capturing frequency-dependent gain reduction and compressive nonlinearities. For NAL-R simulations, prior distributions and the observable gain–input curves are selected to embody the prescriptive gain shaping required by the NAL-R standard.

Key implementation practice involves deriving initial parameter values from the audiogram and iteratively refining them using simulation data or direct user feedback. The same computational engine is then used for both signal processing (online gain adjustment) and offline fitting/model selection. Data-driven personalization becomes intrinsic: as real-world responses (or iterative preference entries) are added, the model can converge on a parametrization that closely matches the user's deficit and perceptual preferences.

6. Significance and Implications for Hearing Aid Design

This probabilistic, factor graph–based modeling paradigm represents a substantial shift from fixed prescriptive approaches toward unified, data-driven, and optimizable compensation solutions. The capacity to simulate MSBG and NAL-R cases within the same Bayesian engine enables integrated comparison, tuning, and personalization. Real-world technology implications include:

  • Continuous, patient-specific adaptation of parameter estimation and gain control within deployed hearing aids.
  • Direct objective model comparison via Bayesian metrics, supporting evidence-driven selection of signal processing algorithms without reliance on legacy heuristics.
  • Low computational cost, facilitating real-time deployment in embedded DSP hardware.
  • Enhanced user satisfaction, as fitting, feedback, and adaptation are unified rather than separate processes.

The framework provides a comprehensive scientific and engineering basis for the next generation of auditory compensation: customizable, analytically tractable, and dynamically adaptive hearing aid systems, with broad applicability to both MSBG-type impairment and standard prescriptive fitting regimes such as NAL-R.