
ML Interference AoA Estimation Method

Updated 8 October 2025
  • The paper introduces a maximum likelihood formulation that caps the log-likelihood to exclude outlier AoA measurements, ensuring robust performance in multipath environments.
  • It employs a sequential LMMSE update process that fuses distributed AoA data with low computational complexity, particularly under LOS conditions.
  • The method integrates randomized bootstrapping and Bayesian data fusion to effectively suppress NLOS interference and achieve near-optimal localization accuracy.

The maximum likelihood (ML) interference angle-of-arrival (AoA) estimation method encompasses a suite of techniques for robust localization of a signal source using distributed AoA measurements, especially in environments exhibiting multipath and non-line-of-sight (NLOS) propagation. Rooted in a rigorous likelihood-based statistical framework, it is augmented by sequential estimation, outlier suppression, and Bayesian data fusion to achieve near-optimal localization accuracy at tractable computational cost in the presence of uncooperative interference and sporadic measurement corruption.

1. ML Formulation for AoA Measurements: LOS and NLOS Models

Under line-of-sight (LOS) conditions, each AoA measurement \hat{\theta}_k at receiver k is modeled as a Gaussian random variable centered at the true bearing \theta_k(X) with known error variance \sigma_k^2:

p_{\mathrm{Gaus}}(\hat{\theta}_k \mid X) = \frac{1}{\sqrt{2\pi}\,\sigma_k\,(1 - 2Q(\pi/(2\sigma_k)))}\exp\left( -\frac{(\hat{\theta}_k - \theta_k(X))^2}{2\sigma_k^2} \right)

where Q(\cdot) is the Gaussian tail probability. The joint log-likelihood for N independent receivers is

L(\hat{\theta}_1, \ldots, \hat{\theta}_N \mid X) = -\sum_{k=1}^N \frac{(\hat{\theta}_k - \theta_k(X))^2}{\sigma_k^2}

resulting in an ML estimator for the source position:

\widehat{X} = \underset{x}{\mathrm{argmin}} \sum_{k=1}^N \frac{(\hat{\theta}_k - \theta_k(x))^2}{\sigma_k^2}

This is a nonlinear least-squares problem in x.
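
As a concrete illustration, this nonlinear least-squares cost can be sketched in a few lines of NumPy. The receiver geometry, source location, noise level, and the coarse grid search used as a stand-in for a proper nonlinear minimizer are all illustrative choices, not details taken from the paper:

```python
import numpy as np

def bearing(rx, x):
    """True bearing theta_k(x) from receiver position rx to candidate source x."""
    d = x - rx
    return np.arctan2(d[1], d[0])

def wrap(a):
    """Wrap an angle difference to (-pi, pi]."""
    return (a + np.pi) % (2 * np.pi) - np.pi

def ml_cost(x, receivers, theta_hat, sigma):
    """Weighted squared bearing residuals: the LOS ML cost at candidate x."""
    r = np.array([wrap(th - bearing(rx, x))
                  for rx, th in zip(receivers, theta_hat)])
    return np.sum(r**2 / sigma**2)

def ml_estimate(receivers, theta_hat, sigma, grid):
    """Grid-search surrogate for the argmin over x (illustrative only)."""
    costs = [ml_cost(x, receivers, theta_hat, sigma) for x in grid]
    return grid[int(np.argmin(costs))]

# Toy scenario: three receivers with noiseless bearings to a source at (3, 4).
receivers = [np.array([0.0, 0.0]), np.array([10.0, 0.0]), np.array([0.0, 10.0])]
source = np.array([3.0, 4.0])
theta_hat = np.array([bearing(rx, source) for rx in receivers])
sigma = np.full(3, 0.02)  # roughly 1-degree bearing noise std (illustrative)

xs = np.linspace(0, 10, 101)
grid = [np.array([a, b]) for a in xs for b in xs]
x_hat = ml_estimate(receivers, theta_hat, sigma, grid)
print(x_hat)  # close to (3, 4)
```

In practice the grid search would be replaced by an iterative nonlinear least-squares solver; the sketch only shows the shape of the cost being minimized.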

In NLOS conditions, a fraction \alpha of AoA measurements are uniformly distributed outliers, so the per-receiver likelihood becomes a mixture:

p_{\mathrm{narrow},k}(\hat{\theta}_k \mid X) = (1-\alpha)\,p_{\mathrm{Gaus}}(\hat{\theta}_k \mid X) + \frac{\alpha}{\pi}

Applying the approximation \log[\exp(a) + \exp(b)] \approx \max(a, b) to the two mixture terms, the log-likelihood is capped:

L(\hat{\theta} \mid X) \approx -\sum_{k=1}^N \min\left\{ (\hat{\theta}_k - \theta_k(X))^2, \Theta_{\max, k}^2 \right\}

with

\Theta_{\max, k}^2 = 2\sigma_k^2 \log \left( \frac{\sqrt{\pi}\,(1-\alpha)}{\sqrt{2}\,\sigma_k\,\alpha\,(1-2Q(\pi/(2\sigma_k)))} \right)

This capping excludes outlier measurements (large residuals) from overwhelming the fit, yielding robustness in NLOS settings.
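The capping threshold and capped loss follow directly from the expressions above. A minimal sketch (function names and parameter values are illustrative; Q is computed from the complementary error function):

```python
import math

def Q(x):
    """Gaussian tail probability Q(x) = P(Z > x) for standard normal Z."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def theta_max_sq(sigma, alpha):
    """Capping threshold Theta_max,k^2 from the mixture parameters."""
    num = math.sqrt(math.pi) * (1.0 - alpha)
    den = math.sqrt(2.0) * sigma * alpha * (1.0 - 2.0 * Q(math.pi / (2.0 * sigma)))
    return 2.0 * sigma**2 * math.log(num / den)

def capped_loss(residuals, sigma, alpha):
    """Capped negative log-likelihood contribution (up to additive constants)."""
    cap = theta_max_sq(sigma, alpha)
    return sum(min(r * r, cap) for r in residuals)

# A gross outlier residual contributes only the cap, not its full square.
cap = theta_max_sq(0.02, 0.1)
print(cap, capped_loss([1.0], 0.02, 0.1))  # both equal the cap
```

For sigma = 0.02 rad and alpha = 0.1 the cap works out to a residual of a few standard deviations, which matches the intent of excluding only gross outliers.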

2. Sequential ML Estimation via LMMSE Updates

Direct minimization of the ML or capped-loss cost function has exponential complexity due to the subset-selection problem when outliers are present. To address this, a low-complexity sequential algorithm is constructed:

  • Initialization (“bootstrap”): Select two receivers and triangulate an initial source position.
  • At receiver k, transform the prior estimate (\bar{R}, \bar{\theta}) and covariance \bar{\Sigma} into local polar coordinates.
  • Update with the new AoA measurement \tilde{\theta} via a linear MMSE (Kalman-type) update in polar coordinates:

\hat{\mu} = \bar{\mu} + K(\tilde{\theta} - A\bar{\mu}), \quad K = \frac{\bar{\Sigma}A^T}{\sigma_{\tilde{\theta}} + A\bar{\Sigma}A^T}

yielding explicit updates for range and angle:

\hat{R} = \bar{R} + \frac{\bar{\Sigma}_{R\theta}\,(\tilde{\theta} - \bar{\theta})}{\bar{\Sigma}_{\theta\theta} + \sigma_{\tilde{\theta}}}

\hat{\theta} = \frac{\bar{\Sigma}_{\theta\theta}\,\tilde{\theta} + \sigma_{\tilde{\theta}}\,\bar{\theta}}{\bar{\Sigma}_{\theta\theta} + \sigma_{\tilde{\theta}}}

  • Transform updated estimates back to global Cartesian and proceed to the next receiver.
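
The polar-coordinate LMMSE step above can be sketched compactly. The function name and the 2x2 covariance layout over (R, theta) are assumptions made for illustration; the measurement matrix A simply selects the angle component:

```python
import numpy as np

def polar_lmmse_update(R_bar, theta_bar, Sigma_bar, theta_tilde, var_theta):
    """One sequential LMMSE step in local polar coordinates.

    Sigma_bar is the 2x2 prior covariance over (R, theta); the new AoA
    measurement theta_tilde observes only the angle component.
    """
    A = np.array([[0.0, 1.0]])                      # measurement picks out theta
    S = Sigma_bar
    gain = (S @ A.T) / (var_theta + (A @ S @ A.T).item())  # 2x1 Kalman gain
    mu_bar = np.array([R_bar, theta_bar])
    mu_hat = mu_bar + gain[:, 0] * (theta_tilde - theta_bar)
    Sigma_hat = S - gain @ (A @ S)                  # posterior covariance
    return mu_hat[0], mu_hat[1], Sigma_hat

# Illustrative numbers: prior (R, theta) = (10, 0.3), new bearing 0.32.
Sigma_bar = np.array([[0.5, 0.1], [0.1, 0.04]])
R_hat, th_hat, Sigma_hat = polar_lmmse_update(10.0, 0.3, Sigma_bar, 0.32, 0.01)
print(R_hat, th_hat)  # approximately 10.04 and 0.316
```

Expanding the gain reproduces the explicit range and angle updates above: the range moves through the cross-covariance \bar{\Sigma}_{R\theta}, while the angle is a precision-weighted average of prior and measurement.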

In LOS settings, this approach approximates the ML solution as the system accumulates measurements, and each update requires only O(1) cost per receiver, making the total complexity linear in N. Coordinate transformations between polar and Cartesian are computationally negligible relative to likelihood evaluations.

3. Randomized Bootstrapping and Outlier Suppression in NLOS

To mitigate outliers in NLOS environments, the sequential algorithm is randomized:

  • Perform M independent “bootstraps” by randomly selecting initiating receiver pairs; the probability that both are LOS is (1-\alpha)^2.
  • For each bootstrap, sequentially aggregate receivers, accepting a new AoA measurement only if the residuals e_k = |\hat{\theta}_k - \theta_k(\hat{X})| of all included measurements remain below \Theta_{\max}.
  • If a candidate measurement fails the threshold test, it is discarded (outlier suppression).
  • The number of bootstraps required to achieve a failure probability P_\text{failure} is upper bounded as

M \leq \frac{\log(P_\text{failure})}{\log\left(1-(1-\alpha)^2\right)}

  • Each bootstrap costs O(N^2), since each acceptance test checks the residuals of all previously included measurements; over all bootstraps the total is O(MN^2), with M independent of N.
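
The bootstrap-count bound can be evaluated directly; the outlier fraction alpha = 0.2 and the 1% failure target below are illustrative values:

```python
import math

def num_bootstraps(alpha, p_failure):
    """Upper bound on the number of bootstraps M so that, with probability
    at least 1 - p_failure, at least one initiating pair is all-LOS."""
    p_pair_los = (1.0 - alpha) ** 2          # both initiators LOS
    return math.ceil(math.log(p_failure) / math.log(1.0 - p_pair_los))

print(num_bootstraps(0.2, 0.01))  # 5 bootstraps suffice
```

Even for substantial outlier fractions, M stays modest, which is why the bound supports the claim that M can be treated as a constant with respect to N.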

This randomized sequential procedure ensures that, with high probability, at least one bootstrap aggregates only LOS receivers, causing the sequential update to track the optimal ML solution.

4. Bayesian Framework: Incorporation of Heterogeneous Measurements

The entire estimation process is cast within a Bayesian paradigm:

  • At each receiver, the posterior (mean and covariance) over the source location summarizes all information fused from earlier receivers.
  • The LMMSE update is a Bayesian posterior mean given a Gaussian prior and new (possibly Gaussian-mixed) AoA measurement.
  • When additional measurement types (e.g., RSS-based range estimates) are available, the extension is straightforward:

\tilde{\mu} = \begin{bmatrix} \tilde{R} \\ \tilde{\theta} \end{bmatrix}, \quad \text{with measurement covariance } S,

and updating

\hat{\mu} = \bar{\mu} + K(\tilde{\mu} - \bar{\mu}), \quad K = \bar{\Sigma}(\bar{\Sigma} + S)^{-1}

  • Prior knowledge about source location, system constraints, or even additional modalities can be merged seamlessly in this framework.
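
A minimal sketch of this joint update, assuming a two-dimensional (range, angle) measurement with a diagonal covariance chosen purely for illustration:

```python
import numpy as np

def fuse(mu_bar, Sigma_bar, mu_tilde, S):
    """Bayesian/LMMSE fusion of a Gaussian prior (mu_bar, Sigma_bar) with a
    direct (range, angle) measurement mu_tilde having covariance S."""
    K = Sigma_bar @ np.linalg.inv(Sigma_bar + S)
    mu_hat = mu_bar + K @ (mu_tilde - mu_bar)
    Sigma_hat = (np.eye(len(mu_bar)) - K) @ Sigma_bar
    return mu_hat, Sigma_hat

# Illustrative fusion of a prior with an RSS-range + AoA measurement.
mu_bar = np.array([100.0, 0.5])      # prior (range, bearing)
Sigma_bar = np.diag([25.0, 0.01])
mu_tilde = np.array([110.0, 0.48])   # new direct measurement
S = np.diag([25.0, 0.01])            # equal confidence, for simplicity
mu_hat, Sigma_hat = fuse(mu_bar, Sigma_bar, mu_tilde, S)
print(mu_hat)  # midway between prior and measurement: [105.0, 0.49]
```

With equal covariances the gain is 0.5 on each component, so the posterior mean is the midpoint and the posterior covariance is halved, as expected from precision-weighted fusion.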

The Bayesian mixture-likelihood update in NLOS handles outliers probabilistically, ensuring robustness without explicit outlier removal logic.

5. Performance, Complexity, and Scaling Considerations

Performance:

  • In LOS environments, the sequential LMMSE update closely approximates ML and achieves near-optimal estimation error.
  • In NLOS, the randomized bootstrap approach, which caps outliers and aggregates only consistent measurements, empirically converges to the ML solution as M increases.
  • For sufficient M (modest for practical outlier rates), performance is essentially as good as the full combinatorial ML search, but at managed complexity.

Complexity and Scalability:

  • LOS: Complexity is O(N), dominated by the sequential updates.
  • NLOS: Complexity is O(MN^2), with M logarithmic in 1/P_\text{failure} and independent of N.
  • The algorithm is scalable to large receiver networks, provided enough diversity exists to regularly select consistent (LOS) pairs as initialization.

Implementation:

  • Efficient coordinate transformations between polar (local) and Cartesian (global) representations are required.
  • The threshold \Theta_{\max} must be computed for each receiver, using the relevant variances, mixture parameters, and the appropriate capping expression.
  • Rao-Blackwellization (maintaining sufficient statistics) ensures that the only messages passed in a cooperative, distributed deployment are the location estimate and covariance.

6. Limitations, Deployment Strategies, and Extensions

Limitations:

  • In highly correlated (clustered) NLOS settings, the fraction of consistently available LOS pairs may be diminished, raising the required number of bootstraps.
  • Bayesian robustness may deteriorate if the mixture model poorly fits the actual outlier distribution.
  • All performance guarantees rest on the accurate knowledge of measurement error variances and mixture parameters (α\alpha); adaptive estimation of these hyperparameters is required in operational systems.

Deployment:

  • The algorithm is well-suited for sensor networks, cooperative localization in vehicular systems, and distributed radio interferometry where receivers collect directional measurements in the presence of spatially inhomogeneous multipath environments.
  • Parallelization is inherent in the bootstrap process; receivers can asynchronously attempt different aggregations with minimal coordination.

Extensions:

  • Integration with other modalities (e.g., broadcast time-of-arrival, vehicle odometry) is immediate via the Bayesian update formalism.
  • For wideband systems, path resolution allows the algorithm to operate on per-path AoAs, exploiting the extra diversity in multipath for improved performance.
  • The likelihood mixture model can be generalized beyond uniform outlier distributions to more complex, empirically derived contamination distributions.

7. Summary Table: ML Interference AoA Algorithmic Structure

  Component               Operation                                                      Complexity
  LOS Sequential LMMSE    Sequential per-receiver fusion of AoA                          O(N)
  NLOS Bootstrapping      Randomized initialization, LMMSE aggregation up to threshold   O(MN^2)
  Bayesian Modal Fusion   Kalman update with prior + AoA/range/RSS                       O(1) per update

This approach leverages robust likelihood modeling, efficient sequential data fusion, and outlier suppression via randomized bootstrapping to provide maximum-likelihood level accuracy for cooperative AoA-based source localization, even under adversarial multipath conditions, at computational cost compatible with large-scale deployments (Ananthasubramaniam et al., 2012).
