
Binary Anchor Optimization Algorithm

Updated 28 November 2025
  • The algorithm is a convex framework that minimizes active anchor count while enforcing CRB constraints for high-accuracy sensor localization.
  • It uses sparse optimization techniques with ℓ1 relaxation and iterative reweighted schemes to efficiently handle one-way TOA ranging in both transmission modes.
  • Numerical results indicate that the iterative reweighted approach substantially reduces anchor usage (up to 45% fewer anchors) while still meeting the LMI-encoded localization accuracy constraints.

The Binary Anchor Optimization Algorithm is a convex optimization-based framework for anchor placement in sensor network localization via one-way time-of-arrival (TOA) ranging. The algorithm exploits the inherent sparsity in anchor deployment, targeting minimal-anchor or minimal-energy solutions while ensuring that localization performance, quantified through the Cramér-Rao bound (CRB), meets prescribed accuracy specifications throughout the sensor region. The approach is developed for both the mode in which anchors transmit the ranging signals (OW-A) and the mode in which they receive them (OW-S), and proceeds through a sequence of sparse optimization, convex relaxation, and solution refinement steps to yield provably feasible and sparse anchor placements (Chepuri et al., 2013).

1. System Formulation and Optimization Variables

Consider a deployment area with $M$ candidate anchor locations $a_1,\ldots,a_M \in \mathbb{R}^2$ in an "anchor area" $\mathcal{A}$, and an unknown sensor position $s \in \mathcal{S} \subset \mathbb{R}^2$, with $\mathcal{S}$ discretized on a grid. The distance $d(a_m, s)$ is computed as $\|a_m - s\|_2$. Two operational modes are distinguished:

  • OW-A (Anchors Send): Each anchor $m$ transmits a ranging pulse of energy $e_m$, resulting in TOA noise variance $\sigma_{m,s}^2 = (\rho_a/\gamma^2)/e_m$, where $\rho_a$ depends on the pulse and noise spectral density, and $\gamma^2 = \alpha\, d(a_m,s)^{-\beta}$ models path loss (with known $\alpha$, $\beta$).
  • OW-S (Sensor Sends): The sensor transmits once with fixed energy $e_s$; each anchor $m$ observes a TOA with variance $\sigma_{s,m}^2 = (\rho_s/\gamma^2)/e_s$.
  • Selection Vectors: In OW-A, the vector $e = [e_1,\ldots,e_M]^T$ (with $e_m \geq 0$) simultaneously selects anchors and sets the transmit energy per anchor. In OW-S, the vector $w = [w_1,\ldots,w_M]^T \in \{0,1\}^M$ selects the set of utilized anchors (pure selection).

This formulation naturally embeds a sparse-selection paradigm, since optimal performance is typically achieved with only a subset of anchors active.
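
To ground the notation, the following sketch builds a candidate-anchor geometry and a discretized sensor grid of the kind used in the paper's experiments. It is an illustrative reconstruction: the circle radius, grid extent, and resolution are assumptions, not values from the source.

```python
import numpy as np

M = 80                                    # candidate anchors, placed on a circle (radius assumed)
angles = 2 * np.pi * np.arange(M) / M
anchors = 15.0 * np.column_stack([np.cos(angles), np.sin(angles)])   # a_1..a_M in R^2

# Sensor area S: a grid over an assumed 10 m x 10 m square inside the circle.
xs = np.linspace(-5.0, 5.0, 11)
grid = np.array([[x, y] for x in xs for y in xs])                    # |S| = 121 points

# Pairwise distances d(a_m, s) = ||a_m - s||_2 for every anchor/grid-point pair.
dists = np.linalg.norm(anchors[:, None, :] - grid[None, :, :], axis=2)  # shape (M, |S|)
```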

2. Fisher Information Matrix and Cramér-Rao Bound Constraints

Localization performance is governed by the Fisher Information Matrix (FIM):

  • OW-A: $J_a(e,s) = \sum_{m=1}^M e_m F_{a,m}(s)$
  • OW-S: $J_s(w,s) = \sum_{m=1}^M w_m F_{s,m}(s)$

with per-anchor Fisher information components

$$F_{a,m}(s) = \alpha \rho_a^{-1}\, d(a_m,s)^{-(\beta+2)}\, (s-a_m)(s-a_m)^T,$$

$$F_{s,m}(s) = e_s\, \alpha \rho_s^{-1}\, d(a_m,s)^{-(\beta+2)}\, (s-a_m)(s-a_m)^T.$$

The unconstrained CRB is $\mathrm{CRB}(e,w;s) = \mathrm{trace}\big(J(e,w;s)^{-1}\big)$, but in practice a constraint is imposed requiring the smallest FIM eigenvalue to satisfy $\lambda_{\min}(J(e,w;s)) \geq \lambda$ for all $s \in \mathcal{S}$; since every eigenvalue of $J^{-1}$ is then at most $1/\lambda$, the total position error variance $\mathrm{trace}(J^{-1})$ cannot exceed $2/\lambda$. Equivalently, the requirement can be written as a set of convex linear matrix inequalities (LMIs):

  • For OW-A: $\sum_{m} e_m F_{a,m}(s) \succeq \lambda I_2$ for all $s \in \mathcal{S}$
  • For OW-S: $\sum_{m} w_m F_{s,m}(s) \succeq \lambda I_2$ for all $s \in \mathcal{S}$
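
A direct transcription of the per-anchor FIM and the eigenvalue check is straightforward; the sketch below assumes the constants $\alpha = 1$, $\beta = 2$, $\rho_a = 1$ and reuses `anchors` and `grid` from the previous snippet.

```python
alpha, beta, rho_a = 1.0, 2.0, 1.0        # assumed path-loss and noise constants

def fim_component(a_m, s):
    """Per-anchor OW-A Fisher information F_{a,m}(s), as defined above."""
    diff = s - a_m
    d = np.linalg.norm(diff)
    return (alpha / rho_a) * d ** (-(beta + 2)) * np.outer(diff, diff)

def min_fim_eigenvalue(e, s):
    """lambda_min of J_a(e, s) = sum_m e_m F_{a,m}(s) at one grid point."""
    J = sum(e[m] * fim_component(anchors[m], s) for m in range(len(e)))
    return np.linalg.eigvalsh(J)[0]       # eigvalsh returns eigenvalues in ascending order

def satisfies_crb(e, lam):
    """Check the LMI J_a(e, s) >= lam * I_2 at every grid point in S."""
    return all(min_fim_eigenvalue(e, s) >= lam for s in grid)
```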

3. Combinatorial $\ell_0$-based Formulation

The anchor optimization objective is to minimize the support of the selection vector (its $\ell_0$ norm, i.e., cardinality), subject to the CRB (LMI) constraints:

  • OW-A:

$$\min \|e\|_0 \quad \text{s.t.} \quad \sum_{m=1}^M e_m F_{a,m}(s) \succeq \lambda I_2 \ \ \forall s \in \mathcal{S};\quad 0 \leq e_m \leq e_b$$

  • OW-S:

$$\min \|w\|_0 = 1^T w \quad \text{s.t.} \quad \sum_{m=1}^M w_m F_{s,m}(s) \succeq \lambda I_2 \ \ \forall s \in \mathcal{S};\quad w_m \in \{0,1\}$$

Both formulations are NP-hard and combinatorial in $M$.

4. Convex Relaxation via $\ell_1$ and SDP

A standard convex surrogate is employed:

  • OW-A: The $\ell_0$ norm on $e$ is replaced by the $\ell_1$ norm, yielding a semidefinite program (SDP):

$$\min_{e \in \mathbb{R}^M} 1^T e \quad \text{s.t.} \quad \sum_{m=1}^M e_m F_{a,m}(s) \succeq \lambda I_2 \ \ \forall s \in \mathcal{S},\quad 0 \leq e \leq e_b 1$$

  • OW-S: Boolean constraints are relaxed using a Shor-type SDP lift:

$$\min\, 1^T w \quad \text{s.t.} \quad \sum_{m=1}^M w_m F_{s,m}(s) \succeq \lambda I_2 \ \ \forall s \in \mathcal{S},\quad \begin{bmatrix} W & w \\ w^T & 1 \end{bmatrix} \succeq 0,\quad \mathrm{diag}(W) = w,\quad w \geq 0$$

This convex relaxation allows polynomial-time approximation to the original combinatorial problem.
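
The OW-A relaxation maps directly onto a disciplined convex program. The sketch below expresses it in cvxpy; the values of $\lambda$ and $e_b$ are assumptions, and `fim_component`, `anchors`, `grid`, and `M` carry over from the earlier snippets.

```python
import cvxpy as cp

lam, e_b = 1.0, 10.0   # assumed accuracy level and per-anchor energy cap

# Precompute F_{a,m}(s) for every grid point s and anchor m.
F = [[fim_component(anchors[m], s) for m in range(M)] for s in grid]

e = cp.Variable(M, nonneg=True)
constraints = [e <= e_b]
for F_s in F:
    # One 2x2 LMI per grid point: sum_m e_m F_{a,m}(s) >= lam * I_2.
    J = sum(e[m] * F_s[m] for m in range(M))
    constraints.append(J >> lam * np.eye(2))

# l1 surrogate of ||e||_0: minimize the total transmit energy 1^T e.
prob = cp.Problem(cp.Minimize(cp.sum(e)), constraints)
prob.solve(solver=cp.SCS)
```

The OW-S lift follows the same pattern with an additional $M \times M$ matrix variable $W$, the block PSD constraint, and $\mathrm{diag}(W) = w$.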

5. Sparsity-Promoting Iterative Reweighted $\ell_1$

To induce higher sparsity, iterative reweighted $\ell_1$ optimization is applied. At iteration $k$, selection weights are updated as $u_i^{(k+1)} = \frac{1}{\epsilon + e_i^{(k)}}$ and the SDP is re-solved with the weighted objective $u^{(k)T} e$. This process typically converges within 3–6 iterations to a solution with substantially fewer nonzero entries than plain $\ell_1$ relaxation; after convergence, all $e_i$ below a set threshold are discarded to finalize the sparse support.
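
Wrapping the SDP above in the reweighting loop takes only a few lines. In this sketch, $\epsilon$, the iteration count, and the final threshold are assumed values consistent with the ranges the text mentions.

```python
# Iterative reweighted l1: re-solve the SDP with weights u_i = 1/(eps + e_i).
eps, n_iters = 1e-3, 5                    # assumed; the text reports 3-6 iterations
u = np.ones(M)
for _ in range(n_iters):
    prob = cp.Problem(cp.Minimize(u @ e), constraints)
    prob.solve(solver=cp.SCS)
    u = 1.0 / (eps + e.value)             # small energies get large weights next round

support = np.flatnonzero(e.value > 1e-4)  # assumed threshold to finalize the support
print(f"active anchors: {support.size}")
```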

In the OW-S mode, post-processing via randomization or simple thresholding of the continuous relaxation solution $w^* \in [0,1]^M$ yields a binary selection. One standard approach is to draw Gaussian vectors $\xi \sim \mathcal{N}(0, W^*)$, set $w_i^{(\text{trial})} = 1$ if $\xi_i \geq \theta$, and retain the lowest-cardinality trial that satisfies the LMIs (Chepuri et al., 2013).
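
The randomization step can be sketched as follows. Here `W_star` stands for the lifted matrix $W^*$ returned by the Shor relaxation, the threshold $\theta$ and trial count are assumptions, and the earlier OW-A feasibility check `satisfies_crb` stands in for its OW-S analogue (the two FIMs differ only in constant factors).

```python
def randomized_rounding(W_star, theta=0.0, n_trials=200, lam=1.0, seed=0):
    """Round the relaxed OW-S solution to a binary selection via Gaussian trials."""
    rng = np.random.default_rng(seed)
    best_w, best_card = None, np.inf
    for _ in range(n_trials):
        xi = rng.multivariate_normal(np.zeros(M), W_star)   # xi ~ N(0, W*)
        w_trial = (xi >= theta).astype(float)
        # Keep the lowest-cardinality trial that still meets every grid-point LMI.
        if w_trial.sum() < best_card and satisfies_crb(w_trial, lam):
            best_w, best_card = w_trial, w_trial.sum()
    return best_w
```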

6. Algorithmic Complexity and Numerical Behavior

Each $\ell_1$-relaxed problem forms an SDP of size $M \times M$ with $|\mathcal{S}|$ LMIs (each $2 \times 2$). Standard interior-point SDP solvers (e.g., CVX with SeDuMi) address problems of this scale efficiently, with worst-case per-iteration complexity $\mathcal{O}\big((M + |\mathcal{S}|)^3\big)$. For $M$ up to a few hundred and $|\mathcal{S}|$ up to a few thousand, run times are practical.

Iterative reweighted SDPs converge rapidly, and while global optimality for the original $\ell_0$ combinatorial problem is not guaranteed, the empirical cardinalities are near-optimal. For example, whereas exhaustive search would require on the order of $10^{17}$ feasibility checks for moderate problem sizes, the algorithm terminates quickly while achieving high sparsity (Chepuri et al., 2013).

7. Numerical Results and Empirical Evaluation

Selected numerical evaluations with $M = 80$ anchor candidates (on a circle), accuracy specifications $R_e = 4$ cm and $P_e = 0.95$, path-loss parameters $\beta = 2$, $\alpha = 1$, and $\mathrm{SNR} \approx 10$ dB at 10 m substantiate the algorithm's performance:

| Mode | Plain $\ell_1$ | Iterative reweighted $\ell_1$ | Final support | Total energy [J] |
|------|----------------|-------------------------------|---------------|------------------|
| OW-A | $\sim$9 anchors | 5 anchors ($\sim$45% fewer) | 5 | 6 |
| OW-S | $\sim$20 anchors (soft) | 4 anchors (binary) | 4 | N/A |

For all tested grid points, the critical smallest-eigenvalue constraint $\lambda_{\min}(J) \geq \lambda$ is satisfied. These results highlight that iterative reweighting and proper relaxation can yield extremely sparse, energy-efficient anchor placements, orders of magnitude more efficiently than combinatorial search (Chepuri et al., 2013).

8. Summary of Methodological Innovations

The Binary Anchor Optimization Algorithm unifies several principles:

  1. Performance constraint handling: CRB requirements are encoded as smallest-eigenvalue LMIs for guaranteed localization accuracy.
  2. Sparsity-exploiting setup: Anchor deployment is naturally cast as an $\ell_0$-minimization problem.
  3. Convex tractable surrogates: Relaxation to the $\ell_1$ norm or an SDP lift renders the selection problem solvable in polynomial time.
  4. Sparsity enhancement: Iterative reweighted schemes and randomized rounding bridge the gap from convex relaxation to near-binary support, reconciling tractability and optimality.

A plausible implication is that these mathematical programming principles are extensible to other sensor deployment and energy allocation problems where sparsity and geometric coverage under statistical error constraints are critical (Chepuri et al., 2013).

References

  1. Chepuri, S. P., Leus, G., & van der Veen, A.-J. (2013). Sparsity-exploiting anchor placement for localization in sensor networks.
