EM Based p-norm-like Constraint RLS Algorithm for Sparse System Identification (2312.05829v1)
Abstract: In this paper, the recursive least squares (RLS) algorithm is considered in the sparse system identification setting. The cost function of the RLS algorithm is regularized by a $p$-norm-like ($0 \leq p \leq 1$) constraint on the estimated system parameters. In order to minimize the regularized cost function, we transform it into a penalized maximum likelihood (ML) problem, which is solved by the expectation-maximization (EM) algorithm. With the introduction of a thresholding operator, the update equation of the tap-weight vector is derived. We also exploit the underlying sparsity to implement the proposed algorithm with low computational complexity. Numerical simulations demonstrate the superiority of the new algorithm over conventional sparse RLS algorithms, as well as the regular RLS algorithm.
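The abstract describes the general structure of such an algorithm: a standard RLS tap-weight update followed by a thresholding operator that promotes sparsity. The paper's operator is derived from the EM step for the $p$-norm-like penalty, which is not reproduced here; the sketch below substitutes plain soft thresholding (the $p = 1$ case) as a stand-in, and the function names `sparse_rls` and `soft_threshold` are illustrative, not from the paper.

```python
import numpy as np

def soft_threshold(w, tau):
    # Stand-in for the paper's EM-derived thresholding operator (p = 1 case):
    # shrink each coefficient toward zero by tau, zeroing small entries.
    return np.sign(w) * np.maximum(np.abs(w) - tau, 0.0)

def sparse_rls(x, d, order=8, lam=0.99, delta=1e2, tau=1e-4):
    """Exponentially weighted RLS with a per-iteration shrinkage step.

    x     : input signal
    d     : desired (observed) signal
    order : number of taps to estimate
    lam   : forgetting factor
    delta : initial inverse-correlation scaling
    tau   : shrinkage threshold (illustrative value)
    """
    w = np.zeros(order)
    P = delta * np.eye(order)
    for n in range(order - 1, len(x)):
        u = x[n - order + 1:n + 1][::-1]   # regressor: [x[n], x[n-1], ...]
        k = P @ u / (lam + u @ P @ u)      # RLS gain vector
        e = d[n] - w @ u                   # a priori estimation error
        w = w + k * e                      # standard RLS update
        w = soft_threshold(w, tau)         # sparsity-promoting shrinkage
        P = (P - np.outer(k, u @ P)) / lam # inverse-correlation update
    return w
```

On a noise-free sparse system, the shrinkage step drives the inactive taps toward exact zeros while the RLS recursion tracks the active ones; the steady-state bias it introduces scales roughly with `tau / (1 - lam)`, so `tau` should be kept small relative to the active tap magnitudes.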
- W. Bajwa, J. Haupt, G. Raz, and R. Nowak, “Compressed channel sensing,” in Proc. CISS, 2008.
- R. Tibshirani, “Regression shrinkage and selection via the LASSO,” J. Roy. Statist. Soc. Ser. B, vol. 58, no. 1, pp. 267-288, 1996.
- D. Donoho, “Compressed sensing,” IEEE Trans. Inform. Theory, vol. 52, pp. 1289-1306, Apr. 2006.
- B. D. Rao and B. Song, “Adaptive filtering algorithms for promoting sparsity,” in Proc. ICASSP, vol. 6, pp. VI-361-VI-364, Apr. 2003.
- Y. Chen, Y. Gu, and A. O. Hero, “Sparse LMS for system identification,” in Proc. ICASSP, pp. 3125-3128, Taiwan, Apr. 2009.
- O. Taheri and S. A. Vorobyov, “Sparse channel estimation with $\ell_p$-norm and reweighted $\ell_1$-norm penalized least mean squares,” in Proc. ICASSP, 2011.
- Y. Gu, J. Jin, and S. Mei, “$l_0$ norm constraint LMS algorithm for sparse system identification,” IEEE Signal Process. Lett., vol. 16, no. 9, pp. 774-777, Sep. 2009.
- F. Wu and F. Tong, “Gradient optimization p-norm-like constraint LMS algorithm for sparse system estimation,” Signal Processing, vol. 93, pp. 967-971, 2013.
- M. L. Aliyu, M. A. Alkassim, and M. S. Salman, “A $p$-norm variable step-size LMS algorithm for sparse system identification,” Signal, Image and Video Processing, pp. 1-7, 2014.
- L. Weruaga and S. Jimaa, “Exact NLMS algorithm with $\ell_p$-norm constraint,” IEEE Signal Process. Lett., vol. 22, no. 3, pp. 366-370, Mar. 2015.
- D. Angelosante, J. A. Bazerque, and G. B. Giannakis, “Online adaptive estimation of sparse signals: where RLS meets the $\ell_1$-norm,” IEEE Trans. Signal Process., vol. 58, no. 7, pp. 3436-3447, Jul. 2010.
- E. M. Eksioglu, “Sparsity regularised recursive least squares adaptive filtering,” IET Signal Process., vol. 5, no. 5, pp. 480-487, 2011.
- E. M. Eksioglu and A. K. Tanc, “RLS algorithm with convex regularization,” IEEE Signal Process. Lett., vol. 18, no. 8, pp. 470-473, Aug. 2011.
- B. Babadi, N. Kalouptsidis, and V. Tarokh, “SPARLS: The sparse RLS algorithm,” IEEE Trans. Signal Process., vol. 58, no. 8, pp. 4013-4025, Aug. 2010.
- M. Figueiredo and R. Nowak, “An EM algorithm for wavelet-based image restoration,” IEEE Trans. Image Process., vol. 12, no. 8, pp. 906-916, Aug. 2003.
- Z. Liu, Y. Liu, and C. Li, “Distributed sparse recursive least-squares over networks,” IEEE Trans. Signal Process., vol. 62, no. 6, pp. 1386-1395, Mar. 2014.
- Y. V. Zakharov and V. H. Nascimento, “DCD-RLS adaptive filters with penalties for sparse identification,” IEEE Trans. Signal Process., vol. 61, no. 12, pp. 3198-3213, Jun. 2013.
- X. Hong, J. Gao, and S. Chen, “Zero-attracting recursive least squares algorithms,” IEEE Trans. Vehicular Technology, vol. 66, no. 1, pp. 213-221, Jan. 2017.
- Shuyang Jiang
- Kung Yao