Normalized Online Learning (1408.2065v1)
Published 9 Aug 2014 in cs.LG and stat.ML
Abstract: We introduce online learning algorithms which are independent of feature scales, proving regret bounds that depend on the ratio of feature scales present in the data rather than on the absolute scale. This has several useful effects: there is no need to pre-normalize data, the test-time and test-space complexity are reduced, and the algorithms are more robust.
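To make the idea of scale-independent updates concrete, below is a minimal sketch of a normalized online gradient step for a squared-loss linear model. It tracks a running per-feature scale estimate and a global normalizer so that the effective step size adapts to the scales observed in the stream. The function name, the learning rate eta, and the exact rescaling rule are illustrative assumptions for this sketch, not the paper's exact algorithm.

```python
import numpy as np

def normalized_online_sgd(stream, n_features, eta=0.1):
    """Illustrative sketch of scale-invariant online learning:
    per-feature scales are estimated online so the update does not
    depend on the absolute scale of each feature.
    Hypothetical simplification, not the paper's exact method."""
    w = np.zeros(n_features)   # weights (in normalized coordinates)
    s = np.zeros(n_features)   # running max absolute value per feature
    N = 0.0                    # accumulated normalizer for the step size
    t = 0
    for x, y in stream:
        t += 1
        # When a feature exceeds its previous scale, rescale its weight so
        # earlier predictions stay consistent, then update the scale estimate.
        grew = np.abs(x) > s
        w[grew] *= np.where(s[grew] > 0, (s[grew] / np.abs(x[grew])) ** 2, 0.0)
        s[grew] = np.abs(x[grew])
        safe_s = np.where(s > 0, s, 1.0)
        # Squared-loss prediction and gradient.
        pred = w @ x
        g = (pred - y) * x
        # Global normalizer: sum of squared, scale-normalized feature values.
        N += np.sum((x / safe_s) ** 2)
        rate = eta * t / max(N, 1e-12)
        # Per-feature normalized gradient step.
        w -= rate * g / safe_s ** 2
    return w, s
```

Under this sketch, multiplying any input feature by a constant across the whole stream leaves the sequence of predictions essentially unchanged, which is the scale-invariance property the abstract describes: no pre-normalization pass over the data is required.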