Learning in repeated auctions (2011.09365v2)
Abstract: Online auctions are one of the most fundamental facets of the modern economy and power an industry generating hundreds of billions of dollars a year in revenue. Auction theory has historically focused on designing the best way to sell a single item to potential buyers, with the concurrent objectives of maximizing the revenue generated or the welfare created. Theoretical results in this area have typically relied on some Bayesian prior knowledge that agents were assumed to have about each other. This assumption no longer holds in new markets such as online advertising: similar items are sold repeatedly, and agents are unaware of each other or may try to manipulate one another. On the other hand, statistical learning theory now provides tools to supplement those missing pieces of information given enough data, as agents can learn from their environment to improve their strategies. This survey covers recent advances in learning in repeated auctions, starting from the traditional economic study of optimal one-shot auctions with a Bayesian prior. We then focus on the question of learning optimal mechanisms from a dataset of bidders' past values, studying both the sample complexity and the computational efficiency of different methods. We also investigate online variants where gathering data has a cost to be accounted for, either by the seller or the buyers ("earning while learning"). Later in the survey, we further assume that bidders also adapt to the mechanism as they interact repeatedly with the same seller, and we show how strategic agents can manipulate repeated auctions to their own advantage. All the questions discussed in this survey are grounded in real-world applications, and many of the ideas and algorithms we describe are used every day to power the Internet economy.