Bias-Variance Games (1909.03618v2)
Abstract: Firms engaged in electronic commerce increasingly rely on predictive analytics via machine-learning algorithms to drive a wide array of managerial decisions. The tuning of many standard machine-learning algorithms can be understood as trading off bias (i.e., systematic inaccuracy) against variance (i.e., imprecision) in the algorithm's predictions. The goal of this paper is to understand how competition between firms affects their strategic choice of such algorithms. To this end, we model the interaction of two firms choosing learning algorithms as a game and analyze its equilibria. Absent competition, players care only about the magnitude of predictive error and not about its source. In contrast, our main result is that under competition, players prefer to incur error due to variance rather than due to bias, even at the cost of higher total error. We further show that competition can have counterintuitive implications -- for example, reducing the error incurred by a firm's algorithm can be harmful to that firm -- but we provide conditions under which such phenomena do not occur. Beyond our theoretical analysis, we validate our insights by applying our metrics to several publicly available datasets.
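For reference, the bias-variance framing in the abstract corresponds to the standard decomposition of an estimator's expected squared error (stated here generically; it is not claimed to be the paper's exact metric):

$$
\mathbb{E}\big[(\hat{y} - y)^2\big] \;=\; \underbrace{\big(\mathbb{E}[\hat{y}] - y\big)^2}_{\text{bias}^2} \;+\; \underbrace{\operatorname{Var}(\hat{y})}_{\text{variance}},
$$

plus an irreducible noise term when the target $y$ is itself noisy.

The paper's formal game is not reproduced here, but the flavor of the main result can be illustrated with a minimal Monte Carlo sketch under a stylized assumption that is our own, not taken from the paper: each firm predicts a true quantity, and the firm whose prediction lands closer to the truth wins the interaction. All parameter values below (the opponent's bias of 1.0, and the alternative strategies' bias of 1.1 and standard deviation of 1.2) are hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500_000
theta = 0.0  # the true quantity being predicted (its location is irrelevant)

# Fixed opponent: deterministic prediction with bias 1.0 (MSE = 1.00).
opp = theta + 1.0

# Strategy 1: pure bias 1.1, zero variance (MSE = 1.21): lower total error.
pred_bias = np.full(n, theta + 1.1)

# Strategy 2: unbiased but noisy, std 1.2 (MSE = 1.44): higher total error.
pred_var = theta + rng.normal(0.0, 1.2, size=n)

def win_rate(pred):
    # A firm "wins" an interaction when its prediction is closer to theta
    # than the opponent's prediction is.
    return np.mean(np.abs(pred - theta) < np.abs(opp - theta))

print(f"biased strategy:   MSE = {np.mean((pred_bias - theta)**2):.2f}, "
      f"win rate = {win_rate(pred_bias):.3f}")   # ~0.000
print(f"variance strategy: MSE = {np.mean((pred_var - theta)**2):.2f}, "
      f"win rate = {win_rate(pred_var):.3f}")    # ~0.595
```

In this toy setting, the slightly-more-biased deterministic strategy (MSE 1.21) never beats the opponent, while the unbiased noisy strategy wins roughly 60% of interactions despite its larger MSE of 1.44, consistent with the abstract's claim that competition makes error due to variance preferable to error due to bias, even at the cost of higher total error.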