Neural Bradley-Terry Rating: Quantifying Properties from Comparisons (2307.13709v5)

Published 24 Jul 2023 in cs.LG and cs.AI

Abstract: Many properties in the real world don't have metrics and can't be numerically observed, making them difficult to learn. To deal with this challenging problem, prior works have primarily focused on estimating those properties by using graded human scores as the target label during training. Meanwhile, rating algorithms based on the Bradley-Terry model have been extensively studied to evaluate the competitiveness of players based on their match history. In this paper, we introduce the Neural Bradley-Terry Rating (NBTR), a novel machine learning framework designed to quantify and evaluate properties of unknown items. Our method seamlessly integrates the Bradley-Terry model into the neural network structure. Moreover, we generalize this architecture further to asymmetric environments with unfairness, a condition more commonly encountered in real-world settings. Through experimental analysis, we demonstrate that NBTR successfully learns to quantify and estimate desired properties.
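To make the abstract's core idea concrete, the sketch below shows one common way a Bradley-Terry model can be wired into a neural network: a shared network maps each item's features to a scalar rating, and the probability that item i beats item j is the sigmoid of the rating difference, with a learnable bias standing in for an asymmetric advantage. This is a minimal illustration under assumed choices (PyTorch, the NeuralBTRating class name, layer sizes, and the single advantage parameter are all assumptions), not the authors' implementation.

```python
import torch
import torch.nn as nn

class NeuralBTRating(nn.Module):
    """Sketch of a neural Bradley-Terry rater (assumed design, not the paper's code)."""

    def __init__(self, in_dim, hidden=64):
        super().__init__()
        # Shared rating network: item features -> scalar rating
        self.rater = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )
        # Learnable bias as a stand-in for an asymmetric advantage
        # (e.g. a first-mover or home-side effect); assumption only.
        self.advantage = nn.Parameter(torch.zeros(1))

    def forward(self, x_i, x_j):
        r_i = self.rater(x_i).squeeze(-1)
        r_j = self.rater(x_j).squeeze(-1)
        # Bradley-Terry win probability of item i over item j
        return torch.sigmoid(r_i - r_j + self.advantage)

# Training on observed pairwise outcomes y in {0, 1}
model = NeuralBTRating(in_dim=16)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

x_i, x_j = torch.randn(32, 16), torch.randn(32, 16)  # toy feature batches
y = torch.randint(0, 2, (32,)).float()               # toy comparison labels

opt.zero_grad()
p = model(x_i, x_j)
loss = loss_fn(p, y)
loss.backward()
opt.step()
```

After training, the scalar output of the shared rating network can be read off directly as the learned quantity for any single item, which is the practical payoff of fitting ratings from comparisons alone.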

