
Plackett-Luce model for learning-to-rank task (1909.06722v1)

Published 15 Sep 2019 in cs.IR and cs.LG

Abstract: List-wise learning-to-rank methods are generally expected to outperform point-wise and pair-wise approaches. In real-world applications, however, state-of-the-art systems do not come from the list-wise camp. In this paper, we propose a new non-linear algorithm within the list-wise framework ListMLE, which uses the Plackett-Luce (PL) loss. Our experiments are conducted on the two largest publicly available real-world datasets, the Yahoo Challenge 2010 and Microsoft 30K. This is the first time, at the single-model level, that a list-wise system matches or surpasses state-of-the-art systems on real-world datasets.
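As a rough illustration of the Plackett-Luce (ListMLE) loss the abstract refers to, here is a minimal NumPy sketch. The function and variable names are our own and the paper's actual model is non-linear and trained on large datasets; this only shows the loss computation for a single query under the assumption that the target permutation sorts documents by descending relevance.

```python
import numpy as np

def listmle_loss(scores, relevance):
    """Negative log Plackett-Luce likelihood (ListMLE loss) for one query.

    scores    : model scores for the query's documents, shape (n,)
    relevance : ground-truth relevance labels, shape (n,)
    """
    # Order the scores by the ground-truth ranking (most relevant first).
    order = np.argsort(-relevance)
    s = scores[order]

    loss = 0.0
    for i in range(len(s)):
        # At each step, the placed document competes with all not-yet-placed ones.
        # log-sum-exp over the remaining documents, computed stably.
        tail = s[i:]
        m = np.max(tail)
        log_z = m + np.log(np.sum(np.exp(tail - m)))
        loss += log_z - s[i]
    return loss

# Toy example: three documents for one query.
scores = np.array([2.0, 0.5, 1.0])
relevance = np.array([3, 0, 1])
print(listmle_loss(scores, relevance))
```

In practice this loss would be written with a differentiable framework's ops so its gradient can train the scoring model.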

Citations (1)