
Studying Catastrophic Forgetting in Neural Ranking Models (2101.06984v1)

Published 18 Jan 2021 in cs.IR and cs.AI

Abstract: Several deep neural ranking models have been proposed in the recent IR literature. While their transferability to a single target domain held by a dataset has been widely addressed using traditional domain adaptation strategies, the question of their cross-domain transferability is still under-studied. We study here to what extent neural ranking models catastrophically forget old knowledge acquired from previously observed domains after acquiring new knowledge, leading to a performance decrease on those domains. Our experiments show that the effectiveness of neural IR ranking models is achieved at the cost of catastrophic forgetting and that a lifelong learning strategy using a cross-domain regularizer successfully mitigates the problem. Using an explanatory approach built on a regression model, we also show the effect of domain characteristics on the rise of catastrophic forgetting. We believe that the obtained results can be useful for both theoretical and practical future work in neural IR.
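The abstract refers to a lifelong learning strategy based on a cross-domain regularizer that penalizes drifting away from parameters learned on previously observed domains. A common way to implement such a penalty is an EWC-style quadratic term (Kirkpatrick et al., 2017); the sketch below is only an illustration of that general idea, not the paper's exact formulation, and the function name, argument structure, and default weight are assumptions.

```python
# Hedged sketch of an EWC-style cross-domain regularizer (illustrative only;
# the paper's actual regularizer may differ in form and weighting).
import torch


def cross_domain_penalty(model, old_params, importance, lam=0.4):
    """Quadratic penalty discouraging drift from parameters fitted on an
    earlier domain.

    old_params: dict mapping parameter names to tensors snapshotted after
                training on the previous domain (hypothetical input)
    importance: dict of per-parameter importance weights, e.g. an
                approximate Fisher information (hypothetical input)
    lam:        regularization strength (illustrative default)
    """
    penalty = torch.tensor(0.0)
    for name, param in model.named_parameters():
        if name in old_params:
            penalty = penalty + (
                importance[name] * (param - old_params[name]) ** 2
            ).sum()
    return lam * penalty


# Usage (hypothetical training step on the new domain):
#   loss = ranking_loss + cross_domain_penalty(model, old_params, importance)
#   loss.backward()
```

The design intuition is that parameters deemed important for the old domain are held close to their previous values, while less important ones remain free to adapt to the new domain.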

Authors (4)
  1. Laure Soulier (39 papers)
  2. Karen Pinel-Sauvagnat (5 papers)
  3. Lynda Tamine (10 papers)
  4. Jesus Lovon-Melgarejo (1 paper)
Citations (12)
