
Deep Learning Relevance: Creating Relevant Information (as Opposed to Retrieving it) (1606.07660v2)

Published 24 Jun 2016 in cs.IR

Abstract: What if Information Retrieval (IR) systems did not just retrieve relevant information that is stored in their indices, but could also "understand" it and synthesise it into a single document? We present a preliminary study that makes a first step towards answering this question. Given a query, we train a Recurrent Neural Network (RNN) on existing information relevant to that query. We then use the RNN to "deep learn" a single, synthetic, and, we assume, relevant document for that query. We design a crowdsourcing experiment to assess how relevant the "deep learned" document is compared to existing relevant documents. Users are shown a query and four wordclouds (of three existing relevant documents and our deep learned synthetic document). The synthetic document is ranked, on average, as the most relevant of all.
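The pipeline the abstract describes is: fit a language model on the documents known to be relevant to a query, then sample from it to synthesise one new document. The paper uses an RNN; as a minimal illustrative sketch, the snippet below substitutes a character-bigram language model for the RNN (the `relevant_docs` texts are hypothetical placeholders, not data from the paper):

```python
import random
from collections import defaultdict

def train_char_lm(docs):
    """Count character-bigram transitions over the relevant documents."""
    counts = defaultdict(lambda: defaultdict(int))
    for doc in docs:
        for a, b in zip(doc, doc[1:]):
            counts[a][b] += 1
    return counts

def synthesise(counts, seed_char, length, rng=None):
    """Sample a synthetic document, one character at a time."""
    rng = rng or random.Random(0)
    out = [seed_char]
    for _ in range(length - 1):
        nxt = counts.get(out[-1])
        if not nxt:  # dead end: no observed continuation
            break
        chars, weights = zip(*nxt.items())
        out.append(rng.choices(chars, weights=weights)[0])
    return "".join(out)

# Hypothetical stand-ins for documents judged relevant to a query
relevant_docs = [
    "information retrieval ranks documents",
    "relevant information is retrieved",
]
model = train_char_lm(relevant_docs)
synthetic_doc = synthesise(model, "r", 40)
print(synthetic_doc)
```

The output echoes the character statistics of the relevant documents; the paper's RNN plays the same role but captures much longer-range structure, and the resulting text is then shown to crowd workers as a wordcloud alongside the originals.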

Authors (4)
  1. Christina Lioma (66 papers)
  2. Birger Larsen (17 papers)
  3. Casper Petersen (6 papers)
  4. Jakob Grue Simonsen (43 papers)
Citations (7)