Simple Attention-Based Representation Learning for Ranking Short Social Media Posts (1811.01013v2)

Published 2 Nov 2018 in cs.CL

Abstract: This paper explores the problem of ranking short social media posts with respect to user queries using neural networks. Instead of starting with a complex architecture, we proceed from the bottom up and examine the effectiveness of a simple, word-level Siamese architecture augmented with attention-based mechanisms for capturing semantic "soft" matches between query and post tokens. Extensive experiments on datasets from the TREC Microblog Tracks show that our simple models not only achieve better effectiveness than existing approaches that are far more complex or exploit a more diverse set of relevance signals, but are also much faster. Implementations of our samCNN (Simple Attention-based Matching CNN) models are shared with the community to support future work.
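The abstract describes attention-based "soft" matching between query and post tokens in a word-level Siamese architecture. The following is a minimal sketch of that idea, not the paper's samCNN implementation: it computes a cosine-similarity matrix between query and post word embeddings, applies a softmax attention over query tokens, and aggregates the attention-weighted similarities into a relevance score. All function names and the aggregation choice here are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def soft_match_features(query_emb, post_emb):
    """Attention-based soft matches between query and post tokens (a sketch,
    not the paper's exact samCNN formulation).
    query_emb: (m, d) query word embeddings; post_emb: (n, d) post embeddings.
    Returns one soft-match feature per post token."""
    # cosine similarity matrix of shape (m, n)
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    p = post_emb / np.linalg.norm(post_emb, axis=1, keepdims=True)
    sim = q @ p.T
    # attention weights over query tokens for each post token
    attn = softmax(sim, axis=0)
    # attention-weighted similarity per post token
    return (attn * sim).sum(axis=0)

def rank_score(query_emb, post_emb):
    # simple aggregate (illustrative): mean soft-match over post tokens;
    # the paper instead feeds such signals into a CNN matching model
    return float(soft_match_features(query_emb, post_emb).mean())
```

Under this sketch, a post that shares vocabulary (or semantically close embeddings) with the query receives a higher score than an unrelated one, which is the intuition behind soft lexical matching.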

Authors (3)
  1. Peng Shi (80 papers)
  2. Jinfeng Rao (17 papers)
  3. Jimmy Lin (208 papers)
Citations (7)
