FewUser: Few-Shot Social User Geolocation via Contrastive Learning (2404.08662v1)

Published 28 Mar 2024 in cs.IR, cs.LG, and cs.SI

Abstract: To address the scarcity of geotagged data for social user geolocation, we propose FewUser, a novel framework for Few-shot social User geolocation. We incorporate a contrastive learning strategy between users and locations to improve geolocation performance with no or limited training data. FewUser features a user representation module that harnesses a pre-trained language model (PLM) and a user encoder to process and fuse diverse social media inputs effectively. To bridge the gap between the PLM's knowledge and geographical data, we introduce a geographical prompting module with hard, soft, and semi-soft prompts to enhance the encoding of location information. Contrastive learning is implemented through a contrastive loss and a matching loss, complemented by a hard negative mining strategy to refine the learning process. We construct two datasets, TwiU and FliU, containing richer metadata than existing benchmarks, to evaluate FewUser, and extensive experiments demonstrate that FewUser significantly outperforms state-of-the-art methods in both zero-shot and various few-shot settings, achieving absolute improvements of 26.95% and 41.62% on TwiU and FliU, respectively, with only one training sample per class. We further conduct a comprehensive analysis to investigate the impact of user representation on geolocation performance and the effectiveness of FewUser's components, offering valuable insights for future research in this area.
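
The abstract describes contrastive learning between user and location embeddings, trained with a contrastive loss and a matching loss plus hard negative mining. As a rough illustration only (the paper's exact formulation is not reproduced here), the sketch below shows an in-batch user-location contrastive loss with a simple hard-negative selection step in PyTorch; all function names, the temperature value, and the in-batch negative scheme are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch only: an in-batch user-location contrastive loss with
# simple hard negative mining. Names and hyperparameters are assumptions,
# not taken from the FewUser paper.
import torch
import torch.nn.functional as F


def user_location_contrastive_loss(user_emb: torch.Tensor,
                                   loc_emb: torch.Tensor,
                                   temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE-style loss between a batch of user embeddings and
    the embeddings of their ground-truth locations (row i matches row i)."""
    user_emb = F.normalize(user_emb, dim=-1)
    loc_emb = F.normalize(loc_emb, dim=-1)
    logits = user_emb @ loc_emb.t() / temperature          # (B, B) similarities
    targets = torch.arange(user_emb.size(0), device=user_emb.device)
    # Every other location in the batch serves as an in-batch negative.
    loss_u2l = F.cross_entropy(logits, targets)
    loss_l2u = F.cross_entropy(logits.t(), targets)
    return (loss_u2l + loss_l2u) / 2


def hard_negative_indices(logits: torch.Tensor) -> torch.Tensor:
    """For each user, pick the most similar *incorrect* location in the batch,
    a simple stand-in for a hard negative mining strategy."""
    masked = logits.clone()
    masked.fill_diagonal_(float("-inf"))                    # exclude positives
    return masked.argmax(dim=-1)
```

In this hypothetical setup, `user_emb` would come from the user representation module (PLM plus user encoder) and `loc_emb` from the geographically prompted location encoder; the matching loss mentioned in the abstract is not shown here.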
