Do Language Models Know the Way to Rome?
Abstract: The global geometry of LLM representation spaces matters for a range of applications, but probing studies of LLMs tend to evaluate only local relations, for which ground truths are easily obtained. In this paper we exploit the fact that in geography, ground truths are available beyond local relations. In a series of experiments, we evaluate the extent to which LLM representations of city and country names are isomorphic to real-world geography, e.g., if you tell an LLM where Paris and Berlin are, does it know the way to Rome? We find that LLMs generally encode limited geographic information, but with larger models performing best, suggesting that geographic knowledge can be induced from higher-order co-occurrence statistics.
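One way to test whether representations are isomorphic to real-world geography is to correlate pairwise distances in embedding space with great-circle distances on Earth. The sketch below illustrates this idea with randomly generated stand-in vectors; the city coordinates are real, but the embeddings, dimensionality, and use of Euclidean distance with Pearson correlation are illustrative assumptions, not the paper's exact protocol.

```python
import numpy as np

def haversine(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dphi = p2 - p1
    dlmb = np.radians(lon2 - lon1)
    a = np.sin(dphi / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dlmb / 2) ** 2
    return 2 * r * np.arcsin(np.sqrt(a))

# Real coordinates; in an actual probe these names would be fed to an LLM.
cities = {
    "Paris": (48.86, 2.35),
    "Berlin": (52.52, 13.40),
    "Rome": (41.90, 12.50),
    "Madrid": (40.42, -3.70),
}

# Toy stand-ins for LLM embeddings of the city names (hypothetical, random).
rng = np.random.default_rng(0)
emb = {city: rng.normal(size=16) for city in cities}

# Build the two pairwise-distance vectors: geographic vs. representational.
names = list(cities)
geo, rep = [], []
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        a, b = names[i], names[j]
        geo.append(haversine(*cities[a], *cities[b]))
        rep.append(np.linalg.norm(emb[a] - emb[b]))
geo, rep = np.array(geo), np.array(rep)

# Pearson correlation between the two distance vectors: values near 1 would
# indicate embedding geometry mirrors real geography; random vectors will not.
r = np.corrcoef(geo, rep)[0, 1]
print(round(float(r), 3))
```

With real LLM embeddings in place of the random vectors, a high correlation would indicate that the model's representation space preserves geographic structure beyond local neighbour relations.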