
An Ensemble Blocking Scheme for Entity Resolution of Large and Sparse Datasets (1609.06265v2)

Published 20 Sep 2016 in cs.AI and cs.DB

Abstract: Entity Resolution, also called record linkage or deduplication, refers to the process of identifying and merging duplicate versions of the same entity into a unified representation. The standard practice is to use a rule-based or machine-learning-based model that compares entity pairs and assigns a score representing each pair's Match/Non-Match status. However, an exhaustive pair-wise comparison of all records leads to quadratic matcher complexity, so a Blocking step is performed before Matching to group similar entities into smaller blocks that the matcher can then examine exhaustively. Several blocking schemes have been developed to block the input dataset into manageable groups efficiently and effectively. At CareerBuilder (CB), we perform deduplication on massive datasets of people profiles collected from disparate sources with varying informational content. We observed that, due to the multi-faceted nature of our data sources, no single blocking technique covered all possible scenarios. In this paper, we describe our ensemble approach to blocking, which combines two different blocking techniques to leverage their respective strengths.
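The core idea of ensemble blocking can be illustrated with a small sketch: generate candidate pairs under two independent blocking keys and take their union, so each scheme covers the other's misses. The record fields and key functions below are illustrative assumptions, not the paper's actual schemes:

```python
from collections import defaultdict
from itertools import combinations

# Toy people-profile records; fields are hypothetical stand-ins.
records = [
    {"id": 1, "name": "John Smith", "email": "jsmith@example.com"},
    {"id": 2, "name": "Jon Smith",  "email": "jsmith@example.com"},
    {"id": 3, "name": "Jane Doe",   "email": "jdoe@example.com"},
    {"id": 4, "name": "J. Smith",   "email": "john.smith@other.com"},
]

def block_by_email(rec):
    """Exact blocking key: the email local part."""
    return rec["email"].split("@")[0]

def block_by_name(rec):
    """Fuzzy blocking key: lowercased last-name token plus first initial."""
    tokens = rec["name"].replace(".", "").split()
    return (tokens[-1].lower(), tokens[0][0].lower())

def candidate_pairs(records, key_fn):
    """Group records by blocking key; pairs are only formed within a block."""
    blocks = defaultdict(list)
    for rec in records:
        blocks[key_fn(rec)].append(rec["id"])
    pairs = set()
    for ids in blocks.values():
        pairs.update(combinations(sorted(ids), 2))
    return pairs

# Ensemble step: union the candidate sets from both schemes.
ensemble = candidate_pairs(records, block_by_email) | candidate_pairs(records, block_by_name)
print(sorted(ensemble))  # -> [(1, 2), (1, 4), (2, 4)]
```

Here the email key alone finds only the pair (1, 2), while the name key also surfaces record 4, whose email differs; the union recovers all three candidate pairs while still avoiding a full quadratic comparison against record 3.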

Authors (6)
  1. Janani Balaji (3 papers)
  2. Faizan Javed (11 papers)
  3. Mayank Kejriwal (48 papers)
  4. Chris Min (1 paper)
  5. Sam Sander (1 paper)
  6. Ozgur Ozturk (4 papers)
Citations (5)
