
Learning Embedding Representations for Knowledge Inference on Imperfect and Incomplete Repositories (1503.08155v1)

Published 27 Mar 2015 in cs.AI and cs.CL

Abstract: This paper considers, for the first time, the problem of knowledge inference on large-scale imperfect repositories with incomplete coverage by means of embedding entities and relations. We propose IIKE (Imperfect and Incomplete Knowledge Embedding), a probabilistic model which measures the probability of each belief, i.e. $\langle h,r,t\rangle$, in large-scale knowledge bases such as NELL and Freebase. Our objective is to learn a better low-dimensional vector representation for each entity ($h$ and $t$) and relation ($r$) by minimizing the loss of fitting the corresponding confidence given by machine learning (NELL) or crowdsourcing (Freebase), so that we can use $||{\bf h} + {\bf r} - {\bf t}||$ to assess the plausibility of a belief when conducting inference. We use subsets of these inexact knowledge bases to train our model and test the performance of link prediction and triplet classification on ground-truth beliefs. The results of extensive experiments show that IIKE achieves significant improvement over the baseline and state-of-the-art approaches.
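The scoring-and-fitting idea in the abstract can be illustrated with a short, self-contained sketch. The Python snippet below assumes a TransE-style translation distance $||{\bf h} + {\bf r} - {\bf t}||$, a logistic mapping from that distance to a belief probability, and a squared-error fit to each triple's confidence; the mapping, the bias term, and all names (e.g. `belief_probability`, `fit_loss`) are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch (assumptions): translation score ||h + r - t|| mapped to a
# belief probability via a logistic function, fit against the per-triple
# confidence provided by the source KB (NELL's learned confidence or
# Freebase's crowdsourced confidence). Illustrative, not IIKE's exact model.
import numpy as np

rng = np.random.default_rng(0)
dim = 50                                  # embedding dimensionality (illustrative)
n_entities, n_relations = 1000, 20

E = rng.normal(scale=0.1, size=(n_entities, dim))   # entity embeddings (h, t)
R = rng.normal(scale=0.1, size=(n_relations, dim))  # relation embeddings (r)

def score(h, r, t):
    """Translation distance ||h + r - t|| used to rank candidate beliefs."""
    return np.linalg.norm(E[h] + R[r] - E[t])

def belief_probability(h, r, t, bias=7.0):
    """Map the distance into (0, 1): smaller distance -> higher plausibility.
    The logistic form and the bias value are assumptions for illustration."""
    return 1.0 / (1.0 + np.exp(score(h, r, t) - bias))

def fit_loss(triples_with_conf):
    """Squared error between the predicted plausibility and the confidence
    attached to each belief <h, r, t> in the imperfect repository."""
    return sum((belief_probability(h, r, t) - c) ** 2
               for h, r, t, c in triples_with_conf)

# Example: one imperfect belief <h, r, t> with confidence 0.9
print(fit_loss([(3, 5, 42, 0.9)]))
```

At inference time, the same distance $||{\bf h} + {\bf r} - {\bf t}||$ (or the probability derived from it) can be used to rank candidate heads or tails for link prediction, or thresholded for triplet classification.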

Authors (3)
  1. Miao Fan (28 papers)
  2. Qiang Zhou (124 papers)
  3. Thomas Fang Zheng (36 papers)
Citations (16)
