
An Integration-Oriented Ontology to Govern Evolution in Big Data Ecosystems (1801.05161v1)

Published 16 Jan 2018 in cs.DB

Abstract: Big Data architectures allow heterogeneous data from multiple sources to be flexibly stored and processed in their original format. The structure of these data, commonly supplied by means of REST APIs, evolves continuously, so data analysts must adapt their analytical processes after each API release. This becomes even more challenging when performing an integrated or historical analysis. To cope with such complexity, in this paper we present the Big Data Integration ontology, the core construct to govern the data integration process under schema evolution by systematically annotating it with information regarding the schema of the sources. We present a query rewriting algorithm that, using the annotated ontology, converts queries posed over the ontology into queries over the sources. To cope with syntactic evolution in the sources, we present an algorithm that semi-automatically adapts the ontology upon new releases. This guarantees that ontology-mediated queries correctly retrieve data from the most recent schema version, as well as the correctness of historical queries. A functional and performance evaluation on real-world APIs is performed to validate our approach.
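The core idea the abstract describes, rewriting queries posed over a stable ontology into queries over version-specific source schemas, can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's actual algorithm: the attribute names, version mapping, and `rewrite` function are all invented for exposition.

```python
# Hypothetical sketch: a global ontology attribute is annotated with the
# concrete field name each source-schema version (API release) exposes.
# Querying via the ontology shields analysts from syntactic evolution.

# Illustrative annotations: ontology attribute -> {schema version: source field}
ANNOTATIONS = {
    "user.name": {1: "username", 2: "user_name"},
    "user.city": {1: "city", 2: "address_city"},
}

def rewrite(query_attrs, version):
    """Rewrite a query posed over the ontology into the field names of
    the requested source-schema version (current or historical)."""
    return [ANNOTATIONS[attr][version] for attr in query_attrs]

# The same ontology-level query resolves against both an old and a new release:
print(rewrite(["user.name", "user.city"], 1))  # ['username', 'city']
print(rewrite(["user.name", "user.city"], 2))  # ['user_name', 'address_city']
```

In this toy setting, adapting to a new API release amounts to adding one more version entry per annotated attribute, which mirrors the paper's semi-automatic ontology adaptation step at a very high level.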

Authors (5)
  1. Sergi Nadal (4 papers)
  2. Oscar Romero (7 papers)
  3. Alberto Abelló (8 papers)
  4. Panos Vassiliadis (6 papers)
  5. Stijn Vansummeren (24 papers)
Citations (66)
