S2RDF: RDF Querying with SPARQL on Spark (1512.07021v3)

Published 22 Dec 2015 in cs.DB and cs.DC

Abstract: RDF has become very popular for semantic data publishing due to its flexible and universal graph-like data model. Yet, the ever-increasing size of RDF data collections makes it more and more infeasible to store and process them on a single machine, raising the need for distributed approaches. Instead of building a standalone but closed distributed RDF store, we endorse the usage of existing infrastructures for Big Data processing, e.g. Hadoop. However, SPARQL query performance is a major challenge as these platforms are not designed for RDF processing from the ground up. Thus, existing Hadoop-based approaches often favor certain query pattern shapes while performance drops significantly for other shapes. In this paper, we describe a novel relational partitioning schema for RDF data called ExtVP that uses a semi-join based preprocessing, akin to the concept of Join Indices in relational databases, to efficiently minimize query input size regardless of its pattern shape and diameter. Our prototype system S2RDF is built on top of Spark and uses its relational interface to execute SPARQL queries over ExtVP. We demonstrate its superior performance in comparison to state-of-the-art SPARQL-on-Hadoop approaches using the recent WatDiv test suite. S2RDF achieves sub-second runtimes for the majority of queries on an RDF graph with a billion triples.

Authors (4)
  1. Alexander Schätzle
  2. Martin Przyjaciel-Zablocki
  3. Simon Skilevic
  4. Georg Lausen
Citations (200)

Summary

Essay on "S2RDF: RDF Querying with SPARQL on Spark"

The paper "S2RDF: RDF Querying with SPARQL on Spark" addresses a significant challenge in querying large-scale RDF datasets with distributed computing systems. RDF, with its graph-like data model, has become a standard for representing semantic data, yet the growing size of RDF collections makes single-machine storage and processing increasingly infeasible. Rather than building a standalone distributed RDF store, the authors advocate reusing existing Big Data infrastructures such as Hadoop and Spark for cost-effective and efficient processing.

The core contribution of the paper is a novel data partitioning scheme, Extended Vertical Partitioning (ExtVP), designed to optimize RDF querying in a distributed setup. ExtVP improves upon the traditional Vertical Partitioning (VP) schema by adding a semi-join based preprocessing step, inspired by the concept of Join Indices in relational databases. The primary aim is to minimize the input size of SPARQL queries by precomputing possible join correlations between VP tables, thereby reducing unnecessary data processing and I/O, both key cost factors in distributed environments.
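To make the idea concrete, here is a minimal PySpark sketch of an ExtVP-style precomputation. The file layout, predicate names, and table names are assumptions made purely for illustration; S2RDF's actual implementation and naming scheme differ.

```python
# Minimal sketch of ExtVP-style precomputation with PySpark, assuming the RDF
# graph is already stored as a triples table with columns (s, p, o).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("extvp-sketch").getOrCreate()

triples = spark.read.parquet("triples.parquet")  # assumed columns: s, p, o

# Classic vertical partitioning: one two-column table per predicate.
vp_follows = triples.filter(triples.p == "follows").select("s", "o")
vp_likes   = triples.filter(triples.p == "likes").select("s", "o")

# ExtVP subject-subject (SS) reduction of VP_follows w.r.t. VP_likes:
# keep only those follows-triples whose subject also occurs as a subject
# of a likes-triple. A left semi join is exactly this semi-join reduction.
extvp_ss_follows_likes = vp_follows.join(
    vp_likes.select("s"), on="s", how="left_semi"
)

extvp_ss_follows_likes.write.parquet("extvp/ss/follows__likes.parquet")
```

A query whose patterns join on a shared subject can then read this much smaller table instead of the full VP table.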

The prototype system, S2RDF, is implemented on top of Spark, exploiting Spark's in-memory cluster computing capabilities and its SQL interface for executing SPARQL queries. The authors show that S2RDF significantly outperforms other SPARQL-on-Hadoop solutions, achieving sub-second runtimes for most queries on a dataset of one billion RDF triples. This performance rests on ExtVP's effective reduction of query input size, irrespective of pattern shape and diameter, a considerable improvement over previous approaches that struggled with diverse RDF graph structures.
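Continuing the sketch above, and assuming the symmetric reduction of VP_likes against VP_follows was materialized the same way, a two-pattern basic graph pattern could be answered entirely over the pre-reduced ExtVP tables via Spark SQL. S2RDF's actual SPARQL-to-SQL compiler is far more general; this only illustrates the principle.

```python
# SPARQL: SELECT ?x ?y ?z WHERE { ?x <follows> ?y . ?x <likes> ?z . }
# Register the illustrative ExtVP tables from the previous snippet.
spark.read.parquet("extvp/ss/follows__likes.parquet") \
     .createOrReplaceTempView("extvp_ss_follows_likes")
spark.read.parquet("extvp/ss/likes__follows.parquet") \
     .createOrReplaceTempView("extvp_ss_likes_follows")

# Each triple pattern reads its pre-reduced ExtVP table instead of the full
# VP table, so the join only touches rows that can actually find a partner.
result = spark.sql("""
    SELECT f.s AS x, f.o AS y, l.o AS z
    FROM   extvp_ss_follows_likes f
    JOIN   extvp_ss_likes_follows l ON f.s = l.s
""")
result.show()
```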

The evaluation employs the WatDiv benchmark, a comprehensive test suite with diverse query workloads. S2RDF consistently outperforms its competitors across the various query shapes, including linear, star, snowflake, and complex queries. The authors also introduce an additional Incremental Linear Testing use case for WatDiv to evaluate performance as query diameter grows, another area where S2RDF excels, demonstrating scalability that outmatches both centralized RDF stores like Virtuoso and MapReduce-based systems.

In terms of implementation, the decision to use Spark is particularly notable. By storing data on Hadoop's HDFS and using Spark SQL for query execution, S2RDF ensures interoperability with other Big Data applications while fully leveraging Spark's in-memory processing. The relational approach supports a broad spectrum of SPARQL-to-SQL mappings, underpinned by collected table statistics and optimized join orderings, which further enhance performance.
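The statistics-driven choice among precomputed tables can be sketched as follows. The stats dictionary, the threshold value, and the naming helper are assumptions for this example rather than S2RDF's actual data structures, although the paper does bound ExtVP materialization by a selectivity threshold in the same spirit.

```python
# Illustrative sketch (not S2RDF's code) of statistics-driven table selection:
# for a triple pattern, pick the most selective materialized ExtVP table,
# falling back to the plain VP table when no reduction is small enough.
stats = {
    # (predicate, correlated predicate, kind): |ExtVP| / |VP| selectivity
    ("follows", "likes", "SS"): 0.18,
    ("follows", "knows", "OS"): 0.65,
}

SELECTIVITY_THRESHOLD = 0.25  # assumed cutoff trading storage for benefit

def pick_table(predicate, candidates):
    """Return the table name with the lowest recorded selectivity, or VP."""
    best = min(
        (c for c in candidates if stats.get(c, 1.0) <= SELECTIVITY_THRESHOLD),
        key=lambda c: stats[c],
        default=None,
    )
    if best is None:
        return f"vp_{predicate}"
    p, q, kind = best
    return f"extvp_{kind.lower()}_{p}__{q}"

print(pick_table("follows", [("follows", "likes", "SS"),
                             ("follows", "knows", "OS")]))
# -> extvp_ss_follows__likes
```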

The practical implications of this research are significant. With the capability to efficiently query large RDF datasets within distributed environments, S2RDF presents a viable solution for organizations dealing with massive semantic data, without the need for standalone RDF stores. This not only reduces costs but also enhances data interoperability and accessibility. Theoretically, the introduction of ExtVP offers new insights into data layout designs for RDF, emphasizing the benefits of semi-join reductions and precomputed correlations for query optimization in distributed systems.

Looking forward, additional optimizations could focus on reducing the size overhead of ExtVP further, potentially exploring bit-vector representations or unification strategies to diminish the overall number of tables. Furthermore, extending support for SPARQL 1.1 features, such as subqueries and aggregations, could broaden S2RDF's applicability in practical use cases.

Overall, this paper contributes valuable insights and innovations in the field of RDF querying in distributed systems, demonstrating robust empirical results that underscore the efficacy of its approach. The research bridges the gap between semantic data flexibility and large-scale data processing needs, fostering potential advancements in AI and semantic web applications faced with growing data complexities.