
NLQxform-UI: A Natural Language Interface for Querying DBLP Interactively (2403.08475v1)

Published 13 Mar 2024 in cs.IR

Abstract: In recent years, the DBLP computer science bibliography has been prominently used for searching scholarly information, such as publications, scholars, and venues. However, its current search service lacks the capability to handle complex queries, which limits the usability of DBLP. In this paper, we present NLQxform-UI, a web-based natural language interface that enables users to query DBLP directly with complex natural language questions. NLQxform-UI automatically translates given questions into SPARQL queries and executes the queries over the DBLP knowledge graph to retrieve answers. The querying process is presented to users in an interactive manner, which improves the transparency of the system and helps users examine the returned answers. Moreover, intermediate results in the querying process can be previewed and manually altered to improve the accuracy of the system. NLQxform-UI has been fully open-sourced: https://github.com/ruijie-wang-uzh/NLQxform-UI.
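To make the question-to-SPARQL translation concrete, here is a minimal, hypothetical sketch of the kind of query such a system might produce for a question like "Which papers did Abraham Bernstein publish in 2023?". The property names follow the public DBLP RDF schema (`authoredBy`, `title`, `yearOfPublication`, `primaryCreatorName`), but the template function is an illustration of the general idea, not NLQxform-UI's actual pipeline.

```python
# Hypothetical sketch: building a SPARQL query over the DBLP knowledge graph
# for the question "Which papers did <author> publish in <year>?".
# The template below is an assumption for illustration; NLQxform-UI generates
# queries with a language model rather than a fixed template.

def question_to_sparql(author: str, year: int) -> str:
    """Return a SPARQL query string for papers by `author` published in `year`."""
    return f"""\
PREFIX dblp: <https://dblp.org/rdf/schema#>
SELECT ?paper ?title WHERE {{
  ?paper dblp:authoredBy ?person .
  ?person dblp:primaryCreatorName "{author}" .
  ?paper dblp:title ?title .
  ?paper dblp:yearOfPublication "{year}" .
}}"""

query = question_to_sparql("Abraham Bernstein", 2023)
print(query)
```

Such a query could then be executed against a DBLP SPARQL endpoint, and in NLQxform-UI the generated query is shown to the user, who can preview and edit it before execution.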

Authors (5)
  1. Ruijie Wang (43 papers)
  2. Zhiruo Zhang (2 papers)
  3. Luca Rossetto (21 papers)
  4. Florian Ruosch (2 papers)
  5. Abraham Bernstein (25 papers)
