ERATTA: Extreme RAG for Table To Answers with Large Language Models (2405.03963v4)

Published 7 May 2024 in cs.AI and cs.LG

Abstract: LLMs with retrieval-augmented generation (RAG) have been the optimal choice for scalable generative AI solutions in the recent past. Although RAG implemented with AI agents (agentic-RAG) has recently been popularized, it suffers from unstable cost and unreliable performance for Enterprise-level data practices. Most existing use cases that incorporate RAG with LLMs have been either generic or extremely domain-specific, thereby questioning the scalability and generalizability of RAG-LLM approaches. In this work, we propose a unique LLM-based system where multiple LLMs can be invoked to enable data authentication, user-query routing, data retrieval and custom prompting for question-answering capabilities from Enterprise data tables. The source tables here are highly fluctuating and large in size, and the proposed framework enables structured responses in under 10 seconds per query. Additionally, we propose a five-metric scoring module that detects and reports hallucinations in the LLM responses. Our proposed system and scoring metrics achieve >90% confidence scores across hundreds of user queries in the sustainability, financial health and social media domains. Extensions to the proposed extreme RAG architectures can enable heterogeneous source querying using LLMs.

Analysis of "ERATTA: Extreme RAG for Table To Answers with LLMs"

The paper "ERATTA: Extreme RAG for Table To Answers with LLMs" is a pioneering work focusing on the utilization of LLMs in combination with Retrieval Augmented Generation (@@@@2@@@@) to address question-answering needs from extensive and dynamic data tables. This approach particularly aims at Enterprise-level data processing where the response efficiency and accuracy are paramount. The proposed system stands out with its ability to efficiently authenticate, fetch, and prompt responses through a well-structured multi-LLM strategy, ensuring queries are resolved in under ten seconds.

System Architecture and Methodology

The innovative system architecture integrates multiple LLMs to perform sequential tasks: user authentication, query routing, data retrieval, and response generation. The paper details a three-prompt system, preceded by an authentication step, that operates as follows (a minimal code sketch of the end-to-end flow appears after the list):

  1. Authentication RAG: This component determines user access to specific tables based on authentication rules, ensuring that only relevant data sources are pre-loaded and analyzed per user query.
  2. Prompt 1 - Query Routing: Each incoming query is analyzed for its intent and appropriately routed. This step ensures that responses are relevant to the specific data points required, aligning user queries with designated data tables.
  3. Prompt 2 - Data Retrieval: The system leverages a text-to-SQL conversion process, executing automatically generated SQL queries to extract pertinent data subsets. This results in more efficient processing of queries across large and heterogeneous data tables, without manual intervention for varying data domains.
  4. Prompt 3 - Response Generation: This final stage generates natural language responses by employing LLMs to interpret SQL output alongside standard instructions, ultimately ensuring answers are both contextually and semantically precise.
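
A minimal sketch of how this multi-prompt flow could be wired together is given below. The helper names (call_llm, run_sql, allowed_tables) and the prompt wording are illustrative assumptions rather than the paper's actual implementation; the sketch only mirrors the sequential authentication, routing, text-to-SQL, and response-generation steps described above.

```python
# Hedged sketch of an ERATTA-style three-prompt pipeline (assumed helpers).
# `call_llm` and `run_sql` are hypothetical stand-ins for an LLM endpoint and
# a SQL execution layer; the prompt templates are assumptions for illustration.
from typing import Callable


def answer_query(
    user_id: str,
    question: str,
    allowed_tables: dict[str, list[str]],   # Authentication RAG: user -> permitted tables
    call_llm: Callable[[str], str],          # wraps whatever LLM backend is used
    run_sql: Callable[[str], list[dict]],    # executes SQL against the table store
) -> str:
    # Authentication: restrict retrieval to tables this user may access.
    tables = allowed_tables.get(user_id, [])
    if not tables:
        return "Access denied: no tables are authorized for this user."

    # Prompt 1 - query routing: align the question with a designated table.
    routed_table = call_llm(
        f"Given the tables {tables}, which single table answers: '{question}'? "
        "Reply with the table name only."
    ).strip()

    # Prompt 2 - data retrieval: text-to-SQL over the routed table.
    sql = call_llm(
        f"Write a SQL query over table '{routed_table}' that answers: '{question}'. "
        "Reply with SQL only."
    )
    rows = run_sql(sql)

    # Prompt 3 - response generation: turn the SQL output into natural language.
    return call_llm(
        f"Question: {question}\nSQL result: {rows}\n"
        "Answer in one or two sentences using only the SQL result."
    )
```

Because the LLM calls and SQL execution are injected as callables, the same skeleton can point at different LLM backends or table stores without changing the control flow, which loosely mirrors the scalability argument made above.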

Hallucination Detection Mechanism

A noteworthy aspect of the system is its robust hallucination detection module, which evaluates the reliability and consistency of LLM-generated responses. Utilizing five key metrics (number check, entity check, query check, regurgitation check, and increase/decrease modifier check), this mechanism rigorously screens the outputs for accuracy. The implementation of these checks results in over 90% confidence scores across diverse domains such as sustainability, finance, and social media.
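
The paper does not spell out the exact formulas behind these checks, but a hedged sketch of how the five checks might be aggregated into a single confidence score is shown below; each check function and the equal weighting are simplified assumptions for illustration only, not the paper's scoring module.

```python
import re


def hallucination_confidence(question: str, answer: str, sql_result: str) -> float:
    """Toy aggregation of the five checks into a 0-1 confidence score.
    Each check is a deliberately simple stand-in, not the paper's metric."""
    grounding = f"{question} {sql_result}"

    checks = {
        # Number check: every number in the answer should appear in the SQL result.
        "number": all(n in sql_result for n in re.findall(r"\d+\.?\d*", answer)),
        # Entity check: capitalized entities in the answer should be grounded.
        "entity": all(e in grounding for e in re.findall(r"\b[A-Z][a-zA-Z]+\b", answer)),
        # Query check: the answer should address terms from the user's question.
        "query": any(w.lower() in answer.lower() for w in question.split()),
        # Regurgitation check: the answer should not merely echo the question.
        "regurgitation": answer.strip().lower() != question.strip().lower(),
        # Increase/decrease modifier check: directional claims must be grounded.
        "modifier": all(
            m in grounding.lower()
            for m in ("increase", "decrease")
            if m in answer.lower()
        ),
    }
    # Equal weighting is an assumption; the paper reports the resulting
    # confidence exceeding 90% across its evaluation domains.
    return sum(checks.values()) / len(checks)
```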

Implications and Future Directions

The practical implications of this research are significant, particularly in fields requiring rapid access to comprehensive datasets, such as financial analytics, healthcare informatics, and real-time social media monitoring. The integration of LLMs in processing complex, multi-tabular queries underscores the potential to generalize this methodology across different data domains without specific training for each new dataset.

The proposed system is ripe for further augmentation, such as expanding its capabilities to incorporate real-time predictions and prescriptive analytics through extensions of Prompt 2. This future direction indicates a potential shift towards dynamic scenario planning, where LLMs could be leveraged for predictive insights seamlessly interwoven with natural language responses.

Conclusion

In conclusion, the ERATTA system showcases an effective melding of LLMs with RAG methodology to provide rapid, accurate question-answer capabilities across extensive datasets. The work underscores the need for scalable, versatile generative AI systems adept at navigating and extracting insights from large-scale data environments. As AI models evolve, the extension of such frameworks to more complex scenarios will be an exciting frontier, continuing to bridge the gap between large-scale data and actionable intelligence.

Authors (6)
  1. Sohini Roychowdhury (24 papers)
  2. Marko Krema (3 papers)
  3. Anvar Mahammad (1 paper)
  4. Brian Moore (6 papers)
  5. Arijit Mukherjee (12 papers)
  6. Punit Prakashchandra (1 paper)