Data-Copilot: Bridging Billions of Data and Humans with Autonomous Workflow (2306.07209v8)
Abstract: Industries such as finance, meteorology, and energy generate vast amounts of data daily. Efficiently managing, processing, and displaying this data requires specialized expertise and is often tedious and repetitive. Leveraging LLMs to build an automated workflow is a highly promising solution. However, LLMs are not adept at complex numerical computation and table manipulation, and are further constrained by a limited context budget. Motivated by these challenges, we propose Data-Copilot, a data analysis agent that autonomously performs querying, processing, and visualization of massive data tailored to diverse human requests. The advancements are twofold. First, it is a code-centric agent that receives human requests and generates code as an intermediary to handle massive data, which is well suited to large-scale data processing. Second, Data-Copilot adds an advance data exploration phase in which it learns to design more universal and error-free interfaces for real-time response. Specifically, it actively explores data sources, discovers numerous common requests, and abstracts them into many universal interfaces for daily invocation. When serving real-time requests, Data-Copilot only needs to invoke these pre-designed interfaces, transforming raw data into visualized outputs (e.g., charts, tables) that best match the user's intent. Compared with generating code from scratch, invoking these pre-designed, compiler-validated interfaces significantly reduces errors during real-time requests. Additionally, interface workflows are more efficient and more interpretable than raw code. We open-sourced Data-Copilot with massive Chinese financial data, such as stocks, funds, and news, demonstrating promising application prospects.
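The two-stage design described above can be sketched as follows. This is a minimal illustration, not the paper's actual API: the interface names (`query_data`, `compute_change`, `to_table`), the toy data, and the workflow registry are all hypothetical, standing in for the pre-designed, validated interfaces that the agent composes into a workflow at request time instead of generating code from scratch.

```python
# Hypothetical sketch of an "interface workflow": a few pre-designed,
# pre-validated interfaces are chained to answer a user request, rather
# than generating fresh code each time. All names and data are illustrative.

def query_data(symbol, start, end):
    """Pre-designed interface: fetch raw rows (stubbed with toy data here)."""
    return [{"date": f"{start}-{i:02d}", "close": 100.0 + i} for i in range(1, 4)]

def compute_change(rows):
    """Pre-designed interface: add day-over-day percentage change."""
    out, prev = [], None
    for r in rows:
        pct = None if prev is None else round((r["close"] - prev) / prev * 100, 2)
        out.append({**r, "pct_change": pct})
        prev = r["close"]
    return out

def to_table(rows):
    """Pre-designed interface: render rows as a plain-text table."""
    header = " | ".join(rows[0].keys())
    body = "\n".join(" | ".join(str(v) for v in r.values()) for r in rows)
    return f"{header}\n{body}"

REGISTRY = {"query_data": query_data, "compute_change": compute_change, "to_table": to_table}

# A workflow the agent might emit for
# "show the daily price change of a stock over a period":
WORKFLOW = [
    ("query_data", {"symbol": "600519.SH", "start": "2023-01", "end": "2023-01"}),
    ("compute_change", {}),
    ("to_table", {}),
]

def run(workflow):
    """Execute a chain of interface calls, piping each result into the next."""
    result = None
    for name, kwargs in workflow:
        fn = REGISTRY[name]
        result = fn(**kwargs) if result is None else fn(result, **kwargs)
    return result

print(run(WORKFLOW))
```

Because each interface is validated once (offline, at design time), the real-time step reduces to planning a short call chain like `WORKFLOW`, which is also easier to inspect and debug than arbitrary generated code.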