Exploring the Capabilities and Limitations of Large Language Models in the Electric Energy Sector (2403.09125v5)

Published 14 Mar 2024 in eess.SY and cs.SY

Abstract: LLMs deployed as chatbots have drawn remarkable attention thanks to their versatile capabilities in natural language processing and a wide range of other tasks. While there has been great enthusiasm for adopting such foundation-model-based artificial intelligence tools across all possible sectors, the capabilities and limitations of LLMs in improving the operation of the electric energy sector remain to be explored, and this article identifies fruitful directions in this regard. Key future research directions include data collection systems for fine-tuning LLMs, embedding power system-specific tools in LLMs, retrieval-augmented generation (RAG)-based knowledge pools to improve the quality of LLM responses, and the use of LLMs in safety-critical applications.
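The RAG direction the abstract highlights can be sketched minimally: retrieve the passages from a power-system knowledge pool that best match a query, then prepend them to the prompt so the model grounds its answer in domain text. The toy retriever below uses bag-of-words cosine similarity over a hypothetical three-entry pool; a real system would use learned embeddings and a vector store, and the example stops at prompt assembly rather than calling any particular LLM.

```python
# Minimal RAG sketch (assumption: toy corpus and bag-of-words retrieval,
# standing in for an embedding-based retriever over a real knowledge pool).
from collections import Counter
import math

KNOWLEDGE_POOL = [
    "N-1 contingency analysis checks that the grid survives the loss of any single component.",
    "Optimal power flow minimizes generation cost subject to network and operating constraints.",
    "Frequency regulation balances generation and load on a seconds-to-minutes timescale.",
]

def bow(text):
    """Bag-of-words vector: lowercase tokens with trailing punctuation stripped."""
    return Counter(w.strip(".,?") for w in text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, pool, k=1):
    """Return the k passages most similar to the query."""
    q = bow(query)
    ranked = sorted(pool, key=lambda p: cosine(q, bow(p)), reverse=True)
    return ranked[:k]

def build_prompt(query, pool):
    """Prepend retrieved domain knowledge so the LLM can ground its answer."""
    context = "\n".join(retrieve(query, pool))
    return f"Context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("What does optimal power flow minimize?", KNOWLEDGE_POOL)
```

The point of the sketch is the separation of concerns: the knowledge pool can be refreshed with current grid documentation without retraining the model, which is what makes RAG attractive for a fast-changing, safety-critical domain.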

Authors (10)
  1. Lin Dong (17 papers)
  2. Subir Majumder (7 papers)
  3. Fatemeh Doudi (3 papers)
  4. Yuting Cai (4 papers)
  5. Chao Tian (78 papers)
  6. Dileep Kalathil (1 paper)
  7. Kevin Ding (1 paper)
  8. Anupam A. Thatte (2 papers)
  9. Le Xie (74 papers)
  10. Na Li (227 papers)
Citations (32)