Harnessing the Power of Adversarial Prompting and Large Language Models for Robust Hypothesis Generation in Astronomy (2306.11648v1)

Published 20 Jun 2023 in astro-ph.IM, astro-ph.GA, cs.AI, and cs.CL

Abstract: This study investigates the application of LLMs, specifically GPT-4, within Astronomy. We employ in-context prompting, supplying the model with up to 1000 papers from the NASA Astrophysics Data System, to explore the extent to which performance can be improved by immersing the model in domain-specific literature. Our findings point towards a substantial boost in hypothesis generation when using in-context prompting, a benefit that is further accentuated by adversarial prompting. We illustrate how adversarial prompting empowers GPT-4 to extract essential details from a vast knowledge base to produce meaningful hypotheses, signaling an innovative step towards employing LLMs for scientific research in Astronomy.
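
The paper does not include an implementation, but the workflow the abstract describes (retrieve domain literature from the NASA Astrophysics Data System, pack it into the prompt as in-context material, then iterate with an adversarial critique pass) can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the authors' code: the ADS query, prompt wording, model name, number of retrieved abstracts, and the critique/revise steps are placeholders chosen for the example.

```python
"""Sketch: in-context plus adversarial prompting for hypothesis generation.

Assumptions (not from the paper): the NASA ADS search API and the OpenAI
chat API are used as shown; query terms, prompts, and the model name are
illustrative placeholders.
"""
import os

import requests
from openai import OpenAI

ADS_URL = "https://api.adsabs.harvard.edu/v1/search/query"
client = OpenAI()  # reads OPENAI_API_KEY from the environment


def fetch_abstracts(query: str, rows: int = 50) -> list[str]:
    """Pull abstracts from NASA ADS (the paper supplies up to 1000 papers)."""
    resp = requests.get(
        ADS_URL,
        params={"q": query, "fl": "title,abstract", "rows": rows},
        headers={"Authorization": f"Bearer {os.environ['ADS_API_TOKEN']}"},
    )
    resp.raise_for_status()
    docs = resp.json()["response"]["docs"]
    return [d["abstract"] for d in docs if d.get("abstract")]


def chat(system: str, user: str) -> str:
    """Single chat-completion call with a system and a user message."""
    out = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
    )
    return out.choices[0].message.content


def generate_hypothesis(topic: str) -> str:
    context = "\n\n".join(fetch_abstracts(topic))
    # 1) In-context prompting: ground the model in the retrieved literature.
    hypothesis = chat(
        "You are an astronomy research assistant. Use only the literature "
        "below.\n\n" + context,
        f"Propose a testable hypothesis about {topic}.",
    )
    # 2) Adversarial prompting: ask the model to attack its own proposal.
    critique = chat(
        "You are a skeptical referee. Identify weaknesses, missing evidence, "
        "and confounders in the hypothesis, citing the supplied "
        "literature.\n\n" + context,
        hypothesis,
    )
    # 3) Revision: refine the hypothesis in light of the critique.
    return chat(
        "Revise the hypothesis so it addresses the referee's "
        "objections.\n\n" + context,
        f"Hypothesis:\n{hypothesis}\n\nCritique:\n{critique}",
    )


if __name__ == "__main__":
    print(generate_hypothesis("quenching of star formation in spiral galaxies"))
```

The critique and revision steps can be repeated, feeding each revised hypothesis back to the referee prompt until no substantive objections remain; that iterative pressure is the intent behind the adversarial prompting the paper reports.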
