Harnessing the Power of Adversarial Prompting and Large Language Models for Robust Hypothesis Generation in Astronomy (2306.11648v1)
Abstract: This study investigates the application of large language models (LLMs), specifically GPT-4, in astronomy. We employ in-context prompting, supplying the model with up to 1000 papers from the NASA Astrophysics Data System, to explore how far performance improves when the model is immersed in domain-specific literature. Our findings point to a substantial boost in hypothesis generation when using in-context prompting, a benefit that is further accentuated by adversarial prompting. We illustrate how adversarial prompting enables GPT-4 to extract essential details from a vast knowledge base and produce meaningful hypotheses, signaling an innovative step towards employing LLMs for scientific research in astronomy.
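To make the two-stage workflow described in the abstract concrete, below is a minimal sketch of how in-context prompting over retrieved literature could be combined with an adversarial critique-and-revise pass. It assumes the OpenAI Chat Completions API and a caller-supplied list of abstracts (e.g., fetched from NASA ADS); the helper names `build_context`, `ask`, and `generate_hypothesis`, the prompt wording, and the truncation limit are illustrative assumptions, not the authors' exact pipeline.

```python
# Sketch: in-context prompting with domain abstracts, followed by an
# adversarial critique/revision pass. Assumes the OpenAI Chat Completions API
# (OPENAI_API_KEY set in the environment) and that `abstracts` is a list of
# paper abstracts already retrieved by the caller (e.g., from NASA ADS).
from openai import OpenAI

client = OpenAI()


def build_context(abstracts, max_chars=12000):
    """Concatenate numbered abstracts into one literature context, truncated to fit."""
    joined = "\n\n".join(f"[{i + 1}] {a}" for i, a in enumerate(abstracts))
    return joined[:max_chars]


def ask(messages):
    """Single chat-completion call; returns the assistant's text."""
    resp = client.chat.completions.create(model="gpt-4", messages=messages)
    return resp.choices[0].message.content


def generate_hypothesis(abstracts, topic):
    context = build_context(abstracts)

    # Step 1: in-context prompting -- ground the model in the supplied literature.
    hypothesis = ask([
        {"role": "system", "content": "You are an astronomy research assistant."},
        {"role": "user", "content": (
            f"Literature excerpts:\n{context}\n\n"
            f"Propose a testable hypothesis about {topic}, "
            "citing the excerpt numbers you rely on."
        )},
    ])

    # Step 2: adversarial prompting -- have the model attack its own proposal.
    critique = ask([
        {"role": "system", "content": "You are a skeptical referee."},
        {"role": "user", "content": (
            f"Hypothesis:\n{hypothesis}\n\n"
            "List the weakest assumptions and any claims unsupported by the excerpts."
        )},
    ])

    # Step 3: revise the hypothesis in light of the critique.
    return ask([
        {"role": "system", "content": "You are an astronomy research assistant."},
        {"role": "user", "content": (
            f"Literature excerpts:\n{context}\n\n"
            f"Original hypothesis:\n{hypothesis}\n\n"
            f"Referee critique:\n{critique}\n\n"
            "Revise the hypothesis to address the critique, "
            "keeping only claims grounded in the excerpts."
        )},
    ])
```

The generator/critic/revise loop mirrors the adversarial prompting the paper describes at a high level: the same model is asked first to propose, then to attack, then to repair a hypothesis, with the retrieved abstracts kept in context throughout.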