
When Geoscience Meets Generative AI and Large Language Models: Foundations, Trends, and Future Challenges (2402.03349v1)

Published 25 Jan 2024 in physics.geo-ph, cs.AI, cs.LG, and physics.ao-ph

Abstract: Generative Artificial Intelligence (GAI) is an emerging field that promises the creation of synthetic data and outputs in different modalities. GAI has recently shown impressive results across a large spectrum of applications spanning biology, medicine, education, legislation, computer science, and finance. As the geosciences strive for enhanced safety, efficiency, and sustainability, generative AI emerges as a key differentiator and promises a paradigm shift in the field. This paper explores the potential applications of generative AI and LLMs in geoscience. Recent developments in machine learning and deep learning have made generative models useful for tackling diverse prediction, simulation, and multi-criteria decision-making challenges related to geoscience and Earth system dynamics. This survey discusses several GAI models that have been used in geoscience, including generative adversarial networks (GANs), physics-informed neural networks (PINNs), and generative pre-trained transformer (GPT)-based architectures. These tools have helped the geoscience community in several applications, including (but not limited to) data generation/augmentation, super-resolution, panchromatic sharpening, haze removal, restoration, and land surface change. Challenges remain, however, such as ensuring physical interpretability, preventing nefarious use, and establishing trustworthiness. Beyond that, GAI models show promise for the geoscience community, especially in supporting climate change research, urban science, atmospheric science, marine science, and planetary science through their extraordinary capacity for data-driven modeling and uncertainty quantification.
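The abstract closes with uncertainty quantification, for which distribution-free methods such as split conformal prediction are a standard building block. The sketch below illustrates the generic split-conformal recipe in plain Python on synthetic calibration residuals; the function name and the toy data are illustrative assumptions, not artifacts from the paper.

```python
import math
import random

def split_conformal_interval(residuals, alpha=0.1):
    """Return the half-width q of a split conformal prediction interval.

    Given absolute residuals |y - y_hat| of a fitted model on a held-out
    calibration set, the interval [y_hat - q, y_hat + q] covers a fresh
    point with probability at least 1 - alpha (under exchangeability).
    """
    n = len(residuals)
    # Conformal quantile: the ceil((n + 1) * (1 - alpha))-th smallest residual.
    k = math.ceil((n + 1) * (1 - alpha))
    if k > n:
        return float("inf")  # too few calibration points for this alpha
    return sorted(residuals)[k - 1]

# Toy usage: pretend these are calibration residuals of some geophysical model.
random.seed(0)
residuals = [abs(random.gauss(0.0, 1.0)) for _ in range(200)]
q = split_conformal_interval(residuals, alpha=0.1)  # roughly the 90th percentile
```

A point forecast y_hat would then be reported as the interval [y_hat - q, y_hat + q]; the same recipe applies whether the underlying predictor is a GAN-based simulator or a PINN surrogate, which is part of why conformal methods pair naturally with the generative models surveyed here.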

Authors (3)
  1. Abdenour Hadid (28 papers)
  2. Tanujit Chakraborty (31 papers)
  3. Daniel Busby (8 papers)
Citations (8)