Cloud gap-filling with deep learning for improved grassland monitoring

Published 14 Mar 2024 in cs.CV and eess.IV | arXiv:2403.09554v1

Abstract: Uninterrupted optical image time series are crucial for the timely monitoring of agricultural land changes. However, the continuity of such time series is often disrupted by clouds. In response to this challenge, we propose a deep learning method that integrates cloud-free optical (Sentinel-2) observations and weather-independent (Sentinel-1) Synthetic Aperture Radar (SAR) data, using a combined Convolutional Neural Network (CNN)-Recurrent Neural Network (RNN) architecture to generate continuous Normalized Difference Vegetation Index (NDVI) time series. We emphasize the significance of observation continuity by assessing the impact of the generated time series on the detection of grassland mowing events. We focus on Lithuania, a country characterized by extensive cloud coverage, and compare our approach with alternative interpolation techniques (linear, Akima, and quadratic). Our method surpasses these techniques, achieving an average Mean Absolute Error (MAE) of 0.024 and a coefficient of determination (R²) of 0.92. It not only improves the accuracy of event detection tasks by providing a continuous time series, but also effectively filters out sudden shifts and noise originating from cloudy observations that cloud masks often fail to detect.
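
To make the fusion idea concrete, here is a minimal PyTorch sketch of a combined CNN-RNN gap-filler in the spirit the abstract describes: a convolutional encoder extracts per-date features from stacked Sentinel-1/Sentinel-2 patches, and a recurrent layer uses temporal context to regress an NDVI value for every date, cloudy or not. All layer sizes, channel counts, and names (e.g., `CloudGapFiller`) are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a CNN-RNN cloud gap-filler (assumed architecture,
# not the paper's code). Inputs: per-date patches stacking SAR backscatter
# (e.g., Sentinel-1 VV/VH) with a cloud-masked Sentinel-2 NDVI channel.
import torch
import torch.nn as nn


class CloudGapFiller(nn.Module):
    def __init__(self, in_channels: int = 3, hidden: int = 64):
        super().__init__()
        # CNN encoder applied independently to each timestep's patch.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # one feature vector per timestep
        )
        # Bidirectional GRU lets cloudy dates borrow context from clear
        # observations before and after them, plus the SAR signal.
        self.rnn = nn.GRU(hidden, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)  # NDVI regression per timestep

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, channels, height, width)
        b, t, c, h, w = x.shape
        feats = self.encoder(x.reshape(b * t, c, h, w)).reshape(b, t, -1)
        out, _ = self.rnn(feats)
        return self.head(out).squeeze(-1)  # (batch, time) NDVI series


# Toy usage: 2 samples, 20 acquisition dates, 3 channels, 9x9 pixel patches.
model = CloudGapFiller()
series = model(torch.randn(2, 20, 3, 9, 9))
print(series.shape)  # torch.Size([2, 20])
```

Trained with a regression loss against clear-sky NDVI targets, such a network outputs a value for every date in the sequence, which is what makes the downstream mowing-event detection possible on an uninterrupted series.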
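The interpolation baselines and the reported metrics are also easy to reproduce in outline. The sketch below, on a synthetic NDVI curve with simulated cloudy gaps, shows linear, quadratic, and Akima interpolation via SciPy and the MAE/R² scoring used for comparison; the toy data and variable names are assumptions for illustration only.

```python
# Hedged sketch of the interpolation baselines (linear, Akima, quadratic)
# and metrics (MAE, R²) named in the abstract. Synthetic data only.
import numpy as np
from scipy.interpolate import Akima1DInterpolator, interp1d
from sklearn.metrics import mean_absolute_error, r2_score

days = np.arange(0, 100, 5, dtype=float)        # acquisition dates
ndvi = 0.5 + 0.3 * np.sin(days / 100 * np.pi)   # synthetic NDVI curve
clear = np.ones_like(days, dtype=bool)
clear[[4, 5, 6, 12, 13]] = False                # simulate cloudy gaps

baselines = {
    "linear": interp1d(days[clear], ndvi[clear], kind="linear"),
    "quadratic": interp1d(days[clear], ndvi[clear], kind="quadratic"),
    "akima": Akima1DInterpolator(days[clear], ndvi[clear]),
}
for name, fill in baselines.items():
    pred = fill(days[~clear])                   # fill the gaps only
    mae = mean_absolute_error(ndvi[~clear], pred)
    r2 = r2_score(ndvi[~clear], pred)
    print(f"{name:10s} MAE={mae:.4f} R2={r2:.3f}")
```

Unlike the learned model, these purely temporal interpolants have no access to SAR and cannot distinguish a genuine NDVI drop (e.g., a mowing event) from an undetected cloudy observation, which is the failure mode the paper targets.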

