Forest Parameter Prediction by Multiobjective Deep Learning of Regression Models Trained with Pseudo-Target Imputation (2306.11103v1)

Published 19 Jun 2023 in cs.CV and eess.IV

Abstract: In prediction of forest parameters with data from remote sensing (RS), regression models have traditionally been trained on a small sample of ground reference data. This paper proposes to impute this sample of true prediction targets with data from an existing RS-based prediction map that we consider as pseudo-targets. This substantially increases the amount of target training data and enables the use of deep learning (DL) for semi-supervised regression modelling. We use prediction maps constructed from airborne laser scanning (ALS) data to provide accurate pseudo-targets and freely available data from Sentinel-1's C-band synthetic aperture radar (SAR) as regressors. A modified U-Net architecture is adapted with a selection of different training objectives. We demonstrate that when a judicious combination of loss functions is used, the semi-supervised imputation strategy produces results that surpass traditional ALS-based regression models, even though Sentinel-1 data are considered inferior for forest monitoring. These results are consistent for experiments on above-ground biomass prediction in Tanzania and stem volume prediction in Norway, representing a diversity of parameters and forest types that emphasises the robustness of the approach.
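
The core idea, densifying the sparse ground-reference sample with pixels from an existing ALS-based prediction map, can be expressed as a masked, weighted regression loss over the two target sources. Below is a minimal PyTorch sketch of that idea; the L1 choice, the single `pseudo_weight` scalar, and all variable names are illustrative assumptions, not the paper's actual implementation (which combines several training objectives on a modified U-Net).

```python
# Minimal sketch of pseudo-target imputation for semi-supervised regression.
# ASSUMPTIONS: L1 per-pixel loss, a single scalar weight on the pseudo-target
# term, and a U-Net-style regressor producing a map the same size as its input.
import torch
import torch.nn as nn


class PseudoTargetLoss(nn.Module):
    """Masked loss mixing sparse ground-reference targets with dense
    ALS-derived pseudo-targets."""

    def __init__(self, pseudo_weight: float = 0.5):
        super().__init__()
        self.pseudo_weight = pseudo_weight
        self.l1 = nn.L1Loss(reduction="none")

    def forward(self, pred, gt, pseudo, gt_mask):
        # gt_mask is 1.0 on pixels with a true ground reference, 0.0 elsewhere;
        # pseudo is the existing ALS-based prediction map used as pseudo-target.
        per_pixel_gt = self.l1(pred, gt) * gt_mask
        per_pixel_ps = self.l1(pred, pseudo) * (1.0 - gt_mask)
        gt_loss = per_pixel_gt.sum() / gt_mask.sum().clamp(min=1.0)
        ps_loss = per_pixel_ps.sum() / (1.0 - gt_mask).sum().clamp(min=1.0)
        return gt_loss + self.pseudo_weight * ps_loss


# Example: a 1-channel biomass/volume map predicted from a Sentinel-1 patch,
# supervised by a few ground plots plus a dense ALS pseudo-target map.
if __name__ == "__main__":
    b, h, w = 4, 64, 64
    pred = torch.rand(b, 1, h, w)          # stand-in for unet(sar_patch)
    gt = torch.rand(b, 1, h, w)            # ground-reference map (sparse)
    pseudo = torch.rand(b, 1, h, w)        # ALS-based prediction map
    gt_mask = (torch.rand(b, 1, h, w) < 0.05).float()  # sparse ground plots
    loss = PseudoTargetLoss()(pred, gt, pseudo, gt_mask)
    print(loss.item())
```

Normalising each term by its own pixel count keeps the sparse ground-reference term from being drowned out by the far more numerous pseudo-target pixels, which is the point of the imputation strategy.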
