Technical report: Impact of evaluation metrics and sampling on the comparison of machine learning methods for biodiversity indicators prediction (2108.07480v1)

Published 17 Aug 2021 in stat.AP

Abstract: Machine learning (ML) approaches are increasingly widely used in biodiversity monitoring. In particular, an important application is the problem of predicting biodiversity indicators such as species abundance, species occurrence or species richness, based on predictor sets containing, e.g., climatic and anthropogenic factors. Considering the impressive number of different ML methods available in the literature and the pace at which they are being published, it is crucial to develop uniform evaluation procedures, to allow the production of sound and fair empirical studies. However, defining fair evaluation procedures is challenging: because of well-documented, intrinsic properties of biodiversity indicators such as their zero-inflation and over-dispersion, it is not trivial to design good sampling schemes for cross-validation nor good evaluation metrics. Indeed, the classical Mean Squared Error (MSE) fails to capture subtle differences in the performance of different methods, particularly in terms of prediction of very small, or very large values (e.g., zero counts or large counts). In this report, we illustrate this phenomenon by comparing ten statistical and machine learning models on the task of predicting waterbird abundance in the North-African area, based on geographical, meteorological and spatio-temporal factors. Our results highlight that different off-the-shelf evaluation metrics and cross-validation sampling approaches yield drastically different rankings of the methods, and fail to capture interpretable conclusions.
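The abstract's point about MSE can be illustrated with a small numerical sketch (the numbers below are hypothetical and not taken from the paper): on zero-inflated counts, squared error is dominated by the few large values, so a model that never predicts a true zero correctly can still rank ahead of one that gets every zero right.

```python
# Toy illustration of MSE's blind spot on zero-inflated counts.
# All data and both "models" are made up for the example.

def mse(y, p):
    """Mean squared error between observed and predicted counts."""
    return sum((yi - pi) ** 2 for yi, pi in zip(y, p)) / len(y)

def zero_hit_rate(y, p, tol=0.5):
    """Fraction of true zeros predicted as (near-)zero."""
    preds_at_zeros = [pi for yi, pi in zip(y, p) if yi == 0]
    return sum(abs(pi) < tol for pi in preds_at_zeros) / len(preds_at_zeros)

y      = [0] * 8 + [20, 30]   # zero-inflated, over-dispersed counts
pred_a = [3] * 8 + [20, 30]   # never predicts zero, nails the large counts
pred_b = [0] * 8 + [10, 30]   # perfect on zeros, misses one large count

print(mse(y, pred_a), zero_hit_rate(y, pred_a))  # 7.2  0.0
print(mse(y, pred_b), zero_hit_rate(y, pred_b))  # 10.0 1.0
```

Despite being useless on zeros, the first model achieves the lower MSE, which is exactly the kind of ranking artifact the report studies.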

Authors (2)
  1. Geneviève Robin (12 papers)
  2. Cathia Le Hasif (2 papers)
