TrashCan: A Semantically-Segmented Dataset towards Visual Detection of Marine Debris (2007.08097v1)

Published 16 Jul 2020 in cs.CV and cs.RO

Abstract: This paper presents TrashCan, a large dataset comprised of images of underwater trash collected from a variety of sources, annotated both using bounding boxes and segmentation labels, for development of robust detectors of marine debris. The dataset has two versions, TrashCan-Material and TrashCan-Instance, corresponding to different object class configurations. The eventual goal is to develop efficient and accurate trash detection methods suitable for onboard robot deployment. Along with information about the construction and sourcing of the TrashCan dataset, we present initial results of instance segmentation from Mask R-CNN and object detection from Faster R-CNN. These do not represent the best possible detection results but provide an initial baseline for future work in instance segmentation and object detection on the TrashCan dataset.

Citations (74)

Summary

  • The paper introduces a specialized dataset of 7,212 annotated images that enhances deep learning approaches for underwater marine debris detection.
  • It demonstrates that instance-level segmentation achieves higher precision, with Mask R-CNN reaching an AP of 30.0 compared to 28.2 for material segmentation.
  • The study provides a valuable resource for advancing autonomous underwater trash detection, offering a robust framework for future research and model improvements.

An Overview of "TrashCan: A Semantically-Segmented Dataset towards Visual Detection of Marine Debris"

The paper "TrashCan: A Semantically-Segmented Dataset towards Visual Detection of Marine Debris" presents a specialized dataset aimed at enhancing autonomous visual detection of marine debris with machine learning. The work is motivated by the growing need to address marine debris, which poses a severe threat to aquatic ecosystems. Autonomous solutions, such as vision-equipped autonomous underwater vehicles (AUVs), are considered pivotal for detecting and locating debris effectively, but their development has been limited by the lack of adequate datasets for training deep learning models on underwater trash.

Dataset Construction and Composition

The TrashCan dataset fills this gap by providing 7,212 annotated images sourced from the J-EDI dataset, which comprises videos captured by remotely operated vehicles (ROVs) under the purview of JAMSTEC (the Japan Agency for Marine-Earth Science and Technology). The dataset is released in two versions, TrashCan-Material and TrashCan-Instance, which differentiate marine debris by material composition and by object instance, respectively.

The annotation process was extensive, involving a team of 21 annotators over several months and amounting to approximately 1,500 work hours. Each image has been tagged with instance segmentation masks, differentiating objects into classes such as trash, biological entities, and ROVs. The manual annotation was carried out with the online tool Supervisely. Crucially, objects are labeled not only by their visible features but also by additional characteristics such as decay state, yielding a detailed classification scheme that supports robust model training.

Baseline Experiments and Results

The paper conducts baseline experiments using state-of-the-art detection models, Faster R-CNN for object detection and Mask R-CNN for instance segmentation, each with a ResNeXt-101-FPN backbone for feature extraction. The dataset was converted to the COCO annotation format, ensuring compatibility with standard object detection evaluation metrics and tooling.
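For reference, a COCO-format annotation file groups images, categories, and per-object annotations into three parallel lists. The minimal Python sketch below illustrates that structure; the file name, image dimensions, and category names here are hypothetical placeholders for illustration, not values taken from TrashCan itself:

```python
import json

# Minimal COCO-style annotation record. Field names follow the COCO
# data format; the concrete values below are illustrative only.
coco = {
    "images": [
        {"id": 1, "file_name": "frame_000123.jpg", "width": 480, "height": 270}
    ],
    "categories": [
        {"id": 1, "name": "trash_plastic"},  # hypothetical class names
        {"id": 2, "name": "rov"},
    ],
    "annotations": [
        {
            "id": 1,
            "image_id": 1,
            "category_id": 1,
            "bbox": [60, 40, 120, 90],  # [x, y, width, height]
            "area": 120 * 90,
            # Polygon mask as a flat [x1, y1, x2, y2, ...] list:
            "segmentation": [[60, 40, 180, 40, 180, 130, 60, 130]],
            "iscrowd": 0,
        }
    ],
}

# A COCO file is just this dictionary serialized as JSON.
serialized = json.dumps(coco)
```

Annotations reference images and categories by `id`, which is what lets standard loaders and evaluators consume the dataset without any format-specific code.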

The empirical results indicate that models trained on TrashCan-Instance tend to outperform those trained on TrashCan-Material. For example, Mask R-CNN achieved an Average Precision (AP) of 30.0 on the instance version, compared to 28.2 on the material version. This suggests that categorizing objects by visual similarity improves detection results, an insight that could inform future dataset designs and model training strategies.
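The AP numbers above are COCO-style metrics, which average over IoU thresholds from 0.5 to 0.95 and use 101-point interpolation. At a single IoU threshold, AP reduces to the area under the precision-recall curve of score-ranked detections. The self-contained sketch below shows that simplified single-threshold computation on toy boxes and scores (not TrashCan data):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def average_precision(dets, gts, iou_thresh=0.5):
    """Single-class, single-image AP at one IoU threshold.

    dets: list of (score, box); gts: list of ground-truth boxes.
    Simplified step integration of the PR curve (COCO proper uses
    101-point interpolation averaged over several thresholds).
    """
    matched = set()
    tps = []
    for score, box in sorted(dets, key=lambda d: -d[0]):
        # Greedily match each detection to its best unmatched ground truth.
        best, best_i = 0.0, -1
        for i, g in enumerate(gts):
            if i in matched:
                continue
            o = iou(box, g)
            if o > best:
                best, best_i = o, i
        if best >= iou_thresh:
            matched.add(best_i)
            tps.append(1)  # true positive
        else:
            tps.append(0)  # false positive
    ap, tp_cum, prev_recall = 0.0, 0, 0.0
    for k, tp in enumerate(tps, start=1):
        tp_cum += tp
        recall = tp_cum / len(gts)
        precision = tp_cum / k
        ap += precision * (recall - prev_recall)  # area under PR curve
        prev_recall = recall
    return ap

gts = [(0, 0, 10, 10), (20, 20, 30, 30)]
dets = [(0.9, (0, 0, 10, 10)),    # correct hit
        (0.8, (50, 50, 60, 60)),  # false positive
        (0.7, (20, 20, 30, 30))]  # correct hit, ranked last
print(average_precision(dets, gts))  # 5/6 ≈ 0.833
```

Note how the mid-ranked false positive lowers precision at full recall, pulling AP below 1.0 even though every ground-truth object is eventually found.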

Implications and Future Directions

The implications of this research are considerable for both practical implementations and theoretical explorations in marine robotics and machine learning. By providing a publicly accessible, richly annotated dataset, the paper sets the stage for further advancements in marine debris detection, potentially leading to more efficient AUV systems capable of autonomously identifying and removing trash.

Moreover, the insights gained from this research may drive innovations in dataset annotation strategies and the deployment of machine learning models in other underwater applications or similarly challenging environments. Future work can build upon this research by incorporating more advanced deep learning architectures or reinforcement learning approaches to enhance detection accuracy further. Expanding the dataset size and diversifying object categories may also lead to improved model generalization.

In conclusion, the TrashCan dataset represents a significant advancement in addressing the environmental challenge of marine debris through AI-driven solutions. By establishing a robust framework for underwater trash detection, this paper provides a valuable resource and a foundational baseline for ongoing and future research aimed at mitigating one of the pressing environmental concerns of our time.
