
CPLIP: Zero-Shot Learning for Histopathology with Comprehensive Vision-Language Alignment

Published 7 Jun 2024 in cs.CV, cs.CL, cs.LG, cs.MM, and eess.IV | (2406.05205v1)

Abstract: This paper proposes Comprehensive Pathology Language Image Pre-training (CPLIP), a new unsupervised technique designed to enhance the alignment of images and text in histopathology for tasks such as classification and segmentation. This methodology enriches vision-language models by leveraging extensive data without needing ground truth annotations. CPLIP involves constructing a pathology-specific dictionary, generating textual descriptions for images using LLMs, and retrieving relevant images for each text snippet via a pre-trained model. The model is then fine-tuned using a many-to-many contrastive learning method to align complex interrelated concepts across both modalities. Evaluated across multiple histopathology tasks, CPLIP shows notable improvements in zero-shot learning scenarios, outperforming existing methods in both interpretability and robustness and setting a higher benchmark for the application of vision-language models in the field. To encourage further research and replication, the code for CPLIP is available on GitHub at https://cplip.github.io/
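The many-to-many contrastive step described in the abstract can be sketched as below. This is a hypothetical illustration only, not the paper's actual loss: the binary `match` matrix (marking which generated texts describe which retrieved images), the symmetric InfoNCE-style form, and the temperature value are all assumptions for the sake of a minimal example.

```python
import numpy as np

def many_to_many_contrastive_loss(img_emb, txt_emb, match, temperature=0.07):
    """Hypothetical many-to-many contrastive objective.

    img_emb: (N, d) L2-normalised image embeddings
    txt_emb: (M, d) L2-normalised text embeddings
    match:   (N, M) binary matrix; match[i, j] = 1 if text j describes image i
             (each image may have several positive texts, and vice versa)
    """
    # Cosine similarities scaled by temperature.
    logits = img_emb @ txt_emb.T / temperature            # shape (N, M)

    # Image-to-text direction: softmax over texts, averaged over all
    # positive (image, text) pairs instead of a single positive per image.
    log_p_i2t = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    i2t = -(match * log_p_i2t).sum() / match.sum()

    # Text-to-image direction: softmax over images, same treatment.
    log_p_t2i = logits - np.log(np.exp(logits).sum(axis=0, keepdims=True))
    t2i = -(match * log_p_t2i).sum() / match.sum()

    return 0.5 * (i2t + t2i)

# Toy usage with random normalised embeddings and a sparse match matrix.
rng = np.random.default_rng(0)
img = rng.normal(size=(4, 8))
img /= np.linalg.norm(img, axis=1, keepdims=True)
txt = rng.normal(size=(6, 8))
txt /= np.linalg.norm(txt, axis=1, keepdims=True)
match = np.zeros((4, 6))
match[0, 0] = match[0, 1] = match[1, 2] = 1   # image 0 has two positive texts
match[2, 3] = match[3, 4] = match[3, 5] = 1
loss = many_to_many_contrastive_loss(img, txt, match)
```

The key difference from standard one-to-one CLIP-style training is that the cross-entropy mass is spread over every positive pair in `match`, so one image can be pulled toward several interrelated textual concepts at once.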

Citations (4)

