Data augmentation, commonly used to enhance the generalizability of machine learning models, has predominantly been applied to image data. The field of multimodal person re-identification, however, which benefits from both visual and textual data, faces the challenge of extending these techniques to text. The concern lies not only in the computational resources required but also in the availability and quality of multimodal datasets. Notably, most existing text augmentation methods require external resources, such as thesauri for synonym replacement or pre-trained LLMs for linguistic transformation, which adds complexity.
In response to these challenges, a new paper introduces a method named "TextAug," which adapts two computer vision data augmentation techniques, "Cutout" (random erasure of image regions) and "CutMix" (blended combination of different images), to textual data in the context of person re-identification. The combination of the two, coined "CutMixOut," creates diverse text examples by randomly removing words or phrases (Cutout) and intermixing parts of multiple sentences (CutMix), augmenting the input without any prior training.
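The paper's exact procedure (cut positions, drop probabilities, how sentence pairs are chosen) is not spelled out here, but a minimal sketch of the idea in Python might look like the following. The function names, the default probability, and the example captions are illustrative assumptions, not the authors' implementation.

```python
import random

def text_cutout(text: str, p: float = 0.25) -> str:
    """Word-level Cutout: drop each word independently with probability p."""
    words = text.split()
    kept = [w for w in words if random.random() > p]
    # Keep at least one word so the description never becomes empty.
    return " ".join(kept) if kept else random.choice(words)

def text_cutmix(text_a: str, text_b: str) -> str:
    """Text CutMix: splice a random prefix of one sentence onto a
    random suffix of another."""
    a, b = text_a.split(), text_b.split()
    cut_a = random.randint(1, len(a))   # length of the prefix kept from text_a
    cut_b = random.randint(0, len(b))   # start of the suffix taken from text_b
    return " ".join(a[:cut_a] + b[cut_b:])

def cutmixout(text_a: str, text_b: str, p_cutout: float = 0.25) -> str:
    """Combine the two: CutMix across two descriptions, then Cutout."""
    return text_cutout(text_cutmix(text_a, text_b), p=p_cutout)

# Hypothetical example: two descriptions of the same person.
desc_1 = "a woman in a red jacket carrying a black backpack"
desc_2 = "a woman wearing red with a dark bag on her shoulders"
print(cutmixout(desc_1, desc_2))
```

Because both operations amount to simple string manipulation, they can run on the fly inside a data loader, which is consistent with the paper's claim that the augmentation needs no external resources or prior training.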
The TextAug approach was found to improve performance across benchmarks for multimodal person re-identification: models trained with it recorded substantial gains over both image-only baselines and models trained on non-augmented text. The strategy delivered consistent results on two different datasets and across architectures, including vision transformers (ViTs) and the more traditional ResNet50. Furthermore, TextAug outperformed other text augmentation methods, such as synonym replacement, confirming its potential for producing robust inputs for re-identification systems.
In essence, TextAug has emerged as a simple yet efficient approach to improving the generalization of models in NLP and the robustness of multimodal person re-identification systems. It demonstrates that image augmentation concepts can be carried over to text data and highlights the value of combining visual and textual modalities to produce powerful data representations for machine learning tasks.