
Disjoint Contrastive Regression Learning for Multi-Sourced Annotations (2112.15411v2)

Published 31 Dec 2021 in cs.LG and cs.CV

Abstract: Large-scale datasets are important for the development of deep learning models, but they usually require a heavy annotation workload that is extremely time-consuming and expensive. To accelerate annotation, multiple annotators may be employed to label different subsets of the data. However, inconsistency and bias among annotators are harmful to model training, especially for qualitative and subjective tasks. To address this challenge, we propose a novel contrastive regression framework for the disjoint annotations problem, where each sample is labeled by only one annotator and multiple annotators work on disjoint subsets of the data. To account for both intra-annotator consistency and inter-annotator inconsistency, two strategies are employed. First, a contrastive-based loss is applied to learn the relative ranking among samples from the same annotator, under the assumption that the ranking of samples from the same annotator is unanimous. Second, we apply a gradient reversal layer to learn robust representations that are invariant to different annotators. Experiments on the facial expression prediction task and the image quality assessment task verify the effectiveness of the proposed framework.
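The two strategies in the abstract can be sketched in plain Python. This is a minimal illustration, not the authors' implementation: the function names, the hinge-style margin formulation, and the weighting parameter `lambd` are assumptions; in practice both pieces would operate on a neural network's predictions and gradients.

```python
def pairwise_ranking_loss(preds, labels, annotators, margin=0.1):
    """Hinge-style ranking loss over pairs from the SAME annotator.

    Rankings are only assumed consistent within one annotator, so
    cross-annotator pairs are skipped entirely.
    """
    loss, n_pairs = 0.0, 0
    for i in range(len(preds)):
        for j in range(len(preds)):
            if i == j or annotators[i] != annotators[j]:
                continue
            if labels[i] > labels[j]:
                # Penalize predictions that violate the label ordering.
                loss += max(0.0, margin - (preds[i] - preds[j]))
                n_pairs += 1
    return loss / max(n_pairs, 1)


def grad_reverse(grads, lambd=1.0):
    """Backward pass of a gradient reversal layer (the forward pass is
    the identity): the annotator classifier's gradient is sign-flipped
    before reaching the feature extractor, pushing the features toward
    annotator invariance."""
    return [-lambd * g for g in grads]
```

For example, predictions that respect a same-annotator ordering by more than the margin incur zero ranking loss, while an inverted pair is penalized; pairs labeled by different annotators contribute nothing, which is what makes the loss robust to inter-annotator bias.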

