
Learning immune receptor representations with protein language models (2402.03823v1)

Published 6 Feb 2024 in q-bio.QM

Abstract: Protein language models (PLMs) learn contextual representations from protein sequences and are profoundly impacting various scientific disciplines spanning protein design, drug discovery, and structural prediction. One particular research area where PLMs have gained considerable attention is adaptive immune receptors, whose tremendous sequence diversity dictates the functional recognition of the adaptive immune system. The self-supervised nature underlying the training of PLMs has recently been leveraged to implement a variety of immune receptor-specific PLMs. These models have demonstrated promise in tasks such as predicting antigen specificity and structure, computationally engineering therapeutic antibodies, and diagnostics. However, challenges including insufficient training data and considerations related to model architecture, training strategies, and data and model availability must be addressed before fully unlocking the potential of PLMs in understanding, translating, and engineering immune receptors.
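To make the abstract's central idea concrete, below is a minimal sketch (not from the paper) of how one might extract a fixed-length immune receptor representation from a general-purpose PLM. It assumes the publicly available ESM-2 checkpoint "facebook/esm2_t6_8M_UR50D" via the HuggingFace transformers library; the CDR3 sequence and the mean-pooling choice are illustrative assumptions, not the paper's method.

```python
# Sketch: embedding a hypothetical TCR beta-chain CDR3 sequence with ESM-2.
# Assumes `pip install torch transformers`; checkpoint choice is an assumption.
import torch
from transformers import AutoTokenizer, AutoModel

checkpoint = "facebook/esm2_t6_8M_UR50D"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModel.from_pretrained(checkpoint)
model.eval()

# Illustrative CDR3 amino-acid sequence (hypothetical example).
sequence = "CASSLAPGATNEKLFF"

inputs = tokenizer(sequence, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool the per-residue embeddings, dropping the special CLS/EOS
# tokens, to get one vector usable for downstream tasks such as
# antigen-specificity prediction.
hidden = outputs.last_hidden_state[0]           # (seq_len, hidden_dim)
mask = inputs["attention_mask"][0].bool()
embedding = hidden[mask][1:-1].mean(dim=0)      # exclude CLS and EOS
print(embedding.shape)                          # torch.Size([320])
```

Mean-pooling is only one common readout; per-residue embeddings or the CLS token are alternatives, and the receptor-specific PLMs surveyed in the paper may expose different interfaces.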

Citations (2)

