Protein sequence classification using natural language processing techniques
Abstract

Purpose: This study aimed to enhance protein sequence classification using NLP techniques while addressing the impact of sequence similarity on model performance. We compared various machine learning and deep learning models under two data-splitting strategies: random splitting and ECOD family-based splitting, which ensures evolutionarily related sequences are grouped together.

Methods: The study evaluated K-Nearest Neighbors (KNN), Multinomial Naïve Bayes, Logistic Regression, Multi-Layer Perceptron (MLP), Decision Tree, Random Forest, XGBoost, Voting and Stacking classifiers, a Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), and transformer models (BertForSequenceClassification, DistilBERT, and ProtBert). Performance was tested across different amino acid ranges and sequence lengths, with a focus on generalization to unseen evolutionary families.

Results: Under random splitting, the Voting classifier achieved the highest performance among the machine learning models, with 74% accuracy, a 74% weighted F1 score, and a 65% macro F1 score, while ProtBert led the transformer models with 77% accuracy, a 76% weighted F1 score, and a 61% macro F1 score. Performance declined across all models under ECOD family-based splitting, revealing the impact of train-test sequence similarity on measured classification performance.

Conclusion: Advanced NLP techniques, particularly ensembles such as the Voting classifier and transformer models such as ProtBert, show significant potential for protein classification, provided there is sufficient training data and sequence similarity is managed. For realistic performance evaluation and generalization to unseen evolutionary families, however, biologically meaningful splitting methods such as ECOD family-based splitting are essential.
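Since the abstract's key methodological point is the contrast between random and ECOD family-based splitting, a minimal sketch may help make it concrete. The snippet below is not the authors' code: the toy sequences, class labels, and family IDs are hypothetical, the k-mer size and base models are illustrative choices, and scikit-learn's GroupShuffleSplit stands in for the paper's family-based split. For brevity, the vectorizer is fit on all sequences; in practice it should be fit on the training split only.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split, GroupShuffleSplit
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.neighbors import KNeighborsClassifier

def to_kmers(seq, k=3):
    """Turn a protein sequence into a 'sentence' of overlapping k-mers."""
    return " ".join(seq[i:i + k] for i in range(len(seq) - k + 1))

# Hypothetical toy data: amino acid sequences, class labels, ECOD family IDs.
sequences = [
    "MKTAYIAKQR", "MKTAYLAKQR",   # family F1, class 0
    "MKSAYIAKHR", "MKSAYIAQHR",   # family F2, class 0
    "GAVLIPFWMG", "GAVLIPYWMG",   # family F3, class 1
    "GTVLMPFWNG", "GTVLMPFWQG",   # family F4, class 1
]
labels   = np.array([0, 0, 0, 0, 1, 1, 1, 1])
families = np.array(["F1", "F1", "F2", "F2", "F3", "F3", "F4", "F4"])

# Treat each sequence as a bag of k-mer "words", as in text classification.
X = CountVectorizer().fit_transform(to_kmers(s) for s in sequences)
idx = np.arange(len(labels))

# Strategy 1: random split -- related sequences may leak across partitions.
rand_tr, rand_te = train_test_split(idx, test_size=0.25, random_state=0,
                                    stratify=labels)

# Strategy 2: ECOD family-based split -- whole families are held out,
# so the test set contains only unseen evolutionary families.
gss = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=0)
fam_tr, fam_te = next(gss.split(X, labels, groups=families))

# A soft-voting ensemble over a few of the evaluated base models.
clf = VotingClassifier(
    estimators=[
        ("nb", MultinomialNB()),
        ("lr", LogisticRegression(max_iter=1000)),
        ("knn", KNeighborsClassifier(n_neighbors=1)),
    ],
    voting="soft",
)

for name, (tr, te) in {"random": (rand_tr, rand_te),
                       "ECOD family-based": (fam_tr, fam_te)}.items():
    clf.fit(X[tr], labels[tr])
    print(f"{name} split accuracy: {clf.score(X[te], labels[te]):.2f}")
```

Because whole families are held out in the second split, the test set contains only sequences whose evolutionary relatives were never seen during training, which is the effect behind the performance drop reported in the Results.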
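Likewise, a minimal sketch of how a ProtBert sequence classifier is typically set up with HuggingFace Transformers; the paper does not specify its exact configuration, so the public Rostlab/prot_bert checkpoint is assumed and num_labels=10 is a placeholder for the study's class count.

```python
import re
import torch
from transformers import BertTokenizer, BertForSequenceClassification

# Rostlab/prot_bert is the publicly released ProtBert checkpoint.
tokenizer = BertTokenizer.from_pretrained("Rostlab/prot_bert",
                                          do_lower_case=False)
# The classification head is newly initialized and must be fine-tuned
# on the labeled protein sequences before use.
model = BertForSequenceClassification.from_pretrained("Rostlab/prot_bert",
                                                      num_labels=10)

# ProtBert expects space-separated amino acids, with the rare residues
# U, Z, O, B mapped to X (per the model card).
seq = "MKTAYIAKQR"
seq = " ".join(re.sub(r"[UZOB]", "X", seq))

inputs = tokenizer(seq, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    logits = model(**inputs).logits  # one score per protein class
```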