Analysis of Twitter Users' Lifestyle Choices using Joint Embedding Model (2104.03189v3)

Published 7 Apr 2021 in cs.CL, cs.AI, cs.CY, cs.LG, and cs.SI

Abstract: Multiview representation learning can help construct coherent, contextualized user representations on social media. This paper proposes a joint embedding model that incorporates users' social and textual information to learn contextualized user representations for understanding their lifestyle choices. We apply our model to tweets related to two lifestyle activities, 'Yoga' and 'Keto diet', and use it to analyze users' activity type and motivation. We describe the data collection and annotation process in detail and provide an in-depth analysis of users from different classes based on their Twitter content. Our experiments show that our model yields performance improvements in both domains.
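The abstract describes fusing a user's textual and social views into a single representation. As a rough illustration of that idea only (the dimensions, the concatenate-and-project fusion, and the ReLU are assumptions for the sketch, not details taken from the paper):

```python
import numpy as np

# Hypothetical dimensions; the abstract does not specify these values.
TEXT_DIM, SOCIAL_DIM, JOINT_DIM = 768, 128, 256

rng = np.random.default_rng(0)

# Stand-ins for a user's two views: a text-encoder output (e.g. from a
# BERT-style model) and a social/network embedding (e.g. from node2vec).
text_emb = rng.normal(size=TEXT_DIM)
social_emb = rng.normal(size=SOCIAL_DIM)

# One simple fusion scheme: concatenate the two views and project them
# into a shared space with a linear map (random weights here) + ReLU.
W = rng.normal(size=(JOINT_DIM, TEXT_DIM + SOCIAL_DIM))
joint = np.maximum(W @ np.concatenate([text_emb, social_emb]), 0)

print(joint.shape)  # (256,)
```

In a trained model, `W` would be learned jointly with the downstream classifier (here, predicting a user's activity type or motivation), so the projection aligns the two views rather than mixing them at random.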
