
Mining User Opinions in Mobile App Reviews: A Keyword-based Approach

Published 18 May 2015 in cs.IR and cs.CL | (1505.04657v2)

Abstract: User reviews of mobile apps often contain complaints or suggestions which are valuable for app developers to improve user experience and satisfaction. However, due to the large volume and noisy nature of those reviews, manually analyzing them for useful opinions is inherently challenging. To address this problem, we propose MARK, a keyword-based framework for semi-automated review analysis. MARK allows an analyst to describe his or her interests in one or more mobile apps by a set of keywords. It then finds and lists the reviews most relevant to those keywords for further analysis. It can also draw the trends over time of those keywords and detect their sudden changes, which might indicate the occurrences of serious issues. To help analysts describe their interests more effectively, MARK can automatically extract keywords from raw reviews and rank them by their associations with negative reviews. In addition, based on a vector-based semantic representation of keywords, MARK can divide a large set of keywords into more cohesive subsets, or suggest keywords similar to the selected ones.

Citations (164)

Summary

  • The paper introduces MARK, a keyword-based framework that efficiently extracts and normalizes user opinions from vast, noisy mobile app reviews.
  • The framework employs techniques like contrast scoring for keyword ranking and Word2Vec-based clustering, achieving accuracies around 83-90% in evaluation.
  • The paper demonstrates that automated trend analysis and targeted review search can provide timely insights for app developers to enhance user experience.

Analyzing User Opinions in Mobile App Reviews through a Keyword-based Framework

The paper "Mining User Opinions in Mobile App Reviews: A Keyword-based Approach" introduces MARK, a framework designed to extract meaningful insights from the vast and often noisy data found in mobile app user reviews. Acknowledging the importance of user feedback for enhancing app user experience and satisfaction, the authors present a structured method to efficiently analyze user comments using keyword-based techniques. This paper offers a comprehensive approach blending elements of information retrieval with user sentiment analysis, specifically tailored for mobile app reviews.

Overview of MARK Framework

MARK is constructed to address two key challenges: the excessive volume of user reviews that makes manual analysis impractical, and the inherently noisy nature of these reviews which often include misspellings, abbreviations, non-standard language, and other informalities. The framework effectively leverages keywords to systematize this data into useful insights for developers.

MARK operates at multiple stages:

  1. Keyword Extraction and Normalization: The process begins with extracting keywords from raw reviews. Specialized techniques are employed to handle issues like non-English reviews and misspellings. A custom stemming algorithm and a dictionary of common misspelled words further refine the keyword extraction process.
  2. Keyword Recommendation: This involves ranking, clustering, and expanding keywords. Ranking is performed using a contrast score that quantifies a keyword's prevalence in negative reviews. Clustering, based on Word2Vec representations, groups similar keywords, while expansion identifies related terms that users might not have mentioned explicitly.
  3. Review Search: The framework utilizes tf.idf weighting and the Vector Space Model to find the most relevant reviews associated with a chosen set of keywords. This process ensures efficient retrieval of reviews that reflect user concerns about specific app features or issues.
  4. Trend Analysis: MARK also tracks keyword occurrences over time, employing simple moving averages to detect unusual patterns in user reviews. This capability is pivotal for identifying persistent or rising problems in app versions, offering developers timely insights.
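The contrast-based ranking in step 2 can be sketched in a few lines. The exact scoring formula is not reproduced here; the version below simply compares a keyword's relative frequency in negative reviews (low star ratings) against its relative frequency overall, which captures the same intuition of "prevalence in negative reviews":

```python
from collections import Counter

def contrast_scores(reviews):
    """Rank keywords by their association with negative reviews.

    reviews: list of (tokens, rating) pairs; ratings of 1-2 stars are
    treated as negative here. The score is the keyword's relative
    frequency in negative reviews divided by its relative frequency in
    all reviews -- a plausible stand-in for the paper's contrast score,
    not its exact formula.
    """
    all_counts, neg_counts = Counter(), Counter()
    n_all = n_neg = 0
    for tokens, rating in reviews:
        all_counts.update(tokens)
        n_all += len(tokens)
        if rating <= 2:
            neg_counts.update(tokens)
            n_neg += len(tokens)
    scores = {}
    for kw, count in all_counts.items():
        p_all = count / n_all
        p_neg = neg_counts[kw] / n_neg if n_neg else 0.0
        scores[kw] = p_neg / p_all
    # Highest-scoring keywords are the ones most skewed toward negativity.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

A keyword like "crash" that appears mostly in one- and two-star reviews will score well above a neutral keyword like "app" that appears everywhere.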
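The review search in step 3 is standard tf.idf retrieval under the Vector Space Model. A minimal bag-of-words sketch, with illustrative function names not taken from the paper:

```python
import math
from collections import Counter

def build_index(reviews):
    """Compute a tf.idf vector for each tokenized review."""
    df = Counter()                      # document frequency per term
    for tokens in reviews:
        df.update(set(tokens))
    n = len(reviews)
    idf = {t: math.log(n / df[t]) for t in df}
    vectors = []
    for tokens in reviews:
        tf = Counter(tokens)
        vectors.append({t: tf[t] * idf[t] for t in tf})
    return vectors, idf

def search(keywords, vectors, idf):
    """Rank reviews by cosine similarity to the keyword query."""
    query = {t: idf.get(t, 0.0) for t in keywords}
    qnorm = math.sqrt(sum(w * w for w in query.values()))
    results = []
    for i, vec in enumerate(vectors):
        dot = sum(w * vec.get(t, 0.0) for t, w in query.items())
        vnorm = math.sqrt(sum(w * w for w in vec.values()))
        sim = dot / (qnorm * vnorm) if qnorm and vnorm else 0.0
        results.append((i, sim))
    return sorted(results, key=lambda pair: pair[1], reverse=True)
```

Querying the index with a keyword set like `["crashes"]` surfaces the reviews that mention that concern most distinctively, which is the behavior the framework relies on for targeted review search.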
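The trend analysis in step 4 compares current keyword counts against a simple moving average. A minimal sketch; the 7-day window and 2x threshold below are illustrative choices, not values from the paper:

```python
def flag_spikes(daily_counts, window=7, threshold=2.0):
    """Flag days where a keyword's count jumps well above its
    simple moving average over the preceding `window` days.

    daily_counts: keyword occurrence counts per day, oldest first.
    Returns the indices of days whose count exceeds `threshold`
    times the trailing SMA -- candidate "sudden changes".
    """
    flagged = []
    for i in range(window, len(daily_counts)):
        sma = sum(daily_counts[i - window:i]) / window
        if sma > 0 and daily_counts[i] > threshold * sma:
            flagged.append(i)
    return flagged
```

A spike in a keyword like "crash" right after a release would show up as a flagged day, which is the kind of anomaly the paper suggests can alert developers to issues in new app versions.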

Experimental Evaluation and Findings

The authors conducted experiments on a dataset of over 2 million reviews from 95 mobile apps. Various components of MARK were evaluated:

  • Keyword Ranking: The proposed contrast score effectively identified keywords strongly associated with user dissatisfaction.
  • Keyword Clustering and Expansion: Clustering accuracy hovered around 83% and expansion accuracy near 90%, indicating cohesive grouping and an ability to uncover semantically similar terms.
  • Review Search: MARK reliably surfaced pertinent reviews, achieving about 90% accuracy in correctly identifying reviews relevant to specific user concerns.
  • Trend Analysis: The system successfully spotted anomalies in user feedback trends, particularly highlighting issues in new app releases that negatively impacted user experience.

Implications and Future Directions

The combination of automated keyword extraction and sophisticated analysis positions MARK as a valuable tool in app development cycles. By simplifying the identification of key areas for improvement directly from user feedback, developers can enhance their app's features, usability, and overall satisfaction. Given current trends, future advancements could involve integrating deep learning methods for even richer semantic analysis and adopting more nuanced sentiment analysis techniques to further improve the detection of user sentiment beyond keyword occurrence.

Overall, this paper contributes to the field of opinion mining and automated feedback analysis, highlighting the need for targeted and structured approaches to derive practical user insights from large-scale, unstructured review data. By addressing challenges unique to mobile app data, MARK represents a step forward in automating and enhancing this process, with potential applications beyond mobile environments into other digital platforms with similar feedback mechanisms.
