
Top-K Product Design Based on Collaborative Tagging Data (1304.0419v1)

Published 1 Apr 2013 in cs.SI, cs.DS, and cs.IR

Abstract: The widespread use and popularity of collaborative content sites (e.g., IMDB, Amazon, Yelp, etc.) has created rich resources for users to consult in order to make purchasing decisions on various products such as movies, e-commerce products, restaurants, etc. Products with desirable tags (e.g., modern, reliable, etc.) have higher chances of being selected by prospective customers. This creates an opportunity for product designers to design better products that are likely to attract desirable tags when published. In this paper, we investigate how to mine collaborative tagging data to decide the attribute values of new products and to return the top-k products that are likely to attract the maximum number of desirable tags when published. Given a training set of existing products with their features and user-submitted tags, we first build a Naive Bayes Classifier for each tag. We show that the problem of top-k product design is NP-complete even if simple Naive Bayes Classifiers are used for tag prediction. We present a suite of algorithms for solving this problem: (a) an exact two-tier algorithm (based on top-k querying techniques), which performs much better than the naive brute-force algorithm and works well for moderate problem instances, and (b) a set of approximation algorithms for larger problem instances: a novel polynomial-time approximation algorithm with provable error bound and a practical hill-climbing heuristic. We conduct detailed experiments on synthetic and real data crawled from the web to evaluate the efficiency and quality of our proposed algorithms, as well as show how product designers can benefit by leveraging collaborative tagging information.
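The core pipeline the abstract describes — fit one Naive Bayes classifier per tag from existing products, then score candidate attribute combinations by the expected number of desirable tags they attract — can be sketched as follows. This is a minimal illustration, not the paper's algorithms: the toy data, binary attributes, and Bernoulli likelihood model are assumptions, and the enumeration here is the naive brute-force baseline that the paper's two-tier and approximation algorithms are designed to beat.

```python
import math
from itertools import product as cartesian

# Hypothetical toy data: each existing product is a tuple of binary
# attribute values; tags[t][i] = 1 if users applied tag t to product i.
products = [(1, 0, 1), (1, 1, 0), (0, 1, 1), (0, 0, 1)]
tags = {"modern": [1, 1, 0, 0], "reliable": [0, 1, 1, 1]}

def train_nb(products, labels, alpha=1.0):
    """Fit a Bernoulli Naive Bayes classifier for one tag.

    Returns the log-priors of tag/no-tag and, per attribute, the
    Laplace-smoothed log-likelihoods log P(attr = v | class)."""
    pos = [p for p, y in zip(products, labels) if y == 1]
    neg = [p for p, y in zip(products, labels) if y == 0]
    n = len(products)
    model = {
        "prior": (math.log((len(pos) + alpha) / (n + 2 * alpha)),
                  math.log((len(neg) + alpha) / (n + 2 * alpha))),
        "lik": [],
    }
    for j in range(len(products[0])):
        per_class = []
        for group in (pos, neg):
            ones = sum(p[j] for p in group)
            p1 = (ones + alpha) / (len(group) + 2 * alpha)
            per_class.append((math.log(1 - p1), math.log(p1)))
        model["lik"].append(per_class)
    return model

def tag_prob(model, candidate):
    """Posterior P(tag | candidate attributes) under the NB model."""
    log_pos, log_neg = model["prior"]
    for j, v in enumerate(candidate):
        log_pos += model["lik"][j][0][v]
        log_neg += model["lik"][j][1][v]
    a, b = math.exp(log_pos), math.exp(log_neg)
    return a / (a + b)

def top_k_designs(products, tags, k):
    """Brute force: score every attribute combination by the expected
    number of desirable tags and return the k best. Exponential in the
    number of attributes, which is why the paper develops an exact
    two-tier algorithm and polynomial-time approximations instead."""
    models = {t: train_nb(products, y) for t, y in tags.items()}
    m = len(products[0])
    scored = [(sum(tag_prob(mod, c) for mod in models.values()), c)
              for c in cartesian((0, 1), repeat=m)]
    scored.sort(reverse=True)
    return scored[:k]
```

Each candidate's score is the sum of per-tag posterior probabilities, i.e. the expected count of desirable tags under the independence assumption of Naive Bayes; `top_k_designs(products, tags, 2)` returns the two attribute vectors with the highest expected count.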
