OpenTag: Open Attribute Value Extraction from Product Profiles [Deep Learning, Active Learning, Named Entity Recognition] (1806.01264v2)

Published 1 Jun 2018 in cs.CL, cs.AI, cs.IR, and stat.ML

Abstract: Extraction of missing attribute values is the task of finding values that describe an attribute of interest in a free text input. Most past work on this problem operates under a closed world assumption, with the possible set of values known beforehand, or relies on dictionaries of values and hand-crafted features. How can we discover new attribute values that we have never seen before? Can we do this with limited human annotation or supervision? We study this problem in the context of product catalogs, which often have missing values for many attributes of interest. In this work, we leverage product profile information such as titles and descriptions to discover missing values of product attributes. We develop a novel deep tagging model, OpenTag, for this extraction problem with the following contributions: (1) we formalize the problem as a sequence tagging task and propose a joint model exploiting recurrent neural networks (specifically, bidirectional LSTM) to capture context and semantics, and Conditional Random Fields (CRF) to enforce tagging consistency; (2) we develop a novel attention mechanism to provide interpretable explanations for the model's decisions; (3) we propose a novel sampling strategy exploring active learning to reduce the burden of human annotation. OpenTag does not use any dictionary or hand-crafted features as in prior work. Extensive experiments on real-life datasets in different domains show that OpenTag, with our active learning strategy, discovers new attribute values from as few as 150 annotated samples (a 3.3x reduction in annotation effort) with a high F-score of 83%, outperforming state-of-the-art models.

Review of "OpenTag: Open Attribute Value Extraction from Product Profiles"

The paper "OpenTag: Open Attribute Value Extraction from Product Profiles" addresses the challenge of extracting missing attribute values from product profiles in eCommerce catalogs, particularly under an Open World Assumption (OWA). This assumption allows for the discovery of previously unseen attribute values, overcoming the limitations of traditional approaches that operate within a closed world assumption. This innovative approach leverages natural language processing techniques combined with machine learning models, presenting a significant advancement in product catalog management.

Key Contributions

The authors propose a novel sequence tagging model called OpenTag, designed to extract attribute values from unstructured text. The model consists of several components which collectively address the constraints posed by previous methodologies:

  1. Bidirectional LSTM and CRF: The paper formalizes the extraction task as a sequence tagging problem handled by a hybrid model. A bidirectional LSTM captures long-range dependencies and the context of input sequences more effectively than a unidirectional encoder, while a Conditional Random Field (CRF) enforces tag-sequence consistency. This combination improves the accuracy and reliability of attribute extraction (a model sketch follows this list).
  2. Attention Mechanism: A notable innovation in OpenTag is its attention mechanism, which improves interpretability by weighting each token according to its importance relative to the surrounding context. The attention weights expose which tokens drove a given tagging decision, lending the extracted attribute values greater transparency and explanatory power.
  3. Active Learning Strategy: To cope with limited annotated data, the paper introduces a Tag Flip sampling strategy for active learning. The method flags the most informative samples by tracking how often a token's predicted tag changes across training epochs, substantially reducing annotation effort while maintaining model performance (see the sampling sketch after this list).
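
A minimal sketch of the architecture described in items 1 and 2, assuming PyTorch and the third-party pytorch-crf package (pip install pytorch-crf). The class name, hyperparameters, and the use of nn.MultiheadAttention as a stand-in for the paper's attention formulation are all illustrative choices, not the authors' implementation:

```python
# OpenTag-style tagger sketch: BiLSTM for context, token-level
# self-attention for interpretability, CRF for tag consistency.
import torch
import torch.nn as nn
from torchcrf import CRF  # third-party: pytorch-crf


class OpenTagSketch(nn.Module):
    def __init__(self, vocab_size, num_tags, emb_dim=100, hidden_dim=100):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.bilstm = nn.LSTM(emb_dim, hidden_dim, bidirectional=True,
                              batch_first=True)
        # Generic single-head self-attention over the BiLSTM states;
        # the paper's attention formulation differs in detail.
        self.attn = nn.MultiheadAttention(2 * hidden_dim, num_heads=1,
                                          batch_first=True)
        self.proj = nn.Linear(2 * hidden_dim, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def _emissions(self, tokens):
        h, _ = self.bilstm(self.embed(tokens))   # (B, T, 2H)
        a, weights = self.attn(h, h, h)          # attended states + weights
        return self.proj(a), weights             # emissions: (B, T, num_tags)

    def loss(self, tokens, tags, mask):
        # mask is a bool tensor marking real (non-padding) tokens;
        # the CRF returns a log-likelihood, so negate it for the loss.
        emissions, _ = self._emissions(tokens)
        return -self.crf(emissions, tags, mask=mask, reduction='mean')

    def predict(self, tokens, mask):
        emissions, weights = self._emissions(tokens)
        # Viterbi-decoded tag sequences plus attention weights for inspection.
        return self.crf.decode(emissions, mask=mask), weights
```

At inference time, predict returns both the decoded tag sequences and the attention weights; inspecting those weights is what provides the interpretability argued for in item 2. The tag set itself would follow a BIO-style scheme (e.g., B-flavor, I-flavor, O), as in standard sequence tagging.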
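
The Tag Flip criterion in item 3 reduces to simple bookkeeping over per-epoch predictions. The data layout and function names below are assumptions for illustration; the paper's exact flip definition may differ in detail:

```python
# Tag-flip sampling sketch: count how often each unlabeled sample's
# predicted tags change between successive epochs, then send the most
# unstable samples to annotators.
from collections import defaultdict


def count_tag_flips(epoch_predictions):
    """epoch_predictions: list over epochs, each a dict mapping
    sample_id -> predicted tag sequence (list of tag strings)."""
    flips = defaultdict(int)
    for prev, curr in zip(epoch_predictions, epoch_predictions[1:]):
        for sample_id, tags in curr.items():
            prev_tags = prev.get(sample_id)
            if prev_tags is not None:
                # one flip per token whose tag changed since last epoch
                flips[sample_id] += sum(a != b for a, b in zip(prev_tags, tags))
    return flips


def select_for_annotation(epoch_predictions, budget):
    """Return the `budget` sample ids whose tags flip most often."""
    flips = count_tag_flips(epoch_predictions)
    return sorted(flips, key=flips.get, reverse=True)[:budget]
```

In use, one would record the model's predictions over the unlabeled pool at the end of each epoch, pass the accumulated snapshots to select_for_annotation, have annotators label the returned samples, and retrain; per the paper, this loop reaches an F-score of 83% from as few as 150 labeled samples.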

Experimental Findings

The paper presents extensive experimental evaluations across several domains, including dog food, detergents, and cameras, demonstrating the efficacy of OpenTag. Real-world datasets were used to validate the model's ability to extract and infer a wide range of attribute values. Notably, OpenTag achieves an F-score of 83%, outperforming existing state-of-the-art models while discovering new attribute values from as few as 150 annotated samples, a 3.3x reduction in human annotation effort.
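
For reference, the F-score quoted above is the standard harmonic mean of precision and recall over the extracted values (the definition is standard; only the 83% figure comes from the paper):

$$F_1 = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}$$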

Implications and Future Directions

The implications of OpenTag are manifold, with practical applications in enriching product profiles on eCommerce platforms with more complete attribute listings. Theoretically, it opens avenues for open-world sequence tagging in other domains. The combination of attention mechanisms and active learning also points toward learning models that can adapt their feature extraction to complex, sparsely labeled data.

Future research could focus on refining the attention mechanism, investigating automated or semi-supervised domain adaptation capabilities, and enhancing the active learning framework to address dynamically evolving datasets. Furthermore, exploring model scalability and optimizing computational efficiency may prove beneficial as the complexity and volume of product catalogs continue to grow.

In conclusion, OpenTag represents a substantial step forward in tackling open attribute value extraction, providing a foundation for future advancements in AI-driven data enrichment methodologies.

Authors (4)
  1. Guineng Zheng
  2. Subhabrata Mukherjee
  3. Xin Luna Dong
  4. Feifei Li
Citations (179)