The paper addresses the integration of search and recommendation techniques within conversational systems. It introduces a chat agent designed to help users interactively locate items; the agent leverages deep learning and combines insights from search and recommendation to approach the task from a novel perspective. The system architecture comprises three primary modules: Natural Language Understanding (NLU), Dialogue Management (DM), and Natural Language Generation (NLG).
Here's a breakdown of the key components and concepts:
- NLU Module: Analyzes user utterances in context to extract item-specific metadata. A deep belief tracker extracts facet values of the targeted item from each utterance; its output updates the current user intention, represented as a query of facet-value pairs (a minimal belief-tracker sketch follows this list).
- DM Module: Decides the action to take given the current dialogue state. The module is integrated with an external recommender system and operates over a defined action space. A deep policy network is trained to choose the optimal action at each turn, considering the user query and long-term user preferences; actions include requesting the value of a specific facet or recommending a list of products (see the policy-network sketch after this list).
- Recommender System: Integrated to provide personalized recommendations based on user history and context.
- Faceted Search Integration: The conversational agent helps users find items interactively, similar to faceted search in e-commerce. The system selects a set of facets or facet-value pairs for the user to choose from based on context.
- Deep Reinforcement Learning: Drives decision-making in the DM module; the agent learns to select actions that maximize the expected reward over the entire conversation session.
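For concreteness, here is a minimal belief-tracker sketch (the architecture, vocabulary size, and facet inventory are assumptions, not the paper's exact model): an LSTM encodes the user utterance, and a per-facet softmax head predicts a distribution over that facet's values, which updates the facet-value query.

```python
import torch
import torch.nn as nn

VOCAB = 10_000                                            # assumed vocabulary size
FACET_VALUES = {"cuisine": 20, "price": 4, "area": 12}    # hypothetical facets

class BeliefTracker(nn.Module):
    def __init__(self, emb_dim: int = 64, hidden: int = 128):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True)
        # One classification head per facet.
        self.heads = nn.ModuleDict(
            {f: nn.Linear(hidden, n) for f, n in FACET_VALUES.items()}
        )

    def forward(self, token_ids: torch.Tensor) -> dict:
        # token_ids: (batch, seq_len) integer ids of the user utterance.
        _, (h, _) = self.lstm(self.embed(token_ids))
        # One value distribution per facet, updated every user turn.
        return {f: torch.softmax(head(h[-1]), dim=-1)
                for f, head in self.heads.items()}
```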
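And a minimal policy-network sketch for the DM module (dimensions and state encoding are assumptions): the state vector summarizes the current belief plus user/context features, and the action space consists of "request facet i" for each facet plus one "recommend" action.

```python
import torch
import torch.nn as nn

N_FACETS = 5               # assumed number of item facets (e.g. cuisine, price)
STATE_DIM = 64             # assumed size of the dialogue-state encoding
N_ACTIONS = N_FACETS + 1   # request one of the facets, or recommend

class PolicyNetwork(nn.Module):
    def __init__(self, state_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        # Probability distribution over the N_ACTIONS actions.
        return torch.softmax(self.net(state), dim=-1)

policy = PolicyNetwork(STATE_DIM, N_ACTIONS)
state = torch.randn(1, STATE_DIM)              # stand-in dialogue state
action = torch.multinomial(policy(state), 1)   # sample the next action
```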
The paper details related work in dialogue systems, recommender systems, faceted search, and deep reinforcement learning. It positions the work relative to existing research, highlighting the focus on commercial success metrics, such as conversion rate, and the modeling of user preferences. The paper contrasts its approach with prior works that focus primarily on NLP challenges without fully integrating user preferences.
A key contribution of the paper is the deep policy network, which decides when and how to gather information from users and make recommendations based on past purchasing history and context.
The paper then details the implementation of the conversational recommender: a Factorization Machine (FM) is trained on the dialogue state, user information, and item information to score candidate items, and the deep policy network is trained with a policy gradient method to maximize the episodic expected reward (an FM scoring sketch follows).
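As an illustration (a numpy sketch under assumed encodings, not the paper's implementation), a second-order FM scores a candidate item from a feature vector `x` that would, in this setting, concatenate the encoded dialogue-state (facet-value), user, and item features; the pairwise term uses Rendle's O(kn) identity.

```python
import numpy as np

def fm_score(x: np.ndarray, w0: float, w: np.ndarray, V: np.ndarray) -> float:
    """Second-order FM: w0 + <w, x> + sum_{i<j} <V_i, V_j> x_i x_j.
    x: (n,) feature vector, w: (n,) linear weights, V: (n, k) factors."""
    linear = w0 + w @ x
    # Pairwise interactions in O(n*k):
    # 0.5 * sum_f [ (sum_i V_if x_i)^2 - sum_i V_if^2 x_i^2 ]
    interactions = 0.5 * np.sum((V.T @ x) ** 2 - (V ** 2).T @ (x ** 2))
    return float(linear + interactions)
```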
The paper describes experiments conducted to evaluate the proposed system, including:
- Offline experiments with simulated users to pre-train the model.
- Online experiments with real users to evaluate the effectiveness of the learned agents.
The experiments use the Yelp challenge dataset (restaurant and food data), adapted to create dialogue scripts. Simulated users follow a simple agenda when interacting with the agent: answering questions, finding items, and leaving the dialogue according to predefined rules (a simulator sketch follows this paragraph).
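A rule-based simulator along those lines might look as follows (a sketch with assumed rules, thresholds, and field names; the paper's agenda differs in its details):

```python
class SimulatedUser:
    """Agenda-based user simulator: the user targets one item with known
    facet values, answers the agent's facet questions, accepts a
    recommendation when the target ranks high enough, and quits after
    too many turns. All thresholds and field names are assumptions."""

    def __init__(self, target: dict, max_turns: int = 15, stop_rank: int = 10):
        self.target = target          # e.g. {"id": "r42", "cuisine": "thai"}
        self.max_turns = max_turns
        self.stop_rank = stop_rank    # recommendation-list stop threshold
        self.turns = 0

    def respond(self, action: str, payload=None):
        self.turns += 1
        if self.turns > self.max_turns:
            return "quit", None                        # patience exhausted
        if action == "request":                        # agent asked for a facet
            return "inform", self.target.get(payload)
        if action == "recommend":                      # agent showed item ids
            if self.target["id"] in payload[: self.stop_rank]:
                return "accept", payload.index(self.target["id"]) + 1
            return "reject", None
        return "quit", None
```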
User utterances are collected via Amazon Mechanical Turk, where workers write natural-language responses based on dialogue schemas. The recommendation reward is modeled in different ways, including Linear, Normalized Discounted Cumulative Gain (NDCG), and Cascade models, each reflecting different assumptions about how users review a recommendation list (sketched below).
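The functional forms below are illustrative assumptions that match the stated behavioral models: a linear position decay, a logarithmic discount as in DCG, and a cascade in which the user scans the list top-down and continues past each item with some probability.

```python
import math

def linear_reward(rank: int, n: int, r_max: float) -> float:
    # Reward decays linearly with the target's rank in a list of n items.
    return r_max * (n - rank + 1) / n

def ndcg_reward(rank: int, r_max: float) -> float:
    # Logarithmic position discount, as in the DCG gain at a given rank.
    return r_max / math.log2(rank + 1)

def cascade_reward(rank: int, r_max: float, p_continue: float = 0.8) -> float:
    # Cascade model: the user reaches rank k with probability p_continue^(k-1).
    return r_max * p_continue ** (rank - 1)
```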
The paper compares its Conversational Recommender Model (CRM) against a Maximum Entropy baseline ("MaxEnt Full") and its variants ("MaxEnt@K"). Evaluation metrics include Average Reward, Success Rate, Average Number of Turns, Wrong Quit Rate, and Low Rank Rate. The results indicate that the CRM outperforms the baselines, achieving a higher average reward and success rate in fewer turns. An analysis of the impact of belief-tracker accuracy demonstrates the robustness of the reinforcement learning model, and further experiments vary the simulated environment through the Maximum Success Reward and the Recommendation List Stop Threshold.
Online user studies are conducted to evaluate the trained model with real users, comparing the CRM against the MaxEnt Full method. The results show that the CRM achieves a higher success rate and shorter average turn count compared to the baseline.
In conclusion, the paper presents a framework for building conversational recommender systems, integrating techniques from dialogue systems and recommender systems. The system uses a deep policy network to manage the conversation and make personalized recommendations. Experimental results demonstrate the effectiveness of the proposed approach in both simulated and real-user settings.
The model maximizes the episodic expected reward from the starting state:

$$J(\theta) = \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{T} \gamma^{t}\, r_{t}\right]$$

where:
- $J(\theta)$ is the episodic expected reward.
- $\mathbb{E}_{\pi}$ is the expected value under policy $\pi_{\theta}$.
- $T$ is the final time step.
- $\gamma$ is a discount parameter.
- $r_{t}$ is the reward at time step $t$.
The gradient of the learning objective is:

$$\nabla_{\theta} J(\theta) = \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{T} R_{t}\, \nabla_{\theta} \log \pi_{\theta}(a_{t} \mid s_{t})\right]$$

where:
- $\nabla_{\theta} J(\theta)$ is the gradient of the learning objective.
- $R_{t} = \sum_{t'=t}^{T} \gamma^{\,t'-t}\, r_{t'}$ is the discounted sum of rewards from time step $t$ to $T$.
- $\nabla_{\theta} \log \pi_{\theta}(a_{t} \mid s_{t})$ is the gradient of the logarithm of the policy with respect to the policy parameters $\theta$.
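To make the two formulas concrete, here is a minimal REINFORCE-style update (a sketch under assumed tensor shapes, not the paper's training code) that forms the discounted returns $R_t$ and ascends the gradient above; `policy` is assumed to map a batch of states to action probabilities, as in the policy-network sketch earlier.

```python
# Minimal REINFORCE update: compute discounted returns R_t for one
# episode, then minimize -(sum_t R_t * log pi(a_t | s_t)), which is
# gradient ascent on J(theta) above. Assumed shapes:
# states: float tensor (T, STATE_DIM); actions: long tensor (T,).
import torch

def reinforce_update(policy, optimizer, states, actions, rewards, gamma=0.99):
    returns, R = [], 0.0
    for r in reversed(rewards):        # R_t = sum_{t'>=t} gamma^{t'-t} r_{t'}
        R = r + gamma * R
        returns.append(R)
    returns = torch.tensor(list(reversed(returns)))
    probs = policy(states)             # (T, n_actions) action probabilities
    log_probs = torch.log(probs.gather(1, actions.unsqueeze(1)).squeeze(1))
    loss = -(returns * log_probs).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```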
Several limitations and future research directions are identified, including joint learning of the dialogue policy and the recommendation model, improvements to the faceted-search components, and exploration of different reward functions.