
Pre-training Tasks for User Intent Detection and Embedding Retrieval in E-commerce Search (2208.06150v2)

Published 12 Aug 2022 in cs.IR

Abstract: BERT-style models pre-trained on a general corpus (e.g., Wikipedia) and fine-tuned on a specific task corpus have recently emerged as breakthrough techniques in many NLP tasks: question answering, text classification, sequence labeling, and so on. However, this technique may not always work, especially in two scenarios: a corpus that contains text very different from the general corpus (e.g., Wikipedia), or a task that learns the spatial distribution of embeddings for a specific purpose (e.g., approximate nearest neighbor search). In this paper, to tackle these two scenarios, which we have encountered in an industrial e-commerce search system, we propose customized and novel pre-training tasks for two critical modules: user intent detection and semantic embedding retrieval. After fine-tuning, the customized pre-trained models, each less than 10% of BERT-base's size to allow cost-efficient CPU serving, significantly outperform two baseline models: 1) a model with no pre-training and 2) a model fine-tuned from the official BERT pre-trained on the general corpus, on both offline datasets and the online system. We have open-sourced our datasets for reproducibility and future work.
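To make the embedding-retrieval module concrete, the sketch below shows how a compact BERT-style encoder can map queries and product titles into one vector space so that retrieval reduces to a nearest-neighbor search. This is a minimal illustration only: the model name (prajjwal1/bert-tiny), the mean-pooling choice, and the example texts are assumptions for demonstration, not the paper's customized pre-training tasks or production model.

# Minimal sketch: query/item embedding retrieval with a compact BERT-style encoder.
# The model name and pooling scheme are illustrative assumptions, not the paper's setup.
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

MODEL_NAME = "prajjwal1/bert-tiny"  # small public BERT; stand-in for the paper's compact model
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
encoder = AutoModel.from_pretrained(MODEL_NAME)
encoder.eval()

def embed(texts):
    """Mean-pool token embeddings into L2-normalized sentence vectors."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state           # (batch, tokens, hidden)
    mask = batch["attention_mask"].unsqueeze(-1).float()       # (batch, tokens, 1)
    pooled = (hidden * mask).sum(dim=1) / mask.sum(dim=1)      # masked mean pooling
    return F.normalize(pooled, p=2, dim=1)

# Queries and item titles share one embedding space; scoring is cosine similarity.
# In production this would be an approximate nearest neighbor index over item vectors.
queries = ["wireless mouse", "running shoes for men"]
items = ["bluetooth optical mouse 2.4ghz",
         "men's lightweight running sneakers",
         "stainless steel water bottle"]
scores = embed(queries) @ embed(items).T                       # (num_queries, num_items)
print(scores.argmax(dim=1))                                    # best-matching item per query

The user intent detection module would analogously attach a lightweight classification head to the same compact encoder; the paper's contribution is the customized pre-training applied before this fine-tuning stage, which the sketch above does not reproduce.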

Authors (10)
  1. Yiming Qiu (37 papers)
  2. Chenyu Zhao (10 papers)
  3. Han Zhang (338 papers)
  4. Jingwei Zhuo (12 papers)
  5. Tianhao Li (35 papers)
  6. Xiaowei Zhang (56 papers)
  7. Songlin Wang (17 papers)
  8. Sulong Xu (23 papers)
  9. Bo Long (60 papers)
  10. Wen-Yun Yang (10 papers)
Citations (11)