Model Compression with Multi-Task Knowledge Distillation for Web-scale Question Answering System (1904.09636v1)

Published 21 Apr 2019 in cs.CL

Abstract: Deep pre-training and fine-tuning models (like BERT, OpenAI GPT) have demonstrated excellent results in question answering tasks. However, due to the sheer number of model parameters, the inference speed of these models is very slow. How to apply these complex models to real business scenarios thus becomes a challenging but practical problem. Previous works often leverage model compression approaches to address this issue. However, these methods usually induce information loss during the compression procedure, so the compressed model's results are not comparable to those of the original model. To tackle this challenge, we propose a Multi-task Knowledge Distillation Model (MKDM for short) for a web-scale Question Answering system, which distills knowledge from multiple teacher models into a light-weight student model so that more generalized knowledge can be transferred. The experiment results show that our method significantly outperforms the baseline methods and even achieves results comparable to the original teacher models, along with a significant speedup in model inference.
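
To make the core idea concrete, the sketch below shows one common way to distill knowledge from several teacher models into a single student: combine a hard-label loss with a soft-target loss computed against the averaged, temperature-scaled teacher distributions. This is an illustrative assumption about multi-teacher distillation in general, not the exact MKDM formulation from the paper; the function name, loss weights, and averaging scheme are hypothetical.

```python
# Minimal sketch of multi-teacher knowledge distillation for a QA classifier.
# Assumes teachers and student produce logits over the same label space;
# the loss weighting (alpha) and temperature are illustrative choices.
import torch
import torch.nn.functional as F

def multi_teacher_distillation_loss(student_logits, teacher_logits_list, labels,
                                    temperature=2.0, alpha=0.5):
    """Blend a hard-label loss with a soft-target loss averaged over teachers."""
    # Hard-label cross-entropy against the ground-truth labels.
    hard_loss = F.cross_entropy(student_logits, labels)

    # Soft targets: average the temperature-scaled teacher distributions.
    soft_targets = torch.stack(
        [F.softmax(t / temperature, dim=-1) for t in teacher_logits_list]
    ).mean(dim=0)

    # KL divergence between the student distribution and the averaged teachers,
    # scaled by T^2 as is standard in distillation objectives.
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        soft_targets,
        reduction="batchmean",
    ) * (temperature ** 2)

    return alpha * hard_loss + (1.0 - alpha) * soft_loss


# Example usage with dummy tensors (batch of 8, binary relevance labels, 3 teachers).
student_logits = torch.randn(8, 2)
teacher_logits_list = [torch.randn(8, 2) for _ in range(3)]
labels = torch.randint(0, 2, (8,))
loss = multi_teacher_distillation_loss(student_logits, teacher_logits_list, labels)
```

In practice the student is a much smaller network than the BERT-scale teachers, so the same objective yields a model that is far cheaper to run at inference time while retaining much of the teachers' behavior.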

Authors (5)
  1. Ze Yang (51 papers)
  2. Linjun Shou (53 papers)
  3. Ming Gong (246 papers)
  4. Wutao Lin (4 papers)
  5. Daxin Jiang (138 papers)
Citations (19)
