What's in your Head? Emergent Behaviour in Multi-Task Transformer Models (2104.06129v2)

Published 13 Apr 2021 in cs.CL

Abstract: The primary paradigm for multi-task training in natural language processing is to represent the input with a shared pre-trained language model and to add a small, thin network (head) per task. Given an input, the target head is the head selected to output the final prediction. In this work, we examine the behaviour of non-target heads, that is, the output of heads when given input belonging to a different task than the one they were trained for. We find that non-target heads exhibit emergent behaviour that may either explain the target task or generalize beyond their original task. For example, in a numerical reasoning task, a span-extraction head extracts from the input the arguments to a computation whose result is the number generated by the target generative head. In addition, a summarization head trained alongside a target question-answering head outputs query-based summaries when given a question and a context from which the answer is to be extracted. This emergent behaviour suggests that multi-task training leads to non-trivial extrapolation of skills, which can be harnessed for interpretability and generalization.
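
The setup the abstract describes, a shared encoder feeding one thin head per task, with only the target head used for the final prediction but every head runnable on any input, can be sketched in a few lines of PyTorch. This is a minimal illustration, not the paper's code: the head names, output dimensions, and the `encoder` interface are assumptions made for the example.

```python
import torch
import torch.nn as nn

class MultiTaskModel(nn.Module):
    """Shared pre-trained encoder with one small (thin) head per task."""

    def __init__(self, encoder: nn.Module, hidden_dim: int = 768):
        super().__init__()
        self.encoder = encoder  # stand-in for a pre-trained transformer
        # Thin task-specific heads over the shared representation
        # (head names and output sizes are hypothetical).
        self.heads = nn.ModuleDict({
            "span_extraction": nn.Linear(hidden_dim, 2),  # start/end logits
            "classification": nn.Linear(hidden_dim, 3),   # 3-way labels
        })

    def forward(self, inputs: torch.Tensor, target_task: str):
        hidden = self.encoder(inputs)  # (batch, seq, hidden_dim)
        # Target head: the one trained for this input's task.
        target_out = self.heads[target_task](hidden)
        # Non-target heads: the same representation passed through heads
        # trained for other tasks, which is what the paper inspects.
        non_target_out = {
            name: head(hidden)
            for name, head in self.heads.items()
            if name != target_task
        }
        return target_out, non_target_out

# Hypothetical usage with a small stand-in encoder:
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=768, nhead=12, batch_first=True),
    num_layers=2,
)
model = MultiTaskModel(encoder)
x = torch.randn(1, 16, 768)  # pre-embedded input, (batch, seq, hidden)
target, non_target = model(x, target_task="span_extraction")
```

Inspecting `non_target` on inputs from a different task is the operation the paper builds on; in its experiments this reveals, for example, span-extraction heads surfacing the arguments of a computation whose result the target generative head produces.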

Citations (11)