Learning Constraints for Structured Prediction Using Rectifier Networks (2006.01209v1)

Published 23 May 2020 in cs.CL and cs.LG

Abstract: Various natural language processing tasks are structured prediction problems where outputs are constructed with multiple interdependent decisions. Past work has shown that domain knowledge, framed as constraints over the output space, can help improve predictive accuracy. However, designing good constraints often relies on domain expertise. In this paper, we study the problem of learning such constraints. We frame the problem as that of training a two-layer rectifier network to identify valid structures or substructures, and show a construction for converting a trained network into a system of linear constraints over the inference variables. Our experiments on several NLP tasks show that the learned constraints can improve the prediction accuracy, especially when the number of training examples is small.
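
The core construction can be illustrated with a small sketch. The code below is a hypothetical illustration, not the authors' implementation: it assumes the learned constraint classifier has the form f(z) = w0·z + b0 − Σ_k ReLU(w_k·z + b_k), one of the two-layer rectifier forms for which the conversion to linear constraints is direct. Using the identity Σ_k ReLU(t_k) = max over subsets S of Σ_{k∈S} t_k, the decision f(z) ≥ 0 is equivalent to the 2^K linear inequalities (w0 − Σ_{k∈S} w_k)·z + (b0 − Σ_{k∈S} b_k) ≥ 0, one per subset S of the K hidden units. Function names such as `extract_linear_constraints` are made up for the example.

```python
import itertools
import numpy as np

def rectifier_score(z, w0, b0, W, b):
    """Score of a structure z under an (assumed) two-layer rectifier network
    f(z) = w0.z + b0 - sum_k relu(W[k].z + b[k])."""
    hidden = np.maximum(0.0, W @ z + b)   # ReLU hidden layer
    return w0 @ z + b0 - hidden.sum()

def extract_linear_constraints(w0, b0, W, b):
    """Convert the decision f(z) >= 0 into a linear system A z + c >= 0.

    Since sum_k relu(t_k) = max over subsets S of sum_{k in S} t_k,
    f(z) >= 0 holds iff, for every subset S of hidden units,
    (w0 - sum_{k in S} W[k]).z + (b0 - sum_{k in S} b[k]) >= 0.
    This yields 2^K constraints, so K is kept small in practice.
    """
    K = W.shape[0]
    A, c = [], []
    for S in itertools.product([0, 1], repeat=K):
        mask = np.array(S, dtype=float)
        A.append(w0 - mask @ W)
        c.append(b0 - mask @ b)
    return np.stack(A), np.array(c)

# Toy example: 3 binary inference variables, 2 hidden rectifier units.
rng = np.random.default_rng(0)
w0, b0 = rng.normal(size=3), 0.5
W, b = rng.normal(size=(2, 3)), rng.normal(size=2)
A, c = extract_linear_constraints(w0, b0, W, b)

z = rng.integers(0, 2, size=3).astype(float)
# The network's validity decision agrees with the extracted linear system.
assert (rectifier_score(z, w0, b0, W, b) >= 0) == np.all(A @ z + c >= 0)
```

In this sketch the extracted rows of A and c would then be added as linear constraints over the inference variables in whatever structured inference procedure (e.g. an ILP solver) produces the output.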

Authors (3)
  1. Xingyuan Pan (9 papers)
  2. Maitrey Mehta (6 papers)
  3. Vivek Srikumar (55 papers)
Citations (8)
