Basis Transformers for Multi-Task Tabular Regression (2506.06926v1)

Published 7 Jun 2025 in cs.LG

Abstract: Dealing with tabular data is challenging due to partial information, noise, and heterogeneous structure. Existing techniques often struggle to simultaneously address key aspects of tabular data such as textual information, a variable number of columns, and unseen data without metadata besides column names. We propose a novel architecture, basis transformers, specifically designed to tackle these challenges while respecting inherent invariances in tabular data, including hierarchical structure and the representation of numeric values. We evaluate our design on a multi-task tabular regression benchmark, achieving an improvement of 0.338 in the median $R^2$ score and the lowest standard deviation across 34 tasks from the OpenML-CTR23 benchmark. Furthermore, our model has five times fewer parameters than the best-performing baseline and surpasses pretrained LLM baselines -- even when initialized from randomized weights.

Summary

  • The paper presents basis transformers as a novel method for multi-task tabular regression that preserves heterogeneous data structure and enables zero-shot learning.
  • It employs innovative techniques like sign-magnitude encoding and basis queries to transform variable-length inputs into fixed-size summaries.
  • Experimental evaluations on the OpenML-CTR23 benchmark reveal a median R² improvement of 0.338 while using significantly fewer parameters than large language models.

Basis Transformers for Multi-Task Tabular Regression

The paper introduces basis transformers, a new architectural approach for efficiently handling tabular data in multi-modal, multi-task regression settings. Tabular data, common in fields such as healthcare, finance, and robotics, poses unique challenges due to its inherent heterogeneity, missing values, textual content, and variable column structure. Conventional regression techniques struggle to address these challenges adequately. The paper proposes basis transformers as a comprehensive solution that respects the characteristics of tabular data while optimizing for efficient learning and adaptability across varied tasks.

Architectural Innovation

The basis transformer architecture centers on maintaining the structural integrity of tabular data while allowing for information propagation across different modalities within the dataset. Key components include:

  • Sign-Magnitude Representation (SMR): Numeric values are encoded in a form compatible with textual tokens. Unlike raw floating-point inputs, which require per-dataset normalization and scaling, SMR preserves scale and precision, which facilitates transfer learning and supports zero-shot use on unseen tasks (a sketch of such an encoding follows this list).
  • Basis Queries and Compression Mechanisms: Cross-attention layers with learned inducing points, acting as basis queries, convert variable-length token sequences into fixed-size summaries, so heterogeneous data columns receive uniform treatment downstream (see the pooling sketch below).
  • Multi-Label Classification for Regression: Regression targets are recast in a multi-label form inspired by classification, yielding a more stable optimization objective that avoids the scale discrepancies typical of conventional regression loss functions (see the final sketch below).
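
To make the sign-magnitude idea concrete, here is a minimal sketch of one way a numeric value could be turned into sign, digit, and exponent tokens without any dataset-level normalization. The function name, token vocabulary, and precision are illustrative assumptions; the paper's exact encoding may differ.

```python
import math

# Illustrative sign-magnitude style encoding (hypothetical; the paper's exact
# tokenization is not given in this summary). A float becomes a sign token,
# a fixed number of magnitude digits, and a signed decimal exponent.

def encode_sign_magnitude(x: float, precision: int = 4) -> list:
    if x == 0.0:
        return ["+"] + ["0"] * precision + ["E", "+", "0"]
    sign = "+" if x > 0 else "-"
    x = abs(x)
    exponent = math.floor(math.log10(x))
    mantissa = x / (10 ** exponent)            # mantissa in [1, 10)
    digits = f"{mantissa:.{precision - 1}f}".replace(".", "")[:precision]
    exp_sign = "+" if exponent >= 0 else "-"
    return [sign] + list(digits) + ["E", exp_sign] + list(str(abs(exponent)))

# Scale and precision survive in the tokens themselves, so no per-column
# normalization statistics are needed at inference time.
print(encode_sign_magnitude(1250.0))   # ['+', '1', '2', '5', '0', 'E', '+', '3']
print(encode_sign_magnitude(0.0042))   # ['+', '4', '2', '0', '0', 'E', '-', '3']
```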
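
The compression step can be illustrated with a small cross-attention pooling module in PyTorch: a fixed set of learned basis queries attends over a variable-length token sequence and emits a fixed number of summary vectors, in the spirit of inducing points. The module name, dimensions, and layer composition here are assumptions rather than the paper's exact implementation.

```python
from typing import Optional

import torch
import torch.nn as nn


class BasisQueryPooling(nn.Module):
    """Compress a variable-length token sequence into a fixed number of
    summary vectors via cross-attention against learned basis queries.
    This mirrors the inducing-point idea only at a high level; the exact
    layer composition in the paper may differ."""

    def __init__(self, d_model: int = 128, num_basis: int = 8, num_heads: int = 4):
        super().__init__()
        # Learned basis queries: one fixed set shared across all inputs.
        self.basis = nn.Parameter(torch.randn(num_basis, d_model) * 0.02)
        self.attn = nn.MultiheadAttention(d_model, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, tokens: torch.Tensor,
                padding_mask: Optional[torch.Tensor] = None) -> torch.Tensor:
        # tokens: (batch, seq_len, d_model); seq_len may vary between calls.
        batch = tokens.shape[0]
        queries = self.basis.unsqueeze(0).expand(batch, -1, -1)
        summary, _ = self.attn(queries, tokens, tokens,
                               key_padding_mask=padding_mask)
        # Fixed-size output: (batch, num_basis, d_model), independent of seq_len.
        return self.norm(summary + queries)


# Usage: sequences of 11 and 37 tokens both compress to 8 summary vectors,
# so downstream layers always see a uniform shape.
pool = BasisQueryPooling()
short, long = torch.randn(2, 11, 128), torch.randn(2, 37, 128)
print(pool(short).shape, pool(long).shape)  # torch.Size([2, 8, 128]) for both
```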
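
Finally, the multi-label view of regression can be sketched by encoding a scalar target as threshold-exceedance indicators and decoding predicted probabilities back to a scalar. The threshold grid and the expectation-style decoder below are illustrative assumptions; the paper's exact label construction may differ.

```python
import numpy as np

# Hypothetical multi-label recasting of a regression target: label k is 1
# iff y exceeds threshold t_k. Training can then use a per-label binary
# objective, which is insensitive to the overall scale of y.

thresholds = np.linspace(-3.0, 3.0, num=32)   # assumed fixed threshold grid

def target_to_labels(y: float) -> np.ndarray:
    """Multi-label encoding: one indicator per threshold."""
    return (y > thresholds).astype(np.float32)

def labels_to_target(probs: np.ndarray) -> float:
    """Decode per-threshold probabilities back to a scalar. Each probability
    estimates P(y > t_k); summing the gaps they cover approximates E[y]
    relative to the lowest threshold."""
    gap = thresholds[1] - thresholds[0]
    return float(thresholds[0] + gap * probs.sum())

y = 1.37
labels = target_to_labels(y)
print(int(labels.sum()))            # number of thresholds below y
print(labels_to_target(labels))     # recovers y up to the bin resolution
```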

Experimental Evaluation

The paper evaluates basis transformers on the OpenML-CTR23 benchmark, which comprises 34 datasets. Comparative analysis against LLM baselines reveals a marked improvement in performance: the basis transformer achieves a median R² score improvement of 0.338 across tasks, and it remains effective even when initialized from random weights without pretraining. Notably, the architecture is also far more parameter-efficient, using roughly five times fewer parameters than the best-performing baseline.

Implications and Future Directions

The basis transformer approach not only presents clear practical advantages in handling tabular data but also holds theoretical implications. It suggests a reevaluation of how tabular data is processed in multi-task environments, emphasizing seamless integration of heterogeneous data types. Potential future developments might explore scaling basis transformers into foundation models tailored for tabular data across even broader contexts.

From an industrial perspective, incorporating such models could enhance predictive accuracy and robustness in applications where tabular data prevails. However, the complexity and memory overhead associated with this approach remain challenges that future research must address. Additionally, investigating the applicability of basis transformers beyond tabular data to other structured data types could further extend their utility.

In sum, this paper contributes significantly to the field of tabular data processing and sets a precedent for further research on lightweight, efficient, and adaptive models suited to multi-task regression challenges.
