BetterTogether Algorithm Integration

Updated 24 October 2025
  • BetterTogether Algorithm is a framework that integrates diverse components such as predictors, classifiers, and agents to achieve superior performance.
  • It employs methodologies like weighted regression, ensemble learning, and prompt-weight optimization to address domain-specific challenges in finance, NLP, and collaborative decision-making.
  • Its scalable design and robust empirical results demonstrate significant gains in predictive accuracy while also addressing practical issues like fairness and anchoring effects.

The term “BetterTogether Algorithm” refers to a class of methodologies and system designs wherein disparate but functionally complementary components—predictors, ensembles, classifiers, decision-makers, module optimizers, or networked agents—are integrated in such a way that the collective outcome exceeds the performance of any isolated constituent. Modern research, as aggregated below, demonstrates that the BetterTogether principle emerges across diverse domains, from quantitative finance and entity linking in NLP, through human-algorithm collaboration, to networked cooperative learning and model program optimization.

1. Principles of Combination and Complementarity

Fundamentally, the BetterTogether concept leverages heterogeneity in system strengths by combining outputs so that net predictive accuracy, robustness, or utility is maximized. Complementarity is a central theme: whether a system combines multiple predictive algorithms, human expertise with machine outputs, or modules in an LM pipeline, strict performance gains are realized only when the complementary sources contribute differentially across problem regimes (Donahue et al., 2022). The quantitative definition of complementarity requires:

$$\sum_i p_i \, c(a_i, h_i) < \min\{A, H\}$$

where $c(a_i, h_i)$ is the loss under a "combining rule" for sources with per-regime losses $a_i$ (algorithm) and $h_i$ (human), $p_i$ are the regime weights, and $A$ and $H$ denote the expected losses of the algorithm and the human acting alone; complementarity fails if losses (or system reliances) are uniform or one source dominates every regime.
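
To make the condition concrete, the following minimal Python sketch checks complementarity for a two-source system; the oracle combining rule (`c = min`) and all numbers are illustrative choices, not taken from Donahue et al. (2022):

```python
# Minimal sketch of the complementarity check; the oracle combining rule
# c = min and the toy numbers below are illustrative assumptions.

def is_complementary(p, a, h, c):
    """True if the combined expected loss beats both standalone sources."""
    combined = sum(p_i * c(a_i, h_i) for p_i, a_i, h_i in zip(p, a, h))
    A = sum(p_i * a_i for p_i, a_i in zip(p, a))  # algorithm-alone expected loss
    H = sum(p_i * h_i for p_i, h_i in zip(p, h))  # human-alone expected loss
    return combined < min(A, H)

# Mirrored strengths across two regimes: each source wins one regime, so an
# oracle that defers to the better source per regime achieves complementarity.
p = [0.5, 0.5]          # regime weights
a = [0.1, 0.9]          # algorithm: strong in regime 0, weak in regime 1
h = [0.9, 0.1]          # human: the mirror image
print(is_complementary(p, a, h, c=min))                     # True

# If one source dominates every regime, complementarity fails.
print(is_complementary(p, [0.1, 0.1], [0.9, 0.9], c=min))   # False
```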

The ensemble approach in entity linking exploits complementarity by assigning each mention an appropriate EL system through supervised selection, effectively predicting, on a per-example basis, which system is "best" in context (João et al., 2021). In modular LM pipelines, alternating optimization of prompts and weights enables the same model to teach itself decomposition strategies, exploiting latent complementarity between prompt design (discrete) and weight adaptation (continuous) (Soylu et al., 15 Jul 2024).

2. Algorithmic Methodologies

BetterTogether methodologies instantiate in several concrete algorithmic regimes:

  • Weighted Regression in High-Dimensional Portfolio Optimization: When combining a massive number $N$ of "alphas" (predictive financial signals), the optimization, which naively requires inverting a singular $N \times N$ sample covariance matrix, reduces to a weighted regression that inverts only an $(M-1) \times (M-1)$ matrix, where $M \ll N$ is the number of available historical observations. Specifically, weights are derived via

$$w_i = \eta \, \frac{\tilde{\varepsilon}_i}{\sigma_i}$$

where $\tilde{\varepsilon}_i$ is the regression residual of the normalized expected returns, $\sigma_i$ the corresponding alpha volatility, and $\eta$ an overall normalization factor (Kakushadze et al., 2016). This is computationally efficient (linear in $N$), non-iterative, and robust to a lack of clustering in alpha exposures.
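
As a rough illustration, the numpy sketch below performs the weighted-regression step; the construction of the loading matrix and the normalization conventions are assumptions for exposition, not the exact specification of Kakushadze et al. (2016):

```python
# Rough numpy sketch of the weighted-regression alpha combination; the
# loading matrix and normalization conventions are illustrative assumptions.
import numpy as np

def combine_alphas(E, sigma, loadings, eta=1.0):
    """E: (N,) expected returns; sigma: (N,) alpha volatilities;
    loadings: (N, M-1) regressors built from M historical observations.
    WLS with weights 1/sigma^2, then w_i = eta * residual_i / sigma_i."""
    z = 1.0 / sigma**2                               # WLS weights
    G = loadings * z[:, None]
    beta = np.linalg.solve(loadings.T @ G, G.T @ E)  # only an (M-1)x(M-1) solve
    resid = E - loadings @ beta                      # tilde-epsilon_i
    w = eta * resid / sigma
    return w / np.abs(w).sum()                       # unit gross exposure

# Toy usage: N = 10,000 alphas but only M = 21 observations, so the full
# 10,000 x 10,000 sample covariance is singular; here we solve 20 x 20.
rng = np.random.default_rng(0)
N, M = 10_000, 21
w = combine_alphas(rng.normal(size=N),
                   rng.uniform(0.5, 2.0, size=N),
                   rng.normal(size=(N, M - 1)))
```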

  • Ensemble Learning for Entity Linking: BetterTogether (MetaEL+) collects outputs from multiple off-the-shelf EL systems. Multi-label and binary classifiers select, for each mention, which annotation is most likely to be correct, using a feature-rich representation that includes surface form statistics, document-level properties, and historical system performance:

$$s\_ratio = \frac{\text{correct disambiguations}}{\text{total disambiguations}}$$

Evaluated on precision, recall, F1-score, and real prediction accuracy, the ensemble often surpasses both individual systems and basic ensemble baselines (majority voting, weighted voting) (João et al., 2021).
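
A hypothetical scikit-learn analogue of this per-mention routing might look as follows; the published MetaEL+ uses MEKA and SMO, and the feature set here is a simplified subset, so treat this purely as an illustration:

```python
# Illustrative scikit-learn analogue of MetaEL+-style per-mention system
# selection. The system names, features, and toy data are assumptions; one
# training row per (mention, system), labeled by whether that system was
# correct for the mention.
from sklearn.ensemble import RandomForestClassifier

SYSTEMS = ["el_sys_A", "el_sys_B"]

def featurize(mention, system, s_ratio):
    """s_ratio[system]: historical fraction of correct disambiguations."""
    return [s_ratio[system], len(mention["surface"]), mention["position"]]

# Toy training data: system A tends to be right early in documents, B late.
s_ratio = {"el_sys_A": 0.8, "el_sys_B": 0.6}
mentions = [{"surface": "Paris", "position": p} for p in range(10)]
rows, labels = [], []
for m in mentions:
    for s in SYSTEMS:
        rows.append(featurize(m, s, s_ratio))
        labels.append(int((s == "el_sys_A") == (m["position"] < 5)))

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(rows, labels)

def pick_system(mention):
    """Route the mention to the system the meta-classifier trusts most."""
    probs = {s: clf.predict_proba([featurize(mention, s, s_ratio)])[0][1]
             for s in SYSTEMS}
    return max(probs, key=probs.get)

print(pick_system({"surface": "Paris", "position": 2}))  # likely "el_sys_A"
```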

  • Alternating Prompt and Weight Optimization in Modular LM Pipelines: The approach alternates bootstrapped prompt refinement (via few-shot random search) with weight fine-tuning on traces generated by the improved prompts. The objective is joint maximization over program weights $\Theta$ and prompt templates $\Pi$:

$$\arg\max_{\Theta, \Pi} \frac{1}{|X|} \sum_{(x, m) \in X} \mu\big(\Phi_{\langle \Theta, \Pi \rangle}(x), m\big)$$

Alternating the two steps improves downstream task performance, with gains of up to 65% over optimizing either modality alone (Soylu et al., 15 Jul 2024).
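
The alternation schedule itself is simple; the sketch below is framework-agnostic Python, where the callables `search_prompts`, `collect_traces`, and `finetune` are hypothetical placeholders supplied by the caller, not real DSPy APIs:

```python
# Framework-agnostic sketch of the BetterTogether alternation schedule
# (Soylu et al., 2024). The callables passed in (search_prompts,
# collect_traces, finetune) are hypothetical placeholders, not DSPy calls.

def better_together(program, trainset, metric,
                    search_prompts, collect_traces, finetune,
                    rounds=2, n_candidates=8):
    """Alternate discrete prompt search with continuous weight updates."""
    def score(p):
        return sum(metric(p(x), m) for x, m in trainset) / len(trainset)

    for _ in range(rounds):
        # 1) Prompt step: bootstrap few-shot demos and random-search over
        #    candidate prompt configurations, keeping the best under `metric`.
        candidates = [search_prompts(program, trainset, seed=s)
                      for s in range(n_candidates)]
        program = max(candidates, key=score)
        # 2) Weight step: run the improved prompts over the training inputs,
        #    keep metric-passing traces, and fine-tune the LM weights on them.
        traces = collect_traces(program, trainset, metric)
        program = finetune(program, traces)
    return program
```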

  • Optimal Cooperative Learning in Networked Agents: In multi-agent settings, aggregate improvement is maximized by fixing the classifiers of the agents with the largest "influence scores" (computed from the convergence matrix $W$ and agent error rates), in quadratic time. The more egalitarian objective is NP-hard but admits $(1-1/e)$-approximate greedy solutions under submodularity, using coverage conditions reminiscent of set cover; see the greedy sketch after this list (Haddadan et al., 31 May 2024).
  • Joint Human-Algorithm Decision-Making: When both human and algorithm have noisy access to rankings (e.g., Mallows or RUM models), limiting the set size $k$ (i.e., presenting a shortlist for human selection) often strictly improves selection of the best item compared to either source alone. The optimal $k$ is often 2 under balanced accuracy, but the benefit vanishes under strong anchoring effects, where human decisions mimic algorithmic suggestions (Donahue et al., 2023).
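
For the networked-agent setting, the greedy step referenced above can be sketched as generic monotone-submodular maximization. The `gain` function standing in for the paper's influence-based objective is a toy coverage function, not the actual construction from $W$ and agent error rates (Haddadan et al., 31 May 2024):

```python
# Sketch of the greedy (1 - 1/e)-approximate intervention selection for the
# egalitarian objective. `gain` is a generic monotone submodular set
# function; the toy coverage objective below is an illustrative stand-in.

def greedy_interventions(agents, gain, budget):
    """Greedily pick up to `budget` agents, maximizing marginal gain."""
    fixed = set()
    for _ in range(budget):
        candidates = [a for a in agents if a not in fixed]
        if not candidates:
            break
        best = max(candidates, key=lambda a: gain(fixed | {a}) - gain(fixed))
        if gain(fixed | {best}) <= gain(fixed):
            break                      # no remaining intervention helps
        fixed.add(best)
    return fixed

# Toy objective: fixing an agent "covers" the neighbors its classifier reaches.
neighborhoods = {0: {0, 1, 2}, 1: {2, 3}, 2: {3, 4, 5}, 3: {5}}

def covered(S):
    return len(set().union(*(neighborhoods[a] for a in S))) if S else 0

print(greedy_interventions(neighborhoods, covered, budget=2))  # {0, 2}
```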

3. Computational Complexity and Performance

BetterTogether algorithms are characterized by favorable scaling properties and competitive performance:

  • Portfolio combination via weighted regression costs $O(M^2 N)$, linear in $N$ and tractable for billions of predictors, because no full covariance matrix inversion is required (Kakushadze et al., 2016).
  • The entity linking ensemble learner improves F1 from the best baseline's 74.3% to 81.9% (LOOSE variant) and exceeds 90% real prediction accuracy on large datasets (João et al., 2021).
  • Prompt-weight alternating optimization yields up to 65% improvement on QA, 2.5-10% on arithmetic reasoning, and variable improvements in classification, outperforming prompt-only and weight-only strategies (Soylu et al., 15 Jul 2024).
  • Cooperative network intervention via influence-score maximization runs in polynomial time; egalitarian improvement is NP-hard but efficiently approximated with greedy submodular strategies, achieving over 80% accuracy with $O(\log n)$ interventions in synthetic and real networks (Haddadan et al., 31 May 2024).
  • Joint list selection algorithms achieve strictly higher probability of selecting the best item than either standalone human or algorithm, provided anchoring bias is weak and accuracies balanced; otherwise, collaboration may degrade performance (Donahue et al., 2023).

4. Practical Deployment and Application Domains

BetterTogether principles have direct relevance in multiple real-world domains:

| Domain | BetterTogether Instantiation | Characteristic Gains/Traits |
|---|---|---|
| Quant finance | Alpha combination via weighted regression | Real-time, scalable portfolio construction |
| NLP / IR | MetaEL+ ensemble for entity linking | Improved F1/precision across corpora |
| Modular NLP | Alternating prompt & weight optimization | End-to-end pipeline gains; DSPy release |
| Cybersecurity | Optimal agent intervention | Logarithmic interventions yield network-wide fixes |
| Collaborative ML | Human-algorithm joint lists | Strict improvement in decision selection |

In finance, the technique resolves singularity and scaling constraints of sample covariance matrices. In NLP, ensemble meta-predictors offer robust entity annotation across varied corpora. LM program pipelines benefit from self-teaching alternation strategies when gradient or label signals are unavailable. Networked agent systems, as in cybersecurity or social platforms, require minimal targeted interventions identified through network structure. Joint decision-making algorithms inform the design of recommender systems, diagnostic workflows, and crowdsourced labeling tasks.

5. Theoretical Insights, Trade-offs, and Limitations

Several critical theoretical results and trade-offs are defined in the literature:

  • No Complementarity under Uniformity/Convexity: Complementarity fails if one source dominates all regimes, or combining rules are strictly convex (Donahue et al., 2022).
  • Fairness Tension: Loss reduction across the entire system may increase disparity (variance in losses), or fail to ensure every regime/group benefits; strict fairness and system-wide complementarity can be mutually exclusive (Donahue et al., 2022).
  • Anchoring Risks: Human anchoring on algorithmic output reverses benefits; collaborative performance may become strictly worse (Donahue et al., 2023).
  • NP-hardness in Egalitarian Network Optimization: While aggregate improvements can be optimized efficiently, egalitarian objectives are provably intractable to solve exactly, though greedy submodular approximations remain effective (Haddadan et al., 31 May 2024).
  • Feature Importance Analysis: Empirically, surface form ratio and positional features are most predictive in entity linking ensembles, guiding feature engineering in similar tasks (João et al., 2021).

6. Implementation and Source Code

Implementations span several frameworks and code releases:

  • An explicit R function (calc.opt.weights(...)) demonstrates the weighted regression for billion-alpha combination, with optional overall mode removal for portfolio construction (Kakushadze et al., 2016).
  • The MetaEL+ entity linking ensemble uses MEKA (for multi-label RF classification) and SMO for binary STRICT variants (João et al., 2021).
  • The DSPy toolkit hosts the alternating optimizer for modular LM pipelines, with prompt optimization via BootstrapFewShotWithRandomSearch (BFRS) and weight optimization via BootstrapFinetune (BFT); see the sketch after this list (Soylu et al., 15 Jul 2024).
  • Networked cooperative learning algorithms are implemented with clear influence score routines, greedy approximation for submodular objectives, and experimental validation over both synthetic and real graphs (Haddadan et al., 31 May 2024).
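
For the prompt half of the alternation, a hedged DSPy usage sketch follows. It assumes a DSPy 2.x-style API (dspy.LM, string signatures, BootstrapFewShotWithRandomSearch), whose exact names and signatures vary across releases, so treat it as illustrative rather than canonical:

```python
# Hedged sketch assuming a DSPy 2.x-style API; the model identifier, metric,
# and tiny trainset are illustrative assumptions, not from the paper.
import dspy
from dspy.teleprompt import BootstrapFewShotWithRandomSearch

dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))   # assumed model identifier

qa = dspy.ChainOfThought("question -> answer")     # minimal one-module program

trainset = [  # tiny assumed training set, for illustration only
    dspy.Example(question="What is 2 + 2?", answer="4").with_inputs("question"),
]

def exact_match(example, pred, trace=None):
    return example.answer.strip().lower() == pred.answer.strip().lower()

# BFRS: bootstrap few-shot demos, then random-search over candidate programs.
optimizer = BootstrapFewShotWithRandomSearch(metric=exact_match,
                                             num_candidate_programs=4)
compiled = optimizer.compile(qa, trainset=trainset)
# A weight step (BootstrapFinetune over traces from `compiled`) would follow,
# alternating with further BFRS rounds per the BetterTogether schedule.
```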

7. Future Directions

Research suggests several open areas:

  • Extension to broader model classes: Applying BetterTogether algorithms to neural entity linkers, larger and multi-modal LM architectures, and networked agents with complex dependencies.
  • Theoretical analysis of prompt-weight synergy: Understanding underlying reasons for observed performance gains in alternating modular LM optimization (Soylu et al., 15 Jul 2024).
  • Advanced combinatorial intervention: Improving fairness/egalitarian intervention selection in large networks via more sophisticated approximation or heuristic methods (Haddadan et al., 31 May 2024).
  • Anchoring mitigation: Developing strategies to counteract human decision-maker anchoring on system suggestions (Donahue et al., 2023).
  • Feature expansion and selection: Identifying new categories of predictive features to better inform ensemble learners and meta-predictors (João et al., 2021).

In summary, the BetterTogether Algorithm encompasses a family of scalable, theoretically robust, and empirically validated approaches to combining disparate but complementary systems, models, or agents for enhanced predictive performance, decision quality, and systemic robustness across domains.
