
Experience Scaling: Post-Deployment Evolution For Large Language Models (2509.18771v1)

Published 23 Sep 2025 in cs.AI

Abstract: Scaling model size, training data, and compute power have driven advances in LLMs, but these approaches are reaching saturation as human-generated text is exhausted and further gains diminish. We propose experience scaling, a framework for continuous post-deployment evolution for LLMs through autonomous interaction with the environment and collaborative sharing of accumulated experience. The framework captures raw interactions, distills them into compact, reusable knowledge, and periodically refines stored content to preserve relevance and efficiency. We validate the framework in simulated real-world scenarios involving generalization to previously unseen but related tasks, repetitive queries, and over-saturated knowledge stores. Across all settings, experience scaling improves accuracy, sustains performance over time, and maintains gains when applied to novel situations. These results demonstrate that structured post-deployment learning can extend LLM capabilities beyond the limits of static human-generated data, offering a scalable path for continued intelligence progress.

Summary

  • The paper introduces 'experience scaling', a paradigm where large language models evolve post-deployment by learning from real-world interactions.
  • It details methods for collecting, distilling, and sharing interaction traces to refine model capabilities across diverse deployments.
  • System-level testing in simulated deployment scenarios shows improved accuracy, performance sustained over time, and gains that carry over to novel situations, validating the practical benefits of post-deployment evolution.

Experience Scaling: Post-Deployment Evolution for LLMs

Introduction

The paper "Experience Scaling: Post-Deployment Evolution for LLMs" addresses a pivotal challenge in the development and deployment of LLMs: the diminishing returns from conventional scaling approaches that add parameters, training data, and compute. As the supply of high-quality human-generated text nears exhaustion, further scaling yields limited benefits. This work introduces a paradigm termed "experience scaling", which shifts the focus to post-deployment learning so that LLMs can continue to evolve through interaction with their environment.

Experience Scaling Paradigm

Experience scaling is a post-deployment learning framework whereby LLMs actively collect interaction traces from their operational environment, distill the traces into a compact, reusable form, and iteratively refine their stored experiences. This continuously evolving experience store is designed to be shared across different deployed systems, fostering growth and adaptation beyond the initial deployment phase. This paradigm offers a practical avenue for maintaining capability growth, overcoming the restrictions posed by traditional scaling methodologies once models are operational.
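The collect, distill, and refine loop described above can be sketched as a simple shared experience store. This is a minimal illustration under stated assumptions, not the authors' implementation: the class name `ExperienceStore`, the scoring scheme, and the keyword-overlap retrieval are all hypothetical stand-ins (a real system would likely use the LLM itself for distillation and embedding-based retrieval).

```python
from dataclasses import dataclass, field

@dataclass
class ExperienceStore:
    """Hypothetical store of distilled experiences, shareable across deployments."""
    entries: list = field(default_factory=list)
    capacity: int = 100  # refinement trims the store to this budget

    def collect(self, query: str, response: str, feedback: float) -> None:
        # Step 1: capture a raw interaction trace from the environment.
        self.entries.append({"query": query, "response": response,
                             "score": feedback, "distilled": False})

    def distill(self) -> None:
        # Step 2: compress raw traces into compact, reusable knowledge.
        # A one-line summary stands in here; the paper's framework would
        # use the model itself to abstract the lesson learned.
        for e in self.entries:
            if not e["distilled"]:
                e["summary"] = f"Q: {e['query'][:40]} -> score {e['score']:.2f}"
                e["distilled"] = True

    def refine(self) -> None:
        # Step 3: periodically drop low-value entries so the store stays
        # relevant and bounded, avoiding over-saturation.
        self.entries.sort(key=lambda e: e["score"], reverse=True)
        del self.entries[self.capacity:]

    def retrieve(self, query: str, k: int = 3) -> list:
        # At inference time, fetch the k most relevant stored experiences.
        # Naive keyword overlap stands in for real semantic retrieval.
        return sorted(
            self.entries,
            key=lambda e: len(set(query.split()) & set(e["query"].split())),
            reverse=True,
        )[:k]
```

In this sketch, several deployed instances could share one `ExperienceStore`, with each instance contributing traces via `collect` and benefiting from the pooled, refined knowledge via `retrieve`.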

Practical Implications

The implementation of experience scaling has significant implications for various fields, including safety monitoring, robotics, edge intelligence, and multi-agent collaboration. By facilitating continuous learning and adaptation, experience scaling can enhance the robustness and efficiency of AI systems operating in dynamic real-world environments. Moreover, this approach supports responsible AI development by allowing models to learn under deployment conditions, refining their responses based on actual user interactions and feedback.

System-Level Testing

This paper presents system-level validation of the experience scaling mechanism, underscoring its practical applicability and integration into existing AI frameworks. By exercising these processes in simulated real-world scenarios (generalization to previously unseen but related tasks, repetitive queries, and over-saturated knowledge stores), the paper demonstrates the feasibility and benefits of post-deployment evolution, marking a step toward sustained advancement of AI capabilities.

Data and Code Availability

The authors provide the code and scripts required to replicate the experiments conducted in the paper, available at the repository: https://github.com/NICE-HKU/ExperienceScaling. This transparency facilitates peer verification and further exploration of the proposed methodologies, promoting collaborative progress within the AI research community.

Conclusion

The introduction of experience scaling proposes a novel pathway for LLMs to achieve sustained capability growth post-deployment, transcending the limitations of traditional scaling practices. This approach aligns with the broader goals of developing responsible and efficient AI systems capable of refining their proficiency based on lived experiences and interactions. Future developments may extend this paradigm to a wider array of intelligent agents, enhancing adaptation and learning across diverse applications and environments.
