Emma

Summary:

  • Large Language Models (LLMs), such as Generative Pre-trained Transformers (GPTs), are useful across areas like healthcare diagnostics and business report summarization, but their memory processing limits how well they can prioritize information. To address this, the authors introduce a new LLM multi-agent framework with layered memories, demonstrated on stock and fund trading.
  • In this framework, each agent organizes its memory into three layers, each with a custom decay mechanism, to more closely resemble human cognition. Agents can also engage in inter-agent debate. Using the layered memory system, they integrate historical actions and market insights, which lets them track financial changes, formulate strategies, and debate investment decisions. Each agent is also equipped with individualized trading traits to improve decision robustness.
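The layered-memory idea above can be sketched in code. This is a minimal illustration, not the paper's actual mechanism: the class name `LayeredMemory`, the exponential-decay scoring, and the specific decay rates are all assumptions chosen to show how short-term memories could fade faster than long-term ones during retrieval.

```python
import math
from dataclasses import dataclass

@dataclass
class MemoryEvent:
    text: str
    importance: float  # relevance score in [0, 1] (hypothetical)
    timestamp: float   # when the event was recorded

class LayeredMemory:
    """Sketch of a three-layer memory (short-, mid-, long-term).

    Each layer has its own decay rate, so recent short-term events
    fade quickly while long-term events persist in retrieval.
    """
    def __init__(self, decay_rates=(0.5, 0.1, 0.01)):  # short, mid, long (assumed values)
        self.decay_rates = decay_rates
        self.layers = [[] for _ in decay_rates]

    def add(self, layer: int, event: MemoryEvent) -> None:
        self.layers[layer].append(event)

    def _score(self, event: MemoryEvent, now: float, rate: float) -> float:
        # Importance weighted by exponential time decay for this layer.
        return event.importance * math.exp(-rate * (now - event.timestamp))

    def retrieve(self, now: float, top_k: int = 3) -> list[str]:
        # Rank events from all layers by decayed score; return the top texts.
        scored = [
            (self._score(ev, now, rate), ev.text)
            for layer, rate in zip(self.layers, self.decay_rates)
            for ev in layer
        ]
        return [text for _, text in sorted(scored, reverse=True)[:top_k]]

mem = LayeredMemory()
mem.add(0, MemoryEvent("intraday price spike", 0.9, timestamp=0.0))
mem.add(2, MemoryEvent("quarterly earnings beat", 0.8, timestamp=0.0))
# At t=10, the short-term spike has decayed far more than the long-term fact.
print(mem.retrieve(now=10.0, top_k=2))
```

Under these assumed decay rates, the same two events trade places in the ranking as time passes, which is the behavior the layered design is meant to capture.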

Tags:

Research