AutoMR: Adaptive Multi-Domain Algorithms
- AutoMR is an integrated suite of automated algorithms that adaptively optimize machine reasoning, learning, and representation across domains like motion recognition, software testing, recommendation, and algorithm selection.
- It leverages modular pipelines, Bayesian optimization, particle swarm methods, and reinforcement learning to handle sensor variability, parameter tuning, and meta reasoning dynamically.
- Empirical evaluations demonstrate state-of-the-art performance, with gains such as an accuracy improvement of over 5% on OPPORTUNITY, higher Recall@k in LLM-based recommendation, and wins in 8 of 10 ASlib algorithm selection scenarios.
AutoMR refers to a suite of algorithms, frameworks, and techniques unified by automated, adaptive mechanisms for machine reasoning, learning, and representation across diverse domains. The term encapsulates methods for motion recognition, metamorphic relation generation, memory retrieval enhancement in LLM-based recommendation, advanced algorithm selection, and meta reasoning skeleton search. Each AutoMR instantiation addresses specific challenges in its domain, leveraging principled automation to replace manual design or laborious tuning, and often operates on multimodal or heterogeneous data.
1. End-to-End Motion Recognition with Automated Pipeline (Time Series Domain)
AutoMR denotes a universal time series motion recognition pipeline designed for multimodal datasets and sensor heterogeneity (Zhang et al., 21 Feb 2025). The framework features a modular hierarchy:
- Core Layer: Encapsulates generic procedures for data preprocessing (standardization, segmentation, augmentation), model selection, training orchestration, and fully automated hyperparameter tuning.
- Dataset-Specific Layer: Interface scripts invoke the core modules per dataset, enabling adaptation to various sensor types (IMU, sEMG, skeleton, wearable arrays).
Data preprocessing standardizes formats and configures segmentation and windowing parameters according to sensor modality and metadata. For instance, IMU datasets (UCI-HAR, MHEALTH) and sEMG datasets (DB4) adopt dataset-specific window sizes and overlap rates, all set automatically.
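As a concrete illustration, a minimal sliding-window segmentation sketch in Python; the window size and overlap below follow the common UCI-HAR convention (128 samples at 50% overlap), while the actual pipeline selects these values automatically per dataset:

```python
import numpy as np

def segment_windows(signal: np.ndarray, window_size: int, overlap: float) -> np.ndarray:
    """Slice a (timesteps, channels) recording into overlapping windows.
    window_size and overlap stand in for the dataset-specific values the
    pipeline derives automatically from sensor metadata."""
    step = max(1, int(window_size * (1.0 - overlap)))
    starts = range(0, signal.shape[0] - window_size + 1, step)
    return np.stack([signal[s : s + window_size] for s in starts])

# Example: a 50 Hz IMU stream with 2.56 s windows at 50% overlap, the
# UCI-HAR convention; other datasets would receive different settings.
imu = np.random.randn(1000, 9)                    # 1000 timesteps, 9 IMU channels
windows = segment_windows(imu, window_size=128, overlap=0.5)
print(windows.shape)                              # (14, 128, 9)
```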
The backbone model is a variant of QuartzNet using 1D time-channel separable convolutions and residual connections, flexibly scaled per dataset. Automated hyperparameter optimization via SMAC (Bayesian sequential model-based algorithm configuration) explores ranges for learning rates, batch sizes, dropout, and model hyperparameters.
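A minimal PyTorch sketch of such a block, with illustrative channel count, kernel size, and dropout rather than the paper's exact configuration:

```python
import torch
import torch.nn as nn

class TimeChannelSeparableBlock(nn.Module):
    """QuartzNet-style block: a depthwise convolution over time followed by a
    pointwise (1x1) convolution over channels, with a residual connection.
    Sizes are illustrative; AutoMR scales them per dataset."""

    def __init__(self, channels: int, kernel_size: int = 33, dropout: float = 0.1):
        super().__init__()
        self.depthwise = nn.Conv1d(channels, channels, kernel_size,
                                   padding=kernel_size // 2, groups=channels)
        self.pointwise = nn.Conv1d(channels, channels, kernel_size=1)
        self.norm = nn.BatchNorm1d(channels)
        self.act = nn.ReLU()
        self.drop = nn.Dropout(dropout)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, timesteps)
        y = self.pointwise(self.depthwise(x))
        return self.drop(self.act(self.norm(y) + x))  # residual connection

x = torch.randn(8, 64, 128)                      # 8 windows, 64 channels, 128 steps
print(TimeChannelSeparableBlock(64)(x).shape)    # torch.Size([8, 64, 128])
```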
Metrics include accuracy, F1-score, recall, and confusion matrices. AutoMR attains state-of-the-art performance on 8 out of 10 benchmark datasets, most notably on OPPORTUNITY, where accuracy gains exceed 5% over previous benchmarks.
| Dataset | AutoMR Acc (%) | Prev. SOTA (%) |
|---|---|---|
| SHREC2021 | 91.48 | 89.93 |
| MHEALTH | 99.81 | 99.80 |
| UCI-HAR | 97.05 | 95.25 |
| DB4 | 66.86 | 73.00 |
| Berkeley-MHAD | 98.98 | 97.91 |
The pipeline handles sensor variability, scales to new domains with minimal reconfiguration, and is generalizable for future extensions (e.g., 2D CNN, transformers, additional modalities).
2. Automated Metamorphic Relation Generation (Software Testing Domain)
Within metamorphic testing, AutoMR is a search-based method for the automatic discovery of numerical metamorphic relations (MRs) (Ayerdi et al., 2023). Its principal mechanism applies particle swarm optimization (PSO) to generate parameterized polynomial equations describing input–output relations over program executions.
A formal MR template is

$$
\mathcal{R}_{\mathrm{in}}(i_1, \dots, i_m) = 0 \;\Longrightarrow\; \mathcal{R}_{\mathrm{out}}(i_1, \dots, i_m, o_1, \dots, o_n) = 0,
$$

where both relations are polynomials over the input and output variables of the sampled program executions, and coefficients are fitted for correctness over sampled runs.
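A minimal, self-contained sketch of the PSO coefficient search. It assumes a toy program f(x) = sin(x), a linear template o1 + c1·o2 + c2 = 0 with the leading coefficient fixed at 1 (to rule out the trivial all-zero relation), and follow-up inputs fixed to the negated source inputs; this drastically simplifies AutoMR's general polynomial search but shows the mechanism:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    return np.sin(x)   # toy numerical program under test (assumption)

# Source executions and negated follow-up executions; record both outputs.
src = rng.uniform(-np.pi, np.pi, 200)
o1, o2 = f(src), f(-src)

def fitness(c):
    """Mean squared residual of o1 + c[0]*o2 + c[1] = 0 over sampled runs,
    a stand-in for AutoMR's false-positive-driven fitness."""
    return float(np.mean((o1 + c[0] * o2 + c[1]) ** 2))

# Vanilla PSO over the two free coefficients.
pos = rng.uniform(-2, 2, (30, 2))
vel = np.zeros_like(pos)
pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmin()].copy()

for _ in range(100):
    r1, r2 = rng.random((30, 2)), rng.random((30, 2))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    fit = np.array([fitness(p) for p in pos])
    better = fit < pbest_fit
    pbest[better], pbest_fit[better] = pos[better], fit[better]
    gbest = pbest[pbest_fit.argmin()].copy()

print(f"discovered MR: o1 + {gbest[0]:.3f}*o2 + {gbest[1]:.3f} = 0")
# converges to o1 + 1.000*o2 + 0.000 = 0, i.e. sin(-x) = -sin(x)
```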
AutoMR operates under strict limitations:
- Only equality (not inequality) relations.
- Numerical inputs/outputs (no logical, sequence, or boolean structures).
- Fitness is evaluated solely on minimizing false positives (not fault detection).
Empirical evidence demonstrates that AutoMR synthesizes valid numerical MRs; however, it is outperformed by GenMorph, an evolutionary approach that supports broader MR classes (arbitrary logic, sequence manipulations, inequalities) and explicitly optimizes for both false positives and false negatives.
| Aspect | AutoMR | GenMorph (Gen) |
|---|---|---|
| MR Type Support | Numeric, polynomial | Numeric, boolean, logical, sequence |
| Optimization | PSO | Genetic Programming |
| Fault Detection | Not optimized | Explicitly optimized |
| Filtering | Random inputs | Two-stage, search-based |
| Implementation | Not available | Open-source |
The approach is tailored to automated software testing in numerical domains and is constrained in broader applicability.
3. Automatic Memory Retrieval for Enhanced LLM-Based Recommendation
AutoMR in generative recommendation leverages an Automatic Memory-Retrieval architecture to inject long-term user interests into LLM-based next-item prediction systems (Wang et al., 23 Dec 2024). The framework consists of:
- Memory Module: Stores all prior user interaction representations up to the current time step t, except the most recent short-term context.
- Retriever Module: Implements an MLP to select the most relevant historical interaction, scoring memory entries by expected reduction in LLM perplexity when predicting upcoming items.
- LLM-based Generator: Concatenates short-term context encoding with the retrieved long-term memory, passing this to the LLM's upper layers for next-item generation.
Label assignment for retriever training is based on the impact of each memory entry on LLM prediction perplexity. The retriever is trained to align its score distribution with a softmax-normalized vector measuring the improvement per memory element.
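A schematic PyTorch rendition of this label construction and retriever objective; the perplexity numbers below are toy values standing in for actual LLM evaluations, and the scalar score vector stands in for the MLP retriever:

```python
import torch
import torch.nn.functional as F

def retriever_targets(base_ppl, ppl_with_mem, temperature=1.0):
    """Softmax-normalized labels: the more a memory entry reduces the LLM's
    perplexity on the upcoming items, the more target mass it receives."""
    improvement = base_ppl - ppl_with_mem          # broadcast over memory entries
    return F.softmax(improvement / temperature, dim=-1)

# Toy numbers (assumed): perplexity without memory vs. with each of 4 entries.
target = retriever_targets(torch.tensor(12.0),
                           torch.tensor([11.2, 12.5, 9.8, 12.0]))

scores = torch.randn(4, requires_grad=True)        # stand-in for the MLP retriever
loss = F.kl_div(F.log_softmax(scores, dim=-1), target, reduction="batchmean")
loss.backward()  # pulls the retriever's score distribution toward the labels
```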
Experimental results on the Amazon Books and Movies datasets show AutoMR consistently outperforming classical baselines (SASRec, BIGRec) and semantic retrieval methods on Recall@1, Recall@5, and NDCG@5.
| Method | Book Recall@1 | Movie Recall@1 |
|---|---|---|
| AutoMR | 0.0291 | 0.0601 |
| BIGRec | 0.0281 | 0.0575 |
| TRSR | 0.0283 | 0.0591 |
This demonstrates the efficacy of learned, dynamic retrieval for modeling long-range user interests beyond context window constraints.
4. LLM-Driven Algorithm Selection
In the algorithm selection domain, AutoMR refers to automated machine representation strategies that utilize LLMs for universal, contextual embedding of algorithm code/descriptions (Wu et al., 2023). The pipeline involves:
- Extraction of algorithm embeddings by passing code or pseudo-code through an LLM, optionally mediated by LSTM and feature selection modules.
- Differentiable feature selection using Gumbel-Softmax, retaining only the most predictive features and discarding irrelevant dimensions (see the sketch after this list).
- Adaptive embedding layers for known algorithms, fused via a learned weighted sum to yield the final algorithm representation.
- Problem features independently encoded, similarity calculated via cosine function, and the final matching degree predicted by an MLP selection module.
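A minimal PyTorch sketch of the selection and matching steps; the embedding width, per-feature gate parameterization, and MLP sizes are illustrative assumptions, not the paper's configuration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GumbelFeatureSelector(nn.Module):
    """Per-dimension keep/drop gates sampled with the straight-through
    Gumbel-Softmax, so feature selection stays differentiable."""

    def __init__(self, dim: int):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(dim, 2))  # (keep, drop) logits per feature

    def forward(self, x: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
        gates = F.gumbel_softmax(self.logits, tau=tau, hard=True)[:, 0]
        return x * gates

dim = 768                                    # assumed LLM embedding width
selector = GumbelFeatureSelector(dim)
algo_emb = selector(torch.randn(dim))        # LLM embedding of algorithm code
prob_emb = torch.randn(dim)                  # independently encoded problem features

sim = F.cosine_similarity(algo_emb, prob_emb, dim=0)
matcher = nn.Sequential(nn.Linear(2 * dim + 1, 128), nn.ReLU(), nn.Linear(128, 1))
score = matcher(torch.cat([algo_emb, prob_emb, sim.unsqueeze(0)]))  # matching degree
```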
Theoretical analysis (via Rademacher complexity) bounds the model's generalization capacity, with feature selection shown to reduce overfitting risk. Empirical comparisons on ASlib benchmarks indicate AS-LLM (AutoMR/LLM pipeline) significantly outperforms classical portfolio and classifier-based algorithm selection methods in 8 out of 10 scenarios.
5. Adaptive Meta Reasoning Skeleton Search for LLM Reasoning
AutoMR also denotes a meta reasoning skeleton search framework for guiding LLM reasoning (Zhang et al., 5 Oct 2025). It frames meta reasoning skeletons as directed acyclic graphs (DAGs), generalizing all prior reasoning skeleton structures. Each DAG node represents a reasoning step, each edge a meta reasoning strategy (Next, Reflect, Explore, etc.), and the architecture is dynamically constructed at inference time conditioned on the evolving reasoning context.
A dynamic skeleton sampling algorithm incrementally builds the DAG via context-conditioned sampling policies (Algorithm 1 in the paper), leveraging a lightweight MLP to select strategies for incoming edges per node. The search is formulated as a policy optimization problem, solved via REINFORCE, maximizing expected reasoning accuracy.
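A simplified PyTorch sketch of the sampling policy and REINFORCE update. The strategy names follow the paper's examples; the context encoder is stubbed with random vectors, and the DAG is reduced to a chain for brevity, so this illustrates the mechanism rather than reproducing Algorithm 1:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

STRATEGIES = ["Next", "Reflect", "Explore"]       # edge strategies named in the text

class EdgePolicy(nn.Module):
    """Lightweight MLP scoring the meta reasoning strategy for the incoming
    edge of each new DAG node, conditioned on a context embedding."""

    def __init__(self, ctx_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(ctx_dim, 64), nn.ReLU(),
                                 nn.Linear(64, len(STRATEGIES)))

    def forward(self, ctx: torch.Tensor) -> torch.Tensor:
        return F.log_softmax(self.net(ctx), dim=-1)

policy = EdgePolicy()
log_probs, skeleton = [], []                      # skeleton: (parent, strategy) edges
ctx = torch.randn(128)                            # encoding of the query (stub)
for node in range(1, 4):                          # grow the skeleton step by step
    logp = policy(ctx)
    action = torch.distributions.Categorical(logits=logp).sample()
    log_probs.append(logp[action])
    skeleton.append((node - 1, STRATEGIES[action.item()]))
    ctx = torch.randn(128)                        # would re-encode the new context

reward = 1.0                                      # 1 if the final answer is correct
loss = -reward * torch.stack(log_probs).sum()     # REINFORCE policy-gradient loss
loss.backward()
```

On the reported benchmarks (accuracy, %):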
| Method | MATH-500 (LLaMA) | GSM8K (LLaMA) | Science (LLaMA) |
|---|---|---|---|
| CoT | 36.8 | 71.1 | 31.5 |
| MRP | 40.8 | 74.6 | 36.4 |
| Meta-Reasoner | 44.4 | 76.8 | 44.3 |
| rStar | 46.6 | 78.9 | 42.6 |
| MaAS | 46.2 | 76.4 | 44.6 |
| AutoMR (DAG) | 50.2 | 81.9 | 48.9 |
AutoMR’s formulation and sampling enable context-adaptive, query-specific, and fine-grained meta reasoning flows, achieving superior scaling efficiency and accuracy across mathematical and general benchmarks.
6. Common Concepts and Future Research Trajectories
AutoMR methodologies highlight a trend toward principled automation of design, configuration, and reasoning mechanisms in machine learning, thereby reducing manual overhead and enhancing adaptability. Common foundation elements include:
- Modular pipeline engineering with automatic dataset and configuration adaptation.
- Search/optimization techniques (Bayesian, PSO, genetic programming, RL-based sampling).
- Use of LLMs for representation learning, retrieval, and context conditioning.
- Empirical evaluation across diverse benchmarks, evidencing state-of-the-art performance.
Identified limitations include restricted spatial modeling in motion recognition, the restriction of MR generation to numerical domains, and the algorithm selection pipeline's dependence on accessible algorithm representations (code or pseudo-code). Prospective advancements span expanding to new data modalities (video, tactile), integrating transformer architectures, and refining context-adaptive skeleton search strategies for broader classes of queries.
AutoMR denotes a paradigm of end-to-end, adaptive, cross-domain automation, contributing substantively to robust, generalizable, and scalable solutions for time-series analysis, software testing, recommendation, algorithm selection, and meta reasoning.