Dynamic Movement Primitive Parameterization
- Dynamic Movement Primitives (DMPs) are a motion encoding framework that uses nonlinear dynamics and interpretable parameters to represent complex robotic trajectories.
- The parameterization couples canonical and transformation systems with basis functions, enabling accurate weight learning from demonstrations for precise motion reproduction.
- Extensions such as reversibility, neural parameterization, and probabilistic models enhance DMP adaptability, personalization, and integration with multimodal inputs in advanced robotics.
Dynamic Movement Primitive (DMP) Parameterization
Dynamic Movement Primitives (DMPs) are a class of motion encoding frameworks based on stable nonlinear dynamical systems coupled with flexible function approximators. DMPs parametrically represent complex trajectories via a low-dimensional set of physically interpretable parameters, supporting robust generation and generalization for robotic motion. DMP parameterization encompasses the mathematical form of the transformation and canonical systems, the basis function structure of the forcing term, methods for weight identification, and extensions for properties such as reversibility, compliance, and multi-modal perception.
1. Core Mathematical Structure and Canonical System
A canonical DMP comprises two coupled subsystems:
- Canonical (Phase) System: Encodes phase progression with a monotonic variable $x$ (or $s$) that evolves independently of absolute time:
$$\tau \dot{x} = -\alpha_x\, h(x), \qquad x(0) = 1,$$
where $\tau$ sets execution duration and $h$ is a positive scalar function (commonly linear $h(x) = 1$ or exponential $h(x) = x$, giving $x(t) = e^{-\alpha_x t/\tau}$).
- Transformation (Attractor) System: Encodes the actual trajectory dynamics. In original DMPs, the system takes the form (for position $y$):
$$\tau \dot{z} = \alpha_z\big(\beta_z (g - y) - z\big) + f(x), \qquad \tau \dot{y} = z,$$
where $g$ is the goal, $\alpha_z$, $\beta_z$ are gains (typically $\beta_z = \alpha_z / 4$ for critical damping), and $f(x)$ is the nonlinear forcing term.
Many modern DMPs, such as the reversible variant, reformulate this as a tracking system against a learned reference trajectory $y_{\mathrm{ref}}$:
$$\ddot{y} = \ddot{y}_{\mathrm{ref}} + D(\dot{y}_{\mathrm{ref}} - \dot{y}) + K(y_{\mathrm{ref}} - y),$$
with explicit stiffness $K$ and damping $D$ decoupled from time scaling (Sidiropoulos et al., 2020).
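The coupled subsystems above can be sketched numerically. The snippet below Euler-integrates a classic discrete DMP with a zero forcing term; the gain values ($\alpha_z = 25$, $\beta_z = \alpha_z/4$) and step sizes are common illustrative choices, not values prescribed by the cited papers.

```python
import numpy as np

def integrate_dmp(y0, g, tau=1.0, alpha_z=25.0, beta_z=25.0 / 4,
                  alpha_x=4.0, forcing=lambda x: 0.0, dt=0.001, T=2.0):
    """Euler-integrate a classic discrete DMP (illustrative gain values).

    Canonical system:      tau * dx/dt = -alpha_x * x
    Transformation system: tau * dz/dt = alpha_z*(beta_z*(g - y) - z) + f(x)
                           tau * dy/dt = z
    """
    x, y, z = 1.0, y0, 0.0
    ys = []
    for _ in range(int(T / dt)):
        f = forcing(x)
        dz = (alpha_z * (beta_z * (g - y) - z) + f) / tau
        dy = z / tau
        dx = -alpha_x * x / tau
        z += dz * dt
        y += dy * dt
        x += dx * dt
        ys.append(y)
    return np.array(ys)

traj = integrate_dmp(y0=0.0, g=1.0)  # zero forcing: pure goal attractor
```

With $f \equiv 0$ the transformation system is a critically damped spring-damper, so the trajectory converges monotonically to the goal; a learned forcing term shapes the transient without affecting this guarantee.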
2. Forcing Term Parameterization: Basis Functions and Weights
The nonlinearity required to reproduce general motion is encoded in $f(x)$, approximated as a normalized weighted sum of basis functions:
$$f(x) = \frac{\sum_{i=1}^{N} w_i\, \psi_i(x)}{\sum_{i=1}^{N} \psi_i(x)}\; x\,(g - y_0).$$
Standard choices for $\psi_i$ are:
- Gaussian RBFs: $\psi_i(x) = \exp\!\big(-h_i (x - c_i)^2\big)$, with centers $c_i$ (often linearly or exponentially spaced) and widths $h_i$ adjusted for desired overlap (Jahn et al., 2022, Böckmann et al., 2016).
- Compactly Supported Bases (Mollifiers, Wendland): basis functions $\psi_i$ with compact support, promoting numerical stability (the regression matrix is banded, not full) and efficient local updates. These enable condition numbers to grow linearly rather than exponentially with $N$, improving scaling for high-resolution modeling (Ginesi et al., 2019).
The number and structure of basis functions directly control the parameterization complexity: for $n$-dimensional motion with $N$ kernels per dimension, the total number of weights is $nN$ (Jahn et al., 2022). Empirical results show that roughly $11$ RBFs per DOF suffice for sub-millimeter trajectory reconstruction on human data.
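As an illustration of the basis-function parameterization, the sketch below builds normalized Gaussian activations with centers exponentially spaced in phase (i.e. uniformly spaced in time, a common heuristic assumed here rather than taken from the cited papers) and evaluates the weighted forcing term.

```python
import numpy as np

def rbf_features(x, n_basis=11, alpha_x=4.0):
    """Normalized Gaussian activations psi_i(x) / sum_j psi_j(x).

    Centers sit at the phase values reached at equally spaced times
    (exponentially spaced in x); widths are derived from center spacing
    so that neighbouring kernels overlap.
    """
    c = np.exp(-alpha_x * np.linspace(0, 1, n_basis))  # centers in phase space
    h = 1.0 / (np.diff(c) ** 2)                        # widths from spacing
    h = np.append(h, h[-1])
    psi = np.exp(-h * (x - c) ** 2)
    return psi / psi.sum()

def forcing(x, w, g, y0, n_basis=11):
    """f(x) = x * (g - y0) * sum_i w_i psi_i(x) / sum_i psi_i(x)."""
    return x * (g - y0) * (rbf_features(x, n_basis) @ w)
```

With all-zero weights the forcing term vanishes and the DMP reduces to the pure goal attractor; the $x$ factor guarantees $f \to 0$ as the phase decays, so the attractor dominates near the goal.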
3. Parameter Identification and Learning
Weights $w_i$ are learned by direct regression from demonstration(s). For timed data $(y_j, \dot{y}_j, \ddot{y}_j)$ and canonical phase $x_j$, the target signal for each sample is
$$f_{\mathrm{target}}(x_j) = \tau^2 \ddot{y}_j - \alpha_z\big(\beta_z (g - y_j) - \tau \dot{y}_j\big),$$
and the weights are chosen to minimize
$$\sum_j \big(f_{\mathrm{target}}(x_j) - f(x_j)\big)^2.$$
This reduces to weighted least squares. For multiple demonstrations, align them spatially and temporally, extract the forcing target for each, and solve a joint regression over all demos (Ginesi et al., 2019). Ridge regularization may be used to improve generalization.
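The regression step can be sketched end-to-end as follows. The minimum-jerk demonstration, gain values, kernel placement, and ridge strength are illustrative assumptions standing in for recorded data and tuned settings, not the cited papers' exact configuration.

```python
import numpy as np

# Learn forcing-term weights from one demonstration by ridge regression.
tau, alpha_z, beta_z, alpha_x, N = 1.0, 25.0, 25.0 / 4, 4.0, 15

t = np.linspace(0, tau, 500)
s = t / tau
y = 10 * s**3 - 15 * s**4 + 6 * s**5           # minimum-jerk demo, y0=0, g=1
yd = np.gradient(y, t)
ydd = np.gradient(yd, t)
g, y0 = y[-1], y[0]

x = np.exp(-alpha_x * t / tau)                  # canonical phase samples
f_target = tau**2 * ydd - alpha_z * (beta_z * (g - y) - tau * yd)

c = np.exp(-alpha_x * np.linspace(0, 1, N))     # kernel centers (uniform in time)
h = np.append(1 / np.diff(c)**2, 1 / np.diff(c)[-1]**2)
psi = np.exp(-h * (x[:, None] - c)**2)
Phi = (psi / psi.sum(axis=1, keepdims=True)) * (x * (g - y0))[:, None]

lam = 1e-8                                      # ridge regularizer
w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(N), Phi.T @ f_target)
residual = np.max(np.abs(Phi @ w - f_target))   # reconstruction error
```

The same `Phi` matrix stacked over several aligned demonstrations gives the joint multi-demo regression described above.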
Some enhanced DMPs decouple the learning of the path (reference trajectory) and the velocity profile. In reversible DMPs, a two-phase process first fits to position-only data and then kinesthetically teaches speed via a canonical system modified for phase pushing, followed by a second path fit (Sidiropoulos et al., 2020).
4. Hyperparameters, Affine Invariance, and Robustness
DMP performance is sensitive to the choice of:
- Number of kernels $N$
- Kernel widths $h_i$ or scaling
- Gains $\alpha_z$, $\beta_z$
- Stiffness/damping $K$, $D$ and time scaling $\tau$
Empirically, hyperparameter grids can be optimized for trade-offs between trajectory accuracy and smoothness. Some DMP formulations exploit affine invariance: by aligning starts/goals and applying the corresponding affine transformation to the forcing term, trajectory shapes become invariant under rotation, scaling, and translation (Ginesi et al., 2019).
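A toy version of such a hyperparameter grid search might look like the following; the stand-in target signal, the scoring function, and its accuracy/smoothness weighting are all hypothetical placeholders for a real trajectory-level evaluation.

```python
import numpy as np
from itertools import product

# Toy grid search: trade forcing-term fit error against weight roughness
# across kernel counts and width scales (all settings illustrative).
x = np.exp(-4.0 * np.linspace(0, 1, 400))        # phase samples
f_target = np.sin(3 * np.linspace(0, 1, 400))    # stand-in forcing target

def fit_score(n_kernels, width_scale, lam=1e-6):
    c = np.exp(-4.0 * np.linspace(0, 1, n_kernels))
    h = width_scale * np.append(1 / np.diff(c)**2, 1 / np.diff(c)[-1]**2)
    Phi = np.exp(-h * (x[:, None] - c)**2)
    Phi /= Phi.sum(axis=1, keepdims=True)
    w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(n_kernels),
                        Phi.T @ f_target)
    error = np.mean((Phi @ w - f_target)**2)     # accuracy term
    roughness = np.mean(np.diff(w)**2)           # smoothness proxy
    return error + 0.01 * roughness              # hypothetical weighting

grid = list(product([5, 10, 20], [0.5, 1.0, 2.0]))
best = min(grid, key=lambda p: fit_score(*p))
```

In practice the accuracy term would be trajectory reconstruction error after rollout, and the smoothness term jerk or acceleration of the generated motion.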
Auto-tuning procedures have been proposed: Hong et al. extract spring-damper ratios from multiple demonstrations via a bi-objective function balancing trajectory error and skill stability, yielding improved compliance and stability over hand tuning (Hong et al., 2023).
5. Extensions: Reversibility, Multi-Segment, Neural and Probabilistic DMPs
- Reversibility: In reversible DMPs, simply negating the sign of the phase evolution exactly retraces the learned path in reverse, with no need for dual training. The transformation system tracks a reference path, and forward/backward execution uses the same weights and parameters (Sidiropoulos et al., 2020).
- Segmentation: Long-horizon or discontinuous tasks are handled by segmenting motions into multiple DMPs, each with independent parameterizations. DSDNet, for instance, predicts variable-length DMP sequences end-to-end for complex tasks (Anarossi et al., 2023).
- Neural Parameterization: Deep variants replace the basis function expansion with neural networks (MLPs, CNNs, autoencoders), conditioning on sensory data for context-aware adaptation. These "Neural DMPs" enable richer generalization at the expense of higher data and computational requirements (Rožanec et al., 2022).
- Probabilistic DMPs: Reformulation as linear Gaussian state-space models enables Bayesian inference, explicit uncertainty propagation, closed-loop feedback via Kalman filtering, and failure detection using predictive likelihoods (Meier et al., 2016).
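The reversibility property can be illustrated with a minimal tracking sketch: the transformation system tracks a reference path parameterized by phase, so negating the phase rate replays the same path in reverse with identical parameters. Here `y_ref` is a hypothetical stand-in for a learned reference trajectory, and the stiffness/damping values are illustrative.

```python
import numpy as np

def y_ref(phase):                      # stand-in for a learned path, phase in [0, 1]
    return np.sin(np.pi * phase / 2)   # goes from 0 to 1

def track(phase0, phase_rate, K=400.0, D=40.0, dt=0.001, T=1.5):
    """Integrate  y'' = K*(y_ref(s) - y) - D*y'  while the phase s evolves.

    phase_rate > 0 plays the path forward; phase_rate < 0 retraces it.
    """
    s, y, yd = phase0, y_ref(phase0), 0.0
    for _ in range(int(T / dt)):
        s = np.clip(s + phase_rate * dt, 0.0, 1.0)
        ydd = K * (y_ref(s) - y) - D * yd
        yd += ydd * dt
        y += yd * dt
    return y

forward_end = track(0.0, +1.0)   # phase 0 -> 1: ends near y_ref(1) = 1
backward_end = track(1.0, -1.0)  # phase 1 -> 0: ends near y_ref(0) = 0
```

Because only the sign of the phase evolution changes, no second set of weights is needed for the backward motion, which is the key practical benefit of the reversible formulation.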
6. Application-Specific and Multimodal DMP Parameterization
- Vision/Language Integration: Frameworks such as KeyMPs and LMPs leverage vision-LLMs to select or generate DMP parameters from multimodal input, mapping high-level intent to low-level control actions (e.g., one-shot motion generation for occlusion-rich tasks, direct language-conditioned motion specification) (Anarossi et al., 14 Apr 2025, Dai et al., 2 Feb 2026).
- Skill Transfer and Human-Likeness: Parameter extraction pipelines optimize DMP gains and weights to match dynamic features of human demonstration, enabling transfer with enhanced compliance and alignment of motion characteristics (Hong et al., 2023).
- Personalization: By encoding user-specific starts/goals and interactive velocity scaling (e.g., via human-applied forces), DMPs generate trajectories personalized to individual physical characteristics and preferences (Franceschi et al., 11 Jun 2025).
7. Parameterization Summary Table
| Component | Parameter(s) | Typical Choices / Notes |
|---|---|---|
| Canonical system | $\tau$, $\alpha_x$ | $x(0) = 1$, linear/exponential decay |
| Transformation system | $\alpha_z$, $\beta_z$, $g$, $y_0$ | $\beta_z = \alpha_z/4$, critical damping, goal/start from demo |
| Forcing term basis | $N$, $c_i$, $h_i$, basis type | Gaussians, mollifiers, Wendlands |
| Forcing term weights | $w_i$ | Least-squares, LWR, learned or neural |
| Objective gains | $K$, $D$ | From demo or auto-tuning; balance trajectory error, compliance |
| Personalization | $y_0$, $g$, $\tau$ | Per demo, user-chosen, adaptive |
Comprehensive DMP parameterization thus involves careful specification and learning of time/phase dynamics, attractor gains, force-basis expansions, and spatial/temporal scaling, in addition to pragmatic extensions for modern robotics contexts. The mathematical and algorithmic variety now available supports robust skill representation and sophisticated adaptations to user, context, and high-level task specification (Sidiropoulos et al., 2020, Hong et al., 2023, Ginesi et al., 2019, Anarossi et al., 14 Apr 2025).