
Dynamic Movement Primitive Parameterization

Updated 9 February 2026
  • Dynamic Movement Primitives (DMPs) are a motion encoding framework that uses nonlinear dynamics and interpretable parameters to represent complex robotic trajectories.
  • The parameterization couples canonical and transformation systems with basis functions, enabling accurate weight learning from demonstrations for precise motion reproduction.
  • Extensions such as reversibility, neural and probabilistic models enhance DMP adaptability, personalization, and integration with multimodal inputs in advanced robotics.

Dynamic Movement Primitive (DMP) Parameterization

Dynamic Movement Primitives (DMPs) are a class of motion encoding frameworks based on stable nonlinear dynamical systems coupled with flexible function approximators. DMPs parametrically represent complex trajectories via a low-dimensional set of physically interpretable parameters, supporting robust generation and generalization for robotic motion. DMP parameterization encompasses the mathematical form of the transformation and canonical systems, the basis function structure of the forcing term, methods for weight identification, and extensions for properties such as reversibility, compliance, and multi-modal perception.

1. Core Mathematical Structure and Canonical System

A canonical DMP comprises two coupled subsystems:

  1. Canonical (Phase) System: Encodes phase progression with a monotonic variable $x$ (or $s$) that evolves independently of absolute time:

$$\dot{x} = \frac{1}{\tau} h(x), \qquad x(0)=x_0, \quad x(t_f/\tau)=x_f$$

where $\tau>0$ sets the execution duration and $h(x)$ is a scalar function (commonly constant, $h(x)=1$, giving linear phase growth, or $h(x) = -a_x x$ with $a_x>0$, giving exponential phase decay).

  2. Transformation (Attractor) System: Encodes the actual trajectory dynamics. In original DMPs, the system takes the form (for position $y$):

$$\tau^2 \ddot{y} = \alpha_z \beta_z (g - y) - \alpha_z \tau \dot{y} + f_{\mathrm{nl}}(x)$$

where $g$ is the goal, $\alpha_z$ and $\beta_z$ are gains (typically $\alpha_z = 4\beta_z$ for critical damping), and $f_{\mathrm{nl}}$ is the nonlinear forcing term.
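The two coupled subsystems above can be sketched with explicit Euler integration. This is a minimal 1-DOF illustration, not a reference implementation: the gains, the exponential phase constant, the step size, and the zero default forcing term are all assumptions.

```python
import numpy as np

def dmp_rollout(y0, g, tau=1.0, dt=1e-3,
                alpha_z=25.0, a_x=4.6, f=lambda x: 0.0):
    """Euler-integrate a 1-DOF DMP.

    Canonical system:      tau * x_dot = -a_x * x          (exponential decay)
    Transformation system: tau^2 * y_dd = alpha_z*beta_z*(g - y)
                                          - alpha_z*tau*y_dot + f(x)
    """
    beta_z = alpha_z / 4.0              # critical damping: alpha_z = 4*beta_z
    x, y, yd = 1.0, float(y0), 0.0      # phase starts at 1 and decays toward 0
    traj = [y]
    for _ in range(int(round(tau / dt))):
        ydd = (alpha_z * beta_z * (g - y) - alpha_z * tau * yd + f(x)) / tau**2
        x  += dt * (-a_x * x / tau)     # phase evolves independently of y
        y  += dt * yd
        yd += dt * ydd
        traj.append(y)
    return np.array(traj)

traj = dmp_rollout(y0=0.0, g=1.0)
```

With the forcing term at zero, the rollout is a pure critically damped reach to the goal; a learned $f$ shapes the transient between start and goal.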

Many modern DMPs, such as the reversible variant, reformulate this as a tracking system against a learned reference trajectory $y_x(t)$:

$$\ddot y = \ddot y_x - D (\dot y - \dot y_x) - K (y - y_x)$$

with explicit stiffness $K$ and damping $D$ decoupled from time scaling (Sidiropoulos et al., 2020).

2. Forcing Term Parameterization: Basis Functions and Weights

The nonlinearity required to reproduce general motion is encoded in $f(x)$, approximated as a normalized weighted sum of $N$ basis functions:

$$f(x) = \frac{\sum_{i=1}^N w_i \psi_i(x)}{\sum_{j=1}^N \psi_j(x)}\, x\,(g - y_0)$$

where the factor $x(g-y_0)$ scales the forcing term by the phase and the movement amplitude, so it vanishes as the (decaying) phase converges and adapts to new start/goal pairs.

Standard choices for $\psi_i(x)$ are:

  • Gaussian RBFs: $\psi_i(x) = \exp(-h_i (x - c_i)^2)$, with centers $c_i$ (often linearly or exponentially spaced) and widths $h_i$ adjusted for the desired overlap (Jahn et al., 2022, Böckmann et al., 2016).
  • Compactly Supported Bases (Mollifiers, Wendland): $\psi_i$ with compact support, promoting numerical stability (the regression matrix $A$ is banded, not full) and efficient local updates. These keep condition numbers growing linearly rather than exponentially with $N$, improving scaling for high-resolution modeling (Ginesi et al., 2019).

The number and structure of the basis functions directly control parameterization complexity: for $d$-dimensional motion, the total number of weights is $n_{\mathrm{DMP}} = dN$ (Jahn et al., 2022). Empirical results show that $N = 6$–$11$ RBFs per DOF suffice for sub-millimeter trajectory reconstruction on human data.
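A minimal construction of such a basis is sketched below; the exponential center spacing (uniform in time for a decaying phase) and the spacing-based width heuristic are common choices assumed here for illustration.

```python
import numpy as np

def make_basis(N=10, a_x=4.6):
    """Gaussian RBFs in phase space: centers exponentially spaced in x
    (i.e. where the decaying phase passes at equal time intervals), widths
    set from the spacing of neighboring centers so adjacent kernels overlap."""
    c = np.exp(-a_x * np.linspace(0.0, 1.0, N))   # centers c_i
    h = 1.0 / np.gradient(c) ** 2                 # widths h_i from spacing
    def psi(x):
        # psi_i(x) = exp(-h_i (x - c_i)^2), vectorized over samples x
        return np.exp(-h * (np.asarray(x)[..., None] - c) ** 2)
    return psi

def forcing(x, w, psi):
    """Normalized weighted sum: f(x) = sum_i w_i psi_i(x) / sum_j psi_j(x)."""
    p = psi(x)
    return (p @ w) / p.sum(axis=-1)

psi = make_basis(N=10)
f_vals = forcing(np.linspace(0.01, 1.0, 50), np.ones(10), psi)
```

A quick sanity check of the normalization: with all weights equal, the normalized sum is constant, so the forcing term reduces to that constant regardless of phase.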

3. Parameter Identification and Learning

Weights $w_i$ are learned by direct regression from one or more demonstrations. For timed data $\{y_d(t), \dot y_d(t), \ddot y_d(t)\}$ and canonical phase $x(t)$, the target signal for each sample is

$$f_{\mathrm{target}}(x^n) = \tau^2 \ddot{y}^n - \alpha_z \left( \beta_z (g - y^n) - \tau \dot y^n \right)$$

and

$$w_i = \frac{\sum_n \psi_i(x^n)\, x^n (g-y_0)\, f_{\mathrm{target}}^n}{\sum_n \psi_i(x^n) \left( x^n (g-y_0) \right)^2}$$

This reduces to a weighted least-squares problem solved independently for each kernel (locally weighted regression). For multiple demonstrations, align them spatially and temporally, extract $f_{\mathrm{target}}^{(j)}(x)$ for each, and solve a joint regression over all demos (Ginesi et al., 2019). Ridge regularization may be added to improve generalization.
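The identification pipeline above can be sketched end-to-end on a synthetic demonstration. The minimum-jerk demo shape, the gains, the basis heuristics, and the error metric are all illustrative assumptions, not values from the cited papers.

```python
import numpy as np

# Synthetic demonstration: minimum-jerk reach from y0 = 0 to g = 1 in tau = 1 s.
tau, a_x, alpha_z = 1.0, 4.6, 25.0
beta_z = alpha_z / 4.0
g, y0 = 1.0, 0.0
t = np.linspace(0.0, tau, 1001)
dt = t[1] - t[0]
s = t / tau
y = y0 + (g - y0) * (10 * s**3 - 15 * s**4 + 6 * s**5)
yd = np.gradient(y, dt)                      # finite-difference derivatives
ydd = np.gradient(yd, dt)

x = np.exp(-a_x * t / tau)                   # canonical phase
f_target = tau**2 * ydd - alpha_z * (beta_z * (g - y) - tau * yd)

# Gaussian basis in phase space: centers exp-spaced, widths from spacing.
N = 10
c = np.exp(-a_x * np.linspace(0.0, 1.0, N))
h = 1.0 / np.gradient(c) ** 2
Psi = np.exp(-h * (x[:, None] - c) ** 2)     # (samples, kernels)

# Locally weighted regression: one scalar weight per kernel.
xi = x * (g - y0)                            # phase/amplitude regressor
w = (Psi * (xi * f_target)[:, None]).sum(0) / (Psi * (xi**2)[:, None]).sum(0)

# Normalized reconstruction error of the forcing term from learned weights.
f_hat = (Psi @ w) / Psi.sum(1) * xi
err = np.sqrt(np.mean((f_hat - f_target) ** 2)) / np.abs(f_target).max()
```

Each weight here is a closed-form ratio of two kernel-weighted sums, which is what makes the batch fit cheap compared to a full joint least-squares solve.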

Some enhanced DMPs decouple learning of the path (reference trajectory) from learning of the velocity profile. In reversible DMPs, a two-phase process first fits $f_p(x)$ to position-only data and then kinesthetically teaches speed via a canonical system modified for phase pushing, followed by a second path fit (Sidiropoulos et al., 2020).

4. Hyperparameters, Affine Invariance, and Robustness

DMP performance is sensitive to the choice of:

  • Number of kernels $N$
  • Kernel widths $h_i$ or width scaling $\kappa$
  • Gains $(\alpha_z, \beta_z, \alpha_x)$
  • Stiffness/damping $(K, D)$ and time scaling $\tau$

In practice, these hyperparameters are tuned over grids to trade off trajectory accuracy against smoothness. Some DMP formulations exploit affine invariance: by aligning starts/goals and applying a transformation $S$ to $K$, $D$, and $f$, trajectory shapes become invariant under rotation, scaling, and translation (Ginesi et al., 2019).
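A small grid over kernel count and a width scale illustrates such tuning on a synthetic minimum-jerk demonstration; the demo shape, the width heuristic, and the RMSE criterion are illustrative assumptions.

```python
import numpy as np

# Synthetic minimum-jerk demonstration (y0 = 0 -> g = 1 over tau = 1 s).
tau, a_x, alpha_z = 1.0, 4.6, 25.0
beta_z = alpha_z / 4.0
t = np.linspace(0.0, tau, 1001)
dt = t[1] - t[0]
s = t / tau
y = 10 * s**3 - 15 * s**4 + 6 * s**5
yd, ydd = np.gradient(y, dt), np.gradient(np.gradient(y, dt), dt)
x = np.exp(-a_x * t / tau)
f_target = tau**2 * ydd - alpha_z * (beta_z * (1.0 - y) - tau * yd)
xi = x * 1.0                                   # x * (g - y0)

def rmse(N, kappa):
    """Forcing-term RMSE for N kernels whose widths are scaled by kappa."""
    c = np.exp(-a_x * np.linspace(0.0, 1.0, N))
    h = kappa / np.gradient(c) ** 2            # width heuristic * scale factor
    Psi = np.exp(-h * (x[:, None] - c) ** 2)
    w = (Psi * (xi * f_target)[:, None]).sum(0) / (Psi * (xi**2)[:, None]).sum(0)
    return np.sqrt(np.mean(((Psi @ w) / Psi.sum(1) * xi - f_target) ** 2))

errors = {(N, k): rmse(N, k) for N in (5, 10, 20) for k in (0.5, 1.0, 2.0)}
```

On a smooth demonstration like this one, increasing $N$ with a fixed width scale reduces the reconstruction error, matching the accuracy side of the trade-off; smoothness and noise sensitivity push the other way.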

Auto-tuning procedures have been proposed: Hong et al. extract spring-damper ratios $(D/M, K/M)$ from multiple demonstrations via a bi-objective function balancing trajectory error and skill stability, yielding improved compliance and stability over hand tuning (Hong et al., 2023).

5. Extensions: Reversibility, Multi-Segment, Neural and Probabilistic DMPs

  • Reversibility: In reversible DMPs, simply negating the sign of the phase evolution $\dot x$ exactly retraces the learned path in reverse, with no need for dual training. The transformation system tracks a reference path, and forward/backward execution uses the same weights and parameters (Sidiropoulos et al., 2020).
  • Segmentation: Long-horizon or discontinuous tasks are handled by segmenting motions into multiple DMPs, each with independent parameterizations. DSDNet, for instance, predicts variable-length DMP sequences end-to-end for complex tasks (Anarossi et al., 2023).
  • Neural Parameterization: Deep variants replace the basis function expansion with neural networks (MLPs, CNNs, autoencoders), conditioning on sensory data for context-aware adaptation. These "Neural DMPs" enable richer generalization at the expense of higher data and computational requirements (Rožanec et al., 2022).
  • Probabilistic DMPs: Reformulation as linear Gaussian state-space models enables Bayesian inference, explicit uncertainty propagation, closed-loop feedback via Kalman filtering, and failure detection using predictive likelihoods (Meier et al., 2016).
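The reversibility mechanism can be sketched with reference-tracking dynamics of the form $\ddot y = \ddot y_x - D(\dot y - \dot y_x) - K(y - y_x)$: negating the phase rate replays the same reference path backward, and the spring-damper tracking dynamics retrace it. The quadratic reference path, the gains, and the instantaneous reversal are illustrative assumptions; an abrupt reversal induces a brief, damped tracking transient.

```python
K, D = 400.0, 40.0            # tracking stiffness/damping (D = 2*sqrt(K))
tau, dt = 1.0, 1e-3
steps = int(round(tau / dt))

def track(x0, x_rate, steps, y, yd):
    """Integrate y_dd = y_x_dd - D*(y_d - y_x_d) - K*(y - y_x) along a
    phase signal with constant rate x_rate, for reference path y_x(x) = x**2."""
    x = x0
    for _ in range(steps):
        # Reference position/velocity/acceleration via the chain rule
        # (x_rate is constant, so the x_ddot term vanishes).
        yx, yx_d, yx_dd = x**2, 2.0 * x * x_rate, 2.0 * x_rate**2
        ydd = yx_dd - D * (yd - yx_d) - K * (y - yx)
        x  += dt * x_rate
        y  += dt * yd
        yd += dt * ydd
    return x, y, yd

# Forward: phase runs 0 -> 1; backward: negate the phase rate, 1 -> 0.
x1, y1, yd1 = track(0.0,  1.0 / tau, steps, 0.0, 0.0)
x2, y2, yd2 = track(x1, -1.0 / tau, steps, y1, yd1)
```

The same reference path and gains serve both directions; only the sign of the phase rate changes, which is the core of the reversibility claim.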

6. Application-Specific and Multimodal DMP Parameterization

  • Vision/Language Integration: Frameworks such as KeyMPs and LMPs leverage vision-LLMs to select or generate DMP parameters from multimodal input, mapping high-level intent to low-level control actions (e.g., one-shot motion generation for occlusion-rich tasks, direct language-conditioned motion specification) (Anarossi et al., 14 Apr 2025, Dai et al., 2 Feb 2026).
  • Skill Transfer and Human-Likeness: Parameter extraction pipelines optimize DMP gains and weights to match dynamic features of human demonstration, enabling transfer with enhanced compliance and alignment of motion characteristics (Hong et al., 2023).
  • Personalization: By encoding user-specific starts/goals and interactive velocity scaling (e.g., via human-applied forces), DMPs generate trajectories personalized to individual physical characteristics and preferences (Franceschi et al., 11 Jun 2025).

7. Parameterization Summary Table

| Component | Parameter(s) | Typical Choices / Notes |
|---|---|---|
| Canonical system | $\tau$, $h(x)$ | $\tau = T_{\mathrm{demo}}$; $h(x)$ linear or exponential |
| Transformation system | $K$, $D$, $\alpha_z$, $\beta_z$ | $K>0$, $D>0$, $\beta_z=\alpha_z/4$ |
| Forcing-term basis | $\{\psi_i\}$, $N$, $c_i$, $h_i$ | Gaussians, mollifiers, Wendland functions |
| Forcing-term weights | $\{w_i\}$ | Least squares, LWR, learned or neural |
| Objective gains | From demo or auto-tuning | Balance trajectory error, compliance |
| Personalization | $y_0$, $g$, $\tau$ | Per demo, user-chosen, adaptive |

Comprehensive DMP parameterization thus involves careful specification and learning of time/phase dynamics, attractor gains, force-basis expansions, and spatial/temporal scaling, in addition to pragmatic extensions for modern robotics contexts. The mathematical and algorithmic variety now available supports robust skill representation and sophisticated adaptations to user, context, and high-level task specification (Sidiropoulos et al., 2020, Hong et al., 2023, Ginesi et al., 2019, Anarossi et al., 14 Apr 2025).
