GradSTL: Differentiable Signal Temporal Logic
- GradSTL is a formally verified framework for signal temporal logic that evaluates constraints over irregularly sampled signals using recursive, adaptive windows.
- It employs smooth robustness semantics with differentiable approximations (e.g., smooth min/max) to enable gradient descent in neurosymbolic learning tasks.
- The entire implementation is automatically generated from an interactive theorem prover, ensuring both semantic soundness and derivative correctness for reliable integration.
GradSTL denotes a formally verified, comprehensive implementation of signal temporal logic (STL) suitable for neurosymbolic reasoning and integration with gradient-based learning. Designed to evaluate any STL constraint over any signal, regardless of sampling regularity, GradSTL provides both Boolean and smooth (differentiable) STL semantics over tensor representations. Its construction includes formally proven soundness of the robust semantics and of its manually implemented derivative, with the entire implementation automatically generated from an interactive theorem prover. This architecture enables seamless use of STL-based objectives in neurosymbolic optimization tasks, directly bridging temporal logic specification and machine learning by gradient descent.
1. Formal Specification and Recursive STL Evaluation
GradSTL is specified using formal methods, notably the Isabelle theorem prover. Its core is a recursive evaluation function, E*, which reflects classic Boolean STL semantics while adapting to the temporal structure of arbitrarily sampled signals. When interpreting temporal operators such as Eventually ($\Diamond$) or Always ($\square$), the recursion employs an adaptive window: as the evaluation proceeds from sample $t_i$ to $t_{i+1}$, the operator's temporal interval $[a, b]$ is shifted by the time gap $t_{i+1} - t_i$. This mechanism keeps the logical specification synchronized with the actual trajectory, supporting unevenly spaced samples. The resulting evaluation faithfully determines whether the STL constraint holds at each relevant time step for any given finite-length signal.
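To make the adaptive-window recursion concrete, the following is a minimal illustrative sketch, in Python, of Boolean evaluation for a single formula shape, $\square_{[a,b]}(x > c)$, over an unevenly sampled signal. The function and variable names (`always`, `holds_at`, `ts`, `xs`) are hypothetical, and this is a simplified reading of the adaptive-window idea, not GradSTL's verified E*.

```python
def holds_at(xs, i, c):
    """Atomic predicate x > c at sample i (hypothetical example predicate)."""
    return xs[i] > c

def always(ts, xs, a, b, c, i=0):
    """Boolean Always_[a,b](x > c), evaluated recursively from sample i.

    ts: strictly increasing sample times (possibly unevenly spaced)
    xs: signal values at those times
    At each step the interval [a, b] is shifted by the gap to the next
    sample, keeping the recursion synchronized with the actual sampling.
    """
    # The current sample lies inside the (shifted) window, so it must satisfy the predicate.
    here = holds_at(xs, i, c) if a <= 0.0 <= b else True
    # Stop when there are no further samples or the window lies entirely in the past.
    if i + 1 >= len(ts) or b <= 0.0:
        return here
    delta = ts[i + 1] - ts[i]  # adaptive shift: the actual time gap between samples
    return here and always(ts, xs, a - delta, b - delta, c, i + 1)

# Unevenly spaced samples: gaps of 0.5 s, 1.5 s, 0.25 s, 0.75 s.
ts = [0.0, 0.5, 2.0, 2.25, 3.0]
xs = [1.2, 0.9, 1.1, 1.3, 0.4]
print(always(ts, xs, a=0.0, b=2.5, c=0.5))  # True: every sample in [0, 2.5] exceeds 0.5
print(always(ts, xs, a=0.0, b=3.0, c=0.5))  # False: the sample at t = 3.0 (0.4) violates x > 0.5
```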
2. Smooth Robustness Semantics and Differentiability
For integration with learning methods that require differentiability, GradSTL provides a smooth, real-valued “robustness” function, denoted $\rho_\gamma$, which quantifies the degree of satisfaction or violation of a property. This function is derived from the recursive E* semantics by replacing every instance of the non-differentiable min/max with regularized smooth approximations:
- $\widetilde{\max}_\gamma(x_1, \dots, x_n) = \frac{1}{\gamma} \log \sum_{i=1}^{n} e^{\gamma x_i}$ for $\gamma > 0$; it recovers the standard maximum as $\gamma \to \infty$.
- $\widetilde{\min}_\gamma(x_1, \dots, x_n) = -\widetilde{\max}_\gamma(-x_1, \dots, -x_n) = -\frac{1}{\gamma} \log \sum_{i=1}^{n} e^{-\gamma x_i}$.
These constructs enable the formulation of smooth, recursive robust semantics, as exemplified for the Always operator:

$$\rho_\gamma\big(\square_{[a,b]}\,\varphi,\, x,\, t_i\big) \;=\; \widetilde{\min}_\gamma \big\{\, \rho_\gamma(\varphi, x, t_j) : t_j \in [\,t_i + a,\, t_i + b\,] \,\big\}.$$

This systematic smoothing ensures $\rho_\gamma$ is suitable as a loss function for gradient-based optimizers, facilitating neurosymbolic learning where the satisfaction level of logical constraints must be differentiated with respect to signal variables.
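As a concrete illustration of the smoothing, the following Python/PyTorch sketch implements the log-sum-exp soft maximum and minimum above and uses them to score one Always window; the helper names (`smooth_max`, `smooth_min`, `rho_always`) and the use of `torch.logsumexp` are illustrative assumptions, not GradSTL's generated code.

```python
import torch

def smooth_max(x: torch.Tensor, gamma: float) -> torch.Tensor:
    # softmax_gamma(x) = (1/gamma) * log sum_i exp(gamma * x_i); tends to max(x) as gamma grows.
    return torch.logsumexp(gamma * x, dim=-1) / gamma

def smooth_min(x: torch.Tensor, gamma: float) -> torch.Tensor:
    # softmin_gamma(x) = -softmax_gamma(-x); tends to min(x) as gamma grows.
    return -smooth_max(-x, gamma)

def rho_always(margins: torch.Tensor, gamma: float) -> torch.Tensor:
    """Smooth Always robustness over one window: a soft minimum of the
    per-sample robustness values (here, simple margins x(t_j) - c)."""
    return smooth_min(margins, gamma)

margins = torch.tensor([0.7, 0.4, 0.6, 0.8])  # per-sample margins within the window
for gamma in (1.0, 10.0, 100.0):
    print(gamma, rho_always(margins, gamma).item())  # converges to the true minimum, 0.4, as gamma grows
```

Note that this log-sum-exp soft minimum is a lower bound on the true minimum, so for small $\gamma$ it can be negative even when every margin is positive.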
3. Formal Verification of Soundness and Correctness
A central innovation is that not only the recursive robust semantics but also its derivative are fully verified. Two soundness properties are established:
- Semantic Soundness: As $\gamma \to \infty$, the sign of $\rho_\gamma(\varphi, x, i)$ exactly recovers the Boolean satisfaction of $\varphi$ at index $i$; i.e., in the limit the robustness is positive if and only if the Boolean evaluation E* yields true at that index, and negative otherwise.
- Derivative Correctness: The symbolically computed derivative of the robust semantics, $\partial \rho_\gamma / \partial x$, is formally proven to equal the true partial derivative of $\rho_\gamma$ with respect to any signal variable.
These properties are established by meta-theoretical induction over both formula structure and signal length, guaranteeing that the core functions behave as mathematically intended in all cases—critical for trustworthy learning and control when high assurance is required.
4. Automatic Code Generation and Correctness by Construction
Rather than relying on manual implementation, the executable code for GradSTL is automatically synthesised from the Isabelle formal specification to OCaml via a correctness-guaranteed code generation pipeline. Only a thin, formally assured translation layer stands between the verified logic and the deployed implementation. This eliminates the risk of erroneous “hand coding”, ensuring correctness by construction all the way from the formal semantics to the runtime behaviour, and making the solution robust when integrated with machine learning frameworks (e.g., PyTorch) via simple foreign function interfaces.
5. Integration into Neurosymbolic Learning via Gradient Descent
GradSTL is engineered to serve as a bridge between formal logic specification and differentiable learning systems. The smooth robustness measure and its derivative can be wrapped as custom autograd functions in PyTorch or similar platforms. In practice, when a system output (such as a control trajectory) must be optimized to satisfy an STL constraint, the optimizer (e.g., Adam) can directly maximize $\rho_\gamma$, leveraging its correctness and differentiability properties. This allows direct deployment of rich, interpretable logical constraints in the objective function during neural network training or classical trajectory optimization.
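A hedged sketch of this integration pattern follows. The placeholder functions `gradstl_rho` and `gradstl_drho` stand in for calls into a verified robustness implementation (here replaced by a hand-written soft-min robustness for “always x > 0.5” and its analytic gradient); they are not GradSTL's actual API, and the names, formula, and hyperparameters are illustrative.

```python
import torch

GAMMA = 10.0  # smoothing parameter (illustrative value)

def gradstl_rho(signal: torch.Tensor) -> float:
    # Placeholder robustness: soft minimum of the margins (signal - 0.5),
    # i.e. a smoothed "always signal > 0.5".
    return float(-torch.logsumexp(-GAMMA * (signal - 0.5), dim=0) / GAMMA)

def gradstl_drho(signal: torch.Tensor) -> torch.Tensor:
    # Analytic gradient of the placeholder robustness: a softmax weighting
    # concentrated on the least-satisfied samples.
    return torch.softmax(-GAMMA * (signal - 0.5), dim=0)

class STLRobustness(torch.autograd.Function):
    """Wraps an externally computed robustness value and its externally
    supplied gradient as a differentiable node in the computation graph."""

    @staticmethod
    def forward(ctx, signal):
        ctx.save_for_backward(signal)
        return signal.new_tensor(gradstl_rho(signal))

    @staticmethod
    def backward(ctx, grad_output):
        (signal,) = ctx.saved_tensors
        return grad_output * gradstl_drho(signal)

# Optimize a free trajectory to satisfy the constraint by maximizing robustness
# (equivalently, minimizing its negation) with Adam.
traj = torch.zeros(20, requires_grad=True)
opt = torch.optim.Adam([traj], lr=0.05)
for _ in range(200):
    opt.zero_grad()
    loss = -STLRobustness.apply(traj)
    loss.backward()
    opt.step()
print(STLRobustness.apply(traj).item())  # robustness ends up positive: constraint satisfied
```

The same pattern applies unchanged when the robustness value and its gradient instead come from the auto-generated OCaml code through a foreign function interface.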
The following table summarizes key properties:
| Property | GradSTL Characteristic | Verified By |
|---|---|---|
| Adaptive window | Synchronizes temporal recursion with signal sampling | Isabelle Theorem Prover |
| Robustness smoothing | Smooth $\widetilde{\min}_\gamma$ / $\widetilde{\max}_\gamma$ replacements for min/max | Isabelle Theorem Prover |
| Soundness | Boolean limit of robust semantics as $\gamma \to \infty$ | Isabelle Theorem Prover |
| Derivative correctness | Symbolic derivative equals analytic derivative | Isabelle Theorem Prover |
| Implementation | OCaml code auto-generated from formal spec | Isabelle codegen |
6. Case Study: Learning to Satisfy STL Constraints
A detailed case study addresses a medical robot whose motion is governed by STL constraints, including spatial avoidance (obstacles), kinetic limits, and temporally articulated tasks (e.g., “stay at cabinet for at least 5 s”). The robot’s trajectory is represented by a high-dimensional, possibly non-uniformly sampled signal. GradSTL computes both satisfaction and robustness at all relevant times, including in regions of denser sampling. Learning proceeds by maximizing $\rho_\gamma$ via gradient descent, yielding a trajectory that ultimately satisfies the composite STL specification. Experimental evidence demonstrates that repeated optimization iterations visibly improve the robot’s task compliance, as measured by increasingly robust satisfaction of all logical constraints, even under irregular sampling regimes.
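For illustration only, the sketch below scores a composite specification of the same flavour as the case study, a conjunction of obstacle avoidance and an eventual 5 s dwell near a cabinet, using the smooth semantics from Section 2; the geometry, thresholds, and helper names are invented for this example and are not the paper's actual setup.

```python
import torch

GAMMA = 10.0
def smooth_max(x): return torch.logsumexp(GAMMA * x, dim=-1) / GAMMA
def smooth_min(x): return -smooth_max(-x)

def rho_composite(ts, pos, obstacle, cabinet, r_obs=0.3, r_cab=0.2, dwell=5.0):
    """rho( Always(dist_obstacle > r_obs)  AND  Eventually(Always_[0,dwell](dist_cabinet < r_cab)) )."""
    d_obs = torch.linalg.norm(pos - obstacle, dim=-1)  # distance to the obstacle at each sample
    d_cab = torch.linalg.norm(pos - cabinet, dim=-1)   # distance to the cabinet at each sample
    rho_avoid = smooth_min(d_obs - r_obs)              # Always: soft minimum over all samples
    # Eventually(Always_[0,dwell]): soft max over candidate start samples of the
    # soft min over the samples falling inside each start's dwell window
    # (windows follow the actual, possibly uneven, sample times).
    end = ts[-1].item()
    starts = [t0 for t0 in ts.tolist() if t0 + dwell <= end]
    dwell_scores = torch.stack([
        smooth_min((r_cab - d_cab)[(ts >= t0) & (ts <= t0 + dwell)]) for t0 in starts
    ])
    rho_dwell = smooth_max(dwell_scores)
    return smooth_min(torch.stack([rho_avoid, rho_dwell]))  # conjunction: soft minimum

# An unevenly sampled 2-D trajectory drifting toward the cabinet.
ts = torch.tensor([0.0, 0.4, 1.1, 1.5, 2.7, 3.0, 4.2, 5.5, 6.0, 7.8, 8.1, 9.6, 10.0])
pos = torch.zeros(len(ts), 2)
pos[:, 0] = torch.linspace(0.0, 1.0, len(ts))
rho = rho_composite(ts, pos, obstacle=torch.tensor([0.5, 1.0]), cabinet=torch.tensor([1.0, 0.0]))
print(rho.item())  # negative here: this trajectory does not yet dwell at the cabinet for 5 s
```

Maximizing such a composite score with the gradient-descent loop shown in Section 5 then pushes the trajectory toward joint satisfaction of all constraints.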
7. Applications, Benefits, and Limitations
GradSTL’s rigorous semantics and verified implementation make it particularly suitable for the following domains:
- Neurosymbolic learning, where task objectives are naturally described in STL, requiring differentiable supervision.
- Learning and control in safety-critical or high-assurance systems (e.g., robotics, industrial process automation), especially when signal sampling is irregular.
- Scenarios demanding automated compliance with explicit, human-readable temporal logic specifications at training or runtime.
Principal benefits include correctness guarantees (both logical and analytic), seamless integration with tensor libraries and autograd, and flexibility for arbitrary signal discretization. Potential limitations involve increased computational overhead in deeply nested or long-horizon STL formulas due to recursive evaluation, and the need to tune the smoothing parameter $\gamma$ to balance sharpness of decision boundaries against optimization tractability. Care is also needed to ensure deployment environments faithfully execute the auto-generated code, though correctness by construction largely mitigates this risk.
GradSTL thus establishes a mathematically precise foundation for the use of signal temporal logic in contemporary learning systems, effectively enabling interpretable, reliable, and gradient-friendly specification-driven learning in a wide class of applications (Chevallier et al., 6 Aug 2025).