
A Test-Function Approach to Incremental Stability

Published Jul 1, 2025 · Version 2 · arXiv:2507.00695

Authors

Daniel Pfrommer, Max Simchowitz, Ali Jadbabaie

Categories

cs.LG, eess.SY

Abstract

This paper presents a novel framework for analyzing incremental input-to-state stability ($δ$ISS) based on the idea of using rewards as "test functions." Whereas control theory traditionally deals with Lyapunov functions that satisfy a time-decrease condition, reinforcement learning (RL) value functions are constructed by exponentially decaying a Lipschitz reward function that may be non-smooth and unbounded on both sides. Thus, these RL-style value functions cannot be directly understood as Lyapunov certificates. We develop a new equivalence between a variant of incremental input-to-state stability of a closed-loop system under a given policy, and the regularity of RL-style value functions under adversarial selection of a Hölder-continuous reward function. This result highlights that the regularity of value functions, and their connection to incremental stability, can be understood in a way that is distinct from the traditional Lyapunov-based approach to certifying stability in control theory.
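
To make the "rewards as test functions" idea concrete, the following is a minimal sketch of the objects involved, using standard definitions; the paper's exact formulation, norms, and constants may differ. For a closed-loop system $x_{t+1} = f(x_t, u_t)$ under a policy $\pi$, the RL-style value function induced by a reward $r$ and discount factor $\gamma \in (0,1)$ is

$$V^\pi_r(x) = \sum_{t=0}^{\infty} \gamma^t \, r(x_t), \qquad x_0 = x,$$

where $r$ is Lipschitz (or Hölder) continuous but possibly non-smooth and unbounded below as well as above, so $V^\pi_r$ need not satisfy a Lyapunov-style time-decrease condition. Incremental input-to-state stability, in its standard form, instead asks that any two trajectories $x_t$ and $x'_t$ driven by input sequences $u_t$ and $u'_t$ obey

$$\|x_t - x'_t\| \le \beta\big(\|x_0 - x'_0\|, t\big) + \rho\Big(\sup_{s < t} \|u_s - u'_s\|\Big)$$

for some comparison functions $\beta \in \mathcal{KL}$ and $\rho \in \mathcal{K}$. The equivalence developed in the paper relates a variant of this property to the regularity (e.g., Hölder continuity in the initial state) of $V^\pi_r$ holding uniformly over an adversarial choice of the test reward $r$.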
