PaperSwipe

Plug-and-Play Homeostatic Spark: Zero-Cost Acceleration for SNN Training Across Paradigms

Published Dec 4, 2025 · Version 1 · arXiv:2512.05015

Authors

Rui Chen, Xingyu Chen, Yaoqing Hu, Shihan Kong, Zhiheng Wu, Junzhi Yu

Categories

cs.NE

Abstract

Spiking neural networks offer event-driven computation, sparse activation, and hardware efficiency, yet their training often converges slowly and lacks stability. We present Adaptive Homeostatic Spiking Activity Regulation (AHSAR), an extremely simple, plug-in, training-paradigm-agnostic method that stabilizes optimization and accelerates convergence without changing the model architecture, loss, or gradients. AHSAR introduces no trainable parameters. It maintains a per-layer homeostatic state during the forward pass, maps centered firing-rate deviations to threshold scales through a bounded nonlinearity, uses lightweight cross-layer diffusion to avoid sharp imbalance, and applies a slow, across-epoch global gain that combines validation progress with activity energy to tune the operating point. The computational cost is negligible. Across diverse training methods, SNN architectures of different depths, widths, and temporal steps, and both RGB and DVS datasets, AHSAR consistently improves strong baselines and enhances out-of-distribution robustness. These results indicate that keeping layer activity within a moderate band is a simple and effective principle for scalable and efficient SNN training.
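
The abstract describes AHSAR's mechanism only at a high level. The sketch below is a minimal, hypothetical PyTorch illustration of that description, not the paper's implementation: the class name, constants, exponential-moving-average update, tanh nonlinearity, neighbor-averaging diffusion, and gain rule are all assumptions filled in for illustration.

    # Minimal sketch of the homeostatic regulation idea from the abstract.
    # All names, constants, and update rules are illustrative assumptions;
    # the paper's exact formulation may differ.
    import torch

    class HomeostaticThresholdController:
        """Tracks per-layer firing rates and rescales spike thresholds so that
        layer activity stays inside a moderate band. No trainable parameters."""

        def __init__(self, num_layers, target_rate=0.1, momentum=0.9,
                     diffusion=0.2, sensitivity=2.0):
            self.target_rate = target_rate   # desired mean firing rate (assumed value)
            self.momentum = momentum         # EMA factor for the homeostatic state
            self.diffusion = diffusion       # cross-layer smoothing strength (assumed)
            self.sensitivity = sensitivity   # slope of the bounded nonlinearity
            self.state = torch.full((num_layers,), target_rate)  # per-layer homeostatic state
            self.global_gain = 1.0           # slow, across-epoch global gain

        @torch.no_grad()
        def observe(self, layer_idx, spikes):
            """Update the running firing-rate estimate for one layer from its spike tensor."""
            rate = spikes.float().mean()
            self.state[layer_idx] = (self.momentum * self.state[layer_idx]
                                     + (1.0 - self.momentum) * rate)

        @torch.no_grad()
        def threshold_scales(self):
            """Map centered rate deviations to per-layer threshold scales through a
            bounded nonlinearity, after lightweight cross-layer diffusion."""
            deviation = self.state - self.target_rate
            # Diffuse deviations between neighboring layers to avoid sharp imbalance.
            smoothed = deviation.clone()
            smoothed[1:-1] = ((1.0 - self.diffusion) * deviation[1:-1]
                              + 0.5 * self.diffusion * (deviation[:-2] + deviation[2:]))
            # 1 + tanh keeps every scale in the bounded band (0, 2 * global_gain):
            # layers firing too much get a larger threshold, quiet layers a smaller one.
            return self.global_gain * (1.0 + torch.tanh(self.sensitivity * smoothed))

        @torch.no_grad()
        def update_global_gain(self, val_improved, step=0.02):
            """Slow, once-per-epoch adjustment combining validation progress with
            activity energy to tune the operating point (assumed rule)."""
            activity_energy = float((self.state ** 2).mean())
            if not val_improved:
                # If validation stalls, nudge the gain toward the activity level
                # that matches the target band. Purely illustrative.
                self.global_gain += step if activity_energy > self.target_rate ** 2 else -step
            self.global_gain = float(min(max(self.global_gain, 0.5), 2.0))

    # Example usage inside a (hypothetical) training loop:
    #   ctrl = HomeostaticThresholdController(num_layers=8)
    #   after each layer's forward pass:   ctrl.observe(i, spikes_i)
    #   before the next forward pass:      thresholds = base_thresholds * ctrl.threshold_scales()
    #   once per epoch:                    ctrl.update_global_gain(val_acc > best_val_acc)

Because the controller only observes spike tensors and rescales thresholds, it leaves the architecture, loss, and gradients untouched, which is consistent with the abstract's claim of a plug-in, paradigm-agnostic method with negligible cost.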
