Convergence for Discrete Parameter Updates

Published Dec 3, 2025 · Version 1 · arXiv:2512.04051

Authors

Paul Wilson, Fabio Zanasi, George Constantinides

Categories

cs.LG, math.OC

Abstract

Modern deep learning models require immense computational resources, motivating research into low-precision training. Quantised training addresses this by representing training components in low-bit integers, but typically relies on discretising real-valued updates. We introduce an alternative approach where the update rule itself is discrete, avoiding the quantisation of continuous updates by design. We establish convergence guarantees for a general class of such discrete schemes, and present a multinomial update rule as a concrete example, supported by empirical evaluation. This perspective opens new avenues for efficient training, particularly for models with inherently discrete structure.
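
The abstract describes the multinomial update rule only at a high level, so the following is a minimal sketch of what a discrete, sampling-based update of this kind could look like. Everything here is an illustrative assumption, not the paper's actual scheme: the function multinomial_update, the lr_prob and step parameters, and the choice of a per-coordinate categorical (multinomial) draw over the three outcomes {-step, 0, +step} are all hypothetical.

import numpy as np

def multinomial_update(params, grads, lr_prob=0.1, step=1, rng=None):
    # Hypothetical discrete update (illustrative sketch, not the paper's rule).
    # Each integer parameter moves by -step, 0, or +step, drawn from a
    # per-coordinate categorical distribution: the probability of moving
    # grows with |grad|, and any move opposes the gradient's sign.
    rng = np.random.default_rng() if rng is None else rng
    p_move = np.clip(lr_prob * np.abs(grads), 0.0, 1.0)  # P(coordinate moves)
    moved = rng.random(params.shape) < p_move            # sample which coordinates step
    # Step against the gradient; a zero gradient never moves its coordinate.
    return params - (moved * step * np.sign(grads)).astype(params.dtype)

# Usage: integer weights stay integers throughout training.
w = np.zeros(4, dtype=np.int32)
g = np.array([0.8, -0.3, 0.0, 1.2])
w = multinomial_update(w, g)

One reason a scheme of this shape is plausible: whenever p_move is not clipped, the expected update is -(lr_prob · step) · grads, so in expectation it behaves like gradient descent with learning rate lr_prob · step while every realised update stays integer-valued. Unbiasedness of this kind, together with bounded steps, is the standard sort of condition under which stochastic-approximation arguments yield convergence; the paper's actual assumptions and guarantees may differ.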
