Abstract
The generalization performance of deep neural networks (DNNs) is a critical
factor in achieving robust model behavior on unseen data. Recent studies have
highlighted the importance of sharpness-based measures in promoting
generalization by encouraging convergence to flatter minima. Among these
approaches, Sharpness-Aware Minimization (SAM) has emerged as an effective
optimization technique for reducing the sharpness of the loss landscape,
thereby improving generalization. However, SAM's computational overhead and
sensitivity to noisy gradients limit its scalability and efficiency. To address
these challenges, we propose Gradient-Centralized Sharpness-Aware Minimization
(GCSAM), which incorporates Gradient Centralization (GC) to stabilize gradients
and accelerate convergence. GCSAM centralizes gradients before the ascent step,
reducing gradient noise and variance and improving training stability. Our
evaluations indicate that GCSAM consistently outperforms SAM and the Adam
optimizer in terms of generalization and computational efficiency. These
findings demonstrate GCSAM's effectiveness across diverse domains, including
general and medical imaging tasks. Our code is available at https://github.com/mhassann22/GCSAM.
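The update described above can be sketched as follows. This is a minimal, hypothetical NumPy illustration of the general idea, not the authors' implementation: the gradient is centralized (its per-row mean subtracted, as in Gradient Centralization), the centralized gradient drives SAM's normalized ascent step, and the descent step uses the gradient at the perturbed point. The toy quadratic loss, the `rho` and `lr` values, and all function names are assumptions for illustration only.

```python
import numpy as np

TARGET = np.array([[1.0, -1.0], [0.5, 0.0]])  # toy optimum for a quadratic loss

def loss_grad(w):
    # Gradient of the toy loss sum((w - TARGET)**2); stands in for a backprop gradient.
    return 2.0 * (w - TARGET)

def centralize(g):
    # Gradient Centralization: subtract the mean over each row of a weight-matrix gradient.
    return g - g.mean(axis=1, keepdims=True)

def gcsam_step(w, lr=0.1, rho=0.05):
    g = centralize(loss_grad(w))                  # GC applied before the ascent step
    eps = rho * g / (np.linalg.norm(g) + 1e-12)   # SAM-style normalized ascent perturbation
    g_adv = loss_grad(w + eps)                    # gradient at the perturbed ("sharpness-aware") point
    return w - lr * g_adv                         # descent step with the perturbed gradient

w = np.zeros_like(TARGET)
for _ in range(50):
    w = gcsam_step(w)
```

In a real training loop the perturbed gradient would come from a second forward/backward pass, which is where SAM's extra cost arises; the sketch only shows how centralization slots in ahead of the ascent step.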