A Novel Minimax Regularization Framework for Enhancing Neural Network Robustness

Jincheng Zhang

Abstract

In deep learning, regularization techniques are widely used to improve the generalization ability and robustness of models. However, traditional regularization methods often rest on a priori assumptions and do not account for model performance in the worst case. This paper proposes a regularization mechanism based on the minimax theorem, introducing a "worst-case adversarial" objective into the training process to improve model robustness. In experiments on the CIFAR-10 dataset, the method slightly outperforms a standard multi-layer perceptron (MLP) baseline on multiple evaluation metrics and shows good generalization. The approach is broadly applicable and can be extended to a variety of architectures, including convolutional neural networks, graph neural networks, and natural language processing models.
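To make the minimax idea concrete, the sketch below shows one plausible instantiation in PyTorch: the inner maximization finds an input perturbation that increases the loss (here approximated with a single FGSM-style gradient step, an assumption, since the abstract does not specify how the worst case is computed), and the outer minimization trains the network on the clean loss plus the worst-case loss as a regularizer. The model architecture and all hyperparameters (epsilon, lam) are illustrative, not the paper's actual method.

```python
import torch
import torch.nn as nn

# Hypothetical MLP for CIFAR-10; layer sizes are illustrative, not from the paper.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 32 * 32, 256), nn.ReLU(),
    nn.Linear(256, 10),
)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def minimax_regularized_step(x, y, epsilon=0.03, lam=0.5):
    """One training step: minimize clean loss + lam * approximate worst-case loss.

    The inner max over perturbations is approximated with a single
    gradient-ascent (FGSM-style) step on the input.
    """
    # Inner maximization: perturb x in the direction that increases the loss.
    x_adv = x.clone().detach().requires_grad_(True)
    inner_loss = loss_fn(model(x_adv), y)
    grad, = torch.autograd.grad(inner_loss, x_adv)
    x_adv = (x_adv + epsilon * grad.sign()).detach()

    # Outer minimization: clean loss plus the worst-case regularizer.
    optimizer.zero_grad()
    loss = loss_fn(model(x), y) + lam * loss_fn(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In a training loop, each (x, y) batch would come from a standard CIFAR-10 DataLoader; epsilon controls the perturbation budget of the inner adversary and lam trades off clean accuracy against worst-case robustness.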
