METHODS FOR COMBATING BIAS AND DOMAIN SHIFT IN MEDICAL NEURAL NETWORKS

Authors

DOI:

https://doi.org/10.31891/2219-9365-2026-85-38

Keywords:

medical neural networks, domain shift, data bias, Batch Normalization, Group Normalization, domain generalization, medical images, clinical deployment, model validation

Abstract

The article addresses the problem of generalization of medical neural networks under domain shift caused by heterogeneity in imaging hardware, acquisition protocols, and clinical practices. It is argued that the degradation of model performance after deployment is systemic and cannot be explained solely by insufficient architectural complexity or limited training data; rather, it stems from distributional discrepancies between the source and target domains. The study focuses on the architectural assumptions embedded in standard neural network components, with particular emphasis on normalization mechanisms. A formal analysis of Batch Normalization is provided, demonstrating that its running statistics implicitly encode properties of the training domain and produce internal feature representations that remain optimal only under the assumption of distributional stationarity. Consequently, models relying on Batch Normalization become highly sensitive to covariate shift in real clinical environments where data distributions vary across institutions.
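The mechanism described above can be illustrated with a minimal sketch (not taken from the article): a Batch Normalization layer in inference mode applies frozen running statistics estimated on the training (source) domain, so activations from a shifted target domain are no longer mapped to zero mean and unit variance. The domain parameters and intensity shift below are hypothetical values chosen purely for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Source-domain activations: approximately zero-mean, unit-variance per feature.
source = rng.normal(loc=0.0, scale=1.0, size=(1000, 8))

# Running statistics accumulated during training implicitly encode the source domain.
running_mean = source.mean(axis=0)
running_var = source.var(axis=0)

def batchnorm_inference(x, mean, var, eps=1e-5):
    """Normalize with frozen training-time statistics, as BN does at deployment."""
    return (x - mean) / np.sqrt(var + eps)

# Target-domain activations: covariate shift (e.g. a different scanner or
# acquisition protocol) modeled as a hypothetical offset and rescaling.
target = rng.normal(loc=0.8, scale=1.5, size=(1000, 8))

out_source = batchnorm_inference(source, running_mean, running_var)
out_target = batchnorm_inference(target, running_mean, running_var)

# Source data remains standardized after normalization; the shifted target data
# does not, so all downstream layers receive off-distribution inputs.
print(abs(out_source.mean()))  # close to 0
print(abs(out_target.mean()))  # far from 0 (roughly the injected offset)
```

The sketch shows only the inference-time failure mode; in a real network the learned affine parameters and depth compound the discrepancy layer by layer.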

Based on this analysis, alternative normalization strategies are examined, including Instance Normalization and Group Normalization, which do not depend on global batch statistics and therefore exhibit greater robustness in multi-domain medical settings. The paper also investigates engineering aspects of model development and evaluation, highlighting the limitations of conventional internal validation procedures that fail to capture cross-site variability. To address this, the adoption of evaluation protocols that approximate real deployment conditions, particularly leave-one-hospital-out validation, is justified as a more reliable indicator of clinical performance. On this basis, practical recommendations are formulated regarding the selection of normalization mechanisms, architectural design choices, and validation methodologies aimed at improving the domain generalization and operational reliability of medical deep learning systems.
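As a minimal sketch of the batch-independence property discussed above (again illustrative, not the article's implementation): Group Normalization computes statistics per sample over groups of channels, so it uses no running averages and no cross-sample information. The tensor shapes and group count are hypothetical.

```python
import numpy as np

def group_norm(x, num_groups, eps=1e-5):
    """Group Normalization for x of shape (N, C, H, W).

    Each sample is normalized independently over groups of channels,
    so the result does not depend on batch composition or on any
    training-time running statistics.
    """
    n, c, h, w = x.shape
    g = x.reshape(n, num_groups, c // num_groups, h, w)
    mean = g.mean(axis=(2, 3, 4), keepdims=True)
    var = g.var(axis=(2, 3, 4), keepdims=True)
    return ((g - mean) / np.sqrt(var + eps)).reshape(n, c, h, w)

rng = np.random.default_rng(1)
x = rng.normal(size=(2, 8, 4, 4))

# A global intensity offset (e.g. a brighter scanner) is absorbed by the
# per-sample statistics, so the normalized output is unchanged.
y = group_norm(x, num_groups=4)
y_shifted = group_norm(x + 3.0, num_groups=4)
print(np.allclose(y, y_shifted, atol=1e-4))  # → True
```

Instance Normalization is the special case with one channel per group; both remove per-image intensity offsets that frozen batch statistics would pass through to downstream layers, which is the robustness property the abstract attributes to them.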

Published

2026-03-05

How to Cite

MESHCHERIAKOV O. (2026). METHODS FOR COMBATING BIAS AND DOMAIN SHIFT IN MEDICAL NEURAL NETWORKS. MEASURING AND COMPUTING DEVICES IN TECHNOLOGICAL PROCESSES, (1), 308–314. https://doi.org/10.31891/2219-9365-2026-85-38