EVOLUTION OF AI METHODS FOR MINIMIZING UNCERTAINTY RISKS IN IOT: FROM CLASSICAL ALGORITHMS TO DEEP LEARNING
DOI: https://doi.org/10.31891/2219-9365-2025-84-52

Keywords: Internet of Things (IoT), artificial intelligence (AI), information uncertainty, machine learning, deep learning, convolutional neural networks (CNN), recurrent neural networks (RNN), generative adversarial networks (GAN), cybersecurity, predictive maintenance

Abstract
This article is a review paper that analyzes the historical development of artificial intelligence (AI) for overcoming information uncertainty in Internet of Things (IoT) systems. Information uncertainty in IoT is a critical issue arising from factors such as sensor noise, incomplete data packets, dynamic environmental changes, and potential cyber threats, leading to risks in data security, information processing efficiency, and real-time decision-making accuracy. The authors examine the stages of this development, beginning with the classical methods dominant in the 1980s–2000s, such as rule-based systems, Bayesian networks, and fuzzy logic. These methods provided probabilistic evaluations using simple rules and mathematical formulas, offering ease of implementation and low computational requirements, but their adaptability to dynamic, large-scale IoT networks with variable and unpredictable data was limited.
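As a minimal illustration of the classical probabilistic approach described above, the following Python sketch fuses noisy readings from a single IoT temperature sensor with a Gaussian prior using a recursive Bayesian update. It is not taken from the reviewed literature; all numerical values (prior belief, noise variances, readings) are assumed for demonstration only.

```python
# Illustrative sketch (not from the reviewed article): a one-dimensional
# recursive Bayesian update for a noisy IoT temperature sensor. All numbers
# (prior belief, noise variances, readings) are assumed for demonstration.

def bayes_update(prior_mean, prior_var, measurement, meas_var):
    """Fuse a Gaussian prior belief with one Gaussian sensor measurement."""
    gain = prior_var / (prior_var + meas_var)        # weight given to the new reading
    post_mean = prior_mean + gain * (measurement - prior_mean)
    post_var = (1.0 - gain) * prior_var              # uncertainty shrinks after fusion
    return post_mean, post_var

if __name__ == "__main__":
    mean, var = 22.0, 4.0            # assumed prior: about 22 °C, fairly uncertain
    meas_var = 1.5                   # assumed sensor noise variance
    for reading in (21.4, 25.9, 22.3, 21.8):         # hypothetical noisy readings
        mean, var = bayes_update(mean, var, reading, meas_var)
        print(f"reading={reading:5.1f}  estimate={mean:5.2f}  variance={var:4.2f}")
```

The same measurement-update rule underlies simple Kalman-style filters, which fits the low computational budget that made classical methods attractive for early IoT deployments.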
The transitional period of the 2000s–2010s is characterized by the introduction of machine learning (ML), including supervised learning algorithms such as support vector machines (SVM) and decision trees, as well as unsupervised learning for clustering. These techniques significantly improved systems' ability to handle large volumes of data, enabling effective anomaly detection, device failure prediction, and noise mitigation with higher accuracy than classical methods, although their performance remained dependent on the quality of training datasets.

Modern deep learning (DL) methods, such as convolutional neural networks (CNN) for visual data analysis, recurrent neural networks (RNN) for time series, and generative adversarial networks (GAN) for synthetic data creation, are integrated with edge computing and cloud solutions. They provide real-time uncertainty processing in areas such as smart cities, Industry 4.0, and medical IoT devices, where accuracy can be 30–50% higher than with previous generations of methods.

The article synthesizes existing literature from databases such as IEEE Xplore and Scopus, highlighting the key advantages of each stage (from the simplicity of classical methods to the automation of DL), their limitations (e.g., computational complexity, vulnerability to adversarial attacks, and ethical privacy issues), and current challenges related to AI integration in IoT ecosystems. Prospects for further development are also proposed, including hybrid models combining AI with quantum computing and open platforms for greater resilience to uncertainty. The conclusions emphasize that the evolution of AI has transformed it into an essential tool for creating resilient and secure IoT systems, with recommendations for further research focused on method standardization and practical implementation in real-world scenarios.
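To make the ML-era anomaly detection mentioned above concrete, the following Python sketch flags anomalous readings in simulated IoT sensor data with a One-Class SVM from scikit-learn. This is an illustrative example only, not an implementation from the reviewed studies; the simulated data distributions and the model parameters (kernel, nu) are assumed.

```python
# Illustrative sketch (not from the reviewed studies): unsupervised anomaly
# detection on simulated IoT sensor readings with a One-Class SVM from
# scikit-learn. The data distributions and model parameters are assumed.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(seed=0)
# Simulated normal operation: temperature (°C) and vibration (g) readings.
normal = rng.normal(loc=[40.0, 0.5], scale=[1.0, 0.05], size=(500, 2))
# A few anomalous readings, e.g. from a failing device or a noisy channel.
anomalies = rng.normal(loc=[55.0, 1.5], scale=[2.0, 0.2], size=(10, 2))

# Train only on normal data; nu bounds the expected fraction of outliers.
detector = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(normal)

test = np.vstack([normal[:5], anomalies])
labels = detector.predict(test)      # +1 = normal, -1 = anomaly
for (temp, vib), label in zip(test, labels):
    status = "anomaly" if label == -1 else "normal"
    print(f"temp={temp:5.1f}  vibration={vib:4.2f}  ->  {status}")
```

In practice such a detector would be trained on historical readings from normal operation and applied to the live data stream, with readings labeled -1 routed for further inspection or maintenance scheduling.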
The goal of this article is to provide a systematic analysis of the evolution of AI methods for minimizing information uncertainty risks in IoT, from classical algorithms to deep learning, together with a synthesis of key studies, a comparison of advantages and limitations, and an outline of development prospects. The tasks include classifying types of uncertainty, describing the historical stages of AI, analyzing modern methods, and formulating recommendations for future research.
License
Copyright (c) 2025 Олександр ШВИДЧЕНКО, Дмитро ЗАГОРОДНІЙ

This work is licensed under a Creative Commons Attribution 4.0 International License.