AI MODELS FOR PREDICTING USER BEHAVIOR IN DIGITAL ECOSYSTEMS: ETHICAL AND TECHNICAL CHALLENGES
DOI: https://doi.org/10.31891/2219-9365-2025-84-25
Keywords: consumer behavior, artificial intelligence, neural network, machine learning
Abstract
The study aims to analyze the technical and ethical challenges associated with applying artificial intelligence models to predict user behavior in digital ecosystems. The authors examine issues of data quality, interpretability, algorithm scalability, and resilience to attacks, as well as concerns regarding privacy, algorithmic bias, decision transparency, accountability for predictions, and the risk of manipulation. Based on a comparison of machine learning models, approaches to developing transparent, fair, and secure predictive systems are identified.
A literature review of contemporary publications was conducted, covering technical methods (machine learning, deep learning, Monte Carlo methods, cognitive analytics) and ethical aspects (SWOT analysis of AI implementation strategies, bias auditing). Two models were compared: a Random Forest (an ensemble of 100 trees) and a multilayer perceptron (MLP) neural network with a 64-32-16 hidden-layer architecture. Experiments were conducted on synthetic data mimicking behavioral patterns, evaluating accuracy, fairness (Disparate Impact Ratio, Equalized Odds, Equal Opportunity, and Statistical Parity metrics), differential privacy (ε = 1.0), and model drift. Visualization (time series, multi-panel graphs) and a tabular synthesis of challenges were employed.
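For illustration, the reported model configuration can be reproduced in a few lines of Python. The sketch below assumes scikit-learn and a synthetically generated dataset; the feature counts, sample size, and train/test split are illustrative placeholders, not the authors' actual experimental setup.

```python
# A minimal sketch of the reported model comparison, assuming scikit-learn
# and synthetic behavioral data; generator settings are illustrative,
# not the authors' actual dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic data standing in for behavioral patterns (hypothetical settings).
X, y = make_classification(n_samples=3000, n_features=20, n_informative=10,
                           random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

# Random Forest: an ensemble of 100 trees, as in the study.
rf = RandomForestClassifier(n_estimators=100, random_state=42)
rf.fit(X_train, y_train)

# MLP with the 64-32-16 hidden-layer architecture described in the study.
mlp = MLPClassifier(hidden_layer_sizes=(64, 32, 16), max_iter=500,
                    random_state=42)
mlp.fit(X_train, y_train)

for name, model in [("Random Forest", rf), ("Neural Network MLP", mlp)]:
    train_acc = accuracy_score(y_train, model.predict(X_train))
    test_acc = accuracy_score(y_test, model.predict(X_test))
    # A large train/test gap signals the overfitting noted in the results.
    print(f"{name}: train={train_acc:.3f}, test={test_acc:.3f}")
```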
The article integrates technical and ethical aspects into a unified analysis, quantifying bias and model degradation in dynamic environments. A comprehensive approach to mitigating these issues through algorithmic auditing, explainable AI, participatory design, and differential privacy is proposed, going beyond fragmented prior studies. Gaps in existing methodologies are identified, in particular the absence of ethical constraints embedded at the algorithm design level and of adaptation to evolving standards.
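As one concrete instance of the differential-privacy mitigation named above, the sketch below applies the classical Laplace mechanism at the study's budget of ε = 1.0; the query and its sensitivity are illustrative assumptions, not the authors' pipeline.

```python
# A minimal sketch of differential privacy via the Laplace mechanism;
# the counting query and sensitivity are illustrative assumptions.
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float,
                      epsilon: float) -> float:
    """Return an epsilon-DP noisy answer for a query with given L1 sensitivity."""
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_value + noise

# Example: a counting query (sensitivity 1) protected at epsilon = 1.0,
# the budget used in the experiments. A smaller epsilon means stronger
# privacy but more noise, which is why accuracy degrades somewhat.
true_count = 420
noisy_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=1.0)
print(f"true={true_count}, noisy={noisy_count:.1f}")
```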
Random Forest achieved 65.50% accuracy on the test set but overfitted (a 20–25% train-test gap), while the neural network reached 58.33% with low interpretability. Algorithmic bias was confirmed: a Disparate Impact Ratio of 0.4858–0.5311 for underrepresented groups, with accuracy disparities (77.08% vs. 52.33–55.91%). Model degradation was also observed: test accuracy fell from 65.5% to 42.6% over 12 months. Applying differential privacy reduced accuracy by 3–5%. A table systematizes the challenges (bias, privacy, interpretability) alongside mitigation methods (auditing, XAI, distributed computing).
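To make the bias metric concrete, the sketch below computes the Disparate Impact Ratio on toy predictions deliberately skewed to fall near the reported 0.49–0.53 range; the group labels, positive rates, and the 0.8 threshold (the common "80% rule") are illustrative conventions, not the authors' audit code.

```python
# A minimal sketch of a Disparate Impact Ratio check on toy data;
# groups, rates, and the 0.8 threshold are illustrative assumptions.
import numpy as np

def disparate_impact_ratio(y_pred: np.ndarray, group: np.ndarray,
                           protected: int, reference: int) -> float:
    """P(positive | protected group) / P(positive | reference group)."""
    p_protected = y_pred[group == protected].mean()
    p_reference = y_pred[group == reference].mean()
    return p_protected / p_reference

rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)
# Bias the toy predictions: the protected group (1) receives positives
# at half the rate of the reference group (0), mimicking the disparity.
rates = np.where(group == 1, 0.3, 0.6)
y_pred = (rng.random(1000) < rates).astype(int)

dir_value = disparate_impact_ratio(y_pred, group, protected=1, reference=0)
# Values well below 0.8, such as the reported 0.4858-0.5311, indicate
# that the protected group is favored far less often by the model.
print(f"Disparate Impact Ratio: {dir_value:.3f}")
```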
AI models are effective for prediction but require balancing accuracy against ethical constraints in light of bias, degradation, and opacity. Integrated frameworks with embedded ethical principles, continuous monitoring, and cultural inclusivity are needed. Future directions include developing adaptive architectures, formal ethical metrics, and interdisciplinary standards for secure digital systems.
License
Copyright (c) 2025 ОЛЕНА ЧЕРНИШОВА, Оксана ШИБКО, МАКСИМ ЖОЛОНДКІВСЬКИЙ

This work is licensed under a Creative Commons Attribution 4.0 International License.