https://vottp.khmnu.edu.ua/index.php/vottp/issue/feedMEASURING AND COMPUTING DEVICES IN TECHNOLOGICAL PROCESSES2025-10-02T17:07:34+03:00Юрій Васильович Кравчикgromplus7@gmail.comOpen Journal Systems<p><strong>ISSN </strong>2219-9365</p> <p><strong>Published</strong> since May 1997</p> <p><strong>Publisher:</strong> Khmelnytskyi National University (Ukraine)</p> <p><strong>Frequency:</strong> 4 times a year</p> <p><strong>Manuscript languages:</strong> mixed (Ukrainian, English, Polish)</p> <p><strong>Editors:</strong> Valeriy Martyniuk (Khmelnytsky, Ukraine)</p> <p><strong>Certificate of state registration of print media:</strong> Series KB № 24923-14863 ПР (12.07.2021).</p> <p><strong>Registration: </strong>The journal is included in Category B of the List of scientific professional publications of Ukraine, in which the results of dissertations for obtaining scientific degrees of doctor and candidate of sciences (specialties: 121, 122, 123, 125, 126, 151, 152, 172) can be published (Order of the Ministry of Education and Science of Ukraine of December 28, 2019 No. 1643).</p> <p><strong>License Terms:</strong> Authors retain copyright and grant the journal the right of first publication, with the work simultaneously licensed under a Creative Commons Attribution (CC BY) International license that allows others to share the work with acknowledgement of its authorship and initial publication in this journal.</p> <p><strong>Open Access Statement:</strong> "MEASURING AND COMPUTING DEVICES IN TECHNOLOGICAL PROCESSES" provides immediate open access to its content on the principle that providing free access to research for the public supports a greater global exchange of knowledge. Full-text access to the scientific articles of the journal is presented on the official website in the Archives section.</p> <p><strong>Address:</strong> Scientific journal "MEASURING AND COMPUTING DEVICES IN TECHNOLOGICAL PROCESSES", Khmelnytsky National University, st. 11, Khmelnytsky, 29016, Ukraine.</p> <p><strong>Tel.:</strong> +380673817986</p> <p><strong>e-mail:</strong> vottp@khmnu.edu.ua</p> <p><strong>web-site:</strong> https://vottp.khmnu.edu.ua/index.php/vottp/</p>https://vottp.khmnu.edu.ua/index.php/vottp/article/view/609CONTROLLING SOFTWARE CODE VULNERABILITIES USING AI-ORIENTED STATIC ANALYSIS2025-09-11T12:01:28+03:00Anna KOVALOVAkovalova.ann@gmail.com<p><em>This paper addresses the pressing issue of software security by exploring the integration of traditional static analysis techniques with advanced AI-based methods for source code vulnerability detection. The research proposes a hybrid architecture that combines rule-based engines, such as CodeQL, with transformer-based neural networks like CodeBERT. While traditional static analyzers rely on manually crafted rules and patterns, they often fail to detect context-dependent or novel vulnerabilities. AI models, on the other hand, demonstrate a growing ability to learn latent semantic structures and security-relevant code patterns by leveraging abstract syntax trees (AST), data flow graphs (DFG), and language-model pretraining techniques.
The presented architecture capitalizes on the strengths of both approaches by aggregating the results of a rule-based static analysis pipeline and an AI-assisted vulnerability classifier into a unified decision engine.</em></p> <p><em>To assess the system’s effectiveness, experiments were conducted on a labeled dataset of 15,000 code samples. The AI model, based on CodeBERT, was trained for 20 epochs using binary cross-entropy and evaluated by F1-score. Three approaches were compared: rule-based, standalone AI, and the hybrid model. Results showed that the AI-only model outperformed the rule-based analyzer (F1-score: 0.81 vs. 0.68), while the hybrid approach achieved the highest score of 0.86, balancing precision and recall.</em></p> <p><em>Beyond classification accuracy, the research also considered the computational trade-offs and runtime implications of integrating AI into static analysis workflows. While the AI-enhanced pipeline incurs higher memory and processing time costs, its ability to identify critical vulnerabilities missed by traditional tools justifies its application in security-sensitive environments. Case studies highlighted examples such as heap buffer overflows and use-after-free vulnerabilities, which were correctly identified by the AI model but missed by pattern-matching rules.</em></p> <p><em>The paper concludes that hybrid AI-assisted static analysis is a promising direction for enhancing secure software development practices, especially in the context of DevSecOps pipelines. Future work includes extending the architecture to support multiple programming languages, integrating explainable AI components for better result interpretability, and optimizing model performance for lightweight deployment scenarios. Overall, the findings emphasize the practical feasibility and advantages of embedding AI into traditional software assurance processes to improve code security in an automated and scalable manner.</em></p>2025-08-28T00:00:00+03:00Copyright (c) 2025 Анна КОВАЛЬОВАhttps://vottp.khmnu.edu.ua/index.php/vottp/article/view/629FIREBIRD AS A DBMS FOR HIGHER EDUCATION INFORMATION SYSTEMS: BENEFITS, CHALLENGES AND IMPLEMENTATION EXPERIENCE2025-10-02T07:08:57+03:00Viktor LYSAKlysak.viktor@khmnu.edu.uaIhor MYKHALCHUKmykhalchukiv@khmnu.edu.ua<p><em>The research explores the architectural models of Firebird, including SuperServer, Classic, and Embedded, and highlights their significance for different types of applications within a university setting. Particular attention is devoted to comparing key features and innovations introduced in Firebird versions 2.5, 3.0, 4.0, and 5.0, such as improved transaction management, enhanced security mechanisms (including SRP authentication and encryption), native replication, and expanded support for modern data types.</em></p> <p><em>The article investigates the main challenges of integrating Firebird with contemporary web services, analytical platforms, and mobile applications. Limitations in the availability of standardized REST/SOAP connectors, as well as the relatively basic support for JSON and NoSQL functionality compared to other open-source DBMSs (such as PostgreSQL or MySQL), are identified as significant barriers to rapid development and system interoperability. The compatibility of Firebird with popular programming frameworks and object-relational mapping (ORM) tools – such as Django, .NET, Java, Node.js, and PHP – is analyzed in detail. 
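<p>As a minimal illustration of the driver-level access such integration involves, a hypothetical Python sketch assuming the community fdb DB-API driver; the database path, credentials, table, and columns are placeholders, not drawn from the article:</p> <pre><code>import fdb  # third-party Firebird driver (pip install fdb); assumed for this sketch

con = fdb.connect(
    dsn="localhost:/data/university.fdb",  # placeholder database path
    user="SYSDBA",
    password="masterkey",                  # placeholder credentials
    charset="UTF8",
)
cur = con.cursor()
# Placeholder table and column names, for illustration only.
cur.execute("SELECT student_id, full_name FROM students WHERE group_id = ?", (42,))
for student_id, full_name in cur.fetchall():
    print(student_id, full_name)
con.close()
</code></pre>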
The authors observe that, although various drivers and adapters are available, integration may require additional effort and technical expertise.</em></p> <p><em>Given the sensitivity of academic data, special emphasis is placed on information security. The paper reviews common vulnerabilities associated with SQL injection, brute-force attacks on authentication mechanisms, open network ports, outdated drivers, and misconfigured access rights. A set of practical recommendations is provided for mitigating these risks, based on the authors’ extensive experience in the administration and modernization of the university’s information systems.</em></p> <p><em>In conclusion, the article provides a set of guidelines for developers and system administrators, emphasizing the importance of continuous monitoring, regular software updates, effective backup management, and adherence to best security practices. The findings highlight Firebird's advantages and limitations as a DBMS for higher education institutions. Additionally, the article offers valuable insights for those considering its implementation or upgrade in large-scale academic environments.</em></p>2025-08-28T00:00:00+03:00Copyright (c) 2025 Віктор ЛИСАК, Ігор МИХАЛЬЧУКhttps://vottp.khmnu.edu.ua/index.php/vottp/article/view/610FEATURES OF LEGISLATION ON CYBER SECURITY AUDITING IN DIFFERENT REGIONS OF THE WORLD2025-09-11T12:13:42+03:00Olesia VOITOVYCHvoytovych.olesya@vntu.edu.uaVitalii VOLYNETSvolynets1026@gmail.com<p class="06AnnotationVKNUES"><em>The article is dedicated to a comparative analysis of the cyber security audit regulatory requirements in key regions of the world amidst global digital transformation. The study systematizes and compares regulatory approaches in the European Union, North America, Asia, and Africa, with a special focus on the requirements for the financial sector. The analysis covers key standards and legislative acts, including GDPR, NIS2, NIST CSF, PCI DSS, as well as leading national laws in China, such as MLPS 2.0, and in Africa, such as POPIA, NDPA, and DPA. The regional analysis revealed fundamental differences in approaches: from the strictly regulated and centralized in the EU (GDPR, NIS2) to the flexible, market-practice-driven in the USA (NIST CSF, SOC 2), heterogeneous in Asia with elements of strict state control (China's MLPS 2.0), and fragmented in Africa, where ineffective pan-African initiatives give way to national legislation facing implementation challenges. The study establishes that the main challenge for international companies is navigating this complex and inconsistent regulatory environment, which often results in significant operational overhead and the phenomenon of “compliance fatigue” due to duplicated audits across various standards. In response, the article offers practical "roadmaps" for financial companies to enter the markets of each of the considered regions, emphasizing the need for tailored, risk-based strategies. It also highlights the critical gap between formal compliance with requirements and actual cyber resilience. This disconnect is particularly noticeable in regions with a shortage of qualified personnel and weak institutional frameworks, where "paper compliance" may not translate into robust protection against sophisticated cyber threats. 
In conclusion, the article provides a structured overview of the global landscape of cybersecurity regulations and practical recommendations for building adaptive strategies for ensuring regulatory compliance in the context of international operations.</em></p>2025-08-28T00:00:00+03:00Copyright (c) 2025 Олеся ВОЙТОВИЧ, Віталій ВОЛИНЕЦЬhttps://vottp.khmnu.edu.ua/index.php/vottp/article/view/596CRITERIA FOR THE EFFICIENCY AND QUALITY OF NEURAL NETWORKS2025-08-20T22:59:40+03:00Volodymyr KUCHERUKvladimir.kucheruk@gmail.comPavlo KULAKOVkulakovpi@gmail.comRoman LISHCHUKroma0lir@gmail.comSerhii KONTSEBAkontseba@meta.uaViktoriya MANKOVSKAviktoriasergiivna@gmail.com<p class="06AnnotationVKNUES"><em>The article examines the criteria for evaluating the efficiency and quality of artificial neural networks (ANNs), emphasizing the importance of balancing predictive performance with computational feasibility. In the era of rapidly growing data volumes and increasing model complexity, the need for systematic approaches to assessing both quality and efficiency becomes critical. The study provides a comprehensive classification of metrics into three major groups: quality metrics, efficiency metrics, and generalization properties. Quality criteria such as accuracy, precision, recall, F1-score, AUC-ROC, and regression-based measures (MSE, MAE, RMSE, R², MAPE) are analyzed as primary indicators of predictive reliability. At the same time, efficiency is measured through computational costs (training time, inference time, FLOPs, memory footprint, energy consumption), structural parameters (number of layers and parameters, compression potential), and practical adaptability. Generalization ability is addressed through overfitting and underfitting analysis, validation and test errors, cross-validation, and the bias–variance trade-off. Furthermore, the paper highlights the importance of robustness and reliability criteria, including sensitivity to noise, adversarial resistance, reproducibility, and stability across datasets. Integral evaluation approaches, such as weighted score, quality-to-cost ratio, and sustainability indices, are proposed as tools for holistic assessment of ANN performance in real-world environments. The practical significance of the study lies in enabling informed decision-making in architecture design and hyperparameter optimization, supporting efficient deployment of neural networks in domains ranging from real-time embedded systems and IoT devices to large-scale industrial and medical applications. 
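<p>As a brief illustration of how several of the quality metrics listed above are computed in practice, a minimal scikit-learn sketch on purely illustrative data:</p> <pre><code># Illustrative only: common classification and regression quality metrics
# computed with scikit-learn on toy data.
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, mean_absolute_error,
                             mean_squared_error, r2_score)

y_true = [1, 0, 1, 1, 0, 1]          # toy classification labels
y_pred = [1, 0, 0, 1, 0, 1]
print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("F1-score :", f1_score(y_true, y_pred))

y_reg_true = [2.0, 3.5, 4.1, 5.0]    # toy regression targets
y_reg_pred = [2.2, 3.3, 4.4, 4.8]
mse = mean_squared_error(y_reg_true, y_reg_pred)
print("MSE :", mse)
print("RMSE:", mse ** 0.5)
print("MAE :", mean_absolute_error(y_reg_true, y_reg_pred))
print("R²  :", r2_score(y_reg_true, y_reg_pred))
</code></pre>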
The findings emphasize that a balanced multi-criteria framework not only ensures predictive accuracy but also promotes resource efficiency, scalability, and long-term reliability of AI solutions, thereby contributing to sustainable and context-aware artificial intelligence development.</em></p>2025-08-28T00:00:00+03:00Copyright (c) 2025 Володимир КУЧЕРУК, Павло КУЛАКОВ, Роман ЛІЩУК, Сергій КОНЦЕБА, Вікторія МАНЬКОВСЬКАhttps://vottp.khmnu.edu.ua/index.php/vottp/article/view/533FORMAL METHODS FOR DESCRIBING AND DETERMINING THE PARAMETERS OF COPYRIGHT PROTECTION SYSTEMS2025-05-23T16:56:39+03:00Volodymyr SABATv_sabat@ukr.netOleksandr BOHONISoleksandr.o.bohonis@lpnu.ua<p class="06AnnotationVKNUES"><em>The functioning of the copyright protection system for electronic publications is closely connected with external negative influencing factors, which can be represented as attacks on vulnerable parameters of such publications, which primarily include confidentiality, modification, availability, and identity of the information contained in the publications. The determination of authorship of a publication in the copyright protection system can be solved by the method of embedding digital watermarks into them or by means of cryptographic algorithms, which accurately indicate the presence of copyright in the publication (digital signature). However, the methods of monitoring electronic products in the electronic publications market for the legality of their distribution are still insufficiently developed, and the types of attacks that can affect the functioning of the copyright protection system remain unexplored. Therefore, the purpose of the research in this article is the analysis of attacks on the copyright protection system and the determination of the security level for protecting electronic publications, which is closely connected with the level of risk of an attack on an electronic publication. To solve the tasks set in the research objective, it is necessary to determine the most critical attacks on the copyright protection system for electronic publications, to introduce the concepts of risk, vulnerability, threat, and controlling countermeasures. The article analyzes modern means of embedding digital watermarks, their shortcomings and advantages in countering external attacks, introduces a scale for evaluating the main parameters that determine the risk level, which makes it possible to normalize the risk level in numerical values and further improve the protection system by introducing certain countermeasures against attacks. Thus, the copyright protection system for electronic publications may be relevant under crisis conditions of external factors of attacks and threats on an electronic publication and capable of adapting to such external factors. 
This, in turn, makes it possible to control the necessary optimal level of security for the normalization of the functioning of the monitoring system of illegal publication distribution and the copyright protection system as a whole.</em></p>2025-08-28T00:00:00+03:00Copyright (c) 2025 Володимир САБАТ, Олександр БОГОНІСhttps://vottp.khmnu.edu.ua/index.php/vottp/article/view/611DEVELOPMENT OF SOFTWARE QUALITY ASSURANCE PERFORMANCE INDICATORS FOR ASSESSING CYBER RESILIENCE OF SYSTEMS2025-09-11T12:51:41+03:00Bohdan SAVCHUKbogd.sav@gmail.com<p><em>Cyber resilience is becoming an essential property of modern information systems, particularly in critical infrastructure and enterprise environments where the ability to resist, absorb, and recover from cyberattacks is vital. While existing security frameworks emphasize threat detection, incident response, and risk management, the influence of software quality assurance (SQA) processes on cyber resilience remains insufficiently studied. This paper addresses this gap by proposing a structured methodology for evaluating the impact of SQA practices on the cyber resilience of software systems through a set of normalized and weighted quality indicators.</em></p> <p><em>The proposed approach combines elements of established software quality models such as ISO/IEC 25010 and CMMI with cybersecurity standards and frameworks including NIST, MITRE ATT&CK, and CIS Controls. It introduces a unified system of metrics that includes test coverage, defect density, response time to vulnerabilities, mean time to recovery, code complexity, and review frequency. These metrics were empirically assessed in a controlled experimental environment using widely adopted DevSecOps tools such as Jenkins, SonarQube, and Allure Report.</em></p> <p><em>The experiment involved two software development configurations: a basic setup with minimal quality assurance and an enhanced one featuring systematic testing, regular code reviews, and developer training. The findings show that improvements in SQA practices led to a significantly higher level of cyber resilience. The enhanced configuration demonstrated better performance in all key metrics, especially in reducing recovery time and increasing the percentage of test coverage.</em></p> <p><em>The results confirm a strong correlation between effective software quality assurance and the system’s capacity to withstand cyber threats. The proposed model can be used to support decision-making in secure software development, providing a foundation for automated monitoring of resilience based on existing quality assurance infrastructure. Future research will focus on expanding the metric set and applying the methodology to systems with diverse architectures and operational contexts.</em></p>2025-08-28T00:00:00+03:00Copyright (c) 2025 Богдан САВЧУКhttps://vottp.khmnu.edu.ua/index.php/vottp/article/view/612FORMATION OF DECISION-MAKING STRATEGIES IN BUSINESS ECOSYSTEMS THROUGH EVOLUTIONARY SEARCH BASED ON GENETIC ALGORITHMS2025-09-11T13:37:59+03:00Andrii SHKITOVopncore@gmail.comAnatolii TYMOSHENKOtimoshAG@i.ua<p class="06AnnotationVKNUES"><em>In the context of growing complexity and dynamism of modern markets, effective strategic decision-making within business ecosystems requires computational models capable of adapting to uncertain, resource-constrained, and multi-agent environments. 
This paper presents an evolutionary approach to decision strategy formation using genetic algorithms, designed to model and optimize behavior in simulated business ecosystems. The proposed model treats the ecosystem as a population of heterogeneous agents with interdependent strategies, competing objectives, and dynamic interactions. Each agent operates under individual constraints related to resources and risk thresholds, while the system optimizes a global fitness function composed of profitability, stability, and inter-agent balance.</em></p> <p class="06AnnotationVKNUES"><em>A specialized virtual environment was developed to simulate multi-agent dynamics and visualize strategy evolution. Agents are represented as network nodes whose behavior is encoded into chromosomes. The evolutionary engine utilizes tournament selection, single-point crossover, and Gaussian mutation. Fitness evaluation accounts for both local and systemic goals, and a feasibility check penalizes unfit solutions. Simulation results over 100 generations showed fast convergence, with significant improvements in average and best fitness, and a marked decrease in strategy variance. The genetic algorithm was benchmarked against greedy heuristics and random search, demonstrating superior performance in terms of solution stability, adaptability, and overall effectiveness.</em></p> <p class="06AnnotationVKNUES"><em>The study highlights the advantages of genetic algorithms in modeling emergent behaviors and adaptive strategy formation in business ecosystems. However, limitations include the reliance on synthetic data and fixed algorithm parameters. Future work will explore hybrid evolutionary-learning models and real-world validation using business case data to enhance realism, scalability, and domain applicability.</em></p>2025-08-28T00:00:00+03:00Copyright (c) 2025 Андрій ШКІТОВ, Анатолій ТИМОШЕНКОhttps://vottp.khmnu.edu.ua/index.php/vottp/article/view/594ANALYSIS OF USER PASSWORD SECURITY USING PYTHON SCRIPTS2025-08-18T10:23:27+03:00Ihor LIMARquantum.biology@outlook.comViktor BASOVbasvic@bigmir.netIhor KIREEVkireev.igor@ukr.netDenys HOLEVd.v_holev@suitt.edu.uaIevhen SEVASTIEIEVseva.odessa@gmail.com<p><em>The paper presents a comprehensive analysis of user password strength using Python scripts, enabling the automation of credential security assessment. A dataset of 1,000 passwords of varying complexity levels (weak, medium, strong) was compiled and evaluated using multiple metrics: length, Shannon entropy, presence of different character types, and verification against open databases of leaked credentials. The study established a correlation between password structure and the probability of compromise, identifying key characteristics that reduce security even when passwords formally meet complexity standards. It was shown that a significant portion of long passwords exhibit low entropy due to character repetition, while popular patterns (e.g., word+digits) make them vulnerable to hybrid attacks. Three types of attacks (dictionary, brute-force, and hybrid) were simulated, and the results confirmed the effectiveness of the selected metrics in predicting vulnerabilities. A methodology for preliminary password audits in corporate and personal systems is proposed, combining entropy analysis with checks against leaked databases.
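<p>A minimal Python sketch of the entropy part of such an audit, using only the standard library (the leaked-password set and the threshold below are illustrative, not the study's):</p> <pre><code># Illustrative sketch: Shannon entropy of a password plus a naive check
# against a small stand-in set of known-leaked passwords.
import math
from collections import Counter

LEAKED = {"123456", "password", "qwerty123"}   # stand-in for a breach corpus

def shannon_entropy_bits(password: str) -> float:
    """Total Shannon entropy of the password, in bits."""
    counts = Counter(password)
    n = len(password)
    per_char = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return per_char * n

def audit(password: str, min_bits: float = 40.0) -> str:
    if password in LEAKED:
        return "rejected: found in leaked-credential set"
    if shannon_entropy_bits(password) < min_bits:
        return "weak: low entropy"
    return "acceptable"

for pwd in ["aaaaaaaaaaaa", "Summer2024", "vR7#qd!2LpXz"]:
    print(pwd, "->", audit(pwd))
</code></pre>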
The findings can be applied to improve authentication policies, implement automated password verification at the account creation stage, and enhance cyber hygiene training programs aimed at developing users’ skills in creating strong credentials. The relevance of this research is driven by the high prevalence of brute-force and credential stuffing attacks that exploit weak or reused passwords, as well as the need for accessible tools for their prompt evaluation.</em></p>2025-08-28T00:00:00+03:00Copyright (c) 2025 Ihorhttps://vottp.khmnu.edu.ua/index.php/vottp/article/view/613CLUSTERING AND MULTI-OBJECTIVE OPTIMIZATION OF DECENTRALIZED LAST-MILE DELIVERY2025-09-11T13:51:43+03:00Yuliia LESHCHENKOytaraniuk@gmail.comMaria YUKHIMCHUKumc1987@vntu.edu.ua<p class="06AnnotationVKNUES"><em>Decentralized last-mile delivery using local hubs reduces transportation time, costs, and CO₂ emissions. The article formulates a multi-criteria mathematical model for route optimization, taking into account time windows and transportation constraints. The use of clustering methods and metaheuristics is proposed to reduce computational complexity and increase efficiency. The results confirm that the combination of geospatial analysis and intelligent algorithms provides a significant increase in the speed, reliability, and environmental sustainability of urban logistics systems.</em></p>2025-08-28T00:00:00+03:00Copyright (c) 2025 Юлія ЛЕЩЕНКО, Марія ЮХИМЧУКhttps://vottp.khmnu.edu.ua/index.php/vottp/article/view/614EVOLUTIONARY OPTIMISATION METHOD FOR THE STRUCTURE OF A WIRELESS SENSOR NETWORK 2025-09-11T14:06:23+03:00Yaroslav PYRIHyaroslavpyrih@gmail.comYuliia PYRIHyuliia.v.klymash@lpnu.ua<p class="06AnnotationVKNUES"><em>The paper focuses on the evolutionary optimisation of wireless sensor network (WSN) topology using a genetic algorithm (GA) as the main computational tool. The research addresses the critical problem of sensor node placement within a monitored area, which directly impacts coverage efficiency, energy consumption, and overall network resilience. The optimisation task is formulated as a structural problem that aims to determine node coordinates to achieve maximum coverage with a limited number of nodes, avoid redundant overlaps of sensing zones, and comply with minimum inter-node distance constraints to prevent coverage merging. To this end, the authors propose a GA-based approach that incorporates adaptive crossover and mutation probabilities, enabling the algorithm to escape premature convergence and to explore the solution space more effectively.</em></p> <p class="06AnnotationVKNUES"><em>A block diagram of the proposed method is presented, detailing the algorithmic stages of initial population generation, fitness evaluation, selection, crossover, and mutation. The fitness function is designed to minimise the overlap of coverage areas while maximising spatial distribution uniformity. The method was validated through a series of simulation experiments under conditions of random node deployment with varying sensing radii (20 m, 30 m, and 40 m) and different minimum inter-node distance thresholds. Visualisations of intermediate and final optimisation stages are provided, demonstrating how the algorithm progressively improves deployment quality with increasing generations. The results show that the proposed method effectively reduces redundant coverage and enhances network structure adaptability. 
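<p>To make the evolutionary loop described above concrete, a compact Python sketch of tournament selection, single-point crossover, and Gaussian mutation applied to node placement follows; the coverage model and all numerical parameters are simplified placeholders rather than the authors' implementation:</p> <pre><code># Simplified sketch: evolving (x, y) coordinates of sensor nodes to cover a
# square area. Fitness rewards covered grid points and penalizes node pairs
# that are closer together than a minimum spacing.
import random, math

AREA, RADIUS, MIN_DIST, N_NODES = 100.0, 20.0, 15.0, 12

def random_layout():
    return [(random.uniform(0, AREA), random.uniform(0, AREA)) for _ in range(N_NODES)]

def fitness(layout, step=10):
    covered = 0
    for gx in range(0, int(AREA) + 1, step):
        for gy in range(0, int(AREA) + 1, step):
            if any(math.dist((gx, gy), node) <= RADIUS for node in layout):
                covered += 1
    too_close = sum(1 for i, a in enumerate(layout) for b in layout[i + 1:]
                    if math.dist(a, b) < MIN_DIST)
    return covered - 5 * too_close

def tournament(pop, k=3):
    return max(random.sample(pop, k), key=fitness)

def crossover(p1, p2):
    cut = random.randint(1, N_NODES - 1)      # single-point crossover
    return p1[:cut] + p2[cut:]

def mutate(layout, sigma=5.0, rate=0.2):
    return [(min(AREA, max(0.0, x + random.gauss(0, sigma))),
             min(AREA, max(0.0, y + random.gauss(0, sigma))))
            if random.random() < rate else (x, y) for x, y in layout]

population = [random_layout() for _ in range(30)]
for generation in range(50):
    population = [mutate(crossover(tournament(population), tournament(population)))
                  for _ in range(len(population))]
best = max(population, key=fitness)
print("best fitness:", fitness(best))
</code></pre>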
The global optimum was achieved at generation G=117, yielding a balanced deployment of nodes across the target area.</em></p> <p class="06AnnotationVKNUES"><em>The experimental findings highlight the effectiveness of evolutionary approaches for WSN design, showing that the GA can provide near-optimal solutions in complex and large-scale environments where traditional analytical or brute-force methods fail. Furthermore, the adaptability of the algorithm makes it suitable for dynamic scenarios requiring rapid deployment with limited prior knowledge of terrain or application-specific constraints. The study also emphasises the potential integration of the proposed method with other evolutionary paradigms, such as particle swarm optimisation and differential evolution, to further improve accuracy and convergence speed.</em></p> <p class="06AnnotationVKNUES"><em>In conclusion, the work demonstrates that GA-based optimisation of WSN topology is a promising tool for achieving efficient coverage, reducing energy waste, and ensuring the reliability of sensor networks. Future research directions include hybridisation with other metaheuristics, real-world deployment testing, and applications in large-scale Internet of Things (IoT) systems where adaptive and scalable network design is essential.</em></p>2025-08-28T00:00:00+03:00Copyright (c) 2025 Ярослав ПИРІГ, Юлія ПИРІГhttps://vottp.khmnu.edu.ua/index.php/vottp/article/view/599WAYS OF IMPLEMENTING AI IN MONITORING AND CONTROL OF AQUATIC ENVIRONMENTS2025-09-03T13:29:07+03:00Maksym MARKINm.markin@kpi.in.uaVitaliy PETROVstabilno@gmail.com<p class="06AnnotationVKNUES"><em>The article considers modern approaches to the implementation of artificial intelligence (AI) in water environment monitoring and control systems. The relevance of the study is due to the growth of anthropogenic load, the emergence of new types of pollution, the limitations of traditional laboratory methods and the difficult environmental situation in Ukraine, which has worsened in the context of military operations.</em></p> <p class="06AnnotationVKNUES"><em>The paper analyzes existing methods for monitoring water composition - physicochemical, spectroscopic, acoustic, biological and intellectual. Their advantages, limitations and examples of practical implementation based on information and measuring systems (IMS) using sensors, microcontrollers and cloud technologies are presented. The possibilities of using AI for automated data processing, anomaly detection, classification of pollution types and forecasting of critical changes in water parameters are shown. Special attention is paid to the creation of flexible and scalable IMS capable of integrating data from various sources (sensor networks, satellites, laboratory studies) and working with large amounts of information. The system architecture is proposed, which includes a sensor layer, communication gateways, a central server and an analytical module with neural network algorithms (LSTM, Autoencoder, classification models).</em></p> <p class="06AnnotationVKNUES"><em>The results of the study emphasize the effectiveness of combining traditional control methods with intelligent algorithms to increase the accuracy, speed and adaptability of monitoring. The use of AI allows for early diagnosis of pollution, the formation of early warning systems, the prediction of the dynamics of the state of water resources and the support of management decision-making. 
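<p>A rough sketch of the autoencoder-style anomaly detection mentioned above, assuming TensorFlow/Keras, synthetic sensor readings, and an arbitrary error threshold (none of which are taken from the article):</p> <pre><code># Hypothetical sketch: a small dense autoencoder flags water-quality readings
# whose reconstruction error is unusually high.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
# Synthetic "normal" readings: pH, dissolved oxygen, conductivity.
normal = rng.normal(loc=[7.2, 8.5, 450.0], scale=[0.2, 0.5, 20.0], size=(500, 3))
mean, std = normal.mean(axis=0), normal.std(axis=0)
X = (normal - mean) / std

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(3,)),
    tf.keras.layers.Dense(2, activation="relu"),   # bottleneck
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(3),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, X, epochs=30, batch_size=32, verbose=0)

def is_anomaly(sample, threshold=1.0):
    z = (np.asarray(sample) - mean) / std
    err = float(np.mean((model.predict(z[None, :], verbose=0) - z) ** 2))
    return err > threshold

print(is_anomaly([7.1, 8.3, 460.0]))   # typical reading, expected False
print(is_anomaly([4.0, 2.0, 900.0]))   # strongly atypical reading, expected True
</code></pre>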
Thus, the proposed approaches demonstrate the prospects for the implementation of AI in water environment monitoring and create a basis for the development of intelligent ecological systems capable of ensuring sustainable development, public health protection and ecosystem preservation.</em></p>2025-08-28T00:00:00+03:00Copyright (c) 2025 Максим МАРКІН, Віталій ПЕТРОВhttps://vottp.khmnu.edu.ua/index.php/vottp/article/view/607A METHOD FOR DECISION-MAKING IN DRONE NAVIGATION UNDER SENSOR JAMMING CONDITIONS BASED ON KALMAN FILTER2025-09-11T06:42:53+03:00Olga RUSANOVAolga.rusanova.v@gmail.comOleksandr MOROZOV-LEONOVolmorleon@yahoo.com<p class="06AnnotationVKNUES"><em>In environments where GPS, remote radio control, or radio communication are unreliable or subject to jamming, small drones must navigate using limited onboard autonomous sensors and detectors. This article investigates the stability of a lightweight autopilot for a drone in combination with a Kalman filter and gating for each sensor under simulated jamming scenarios. Jamming is modeled as Poisson pulses, dropouts, additive offset, and increased signal dispersion in four onboard sensors: IMU, magnetic compass, LiDAR, and camera-based optical flow.</em></p> <p class="06AnnotationVKNUES"><em>Using 20 planned experiments, the effectiveness of navigation (RMSE of position in space and speed, mission duration, energy consumption) and sensor behavior during decision-making (share of rejected measurements) were measured. The results show that under conditions of moderate jamming, a simple Kalman filter architecture provides opportunities for effective navigation along route points, but strong, prolonged, and nonlinear jamming of sensors leads to deterioration in accuracy or navigation failure. This suggests that even computationally simple decision-making using Mahalanobis distance filtering can provide reliable navigation in certain scenarios where GNSS connectivity is unavailable.</em></p>2025-08-28T00:00:00+03:00Copyright (c) 2025 Ольга РУСАНОВА, Олександр МОРОЗОВ-ЛЕОНОВhttps://vottp.khmnu.edu.ua/index.php/vottp/article/view/574INTEGRATION OF AGILE AND WATERFALL METHODOLOGIES INTO HYBRID IT PROJECT MANAGEMENT MODELS: CHALLENGES AND BENEFITS2025-07-01T12:24:27+03:00Olga KRAVCHUKkravchukoa2@gmail.com<p><em>With the rapid development of information technology, it is becoming increasingly important to ensure effective IT project management based on the integration of various management approaches. In particular, the combination of Agile and Waterfall methodologies within hybrid models allows for a balance between process flexibility and predictability. This symbiosis contributes to more efficient planning, development and implementation of IT solutions. The article analyses the main factors that justify the feasibility of using a hybrid approach, its advantages and potential risks in the context of modern project management.</em></p> <p><em>Among the key advantages of the hybrid approach is the ability to combine the clear structure and phasing inherent in Waterfall with the flexibility and operational feedback that characterise Agile. Such a synthesis helps to increase the organisation's adaptability to changing requirements, ensures a quick response to market challenges, and improves the quality of the final product. At the same time, the effective use of hybrid models requires a high level of team maturity, developed communication skills, adaptive planning and a clear division of responsibilities. 
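<p>Returning briefly to the Mahalanobis-distance gating described in the drone-navigation abstract above, a minimal numpy sketch of this measurement-rejection step; the measurement model, covariances, and gate threshold are illustrative, not taken from the article:</p> <pre><code># Illustrative sketch: chi-square style gating of a 2-D measurement in a
# Kalman filter update. A measurement is rejected when the squared
# Mahalanobis distance of its innovation exceeds the gate threshold.
import numpy as np

H = np.eye(2)                       # measurement model (position only)
R = np.diag([0.5, 0.5])             # measurement noise covariance
GATE = 9.21                         # ~99% chi-square gate for 2 degrees of freedom

def gate_measurement(x_pred, P_pred, z):
    innovation = z - H @ x_pred
    S = H @ P_pred @ H.T + R        # innovation covariance
    d2 = float(innovation.T @ np.linalg.inv(S) @ innovation)
    return d2 <= GATE, d2

x_pred = np.array([10.0, 5.0])      # predicted position
P_pred = np.diag([1.0, 1.0])
print(gate_measurement(x_pred, P_pred, np.array([10.4, 5.3])))   # accepted
print(gate_measurement(x_pred, P_pred, np.array([25.0, -4.0])))  # rejected as implausible
</code></pre>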
The success of this approach largely depends on a favourable organisational culture and active support from management.</em></p> <p><em>This article provides examples of the practical implementation of a hybrid approach in various industries, including financial, healthcare, and government. The article discusses how the integration of Waterfall and Agile elements has contributed to increasing the efficiency of project execution, minimising the risk of missed deadlines, and improving the management of available resources. The research results show that hybrid models are particularly effective in large-scale and complex projects where it is necessary to comply with clear regulatory requirements and at the same time respond to dynamic changes in the development process.</em></p> <p><em>Special attention is paid to the problem of choosing the appropriate balance between Agile and Waterfall elements. The success of a hybrid approach depends on a deep understanding of the project context, technical complexity, customer requirements, team experience, and resource availability. Attention is also drawn to the need to improve the skills of project managers, who must have skills in both approaches, have the ability to think strategically, and implement innovative management practices.</em></p>2025-08-28T00:00:00+03:00Copyright (c) 2025 Ольга КРАВЧУКhttps://vottp.khmnu.edu.ua/index.php/vottp/article/view/575ANALYTICAL REVIEW OF CLOUD SERVICE PROVIDERS2025-07-01T17:15:53+03:00Dmytro KYSIUKkneimad@gmail.com<p><em>The article provides a systematic analysis of the cloud computing market with a focus on key cloud infrastructure service providers, in particular Amazon, Microsoft and Google. Three main models of providing cloud services are considered: infrastructure as a service (IaaS), platform as a service (PaaS) and software as a service (SaaS). The functional characteristics, applications and advantages of each model are analyzed. Particular attention is paid to multi-cloud strategies that allow organizations to optimize the load, improve the reliability of IT infrastructure and flexibly use the services of several providers at the same time. The statistics obtained from the current reports of the analytical company IDC, which demonstrate the rapid growth of global spending on cloud infrastructure: in the fourth quarter of 2024, expenses amounted to 67 billion US dollars, which is 99.3% more than in the corresponding period of the previous year. Total spending in this sector is projected to reach $271.5 billion in 2025. The main drivers of market growth are identified - digitalization of business, active introduction of artificial intelligence, transition to remote work and cloud modernization of traditional IT infrastructure. The article is of practical importance for specialists in the field of information technology, business leaders, analysts and anyone who makes decisions on the implementation of cloud solutions. 
The presented review contributes to a better understanding of the cloud services market and allows you to reasonably choose a provider and a cloud architecture model for specific tasks.</em></p>2025-08-28T00:00:00+03:00Copyright (c) 2025 Дмитро Кисюкhttps://vottp.khmnu.edu.ua/index.php/vottp/article/view/606ADVANCED SELF-CONDITIONED GAN FOR HISTOLOGY IMAGE SYNTHESIS AND DATA AUGMENTATION2025-09-11T06:33:35+03:00Oleksandr MESHCHERIAKOVascellanova@gmail.com<p class="06AnnotationVKNUES"><em>The article explores modern methods and innovative approaches aimed at advancing medical image generation through the application of deep generative models, with particular emphasis on state-of-the-art Generative Adversarial Network (GAN) architectures enhanced by self-conditioning mechanisms in scenarios of limited data availability. A central research objective is the synthesis of histological images that combine high visual fidelity, strong realism, and adequate variability, thereby making them suitable for both clinical practice and scientific research applications. By generating artificial yet realistic images, the proposed methodology contributes to overcoming one of the most pressing barriers in medical imaging research—namely, the scarcity of sufficiently large and diverse annotated datasets.</em></p> <p class="06AnnotationVKNUES"><em>The paper provides an in-depth analysis of the main challenges inherent in medical image synthesis. These include the constraints imposed by limited and heterogeneous training datasets, the difficulty of ensuring anatomical and structural consistency in generated outputs, and the problem of maintaining stability during adversarial training. To evaluate the proposed solution, experiments were conducted using the PathMNIST dataset, a benchmark collection of histopathological image sections widely applied in computational pathology.</em></p> <p class="06AnnotationVKNUES"><em>The experimental results clearly demonstrate the benefits of incorporating self-conditioning within GAN frameworks. Specifically, self-conditioning was shown to stabilize the adversarial training process, significantly mitigate the risk of mode collapse, and improve the overall perceptual quality of generated samples. Furthermore, improvements were confirmed quantitatively through objective image quality metrics as well as classifier performance when trained on augmented data. These findings underscore the potential of the proposed approach for practical applications in data augmentation, robust evaluation of diagnostic algorithms, and the development of decision-support systems in digital pathology.</em></p> <p class="06AnnotationVKNUES"><em>The contribution of this work lies not only in the methodological novelty of applying self-conditioning to medical image generation, but also in its practical relevance for clinical AI pipelines, where high-quality synthetic data can accelerate innovation while ensuring reproducibility and generalizability of diagnostic models.</em></p>2025-08-28T00:00:00+03:00Copyright (c) 2025 Олександр МЕЩЕРЯКОВhttps://vottp.khmnu.edu.ua/index.php/vottp/article/view/551INTELLIGENT APPROACHES TO SOURCE CODE PROTECTION2025-06-02T02:59:22+03:00Igor GOLOVKOi85.golovko@gmail.com<p><em>The article considers an integrated source code protection technology that combines traditional obfuscation methods with the capabilities of artificial intelligence to optimize the protection process. 
A methodology based on the analysis of intermediate code (IL) in .NET applications is presented, where AI is used to automatically select and apply the most effective obfuscation strategies. The system is implemented using modern tools for working with IL code, such as Mono.Cecil, in combination with machine learning frameworks (ML.NET, TensorFlow.NET, and others), which makes it possible to adapt the obfuscation process to the characteristics of the specific code. The methodology involves a phased analysis of the input code, where at the first stage syntactic and semantic analysis is performed to identify critical areas that require enhanced protection. The next stage is the application of an AI module, which, using recurrent neural networks (e.g., LSTM) and deep autoencoders combined into ensemble structures, allows predicting the optimal obfuscation strategy for each code segment. The integration of ensemble approaches allows combining predictions from several models, which significantly improves the accuracy and resistance of the system to reverse engineering.</em></p> <p><em>The experiments conducted demonstrate that the integration of AI significantly increases the resistance of the code to reverse engineering, while maintaining the functionality of the software. The article considers the theoretical foundations, describes the architecture of the developed system, and demonstrates the results of experimental verification of the proposed approach, which confirm its effectiveness in modern software development conditions.</em></p>2025-08-28T00:00:00+03:00Copyright (c) 2025 Ігор ГОЛОВКОhttps://vottp.khmnu.edu.ua/index.php/vottp/article/view/549METHOD OF STATIC CODE QUALITY ANALYSIS USING MACHINE LEARNING2025-06-01T13:46:39+03:00Ihor PROKOFIEVprokofjev.igor@gmail.com<p class="06AnnotationVKNUES"><em>This paper introduces a comprehensive model for assessing the quality of source code by leveraging a combination of established code quality attributes and modern analysis techniques. The study begins with an overview of the fundamental methods of static code analysis, outlining their capabilities as well as inherent limitations, and situates them within the broader context of software quality assurance practices. While software quality can be examined through multiple complementary approaches—such as manual inspection, automated unit and integration testing, peer developer code reviews, duplicate code detection, and metric-based evaluation via static analysis tools—none of these approaches alone is sufficient for a reliable and holistic evaluation. Instead, effective code quality assessment requires a multifaceted strategy that integrates diverse perspectives and tools.</em></p> <p class="06AnnotationVKNUES"><em>The core contribution of this work lies in the design and experimental validation of a novel assessment tool. To verify its effectiveness, an empirical study was conducted using a dataset of C# source code files extracted from real-world software projects of varying scale and complexity. The tool automatically scans source files in a designated directory, processes them, and generates detailed reports in CSV format for further analysis. Experimental results demonstrated the ability of the model to successfully identify complex code fragments, redundant constructs, and potential architectural deficiencies.
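<p>A toy Python sketch of this scan-and-report workflow (standard library only; the file pattern, metrics, and paths are deliberately simplistic placeholders rather than the tool described in the paper):</p> <pre><code># Toy sketch: walk a source directory, compute crude per-file metrics
# (line count and a rough branch count as a complexity proxy), write CSV.
import csv
import re
from pathlib import Path

BRANCH_KEYWORDS = re.compile(r"\b(if|for|while|case|catch)\b")

def file_metrics(path: Path) -> dict:
    text = path.read_text(encoding="utf-8", errors="ignore")
    lines = text.splitlines()
    return {
        "file": str(path),
        "lines": len(lines),
        "branches": len(BRANCH_KEYWORDS.findall(text)),
        "long_lines": sum(1 for line in lines if len(line) > 120),
    }

def scan(source_dir: str, report_csv: str, pattern: str = "*.cs") -> None:
    rows = [file_metrics(p) for p in Path(source_dir).rglob(pattern)]
    with open(report_csv, "w", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=["file", "lines", "branches", "long_lines"])
        writer.writeheader()
        writer.writerows(rows)

scan("./src", "quality_report.csv")   # paths are placeholders
</code></pre>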
This not only provides actionable insights for developers but also supports informed decisions during refactoring and long-term codebase maintenance.</em></p> <p class="06AnnotationVKNUES"><em>The results confirm that the proposed approach significantly enhances the accuracy and efficiency of code quality evaluation, offering advantages over traditional methods by combining structured analysis with extensibility. Future research will focus on expanding the range of supported metrics, improving the precision of quality assessments, and incorporating advanced detection of anti-patterns and anomalies through machine learning techniques. These enhancements are expected to further strengthen the applicability of the model in diverse software engineering contexts, making it a valuable resource for developers, project managers, and quality assurance teams alike.</em></p>2025-08-28T00:00:00+03:00Copyright (c) 2025 Ігор ПРОКОФ'ЄВhttps://vottp.khmnu.edu.ua/index.php/vottp/article/view/622DECENTRALIZED LAST-MILE DELIVERY USING AI AND 5G: IMPROVING EFFICIENCY AND SPEED IN NEXT-GENERATION NETWORKS2025-09-18T11:46:53+03:00Yuliia LESHCHENKOleshchenko@vntu.edu.uaIhor MOROZigor3003moroz@gmail.comMariia YUKHYMCHUKumc1987@vntu.edu.ua<p class="06AnnotationVKNUES"><em>This paper explores the integration of artificial intelligence (AI) and fifth-generation mobile networks (5G) into decentralized last-mile delivery systems, aiming to improve efficiency, scalability, and customer satisfaction. Building on previous research, which employed AI-driven optimization techniques such as genetic algorithms and real-time weather data integration, the study highlights how 5G technology can overcome existing challenges related to network latency, data transmission speed, and coordination within decentralized logistics networks. The ultra-fast data transfer rates and ultra-low latency of 5G enable real-time route optimization, dynamic fleet management, and seamless coordination between multiple delivery nodes, significantly reducing delivery times and operational costs. The paper also examines the role of AI-powered autonomous vehicles supported by 5G, which can address the shortage of couriers and enhance delivery safety and efficiency in densely populated areas. Furthermore, the research analyzes potential applications of blockchain technology in last-mile logistics, particularly in enhancing transparency, authenticity verification, and automation through smart contracts. Key technical challenges such as infrastructure costs, limited network coverage, interoperability, cybersecurity, interference, and energy consumption are also discussed, along with potential solutions including network slicing, adaptive interference management, and blockchain-based security models. The study emphasizes the future potential of combining 5G, AI, and blockchain in creating sustainable, adaptive, and customer-oriented logistics ecosystems. Promising directions include machine learning for demand prediction, on-demand and drone-based deliveries, energy optimization, and advanced indoor navigation supported by 5G. 
Overall, the paper argues that integrating 5G into decentralized last-mile delivery not only addresses existing limitations but also opens avenues for innovation in logistics, contributing to economic growth, environmental sustainability, and improved quality of urban life.</em></p>2025-08-28T00:00:00+03:00Copyright (c) 2025 Юлія ЛЕЩЕНКО, Ігор МОРОЗ, Марія ЮХИМЧУКhttps://vottp.khmnu.edu.ua/index.php/vottp/article/view/617METHOD OF INTELLECTUAL ANALYSIS OF SHORT HIGH-DIMENSIONAL SAMPLES BASED ON BAGGING ENSEMBLE WITH DATA AUGMENTATION2025-09-15T06:48:51+03:00Myroslav HAVRYLIUKmyroslav.a.havryliuk@lpnu.ua<p class="06AnnotationVKNUES"><em>One of the persistent and critical challenges in the application of machine learning and statistical analysis methods in the medical field remains the effective processing of small data – datasets containing a limited number of observations for practical, ethical or biological reasons. In contrast to large-scale population studies or broad epidemiological databases, many real-world clinical scenarios involve working with small samples: individual patient data, rare diseases, early stage studies or specialized diagnostic procedures. As a result, researchers and clinicians are often forced to work with incomplete, sparse, or highly unbalanced data in an effort to create accurate and robust models that can be used to inform important clinical decisions. Thus, the development of efficient, reliable, and interpretable methods for processing short data is not only a methodological necessity but also a practical requirement of modern medicine. One of the most common ways to partially solve the problem of small sample analysis is data augmentation. Increasing the number of instances in the training set often has a positive effect on the accuracy of models. However, in the case of augmented data, relying on a single modeling strategy is sometimes not enough. Often, combining augmentation and ensemble learning approaches can lead to significant improvements in model robustness and performance.</em></p> <p class="06AnnotationVKNUES"><em><a name="_Hlk205792641"></a>This article develops a new method for intellectual analysis of short high-dimensional data samples for solving regression modeling problems, based on the use of a bagging ensemble of artificial neural networks with an additional data augmentation procedure. Its training algorithm and results are described in detail. Using this method, two medical problems were solved: predicting the level of bone fragility in patients with osteoarthritis and the percentage of body fat. According to the results of comparing the main performance metrics of the developed approach and the baseline models, proposed method demonstrated the best results for both problems. The developed bagging ensemble can be used in cases where the amount of available data is limited and classical models do not provide the required accuracy.</em></p>2025-08-28T00:00:00+03:00Copyright (c) 2025 Мирослав ГАВРИЛЮКhttps://vottp.khmnu.edu.ua/index.php/vottp/article/view/601METHOD FOR DETECTING FIRE-HAZARDOUS OPERATING MODES OF PHOTOVOLTAIC MODULES IN SOLAR POWER PLANTS2025-09-07T08:57:02+03:00Andriy LYSYIandriilysyi@khmnu.edu.uaBohdan SAVENKOsavenko_bohdan@ukr.net<p class="06AnnotationVKNUES"><em>This article addresses a current scientific and practical issue — improving fire safety levels at solar power plants by detecting fire-hazardous operating modes of photovoltaic (PV) modules. 
A novel method for automatic detection of hazardous conditions is proposed, based on data analysis obtained using an unmanned aerial vehicle (UAV) equipped with RGB and thermal (IR) cameras.</em></p> <p class="06AnnotationVKNUES"><em>The method is based on the construction of a disjunctive normal form that incorporates a set of key indicators: defect type (damage or contamination), results of visual and thermal analysis, and the temperature of the bypass diode. An algorithm has been developed that not only detects potentially hazardous modules but also classifies the cause of the defect, enabling prompt decisions regarding maintenance or replacement of faulty equipment.</em></p> <p class="06AnnotationVKNUES"><em>To implement the monitoring system, integration of the SCADA TRACE MODE software suite with UAVs is proposed, which provides real-time data collection and processing, result visualization, alarm generation, and maintenance recommendations. The use of additional temperature sensors to monitor the condition of bypass diodes is also envisaged.</em></p> <p class="06AnnotationVKNUES"><em>This comprehensive system enhances monitoring efficiency, reduces response time to emergency situations, and mitigates the risk of fires at solar energy facilities.</em></p>2025-08-28T00:00:00+03:00Copyright (c) 2025 Андрій ЛИСИЙ, Богдан САВЕНКОhttps://vottp.khmnu.edu.ua/index.php/vottp/article/view/550DETECTION OF INTRUSIONS IN THE INTERNET OF THINGS USING COMPUTATIONAL INTELLIGENCE TECHNOLOGIES2025-06-01T20:11:19+03:00Olena SEMENOVAsemenova.o.o@vntu.edu.uaAndrii DZHUSdzhuz1988@gmail.comVolodymyr MARTYNIUKvm4ukr@gmail.com<p><em>Nowadays the technology of Internet of Things (IoT) is rapidly transforming modern industries by integrating physical objects with digital systems to enable intelligent interaction. However, the growing number of connected devices and the openness of IoT environments pose some cybersecurity issues. Traditional intrusion detection systems (IDS) are often ineffective in IoT networks due to limited device resources, dynamic network topologies, and the increasing number of novel, previously unseen attacks. To improve the efficiency of intrusion detection systems, it is advisable to use an approach based on the use of computational intelligence technologies, including machine learning methods, evolution models, and fuzzy logic algorithms. This paper explores the application of computational intelligence technologies to enhance the effectiveness of intrusion detection in IoT environments. The study substantiates the feasibility of applying computational intelligence technologies to enhance the efficiency of detecting anomalous behavior and unauthorized activities in network traffic. An approach is proposed for designing an intelligent intrusion detection system based on a fuzzy controller capable of adaptively responding to changing environmental conditions and uncertainty in input data. A Mamdani-type fuzzy inference system was developed, including the definition of input and output variables, the construction of a rule base, and the configuration of membership functions. The controller was modeled in the MATLAB environment. The developed fuzzy controller can serve as a component of intrusion detection systems. The obtained results confirm the feasibility of applying computational intelligence to enhance the reliability of IoT network security. 
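<p>A rough, self-contained Python sketch of Mamdani-style inference of the kind described in the preceding abstract (the linguistic variables, membership functions, and rules below are invented for illustration; the authors' controller was built in MATLAB):</p> <pre><code># Illustrative sketch: two-rule Mamdani inference with triangular membership
# functions and centroid defuzzification, mapping a toy traffic anomaly score
# and packet rate to an intrusion risk score.
import numpy as np

def trimf(x, a, b, c):
    """Triangular membership function with peak at b (requires a < b < c)."""
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0)

risk_universe = np.linspace(0.0, 1.0, 101)
risk_low = trimf(risk_universe, 0.0, 0.2, 0.5)    # consequent "risk is low"
risk_high = trimf(risk_universe, 0.5, 0.8, 1.0)   # consequent "risk is high"

def infer(anomaly_score, packet_rate):
    # Rule 1: IF anomaly is high AND packet rate is high THEN risk is high (AND = min).
    fire_high = min(float(trimf(anomaly_score, 0.4, 0.8, 1.0)),
                    float(trimf(packet_rate, 0.5, 0.9, 1.0)))
    # Rule 2: IF anomaly is low THEN risk is low.
    fire_low = float(trimf(anomaly_score, 0.0, 0.2, 0.6))
    aggregated = np.maximum(np.minimum(risk_high, fire_high),
                            np.minimum(risk_low, fire_low))    # clip and max-aggregate
    return float((aggregated * risk_universe).sum() / (aggregated.sum() + 1e-9))  # centroid

print(round(infer(anomaly_score=0.9, packet_rate=0.8), 2))   # comparatively high risk score
print(round(infer(anomaly_score=0.1, packet_rate=0.2), 2))   # comparatively low risk score
</code></pre>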
The proposed approach can serve as a foundation for building effective next-generation intrusion detection systems to improve the resilience of IoT infrastructures.</em></p>2025-08-28T00:00:00+03:00Copyright (c) 2025 Олена СЕМЕНОВА, Андрій ДЖУС, Володимир МАРТИНЮКhttps://vottp.khmnu.edu.ua/index.php/vottp/article/view/633RESOURCE ALLOCATION IN MULTI-BAND WIRELESS NETWORKS WITH TRAFFIC PRIORITIZATION2025-10-02T14:45:57+03:00Denis TABORvottp@khmnu.edu.ua<p class="06AnnotationVKNUES"><em>The article investigates the technical capabilities and features of the new generation of wireless networks—the Wi-Fi 7 (IEEE 802.11be) standard, known as Extremely High Throughput (EHT). Primary attention is given to the implementation of the key technology Multi-Link Operation (MLO), which aims to provide a maximum throughput of up to 46 Gbit/s and significantly reduce latency. MLO technology allows stations (STA) and access points (AP) to simultaneously use multiple channels across different frequency bands (2.4, 5, and 6 GHz) for data transmission and reception. This is achieved through the simultaneous use of different channels, load distribution, and adaptive switching between frequencies, which enhances connection reliability.</em></p> <p class="06AnnotationVKNUES"><em>Different operating modes of Multi-Link Devices (MLD), classified by the number of radio interfaces and transmission type, are considered. Devices with a single radio interface are highlighted: MLSR (Multi-Link Single-Radio) and its enhanced version EMLSR (Enhanced Multi-Link Single-Radio). MLSR uses a single radio module to monitor multiple channels, but transmission occurs on one channel at a time. EMLSR manages channel switching more efficiently and can dynamically manage configurations (Nss, MCS, BW) for each channel, reducing latency. For devices with multiple radio interfaces (MLMR), the asynchronous mode STR-MLMR (Simultaneous Transmit and Receive Operation) and the synchronous mode NSTR-MLMR (Nonsimultaneous Transmit and Receive Operation) are investigated. STR-MLMR, having two or more radio modules, ensures simultaneous reception and transmission on different channels, which significantly increases throughput and is suitable for intensive traffic. NSTR-MLMR allows only reception or only transmission at any given moment across all channels, which is used to avoid inter-channel interference. The EMLMR (Enhanced Multi-link Multi-Radio) mode is also considered, which is an enhancement of STR-MLMR with the capability for dynamic modification of individual channel parameters and resource allocation.</em></p> <p class="06AnnotationVKNUES"><em>The operation of the Wi-Fi network was simulated in the NS3 simulator across three main scenarios, including device operation in SL (Single Link), EMLSR, and STR-MLMR modes. The simulation confirmed that multi-link devices significantly outperform single-link (SL) devices in performance, increasing throughput and reducing latency. In conditions of competition for channel access, the STR-MLMR mode proved to be the most productive due to the capability for simultaneous operation on multiple channels. The study also confirmed the more efficient operation of multi-link devices in the presence of competing SLDs.</em></p> <p class="06AnnotationVKNUES"><em>Based on the comparison results, the EMLSR and STR-MLMR modes are considered the most promising for application.
STR-MLMR provides high throughput and medium latency, requiring a more complex implementation, whereas MLSR and EMLSR are simpler but have lower throughput. NSTR-MLMR is suitable for conditions of high interference, but it has lower throughput compared to asynchronous modes. The research emphasizes the importance of correctly selecting the multi-link access mode according to network requirements.</em></p>2025-08-28T00:00:00+03:00Copyright (c) 2025 Денис ТАБОРhttps://vottp.khmnu.edu.ua/index.php/vottp/article/view/605ULTRA-WIDEBAND DISCONE ANTENNA FOR THE 3-7 GHZ FREQUENCY RANGE FOR INTERNET OF THINGS (IoT) INFORMATION AND MEASURING DEVICES2025-09-09T08:48:28+03:00Andrii SEMENOVsemenov.a.o@vntu.edu.uaAndrii KRYSTOFOROVandrew199910kr@gmail.com<p class="06AnnotationVKNUES"><em>This paper presents the results of a study on an ultra-wideband discone antenna designed to operate in the 3–7 GHz frequency range, covering the key standards of modern and emerging wireless communication systems oriented towards the Internet of Things (IoT). The relevance of the research is driven by the rapid growth of IoT measurement and monitoring devices, which require universal antenna solutions with a wide operational bandwidth, stable impedance matching, and the ability to support multiple standards simultaneously. The paper provides an overview of current approaches to broadband antenna design, including Vivaldi, log-periodic, and horn structures, and highlights the advantages of the discone antenna for IoT applications: structural simplicity, mechanical robustness, omnidirectional radiation pattern, and the ability to cover a wide spectrum without the need for complex matching circuits. To achieve the stated goal, a geometric model of the discone antenna was developed, electromagnetic simulations were carried out using modern software (MMANA-GAL basic), and the key characteristics were analyzed: voltage standing wave ratio (VSWR), gain, and radiation pattern. The results demonstrated that the proposed antenna provides stable operation in the 3–7 GHz frequency band with VSWR ≤ 2, gain in the range of 0–2.5 dBi, and an almost uniform omnidirectional radiation pattern. These properties confirm its suitability for IoT devices operating in 5G NR bands (n77, n78, n79), Wi-Fi 6E, C-V2X systems, and industrial wireless networks. The practical significance of this research lies in the development of a foundation for universal IoT measurement instruments capable of ensuring compatibility with multiple wireless communication standards. The obtained results can be applied in the design of IoT sensors, industrial controllers, vehicular communication systems, and medical devices. Future work is planned to focus on antenna miniaturization, integration into multi-channel MIMO systems, and adaptation for next-generation sixth-generation (6G) technologies.</em></p>2025-08-28T00:00:00+03:00Copyright (c) 2025 Андрій СЕМЕНОВ, Андрій КРИСТОФОРОВhttps://vottp.khmnu.edu.ua/index.php/vottp/article/view/608ANALYTICAL SOFTWARE PRODUCTS FOR EVALUATING THE EFFECTIVENESS OF VIDEO ADVERTISING IN SOCIAL NETWORKS2025-09-11T11:49:22+03:00Oleksandr TKACHENKOant@vntu.edu.uaSergiy KULISHkulish@vntu.edu.ua<p class="06AnnotationVKNUES"><em>The article considers the current problem of assessing the effectiveness of advertising video materials using modern software products. A comparison of the indicators of different software products is made. 
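<p>As a side note to the antenna abstract above, the VSWR figure it quotes relates to the reflection coefficient by a standard formula, sketched here in Python with arbitrary sample impedances:</p> <pre><code># VSWR from a complex load impedance relative to a 50-ohm system:
# Gamma = (Z_L - Z_0) / (Z_L + Z_0),  VSWR = (1 + |Gamma|) / (1 - |Gamma|).
def vswr(z_load: complex, z0: float = 50.0) -> float:
    gamma = abs((z_load - z0) / (z_load + z0))
    return (1 + gamma) / (1 - gamma)

print(round(vswr(50 + 0j), 2))    # perfectly matched load, VSWR = 1.0
print(round(vswr(75 + 10j), 2))   # mildly mismatched load, VSWR below 2
</code></pre>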
The study shows that the known methods of assessing advertising effectiveness are insufficient: it is necessary to develop a software system that, using the obtained analytics data, will analyze video resources according to more detailed indicators and compare changes in traditional analytics indicators before and after modifications made on the basis of the detailed analysis. The results of the study are a set of indicators for assessing the effectiveness of video advertising in social networks and the identified directions for detailing the analysis of each individual video resource.</em></p> <p class="06AnnotationVKNUES"><em>The choice of software product depends on the scale of the business, the goals of the SMM campaign and the required level of detail of the analysis. Multifunctional platforms are suitable for comprehensive management, while highly specialized services make it possible to obtain unique insights for optimizing specific aspects of the strategy. To analyze the video resources of an advertising campaign, it is necessary to examine the presence and mentions of the brand, the possibility of user feedback, color balancing, sound accompaniment, etc. In accordance with specific requirements, a tool for data collection is selected from certain software products.</em></p> <p class="06AnnotationVKNUES"><em>For a long period of time, built-in and external software analytical tools have been actively used in digital marketing. To analyze the effectiveness of video advertising, data collection on its effectiveness is carried out using selected analytical programs. However, such programs provide information but do not generate recommendations for a more detailed analysis of various video advertising objects. Further detailing can be performed based on the needs of the customer, as well as using special methods of balancing colors, text, sound and image, etc. The changes made should also be evaluated based on performance indicators. The choice of such indicators depends on the characteristics of video advertising, social media channels, and the specifics of the user audience. The results obtained allow us to summarize the main performance indicators, as well as identify gaps in such a comprehensive assessment.</em></p> <p class="06AnnotationVKNUES"><em>The plans for further research include the development of methods, models, and algorithms for the composition and decomposition of video advertising, the evaluation of its individual objects, the comprehensive balancing of indicators, and the formation of recommendations for changes with subsequent comparison of the results obtained.</em></p>2025-08-28T00:00:00+03:00Copyright (c) 2025 Олександр ТКАЧЕНКО, Сергій КУЛІШhttps://vottp.khmnu.edu.ua/index.php/vottp/article/view/616GRN-INFORMED CELLFLOW: ENHANCING CELL STATE TRAJECTORY INFERENCE WITH BIOLOGICAL REGULATORY NETWORKS2025-09-15T06:35:28+03:00Andrii SEMENOVsemenov.a.o@vntu.edu.uaVladyslav KUZNIAKkuzniakvl@gmail.com<p class="06AnnotationVKNUES"><em>This paper presents a study on integrating gene regulatory network (GRN) information into computational models for reconstructing cellular trajectories, with a particular focus on enhancing the CellFlow framework. Gene regulatory networks describe interconnected systems of genes and regulatory elements — including transcription factors and signaling pathways — that coordinate gene expression and ensure proper regulation of processes such as cell identity maintenance, lineage differentiation, and adaptive responses to environmental changes.
Mapping these interactions provides a foundation for understanding how gene activity patterns shape cellular behavior and transitions between states. In this work, we introduce GRN-informed CellFlow, an extension of the original CellFlow model that explicitly incorporates regulatory dependencies between genes. Unlike the baseline approach, which treats genes as independent features, the proposed method integrates known gene–gene relationships to guide trajectory reconstruction. To achieve this, we constructed a GRN matrix using zebrafish cell data: transcription factors were identified via UniProtKB annotations, while their interactions with target genes were inferred using correlation analysis and the GRNBoost2 algorithm. The resulting network Laplacian was employed as a regularizer during model training, enabling CellFlow to account for structured dependencies between genes. Experimental results showed that GRN integration slightly worsens the loss function compared to the classical CellFlow configuration, yet improves the biological interpretability of reconstructed trajectories. These findings highlight the potential of combining structured network information with algorithmic approaches to cell trajectory inference.</em></p>2025-08-28T00:00:00+03:00Copyright (c) 2025 Андрій СЕМЕНОВ, Владислав КУЗНЯКhttps://vottp.khmnu.edu.ua/index.php/vottp/article/view/621VISUAL SYSTEM FOR SETTING UP MACHINE LEARNING ALGORITHMS AND DATA2025-09-18T10:15:25+03:00Vitalina BABENKOvita.babenko@gmail.comSergiy LUCHENKOoleh.lantrat@gmail.comOleksandr BILYKoleksandr.bilyk@nure.uaYevhenii DROZDYKevgeniy.d97@gmail.com<p class="06AnnotationVKNUES"><em>The article presents the development of a visual system for designing and configuring machine learning algorithms and datasets, aimed at reducing the complexity of building deep learning models. Despite the rapid growth of artificial intelligence and neural networks in recent decades, the creation and configuration of models have remained accessible only to specialists with strong programming skills. This work addresses the gap by developing Learn2Learn, an open-source software tool with a graphical interface that allows users to construct, customize, and train deep learning models almost entirely without coding.</em></p> <p class="06AnnotationVKNUES"><em>The study emphasizes the practical relevance of integrating graphical user interfaces (GUIs) into machine learning workflows, enabling a broader range of users, including researchers, educators, and beginners, to interact with neural networks more intuitively. The system is structured around key stages of machine learning: data loading and preprocessing, model construction, selection of loss functions and optimization algorithms, and monitoring of training progress through visualized metrics. Unlike most existing tools, Learn2Learn supports the integration of custom dataloaders and model layers, ensuring flexibility comparable to traditional coding while significantly lowering the entry threshold.</em></p> <p class="06AnnotationVKNUES"><em>The article provides an overview of implemented functionalities: visual model construction using drag-and-drop neural network layers, interactive parameter adjustment, real-time error visualization, and integrated recommendations for model design. The program supports diverse data types, including images, text, and numerical values, and allows preprocessing through augmentation techniques. 
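<p>As a rough sketch of the Laplacian regularization strategy described in the GRN-informed CellFlow abstract above: the graph Laplacian of the inferred gene–gene network can be added to a task loss so that predicted expression values of strongly connected genes are encouraged to agree. The tensor shapes, the plain MSE task term, and the weighting coefficient lam are illustrative assumptions and do not reproduce the actual CellFlow objective.</p>
<pre><code>
import torch

# Illustrative sketch: adding a GRN-Laplacian penalty to a training loss.
# A: (n_genes, n_genes) adjacency weights of the inferred regulatory network.
def laplacian(A: torch.Tensor) -> torch.Tensor:
    D = torch.diag(A.sum(dim=1))          # degree matrix
    return D - A                          # combinatorial graph Laplacian

def regularized_loss(pred: torch.Tensor, target: torch.Tensor,
                     L: torch.Tensor, lam: float = 1e-3) -> torch.Tensor:
    # Task term (placeholder): mean squared error on expression profiles.
    task = torch.nn.functional.mse_loss(pred, target)
    # Smoothness term: x^T L x averaged over the batch; large when the
    # predicted expression of strongly linked genes disagrees.
    smooth = torch.einsum("bi,ij,bj->b", pred, L, pred).mean()
    return task + lam * smooth

if __name__ == "__main__":
    n_genes = 5
    A = torch.rand(n_genes, n_genes); A = 0.5 * (A + A.T)  # symmetric toy GRN
    L = laplacian(A)
    pred = torch.rand(8, n_genes, requires_grad=True)
    target = torch.rand(8, n_genes)
    loss = regularized_loss(pred, target, L)
    loss.backward()                        # gradients flow through the penalty
    print(float(loss))
</code></pre>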
By combining PyTorch as the computational backbone with PyQt6 for GUI design, the authors demonstrate how the system maintains both usability and technical rigor.</em></p> <p class="06AnnotationVKNUES"><em>A comparative analysis with existing visual tools highlights the advantages of Learn2Learn in flexibility, expandability, and error handling. In particular, user-friendly error messages and interactive hints help prevent common mistakes, making the system not only a development tool but also an educational platform. The authors emphasize that Learn2Learn is still at a prototyping stage, with future improvements planned, such as integration of standard datasets, pre-trained models, and distributed training on remote servers.</em></p> <p class="06AnnotationVKNUES"><em>The article concludes that the developed system significantly reduces barriers to entry into deep learning by providing an accessible, extensible environment for model creation. The prototype illustrates the feasibility of unifying the simplicity of visual interfaces with the flexibility of code-based programming, opening prospects for both educational applications and practical research in artificial intelligence.</em></p>2025-08-28T00:00:00+03:00Copyright (c) 2025 Віталіна БАБЕНКО, Сергій ЛУЧЕНКО, Олександр БІЛИК, Євгеній ДРОЗДИКhttps://vottp.khmnu.edu.ua/index.php/vottp/article/view/516INFORMATION PROTECTION METHOD BASED ON THE SYSTEM OF RESIDUAL CLASSES IN THE FORMATION OF TIMER SIGNALS 2025-05-15T13:59:43+03:00Volodymyr KORCHYNSKYIvladkorchin@ukr.netSerhii HAVELarkominer@gmail.comKostiantyn SIEDOVsedovmail2@gmail.comIevhen SEVASTIEIEVseva.odessa@gmail.comIhor LIMARquantum.biology@outlook.com<p><em>The article considers an information protection method based on timer signal constructions, the impulses of which are formed using the system of residual classes. The use of timer signals makes it possible to complicate the structure of combinations, which increases their structural secrecy. The system of residual classes makes it possible to divide integral information into a set of independent residues according to pre-selected pairwise coprime moduli, which opens new possibilities for building signal constructions that can be used to increase the noise immunity of data transmission.</em></p> <p><em>Particular attention is given to the use of the residual class system combined with redundant moduli for the formation of correction codes, which provide detection and correction of errors caused by interference, intentional influences or technical failures.</em> <em>The paper analyses in detail practical algorithms for applying the residual class system to form timer signal constructions, in which information is indicated in the form of pulses with a certain time interval. At the same time, the signal constructions acquire the properties of a noise-immune code and structural secrecy, which to a certain extent provide protection against unauthorized access due to the uncertainty of the signal structure.</em> <em>The properties of structural secrecy complicate the process of determining the signal structure in the event of message interception by enemy electronic intelligence. Mathematical examples of the formation of residual vectors and the formation of timer signal constructions are given.
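<p>A short sketch of the arithmetic behind the residual class system mentioned above, assuming small pairwise coprime moduli chosen for illustration: an integer is split into residues, reconstructed with the Chinese Remainder Theorem, and checked against one redundant residue so that a corrupted digit can be detected. This is not the authors' timer-signal construction, only the underlying number-system mechanics.</p>
<pre><code>
# Illustrative residual-class encoding with one redundant modulus.
MODULI = (3, 5, 7)        # information moduli (pairwise coprime)
REDUNDANT = 11            # redundant modulus used only for error detection

def to_residues(x: int) -> list[int]:
    return [x % m for m in (*MODULI, REDUNDANT)]

def crt(residues, moduli) -> int:
    """Reconstruct x (mod prod(moduli)) from its residues."""
    M = 1
    for m in moduli:
        M *= m
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)      # modular inverse of Mi mod m
    return x % M

def decode(residues) -> tuple[int, bool]:
    """Return (value, consistent) using the information residues only."""
    value = crt(residues[:len(MODULI)], MODULI)
    consistent = (value % REDUNDANT) == residues[-1]
    return value, consistent

if __name__ == "__main__":
    vec = to_residues(47)                 # [2, 2, 5, 3]
    print(decode(vec))                    # (47, True)
    vec[1] = (vec[1] + 1) % 5             # corrupt one residue
    print(decode(vec))                    # wrong value, flagged as inconsistent
</code></pre>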
The obtained results indicate the feasibility of using the residual class system as one of the promising areas in the design of secure communication and information and control systems, particularly in the military sector, the Internet of Things, critical infrastructure, and real-time devices.</em></p>2025-08-28T00:00:00+03:00Copyright (c) 2025 Володимир КОРЧИНСЬКИЙ, Сергій ГАВЕЛЬ, Костянтин СЄДОВ, Євген СЕВАСТЄЄВ, Ігор ЛІМАРЬhttps://vottp.khmnu.edu.ua/index.php/vottp/article/view/602INVESTIGATION OF MOTOR OIL DEGRADATION BY FLUORESCENCE METHOD2025-09-08T00:55:13+03:00Hanna DOROZINSKAannakushnir30@ukr.netVitalii LYTVYNlytvet@ukr.netGlib Dorozinskygvdorozinsky@ukr.net<p><em>This article presents a study aimed at developing fluorescence spectroscopy as an express method for diagnosing the condition of motor oil, which can serve as an alternative to standard laboratory analyses.</em></p> <p><em>The work investigated the degradation of synthetic motor oil using fluorescence analysis and acid number determination. Fluorescence spectra were measured with a FluoroTest fluorimeter, and the acid number was determined by potentiometric titration with potassium hydroxide. The analysis used samples of fresh oil and used oil after the car had traveled distances of 180, 430, 720, and 910 km.</em></p> <p><em>It is shown that with increasing mileage, the fluorescence intensity decreases, and the ratio of peaks at 584 nm and 610 nm changes, indicating structural changes in the oil’s composition. The results revealed that the acid number of the oil increases with mileage, which points to its oxidation. A motor oil degradation coefficient (K<sub>dmo</sub>) is proposed, defined as the ratio of the intensities of the mentioned peaks, and which increases proportionally with the acid number at the early stages of use. A linear correlation between the acid number and K<sub>dmo</sub> was established for mileage up to 500 km (R<sup>2</sup>=0.9913).</em></p> <p><em>The obtained results confirm the feasibility of using spectroscopic methods in conjunction with titrimetric analysis for the rapid assessment of motor oil condition and the control of its operational characteristics.</em></p>2025-08-28T00:00:00+03:00Copyright (c) 2025 Ганна ДОРОЖИНСЬКА, Віталій ЛИТВИН, Гліб ДОРОЖИНСЬКИЙhttps://vottp.khmnu.edu.ua/index.php/vottp/article/view/585A GENERALIZED MODEL OF AN INTELLIGENT SYSTEM FOR FORECASTING AND ANOMALY DETECTION IN CYBERINFRASTRUCTURE BASED ON DEEP LEARNING2025-09-18T09:24:53+03:00Volodymyr SHULHAmdes@khmnu.edu.uaIhor IVANCHENKOmdes@khmnu.edu.uaMykola RYZHAKOVmdes@khmnu.edu.ua<p class="06AnnotationVKNUES"><em>This paper proposes a generalized intelligent system for forecasting and detecting anomalies in cyberinfrastructures. The aim is to improve the effectiveness of cyber-threat detection by integrating modern deep learning methods (autoencoders and multilayer perceptrons) with an adaptive event-criticality analysis mechanism. The key innovation is a semantic attribution module for cyber incidents with XAI explanations and integrated risk scoring: it performs deep content analysis of traffic, forms vector representations, matches events against a case base, estimates attribution confidence, and passes the resulting risk score to the criticality module. The proposed system not only identifies anomalous events in real time but also forecasts possible deviations based on historical data, strengthening preventive capabilities. 
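<p>A minimal sketch of the reconstruction-error principle behind the autoencoder component mentioned above: the model is trained on presumably normal telemetry, and events whose reconstruction error exceeds a simple threshold are flagged as anomalous. The layer sizes, the mean-plus-three-sigma threshold, and the synthetic data are illustrative assumptions rather than the architecture evaluated in the paper.</p>
<pre><code>
import torch
from torch import nn

# Illustrative autoencoder-based anomaly scoring for fixed-size telemetry vectors.
class AE(nn.Module):
    def __init__(self, n_features: int, hidden: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU())
        self.decoder = nn.Linear(hidden, n_features)

    def forward(self, x):
        return self.decoder(self.encoder(x))

def fit_and_threshold(model, normal, epochs=50, lr=1e-2):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(normal), normal)
        loss.backward()
        opt.step()
    with torch.no_grad():
        err = ((model(normal) - normal) ** 2).mean(dim=1)
    return err.mean() + 3 * err.std()      # simple "mean + 3 sigma" rule

if __name__ == "__main__":
    torch.manual_seed(0)
    normal = torch.randn(256, 20) * 0.1            # synthetic "normal" features
    model = AE(n_features=20)
    thr = fit_and_threshold(model, normal)
    suspicious = torch.randn(4, 20) * 2.0           # out-of-distribution samples
    scores = ((model(suspicious) - suspicious) ** 2).mean(dim=1)
    print((scores > thr).tolist())                  # likely all True
</code></pre>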
The architecture comprises modular subsystems for telemetry collection, behavior reconstruction, forecasting, anomaly aggregation, semantic attribution and risk scoring, criticality assessment, response, and self-learning; their interaction is implemented as an end-to-end processing pipeline with a feedback loop. The solution is scalable and compatible with SDN, IoT, cloud environments, and enterprise SIEM/SOAR platforms. Empirical evaluations in simulated network-attack scenarios (DoS, port scanning, brute-force, botnet activity) demonstrated high classification performance (F1 = 0.89), confirming the practical effectiveness and reliability of the approach. The conclusions highlight the promise of deploying the system amid increasing cyber-threat complexity and its ability to adapt without full model retraining.</em></p>2025-08-28T00:00:00+03:00Copyright (c) 2025 Володимир ШУЛЬГА, Ігор ІВАНЧЕНКО, Микола РИЖАКОВhttps://vottp.khmnu.edu.ua/index.php/vottp/article/view/620ANALYSIS OF LDPC AND POLAR CODE DECODING ALGORITHMS IN THE 5G STANDARD: EVALUATION OF COMPLEXITY AND EFFICIENCY2025-09-18T09:51:45+03:00Juliy BOIKOboiko_julius@ukr.netDaria SUBERLIAKsuberlyak.do@x-city.ua<p><em>This paper presents a detailed analysis of the physical layer architecture of the 5G New Radio (5G NR) standard, with a particular focus on the channel coding schemes and their decoding algorithms, which are essential to achieving the stringent performance requirements of next-generation wireless communication systems. The study explores the rationale behind the adoption of Low-Density Parity-Check (LDPC) codes for downlink and uplink data channels carrying large payloads, due to their excellent error correction capabilities and near-capacity performance. In parallel, Polar codes are examined as the optimal choice for control channels with short block lengths, particularly in uplink scenarios, owing to their low complexity and robustness in low-latency environments.</em></p> <p><em>A comprehensive overview of key decoding algorithms for both LDPC and Polar codes is provided, including Belief Propagation (BP), Min-Sum, Offset Min-Sum (OMS), and Normalized Min-Sum (NMS) for LDPC decoding, and Successive Cancellation (SC), Successive Cancellation List (SCL), Cyclic Redundancy Check-aided SCL (CA-SCL), and SC Flip algorithms for Polar codes. The trade-offs between computational complexity, decoding latency, and bit error rate (BER) performance are discussed in detail. Special attention is given to the performance of these decoding schemes under different service scenarios defined in 5G NR, including enhanced Mobile Broadband (eMBB), Ultra-Reliable Low Latency Communications (URLLC), and massive Machine Type Communications (mMTC). Simulation results demonstrate that while BP and Min-Sum offer acceptable performance with high parallelism for LDPC codes, CA-SCL decoding significantly enhances Polar code reliability, particularly when assisted by CRC verification.</em></p> <p><em>The paper also addresses the impact of coding and decoding strategies on energy efficiency and system throughput in real-time 5G deployments. These findings are crucial for base station and user equipment manufacturers striving to balance complexity and performance. 
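<p>To illustrate one of the LDPC decoder components discussed above, the following sketch implements the check-node update of the Normalized Min-Sum approximation: each outgoing message combines the signs of the other incoming log-likelihood ratios with their minimum magnitude, scaled by a normalization factor. The value of alpha and the toy LLR vector are assumptions for illustration only.</p>
<pre><code>
import numpy as np

def nms_check_node(llrs: np.ndarray, alpha: float = 0.8) -> np.ndarray:
    """Normalized Min-Sum check-node update.

    llrs  : incoming LLR messages from the variable nodes on one check.
    return: outgoing message per edge, computed from the *other* edges.
    """
    signs = np.sign(llrs)
    signs[signs == 0] = 1.0
    mags = np.abs(llrs)
    total_sign = np.prod(signs)
    out = np.empty_like(llrs)
    for i in range(len(llrs)):
        others = np.delete(mags, i)
        # sign of the other edges = total_sign / signs[i] = total_sign * signs[i]
        out[i] = alpha * total_sign * signs[i] * others.min()
    return out

if __name__ == "__main__":
    incoming = np.array([+2.1, -0.7, +1.4, -3.0])
    print(nms_check_node(incoming))
</code></pre>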
Overall, the insights presented contribute to the ongoing development and optimization of reliable, high-capacity, and low-latency 5G NR physical layer technologies.</em></p>2025-08-28T00:00:00+03:00Copyright (c) 2025 Юлій БОЙКО, Дар'я СУБЕРЛЯКhttps://vottp.khmnu.edu.ua/index.php/vottp/article/view/541DEVELOPMENT OF A SYSTEM FOR TESTING THE PRODUCTIVITY OF CORPORATE INFORMATION SYSTEMS BASED ON THE BALANCED SCOREBOARD MODEL2025-09-18T09:09:25+03:00Olena KOVALENKOok@vntu.edu.uaYuriy STOROZUKystorozhuk@vntu.edu.ua<p class="06AnnotationVKNUES"><em>The article considers the current problem of developing a system for testing the productivity of corporate information systems based on the balanced scorecard model. Such a system is actively used to measure the efficiency of an enterprise, corporation and can be adapted to assess the productivity of a software product. The article provides a brief overview of modern approaches to testing the productivity of RIS, justifies the use of the BSC, develops a conceptual model of the testing system, describes practical implementation and formulates conclusions and prospects for further research.</em></p> <p class="06AnnotationVKNUES"><em>A system of productivity projections, key indicators, and examples of using software products is proposed. Based on the research results, recommendations for testing the performance of distributed information systems using the proposed projections were formulated. The article proposes a conceptual and practical model of the performance testing system for corporate distributed information systems based on the balanced scorecard model. The main scientific novelty is the integration of technical, business, process and innovation indicators into a single evaluation system, which allows to increase the objectivity and relevance of testing results. The practical significance of the developed system lies in the possibility of its adaptation to different types of corporate systems and their software, increasing the efficiency of testing and improving the quality of managerial decision-making.</em></p> <p class="06AnnotationVKNUES"><em>Prospects for further research are related to the automation of building MRP models for various industries, the introduction of machine learning methods to predict productivity changes, the development of tools for deeper integration with DevOps processes, and the expansion of the scorecard taking into account the specifics of the latest architectures (serverless, edge computing).</em></p>2025-08-28T00:00:00+03:00Copyright (c) 2025 Олена КОВАЛЕНКО, Юрій СТОРОЖУКhttps://vottp.khmnu.edu.ua/index.php/vottp/article/view/372MODELING OF HEAT TRANSFER AND HYDRODYNAMIC PROCESSES IN A FLAT OVAL PIPE USING SOLIDWORKS FLOW SIMULATION2025-09-18T09:00:36+03:00Olha SVYNCHUK7011990@ukr.netYaroslav KLYMENKOyaroslav.klymenko98@gmail.com<p class="06AnnotationVKNUES"><em>The paper presents a comprehensive study of heat transfer and hydrodynamic processes in a flat-oval profile pipe using advanced numerical methods and laboratory tests. The research is based on experimental data obtained from a specially designed experimental stand equipped with an electric heater providing stable boundary conditions under constant heat flux. 
Modeling of the processes was conducted using SolidWorks Flow Simulation software, allowing for the detailed analysis of flow characteristics and heat transfer phenomena inherent to non-standard pipe profiles.</em></p> <p class="06AnnotationVKNUES"><em>Special attention is given to analyzing the deviations between numerical simulation results and laboratory experiments. It was found that, despite relatively good agreement, there is notable sensitivity of the results to selected numerical model settings, particularly turbulence parameters and mesh quality. Recommendations are provided for improving modeling accuracy, which can be applied in future research and in the design of heat power equipment.</em></p> <p class="06AnnotationVKNUES"><em>Beyond the technical details, this article also explores fundamental issues concerning the interaction and synergy between experimental and numerical methods in contemporary science, their influence on the development of engineering thought, and their potential to optimize the design and analysis of complex heat-power systems. The findings of this study can be beneficial for further scientific developments in thermal energy engineering, as well as for enhancing the efficiency of industrial heat-exchange equipment.</em></p>2025-08-28T00:00:00+03:00Copyright (c) 2025 Ольга СВИНЧУК, Ярослав КЛИМЕНКОhttps://vottp.khmnu.edu.ua/index.php/vottp/article/view/583COMPARATIVE ANALYSIS OF THE EFFECTIVENESS OF MACHINE LEARNING METHODS FOR CYBER INCIDENT DETECTION2025-09-18T08:53:01+03:00Mykola KONOTOPETSnikolyalux@gmail.comOleksandr TUROVSKYs19641011@ukr.netAndriy BOURDEINNYburdes228@gmail.comAnton STORCHAKstorchakanton@gmail.com<p class="06AnnotationVKNUES"><em>The article presents a comparative analysis of modern machine learning methods (supervised, unsupervised, and reinforcement learning) for detecting cybersecurity incidents in corporate information and communication systems. The advantages and limitations of the most common algorithms, including Decision Tree, Naive Bayes, SVM, Isolation Forest, K-Means, BERT, GPT, DQN, PPO, and Soft Actor–Critic, are discussed in terms of accuracy, recall, precision, and false positive rate. The CICIDS 2018 dataset was used for experimental evaluation, allowing the practical applicability of these methods for detecting both known threats and zero-day attacks to be assessed. The study found that decision tree models demonstrate the highest accuracy and the lowest false positive rates for conventional threats, while the Isolation Forest algorithm is the most effective for detecting anomalous activity and new types of attacks. An optimized approach is proposed, combining supervised learning (Decision Trees) for detecting known threats with unsupervised anomaly detection (Isolation Forest) to minimize false positives and enhance system adaptability. The results obtained can be used to build efficient cybersecurity systems capable of promptly responding to modern threats while considering resource constraints and the need to reduce false positives. Particular attention is given to assessing the impact of model parameters on their performance and scalability in high-traffic environments. The possibilities of integrating machine learning with existing security monitoring systems for incident detection automation are considered. Directions for future research are identified, including the development of hybrid models to increase resilience to zero-day attacks. 
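<p>A compact sketch of the combination proposed above, in which a supervised decision tree covers known attack classes while an Isolation Forest flags anomalous traffic that may correspond to previously unseen attacks. The synthetic features, the OR-style fusion rule, and the hyperparameters are illustrative assumptions; the paper's experiments use the CICIDS 2018 dataset and its own configuration.</p>
<pre><code>
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Synthetic stand-in for labelled flow features: 0 = benign, 1 = known attack.
X_train = rng.normal(size=(500, 10))
y_train = (X_train[:, 0] + X_train[:, 1] > 1.5).astype(int)

tree = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_train, y_train)
# Train the anomaly detector on benign traffic only.
iso = IsolationForest(contamination=0.02, random_state=0).fit(X_train[y_train == 0])

def detect(x: np.ndarray) -> bool:
    """Flag an event if it matches a known attack OR looks anomalous."""
    known_attack = tree.predict(x.reshape(1, -1))[0] == 1
    anomalous = iso.predict(x.reshape(1, -1))[0] == -1
    return bool(known_attack or anomalous)

if __name__ == "__main__":
    benign_like = rng.normal(size=10)
    oddball = rng.normal(size=10) * 6.0      # far outside the training distribution
    print(detect(benign_like), detect(oddball))
</code></pre>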
The study concludes that machine learning should be considered a key component of modern cybersecurity strategies.</em></p>2025-08-28T00:00:00+03:00Copyright (c) 2025 Микола КОНОТОПЕЦЬ, Олександр ТУРОВСЬКИЙ, Андрій БУРДЕЙНИЙ, Антон СТОРЧАКhttps://vottp.khmnu.edu.ua/index.php/vottp/article/view/371LIGHTWEIGHT CRYPTOSYSTEMS IN IOT AND 5G TELECOMMUNICATION ENVIRONMENTS: CHALLENGES, ARCHITECTURES, AND SECURITY TRADE-OFFS2025-09-18T09:39:14+03:00Juliy BOIKOboiko_julius@ukr.netViktor MISHANV_mishan@ukr.netDmytro YAVTUSHENKOdima.chef98@gmail.com<p><em>This paper provides a comprehensive analysis of the security architecture in 5G networks, focusing on a multi-layered protection framework that ensures confidentiality, integrity, and authenticity across various communication interfaces. The study examines three primary security layers—Non-Access Stratum (NAS) security, Access Stratum (AS) security, and transport security—highlighting their specific roles in safeguarding signaling, radio channels, and interactions between network components, respectively. Key cryptographic algorithms standardized in 5G, including AES, ZUC, and SNOW 3G, are analyzed in terms of their encryption and integrity verification capabilities, computational efficiency, and suitability for various deployment scenarios such as mobile broadband, IoT, and ultra-reliable low-latency communications (URLLC).</em></p> <p><em>A detailed examination of the hierarchical key management system reveals the cascading generation of cryptographic keys from a root key securely stored in the user’s USIM and the home network. This structure minimizes risks by isolating potential compromises within localized key subsets, thereby preserving overall system security. The paper also discusses transport-layer protocols such as IPsec, TLS, and DTLS, which protect inter-network communication channels between base stations, user plane functions, and core network elements.</em></p> <p><em>Furthermore, the study addresses emerging challenges in 5G security, including the need for enhanced protection mechanisms against evolving threats, integration of quantum-resistant algorithms, and adaptation to virtualization and software-defined networking (SDN) paradigms. The findings offer valuable insights for researchers and industry practitioners aiming to optimize security solutions while maintaining performance and energy efficiency in next-generation mobile networks.</em></p>2025-08-28T00:00:00+03:00Copyright (c) 2025 Юлій БОЙКО, Віктор МІШАН, Дмитро ЯВТУШЕНКОhttps://vottp.khmnu.edu.ua/index.php/vottp/article/view/619METHOD OF FORMING AN AI PLATFORM FOR EMBEDDED COMPUTING SYSTEMS USING COGNITIVE TECHNOLOGIES2025-09-18T08:37:25+03:00Hryhoriy POTAPOVpgm201602@gmail.comVolodymyr RUSINOVv.rusinov.io11f@kpi.ua<p class="06AnnotationVKNUES"><em>The article considers the procedure for forming a cognitive artificial intelligence (AI) platform that serves as the foundation for the development of embedded computing systems. It is emphasized that the creation of such a platform is a complex and strategically important process, since it involves the analysis of vast amounts of poorly structured and unstructured data, which significantly complicates information processing. The authors argue that the efficiency of applying modern information technologies depends not only on the volume and quality of available information but also on the level of interaction between different technological components. 
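<p>The cascading key hierarchy described in the 5G security abstract above can be pictured with a generic HMAC-based derivation chain, in which each lower-layer key is derived from its parent plus a context label, so compromising one branch does not expose its siblings. The sketch below uses Python's standard hmac and hashlib modules with invented context labels; it loosely mirrors the 5G key naming but is not the 3GPP-specified key derivation function.</p>
<pre><code>
import hmac, hashlib

def derive(parent: bytes, context: str) -> bytes:
    """Generic HMAC-SHA-256 derivation of a child key (illustrative only)."""
    return hmac.new(parent, context.encode(), hashlib.sha256).digest()

if __name__ == "__main__":
    k_root = bytes(32)                               # stand-in for the long-term root key
    k_ausf = derive(k_root, "serving-network-id")    # invented context labels
    k_amf = derive(k_ausf, "nas-context")
    k_nas_enc = derive(k_amf, "nas-encryption")
    k_gnb = derive(k_amf, "as-context")
    k_rrc_int = derive(k_gnb, "rrc-integrity")
    # Keys in one branch reveal nothing about keys in a sibling branch.
    print(k_nas_enc.hex()[:16], k_rrc_int.hex()[:16])
</code></pre>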
This interaction predetermines the network orientation of the platform, while in the system environment, tools that integrate information resources from diverse fields of knowledge must be applied.</em></p> <p class="06AnnotationVKNUES"><em>To enhance the efficiency of processing information flows, the article proposes the use of cognitive information technologies incorporating elements of artificial intelligence. A special emphasis is placed on the role of monitoring information processes in embedded computing systems as a basis for generating output data. Within this framework, a new method for forming an AI platform using cognitive technologies is presented. This method ensures the structuring of information processes, the creation of ontologies, and the ranking of these processes to determine the most rational option for their processing.</em></p> <p class="06AnnotationVKNUES"><em>The scientific novelty of the proposed approach lies in integrating cognitive models into embedded systems, thereby enabling adaptive decision-making and more flexible responses to dynamic environments. The practical significance is in the potential application of the AI platform to a wide range of embedded systems, including information-analytical, industrial, and control systems, where high-speed data processing and intelligent analysis are crucial.</em></p> <p class="06AnnotationVKNUES"><em>The article concludes that the proposed method provides a systematic basis for developing embedded computing systems with higher levels of autonomy and cognitive adaptability. Future research will focus on practical implementation, testing the AI platform in real embedded environments, and expanding its use for interdisciplinary applications, thus contributing to the advancement of intelligent computing technologies.</em></p>2025-08-28T00:00:00+03:00Copyright (c) 2025 Григорій ПОТАПОВ, Володимир РУСІНОВhttps://vottp.khmnu.edu.ua/index.php/vottp/article/view/584ANALYSIS OF DYNAMIC DATA CONSISTENCY MODELS IN DISTRIBUTED DATABASE MANAGEMENT SYSTEMS2025-07-23T20:50:29+03:00Andrii MYRHORODSKYImirgorodskijav@gmail.comOksana ROMANIUKromaniukoksanav@gmail.comOleksandr ROMANIUKrom8591@gmail.com<p><em>The article is devoted to the analysis of modern dynamic data consistency models in distributed database management systems (DBMS). Traditional distributed DBMS used static consistency models that did not consider system state or data access patterns. However, growing data scales, transaction complexity, and performance requirements demand new approaches.</em></p> <p><em>The research analyzes three main dynamic consistency models. The context-oriented model consists of consistency blocks, consistency policies, and context descriptors that dynamically determine consistency levels based on operation context and system state. The CAnDoR model uses data segmentation with continuous measurement of synchronization and response times to automatically distribute segments among nodes. The R-TBC/RTA model addresses network partitioning problems by forming hierarchical tree structures with different consistency guarantees for primary and secondary nodes. Various options for the practical implementation of mechanisms for individual models are also considered.</em></p> <p><em>Comparative analysis was conducted based on architectural criteria (concept, consistency determination, data placement), adaptive criteria (load monitoring, time adaptation, fault tolerance), and integration possibilities. 
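<p>As a loose illustration of the context-oriented model summarized above, in which consistency policies evaluate context descriptors to select a consistency level per operation, the sketch below maps a few descriptor fields to a level. The descriptor fields, thresholds, and level names are invented for illustration and are not taken from the cited models.</p>
<pre><code>
from dataclasses import dataclass
from enum import Enum

class Level(Enum):
    STRONG = "strong"
    BOUNDED = "bounded-staleness"
    EVENTUAL = "eventual"

@dataclass
class Context:                # illustrative context descriptor
    is_write: bool
    staleness_budget_ms: int  # how stale a read is allowed to be
    replication_lag_ms: int   # measured state of the system
    conflict_rate: float      # observed fraction of conflicting updates

def pick_level(ctx: Context) -> Level:
    """Toy consistency policy: stricter levels for writes and conflict-prone data."""
    if ctx.is_write or ctx.conflict_rate > 0.05:
        return Level.STRONG
    if ctx.replication_lag_ms <= ctx.staleness_budget_ms:
        return Level.EVENTUAL          # lag already within the budget
    return Level.BOUNDED

if __name__ == "__main__":
    print(pick_level(Context(False, 500, 120, 0.01)))   # Level.EVENTUAL
    print(pick_level(Context(True, 500, 120, 0.01)))    # Level.STRONG
</code></pre>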
The comparison revealed that simpler models like context-oriented provide higher flexibility and easier integration but depend heavily on implementation, while complex models like R-TBC/RTA offer better fault tolerance guarantees but require more sophisticated implementation. The use of architecture based on more complex models together with separate mechanisms of simpler models is promising, as it may allow the creation of a more universal dynamic model of data consistency.</em></p> <p><em>The results show that all models have effective consistency mechanisms for specific scenarios but differ significantly in implementation complexity. The analysis can be used for developing custom dynamic consistency models and advancing distributed systems research.</em></p>2025-08-28T00:00:00+03:00Copyright (c) 2025 Андрій МИРГОРОДСЬКИЙ, Оксана РОМАНЮК, Олександр РОМАНЮКhttps://vottp.khmnu.edu.ua/index.php/vottp/article/view/623POSTURAL CONTROL AND GAIT ALTERATIONS IN YOUNG ADULT TOBACCO AND E-CIGARETTE USERS: A COMPARATIVE STABILOMETRIC AND TREADMILL-BASED ANALYSIS2025-09-18T13:17:19+03:00Joanna CHWAŁjoanna.chwal@polsl.plHanna ZADOŃhanna.zadon@polsl.plPiotr SZAFLIKpiotr.szafik@polsl.plRadosław DZIKradoslaw.dzik@akademiaslaska.plAnna FILIPOWSKAanna.filipowska@polsl.plRafał DONIECrafal.doniec@polsl.plPaweł KOSTKApawel.kostka@polsl.plRobert MICHNIKrobert.michnik@polsl.pl<p><em>The research investigates how tobacco and electronic cigarette (e-cigarette) consumption affects postural control and walking patterns in young adult populations. The study included 60 participants who were divided into three groups of 20 each: non-smokers and traditional smokers and e-cigarette users. The participants completed stabilometric tests under static conditions with eyes open and closed while undergoing treadmill-based dynamic gait analysis. The researchers used parametric or non-parametric statistical tests together with Spearman’s correlation and principal component analysis (PCA) and supervised machine learning classifiers to analyze biomechanical features. The study revealed substantial differences between non-smokers and e-cigarette users regarding body mass index (BMI) and foot force distribution and walking speed and step length measurements. Correlation analyses revealed strong associations between center of pressure dynamics and plantar pressure distribution, with group-specific interaction patterns. PCA demonstrated partial group separation, especially for non-smokers versus e-cigarette users. Machine learning models, especially logistic regression, achieved the highest classification accuracy (up to 82.8%) in distinguishing non-smokers from e-cigarette users. These findings suggest that habitual use of tobacco or e-cigarettes may influence balance and locomotor control in subtle but measurable ways, with potential implications for neuromuscular health monitoring in young populations.</em></p>2025-08-28T00:00:00+03:00Copyright (c) 2025 Joanna CHWAŁ, Hanna ZADOŃ, Piotr SZAFLIK, Radosław DZIK, Anna FILIPOWSKA, Rafał DONIEC, Paweł KOSTKA, Robert MICHNIKhttps://vottp.khmnu.edu.ua/index.php/vottp/article/view/542APPLICATION OF FUZZY LOGIC IN PROCESSING THE RESULTS OF MEDICAL RESEARCH2025-09-19T09:34:13+03:00Volodymyr YEREMENKOnau_307@ukr.netOlena MONCHENKOmonchenko_olena@ukr.netValentyna KUCHERENKOmarkiza2021@ukr.netOleksandra SYDNIVETSsasha.sydnivets@gmail.comTaras MONCHENKOmtm082220-ames27@lll.kpi.ua<p><em>The thesis presents information technology for processing medical indicators using fuzzy logic. 
The application of such technology to the distribution of endocrinological parameters makes it possible to assess the correctness of decision-making and the probability of false conclusions. Of the 29 endocrinological indicators, four key ones that have the greatest impact on the final result were identified: body mass index (BMI), total vitamin D3, total cholesterol, and fasting blood glucose. The concave curve rule was used to calculate probabilities, which allows for a more accurate assessment of risks and the reliability of conclusions.</em></p> <p><em>The practical implementation of the use of terms of fuzzy logic when comparing two ways of treating hypertension and obesity is shown. The research was carried out at the clinical bases of the Department of Family Medicine and Outpatient Polyclinic Care of P. L. Shupyka. Patients with hypertension and obesity were divided into two groups, randomized by age, sex, and comorbid pathology, who were given two types of treatment: the main group (M2) received treatment 1, the experimental group (M3) received treatment 2. The thesis investigates the optimization of complex therapy and diagnosis of patients with arterial hypertension and obesity in primary medical practice and the establishment of interrelationships between different treatment methods and confirmation of the effectiveness of treatment using terms of fuzzy logic. The study confirmed the importance of accounting for individual physiological characteristics. Results indicate that endocrinological indicators are highly individual: values considered normal for one person may be critical for another. This underscores the necessity of a personalized approach in diagnosing and treating endocrine disorders. The influence of measurement errors on analysis accuracy was also noted, necessitating further methodological improvements. Future research in this field will enhance diagnostic quality and the effectiveness of medical decisions. </em></p>2025-08-28T00:00:00+03:00Copyright (c) 2025 Володимир ЄРЕМЕНКО, Олена МОНЧЕНКО, Валентина КУЧЕРЕНКО, Олександра СИДНІВЕЦЬ, Тарас МОНЧЕНКОhttps://vottp.khmnu.edu.ua/index.php/vottp/article/view/581ONTOLOGICAL APPROACH TO CREATING SUBJECT-ORIENTED TECHNOLOGIES IN THE FIELD OF IoT2025-07-21T12:40:35+03:00Bogdan MASLYIAKbm@wunu.edu.uaNataliia VOZNAnvozna@ukr.netOrest KOCHANorestvk@gmail.com<p class="06AnnotationVKNUES"><em>The work is devoted to the development of subject-oriented ontologies in the field of measurements and Internet of Things (IoT) technology with the aim of generating new knowledge and enhancing the efficiency of data interpretation and system operation. A generalized method of ontology construction is proposed, which defines the key components of such ontologies, including domain-specific concepts, semantic relations between them, and rules for axiom derivation. The study emphasizes that the ontology-based approach provides a universal mechanism for formalizing knowledge in measurement-related domains, ensuring semantic consistency and interoperability between heterogeneous information systems. 
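<p>To make the fuzzy-logic treatment of the endocrinological indicators above more tangible, here is a minimal sketch of trapezoidal membership functions for one of the four key indicators, body mass index; the breakpoints roughly follow common BMI categories and are purely illustrative, not the membership functions used in the cited study.</p>
<pre><code>
def trapezoid(x: float, a: float, b: float, c: float, d: float) -> float:
    """Trapezoidal membership: 0 outside [a, d], 1 on [b, c], linear in between."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

# Illustrative fuzzy sets for body mass index (kg/m^2).
BMI_SETS = {
    "normal":     lambda x: trapezoid(x, 17.5, 18.5, 24.9, 27.0),
    "overweight": lambda x: trapezoid(x, 24.0, 25.0, 29.9, 32.0),
    "obese":      lambda x: trapezoid(x, 29.0, 30.0, 60.0, 61.0),
}

def fuzzify(bmi: float) -> dict:
    return {term: round(mu(bmi), 2) for term, mu in BMI_SETS.items()}

if __name__ == "__main__":
    print(fuzzify(26.0))   # partial membership in "normal" and "overweight"
</code></pre>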
The implementation of the ontology makes it possible to unify, at the conceptual level, the most significant elements of the subject area: basic concepts (such as measurement objects, measuring instruments, and measurement procedures), a set of semantic relations (for example, identifiers of measuring components, measurement results, and operating conditions), and functions for interpreting the results of measurement technologies (such as error estimation, determination of inter-verification and inter-calibration intervals, and evaluation of measurement accuracy).</em></p> <p class="06AnnotationVKNUES"><em>In addition, the knowledge base is proposed to include algorithms related to measurement accuracy and reliability—covering the determination of systematic and random errors, prediction of their future values, identification of the probability distribution of error components, calculation of calibration and verification periods, and forecasting the overall reliability of IoT systems and their subsystems. This enables not only precise error analysis but also proactive reliability management, which is critically important for long-term operation of IoT infrastructures. The paper also presents a detailed analysis of toolkits available for building subject-oriented ontologies. Such tools allow for the automation of multiple stages of ontology creation, including conceptual modeling, integration with external data sources, semantic alignment of heterogeneous datasets, and the management, visualization, and analysis of ontological structures. Thus, the proposed methodology serves as a foundation for developing intelligent measurement support systems within Industry 4.0 and IoT contexts, contributing to greater transparency, adaptability, and efficiency of modern cyber-physical systems.</em></p>2025-08-28T00:00:00+03:00Copyright (c) 2025 Богдан МАСЛИЯК, Наталія ВОЗНА, Орест КОЧАНhttps://vottp.khmnu.edu.ua/index.php/vottp/article/view/508ANALYSIS OF EMISSIVITY COEFFICIENTS2025-05-08T12:24:30+03:00Oleksiy MOCHURADoleksii.p.mochurad@lpnu.uaNatalia HOTScie@lpnu.ua<p data-start="345" data-end="1070"><em>This article presents a comprehensive investigation of the emissivity coefficient of surfaces under various physical and operational conditions. The study emphasizes the fundamental meaning of emissivity, its dependence on radiation angle, surface temperature, material properties, degree of surface treatment, and radiation wavelength. Special attention is devoted to experimental observations of how emissivity evolves during the lifetime of a surface due to oxidation, contamination, or roughness changes, and how these modifications affect the accuracy of non-contact temperature measurements. A mathematical approach is also applied to examine the influence of wavelength variations on the measured temperature values.</em></p> <p data-start="1072" data-end="1774"><em>The research demonstrates that emissivity is not a constant property but a parameter strongly determined by physical factors such as the geometry of observation, spectral range of measuring instruments, and thermal state of the material. For example, metallic surfaces typically exhibit low emissivity, which increases when oxide layers form at higher temperatures, whereas ceramics or dielectrics tend to maintain relatively stable but wavelength-dependent emissivity values. 
Furthermore, experimental results confirm that angular deviation from normal measurement can lead to significant errors exceeding 5 °C, while surface treatment—such as polishing or oxidation—also induces notable variations.</em></p> <p data-start="1776" data-end="2216"><em>The experimental setup included measurements with pyrometers, contact thermometers, and specially designed calibration surfaces, enabling comparison of contact and non-contact results. The findings indicate that neglecting emissivity variations with wavelength or surface state can result in substantial measurement inaccuracies. In practical terms, this has a direct impact on industrial processes, product quality, and safety standards.</em></p> <p data-start="2218" data-end="2765"><em>The outcomes highlight the necessity of considering emissivity as a dynamic factor in temperature diagnostics, especially when infrared thermography or pyrometric techniques are applied in engineering, metallurgy, and materials science. The conclusions propose further research into the interaction of emissivity with specific influencing factors, aiming to develop refined correction models and measurement methodologies. Such advancements are expected to enhance the reliability of thermal monitoring systems in diverse technical applications.</em></p>2025-08-28T00:00:00+03:00Copyright (c) 2025 Олексій МОЧУРАД, Наталія ГОЦhttps://vottp.khmnu.edu.ua/index.php/vottp/article/view/576DEVELOPMENT OF HIGH-LOAD INFORMATION SYSTEMS: EVOLUTION AND CHALLENGES2025-09-22T11:22:02+03:00Yaroslav IVANCHUKivanchuck@vntu.edu.uaPavlo YAKOVCHUKpyakovchuk@gmail.com<p class="06AnnotationVKNUES"><em>This article analyzes the key stages in the development of high-load information systems, the technological approaches involved, and the modern challenges facing architects and developers, including the evolution of the systems' main characteristics. Various approaches to the development of high-load systems and to parallel computing are presented. The results of the study indicate that the most promising is the microservice architecture, although a monolithic architecture can also be used for small projects. A distributed service architecture can also be implemented for corporate networks and specialized solutions. The prospects for the development of high-load information systems are associated with the further use of cloud technologies, serverless computing and the integration of artificial intelligence to optimize the operation of systems and predict the load. However, the expansion of intelligent functions can lead to a decrease in speed and fault tolerance. The development of effective high-load systems requires balancing monitoring and the expansion of functionality against the number of microservices and the speed of processing system requests. Each high-load system has its own characteristics in accordance with the subject area.</em></p> <p class="06AnnotationVKNUES"><em>The plans for further research include studying the features of the operation of a high-load system for a brokerage exchange. High-load information systems are the basis of modern digital infrastructure for complex organizational systems, ensuring the functioning of powerful services in areas such as e-commerce, social networks, online banking, dynamic trading platforms, and cloud computing.
The evolution of these systems reflects the constant search for optimal solutions for processing growing volumes of data, managing a large number of simultaneous requests and ensuring uninterrupted availability. The demands placed on high-load systems require developers to design new systems with multi-tier architectures and dedicated modules for working with dynamic data.</em></p> <p class="06AnnotationVKNUES"><em>The use of a specific architecture is closely related to the selected parallelism model. Parallelism models are ways of organizing calculations that allow several tasks to be performed simultaneously in order to increase performance. They are closely related to computer system architectures, which define how hardware and software interact to implement parallel computing.</em></p> <p class="06AnnotationVKNUES"><em>Practitioners and scientists consider promising those parallelism models that allow for the efficient use of hardware resources, such as multi-core processors and specialized accelerators, to solve complex tasks. They differ from traditional approaches by focusing on increasing performance, scalability and flexibility.</em></p>2025-08-28T00:00:00+03:00Copyright (c) 2025 Ярослав ІВАНЧУК, Павло ЯКОВЧУКhttps://vottp.khmnu.edu.ua/index.php/vottp/article/view/568ANALYSIS OF METHODS AND ALGORITHMS FOR TRAJECTORY PLANNING IN MULTI-UAV APPLICATIONS2025-06-23T11:58:05+03:00Vitalii KOSTENKOv.kostenko@khai.edu<p data-start="248" data-end="1200"><em>The paper provides a comprehensive analysis of modern methods and algorithms for trajectory planning in multi-UAV (Unmanned Aerial Vehicle) systems, emphasizing their relevance in both military and civilian applications. Traditional approaches, including graph-based methods such as A* and sampling-based methods like Rapidly-exploring Random Tree (RRT), are examined with regard to their computational efficiency, adaptability, and limitations in dynamic environments. The study highlights the increasing importance of intelligent techniques, particularly evolutionary algorithms such as Genetic Algorithms (GA) and Differential Evolution (DE), as well as bio-inspired methods like Ant Colony Optimization (ACO) and Particle Swarm Optimization (PSO). These algorithms are evaluated in terms of their ability to address challenges related to real-time planning, multi-agent coordination, scalability, and resilience to dynamic changes.</em></p> <p data-start="1202" data-end="1879"><em>The article identifies key factors that critically affect UAV trajectory planning: mission type, obstacle complexity, communication reliability, energy and computational constraints, and the ability to replan under uncertain conditions. A comparative analysis is presented, summarizing the advantages, drawbacks, and application domains of each algorithm. The results show that classical methods remain efficient in static or partially known environments, whereas intelligent approaches demonstrate greater flexibility and global optimization capabilities in complex, dynamic scenarios. However, their high computational demands may limit applicability in real-time missions.</em></p> <p data-start="1881" data-end="2402"><em>The study concludes that the choice of algorithm must be guided by the nature of the mission, environmental complexity, and available resources.
Future research directions include the development of hybrid solutions combining classical and intelligent methods, adaptive algorithms capable of real-time decision-making under uncertainty, and resilient architectures for large-scale UAV swarms. Such advancements will enhance autonomy, reliability, and efficiency of UAV applications in both defense and civilian sectors.</em></p>2025-08-28T00:00:00+03:00Copyright (c) 2025 Віталій КОСТЕНКОhttps://vottp.khmnu.edu.ua/index.php/vottp/article/view/582DEFINING KPIs FOR A WEB APPLICATION: EXAMPLES OF PERFORMANCE METRICS2025-07-23T08:17:29+03:00Solomiia BRATASHsolomiia.p.bratash@lpnu.uaIryna PIKHiryna.v.pikh@lpnu.ua<p><em>The article examines Key Performance Indicators (KPIs) as a tool for assessing the quality and performance of web applications. A classification of KPIs is proposed based on technical, user-oriented, and business-related criteria. The paper analyzes current approaches to measuring each of these categories and provides examples of KPI implementation in various types of web systems. Special attention is given to the relationship between technical metrics and the quality of user perception. The findings may be useful for software engineers, analysts, and project managers in planning the development and optimization of web applications.</em></p>2025-08-28T00:00:00+03:00Copyright (c) 2025 Соломія БРАТАШ, Ірина ПІХhttps://vottp.khmnu.edu.ua/index.php/vottp/article/view/630DEVELOPMENT OF CRITERIA FOR FUNCTIONAL EFFICIENCY AND SOCIALIZATION OF EMPLOYEES UNDER REMOTE WORK CONDITIONS2025-10-02T07:21:25+03:00Ruslan MAKRENKOr.makrenko@aspd.sumdu.edu.uaOksana SHOVKOPLYASr.makrenko@aspd.sumdu.edu.ua<p class="06AnnotationVKNUES"><em>The article considers the issues of assessing functional efficiency and socialization of employees in remote work, which has become widespread due to digital transformation. It is noted that remote work changes traditional approaches to labor organization, requiring the adaptation of performance evaluation systems and the introduction of communication support mechanisms. It is determined that a low level of employee socialization can lead to a decrease in motivation, professional identification, and task performance efficiency, which requires the development of new methods of assessment and personnel management.</em></p> <p class="06AnnotationVKNUES"><em>Modern information and analytical systems (IAS) used to monitor the productivity and interaction of remote workers are analyzed, in particular Toggl Track, Hubstaff, Asana, Trello, Redmine, and Jira. It is found that most of these platforms focus on time control and task performance, but do not sufficiently take into account the aspects of socialization and collective interaction. The integration of IAS with artificial intelligence technologies is proposed to increase adaptability to employee needs, automate feedback, and form effective interaction mechanisms.</em></p> <p class="06AnnotationVKNUES"><em>A system of criteria for functional efficiency and socialization of employees in remote work conditions has been developed. The first criterion combines the assessment of communication processes and the level of socialization of employees based on information entropy and corporate interaction. The proposed mathematical model allows for a quantitative assessment of the quality of communication in distributed teams and serves as a tool for analytical monitoring of digital interaction. 
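<p>The entropy-based communication criterion mentioned above can be sketched as follows: the distribution of messages across team members is converted into a normalized Shannon entropy, where values near 1 indicate evenly distributed interaction and values near 0 indicate communication concentrated in a few participants. The normalization and the interpretation are illustrative assumptions; the paper's criterion additionally accounts for corporate interaction and is not reproduced here.</p>
<pre><code>
import math

def interaction_entropy(messages_by_member: dict) -> float:
    """Normalized Shannon entropy of message counts, in [0, 1]."""
    counts = [c for c in messages_by_member.values() if c > 0]
    total = sum(counts)
    if total == 0 or len(counts) < 2:
        return 0.0
    probs = [c / total for c in counts]
    h = -sum(p * math.log2(p) for p in probs)
    return h / math.log2(len(counts))          # divide by maximum possible entropy

if __name__ == "__main__":
    balanced = {"ann": 25, "bob": 22, "kim": 27, "lee": 26}
    skewed = {"ann": 88, "bob": 5, "kim": 4, "lee": 3}
    print(round(interaction_entropy(balanced), 2))   # close to 1.0
    print(round(interaction_entropy(skewed), 2))     # noticeably lower
</code></pre>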
The second criterion - integral psychosocial - takes into account the emotional state of the employee, the level of his motivational involvement and the degree of cognitive participation in the digital environment. This indicator is sensitive to the psychological climate and allows for a prompt response to the risks of maladjustment in remote employment conditions. The use of such solutions will allow optimizing the management of remote teams, increasing the level of staff involvement and ensuring the sustainable effectiveness of organizations in the context of digital transformation.</em></p>2025-08-28T00:00:00+03:00Copyright (c) 2025 Руслан МАКРЕНКО, Оксана ШОВКОПЛЯСhttps://vottp.khmnu.edu.ua/index.php/vottp/article/view/627ANALYSIS OF CLASS-AGNOSTIC SINGLE-OBJECT TRACKING METHODS2025-09-30T14:22:27+03:00Myroslav SHCHERBATIUKRickLestatDT@gmail.comRoman MASLIImaslij.r.v@vntu.edu.ua<p class="06AnnotationVKNUES"><em>This study presents a comprehensive analysis of four class-agnostic single-object tracking algorithms: KCF (Kernelized Correlation Filter), CSRT (Channel and Spatial Reliability Tracking), SAMURAI, and MMTrack. The research evaluates their performance across multiple criteria including processing speed, localization accuracy (measured by LaSOT AUC), robustness to occlusions, illumination changes, and scale variations. The experimental results demonstrate distinct performance profiles for each method: KCF achieves the highest processing speed (201 fps on CPU) but shows limited accuracy (22% LaSOT AUC) and poor resilience to occlusions and scale changes; CSRT provides a balanced trade-off between speed (80 fps) and accuracy (28% AUC) with improved robustness to partial occlusions and lighting variations; SAMURAI, built upon SAM2 with motion-aware memory mechanisms, delivers exceptional accuracy (70-74% AUC) and excellent robustness to various challenging conditions, but requires substantial computational resources (0.4 fps on CPU, 13 fps on GPU); MMTrack implements a unified token-based approach for vision-language tracking, achieving comparable accuracy (70% AUC) with moderate processing speed (4 fps CPU, 54 fps GPU) and superior adaptability to scale changes. The analysis confirms that no universal solution dominates across all scenarios, and the optimal choice depends on specific application requirements, available computational resources, and performance priorities. The study establishes a methodological framework for informed algorithm selection in video surveillance, autonomous systems, and robotics applications.</em></p>2025-08-28T00:00:00+03:00Copyright (c) 2025 Мирослав ЩЕРБАТЮК, Роман МАСЛІЙhttps://vottp.khmnu.edu.ua/index.php/vottp/article/view/631STUDY OF THE OCCURRENCE OF LOGICAL ERRORS IN SOFTWARE DEVELOPMENT AND METHODS FOR THEIR DETECTION2025-10-02T07:32:35+03:00Oleksandr SEMENETSo.y.semenets@csn.khai.eduArtem TETSKYIa.tetskiy@csn.khai.edu<p class="06AnnotationVKNUES"><em>The study presented in this paper focuses on the phenomenon of logical errors in software development and their impact on the cybersecurity of web applications. Logical errors represent one of the most elusive categories of software defects, since they are not related to the technical correctness of code syntax or compilation issues, but rather to the incorrect realization of the intended logic of the system. 
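<p>For readers who want to try the two classical trackers compared in the single-object-tracking abstract above, the sketch below shows the typical OpenCV usage pattern. It assumes an opencv-contrib build; depending on the OpenCV version the KCF and CSRT constructors may live in the cv2.legacy namespace, and the video path and initial bounding box are placeholders.</p>
<pre><code>
import cv2

VIDEO = "input.mp4"             # placeholder path
INIT_BBOX = (200, 150, 80, 60)  # (x, y, w, h) of the target in the first frame

def run_tracker(make_tracker):
    cap = cv2.VideoCapture(VIDEO)
    ok, frame = cap.read()
    if not ok:
        raise RuntimeError("cannot read first frame")
    tracker = make_tracker()
    tracker.init(frame, INIT_BBOX)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        found, bbox = tracker.update(frame)   # per-frame localization
        if found:
            x, y, w, h = map(int, bbox)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cap.release()

if __name__ == "__main__":
    # CSRT: slower but more accurate; KCF: faster but less robust (see the abstract).
    run_tracker(cv2.TrackerCSRT_create)
    run_tracker(cv2.TrackerKCF_create)
</code></pre>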
Unlike typical coding errors, logical flaws may remain undetected through conventional debugging or testing procedures, yet they often lead to critical vulnerabilities that can be exploited by attackers. This makes their identification and prevention a matter of significant importance for secure and reliable software engineering.</em></p> <p class="06AnnotationVKNUES"><em>The aim of the work is to examine the influence of logical errors on web application cybersecurity, to develop a classification system of such errors, to analyze their occurrence across different phases of the software development lifecycle, and to explore methodologies and tools that can improve their detection. The study also proposes a conceptual model of a logical error that provides a structured understanding of how such flaws emerge and propagate in application architectures.</em></p> <p class="06AnnotationVKNUES"><em>The research objectives include: identifying the most common sources of logical errors in the software development process; analyzing their impact on system architecture, data processing, and user interaction; classifying errors based on their origin and manifestation; and considering methodological approaches for their detection and mitigation. Special attention is paid to the cascade (waterfall) model of software development, where logical flaws may appear at each phase — from requirements analysis and system design to implementation, testing, and maintenance. By introducing a classification framework and error model, the paper contributes to a more systematic approach to understanding and handling logical errors.</em></p> <p class="06AnnotationVKNUES"><em>The results of the study highlight that logical errors not only affect the reliability and stability of software systems, but also play a critical role in weakening the cybersecurity posture of web applications. Logical vulnerabilities can open paths for unauthorized access, data leakage, privilege escalation, or violation of business logic, which attackers often exploit. Therefore, their timely detection is essential for both software quality assurance and cyber defense. Based on the conducted analysis, the paper provides recommendations on methodological practices and specialized tools that can assist developers, testers, and security analysts in identifying logical errors at different stages of development.</em></p> <p class="06AnnotationVKNUES"><em>Logical error detection remains a complex and insufficiently studied problem in modern software engineering. The results of this research contribute to a deeper understanding of the nature and lifecycle of logical flaws, providing a foundation for further methodological development in this field. The findings emphasize that considering logical errors throughout the entire software development process allows for more resilient application design and enhances cybersecurity protection.</em></p>2025-08-28T00:00:00+03:00Copyright (c) 2025 Олександр СЕМЕНЕЦЬ, Артем ТЕЦЬКИЙhttps://vottp.khmnu.edu.ua/index.php/vottp/article/view/569ANALYSIS OF NOISE IN FUEL LEVEL MEASUREMENT DATA IN TRANSPORT2025-10-02T15:10:16+03:00Daniil IVASHCHEVivashchev_d@365.dnu.edu.uaVladimir GERASIMOVherasymov_v@365.dnu.edu.ua<p class="06AnnotationVKNUES"><em>This article addresses the contemporary issues of phraseology and provides an in-depth study of the cognitive-pragmatic and stylistic foundations of phraseological units in the context of translation. 
The research proceeds from the anthropocentric paradigm, which views language as a universal communication tool reflecting the inner world, worldview, and values of the linguistic personality, thus acting as a mirror of the culture and mentality of a given language community. Phraseological units are considered not only as fixed lexical combinations, but also as semiotic constructs of secondary nomination that carry cultural, emotional, and evaluative connotations. Their functioning in discourse is directly tied to the cognitive and pragmatic factors that determine communicative success.</em></p> <p class="06AnnotationVKNUES"><em>The study highlights that phraseological expressions encapsulate collective cultural experience, and through their stylistic and semantic characteristics, they embody key linguocultural dominants. Using English phraseological units as a primary material, the article identifies and analyzes groups that reflect such value categories as restraint, independence, arrogance, and fairness. These dominants are revealed not only at the semantic level but also in the stylistic coloring of the expressions, their pragmatic functions, and their communicative potential. The article emphasizes that the translation of phraseological units extends beyond literal equivalence, requiring a deep understanding of the conceptual and linguistic worldview of both source and target language speakers.</em></p> <p class="06AnnotationVKNUES"><em>The process of translating phraseological units is shown to be a multidimensional task, in which the translator must preserve not only the semantic content, but also the expressive force, stylistic nuance, and cultural specificity of the source expression. It is argued that phraseological translation should be approached as a process of cultural mediation, where cognitive and pragmatic aspects intersect with linguistic choices. Preserving stylistic and emotional expressiveness is crucial for achieving functional equivalence, since phraseological units often determine the communicative impact of the text.</em></p> <p class="06AnnotationVKNUES"><em>The research confirms that phraseological units are integral to the linguistic and cultural identity of a community, and their adequate translation ensures both semantic accuracy and communicative efficiency. Cognitive-pragmatic and stylistic approaches provide the necessary methodological tools for analyzing and translating phraseological material, offering a more holistic perspective that integrates semantics, culture, and communication. This integrated approach enables translators to retain the dual function of phraseological units—as carriers of meaning and as stylistic devices—thereby contributing to the effectiveness of cross-cultural discourse and preserving the cultural heritage embedded in language.</em></p>2025-08-28T00:00:00+03:00Copyright (c) 2025 Даниїл ІВАЩЕВ, Володимир ГЕРАСИМОВhttps://vottp.khmnu.edu.ua/index.php/vottp/article/view/591APPLICATION OF PERIODICALLY CORRELATED STOCHASTIC PROCESSES FOR FORECASTING ELECTRICITY CONSUMPTION2025-08-13T12:38:44+03:00Andrii VOLOSHCHUKandriy.voloschuk30@gmail.comHalyna OSUKHIVSKAosukhivska@tntu.edu.uaMykola KHVOSTIVSKYIhvostivskyy@tntu.edu.uaAndrii SVERSTIUKsverstyuk@tdmu.edu.ua<p><em><span style="font-weight: 400;">The article substantiates the application of the mathematical apparatus of periodically correlated stochastic processes (PCSP) for modeling and forecasting electricity consumption in power systems. 
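For reference, a standard formulation of periodic correlation is given here for context (the paper's own notation may differ): a stochastic process $\xi(t)$ is periodically correlated with period $T$ if its mean and correlation function are $T$-periodic,
$$ m(t+T) = m(t), \qquad B(t+T,\, s+T) = B(t,\, s) \quad \text{for all } t, s, $$
where $m(t) = \mathrm{E}[\xi(t)]$ and $B(t,s) = \mathrm{E}\big[(\xi(t)-m(t))(\xi(s)-m(s))\big]$; for hourly electricity consumption data the natural period is $T = 24$ hours.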
The relevance of the research is determined by the need to improve the accuracy of energy load forecasting given the complex temporal structure of consumption, with its pronounced daily, weekly, and seasonal periodicity.</span></em></p> <p><em><span style="font-weight: 400;">The aim of the work is to develop a new approach to energy load forecasting based on the energy theory of stochastic signals using the PCSP model. For the analysis, experimental data of hourly electricity consumption from a private household were used, aggregated at daily, weekly, and monthly scales.</span></em></p> <p><em><span style="font-weight: 400;">A common-phase method for processing electricity consumption signals is proposed, both with and without consideration of cross-correlation relationships between components. It was established that the consumption correlation function demonstrates periodic behavior with a 24-hour period, with the daily harmonic accounting for 65-75% of the total signal energy.</span></em></p> <p><em><span style="font-weight: 400;">The research results showed that the common-phase method with consideration of cross-correlation relationships reveals hidden patterns in the energy consumption structure and makes it possible to account for the inertia of power systems. The obtained correlation components can be used as informative features for load forecasting and training artificial intelligence models.</span></em></p> <p><em><span style="font-weight: 400;">The practical significance of the work lies in creating a theoretical foundation for developing adaptive algorithms for energy consumption forecasting and their implementation in smart grid management systems.</span></em></p>2025-08-28T00:00:00+03:00Copyright (c) 2025 Андрій ВОЛОЩУК, Галина ОСУХІВСЬКА, Микола ХВОСТІВСЬКИЙ, Андрій СВЕРСТЮКhttps://vottp.khmnu.edu.ua/index.php/vottp/article/view/579LEADER ELECTION METHOD FOR IMPROVING COMMUNICATION EFFICIENCY IN UAV SWARMS2025-10-02T16:42:30+03:00Artem VOLOKYTAartem.volokita@kpi.uaMykyta MELENCHUKOVmelenchukov.nikita@gmail.com<p class="06AnnotationVKNUES"><em>Unmanned aerial vehicle (UAV) swarms are increasingly employed in tasks requiring high coordination, resilience, and adaptability of communication systems. Ensuring efficient and reliable information exchange among swarm members is a critical challenge, particularly in dynamic environments with varying topology and communication constraints. Traditional leader election approaches in distributed systems rely primarily on static criteria such as energy levels, node degree, or link reliability. However, these methods often overlook the influence of message delivery delay, which directly affects synchronization, stability, and mission performance in UAV swarms.</em></p> <p class="06AnnotationVKNUES"><em>This paper introduces an enhanced leader election method for UAV swarms that combines classical parameters with a novel temporal criterion: the estimated time required for a candidate leader to deliver messages to all swarm members. By integrating this factor into the utility function, the method accounts for both structural and dynamic characteristics of the swarm network. This approach allows for improved alignment of inter-drone interactions, reduced communication delays, and minimized relay load while maintaining energy efficiency.</em></p> <p class="06AnnotationVKNUES"><em>The proposed method was evaluated through a series of simulations conducted in ROS 2 (Humble) with Gazebo.
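The exact form of the utility function is not reproduced in the abstract; the following is a rough sketch, under assumed weights and a hop-count proxy for delivery time, of how such a candidate score could combine residual energy, node degree, and the estimated time to reach all swarm members:
<pre><code>
from collections import deque

def eccentricity_hops(adj, src):
    """Maximum BFS hop distance from src to all reachable nodes (proxy for delivery delay)."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return max(dist.values())

def leader_score(adj, node, energy, per_hop_delay=0.05, w_e=0.4, w_d=0.3, w_t=0.3):
    # Hypothetical weights and delay model; not the paper's actual utility function.
    t_deliver = eccentricity_hops(adj, node) * per_hop_delay   # estimated worst-case delivery time, s
    degree = len(adj[node])
    return w_e * energy + w_d * degree - w_t * t_deliver

# Example with an assumed 4-node line topology and equal residual energies:
# adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
# leader = max(adj, key=lambda n: leader_score(adj, n, energy=1.0))
</code></pre>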
Swarms of 10 UAVs were modeled across different formations—line, wedge, and cube—under varying communication ranges. Leader failure scenarios were simulated to test re-election performance. Results demonstrated significant improvements: up to 15% reduction in average and maximum message delivery time and up to 42% decrease in relay load in topologies with limited connectivity. The most notable gains were observed in elongated (line) and constrained (cube with reduced range) formations, while performance improvements were less pronounced in densely connected networks. Importantly, energy consumption of elected leaders remained at a comparable level to baseline methods, confirming the efficiency of the proposed approach.</em></p> <p class="06AnnotationVKNUES"><em>The study highlights the potential of incorporating temporal delivery metrics into leader election algorithms for UAV swarms. The method enhances communication robustness and coordination efficiency, thereby contributing to safer and more reliable swarm operations. Future research directions include integration with intrusion detection mechanisms and adaptive routing strategies for highly dynamic or adversarial environments.</em></p>2025-08-28T00:00:00+03:00Copyright (c) 2025 Артем ВОЛОКИТА, Микита МЕЛЕНЧУКОВhttps://vottp.khmnu.edu.ua/index.php/vottp/article/view/634DESCRIPTOR MODEL OF ACCESS CONTROL AND MANAGEMENT SYSTEM IN MICROSOFT WINDOWS OPERATING SYSTEMS2025-10-02T16:07:23+03:00Natalia PETLIAKnpetlyak@khmnu.edu.uaRastislav TOVTtovtrastik1@gmail.comIvan KOBYLIANSKYIvedirwan@gmail.comVolodymyr OLIINYKolivolodumur357@gmail.com<p class="06AnnotationVKNUES"><em>The article examines the descriptor-based model of access control and management in Windows operating systems, focusing on the formalized description of interactions between subjects and objects under security policies. The study provides an in-depth analysis of the Windows access control architecture, including access tokens, security identifiers (SID), access control lists (ACL), auditing mechanisms, and centralized management via Active Directory. The research identifies current threats, common misconfigurations, and vulnerabilities, while outlining recommendations for improving access control mechanisms in alignment with international information security standards such as ISO/IEC 27001, ISO/IEC 27002, and NIST SP 800-207. The article highlights the importance of adopting context-aware access, the principle of least privilege, Zero Trust architecture, and user behavior analytics to address emerging risks in dynamic IT environments. Special attention is given to domain-based infrastructures, where Group Policy Objects (GPO) and advanced audit configurations enhance centralized governance but also introduce complexity and potential mismanagement risks. The advantages of the descriptor model are emphasized in terms of its suitability for formal verification, automated monitoring, and adaptation to risk-oriented approaches. Directions for future research include the integration of artificial intelligence techniques—particularly behavioral analytics and anomaly detection—into Windows access management, supporting real-time policy adaptation and proactive incident response. 
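To make the descriptor-based view concrete, the sketch below models subjects, objects, and access control entries in a deliberately simplified form; it is an illustrative schematic only and does not reproduce the actual Windows data structures, canonical ACE ordering rules, or the Win32 access-check algorithm:
<pre><code>
from dataclasses import dataclass, field

@dataclass
class ACE:            # access control entry (schematic)
    sid: str          # security identifier of a user or group
    allow: bool
    rights: set

@dataclass
class SecurityDescriptor:
    owner: str
    dacl: list = field(default_factory=list)   # ordered list of ACEs

def access_check(token_sids, requested, sd):
    """Simplified in-order DACL evaluation: a matching deny blocks access, allows accumulate rights."""
    granted = set()
    for ace in sd.dacl:
        if ace.sid in token_sids:
            if not ace.allow and ace.rights & requested:
                return False                    # explicit deny wins
            if ace.allow:
                granted |= ace.rights & requested
        if granted >= requested:
            return True
    return granted >= requested

# sd = SecurityDescriptor(owner="S-1-5-21-1-2-3-1001",
#                         dacl=[ACE("S-1-1-0", True, {"READ"})])   # Everyone: read
# access_check({"S-1-1-0"}, {"READ"}, sd)  -> True
</code></pre>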
Such advancements will enable organizations to align security practices with global standards while ensuring the confidentiality, integrity, and availability of information assets in modern digital ecosystems.</em></p>2025-08-28T00:00:00+03:00Copyright (c) 2025 Наталія ПЕТЛЯК, Растіслав ТОВТ, Іван КОБИЛЯНСЬКИЙ, Володимир ОЛІЙНИКhttps://vottp.khmnu.edu.ua/index.php/vottp/article/view/635LEGAL REGULATION OF ATTRIBUTIVE DIGITAL SIGNATURE TECHNOLOGY IN UKRAINE2025-10-02T17:07:34+03:00Viktor CHESHUNcheshunvn@khmnu.edu.uaYurii KLOTSklots@khmnu.edu.uaNataliia PETLIAKnpetlyak@khmnu.edu.uaVira TITOVAtitovav@khmnu.edu.ua<p class="06AnnotationVKNUES"><em>The purpose of this work is to analyze the laws and regulatory documents of Ukraine governing the use of electronic signatures for compliance with the requirements for implementing digital signature technology based on the personal attributes of the signatory as a subject of personal data. The analysis was carried out to determine whether the proposed implementation of the technology complies with the requirements of Ukrainian legislation on the processing and storage of the signer's personal data, which may be included in an attribute-based digital signature. The article discusses the main provisions of attribute-based digital signature technology, provides a classification of the attributes used to form a signature, and analyzes the differences between a cryptographic electronic digital signature and an attribute-based signature. To examine the features of the technology, a method for the formalized representation of various classes of attributes in a mathematical model is provided, and a signature synthesis scheme based on the formalized representation of the signer's personal attributes is presented. It was determined that attribute-based digital signature technology offers maximum flexibility and adaptability to the signer's needs and can serve either as an alternative or as a complement to cryptographic electronic digital signature technology. A digital signature based on the signer's personal attributes also increases the security and accuracy of identification, since not one but an arbitrary number of attributes of different classes is used to identify the person. To determine whether attribute-based digital signature technology can be applied within the legal framework of Ukraine, a comparative study of the basic provisions of the technology and the requirements of current laws was conducted. The article demonstrates that electronic digital signature technology based on the attributes of the signatory can be introduced in Ukraine in accordance with current legislation and other regulatory documents. It was also determined that the adaptation of Ukrainian laws and standards to the regulatory documents of the European Union plays a significant role in creating the necessary conditions for this.</em></p>2025-08-28T00:00:00+03:00Copyright (c) 2025 Віктор ЧЕШУН, Юрій КЛЬОЦ, Наталія ПЕТЛЯК, Віра ТІТОВА
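The abstract does not disclose the synthesis scheme itself; as a minimal illustrative sketch, with hypothetical attribute classes and a keyed hash standing in for whatever mechanism the paper actually proposes, an attribute-based signature could bind a canonicalized attribute set to a document as follows:
<pre><code>
import hashlib, hmac, json

# Hypothetical sketch of binding a signer's attribute set to a document.
# The attribute classes (document-based, biometric, ...) and the HMAC key are
# illustrative assumptions; the paper's actual synthesis scheme is not reproduced.
def canonicalize(attributes):
    """Deterministic encoding of (class, name, value) attribute triples."""
    return json.dumps(sorted(attributes), separators=(",", ":")).encode("utf-8")

def attribute_signature(attributes, document: bytes, key: bytes) -> str:
    digest = hashlib.sha256(canonicalize(attributes) + document).digest()
    return hmac.new(key, digest, hashlib.sha256).hexdigest()

# Example: two attribute classes combined into a single signature value
attrs = [("document", "passport_no", "XX123456"), ("biometric", "voice_hash", "a1b2c3d4")]
print(attribute_signature(attrs, b"contract text", key=b"signer-secret"))
</code></pre>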