https://vottp.khmnu.edu.ua/index.php/vottp/issue/feed MEASURING AND COMPUTING DEVICES IN TECHNOLOGICAL PROCESSES 2025-06-30T20:31:05+03:00 Юрій Васильович Кравчик gromplus7@gmail.com Open Journal Systems <p><strong>ISSN </strong>2219-9365</p> <p><strong>Published</strong> since May 1997</p> <p><strong>Publisher:</strong> Khmelnytskyi National University (Ukraine)</p> <p><strong>Frequency:</strong> 4 times a year</p> <p><strong>Manuscript languages:</strong> mixed: Ukrainian, English, Polish</p> <p><strong>Editor:</strong> Valeriy Martyniuk (Khmelnytskyi, Ukraine)</p> <p><strong>Certificate of state registration of print media:</strong> Series KB № 24923-14863 ПР (12.07.2021).</p> <p><strong>Registration:</strong> The journal is included in Category B of the List of scientific professional publications of Ukraine, in which the results of dissertations for obtaining the scientific degrees of doctor and candidate of sciences (specialties 121, 122, 123, 125, 126, 151, 152, 172) may be published (Order of the Ministry of Education and Science of Ukraine of December 28, 2019, No. 1643).</p> <p><strong>License Terms:</strong> Authors retain copyright and grant the journal the right of first publication, with the work simultaneously licensed under a Creative Commons Attribution (CC BY) license that allows others to share the work with acknowledgement of its authorship and initial publication in this journal.</p> <p><strong>Open Access Statement:</strong> "MEASURING AND COMPUTING DEVICES IN TECHNOLOGICAL PROCESSES" provides immediate open access to its content on the principle that free public access to research supports a greater global exchange of knowledge. Full-text access to the scientific articles of the journal is provided on the official website in the Archives section.</p> <p><strong>Address:</strong> Scientific journal "MEASURING AND COMPUTING DEVICES IN TECHNOLOGICAL PROCESSES", Khmelnytskyi National University, st.
11, Khmelnytskyi, 29016, Ukraine.</p> <p><strong>Tel.:</strong> +380673817986</p> <p><strong>e-mail:</strong> vottp@khmnu.edu.ua</p> <p><strong>web-site:</strong> https://vottp.khmnu.edu.ua/index.php/vottp/</p> https://vottp.khmnu.edu.ua/index.php/vottp/article/view/523 ONTOLOGICAL MODELING OF WEB APPLICATION STRUCTURE 2025-05-20T13:16:10+03:00 Iryna PIKH iryna.v.pikh@lpnu.ua Yulian MERENYCH merenich.julian@uzhnu.edu.ua <p class="06AnnotationVKNUES"><em>This article delves into the application of the ontological approach in modeling the structural components of web applications, emphasizing the design and implementation of a multi-level ontological model. As a case study, an online store was chosen to illustrate the capabilities of this method. The model comprises foundational classes — including Furniture, Customer, Price, Style, and others — each of which is further refined through hierarchically organized subclasses to reflect the complexity and diversity of real-world entities. This structure enables a more nuanced representation of domain knowledge, essential for building adaptive and scalable web systems.</em></p> <p class="06AnnotationVKNUES"><em>To formalize relationships among these elements, a comprehensive set of object properties was introduced, describing various types of interactions and associations. Additionally, datatype properties were used to specify attributes such as numerical values, textual descriptions, and stylistic parameters. These components were synthesized into an ontological graph using standard semantic web technologies, ensuring logical consistency, transparency, and extensibility.
This graph-based representation serves not only as a knowledge repository but also as a foundation for semantic queries and automated reasoning.</em></p> <p class="06AnnotationVKNUES"><em>SPARQL, a powerful query language for RDF-based data, was employed to retrieve, filter, and manipulate ontological data efficiently. Through SPARQL queries, relationships between products, customer preferences, pricing models, and stylistic categories were extracted and analyzed. This enabled the creation of structured data sets suitable for visualization, recommendation systems, and performance evaluation. The model’s utility was validated through simulations of online store transactions, showcasing its ability to reflect realistic scenarios and decision-making processes.</em></p> <p class="06AnnotationVKNUES"><em>Furthermore, the ontological framework was designed with adaptability in mind, allowing it to be reused and reconfigured for various application domains such as healthcare, education, or logistics. The study also addressed key challenges related to knowledge representation, including automation of ontology generation, graphical visualization of ontological graphs, and the streamlining of data processing workflows.</em></p> <p class="06AnnotationVKNUES"><em>Overall, this research highlights the practical advantages of ontological modeling in web development. It supports data integration, semantic interoperability, and intelligent decision-making within complex information systems. The proposed methodology not only enhances conceptual clarity but also enables more efficient system evolution and maintenance. 
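The abstract does not publish its ontology or queries, so as a purely illustrative, stdlib-only sketch, the kind of triple-pattern query a SPARQL SELECT performs over such a graph can be mimicked as follows (class and property names like Furniture and hasStyle follow the abstract; the concrete instances are invented):

```python
# Toy triple store; None in a pattern plays the role of a SPARQL variable.
triples = {
    ("Sofa_1", "rdf:type", "Furniture"),
    ("Sofa_1", "hasStyle", "Modern"),
    ("Sofa_1", "hasPrice", "950"),
    ("Table_3", "rdf:type", "Furniture"),
    ("Table_3", "hasStyle", "Classic"),
    ("Table_3", "hasPrice", "480"),
}

def match(s=None, p=None, o=None):
    """Return every triple matching the (s, p, o) pattern."""
    return [(ts, tp, to) for (ts, tp, to) in triples
            if s in (None, ts) and p in (None, tp) and o in (None, to)]

# Analogue of: SELECT ?item WHERE { ?item rdf:type Furniture ; hasStyle Modern }
modern = [s for (s, _, _) in match(p="rdf:type", o="Furniture")
          if match(s=s, p="hasStyle", o="Modern")]
```

A real implementation would hand the same patterns to an RDF engine (e.g., one that accepts SPARQL text) rather than filtering Python tuples.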
It provides a robust foundation for further research and development in the field of semantic technologies and intelligent web applications.</em></p> 2025-05-15T00:00:00+03:00 Copyright (c) 2025 Ірина ПІХ, Юліан МЕРЕНИЧ https://vottp.khmnu.edu.ua/index.php/vottp/article/view/528 SHAPE MEMORY ALLOYS AND MACHINE LEARNING: A REVIEW 2025-05-23T10:23:03+03:00 Oleg YASNIY oleh.yasniy@gmail.com Vladyslav DEMCHYK DemchykV@gmail.com <p><em>Shape memory alloys (SMAs) have found widespread application in various fields of science and technology due to their unique properties, such as superelasticity and the shape memory effect. These alloys retain their initial form by "memorising" it between two transformation phases, with the transformation driven by temperature or a magnetic field. The application of such materials is straightforward: the alloy can be deformed by force and recovers its initial shape or size after heating above a specific temperature. There are many kinds of SMA, for instance Fe–Mn–Si, Cu–Zn–Al, and Cu–Al–Ni, and each type has its specific applications, though Nitinol (Ni–Ti) is ubiquitous because of its stable properties.</em></p> <p><em>SMAs are widely used in medicine, the aerospace industry, engine building, civil engineering, dentistry, etc. During their operation, structural elements made of SMAs undergo long-term cyclic loading that can lead to premature loss of functional properties, exhaustion of lifetime, and subsequent failure. Therefore, ensuring sufficient functional properties and endurance of SMAs is necessary.
Often, the experiments are quite costly and time-consuming and require expert knowledge. Therefore, it is crucial to model the functional and structural properties of SMAs by employing artificial intelligence (AI) and machine learning (ML) methods.</em></p> <p><em>AI can be employed to model SMA behaviour; it is actively used in materials science and fracture mechanics. ML is a part of AI that can efficiently solve complicated tasks. This study aims to perform a comprehensive review of the application of ML methods to estimate various properties of shape memory alloys. A comprehensive analysis of ML methods as applied to modelling various properties of SMAs was performed. Several studies concern the application of AI and ML methods to such problems. In general, AI and ML methods are promising and powerful tools for modelling SMA properties. Nevertheless, there is always room for improvement and further elaboration of the aforementioned methods and approaches for modelling the functional and structural properties of SMAs.</em></p> 2025-05-21T00:00:00+03:00 Copyright (c) 2025 Олег ЯСНІЙ, Владислав ДЕМЧИК https://vottp.khmnu.edu.ua/index.php/vottp/article/view/524 MULTIFACTOR ANALYSIS FOR SELECTING ALTERNATIVES IN THE PREPRESS PROCESSING OF NEWSPAPER PUBLICATIONS 2025-05-20T13:24:34+03:00 Alona KUDRIASHOVA alona.v.kudriashova@lpnu.ua Yurii SLIPETSKYI yurii.b.slipetskyi@lpnu.ua <p class="06AnnotationVKNUES"><em>This study presents a comprehensive multifactor approach to selecting the optimal alternative for newspaper prepress processing. It systematically considers a wide array of factors that significantly influence the quality of the final printed product. The analysis focuses on key components such as dimensional parameters, typographic design, compositional and graphical layout, typesetting methods, and page layout structure.
These criteria are crucial in achieving high-quality print results and ensuring the aesthetic and functional appeal of the publication.</em></p> <p class="06AnnotationVKNUES"><em>Three alternative prepress solutions were proposed and evaluated. To assess their effectiveness, a factor analysis was conducted wherein the relative importance of each factor was established through expert evaluation. A pairwise comparison matrix was constructed to rank the significance of the considered factors, enabling a structured and logical prioritization of their impact.</em></p> <p class="06AnnotationVKNUES"><em>To process the comparative data, the specialized software "Simulation Modeling by the Binary Comparison Method" was applied. This tool facilitated the calculation of normalized values of the principal eigenvector of the comparison matrix, which is essential for accurately determining the weights of each factor. Subsequently, separate pairwise comparison matrices were created for the alternatives in relation to each factor, leading to the computation of utility values for every alternative.</em></p> <p class="06AnnotationVKNUES"><em>A mathematical formula was then derived to integrate the weighted utility values into a final score for each alternative. The method ensured that the evaluation process accounted for both the relative importance of the factors and the performance of each alternative under these criteria. 
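The weighting step the abstract describes (normalized principal eigenvector of a pairwise comparison matrix, then a weighted sum of per-factor utilities) can be sketched in a few lines; the 3x3 reciprocal judgment matrix and utility values below are invented examples, not the study's data:

```python
# Power iteration converges to the principal eigenvector of a positive matrix.
def principal_eigenvector(M, iters=100):
    n = len(M)
    w = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [x / s for x in v]          # normalize so weights sum to 1
    return w

# Invented Saaty-style pairwise judgments (reciprocal matrix)
M = [[1.0,   3.0, 5.0],
     [1/3.0, 1.0, 2.0],
     [1/5.0, 1/2.0, 1.0]]
weights = principal_eigenvector(M)

# Final score of one alternative = weighted sum of its per-factor utilities
utilities = [0.6, 0.3, 0.8]             # invented utility values
score = sum(w * u for w, u in zip(weights, utilities))
```

With these judgments the first factor dominates (weight roughly 0.65), mirroring how the method prioritizes factors before scoring alternatives.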
The alternative with the highest utility score was identified as the optimal choice, providing a rational and data-driven basis for decision-making in newspaper prepress planning.</em></p> <p class="06AnnotationVKNUES"><em>This methodological approach demonstrates the practical benefits of combining expert judgment, mathematical modeling, and software tools in solving complex selection problems in publishing workflows.</em></p> 2025-05-15T00:00:00+03:00 Copyright (c) 2025 Альона КУДРЯШОВА, Юрій СЛІПЕЦЬКИЙ https://vottp.khmnu.edu.ua/index.php/vottp/article/view/525 IMPLEMENTATION OF INTELLIGENT PRODUCT CLASSIFICATION SYSTEMS: IMPACT ON THE EFFICIENCY OF CUSTOMS ADMINISTRATION 2025-05-20T13:59:09+03:00 Yurii KRYVENCHUK Yurii.P.Kryvenchuk@lpnu.ua Stepan KRUPA stepan.m.krupa@lpnu.ua <p class="06AnnotationVKNUES"><em>The article analyzes the possibilities of using intelligent systems in the field of goods classification for customs administration. The increase in international trade volumes, the increasing complexity of logistics chains and the constant evolution of the commodity nomenclature require the modernization of processes related to the identification and assignment of HS codes of foreign economic activity. The authors investigate how machine learning algorithms, in particular the naive Bayesian classifier and artificial neural networks with the Leaky ReLU activation function, can be adapted for automated classification of goods, increasing the efficiency and reliability of solutions. The key problems of the traditional manual approach are highlighted, in particular significant time costs, dependence on the qualifications of specialists, high probability of subjective errors and limited scalability. An empirical experiment was conducted in which the results of the classification of 10,000 commodity items were analyzed by three methods: manual, using naive Bayes and a neural network. 
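As a stdlib-only toy of the naive Bayes variant benchmarked above (the training snippets and chapter-style labels are invented for illustration, not real HS assignments):

```python
import math
from collections import Counter, defaultdict

# Invented product descriptions with illustrative chapter-style labels
train = [
    ("cotton t-shirt men", "61"),
    ("wool sweater knitted", "61"),
    ("laptop computer 15 inch", "84"),
    ("desktop computer keyboard", "84"),
]

class_words = defaultdict(list)
for text, label in train:
    class_words[label].extend(text.split())

vocab = {w for ws in class_words.values() for w in ws}
priors = {c: math.log(sum(1 for _, l in train if l == c) / len(train))
          for c in class_words}
counts = {c: Counter(ws) for c, ws in class_words.items()}

def classify(text):
    def log_posterior(c):
        total = sum(counts[c].values())
        return priors[c] + sum(
            math.log((counts[c][w] + 1) / (total + len(vocab)))  # Laplace smoothing
            for w in text.split())
    return max(counts, key=log_posterior)
```

A production classifier would train on the full nomenclature with far richer features, but the posterior computation is the same shape.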
Experimental data indicate a significant increase in the accuracy of automated approaches, as well as a significant reduction in the time for processing incoming information. In particular, the use of a neural network made it possible to achieve an accuracy of 94.8% with a processing time of 60 seconds, which significantly exceeds the result of manual classification. The study also highlights the advantages of using artificial intelligence algorithms in the context of strategic management of customs resources. Reducing the need to involve a large number of specialists in routine classification processes allows optimizing the staff structure, reorienting it to analytical and supervisory activities, in particular risk assessment and detection of attempts to evade customs payments. In addition, the standardization and transparency provided by intelligent systems have a positive effect on the level of trust from business and international partners. Special attention is paid to the prospects for improving intelligent classification systems. The possibilities of implementing natural language processing (NLP) for interpreting unstructured text descriptions of goods, using computer vision for automatic identification of products by visual features, as well as the development of federated learning as a mechanism for international cooperation between customs authorities without violating data confidentiality are considered. 
As an example, the experience of Singapore is given, where the implementation of systems based on machine learning made it possible to reduce the processing time of customs declarations by 50% and reduce the error rate to a minimum. The results obtained confirm that intelligent systems have the potential to become a key element of the digital transformation of the customs infrastructure, contributing to the integration of Ukraine into the global economic space, the harmonization of procedures with EU standards, the reduction of corruption risks, and increased efficiency of customs administration at the system level.</em></p> 2025-05-21T00:00:00+03:00 Copyright (c) 2025 Юрій КРИВЕНЧУК, Степан КРУПА https://vottp.khmnu.edu.ua/index.php/vottp/article/view/498 VOICE FAKE DETECTION: MODERN TECHNIQUES AND APPLICATIONS FOR UKRAINIAN LANGUAGE 2025-05-02T15:22:02+03:00 Ivan VYNOGRADOV ipvinner@gmail.com <p class="06AnnotationVKNUES"><em>The subject matter of this article is the detection of fake voices generated by text-to-speech (TTS) synthesis and voice conversion (VC) technologies, with a focus on their application to the Ukrainian language. The goal is to analyze modern datasets, competitions (ASVspoof, ADD Challenge), and detection algorithms to assess the feasibility of integrating Ukrainian data into international frameworks or developing a dedicated dataset. This approach addresses not only the shortage of Ukrainian-language recordings in widely used repositories—many of which are limited to English or Chinese—but also the unique phonetic structures, diverse accents, and morphological complexities inherent to Ukrainian. By comparing performance across multiple spoofing scenarios, researchers can more accurately quantify how language-specific features influence classification accuracy, ultimately informing more robust detection frameworks.
The tasks solved in the article: to examine existing datasets and their suitability for Ukrainian, evaluate the performance of fake voice detection systems using Equal Error Rate (EER), Weighted EER (WEER), and Detection Success Rate (DSR), and determine the best approach—expanding ASVspoof or creating a new resource. The methods used include systematic analysis, dataset comparison, and performance evaluation of modern synthesis systems like ElevenLabs, Assembly AI, and Tacotron. The results show that adapting fake voice detection systems to the Ukrainian language enhances accuracy and robustness. Moreover, targeted inclusion of different regional dialects and speaker profiles emerges as a key factor in maintaining high Detection Success Rate (DSR) values. The findings highlight that advanced neural vocoders, which replicate fine-grained prosodic and timbral nuances, necessitate specialized countermeasures able to discern subtle synthetic artifacts. Consequently, the study underscores the importance of iterative dataset refinement, periodic algorithmic updates, and cross-lingual benchmarking to sustain robust performance against evolving voice spoofing threats. Conclusions. The study confirms that integrating Ukrainian-language data into international datasets or developing a specialized dataset significantly improves detection reliability. 
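The Equal Error Rate metric used throughout can be computed by sweeping a decision threshold until the false-acceptance and false-rejection rates cross; the score lists below are invented (higher score means "more likely genuine"):

```python
def eer(genuine, spoof):
    """Approximate EER over the observed score values as thresholds."""
    candidates = []
    for t in sorted(genuine + spoof):
        far = sum(s >= t for s in spoof) / len(spoof)      # fakes accepted
        frr = sum(s < t for s in genuine) / len(genuine)   # genuine rejected
        candidates.append((abs(far - frr), (far + frr) / 2))
    return min(candidates)[1]          # rate where FAR and FRR are closest

genuine_scores = [0.9, 0.8, 0.75, 0.6, 0.55]
spoof_scores = [0.5, 0.4, 0.65, 0.3, 0.2]
rate = eer(genuine_scores, spoof_scores)
```

On real evaluations the threshold is interpolated over thousands of trials, but the crossing-point idea is identical.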
The scientific novelty lies in: 1) the first systematic analysis of Ukrainian fake voice detection; 2) identification of key factors affecting detection performance; 3) recommendations for improving dataset structures and algorithm adaptation for Ukrainian speech.</em></p> 2025-05-21T00:00:00+03:00 Copyright (c) 2025 Іван ВИНОГРАДОВ https://vottp.khmnu.edu.ua/index.php/vottp/article/view/478 MATHEMATICAL MODELING OF AGGREGATION AND PROCESSING OF MEASUREMENT DATA IN AUTOMATED METROLOGICAL MONITORING SYSTEMS 2025-03-30T17:42:01+03:00 Ulyana PANOVYK uliana.p.panovyk@lpnu.ua Roman HIDEI roman.v.hidei@lpnu.ua <p class="06AnnotationVKNUES"><em>The article presents a comprehensive mathematical model for the aggregation and processing of measurement data within automated metrological monitoring systems. It addresses the challenges of synthesizing heterogeneous data streams from various sensors, emphasizing the necessity to maintain key metrological characteristics such as accuracy, repeatability, timeliness, and representativeness. To achieve this, the study introduces a formalized, multi-stage aggregation algorithm that includes preprocessing, normalization, adaptive weighting, and validation stages. This structure ensures the algorithm can dynamically adjust to varying quality and availability of input data, thus improving the robustness and scalability of real-time monitoring applications.</em></p> <p class="06AnnotationVKNUES"><em>A central component of the model is the introduction of an integral quality indicator, designed to evaluate both the reliability and the practical usability of aggregated data. This indicator supports real-time decision-making by highlighting deviations in input quality and triggering appropriate aggregation responses. A novel feature of the model is the adaptive weighting mechanism, which modulates the contribution of each sensor’s data based on its individual quality profile.
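A minimal sketch of such quality-driven weighting, assuming each reading carries a quality score in [0, 1] (the values, the proportional weighting rule, and the cutoff are invented for illustration, not the article's model):

```python
def aggregate(readings, cutoff=0.3):
    """readings: list of (value, quality) pairs; returns quality-weighted mean.

    Sources below the quality cutoff are excluded entirely; the rest
    contribute in proportion to their quality score.
    """
    usable = [(v, q) for v, q in readings if q >= cutoff]
    if not usable:
        raise ValueError("no sensor meets the quality cutoff")
    total_q = sum(q for _, q in usable)
    return sum(v * q for v, q in usable) / total_q

# Two healthy temperature sensors and one faulty outlier (invented numbers)
readings = [(20.1, 0.9), (20.4, 0.8), (35.0, 0.1)]
fused = aggregate(readings)   # the 35.0 reading is dropped by the cutoff
```

Down-weighting rather than excluding a borderline source would only change the cutoff handling, not the weighted-mean core.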
This enables the system to prioritize high-quality sources while down-weighting or even excluding unreliable ones.</em></p> <p class="06AnnotationVKNUES"><em>The model's effectiveness is validated through analytical simulations under several hypothetical but realistic scenarios, such as degradation in precision, delayed transmission, and incomplete data sampling. These case studies illustrate how the integral quality indicator reacts to various disruptions and guides the system's response to maintain optimal data fusion. The proposed approach enhances the trustworthiness and resilience of metrological systems, making it particularly suitable for deployment in Industry 4.0 environments, where the ability to integrate diverse sensor inputs in real time is critical. Furthermore, the model facilitates the early detection of faulty data sources and supports automated reconfiguration of the aggregation logic, thereby increasing operational efficiency and decision support capabilities.</em></p> 2025-05-21T00:00:00+03:00 Copyright (c) 2025 Уляна ПАНОВИК, Роман ГІДЕЙ https://vottp.khmnu.edu.ua/index.php/vottp/article/view/522 AUTOMATED FOREST LAND MONITORING SYSTEM 2025-05-20T13:06:03+03:00 Serhii GRIBAN emmarko2016@gmail.com Serhii ZAIETS zssvp0204@gmail.com <p class="06AnnotationVKNUES"><em>The work considers the state of forest lands as one of the key components of the global ecosystem, and their condition has a significant impact on the ecological balance of the planet, influencing biodiversity, climate regulation, and water cycles. Forests are vital carbon sinks, absorbing atmospheric carbon dioxide and mitigating climate change; their degradation or loss can transform them into carbon sources, exacerbating global warming. They harbor a vast majority of terrestrial biodiversity, providing habitats for countless species of flora and fauna, and their destruction leads to irreversible species loss and ecosystem disruption. 
Furthermore, forests play a crucial role in regulating hydrological systems, influencing rainfall patterns, preventing soil erosion, and ensuring water quality and availability for both human consumption and agricultural use.</em></p> <p class="06AnnotationVKNUES"><em>This work analyzes the main factors affecting the state of forest lands, which can be broadly categorized into anthropogenic and natural drivers. Anthropogenic factors include deforestation for agriculture, unsustainable logging practices, urbanization, infrastructure development, and pollution. The conversion of forest land for cattle ranching and commodity crop cultivation remains a primary driver of deforestation globally. Poorly managed logging operations can lead to significant degradation, impacting forest structure and regeneration capacity. Natural factors, often exacerbated by climate change, encompass forest fires, pest infestations, diseases, and extreme weather events such as droughts, storms, and floods. Increased frequency and intensity of wildfires, often linked to a combination of human activity and climatic conditions, cause widespread destruction. Similarly, climate change can alter the distribution and virulence of native and invasive pests and pathogens, leading to significant tree mortality.</em></p> <p class="06AnnotationVKNUES"><em>In addition, a forest ecosystem monitoring system is proposed, designed to promptly identify destructive factors and assess their impact, thereby enabling timely and targeted interventions. Such a system would integrate various technologies, including remote sensing (satellite imagery, LiDAR), geographic information systems (GIS), ground-based inventories, and citizen science. Remote sensing allows for large-scale and continuous monitoring of forest cover change, fire outbreaks, and indicators of forest health. GIS facilitates the analysis and visualization of spatial data to identify high-risk areas and prioritize management actions. 
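One standard remote-sensing indicator of forest health, not named in the abstract but widely used for exactly this monitoring task, is NDVI, computed per pixel from red and near-infrared reflectance; the tiny 2x2 "bands" below are invented values:

```python
def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red), computed per pixel; range [-1, 1]."""
    return [[(n - r) / (n + r) if (n + r) else 0.0
             for n, r in zip(nir_row, red_row)]
            for nir_row, red_row in zip(nir, red)]

# Invented reflectance values: three vegetated pixels and one degraded pixel
nir = [[0.60, 0.55], [0.20, 0.58]]
red = [[0.10, 0.12], [0.18, 0.09]]
index = ndvi(nir, red)
# High values indicate dense vegetation; near-zero values suggest bare soil,
# so a sudden drop between acquisition dates can flag clearing or fire damage.
```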
Ground-based inventories provide detailed information on forest structure, species composition, and ecosystem health. Incorporating citizen science can enhance data collection and local engagement. This comprehensive monitoring system would provide early warnings of emerging threats, track the effectiveness of management interventions, and support adaptive management strategies to ensure the long-term health and resilience of forest ecosystems. The data generated is crucial for informing policy, guiding sustainable resource allocation, and fostering international cooperation in forest conservation.</em></p> 2025-05-15T00:00:00+03:00 Copyright (c) 2025 Сергій ГРИБАН, Сергій ЗАЄЦЬ https://vottp.khmnu.edu.ua/index.php/vottp/article/view/506 METHOD FOR OPTIMAL CONTAINER PLACEMENT FOR WEB PORTALS BASED ON RESOURCES AND PERFORMANCE 2025-05-07T14:41:40+03:00 Dmytro STEPANOV dmytro.s.stepanov@lpnu.ua Maksym SENIV maksym.m.seniv@lpnu.ua <p><em>In the context of the rapid growth in the complexity of modern web portals and the increasing pressure on IT infrastructure, the issue of efficient resource allocation has become a central challenge in maintaining high system performance, reliability, and service availability. The dynamic and often unpredictable nature of user interactions, combined with fluctuating traffic patterns and the proliferation of microservices, places immense demands on container orchestration systems. Inefficient container placement strategies can result in suboptimal utilization of computational resources, disproportionate server load distribution, higher energy and operational costs, latency spikes, and ultimately, a degraded user experience.</em></p> <p><em>This research focuses on enhancing the method of optimal container placement specifically tailored for web portal environments. 
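As a purely illustrative sketch of what score-based container placement can look like (node data, weights, and field names are all invented, not the paper's method):

```python
def place(container, nodes, w_cpu=0.5, w_mem=0.3, w_load=0.2):
    """Pick the best-scoring node that satisfies the container's hard limits."""
    feasible = [n for n in nodes
                if n["cpu_free"] >= container["cpu"]
                and n["mem_free"] >= container["mem"]]
    if not feasible:
        return None                     # no node can host this container
    best = max(feasible, key=lambda n: w_cpu * n["cpu_free"]
                                     + w_mem * n["mem_free"]
                                     - w_load * n["load"])
    return best["name"]

nodes = [
    {"name": "node-a", "cpu_free": 2.0, "mem_free": 4.0, "load": 0.9},
    {"name": "node-b", "cpu_free": 3.5, "mem_free": 2.0, "load": 0.2},
    {"name": "node-c", "cpu_free": 0.5, "mem_free": 8.0, "load": 0.1},
]
chosen = place({"cpu": 1.0, "mem": 1.5}, nodes)
```

The enhancements described in the abstract would add further terms (recovery-time objectives, load variability, network topology) to the same kind of multi-criteria score.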
The proposed approach builds upon traditional optimization techniques by integrating dynamic and context-aware metrics, including real-time load variability, application-specific performance indicators, environmental stability measures, system recovery time objectives (RTOs), hardware limitations of physical servers, and network topology constraints. By leveraging multi-criteria decision-making and optimization models, the method seeks to find a balance between performance efficiency and resource conservation.</em></p> <p><em>The expected outcomes of the study are twofold: firstly, to significantly reduce the mean time to recovery (MTTR) in the event of system disruptions; and secondly, to ensure the consistent, high-quality operation of web portals even under volatile load conditions. The practical contribution of the proposed method lies in its applicability for automated infrastructure management, offering a scalable and adaptive solution for cloud and on-premise environments. This results in reduced IT operational costs, improved service delivery, and increased end-user satisfaction. Furthermore, the method provides a foundation for future advancements in intelligent orchestration and self-healing infrastructure systems.</em></p> 2025-05-21T00:00:00+03:00 Copyright (c) 2025 Дмитро СТЕПАНОВ, Максим СЕНІВ https://vottp.khmnu.edu.ua/index.php/vottp/article/view/529 CLASSIFICATION AND AGGREGATION OF RISKS IN SMART GRIDS 2025-05-23T11:14:23+03:00 Vitalii BUNIAK vetalbunjak@gmail.com Vitalii LUKICHOV lukichov.vitalyi@vntu.edu.ua <p class="06AnnotationVKNUES"><em>The article discusses approaches to the classification of information systems in the energy sector and the systemic aggregation of cyber risks in smart grids. 
The authors identify the main architectural groups, from distributed smart grids to centralized ICS/SCADA and integrated microgrids, and analyze the specifics of protective measures for each of them.</em></p> <p class="06AnnotationVKNUES"><em>Further, risk assessment methodologies are described: static models (e.g., FMEA), multi-criteria MCDM approaches (AHP, TOPSIS), probabilistic methods (Bayesian networks, Monte Carlo), and resilience metrics, with their advantages, disadvantages, and data requirements.</em></p> <p class="06AnnotationVKNUES"><em>To model the systemic aggregation of risks, graph-based approaches, agent-based modeling, and the “failure propagation” scheme in the network are presented, which makes it possible to assess the cumulative effect of cascading attacks.</em></p> <p class="06AnnotationVKNUES"><em>In addition, a multi-criteria indicator for ranking countermeasures by “return per unit cost” and an extended indicator that takes into account the absolute and relative risk reduction within a given budget are proposed.</em></p> <p class="06AnnotationVKNUES"><em>The conclusions emphasize the need to implement adaptive IAM solutions and the Zero Trust concept to minimize the human factor and increase the resilience of smart grids.</em></p> 2025-05-21T00:00:00+03:00 Copyright (c) 2025 Віталій БУНЯК, Віталій ЛУКІЧОВ https://vottp.khmnu.edu.ua/index.php/vottp/article/view/530 INFORMATION SYSTEM FOR MONITORING THE PSYCHOLOGICAL STATE OF MILITARY SOLDIERS WITH POST-TRAUMATIC STRESS DISORDER USING AI 2025-05-23T11:39:07+03:00 Anzhelika AZAROVA azarova.angelika@gmail.com Maksym SHERSHUN sm.shershun@gmail.com Oleksandr MURASHCHENKO murachenko@vntu.edu.ua Olga RUZAKOVA olgarkv81@gmail.com <p class="06AnnotationVKNUES"><em>The article presents the development and implementation of an innovative information system (IS) designed to monitor the psychological state of military personnel suffering from post-traumatic stress disorder (PTSD).
This system is deployed on the iOS platform and leverages cutting-edge artificial intelligence technologies, specifically large language models (LLMs) such as GPT-4. These models are integrated via CoreML to analyze and interpret the textual responses provided by users during psychological assessments. Additionally, the system utilizes Apple's HealthKit framework to continuously collect and analyze physiological data, including heart rate, sleep patterns, and activity levels, which are critical for evaluating stress responses and overall mental health.</em></p> <p class="06AnnotationVKNUES"><em>A comprehensive architectural scheme of the IS has been developed, illustrating the integration of AI components and data acquisition modules. The system is supported by a robust mathematical model that enables dynamic and accurate assessment of the psychological state of soldiers. This model uniquely combines AI-driven analysis of user input with real-time physiological monitoring, thus enhancing both the accuracy and responsiveness of PTSD detection. Compared to existing methods, the proposed approach offers improved adaptability to the specific needs of military users, providing personalized insights and facilitating early intervention.</em></p> <p class="06AnnotationVKNUES"><em>The key scientific contribution of this research lies in the development of the mathematical model that underpins the IS. 
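The general idea of fusing a text-derived distress score with physiological indicators into one monitoring index might be sketched as follows; every weight, threshold, and field name here is hypothetical, invented for illustration, and is not the article's actual model:

```python
def ptsd_index(text_score, resting_hr, sleep_hours,
               w_text=0.5, w_hr=0.3, w_sleep=0.2):
    """Combine a text-analysis score in [0, 1] with physiological risk terms."""
    # Map resting heart rate 60..100 bpm onto a 0..1 risk term (invented scale)
    hr_risk = min(max((resting_hr - 60) / 40, 0.0), 1.0)
    # Sleep under 7 hours raises risk, saturating at 3 hours (invented scale)
    sleep_risk = min(max((7.0 - sleep_hours) / 4.0, 0.0), 1.0)
    return w_text * text_score + w_hr * hr_risk + w_sleep * sleep_risk

index = ptsd_index(text_score=0.8, resting_hr=92, sleep_hours=4.5)
# In such a design, an index above some clinical threshold would flag the
# case for human review rather than act as a diagnosis by itself.
```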
This model significantly advances current practices by allowing for the adaptive and efficient identification of PTSD symptoms, increasing diagnostic precision, and supporting real-time mental health monitoring in high-risk populations such as military personnel.</em></p> 2025-05-21T00:00:00+03:00 Copyright (c) 2025 Анжеліка АЗАРОВА, Максим ШЕРШУН, Олександр МУРАЩЕНКО, Ольга РУЗАКОВА https://vottp.khmnu.edu.ua/index.php/vottp/article/view/526 METHOD OF COMPREHENSIVE OPTIMIZATION OF ENERGY CONSERVATION AND SECURITY FOR IoT TECHNOLOGY 2025-05-22T09:53:33+03:00 Oleksii KOROLKOV adroyal2017@gmail.com Serhii POPLAVSKYI sergey.poplavskii@gmail.com Oleksandr HLUKHENKYI goldbergoalexander@gmail.com Olena PONOCHOVNA olena.ponochovna@pdau.edu.ua <p class="06AnnotationVKNUES"><em>The article presents a new method of comprehensive adaptive optimization of energy consumption and cybersecurity in Internet of Things (IoT) systems, developed with the limited resources of embedded devices and the need to maintain a high level of real-time data protection in mind. The method is based on the principles of dynamic control of IoT device operating modes, using algorithms that take into account the current battery charge, the criticality of the data being processed, the frequency of events, and the risks of network threats. Its implementation automatically adjusts the data transmission frequency and power consumption modes, balancing device autonomy against information security.</em></p> <p class="06AnnotationVKNUES"><em>The study provides a thorough analysis of current IoT challenges, classifies existing approaches to energy saving, and reviews current cryptographic protocols and algorithms, including IPsec, TLS, AES, RSA, and ECC. Based on these findings, an original algorithm for adaptive control of energy consumption and secure data transmission is proposed.
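The adaptive control idea, stretching the transmission interval when battery is low and data are non-critical, and shortening it under threat, can be sketched as follows (the coefficients, bounds, and the specific rule are invented for illustration, not the article's algorithm):

```python
def tx_interval(base_s, battery, criticality, threat):
    """Adaptive transmission interval in seconds.

    battery, criticality, threat are each normalized to [0, 1].
    """
    interval = base_s * (2.0 - battery)      # low battery -> send less often
    interval *= (1.5 - criticality)          # critical data -> send sooner
    if threat > 0.5:                         # under attack, report promptly
        interval *= 0.5
    return max(1.0, min(interval, 3600.0))   # clamp to sane bounds

quiet = tx_interval(60, battery=0.2, criticality=0.1, threat=0.0)
urgent = tx_interval(60, battery=0.9, criticality=0.9, threat=0.8)
```

A real device would re-evaluate this on each wake-up and combine it with sleep-mode selection, which is where the measured energy savings come from.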
To evaluate its effectiveness, an experimental prototype of an IoT device was developed and implemented, based on an ESP32 microcontroller and a DHT22 sensor, communicating over MQTT secured with TLS, with visualization built in Node-RED. 108-hour tests under simulated threats (port scanning, IP spoofing, flooding attacks) showed a reduction in energy consumption of more than 40% compared to a fixed transmission mode, without loss of accuracy or system stability.</em></p> <p class="06AnnotationVKNUES"><em>The results confirm the high efficiency of the developed method and its suitability for deployment in household, infrastructure, and industrial IoT systems, where autonomy, reliability, and data security are critical.</em></p> 2025-05-21T00:00:00+03:00 Copyright (c) 2025 Олексій КОРОЛЬКОВ, Сергій ПОПЛАВСЬКИЙ, Олександр ГЛУХЕНЬКИЙ, Олена ПОНОЧОВНА https://vottp.khmnu.edu.ua/index.php/vottp/article/view/531 RESEARCH OF THE POSSIBILITY OF USING SEPARATE MOBILE DEVICE SENSORS AS A SOURCE OF ENTROPY FOR A RANDOM NUMBER GENERATOR 2025-05-23T12:07:10+03:00 Denys OSTAPETS odaua@i.ua Artur OPRIATNYI artur.opriatnyi@icloud.com <p class="06AnnotationVKNUES"><em>This paper explores the potential of utilizing the accelerometer, gyroscope, and magnetometer sensors embedded in modern mobile devices as novel entropy sources for hardware-based random number generators (RNGs). The study begins by defining the fundamental requirements for effective entropy sources, followed by a detailed comparative analysis of available mobile device sensors. Based on this analysis, specific sensors were selected for further investigation due to their responsiveness, accessibility, and variability in data output. A specialized software-hardware complex was developed, comprising a smartphone for data acquisition and a personal computer for processing and analysis.
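The bit-harvesting step such a complex performs, taking the k least significant bits of each sensor axis and XOR-combining the axes, can be sketched as follows (the integer samples are invented stand-ins for real accelerometer output):

```python
def harvest_bits(samples, k=4):
    """samples: list of (x, y, z) integer sensor readings.

    Keeps the k least significant bits of each axis, XOR-combines the three
    axes (modulo-2 addition), and emits the result LSB-first.
    """
    out = []
    mask = (1 << k) - 1
    for x, y, z in samples:
        combined = (x & mask) ^ (y & mask) ^ (z & mask)
        out.extend((combined >> i) & 1 for i in range(k))
    return out

samples = [(1023, 517, 264), (998, 530, 271)]   # invented raw readings
bits = harvest_bits(samples, k=4)
```

In practice the raw bit stream would still be whitened and run through statistical randomness tests before any cryptographic use.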
This system enables the extraction of raw sensor data and supports experimentation with different bit-level manipulations.</em></p> <p class="06AnnotationVKNUES"><em>The research examines the use of between 1 and 32 least significant bits (LSBs) from each axis (X, Y, Z) of the selected sensors. Various methods for combining these bits—such as simple concatenation, arithmetic summation, and modulo two addition (XOR)—are implemented and analyzed. Experimental evaluations focus on the statistical quality of the generated random numbers, their compliance with standard randomness criteria, and the throughput of generation.</em></p> <p class="06AnnotationVKNUES"><em>The findings indicate that sensor data from mobile devices can serve as viable entropy sources, significantly enhancing the performance and speed of hardware RNGs. This approach not only leverages readily available consumer technology but also offers a scalable and cost-effective solution for secure and efficient random number generation in various applications, including cryptographic systems, simulations, and secure communications.</em></p> 2025-05-21T00:00:00+03:00 Copyright (c) 2025 Денис ОСТАПЕЦЬ, Артур ОПРЯТНИЙ https://vottp.khmnu.edu.ua/index.php/vottp/article/view/519 IMPLEMENTATION OF AUTOMATED EDDY CURRENT TESTING TOOLS IN TECHNICAL DIAGNOSTICS SYSTEMS 2025-05-19T14:05:50+03:00 Oleksandr LEVINSKYI levinskiy.a.s@gmail.com Daniel OLMAN 4141456@as.op.edu.ua Viktor HANUSOVSKYI 3499109@as.op.edu.ua Denys BILOUS denisbilous1989@stud.op.edu.ua Fedir KERDAN teogof@stud.op.edu.ua <p><em>This study presents a comprehensive analysis of methods for configuring and optimizing automated eddy current testing tools in technical diagnostics systems. The advantages of eddy current testing were analyzed, and its classification was determined based on methodological, functional, technical, operational, and industry-specific characteristics.
The evaluation of the effectiveness of automated eddy current testing tools in technical diagnostics should consider regulatory and normative requirements as well as economic feasibility, ensuring compliance with international standards and cost optimization. It was noted that the use of multi-channel sensor arrays significantly improves the efficiency of technical diagnostics with modern matrix-type eddy current testing tools, reducing monitoring time and enhancing defect detection effectiveness. It was determined that the primary task of improving the productivity of eddy current testing is the effective processing of electrical signals from sensor arrays, which allows for accurate amplitude and phase characterization of the signal. Digital extraction of these characteristics using discrete Hilbert transforms and adapted mathematical techniques helps to reduce errors and improve diagnostic accuracy, which is essential for flaw-detection (defectoscopy) tasks. Furthermore, the study proposed mathematical modeling of diagnostic processes based on sensors from the eddy current testing complex, which enables effective detection of defects on surfaces and in subsurface layers of materials, as well as adaptation of the model for solving other engineering problems such as crack detection and mechanical damage in objects with complex geometries. Signals received from the sensors during scanning reflect changes in voltage levels that occur when passing over defects, and are converted into digital values that allow for determining the depth of the defects.
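The amplitude-and-phase extraction step can be illustrated with a minimal digital quadrature (lock-in) sketch in Python, a simple stand-in for the discrete Hilbert transform mentioned above; the signal parameters are invented for the demonstration:

```python
import math

def amplitude_phase(samples, freq, fs):
    """Estimate amplitude and phase of a sinusoidal sensor signal by
    correlating it with quadrature references at the excitation
    frequency (a digital lock-in; exact over whole periods).

    This is an illustrative stand-in for the Hilbert-transform-based
    extraction described in the abstract, not the paper's method."""
    n = len(samples)
    i_sum = sum(s * math.cos(2 * math.pi * freq * k / fs)
                for k, s in enumerate(samples))
    q_sum = sum(s * math.sin(2 * math.pi * freq * k / fs)
                for k, s in enumerate(samples))
    i, q = 2 * i_sum / n, 2 * q_sum / n   # in-phase and quadrature parts
    return math.hypot(i, q), math.atan2(-q, i)
```

For s[k] = A·cos(ωk + φ) the in-phase part converges to A·cos φ and the quadrature part to −A·sin φ, so the pair recovers exactly the amplitude and phase a flaw signature modulates.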
A three-dimensional map of the object is formed during the scanning process, and specialized algorithms for sensor connection management and placement schemes are used to enhance diagnostic accuracy and scanning speed.</em></p> 2025-05-21T00:00:00+03:00 Copyright (c) 2025 Олександр ЛЕВИНСЬКИЙ, Даніель ОЛЬМАН, Віктор ГАНУСОВСЬКИЙ, Денис БІЛОУС, Федір КЕРДАНЬ https://vottp.khmnu.edu.ua/index.php/vottp/article/view/479 AGENT-BASED MODELING OF THE BEHAVIOR OF A DISTRIBUTED IOT SYSTEM FOR PRINTING PRODUCTION 2025-03-30T17:46:42+03:00 Ulyana PANOVYK uliana.p.panovyk@lpnu.ua Serhii KUTAS serhii.a.kutas@lpnu.ua <p class="06AnnotationVKNUES"><em>This paper presents an agent-based modeling approach to the behavior of distributed Internet of Things (IoT) systems designed for use in industrial printing environments. The study focuses on the development of a decentralized architecture, in which each functional component of the system operates as an autonomous agent capable of local decision-making, real-time sensor-actuator interaction, and peer-to-peer communication. A two-level control structure is proposed: the low-level control manages direct communication with sensors and actuators in real-time, while the high-level control performs contextual decision-making using ontology-driven logic and coordinates interactions between agents.</em></p> <p class="06AnnotationVKNUES"><em>The model supports system self-adaptation and fault tolerance through dynamic reconfiguration mechanisms. In the event of node failures or changes in the network topology (e.g., adding a new functional unit), agents reorganize their connections and roles without centralized intervention. 
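The decentralized reorganization described here can be sketched as a minimal agent model; the neighbor-adoption rule below is an illustrative assumption, not the paper's pseudocode:

```python
class Agent:
    """A node that keeps a set of peers and reorganizes its links
    locally when a neighbor fails (illustrative sketch only)."""

    def __init__(self, name):
        self.name = name
        self.peers = set()

    def connect(self, other):
        # Links are symmetric: both agents record each other.
        self.peers.add(other)
        other.peers.add(self)

    def handle_failure(self, failed, candidates):
        """Drop the failed peer and adopt a replacement from the
        offered candidates, without any central coordinator."""
        self.peers.discard(failed)
        for cand in sorted(candidates, key=lambda a: a.name):
            if cand is not self and cand not in self.peers:
                self.connect(cand)
                break
```

Because every agent applies the same local rule, a chain A–B–C heals into A–C after B fails, which is the self-reconfiguration behavior the abstract describes at the system level.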
The paper includes behavioral pseudocode, architectural diagrams, and communication topology schemes that formalize the logic of interaction between components and enable virtual simulation of the system without the use of specialized software platforms.</em></p> <p class="06AnnotationVKNUES"><em>The proposed agent-based approach demonstrates high flexibility, scalability, and robustness, making it well-suited for implementation in the printing industry, where operational conditions often require dynamic adaptation and intelligent control of technological processes. The research lays the groundwork for further development of digital twins, predictive maintenance strategies, and autonomous control systems in industrial IoT infrastructures.</em></p> 2025-05-21T00:00:00+03:00 Copyright (c) 2025 Уляна ПАНОВИК, Сергій КУТАС https://vottp.khmnu.edu.ua/index.php/vottp/article/view/532 ULTRASONIC METHODS FOR IMPROVING FUEL-LEVEL MEASUREMENT ACCURACY IN MOVING VEHICLES 2025-05-23T12:57:44+03:00 Valeriy ZDORENKO alzd123@meta.ua Oleksandr VINNICHENKO a.vinnichenko87@gmail.com <p class="06AnnotationVKNUES"><em>This paper presents a comprehensive analysis of contemporary ultrasonic techniques employed to measure fuel levels in truck fuel tanks under dynamic operating conditions. The study systematically examines a range of methods, including the pulse-echo technique, through-transmission, M-mode (motion-mode) imaging, phase-shift detection, and time-of-flight (TOF) measurements. 
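The time-of-flight principle reduces to a short computation: the echo delay times the temperature-dependent speed of sound gives the round-trip distance. A sketch for a bottom-mounted transducer, with an assumed linear sound-speed model (the coefficients are illustrative, not measured values):

```python
def fuel_level(echo_delay_s, tank_height_m, temp_c,
               c20=1250.0, dc_dt=-4.0):
    """Convert ultrasonic time of flight to fuel level for a
    bottom-mounted transducer firing upward at the fuel surface.

    The sound-speed model c(T) = c20 + dc_dt * (T - 20) is an
    assumed linearization; c20 and dc_dt are illustrative values."""
    c = c20 + dc_dt * (temp_c - 20.0)      # speed of sound in the fuel
    distance = c * echo_delay_s / 2.0      # round trip -> one-way distance
    return min(max(distance, 0.0), tank_height_m)
```

The temperature term is exactly the kind of environmental correction the comparative analysis flags: at a fixed echo delay, a 10 °C shift already moves the computed level by several percent.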
For each method, the fundamental operating principles are described, alongside their respective advantages, limitations, and susceptibility to environmental and mechanical influences such as temperature fluctuations, tank inclination, and liquid sloshing during vehicle motion.</em></p> <p class="06AnnotationVKNUES"><em>A comparative analysis table is included to illustrate that although certain methods demonstrate high precision in controlled laboratory environments, they often fail to maintain measurement stability when exposed to multiple simultaneous dynamic disturbances. To address this challenge, the paper proposes a hybrid approach informed by a critical review of existing literature and validated by consolidated experimental results. This approach integrates pulse-echo measurements with TOF estimations to rapidly assess distance, while M-mode imaging is utilized to mitigate the impact of surface turbulence and vibration-induced fluctuations.</em></p> <p class="06AnnotationVKNUES"><em>The proposed hybrid strategy effectively reduces the root-mean-square (RMS) error to less than or equal to 1%, even under realistic conditions involving mechanical vibrations, angular deviations, and thermal variations—typical in heavy-duty vehicle applications. The findings offer a solid scientific foundation for advancing the development of robust ultrasonic fuel level sensors tailored for use in complex transportation environments.</em></p> 2025-05-21T00:00:00+03:00 Copyright (c) 2025 Валерій ЗДОРЕНКО, Олександр ВІННІЧЕНКО https://vottp.khmnu.edu.ua/index.php/vottp/article/view/536 ALGORITHM FOR FEATURE EXTRACTION OF CHROMOSOME DIGITAL IMAGES THROUGH SEGMENTATION 2025-05-28T08:36:01+03:00 Oleksii PYSARCHUK platinumPA2212@gmail.com Yurii MIRONOV yuriymironov96@gmail.com <p class="06AnnotationVKNUES"><em>This paper presents a novel algorithm for the extraction of significant features from digital images of chromosomal objects. 
The main goal of the algorithm is to facilitate effective clustering and identification of chromosomes based on their segmented image data. The proposed method relies on advanced image segmentation techniques that isolate chromosomal objects regardless of their geometric form, which often varies unpredictably due to biological and technical imaging factors.</em></p> <p class="06AnnotationVKNUES"><em>A key advantage of this algorithm lies in its robustness against geometrical variability: it demonstrates consistent results even when applied to chromosome images of different shapes and contours. This adaptability makes the algorithm especially useful in real-world cytogenetic analysis, where image irregularities are common and can negatively impact the performance of classical neural networks or static feature extraction methods.</em></p> <p class="06AnnotationVKNUES"><em>The effectiveness and precision of the developed algorithm have been rigorously evaluated through comparative analysis with the widely used convolutional neural network model VGG16. The results show that the proposed algorithm performs on par with, and in some cases even surpasses, VGG16 in terms of feature extraction quality and stability across variable datasets. 
This suggests that the method can be a valuable alternative or complementary approach in automated chromosome recognition systems, particularly where classical models may face limitations due to shape variability or insufficient training data.</em></p> <p class="06AnnotationVKNUES"><em>The findings of this research contribute to the fields of digital cytogenetics, biomedical image processing, and intelligent diagnostic systems, highlighting a pathway toward more reliable chromosome analysis through tailored algorithmic approaches.</em></p> 2025-05-21T00:00:00+03:00 Copyright (c) 2025 Олексій ПИСАРЧУК, Юрій МІРОНОВ https://vottp.khmnu.edu.ua/index.php/vottp/article/view/486 INFOLOGICAL MODEL OF FACTORS, INDICATORS, AND ROUTE OPTIMALITY CRITERIA IN GRAPH DATABASES 2025-04-10T18:09:32+03:00 Nazar MELNYK na.melnyk@kpi.ua Oleksandr KOROCHKIN avcora@gmail.com <p class="06AnnotationVKNUES"><em>Modern transport and logistics systems operate in a highly dynamic and complex environment, characterized by dense traffic flows, high volumes of heterogeneous information, and the necessity to make multi-criteria decisions under uncertainty. These challenges are further intensified in the era of digital transformation, where the rapid evolution of intelligent logistics systems, globalized supply chains, economic volatility, and external factors such as military conflicts or pandemics place additional demands on the adaptability and responsiveness of transport infrastructures.</em></p> <p class="06AnnotationVKNUES"><em>In this context, there is a growing need for innovative methods and technologies that enable the storage, processing, and analysis of large-scale transport data in real time. 
One such solution is the integration of graph-based data models, which are particularly suitable for representing and analyzing complex transport networks due to their natural structure and ability to support flexible querying.</em></p> <p class="06AnnotationVKNUES"><em>The paper proposes an infological model that formalizes the key factors, indicators, and optimality criteria relevant to transportation routing. This model serves as a semantic layer that connects high-level decision-making logic with the underlying data architecture. When implemented using graph databases, the model provides an efficient and scalable framework for adaptive route optimization.</em></p> <p class="06AnnotationVKNUES"><em>By incorporating multi-criteria analysis into the decision-making process, the developed optimization approach allows for the identification of transportation routes that strike a balance between cost efficiency, delivery time, network congestion, safety considerations, and other contextual parameters. 
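Multi-criteria route selection of this kind can be illustrated by scalarizing the criteria into a single edge weight and running Dijkstra's algorithm; the graph, criteria and weights below are invented for the example:

```python
import heapq

def best_route(graph, start, goal, weights):
    """Dijkstra over edges carrying several criteria; each edge maps
    neighbor -> dict of criteria, scalarized by `weights`.

    Illustrative sketch of multi-criteria routing, not the paper's
    infological model itself."""
    score = lambda crit: sum(weights[k] * v for k, v in crit.items())
    dist, queue = {start: 0.0}, [(0.0, start, [start])]
    while queue:
        d, node, path = heapq.heappop(queue)
        if node == goal:
            return d, path
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nxt, crit in graph.get(node, {}).items():
            nd = d + score(crit)
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(queue, (nd, nxt, path + [nxt]))
    return float("inf"), []
```

Changing the weight vector changes the winning route, which is the essence of balancing cost, time and risk against each other; in a graph database the same scalarization would typically be expressed inside the path query.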
Moreover, the adaptive nature of the system enables continuous reconfiguration of route parameters in response to real-time changes in network conditions, infrastructure availability, and external disruptions.</em></p> <p class="06AnnotationVKNUES"><em>The proposed approach enhances the resilience, efficiency, and intelligence of modern transport systems, offering a foundation for the development of decision support tools in the field of logistics and supply chain management.</em></p> 2025-05-21T00:00:00+03:00 Copyright (c) 2025 Назар МЕЛЬНИК, Олександр КОРОЧКІН https://vottp.khmnu.edu.ua/index.php/vottp/article/view/537 METHOD OF USING A NEURAL NETWORK WITH A HYBRID ARCHITECTURE TO DETERMINE THE EMOTIONAL TONE OF TEXT MESSAGES 2025-05-28T09:13:53+03:00 Dmytro YURCHENKO di4iker@gmail.com Oleksandr OVCHARUK off4aruk@gmail.com Oleksandr MAZURETS exe.chong@gmail.com Pavlo SHEVCHUK shevchuk12072005@gmail.com <p><em>The article reviews the current state of the scientific direction of determining emotional tone and presents a method for using a hybrid architecture neural network to determine the emotional tone of text messages. The method of using a hybrid architecture neural network to determine the emotional tone of text messages is intended for automated conversion of input data in the form of a trained hybrid architecture neural network model with a tokenizer and a text message for analysis into output data in the form of a membership class by emotional tone and its numerical evaluation. Method is based on the use of a hybrid neural network architecture that combines CNN and BiLSTM. The proposed combination contributes to the effective selection of local patterns, due to the properties of the CNN layer, and also allows to take into account long-term dependencies in the text, due to the properties of BiLSTM. The neural network model starts with an Embedding layer, which transforms text data into fixed-length numeric vectors. 
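The layer stack of such an Embedding, CNN and BiLSTM pipeline can be checked with a small shape-propagation sketch; all hyperparameters (embedding dimension, filter count, kernel width, LSTM units) are illustrative assumptions, not the paper's values:

```python
def shape_flow(seq_len, embed_dim=128, conv_filters=64,
               kernel=5, lstm_units=64):
    """Trace tensor shapes through the described CNN + BiLSTM stack.

    Dropout layers leave shapes unchanged and are therefore omitted;
    every hyperparameter here is an illustrative assumption."""
    conv_len = seq_len - kernel + 1  # 'valid' 1-D convolution
    return [
        ("input", (seq_len,)),                   # token ids
        ("embedding", (seq_len, embed_dim)),     # ids -> dense vectors
        ("conv1d", (conv_len, conv_filters)),    # local n-gram patterns
        ("bilstm", (conv_len, 2 * lstm_units)),  # forward + backward states
        ("global_max_pool", (2 * lstm_units,)),  # strongest feature per channel
        ("dense_sigmoid", (1,)),                 # probability of positive tone
    ]
```

The doubling at the BiLSTM stage (forward plus backward states) is what lets the classifier see context from both ends of the message before pooling collapses the sequence.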
Next comes a dropout layer that randomly deactivates 20% of the neurons to reduce the risk of overfitting. Then comes a convolutional layer that detects local patterns in the input data. Next comes a bidirectional LSTM layer, capable of taking context from both ends of the sequence into account, with internal dropout mechanisms to improve generalization. This is followed by a global max-pooling layer that selects the maximum values across all features to reduce dimensionality. The final stage is a dense layer with a single neuron and sigmoid activation, which outputs the probability that the text belongs to the positive-tone class. An experimental study of the method's effectiveness, carried out with the created software, is presented. It was found that the specified hybrid architecture achieves an Accuracy of 0.974, exceeding currently known analogues by more than 0.07 on the Accuracy metric.</em></p> 2025-05-21T00:00:00+03:00 Copyright (c) 2025 Дмитро ЮРЧЕНКО, Олександр ОВЧАРУК, Олександр МАЗУРЕЦЬ, Павло ШЕВЧУК https://vottp.khmnu.edu.ua/index.php/vottp/article/view/515 VERIFICATION OF THERMOGRAPHIC MONITORING RESULTS OF EXTERNAL TURNING THERMAL PROCESSES BASED ON MATHEMATICAL MODELLING OF THE THERMAL STATE IN THE CUTTING ZONE 2025-05-28T09:26:48+03:00 Volodymyr GOLOBORODKO holoborodkovolodymyr@gmail.com Liudmula PERPERI perperi.l.m@op.edu.ua <p class="06AnnotationVKNUES"><em>The article presents an approach to evaluating the reliability of thermographic monitoring of cutting processes during external turning operations performed without the use of a lubricating and cooling technological medium. For this purpose, a modelling approach was defined for the thermal state of the cutting zone based on the heat conduction equation, incorporating initial and boundary conditions.
The proposed simplified mathematical model was employed to verify the experimental temperatures obtained through thermographic monitoring. The comparative analysis of experimental results obtained by infrared thermographic measurements with the theoretical temperatures calculated from the model, based on error estimation, confirmed the applicability of the thermographic method in the machining of materials.</em></p> 2025-05-21T00:00:00+03:00 Copyright (c) 2025 Володимир ГОЛОБОРОДЬКО, Людмила ПЕРПЕРІ https://vottp.khmnu.edu.ua/index.php/vottp/article/view/538 SYMMETRIC ENCRYPTION ALGORITHM OF INCREASED CRYPTOGRAPHIC RESISTANCE 2025-05-28T09:41:45+03:00 Yuriy KLOTS klots@khmnu.edu.ua Volodymyr DZHULIY dzhuliivm@khmnu.edu.ua Roman KOROBKO korobkord@khmnu.edu.ua <p class="06AnnotationVKNUES"><em>This article proposes a symmetric encryption algorithm of increased cryptographic stability, which is based on the Feistel network, but includes improved solutions to increase the level of security reliability and efficiency. The main goal of the research is to demonstrate how new cryptographic methods can improve the resistance of ciphers to modern attacks, as well as provide greater speed of data processing without loss of reliability.</em></p> <p class="06AnnotationVKNUES"><em>The algorithm uses 119-bit blocks and a 112-bit key, ensuring an encryption process of 10 rounds. The key aspect of the encryption algorithm is that with the known structure of the algorithm, crypto-resistance is ensured only at the expense of the secret key.</em></p> <p class="06AnnotationVKNUES"><em>Special attention is paid to the need to use longer encryption blocks (119 bits) than the standard 64-bit blocks in symmetric encryption algorithms, to protect against possible attacks in view of the growth of computing power. 
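Because 119 is not a multiple of 8, block boundaries drift relative to 8-bit character boundaries, so encoded characters straddle adjacent blocks. A numerical sketch (the block size comes from the abstract; the helper itself is illustrative):

```python
def block_char_alignment(block_bits=119, char_bits=8, blocks=8):
    """For each block, report the bit offset of its start within a
    character; a nonzero offset means a character straddles the
    preceding block boundary (helper written for illustration)."""
    return [(i * block_bits) % char_bits for i in range(blocks)]

# With 119-bit blocks every boundary after the first falls mid-character:
# the offsets cycle through 0, 7, 6, 5, 4, 3, 2, 1 because 119 % 8 == 7.
```

Since gcd(119, 8) = 1, the offset walks through every residue before repeating, so no recurring block position ever realigns with character boundaries.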
In addition, the peculiarity of the symmetric encryption algorithm of increased cryptographic stability is that when using standard encodings (ASCII, UTF-8, etc.), no block will contain a whole number of logical symbols. This makes it difficult to carry out cryptographic analysis, since parts of symbols can be on the border of different blocks, which creates additional difficulties for attackers.</em></p> <p class="06AnnotationVKNUES"><em>The proposed symmetric encryption algorithm of increased cryptographic stability has potential for use in modern encryption systems and provides a high level of information protection.</em></p> <p class="06AnnotationVKNUES"><em>The conducted research not only contributes to the further development of encryption theory, but also has practical significance for the development of new, more secure information protection systems.</em></p> 2025-05-21T00:00:00+03:00 Copyright (c) 2025 Юрій КЛЬОЦ, Володимир ДЖУЛІЙ, Роман КОРОБКО https://vottp.khmnu.edu.ua/index.php/vottp/article/view/480 SIMULATION MODELING FOR THE DIGITAL TRANSFORMATION OF THE HIGHER EDUCATION PROCESS MANAGEMENT SUBSYSTEM 2025-03-30T17:51:41+03:00 Roman PANOVYK roman.r.panovyk@lpnu.ua Ulyana PANOVYK uliana.p.panovyk@lpnu.ua Bohdana FEDYNA bohdana.i.fedyna@lpnu.ua <p><em>This paper presents an approach to the digital transformation of the higher education process management subsystem through simulation modeling. The proposed model captures the dynamic structure of the educational environment, incorporating key entities such as students, teachers, academic disciplines, study groups, class schedules, and administrative controllers. 
The architecture is implemented using the Python programming language in combination with the SimPy library, allowing the system to simulate event-driven interactions in a flexible and scalable manner.</em></p> <p><em>The simulation logic is based on a weekly academic cycle and includes processes such as attendance tracking, teaching load monitoring, scheduling conflict resolution, and performance evaluation. A BPMN-based block diagram is provided to illustrate the system's logic, supported by fragments of code and parameterized settings. Initial conditions and simulation parameters are clearly defined to reflect realistic academic settings.</em></p> <p><em>Several experimental scenarios are modeled, including teacher overload, room shortages, and overlapping schedules, with the aim of identifying system bottlenecks and testing adaptive responses. The results demonstrate that the simulation model enables early detection of critical areas in process planning and supports data-driven decision-making at the administrative level.
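The published model is driven by SimPy; as a dependency-free illustration of the same event-driven idea, here is a minimal priority-queue scheduler running a recurring lecture event over a simulated week (entities and timings invented):

```python
import heapq

def simulate(horizon_hours=168):
    """Minimal discrete-event loop: events are (time, name) pairs
    popped in time order; a lecture recurs every 24 simulated hours.

    A stdlib stand-in for the SimPy model, not the paper's code."""
    events, log = [(0, "lecture")], []
    while events:
        time, name = heapq.heappop(events)
        if time > horizon_hours:
            break
        log.append((time, name))           # e.g. record attendance here
        if name == "lecture":
            heapq.heappush(events, (time + 24, "lecture"))
    return log
```

SimPy's `Environment` and `timeout` calls play exactly this role of a time-ordered event queue; the library adds process coroutines and shared resources on top of it.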
The approach enhances transparency, flexibility, and responsiveness of university operations within the context of ongoing digital transformation.</em></p> 2025-05-21T00:00:00+03:00 Copyright (c) 2025 Роман ПАНОВИК, Уляна ПАНОВИК, Богдана ФЕДИНА https://vottp.khmnu.edu.ua/index.php/vottp/article/view/539 COMPARATIVE ANALYSIS OF SIMULATION PLATFORMS FOR UAV STABILIZATION WITH REINFORCEMENT LEARNING METHODS 2025-05-28T10:05:41+03:00 Dmytro PETRENKO dmytro.o.petrenko@lpnu.ua Yurii KRYVENCHUK yurii.p.kryvenchuk@lpnu.ua <p class="06AnnotationVKNUES"><em>This paper presents an in-depth comparative analysis of four prominent simulation platforms commonly utilized for unmanned aerial vehicle (UAV) stabilization tasks involving reinforcement learning (RL): AirSim, Gazebo with RotorS, Flightmare, and Unity ML Agents. The evaluation is structured around five pivotal criteria that are essential for effective RL training in the context of UAV stabilization: the realism of physics simulation, the fidelity and variety of sensor emulation, the ease and depth of integration with RL frameworks, the capability to model atmospheric turbulence, and the degree of flexibility offered for environment customization. Each platform was systematically assessed in simulated scenarios reflecting real-world UAV stabilization challenges.</em></p> <p class="06AnnotationVKNUES"><em>The findings reveal nuanced strengths and limitations across the platforms. Flightmare excels in physics realism and seamless RL integration, making it particularly suited for high-precision stabilization tasks in dynamic environments. However, its limited support for environment customization may constrain its broader applicability. AirSim emerges as a versatile choice, offering robust sensor simulation and a good balance between realism and configurability, positioning it well for general-purpose UAV training scenarios. 
Gazebo with RotorS demonstrates exceptional environment customization capabilities and modular architecture but faces integration complexities with modern RL toolkits. Unity ML Agents offers a user-friendly interface and fast prototyping benefits but falls short in simulating the complex aerodynamics necessary for advanced UAV stabilization.</em></p> <p class="06AnnotationVKNUES"><em>This study emphasizes the importance of aligning simulation platform capabilities with the specific needs of UAV stabilization research and development. Moreover, it underscores the necessity of continued innovation to bridge the sim-to-real transfer gap that hinders the deployment of RL-trained UAV control systems in practical settings.</em></p> 2025-05-21T00:00:00+03:00 Copyright (c) 2025 Дмитро ПЕТРЕНКО, Юрій КРИВЕНЧУК https://vottp.khmnu.edu.ua/index.php/vottp/article/view/540 THREE-PARAMETER METHOD FOR MEASURING INFORMATIONAL PARAMETERS OF WASTEWATER SAMPLE FROM CONFECTIONERY FACTORIES 2025-05-29T11:45:12+03:00 Vadim SEBKO vadim.sebko@gmail.com Valerii ZDORENKO alzd123@meta.ua Nataliia ZASHCHEPKINA nanic1604@gmail.com Sergii BARYLKO poo4ta@bigmir.net <p class="06AnnotationVKNUES"><em>The expansion of the functional and technical capabilities of the submersible eddy current transducer (<a name="_Hlk191795439"></a>SEСT) was investigated, regarding the combined three-parameter informative control of the specific electrical conductivity χ, the relative dielectric permittivity εr and the temperature t of alkaline wastewaters from confectionery industries. Based on the submersible SEСT, which includes only one winding that can perform two functions: sample magnetization and measurement of wastewater sample parameters, a three-parameter method for measuring the physicochemical parameters of alkaline wastewaters from confectionery industries was investigated. 
In this case, instead of a core, the SECT uses a column of liquid that fills its round through-hole when the SECT is immersed in a container with wastewater (a reception chamber, settling tank, wastewater-averaging tank, biological reactor, stabilization pool, etc.). The column of alkaline wastewater filling the hole of the immersed SECT thus serves both as the core and as the sample whose parameters are to be measured. Since the main criterion when choosing a technology for treating concentrated wastewater from processing and food industries is the composition of the water, the numerical data of the specific electrical conductivity χ, relative dielectric permittivity εr and temperature t of alkaline wastewater can be used to determine the regulatory characteristics on the basis of which the optimal method for treating wastewater from confectionery industries is selected. Algorithms for measuring and calculating procedures for joint measurements of the parameters εr, χ and t are presented. The results of measurements of the specific electrical conductivity χt, relative dielectric permittivity εrt and temperature t of a sample of alkaline wastewater from confectionery industries were obtained.
Developing a quality software product is a recognized and important need of the software industry. It was determined that focusing on product quality allows end users to adopt the product more easily and efficiently. Quality plays a vital role for software users: it confirms that all customer satisfaction requirements are met. It is therefore important to choose the right software development process, one that leads to a quality software product. The paper considers the basic concepts in the field of software testing, criteria for test selection, and the assessment of project testing. Considerable attention was paid to software testing methods; all existing testing methods operate within the formal process of testing the software being researched or developed. Such a formal verification process can prove the absence of defects only from the point of view of the method used; that is, there is no way to accurately identify or guarantee the absence of defects in a software product, given the human factor present at all stages of the software life cycle. The issues of automating the testing process and the relationship between the testing process and software quality were also considered.
Thus, testing is one of the ways to develop a quality software product and is part of a set of effective tools for a modern software product quality assurance system.</span></em></p> 2025-05-21T00:00:00+03:00 Copyright (c) 2025 Юрій КИРИЧУК, Анжеліка СТАХОВА , Наталія НАЗАРЕНКО , Сергій ЗАЄЦЬ https://vottp.khmnu.edu.ua/index.php/vottp/article/view/494 DEVICE OF SPEECH ACOUSTIC SIGNAL PROTECTION IN THE CONDITIONS OF RADIO ELECTRONIC WARFARE 2025-04-21T00:10:53+03:00 Volodymyr KORCHYNSKYI vladkorchin@ukr.net Oleksandr RIABUKHA ryabukha@gmail.com Vadim STEPANOV stepanovvadym333@gmail.com Denys HOLEV d.v_holev@suitt.edu.ua Ihor LIMAR quantum.biology@outlook.com <p><em>The article considers the issue of speech information, which is transmitted over the communication channel in the conditions of modern threats, particularly signal interception, unauthorized access and analysis of acoustic signals. An additional device for radio stations is proposed, which provides an increased level of speech signal protection due to its digital conversion, encryption and modification of acoustic characteristics. The basic principle of operation of the device is to convert the acoustic signal into digital using the analog-to-digital converter, further encryption and insertion of synchronizing signals into the speech stream. It allows to ensure exact and correct information recovery in the receiver. After processing, the digital signal is converted back into analog and fed to the telephone capsule, which affects the radio station microphone, ensuring the transmission of a protected speech signal over the radio channel. In the conditions of electronic warfare, the proposed device provides high efficiency due to the complexity of the process of analyzing the speech signal in the event of its interception through the communication channel. 
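Synchronization markers of this kind are typically chosen for their sharp autocorrelation, and Barker sequences are the classic example. A short check of the length-13 Barker code's correlation property, which is what makes frame boundaries detectable even under interference:

```python
# Length-13 Barker code in bipolar (+1/-1) form.
BARKER_13 = [1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1]

def autocorrelation(code):
    """Aperiodic autocorrelation: R[k] = sum_i code[i] * code[i+k].
    For Barker codes the main peak equals the code length and every
    sidelobe has magnitude at most 1."""
    n = len(code)
    return [sum(code[i] * code[i + k] for i in range(n - k))
            for k in range(n)]
```

A receiver sliding this template over the digital stream sees a correlation spike of 13 only at the true frame position, which is why Barker codes suit cyclic synchronization in noisy channels.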
The presence of synchronization signals in the digital stream makes it possible to reduce the influence of interference and ensure correct recovery of the speech signal in the receiver. The analysis is given and the expediency of using Barker codes as cyclic synchronization signals is proven. Structural schemes of speech signal conversion devices are developed. The purpose of the work is to develop and analyse an additional device for radio stations that provides an increased level of speech signal protection by digital conversion, encryption, and the use of synchronization signals for correct reception and recovery of information.</em></p> 2025-05-21T00:00:00+03:00 Copyright (c) 2025 Володимир КОРЧИНСЬКИЙ, Олександр РЯБУХА, Вадим СТЕПАНОВ, Денис ГОЛЕВ, Ігор ЛІМАРЬ https://vottp.khmnu.edu.ua/index.php/vottp/article/view/514 METHOD OF ADAPTIVE DETECTING FAKE NEWS BASED ON A GENERALIZED VECTOR OF TEXTUAL FEATURES 2025-05-15T09:38:29+03:00 Andrii SHUPTA andrii.shupta@gmail.com <p class="06AnnotationVKNUES"><em>The rapid spread of "fake news" via social media and online platforms poses a significant threat to informed public discourse and trust in information. While existing detection methods analyze content (text, images) or social context (source, sharer sentiment), they often struggle to adapt to the evolving, sophisticated tactics of misinformation campaigns, losing efficacy as new deceptive forms emerge. This paper presents an innovative, adaptive Natural Language Processing framework designed to tackle this dynamic challenge. Our core strategy involves a feature vector built from generalized textual characteristics, capturing enduring linguistic patterns and structural irregularities indicative of fabricated content, rather than superficial, easily outdated markers. A key aspect is the system’s designed evolvability: it supports continuous expansion of this feature vector and retraining of the classifier with new datasets.
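The notion of a vector of generalized textual characteristics can be illustrated with a tiny feature extractor; the specific features below (sentence length, exclamation density, capitalization ratio) are common stylometric choices assumed for the sketch, not the paper's exact vector:

```python
import re

def text_features(text):
    """Build a small vector of generalized stylometric features.

    The three features are illustrative stand-ins for the paper's
    generalized textual characteristics, not its actual feature set."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    n_chars = max(len(text), 1)
    return {
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "exclaim_density": text.count("!") / n_chars,
        "caps_ratio": sum(w.isupper() for w in words) / max(len(words), 1),
    }
```

Because such features describe how a text is written rather than what it says, they age more slowly than keyword-based markers, which is the adaptability argument the abstract makes; extending the dictionary with a new feature is the "continuous expansion" step.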
This ensures sustained responsiveness and effectiveness against novel fake news iterations in a constantly changing information landscape. The system’s efficacy is validated through a dual evaluation: qualitative visual analytics offer insights into its decision-making, while quantitative statistical metrics (precision, recall, F1-score) confirm its robustness. Experimental results demonstrate a commendable detection accuracy of approximately 90%, underscoring the power of the generalized features and adaptive learning. Ultimately, this research contributes to the critical development of a more reliable, accurate, and dynamically responsive system for identifying and mitigating the spread of fake news. The development of such sophisticated tools holds profound implications for safeguarding the integrity of information, fostering media literacy, and addressing one of the most pressing informational challenges in contemporary society.</em></p> 2025-05-21T00:00:00+03:00 Copyright (c) 2025 Андрій ШУПТА https://vottp.khmnu.edu.ua/index.php/vottp/article/view/520 COST OPTIMISATION IN IT PROJECTS: MODELS AND APPROACHES TO COST MANAGEMENT 2025-05-19T14:53:37+03:00 OLGA KRAVCHUK kravchukoa2@gmail.com <p><em>The article considers modern approaches to cost optimisation in IT projects in order to increase the efficiency of cost management. The key stages of budget control, methods of cost planning and financial risk assessment are analysed. Particular attention is paid to the implementation of integrated approaches: a combination of accurate planning, regular monitoring, modern tools and flexible response to changes. A multi-level model of IT project cost management has been developed. The application of this model is demonstrated on the example of a specific IT project, which includes a detailed analysis of its effectiveness using specific metrics and criteria. 
The results of the study confirm that a multi-level cost management model can effectively structure the financial management process by dividing functions between strategic, tactical and operational levels. The study also identifies common mistakes that lead to budget overruns. The findings may be useful for IT project managers, financial analysts and digital consultants.</em></p> 2025-05-21T00:00:00+03:00 Copyright (c) 2025 ОЛЬГА КРАВЧУК https://vottp.khmnu.edu.ua/index.php/vottp/article/view/543 HIGH-EFFICIENCY MIMO SOLUTIONS FOR WIMAX AND 5G WIRELESS NETWORKS 2025-05-29T14:33:09+03:00 Juliy BOIKO boiko_julius@ukr.net Lesya KARPOVA rtlesya@gmail.com Denys NAZARCHUK boykojuliy6@gmail.com <p><em>This study presents a comprehensive analysis of the design and optimization of MIMO antenna systems, which are fundamental to modern wireless communication technologies such as WiMAX and 5G. MIMO technology enables the simultaneous transmission of multiple independent data streams by employing multiple antennas at both the transmitter and receiver. This significantly enhances network capacity, spectral efficiency, and overall communication reliability, making MIMO a key enabler for high-speed and high-capacity wireless networks. The advantages of MIMO systems are particularly evident in urban environments, where multipath propagation and interference present considerable challenges to signal integrity. By leveraging spatial diversity, MIMO mitigates these issues, ensuring robust signal reception even in complex propagation conditions. Moreover, MIMO contributes to increased energy efficiency, leading to more sustainable and cost-effective network operations. This is especially relevant for next-generation wireless networks, where power consumption and spectral optimization are crucial factors in maintaining high system performance. 
A critical aspect of MIMO system optimization is the refinement of key structural parameters, including element isolation, radiation efficiency, and antenna array geometry. The results obtained in this study demonstrate that the optimized MIMO system achieves an extremely low envelope correlation coefficient (ECC&lt;0.0002), ensuring minimal interference between antenna elements. Additionally, the system exhibits a high diversity gain (DG&gt;9.991 dB), which enhances link reliability and signal robustness. Furthermore, the optimized configuration minimizes capacity loss (CCL&lt;0.1 bit/s/Hz) while maintaining an efficient total active reflection coefficient (TARC&lt;–10 dB) and an optimal mean effective gain (MEG ranging from –6.2 dB to –7 dB). These performance indicators confirm the effectiveness of the proposed design in delivering high data throughput, stable connectivity, and improved system reliability, all of which are essential for WiMAX and 5G technologies operating in high-density user environments and under heavy traffic conditions.</em></p> <p><em>In addition to structural optimizations, adaptive beamforming techniques further enhance system efficiency by dynamically adjusting antenna radiation patterns based on real-time channel conditions. This approach not only improves spectral utilization but also maximizes signal strength and reduces interference, leading to better overall network performance. 
The findings of this study provide valuable insights for the development of next-generation wireless networks, offering improved connectivity, optimized spectral efficiency, and enhanced system resilience in dynamic communication environments.</em></p> 2025-05-21T00:00:00+03:00 Copyright (c) 2025 Юлій БОЙКО, Леся КАРПОВА, Денис НАЗАРЧУК https://vottp.khmnu.edu.ua/index.php/vottp/article/view/544 METHODS OF PROCESSING AUDIO SIGNALS 2025-05-29T14:54:08+03:00 Maksym KOT redkotyara@yahoo.com Mikhailo STEPANOV 2m.stepanov@gmail.com <p class="06AnnotationVKNUES"><em>The article examines the principal methods of acoustic signal analysis employed in contemporary research and practical applications. Special attention is devoted to three key approaches: spectral analysis, wavelet transformation, and machine learning methods, particularly neural networks. The authors provide a detailed description of spectral analysis principles, which are based on Fourier transformation and its modifications (Discrete Fourier Transform, Fast Fourier Transform). The article emphasizes that spectral analysis is particularly effective for studying stationary processes, enabling precise characterization of energy distribution across frequencies. However, for analyzing non-stationary signals (such as automobile noise, musical and speech signals), the short-time Fourier transform (STFT) is applied, which has certain limitations regarding simultaneous resolution in the time and frequency domains.</em></p> <p class="06AnnotationVKNUES"><em>Wavelet transformation is presented as an alternative mathematical tool that provides simultaneous representation of signals in both time and frequency domains. Unlike classical Fourier transformation, this method allows for the localization of spectral components in time, which is especially important for non-stationary acoustic signals. 
The principle of wavelet transformation involves decomposing signals into basis functions—wavelets—obtained through scaling and shifting of a mother wavelet. This approach enables the detection of signal features at different scales and at different moments in time, and also effectively reduces noise levels without significant loss of useful information.</em></p> <p class="06AnnotationVKNUES"><em>The article also explores contemporary machine learning methods and neural networks for acoustic signal analysis. It emphasizes that over the past two decades, the use of machine learning for audio signal processing has grown substantially, and today these methods dominate new approaches to sound signal processing. Particular attention is paid to deep neural networks, which often outperform traditional signal processing methods. The authors note that despite borrowing many deep learning methods from image processing, there are important differences between these fields that require specialized approaches to audio analysis. Audio signals form one-dimensional time series that fundamentally differ from two-dimensional images and must be studied sequentially in chronological order. These properties have given rise to audio-specific solutions in the field of signal processing. 
The article concludes that the integration of these diverse methods allows for more comprehensive analysis of complex acoustic phenomena in various applications.</em></p> 2025-05-21T00:00:00+03:00 Copyright (c) 2025 Максим КОТ, Михайло СТЕПАНОВ https://vottp.khmnu.edu.ua/index.php/vottp/article/view/545 OPTIMIZATION OF LOGGING IN INFORMATION SYSTEMS BASED ON A MATHEMATICAL MODEL OF BUFFER MANAGEMENT IN THE PRODUCER–CONSUMER PATTERN 2025-05-29T15:08:22+03:00 Igor PARKHOMEY i_parhomey@ukr.net Juliy BOIKO boiko_julius@ukr.net Viacheslav LEMESHKO slava.lemeshko@gmail.com Oleksander EROMENKO yeromenko_s@ukr.net <p><em>The article presents a comprehensive approach to optimizing logging processes in information systems by utilizing a mathematical model for buffer management within the widely used Producer–Consumer interaction pattern. The study addresses the challenges associated with high-frequency logging, such as data loss, increased latency, and resource overload, which are common in distributed and high-load environments. To mitigate these issues, a dynamic buffer management model is proposed that adaptively regulates the interaction between log-generating components (producers) and log-handling or storage subsystems (consumers). The model takes into account critical system parameters, including buffer size, event generation rate, and consumer processing speed. It enables dynamic adjustment of logging strategies depending on the current load and buffer state.</em></p> <p><em>The research includes the formalization of the proposed model using discrete-time mathematics, with particular attention to queueing theory and finite-buffer constraints. Simulation experiments conducted in the study demonstrate that the model significantly reduces log loss, optimizes system responsiveness, and ensures stable operation of logging mechanisms even under extreme workloads. 
The findings suggest that implementing such a model contributes to the design of resilient auditing and monitoring subsystems, especially in cybersecurity-sensitive and mission-critical infrastructures. The approach can be integrated into various architectural layers of modern information systems, improving reliability, maintainability, and traceability of operations in real time. Recommendations for practical implementation and possible extensions of the model for adaptive load prediction are also provided.</em></p> 2025-05-21T00:00:00+03:00 Copyright (c) 2025 Ігор ПАРХОМЕЙ, Юлій БОЙКО, В'ячеслав ЛЕМЕШКО, Олександр ЄРЬОМЕНКО https://vottp.khmnu.edu.ua/index.php/vottp/article/view/546 CONCEPTUAL MODEL OF ORGANIZATIONAL AND TECHNICAL SYSTEM FOR CYBER SECURITY OF IoT PLATFORM 2025-05-29T15:22:49+03:00 Nataliia HALAHAN n.halahan@duikt.edu.ua Iryna BORYSENKO borysenko.iryna@knu.ua Nataliia KHABIUK n.khabiuk@duikt.edu.ua Yaroslav STARODUBTSEV y.starodubtsev@stud.duikt.edu.ua Nikita KOVALCHUK bonita1953112233@gmail.com <p class="06AnnotationVKNUES"><em>The article presents a comprehensive exploration of how the development of Internet of Speech technologies, a subset of the Internet of Things (IoT), contributes to improved security measures through the application of differential privacy methods. The central argument of the study is that enhancing security in intelligent systems requires innovative approaches to data protection, especially in environments characterized by constant data exchange and user interaction. The authors propose the integration of differential privacy to mitigate risks related to unauthorized access to critical digital resources such as storage units, software platforms, databases, and archival systems. This technique ensures that sensitive information remains protected even when shared or processed, by introducing controlled statistical noise that obscures personal identifiers. 
A key contribution of the article is the justification of a nominal-structural method as a foundational element of an individualized security model. This method is designed to manage and monitor the interplay between internal and external components of an intelligent system, thereby maintaining system coherence and strengthening resistance to cyber threats. The study outlines how this structural framework facilitates efficient connection management and adaptability to various system states and operational conditions. Furthermore, the paper provides a thorough analysis of both organizational and technical security measures within the IoT ecosystem. These include life cycle management of IoT devices, the development of robust security architectures, the importance of continuous personnel training, implementation of encryption standards, real-time system monitoring, and configuration management.</em></p> <p class="06AnnotationVKNUES"><em>Additionally, the article explores the mathematical underpinnings of differential privacy, offering formal models that support its practical implementation. 
The authors emphasize the dual impact of this method: while it intentionally adds noise to data for anonymization purposes, it paradoxically strengthens overall system security by preventing precise data extraction, thereby enhancing the integrity and resilience of robotic and intelligent systems.</em></p> 2025-05-21T00:00:00+03:00 Copyright (c) 2025 Наталія ГАЛАГАН, Ірина БОРИСЕНКО, Наталія ХАБʼЮК, Ярослав СТАРОДУБЦЕВ, Нікіта КОВАЛЬЧУК https://vottp.khmnu.edu.ua/index.php/vottp/article/view/547 A METHOD FOR ESTIMATING LOCAL EXTREMA OF DIGITAL SIGNALS BASED ON INTERPOLATION ANALOGS OF THE FEJÉR OPERATOR 2025-05-29T15:37:15+03:00 Oleh KOPIIKA okopiyka@gmail.com Оleg BARABASH bar64@ukr.net Oleksandr KOVAL avkovalgm@gmail.com Andriy MAKARCHUK makarchukandriy1999@gmail.com <p class="06AnnotationVKNUES"><em>In a number of problems, it becomes necessary to find local extrema of a function that describes a certain process or phenomenon over a specific interval of its argument. This task becomes particularly relevant in the context of signal processing. However, as is often the case in signal processing, the analyzed signal may be presented either as a sequence of discrete samples or as a function that is too complex for analytical determination of its local extrema, which typically complicates solving the problem.</em></p> <p class="06AnnotationVKNUES"><em>An overview of existing optimization and signal processing methods reveals that one of the most common approaches to solving this problem is to tabulate the function that represents the signal and analyze the resulting sequence of samples. If the signal is already presented in digital form, the process is usually limited to the second step. However, this method is unreliable due to its strong dependence on the sampling density and the number of samples. For this reason, signal approximation using Lagrange interpolation polynomials is sometimes suggested. 
Nevertheless, this approach also has limitations, as interpolation polynomials such as those of Lagrange type possess certain mathematical properties that may lead to the appearance of so-called fictitious extrema, potentially resulting in inaccurate conclusions.</em></p> <p class="06AnnotationVKNUES"><em>As an alternative to classical interpolation polynomials in such cases, approaches based on Fourier analysis are sometimes proposed. One of the most well-studied tools in the context of signal approximation is the class of interpolation analogs of operators generated by linear summation methods of Fourier series. As shown by previous research, some of these interpolation polynomials allow for high-accuracy signal approximation. However, their use in locating the local extrema of functions describing signals has received relatively little attention. Therefore, the aim of this work is to investigate this aspect using one of the oldest and most well-known interpolation analogs of operators generated by summation of Fourier series — namely, the interpolation analogs of the Fejér operator.</em></p> 2025-05-21T00:00:00+03:00 Copyright (c) 2025 Олег КОПІЙКА, Олег БАРАБАШ, Олександр КОВАЛЬ, Андрій МАКАРЧУК https://vottp.khmnu.edu.ua/index.php/vottp/article/view/511 DETERMINATIVE CHAOS GENERATOR BASED ON BIPOLAR FIELD TRANSISTOR STRUCTURE WITH NEGATIVE DIFFERENTIAL RESISTANCE 2025-05-13T10:17:16+03:00 Oleksandr OSADCHUK osadchuk.av69@gmail.com Iaroslav OSADCHUK osadchuk.j93@gmail.com Valentyn SKOSHCHUK skoschuk999@gmail.com Vitaliy PETRENKO Heraldes@khmnu.edu.ua <p class="06AnnotationVKNUES"><em>This paper proposes and comprehensively investigates an innovative circuit solution for the implementation of a deterministic chaos generator. 
The proposed system is based on a bipolar field-effect transistor (BFET) structure featuring negative differential resistance, which enables the generation of chaotic electrical oscillations with extremely short settling times, ranging from 17.25 to 21.28 nanoseconds. Such a short transition to stationary chaotic behavior makes the circuit particularly suitable for high-speed applications in electronics and communication systems.</em></p> <p class="06AnnotationVKNUES"><em>To support theoretical analysis and practical design, a detailed mathematical model of the chaos generator has been developed using the state variable method. This model takes the form of a system of first-order differential equations and enables precise determination of the output signal frequency as a function of the applied control voltage. Furthermore, the model allows for tracking and analyzing the behavior of the main oscillator components at any location in the circuit and at any moment in time, offering a valuable tool for both theoretical insight and real-time control.</em></p> <p class="06AnnotationVKNUES"><em>The MATLAB software package was employed to conduct an extensive computer-based study of the circuit’s performance. These simulations examined key parameters and characteristics of the chaotic oscillations, including waveform behavior, spectral content, and system stability under various conditions. The results of the simulation confirmed the effectiveness and robustness of the proposed design in achieving deterministic chaos with well-defined controllability.</em></p> <p class="06AnnotationVKNUES"><em>Compared to existing analog designs, the proposed deterministic chaos generator demonstrates enhanced load-driving capability and significantly higher operational speed, establishing it as a superior alternative for advanced applications. 
Potential use cases include secure communications, cryptographic systems, random number generation, and modeling of complex nonlinear phenomena in electronic systems.</em></p> 2025-05-21T00:00:00+03:00 Copyright (c) 2025 Олександр ОСАДЧУК, Ярослав ОСАДЧУК, Валентин СКОЩУК, Віталій ПЕТРЕНКО https://vottp.khmnu.edu.ua/index.php/vottp/article/view/510 ANALYSIS OF METHODS FOR MEASURING UNEVENNESS OF FEED MOVEMENT OF METAL CUTTING MACHINES 2025-05-09T12:12:38+03:00 Maryna HOLOFIEIEVA mgolofeyeva@gmail.com Yuriy PALENNYY yuripalenny@gmail.com Maksym SALIUTIN salutin@stud.op.edu.ua Valentyn TIKHENKO tichenko.v.m@op.edu.ua Oleksii LAVRUK iso.alex.l@stud.op.edu.ua <p class="06AnnotationVKNUES"><em>At very low feed rates in metal-cutting machines, the feed motion often becomes irregular and unstable. This irregularity is typically characterized by a sequence of periodic stops and sudden jerks, a phenomenon known in tribology and mechanical systems as the “stick-slip” effect. This effect arises due to the alternating sticking and sliding behavior between the contacting surfaces in motion, despite the fact that the machine's drive system is designed to maintain a consistent feed rate. As a consequence, the real feed velocity fluctuates around the intended value, leading to undesirable oscillations that significantly compromise both the dimensional accuracy and the surface finish of machined parts. The stick-slip effect becomes particularly problematic in high-precision machining operations, especially those involving microfeeds, where even minute deviations in the motion path can result in substantial quality issues or functional defects in the manufactured components.</em></p> <p class="06AnnotationVKNUES"><em>Understanding and mitigating the stick-slip effect requires a detailed examination of both physical and mechanical factors contributing to motion instability. 
These include friction characteristics, system stiffness, drive dynamics, and the interaction between mechanical components at the microscopic level. Moreover, evaluating the effect quantitatively is essential for engineers and researchers working on improving feed mechanisms and control systems in precision machines.</em></p> <p class="06AnnotationVKNUES"><em>This paper aims to provide a comprehensive analysis of the origins and behavior of irregular feed motion in low-speed cutting operations. It outlines key indicators and signatures of oscillatory feed dynamics and compares two primary methods for measuring these irregularities: indirect (based on control signals and position feedback) and direct (using displacement sensors or high-resolution encoders). The discussion includes an evaluation of the accuracy and limitations of each method, offering insights into their relative error characteristics and applicability in real-world machining environments.</em></p> 2025-05-21T00:00:00+03:00 Copyright (c) 2025 Марина ГОЛОФЄЄВА, Юрій ПАЛЕННИЙ, Максим САЛЮТІН, Валентин ТІХЕНКО, Олексій ЛАВРУК https://vottp.khmnu.edu.ua/index.php/vottp/article/view/548 IMPROVING THE METHOD OF FUNCTIONING OF THE CYBER-PHYSICAL SYSTEM FOR MONITORING DEFECTS IN PHOTOVOLTAIC MODULES OF A SOLAR POWER STATION 2025-05-29T16:10:47+03:00 Mykola LYSYI lisiy3152@ukr.net Serhii PARTYKA shelby1969z5@gmail.com Igor KUSHNER ztigorkushner@gmail.com Andrii LYSYI Andrii.lysyi1@gmail.com <p class="06AnnotationVKNUES"><em>The article presents an improved model of the functioning of a cyber-physical system for monitoring defects in photovoltaic modules of a solar power plant. The key feature of the developed system is its integrated architecture, which combines a surveillance camera model, image processing functions based on a convolutional neural network (CNN), and object detection and object tracking algorithms. 
To ensure the geometric accuracy of image analysis, the surveillance camera is modeled using a pinhole model that allows determining the geometric parameters of images in computer vision tasks, calibrating the camera to determine its internal and external parameters, and correcting lens distortion. Additionally, the developed model provides for automated determination of whether the detected objects belong to predefined classes of defects. The classification is based on the output of a convolutional neural network using the softmax function, which predicts the probability of a defect in each cell of the image grid, providing a quantitative assessment of the confidence in the detected class. An important aspect of the improvement is the integration of Object Detection and Object Tracking technologies, which effectively eliminates the re-detection of already detected defects in the video sequence. This leads to a significant reduction in the number of duplicate and false alarms of the system, increasing its computational efficiency and the reliability of monitoring results. To further improve the tracking accuracy and reliable identification of previously detected defects over time, the model comprehensively uses Deep Simple Online and Realtime Tracking (Deep SORT) algorithms. This approach is based on a combination of two mathematical methods: the Kalman filter to eliminate noise and random outliers in the weighting coefficients of the tracked objects, which ensures more stable and reliable tracking and prediction of the position of objects in subsequent frames, and the Mahalanobis distance to quantify the degree of similarity between the weighting coefficients of already known and newly detected objects, which contributes to more accurate defect identification. In addition, the system integrates the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) clustering algorithm, which classifies detected defect polygons by their spatial location. 
This allows detecting groups of closely spaced defects, which can be useful for diagnosing system problems or identifying patterns in the distribution of defects on the surface of a solar power plant. The results of the integrated approach demonstrate a significant improvement in the accuracy of defect detection due to the synergistic effect of the combination of CNN for pattern recognition, Softmax for probabilistic classification, DBSCAN for spatial distribution analysis, and Deep SORT for stable tracking. The detection speed is also increased by integrating Object Detection and Object Tracking, which minimizes the need to re-analyze the same image areas. The system's reliability is enhanced by the use of the Kalman filter to reduce the impact of random noise, the Mahalanobis distance for more objective identification, and the DBSCAN algorithm for detecting spatial anomalies.</em></p> 2025-05-21T00:00:00+03:00 Copyright (c) 2025 Микола ЛИСИЙ, Сергій ПАРТИКА, Ігор КУШНЕР, Андрій ЛИСИЙ https://vottp.khmnu.edu.ua/index.php/vottp/article/view/513 A MODEL FOR DYNAMIC OPTIMISATION OF COLOUR REPRODUCTION PARAMETERS UNDER CONDITIONS OF VARIABILITY OF PRINTED IMAGES IN SOLVENT PRINTING 2025-05-14T13:09:47+03:00 Roman TYNDYK Roman.S.Tyndyk@lpnu.ua <p class="06AnnotationVKNUES"><em>The article presents the development and practical implementation of an adaptive model for the automated selection of colour separation and halftone smoothing algorithms in the context of large-format solvent-based printing. This model is designed to ensure stable and high-quality colour reproduction under variable printing conditions, which are typical for the solvent printing industry. The proposed system analyses the spectral and morphological features of the input image, allowing the dynamic adjustment of colour processing parameters based on the specific content and structure of the image, including distinctions between graphical elements, text, and photographic content. 
This content-sensitive adaptation enables the model to maintain colour consistency and fidelity while optimizing ink consumption and minimizing material waste.</em></p> <p class="06AnnotationVKNUES"><em>A key advantage of the model lies in its integration with Raster Image Processor (RIP) systems, which facilitates automation of the image preparation process and reduces the dependency on operator expertise. This, in turn, leads to improved productivity, reduced production errors, and better alignment with modern lean manufacturing principles. By leveraging data-driven optimization methods and heuristic rules, the model fine-tunes the rendering pipeline to meet both aesthetic and technical requirements of the final print output.</em></p> <p class="06AnnotationVKNUES"><em>The article also explores the results of practical testing of the model in real-world production environments. These tests confirm improvements in output quality, ink usage efficiency, and process repeatability. Furthermore, the author identifies potential directions for future research, such as integrating the model with machine learning systems to further enhance decision-making capabilities, and expanding compatibility with a wider range of printing technologies. Overall, the study demonstrates how adaptive algorithmic approaches can significantly enhance the performance and reliability of colour management in professional printing workflows.</em></p> 2025-05-21T00:00:00+03:00 Copyright (c) 2025 Роман ТИНДИК https://vottp.khmnu.edu.ua/index.php/vottp/article/view/512 ALGORITHMS FOR IMPROVING IMAGE QUALITY USING DEEP NEURAL NETWORKS: A COMPARATIVE ANALYSIS OF MODERN METHODS 2025-05-13T21:54:01+03:00 Arsen LIPOVYI arsen.y.lypovyi@lpnu.ua <p class="06AnnotationVKNUES"><em>This paper presents a comprehensive analysis of contemporary deep learning algorithms aimed at enhancing image quality. 
The study focuses on state-of-the-art methods such as Super-Resolution Convolutional Neural Network (SRCNN), Generative Adversarial Networks (GAN), Denoising Convolutional Neural Networks (DnCNN), and Enhanced Super-Resolution GAN (ESRGAN). These algorithms are evaluated for their effectiveness in improving image clarity, contrast, and resolution under various conditions and types of distortions.</em></p> <p class="06AnnotationVKNUES"><em>The research delves into the architectural nuances of each algorithm, highlighting their unique approaches to image enhancement. For instance, SRCNN utilizes a straightforward convolutional framework for super-resolution tasks, while GAN-based methods, including ESRGAN, employ adversarial training to generate high-fidelity images with realistic textures. DnCNN focuses on removing noise from images using deep convolutional layers, demonstrating significant improvements in denoising performance.</em></p> <p class="06AnnotationVKNUES"><em>Evaluation metrics such as Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM) are employed to quantitatively assess the performance of these algorithms. The comparative analysis reveals that while traditional methods like SRCNN offer substantial improvements over baseline techniques, advanced models like ESRGAN achieve superior results in preserving fine details and textures, albeit sometimes at the cost of introducing artifacts.</em></p> <p class="06AnnotationVKNUES"><em>The study also explores the practical applications of these algorithms in various domains, including medical imaging, surveillance, and autonomous vehicles. In medical imaging, enhanced image quality can lead to more accurate diagnoses. In surveillance, clearer images improve object recognition and tracking. 
For autonomous vehicles, high-resolution images contribute to better environment perception and decision-making.</em></p> <p class="06AnnotationVKNUES"><em>Furthermore, the paper discusses the computational complexities associated with each algorithm, considering factors such as processing time and resource requirements. This analysis is crucial for real-world applications where computational efficiency is paramount.</em></p> <p class="06AnnotationVKNUES"><em>In conclusion, the paper underscores the significant advancements in image quality enhancement achieved through deep learning techniques. While challenges remain, particularly concerning computational demands and potential artifacts, the progress in this field holds promise for numerous practical applications. Future research directions include optimizing these algorithms for real-time processing and further improving their robustness across diverse image types and conditions.</em></p> 2025-05-21T00:00:00+03:00 Copyright (c) 2025 Арсен ЛИПОВИЙ https://vottp.khmnu.edu.ua/index.php/vottp/article/view/552 COMPARATIVE ANALYSIS OF MODERN INTERACTION MEANS BETWEEN COMPONENTS IN DISTRIBUTED SOFTWARE SYSTEMS 2025-06-03T17:30:36+03:00 Denys OSTAPETS odaua@i.ua Mykola NAHORIANSKYI n.nagoryanskiy@gmail.com <p class="06AnnotationVKNUES"><em>This paper presents an in-depth comparative analysis of contemporary communication mechanisms used in distributed software systems, which play a pivotal role in the development of efficient, reliable, and scalable information technologies. A systematic classification of interaction methods is proposed based on the model of communication—synchronous and asynchronous. 
This classification serves as a fundamental criterion for selecting the appropriate communication mechanism and is closely tied to the architectural paradigm of the distributed system, significantly influencing its performance, reliability, and adaptability.</em></p> <p class="06AnnotationVKNUES"><em>The study provides a comprehensive overview and technical evaluation of widely adopted technologies for component interaction. In the realm of synchronous communication, RESTful APIs and gRPC are analyzed for their usability, protocol characteristics, and compatibility. For asynchronous messaging, the paper investigates the features and implementations of Apache Kafka and RabbitMQ, emphasizing their messaging models, persistence capabilities, and event-driven design.</em></p> <p class="06AnnotationVKNUES"><em>Each technology is assessed in terms of architectural implications, performance metrics, scalability potential, integration complexity, message delivery guarantees, and support for complex routing scenarios. The strengths and limitations of each solution are discussed, supported by real-world application cases and usage patterns.</em></p> <p class="06AnnotationVKNUES"><em>Based on the comparative insights, the paper provides practical recommendations for selecting communication mechanisms that align with specific architectural and operational requirements. 
The results aim to support system architects and developers in designing robust and maintainable distributed systems.</em></p> 2025-05-21T00:00:00+03:00 Copyright (c) 2025 Денис ОСТАПЕЦЬ, Микола НАГОРЯНСЬКИЙ https://vottp.khmnu.edu.ua/index.php/vottp/article/view/553 EXPERIMENTAL RESEARCH OF DIFFERENTIAL UNIFORMITY INFLUENCE ON RESISTANCE OF S-BOXES TO DIFFERENT TYPES OF CRYPTANALYSIS 2025-06-03T17:44:19+03:00 Oleh YAREMA yarema_oleg@i.ua Nataliya ZAGORODNA zagorodna.n@gmail.com <p><em>The impact of differential uniformity on the resistance of 4-bit S-boxes to different types of cryptanalysis and the complexity of their generation for use in low-resource devices is investigated in this paper. A comparative analysis of the resistance of the generated S-boxes to differential cryptanalysis, linear cryptanalysis and brute-force attacks is carried out using the proposed experimental cipher as an example. The relevance of this research topic is due to the rapid spread of low-resource devices, in particular Internet of Things (IoT) devices, and the need to ensure their information security. The use of small and efficient cryptographic ciphers is critical for such devices.</em></p> <p><em>As part of this study, a set of 4-bit S-boxes with different differential uniformity values was generated. To evaluate the cryptographic strength of these S-boxes, a simplified block cipher scheme based on an SPN (substitution-permutation network) was developed. A comparative analysis of the resistance of the generated S-boxes to differential cryptanalysis, linear cryptanalysis and brute-force attacks was carried out by applying appropriate cryptographic methods to break the experimental cipher.</em></p> <p><em>The analysis of the obtained experimental data is presented in the form of summary tables. The results of the study clearly demonstrate the dependence between the differential uniformity values of S-boxes and their ability to resist differential cryptanalysis. 
At the same time, it is shown that differential uniformity does not affect resistance to brute-force attacks. Comparison of the results of assessing the resistance to differential and linear cryptanalysis clearly illustrates the importance of an integrated approach to the design of S-boxes that takes into account all cryptographic properties, since optimizing only one cryptographic property does not guarantee high resistance to all types of cryptographic attacks.</em></p> <p><em>The obtained experimental results are of great practical importance for the development of secure and efficient cryptographic ciphers that can be used under the limited resources of IoT devices, as well as in systems that provide for the dynamic generation of lightweight S-boxes to increase cryptographic security and prevent some cryptographic attacks.</em></p> 2025-05-21T00:00:00+03:00 Copyright (c) 2025 Олег ЯРЕМА, Наталія ЗАГОРОДНА https://vottp.khmnu.edu.ua/index.php/vottp/article/view/554 MODELLING OF HYSTERESIS BEHAVIOUR OF NICKEL-TITANIUM SHAPE MEMORY ALLOY USING ARTIFICIAL NEURAL NETWORK 2025-06-03T17:57:44+03:00 Dmytro TYMOSHCHUK dmytro.tymoshchuk@gmail.com Oleg YASNIY oleh.yasniy@gmail.com <p class="06AnnotationVKNUES"><em>Shape memory alloys (SMAs) are a class of materials that can return to their previous shape when exposed to temperature or mechanical stress. The main functional properties of these alloys, the shape memory effect (SME) and superelasticity (SE), make them indispensable in various industries. Superelasticity is the ability of the material to return to its original shape after loading and unloading due to transformations between austenite and martensite. These phase transitions are accompanied by hysteresis, which can be observed in the stress-strain diagram. In this study, the hysteresis behavior of an SMA, specifically nickel-titanium alloy (NiTi or Nitinol), was modeled using artificial neural networks.
The use of neural networks in the study made it possible to obtain accurate material strain predictions and reduce the number of actual experiments. The results showed the high accuracy of the prediction model, which indicates the prospects of using artificial neural networks in the study of SMA characteristics.</em></p> 2025-05-21T00:00:00+03:00 Copyright (c) 2025 Дмитро ТИМОЩУК, Олег ЯСНІЙ https://vottp.khmnu.edu.ua/index.php/vottp/article/view/555 MODEL OF INFORMATION ATTACK ON ENTERPRISES USING FAKE DIGITAL IDENTITIES 2025-06-04T09:48:23+03:00 Mykhailo MARCHUK smoke22catches@gmail.com Vitalii LUKICHOV lukichov.vitalyi@vntu.edu.ua <p class="06AnnotationVKNUES"><em>The article discusses a new type of cyber threats related to the use of deepfake technologies to create fake digital identities in the context of targeted attacks on financial, corporate and governmental entities. The purpose of the study is to build a formalized model of an information attack, including the stages of collecting media data, setting up a fake digital infrastructure, establishing trust with target employees, and implementing malicious requests.</em></p> <p class="06AnnotationVKNUES"><em>The research methodology is based on the analysis of incidents recorded in open sources using The MIT AI Risk Repository taxonomy. As a result, it was found that the main targets of impersonation are top managers, and the attacks themselves are mostly aimed at employees with access to financial resources. The article also contains recommendations for proactive digital identity protection, including the use of watermarks, multi-level verification, and mechanisms such as D-CAPTCHA. 
The presented results can be used to improve cybersecurity strategies in the face of the growing threat from generative artificial intelligence technologies.</em></p> 2025-05-21T00:00:00+03:00 Copyright (c) 2025 Михайло МАРЧУК, Віталій ЛУКІЧОВ https://vottp.khmnu.edu.ua/index.php/vottp/article/view/556 METHOD OF COMBINING CONTEXTUAL EMBEDDINGS WITH A VECTOR REPRESENTATION OF THE MEDICAL DOMAIN 2025-06-04T10:04:04+03:00 Oleksandr CHABAN chabanolek@khmnu.edu.ua <p class="06AnnotationVKNUES"><em>Navigating the intricate logical connections within clinical narratives—a medical natural language inference task—is paramount for advancing applications like AI-assisted clinical decision-making and the automated interpretation of patient records. However, mastering this domain is particularly arduous due to the specialized lexicon, complex conceptual relationships, and subtle semantic variations inherent in medical texts. This research introduces an innovative methodology to elevate medical natural language inference performance by effectively combining structured, field-specific knowledge with insights gleaned from textual sentiment. Our approach capitalizes on MultE, a cutting-edge algorithm for embedding knowledge graphs, to distill profound semantic relationships from the Unified Medical Language System (UMLS). These distilled knowledge representations are then amalgamated with contextual word embeddings generated by BioELMo. To further enrich contextual understanding, sentiment data pertinent to the medical field, extracted via MetaMap, is also integrated. The system architecture processes this composite feature set—BioELMo embeddings augmented by domain knowledge and sentiment vectors—through a bidirectional Long Short-Term Memory (BiLSTM) network, which is subsequently enhanced by an attention mechanism that dynamically assigns importance to different input segments.
Validation on the MedNLI benchmark dataset, featuring 14,049 expert-labeled premise-hypothesis pairs, revealed exceptional efficacy. The proposed system achieved 81.14% accuracy, 79.62% recall, an F1-score of 79.85%, and an AUC-ROC of 85.06%, surpassing established baseline techniques. These accomplishments underscore that the deliberate incorporation of specialized knowledge and sentiment cues can dramatically boost natural language inference capabilities in the medical arena, thereby providing a sturdy platform for engineering more dependable and intelligent healthcare solutions.</em></p> 2025-05-21T00:00:00+03:00 Copyright (c) 2025 Олександр ЧАБАН https://vottp.khmnu.edu.ua/index.php/vottp/article/view/557 APPLICATION OF PHASE-LOGIC ELEMENTS IN RADIOENGINEERING COMMUNICATION SYSTEMS 2025-06-04T10:23:06+03:00 Oleh KAPLYCHNYI olegkapl@gmail.com <p class="06AnnotationVKNUES"><em>This article is devoted to the study of the effectiveness of phase-logic methods for signal processing in telecommunication systems, in comparison with traditional approaches. The key advantages of phase-logic elements are analyzed, in particular their ability to reduce noise and distortions, enhance resistance to external interferences, and lower power consumption. Unlike classical logic structures, phase-logic elements use the phase of a signal as the primary logical parameter, which enables reliable data transmission even under significant noise load. Under network congestion, these approaches demonstrate stable signal quality indicators.</em></p> <p class="06AnnotationVKNUES"><em>The study is based on the results of numerical modeling and analysis of experimental data, reflecting the behavior of systems with phase-logic structures under various scenarios—from high noise levels to limited bandwidth. 
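To illustrate why using phase as the logical parameter tolerates noise, the sketch below encodes bits as carrier phases and decodes each symbol by its nearest reference phase. This is a BPSK-style toy model with illustrative noise levels, not the phase-logic elements studied in the article:

```python
import numpy as np

rng = np.random.default_rng(0)

def phase_encode(bits):
    """Map bits to carrier phases: 0 -> 0 rad, 1 -> pi rad."""
    return np.where(np.asarray(bits) == 0, 0.0, np.pi)

def phase_decode(phases):
    """Decide each symbol by which reference phase it is closer to:
    cos(phase) > 0 near 0 rad, cos(phase) < 0 near pi rad."""
    return (np.cos(phases) < 0).astype(int)

bits = rng.integers(0, 2, 1000)
noisy = phase_encode(bits) + rng.normal(0.0, 0.5, bits.size)  # phase noise
recovered = phase_decode(noisy)
error_rate = np.mean(recovered != bits)  # a bit flips only if noise exceeds pi/2
```

A symbol is misread only when the phase perturbation exceeds a quarter turn, which is why phase-based decisions remain stable under noise that would corrupt amplitude-based ones.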
Particular attention is paid to the comparative evaluation of such characteristics as data loss, energy consumption, and resistance to interference in phase-logic and traditional signal processing schemes. A methodology is also proposed for assessing efficiency considering environmental factors, traffic intensity, and the types of communication protocols used.</em></p> <p class="06AnnotationVKNUES"><em>The results of the study confirm the feasibility of integrating phase-logic elements into modern telecommunication systems, in particular for use in 5G, 6G networks, and the Internet of Things (IoT). Potential directions for the practical implementation of the proposed methods in real communication infrastructures are presented, along with an outline of future scientific developments in the field of microelectronics and phase-logic information processing.</em></p> 2025-05-21T00:00:00+03:00 Copyright (c) 2025 Олег КАПЛИЧНИЙ https://vottp.khmnu.edu.ua/index.php/vottp/article/view/559 ANALYSIS OF MODERN METHODS FOR CONTROLLING SURFACE WATER POLLUTION 2025-06-13T14:20:02+03:00 Valeriy ZDORENKO alzd123@meta.ua Kyrylo SHOLUDKO kirillsholydko@gmail.com <p class="06AnnotationVKNUES"><em>Surface water pollution is one of the key environmental challenges of our time, with both local and global consequences. Microplastics, chemical reagents, and other anthropogenic contaminants not only deteriorate water quality but also disrupt ecosystem functioning, posing a threat to biodiversity and human health. The same applies to films of oily substances. Therefore, ensuring reliable, timely, and accurate monitoring of surface water conditions is of utmost importance.</em></p> <p class="06AnnotationVKNUES"><em>Modern methods for controlling surface water pollution have a number of limitations, particularly in terms of sensitivity, spatial and temporal resolution, process automation, and accessibility for widespread implementation. 
Existing laboratory techniques often require significant time for sample analysis, while remote methods do not always provide the required accuracy under complex natural conditions.</em></p> <p class="06AnnotationVKNUES"><em>Thus, a pressing scientific and practical task arises: to analyze current methods for controlling surface water pollution in order to identify their advantages, limitations, and potential for improvement. This will make it possible to define further research directions in the field of information and measurement technologies for environmental monitoring. Solving this problem will contribute to enhancing the effectiveness of water resource management, reducing ecological risks, and supporting sustainable development.</em></p> <p class="06AnnotationVKNUES"><em>This work presents a review of modern methods for controlling pollution on the surface of water bodies, which is a pressing issue in the field of environmental monitoring. The study analyzes patented approaches based on acoustic, radar and optical principles. The characteristics and limitations of each method are examined, including accuracy, response speed, dependence on environmental conditions, and potential for automation. Based on a comparative analysis, the most effective and promising approach is identified as the method using ultrasonic waves, which enables non-contact measurement with high sensitivity. The article outlines future research directions, including the adaptation of the method to real-world conditions and the development of a mobile measurement system. 
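The ultrasonic approach singled out in this review rests on a simple time-of-flight relation. The sketch below is a back-of-the-envelope illustration with assumed values; the speed of sound and timings are not taken from the reviewed patents:

```python
def echo_distance(time_of_flight_s, speed_of_sound_m_s=343.0):
    """Non-contact range from an ultrasonic echo: the pulse travels to the
    surface and back, so the one-way distance is half the round trip."""
    return speed_of_sound_m_s * time_of_flight_s / 2.0

# Distance to the water surface for a 4 ms round trip (illustrative)
baseline = echo_distance(0.004)  # 0.686 m
```

A film of pollutant on the surface changes the reflecting boundary and the acoustic impedance, so deviations in echo timing and amplitude from such a baseline are what a monitoring system would flag.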
The results of this work can serve as a foundation for the creation of innovative information and measurement systems for controlling pollution on water surfaces.</em></p> 2025-05-21T00:00:00+03:00 Copyright (c) 2025 Валерій ЗДОРЕНКО, Кирило ШОЛУДЬКО https://vottp.khmnu.edu.ua/index.php/vottp/article/view/560 IMPROVING CLOUD SYSTEM PERFORMANCE THROUGH ADAPTIVE RESOURCE OPTIMIZATION BASED ON GENETIC ALGORITHMS 2025-06-13T14:41:40+03:00 Oleh SIHUNOV Heraldes@khmnu.edu.ua <p class="06AnnotationVKNUES"><em>The article investigates an approach to improving the performance of cloud systems through adaptive resource optimization based on genetic algorithms (GA). Particular attention is paid to evaluating system efficiency under high-load conditions in a hybrid AWS cloud environment that simulates real-world usage scenarios. The study was conducted on an architecture comprising three t3.small EC2 instances, which acted as request processing servers, and one t3.medium EC2 instance that served as a router. The router hosted genetic algorithms (GA) and a neural network (NN) that predicted peak loads and helped adaptively distribute requests.</em></p> <p class="06AnnotationVKNUES"><em>The research methodology is based on load testing using the Gatling tool, which enables user behavior simulation and system performance analysis under various load conditions. Key performance parameters such as total execution time, resource usage cost, and actual CPU and memory utilization were analyzed. A series of experiments was conducted with various configurations, including the use of the Classic Genetic Algorithm (Classic GA), the Multi-Objective Genetic Algorithm (Multi-Objective GA), and the Hybrid GA + RL algorithm with a neural network trained for 15 minutes, 30 minutes, 1 hour, and 12 hours.</em></p> <p class="06AnnotationVKNUES"><em>The results demonstrated that using genetic algorithms significantly improves system performance compared to traditional load balancing approaches. 
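The kind of optimization the router performs can be sketched with a classic genetic algorithm that assigns requests to servers so as to minimize the busiest server's completion time. This is a simplified stand-in for the paper's Classic GA, with illustrative request costs and none of the AWS-specific details:

```python
import random

def makespan(assignment, load, n_servers):
    """Completion time of the busiest server for a given assignment."""
    totals = [0.0] * n_servers
    for task, server in enumerate(assignment):
        totals[server] += load[task]
    return max(totals)

def ga_balance(load, n_servers, pop_size=40, generations=200, seed=1):
    rng = random.Random(seed)
    n = len(load)
    # Each individual maps every request to a server index
    pop = [[rng.randrange(n_servers) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: makespan(ind, load, n_servers))
        elite = pop[: pop_size // 4]            # selection: keep best quarter
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n)           # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.3:              # mutation: reassign one request
                child[rng.randrange(n)] = rng.randrange(n_servers)
            children.append(child)
        pop = elite + children
    return min(pop, key=lambda ind: makespan(ind, load, n_servers))

loads = [5, 3, 8, 2, 7, 4, 6, 1]   # illustrative request costs
best = ga_balance(loads, n_servers=3)
```

With a total load of 36 split across three servers, no assignment can finish faster than a makespan of 12; the GA searches toward that bound, which is the behavior the experiments compare against traditional balancers.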
The Hybrid GA + RL approach with 12 hours of neural network training proved to be the most effective, achieving the lowest execution time, optimal CPU and memory usage, and minimal resource costs among all tested configurations. The Multi-Objective GA also outperformed the classic algorithm, particularly in cases of unstable workloads.</em></p> <p class="06AnnotationVKNUES"><em>Thus, the obtained results confirm the feasibility of applying adaptive optimization based on genetic algorithms and neural networks in AWS cloud systems. The proposed approaches provide enhanced performance, cost reduction, and improved system stability. The findings can be useful for engineers working with cloud services as well as developers of scalable, high-load web applications.</em></p> 2025-05-21T00:00:00+03:00 Copyright (c) 2025 Олег СІГУНОВ https://vottp.khmnu.edu.ua/index.php/vottp/article/view/562 INTELLIGENT METHODS FOR DETECTION OF PERFORMANCE DISORDERS IN WIRELESS SENSOR NETWORKS 2025-06-19T12:08:09+03:00 Roman KYRYCHENKO aka.roman.kirichenko@gmail.com <p class="06AnnotationVKNUES"><em>This article explores advanced intelligent methods for detecting malfunctions in wireless sensor networks (WSNs), with a particular emphasis on enhancing operational reliability under conditions of limited resources. The study substantiates the viability of employing hybrid machine learning models that integrate traffic characteristics, topological information, and energy parameters to comprehensively assess the state of network nodes. A novel multi-level diagnostic system architecture is proposed, consisting of sequential stages including real-time monitoring, intelligent classification, and precise localization of network faults. This layered approach ensures improved adaptability and responsiveness in dynamic WSN environments. 
Simulation experiments were conducted using the AnyLogic and OMNeT++ platforms to evaluate and compare the effectiveness of conventional diagnostic methods versus the proposed intelligent approaches. The results clearly demonstrate the superiority of the intelligent models in terms of diagnostic accuracy, reduced fault detection latency, and optimized energy consumption. The proposed methodology significantly enhances the robustness and efficiency of WSN operations, making it particularly suitable for application in distributed Internet of Things (IoT) infrastructures and other systems where computational resources are constrained. These findings underscore the potential of intelligent diagnostic frameworks to ensure high data availability, prolong the operational lifetime of sensor nodes, and minimize the risk of information loss in critical network deployments.</em></p> 2025-05-21T00:00:00+03:00 Copyright (c) 2025 Роман КИРИЧЕНКО https://vottp.khmnu.edu.ua/index.php/vottp/article/view/563 THE RELEVANCE OF CONFIDENTIALITY OF INFORMATION TRANSMISSION IN TELECOMMUNICATION SYSTEMS 2025-06-19T12:25:06+03:00 Volodymyr KORCHYNSKYI vkadkorchin@ukr.net Anatoliy SEMENKO setel@ukr.net Roman JAROVIY roman.yaroviy@e-u.edu.ua Oleksandr ZAHORULKO ozhrlko@ukr.net <p><em>An important requirement for the operation of telecommunication systems is the secrecy and concealment of information transmission, which is one of the main tasks of information security. The most dangerous threat is unauthorized access to data by intruders, which causes great damage to a country's economy and defense. An intruder first tries to detect the signal of the information source and then attempts to disclose the information, even using anti-jamming and anti-virus tools. Even if these efforts succeed, what the attacker receives is a signal created using timer signal constructions, the most effective method of ensuring the secrecy of information transmission.
Disclosing such a signal requires searching over a large number of timer signal construction parameters: the significant modulation moments, the number of generated pulses, the number of elementary time intervals, and all their possible combinations. As a result, the probability of recovering a valid signal is minimal, which makes it virtually impossible to disclose the information.</em></p> <p><em>The article proposes the use of Timer Signal Constructions (TSC) as an alternative to traditional Pulse Code Modulation (PCM) to enhance the structural concealment of the information signal. A study was conducted on the probability of signal structure disclosure depending on the number of pulses, significant modulation moments, and time parameters. Analytical calculations of the number of possible TSC signal realizations were presented, and the level of their structural concealment was evaluated.</em></p> <p><em>Modeling was performed to analyze the dependence of the signal disclosure probability by an adversary under varying TSC parameters, demonstrating the potential to reduce this probability to the level of 10⁻⁵⁰. The impact of interference and the potential for signal cleansing by an adversary using antivirus tools were investigated. Special attention was given to the prospects of combining TSC with wideband signals, chaotic structures, pseudo-random sequences (PRS), and encryption methods (AES, 3-DES, SNOW), which allows for a comprehensive increase in telecommunication security.
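The scale of the disclosure probabilities discussed above is easy to reproduce with a back-of-the-envelope count. The sketch below simply counts placements of significant modulation moments among elementary time intervals; it illustrates the combinatorial growth and is not the article's actual TSC enumeration, which also varies pulse counts and other parameters:

```python
from math import comb

def tsc_realizations(intervals, moments):
    """Illustrative count: ways to place `moments` significant modulation
    moments among `intervals` elementary time intervals."""
    return comb(intervals, moments)

def disclosure_probability(intervals, moments):
    """Chance of guessing one valid realization uniformly at random."""
    return 1 / tsc_realizations(intervals, moments)

# Even this simplified count drives the guessing probability below 1e-50
p = disclosure_probability(200, 60)
```

Because the count grows combinatorially in both parameters, modest increases in signal length push an exhaustive search far beyond practical reach, which is the structural-concealment effect the modeling quantifies.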
The proposed approaches are recommended for use in secure communication channels, particularly in conditions requiring resistance to technical intelligence threats.</em></p> 2025-05-21T00:00:00+03:00 Copyright (c) 2025 Володимир КОРЧИНСЬКИЙ, Анатолій СЕМЕНКО, Роман ЯРОВИЙ, Олександр ЗАГОРУЛЬКО https://vottp.khmnu.edu.ua/index.php/vottp/article/view/564 ENSURING CYBER PROTECTION OF THE CORPORATE NETWORK OF DATA TRANSMISSION DURING THE ORGANIZATION OF INTERACTION BETWEEN IoT DEVICES OF RESEARCH LABORATORIES 2025-06-19T13:57:27+03:00 Yaroslav TARASENKO yaroslav.tarasenko93@gmail.com Oleksandr TUROVSKY s19641011@ukr.net Viktor IVANKIN victorentino@gmail.com Pavlo MATUSYAK pavelmatusyak@gmail.com Mykhailo TOMASHEVSKYY m.Tomashevskyi@stud.duikt.edu.ua <p class="06AnnotationVKNUES"><em>The paper analyzed the effectiveness of existing methods for protecting the corporate network of a research laboratory when interacting with IoT devices that are part of the sensor network of laboratory tests. The analysis was carried out taking into account the overlap of threats and the convergence of physical and information security of the cyber-physical system formed on the basis of their interaction. To obtain an accurate representation of the interaction of sensor and corporate networks under the influence of IoT device threats, a list of common threats of both networks and threats characteristic of each of them under the conditions of performing laboratory test tasks was determined. The vectors of influence of sensor network threats on threats to the corporate network were determined. Methods of protection against each threat were studied from the point of view of their advantages and disadvantages when applied to the cyber-physical system of laboratory tests and their integration potential was investigated. For each threat, the objects of potential attacks by attackers were identified. 
Based on the identified threats, protection methods, and objects of potential attacks, a conceptual model of the impact of threats on the areas of corporate network protection was formed. The impact under conditions of overlapping threats and a double barrier of protection was taken into account. The impact of threats characteristic of the sensor network on the areas of corporate network protection was determined, which allowed us to take into account the convergence of physical and information security in the cyber-physical system. Research into the impact of threats based on the constructed conceptual model allowed us to prove an increase in the corporate network threat coefficient for 5 out of 8 objects of the corporate network protection area, which proves a decrease in the effectiveness of the use of protection methods and the need to take into account additional impact coefficients when ensuring corporate network protection when interacting with IoT devices in laboratory tests.</em></p> 2025-05-21T00:00:00+03:00 Copyright (c) 2025 Ярослав ТАРАСЕНКО, Олександр ТУРОВСЬКИЙ, Віктор ІВАНКІН, Павло МАТУСЯК, Михайло ТОМАШЕВСЬКИЙ https://vottp.khmnu.edu.ua/index.php/vottp/article/view/565 METHODS OF MEASURING GEOMETRIC PARAMETERS OF CUTTING TOOL WEAR 2025-06-19T14:11:31+03:00 Nataliia ZASHCHEPKINA nanic1604@gmail.com Roman TYMCHYK tymchik88@gmail.com <p class="06AnnotationVKNUES"><em>In modern instrument manufacturing, the main method of producing instrument parts is the machining of part blanks on CNC machines to high accuracy of geometric dimensions and surface shape. To measure the accuracy of the geometric dimensions and surface shape of a body of rotation, it is important to create relevant information and measurement systems and analytical models for evaluating and predicting possible changes in the parameters of the tools. Such systems and models should be the basis for creating appropriate software for CNC machine tool systems to perform the necessary corrections in the trajectory of the tool movement relative to the surface of the workpiece in the machine coordinate system. This will make it possible to compensate for possible elastic deformations of the workpiece and thermal deformations of the geometric dimensions of the tool, which, in turn, will ensure the necessary production efficiency. Such research and development constitute a relevant scientific and practical task in modern instrument manufacturing related to the production of special-purpose devices.</em></p> <p class="06AnnotationVKNUES"><em>The problems of wear of the cutting tool during the processing of parts were considered. Types of tool wear, their causes, and methods and devices for wear determination are analyzed. It has been determined that one way to reduce set-up time is to automatically account for actual tool sizes in the control program. With an increase in the accuracy of the tool setting, the machine time is reduced due to the reduction of idle movements of the cutter holder and of "safety zones", the distances provided by the programmer to ensure a safe approach of the tool to the machined surface and its subsequent infeed. The combination of traditional and advanced modeling methods makes it possible to reduce the uncertainty of measurements and improve the accuracy of the prediction of shape deviations, which is especially important in the precision manufacturing of device parts, where precise control of geometric characteristics is key.</em></p> <p class="06AnnotationVKNUES"><em>The creation of precision non-contact means of controlling the dimensional wear of the cutting tool will ensure the production of high-precision parts of the devices.</em></p> 2025-05-21T00:00:00+03:00 Copyright (c) 2025 Наталія ЗАЩЕПКІНА, Роман ТИМЧИК https://vottp.khmnu.edu.ua/index.php/vottp/article/view/570 AUTOMATED INFORMATION SYSTEM FOR ADMISSION TO TNTU “ELECTRONIC OFFICE OF THE APPLICANT” 2025-06-30T12:47:23+03:00 Oleksandr KARNAUKHOV karnaukhov@tntu.edu.ua Serhii MARTSENKO martsenko_s@tntu.edu.ua <p class="06AnnotationVKNUES"><em>This paper is dedicated to an in-depth exploration of the development and implementation of the “Electronic Applicant's Office” information system, designed as a comprehensive solution to automate and optimize the university admission process at Ternopil National Technical University named after Ivan Puluj (TNTU). The primary objective of the developed system is to streamline the handling of a significant volume of incoming applications, reduce the administrative workload on university personnel, and enhance the efficiency and quality of admission-related processes.</em></p> <p class="06AnnotationVKNUES"><em>The research provides a detailed analysis of the system’s architecture, outlining the technological components and design decisions that ensure scalability, fault tolerance, and high performance.
A comprehensive model of the admissions committee's workflow is presented, demonstrating how various modules of the system interact to support the end-to-end process—from initial registration of applicants to the final decision-making stages.</em></p> <p class="06AnnotationVKNUES"><em>Particular attention is paid to the implementation aspects, including the use of modern methodologies for automated deployment, containerization, and cloud-based scaling, which collectively ensure the system's adaptability to varying loads during admission campaigns. The paper also discusses the challenges and solutions related to the integration of the system with external data services and governmental platforms, which play a critical role in validating applicant data and ensuring legal compliance.</em></p> <p class="06AnnotationVKNUES"><em>In conclusion, the study highlights the positive outcomes observed at TNTU following the adoption of the system, such as improved transparency, faster application processing times, and greater satisfaction among applicants and staff. This case study may serve as a valuable reference for other educational institutions seeking to digitalize and modernize their admission infrastructure.</em></p> 2025-05-21T00:00:00+03:00 Copyright (c) 2025 Олександр КАРНАУХОВ, Сергій МАРЦЕНКО https://vottp.khmnu.edu.ua/index.php/vottp/article/view/571 ANALYSIS OF ECOLOGICAL FACTORS FOR ENVIRONMENTAL IMPACT ASSESSMENT OF INDUSTRIAL ENERGY ENTERPRISES 2025-06-30T13:14:27+03:00 Nataliia ZASHCHEPKINA nanic1604@gmail.com Roman RUDNYTSKY romarudnytskiy@gmail.com <p><em>Environmental protection, rational use of natural resources, ensuring the ecological safety of human activities are an integral condition for the sustainable economic and social development of Ukraine. One of the reasons for the occurrence of environmental problems is the impact on the state of the environment of factors that change its quantitative and qualitative characteristics. 
The work considers the types of factors that affect the environment and analyzes them, including their sources of origin and their state, for the further assessment of the activities of energy enterprises.</em></p> <p><em>According to the nature of origin, natural and anthropogenic pollution of the environment are distinguished. Natural pollution results from volcanic eruptions, forest fires, weathering, mass reproduction of insects, and similar events; anthropogenic pollution is the result of human activity.</em></p> <p><em>Persistent pollutants include those that decompose slowly in nature (plastics, pesticides, polyethylene) and toxic compounds such as mercury and lead. Unstable pollutants are neutralized in ecosystems as a result of natural physico-chemical or biological processes. By their nature, pollution factors are divided into four groups: mechanical, physical, chemical, and biological. A structural scheme of the harmful effects of pollutants and fuel combustion products on the natural environment is presented and analyzed. The method of pollutant emission analysis, namely inventorying, is considered; it is mandatory for production associations and industrial enterprises, organizations, and institutions that emit pollutants into the atmosphere, regardless of departmental subordination and form of ownership.</em></p> 2025-05-21T00:00:00+03:00 Copyright (c) 2025 Наталія ЗАЩЕПКІНА, Роман РУДНИЦЬКИЙ https://vottp.khmnu.edu.ua/index.php/vottp/article/view/572 OVERVIEW OF TRANSFORMERS' ROLE IN DATA MINING FROM UNSTRUCTURED DATA 2025-06-30T13:29:37+03:00 Denys OLIANIN denys.olianin@gmail.com Halyna TSYPRYK tsupryk_h@tntu.edu.ua <p><em>The rapid growth of Big Data has made it increasingly important to extract meaningful insights from unstructured sources such as text, audio, video, and emails.
Traditional data mining techniques—like tokenization, clustering, classification, and association rule mining—have provided a basis for processing these complex data forms. However, they often struggle to capture the subtle semantic and contextual relationships that are inherent in unstructured data. In this article, we examine the limitations of these conventional methods and explore the impact of Transformer Neural Networks (TNNs) on unstructured data mining.</em></p> <p><em>Transformer architectures have revolutionized the field by employing self-attention mechanisms and positional encodings, which allow for parallel processing of data. This new approach enables the creation of high-quality embeddings that capture both semantic and syntactic information. As a result, tasks such as sentiment analysis, topic modeling, and automated summarization are significantly enhanced. Additionally, integrating transformers into audio signal processing and email mining has led to notable improvements in automatic speech recognition and semantic analysis, effectively addressing some of the long-standing challenges in these areas. The findings discussed in this article highlight the potential of transformer-based approaches to not only overcome the limitations of traditional data mining methods but also to open the door to innovative applications across various fields. Future research directions include developing more computationally efficient transformer models and exploring hybrid approaches that combine traditional techniques with advanced neural architectures. 
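The self-attention mechanism at the core of these architectures can be written in a few lines of numpy. The sketch below is a minimal single-head version without the learned projection matrices of a full transformer layer, using illustrative token embeddings:

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Each position attends to all positions in parallel: similarity
    scores between queries and keys become softmax weights over values."""
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)                 # pairwise similarities
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ v, weights

# Three token embeddings of dimension 4 (illustrative values)
x = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0, 0.0]])
out, attn = scaled_dot_product_attention(x, x, x)   # self-attention: q = k = v
```

Because every output position is a weighted mix of all inputs computed at once, the mechanism captures the long-range semantic relationships that sequential traditional methods miss, and it parallelizes across the whole sequence.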
These efforts will ultimately push the boundaries of what is possible in unstructured data mining.</em></p> 2025-05-21T00:00:00+03:00 Copyright (c) 2025 Денис ОЛЯНІН, Галина ЦУПРИК https://vottp.khmnu.edu.ua/index.php/vottp/article/view/573 UNCERTAINTIES ASSOCIATED WITH PIPETTE DISPENSERS 2025-06-30T20:31:05+03:00 Oleksandr REDKO vottp@khmnu.edu.ua Valentyn MOKIICHUK vottp@khmnu.edu.ua <p><em>The article analyzes the regulatory and technical documents on the operation and calibration of fixed and adjustable dose volume pipettes. The sources of uncertainty in the measurement result during operation and calibration of pipettes, which are advisable to use in the practice of accredited testing and calibration laboratories, are identified and described. Particular attention is paid to the uncertainty associated with the evaporation of liquid during the dosing and measurement processes. A formula and experimental procedure for determining mass loss due to liquid evaporation are proposed. The elements of calculation of the uncertainty components of the measurement result are presented.</em></p> 2025-05-15T00:00:00+03:00 Copyright (c) 2025 Олександр РЕДЬКО, Валентин МОКІЙЧУК