The article considers the evolution of methods and technologies of decision-making by officials in the context of their use of situational management capabilities in the «individual», «information layout», and «intelligent systems» modes, and the implementation of artificial intelligence (AI). The problems and disadvantages of different management models are considered. The requirements for the intellectualization of computers are defined. It is shown that the introduction of cloud technologies and big data analytics has further expanded the operational capabilities of situational management systems. Key trends, including the integration of machine learning and AI, the introduction of advanced sensor technologies, and the development of network communication systems for the uninterrupted exchange of information in the network of situational centers, have expanded these capabilities further. The need for the introduction of AI, which will become one of the main directions in the development of decision support systems, is proven. The possibility and necessity of using AI agents in situational management systems is also considered and substantiated. An AI agent is an intelligent system designed to perceive the environment, make decisions, and perform actions with the intention of achieving a specific goal. AI agents represent a paradigm shift in traditional computing. They are not just tools we use, but intelligent partners that can learn, adapt, and solve problems with us. The concept of AI agents is not new, but recent technological advances have transformed them from theoretical constructs into powerful practical instruments. The description of this transformative technology through the building blocks of AI agents and large language models shows how the latter serve as the «brain» of AI agents. Figs.: 6. Refs.: 5 titles.
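The perceive-decide-act cycle of an AI agent described above can be illustrated with a minimal sketch; the thermostat setting, goal, and decision rule are invented for the example and are not from the article.

```python
# Minimal sketch of an AI agent's perceive-decide-act cycle (illustrative only).
class ThermostatAgent:
    """Toy agent whose goal is to keep a room at a target temperature."""

    def __init__(self, target: float):
        self.target = target  # the agent's goal

    def perceive(self, environment: dict) -> float:
        # Perception: read the relevant state from the environment.
        return environment["temperature"]

    def decide(self, temperature: float) -> str:
        # Decision: choose an action that moves the state toward the goal.
        if temperature < self.target - 0.5:
            return "heat"
        if temperature > self.target + 0.5:
            return "cool"
        return "idle"

    def act(self, environment: dict, action: str) -> None:
        # Action: change the environment.
        delta = {"heat": 1.0, "cool": -1.0, "idle": 0.0}[action]
        environment["temperature"] += delta


def run_episode(start: float, target: float, steps: int = 20) -> float:
    env = {"temperature": start}
    agent = ThermostatAgent(target)
    for _ in range(steps):
        observation = agent.perceive(env)
        agent.act(env, agent.decide(observation))
    return env["temperature"]
```

In a full agent the decision step would be a learned policy (for example, a large language model acting as the «brain»); here a fixed rule stands in for it.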
The article analyzes the development of artificial intelligence (AI) technologies in the world and in Ukraine and their connection with security, based on a risk-oriented approach. The latest achievements, a brief description of the technology, the state of standardization and regulation of the technology's development, and possible risks to human life and their specifics in our country are considered. The participation of Ukrainian scientific institutions, in particular the National Academy of Sciences of Ukraine, state regulation, public-private partnership, the activities of private firms, and the possibilities of applying AI technology in the security sector are also discussed. The article is a review, so it contains a lot of cited material. It provides data on the spread of the technology in Ukraine in various spheres of life, brief information on the main state organizations and enterprises involved in the implementation of AI, and data on some private enterprises and public-private partnerships in this area. At the same time, the authors draw attention to the fact that the vast majority of work on this topic is done by private business, mostly founded by foreign companies, while official science, the National Academy of Sciences, etc. are lagging behind. The article emphasizes that inadequate attention (compared to advanced countries) is paid to the security issues of both AI technology itself and new processes of activity involving AI. The authors analyze in detail the danger of some private enterprises using AI in the pseudo-scientific sphere: the so-called assistance in creating scientific articles and raising the scientific status of scientists, education, etc. The authors see this phenomenon not only as a violation of the ethics of scientific integrity, but also as a catastrophic risk for the state as a whole.
The article analyzes the state of implementation of standards and laws, provides directions for the development of security software using AI and cloud technologies, etc. It is proposed to accelerate the introduction of technology and international regulations at the state level with the coordination of the National Academy of Sciences. Figs.: 4. Refs.: 32 titles.
Self-recovery, often referred to as self-healing and remediation, is an extremely important, superpower-like feature of large systems at the national, international, and up to the global level. Since any system, especially a large and distributed one, can often be represented in a network form, with nodes as its components and links as the communications between them, the recovery of any system may be considered first of all as the recovery of its network structure. This recovery may often have to rely mostly on internal system resources, with minimum external intervention, supplement, or control, as other systems may have problems too and be unable to share their resources. The paper investigates and shows in detail how the developed Spatial Grasp Model and Technology, with its recursive Spatial Grasp Language (SGL), can organize distributed networks of any volume and topology to behave in a genuinely self-healing, self-repairing, actually «immortal» manner. It offers a universal solution in which all networked nodes, being potentially active, cooperatively analyze the network to a reasonable depth from each of them, share the results so that every node eventually holds a full description of the whole network, and then use this description to collectively restore the whole network from any damage as long as at least a single node remains alive. This solution uses the spatially controlled, unique, supervirus-like flexibility and self-replication of SGL scenarios, which freely migrate between network nodes using only local communications among them. The paper also provides practical recommendations on how to use the offered self-recovery solution for huge networks, and shows how to additionally involve other critical infrastructures in case of complex crises and problems in social systems. Figs.: 7. Refs.: 29 titles.
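The core idea, every node holding a full description of the network so that any single survivor can rebuild the rest, can be sketched in plain Python; this illustrates the principle only and is not SGL or the paper's scenarios.

```python
# Sketch of replication-based network self-recovery (illustrative, not SGL).
# Every live node stores a full description of the topology; after damage,
# any single surviving node can re-create the missing nodes and links.

def replicate_topology(nodes: dict, topology: dict) -> None:
    # Sharing phase: give each node the full network description.
    for node in nodes.values():
        node["map"] = {name: set(links) for name, links in topology.items()}


def damage(nodes: dict, lost: set) -> None:
    # Simulate destruction of some nodes.
    for name in lost:
        nodes.pop(name, None)


def recover(nodes: dict) -> dict:
    # Recovery phase: any surviving node uses its stored map to restore
    # all missing nodes, handing each a fresh copy of the full description.
    survivor = next(iter(nodes.values()))
    full_map = survivor["map"]
    for name in full_map:
        if name not in nodes:
            nodes[name] = {"map": {n: set(l) for n, l in full_map.items()}}
    return nodes
```

With this scheme the network survives any damage that leaves at least one node alive, which is the «immortality» property the paper claims for SGL-organized networks.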
The paper identifies the features of solving simple arithmetic problems with a logical component using chatbots based on generative artificial intelligence. Copilot is considered as such a chatbot. It was found that when the query is extended or clarified, the probability of obtaining the correct answer decreases, because expanding the query parameters leads to errors in forming the answer. This problem is known as the limitation of large language models in mathematical reasoning. The user’s query and the search for information to answer their question proceed non-linearly: users change and supplement their queries, which only worsens the chatbot’s final answer. To minimize such errors and inaccuracies, an approach based on a non-linear Z-approximation is proposed. Since Z-transformations are built on adaptive algorithms and are able to change the structural features of these algorithms, each iteration of such an algorithm brings the search closer to a certain point containing the correct answer without moving beyond the search area. In this case, the behavior of a user who keeps adding new constraints and refinements to the problem posed to the generative-AI chatbot can be described through the recurrence relations of the generated small-rational approximations. As a result, this makes it possible to build a direction of movement toward the correct answer while filtering out remarks that are not essential to understanding the problem. The proposed algorithm enables such calculations for user actions that are described by basic trigonometric functions. The algorithm has been tested for the functions sin x, cos x, tan x, and arctan x in the Python programming language using the TensorFlow library.
As a result, the algorithm allowed us to obtain the correct solution to the problem, but an increase in the task execution time was recorded when the logical conditions became more complicated. Figs.: 6. Refs.: 16 titles.
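The paper's Z-approximation algorithm is not reproduced here; as a generic illustration of the underlying idea, iteratively refining toward the correct answer without ever leaving the search area, a plain bisection sketch for a trigonometric equation is shown (the function and interval are assumptions).

```python
import math


def bounded_refine(f, target, lo, hi, iterations=50):
    """Iteratively approach the x with f(x) == target, never leaving [lo, hi].

    Requires f to be monotonic on [lo, hi]; each iteration halves the
    remaining search interval (plain bisection, used here as a stand-in for
    the adaptive refinement described in the article).
    """
    increasing = f(lo) <= f(hi)
    for _ in range(iterations):
        mid = (lo + hi) / 2.0
        if (f(mid) < target) == increasing:
            lo = mid  # the answer lies in the upper half
        else:
            hi = mid  # the answer lies in the lower half
    return (lo + hi) / 2.0
```

For example, solving sin x = 0.5 on [0, π/2] converges to x ≈ π/6; every iterate stays inside the original search interval, which is the property the article attributes to the Z-approximation.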
The article is devoted to solving the problem of cybersecurity of corporate IT infrastructure by using the concept of security operations centers. For modern SOCs (Security Operations Centers), the main monitoring, threat detection, and response technology is based on SIEM (Security Information and Event Management) systems, which combine event information from all security tools and have a lot in common with LMS (Log Management Systems). In general, LMS and SIEM are focused on different tasks but share a common information base in the form of log data and partially overlap in their security monitoring capabilities. The article provides an overview of the available free open-source tools for implementing modern SIEM functions in connection with typical SOC processes and compatible operation with an LMS. The functions and typical processes of a SOC are considered. An appropriate instrumental basis for implementing these processes is represented by the evolutionary scheme of SIEM development proposed by Gartner. The absence of a free full-featured SIEM platform among the existing and expected offerings of the leaders of the corresponding software market (according to the Gartner Magic Quadrant 2024) means that the given task can only be solved using available separate tools, through their adaptation, gradual development, and integration within a single platform. A review of available sources on existing free open-source tools for implementing the basic functions of SIEM and of IRP, SOA, TIP, and UEBA platforms has been conducted. According to the authors, these tools primarily deserve attention in the context of the given task. It is noted that, against the background of a huge number of paid tools, the share of free ones is quite large, and there are corresponding offerings in each nomination of the Gartner scheme.
For the first stage of creating a full-featured SIEM, an ELK-OSSIM integration solution is proposed, with further gradual development and integration of SOA–SOAR components. At the first stage, such a solution should ensure the performance of the functions of vulnerability assessment, intrusion detection, event correlation, behavioral monitoring, threat intelligence, long-term intelligence analysis, etc. Figs.: 5. Refs.: 23 titles.
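One of the SIEM functions listed above, event correlation, can be illustrated with a minimal brute-force detection rule; the event format, threshold, and time window are invented for the sketch and do not come from any of the reviewed tools.

```python
from collections import defaultdict


def correlate_bruteforce(events, threshold=5, window=60):
    """Flag source IPs with >= `threshold` failed logins within `window` seconds.

    `events` is an iterable of (timestamp, source_ip, outcome) tuples, a
    stand-in for normalized log records collected by an LMS/SIEM.
    """
    failures = defaultdict(list)  # source_ip -> timestamps of recent failures
    alerts = set()
    for ts, ip, outcome in sorted(events):
        if outcome != "failure":
            continue
        bucket = failures[ip]
        bucket.append(ts)
        # Keep only failures inside the sliding time window.
        failures[ip] = bucket = [t for t in bucket if ts - t < window]
        if len(bucket) >= threshold:
            alerts.add(ip)
    return alerts
```

Real SIEM platforms express such rules declaratively and correlate across many event sources, but the sliding-window pattern is the same.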
Requirements engineering is one of the defining processes of systems engineering. The intensive development and spread of Internet of Things (IoT) systems require an information and methodological basis for their design, especially at the stage of requirements definition. The complexity, heterogeneity, and convergence of IoT systems cause corresponding problems at all stages of their design, including the requirements definition stage. The process of determining system requirements transforms the stakeholders’ view of the desired capabilities into the developer’s technical understanding of how the system can achieve these capabilities. System requirements describe what the target system must do in order to satisfy the needs of stakeholders and are expressed in an appropriate combination of well-formed textual statements and supporting models or diagrams. The development of IoT systems is associated with solving problems caused by the convergent and adaptive nature of such systems, which determines their complexity. Taking the adaptive nature of IoT systems into account at the early stages of the development life cycle is important for producing a complete and accurate specification of the target system. An IoT system has many aspects (software, hardware, and network), all in the context of the environment. Software is the basis of information processing in IoT systems: it controls the system and provides interaction between the system and the environment, the hardware, and the network. Hardware includes the physical objects or devices that are part of the IoT system and must be specified at the requirements stage to ensure that the features of interaction with the hardware are taken into account. The variability of the environment causes uncertainties in the system’s operation, so an IoT system must be designed to take into account the uncertain nature of the environment in which it functions.
The article analyzes approaches and tools for managing requirements, taking into account the context of Internet of Things systems. Figs.: 3. Refs.: 36 titles.
The last few years have been record-breaking for cybercriminals, who have caused harm to organizations through various malicious programs. Preventative measures are the best protection, and many recommendations emphasize the importance of organizations having a proper data backup strategy in place. However, given the increasing complexity and diversity of IT systems and cyber threats, choosing an effective data backup strategy that protects all critical organizational data while minimizing data loss and downtime is no easy task today. The article addresses the issue of selecting an effective data backup and recovery strategy. The recommended strategy is the 3-2-1-1-0 rule based on the concept of an «air gap», or «air isolation», implying the creation of a barrier between the backup data and access to that data. This additional data protection feature is used to isolate and disconnect target storage devices from unprotected networks, production environments, and host platforms. The main advantage of this strategy is protection against ransomware and other malicious software, since backup copies are inaccessible from the backup server or another network location. Backup data remains offline and protected from such attacks due to isolated storage, which also ensures long-term data preservation. The immutability of backup copies is the most important component of a reliable data protection strategy. When immutability is combined with isolated storage, organizations can increase the security and integrity of their backup copies, ensuring that the data remains safe from online threats. As the digital environment continues to evolve, the need for immutable backup copies with isolated storage is becoming more apparent to both organizations and government institutions. Refs.: 3 titles.
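A backup inventory can be checked against the 3-2-1-1-0 rule automatically (3 copies, 2 media types, 1 offsite, 1 offline or immutable, 0 verification errors); the inventory format below is an assumption made for illustration.

```python
def check_3_2_1_1_0(copies):
    """Check a backup inventory against the 3-2-1-1-0 rule.

    `copies` is a list of dicts with keys: media (str), offsite (bool),
    offline_or_immutable (bool), verify_errors (int) -- an assumed format.
    Returns a dict mapping each sub-rule to pass/fail.
    """
    return {
        "3_copies": len(copies) >= 3,
        "2_media": len({c["media"] for c in copies}) >= 2,
        "1_offsite": any(c["offsite"] for c in copies),
        "1_offline_or_immutable": any(c["offline_or_immutable"] for c in copies),
        "0_errors": all(c["verify_errors"] == 0 for c in copies),
    }
```

A scheduled job running such a check would catch, for example, the loss of the air-gapped copy before an incident makes it matter.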
Ukraine has some of the most fertile soils in the world. Therefore, to ensure maximum yields, an important component is constant monitoring of the agrobiological state of agricultural lands throughout the entire period of growing crops, as well as before sowing and after harvesting. This makes it possible to optimize the application rates of technological materials and, accordingly, minimize the costs of growing crops. Constant monitoring of the agrobiological state of agricultural lands is an important element of modern agricultural production and is possible using a wide range of devices of ground-, air-, and space-based information and technical monitoring systems at different stages of growing crops. As a result, we obtain large data sets that require prompt processing, visualization, and presentation in order to work with them and make operational decisions for effective management of technological operations. That is why a method of cluster analysis has been developed for working with large data sets from the precision-agriculture information and technical system on the agrobiological state and biodiversity of agricultural lands. The proposed program code allows for the differentiated application of technological material (seeds, fertilizers, plant protection products, etc.) based on monitoring data obtained with information and technical systems for operational monitoring of the agrobiological state of agricultural lands. According to preliminary calculations, possession of such information will make it possible to save 10–25% of technological material and help to increase the yield of agricultural crops by 10–20 centners per hectare. Figs.: 1. Refs.: 12 titles.
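The cluster analysis itself can be sketched with a plain one-dimensional k-means over a per-plot fertility index; the data, the index, and the use of standard k-means are illustrative assumptions, not the authors' program code.

```python
def kmeans_1d(values, k, iterations=50):
    """Plain 1-D k-means: group plot measurements into k application zones.

    `values` could be, e.g., a vegetation index per field plot; each returned
    center then drives one differentiated application rate. Requires k >= 2.
    """
    data = sorted(values)
    # Quantile initialization keeps the sketch deterministic.
    centers = [data[int(q * (len(data) - 1))]
               for q in (i / (k - 1) for i in range(k))]
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for v in data:
            # Assign each measurement to the nearest zone center.
            nearest = min(range(k), key=lambda j: abs(v - centers[j]))
            clusters[nearest].append(v)
        # Move each center to the mean of its cluster.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers
```

Production systems would cluster multi-dimensional data (several indices plus coordinates), but the zoning principle is the same.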
The paper investigates the relationship between different groups of quality indicators of a regression model. Three groups of indicators are considered: statistical indicators, indicators of approximation accuracy in accordance with the requirements of technical applications, and theoretical indicators related to the regression coefficients. The statistical indicators are those most common in applied research (although they are not sufficient for a reasonable assessment of the quality of a regression equation), namely the residual variance and the multiple correlation coefficient. The approximation accuracy estimates are those most often put forward as adequacy criteria in technical applications: the average relative deviation of the model from experimental data, the maximum relative deviation, the average absolute deviation, and the maximum absolute deviation. In addition, the difference between the values of the correlation coefficients and their estimates is considered. To cover the possible variants of the conditions under which a model is built, the research was conducted under the following conditions: a) compliance with all the prerequisites of regression analysis; b) compliance with all the prerequisites of regression analysis in the presence of large dispersion (large dispersion of reproducibility); c) the presence of heteroskedasticity; d) the presence of outliers (with different locations and different numbers in the training sample). It was established that when the prerequisites and assumptions of regression analysis are met, the difference between the methods is insignificant from an applied point of view for the indicators of all groups. With large dispersion and heteroskedasticity, methods aimed at obtaining the best relative indicators give a significant improvement of «their» characteristics with a significant deterioration of all the others.
Recommendations for the selection of methods depending on the characteristics of the data and requirements for model characteristics have been developed. Tabl.: 10. Figs.: 7. Refs.: 17 titles.
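The accuracy indicators compared in the paper can be computed directly from residuals; the following sketch uses the standard textbook formulas, not values or code from the paper.

```python
def quality_indicators(y_true, y_pred, n_params):
    """Standard regression quality indicators from the groups compared above.

    n_params is the number of regression coefficients, used for the residual
    variance's degrees of freedom (n - n_params). Relative deviations assume
    y_true contains no zeros.
    """
    n = len(y_true)
    residuals = [yt - yp for yt, yp in zip(y_true, y_pred)]
    mean_y = sum(y_true) / n
    ss_res = sum(r * r for r in residuals)
    ss_tot = sum((yt - mean_y) ** 2 for yt in y_true)
    return {
        "residual_variance": ss_res / (n - n_params),
        "multiple_correlation": (1.0 - ss_res / ss_tot) ** 0.5,
        "mean_abs_deviation": sum(abs(r) for r in residuals) / n,
        "max_abs_deviation": max(abs(r) for r in residuals),
        "mean_rel_deviation": sum(abs(r / yt) for r, yt in zip(residuals, y_true)) / n,
        "max_rel_deviation": max(abs(r / yt) for r, yt in zip(residuals, y_true)),
    }
```

Computing all groups at once makes the paper's trade-off visible: a fit tuned to minimize relative deviations can worsen the absolute and statistical indicators, and vice versa.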
The article is dedicated to the development and implementation of a real-time air quality monitoring system based on modern information technologies. The study emphasizes the use of the MQTT protocol for efficient data transmission, the Django framework for building the server-side application, and Celery for asynchronous task processing. A key feature of the system is its modular architecture, which ensures high scalability and adaptability to growing requirements. The algorithms for calculating air quality indices, AQI and CAQI, enabling rapid assessment of atmospheric pollution levels, have been thoroughly analyzed. To process large volumes of environmental data, storage optimization methods have been used, including aggregation of average, minimum, and maximum values. These approaches reduce the volume of stored information, enhance processing speed, and improve system efficiency. The system gathers data from various sources, including IoT sensors, open APIs, and MQTT servers, enabling comprehensive air quality monitoring in urban and industrial areas. An interactive web interface with dashboards is provided to ensure user access to information, allowing real-time data visualization, analysis of historical trends, and alerts in case of threshold exceedances. The article highlights the importance of integrating innovative technologies and continuously improving monitoring methods. This includes the implementation of adaptive algorithms for automating data collection and analysis processes, integration with new types of sensors, and enhancement of big data processing capabilities. Such approaches aim to improve the accuracy and relevance of environmental data and establish a reliable foundation for effectively addressing ecological challenges arising from global atmospheric pollution. Figs.: 5. Refs.: 8 titles.
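The AQI calculation mentioned above follows a standard piecewise-linear interpolation between pollutant breakpoints; the sketch below uses a commonly published PM2.5 breakpoint table (such tables vary as standards are revised, so treat the numbers as illustrative).

```python
# Piecewise-linear AQI calculation; the PM2.5 breakpoints below follow a
# commonly published US EPA table and serve only to illustrate the method.
PM25_BREAKPOINTS = [
    # (C_lo, C_hi, I_lo, I_hi)
    (0.0, 12.0, 0, 50),
    (12.1, 35.4, 51, 100),
    (35.5, 55.4, 101, 150),
    (55.5, 150.4, 151, 200),
    (150.5, 250.4, 201, 300),
    (250.5, 500.4, 301, 500),
]


def aqi(concentration, breakpoints=PM25_BREAKPOINTS):
    """Linear interpolation within the breakpoint segment containing C."""
    for c_lo, c_hi, i_lo, i_hi in breakpoints:
        if c_lo <= concentration <= c_hi:
            return round((i_hi - i_lo) / (c_hi - c_lo)
                         * (concentration - c_lo) + i_lo)
    raise ValueError("concentration outside breakpoint table")
```

The CAQI is computed the same way with a different breakpoint table, so a single interpolation routine parameterized by the table covers both indices.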
Data assimilation (DA) is a crucial task in pollution forecasting, as it enables the integration of observations with models, improving the accuracy of pollutant dispersion predictions in the atmosphere, ocean, and land. This is particularly important for assessing the impact of accidental emissions and managing environmental risks. This paper reviews and compares four data assimilation methods for pollution dispersion following accidental releases: two ensemble-based approaches, the ensemble Kalman filter (EnKF) and the ensemble smoother (ES), and two novel approaches based on machine learning (ML). The mathematical foundations of these methods are presented, and their advantages and drawbacks are analyzed. Ensemble data assimilation methods, particularly the EnKF, are computationally efficient and can provide good results even in non-linear models. They require fewer resources than classical methods while preserving critical information with a limited ensemble size. However, their drawback is the Gaussian approximation, which can lead to numerical instabilities and non-physical results in problems with non-Gaussian distributions. Additionally, the need for multiple reinitializations can increase computational costs. An alternative is the ES, which does not require recursive updates and reduces modeling time but may produce worse results in certain cases, despite its higher computational efficiency. Two hybrid methods that combine data assimilation with machine learning have been examined: the first corrects tendencies or resolvents, leveraging a compositional structure that outperforms similar methods without corrections on benchmark problems; the second uses a neural network with a non-standard loss function. Both methods have demonstrated the ability to correct model errors and account for biases in forecasts.
Hybrid approaches that integrate traditional assimilation techniques with machine learning offer high accuracy while reducing the number of DA-ML cycles. Figs.: 4. Refs.: 30 titles.
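The stochastic EnKF analysis step discussed above can be sketched in a few lines of NumPy; the dimensions, observation operator, and toy data are illustrative and do not come from the paper's experiments.

```python
import numpy as np


def enkf_update(ensemble, observations, H, obs_cov, rng):
    """Stochastic ensemble Kalman filter analysis step.

    ensemble:     (n_state, n_members) forecast ensemble
    observations: (n_obs,) observed values
    H:            (n_obs, n_state) linear observation operator
    obs_cov:      (n_obs, n_obs) observation-error covariance R
    """
    n_members = ensemble.shape[1]
    anomalies = ensemble - ensemble.mean(axis=1, keepdims=True)
    P = anomalies @ anomalies.T / (n_members - 1)  # ensemble covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + obs_cov)  # Kalman gain
    # Perturbed observations give each member its own innovation,
    # preserving the correct analysis spread.
    perturbed = observations[:, None] + rng.multivariate_normal(
        np.zeros(len(observations)), obs_cov, size=n_members).T
    return ensemble + K @ (perturbed - H @ ensemble)
```

The ensemble smoother applies essentially the same update once over a whole time window instead of recursively at each step, which is where its lower cost and its occasional accuracy loss both come from.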
The article is devoted to the topical problem of improving the convenience and efficiency of shopping in hypermarkets. Currently, there are very few solutions that can help with traditional in-store shopping. Difficulties in finding products, the sheer size of stores, and the confusing location of departments create discomfort for customers, which can reduce their satisfaction and negatively affect sales. In this work, the authors propose a web application that helps to solve these problems by automating product search and making navigation in large shopping areas more convenient. The authors have also conducted a survey of hypermarket shoppers to present statistics on this problem. As part of the study, an analysis of the subject area and a comparative analysis of existing solutions and methods have been conducted, and the main stakeholders have been identified to determine the key tasks of the application. The BM25 probabilistic model, which provides effective information retrieval, has been selected to solve the problem. Prospects for improving the application’s functionality by implementing a recommendation system that would offer customers products matching their preferences have been identified. A conceptual model of the subject area has been developed, and a functional decomposition of the problem has been performed. A business process model has been formed as an eEPC diagram to display and analyze the processes as they are without a web shopping assistant. Functional and non-functional requirements for the future application, which should be taken into account during implementation, have been described, and their priorities determined. The proposed solution is aimed at enhancing customer comfort, increasing customer loyalty, and growing sales in hypermarkets. Using the developed web application will optimize product search in large stores. Tabl.: 2. Figs.: 5. Refs.: 13 titles.
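The BM25 scoring behind the proposed product search can be sketched as standard Okapi BM25; the k1 and b defaults and the toy catalogue are assumptions, not details from the article.

```python
import math
from collections import Counter


def bm25_scores(query_terms, documents, k1=1.5, b=0.75):
    """Okapi BM25: score each document (a list of terms) against the query."""
    n_docs = len(documents)
    avg_len = sum(len(d) for d in documents) / n_docs
    doc_freq = Counter()  # in how many documents each term occurs
    for doc in documents:
        doc_freq.update(set(doc))
    scores = []
    for doc in documents:
        tf = Counter(doc)
        score = 0.0
        for term in query_terms:
            if term not in tf:
                continue
            # Rare terms get a higher inverse document frequency weight.
            idf = math.log(1 + (n_docs - doc_freq[term] + 0.5)
                           / (doc_freq[term] + 0.5))
            f = tf[term]
            # Saturating term frequency, normalized by document length.
            score += idf * f * (k1 + 1) / (
                f + k1 * (1 - b + b * len(doc) / avg_len))
        scores.append(score)
    return scores
```

For product search, the "documents" would be product names and descriptions, and the top-scored items would be shown together with their in-store location.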
QUALITY, RELIABILITY, AND CERTIFICATION OF COMPUTER TECHNIQUE AND SOFTWARE
The paper is dedicated to describing a method for increasing the reliability and survivability of on-board control systems. The object of the research is a class of aircraft, cruise missiles, which are considered one of the most promising weapon systems. The article analyzes modern structures of on-board control systems that ensure high reliability of such systems. The study focuses on analyzing four-channel fault-tolerant systems for on-board control complexes with enhanced survivability, which ensure high operational reliability of the complexes over extended periods without physical maintenance while maintaining resistance to equipment malfunctions and failures. Additionally, the paper examines their prototype, a three-channel system with synchronization outputs. Both positive and negative aspects of these systems are highlighted, noting their high level of hardware redundancy. For short-lifespan aircraft, such as cruise missiles and similar systems, the authors propose a two-channel quasi-bridge structure (QBS) for the on-board control system, featuring block-level redundancy and reconfiguration. In general, the QBS is a system consisting of a sequential connection of equally reliable redundant nodes. When one of the functional subunits of a redundant node fails, the control and reconfiguration scheme excludes it from the computational process and reconfigures the system structure in a non-stop mode. As a tool for studying the reliability of the on-board system, a probabilistic-physical method (PP-method) has been used, which is based on a diffusive distribution of time-to-failure (DN-distribution) specifically formalized for assessing and predicting the reliability of electronic, electrical, and electromechanical elements and systems.
While maintaining the level of redundancy characteristic of all two-channel structures, the proposed two-channel redundant QBS, with its decomposition of channels into equally reliable redundant nodes, leads to an increase in the probability of failure-free operation (the R-effect), which becomes more significant as the number of nodes increases. The paper also describes the principle of placing the functional subunits of the QBS, which provides an additional effect: an overall increase in system survivability. Figs.: 5. Refs.: 10 titles.
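The R-effect can be checked numerically under a simplified model of independent, equally reliable elements; this is not the authors' PP-method with the DN-distribution, only an illustration of why node-level redundancy beats whole-channel duplication.

```python
def duplicated_channels(r, n):
    """Reliability of two independent channels, each a series of n nodes
    with per-node reliability r (classic two-channel duplication)."""
    channel = r ** n
    return 1 - (1 - channel) ** 2


def node_redundant_qbs(r, n):
    """Reliability of a series of n redundant node pairs: each node is
    duplicated, and reconfiguration excludes a failed subunit."""
    node_pair = 1 - (1 - r) ** 2
    return node_pair ** n
```

Under this model the node-level scheme is always at least as reliable, and the advantage grows with the number of nodes n, which matches the R-effect described above.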