Mardanov M.J., Rzayev R.R., Alizadeh P.E. On one approach to data fuzzification on the example of the Dow-Jones Index time series. Mathematical machines and systems. 2020. N 2. P. 3 – 13.
The nature of weakly structured systems is specified by the use of expert estimates, whose inherent uncertainty belongs to the fuzzy class. Unlike stochastic uncertainty, fuzziness complicates or even rules out the use of statistical methods and models, but it can support subject-oriented decision making based on approximate human reasoning. The formalization of intellectual operations that simulate fuzzy human statements about the state and behavior of weakly structured systems and/or complex phenomena today forms an independent direction of scientific and applied research, one branch of which is fuzzy modeling of time (or dynamic) series. This direction comprises a set of tasks whose methodology rests on the theory of fuzzy sets, fuzzy logic and fuzzy models (or fuzzy inference systems). The initial procedure in fuzzy modeling of time series is the fuzzification of historical data obtained by observing, on the basis of “soft measurements”, the behavior of a dynamic system over a certain period of time. The paper proposes a new rule for fuzzifying such data, tested on the values of the Dow Jones Industrial Average, which are established from the results of daily trading on the US stock exchange by ordinary arithmetic averaging of the component indicators. The fuzzification procedure proposed in the paper is implemented through a fuzzy inference system, which ensures that the membership functions of the corresponding fuzzy subsets of the discrete universe are found, covering the entire set of Dow Jones index values for a period of more than a year. Tabl.: 4. Figs.: 3. Refs.: 8 titles.
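The abstract does not reproduce the proposed fuzzification rule itself. As a generic illustration of the initial step it describes — mapping a crisp index value to membership degrees over fuzzy subsets of the universe — the sketch below uses plain triangular membership functions; the linguistic terms and index ranges are hypothetical, not taken from the paper.

```python
def triangular(x, a, b, c):
    """Triangular membership function with peak b on support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify(value, terms):
    """Map a crisp value to membership degrees over labeled fuzzy terms."""
    return {label: triangular(value, *abc) for label, abc in terms.items()}

# Hypothetical linguistic terms covering an index range of 24000-30000 points.
terms = {
    "low":    (24000, 25500, 27000),
    "medium": (25500, 27000, 28500),
    "high":   (27000, 28500, 30000),
}
degrees = fuzzify(26500, terms)   # one crisp closing value -> fuzzy description
```

In the paper the membership functions are produced by a fuzzy inference system rather than fixed triangles; the sketch only shows the shape of the fuzzification step.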
Podroiko Ye.V., Lysetskyi Yu.M. Network technologies: evolution and peculiarities. Mathematical machines and systems. 2020. N 2. P. 14 – 29.
Today a corporate network is seen as a complex system that traditionally comprises a set of interacting essential components: Main Site – the network of the head office; Remote Site (Branch) – the networks of remote offices; WAN – the global network uniting the office networks; LAN – a local network; WAN Edge – the point of connection to the WAN; Internet Edge – the point of connection to the Internet; Data Center – the corporate data processing center. Some sources also regard the Service Block, a separate network segment with specific services, as a component. Every component of a corporate network features its own set of technologies, each with its own history of origin and development. The paper offers a short review of the basic technologies that form the history of corporate network development, as well as their evolution from a set of separate network technologies to a unified multiservice network infrastructure. This unified infrastructure is inextricably linked with the global Internet, which is both a service and a carrier for the majority of modern corporate networks. The paper describes the origin and development of the Internet, local and global networks, Wi-Fi networks and software-defined networks. The corporate network has gone through a long evolution from the coexistence of separate technologies to a modern unified intelligent network infrastructure with high security and reliable management. Owing to the fast-moving development of information technologies, corporate networks have been transforming dynamically in several directions: network functions virtualization (NFV); use of SDN solutions; automation of management processes; analytics; security; cloud services.
In the course of this transformation the corporate network has turned into a unified, flexible, application-oriented infrastructure with high reliability, easily modified and expanded functionality, a single management center, unified security policies, and fast, detailed analysis of internal network processes. Tabl.: 3. Figs.: 13. Refs.: 9 titles.
INFORMATION AND TELECOMMUNICATION TECHNOLOGY
Khalchenkov O.V., Kovalets I.V. The use of relaxation methods in the WRF model for the analysis of meteorological conditions in Ukraine over a long period. Mathematical machines and systems. 2020. N 2. P. 30 – 42.
The possibility of using grid and spectral relaxation methods and other options of the WRF mesoscale model for long-term continuous calculations has been investigated. Results of comparing selected meteorological parameters with surface measurements are presented, and basic recommendations for selecting the optimal combination of parameters for long-term calculations are given. The selected parameters made it possible to obtain continuous meteorological fields over a long period (several months) that agree well with surface measurements, retain large-scale synoptic structures and deviate from measurements by amounts commensurate with the results of short-term simulations over the corresponding time period. The selected optimal combination of parameters allowed a continuous calculation for the period from January 1, 2019 to November 6, 2019 without accumulating errors. In a long-term calculation of meteorological conditions in Ukraine with a spatial resolution of 0.15 deg., a mean absolute error of MAE = 2.05 °C and a correlation coefficient of Corr = 0.97 were obtained for the temperature at a height of 2 meters; MAE = 1.4 m/s and Corr = 0.75 for the wind speed at a height of 10 meters; and MAE = 24.6 degrees and Corr = 0.66 for the wind direction at a height of 10 meters. The influence of the parametrizations of the underlying surface and the active soil layer on the quality of the calculated meteorological fields is studied. Using the option to update the water surface temperature reduced the MAE for the temperature from 2.17 °C to 2.05 °C. Each of the investigated surface models showed its advantages and disadvantages. The RUC and NOAH LSM parameterizations showed good agreement with the measurements for all studied parameters and can be recommended for long-term continuous calculations.
The long calculation made it possible to describe the process of snow accumulation and melting correctly, and to reproduce the temperature of the upper soil layer correctly as well. The paper shows that a disadvantage of long-term calculations is the inability to determine the temperature of the lower soil layers correctly. Tabl.: 6. Figs.: 6. Refs.: 11 titles.
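The MAE and Corr scores quoted above are standard verification metrics comparing a model series against station measurements. A minimal self-contained sketch of both (the toy forecast/observation arrays are illustrative, not data from the paper):

```python
import math

def mae(pred, obs):
    """Mean absolute error between a forecast series and observations."""
    return sum(abs(p - o) for p, o in zip(pred, obs)) / len(pred)

def corr(pred, obs):
    """Pearson correlation coefficient between forecast and observations."""
    n = len(pred)
    mp, mo = sum(pred) / n, sum(obs) / n
    cov = sum((p - mp) * (o - mo) for p, o in zip(pred, obs))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    return cov / (sp * so)

# Toy example: four forecast/observation pairs of 2 m temperature.
forecast = [1.0, 2.0, 3.0, 4.0]
observed = [1.5, 2.5, 2.5, 4.5]
err = mae(forecast, observed)     # average absolute deviation, here 0.5
r = corr(forecast, observed)      # linear agreement in [-1, 1]
```

Note that wind direction is a circular quantity, so computing its MAE in practice requires wrapping angular differences into ±180°; the sketch omits that detail.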
Vyshnevskyi V.V., Romanenko T.N., Lugovskyi Yu.O. The validity of a person’s authentication using an electrocardiogram with a limited number of channels. Mathematical machines and systems. 2020. N 2. P. 43 – 50.
The concept of mobile and home telemedicine for screening and early diagnostics of cardiovascular diseases is expanding due to the emergence of mobile diagnostic devices and smartphones. In the course of such telemedicine consultations, the doctor must be sure that the digital electrocardiogram (ECG) belongs to the patient who was registered. Both multi-channel and single-channel ECG-recording devices are currently available on the telemedicine consulting market, and single-channel electrocardiographs are more economically feasible for home use. Previously, the authors developed and experimentally tested algorithms for patient authentication by his/her multi-channel ECG. These algorithms are based on analyzing the shape of the QRS complex in a three-dimensional phase space of coordinates, so it is reasonable to adapt them to the single-channel ECG. In the case of a multi-channel ECG, a three-dimensional phase space of coordinates can be constructed by obtaining all the necessary data from the ECG leads. In the case of a single-channel ECG, two additional signals must be created artificially, after which a synthetic phase space can be formed. In general, the paper discusses the validity of biometric person authentication algorithms based on an ECG with a limited number of channels. Besides the algorithms for solving the authentication problem, the paper compares sensitivity and specificity indicators calculated from experiments with multi-channel and single-channel ECGs. The results of experiments with multi-channel and single-channel ECGs of a larger number of patients are given in comparison to the previous experiments.
The results of the experiments for the case of recording ECG signals by different devices are given as well. Figs.: 2. Refs.: 10 titles.
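The abstract does not specify how the two artificial signals are constructed from the single channel. One common, generic way to obtain a synthetic three-dimensional phase space from a single signal is time-delay embedding; the sketch below shows that construction only as an illustration of the idea, not as the authors' method.

```python
def delay_embed(signal, delay=5, dim=3):
    """Build dim-dimensional phase-space points from a single signal by
    time-delay embedding: each point is (x(t), x(t+d), x(t+2d), ...).
    The delay is in samples and is a tunable assumption."""
    n = len(signal) - (dim - 1) * delay
    return [tuple(signal[i + k * delay] for k in range(dim))
            for i in range(n)]

# Toy "signal" of 20 samples; a real ECG channel would go here.
points = delay_embed(list(range(20)), delay=5, dim=3)
```

Each QRS complex then traces a closed curve in this synthetic space, whose shape can be compared between recordings just as with a true three-lead phase space.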
Lysetsky Yu.M., Bobrov S.I. Security Operation System. Mathematical machines and systems. 2020. N 2. P. 51 – 59.
The number of cyber attacks and cyber crimes grows every year, which is why new products, technologies and tools for protection against cyber threats constantly appear. A Security Operation Center (SOC) is one of the most up-to-date and reliable enterprise-level cybersecurity tools. There are already several SOCs in Ukraine in government and law enforcement bodies, and organizations and enterprises of practically every industry of the national economy show strong interest in their implementation. A SOC enables monitoring, detection and quick response to incidents, which is necessary to reduce the damage and financial losses such incidents cause. However, implementing a SOC requires significant expenses that only some organizations and enterprises can afford, so the creation of a similar but more affordable tool is very urgent. The paper describes the Security Operation System (SOS) designed for effective protection against cyber threats and cyber attacks, which collects, normalizes, correlates and analyzes events in an organization’s IT infrastructure. The main advantage of this system is its ability to receive information on events from different sources and to correlate them, which is important because today’s attacks can only be discovered from combinations of events in the IT infrastructure. Another advantage of the SOS is the ability to add new correlation rules to the analytical module; these can be based on the unique experience of operating the system, on the analysis of new attacks against the organization’s IT infrastructure, or on correlation rules borrowed from other organizations. Figs.: 5. Refs.: 5 titles.
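The abstract does not describe the SOS rule language. To make the idea of a correlation rule concrete, here is a toy, purely illustrative rule over normalized events (the event schema, field names and thresholds are all assumptions): a burst of failed logins followed by a success from the same source — a pattern no single event reveals.

```python
from collections import defaultdict

def correlate(events, threshold=3, window=60):
    """Toy correlation rule: flag a source that produces `threshold`
    or more failed logins within `window` seconds and then logs in
    successfully. Events are (timestamp, source, event_type) tuples."""
    recent_fails = defaultdict(list)
    alerts = []
    for ts, src, kind in sorted(events):
        if kind == "login_failed":
            # keep only failures inside the sliding window
            recent_fails[src] = [t for t in recent_fails[src]
                                 if ts - t <= window] + [ts]
        elif kind == "login_ok" and len(recent_fails[src]) >= threshold:
            alerts.append((src, ts))
    return alerts

events = [(1, "10.0.0.5", "login_failed"), (2, "10.0.0.5", "login_failed"),
          (3, "10.0.0.5", "login_failed"), (4, "10.0.0.5", "login_ok"),
          (5, "10.0.0.9", "login_ok")]
alerts = correlate(events)
```

A production system would of course match many such rules in parallel over a normalized event stream; the sketch only shows why correlation across events matters.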
Ali A.E. Evaluation of investment projects using traditional and fuzzy methods of analysis. Mathematical machines and systems. 2020. N 2. P. 60 – 69.
Financial analysis is one of the main tools and methods of modern investment design, where projects are alternative, or mutually exclusive: the implementation of one of them makes it impossible or unreasonable to implement the others. It is understood that alternative investment projects have the same target orientation, that is, in the process of financial analysis each alternative project is considered independently, and the possible effect of its implementation is determined without reference to the other investment projects. In this regard, the paper considers approaches to the multi-criteria assessment of an investment project portfolio, given information on the projects’ financial indicators from reliable sources. Within the framework of the proposed approaches and the formed set of financial indicators, a comprehensive methodology for the comparative analysis and selection of investment decisions is proposed. It applies, in a certain order, the comparative analysis methods of Pareto and Borda as well as the fuzzy maximin convolution method and fuzzy inference, provided that the evaluation criteria are equally important. When the fuzzy methods of analysis are used, each financial indicator is considered a qualitative criterion for evaluating the effectiveness of an investment project and is interpreted by suitable fuzzy sets. Based on the results of the calculations, hypothetical alternative investment projects were ranked using the considered multi-criteria evaluation methods, and a comparative analysis of the methods was carried out, which ultimately allows financing the most effective projects. Tabl.: 7. Refs.: 4 titles.
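The Pareto and Borda methods named above are both standard; a minimal sketch of each, with hypothetical project scores where larger is better on every criterion (the data are illustrative, not from the paper):

```python
def dominates(a, b):
    """a Pareto-dominates b: at least as good on every criterion
    and strictly better on at least one (larger is better)."""
    return all(x >= y for x, y in zip(a, b)) and \
           any(x > y for x, y in zip(a, b))

def pareto_front(projects):
    """Keep only projects not dominated by any other project."""
    return {k for k, v in projects.items()
            if not any(dominates(w, v) for w in projects.values() if w != v)}

def borda(projects):
    """Borda count: on each criterion a project earns points equal to
    the number of projects ranked below it; ties are broken arbitrarily."""
    names = list(projects)
    n_criteria = len(next(iter(projects.values())))
    scores = dict.fromkeys(names, 0)
    for j in range(n_criteria):
        for pts, name in enumerate(sorted(names, key=lambda n: projects[n][j])):
            scores[name] += pts
    return scores

# Hypothetical projects scored on two financial indicators.
projects = {"A": (3, 2), "B": (1, 1), "C": (2, 3)}
front = pareto_front(projects)   # B is dominated by both A and C
ranks = borda(projects)
```

In the paper these serve as the crisp baseline against which the fuzzy maximin convolution and fuzzy inference rankings are compared.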
SIMULATION AND MANAGEMENT
Lytvynov V.V., Bogdan I.V., Zadorozhnyi A.О., Bilous I.V. Task prioritization methods in flexible software development methodologies. Mathematical machines and systems. 2020. N 2. P. 70 – 78.
The paper discusses modern task prioritization methods used in flexible (agile) software development methodologies. Flexible methodologies such as Scrum, Kanban and others are currently the most popular because they allow adjustments at any stage of a project, improve the quality of the created product through daily monitoring of its creation, and quickly release the first versions of the software. All task prioritization methods used in software project development, including flexible methodologies, are divided into those that take into account the point of view of the development team and those based on various quantitative assessments, among which are various metrics, expert opinions, the points of view of project stakeholders, available classifications, etc. Among the considered prioritization methods that take into account the opinion of the development team are such popular and actively used methods as MoSCoW, the story map (user story mapping) and proactive improvement. Among the considered methods based on quantitative assessment are Kano’s model, the method based on evaluation sheets, the method for estimating the relative priorities of a set of features proposed by Karl Wiegers, and Quality Function Deployment. Depending on the features of the project, customer requirements, the wishes of the development team and other objective or subjective factors, a project can use one prioritization method, several at the same time, or a combination of them. In addition, some of the considered methods suit short-term planning and others long-term planning, while some can be used at every stage. Tabl.: 2. Figs.: 2. Refs.: 8 titles.
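Of the quantitative methods listed, Wiegers' relative prioritization is the most directly formulaic: each feature's priority is its share of total value divided by a weighted sum of its shares of total cost and total risk. A sketch under those assumptions (the weights and feature data are illustrative):

```python
def wiegers_priority(features, cost_weight=1.0, risk_weight=0.5):
    """Karl Wiegers' relative prioritization:
    priority = value% / (cost% * w_cost + risk% * w_risk),
    where each % is the feature's share of the column total.
    The weights are tunable per project; defaults here are assumptions."""
    tot_v = sum(f["value"] for f in features.values())
    tot_c = sum(f["cost"] for f in features.values())
    tot_r = sum(f["risk"] for f in features.values())
    priorities = {}
    for name, f in features.items():
        v = 100 * f["value"] / tot_v
        c = 100 * f["cost"] / tot_c
        r = 100 * f["risk"] / tot_r
        priorities[name] = v / (c * cost_weight + r * risk_weight)
    return priorities

# Hypothetical backlog: relative benefit, cost and risk estimates.
backlog = {"export report": {"value": 60, "cost": 30, "risk": 20},
           "dark theme":    {"value": 40, "cost": 70, "risk": 80}}
priorities = wiegers_priority(backlog)
```

Features are then implemented in descending order of the resulting scores.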
Brovarets O.O., Chovnyuk Yu.V. Improvement of the managing methodology of the development of complex agrotechnical systems of special purpose in modern conditions. Mathematical machines and systems. 2020. N 2. P. 79 – 88.
Proper management of the agrobiological state of the soil environment is impossible without predicting the dynamics of changes in its condition. At the present stage, traditional monitoring systems based on laboratory analysis are the most widely used. Such methods are quite accurate but costly, and they have another considerable drawback – the low speed of determining agrobiological parameters, in particular the nutrient content of the soil. All this ultimately affects the reliability and effectiveness of decision making. Moreover, even such information cannot ensure the proper quality of technological operations in accordance with the agrobiological state of the soil environment. In this regard, there is an urgent need to develop and study a methodology for managing the development of complex special-purpose agrotechnical systems in modern crop production technologies, using data that make it possible to ensure a given quality when performing a technological operation. Based on the systematization of knowledge about managing the development of complex special-purpose systems and the features of their functioning in modern conditions, the directions for improving the management methodology and the principles of constructing a complex of methodological support for management are determined, which implement closed cycles of developing control decisions and the organic relationship of long-term and current planning. As examples of complex special-purpose agrotechnical systems, the paper uses agronomic (aerospace) monitoring systems for agricultural soils, precision farming systems based on modern space navigation systems, and electrical conductivity monitoring systems for agricultural soils. Refs.: 13 titles.
Gaziyev Z.Z. Assessment of microcredit borrowers by the fuzzy maximin convolution method. Mathematical machines and systems. 2020. N 2. P. 89 – 98.
The problem of timely loan repayment has always been and remains relevant for commercial banks. Overcoming it depends substantially on the quality of the solvency assessment of potential borrowers, which experts carry out on the basis of retrospective information. In the microcredit system, a borrower’s credit history is usually assessed by an expert who relies mainly on heuristic knowledge and intuition, which tends to elevate subjective considerations that lack sufficient grounds. In practice, the opinions of different analysts, or of those responsible for credit decisions, often differ, especially in controversial situations with many acceptable alternative solutions. As a result, when assessing the solvency of potential microloan borrowers, the subjective opinion of the expert and incompetent or deliberate misinterpretation of information carry excessive weight, leading to decisions that are detrimental to the microfinance organization. To increase objectivity, the paper discusses an approach to assessing the responsibility and solvency of microloan borrowers based on the fuzzy maximin convolution method. Given the poorly structured personal data of applicants, this approach allows them to be assessed flexibly and quickly for the provision of microloans. The qualitative assessment criteria applied in this case are weighted on the basis of expert opinions regarding the priority of each of them. An important advantage of the proposed model is that it is simple, convenient to use and able to adapt to the requirements of various microfinance organizations. Tabl.: 4. Refs.: 6 titles.
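The core of the maximin convolution is simple: an alternative's overall score is the minimum of its criterion memberships (its weakest criterion), and the best alternative maximizes that minimum. A generic sketch — the borrower data are hypothetical, and the exponential weighting of criteria (Yager's scheme) is one common convention, not necessarily the one used in the paper:

```python
def maximin(alternatives, weights=None):
    """Fuzzy maximin convolution: score each alternative by the minimum
    of its (weight-adjusted) criterion memberships, then pick the
    alternative with the maximal minimum. Weighting by exponentiation
    (mu ** w) follows Yager's scheme for prioritized fuzzy criteria."""
    scores = {}
    for name, mus in alternatives.items():
        if weights is not None:
            mus = [mu ** w for mu, w in zip(mus, weights)]
        scores[name] = min(mus)
    best = max(scores, key=scores.get)
    return best, scores

# Hypothetical applicants: membership degrees on criteria such as
# income stability, credit history, collateral.
applicants = {"Ivanov":  [0.8, 0.6, 0.7],
              "Petrova": [0.9, 0.4, 0.9]}
best, scores = maximin(applicants)
```

Here Petrova's strong criteria cannot compensate for her weak credit-history membership: the min operator makes the assessment pessimistic by design.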
Ievlev M.G. Automatic control of the rolling mode “at an angle” on a reversing hot rolling mill. Mathematical machines and systems. 2020. N 2. P. 99 – 104.
One of the most important indicators of the quality of control of the rolling process on a plate mill is the accuracy of the geometric dimensions of the rolled metal, which determines the metal cost per ton of product – in particular, the accuracy with which a given width is realized. A sheet width exceeding the width of the initial billet is obtained, when rolling on plate mills, at the stage of width breakdown in a horizontal stand. In most cases the width breakdown is carried out by rolling the workpiece at an angle of 90° to the longitudinal axis of the future sheet. However, if rotating the roll (slab) to the stated angle is impossible (for example, when the width of the roll is less than the distance between the rollers), then part of this stage is carried out by rolling “at an angle” of less than 90°. In the process of such rolling, the dimensions of the roll along all axes and its shape in plan change (the initially rectangular shape becomes a parallelogram). The paper gives a mathematical description of one of the options for forming the width of a thick sheet when rolling “at an angle”, in relation to an automated control system for the technological process of a plate mill. On the basis of the concepts describing the formation of the roll in plan during rolling “at an angle”, expressions were obtained that relate the parameters of the roll and the rolling mode. The problem of automatic control of the rolling mode “at an angle” is formulated. The proposed calculation of the rolling mode, implemented in step with the rolling process, yields a rectangular roll of a predetermined width in a minimum number of passes, subject to restrictions on the reduction per pass.
The described approach to automatic control of the rolling mode when rolling “at an angle” is implemented in the mathematical support of the system for calculating the rolling and heat-hardening parameters of the 5000 plate mill. Fig.: 1. Refs.: 6 titles.
QUALITY, RELIABILITY AND CERTIFICATION OF COMPUTER TECHNIQUE AND SOFTWARE
Kruglova N.V., Dykhovychnyi O.O., Alekseeva I.V. Features of the application of mathematical models of tests in the conditions of remote control. Mathematical machines and systems. 2020. N 2. P. 105 – 116.
The paper explores one of the current issues of distance education – the quality of computer tests in terms of ensuring objective knowledge control. This issue is especially important under today’s pandemic and temporary quarantine. The main attention is paid to the statistical analysis of test quality based on test results using CTT and IRT methods. Using modern statistical methods, the authors analyzed the results of testing prepared and conducted during the quarantine period. The object of study was a test on “Integration of functions of one variable”, a topic the students mastered entirely remotely. The tests were created on the MOODLE platform at Igor Sikorsky Kyiv Polytechnic Institute by professors of the Department of Mathematical Analysis and Probability Theory. Data processing is carried out using a test analysis system created by the authors in the R programming environment. The system makes it possible to process tests of different structures and from different areas (pedagogy, psychology, sociology, etc.); to use both the CTT and IRT apparatus; to work with large data sets; to analyze not only test questions but also respondents; and to differentiate respondents more accurately. The study confirmed the possibility of conducting electronic testing remotely. The technology used in the study can be applied to creating and analyzing external evaluation tasks and to conducting session control during quarantine. Using the methods studied in the work to analyze test tasks will increase the competence of higher school teachers in conducting electronic remote testing. Tabl.: 2. Figs.: 10. Refs.: 20 titles.
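The authors' system is written in R and covers both CTT and IRT; as a flavor of the CTT side only, the sketch below (plain Python rather than R, with a toy 0/1 response matrix) computes the two basic classical item statistics: difficulty (share of correct answers) and discrimination (point-biserial correlation of the item with the total score).

```python
import math

def item_stats(responses):
    """CTT item statistics from a 0/1 response matrix
    (rows = respondents, columns = items)."""
    n_items = len(responses[0])
    totals = [sum(row) for row in responses]
    mt = sum(totals) / len(totals)
    stats = []
    for j in range(n_items):
        col = [row[j] for row in responses]
        p = sum(col) / len(col)              # difficulty: proportion correct
        cov = sum((x - p) * (t - mt) for x, t in zip(col, totals))
        sj = math.sqrt(sum((x - p) ** 2 for x in col))
        st = math.sqrt(sum((t - mt) ** 2 for t in totals))
        disc = cov / (sj * st) if sj and st else 0.0   # item-total correlation
        stats.append({"difficulty": p, "discrimination": disc})
    return stats

# Toy matrix: 3 respondents, 2 items.
stats = item_stats([[1, 1], [1, 0], [0, 0]])
```

Items with near-zero or negative discrimination are the ones such an analysis flags for revision; IRT models (not sketched here) go further by estimating item parameters jointly with respondent ability.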
Fedukhin O.V. R-effect of decomposition of duplicate systems into equiprobable functional nodes. Mathematical machines and systems. 2020. N 2. P. 117 – 123.
The work is devoted to the reliability of non-recoverable two-channel automation and computer systems. As alternative options, a system with block duplication (SBD) and a system with a quasi-bridge structure (QBS) are considered. An SBD in general is a two-channel system consisting of a series connection of duplicated nodes of different reliability. If one of the functional subunits (FSU) of a duplicated node fails, the scheme of control and reconfiguration (SCR) masks it, withdraws it from the computational process, and reconfigures the system structure in the “Non Stop” operating mode. A QBS system also represents a two-channel structure, but one consisting of a series connection of duplicated nodes of equal reliability, while the number of elements (redundancy level) and the functionality of this system are identical to the SBD. The QBS system is likewise fail-safe and provides the “Non Stop” mode of operation. The probabilistic-physical calculation method (WF-method) is used as the tool for studying system reliability; it is based on the diffusion distribution of time to failure (DN-distribution), specially formalized for assessing the reliability of electronic, electrical and electromechanical elements and systems. While the redundancy level of the considered two-channel redundant systems is maintained, decomposing the channels into equally reliable duplicated nodes leads to the R-effect – an increase in the probability of failure-free system operation with an increase in the number of nodes. The presence of the R-effect was also established by other calculation methods and by statistical modeling, for both non-recoverable and recoverable systems. Tabl.: 9. Refs.: 8 titles.
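The R-effect can be demonstrated with elementary reliability algebra, independently of the DN-distribution machinery the paper actually uses. The sketch below (an illustration, not the paper's WF-method) keeps the single-channel reliability fixed and splits the channel into more and more equally reliable duplicated nodes: the system reliability grows with the number of nodes.

```python
def block_duplication(r, n):
    """Two parallel channels, each a series of n nodes with per-node
    reliability r (the whole channel is duplicated as one block)."""
    channel = r ** n
    return 1 - (1 - channel) ** 2

def node_duplication(r, n):
    """A series of n duplicated (pairwise parallel) nodes, each with
    per-node reliability r: duplication is applied node by node."""
    return (1 - (1 - r) ** 2) ** n

# R-effect demo: hold the single-channel reliability at R = 0.9 and
# decompose the channel into 1, 2, 4, 8 equally reliable nodes.
R = 0.9
probs = [node_duplication(R ** (1 / n), n) for n in (1, 2, 4, 8)]
```

With one node the two schemes coincide; for any finer decomposition, node-level duplication dominates block duplication, because a failure in one channel no longer disables that channel's surviving nodes.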