Yakovlev Yu.S., Tymashov O.O. On application of computer systems with ring buses for HIL modelling in real time. Mathematical machines and systems. 2018. N 3. P. 3 – 18.
Foreign and domestic scientists pay insufficient attention to the qualitative and quantitative estimation of highly efficient computer systems with ring buses used for HIL simulation of complex dynamic control processes, especially under real-time conditions, so this remains a topical problem. The urgency of creating such systems is also caused by the continuously growing complexity of controlled objects and processes, with a simultaneous reduction of the time allotted to the decision maker for analysing a problem situation and taking the necessary control actions. The paper considers the basic elements of HIL modelling, as well as models and methods for a comparative estimation of three variants of highly efficient computer systems with ring buses (RB) that differ in the way the user's algorithm is parallelized in comparison with its sequential implementation. For each type of RB, diagrams of the dependence of performance on the probability of optimum processor loading and on the degree of parallelization of algorithm fragments are constructed. These diagrams showed that, for the above parameter values, the most advantageous variant is an intelligent memory system with one ring bus, which was adopted as the base version, since it requires the fewest processor cycles to implement the user's algorithm compared with its sequential implementation. The most universal variant is an intelligent memory system with sectioned modules on FPLD, since the developer can choose the number of sections and the sector sets of each section according to the specific parallelization task, using a modern element base (for example, FPLD with PCI-Express). Figs.: 9. Refs.: 27 titles.
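As a point of reference for the performance-versus-parallelization dependence mentioned above (this is the classical Amdahl relation, given here for orientation only, not the authors' model for ring-bus systems), the speedup of a parallelized algorithm over its sequential implementation can be written as $$S(N, p) = \frac{1}{(1 - p) + p/N},$$ where $p$ is the fraction of the algorithm's fragments that can be parallelized and $N$ is the number of processors; the speedup grows with both parameters but saturates at $1/(1-p)$.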
UDC 681.3
Yashchenko V.O. Some problematic issues of artificial brain development. Mathematical machines and systems. 2018. N 3. P. 19 – 31.
The paper deals with a number of problematic issues in the development of artificial intelligence and with the systems in which natural and artificial intelligence are formed. Can a computer have a soul, which is an attribute of a living being? Is it possible to create artificial intelligence, that is, software tools that would give the computer intelligence so that it can think, feel, perceive the surrounding world and have emotions? Multiconnected, multidimensional neural-like growing networks are considered as a basis for creating strong artificial intelligence. The theory of strong artificial intelligence suggests that computers can acquire the ability to think and be self-aware, although their thought process will not necessarily be similar to the human one. At the heart of multiconnected multidimensional neural-like growing networks lies a synthesis of the knowledge developed by two classical theories: Gladun's growing pyramidal networks and neural networks. Multiconnected multidimensional neural-like growing networks form information models whose main elements are not numbers and computational operations but names and logical connections. Since the network components are neural-like elements, and the links acquire a weight corresponding to the value of the component being bound and, furthermore, germinate, combining the connected components and changing the network structure, a universal multiconnected multidimensional growing neural-like network is obtained. This network acquires increased semantic clarity because not only the connections between neural-like elements are formed but also the elements themselves; that is, the network is not simply built by placing semantic structures in an environment of neural-like elements, but that environment itself is created. This fully corresponds to the structure reflected in the brain, where each explicit concept is represented by a specific structure and has its own designating symbol. It is shown that the new type of neural networks makes it possible to simulate the functions of conditioned and unconditioned reflexes, which, according to I.P. Pavlov, underlie the conditioned-reflex activity of the human brain and provide the most adequate and perfect relations of the organism to the external world, i.e., training and improvement. This predetermines the possibility of creating systems and robots with strong AI. Figs.: 6. Refs.: 20 titles.
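For illustration only, a minimal sketch in the spirit of the growing networks described above: learning adds new named elements and strengthens weighted links instead of only re-weighting a fixed topology. All class and method names here are hypothetical, not the author's implementation.

```python
# Hypothetical sketch of a "growing" neural-like network: concepts are
# named elements, and perceiving a pair of concepts grows the network
# and strengthens ("germinates") the link between them.

class NeuralLikeElement:
    def __init__(self, name):
        self.name = name          # concepts are named, not numbered
        self.links = {}           # neighbour name -> connection weight

class GrowingNetwork:
    def __init__(self):
        self.elements = {}

    def perceive(self, a, b, weight=1.0):
        """Connect concepts a and b, growing new elements as needed."""
        for name in (a, b):
            if name not in self.elements:          # the network grows
                self.elements[name] = NeuralLikeElement(name)
        ea, eb = self.elements[a], self.elements[b]
        # repeated co-occurrence strengthens the association
        ea.links[b] = ea.links.get(b, 0.0) + weight
        eb.links[a] = eb.links.get(a, 0.0) + weight

net = GrowingNetwork()
net.perceive("bell", "food")   # conditioned stimulus paired with reward
net.perceive("bell", "food")   # repetition strengthens the link
print(net.elements["bell"].links)  # {'food': 2.0}
```

The repeated strengthening of a link between a stimulus and a reward loosely mirrors the conditioned-reflex formation the abstract refers to.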
INFORMATION AND TELECOMMUNICATION TECHNOLOGY
UDC 004.89
Grechaninov V.F., Kuzmenko G.E., Lopushansky A.V., Morozov A.O. A network of situational centers of government authorities as the basis for increasing the effectiveness of their activities (interaction). Mathematical machines and systems. 2018. N 3. P. 32 – 39.
The article deals with the issues of improving the quality and efficiency of interaction between government authorities in the Security and Defence Sector (SDS) when assessing situations that arise in Ukraine, forecasting their development, preventing and minimizing possible threats to the country, and making informed and agreed government decisions simultaneously by all SDS bodies. It is justified that a decision support system such as a Situational Centre (SC) should be created for each state body of the SDS in order to resolve issues related to the protection of Ukraine's national interests. The main approaches to enhancing the intellectual capabilities of such SCs are formulated: integration of formalized and non-formalized knowledge; activation of a person's intellectual, intuitive and creative activity and individual abilities by maximizing the visual presentation of information; and gaining new knowledge and using brainstorming in the process of collective discussion and decision-making. The necessity of integrating the SCs of government authorities in the SDS into a single network is substantiated, and approaches to ensuring their interaction in a network mode are proposed. These include the creation of a set of secure information networks; the use of global information networks as a technical basis for fast real-time information exchange channels; the creation of a unified information environment using common system-wide tools; the provision of a single conceptual service by creating a data metabase; and the provision of opportunities for joint, simultaneous decision-making by all SDS bodies in an environment where all SCs, united in one network, work in parallel. The necessity of creating a state targeted program for the design, development and implementation of situational centres of government authorities in the security and defence sector is substantiated. Refs.: 8 titles.
UDC 004.7
Klymenko V.P., Oksanych I.M., Lopushansky A.V. Data metamodel as a basis for building a unified information environment of a system of situational centers of the Security and Defense Sector. Mathematical machines and systems. 2018. N 3. P. 40 – 47.
The article is devoted to the problem of creating a system of situational centres of the Security and Defence Sector (SDS) of Ukraine. The problem is topical, since in the event of threats of a national scale the SDS should work in a coherent manner as a single mechanism. The article lists the government authorities in the SDS of Ukraine in accordance with the Law of Ukraine «On National Security of Ukraine». The role of situational centres (SCs) in the systems that support management decision-making by SDS structures when responding to crisis situations is noted, as is the need to create a system of SCs and a unified information environment (UIE) for their work. The generalized structure of the SCs system, headed by the Main SC, is presented. It is noted that the creation of a data metamodel to be included in the UIE should be the first priority in the creation of the UIE. As the data metamodel, the metamodel of the NATO Multilateral Interoperability Programme (MIP) is proposed; it complies with the MIP JC3IEDM standard (Joint Consultation, Command and Control Information Exchange Data Model). A parallel is drawn between NATO's work with the participating countries and the system of SCs of government authorities in the SDS of Ukraine headed by the Main Centre. A description and a conceptual scheme of the data metamodel that can be used in building the UIE of the system of SDS SCs are provided. The conceptual schema is shown in the IDEF1X notation, which is used to describe the structure of relational databases. It is proposed to introduce the concept of standardization levels for the basic structural elements of the metamodel and the rules for naming them according to MIP; some examples are shown. Conclusions are drawn about the value and necessity of the data metamodel for creating the UIE of the SDS SCs. Tabl.: 2. Figs.: 2. Refs.: 5 titles.
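For illustration only (a simplified sketch, not an extract from the standard): JC3IEDM-style metamodels separate concrete objects from shared type catalogues, which is what lets independently developed centres exchange data against one schema. The class and field names below are assumptions modelled loosely on the standard's OBJECT-ITEM/OBJECT-TYPE pair.

```python
# Minimal illustration of a JC3IEDM-style entity pair: concrete objects
# (cf. OBJECT-ITEM) are classified by entries of a shared catalogue
# (cf. OBJECT-TYPE), so every participant reports against one schema.
from dataclasses import dataclass

@dataclass
class ObjectType:            # shared catalogue entry
    type_id: int
    category_code: str       # e.g. "ORGANISATION", "MATERIEL"
    name: str

@dataclass
class ObjectItem:            # concrete object observed in the field
    item_id: int
    name: str
    type_ref: ObjectType     # classification link to the catalogue

vehicle_type = ObjectType(1, "MATERIEL", "transport vehicle")
item = ObjectItem(100, "vehicle #100", vehicle_type)
print(item.type_ref.name)   # transport vehicle
```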
UDC 004.942
Golub S.V., Ushenko Yu.O., Vanchuliak O.Ya., Talakh M.V. Development of classifier models based on the results of multidimensional polarization microscopy in the technology of forensic medical intelligent monitoring of heart diseases. Mathematical machines and systems. 2018. N 3. P. 48 – 59.
The significant prevalence of cases of acute coronary insufficiency in forensic practice, together with their suddenness, gives rise to suspicions by forensic investigating authorities about the violent nature of a person's death. This requires objective and precise methods for diagnosing acute ischemia of the myocardium. This work presents the results of applying a methodology for creating information technologies of multilevel intelligent monitoring in order to supply data to the decision-making processes of a forensic medical expert. The methods of multidimensional polarization microscopy and statistical data processing are combined with methods of inductive modelling to construct a methodology for creating intelligent systems of multilevel forensic medical monitoring. Using the example of posthumous diagnosis of coronary heart disease and acute coronary insufficiency, the processes of coordinating various ways of forming an array of informative features and the typical units for synthesizing classifier models at each stage of monitoring are researched. To obtain informative features, a model of the biological tissue of the myocardium was developed and the main diagnostic parameters were determined (statistical moments of the 1st–4th orders of the coordinate distributions of polarization azimuth and ellipticity values and of their autocorrelation functions, as well as the wavelet coefficients of the corresponding distributions), which change dynamically due to necrotic changes in the tissue. The numerical characteristics of the informative features form the array of input data for the model synthesizer of the monitoring information system. Classification of these data was provided by constructing a decision rule for the synthesizer based on a multirow GMDH (Group Method of Data Handling) algorithm. The efficiency of the described methodology has been experimentally proved. Tabl.: 2. Figs.: 2. Refs.: 18 titles.
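For illustration, a sketch of the kind of feature extraction the abstract describes (the 1st–4th order statistical moments of a distribution of polarization-map values used as inputs of the model synthesizer); the function is a generic implementation, not the authors' code.

```python
# Compute the first four statistical moments of a 1-D sample:
# mean, variance, skewness and kurtosis.
import numpy as np

def first_four_moments(values: np.ndarray) -> np.ndarray:
    m1 = values.mean()
    centered = values - m1
    m2 = (centered ** 2).mean()                # variance
    m3 = (centered ** 3).mean() / m2 ** 1.5    # skewness
    m4 = (centered ** 4).mean() / m2 ** 2      # kurtosis
    return np.array([m1, m2, m3, m4])

# e.g. a simulated map of polarization azimuths:
azimuths = np.random.default_rng(0).uniform(0, np.pi, size=(256, 256))
features = first_four_moments(azimuths.ravel())
```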
UDC 004.02
Sharypanov A.V., Kalmykov V.G. Multiresolution in visual perception and image processing. Mathematical machines and systems. 2018. N 3. P. 60 – 75.
There exist image processing methods, known as «coarse-to-fine», that use different image resolutions. The idea of these methods is that the initial data set is considered at different resolutions so that inappropriate or irrelevant parts of the image can be excluded at earlier stages of processing. The result obtained at a coarse resolution is used as the initial approximation for processing at the next resolution. In this case the computationally intensive part of the algorithm is applied to a reduced amount of data at the final stage of processing. One variant of this approach is to use a coarse-to-fine modification of a known algorithm suitable for solving a particular class of problems. At the same time, many image recognition tasks that have NP computational complexity, or cannot be addressed at all with traditional methods, are solved by the visual system almost instantly, and video stream processing tasks are solved in real time. In particular, image segmentation happens at a subconscious level, nearly instantaneously and without tangible effort, even with a large amount of noise in the field of view. It is therefore natural to pay attention to the processes that accompany visual perception, namely the sequential changes of resolution in the visual system during the visual act, from the lowest resolution to the highest possible. In this article we review the state of the art in studying and using variable resolution in the domains of scientific activity mentioned above. A better understanding of the nature of visual perception should be the next step towards developing new effective methods for processing visual information in information systems. Figs.: 11. Refs.: 24 titles.
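A minimal coarse-to-fine sketch under the scheme described above: an image pyramid is processed from the lowest resolution upward, and the coarse result seeds a small local search at the next, finer level. The "detector" (brightest pixel) is a placeholder for any expensive per-level algorithm; this is an illustrative sketch, not the article's method.

```python
import numpy as np

def downsample(img: np.ndarray) -> np.ndarray:
    """Halve resolution by 2x2 block averaging."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def coarse_to_fine(image: np.ndarray, levels: int = 3):
    pyramid = [image]
    for _ in range(levels - 1):
        pyramid.append(downsample(pyramid[-1]))
    # full search only at the coarsest level
    y, x = np.unravel_index(np.argmax(pyramid[-1]), pyramid[-1].shape)
    for img in reversed(pyramid[:-1]):      # refine at finer levels
        y, x = y * 2, x * 2                 # project the seed up one level
        y0, x0 = max(y - 2, 0), max(x - 2, 0)
        win = img[y0:y + 3, x0:x + 3]       # cheap local search window
        dy, dx = np.unravel_index(np.argmax(win), win.shape)
        y, x = y0 + dy, x0 + dx
    return y, x

img = np.random.default_rng(0).random((64, 64))
print(coarse_to_fine(img))  # approximate location of the global maximum
```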
UDC 004.7
Vasylenko V.M. Method of parametric adaptation of turbo codes under uncertainty conditions. Mathematical machines and systems. 2018. N 3. P. 76 – 88.
Modern wireless data transmission systems, such as second-generation (2G) and third-generation (3G) mobile communication systems, fourth-generation 4G LTE-Advanced, and the WiFi and WiMax mobile radio access systems, require technologies that allow real-time, high-quality transmission of coded data from the source to the receiver. In the process of data transfer over wireless systems, problems arise that are associated with the influence of industrial, natural and deliberate interference. Under dynamically changing interference, the probability of a bit error in data transmission increases, and it is impossible to provide a given level of reliability of information simply by using known coding methods. Uncertainty about the nature of the interference leads to the problem of ensuring constant reliability of information within specified limits for a certain period of time. Therefore, to increase the reliability of data transmission in wireless networks, turbo codes are increasingly used. Turbo codes are widely used in third-generation mobile communication systems (the UMTS and CDMA 2000 standards), satellite communications, digital television and wireless broadband access systems. The article describes a method of parametric adaptation under uncertainty conditions. The method is based on the adaptive choice of the parameters of the S-random interleaver depending on the values of the normalized number of sign changes of the a posteriori–a priori log-likelihood ratios (LLR) of the transmitted data bits at the turbo code decoder, and on the retransmission of data bits that have been identified as erroneous, using additional information on the LLRs of these bits when the turbo code decoder calculates the resulting likelihood functions. Application of the method will increase the reliability of information transmission under an increased noise level in the data transmission channel and increase the data transfer rate, since only those data bits that have been identified as erroneous are retransmitted. Figs.: 5. Refs.: 9 titles.
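For illustration, a hedged sketch of the decoder statistic described above: the normalized number of sign changes between the a priori and a posteriori LLRs of the data bits. Variable names and the adaptation threshold are assumptions, not the paper's values.

```python
import numpy as np

def normalized_sign_changes(llr_apriori: np.ndarray,
                            llr_aposteriori: np.ndarray) -> float:
    """Fraction of bits whose LLR sign flips between the a priori and
    a posteriori values; a large fraction signals an unreliable channel."""
    flips = np.sign(llr_apriori) != np.sign(llr_aposteriori)
    return float(flips.mean())

# adaptation sketch: pick a larger S-random interleaver spread when the
# statistic exceeds a (hypothetical) threshold
stat = normalized_sign_changes(np.array([1.2, -0.4, 0.3]),
                               np.array([0.9, 0.5, -0.1]))
spread = 32 if stat > 0.3 else 16
```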
UDC 004.2; 004.7
Ryndych Ye.V., Koniashyn V.V., Zaitsev S.V., Usov Ya.Yu. Features of creating a network intrusion detection system in computer systems. Mathematical machines and systems. 2018. N 3. P. 89 – 96.
Existing intrusion detection systems (IDS), both host-based (HIDS) and network-based (NIDS), have been investigated. Special attention is paid to open source systems, as they provide an opportunity to study not only the working principles but also the software architecture and the principles of their implementation. Systems such as Snort, Suricata, Bro IDS and Security Onion have been studied. Snort is a leader among open source systems and has been widely recognized as an effective network intrusion detection solution for a wide range of scenarios and use cases in local and corporate networks. The general features of network intrusion detection systems, both positive and negative, are determined. A generalized three-tier «client-server» architecture of a network intrusion detection system using a web application is proposed. Compared with a two-tier «client-server» or «file-server» architecture, a three-tier architecture generally provides greater scalability and better configurability. The layers of the proposed architecture are the web application layer (or web application cluster), the application server layer, which can be scaled, and the database layer. The peculiarities of building network systems for detecting intrusions on the server side are determined. The bottleneck of the entire system is the set of signatures and the inconvenience of interaction with the user. One of the most important problems of modern systems is that this set cannot track threats that have had no precedents in the past. Therefore, the main direction of research is intrusion detection based on the search for abnormal activity. The next problem that was discovered and investigated is the development of friendly interfaces. This problem is solved by introducing a web application that can efficiently interact with a server hosting the intrusion detection systems and with a database that stores all the necessary information. Promising areas of research are the development of methods for detecting intruders based on the search for abnormal activity and their implementation in real computer networks. Tabl.: 1. Figs.: 2. Refs.: 6 titles.
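As an illustration of the anomaly-search direction named above (a minimal sketch under assumed data, not the studied systems' code): unlike a signature, a statistical baseline can flag activity that has no precedent.

```python
# Flag activity that deviates from a per-host baseline by more than
# k standard deviations (a simple z-score anomaly test).
import statistics

def is_anomalous(history: list, current: float, k: float = 3.0) -> bool:
    mean = statistics.fmean(history)
    sigma = statistics.pstdev(history) or 1.0   # avoid division by zero
    return abs(current - mean) / sigma > k

conn_per_min = [12, 9, 15, 11, 13, 10]          # observed baseline
print(is_anomalous(conn_per_min, 240))          # True: likely a scan/flood
```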
UDC 681.513
Brovarets O.O. Dependence of the quality of performance of technological operations on the limiting capabilities of rapid information and technical systems of local operational monitoring of the state of farmland. Mathematical machines and systems. 2018. N 3. P. 97 – 108.
The existing methods of controlling the agrobiological state of the soil environment do not take into account the variability of its parameters over the area of farmland. The most effective way to quickly monitor the agrobiological state of agricultural land is to measure the electrical conductivity of the soil environment. The electroconductive properties of the soil medium are a complex indicator of its agrobiological state that reflects hardness, humidity, nutrient content in the soil, etc. A high content of moisture, salts and nutrients in the soil increases the electrical conductivity of the soil medium within a single field, which is recorded by the information and technical system of local operational monitoring of the agrobiological state of farmland. Such information makes it possible to identify zones of soil variability and to manage the agrobiological state of farmland effectively. The information and technical system of local operational monitoring of the agrobiological state of farmland is used before a technological operation, simultaneously with a technological operation (sowing, mineral fertilization, etc.), during the growing season and after harvesting. A mathematical model is proposed for determining the dependence of the quality of performance of technological operations on the limiting capabilities of rapid information and technical systems of local operational monitoring of the farmland condition. This model makes it possible to control the quality of execution of technological operations effectively and opens new prospects for organic farming using such «smart» agricultural machines. Fig.: 1. Refs.: 15 titles.
SIMULATION AND MANAGEMENT
UDC 519.657:004.021
Vakal L.P., Vakal E.S. Finding optimal parameters of empirical formulas of several variables using evolutionary algorithms. Mathematical machines and systems. 2018. N 3. P. 109 – 116.
The problem of constructing empirical formulas of several variables for approximating experimental data is considered. It is proposed to adapt a differential evolution algorithm for finding the optimal parameters of the empirical formulas; it is one of the best evolutionary algorithms, stably finding the global optimum of a function in minimum time. In the algorithm, the evolutionary process begins with the generation of a population of random vectors whose coordinates are possible values of the required parameters. The vectors are then repeatedly modified using crossover, mutation and selection operators in order to decrease the error of approximating the experimental data by the empirical formula. The algorithm terminates when the maximum number of population generations is exhausted or the evolutionary process stagnates. The algorithm permits finding optimal parameter values for empirical formulas that are linear or nonlinear with respect to the parameters, using different norms: quadratic, uniform, etc. The best values of the settings of the differential evolution algorithm, such as the population size, the mutation factor and the crossover probability, are determined. Two examples are considered: constructing a linear empirical formula of four variables for calculating the iron oxide content from the indications of an X-ray emission sensor, and constructing a nonlinear empirical formula of two variables for approximating experimental data on the density of a dilute solution as a function of temperature and salt concentration. The results of approximating experimental data from different fields of science and technology permit the conclusion that the proposed algorithm is effective. It is also simple to program and use, since it contains few settings requiring customization. Tabl.: 2. Refs.: 17 titles.
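A compact differential evolution sketch following the scheme described above (mutation, crossover, selection over a population of parameter vectors), applied to fitting a linear empirical formula. The objective, data and settings are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_de(residual, dim, bounds, pop_size=30, f=0.8, cr=0.9, gens=200):
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    cost = np.array([residual(p) for p in pop])
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            mutant = a + f * (b - c)                 # mutation
            mask = rng.random(dim) < cr              # crossover
            mask[rng.integers(dim)] = True           # keep >= 1 mutant gene
            trial = np.where(mask, mutant, pop[i])
            tc = residual(trial)
            if tc < cost[i]:                         # selection
                pop[i], cost[i] = trial, tc
    return pop[cost.argmin()], cost.min()

# example: least-squares fit of y = p0 + p1*x1 + p2*x2 to synthetic data
x = rng.uniform(0, 1, size=(50, 2))
y = 2.0 + 3.0 * x[:, 0] - 1.5 * x[:, 1]
obj = lambda p: np.sum((y - (p[0] + x @ p[1:])) ** 2)
params, err = fit_de(obj, dim=3, bounds=(-10, 10))
```

A uniform norm can be used instead of the quadratic one by replacing the sum of squares with the maximum absolute residual.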
UDC 519.8
Kolechkina L.M., Nagirna A.M. Solution of a combinatorial optimization minimization problem. Mathematical machines and systems. 2018. N 3. P. 117 – 124.
The paper presents a mathematical model of an optimization minimization problem on a combinatorial set of permutations, which can serve as a model of many applied problems. The mathematical model on a combinatorial set of permutations, with the method of generating them, is considered on a graph whose vertices correspond to the points of the set of permutations. The described algorithm consists of five consecutive steps and finds a single optimal solution of the optimization minimization problem, taking into account the combinatorial properties of the set of permutations. In the first step, the additional constraints are normalized according to the order of growth of the coefficients of the objective function: a normalization matrix is constructed that transforms the resulting solutions into the form required by the constraints or the objective function. In the second step, a reference solution that satisfies all constraints of the problem is found; the search is carried out among the boundary points of the constraint graph. The third step is to build the increments of the objective function in order of growth and to choose the minimum. Through transpositions of the corresponding elements of the reference solution, all other possible optimal solutions are determined, and only the increments of the objective function are calculated; the minimum increment makes it possible to find the best solution among them. The fourth step of the algorithm is to verify that the constraints hold and to determine the optimal solution. In the fifth step, the minimum value of the objective function is calculated. The article presents a numerical example that demonstrates the work of the algorithm. Owing to the use of transpositions of elements in a permutation, the number of steps for solving the minimization problem is reduced, as shown in the numerical example: the solution was found in six steps, and five transpositions were considered while improving the reference solution, whereas an exhaustive search would require evaluating all 24 permutations subject to the constraints. Therefore, the proposed algorithm provides the shortest path to an optimal solution at which the objective function attains its minimum value on the set of permutations. Figs.: 3. Refs.: 19 titles.
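An illustrative sketch of the neighbourhood step described above: from a reference permutation, all transpositions of two elements are examined and the one with the smallest objective increment is kept. The linear objective and coefficients are stand-in examples, not the paper's data.

```python
from itertools import combinations

c = [4, 1, 3, 2]                      # objective coefficients (example)
f = lambda p: sum(ci * pi for ci, pi in zip(c, p))

def best_transposition(perm):
    """Return the transposition neighbour of `perm` with the smallest
    objective increment relative to the reference solution."""
    base = f(perm)
    best, best_inc = perm, 0
    for i, j in combinations(range(len(perm)), 2):
        q = list(perm)
        q[i], q[j] = q[j], q[i]
        inc = f(q) - base             # only the increment is compared
        if inc < best_inc:
            best, best_inc = q, inc
    return best, best_inc

print(best_transposition([1, 2, 3, 4]))
```

For a permutation of four elements this examines at most 6 transpositions per step instead of all 24 permutations.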
UDC 536.24
Berdnyk M.G. Mathematical model and method of solving the generalized mixed heat exchange problem of an empty isotropic body of rotation. Mathematical machines and systems. 2018. N 3. P. 125 – 134.
In the article, for the first time, a generalized spatial mathematical model is constructed for calculating temperature fields in an empty isotropic body of rotation with known equations of its generating lines in a cylindrical coordinate system. The body rotates with a constant angular velocity around the OZ axis, and the finite rate of heat propagation is taken into account; the model takes the form of a mixed boundary value problem for the hyperbolic heat conduction equation with initial and boundary conditions, provided that the thermophysical properties of the body are constant and internal heat sources are absent. At the initial moment the temperature of the body is constant, and on the outer surface of the body the values of temperature and heat flux are known and are continuous functions of the coordinates. The hyperbolic heat equation is derived from a generalized energy transfer equation for a moving element of a continuous medium, taking into account the finiteness of the heat propagation velocity. To solve the boundary value problem, the desired temperature field is represented as a complex Fourier series. The solutions of the boundary value problems obtained for the Fourier coefficients were found using the integral Laplace transform and a newly constructed integral transform for a two-dimensional finite space. The eigenvalues and eigenfunctions of the integral transformation kernel are found using the finite element and Galerkin methods, with the region divided into simplex elements. As a result, the temperature field in an empty isotropic body of rotation is found in the form of convergent Fourier series. The obtained solution of the boundary value problem is twice continuously differentiable with respect to the spatial coordinates and once with respect to time. The solution of the generalized boundary value heat transfer problem for an isotropic body that rotates, taking into account the finiteness of the heat propagation velocity, can be used in modelling the temperature fields that occur in a number of technical systems (satellites, rollers, rotors of power generating units, disk brakes, etc.). Figs.: 3. Refs.: 4 titles.
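For orientation, a commonly cited form of the heat conduction equation with a finite propagation speed (the hyperbolic, Maxwell–Cattaneo–Vernotte type equation; the paper's generalized equation for a rotating body is more involved) is $$\tau_r \frac{\partial^2 T}{\partial t^2} + \frac{\partial T}{\partial t} = a \nabla^2 T,$$ where $\tau_r$ is the relaxation time and $a$ the thermal diffusivity, so that heat propagates with the finite speed $w = \sqrt{a/\tau_r}$; the classical parabolic equation is recovered as $\tau_r \to 0$.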
UDC 004.942:519.87(045)
Dodonov Ye.O. Network electronic table model of the transport problem with intermediate points. Mathematical machines and systems. 2018. N 3. P. 135 – 141.
The matrix formulation of the model of the transport problem with intermediate points (TPIP), owing to its versatility and flexibility, is classical; methods and algorithms of network optimization that take into account the specifics of real networks were developed on its basis, although critical requirements to memory and to the speed of the computing devices remain a problem. Matrix models of network optimization tasks have a fundamental drawback associated with dimension: usually a real network task contains nodes that are connected not with all other nodes but only with neighbouring ones, as can be clearly seen on any geographic communication map, yet the traditional matrix version of the TPIP network model requires accounting for all n² links in the adjacency matrix or nm links in the incidence matrix, where n is the number of nodes and m is the number of arcs; if arcs do not actually exist, their entries are represented by fictitious numbers. Therefore, a serious problem remains the transition from the matrix to the network, which can be represented in compact form, in particular, by lists of the nodes and arcs of the real network; this requires special functions to implement certain elements of the algorithm. An important such element is the implementation of the flow balance principle at a node, first introduced in flow optimization problems, according to which the algebraic sum of the input and output flows does not exceed the node potential (supply, demand). Electronic tables (ET) and their advanced versions, with a developed set of functions, procedures and add-on programs, have produced effective information technologies of ET modelling and ET optimization, which have replenished the arsenal of modern business analytics. It is these tools that made it possible to examine network structures, in particular to determine their configuration, which is forced to change in response to external influences. The proposed network version of the TPIP model can serve as a working tool for investigating real problems of this type; the result obtained in a spreadsheet environment makes it possible to adapt the model to the real state of the research object by modifying the ET model and to determine the actions needed to form a long-term plan for developing the network studied on the model. Figs.: 7. Refs.: 8 titles.
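A hedged sketch of the node/arc-list formulation described above: the transport problem with an intermediate point is stated as a minimum-cost flow with flow balance at every node and solved here with scipy for illustration (the article itself builds the model in a spreadsheet environment; the network and costs below are invented).

```python
import numpy as np
from scipy.optimize import linprog

nodes = ["S1", "T", "D1"]                  # supplier, transit point, consumer
arcs = [("S1", "T"), ("T", "D1"), ("S1", "D1")]   # arc list, no full matrix
cost = np.array([1.0, 1.0, 3.0])           # per-unit shipping costs
balance = {"S1": -10, "T": 0, "D1": 10}    # supply < 0, demand > 0, transit 0

# flow-balance equations: (inflow - outflow) at each node = its balance
A_eq = np.zeros((len(nodes), len(arcs)))
for k, (u, v) in enumerate(arcs):
    A_eq[nodes.index(u), k] = -1.0         # outflow from u
    A_eq[nodes.index(v), k] = +1.0         # inflow into v
b_eq = np.array([balance[n] for n in nodes])

res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print(res.x)   # [10. 10. 0.]: routing via the transit point is cheaper
```

The arc list grows with m rather than n², which is the dimensional advantage the abstract points out.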
UDC 519.237.5: 621.9
Lapach S.M. Risks of application of the correlation coefficient under a certain specification of the regression model. Mathematical machines and systems. 2018. N 3. P. 142 – 148.
The reliability of the sample correlation coefficient when used to determine a certain specification of the regression model is considered. Under a certain specification, a list of model members is determined that provides the desired set of chosen characteristics of the model. Most often only a training sample is used for this, but variants with training and examination samples are possible. Previous works showed that the Pearson correlation coefficient is the most reliable tool for determining the specification; however, the limits of its reliable use were not considered. The need for such an investigation is caused by the use of multiple correlations in multiple regression analysis, which can formally be insignificant. This raises two questions. First, what absolute value of the correlation coefficient can be considered reliable for deciding to include a regressor in the model? Second, what should the size of the examination sample be so that the correlation coefficients of the training and examination samples can be attributed to one general population? It is shown that selection at a coefficient of less than 0.2 is doubtful and unreasonable. In this situation, first, additional checks of the validity of including regressors in the model are needed, and, second, this fact must be considered in the analysis and use of the model. The first is usually carried out, since the procedures for forming the structure are, as a rule, multistage. The second is effectively ensured by the fact that, with an exponential form of the distribution of the strength of the regressors' influence, only an insignificant part of the influence explained by the model falls on its «tail». In the case where the experimental matrix is random, being the result of a passive experiment, or when an examination sample is used, it is desirable to base the decision on the median of the correlation coefficient calculated by the «folding knife» (jackknife) method or by forming a set of random samples from the original one. The size of the examination sample, when used for the specification, can be equal to 0.25 of the training sample only if the latter contains at least 30 experiments; otherwise, its size should be set to at least 0.5. Tabl.: 6. Figs.: 2. Refs.: 10 titles.
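A small sketch of the recommendation above: the jackknife («folding knife») estimate of the correlation coefficient, taking the median of the leave-one-out correlations as the decision value for including a regressor. The data and the 0.2 threshold application are illustrative.

```python
import numpy as np

def jackknife_median_corr(x: np.ndarray, y: np.ndarray) -> float:
    """Median of the leave-one-out Pearson correlations."""
    n = len(x)
    corrs = []
    for i in range(n):
        keep = np.arange(n) != i          # leave one observation out
        corrs.append(np.corrcoef(x[keep], y[keep])[0, 1])
    return float(np.median(corrs))

rng = np.random.default_rng(2)
x = rng.normal(size=40)
y = 0.3 * x + rng.normal(size=40)
r = jackknife_median_corr(x, y)
include = abs(r) >= 0.2                   # the threshold discussed above
```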
QUALITY, RELIABILITY AND CERTIFICATION OF COMPUTER TECHNIQUE AND SOFTWARE
UDC 621.3.019.3
Fedukhin O.V., Strelnikov V.P., Cespedes Garcia N.V., Mukha Ar.A. Approximate estimate of reliability of the recovered products at the stage of preliminary design. Mathematical machines and systems. 2018. N 3. P. 149 – 155.
The article is devoted to the development of approximate estimates of the mean time between failures and the average service life of recovered products within the hypothesis of the diffusion distribution law (DN-distribution). At the stage of preliminary design, a need arises for an approximate estimate of the quantitative reliability indicators of a new product. With the traditional and obsolete mathematical apparatus based on the exponential distribution, the failure-free operation indices of non-recoverable and recoverable products, in the form of the mean operating time to failure and the mean time between failures, are identical. In addition, within the hypothesis of the exponential distribution law there is no dependence of the mean time to failure on the operating time, which is not true, and within this model there is currently no well-founded estimate of the durability of a product to be restored. More reliable estimates can be obtained within the probabilistic-physical approach to reliability theory, based on the use of more adequate two-parameter distributions (DM- and DN-distributions). However, within these models, accurate calculations of reliability indicators, based on the analytical dependence for the failure flow parameter, require initial data on the reliability of the elements and component parts of the product, which, as a rule, are also not available at the stage of preliminary design. Therefore, in the early design stages it is advisable to use approximate (phenomenological) expressions for practical calculations that do not require detailed information on the nomenclature of the elements and component parts of the product and their reliability characteristics, while maintaining an accuracy acceptable at this stage of design. Approximate expressions for estimating the mean time between failures and the average service life of recovered products are obtained in this paper; they do not require complete information on the nomenclature and reliability characteristics of the elements and constituent parts that make up the product. Fig.: 1. Refs.: 3 titles.
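For reference, one form of the DN-distribution cited in the probabilistic-physical reliability literature (given here as background, not as the paper's derivation) is $$F(t) = \Phi\!\left(\frac{t - \mu}{\nu\sqrt{\mu t}}\right) + e^{2/\nu^{2}}\,\Phi\!\left(-\frac{t + \mu}{\nu\sqrt{\mu t}}\right), \qquad t > 0,$$ where $\mu$ is the scale parameter (mean operating time to failure), $\nu$ is the coefficient of variation, and $\Phi$ is the standard normal distribution function; its two parameters are what allow the failure rate to depend on the operating time, unlike the exponential model criticized above.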