Neural Networks and the Philosophy of Dialectical Positivism

It is shown that the theory of neural networks makes it possible to complete the concept of global evolutionism. In the current philosophical literature this concept is regarded, inter alia, as an effective platform for interdisciplinary cooperation, the need for which is becoming ever more acute, as reflected in the anniversary report of the Club of Rome in the thesis of a "New Enlightenment". The theory of neural networks allows a consistent interpretation of the category of the "complex", according to which a system of arbitrary nature is treated as "complex" if a complementary analog of a neural network can be indicated for it. With this interpretation, the evolution of systems of arbitrary nature can indeed be described in a uniform way. In particular, the philosophical law of the transition from quantity to quality can be reduced to a description in terms of information theory (through the description of the evolution of the neural network complementary to a complex system). The main result of the work is a new interpretation of the categorical apparatus of dialectical philosophy on the basis of the theory of neural networks.


Introduction
At present, the issue of overcoming interdisciplinary barriers, which significantly reduce the effectiveness of scientific research worldwide, is becoming increasingly acute. Moreover, trends associated with a significant increase in the number of scientific disciplines, growth in the volume of scientific information, etc., lead to quite definite crisis phenomena, which prompted the authors of the jubilee report of the Club of Rome [1] to raise the question of a "New Enlightenment". In [1] it is recognized that the transition "from the consideration of reality as a whole to its division into many small fragments", once placed at the foundation of the philosophy of science of the modern era, no longer meets the current needs of civilization. The formation of a new philosophy of science as the foundation of the knowledge economy has become an urgent necessity.
The Club of Rome, once it became integrated into the global agenda of environmental discourse, has retained its position among the elite of the world expert community. Its theses have reflected, and continue to reflect, a very specific social demand, so there is reason to speak of a recognized need for a radical change in the role of philosophy in society. The task of a new synthesis of scientific knowledge cannot be solved without adequate philosophical understanding; accordingly, a renaissance of philosophical knowledge is inevitable.
At the same time, the problem under consideration cannot be solved by means of philosophy alone. The corresponding base is created within the natural sciences.
It was on this basis that the concept of global evolutionism was constructed [2,3], based on the assumption that the laws of the evolution of systems of different nature are common.
This paper shows that the theory of neural networks allows a new interpretation of the concept of global evolutionism. This is, de facto, one of the prerequisites for the further development of the philosophy of dialectical positivism, one of whose basic concepts is that of a metalanguage (including the metalanguage of science).

Neural networks and the concept of global evolutionism
Analogies between neural networks and systems realized in nature and society are, as shown in our earlier work, sufficiently deep. In particular, they allow us to justify the existence of the communication shell of a complex system as a new quality that appears in a complex system of any nature and that controls its behavior, including its evolution. In section 2.1 we consider a concrete example of the appearance of such a quality, whose nature does not reduce to the properties of the individual elements constituting the system.

Neural networks in nature and society
In [4-6] it was shown that analogues of neural networks occur in nature and society much more often than might appear at first glance.
In particular, an example of voting in a certain council was considered in [5,6]. Each member of the council can be put in correspondence with an analog of a neuron. The analogy is legitimate, since each member of the council actually converts an input signal (the information received, for example, a dissertation report) into a binary variable (if the "abstain" option is excluded): "0" = "For", "1" = "Against". In practice, however, each council member takes into account, in one way or another, the opinions of colleagues. For example, a situation is common in which a vote "Against" is cast because the dissertant is a pupil of a competitor or an opponent. (It is easy to imagine the reverse situation.) Schematically, the influence of council members on each other can be represented through a system of feedbacks, Fig. 1.

Figure 1. Scheme of interactions between members of the voting council (input: external signal; output: voting result) [5]

The circuit in Fig. 1 is topologically equivalent to a Hopfield neuroprocessor and is described by a matrix of weight coefficients (Fig. 2). This example is important because it clearly shows that the decision is actually made not by the individual members of the council, nor even by their aggregate, but by the neural network they form (provided the number of members is large enough and the connections between them are distributed). In other words, this example shows that there are information-processing processes occurring at a higher level than that associated with an individual element of the system. A new quality (a communication shell) emerges in the system, irreducible to the behavior of its individual constituent elements.
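The dynamics of such a council can be sketched as an asynchronous Hopfield-style network. The following is a minimal illustration; the network size and weight matrix are randomly generated assumptions rather than data from [5,6], and we use the spin convention +1/-1 in place of the 0/1 coding above:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 12  # number of council members (neurons); illustrative value
# Hypothetical symmetric weight matrix: W[i, j] encodes how strongly
# member j's opinion sways member i (positive = ally, negative = rival).
W = rng.normal(0.0, 1.0, (n, n))
W = (W + W.T) / 2            # symmetry guarantees convergence (Hopfield)
np.fill_diagonal(W, 0.0)     # no self-influence

# Initial leanings: +1 = "For", -1 = "Against" (before mutual influence).
votes = rng.choice([-1, 1], n)

# Asynchronous updates: each member revises their vote given colleagues'
# current votes, until nobody changes their mind (a fixed point of the net).
changed = True
while changed:
    changed = False
    for i in range(n):
        new = 1 if W[i] @ votes >= 0 else -1
        if new != votes[i]:
            votes[i] = new
            changed = True

print("final votes:", votes)
print("decision:", "adopted" if votes.sum() > 0 else "rejected")
```

For a symmetric weight matrix with zero diagonal, asynchronous updates are guaranteed to reach a fixed point, so the loop terminates; the "decision" is then a property of the network as a whole rather than of any single member, which is precisely the systemic effect described above.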

Neural networks and the category of complex
Consideration of objects similar to the voting council and generalization of the results obtained allowed us to work out the following interpretation of the category of the complex [4,7]. A system should be considered complex when a complementary analog of a neural network can be indicated for it. In other words, any complex system has its own communication shell, the processes in which influence its behavior, including its evolution.
Namely, consideration of the communication shells of complex systems makes it possible to propose a mechanism of their evolution alternative to the one going back to C. Darwin's theory of the origin of species. According to this point of view, the evolution of complex systems proceeds in two stages. At the first stage, the structure of connections between the elements of the system is rearranged, which is interpreted as the evolution of its communication shell, which exhibits relatively independent behavior. At this stage, the properties of the individual elements do not change.
At the next stage, the system as a whole acts as an analog of a filter, creating favorable conditions for the appearance of elements whose properties better correspond to the new state of the system. This approach allows one to overcome many contradictions inherent in evolutionary theories based on the Darwinian point of view [4].
From this point of view, the theory of neural networks is important for justifying the concept of global evolutionism. However, for the purposes of this work another aspect is also important: the behavior of a complex system is controlled by a new quality (called the communication shell) that forms the links between the elements and is only indirectly related to the specific properties of the individual elements of the system.
For languages, including the language of science, such a communication shell is provided by the extra-textual structures discussed in the next section.

Languages and extra-textual structures from the point of view of the theory of neural networks
The history of the development of positivism consists of several stages. The latest of these stages is connected in one way or another with postpositivism, whose development, in turn, is largely associated with the ideas of K. Popper.
According to Popper, science in general cannot deal with truth, since research activity reduces to advancing hypotheses, assumptions, and conjectures about the world. Judgments of this kind were interpreted in [8] as a kind of scientific decadence, the very emergence of which in the 20th century serves as a vivid illustration of the growing negative trends in the development of world science, which eventually raised the question of a "New Enlightenment" [1].
Dialectical positivism does not share the views of the postpositivists and continues the tradition of neopositivism, overcoming its inherent contradictions on the basis of the dialectical method.

Modern reading of the ideas of neopositivism
Representatives of logical positivism (logical empiricism), in particular M. Schlick, R. Carnap, and H. Reichenbach, focused their attention (with some simplification) on the linguistic problems of science [8].
There were quite definite grounds for this. In particular, the development of mathematics clearly showed that a language need not be natural: there are also artificial languages, in particular the language of formal logic.
Further, everything that is done in science is, in the final analysis, expressed in certain linguistic forms, more precisely, in the form of sentences of some language. Moreover, logic provides a convincing example of the existence of a language whose rules of operation give reliable results. The question then arises: how should the theory of knowledge as a whole differ from a discipline studying languages (both artificial and natural) as means of constructing judgments? Indeed, any theory is a certain system of judgments logically (or otherwise) related to each other (at least ideally); consequently, there must at a minimum be rules according to which the theory is constructed. But if a theory is a means of cognition, then to an even greater extent the means of cognition are the tools with which it is used, that is, the means of the language, united by well-defined rules of operation. The developed logic of science (in the broad sense of the term) was called upon, in the opinion of the neopositivists, to replace not only traditional philosophical ontology but also traditional epistemology, the theory of knowledge.
We emphasize that the constructions of the neopositivists are currently acquiring an unexpected resonance, in particular in connection with the problems of developing the apparatus of fuzzy logic [9]. (Fuzzy logic attempts to link natural and artificial languages; in particular, it actively uses such a concept as "linguistic variables".) This circumstance, the existence of fuzzy logic as well as of other logics that do not coincide with the classical one, highlights an important component of the neopositivists' reasoning. They, in fact, never claimed that classical logic, formalized by the works of mathematicians, is the only possible language of science. The apparatus of mathematical logic was indeed an ideal in their eyes, according to whose canons the language of science should be built, but they clearly understood that the language of science is not limited to statements formalized by the simplest means.
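As a concrete illustration of a linguistic variable, here is a minimal sketch; the variable, its labels, and the membership functions are our own illustrative assumptions, not taken from [9]:

```python
# A fuzzy "linguistic variable": "temperature" takes the linguistic values
# "cold", "warm", "hot", each modeled as a fuzzy set over degrees Celsius.

def triangular(a, b, c):
    """Return a triangular membership function rising from a, peaking at b,
    falling back to zero at c."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)
    return mu

temperature = {
    "cold": triangular(-10.0, 0.0, 15.0),
    "warm": triangular(10.0, 20.0, 30.0),
    "hot":  triangular(25.0, 35.0, 50.0),
}

# Unlike a binary predicate, a crisp input belongs to each linguistic
# value to a degree between 0 and 1:
x = 18.0
degrees = {label: mu(x) for label, mu in temperature.items()}
print(degrees)  # 18 °C is "warm" to degree 0.8, "cold" and "hot" to degree 0
```

The triangular shape is only the simplest common choice; the essential point is that a crisp value may belong to several linguistic values simultaneously, to differing degrees, which is exactly what classical two-valued logic cannot express directly.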
Rather, neopositivism formulated a program aimed at overcoming speculative philosophy on the basis of developing a language of science broader than classical logic, but built on the same basic ideas. The neopositivists saw one of the most important tasks of philosophy in creating a theory of scientific knowledge of a very specific type.
It is in this sense that dialectical positivism continues the tradition of neopositivism, although its provisions are far from reducible to solving problems connected in one way or another with the theory of knowledge. However, it is the analysis of the problems of the "language of science" that allows us to demonstrate the possibilities of the method of dialectical positivism, as well as to reveal the most important philosophical aspects of the theory of neural networks.

The problem of undecidable concepts from the point of view of dialectical positivism
The question of the basic concepts of logic has never been finally settled, as demonstrated, in particular, by numerous attempts to construct a propositional calculus operating with modal categories, which include, in particular, the works of C. I. Lewis [10] and his followers [11,12].
From the most general standpoint, logic can be considered the result of formalizing one or another method of reasoning, and such formalizations can be constructed according to various schemes. From this point of view, it is quite natural to develop alternatives to classical logic that approach ever more closely the kind of reasoning real human thinking uses.
Studies in the field of modal logic are actively pursued in our day [4,12]. Common to them is the expansion of the classical logic of propositions and the classical logic of predicates, in which the language of logic is supplemented with modal operators of necessity and possibility acting on the sentences of the language.
In simplified form, the state of affairs can be expressed as follows. Classical logic is the method of reasoning that is simplest to formalize, the one that has largely been transferred to computers, if only because its tools of formalization have already been well developed and mastered.
Evidence of this is the scheme for constructing a conceptual apparatus applied in any scientific discipline. This scheme is based on constructing a system of definitions that makes it possible to reduce the system of concepts used to basic concepts.
However, the development of the theory of neural networks [13] led to the appearance of the opposition "sequential computations - parallel computations". Logic, one of whose main tools is the construction of new true statements as consequences of existing ones, is obviously associated with sequential computations. The theory of neural networks, whose formation was originally associated with attempts to reveal the mechanisms of the functioning of human consciousness, unambiguously interprets the human brain as a system realizing parallel computations.
From this point of view, in particular, it is of interest to analyze the existing approaches to the construction of axiomatics in any scientific discipline. The classical scheme of constructing axiomatics implies the existence of basic concepts through which all the others are expressed (more precisely, there is a certain hierarchy of concepts that allows one group of concepts to be consistently reduced to another). It is permissible to put such a scheme in correspondence with the logic on which sequential computations are based.
However, this approach has a very definite drawback: it does not allow us to define (in the classical sense) the basic concepts themselves. An example in this respect is the difficulties associated with the interpretation of the concept of "information" [4]. The definitions of information used in modern textbooks (information is the content of messages, and the like) are not constructive; in fact, they are tautologies [14].
Objective dialectics finds a way out of this situation by defining the basic concepts through opposition. This is how the paired categories of objective dialectics, "content - form", "quantity - quality", etc., are defined. In [4] arguments were presented showing that the concept of information should also be defined through opposition, treating this category as paired with the category of matter.

The conceptual apparatus as a neural network
Appealing to the opposition "sequential computations - parallel computations", one can argue that the approach used by objective dialectics admits a definite generalization. Indeed, opposition can be regarded as a special case of a connection between concepts.
Accordingly, it can be argued that it is permissible to define the basic concepts through a system of connections between them, opposition being regarded as no more than a particular case of the relationships established between concepts. In other words, the concepts forming the basis form a network structure, which makes a comparison with the parallel computations performed by neural networks legitimate. Exaggerating slightly, we can say that a very specific neural network, the human brain, also uses a network conceptual structure to reflect the surrounding world, in which concepts are defined through connections with each other.
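The idea of concepts defined only through their mutual connections can be sketched as a small graph; the particular concepts and links below are an illustrative assumption of ours, chosen to echo the dialectical pairs discussed above, not a formalism from the cited works:

```python
# Basic concepts defined solely through their links to one another, stored
# as an undirected graph. No concept receives an external definition; its
# "meaning" is the pattern of its connections, by analogy with the weights
# of a neural network.

concept_links = {
    "matter":      {"information", "form"},
    "information": {"matter", "content"},
    "content":     {"information", "form"},
    "form":        {"content", "matter"},
    "quantity":    {"quality"},
    "quality":     {"quantity"},
}

def signature(concept):
    """A concept's 'definition' here is simply its neighborhood."""
    return sorted(concept_links[concept])

# An oppositional pair ("quantity"/"quality") appears as the degenerate
# two-node case of such a network, as argued in the text:
print(signature("quantity"))  # ['quality']
print(signature("matter"))    # ['form', 'information']
```

The oppositional definitions of objective dialectics then appear as two-node subnetworks, while richer concepts are "defined" by larger neighborhoods, so the generalization from opposition to an arbitrary system of connections is visible directly in the data structure.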
We emphasize that such an approach, in a certain sense, contradicts the foundations of classical logic: the use of a self-enclosed system of concepts can be interpreted as a "vicious logical circle". This, however, cannot be considered an obstacle, for the following reasons. First, as emphasized above, the question of the foundations of logic is by no means closed. Second, this approach is much closer to everyday thinking than classical formal logic.
Hence we can formulate the following problem. There is a certain language formed by a certain set of concepts and relations between them. Obviously, not all concepts of this language are basic; in any case, this is exactly what corresponds to the general case of a "language". The question is whether there exist formalized tools that ensure the selection of a basic set of concepts forming a network structure that defines them through their interrelationships. Tasks of this kind are partly solved by modern mathematical linguistics. For example, the problem of determining the vocabulary minimum necessary for communicating in a foreign language has been repeatedly discussed, and the question has been raised of developing various variants of an "algebra of concepts" or a "calculus of concepts" [15-17]. However, the methods for solving such problems are far from complete formalization.
At the same time, the question of creating an algebra of concepts has one more dimension, which makes it necessary to consider it philosophically. Namely, any system that has passed a certain critical threshold of complexity, for example a neural network, generates a different quality irreducible to the properties of the aggregate of its constituent elements, as was shown in the previous sections. The most significant example is human consciousness: a new quality that appears in a system of connected nerve cells only because they are able to exchange signals of an electrical nature.
Consequently, there is reason to believe that the basic structure of a sufficiently complex language, i.e. a language providing for the formation of meaningful judgments, must also form some other quality, not reducible to individual concepts or their collections. In this respect it is pertinent to emphasize that, as applied to natural language, the question of extra-textual structures, of meanings that cannot be reduced directly to the text, has been discussed for a long time [18]. As M. K. Mamardashvili noted, speaking of the subject of philosophy, the task is "to feel those living things that stand behind the text and because of which, in fact, it arises. These things usually die in the text, they show through it poorly, but nevertheless they are." It is much less obvious that extra-textual structures are equally inherent in artificial languages, for example programming languages. Simplifying, the extra-textual structure of a program is its internal logic, the author's intention, which made it possible to transform the original idea into a specific algorithmic structure. This circumstance allows us to say that systems of concepts that generate a new quality are constructed in the same way as algebraic structures.
The representation of extra-textual structures as a new quality formed by the system of words of a language (the system of its concepts) corresponds to the notion of the new quality generated by a neural network, as follows. In both cases we are speaking of the emergence of a new quality that is a purely systemic effect. It can be said that the "something else" that appears in complex systems through the exchange of information between elements (say, through the exchange of signals between the neurons of the brain) can have no reflection other than one similar to it in its informational properties. Specifically, we are speaking here of the new quality to which the connections between the concepts of a particular language give rise, whether that language is artificial or natural.
In accordance with the proposed interpretation of the category of the complex (section 2.2), a language can also be considered a "complex system", which must likewise have its own communication shell, formed by what can be called meta-concepts. Such representations make it possible to speak correctly about "extra-textual structures", i.e. about an object previously studied only by the means of the humanities. Meta-concepts are inexpressible in the language itself, but it is their existence that determines its logic. Consequently, the construction of any calculus of concepts that to some extent approximates reasoning in natural language must take into account the existence of meta-concepts.

Conclusions
Thus, the philosophical interpretation of the theory of neural networks makes it possible, at a minimum, to raise the question of a consistent interpretation of the category of "extra-textual structures" understood in the spirit of Yu. M. Lotman and U. Eco. An extra-textual structure in this interpretation is nothing more than a new quality generated by an analogue of a neural network formed by concepts and not reducible to the aggregate of the individual elements of the network (in this case, the individual concepts and/or utterances of a natural or artificial language).
It is in this vein that one should also interpret the importance of analogies with neural networks for the philosophy of dialectical positivism, as a direction focusing on the use of metalanguage forms.