CYBERNETICS & HUMAN KNOWING

A Journal of Second Order Cybernetics & Cyber-Semiotics


Vol. 3 no. 1 1995

Søren Brier:
CYBER-SEMIOTICS: On autopoiesis, code-duality and sign games in bio-semiotics [note 1]

Abstract

This paper discusses how the second order cybernetics of von Foerster, Maturana, Varela and Luhmann can be fruitfully integrated with Peirce's semiotics through the bio-semiotics of Hoffmeyer. The conclusion is that what distinguishes animals from machines is that they are autopoietic, have code-duality and, through their living organization, constitute a biological interpretant. Through this they come to inhabit a new life world: their games of life take place in their own semiotic Umwelt (von Uexküll). It is the biological context and the history of the species and the individual that determine the meaning of signs in the structural couplings that constitute the channels of communication. Inspired by Wittgenstein's theory of language games as the context that determines the semantic content of sentences and expressions, we suggest that animals participate in sign games.

Introduction: The problems of cognitive science.

In the last decade the leading research program for information and communication science has been Cognitive Science. This program has provided us with the most successful attempt thus far to create a general theory of information processing across the material and organisational differences of animals, humans and machines.

The basic theory of classical cognitive science (as opposed to neural networks) is that a logical-algorithmic program manipulating symbolic representations in a syntactically correct way is the "deepest" description of reality. This symbol-manipulating algorithm creates intelligence and the ability to generate and receive meaningful strings of signs. In self-conscious humans this is called language, and the theory is based on Chomsky's idea of a deep generative grammar for all languages. So - to use the computer metaphor - the software is the decisive thing, not the hardware or "wetware". Intelligence can be implemented in any system that "runs the program".

We can summarise the main points in the "information processing paradigm" of cognitive science as follows (see e.g. Gardner 1985, Lindsay & Norman 1977, Winograd & Flores 1987, Searle 1989):

  1. Different information systems (central nervous systems, humans, machines, animals and organizations) process information in the same way.
  2. Conscious logical thinking is generally taken as a model for cognitive processes.
  3. Understanding is viewed as categorical (a question of ascribing a thing to the right logical class). It is the analytical-categorical, logical mode of adaptation that is emphasized.
  4. It is generally believed that cognitive processes can be broken down into parts of a process and ultimately can be seen as a series of clear cut choices.
  5. Perception is viewed primarily as categorical and denotative (concrete description).
  6. Learning is viewed as taking place according to rules and principles, and is viewed primarily as the construction of the structures of knowledge.
  7. A language system is viewed primarily as a formal mechanism for the transfer of information via symbol manipulation between humans, machines or between humans and machines.
  8. The subject is primarily defined as a cognitive subject.
  9. There is a clear tendency to view the cognitive subject as analogous with a computer.
  10. The emphasis on the syntactic-structural aspects of cognition, thought, and communication leads to a de-emphasis of the function of cultural-societal and historical dimensions in the human communicative growth of meaning.
  11. The mechanism behind memory, the growth of meaning, and the handling and understanding of symbols is seen as a so-called 'semantic network' (Lindsay & Norman 1977 and Vickery & Vickery 1987). This follows from the recognition that when one tries to define the meaning of symbols and ideas, this occurs lexically with reference to other symbols and conceptions.

Thus meaning is seen as residing in a network of mutually defined conceptions: a so-called knowledge structure. The cognitive viewpoint is in this way very structural. The idea of this network is a consequence of the above-mentioned approaches and has a very denotative and atomic character. It represents a very formal approach to semantics. In other words, concepts were originally viewed primarily as context-free, objectively and lexically described symbols. As this was obviously wrong, computer simulations have had to develop artificial contexts such as frames, schemes and toy worlds. Further, cognitive science usually does not consider intuitive and emotionally based sources of cognition and therefore has problems with intentionality.
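
To make the idea of such a semantic network concrete, the following Python sketch (my illustration, with invented concept names; it is not taken from any of the cited works) shows a toy knowledge structure in which every concept is defined only by its labelled relations to other concepts:

    # A minimal sketch (invented node names) of a "semantic network":
    # every concept is defined only by its labelled relations to other
    # concepts, i.e. lexically and context-free.

    semantic_net = {
        "canary": [("is_a", "bird"), ("has_property", "yellow")],
        "bird":   [("is_a", "animal"), ("can", "fly")],
        "animal": [("is_a", "living_thing")],
    }

    def describe(concept, net, depth=2):
        """Unfold a concept into the other concepts that 'define' it."""
        if depth == 0 or concept not in net:
            return concept
        parts = [f"{relation} {describe(target, net, depth - 1)}"
                 for relation, target in net[concept]]
        return f"{concept}({', '.join(parts)})"

    print(describe("canary", semantic_net))
    # canary(is_a bird(is_a animal, can fly), has_property yellow)

The point of the sketch is that "meaning" never leaves the network: a concept is only ever unpacked into further symbols, which is exactly the denotative, context-free character criticized above.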

This functionalistic paradigm strives to give a syntactic representation of semantic content. Let me use J. Fodor, who is one of the foremost philosophers of language defending the paradigm of cognitive science, as a short way of documenting the above analysis. Fodor (1987, p. 19) writes:

"But, now, we know from modern logic that certain of the semantic relations among symbols can be, as it were, 'mimicked' by their syntactic relations; that, when seen from a very great distance, is what proof-theory is about. So, within certain famous limits, the semantic relation that holds between two symbols when the proposition expressed by the one is entailed by the proposition expressed by the other can be mimicked by syntactic relations in virtue of which one of the symbols is derivable from the other. We can therefore build machines which have, again within famous limits, the following property:

The operations of the machine consist entirely of transformations of symbols;

in the course of performing these operations, the machine is sensitive solely to syntactic properties of the symbols;

and the operations that the machine performs on the symbols are entirely confined to altering their shapes.

Yet the machine is so devised that it will transform one symbol into another if and only if the propositions expressed by the symbols that are so transformed stand in certain semantic relations - e.g., the relation that the premises bear to the conclusion of a valid argument."

The idea is to make the computer manipulate signals - which to humans have symbolic value - according to a logical syntax describable in algorithms, in such a way that the result becomes meaningful to other language users. But although there is a lot of talk of semantics and symbols in Cognitive Science, the concept of information seems to be like Wiener's (1961) combination of Shannon's statistical information theory and Boltzmann's probabilistic understanding of thermodynamics (Brier 1992). This is again combined with a linguistic theory which claims that the semantic content that the symbols of a sentence represent can be determined through truth tables. The basic reality of a sentence is seen as a logical structure with semantically empty symbols. The semantic content can then be "poured into the symbols" through their capacity to refer to things, and determined through truth tables. Reason and the working of language in communication are seen, roughly, as fitting the model of formal logic. The core of meaning, intelligence and reasoning is seen as logical algorithms.
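
Fodor's claim that syntax can "mimic" semantics - and the target of the critique that follows - can be made concrete in a few lines. The sketch below (my illustration, not Fodor's) checks the entailment from "p and q" to "p" in two ways: semantically, by enumerating a truth table over meaning-empty symbols, and syntactically, by a rule that is sensitive only to the shape of the symbol string and never consults truth values at all:

    from itertools import product

    # Semantic check: enumerate a truth table over the atomic symbols.
    def entails_semantically(premise, conclusion, atoms=("p", "q")):
        for values in product([True, False], repeat=len(atoms)):
            env = dict(zip(atoms, values))
            if premise(env) and not conclusion(env):
                return False
        return True

    p_and_q = lambda env: env["p"] and env["q"]
    p_alone = lambda env: env["p"]

    # Syntactic check: a rule sensitive only to the "shape" of the string.
    def derive_syntactically(premise_string):
        # conjunction elimination: from "(p & q)" derive "p"
        if premise_string.startswith("(") and "&" in premise_string:
            return premise_string[1:premise_string.index("&")].strip()
        return None

    print(entails_semantically(p_and_q, p_alone))   # True
    print(derive_syntactically("(p & q)"))          # p

Here the syntactic rule happens to track the semantic relation, which is precisely the "mimicking" Fodor describes; the argument of this paper is that meaning in living systems cannot be exhausted by such correlations.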

From a biological, cybernetic and semiotic point of view this theory overlooks various fundamental characteristics of biological systems: self-organisation, closedness, complexity, autonomy, self-interest in survival, life history and intentionality. The mechanistic idea of reason and knowledge - and of logic, of course - has led to a simplistic understanding of how meaning functions both in language and in practice. The mechanists hope that the causal interaction of symbols can be explained through their syntactic relations. Fodor (1987, pp. 18-19, footnote deleted) shows this approach clearly:

"Here, in barest outline, is how the new story is supposed to go: You connect the causal properties of a symbol with its semantic properties via its syntax. The syntax of a symbol is one of its higher-order physical properties. To a metaphorical first approximation, we can think of the syntactic structure of a symbol as an abstract feature of its shape. Because, to all intents and purposes, syntax reduces to shape, and because the shape of a symbol is a potential determinant of its causal role, it is fairly easy to see how there could be environments in which the causal role of a symbol correlates with its syntax. It's easy, that is to say, to imagine symbol tokens interacting causally in virtue of their syntactic structures. The syntax of a symbol might determine the causes and effects of its tokenings in much the way that the geometry of a key determines which locks it will open."

There are many arguments against this. Mine are from a biological perspective. I do not think that meaning can be fully represented in a syntactic logical form. Meaning is very much tied to biological existence. Two important aspects of this are the dynamics of biological organisation in relation both to evolution and to the dynamics of the population. The new concepts developed within second-order cybernetics and bio-semiotics, which describe these qualities, are autopoiesis and code-duality. In this paper I want to suggest a combination of these concepts within a semiotic approach. This is what I call cyber-semiotics.

Cyber-semiotics is based on a different view of what a sign - and more specifically a symbol - is than the syntactic, denotative concept of cognitive science. It is based on Peirce's semiotics (Buckler 1955). In Peirce's semiotics, signs are triadic dynamical processes called semiosis, in which representamens get their interpretants from a semiotic web in an ongoing historical evolution, so that over time they come to stand for more and more aspects of the dynamical object. From a biological view, then, meaning lies in the bio-social praxis in which the sign takes part. This is the short version of the biological cyber-semiotics I want to unfold here.

Second-order cybernetics' concept of information

There is a big gap between the syntactic-logical information concept of Cognitive Science and the pragmatic dynamic semiotic concept of informational meaning in Peirce's semiotics. I believe, among other things, that the body as producer of semantics is in that gap. But it is not only a question of an understanding of the human body. We need a general theory of biological systems and their production of meaning and, further, we need to combine this with a pragmatic socio-semiotic-linguistic theory of the generation of meaning in communication.

I want to go to a meta-plane to suggest some elements for the construction of a theoretical framework for an interdisciplinary information and communication theory, "... to find a way of putting the knower into a known that is constructed so as to keep the knower viable" (Krippendorff 1991). Biological thinking is one of the ways to keep the knower viable, but not the usual mechanistic biological thinking.

My idea is that the biological thinking of second order cybernetics combined with the new thinking in bio-semiotics is a stepping stone between the logic of semantics and the understanding of how meaning is generated in the cultural communication of a full-fledged language. This I will try to show below.

No doubt cybernetics is one of the ancestors of modern Artificial Intelligence, Cognitive Science and the information processing paradigm. But second-order cybernetics is a development that overcomes some of the deep-rooted epistemological problems connected with the attempt to create a science whose objects - knowledge, perception, biological organization and intentionality - are at the same time its tools for acquiring knowledge.

One of the main features of the second-order cybernetics of von Foerster, Maturana and Varela, when we discuss perception, thinking and communication, is that biology matters! The idea in second-order cybernetics is to make a second-order science, in which the epistemological processes and problems of cybernetics are themselves dealt with cybernetically. It is a cybernetics of cybernetics. It looks into the cybernetics of the observing system in a scientific way and has, therefore, concentrated on the influence of the biological organisation of the observer on observing. The basic concepts developed in second-order cybernetics are autopoiesis, closedness, eigenvalues and the idea that the production of information is internal to the autopoietic system, and that perception and communication are based on structural couplings (Maturana 1983, Maturana & Varela 1980 and von Foerster 1984) and generalized media (Luhmann 1990).

According to the viewpoints of von Foerster, Maturana and Varela, the biological system is characterized by its closed self-organized organisation, which produces the elements from which its material structure is built and forms the boundary of the system. The system does not receive objective information from outside as a stimulus to which it responds. Instead its organization is perturbed by some things happening in the surroundings and its "answer" is an internal adjustment to maintain its organization. It is reacting to its own internal network. Information therefore is not something outside, but rather is a phenomenon created inside the organism (See Brier 1993a and b).

Bateson took the first step away from an objectivist concept of information by defining information in cybernetic systems as a difference which makes a difference. In Maturana's version, information is a difference created inside the autopoietic system because of some perturbation. His theory is somewhat solipsistic (Qvortrup 1993, Brier 1992).

But in von Foerster's and Luhmann's more refined version, information is rather a difference (created inside) which finds a difference outside (Qvortrup 1993), or better, which selects a difference outside and establishes a correspondence to it through an eigenvalue function. Von Foerster (1984, p. 263) writes for instance:

"...the information associated with a description depends on an observer's ability to draw inferences from this description... the environment contains no information; the environment is as it is."

Von Foerster (1984) writes about "eigenvalues" as those stable modes of dynamics a biological system drifts into when it is perturbed again and again in the same way. It is an attempt to show how what is normally called a representation comes about in a biological system. Maturana and Varela (1980) call the steady connection through which eigenvalues can be established "structural couplings". These concepts seem more fruitful for describing the habits and dynamics behind the phenomenon the ethologists call sign stimuli, created by an innate response mechanism in the instinctive actions of animals (Brier 1993b). Although the whole idea of structural determinism is still very mechanistic, there is an important shift in description from physiological structures to organizational dynamics. So the meaning of information depends on the system's own autopoietic organisation and its historical drift and co-evolution with the environment and the other observing systems in it.
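
The notion of an eigenvalue can be given a toy numerical illustration (an analogy of mine, not a model of any organism): whatever state the system starts in, recursively applying the same operation - the same kind of perturbation - makes it drift into a stable value that is a property of the recursion itself rather than of the input.

    import math

    def eigenvalue_of(operation, start, steps=100):
        """Apply the same operation recursively until the state stabilizes."""
        state = start
        for _ in range(steps):
            state = operation(state)
        return state

    # Whatever the starting state, the recursion settles on the same
    # stable "eigenvalue" of the operation (here about 0.739).
    for start in (0.0, 1.0, 17.0, -3.0):
        print(round(eigenvalue_of(math.cos, start), 3))

In second-order cybernetics the "operation" is of course the organism's own closed organisation responding to a recurring perturbation, not an external function; the sketch only shows how stability can arise from recursion rather than from a representation of something outside.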

"In this coupling, the autopoietic conduct of an organism A becomes a source of deformation for an organism B, and the compensatory behavior of organism B acts, in turn, as a source of deformation of orga-nism A, whose compensatory behavior acts again as a source of deformation of B, and so on recursively until the coupling is inter-rupted." (Maturana & Varela 1980, p. 120)

When observing autopoietic systems are forced by circumstances to be part of each other's surroundings for a longer time, a dance of mutual structural couplings develops. But this is not, as it is usually understood from a cognitivist and logic-of-language point of view, an exchange of information or codes.
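
The recursive "dance" of the quotation above can be caricatured as follows (an invented toy with an arbitrary update rule): two closed systems each adjust only their own internal state in compensation for the other's conduct, and a coordinated pattern emerges out of the recursion without anything being transmitted between them.

    # Toy caricature (arbitrary, invented update rule): each closed system
    # only adjusts its own internal state to compensate for the other's
    # current conduct, yet a coordinated pattern emerges recursively.

    def compensate(own_state, others_conduct):
        """Drift part of the way toward a state that offsets the perturbation."""
        return own_state + 0.3 * (others_conduct - own_state)

    a, b = 0.0, 10.0                                # two very different starting states
    for step in range(8):
        a, b = compensate(a, b), compensate(b, a)   # mutual deformation
        print(f"step {step}: A = {a:5.2f}   B = {b:5.2f}")
    # A and B settle into coordinated values; nothing was transmitted -
    # each only compensated within itself for the other's conduct.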

"Notions such as coding and transmission of information do not enter in the realization of a concrete autopoietic system because they do not refer to actual processes in it. (...) The notion of coding is a cognitive notion which represents the interactions of the observer, not a phenomenon operative in the observed domain." (Maturana and Varela 1980, p. 90)

But the further development of these organisational processes leads to what Maturana calls 'languaging'. This process of languaging I understand as the biological foundation for any kind of co-generation of meaning between biological systems. But it does not deal specifically with the function of language in cultural communication. Niklas Luhmann has developed a generalized version of the second order bio-cybernetic understanding of perception, generation and communication of information:

"If we abstract from life and define autopoiesis as a general form of system building using self-referential closure, we would have to admit that there are non-living autopoietic systems, different modes of autopoietic reproduction, and that there are general principles of autopoietic organization that materialize as life, but also in other modes of circularity and self-reproduction. In other words, if we find non-living autopoietic systems in our world, then and only then will we need a truly general theory of autopoiesis that carefully avoids references that hold true only for living systems." (Luhmann 1990, s. 2)

Luhmann's idea is not to claim that computers are autopoietic, but that there exist systems which are not primarily biological and yet are autopoietic. He is thinking of psychological and social-communicative systems.

Luhmann wants to be able to distinguish, but not to explain, these different levels of autopoiesis, and to characterize basic differences in their ways of functioning. He wants to underline that psychological and social-communicative autopoiesis are qualitatively different. So in a general theory of autopoiesis he, so far, wants to distinguish three different kinds of systems:

"It distinguishes a general theory of self-referential autopoietic systems and a more concrete level at which we may distinguish living systems (cells, brains, organisms, etc.), psychic systems, and social systems (societies, organizations, interactions) as different kinds of autopoietic systems. See figure 1.

Figure 1 [see note 2; the figure is not reproduced here]

This scheme is not to be understood as describing an internal system's differentiation. It is a scheme not for the operation of systems, but for their observation. It differentiates different types of systems or different modes of realization of autopoiesis." (Luhmann 1990, p. 29)

So it is important to understand that communicative systems are autonomous and have their own intrinsic form of organization which, although it builds on biological individuals, has aspects which transcend the biological sphere.

But as far as we know they can only function with a biological autopoietic system as a basis. As a biologist I do not think that Luhmann takes this fact seriously enough, but I find it important to have the three levels defined as closed systems. Although they are all present in the human being, functioning simultaneously, there is no direct "inner connection" between them. They can only communicate through interpenetration. This is an elegant cybernetic formulation of the organizational reasons for the problems of integrating self-consciousness, the body-mind, and social communication through language.

"Social systems use communication as their particular mode of autopoietic reproduction. Their elements are communications that are recursively produced and reproduced by a network of communications and that cannot exist outside of such a network. Communications are not "living" units, they are not "conscious" units, they are not "actions." (Luhmann 1990, s. 3)

Luhmann needs the distinction between the three systems to be able to create what has so far not been developed in the positions of von Foerster and Maturana: a psychological and social theory of meaning and communication. Luhmann writes:

"It leads to a sharp distinction between meaning and life as different kinds of autopoietic organization, and meaning-using systems again have to be distinguished according to whether they use consciousness or communication as a mode of meaning-based reproduction. On the one hand, then, a psychological and sociological theory has to be developed that meets these requirements. On the other hand, the concept of autopoiesis has to be abstracted from biological connotations." (Luhmann 1990, s. 2 )

This is an important move for him in order to be able to explain the human phenomenon of communication and the concept of meaning, concepts which do not occur on the biological level of second-order cybernetics. He defines meaning as a representation of complexity which provides access to all possible topics of communication. In this way he is able to introduce the concept of meaning in a fundamental system-theoretical way without having to deal with a transcendental subject. Instead of some transcendental idea, meaning becomes a new and powerful form of coping with complexity under the unavoidable condition of enforced selectivity (Luhmann 1990, p. 84).

In Luhmann's theory of social communication (Luhmann 1990) the structural couplings in the communication of social systems are called generalized media (money, power, love, truth). Only inside social structural couplings created through the history of society is it possible to have meaningful communication. Not even in this setting can one speak of the exchange of information. Communication is a shared actualization of meaning that is able to inform at least one of the participants. Luhmann writes:

"What remains identical in communication, however, is not a transmitted, but a common underlying meaning structure that allows the reciprocal regulation of surpri-ses. That this meaning fundament is itself historical in nature, i.e., that it arises within the history of experience and communicative processes, is another matter altogether and does not contradict my thesis that communication does not transmit or transfer meaning, but rather requires it as pregiven and as forming a shared background against which informative surprises may be articulated." (Luhmann 1990, p. 32)

Although the tri-partitioning of autopoiesis into biological, psychological and social systems seems warranted for defining a fundamental scientific and operational concept of meaning and social communication, it does not tell us much about how these systems came about. Luhmann's thinking does not seem to integrate evolutionary ecological thinking in a dynamic way. Furthermore, we do not learn much about what it is that the systems exchange when they communicate without transferring information. Although science can never describe or grasp the human core of the emotional and existential aspects of meaning and speech, nor the biological drive and emotioning behind the exchange of signs, it must be possible to enrich this second-order cybernetic theory with our knowledge of how signs function among living systems.

Cyber-semiotics

My suggestion is that we use the triadic and dynamical concept of sign or semiosis from C.S. Peirce to give a deeper explanation of what it is that is exchanged, and how information is created and has effects even on a biological level.

Second-order cyberneticians do not even have the triadic concept of sign in their theory, and some are opposed to it. But I think that it is possible to fuse the two theories here, as they are both of second order. All the elements in Peirce's semiosis are signs themselves. In a very famous quotation Peirce (Buckler 1955, pp. 99-100) defines his dynamical and pragmatic concept of a sign:

"A Sign, or Representamen, is a First which stands in such a genuine triadic relation to a Second, called its Object, as to be capable of determining a Third, called its Interpretant, to assume the same triadic relation to its Object in which it stands itself to the same Object. The triadic relation is genuine, that is its three members are bound together by it in a way that does not consist in any complexus of dyadic relations. That is the reason the Interpretant, or Third, cannot stand in a mere dyadic relation to the Object, but must stand in such a relation to it as the Representamen itself does.

... A Sign is a Representamen with a mental Interpretant".

As one can see, Peirce's definition is second order because all the elements of the sign process are signs themselves. Further, a sign is not a thing but a dynamical process. In humans the meaning of the sign emerges out of a social, dynamic network of relational logic, creating an ever-evolving interpretant. The interpretation of a sign is never finished, because the meaning of the sign consists in the social habits it gives rise to, and these are in constant development, forever spreading and returning. Peirce talks about unlimited semiosis.
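
As a structural sketch (mine, not Peirce's), the second-order character of the sign can be expressed by letting the interpretant of a sign be itself a sign, whose interpretant is again a sign, so that interpretation becomes an open-ended process rather than a lookup.

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Sign:
        representamen: str                      # the First, which does the standing-for
        obj: str                                # the Second, the (dynamical) object
        interpret: Callable[["Sign"], "Sign"]   # the Third: yields a further sign

    def interpret_chain(sign: Sign, steps: int) -> None:
        """Unlimited semiosis, truncated: every interpretant is itself a sign."""
        for _ in range(steps):
            print(f"'{sign.representamen}' stands for '{sign.obj}'")
            sign = sign.interpret(sign)

    # Invented example: smoke taken as a sign of fire; its interpretant
    # (the habit of looking for flames) is again a sign of the same object.
    def next_interpretant(s: Sign) -> Sign:
        return Sign(representamen=f"interpretant of '{s.representamen}'",
                    obj=s.obj,
                    interpret=next_interpretant)

    interpret_chain(Sign("smoke", "fire", next_interpretant), steps=3)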

So a sign is something which stands for something in some capacity for somebody, and to be able to do so it must be triadic. As signs are the basic components of perception, understanding, thinking and communication, Peirce's development of his system gave rise to a whole system of thought and logic. The triadic categories went right through the whole system and became the basic categories of knowing. Peirce simply calls them firstness, secondness and thirdness. In the following quotation (Buckler 1955 p. 76) Peirce gives a short description of his triadic categories:

"Actuality is something brute. There is no reason in it. For instance putting your shoulder against a door and trying to force it open against an unseen, silent and unknown resistance. We have a two-sided consciousness of effort and resistance, which seems to me to come tolerably near to pure sense of actuality. On the whole, I think we have here a mode of being of one thing which consists in how a second object is. I call this Secondness.

Besides this, there are two modes of being that I call Firstness and Thirdness. Firstness is the mode of being which consists in its subject's being positively such as it is regardless of aught else. That can only be a possibility. For as long as things do not act upon one another there is no sense or meaning in saying that they have any being, unless it be that they are such in themselves that they may perhaps come into relation with others. The mode of being a redness, before anything in the universe was yet red, was nevertheless a positive qualitative possibility. And redness in itself, even if it be embodied, is something positive and sui generis....

Now for Thirdness. Five minutes of our waking life will hardly pass without our making some kind of prediction; and in the majority of cases these predictions are fulfilled in the event. Yet a prediction is essentially of a general nature, and cannot ever be completely fulfilled. To say that a prediction has a tendency to be fulfilled, is to say that the future events are in a measure really governed by law.... This mode of being that consists ... in the fact that future facts of Secondness will take on a determinate character, I call Thirdness".

Firstness is among other things a monadic characteristic or predicate: sense qualities, simple forms and feelings, the modus of possibilities, that which exists without reference to any other thing and pure quality. Firstness is vague because it does not in itself stand in any relation to anything else.

Secondness is a dyadic quality which something has in its relation to something else, but independent of any third thing. This is the category of the characteristics of objects which make it possible to know them and identify them independently of concepts, by pointing and saying this/that. For example, indexes are signs that stand for things without describing them. Secondness is the subject in logic. It is resistance, breaks, separateness, quantity. Where firstness is possibility, secondness is necessity, such as local causality.

Thirdness is the triadic quality possessed only by that which is as it is because it brings a first and a second into relation with each other. This is the category of generality and understandability, rationality and lawfulness. It is therefore first of all the category of the sign and of logical inference. From a human point of view, firstness is feeling and secondness experience. Thirdness is the generation of some kind of biological, cultural or linguistic habit which elevates us above firstness' universe of possibilities and secondness' numberless incidents. Thirdness puts quality and quantity together in a relation parallel to logical inference, as is done in science.

In biology, Hoffmeyer (1992 a & b) has pointed out that the idea of autopoiesis is not quite enough to understand why biological systems become subjective and sign-interpreting. Their code-duality is also vital. That is to say, they carry in their genes a digitized version of themselves, of which they themselves are the analog version (a toy sketch of this digital/analog duality follows the Hofstadter quotation below). In a population this digitized version is changed selectively over the generations via evolution through natural selection and other interactive processes. So in the genes there is a kind of selective history which is always a little different from the actual individual, because the phenotype has a history of its own. This creates a kind of biological subjectivity. (See Hoffmeyer's paper in this issue of the journal.) Hoffmeyer sees the body as swarms of cells organized into a super-swarm through semiotic communication between the nervous, hormonal and immune systems. When he talks of intelligence not as coming from a central controller but as an emergent phenomenon of the self-organization of these swarms of swarms of cells, he is in agreement with Hofstadter, who also claims that the autopoietic thinking system is not organized from above in a classical logical way, but rather from below through a lot of subconscious activity. Hofstadter writes:

"The brain itself does not manipulate symbols; the brain is the medium in which the symbols are floating and in which they trigger each other. There is no central manipulator, no central program. There is simply a vast collection of "teams" - patterns of neural firings that, like teams of ants, trigger other patterns of neural firings. The symbols are not "down there" at the level of the individual firings; they are "up here" where we do our verbalization. We feel those symbols churning within ourselves in somewhat the same way we feel our stomach churning. We do not do symbol manipulation by some sort of act of will, let alone some set of logical rules of deduction. We cannot decide what we will next think of nor how our thoughts will progress.

Not only are we not symbol manipulators; in fact, quite to the contrary, we are manipulated by our symbols! As Scott Kim has put it, rather than speak of "free will", perhaps it is more appropriate to speak of "free won't"." (Hofstadter 1983, p. 279)
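
Returning to Hoffmeyer's code-duality, the duality of a digital and an analog version of the same system can be put into a small evolutionary toy (entirely my illustration, with made-up numbers and rules): selection acts on the analog, interacting phenotypes, but what is passed on, and what accumulates history, is the digital code.

    import random

    random.seed(1)   # reproducible toy run

    # Toy illustration of code-duality: the digital code (a bit string) is
    # what gets copied, mutated and accumulates history; the analog version
    # (the "body" it is expressed as) is what meets the environment.

    def analog_version(code):
        """Express the digital code as an analog trait of the body."""
        return sum(code) / len(code)

    def fitness(trait):
        return 1.0 - abs(trait - 0.7)          # bodies near 0.7 fare best

    population = [[random.randint(0, 1) for _ in range(16)] for _ in range(30)]

    for generation in range(20):
        # selection happens to the interacting analog bodies ...
        population.sort(key=lambda code: fitness(analog_version(code)), reverse=True)
        survivors = population[:10]
        # ... but it is the digital code that is copied, with mutation,
        # into the next generation.
        population = []
        for _ in range(30):
            child = list(random.choice(survivors))
            flip = random.randrange(len(child))
            child[flip] = 1 - child[flip]
            population.append(child)

    best = max(population, key=lambda code: fitness(analog_version(code)))
    print("trait of the best evolved body:", round(analog_version(best), 2))

The digital record is always a step removed from the actual analog individual, which is the asymmetry Hoffmeyer uses to ground a kind of biological subjectivity.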

Maturana has pointed out that there is an ongoing interaction between the autopoietic system and its environment. They co-evolve in a non-deterministic historical drift. Organisms that live together become surroundings for each other, coordinating their internal organization. Finally, languaging appears as the coordination of coordinations of behaviour.

So there is a complicated psycho-biological development and dynamic-system organisation behind perception, thinking and communication. Self-consciousness and those aspects of the processes of mind that can be modeled in classical logical terms do not seem to have any special position in controlling how the intentions, goals and ideas of the system are created. Furthermore, the elementary processes of which this system consists do not seem to be made of classical mechanistic information processing, but rather of a self-organized semiotic dynamics.

In ethology, one says that ritualized instinctive behavior becomes sign stimuli in the coordination of behavior between, for instance, the two sexes of a species in their mating play. So - as it is already put in the language of ethology - a piece of behavior or a plumage color in movement becomes a sign for the coordination of behavior in a specific mood, such as mating. It is the mood and the context that determine the biological meaning of these signs. The swollen belly of a female stickleback is the representamen for a male code-dual autopoietic system languaging with the female - because he is in a sexual mood - creating in him the interpretant that she is ready to mate.

In his language philosophy, Wittgenstein (1958) says that the meaning of words/signs can only be defined in a language game, such as seduction or writing a scientific paper, which again only arises as part of a life form, such as mating or scientific research. It is clear from Wittgenstein that human speech, its origin and its meaning for the individual, can never be given a scientific explanation as such. Humans have speech, meaning and knowledge before they make science. Science is about what we - in principle - can all agree about, so it can never explain the acts and meanings of the individual living being. Science can explain neither the world, our consciousness nor our language, but it can still describe - and even manipulate - some of the regularities/habits that are the necessary structures and organizational principles for living systems to communicate in and about worlds.

Animals do not have language with syntax and generative grammar, so let us call what they do a sign game. I am stretching Wittgenstein's life form concept into the animal kingdom, taking him seriously on his claim that life forms are a part of our natural history. So the structural coupling of mating creates the sign game of the mating-game life form. I think that this is a fruitful specification in biology of Peirce's idea that the meaning of signs is created in the semiotic web of society. To make Peirce and Wittgenstein meet, we further stress that unlimited semiosis means that the interpretants of signs are created through cultural history. Peirce's habits, which are the meaning of the signs, are equivalent to Wittgenstein's life forms. But seen from ethology, second-order cybernetics and biosemiotics alike, the basis for this is the creation of sign games in our natural history, where the habits are called instincts. Instincts can be combined to different degrees with individual learning to make the communicative act possible, as in bird song.

What second-order cybernetics gives to bio-semiotics is the ideas of closedness, structural couplings and languaging. For developing the semantic aspect of the latter concept, I prefer to use the Wittgenstein-inspired concept of the sign game as a way to state the biological foundation of language without claiming that animals have language. The concept of a sign game connects at the same time to Peirce's second-order theory of signs. We thus combine second-order cybernetics and Peirce's triadic, second-order semiotics to form what I call cyber-semiotics. It is my opinion that this cyber-semiotic frame of thinking takes us a step forward in the understanding of how signs get their meaning and produce information inside communicative systems. Information is actualized meaning in shared sign or language games.

In his critique of classical Artificial Intelligence (AI) Hofstadter underlines the necessity of this move towards the bio-cultural Umwelt if we want to understand how the semantic aspect of language is created:

"Once we abandon perfect mathematical isomorphism as our criterion for symbolizing and suggest that the value of symbol-triggering patterns comes largely from their suggestive value and their metaphorical richness, this severely complicates the question of what it means when we say that a symbol in the brain symbolizes anything." (Hofstadter 1983 p. 281)

The suggestive value always works in the context of a life form, both in biology and in human cultural life. The key to the understanding of understanding and communication is that both the animals and we humans live in self-organized Umwelts, which we not only project around us but also carry deep inside our systems. The organization of signs and the meaning they acquire through the habits of mind and body follow very much the principles of second-order cybernetics, in that they produce their own eigenvalues of sign and meaning and thereby their own Umwelt. In humans, these signs are organised into language through social, self-conscious communication, and accordingly our universe is also organized as and through texts. But this is of course not an explanation of meaning. It is an attempt to describe the dynamics of meaning-generating and meaning-sharing systems, and how they are organized.

Acknowledgements

I would like to thank Claus Emmeche (Niels Bohr Inst. Copenhagen U.), Jesper Hoffmeyer (Inst. of Molecular Biology Copenhagen U.), Ole Fogh Kirkeby (Inst. for Systems and Computers Sciences, Copenhagen Business School), Michael Manthey (Aalborg University) and Axel Randrup (Centre for Interdisciplinary Research) for their valuable critique in reviewing an earlier version of this article.

References

Brier, S. (1992): "Information and Consciousness: A Critique of the Mechanistic Foundation for the Concept of Information", Cybernetics & Human Knowing vol.1, no.2/3, Aalborg.

Brier, S. (1993a): "Cyber-Semiotics: Second Order Cybernetics and the Semiotics of C.S. Peirce" in the Proceedings from the Second European Congress on System Science, Prague, October 5-8. 1993. AFCET.

Brier, S. (1993b): "A Cybernetic and Semiotic View on a Galilean Theory of Psychology" in Cybernetics & Human Knowing, Vol. 2, no. 2, Aalborg.

Buckler, J. (ed.)(1955): Philosophical Writings of Peirce, Dover Publication, New York.

Fodor, J.A. (1987): Psychosemantics; The Problem of Meaning in the Philosophy of Mind, A Bradford Book, The MIT Press, Cambridge, Massachusetts.

Foerster, H. von (1984): Observing Systems, The Systems Inquiry Series. Intersystems Publications. California, USA.

Gardner, H. (1985): The Mind's New Science. Basic Books, Inc., Publishers/New York.

Hoffmeyer, J. (1992a): "Some Semiotic Aspects of the Psycho-Physical Relation: The Endo-Exosemiotic Boundary", pp. 101-123 in Sebeok, T.A. & Umiker-Sebeok, J. (eds.) (1991): Biosemiotics. The Semiotic Web 1991, Mouton de Gruyter, Berlin/New York.

Hoffmeyer, J. (1992b): "Semiotic Aspects of Biology: Biosemiotics" in Posner, R., Robins, K. & Sebeok, T.A. (eds.): Semiotics: A Handbook of the Sign-Theoretic Foundations of Nature and Culture, Berlin/New York.

Hoffmeyer, J. & Emmeche, C. (1990): "Code-Duality and the Semiotics of Nature" in Anderson, M. & Merrell, F. (eds.) (1991): On Semiotic Modelling, Mouton, Berlin/New York.

Hofstadter, D.R. (1983): "Artificial Intelligence: Subcognition as Computation" in Machlup, F. & Mansfield, U. (eds.): The Study of Information, Wiley, New York, pp. 263-285.

Krippendorff, K. (1991): Stepping Stones Towards a Constructivist Epistemology for Mass-Communication, manuscript prepared for presentation to the annual meeting of the Deutsche Gesellschaft für Publizistik und Kommunikationswissenschaft in Bamberg, May 8-10, 32 pp.

Lindsay, P. & Norman, D.A. (1977): Human Information Processing: An Introduction to Psychology, 2nd edition, Harcourt Brace Jovanovich, San Diego.

Luhmann, N. (1990): Essays on Self-Reference, Columbia University Press, New York.

Maturana, H. (1983): "What is it to see?" Arch.Biol.Med.Exp. 16, pp.255-269.

Maturana, H. & Varela, F. (1980): Autopoiesis and Cognition: The Realization of the Living, Reidel Publishing Company, Dordrecht.

Qvortrup, L. (1993): "The Controversy over the Concept of Information: An Overview and a Selected Bibliography", pp. 3-26 in Cybernetics & Human Knowing vol. 1, no. 4, Aalborg.

Vickery, A. & Vickery, B. (1987): Information Science - Theory and Practice, Bowker-Saur, London.

Wiener, N. (1961): Cybernetics: or Control and Communication in the Animal and the Machine, 2nd edition, The M.I.T. Press and John Wiley & Sons, New York (orig. 1948).

Winograd, T. & Flores, F. (1987): Understanding Computers and Cognition, Ablex Publishing Corporation, Norwood, New Jersey.

Wittgenstein, L. (1958): Philosophical Investigations, third edition, transl. by G.E.M. Anscombe, Macmillan Publishing Co., Inc., New York.

Notes

1. This paper is an extended version of my talk in the bio-semiotics section of the conference Semiotics Around the World of the International Association for Semiotics, 1994, at the University of California at Berkeley, California, USA.

2. The number of the table is Luhmann's, not mine.

