Knowledge as Environmental Fit



Title: Knowledge as Environmental Fit
Author: Ernst von Glasersfeld and Paul Cobb
Date of Publication: 1983
Published in: Man-Environment Systems




In: Man-Environment Systems, 13 (5), 216–224, 1983.

Knowledge as Environmental Fit

Co-author: Paul Cobb

Preface

In the philosophical tradition of the Western World, it is held that knowledge forms a sharp contrast to belief, opinion, hypothesis, and illusion. What is called “knowledge,” is supposed to be not only unquestionable but also independent of the knowing subject. Knowledge, therefore, is considered much more than know-how. It is intended to refer to a “true” picture of the world, of objects and events, and of the rules and laws that govern them. Though the human knower may include some knowledge of the knowing self, by far the greater part of it concerns the world in which that cognizing subject lives. Knowledge, thus, is usually assumed to be knowledge of the environment.
From the very beginning, however, that philosophical tradition has been plagued by doubts about it and by dissidents who maintained that no such knowledge is possible. In this paper we discuss a theory of knowledge that proposes a third way, a way that avoids the realist’s unwarranted assertions as well as the sceptics’ wholly negative attitude. It does this by positing a different relationship between man, the cognizing organism, and the environment of the “real” world.
The constructivist theory of knowledge has roots in the Renaissance, in 17th-century France, and above all in the early work of Vico. In our times, it was developed by Jean Piaget and given something of a rigorous formulation by one of the pioneers of cybernetics, Heinz von Foerster. In this essay, we argue that a coherent theory of knowledge is possible, provided we focus on the activity of knowing rather than on the preconception of an independently existing world.

The Traditional View of Knowledge

The history of intellectual endeavor, in most areas, is a checkered one. There are records of successes and of failures. Every so often there is evidence of some more or less far-reaching reorganization, a reshuffling of notions and definitions, or the introduction of novel concepts that generated new solutions as well as new problems. The history of science, especially the scientific view of the universe in which we find ourselves living, is a prime example. We have gone through many shifts of ideas, from fancying ourselves in the center of a set of spherical Chinese boxes, to the view that we are clinging to a piece of debris that is still hurtling away from a cosmic explosion. These re-organizations were both colorful and profound. They have opened up a multitude of paths, led to new benefits, to new risks, and above all they have repeatedly changed the basic perception of what is important. No one in his senses would doubt that our present conceptual structures and operations enable us to do a great deal more and to take for granted things that Ptolemy could not even dream of.
Irrespective of whether one tends to consider this development progress or merely change, the point we want to make here is that our way of looking and of seeing has been drastically altered by what has happened to our concepts and our thinking during the roughly twenty-five centuries of recorded history of ideas.
There is, nevertheless, one intellectual discipline whose history presents a very different picture, a picture characterized by the absence of change. That discipline is epistemology, the discipline that investigates knowledge itself, what knowledge should be, and how we come to have it.
Any course that deals with theory of knowledge, it is often said, must begin with the Pre-Socratics because they were the first to leave a record of having asked epistemological questions. That, however, is not the most important reason why we should begin with the Pre-Socratics. Democritus, for instance, was the first who went on record as saying that the world was made of atoms, but to mention him in a course on atomic physics today adds at best a quaint historical note; it can add nothing to the modern theory of atoms, because that theory is a structure of concepts that were not and could not have been part of the conceptual world of Democritus.
In the theory of knowledge, the situation is remarkably different. The questions the Pre-Socratics raised are the very same questions that anyone who begins to think about knowledge is likely to raise today. These questions have not changed, because no unassailable answers have been found for them in all the time that has gone by since then. The history of epistemology, at least as far as traditional philosophy is concerned, is the history of an unsolved puzzle. There have, of course, been some individual thinkers who stepped outside the conventional way of thinking and managed to resolve at least some of the problems to their own satisfaction. But they have had no lasting effect on the official discipline. The reason for this lies in the seemingly immutable meaning that the word “to know” was given by the Pre-Socratics when they first developed the scenario in which the activity of knowing was believed to take place.
In that scenario, the activity of knowing or getting to know is, in fact, not a real activity at all. It is a passive receiving, an accepting of impressions, much like sand on the beach receives the footprints of birds or people that happen to run over it. It was perhaps quite inevitable that the first knowers who ventured to ask how their knowledge came about, should have conceived of it in this way. After all, though they lived around 500 B.C., they had a great many skills, they managed fire, water, weapons, and tools in a highly efficient fashion, they built magnificent temples, chiseled marble, and cast bronze. They had a large repertoire of practical schemes that enabled them to procure necessities, avoid certain discomforts, and provide certain pleasures. They had coordinated a great many causes and effects, and they used these links as well as they could to control their experience.
All these skills were the result of observation, of trying things out and inductively retaining what seemed likely to work again. Among these observations there were, of course, many that concerned other people. Once you had come too close to a fire, felt the burning heat, and had your skin blistered, you could observe others make the same mistake—and you could conclude that they, too, had the same burning sensation as you had experienced. Thus, it seemed perfectly clear that the fire was the cause of those burns, regardless of whether they were your burns or another’s. The fire had to be there, a thing that existed in itself and for itself, and anyone coming too close would get burned. Reality was not only very tangible, but it was also pretty reliable. It caused effects in the experiencer, and the effects were sufficiently regular to warrant making predictions. The fact that many of these predictions turned out to be correct made reality all the more real and gave it stability.
Small wonder, then, that when questions began to be asked about how one experiences, how one perceives, and how one comes to know, it seemed quite natural to answer them by saying that it had to be Reality that caused what one experiences, what one perceives, and what one comes to know. The scenario of knowing quite naturally took shape as a scenario that has the cognizing subject come into the world as a discoverer, as a subject that must find out what the things of the Real World are like, how they work, and in what way they can be managed. To see was to receive visual impressions, to hear was to receive sounds, and to acquire knowledge was to put all one’s perceptions together and to discover how the things that caused them were actually related and what exactly they were like. Knowledge, therefore, was knowledge of the things that caused one’s experiences, the things that were given, the data, and it could all be put together as a picture of Reality.
That seemed as solid a theory of knowledge as one could wish to have. It captured common sense, and every moment of ordinary living seemed to confirm it. It still does. Every child that is born goes through more or less the same cognitive apprenticeship, and every human knower in the world shaped by the Pre-Socratics has grown into this common sense theory of knowledge. Consequently, every such knower who, for one reason or another, comes to ask questions about the nature of knowledge, formulates those questions in much the same way and then runs up against the same problem that reared its head already at the time of the Pre-Socratics.

The Unsolved Problem

The problem springs, not from a mistaken answer to an epistemological question, but from the question itself. To be more precise, it springs from a tacit assumption that is inherent in the question. This assumption is so natural and has come to seem so inevitable that it is difficult to become aware of and to see it clearly for what it is. This is largely due to the fact that one is not likely to ask questions about the nature of knowledge unless one already possesses something that one considers knowledge. That is to say, one begins at a point where one has tacitly accepted the traditional notion that “knowledge” is knowledge of something else, knowledge that corresponds to, depicts, or represents something that was there before it became known. In other words, one takes for granted that what one has come to know had its own independent existence before one captured it by a cognizing effort. Given that perspective, it is indeed difficult to avoid asking just how well the knowledge one has acquired “corresponds to,” “depicts,” or “represents” what it is supposed to correspond to, depict, or represent, namely Reality.
With this, the question of truth enters into the theory of knowledge. A statement will be called “true” when we believe that it matches a state of affairs in the real world. Consequently, true knowledge consists of statements that are true and it will therefore be expected correctly or veridically to correspond to, depict, or represent what exists or happens in “reality.”[1]
On second thought, though, any such “truth” will need to be verified. To ascertain whether or not a statement is true in this particular “ontological” sense, we shall have to check it with something that is supposed to “exist” in a world apart from statements and experience. That is to say, it would be a question of comparing a statement, not with other statements or past experiences, but with states of affairs that are supposed to be the causes of what we experience, states of affairs that are supposed to be there, in themselves and for themselves in an ontic world, irrespective of anyone’s experience.
This comparison is a comparison that can never be made. Xenophanes, one of the earliest among the Pre-Socratics, had already become aware of that impossibility. “If a man succeeded to the full in saying what is completely true, he himself would nevertheless be unaware of it.”
Pyrrho, a little later, formulated the argument that quickly became and still remains the cornerstone of all kinds of philosophical scepticism. How, he asked, could we ever tell whether or not the pictures our senses “convey” are accurate and true, if the only way they can be checked is again through our senses? The question is, indeed, unanswerable. It is analogous to asking, say, what the magnification of a telescope might be if nothing that is seen through the telescope can be seen or measured in any other way.
Western epistemologists have twisted and wriggled in every conceivable direction to find a way out of that impasse, and, although none of them succeeded, they staunchly continue to hope that, somehow, a way will be found. The impasse is as absolute as anything in the sphere of human thought can be, but philosophers, by and large, refused to admit it. As Hilary Putnam recently said, “…it is impossible to find a philosopher before Kant (and after the Pre-Socratics) who was not a metaphysical realist, at least about what he took to be basic or unreducible assertions…”[2]
Kant, in fact, extended the sceptics’ argument beyond the area of sensory data to the very structure of experience. Pyrrho and his followers had successfully argued that if, say, an apple appears to have a certain color and a certain smell, feels smooth and tastes sweet to us, this cannot give us the knowledge that a real apple possesses these properties, because we have no way of examining the apple other than by seeing, smelling, tasting, and feeling it again. Hence, if our senses distort what they are supposed to “convey,” we have no way of ever discovering that distortion. Kant, however, pushed doubt much further. By suggesting that time and space are aspects of our human way of experiencing rather than properties of the ontic world, he cast doubt upon the very notion of thinghood. Thus, it is not only the real apple’s color, smell, smoothness, and taste that are uncertain, but we can no longer be sure that there exists a real unitary object, a “thing-in-itself,” that corresponds to the constellation of sensory properties which we isolate as an “apple” from the rest of our experiential field.
The scenario, in which the knower is supposed to acquire “true” pictures or representations of the real world, is thus inherently unsatisfactory. If the knower can never be sure that the picture of the world which he or she distills from experience is unquestionably a correct representation of a world that exists as such, the knower is cast in the role of a discoverer who has no possible access to what he or she is expected to discover.

The Idealist Attempt

The sceptics’ arguments are indeed irrefutable and there would seem to be little merit in burying one’s head in the sand and attempting to carry on as though they had never been formulated. There have, of course, been philosophers who, following a line of thought that was already quite fully developed by Plato, attempted to circumvent the problem by discrediting sensory experience altogether and saying that the real reality was not to be found on the other side of our sensory interface but rather in the core of our minds in a world of ideas. By considering everyday experience illusory, this school of thought promised to bring into focus an immutable world of eternal truths and values. Though it proved a fertile starting point for metaphysical speculation and religious belief systems, it did not and could not lead to a satisfactory theory of knowledge. Most of the sceptics’ arguments were equally applicable also to the mysterious process of becoming aware of ideas that were supposed to be slumbering in one’s mind; and since even the most extreme idealism could not quite eliminate the realm of sensory experience, there still remained the problem of tying the world of perfect ideas to the world of imperfect experience. Moreover, if idealism was carried to its logical extreme, it led to solipsism, the doctrine according to which there exists nothing but the subject’s own ideas. Although this doctrine has an attractive intrinsic elegance, it would be difficult to accept, because every one of us knows only too well that the world he or she has to live in is usually not quite the world he or she would like to have. In other words, we cannot help realizing that our experience is subject to constraints that are altogether outside our control.

An Alternative Scenario

Up to now we have argued that Western epistemology, in spite of a history of varied attempts to counter the sceptics’ contention, has actually made their position even stronger than it was at the beginning. Whereas, originally, it was the sensory properties of the objects of experience that seemed questionable, Kant’s Critique of the rational processes suggested that not only the sensory properties but also the very articulation of experience into things and events in a framework of space and time could be due to the experiencer’s way of operating rather than to a given ontological structure of the world.[3]
If, as the sceptics have always claimed, there is no way of deriving knowledge of the real world from experience, it would seem reasonable to suggest that we relinquish the traditional scenario of the discoverer. In contemporary terms we might say that one should think of ontic reality as a “black box,” i.e., an entity whose internal structure and functioning are forever inaccessible to the human knower. That does not mean that one should follow the idealist and deny its existence. It merely means that one accepts the fact that one cannot discover what Reality might look like when it is not experienced by a human subject who conceptualizes it within a subjective framework of space and time.
To take this view does not mean that epistemological investigation has come to an end. It merely means that we shall adopt a different cognitive scenario and a different conception of what it is “to know.” In fact, the realization that the world of our experience is always and irrevocably the world as we see it, constitutes a new beginning. It immediately raises the question why and, above all, how it comes about that we search for and also seem to find structure in our experiential world. On closer examination this question splits into two. First, we shall have to ask on what grounds and by what means we manage to construct the world of everyday life, the world with which we cope for better or for worse, the world in which and about which we communicate with others. Such an investigation is, in fact, no less and no more than a continuation of what Kant called his “transcendental project.” However, in proceeding with it, we shall deviate in one important way. To accept Kant’s view that neither sensory nor any other kind of experience can furnish reliable knowledge of things-in-themselves does not oblige one also to accept his notion of an immutable a priori. That notion, in fact, is no less an ontological assumption than the realist’s assumption that the experiencer-independent ontic reality should have a knowable structure. The character of experiential reality will have to be explained, not as a result of preordained ways of experiencing (Kant’s Anschauungsformen), but as a result of the experiencer’s coordinatory and conceptual operations.
The second question to be answered concerns the cognizing activity itself, how it produces what we call “knowledge” and what relation obtains between that knowledge and the black box of ontic reality. For though we relinquish the traditional requirement that knowledge must depict, correspond to, or represent the real world, we must nevertheless (if we want to avoid the absurdity of solipsism) establish that and why what we call “knowledge” cannot be an altogether unconstrained fiction but must in some way be related to reality.
The theory of knowledge that we have called Radical Constructivism attempts to provide an answer to both these questions. It does this by replacing the relation between the knower and the known. Traditional epistemology has always taken it as a matter of course that there is a knowable ontic world and that it is the knower’s task to get to know and to describe it.[4] The activity of “knowing,” thus, was always seen as the acquisition of something that was already there. Our theory, instead, focuses on the activity of “knowing” as a constructive activity whose results are not merely compilations of material which the knower passively receives through the senses or through some other experiential conduit, but rather coordinations of elements which originate, within the knower, as products of the knower’s own activities of generating and coordinating. Isolating elements in one’s experiential field and relating them to one another are mutually dependent activities. “Knowledge” and the process of cognizing are therefore seen as inseparable. They reciprocally entail one another in the same way as drawing a “figure” entails categorizing the sheet of paper as “ground.” Knowledge, thus, becomes the product of an active, constructive mind. Although the mind as an ontic entity remains an unknowable counterpart to the black box of reality, our theory of knowledge proposes a model of the cognizing agent in the cybernetical sense of the word “model.” That is to say, the theory suggests a conceivable arrangement of elements and operations which, under similar circumstances and in similar situations, would produce results similar to those of which we ourselves, as cognizing subjects, become aware.
The second question, namely what relation “knowledge” is to have to the real world, is answered in a way that differentiates this constructivism from traditional theories of knowledge and makes it both radical and instrumentalist. Here, too, we make an important basic assumption: the cognizing subject is an agent that has preferences with regard to experience. That is to say, once this agent begins to isolate and categorize recurrent structures in the flow of experience, there will be structures which the agent would like to repeat and others which it would like to avoid. This is not intended to mean that the agent starts out with something like an a priori scale of values, but merely that the agent has the potential to build up such a scale and will build it up, once it has begun to articulate the flow of experience into separate, individually re-cognizable chunks.
How the cognizing subject articulates the flow of experience and conceptually establishes recurrent experiential structures is essentially a psychological problem and we shall come to it shortly. Whatever it is that we want to call “knowledge,” it must be a conceptual commodity because it consists of conceptual structures. However, given the current epistemological tradition, we would be expected to focus not on the genesis of conceptual structures that constitute knowledge, but rather on the relationship that could be said to obtain between these structures and the ontic reality in which the cognizing subject is supposed to be living and generating them. This at once brings us to one of the major discrepancies between the traditional and the radical constructivist theories of knowledge. Professional philosophers, as a rule, carefully exclude from their consideration anything that smacks of genesis or psychological development. They speak with disdain of the “genetic fallacy” and of “psychologism,” and thus, implicitly or explicitly, perpetuate the notion that the knowledge that is worth analyzing must be objective knowledge, and therefore independent of the particular knower’s mental operations and the circumstances under which he or she came to acquire it.
Radical constructivism does not agree with this proscription. If one accepts the sceptics’ view that the human knower cannot obtain a picture of ontic reality, the question becomes: how do we come to have the “reality” we do have? We constantly make useful distinctions between what we consider “real” and what “illusory,” and between “fact” and “fiction.” If that “reality” and those “facts” are not impressed on us from the outside, we ourselves must have a way of generating them. The question, therefore, turns into: How does the human mind construct its reality? An answer to that question, then, must involve the workings of the human mind. That is to say, it must be found in an area that belongs to psychology and, specifically, to the area that investigates the operations of the mind and the generation of conceptual structures. For constructivists, then, studying the genesis of the concepts that allow us to organize our experience is not a sin but a necessity; and the way in which that genesis will be studied should undoubtedly be part of psychology, even if the psychological establishment, with the exception of Piaget and his Geneva School, has hitherto not done very much in that direction.
We seem to be getting deeper and deeper into a paradox. On the one hand, we are saying, with the sceptics, that the reality we construct for ourselves cannot be considered a picture or iconic representation of an ontic world but, on the other hand, we are not admitting solipsism, although we do say that whatever “reality” we come to have must be our own construction.
The way out of this apparent paradox lies in the concept of viability, and the application of that concept is extremely simple, once we manage to get rid of the traditional interpretation of the word “to know.”[5] In our habitual way of thinking and speaking, “to know something” is intended to mean that one possesses a conceptual structure that matches some part or aspect of something that is considered ontologically real. From the constructivist perspective, this is an impossibility, and we therefore replace the notion of match with the notion of fit. It is one thing to believe that one has a conceptual structure that represents a part or an aspect of ontic reality iconically, which is to say, that all relevant differences between it and reality have been eliminated; and it is another thing to believe that one has a conceptual structure which will fit a certain type of experiential situation.
From the radical constructivist perspective, “knowledge” fits reality in much the same way that a key fits a lock that it is able to open. The fit describes a capacity of the key, not a property of the lock. When we face a novel problem, we are in much the same position as the burglar who wishes to enter a house. The “key” with which he successfully opens the door might be a paper clip, a bobby pin, a credit card, or a skillfully crafted skeleton key. All that matters is that it fits within the constraints of the particular lock and allows the burglar to get in. Similarly, the problem-solver attempts to conceive a method that will successfully open a path to his or her goal. Any method that does this will serve as well as any other, and to the extent that the problem-solver is successful, his or her know-how is functionally adapted to the constraints of unknowable ontic reality. Note that considerations as to how well a method serves its purpose are secondary in that they require reflection on what has been done as well as the introduction of ulterior values, such as speed, economy, ease of execution, compatibility with the methods used for other problems, etc.
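The lock-and-key analogy lends itself to a small illustration. The following sketch in Python is purely ours: the hidden constraint inside the "lock" is an arbitrary assumption invented for the example, and the point of the sketch is only that the lock answers whether a candidate key opens it while revealing nothing of its inner structure, so that several quite different keys turn out to fit.

# Illustrative sketch only: the "lock" is a black box that reports fit, not structure.
def make_lock():
    """Return a predicate that says whether a candidate key opens the lock.
    The constraint is hidden inside the closure (an arbitrary assumption of
    this example): callers can test keys against it but cannot inspect it."""
    def opens(key: str) -> bool:
        return len(key) >= 4 and key[2] == "x"   # hidden, arbitrary constraint
    return opens

lock = make_lock()

# Structurally different keys all "fit" the same lock:
for key in ["abxd", "00x0000", "zzxz!", "abcd"]:
    print(key, "opens the lock" if lock(key) else "does not fit")

# Three of the four keys are viable, yet knowing that they fit tells us nothing
# about what the lock "is"; the fit describes a capacity of the key, not a
# property of the lock.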

The Concept of Viability

Given this central notion of fit, the radical constructivist theory of knowledge is essentially a cybernetic theory in that it is based on the principle of adaptation to constraints rather than the principle of causation.[6] Adaptation to constraints is, of course, a well-known concept in Darwinian and neo-Darwinian theories of evolution. In that context, ‘adaptation’ is the result of the selective effects the environment has on populations of organisms who manifest a certain degree of variability. The criterion of selection is of stark simplicity: an organism either has what it takes to survive in the given environment, or it hasn’t. To say of an organism that it is “adapted to” or “fits” its environment, therefore, is to say that it possesses biological and behavioral features that have enabled it to survive (and procreate) up to now, in spite of whatever constraints (i.e., obstacles, inimical conditions, disasters, etc.) its environment has imposed on it. From the fact that organisms are viable, however, we cannot derive a description of the environment because, whatever the viable organisms are like, they constitute only one out of an unlimited number of possibilities that would also be viable.
In the evolutionary context, the viability of organisms is tantamount to survival, and survival is a binary affair.[7] An organism either survives or it doesn’t. It has no way of changing its genetic make-up when some genetic feature turns out to be counterproductive. There is no learning in evolution, only natural selection against variations that impede viability. Errors are fatal and they cannot be corrected in individual organisms. They can be “corrected” only in the population of the species by eliminating the deficient organisms.
When the concept of viability is transferred to the cognitive domain, the situation changes. Here, errors are not always immediately fatal for the one who makes them. The cognizing organism can, indeed, learn. It can embark on a line of action, realize that it does not lead where it was expected to lead, and either modify the action or abort it and try something else. The method of trial, error, and retention of successful solutions is a deliberate method within the cognitive domain, whereas in the biological domain of phylogeny it is at best a fanciful, metaphorical description.[8]
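The contrast between blind selection in phylogeny and trial, error, and retention in cognition can likewise be caricatured in a few lines. The sketch below is again merely illustrative, with a viability condition invented for the purpose: the first loop eliminates unviable variants without correcting them, while the second lets a single agent keep modifying its attempts and retain the first one that works.

import random

def viable(value: int) -> bool:
    # Stand-in for the constraints of the "black box" of ontic reality;
    # the particular condition is an arbitrary assumption of this sketch.
    return value % 7 == 3

# Phylogeny: selection is binary; deficient variants are eliminated, never corrected.
population = [random.randint(0, 100) for _ in range(20)]
survivors = [v for v in population if viable(v)]

# Cognition: one agent can try, notice failure, modify the action, and retain success.
def trial_error_retention(max_trials: int = 1000):
    for _ in range(max_trials):
        attempt = random.randint(0, 100)   # embark on a line of action
        if viable(attempt):                # it led where it was expected to lead
            return attempt                 # retain the successful solution
    return None                            # no viable scheme found within the budget

print("survivors:", survivors)
print("retained scheme:", trial_error_retention())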
In the cognitive realm of conceptual structures, then, the concept of viability applies to those structures which, in the cognizing organism’s past experience, have led to success. But success is relative. The more often a particular conceptual structure has led to satisfactory results, the more closely it comes to resemble what, in the traditional way of thinking, would be called experiential or, more precisely, inductive knowledge. The resemblance, however, is misleading.
In the traditional way of thinking, there is a sleight of hand that usually remains hidden even to the thinker him- or herself. It is the same trick that the statistician performs quite openly: when something has recurred a sufficient number of times, it is considered “significant”—which is to say, it is considered probable enough to be taken as a “fact.” The good statistician, of course, does not forget that it was he or she who decided the level of recurrence beyond which things were to be considered “significant.” Like the good modern physicist, he does not argue that, just because the sun has risen every morning for as long as we can remember or have records, we have the right to assume that it must continue to do so in the future. With David Hume, they know that there is no conceivable logical reason why the future should resemble the past. But, for practical reasons, we tend to assume that it will. If we did not make that assumption, we could not draw any inferences at all from past experience, and our attempts at predicting and controlling future experience could not even get started.

Goals and Purposes

Living organisms, as Maturana said more than a decade ago, operate as inductive systems and their “organization (genetic and otherwise) is conservative and repeats only that which works.”[9] The phrase “that which works” must be interpreted somewhat differently, depending on the realm in which it is used. In the cognitive realm, something will be said to “work” when it does what is expected of it in the context of attaining a goal. This is a delicate and often debated point. Given the longstanding objection, both in psychology and biology, against the notions of goal or purpose, we want to be very explicit about it.
In the realm of phylogeny, to “work” means no more than to be viable, to manage to survive and to procreate, and the repetition of that which works is built into the conceptual system that constitutes the theory of evolution. What does not work, or is not viable, is necessarily eliminated. Because survival and the perpetuation of the genome are the central mechanisms of the theory, biologists need not, and indeed must not, attribute any goal or purpose to the process of evolution which the theory purports to describe.[10]
On the cognitive level, however, as we have suggested before, things may be said to “work” in contexts where survival (or procreation) is not directly involved or, perhaps, not involved at all. We do not have to think of extreme cases, such as suicides or drug addicts. Every one of us has inductively developed schemes of action that “work” in that they have brought us success in attaining goals, and some of these goals have had no conceivable connection with physical survival, procreation, or anything biological. But the expression “to work,” in the context of cognitive construction, has yet another subtle aspect. In most contexts, it would be decidedly odd, for someone who has learned that coming too close to the fire will blister the skin, to say “putting your hand on the burner works.” (One could, of course, think up a context in which that statement would make sense, but the context would have to establish that the statement is intended ironically or that, for some reason, burning your hand was, exceptionally, considered a desirable goal.)
The point is simply this: Induction, on the cognitive level, presupposes that we abstract regularities from past experience in order to attain desirable states and events and to avoid undesirable ones. In other words, to speak of induction implies values, in the sense that inductive inferences are made with the expectation that they provide tools for the pursuit of specific goals.
Hence we conclude that the conceptual structures that constitute inductive knowledge are instrumental. And instrumental knowledge is good knowledge as long as it “works,” which is to say, as long as it helps us to attain the goals we want to attain. If it ceases to do so, we discard it, because it no longer fits our purpose and, thus, is not viable.[11]
This viability is, in principle, the same notion as in the case of the lock and the key. What changes, in its various applications, is merely the type of goal. Because inductive knowledge is instrumental knowledge it does not have to, and indeed cannot, match any ontic reality in the sense that it corresponds to, depicts, or represents it iconically; but in order to be good knowledge, it must fit the reality in which we have gathered our past experience. The enormous conceptual difference resides in the fact that, in traditional epistemology, knowledge was supposed to convey or reflect something of the structure of the “real” world, whereas in the radical constructivist theory of knowledge, the term refers exclusively to the schemes of doing and thinking which the knower has constructed to organize and manage experience.

The Construction of Experiential Reality

Our conception of the cognitive organism, as we have suggested before, involves certain presuppositions. First, because induction draws on past experience, there could be no inductive inferences at all, if the organism were not able in some way to record experiences or if experiences did not leave some specific, retraceable residue. It is from its own experiences alone that the cognitive organism can abstract invariances and regularities with which to build up a relatively stable experiential world. Second, the organism must have the capability of developing some scale of values, no matter how rudimental. There must be certain experiences which the organism would like to have again and others which it would rather avoid. Only some such discrimination of the desirable from the undesirable enables the organism to assess the viability of its constructions and draw the incentive for induction and for attempts to use inductive inferences as instruments in the management and control of experience.
Third and last among the major presuppositions is the organism’s disposition to act in response to any biological or cognitive perturbation. (The concept of perturbation, on the level of cognition, implies that the organism has at least one preferred state among its possible states, and can discriminate the preferred one from the others.) Though these presuppositions are probably not all that we tend to make, they are sufficient for a rough sketch of how the cognitive organism comes to have what we ordinarily call “reality.”
First of all, it is important to realize that there are several levels of reality that differ largely in the material that is used to construct the items that are then considered “real.” An account of these levels has been provided elsewhere.[12] Here we shall give merely a brief outline with the help of a simple, prosaic example.
The conception of reality we are adopting is based on the notion of repeatability. This is a commonplace notion which, it seems, is used everywhere in conceptual construction. Imagine you are looking out the window, see a dark patch on the lawn and, the next time you look, the dark patch is gone. You now wonder what it was. If there is no ready explanation, you may conclude that it was nothing but a figment of your visual system which is showing fatigue, and you therefore dismiss the experience as illusory, which is to say, you eliminate it from the sequence of experiences that you consider “real.” If, however, the dark patch is seen a second time, you will work much harder to find an explanation for it that would allow you to consider it real. If you are unable to account for it, but you see the patch every time you look out the window, you will be considerably disturbed, because this now means either that there are inexplicable entities visiting your lawn or—no less worrying—that your perceptual system has developed a serious malfunction. In both cases, the dark patch would have acquired a higher degree of reality than it had after you had seen it only once. As a next step, you might walk out and inspect the place where you have seen or are seeing the dark patch. This could, in fact, lead to a “confirmation” of the experience in another sensory mode. If, now, there is some other perceptual discrimination that you can coordinate with the visual discrimination of the patch— the feel of sticky wetness as you put your hand on the ground, a tactual resistance, or even a smell or sound—the experience of the dark patch will make something like a quantum jump with regard to the “reality” you would assign to it. (It is true, of course, that psychologists have found cases of illusion that involve more than one sensory mode, but they are rather rare and you would be extremely reluctant to accept the idea that it is you who is having such a multimodal illusion.)
Obviously, repetition would again play an important part on this second level. If the compound experience were recurrent, so that you have it again after shorter or longer intervals, you would at once assign to it a higher degree of reality than if you had had it only once.
The situation may then develop in two different ways. On the one hand, you may be able to draw an analogy and coordinate the experience of the dark patch with some of the rules and regularities that you have (inductively) abstracted in some area of past experience. That is to say, you may be able to construct an “explanation” for the dark patch that conforms to, or is in harmony with, explanations you have successfully used on other occasions and in other circumstances.
In that case, the explanation you have just produced would be registered as an hypothesis about the appearance of dark patches on your lawn. If you happened to be of a scientific bent of mind, you would then cease to doubt the reliability of your visual sense and you would begin to search for ways and means of “testing” your hypothesis. On the other hand, you may decide to call your spouse or someone else, ask them to look at the particular place on the lawn, and see what happens. If, in the past, they have usually corroborated your perceptions but now do not corroborate your experience of the dark patch, you will have some difficulty in maintaining its reality. (Of course, there is always the possibility of attributing supernatural powers to yourself, but few people are willing to take that rather awesome step with so little provocation.) If, however, your witness concurs and corroborates that a dark patch can be discriminated from the rest of the lawn, then the experience makes yet another jump with regard to its reality: you now are quite sure that it “exists.”
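The successive jumps in reality described above (recurrence, confirmation in another sensory mode, corroboration by Others) can be summarized in a toy tally. The sketch below is ours, not the authors’ formalism; the fields and thresholds are arbitrary assumptions chosen only to mirror the order of the levels in the dark-patch example.

from dataclasses import dataclass

@dataclass
class Experience:
    """Toy record for the dark-patch example; fields and thresholds are
    arbitrary assumptions of this sketch, not the authors' formalism."""
    seen_count: int = 0      # how often the visual experience recurred
    other_modes: int = 0     # confirmations in further sensory modes (touch, smell, ...)
    corroborators: int = 0   # Others who report the same discrimination

def degree_of_reality(e: Experience) -> str:
    if e.seen_count <= 1 and e.other_modes == 0 and e.corroborators == 0:
        return "dismissible as illusion"
    if e.other_modes == 0 and e.corroborators == 0:
        return "recurrent, still awaiting an explanation"
    if e.corroborators == 0:
        return "real for the experiencer (confirmed in more than one mode)"
    return "objective in the constructivist sense (viable for Others as well)"

print(degree_of_reality(Experience(seen_count=1)))
print(degree_of_reality(Experience(seen_count=3)))
print(degree_of_reality(Experience(seen_count=3, other_modes=1)))
print(degree_of_reality(Experience(seen_count=3, other_modes=1, corroborators=2)))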
With the corroboration by Others, one’s experiences acquire the kind of reality that is usually called “objective.” From the traditional epistemologist’s point of view, as well as in the common sense view, this seems an iron-clad way of proceeding. Things that are perceived not only by oneself but also by Others must be “real.” The entire traditional world view, one might say, is founded on that democratic principle. Yet, from the constructivist perspective, it is not nearly as simple and straightforward.

The Concept of Objectivity

If constructivists want to be consistent in their claim that the world we live in, the environment in which we find ourselves, has the structure that we ourselves have imposed on it by our ways of perceiving and conceptualizing, they must explain in their own terms how it comes about that this world turns out to be populated not by the experiencer alone but by Others who seem to have their own surprisingly similar experiential world. Here, once more, it is of paramount importance to remember that radical constructivism is a theory of knowledge and not an ontology. It deals with what we call knowledge, not with “existence” or the world of “being.” The question of how we come to have Others in our experiential world, therefore, does not in any way touch upon questions concerning their status in an ontic world or whatever structure or attributes they might have as “things-in-themselves.”
The experiential world becomes structured and organized by means of such regularities and invariants as the experiencer is able to abstract from his or her experience. It consists of whatever viable concepts, relationships, and models enable the experiencer to attain his goals. And in this context it is important also to remember that an experiencer’s goals are necessarily conceived and formulated in terms that are part and parcel of that experiencer’s own construction.
During the process of segmenting, relating, and structuring his or her experiential field, the experiencer develops models for “things” isolated in what is categorized as “environment” and models for this environment as a coherent “world.” But that is not all. A model will also be developed for whatever he or she has come to categorize, respectively, as “himself” or “herself.” The self, thus, is an experiential entity to which the experiencer attributes a number of specific properties, abilities, and functions.
At a certain stage, then, in the organization of the experiential field, certain items that have been isolated in the ordinary way (i.e., in the way in which the other furniture of the experiential world has been constructed) manifest insubordination and effectively refute whatever categorization is tentatively assigned to them. This may occur for the first time when the child who has managed to catch a bright beetle puts it with a collection of marbles and discovers a moment later that the marbles are still where they were but the beetle is busily crawling away. The child may then abstract the “ability to move by itself” and attribute it as an inherent property to certain items which can be managed successfully only if some such property is expected of them. In other words, the child will have to construct a rather more sophisticated model for beetles than for marbles.
As the organization of the experiential field continues and expands, perceptual capabilities, emotional reactions, intentions, and, eventually, the very faculty of experiencing in the same sense in which the subject himself or herself experiences, are attributed to a select category of experiential items. These items, finally, are seen as Others who not only experience as one does oneself, but also organize their experiential field and try, for better or for worse, to predict and to manage their own experiential futures. From this perspective, the corroboration of one’s own experience by an Other takes on a somewhat different but no less important significance. Since the Other is as much the subject’s own construction as everything else in the experiential field, the fact that an Other confirms some item one has oneself experienced, does not confer an independent “existence” on that item, but it does show that the particular construct one has used is viable, not only in the structure and organization of one’s own experience, but also as an interpretation of the Other’s way of constructing his or her experience. There arises, thus, a second level of assessing the viability of constructs: their viability in one’s interpretation of Others’ construction of reality. This second-order viability supplements the viability of regularities and rules one has coordinated in one’s “environment.” The radical constructivist, therefore, must not be thought to do away with “objectivity”—he merely defines it in a different way. Any concept, event, theory, or model will be considered “objective” if and only if it has proved to be viable not only in one’s own organization of the experiential world, but also in the particular area of conceptual organization that proves to be a viable model for the experiential worlds one imputes to others. Finally, lest someone should be inclined to conclude that the constructivist theory of knowledge would, because of its subjective component, lead to the subversion of every kind of ethics, we want to emphasize that the very notion of objectivity that is central to this theory, promises to supply a new and rather solid foundation for Kant’s Categorical Imperative. For unless we want to suffer a permanent crack in the reality we construct, we simply cannot afford to maintain ethical rules and values for ourselves that are not also viable in the models we construct to interpret the experiential worlds of Others.


This comparison is a comparison that can never be made. Xenophanes, one of the earliest among the Pre-Socratics, had already become aware of that impossibility. “If a man succeeded to the full in saying what is completely true, he himself would nevertheless be unaware of it.”

Pyrrho, a little later, formulated the argument that quickly became and still remains the cornerstone of all kinds of philosophical scepticism. How, he asked, could we ever tell whether or not the pictures our senses “convey” are accurate and true, if the only way they can be checked is again through our senses? The question is, indeed, unanswerable. It is analogous to asking, say, what the magnification of a telescope might be if nothing that is seen through the telescope can be seen or measured in any other way.
Argumentation2
First of all, it is important to realize that there are several levels of reality that differ largely in the material that is used to construct the items that are then considered “real.” An account of these levels has been provided elsewhere.[12] Here we shall give merely a brief outline with the help of a simple, prosaic example.

The conception of reality we are adopting is based on the notion of repeatability. This is a commonplace notion which, it seems, is used everywhere in conceptual construction. Imagine you are looking out the window, see a dark patch on the lawn and, the next time you look, the dark patch is gone. You now wonder what it was. If there is no ready explanation, you may conclude that it was nothing but a figment of your visual system which is showing fatigue, and you therefore dismiss the experience as illusory, which is to say, you eliminate it from the sequence of experiences that you consider “real.” If, however, the dark patch is seen a second time, you will work much harder to find an explanation for it that would allow you to consider it real. If you are unable to account for it, but you see the patch every time you look out the window, you will be considerably disturbed, because this now means either that there are inexplicable entities visiting your lawn or—no less worrying—that your perceptual system has developed a serious malfunction. In both cases, the dark patch would have acquired a higher degree of reality than it had after you had seen it only once. As a next step, you might walk out and inspect the place where you have seen or are seeing the dark patch. This could, in fact, lead to a “confirmation” of the experience in another sensory mode. If, now, there is some other perceptual discrimination that you can coordinate with the visual discrimination of the patch— the feel of sticky wetness as you put your hand on the ground, a tactual resistance, or even a smell or sound—the experience of the dark patch will make something like a quantum jump with regard to the “reality” you would assign to it. (It is true, of course, that psychologists have found cases of illusion that involve more than one sensory mode, but they are rather rare and you would be extremely reluctant to accept the idea that it is you who is having such a multimodal illusion.) Obviously, repetition would again play an important part on this second level. If the compound experience were recurrent, so that you have it again after shorter or longer intervals, you would at once assign to it a higher degree of reality than if you had had it only once. The situation may then develop in two different ways. On the one hand, you may be able to draw an analogy and coordinate the experience of the dark patch with some of the rules and regularities that you have (inductively) abstracted in some area of past experience. That is to say, you may be able to construct an “explanation” for the dark patch that conforms to, or is in harmony with, explanations you have successfully used on other occasions and in other circumstances. In that case, the explanation you have just produced would be registered as an hypothesis about the appearance of dark patches on your lawn. If you happened to be of a scientific bent of mind, you would then cease to doubt the reliability of your visual sense and you would begin to search for ways and means of “testing” your hypothesis.

On the other hand, you may decide to call your spouse or someone else, ask them to look at the particular place on the lawn, and see what happens. If, in the past, they have usually corroborated your perceptions but now do not corroborate your experience of the dark patch, you will have some difficulty in maintaining its reality. (Of course, there is always the possibility of attributing supernatural powers to yourself, but few people are willing to take that rather awesome step with so little provocation.) If, however, your witness concurs and corroborates that a dark patch can be discriminated from the rest of the lawn, then the experience makes yet another jump with regard to its reality: you now are quite sure that it “exists.”
Argumentation2
When the concept of viability is transferred to the cognitive domain, the situation changes. Here, errors are not always immediately fatal for the one who makes them. The cognizing organism can, indeed, learn. It can embark on a line of action, realize that it does not lead where it was expected to lead, and either modify the action or abort it and try something else. The method of trial, error, and retention of successful solutions is a deliberate method within the cognitive domain, whereas in the biological domain of phylogeny it is at best a fanciful, metaphorical description.[8] In the cognitive realm of conceptual structures, then, the concept of viability applies to those structures which, in the cognizing organism’s past experience, have led to success.
Argumentation2
At a certain stage, then, in the organization of the experiential field, certain items that have been isolated in the ordinary way (i.e. in the way in which the other furniture of the experiential world has been constructed) manifest insubordination and effectively refute whatever categorization is tentatively assigned to them. This may occur for the first time when the child who has managed to catch a bright beetle, puts it with a collection of marbles and discovers a moment later that the marbles are still where they were but the beetle is busily crawling away. The child may then abstract the “ability to move by itself” and attribute it as an inherent property to certain items which can be managed successfully only if some such property is expected of them. In other words, the child will have to construct a rather more sophisticated model for beetles than for marbles. As the organization of the experiential field continues and expands, perceptual capabilities, emotional reactions, intentions, and, eventually, the very faculty of experiencing in the same sense in which the subject himself or herself experiences, are attributed to a select category of experiential items. These items, finally, are seen as Others who not only experience as one does oneself, but also organize their experiential field and try, for better or for worse, to predict and to manage their own experiential futures. From this perspective, the corroboration of one’s own experience by an Other takes on a somewhat different but no less important significance. Since the Other is as much the subject’s own construction as everything else in the experiential field, the fact that an Other confirms some item one has oneself experienced, does not confer an independent “existence” on that item, but it does show that the particular construct one has used is viable, not only in the structure and organization of one’s own experience, but also as an interpretation of the Other’s way of constructing his or her experience. There arises, thus, a second level of assessing the viability of constructs: their viability in one’s interpretation of Others’ construction of reality. This second-order viability supplements the viability of regularities and rules one has coordinated in one’s “environment.” The radical constructivist, therefore, must not be thought to do away with “objectivity”—he merely defines it in a different way. Any concept, event, theory, or model will be considered “objective” if and only if it has proved to be viable not only in one’s own organization of the experiential world, but also in the particular area of conceptual organization that proves to be a viable model for the experiential worlds one imputes to others.
Argumentation2
But success is relative. The more often a particular conceptual structure has led to satisfactory results, the more closely it comes to resemble what, in the traditional way of thinking, would be called experiential or, more precisely, inductive knowledge. The resemblance, however, is misleading. In the traditional way of thinking, there is a sleight of hand that usually remains hidden even to the thinker him- or herself. It is the same trick that the statistician performs quite openly: when something has recurred a sufficient number of times, it is considered “significant”—which is to say, it is considered probable enough to be taken as a “fact.” The good statistician, of course, does not forget that it was he or she who decided the level of recurrence beyond which things were to be considered “significant.” Like the good modern physicist, the statistician does not argue that, just because the sun has risen every morning for as long as we can remember or have records, we have the right to assume that it must continue to do so in the future. With David Hume, both know that there is no conceivable logical reason why the future should resemble the past. But, for practical reasons, we tend to assume that it will. If we did not make that assumption, we could not draw any inferences at all from past experience, and our attempts at predicting and controlling future experience could not even get started.
Argumentation2
The second question to be answered concerns the cognizing activity itself, how it produces what we call “knowledge” and what relation obtains between that knowledge and the black box of ontic reality. For though we relinquish the traditional requirement that knowledge must depict, correspond to, or represent the real world, we must nevertheless (if we want to avoid the absurdity of solipsism) establish that and why what we call “knowledge” cannot be an altogether unconstrained fiction but must in some way be related to reality.

The theory of knowledge that we have called Radical Constructivism attempts to provide an answer to both these questions. It does this by replacing the relation between the knower and the known. Traditional epistemology has always taken it as a matter of course that there is a knowable ontic world and that it is the knower’s task to get to know and to describe it.[4] The activity of “knowing,” thus, was always seen as the acquisition of something that was already there. Our theory, instead, focuses on the activity of “knowing” as a constructive activity whose results are not merely compilations of material which the knower passively receives through the senses or through some other experiential conduit, but rather coordinations of elements which originate, within the knower, as products of the knower’s own activities of generating and coordinating. Isolating elements in one’s experiential field and relating them to one another are mutually dependent activities. “Knowledge” and the process of cognizing are therefore seen as inseparable. They reciprocally entail one another in the same way as drawing a “figure” entails categorizing the sheet of paper as “ground.”

Knowledge, thus, becomes the product of an active, constructive mind.
Innovationsdiskurs2
Western epistemologists have twisted and wriggled in every conceivable direction to find a way out of that impasse, and, although none of them succeeded, they staunchly continue to hope that, somehow, a way will be found.
WissenschaftlicheReferenz2
The impasse is as absolute as anything in the sphere of human thought can be, but philosophers, by and large, refused to admit it. As Hilary Putnam recently said, “…it is impossible to find a philosopher before Kant (and after the Pre-Socratics) who was not a metaphysical realist, at least about what he took to be basic or unreducible assertions…”[2]
Argumentation2
Kant, in fact, extended the sceptics’ argument beyond the area of sensory data to the very structure of experience. Pyrrho and his followers had successfully argued that if, say, an apple appears to have a certain color and a certain smell, feels smooth and tastes sweet to us, this cannot give us the knowledge that a real apple possesses these properties, because we have no way of examining the apple other than by seeing, smelling, tasting, and feeling it again. Hence, if our senses distort what they are supposed to “convey,” we have no way of ever discovering that distortion. Kant, however, pushed doubt much further. By suggesting that time and space are aspects of our human way of experiencing rather than properties of the ontic world, he cast doubt upon the very notion of thinghood. Thus, it is not only the real apple’s color, smell, smoothness, and taste that are uncertain, but we can no longer be sure that there exists a real unitary object, a “thing-in-itself,” that corresponds to the constellation of sensory properties which we isolate as an “apple” from the rest of our experiential field. The scenario, in which the knower is supposed to acquire “true” pictures or representations of the real world, is thus inherently unsatisfactory. If the knower can never be sure that the picture of the world which he or she distills from experience is unquestionably a correct representation of a world that exists as such, the knower is cast in the role of a discoverer who has no possible access to what he or she is expected to discover.
Argumentation2
All these skills were the result of observation, of trying things out and inductively retaining what seemed likely to work again. Among these observations there were, of course, many that concerned other people. Once you had come too close to a fire, felt the burning heat, and had your skin blistered, you could observe others make the same mistake—and you could conclude that they, too, had the same burning sensation as you had experienced. Thus, it seemed perfectly clear that the fire was the cause of those burns, regardless of whether they were your burns or another’s. The fire had to be there, a thing that existed in itself and for itself, and anyone coming too close would get burned. Reality was not only very tangible, but it was also pretty reliable. It caused effects in the experiencer, and the effects were sufficiently regular to warrant making predictions. The fact that many of these predictions turned out to be correct made reality all the more real and gave it stability. Small wonder, then, that when questions began to be asked about how one experiences, how one perceives, and how one comes to know, it seemed quite natural to answer them by saying that it had to be Reality that caused what one experiences, what one perceives, and what one comes to know. The scenario of knowing quite naturally took shape as a scenario that has the cognizing subject come into the world as a discoverer, as a subject that must find out what the things of the Real World are like, how they work, and in what way they can be managed. To see was to receive visual impressions, to hear was to receive sounds, and to acquire knowledge was to put all one’s perceptions together and to discover how the things that caused them were actually related and what exactly they were like. Knowledge, therefore, was knowledge of the things that caused one’s experiences, the things that were given, the data, and it could all be put together as a picture of Reality.
Innovationsdiskurs2
With the corroboration by Others, one’s experiences acquire the kind of reality that is usually called “objective.” From the traditional epistemologist’s point of view, as well as in the common sense view, this seems an iron-clad way of proceeding. Things that are perceived not only by oneself but also by Others must be “real.” The entire traditional world view, one might say, is founded on that democratic principle. Yet, from the constructivist perspective, it is not nearly as simple and straightforward.
Innovationsdiskurs2
This at once brings us to one of the major discrepancies between the traditional and the radical constructivist theories of knowledge. Professional philosophers, as a rule, carefully exclude from their consideration anything that smacks of genesis or psychological development. They speak with disdain of the “genetic fallacy” and of “psychologism,” and thus, implicitly or explicitly, perpetuate the notion that the knowledge that is worth analyzing must be objective knowledge, and therefore independent of the particular knower’s mental operations and the circumstances under which he or she came to acquire it.
Innovationsdiskurs2
Radical constructivism does not agree with this proscription. If one accepts the sceptics’ view that the human knower cannot obtain a picture of ontic reality, the question becomes: How do we come to have the “reality” we do have?
Argumentation2
We constantly make useful distinctions between what we consider “real” and what “illusory,” and between “fact” and “fiction.” If that “reality” and those “facts” are not impressed on us from the outside, we ourselves must have a way of generating them. The question, therefore, turns into: How does the human mind construct its reality? An answer to that question, then, must involve the workings of the human mind. That is to say, it must be found in an area that belongs to psychology and, specifically, to the area that investigates the operations of the mind and the generation of conceptual structures. For constructivists, then, studying the genesis of the concepts that allow us to organize our experience is not a sin but a necessity; and the way in which that genesis will be studied should undoubtedly be part of psychology, even if the psychological establishment, with the exception of Piaget and his Geneva School, has hitherto not done very much in that direction.
Argumentation2
We seem to be getting deeper and deeper into a paradox. On the one hand, we are saying, with the sceptics, that the reality we construct for ourselves cannot be considered a picture or iconic representation of an ontic world but, on the other hand, we are not admitting solipsism, although we do say that whatever “reality” we come to have must be our own construction.

The way out of this apparent paradox lies in the concept of viability, and the application of that concept is extremely simple, once we manage to get rid of the traditional interpretation of the word “to know.”[5] In our habitual way of thinking and speaking, “to know something” is intended to mean that one possesses a conceptual structure that matches some part or aspect of something that is considered ontologically real. From the constructivist perspective, this is an impossibility, and we therefore replace the notion of match with the notion of fit. It is one thing to believe that one has a conceptual structure that represents a part or an aspect of ontic reality iconically, which is to say, that all relevant differences between it and reality have been eliminated; and it is another thing to believe that one has a conceptual structure which will fit a certain type of experiential situation.

From the radical constructivist perspective, “knowledge” fits reality in much the same way that a key fits a lock that it is able to open. The fit describes a capacity of the key, not a property of the lock. When we face a novel problem, we are in much the same position as the burglar who wishes to enter a house. The “key” with which he successfully opens the door might be a paper clip, a bobby pin, a credit card, or a skillfully crafted skeleton key. All that matters is that it fits within the constraints of the particular lock and allows the burglar to get in. Similarly, the problem-solver attempts to conceive a method that will successfully open a path to his or her goal. Any method that does this will serve as well as any other, and to the extent that the problem-solver is successful, his or her know-how is functionally adapted to the constraints of unknowable ontic reality. Note that considerations as to how well a method serves its purpose are secondary in that they require reflection on what has been done as well as the introduction of ulterior values, such as speed, economy, ease of execution, compatibility with the methods used for other problems, etc.
  1. Though we are mostly unaware of it, the word “truth” has two meanings. On the one hand, knowledge is said to be “true” when it is thought to reflect the real world. On the other, we say that someone “speaks the truth” if what he or she says about an experience is what he or she said, or could have said, about that experience at some earlier time. That is, we consider a statement “true,” if it matches a statement that was made or could have been made by an experiencer in a given context. From this second conception of “truth” derive formal logic and the syllogistic rules that are manifest in deductive reasoning. Since, in this paper, we are focusing on the relation between knowledge and “reality” (or “environment”) we shall not discuss deduction and the “certain truths” of logic which, explicitly, concern statements and not their interpretation in terms of actual experience.
  2. Putnam, Hilary, Reason, Truth and History, Cambridge, U.K.: Cambridge University Press, 1981, p.49.
  3. Though Kant was the one who systematically developed and elaborated this view, it was already at least partially implicit in the theories of knowledge Berkeley and Vico published unbeknownst to each other in the year 1710.
  4. Note that even the most adamant sceptics, by denying that ontic reality can be known, implicitly confirmed the belief that ontic reality has some kind of stable, albeit unknowable, structure.
  5. To get rid of the traditional conception of knowledge is, of course, quite difficult. It is difficult not only because of the long-established way of thinking into which we have all been immersed during our formative years, but also because the language we have learned to use has been forever steeped in that naive realist tradition and has incorporated it as an unquestionable presupposition in the whole process of communication.
  6. Heinz von Foerster has remarked (personal communication, 1980) that this principle of adaptation to constraints is not an invention of cyberneticists but was implicit already in the “principle of least action” that was first formulated by Pierre-Louis de Maupertuis in the 18th century.
  7. Sociobiologists have complicated the issue by speaking of “genetic fitness,” a quantitative term rather than a binary one. But they are concerned either with species or with genes, and they frequently sound as though they were oblivious of the fact that, even in their own theory, both species and genes are dependent on the survival of individual organisms.
  8. This difference is often overlooked because biologists have at times used expressions such as “trial and error” and “tinkering” when discussing evolution (cf. Jacob, F., Evolution and tinkering, Science, 1977, 196, 1161–1166). But that is a picturesque metaphorical way of speaking because it implies a deliberate agent, such as Nature or Evolution personified, who does the “trying” and “tinkering.”
  9. cf. Maturana, H., Biology of cognition, BCL Report No.9.0, Urbana: The University of Illinois, 1970, p.39.
  10. It is worth noting that evolutionary biologists have only recently admitted that there can be mutations that produce features that are “neutral” with regard to survival and are, therefore, reproduced from generation to generation in spite of the fact that they have no function whatsoever in the struggle for survival.
  11. Note that this is the original meaning of the word “viable.” It referred to a road and implicitly contained the presupposition that one wanted to take that road in order to get to a specific location, i.e., to a goal.
  12. cf. von Glasersfeld, E., An interpretation of Piaget’s constructivism (Revue Internationale de Philosophie, 1982, 36(4), pp.623ff), where the abstraction of regularities and the processes of assimilation and accommodation are treated at greater length.