Annotations: The Logic of Scientific Fallibility

From DigiVis
Revision as of 13:31, 12 September 2019 by Admin
Argumentation2
To approach this point, I should like to tell a story. It’s a very simple little story. It reads like a fairy tale, but it is not. In fact, it is a rather serious story. I quote it from Science, the journal of the American Association for the Advancement of Science. It appeared in the issue of June 26, 1987:

In 1959, a badger broke through the security lines here at the world’s first plutonium factory (the Department of Energy facility at Hanford, in the State of Washington). The badger ignored all the warnings and dug a hole in one of the waste pits. After he left, rabbits began to stop by for an occasional lick of salt, but it was no ordinary salt they found. Before long, they scattered 200 curies of radioactive droppings over 2500 acres of the Hanford Reserve. The rabbit mess ... created one of the largest contaminated areas, one that remains hot today with cesium-137 (half-life of 30 years) and strontium-90 (half-life of 28 years).

Hanford also has trouble with ground squirrels, burrowing owls, pocket mice, insects, and plants like rabbit brush and tumbleweed. With roots that can grow 20 feet, tumbleweeds reach down into waste dumps and take up strontium-90, break off, and blow around the dry land. If the dry weeds build up and there is a brush fire, they may produce airborne contamination...[1]

Airborne contamination spreads over very much wider areas than even the most energetic rabbits can spread their pellets. The problem, therefore, is not a trivial one. That badgers and rabbits dig burrows, eat certain things, and drop little turds all over the place—these are observations that our ancestors could make, and probably did make, when they lived in their caves 40 or 50 thousand years ago. That plants grow roots and absorb chemicals from soil and subsoil has also been known for quite a while. In fact, in Milan, where I lived part of my Italian life, legend has it that Leonardo da Vinci experimented with this idea and dug some sinister substances into the soil around a peach tree, in order to see whether one could grow poisoned peaches.
How strange, you might say, that the scientists who direct the Hanford Reserve did not think of what badgers, rabbits, and tumbleweed do when they lead their normal and quite well-known lives. I shall try to show that, given the logic of science, this is not surprising. But first, let me tell another story, a story I am sure you have heard before, but it illustrates a slight variation on the first. For many thousands of years the river Nile flooded the Egyptian lowlands near the Mediterranean coast at least once a year. Vast amounts of fresh water seeped into the soil, fertilized it, and created a natural pressure against the water of the sea. The floods were a nuisance and, quite apart from this, using the Nile’s water to irrigate parts of the desert upstream seemed eminently desirable. So the Aswan Dam was built to solve these two problems. The Nile no longer got out of hand, and new land upstream could be irrigated and cultivated. For a little while the dam seemed a wonderful success of science and engineering. Then it became clear that the salt of the Mediterranean was slowly but steadily seeping into and devastating the lowlands along the coast which had fed Egypt for millennia. I do not know whether, prior to this experience, hydrologists knew much about the balance of pressures on the level of the groundwater. They certainly had the theoretical equipment and the formulas to figure it out. Yet they apparently did not do so before the Aswan Dam was built. Well, you may say, one can’t think of everything; the next time they’ll do better. I would agree with this. Scientists, engineers, even members of the medical profession, are capable of learning. But that is not the problem I have in mind.
Learning is usually defined as “modifying a behavior or a way of thinking on the basis of experience,” and there is, of course, the implication that the modification is towards effectiveness, efficiency, or, at any rate, something that makes it easier to attain the chosen goal. I have no doubt that the next time a major dumping ground for radioactive waste is chosen and prepared, someone will think of the fauna and flora and of a way to keep them from spreading the poison. And when big dams were built after Aswan, someone, I’m sure, tried to work out how the water table in downstream lands would be affected. Science and its professionals can, in fact, see more and further than the lay public—precisely because of the often uncommon experiences they have accumulated. The problem I have in mind is that they often do not look.

I would like to submit that it is, indeed, the logic of science and the scientific method that frequently stops scientists from looking outside a specific domain of possibilities.
Argumentation2
Maturana divides the scientific procedure into four steps:[2]

1. Observation. In order to count as “scientific,” an observation must be carried out under certain constraints, and the constraints must be made explicit (so that the observation can be repeated).
2. By relating the observations, a model is inductively derived—usually a model that involves causal connections. (Often an idea of the model precedes the observations of step (1) and to some extent determines their constraints.)
3. By deduction, a prediction is derived from the model, a prediction that concerns an event that has not yet been observed.
4. The scientist then sets out to observe the predicted event, and this observation must again comply with the constraints that governed observation in (1).

I am confident that all who have been trained or engaged in “doing science” will recognize in this description the famous “hypothetico-deductive method.” In fact, I have not heard of any scientists, conventional or not, who could not agree with this definition of “science.” Some might want it to include more, or to formulate it somewhat differently, but all can accept it as a minimal description of what scientists, by and large, are actually doing. What is new in Maturana’s breakdown is that it illustrates the epistemological implications in a way you will not find in any of the textbooks on “scientific method.” The four steps make clear that what matters is experience. Observing is a way of experiencing and, to be scientific, it must be regulated by certain constraints. The inductively constructed model relates experiences, not “things-in-themselves.” The predictions, too, regard experiences, not events that take place in some “real” world beyond the observer’s experiential interface.

Seen in this way, the scientific method does not refer to, nor does it need, the assumption of an “objective” ontological reality—it concerns exclusively the experiential world of observers.
Argumentation2
In the roughly two hundred years since Hume, quite a few thinkers have advanced the analysis and classification of mental operations. Two, however, stand out above all others: Immanuel Kant and Jean Piaget. The one I want to draw on here, in the discussion of the operations that constitute the “scientific method,” is Piaget. Although he did not invent the notions of assimilation and accommodation, it was he who refined them and made them generally applicable.[6] The basic principle of these operations is this: the cognitive subject organizes experience in terms of conceptual structures. The subject’s attempts at organization are always goal-directed, and if one wants to treat these goals, whatever they may be, collectively, one may subsume them by saying that the subject is intent upon maintaining some form of equilibrium. At any given moment, the subject “sees” and categorizes its experience in terms of the conceptual structures it has available. Hence the seemingly paradoxical assertion that an observer sees only what he or she already knows. This, in fact, is called “assimilation.”
Argumentation2
As Hume clearly saw, this involves the belief that the real world is essentially an orderly world in which events do not take place randomly. A hundred years after Hume, this was expressed very beautifully by the German scientist von Helmholtz, when he wrote in 1881:

It was only quite late in my life that I realized that the law of causality is nothing but our presupposition that the universe we live in is a lawful universe.[7]

That is to say, we expect the world we live in to be a world in which there are certain regularities, a world that runs according to certain rules. Since scientists call some of these regularities “Laws of Nature,” it may not be a silly question to ask how we come to know, or how we construct, regularities.

Let me return to Maturana’s methodology: the second step in his breakdown was relating observations, and relating them in such a way that one comes up with a model. A scientific model, of course, is no more and no less than the crystallization of specific regularities. In order to speak of specific regularities, one must be fairly precise about the events that are claimed to be regular. That is to say, one must define certain experiences so that one can recognize them when one experiences them again. There can hardly be regularity before one has noticed repetition.
Innovationsdiskurs2
Conventional psychologists have tried to make hay of that apparent paradox—and they have succeeded whenever they were able to obscure the fact that, in Piaget’s theory, assimilation always goes together with accommodation. They frequently announced that, if the notion of assimilation were correct, infants could never acquire new behaviors or new ideas. Let me try to correct this misinterpretation as simply as I can.
WissenschaftlicheReferenz2
The conventional approach to science has always maintained that the more people observe a thing, the more “real” that thing must be. Yet, the skeptics, ever since Pyrrho in the 3rd century B.C., have produced quite irrefutable arguments against this view. And, in our time, Paul Feyerabend has spoken against it from a different perspective. In his essay How to be a Good Empiricist, he argues as follows. (I have taken the liberty of leaving out a couple of paragraphs that refer to quantum mechanics and of changing one word— I have substituted the word “model” where he has “theory”):

... assume that the pursuit of a (theoretical) model has led to success and that the model has explained in a satisfactory manner circumstances that had been unintelligible for quite some time. This gives empirical support to an idea which to start with seemed to possess only this advantage: It was interesting and intriguing. The concentration upon the model will now be reinforced, the attitude towards alternatives will become less tolerant.

... At the same time it is evident, ... that this appearance of success cannot be regarded as a sign of truth and correspondence with nature. Quite the contrary, the suspicion arises that the absence of major difficulties is a result of the decrease of empirical content brought about by the elimination of alternatives, and of facts that can be discovered with the help of these alternatives only. In other words, the suspicion arises that this alleged success is due to the fact that in the process of application to new domains the model has been turned into a metaphysical system. Such a system will of course be very ‘successful’ not, however, because it agrees so well with the facts, but because no facts have been specified that would constitute a test and because some such facts have even been removed. Its ‘success’ is entirely man-made. It was decided to stick to some ideas and the result was, quite naturally, the survival of these ideas.[3]
WissenschaftlicheReferenz2
Reading Locke, one soon discovers that, while he does indeed argue against the notion of innate ideas, he also states that the source of all our “complex ideas” is not the material we get from the senses, but the mind’s reflection upon its own operations.[4]
WissenschaftlicheReferenz2
Hume, then, in spite of some nasty things he said about his predecessor, took Locke’s injunction seriously. “It becomes,” he says, “no inconsiderable Part of Science to know the different Operations of the Mind, to separate them from each other, to class them under their proper Divisions,...”[5] And he explicitly says that this can be done when these operations are made the “object of reflection.” Hume admittedly simplifies that task quite drastically. He reduces the relations that the operations of the mind produce to three: Contiguity, Similarity, and Cause/Effect.