Friday, January 12, 2018

The Demeaning of Life...Chapter 4. Scientific Materialism (Comes With Excess Baggage)

The problem is to construct a third view, one that sees the entire world neither as an indissoluble whole nor with the equally incorrect, but currently dominant, view that at every level the world is made up of bits and pieces that can be isolated and that have properties that can be studied in isolation.… In the end, they prevent a rich understanding of nature and prevent us from solving the problems to which science is supposed to apply itself.
   Richard Lewontin, Biology as Ideology: The Doctrine of DNA   
           
The brilliant 17th-century French mathematician and philosopher René Descartes is credited with being the first to conceive of the entire universe as inherently intelligible and open to rational investigation, an arena for intellectual pursuits of all kinds. For its time, this idea was a bold leap. And a philosophical derivative of this so-called Cartesian worldview gave birth to the basic premises of the scientific method.

The philosophic stance of materialistic naturalism (materialism) denotes a belief that physical matter is the only reality and therefore everything in the universe (including thought, feeling, perception, and will) can be explained solely in terms of physical law. By extension, materialism holds that all phenomena are the result of natural causes and require no explanation involving any sort of moral, spiritual, or supernatural influence. Materialism’s capacity to accurately describe the world we experience provides a powerful tool for making predictions that extend the use and control of nature. A materialistic approach has helped beget most of the advances that make our current way of life possible; from medicines and improved food crops to scanning electron microscopes and Doppler radar, science’s creative innovations surround us at all times.

Viewing the world from a materialistic standpoint provides a concrete way of seeing things. For instance, very little was known about genes until the discovery of DNA confirmed their material nature. Many experimental doors were thrown open once it was perceived that the double helix acted as an actual bearer of hereditary information. After that historic shift, impressive successes within the fledgling field of molecular biology (thanks to brand-new technologies) provided confidence that unscrambling the code hidden within the physical substance of heredity would reveal evolution’s secrets. Practically overnight, scientists were able to analyze and manipulate enzymes and nucleic acids. This led to an attitude of unreserved certainty that life, just like other aspects of the material universe, would eventually be explained purely on the basis of its chemical nature and molecular parts. Riding the wave, Francis Crick (a steadfast physicalist) famously wrote, “The ultimate aim of the modern movement in biology is to explain all biology in terms of physics and chemistry.” Following its inauguration, the modern Darwinian synthesis soon became recognized as biology’s foundation and to this day remains the dominant paradigm of evolutionary thought.

However, biochemical research conducted during the last few decades, together with findings from new lines of enquiry, has shown that all biochemical processes are far more complicated than was previously imagined and in some instances (as in the case of the “behavior” of DNA repair enzymes) appear to go beyond the dictates of basic chemistry. Unsurprisingly, biochemists and molecular biologists, long committed to their materialist stance, are unwilling to so much as consider that living things function by means other than straightforward chemical processes.

The practice of science holds a genuine (and powerful) taboo against taking into account hypotheses that might open the door to supernatural explanations. However, for the same reasons that our understanding of what life is deserves a fresh look, newly revealed forms of biological complexity invite a reassessment of the way we perceive natural processes. Such re-evaluation should extend to current evolutionary thought, origin of life theories, genetics, and developmental biology.

A conceptual adjustment is actually well underway. (This, despite a lack of media attention, which nowadays correlates to a lack of public awareness.) Currently, the Gaia hypothesis—the notion that our entire planet can be regarded as a colossal organism—is gaining increasing support. A somewhat similar viewpoint was held by Russian scientist Vladimir Vernadsky, whose ideas never gained wide interest in the West but deserve more attention. His idiosyncratic theories are related to the Gaia hypothesis but precede it by decades.[1] Unlike his early 20th-century contemporaries, Vernadsky saw life as a geological force that had a profound effect on Earth through the chemical and physical alteration of its surface, and by transforming and transporting materials around the globe. In line with the Gaia concept, he saw life as a planet-wide phenomenon—but more so as part of a larger process. Vernadsky did not regard life independently from other material forms or aspects of nature; it was simply “living matter.” (He sometimes referred to life as “animated water.”) Furthermore, he pictured the entire biosphere—including both animate and inanimate matter—as being alive. By rejecting accepted principles and manners of categorizing and labeling, Vernadsky was able to formulate an entirely original and conceptually sound world model. In the future, a profound shift in our conceptualization of life might follow similar lines.

In contrast to such ideas, the materialistic worldview appeals by way of its concrete simplicity. One hugely influential derivative of materialism was the cultivation of a powerful idea: all entities are best studied and can be understood by reducing them to their parts. This is known as reductionism or reductive thinking, an absolutely essential tool for acquiring empirical knowledge.[2] How could anatomy be tackled except through the study of individual organs, their tissues, and their tissues’ cells? In biology, this approach has continually proven its worth and has been key to most advances.

While reductionism is and always will be an essential scientific practice, there is currently a nascent movement away from purely reductive thinking in the biosciences toward a more wide-ranging “systems” approach, which a growing number of scientists (in certain fields, at least) regard as the way of the future.[3] Reductionist methods, ideal for explaining how circulatory systems function or nerve impulses are conducted, might prove inadequate in cases involving the layers upon layers of complexity ubiquitous to seamlessly integrated life processes. It should be noted that many such criticisms leveled against reductive thinking are actually targeted at what has been termed radical reductionism—the assumption that large-scale phenomena can be explained entirely by what occurs at smaller scales. (As in, “biology is nothing more than chemistry.”)

One of the main limitations of reductive thinking in biology is a past tendency to downplay the import of so-called emergent properties—patterns, behaviors, or traits that cannot be deduced from either lower or higher levels of association. (The deliberate, organismic “behavior” of DNA helper molecules being a fine example.) This trend is changing, however, with the role of emergence gaining increased attention throughout the biosciences. Nevertheless, despite its recognized limitations, reductionism will doubtless continue to occupy its central role as the backbone of all scientific inquiry.

Another unforeseen spinoff of endless scientific triumphs has been the previously mentioned tendency to fragment knowledge into discrete areas of specialization. Biology professor and science writer Rob Dunn writes of the resulting quandary: “For individuals, it has become more difficult to have a broad perspective. The scientists of each field have developed more and more specific words and concepts for their quarry. It is now difficult for a neurobiologist to understand a nephrologist and vice versa, but it is even difficult for different neurobiologists to understand each other. The average individual’s ability to understand other scientific realms has become limited…. The more divided into tiny parts a field is, the less likely some types of big discoveries become.… Very few individuals are standing far enough back from what they are looking at to be able to make big conceptual breakthroughs.”

Entirely new sub-sub-disciplines have emerged in recent decades—paleomicrobiology, endocytobiology, and biogeochemistry, to name a few—and this compartmentalization of knowledge leads to an excessively narrow focus on complex but relatively minor issues. It is typical today for a scientist to devote years, even an entire career, to a single feature of one type of cell in one specific organism. This extreme specialization inexorably leads to distorted perspectives, affecting overall views relevant to a researcher’s own discipline.[4] The same can take place on a larger scale in terms of collective effects. An example of the latter, pertinent to this narrative: only a century ago, the intellectual isolation of several interrelated fields led to a sort of scientific provincialism that created a need for the modern synthesis.

Neo-Darwinism rose to become the dominant working model of evolutionary pathways by virtue of that succession of breathtaking new discoveries in molecular biology. Due in part to the lucid and entertaining writings of authors such as Richard Dawkins, Carl Zimmer, and Sean B. Carroll, the elegant simplicity of neo-Darwinian precepts leads scientifically literate people to believe that the baffling problems still faced by evolutionary theory have by and large already been solved—or soon will be.

Among our well-educated populace, many of whom hold an image of reality now effectively delineated in scientific terms, a mind-set is often on display that could be characterized as a self-assured conviction that their worldview represents an objective description of the “real” world. The people of any given era and culture share a generally agreed-upon overall worldview; those inhabiting a given period feel sure that their view of reality—one supplanting the mistaken and antiquated beliefs of generations past—is balanced and accurate. Historians have identified this overall outlook as a recurring cultural illusion that has been called “the arrogance of the present.” So: why is it that we continue to be under the sway of this cultural delusion? Virtually every person living in our time has been swept up in the relentless flood of technology-driven innovations that have transformed entire cultures in the span of a single generation. No lives have been left untouched by the changes, many of which have taken place with a rapidity never experienced in human history. Meanwhile, the apparent boundlessness of technological advancement has become an assumed part of our collective birthright.

One might imagine that this attitude would be accompanied by a widespread awareness of a chronic historical pattern exacerbated by this “arrogance of the present.” And that the humility-inducing insight would lead to an equally widespread recognition that certain things fervently believed to be watertight fact will, inevitably, become obsolete anachronisms. During a speech at Cambridge in 1923, J. B. S. Haldane—a leading popularizer of science in his day—said, “Science is in its infancy, and we can foretell little of the future save that the thing that has not been is the thing that shall be; that no beliefs, no values, no institutions are safe.” This was a prescient statement in that bygone era and its truth is undiminished almost a century later. But alas, another observable historical pattern bears witness to human myopia: History is made…recorded…and then ignored. As always, in the future our grandchildren will look back on their ancestors’ quaint and primitive ways, marvel at their strange clothes and hilariously crude machines, and yearn for simpler times.

Another thing: religion, especially among the well-educated, has been displaced by science as a reality-defining milieu. Faith in God has been exchanged for faith in the power of science to such an extent that there is a term (in use since the mid-1800s) for what has been recognized as a philosophy or even a quasi-religion: scientism.[5] As such, it represents an unabashedly materialistic standpoint, maintaining that only empirical methods are capable of providing accurate views of reality and that other modes of thought, if not just plain wrong, lack substance. Those unfamiliar with the nitty-gritty of actual research or how things work in academia often seem oblivious to how messy the scientific arena can be, with incessant struggles for funding, researchers’ sometimes slipshod work, bitter rivalries and envy—even the occasional outright fraud.

Research is still well-funded and supported in the United States, but it is controlled to a great extent by universities, which are in turn funded and controlled by corporate interests. To an extent virtually unknown to average citizens, the sort of research being done today is often dictated by potential economic benefits (which are frequently presented as social benefits). The gigantic corporations—the Monsantos and Mercks and 3Ms—exemplify the influence of big business on science. Researchers find themselves hobbled while new college graduates are funneled into fields they are not particularly drawn to and may not have chosen on their own. There are many stories of researchers’ work being completely controlled and manipulated, their findings masked or withheld. The results of scientific investigations are often “owned” by the controlling entity, and protests (which could lead to a loss of funding) are suppressed. All this presents a serious impediment to the sort of going-by-feel methods that, in the past, resulted in significant—and often serendipitous—discoveries.

The importance of academic publication is well known; one’s career can hinge on sheer output. The peer review process has become highly politicized, and something as trivial as a personal rivalry or ideological conflict with one member of a review panel can result in a paper being buried or left unpublished for years. Again, the net result is a stifling of creativity and intellectual freedom. Scientific research today is largely technology-driven and extremely expensive; corporations are reluctant to provide funding for “long-shot” projects. Another unforeseen consequence of these issues is that crucial follow-up testing (necessary for replicating previous experiments and checking hypotheses) is simply not done—being unglamorous and unlikely to bring prestige or attract additional funding.

This may seem to be an unduly cynical take on the state of current scientific research. In truth, there is a continuous stream of important new findings and the occasional groundbreaking discovery. My intention is neither to belittle the substance of current work nor to denigrate scientists or venerable institutions. But neither should the stark reality of science being perpetually subject to human foibles and failings be downplayed. Astronomy and physics researcher Alan Lightman put it this way: “One must distinguish between science and the practice of science. Science is an ideal, a conception of logical laws acting in the world and a set of tools for discovering those laws. By contrast, the practice of science is a human affair, complicated by all the bedraggled but marvelous psychology that makes us human.” Stephen Jay Gould further emphasizes that practitioners of science often fall prey to cultural predispositions: “Our ways of learning about the world are strongly influenced by the social preconceptions and biased modes of thinking that each scientist must apply to any problem. The stereotype of a fully rational and objective ‘scientific method’ with individual scientists as logical (and interchangeable) robots is self-serving mythology.”

The truth behind Gould’s words is exposed by another striking historic pattern: phenomena being accounted for in language linked to a particular era’s latest promising discovery or leading technology. In times past, the universally acknowledged biological vital force was ascribed to fire, magnetism, electricity…even radioactivity. And thus has there been a tendency to describe observable fact using terminology borrowed from—in succession—clock making, steam power, industrial machinery, radio technology, and electrical engineering. In just the last century, life processes have been framed in terms of quantum dynamic effects, computer science, game theory, and nonlinear systems (the focus of so-called chaos theory). Not surprisingly, the burgeoning field of information theory is presently the chief candidate for holding the key to understanding life.

Recognizing the consequences of such predisposition-creating influences reveals the subtle effect culture has on the practice of science. Is it coincidental that the contemporary movement away from a mechanistic to a more informational perspective in the biosciences parallels our conversion from a largely mechanical-industrial society to one that is information-driven? Tellingly, it is difficult even to imagine from what technological innovation future contextual aids might be drawn.


The doctrine of materialism, aside from being discussed in philosophy courses, is never explicitly taught in the classroom. Many students and scientists are unaware of how much its influence has shaped their assumptions and practices. In our time, there is a widely held conviction that scientists, in order to be considered scientists, are limited exclusively to materialist explanations for all phenomena. Materialistic naturalism grants no basic worth or import to nature’s rich pageantry, not to mention the preeminence of mind. Brushing aside life’s inscrutable character, a materialistic approach pays no heed to the reality of things beyond its scope. Taken to an extreme, materialism goes so far as to argue that concepts such as beauty and morality are illusions serving no constructive purpose…that all living things (including humanity) are the result of chance events…that life itself, ultimately, has no object or underlying significance.

This is the stance of well-known evolutionary scientists such as Jacques Monod, George C. Williams, and Richard Dawkins—each of whom has rebuked (in some instances quite harshly) those who make the cardinal error of inserting subjectivity into matters that lie within science’s purview. Their position is entirely justifiable in the context of science dealing exclusively with matters within reach of external verification, not with things that can only be experienced. However, those authors go beyond simply reminding their readers that science is powerless (or, more to the point, is the wrong tool) to make judgments about concerns like morality or beauty. Instead, they steadfastly insist that the products of our minds and sense organs have no objective value per se, aside from their contributions toward genetic success. Dawkins informs us that life “is just bytes and bytes and bytes of digital information”…that life’s sole “purpose” is for DNA to make more DNA. (Oddly, he never considers asking why this should be so.)

Once again: we do not even know what life is, much less what—if any—its “purpose” might be. Science, taken as a whole, is perhaps humanity’s single greatest innovation and our bequest to a bright future (or, in the ever-astute S.J. Gould’s more incisive words, “whatever forever we allow ourselves”). Lifting us out of darker ages, the work of all those individuals, building on that of their predecessors, made our way of life possible. 

Nonetheless, we are not even close to knowing precisely how our senses work, what dreams are for, or where memories reside. Consciousness, perhaps the greatest of all mysteries, remains a variegated enigma. Lacking humility, we persist in taking as a given our ability to perceive, evaluate, and act with consistent propriety—or restraint. Which has (to put it mildly) led to problems. Humility is a natural partner to inquisitiveness and, without it, knowledge lacks perspective.

Renowned physicist Richard Feynman, widely considered in his day to be one of the most brilliant people alive, said in an interview: “I can live with doubt and uncertainty and not knowing. I think it’s much more interesting to live not knowing than to have answers which might be wrong. I have approximate answers and possible beliefs and different degrees of certainty about different things but I’m not absolutely sure of anything and there are many things I don’t know anything about such as whether it means anything to ask why we’re here…. I don’t have to know an answer. I don’t feel frightened by not knowing things….” While it is generally assumed that scientists are objective and impartial in their views, such intellectual bravery and humility as this is quite rare. For Richard Feynman (who suffered from terminal curiosity) it always came down to the pure joy of discovery. Darwin also displayed this quality in spades; he not only had the courage to question his own views, but openly invited others to challenge his cherished theories.

So why this tendency to feel such certainty, such resolute assurance, that the wondrous things all around us ultimately have no importance…or are of no particular consequence? Why the need for such staunch conviction? And what exactly does this say about our culture, that we denigrate life so? Even if philosophy and religion are left out of the picture entirely, individuals will still seek meaning and purpose in the world, invite beauty into their lives, and go on living by a moral code based on what they believe to be objective values. Such things remain fundamental, inescapable aspects of what it is to be human—for mystics, atheists, and rational materialists alike.

We eternally owe a debt of gratitude to those great minds that made our modern way of life possible, on whose shoulders today’s scientists stand. Yes, certainly to René Descartes. But, in an essay (more akin to a manifesto) co-written by Dorion Sagan, Lynn Margulis, and Ricardo Guerrero, we are reminded of another way: “Perhaps Descartes did not dare admit the celebratory sensuality of life’s exuberance. He negated that the will to live and grow emanating from all live beings, human and nonhuman, is declared by their simple presence. He ignored the existence of nonhuman sensuality. His legacy of denial has led to mechanistic unstated assumptions. Nearly all our scientific colleagues still seek “mechanisms” to “explain” matter, and they expect laws to emerge amenable to mathematical analysis. We demur; we should shed Descartes’ legacy that surrounds us still and replace it with a deeper understanding of life’s sentience. In Samuel Butler’s terms, it is time to put the life back into biology.”[6]

An appropriate sentiment, though one not likely to gain much traction, given the climate of our time. For my part, I am unable to shake this burning conviction: There is a deeper reality behind the way we currently perceive nature. At each level one finds that same “celebratory exuberance”—in DNA, in cells, bodies, populations, ecosystems. Are we missing something? Is that not at least a possibility? For instance, as with DNA, our views of how cells work show how little we credit the creative and formative powers of Natural Design. Of nature itself.
                                                                              
©2017 Tim Forsell                          
19 Nov 2017

[1] Vladimir Ivanovich Vernadsky (1863–1945) was considered one of the founders of both geochemistry and radiometric dating and also popularized the concept of the noösphere. “In Vernadsky’s theory of the Earth’s development, the noösphere [human and technological] is the third stage in the earth’s development, after the geosphere (inanimate matter) and the biosphere (biological life). Just as the emergence of life fundamentally transformed the geosphere, the emergence of human cognition will fundamentally transform the biosphere. In this theory, the principles of both life and cognition are essential features of the Earth’s evolution, and must have been implicit in the earth all along.… Vernadsky was an important pioneer of the scientific bases for the environmental sciences.”
[2] The first use of reductive thinking is attributed to Thales of Miletus (c. 624–c. 546 BC) or, by some, to Aristotle (384–322 BC).
[3] The approach may have limited application. In an attempt to understand the complexity of a system, vast amounts of data are plugged into a computer model. But the model itself can quickly become so complex that it is virtually useless. In addition, it is unrealistic to expect that all applicable information can be gathered, that it will be accurate, or that all factors in the interactions being considered are relevant.
[4] Niles Eldredge writes, “When I did my initial study on the Phacops rana trilobite lineage, I followed tradition and looked only at my trilobites. I ignored, except in passing, the 300-odd other species that lived in the same ancient Devonian seaways. I saw their fossils lying about, but only had time, and eyes, for my own quarry. ’Twas ever thus…. Everybody focuses on a particular group, and such specialization quickly becomes forbidding. In the nineteenth century, a paleontologist could reasonably expect to become master of an entire fauna, or even a series of faunas, if one’s career lasted long enough…. Not so any more. Paleontologists work hard to learn the anatomical and classificatory intricacies of a single group.”
[5] The term, in our time, is often used pejoratively by those who insist that scientism results in an impoverished worldview.
[6] It is worth remembering that Descartes performed vivisections on still-living dogs, believing that animals (being without minds) were incapable of feeling.



