Denise Caruso on the inability of experts to correctly evaluate their own work.
In her important new book, Intervention: Confronting the Real Risks of Genetic Engineering and Life on a Biotech Planet, Caruso takes a critical look at the risks to society presented by innovation and technology. Commenting on this work, Peter G. Neumann recently wrote in RISKS DIGEST: "There are many problems and lessons to be learned from what we have in common. It is important for everyone to see that these problems are generic and relevant to essentially all technologies, not just computer systems." Denise Caruso is a seasoned technology analyst and journalist, and was the Digital Commerce columnist for the New York Times before cofounding and serving as executive director of the Hybrid Vigor Institute, a not-for-profit research and consulting practice. One focus of the Institute is the development of new methods for assessing the risks of innovations in science and technology.
John Veitch - I reprint this chapter for reasons of my own. The theme I see repeated here is one that has concerned me for a long time. We see our own actions with the internal knowledge that we "intended to do good," and so we excuse ourselves from any blame when our own ideas prove to be wrong. Much is written in the law of most countries about the importance of "self-regulation" of industries and professions. There's plenty of evidence that self-regulation doesn't work. You can see another glaring example in American politics as the war in Iraq runs into its fifth year and looks endless.
"Victory has a thousand fathers, but defeat is an orphan", said a rueful John F. Kennedy after the Bay of Pigs. George W. Bush knows today whereof his predecessor spoke. The Neocons are still convinced that the idea of launching an unprovoked war of liberation, for which they had beaten the drums for half a decade before 9-11, remains a lovely concept. Professor Eliot Cohen of Johns Hopkins, whose book on war leaders Bush used to carry about, says his mistake was in not knowing "how incredibly incompetent" the Bush team would be. No neocon concedes that the very idea of launching an unprovoked war against a country in the heart of the Arab world - one that had not attacked the USA, did not threaten the USA and did not want war with the USA - might not be wildly welcomed by the "liberated." No neocon has yet conceded that Bismarck may have been right when he warned, "Preventive war is like committing suicide out of fear of death."
"Huge mistakes were made," says Richard Perle, "and I want to be very clear on this: They were not made by neoconservatives. ... I'm getting damn tired of being described as an architect of the war."
You get the picture. Everyone else is guilty BUT NOT ME. Does this disease exist where you live? To be honest about it, we have to say yes. This is about being human. We are human, and this is how humans behave. It's not good for us, not good for our companies or countries, and it betrays those who trust us to make good decisions. How do we protect ourselves from the wrongness of some of our own ideas? The solution is not "positive thinking". The solution lies in better communication and better use of our imagination. I ask you to read the following with your self-analysis capabilities turned on.
"The question is, how do you prepare to be wrong? If you know you can't walk away from the consequences of what you do, how do you not screw it up?" said Todd La Porte, sitting across from me in the dappled light of the faculty dining room at the University of California, Berkeley. La Porte, a former Marine, is also a veteran political scientist who is internationally known for his thoughtful study of "long-term stewardship" of man-made hazards; that is, how a society prepares to take care of the messes it has made that it can't get rid of, generations into the future.
La Porte has spent many years studying how nuclear engineers and scientists go about the business of containing radioactive waste, which to date is the most persistent toxic substance known to (and created by) man. I had contacted him when I first started my research into risk and genetic engineering. It occurred to me that if something bad happened as a result of our self-assured release of transgenic organisms throughout the world, we might eventually need to have a more intimate understanding of his work. For starters, as La Porte noted, "nuclear waste doesn't reproduce." A population of living, multiplying transgenic organisms gone awry could end up being significantly more difficult to contain than radioactive sludge.
And while the thought of being wrong about having stocked the entire planet with self-replicating hazards was sobering enough, La Porte posed yet another, equally troubling question about the topic of my inquiry: "How are you going to get the scientists to listen to you?"
After decades of study, La Porte himself had no answer. "My experience with technical people, with scientists, is that they're utopians and they see us as the problem," he said. "This was a tragedy in the nuclear industry."
Nuclear scientists, said La Porte, entered their profession believing they were doing something good for the world by developing what was then called "atomic energy." Many of us remember this era, when nuclear energy was pervasively (and now infamously) touted by the nuclear industry as 100 percent safe, clean and "too cheap to meter." The scientific basis for those claims was accurate as far as it went, but clearly it didn't go far enough. When the industry's claims of safety literally blew up - with operator and engineering errors triggering the meltdowns at Three Mile Island in 1979 and Chernobyl in 1986 - the public rejected the technology as too risky for the benefits promised by its government and industry champions.
"To think that other people might suffer as a result of their actions is not part of the expert's world, or it gets pushed away in the drive to deploy the technology," said La Porte. "But what are the consequences if it turns out that all the things they believed in are wrong? That's really hard. And most technical people can't talk about this. What they do is theology to them, not science."
The untested technology in question is, of course, biotechnology. Using a laboratory technique known as "recombinant DNA," scientists now can splice together the genetic material from deep within the cells of two or more organisms of different species. As a result, they can "engineer" living hybrids with new traits that would have been impossible to create using traditional breeding techniques.
Genetic engineering commenced what was heralded as a new era, both of scientific discovery and commercial potential. The technique itself was quickly patented, and the first biotech company, Genentech, Inc., was launched in 1976. A torrent of research and experimentation followed, and a new generation of genetic engineers immediately began to add, remove or otherwise modify the DNA of all kinds of living things. The term "genome" had long since been understood to describe all the genes in an organism. But this newfound ability to directly manipulate individual DNA sequences to change the way that organisms behave provided new impetus to discover and map as many genes and their functions as possible. In 1977, for the first time, the entire complement of genetic material in a biological entity - a virus that kills bacteria, called a bacteriophage - was mapped and published.
Many more genomes were mapped and published in subsequent decades. But the climax of these efforts was the dramatic completion of a working draft of the human genome map in June 2000. For many people, this historic achievement - combined with the power of recombinant DNA to re-engineer the structures and behaviors not just of microbes, plants, and other animals, but of humans as well - inspired researchers to dream big about how humankind could use this knowledge.
But what we know from history is that every promise based on discovery or invention, no matter how positive, comes factory-equipped with its own unintended dark-side consequences. For all the utopian results that genetic engineers have imagined for us, the ability to "rewire" the genetic material of living organisms could just as plausibly yield an equal and opposite nightmare. It is not especially difficult to come up with scenarios whereby mucking around in the genes of living organisms leads to serious biological, social and/or economic disruption. Yet neither knowledge of history nor dark-side scenarios has tempered the zeal or the speed with which the products of genetic engineering are being dispatched into the global marketplace.
Are the experts who build these products thinking critically about these dark possibilities? What set of facts, based on what specific scientific knowledge, have they provided to government regulators who decide whether the products of genetic engineering are safe? Do either the scientists or the regulators know enough about what they're doing with this largely unexplored science to speed biotech products to market as quickly as they are today?
On the surface, there's no denying public ignorance. As the Canadian philosopher John Ralston Saul wrote, "When faced by questioning from non-experts, the scientist invariably retreats behind veils of complication and specialization, [making] it impossible for the citizen to know and to understand, and therefore to act, except in ignorance." Yet the claim that ordinary people are incapable of understanding the risks of scientific and technological interventions has been proven patently untrue, time and again, by risk researchers.
To begin with, those who discover, invent or work with new technologies are often spectacularly nearsighted about the risks those technologies create. To deny this is to ignore at least a century of the history of biology and technological advancement.
The tragedy of the drug DES, for example, continues to reverberate through generations. As many as 10 million pregnant women in the U.S. alone took diethylstilbestrol, a synthetic estrogen, between 1940 and 1971 - despite several studies that showed it to be ineffective - in the hope that it would prevent miscarriage. But in 1971, researchers discovered a link between prenatal DES exposure and what had until then been a rare cancer in the daughters of DES mothers: clear cell adenocarcinoma.
Animal studies a decade earlier had signaled possible links between early estrogen exposure and later cancers in offspring, yet these findings had been dismissed and considered irrelevant to human health by doctors and drug makers, as well as by the U.S. Food and Drug Administration (FDA), which had approved DES. But even after human studies made the linkage irrefutable, researchers had to fight to get colleagues in the scientific and medical communities to believe the proof. Remarkably, the skepticism continues, even as many more problems have surfaced in the subsequent decades, some of which also affect sons of DES mothers. Research now shows that even the children of DES children are at high risk for cancer and other DES-related health problems.
A similar health crisis is already well under way as a result of our overuse of man-made antibiotics - namely, the steep increase in antibiotic resistance that many dangerous pathogens have developed as a result.
Antibiotics were once considered miracle drugs that, for the first time in history, greatly reduced the probability that people would die from common bacterial infections. But once these new drugs became cheap and readily available, doctors prescribed them for virtually every ailment, often thoughtlessly or incorrectly. As a result, bacteria became immune to the drugs that once killed them.
Resistance to antibiotics has become pervasive among pathogens that infect people and animals all around the world. In hospitals in particular, patients often contract "superbugs," like Staphylococcus aureus or Streptococcus pneumoniae, that are now virtually unkillable. Staph infections, for example, are already resistant to common antibiotics like penicillin, methicillin, tetracycline and erythromycin. As a result, these low-cost treatments have become practically useless for common infections. This leads to more frequent use of newer and more expensive compounds, which in turn leads inexorably to the rise of resistance to the new drugs as well. A never-ending, ever-spiraling race to discover new and different antibiotics has ensued, just to prevent losing further ground in the battle against infection.
The situation is worsened by the fact that the genetic material responsible for conferring antibiotic resistance can move with relative ease between different species of bacteria. This is evolutionary selection in action: the transfer of resistance makes it possible for pathogens never exposed to an antibiotic to acquire resistance from those that have been and thus survive. (Antibiotic-resistant genes play an important role in genetic engineering as well, as you'll see.)
Another great concern is that in the United States, antibiotics are still routinely included in the diets of healthy livestock, for no reason other than to make the animals grow faster. But now the bacteria the animals harbor have become widely resistant to antibiotics, too. It has been well documented by the U.S. Centers for Disease Control and Prevention (CDC) and by the U.S. FDA that since the time these farm animals were first fed medically unnecessary doses of antibiotics, the meat supply has become highly contaminated with bacteria. What's more, foodborne illness has become a much more serious problem - especially illnesses caused by Salmonella, Campylobacter and E. coli, pathogens that are resistant to nearly all antibiotics. In addition to the issue of foodborne illness from contamination, the resistant bacteria get passed along to humans who eat resistant animals, like chickens and cows, or their products, like eggs and milk.
As a result of this growing problem, many countries have long since banned the use of antibiotics for growth promotion or disease prevention. In the U.S., however, it took until March 2004 before the FDA disallowed just a single type of antibiotic - enrofloxacin - that was widely used in poultry. Enrofloxacin in animals metabolizes into ciprofloxacin, a.k.a. Cipro, the drug that made headlines in 2001 as the treatment of choice for humans who inhale anthrax spores.
Scientific shortsightedness does not apply only to products, but to discoveries as well. It can be hard to measure how much scientific progress is held back by a research community too mired in its prejudices to accept truly revolutionary discoveries. The history of science is full of such examples, but a relevant one for the 21st century is the story of Stanley Prusiner, a neurologist at the University of California, San Francisco. Prusiner lost much of his funding, his academic tenure (temporarily), and, for many years, credibility in his field - all for research that would later bring him the Nobel Prize in Physiology or Medicine. In 1982, he discovered a strange misfolded protein that he called a "prion" (a word he derived from "proteinaceous" and "infectious") and that apparently could transmit disease.
In fact, it is now widely known that prions can transmit disease. They are the infectious agent that causes the brain-wasting disease in animals known as transmissible spongiform encephalopathy (TSE). In its variant forms, it's known as bovine spongiform encephalopathy (BSE) or mad-cow disease in cattle, scrapie in sheep and "variant Creutzfeldt-Jakob Disease" (vCJD) in humans.
The research community had believed that these brain-wasting diseases, which Prusiner had traced to prions, were caused by viruses. A virus is a parasite with no cell of its own, so it has to "hijack" the cellular machinery of another organism in order to reproduce and become infectious. A virus can do this because it carries its own genetic instructions - DNA, or the RNA molecules that help decode the information carried by DNA. But prions do not contain DNA or RNA. In fact, prions are the only known infectious agents that don't contain DNA or RNA. Bacteria contain DNA. So do fungi, parasites and protozoa. So when Prusiner isolated the infective protein particle, scientists simply refused to believe that it contained no genetic material. In fact, more than 20 years after his discovery, some definitions still call prions infective particles "which (almost certainly) do not have a nucleic acid genome."
Despite Prusiner's prior achievements, many scientists in the research community also discounted his more recent claims that prions reside not only in the spinal cord but also in the muscle tissue of animals that we eat. Yet by 2006, prions had in fact been discovered in many other parts of animals, including the muscle tissue of North American deer and elk. And many of these scientists presently reject his ideas about the relationship of prion diseases to other disorders, such as Alzheimer's and Parkinson's diseases. Time will tell who prevails.
Scientists don't hurt only themselves with this kind of behavior. They hurt us, too. By refusing, for whatever reasons, to look beyond the narrow boundaries of their own expertise, they often have overlooked the cause of problems as well as potential cures or solutions. Prusiner himself best sums up what scientists risk by indulging in these dogmatic attitudes: "While it is quite reasonable for scientists to be skeptical of new ideas that do not fit within the accepted realm of scientific knowledge," he wrote with great understatement in his Nobel autobiography, "the best science often emerges from situations where results carefully obtained do not fit within the accepted paradigms."
What's even more distressing is how frequently scientists reject these "results carefully obtained" when they actually do fit within the bounds of paradigms they understand. This type of scientific myopia may be closest in spirit to the issues in question around genetic engineering. It is also where we find what may be the most persistently damaging effect of shortsightedness: the destructive and exponential growth of invasive species.
People make mistakes and accidents happen, of course; unintended infestations by alien plants, animals and microbes are one of the risks of global mobility. But even more distressing are the invasive species that were purposely introduced. Time and again, various government agencies and people with the best of intentions, having had quite enough of one pest or another, have imported various critters to combat them. These immigrants, deliberately introduced, often become pests far worse than those they were brought in to eradicate.
One notorious example is the Hawaiian cane toad, Bufo marinus, brought into Australia in 1935 to rid its sugarcane plantations of cane beetles. The brains behind this idea was the Australian Bureau of Sugar Experimental Stations, which apparently didn't ask Bufo for references before hiring. The fact is that this toad has an immense appetite for everything but the cane beetle. It is big and aggressive, and its skin is poisonous to any natural predator - except one lone snake species, which is destined never to hunger again. Worse yet, the tadpoles of cane toads mature earlier than other tadpoles in Australia, so in addition to being nasty-tasting to potential predators, the hungry babies also eat up everyone else's food.
With these unnatural advantages, it didn't take long for the toads to spread along the north coast toward the center of the continent, eating all the native amphibian and invertebrate species in their path - except, as noted, the cane beetle, which flies over their heads, and cane grubs. While the grubs are at least within reach, the toads apparently cannot be bothered with them, since they live below the soil and require at least a token amount of effort - effort that's quite unnecessary given the toads' luxurious circumstances.
Similarly, the introduction of the European rabbit to Australia as a game animal proved to be a mistake of magnificent proportions, and the proposed solutions are proving even more frightening than the original invasion.
The unexpected behaviour of introduced species places a serious question mark over our ability to predict the behaviour of invading organisms placed in ecologically different environments, and thus to protect a naive environment from an invasion. The lessons that should have been learned from the escape of myxoma were not accepted, and RHDV escaped in a virtually identical manner. Have we learned our lessons yet, or can we expect similar escapes in the future?
Based on recent history, we can hazard an answer to both those questions: No, we haven't; and yes, we can.
At various times Australian scientists have tried to import viruses from other countries, including Venezuela, as biological controls for cane toads as well - which, considering that the toads were intended to be a biological control themselves, is something akin to fighting fire with gasoline. A research organization established nearly a century ago to benefit Australian industry claims to be working with "cutting edge genetic technology" to find a biological control method to "stop the hop" of cane toads across the continent. These types of efforts continue despite the fact that in 2001, researchers in Canberra, working to create a genetically engineered sterility vaccine to control a national infestation of mice, instead accidentally created a "mousepox" virus so powerful that it killed even the mice that had been inoculated against it. (A U.S. team of researchers immediately replicated it, in the name of biological defense.)
While public ignorance is most often cited as the reason that risk is so misunderstood, these are examples of scientific ignorance. In each of these situations, the proposed intervention was subjected to some degree of regulatory and/or scientific scrutiny. Those involved in making the decision had an opportunity to revise their "beliefs" (I use the term advisedly). Instead, they ignored the pushback and declared the interventions to be safe, or safe within what were believed to be easily defensible and understood boundaries.
Another factor in the public's relationship to risk, which one could probably call unintended public ignorance, affects us more often than we can know. Unintended ignorance results when regulatory agencies or industries willfully downplay or deny the risks that are already known to them, in the interest of protecting financial or some other kind of gain. In the early 21st century, this game of "hide the risk" has already reached epidemic proportions in the United States at least.
Public-interest groups in the U.S. have railed for decades about the dangers of the revolving door between government and industry, whereby people with a financial interest in a given industry - industries that generally provide largesse in various forms to those in power - are asked to serve as regulators of that industry. The practice has become increasingly common and bold, and as a result, American citizens are witnessing an ongoing rollback of hard-fought federal safeguards in agencies that regulate food safety, the quality of drinking water, worker health and safety, civil rights, toxic pollution, health care and other common public resources.
The most blatant recent examples in the U.S. have involved the government censorship of EPA reports that connected auto emissions and other human activities with global warming; EPA administrators selectively editing a risk analysis that the agency commissioned on mercury emissions; the sabotaging of a World Health Organization initiative on obesity because the sugar and packaged food industries felt "attacked" and opposed its suggestions; and the stacking of a CDC committee with industry-friendly experts to re-examine federal standards for lead in school drinking water.