During the 20th century, parents feared polio. The illness struck in waves during warm summer months and left many of its victims temporarily or permanently paralyzed. Children were especially vulnerable, and thousands lost their lives to the crippling disease. The vaccine that stopped this frightening virus was tested on more than 600,000 U.S. schoolchildren in a 1954 randomized clinical trial of the vaccine developed by Dr. Jonas Salk, the scientist credited with the breakthrough. Like many of his predecessors, Salk believed in the value of clinical research. But the same scientific process that saved thousands also cost a few children their lives and caused hundreds of others to contract the disease when they were injected with vaccine from a bad batch. Therein lie the benefit and the cost of clinical trials, a research process that can claim the lives of a few in pursuit of a greater good.
In various forms, clinical research has existed since antiquity, when Egyptian medicine, as practiced by the physician Imhotep circa 3,000 BC, dominated. But the first recorded clinical trial, as we would understand it today, reportedly occurred much later: In 526 BC, a eunuch in King Nebuchadnezzar’s household permitted an experiment. One of the prisoners in his care, Daniel of Judah, told the eunuch that he and his comrades could become stronger if they were allowed to eat their preferred diet of vegetables and water for 10 days. After eating this way for the prescribed time, Daniel’s group was compared with a group of Babylonian courtiers who ate meat and drank wine. The results supported Daniel’s hypothesis: He and his countrymen appeared the more hale and hearty. This simple observational study compared how different diets affected the health of a treatment group (Daniel and his comrades) and a control group (the courtiers).
In 1537, Ambroise Paré was treating the battlefield wounds of soldiers when he ran out of the boiling oil typically used to cauterize injuries. So Paré mixed a paste of egg yolks, rose oil and turpentine and applied it to the soldiers’ wounds. In the morning, Paré received a shock. “I found those to whom I had applied the digestive medicament had but little pain, and their wounds without inflammation,” he wrote in his autobiography, Journeys in Diverse Places. The soldiers Paré had treated with boiling oil were feverish with pain, and their wounds were swollen. Without intending to, Paré had conducted a controlled study.
Fast-forward to May 20, 1747. On the British naval ship HMS Salisbury, scurvy struck. Ship’s surgeon Dr. James Lind conducted an experiment. According to papers from his library, Lind chose 12 of the ailing men and divided them into pairs. He fed these men the same rations the rest of the crew ate, but he also gave each pair one of six additional treatments: one quart of cider each day; 25 drops of diluted sulfuric acid; two spoonfuls of vinegar three times a day, before meals; a half pint of sea water; two oranges and one lemon for six days, until the supply was exhausted; or a medicinal paste of garlic, mustard seed, dried radish root and gum myrrh. When the men who ate the citrus fruits became well enough to resume active duty six days later, Lind sailed into history as what seems to be the first doctor to intentionally conduct a controlled clinical trial.
In the mid-1800s, the placebo entered clinical research when U.S. doctor Austin Flint used a dummy remedy in his clinical study on rheumatism. A placebo is a substance or treatment that has no active therapeutic effect. Doctors typically incorporate these nonremedies into clinical trials to ensure that any observed effects are actually caused by the treatment and not by some other variable, such as participants’ expectation of improvement.
In 1943, researchers developed another form of clinical study: the double-blind controlled trial, in which neither study volunteers nor their doctors know who gets which treatment. In 1946, doctors introduced randomized clinical trials, a study design that uses chance to assign participants to separate groups so that different treatments can be compared fairly. And in 1948, the British Medical Research Council published the results of its randomized trial of streptomycin for treating pulmonary tuberculosis. This trial and an earlier one, completed in 1944 to assess the effects of the antibiotic patulin on the common cold, are widely regarded as watershed moments in the evolution of clinical trial methodology.
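For readers curious about what randomization actually involves, the short sketch below illustrates the idea in Python; the volunteer labels, group sizes and random seed are purely hypothetical and are not drawn from any of the trials described here. The point is simply that chance alone, not a doctor's or a patient's preference, decides who receives the treatment.

import random

def randomize(participants, seed=None):
    """Shuffle the volunteers and split them evenly into two groups.

    Because the assignment depends only on chance, known and unknown
    differences between people tend to balance out across the groups.
    """
    rng = random.Random(seed)          # fixed seed only so the example is reproducible
    shuffled = list(participants)      # copy, so the caller's list is left untouched
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"treatment": shuffled[:half], "control": shuffled[half:]}

if __name__ == "__main__":
    # Twelve hypothetical volunteers, echoing the size of Lind's scurvy experiment.
    volunteers = ["volunteer_%02d" % i for i in range(1, 13)]
    groups = randomize(volunteers, seed=1948)
    print("treatment:", groups["treatment"])
    print("control:  ", groups["control"])

In a double-blind trial, neither the volunteers nor their doctors would be told which list a given name landed on until the study was complete.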
By the 1950s, randomized, blinded clinical trials had emerged as a key research tool, guided by the U.S. National Institutes of Health, the British Medical Research Council and academic research institutions. The studies continued to increase in number and size. But as researchers chased opportunities to test theories and treatments, some scientists exploited vulnerable populations as test subjects for their experiments. Researchers used prisoners, infants and people in mental health institutions for medical experimentation, often soliciting their cooperation without securing their fully informed consent (or that of their guardians). According to Robert Finn, author of Cancer Clinical Trials: Experimental Treatments & How They Can Help You, in “one shocking example, orange juice was withheld from orphans at the Hebrew Infant Asylum of New York City so doctors could study the development of scurvy.”
Few of these studies were considered unethical at the time. That changed, however, after World War II, when the exposure of further atrocities, most notoriously the Nazi medical experiments, resulted in a series of ethical standards designed to regulate such studies. The first was the Nuremberg Code, issued in 1947 in response to those horrific experiments. But the code had no force of law in the United States, where doctors remained free to conduct clinical trials on human subjects with little oversight or accountability.
Other guidelines and rules codifying human rights followed the Nuremberg Code. The United Nations General Assembly adopted the Universal Declaration of Human Rights in 1948, and the U.S. enacted the Kefauver-Harris Amendments in 1962, after the drug thalidomide, prescribed for morning sickness, caused severe birth defects in the babies of women who took it. The Helsinki Declaration followed in 1964 and the International Covenant on Civil and Political Rights in 1966. None of these stopped the notorious Tuskegee experiment, which ran from 1932 to 1972 and in which researchers withheld penicillin from nearly 400 poor black men with syphilis.
After shocking details about the Tuskegee study surfaced, the U.S. government passed the National Research Act of 1974. The act set the stage for institutional review boards (IRBs) to oversee biomedical and behavioral research involving human subjects. It also created the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, which published the Belmont Report in 1979. The Belmont Report summarized key ethical principles that were eventually translated into enforceable provisions, allowing IRBs to suspend research studies if proper protocols weren’t being followed and to hold researchers and their organizations accountable for protecting study participants’ human rights.
Today, most clinical trials are far safer and more ethically conducted. Guidelines now require proper study design and prior approval of research plans, and established principles protect study participants, ensure informed consent, and provide for the monitoring of data and of researchers’ compliance with good clinical practice.
Yet problems with clinical trials persist. Many pharmaceutical companies have taken their clinical trials global, and, according to a Reuters report last year, ethics remain a major concern because clinical work is sometimes outsourced to contract research organizations (CROs). “People in many developing countries are often poor or illiterate, which makes them vulnerable,” said Annelies den Boer of the Wemos Foundation, a Dutch nonprofit that has been following the globalization of clinical trials since 2006. “It’s very difficult to check if companies do indeed abide by [guidelines] because governments in countries where these trials take place do not exercise a lot of control. There’s an entire chain—vulnerable patients, doctors with conflicts of interests, CROs that are geared to doing trials extremely fast—which is detrimental to ethical guidelines.”
In addition, The Independent, a British newspaper, has covered a steady stream of reports from India of drug companies exploiting the poor and illiterate to test their medications in clinical trials that are inadequately policed.
Of course, it’s easy to be cynical. Although regulators and researchers have made strides in curbing clinical trial abuses, there is still a long way to go before such unethical incidents are eliminated entirely. But there’s also no denying that clinical trials have improved the quality of human life. Unlike parents almost 60 years ago, today’s parents need no longer dread the coming of June and a climb in the thermometer’s mercury.