23.04.2024

Doctors Gave Her Antipsychotics. She Decided to Live With Her Voices.

Better outcomes, the W.H.O. predicts, “will depend on a re-evaluation of many of the assumptions, norms and practices that currently operate, including a different perspective on what ‘expertise’ means when it comes to mental health.”

Michelle Funk, a former clinician and researcher who is leading the W.H.O.'s work on mental-health policy, law and human rights and is the primary author of the report, spoke to me about the need for a radical change in prevailing clinical presumptions: “Practitioners cannot put their expertise above the expertise and experience of those they’re trying to support.”

To back its position, the W.H.O. highlights stark words from Thomas R. Insel, who from 2002 to 2015 was head of the National Institute of Mental Health, the largest funder of mental-health research in the world: “I spent 13 years at N.I.M.H. really pushing on the neuroscience and genetics of mental disorders, and when I look back on that, I realize that while I think I succeeded at getting lots of really cool papers published by cool scientists at fairly large costs — I think $20 billion — I don’t think we moved the needle in reducing suicide, reducing hospitalizations, improving recovery for the tens of millions of people who have mental illness.”

Present methods can do damage and undermine outcomes not only through psychotropic side effects, and not only through the power imbalances of locked wards and court-ordered outpatient care and even seemingly benign practitioner-patient relationships, but also through a singular focus on reducing symptoms, a professional mind-set that leaves people feeling that they are seen as checklists of diagnostic criteria, not as human beings. “The widespread belief by many in the health sector that people with a mental-health condition have a brain defect or disorder of the brain,” Funk added, “so easily leads to overwhelming disempowerment, loss of identity, loss of hope, self-stigma and isolation.”

In demanding a “fundamental paradigm shift” in the field of mental health, the W.H.O. is, in effect, calling for the close of half a century of psychiatric history. In 1963, weeks before his assassination, President John F. Kennedy signed a mental-health bill into law and declared that “under present conditions of scientific achievement, it will be possible for a nation as rich in human and material resources as ours to make the remote reaches of the mind accessible.” American science, he pledged, would not just land a man on the moon but would triumph over mental illness.

This confidence stemmed from psychiatry’s first pharmaceutical breakthrough a decade earlier, the discovery of chlorpromazine (marketed in the United States as Thorazine), the original antipsychotic. The drug brought on debilitating side effects — a shuffling gait, facial rigidity, persistent tics, stupor — but it becalmed difficult behavior and seemed to curtail aberrant beliefs. The Times hailed the drug’s “humanitarian and social significance,” and Time magazine compared Thorazine to the “germ-killing sulfas,” groundbreaking drugs developed in the 1930s and 1940s to fight off bacterial infections. But patients didn’t seem persuaded that the benefits outweighed the harm; they frequently abandoned their medication.

Thorazine was followed by Haldol, a more potent antipsychotic whose side effects were no kinder. Yet each drug contributed to a sweeping release of residents from psychiatric asylums, and by the 1970s, crude concepts had emerged about how these medications work. Overactive systems of dopamine, a neurotransmitter, were thought to be the culprit in psychosis, and antipsychotics inhibited these systems. The problem was that they impaired dopamine networks all over the brain, including in ways that led to movement disorders and torpor.

By the 1980s, though, biological psychiatrists believed that they would remedy this flaw by creating more finely tuned antipsychotics. Joseph Coyle, then a professor of psychiatry and neuroscience at the Johns Hopkins School of Medicine, was quoted in a 1984 Pulitzer Prize-winning Baltimore Sun series that heralded new brain research and deftly targeted antipsychotics and other psychotropics on the horizon: “We’ve gone from ignorance to almost a surfeit of knowledge in only 10 years.”

A protégé of Coyle’s, Donald Goff, now a psychiatry professor at New York University’s Grossman School of Medicine and for decades one of the country’s pre-eminent researchers into psychosis, told me, about the end of the 1980s, “Those were heady years.” Every day, as he neared a Boston clinic he directed, he saw the marks of Haldol in some of the people he passed on the sidewalk: “As you approached, there were the patients from the clinic with their strange movements, their bent-over bodies, their tremors. Not only was the illness debilitating; the medications were leaving them physically so miserable.” Yet he sensed, he said, “the possibility of limitless progress.”

What were christened the “second-generation antipsychotics” — among them Risperdal, Seroquel and Zyprexa — came on the market mostly in the 1990s. In addition to their assault on dopamine, they seemed to act, in lesser ways, on other neurotransmitters, and they appeared to have fewer side effects. “There was so much optimism,” Goff remembered. “We were sure we were improving people’s lives.” But quickly worries arose, and eventually Eli Lilly and Johnson & Johnson, makers of Zyprexa and Risperdal, would pay out several billion dollars — a fraction of the drugs’ profits — in lawsuits over illegal marketing and the drugs’ effects on users’ metabolisms.

Zyprexa caused a greatly heightened risk of diabetes and severe weight gain (Eli Lilly concealed internal data showing that 16 percent of patients gained over 66 pounds on Zyprexa). Some boys and young men who took Risperdal were affected by gynecomastia; they grew pendulous breasts. In 2005, the N.I.M.H. published a study with 1,460 subjects looking at whether the new antipsychotics were in fact better, in efficacy or safety, than one of the first-generation drugs. The answer was no. “It was a resounding disappointment,” Goff said, though he advocates long-term and probably lifelong medication as, on balance, the best way to guard against psychiatric devastation.
