March 14, 1962. This meeting was a long time coming. Faculty members and graduate students gathered in the basement auditorium of the Center for Research in Personality at Harvard University to discuss Timothy Leary and Richard Alpert’s psilocybin research. Two years earlier, Leary launched the Harvard Psilocybin Project (hereafter referred to as the Project) to study the substance’s effects on human behavior. Alpert—who later changed his name to Ram Dass—joined the Project as a coinvestigator. Their first psilocybin study was exploratory. Over 175 participants, including psychologists, writers, musicians, housewives, and graduate students, took the drug in the posh living rooms of faculty investigators’ homes. Subjects picked their own dosages, filled out questionnaires, and detailed their experiences in written reports. In the Journal of Nervous and Mental Disease, Leary and his colleagues reported that a majority of participants rated the drug experience as “very pleasant” or “wonderful” and one that changed their lives “for the better.”1 The Project eventually moved from living rooms to a prison infirmary, where the research team gave psilocybin to hardened criminals during group therapy sessions. Leary and his colleagues claimed that the experiment reduced recidivism in prisoners who received psilocybin—a 27 percent recidivism rate compared to the national average of 67 percent.2
Two years into the Project, faculty members at the Center were coming forward with complaints. David McClelland, the Center’s director, organized a meeting for faculty to air their grievances. The list of charges against Leary and his research team was largely methodological, including failing to follow a conventional research protocol with control groups and random subject selection, administering the drug in private homes without the presence of a medical doctor, taking the drug with research subjects, and focusing on subjects’ subjective experiences instead of quantifying results.3 On top of that, some faculty members accused Leary and Alpert of creating divisions between those members of the Center who had taken psilocybin and those who hadn’t. Others complained that they were hogging graduate students—a limited but valuable commodity for faculty research assistance. Moreover, they questioned the kind of methodological training students received. With the negative campus buzz surrounding the Project, a few members worried that the entire Center’s credibility was in jeopardy.4
The following day, the campus newspaper, the Harvard Crimson, published a piece on the meeting titled, “Psychologists Disagree on Psilocybin Research.” The reporter quoted social psychologist Herbert Kelman, one of the more vocal critics at the meeting. “This research violates the values of the academic community,” Kelman argued, adding that the Project “has an anti-intellectual atmosphere. Its emphasis is on pure experience, not verbalizing findings. It is an attempt to reject most of what psychology tries to do.”5 Despite his criticisms, Kelman was not recommending that the Center terminate the Project. A few days later, he wrote Leary and Alpert, expressing his anger at the Harvard Crimson reporter who had furtively attended the meeting and quoted him without permission. Kelman told the pair that he wanted to “define this as an internal matter and to de-emphasize the drug itself which, as you know, I do not regard as the primary issue at all.”6 The primary issue, for Kelman and other faculty members, was apparently that the Project team failed to follow conventional scientific methods. The same day that Kelman wrote to Leary and Alpert that he hoped to “keep this debate out of the public domain,” the Boston Herald picked up the Crimson story but with a more sensationalized title: “Hallucination Drug Fought at Harvard—350 Students Take Pills.” The media blitz that followed led to a crackdown on the Project; members of the Center formed an advisory committee tasked with developing new procedures for the research.
Outside Harvard, some psychedelic investigators worried that the controversy surrounding the Project might cause problems for their own research. Psychiatrist Gerald Klerman, for example, accused Leary and Alpert of “fail[ing] to observe the rules of scientific investigation” by holding “psilocybin parties” at their homes where they took drugs along with subjects who were not randomly selected or properly screened for existing psychiatric illnesses. Klerman insisted that Leary and Alpert’s actions had “poisoned” his own research with psilocybin, which he had spent the previous eight years studying without raising any eyebrows.7 Henry Beecher, a professor at Harvard Medical School, echoed Klerman’s criticisms. Although Beecher is best known for his work on medical ethics, he also studied the subjective effects of LSD while working in an anesthesia laboratory in the 1950s. In a commentary published in the Harvard Alumni Bulletin, he asserted that the Project had traveled “well beyond the physical confines of the laboratory as well as beyond the spirit of sound laboratory investigation.” Beecher suspected that the fiasco surrounding Leary and Alpert’s psilocybin experiments would make it “difficult for responsible investigators to continue their work with these unquestionably interesting drugs.”8 As the rogue researchers thumbed their noses at, in Kelman’s words, “most of what psychology tries to do,” critics charged them with tarnishing the credibility of psychedelic research conducted by responsible investigators who were using sound methods.
The following spring, Harvard administrators sacked both psychologists. Alpert was fired for violating an agreement not to give psilocybin to undergraduates, and Leary’s contract was terminated because he failed to teach his classes.9 Newspapers and magazines like the New York Times, Newsweek, and the Saturday Evening Post ran stories about the headline-grabbing Harvard drug scandal.10 Leary, however, was not demoralized by his well-publicized ousting. After getting kicked out of Harvard, he told a journalist that he was “through playing the science game.”11 Leary considered the popular manifestation of science to be a flawed endeavor. He was particularly critical of what he called the game of science: the specific roles, rules, and goals that reward those who play ball with a certain model of science. Sociologist Pierre Bourdieu also liked the game metaphor, which he used to describe how people jockey for position within social fields. Bourdieu envisioned the social world as a series of distinct fields, like politics and education, each guided by its own set of rules, dispositions, and capital. Similar to players on a baseball field, social actors in these fields struggle for position and compete with one another to win access to valuable resources. Within the scientific field, however, Leary soured on playing that game. He criticized his fellow psychologists’ “obsessive devotion” to controlled experimentation, imploring them to “be more flexible and eclectic” in their methodological approach.12 Around the time that he left Harvard, Leary was setting up his own nonprofit psychedelic research organization, the International Federation for Internal Freedom, for people who “want to have fun and be good scientists, too.”13 For those with more skin in the science game, however, Leary’s methodological approach was an affront to what they were trying to do. 
But Leary wasn’t the only psychedelic researcher who failed to play the science game according to the rules; many of his methodological practices were characteristic of early psychedelic therapy research.
This chapter examines the methodological legitimacy crisis in psychedelic therapy. Leary, along with many other first-generation researchers, argued that psychedelic drugs carried minimal risks and offered significant therapeutic benefits when administered in the appropriate set and setting. Set and setting formed the backbone of first-wave researchers’ therapeutic model; they designed their studies to create a comfortable space for drug sessions, complete with psychedelic guides and processing sessions afterward. But this approach became the field’s Achilles’ heel as it failed to live up to the burgeoning gold standard of clinical research: the randomized controlled trial, or RCT. As clinical research moved toward this new model, the rules of the science game changed, and psychedelic researchers found themselves on the receiving end of acidic critiques of the efficacy of LSD therapy. By the close of the first wave, the findings coming out of their studies were largely dismissed as pseudoscience.
The situation faced by today’s crew of psychedelic researchers is in many ways similar to that faced by first-generation researchers. Sure, the situation has changed. Researchers in this wave, for example, confront a different set of legal obstacles than their predecessors. But questions about aligning psychedelic psychotherapy with RCT methods remain. Contemporary researchers actively champion gold-standard methodologies while casting off Leary as the impure scientist who refused to work within that model. However, today’s researchers also draw from Leary’s techniques for administering psychedelic therapy—namely, set and setting. Slippages in their sober performances create a hybrid expertise that merges conventional biomedical expertise (with its attention to placebos, blinding, and standardization) with the expertise of the impure scientist. Set and setting, therefore, are a part of the first wave’s expertise culture that persists today, revealing the continuities between earlier and later waves of psychedelic therapy.
Early LSD Psychotherapy Research
At St. Louis Hospital in the late 1940s, psychiatrists Anthony Busch and Warren Johnson were looking for a drug that could induce a transitory delirious state, which they hypothesized could help patients verbalize repressed memories underlying their psychiatric symptoms.14 In their search, the investigators came across a new drug that they hoped would fit the bill: D-lysergic acid diethylamide, or LSD-25. They gave over two dozen patients small doses of the substance. As Busch and Johnson suspected, patients were able to express themselves better under the influence of LSD. Take, for example, a forty-one-year-old woman who was diagnosed with psychoneurosis hysteria. She had previously undergone over 120 hours of therapy—including sessions with sodium amytal, a barbiturate with sedative-hypnotic effects—with little success. With LSD, however, she was able to recall repressed childhood experiences that had been causing her extreme psychological distress. Another example is that of a twenty-eight-year-old man diagnosed as psychosomatic who tried the drug. He had been treated unsuccessfully with narcosynthesis, a hypnosis technique used on patients with what we now call PTSD. But with LSD, he was able to remember and relive traumatic episodes from his stint in the navy, offering him insights into his distressing symptoms. Busch and Johnson published these findings in the journal Diseases of the Nervous System in 1950, marking the first English-language publication on LSD.15 The psychiatrists concluded that LSD might “serve as a new tool for shortening psychotherapy,” offering a unique way to elicit patients’ repressed memories and help them reevaluate their psychological conflicts.16 Soon after, psychiatrists and other mental health professionals started tapping into LSD’s potential for accelerating therapy.
One medical professional excited about the possibilities of LSD therapy was Harold Abramson, a physician who specialized in treating patients with allergies but who also dabbled in psychotherapy. He published one of the earliest accounts of LSD therapy in a 1955 issue of the Journal of Psychology. The article consists of a verbatim transcript of a four-hour LSD session with a middle-aged married woman seeking psychoanalysis for psychosomatic complaints of eczema and hay fever. The twenty-page conversation starts with the patient swallowing forty micrograms of LSD. While waiting for the effects to kick in, she eats a tuna sandwich and drinks iced tea. As she starts feeling the LSD, Abramson moves the conversation toward her sexuality. The patient admits that she prefers masturbating to having vaginal sex with her husband. At one point, she recalls a passage from The Second Sex—the feminist treatise written in 1949 by French philosopher Simone de Beauvoir—that “said something about a girl never getting past the clitoris stage [in Freud’s psychosexual stages of development], so to speak, where she does derive some sensation, some satisfaction, except by manipulating the clitoris.”17 She remembers the first time she felt water pour over her clitoris: “I can still picture the bathtub, I think, and the faucet, and how the water came out. And then, I, of course, I recall one time I turned around and found my father watching me.”18 After the transcript of the LSD session, Abramson suggests that the patient masturbated to relieve anxiety from strained childhood relationships with her parents—conflicts that she had brought up in previous, nondrugged therapy sessions but that Abramson felt she was better able to address on LSD. Abramson concluded that LSD is an effective and novel adjunct to psychotherapy, adding that it also appears to be pharmacologically safe.
Other medical professionals reported similar improvements using LSD therapy to treat a variety of conditions, including depression, anxiety, hysteria, and homosexuality (the last two considered legitimate psychiatric diagnoses at the time).19 One of the most prolific areas of LSD experimentation in the first wave, however, was treating people with alcohol dependency. In the 1950s, a group of Canadian researchers hypothesized that a single high-dose LSD session could trigger personal insight that would steer habitual heavy drinkers into recovery. Early studies reported promising results.20 One research group at Weyburn Hospital in Saskatchewan, Canada, followed twenty-four patients with alcohol dependency after they were given LSD psychotherapy.21 The good news: no one was worse off, and even better, six stopped drinking completely and another six reported a marked reduction in drinking. The Weyburn group followed an additional sixteen alcoholic patients, publishing results from that study in a 1959 issue of the Quarterly Journal of Studies on Alcohol. Investigators reported that ten participants were much improved or completely abstinent at the six-month follow-up.22 Meanwhile, a larger-scale study out of Hollywood Hospital in Vancouver also reported substantial recovery rates: nearly 50 percent of alcoholic subjects were “much improved” at the six-month mark, meaning they either stopped drinking or drank substantially less than they did before the study.23 With such astounding success rates, one pair of investigators declared LSD “the newest and in some ways the most successful tool presently available” for psychotherapy.24
These early LSD therapy studies largely relied on uncontrolled methods like case studies, an in-depth investigation used in clinical medicine that involves constructing narrative histories on a given patient or group. But most psychiatric studies at the time lacked adequate controls, blinded raters, objective measures, and consistent follow-up.25 The shortage of controlled studies on LSD psychotherapy was symptomatic of psychiatry in general. But this style of research became a methodological sticking point amid larger changes in clinical experimentation and regulatory arrangements.
The Push for Controlled Psychedelic Research
By the late 1950s, LSD therapy studies were taking a hit in medical circles as criticisms mounted that their astounding success rates were based on bogus methods. Several years before lambasting Leary’s psilocybin experiments, Henry Beecher opined that “too many have approached the most complex field of [psychedelic] therapy with very nearly complete disregard for the controls that are essential to sound conclusion.”26 By the early 1960s, the volume of critiques increased. One group of researchers argued that “almost all [LSD therapy] studies can be criticized on the usual grounds: absence of controls or random assignment to comparison treatments, failure to use blinding techniques, failure to account for nonspecific factors in treatment programs, and inadequate follow-up procedures.”27 Around this time, institutional and cultural changes in clinical medicine and drug development set off a sea change in how the science game was played. This context created a perfect storm for one of the earliest legitimacy crises in psychedelic therapy research: demonstrating treatment efficacy using RCTs.28
LSD entered the medical scene on the cusp of the post–World War II wonder drug boom. During the 1950s, antipsychotics, antibiotics, and steroids flooded the medical marketplace. Alongside this stream of new drugs came questions about how to reliably evaluate their therapeutic safety and efficacy, particularly in the face of overzealous researchers and greedy pharmaceutical corporations. Previously, the subjective but experienced judgment of individual researchers played a significant role in assessing the reliability of claims about a therapeutic intervention’s efficacy. Since the early twentieth century, however, so-called therapeutic reformers had pushed for the integration of scientific evidence into clinical medicine to minimize some investigators’ excessive enthusiasm and hasty conclusions.29 During the 1950s and 1960s, their long-standing call for the marriage of science and medical practice crystallized into what we now know as the gold standard of clinical experimentation: the RCT.
RCTs share certain features designed to neutralize investigator influence. For one thing, they are comparative, meaning that patients are randomly placed into an experimental group that receives the drug treatment or a control group that receives no intervention or a placebo (either active or inactive). Masked assessment is used to further control for intentional or unintentional bias in assessing treatment effects. This typically takes the form of double-blind techniques in which neither the subject nor the investigator knows which patients were assigned to the experimental or control groups. By using these methodological practices to purge researchers’ subjectivity from the scientific equation, reformers claimed that RCTs offered objective measures and statistical evidence for therapeutic efficacy. Additionally, this new experimental design helped federal regulators and medical professionals parcel out the claims of trustworthy, objective investigators from the untrustworthy, pseudoscientific quacks who were inappropriately peddling novel drug treatments.
Government regulatory agencies helped propel clinical medicine toward this burgeoning model. In the late 1950s, politicians and policy makers grew increasingly concerned about the corrupting influence of pharmaceutical companies on drug development research. These concerns culminated in the passage of the Kefauver-Harris Amendment in 1962. The passage of the amendment was catalyzed by an unexpected crisis with thalidomide. First synthesized in Germany in 1954, thalidomide was prescribed by physicians for morning sickness, but the drug had an unexpected side effect: babies born with phocomelia, a condition where arms and legs are malformed. When word spread about the birth defects in 1962, thalidomide was still an investigational drug in the United States. Just two years earlier, the FDA had rejected manufacturer Richardson-Merrell’s application to market and distribute the drug, citing safety concerns.
Thalidomide’s disastrous side effects raised questions about regulatory controls on new investigational drugs, prompting the passage of the Kefauver-Harris, or Drug Efficacy, Amendment in 1962. The amendment modified the Federal Food, Drug, and Cosmetic Act passed by Congress in 1938, which granted authority to the FDA to set and enforce regulations to protect consumers from unsafe products. The amendment expanded the FDA’s oversight of the approval and regulation of new drugs by requiring that pharmaceutical companies demonstrate both the safety and efficacy of their products. The amendment also required that researchers obtain informed consent from patients participating in clinical drug trials and that drug companies report any adverse effects to the FDA. On top of that, drug companies were now required to complete an investigational new drug (IND) application before beginning any clinical research with experimental drugs. One key way that the amendment shaped psychedelic therapy research was methodological; the FDA assessed treatment efficacy on the basis of evidence gathered from controlled clinical studies.30 This immediately posed problems for psychedelic psychotherapy because most research reporting positive results lacked controlled methodological designs. For psychedelic therapy to make it to prime time, researchers had to show the treatment’s efficacy using appropriately controlled studies.
In the 1960s, several groups of psychologists and psychiatrists—many without previous psychedelic therapy experience—set up their own controlled alcoholism studies to see if they could replicate earlier uncontrolled findings, but they came up empty.31 One study took place at the Addiction Research Foundation in Toronto in 1964 under the direction of research psychologists Reginald Smart and Thomas Storm. Smart and Storm argued that the lack of controlled procedures in LSD therapy research “raised serious questions concerning the scientific warrant for any belief that LSD is a useful adjunct to the treatment of alcoholism.”32 In their own controlled study, they randomly assigned thirty subjects with alcohol dependency to one of three groups: one group received LSD therapy, a second group received therapy with a control drug—ephedrine sulfate, a stimulant that is frequently prescribed as a decongestant or appetite suppressant—and a control group received only therapy. Researchers blindfolded subjects in the drug therapy groups and placed them in “a light but strong (Posey) belt for security.”33 After finding no difference in outcomes between the group treated with LSD and the other two groups, the investigators concluded that LSD “failed as an effective adjunct to psychotherapy.”34
A similar study came out of Mendota State Hospital in Wisconsin, led by psychiatrists Arnold Ludwig and Jerome Levine. Both psychiatrists were critical of the results coming out of uncontrolled LSD studies. During a 1967 conference, Ludwig observed that calling for controlled studies on LSD therapy was becoming “the obvious and now hackneyed conclusion of almost all the review articles in this area.”35 Ludwig and his colleagues designed their own controlled study in which they randomly assigned 195 patients to either a treatment or a control group (no LSD, therapy only). Patients who were assigned to the treatment group were further divided into one of three LSD treatments: they either received LSD psychotherapy, hypnodelic therapy (a treatment model that combined LSD with hypnosis), or LSD without therapy. Investigators did not find any significant difference in outcome between the treatment and control groups, leading them to conclude that “LSD procedures do not offer any more for the treatment of alcoholism than an intensive milieu therapy program.”36 The study received the 1970 Lester N. Hofheimer Award from the American Psychiatric Association (APA), an award given for research excellence. The award committee declared that “their research design can serve as a paradigm for the study of other psychiatric treatments.”37
The legitimacy of psychedelic therapy hit a snag when investigators’ initial enthusiasm surrounding the potential for LSD in aiding psychotherapy gave way to mounting questions about efficacy. Critics were tearing apart uncontrolled studies, arguing that the treatment model wasn’t what it was cracked up to be, as controlled research consistently failed to achieve the same astounding recovery rates. As a result, those who were part of what we might call the home team of psychedelic therapy found themselves on the defensive, and they were quick to shoot back with their own methodological criticisms of this new crop of research.
Psychedelic Researchers’ Criticisms of the RCT
It might strike you as quite obvious that strapping an LSD research subject to a bed might preclude any therapeutic benefits, but for advocates of controlled research, having these kinds of controls in place was good science. However, for experienced psychedelic researchers, controlling for nondrug effects is pointless—and in fact misses the point, as it isn’t the drug alone that is therapeutic. Psychedelic therapy works differently than typical pharmacotherapy, a model that I’ll more fully flesh out in chapter 3. For now, I’ll say that psychedelic drugs do not work like antibiotics and similar magic-bullet medicines. “LSD is not a medication in the usual sense,” explained one research team, adding, “It is not the effect of the drug alone which is beneficial but the reaction brought about by the drug in combination with the particular type of psychotherapeutic technique.”38 Psychedelic researchers claimed that LSD’s therapeutic effects emerged from insightful experiences facilitated by an appropriate set and setting, their shorthand for the physical, psychological, and social factors that shape the drug experience. Although many first-wave psychedelic researchers hypothesized that an optimal set and setting catalyzed therapeutic drug experiences, the phrase is typically attributed to Leary, who condensed this hypothesis into a pithy catchphrase.39
In 1955, participants at an APA roundtable on LSD and mescaline suspected that psychological factors might account for, as one presenter put it, the “differences by which people digest their [LSD] experiences.”40 Leary described these psychological factors as set, which “denotes the preparation of the individual, including his personality structure and his mood at the time.”41 A good example of how psychological set can shape the LSD experience comes from one of Leary’s critics, Henry Beecher. In a 1956 publication in the Journal of Clinical and Experimental Psychopathology, Beecher and his colleagues argued that a subject’s personality heavily influences the LSD reaction. As evidence, they highlight the case of Subject Y, a healthy male participant who, according to predrug personality tests, had an “immature and chronically anxious personality” dominated by sexual impulses. Subject Y had vivid erotic visuals of a naked couple making out while he was tripping, causing him to wonder aloud if he had “come off in [his] pants.” (He hadn’t.) At one point, investigators left the room as these sexual images continued to flood his vision. When they returned, the subject said he was more anxious when the investigators were gone, admitting that he was not sure “whether you are testing me or the drug.”42 Subject Y’s drug reaction made sense to investigators, given his predrug personality evaluation. Meanwhile, LSD failed to elicit the same visual results and paranoia in patients with different personality types. In addition to personality differences, Leary’s first psilocybin publication demonstrated how subjects reacted differently to the drug depending on their expectations and moods. He found that subjects who went into the drug session optimistic and relaxed typically had more positive drug experiences than those subjects who were apprehensive.43 Consequently, researchers found that differences in set partly contributed to the variability in psychedelic drug reactions.
In 1958, a World Health Organization study group released a technical report on hallucinogenic drugs, noting “the striking dependence of [their] effects on the precise constellation of environmental factors.”44 Leary described these environmental factors as setting, which includes the “physical—the weather, the room’s atmosphere; social—the feelings of persons present towards one another; and cultural—prevailing views as to what is real.”45 In his earliest research, Leary found that setting, particularly interpersonal dynamics, shaped subjects’ drug reactions: subjects who took psilocybin with people they knew and felt comfortable with had better experiences than those who did not.46 Robert Hyde, a psychiatrist whose claim to fame is being one of the first persons to drop acid in the United States, also tested the influence of social milieu on subjects’ LSD reactions.47 Among other things, he found that varying hospital staffs’ attitudes and behaviors (e.g., acting hostile or friendly) toward tripping patients made a difference in subjects’ anxiety levels.48 Researchers quickly incorporated these observations into their treatment model.
You’ve probably never heard of Al Hubbard, but he was the first person to propose manipulating set and setting to maximize psychedelic psychotherapy, offering a crucial addition to this treatment model.49 He was an Office of Strategic Services officer when he got his first dose of LSD from British psychiatrist Ron Sandison in the early 1950s.50 Inspired by his own psychedelic experience, Hubbard quickly morphed into Johnny Acidseed, spreading his legally obtained supply of Sandoz LSD to anyone interested in turning on. After informally guiding several alcoholics through intense LSD sessions, he noticed that certain conditions facilitated more insightful trips. He shared his observations with several Canadian research teams, who took up his suggestion to tweak set and setting to create optimal conditions for their alcoholism studies. In contrast to sterile hospital rooms and psychiatric wards, LSD treatment rooms were furnished with comfortable couches, draped windows, and tables decorated with floral arrangements. Researchers brought pictures and music into drug sessions as a way to enhance subjects’ perceptual experiences, hoping to provoke insightful reactions instead of anxious ones. Members of the research team actively worked to build rapport with subjects to increase their comfort level. One research group, for example, prepared subjects for their LSD session in the morning over coffee, while another group had lunch with their subjects during their LSD treatment.51 In contrast, one team of investigators who designed a controlled LSD study left tripping subjects alone to eat their lunch.52
It was one thing to want to subject LSD therapy to scientific scrutiny, but psychedelic researchers argued that it was misguided and even dangerous to ignore set and setting.53 These controlled experiments failed, and psychedelic researchers knew why: investigators didn’t prepare patients for the drug’s effects, and they assumed the drug’s pharmacological effects could be isolated—an approach one group of researchers derided as “psychedelic chemotherapy.”54 In reference to the Addiction Research Foundation study, Humphry Osmond, a British psychiatrist who pioneered LSD psychotherapy for alcoholism, opined, “It appears that almost any idiocy can get by, provided you add the right label.”55 The “right label,” Osmond explained, was “carefully controlled.” In his article “Methodology: Handmaiden or Taskmistress?” Osmond chastised advocates of controlled clinical experiments for assuming that “once learned, [controlled methods] could presumably be applied everywhere.”56 Instead, Osmond observed that science is “like life, from which it derives,” meaning that the pursuit is “open-ended, indeterminate, uncertain—untidy.”57 Osmond and his colleague, Abram Hoffer, criticized proponents of double-blind research who were casting off the “anecdotalist psychiatrist” as “ignorant, naive, simple-minded and biased.”58 Meanwhile, Osmond and Hoffer declared that anecdotally inclined investigators have a method to their madness while double-blind disciples have a “madness in their method.”59
This madness posed problems for demonstrating the efficacy of psychedelic treatment models. During an international conference on LSD therapy in 1965, Hoffer defended his research team’s decision to forgo a controlled experimental design, particularly the use of a placebo control group: “No experienced therapist would be in any doubt within one hour about determining whether distilled water or 200mcg of LSD had been given even if he were blind and could not see the pupillary bloating produced by LSD; every scientist who has worked with LSD agrees with this.”60 Similarly, another research team wrote in a federal grant proposal that “the effects of LSD-25 are so striking that an inert placebo is a waste of time.”61 Leary was also skeptical of blinded studies with placebo controls. Reflecting on his own attempt to use this approach, he commented that “the ridiculousness of running a double-blind study of psychedelic drugs was apparent . . . after thirty minutes everyone knew who had taken the [psychedelic] pill” compared to a control group who were given an active placebo.62 By enforcing methodological techniques like placebo controls, Hoffer argued that RCT advocates “have hung a millstone around our [psychedelic researchers’] necks which is steadily becoming more burdensome.”63 Put differently, RCTs cramped psychedelic researchers’ style.
Some psychedelic researchers, however, were optimistic that controlled methods could be integrated into their therapeutic model. Take, for example, investigators at the International Foundation for Advanced Study, an organization dedicated to studying psychedelic therapy. In a federal grant application for a controlled study using LSD therapy to treat alcohol dependency, the researchers echoed Osmond, arguing that “LSD is useless unless given in a way which will maximize its effect.” Having said that, they also reprimanded Osmond for “throw[ing] in the towel by denying that controlled studies are important.”64 Another group that attempted to blend set and setting with controlled methodologies was located at Spring Grove State Hospital in Baltimore, Maryland, home of the largest and longest-running U.S.-based psychedelic research team in the first wave.65 Their controlled psychedelic research was funded by the National Institutes of Health for over $140,000, or around $1.1 million in today’s dollars.66 In their controlled alcoholism study, Spring Grove researchers randomly assigned 135 inpatient alcoholics to either a treatment group receiving a high dose of LSD (450 micrograms) or a control group receiving a low dose of LSD (50 micrograms). Investigators spent hours preparing subjects for the drug session, including sharing information about the drug’s potential effects. Drug sessions took place in living room–like spaces decorated with flowers and paintings.67 Subjects lay on couches while wearing eyeshades and listening to music. At the six-month follow-up, 53 percent of the treatment group reported either no or minimal drinking compared to 33 percent in the group receiving low-dose LSD.68 However, the gains made by their brand of LSD therapy wouldn’t last.
At the twelve-month check-in, researchers did not find any significant difference between the groups.69 Nonetheless, the Spring Grove team advocated more research to figure out how to maximize the initial therapeutic benefits of high-dose LSD-assisted psychotherapy.
The Spring Grove team’s research received mixed reviews from their medical colleagues. Donald Louria, who served as the medical advisor on narcotics addiction for New York governor Nelson Rockefeller, praised the team’s concerted efforts to subject LSD therapy to controlled clinical research. During an LSD symposium at Wesleyan in 1967, Louria presented on a panel with Albert Kurland, a member of the Spring Grove group. When asked about the efficacy of LSD therapy, Louria told the audience, “I want people like Dr. Kurland to study it more. If we had a hundred more studies like his expanded on a whole variety of levels . . . disciplined studies, carefully evaluated, why then five years from now, we would really be able to say what this drug can do and what it cannot.”70
Other mental health professionals were less impressed with the Spring Grove team’s attempts to conduct controlled LSD research. In July 1969, Spring Grove investigators submitted a manuscript entitled “Psychedelic Psychotherapy: Preliminary Research Findings” to the journal Psychotherapy. The editor returned their submission with a firm but encouraging “reject” decision along with two less encouraging reviewers’ comments.71 “Overall evaluation is negative,” wrote Reviewer 2. Neither reviewer was convinced that a low dose of LSD was an adequate control. The decision flabbergasted Reviewer 1: “How they can possibly think that using a 50-microgram group provides even an approximation to a double-blind study is beyond me.” Meanwhile, Reviewer 2 was surprised that the team “actually ha[d] the guts to criticize other studies for their lack of controls” in the paper’s literature review.
By the end of the 1960s, researchers remained divided over the best way to assess the efficacy of LSD psychotherapy. At a conference sponsored by the Student Association for the Study of Hallucinogens in 1969, Walter Pahnke, a Spring Grove researcher, commented, “Part of the conflicting evidence about the efficacy of psychedelic drugs as a therapeutic tool for the treatment of mental illness comes from the differences in methods employed by various groups of researchers . . . Rather than replicating each other’s methods, each research team seems to have developed its own procedure.”72 When studied using controlled procedures, LSD didn’t seem very effective, and by the early 1970s, a consensus emerged within the medical community that LSD therapy was an ineffective treatment model.73
Psychedelic therapy came of age in the midst of larger shifts in the assessment and regulation of clinical drug research. With this institutional transformation, psychedelic psychotherapy, as envisioned by early researchers, didn’t make the cut. Methodological and regulatory changes stunted the field’s growth, both as a result of researchers’ inability, and in some cases unwillingness, to align psychedelic treatment models with the emerging gold standard of the RCT.74 Set and setting complicated the credibility of psychedelic therapy research in the past. While this technique floundered in the first wave, a new generation of researchers is now hoping to make set and setting work in controlled clinical trials.
Sober Performances of Methodological Rigor
October 21, 1993. Dozens of speakers gathered in Lugano-Agno, Switzerland, for a two-day symposium entitled “50 Years of LSD.” Sponsored by the Swiss Academy of Medical Sciences, the event celebrated the fiftieth anniversary of LSD’s discovery. Clinical research with psychedelic drugs in the United States had been stuck in a dry spell for nearly two decades. While clinical work was slow to resume in the States, researchers in Switzerland, the birthplace of LSD, were starting to pick it up again.75 With the potential return of aboveground psychedelic psychotherapy, researchers turned their attention to methods. A pair of researchers outlined a list of proposed guidelines for designing psychedelic research: testing specific diagnoses, randomly assigning participants to treatment options, standardizing therapy, getting informed consent, using raters in a blinded manner to assess subject improvement, setting up placebo control groups, and using consistent follow-up procedures.76 These recommendations mirror the gold standard of clinical research. Consequently, today’s researchers draw on an existing system of knowledge, practices, and tools rooted in biomedical science—the RCT—to resolve the lingering methodological legitimacy crisis.
To distinguish between new and old forms of expertise, researchers tell stories about first-wave researchers who failed to follow the professionalized standards of the RCT. Leary occupies a central place in these antithesis performances. Jeremy—I use pseudonyms for all interview participants—explains how early researchers had a “relaxed approach to psychedelics,” telling me, “Leary is the best example. The controls just weren’t in place. They didn’t exist.” Similarly, Scott, who studies MDMA therapy, describes how “the research that went on back then. . . . I mean, from what I’ve read about Leary, a lot of [drug] sessions were very unstructured, so that’s one of the differences now. We are using very different research methods that are much more highly controlled to produce more valid statistics.” Decades after Leary declared he was “through playing the science game,” this generation of researchers has committed to playing that same game to reinvent their expertise culture. Aaron, a psychiatrist who studies psilocybin, describes how “Leary didn’t think psychedelics fit into the science game. We will fit it into the science game, and we’re going to come up with really clever experimental designs, and we are going to figure some things out.” Likewise, Terry declares that “the science game is worth playing.” The problem, in the minds of today’s researchers, is not that Leary was methodologically inept; it was that he was unwilling to conform to the changing demands of science. 
As Chris, who researches psilocybin, explains, “I’m sure his [Leary’s] intentions were good, but when he started to do science in a way that was not conventionally appreciated by the rest of the [psychiatric] field, that’s what caused the backlash that made it almost impossible to do this research for decades.” Consequently, researchers pivot away from Leary and his impure practices by engaging in a sober performance that molds psychedelic therapy into a treatment model that can be controlled and objectively observed.
A particular institutional context pushes today’s researchers into this sober performance. Eric, who studies MDMA, suggests that first-wave psychedelic researchers like Leary were “painted as not rigorous enough, but really, they just had their own understanding of what the right scientific approach should be.” But “while [Leary and other researchers] said that we can’t use these methods [randomized controlled trials],” Eric explains that “the people who won this argument are the people who said that we have to use RCTs and checklists.” The “people who won” include the federal regulators who approve and evaluate clinical research. Aaron, for example, notes that it is important to stick to the science game because doing otherwise is “going to prevent me from being able to do the work that I’m doing, to look at these therapeutic applications, which if things continue to look safe and efficacious, may lead to FDA approval.” Likewise, Jerry, who has spent his career studying a variety of psychedelic drugs, emphasizes that playing the science game matters for federal legitimacy: “We want to be able to go to the FDA and say, ‘Here are the results of our double-blind, placebo-controlled studies. We tested ninety-five patients in our Phase 2 trials. Here are the statistics. It works.’ The FDA is a scientific group, and they’re going to say, ‘That looks pretty good.’” Christian echoes Jerry’s comment, asserting, “All contemporary studies have to be like that [double blind and placebo controlled] or they’re not going to be deemed scientific, so it’s really important to get right.” Consequently, researchers are compelled by federal regulators to engage in a sober performance of methodological rigor.
The psilocybin research team at Johns Hopkins University strikes me as a good example of this performance. Johns Hopkins has been a hub for clinical research in the revival, churning out promising results from multiple studies using psilocybin to help cancer patients feel less anxious and smokers kick their nicotine habit.77 Their work has been so prolific that in late 2019, Hopkins launched the Center for Psychedelic and Consciousness Research—an initiative supported by $17 million in private donations from the cofounder of WordPress, the founder of the shoe brand TOMS, and entrepreneur Tim Ferriss, among others—with plans to develop studies on psilocybin-assisted therapy for treating opioid addiction, PTSD, anorexia nervosa, and Alzheimer disease. Back in the early 2000s, however, the team’s first study examined the acute and long-term effects of psilocybin in healthy, hallucinogen-naive volunteers. The double-blind, between-group crossover study compared an experimental group given psilocybin with a control group given a placebo in two or three eight-hour drug sessions at two-month intervals.78 The Hopkins team chose methylphenidate, more commonly known as Ritalin, as the pharmacologically active placebo.79
In addition to blinding procedures and placebo selection, Hopkins researchers applied typical clinical trial procedures, including standardized measures and checklists, to evaluate treatment outcomes. Thirty-six participants, mostly middle-aged, highly educated, and religious or spiritually identified, were carefully screened for family histories of psychotic disorders. Investigators regularly monitored subjects’ blood pressure and heart rate during the drug sessions. Seven hours after being given the drug, participants completed a battery of questionnaires designed to assess the subjective effects of the drug. Two months after the drug session, researchers gathered additional outcome measures to determine whether participants experienced persisting changes in their attitudes and behaviors. In addition to measuring outcomes through self- and clinician reports, the study used “community observers” (e.g., family, friends, and colleagues), who rated changes in participants’ behaviors and attitudes.80 Using these rating scales allowed researchers to quantify results, making the subjective aspects of the experience amenable to objective methodologies. This stands in contrast to earlier published LSD therapy research that relied heavily on case studies and snippets from subjects’ trip reports.
As evidenced by the Hopkins study, today’s researchers pivot away from the impure scientist through their willingness to incorporate RCT methods into their clinical trials. At the same time, however, their methodological approach doesn’t diverge significantly from that of the impure scientist, as their practices continue to rely on the importance of set and setting.
Lingering Criticisms of RCTs
“They didn’t understand set and setting in the beginning,” explains psilocybin researcher Stephen Ross.81 “Patients would be injected with LSD, put in restraints, and somebody would come back hours later. They were put in very drab clinical environments.” Similarly, while talking about the controlled alcoholism studies in the first wave, another researcher notes, “Previous research was well intended, but methodologically flawed. They left people at risk and didn’t provide the kind of information that would justify the risk.” On the one hand, today’s researchers are critical of Leary for refusing to work within the confines of RCT models, but on the other hand, they’re critical of investigators like Smart and Storm (whom Ross implicitly calls out in the above quote) for their rigid adherence to that same model. Writing in the Journal of Psychopharmacology, psychedelic researchers at Imperial College London assert that the “therapeutic action of psychedelics is fundamentally reliant on context,” a term they use in place of “set and setting,” adding that “neglect of context could render a psychedelic experience not only clinically ineffective but also potentially harmful.”82 Like Leary and other first wavers, this generation of researchers designs clinical trials with set and setting in mind as a means of minimizing risk and maximizing benefits.
In the Hopkins study, for example, careful attention was paid to set and setting. Participants met with study monitors several times before their first drug session in order to build rapport with the therapists who would be present during their drug sessions, as well as to outline the range of possible effects of the drug so that the participant would have some idea of what might happen. Careful attention was also paid to the physical layout of the room where the drug sessions took place. A few years ago, I got to peek inside this treatment room. Despite its small size, the space was inviting. The room had a comfortable and familiar feeling, as if I had just stepped into someone’s living room. Pillows were placed at both ends of a large white couch. A colorful rug covered the carpeted floor. On the wall was a large oil painting of a scenic landscape. A lamp placed on the end table beside the couch provided softer lighting than the harsh overhead fluorescent lighting typically used in clinical spaces. For the Hopkins team, these welcoming details served an important purpose: “Aesthetically pleasing environments such as this, free of extraneous medical or research equipment, in combination with careful volunteer screening, volunteer preparation, and interpersonal support from two or more trained monitors, may help to minimize the probability of acute psychological distress during hallucinogen studies.”83 Similar to their predecessors, Hopkins investigators optimized set and setting to minimize adverse effects.
Using this approach, Hopkins researchers have successfully demonstrated the safety and efficacy of psilocybin therapy. The Hopkins team published results from the abovementioned study in a 2006 issue of Psychopharmacology with a title that’s a mouthful but concisely summarizes their major finding: “Psilocybin Can Occasion Mystical-Type Experiences Having Substantial and Sustained Personal Meaning and Spiritual Significance.” With findings reminiscent of Leary’s earlier uncontrolled studies in his living room, nearly 60 percent of participants rated their psilocybin experience as one of their top five most meaningful experiences, with about 15 percent claiming it was the single most important.84 While part of the Hopkins research design aligns with RCT mandates, such as using double-blind techniques, other parts draw heavily on the impure scientists’ practices, particularly that of set and setting.
However, today’s researchers are running into the same issues that their predecessors did as they merge set and setting with the demands of RCTs. Researchers would often tell me that the FDA’s conceptions of safety and objectivity constrain their study’s set and setting. One researcher working on a controlled trial of MDMA-assisted therapy for PTSD, for example, excitedly describes his team’s treatment room: “There’s nice artwork and flowers in the room,” he says, adding, “We have futons with sheets, blankets, and pillows, so they [subjects] can either lean against the wall of pillows or get under the covers. And then we have a sound system with headphones and large [eye] shades for people to use part of the time.” “But,” he sighs, “we also have a blood pressure cuff; that’s part of it. You check their blood pressure every half hour.” Apparently, the FDA initially requested that subjects’ blood pressure be measured every fifteen minutes; researchers negotiated them down to every thirty minutes after arguing that constantly interrupting the treatment session by taking subjects’ blood pressure or bombarding them with psychiatric evaluations might increase their anxiety. James, a psilocybin researcher, agrees that “to do it [psychedelic psychotherapy] safely, we need to monitor blood pressure, but it’s also really distracting.” “I think some people want to get up and walk or move, but they can’t,” he points out, while arguing that “their experience could be profoundly altered if they were able to go outside and spend time in nature.”
Other lingering methodological problems from the past rear their ugly heads in the revival. For one, there is the placebo dilemma; as one researcher told me, “You know when someone’s tripping balls.” Psychedelic drugs produce strikingly distinctive and intense effects, which makes selecting an appropriate placebo difficult. During a symposium on psychedelic medicine held at Harvard University in October 2018, psychiatrist Julie Holland admitted, “For most people, it becomes clear at some point whether people have had psilocybin or a placebo.” Holland explained how some studies use active placebos like niacin or Ritalin, substances with noticeable physiological effects, that can “give people a little bit of a rush” and, the researchers hope, bolster the integrity of their blinding procedures. Ritalin might not be a bad choice. The Hopkins team found that even experienced study monitors misclassified the drug in 23 percent of the sessions.85
Meanwhile, some researchers I spoke with are still hesitant to use placebos, feeling that doing so reinforces the mistaken idea that psychedelic drugs are a magic bullet. Echoing the “psychedelic chemotherapy” critique of an earlier generation, James complains how “Western medicine sees the power in a molecule and wants to strip the molecule of all the psychological associations that a person has to it, to control it as if the medicine is an inert substance which carries all the power, and it is purely a material object that treats an underlying disease process.” But psychedelic-assisted therapy doesn’t fit this model; the drugs’ effects are highly variable even at the same dosage. Shawn, who studies psilocybin therapy, explains how “if someone has, you know, a bacterially infected gallbladder, then no matter what their personality is, no matter what’s going on with them, if you give them antibiotics, the vast majority will respond predictably and get better.” “That’s the typical model of disease,” he explains, adding, “but it doesn’t work like that with psychedelics.”
Psychedelic drugs are unruly, but researchers hope to make the experience more regular and predictable by standardizing set and setting. In some ways, psychedelic therapy is quite the orchestrated event: there are the eyeshades that investigators hand to patients, the preselected music that’s piped through headphones, the supportive mantras (“let go”) that therapists repeat as patients encounter difficult moments.86 Several research teams have even developed guidelines in the hopes of standardizing this psychedelic therapy model. Investigators studying MDMA-assisted psychotherapy for PTSD, for example, put together a manual that provides tips for, among other things, preparing the treatment space: it should be private, quiet, and free from distractions, as well as comfortably and attractively furnished with pillows, flowers, and artwork.87 At the same time, this treatment model has not coalesced into a one-size-fits-all approach, and questions remain about the link between set and setting and treatment efficacy.
The set and setting hypothesis—that positive experiences are facilitated by an optimal context—has not been tested using modern-day research methods. In a commentary piece in the Journal of Psychopharmacology, Robin Carhart-Harris and his research group at Imperial College London call for “an evidence base for long-held assumptions about the critical importance of context” for the efficacy of psychedelic therapy.88 They propose testing the set and setting hypothesis using controlled methodologies, which they argue would offer researchers palatable evidence to back their claims that this approach maximizes benefits while minimizing harms. Their proposed study design would compare the drug and placebo condition in enriched and unenriched contexts to see if, in fact, Condition 1 (the drug condition in an enriched context) consistently yields more positive results. Researchers hope that such a study would provide an evidence base to draw on as they standardize set and setting to achieve a particular clinical outcome.89 Some researchers I spoke with were skeptical that drug sessions could be standardized in this way. When I asked Walter, who is part of an MDMA therapy team, what he thought about standardizing psychedelic therapy, he replied that the model is “not cookie cutter.” “You can’t do therapy out of a manual,” he argued while conceding, “But you can’t sell it to politicians unless there is a manual.” The point Walter is making is that whether desired or not, many researchers feel that an empirically supported, paint-by-numbers approach to psychedelic therapy is necessary to legitimate their expertise culture.
This push for an evidence-based rationale for set and setting is an interesting turn of events. Whereas many early researchers suggested that set and setting could not be integrated with RCT methods, today’s researchers are using those same methods to support the set and setting hypothesis. But this isn’t the first time that researchers have called for a “science” of set and setting.90 Leary and a graduate student of his, Ralph Metzner, published a paper in 1967 suggesting that psychedelic experiences can be “programmed.”91 They argued that programmed experiences are those “in which the sequences and patterning of stimuli are not left to chance but are arranged in a predetermined manner.” Leary and Metzner’s programming suggestions, however, weren’t derived from controlled studies; instead, they offered examples from tantric psychology and peyote ceremonies. Nonetheless, like today’s researchers, Leary was interested in finding ways to give shape and order to the psychedelic experience.
Today’s researchers are playing the science game that Leary rejected, but just as in play, there is a creative aspect to their crisis-adjudicating sober performance. Their playful methodological merging allows them to espouse the virtues of the gold standard while also importing the lessons of set and setting to salvage the efficacy of their clinical trials.
The Methodological Bricolage of the Revival
In an op-ed piece published in the Lancet, psychiatrist Ben Sessa appropriates Leary’s most quotable slogan by inviting scientists to “turn on, tune in to evidence-based medicine.” In his assessment, “the future for psychedelic drug research looks promising,” so long as researchers concentrate on producing “evidence-based data” deriving from accepted experimental methods.92 The revival of psychedelic therapy is unfolding against the backdrop of significant institutional changes in biomedical science, including the push toward evidence-based medicine as measured by controlled clinical trials. Not surprisingly, then, contemporary researchers’ expertise culture incorporates objective, standardized knowledge in establishing the efficacy of psychedelic therapy. At the same time, making psychedelics therapeutically effective, they argue, also requires attention to Leary’s more subjective set and setting. Rather than tossing out the impure scientist’s expertise completely, there is a process of bricolage, a comingling of sober and impure expertise.
This hybridized expertise is gaining traction. Many medical professionals and federal regulators working outside of psychedelic science are impressed with the methodological rigor of clinical studies in the revival. Take, for example, the Hopkins psilocybin study with healthy volunteers. What was particularly striking about the team’s Psychopharmacology article is the response it received. A series of editorials ran alongside the article, with big names in the field of substance abuse praising the study’s methodological design. One of the commentaries came from Charles Schuster, the director of the National Institute on Drug Abuse from 1986 to 1992, who opened his commentary by declaring that “the study by Griffiths et al. is noteworthy both for the rigorousness of its design and execution, as well as the clarity of its results.”93 Herbert Kleber, a substance abuse researcher, echoed Schuster: “The authors should be commended for the way they designed and carried out the double-blind project,” pointing out that “the blinding was done so carefully that even the experienced monitors misidentified the administered agent approximately one quarter of the time.”94 The reviewers rarely mentioned set and setting; it was the RCT angle of this work that they found the most compelling. Nonetheless, it is still worth mentioning that the reviewers didn’t disavow these researchers’ use of set and setting either. More recently, representatives from the FDA’s Division of Psychiatry Products held a two-hour special topics workshop on “Psychedelic Drug Development” at the American Society of Clinical Psychopharmacology annual meeting in May 2019, offering advice to interested investigators on how to conduct controlled psychedelic research and move through the FDA drug approval process.
In addition to discussing RCT methodological staples like blinding procedures, some presenters emphasized the importance of setting in shaping psychedelic drug reactions.95 Medical professionals and federal regulators consequently appear increasingly open to the revival’s hybridized approach. Today’s researchers retain set and setting, an important feature of the first wave’s expertise culture, but mobilize this technique in a way that takes into account the contemporary methodological practices that grant legitimacy. In this way, researchers pivot away from the impure scientist at the same time that they mimic his practices.
But this isn’t some newfangled technique; the Spring Grove team used controlled methodologies while adjusting set and setting to optimize the drug experience. The Hopkins team, whose members include William Richards, a septuagenarian psychotherapist who worked at Spring Grove, has drawn heavily from Spring Grove’s LSD therapy research to design its own clinical studies. So what’s different? Why is this hybridized model gaining traction today when it seemingly faltered decades earlier?
Today’s researchers are likely benefiting from methodological debates that emerged during the psychedelic science doldrums. Initial models of RCTs favored fixed protocols, but in the intervening decades, medical professionals, federal regulators, and patient consumers have pushed for greater flexibility in designing clinical research. In the 1980s, medical professionals began distinguishing between fastidious and pragmatic clinical trials.96 While the latter model embraces “heterogeneity, occasional or frequent ambiguity, and other ‘messy’ aspects of ordinary clinical practice” in clinical trial designs, advocates of the fastidious approach “prefer a ‘clean’ arrangement, using homogenous groups, reducing or eliminating ambiguity, and avoiding the specter of biased results.”97 In his study of AIDS treatment activists, sociologist Steven Epstein shows how these “lay experts” took up this debate to successfully convince the FDA to accept pragmatic trial designs, asserting that “messy” science wasn’t necessarily worse science, and in fact was arguably more ethical.98 Meanwhile, since the early 1990s, many investigators have pushed for adaptive design in drug development research, suggesting that flexibility might replace standardization as the new methodological virtue—one that is both ethical, as it helps patients get more real-world results, and efficient, as it saves researchers’ time by allowing them to adjust trial designs as they go.99
In the revival, researchers work within the scope of the science game, playing it in the hopes of legitimating their expertise on psychedelic therapy. But at the same time, they are taking a page from their predecessors by bringing in set and setting. As a result of institutional changes, researchers can more readily import a therapeutic model that was characteristic of the impure scientist (with its enthusiastic embrace of messy science) while remaking it to fit the larger institutional context (with its growing appreciation of flexibility in randomized trials).