Can your doctor’s beliefs about the efficacy of a treatment affect how you experience pain? In episode 65, we’re joined by Luke Chang from the Department of Psychological & Brain Sciences at Dartmouth College. He talks with us about his research into socially transmitted placebo effects, through which patients can pick up on subtle facial cues that reveal their doctor’s beliefs about how effective a treatment will be. His article “Socially transmitted placebo effects” was published with Pin-Hao Chen, Jin Hyun Cheong, Eshin Jolly, Hirsh Elhence, and Tor Wager on October 21st, 2019 in Nature Human Behaviour.
Websites and other resources
- Luke’s lab and personal website
- Luke on Twitter
- “FaceSync: Open source framework for recording facial expressions with head-mounted cameras” including photos of the head-mounted cameras
- Permalink for article
NPR | Science Daily | Business Standard | Reddit | STAT
Bonus Clips
Patrons of Parsing Science gain exclusive access to bonus clips from all our episodes and can also download mp3s of every individual episode.
Support us for as little as $1 per month at Patreon. Cancel anytime.
🔊 Patrons can access bonus content here.
We’re not a registered tax-exempt organization, so unfortunately gifts aren’t tax deductible.
Hosts / Producers
Ryan Watkins & Doug Leigh
How to Cite
Watkins, R., Leigh, D., & Chang, L. (2020, January 7). Parsing Science – Transmitting Placebo Effects. figshare. https://doi.org/10.6084/m9.figshare.11550186
Music
What’s The Angle? by Shane Ivers
Transcript
Luke Chang: I’m more shocked when something works than when it doesn’t. And so, at first I was like – I just didn’t believe that it actually worked.
Ryan Watkins: This is Parsing Science: the unpublished stories behind the world’s most compelling science, as told by the researchers themselves. I’m Ryan Watkins.
Doug Leigh: And I’m Doug Leigh. Today, in episode 65 of Parsing Science, we’re joined by Luke Chang from Dartmouth College’s Department of Psychological & Brain Sciences. He’ll talk with us about his research into how doctors’ beliefs about how effective a treatment will be can influence our body’s experience of – and responses to – pain. Here’s Luke Chang.
Chang: I’m Luke Chang. I was born in St. Louis, Missouri and grew up in Denver, Colorado. I attended college at Reed College in Portland, Oregon, and then I did a master’s in psychology at the New School for Social Research in New York. And then I transferred to start my PhD in clinical psychology at the University of Arizona in Tucson, Arizona, and then completed a clinical internship in behavioral medicine at the University of California at Los Angeles. And then [I did] a postdoc fellowship in neuroimaging methods and placebo and pain research with Tor Wager at the University of Colorado in Boulder. And I started as an assistant professor at Dartmouth College in 2015. So I’ve been here about four years.
You know, being trained as a clinician I was always really interested in how, like, psychotherapy worked. And in the beginning I was, like, really skeptical about it – so I actually started being trained as a neuropsychologist, but then found that I actually enjoyed doing therapy much more than doing assessments. And one of the things that I found surprising was that I was pretty convinced people were getting better [by] doing different types of therapies, but I wasn’t exactly convinced of the reason why they were getting better. And there’s … I wouldn’t say, like, a lot of research, but there’s a lot of papers talking about things called “nonspecific factors.” And so these are things like how the provider connects with the patient, or maybe instills hope, or manipulates expectations, that can impact the patient’s outcomes. And that’s regardless of the technique that you’re using. And those have been thought to account for a lot of the variance – at least in psychotherapy research – but they haven’t been studied very systematically yet. [So I] was always really interested in this – and then, can we find a way to actually study this?
Interest in somatization
Leigh: Coined by the Austrian psychoanalyst Wilhelm Stekel in 1924, “somatization” is the process by which psychological distress may be expressed through physical symptoms, such as head and stomach aches. As it’s an area of Luke’s expertise, we began our conversation by asking him how he got interested in the topic.
Chang: So, I’ve actually been interested in expectations for a really long time. I guess originally even, like, how do we make decisions? What’s morality? Those types of, like, bigger questions, and that kind of got me into the decision-making world. And then in graduate school I did a lot of work using, like, game theory to try to study how people interact in the context of decisions and these economic games. And most of the work that I was involved in in some way involved, like, some type of expectation. So it might be a social norm – that is, like, a shared expectation which many different players might converge on. Or you might be thinking about what another player might be thinking: so first- and second-order beliefs. And we did a lot of expectation manipulations. And then, clinically, I was really interested in, like, how therapy worked. And a lot of the clinical experiences I was doing while I was training in graduate school – and then also during internship – were in behavioral medicine. And I was really interested in somatization: so how does, like, the body and brain give rise to these different types of feelings or pain? And it’s not really clear exactly how that all works, but it’s really interesting. So a lot of patients I would see were on the GI ward, or in surgery … you know, recovery, or in different domains in psychiatry. So I got to see lots of, like, kind of chronic medical conditions, and how families and couples deal with that. And also really interesting somatization cases. So when I was trying to figure out what I wanted to do for a postdoc, I was really interested in trying to bring my clinical interest in pain and somatization together with my research that was more in, like, social interactions, and decision-making, and expectations.
[ Back to topics ]
Interactions within two-person dyads
Leigh: Researchers’ desire to minimize the effects of patients’ and clinicians’ expectations has led to the adoption of the double-blind randomized clinical trial as the gold standard for testing new medical treatments. In theory, both the physician and patient are unaware of whether the patients are being administered a new treatment or a placebo in such experiments. But – as we’ll hear more about in a moment – Luke and his team’s study employed a mock single-blind design in which “doctors” believed that they were administering a real analgesic cream or a sham one, though in reality both creams were inert. Next, Luke explains what led him to explore interactions within two-person dyads such as he did in this study.
Chang: So I’ve been trying to find ways to study social interactions for a few years, including during my postdoc. Because even in the field of social psychology, which has a lot of scientists in it, there’s not that many who focus on the interaction part. A lot of it’s on, like, cognition or perception, or there’s many other areas. And one of the reasons why I think people don’t really study it is because it’s really hard to collect data objectively. And then it’s also really hard to design experiments where you can manipulate factors to see how it impacts the interaction. So, there’s been a lot of research – I guess, like, starting in, like, the 30s and 40s when it really picked up – showing that people’s expectations about treatment success seem to impact their outcomes in many different domains. So that, well, eventually led to, like, the rise of, like, the double-blind clinical trial. So the first ones of those weren’t published until basically the 40s. And then, surprisingly, double-blind clinical trials weren’t even required by the FDA until, like, the 70s. I would have assumed it would be much earlier. So, we do double-blind studies – when you can do them – so with medicines, for example, only the pharmacist will know which one’s which. But in psychotherapy it’s, like, impossible to do double-blind trials. The therapists always know what type of treatment they’re administering. And then in practice, you know, the physicians aren’t usually administering placebo treatments, but they’re always administering some type of treatment over another one. And they’ll have some belief about the likelihood of how well it’s going to work for that patient given the symptoms and the problem that they’ve conceptualized. So the single-blind case was, I think, the part that we were really interested in being able to model.
[ Back to topics ]
Deceptions in the experimental design
Leigh: Luke and his colleagues were interested in learning whether people playing the role of doctors might transmit their expectations about the effectiveness of medications to patients when the doctors were falsely led to believe that an analgesic was either a real treatment or a placebo. Ryan and I asked Luke to describe the clever deceptions required for this study.
Chang: So we were really interested in whether we could find a way to show, like, a causal manipulation: that it’s the doctor’s expectations that are the key thing that drives placebo effects. We thought about using real doctors and real patients, but then there’s some selection issues … so, doctors believe things really strongly. Like, certain treatments might work better. So it was hard to remove all those types of biases. And they also might be more confident when they talk to patients, or [they’re] biased to go into the profession in the first place because they’re more empathic. So one of the key things was, like, that we wanted to just randomly assign people to be in the role of doctor or patient. And also that it’s not just, like, one doctor – so it’s not, like, we had a confederate – we wanted to show that it would work pretty much for any random person that gets selected to be in the study. So, the participants were instructed that we were interested in studying the role of interactions in these simulated clinical contexts. And, so, they knew that they were going to be randomly assigned to these different roles. And then we actually had them dress up. We had scrubs for the doctors, and then they wore, like, a lab coat. And then the patients wore, like, a gown over their other clothes. And then they both got private instructions about how the experiment was going to go. So, they had kind of different information going in.
And so, the design has, like, two parts. So the first part is essentially, like, a classic design in the placebo-conditioning literature. So, the doctor receives two different creams on different parts of their forearm. And then we basically told them that we called the cream “Thermodol,” and the idea was that it would actually impact the thermal receptors in your peripheral nervous system. And that there was, like, some amount of half-life – that it had to wear off – so they had to wait for a little bit for the cream to wear off before we did the other cream. Then we delivered thermal stimulation to each of the different creams, and the machine gives lower temperatures when it’s on [one] cream. So they believe that one cream is an active treatment, and one is like a sham or placebo. And that’s what creates this expectation and reinforces it through the learning process. And, so, that’s basically what the doctors did, and that’s kind of like a standard conditioning design. So, we told them which cream was which, and then the experience led them to believe that it actually worked. In the second part, we tried to, like, minimize everything. So, the doctor couldn’t tell the patient which cream was which. So it was a single-blind design. And then they basically would introduce themselves to the patient, being, like, you know, “I’m Doctor so-and-so, and this is what the study’s about. And I’m going to be administering this cream and some pain.” And then also we had an experimenter in the room, just for safety purposes. And then they applied the cream. And then when the patients were receiving the pain, they received the exact same temperature in each condition. And it’s all kind of controlled through a computer, but it was actually the doctors themselves that were administering the pain directly to the patient. And so, then, that allows us to see the impact of the doctors’ expectations that get transmitted somehow to the patient – to see if that impacted the patient’s perception of pain.
[ Back to topics ]
Prior research on placebos
Watkins: In 1803, the New Medical Dictionary defined placebos as “any medicine adapted more to please than to benefit the patient.” And although this definition may have been an unflattering one, it also didn’t necessarily imply that they have no effect. Given the long history of such remedies, Doug and I were curious what prior research has found with regard to their efficacy.
Chang: So, around the late 70s, there were some dentists studying pain and analgesics. And they found that giving an opioid antagonist would basically make patients feel more pain than the control condition. And they were able to do a series of studies showing that part of the reason why placebos work for analgesia is because your brain releases endogenous opioids. And by using these opiate antagonists they were actually blocking that, which is why people were perceiving more pain. So, the placebo response itself has its own physiological mechanism, which I think is pretty interesting.
And it’s studied in many different domains – a lot of it’s studied in the context of pain, but it’s also been studied in placebo conditioning on dopamine with Parkinson’s patients. There’s a lot of work – more recent stuff – on cannabinoid receptors mediating it. And then, also, there’s evidence that your, like, immune system and digestive system show some properties of conditioning consistent with, like, these placebo-conditioning paradigms. And there’s an enormous literature on placebos from the patient perspective. So we know that the doctors’ expectations also matter, but it’s never really been quantified to what degree they matter. And so, that was basically one of the things we were trying to set out to do in the study: can we manipulate the doctors’ beliefs, and then show that that gets transferred over a social interaction and affects patients’ outcomes?
[ Back to topics ]
Placebos that actively induce side-effects
Leigh: The “placebo effect” is the reason that all FDA-approved drugs have to go through a double-blind placebo-controlled clinical trial before being approved for use. But this gold standard of evidence only works if participants remain blinded to whether they’re on the real treatment or the placebo. However, doing so can be challenging, particularly when a course of treatment is well-known for its side-effects, such as hair loss during chemotherapy. Luke talked with us about how researchers sometimes create placebos which, for this very reason, aren’t inert “sugar pills” but rather actively induce side-effects.
Chang: In most clinical trials there are huge unblinding effects because of side effects. So, it depends on the kind of medicine and stuff, but [in] some of the ones that have been reported, the doctors can guess with like 80 to 100 percent accuracy which condition the patient’s in. And the patients usually – like, it depends on the type of medication – but like 60 to 80 percent of the time [they] can guess which treatment they’re in based on the side-effect profiles. So … take the case of, like, antidepressants, which have a lot of side effects. Antidepressants work really well for treating depression – for, you know, mild to moderate, and to varying degrees for severe depression – but placebos also work really, really well for mild to moderate depression. And if you have an active placebo, one with side effects, they’re basically indistinguishable from each other. And I had undergraduates in this Power of Belief class I teach, and in it they have to do a research project [where they] look at meta-analyses: they’ll come up with some question that they want to test, and then they’ll have to code a bunch of studies and then test it using R or something. But the first year I did this meta-analysis project idea, one of the students did one where she took – I can’t remember exactly – maybe like 14 different antidepressants, and got their side-effect profiles from the FDA, and then did, basically, a mini meta-analysis for each one to get the overall effect size. And [she] basically was trying to figure out which side effects seemed to lead to a stronger effect size. And it turns out it was, like, nausea. So, antidepressants that have more symptoms that elicit nausea seem to have stronger effect sizes. And it’s a particularly interesting symptom in the context of healing because, you know, over the course of history, a common side effect of medicines that seem to be working really well – severe treatments – is that you start throwing up and getting sick. You know, you have some reason to think that that’s actually making you heal.
How we’ve been thinking about it is: if you start experiencing any side effects, you’re like, “Oh, the treatment’s actually working; this seems like it’s supposed to happen,” and then you think that you’re in the real treatment and you should be improving. But there have been some other studies showing that – if you look at the amount of side effects that participants experienced – it correlates with the overall effect size of the study, across studies. And we’ve done some work, which we never ended up publishing, where basically you’re trying to guess what someone’s response is going to be – or predict what someone’s response is going to be. And so we used, like, a reinforcement learning framework where you’re getting feedback, and the feedback is whether you experienced any side effects or not. So we got some data from a clinical trial where they had some of this data, at least, recorded, and then we tried to see if you would update your beliefs faster that you’re in a treatment, and if that would predict your treatment response. And it looked like it was working, but we didn’t fully work out all the controls to convince ourselves that it was, like, a real thing. But the side-effect thing, I think, is really interesting, and I think it can actually really augment or enhance these placebo effects.
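[Ed.: Luke’s description of the side-effect feedback idea maps onto a simple reinforcement learning update. Below is a minimal sketch, in Python, of one way such a model could look – a Rescorla-Wagner-style belief update in which experiencing a side effect nudges up your belief that you’re on the active treatment. All names and parameter values here are hypothetical illustrations, not the team’s unpublished model.]

```python
def update_belief(belief, side_effect, alpha=0.2):
    """Rescorla-Wagner-style update of the belief that you're receiving
    the active treatment, with side effects as the feedback signal."""
    outcome = 1.0 if side_effect else 0.0
    return belief + alpha * (outcome - belief)

belief = 0.5  # start agnostic about which arm of the trial you're in
for felt_side_effect in [True, True, False, True]:
    belief = update_belief(belief, felt_side_effect)
    print(round(belief, 3))  # belief climbs as side effects accumulate
```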
[ Back to topics ]
Recording facial expressions
Watkins: Luke and his team’s study involved three related experiments. The first involved both patients’ and doctors’ facial expressions being recorded using custom head-mounted video cameras. Combined with self-assessments completed by patients, as well as measures of their sweat gland activity, these data were later used to train a computational algorithm that predicts pain through facial expressions alone, as Luke explains next.
Chang: In some of the pilot studies that we’d done, we tried using things like mounting cameras on tripods. We’ve also tried some where you mount them on the wall and they have, like, controls where you can, like, zoom in and tilt. But one of the issues we always had was that when you’re in an interaction – which is different than when you’re just, like, working on a computer experiment – people will tend to move around a lot, and they would go out of frame of the camera. So, it was hard to track them. So, one of the engineering feats that we came up with was a way just to mount them on each participant’s head, so that when they would move, the cameras were mounted [such that the view] was invariant to the rotation. And the model that we kind of used was that, in motion-capture studios, they have these things that mount on your head with a camera on it, but they’re very pricey. So we just came up with a way to build our own. I actually had some undergraduates who made some 3D models of the parts, and then we just printed them in our engineering school, and then mounted GoPros on them.
And I had some colleagues from when I was in graduate school who had developed some software that uses computer vision to try to convert pixels into representations of facial expressions called “action units.” And these basically refer to, like, groups of muscles that you can kind of combine in different ways to categorize different facial expressions. And these were developed by Paul Ekman a long time ago, when he was trying to find a way to codify a system for studying facial expressions.
[ Back to topics ]
Analyzing action units
Leigh: We followed up by asking Luke for more details on how these video recordings of people experiencing pain were analyzed to train this model.
Chang: For every pain trial – so they get administered pain. It takes about two seconds for it to ramp up, and then it lasts about seven seconds, and then it ramps down for about two seconds. So, I think it was about 12 seconds, roughly, that the patients were experiencing pain. And then normally, if you record, like, people’s second-by-second perceptions of how they’re feeling, you would kind of get a slope that’s delayed after the stimulus delivery, and then kind of persists longer and keeps rising until after people have received the pain, and then it decays back down to baseline. And the skin conductance responses kind of fit that same pattern. And we assumed that the facial expressions would probably have something like that as well. So what we did was: the cameras were recording, like, pixels, and then those get converted into action units. And the particular software we’re using – it’s like a deep convolutional neural net – turns the pixels into action unit representations. You get one per frame. And so we’re recording these videos through the whole experiment, so we just have this continuous stream of what the predicted action unit probabilities are for every frame.
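[Ed.: To make the pipeline concrete, here is a minimal Python sketch of the trial-slicing step Luke describes: a continuous per-frame stream of action-unit probabilities cut into fixed windows around each pain trial. Function and parameter names are hypothetical illustrations, not the team’s actual code.]

```python
import numpy as np

def slice_trials(au_stream, onsets_sec, fps=30.0, trial_dur=11.0, pad=3.0):
    """Cut a continuous (n_frames, n_aus) stream of per-frame action-unit
    probabilities into fixed-length windows around each pain trial.

    Each trial is roughly 11-12 s (2 s ramp-up + ~7 s plateau + 2 s
    ramp-down); the window is padded past offset because expressions,
    like ratings and skin conductance, lag and outlast the stimulus.
    Assumes the recording extends past the last trial."""
    n_frames = int((trial_dur + pad) * fps)
    windows = []
    for onset in onsets_sec:
        start = int(onset * fps)
        windows.append(au_stream[start:start + n_frames])
    return np.stack(windows)  # shape: (n_trials, n_frames, n_aus)
```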
Then we basically trained our own model to predict pain. And it’s really simple: just a regression model. The concept came from some earlier work that was done by Tor Wager on trying to find whether there’s a way we could have an objective measurement of pain from the brain itself. And so, he administered pain to people in the scanner, and basically found a way to predict the intensity of their pain from, basically, patterns of brain activations. And it ended up being, like, really, really reliable. And we can take that same kind of idea or methodology and apply it to these facial expressions. So, let’s say we have 20 action units: we can create a feature representation in that space that we can use to predict how much pain people are feeling. So, we used a couple different features. One was, like, what’s the maximum intensity that it ever got? And then the minimum intensity for every action unit, and then also the amount of time it took to get to the maximum. So, for every action unit we had these three different features we used. And it corresponds to – there’s, like, a whole other literature on pain and facial expressions, and we basically kind of just replicated what they’ve been finding over and over again: where you kind of, like, get your eyebrows raised, but your eyes kind of close, and then you kind of, like, grimace almost. Like, you pull your lips back. And so we trained the model using the doctors when they were receiving the pain in the first stage. So, the model is a penalized regression. And it basically had to be able to predict a new subject out-of-sample, and that’s how we evaluated it. That basically gives us, like, one model – so it should generalize over all subjects – that we can then apply to the patients to get an idea of how much pain the model thinks that they’re experiencing.
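[Ed.: And here is a sketch of the model itself as Luke describes it – three features per action unit (maximum, minimum, and time-to-peak), fed into a penalized regression that is evaluated on held-out subjects. The stand-in data and names are invented so the example runs end to end; the published pipeline may differ in its details.]

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import LeaveOneGroupOut, cross_val_predict

def trial_features(window):
    """window: (n_frames, n_aus) per-frame AU probabilities for one trial.
    Returns the three features per action unit described above."""
    return np.concatenate([
        window.max(axis=0),      # maximum intensity of each AU
        window.min(axis=0),      # minimum intensity of each AU
        window.argmax(axis=0),   # frame at which each AU peaked
    ])

# Stand-in data so the sketch runs: 12 "doctors" x 10 pain trials,
# 20 action units, 330 frames per trial window.
rng = np.random.default_rng(0)
windows = rng.random((120, 330, 20))
pain_ratings = rng.uniform(0, 100, size=120)
subject_ids = np.repeat(np.arange(12), 10)

X = np.array([trial_features(w) for w in windows])   # (120, 60)
y = pain_ratings

# Penalized (ridge) regression, forced to predict each held-out subject
# from a model fit on everyone else - one model that generalizes across
# people, which can then be applied to the patients' videos.
model = RidgeCV(alphas=np.logspace(-3, 3, 13))
predicted = cross_val_predict(model, X, y, groups=subject_ids,
                              cv=LeaveOneGroupOut())
print("out-of-sample r =", np.corrcoef(y, predicted)[0, 1])
```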
[ Back to topics ]
Findings
Watkins: Luke and his colleagues’ first study examined whether the transmission of beliefs between doctor and patient was mediated by the doctor’s facial expressions. And their second study sought to replicate this with a new sample of participants and several tweaks to the experimental set-up. We’ll hear what they found after this short break.
ad: Altmetric’s podcast, available on Soundcloud, Apple Podcasts & Google Podcasts
Watkins: Here again is Luke Chang.
Chang: If we use the pain facial expression model and then apply it to the doctors – who weren’t actually experiencing pain at all – the model actually showed lower displays of pain when the doctors believed the treatment to be working. And then, when the doctors believed that they were delivering the real treatment, patients reported feeling less pain, but then also their facial expressions showed – the model predicted, like, lower displays of pain.
And, I’m more shocked when something works than when it doesn’t. And so, at first I was like – I just didn’t believe that it actually worked. I just figured that the research assistants or the experimenters had done something to kind of tip the hand, or done something else. So, first I wanted to see if we were gonna be able to replicate it. And then we had some other things: like, one experimenter was in the room for all of them, so we used two experimenters to see if that impacted [the results]. We also thought, like, well, maybe the temperature matters, [so] we tried a couple different temperatures. And we had, basically, only done conditions where the control cream was delivered first and then the Thermodol.
Previous work that was done in Tor Wager’s lab – who’s a co-author on this, and also was my postdoc mentor – they had found, almost sort of accidentally, that if you do the placebo condition first you often don’t get the placebo response. But if you do the control first then you get it fairly reliably. So, one idea is that the way perception works is it’s sort of like a relative state comparison. So when things feel really hot, it’s not that we’re able to have an absolute magnitude estimate of, like, what the temperature is. We just know that relative to the previous state it’s a lot hotter than it was. And so if you apply, like, something that’s hot, but it’s really cold outside, then that’s gonna feel more hot than if it’s really, really hot outside and you apply something hot. So it could be something like that: it’s this relative judgement, so you kind of need a reference point. And that’s kind of how I’m thinking about the problem. And a lot of times if you have, like, chronic pain, or if you’re, you know, depressed, or have a migraine, or any of these other things where placebos have been explored: you already have, like, a current state, and so any change will be relative to that current state. But if you’re, like, a healthy participant in these studies, you’re not in chronic pain or anything, so you have to kind of create these things first.
So, we knew going in that we were expecting to see this reference-dependent effect. But there’s a strong possibility that it could have been a habituation effect, meaning that … so, basically, the first time you experience any type of thermal stimulation it hurts. It’s basically, like, the equivalent of, like, pouring, like, hot coffee on your arm. So you’ll kind of see your skin turn red, but it won’t, like, blister. But then over time – the more times you get stimulated at the same site – you actually see a habituation where it decreases pretty dramatically. So one of the things we wanted to rule out was that it wasn’t a habituation effect. So we also did, like, the full counterbalancing. It didn’t fully rule it out, but it didn’t seem like it was entirely habituation either.
So what we found was that we replicated the original effect across experimenters and different levels of temperature intensity. But when we flipped it – so when you get the Thermodol first and the control cream second – we didn’t see the effect. But it also wasn’t exactly a situation where the control cream was lower, either. So it was kind of, like, ambiguous.
[ Back to topics ]
The final study
Leigh: As solid as Luke and his team’s findings were, peer reviewers of their paper made several critiques that required that they return to the lab to carry out a final study, as Luke describes next.
Chang: I’ve submitted a lot of papers, and I’ve had a lot of terrible reviews, and this was one of the better review processes I’ve ever had of any paper. So, it went relatively fast: the reviewers, like, got it, and they were really supportive, and had a lot of really great constructive criticism. So, one: they weren’t fully convinced of our extinction – or this habituation – effect. So we had to come up with a stronger way to try to show that. And one reviewer was really worried that, maybe, because they’re wearing these cameras, and it’s not very natural, people might be more self-conscious, or more attentive, or more expressive with their facial expressions. So we wanted to rule that out. And then another reviewer raised another really great critique: that the experimenters knew which condition the patients were in. And so, even though it’s still a second-order belief transmission, it could have been from the experimenter to the patient rather than the doctor to the patient.
[In] the third study, we also made it so the experimenter who was in the room didn’t know which treatment was which, so they were blind to it. So only the doctor knew which one was which. The experimental design we used to show that it wasn’t habituation – while also accommodating this kind of known issue with having the control go first – was an ABBA design. So it was [that] every participant – it’s within-subjects – got the control cream, then the placebo cream, then the placebo cream [again], and then the control cream. And then if it’s habituation, then you’d just expect it to go down and stay down. But if it’s not, then you’d expect it to kind of go back up for the last control condition. And that’s basically what we found, both in terms of self-reported pain and also with skin conductance response.
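[Ed.: The ABBA logic in miniature, with simulated numbers: pure habituation predicts that pain just keeps falling across the four blocks, while a placebo effect predicts a rebound in the final control block. Purely illustrative data, not the study’s.]

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 24  # simulated participants

# Block order: control, placebo, placebo, control (ABBA).
scenarios = {
    "habituation": np.array([70, 62, 55, 50]),  # monotonic decline
    "placebo":     np.array([70, 58, 56, 68]),  # rebound in last block
}

for label, means in scenarios.items():
    ratings = means + rng.normal(0, 5, size=(n, 4))
    rebound = ratings[:, 3] - ratings[:, 2]  # final control minus last placebo
    t, p = stats.ttest_1samp(rebound, 0.0)
    print(f"{label:12s} mean rebound = {rebound.mean():5.1f}  "
          f"t = {t:5.1f}  p = {p:.3g}")
```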
[ Back to topics ]
How doctors convey their true expectations
Watkins: The team’s third study also replicated their previous findings, suggesting that doctors’ subtle facial cues do indeed transmit placebo effects. But since quantitative studies are better at identifying “what’s going on” than they are at explaining why, we asked Luke what some alternative explanations of their findings might be.
Chang: Yeah, so, like, why … why does it work, or how is it even happening? So, one idea is that the doctors are, like, hyper-attuned to the patient when they think the treatment is going to work. So they’re paying more attention to them – they might be more empathic, or caring, or warmer, or more perceptive – and these subtle kind of, like, nonverbal cues might be perceived by the patient, and that’s what gives rise to [what] they’re perceiving. And what they self-report is, like, feeling that the doctor was more empathic in that condition compared to the other. And so, that might be the mechanism by which the beliefs are getting transmitted, and also [why] the patients are feeling better. Again, we don’t really know.
And another possibility is it’s the other way around: so they could feel less pain because there’s some type of social interaction – or something about being supported by another person – that makes you feel better. And then you misattribute that feeling to the cream actually working. And I don’t know if that’s, like, what’s happening, but that’s one possibility. Another possibility is that the doctor does something to convey that they have a lot of confidence that this treatment’s going to work. And so the patient thinks, “Oh, this must be the good treatment.” It’s kind of like how a normal placebo mechanism might work, where they believe that the treatment’s working, and so it has this self-fulfilling prophecy effect where they start feeling less pain that way.
[ Back to topics ]
Future research plans
Leigh: We wrapped up our conversation by asking Luke if he’s planning on extending this line of research into what might make otherwise effective treatments become ineffective … or placebos become effective.
Chang: We have some follow-up things: we’re looking at this dataset, and collecting some other ones, to find out, well, like … not, maybe, necessarily what the patient thought, but if you have other third parties watch the doctors’ videos, is there any systematic difference in how they’re expressing? Do they seem warmer, or are they paying more attention, or do they seem more confident? So, keeping the raters blind, but trying to get some other information about what the doctors might have been doing differently in the two different conditions. And then another one is: we have this assumption that everybody expresses [pain] in the same way, but it’s possible that there’s many different types of pain facial expressions, and it could be across people, or even within a person. And so, we’re also trying to do things to see if we can model that better, kind of using, like, unsupervised learning techniques to identify different patterns of pain facial expressions.
And then, there’s a group who’s been doing acupuncture while they do what’s called hyperscanning. So they have, like, a doctor in one [MRI scanner] – like, an acupuncturist – and then a patient in the other one, and then I think it’s, like, some type of electrical stimulation. So the doctor will actually deliver the treatment by hitting a button, and then it goes, and they get real-time – they’re streaming video to each other, so they can see each other. It’s a really impressive setup. I don’t … I’m not sure if they’ve published it yet, but they’ve been working on it for many years. So, one idea is that we want to scan [dyads like that], although I’ve been a little hesitant, because I’m not sure it’s gonna work. I think there’s something about the interaction that’s important, and if the doctor’s not in the room, or if they can’t see the patient, I’m not sure how it’s gonna work. So, we haven’t tested it yet, but that’s one idea. Another one is to try to test it with real doctors and patients in a clinic somewhere to see if it generalizes. And I would expect that it would actually be stronger in that context, at least with this laboratory treatment of pain – acute pain, not necessarily chronic pain.
[ Back to topics ]
Links to article, bonus audio and other materials
Leigh: That was Luke Chang, discussing his article “Socially transmitted placebo effects,” which he published with Tor Wager and four other researchers on October 21, 2019 in the journal Nature Human Behaviour. You’ll find a link to their paper at parsingscience.org/e65, along with bonus audio and other materials we discussed during the episode.
[ Back to topics ]
Preview of next episode
Watkins: As we begin the New Year, you might like to keep an eye on who we’re planning to have on the show in 2020, and maybe even suggest questions for Doug and me to ask them. You can now do so at parsingscience.org/upcoming. We’ll also be highlighting our newly-booked scientists in our weekly newsletter, so if you’re interested in joining in the conversation, you can sign up at parsingscience.org/newsletter.
Leigh: Next time, in episode 66 of Parsing Science, we’ll be joined by Katherine Wood from the University of Illinois’ Department of Psychology. She’ll talk with us about her research with Daniel Simons – the scientist behind the famous “Invisible Gorilla” experiment – into if and when people notice unexpected objects in inattentional blindness tasks.
Katherine Wood: Inattentional blindness reveals how the attentional system is prioritizing information in kind of a similar way that visual illusions reveal how color constancy is computed, or how we figure out linear perspective and relative size, and so on.
Leigh: We hope that you’ll join us again.
[ Back to topics ]