By now, we’re all familiar with the idea that social media can be – and has been – used to spread untruths. But why does this work? Soroush Vosoughi from MIT’s Laboratory for Social Machines and Harvard’s Berkman Klein Center talks with us in episode 20 about his research into how false news disseminates differently than true news on Twitter. His open-access article “The spread of true and false news online,” co-authored with Deb Roy and Sinan Aral, was published in the journal Science on March 9, 2018.
Websites and other resources
- “Fake news spreads faster than true news on Twitter—thanks to people, not bots” in Science Magazine
- “Cover stories: Visualizing the spread of true and false news on social media” in Science Magazine
- “Answers to Some Frequently Asked Questions About ‘The Spread of True and False News Online’” by Soroush Vosoughi, Deb Roy, Sinan Aral (March 28th, 2018)
- “Parallel narratives: Clinton and Trump supporters really don’t listen to each other on Twitter” (Vice News article mentioned in episode)
- Co-author Deb Roy clarifying the study’s scope on Twitter:
Thanks to @jjaron for some clarifying comments to my first diagram. Here's a second one that fills in some more detail. The outer circles point to areas where we need further research! pic.twitter.com/KhJth1zFnp
— Deb Roy (@dkroy) March 15, 2018
Press coverage
Bonus clips
Patrons of Parsing Science gain exclusive access to bonus clips from all our episodes and can also download mp3s of every individual episode.
Support us for as little as $1 per month at Patreon. Cancel anytime.
Patrons can access bonus content here.
We’re not a registered tax-exempt organization, so unfortunately gifts aren’t tax deductible.
How to Cite
Leigh, D., Watkins, R., & Vosoughi, S. (2018, April 2). Parsing Science – How Misinformation Spreads Online. figshare. https://doi.org/10.6084/m9.figshare.6081206.v3
Hosts / Producers
Doug Leigh & Ryan Watkins
Music
What’s The Angle? by Shane Ivers
Transcript
Soroush Vosoughi: This disgust might be one reason – or these negative feelings people have towards false stories might be one reason – why they’re more likely to share them.
Doug Leigh: This is Parsing Science. The unpublished stories behind the world’s most compelling science as told by the researchers themselves. I’m Doug Leigh.
Ryan Watkins: and I’m Ryan Watkins. Speaking of a quickly-debunked Wikileaks hoax in 2012, the investigative journalist Glenn Greenwald wrote “It is true that the Internet can be used to disseminate falsehoods quickly, but it just as quickly roots them out and exposes them.” So is Greenwald’s stance true today as well? Today we’re joined by Soroush Vosoughi from MIT’s Media Lab, who spoke with us about his research into how true and false news spread online.
ad: WeShareScience
Hi, I’m Soroush Vosoughi. I’m a postdoc at the MIT Media Lab and also a fellow at Harvard’s Berkman Klein Center. I’m an MIT lifer; I’ve been at MIT for many, many years. I came to Boston to study at MIT in 2004 as an undergrad, and I ended up getting my bachelor’s there, then my master’s, and then my PhD, and then I decided to stay a couple of years as a postdoc. And now I’m actually on the job market this year, so I’m going around giving job talks, hopefully finding a position for next fall.
Leigh: Ryan and I began our conversation with Soroush by asking what led to his interest in researching the spread of false news and rumors online.
Vosoughi: I was a second-year PhD student in 2013 when the Boston Marathon bombings happened. At that point I was still exploring research ideas for my PhD thesis, but one area that I was getting closer to making my PhD topic was creating computational models of language learning. I’ve always been interested in linguistics and natural language processing from a computational point of view, so I’ve been interested in, for example, coming up with computational models of how children learn language.
And I did a little work on that for my master’s thesis, so I wanted to dig deeper into that topic for my PhD, but then the bombings happened in April 2013. As you probably remember, MIT was at the center of some of what happened. An MIT police officer, Officer Sean Collier, was shot and killed on campus. There was a manhunt; Cambridge went on lockdown. There was a lot of crazy stuff happening that week, from Monday when the bombings happened until Thursday or Friday morning, I believe, when the suspects were apprehended. In that week, Twitter and other social media platforms like Facebook and Reddit became my main source of information – and not just for me: most of my friends who were locked down in MIT grad dorms went on social media to figure out what the hell was happening. And it was great, because you would get real-time updates from people on the ground; they could say something like, “Oh, just so you know, there’s a shooter out here, and here’s a picture,” or, “I just heard there was another bomb there.” So you’d get real-time updates on what was happening. But a couple of days after things settled down and people looked back at what had happened, we realized that a lot of the stuff we were reading on social media was not true. Some of it was harmless, in a way, but some of it could have been really damaging – and some of it was damaging. And so after I realized that a good chunk of what I was being exposed to on Twitter and other social media platforms was false, I decided to change my topic for my dissertation and use my skills. My background is in natural language processing and machine learning applied to big data, and I wanted to use those skills to study the spread of rumors on social media. To be more specific, I was interested in engineering a tool that would detect stories spreading around real-world events, like the Boston Marathon bombings – a real-time or near-real-time rumor detection system – and then I wanted my system to be able to verify those stories as fast as it could.
[ Back to topics ]
Four characteristics of information cascades
Watkins: With social media, especially on Twitter, information is shared from one or more users to multiple other users. When such information is reshared, a “cascade” is formed. Soroush and his team selected over 126,000 of these Twitter cascades whose claims could be verified as true or false by professional fact-checking organizations. They then examined four characteristics inherent to cascades, as he describes next.
Vosoughi: Well, the size is pretty straightforward: that’s just the number of retweets. The depth is how many hops along the followership graph the thing spreads. So if person A tweets something, and person B retweets person A because person B is following person A, and person C retweets person B because person C is following person B, and so on, depth captures how many hops it goes, right? It’s basically word of mouth – I heard this from a friend of a friend of a friend: that’s a depth of three. Then the other measure, structural virality, is somewhat related to depth: it captures the general structure of the cascade. It’s a little technical, but I can give the technical definition and also what it actually means non-technically. The technical definition is the average shortest distance between all the nodes in the tree; that’s the structural virality. What it actually means is that it captures whether a tree – a diffusion cascade – has, for example, a broadcast shape or a viral, peer-to-peer shape. So it’s really good at distinguishing between broadcast diffusion and peer-to-peer diffusion. If there’s an account that has a million followers and they tweet something and all million followers retweet it, that has a very broadcast shape, right? It’s just one person, and then boom. Whereas let’s say my account only has a hundred followers: I tweet something, but because it’s so interesting it basically goes viral – not because I have a lot of followers, but because a friend of mine retweets it, then another friend retweets that, and it goes really deep and reaches different areas of the followership graph. That has a very different shape – a peer-to-peer diffusion shape. So a cascade with a high structural virality score is more like the second example I gave; the more broadcast-like the diffusion is, the lower its structural virality.
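For readers who want to see these measures concretely, here’s a minimal sketch – assumed, not the study’s actual code – of how a cascade’s size, depth, breadth, and structural virality might be computed from a retweet tree using Python’s networkx library. The two example cascades are invented to show how a broadcast-shaped cascade and a chain-like, peer-to-peer cascade score differently.

```python
# A minimal sketch (not the authors' code) of the cascade measures described above.
# Each cascade is represented as a list of (parent_user, child_user) retweet edges
# rooted at the original tweeter.
import networkx as nx

def cascade_metrics(edges):
    tree = nx.DiGraph(edges)
    undirected = tree.to_undirected()

    size = tree.number_of_nodes()                        # users involved in the cascade
    root = next(n for n, d in tree.in_degree() if d == 0)
    hops = nx.shortest_path_length(tree, source=root)    # depth of each user below the root
    depth = max(hops.values())                           # longest retweet chain
    levels = list(hops.values())
    breadth = max(levels.count(d) for d in set(levels))  # most users at any single depth
    # Structural virality: mean shortest-path distance over all pairs of nodes.
    virality = nx.average_shortest_path_length(undirected) if size > 1 else 0.0
    return {"size": size, "depth": depth, "breadth": breadth,
            "structural_virality": round(virality, 2)}

# Broadcast-shaped cascade: one account, five direct retweets
broadcast = [("A", f"follower{i}") for i in range(5)]
# Peer-to-peer cascade: a chain of retweets of retweets
chain = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "E"), ("E", "F")]

print(cascade_metrics(broadcast))  # shallow, wide, low structural virality
print(cascade_metrics(chain))      # deep, narrow, high structural virality
```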
[ Back to topics ]
Number of followers, age of account, and verified accounts
Leigh: Their findings showed that false news spreads farther, faster, deeper and more broadly online than the truth does. Furthermore, the algorithm Soroush trained could also predict whether other online rumors were true or false 75% of the time, even before those tweets were fact-checked. Ryan and I were curious to learn which of the characteristics that Soroush and his team looked at were most influential in the spread of false news online.
Vosoughi: One hypothesis could be that false news spreads faster than true news because the people who spread false news have more followers, or anything like that – or they’re more engaged on Twitter, or something like that. So we looked at various account characteristics: things like the number of followers, the number of accounts they follow, how old their accounts are, how engaged they are on Twitter, and whether they’re verified or not – and we compared the accounts that spread false news with the accounts that spread true news. What we found is that the difference doesn’t explain it. In fact, the opposite of what you’d expect is true: accounts that spread true news are actually more likely to be verified and have slightly more followers. So it’s not that these characteristics explain why false news spreads faster than true news; that was the basic finding. One thing I want to make clear is that we did look at bots, but one thing we didn’t look at – and it could potentially be an explanation of why false news spreads faster than truth, though again we didn’t look at it, so we don’t know whether it’s true or not – is, for example, troll accounts, or any kind of cyborg accounts. These are accounts that are not bots, but there’s a person controlling a few accounts with tools that let them amplify their message; or people gaming Twitter’s algorithms so that certain things will be trending, or something like that. These are things we didn’t look at, and they could very well explain the behavior we’re seeing – we don’t know, so anything we say is speculative. If it is true that they’re responsible for this, it’s very sophisticated behavior – having bots is not super sophisticated, but a lot of these other things are – and if that is the reason we’re seeing what we’re seeing, then it shows there’s a really coordinated, sophisticated effort behind it. And again, we didn’t look at this, so I don’t know whether it’s true one way or the other; anything we say is speculative.
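As a rough illustration of the kind of account-level comparison Soroush describes – not the paper’s actual analysis – the sketch below contrasts summary statistics for two invented groups of accounts. The field names and sample values are placeholders.

```python
# A hypothetical sketch of comparing account characteristics between users who
# spread false news and users who spread true news. All values are invented.
import statistics

def summarize(accounts):
    """accounts: list of dicts with 'followers', 'age_days', and 'verified' keys."""
    return {
        "median_followers": statistics.median(a["followers"] for a in accounts),
        "median_age_days": statistics.median(a["age_days"] for a in accounts),
        "pct_verified": 100 * sum(a["verified"] for a in accounts) / len(accounts),
    }

false_spreaders = [{"followers": 120, "age_days": 400, "verified": False},
                   {"followers": 340, "age_days": 900, "verified": False}]
true_spreaders = [{"followers": 800, "age_days": 1500, "verified": True},
                  {"followers": 500, "age_days": 1200, "verified": False}]

print("false-news spreaders:", summarize(false_spreaders))
print("true-news spreaders: ", summarize(true_spreaders))
```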
[ Back to topics ]
The emotional content of replies to true and false news
Watkins: To learn whether there might be any linguistic clues that could explain how false rumors propagate, Soroush and his team examined the emotional content of replies to true and false news tweets. Here, he explains how they explored the ways in which people respond to both types of news.
Vosoughi: The best way to do this would actually be to survey people – to ask them: you saw this thing, how do you feel about it, would you share it – any kind of question you could ask about people’s behavior around false or true news. That would be the best way to address this question. But obviously we didn’t do that for this study, and for many different reasons it’s not easy to run these surveys on Twitter. So we decided to look at what people were actually saying in response to these articles as a proxy for their reaction to these stories. One way you could do this is to run it through sentiment analysis to get a positive/negative rating, but we wanted more specific emotions than just positive and negative, so we used the basic emotions – there are eight basic emotions that capture most of people’s emotions, so everything you can think of falls under one of these eight. These are things like fear, disgust, anticipation, and joy and sadness are also part of it. So anyway, there are eight emotions, and basically, the same way people create sentiment classifiers, we did that for these eight emotions – we just extended the approach slightly to cover more emotions, to get a better sense of not whether a response is negative or positive, but what exactly makes it negative: is it fear, is it disgust, things like that. That was our attempt to understand people’s reactions to these news articles without actually having to run surveys.
And so, yeah, the surprise finding didn’t really surprise us that much, mostly because our previous analysis showed that false news is apparently more novel than true news, and obviously you would expect people to be more surprised when they see something novel. The disgust was interesting because we didn’t expect that, but it kind of makes sense. There’s been other research in the field of communication showing that people are more likely to share a news piece with negative content than one with positive content. They didn’t look at social media; they looked, I believe, at the New York Times – people sharing articles with their family members or over email or something – and it seems like people like to share things with a negative feel to them more than they like to share positive stories. And so that made us think that this disgust – or these negative feelings people have towards false stories – might be one reason why they’re more likely to share them, because this other research shows that these kinds of negative emotions do have an effect on people’s sharing behaviors. So, yeah, it was an interesting finding – we didn’t expect it – but it was interesting.
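To make the classification approach Soroush describes more concrete, here’s a simplified sketch of lexicon-based emotion scoring over reply text. The tiny word lists are illustrative placeholders, not the actual lexicon or classifier used in the study.

```python
# An illustrative sketch of scoring replies against eight basic emotions.
# The word lists below are placeholders, not the study's emotion lexicon.
from collections import Counter
import re

EMOTION_LEXICON = {
    "fear":         {"afraid", "scared", "terror", "panic"},
    "disgust":      {"disgusting", "gross", "vile", "sickening"},
    "surprise":     {"unbelievable", "shocking", "wow", "unexpected"},
    "anticipation": {"soon", "waiting", "upcoming", "eager"},
    "joy":          {"happy", "great", "wonderful", "delighted"},
    "sadness":      {"sad", "tragic", "heartbreaking", "mourning"},
    "anger":        {"angry", "outrage", "furious", "infuriating"},
    "trust":        {"reliable", "honest", "credible", "confirmed"},
}

def emotion_profile(replies):
    """Count lexicon hits per emotion across reply strings and normalize."""
    counts = Counter()
    for text in replies:
        words = re.findall(r"[a-z']+", text.lower())
        for emotion, vocab in EMOTION_LEXICON.items():
            counts[emotion] += sum(w in vocab for w in words)
    total = sum(counts.values()) or 1
    return {emotion: counts[emotion] / total for emotion in EMOTION_LEXICON}

print(emotion_profile(["Wow, that's shocking and disgusting if true!"]))
```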
[ Back to topics ]
What motivates (and demotivates) sharing false news
Leigh: Though it wasn’t the focus of their study, Ryan and I were interested in learning if people may have different motivations for sharing false news online, as well as what might help persuade them to reconsider doing so. Here, Soroush gives his thoughts on the question.
Vosoughi: This is outside the scope of the study, so again, this is me speculating, but my hypothesis is that there are three kinds of people who share false news. The first type do it maliciously, on purpose. The second type do it because they like juicy gossip and interesting things, and they don’t care whether it’s true or not – it’s not that they’re sharing something false on purpose; they just want to share something interesting. And the third type do it out of – I’m going to call it ignorance, but what I mean is that they just don’t have the time, or they don’t look at the source that closely, and they share something that might end up being false. And I do believe there are enough people in each of these three categories that it would be interesting, for further study, to divide them up and look at these three different groups, and I bet their behavior is very different. The ways you’d need to intervene to stop the spread of misinformation are also very different for each group. Behavioral intervention methods could potentially apply to groups two and three, but for the first you need very different kinds of systems: you need to detect these malicious agents – and I don’t know exactly what you would do – but the approach you take with those people is going to be very different. Of the three types, the third will probably not be as interested in engaging in the conversation and the replies, but the first two might be. So again, this is something we didn’t look at, but it would be really interesting for future research.
[ Back to topics ]
How Soroush got access to Twitter’s data
Watkins: Unlike Facebook, tweets posted to Twitter are almost always public. The analytics data that Soroush and his team gained access to, however, is not. Since we spoke to him just as details of Cambridge Analytica’s misuse of private Facebook data were first emerging, Doug and I were especially interested in learning what led Twitter to share every tweet ever made with Soroush’s team.
Vosoughi: The co-authors on my study are my postdoc adviser, Professor Deb Roy, and Professor Sinan Aral, who was on my PhD committee. And so after my PhD defense we [unintelligible] rumors, as I mentioned – we had discussions, and he decided to collaborate with me on this project as well.
Our laboratory – the Laboratory for Social Machines, which is headed by Professor Deb Roy – had a special relationship with Twitter when it comes to data access. It’s not that we have access to data no one else could get – everyone else could get access to that data; it’s just that it costs a lot of money. So you can think of it as a kind of sponsorship: Twitter is sponsoring us by giving us free access to their data, and definitely having Deb as someone involved with Twitter really helped get that sponsorship. Twitter also sponsors our lab with money, so not just data access but financial gifts as well, and that relationship definitely helped us a lot in getting access to this kind of data.
So, Twitter sponsors our lab, but they don’t tell us what to work on. So, they give us access to data … and so this was something that I decided to work on just because of my PhD thesis, and they weren’t kind of either for it or against it: they supported any sort of research that came out of our lab. So, it wasn’t like Twitter said, “Look at this,” or, “Don’t look at this”; so, it was just something we did.
Nowadays, most of the interesting data lives at corporations like Google, Microsoft, and especially Facebook and Twitter, so for any kind of research you want to do, you have to work with these corporations to get access to interesting data. Certain platforms like Twitter are much easier to work with because the data is public, whereas with Facebook there are a lot more privacy concerns around the data.
[ Back to topics ]
What social media platforms can do about misinformation and disinformation
Leigh: Given its role in recent world elections, we asked Soroush for his thoughts on how the spread of false news and rumors might be best investigated, as well as what courses of action he hopes that social media platforms might adopt to mitigate them.
Vosoughi: You know, I think if we don’t do anything about this, the problem is going to persist, and I’d rather the research on detecting and studying false news and rumors be done in the open, where people can actually see the results, maybe disagree with them, and build on top of them – even if that means others would get access to something that would let them game the system. But then you push back and you do something better. So, yes, you are kind of creating an arms race, but I think that kind of transparency is the best way forward for this kind of work.
My co-authors would probably disagree – or maybe they would agree, I don’t know – but my personal opinion is that I don’t think the platforms should do anything that would even come close to censorship. So I don’t think they should, for example, remove news that they think is fake, or anything like that; I don’t think they should be doing that. But what they could be doing is providing these information quality metrics. So they let you see whatever you want, but give you a sense of, for example, “this source you’re reading from – according to Snopes, 80 percent of what they’ve said in the past was false,” and then you decide whether you want to retweet it. I think providing more information about what people are reading and the sources of these things is really helpful, and something the platforms could do. But actually censoring information and not letting people see certain things – that’s something I don’t agree with.
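As a purely hypothetical illustration of the “information quality metric” idea Soroush floats here, the sketch below scores a source by how often its past fact-checked claims were rated true. The source name, record format, and data are invented for illustration.

```python
# A hypothetical sketch of a per-source "information quality" score based on
# past fact-check verdicts. The data and source name are invented.
from dataclasses import dataclass

@dataclass
class FactCheckRecord:
    source: str
    verdict: str  # "true" or "false"

def source_reliability(records, source):
    """Return the share of a source's fact-checked claims rated true, or None."""
    checked = [r for r in records if r.source == source]
    if not checked:
        return None  # no history: show "unrated" rather than a score
    return sum(r.verdict == "true" for r in checked) / len(checked)

history = [FactCheckRecord("example-news.com", "false"),
           FactCheckRecord("example-news.com", "false"),
           FactCheckRecord("example-news.com", "true")]

score = source_reliability(history, "example-news.com")
print(f"example-news.com: {score:.0%} of fact-checked claims rated true")
```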
[ Back to topics ]
Detecting these false stories before they propagate
Watkins: Soroush’s earlier work involved creating a tool for detecting false rumors on social media platforms, and this project extended that research by examining how false and true rumors are shared. Given his expertise, Doug and I wanted to learn where his interests might take him next.
Vosoughi: One thing I am planning on doing is taking the dataset that we’ve collected and thinking, from an engineering point of view: all right, here we have this large dataset across many different topics – could we create engineering tools that detect these false stories before they’re actually verified, before they’re shown to be true or false by these fact-checking organizations? I think that would be a very interesting extension.
From my point of view, though, the extension to this work that I’m most interested in is actually behavioral intervention experiments. So I’m now less interested in detecting false news and rumors, and more interested in figuring out whether there are certain ways you could intervene to dampen the spread of rumors and false news. By “intervene” I don’t mean the platforms intervening, or some policy that would censor people – I mean algorithmic interventions. So maybe having tools that detect false news spreading, figure out who is on the path to be exposed, and let them know the truth before they’re exposed to the falsehood, or something like that. Or maybe coming up with an information quality metric that lets people know, when they’re reading something, how good the quality of that information is, and maybe that will help them think twice before they retweet something that has a low quality rating. These kinds of intervention methods are what I’m really interested in as an extension of this study.
[ Back to topics ]
Mapping tribal networks and echo chambers
Leigh: The widespread use of social media seems to have accelerated our capacity to isolate ourselves from others whose opinions and perspectives differ from our own. This amplification and reinforcement of media that aligns with our established beliefs and opinions has been dubbed the “echo chamber effect.” We asked Soroush for his perspective into this phenomenon.
Vosoughi: We had this study, basically, about the 2016 presidential election. Using our access to the Twitter data, we looked at people in our dataset who had commented about the election, and when the election was over we mapped them, basically, on Twitter. We had some way of determining whether those people were supporters of Trump or Clinton or Sanders and whatnot. And when we mapped these people we saw these really clear echo chambers – for example, there was a tightly connected cocoon with very few connections outside of the cocoon. Interestingly enough, we looked at journalists on Twitter, and we saw that they were mostly outside of these cocoons. So there was kind of a communication gap between that cocoon and the rest of Twitter, including journalists. So we did look at cocoons and filter bubbles through that lens – the lens of the 2016 election.
One thing that we would be really interested in doing in the future is combining that research – and by the way, that was published in Vice News if you’re interested in reading it; it’s called “Journalists and Trump voters live in separate online bubbles.” That work is still under review – we’re going to publish it in an academic journal – but because it was so interesting, we wanted to have it out there as a news article. So, yeah, in the future we’re really interested in maybe combining some of the work on false news with our work on mapping these tribal networks and echo chambers, but we haven’t done that kind of intersection yet.
[ Back to topics ]
Misinformation about Soroush’s study of misinformation
Watkins: Publicity and press coverage of Soroush’s study has been ubiquitous since its publication in early March of 2018. As of the release of this episode of Parsing Science, the paper ranks in the top 5% of all research publications monitored by Altmetric, a company which tracks the online popularity of published research. Altmetric also lists the paper as the second-highest-scoring article ever published by the journal Science over its 138-year history. We asked Soroush about his experience of all the media attention the study has received.
Vosoughi: In terms of the general news, you know, as you said, it’s garnered so much attention that I can’t even keep track – a lot of these articles didn’t even talk to us; they just read the paper and wrote an article, or they’re quoting from some other article or something like that. So, yeah, it is really interesting to look at the coverage. We’re actually joking that, since the title of the paper is “The spread of true and false news online,” we should do a follow-on project called “The spread of ‘The spread of true and false news online’ online,” basically. And so, yeah, there have been some misunderstandings in the media, but I think overall – at least in academic circles – the message has been received, and the response has been mostly positive. There are already talks of collaborations and of moving forward with the behavioral intervention methods that I described. So I think the coverage, even though it had some negatives, has overall been good for raising awareness and has also created a lot of opportunities for collaboration, which I think is very valuable.
[ Back to topics ]
Ethical implications of algorithmic decision-making
Leigh: Often invisibly, computational algorithms mine and analyze our interactions with technology. While everything from speech recognition and computer vision to traffic navigation and medical diagnosis affords many benefits, these technologies can also expose us to privacy risks, and even threaten our civil liberties. Given the pace of their growth, we wanted to hear what implications Soroush predicts such technologies will have in our future.
Vosoughi: I think they definitely have a lot to offer, but I’m really, really worried about the ethical implications of using an algorithm for everything, mostly because nowadays most algorithms are black boxes, and you don’t really know why an algorithm made a decision. So I think any time an algorithm is used in a case like this, it should be really easy to ask the algorithm why it made that decision. You can’t just have an algorithm as a black box saying, “Oh, this is right, this is wrong,” or, “This guy’s guilty, this guy’s not,” without us being able to understand why it made that decision. And that’s a very active area of machine learning: you have, for example, a neural network – a deep learning system – that does something; can you get it to explain to you why it did what it did? That’s an area of machine learning which connects to this whole thing – getting systems to explain their decision-making process. So I think, yes, algorithms have a lot to offer, but they have to be very transparent and very easy to interpret. As you know, algorithms are never 100 percent accurate, and an algorithm’s performance really relies on the training data used to train it. So if there was any bias in that training data, the predictions of the algorithm might be biased. I think, yes, algorithms are useful, but there needs to be more transparency and better ways to understand exactly what is going on inside these algorithms.
[ Back to topics ]
Artificial Intelligence in governance
Watkins: The fields of social media, machine learning, and artificial intelligence are moving rapidly, making it a challenge to keep up with their advances. So we asked Soroush what unifying idea he thinks we should pay attention to.
Vosoughi: There’s a whole movement around what is called “AI in governance,” and I think it’s become somewhat inevitable that in the next few decades there’s going to be more and more use of machine learning and AI. Even now, as we speak, in certain states in the US the sentences given out by the courts are informed by machine learning algorithms, and there are going to be more and more of these algorithms used in different areas of governance. So there’s this whole new movement emerging which is basically a collaboration between machine learning scientists and computer scientists on one side, and people in policy, law, and public service on the other, to try to get both sides to understand each other. As these things start running society, we don’t want them to be black boxes where no one knows exactly how they work or why they’re making these decisions. We want transparency, and we want both sides of this equation – the policy makers and the computer scientists – to be aware of each other. So this whole movement of AI in governance is really shaping up to be a big movement in computer science and the social sciences, and something people might be interested in learning more about.
[ Back to topics ]
Links to article, bonus audio and other materials
Watkins: That was Soroush Vosoughi, discussing his article “The spread of true and false news online,” published in the journal Science. You’ll find a link to his paper on parsingscience.org, along with bonus content and other materials that he discussed during the show.
Leigh: Have you got a tip for Parsing Science? If so, we’d love to hear from you. Drop us a note at parsingscience.org/suggest … or leave us a toll-free message at 1-844-XPLORIT and we might feature your call in a future show. That’s 1-844-975-6748.
[ Back to topics ]
Preview of next episode
Watkins: Next time on Parsing Science, we’ll be joined by Philip Zimbardo, professor emeritus at Stanford University and lead investigator of the Stanford Prison Experiment. He’ll join us to talk about his current research into the heroism of defying unjust authority figures.
Zimbardo: It’s not heroic simply to refuse to do something wrong that someone tells you to do; it’s heroic to challenge that authority openly and publicly.
Leigh: We hope that you’ll join us again.
[ Back to topics ]