
Scientific misinformation: A perfect storm, missteps, and moving forward

        The spread of scientific misinformation is not new but rather has long posed threats to human health, environmental well-being, and the creation of a sustainable and equitable future. However, with the COVID-19 pandemic, the need to develop strategies to counteract scientific misinformation has taken on an acute urgency. Cell editor Nicole Neuman sat down with Walter Quattrociocchi and Dietram Scheufele to gain insights on how we got here and what does—and does not—work to fight the spread of scientific misinformation. Excerpts from this conversation, edited for clarity and length, are presented below, and the full conversation is available with the article online.
        (L to R) Walter Quattrociocchi, Sapienza Università di Roma, Nicole Neuman, Cell, and Dietram Scheufele, University of Wisconsin-Madison

        Main Text

        Nicole Neuman: Why don’t we start with a simple question about common terminologies. When we say misinformation, what do we really mean by that? Is there a distinction between this and disinformation?
        Walter Quattrociocchi: When we talk about misinformation or disinformation, we are really just marking intentionality. Misinformation may be an error; disinformation is a deliberate engineering of the information-spreading process. However, both are side effects of a change in the business model of the information cycle. Until 30 years ago, information was selected by experts and journalists, who set the agenda of discussion. Experts still select information, but now people, through social media, also select the information they like the most. The second element that broke the old business model is that social media are not designed for the dissemination of complex content. They are made for entertainment.
        NN: In the case of COVID-19–related misinformation, what role did the massive explosion of scientific information play in the dissemination of scientific misinformation?
        WQ: COVID-19 is the perfect storm for studying misinformation. Our capacity to process all the information does not match the demand for it. The pandemic created a situation in which we are really eager to have information about what is going on, because it's scary, and this need for information drove an increase in the authoring of content.
        The second issue is the dynamics of science, in which there is hypothesis verification. The scientific method is ongoing, and so with COVID-19 we have a kind of reality show of science. It wasn't clear to everyone that scientists are just humans trying to understand something, or that a good paper could be proven wrong in the long run, because that is how the scientific method works. And so these two layers basically collided: the scientific process, which was unfamiliar to the audience, caused distrust because it seemed like science was not giving correct information.
        Then the third ingredient is that institutional communication—at least in Italy, in Europe—was not so good. Managing the communication of science is so critical that the World Health Organization coined the term “infodemic” to describe the overabundance of information all over the world.
        NN: One thing about misinformation that I think is very frustrating to many people is that it’s very tenacious. Even when confronted with many facts to the contrary, misinformation seems to keep coming back. What’s your take on why it persists?
        WQ: We did an experiment in 2015, measuring how conspiracy users respond to debunking information, and we found that they ignore dissenting information. The real anatomy of social dynamics online is that we seek information adhering to our system of beliefs. We seek a group of like-minded peers in which we frame our narrative and our tribe. It is an echo chamber, and it’s really difficult to inject information into an echo chamber, because the reaction to dissenting information is to ignore it, to push it out, or to push back against it.
        This is the process behind what we are seeing right now. It’s frustrating, yes, but we are human beings, and each of us has our own biases.
        NN: If confronting people with facts that are contrary to their worldview and their beliefs doesn’t really work, what approaches may work? Do you have any optimism about making progress in fighting misinformation?
        WQ: There are a lot of initiatives that are working. The most effective ones are trying to explain to users how our brain works when we are processing information. Creating awareness of our biases, awareness of echo chambers. There is even a good effect from this change in the business model of information—that we can access a huge amount of information. We have to learn to deal with that, and it’s a work in progress. Foolish people will be around forever, but I think that the majority of people are just adapting to these new systems, to these new environments.
        NN: Dietram, are you optimistic about making progress in combating misinformation? And if so, why and how do you think that we need to be approaching this?
        Dietram Scheufele: I’ll backtrack just a little bit. We’ve had lots of conspiracy theories going around, but spreading them has never been as economically viable as it is in a social media environment. The economic incentives for a social media platform to feed this economy of outrage are at a level where we’ve never seen them before. There’s a complete disincentive to dial down the outrage and to focus on the things that we should know, rather than the things that we want to hear. That’s both good and bad, because it’s not a new problem, but the perfect storm is there. There’s a machinery of people who produce this stuff, and in the US, we of course managed to put one of the biggest producers of misinformation at the center.
        It’s reasonable for citizens to turn to their President and say, “Well, that person is telling me the truth.” In normal times, that’s just me looking for the best available information. We were in a situation where somebody in charge of a democratically elected government intentionally told you things that were wrong. So one of the pernicious elements of the perfect storm that we’re in right now is that we also have governments in Brazil and the US, and also the UK a little bit, that have contributed to the problem.
        For COVID in particular, the issue is confounded by the fact that we knew from the beginning that we would produce research that would not replicate. We knew that we would go down dead ends, and we knew we would do this really fast and in the public eye, more so than ever before. Every misstep would be seen by people who normally don’t pay attention. And I think we made two mistakes there. Mistake number one is we said, “That’s just how science always works.” And that’s incorrect: science doesn’t retract papers from high-profile journals when the science has simply moved on. We retract studies if they shouldn’t have been published in the first place. And we did a whole bunch of that very visibly, and we paid for it with political commentators saying, “If they ever tell you to trust science again, look at all these retractions.” That was our big first misstep, that we said this is just science as it normally works, and it wasn’t. This was science under intense pressure to produce results. And I think the second mistake is that we wanted to correct misinformation that we knew to be wrong with science that we weren’t quite sure yet would turn out to be right. Very often, we didn’t say that this is where the science is right now and that it will develop.
        The upside of that is we learned as much about how to navigate a political information environment as we learned about the virus and vaccines. That’s really important because the vaccines themselves won’t stop COVID; people getting vaccinated will stop COVID.
        That means half the challenge is to navigate this new information environment successfully, where we persuade people to do what’s best, not just for them, but for everyone around them. I’m actually hopeful that we’ve learned a bunch from this, not just in bench science but in social science. If anything, it has pushed social science to the forefront of thinking in the bench sciences as well. The last time we had a moment like this was, I think, climate change, when we really realized that we weren’t doing a very good job and that wide gaps were opening up between people living in different realities. This was another one of those moments, but with a lot more urgency.
        NN: Dietram, you just mentioned that we’re going to have a challenge with getting people vaccinated. Vaccine skepticism and misinformation around vaccines have been around for quite a while now, and despite a lot of really great efforts, they persist. How are we going to take what we’ve learned and do things differently to get people to take a SARS-CoV-2 vaccine?
        DS: There are a couple of things that are really important. One is a cautionary note about correcting misinformation. One of the things that has come up again and again is people saying, “Well, mRNA vaccine platforms will change your DNA, and it’s really dangerous; you shouldn’t take them.” Our response to that has been, “No, don’t worry, it won’t change your DNA.” But what we’re implicitly saying is that changing somebody’s DNA is a bad thing. This year, we gave a Nobel Prize in chemistry for breakthroughs in CRISPR that do that very thing, changing people’s genetic makeup, in order to treat sickle cell disease, Tay-Sachs disease, and so on. So we’re framing changing DNA as something that’s bad, except that capability is crucially important for therapies moving forward. There are unintended consequences of some of these corrections.
        Number two, we know from social science that a lot of what we do has zero to do with information. In fact, it’s the opposite. I know that I’m being manipulated by an industry to change my behavior all the time. I know it’s stupid to do it, and I still do it. Why? Because everybody else does. We know that people get solar panels on their roofs not because they understand renewable energies, but because their neighbors got them. So this idea of modeling behavior, this idea of establishing a social norm is a really important one.
        Then the last thing is that in the US for the first time, we’re seeing from survey data that ideology strongly predicts vaccine hesitancy. Normally, there are other factors and ideology is not as strong, but right now ideology is a strong predictor, which puts this right back into politics. This goes back to what we said earlier. Normally, we don’t have the executive branch of the government producing misinformation. That means that in order to get over vaccine hesitancy, scientists will have to play in the political realm, like it or not. That is tricky, because how can you be successful in the political realm without being partisan? That’s going to be the interesting part. We’re seeing some of the same things playing out in some European countries with the ultraconservative movement—the AfD in Germany, for instance—also aligning with anti-vaccine rhetoric and anti-climate rhetoric. All of a sudden, we’re layering a political map on top of what the best available science tells us, and that forces the scientific community into politics. That’s a game that’s played with very different rules than what we’re used to from our labs and our faculty meetings.
        NN: A question that’s on a lot of scientists’ minds is: should scientists become political, should scientists get involved? There’s a lot of mixed feelings within the scientific community about this.
        DS: Scientists will need to make sure that their science informs politics and informs political choices. That’s complicated in two ways. One is that scientists very often believe that policy should be determined by science, and it never has been, and it never will be.
        We drive faster than we should. We know that if we lowered the speed limit by 10 miles an hour, we could literally save tens of thousands of lives. But we don’t, because we take science and we take a lot of other considerations, and we integrate those into a larger set of decisions that are informed by the best available science, but not determined by it. For us as scientists, that’s very often hard to grasp, because we think we know what the best outcomes are. For vaccines, there’s a really good parallel: scientists can tell us what the likelihood is of an epidemic if X percent do or don’t get vaccinated. That’s a scientific question. They cannot answer the question of whether that means we should force every parent to vaccinate their child before they send them to a childcare facility. That’s a political question because it clearly infringes on some individual rights. So yes, science needs to be political, but it needs to understand that it’s one of many stakeholders that try to influence political decisions. And that’s not bad; that’s exactly how it should be.
        The second thing is the partisan part, and this is where science missteps routinely: we feel that the policy choices that we favor are scientific and not partisan. But that’s not always true. We did a study when nanotechnology first emerged and there were a lot of questions, and we surveyed the leading scientists. When we asked what predicts their attitudes on regulation, after everything else is controlled for, ideology is still a significant predictor, with conservatives favoring less regulation and liberals favoring more regulation. This is because scientists are citizens like everybody else. So the tricky part about playing in politics is that it’s really hard to extract yourself from your own politics. The moment science is seen as a partisan actor, and not as a neutral arbiter of the best available information that we as a society can produce, we’re in big trouble.
        The moment that happens, we have a crack in the Enlightenment. That’s why I would always urge scientists to steer away from partisanship.
        WQ: I totally agree with you. And the level of education of policy makers could sometimes also be better. Right now, we’re passing some analysis to the Ministry of Innovation in Italy, and they have to contextualize the information, which is really difficult because there are two different languages: they want something that is useful for immediate action, while the scientific perspective is complex. That level of complexity produces distrust, because they are not able to grasp what is happening. So, science has to inform policy makers, but the final decision is political. For this to happen, though, we have to set up a common ontology. Otherwise, we are not talking the same language.
        DS: I totally agree. For the disciplines that I work in, a little bit of responsibility also comes our way: when we say our data are not being used, or are not being used in the right way, part of that is because we don’t curate our scientific findings in a way that makes it easy for policy makers to access them. Hopefully, we’re learning from COVID how to better utilize the social, behavioral, and economic sciences.
        In the US, the National Science Foundation funded, about a year ago, what they call the Societal Experts Action Network, which brings together social scientists, economists, and others. Over the course of COVID, they wrote short reports that can be used by various stakeholders, including policy makers. One of them was actually about how to read data and how to make sense of large trend data and statistics, almost a primer on COVID data for policy makers, because they identified exactly the problem that you’re describing.
        Whether, and how, that will be effective, we will see down the road. I thought this was an interesting experiment at just the right time, and I’m not involved in it, so I can say it’s awesome.
        NN: It’s interesting that you bring up data curation. Within the scientific community, there’s been a push toward making more data available sooner and with fewer hurdles, and letting the scientific community, and even to a certain extent the broader public, self-curate it rather than relying on the traditional gatekeepers of journal editors and reviewers. These things are a bit in conflict, because, for instance, we’re seeing news reporters pick up on research before peer review, publish stories on it as though it were fact, and then that becomes part of general public knowledge. There’s value in getting more data out more quickly, but at the same time, that push for less curation is causing some problems. How do we resolve this?
        WQ: The quality of the content is really heterogeneous, and information overload is a real problem. Curation is very important for decision-making, and I’m really happy to know something is happening in the US in this direction. In Italy, we are pretty far from that. Still, information overload is a problem because data require interpretation. I get scared when a journalist writes up data from my research, because interpreting it is difficult. Curation implies a collaboration between academics, journalists, and policy makers. We have to find a way to create a common language.
        DS: And I think a good illustration of that is actually the Obama administration in the US, which has often been lauded as one of the most transparent administrations because they did gigantic data dumps that anybody could use. But what happened is that nobody ended up using them because nobody was qualified to actually go through them. Even a lot of the high-end journalists for the New York Times who do data journalism just barely scratched the surface. Nobody was ready to make sense of these data. So just making data available very often doesn’t mean anything because without those collaborations that Walter mentioned, there’s just no meaningful narrative that comes out.
        There are a couple of other things that are tricky. One is that the values that we hold dear in science, like open science and transparency, all of a sudden conflict with a world that doesn’t think the same way. They don’t see a preprint as a preliminary product that’s still undergoing vetting; they see it as a product that is now available. And more often than not, you have teams of researchers who give in to the temptation to talk about their preprints and not make that distinction when they get the call from the New York Times or the Washington Post.
        In the US right now, we’re seeing that whole transparency movement being pushed to the extreme. At the tail end of the Trump administration, we saw EPA rules being implemented that mandated that regulations can only be based on data that are totally transparent and open for everybody to look at. The problem is that most of the data we use for toxicology and other things are based on very finite samples, where we know who all the respondents are, because they’re the ones who are really affected by this. By definition, we cannot make these data public because we would reveal who has been affected. So the data needed for regulation are data that cannot be shared. By saying we need to put more data out so that everybody can make sense of it, we’re leaving ourselves open to some of those vulnerabilities.
        The last thing I’ll say is that we now require all of our master’s journalism students to take statistics. That doesn’t take away from Walter’s point that the ideal scenario is working with data scientists to make sense of these data, but in order to even ask a data scientist meaningful questions, you need to understand at least the basics of statistics and computational work. It’s almost like saying that if you cover politics and you don’t understand the basic rules of how Congress gets elected, you’re probably not going to cover it right. The same thing is true for data.
        NN: One theme from this discussion is the need to break down silos and to have data scientists talking to social scientists, talking to clinical and bench researchers, talking to politicians, and all of these people engaging in conversation. For any Cell readers who might be part of these different disciplines and interested in helping to combat misinformation, and scientific misinformation in particular, what is your advice on how to approach this? How do we break down silos?
        WQ: That’s a million-dollar question. One answer for sure is to be curious; curiosity has to be the driver. The second is that, to break out of our echo chambers, we have to learn to talk with people, because most of the time when we have discussions with other fields, we are not speaking the same language. I totally agree that data science has to be one of the fundamental pillars of scientific understanding. Right now, there is no way to avoid data science in decision-making. Otherwise, it is just regulation.
        DS: Data science already is becoming a pillar—especially for fields like biology. For example, look at how we study the interplay of environmental factors in genomics: we’re taking cell phone data, location data, and respondent data and matching them up, over long periods of time and large numbers of respondents, with DNA data to see how particular genes or combinations of genes interact with environmental factors. That’s inherently a data science problem. Or look at the medical field and what DeepMind and those places are doing: Google has an algorithm that can take a picture of a human retina and predict the gender, which no human doctor can, and we don’t yet fully understand what the algorithm is actually using to make that prediction. It shows both the potential and the importance of collaborations.
        I really like Walter’s comment about echo chambers, because in academia we think we’re immune to those things. But of course, with our disciplines, we’ve done exactly that. We’ve created the perfect echo chambers, and we’ve specialized them more and more, so that people work in different hallways of the same building but don’t always speak the same language. So I think we’re now seeing the integration of toolkits from statistics and data science.
        For example, in genetics, people are now talking about principal component analysis as a new tool, and the social sciences were using it in the ’70s. That doesn’t mean the social sciences are more advanced; it simply means we have so much to learn from one another. What we have are problems that need solving—not disciplines that need building. Once we figure that out, that disciplines are a means to an end, then we’re really ready for whatever post-COVID challenges there are.
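        For readers less familiar with the technique Scheufele mentions, the following is a minimal sketch of principal component analysis applied to a synthetic, genotype-style matrix using numpy and scikit-learn. The data, dimensions, and coding are purely illustrative assumptions, not drawn from any study discussed above; in genetics this kind of projection is commonly used to summarize population structure, and in the social sciences the same mathematics underlies classic dimension-reduction work.

# Illustrative sketch only: PCA on a synthetic genotype-style matrix.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# 100 hypothetical individuals x 500 variants, coded as allele counts (0, 1, or 2).
genotypes = rng.integers(0, 3, size=(100, 500)).astype(float)

# Project each individual onto the top two principal components.
# (scikit-learn's PCA centers the columns internally.)
pca = PCA(n_components=2)
coords = pca.fit_transform(genotypes)

print("Variance explained by PC1 and PC2:", pca.explained_variance_ratio_)
print("Coordinates of the first individual:", coords[0])

        With real data, individuals with similar ancestry (or, in a survey setting, similar response patterns) would cluster together in the resulting two-dimensional plot; with the random data used here, no structure is expected.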

        Supplemental information