Facebook testimony highlighted the mental health dangers, but research suggests it matters a great deal how you present scientific data on a contentious issue like this.

By Sara Degli Esposti, CSIC

Are social media bad for your mental health? That's a question that dominated news headlines this week, following explosive Congressional testimony by a former Facebook manager alleging that the world's biggest social media company commissioned, and then covered up, research showing its products are harmful – especially to impressionable teenagers. Facebook tried to discredit the testimony, but as a group studying online misinformation, we know that this is a long-running, and much-disputed, area of social science.

For instance, in his 2018 book 'Ten Arguments for Deleting Your Social Media Accounts Right Now', Jaron Lanier, an American tech visionary best known for his early work developing virtual reality, argues that social media are making people angrier, more fearful, less empathetic, more isolated and more tribal – hence his conclusion that deleting your accounts is highly advisable.

Is it true? Are social media, which are supposed to bring us together, tearing us apart? Well before the dramatic Washington testimony, several published studies showed a small negative relationship between the use of social media and young people's mental wellbeing – but this effect has been debated. For instance, writing in the journal Nature in February 2020, Jonathan Haidt of New York University cited evidence of a correlation between rising social media use and rising mental health problems among teens. But another researcher in the same publication argued that there could also be positive effects. And separately, Cambridge University's Amy Orben has argued that a lot of the research in this field is "low quality" and that a more rigorous scientific approach is needed. Scientists, then, are (as usual) divided. So what do most people – citizens, rather than scientists – think about it?
To find out, last year we asked 7,120 people across seven European Union countries: France, Germany, Italy, Spain, Poland, Hungary and the Netherlands. The result was unambiguous: most people agree social media are harmful. But the story is more complicated than it might at first seem.

We presented the respondents – diverse in age, gender and socioeconomic status – with a statement that "several studies show" a correlation between screen time and poor mental health. In all, 83% of respondents agreed there is a problem, and 57% said they would consider deleting some of their social media accounts. But then we offered them the chance to check the assertion for themselves – in short, to form a more informed opinion. Surprisingly, given how cynical politicians can be about citizen engagement, 77% said they would indeed like to check the claim further. Among those who did so (we made it fast and easy for them), 86% said they'd like to know even more about the effects of social media. And after checking, the share of people who wanted to delete their social media accounts actually rose by 5 percentage points, to 62%. In short, with more information, the social media problem appeared to them even worse than they had first thought.

Figure 1: Survey results from WP3 activities of the TRESCA project (see also the work plan for WP3 for more information)

So it matters how you present this kind of contentious social science data to people. More importantly, it shows that many people are willing, if given the chance, to look at the facts for themselves. Interestingly, there were some pretty clear national differences in this show-me attitude. Italians were the most likely to want to see the data for themselves. Germans were the least interested, but also the most trusting of institutions. Why the difference?
History, culture, media, politics – all kinds of factors differ from one country to the next and affect how people view contentious data. Our research into social media and misinformation is continuing, and we will be reporting more findings from our surveys and other activities. The work is part of the TRESCA Horizon 2020 project coordinated by Erasmus University.

In fact, we are developing an app, Ms.W. (the Misinformation Widget), to help people check for themselves the veracity of claims about science that they see online. It responds to a well-documented problem of "cloaked science": false claims dressed up to look like science. In the pandemic, we have all seen how damaging this can be if left unchecked: thousands of people have died unnecessarily because they believed false claims about vaccines, treatments, masks or social distancing.

A widget like this won't solve the problem of social media and teenage mental health. But it can gradually help more people realise they shouldn't believe everything they read about science on social media.
By Adolfo Antón Bravo, David Arroyo and Sara Degli Esposti, CSIC

The COVID-19 pandemic has made clear the risks of online misinformation. Are the vaccines safe? Should we keep wearing masks? On such questions, social media has become a minefield of wrong, misleading or false information – with real-world consequences. Surely there's a way that we, when online, can more easily check the trustworthiness of a post?

That's the aim of Ms.W, the Misinformation Widget being developed by the EU-funded TRESCA project. It is a piece of software that can run as a phone app, to help judge whether what you are reading is reliable or not. In short, it's an online vaccine against today's online "infodemic."

An infodemic, according to the World Health Organisation, is "an overabundance of information – some accurate and some not – that makes it hard for people to find trustworthy sources and reliable guidance when they need it". It can include misinformation: postings that are wrong, perhaps by accident, for many reasons. In posts about science, the technical vocabulary can be misunderstood or misinterpreted. The source's expertise, reputation or reliability aren't always easy to check. And misinformation frequently overlaps with disinformation – deliberately false or misleading information posted for political or economic gain. There are plenty of examples. In early 2020, an online hoax about the benefits of bleach-based alcohol for use against COVID-19 led to the hospitalisation of hundreds.

The challenge of separating fact from "faction" gets even harder when dealing not just with text, but also with "memes." Though the term originated in evolutionary biology, most commonly today a meme is an image, video, or piece of text that is copied and spread rapidly by Internet users, often with slight variations. They are often funny; they catch your attention, and you forward them to your friends.
A classic example: memes about next-generation 5G mobile phone systems – some harmless, some harmful. The latter (see Figure 1 for an example) helped spread unfounded fears that 5G transmission towers were somehow connected with COVID-19. (Spoiler alert: they aren't.)

Figure 1: A typical online meme – in this case, making fun of false claims that 5G mobile phone towers somehow spread COVID-19. (Surprise: they don't.) Source: https://www.reddit.com/r/meirl/comments/fxszo6/meirl/

Another problem: images or text can easily be taken out of context online. According to Newtrals.com, a fact-checking news site, a WhatsApp text circulated widely claiming that in India only people who were vaccinated were getting COVID. It was based on an interview with a doctor in New Delhi – but taken entirely out of context. Why? To spread panic? To attract attention? Or was it part of something more nefarious?

How to deal with this? The answer isn't simple. Detecting, fact-checking and taking down dodgy content is harder than it might seem. There are already many online services that offer fact-checking help, in one way or another. But there are not always clear-cut lines separating good-faith information, opinions, fake news, disinformation, and incomplete or misleading information. Thus, there is a need for trustworthy oversight mechanisms that incorporate algorithmic fairness and accountability principles.

The Misinformation Widget, or Ms.W, is a science-communication tool being developed by TRESCA, a Horizon 2020 project on scientific disinformation. Ms.W is both a methodology and a toolkit. It is a methodology in the sense that it indicates how to use a variety of online resources. It integrates a number of existing services that help users perform different tasks associated with content verification: news gathering, fact checking, blacklisting of fake sites, reverse image search, and more.
It will also include new algorithms drawn from current research on applying machine learning to guide information curation and combat disinformation. It will help users track online news stories and their authors, and will help them evaluate the accuracy, credibility and trustworthiness of written and visual content. Ms.W can be used as a web service or a mobile app.

Ms.W will be available in early 2022, as part of TRESCA's free online course about scientific disinformation. Now in production, the TRESCA course, to be offered on Coursera, will be a seven-week set of lectures and supporting documents, aimed at researchers, communicators and journalists, exploring how to communicate trustworthy knowledge in the digital world. Stay tuned to join us there online – and stay away from COVID towers.
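For readers curious about what one of those content-verification tasks looks like in practice, here is a minimal sketch of a domain-blacklist check – the simplest of the services a widget like Ms.W integrates. The blacklist entries and function name below are made up for illustration; this is not Ms.W's actual code.

```python
from urllib.parse import urlparse

# Hypothetical blacklist of known fake-news domains (illustrative only).
BLACKLIST = {"totally-real-news.example", "5g-truth.example"}

def is_blacklisted(url: str) -> bool:
    """Return True if the URL's host, or any parent domain, is blacklisted."""
    host = urlparse(url).hostname or ""
    # Build every suffix of the host, e.g. "news.foo.example" yields
    # {"news.foo.example", "foo.example", "example"}, so subdomains match too.
    parts = host.split(".")
    candidates = {".".join(parts[i:]) for i in range(len(parts))}
    return bool(candidates & BLACKLIST)

print(is_blacklisted("https://totally-real-news.example/story"))  # True
print(is_blacklisted("https://www.who.int/infodemic"))            # False
```

A real service would, of course, maintain the blacklist from curated fact-checking sources and combine the result with other signals such as reverse image search and claim matching.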
Evidence from the TRESCA Citizen Science Communication Workshops, an experiment on pre- and post-audio perception of video-mediated scientific communication.

By Chiara Lovati and Giuseppe Pellegrini, Observa Science in Society

Citizens find it increasingly difficult to choose among the growing number of science communication sources and to discern trustworthy, reliable information from fake news. But there are some elements that science communicators can pay attention to in order to ensure that their message gets across as effectively as possible.

In a series of science communication workshops organised by TRESCA in December 2020 in Austria, Italy and the Netherlands, researchers were able to test how ordinary citizens perceive science communication. The methodological design made it possible to explore the effects of visual and audio elements on news trustworthiness and emotional response, and to identify strategies that can make science communication more effective in these times of uncertainty.

Figure 1: A segment of TRESCA workshop participants from Italy

Preparation for the workshops included steps such as defining an engagement strategy for citizens, training the moderators, and selecting the two videos to be shown to participants. In this last step, two very different videos were chosen. The first was a fast-paced, chaotic, YouTube-style video aimed at debunking COVID-19-related conspiracy theories, while the second was a journalistic report, in newscast style, on the stories of former COVID-19 patients.

Each session began with an introductory speech about the TRESCA project and the day's activity; participants were then invited to join breakout rooms to continue the meeting in small groups. There, citizens viewed the two videos without audio and had a first round of discussion on their impressions and emotional responses.
Then, after a short break, everyone watched the two videos again with the original audio and subtitles in the respective languages. Participants debated the credibility of the information communicated, the roles and responsibilities of the different people appearing in the videos, and the impact of their emotions on how the message was being received. Comparing participants' reactions to the videos with and without sound helped the TRESCA team focus on the impact of images and words on participants' perceptions of the science communication videos.

Figure 2: TRESCA workshop design

Researchers were able to draw some interesting conclusions from the experiment. Firstly, it is important to develop a communication style suited to different types of audiences, and to use emotional language with caution. Science communicators should avoid extreme or contradictory positions, as well as excessive simplifications, when presenting topics, to prevent viewers from distrusting the communicator or communication channel and therefore rejecting the content. Secondly, to create successful forms of engagement with the public, it is better to avoid polarisation and disputes where possible. The workshop data also suggest the public is able to judge and evaluate information; science communicators should therefore avoid any appearance of superiority or arrogance, and steer clear of a 'deficit model' approach when explaining scientific findings to citizens.

Figure 3: PCMAG.com, "No, 5G Is Not Causing Coronavirus (or Anything Else)", still from Workshop video #1

Trust in science and scientific institutions is not as straightforward as one may think. Trust is the result of a process rooted in previous experiences and attitudes, and it cannot be changed very easily. It is crucial to create spaces for people to freely build their own knowledge and opinions.
Furthermore, context varies greatly across languages and cultures, which can cause unintended references and issues to arise in some communications; this should also be kept in mind when transmitting scientific news.

Figure 4: Photo by Matheus Bertelli, from Pexels

There are no easy solutions to avoiding disinformation and the dissemination of fake news. However, fact-checking apps and platforms could be a powerful tool in the fight against misinformation. Nevertheless, these do not yet seem to be popular with the public: sometimes because of an accessibility issue, perhaps cultural or social, and sometimes because they are simply ignored. Indeed, for the time being, these tools are not particularly practical, but more research is being conducted – especially in the TRESCA project – to overcome these issues.

Figure 5: Photo by Pixabay, via Pexels

Effective scientific communication requires a time-consuming group effort by researchers, scientists and communicators to consolidate best practices and shared knowledge. Nonetheless, it can certainly be mastered. In summary, data from the workshops highlighted how important it is to avoid extremes when discussing scientific findings – especially highly emotional language, excessive simplifications or contradictory positions. The workshops also demonstrated the importance of maintaining contact with the public, whose point of view is very useful for developing better communication. It is also crucial to steer clear of patronising language or an attitude of superiority, while always trying to create spaces for members of the public to freely build their knowledge and opinions.
By Asher van der Schelde, Marina Tulin, and Jay Lee, Erasmus University Rotterdam

How do you stay informed about what is happening in the world? Chances are social media play a crucial role. This is not entirely surprising, as science communication increasingly occurs via platforms such as YouTube and Twitter. Nonetheless, we know little about how viewers perceive, trust, or judge this type of science communication. In this study, we tried to bridge this knowledge gap by running a survey experiment using a science communication video by the popular animation studio Kurzgesagt (In a Nutshell).

The experiment focused on climate change. Although the vast majority of scientists agree that humans have a (lasting) impact on the climate, a fraction of the population does not believe in the existence of the phenomenon, or in its man-made cause. As a consequence, science communication experts face the critical, yet difficult, task of increasing public understanding and stimulating engagement within this group of skeptics. The Kurzgesagt video we used is called "Who Is Responsible For Climate Change? – Who Needs To Fix It?". As the title suggests, the video revolves around climate change and which countries should take responsibility for countering this worrying development.

Figure 1: Screenshot from the Kurzgesagt video "Who Is Responsible For Climate Change? – Who Needs To Fix It?"

We split the video into three chapters, to each of which we made several manipulations (e.g., the gender of the narrator, textual changes). After watching one chapter, respondents answered how they perceived the video in terms of trustworthiness, reliability, engagement, and entertainment. This allowed us to test whether small changes in the video affect overall perceptions of it. Respondents also answered how they perceived the narrator and the overall production, and what they thought the primary aim of the video was, which produced interesting insights as well.
Positive perceptions

First, it is important to mention that the video was perceived very positively. Respondents deemed the video trustworthy, reliable, engaging and entertaining (Figure 2). They were also positive towards both the female and the male narrator. The narrators were perceived to be similar in terms of warmth and trustworthiness, but the male narrator was perceived to be more competent (Figure 3). This result may reflect an underlying gender bias and would require further analysis to disentangle cultural and individual effects.

Figure 2: Video perceptions. 1 = strongly disagree, 5 = strongly agree (with 95% CI).

Figure 3: Narrator perceptions by gender (F = female, M = male). 1 = strongly disagree, 5 = strongly agree (with 95% CI).

Trustworthiness and entertainment go hand in hand

Science communicators can be hesitant to communicate insights in an entertaining way, fearing this might harm the trustworthiness of the message. Our analysis indicates this fear is not grounded in reality. Respondents who perceived the video to be entertaining were more likely to say the video was also trustworthy, reliable and engaging. As a consequence, quality and entertainment should not be treated as a trade-off. On the contrary: they go hand in hand. All correlations between the video and narrator perceptions are shown in Table 1.

Table 1: Correlations between video and narrator perceptions

Perceived aim matters

One of the most striking findings is the effect of perceived aim on the overall perception of the video. When respondents believed the aim of the video was to blame, they perceived the video to be less trustworthy. Some of these respondents commented that the video was 'quite brain washing' or 'seemed to be aimed at children'. Science communicators should therefore make sure their message does not come across this way. Conversely, respondents who indicated that the video aimed to inform rated it higher on trustworthiness than those who did not.
The same goes for the aim of changing behaviour – a remarkable finding, as it suggests that science communicators do not have to obscure their beliefs if they feel strongly about a certain topic and want to make a change. The significant effects of perceived aim on trustworthiness are displayed in Figure 4.

Figure 4: Perceived trustworthiness by perceived aim. 1 = strongly disagree, 5 = strongly agree (with 95% CI).

Production value serves as a proxy for overall perception

When production value was perceived to be high, the video and narrator were perceived more positively. By perceived production value we mean the extent to which viewers believe the video is of high technical quality: resolution, professional voice-over, recording quality, sound design, detail of illustrations, or smoothness of animation. Perceived production value possibly serves as a heuristic that viewers use as a proxy for their overall perception of the video: viewers project good intentions onto the creators and give them the benefit of the doubt.

Figure 5: Perceived trustworthiness, engagement and entertainment value separated by perceived production value. 1 = strongly disagree, 5 = strongly agree (with 95% CI).

Effects of manipulations

We incorporated several manipulations to see whether they would affect perceptions of the video. Through a check in the survey, we found that a substantial share of participants had not noticed or did not remember the manipulations. In hindsight, this is unsurprising, as most manipulations were made in the narration, while humans tend to focus more on visual information. A separate analysis that included only respondents who did notice and remember the manipulation yielded more significant findings.
For example, participants who remembered that negative consequences of climate change were mentioned for Europe (instead of worldwide) were more engaged with the video (leaving a comment about the video was coded as engagement). Since all respondents reside in the UK, this illustrates the human tendency to be more concerned by local developments. Similarly, engagement was greater among respondents who watched (and remembered) the version containing a fearful message than among those who received a hopeful message.

Figure 6: Differences in engagement by remembered manipulation. X-axis corresponds to the proportion of respondents in each group who engaged (i.e., left a comment) with the video (with 95% CI).

Besides greater engagement, the 'fearful message' also resulted in a decrease in the narrator's perceived warmth. Thus, ending on a positive note results in a warmer perception of the narrator. This difference is depicted in Figure 7.

Figure 7: Differences in perceived warmth by message. 1 = strongly disagree, 5 = strongly agree (with 95% CI).

Communicate uncertainty!

Science is often uncertain. Science communicators might fear, however, that communicating these uncertainties will confuse people. We did not find significant negative effects of using uncertain terms (e.g., 'approximately', or presenting numbers in ranges) on the video's perceived trustworthiness, reliability, engagement, and entertainment. This indicates that science communicators can be honest about presenting uncertain findings.

Audience matters

Finally, we found that science communicators should be well aware of whom they are targeting, as climate change attitudes affect how the videos are perceived. For example, the inclusion of prominent sources enhances engagement, but this effect is muted for those who 'believe' more in climate change. This leads us to believe that this group is not fond of "over-the-top" stimuli.
Similarly, compared with believers, climate change deniers are more likely to perceive the video as less trustworthy when the narrator uses uncertain terms. In sum, the public tends to be informed (sometimes falsely) about many scientific topics, and people have often already formed their opinions. Consequently, science communicators need to be aware that they are reaching out to an already informed, and critical, audience.

The next TRESCA objective is to produce a new video with Kurzgesagt on the history of rationality and different aspects of human reasoning. The valuable insights gained in this experiment will be used to improve the video. Keep an eye on the TRESCA website and the YouTube channel of Kurzgesagt – In a Nutshell for the video release (expected in September 2021).
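A methodological footnote for curious readers: the error bars reported throughout (e.g., in the engagement results of Figure 6) are 95% confidence intervals. For a proportion such as "share of respondents who left a comment", such an interval can be computed with the standard normal approximation. The sketch below uses made-up counts, not the study's actual data:

```python
import math

def proportion_ci(successes, n, z=1.96):
    """Normal-approximation (Wald) 95% CI for a proportion."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)  # standard error of the proportion
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

# Hypothetical counts (illustrative only): 48 of 200 respondents
# in one condition left a comment, i.e. "engaged" with the video.
p, lo, hi = proportion_ci(48, 200)
print(f"{p:.2f} [{lo:.2f}, {hi:.2f}]")  # 0.24 [0.18, 0.30]
```

Non-overlapping intervals of this kind are what justify reading a difference between two conditions as statistically meaningful.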