Hot Topics: Covid and social media misinformation
We begin our series on the key AI issues and technologies artists should be looking at today, with a timely focus on the dangers of social media misinformation. By Elena Simperl and Gefion Thuermer.
Coronavirus is one of the biggest threats to global health in modern history. Widespread misinformation about it, in combination with a general lack of data and scientific literacy among citizens, has led to demonstrations against protective measures, such as face masks and vaccinations. This in turn has increased the danger for vulnerable populations and exacerbated mistrust in institutions.
One aspect of this issue is the role of social media, which is widely perceived as a public community space but in reality serves individual interests. A study by Thomson Reuters found that 50% of people trust content posted on social media. Yet these platforms are designed for broadcasting views and quick reactions, not for dialogue and deep engagement. Recommendation and personalisation algorithms intensify the problem by confining online interactions to filter bubbles of like-minded people, effectively shielding users from opposing views.
As a society, just as we develop techniques to protect ourselves from the pandemic, we need innovative and inclusive strategies to address the complexity of the coronavirus crisis. We urgently need measures that help citizens understand data-based resources, as well as mechanisms that ensure scientific and technical rigour. Transparency about results and their provenance, together with critical thinking, is vital to this.
Data and AI can help us understand the scope and scale of this crisis, develop interventions to mitigate the impact and spread of the virus as well as misinformation about it, and evaluate the effectiveness of such interventions. Artists can act as mediators, amplifiers and sources of inspiration in this scenario. They can help both tech experts and citizens understand new approaches to how we think about, design and experience AI. And by being transparent about their own data sources and interpretations, they can convey the dangers of AI, and the potential solutions, too.