Is misinformation the biggest problem to tackle?
As harmful as it may be to the people directly affected and targeted, misinformation (and its prevalence) is a sign, not a cause, of bigger societal issues we are facing.
At a couple of recent conferences in Asia, I had occasion to discuss “an increasing threat” of mis- and disinformation to democratic systems with policymakers, academics, civil society activists, journalists, and educators.
I was a bit taken aback by the alarmist commentaries some people seem to genuinely believe, even though the actual impact and influence of mis/disinformation is extremely difficult, if not impossible, to pin down.
For years, many researchers have warned about the short-sightedness of narrowly focusing our attention on online falsehoods (a conversation the advent of generative AI has lately intensified) and, consequently, missing the bigger picture.
Misleading rhetoric, fallacies, one-sided narratives with cherry-picked or made-up facts, nonsensical squabbles, hateful scaremongering, smear campaigns, doctored ‘evidence,’ unsubstantiated rumours, voodoo medical treatments . . . whatever we see as “threatening” now has been with us for centuries.
After all, it was more than 300 years ago, in 1710, that Jonathan Swift, the author of Gulliver's Travels (itself a satire lampooning human nature; I majored in English and American Literature in college 🙂), wrote:
“Falsehood flies, and the Truth comes limping after it,” and “As the vilest writer hath his readers, so the greatest liar hath his believers.”
Key takeaways
Here are three main points that my colleagues and I made at those conferences:
Misinformation thrives only when the conditions for it to grow are already in place. The organic spread of false claims is often a symptom of polarisation, inequality, hate, mistrust, and other issues that already exist in our society. It reflects who we are, how we think, and what we do, not the other way around.
Disinformation campaigns conducted through CIBs (coordinated inauthentic behaviours) and ANs (adversarial networks) are quite visible, but again, their actual impact on public sentiment and opinion is hard to discern. Other factors could easily play much more significant roles: gerrymandering, for instance, may have far more impact on the outcome of an election than any disinformation effort.
Identifying and producing quality information is much more difficult than debunking misinformation. For example, it’s easy enough to investigate a claim that COVID-19 vaccines make people magnetic; it’s much more challenging to fully understand the possible side effects and potential risks of vaccination, which requires sufficient knowledge of medical science.
In social science, we can’t always have control and treatment groups. There is no parallel world where we can observe the same election without mis/disinformation and compare it with ours to reasonably estimate the real effect mis/disinformation has on us and our society.
But I do believe our conversations surrounding mis/disinformation still need to be more evidence-based. Moderation and intervention efforts may alleviate the symptoms, but if they don’t necessarily cure the root cause of the problems, shouldn’t we also discuss how to tackle that cause at the same time?
Reducing and removing hateful online comments is not the same as eradicating hateful thinking and the toxic culture we have in the offline world, right? The former cannot be achieved unless we deal with the latter.
In news literacy education, fact-checking skills are, of course, essential, and I love teaching them. But ultimately, we should go beyond that and allocate more effort and resources to teaching how to identify and develop quality content.
Especially in Asia, it is important to do this in our own languages, because the sources of internationally recognised, high-quality information are predominantly in English, a language most people in our part of the world don’t speak or understand.
ANNIE Connect on Nov. 17
Our next online ‘study group’ gathering will be held on 17 November 2023 at 10:30 a.m. HKT (UTC+8).
This month, we will have a guest speaker from Watchdog, a multidisciplinary team of journalists, researchers, and software engineers in Sri Lanka, tackling misinformation with a unique approach.
Their project manager, Rukshana Rizwie, will discuss how she draws on the works of Carl Sagan and brings in philosophical razors when thinking about misinformation. She will also demonstrate an AI-supported tool that Watchdog developed, which I believe many journalists and educators in this field will find interesting.
Are you new to our group and want to join the event? Everybody is welcome at our monthly get-together. Please register using the link below (if you haven’t done so yet, that is):
https://hku.zoom.us/meeting/register/tJ0rcuuvqzspGNBeY11cN6QZG8X6WIuVxhCO
As usual, registering via the link above also covers automated invitations to our future meetings. Note that we will skip our December gathering (it’s the holiday season), and we will probably return to our original time slot (14:30 HKT, third Friday of the month) starting from January 2024.