As COVID-19 continues to spread, so does misinformation about it

A ward for COVID-19 patients at Elmhurst Hospital in Queens on May 8, 2020. Doctors are exasperated by the persistence of false and misleading claims about the coronavirus. (File photo: NYTimes)
Nearly three years into the pandemic, COVID-19 remains stubbornly persistent. So, too, does misinformation about the virus.

As COVID cases, hospitalizations, and deaths rise in some places globally, myths and misleading narratives continue to evolve and spread, exasperating overburdened doctors and evading content moderators.

What began in 2020 as rumors that cast doubt on the existence or seriousness of COVID quickly evolved into often outlandish claims about dangerous technology lurking in masks and supposed miracle cures from unproven drugs, like ivermectin. Last year’s vaccine rollout fueled another wave of unfounded alarm. Now, in addition to all the claims still being bandied about, there are conspiracy theories about the long-term effects of the treatments, researchers say.

These ideas still thrive on social media platforms, and the constant barrage, now a years-long accumulation, has made it increasingly difficult for accurate advice to break through, misinformation researchers say. That leaves people already suffering from pandemic fatigue further inured to COVID’s continuing dangers and susceptible to other harmful medical content.

“It’s easy to forget that health misinformation, including about COVID, can still contribute to people not getting vaccinated or creating stigmas,” said Megan Marrelli, editorial director of Meedan, a nonprofit focused on digital literacy and information access. “We know for a fact that health misinformation contributes to the spread of real-world disease.”

Twitter is of particular concern for researchers. The company recently gutted the teams responsible for keeping dangerous or inaccurate material in check on the platform, stopped enforcing its COVID misinformation policy and began basing some content moderation decisions on public polls posted by its new owner and chief executive, billionaire Elon Musk.

From November 1 to December 5, 2022, Australian researchers collected more than half a million conspiratorial and misleading English-language tweets about COVID, using terms such as “deep state”, “hoax”, and “bioweapon”. The tweets drew more than 1.6 million likes and 580,000 retweets.

The researchers said the volume of toxic material surged in late November with the release of a film that included baseless claims that COVID vaccines set off “the greatest orchestrated die-off in the history of the world.”

Naomi Smith, a sociologist at Federation University Australia who helped conduct the research with Timothy Graham, a digital media expert at Queensland University of Technology, said Twitter’s misinformation policies helped tamp down anti-vaccination content that had been common on the platform in 2015 and 2016. From January 2020 to September 2022, Twitter suspended more than 11,000 accounts over violations of its COVID misinformation policy.

Now, Smith said, the protective barriers are “falling over in real time, which is both interesting as an academic and absolutely terrifying.”
“Pre-COVID, people who believed in medical misinformation were generally just talking to each other, contained within their own little bubble, and you had to go and do a bit of work to find that bubble,” she said. “But now, you don’t have to do any work to find that information — it is presented in your feed with any other types of information.”

Twitter did not respond to a request for comment. Other major social platforms, including TikTok and YouTube, said in recent weeks that they remained committed to combating COVID misinformation.

YouTube prohibits content — including videos, comments, and links — about vaccines and COVID-19 that contradicts recommendations from local health authorities or the World Health Organization. Facebook’s policy on COVID content is more than 4,500 words long. TikTok said it had removed more than 250,000 videos for COVID misinformation and worked with partners such as its content advisory council to develop its policies and enforcement strategies. (Musk disbanded Twitter’s advisory council last month.)

But the platforms have struggled to enforce their COVID rules.

NewsGuard, an organization that tracks online misinformation, found this fall that typing “COVID vaccine” into TikTok caused it to suggest searches for “COVID vaccine injury” and “COVID vaccine warning,” while the same query on Google led to recommendations for “walk-in COVID vaccine” and “types of COVID vaccines.” One search on TikTok for “mRNA vaccine” brought up five videos containing false claims within the first 10 results, according to researchers. TikTok said in a statement that its community guidelines “make clear that we do not allow harmful misinformation, including medical misinformation, and we will remove it from the platform.”

Dr Graham Walker, an emergency physician in San Francisco, said the rumors spreading online about the pandemic drove him and many of his colleagues to social media to try to correct inaccuracies. He has posted several Twitter threads with more than a hundred evidence-packed tweets trying to debunk misinformation about the coronavirus.

But this year, he said he felt increasingly defeated by the onslaught of toxic content about a variety of medical issues. He left Twitter after the company abandoned its COVID misinformation policy.

“I began to think that this was not a winning battle,” he said. “It doesn’t feel like a fair fight.”

Jordan News