A recent study by researchers from the University of Technology Sydney and the University of Sydney has shed light on the difficulties Facebook’s parent company, Meta, encounters in its battle against misinformation.
The findings suggest that Meta’s efforts, particularly during the COVID-19 pandemic, have not been as successful as hoped.
The study was published in the journal Media International Australia and focuses on the years 2020 and 2021.
Meta tried to control false information on Facebook using strategies such as content labeling (marking posts as misleading) and shadowbanning.
Shadowbanning is a technique where the platform reduces the visibility of content it finds problematic without the user knowing, making it less likely to appear in newsfeeds, searches, or recommendations.
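To make that mechanism concrete, the minimal Python sketch below illustrates the general idea of a feed-ranking demotion: a flagged post keeps circulating but is scored far lower, so it rarely surfaces, and its author is never told. Everything here, including the names and the demotion factor, is hypothetical and is not drawn from Meta's actual systems.

```python
# Toy illustration of "shadowbanning" as a feed-ranking demotion.
# All names and numbers are hypothetical; this is not Meta's code.

from dataclasses import dataclass

@dataclass
class Post:
    post_id: int
    relevance: float        # base ranking score from a feed model
    flagged: bool = False   # internally marked as likely misinformation

DEMOTION_FACTOR = 0.1       # hypothetical penalty applied to flagged posts

def ranking_score(post: Post) -> float:
    """Return the score used to order a user's newsfeed.

    Flagged posts are not removed; they are pushed far down the ranking,
    and the author receives no notification (the "shadow" part).
    """
    return post.relevance * (DEMOTION_FACTOR if post.flagged else 1.0)

posts = [
    Post(1, relevance=0.9),
    Post(2, relevance=0.95, flagged=True),  # would normally rank first
    Post(3, relevance=0.5),
]

# The flagged post drops to the bottom of the feed despite its high relevance.
for p in sorted(posts, key=ranking_score, reverse=True):
    print(p.post_id, round(ranking_score(p), 2))
```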
Despite these efforts, the study found that accounts promoting far-right ideologies and anti-vaccination sentiments actually gained more attention and followers after Meta announced its content moderation policies.
Amelia Johns, a leading researcher from UTS, expressed concerns over Meta’s commitment to eliminating harmful content.
She pointed out that the company prefers to limit the spread of misinformation through indirect methods like content labeling and shadowbanning rather than removing the content outright.
Meta argues that removing misleading content doesn’t work because users will find ways to bypass these restrictions.
However, the study’s findings challenge this view, showing that indirect methods such as shadowbanning and labeling also prompt users to find ways around them.
This was particularly true for anti-vaccination groups and those with extreme political views. These groups not only continued to spread misinformation but also found new ways to evade Meta’s restrictions, undermining the assumptions behind the company’s internal models.
The research highlights a significant issue: Meta’s approach of trying to reduce the spread of misinformation rather than eliminate it has produced mixed results.
It appears that these policies have not discouraged groups dedicated to spreading false information. Instead, these groups have become more determined to outsmart the platform’s algorithms.
This situation raises concerns about the effectiveness of Meta’s strategies in protecting users from false information.
It seems that the company’s cautious approach, aimed at balancing content moderation with freedom of expression, may not be sufficient to prevent the spread of harmful misinformation.
The study calls for a reevaluation of these strategies, emphasizing the need for more direct action against misleading content to protect vulnerable communities and individuals from misinformation.