
Survey Reveals Increase in Harmful Online Content After Meta Changes Policies

Meta ditched third-party fact-checking in the United States in January.

A recent survey reveals that harmful content, including hate speech, has risen sharply across Meta’s platforms after the company ceased its use of third-party fact-checkers in the U.S. and relaxed its content moderation rules.

The survey, which involved around 7,000 users of Instagram, Facebook, and Threads, was conducted after Meta discontinued its use of fact-checkers in January. Instead, the company shifted the responsibility for flagging misleading information to ordinary users through a system known as "Community Notes," a concept that first gained traction on X.

This move was interpreted as catering to the administration of President Donald Trump, whose conservative supporters have long argued that fact-checking on social media limited free speech and suppressed right-leaning viewpoints.

Additionally, Meta has relaxed its guidelines concerning issues related to gender and sexual identity. Their revised community standards now allow users to label others as having “mental illness” or being “abnormal” based on their gender or sexual orientation.

According to the survey, which was compiled by digital rights groups such as UltraViolet, GLAAD, and All Out, these changes marked a significant rollback of content moderation efforts that had been developed over the past decade.

Among the surveyed users, about one in six reported experiencing gender-based or sexual violence on Meta platforms, and a staggering 66% noted they had come across harmful content, including hate speech or violent material.

Moreover, 92% of participants expressed concern over the rise in harmful content, stating they feel “less protected” from such material on Meta’s platforms. Additionally, 77% reported feeling “less safe” when expressing their opinions freely.

Meta has not commented on the survey findings.

In its latest quarterly report, released in May, Meta claimed that the changes made in January had had little impact on its platforms, stating: "Since the January changes, we've halved enforcement errors in the U.S., while the occurrence of violating content has remained mostly unchanged in most areas."

However, the organizations behind the survey contend that this report does not accurately reflect users’ experiences concerning targeted harassment and hate.

Jenna Sherman, campaign director at UltraViolet, emphasized the importance of allowing individuals to engage safely on social media, pointing out how central these platforms have become to daily life. She criticized Meta’s decision to retreat from established content moderation practices, arguing this endangers vulnerable users.

Sherman mentioned that the equity issues within Facebook and Instagram have escalated following these policy changes.

The advocacy groups are urging Meta to appoint an independent third party to assess the impact of these policy changes and to promptly reinstate the former moderation standards.

The International Fact-Checking Network has previously warned that further diluting fact-checking practices could have severe repercussions beyond the U.S., affecting Meta’s operations in over 100 countries.

AFP currently collaborates in 26 languages with Meta’s fact-checking initiative, which spans regions including Asia, Latin America, and the European Union.
