Opinion: Is Meta’s Safety Regulation System Broken?

Image generated with DALL·E through ChatGPT

Written by: Andrea Miliani, Tech News Expert

Fact-Checked by: Justyn Newman, Lead Cybersecurity Editor

Almost every content creator or active social media manager has run into the same issue on Facebook or Instagram recently: a post or an account gets banned, probably for the wrong reasons.

This frustrating situation is just one piece of a bigger puzzle: a complex problem with Meta’s content regulation system. While Meta seems to have plenty of control measures, some of them absurd, the root of the problem remains unaddressed.

Over the past few months, Meta has introduced numerous updates to its content guidelines and implemented stricter rules aimed at building a healthier online environment. One of the consequences has been that many businesses, publishers, and user accounts have been banned, leading to hundreds of complaints across forums, chat platforms, and social media channels. Multiple news publishers and brands were removed from Meta’s platforms in certain regions this year, raising concerns among business owners, journalists, and content creators.

Despite Meta’s recent updates, which give the impression of stricter moderation and closer scrutiny of content shared on its platforms, posts related to drugs, suicide, sexual harassment, bullying, hate speech, abuse, and fake news continue to slip through its algorithms, reaching vulnerable communities.

I can’t help but wonder: What is happening to Meta’s safety regulation system?

Accounts Banned For the Wrong Reasons

It all starts with a similar message: “Your Meta account doesn’t follow our rules.” Many Facebook and Instagram users have been banned or temporarily kicked out of their accounts “for not complying” with Meta’s rules, even when they believe they have.

It’s a situation we have experienced ourselves at Wizcase: Meta’s system flagged legitimate posts as inappropriate and forced our community manager to go through an appeal process and provide a government-issued ID.

Hundreds of users, community managers, and account managers have complained about similar situations on Reddit and other forums, chats, and social media channels. In the worst cases, users lose their accounts without ever receiving an explanation, and there’s nothing they can do about it.

“The support team at Facebook is terrible. No matter how many times we tell them everything or explain everything to them, they just simply don’t understand,” said one user on Reddit in a thread about banned accounts. “One thing I can say right away is that you’re not going to get your account reinstated unless you were spending hundreds of thousands per day,” added another.

Although this problem may seem to affect only content creators, it is just a small part of a much bigger challenge.

Meta Against Lawsuits

For years, Meta has been investing in content moderation and new strategies, both to make its platforms safer for users and to protect itself from further lawsuits. The most recent one comes from Kenyan content moderators, who are seeking $1.6 billion in compensation for mass layoffs and for the distressing material they were exposed to while reviewing content for Facebook.

The tech company relies on third parties to help with content regulation and develops tools to recognize when content violates its platforms’ rules. However, these measures have not been enough, and the situation has gotten out of hand, especially among underage users.

In 2021, Meta introduced new protection features, but that didn’t stop the Senate from including Instagram and Facebook among the platforms considered harmful to children last year. The tech giant also faced a joint lawsuit filed by 33 US states over its allegedly manipulative and harmful algorithms.

Since January, Meta has shared updates on new safety control tools. In June, the company strengthened its anti-harassment protections for teenagers on Instagram, pairing a default “Close Friends” setting with the “Limits” feature so that only real friends and family can interact with content posted by young users.

Just a few days ago, the company announced new Teen Accounts to protect teenagers and “reassure parents that teens are having safe experiences.” Will that be enough to protect children? And what about older users?

An Unsafe Ecosystem

Facebook, Instagram, and WhatsApp are still plagued with harmful content that affects users of all ages, professions, and social groups, and the problem is unlikely to be solved any time soon.

A study by the Center for Countering Digital Hate (CCDH) analyzed over 500 million comments posted on the Instagram accounts of 10 women politicians in the United States and revealed that Meta failed to remove or take action on 93% of the abusive comments identified.

The multiple lawsuits against Meta make one thing clear: the company is struggling to protect users from damaging content, while at the same time hurting creators, publishers, and brands with unfair filters and poorly implemented safety tools.

Of course, this is a complex issue that deserves a deeper discussion, but maybe it is time to accept that Meta has not been able to handle the situation. Band-Aid fixes won’t repair a system that’s fundamentally broken. The real question now is: how many more users need to be unfairly banned, misled, or manipulated before real change happens?
