Artificial Ignorance and the Age of Robo-Moderation

Something strange is happening on Facebook.

Ange Espeland
5 min read · Feb 4, 2021

Perhaps you’ve had your personal Facebook account restricted for violating community standards, or “zucced,” as it’s colloquially called (short for founder Mark Zuckerberg).

Perhaps you engaged in a heated political debate, and called someone a stupid piece of trash. Fair enough, that’s a clear violation of the site’s community standards.
Or perhaps you called a friend a “silly goose”, which is apparently just as bad.

Above is just one of several examples of the phrase “silly goose” being flagged as “harassment and bullying”. I wish I could say that’s the strangest example of Facebook’s robotic moderators failing to detect context, but it’s far from it.
People have also been struck with restrictions for calling others names such as “potato”.

Even the comment “That is in fact, how you hold a cat” was somehow flagged as “harassment and bullying”.

Oh, and you might not want to comment on the size of a parrot, just to be safe.

I was personally fresh off a seven-day restriction for the statement “I agree, totally creepy”, made in the context of agreeing that a particular man was, well, making creepy comments.

Then, almost immediately, I was hit with yet another violation — this time for a comment I made four years ago, flagged for “sexual activity” — and I am once again unable to use Facebook for a week. It’s almost as if they could sense I was writing this, and were kind enough to provide another example.

So, it appears that even if I am on my “best behavior” (which I suppose means simply not making any comments ever) going forward, I may still risk a violation — or even permanently losing my account — for something I said in 2014, or 2007.

The ridiculous restrictions are only half the story, too. On the other side of this phenomenon is the fact that actual hate speech and bigotry are rarely ever removed. Why is this? Because again, the moderation AI cannot detect context. This is why memes and cartoons depicting minorities as animals, subtly calling for segregation, violence, and even mocking suicide all continue to be shared widely across the platform. Groups and pages created as havens of bigotry seem untouchable.

The bigots have learned how to avoid a zucc — and it’s now easier than ever to evade a violation. When you know that AI is scanning your online presence for certain words and word patterns without the chance of a human being having a look, you can simply speak in coded language. Or use slurs by spacing apart the letters. Or share your hateful ideas through art. The options are endless.

In the interest of not triggering readers, I will not share examples of the racist, anti-Semitic, Islamophobic, LGBT-phobic, and overall heinous content that Facebook decides is a-ok.

What happens when you attempt to report a page, group, or post like this?
I recently reported a page called “That’s Racist”, which exists for the sole purpose of, well, sharing racist memes. Unsurprisingly, I received the standard response:

“Thanks for your report — you did the right thing by letting us know about this. The Page you reported was reviewed, and though it doesn’t go against one of our specific Community Standards, we understand that the Page or something shared on it may still be offensive to you and others. No one should have to see posts they consider hateful on Facebook, so we want to help you avoid things like this in the future.
If you want us to review something specific on this or another Page, you can report that exact content (example: photo) instead of the entire Page.
From the list above, you can also block That’s Racist directly, or you may be able to unfollow the Page. If you unfollow it, you’ll be able to find the Page on Facebook but you won’t see its posts in your News Feed.”

They might as well say “Don’t be so sensitive, you silly goose!” — but we’ve established that this is basically hate speech.

So what actually happens when you’re struck down by a nonsense violation?
At the very least, you won’t be able to post, comment, message, or even react on Facebook for a predetermined amount of time. These restrictions can last anywhere from 24 hours to 30 days. There is (sort of) an “appeals” process, in which you can disagree with the violation and request a second review. However, you will likely just receive another instant automated response confirming the violation.

After an unspecified number of violations, your account may be permanently removed.

Why is this a little scary? Because many people like myself have had Facebook accounts for 15+ years, and the platform is the main source of all our memories. All our photos. All our friendships. To know that, at any moment, our entire online lives can disappear for no reason whatsoever is unnerving to say the least.

This recent content moderation phenomenon hits those of us with an addiction to Facebook (admittedly myself included) especially hard. Our social lives are undeniably deeply intertwined with this platform. Should we try to take social media less seriously for our own health? Probably.
Perhaps my seemingly nonstop restrictions are a proverbial blessing in disguise. It sure doesn’t seem like it, because I really want to get back to all of my cat photo groups.

We don’t know if this faceless AI is the future of content moderation, but it certainly proves that newer isn’t always better.
