With almost 1.4 billion active users worldwide, Facebook's audience is vast, with huge variance in age, cultural values and laws across the globe. People from different backgrounds may have different ideas about what is appropriate to share, and as a result they post, or try to post, just about everything one can possibly imagine. Yet despite its published guidelines, the reasoning behind Facebook's decisions to block or allow content is often opaque and inconsistent. The company has now provided the public with more information about what material is banned on the social network.

Its revamped community standards include a separate section on dangerous organizations. Terrorist organizations like the so-called Islamic State have long been banned from the service, but the updated rules now also ban supporting or praising such groups, or the acts of their leaders, when they are involved in "violent, criminal or hateful behavior." That was not something detailed before.

While Facebook was originally created to stay in touch with friends and family, it has seeped into society's darker side. Bullying on Facebook has led to several suicides, especially among teenagers, so images altered to degrade an individual and videos of physical bullying posted to shame the victim are now expressly forbidden. Facebook's definition of hate speech covers content that directly attacks people based on their race, ethnicity, national origin, religion, gender, serious disabilities or diseases. Users are also banned from bragging about crimes they have committed, or from using the site to plan them, and they cannot post anything that directly threatens a person.

The new rules are not foolproof. The rules on violent content, for example, have not made much of a difference: graphic violence is still not explicitly banned in updates. The company must weigh showing graphic videos against its desire to allow the free sharing of information, but so far it has not reached a conclusive position, flip-flopping on beheading videos by first allowing them and then banning them. Facebook still depends on users to warn their audience when an update contains graphic violence. But warning an audience about what it is about to see does not prevent videos from playing automatically, so people may inadvertently watch them anyway. Facebook can add those warnings itself, but only when videos are reported.

One thing that has not changed: Facebook has no plans to automatically scan for and remove potentially offensive content. It will still rely on users to report violations of the standards. This seems an outmoded way of working. Even with review teams working at all hours around the globe, and every report examined by one of them before a decision is made, the process still takes time, typically 48 hours on matters of safety. That may not be fast enough in an era when graphic content can go viral in minutes.

Facebook has rewritten its guidelines on what is and is not allowed in order to avoid confusion and provide clarity. It has to walk a delicate line as it tries to ban violent or offensive content without suppressing the free sharing of information it says it wants to encourage.
Facebook's social network is the closest thing there is to a universal communication platform. Because of its vastness, the landscape is complicated. And rules, like sports records, are made to be broken: it does not take long for someone to get around a newly established rule, or to find an interpretation of it that no one had thought of before.