Facebook, which has managed to transcend geographic borders to draw in a population equal to roughly a third of all human life on Earth, has made its final charter for a “Supreme Court” of Facebook public. The company pledges to launch this initiative by November of next year.
The new Oversight Board will have five key powers, according to a charter (PDF) Facebook released yesterday. It can “request that Facebook provide information” it needs in a timely manner; it can make interpretations of Facebook standards and guidelines “in light of Facebook’s articulated values,” and it can instruct the company to allow or remove content, to uphold or reverse a decision leading to content being permitted or removed, and to issue “prompt, written explanations of the board’s decisions.”
“If someone disagrees with a decision we’ve made, they can appeal to us first, and soon they will be able to further appeal this to the independent board,” company CEO Mark Zuckerberg wrote in a letter (PDF). “As an independent organization, we hope it gives people confidence that their views will be heard and that Facebook doesn’t have the ultimate power over their expression.”
The board will launch with at least 11 members and should eventually get up to 40. The entity will contract its services to Facebook. Participants will serve a maximum of three three-year terms each and will be paid for their time. Their decisions will “be made publicly available and archived in a database of case decisions,” with details subject to certain data or privacy restrictions. Facebook can also contact the board for an “automatic and expedited review” in exceptional circumstances, “when content could result in urgent real world consequences,” such as, for example, if a mass-murderer is livestreaming his crimes.
The panel’s decisions will be binding, Facebook added, and the company will implement its findings promptly, “unless implementation of a resolution could violate the law.”
Today, Facebook has a few problems with content moderation. The first is, simply, scale. The site boasts more than 2.4 billion monthly active users, and even if only half of those accounts posted content, the result would still be a mind-boggling volume of photos, memes, videos, and text. Keeping up with content that other users flag and report is an utterly thankless task that has lingering harmful effects on the people who do it.
The size of the job is only one challenge, though. The rules themselves are another. Facebook has had an extremely difficult time trying to write universal guidelines defining hateful or threatening speech in recent years. Attempts to unify its policies have resulted in some frankly bonkers outcomes. The site infamously removed the Declaration of Independence in 2018, citing it as hate speech, two years after sparking criticism for pulling down a famous, Pulitzer-winning photograph from the Vietnam War. Documents obtained by ProPublica in 2017 revealed that while “black children” did not make the cut as a protected category on Facebook, “white men” explicitly did, heightening criticisms that Facebook policy operated outside of any real-world understanding of hate speech and Internet violence.
Even more challenging for literally billions of users: Despite Facebook’s existing community guidelines, finding out why a post has been blocked or deleted, or appealing the decision if one has, has proven to be an enormous black box. Facebook only instituted an appeals process in April 2018, but even that process has left many users wondering why a post they reported was not removed, or a post they made was. Several months later, a large coalition of digital and civil rights groups called on the company to make its appeals process transparent to users, as well as to issue regular transparency reports about “community standards enforcement.”
That’s where the new board comes in, theoretically. Zuckerberg first suggested in early 2018 that content decisions could rest with such a body, telling Vox, “You can imagine some sort of structure, almost like a Supreme Court, that is made up of independent folks who don’t work for Facebook, who ultimately make the final judgment call on what should be acceptable speech in a community that reflects the social norms and values of people all around the world.”
This time we mean it
To its credit, Facebook has started issuing regular transparency reports showing how many posts were removed for being spam, fake accounts, hate speech, terrorist propaganda, harassment, or child sexual exploitation in a given quarter. Quantifying the problem, however, has not ameliorated the problem.
Facebook executive Monika Bickert was joined earlier today by counterparts from Twitter and Google, as well as a representative from the Anti-Defamation League, to testify before the Senate regarding the prevalence of extremism on the platform. Bickert in her written testimony (PDF) spoke extensively of Facebook’s efforts to combat hate speech, extremism, and terrorist content on the site. The company currently has 30,000 employees working on content moderation around the world, Bickert said. Those employees review content in more than 50 languages, in shifts, 24 hours a day.
This leads to an obvious question: If 30,000 moderators, all of whom are already badly overworked and struggling as the system changes endlessly around them, can’t keep up with everything… why should a panel of 40 people, at first chosen by Facebook but operating independently of it, be able to stem the tide any better?
In this sense, perhaps Zuckerberg’s Supreme Court metaphor is apt. The panel will not by any stretch of the imagination be able to take on all cases brought before it. Staff will pick and choose which cases deserve appeal, according to a flowchart provided by Facebook:
Some appeals will be heard; others, ignored. Initially, all cases will be referred to the board by Facebook itself, and users will not be able to submit appeals for more than a year. Users, then, still don’t necessarily have a fair mechanism for bringing forward complaints.