In the last year, we have seen a deluge of reports about moderation on platforms such as Facebook and YouTube. Facebook in particular has attracted widespread criticism for its moderation policies and for how it treats the people who enforce them.
The situation has only really come to light thanks to whistleblowers like Chris Gray, who went to the press over the working conditions of moderators in Ireland. He was at this session, along with Cori Crider from Foxglove, to talk about how the situation could be improved. This was the second session the two were involved in, the first being on how to better protect whistleblowers in the tech industry, and as you can imagine, the two topics go hand in hand. The focus of the session can be split into three key points:
Content moderation is hard
There is a frontline workforce without whom the platforms themselves wouldn’t exist
How do we protect them?
The session started with us breaking out into four groups to discuss what moderation means, and it was really interesting to hear how people’s understanding of the term differed.
To some, moderation just referred to the small groups that exist on Facebook, or the more community-driven moderation you might see on Reddit. Some mentioned self-moderation and thinking about the public and the private, while others, such as Chris, thought about the grand-scale moderation done in centres that review reported content. The gulf between grassroots moderation and the industrialised operation we are now seeing is huge, and is still not widely understood by the public.
For instance, some in my group were surprised that moderation for platforms like Facebook is often outsourced to third-party companies, and that the average employee lasts just 12 months. They were also shocked to learn that algorithmic management and metrics measure workers’ performance, and that each moderator is assigned an accuracy score that tracks whether they have made the correct moderation decision on a post.
Echoing a debate currently happening at the highest echelons of technology and politics, there was a discussion about the difference between moderation and censorship. Although some felt that certain content being pulled from platforms is censorship, the reality is that these platforms are companies, and as such are free to set whatever policies they like regarding the content they host. For example, Voice has community guidelines on what is and isn’t acceptable to post on our platform; the same is true of the big platforms. The only difference, and perhaps the key difference, is scale. We have a modest community, while Facebook commands the attention of nearly a third of the global population. Perhaps, as a result, there should be further regulation of how such platforms operate, but that extends beyond the remit of this session.
Returning to the conditions of workers, Chris was given some time to talk about his own situation. He made it clear that he didn’t want to focus on the gruesome aspects of the job, the beheadings and murders and so on, but instead on the bureaucracy and general lack of employee care. He spoke of the constantly changing policy decisions, which could be fed down from the top on a daily basis. Moderators had to get to grips with each new policy while also dealing with the precedents set by previous decisions, even ones made before they joined. Chris specifically mentioned a time he was disciplined for making a wrong decision because guidance had been circulated six months earlier, before he was even at the company. In general, the high churn of employees means you are unlikely ever to be fully abreast of the policies, and your quality score will inevitably suffer as a result.
We also heard from an attendee named Aldo, who worked as a moderator. He mentioned that there was no job security because all moderators were freelancers on 11-month contracts.
But what are the solutions? That is the million, or perhaps billion, dollar question, and at the moment we don’t have any concrete answers. Chris was adamant that any solution has to be led by the moderators themselves: a case of change being made by the workers, not for them. Other suggestions included working to rule, and further whistleblowing to keep the issue alive in the media. Some suggested that class action suits could be an option, but in so many cases, employee contracts contain a mandatory arbitration clause that prevents this.
Unionisation was also proposed, but deemed exceptionally challenging to implement, simply because of how dispersed the workers are across the world, and how easily the moderation companies can fire them. Some thought that full-time employees of the platforms should internally push for changes to corporate attitudes towards moderators. As the Google walkouts showed, change can come from within.
A final, more dramatic solution would be to take the decision away from the big companies altogether, and change regulations to force them to treat moderators properly.
In any case, this is an issue that is only going to become more pressing. YouTube, Facebook, Twitter, and similar companies are only growing, and the cesspit of content is growing with them. Moderators are crucial to the functioning of these companies, and are on the frontline of the seemingly futile battle to keep the platforms a pleasant place to socialise. Isn’t it about time they were treated as such?