Facebook’s parent company Meta has faced mounting accusations of failing to protect its moderators and the public from online abuse, fuelling pressure worldwide for legislation to force the company to act.
First brought to the fore in 2018, the struggles faced by moderators gained fresh momentum in 2021 as leading politicians gave whistleblowers a platform to raise their concerns.
Despite the platform being ordered to pay $52 million in compensation to US moderators who suffered psychological distress, and with legal action still looming from its European employees, experts say it has still not addressed its “toxic” model.
Former moderator Chris Gray is presently taking Meta to court in Ireland over claims he suffered post-traumatic stress disorder as a result of viewing explicit content, including videos involving abuse and terrorism beheadings, as part of his job at the social media company.
He told The National his case is far from over and that there were few developments in 2021.
“Meta says it is going to take action but it just means more stress for those doing the work,” he said. “It is not looking to fix a broken system.
“There has still been no movement in my case. I feel the company is just dragging its feet. When it says it is going to make improvements, such as the concessions it claimed it would make following the US case, they were things moderators would have to do as part of their job in any event. I can’t see things ever changing.”
Mr Gray is one of more than 30 moderators across Europe pursuing Meta through the courts for compensation.
Facebook’s biggest challenge began when whistleblower and former Facebook engineer Frances Haugen leaked thousands of internal documents that showed how the company had dealt with moderating hate speech and misinformation.
Her voice has added impetus to the calls for the platform to change after she was given a stage in the UK Parliament, the EU and the US Congress to expose Facebook’s inner workings.
She told the UK Parliament that Facebook will fuel more violent unrest around the world unless it stops its algorithms from pushing extreme and divisive content.
Whistleblower Isabella Plunkett attended a hearing at the Irish parliament to urge Meta to provide proper psychological support to workers and limit their exposure to harmful content.
She revealed that moderators are provided with non-medically qualified “wellness coaches”, who suggest “karaoke and painting” as coping mechanisms.
That failure is emblematic of the wider issues facing the social media sphere. Ms Haugen claims Facebook’s algorithms are specifically designed to sow division and that studies show that users are more likely to create content when they are angered.
Facebook, she warned, considered online safety as a cost and said the company had promoted a start-up culture where cutting corners was good.
“Unquestionably, it is making hate worse,” she said. “Facebook has been unwilling to accept even a little sliver of profit being sacrificed for safety.”
In December, she told Congress that the most extreme content received the widest distribution.
“Let us imagine you encountered a piece of content that was actively defaming a group that you belong to. It could be Christians, it could be Muslims, it could be anyone,” she said.
“If that post causes controversy in the comments, it will get blasted out to those people’s friends, even if they didn’t follow that group. And so the most offensive content, the most extreme content gets the most distribution.”
Facebook said it has “always had the commercial incentive to remove harmful content”.
Daniel Markuson, a digital privacy expert at NordVPN, said the company has done little to address the concerns of its moderators.
“It appears to be run as a traditional company that only serves to fulfil its bottom line. However, the focus on profit seems to come at the expense of user and employee well-being,” he told The National.
“Back in 2019, Mark Zuckerberg called claims of PTSD by content moderators ‘a little too over dramatic’. The company’s stance does not seem to have moved since then, as Facebook has done little to address internal complaints concerning poor working conditions for content moderators.
“Following the rebranding to Meta, it seems that the company is trying to distance itself from its toxic reputation instead of fixing deep-rooted issues. It has also been using a similar tactic for years in regard to the way in which content moderation is executed. It appears that by hiring third-party firms to moderate content for Facebook, the company is trying to deny responsibility for the employees’ well-being.
“The introduction of ‘wellness coaches’ that are supposed to provide short-term help has been reported. However, considering the constant stream of morbid content moderators are faced with, the type of help they receive seems to be a drop in the ocean.
“The public outcry of several employees-turned-whistleblowers suggests that in 2021 the company does not seem to have made the necessary advances in addressing the needs of the moderators. The horrible content they are exposed to daily and the lack of appropriate mental health services definitely seems like a breeding ground for severe psychological issues.”
The company’s shares fell 9 per cent in the last quarter of 2021 as regulators and legislators prepared action on allegations it faced. About 30 pieces of draft legislation have been proposed in the US to update the regulatory framework that the social media companies operate within.
In the UK a bill going through Parliament would allow regulators to call in data that reveals how the company mitigates alleged harms. The EU is set to adopt a new legal framework later this year while India is expected to set up a joint parliamentary commission to investigate Facebook’s operations there in 2022.
After Leo Varadkar, Ireland’s Minister for Enterprise, Trade and Employment, met moderators from Facebook’s Dublin offices last year, he wrote to Meta to raise their concerns.
He also contacted Ireland’s Health and Safety Authority (HSA), which is continuing to investigate workplace issues within Facebook, to highlight the “importance” of people being protected in the workplace, and urged moderators who felt they were being failed to contact the authority.
The HSA refused to tell The National how many complaints against the company it is investigating.
“The authority does not comment on individual employers or workplaces nor does it provide detail on inspections or investigations that are undertaken or under way,” it said.
“All complaints received by the authority are reviewed and followed up in the appropriate manner.”
Dr Hans-Jakob Schindler, senior director of the Counter Extremism Project, a think tank, believes the platform will remain “toxic until regulation is introduced”.
“The issue with Facebook is the combination of a hermetically sealed platform that is not actually open to outside analysis or audits, coupled with an absolute commercial drive to grow and to increase profits at all costs, and it is situated within a whole unregulated industry with next to no liability risks,” he told The National.
“This combination is particularly toxic as it ensures that only when problems become absolutely undeniable, whether through the proliferation of the Christchurch attack video or when an internal whistleblower reveals the issue to the public, are they somewhat addressed, but never in a way that actually solves any of the issues in a systemic and sustainable manner.
“It has been in crisis mode for several years; 2021 was not a one-off crisis for Facebook, since the problems are systemic within the company as well as external, owing to the lack of regulation and liability risk.
“The problems Facebook faced in 2021 are a reflection of this and, therefore, the company will not be able to overcome these issues under the current circumstances because this would require that it voluntarily changes its core business model or voluntarily limits its profits.
“The impetus will have to come from the outside through regulations, the setting of binding standards, and the introduction of both legal and financial risks for [breaches] or lack of implementation of such regulations and standards.”
Meta told The National it supports its moderators and uses technology to help to monitor inappropriate content.
“We are committed to working with our partners to provide support for our content reviewers as we recognise that reviewing certain types of content can sometimes be hard,” the company’s representative said.
“Everyone who reviews content for Meta goes through an in-depth training programme on our Community Standards and has access to psychological support to ensure their well-being.
“This includes 24/7 on-site support with trained practitioners, an on-call service and access to private health care from the first day of employment.
“We also have technical solutions to limit their exposure to potentially graphic material. This is an important issue and we are committed to getting this right.”
The company says it has also taken measures to help reduce the amount of disturbing content that moderators have to view by increasing its use of artificial intelligence to remove harmful content.
“We invest in artificial intelligence to improve our ability to detect violating content and keep people safe,” it says.
“Whether it is improving an existing system or introducing a new one, these investments help us automate decisions on content so we can respond faster and reduce mistakes.”
Improvements include the development of a new system called Reinforced Integrity Optimiser, which learns from online signals to improve its ability to detect hate speech, and a tool called Linformer, which analyses content globally.
Other initiatives include an image-matching tool called SimSearchNet, which is able to detect subtle distinctions in content so it can be removed swiftly, and language tool XLM, which helps find the same actionable content in different languages.
“The challenges of harmful content affect the entire tech industry and society at large. That is why we open-source our technology to make it available for others to use,” it says.
“We believe that being open and collaborative with the AI community will spur research and development, create new ways of detecting and preventing harmful content, and help keep people safe.
“We have improved our tools for detecting hate speech over the last several years, so now we remove much of this content before people report it – and, in some cases, before anyone sees it.”
Updated: January 3rd 2022, 2:02 PM