Oireachtas Joint and Select Committees
Wednesday, 1 August 2018
Joint Oireachtas Committee on Communications, Climate Action and Environment
Moderation of Violent and Harmful Content on the Facebook Platform: Discussion
12:00 pm
Ms Niamh Sweeney:
I thank the committee for asking us to be here today to discuss some of the issues raised in the recent "Dispatches" programme that aired on Channel 4 on 17 July 2018. I am head of public policy for Facebook Ireland. My colleague, Siobhán Cummiskey, is Facebook's head of content policy for Europe, the Middle East and Africa, and we are both based in our international headquarters in Dublin.
We know that many who watched the "Dispatches" programme were upset and concerned by what they saw. Siobhán and I, along with our colleagues here in Dublin and all around the world, were also upset by what came to light in the programme, and we fully understand why the committee wanted to meet us.
The safety and security of our users is a top priority for us at Facebook, and we have created policies, tools and a reporting infrastructure that are designed to protect all our users, especially those who are most vulnerable to attacks online, such as children, migrants, ethnic minorities, those at risk of suicide and self-harm, and others. It was deeply disturbing for all of us who work on these issues to watch the footage that was captured on camera at our content review centre in Dublin, as much of it did not accurately reflect Facebook's policies or values.
As our colleague, Richard Allan, said during an interview with the "Dispatches" team, we are one of the most heavily scrutinised companies in the world, and that is right. It is right that we are held to high standards, and we also hold ourselves to those high standards. "Dispatches" identified some areas where we have failed, and Siobhán and I are here today to reiterate our apologies for those failings. We should not be in this position and we want to reassure the committee that whenever failings are brought to our attention, we are committed to taking them seriously, addressing them in as swift and comprehensive a manner as possible, and ensuring we do better in future.
First, I would like to address one of the claims made in the programme, that it is in our interests to turn a blind eye to controversial or disturbing content on our platform. This is categorically untrue. Creating a safe environment where people from all over the world can share and connect is core to our business model. If our services are not safe, people will not share with each other and, over time, will stop using them. Nor do advertisers want their brands associated with disturbing or problematic content, and advertising is Facebook's main source of revenue. We understand that what I am saying to the committee now is undermined by the comments that were captured on camera by the "Dispatches" reporter. We are in the process of carrying out an internal investigation to understand why some actions taken by CPL were not reflective of our policies and the underlying values on which they are based. I will explain who CPL are as I go on.
We also wish to address a misconception about reports that relate to an imminent risk of self-harm or suicide. During the programme, a CPL staff member was asked if a backlog of reports could include reports about people who were at risk of suicide. The staff member's answer was that it could, but this was wrong. Suicide-related reports are routed to a different queue so we can get to them quickly. Queue is the word we use for the list of reports coming in to us. Reports about suicide or self-harm are considered high priority, and almost 100% of the high-priority reports during those two months of filming were reviewed within the set timeframe.
This is a somewhat shorter opening statement than the one we submitted in writing, but I will touch on all the main points covered in the written version. I will address the actions taken to fix specific content errors that were highlighted by the programme. "Dispatches" highlighted a number of issues. I want to take the committee through what we have already done and are continuing to do to improve the accuracy of our enforcement.
As I highlighted at the outset, some of the guidance given by the trainers to content reviewers during the "Dispatches" programme was incorrect.
As soon as we became aware of these mistakes, we took immediate steps to remove those pieces of content from our platform in line with our existing policies. The mistakes included a decision not to remove a video depicting a three-year-old child being physically assaulted by an adult. This was a mistake because we know that the child and the perpetrator were both identified in 2012. The video should have been removed from the platform at that time. We removed this video as soon as "Dispatches" brought it to our attention. Our policies make it clear that videos depicting child abuse should be removed from Facebook if the child in question has been rescued. In addition to removing the specific piece of content, we are now using media matching technology to prevent future uploads of the content to the platform.
We do not allow videos of this nature to be shared except in a very narrow set of circumstances, namely, if the video is shared to condemn the behaviour, the child is still at risk and there is a chance the child and perpetrator could be identified to local law enforcement as a result of awareness being raised. According to Malaysian news reports, that is what happened in this particular case. A neighbour recognised the child in the video, having seen it on Facebook. In that instance, once we know the child has been brought to safety, it is our policy to remove the video and prevent it from being re-uploaded to our platform by using media matching software. In the relatively few cases where this kind of video is allowed to remain on the platform, in line with what I have described, we apply a warning screen for users and limit its distribution to only those who are 18 years of age or older. We also send this content to an internal Facebook team known as the law enforcement response team which can contact local law enforcement.
We recognise that there are a number of competing interests at play when it comes to this type of content, namely, the child's safety and privacy, the effect of that content on those who may view it and the importance of raising awareness of real world happenings. However, on foot of the concerns voiced by safety NGOs and others following the "Dispatches" programme, we are actively considering a change to this policy and have started an extensive consultation process with external organisations, including law enforcement agencies and child safety organisations, to seek their views on the exception we currently make for children we believe to be at risk or who could be brought to safety.
It is important to make absolutely clear that we take a zero-tolerance approach to child sexual abuse imagery. Whether it is detected by our technology or reported to us, we remove it and report it to the US-based National Center for Missing and Exploited Children, NCMEC, as soon as we find it. NCMEC leads the global co-ordinated effort to tackle child sexual abuse imagery of which we, other tech companies and law enforcement agencies around the world, including An Garda Síochána, are a part. We also use photo and video matching technology to prevent this content from being uploaded to Facebook again and we report attempts to re-upload it to law enforcement agencies where appropriate.
One of the other examples highlighted by the programme included a decision not to remove a video of teenage girls in the United Kingdom who were filmed fighting with each other. This video has since been removed from our platform. It is our policy to always remove bullying or teenage fight videos unless they are shared to condemn the behaviour. Even content shared in condemnation appears behind a warning screen and is only visible to people over the age of 18. The user must click through this warning screen if he or she wants to continue to view the content. In the example highlighted in "Dispatches", the person who shared the video did so to condemn the behaviour. However, it is our policy to always remove such content, regardless of whether it is shared to condemn it, if the minor or his or her guardian has requested its removal. When we learned from the "Dispatches" team that the mother of one of the teenagers involved was deeply upset and wanted this video removed, we immediately deleted it and took steps to prevent it from being uploaded to the platform again.
There was also a decision not to remove a post comparing Muslims to sponges and a disturbing meme that read, "When your daughter's first crush is a little negro boy". These were both violations of our hate speech policy and should have been removed by the reviewer. Hate speech is never acceptable on Facebook and we work hard to keep it off our platform. These posts were left up in error and were quickly removed once we became aware of them via "Dispatches". The meme in particular violates our hate speech policy as it mocks a hate crime, that is, it depicts violence motivated by racial bias. We have deleted it and are using image matching software to prevent it from being uploaded again. The post comparing Muslims to sponges violates our hate speech policy as it is dehumanising.
We are increasingly using technology to detect hate speech on our platform, which means we are no longer relying on user reports alone. Of the 2.5 million pieces of hate speech we removed from Facebook in the first three months of 2018, 38% was flagged by our technology. In 2017, the European Commission monitored the compliance of Facebook and other tech companies as part of the code of conduct on countering illegal hate speech online, and we received the highest score, removing 79% of potential hate speech, 89% of which was removed within 24 hours.
We are also making some changes to our processes and policies to address the issues raised. The first is that we will now flag accounts of users suspected to be under 13 years of age.
We do not allow people under that age to have Facebook accounts. If someone is reported to us as being under 13, the content reviewer will look at the content on the profile - meaning text and photos - to try to ascertain the user's age. If the reviewer believes the person is under 13, the account will be put on hold and the person will not be able to use Facebook until he or she provides proof of age. Since the "Dispatches" programme, we have been working to update the instructions for reviewers to put a hold on any account they encounter where there is a strong indication the user is underage, even if the review was prompted for a different reason.
As I flagged, our policy in respect of non-sexual child abuse videos is under review. We have started this consultation process with external organisations to decide if it is appropriate to continue with our policy of allowing these videos on our platform in the limited circumstances I described, namely, when they are shared to condemn the behaviour and the child is still at risk.
We are also taking actions to address training and enforcement of our content policies. We recognise the responsibility we have to get our training and the enforcement of our policies right. Content review at this scale has never been done before, as there has never been a platform where so many people communicate in as many languages across so many countries and cultures. We work with reputable partners to deliver content moderation services because it enables us to respond more quickly to changing business needs. For example, we may need to quickly increase the number of staff we have in different regions, and the outsourcing model enables us to do that. As I stated, CPL Resources is one of our outsourcing partners here in Dublin and we have worked with the company since 2009. However, in light of the failings highlighted by "Dispatches", we are making changes to substantially increase the level of oversight of our training by in-house Facebook policy experts and to test even further the readiness of our content reviewers before they start reviewing real reports.
We are in the process of carrying out an internal investigation with CPL to establish how these gaps between our policies and values and the training given by CPL staff came about. The investigation is being led by Facebook, rather than CPL, due to the extremely high priority we attach to this. It began in earnest on Monday, 23 July, as out of an abundance of caution and concern for their well-being, CPL encouraged the staff members directly affected by the programme to take some time off. We immediately carried out retraining for all trainers at our CPL centre in Dublin as soon as we became aware of discrepancies between our policies and the guidance that was being given by trainers to new staff. Ongoing training will now continue with twice-weekly sessions to be delivered by content policy experts from Ms Siobhán Cummiskey's team. CPL is also now directly involved in weekly deep-dive discussions with Ms Cummiskey's team on our policies covering issues like hate speech and bullying. All content reviewers will continue to receive regular coaching sessions and updated training on our policies as they evolve. We have also revised the materials used to train content reviewers to ensure they accurately reflect our policies and illustrate the correct actions that should be taken in all circumstances. This has been done both for CPL and for all of our content review centres globally. These materials have been drafted and approved by Facebook only and will continue to be updated by us as our content policies evolve.
We are also seconding highly experienced subject matter experts from Facebook to CPL's office for a minimum of six months to oversee all training and provide coaching and mentoring. We are introducing new quality control measures, including new dedicated quality control staff to be permanently assigned to each of our content review centres globally. We are also conducting an audit of past quality control checks at CPL, going back over a period of six months, to identify any repeat failings that may have been missed. This will include temporarily removing content reviewers who have made consistent or repeated errors from this type of work until they have been retrained. We will also continue to deploy spot testing at our review centres. If we find any broader irregularities in the application of certain policies, we will use targeted spot-checking of all content reviewers to improve accuracy. We have for several months been in the process of enhancing our entire onboarding curriculum and are continuing to do so. The enhancements to our curriculum include even more practice, coaching and personalisation to help content reviewers focus on areas where they may benefit from additional upskilling.
As I have been speaking for some time, I will conclude with some comments on the Digital Safety Commissioner Bill 2017. I would like to share our thoughts on the 2016 proposal by the Law Reform Commission, LRC, to create a digital safety commissioner with statutory take-down powers. As the committee is no doubt aware, the LRC's proposal also provided a foundation for Deputy Donnchadh Ó Laoghaire's Private Member's Bill, the Digital Safety Commissioner Bill 2017. We understand the motivation behind the establishment of a digital safety commissioner and have discussed it with many of our safety partners in Ireland. We also understand the appeal of having an independent statutory body that is authorised to adjudicate in cases where there is disagreement between a platform and an affected user about what constitutes a "harmful" communication, or to provide a path to appeal for an affected user where we have, in error, failed to uphold our policies. We also acknowledge the draft Bill's efforts to ensure its scope is not overly broad in that an appeal to the digital safety commissioner could only be made by an individual where the specified communication concerns him or her. We see great benefit in a single office having the ability to oversee and co-ordinate efforts to promote digital safety, much of which has been captured in the Government's recently published Action Plan for Online Safety 2018-2019.
Only through a multi-pronged approach, of which education is a critical part, can we begin to see positive changes in how people engage and protect themselves online.
In addressing the nature of harmful communications, the Law Reform Commission report states that while there is "no single agreed definition of bullying or of cyberbullying, the well-accepted definitions include the most serious form of harmful communications, such as ... so-called "revenge porn"; intimidating and threatening messages, whether directed at private persons or public figures; harassment; stalking; and non-consensual taking and communication of intimate images". We agree with the Law Reform Commission with respect to all of these types of communications. The sharing of non-consensual intimate images, otherwise known as revenge porn, harassment, stalking and threatening messages are all egregious forms of harmful communication and are banned both by our community standards and, in some cases, the law. We fully support the Law Reform Commission's proposals to create new criminal offences to tackle non-consensual sharing of intimate images and online harassment where those offences are clearly defined and practicable for a digital environment. We have also taken steps to improve how we tackle the sharing of non-consensual intimate images on our platform. More information on this was shared in a Facebook newsroom post in April 2017.
However, beyond the egregious examples I have outlined, the proposed Bill is unclear as to what precisely constitutes a harmful communication. No definition is included in the draft legislation, but from the drafting of the Bill, it appears that this concept is intended to be broader than content that is clearly criminal in nature, much of which I outlined and on which we are in full agreement with the Law Reform Commission. The exact parameters are left undefined and this will lead to uncertainty and unpredictability. In its 2016 report, the Law Reform Commission states:
The internet also enables individuals to contribute to and shape debates on important political and social issues, and within states with repressive regimes, the internet can be a particularly valuable means of allowing people to have their voices heard. Freedom of expression is therefore the lifeblood of the internet and needs to be protected.
Later, the report notes:
Thus, balancing the right to freedom of expression and the right to privacy is a challenging task, particularly in the digital and online context. Proposing heavy handed law based measures intended to provide a remedy for victims of harmful digital communications has the potential to interfere with freedom of expression unjustifiably, and impact on the open and democratic nature of information sharing online which is the internet's greatest strength.
We agree with the Law Reform Commission's analysis. While it would clearly not be the intention of this Bill to impact on free speech in Ireland, the commissioner's ability to issue a decision ordering the removal of harmful communications should be considered in light of the potential for limiting freedom of expression. It is important, therefore, to have a clear definition of what constitutes a harmful communication included in the legislation.
Facebook has put community standards in place for a reason. We want members of our community to feel safe and secure when they use our platform and we are committed to the removal of content that breaches those standards. I thank the committee for meeting us today to discuss these important issues.