Oireachtas Joint and Select Committees
Tuesday, 24 October 2023
Joint Oireachtas Committee on Finance, Public Expenditure and Reform, and Taoiseach
Authorised Push Payment Fraud: Discussion (Resumed)
Mr. Dualta Ó Broin:
I thank members for inviting us to today's session to discuss the subject of APP fraud. I am head of public policy for Meta in Ireland and I am joined by my colleague, Philip Milton, who is a member of our UK public policy team. Mr. Milton has been involved in engagements with the UK Parliament and UK Government on the issue of fraud for the past few years. In this opening statement, I will focus on Meta's efforts to tackle fraud on our platforms.
The safety of our users is a priority for Meta and we therefore take a zero-tolerance approach to fraud on our platforms. By its very nature, fraud is adversarial and hard to spot, and the perpetrators of fraud are continually searching for ways to subvert the rules, processes and safeguards we put in place to protect our users. The perpetrators also operate across platforms and industries to avoid disruption by any one platform, which makes this a very challenging area. Just as it is unlikely that fraud will ever be eradicated in society at large, it is unlikely we will ever be able to eradicate it completely online.
Nonetheless, Meta is committed to doing all we can to prevent fraudulent activity on our platforms wherever we can. We invest substantially in our safety and security teams to that end. All told, since 2016, we have spent about $20 billion on teams and technology in this area and that is not slowing down; $5 billion of that was spent in the past year alone. We have a team of highly trained experts solely focused on identifying fraud and building tools to counter this kind of activity. These tools help us catch suspicious activity at various points of interaction on the site, block accounts used for fraudulent purposes and remove bad actors.
It is directly in our interest to do all we can to combat fraud on our platforms. Failure to do so would expose our users to risk, severely degrade their experience of our platforms and make those platforms an unattractive place for brands and businesses to advertise. Meta has a set of strict advertising policies, community standards and community guidelines that govern what is and is not allowed in advertising and in non-paid, or organic, surfaces on Facebook and Instagram.
Where we believe anyone has violated our terms, standards and policies, we take action, deploying a combination of proactive automated systems and reactive methods to disrupt bad actors on our platforms. This includes using our artificial intelligence systems to proactively detect suspicious activity. We focus our attention on behaviours rather than content because, while the content of these scams changes frequently, the modus operandi of the bad actor typically remains the same.
Where our systems are near certain that content or profiles are violating, because they exhibit to a high degree of confidence the signals we associate with a scam, that content is removed automatically and immediately. Where the systems are less certain, content may be prioritised for review by our moderation teams. Our aim is to catch these bad actors proactively, as early as possible, before they have a chance to engage with users.
APP fraud is, of course, bound up with inauthentic behaviour. When someone looks to create a page or profile, we use our artificial intelligence, AI, to check for signs that it is being created by a real person and not an automated bot, because scammers can use bots to help them commit fraud. Accordingly, one of the tools we use to combat APP fraud is taking down fake accounts, and in the first half of 2023 we removed 1.1 billion of them.
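Meta does not disclose how these account-creation checks are implemented. Purely as an illustration of the kind of check described above, the following sketch assumes hypothetical signals such as signup velocity per IP address and profile completeness; none of these names, thresholds or rules are Meta's.

```python
# Illustrative only: a heuristic check at account-creation time.
# All signal names and thresholds here are hypothetical assumptions.

from dataclasses import dataclass

@dataclass
class SignupAttempt:
    ip_address: str
    accounts_created_from_ip_last_hour: int  # hypothetical velocity signal
    solved_challenge: bool                   # e.g. passed a CAPTCHA-style check
    profile_fields_completed: int            # sparse profiles can indicate automation

def looks_automated(attempt: SignupAttempt) -> bool:
    """Flag signups whose signals resemble bot-driven account creation."""
    if attempt.accounts_created_from_ip_last_hour > 20:  # mass creation from one IP
        return True
    if not attempt.solved_challenge and attempt.profile_fields_completed == 0:
        return True
    return False
```

A real system would combine many more signals in a learned model; this sketch only shows the shape of the decision, not its substance.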
We also use a mixture of behavioural nudges and proactive warnings via Messenger to let users know when they are messaging an account demonstrating behaviour similar to behaviour we have previously seen from scammers. These accounts have not crossed the thresholds at which we would suspend or disable an account but are suspicious enough to warn users about.
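The preceding paragraphs describe a tiered response: automatic removal when the system is near certain, human review when it is less certain, and a warning when an account is suspicious but not violating. A minimal sketch of that routing logic follows; the thresholds and action names are hypothetical assumptions, not Meta's.

```python
# Illustrative routing of a scam-confidence score to the tiered
# responses described above. Thresholds are hypothetical assumptions.

REMOVE_THRESHOLD = 0.95   # "near certain": remove automatically
REVIEW_THRESHOLD = 0.70   # less certain: queue for human moderators
WARN_THRESHOLD = 0.40     # suspicious but not violating: warn the user

def route(scam_confidence: float) -> str:
    """Map a model's scam-confidence score to an enforcement action."""
    if scam_confidence >= REMOVE_THRESHOLD:
        return "remove"            # automatic, immediate takedown
    if scam_confidence >= REVIEW_THRESHOLD:
        return "human_review"      # prioritised for moderation teams
    if scam_confidence >= WARN_THRESHOLD:
        return "warn_counterpart"  # e.g. a Messenger warning to the recipient
    return "allow"
```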
For fraudulent advertising on our platforms, we also focus on behaviours rather than content, given the ever-changing nature of these scams. These efforts are geared toward building more proactive tools to automatically take down this content before it goes live, using a combination of AI and human review.
Our systems incorporate signals such as user feedback, fake or compromised account signals, and ad content signals and tactics, all of which feed into our proactive detection technology. We have also invested in ensuring our specialised reviewers can understand and identify this content, which, by its very nature, is hard to spot. Relative to other harms on Facebook, the scams space is more complex and difficult for reviewers to classify accurately, so we have sought to build a more holistic understanding of the abuse over time.
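As a purely illustrative sketch of how heterogeneous signals like those above might be combined into a single score, consider a weighted sum. The signal names and weights are hypothetical assumptions; a production system would use a learned model rather than fixed weights.

```python
# Illustrative only: combining heterogeneous signals into one scam score.
# Signal names and weights are hypothetical assumptions, not Meta's.

def scam_score(user_report_rate: float,
               fake_account_signal: float,
               ad_content_signal: float) -> float:
    """Weighted combination of per-item signals, each normalised to [0, 1]."""
    weights = {"reports": 0.4, "account": 0.35, "content": 0.25}
    score = (weights["reports"] * user_report_rate
             + weights["account"] * fake_account_signal
             + weights["content"] * ad_content_signal)
    return min(max(score, 0.0), 1.0)
```

A score produced this way could then drive tiered enforcement like the routing sketch earlier in this statement.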
While our aim is to catch content proactively, where users do come across such content, we want to make the process of reporting it to us and getting it taken down as easy as possible. Our in-app reporting function is available via the three dots that appear on every piece of posted content. Users can report organic content they consider harmful in some way, or advertising content they no longer want to see or consider irrelevant or inappropriate. These reports are an integral part of training our systems to better spot fraudulent activity.
We also have the ability to onboard regulators to our consumer policy channel, CPC. The CPC enables us to work with consumer protection bodies, government departments, regulators and law enforcement around the world to better detect and remove content that violates our policies or local law. Through it, we take action on content reported to us by agencies that have the appropriate authority to make determinations about the commercial content or activity they are reporting. We have several of these relationships in Ireland, covering a wide range of regulatory issues and harms.
Where we see a trend towards a particular type of activity that is not captured by our policies, we review those policies with the input of experts to ensure they remain fit for purpose as the landscape evolves.
Our priority is always to act against a bad actor as quickly as possible for any violation, but we are operating in a particularly adversarial space with bad actors who use increasingly sophisticated means to avoid detection. This is a complex issue that requires a joined-up multi-stakeholder approach.
I hope this provides members with an overview of how seriously Meta takes the issue of APP fraud and fraud more generally, and the various methods we employ to combat it. We look forward to the committee's questions on this important subject.