Oireachtas Joint and Select Committees

Wednesday, 9 October 2019

Joint Oireachtas Committee on Justice, Defence and Equality

Online Harassment and Harmful Communications: Discussion (Resumed)

Alice-Mary Higgins (Independent)

The Quinn case and the Ryan family case have been delved into very thoroughly. The issue of targeted harassment is one I hope to return to in a moment, specifically in conversation with the representatives from Twitter. It is notable that in its testimony today, Facebook has told us it is not confident there is any lawful weight to its guidelines. Twitter, with regard to the Ryan case specifically, said that if there had been any legal action, it would have co-operated in the case. Even in the representatives' own words, they are strongly making the case not only for legislation on hate speech but also for legislation on online regulation. Representatives from both Facebook and Twitter pointed out that they would have moved further if required to by law. This underscores the need for legislation in this area and the fact that self-regulation clearly has its limits. Community standards have little concrete weight when it comes to persons' lives and persons who may be the subject of online abuse, regardless of whether they choose to be part of the online community. That is important because the duty of protection is not simply to users but to all citizens. That is a wider concern. The duty of protection also extends to those who may not be citizens but who are resident in our country.

Let me point to an issue raised by Deputy Brophy, on which follow-up information and granular detail may be required. Deputy Brophy asked about the resources and number of staff allocated for taking down offensive or harmful content. I would also like a breakdown of the resources devoted to brand protection. All the organisations represented today have a large number of staff working on brand protection, for example, to ensure their advertisers' branding is not being misused. I am aware there are those whose work is dedicated specifically to targeting and watching out for issues that may be of concern to advertisers. I do not know the extent to which such brand protection removals are subject to notification requirements. Are there staff permanently allocated to deal with these issues? In providing us with the information on staffing and resources allocated for taking down content that violates standards, the representatives might give a specific breakdown of the proportion of resources allocated to deal with harmful content, and the proportion triggered by notifications, compared with the resources allocated to deal with issues such as brand protection.
