Oireachtas Joint and Select Committees
Thursday, 7 November 2019
Joint Oireachtas Committee on Communications, Climate Action and Environment
Session 4: International Collaboration
This is the fourth session of the International Grand Committee on Disinformation and 'Fake News'. I welcome Mr. Rogier Huizenga, human rights programme manager, Inter-Parliamentary Union; Mr. Adrian Lovett, chief executive officer and director of policy, Web Foundation; Ms Áine Kerr, co-founder and chief operations officer, Kinzen Limited; Mr. Frane Maroevic, director of the content and jurisdiction programme, Internet and Jurisdiction Policy Network; and Professor Lorna Woods, professor of Internet law, University of Essex.
I advise witnesses that by virtue of section 17(2)(l) of the Defamation Act 2009, they are protected by absolute privilege in respect of their evidence to the committee. If they are directed by the committee to cease giving evidence on a particular matter and they continue to do so, they are entitled thereafter only to a qualified privilege in respect of that evidence. They are directed that only evidence connected with the subject matter of these proceedings is to be given and they are asked to respect the parliamentary practice to the effect that, where possible, they should not criticise nor make charges against any person, persons, or entity by name or in such a way as to make him, her or it identifiable. Any submissions or opening statements made to the committee will be published on the committee website after the meeting.
Members are reminded of the long-standing parliamentary practice to the effect that they should not comment on, criticise or make charges against a person outside the House or an official by name or in such a way as to make him or her identifiable.
I now invite Mr. Huizenga to make his opening statement.
Mr. Rogier Huizenga:
I thank the committee for inviting the Inter-Parliamentary Union, IPU, to this meeting and for giving us an opportunity to follow the debate as it is happening. I would like to use my time to share with the committee some of the challenges that we see from the perspective of our organisation for parliaments being mobilised around the topic that is under discussion today and some of the opportunities.
For those who are unfamiliar with the Inter-Parliamentary Union, it is the world organisation of national parliaments. There are 179 parliaments in the world that are members of the organisation. With the exception of one, all of the parliaments that are represented here are members of the organisation. As an international organisation we are grappling with this question. In terms of challenges, we see that parliamentarians are not well equipped to deal with it effectively. I am speaking purely from a global perspective. They lack an understanding of the technical issues that are at stake.
They are also finding it very difficult to situate the debate between what needs to be done at the national level and what is happening at the international level. In addition, as was said earlier, there is a lack of clarity for them as to the governance structure which is dealing with this question at international level. There is also a challenge for them in getting access to the right information. What I really appreciate is seeing a variety of Members of Parliament represented here in the room. That said, they are obviously from advanced countries so they have some similarities when it comes to their own national context. Maybe also, because they are from advanced countries, they can more easily access representatives of tech companies because I would doubt if high-level representatives of the major tech companies would come before national parliaments far away from the West.
There is also the reality that we see in our discussions at the IPU about differences - legal differences and philosophical differences - around some of these major questions between countries but also between continents. Even today, obviously, we have two sides of the pond being represented. There are fundamental differences in how freedom of expression is seen and what its limits are. We all know that hate speech is not necessarily criminalised in the United States. In addition, there are issues with regard to the financing of political campaigns. In the United States, campaign spending is largely privately financed whereas, in Europe, the tradition is more one of using public funds to finance political campaigns.
We come from different realities, and this is just the western world. What we are also facing in our discussions within the organisation is that there is a real variety of perspectives, and bringing everyone together is very difficult. Where do we find common ground? In trying to adopt a shared vision of both the problems and the solutions.
We have seen that there is a growing sense, which was stated in the course of this morning, that the business model is at the real heart of the problem and that is something that needs to be tackled. Within the IPU, too, there is discussion around that. I really liked the statement that was made this morning around human rights and personal data. Of course, if we could reset the clock and start again, things could possibly look different now. Is that realistic? Short of a radical reform, there is discussion within the organisation about pushing for increased transparency in the work of the tech companies, particularly when it comes to algorithmic amplification and political ads. There is also a lot of debate on making a distinction between illegal and harmful content, and on seeing the issue as something slightly larger than misinformation - as something that concerns society as a whole. In that sense, we are faced with what has been termed "junk news" rather than fake news, which requires us all to come together to promote better civic education, to help promote civil discourse and to help people at large to be better able to recognise fake news.
Mr. Adrian Lovett:
I thank the Chairman and her committee for the opportunity to present the Worldwide Web Foundation's views on this topic. The foundation is a non-profit organisation founded by the inventor of the world wide web, Sir Tim Berners-Lee, to promote and defend the web as a basic right and public good for everyone. We work to achieve this mission by securing policy change based on evidence and robust research. Crucially, to a point that the previous speaker, Mr. Huizenga, has just made, we work equally and are equally active in developing countries as we are in the countries represented by the Parliaments here.
I guess the committee has heard a lot of important things today, many of which I would agree with, but I would like to make one further, very specific, proposal for the members of the committee, which is as follows. We believe that a crucial step for lawmakers like yourselves is to require companies to publish regular human rights impact assessments and transparency reports. That means companies will be expected to tell us how they have weighed the impact of their policies and products on individual human rights, and on our societies. These reports should be grounded in international human rights frameworks and focus on disinformation and misinformation, hate speech, electoral interference and political ads. These kinds of assessments have become increasingly prevalent in other sectors such as extractives, manufacturing and agriculture but less so in technology. Some tech companies have started to publish transparency reports, but we think they can go further. The more we learn about how companies make these decisions, the more informed and empowered governments will be to regulate effectively in this area.
Before I go back to a little more detail on that, I will step back for a moment. When Sir Tim Berners-Lee invented the world wide web in 1989 - 30 years ago - he changed the world. He expanded our access to knowledge and freedom of expression more than arguably any other development in modern times. In recent years, and as today's conversation has shown, we have seen the web misused to spread lies and hatred, and sow division within our communities. It is a complex dynamic and tackling it, we think, will require an equally complex set of policies and, crucially, long-term thinking.
As public representatives, you are faced with a difficult balancing act: weighing the need to respond swiftly to the harms caused to your constituents by disinformation against the responsibility to uphold people's human rights and freedom of expression and avoid triggering unintended harms. There are numerous challenges in the context of international co-operation on disinformation, and a number of them have been mentioned already. Like the web itself, the platforms where much of the disinformation is spread operate globally, but national laws vary and sometimes diverge. Platforms struggle to fact-check content consistently at a global scale. While platforms are making decisions that impact billions of lives globally, we have not had real insight into how those decisions are made.
Despite all of this, our sense - and certainly Sir Tim Berners-Lee's sense - is that there is hope. Sir Tim Berners-Lee would maintain, in spite of it all, a determined optimism that there are ways we can come together now and in the future to address these challenges. Last year, Sir Tim announced the creation of the contract for the web. It is a social contract that would bring together governments, companies and civil society to agree on a set of ground rules to guide digital policy agendas. More than 350 organisations have been part of the process so far. The initiative began with a set of nine principles small enough to fit on a postcard. In the next few weeks we will announce the full contract for the web with 76 clauses that emanate from the nine principles. It is intended that this contract, which we will launch at the Internet Governance Forum in Berlin, can play its part as a broad plan to protect and secure the web as a public good.
In that context I shall return to that one specific proposal, which, in our view, sits very much at the heart of the contract for the web. I refer to the increased commitment to transparency around risk assessments by companies. Transparency is not, and should not be, the end of the road, but it is an essential first step that supports evidence-based regulation and enforcement. So we call on companies to publish regular transparency reports that tell us how they have weighed the impact of their policies and products on human rights when it comes to misinformation and disinformation. We also call on national parliaments around the world to take this first step by requiring platforms to do so. I am happy to discuss this matter more as we get into questions.
Ms Áine Kerr:
I thank the Chairman and members for the opportunity to speak here today.
We are in the midst of a wicked problem and are running out of time. A wicked problem is, by its very definition, something that is difficult or almost impossible to solve because of incomplete, contradictory and ever-changing requirements. It cannot be solved through one solution alone. Our wicked problem is an information disorder and that disorder is a symptom of a much wider societal problem - a collapse of trust in institutions, including media.
I have worked as a traditional journalist with Ireland's main newspapers, with Storyful, the world's first social media news agency, and with Facebook, the world's largest social media platform. I launched a new start-up called Kinzen, the purpose of which is to reconnect people with quality news and information. Throughout 16 years in the industry, I have remained very resolute about the importance of journalism to better help people to understand the world around them but also ever concerned about the phenomenon of more and faster content everywhere. I have seen a new information ecosystem emerge in which everyone became a publisher, the means of consumption changed and the modes of distributing news became fragmented.
There were positives, of course. We have largely become more connected, engaged and educated because of the Internet. However, amid that period of massive disruption and digital transformation wherein online experiences were built for speed, clicks and scale, the connection between people and journalism was lost.
We are at another crossroads now and there is an opportunity for correction. While the task list is long and complicated, there are three core areas in which to make a start: to ensure that we connect people with quality news and information; to build collaborative projects across multiple sectors so there are shared learnings; and to find thoughtful and pragmatic ways to hold technology companies accountable.
What does that mean in reality? I believe that today's recommendation systems are a root cause of information disorder. We need radical transparency in the programming of algorithms. A new generation is seeking alternatives to the toxic noise, hidden surveillance and endless distraction of their social feeds. They demand an experience of information that rewards their best intentions, fits their daily routine and protects their privacy. That means we, as an industry, need to build radically different news experiences. In Kinzen, for example, we are building a citizen algorithm, in which data science and machine learning are guided by human curation and the explicit feedback of every individual user.
We also require collaboration at an unprecedented scale across multiple industries. In journalism, we need to find more opportunities for newsrooms to collaborate rather than compete. That means funding projects like the CrossCheck initiatives undertaken by the news organisation, First Draft, in places like Brazil and France, where journalists from competing organisations worked to find, debunk and report on different rumours during election cycles. In digital literacy, we need long-term, joined-up education initiatives, rather than one-off campaigns, when it comes to giving people - young and old - the skills and tools for consuming information online. That means taking successful models like the Finnish Newspapers Association's work in schools over the past 50 years, or the News Literacy Project curriculum in the United States, and using schools and libraries as the key gateways to build global media and information literacy, MIL, playbooks.
In research, we need to find more opportunities for academia to study anonymised data from technology companies so they can better understand what is working and not working. That means scaling efforts like Social Science One - the non-profit commission launched in 2018 - to establish concrete partnerships between academics and data-rich institutions. It now has 32 million individual links extracted from Facebook upon which to conduct research.
To be truly collaborative, we need to bring together civil society, technology companies, publishers, academics and governments so we can answer the question: what can we do together to tackle this wicked problem, while protecting freedom of speech?
Regulating the Internet is complex. The risks are immense and committees such as this one must ensure there is careful deliberation of well-researched evidence so that practical enforceable standards and laws can emerge.
A new report emanating from France, outlining a detailed strategy for increasing oversight of social platforms while allowing for an independent regulator to ensure compliance with standards, deserves consideration. Ideas are also emerging from the Transatlantic High-Level Working Group on Content Moderation and Freedom of Expression, which propose enabling platforms to set standards while enabling governments to hold those platforms accountable to those standards via Internet courts.
Wicked problems are difficult but not impossible to diminish. It will now take collaboration, transparency and innovation on an unprecedented global scale for us to realise if the moment for correction is upon us.
Mr. Frane Maroevic:
It is my honour to contribute to the committee's deliberations on advancing international collaboration on online regulation. Clearly there is no need to stress that this is a transnational issue: that the committee has come together to discuss it speaks for itself. The initiation of the International Grand Committee is also a sign that the existing institutions and processes are not adequate to deal with these issues.
Most online interactions and data flows today involve multiple jurisdictions based on the locations of users, servers, Internet platforms or technical operators. Current frameworks for interstate legal co-operation struggle to handle this new digital reality. In many cases they hinder or even prevent co-operation. In some cases, they empower those who want to do harm or commit crimes.
How do we address issues such as the interoperability between the different norms; the interplay and the hierarchy between companies’ terms of service, national legislation, international treaties and commitments? Who sets the standards? What are geographically proportionate and relevant responses to these issues? How do we ensure the rule of law and transparency of all these processes?
How do we effectively co-operate to ensure that those who commit crimes and inflame hatred or violence are prosecuted? What is an appropriate punishment and what is the recourse for the victims? We need institutions for all these things because the most common forms of punishment and redress seem to be take-down of problematic social media posts or accounts.
In order to come up with workable answers, we need new international tools and institutions for Internet governance. This is one of the greatest challenges of the 21st century that no one can solve unilaterally. In the absence of policy standards and appropriate frameworks, we face increasing tensions that trigger unco-ordinated short-term solutions. National laws are enacted to try to deal with transnational problems, resulting in a legal arms race that risks unintended and harmful consequences, including jurisdictional conflicts and unwanted fragmentation of the Internet.
The organisation I work for, the Internet & Jurisdiction Policy Network, is about to publish the first ever report on the status of global Internet governance which shows that globally there are more than 300 such laws or Acts. As the majority of them are not co-ordinated we risk fragmenting the Internet.
We need solutions that will bring values and rules-based international order to the Internet, while at the same time ensuring that our democratic institutions and all our fundamental human rights are fully respected. This is a colossal task and shows why Internet governance must be a multi-stakeholder process. In reality it is not because in most cases when we discuss inter-governance it ends up being an intergovernmental process. In a true multi-stakeholder process all three branches of the state need to work with the fourth estate, the media, along with civil society groups, academia and the companies. This is what the Internet & Jurisdiction Policy Network does as a multi-stakeholder Internet governance process. We bring together approximately 300 key stakeholders from governments, Internet companies, technical operators, civil society groups, academia and international organisations from more than 50 countries to work together to develop policy standards and operational solutions.
In recent months our contact groups, working in three separate jurisdiction programmes on data, content and domains, produced a series of proposals for norms, procedures and mechanisms that we call operational approaches, and I would be happy to share them with the committee. The next focus for us will be on standards for recourse mechanisms, looking at issues of normative interoperability and transparency.
Regarding the committee's work I would like to highlight transparency, a matter mentioned by a number of people today, as one of the most useful tools in tackling disinformation. I am referring to establishing standards and mechanisms to deal with the abuse of platforms and technical infrastructure for political, financial or other gains.
Members of the committee know the challenges of regulating speech. The most problematic content does not fall neatly into what could be restricted under international human rights standards on freedom of expression. These issues are constantly evolving and changing, and artificial intelligence only contributes to the speed and complexity of the problems to be solved.
I call on the members of committee, as representatives of the people, to support a multi-stakeholder, transnational Internet governance process.
Professor Lorna Woods:
I am grateful for the invitation to give evidence to the committee. A way forward is to try to identify a model which is common - perhaps not a model law, but an underlying model that can then be deployed around the world in different jurisdictions.
The difficulty that is often raised with regard to content regulation is the subjectivity of content and the fact that standards differ from state to state. To address this problem, I would suggest a regulatory system that looks at the underlying systems, one that moves the focus from the content to the mechanisms that encourage content, facilitate its sharing and distribution, and select the content that comes to people's attention.
It is my contention that the platforms have developed with a disregard for their impact on the sorts of content that are prioritised and widely shared. This goes back to the discussion about the business model, which is about encouraging user engagement with the aim of getting more data, which is then used as a commodity. The principle put forward by the Carnegie United Kingdom Trust is that there should be a statutory duty of care as regards the systems. It starts with an assessment of risk; identifying the consequences, be they intended or unintended; and, crucially, taking steps to mitigate them. Rather than leaving it at transparency, it involves asking why something has been deployed and how it can be made better.
When Google first started, it said that it viewed data as an exhaust product of the search engine business. I now wonder whether we have reached the point where people's content is the exhaust product of the data collection business and whether we should be moving to try to get a cleaner engine rather than a dirty diesel one. That is what the Carnegie project is about. I am happy to share further detail. We have done quite a bit of work on this but I do not want to weigh discussions down. I would suggest that using a model that focuses on systems is beneficial in an international context because one does not have the difficulty of agreeing about difficult content. What one is looking at is, on the whole, further up the process, and it is possibly easier to find common ground there, so one can then be in a position of deploying a common model that can be responded to within an individual state context. I would emphasise that this can never be a silver bullet. There will probably always be a role for moderation, for take-down and, in some instances, possible law enforcement engagement, but this sits on top of the common model. It is when one starts looking at particular items of content that the difficulty and differences arise. That is the proposal. It involves a statutory duty of care aimed at the systems - the infrastructure itself.
Mr. Milton Dick:
I will return to where Professor Woods finished about the model. My next question is directed to Mr. Lovett and possibly the IPU. In terms of best practice, which countries either through the IPU or otherwise are leading by example? This is where the rubber hits the road in terms of where we move forward. I have not heard a lot of examples today of any nation states or countries leading by example with new laws, technology or common models to deal with the "wicked problem". Are we the people who are setting the scene or are there other people, who I would hope are outside this room, who are dealing with this?
Mr. Rogier Huizenga:
The reality of which we are aware is that there are quite a number of initiatives at national level, particularly in Europe, that are also shared within the IPU. There is no full analysis of how these different national initiatives regarding hate speech compare to each other. There are a number of what seem to be good examples but with some potentially difficult implications.
Mr. Rogier Huizenga:
France has recently developed a law to tackle hate speech and harmful content. The law adopted in Germany is another example, so there are a number of examples. The challenge is that these examples differ from each other, so how do we compare them? This is also where we see real merit in having these kinds of discussions among parliamentarians - possibly first, as a start, as is happening here, among more like-minded countries, but also extending to a slightly wider audience. I certainly think that the countries from which today's participants come are already tackling this. I am not saying they are producing the best results but they are at the forefront of dealing with this.
Mr. Milton Dick:
I am glad Mr. Huizenga said that. Australia has been leading the way in some respects in terms of our eSafety Commissioner and the laws our nation has passed. I will use Australia as an example: to our north there are countries like Indonesia, and further north there are anti-democratic countries like China and Vietnam. Regarding a common regulatory system, the involvement of law enforcement was mentioned. What would suit our nation in the Indo-Pacific region - the freedom of speech argument - would not apply inside China and Vietnam. I would not want a system that is one-size-fits-all, so I disagree slightly with the witness statement about a common ground versus what is already happening around the world. I would be interested in what the panel thinks about that - either a two-tier or a two-step process in terms of democratic reforms versus the opposite.
Professor Lorna Woods:
I would like to clarify what I said. The model that I said might provide a common ground is based on process. It is not based on specific items of content. In a way, it is a two-step process. There is common ground about standards for recommender algorithms or whether metrics such as "likes" and those sort of devices have unforeseen consequences. I think agreement can be reached there. It is when one goes into the question of whether a particular item of content is problematic that one finds a lot of difference. We must accept that there is a limit to how far we can go with that.
Mr. Adrian Lovett:
I will take both those questions. Regarding the first question, I wish we could put an excellent national model before the committee. I think what has happened in Australia, and Ireland is looking at something similar in terms of the role of the Data Protection Commissioner, is important. Our view is that what has been taken forward in Germany is not a model to follow, so there is a lot of work to be done there. The committee is at the front line of trying to figure this out - what it looks like at a national level. I agree with Professor Woods that, for us, a focus on the process is as important as, or possibly more important than, a focus on the end content. Our hope is that the idea of transparency reports and human rights assessments will expose not just the numbers of take-downs and so on, which we are starting to see more of, but the basis for decisions. One could look at it and say "well, we wouldn't have done it that way" or "we can see why they came to that conclusion". It involves having that qualitative understanding of how decisions are being made.
Mr. Dick mentioned Indonesia. I was there earlier this year and spoke to Ministers and so on. To underline the challenge mentioned by Mr. Dick, it was clear that when we and some government officials were talking about a healthy Internet, we meant very different things.
Ms Áine Kerr:
Parts of this are done country by country, i.e., speech, while parts can be done at an international level around political advertising. Regarding the point that this is about process and framework, I mentioned that high-level working group earlier. There is a formula - a framework - there to which it has given a lot of thought. It involves whether one can work with the public, civil society and a coalition of organisations to build standards that are about behaviour and, in parallel with that, the technology platforms would build covenants - that these are their warranties - so that they are held responsible as well to ensure there is platform responsibility matched to those standards.
Layered on top of that could be independent regulators tracking and monitoring the implementation of the standards and covenants and, ultimately, imposing huge fines where there has been a stepping out of line. In parallel, there may be a need for Internet courts on very specific issues to deal with matters on a case-by-case basis. Obviously, Facebook would be considered; there is a role for the oversight boards. We are starting to see, in some of the high-level working groups and in the conversation today, that there is a framework and process that could be built country by country and company by company.
Mr. Frane Maroevic:
I will be brief because many of the points have been covered. I agree very much with what was said. I do not believe we have come across perfect legislation that we could just forward or copy. Most legislation that deals with content and its regulation has potentially serious implications for freedom of expression and freedom of speech. That is why we are all talking about trying to find processes to deal with these issues, examining issues of impact, virality, reach and proportionality. Regulating speech in the analogue world was also about the impact and the possibility of speech having an effect. That is important. It is not just about the speech but also about the context. That is why the call is to try to find models and mechanisms, in terms of systems, that would be agreeable to a larger group of nations. One will never find agreement on specific regulation of content between each country. We need a certain level of flexibility.
Mr. Tom Packalén:
What is Mr. Lovett's view on the approach to fake news and fact-checking? There are great dangers. What really is fake news? There is nothing there but there are also different kinds of realities, and people see the same issues in different ways. In the same country, different counties might have different views and see certain problems in different ways. What should the approach be so that we will not be in some kind of Orwellian world when it comes to free speech?
Mr. Adrian Lovett:
If one starts from a perspective of human rights, it does not answer the problem, at least not easily, but it does frame the challenge and the question. If we are clear that we are equally concerned about a range of human rights, including the right to freedom of expression but also the right not to be harmed in various ways, that has to be the starting point. The way we look at it, disinformation and misinformation have three parts. First, there is deliberate, malicious intent, whether state-sponsored or otherwise - intent to bring about an outcome systematically and in a very determined way. That was talked about today. Second, there is system design that creates perverse incentives and rewards. The teenagers in Macedonia churning out factually incorrect stories about Hillary Clinton's health, for example, did not in most cases have a political axe to grind. They figured out how to make a few euro. That was a result of the incentives created by the system. Then there are the unintended negative consequences of design, which also come with what we might argue are positives associated with the more open discourse we are now able to have online. It breaks down into those three areas, at least, and a different approach is required for each.
Professor Lorna Woods:
Let me follow up on that. Maybe solutions are not about looking at content but about companies looking at the factors they put into their recommender algorithms, such as whether they value reliable content rather than content about particular stories. There was some research done on the recent Indian election and the role of WhatsApp. Some of it suggested that the intimacy of a WhatsApp group has an impact on people's tendency to believe. A question arises, therefore, about the design of groups. Are they really small groups of friends or are they just conglomerations of relative strangers? One might want to consider how easy it is to embed content from sources because one then gets contextualisation, which makes it more difficult for people to assess. There are issues with unintended consequences and design in respect of how we share information.
Ms Áine Kerr:
There are a couple of levels. On the technology level, we require fundamental rewiring of the recommendation algorithms I talked about in terms of transparency. This is so people will understand why they are seeing certain recommendations. They should have the ability to reset their preferences. We need to accelerate our AI and machine-learning efforts ultimately to be preventive and remove bad actors who are trying to spew confusion. There is a human layer on top, whether it is through the fact-checkers and efforts like First Draft, which I mentioned, whereby one is ultimately accelerating the efforts of the fact-checkers but ensuring the fact check travels back to those who might have viewed something false. A problem with many of the fact-checking efforts mentioned earlier is that they often occur long after the viral peak window. We are not capturing the people in the first 12 hours when there is peak virality. Therefore, we need to build systems and technology that identify those people, go back to them and ask them whether they have seen a certain other perspective from a trustworthy source. On that, there is a considerable amount of work happening across the industry on trust metrics and nutrition labels. We keep talking about transparency today. What does it look like if one can click on a little icon across multiple platforms or websites that indicates who owns information and how long the owner has been up and running? That way, people can engage in critical thinking to determine whether they want to trust the information.
Ms Nino Goguadze:
My question is for all who wish to answer. The issue has already been covered in one of the statements but I would like to go a little deeper in discussing it. We know that there is no unified definition of hate speech or harmful content. There is no common understanding as to what constitutes disinformation or fake news. There is no one certain definition for these terms. The legal frameworks of the countries show there is no common approach among countries towards regulations. Today, when we are discussing the possibility of parliamentary co-operation and how we can work together and address the challenges together, do we need to agree on basic definitions of terms or basic issues of regulation? Would this have a role in addressing the issue?
Mr. Frane Maroevic:
To build on what I already said, I fully agree that we might not find agreement on a panel, never mind in this room, as to what constitutes hate speech or harmful speech. It would be a very difficult but useful discussion. As we are all saying, however, we should start off by trying to examine regulatory aspects and what we can agree on. Sometimes the problem with speech is not just what is said but the context, virality, reach and impact. These can be examined in much greater detail and measured.
I call for us to try to find a way to get together and reach agreement on the terms of the regulation on as broad a scale as possible. That would allow us to compare different countries and it would be much easier to implement. It would give the experience of the Internet a much more uniform approach; therefore, it would be much more positive and useful in this context. Nevertheless, the discussion on what constitutes harmful speech needs to continue, but it would be difficult to implement.
Professor Lorna Woods:
We are probably saying something similar. I emphasise the point that if one is putting the obligation on companies to look at how their systems are facilitating the spread, one does not rely so much on definitions of how one categorises certain kinds of speech. The techniques that allow content to spread cut across a range of harmful content, which softens the need for precise definitions. Finding an agreement on hate speech would be difficult, but perhaps one could start by asking whether there were repeat instances and other such technical questions, on which there might be agreement.
Mr. Rogier Huizenga:
On the question of hate speech, we should not forget that there is a series of international standards and instruments that apply to pretty much everyone in the world, including Articles 19 and 20 of the International Covenant on Civil and Political Rights. Most countries have ratified that treaty. The UN human rights committee has jurisprudence. There is a UN special rapporteur on freedom of expression who has just produced an interesting report on precisely this topic and human rights. A lot of material is available. There is the Rabat plan of action on hate speech, which was developed eight years ago. It is supposed to serve a global audience. The international standards are in place for some specific forms of expression. They need to be adapted to the national context.
We need to distinguish between the worldwide web, which, in my mind, has a basic structure that is holding up, and the platforms which are using it, in respect of which there is a governance gap. Tim Berners-Lee and the Web Foundation have a very good insight into how that is happening. Our key question is where the governance is and how we secure collaboration on it. Mr. Lovett has said he is travelling to Berlin at the end of November for the Internet Governance Forum. To use an expression he might know, he and whose army? How can national jurisdictions and the European Union, as several others have said, make sure it is collaborative across jurisdictions? Where are the teeth? How will that collaboration be brought about?
Mr. Adrian Lovett:
About 0.5% of the world might say the Internet Governance Forum is a good place, while the rest of us wonder what it is. Tim Berners-Lee's idea on which we have been working for the last year or so has three phases. The first was to land the set of top line principles that I mentioned. They consist of nine principles, with three for governments, three for companies and three for citizens. On the question of us and whose army, approximately 350 organisations signed up in support of the principles and committed to engaging with the process of turning them into concrete actions. The 350 organisations include most or all of the major platforms, a number of governments, including the German Government and the French Government, and some terrific civil society organisations in the global south and elsewhere. That was only the first step and if we had stopped there, it would have been useless.
The second step is the one at which we will land in two or three weeks' time in Berlin. Approximately 80 experts from different sectors and backgrounds have deliberated over the last three months or so to announce 76 clauses related to the nine principles, covering the whole range of what we saw as challenges in ensuring the web worked for everyone.
The third stage is to ensure all those who have signed up to these commitments - we hope as many as possible of the 350 organisations involved in the first stage will do so - will be held accountable. The Web Foundation will take it on itself to ensure there will be a robust monitoring mechanism that will track progress against the commitments made. Part of the conditions of endorsing the contract for the web in Berlin will be that companies, governments and civil society groups publish an annual accountability report of their own addressing the elements of the contract.
The Deputy made a good point about the web versus the wider Internet, including the big platforms. We absolutely recognise this. Tim Berners-Lee has always had a strong sense that while Facebook and Google are not the web, there is an opportunity which perhaps is unrealised for those players and others to help to strengthen and build the web and, within the wider Internet, to reinforce the values of the web, including transparency and openness, to have a permissionless space and enable people to have the opportunity to be creators, as well as consumers. While we are, first and foremost, concerned about the worldwide web that Tim Berners-Lee invented, we are also determined to see how much of the spirit, value and ethics behind it can be experienced.
I have a question for Ms Kerr. It is a sad day for Irish media, with job losses having been announced at the national public broadcaster. I thought her contribution was very important as we have been thinking about the business model. Is it data harvesting to fund advertising or is it a paid subscription model? That is not, however, the whole picture because, as Ms Kerr says, everyone has to be able to connect to quality journalism. As some people will not be able to pay subscriptions, we need public service, high quality journalism that everyone can access. What is ironic is that the current business model for social media makes it impossible to fund public service broadcasting in this country. Does Ms Kerr have any specific proposal for the funding of public service broadcasting?
Ms Áine Kerr:
The advertising model is broken for the journalism industry. We need to think about devising radically diverse models for the funding of journalism. The Deputy is right that, when it comes to public service journalism, we need to help to fund it. In this country that includes rethinking the television licence fee. As we all know, the mode by which we consume media has changed. We are the Netflix and Spotify generation. We need to think about taxes and subsidies that address this device by device mode of consumption.
On the model itself, we have to think about a system wherein people will understand why journalism is important and make a donation or contribution or become a member. That means that we will have to do a better job as an industry in amplifying the reasons. With that comes a new form of journalism that will be people-powered. Its message is that less is more and that we are going to stop annoying people with irritating advertisements. Instead of having a lot of content that keeps people scrolling endlessly on platforms, applications and websites, the new form of journalism should give people a productive experience. That means giving them the right content and right format in the right amount of time that feels purposeful and productive. Right now all of the evidence shows that people feel incredibly overwhelmed by the amount of content online. They hide their digital footprints and turn off. There is, however, an opportunity. There are means for governments to ensure taxes incentivise and subsidise media. Perhaps there could be a tax incentive to pay a subscription or become a member of a media organisation whereby they could claim back the fee when making their tax return each year.
It also comes down to funding it from the ground up to ensure that local journalism can survive. If we look at the news deserts in the US, that is a phenomenon that, increasingly, we will see across Europe. We must ask how we can fund local and public service journalism and ensure that there are institutions that can take the funding and distribute it to the media at large.
Yesterday, we heard a great deal about the duty of care process approach. I understand that informed the UK Government's White Paper on harmful content. I expect Professor Woods worked with the UK Government on that. Was there any indication from the outgoing UK Government that were it to return to office, it would continue with the approach set out in the White Paper? Is it agreed? Would it be implemented if the outgoing Government is returned?
Professor Lorna Woods:
While I do not know the thinking of the current occupants of No. 10 Downing Street, the Queen's Speech certainly contained a commitment to publish a draft Bill in 2020, which indicates a determination to take it further. While I do not know whether the committee is aware of it, the Digital Economy Act contained a provision to have age verification for online pornography. That took a while to be sorted out and was on the brink of coming in when the Government said it would not do so. The Minister with responsibility for digital was questioned on it and said the issues around online pornography would be swept up through the progression of the online harms White Paper. I do not know what will happen but this is an indication that there is an intention to continue.
Mr. Amrin Amin:
I seek thoughts in this regard because an earlier panel discussed a voluntary code of practice. Why are sanctions necessary? Does Professor Woods think that voluntary codes work? Will she clarify whether this would be a regulation that is enforced and monitored locally and that the policies and oversight boards or internal checks would be subject to this process or regulation?
Professor Lorna Woods:
Yes, we envisaged a formal legal obligation on companies falling within the remit and that it would be enforced by an independent regulator. We suggested Ofcom, the communications regulator in the UK, which has a track record for evidence-based and proportionate regulation. We envisaged that there should be sanctions for a company that did not comply but that there would be a sliding scale in order that the starting point for the regulator would be to try to engage and inform before going to enforcement. We thought that, given the size of some of the companies at issue, we should look at GDPR-sized fines, and we discussed whether, as in the financial sector, there might be personal liability for directors to get them to pay attention. The underlying model was the Health and Safety at Work etc. Act. The Health and Safety Executive, which is the enforcer for that legislation, has the power to prosecute. Then the question was whether one goes further with recalcitrant operators. The problem with that is that we are still in a free speech context and that would be a very heavy sanction and would be difficult to make proportionate.
Mr. Amrin Amin:
I understand. I wish to touch on a point on free speech. Earlier, Facebook stated that one of its core values is to protect freedom of expression. As a professor of law, does Professor Woods believe that deliberately deceptive political advertisements paid for by politicians and the use of bots by foreign entities to influence an outcome, as we saw in the Irish abortion referendum, as well as in the US with alleged Russian interference, are covered under the concept of freedom of expression? Is it not the case that these things harm our democracy?
I may be able to help with Deputy Eamon Ryan's question earlier. Our analysis of the Queen's Speech and the notes to it indicates that the online harms Bill will reach Parliament but that it has been watered down quite significantly. It is not the same Bill to which most of us made submissions.
Mr. Huizenga referred to digital understanding; we could also call it digital literacy or many other names. In the UK, despite every attempt, we have been unable to find best practice. Many countries say they are doing various things but there does not seem to be a European agreement about best practice. Finland is at the top of the list, the UK is in eighth place and Estonia is fourth. It has become quite clear to me that the Department for Education, and this is probably true in other countries, is not equipped to deliver what we are talking about. In the UK, the Department for Education sees this as part of computing lessons and rolls it under such lessons, when it clearly is not. What does Mr. Huizenga advise, and where would we go to look for best practice?
Mr. Rogier Huizenga:
I have no concrete advice on that. It is a very valid question and is something we see in other countries where, because of the new ways of communicating, it falls between the cracks. We have heard this before. However, I cannot give precise advice on how some countries have been able to deal with it effectively. Maybe it is worth starting with the Finnish experience, as Lord Puttnam observed. We are aware that the digital literacy initiatives which have been developed in Finland have been appreciated everywhere. Maybe the committee's Finnish colleagues can elaborate.
Ms Áine Kerr:
I am a member of the Council of Europe's committee of experts on quality journalism in the digital age and, as part of its work, there has been a massive audit of news literacy projects globally. Its report should be published in the coming weeks. It will include everything from NewsWise in London, which is part of The Guardian, to the Finnish newspaper model, and what is working and not working. Out of that we should see some best practices emerge.
Something that might be helpful to the committee is the evidence given to our select committee by Baroness Onora O'Neill, who is a philosopher.
Her laying out of the ethics of this issue and the ethics of free speech applies directly to Professor Woods's point. In Europe, we do not have first amendment rights. Ours is not binary. It is quite subtle. I sometimes feel that in Europe we get dragged into the same basket as first amendment issues and in a sense we have more space. That is an observation.
My last question is to Ms Kerr. Deputy Ryan, to an extent, asked it already about public service broadcasting, which is under pressure everywhere. What I find extraordinary, and we got this more or less right in the UK when I was growing up in the 1950s, is that an understanding of the subtleties and importance of public service broadcasting was entirely understood. It got badly beaten during the Thatcher years, and reconstructing the understanding that existed generally - it was not an imposed but a societal understanding - is extremely difficult. It possibly means levies and it definitely means a transfer of revenues from the major players to public service broadcasting of some sort, be it online or offline. Has Ms Kerr an observation on that?
Ms Áine Kerr:
There is definitely an argument for a fairer playing space, particularly when it comes to advertising. As we know, the monopoly platforms are acquiring much of the new digital revenue. Is there an effort we can make to create a fairer competitive space? It comes back to the point I tried to make earlier, that for public broadcasters, or journalism in general, we have to get away from the attention economy to the intention economy. What I mean by that is that we are being tracked across the Internet. What does it look like for a public service broadcaster and others to give power back to people, to give them control of their experiences and, by doing that, for them to feel that experience with journalism? That is where the imperative lies for us in general, but it will take a level playing field and taxes and subsidies to ensure that from the ground up people understand that role.
That is why Lord Puttnam's media literacy question is important. We have to ingrain this from the get-go for people to understand why the institution of media is important. Critical thinking skills are important because, unfortunately, misinformation and disinformation are a part of our lives. They are part of the human condition. There will always be bad actors. They will get more sophisticated about how they go about their business. Much of the analysis from the United States suggests we will want to inoculate children from the age of 12 against misinformation so that they will seek out the public broadcaster and engage with it as a source of truth, fairness and accuracy.
Mr. David Cicilline:
I also concur with Lord Puttnam's comments about media literacy and best practice both in terms of our ability to make that available and real investment, at least in the United States, in civics education in order that people understand the implications of some of this. That is a critical step. We look forward to that report for some guidance.
Mr. Lovett, in his written testimony, referred to prohibiting the use of micro targeting for political ads. Will he explain why he believes that is a good recommendation?
Mr. Adrian Lovett:
This has been covered already to some extent. We as the Web Foundation joined with the Mozilla Foundation and others in recent days to call for a temporary ban or moratorium on political advertising in the UK around the current general election. We did that not mainly because of the threat of clearly false information being put into the public domain. A very bad example of that in recent days was a doctored television interview with one of the major UK party spokespeople, but everyone could see that and it was quickly exposed. We think it is right in the UK context to pause because of what is not seen and because of the degree of micro targeting. There has always been targeting in advertising and we have had that for centuries, but what is different now is the scale and speed at which it works, its opacity and the great difficulty we have in seeing its effect. We should recognise it is not a simple question because, while there may well be a place in some form for political advertising, properly regulated and fully transparent, there is not enough confidence at the moment that the unseen is not doing great harm. While that is the case, we think it is important to step back from political advertising. In the longer term, however, a way must be found such that micro targeting is not applied in the case of political advertising, because it is too important a context for that.
Mr. David Cicilline:
During the committee's last meeting, Peter Kent noted that dominant platforms may simply pull out of jurisdictions in the absence of harmonised regulations across our democracies. We are seeing this in real time with Google's threats to withdraw its Google news service in response to the European Union's copyright directive and its more recent decision to stop displaying news previews of articles in France to avoid complying with this law. What recommendations, if any, does Mr. Huizenga have for harmonising regulations internationally to avoid this kind of dynamic?
Mr. Rogier Huizenga:
That is a very complex question because it cuts across many of the issues we are discussing. It is about seeing if there is a real willingness for countries to come together around a common norm. I could see that happening with regard to some issues. I mentioned hate speech, and there are quite a number of international standards in jurisprudence, but there are still many differences between countries, including at the European level, and different approaches are being followed. I do not see an immediate light at the end of the tunnel, with countries coming together with a unified approach and vision as to what harmful content, and even hate speech, should exactly look like. This is all the more reason for countries to make that extra effort to see if it is not possible after all to do just that. Ultimately, if tech companies can move their business around and simply disappear but continue with the same business model wherever they can, that is not a long-term solution.
Mr. David Cicilline:
Mr. Lovett described a contract for the web, which sounds exciting, consisting of nine principles and involving 87 directives, but I take it that essentially requires voluntary compliance by the large technology platforms. Has he thought about what would be the most effective way to operationalise that because, as we have seen, there is very little likelihood these technology platforms will regulate themselves or correct their own misbehaviour? They seem to be driven by a model that is about growth and revenue. Therefore, does it involve public shaming, or is it intended that jurisdictions would take these principles and directives and incorporate them in legislation, or what is his view on these issues?
Mr. Adrian Lovett:
We try to recognise that, in such a broad challenge, different approaches are needed. For example, we think the GDPR standard will be pretty much reflected in the relevant parts of the contract for the web clauses because it exists, is a good start and can be built on. The contract for the web does not try to offer an overall regulatory approach. It tries to acknowledge where there are relevant initiatives and mechanisms already in place. Beyond that, we believe in the power of transparency and the public spotlight, not as an answer to everything but as an important element. The reason the human rights assessments I spoke about earlier would be important is that they would put information in the public domain. Companies would be required to do that, we would then see it and those companies could be held to account. In some aspects of the contract for the web's scope, there would be regulatory and legislative means to do so. In other parts, it would be within the power and ability of the public to scrutinise and see what companies are doing and to judge them accordingly, including where they take their business.
Professor Lorna Woods:
It is borrowed from our health and safety at work legislation and is an obligation on the platform to take steps to protect against reasonably foreseeable harms. It pushes the matter back to the company and essentially says to do a proper risk assessment, identify what is happening with one's platform and the way one has designed one's services. Where there are downsides, companies should try to mitigate them. This is in the context of the service provided. One would look at how large the platform was and also at the sort of services offered. Live video streaming, I would say, is quite a high-risk thing to do, as opposed to just text. If one is aiming a service at children, one will want more safeguards than if one is aiming at adults. This allows a certain flexibility to the company to do things in the way it wants to, but within a framework of having to do it and keeping an eye on it, not just doing a tick-box exercise but continuing to monitor. I am afraid that the time for self-regulation is over.
Can I ask Mr. Lovett - although I am interested in all views on this - about another old chestnut, which is the question of net neutrality and how we enforce it? How do we aspire to it? This is dual-layered because, even if we manage our service providers and legislate that service providers cannot boost or block particular types or sources of content, social media platforms are themselves adopting a net bias, in that they are promoting sponsored content and applying their own algorithms as to what bubbles to the top and what does not. Going back to Sir Tim Berners-Lee, Mr. Lovett eloquently described his original vision earlier on. The power of the web was that sort of randomness, where it was built on meritocracy and random discovery. That, in turn, drove content producers, and indigenous content and citizen journalism could come about. Ms Kerr has talked about that as well today. That was the dream, the goal and the utopia of the web at the outset. That has been choked back in two ways. The net neutrality debate rages on. The platform is layered on top of that and is equally filtering material out. For some people, Facebook, or Google to an extent, is the Internet. In this context, and with a view to the platforms and the kind of regulation we are discussing today, how do we best manage that? I do not know if we can turn back the clock, and maybe we do not want to turn it back completely, but there is a goal from the earlier days of the web that is worth preserving, which is the meritocracy model, citizen journalism and openness to anyone having their 15 minutes of fame on whatever channel it may be.
Mr. Adrian Lovett:
We are still very focused on that fight for net neutrality, most particularly in the United States, where there was a great step forward, and now we are fighting to re-secure that several years on; we have not given up on that fight.
On the broader question, it is true that we cannot turn back the clock. Tim Berners-Lee and others of those Internet pioneers would not want us to do that. Channelling some of the inspiration and vision of those early days and applying it to the very different times we find ourselves in now, it may be time to think about a new connecting imperative. Perhaps this might help. I was talking a little while ago to the chief executive officer of one of the big airlines. I was asking him what his focus was. He was quite new in the job, six months in, and said that, obviously, his was an airline company so the first priority is safety. He went on to talk about the things I thought he was going to talk about around the brand, staff relations and so on. He got me thinking about the aviation industry being a place where businesses make healthy profits and provide a service to customers but also where public safety is the first priority and where companies are required to co-operate with each other, as well as with regulators. There is zero tolerance of mistakes and a significant degree of transparency. If we agree that the Internet can not only improve and enhance lives, upon which we all agree, but can also profoundly damage and - it is not too much of an overstatement - can cost lives, then surely we could agree that, for companies on the web, the universally accepted imperative must be public safety. That might be a new driver for how we organise all of this thinking for the next 30 years, quite different from the thinking that has prevailed for the past 30.