Three questions to Jacob Berntsson, Research Manager at Tech Against Terrorism

 

*This interview first appeared in our March newsletter in French.*

 

1. Can you tell us about the origins and activities of Tech Against Terrorism?

We started off as a project launched by UN CTED – the United Nations Counter-Terrorism Committee Executive Directorate, which sits under the UN Security Council – specifically aimed at identifying ways to respond to terrorist use of the internet. Back in 2016, when they launched this project, this was a rampant problem, even on big platforms like Twitter and Facebook – platforms that have now improved their responses. UN CTED carried out three global workshops with various stakeholders: tech companies, civil society groups, governments and intergovernmental agencies, academics, and so on. Four main conclusions came out of those workshops. First, there is still a limited evidence base on terrorist use of the internet. (That has improved a bit since then, but there is still more to do.) Second, this is a problem that has its roots in the offline sphere but manifests itself online, so it is important that we take a holistic approach. Third, it is vital that we incorporate human rights into our response: if we respond in a way that restricts freedom of expression or human rights, we will only exacerbate the problem. And fourth, smaller companies have limited ability to respond to this problem because they lack capacity, expertise, and knowledge.

In 2017 we became our own organisation. We work under the guidance of UN CTED, but we are an independent organisation. We are a public-private partnership, with funding from governments and the private sector. Our main activity is helping platforms improve their responses to terrorist and violent extremist use of their services. We have three pillars of work. The first is outreach. We do a lot of risk mapping using open-source intelligence analysis techniques, identifying which platforms are being used by terrorists. After that we do our best to build a constructive working relationship with those platforms and to learn how we can support them specifically. Secondly, we do a lot of work to facilitate knowledge sharing within the tech industry. Some of the larger tech companies – like Facebook, Twitter, Google and Microsoft, which formed the GIFCT (the Global Internet Forum to Counter Terrorism) – have gotten quite far in their thinking on this and have invested considerable resources, so there are many best practices to share between such companies and companies run by only a few people who are just discovering this problem on their sites. Our third pillar is operational capacity building and tech support: our developers and data scientists help companies identify the content or keep malicious actors out, either by building products for them or by helping them strengthen their own technology.

We see it as our job to take knowledge from civil society and from academia and to distill it in a way that is actionable and operationalizable for tech companies, so that they can implement effective responses on their platforms. As a public-private partnership, we work with everyone. We like to see ourselves as coordinating these efforts between various sectors. We are in a good position to do that given that we are a neutral and independent organisation. 

 

2. How does the absence of a universally agreed upon definition of terrorism influence the way you understand your mandate and carry out your work?

As you can imagine, this is not only a matter of academic debate; it has a practical impact on how tech companies are able to operate and what kind of action they are able to take. One reason why larger platforms have been so effective in countering ISIS content is that there is a very clear international mandate, which has allowed the development of tech-based solutions such as image hashing and hash sharing. This is an area with a lot of grey and nuance, and tech solutions require a more black-and-white approach. I do not mean that this issue is black and white and that tech solutions should function in this manner, but rather that machines are bad at nuance, so when we train automated solutions we need to give them very specific metrics that determine what is and is not terrorism. In this way, consensus around terrorist groups helps because it creates clarity – if there is consensus that Group X is a terrorist group, you can include that group’s logo in your training data, for example. Definitions matter and clarity is important.
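To make the hash-sharing idea concrete, here is a minimal sketch of how matching perceptual hashes against a shared hash list can work in principle. It uses the open-source Python libraries Pillow and imagehash; the hash value, file name, and function are made up for illustration, and this is not the tooling that the GIFCT or Tech Against Terrorism actually use.

```python
# Minimal sketch of perceptual-hash matching against a shared hash list.
# Requires the third-party libraries Pillow and imagehash; the hash below
# is purely illustrative, not a real entry from any shared database.
from PIL import Image
import imagehash

# A hypothetical shared list of hashes of known terrorist imagery.
KNOWN_HASHES = {
    imagehash.hex_to_hash("ffd8e0c0b0a09080"),
}

def matches_known_content(image_path: str, max_distance: int = 5) -> bool:
    """Return True if the image's perceptual hash is close to a known hash."""
    candidate = imagehash.phash(Image.open(image_path))
    # Perceptual hashes tolerate small edits (resizing, re-encoding, cropping),
    # so we compare by Hamming distance rather than exact equality.
    return any(candidate - known <= max_distance for known in KNOWN_HASHES)

if __name__ == "__main__":
    print(matches_known_content("uploaded_image.jpg"))
```

The design choice the interview points at is visible here: the matcher only works because someone has already decided, unambiguously, which hashes belong on the list – which is exactly why consensus on what counts as a terrorist group matters for automated solutions.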

We always start with the UN sanctions list (the United Nations Security Council Consolidated List). For us that is a good starting point because it covers many of the Islamist terrorist groups. We see it as the best forum for achieving some sort of international consensus, given that it is a process that goes through the UN Security Council, the best body to do that in our view. That doesn’t mean it is perfect. For example, there are no far-right groups on that list at all. A lot of our research is on the far right; we have been working with platforms in the past few months on identifying far-right symbolism. But again, there is no consensus. The “Far Right” is a very wide and to some extent unhelpful term, which can mean everything from Marine Le Pen to people who commit actual violence. For us, we tend to draw the distinction at “Violent Extremism”.

It is important to have specific criteria around human rights, to ensure that companies using our hash-sharing database, for example, do not engage in any activities that restrict human rights and freedom of speech. The way we measure that is by inviting companies to sign the Tech Against Terrorism Pledge, a pledge based on internationally recognised documents on freedom of expression, the ICCPR, and the UN Guiding Principles on Business and Human Rights. The GIFCT has a membership program, based on our membership program from 2017, designed to increase company participation and to allow smaller actors to take part. The program obliges members to meet specific criteria: for example, to have an explicit prohibition of terrorism in their terms of service, to publish transparency reports on a regular basis, and to make a public commitment to human rights. That is where we come in as a neutral, independent organisation, working with the platforms to ensure that they meet these criteria. This is based on the premise that an actor like Facebook, for example, does not want to be in a position where it tells other platforms what to do in terms of their policies.

 

3. Can you elaborate on Tech Against Terrorism’s attention to smaller platforms? Is this the aim of your Knowledge Sharing Platform and your Terrorist Content Analysis Platform (TCAP)? And might regulation like the European Terrorist Content Regulation affect your work with these smaller actors?

The Knowledge Sharing Platform was launched in 2017 at the UN in New York. It is essentially a secure database of tools and resources for tech companies, behind a password login. Companies can find advice in three main areas: terms of service, content moderation, and transparency reporting. We also offer a dashboard and tools to help companies understand what other actors in the industry are doing. We provide indexed lists of terminology often used by terrorist groups, as well as of symbols, such as flags and logos, of both extreme far-right groups and Islamist terrorist groups. Outside of that specific platform we also work with companies on an ad hoc basis, where suitable and possible, on specific research projects, risk assessments, and threat intelligence.

The Terrorist Content Analysis Platform (TCAP) is something we’re developing now with support from Public Safety Canada (the Canadian interior ministry). It will be the first free, centralised database of verified terrorist content. We will be working with an academic advisory board and a number of academic partners. We will also build our own collection system to collect verified terrorist content to host on the database. The point of the project is to support tech companies in identifying terrorist content, but also to allow them to increase their understanding of what this content looks like. Companies will be able to examine this content in a secure environment. We will also be able to send them alerts as soon as we find something on their platform. That will be important in a situation like Christchurch or another “online crisis” situation when content goes viral. It will also be open to academics for research purposes, to help improve quantitative analysis of terrorist use of the internet. There is a lot of good qualitative and discourse analysis, but we want to raise the game in terms of quantitative analysis. In the first instance we will only be collecting al-Qaeda and ISIS content, specifically because of that rule-of-law aspect: given the lack of definition and consensus around terms like “Far Right” and “extreme far right”, we want to start in a more clearly defined, cautious manner. But the aim is very much to include more ideologies. Further down the line, once we have collected this data, we hope that the platform will be able to function as a centralised hub for training data for algorithmic and automated solutions.
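To illustrate the alerting idea described above, here is a minimal, purely hypothetical sketch in Python. TCAP’s actual architecture is not described in this interview, so the record structure, field names, and the notification step below are assumptions made only to show the general flow: verified content is logged and the affected platform’s point of contact is notified.

```python
# Purely illustrative sketch of a content-alert flow like the one described
# for TCAP: when verified terrorist content is found on a platform, an alert
# record is created and a notification is prepared for the platform's contact.
# All names and fields are assumptions, not TCAP's real design.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ContentAlert:
    platform: str        # platform where the content was found
    url: str             # location of the verified content
    group: str           # designated group the content is attributed to
    verified_by: str     # analyst or process that verified the content
    found_at: datetime   # time of detection (UTC)

def format_notification(alert: ContentAlert, contact_email: str) -> str:
    """Build a plain-text notification for the platform's point of contact."""
    return (
        f"To: {contact_email}\n"
        f"Subject: Verified terrorist content detected on {alert.platform}\n\n"
        f"URL: {alert.url}\n"
        f"Attributed group: {alert.group}\n"
        f"Verified by: {alert.verified_by}\n"
        f"Detected at: {alert.found_at.isoformat()}\n"
    )

if __name__ == "__main__":
    alert = ContentAlert(
        platform="example-platform.org",
        url="https://example-platform.org/post/123",
        group="ISIS",
        verified_by="OSINT analyst review",
        found_at=datetime.now(timezone.utc),
    )
    print(format_notification(alert, "trust-and-safety@example-platform.org"))
```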

Regarding regulation, we often find that there is a lack of appreciation of the challenges that smaller platforms face. For example, some of the things suggested in this proposal [the European Terrorist Content Regulation], like the one-hour removal time frame and the 24/7 point of contact, won’t be feasible for a smaller platform. Some of the platforms being exploited by groups like ISIS on a day-to-day basis are literally run by one person. What happens when they go to bed? There are some questions about practicality that we think will need to be considered, at least based on the last version that we saw. It is currently being discussed, so we hope that will be taken into account in the final version. Speaking generally about regulatory approaches, I do think we need to be careful with initiatives that make platforms overly liable or that necessitate implementing upload filters, for example, which can have a chilling effect on human rights and freedom of expression. We have to be careful that we, as democratic countries, don’t set a precedent that less democratic countries can then follow. The question is: what do terrorists want? We think that terrorists want to have political impact, and they do that by committing attacks and creating fear in society, so if we take steps that infringe upon our own values as part of our fight to counter terrorism, that might actually be counterproductive. It is important to look at this holistically and to look at the root causes of terrorism, and not only to punish companies that happen to be the arena where this problem manifests itself, when the problem has its roots in society.
