Human Rights and Technologies: Nation States and Private Corporations Use Counter-Terrorism and Security Rhetoric to Fuel Surveillance Policies

ROME – In a report presented during the fifty-second session of the Human Rights Council, Fionnuala Ní Aoláin, the United Nations Special Rapporteur on the promotion and protection of human rights and fundamental freedoms while countering terrorism, denounced the increasingly common use of invasive technologies in the fight against terrorism. Drones, biometric data, artificial intelligence and spyware, Ní Aoláin warns, risk becoming a serious problem for human rights and governance in the absence of specific rules defining their limits. She concludes that the unregulated transfer of high-risk technologies to states engaged in systematic human rights violations should likewise be halted.

Technology and rights. The impact of these technologies on rights is worrying, especially with regard to the right to privacy and to freedom of expression, association and political participation. The key point is that these new forms of technology are being deployed in the fight against terrorism and violent extremism. But in the absence of a universally shared definition of these phenomena, there is a risk that states will use drones and data to pursue interests that have nothing to do with protecting the rule of law.

Biometric data. These are data concerning a person's biological or behavioral characteristics: from fingerprints to DNA, from eye color to gait. This makes them an accurate tool for identifying individuals, and for the same reason they are sensitive data that must be protected. Today biometric data are used by law enforcement and administrative agencies in many contexts, including criminal justice, border control and counter-terrorism. Biometric data and other intrusive technologies have, for example, been used by China in the Xinjiang Uyghur Autonomous Region in an attempt to enforce its anti-terrorism law.

Drones. This is another field where the tendency to misuse new technologies is growing. Drone strikes have been used both against targets in war zones and against specific individuals in so-called “targeted killings”, carried out outside the geographical limits of an ongoing conflict and often precisely in the context of operations described as “counter-terrorism”. Although many attempts have been made over the last decade to urge states to adopt precise rules on the lawful use of armed drones, little progress has in fact been achieved.

Nano-drones. Many states have begun to expand the use of armed drones within their own borders, and new technologies have emerged, including nano-drones, drones equipped with non-lethal weapons, and lethal non-incendiary drones, raising new human rights concerns.

Artificial intelligence. AI is transforming multiple sectors across the social, economic, political and military spheres. It has the properties of a general-purpose technology, which means it opens up a broad range of applications. States increasingly use it in law enforcement, homeland security, criminal justice, counter-terrorism and border management systems.

Algorithms. At the center of the development and use of AI, however, are algorithms, which draw on large amounts of data: historical records, health data, social media activity and travel information. This type of technology can be used to build profiles of people, identify places as likely sites of criminal activity, and flag individuals as suspected terrorists. But its results are probabilistic, not certain, and for this reason, the report notes, it should be used with caution and under specific rules, because its use has direct consequences for individuals, especially in their relationship with the state.
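To make the point about probabilistic outputs concrete, here is a minimal sketch, not drawn from the report itself, of how even a statistically accurate risk-scoring system can flag mostly innocent people when the behaviour being searched for is rare. All figures are hypothetical assumptions chosen only to illustrate the base-rate effect.

```python
# Illustrative sketch (not from the report): why probabilistic risk-scoring
# demands caution. All numbers below are hypothetical assumptions.

def positive_predictive_value(base_rate, sensitivity, false_positive_rate):
    """Probability that a flagged person is a true positive (Bayes' rule)."""
    true_positives = base_rate * sensitivity
    false_positives = (1 - base_rate) * false_positive_rate
    return true_positives / (true_positives + false_positives)

# Hypothetical figures: 1 in 100,000 people is a genuine threat, the model
# detects 99% of them, and wrongly flags only 1% of everyone else.
ppv = positive_predictive_value(base_rate=1e-5,
                                sensitivity=0.99,
                                false_positive_rate=0.01)
print(f"Share of flagged people who are actual threats: {ppv:.4%}")
# Roughly 0.1% -- the overwhelming majority of flagged people are false positives.
```

Even under the optimistic accuracy assumed above, about 99.9% of the people flagged would be false positives, which is why probabilistic tools of this kind call for the caution and specific rules the report demands.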
