Ethiopia, Myanmar: this is how social network algorithms can spread hatred and fuel wars

ROME (DIRE agency) – Meareg Amare taught chemical analysis at Bahir Dar University, in Ethiopia’s Amhara Region. He was shot dead in front of his home on November 3, 2021, in the midst of a civil conflict that a peace agreement now promises to bring to an end. His body, according to one of his four sons, Abrham, was left on the ground for hours. Justice for the professor is now being sought through a legal action filed in Nairobi, the capital of Kenya: the accused is not a paramilitary group or members of the Amhara Special Forces but a global social media giant.

Injurious and incendiary algorithms. The thesis is that the murder of Amare, a native of Tigray, the epicenter of the Ethiopian conflict, was instigated through online posts and comments that Facebook was unable or unwilling to block. And there is more, explains Alia Al Ghussain, a researcher at Amnesty International specializing in artificial intelligence and human rights, speaking to Oltremare, a publication of the Italian agency for development cooperation (Aics): “The social networks’ engagement-based business model means that the algorithms are programmed to feed harmful and incendiary content capable of going viral”. The accusation is aimed at Meta, the American multinational that owns not only Facebook but also WhatsApp and Instagram. Should the Nairobi case go ahead and end in a conviction, the company would have to adopt emergency measures against hate speech, increase the number of staff responsible for online monitoring and moderation, and create an ad hoc fund of two billion dollars to compensate victims of violence instigated online.

A story still to be clarified. Whether and how harmful, threatening and defamatory social media posts put Amare at risk, contributing to or causing his murder, will have to be investigated and established. A deeper knot remains, however, one that has already emerged in other conflict-ridden regions of the global South and is now more topical than ever: the Kenyan company in charge of moderating social media content in Africa will stop working for Meta next month. A spokesman for the multinational, contacted by the Reuters news agency, assured that the “next transition phase” will have “no consequences on the ability to monitor messages”.

Given the precedents, however, fears remain. According to Oltremare, global geopolitical imbalances weigh on the issue. Bridget Andere, a Kenyan lawyer who works with Access Now, an NGO committed to digital rights, puts it this way: “Meta”, she says, “should increase the number of content moderators in Africa so that content can also be checked in local languages and dialects”. According to the expert, “more transparency on the algorithms that promote harmful content” is also needed.

It’s not just about Ethiopia. In 2021, refugees of Rohingya origin forced to flee Myanmar sued Meta for failing to monitor and block hate messages and insults directed against their community, a mostly Muslim minority. The lawsuit, filed in the United States, seeks damages of 150 billion dollars. As early as 2018, UN experts working on Myanmar had denounced Facebook’s failure to curb online violence. The social network’s managers initially admitted that their checks had been slow, while stressing that they had employed Burmese-speaking moderators and banned the profile of the Tatmadaw, the army answering to the generals behind the 2021 coup, long accused of raids and indiscriminate violence against minority communities. The American lawsuit cited a Reuters investigation which, in a single year, collected over a thousand examples of disparaging, offensive and discriminatory posts, comments or images on Facebook targeting people of Rohingya origin or Islamic faith.
