Artificial intelligence facial recognition surveillance and the breach of privacy rights: The ‘Clearview AI’ and ‘Rite Aid’ case studies

Author Ifeoma E. Nwafor

ISSN: 2521-2591
Affiliations: Senior Lecturer, Godfrey Okoye University (Nigeria); Visiting Scholar, Faculty of Law and Criminology, KU Leuven, Belgium; Member, United Nations Development Programme AI4Dev Reference Group; Research Member, Centre for Artificial Intelligence Digital Policy
Source: South African Intellectual Property Law Journal, 2023, pp. 88–92
https://doi.org/10.47348/SAIPL/v11/a5

Abstract

The increasing sophistication of artificial intelligence (AI) facial recognition models, combined with companies’ and governments’ easy access to photos posted online, has enabled the widespread misuse of facial surveillance systems. Governments, police forces and private organisations have a long history of using AI facial recognition technologies to gather data on citizens without respecting their data and privacy rights, with governments invoking national security and public safety to justify such data gathering. Marginalised groups and people of colour are disproportionately affected by this surveillance. Data protection and privacy rights activists have called on governments to regulate facial recognition systems. It is also essential to establish AI oversight agencies with the responsibility to monitor the use of AI models and to ban such use when it breaches citizens’ data and privacy rights, or any other human rights.

The European Union’s AI Act is the first comprehensive regulation on AI. It provides a risk management framework with different rules for different risk levels: unacceptable risk, high risk, and limited or low risk. In June 2023, the European Parliament voted in favour of a total ban on live facial recognition in public spaces. Although the final Act did not stipulate a full ban on live facial recognition surveillance, it provides that all high-risk AI systems must be assessed before being placed on the market and throughout their lifecycle.

African governments have caught the AI bug, but only a few African countries, including Mauritius, Egypt and Rwanda, have an existing AI strategy. It has been argued that international AI technologies and ethical deliberations are modelled without Africa in mind. Against this backdrop, Africans, as people of colour, are more likely to be subjected to AI ethical bias and to privacy and data protection concerns, risks and harms. It is essential that African countries develop AI policies. Africa should also take a strategic place in the ongoing debate on global AI regulation.