Face Recognition Expansion: Exploring Implications for Ethnic Minority Communities

Lola Yusuf
3 min read · Feb 17, 2023


Image by Scott Webb

The Metropolitan Police is expanding its facial recognition capabilities, and with this comes the risk of minority groups being unfairly targeted. There is a huge risk that this could lead to the violation of civil liberties, especially for marginalised communities. To figure out where we stand on this technology and how to navigate its use responsibly, we need to open up a dialogue that’s honest, transparent, and grounded in evidence.

A diverse, inclusive and intersectional approach to AI should prioritise public trust and acknowledge those who are most susceptible to algorithmic bias. Understanding how different social groups perceive emerging technologies, particularly those with reason to distrust our current systems of social control, is more crucial than ever.

My recent research project, which examines perceptions of algorithmic fairness and trust among various social groups and individuals with intersecting identities, serves as a starting point for this conversation.

What is Automated Facial Recognition?

Automated facial recognition is an artificially intelligent data-analysis technology which performs real-time biometric analysis of a person’s features to create a unique ‘map’ which can be cross-referenced with previously stored data to identify persons of interest. AI technologies are often considered the next major advancement in crime-fighting technology; however, the use of these systems is also highly controversial.
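To make the "map and cross-reference" idea concrete, here is a deliberately simplified sketch. Real systems use deep neural networks to turn a face image into a numeric feature vector; this example assumes that step has already happened and shows only the cross-referencing stage, matching a live "probe" vector against a hypothetical watchlist of stored vectors. The names, vectors, and threshold are all invented for illustration.

```python
# Illustrative sketch only: assumes faces have already been reduced to
# numeric feature vectors (the 'map'). Shows how a probe vector could be
# cross-referenced against stored data to flag a person of interest.
import math

def cosine_similarity(a, b):
    # Similarity of two feature vectors: 1.0 means identical direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def identify(probe, gallery, threshold=0.9):
    """Return the best-matching stored identity, or None when no stored
    vector is similar enough to count as a match."""
    best_name, best_score = None, threshold
    for name, stored in gallery.items():
        score = cosine_similarity(probe, stored)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Hypothetical watchlist of pre-computed feature vectors.
watchlist = {
    "person_a": [0.9, 0.1, 0.3],
    "person_b": [0.2, 0.8, 0.5],
}

print(identify([0.88, 0.12, 0.31], watchlist))  # close to person_a
print(identify([0.0, 0.0, 1.0], watchlist))     # no confident match: None
```

Note that the `threshold` choice is exactly where accuracy concerns bite: set it too low and the system produces false matches, and if the underlying feature vectors are less reliable for some demographic groups, those false matches fall disproportionately on them.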

Why the controversy?

Picture this: facial recognition cameras lurking in public spaces, silently collecting your biometric data. It’s a huge red flag for privacy violations and potential human rights breaches, not to mention the issue of inaccuracies and biases within the technology that could exacerbate existing societal disparities, particularly for certain groups.

These issues are indicative of what Ruha Benjamin coined “the New Jim Code”: a term describing how new technologies can reflect and reproduce existing and historic inequalities while being promoted and perceived as more objective and progressive.

Interestingly, major tech companies like IBM, Microsoft, and Amazon have opted out of supplying facial recognition technology to law enforcement agencies, citing ethical concerns. But while some step back, the Metropolitan Police is moving ahead with deployment, triggering calls for stricter regulations to govern its use.

Why Should We Pay Attention?

Policing and surveillance play pivotal roles in societal control and can reinforce existing power differentials and inequitable outcomes. Introducing AI into these domains risks perpetuating discriminatory surveillance practices. While discussions on AI in policing are prevalent among tech experts and policymakers, the voices of the public often get drowned out. It’s important to bring everyone to the table, from tech experts to community leaders; these voices are invaluable in ensuring that emerging technologies are wielded ethically as well as effectively.

Peeking into People’s Minds

My recent study took a deep dive into public attitudes on facial recognition, unearthing a wealth of insights and concerns across different communities. Through open-ended surveys and careful analysis, I was able to uncover five key themes: distrust of law enforcement, concerns about efficacy, fears of racial bias, socioeconomic disparities, and privacy/data protection.

It’s clear that not everyone sees AI decision-making through the same lenses, especially those who’ve been marginalised by human biases in the past. That’s why we need research that actively seeks out and listens to diverse voices, capturing the full spectrum of experiences with AI.


In a Nutshell

As defenders tout the potential benefits of AI policing, it is equally important to acknowledge the potential for exploitation and abuse. As a society, we need to adopt a more critical perspective towards idealistic views of AI. It is more important than ever to challenge tech solutionism and instead build a society that values not just efficiency, but justice and fairness for all.
