On 28 May 2024, the Assembly of European Regions (AER) and the Directorate General of Migration, Asylum, and Antiracism of Catalonia held a webinar titled “Addressing Racial Injustice, Discrimination, and Bias in AI within Public Administration.” Organised within the EU-Belong knowledge transfer clusters on Skills, this session centred on building skills and expanding education to equip public administration professionals with the knowledge needed to manage AI in ways that protect against racial bias and discrimination.
This webinar addressed the growing reliance of public administrations on AI for services ranging from recruitment to fraud prevention and public safety. While AI offers efficiencies, it also presents risks, especially for vulnerable populations who may face unintended biases in automated decision-making. Recent studies, including the Council of Europe’s 2023 study on AI impacts, underscore these concerns, warning of the potential for systemic discrimination and bias in AI-driven public services.
The Risks of AI in Public Services
AI applications in public administration are increasingly used to manage large datasets, analyse patterns, and support decision-making processes. However, as discussed in the webinar, these systems may inadvertently reinforce biases present in their training data, leading to higher error rates for minorities and underrepresented groups. For example, AI tools in border control, recruitment, or social services can yield skewed results if they operate on data that inadequately represents diverse populations, resulting in unfair treatment. The Council of Europe’s 2023 study points out that such AI risks are compounded by the dynamic nature of these systems, which evolve within a fast-changing legal and technological landscape. To mitigate these risks, experts advised incorporating preventive measures throughout the AI lifecycle, including bias assessments, third-party audits, and public supervision.
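To make the error-rate concern concrete, a bias assessment might include a check like the sketch below. This is a minimal illustration on synthetic data, not a tool presented at the webinar; the group labels, counts, and the two-times disparity threshold are all assumptions chosen for the example.

```python
# Minimal sketch of a per-group error-rate check, the kind of test a bias
# assessment might include. All data, group labels, and the disparity
# threshold are hypothetical and chosen purely for illustration.
from collections import defaultdict

def per_group_false_positive_rates(records):
    """Compute the false-positive rate of a binary decision for each group.

    Each record is a (group, actual, predicted) tuple, where actual and
    predicted are booleans and True means "flagged" (e.g. as a fraud risk).
    """
    negatives = defaultdict(int)        # cases that should NOT be flagged
    false_positives = defaultdict(int)  # such cases that were flagged anyway
    for group, actual, predicted in records:
        if not actual:
            negatives[group] += 1
            if predicted:
                false_positives[group] += 1
    return {g: false_positives[g] / n for g, n in negatives.items() if n}

# Synthetic data in which an underrepresented group is wrongly flagged
# three times as often as the majority group.
records = (
    [("majority", False, False)] * 90 + [("majority", False, True)] * 10
    + [("minority", False, False)] * 7 + [("minority", False, True)] * 3
)
rates = per_group_false_positive_rates(records)
for group, rate in rates.items():
    print(f"{group}: false-positive rate = {rate:.0%}")

# A simple disparity flag a third-party audit might raise (threshold arbitrary).
if max(rates.values()) > 2 * min(rates.values()):
    print("Warning: false-positive rates differ by more than 2x across groups.")
```

A real assessment would of course run on audited production data with legally grounded thresholds; the point of the sketch is that disparity only becomes visible once error rates are broken down by group.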
Understanding New Legal Frameworks for AI
This year marked significant developments in AI regulation with the European Parliament’s adoption of the EU Artificial Intelligence Act (“AI Act”) and the Council of Europe’s draft Framework Convention on AI, which introduces binding obligations for managing AI-related risks to human rights. Both instruments aim to clarify how AI may be used, establishing rules for high-risk AI applications and setting out requirements for transparency, accountability, and redress.
The webinar highlighted critical questions for policymakers: What should they prioritise in these new legal texts? How should they evaluate AI services from vendors to ensure compliance with anti-discrimination and ethical standards?
Expert Insights for Better Policymaking
To support policymakers, the webinar featured insights from AI ethics and human rights experts. Laura Sentis Margalef, EU-Belong Project Officer at the Department for Equality and Feminisms in Catalonia, opened the session, followed by Joan de Lara, Senior Officer from the same department, who spoke on the importance of fostering intercultural and inclusive policies. Sofia Trejo, an AI ethics researcher at the Barcelona Supercomputing Center, explored the social, political, and ethical dimensions of AI, particularly its susceptibility to biases when handling diverse datasets.
Menno Ettema, Head of Unit for Hate Speech, Hate Crime, and Artificial Intelligence at the Council of Europe, discussed how AI intersects with human rights concerns, especially discrimination. He stressed the need for regulatory frameworks that protect rights and ensure accountability in AI development. Sarah Chander, Senior Policy Advisor at European Digital Rights and co-founder of the Equinox Initiative for Racial Justice, shared strategies for combating racism in AI and digital technologies. Her presentation focused on policy and advocacy as essential tools for promoting AI systems that support equity and prevent discrimination. The webinar closed with a summary from Laura Sentis Margalef, who underscored the importance of continued collaboration and dialogue to ensure AI technologies are designed and deployed in ways that respect and protect all citizens.
Key Takeaways for Public Administrators
The discussion illuminated several actionable insights for public administrations. To mitigate AI biases, experts recommended preventive requirements such as human rights impact assessments and third-party certifications, combined with close, ongoing monitoring and supervision of deployed AI systems.
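As one illustration of what such ongoing monitoring could look like in code, the sketch below tracks selection rates per group and raises an alert when one group falls well below another. The class name, group labels, and the four-fifths-style 0.8 threshold are assumptions made for this example, not requirements drawn from the webinar or any regulation.

```python
# Illustrative sketch of ongoing supervision: track how often an automated
# system selects (approves or flags) each group, and alert on disparities.
# Names, data, and the 0.8 threshold are assumptions for this example.
from collections import defaultdict

class SelectionRateMonitor:
    """Running monitor of per-group selection rates for an automated decision."""

    def __init__(self, threshold=0.8):
        self.threshold = threshold      # inspired by the "four-fifths" rule of thumb
        self.seen = defaultdict(int)
        self.selected = defaultdict(int)

    def record(self, group, selected):
        """Log one decision: which group it concerned and whether it was selected."""
        self.seen[group] += 1
        if selected:
            self.selected[group] += 1

    def disparity_alert(self):
        """Return a warning string if any group's selection rate falls below
        `threshold` times the highest group's rate; otherwise return None."""
        rates = {g: self.selected[g] / n for g, n in self.seen.items() if n}
        if len(rates) < 2:
            return None
        top = max(rates.values())
        lagging = {g: r for g, r in rates.items() if r < self.threshold * top}
        return f"Disparity alert: {lagging} vs. highest rate {top:.0%}" if lagging else None

# Usage: feed every automated decision into the monitor and review alerts
# periodically, e.g. as part of public supervision or a third-party audit.
monitor = SelectionRateMonitor()
decisions = ([("group_a", True)] * 80 + [("group_a", False)] * 20
             + [("group_b", True)] * 50 + [("group_b", False)] * 50)
for group, selected in decisions:
    monitor.record(group, selected)
print(monitor.disparity_alert())
```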
The webinar also addressed the need for democratic participation, allowing affected communities to contribute to AI policy decisions, especially in areas such as law enforcement, welfare, and public administration. The EU Artificial Intelligence Act addresses these issues by establishing guidelines for redress, transparency, and accountability in high-risk AI applications.
As AI continues to permeate public administration, this webinar underscored the importance of awareness, proactive risk assessment, and robust frameworks for reducing bias. By taking an intercultural, inclusive approach, public institutions can develop AI policies that protect all citizens, creating a fairer, more just society.
More Information on AI in Public Administration
For readers interested in exploring the topics of AI bias and discrimination further, the following resources provide valuable insights and in-depth research:
- Gender Shades: Joy Buolamwini and Timnit Gebru’s research on demographic bias in commercial facial analysis systems.
- AI Bias and the Roma Community: Fundación Secretariado Gitano (FSG) has published a comprehensive report discussing AI bias in relation to the Roma community, available in English and in Spanish.
- EU Artificial Intelligence Act: Learn about the common rules and obligations for AI providers and deployers within the EU internal market.
- Machine Learning Harm Framework: Harini Suresh and John Guttag provide a framework for understanding sources of harm throughout the machine learning lifecycle.
- Amnesty International Case Study: Examine the impact of algorithmic discrimination in public services through the Dutch childcare benefits scandal.
- Recommendations by Sarah Chander (Equinox Initiative for Racial Justice):
  - Beyond Bias: Discussion on AI-driven systems and debiasing efforts.
  - Civil Society Recommendations on the AI Act: Read the political statement and the recent status update.
  - Discriminatory Surveillance and Migration Laws
- Algorace Website: For more information on algorithmic fairness and race, visit the Algorace website.
- Council of Europe Resources: See the Council of Europe’s 2023 study on AI impacts referenced above.
About EU-Belong
EU-Belong is a 3-year project co-funded by the Asylum, Migration and Integration Fund of the European Union. Coordinated by the Assembly of European Regions (AER) within the framework of its Intercultural Regions Network (IRN), it is implemented in partnership with ten regional authorities from seven European countries: Arad and Timiș in Romania; Catalonia and Navarra in Spain; Donegal in Ireland; Emilia-Romagna in Italy; Leipzig in Germany; Pomerania and Poznan in Poland; Salzburg in Austria; and two technical partners: ART-ER Attrattività Ricerca Territorio and Istituto Economico Cooperazione Internazionale (ICEI).
Stay up to date by following EU-Belong on X/Twitter @EU_Belong.
For questions, contact Emanuela Pisanó, EU-Belong Project Manager, at [email protected]