BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Assembly of European Regions - ECPv6.1.3//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:Assembly of European Regions
X-ORIGINAL-URL:https://aer.eu
X-WR-CALDESC:Events for Assembly of European Regions
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:Europe/Brussels
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20240331T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20241027T010000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=Europe/Brussels:20240528T130000
DTEND;TZID=Europe/Brussels:20240528T143000
DTSTAMP:20260416T000702Z
CREATED:20240502T224545Z
LAST-MODIFIED:20241007T082623Z
UID:47937-1716901200-1716906600@aer.eu
SUMMARY:Addressing Racial Injustice\, Discrimination\, and Bias in AI within Public Administration
DESCRIPTION:Join the conversation! On 28 May 2024\, the AER and the DG of Migration\, Asylum and Antiracism of the Government of Catalonia organised a webinar on “Addressing Racial Injustice\, Discrimination\, and Bias in AI within Public Administration” in the context of the EU-Belong knowledge transfer clusters.\n\n\n\n\n\nPublic administrations and their service providers are increasingly using AI applications to accelerate their operations\, analyse big data sets\, assess risks and make decisions. Uses may include\, for instance\, support for recruitment processes\, fraud control\, the assessment of benefit requests\, the security of public spaces and border control.\nRisks of AI usage in public services\nSeveral studies\, including the Council of Europe’s 2023 Study on the impact of artificial intelligence systems (…)\, have highlighted the risks of artificial intelligence (AI) with regard to increased discrimination against vulnerabilised people. Automated decision-making systems have in some cases led public administrations to implement wide-reaching systems of discrimination and unprecedented injustice.\nA changing legal landscape\nIn March 2024\, the European Parliament adopted the EU Artificial Intelligence Act (“AI Act”)\, establishing common rules and obligations for providers and deployers of AI-based systems in the EU internal market. At the same time\, the Council of Europe finalised the draft Framework Convention on Artificial Intelligence\, the first legally binding global instrument to address the risks posed by AI\, based on the Council of Europe’s standards on human rights\, democracy and the rule of law. 
\n\nWhat is new about these texts?\nWhat should policymakers be aware of when designing policies and practices?\nWhat should they be looking at specifically when buying services from consultancies or tech companies?\n\nA webinar for better policymaking\nIn a rapidly evolving landscape\, policymakers and regional stakeholders may find it difficult to gather useful\, up-to-date information while also identifying areas where they can act and create impact.\nThis webinar gathered experts who shared:\n\nhow they work to address racial injustice\, discrimination\, and bias in AI within public administration\nwhat the main changes ahead are and how to prepare\nwhat they see as the main elements for effective policymaking and civic participation on the topic of AI and discrimination\nwhat their advice is to politicians\, civil servants\, and different regional stakeholders\n\nDraft Agenda\n\n\n\n13:00-13:07\n \nWelcome\n \nLaura Sentis Margalef\, EU Belong Project Officer\, Department for Equality and Feminisms of Catalonia\n\n\n13:07-13:10\n \nThe Intercultural Regions Network\n \nJoan de Lara\, Senior Officer\, Department for Equality and Feminisms of Catalonia\n\n\n13:10-13:20\n \nIntroduction to AI\n \nSofia Trejo\, Researcher in AI Ethics and Specialist in the ethical\, social\, political\, and cultural dimensions of AI\, Barcelona Supercomputing Center\n\n\n13:20-13:25\n \n \n \nQ&A\n\n\n13:25-13:35\n \nAI\, discrimination and human rights protection\n \nMenno Ettema\, Head of Unit\, Hate Speech\, Hate Crime and Artificial Intelligence\, Directorate of Equal Rights and Dignity\, Council of Europe\n\n\n13:35-13:40\n \n \n \nQ&A\n\n\n13:40-13:50\n \nThe intersection between anti-racism and digital rights\n \nSarah Chander\, Senior Policy Advisor at European Digital Rights and co-founder of the Equinox Initiative for Racial Justice\n\n\n13:50-13:55\n \n \n \nQ&A\n\n\n13:55-14:05\n \nFighting racism in digital intelligence\n \nPaula Guerra Cáceres\, 
Antiracist activist and Head of Advocacy at Algorace\n\n\n14:05-14:10\n \n \n \nQ&A\n\n\n14:10-14:20\n \n \n \nRoundtable discussion\n\n\n14:20-14:25\n \n \n \nQ&A\n\n\n14:25-14:30\n \n\nConclusion & Announcements\n\n \nLaura Sentis Margalef\, EU Belong Project Officer\, Department for Equality and Feminisms of Catalonia\n\n\n\nMore Information on AI in Public Administration\nFor readers interested in exploring the topics of AI bias and discrimination further\, the following resources provide valuable insights and in-depth research:\n\nGender Shades: Research on face recognition software and its biases.\nAI Bias and the Roma Community: Fundación Secretariado Gitano (FSG) has published a comprehensive report discussing AI bias in relation to the Roma community. Read the report in English or in Spanish.\nEU Artificial Intelligence Act: Learn about the common rules and obligations for AI providers and deployers within the EU internal market.\nMachine Learning Harm Framework: Harini Suresh and John Guttag provide a framework for understanding sources of harm throughout the machine learning lifecycle.\nAmnesty International Case Study: Examine the impact of algorithmic discrimination in public services through the Dutch childcare benefits scandal.\nRecommendations by Sarah Chander (Equinox Initiative for Racial Justice):\n– Beyond Bias: Discussion on AI-driven systems and debiasing efforts.\n– Civil Society Recommendations on the AI Act: Read the political statement and the recent status update.\n– Discriminatory Surveillance and Migration Laws\nAlgorace Website: For more information on algorithmic fairness and race\, visit the Algorace website.\nCouncil of Europe Resources:\n– Framework Convention on Artificial Intelligence and Human Rights\, Democracy and the Rule of Law\n– Study on AI and Equality\n– Study on Discrimination and Algorithmic Decision-Making\n– Committee of Experts on AI\, Equality\, and Discrimination (GEC/ADI-AI)\n\n 
URL:https://aer.eu/events/addressing-racial-injustice-discrimination-and-bias-in-ai-within-public-administration/
CATEGORIES:AER Events
ATTACH;FMTTYPE=image/png:https://aer.eu/wp-content/uploads/2024/05/EU-Belong-Webinar-2-Visual-option-1-2.png
END:VEVENT
END:VCALENDAR