During AER’s event on artificial intelligence (AI), attendees broke into five roundtable discussion groups, each addressing an area of AI important to stakeholders. The group called “AI: Towards a Soulless World” discussed the ethical issues created by the rise of AI and what it means to be human in an era of highly intelligent robots. Debates were moderated by Diane Whitehouse, a Principal eHealth Policy Analyst at EHTEL. Participants discussed issues such as AI’s potential to increase inequality and the social consequences of artificial intelligence.
This group contained a wide range of perspectives, with representatives from industry, the political sphere, and the healthcare sector. Contributors Nilofar Niazi, the Founder and CEO of the TRAINM Neuro Rehabilitation Center, and Benoît Vidal, the Co-founder of Dataveyes, provided insight on how design can be the key to making artificial intelligence more human-friendly. TRAINM uses brain stimulation technology to help patients recover from strokes, and is actively using the data it has collected both to help identify at-risk patients in advance and in clinical trials.
Dataveyes seeks to facilitate seamless interactions between humans and data by teaching people to interact directly with data. Eva Hallström, a County Councillor from Värmland, brought knowledge of how ethical issues involving AI are being addressed at a political level.
Ms. Niazi stated that one of the main ethical issues TRAINM has been forced to consider is access to potentially transformative technologies. According to Ms. Niazi, governments have been reluctant to subsidise usage of the technology, meaning that only people with the resources to pay can get access to it. The dilemma Ms. Niazi raised is part of a larger discussion around AI’s potential to increase inequality if its benefits are only available to those who can afford them.
Another issue, raised by Mr. Vidal, is the disconnect between the technology created in labs and its real-world applications. While technology developers are skilled at creating programs, according to Mr. Vidal they too often remain solely focused on optimising a system without considering its potential social consequences. To illustrate his point, Mr. Vidal used the example of AI employed in the US criminal justice system: algorithms used to predict the likelihood that an offender would reoffend incorrectly predicted that African Americans were more likely to reoffend than their white counterparts.
Dag Rønning, AER Committee 3 President, raised concerns about the potential exploitation of AI by certain companies. Bringing up the example of a breeding company that uses technology to choose the sex of offspring, Mr. Rønning wondered about the dangers of humans using selective breeding.
To rectify the problems with AI’s applications, Mr. Vidal believes that society must strike a balance between humans and algorithms. For him, an accurate perception of reality involves more than the simple statistical analysis performed by machines. Instead of relying solely on the intelligence of machines, we must focus on the ability of machines to enhance our own intelligence. This means people must envision the human-directed ways in which AI can be used.
According to Mr. Hammarstedt, an analyst at Hammarstedt Advice, politicians need to increase their awareness of AI’s potential consequences, given their considerable influence in society. He believes citizens must be active in expressing their ethical concerns about AI to politicians, who must take those views into account in their decisions.
Mr. Mori, AER’s Secretary General, seconded Mr. Hammarstedt’s point, stating that gaps in knowledge between politicians and AI experts must be filled. Mr. Hammarstedt believes that updating knowledge about AI will be easier at the regional level than at the national level, because of regional politicians’ proximity to experts (e.g. at universities and in industry).
Finally, Ms. Niazi stressed the need to develop regulations around the usage of AI at all levels of government, to avoid fallout as AI comes to play a larger role in society.