Ethical issues around the future usage of artificial intelligence (AI) have helped turn it into a polarising issue in society. From techno-optimists to people derided as Luddites, many have an opinion on the ethics of AI. Later this month, AER’s event Artificial Intelligence: Are Regions Up to The Challenge? will bring together field experts, academics, artists, politicians, and stakeholders to discuss these ethical issues, giving regional decision makers the opportunity to prepare for the future together.
Difficult Ethical Questions
It is quickly becoming impossible to imagine a world where artificial intelligence does not play a prominent role. Whether it’s using Google Maps to navigate a new city or asking Siri to make restaurant reservations, AI impacts nearly everyone’s daily life. As AI continues to advance, man and machine will only become more integrated. While AI holds the potential to vastly improve our lives, complicated ethical questions cloud its future usage. Last year, high profile figures such as Elon Musk, Stephen Hawking, and Bill Gates penned an open letter to the International Joint Conference on Artificial Intelligence in Argentina, claiming that the progression of AI could be humankind’s most monumental achievement, but could also become its greatest mistake if not handled responsibly.
Towards an Unequal Society?
A defining feature of the 20th century was an unprecedented reduction in inequality between classes, races, and genders. Futurists worry that if not properly regulated, AI could reverse that trend and once again widen inequality. Firstly, there is the matter of economic inequality. As machines displace more workers, AI will simultaneously generate larger profit margins for the companies that develop and deploy it. With people no longer collecting weekly paychecks while those creating and using AI profit, many fear that wealth will become concentrated in fewer hands. A related concern is that current and future economic inequality may beget biological inequality. AI-assisted biotechnology will soon make upgrading our cognitive and physical abilities a distinct possibility. Biological engineering would likely only be available to those with the means to purchase it, possibly splitting humanity into biological castes. The 2017 Forum Européen de Bioéthique in the region of Grand Est offered several discussions around this theme. Of course, not all view inequality as an inevitable by-product of the advancement of AI. Ray Kurzweil, a Director of Engineering at Google, believes everyone will be able to benefit from AI, comparing its progression to the spread of cellphones, which are now owned by nearly five billion people.
Social Justice and AI
Another concern about the development of AI is that it adopts society’s dominant social norms, privileging certain groups over others. For example, a 2015 study found that Google’s advertising algorithm showed women ads for lower-paying jobs than those it showed to men. Similarly, a report from ProPublica revealed that AI used by US law enforcement agencies incorrectly predicted that black offenders were more likely to reoffend than white offenders with similar criminal records. Given that gender inequality and systemic racial biases are still issues society is confronting, many have expressed concern that AI could reinforce racial and gender inequalities.
Big Data and Privacy
By 2020, scores of sensors and billions of devices will be online. More devices connected to the internet means more companies holding sensitive personal information. While the use of personal data could do everything from making roadways safer to improving medical diagnoses, its collection raises questions about the protection of people’s privacy. After a record number of cybersecurity breaches in 2016, the ability of corporations to safeguard people’s personal information has been called into question. How companies such as Facebook use the personal data they possess is another ongoing concern. For example, previously unknown genetic diseases detected by AI programs could raise premiums for thousands of people if obtained by insurance companies. According to Professor Erik Vermeulen, society will have to redefine the word privacy itself. He believes humans have already become so dependent on AI-assisted technologies that deliver better user experiences that we will never return to a world where people can exist in complete solitude. For Vermeulen, in a world where privacy is an illusion, the next step is ensuring that data protection regulations are stringent and consistently updated.
What it Means to Be Human
Outspoken entrepreneur Elon Musk believes that by 2030, AI will be able to outperform humans at everything. To continue to add value to the economy, Musk believes that humans must become cyborgs, merging biological and machine intelligence. Theoretically, this technology would involve direct brain-to-machine communication, creating “lag free interactions between us and our external devices” and making the program a literal extension of the self. The ability to fuse humans and machines would give people unprecedented power to alter physical and cognitive abilities that have been stable for thousands of years. Thus, the development of brain-machine technology poses ethical questions about whether its usage would alter the natural course of evolution. Moreover, some argue that the hasty integration of such programs could create unforeseen consequences. While brain-machine interfaces are still years away, some believe that devices like smartphones and laptops have already become virtual extensions of the self. For them, our integration with modern technology has already created a new class of humans who exist as cyborgs.
Can Machines be Moral?
Perhaps due to popular culture, people’s worst fears about AI often conjure up images of robots running amok and taking over the planet. While experts believe the reality is far less dramatic, AI still creates complicated situations for programmers and society to deal with. Popular scenarios often run something like this: a self-driving car is moving quickly down the road when a child darts into the street to grab a ball. Does the car risk killing its passengers by making a sudden correction? Or should it do nothing and continue ahead? This scenario highlights an ongoing debate about how programmers should introduce morality into AI as it becomes increasingly autonomous. Microsoft Researcher Danah Boyd says there are legitimate questions regarding the values being programmed into AI, claiming “There is increasing desire by regulators, civil society, and social theorists to see these technologies be fair and ethical, but these concepts are fuzzy at best”. Amid these concerns, earlier this year MEPs voted to consider granting legal status to robots, which would be classified as “electronic persons”, suggesting that new legislation is required to hold AI responsible for its actions. Complicating matters is the difficulty of introducing regulations once products already play a prominent role in people’s lives, making it necessary to grapple with these issues before AI becomes fully autonomous.
Imagining the Future
Many of the issues mentioned above outline future scenarios where AI creates as many problems as it solves. Introducing these topics is not meant to induce a sense of fatalism, but rather the opposite. Discussing and raising awareness about the ethical issues involving AI is vital to the development of appropriate policy responses. Given the breakneck speed at which technology develops, policy makers must be forward thinking to avoid playing catch up. AER’s event on AI will give attendees the opportunity to grapple with some of these issues alongside contributors experienced in dealing with them. Speakers include Nilofar Niazi, Founder & CEO of the TRAINM Neuro Rehabilitation Center, and Benoit Vidal, a co-founder of Dataveyes. TRAINM helps humans become more human through technology, including artificial intelligence, while Dataveyes specializes in the interactions between humans and data. The ethical issues surrounding the usage of AI will only intensify as it progresses, meaning people will have to examine difficult questions about what society’s future will look like. AER’s event on AI offers the chance to imagine and shape that future.