Reported by Helen Brooks, NERC-funded CSaP Policy Intern (March-July 2018)
Dr Stephen Cave, Executive Director of the Leverhulme Centre for the Future of Intelligence, delivered a talk to CSaP Policy Fellows and a group of early-career researchers on how artificial intelligence has the potential to enhance autonomy and provide new sources of intelligence.
However, Dr Cave also noted that these benefits bring new ethical and governance challenges, to which current policy and legislation may not apply. In a world where artificial intelligence is increasingly widely used, policymakers need to consider new or modified policies on data security, data privacy and acceptable autonomy – i.e. which decisions it is acceptable for a machine, rather than a human, to make.
Responding to Dr Cave’s comments, Imogen Parker, the Programme Head of Justice, Rights and Digital Society at the Nuffield Foundation, gave insights into the possible impact of artificial intelligence on society.
Imogen explained that while some people might benefit from the increased use of artificial intelligence, other members of society could be adversely affected. She highlighted the need for a well-informed public conversation, and for appropriate legal structures that keep pace with innovation.
In the discussion that followed, Brittany Smith, a Senior Policy Analyst at DeepMind, suggested that there are likely to be more benefits to artificial intelligence than we currently recognise. The importance of ethics in policy considerations was emphasised, and it was suggested that, for artificial intelligence to succeed in society, companies would need to be accountable to the public (i.e. ethics-based accountability), rather than only to shareholders (finance-based accountability).
Photo of M31, the Andromeda Galaxy by Joel Tonyan – https://goo.gl/D9t3PG licensed under CC BY 4.0