The opportunities and risks of artificial intelligence

6 March 2015

Reported by Henry Rex, CSaP Policy & Communications Officer.

Earlier this month CSaP held the seventh meeting of our Policy Leaders Fellowship, bringing together the most senior members of CSaP's network of policy professionals with researchers from the University and other institutions. The topic for discussion at this meeting was the opportunities and risks of artificial intelligence.

The discussion included two presentations from leading researchers in the field: Professor Christopher Bishop, Distinguished Scientist at Microsoft Research Cambridge, and Dr Seán Ó hÉigeartaigh, Executive Director of the Centre for the Study of Existential Risk at the University of Cambridge.

Professor Bishop discussed how AI works, why it has recently come to the fore, and what its future prospects are. He described AI as machines that capture some of the capabilities of the human brain. This is a tricky goal to achieve, given that the human brain is “the most extraordinarily complex thing in the known universe”.

Researchers have made a great deal of progress in some areas, but there is still a lot of work to be done. He highlighted the difference between ‘weak’ AI, which is already all around us, and ‘strong’ AI, which is the goal of current research. ‘Weak’ AI is fixed intelligence: machines that can perform a specific function far better than the human brain, but can do nothing else (think of Deep Blue beating Kasparov at chess). These machines must be coded by hand for each particular task. ‘Strong’ AI aims for machines that can learn: they study data to build models, and then apply those models, in theory matching and even surpassing the capabilities of the brain.
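To make that distinction concrete, the sketch below contrasts a hand-coded rule with a parameter learned from data. It is a minimal illustration of the general idea in Python, not code from the presentation; the spam-filtering example and every name in it are hypothetical.

```python
# A hand-coded rule ('weak' AI in miniature) versus a model whose
# parameter is learned from labelled examples. Purely illustrative.

def hand_coded_rule(message: str) -> bool:
    # The behaviour is fixed by the programmer and cannot improve.
    return "winner" in message.lower() or "free prize" in message.lower()

def learn_threshold(examples):
    # Learning in miniature: choose the score threshold that classifies
    # the most labelled (score, is_spam) examples correctly.
    best_threshold, best_correct = 0.0, -1
    for candidate, _ in examples:
        correct = sum((score >= candidate) == label for score, label in examples)
        if correct > best_correct:
            best_threshold, best_correct = candidate, correct
    return best_threshold

data = [(0.9, True), (0.8, True), (0.3, False), (0.1, False)]
threshold = learn_threshold(data)           # 0.8, inferred from the data
print(hand_coded_rule("You are a WINNER"))  # True, by a fixed rule
print(0.85 >= threshold)                    # True, by a learned rule
```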

He then gave a brief history of AI, starting with Rosenblatt’s perceptrons and ending with a real-time demonstration of AI: a program that recommends films based on your previous choices.
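For readers curious about where that history begins, the sketch below implements Rosenblatt’s perceptron learning rule in a few lines of Python. It is a textbook reconstruction of the 1950s algorithm rather than material from the talk, and the toy AND-gate data are an assumption chosen for illustration.

```python
# Rosenblatt's perceptron learning rule: weights are nudged towards each
# misclassified example until linearly separable data are classified correctly.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    # samples: list of feature vectors; labels: +1 or -1 for each sample.
    n = len(samples[0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * activation <= 0:  # misclassified: update towards the example
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

# Usage on a toy AND-like problem (illustrative data, not from the talk).
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [-1, -1, -1, 1]
w, b = train_perceptron(X, y)
predict = lambda x: 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
print([predict(x) for x in X])  # [-1, -1, -1, 1] once training converges
```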

Dr Ó hÉigeartaigh focused on the challenges AI could pose to society and the implications for public policy. He spoke briefly about how AI has progressed over the years, and said that, now that the development of its capabilities had reached a certain stage, there needed to be a focus on “robust and beneficial” development.

While there is a great deal of hype about the consequences of AI, as AI becomes ‘stronger’ it will raise genuine existential concerns. Even if the probability of the worst of these consequences coming to pass is low, their impact would be so severe that they warrant serious thought now.

His presentation addressed AI’s impact on the future of employment, economic inequality, and privacy and the law. Seán warned that it is hard to design algorithms and systems that perform exactly as expected in all circumstances (citing the 2010 ‘Flash Crash’ as an example), so he stressed that future AI research needed to focus on transparent, verifiable and predictable systems. Safeguards must be built in so that systems ‘fail gracefully’ when something goes wrong, and, crucially, the user needs to understand the limitations of the system. He concluded by saying that because very strong AI is, for the moment, a hypothetical technology, it would be foolish to plan to regulate it now, and that holding the public debate around it too early could actually cause damage.
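One common pattern for ‘failing gracefully’ is for a system to abstain, and say so, when its confidence is low, rather than guessing. The sketch below illustrates that pattern in Python; it is a hypothetical design sketch, and the names, the threshold value, and the Decision structure are assumptions, not anything presented at the meeting.

```python
# Graceful failure in miniature: the system declines to act when its
# confidence falls below a threshold, making its limits explicit to the user.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Decision:
    label: Optional[str]   # None signals "system declined to answer"
    confidence: float
    reason: str

def decide(confidence: float, label: str, threshold: float = 0.9) -> Decision:
    if confidence >= threshold:
        return Decision(label, confidence, "confidence above threshold")
    # Fail gracefully: surface the limitation instead of guessing.
    return Decision(None, confidence, "below threshold; deferring to a human")

print(decide(0.97, "approve"))  # acts autonomously
print(decide(0.62, "approve"))  # abstains and explains why
```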

The presentations were followed by a lively debate among the assembled group.

Thumbnail image by www.flickr.com/photos/cblue98/

Banner image by Logan Ingalls