Unveiling the future of AI

17 July 2023


Reported by Patrick McAlary, CSaP Policy Assistant, and Victoria Price, CSaP Policy Intern (April–July 2023)

CSaP’s Horn Fellows gathered at Jesus College, Cambridge, for a day of interdisciplinary discussions on Artificial Intelligence (AI), weighing AI's potential applications in health and science against its risks.

Navigating the Limitations of AI in Healthcare

Dr Adrian Boyle, Consultant in Emergency Medicine and President of the Royal College of Emergency Medicine, opened the session by discussing the transformative potential of AI in healthcare, particularly in patient empowerment and in improving radiology image interpretation. Using the healthcare management and patient record software Epic as an example, Dr Boyle noted the potential for advanced analytics when AI is applied to patient care. However, he also detailed several limitations and risks, including fragmented systems, overdiagnosis, concerns around privacy, and the risk of trying to solve problems that don't exist. Dr Boyle advocated for co-production and clinical involvement in AI implementation, both to overcome clinicians' scepticism towards new initiatives and to ensure actionable data is obtained.

“The history of medicine is littered with good ideas that turned out to be harmful or weren't implemented properly.” - Dr Adrian Boyle

Professor Cecilia Mascolo, Professor of Mobile Systems, Computer Laboratory, then picked up the trade-off between data collection and privacy in public health as she spoke about her research on wearable devices, such as Apple Watches, and their potential applications in healthcare. Professor Mascolo explained that her ongoing research involves analysing coughing, breathing, and voice data to detect respiratory conditions such as COVID-19, and she noted the promise of longitudinal data collection for tracking disease progression and treatment effectiveness. She then discussed the accuracy and reliability of wearable device data, acknowledging that the sheer volume of data collected can help compensate for limitations in the precision of individual measurements.

Scientific Discovery and AI

Professor Richard Durbin, Al Kindi Professor, Department of Genetics, highlighted the rapid development of genomic sequencing since the first bacterial genome was sequenced in 1995. Professor Durbin observed that we are in the middle of a technological and conceptual advance in genomics, with opportunities to be seized, pointing out that the UK Government's life sciences strategy largely overlooks genomics. He concluded by discussing the Earth BioGenome Project and the Darwin Tree of Life Project, which aim to make it possible to sequence the genomes of all species in the near future.

“…we tried doing this kind of thing in the mid-nineties, and the conclusion then was machine learning's no good. It [wasn't] fast enough. Physics was better!” - Professor Chris Pickard

Professor Chris Pickard, Sir Alan Cottrell Professor, Department of Materials Science and Metallurgy, explained how his research focuses on combining elements in specific ways to create better battery materials and superconductors. Professor Pickard further discussed the challenges of solving the complex equations involved, and how machine learning has taken on a significant role in making these calculations dramatically faster and more efficient.

Humanity's Future with Superintelligent Machines

William Tunstall-Pedoe, Founder and CEO of Unlikely AI, suggested that defining Artificial General Intelligence (AGI) is contentious, and questioned whether AGI should be defined as human-like intelligence or something more. Mr Tunstall-Pedoe argued that Large Language Models (LLMs), trained on terabytes of data, simply predict the next word in a sentence, and that such models exhibit bias, hallucination, and overconfidence in presenting results. Nonetheless, he noted that, in terms of memory, LLMs are already super-human, and that companies such as Microsoft have pointed to them as the beginnings of AGI. While stating that more effort is needed to make these systems safe, Mr Tunstall-Pedoe also argued that the picture is not entirely one of existential dread: AGI has the potential to help people across the world.

Addressing some of these issues, Dr Guy Emerson, Department of Computer Science and Technology, explored the relationship between AI and the language sciences, with a focus on the current prominence of ChatGPT and other LLMs. Dr Emerson emphasised the need to clarify what we want from the technology, whether that is machine translation, data analytics and insights, or scientific endeavour. He argued that if the goal is not made clear at the outset, the result can be systems that do nothing well. Accordingly, Dr Emerson argued that users need a clear evaluative framework when engaging with such systems.

“…we need a guiding principle in democracies which says we will not tolerate unaccountable power, period.” - Professor John Naughton

Professor John Naughton, Senior Research Fellow, Centre for Research in the Arts, Social Sciences and Humanities (CRASSH) and Emeritus Professor of the Public Understanding of Technology, asked what it would be like for humanity if we succeeded in creating superintelligent machines. He argued that the answer can be found by comparing AI to corporations: both exist to fulfil a single overarching objective (for the latter, maximising shareholder value), can control vast wealth, and outlive us. This led to a lively discussion on whether corporations have had a primarily negative influence on the world, and whether AI will be similar. However, Professor Naughton suggested that ChatGPT has the potential to be a liberating technology for those who struggle with writing.

Image by D koi - Unsplash

Patrick McAlary

Institute for Government (IfG)

Dr Victoria Price

Centre for Science and Policy, University of Cambridge