News

Impacts of Artificial Intelligence on Science and Policy

30 November 2023



Reported by Patrick McAlary, CSaP Policy Research Assistant

Artificial Intelligence (AI) has come to dominate discussions on potential developments in science and in policy. The CSaP Dowling Policy Fellowship met in Cambridge for a day of talks and discussion about the uses, abuses, and unknowns associated with AI.

Possibilities and Risks: Preparing for the Unknown

The UK Government has a range of initiatives to help direct national and international action on AI, including the AI Safety Summit held in November 2023 and the Foundation Model Taskforce, which focuses on large generative AI models like ChatGPT. However, the great unknowns are ever-present, with one participant comparing the situation in Government to that following the Brexit vote in 2016: everyone knows the changes will be seismic, but no one is completely sure what the endgame will look like. It was explained that AI will supercharge existing risks (like fraud) and evolve others (like intellectual property theft), but that a further set of risks remains completely unknown given the current state of knowledge.

Speaking at the event, Sarah Connolly (Director for Security and Online Harms in the Department for Science, Innovation and Technology) and Professor Gina Neff (Executive Director, Minderoo Centre for Technology and Democracy, University of Cambridge), warned that in the wake of a suite of important global elections, AI technology could be utilised in a way that deteriorates public trust in democracy and in governments.

The potential futures of AI are being pulled in different directions. It was noted that a major geopolitical problem underlies the development of AI: two of the three biggest players, China and the United States, will not talk to each other about the issue. Will AI become a football in a geopolitical competition between East and West? Will the overbearing existential risk dominate capacity building? Or will the emergence of AI be taken as an opportunity to harness potentially transformative technologies for a major step change? We need to be clear about how we talk about AI, lest scaremongering win the day and we lose the chance to harness its transformative power.

Speaking to Machines

Professor Steve Young (Emeritus Professor of Information Engineering, Department of Engineering, University of Cambridge) explained that as a PhD student in 1975, his aspiration was to talk to machines, and this pursuit has driven his entire career. It quickly became evident, however, that the issue was not making machines speak but making them understand what was being said. The failure of rule-based intelligence systems indicated that the way to build a system which could recognise speech and, eventually, solve perceptual problems was to provide examples that it could be trained to recognise.

In the 1990s, a speech recognition system comprised two components: an acoustic model, which recognised the individual sounds of a language, and a so-called language model, which predicted the next word. However, the language model could not be scaled to use more context than that provided by the previous three words. In this period the Hidden Markov Model dominated, with the rival neural network approach failing to compete: a lesson in being in the right place at the right time, because twenty-five years later, with greater processing power and access to more data, neural networks overtook all competitors.
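The fixed-context limitation of those early language models can be illustrated with a minimal sketch. This is a toy trigram model (conditioning on the two previous words rather than the three mentioned above), written for illustration only, not a reconstruction of the systems Professor Young worked on:

```python
from collections import Counter, defaultdict

def train_trigram(sentences):
    """Count next-word frequencies conditioned on the two preceding words."""
    counts = defaultdict(Counter)
    for sentence in sentences:
        tokens = ["<s>", "<s>"] + sentence.split() + ["</s>"]
        for i in range(len(tokens) - 2):
            counts[(tokens[i], tokens[i + 1])][tokens[i + 2]] += 1
    return counts

def predict_next(counts, w1, w2):
    """Most likely next word, given only a two-word window of context."""
    options = counts.get((w1, w2))
    return options.most_common(1)[0][0] if options else None

corpus = [
    "the cat sat on the mat",
    "the cat sat on the sofa",
    "the dog sat on the mat",
]
model = train_trigram(corpus)
print(predict_next(model, "sat", "on"))  # -> "the"
```

However much training text such a model sees, anything earlier than the short context window is invisible to it, which is exactly the scaling wall described above.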

The real breakthrough was the development of the transformer architecture, which breaks away from the strictly sequential structure of previous language modelling. This provides the basis for modelling ever longer sequences, and this, alongside access to the huge datasets provided by the internet, is how models like ChatGPT emerged. Such models go beyond simple probabilities and encapsulate the fundamentals of language: grammar, semantics, and pragmatics.
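The break from sequential structure can be sketched with the core of the transformer, self-attention, in which every position in a sequence looks at every other position at once rather than only at its immediate predecessors. This is a deliberately stripped-down illustration (learned projection matrices and multiple heads are omitted), not production model code:

```python
import numpy as np

def self_attention(x):
    """Scaled dot-product self-attention over a sequence of vectors.

    Every row (token) mixes information from all other rows at once,
    unlike the strictly left-to-right n-gram models that preceded it.
    """
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                    # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ x                               # weighted mix of sequence

seq = np.random.default_rng(0).normal(size=(5, 8))   # 5 tokens, 8-dim each
out = self_attention(seq)
print(out.shape)  # -> (5, 8)
```

Because each output position is a weighted combination over the whole sequence, lengthening the context is a matter of compute rather than a structural impossibility.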

Professor Young noted that progress towards Artificial General Intelligence (AGI) is minimal in scientific terms. Large Language Models (LLMs) like ChatGPT will not contribute to an AI takeover of the world, no matter how big they become, as there is simply so much that they cannot do. LLMs have no ability to encapsulate knowledge and no ability to learn from experience, but they are fantastic tools, and it is their function as tools, Professor Young argued, that should inform how they are regulated: the onus should be not on the provider but on the user of the tool.

Conspiracy Spill Over

Dr Ramit Debnath (Assistant Professor of Computational Social Science & Design, University of Cambridge) discussed how misinformation and disinformation related to climate move within social networks. Issues around climate repair, which carry large outstanding questions about global governance and policy implications, are particularly prone to mis/disinformation. Debate around Solar Radiation Management (SRM), in which sunlight is reflected away through intervention techniques, has become a magnet for conspiracy theories. The impact is observable in the fate of Harvard University's SCoPEx project: it prompted a strong people-led movement in Sweden, where SRM technologies were due to be tested, that led to the cancellation of the project.

Analysing thirteen years of Twitter data using natural language processing techniques, Dr Debnath showed how SRM has become attached to conspiracies around chemtrails, and that individuals who engage with topics like chemtrails are more likely to interact with other conspiracy issues. By examining hashtags associated with chemtrails, geographic clusters of conspiracies can be identified: in the UK chemtrails tie into Brexit, in the US into issues around freedom, and in India into regional geopolitical issues.
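The idea of linking conspiracy topics through shared hashtags can be illustrated with a small co-occurrence count. The hashtags and posts below are invented for illustration, and this sketch is not a reconstruction of Dr Debnath's actual analysis pipeline:

```python
from collections import Counter
from itertools import combinations

def hashtag_cooccurrence(posts):
    """Count how often pairs of hashtags appear in the same post."""
    pairs = Counter()
    for tags in posts:
        for a, b in combinations(sorted(set(tags)), 2):
            pairs[(a, b)] += 1
    return pairs

# Hypothetical posts, loosely echoing the UK cluster described above.
posts = [
    {"#chemtrails", "#brexit"},
    {"#chemtrails", "#brexit", "#wakeup"},
    {"#chemtrails", "#freedom"},
]
top_pair, count = hashtag_cooccurrence(posts).most_common(1)[0]
print(top_pair, count)  # -> ('#brexit', '#chemtrails') 2
```

Aggregated over millions of posts, strongly co-occurring tags sketch out the kind of topic clusters, and their regional flavours, that the study describes.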

Dr Debnath explained that there are characteristics of groups likely to become embroiled in conspiracy theories and this provides a context for designing targeted inoculations. It is also the case that some topics, like SRM, may be more susceptible to mis/disinformation because there is a small kernel of truth: risks associated with SRMs are unknown and global governance structures are not well developed. Ultimately, Dr Debnath suggested that there is a greater role for science-based evidence to shape online discussions and to infiltrate echo chambers.

Digitising Chemistry?

Professor Alexei Lapkin (Professor of Sustainable Reaction Engineering, Department of Chemical Engineering and Biotechnology, University of Cambridge) explained that scenarios that could achieve net zero by 2050 in the production of chemicals include politically unpalatable options, such as prohibition or severe restriction of fossil fuels. Developing sustainable pathways is especially difficult because, as Professor Lapkin explained, “everything is chemistry, literally”. Clothes, food, and transport are all underpinned by chemistry.

While the actual production of chemicals represents 36% of emissions, it is the feedstocks and the use of chemicals that create most emissions: the biggest issue, therefore, is where the chemicals end up. The current process is entirely linear: primary raw materials are extracted, processed, and manufactured into a product that reaches consumers for use, and processes for the reuse and recycling of material are not available.

The industry is facing a major shock, with various mechanisms, including carbon capture and storage, bringing circularity into play. The emergence of diversity, however, creates a great deal of uncertainty. The chemical industry does not exist in isolation: if biofuels are introduced into the process, this pulls resources and fuels from other areas. Professor Lapkin argued that the only way to address sustainability problems is to digitise. Only by populating data science tools can AI agents be utilised to start examining supply chains and embedding sustainability into the system. Through this, molecules can be traced from feedstocks into the supply chain to better understand (and build) circularity within the system.

Patrick McAlary

Centre for Science and Policy, University of Cambridge