Banner & thumbnail images: Drone on campus, by Gabriel Garcia Marengo
Reported by Rebecca Van Hove
CSaP’s Continuing Policy Fellows held a roundtable meeting last month to discuss the challenges and opportunities that advances in digital technologies present in an increasingly digitally reliant world. Hosted at the Royal Society, the event saw a group of speakers from the University of Cambridge present new research, shedding light on the opportunities of digital and technological advances and discussing how these might be harnessed for policy making.
Kicking off the discussion was Professor Simon Moore, Co-Chair of the University’s Trust and Technology Initiative. The Initiative brings together interdisciplinary research exploring the dynamics of trust and distrust in relation to internet technologies, society and power. Professor Moore argued for the importance of rigorous technical foundations for resilient, secure and safe computer systems: if we are to build greater trust in the digital world, and as technology becomes ever more pervasive in society, resilience and security become ever more crucial.
Professor Moore presented his work on improving the security of computer architecture, which has produced systems that are more robust, more secure and less vulnerable to cyber-attacks.
He illustrated that some of the technologies needed to resist cyber-attacks – such as the 2017 WannaCry attack on the NHS – already exist; the real difficulty lies in migrating technology from academia to the commercial world. A disconnect in the supply chain prevents consumers from choosing products on the basis of security, and makes it extremely difficult to bring new, more secure and safer technology to market, as it is often more expensive.
Professor Moore argued that standards and a regulatory framework are needed to push industry to provide technology products which make use of newly available and rigorously tested safety and security innovations.
Dr Tanya Filer, a Research Associate in Public Policy and Policy Engagement at the Bennett Institute for Public Policy, spoke next to introduce the Institute’s Digital State project. Within this, Dr Filer’s current research focuses on Govtech, which refers broadly to digital and new technologies designed for the public sector.
As the idea has taken hold that the ‘innovation community’ possesses a wealth of untapped knowledge and skill which could be applied to public sector administration and service delivery, governments across the world are increasingly drawing on the tech community, generating Govtech ideas through initiatives such as government-led challenge programmes, incubators and accelerators, and joint public/private competitions.
As a relatively young ecosystem, Govtech must grapple with a number of challenges, ranging from scalability issues and the need for procurement reform, given the risk-averse nature of governmental institutions, to the issue of trust.
Dr Filer argued that better public education about how digital technology can be used is needed to demonstrate its necessity: the public often places digital innovation low on the list of government spending priorities, but digital innovation – at its best – can and should contribute to improvements across all sectors.
Dr Filer argued for the importance of regulating the Govtech industry: we need to have conversations around Govtech’s research agendas, trust and accountability now, not in five or ten years, as this will be an industry that is hard to regulate in hindsight. As Govtech spans two sectors – government and the technology industry – both of which currently suffer from a lack of public trust, it is all the more vital to examine how the industry may best be governed and to ask where accountability lies.
The final speaker, Dr Tomalin, introduced a project examining Artificially Intelligent Communications Technology (AICT): language-based AI technologies that make extensive use of speech technology, natural language processing, smart telecommunications and social media to change the way in which we communicate, enabling unprecedentedly swift and diffuse new language-based interactions.
Dr Tomalin made the case for the UK’s role in the advancement of AI, arguing that it could – and should – focus on taking a lead in the development of an ethics of AI.
With regard to AICT, this is equally important: language and ethics have a complex relationship. Many legal systems, for example, prohibit certain linguistic utterances, such as those classified as hate speech, while we also (self-)regulate what is acceptable to say depending on the situation and context. However, language-based AI technologies, such as Alexa, Siri or Google Translate, have no such regard for the ethics of the language they use. Dr Tomalin argued that they should – in the interest of social responsibility and the ethics of human translation. He contended that these AI systems should be set up not to censor potentially problematic content, but rather to create a ‘buffer zone’, giving readers awareness of, and a choice in how to interact with, potentially harmful language.
In this way, Dr Tomalin argued, users of AICT will be able to uphold not only their ‘freedom to’ but, crucially, also their ‘freedom from’ – in this case, freedom from seeing hate speech whilst using the internet. Questions from the attendees added further dimensions to the discussion, ranging from how this links to fact-checking, to the dangers such technological adjustments might pose with regard to abuses of power, and the need for ethics to be included in IT education, much as medical ethics is in medicine.