Reported by Christian Neubacher, Centre for Science and Policy
In collaboration with the Maxwell Centre, CSaP convened a Science and Health Policy Engagement Roundtable to advance dialogue on ethical innovation with artificial intelligence in life sciences and healthcare.
Key questions included: What ethical principles should guide AI development in life sciences and healthcare? How can academia, industry and policymakers collaborate to create effective governance frameworks that ensure accountability, transparency and fairness? What unique policy challenges arise in regulating emerging AI technologies, and how can regulation balance innovation with appropriate safeguards?
The discussion brought together key stakeholders including Reggie Townsend (Vice President of Data Ethics at SAS) – whose visit to Cambridge prompted this timely discussion – and researchers working at the frontlines of life sciences and healthcare.
Participants reflected on the preeminent role data plays in shaping artificial intelligence tools, and on societal debates about how to apply, shape and regulate those tools. As one participant noted, data is our digital history: when AI draws on this data, it can reveal with startling clarity the distortions present within society. He urged others not to accept distorted data systems, but to strive for data that better captures and reflects all segments of society. This, he argued, is necessary to ensure that AI makes us better versions of ourselves rather than simply holding a mirror up to our existing inequities.
Regulating rapidly evolving AI technologies
Further discussion explored how to regulate AI and how to communicate more clearly to the public how AI tools operate, noting that solutions cannot rely on governments alone but require all stakeholders to take action. Given the rapid pace of innovation in AI systems, regulators struggle to develop toolkits that are broad and comprehensive enough while remaining enforceable and effective.
Regulation works well when technologies are in a steady state; a dynamically evolving technology is far harder to regulate, and the significant investment behind AI keeps its development moving at pace. One suggested solution was to take different approaches in different industries: for example, prohibiting the use of generative AI systems in healthcare while permitting it in lower-risk sectors. This would allow regulators to address the particular challenges that sectors such as healthcare face, including explaining to patients why an AI tool was or was not effective in their treatment. As a representative from the healthcare sector asked, how can we be certain that AI does no harm?
One method for improving consumer understanding of different AI models is to provide accessible model cards, akin to nutrition labels, which clearly and concisely explain how an AI model functions. A related suggestion was to ask whether AI methods could be validated in a similar way to traditional biomedical checks. This would be difficult, however, given the challenge of independently recreating AI models in the way that medical innovations are replicated. Instead, AI firms would need to self-govern, with the burden of proving the safety of an AI health intervention shifted onto the innovating firm. Firms could then be asked to demonstrate to regulators that a new technology is safe before it is approved for public release. Given the current lack of reliable enforcement for AI health applications, this could offer a viable alternative that promotes safety.
The role of education in developing industry best practice
Universities can play a significant role in shaping the future of AI applications in society by researching best practices and promoting them to industry through business schools. The work of building trustworthy AI begins before any line of code is written, and business schools and similar institutions can equip their students with the skills and knowledge needed to guide companies toward trustworthy AI applications. Further, incorporating input from key industry partners into educational programmes can empower and upskill the next generation of leaders while also providing an opportunity to reflect on and improve existing industry practice.
The roundtable, which took place on 10 March, concluded with substantial debate over whether applications that are effective, yet reflect and potentially exacerbate existing inequities, should be permitted or promoted. Questions remaining from the workshop included whether AI models could, as in the automotive sector, be tested and validated before release rather than afterwards.