Dr Christopher Markou

Leverhulme Early Career Fellow and Affiliated Lecturer at Faculty of Law, University of Cambridge

Lawyers, economists and technologists are forecasting fundamental changes to the legal system and the provision of legal services. Technological advancement, globalisation and the emergence of alternative legal service providers are contributing to a growing sense of destabilisation and uncertainty about the future of legal practice, legal education and the conceptual foundations of the law itself. The knowledge and expertise of lawyers and judges now seem to be at stake, with the burgeoning LegalTech industry mapping out which legal processes (and, implicitly, which legal concepts) might be amenable to proprietary interpretation and automation. With algorithmic decision systems seeping into ever more public and private sector contexts, doing legal 'work' more cheaply, quickly and effectively than humans, some suggest that the law is approaching a 'legal singularity': a hypothetical point at which the functional capabilities of Artificial Intelligence (AI) vastly surpass those of human lawyers and judges. But what does this mean for the future of law and the role of human decision-making?

One consequence of digitalisation, AI and Machine Learning (ML) is that previously tacit and conventional knowledge has become encodable and computable. This process is not altogether new: something similar occurred when the tacit knowledge of the silk weavers' guild was transformed into a mechanised process by the invention of the Jacquard loom in the early years of the Industrial Revolution. When applied to adjudication and lawmaking, algorithms promise powerful increases in the speed and accuracy of legal decision-making, perhaps also eliminating the biases that can permeate human judgment. Using ML to automate lawmaking and enforcement might prove especially useful, even essential, for overseeing automated private-sector activity such as high-speed securities trading. According to some optimistic appraisals, there is no aspect of lawmaking or adjudication that cannot be improved upon or replaced by machines.

Despite these apparent advantages, the spectre of 'rule by algorithm' has begun to raise alarm. Algorithmic adjudication and lawmaking imply a loss of autonomy and of control over self-government. If the law were nothing more than an elaborate series of rules, then perhaps many aspects of it would be amenable to mathematisation and automation. But law seems to entail more than that, and there may be some irreducible quality to social facts, legal concepts and legal processes that cannot be captured computationally, or at least not completely. If so, the question becomes: are there limits to the computability of legal processes and concepts? Are there contexts where computers should not be trusted to make consequential decisions, and from which they should therefore be prohibited entirely? And how do we identify, define and justify those contexts when the allure of ever-greater efficiency is hard for governments and courts to resist?

The answers to these questions are profoundly consequential for the growth of the LegalTech industry and the shift towards an increasingly algorithmic and computational legal system. Would such a system really be fairer, more equitable or more accessible? Or simply one in which the 'unreasonable effectiveness of mathematics' is used to legitimise and entrench algorithmic authority in society? The replacement of human juridical reasoning with computation risks undermining the legal system as one of the principal institutions of a liberal-democratic order. As such, the line between 'improvement' and 'replacement' cannot be drawn until the consequences of redrawing it haphazardly are clarified. This research project is an attempt to explore and clarify those consequences for the future of law as a social institution.

Research Interests

- Societal Impact of Artificial Intelligence (AI)
- Computability of Law/Legal Norms
- Regulation of AI and Emerging Technologies
- Socio-Legal/Complexity Theory
- LegalTech
- 'Future of Law' and Legal Education
- AI Governance and Industrial Strategy
- Psychosocial, behavioural, and socio-economic impact of technology
