Published On: September 20, 2023

I. Stephanie Boyce is the past president of the Law Society of England and Wales. In March 2021 she became its 177th president, the sixth woman, and the first black person and first person of colour to hold the office. In this article, she writes about how effective, principle-based regulation of Artificial Intelligence (AI) is needed to maintain public confidence and protect against rogue AI operators. She reflects on the work, which she led as president of the Law Society of England and Wales, to develop a set of principles to guide the ethical development and use of AI and lawtech. The principles are designed to empower the profession to understand the main considerations for deploying lawtech. They also aim to encourage greater dialogue on the development of future products and services to unlock the benefits brought by AI capability. Crucially, they can help lawtech providers understand the regulatory parameters of justice and legal practice, helping to embed trust and accountability and to maintain democracy.

It is fair to say that Artificial Intelligence (AI) is changing our lives with or without our permission. In a recent email, Professor Richard Susskind OBE KC (Hon) was quick to point out: “ChatGPT and generative AI are significant not for what they are today (mightily impressive but sometimes defective) but for what later generations of these systems are likely to become. We are still at the foothills. But the pace of change is accelerating and we can reasonably expect increasingly more capable and accurate systems. In the long run, AI systems will be unfathomably capable and outperform humans in many if not most activities. Whether this is desirable is another issue and is the focus of the current debate on the ethics and regulation of AI.”

According to a report from Bankless Times, “Artificial intelligence (AI) could displace 800 million jobs (30% of the global workforce) by 2030. … AI adoption will greatly impact people’s lives and their ability to make a living in the years ahead.” AI’s capabilities and economic impact are potentially immense, both on us as individuals and on how we exist and thrive. I was struck by a recent headline from the online publication Legal Cheek: “Robo judges could make legal rulings, says Master of the Rolls”. I was not surprised to read that legal decisions ordinarily undertaken by trained judges could be made by AI in the future.

The Master of the Rolls, Geoffrey Vos, has previously spoken enthusiastically of an integrated online civil justice system and the use of technology to allow “legal rights to be cheaply and quickly vindicated.” He also cautioned that the coming generation “will not accept a slow paper-based and court-house centric justice system” and that “the use of technology by the courts is not optional, but inevitable and essential.” Indeed, there is a push for online justice as the government seeks to transform our justice system. However, online justice brings its own challenges, chiefly around access to justice, due process and the ability of vulnerable individuals and those who are digitally excluded to interact with the legal system.

Whilst lawtech has not yet been as disruptive in our lives or profession as fintech or medtech, there is no denying that AI has the potential to make justice cheaper, quicker and more accessible. The most developed legal AI tools currently available on the market are all directed at legal professionals: tools to help with due diligence (e.g., Diligen) or to analyse litigation patterns (e.g., LexMachina, run by LexisNexis). AI and lawtech will continue to improve lawyers’ ability to focus on more challenging matters, through the automation of otherwise time-consuming or repetitive tasks.

The debate around AI and its governance hinges not on this kind of automation (despite the concomitant job losses or changes) but on the generative end of the spectrum: the use of AI to aid decisions. How can we be sure, for instance, that those subjected to decisions made by AI will fully understand the consequences of those decisions?

Whilst the Master of the Rolls makes it clear that there would be an option to appeal a decision made by a robot, the potential of generative AI to be trained with the requisite knowledge could, and will, mean that judicial decisions can be made far more quickly than by a human (in seconds rather than weeks), removing the anxiety caused by delay.

The true disruption in the legal industry will come through innovation in low-cost and free AI tools which increase access to justice. Indeed, this is essential if AI is going to close, and not further widen, the justice gap. The question becomes: how do we ensure that this is a positive disruption? We also need to ensure that those accessing justice via AI-based services are sufficiently informed, and that the AI-driven justice they receive is deployed effectively and in line with clear and fair principles, if it is to be fully accepted by the public as part of our democratic justice system.

Could regulation be the answer?

The UK government’s recently published AI White Paper states that the government would prefer not to legislate, saying it “will avoid heavy-handed legislation which could stifle innovation and take an adaptable approach to regulating AI.”

The government has indicated it will do this by empowering existing regulators to prepare tailored, context-specific approaches that suit how AI is used in each sector. Since the White Paper was published, numerous individuals, including tech CEOs, have come out very forcefully to say the government needs to take a different approach, calling instead for leadership. The government has since changed the position set out in the White Paper, which disregarded the dangers of AI in favour of innovation. A Financial Times report on discussions with G7 members suggests the “UK could promote a model of regulation that would be less “draconian” than the approach taken by the EU, while more stringent than any framework in the US.” The preference is for government and companies to devise guardrails to regulate AI. But can we be assured that this approach will provide the guardrails necessary to protect ourselves from bad actors and poor automated judicial decisions?

The key challenge is around interpretation. Most of our current laws were not written with AI in mind, meaning real-life issues will inevitably go unaddressed. There are questions too around privacy, such as the methods deployed by facial recognition, and around freedom from discrimination by algorithms, which do not involve the human act of taking and making discriminatory decisions that our laws assume. We now have technological systems that can converse with us as if they were other people, and large language models like ChatGPT can recreate the most human of traits. However, while there are plenty of laws that regulate how we behave as humans, there are few or no laws that govern AI.

While I appreciate the need not to hamper innovation or growth, leaving it to regulators alone to rein in AI may do more harm than good, risking the acceptance of damaging but legal AI with no effective framework for the enforcement of legal rights and duties. The European Parliament has gone further and set out a legal framework on AI with the Artificial Intelligence Act, which seeks to ensure safety and fundamental rights and includes a number of restrictions around ChatGPT and predictive policing. The Act is the first formal regulatory instrument on AI developed by a major regulator so far.

Regulation and compliance typically fall behind the adoption of new technology, playing catch-up for the most part and failing to address the associated risks adequately, if at all. We can take some comfort from the fact that the AI community wants regulation too: it is imploring governments to regulate AI to set mandatory, enforceable requirements and standards that will prevent rogue operators, provide assurance and clarity on service expectations and quality to AI-justice users, and protect the integrity of those responsible, while mitigating any risks to humans. Most of us want to see AI used for the good of all humanity, not to see rogue actors ruin the integrity of AI and the whole industry by impinging on our basic human rights.

Guiding principles for regulating AI

As president of the Law Society of England and Wales, in March 2021 I led the launch of a set of principles to guide the ethical development and use of lawtech and to aid the governance of digital legal services, urging greater accountability and regulation in this space. The principles were the culmination of two years of work, including consultations with law firms, developers and regulators. The principles are:

  • Compliance: Lawtech should be underpinned by regulatory compliance. The design, development and use of lawtech must comply with all applicable regulations.
  • Lawfulness: Lawtech should be underpinned by the rule of law. Design, development and use of lawtech should comply with all applicable laws.
  • Capability: Lawtech producers and operators should understand the functionality, benefit, limitations and risks of legal technology products used in the course of their work.
  • Transparency: Information on how a lawtech solution has been designed, developed, deployed and used should be accessible for the lawtech operator and for its client.
  • Accountability: Lawtech should have an appropriate level of oversight when used to deliver or provide legal services.

These principles are designed to empower the solicitor profession to understand the main considerations when designing, developing or deploying lawtech. They also aim to encourage greater dialogue between the profession and lawtech providers in the development of future products and services, helping solicitors to unlock the benefits brought by digital transformation by providing a starting point to assess the compatibility of lawtech products and services with professional duties. Likewise, they can help lawtech providers understand the regulatory parameters of solicitors’ practice, embed trust and build market-ready solutions.

The five main principles should inform lawtech design, development and deployment. They are linked to an overarching client care principle: that lawtech should be used in a way which is compatible with solicitors’ professional duties to their clients. Whilst these principles were developed specifically for the solicitor profession, they provide a basis for other sectors to devise their own set of ethical principles on the evolution of AI. In a recent NPC article, “Safety of Artificial Intelligence: Prelude, Principles and Pragmatics”, Professor John McDermid OBE FREng suggests a form of principle-based regulation “as a starting point for addressing the core elements of responsible innovation, as well as helping achieve safety of AI, thereby enabling the benefits of AI to be realised whilst avoiding or reducing its harms.” It seems there is emerging agreement on this, in principle.

Government approach

The government recently published five principles for taking an adaptable approach to regulating AI: safety, security and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress.

The Rt Hon Prime Minister has since gone further than his government’s White Paper, highlighting the need to impose guardrails, acknowledging that AI could pose an “existential” threat to humanity, and indicating that the UK should lead the way with a new set of guidelines to govern the industry. With the consultation period for the AI White Paper now closed, it remains to be seen who will write these new guidelines. What I do know is that it will require a collaborative, cohesive approach from all regulators, both within the UK and globally.

This is a global challenge that demands a global response. All companies that develop and use AI technology need to be held to account for their developments, and any harm that may flow from these technological tools must be stemmed before they are released. Regulation would also undoubtedly bring huge benefits in gaining and maintaining the trust and confidence of the public.

Returning to whether AI is a threat or an enabler to justice: we read daily news of growing court backlogs, crumbling court infrastructure, a lack of legal aid and a lack of court staff. Increasing numbers of people are seeking legal advice, but this is not matched by the resources needed to deliver justice promptly. In addition, the aftermath of Covid-19 still lingers, and the combined impact of the current cost of living crisis and pressures on the public purse means the justice system lacks much-needed resources, challenging a key tenet of our democratic system. Could AI help to address this deficit? Whilst the surge of AI may not replace humans for now, it is hard to ignore the optimism that AI may bring to closing the justice gap. For some, AI may prove the lifeline necessary to vindicate their rights cheaply and quickly. However, just because there may come a time when AI can make judicial decisions, it does not mean it should.
