An article published in the Butterworths Journal of International Banking and Financial Law, in July 2017 (Vol 32, Issue 07)
Featuring: the All-Party Parliamentary Group on Artificial Intelligence releases a theme report discussing whether the proliferating advances in practical applications of AI in all industries and in legal practice will change any conceptual understandings of the law.
KEY POINTS:
- AI may be able to assist in contractual interpretation, but if it does it should not be by a more detailed parsing of language.
- Could AI be programmed to follow rules of equity – where equity is given its broader meaning to encompass, for instance, the discretion which judges have in interpreting contracts, principles-based consumer protection rules and legislation relating to unfair contract terms? We may need to clarify principles of equity and interpretation currently embedded in decision-making.
This is the first in a series of articles which ask whether technology developments will require changes in how legal theorists analyse certain legal principles and what, if anything, that means for the technology. The second article will consider uncertainty. The third article will consider causation and legal fallibility. The three subjects (and others) are in my view related.
‘Justice? – You get justice in the next world. In this one you have the law.’
– William Gaddis, A Frolic of His Own (Scribner, 1994).
This article comments on private law contracts governed by English law because that is the area I know best. Comments will be more or less applicable to other areas of law and other jurisdictions.
The All-Party Parliamentary Group on Artificial Intelligence has released a theme report titled "What is AI?". (Disclosing an interest, I am a member of the Advisory Board of the Group; interested parties, including lawyers, are invited to assist and give evidence to the Group.) The purpose of this article is to ask whether the proliferating advances in practical applications of AI in all industries and in legal practice will change any conceptual understandings of the law.
Artificial Intelligence
AI as a subject for research currently includes:
- machine learning
- decision-making (including expert systems)
- natural language processing
- automated reasoning
- autonomous systems
- multi-agent systems
- semantic web.
AI is now generally referred to in the industry as “cognitive computing”.
TECHNOLOGY
Current technology in legal practice in England mainly involves working with documents since this is the most time-consuming part of a practising lawyer’s job. It broadly operates in two main areas: document automation and information retrieval from structured and unstructured documents.
The technology is currently suitable for procedural or mundane tasks. It uses weak AI (see the box “Strong and weak AI” below). The role of senior practitioners is not much changed and continues to involve exercising judgment based on experience and an understanding of commercial parties’ positions and market practice.
If strong AI is developed, it will change this analysis significantly. There is no expectation that strong AI will be available to lawyers in the immediate future, however.
This still leaves open questions arising from foreseeable advances in weak AI.
For example, smart (ie self-executing) contracts will become common for simple transactions (as they are to a large extent in the B2C IT context – think of your relationship with your smartphone). Machine learning and natural language processing will develop the capability to derive meaning from more complex smart contracts. Advances in data science will enable smart contracts to implement a broader range of instructions and to operate dynamically by responding to changes in facts underlying the contract.
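To make the idea concrete, the sketch below shows one way a self-executing clause might be represented: a condition and an action which the software re-evaluates whenever the underlying facts change. This is a minimal illustration only; the clause, names and figures are invented and it corresponds to no particular smart contract platform.

```python
# Minimal sketch only: a self-executing clause modelled as a condition/action
# pair which the software re-evaluates whenever the underlying facts change.
# All names and figures are invented; this mirrors no real platform.

from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List

Facts = Dict[str, Any]

@dataclass
class Clause:
    description: str
    condition: Callable[[Facts], bool]  # when does the clause bite?
    action: Callable[[Facts], str]      # what happens when it does?

@dataclass
class SmartContract:
    clauses: List[Clause] = field(default_factory=list)

    def on_fact_change(self, facts: Facts) -> List[str]:
        """Re-run every clause against the latest facts; return what fired."""
        return [c.action(facts) for c in self.clauses if c.condition(facts)]

# Hypothetical clause: a 5% rebate if delivery is more than three days late.
contract = SmartContract(clauses=[
    Clause(
        description="5% rebate for late delivery",
        condition=lambda f: f.get("days_late", 0) > 3,
        action=lambda f: f"credit buyer {0.05 * f['price']:.2f}",
    ),
])

print(contract.on_fact_change({"days_late": 5, "price": 1000.0}))
# -> ['credit buyer 50.00']
```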
CONTRACT THEORY
Contract theory is robust. A contract is simply an agreement which is intended to be legally enforceable.
A contract may require the resolution of questions and disputes by contractual interpretation. Lawyers and judges ascertain the intention of the parties based on the words of the contract.
In legal terms, smart contracts are no different from any other type of contract. If a question of interpretation or a dispute arises, this will ultimately be determined by the courts according to normal principles of contractual interpretation.
There is currently a view that the value of smart contracts is in relation to simple or binary matters where there is little room for dispute. I think that this may, however, underestimate the ability of contract parties to find disputes even in these circumstances. Recent experience in consumer contracts relating to financial services products attests to this.
Between commercial (ie non-consumer) parties, where detailed negotiation of contracts takes place, contract disputes often arise when facts change or contract terms do not deal adequately with the facts.
AI science
- Many AI systems are modelled on the neural networks in the human brain.
- Neural networks in the human brain transmit electrical and chemical information.
- In machines, “neurons” are code performing computations, such as regression and classification tasks.
- These neurons communicate to form a virtual neural network running a series of statistical models.
- Each model receives inputs and produces outputs.
- Machine learning occurs where representations in a computer are updated in light of examples and virtual neurons are assigned a numerical “weight” which determines how the neurons respond to new data.
- Initially, the neural network is supervised, ie provided with the correct answers.
- If the network does not perform accurately, the system adjusts the weights of the neural network and trains on more data items, repeating the process to obtain results which are closer and closer to the correct answers.
- Recent developments have been based on a greater number of layers operating between the input and the output, producing more accurate and more nuanced, but less transparent, results.
- After many iterations, the network will generalise to examples not previously encountered, and the results will demonstrate the success of the system (a minimal sketch of this training loop follows this box).
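For readers who want to see the loop described in the box in concrete form, the following is a minimal sketch of supervised learning with a single virtual neuron. The task (learning logical OR), the learning rate and the number of iterations are invented for illustration; real systems use many layers of neurons and vastly more data.

```python
# Minimal sketch of the supervised training loop described in the box: a
# single virtual "neuron" whose numerical weights are repeatedly adjusted so
# that its outputs move closer to the correct answers. The task (learning
# logical OR), learning rate and iteration count are invented illustrations.

import math

def predict(weights, bias, inputs):
    # Weighted sum of the inputs squashed into (0, 1): one "neuron" firing.
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 / (1 + math.exp(-z))

# Supervised examples: (inputs, correct answer).
examples = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

weights, bias, rate = [0.0, 0.0], 0.0, 0.5
for _ in range(1000):                     # many iterations
    for inputs, target in examples:
        output = predict(weights, bias, inputs)
        error = target - output           # how far from the correct answer?
        # Nudge each weight in proportion to its contribution to the error.
        weights = [w + rate * error * x for w, x in zip(weights, inputs)]
        bias += rate * error

# After training, the network reproduces the rule it was never given in words.
for inputs, target in examples:
    print(inputs, "->", round(predict(weights, bias, inputs), 2),
          "(target:", str(target) + ")")
```

The point worth noticing is that the rule the trained system applies is written down nowhere: it is encoded in the numerical weights, which is precisely the transparency problem discussed below.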
We face the conceptual problem of the inability of a contract to be perfectly clear however much effort is put into it. Goode on Commercial Law (Ewan McKendrick, Penguin Reference, 2016) refers to “The Problem of Language” (p21) stating: ‘Those whose business it is to work with words soon acquire an appreciation of the limitations of language… It is, moreover, astonishingly hard to avoid ambiguity.’
Strong and weak AI
Strong AI describes a machine with an intellectual capability similar to (or better than) a human.
Weak AI describes the automation of narrow and specific tasks, without the ability to generalise.
The foundational, and much cited, American text by Francis Lieber, Legal and Political Hermeneutics, or, Principles of Interpretation and Construction in Law and Politics, with Remarks on Precedents and Authorities (Charles C. Little and James Brown, 1839) discussed this at length:
‘The British spirit of civil liberty induced the English judges to adhere strictly to the law, to its exact expressions. This again induces the lawmakers to be, in their phraseology, as explicit and minute as possible, which causes such a tautology and endless repetition in the statutes of that country that even so eminent a statesman as Sir Robert Peel declared, in Parliament, that he “contemplates no task with so much distaste as the reading through an ordinary act of parliament”.
Men have at length found out that little or nothing is gained by attempting to speak with absolute clearness and endless specifications, but that human speech is the clearer, the less we endeavour to supply by words and specifications that interpretation which common sense must give to human words. However minutely we may define, somewhere we needs must trust at last to common sense and good faith…. The more we strive in a document to go beyond plain clearness and perspicuity, the more we do increase, in fact, the chances of sinister interpretation.’
The problem is that treating interpretation as a problem of language simply drives more focus on the language. Can AI assist? If it does, it will not be by a more detailed parsing of language. Before considering this we need to broaden the scope of the questions to consider the ways in which AI will be tested before it can be proved to the satisfaction of jurists.
‘The UK has internationally leading activity in Machine Learning – DeepMind is the world leader, and several UK universities have world-class groups. The main opportunity for the UK is in leveraging this advantage, particularly in the start-up sector.’
– Professor Michael Wooldridge, Head of Department of Computer Science at the University of Oxford.
EQUITY
Equity in a legal context refers to doctrines and remedies originally developed by the English courts of equity. As Hanbury & Martin: Modern Equity (Sweet & Maxwell, 2015) notes: ‘It is not synonymous with justice in a broad sense.’ This suggests that AI could be programmed to follow rules of equity: if there are rules to be followed then AI could follow them. Equitable remedies and doctrines such as restitution, rectification and estoppel show that equity provides some element of judicial discretion in deciding specific cases, however. The history of the classification of types of implied trusts shows the nature of the challenge in an area where classifications can be contested.
I also think that, in the context I am considering here, there is a broader meaning to “equity” than that set out in equity textbooks. This context would also need to include the discretion which judges have in interpreting contracts, principles-based consumer protection rules, legislation relating to unfair contract terms, insolvency rules which can set aside contract terms in certain circumstances, and other rules of this type.
Technology developments require common standards. Principles of equity are common standards of law.
GOOD FAITH
English law, unlike other legal systems (including New York law), does not imply a duty of good faith into contracts. In principle this reflects a policy of giving more weight to certainty than fairness. English courts will, however, recognise an express good faith obligation in a contract and regulation governing consumer contracts imposes requirements of fairness to address the imbalance of bargaining power between parties.
AI by its nature can deliver a form of certainty. It is an open question whether weak or strong AI can deliver fairness.
At some point we will face the question (and not just in contract theory): could AI be programmed to make determinations of good faith and fairness?
There is an analogy between the regulation of consumer contracts and proposals for the regulation of AI. In financial services the FCA sourcebook contains the principle that a firm ‘must pay due regard to the interests of its customers and treat them fairly’.
The 2017 Asilomar conference, a landmark meeting of leading AI developers and thinkers, produced the Asilomar AI Principles.
These provide in the “Ethics and Values” section:
‘8) Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.
…
10) Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.
…
16) Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.’
While the Asilomar Principles are hard to disagree with, they beg the question in at least a couple of ways. For example, in machine-to-machine transactions the Asilomar Principles are clearly not of a type which machines could use to resolve their own disputes. Transparency, which is gaining support as a guiding principle for AI (including in the Asilomar Principles), helps frame the problem. In order for AI decision-making to be transparent, the “goals” of the machine must be capable of being stated. What is the machine trying to maximise? Profit, service, speed, performance criteria, franchise risk? At what cost? How are competing aims balanced? The ambiguity of language may be replaced by ambiguity of operations. If so, it will radically change the current approach to contracts. It is difficult to see this proceeding without a human to judge the “correct” decision, but we may need to clarify principles of equity and interpretation which are embedded in decision-making now.
If the machines’ rules are teleological, judges will have a different role. AI will make explicit some of the systematic points which are now somewhat implicit in contract theory.
AGENCY
Agency is important to the subject of AI because it is an element in the formation of a contract, ie agreement between parties requires parties.
The philosophical view here is that agents must be active and part of the natural order (see Understanding Human Agency, Erasmus Mayr, Oxford University Press, 2011). The conventional view of AI has been that it does not have agency.
Technologists assume a different view. They work on an assumption that machines will buy from and sell to each other and therefore must enter into contracts.
In 2016 the US regulator stated that it viewed a self-driving car’s AI as its “driver”: ‘NHTSA [the National Highway Traffic Safety Administration] will interpret “driver” in the context of Google’s described motor vehicle design as referring to the (self-driving system), and not to any of the vehicle occupants.’
The US is most advanced in its thinking about this issue. US academics Samir Chopra and Laurence F. White published “A Legal Theory for Autonomous Artificial Agents” in 2011, arguing for the legal personhood of an artificial agent. Not everyone agrees, of course: ‘Rights for robots is no more than an intellectual game’, Jonathan Margolis, Financial Times, May 2017.
A draft European Parliament report in 2017 raised the possibility of robots being classified as “electronic persons”. This would be a natural extension of the existing concept of the juridical person.
Machine agency has already been found to introduce the paradox of automation. Automatic systems enable incompetence because they are easy to operate and automatically correct mistakes. Automatic systems remove the need for human operators to practise. Automatic systems fail in unusual situations, requiring a particularly skilful response.
‘My interest in AI is primarily for its scientific implications. It has given us a host of concepts, and a variety of modelling techniques, that can throw light on fundamental problems in psychology, neuroscience, and theoretical biology …
Ideally, AI will help us to value the specifically human aspects of life – empathy, love, fellow-feeling, and a shared appreciation of the human condition.’
– Professor Margaret Boden, Professor of Cognitive Science at the Department of Informatics, University of Sussex; author of AI: Its Nature and Future (Oxford University Press, 2016).
Acceptable outcomes – as well as causing puzzles for contract theory, AI is likely to bring in new concepts. For example, it is conceivable that at the outset of each “transaction” each party’s contracting agent sets a number of parameters: expectation of profits, tolerance of loss, term of transaction, franchise risk. In this case the concept of acceptable outcomes (or something similar) will develop to ensure that parties’ AI agents “interpret” contract terms in a way which was foreseen and acceptable to the greatest extent possible. People do not think about the operation of a contract. They think about what they want to get out of the contract. Dynamic smart contracts will resolve disputes by reference to the parameters before a dispute becomes apparent.
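By way of illustration only, the following sketch shows one way a party’s contracting agent might encode such parameters and test a proposed outcome against them before any dispute crystallises. All parameter names and figures are hypothetical.

```python
# Hypothetical sketch: a contracting agent's parameters and an "acceptable
# outcome" test. Names and figures are invented for illustration.

from dataclasses import dataclass

@dataclass
class AgentParameters:
    expected_profit: float     # what the party hopes to gain (guides
                               # negotiation rather than the floor below)
    loss_tolerance: float      # the worst loss the party will absorb
    max_term_days: int         # how long the transaction may run
    franchise_risk_cap: float  # reputational risk budget, scaled 0 to 1

@dataclass
class ProposedOutcome:
    profit: float
    term_days: int
    franchise_risk: float

def acceptable(params: AgentParameters, outcome: ProposedOutcome) -> bool:
    """An outcome is acceptable if it stays inside every parameter."""
    return (outcome.profit >= -params.loss_tolerance
            and outcome.term_days <= params.max_term_days
            and outcome.franchise_risk <= params.franchise_risk_cap)

buyer = AgentParameters(expected_profit=100.0, loss_tolerance=20.0,
                        max_term_days=90, franchise_risk_cap=0.1)
outcome = ProposedOutcome(profit=-10.0, term_days=30, franchise_risk=0.05)
print(acceptable(buyer, outcome))  # True: a small loss, within tolerance
```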
DECISION MAKING
We know from psychological research that much human decision-making derives from the use of heuristics. Heuristics also assist in solving hard computational problems, such as the traveling salesman problem (explained in In Pursuit of the Traveling Salesman: Mathematics at the Limits of Computation – William J. Cook, Princeton University Press, 2012).
Since legal decisions are made by people, they must, like decisions of machines, involve algorithms and heuristics, albeit of different types. By analogy to computer decision-making:
common law = brute force processing | equity = heuristics
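To make the analogy concrete, here is a toy comparison on the traveling salesman problem cited above: an exhaustive search that examines every route, and a nearest-neighbour heuristic that settles quickly for a good-enough one. The cities and distances are invented; neither approach is offered as anything more than an illustration.

```python
# Illustration of the analogy: brute force versus a heuristic on the traveling
# salesman problem. Cities and distances are invented; for simplicity the
# tour does not return to the starting city.

import math
from itertools import permutations

cities = {"A": (0, 0), "B": (1, 5), "C": (5, 2), "D": (6, 6), "E": (8, 3)}

def dist(a, b):
    (x1, y1), (x2, y2) = cities[a], cities[b]
    return math.hypot(x2 - x1, y2 - y1)

def tour_length(tour):
    return sum(dist(tour[i], tour[i + 1]) for i in range(len(tour) - 1))

start = "A"
others = [c for c in cities if c != start]

# Brute force ("common law"): examine every possible route. Exact, but the
# number of routes grows factorially with the number of cities.
best = min(([start] + list(p) for p in permutations(others)), key=tour_length)

# Heuristic ("equity"): always visit the nearest unvisited city. Fast and
# usually good enough, but with no guarantee of the best answer.
tour, remaining = [start], set(others)
while remaining:
    nearest = min(remaining, key=lambda c: dist(tour[-1], c))
    tour.append(nearest)
    remaining.remove(nearest)

print("brute force:", best, round(tour_length(best), 2))
print("heuristic:  ", tour, round(tour_length(tour), 2))
```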
‘Questions that only two decades ago were considered to be metaphysical, today are formulated mathematically and are being answered statistically.’
– Judea Pearl, www.edge.org April 3, 2017.
In general the biggest barrier to adopting AI and deep learning is access to large scale, good quality data. For legal theory and practice this is by far the biggest challenge to better adoption of AI.
Interpretation and resolution – what do we need to know?
This article is not intended to be theoretical. But theory informs our unconscious biases. At this stage questions are more useful than arguments. I set out below some of the questions I am working on:
- Will technological developments make smart contracts mainstream?
- Will regulatory developments expressly imply good faith and fairness into parties’ bargains to an increasing extent?
- Will the Internet of Things enable real world monitoring and ongoing amendment of contracts?
- Will smart contracts come to be automatically updated in real time using tools such as embedded option pricing and insurance-type provisions?
- Will an integrated system of contracts develop to enable risks for parties and for systems to be managed and laid off in real time?
- Will regulatory oversight come from AI frameworks, including the monitoring of contracts?
- Will disputes arise in the same way if deviations from parties’ expected outcomes are anticipated and managed?
- Will good faith as mediated by machines come to imply a need for fairness in outcomes rather than initial positions?
- Will the lawyers’ role become more that of an expert arbitrator, more concerned with overseeing questions of the balance of individual equitable results and the systemic relationship of contracts?
- Will technology become capable in some way of judging community standards? Will AI incorporate or filter our human cognitive biases? This is an important question. If biases are not programmed into the software then we may not recognise its results. If they are then we may not like them.
The work has started, but traditionally lawyers in practice have not measured data relating to their product (other than their hours worked).
In practice, the contracts will disappear into the software, so we will find our data in relation to agreements in other areas. The software will be able to show this as text, but it is more likely that data visualisation techniques will provide some of the transparency which will be needed.
CONCLUSION
Clearly, there is no conclusion, except that lawyers in all disciplines should look further than the practical applications of AI to what it means for theory (and should start measuring and collecting more and better data).
Some of this is hard to think about, uncomfortable to write about and just unnatural as a way of thinking for an English private lawyer.
In the UK we do, however, have world-leading technologists and an expedient political desire to develop the country’s technology (including AI) sector far beyond its current size.
Hans Kelsen’s search for a “scientific” theory of the law in early twentieth century Vienna did not survive criticisms from the perspective of human actors and complex cases. He might simply have been ahead of his time.