
Artificial intelligence (AI) is transforming the legal world, from AI-assisted document review to the deployment of generative AI for research and the drafting of submissions. Recognising which way the wind is blowing, many jurisdictions are already regulating the use of AI in court proceedings. Arbitration bodies have also started publishing "soft-law" guidance on using AI in arbitration.
Recently, the Chartered Institute of Arbitrators, which represents the interests of alternative dispute resolution and arbitration practitioners globally, published its Guideline on the Use of AI in Arbitration (the "Guidelines").
The Guidelines are intended to operate alongside applicable laws, regulations and institutional rules. Although the Guidelines are non-binding, parties may choose to incorporate them (or parts of them) into arbitral proceedings via their arbitration agreements and tribunals may adopt them by way of procedural order (in a similar way to the non-mandatory but widely adopted IBA Rules on the Taking of Evidence in International Arbitration).
The Guidelines provide practical guidance to parties and arbitrators on how AI may be used in an arbitration context to leverage its benefits while seeking to mitigate its risks. They also raise issues that parties and arbitrators should consider if deploying AI in their arbitration.
As for the benefits, the Guidelines recognise that AI can save parties time and costs, for example by analysing documents and extracting relevant information, summarising texts, streamlining the taking of evidence, conducting legal research, and predicting the outcome of procedural strategies or arguments. AI tools may also support arbitrators in carrying out their roles, for example by enabling the "more accurate and efficient processing of submitted information".
However, the Guidelines also expose the tension between, on the one hand, technological advances and, on the other, protecting the fundamental principles of arbitration, including confidentiality, the impartiality and independence of arbitrators, and the enforceability of arbitral awards. For example, when it comes to arbitrators' impartiality, it is unclear how biases in AI algorithms, and in the selection of the data sets on which those algorithms are trained, will be identified, assessed and addressed by tribunals. As for the confidentiality of, and control over, data, where confidential information obtained in the arbitration is input into a third-party AI tool, there is a greater risk of third-party access (a risk that is exacerbated if the underlying data is used by the AI tool to "machine learn").
To mitigate these (and other) risks, the Guidelines advise that arbitrators and parties should seek to understand the technology, function and data in AI tools, and carefully weigh up the benefits and risks of using AI tools in their arbitration.
Additionally, the Guidelines acknowledge that arbitrators have the power to issue directions and to make rulings on the use of AI by parties in arbitration (subject to any express prohibition agreed by the parties, or arising under mandatory applicable laws, institutional rules etc.), a power which falls within the scope of arbitrators' general power to make orders regarding the conduct of proceedings. Arbitrators may also require parties to disclose the use of an AI tool in certain circumstances, for example if they consider it is likely to affect the evidence or the outcome of the arbitration.
While the Guidelines go a long way towards addressing the risks of using AI tools in arbitration proceedings, a number of questions nevertheless remain:
The Guidelines suggest that arbitrators may use AI tools where these will support the more accurate and efficient processing of information and the quality of their decision-making, provided they do not relinquish their decision-making powers to AI and independently verify the accuracy and correctness of AI-generated information. The Guidelines "encourage" arbitrators to consult the parties on the use of an AI tool and to take their feedback into account. But is this sufficient in circumstances where inappropriate use of AI by an arbitrator could potentially "compromise the integrity of the proceedings or the validity or enforcement of the award"?
The Guidelines are a significant and much-needed framework for parties and arbitrators wanting to use AI in arbitration in a responsible and ethical manner. Nevertheless, the practicalities of using AI will take time to be fully understood, and AI technology is constantly changing (the Guidelines themselves acknowledge that they may need periodic updates to accommodate the rapid development of new technologies). In the meantime, parties, their legal advisers and arbitrators must seek to engage with the many questions emerging from the growing use of AI in the arbitration process.
Authored by Katie Duval.