AI in IA: survey highlights fear over arbitrator tech skills


This year’s Bryan Cave Leighton Paisner survey has found that practitioners have a low level of confidence in the technical capability of arbitrators to use artificial intelligence tools – and are overwhelmingly in favour of regulating how the technology is used.

Launched today, the survey features the responses of 221 practitioners from across the world, including private practice lawyers, in-house counsel, arbitrators, staff at arbitral institutions, academics, experts and representatives of legal tech providers.

This year’s survey focused on how much AI is already being used in arbitration and current attitudes towards the technology, as well as the potential need for transparency and disclosure and whether regulation is required.

A word cloud illustrating the words that respondents felt best described AI in arbitration showed that “inevitable” was the most commonly used.

According to an introduction drafted by ChatGPT, the survey found that AI tools can “offer many benefits to the arbitration process” but also “pose significant challenges and risks, such as ethical, legal and technical issues”.

“Therefore the use of AI tools in international arbitration requires careful consideration and regulation to ensure that they are compatible with the principles and values of arbitration, such as fairness, impartiality, transparency and party autonomy.”

The survey found that 79% of respondents rated their confidence in the technical capability of arbitrators to give directions on the use of AI tools in arbitration at 5 out of 10 or below. Similarly, 73% of the arbitrators surveyed rated confidence in their own AI capabilities at 5 out of 10 or below, while three quarters (75%) of respondents agreed or strongly agreed that arbitrators should not use AI to draft the adjudicatory elements of an award.

Three quarters of respondents thought that there was a greater need for transparency over the use of AI by arbitrators, and 87% thought that the use of AI should be regulated in some way (whether through law, “soft law” guidelines, arbitration rules or other rules regulating the legal profession).

On the current use of AI tools by arbitration practitioners, the survey found that 28% of respondents had used ChatGPT in a professional context. The largest group of those users fell into the 56-65 age bracket (36%), while only 11% were under 25 years old.

The most common uses of AI by practitioners were translation (with 37% having used it), document review and production (30%) and text formatting and editing (30%), although fewer than one third of respondents had used AI tools for document review and production in arbitration specifically.

Most respondents said they would have no objection to using AI to perform tasks in arbitration, with 80% saying they would use it to detect whether AI had been used to generate text and images.

Respondents said the main benefit of AI was saving time (with 85% ranking it as the most or second most important benefit), followed by cost-effectiveness (ranked most or second most important by 60%).

However, 88% of those surveyed reported concerns over cybersecurity, and the same proportion reported concerns over AI hallucinations, breaches of confidentiality and deepfakes in relation to evidence. Three per cent said they had experienced the integrity of evidence being compromised through the use of AI – which the survey says is significant, as it indicates a real risk of AI affecting the integrity of adduced evidence.

Some 60% of respondents therefore agreed that there was a need for greater transparency over the use of AI by parties – with nearly three quarters saying parties should be required to disclose the use of AI in drafting expert reports and 62% saying its use in translation should be disclosed.

Responses were mixed on to whom the use of AI should be disclosed: only half felt disclosure should be made to all parties involved, with the remainder split among the tribunal, the institution, third-party funders and the parties only. No respondent had ever experienced a tribunal refusing to allow the use of an AI tool in arbitration.

BCLP partner Claire Morel de Westgaver tells GAR the implications for the use of AI in arbitration are “many and complex” and that the “real challenge for the international arbitration community will be to work out how associated risks should be managed to preserve the integrity of the arbitral process while embracing the benefits of AI, particularly in terms of efficiency and competitiveness”.

“We at BCLP are hopeful that this survey will contribute to tackling this challenge.”
