Balancing Innovation and Integrity: AI in International Arbitration and Legal Practice
- Kushagra Mishra
- Aug 24
- 6 min read
[Kushagra is a student at ILS Law College, Pune.]
Artificial intelligence (AI) finds itself at the centre of the colosseum, surrounded by a flurry of incidents evincing the ethical harm it can inflict: the scandalous hearing at the New York State Supreme Court Appellate Division’s First Judicial Department, where an AI-generated video was used to present arguments, or the Karnataka High Court’s recommendation of action against a Bengaluru city civil judge for citing two non-existent Supreme Court judgements in a commercial dispute, suspected to have been fabricated using ChatGPT. Civilization once again faces the classic boon-or-bane dilemma. Attention must therefore turn to AI’s fundamentally altering impact on international arbitration, and to the challenges and advantages it offers, so that policymaking can be proactive, especially given arbitration’s cross-border, transnational, and multi-party nature.
The Singapore International Commercial Court recently set aside an arbitral award passed by a panel of retired Indian judges. Similarly, the Singapore Supreme Court annulled an award passed by former Chief Justice of India Dipak Misra, calling it a “copy-paste” judgement. As even the upper echelons of the legal field grapple with plagiarism and redundancy, and as law firms partner with AI tools, such as the recent collaboration between Shardul Amarchand Mangaldas and Harvey, a generative AI platform tailored to the legal profession, critical tasks like document review, predictive analytics, and award drafting are increasingly handled by automation. Given the high stakes, the way forward demands thorough reconsideration. This article therefore explores how AI is reshaping arbitration, examines its ethical pitfalls, and proposes frameworks to balance innovation with integrity.
Utilizing AI in Arbitration: The Digital Gavel at the Table
Procedural aspect: Streamlining document review and e-discovery
In today’s day and age, data and information are the new gold, and careful data analysis has become a prerequisite for successful operations. However, the vast and expansive nature of such datasets in general, and the varied purposes of their analysis in particular, have made assessment difficult. Multinational conglomerates have therefore opted for bespoke data-assessment tools tailored to their specifications, like KPMG’s eDiscovery system, which enables significant cost reduction. The specialised branch of AI used here is natural language processing (NLP), which is able to comprehend language and text, in all its aspects, much as a human does. Tools like Relativity and Everlaw have consequently seen an upward trajectory in adoption for cross-border dispute resolution involving terabytes of multilingual data.
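To make the idea concrete, the sketch below shows, in the simplest possible terms, how an NLP-based review tool might rank documents by relevance to a review query. It is purely illustrative, using a basic TF-IDF similarity measure and hypothetical document snippets; commercial platforms such as Relativity or Everlaw rely on far richer, proprietary pipelines.

```python
# Minimal sketch: ranking case documents by relevance to a review query using
# TF-IDF similarity. Documents and query are hypothetical; real e-discovery
# tools add entity extraction, multilingual models, deduplication, and more.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "The supplier failed to deliver the goods by the contractual deadline.",
    "Minutes of the annual shareholder meeting, unrelated to the dispute.",
    "Notice of breach sent to the respondent regarding late delivery.",
]
query = "breach of contract due to delayed delivery"

vectorizer = TfidfVectorizer(stop_words="english")
vectors = vectorizer.fit_transform(documents + [query])
scores = cosine_similarity(vectors[-1], vectors[:-1]).ravel()

# Present documents to the human reviewer from most to least relevant.
for idx in scores.argsort()[::-1]:
    print(f"score={scores[idx]:.2f}  doc={documents[idx][:60]}")
```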
Setting the stage: Automating case preparation
At the core of some of the world’s biggest legal battles have been mere drafting errors and omissions, sometimes caused by no more than a single clause that later turned into a protracted lawsuit. Tools like the American Arbitration Association’s ClauseBuilder AI therefore automate the drafting of arbitration agreements and support institutional compliance, with a lower error rate than human drafters. AI is also making strides from the bench, taking a seat beside the arbitrator, as with the Singapore International Arbitration Centre’s AI case management system.
Donning the robe: Predicting and suggesting case strategy
Arbitration involves high stakes. Economic considerations are therefore chief, which explains the strong affinity corporates have for it. Tools like Premonition and Lex Machina forecast case probabilities based on an analysis of historical arbitration outcomes, arbitrator tendencies, and settlement patterns, allowing firms to develop a thorough cost-benefit analysis of their litigious endeavours.
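A minimal sketch of what such forecasting involves appears below. It fits a simple logistic regression to hypothetical historical features (claim size, arbitrator tendencies, prior similar disputes) and outputs a win probability; it is not how Premonition or Lex Machina actually work, and all data and feature names are invented for illustration.

```python
# Illustrative outcome-forecasting sketch on hypothetical historical data.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per past case: [claim amount (USD millions),
# arbitrator's historical claimant-win rate, number of prior similar disputes]
X = np.array([
    [5.0, 0.62, 3],
    [0.8, 0.40, 1],
    [12.5, 0.55, 6],
    [2.1, 0.35, 0],
    [7.3, 0.70, 4],
    [1.5, 0.45, 2],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = claimant prevailed

model = LogisticRegression().fit(X, y)

# Estimate the probability that the claimant prevails in a new matter,
# feeding into the cost-benefit analysis before commencing proceedings.
new_case = np.array([[4.0, 0.58, 2]])
print(f"Estimated win probability: {model.predict_proba(new_case)[0, 1]:.0%}")
```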
Stepping up to the panel: Award drafting and legal research
Going beyond the assistive roles described above, generative AI has also taken up the role of the arbitrator itself. The Queen Mary School of International Arbitration notes that 54% of practitioners now use AI for critical aspects of their work as well. AI has thus assumed an eclectic range of roles in dispute resolution, from arbitrator to counsel. This upsurge in AI use expectedly raises ethical concerns, which are discussed below.
Challenges: The Clash between Efficiency and Fairness
Algorithmic bias and neutrality
AI systems develop their intelligence from the historical data and empirical information on which they are trained. As a result, their outputs reflect the biases and prejudices contained in those datasets, potentially perpetuating the historical marginalisation of diverse and vulnerable groups. This challenge is significant enough to have been theorised and termed algorithmic bias. Further, real cases like Roberto Mata v. Avianca, Inc. (2023) show how ‘garbage in, garbage out’ training can lead to fabricated and fictitious cases being cited, undermining arbitrator neutrality.
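One practical safeguard against the Mata-style hallucination problem is a citation guardrail: every authority cited in an AI-assisted draft is checked against a trusted index of real judgments before the document is relied upon. The sketch below is hypothetical, with a placeholder index and a deliberately naive case-name pattern, meant only to illustrate the idea.

```python
# Hypothetical "citation guardrail": flag any cited authority that cannot be
# matched against a trusted index of real judgments (here, a toy placeholder).
import re

VERIFIED_CASE_INDEX = {  # in practice, a court or institutional database
    "roberto mata v. avianca, inc.",
}

def extract_citations(draft: str) -> list[str]:
    # Naive pattern for "X v. Y" style case names; real pipelines need more.
    return re.findall(r"[A-Z][\w.,' ]+ v\. [A-Z][\w.,' ]+", draft)

def unverified_citations(draft: str) -> list[str]:
    return [c for c in extract_citations(draft)
            if c.strip().lower() not in VERIFIED_CASE_INDEX]

draft = "As held in Varghese v. China Southern Airlines, the claim is barred"
print(unverified_citations(draft))  # flags the unverifiable authority for human review
```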
Confidentiality and data security
While the right to privacy hangs onto the ledge of judicial recognition in India, AI looks down on it as a nemesis, ready to push it further into the ravine. The reliance of AI systems on third-party servers for handling and storing sensitive case materials risks security breaches and violations of case confidentiality. The Institute of Company Secretaries of India highlights incidents where unencrypted AI platforms have raised security concerns and violated the sacrosanct confidentiality of proceedings, a confidentiality that flows directly from the most fundamental principle of arbitration: party autonomy.
Erosion of due process
In its assistive role, AI exercises great control over how evidence, statements, and records are presented, especially when they are lengthy and complex and require AI-assisted simplification. This over-reliance risks undermining the parties’ right to be heard and affecting arbitrator neutrality. The Silicon Valley Arbitration and Mediation Center’s (SVAMC) guidelines caution that AI-generated summaries are prone to omitting nuanced cultural contexts critical to equitable outcomes.
The myth of the impartial AI-arbitrator
Proposals for an AI arbitrator have been gaining traction worldwide. However, they overlook the fact that, ultimately, AI systems are entirely mechanical and lack the fundamental human qualities of empathy and contextual judgement. AI may consequently leave parties in limbo, especially in cases requiring contemplation of the spirit of the law or the doing of complete justice. The Chartered Institute of Arbitrators (CIArb) has opined that AI systems are bereft of the moral reasoning necessary for decisions made ex aequo et bono, that is, according to what is fair and good.
Regulatory Viewpoint: Addressing the Lacunae
The CIArb guidelines (2025)
The CIArb requires reasonable enquiries into the reliability of AI tools and mandates disclosure of their usage, with an explicit focus on maintaining parity of information between the parties whenever AI is employed in any capacity. The guidelines further state that AI outputs must be treated as advisory and explanatory rather than authoritative and substantive, and that training data must be audited for demographic or jurisdictional biases.
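One way a tribunal might operationalise these disclosure and parity requirements is a simple, shared record of every AI use in the proceedings. The sketch below is a hypothetical format of the author’s own devising, not one prescribed by the CIArb guidelines; every field name is illustrative.

```python
# Hypothetical AI-usage disclosure record, shared identically with both parties
# so that parity of information is maintained. Illustrative only.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class AIUsageDisclosure:
    tool_name: str                 # e.g. a generative drafting assistant
    used_by: str                   # "tribunal", "claimant", or "respondent"
    purpose: str                   # e.g. "summarising documentary exhibits"
    reliability_checked: bool      # outcome of the "reasonable enquiries"
    advisory_only: bool = True     # outputs treated as advisory, not substantive
    disclosed_to_parties: bool = False
    disclosure_date: Optional[date] = None

disclosure = AIUsageDisclosure(
    tool_name="Generative summariser (hypothetical)",
    used_by="tribunal",
    purpose="first-pass summary of documentary exhibits",
    reliability_checked=True,
    disclosed_to_parties=True,
    disclosure_date=date(2025, 1, 15),
)
print(disclosure)
```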
SVAMC’s guidelines on the use of artificial intelligence in arbitration
The SVAMC guidelines, addressed to participants in general and to parties and arbitrators in particular, are significant in shaping today’s best practices for tomorrow’s AI use.
Judicial precedents and legislative initiatives
Strict judicial positions on the use of AI, like the Massachusetts Superior Court’s order in Darlene Smith v. Matthew Farwell (2024) excluding AI-generated evidence, should be recognised and modelled for adoption across jurisdictions. Further, Singapore’s AI Verify framework could be used to introduce mandatory AI bias testing for arbitration tools used in Indian venues.
The Path Forward: Preemptively Addressing the Issues
AI tools may be limited to assistive features like document sorting and preliminary research, while retaining human control over substantive decisions, thereby creating certain no-go areas for AI integration and fostering hybrid workflows. Furthermore, historical biases entrenched in latent and intricate forms can be detected and purged from systems through proper AI due diligence, aided by auditing tools like IBM’s AI Fairness 360, thereby enabling compliance with the General Data Protection Regulation, 2016 and the California Consumer Privacy Act, 2018.
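A minimal sketch of what such an audit looks like is set out below, in the spirit of IBM’s AI Fairness 360 (the open-source aif360 toolkit). The dataset, the protected attribute and the column names are entirely hypothetical; the point is only to show how a disparity in favourable outcomes across groups can be quantified and flagged.

```python
# Bias-audit sketch using the aif360 toolkit (pip install aif360 pandas).
# Data, column names, and the 'region' attribute are hypothetical.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

df = pd.DataFrame({
    "region": [1, 1, 1, 0, 0, 0, 1, 0],             # 1 = hypothetical privileged group
    "predicted_outcome": [1, 1, 0, 0, 0, 1, 1, 0],  # 1 = favourable prediction
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["predicted_outcome"],
    protected_attribute_names=["region"],
    favorable_label=1,
    unfavorable_label=0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"region": 1}],
    unprivileged_groups=[{"region": 0}],
)

# A disparate impact well below 1.0 suggests the unprivileged group receives
# favourable outcomes less often, flagging the tool for deeper review.
print("Disparate impact:", metric.disparate_impact())
print("Statistical parity difference:", metric.statistical_parity_difference())
```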
Another important aspect is transparency: proper disclosure of AI usage in awards, and the freedom to challenge algorithmic outputs, must be afforded to the parties. This would uphold the overarching principle of party autonomy. Lastly, dynamic training and coaching tools must be pioneered to equip arbitrators with the skills to harness AI for efficiency while also detecting AI-generated content to preserve neutrality. This would help balance the good, the bad, and the ugly of this technology.
Conclusion: Taking a Step Back in the Algorithmic Rush
The integration of AI and its sub-systems in international arbitration is inevitable, but instead of blindly joining the AI race, stakeholders must remain vigilant and even delimit such integration in certain sensitive areas, ensuring that technology remains a tool, not a tribunal. Only after the ground-level implementation of institutional forecasts and cautions can these advancements become a boon for dispute resolution. AI should illuminate, not dictate, the path to justice.