Algorithmic Collusion and the Limits of Agreement under Indian Competition Law
- Ayush Agrawal
[Ayush is a student at National Law Institute University Bhopal.]
The recent market study on AI by the Competition Commission of India (CCI) confirms that AI is no longer just a productivity tool; it is now a potential architect of market structure. Notably, 37% of surveyed startups cited AI-facilitated collusion as their primary competition concern. This anxiety reflects a deeper doctrinal problem: Indian cartel law is built around the idea that there must be an “agreement” to collude, but self-learning algorithms can now generate cartel-like outcomes without any human conspiracy.
This blog argues that while simple algorithms implementing human-devised cartels, as well as hub-and-spoke platforms, still fit comfortably within Section 3 of the Competition Act, 2002 (Act), truly autonomous pricing systems expose a structural gap in that framework. They can independently learn to sustain supra-competitive prices without any human “meeting of minds.” To address this, Indian law must gradually move from a purely intent-centric approach to one that is more willing to infer liability from collusive outcomes in certain AI-driven markets. The discussion proceeds by (i) revisiting what counts as an “agreement” under Indian law, (ii) examining global categories of algorithmic coordination, (iii) identifying the enforcement gap for tacit algorithmic collusion, and (iv) sketching a practical roadmap for reform.
The Challenge of Defining ‘Agreement’ under Indian Competition Law
Section 2(b) of the Act defines "agreement" broadly to include any arrangement, understanding, or action in concert. Indian courts have consistently read this to require a "meeting of minds" (consensus ad idem). This standard, which presumes conscious human communication and decision-making, is the single biggest hurdle in prosecuting collusion carried out by AI.
The Supreme Court’s landmark ruling in Excel Crop Care Limited v. CCI (2017) held that a cartel agreement can be deduced from circumstantial evidence and market behaviour, but that “mere parallel behaviour” is inadequate: the conduct must point to a prior understanding among competitors. Rajasthan Cylinders and Containers Limited v. Union of India and Another (2019) went further, warning against treating parallel pricing in concentrated markets as proof of collusion, and emphasising that conscious parallelism can be a rational, independent response to market conditions rather than a cartel. In platform cases such as Samir Agrawal v. ANI Technologies Private Limited and Others (2018), the CCI’s focus remained on whether human actors used the platform to facilitate a cartel, not on whether the platform’s algorithm itself autonomously set supra-competitive prices.
This “meeting of minds” requirement, while coherent for human actors, creates an obvious friction in the AI context: self-learning pricing systems can generate sustained, collusive outcomes without any communication or prior understanding between their human principals.
The Global Experience: Categorizing Algorithmic Coordination
Global jurisprudence provides a useful lens through which to analyze the spectrum of algorithmic collusion, revealing that while some forms are easily captured by existing laws, others expose significant enforcement gaps. The cases discussed in the CCI's report can be broadly classified into three categories.
Category 1: Algorithm as a tool for explicit collusion
The most straightforward scenario involves using an algorithm to merely implement a pre-existing human conspiracy. In United States v. Topkins (2015), sellers first explicitly agreed to fix prices and then deployed an algorithm to enforce their cartel. Similarly, in the UK's Trod/GB Eye case (2016), two sellers used automated software to maintain a price-fixing agreement. The European Commission also fined several consumer electronics firms for using sophisticated algorithms to monitor and enforce resale price maintenance policies with their retailers. In all these cases, the algorithm is merely the weapon; the agreement is plainly between humans. Such conduct slots neatly within Section 3 of the Act as currently interpreted.
Category 2: The hub-and-spoke model
A more complex situation arises in a hub-and-spoke arrangement, where a central entity facilitates coordination among competitors. The EU's E-TURAS case (2016) held that travel agencies that were aware of a platform's discount cap and continued using the system without publicly distancing themselves from it were presumed to have participated in a concerted practice. The legal principle here is one of tacit acquiescence. In the Spanish Proptech case (2024), real estate agencies used common software that was explicitly designed to prevent the listing of properties below a minimum commission fee. Under Indian law, especially after the Competition (Amendment) Act 2023 explicitly recognised hub-and-spoke cartels under Section 3(3) of the Act, knowing reliance on a common pricing algorithm can be treated as tacit acceptance of a concerted practice.
Category 3: The true challenge - tacit algorithmic collusion
The true analytical challenge lies with tacit collusion achieved by autonomous, "self-learning algorithms." These algorithms are not pre-programmed to collude. Instead, using reinforcement learning (often variants of Q-learning), they independently analyze market data. Through millions of trial-and-error simulations, they simply learn that coordinating prices with rivals is the optimal long-term strategy for maximizing profits.
The CCI report itself acknowledges this threat, referencing academic studies which show that even simple pricing algorithms can learn to collaborate and sustain prices above competitive levels, all without any communication between firms. This is also backed by empirical evidence, such as the finding that when German gasoline retailers adopted algorithmic pricing, their margins increased. This final category is different; there is no instruction to collude, no exchange of information, and no express hub. The collusive outcome is an emergent property of self-learning algorithms optimising for profit.
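The mechanism described in the academic studies the CCI report references can be made concrete with a toy simulation. The sketch below is purely illustrative and is not drawn from the report itself: the price grid, cost, demand rule, and learning parameters are all hypothetical assumptions. It shows two independent Q-learning agents, each observing only the rival's last price and updating its own pricing policy by trial and error; in settings like this, researchers have found that prices can settle above the competitive level without any exchange of information between the firms.

```python
import random

# Hypothetical toy duopoly, in the spirit of the studies the CCI report cites.
# PRICES, COST and the learning parameters are illustrative assumptions.
PRICES = [1.0, 1.5, 2.0]          # 1.0 ~ competitive price, 2.0 ~ "collusive"
COST = 0.5                        # common marginal cost
ALPHA, GAMMA, EPS = 0.15, 0.95, 0.1  # learning rate, discount, exploration

def profit(p_own, p_rival):
    # Simplified demand: undercutting wins the whole market, a tie splits it.
    if p_own < p_rival:
        return p_own - COST
    if p_own == p_rival:
        return (p_own - COST) / 2
    return 0.0

def simulate(steps=100_000, seed=0):
    rng = random.Random(seed)
    n = len(PRICES)
    # Each firm's "state" is only the rival's last price index: no messages,
    # no shared code, just public market observation. Q[firm][state][action].
    q = [[[0.0] * n for _ in range(n)] for _ in range(2)]
    state = [0, 0]
    for _ in range(steps):
        acts = []
        for f in (0, 1):
            if rng.random() < EPS:                 # explore
                acts.append(rng.randrange(n))
            else:                                  # exploit current policy
                row = q[f][state[f]]
                acts.append(row.index(max(row)))
        for f in (0, 1):
            r = profit(PRICES[acts[f]], PRICES[acts[1 - f]])
            nxt = acts[1 - f]                      # next state: rival's new price
            td_target = r + GAMMA * max(q[f][nxt])
            q[f][state[f]][acts[f]] += ALPHA * (td_target - q[f][state[f]][acts[f]])
        state = [acts[1], acts[0]]
    # Report each firm's greedy price from its final state.
    return [PRICES[q[f][state[f]].index(max(q[f][state[f]]))] for f in (0, 1)]
```

Nothing in either agent's code references the other firm's objectives or contains any instruction to coordinate; any price alignment that emerges is a by-product of each agent independently maximising its own long-run profit, which is precisely what makes this category so hard to reach with an agreement-centric doctrine.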
Identifying the Gap in the Indian Framework
The first two categories of algorithmic collusion can be prosecuted within the existing framework: they either involve explicit human conspiracies or fit within a hub-and-spoke theory now expressly recognised in Section 3(3) of the Act. The real enforcement gap arises in the third category: tacit collusion by autonomous, self-learning AI.
Here, Rajasthan Cylinders (2019) becomes a double-edged sword. The Supreme Court's protection of "conscious parallelism" as a rational, independent market response makes sense for human decision-making. But a profit-maximising pricing algorithm is essentially an ultra-rational agent. Two rival firms that independently deploy such systems in an oligopolistic market may end up in a stable, high-price equilibrium without any communication or instruction to collude. Under current doctrine, that outcome looks indistinguishable from lawful parallelism, not a "concerted practice".
If two competing AIs, designed by different firms with no communication between them, learn to mimic each other's price increases and settle into a stable, high-price equilibrium, this would not constitute a "concerted practice". Their conduct mirrors the exact parallel behaviour that the Supreme Court has already deemed insufficient to prove collusion.
The opacity of modern "black box" algorithms exacerbates the problem: even their designers may be unable to reconstruct the precise learning path that led to a collusive outcome. This makes it extremely hard for the CCI to produce the "plus factors" needed to differentiate unlawful coordination from aggressive but independent competition, and raises awkward questions about who, if anyone, possessed the requisite intent.
These are intractable questions of liability. The algorithm's autonomous learning process severs the causal link between a human decision and the collusive outcome, and the traditional evidence of a cartel (emails, call logs, meeting minutes) is entirely absent, making intent to collude nearly impossible to prove.
The Way Forward
Closing this gap requires building on the CCI’s calibrated recommendations in its AI market study, rather than overhauling Section 3 of the Act. The study urges enterprises that deploy AI systems, especially those with significant market power, to adopt a self-audit framework for competition compliance, ensuring "responsible autonomy" and preempting market distortions.
CCI's self-audit framework
This framework rests on three core principles: (i) rigorous documentation and transparency, mandating records of algorithmic objectives, data sources, access protocols, and explainability of key functions for stakeholder disclosure; (ii) proactive design and monitoring, with built-in safeguards, regular audits of outputs for price alignment or discrimination, and protections against sensitive data sharing; and (iii) holistic compliance integration, embedding competition law into AI lifecycles via training and alignment with corporate compliance programs. An annexed guidance note provides a template, promoting early risk detection in black-box systems.
Feasibility for tacit collusion
For Category 1 (explicit) and Category 2 (hub-and-spoke) collusion, the framework suffices; audits can uncover human instructions or platform acquiescence, generating "plus factors" beyond Rajasthan Cylinders' conscious parallelism. For Category 3 tacit collusion via self-learning algorithms, however, its feasibility falters. These systems arrive at collusive outcomes through reinforcement learning without any programmed intent, evading pre-deployment safeguards, and post-hoc audits struggle with opaque learning paths in the absence of traditional evidence such as emails. Although the framework is novel by global standards, its success hinges on voluntary adoption and the CCI's enforcement teeth, risking under-deterrence in oligopolies.
Three enhancements can operationalise and strengthen it:
Read "concerted practice" purposively. Knowingly deploying a self-learning pricing algorithm in a concentrated market prone to algorithmic price alignment can itself evidence participation in a concerted practice, with the firm's own audit records serving as proof of that knowledge.
Create rebuttable presumptions. Treat sustained supra-competitive price-matching in AI-dominated markets as presumptively collusive, unless the firm's audit records demonstrate independent decision-making.
Add structured defences. Allow enterprises to put efficiency justifications before the CCI, and strengthen the guidance note with collusion stress-tests and the option of independent third-party algorithm reviews.
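A rebuttable presumption of this kind implies some operational screen. The following is a purely hypothetical sketch of what such a screen might compute; the function name, the thresholds, and the idea of a "competitive benchmark" price are all illustrative assumptions, not anything the CCI has prescribed. It flags a market only when rivals' prices both move in near-lockstep and sit well above the benchmark, so that ordinary competitive convergence at low prices is not caught.

```python
# Hypothetical screening heuristic for the rebuttable-presumption idea above.
# Thresholds and the benchmark are illustrative assumptions, not legal tests.
def collusion_screen(prices_a, prices_b, benchmark,
                     match_tol=0.01, match_share=0.9, margin_floor=0.2):
    periods = list(zip(prices_a, prices_b))
    # Share of periods in which the two rivals' prices are (near-)identical.
    matched = sum(abs(a - b) <= match_tol for a, b in periods) / len(periods)
    # Average margin of observed prices over the competitive benchmark.
    avg_price = sum(a + b for a, b in periods) / (2 * len(periods))
    margin = (avg_price - benchmark) / benchmark
    # Flag only when BOTH lockstep pricing and a large margin are present.
    flagged = matched >= match_share and margin >= margin_floor
    return {"flagged": flagged,
            "match_share": round(matched, 3),
            "margin_over_benchmark": round(margin, 3)}
```

On this design, a flag would shift the burden to the firms, whose self-audit records (documented objectives, monitoring logs) would then be the natural material for rebutting the presumption, tying the screen back to the CCI's own compliance framework.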
Conclusion
The CCI’s AI market study rightly recognizes algorithmic collusion as an emerging threat to competitive markets. India’s existing framework remains adequate for explicit and hub-and-spoke cartels, but struggles where self-learning algorithms independently discover and sustain collusive outcomes. A doctrine built around human “meetings of minds” is not fully equipped for mindless machines that nonetheless behave like cartels.
A combination of purposive interpretation, outcome-based rebuttable presumptions, and clear compliance guidance can preserve the core of Section 3 while making it responsive to AI-driven markets. The choice for Indian competition law is no longer whether to grapple with algorithmic collusion, but how quickly it can adapt before the market reality overtakes the law.