
SEBI's C-Suite Conundrum: Can Managers Govern AI in India?

  • Anuja Chatterjee, Sahil Singh

[Anuja and Sahil are students at Chanakya National Law University.]


The use of artificial intelligence (AI) and machine learning (ML) is growing at a rapid pace in India’s securities market. Exchanges, brokers, mutual funds, and other market participants now use AI/ML for risk management, portfolio advisory, client support, and fraud detection. This adoption brings both opportunity and heightened risk: these applications increase efficiency and market discipline, but they also introduce algorithmic bias, system failures, and cybersecurity vulnerabilities. Acknowledging this dual promise and peril, the Securities and Exchange Board of India (SEBI) released a consultation paper on 20 June 2025 setting out policy guidelines for the use of AI/ML in India’s securities market.


Drawing on the recommendations of the Working Group (a focused body constituted by SEBI, bringing together expertise to examine AI/ML applications in the Indian securities market and finalize the guidelines), SEBI identified five objectives for responsible AI/ML incorporation in the Indian securities market: (i) model governance, (ii) investor protection and disclosure, (iii) testing frameworks, (iv) fairness and bias, and (v) data privacy and cybersecurity. These objectives are intended to serve as a safety net, promoting innovation and efficiency while safeguarding investor confidence and market stability. In this piece, the authors critically analyze SEBI’s liability framework, arguing that the personnel on whom it places liability, i.e., senior management, are ill-equipped to bear that responsibility because they lack the necessary AI/ML expertise. The piece then provides a comparative review of other jurisdictions to highlight the shortcomings of SEBI’s framework and to draw lessons from global standards of AI accountability. Lastly, it offers concrete suggestions, informed by international best practices, to address these issues in the financial sector.


SEBI’s Liability Framework: The Core Proposal


Model governance emerges as the centerpiece of regulation in SEBI’s consultation paper. Market players using AI/ML are expected to appoint leaders at the very top who not only hold authority but also have “appropriate technical knowledge and experience” to take charge of every aspect of model use, from development and validation to deployment and oversight. This places extraordinary weight on leadership’s shoulders. SEBI draws its rationale from IOSCO’s global guidance and insists on clear accountability and senior management approval for AI systems that affect clients. This approach ensures firms cannot offload responsibility onto vendors or bury it within technical teams. However, it is essentially a bet on boardroom accountability. It mirrors the compliance culture regulators are comfortable with, but ignores a hard truth: most senior leaders lack the AI/ML expertise needed to manage these risks meaningfully.


The Skills Gap at the Top: An Unbridgeable Divide?


According to a Deloitte survey, 86% of financial services AI adopters considered AI to be critically important to their success, and 90% of Indian financial institutions are focusing on AI and GenAI as the primary technology enablers of innovation. IT costs in Indian banks have grown at a compounded annual rate of 17.4% through FY25, the highest growth among categories of operating expenses. When the presence of AI is this pervasive, the need for accountability becomes unquestionable. C-suite executives globally now treat AI as a top priority, yet there is a major gap between strategic vision and actual technical capacity at leadership levels.


A study in 2024 revealed that 31% of Indian firms cite a lack of in-house AI talent as a key obstacle, 28% flag data governance challenges, and 41% of senior managers themselves lack confidence in AI. A 2022 PwC study and a 2023 IIMA study on Indian BFSI companies and NBFCs showed that just 11% of these companies qualify as “leaders”, whereas “laggards” (companies where management upskilling is urgently needed for competitiveness) represent two-thirds of the sample.


Why Does This Gap Persist, Even After Almost a Decade of AI in the Financial Sector?


The reasons appear to be threefold. First, most Indian C-suite leaders come from traditional finance or compliance backgrounds, not technology or data science. Second, the pace of AI/ML advances outstrips boardroom upskilling: by the time one wave of AI technology is understood, new capabilities and paradigms have already emerged, making AI governance and strategic leadership a moving target. Lastly, high-level, domain-specific AI training is expensive and time-consuming, especially for senior leaders already occupied with legacy risk and regulatory issues.


Evidently, until Indian financial leaders acquire or embed true AI/ML expertise at the top, ambitious digital transformation and governance objectives will remain constrained. Despite this, SEBI’s consultation paper emphasizes senior management accountability for AI/ML governance without offering clarity on how individuals lacking deep technical expertise are to assume AI governance liability effectively.


Comparative Review: How Global Peers Address AI/ML Accountability


SEBI’s consultation paper leaves significant ambiguity in the allocation of liability, especially when viewed through the lens of the Indian financial ecosystem, since it assumes that senior management accountability can be operationalized without addressing the critical knowledge gaps these actors face. A comparative review of leading jurisdictions reveals this imbalance. Although global regulators similarly grapple with balancing innovation and accountability, they have moved toward policy designs and enforcement mechanisms that address this gap more concretely.


US SEC/FINRA's Technical AI Committees and Certification


In the United States, the Securities and Exchange Commission and the Financial Industry Regulatory Authority (FINRA) rely heavily on cross-functional technology governance to oversee AI risks. Firms are expected to establish an internal AI oversight committee consisting of compliance, legal, technology, and risk professionals to comprehensively supervise an AI model’s performance. FINRA’s regulatory obligations require firms to maintain a reasonably designed supervisory system, with policies and procedures that address technology governance. Certification and registration requirements for personnel involved in AI design and operation further reinforce this designated accountability system.


European Union's AI Act 


The AI Act, a European regulation, is the first comprehensive regulation of AI by a major regulator worldwide. The European Parliament has established a working group to oversee the implementation of the AI Act. The Act makes the appointment of designated AI leads and the maintenance of a quality management system, which are responsible for ensuring compliance, a mandatory requirement. Further, it mandates maintaining detailed technical documentation and conducting periodic risk assessments, which these designated leads must manage or supervise. This approach reflects an explicit regulatory intent to place skilled technical oversight at designated accountability points.


Singapore MAS


The Monetary Authority of Singapore (MAS) emphasizes the formation of multidisciplinary AI governance committees to support the board and senior management in exercising oversight over banks’ use of AI. It also strongly encourages collaboration between regulators and industry participants through initiatives such as Project MindForge.


These international examples share one common recognition: effective AI accountability demands clearly defined roles that are backed by technical expertise and supported by multidisciplinary oversight structures. In contrast, SEBI places an overarching liability on senior management. Bridging these governance gaps is necessary to maintain monetary and financial stability. However, as long as SEBI continues to rely on broad principles, it misses the opportunity to lay a strong foundation for other financial entities to build upon: a reliable structure for AI/ML governance that caters to the deficiencies and needs of the Indian market.


The Path Forward


SEBI urgently needs to recognize that, to become a facilitator of responsible AI innovation, it must put in place an effective regulatory framework that addresses the problems AI/ML poses through its inherent complexity and uncertainty. There must be clear responsibility for, and ownership of, AI/ML-driven decisions within a financial institution. However, instead of a blanket imposition of responsibility on senior management, there needs to be a clearly distributed accountability model. Central to this approach is the formal creation of dedicated interdisciplinary AI governance committees within financial institutions.


This aligns with current international norms and is essential: given the wide-ranging impacts of AI risk, the committee must be interdisciplinary. Additionally, a dedicated committee would ensure that accountability is both fairly allocated and actively monitored in real time. This body should review policies, issue guidance, and address emerging challenges and potential gaps in risk management. It is also essential that institutions using AI/ML proactively establish plans to raise awareness and upskill both their staff and senior executives, enabling them to effectively incorporate AI into their governance frameworks. In sum, SEBI’s role should evolve from that of a prescriptive regulator to an enabler of responsible AI deployment, encouraging structured, expert-led governance frameworks and fostering a culture of continuous learning and accountability across India’s financial ecosystem.


