Algorithmic Credit Risk Assessment and RBI’s Ignorance: Time for Action
- Shivam Agrawal, Disha Daga
- Aug 9
[Disha and Shivam are students at Hidayatullah National Law University.]
The rise of fintech entities across the world has ushered in a new era of technologically oriented financial solutions. These entities have transformed the traditional lending ecosystem by redefining the disbursal of loans, smoothing the borrower-lender interaction, and removing traditional bottlenecks by replacing manual, time-consuming channels with automated and faster mechanisms.
The digital lending market, in recent times, has witnessed the integration and use of algorithms and artificial intelligence (AI) in assessing the creditworthiness of borrowers. Algorithm-driven lending models increase efficiency and promote financial inclusion by moving beyond traditional standards of assessment. However, this innovation has also raised concerns such as algorithmic biases and consumer discrimination.
In this context, given that India’s digital lending market is poised to grow to approximately USD 515 billion by 2030, it was expected that the Reserve Bank of India (RBI) would take steps to regulate the use of AI models in such markets. However, the recent RBI (Digital Lending) Directions 2025 (2025 Directions) fail to address the challenges associated with the use of AI models, creating a regulatory vacuum. This article first explores the functioning of AI models in assessing creditworthiness. Second, it examines the gaps in the 2025 Directions in addressing the challenges posed by these models. Third, it draws guidance from international frameworks to suggest policy reforms and the way forward for India.
Growth of AI Use in Credit Risk Assessment: The Good and the Bad
As part of prudent lending practices, credit risk assessment is essential for the lender to evaluate the borrower’s capacity to repay and to reduce the possibility of default. Of late, fintech entities have started incorporating AI and machine learning (ML) models to improve credit risk assessment operations. These models can analyze vast amounts of data relating to the income, transactions, and credit history of borrowers. They can also provide real-time credit monitoring to identify patterns and reveal potential risks.
AI-based credit risk assessment offers a plethora of advantages over traditional assessment methods. The use of AI and ML reduces the risk of data entry errors and streamlines the process by eliminating subjectivity. Unlike traditional methods, algorithms often draw on large data sets from varied sources, such as consumer browsing history, spending patterns, and social media activity. Equipped with real-time monitoring, AI models can make more accurate predictions of creditworthiness.
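To make the mechanics concrete, the following is a minimal, purely hypothetical sketch of how an algorithmic scorer might combine traditional signals with alternative data into a single creditworthiness score. The feature names, weights, and threshold are invented for illustration and do not reflect any actual lender’s model.

```python
# Illustrative toy model only: a linear credit scorer combining
# traditional signals with alternative-data signals. All weights
# and feature names are assumptions made for demonstration.

def credit_score(features: dict) -> float:
    """Return a score in [0, 1]; higher means lower estimated default risk."""
    weights = {
        "income_stability": 0.4,      # traditional signal
        "repayment_history": 0.35,    # traditional signal
        "spending_regularity": 0.15,  # alternative-data signal
        "account_tenure": 0.10,       # alternative-data signal
    }
    # Each feature value is assumed pre-normalized to [0, 1].
    return sum(weights[k] * features.get(k, 0.0) for k in weights)

def decide(features: dict, threshold: float = 0.6) -> str:
    """Map the numeric score to an automated lending decision."""
    return "approve" if credit_score(features) >= threshold else "decline"

applicant = {
    "income_stability": 0.9,
    "repayment_history": 0.8,
    "spending_regularity": 0.7,
    "account_tenure": 0.5,
}
print(decide(applicant))  # this profile scores 0.795, clearing the 0.6 threshold
```

Real systems replace the hand-set weights with parameters learned from historical data, which is precisely where the bias and opacity concerns discussed below arise.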
While AI-based models enhance the overall process, they come with their share of risks and challenges. Algorithmic models can perpetuate biases when trained on discriminatory data inputs, leading to arbitrary credit risk assessments. They operate as “black boxes”, which makes it difficult to identify such biases. This lack of transparency erodes borrowers’ confidence and undermines the reliability of the assessment process. Additionally, their heavy reliance on borrowers’ personal data risks invading consumer privacy.
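A small synthetic example illustrates why excluding a protected attribute does not, by itself, prevent discrimination: a correlated proxy feature can reproduce the bias. The data, feature names, and weights below are all hypothetical.

```python
# Illustrative only: even when the model never sees the protected
# attribute ("group"), a correlated proxy ("pincode_score" here)
# can recreate the disparity. All data is synthetic.

applicants = [
    {"group": "A", "pincode_score": 0.9, "repayment_history": 0.7},
    {"group": "A", "pincode_score": 0.9, "repayment_history": 0.6},
    {"group": "B", "pincode_score": 0.2, "repayment_history": 0.7},
    {"group": "B", "pincode_score": 0.2, "repayment_history": 0.6},
]

def score(a: dict) -> float:
    # The scorer uses only the proxy and the repayment record,
    # never the group label itself.
    return 0.5 * a["pincode_score"] + 0.5 * a["repayment_history"]

def approval_rate(group: str) -> float:
    members = [a for a in applicants if a["group"] == group]
    return sum(score(a) >= 0.6 for a in members) / len(members)

# Identical repayment histories, yet sharply different outcomes:
print(approval_rate("A"), approval_rate("B"))  # prints: 1.0 0.0
```

Because the disparity emerges from the training data rather than any explicit rule, an auditor inspecting only the model’s inputs and outputs may struggle to detect it, which is the essence of the black-box problem.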
As there is a need to address these concerns, it is important to examine how the recent 2025 Directions have sought to resolve these issues.
RBI Digital Lending Directions 2025: A Missed Opportunity for Addressing AI Concerns
On 8 May 2025, the RBI introduced the 2025 Directions, providing for a consolidated framework on digital lending and fulfilling the objectives of consumer protection, transparency, and accountability. These directions replace the 2022 Guidelines on Digital Lending.
Paragraph 7 of the 2025 Directions mandates that a regulated entity (RE), i.e., an entity providing digital lending services, assess borrowers’ creditworthiness by taking into account factors like their age, occupation, and income details. The RE must ensure that credit limits are not increased without the borrower’s explicit request. The Directions also introduce significant changes, such as enhanced transparency measures for partnerships between lending service providers (LSPs) and REs, an improved reporting mechanism, and a consumer grievance redressal mechanism.
Despite this comprehensive approach, the 2025 Directions make a glaring omission by not regulating the increasing use of AI and ML models in credit risk assessment. They do not recognize how banks and other lenders have moved past manual methods and adopted algorithmic tools for evaluating creditworthiness. This regulatory gap is concerning because the use of AI challenges the existing framework of data privacy, transparency, and consumer protection.
These AI models may be trained on borrowers’ personal data, and such reliance may not be obvious given that the models operate as a “black box”. In this context, Paragraph 12 of the 2025 Directions mandates obtaining explicit borrower consent before such data is used, and Paragraph 13 bars LSPs from storing borrowers’ personal information. While this offers some relief by protecting borrowers’ privacy, the framework does not explicitly and specifically acknowledge the use of AI, whose challenges go beyond the horizons of the present provisions.
The Directions also fail to address transparency requirements, auditing measures, and the biases inherent in the use of AI. This matters because regulators in other jurisdictions, including the European Union (EU) and the United States (US), have specifically identified these concerns and sought to tackle them proactively.
International Regulatory Perspectives
The EU has introduced the Artificial Intelligence Act 2024 (AI Act), which, inter alia, regulates the use of AI models in assessing credit risk. It classifies these models as “high-risk” since they can affect access to financial services or essential services like housing and can perpetuate discriminatory biases. As high-risk AI systems, they are subject to various compliance requirements. First, a risk management system must be established to identify the risks posed to the health, safety, or fundamental rights of persons and to suggest appropriate mitigating measures. Second, the Act incorporates transparency measures by mandating that such AI systems be designed so that their outputs can be interpreted, and that they be accompanied by instructions for use. Third, it provides for a human oversight mechanism commensurate with the risks of the AI technology and the level of autonomy accorded to it. Beyond these, the Act also sets standards for accuracy of output, technical documentation, and data governance.
In the US, the use of algorithms in credit risk assessment is regulated through a combination of legal frameworks, including the Fair Credit Reporting Act 1970, the Equal Credit Opportunity Act 1974 (ECOA), and the recent Consumer Financial Protection Bureau Circular 2022-03 (Circular 2022-03). ECOA and Regulation B stipulate that creditors must provide applicants with specific reasons when denying credit. Circular 2022-03 clarifies that creditors who use complex algorithms in credit risk assessment cannot invoke the complexity of their AI models as a defence for failing to state accurate reasons for their credit decisions. This holds creditors responsible and keeps the overall lending process transparent, despite the use of opaque “black-box” models.
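In practice, reason-giving of the kind ECOA contemplates requires lenders to attribute an adverse decision to specific input factors even when the decision was automated. The sketch below shows one hypothetical way a lender might do this for a simple linear scorer, by ranking each factor’s weighted shortfall; the feature names, weights, and reason texts are invented and are not drawn from any regulatory template.

```python
# Illustrative only: deriving specific "adverse action" reasons from
# an automated scoring model, in the spirit of ECOA's reason-giving
# duty. Weights, features, and reason texts are assumptions.

WEIGHTS = {
    "repayment_history": 0.5,
    "income_stability": 0.3,
    "account_tenure": 0.2,
}
REASONS = {
    "repayment_history": "History of delayed or missed repayments",
    "income_stability": "Insufficient or irregular income",
    "account_tenure": "Limited length of credit history",
}

def adverse_action_reasons(features: dict, threshold: float = 0.6,
                           top_n: int = 2) -> list:
    """Return the principal reasons for denial, or [] if credit is granted."""
    score = sum(WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS)
    if score >= threshold:
        return []  # credit granted; no adverse-action reasons owed
    # Rank factors by how far each falls short of a perfect input,
    # weighted by its importance to the final score.
    shortfalls = {k: WEIGHTS[k] * (1.0 - features.get(k, 0.0)) for k in WEIGHTS}
    ranked = sorted(shortfalls, key=shortfalls.get, reverse=True)
    return [REASONS[k] for k in ranked[:top_n]]

print(adverse_action_reasons(
    {"repayment_history": 0.2, "income_stability": 0.8, "account_tenure": 0.5}
))
```

For genuinely non-linear models, lenders would need model-explanation techniques rather than this direct weight decomposition, but the regulatory point stands either way: opacity of the model does not excuse the duty to state reasons.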
In Singapore, the Monetary Authority of Singapore, in collaboration with the Personal Data Protection Commission and the Infocomm Media Development Authority, has laid down principles for promoting Fairness, Ethics, Accountability, and Transparency (FEAT Principles) for the use of AI in the financial sector. The fairness principle emphasizes justifiability and accuracy of the decisions made by AI models. The ethics principle is based on the incorporation of minimum ethical standards in these models. The principles of accountability and transparency provide for oversight mechanisms, both internal and external, and effective communication strategies. While Singapore does not offer specific rules, these principles are significant in determining policy decisions concerning the use of AI in the digital lending segment.
Charting the Way Forward for India
The use of AI and ML models in assessing creditworthiness, despite its risks, remains unrecognized and unregulated in India. Drawing on international practices in addressing the challenge of the ‘black-box’ model in the financial sector, the following policy reforms would suit India.
First, India must explicitly recognize the use and role of AI in credit risk assessment. This recognition must not be piecemeal, limited only to those aspects of AI that relate to data protection. A comprehensive approach is necessary to tackle all the risks created by AI, from data collection to output generation.
Second, India should provide guidance on holding entities responsible for stating the rationale behind their credit decisions, even when such decisions are taken by AI models. A parallel can be drawn with the US’s Circular 2022-03, which limits the scope for entities to justify adverse decisions by pointing to their reliance on AI.
Third, on the lines of the EU’s AI Act, India must incorporate specific standards and compliance requirements for such models. It should mandate the establishment of risk management systems, human oversight mechanisms, and provide specific standards of accuracy and transparency. These standards must be devised after consultation with concerned stakeholders to ensure their effectiveness. A joint working group, consisting of industry experts and government officials, must be formed to draft a basic code of conduct for the appropriate use of AI in assessing creditworthiness, based on Singapore’s FEAT Principles.
Finally, the RBI must, through a circular, provide a mechanism for sectoral interaction so that the authorities under the Digital Personal Data Protection Act 2023 and the RBI, along with other bodies like the Ministry of Electronics and Information Technology, can cooperate effectively and ensure oversight.
Conclusion
As India witnesses the rapid expansion of digital lending in its financial sector, it becomes increasingly important to redefine its regulatory framework accordingly. While the use of AI and ML models in credit risk assessment improves overall efficiency and scalability, challenges like algorithmic bias, data protection risks, and lack of transparency call for prompt intervention.
Taking lessons from the EU’s AI Act, the US’s Circular 2022-03, and Singapore’s FEAT Principles, India can craft a forward-looking regulatory framework that balances innovation with ethical standards. By embedding the principles of fairness and responsibility and upholding consumer privacy, India can not only boost borrower trust but also ensure financial inclusion in an equitable manner.
