Will a borrower repay a loan? This is the fundamental question behind any credit approval decision. Lenders have long invested in processes that improve their ability to predict a given counterparty's risk of default or, from a balance-sheet perspective, a particular loan portfolio's aggregate credit risk.
Traditionally, banks have produced internal credit scores, which combine quantitative data points (namely financial ratios) with subjective factors (market forecasts, reputation and so on) in order to evaluate a borrower's credit risk. The last decade, however, has seen significant changes in how lenders grant and manage credit risk. This is driven in part by a regulatory push, namely the Basel accords, which oblige financial institutions to produce sophisticated risk analysis. Another factor is market pull as a function of technological advances, namely increases in data storage capacity and improvements in processing power.
In particular, the application of artificial neural networks to modelling credit risk has seen significant investment. Information flowing through the network adjusts its internal weights: the network effectively learns from the inputs and outputs it observes. Traditional lending processes use linear statistical methods to model a given borrower's credit risk within the context of qualitative market and structural analysis. With the required data and processing power, non-linear statistical models deployed in artificial neural networks give creditors the opportunity to combine disparate information points and identify complex relationships and patterns. These are revised over time as the model continues to learn.
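The difference between linear and non-linear models can be made concrete with a toy sketch. The two "borrower features" below are hypothetical and deliberately simplified: their interaction follows an XOR-like pattern that no linear scorecard can separate, while a small neural network with one hidden layer can. This is an illustrative NumPy implementation, not a production credit model.

```python
import numpy as np

# Synthetic, illustrative data: two binary borrower features whose
# interaction (XOR) determines the default label. A linear model cannot
# classify all four feature combinations correctly; a non-linear one can.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(400, 2)).astype(float)
y = (X[:, 0] != X[:, 1]).astype(float).reshape(-1, 1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 8 units, trained by full-batch gradient descent
# on binary cross-entropy loss.
W1 = rng.normal(0.0, 1.0, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 1.0, (8, 1)); b2 = np.zeros(1)
lr = 1.0

for _ in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of cross-entropy w.r.t. each parameter
    d2 = (p - y) / len(X)
    dW2, db2 = h.T @ d2, d2.sum(0)
    d1 = (d2 @ W2.T) * h * (1 - h)
    dW1, db1 = X.T @ d1, d1.sum(0)
    # Weight update: this is the "learning" step the text describes
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

preds = (sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5).astype(float)
accuracy = (preds == y).mean()
```

After training, the network separates the interacting features almost perfectly, whereas the best linear decision boundary on this data misclassifies one of the four feature combinations. Real credit models use many more features and regularised training, but the mechanism, iteratively revising weights based on observed inputs and outputs, is the same.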
While increasing a lender's predictive power in relation to default risk has obvious benefits for loan credit approval, this application of artificial intelligence will also allow lenders to price loans for risk more accurately and will support decisions on customer and market lending strategy. There are, of course, downsides to automated credit approval, particularly in the consumer context. Bias in source data can yield biased credit outcomes, and similarly, algorithms designed to increase the weighting of certain data features (university education, for example) may inadvertently amplify pre-existing structural biases. The risk that large groups of borrowers are unintentionally yet systematically denied credit should be assessed and mitigated.
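One simple way to assess the risk of systematic denial is a post-hoc disparate-impact check on approval outcomes. The sketch below applies the "four-fifths" (80%) rule, a common screening heuristic: if one group's approval rate falls below 80% of another's, the model's outcomes warrant closer review. The group labels and approval data here are entirely hypothetical.

```python
import numpy as np

# Hypothetical approval decisions (1 = approved) for two borrower groups.
approvals = {
    "group_a": np.array([1, 1, 1, 0, 1, 1, 0, 1, 1, 1]),
    "group_b": np.array([1, 0, 0, 1, 0, 1, 0, 0, 1, 0]),
}

# Approval rate per group.
rates = {group: a.mean() for group, a in approvals.items()}

# Four-fifths rule: ratio of the lowest to the highest approval rate.
impact_ratio = min(rates.values()) / max(rates.values())
flagged = impact_ratio < 0.8  # below 0.8 suggests possible disparate impact
```

A check like this does not explain *why* outcomes diverge; it only flags that they do, prompting investigation of the features and training data responsible.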