In today’s environment, where the race to acquire information is intensifying and data is often called the new oil, financial institutions find themselves in an ambiguous position. Banks hold some of the largest amounts of data of any industry (relating to customers, operations managed, contracts signed), but they often possess far more than they actually use to improve their processes or decisions. Solutions to make the most of the available information, and so gain in attractiveness and efficiency over competitors, have proliferated.
The process that leads from data to analyses and useful suggestions for improving business and commercial performance is a complex one. Even in the initial phase, that of data collection, some aspects must be monitored and defined carefully so as not to compromise the output:
- Data quality must be improved continuously; since data is never perfect, it needs to be analysed promptly and re-selected from time to time.
- Too much data can be a double-edged sword. Highly complex information can be a great asset in identifying new business opportunities, but it can also slow down analysis and increase its margin of error. Often this information is redundant and therefore of little use to the bank.
- To choose which data to acquire and what type of analysis to carry out on it, it is essential to start from the real needs of the business and the related processes to be optimized.
To extract all the necessary information from the data, financial institutions need a wide range of predictive models and Artificial Intelligence algorithms capable of handling large volumes and varieties of data. Using software that makes these technologies available inevitably requires technical support, but also the involvement of an analytical function in the various business divisions. In processes of this kind, it is essential that teams such as data science, marketing and IT collaborate and share strategies, so that everyone involved has a complete understanding of the models.
Models and Use Cases in Banking
The models presented above can be applied in a variety of areas to improve the efficiency and profitability of banks.
One area of application can be the analysis of customer insolvencies. In particular, by studying new data sources, such as current account movements, it is possible to better understand behaviour and predict possible insolvencies through the adoption of new risk parameters.
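As a sketch of how such risk parameters might feed a prediction, the toy example below derives two features from the daily balances of a current account and maps them to an insolvency probability with a logistic function. The balance data, the features and the weights are all hypothetical; in practice the weights would be estimated from historical defaults:

```python
# Sketch: deriving risk parameters from current-account movements and
# scoring them with a logistic function. All values are illustrative.
import math

# Daily balance history for one account (hypothetical).
daily_balances = [1500, 900, -200, -350, 400, 1200, -100, 800, 950, 1100]

# New risk parameters derived from the movements.
overdraft_ratio = sum(b < 0 for b in daily_balances) / len(daily_balances)
avg_balance = sum(daily_balances) / len(daily_balances)

def insolvency_score(overdraft_ratio, avg_balance,
                     w0=-2.0, w1=6.0, w2=-0.001):
    """Logistic score in [0, 1]; the weights are made-up placeholders."""
    z = w0 + w1 * overdraft_ratio + w2 * avg_balance
    return 1.0 / (1.0 + math.exp(-z))

score = insolvency_score(overdraft_ratio, avg_balance)
print(f"estimated insolvency risk: {score:.2f}")
```

The point of the sketch is the pipeline shape: behavioural movements become explicit risk parameters, which a transparent model then turns into a probability a risk officer can inspect.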
A further area of modelling concerns support for managers. Through clustering models, it is possible to provide advanced support in the relationship with the customer: not only identifying the most suitable products for customers with specific characteristics, but also defining the optimal pricing for each product based on the customers' propensity to consume and their price elasticity.
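The segmentation step can be illustrated with a minimal k-means over two hypothetical customer features (age and average monthly spend). The data, the choice of k=2 and the deterministic initialisation are assumptions made for the example:

```python
# Sketch: segmenting customers with a minimal k-means on two features,
# (age, average monthly spend). Data and k=2 are illustrative assumptions.

customers = [
    (25, 300), (27, 350), (24, 280),     # younger, lower-spend profile
    (55, 1200), (60, 1500), (52, 1100),  # older, higher-spend profile
]

def kmeans(points, k, iters=20):
    # Deterministic initialisation: the first k points are the starting centroids.
    centroids = list(points[:k])
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda i: sum((a - b) ** 2
                                            for a, b in zip(p, centroids[i])))
            clusters[nearest].append(p)
        # Update step: each centroid becomes the mean of its cluster.
        centroids = [
            tuple(sum(v) / len(c) for v in zip(*c)) if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids, clusters

centroids, clusters = kmeans(customers, k=2)
print(centroids)
```

Each resulting segment could then be matched to suitable products and priced separately, which is where propensity and elasticity estimates would come in.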
The models implemented must be tested and validated against the results obtained from the first analyses. One of the most popular evaluation methods is to build different forecast, or what-if, scenarios that predict the possible impact of varying certain parameters.
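A what-if scenario of this kind can be sketched by varying one parameter of a simple model and comparing outcomes against a baseline. The linear demand curve and its coefficients below are illustrative assumptions, not a model taken from the text:

```python
# Sketch: a what-if comparison of revenue under different price scenarios.
# The linear demand model and all coefficients are made up for illustration.

def expected_volume(price, base_demand=1000.0, elasticity=40.0):
    """Simple linear demand: volume falls as price rises."""
    return max(base_demand - elasticity * price, 0.0)

def revenue(price, **params):
    return price * expected_volume(price, **params)

baseline = revenue(12.0)  # current price as the baseline scenario
scenarios = {f"price={p}": revenue(p) for p in (10.0, 12.0, 14.0, 16.0)}
for name, r in scenarios.items():
    print(f"{name}: revenue {r:.0f} ({r - baseline:+.0f} vs baseline)")
```

Scanning the scenarios makes the impact of each parameter change explicit before any price is altered in production.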
AI models can be very effective when a large amount of data is available from which it is possible to identify unknown relationships between variables or provide particularly accurate results (as a rule, in AI, a result with an accuracy of no less than 95% is considered acceptable). These models, however, must fit into the complex landscape of a banking institution and must therefore be subject to strict regulatory parameters, often requiring hybrid modelling.
Non-deterministic vs deterministic models
In this field, tools that combine Artificial Intelligence techniques with mathematical-statistical models are widely applied, such as generalized linear models (GLMs) or generalized additive models (GAMs), currently in broad use by financial institutions to support the definition of pricing.
Deterministic models are those which, to simplify, contain no random elements and which, given certain values of the exogenous variables and parameters, always produce the same result. Models of this type remain the most acceptable and attractive to banks and regulators because they offer:
- Greater transparency, allowing the nature of the outcome to be known in advance;
- Greater control, through targeted adjustments to a variable;
- Possible new scenarios through limited changes in parameters.
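The deterministic property, and the targeted adjustments it allows, can be illustrated with a tiny logistic GLM with fixed coefficients. Both the coefficients and the inputs are made up for the example; in practice they would be estimated from historical data:

```python
# Sketch: a deterministic GLM (logistic link, fixed coefficients).
# Coefficients and inputs are illustrative placeholders.
import math

COEFFS = {"intercept": -1.5, "income_k": 0.02, "prior_defaults": -0.8}

def glm_score(income_k, prior_defaults):
    """Logistic GLM: identical inputs always yield identical outputs."""
    z = (COEFFS["intercept"]
         + COEFFS["income_k"] * income_k
         + COEFFS["prior_defaults"] * prior_defaults)
    return 1.0 / (1.0 + math.exp(-z))

# Determinism: the same exogenous values give the same result on every call.
assert glm_score(50, 0) == glm_score(50, 0)

# Controlled adjustment: changing one variable produces a traceable new scenario.
print(glm_score(50, 0), glm_score(50, 1))
```

Because the outcome is a fixed function of the inputs, an auditor can trace exactly how each variable moved the score, which is the transparency and control the list above describes.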
These models will increasingly be paralleled by non-deterministic AI models that allow deeper analysis of the data and the identification of outputs able to guarantee a greater competitive advantage.