Risk data journey: from challenges to opportunities
Advanced data platforms and global data frameworks for managing risk have become essential in today's economic and competitive financial landscape.
by
Stephane Rio
October 24, 2022
Regulatory reporting requirements and time-sensitive business data demands are forcing financial institutions to confront the challenge of defining a global data journey strategy. At stake is the ability to constantly access, aggregate and run impact simulations on heterogeneous data sets that grow larger every day.
Institutions face demands from regulators for detailed reporting, relating for example to Basel IV, AnaCredit and Loan Origination Monitoring, while business users need to analyse data on demand and in a timely manner, for instance for a COVID analysis or an ESG impact study. Indeed, the same challenges apply across all areas of risk management, from Market Risk to Credit Risk and from Liquidity Risk to Climate Risk.
The current economic and competitive context is pushing financial institutions to establish a global data framework that responds to these new needs by overhauling their reporting production chains, while striking the right balance between operational performance and cost.
What makes this so challenging for financial institutions are the multiple constraints they face in meeting their data and reporting needs. They must take into account the complexity of their organisation, legacy IT systems and the reliability of information, while complying with regulatory imperatives that demand ever greater granularity and consistent certification across all reports.
Even though reporting production chains struggle with the exponential increase in requests, this situation is an opportunity to improve access to the detail within the data and to strengthen risk management through a robust, data-driven decision-making process.
Every business has a gold mine of data, the value of which can be realised as a lever for optimising growth and defining business strategies.
To achieve this without compromising operational excellence, production teams must be able to handle huge amounts of data easily and autonomously, without technical limitations or performance degradation, something that can prove costly for financial institutions relying on traditional infrastructure solutions.
Let’s analyse the various challenges along the ‘Risk data journey’ and production cycle and explore how technology can alleviate them.
As illustrated below, there are typically five steps to this journey, each with associated challenges. Note that producing any report often requires the user to go through these steps iteratively or in a variable sequence.
Production & monitoring: The first challenge, prior to manipulating any data, is to extract the relevant data sets from data lakes or from various IT systems. This may mean collecting massive volumes of data, which can be very costly to store without a data management strategy and a highly scalable environment. This step traditionally requires business requirements to be pre-defined, with no possibility of adjusting the expected data afterwards; likewise, the way IT teams prepare and provide the raw data must be specified in advance, with limited flexibility or at a high infrastructure cost. A scalable solution that overcomes the inflexibility of legacy systems is a “must” for an efficient extraction process, letting users access any relevant data quickly without compromising adaptability or data quality. Preparing the data can be complex and time-consuming, requiring a framework of controls to monitor that input flows feed correctly and to raise awareness of any data quality issues that may affect usage.
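The framework of input-flow controls described above can be sketched in a few lines. This is a minimal illustration, not a production design: the field names (`trade_id`, `notional`, `currency`) and the rules are hypothetical stand-ins for whatever a real feed specification would define.

```python
# A minimal sketch of input-flow quality controls. Field names and rules
# are hypothetical examples, not a real feed specification.
REQUIRED_FIELDS = {"trade_id", "notional", "currency"}

def check_record(record):
    """Return the list of quality issues found in one raw record."""
    issues = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        issues.append(f"missing fields: {sorted(missing)}")
    if "notional" in record:
        try:
            if float(record["notional"]) < 0:
                issues.append("negative notional")
        except (TypeError, ValueError):
            issues.append("non-numeric notional")
    return issues

def monitor_feed(records):
    """Collect issues per record so problems surface before any usage."""
    report = {}
    for i, rec in enumerate(records):
        issues = check_record(rec)
        if issues:
            report[rec.get("trade_id", f"row_{i}")] = issues
    return report
```

A real implementation would attach such checks to each input flow so that a broken feed is flagged at load time rather than discovered in a final report.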
Transformation & controls: Once the data is accessible, the user needs to be able to perform massive aggregations dynamically, from a very granular level, to allow deep-dive analysis later on. It is also important at this stage to correct any erroneous data or to enrich the data's granularity directly within the dataset. Manual adjustments are a daily concern for reporting teams, taking up most of the report production time and carrying operational risk with them. In this context, a big data solution becomes all the more relevant by offering the ability to track and secure any required data changes while providing an audit trail of the report production process.
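The two ideas in this step, audited manual adjustments at granular level and dynamic aggregation on any dimension, can be illustrated with a toy sketch. The record layout (`desk`, `exposure`) and user names are assumptions for the example only.

```python
# Sketch of audited adjustments plus dynamic aggregation.
# Record fields ("desk", "exposure") are hypothetical examples.
from collections import defaultdict
from datetime import datetime, timezone

audit_trail = []  # every manual change is logged for the audit trail

def adjust(records, row_id, field, new_value, user):
    """Apply a manual correction at granular level and log who changed what."""
    old = records[row_id][field]
    records[row_id][field] = new_value
    audit_trail.append({"row": row_id, "field": field, "old": old,
                        "new": new_value, "user": user,
                        "at": datetime.now(timezone.utc).isoformat()})

def aggregate(records, by, measure):
    """Aggregate dynamically on any dimension, keeping the granular detail."""
    totals = defaultdict(float)
    for rec in records.values():
        totals[rec[by]] += rec[measure]
    return dict(totals)
```

Because corrections happen on the granular records rather than on the aggregates, any roll-up recomputed afterwards stays consistent with the audit trail.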
Calculation: The next stage, involving simulation or analysis, requires the calculation of key metrics. Often these are handled by complex calculation engines outside the big data platform, which may interrupt the production process and leave users dealing with aggregated metrics that are insufficient for drill-down investigation. For consistency in production, detailed calculations should come either from the official calculation engine using the most granular information or from an embedded tool with the capacity to “simulate” metrics or impacts dynamically. This keeps the targeted process running smoothly and addresses the challenge of immediately understanding the impact of any data transformation, correction or stress scenario simulation. Furthermore, a fully open solution gives the business the ability to access and fully understand the lineage of a calculation process, preventing the black-box effect of an aggregated calculation.
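The difference between adjusting an aggregate and re-running the calculation from granular inputs can be shown with a toy metric. The risk-weighted-exposure formula below is a deliberately simple stand-in, not any official regulatory calculation.

```python
# Sketch: recompute a metric from granular positions under a stress shock,
# rather than shocking the aggregate. The metric itself is a toy example.
def metric(positions):
    """Aggregated metric built from the most granular positions, so every
    contribution in the lineage stays visible."""
    return sum(p["exposure"] * p["risk_weight"] for p in positions)

def simulate(positions, shock):
    """Re-run the same granular calculation under a stress scenario."""
    shocked = [{**p, "exposure": p["exposure"] * (1 + shock)}
               for p in positions]
    return metric(shocked)
```

Because `simulate` reuses the same `metric` function on shocked granular inputs, a user can drill down into any position's contribution instead of facing a black-box aggregate.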
Analysis & Certification: Then comes the main challenge: performing multiple analyses, understanding and investigating them, and checking the reported metrics before final submission. Adding an AI-based algorithmic approach to this analysis significantly increases efficiency and reduces the time needed for this critical step. Whether explaining a control at any level or validating reporting accuracy, the final user depends on local ownership of a data point or group of data. This is where a solution that manages the workflow validation process is key to supporting the global certification of a report.
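A very small example of the algorithmic checks mentioned here is a deviation test that flags a reported metric drifting away from its own history; real AI-based controls would be far richer, and the threshold below is an arbitrary illustration.

```python
# Sketch of a pre-certification sanity check: flag a metric whose latest
# value deviates strongly from its history. A stand-in for richer AI checks.
from statistics import mean, stdev

def flag_outlier(history, latest, threshold=3.0):
    """Return True when the latest value sits more than `threshold`
    standard deviations away from the historical mean."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return latest != mu
    return abs(latest - mu) / sigma > threshold
```

Flagged metrics would then be routed to the relevant data owner through the validation workflow before the report is certified.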
Restitution: The last step of the data journey is to display the results of the analysis, or the metrics, through various dashboards shared with internal or external stakeholders. Since producing the dashboard content is one of the teams' main workloads, the data visualisation layer needs to be automated and flexible enough to evolve over time, so analysts can focus on higher-value tasks. Simplifying this step is essential to support information-sharing from a common source of up-to-date data.
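Automating restitution from a single common source can be reduced, in sketch form, to rendering whatever the shared data set currently contains, so every stakeholder sees the same figures. The metric names used below are invented for the example.

```python
# Sketch: render a shareable summary from one common metrics source.
# Metric names are hypothetical; a real layer would feed BI dashboards.
def render_dashboard(metrics):
    """Produce a plain-text summary, regenerated from up-to-date data."""
    return "\n".join(f"{name:<12} {value:>12,.2f}"
                     for name, value in sorted(metrics.items()))
```

Because the view is regenerated from the shared source on each call, there is no stale hand-maintained copy to reconcile.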
Financial institutions have shown a healthy appetite for advanced data platforms because of their direct benefits, which demonstrates why the highly anticipated next technological shift will be a game changer for how data is used.
Opensee’s investment in developing a user-friendly big data solution coupled with business analytics capabilities stems from its goal of giving users at financial institutions the most agile support to carry out advanced analysis and to remove tasks with low added value. Opensee is convinced that full transparency and limitless modularity need to be integral to the design of a big data solution that is future-proof and has no incompatibilities with other systems.
This article appeared in Informa Connect on October 18th, 2022, and TabbFORUM on October 26, 2022.
Following our engagement at Risk Minds International 2022 in Barcelona, Stéphane Rio, Founder & CEO of Opensee, and Julie Louvrier, Senior Sales Executive, have written this blog on the significance of data and technology for managing risks in today's uncertain economic times.