BCBS 239 best practices meet Data Mesh in capital markets

Learn how Financial Risk organizations can benefit from both BCBS 239 practices and Data Mesh pillars to create consumable and explainable datasets.

by
Emmanuel Richard
December 14, 2023

Creating Consumable & Explainable Datasets for Financial Risk

For investment organizations, where high-stakes decisions are made daily on risk hedging, capital exposure, or trade execution quality, the data consumption challenge is rooted in data quality, storage, aggregation, and the execution of coded calculations.

This complication arises primarily because the vast data repositories, often in the form of data lakes, lack practical business usability. The situation is further exacerbated by the prohibitive and unpredictable costs associated with cloud computing and storage, necessary for performing complex, real-time calculations on massive datasets.

Moreover, the critical processes of backtesting, executing 'what-if' scenarios, and providing intraday refreshes are hampered by these limitations.

The crux of the issue lies in the growing need for data consumption against the backdrop of technical debt in data chains, widespread data silos, and the intricate challenge of applying business logic effectively to massive and often unstructured data repositories. This has led to a scenario where accessing, aggregating, and analyzing data in a meaningful and timely manner has become an increasingly burdensome task, both in terms of cost and complexity.

BCBS 239, developed in the wake of the financial crisis, aimed to bolster the banking sector's risk management by improving data practices. It underscored the importance of strong data governance, effective data aggregation, and reliable reporting. However, it emerged at a time when cloud computing and big data were in their infancy.

Data Mesh emerged once big data technologies had matured. It was designed to address the major bottlenecks created by the gravity of data lakes and by IT organizations increasingly run as a shared service across all business lines.

While BCBS 239 focuses on risk analytics practices in the regulated banking sector, Data Mesh was envisioned with a wider application: a shift away from an organizational model that layers responsibilities and architectural components in a project-driven approach, toward a collaborative model built on data as a product, legitimate data ownership, stronger systemic governance, and the removal of technological and structural bottlenecks.

This article dives deep into how Financial Risk organizations can benefit from both BCBS 239 practices and Data Mesh pillars, exploring their compatibility and how their influences can inspire a new model of implementation for creating consumable and explainable risk datasets.

BCBS 239 in Modern Capital Markets

BCBS 239, also known as the Basel Committee on Banking Supervision's standard number 239, was established in response to the 2007-2008 global financial crisis. The crisis highlighted significant shortcomings in banks' information technology and data architecture systems, particularly in risk data aggregation and risk reporting practices. These deficiencies impeded the ability of banks to identify and manage risks effectively.

The primary aim of BCBS 239 is to strengthen banks' risk data aggregation capabilities and internal risk reporting practices. This standard is crucial for enhancing the resilience and risk management practices of banks, particularly those classified as globally systemically important banks (G-SIBs). It sets out principles that these banks must adhere to in order to improve their ability to identify, measure, and manage risk comprehensively and accurately.

The principles of BCBS 239 cover four key areas:

Figure: the four key areas covered by the BCBS 239 principles

BCBS 239 lays out clear principles geared towards refining banks' capability to aggregate and report on risk data, particularly during financial distress. The principles touch on:

  • Scope of Application: All material risk data, across bank levels.
  • Data Architecture and IT Infrastructure: Cohesive and adaptable infrastructure.
  • Accuracy and Integrity: Reliable, up-to-date, and validated data.
  • Completeness: All necessary data attributes and datasets.
  • Timeliness: Reporting frequency aligned with the nature of the risks being monitored.
  • Adaptability: Capable of producing aggregated risk data during crisis situations.

Given the intricacies of quantitative risk dimensions like market, credit, counterparty, and liquidity risk, there is a clear necessity for real-time, scalable, and comprehensive data management solutions. Requirements such as FRTB's Standardized Approach (SA) and Internal Models Approach (IMA), RWA calculations, and liquidity ratios (including NSFR and LCR) further underscore the need for granular data aggregation, simulations, and 'what-if' scenarios.
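To make the aggregation requirement concrete, the sketch below computes a deliberately simplified Liquidity Coverage Ratio (LCR) from illustrative position-level data. The field names and haircuts are assumptions for the example, and the Basel caps on Level 2 assets are omitted; it is not a regulatory implementation.

```python
from dataclasses import dataclass

# Illustrative haircuts by HQLA level (simplified; the Basel framework also
# applies caps on Level 2 assets that are omitted here).
HAIRCUTS = {"L1": 0.00, "L2A": 0.15, "L2B": 0.50}

@dataclass
class Position:
    asset_id: str
    hqla_level: str   # "L1", "L2A", "L2B", or None for non-HQLA assets
    market_value: float

def lcr(positions, net_cash_outflows_30d):
    """Simplified LCR: stock of HQLA / net cash outflows over 30 days."""
    hqla = sum(
        p.market_value * (1 - HAIRCUTS[p.hqla_level])
        for p in positions
        if p.hqla_level in HAIRCUTS
    )
    return hqla / net_cash_outflows_30d

book = [
    Position("GOVT_BOND_1", "L1", 500.0),
    Position("CORP_BOND_1", "L2A", 200.0),
    Position("EQUITY_1", "L2B", 100.0),
]
print(f"LCR = {lcr(book, 600.0):.1%}")  # the ratio should stay at or above 100%
```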

Embracing the Data Mesh Paradigm

Data mesh shifts the focus from centralized data lakes or warehouses to a more democratized, product-centric approach. It fosters four foundational pillars:

  • Domain-oriented decentralized data ownership and architecture: Business domains own, serve, and are accountable for their data.
  • Product thinking applied to data: Recognizing data as a product, with explicit owners, consumers, and service levels (a minimal product descriptor is sketched after this list).
  • Self-serve data infrastructure as a platform: Enabling decentralized teams to handle their data needs autonomously.
  • Federated computational governance: Establishing global standards while promoting local autonomy.
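One way to make the "data as a product" and federated governance pillars concrete is to attach an explicit contract to every published dataset. The sketch below is a minimal, hypothetical product descriptor; its fields and SLA names are assumptions for illustration, not part of any Data Mesh specification.

```python
from dataclasses import dataclass, field

@dataclass
class DataProduct:
    """Hypothetical contract a domain team publishes alongside its dataset."""
    name: str                     # e.g. "counterparty-credit-exposures"
    domain: str                   # owning business domain
    owner: str                    # accountable product owner
    schema_version: str           # versioned, backward-compatible schema
    freshness_sla: str            # e.g. "intraday", "T+1"
    quality_checks: list = field(default_factory=list)  # names of automated controls

exposures = DataProduct(
    name="counterparty-credit-exposures",
    domain="credit-risk",
    owner="credit-risk-data-team",
    schema_version="1.3.0",
    freshness_sla="intraday",
    quality_checks=["completeness", "referential_integrity", "timeliness"],
)
```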

This approach is especially pertinent for entities engaged in trade management and execution. For buy-side institutions, which find themselves under an increasing regulatory spotlight, the ability to quickly pull, process, and analyze data within their domain is invaluable.

Figure: the four pillars of Data Mesh (image credit: Zhamak Dehghani)

A new way to deliver consumable data to data citizens

In the evolving landscape of risk, treasury, and trade execution analytics, there's a growing need to integrate the rigor of BCBS 239 with the flexibility of Data Mesh principles. This approach involves a sequence of steps, starting from initial workshops to the final self-service implementation for end-users.

Step 1: Workshops for Business Requirements and Technical Feasibility

The first step involves conducting comprehensive workshops focused on understanding business requirements and assessing technical feasibility. These workshops serve as a platform for stakeholders to articulate their needs and for technical teams to identify potential challenges and solutions. The goal is to ensure alignment between business objectives and technical capabilities.

Step 2: Building a Comprehensive Data Model

Next, a comprehensive data model is developed. This model should encompass a wide array of data types and sources, reflecting the multifaceted nature of risk, treasury, and trade execution domains. The data model must be robust, scalable, and flexible, capable of adapting to evolving business needs.
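A minimal sketch of what such a model might look like for a risk domain is shown below. The entity and attribute names are illustrative assumptions; a production model would cover many more dimensions (legal entity, book hierarchy, scenario, and so on).

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Instrument:
    instrument_id: str
    asset_class: str        # e.g. "IR", "FX", "Credit", "Equity"
    currency: str

@dataclass
class Trade:
    trade_id: str
    instrument: Instrument
    counterparty_id: str
    desk: str
    notional: float
    trade_date: date

@dataclass
class RiskMeasure:
    trade_id: str
    as_of: date
    measure: str            # e.g. "PV", "DV01", "VaR_contribution"
    value: float
    currency: str
```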

Step 3: Creating a Semantic Layer for Enhanced Navigability

To facilitate easy access and understanding of the data, a semantic layer is added. This layer acts as a bridge between the complex data model and the end-users, enabling them to navigate through large groups of dimensions effortlessly. It simplifies the user experience by abstracting the underlying data complexity.
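In practice, this layer can start as a curated mapping from business-friendly dimension and measure names to physical columns and aggregation rules. The sketch below is a deliberately hand-rolled illustration; real deployments would typically rely on the semantic modelling capabilities of the chosen analytics platform, and cross-table joins are omitted here.

```python
# Illustrative semantic layer: business vocabulary -> physical schema.
SEMANTIC_MODEL = {
    "dimensions": {
        "Desk":         "trades.desk",
        "Counterparty": "trades.counterparty_id",
    },
    "measures": {
        "Total Notional": ("SUM", "trades.notional"),
    },
}

def build_query(measure: str, group_by: str) -> str:
    """Translate a business request into SQL against the physical model."""
    agg, column = SEMANTIC_MODEL["measures"][measure]
    dim = SEMANTIC_MODEL["dimensions"][group_by]
    table = column.split(".")[0]   # single-table case only; joins are out of scope
    return f"SELECT {dim}, {agg}({column}) FROM {table} GROUP BY {dim}"

print(build_query("Total Notional", "Desk"))
# SELECT trades.desk, SUM(trades.notional) FROM trades GROUP BY trades.desk
```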

Step 4: Building a Physical Data Store with Embedded Business Logic and Data Quality

A physical data store is then constructed, integrating business logic and data quality controls directly at the data level. This step ensures that the data is not only stored efficiently but is also processed and validated, enhancing reliability and accuracy. The integration of business logic at this stage aligns with the principles of Data Mesh, bringing intelligence closer to the data.
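The sketch below illustrates one way to embed quality controls at the data level: simple, declarative checks executed on every batch before it is published downstream. Rule names and thresholds are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class QualityRule:
    name: str
    check: Callable[[dict], bool]   # returns True when a record passes

# Illustrative rules embedded alongside the store, not in downstream reports.
RULES = [
    QualityRule("notional_present", lambda r: r.get("notional") is not None),
    QualityRule("notional_non_negative", lambda r: (r.get("notional") or 0) >= 0),
    QualityRule("counterparty_known", lambda r: bool(r.get("counterparty_id"))),
]

def validate_batch(records: list) -> dict:
    """Run every rule on every record and report failure counts per rule."""
    failures = {rule.name: 0 for rule in RULES}
    for record in records:
        for rule in RULES:
            if not rule.check(record):
                failures[rule.name] += 1
    return failures

batch = [
    {"trade_id": "T1", "notional": 1_000_000, "counterparty_id": "CPTY_A"},
    {"trade_id": "T2", "notional": None, "counterparty_id": ""},
]
print(validate_batch(batch))
# {'notional_present': 1, 'notional_non_negative': 0, 'counterparty_known': 1}
```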

Step 5: Enabling True Self-Service for Data Consumers

The final step is to offer true self-service capabilities to data consumers. This can be achieved in two ways:

  • Self-Service for Building and Consuming OLAP Aggregators and Python Calculators: Users are provided with tools and interfaces to create their own OLAP aggregators and Python calculators, as illustrated in the sketch after this list. This approach empowers them to tailor analytics to their specific requirements, leveraging the scalability and flexibility of the system.
  • Pre-Built Aggregators and Calculators with Self-Service Access: Alternatively, a set of pre-built OLAP aggregators and Python calculators can be developed and made available for users. This option reduces the complexity for end-users, allowing them to leverage sophisticated analytics tools without the need for in-depth technical expertise.
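As an illustration of the first option, the sketch below shows the kind of Python calculator a data consumer might build on top of the published dataset: an OLAP-style exposure aggregation by counterparty plus a simple 'what-if' shock. The column names and shock logic are assumptions for the example.

```python
import pandas as pd

# Illustrative position-level extract from the physical data store.
positions = pd.DataFrame({
    "counterparty": ["CPTY_A", "CPTY_A", "CPTY_B"],
    "asset_class":  ["IR", "FX", "IR"],
    "exposure":     [120.0, 80.0, 200.0],
})

def exposure_by(dim: str, df: pd.DataFrame) -> pd.Series:
    """OLAP-style aggregation along a chosen dimension."""
    return df.groupby(dim)["exposure"].sum()

def what_if(df: pd.DataFrame, asset_class: str, shock: float) -> pd.DataFrame:
    """Apply a proportional shock to one asset class and return a new frame."""
    shocked = df.copy()
    mask = shocked["asset_class"] == asset_class
    shocked.loc[mask, "exposure"] *= (1 + shock)
    return shocked

print(exposure_by("counterparty", positions))
print(exposure_by("counterparty", what_if(positions, "IR", 0.10)))  # +10% IR shock
```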

Conclusion: Towards a Unified Analytics Framework

By following these steps, organizations can implement a robust analytics framework that leverages the strengths of both BCBS 239 and Data Mesh principles. This hybrid model ensures compliance, scalability, and flexibility, catering to the dynamic requirements of risk, treasury, and trade execution analytics. It represents a forward-thinking approach to data management, marrying regulatory rigor with the agility of modern data architectures.

About the author: Emmanuel Richard is a Data and Analytics expert with over 25 years of experience in the technology industry. His extensive background includes leadership roles at industry giants and startups across the US and Europe.

