Also known as P&L Explained, the purpose of P&L Attribution analysis and reporting is to provide users with a coherent breakdown of the drivers of P&L movements between two points in time with reference to a select number of easily understandable pricing factors.
Data, however, is never as clean as you would like, and the problem gets worse when it comes from different sources. Yet P&L Attribution requires a single, consolidated report that can cope with all of these data issues.
So we need a flexible, rapid and modular way of getting at the information.


Data often sits in different places, sometimes even within a single asset class: different file formats, different timings, different naming conventions. A flexible, light-touch way of accessing the data without having to build a data warehouse is our first step.






Clean and join

Data assumptions and specification documents rarely survive first contact with the data. A P&L attribution system needs data normalised before joining, so a coherent process is required to standardise it. And because the data can change over time, that process needs to be flexible yet centrally controlled.
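As a minimal illustration of the normalise-then-join idea, here is a sketch using entirely hypothetical feeds and field names; the normalisation rule sits in one central, reusable function rather than being repeated per feed:

```python
# Two hypothetical feeds describing the same trades with different conventions.
risk_feed = [
    {"TradeID": " swp-001 ", "pv01": 4200.0},
    {"TradeID": "SWP-002",   "pv01": -150.0},
]
finance_feed = [
    {"trade_ref": "SWP-001", "clean_pnl": 12500.0},
    {"trade_ref": "SWP-002", "clean_pnl": -310.0},
]

def normalise_key(raw: str) -> str:
    """Central, reusable rule: trim whitespace, force upper case."""
    return raw.strip().upper()

# Standardise each feed into a common schema before joining.
risk = {normalise_key(r["TradeID"]): r["pv01"] for r in risk_feed}
finance = {normalise_key(r["trade_ref"]): r["clean_pnl"] for r in finance_feed}

# Inner join on the normalised trade identifier.
joined = {
    key: {"pv01": risk[key], "clean_pnl": finance[key]}
    for key in risk.keys() & finance.keys()
}
print(sorted(joined))  # both trades match once the keys are normalised
```

Because the rule lives in one place, a change to the convention is made once and applied everywhere, which is the central control the text describes.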






Whether valuation or sensitivity based, this is the core process that determines the P&L explained versus unexplained.
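As a rough sketch of the sensitivity-based flavour (the factor names and numbers here are invented for illustration): explained P&L is the sum of each sensitivity multiplied by its observed factor move, and unexplained is the residual against actual P&L.

```python
# Illustrative sensitivities: P&L per unit move in each (hypothetical) factor.
sensitivities = {"IR_delta": 10_000.0, "FX_delta": 5_000.0}
# Observed factor moves between the two reporting dates.
factor_moves = {"IR_delta": 0.02, "FX_delta": -0.01}

# Explained P&L: first-order sensitivity times move, summed across factors.
explained = sum(sensitivities[f] * factor_moves[f] for f in sensitivities)

actual_pnl = 180.0               # total P&L from the finance system
unexplained = actual_pnl - explained

print(explained, unexplained)    # 150.0 explained, 30.0 unexplained
```

A large unexplained figure is the signal that something is missing: a factor not captured, stale sensitivities, or a data break upstream.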






Review and validate

This is the first step where the P&L Attribution process starts to add value: automated validation tests are carried out to help identify areas of focus; joins to finance data are crucial to allow reconciliation to the firm's source of truth; and a sign-off process needs to be followed and recorded.
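A toy example of one such automated check, with hypothetical trades and an illustrative break threshold: positions whose attributed P&L diverges from the finance figure by more than the tolerance are flagged for review before sign-off.

```python
# Hypothetical reconciliation data: attributed P&L versus the finance figure.
positions = [
    {"trade": "SWP-001", "attributed_pnl": 149.0, "finance_pnl": 150.0},
    {"trade": "SWP-002", "attributed_pnl": 80.0,  "finance_pnl": 95.0},
]
TOLERANCE = 5.0  # illustrative absolute break threshold, in reporting currency

# Flag any trade whose difference exceeds the threshold.
breaks = [
    p["trade"] for p in positions
    if abs(p["attributed_pnl"] - p["finance_pnl"]) > TOLERANCE
]
print(breaks)  # trades needing review before sign-off
```

In practice the thresholds, and the list of checks themselves, would be configured centrally so the same tests run consistently every reporting cycle.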







Excel is often the go-to tool, but we need something better. Fixed reporting packs can be useful, but dynamic visualisation tools offering drill-down, historical views and changes put into context from just a few mouse clicks help users meet the changing demands of the market.

How CP Analytics helps the P&L attribution process


Easily access disparate sources of data

Whether it is legacy COBOL files, batch csv files, xml messages, big data repositories or databases, we can connect and acquire data without needing a data warehouse to be built. It's light touch on IT as well, since we clean the data ourselves.
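A minimal sketch of that light-touch acquisition idea, using only Python's standard library and two invented inline feeds (one csv, one xml) mapped into a common record shape, with no warehouse in between:

```python
import csv
import io
import xml.etree.ElementTree as ET

# Hypothetical feeds; in reality these would be files or messages, not strings.
csv_feed = "trade,pnl\nSWP-001,150.0\n"
xml_feed = "<trades><trade id='SWP-002' pnl='-310.0'/></trades>"

records = []

# Each source gets its own small adapter that emits the same record shape.
for row in csv.DictReader(io.StringIO(csv_feed)):
    records.append({"trade": row["trade"], "pnl": float(row["pnl"])})

for node in ET.fromstring(xml_feed).iter("trade"):
    records.append({"trade": node.get("id"), "pnl": float(node.get("pnl"))})

print(records)  # one consistent shape, regardless of the source format
```

New sources then only need a new adapter, not a change to anything downstream.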

Visual data model and processing

It’s easy to see where data is coming from, where it is being cleaned and joined, even if the pathway is complex. The living data picture promotes understanding and decision making.


Modular Libraries

Standardising data requires standardised processes: we don't want to duplicate work, and we need central control. Modular libraries allow us to repeat, adjust and deploy logic quickly and coherently.

Rapid and Accurate

Our approach is highly iterative from the start: we get the data, examine it and validate it, which means we don't have to reach the waterfall testing phase before finding the issues. And the end product is robust and can be automated.

See how CP Analytics makes handling data processes such as P&L Attribution easier, faster and more straightforward for the business and the business user.

Want to know more?

P&L Attribution should be a simple process, but the practical barriers generally revolve around the data, and too much effort is spent managing data rather than analysing it. We want to replace the usual large waterfall projects with something quicker and more flexible that actually gets to the destination. To help you get a fuller understanding of what CPRA can offer, here are a few things we have prepared for you.



Any other questions we can answer?

We are experienced in dealing with the data challenges posed by bringing together risk sensitivities, market data and P&L information from disparate sources and combining them in a consistent and usable format. Email us straight away and you’ll receive a response in no time.