As requirements for adequate controls across the lineage of a metric have become more stringent, merely adding controls at every data transfer or transformation step may not deliver the intended improvement in the accuracy and integrity of the underlying data.
Again using Collateral as an example, consider two separate processes: the first uses the Collateral Value from the underlying database to calculate the Loan to Value of a portfolio, while the second queries the same attribute independently to calculate the Foundation LGD under the Basel IV requirements. Where collateral values are stale (older than the date of sanctioning of the loan), the first process can flag them easily with a validation rule that compares the loan sanctioning date against the date of the last valuation capture. The second process, however, might miss this entirely, as it would never need to fetch the loan sanctioning date in the first place to calculate the Foundation LGD.
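As a minimal sketch of what the first process's staleness check might look like, the following example compares the two dates in question. All field, class, and function names here are hypothetical illustrations, not taken from any specific system described above:

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class LoanRecord:
    """Hypothetical record holding the attributes the LTV process fetches."""
    loan_id: str
    loan_sanction_date: date
    collateral_value: float
    last_valuation_date: date


def collateral_is_stale(record: LoanRecord) -> bool:
    """Flag a record when its collateral valuation predates the loan sanction date."""
    return record.last_valuation_date < record.loan_sanction_date


# A valuation captured before the loan was sanctioned is flagged as stale.
record = LoanRecord("LN-001", date(2024, 3, 1), 250_000.0, date(2023, 11, 15))
print(collateral_is_stale(record))  # True -> stale collateral value
```

The point of the example is that this rule only exists because the LTV process happens to pull both dates; a process that fetches the collateral value alone has no natural place to run it.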
Indeed, an Integrated Layer also allows the validation rules and various other controls applied to the same attribute by different processes to be collated, creating a more holistic view of data quality and highlighting defects for all downstream consumers.
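One way such collation could work, as a sketch under assumed names (the registry structure, rule names, and record fields below are all hypothetical), is a shared registry in the Integrated Layer keyed by attribute, so that every process's checks on Collateral Value run and surface together:

```python
from datetime import date
from typing import Callable

# Hypothetical registry: rules contributed by different processes, all keyed
# to the shared "collateral_value" attribute in the Integrated Layer.
ValidationRule = Callable[[dict], bool]  # returns True when the check passes

RULE_REGISTRY: dict[str, list[tuple[str, ValidationRule]]] = {
    "collateral_value": [
        # Contributed by the LTV process: valuation must not predate sanctioning.
        ("ltv_process.staleness",
         lambda r: r["last_valuation_date"] >= r["loan_sanction_date"]),
        # Contributed by the Foundation LGD process: value must be non-negative.
        ("lgd_process.non_negative",
         lambda r: r["collateral_value"] >= 0),
    ]
}


def assess_attribute(attribute: str, record: dict) -> dict[str, bool]:
    """Run every registered rule for an attribute; return the consolidated view."""
    return {name: rule(record) for name, rule in RULE_REGISTRY.get(attribute, [])}


record = {
    "collateral_value": 250_000.0,
    "loan_sanction_date": date(2024, 3, 1),
    "last_valuation_date": date(2023, 11, 15),
}
# Both processes' checks now surface together for all downstream consumers.
print(assess_attribute("collateral_value", record))
# {'ltv_process.staleness': False, 'lgd_process.non_negative': True}
```

With this arrangement, the LGD process benefits from the staleness rule it would never have written itself, which is precisely the holistic view the paragraph above describes.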