Attribution Analysis — How Performance Is Decomposed Into Sources of Return
Attribution analysis decomposes a return figure into its sources: allocation, selection, interaction, and timing. This entry covers the definition, the standard frameworks, and what attribution can and cannot prove on its own.
- Attribution analysis decomposes a portfolio's return into the components that explain it — typically allocation, selection, interaction, and currency or timing effects.
- The standard framework (Brinson-Fachler / Brinson-Hood-Beebower) splits active return between asset-allocation decisions and security-selection decisions.
- Attribution depends on accurate primary records; without a verified track record underneath, attribution numbers inherit the same uncertainty.
Definition
Attribution analysis is the family of techniques that decompose a portfolio's total return into the underlying drivers — typically allocation across asset classes or sectors, selection within each segment, the interaction effect that captures the cross-term, and where relevant a currency or timing component. The output is a table that says 'of the X% return the portfolio produced, A came from sector allocation, B came from security selection, C came from interaction, and D came from currency or timing'. The inputs are the portfolio's weights and returns at each segment level, the benchmark's weights and returns at the same segmentation, and a documented attribution model.
The standard frameworks
Two attribution frameworks dominate institutional practice. The Brinson-Hood-Beebower (BHB) model decomposes active return into an allocation effect (the impact of overweighting or underweighting segments relative to the benchmark) and a selection effect (the impact of holding different securities within each segment than the benchmark holds). The Brinson-Fachler (BF) refinement measures the allocation effect against each segment's return in excess of the total benchmark return, so an overweight only scores positively when that segment outperforms the overall benchmark. This gives a cleaner interpretation when segments with different total returns are aggregated. Both models are well documented in the performance-measurement literature; both depend on accurate weights and returns at the segment level for every period being attributed.
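The single-period mechanics can be sketched in a few lines. The weights and returns below are hypothetical, chosen only to show the arithmetic; real inputs would be portfolio and benchmark weights and returns at the same segmentation, drawn from verified records.

```python
# Illustrative single-period Brinson-Fachler attribution.
# Each segment is (portfolio weight, portfolio return, benchmark weight, benchmark return).

def brinson_fachler(segments):
    """Return (allocation, selection, interaction) totals across segments."""
    # Total benchmark return, used as the reference point for allocation.
    r_bench = sum(w_b * r_b for _, _, w_b, r_b in segments)
    alloc = sum((w_p - w_b) * (r_b - r_bench) for w_p, _, w_b, r_b in segments)
    select = sum(w_b * (r_p - r_b) for _, r_p, w_b, r_b in segments)
    inter = sum((w_p - w_b) * (r_p - r_b) for w_p, r_p, w_b, r_b in segments)
    return alloc, select, inter

# Hypothetical two-segment example.
segments = [
    (0.60, 0.08, 0.50, 0.05),  # equities: overweight, outperformed its benchmark
    (0.40, 0.02, 0.50, 0.03),  # bonds: underweight, underperformed its benchmark
]
alloc, select, inter = brinson_fachler(segments)
r_port = sum(w_p * r_p for w_p, r_p, _, _ in segments)    # 0.056
r_bench = sum(w_b * r_b for _, _, w_b, r_b in segments)   # 0.040
# The three effects sum exactly to the active return r_port - r_bench.
assert abs((alloc + select + inter) - (r_port - r_bench)) < 1e-12
```

Here the 1.6% active return decomposes into 0.2% allocation, 1.0% selection, and 0.4% interaction. Under BHB the allocation term would instead be `(w_p - w_b) * r_b` per segment; the totals still reconcile to the active return.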
Why attribution depends on primary records
Attribution numbers are computed from the same primary records that produce the headline return. If the headline return is a verified track record — derived from venue API responses, computed on documented methodology, written into an append-only chain — the attribution sits on a defensible base. If the headline return is a self-reported figure with no append-only chain, the attribution inherits the same uncertainty: a reviewer can run the BHB algorithm on whatever inputs are given, but cannot confirm those inputs match the underlying primary records. Attribution is a layer on top of performance measurement, not a substitute for it.
What attribution does and does not prove
- Attribution can show that a 5% active return decomposes into 3% from allocation and 2% from selection — useful for understanding where skill expressed itself.
- Attribution can identify whether outperformance was concentrated in a few months or distributed across the period, which speaks to consistency.
- Attribution cannot prove the underlying figures match primary records — that is the job of verification.
- Attribution cannot prove the manager will continue to allocate or select the same way in future — that is forecasting, not measurement.
- Attribution cannot fix a survivorship bias in the headline number — it can only decompose what is given.
How NakedPnL relates to attribution
NakedPnL's role is the layer underneath attribution. The registry produces a verified time-weighted return computed from daily NAV pulled via read-only API keys, with every snapshot canonicalised, SHA-256 hashed, and chained to a Bitcoin-anchored daily Merkle root. The trader-side analytics depth tiers expose the underlying data an attribution analyst needs — segment-level NAV breakdowns where the venue data supports it — but NakedPnL does not run BHB or BF attribution as a public output. Attribution is downstream analysis the allocator runs themselves once the registry has provided a verified base.
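NakedPnL's actual canonicalisation rules and anchoring format are not specified here; the following is only a generic sketch of the pattern the section describes: canonicalise each snapshot, SHA-256 hash it, chain each hash to the previous one so the record is append-only, and build a Merkle root over the chained hashes for anchoring.

```python
import hashlib
import json

def canonical_hash(snapshot: dict) -> bytes:
    """Hash a snapshot after canonicalising it (here: sorted-key compact JSON,
    a stand-in for whatever canonicalisation the registry actually uses)."""
    blob = json.dumps(snapshot, sort_keys=True, separators=(",", ":")).encode()
    return hashlib.sha256(blob).digest()

def merkle_root(leaves: list) -> bytes:
    """Pairwise-hash leaves up to a single root, duplicating the last leaf
    at odd levels (one common convention; others exist)."""
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

# Hypothetical daily NAV snapshots; chaining each hash to the previous one
# means any retroactive edit changes every later hash in the chain.
snapshots = [{"date": "2024-01-01", "nav": 100.0},
             {"date": "2024-01-02", "nav": 101.3}]
prev = b"\x00" * 32
chained = []
for s in snapshots:
    prev = hashlib.sha256(prev + canonical_hash(s)).digest()
    chained.append(prev)
root = merkle_root(chained)  # the 32-byte value that would be anchored
```

The point of the pattern, rather than of any specific encoding, is that a reviewer who holds the snapshots can recompute the root independently and compare it against the anchored value.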
Related terms
- Time-weighted return (TWR) — the metric attribution decomposes.
- GIPS standards — performance presentation standards that govern composite construction and reporting.
- Verified track record — the four-property surface that gives attribution numbers a defensible base.
- Benchmark — the reference series against which active return is computed.