# New to attribution?
You know Python. You may have landed on this repository because a quant on your team asked you to "plug in Brinson", or because you are curious about what this corner of finance actually does. This page is a ten-minute primer. By the end you will know what problem pybrinson solves, why its vocabulary looks the way it does, and what to read next.
Nothing on this page assumes a finance degree. It does assume comfort with percentages and weighted averages. Finance jargon is kept to a minimum, and every remaining term (portfolio, benchmark, segment, excess return, pp, allocation, selection, interaction…) shows up as a dotted-underlined word — hover over it to see the definition.
## The question
A portfolio manager runs a fund. At the end of the quarter, the fund returned +8.3%. The benchmark the fund measures itself against returned +6.4%. The fund beat the benchmark by +1.9 percentage points.
The question the manager's boss asks — the question the client asks, the question the regulator asks, the question the manager asks themselves — is not "by how much did we beat the benchmark". They already know. The question is:
Why?
Was it because the manager bet more on UK equity and UK equity happened to go up? Was it because their US stock-pickers actually chose better stocks than the US index? Was it both? Was it neither, and they just got lucky with a sector weight?
Performance attribution is the quantitative discipline that takes a +1.9% excess return and splits it into named causes. The Brinson family of models is the oldest and most widely understood way of doing this.
## Two teams, same total
Imagine the fund holds three groups of stocks: UK, Japan, US. The benchmark also classifies its universe the same way. For each group \(i\) you have four numbers per period:
| Segment | Portfolio weight \(w_{p,i}\) | Benchmark weight \(w_{b,i}\) | Portfolio return \(R_{p,i}\) | Benchmark return \(R_{b,i}\) |
|---|---|---|---|---|
| UK | 40% | 40% | +20.0% | +10.0% |
| Japan | 30% | 20% | -5.0% | -4.0% |
| US | 30% | 40% | +6.0% | +8.0% |
The two numbers that actually drive Brinson attribution are the active bets the manager took versus the benchmark: how differently they weighted each group, and how differently each group's stocks performed.
Reading, segment by segment:
- UK — neutral weight (0pp), but the portfolio's UK stocks beat the UK index by +10pp. Pure stock-picking.
- Japan — the portfolio overweighted Japan by +10pp, and its Japan stocks lagged by 1pp.
- US — the portfolio underweighted US by 10pp, and its US stocks lagged by 2pp.
The total portfolio return is the weighted average of segment returns:

\[R_p = \sum_i w_{p,i} \, R_{p,i} = 0.40 \cdot 0.20 + 0.30 \cdot (-0.05) + 0.30 \cdot 0.06 = +8.3\%\]

The total benchmark return, similarly:

\[R_b = \sum_i w_{b,i} \, R_{b,i} = 0.40 \cdot 0.10 + 0.20 \cdot (-0.04) + 0.40 \cdot 0.08 = +6.4\%\]

The excess return is \(R_p - R_b = +1.9\%\).
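That arithmetic is easy to check in plain Python — no pybrinson involved, just the numbers from the table:

```python
# Segment rows from the table above: (name, w_p, w_b, R_p, R_b)
segments = [
    ("UK",    0.40, 0.40,  0.20,  0.10),
    ("Japan", 0.30, 0.20, -0.05, -0.04),
    ("US",    0.30, 0.40,  0.06,  0.08),
]

# Total returns are weighted averages of segment returns.
r_p = sum(w_p * rp for _, w_p, _, rp, _ in segments)
r_b = sum(w_b * rb for _, _, w_b, _, rb in segments)

print(f"portfolio {r_p:+.1%}, benchmark {r_b:+.1%}, excess {r_p - r_b:+.1%}")
# portfolio +8.3%, benchmark +6.4%, excess +1.9%
```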
Now: why is the portfolio's total different from the benchmark's total? Two reasons are possible, and in general both contribute:
- The portfolio weights the groups differently from the benchmark (it is overweight Japan at 30% vs 20%, underweight US at 30% vs 40%).
- Inside each group, the portfolio's stocks returned differently from the benchmark's stocks for that group (UK: the portfolio's UK holdings returned +20%, the benchmark's UK index returned +10%).
The insight of Brinson, Hood and Beebower (1986) is that you can algebraically split the +1.9% into three pieces, one for each reason plus a cross-term, such that the three pieces sum exactly to +1.9%.
## Three effects, in plain words
### Allocation
Did we bet on the right groups?
Allocation is how much return you gained or lost because your portfolio was overweight or underweight a group, evaluated at the benchmark's return for that group:

\[A_i = (w_{p,i} - w_{b,i}) \, R_{b,i}\]

Intuition: if you were overweight a group whose benchmark return was positive, allocation is positive for that group. You bet correctly. In the example, Japan had \(w_{p} - w_{b} = +10\%\) but its benchmark return was \(-4\%\): you overweighted a loser. Allocation is \(0.10 \cdot (-0.04) = -0.40\%\). Negative. Ouch.
### Selection
Did we pick better stocks than the index inside each group?
Selection is how much you gained or lost because your portfolio's stocks in a group outperformed (or underperformed) the benchmark's stocks in the same group, evaluated at the benchmark's weight for the group:

\[S_i = w_{b,i} \, (R_{p,i} - R_{b,i})\]

Intuition: if your UK stocks returned +20% while the UK index returned only +10%, your UK stock picker earned you \(0.40 \cdot (0.20 - 0.10) = +4\%\) on the total. That is pure stock-picking skill, net of any over/under-weight effect.
### Interaction
The bit left over when overweight and outperforming happen together.
Interaction is the cross-term. You overweighted a group and you outperformed the benchmark inside it. The effect is:

\[I_i = (w_{p,i} - w_{b,i}) \, (R_{p,i} - R_{b,i})\]
It has no clean business-English interpretation; people sometimes call it "interaction" and sometimes "residual" — but it is not a numerical residual in the error sense, it is a genuine algebraic term in the decomposition. pybrinson always separates it out explicitly so you can see whether your excess return came mostly from allocation, selection, or the cross-effect of both.
## The identity
Summed across all groups, the three effects exactly reconstruct excess return. Writing \(A_i\), \(S_i\), \(I_i\) for the per-segment allocation, selection and interaction effects:

\[\sum_i \left( A_i + S_i + I_i \right) = \sum_i \left[ (w_{p,i} - w_{b,i})\,R_{b,i} + w_{b,i}\,(R_{p,i} - R_{b,i}) + (w_{p,i} - w_{b,i})\,(R_{p,i} - R_{b,i}) \right] = R_p - R_b\]
This is an algebraic identity, not an approximation. pybrinson raises `AttributionError` if the three sums do not add up to the arithmetic difference within \(10^{-9}\). No silent residuals.
On the worked example above, the three effects split the +1.9% like this (in percentage points):

| Effect | Contribution |
|---|---|
| Allocation | -1.2pp |
| Selection | +3.0pp |
| Interaction | +0.1pp |
| **Total** | **+1.9pp** |

Reading the numbers: the manager's group-level bets actually cost 1.2pp (too much Japan, too little US), but stock picking inside each group added 3.0pp, and the cross-term contributed a tiny +0.1pp. Net: +1.9pp.

Per segment, the same three effects look like:

| Segment | Allocation | Selection | Interaction |
|---|---|---|---|
| UK | 0.0pp | +4.0pp | 0.0pp |
| Japan | -0.4pp | -0.2pp | -0.1pp |
| US | -0.8pp | -0.8pp | +0.2pp |
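All of these numbers can be reproduced in a few lines of plain Python implementing the three formulas above — this is the raw BHB arithmetic, not pybrinson's API:

```python
# The worked example: (name, w_p, w_b, R_p, R_b)
segments = [
    ("UK",    0.40, 0.40,  0.20,  0.10),
    ("Japan", 0.30, 0.20, -0.05, -0.04),
    ("US",    0.30, 0.40,  0.06,  0.08),
]

allocation  = sum((wp - wb) * rb        for _, wp, wb, rp, rb in segments)
selection   = sum(wb * (rp - rb)        for _, wp, wb, rp, rb in segments)
interaction = sum((wp - wb) * (rp - rb) for _, wp, wb, rp, rb in segments)

r_p = sum(wp * rp for _, wp, wb, rp, rb in segments)
r_b = sum(wb * rb for _, wp, wb, rp, rb in segments)

# The identity: the three effects reconstruct excess return exactly.
assert abs((allocation + selection + interaction) - (r_p - r_b)) < 1e-9

print(f"A={allocation:+.1%}  S={selection:+.1%}  I={interaction:+.1%}")
# A=-1.2%  S=+3.0%  I=+0.1%
```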
## Why "Brinson-Fachler" and "hierarchies" matter
Brinson-Fachler (1985) refines the allocation effect: each segment's over/underweight is evaluated against that segment's benchmark return *in excess of* the benchmark's total return, so overweighting a segment only earns positive allocation if the segment beat the overall benchmark. The selection effect is unchanged. Some firms prefer BHB; some prefer BF. Both are in pybrinson, and the same inputs produce the same totals.
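To see the difference concretely, here is the BF allocation computed on the worked example in plain Python (illustrative arithmetic, not pybrinson's API). Per-segment numbers move, but the allocation total is unchanged, because the active weights sum to zero:

```python
segments = [
    # (name, w_p, w_b, R_p, R_b) -- the worked example
    ("UK",    0.40, 0.40,  0.20,  0.10),
    ("Japan", 0.30, 0.20, -0.05, -0.04),
    ("US",    0.30, 0.40,  0.06,  0.08),
]
r_b = sum(wb * rb for _, wp, wb, rp, rb in segments)  # benchmark total, 6.4%

# BHB allocation: active weight times the segment's benchmark return.
bhb_alloc = {n: (wp - wb) * rb_ for n, wp, wb, rp, rb_ in segments}
# BF allocation: active weight times the segment's benchmark return
# *in excess of* the benchmark total.
bf_alloc = {n: (wp - wb) * (rb_ - r_b) for n, wp, wb, rp, rb_ in segments}

# Per segment they differ...
print(round(bhb_alloc["Japan"], 4), round(bf_alloc["Japan"], 4))
# ...but the totals agree, since the active weights sum to zero.
assert abs(sum(bhb_alloc.values()) - sum(bf_alloc.values())) < 1e-12
```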
Hierarchies happen when your classification has levels — e.g. you split the world into regions (Americas, Europe), then each region into countries, each country into sectors. You might want attribution at every level: how much did the decision to overweight Americas cost us? Within Americas, how much did the bet on Tech versus Energy earn us? pybrinson handles arbitrary-depth hierarchies with a single `parents=` argument.
```mermaid
flowchart TD
Total[Total portfolio]
Total --> Americas
Total --> Europe
Americas --> US[US]
Americas --> CA[Canada]
Europe --> UK
Europe --> DE[Germany]
US --> USTech[Tech]
US --> USEnergy[Energy]
UK --> UKTech[Tech]
UK --> UKEnergy[Energy]
```
Attribution rolls up additively: the allocation effect at
Americas equals the sum of allocation effects of its children,
and the same at every level above.
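The roll-up itself is plain summation. A minimal sketch with hypothetical effect numbers and a child-to-parent dict (mirroring the `parents=` idea, but not pybrinson's API):

```python
# Hypothetical leaf-level allocation effects (decimals) and a
# child -> parent map, mirroring the tree above.
leaf_allocation = {
    "US/Tech": 0.004, "US/Energy": -0.001,
    "Canada": 0.002,
    "UK/Tech": -0.003, "UK/Energy": 0.001,
    "Germany": 0.000,
}
parents = {
    "US/Tech": "US", "US/Energy": "US",
    "US": "Americas", "Canada": "Americas",
    "UK/Tech": "UK", "UK/Energy": "UK",
    "UK": "Europe", "Germany": "Europe",
    "Americas": "Total", "Europe": "Total",
}

# Roll every leaf's effect up through all of its ancestors.
rolled = dict(leaf_allocation)
for leaf, effect in leaf_allocation.items():
    node = parents.get(leaf)
    while node is not None:
        rolled[node] = rolled.get(node, 0.0) + effect
        node = parents.get(node)

# Americas = US + Canada = (Tech + Energy) + Canada
print(round(rolled["Americas"], 6))  # 0.005
```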
## Why multi-period "linking" is hard
So far everything was one period. In real life, the fund reports quarterly, yearly, since-inception. And here finance gets tricky: returns compound multiplicatively, but attribution effects are additive inside each period. If Q1 allocation was +0.5% and Q2 allocation was -0.3%, the linked allocation is not \(0.5 - 0.3 = +0.2\%\), because the dollar value of the portfolio changed between Q1 and Q2.
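A two-period toy in plain Python makes the problem concrete: per-period excess returns are additive within a period, but across periods the portfolio and benchmark compound separately, so the naive sum misses the target:

```python
# Two periods of portfolio and benchmark returns.
r_p = [0.10, 0.10]   # portfolio: +10%, then +10%
r_b = [0.05, 0.05]   # benchmark: +5%, then +5%

# Naively summing per-period excess returns...
naive = sum(p - b for p, b in zip(r_p, r_b))

# ...does not match the compounded excess, because returns
# compound multiplicatively across periods.
p_total = (1 + r_p[0]) * (1 + r_p[1]) - 1   # +21%
b_total = (1 + r_b[0]) * (1 + r_b[1]) - 1   # +10.25%
true_excess = p_total - b_total

print(round(naive, 4), round(true_excess, 4))  # 0.1 0.1075
```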
There are several published ways to link single-period effects across time so that they still sum to the compounded excess return. pybrinson ships all five that matter:
| Method | Who | Intuition |
|---|---|---|
| Cariño | Cariño (1999) | Log-smoothing coefficient per period. |
| GRAP | GRAP (1997) | French industry body's additive factor linking. |
| Frongello | Frongello (2002) | Recursive "scale previous cumulative" linking. |
| Menchero | Menchero (2000, 2004) | Optimised uniform scaling. Patent expired 2024. |
| Geometric | Bacon (2008) | Multiplicative: \((1+A)(1+S)(1+I) - 1 = \text{excess}\). |
Each has trade-offs (uniform scaling vs per-period weighting, additive vs multiplicative identity, sensitivity to pathological returns). pybrinson's linking guide walks through the differences.
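To make the flavour of these schemes concrete, here is a self-contained sketch of Cariño's log-linking in plain Python — an illustration of the published formula, not pybrinson's `link_carino`. Each period's effects are scaled by \(k_t / k\), the ratio of the period's log-smoothing coefficient to the total's, and the scaled effects sum exactly to the compounded excess:

```python
import math

def carino_link(periods):
    """Link per-period (A, S, I, R_p, R_b) tuples a la Carino (1999).

    Each period's effects are scaled by k_t / k, where
    k_t = (ln(1+R_pt) - ln(1+R_bt)) / (R_pt - R_bt) and k is the
    same ratio computed on the compounded totals.
    """
    p_tot = math.prod(1 + rp for _, _, _, rp, _ in periods) - 1
    b_tot = math.prod(1 + rb for _, _, _, _, rb in periods) - 1
    k = (math.log1p(p_tot) - math.log1p(b_tot)) / (p_tot - b_tot)

    linked = [0.0, 0.0, 0.0]
    for a, s, i, rp, rb in periods:
        k_t = (math.log1p(rp) - math.log1p(rb)) / (rp - rb)
        for j, effect in enumerate((a, s, i)):
            linked[j] += effect * k_t / k
    return linked, p_tot - b_tot

# Two hypothetical quarters; within each, A + S + I equals the
# period's excess return, as the single-period identity requires.
quarters = [
    (0.005, 0.003, -0.001, 0.030, 0.023),   # Q1: effects sum to +0.7%
    (-0.003, 0.004, 0.000, 0.010, 0.009),   # Q2: effects sum to +0.1%
]
(A, S, I), excess = carino_link(quarters)
assert abs((A + S + I) - excess) < 1e-12   # linked effects still add up
```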
```mermaid
flowchart LR
Q1["Q1 result<br/>A=+0.5%<br/>S=+0.3%<br/>I=-0.1%"]
Q2["Q2 result<br/>A=-0.3%<br/>S=+0.4%<br/>I=+0.0%"]
Q3["Q3 result<br/>A=+0.2%<br/>S=-0.1%<br/>I=+0.1%"]
Q4["Q4 result<br/>A=+0.1%<br/>S=+0.2%<br/>I=-0.0%"]
Link{{"link_carino<br/>(or GRAP, geometric, ...)"}}
Year["Linked year<br/>A + S + I = compounded excess"]
Q1 --> Link
Q2 --> Link
Q3 --> Link
Q4 --> Link
Link --> Year
```
## Why currency makes things complicated
A fund holding UK stocks in GBP, reporting in USD, has a second source of return on top of UK stock movement: the GBP/USD exchange rate. Karnosky & Singer (1994) generalise Brinson to split global excess return into four effects: local-market allocation, security selection, currency allocation, and interaction. pybrinson ships this under `pybrinson.currency_attribution`; see the currency guide.
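The second return source compounds with the local return rather than adding to it. A two-line illustration of the mechanics (generic arithmetic, not the Karnosky-Singer decomposition itself):

```python
# A UK stock gains +10% in GBP while GBP appreciates +3% vs USD.
local_return = 0.10
fx_return = 0.03

# The USD investor's return compounds the two -- it is not their sum.
base_return = (1 + local_return) * (1 + fx_return) - 1
print(round(base_return, 4))  # 0.133
```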
## What pybrinson actually gives you
Now the API should read naturally:
```python
from pybrinson import Segment, bhb

segments = [
    Segment(name, portfolio_weight=w_p, benchmark_weight=w_b,
            portfolio_return=R_p, benchmark_return=R_b)
    for name, w_p, w_b, R_p, R_b in your_data
]

result = bhb(segments, period="2024-Q1")
# result.allocation, result.selection, result.interaction
# result.by_segment -> per-group breakdown
# result.excess_return == allocation + selection + interaction (within 1e-9)
```
For multi-period linking:
```python
from pybrinson import link_carino

quarters = [bhb(segments_q1, period="Q1"), bhb(segments_q2, period="Q2")]
linked = link_carino(quarters)
```
For hierarchies, supply the `parents=` argument described above; the multi-level hierarchies guide walks through a full example.
For currency:
```python
from pybrinson import currency_attribution

currency_attribution(segments_with_local_and_currency_returns)
```
Every call produces a frozen dataclass. No magic, no DataFrame dependency, no silent residual, no network call, no license server.
## Where to go next
- Install & quickstart — get it on your laptop in under a minute.
- Methods reference — each method with its formula and primary source.
- Multi-period linking — when to pick Cariño over GRAP over geometric, with worked examples.
- Multi-level hierarchies — arbitrary depth, additive roll-up at every level.
- Currency attribution — Karnosky-Singer in practice.
If you want to check that pybrinson agrees with the well-established published sources: grab any fixture from `pybrinson.fixtures`, feed it into the matching function, and compare. The fixtures, the tests and the docstrings all point at the same worked examples — that is the point.