Diving Deeper with Goodhart’s Law

Common Usage of Goodhart’s Law

Most of us have come across what’s known as Goodhart’s Law, often quoted as:

“When a measure becomes a target, it ceases to be a good measure.”

I’ve always viewed this as a warning about how we use metrics, especially when we tie them to performance. It’s human nature that when we’re given a target, we focus on hitting it. The problem is that any measurement is only a proxy for the actual outcome we hope to achieve.

For example, if customer service representatives are told to shorten average call times, they might end calls after solving only the superficial issue rather than addressing the root cause. Customers then call back a few weeks later with the same problem. The metric (shorter calls) diverges from the real outcome (better customer experience).

The Bullseyes hit their target, but they didn’t come close to winning the game!

So how do we use metrics as guides without falling into the trap of turning them into goals? Treat them as indicators, not targets. A metric can serve as a trigger that warns us of abnormal conditions: set a normal operating range, and when the metric moves outside it, treat that as a signal to investigate, not a score to hit.
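To make that concrete, here’s a minimal sketch of the operating-range idea in Python. The metric name and the band are invented for illustration; in practice you’d derive the range from your own historical data.

    # Sketch: treat the metric as a signal, not a score.
    # The band is hypothetical; derive yours from historical data
    # (for example, mean +/- 3 standard deviations).
    NORMAL_RANGE = (4.0, 9.0)  # acceptable average call length, in minutes

    def check_metric(name, value, low, high):
        if low <= value <= high:
            return f"ok: {name} = {value}"
        # Out of range: a trigger to investigate, not a score to optimize.
        return f"INVESTIGATE: {name} = {value} is outside [{low}, {high}]"

    print(check_metric("avg_call_minutes", 6.5, *NORMAL_RANGE))
    print(check_metric("avg_call_minutes", 11.2, *NORMAL_RANGE))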

We can also use metrics as a way to test our hypotheses when we are hoping to change outcomes. For example, if you want to reduce customer call time, start with a hypothesis: “Shorter calls will improve satisfaction.” When the call time metric changes, test that hypothesis. If satisfaction doesn’t improve, the assumption is wrong. Maybe customers want shorter calls for simple issues but more time for complex ones. You can then coach representatives on when to go deeper and when to focus on efficiency, and perhaps start tracking repeat calls instead.
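To make that hypothesis test explicit, a simple before-and-after comparison is enough. The sketch below assumes SciPy is available; the satisfaction scores are fabricated, and a two-sample t-test is just one reasonable choice of test, not the only one.

    # Sketch: did shorter calls actually improve satisfaction?
    # The scores below are made up for illustration.
    from scipy import stats

    sat_before = [7.1, 6.8, 7.4, 6.9, 7.2, 7.0, 6.7, 7.3]  # before call times dropped
    sat_after  = [6.9, 7.0, 6.6, 7.1, 6.8, 6.7, 7.0, 6.9]  # after call times dropped

    t_stat, p_value = stats.ttest_ind(sat_after, sat_before)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

    if t_stat <= 0 or p_value > 0.05:
        print("No evidence satisfaction improved; revisit the hypothesis,")
        print("for example by tracking repeat calls instead of raw call length.")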

However, there’s more to Goodhart’s Law than this everyday interpretation.

The Origin Story - Who is Goodhart?

Charles Goodhart is a British economist who just celebrated his 89th birthday on October 23. He worked at the Bank of England from 1968 to 1985. In 1975, while attending a Reserve Bank of Australia conference, he jotted a small note in the margin of his papers, an observation that would later be immortalized as Goodhart’s Law.

“Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes.” — Charles Goodhart, “Problems of Monetary Management: The U.K. Experience” (1975)

Goodhart’s Law is kind of the opposite of a “footnote to history.” Instead, you’re reading about the history of his footnote!

Goodhart’s insight was that giving people, or organizations, a metric to hit changes their behavior. Because people operate within systems, those behavioral shifts change the system itself. The “regularities” the metric once represented can break down under the weight of being used for control.

Goodhart’s focus was monetary policy, specifically how rules imposed by central banks on individual financial institutions produced macro-level outcomes that didn’t match policymakers’ intentions. That insight bridges directly to the common, modern interpretation of his law.

Later in his career, Goodhart encountered another example, one that offers additional insight into the dangers of using metrics to drive behavior.

Basel I, Basel II, and the Perils of a Metric as a Control Mechanism

In the late 1980s, regulators introduced Basel I, which required banks to hold a fixed level of capital as a percentage of their assets. It was simple but rigid. The financial crises of the 1990s exposed its limitations, so regulators began designing Basel II, a more “sophisticated” system.

Basel II introduced Value at Risk (VaR), a model that used recent volatility to set dynamic capital requirements. When markets were stable (low volatility), banks could use more leverage. When markets grew turbulent, leverage limits would tighten.
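If you haven’t met VaR before, the core idea fits in a few lines. The sketch below uses a simplified “historical” VaR with simulated returns and a 99% confidence level; these are assumptions for illustration, not the actual Basel II calculation.

    # Sketch: historical Value at Risk from daily returns.
    # A calm return series yields a small VaR (permitting more leverage under a
    # Basel II-style rule); a turbulent series yields a large VaR.
    import numpy as np

    def historical_var(daily_returns, confidence=0.99):
        # The loss exceeded only (1 - confidence) of the time.
        return -np.percentile(daily_returns, (1 - confidence) * 100)

    rng = np.random.default_rng(0)
    calm      = rng.normal(0.0, 0.005, 250)   # ~0.5% daily volatility
    turbulent = rng.normal(0.0, 0.030, 250)   # ~3% daily volatility

    print(f"VaR, calm market:      {historical_var(calm):.2%}")
    print(f"VaR, turbulent market: {historical_var(turbulent):.2%}")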

On paper, this seemed prudent: take less risk when the market is risky. But Goodhart saw the flaw in how the market dynamics would play out.

VaR was a proxy for typical daily volatility, not a measure of systemic resilience. In calm markets, VaR looks small, signaling low risk; in volatile markets, it spikes, signaling high risk. Basel II turned this proxy into a rule.

Goodhart realized that once volatility rose and banks took early losses, the regulation would require them to de-leverage, selling assets to restore their required capital ratios. Those sales would push prices even lower, increasing volatility further and forcing yet more sales. The rule created a positive feedback loop, amplifying downturns rather than containing them.
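A toy simulation makes the loop easier to see. Every number below (the risk budget, the price-impact factor, the volatility update) is invented; this is a cartoon of the mechanism, not a model of the actual Basel II rules.

    # Toy spiral: a volatility blip tightens a VaR-style leverage cap, the forced
    # selling depresses the price, which raises measured volatility, which
    # tightens the cap again on the next pass.
    RISK_BUDGET = 0.02                    # hypothetical: max holdings = RISK_BUDGET / volatility
    volatility = 0.010                    # calm-market volatility estimate
    holdings = RISK_BUDGET / volatility   # banks start fully levered against the cap
    price = 100.0

    volatility = 0.014                    # an external shock bumps measured volatility

    for step in range(6):
        cap = RISK_BUDGET / volatility
        sales = max(holdings - cap, 0.0)  # assets that must be dumped this round
        holdings -= sales
        daily_return = -0.05 * sales      # forced sales depress the price...
        price *= 1 + daily_return
        volatility = 0.8 * volatility + 0.6 * abs(daily_return)  # ...and raise volatility
        print(f"step {step}: sold = {sales:.2f}  price = {price:6.2f}  vol = {volatility:.4f}")

Notice that the shock never touches the price directly; every bit of the decline comes from the rule forcing round after round of selling.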

That’s exactly what happened during the 2008 mortgage-backed-securities collapse.

You don’t need to understand every market mechanism to appreciate the lesson. Basel II’s problem wasn’t cheating or manipulation; it was choosing a bad proxy to drive market behavior during times of stress. VaR measures ordinary fluctuations, not crisis behavior. As soon as regulators used it to control behavior, it drove the exact opposite of the intended outcome.

We can learn two additional lessons from this case study.

1. Look at the Whole System

It’s not enough to analyze individuals or teams in isolation. You have to consider the macro-level effects of many local optimizations interacting at once. Organizations, like economies and markets, are complex adaptive systems made up of independent agents whose actions produce emergent results.

When all those agents chase local metrics, you can get system-wide distortions, depending on the type of feedback loops that develop.

  • Negative feedback loops dampen variation and stabilize outcomes.

  • Positive feedback loops amplify variation and can lead to runaway effects.

Metrics that create positive feedback across a system can generate results that are disproportionate, and sometimes existentially harmful.
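The contrast is easy to see in a toy model; the gains below are arbitrary, chosen only to show damping versus runaway growth.

    # Sketch: the same starting deviation under negative vs. positive feedback.
    damped, runaway = 10.0, 10.0
    for step in range(5):
        damped  += -0.5 * damped    # negative feedback pushes the deviation back toward zero
        runaway += +0.5 * runaway   # positive feedback pushes it further away each step
        print(f"step {step}: damped = {damped:6.2f}   runaway = {runaway:7.2f}")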

Positive feedback loops can cause unintended consequential damage

2. Know When Your Metric Holds

Goodhart warned that statistical regularities often hold under “normal” conditions but collapse under stress. VaR described risk accurately in calm markets but failed catastrophically during volatility spikes.

Before using a metric to drive behavior, ask:

  • Under what conditions is this metric a reliable signal?

  • Under what conditions might it give misleading or dangerous guidance?

Metrics are like an airplane’s instruments: useful only when you know the conditions they’re calibrated for.

The Broader Lesson

Goodhart’s Law isn’t just a warning about people being focused on hitting targets. It’s a reminder that metrics are proxies for purpose. They only serve us when they remain connected to the outcomes we actually care about.

In Basel II, regulation was the mechanism linking metric to behavior, with disastrous results. In most organizations, it’s human nature and reward incentives that provide the linkage.

So the next time you hear Goodhart’s Law, remember its real meaning. Metrics can guide, but only if we treat them as signals, not as scorecards. Used wisely, they help us learn. Used blindly, they often steer us away from the very outcomes we set out to achieve.
