

Questioning Assumptions ... 3 measurement pitfalls the sustainability world should avoid

Burgess COMMENTARY

Peter Burgess


Editor's note: This is the second part in a multi-part series examining the pitfalls of sustainability measurements that draws on examples from outside the business world.

The World Economic Forum’s recent Global Agenda Survey 2012, compiled before Superstorm Sandy slammed the U.S. coastline, shows that thought leaders worldwide ranked resource scarcity and climate change/natural disasters among the top 10 global trends. Would these issues rank even higher if the survey had been done after Sandy hit?

We believe the answer is yes, based on the increased focus on climate challenges in politics, the press and corporate boardrooms. If Sandy’s clouds have any silver lining, it may be this opportunity to capitalize on rising awareness of sustainability as a business imperative. Given that, it’s all the more essential that our measurement systems be accurate and reliable, and account for human and organizational foibles. In this series, we offer a look at measurement pitfalls that have caused problems mostly outside the business world, to help lead to better sustainability measurements within our field.

In Part I of this series, which over the course of several articles will outline a dozen sustainability measurement pitfalls, we looked at the unintended consequences that have hampered the decade-long, multi-billion-dollar effort to measure U.S. student learning and achievement. That example from education illustrated Pitfall 1: counting what's easy to count rather than what's important, and it holds practical lessons for business practitioners who are measuring the performance and value of their business's sustainability activities.

Here in Part II, we offer three more examples from criminal justice and global finance to highlight how human subjectivity influences and complicates measurement:

Pitfall 2: Same data, but seen from different worlds

We like to think that the numbers we cite are objective measurements. Numbers are numbers, data is data, facts are facts. That might be true, until you have a subjective, biased (perhaps unconsciously so), unavoidably emotional (however much we’d like to deny it) human interpreting them.

The following criminal justice case illustrates what we’re talking about. New York City’s “stop and frisk” policy empowers officers to stop individuals on reasonable suspicion of criminal activity. In 2003, officers confiscated 604 guns through 160,851 such stops, finding one gun for every 266 stops. In 2011, 780 guns were found through 685,724 encounters, or one gun for every 879 stops. So what do these numbers mean? For stop-and-frisk opponents, the declining numbers were a sign that the department was stopping too many innocent people, evidence of racial profiling and a violation of civil rights. The NYPD, by contrast, saw the same statistics as proof that the stops have effectively deterred criminals from carrying guns, as potential violators know they might be stopped by the police.
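To pin down the arithmetic before arguing about the interpretation, here is a minimal sketch (Python is our choice here, not part of the original story; the figures are exactly the ones cited above) that recomputes the implied hit rates:

```python
# Guns recovered per stop, from the NYPD figures cited above.
data = {2003: (160_851, 604), 2011: (685_724, 780)}

for year, (stops, guns) in data.items():
    print(f"{year}: one gun per {stops / guns:,.0f} stops "
          f"(hit rate {guns / stops:.3%})")

# 2003: one gun per 266 stops (hit rate 0.376%)
# 2011: one gun per 879 stops (hit rate 0.114%)
```

The arithmetic itself is not in dispute; only its meaning is.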

Whose view is correct? It depends whom you ask.

Which is exactly the point. If it were true that numbers speak for themselves, as is so often claimed, their meaning wouldn’t be this ambiguous, this dependent on interpretation, or this predictable from the interpreter’s pre-measurement position.

Yet we often see the tendency on display in the “frisk” example in other fields that use metrics. Despite our best intentions, we cherry-pick, and we find ourselves stuck with an incomplete understanding of the full story, born of our unconscious confirmation biases. This trap is difficult to avoid, especially without a conscious effort to do so.

Pitfall 3: Mis-categorizing what you think you are measuring

The next pitfall example also comes from the criminal justice field. In the wake of the tragic Aug. 5, 2012, mass shooting at a Sikh temple near Milwaukee, a New York Times op-ed piece explained how bias crimes against American Sikhs are both mis-reported and under-reported. The FBI currently classifies nearly all hate violence against American Sikhs as instances of anti-Islamic or anti-Muslim hate crimes. As a result, the FBI is potentially over-counting bias against American Muslims and failing entirely to measure anti-Sikh violence. This lumping-in of Sikh with Muslim bias crimes is a categorical “mistaken identity.”

The proposed solution is to add a “Sikh” checkbox to the official Hate Crime Incident Report, and the FBI's Advisory Policy Board is now examining whether to do precisely that.
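As a toy illustration of the mechanics, the following sketch uses entirely hypothetical incident records (invented for illustration, not FBI data) to show how a reporting form without a "Sikh" category renders anti-Sikh bias statistically invisible:

```python
from collections import Counter

# Hypothetical incident records: the victim's actual community,
# as it might be established by later investigation.
incidents = ["Sikh", "Muslim", "Sikh", "Muslim", "Muslim", "Sikh"]

# A form with no "Sikh" checkbox: anti-Sikh incidents get lumped
# into the anti-Islamic category.
OLD_FORM = {"Sikh": "Anti-Islamic", "Muslim": "Anti-Islamic"}
print(Counter(OLD_FORM[v] for v in incidents))
# Counter({'Anti-Islamic': 6})  <- anti-Sikh bias is invisible

# A form with the added "Sikh" checkbox: categories match reality.
NEW_FORM = {"Sikh": "Anti-Sikh", "Muslim": "Anti-Islamic"}
print(Counter(NEW_FORM[v] for v in incidents))
# Counter({'Anti-Sikh': 3, 'Anti-Islamic': 3})
```

Nothing about the underlying events changes between the two tallies; only the category scheme does, and with it the story the numbers tell.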

While this example may seem far removed from everyday sustainable business management concerns, the take-home lesson applies directly: make at least a modest effort to look beyond the numbers being presented, and be critical about how the categories are conceived, assigned and counted. Only once we are satisfied that the categories are actually right should we proceed to the analyses.

Pitfall 4: Relying on averages, estimates, and even lies rather than actual and appropriate data

The July 2012 Libor scandal offers a clear example from the financial world of the dangers of relying on estimates or averages rather than data.

The story broke when U.K.-based investment bank Barclays Capital was caught lying about its London Interbank Offered Rate, or Libor, borrowing cost submissions. The crazy part is that the lies weren’t even about real money being borrowed or loaned. Libor is a hypothetical rate: the rate at which each of the banks on the Libor panel believed it could borrow funds that morning. It is not a transaction rate. In other words, these numbers are estimates that are then averaged. And even those estimates, it turned out, were made up.
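Libor was calculated as a trimmed (interquartile) mean of the panel's submissions: the highest and lowest quartiles were discarded and the middle half averaged. The sketch below, with entirely invented submission figures for a hypothetical 16-bank panel, shows why that trimming offers only weak protection once several banks shade their quotes together:

```python
import statistics

def libor_fix(submissions):
    """Interquartile ('trimmed') mean: drop the highest and lowest
    quartiles of submissions, then average the rest."""
    s = sorted(submissions)
    k = len(s) // 4                       # size of each trimmed quartile
    return statistics.mean(s[k:len(s) - k])

# Invented submissions (in percent) from a hypothetical 16-bank panel.
honest = [3.10, 3.12, 3.13, 3.15, 3.15, 3.16, 3.17, 3.18,
          3.19, 3.20, 3.21, 3.22, 3.24, 3.25, 3.27, 3.30]

# Four colluding banks each shade their quote down by 20 basis points.
shaded = list(honest)
for i in (8, 9, 10, 11):                  # quotes that sat mid-pack
    shaded[i] -= 0.20

print(f"honest fix: {libor_fix(honest):.4f}%")   # honest fix: 3.1850%
print(f"shaded fix: {libor_fix(shaded):.4f}%")   # shaded fix: 3.1450%
```

Notice that the four shaded quotes are themselves trimmed away as outliers, yet the published fix still drops by 4 basis points, because their presence pushes previously excluded low quotes into the averaged middle.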

This was not widely known outside the industry, although it was well understood by insiders.

Without controls to stop them from lying, bankers had a free hand to under-estimate their borrowing rates -- and they did. Manipulation of the Libor rate brought short-term financial rewards to the colluding banks, but it has had lasting negative effects downstream on others who did not know.

It will take years for the lawsuits currently underway to work their way through the courts. One of the fixes being considered is to replace Libor with another rate based on actual market data, such as bank repo rates.

The lessons here for business players are clear. We must hold ourselves and our peers to scrupulous auditing and transparency standards. But doing so also requires that we apply a critical eye to ensure the data being used to make business decisions is reliable and accurate, and actually represents what we think it does. What seems to be objective may not actually be so.

Conclusions

While all the examples above lead to confusing or misleading outcomes, there is a difference between them. The Libor case rests on deliberate manipulation of an imprecise tool, while the criminal justice pitfalls arguably stem from well-intentioned attempts to use data more effectively. All three, however, offer an opportunity to incorporate cues and lessons learned from fields mostly outside the corporate world. We hope this will help sustainability professionals avoid unnecessary pitfalls, foresee backlashes, and learn to spot some of the lurking question marks that may exist in their own businesses’ metrics.

Some of the questions we can ask, toward the goal of designing and choosing metrics more likely to lead to sustainable business outcomes, are: Are we currently missing something important in our journey to sustainability? Will our actions create unexpected outcomes, or unleash perverse incentives that may later bite us or our stakeholders? Are the numbers actually telling us what we think they’re telling us?

The next parts of this series will offer examples from economics, psychology, political science, and even weather forecasting.
