Category Archives: Data Disasters

Effective means of display in Excel

There are many great articles and tutorials on how to build interactive data displays in Excel. However, there are few on how best to display data effectively. I think this is an important subject that every Excel user should at least think about.

On the other hand, I personally don’t like any set of rules that says this key performance indicator, metric, or chart must always be used. If we think about the best books on English, say The Elements of Style, or even the best books on data visualization, say The Visual Display of Quantitative Information, we find neither author posits his principles as hard-and-fast rules. Rather, each submits a set of guidelines, heuristics to help us test and decide. The Excel developer world should have such guidelines, so I present the following for effective means of display in Excel [1]:

– Mutual Exclusivity

– Common Interpretation

– Sufficiency

Mutual Exclusivity

Mutual exclusivity speaks to presenting data and metrics whose different meanings do not overlap. I say “meanings” here because there may be instances in which the metrics overlap, but their meanings do not. Consider a simple income statement. We all know Gross Profit = Revenue – Costs. If we were to ignore meaning when applying the test of mutual exclusivity, we might venture to include only two of these measures, since the third is only one algebraic step away. (For instance, we could divine Gross Profit in our heads by presenting only Revenue and Costs.) In this sense, there is surely overlap among all three of these metrics. But in our understanding of business performance, showing all three is the best option because the meaning each imparts is separate and independent.
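The income-statement point can be sketched in a few lines of Python. The figures are hypothetical, chosen only to illustrate that any one of the three measures is derivable from the other two, yet all three still earn a place in the display:

```python
# Hypothetical income-statement figures (illustration only).
revenue = 500_000
costs = 320_000
gross_profit = revenue - costs  # algebraically redundant with the other two

# Showing all three is still the clearest presentation, because each
# line item carries an independent business meaning.
for label, value in [("Revenue", revenue),
                     ("Costs", costs),
                     ("Gross Profit", gross_profit)]:
    print(f"{label:>12}: {value:>10,}")
```

The redundancy is purely algebraic; the test of mutual exclusivity is applied to meanings, not formulas.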

On the other hand, take a look at the following chart.


The addition of a total measure to the Unit Sales chart does little to help us divine regional sales differences. Indeed, the total is already plotted to a certain extent: it is the aggregation of the other regions. Now consider the average measure. The variation between regions is such that we can gauge the average with our eyes (perhaps not with granularity, but the additional metric does not help us with that either). I argue these two metrics provide an unjustifiable overlap with information already inherent in the presented regional sales. (But you may want to include these metrics for good reason, and we’ll deal with a way to do that by the end of this article.)


Common Interpretation

Common interpretation refers to the organizational, institutional, and even societal paradigm through which we understand and interpret data. That may have been a mouthful, but it simply means this: all the people looking at the same metrics must understand them in the same way.

This is especially important for metrics created by and for a specific organization or market. Most of us agree on how to calculate the Current Ratio (we should; there’s really only one way to do it). On the other hand, a single organization could come up with its own metric. Consider a performance metric that results in a score between zero and one. For those familiar with the metric, the results are easy to interpret. But those on the outside may be puzzled. And unless everyone agrees on what the metric means, placing it on your dashboard or report is sure to generate confusion.
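The Current Ratio really does have only one definition: current assets divided by current liabilities. A minimal sketch, with hypothetical balance-sheet figures:

```python
def current_ratio(current_assets: float, current_liabilities: float) -> float:
    """Current Ratio: current assets divided by current liabilities."""
    return current_assets / current_liabilities

# Hypothetical balance-sheet figures (illustration only):
print(current_ratio(150_000, 100_000))  # prints 1.5
```

Because the calculation is standard, anyone reading a dashboard knows what a ratio of 1.5 means. A home-grown zero-to-one score carries no such guarantee.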

I speak from experience. I’ve watched organizations argue over such metrics and what they mean. When this happens, it’s time to take a second look at the measure. Or consider this scenario: an organization rates its employees once a year. One underperformer receives a 3 out of 5 each year, which according to the company’s own definition of the metric is a “good” score. However, compared to his 5-out-of-5 peers, he is an underperformer. The employee is fired for underperforming but sues the organization, since a 3 out of 5 is considered good on paper. Who’s right? This is an example of an uncommon interpretation.


Sufficiency

Sufficiency speaks to whether there are too many or too few metrics to deliver the required message. Sufficiency is related to mutual exclusivity, but it deals with the extraneous or insufficient inclusion of metrics. For example, you may find some white space on your dashboard that could be filled with even more information. If that information doesn’t aid in the understanding of the data already included, don’t add it! The white space might feel like empty real estate, but not every metric needs a neighbor.

Let’s go back to the first chart presented. Here again, average and total are the problem, but removing them might also leave an insufficiency in the information delivered. If the aim of the chart is to present the relative performance of the sales regions, then the inclusion of these metrics certainly inhibits our understanding. For example, the inclusion of the total requires that we stretch the chart’s value axis considerably, thus dampening the apparent variation between regions. But we can still present this information in a way that doesn’t compete with the chart’s message while delivering a sufficient amount of information. The chart below demonstrates one such way you could use in a pinch.



The most important thing to remember is that these aren’t hard-and-fast rules. Consider them a guide as you continue your Excel journey. As we are now merely scratching the surface of what Excel can do, effective means of data display will serve us well.


[1] Adapted from the field of Value-Focused Thinking and specifically from the work of Gregory Parnell.

Parnell, G. S., “Value-Focused Thinking Using Multiple Objective Decision Analysis,” Chapter 19 in Methods for Conducting Military Operational Analysis: Best Practices in Use Throughout the Department of Defense, eds. Andrew Loerch and Larry Rainey, Military Operations Research Society, 2007.

Newspapers – Read Your Sources!

Update – Apologies for the bad proofreading job in the original blog post. Before writing this, I had spent the entire day writing a book proposal for a technical publisher. I was a bit burned-out.

It’s an exciting time for the United States. In the coming days (possibly tomorrow), the Supreme Court of the United States will issue a ruling on whether all or part of the Affordable Care Act is constitutional. Amid the hype is an attempt by the media to figure out – that is, predict – the court’s ruling before it’s released this week. One factoid being passed around is that a survey of former Supreme Court law clerks and attorneys, taken after oral arguments a few months ago, suggests that the law will be overturned.

In an article by Business Insider, author Brett LoGiurato writes:

Most former Supreme Court attorneys and clerks believe that the individual mandate to buy health insurance — the signature provision of President Barack Obama’s Affordable Care Act — will soon be struck down as unconstitutional by the high court.

That’s according to a joint poll by the American Action Forum, Center Forward and Purple Strategies. An astounding 57 percent of clerks and attorneys polled now think that the mandate will be ruled unconstitutional. That’s up from just 35 percent in March, before the court held oral arguments on the case.


The pic below appears in the Business Insider article itself and is a snapshot from the actual survey.

I’m not sure how Business Insider misunderstood the study, especially because a picture of it (the very same picture above) appears in their article. Clearly the survey does not say that 57 percent of clerks and attorneys think the mandate will be ruled unconstitutional. Rather, it says the average probability given by all the respondents is 57 percent. But can we say that “most” of the survey respondents believe that the mandate will likely be struck down? No, that’s wrong, too. We really don’t have enough information to draw that conclusion. More on that point in a minute.

This article, by the Washington Post, gets it wrong too:

A new poll of 56 former Supreme Court clerks finds that 57 percent think the individual mandate will be overturned. That’s a 22-point jump from the last time the same group of clerks was surveyed, right before oral arguments. Back then, 35 percent thought the court would toss out the required purchase of health insurance.

Ezra Klein (who loves charts) gets it wrong.

Here, the Wall Street Journal‘s Joseph Rago, too.

Did anyone get it right? Yes. Laura Green, of the Palm Beach Post, writes:

A poll of former Supreme Court clerks and lawyers who have argued before the justices found that many switched their initial prediction that the court would uphold the sweeping law. The oral argument performance led them to now speculate that there is a 57 percent chance the court will strike down the heart of the law, which requires virtually every American to buy insurance.


What can we say about the survey’s results?

Not a ton. I question the usefulness of engaging folks on the probability that a dependent, nonrandom event will occur. When the poll’s respondents came up with their “probabilities,” they were really giving a score to how confident they are in their belief that the Supreme Court will strike down the mandate. Calling this a “probability” is misleading.

The average is above 50%–can’t we say that most believe the mandate will be struck down? 

Not really. Consider this scenario. Of the 56 respondents, nine said the probability of the Supreme Court overturning the mandate was 100 percent; that’s a rather slim, if certain, margin. Now suppose the rest – 47 respondents – gave the probability of the court overturning the mandate as 49 percent. In other words, those 47 feel the likelihood is slightly less certain than a coin flip. Do the math; you’ll see that it averages to about 57 percent. Do those nine respondents in this example constitute “most believe”?
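The arithmetic behind this scenario is easy to check. Using the hypothetical split from the text (nine respondents at 100 percent, forty-seven at 49 percent):

```python
# Hypothetical scenario from the text: 9 respondents say 100%,
# the remaining 47 say 49%, for 56 respondents total.
probabilities = [100] * 9 + [49] * 47

average = sum(probabilities) / len(probabilities)
print(round(average, 1))  # prints 57.2
```

So an average of roughly 57 percent is perfectly consistent with only nine of fifty-six respondents actually believing the mandate is certain to fall. The average alone tells us nothing about how many respondents sit above 50 percent.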

Survey Source:
Affordable Care Act Survey, June 2012
Sponsored by: American Action Forum, Center Forward, and Purple Strategies

Garbage In, Garbage Out

There are a lot of awful visualizations and infographics floating around the internet. I’ve sent some of the worst I’ve found to my new internet friend, Kaiser Fung, at JunkCharts. His blog hosts a formidable, if unfortunate, collection of chartjunk, and he attempts both to make sense of them and to provide suggestions on how to make them better.

So where do I find these graphics? I go to a site I am finding harder to respect as time goes on. That might sound harsh, but if the aim of the site is to “make sense of complex issues,” I think it often misses that point while giving praise and monetary rewards to data designs that, in fact, make issues harder to understand. To demonstrate this point, consider a recent winning visualization, E-CUBE-LIBRIUM, from a team at Columbia University. According to the team:

By configuring social, economic, and environmental data in terms of a “Rubik’s Cube” analogy, the cube represents a country’s growth with inversely proportional categories placed opposite each other. The 3D extrusions on each cube face is a sustainability indicator, showing volumes where data increases or decreases. From these models, we are able to quickly draw connections and visually identify how each factor affect the equilibrium of the entire system….For example, how do economic gains adversely affect social and environmental health? In each cube, we are searching for stable equilibrium and positive, balanced development that can be sustained over time. Unstable equilibrium may appear to be balanced, but it is not sustainable, due to an irresponsible levels of CO2 emissions, or alarming income and social disparity, for example.

Here’s how the country of Chad is represented in the Rubik’s cube analogy:

In theory, plotting a country’s relevant indicators should give some visual cues and insights. However, their three-dimensional forms obscure much of this data. There is no scale and no way to tell while looking at each figure if a system is balanced, or even how much gain is too much gain. Moreover, so much is represented that we must keep deferring to the legend to figure out exactly what we’re looking at.

But there are larger problems that seriously undermine the team’s work. First, consider this paragraph excerpted from their write-up:

By abstracting data into these virtual objects, they can then be tested for physical properties such as center of gravity, balance point, mass, inertia, aerodynamics, and any number of precise simulation results…The center of gravity of a cube gives us indications as to where a country’s development is distorted and which direction the country needs to move in order to achieve balance. Inertia allows us to test for direction of past and future movement – or lack thereof. Aerodynamics would reveal holes and outliers in a country’s cube, indicative of a crisis or a boom/bust capitalist cycle.

I’m not sure what to say here except…no, these things aren’t possible.  There’s a reason we don’t test a bar chart for its weight in pounds – or how far we can toss a three-dimensional pie chart like a Frisbee.  Graphs are representations of abstract values in made-up space; they don’t have inherent physical properties.    

Which brings me to the second major problem.  Because humans are pattern-oriented, ideas that manifest symmetry and balance can be appealing and convenient.  Consider American politics: you might sit on the left, the right, or smack dab in the middle.  It’s an arguably useful simplification; but when you consider where you fall on the political spectrum, is the “ideal” policy truly at the center point?  Put another way, what reason exists to assert that if a country’s indicators aren’t in “balance”, or tending toward some abstract and invented equilibrium, national turmoil will result?  Nothing, of course.

But a steady rise in CO2 emissions sounds like sustainable growth, right?  Maybe, but E-CUBE-LIBRIUM never speaks to this point visually – it’s a point asserted in the team’s write up.  In fact, nothing new is gained by the visual Rubik’s cube.  The actual country indicators by themselves, as numbers, likely do a better job of telling this story, if it exists.

Which brings me to the biggest problem of all: the entire thing is rather heavy-handed. I believe the designers knew going in which countries they wanted to portray as “unsustainable.” Consider:

Unstable equilibrium may appear to be balanced, but it is not sustainable, due to an irresponsible levels of CO2 emissions, or alarming income and social disparity, for example.

Because there’s nothing in this visualization that indicates system imbalance, or, for that matter, irresponsible CO2 emissions or alarming social disparity, I’m left wondering if these items aren’t just the preconceived notions the designers imposed upon their visualization.  Indeed, the passage appears to argue that their design could give a misleading impression about a country’s stability, that knowledge of a country not presented here (such as social disparity) might mean the appearance of sustainability is incorrect.  Put another way, it doesn’t matter what our visualization says, it can’t be sustainable because we can’t have a country that pollutes at an alarming rate violate our ridiculous analogy. Sounds like confirmation bias to me.  

Apparently, the contest’s host site, Columbia University, and the United Nations disagree with me (and, to some extent, Newtonian physics). This team’s nonsense won their school $10,000 to continue visualization education. Hopefully, Columbia will invest in some books on the subject.

I’ll leave you with this.