
Data Visualization for Humans

Dot York

I have a personal hatred of badly done infographics (or “infoposters”, as they’re called) that actually show less information than, say, a good old table or list would. Thankfully, Rick Threlfall acknowledged that this is sometimes the case. He went above and beyond this with his presentation on how to make better data visualisations through understanding human perception, based on his experience creating them in scientific contexts.

Threlfall explained that a badly made graph is worse than a table because it forces a visual shift every time you want to compare values, but that in these situations interaction can help overcome this, e.g. by allowing people to click to change the axes or choose which elements they want to compare.

He suggested four primary means of thinking about design patterns when it comes to perception:

  1. Clusters – we naturally see some things together, e.g. anything that looks like a face, is (see Scott McCloud on anthropomorphism).
    This is known as visual search or pre-attentive processing.
    This can also work against us, as we are basically lazy – people will scan a short list before a longer one (i.e. clear out your data!)
  2. Outliers – spotting the outlier is hard with too much static (or “chart junk”, as Tufte calls it). Threlfall called out www.cruise.co.uk as having so many competing elements that “I have no idea where to click”.
  3. Extents – choosing the right extents is key – using false scales can effectively let you lie with statistics
  4. Correlations – are there important relationships that you need to highlight? What’s a priority?
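The “extents” point is worth making concrete: a truncated axis really does let you lie with statistics, and the arithmetic is simple. Here is a minimal sketch (the sales figures are entirely hypothetical, purely to illustrate the effect):

```python
# How a truncated ("false") axis exaggerates differences.
# The figures below are hypothetical, purely to illustrate the arithmetic.

def visual_ratio(a, b, axis_min=0.0):
    """Ratio of the drawn bar heights for values a and b
    when the axis starts at axis_min instead of zero."""
    return (b - axis_min) / (a - axis_min)

# Two values that differ by about 5%:
sales_last_year, sales_this_year = 95, 100

# Honest axis starting at zero: the bars look almost identical.
print(visual_ratio(sales_last_year, sales_this_year))               # ~1.05

# Axis truncated to start at 94: the same 5% gap is drawn 6x taller.
print(visual_ratio(sales_last_year, sales_this_year, axis_min=94))  # 6.0
```

The underlying data never changes; only the extents do, and the perceived difference goes from “barely visible” to “dramatic”.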

If Steve Krug’s suggestion is ‘don’t make me think’, Threlfall’s was ‘don’t make me concentrate’. He suggested using data viz to answer a particular question, eliminating all excess information to allow a direct, limited focus.

More than anything, Threlfall asked people to ask the right question before doing things with data – it’s too easy to make a table or graph from data you know well that is at best confusing to others and at worst downright misleading.

He finished with an example from his own work – testing samples for dangerous materials. In a situation like this, the question is not actually how much of a chemical is present, but whether it is over (or could be over) the safety threshold. He therefore used a dashboard that showed, using colour, whether each sample was safe or not.

I also liked his comment in the Q&A about how you have to be careful with showing inconclusive results – people want black and white and won’t read the small print for things such as margin of error, so you need to make it clear what’s ambiguous and let them investigate.
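Putting the dashboard and the Q&A point together, the logic might look something like this. To be clear, the threshold, margin of error, and the traffic-light colours are my own illustration, not Threlfall’s actual implementation:

```python
# A sketch of safe / not-safe dashboard logic with an explicit
# "inconclusive" band, so ambiguity is shown rather than hidden.
# Threshold, margin, and colours are illustrative assumptions.

SAFE, UNSAFE, INCONCLUSIVE = "green", "red", "amber"

def classify(measured, threshold, margin_of_error):
    """Map a measurement to a traffic-light colour, treating readings
    within the margin of error of the threshold as inconclusive."""
    if measured + margin_of_error < threshold:
        return SAFE
    if measured - margin_of_error > threshold:
        return UNSAFE
    return INCONCLUSIVE

# Hypothetical readings against a threshold of 10 units, with ±0.5 error:
print(classify(8.0, 10, 0.5))   # green - clearly under
print(classify(12.0, 10, 0.5))  # red   - clearly over
print(classify(9.8, 10, 0.5))   # amber - too close to call
```

The point of the amber band is exactly the one from the Q&A: rather than rounding an ambiguous reading to black or white, the display flags it and lets people investigate.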

I would have liked to have seen mention of Gestalt psychology – something that a lot of design students are taught as a way of understanding how visual information is prioritised.

The issue of choosing the right questions and eliminating excess information (Strunk’s “omit needless words”, anyone?) reminded me of similar work that Ben Holliday has been presenting on from the UK Department for Work and Pensions (DWP).