I'm really not sure when or whether sketchiness works to communicate uncertainty.
But that hasn't stopped me sharing a few ideas in talks at recent meetings - such as Antony Unwin's Data Meets Viz Workshop or the Evolving GIScience event held this summer in memory of Pete Fisher.
A few folks have asked me to document some of these ideas about how we might use the roughness or sketchiness of symbols to communicate. Here goes ...
BINARY UNCERTAINTY
We explored this a little in Sketchy Rendering for Information Visualization (Wood et al., 2012), finding that people seemed to engage with sketchy depictions of predictions. See the text around Figure 15, which is reproduced below.
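In code, the binary case is just a switch. Here is a minimal Processing sketch, assuming the HandyRenderer API from our Handy library as I recall it (setIsHandy toggles sketchiness on and off; do check the current documentation), rendering observed values crisply and predicted values sketchily. The data are made up.

```java
import org.gicentre.handy.*;

HandyRenderer h;
float[] observed  = {30, 45, 38, 52};  // known values (hypothetical data)
float[] predicted = {60, 48};          // forecast values (hypothetical data)

void setup() {
  size(400, 300);
  h = new HandyRenderer(this);
  h.setRoughness(1.5);
}

void draw() {
  background(255);
  fill(120, 180, 120);

  // Observed bars: switch sketchiness off for a crisp rendering.
  h.setIsHandy(false);
  for (int i = 0; i < observed.length; i++) {
    h.rect(30 + i * 50, height - 40 - observed[i] * 3, 30, observed[i] * 3);
  }

  // Predicted bars: switch sketchiness on to flag uncertainty.
  h.setIsHandy(true);
  for (int i = 0; i < predicted.length; i++) {
    h.rect(30 + (observed.length + i) * 50, height - 40 - predicted[i] * 3, 30, predicted[i] * 3);
  }
}
```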
ENCODING QUALITATIVE INFORMATION
But what about encoding richer information? We can vary a range of characteristics in the sketchy renderer, and these might be mapped to different categorical characteristics of data sets. Let's take some rapidly sketched maps of an island I know.
Let's vary the width of lines used ...
... and their separation ...
We can make the maps visually distinguishable by varying these and other characteristics. Some sketchy symbols look more like pencil drawings ...
... and others like they have been produced with a Sharpie. The point is that these differences are visually distinguishable and so could be used to encode information.
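As a rough sketch of how such styles might be parameterised: the Processing example below assumes Handy's setFillWeight and setFillGap methods control hachure thickness and separation, as I recall from the documentation, and the threshold values are purely illustrative.

```java
import org.gicentre.handy.*;

// Two visually distinguishable sketchy styles for two categories.
HandyRenderer thinPencil, thickMarker;

void setup() {
  size(400, 200);
  thinPencil = new HandyRenderer(this);
  thinPencil.setFillWeight(0.5);  // fine hachure lines ...
  thinPencil.setFillGap(1);       // ... packed closely: a pencil look

  thickMarker = new HandyRenderer(this);
  thickMarker.setFillWeight(4);   // heavy hachure lines ...
  thickMarker.setFillGap(8);      // ... widely spaced: a marker look
}

void draw() {
  background(255);
  fill(80, 120, 200);
  thinPencil.ellipse(110, 100, 140, 140);   // category A
  thickMarker.ellipse(290, 100, 140, 140);  // category B
}
```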
ENCODING QUANTITATIVE INFORMATION
We can also vary the roughness or sketchiness applied. We can do so systematically - from low to high in these cases depending upon distance from the top left.
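A minimal sketch of this systematic variation, again assuming the Handy API as I recall it, mapping distance from the top-left corner onto roughness:

```java
import org.gicentre.handy.*;

HandyRenderer h;

void setup() {
  size(440, 440);
  h = new HandyRenderer(this);
}

void draw() {
  background(255);
  noFill();
  float maxDist = dist(0, 0, width, height);
  for (int row = 0; row < 5; row++) {
    for (int col = 0; col < 5; col++) {
      float x = 40 + col * 80;
      float y = 40 + row * 80;
      // Map distance from the top-left corner onto roughness.
      h.setRoughness(map(dist(0, 0, x, y), 0, maxDist, 0, 4));
      h.rect(x, y, 60, 60);
    }
  }
}
```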
This leads to a number of questions: Can we detect order in 'roughness'? How consistently? What factors act as distractors? Can we encode other quantitative or qualitative information in symbols, apply roughness, and interpret both items of information concurrently? What is the 'length' (in Bertin's terms) of this visual characteristic?
We began exploring some of these questions in the Sketchy Rendering paper, but don't have robust answers and are still exploring design possibilities. Some of these are documented below.
ORDINAL POSITIONAL UNCERTAINTY
We did suggest that positional uncertainty might be encoded using sketchiness - see Figure 2 in the paper. Our experimental results show that roughness is not interpreted consistently between participants, but that individuals are frequently able to order symbols in terms of the roughness applied.
Here are the boroughs of London, with one square symbol per borough arranged at the borough centroid using a standard projection (British National Grid).
This is about as big as we can make the symbols without occlusion occurring. It doesn't give us much room to add statistical graphics, such as those we used in the BallotMaps paper (Wood et al., 2011) to identify name ordering bias in local elections in London.
And if we make things bigger then we get occlusion. Things are either too small to detect or too occluded to discern.
So we relax the geography - moving to a GridMap (Eppstein et al., 2013) or a Spatially Ordered Treemap (Wood & Dykes, 2008), which can be described according to its properties at each level of the graphical hierarchy (Slingsby et al., 2009).
This gives us the space to see the data at the cost of some geography. Much recent giCentre work concerns this trade-off: geography is so important in helping us identify, detect and explain trends in spatial data, and it acts as a reference frame upon which we can apply our tacit knowledge to the problem in hand; relaxing it, though, opens up possibilities for rich and sophisticated depiction of statistics.
We find that by compromising on the geography (g) we can use more visual channels and information space for the statistics (s). We are looking for sweet spots here and aiming to push above the curve where we can.
But how do we learn new geographies, and relate abstract geographies to Cartesian spaces? We use animated transitions in our dynamic applications, but enrich these with optional symbolism to show positional error. Here we add curves to show the way in which the boroughs have been moved.
But we could use roughness. Evidently this obfuscates, undermining our efforts to make the data more interpretable by removing some of the geography. But if we can switch it on and off, or show only sketchy edges, there is a chance we might be on to something.
With vectors ...
Without vectors ...
With data ...
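To sketch the idea in code: a minimal Processing example, again assuming the Handy API as I recall it, in which roughness grows with the displacement between a unit's grid position and its projected centroid, with the displacement vectors optionally overlaid (toggle showVectors for the 'with/without vectors' views above). The positions and the 150-pixel ceiling are hypothetical.

```java
import org.gicentre.handy.*;

HandyRenderer h;
boolean showVectors = true;  // toggle for the 'with/without vectors' views

// Hypothetical positions: where each square sits in the grid layout
// and where its borough centroid falls under the original projection.
float[][] gridPos = {{60, 60}, {160, 60}, {260, 60}};
float[][] geoPos  = {{75, 85}, {230, 150}, {265, 70}};

void setup() {
  size(340, 220);
  h = new HandyRenderer(this);
}

void draw() {
  background(255);
  for (int i = 0; i < gridPos.length; i++) {
    float displacement = dist(gridPos[i][0], gridPos[i][1], geoPos[i][0], geoPos[i][1]);

    // Roughness grows with how far the unit was moved; 150px is an
    // illustrative ceiling for the displacement-to-roughness mapping.
    h.setRoughness(map(min(displacement, 150), 0, 150, 0, 3.5));
    stroke(0);
    fill(200, 160, 90);
    h.rect(gridPos[i][0] - 30, gridPos[i][1] - 30, 60, 60);

    if (showVectors) {
      stroke(100);
      line(gridPos[i][0], gridPos[i][1], geoPos[i][0], geoPos[i][1]);
    }
  }
}
```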
Of course there are lots of efforts to re-project data into layouts that retain geography but aid interpretation and the space-filling grid is just one of these. The designers at After The Flood have produced an excellent alternative in which they retain geography more effectively but sacrifice some space in which statistics can be plotted. We think that their London Squared Map is a great example of a G/S sweet spot and hope it is learned by Londoners.
QUANTITATIVE & QUALITATIVE ENCODING
Anecdotally people seem to notice and engage with sketchy graphics. Actually, there is some empirical evidence to support this too. So we might want to use sketchiness even where it is a sub-optimal encoding. I argue that this would particularly be the case where the nature of the sketchy rendering draws attention to the phenomenon under study. The best I can think of here relates to performance in school tests.
Alan MacEachren discusses the issue of Matching Symbols with Referents very usefully in a section in Some Truth With Maps (MacEachren, 1994).
Sketchiness might be useful here, to encode (or double encode - let's be cautious) numeric values whilst emphasizing the phenomenon under consideration.
Here are symbols representing the 1km grid cells in London that contain primary schools. The circles are sized according to the number of schools and coloured by performance in the Standard Assessment Tests (SATs). Darker colours are used to indicate less good performance. We don't need a legend as you won't be using this map to try to understand school performance in London (please!). Now, here's a huge caveat. SATs do not (explicitly) test handwriting. So the title is a little misleading - but the map is designed to convey a message, the message here being that this might be an interesting means of symbolism for this kind of referent.
Now let's double encode by applying the sketchy renderer and using roughness to represent performance too. Rougher circles relate to locations with less good performance in the SATs.
As is the case with pretty much any visual channel used to encode numeric data, we can decide on the range of variation to use and how to map this to the range of values. In terms of colour, I have used the full range of ColorBrewer GREENS to span the range of values. This is the controversial colour equivalent of using a non-zero baseline on a bar chart. But it makes a pretty map and emphasizes variation. I do the same with roughness. But we have to decide on a maximum level. Below, the maximum level of roughness is increased, making a less tidy map and perhaps suggesting that the kids in London are doing less well in terms of their handwriting than the tidy map indicates. Which is why it is a good job that I have explained that this is not the case.
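Here is how those two design decisions, the colour ramp and the maximum roughness, might look as parameters in a sketch. The mapping function and data are hypothetical, and the two greens only approximate the ends of a ColorBrewer Greens ramp.

```java
import org.gicentre.handy.*;

HandyRenderer h;
float maxRoughness = 3;  // the design decision discussed above: try 6 for a 'less tidy' map

void setup() {
  size(400, 300);
  h = new HandyRenderer(this);
}

// score in [0,1], 1 = best performance; numSchools sizes the circle.
void drawSchoolSymbol(float x, float y, int numSchools, float score) {
  // Darker green and rougher outline both indicate less good performance.
  fill(lerpColor(color(0, 68, 27), color(229, 245, 224), score));
  h.setRoughness(map(1 - score, 0, 1, 0, maxRoughness));
  h.ellipse(x, y, 10 + numSchools * 6, 10 + numSchools * 6);
}

void draw() {
  background(255);
  drawSchoolSymbol(100, 150, 3, 0.9);  // hypothetical grid cells
  drawSchoolSymbol(200, 120, 5, 0.5);
  drawSchoolSymbol(300, 180, 2, 0.15);
}
```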
One thing that this shows is that (map) design involves decision-making, and I hope this piece has identified some of the kinds of decisions and trade-offs that map-makers should be considering as they design. As cartographic opportunities arise and data become more complex, this process is unlikely to become more straightforward.
The piece is also intended to draw attention to possibilities, open questions and knowledge needs. I'd be happy to discuss these. Do grab the Handy library to play with sketchy graphics if you are interested. Jo has provided comprehensive documentation and interesting examples to set you up and get you going.
I'm still not sure about sketchy uncertainty, but I do think there are some interesting possibilities for design and research. Perhaps we'd better get on with this ...
JASON