At the end of 2012, comScore estimated there were 52.4 million tablet owners in the U.S.; Apple sold another 19.5 million iPads in the first three months of 2013 alone. So it shouldn’t come as a surprise that some companies, such as Roambi, Tableau, and Bloomberg, are starting to offer mobile, touch-aware data visualization apps.
But dropping your desktop user interface onto a tablet doesn’t really take the best advantage of all of those touches and gestures now, does it?
Consider Bloomberg’s iPad app. By most standards, it’s a superbly designed interface for browsing and visualizing stock information.
But adding and removing comparative measures on graphs, for instance, still requires pecking through menus, with a finger that sometimes feels just a little too obese for the job. With the exception of pinch-zooming on a graph, the app largely misses a chance to rethink the entire experience around accelerometers, multi-touch, and gestures.
It’s a worthwhile challenge: How can we design more natural tablet-native interactions and metaphors for data visualizations?
Thankfully, a growing body of research is figuring out how to do data visualization on tablets and touch surfaces in a way that acknowledges the challenges but also takes advantage of these devices’ unique interaction affordances.
One project at Carnegie Mellon is exploring how physical models can enable the exploration of multivariate data using multi-touch. In the video below, researcher Jeff Rzeszotarski shows off the prototype on an iPad.
He introduces tools like the razor, attractor, and lens, which let the user filter and arrange points in a scatter plot in different ways. The razor tool is particularly interesting: you place two fingers to define its end points, then swipe it across the data, and it separates out points that fail its filter parameters as it passes over them. Shaking the iPad makes sure every point has been passed through the sieve. Other interactions are even more physically grounded, such as tilting the display to sort points by “gravity” and “throwing” points by swiping the razor quickly across the data. Though I haven’t tried the prototype myself, these interactions do seem like they would make the user feel “closer” or more connected to the data — interacting with it more directly. Exploratory visualization could benefit a lot from this approach. Check out the video of the demonstration below. (Even more detail is available in a recently published abstract.)
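To make the razor-and-sieve idea concrete, here’s a minimal sketch of the underlying data operation — not code from the CMU prototype. It models the two-finger razor as a vertical line swept left to right: points the razor has already passed over are either kept or “shaved” aside by a filter predicate, while points ahead of it remain untouched (all function and parameter names here are my own assumptions).

```python
def razor_sweep(points, razor_x, keep):
    """Hypothetical razor-style filter over scatter-plot points (x, y).

    Points the razor has passed (x < razor_x) are split by the keep()
    predicate; points ahead of the razor are left untouched.
    """
    passed = [p for p in points if p[0] < razor_x]
    untouched = [p for p in points if p[0] >= razor_x]
    kept = [p for p in passed if keep(p)]
    shaved = [p for p in passed if not keep(p)]  # pushed out of the plot
    return kept, shaved, untouched

pts = [(1, 5), (2, 9), (3, 2), (8, 7)]
# Sweep the razor to x=5, keeping only points with y > 4:
kept, shaved, untouched = razor_sweep(pts, razor_x=5, keep=lambda p: p[1] > 4)
# kept -> [(1, 5), (2, 9)]; shaved -> [(3, 2)]; untouched -> [(8, 7)]
```

The “shake to sieve” gesture then amounts to re-running the sweep with `razor_x` past every point, guaranteeing nothing slipped through unfiltered.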
Another example comes to us from Microsoft Research, where researcher Steven Drucker and colleagues built a tablet and gesture-enabled visualization interface called FLUID and compared it to a more traditional desktop WIMP interface running on a tablet. They designed the interface for interacting with our familiar friend, the bar chart.
In their CHI paper the authors write, “Our goal is to understand whether, and how, the fluid touch-based gesture interaction offers subjective or performance advantages over the current WIMP approach to data exploration on touch surfaces.”
First they developed a set of semantic actions — things you might want to do to data, such as choosing categories to be represented, filtering irrelevant data, or ordering and navigating. Then they mapped these operations to gestures generated through group brainstorming. For instance, flicking down on a bar in a bar chart filters it out, and ordering a set of bars is achieved by swiping in either direction along the axis. (For more design inspiration on gestures for data visualization, see Dominikus Baur’s video on TouchWave.)
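The core design move — decoupling semantic actions from the gestures that trigger them — can be sketched as a simple dispatch table. This is an illustrative assumption about the structure, not the paper’s actual implementation; the gesture names and chart operations below are mine.

```python
class BarChart:
    """Toy bar chart holding label -> value pairs."""
    def __init__(self, bars):
        self.bars = dict(bars)

    def filter_out(self, label):
        # Semantic action: remove a category from view.
        self.bars.pop(label, None)

    def order(self, descending):
        # Semantic action: sort bars by value.
        self.bars = dict(sorted(self.bars.items(),
                                key=lambda kv: kv[1],
                                reverse=descending))

# Gesture -> semantic action mapping (hypothetical names).
GESTURES = {
    "flick_down_on_bar": lambda chart, target: chart.filter_out(target),
    "swipe_right_on_axis": lambda chart, target: chart.order(descending=True),
    "swipe_left_on_axis": lambda chart, target: chart.order(descending=False),
}

def dispatch(chart, gesture, target=None):
    GESTURES[gesture](chart, target)

chart = BarChart([("A", 3), ("B", 9), ("C", 5)])
dispatch(chart, "flick_down_on_bar", target="B")   # B is filtered out
dispatch(chart, "swipe_right_on_axis")             # remaining bars: C, A
```

Keeping the mapping in one table is what makes the brainstormed gestures swappable: the semantic layer stays fixed while the gesture vocabulary is iterated on.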
The comparison showed that FLUID was more accurate and faster than the WIMP interface for the given tasks, and that most users (13 of 17) found it subjectively easier to use and learn. Almost all users in the study commented that the interface would be helpful when presenting data, since the gestures would be apparent to an audience. One drawback of the FLUID interface was that not all of its options were visible or available on screen at any given time, so it couldn’t leverage the common “recognition over recall” UI design principle. Ultimately, though, some functionality will have to stay tucked away in menus: not every operation can be encoded as a distinct gesture, so some amount of interface chrome remains necessary.
There are plenty of open design questions to work out for touchable data visualization: how to make intuitive gestures that are easy to discover and remember, whether touch has advantages in data storytelling interfaces, and how to blend gestures into more traditional UI designs, among others. Deep research questions are also waiting to be resolved. Petra Isenberg, a researcher at INRIA in France, published a paper on data visualization on interactive surfaces that lays out some of the key ones: “[We] don’t fully understand how touching virtual data affects comprehension or memorability of information,” she writes.
So whether you’re a practitioner or a researcher, there is a lot to work on here. Not only will tablet usage continue to grow, but other opportunities for museum installations, kiosks, and large-format presentation systems offer plenty of use-contexts to explore data visualization that takes advantage of the full interaction bandwidth afforded by these new displays and devices.
Nick Diakopoulos is a NYC-based consultant specializing in the research, design, and development of computational media applications. His expertise spans data visualization, social media analytics, and interaction design. Follow him on Twitter.