Saturday, November 1, 2008

Response to "5 False Myths of InfoVis"

I started this post as a comment response to Enrico Bertini's interesting post "5 False Myths of InfoVis", but it started getting really long, so I decided I could probably justify posting my response here.

For reference, here are Bertini's 5 False Myths (I think 'false' may be a redundant word here):

  • FM1 - InfoVis is about data exploration
  • FM2 - InfoVis is about discovery
  • FM3 - InfoVis is about new visualization techniques
  • FM4 - InfoVis is about vision
  • FM5 - InfoVis is about the data

Well, this is a very interesting list. I agree with many of the points, especially about the second-class way our community seems to treat interaction. I think interaction design should be as central to good InfoVis as visual design.

I'm not sure what I think about FM1 and FM2 -- some combination of discovery and exploration certainly is undertaken by real users of real InfoVis tools, especially in the bioinformatics and intelligence domains. I've seen this sort of exploration myself in ethnographic studies of scientists at work with ad hoc visualizations. But, yes, these are not the only reasons for doing InfoVis research.

Some work is moving forward on FM3 -- we are seeing more studies of how people see data and use visualization (e.g., Van Ham & Rogowitz at this year's InfoVis) rather than always creating "new" techniques. My own work from last year's conference was about combining existing techniques to leverage the benefits of linking multiple visualizations of related data.

Finally, I think FM5 is my favourite -- I'm a big proponent of the "human-in-the-loop" decision-making model, often advocated in the CHI and CSCW communities. I think we need to create interactive experiences that aid task completion by blending the user's world knowledge with any new information present in the data to be visualized. We shouldn't make presenting the right answer from the data our goal, as we won't be able to do it without solving artificial intelligence in a real way. And if we could do that, we could just solve problems algorithmically and wouldn't need InfoVis. Bertini is right that most tools do not take account of any prior knowledge, but there are some good examples from the VAST community where prior knowledge can be explicitly entered into the analysis process (e.g., i2's 'Analyst's Notebook' or IBM Research's HARVEST project). My U of C colleague Torre Zuk has also analyzed how a physician's prior knowledge affects their decision making when presented with a visualization. This is certainly a challenging and fruitful area for more research.

Thanks, Enrico, for the thoughtful posting!