Enrico Bertini writes the interesting Visuale blog, and recently posted a piece arguing that our research quest for 'Sensemaking' misses the forest for the trees: in creating and studying analysis processes, we are not actually supporting realistic scenarios where decision support is needed in a timely manner. Specifically, he says "visualization is useless if it doesn't help people take actions". While I don't necessarily agree that all our InfoVis research is barking up the wrong tree, I see his point. Some projects, such as my own Uncertainty Lattices, are specifically designed to help people make fast decisions about data. However, it is true that in the InfoVis community, and especially in the sensemaking community, we seem to focus on process before results.
I see his point in that many of the solutions we develop as researchers are decoupled from actual use. I think Shneiderman and Plaisant addressed this somewhat in their paper on MILCs (multi-dimensional in-depth long-term case studies). The problem is indeed structural: we cannot prove real usefulness without long-term deployments, and the incentive for such deployments is low in academia (these sorts of experiments are also time-consuming). Nor can we become tool builders for business without careful (and publishable) follow-up evaluations. So, what is the solution?
I think we could be doing great InfoVis research while also having an impact in the analytics world, especially business analytics. We need to partner more with those real-world users of data... I would be elated to see some of the great ideas I see every year at InfoVis and other venues actually become real products. There is a gaping hole between the great research we do and the market.
However, I'm not sure that adding the constraints Enrico mentions would necessarily lead to improved design, however much design can benefit from explicit constraints. Even a cursory look at the bulk of currently available commercial business analytics tools shows that they would never have been acceptable to an 'academic' audience, given their poor information design and layout, and their violations of well-known principles of human perception. On top of that, they are almost all ugly.
I recently saw a deployed visual analytics tool that used dark blue text on a purple background. It was illegible. But it was deployed and paid for. And it was working for the customer. I would argue that deployment success, and an ability to deliver answers without exploration, are not indicators of quality design. This is the age-old mystery of product adoption by the market. Perhaps it is a matter of providing the immediacy Bertini mentions: decision support in a short time; the answer rather than a lengthy exploration process. The hated fuel gauges might do that better than my own VisLinks. Great, if we are going for speed and quality of decisions and not depth of insight or potential for discovery. We need to separate the two, as they can't be supported the same way. Sensemaking is not about providing a single answer; that's artificial intelligence, or maybe even 'smart graphics'.
I agree completely on Data Mining vs. Visualization... I would sum it up by saying the 'vs.' needs to become '&'. I think the strength for the future lies in closer ties between the two. We have 'data manipulations' as a step in every version of the InfoVis pipeline and in all visual analytics process diagrams, but too often the visualization actually shows only surface data, or the outputs of data mining. A closer coupling of the two, with visualization as a 'box-opening' tool for data mining, will be important. My own thesis research has been looking at just this for statistical linguistic processes such as translation and information retrieval, and I hope to do more of it in the future.