Discussion


bpierce (Moderator) #1

For his October/November/December 2015 Visual Business Intelligence Newsletter article, titled Information Visualization Research as Pseudo-Science, Stephen critiques a recent research paper titled "Beyond Memorability: Visualization Recognition and Recall," which was presented at the 2015 VisWeek Conference. This paper illustrates many of the problems that are common in information visualization research today. In revealing and explaining its flaws, Stephen attempts to help the next generation of researchers avoid them.

What are your thoughts about the article? Do you agree or disagree with Stephen's assessment of the paper, the prevalence of these problems in information visualization research, or Stephen's proposed solutions? We invite you to post your comments here.

-Bryan

Jon_Peltier #2
I have the same response to Borkin's new paper as to her earlier one. Why is the "memorability" of a visual deemed so important? The memorable graphics had visually recognizable objects: maps, dinosaurs, and unique background or accessory images. No kittens, as Stephen mentions, but his point is clear. The unmemorable graphics consisted of undecorated tables and simple bar charts; we've all seen thousands of these, and none in particular sticks in our memory, yet our knowledge includes the information we've amassed from these thousands of messages that we've understood and stored in memory.

What is important is the memorability of the message: How well was the message communicated, and how well was this message remembered?
tomshanleynz #3
Hi - I agree with your points too. I don't attend these conferences (being NZ-based), so I don't know what reception these papers had on the day.

Re 4) The information visualization community is complacent  

I know you've said before that you don't want to be involved in Twitter, but from my point of view, it seems to be the place where a lot of discourse about the merits of data viz techniques and research takes place, with influential people in the field. I think you would find a lot of backing for your critique of this paper there (for example, I would imagine Kaiser Fung, Robert Kosara and Alberto C - and of course Jon Peltier :) ). Kosara did blog about the same paper (eagereyes.org/blog/2015/vis-2015-thursday), but it was a very high-level note on the key takeaways.

Twitter doesn't have to be a great drain on your time. We can provide links to your critiques, and endorsements, but this would be enhanced by your direct presence - and you would be more likely to find your "compatriots" and embolden others to speak up.

Just a thought. I will endeavour to do more, albeit with my very limited reach.

Cheers!
Tom

kris_erickson #4
Stephen, is the problem now one of semantics? Should we make clear distinctions between data art, infographics, and true scientific data visualization? I recently had a manager 'jazzed up' about a very low-information web 'map' of stores, drawn with a vanishing-point perspective. He of course admitted it wasn't a priority and probably took a dozen people to program, but it grabbed someone else's eye and then they sent it to our group. I can usually say "oh, that's for public consumption, but we have specific needs," and try to frame the conversation as one between "mass consumption" and "user A's specific need."

I am adding some embellishment to the reports I build. If I have a "lumber" report, I place a small "wood" icon off to the side of the report. This actually helps somewhat with navigating a report server where thumbnails of reports are shown. However, that seems like a meta-issue about reports in general rather than about the information in a specific report.
sfew (Moderator) #5
Kris,

The term "scientific visualization" has a specific meaning. It refers to the visualization of things that are physical in nature. For example, an MRI scan of someone's brain is a scientific visualization. Research in this field of study has actually been going on at universities for longer than research in the field of information visualization. My article is about scientific research regarding information visualization (i.e., the visualization of abstract data (a.k.a., statistical data); it is not about scientific visualization. Good scientific research in the field of information visualization reveals how it works and how it can be done most effectively. I and some others use the term "data art" to describe works of art that are in some manner based on or related to data. Unless a data artist intends his work to inform viewers about something in particular, scientific research into information visualization does not pertain to it.

__________________
Stephen Few
dragice #6
Despite the unnecessarily aggressive tone, critiques like this are important for the advancement of science. I am looking forward to the authors' responses and I hope this critique will start an insightful and constructive debate.
 
Although I haven't had the chance to read the paper yet, the part on statistical unreliability seems exaggerated irrespective of the paper's content. The proper choice of sample size does depend on several factors. A few are already mentioned, but the most important of all is effect size. It is possible to show strong evidence for a large effect with very few participants. I can give examples where two participants are enough.
 
Sample representativeness is also important to consider, but no researcher tries to collect a truly random sample of the entire world's population, including in cognitive sciences where "the scientific method" is used. In experiments involving human subjects, convenience samples are the norm. The use of a convenience sample does not invalidate a study -- for low-level cognitive and perceptual processes, it is reasonable to assume that convenience samples give fairly good information about humans in general.
 
Power analysis can improve study design, but it is hard to put into practice and is not without problems. Again, in most soft-science disciplines it is rarely used. I think that subjectivity in the choice of sample size is fine, provided that the uncertainty in the collected data is conveyed faithfully.
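
To make the effect-size point concrete, here is a minimal sketch of an a priori power analysis for a two-sample t-test, written in Python with the statsmodels library. The effect sizes, significance level, and power target below are illustrative assumptions, not figures drawn from Borkin's paper or from Pierre's examples.

    # A priori power analysis for a two-sample t-test.
    # The required sample size depends heavily on the effect size
    # (Cohen's d) you expect: large effects need far fewer participants.
    import math

    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()

    for d in (0.2, 0.5, 0.8, 2.0):  # small, medium, large, very large
        # Per-group sample size for 80% power at a 5% significance level.
        n = analysis.solve_power(effect_size=d, alpha=0.05, power=0.80)
        print(f"d = {d}: ~{math.ceil(n)} participants per group")

    # Approximate output:
    # d = 0.2: ~394 participants per group
    # d = 0.5: ~64 participants per group
    # d = 0.8: ~26 participants per group
    # d = 2.0: ~6 participants per group

On these (assumed) conventions, a sample in the low thirties is adequate only when a fairly large effect is expected, which is why a stated rationale for the expected effect size matters so much.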
 
No one would question that an ideal study should use power analysis and a large sample size that is representative of a meaningful population. However, discarding studies that do not meet all of these criteria is putting the bar unrealistically high. Stating that the use of 33 test subjects is a "fundamental flaw" that would "cause many scientists to read no further than the abstract" is exaggerated.
 
Finally, an unrelated question: if research in information visualization is usually mediocre and only occasionally well done, why persecute a single researcher? Why always Michelle Borkin?

__________________
Pierre Dragicevic

sfew (Moderator) #7
Pierre,

Your opening comment illustrates one of the reasons why critiques like mine are rarely published: the information visualization community tends to accuse anyone who writes a negative critique of being "aggressive." In fact, there are no examples of aggression in my critique. I invite you to read my article again and then, if you still feel that I was "unnecessarily aggressive" in any way, please do me the kindness of providing examples.

Regarding sample size: after reading Borkin's paper, if you believe that her claims are justified despite her small sample, please make your case. I think you'll find that in this case, "uncertainty in the collected data" was not "conveyed faithfully." I'm very open to learning something about the adequacy of small samples in some cases. My knowledge of statistical sampling sufficiency for various types of studies is limited. I must admit, however, that I'm inclined to doubt your statement that a sample of two is sometimes sufficient.

Why persecute Michelle Borkin in particular? I have written several negative reviews of work that wasn't Borkin's. Actually, I explained in the article why I selected this paper. I selected it because it has been getting a great deal of media attention--attention that is potentially harmful. I also selected it because it illustrates many problems that are common--more than most papers tend to exhibit. I've only encountered Borkin's work on three occasions, and on the first I wrote an enthusiastic review, because the work was brilliant.

__________________
Stephen Few
jannepyykko #8
Before reading Stephen's newsletter article, I quickly browsed through the research paper (viewed the images for 10 seconds, ha, and read the conclusions) and thought:
- Wow, this memorability issue must be important in advertising.
- When communicating numerical data in print media, slideware, and dashboards, however, I see no use for it.

Thanks for a thoughtful article.


__________________
-- Mr. Janne Pyykkö, Espoo, Finland, Europe
acotgreave #9
Hi Steve
Fascinating article. I took the time to read the paper before reading your critique. That was a very interesting exercise, because I don't often read papers in detail.

On reading the paper, I agreed with the main critique: huh, why are they studying memorability?

I also wondered why academic papers are still published online in 2-column printed format. That's so annoying to read on smartphone/tablet!

I think it’s unfair for you to say this is a problem with visualisation research: it’s a problem with all research. In all fields, there are great studies and there are bad studies. Nothing you list is unique to this field. For example:

i. Statistical unreliability

There's no shortage of academic papers with statistical problems caused by small samples. Here's one on fish oil, dismantled by Ben Goldacre. Incidentally, the study he refers to also used 33 subjects.

He also outlines a statistical anomaly so extreme that half of all neuroscience studies are statistically wrong.

Conclusion? Statistical problems are not unique to visualisation research.

ii. Methodological misdirection

How many of the 53 landmark studies in cancer had results that could be replicated? 6.

Yes, roughly 90% (47 of 53) of these leading cancer studies had results which could not be replicated. (source: this brilliant article "When Science Goes Wrong" from The Economist)

Conclusion? Methodological problems exist in all science.

iii. Logical fallacies

Logical fallacies are hardly unique to visualisation research either. This list of the top 20 logical fallacies shows how they are a problem in all science, not just visualisation research.

What I don't see is why you focus on one area when it's a general problem: good and bad papers are common in all fields. 

acotgreave #10
Furthermore - Pierre felt the post was aggressive. There were two places where I felt this was true:

"Borkin’s study illustrates a fundamental problem in many visualization research studies: the researchers do not understand what people actually do with data visualizations or how visualizations work perceptually and cognitively. Consequently, they don’t know what’s worth studying. Everyone who does research in the field of data visualization must spend some time actually working as a practitioner in the field. Relatively few do."

Whatever your intentions were, the phrase "researchers do not understand what people actually do with data visualisations" jumped out. I know you wrapped it in caveats, but it comes across as a generalisation. It was only on rereading this that I realised you weren't generalising. But still, the damage was done on the first read.

"the design of this new study was fatally flawed"
Fatally? Are you saying this work has absolutely no value? That is, in my opinion, aggressive. You say yourself where the value in this paper lies: "this study could at best serve as a prototype to suggest the need for a legitimate, properly designed, statistically valid scientific study."
sfew (Moderator) #11
Andy,

It is absolutely true that most of the flaws that I identified in this study occur in other fields of research as well. Why do I focus on the fact that these problems are occurring in infovis research? The answer is simple: I work in the field of infovis. There is another reason as well: these flaws occur more commonly in infovis research than in any other field that I know. Most other fields of study, such as those that you mentioned (e.g., medical research), have developed better peer review processes than infovis, despite the problems that persist. They have actually worked to develop research standards. So far, this effort has received no traction in infovis research.

Regarding the statements that you have identified as aggressive, reading what you wrote highlights the fact that "aggression" is not easy to measure objectively. I assure you that neither of the statements that you mentioned was an act of aggression, despite your perception. You admitted that you perceived the first statement--that "in many research studies...the researchers do not understand what people actually do with data visualizations or how visualizations work perceptually and cognitively"--as aggressive because you misread it. Therefore, your perception that this statement was aggressive was not my doing, but your error. My statement was a straightforward observation that is unfortunately true. The second statement, that "the design of this new study was fatally flawed," is another simple statement of fact, without exaggeration. Your assertion that I later admitted the study had merit is not accurate. The statement that you quoted appears in the "Statistical Unreliability" section of the article and says: "Because Borkin has not provided a convincing rationale for a sample of this small size, this study could at best serve as a prototype to suggest the need for a legitimate, properly designed, statistically valid scientific study." This statement seems clear. Saying that the study "could at best serve as a prototype" had Borkin provided a convincing rationale for her small sample is not the same as saying that it did.

__________________
Stephen Few
acotgreave #12
Hi Steve
The peer review process is something I simply don't know much about, so I can only respond to your comments and to what I've read. The article from The Economist astounded me with its withering review of the state of peer review. I'd be very interested in hearing other people's opinions and experiences on this.
Andy
eadar #13
"Why persecute Michelle Borkin in particular?"

Persecute: subject (someone) to hostility and ill-treatment, especially because of their race or political or religious beliefs; harass or annoy (someone) persistently.

See how that might be perceived as aggressive? (The fact that you don't *just* "persecute" her doesn't change the meaning.)
sfew (Moderator) #14
Eadar,

Your statement makes no sense. According to the definition that you've provided, there are no examples of "persecution" in my article.

Let's keep this discussion focused on the facts. Is my critique accurate? If so, let's discuss how we can fix this. If there are ways in which my critique errs, let me know and I will correct them. Don't try to dismiss the case that I'm making by claiming that I've said or done things that I haven't. The problems that I'm addressing deserve a rational discussion.

__________________
Stephen Few
acraft #15
While reading the research paper (before I lost interest), I wondered why it matters that a person remembers briefly looking at something a few minutes earlier. I mean, it's important to a marketing type, someone interested in branding and recognition and so on. But I saw no way in which it related to infovis, and the paper made no effort to connect the two beyond simply assuming they were connected.

Very nice critique, I think you nailed it.

Also:
Quote:
the researchers do not understand what people actually do with data visualizations or how visualizations work perceptually and cognitively

My immediate thought was that you could say the same for most BI vendors. ;)
Strange that others read it as aggressive - it's a completely accurate observation, evidenced by the research paper being critiqued (as well as many others).  Maybe some people are just really sensitive?