Discussion


sfew (Moderator)  #31
Jean-Daniel,

I have a question for you that I believe is important. Would it surprise or concern you to learn that many leading data visualization practitioners--people like me who work to help people solve real problems in the world using data visualization--do not feel welcome in the Infovis research community that supposedly informs and supports their efforts? This is a problem that concerns many people in the community and it is one that some folks have attempted to address, but with little success. Why do you suppose that is? What could you do to improve the situation?

__________________
Stephen Few
cagataydemiralp  #32


There is nothing to be puzzled about. We agree that visualization researchers should have a foundation of knowledge about perception and cognition. That foundation, however, falls far short of providing an operational understanding of how visualizations work perceptually and cognitively, beyond some high-level principles. It therefore remains a subject of active research.

jheer  #33

Stephen,

Scientific publication is an ongoing conversation, initially among reviewers and authors, and then in the literature and the larger community. Had I been involved in reviewing the "Beyond Memorability" paper, I would have raised the issues that I noted in my earlier post. The authors would have had a chance to respond, clarify experimental design decisions that lack rationales, and adjust research claims as needed. I would have benefited from the reviews of other researchers who had read and considered the paper, and would have discussed the work with them. So a fair amount of additional information and interaction would have been at my disposal, and this context is important for making a judgment.

What I can say unequivocally is that I would have raised the issues I noted above (which only partially intersect with yours) and they could have shaped the reviewing process and the resulting paper. Of course, such conversations and clarifications can and should continue post-publication. I would love to hear additional perspectives, including those of the authors and others close to the work. However, given their earlier treatment I would not expect that to happen here.

This leads to your second and third questions for me. For example, given what I just wrote above, should we seek more transparency in the reviewing process? Our research group at the University of Washington held a discussion this week on how we might help accelerate development of the field. We considered a number of options and are in the process of writing a longer piece on the topic. Once that is posted I will share a link here and invite others to comment.

Regards,
Jeffrey Heer

sfew (Moderator)  #34
Jeff,

I appreciate your response, even though your answer to my first question was a safe, political one. Fair enough. I am very interested in reading the suggestions that your group at UW comes up with for improving the field.

I have one more point to make, which is in response to your statement: "I would love to hear additional perspectives, including those of the authors and others close to the work. However, given their earlier treatment I would not expect that to happen." I have not mistreated these authors. Until the community learns to welcome fair assessments of its work without crying foul, it will not progress. In my world, we are held to account for our errors, as it should be. Even though the failures of this particular paper are common in infovis research, they are failures nonetheless for which the authors are responsible. When someone justly criticizes my work, such as one of my books or courses, I do not complain that I was singled out. I acknowledge the error and do what I can to correct it. That's what these authors should do as well. If they did, they would move beyond their failure and become an example of what scientists can and should do when they make errors.

One thing that you didn't address in your point-by-point response to my critique was anything that I included in the final section, which is where I addressed the fallacies of the study's claims. This is where the authors are most at fault and should be held accountable. This is where our pool of collected knowledge is polluted. Not just the authors of this particular paper, but the authors of all infovis papers that exhibit similar flaws should be held accountable. I'm not talking about punishment; I'm talking about correcting errors when they occur, which is the process of science. While it might seem unfair that this one team of authors alone has been exposed in this instance when others remain unscathed and invisible, accurately critiquing their work is not ill-treatment. Let's not forget that Michelle Borkin and some of these authors produced a prior study of memorability that exhibited many of the same flaws, which I revealed at the time. Because the infovis research community remained silent, however, these authors chose to do similarly flawed research again. This isn't going to end without shaking things up. We are adults, not children who should be coddled. We are professionals who are obligated to do good work.

__________________
Stephen Few
bella_gotie  #35
Correct me if I'm wrong: are we here to learn from one another and develop the topic, or to tell the author how he should behave? There is no need to explain to me why you made a mistake; just fix it and move forward.
benbendc  #36

I am a fan of Stephen Few’s and have reviewed his books positively. While I share his goal of promoting research quality, I was raised in the school of lighting candles rather than cursing the darkness. I hope Stephen will continue his careful reading of InfoVis research and apply his principled critiques to celebrating excellence while making constructive suggestions, as Jean-Daniel Fekete requests. However, Stephen has long understood the stronger memorability of overly broad criticisms ("usually mediocre, often severely flawed") compared with the quieter tone of positive statements.

 I think Borkin et al. have conducted an interesting study, but I agree with Jeff Heer that the generalizability is in doubt.  Memorability is important in some arenas, e.g. marketing, and it might have relevance to InfoVis goals of promoting policy shifts and changing minds. However, the 10-second viewing only captures perceptual experiences, missing the important and deeper sense-making goal of most InfoVis research.

 I suggest we all consider this a learning experience, and direct our energy to helping each other do great work by teaching students, mentoring junior colleagues, and reviewing papers constructively.   We’re all imperfect and all have a lot to learn.  Stephen should know that criticisms are most effective in an atmosphere of well-established trust and emotional safety. 

  -- Ben Shneiderman


__________________
benbendc
sfew (Moderator)  #37

Ben,

We are all lighting candles. In my work, I get to light thousands of candles each year out in the world where they’re most needed. I suspect that you and I do not differ in our diligence to light candles, but in our understanding of what constitutes darkness. Bad science spreads darkness. My critique of this research paper was an act of candle lighting. I have no desire to snuff out anyone’s light, but I do wish to teach people the difference between darkness and light.

You have worked hard to promote the methods of science in the realm of data visualization research for many years, so I’m puzzled by your unwillingness to light candles to expose bad science. Let me illustrate the difference in the way that you and I approach the problem. At the last VisWeek Conference that I attended, in 2011, a paper was presented titled “Benefitting InfoVis with Visual Difficulties,” which was a mess of bad science—one that was poised to do great harm if it were allowed to exert influence outside of academia. You and I both recognized its flaws. During a break at the conference, you and I were chatting when the paper’s primary author walked up to engage you in conversation. I stood nearby, biting my tongue, as you regaled her with praise for the paper. You had a chance to light a candle by sharing your concerns with her, but you didn’t. I was stunned, and actually despondent in the aftermath. That paper was given an award by VisWeek and, to my horror, it went on to receive media attention, as provocative studies often do. It was left to me to light a torch to battle the darkness by publishing a critique of the paper.

This is the position I am placed in when the paper review system that Jean-Daniel calls “robust” fails to help authors of bad science learn from their mistakes. Yes, Ben, “criticisms are most effective in an atmosphere of well-established trust and emotional safety.” Had the system worked as it should, a candle could have been used rather than a torch. Our goal is to spread light. Jesus would not have had to fashion a whip to drive the money-changers from the temple in Jerusalem had the priests denied them access in the first place. I take no pleasure in doing stridently what others could do more gently but too often don’t.


__________________
Stephen Few
kimrees  #38
Quote:
Originally Posted by sfew
It's noteworthy that the people in your list of those who actively critique poor visualization practices are people like you and I who work outside of the academic infovis research community.


That's a good distinction, Stephen. I don't really follow the research community because their focus is rarely in touch with the realities of practice. 
sfew (Moderator)  #39
Kim,

Your perspective needs to be heard. Thanks for sharing it. The infovis research community does not fully appreciate how its work could be used to do good in the world where you and I work. Rather than hearing us when we express our concerns, they are turning a deaf ear to us. They are failing us. Increasingly, they are alienating us.

Leading up to this year's VisWeek Conference, I was asked by a group of professors to participate on a panel about the need for infovis research to address the interests of business. I explained to the professor who contacted me why I don't participate in the conference, and further explained why their frequent attempts over the years to connect with the world of business have always failed. I took time to suggest steps that they could take to improve the situation, but never heard back. The biggest reason that these attempts have failed, of course, is that the folks making them are not in touch with the typical needs of business. Their isolation is self-imposed and perpetuated by systemic problems in the conference's process. For the work of the infovis research community to become relevant to the world, its members must spend some time with us.

I'm still waiting for Jean-Daniel to address this issue. The questions that I have posed for him about the alienation that people like you and I feel are sincere. He and others in the conference's leadership can make infovis research more relevant, but not by slamming the door in our faces when we express concerns; not by demonizing us when we demand better of them.

__________________
Stephen Few
dragice  #40
Stephen,

I accept your skepticism, but I don't consider that you have provided a logical refutation of my argument. Your counter-argument is a tautology ("a study’s conclusions may not be accepted if the sample is too small") followed by a gratuitous assertion that is just the negation of what I concluded ("which would always be the case with a sample of two").

Pierre
sfew (Moderator)  #41
Pierre,

I am asking you to provide a real example of a sample of two that we could rely on for generalized knowledge. You presented a contrived scenario and admitted that you could not come up with one that is real. Until you do, I have nothing concrete to refute. To be honest, I'm not qualified to debate this in the abstract. Your knowledge of statistical power is no doubt greater than mine. I am well aware, however, of the problems that have plagued research in the social sciences regarding samples that are too small. Efforts have been made for many years now, especially in psychology, to address these concerns. I have not, however, seen an effort to address them in infovis research. That is the crux of my concern.

When we began this discussion, you said that you had not yet read the research paper that I critiqued. It would be useful if you did and responded more specifically about the sufficiency of the sample that was used in that study as justification for the claims that were made.

__________________
Stephen Few
dragice  #42
I have no real example of a sample of two that we could rely on in an actual infovis study, and I believe it is very unlikely that a sample of two has ever been or will ever be sufficient in studies of this sort. I think we can agree on that. 

My example of N=2 was only part of a logical argument in which I tried to explain why we can't reject a study based on its sample size alone. Put differently, claims such as "we shouldn't trust any study with N=33" are dubious unless we have a very good idea of the effect sizes we can expect in the studies we have in mind and have done some preliminary power-analysis calculations. If not, we should probably refrain from making such assertions. Conversely, a sample size of N=2000 may not be enough in some studies. I wonder if we can also agree on that? You do acknowledge in your original paper that the proper sample size depends on many factors.
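
As a rough illustration of this kind of preliminary power calculation (a minimal sketch in Python using hypothetical effect sizes and a conventional 80% power target, not numbers taken from any study discussed here), one could ask how many participants per group a simple two-sample comparison would need:

    # Sketch of a preliminary power analysis (hypothetical numbers only):
    # how many participants per group does a two-sample t-test need to detect
    # a given standardized effect size with 80% power at alpha = 0.05?
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()
    for d in (0.2, 0.5, 0.8):  # Cohen's d: small, medium, large effects
        n = analysis.solve_power(effect_size=d, alpha=0.05, power=0.8,
                                 alternative='two-sided')
        print(f"d = {d}: about {n:.0f} participants per group")

With these illustrative numbers, roughly 26 participants per group suffice for a large effect (d = 0.8), while nearly 400 are needed for a small one (d = 0.2); a bare N, on its own, tells us very little.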

I am also not a big fan of dichotomizing sample sizes as either "sufficient" or "not sufficient", and results as either "reliable" or "unreliable", but I suppose this helps the discussion.
sfew (Moderator)  #43
Pierre,

When you put words between quotation marks, they should reflect what was actually said. Nowhere did I say "we shouldn't trust any study with N=33." If you read what I actually said, in context, I doubt that we disagree.

__________________
Stephen Few
sfew (Moderator)  #44

Much ado has been made over the opening sentence of my article: “Research in the field of information visualization is usually mediocre, often severely flawed, and only occasionally well done.” This sentence was intentionally provocative, but accurate. Review the quality of the papers that were accepted by the Infovis Conference in this or any year, keeping in mind that only the “best” of a much larger collection were accepted. Anyone who understands information visualization, is acquainted with the scientific method, and is capable of being objective would come to the same conclusion as I have. If the accepted papers are the best, then most of the research is indeed mediocre (“of middling quality, neither bad nor good, average”). Given the prevalence of flaws in the accepted papers, I was perhaps guilty of understatement when I said that the research is “often severely flawed.” Fortunately, in any given year, there are several good papers, but compared to the total, it is quite accurate to say that the work is “only occasionally well done.” This is a bitter pill to swallow, but it was offered in the context of an antidote.

What I said about the quality of infovis research could be accurately said about almost every field of study. My assessment shouldn't be controversial in the least. So, why then have leaders in the infovis research community responded with such hostility and denial? Why is the community so defensive? I could speculate, but I won't. I'll leave that to you.


__________________
Stephen Few
dragice  #45
I didn't mean to quote you there; I agree it was misleading. I also agree that we should probably resume the discussion once I've read Borkin et al.'s paper and looked more carefully at what you said.

Many problems have plagued research in the social sciences, not only small samples. In my opinion, small samples are a distraction that tends to draw people's attention away from more serious issues.

Pierre
