

Posts: 853
Reply with quote  #16 
PLEASE help me keep this discussion focused on the content of my critique and the merits of the research paper, not on irrational and diversionary claims that I am persecuting people. I deleted an additional comment by eadar (an anonymous detractor from the University of Michigan where one of the paper's authors is located) because it was just plain idiotic and therefore added nothing to the discussion. It's extremely difficult to keep Web-based discussions on track. I'll continue to vet the content as best I can to keep it relevant and useful. I welcome disagreement. I don't welcome irrationality, irrelevance, or abuse.

(By the way, I'm confident that eadar is not a pseudonym used by Daniel Borkin of the University of Michigan, but is more likely someone who is upset that I have negatively critiqued the work of his university-mate. Despite the flaws of his paper, I sincerely doubt that Daniel would ever make the absurd comments that eadar made.)

Stephen Few

Posts: 200
Reply with quote  #17 

I have never understood the purpose of focusing on memorability in data visualization.

We all know how to make things stand out visually, or how to make them recognizable. I cannot see such tactics as anything other than irrelevant to most areas of data visualization.

I have always advocated this approach:  visualize the data in the most appropriate way possible; make it "as simple as possible but no simpler".
Dress up the page around the chart(s) in whatever way you want. Draw the user in with illustrations, with distinct and memorable design, text, and imagery, and let the chart speak for itself.

In that light, I feel your critique is relevant and important.

In light of the actual content of the study, and the lack of supportable conclusions, I feel your critique is spot on.


As far as 'aggression' goes, it is an unfortunate fact that speaking frankly and directly is more often than not taken to be aggressive or mean these days. There seems to be a very big split between those who feel this way and those of us who welcome frank, direct communication, neither side being able to grasp the other's position.


Posts: 18
Reply with quote  #18 
"As far as 'aggression' goes, it is an unfortunate fact that speaking frankly and directly is more often than not taken to be aggressive or mean these days. There seems to be a very big split between those who feel this way and those of us who welcome frank, direct communication, neither side being able to grasp the other's position."

I suspect if we were all in a pub talking about this subject, we'd realise we actually are all in agreement, and where we weren't, the disagreements would be jovial and well-taken.

I love robust conversation and acknowledge it's hard to read someone else's tone in the way they intended it. Also, it's hard to write words that come across with the tone I intend. 

Posts: 7
Reply with quote  #19 

I am also very much in favor of frank, direct communication. I like the title of Few's paper, BTW. But I'd also rather have the discussions focus more on the content than on the form.

"I'm inclined to doubt your statement that a sample of two is sometimes sufficient."

It's interesting that few scientists believe this, and yet in their daily lives they are perfectly OK with making judgments based on sample sizes of two or even one. When you get terrible service at a restaurant, you may give it a second chance, but you likely won't feel the need to collect 50 more samples to be sure that your judgment is fair.

Here is a simple (perhaps simplistic) example involving a lab study. Suppose an investigator, John, wants to establish the existence of a strictly positive effect using conventional hypothesis testing (a t-test with alpha = .05). Also suppose John has strong reasons to believe that his metric of interest is normally distributed, and has good reasons to think that the effect size is enormous (i.e., a Cohen's d of about 10 -- bear with me).

In that case, it would be sufficient and perfectly justified for John to run a study with a sample size of N = 2.

Suppose John obtains p < .05 (which is, as we will see, a likely outcome). A peer reviewer may object because:

1) N = 2 does not provide enough evidence to reject the null hypothesis -- with such a small sample, the risk of committing a Type I error must be quite high.

2) With N = 2, the study must have been dramatically under-powered -- either John was very lucky or did something fishy.

Both objections are wrong. Objection 1) is wrong because if the true effect size is zero, the probability of John getting p < .05 is exactly 0.05, as it should be. This is how the t-test has been constructed, and it is true irrespective of N (N has to be >= 2 as the t-test is undefined for N < 2). Objection 2) is wrong because if the investigator's initial guess about the effect size was correct (d = 10), then the likelihood of finding p < .05 is close to 0.8, a pretty good statistical power (see de Winter (2013) for a Monte-Carlo simulation).
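Both of these statistical claims are easy to check numerically. Below is a minimal Monte Carlo sketch (my own illustration, not de Winter's simulation code), assuming a two-sided one-sample t-test with N = 2 and unit-variance normal data. With df = 1, the t distribution is a standard Cauchy, so the p-value has a closed form and no statistics library is needed:

```python
import math
import random
import statistics

def one_sample_t_pvalue(xs):
    """Two-sided one-sample t-test of H0: mu = 0.
    With N = 2 the test has df = 1, and a t distribution with one
    degree of freedom is a standard Cauchy, so its CDF is closed-form."""
    n = len(xs)
    t = statistics.mean(xs) / (statistics.stdev(xs) / math.sqrt(n))
    cauchy_cdf = 0.5 + math.atan(abs(t)) / math.pi  # Cauchy CDF at |t|
    return 2.0 * (1.0 - cauchy_cdf)

def rejection_rate(true_mean, n_sims=100_000, alpha=0.05, seed=1):
    """Fraction of simulated N = 2 studies (sd = 1) that reject H0 at alpha."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        xs = [rng.gauss(true_mean, 1.0) for _ in range(2)]
        if one_sample_t_pvalue(xs) < alpha:
            hits += 1
    return hits / n_sims
```

Under the null (true_mean = 0), the rejection rate comes out at about 0.05, regardless of N, confirming the reply to objection 1. With true_mean = 10 (d = 10), it lands in the rough neighborhood of 0.7 to 0.8; the exact power figure depends on details such as one- vs. two-sample and one- vs. two-tailed testing, which is why simulations like de Winter's can report somewhat different numbers.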

If we endorse the reasoning behind frequentist statistics and NHST, John has perfectly valid evidence. There are certainly many reasons to doubt the results and criticize the study, but these have nothing to do with N, as the exact same arguments could be used to criticize the study had N been much larger.

The example is arguably contrived -- an effect size of d = 10 is extremely uncommon in studies involving human subjects. I must admit that I couldn't come up with a realistic HCI or infovis example, but de Winter (2013) does mention occurrences of d = 6 to 10 in behavioral research. It is also fair to wonder why one would want to test the silly nil null hypothesis with such an enormous effect size. I can imagine this could happen as a response to a skeptical reviewer who insists on having the "scientific method" applied rigorously.

I don't claim that my example is realistic or that we should use N = 2. We should strive to use large sample sizes. My example is only to illustrate that there is no valid statistical argument to reject a study's conclusions based on sample size alone, without considering at least the question investigated, the methods used, and our reasonable assumptions about data. Sample size is very important, the larger the better, but I am growing tired of claims (especially from reviewers) that a study should have a sample size of at least 12, 33, 200, or whatever the magic number may be.

The intuition that a statistically significant result with large N is more impressive or more trustworthy than a statistically significant result with small N stems from a misunderstanding of p-values (see discussion in Bakan (1966), pp.429-430). It could also be the case that scientists don't like small-sample studies because small samples could be diagnostic of other issues (e.g., lack of rigour, of time/funding, use of a small convenience sample, etc.). This seems sensible, but again, those are not statistical arguments.

Pierre Dragicevic

Posts: 4
Reply with quote  #20 

[The comments below (black text) were posted by Jeff Heer. Because they are extensive, I (Stephen Few) have chosen to integrate my responses to Jeff directly into the body of his comments. My responses appear in brackets and in red, following my initials: SF.]

Dear Stephen,

I appreciate your passion for improving the relevance and quality of Information Visualization research. I agree with a number of your critiques, but also think that others are overblown or less severe than your post implies. For specific details regarding the “Beyond Memorability” paper, please see the end of this post for my point-by-point review of the methodological issues you raise. Here, I would like to respond primarily to your statements regarding the Information Visualization research community, while sidestepping some of the valid but much more general concerns endemic to all of scientific publishing.

There are two themes you raise that I would also emphasize. First, the obvious: research claims should always be worded carefully, with a clear argument grounded in evidence. As both a papers co-chair (2013-14) and reviewer at InfoVis, I've seen many healthy exchanges where reviewers have required authors to revise inaccurate or over-reaching claims. My papers co-chairs and I urged the importance of this to our program committee and reviewers. However, I've also witnessed enough shortcomings to think stronger guidelines might help. We are always seeking ways to improve the process and foster more thorough, constructive critique.

Second, one’s choice of research agenda matters, and we should strive to tackle the most important problems. Broadly speaking, I agree that the community would benefit from more work deeply grounded in the needs and experience of working analysts, whether in academia or industry. In my estimation the community possesses more expertise in this matter than you give it credit for. Nonetheless, I concur that more is needed.

That said, the last thing I want is a monoculture hostile to differing viewpoints or prohibitive of research without immediate practical applicability. A focus only on pragmatic results may be stifling to innovation on the whole. Much of science -- and academic research more generally -- is incremental and evolutionary. On the one hand, advances aren't always immediately recognized as such, and may require time to develop and prove their worth across multiple publications / projects. On the other hand, even works with identifiable (non-fatal) flaws may have merits that warrant publication and ultimately advance the field through accretive work. At times I disagree with and challenge certain research endeavors, but at the same time I respect their right to exist and develop within our (imperfect) system of peer review. On the whole, I am thankful to be contributing to an exciting and impactful field.

[SF: I agree with everything you've said in the two paragraphs above.]

You also claim that the InfoVis community is complacent. In my experience this is simply not true. In the years I've been involved with the field (beginning 2004), I've observed it steadily mature. There remains much room for improvement, but I believe we are on a trajectory of more impactful projects and improved research methods. The rate of change may be slower than one would like, but the derivative is positive. In my two years as papers co-chair, we worked tremendously hard to improve the quality of the conference. We intervened both for papers with high review scores but clear failings missed by reviewers, and papers with worthy ideas that received lower scores due to unreasonably harsh treatment. [SF: I'm encouraged to hear about your efforts as a papers' co-chair. If your efforts were being matched throughout the conference, it is unlikely that I would have felt compelled to write my article.]

In addition to the reviewing process, the types of critique you yearn for do occur; certainly among my students and collaborators, but also at the microphone and in the hallways of the conference. Had you attended InfoVis 2015, you would have seen that there was indeed debate of the “Beyond Memorability” paper, including public questions well-aligned with some of your critiques. Still, such “internal” discussion is rarely transcribed and made public. I agree that our community could benefit from more accessible, constructive debates. [SF: Once again, I am encouraged to hear this, but I also heard this back when I was participating in the conference, yet I observed no progress at that time in the community’s attitude toward constructive critique. Because I have not attended the conference in the last few years but have only read the papers, I am basing my observations on the papers alone and on the responses that I get when I critique them. I will gladly admit, however, that your comments here are giving me some hope.]

This last point brings me to the topic of your chosen style of writing. I know that your primary concern is the content and quality of research work, not singling out particular individuals. So why do you consistently refer to "Borkin" alone, rather than using more accurate phrases such as "Borkin et al.", "the researchers", and "the authors"? Your writing gives the false impression that Prof. Borkin carries full responsibility for the work, eliding her list of co-authors. Changes in phrasing could mitigate the appearance of personally-directed comments and put the spotlight more squarely on the work itself. You conclude by writing: "I suspect that her studies of memorability were dysfunctional because she lacked the experience and training required to do this type of research... I’m concerned that she will teach [her students] to produce pseudo-science." Why even include these remarks? These comments are speculative and unambiguously target a single individual. In my opinion, they are unnecessary and create an environment less conducive to productive debate. [SF: Why include this remark? This is the beating heart of my critique, not a gratuitous comment. My statement that professors who produce research papers such as this one will encourage their students to produce pseudo-science is not speculative, assuming that you accept my premise that this paper qualifies as pseudo-science. Michelle will teach her students to do research in the way that she does it. Your own work is a prime example of this. Your good research habits can be seen in your students’ published work.]

I know you wish to avoid this topic. But I am more concerned that people new to the field (especially women) will read this critique, see the pointed references to Borkin alone, and see it go largely unchallenged. Intended or not, our actions serve as examples of what is and is not acceptable conduct in the community. [SF: I clearly stated more than once in the article that Michelle's work is but an example of a systemic problem. I have referred to this paper primarily as the work of Michelle Borkin because that is my understanding. She was the primary author. This was a follow-up to her previous work. She has been cited in the media as the primary author. Although it is true that some research projects with multiple authors are done as an evenly distributed collaboration, this is often not the case, as you know. A paper with multiple authors is often almost entirely the work of an individual who was merely assisted by others. Nonetheless, if I have attributed primarily to Michelle work that was more evenly distributed across the larger team of authors, I sincerely apologize. If Michelle or the other authors clarify that this was not primarily her work, I will do what I can to correct the impression that I’ve made. In the meantime, I’ll acknowledge another point as well: given the fact that I was using this paper as an example of a larger, systemic problem, it would have been better had I referred to the people responsible for it as “the authors,” to depersonalize it. I am in the habit of personalizing books, articles, and research papers by referring to their authors by name rather than in an impersonal way. I believe that this is usually a good practice, but in this case I should have taken an impersonal approach.]


Jeffrey Heer


Methodological Issues

Issue 0. Participant pool.

You claim that, due to the number and background of study participants, “no reliable findings can be claimed.” I don’t think this claim holds up to scrutiny. I would reiterate the points that Pierre Dragicevic has already raised. For what it is worth, many of my own studies which you have praised -- including our Voyager paper at InfoVis’15 and our graphical perception papers at CHI’09, CHI’10 and InfoVis’10 -- use smaller sample sizes than the “Beyond Memorability” paper. (You are of course free to revise your opinion.) Sample sizes in this rough range are not uncommon in many areas of human-subjects research (including psychology, human-computer interaction, etc.), particularly those in which we expect to see a pronounced effect.

There is an important distinction between internal validity (does the study lead to reliable inferences about the experimental context?) and external validity (do the findings generalize to other situations or populations?). Limited participant pools can certainly limit generalizability. For example, the vast majority of human-subjects research does not cover the world’s diversity of people, educational backgrounds, socio-economic conditions, and so on. That does not imply that such research is “pseudo-science” or unworthy of publication, though it does require appropriate care regarding the claims made.

[SF: I agree with your comments in the two paragraphs above. Unfortunately, unlike your work, this paper made general claims based on a sample that was of insufficient size. That is the problem. I’m fine with small samples that are used as a prototype for later study or later as part of a larger meta-study.]

You also write: "Why are visualization research studies still plagued with these small samples when it is well known that they are inadequate? It is for one reason only: small samples are convenient." I do not dispute that convenience exerts considerable influence. However, there are other reasons. In many instances a sample size on the order of 2-3 dozen people is sufficient to demonstrate an effect. [SF: As I stated in the article, small samples are sometimes valid in particular areas of study, but usually not. My point is that we have no reason to trust a small sample in this case and the authors made no attempt to justify it. Instead, they made sweeping claims based on a small sample.]

(As an aside, I think the true heart of your critique may actually concern construct validity.)

Issue 1. Random selection of visualizations.

The authors show each participant a selection of 100 visualizations. The paper states “participants were each shown about 100 visualizations, randomly selected from the 393 labeled target visualizations”. This leads me to think each participant saw a unique random draw of 100 visualizations. However, if you read the online supplementary material (http://vcg.seas.harvard.edu/files/pfister/files/infovis_submission251-supplementalmaterial-camera.pdf), you’ll find the following paragraph:

“Each experiment covered ~25% of the target visualizations (98-100 target images), thus each individual could participate in up to four different versions of the experiment (on separate days). Participants who returned to participate would never see the same images as in their previous sessions. On average each participant completed 2 experiment sessions with 9 participants completing all 4 experiment sessions. The selection and permutation of the visualizations were randomized in each case.”

This paragraph leads me to a different interpretation, in which blocks of participants saw the same draw of randomly selected visualizations within each of four experimental deployments on Mechanical Turk. It also describes subjects participating in multiple experiment runs. This description is not commensurate with what is stated in the paper body, so on this item I am left puzzled. It should be clarified. [SF: Yes, this is not what the paper describes.]

Issue 2. 10 seconds reading time.

The critique here is that many (most?) visualizations require more than 10 seconds of exposure to facilitate comprehension. An exposure of 10 seconds may seem artificial, diminishing ecological validity. I suspect the decision here stems from reasons not uncommon in perceptual psychology. Short exposure times may increase the difficulty of the task in order to more stringently test visual recognition and recall. This decision does not undermine the internal validity of the study, but does raise questions around generalizability. [SF: This decision does undermine the internal validity of the study if it was designed to produce the generalized conclusions that the authors claimed.]

Issue 3. Instructions are not reported.

I was unable to find the instructions for the encoding phase in either the paper or the online supplementary material. The instructions for the recall phase, on the other hand, are provided in the paper: “Describe the visualization in as much detail as possible.” Ideally, all instructions necessary to replicate the study should be made available. [SF: I agree.]

Issue 4. How are the 100 filler visualizations chosen? Why provide feedback?

I agree that the choice of filler visualizations could affect the results of the recognition phase. As you note, very similar looking visualizations could increase the difficulty of correct recall. The authors mention that these charts were chosen so as to “match the exact distribution of visualization types and original sources as the target visualizations”. While this does not guarantee visual similarity, I don’t think it is accurate to state “no effort was made to control this influence”. Regarding the decision to provide feedback to subjects during this phase, I could not find a rationale for why subjects were told their responses were correct or incorrect. [SF: If the authors made an effort to control the influence that concerns me, it was not mentioned. I will gladly rescind this criticism if the authors show us that they addressed this concern in an effective manner.]

Issue 5. Blurring.

How do we know that the blurring prevented extraction of information from the visualization? It’s a good question. You write: "I would have liked to see an example of a blurred visualization, but none were shown in the paper." In fact, one example is indeed shown in Figure 1. The supplementary materials might have included more blurred examples for review. That said, the visualizations used by the authors are available online and the blurring procedure is sufficiently described to reproduce. [SF: Thanks for pointing out that a small version of a blurred visualization did appear in Figure 1. In looking at that now, I remain concerned about this approach.]

Issue 6. Subjects only described visualizations that they correctly identified.

The paper and supplementary material do not provide a rationale for this study design decision. It should have been included. [SF: Agreed.]

Issue 7. Partitioning of visualizations by "experts".

The grouping issues you identify do not appear to affect the results. The subsequent analysis is performed at a categorical level. Whether all pie slices are contained in a single annotation or each slice is within a unique annotation does not change the resulting fixation counts by item category. Similarly for labels, assuming fixations on the white space between grouped labels did not occur and get counted incorrectly. What primarily matters here is if the experts (whose identity and background is not well described) all applied the provided taxonomy in a consistent fashion. The authors have made this data freely available online. [SF: I get your point and acknowledge my error. My concern about individual units of perception not being identified is not relevant to this experiment. However, I do question the usefulness of the categories that were assigned.]

Issue 8. The study reports ancillary findings.

The authors include observations beyond their primary research questions. Why not share this data? I don’t see why this undermines the work. [SF: My concern relates to a problem that sometimes occurs in experimental research design. When experimenters don’t focus on a specific set of observations related to specific hypotheses, but instead collect a bunch of data using a shotgun approach, they are tempted to cherry-pick the results that they wish to report and, in effect, design the study after the fact. Researchers should design their studies carefully in advance, declare what they are looking for, and then report these specific findings. Whether this particular study suffers from this problem is not clear.]


Posts: 1
Reply with quote  #21 

As Chair of the Information Visualization Conference (InfoVis) Steering Committee, I am proud of the work that the InfoVis community has done. As with any technical community, we are confident it will be judged in the long term on the basis of the best that it has contributed.

After some discussion with other steering committee members, I am responding in this reply to the tone and nature of the attacks on individuals and on the conference itself. Other members of the InfoVis community, including other members of the steering committee, may well choose to join the discussion of the specific scientific points under debate according to their individual points of view; Jeffrey Heer has already done so.

As much as we invite discussions and critique, we protest the tone of Mr. Few’s remarks about the work of Borkin, Bylinskii, Kim, Bainbridge, Borkin, Pfister and Oliva. It is particularly unfortunate that Dr. Michelle Borkin was singled out, apparently only because she was the first author and presenter of the submission. It should be remembered that a paper represents the work of the entire team with complementary skills and knowledge.  We recommend that in discussing the work of a group of 8 co-authors, it is referred to as “Borkin et al.” rather than “Borkin”. This avoids the impression and perhaps assumption that an individual is being criticised and keeps any critiquing within the  realm of intellectual debate. This nuance is particularly important given that often first authors of papers are the most junior people of the team, close to the start of their careers.

We reject Mr. Few's remarks about the InfoVis conference being "Pseudo-Science", and the multiple unsubstantiated and gratuitous assertions of the same kind scattered throughout his article ("Research ... usually mediocre, often severely flawed, and only occasionally well done"; "research often errs most egregiously"; "researchers do not understand what people actually do with data visualizations"; "Most … research is done by people who have not been trained in the scientific method"; "The ... community is complacent"). These remarks bring nothing to the debate, but trigger irritation detrimental to the scientific conversation.

Factually, the review process at InfoVis is rigorous and involves critique, debate, and disagreement. The kind of scientific questions raised in Mr. Few’s blog post are exactly the kinds of debate that occur in the reviewing process. Each year, more than 900 detailed reviews (4 per paper), plus discussion postings, are read by the papers chairs when deciding which of the papers to accept. The acceptance rate is about 25% and papers are improved through this critiquing and feedback. All of this voluntary - and much appreciated - work is undertaken by a carefully chosen group of academics and practitioners. We discuss the vetting process in our governance FAQ, posted publicly at http://ieeevis.org/attachments/InfoVis_SC_Policies_FAQ.pdf. The papers are frequently complex and reviewers sometimes disagree about the pros, cons, and quality of papers. This level of disagreement between reasonable and well-informed people is not particular to InfoVis. Debate continues at the conference - in session and between sessions - and we invite Mr. Few and everyone interested to attend. As for the paper in question, it was discussed at length after its presentation, in particular by myself - not in a complacent way, but still in a courteous way.

Finally, we would like to remind everyone that a conference is a conversation, and we do invite discussions and critiques. Infovis is an interdisciplinary area, so it offers a wonderful opportunity for specialists from diverse areas to help each other. We feel more can be accomplished in a spirit of good will.

Jean-Daniel Fekete

Chair, InfoVis Steering Committee


Posts: 2
Reply with quote  #22 

Hi Stephen,

I agree with your points about the paper (and have a few more complaints of my own).

Regarding your point that "The information visualization community is complacent," I think you might be exaggerating that a bit. Twitter is often abuzz with critiques of visualizations. Also, there's the great community that Jon Schwabish is fostering over at http://helpmeviz.com/. There's also http://viz.wtf/ which is more confection than practical, but can be informative. 

Additionally, Andy Kirk, Robert Kosara, and Cole Nussbaumer regularly share best practices which often have a dose of "what not to do." FlowingData and other blogs share flawed examples from time to time as well.

Personally, I've done a bit of critiquing when I felt particularly appalled by something. We've also received our fair share of critiques and I welcome them. However, I don't feel it's my duty to be a critic. It's very time-consuming, and I already have plenty to occupy my time; calling out people on their shortcomings isn't high on my time-spending list.

We take A LOT of time here at Periscopic making sure we get things right (or appropriate within the constraints). We spend A LOT of time educating ourselves internally. Hopefully, the time we spend on it will translate to leading by example. Do you really feel that it's our duty as practitioners to be critics as well?




Posts: 3
Reply with quote  #23 

While style matters, I find the criticism healthy for the larger visualization community.
Most of the problems pointed out are consequences of the field being young, without much established foundation or theory.

By the way, you cannot really blame visualization researchers for not “understanding how visualizations work perceptually and cognitively.” That is precisely what they are researching: how visualizations work perceptually and cognitively.


Posts: 51
Reply with quote  #24 


We reject Mr. Few's ... gratuitous assertions ... (...researchers do not understand what people actually do with data visualizations...)

What else would I believe, with all this research dedicated to understanding what makes a visualization memorable, a quality that users don't actually need in data visualization?  That doesn't indicate understanding to me.


By the way, you cannot really blame visualization researchers for not “understanding how visualizations work perceptually and cognitively.” That is precisely what they are researching: how visualizations work perceptually and cognitively.

Don't we already understand a LOT about how visualizations work perceptually and cognitively - hasn't that research been done already?  Data visualizations are no mystery - sure we can always improve our understanding, but let's build on existing research rather than ignore it or forget it.


Posts: 853
Reply with quote  #25 
You must have thought I was nuts when I posted a comment yesterday expressing my frustration that the discussion had petered out without hearing from anyone in the infovis research community, when in fact a few comments from well-known and respected members of the community had already been posted. The reason is that, despite being visible to you, they were not visible to me, and I don't know why. They are visible to me now, and I appreciate them. I will respond to them soon.
Stephen Few

Posts: 853
Reply with quote  #26 

I'm very grateful for your thoughtful and extensive comments. To make it easy for readers to connect my responses to your comments, I've integrated them directly into the body of your comments. You'll find my responses in brackets and in red, following my initials: SF.

You've provided the level of discourse that I was hoping for from members of the infovis research community. I hope that others follow your example. I also hope that others will follow your example by doing research that matches yours in quality and usefulness.

You have already been so generous with your time that I hate to impose on you further, but I feel that your answers to three additional questions would be useful:
  1. If you had reviewed the "Beyond Memorability" paper, would you have recommended it for publication in its current form?
  2. What do you suggest that the infovis research community should do to prevent the kinds of flaws that you and I have both identified in this paper?
  3. What are the primary obstacles that currently prevent the infovis research community from progressing as it should?

Stephen Few

Posts: 853
Reply with quote  #27 


I appreciate your willingness to respond, but not the closed nature of your response. As the Chair of the InfoVis Steering Committee, you apparently see it as your role to defend the conference, but the community needs leaders who do more than defend it; it needs leaders who also acknowledge its faults and address them with vision. Your statement, “We reject Mr. Few’s remarks about the InfoVis conference being ‘Pseudo-Science,’” is inaccurate. I did not call the VisWeek Conference pseudo-science. I branded a particular paper as pseudo-science and expressed the opinion that it illustrates many problems that are common in infovis research. If you wish to defend the paper, or other papers that exhibit its flaws, you should do so not with sweeping, unsupported declarations, but with reason and evidence, as Jeff Heer did. I expressed the opinion that infovis research is “usually mediocre, often severely flawed, and only occasionally well done.” You responded that my remarks “bring nothing to the debate, but trigger irritation detrimental to the scientific conversation.” This is not the response of a scientist. It is more akin to the response of a politician.

Your assertion that the “review process at InfoVis is rigorous” is not universally shared, even among those who serve in the conference’s leadership. I have participated in the VisWeek review process several times and know that it is not rigorous as a matter of policy and standardized practice, even though some participants, such as Jeff Heer, do indeed bring rigor to the process. I no longer accept invitations to participate in the paper review process because you insist that reviews be done anonymously, which I won’t do, because people have the right to know the identity of those who judge their work. It is one thing to allow reviews to be done anonymously, but insisting on it does not add rigor to the process. In fact, it enables the opposite by allowing reviewers to behave irresponsibly or incompetently in the shadows. What’s most striking in your response is that you did not address the flaws that I identified in the research paper, which was accepted by VisWeek. If the review process were rigorous, this paper and others like it would not be accepted. If you disagree, you should defend the review process not by quoting statistics about the number of papers and so on, but by explaining why poor research papers are accepted.

For the InfoVis conference to progress, it must open its doors to the concerns of people like me who know the field well, contribute to it, provide valuable perspectives, and express legitimate concerns. Your response to my concerns is not welcoming, which is why I no longer participate in the conference directly. In your position of leadership, you could do a great deal to change this. Your comments thus far, however, do not encourage the full range of discourse that the community needs to embrace.

Stephen Few

Posts: 853
Reply with quote  #28 


Your defense of a sample size of two is indeed contrived and, in my opinion, invalid. Contrary to your assertion, a study’s conclusions may not be accepted if the sample is insufficient in size, which a sample of two always is. If a study makes generalized claims, as this particular paper does, those claims cannot be accepted on the basis of a single sample that is too small. This is not to say that the claims are necessarily false, but rather that generalized claims cannot be made from the data.

Stephen Few

Posts: 853
Reply with quote  #29 

Please note that I did not say that everyone involved in infovis is complacent about poor research practices. It's noteworthy that the people in your list of those who actively critique poor visualization practices are people like you and me who work outside of the academic infovis research community.

Stephen Few

Posts: 853
Reply with quote  #30 


I’m puzzled by your comment: “You cannot really blame visualization researchers for not ‘understanding how visualizations work perceptually and cognitively.’ That is what they are researching for, understanding how visualizations work perceptually and cognitively.” Huh? Those whose work is published in academic infovis research journals should already have a foundation of knowledge about perception and cognition; otherwise they are not equipped to extend our knowledge of these phenomena. You agree with this, right? If so, what was the point of your comment?

Stephen Few