(This post is primarily directed to Stephen, but please feel free to provide your own thoughts!)
Thanks so much for writing this book - I'm glad a colleague recommended it, and I devoured it voraciously :)
Especially now that you're working on a new version, I've got some humble suggestions for your section on testing dashboard designs for usability. My career has been split between UI design and testing (at big companies, small companies, and as a consultant), so I like to think that I have a perspective that blends the two. My testing experience has been divided between testing my own designs and testing designs from others.
What I really like to get out of testing is a deeper understanding of how users think & approach things, and then to boil this down to advice that can be used to improve a design. Keeping that goal of deeper understanding in mind, I humbly suggest that a few of the things you recommend on pg 172 are missing some opportunities:
"Don't present them with several alternative designs..." "If you are introducing display media that are new to them, begin with simple instruction in how they work and explain why you chose those mechanisms rather than others that might be more familiar."
For point 1, I would suggest that presenting alternative designs *does* have a place in getting greater insight into how users approach problems, both in the total volume of feedback and, more importantly, in having the users verbalize any meta-connections they make about the underlying issues between designs. However, I've got some suggestions/caveats around careful multi-design testing:
- Testing multiple designs certainly expands the length of a test; I think 60-90 minutes total is the maximum realistic duration for most users, during which you can gather good feedback.
- Don't do it in a way that interrupts one discrete "chunk" of their mental flow or process of discovery (e.g., let them complete a whole task & gather feedback on that task before moving on).
- Gather a user's complete feedback/thought process on one design before moving on to another one.
- From user to user, vary the order in which you show the designs, to reduce ordering effects.
- (Perhaps the most delicate) This may have an impact on how the design process is perceived and understood within an organization. The designer needs to have a very open, transparent relationship with the peers/managers/clients that will be observing the study and/or its results, and these people need to understand not to take any one result/piece of data out of context.
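As a side note on varying the presentation order from user to user: one common way to do this systematically (rather than ad hoc) is a simple Latin-square rotation, where each design appears in each serial position equally often across participants. The book doesn't prescribe any tooling for this, so the snippet below is just a hypothetical sketch of that idea, with made-up design labels:

```python
def latin_square_orders(designs):
    """Rotate the design list so that, across the generated orders,
    each design appears in each serial position exactly once."""
    n = len(designs)
    return [[designs[(start + i) % n] for i in range(n)]
            for start in range(n)]

# Hypothetical example with three alternative dashboard designs:
orders = latin_square_orders(["A", "B", "C"])
for order in orders:
    print(order)
# Assign orders[k % len(orders)] to the k-th participant.
```

A plain rotation like this balances serial position but not every pairwise sequence; with more designs and enough participants, a fully balanced Latin square (or simple randomization per user) addresses carryover effects more thoroughly.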
For point 2, here are my suggestions:
- Don't directly explain the design rationale to the user, except, perhaps, at the very end of the session. (And if there's a chance you might bring the user back in the future for another session, I suggest refraining from this altogether.)
- However, it's great to encourage the user to verbalize what they're thinking, as often & in as detailed a way as possible. I think getting them to disclose what *they* think the purpose is, is a lot more informative & revealing. It's not the designer's job to defend the design to the user (that's the job of sales & marketing ;)
- I would also suggest having at least some of your group of users start without any hand-holding/walkthrough whatsoever, to see what they do when left to their own devices.
- At the beginning of the study, users should be clearly encouraged to do things in as natural a way as possible, trying to act as though the tester isn't there (of course, the act of testing invariably colors the results, to an extent).
- This may seem unfair to the design & designer, but I think a lot more can be learned about how people experience a design this way. (Most people use Help as a last resort, if at all; features that rely on Help to be understood will be very seldom used, and gathering this information is an important job of a test.)
Finally, a couple of overall suggestions:
- Unless the designer is very experienced at being a "reflective surface," I think it's preferable to have someone other than the designer be the moderator, the face of the test. It's often too hard for the designer to stay completely dispassionate, keep their emotions & ego at a distance, and resist actively solving new design problems then & there.
- If the user is OK with it, record at least the audio of the sessions. Audio+video is preferable if you can get a good, steady picture of the screen/paper of the prototype; encourage the user to actively point things out with mouse or finger, whichever will be picked up well by the recording.
- Also try to have only one person in the room besides the user; more people tend to make users very self-conscious & quiet. Other observers can watch live via a usability-lab one-way mirror or something like Skype screen casting, or just watch your recordings after the fact.
I hope this info is helpful, hope that I haven't presumed too much, and I very much look forward to the next edition!!