Discussion


Jeff

Registered:
Posts: 53
Reply with quote  #1 
Hi, all. 

I'll add my congratulations and I will say that I did expect that this competition would be stiffer than the last (at least, from viewing the samples printed in Stephen's dashboard book). From looking at the submissions to date it appears this is the case.

Here are my submissions. Version A is the "by the book" offering. Version B colors outside the lines in that it assumes that it's possible to provide an expected grade range based on the assessment score. In the sample data, the two are tightly correlated. Of course, you couldn't do this based on a sample of 30 kids, but in the interest of doing something useful for the world I chose to assume that this was a project commissioned by the district, which would have the necessary data, and showed how I would choose to depict this if the correlation were to hold true.

The assessment sparklines were created in Excel and placed into the graphic; the rest was a manual effort. 

Thanks to Stephen for the contest; I've been reading these boards for a while but this is the first time I've had occasion to post anything I've done.

Jeff

Attached Images
Name: Dashboard_Version_A.png, Views: 721, Size: 118.34 KB

Name: Dashboard_Version_B.png, Views: 957, Size: 120.19 KB


grasshopper

Registered:
Posts: 247
Reply with quote  #2 
I like the expected score boxplot in the 2nd version. 

In both dashboards, it's a little difficult to see which lines go with which names, so I'd recommend placing the horizontal lines after every 2 or 3 names rather than every 5.
Jeff

Registered:
Posts: 53
Reply with quote  #3 
Thank you, grasshopper. I can see where that could make it easier to scan across. It might even eliminate the need for lines for every row in the discipline section (which I added because the bar charts seemed to need grounding).

Jeff
danz

Registered:
Posts: 190
Reply with quote  #4 
Jeff,


The box plot looks much better, but there are only 5 values to "compact"! I am not sure that two graphical representations of the same values would help the teacher more than just one would. I prefer the heat map alone, with the box plots on top of it.

The history graph should be more compact; that would give a better sense of the trend.
The comparison with the district average is, I think, more useful at a global scale than at an individual one.

I prefer your solution of grouping the absences by month to the winner's solution. However, I am not sure that stacking absences and tardies, or referrals and detentions, adds any value beyond saving space.

The late graph is not necessary; I find I focus more on its size than on the scores themselves.

I personally prefer the student names in the first column.

Daniel
Jeff

Registered:
Posts: 53
Reply with quote  #5 
Thank you for the feedback, Daniel. 

To be clear, I wasn't proposing giving the teacher both versions. The two dashboards were separate entries. I also like the box plots in version B better, but since they represent an expected range based on district-wide assessment and grade data (which we didn't have for the exercise), version B strayed from the rules a bit and made some assumptions. Version A confines itself to plotting the data in the spreadsheet.

Jeff
jlbriggs

Registered:
Posts: 200
Reply with quote  #6 
I have to say I *really* like the heat map matrix approach to the current grade and previous assignments.
I also love how clear it makes the missing assignment.

The dotplot/bullet graph display for the grade vs goal/previous is quite nice too, though I'm unsure how I feel about the green color.

The late/tardy/discipline section loses me a little bit...
And it's not completely clear what the box plots at the top are displaying.


Jeff

Registered:
Posts: 53
Reply with quote  #7 
Hi, jlbriggs. Thank you for your comments.

Since you raised questions...

I viewed the late/tardy/discipline data as possibly providing useful context for poor performance but not as fertile ground for exploration. It seemed like the only story you'd likely get out of it was that lots of absences would correlate with poor performance on assignments that happened at about the same time, which you hardly need a picture to tell you. Also, I don't believe that two disciplinary events last term compared to one this term represents a meaningful downward trend. So I collapsed all this stuff into the stacked bars you see here. Students with a lot of surface area covered by these bars have been gone (or unruly) a lot; the exact number of these things seemed less important from a teacher's viewpoint (as they don't factor directly into grade calculation). So the design was a result of a conscious decision that this information might be helpful (but only a little helpful) on a daily basis.

The box plots at the top could use a little more structure around them to make their purpose clearer. They represent the distribution of scores for the corresponding assignments below. The idea was to give the teacher a quick view of how the entire class fared on each assignment, in order to determine whether additional review of a given topic with the entire class would make more sense than individual interventions.
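For what it's worth, the five-number summary behind each of those per-assignment box plots is cheap to compute. Here's a minimal Python sketch with invented scores (the assignment names and values below are illustrative, not from the contest spreadsheet):

```python
# Hypothetical sketch: compute the five-number summary that each
# per-assignment box plot encodes. Data is invented for illustration.
import statistics

# scores[assignment] -> list of class scores (0-100)
scores = {
    "HW1": [95, 88, 72, 64, 91, 83, 77, 58, 90, 85],
    "Quiz1": [70, 65, 80, 55, 75, 60, 85, 50, 78, 68],
}

def five_number_summary(values):
    """Min, Q1, median, Q3, max -- the five marks a box plot draws."""
    s = sorted(values)
    q1, med, q3 = statistics.quantiles(s, n=4)  # quartile cut points
    return (s[0], q1, med, q3, s[-1])

for name, vals in scores.items():
    lo, q1, med, q3, hi = five_number_summary(vals)
    print(f"{name}: min={lo} Q1={q1} median={med} Q3={q3} max={hi}")
```

One caveat: `statistics.quantiles` defaults to the "exclusive" method, while Excel's QUARTILE.INC corresponds to `method="inclusive"`, so a spreadsheet may report slightly different quartiles for the same data.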

Jeff
sfew

Moderator
Registered:
Posts: 837
Reply with quote  #8 
Jeff,

In looking at your two versions of the dashboard above, I realized that I only saw the first version during the scoring process. I like the second version better, especially your treatment of the graphical display of current grade data compared to the student's goal and previous math score. Had I noticed your second version during the scoring process, I might have included it in the short list of the best eight. There is much to like about your designs. They are clean and aesthetically pleasing, although the first version in particular introduces a couple of colors that aren’t necessary. The rest of my comments below pertain specifically to your first version.

  1. The order of students from best to worst is probably the opposite of what the teacher would find most useful. You can certainly find the students who are doing poorly easily enough, but by placing them at the bottom of the list they haven’t been adequately featured.
  2. The “Last Name” label is not aligned with the students’ last names. Because first and last names do not appear in separate columns, a label such as “Student” would have worked better.
  3. Historical assessment scores are not the most important student information included on the dashboard, yet they appear in the first column. In this particular dashboard, it probably works best to place the students' names first and to left-align them.
  4. The short black horizontal dash at the right edge of the sparklines, which represents the district average, would work better as a line that spans the entire width of the sparkline in a lighter color than the sparkline.
  5. I like the use of box plots above the individual assignment scores. This is an effective way to integrate an overview of the distribution of scores with the individual scores. The graphical display of current course grade information compared to the target and previous math grade is a bit confusing. The current score is clear and appropriately salient. The goal, however, is confusing. Is it a bar that runs from right to left from 100% to the goal? The goal is a specific grade, not a range. A simple mark would have worked better. With a bit of study, I figured out that the light gray line spans the range of scores that's associated with the previous math grade, but the light gray line is difficult to focus on because the green bars are much more salient. Also in this section, the meaning of the orange line isn't obvious. It is the same color that was used for missing assignments elsewhere. Rather than expressing the per-student scores as letter grades, a graphical display such as a sparkline would have revealed the pattern of change better than the heatmap use of color.
  6. What I do like about the use of heatmap colors, however, is the fact that they are used consistently wherever letter grades appear. They clearly draw your eyes to the lowest grades and also make it easy to spot the highest grades.
  7. The horizontal lines that delineate groups of five students each are not the best way to assist the reader when scanning across a specific row. The light zebra striping that appears in the winning solution works better.

__________________
Stephen Few
Jeff

Registered:
Posts: 53
Reply with quote  #9 
Thank you for your comments, Stephen. 

To answer your question in #5, yes, I interpreted the goal more as a threshold than a target on the assumption that a student who set a goal of B would also be satisfied with an A. The "green zone" therefore represents the range of acceptable grades for a student. I think this makes sense conceptually, but I accept that your recommendation may still be a more pleasing way of depicting it.

I have a general question related to your comment in #4. I included the brief dash instead of a line because we had a district average only for the most recent year. A minor distinction, maybe, but is it at all problematic to plot last year's score against this year's district average? Or am I overthinking this?

Jeff
sfew

Moderator
Registered:
Posts: 837
Reply with quote  #10 
Sorry, Jeff. In my haste, I failed to review the data, which would have reminded me that the district average was only for the current year. As such, it was more appropriate to show a single point rather than a line that spanned the full five years. I'm curious, though, why you decided to show the district average rather than the school average. If you can only show one, which is more relevant to these students?
__________________
Stephen Few
Jeff

Registered:
Posts: 53
Reply with quote  #11 
That's a good question. The school average may be more relevant to these students. In this case, though, given the scale and resolution of the graphs and the proximity of the averages to one another, it didn't seem to matter too much; my primary goal was to provide a fixed reference point that would allow better comparisons among the sparklines. I chose the district average over the school average at a moment when I was thinking in terms of scalability. Assuming that the dashboard was provided to teachers by the district (or that it soon would be, once Susan Metcalf's colleagues saw it in use), I chose the number that would be the same for everyone.

Jeff

Francis

Registered:
Posts: 13
Reply with quote  #12 
Hi Jeff - As Stephen Few pointed out, excellent work. Here are a few things I like about your design.
  1. You did not put the names in the first column. With so much data in each row, I find that placing the key reference (the name) closer to the middle evens out the distance the eye has to travel to understand the data. You made a good choice to separate the data from previous years, and in general it seems that the division should follow some concept, such as grades on one side and behaviors on the other. In any case, I wish we saw more experimentation with this.
  2. I prefer the line every five rows to the shading of every second row; I find it easier to tell which line we are looking at. I understand that Stephen Few differs on this. I would be curious to see what the research says about it.
  3. It is visually pleasing. You made good color choices.
  4. I like the red dot for the absence in the last three days. Since the teacher will see this dashboard every day, it seems natural that she will want to know "what's new". For that reason, I think that the dashboards should signal more often the recent discipline problems or the good and bad trends in assignment scores.
  5. The expected grade range, of course. I think it's important for the teacher to know what remains possible as the year progresses.

A few things I would do differently:
  1. The orange for the missing assignment is too strong for me. I would have gone with a black space, for instance, which would have followed the color scale that you use for the rest of the display.
  2. I agree that the students in the worst position should be at the top.
  3. Instead of showing the percentage of assignments late, it would be good to visually encode the assignment grades. That would better show the correlation between grade and tardiness. In some cases the correlation could be negative, flagging students who need better time organization.
  4. ELL and SEPT could have been in the same column rather than separate ones. I would even have just made a mention at the end of the name, perhaps in bold or color, to highlight it.
xan

Registered:
Posts: 44
Reply with quote  #13 
Very nice, Jeff. Maybe my favorite. I especially like the simplified assessment scores. Most of us were afraid to lose the detailed scores, but it's hard to cram so much detail into that space. And an extra touch is the way the box plots above line up with the details below.

I still don't follow the expected grade box plots, so I prefer the first version, except with only the target grade range shaded instead of the target grade and above. I understand the idea, but by shading only the target grade, the shading is less dominant and it's easier to distinguish above-target students.

The historical grade scores don't do much for me on a dashboard and the mostly horizontal lines don't say much, but your representation is as good as any. And in a way, putting those scores before the name (appropriately) places them in the past and de-emphasizes them since the name column serves as an anchor for the teacher to start reading left-to-right from.

Jeff

Registered:
Posts: 53
Reply with quote  #14 
Hi, xan. Thank you for your comments.

Since you and others have expressed uncertainty about the expected grade box plots, I'll try to explain them here.

The expected grade box plots in version B were based on my observation that the current assessment scores and the current grade correlated very strongly, at least in this one sample. If the correlation between assessments and grades exists across the entire population, you should be able to make a prediction about how a student will do in a course if you know his/her performance on assessments.

This design wouldn’t be possible with the data from a single class, but it should be possible to test the correlation between assessments and grades if you had district-wide data. So, for this version, in the interest of doing something useful for the world, I chose to assume that this tool was commissioned by the district rather than by an individual teacher (which is a plausible scenario), and that the correlation had proven to be real. The predictions shown in version B thus represent data that was not included in or derived from the spreadsheet. The intent of this design is to show how this data could be put to use if it were available for the entire district. In my view, a comparison of a student's current grade against a distribution of grades from everyone who scored similarly on the assessment is more compelling than a comparison against the previous single math grade, which was earned in a class about a different topic and taught by a different teacher.
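To make the idea concrete, here is a rough Python sketch of how a district might derive the expected-grade box from its own data: bucket students by assessment score, then take grade percentiles within each bucket. Everything here (the simulated records, the 5-point bucket width, the particular percentiles) is invented for illustration, not taken from the contest data:

```python
# Hypothetical sketch: derive an expected-grade range from district-wide
# (assessment, grade) pairs. All data here is simulated for illustration.
import random
import statistics

random.seed(1)

# Simulated district records: grades strongly correlated with
# assessment scores, plus some noise, clamped to 0-100.
records = [(a, min(100, max(0, a + random.gauss(0, 5))))
           for a in (random.randint(40, 100) for _ in range(5000))]

BUCKET = 5  # assessment scores grouped into 5-point bands

def expected_grade_box(assessment_score):
    """Return (P10, Q1, median, Q3, P90) of grades earned by students
    whose assessment score fell in the same 5-point band."""
    band = assessment_score // BUCKET
    grades = sorted(g for a, g in records if a // BUCKET == band)
    deciles = statistics.quantiles(grades, n=10)   # P10..P90
    quarts = statistics.quantiles(grades, n=4)     # Q1, median, Q3
    return (deciles[0], quarts[0], quarts[1], quarts[2], deciles[-1])

p10, q1, med, q3, p90 = expected_grade_box(85)
print(f"Expected grade range for assessment 85: "
      f"{p10:.0f}-{p90:.0f} (middle half {q1:.0f}-{q3:.0f})")
```

The box and whiskers drawn in version B would come from those five numbers; a real implementation would also need to decide how wide the assessment bands should be and how much history to pool.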

Hope this helps.

Jeff

Attached Images
Name: correlation.png, Views: 627, Size: 9.70 KB


Jeff

Registered:
Posts: 53
Reply with quote  #15 
Oh, and I see that I missed a post before. Thank you as well for your comments, Francis. 