One of the most frequently used methods for evaluating the quality of creative works is the Consensual Assessment Technique (CAT). Originally developed by Teresa Amabile, now a professor at Harvard Business School, CAT is based on the premise that experts can recognize creativity and that if they are in agreement about a creative work, then their opinion should be accepted. Thus, judges assess creative products independently of each other, usually on a number of criteria, and if they are in agreement, their assessments are averaged into global scores.
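To make the CAT procedure concrete, here is a small illustrative sketch (not an official implementation, and the ratings are made up): judges rate a set of works independently, we check how well they agree using the mean pairwise correlation between judges, and if agreement is acceptable, the ratings are averaged into global scores.

```python
from itertools import combinations
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation between two judges' rating lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical data: each judge independently rates works A-D on a 1-5 scale.
ratings = {
    "judge1": [4, 2, 5, 3],
    "judge2": [5, 1, 4, 3],
    "judge3": [4, 2, 5, 2],
}

# Agreement check: average correlation over all pairs of judges.
agreement = mean(pearson(a, b) for a, b in combinations(ratings.values(), 2))

# If judges agree, their ratings are averaged into a global score per work.
global_scores = [mean(scores) for scores in zip(*ratings.values())]

print(f"mean inter-judge correlation: {agreement:.2f}")
print("global scores:", [round(s, 2) for s in global_scores])
```

Real CAT studies use more formal reliability statistics (such as Cronbach's alpha), but the basic flow is the same: independent ratings, an agreement check, then averaging.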
Experts in training evaluate performances at the 2010 Trophée Eric Bompard
So far, the similarities to skating are obvious, right? Well, here is where the processes diverge: first, CAT has been used to evaluate finished creative products - such as drawings, poems, and musical compositions - relative to one another, while skating judges must judge performances in real time, against absolute standards (in that respect, 6.0 is more similar to CAT). In addition, judges in CAT studies are usually given relatively general criteria, while skating judges have numerous bullet points to consider, not to mention GOEs to sort out. Also, because skating judges evaluate in real time, they cannot view the performances in random order or evaluate each component separately from the others - there simply isn't enough time. Next, instead of randomizing the order in which judges evaluate each component, the PCS components are presented to the judges in the same order, with skating skills first. While judges are free to enter the components in the order they choose, the presentation order itself could affect their evaluations, which in turn might favor skaters who are stronger in skating skills while hurting skaters whose strengths lie elsewhere. Finally, rather than checking whether the judges' scores are reliable (that is, in line with one another), agreement is essentially forced by means of the infamous corridor.
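To show how different "forced" agreement is from "checked" agreement, here is a toy sketch of a corridor-style check. The actual ISU corridor rules differ in their details; this just illustrates the idea of flagging marks that stray too far from the panel average, using a made-up threshold.

```python
def corridor_flags(marks, threshold=1.0):
    """Flag any mark farther than `threshold` from the panel average.

    Toy illustration only: the real ISU corridor procedure is more
    involved, and the threshold here is invented for the example.
    """
    avg = sum(marks) / len(marks)
    return [abs(m - avg) > threshold for m in marks]

marks = [7.25, 7.50, 7.00, 9.00, 7.25]   # hypothetical PCS marks
print(corridor_flags(marks))             # only the 9.00 falls outside
```

Note what this does and doesn't do: it pressures judges toward the panel average, but it never asks whether the panel as a whole is evaluating the same underlying quality, which is what a reliability check in CAT is for.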
Now, I will be the first to admit that CAT is an academic method and one that may not translate easily to the real world. Also, as far as I know, it has been used to assess finished creative products rather than real-time performances. Nonetheless, I believe that parts of this technique could be successfully applied in skating. In my post on rewarding A while hoping for B, I noted that the focus on objective criteria at the expense of the less easily quantifiable can cause various problems. Thus, one change that I would like to see is the simplification of the bullet points for each component. Rather than trying to break everything into parts whose sum might not truly reflect the whole, I would suggest that judges be encouraged to think on their own and evaluate each component based on a more general outline, as well as their best understanding of it (Sonia Bianchetti, no doubt, would be thrilled with this suggestion!).
While skating programs cannot be viewed in random order or truly compared to one another, I believe it may be time to consider going back to random draws in the SP, to negate the advantage skaters in the later groups gain merely from skating late. In addition, I don't believe the components should be presented in the same fixed order; instead, each judge could see them in a different order, randomly reshuffled for each skater. Surely this wouldn't be that difficult to program?
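It really wouldn't. Here's a minimal sketch of the shuffling suggested above, assuming the five PCS components of the time: each judge gets an independently randomized component order for each skater, so skating skills no longer sits first by default (the function name and data here are my own, purely for illustration).

```python
import random

COMPONENTS = ["skating skills", "transitions", "performance/execution",
              "choreography", "interpretation"]

def presentation_orders(skaters, judges, seed=None):
    """Return {skater: {judge: shuffled component list}}.

    Each judge/skater pair gets its own independent shuffle, so no
    component systematically appears first on anyone's screen.
    """
    rng = random.Random(seed)   # seed is optional, for reproducibility
    orders = {}
    for skater in skaters:
        orders[skater] = {}
        for judge in judges:
            order = COMPONENTS[:]   # copy before shuffling
            rng.shuffle(order)
            orders[skater][judge] = order
    return orders

orders = presentation_orders(["skater A", "skater B"], ["J1", "J2", "J3"])
for judge, order in orders["skater A"].items():
    print(judge, "->", order)
```

A few dozen lines in the scoring software, in other words, rather than a structural change to the judging system itself.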
Is there any chance of this happening? Probably not, though senior B events do use random SP draws. But since I am not as mathematically inclined as some skating fans, I'll consider this my contribution to the ongoing discussion of how to improve skating judging.