Hello and welcome back to our series on army appearance judging. I’m going through the most commonly used approaches and talking about the pros and cons of each. I previously talked about crowdsourcing and objective criteria, and this time I’m going to talk about subjective assessment.
As a quick reminder, this is intended to be food for thought for Tournament Organizers (TOs), and not a defense of soft scores or a discussion of whether we should incentivize people to improve their hobby skills. I’m just going to assume that if you’re reading this, you’re already on board. 😉 In the interest of full disclosure, I feel like I should tell you that the subjective assessment approach is my preferred approach and the one I use at my event. But I’m going to do my best to not fanboy over it…too much, since it does have some particularly tricky cons and pitfalls to deal with.
For the subjective assessment approach, a judge (or possibly the TO) examines all the armies and gives each one a score within a range defined by the TO. The scores are based on the judge’s opinion of how each army looks. The army with the highest score wins.
Subjective Assessment Pros
- Flexible – The judge is free to consider, and reward, anything and everything the hobbyists have done. The assessment isn’t limited to strict predefined criteria, which gives hobbyists more freedom to experiment without worrying that it will hurt their score.
- Considers Quality – The judge can also consider how well the hobbyists have done what they have done, and reward them accordingly. This incentivizes hobbyists to really work on perfecting their hobby skills over time.
- Relative Scores and Ranked Results – Another advantage of this approach is that the judge can start by ranking all the armies, and then go back and assign appropriate scores to them. Even if multiple armies receive the same score, the TO can refer to the rankings if needed to break ties. You also don’t have to worry about the case where one or more hobbyists go above and beyond the quality level you initially based your score ranges on. The judge can adjust the score ranges as needed relative to the highest quality army at the event.
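To make that rank-then-score idea a little more concrete, here’s a minimal sketch (in Python, purely illustrative, and not taken from any real event software) of how a TO might record the judge’s ranking alongside the scores and fall back on the ranking to break ties. The army names and point values are made up.

```python
# Purely illustrative: armies listed in the judge's ranked order (best first),
# with the subjective score assigned afterwards. Names and numbers are made up.
ranked_armies = [
    ("Army A", 18),
    ("Army B", 15),
    ("Army C", 15),  # same score as Army B, but the judge ranked it lower
    ("Army D", 9),
]

def final_standings(ranked):
    """Return (rank, army, score); score ties are already broken by the ranked order."""
    return [(rank, name, score) for rank, (name, score) in enumerate(ranked, start=1)]

for rank, name, score in final_standings(ranked_armies):
    print(f"{rank}. {name} - {score} points")
```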
Subjective Assessment Cons
- High level of Hobby Expertise required for the Judge – So, this all sounds great, but it’s completely dependent on the judge being up to the task, and it falls apart really quickly if they aren’t. The judge has to be proficient in and knowledgeable about painting techniques, color theory, model ranges, basing techniques, fluff, etc. Not every TO has the expertise required to do the job well, and finding someone else qualified and willing to do it can be challenging.
- High level of effort for the Judge – This approach takes time. The judge has to both closely examine individual models and stand back to take in each army as a whole. They will probably have to make more than one pass to compare armies at similar quality levels and make sure they are ranking them as accurately as possible. It’s not uncommon for the whole process to take most of the duration of the event.
Potential Pitfalls
- Personal Bias – Since the approach is entirely subjective and really comes down to the judge’s opinion, there’s a risk that personal bias creeps into the assessment, or that hobbyists who don’t feel they were evaluated accurately attribute their scores to personal bias. There are a few ways to mitigate this, some of which I’ll talk about under the next pitfall. One way is to use more than one judge, but that has its own complications. To begin with, you have to find multiple qualified judges. Then, all the judges really need to look at all the armies, which adds even more logistical issues to your event. It might be tempting to split the field up and have each judge evaluate only a portion of the armies to save time, but that doesn’t address the issue of each army’s score being based on just one judge’s opinion, and it can cause problems if the different judges are using different standards or criteria.
My preferred way to handle it is to get a qualified judge from outside the Kings of War community. That way the judge doesn’t really know anything about the players, the meta, the rankings, etc., and there are fewer personal biases to worry about.
- Low Transparency – For all the pros of this approach, a single score based on subjective criteria is also very opaque. While this approach rewards hobbyists who already excel, the lack of transparency can make it difficult for aspiring hobbyists to know what they should work on to improve their scores. I think there’s also a concern from TOs that the lack of transparency will invite criticism from attendees who feel their scores don’t properly reflect the quality of their army, and that such criticism can tarnish people’s enjoyment of the event. I almost want to call this issue a ‘con’ for the approach, but I feel like there are some reasonable measures you can take to help.
First and foremost, the judge has to be willing and available to provide some level of feedback to the hobbyists. The gold standard is for the judge to provide written feedback to all the hobbyists, but that’s a huge level of effort for the judge to commit to. A more workable approach is for the judge to have a brief chat with hobbyists that request it, assuming only a small number of hobbyists are interested.
Another approach is to break the score range up into tiers. Give each tier a general quality description and a score range; for example, a ‘fully painted’ or ‘tabletop quality’ army might expect to score between 5 and 10 points on a scale of 0 to 20. This helps set hobbyists’ expectations appropriately in advance. The more detail you can provide on how high the quality requirements are for the upper tiers, the better. Bonus points if you can provide photos of example armies at each tier.
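For what it’s worth, here’s a rough sketch of what a published tier chart could look like if you wanted to keep it in a script or spreadsheet; the tier names, descriptions, and ranges below are invented examples on a 0 to 20 scale, not a recommendation.

```python
# Hypothetical tier chart on a 0-20 scale, published to attendees before the event.
# Tier names, descriptions, and score ranges are invented for illustration only.
TIERS = [
    # (tier name, short description, minimum score, maximum score)
    ("Unpainted / in progress", "Bare models or base coats only", 0, 4),
    ("Fully painted / tabletop", "Three colors and based, looks good at arm's length", 5, 10),
    ("High tabletop", "Clean highlights, cohesive basing and color scheme", 11, 15),
    ("Showcase", "Advanced techniques, conversions, display-level finish", 16, 20),
]

def tier_for_score(score):
    """Look up which published tier a given score falls into."""
    for name, description, low, high in TIERS:
        if low <= score <= high:
            return name
    raise ValueError(f"Score {score} is outside the published 0-20 range")

print(tier_for_score(8))  # -> Fully painted / tabletop
```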
The approach that I’ve been using for this issue is to split the single monolithic score up into three captions: paint (like, just the paint), composition (physical construction, arrangement, basing, conversions), and general effect (theme, narrative, overall impression). Each caption has quality tiers with their own descriptions, so you end up with a reference chart for the judge to use and the hobbyists to refer to. The judge evaluates each caption in a vacuum, and we provide the hobbyists with the individual caption scores in addition to their overall combined score. This gives the hobbyists more visibility into how the judge arrived at the final scores, which areas they could work on improving, and why one army ranked higher than another, without really increasing the workload on the judge.
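As a small example of how the caption split could be tracked, here’s a sketch that records the three caption scores separately and combines them into the overall score the hobbyist sees. The caption names follow what I described above, but the simple sum with equal weighting and the example point values are assumptions for illustration, not a description of any particular event’s exact math.

```python
# Illustrative only: per-caption scores recorded separately and combined into one total.
# Caption names follow the article; the simple sum / equal weighting is an assumption.
CAPTIONS = ("paint", "composition", "general_effect")

def overall_score(caption_scores):
    """Combine the three caption scores into a single total (equal weighting assumed)."""
    missing = [c for c in CAPTIONS if c not in caption_scores]
    if missing:
        raise ValueError(f"Missing caption scores: {missing}")
    return sum(caption_scores[c] for c in CAPTIONS)

# What the hobbyist gets back: the per-caption breakdown plus the combined total.
example = {"paint": 12, "composition": 9, "general_effect": 14}
print({**example, "total": overall_score(example)})
```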
Another advantage of splitting the score up into distinct captions is that you can have multiple judges who each judge only a single caption: a paint judge, a composition judge, and a general effect judge. This can be useful at a very large event where one judge can’t assess all the aspects of all the armies. As stated earlier, having multiple judges each assess only some of the armies can cause consistency issues. But by dedicating each judge to just one caption, every judge can still evaluate all the armies, since they can ignore the aspects that fall under the other two captions and spend less time on each army; for example, the paint judge just has to worry about paint and doesn’t need to spend time looking for mold lines or unfilled gaps, since that’s the composition judge’s job.
Between multibasing and a wide range of allowable models, Kings of War has the potential to be one of the best-looking wargames out there. I really feel like it’s up to TOs to set the incentives for their communities so they will keep advancing their local ‘hobby meta’. After all, nothing recruits new players like beautiful armies. I hope I’ve given you all some food for thought on what you can do at your own events to make that happen.