The US Masters is in the books for another year, and now begins the process of looking back, reflecting, and trying to make sense of what happened, what it means, and what we can learn. There were a lot of prediction-ruining upsets in the early rounds, some of the dark horse picks had good showings, and I suspect a lot of people are taking another look at the Herd for the first time since 3rd edition dropped.
I didn’t attend the event this year, but I did follow along as best I could from a distance. So here’s my contribution as a Monday-morning quarterback: a quick-and-dirty attempt to find some data-driven takeaways from the event. The source data can be found here.
First, how did the 21 represented armies perform, on average?
| Army | Average Battle Score | Number of Players |
|---|---|---|
| Herd | 93.00 | 1 |
| Ratkin | 81.00 | 2 |
| Elves | 80.00 | 1 |
| Goblins | 73.60 | 5 |
| Salamanders | 69.00 | 2 |
| Nightstalkers | 65.67 | 6 |
| Ogres | 64.80 | 5 |
| Forces of the Abyss | 64.50 | 4 |
| Trident Realms | 62.00 | 2 |
| Kingdoms of Men | 62.00 | 2 |
| Empire of Dust | 61.67 | 3 |
| Northern Alliance | 61.00 | 3 |
| Orcs | 59.75 | 4 |
| Basilea | 59.75 | 4 |
| Brothermark | 59.00 | 1 |
| Varangur | 58.67 | 3 |
| Order of the Green Lady | 58.50 | 2 |
| Dwarfs | 54.25 | 4 |
| League of Rhordia | 54.00 | 2 |
| Abyssal Dwarfs | 53.25 | 4 |
| Undead | 52.25 | 4 |
The first big, obvious thing here is that the Herd and Elves finished higher than you would expect. You could also interpret that as “people named Keith are good at this game”. The next thing that jumps out is that Undead and Abyssal Dwarfs are at the bottom. These results are nearly the opposite of what you would expect from the general community consensus on top- and bottom-tier armies. So, time to nerf Herd and Elves and buff Abyssal Dwarfs and Undead, right? 😉
Nightstalkers and Goblins being near the top is expected, but Ratkin and Salamanders came in higher than I would have guessed. I expected Northern Alliance and Varangur to finish higher given who was playing them, but I think they took a hit in some of the early upsets. The middle armies feel pretty reasonable.
The average of the army scores is around 63, and the standard deviation is 10.2. All but the top four armies and the bottommost one are within 1 standard deviation of the average, and only the top army is more than 2 standard deviations away. For a normally distributed dataset you would expect about 68% of the values to fall within 1 standard deviation, but here it’s more like 76% (16 of the 21 armies). That means the average army scores are a little less varied and a little more clustered than you would normally expect. It also means the armies on the low end aren’t far enough below the average to be statistically significant.
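If you want to check the math yourself, here’s a minimal Python sketch (using NumPy) that computes these summary statistics from the per-army averages in the table above. Note the exact mean depends on whether you average per army or per player, so the figures may differ slightly from the ones quoted here.

```python
# Minimal sketch: summary statistics over the per-army averages from
# the table above. Figures may differ slightly from the post depending
# on per-army vs. per-player weighting.
import numpy as np

avg_scores = np.array([
    93.00, 81.00, 80.00, 73.60, 69.00, 65.67, 64.80,
    64.50, 62.00, 62.00, 61.67, 61.00, 59.75, 59.75,
    59.00, 58.67, 58.50, 54.25, 54.00, 53.25, 52.25,
])

mean = avg_scores.mean()
sd = avg_scores.std(ddof=1)  # sample standard deviation
within_1sd = np.abs(avg_scores - mean) <= sd
print(f"mean = {mean:.1f}, sd = {sd:.1f}")
print(f"within 1 sd: {within_1sd.sum()} of {len(avg_scores)} "
      f"({within_1sd.mean():.0%})")  # 16 of 21, ~76%
```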
Now, let’s dig a little deeper into the attributes of the lists themselves. Previously, Tom Annis wrote about the lists and counted up things like Drops, Scoring Drops, Unit Strength, etc., to see what the general list-construction themes were. Now that we have the results, we can see whether any of these attributes correlates with a higher Battle Score.
| Attribute | Correlation Coefficient |
|---|---|
| Scoring Drops | 0.081 |
| Drops | 0.054 |
| Unit Strength | -0.175 |
| Speed 7+ | 0.178 |
| Shots | 0.349 |
| Defense 6 | -0.152 |
So first off, none of the attributes we tracked has a strong correlation with Battle Score. That’s great news, as it means there’s no single list-building strategy that always works.
There is a moderate correlation between total number of Shots and Battle Score. Just eyeballing the results, you can see that the high-shot armies cluster towards the top, and that jibes with some of the first impressions we’ve heard from folks. Terrain can obviously play a big part in how effective shooting is, and the general impression I’ve gotten is that there were more, smaller pieces of terrain compared to last year’s Masters, which resulted in more shooting lanes than last year. That’s not to say the terrain was light or in any way insufficient; the pictures of the tables I’ve seen looked to be about what I would consider an average amount of terrain coverage. But this does suggest that a massed volume of shots can be a successful strategy despite shooting having been nerfed in 3rd edition.
Scoring Drops, Total Drops, and Speed 7+ units have a slight but likely unimportant positive correlation with Battle Score. Speed 7+ is the most likely of the three to be relevant, but the correlation is still pretty weak and is possibly skewed a little by the winning list being an outlier on the high end.
And finally, Total Unit Strength and Defense 6 units have a slight but likely unimportant negative correlation with Battle Score. A wall of Defense 6 units can be intimidating, but bringing a lot of them is far from a guaranteed strategy for success and might even be a slight disadvantage. A lot of these units had Defense 6 due to the Big Shield rule, so it’s possible that people have simply gotten used to flanking them, or there was just a ton of Crushing Strength and Bane Chant around to negate it.
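As an aside, a correlation table like the one above is straightforward to reproduce. Here’s a minimal Python sketch using NumPy; the arrays below are placeholder values standing in for the per-player columns in the source spreadsheet, so treat it as a recipe rather than the real calculation.

```python
# Hedged sketch: Pearson correlation between one list attribute and
# Battle Score. The arrays are placeholders, not the actual Masters data.
import numpy as np

battle_scores = np.array([93, 81, 80, 74, 69, 66, 60, 55])  # placeholders
shots = np.array([30, 26, 28, 12, 20, 10, 8, 15])           # placeholders

# Pearson's r: covariance of the two columns normalized by both
# standard deviations; ranges from -1 (inverse) to +1 (direct).
r = np.corrcoef(shots, battle_scores)[0, 1]
print(f"correlation(Shots, Battle Score) = {r:.3f}")
```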
I feel like this all supports our common claim that Kings of War is a fairly well-balanced game where player skill is more important than the army or the list. Things like terrain, scenario, matchup, and the randomness of the dice always play a role as well, of course.
So there you go, there’s my hastily done analysis. I’m certain there is still plenty of space for more discussion and analysis, so stay tuned for more coverage over the next couple of weeks.
The following is the best model for predicting Battle Score based on the data provided (ignoring region and mercenary data). Shots, SOS, and Speed 7+ give a slight advantage:

Battle Score = 0.1794 × Shots + 0.1841 × SOS + 1.2271 × (Speed 7+) - 18.7560
I made a mistake there and included SOS, which is not really relevant. This is the better model:

Battle Score = 1.2251 × Drops + 0.1857 × Shots - 1.0103 × US + 1.5712 × (Speed 7+) + 53.9837

So Drops and Speed 7+ have the stronger influence, followed by Shots.
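For anyone who wants to replicate this, here’s a minimal sketch of fitting such a multiple linear regression with ordinary least squares in Python. The column layout follows the model’s terms (Drops, Shots, US for Unit Strength, SP7 for Speed 7+ units), but the data rows are placeholders, not the actual Masters dataset.

```python
# Minimal OLS sketch for the regression above. Placeholder data;
# columns: Drops, Shots, US (Unit Strength), SP7 (Speed 7+ units).
import numpy as np

X = np.array([
    [12, 30, 20, 4],
    [10, 26, 22, 2],
    [14, 10, 18, 5],
    [11, 18, 24, 1],
    [13, 22, 19, 3],
    [15, 14, 21, 2],
])
y = np.array([93, 81, 60, 55, 70, 64])  # Battle Scores (placeholders)

# Append an intercept column and solve the least-squares problem.
A = np.column_stack([X, np.ones(len(X))])
coefs, *_ = np.linalg.lstsq(A, y, rcond=None)
b_drops, b_shots, b_us, b_sp7, intercept = coefs
print(f"Battle Score = {b_drops:.4f}*Drops + {b_shots:.4f}*Shots "
      f"+ {b_us:.4f}*US + {b_sp7:.4f}*SP7 + {intercept:.4f}")
```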