Thinking Beyond the Drill, or the “Teaching to the Test” Fallacy

More than once, John Simpson has mentioned to me that you don't prepare for the test by practicing the test. It's a bit of a call-out aimed at shooters who think that the path to improvement is merely faster times on their drill of preference. My observation is that a shooter's preferred "game" usually dictates which drills they care about. Bill Drills, classifiers, 1-5, Dot Torture, and many more track against defensive, competition, handgun, long gun, and other disciplines. These shooters then post their scores and compete against other enthusiasts to see who is the "best."

There's a rub, though. Being good at a particular drill does not automatically translate to being good at all of the related skills. It's merely a point-in-time measurement to compare yourself against. Take the Bill Drill, for example. The entire drill consists of starting from the draw and firing six shots as quickly as possible at an IPSC target placed at 7 yards. It's a sort of "maximum effort" test.

I take issue with the idea that the drill teaches anything in particular. Six shots isn't enough to teach anything. To borrow from strength training: nearly everyone understands that you don't prepare for a one-rep maximum test by only training at maximum weight for single reps. I saw the same kind of thing in the military, where some guys would "prepare" for their upcoming PT test by running through it once or twice every week leading up to the actual test.

Instead, you must identify and target weak points, then build a volume of practice addressing those weak points, often at weights far below the maximum. With time and effort, your ability to display maximum strength improves.

Another Example

A significant portion of my day job involves developing technical certifications. Another huge part of it is developing the training programs that teach the knowledge and skills required to pass those certifications.

When we develop a certification exam, it's all a process of compromises. We bring in a team of industry subject matter experts and have them work through a process of documenting the skills and knowledge required to be successful at the level of the test we're creating. This list usually ends up dozens of items long, with each item having several subcomponents. We then go through another process to prioritize the items against one another. By the time we're done, we have a rank-ordered list of skills and knowledge.

If the test is only 60 items long but the team wrote 70 objectives, there isn't enough space to evaluate everything. Even if the counts matched 1:1, there could only be one question per objective, and a single question is far from enough to comprehensively evaluate what someone knows. That usually means we have to select the top 20 or so objectives deemed "most important." Even then, we probably won't distribute the questions evenly. The top five objectives might get five questions each, while the least important ones get one or two. Keep in mind that the "least important" were still in the top 20 out of 60 to 70 objectives.
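
To make that arithmetic concrete, here's a rough sketch of how this kind of weighted allocation might work. The objective names, weights, and allocation scheme below are invented for illustration; they're not any real program's blueprint.

```python
# A hypothetical blueprint: the top 20 objectives that survived the
# cut, each with a priority weight. Names and weights are made up.
EXAM_LENGTH = 60

weights = [10, 10, 9, 9, 8, 7, 7, 6, 6, 5,
           5, 4, 4, 3, 3, 2, 2, 2, 1, 1]
objectives = [(f"OBJ-{i:02d}", w) for i, w in enumerate(weights, start=1)]

def allocate_questions(objectives, total):
    """Split `total` questions across objectives in proportion to their
    weights, guaranteeing every objective at least one question."""
    total_weight = sum(w for _, w in objectives)
    counts = {name: 1 for name, _ in objectives}  # floor of one each
    remaining = total - len(objectives)
    # Hand out the remainder proportionally, rounding down...
    for name, w in objectives:
        counts[name] += int(w / total_weight * remaining)
    # ...then give any rounding leftovers to the highest-weight objectives.
    leftover = total - sum(counts.values())
    for name, _ in sorted(objectives, key=lambda o: -o[1])[:leftover]:
        counts[name] += 1
    return counts

counts = allocate_questions(objectives, EXAM_LENGTH)
for name, n in counts.items():
    print(name, n)
print("total:", sum(counts.values()))  # exactly 60
```

With these made-up weights, the top five objectives land at five questions apiece while the bottom of the list gets one or two, and the total stays pinned at exactly 60. The exact method matters less than the point: the exam only samples the full skills list.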

In the end, you have a certification that provides an objective standard to compare against, but it's far from a complete picture of someone's knowledge and capabilities.

Preparing for the Tests

Most people are lazy. When someone asks for help with passing the exam, they almost never want to know which skills and variations they should practice to improve. They want to know the questions ahead of time so they can study the answers.
