We’ll start this article out by stating the obvious: this pair of articles is going to be critical of our industry. This isn’t because the industry hasn’t been improving significantly over the past few decades, or because it doesn’t continue to improve. It has and does. It’s not because the people who work in the industry are screwed up (some are, of course, just like in any industry, but those are individual issues, not systemic ones). Neither is it because there’s some lack of effort or desire to do a good job. In fact, in the aggregate, trainers in this industry probably have more passion for what they do than just about any other professional group. Even most of the “screwed up” folks would fit into this category—they are passionately wrong, if you will.
Instead, the systemic failures we see as most damaging to our industry (and that are the root causes of many manifested operational performance issues) stem from some very real and valid constraints. These include a lack of resource availability (let’s face it, if everybody had access to SEAL Team Six’s facilities, training time, ammunition, ranges, simulators, and other resources, operational performance across the industry would look markedly different); a lack of knowledge about how to practically apply scientific research that is, in some cases, less than a decade old; and often conflicting priorities (or at least perceived conflicting priorities) between operational and administrative functions within many of the institutions that require applied firearms and tactical skills.
So, yeah, the industry has been and keeps moving forward. This is a good thing. However, you can also still talk to any trainer or serious shooter (work, competition, defense) anywhere and they are going to give you a laundry list of issues and challenges that they face on a daily basis.
While these issues are varied, in at least the defensive and armed professional applications they mostly boil down to two fundamental problems. Or, as we will frame it here, the two ways we fail—systemically—in this industry. And the problem is, no matter how much effort we put into making things better with gear, tactics, techniques, etc., we will always be tilting at windmills and behind the power curve on the street if we don’t address these two fundamental, structural problems that exist within the industry.
We fail in how we deliver training, and we fail in how we measure success.
In this article, we’ll look at the first one.
Failure 1: How we deliver
If you’ve followed our articles for a while, or read our book, Building Shooters, you know this is our flagship issue. While there certainly are some bad techniques and methods out there (more on this in the next article), at a systemic level it’s not so much what we train people to do that’s a problem.
We’ve been at war for over 15 years now. Between the two (or more, depending on how you count) fronts overseas and the domestic requirements of modern law enforcement, there’s been a lot of gunfighting in recent decades. This has given us a pretty good idea of what works and what doesn’t, at least if you care to put in the effort to find out from the right people. There’s more than one way to do most things, and there’s always going to be room for improvement, but the what isn’t what concerns us most at a systemic level.
The how is.
Between our book and many of our prior articles about topics such as progressive interference (article), different training structures (article), and training foundations (article), we have written fairly extensively about the specifics of this issue. While there are also some common training myths out there that don’t help things, such as the idea of different learning styles (article), telling students to “find what’s comfortable” (article) or “find what works” (article), and working off the concept of “train to standard, not to time” (article) (more on this in Part 2), the real failure in training delivery isn’t centered on instructor interactions. Rather, it’s rooted in the structural design of the training systems we use.
In a nutshell, our current approaches are neither effective nor efficient. They will eventually get students there, given enough time and effort; however, they are not a direct route to proficiency.
In fact, a lot of what we currently do in training actually places barriers to high-level performance in the students’ way. If they are to become truly proficient, students must usually compensate on their own time and their own dime for the (unintentional) damage that was done to them via the institutionally provided training structure.
This is especially true in institutional settings, where organizations ask their employees to perform dangerous tasks in which these skillsets are required not only for the safety of the employee, but also for the safety of the general public. In these settings, the training structures that are used quite simply need to change.
Consider the following:
A recent article by police trainer Louis Hayes claims that much of what we currently do in training is too focused on discrete, physical skill performance. We agree, and expect most folks in the industry would accept that this contention is accurate. If not, we suggest that its truth will be borne out by any simple review of the content of institutional qualifications and performance standards. Yet a recent study conducted by the Force Science Institute indicates that, despite this (perhaps mismatched) focus in our training systems, trained and qualified law enforcement officers show very little fundamental physical skill performance advantage over civilians with no training at all, at least at common gunfight ranges.
This raises the question: what the heck is going on?
We suggest that if you take the time to neurologically model what our current training systems actually do in the students’ brains, the answer to this question becomes blatantly obvious.
(Our book Building Shooters explains and demonstrates a tool for doing this.)
What’s going on is that we aren’t actually giving people the skills they need to do their job during training. And, in many cases, we’re actually impeding their ability to develop those skills effectively.
Institutions need to stop designing training around administrative convenience, artificial performance standards (more on this in the next article) and resource limitations (perceived or otherwise) and, instead, start designing their programs to achieve specific operational outcomes through the intentional, informed development of the corresponding neurological networks. In other words, we need to design training to facilitate performance of the job—at a neurological level. Then, we need to deliver this training to the students in the same way that their brains learn.
There’s good news and bad news here.
Bad news first. The changes required are not simple and easy. You can’t add 2.5 training hours annually, hire a consultant to toss a few PowerPoint presentations on the intraweb, and check off a block on an Excel spreadsheet somewhere to solve this problem. What is required is a fundamental change in training system infrastructure and method, starting at the root level.
Now for the good news. Not only will making this change produce better operational performance (not just skills performance either—better decision-making and better operational outcomes too), it will also cost less.
Yes, less. Fewer training hours, less resource dependence, better outcomes.
This admittedly sounds a bit like we’re selling a cure-all elixir from the back of a wagon, but the reason is actually pretty straightforward: what we do now is so inefficient that, once you see it, it almost hurts to watch.
Simply re-aligning training methods to match how the brain works, and understanding at a fundamental level what our training methods are going to do to the students, has the potential to eliminate a lot of wasted time, wasted effort, and wasted allocation of resources.
It’s as simple as this: if we stop wasting most of our time and resources, we can make what we have stretch a lot further, and just maybe even do a much better job with less than we use right now.
There are a lot of indicators out there, our own research included, that this can, in fact, be done. Certainly the indicators are there that, when it comes to the aggregate skillsets out on the street, something needs to improve, and soon.
Our question is: as an industry, what are we waiting for?
In the next article we’ll shift gears a bit and look at the second fundamental failure: how we measure.