Happy 2018 to all my readers! Thanks for your patience while I took an extended holiday break. A minor surgery and the flu had sidelined me for a bit, but I’m happy to be back.
This morning, FEMA issued NIMS Alert 01-18: National Engagement for Draft NIMS Implementation Objectives. NIMS Implementation Objectives were last released in 2009, covering FY 2009 through FY 2017. With the release of the updated NIMS last year, FEMA is updating the implementation objectives and has established a national engagement period for their review.
So first, a bit of commentary on this document…
The new objectives are broken out by the major content areas of the updated NIMS document, including Resource Management, Command and Coordination, and Communication and Information Management, as well as a General category covering matters more related to the management and administration of the NIMS program. What we also see with these updated objectives are implementation indicators, which are intended to help ground each objective. Overall, the number of objectives in this update has been cut in half from the 2009 version, from 28 objectives to 14.
All in all, these objectives appear to be consistent with the current state of NIMS implementation across the nation. They are certainly suitable for most matters in regard to the oversight of implementing NIMS and its various components. The biggest sticking point for me is that this document is intended for use by states, tribal governments, and territories. If the goal is to have a cohesive national approach to implementation, I’d like to know what the implementation objectives are for FEMA/DHS and how they complement those included in this document.
Objectives 8 through 11 are really the crux of this document. They are intended to examine the application of NIMS in an incident. These objectives and their corresponding indicators (which are largely shared among these objectives) are the measure by which success will ultimately be determined. While it’s a good start for these to exist, jurisdictions must be more open to criticism of their implementations of NIMS and ICS. In addition, there should be an improved mechanism for assessing the application of NIMS and ICS. While formal evaluations occur for exercises under the HSEEP model, we tend to see inconsistent application of the feedback and improvement activities needed to correct deficiencies. Proper evaluations of incidents, especially at the local level, are often not performed, or not performed well. For those that are, the same problem of unapplied feedback and improvement often stands.
Extending this discussion into reality…
The reality is that many responders are still getting it wrong. Last year my company conducted and evaluated dozens of exercises. Rarely did we see consistently good performance as far as NIMS and ICS are concerned. There are several links in this chain that have to hold firm. Here’s how I view it:
First, the right people need to be identified for key roles. Not everyone is suited for a job in public safety or emergency management in the broadest sense. Organizations must not set up individuals, and the organization itself, for failure by putting the wrong person in a job. If a certain job is expected to have an emergency response role, there must be additional qualifications and expectations to be met. Further, if someone is expected to take on a leadership role in an ICS-modeled organization during an incident, there are additional expectations still.
Next, quality training is needed. I wrote a couple of years ago about how ICS Training Sucks. It still does. Nothing has changed. We can’t expect people to perform if they have been poorly trained. That training extends from the classroom into implementation, so we can’t expect someone to perform to standard immediately following a training course. There is simply too much going on during a disaster for a newbie to process. People need to be mentored. Yes, there is a formal system for qualification and certification in ICS, but it is geared toward formal incident management teams, something most local jurisdictions aren’t able to put together.
Related to this last point, I think we need a new brand of exercise: one that is more instructional, where trainees are mentored and provided immediate, relevant feedback instead of having to wait for an AAR, which likely won’t provide them with feedback at the individual level anyway. The exercise methodology we usually see applied calls for players to do their thing, right, wrong, or otherwise, then read about it weeks later in an AAR. There isn’t much learning that takes place. In fact, when players are allowed to do something incorrectly and aren’t corrected on the spot, the incorrect behavior is reinforced, not just for that individual, but also for others, especially given how interrelated the roles and responsibilities within an ICS organization are.
While I’m all for allowing performers to discover their own mistakes, and I certainly recognize that there are multiple ways to skin the proverbial cat (no animals were harmed in the writing of this blog), that approach works best at a higher level of proficiency. Many of the people I see implementing portions of ICS simply aren’t there yet. They don’t have the experience to help them recognize when something is wrong.
As I’ve said before, this isn’t a school yard game of kickball. Lives are at stake. We can do better. We MUST do better.
As always, thoughts are certainly appreciated.
© 2018 – Timothy Riecker, CEDP