NIMS Alert: NQS Qualifications and Task Books for Recovery, Mitigation, and Incident Evaluation

The National Integration Center (NIC) has been busy developing more National Qualification System (NQS) tools for incident management.  Here are the titles for the latest release open to public comment:

  • Damage Assessment Coordinator
  • HM Community Education and Outreach Specialist
  • HM Community Planner Specialist
  • HM Engineering and Architect Specialist
  • HM Floodplain Management Specialist
  • EHP Environmental Specialist
  • EHP Historic Preservation Specialist
  • Incident/Exercise Evaluator
  • Public Assistance
  • State Disaster Recovery Coordinator

There may be some incident management and response purists out there wondering why they should care about these particular titles.  I’ll agree that most of them aren’t used in a life-saving response capacity, but these are the people you want to have backing you up – otherwise you may never get away from the incident and you will find yourself in a very foreign land where complex requirements from FEMA and other federal agencies are the rules of play.

Having worked disaster recovery for some massive incidents, such as Hurricane Sandy, I can personally attest to the value so many of these people bring to the table.  It’s great to see qualification standards being established for them, just as they are for core incident management team personnel and resources.  Since my experience with most of these roles is ancillary, however, I’ll leave specific commentary on them to the functional experts.

There is one role in here that I’m particularly pleased to see and will comment on, and that’s the Incident/Exercise Evaluator.  I wrote last year on this topic specifically and have reflected on its importance in other posts.  I see the inclusion of an Incident Evaluator in the NQS as being a huge success and the beginning of a conscious and deliberate shift toward evaluation and improvement in what we do.  Looking at the resource typing definition, I’m pretty pleased with what the NIC has put together.

What I like… I appreciate that they include a note indicating that personnel may need additional training based upon the nature or specialization of the incident or exercise.  They include a decent foundation of NIMS/ICS, exercise, and fundamental emergency management training across the various position types (although most of these are FEMA Independent Study courses, which I think are great for introductory and supplemental material but shouldn’t be the only exposure personnel have), including a requirement to complete Homeland Security Exercise and Evaluation Program (HSEEP) training for a Type 1.

What I feel needs to be improved…  Considering that the Type 1 Incident/Exercise Evaluator is expected to lead the evaluation effort, I’d like to see more than just HSEEP training being the primary discerning factor.  Just because someone has completed HSEEP doesn’t mean they can plan a project, lead a team, or extrapolate HSEEP exercise evaluation practices to be effective for incident evaluation.  I suggest HSEEP should be the requirement for the Type 2 position (which would correlate well with the position description), with additional training on project management and leadership supporting the Type 1 position.  While the note regarding the potential need for additional training is included, there is nothing in this about operational experience, which I think is rather important.  Lastly, this seems to identify a need for a course and/or guidance specific to incident evaluation, which can and should use the principles of HSEEP as its foundation, but identify the differences, best practices, and approaches to applying them to an incident or event.

I’d love to hear your thoughts on incident evaluation as well as the other positions being identified in the NQS. Do you participate in the national engagements and provide feedback?

© 2018 – Timothy Riecker, CEDP

Emergency Preparedness Solutions, LLC™

An Updated Comprehensive Preparedness Guide 201 (THIRA/SPR)

In late May, FEMA/DHS released an updated version of Comprehensive Preparedness Guide (CPG) 201.  For those not familiar, CPG 201 is designed to guide communities and organizations through the Threat and Hazard Identification and Risk Assessment (THIRA) process.  This is the third edition of a document originally released in April 2012, and it integrates the Stakeholder Preparedness Review (SPR) into the process.  Note that ‘SPR’ has commonly been an acronym for the State Preparedness Report, which is also associated with the THIRA.  The goal of the Stakeholder Preparedness Review appears to be fundamentally similar to that of the State Preparedness Report, with which some of you may be familiar.


First, a few noted changes in the THIRA portion of CPG 201.  FEMA now recommends that communities complete the THIRA every three years instead of annually.  Given the complexity and depth of a properly executed THIRA, this makes much more sense and I fully applaud the change.  Over the past several years many jurisdictions have watered down the process because it was so time consuming, with many completed THIRAs being more an update of the previous year’s than a new, independent assessment.  While it’s always good to reflect on progress relative to the previous year, it’s human nature to get stuck in the box created by your reference material, so I think the annual assessment also stagnated progress in many areas.

The other big change to the THIRA process is the elimination of the fourth step (Apply Results).  Along with some other streamlining of activities within the THIRA process, the application of results has been extended into the SPR process.  The goal of the SPR is to assess the community’s capability levels against the capability targets identified in the THIRA.  Despite the THIRA being changed to a three-year cycle, CPG 201 states that the SPR should be conducted annually.  Since capabilities are more prone to change (often through deliberate activities of communities), this absolutely makes sense.  The SPR process centers on three main activities, all informed by the THIRA:

  1. Assess Capabilities
  2. Identify and Address Gaps
  3. Describe Impacts and Funding Sources

The assessment of capabilities is intended to be a legacy function, with the first assessment establishing a baseline that is then continually reflected on in subsequent years.  The capability assessment contributes to a community’s needs identification, which is then further analyzed for the impacts of changes in capability and for funding sources to sustain or improve capabilities, as needed.

An aspect of this new document which I’m excited about is that the POETE (Planning, Organization, Equipment, Training, and Exercises) analysis is finally firmly established in doctrine.  If you aren’t familiar with the POETE analysis, you can find a few articles I’ve written on it here.  POETE is reflected on several times in the SPR process.
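
To make that concrete, here’s a minimal sketch of how a jurisdiction might structure an annual SPR-style capability assessment against its THIRA targets, with a rating for each POETE element.  This is purely illustrative on my part: the capability name is a real Core Capability, but the rating scale, target statement, and data structure are my own assumptions, not anything prescribed by CPG 201 or the Universal Reporting Tool.

```python
from dataclasses import dataclass, field
from enum import IntEnum
from typing import Dict, List

# Hypothetical rating scale; CPG 201 and the Universal Reporting Tool define their own.
class Rating(IntEnum):
    NONE = 0
    PARTIAL = 1
    ADEQUATE = 2

@dataclass
class CapabilityAssessment:
    """One core capability assessed in an annual SPR against a THIRA capability target."""
    capability: str         # e.g., "Operational Coordination"
    capability_target: str  # target statement carried over from the THIRA
    poete: Dict[str, Rating] = field(default_factory=dict)  # one rating per POETE element

    def gaps(self) -> List[str]:
        """Return the POETE elements that fall short of an adequate rating."""
        return [element for element, rating in self.poete.items()
                if rating < Rating.ADEQUATE]

# Example entry: the gaps feed the 'Identify and Address Gaps' and
# 'Describe Impacts and Funding Sources' activities in the following year.
entry = CapabilityAssessment(
    capability="Operational Coordination",
    capability_target="Establish a unified command within two hours of a no-notice incident",
    poete={
        "Planning": Rating.ADEQUATE,
        "Organization": Rating.PARTIAL,
        "Equipment": Rating.ADEQUATE,
        "Training": Rating.PARTIAL,
        "Exercises": Rating.NONE,
    },
)
print(entry.gaps())  # ['Organization', 'Training', 'Exercises']
```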

So who should be doing this?   The document references all the usual suspects: state, local, tribal, territorial, and UASI jurisdictions.  I think it’s great that everyone is being encouraged to do this, but we also need to identify who must do it.  Traditionally, the state preparedness report was required of states, territories, and UASIs as the initial recipients of Homeland Security Grant Program (HSGP) sub-grants.  In 2018, recipients of Tribal Homeland Security Grant Program funds will be required to complete this as well.  While other jurisdictions seem to be encouraged to use the processes of CPG 201, they aren’t being empowered to do so.

Here lies my biggest criticism… As stated earlier, the THIRA and SPR processes are quite in-depth, and the guidance provided in CPG 201 is supported by an assessment tool designed by FEMA for these purposes.  The CPG 201 website unfortunately does not include the tool, nor does CPG 201 itself even make direct reference to it.  There are vague indirect references, seeming to indicate what kind of data can be used in certain steps, but never actually stating that a tool is available.  The tool, called the Universal Reporting Tool, provides structure to the great deal of information being collected and analyzed through these processes.  Refined over the past several years as the THIRA/SPR process has evolved, the Universal Reporting Tool is a great way to complete this work.  As part of the State Preparedness Report, the completed tool was submitted to the FEMA regional office, which would provide feedback and submit it to HQ to contribute to the National Preparedness Report.  But what of the jurisdictions that are not required to do this and wish to do it of their own accord?  It doesn’t seem to be discouraged, as jurisdictions can request a copy from FEMA-SPR@fema.dhs.gov, but as a best practice, and as a companion to CPG 201, the tool should be directly available on the FEMA website.  That said, if the THIRA/SPR is being conducted by a jurisdiction not required to do so, the tool would not be required – although it would help.

Overall, I’m very happy with this evolution of CPG 201.  It’s clear that FEMA is paying attention to feedback received on the process to streamline it as best they can, while maximizing the utility of the data derived from the analysis.  A completed THIRA/SPR is an excellent foundation for planning and grant funding requests, and can inform training needs assessments and exercise program management (it should be used as a direct reference in the development of a Training and Exercise Plan).

For those interested, EPS’ personnel have experience conducting the THIRA/SPR process in past years for a variety of jurisdictions and would be happy to assist yours with this updated process.  Head to the link below for more information!

© 2018 – Timothy Riecker, CEDP

Emergency Preparedness Solutions, LLC ™

FEMA’s 2018-2022 Strategic Plan: The Good, the Bad, and the Ignored

FEMA recently released their 2018-2022 Strategic Plan.  While organizational strategic plans are generally internal documents, the strategic plans of certain organizations, such as FEMA, have a significant link to a broader array of stakeholders.  The emergency management community in the United States is so closely connected that FEMA, through policy, funding, and practice, has a heavy influence on emergency management at the state and local levels.  Here are my impressions of the 38-page document.


Right from the beginning, this document continues to reinforce the system of emergency management and the involvement of the whole community.  I’m glad these concepts have been carried forward from earlier administrations.  Far too often we have seen new administrations trash the concepts of the previous one for no reason other than politics.  Things often take time in emergency management, and it sometimes seems that just as we are getting a grasp on a good concept or program, it’s stripped away in favor of something new which has yet to be proven.

The foreword of the document, as expected, lays out the overall focus of the strategic plan.  What I’m really turned off by here is the mention, not once but twice, of ‘professionalizing’ emergency management.  Use of this phrase is an unfortunate trend and a continued disappointment.  We are our own worst enemy when statements like this are made.  It seems that some in emergency management lack confidence in our profession.  While I’m certainly critical of certain aspects of it, there is no doubt in my mind that emergency management is a profession.  I wish people like Administrator Long would stop doubting that.  Unfortunately, I recently heard him interviewed on an emergency management podcast where he stressed the same point.  It’s getting old and is honestly insulting to those of us who have been engaged in it as a career.

The strategic goals put forward in this plan make sense.

  1. Build a culture of preparedness
  2. Ready the nation for catastrophic disasters
  3. Reduce the complexity of FEMA

These are all attainable goals that belong in this strategic plan.  They stand to benefit FEMA as an organization, emergency management as a whole, and the nation.  The objectives within these goals make sense and address gaps we continue to deal with across the profession.

A quote on page 8 really stands out: ‘The most effective strategies for emergency management are those that are Federally supported, state managed, and locally executed.’  With the system of emergency management in the US and the structure of federalism, this statement makes a lot of sense and I like it.

Based on objective 1.2 – closing the insurance gap – FEMA is standing behind the National Flood Insurance Program.  It’s an important program, to be sure, but it needs to be better managed, better promoted, and possibly restructured.  There is a big red flag planted in this program and it needs some serious attention before it collapses.

Here’s the big one… It’s no secret that morale at FEMA has been a big issue for years.  The third strategic goal includes an objective that relates to employee morale, but unfortunately employee morale itself is not an objective.  Here’s where I think the strategic plan misses the mark.  While several objectives directly reference improving systems and processes at FEMA, none really focus on the employees.  Most mentions of employees in the document really reference them as tools, not as people.  Dancing around this issue is not going to get it resolved.  I’m disappointed for my friends and colleagues at FEMA.  While I applaud the strategic plan for realizing the scope of external stakeholders it influences, they seem to have forgotten their most important ones – their employees.  This is pretty dissatisfying and, ultimately, is an indicator of how poorly this strategic plan will perform, since it’s the employees that are counted on to support every one of these initiatives.  You can make all the policy you want, but if you don’t have a motivated and satisfied work force, change will be elusive.

Overall, I’d give this strategic plan a C.  While it addresses some important goals and objectives and recognizes pertinent performance measures, it still seems to lack a lot of substance.  External stakeholders are pandered to while internal stakeholders don’t seem to get much attention.  While, as mentioned earlier, FEMA has a lot of influence across all of emergency management, they need to be functioning well internally if they are to be successful externally.  Employee morale is a big issue that’s not going to go away, and it seems to be largely ignored in this document.  I absolutely want FEMA to be successful, but it looks like leadership lacks the proper focus and perspective.

What thoughts do you have on FEMA’s new strategic plan?

© 2018 – Timothy Riecker, CEDP

Emergency Preparedness Solutions, LLC SM

HSEEP Training – Is it Required?

Continuing from my previous blog post, I’ll answer a search phrase used to bring someone to my blog.  Earlier this month, someone searched ‘Is HSEEP training mandatory?’.  We speak, of course, of the Homeland Security Exercise and Evaluation Program, which is the DHS-established standard in exercise program and project management.

The short answer to the question: Maybe.

Generally speaking, if your exercise activities are funded directly or indirectly by a federal preparedness grant, then grant language usually requires that all exercises be conducted in accordance with HSEEP.  While most federal grant guidance doesn’t explicitly state that exercise personnel must be formally trained in HSEEP, it’s kind of a no-brainer that taking an HSEEP course is the fundamental way to learn the standards of practice so you can apply them and meet the funding requirement.  If you are a jurisdiction awarded a sub-grant of a federal preparedness grant, or a firm awarded a contract, there may be language in your agreement, placed there by the principal grantee, that specifically requires personnel to be trained in HSEEP.

Beyond grant requirements, who you work for, who you are, and what you do generally don’t dictate any requirement for HSEEP training.  Aside from the federal grant funding or contracts mentioned, there is no common external requirement for any organization to have their personnel trained in HSEEP.  If your organization does require it, this is likely through a management-level decision for the organization or a functional part of it.

So, while HSEEP is a standard of practice, training in HSEEP, in general terms, is not a universal requirement.  That said, I would certainly recommend it if you are at all involved in the management, design, conduct, or evaluation of exercises.  FEMA’s Emergency Management Institute (EMI) offers HSEEP courses in both a blended learning and classroom format.  The emergency management/homeland security offices of many states and some larger cities offer them as well.

© 2018 Timothy Riecker, CEDP

Emergency Preparedness Solutions, LLC SM

NIMS Implementation Objectives and A Shot of Reality

Happy 2018 to all my readers!  Thanks for your patience while I took an extended holiday break.  A minor surgery and the flu had sidelined me for a bit, but I’m happy to be back.

This morning, FEMA issued NIMS Alert 01-18: National Engagement for Draft NIMS Implementation Objectives.  NIMS Implementation Objectives were last released in 2009, covering a period of FY 2009-FY2017.  With the release of the updated NIMS last year, FEMA is updating the implementation objectives and has established a national engagement period for their review.

So first, a bit of commentary on this document…

The new objectives are broken out by the major content areas of the updated NIMS document, including Resource Management, Command and Coordination, and Communication and Information Management, as well as a General category to cover issues more related to the management and administration of the NIMS program.  What we also see with these updated objectives are implementation indicators, which are intended to help ground each objective.  Overall, the number of objectives in this update has been cut in half from the 2009 version (from 28 objectives to 14).

All in all, these objectives appear to be consistent with the current state of NIMS implementation across the nation.  They are certainly suitable for most matters in regard to the oversight of implementing NIMS and its various components.  The biggest sticking point for me is that this document is intended for use by states, tribal governments, and territories.  If the goal is to have a cohesive national approach to implementation, I’d like to know what the implementation objectives are for FEMA/DHS and how they complement those included in this document.

Objectives 8 through 11 are really the crux of this document.  They are intended to examine the application of NIMS in an incident.  These objectives and their corresponding indicators (which are largely shared among these objectives) are the measure by which success will ultimately be determined.  While it’s a good start for these to exist, jurisdictions must be more open to criticism of their implementations of NIMS and ICS.  In addition, there should be an improved mechanism for assessing the application of NIMS and ICS.  While formal evaluations occur for exercises under the HSEEP model, we tend to see inconsistent application of the feedback and improvement activities needed to correct deficiencies.  Proper evaluations of incidents, especially at the local level, are often not performed, or not performed well.  For those that are, the same issue of feedback and improvement often stands.

Extending this discussion into reality…

The reality is that many responders are still getting it wrong.  Last year my company conducted and evaluated dozens of exercises.  Rarely did we see consistently good performance as far as NIMS and ICS are concerned.  There are several links in this chain that have to hold firm.  Here’s how I view it:

First, the right people need to be identified for key roles.  Not everyone is suited for a job in public safety or emergency management in the broadest sense.  Organizations shouldn’t set up individuals, and the organization itself, for failure by putting the wrong person in a job.  If a job is expected to include an emergency response role, there are additional qualifications and expectations that must be met.  Further, if someone is expected to take on a leadership role in an ICS-modeled organization during an incident, there are additional expectations.

Next, quality training is needed.  I wrote a couple years ago about how ICS Training Sucks.  It still does.  Nothing has changed.  We can’t expect people to perform if they have been poorly trained.  That training extends from the classroom into implementation, so we can’t expect someone to perform to standards immediately following a training course.  There is simply too much going on during a disaster for a newbie to process.  People need to be mentored.  Yes, there is a formal system for Qualification and Certification in ICS, but this is for proper incident management teams, something most local jurisdictions aren’t able to put together.

Related to this last point, I think we need a new brand of exercise: one that is more instructional, where trainees are mentored and provided immediate and relevant feedback instead of having to wait for an AAR which likely won’t provide them with feedback at the individual level anyway.  The exercise methodology we usually see applied calls for players to do their thing: right, wrong, or otherwise; then read about it weeks later in an AAR.  There isn’t much learning that takes place.  In fact, when players are allowed to do something incorrectly and aren’t corrected on the spot, this is a form of negative reinforcement – not just for that individual, but also for others, especially with how interrelated the roles and responsibilities within an ICS organization are.

While I’m all for allowing performers to discover their own mistakes and I certainly recognize that there exist multiple ways to skin the proverbial cat (no animals were harmed in the writing of this blog), this is really done best at a higher taxonomy level.  Many people I see implementing portions of ICS simply aren’t there yet.  They don’t have the experience to help them recognize when something is wrong.

As I’ve said before, this isn’t a school yard game of kickball.  Lives are at stake.  We can do better.  We MUST do better.

As always, thoughts are certainly appreciated.

© 2018 – Timothy Riecker, CEDP

Emergency Preparedness Solutions, LLC SM

Project Responder and DHS’ Inability to Follow Standards

I was recently made aware of Project Responder, a publication sponsored by the DHS Science and Technology Directorate, which examines emergency response capability needs within the scope of current operational requirements, threats, and hazards, with an ultimate focus on identifying needs and correlating them with technological fixes.  The project description states that ‘the findings from the project can inform the US Department of Homeland Security’s decisions about investments in projects and programs to promote capability enhancement…’.  Project Responder 5 was published in August of this year.  Prior to this edition, I had not been familiar with the project, which started in early 2001.

The executive summary of the document states that ‘the document describes 37 capability needs identified by emergency responders…’ <record scratch>.  Hold on a moment… I thought DHS defined 32 Core Capabilities.  Yep, they still do.  The first page of Project Responder 5 includes a footnote that states ‘For purposes of this document, a capability is defined as “the means to accomplish one or more tasks under specific conditions”’.  So, in other words, DHS can’t follow its own standards.  In many of my articles I’ve remarked about the continual need to streamline our emergency management processes so we can make easier comparisons between processes, efforts, and activities without having to establish crosswalks or translations.  By working from the same standards, we can easily move between mission areas, which don’t always have boldly marked lines between them, and have an ability to define target results and measure progress.  The Core Capabilities established by the National Preparedness Goal go a long way toward accomplishing this standardization.  It seems the folks in the Science and Technology Directorate don’t think they are that important, and this infuriates me.

The document outlines the 37 capability needs within nine capability domains.  These are:

  • Risk Assessment and Planning
  • Communication and Information Sharing
  • Command, Control, and Coordination
  • Training and Exercise
  • Responder Health and Safety
  • Intelligence and Investigation
  • Logistics and Resource Management
  • Casualty Management
  • Situational Awareness

Some of these appear to correlate directly with some of what we know as the 32 Core Capabilities, while others seem to combine, redefine, or create new ones.  As the gaps within each domain are discussed, applicable standards are referenced.  Interestingly enough, certain standards you would expect to see aren’t present, such as NIMS in the Command, Control, and Coordination domain and HSEEP in the Training and Exercise domain.  Regardless of what technology applications are used to support these areas, these standards are fundamental.
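
To illustrate the kind of crosswalk I keep referring to, here’s a rough sketch of how a few of the nine domains might map to Core Capabilities from the National Preparedness Goal.  The Core Capability names are real, but the mapping itself is my own quick reading, not anything published by DHS or the Project Responder team.

```python
# A hypothetical crosswalk from Project Responder 5 capability domains to
# National Preparedness Goal Core Capabilities.  The mapping is my own rough
# interpretation, offered only to show what a shared lexicon buys us.
domain_to_core_capabilities = {
    "Risk Assessment and Planning": ["Planning", "Risk and Disaster Resilience Assessment"],
    "Communication and Information Sharing": ["Operational Communications",
                                              "Intelligence and Information Sharing"],
    "Command, Control, and Coordination": ["Operational Coordination"],
    "Logistics and Resource Management": ["Logistics and Supply Chain Management"],
    "Situational Awareness": ["Situational Assessment"],
}

# With a common standard, comparing documents becomes a simple lookup
# rather than a translation exercise.
for domain, capabilities in domain_to_core_capabilities.items():
    print(f"{domain} -> {', '.join(capabilities)}")
```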

It’s not that the data and analysis that comes out of Project Responder is entirely bad.  It isn’t.  But it’s not great either.  It seems to fall short consistently throughout the document.  The information also needs to be organized within the current lexicon, allowing the reader to make direct correlations to what we are familiar with.  I’m guessing that the project team who did the research and pulled the document together actually knows very little about emergency management or homeland security.  Their inability to communicate context and work within established standards seems to demonstrate this.  It’s fine that the document has a focus on technology implementations that can address gaps, but the fundamentals within the field of practice can’t be ignored.  I don’t see why this project could not have been conducted within the established industry standards.

Perhaps I’ve given a more soap-boxish post than I usually do.  I’m frustrated to see so much wasted time, effort, and money on something that could have been more impactful.  Please take a look through the document and let me know what your impressions are.  Also, if you happen to have any insight on this publication that I have missed or am not aware of, I’d love to hear it.

Thanks for reading and be safe this holiday season.

© 2017 – Timothy Riecker, CEDP

Emergency Preparedness Solutions, LLC