Overuse of Severe Thunderstorm Warnings

By: Joe Lauria

As I’ve talked about in previous blogs, my feeling is that the public…my customers…don’t pay attention to the vast majority of severe thunderstorm warnings (SVRs). This is NOT a criticism of the National Weather Service (NWS) for issuing the warnings; they have a mandate to follow. Rather, this is an issue with the criteria themselves and with what typically happens after the storms move through.

This project is an outgrowth of our conversations within the Kansas City Integrated Warning Team (IWT) when I brought up this subject in late 2015.

As we get started, I wanted to tally up just how many warnings have been issued since the hail criterion was raised from 3/4 inch to 1 inch in diameter. Granted, the NWS offices in Missouri and Kansas had been using the 1-inch criterion for a couple of years prior, but I tallied from when Central Region adopted the standard in 2009.



I also wanted to see how many were issued at other offices, in Southern and Western Regions as well. I picked a sampling of weather-active county warning areas (CWAs). These offices adopted the 1″ hail criterion one year later than Central Region.


I call your attention to the Norman NWS office.



Notice, in the Norman breakdown above, that about 400 of some 900 total warnings last year were issued on the minimum criteria alone. That is almost three warnings per day overall, and almost one per day based on the minimum criteria. Granted, their coverage is all of central and western Oklahoma, a lot of territory, but I have a tough time thinking folks paid attention to many of those 900 warnings in 2016.

I wrote a blog last March with these initial findings. I wanted to know what my viewers/readers thought of SVR issuance at the time. It should be noted that the majority of my blog readers are a bit more weather “savvy” than a typical viewer. I asked these questions at the beginning of the blog.


Obviously I’m interested in the “deeper dive” of the second question; hence what follows. I went through the last five years of SVRs issued by the NWS in Pleasant Hill and checked the Impact-Based Warning (IBW) tag information for the initial reason each warning was issued.


In particular, I wanted to see how many warnings were triggered by 60 mph winds and/or 1″ hail. Those are represented by the orange bars above; the red bars are the total warnings issued. The percentage values show the share triggered at 60 mph and/or 1″. So in 2016, roughly 55 percent of ALL warnings issued were for the minimum criteria needed to issue a warning.
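For readers curious how a tally like this works, here is a minimal sketch in Python. The record format and the values in it are hypothetical placeholders for illustration, not the actual NWS/IBW data format:

```python
# A minimal sketch of the tally described above, assuming a simplified
# record format: each warning carries its initial IBW wind and hail tags.
# The records below are made-up placeholders, not actual NWS data.

warnings_2016 = [
    {"wind_mph": 60, "hail_in": 1.00},  # minimum-criteria warning
    {"wind_mph": 70, "hail_in": 1.00},  # above-minimum wind tag
    {"wind_mph": 60, "hail_in": 1.75},  # above-minimum hail tag
    {"wind_mph": 60, "hail_in": 1.00},  # minimum-criteria warning
]

def at_minimum(w):
    """True if the warning was issued on the minimum criteria alone."""
    return w["wind_mph"] <= 60 and w["hail_in"] <= 1.00

minimum = sum(at_minimum(w) for w in warnings_2016)
share = 100 * minimum / len(warnings_2016)
print(f"{minimum} of {len(warnings_2016)} warnings ({share:.0f}%) at minimum criteria")
```

With the placeholder records above, two of the four warnings count as minimum-criteria issuances; the real analysis runs the same comparison over five years of actual IBW tags.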

I was then curious about the verification of these minimum warning issuances. In most cases, well under 50 percent of them verified. Of course, warnings are often tough to verify, especially when storms hit rural areas or strike in the middle of the night (if a tree falls in the forest, does it make a sound?).


So what would happen IF we changed the minimum criteria? How many of these warnings would drop off if we went to 70 mph wind and/or 1.5″ hail minimums? The warnings that would still have been issued are represented by the orange bars. Also note, as I mentioned at the IWT, the number of warnings initially issued on the minimum criteria that then intensified to a 70 mph and/or 1.5″ hail tag was pretty small, mostly under 10 per year.


We’re talking about only 20 to 25 percent of these warnings that would have been issued! In other words, even being a bit generous, the number of warnings would have been reduced by 60 to 70 percent. That’s pretty significant!
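The what-if filter can be sketched the same way. The 70 mph and 1.5″ thresholds come from the proposal above, while the five sample warnings are purely illustrative:

```python
# A rough what-if sketch, assuming each warning is a (wind mph, hail inches)
# pair of initial IBW tags. The five sample warnings are illustrative only.

warnings = [(60, 1.00), (60, 1.00), (70, 1.00), (60, 1.75), (80, 2.00)]

def meets_raised(wind_mph, hail_in, wind_min=70, hail_min=1.5):
    """Would this warning still qualify under the raised criteria?"""
    return wind_mph >= wind_min or hail_in >= hail_min

kept = [w for w in warnings if meets_raised(*w)]
reduction = 100 * (1 - len(kept) / len(warnings))
print(f"{len(kept)} of {len(warnings)} warnings kept; a {reduction:.0f}% reduction")
```

Running the real five years of tags through a filter like this is what produces the reduction percentages discussed here.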

My thought is that IF we didn’t issue so many SVRs, our consumers would pay more attention when one is issued. It raises a topic discussed thoroughly on thewxsocial.com: are people fatigued by the sheer volume of warnings?

After I wrote that March 2016 blog, I asked some additional poll questions and was encouraged by the middle answers especially. This isn’t a perfect process, though; there are other issues with this research.




After more than a year of research and data collection, this information was recently presented to the Kansas City Integrated Warning Team. According to a poll question, 91% of the media, emergency management, and NOAA personnel who attended (about 110 people) said that something needs to change with regard to SVRs. Joe Lauria is the weekend meteorologist at FOX 4 in Kansas City and can be reached at joe.lauria@wdaftv4.com.

4 thoughts on “Overuse of Severe Thunderstorm Warnings”

  1. Excellent blog post! This ties back into some of the other ideas that I wrote about on warning fatigue a few months ago. This also brings in some ideas from another paper that was just published surrounding nonconvective wind events and the issuance of high wind warnings (http://journals.ametsoc.org/doi/abs/10.1175/WAF-D-15-0112.1).

    My initial questions from this blog post are: does warning fatigue actually exist, or is it just something the meteorological community is concerned about? Does this buildup of previous warnings actually matter in the minds of the many publics?

    Personally, I think it comes down to a delicate balance of issuing warnings that are meaningful while also keeping them contained to be considered “a warning” to others. But to figure out the answers to these questions, we have to do more work on the human/societal aspect of warning fatigue to better understand if this is an actual phenomenon felt by the many publics. However, I’m currently working on a project to help address these questions so hopefully we will have more answers soon!



  2. Joe, great stuff! You and I have had discussions on this topic many times over the years, and it is great to see this published and somehow make it out here to Pittsburgh, PA via my WCM. Very thought provoking indeed, and I expect my thoughts will ramble on for a while below (apologies in advance). I certainly agree with you both on the merits of statistics and empirical evidence, but there are a few non-discussed issues at play here that are worthy of note: 

    1.) The 1-inch and 50-knot criteria have strong ties to the aviation community. There has been research (sorry, I don’t have any links handy) showing that quarter-sized hail doesn’t do much to vegetation, vehicles, or roofing (and this seems to jibe with our extensive first-hand experience with it out in the Plains), but the evidence does show the impacts are non-zero. More importantly, quarter-size hail has been shown to do a number on aircraft.

    2.) Wind. The six-ton elephant in the room. What percentage of warnings have been verified on MEASURED wind gusts? I know there was a study on this, and my recollection is that the number was extremely low, probably less than 5%. Then we have the estimated gust verification. It doesn’t matter if it’s the public, law enforcement, emergency managers, or a degreed meteorologist making the report. It’s extremely hard (if not bordering on impossible) to estimate wind speeds that are both rare in occurrence and bolstered by human factors (i.e. excitement level) such as the warning itself, thunder, lightning, torrential rain, sudden onset, menacing clouds, desire to have the highest wind report, etc. Every meteorologist who has independently verified “estimated gusts” with instrumentation, or has added experience via chasing severe storms and hurricanes, knows that non-measured wind gust reports are usually overestimated by somewhere between 30 and 60%! How many times have 60 or 70 mph winds been reported without any damage? This happens with some regularity, and unless you’re standing in an open field with nothing TO damage, such a claim would generally be preposterous based on the findings of repeated wind tunnel experiments. So, I would argue that true verification numbers with respect to wind damage are probably even lower than what’s on the official record. Then there’s the whole argument of trees. Tree limbs down. Great! Verification! One tree? Small limbs? 40 mph winds may have done that. You see where I’m going here.

    Then there’s the flipside to that. Here in the eastern US, we have 1000x as many trees (literally) as compared to the Great Plains and often saturated grounds given the much higher frequency of rainfall. We routinely experience significant thunderstorm wind damage (multiple whole trees uprooted) with measured winds UNDER 58 mph. So what do you do there?

    Maybe it’s time we abandon numbers altogether because there are so many other factors at play (tree density, types of trees, foliated or not foliated, solid hail, mushy hail, aggregate hail, etc.)? This is a Catch-22. As meteorologists, we’re trained to predict the timing and locations of large hail and wind gusts, and somehow provide a reasonable estimate of maximum size and strength. We’re not trained to predict impacts. Oh hello DSS! On the flip side, does Joe Public care if the hail is 1″ or 2″ in diameter? Would they even know without getting a ruler? Do they even care if the winds are 60, 70 or even 80 mph? Would they even know the difference? Maybe higher numbers lead to taking action, maybe not (social scientists??). They DO want to know the things we can only infer based on past experience: given a supposed severe thunderstorm with characteristics X, Y and Z, we expect it to produce marginal, moderate, major or extreme damage to (insert items of human significance here). You have XX minutes to take appropriate action.

    I really thought that impact-based warnings were a great step one, and that the wind/hail tag idea Andy and I brainstormed after the 5/2/08 Kansas City 2 AM bow echo was a great step two toward trying to differentiate among marginal, significant and extreme impacts of severe thunderstorms. The problem is, neither the general public nor the decision makers are really seeing that yet. The information is more or less buried for a computer decoder. Additionally, is the response at all improved from the generic teletype warnings of decades prior? Anecdotally, I’d say yes, but I don’t know what the numbers say or by what margin.

    So, I don’t think proposal #1 works, because those existing criteria do have impacts, with the caveat that the impacts are not universally applicable and likely vary well outside our field of work and skill! I don’t think proposal #2 works, because adding yet another warning product seems like the worst-fit solution.

    The other problem with #2, which is sort of addressed in your study, is that I don’t think most warning mets have the skill to differentiate between 3/4 and 1″ hail. There are very few tools out there to assist with that decision with any degree of measurable skill. Thus, I think we oftentimes see the same warning issued on similar storm structures from 10-15 years ago, but now with slightly (negligibly) raised hail criterion. So, if you bumped it again, what analysis tools are in place to further differentiate between severe and sub-severe storms? I would think that 1.5 and 70 would be easier to distinguish from 1 and 58 (as an example), but this isn’t as simple as flipping a switch in terms of training and swapping a few numbers in algorithms.

    So, how do we resolve the issue where we KNOW people are telling us (re: your Twitter polls) that they aren’t paying attention to SVRs because of the inferred reality that “nothing happens” to them most of the time (see: 80-85% poll returns for wanting higher criteria)?

    Here are my (personal only) solutions:

    1.) Remove ALL legacy warning systems that are still county-based. This would mean an overhaul of NOAA Weather Radio, as one example, to ensure that the polygons are used universally.
    2.) Reduce the over-warn problem. Here’s an interesting study. Look at the size of warning polygons for hail and/or wind with respect to the areal distribution of actual verified events. Some of these numbers are not pretty, and there’s a huge variance based on parts of the country and even among the skill/experience of the warning forecasters inside the same office. For example, you can have a great severe thunderstorm warning with 20 minutes of lead time that goes on to produce significant hail and wind damage, and yet the “cry wolf” syndrome was valid for 70% or more of the people/area inside the polygon. In some observed cases, some of the warned people never even got a thunderstorm at all, and even had sunshine the entire duration of the warning!
    3.) Restructure the warning text to put the most important information first. Am I impacted? What is/are the threat(s)? What are the impacts? A lot of work is being done in this area right now.
    4.) Continue to research, develop and investigate ways to improve warnings so that they are actionable by the vast majority of those receiving them. I’m hopeful that FACETS is going to potentially revolutionize this in the mid-to-late 2020s, but we really need many of these social science questions answered, and I think we need to be honest with ourselves about the skill level achievable with our current state of instrumentation and assessment tools.
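The over-warn measure described in point 2 above boils down to a simple area ratio. A sketch, with both area values invented for demonstration (they roughly match the 70 percent cry-wolf example):

```python
# Illustrative over-warn calculation: compare the area where severe
# weather actually verified to the full warning-polygon area.
# Both area values below are invented for demonstration.

polygon_area_sq_mi = 400.0    # total area inside the warning polygon
verified_area_sq_mi = 120.0   # area with verified severe hail or wind

over_warned_pct = 100 * (1 - verified_area_sq_mi / polygon_area_sq_mi)
print(f"{over_warned_pct:.0f}% of the warned area saw no verified severe weather")
```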

    Joe, a great discussion sparker that I hope reaches a diverse set of people who can work together toward a common solution! Hope to find myself back in KC soon enough so that we can grab dinner sometime and catch up. All the best.


  3. […] we tighten up criteria for issuance of severe thunderstorm warnings to make them less common? The Weather Social makes a case: “…So what would happen IF we changed the minimum criteria? How many of […]


  4. Good stuff! Obviously Proposal #1 would be the best fit – but if you were around when the 3/4″ – 1″ jump was developed, you’d know that would be a non-starter.

    Proposal #2 wouldn’t be a good call simply because WAS*IS is trying to reduce the number of warnings, not add a 123rd 🙂

