By Dr. Laura Myers
How does the public get their weather information, and what do they do with it when they get it? These are questions the weather warning research community has been exploring for some time. We have good data on the modalities people use to get their weather information, including smartphones, NOAA weather radios, sirens, the Internet, television, and even social media.
What the public does with that information once they get it is a more complex issue to explore. We have identified issues such as the amount of lead time needed, the need for secondary confirmation, understanding the message, conveying uncertainty, providing impacts, and other factors associated with taking the best actions possible in a weather event. These research results are being used to improve the weather warning process as we speak.
An interesting phase in this research process is trying to understand what people truly understand and when they understand it in the weather warning process. In other words, what warning information do they receive or seek? When during the weather event do they turn to that information? What does that information mean to them during the timeline of an event?
I had the opportunity recently to explore these questions from a participant observer perspective when severe storms were approaching Central Alabama on February 2nd of this year. Prior to the February 2nd event, I had begun analyzing local events from the perspective of what information was available, when it was available, and what that information conveyed. As Jane Q. Public in Central Alabama, I was in a position to determine what information was getting to me through the various modalities, what information I could seek to help me, and what that information conveyed to me. As a social scientist, I was in a position to analyze what the weather enterprise intended for me to know and act on and compare that to what Jane Q. Public could understand and use. My initial analysis results indicated that there was a tremendous amount of information provided in many different ways, which forced me to “interpret” the information. Were all of my sources saying the same thing or something different? What messages were my sources trying to convey? Was it timing, uncertainty, severity, impacts, or some combination of those things? In the weather enterprise, we call these differences inconsistency if they confuse the end users. So as the February 2nd event was approaching, I was set up to further explore the nature of this inconsistency phenomenon.
In the days leading up to February 2nd, my primary role as Jane Q. Public led me in search of the SPC outlooks, early information from my local broadcast meteorologists and private weather forecasts, and information from my regional WFOs. I monitor all the WFOs in my region for geographical information, since some of those WFOs will be dealing with the event before my closest WFO does. It also allows me to determine whether any of the WFOs are disseminating something different from the others and why it might be different. I noticed some differences in the information coming from the various sources, but I attributed those differences to the timing, target audiences, and goals of the various sources. My intent was to determine which of the sources were most relevant to me as an individual in the impact zone. And that’s where it gets confusing, because I’m not totally Jane Q. Public. I know to seek information from all of these sources, and in doing so I create confusion for myself. As my weather enterprise friends say, I know too much. The public generally does not look to all of those sources. The information from those sources is usually managed for the public through broadcast meteorologists, private weather apps, and warning modalities. So does the public get confused if they are only receiving the managed message and not looking at multiple sources like I was?
Early in the morning on February 2nd, a discussion started on Facebook among some of the weather enterprise folks about how confusing the weather information appeared to be for the people of Central Alabama. The discussion focused on the differences in the products and graphics being disseminated by various sources, including different colors, wording, and emphases. As Jane Q. Public currently residing in Central Alabama, I decided to jump into the conversation to provide some of my observations about consistency while experiencing the event and the process firsthand. That’s when I was told I know too much! The subsequent discussion was some of the most extensive and enlightening on the topic of consistency imaginable. It raised lots of questions for further exploration, and I want to share some of those questions with you.
While there appear to be differences between the SPC products (outlooks) and the products of the WFOs and broadcast meteorologists, there are many plausible reasons for them. Many of these products represent different time points in the process, so we may be comparing products that are not intended to be compared. But how does the end-user know that? Is it clear what time point these products represent? Is it clear how much uncertainty is involved in a product? That may not be so easy to interpret. And who are the end-users of these products? Do we know who is using these products and for what purposes?
Which end-users are the products and information intended for? Many of the products and information are intended for professional and official decision-making, not individual, personal decision-making. Many of these products are used by emergency managers. They use the information to make early planning decisions. The public typically does not seek the same information, especially at the early stages of a weather event.
In fact, many of the early warning products are being consumed by television broadcasters, official decision makers, and others to be interpreted for their constituencies. The WFOs are also localizing and contextualizing the information for their CWAs and their particular end-users. Some of the variation comes from framing the information to the needs of the particular end-users, which can differ depending on the source and the location. This process will necessarily change the appearance and emphases of the products and information being disseminated, which could lead to the perception of inconsistency. But is this really inconsistency if it is part of the process of customizing the information for varied groups of end-users? From a 50,000-foot view, it will appear to be different, but upon closer inspection, it may only be the customized variation of the original information and/or represent a different time point or location.
It may be true that different words, colors, and graphics are being used to depict the same things, leading to perceptions of inconsistency and the need to interpret at a very high level. However, the discussion in this Facebook group, as well as numerous interactions I’ve had with Integrated Warning Teams and broadcast meteorologists, reveals tremendous efforts to reconcile those differences: differences between WFOs within a region, between local television stations and the WFOs, and even with the SPC.
The most significant observation I made on February 2nd was the overwhelming desire of the weather enterprise to analyze the nature of inconsistency and how those inconsistencies might be impacting decision makers at ground zero, including me, Jane Q. Public. I’ve worked with many professions to provide research analytics to support evaluations, modifications, and improvements of their processes, but I’ve never worked with a profession like the weather enterprise that is so committed to understanding the process and working to effect the right changes.