Find all supporting materials at the Hunger Vital Sign explainer series website.
This episode features an interview with Richard Sheward, Director of Innovative Partnerships at Children's HealthWatch.
Citation for the Hunger Vital Sign tool and link to the original research:
Hager, E. R., Quigg, A. M., Black, M. M., Coleman, S. M., Heeren, T., Rose-Jacobs, R., Cook, J. T., Ettinger de Cuba, S. E., Casey, P. H., Chilton, M., Cutts, D. B., Meyers, A. F., & Frank, D. A. (2010). Development and Validity of a 2-Item Screen to Identify Families at Risk for Food Insecurity. Pediatrics, 126(1), 26-32. doi:10.1542/peds.2009-3146.
Audio Editing and Post-Production Provided By Evergreen Audio
Welcome to the fourth of our short explainers for the Hunger Vital Sign tool. This episode will explain a bit more about what we mean when we say the Hunger Vital Sign is a reliable screening tool. I’m your host, Helen Labun. And to help with the explanations, we have a guest expert from the organization that created the Hunger Vital Sign.
I'm Richard Sheward, Director of Innovative Partnerships at Children's HealthWatch.
The previous episodes covered what we mean when we say the Hunger Vital Sign is a valid screening tool, one that is predictive of both food insecurity status and poor health outcomes. I promise you that those were the most complicated episodes of the entire series; it’s smooth sailing from here. This episode looks at whether the Hunger Vital Sign is reliable.
Reliability is important in a few regards. The most obvious is that when a health care practice is looking at screening tools, it wants the results found in other places to translate to its location. That’s a combination of the tool itself and how it’s implemented. Some reliability elements were built into the initial research, for example the large sample size, testing across multiple cities and different health care environments, and providing the screener in English, Spanish, and Somali. But you can’t do everything in the first round. So another element of reliability comes from how the Hunger Vital Sign researchers set up the project to encourage additional research, for example demonstrating its efficacy across different age groups or testing what happens with changes to wording. Testing is ongoing and builds on what came before.
Another reason to care about reliability is that it’s the first step towards creating a standardized screening tool.
So one way that standardization happens is when you have a previously validated test or tool, in this case the Hunger Vital Sign, that is continually administered in the same manner and shows consistently reliable results again and again. And this gets to some of the subsequent research, by others unaffiliated with Children's HealthWatch, that used the Hunger Vital Sign to demonstrate reliable results not just in households with young children, but in adolescents, through the research of Tamara Baer and others, as well as adults, through the research of Hilary Seligman, Craig Gundersen, and others. And so, by replicating results over time in either the same populations or different populations, we're able to see the consistency in that standard manner, which leads to what we would consider a standardized tool.
This reliability step in standardization can be challenging.
Did you know that the standardized version of an inch began as equal to three barleycorns, evolved to be the width of a man’s thumb averaged across three hands of different sizes, and went through several iterations before reaching the current standard in 1959? It is exactly 25.4 millimeters. The millimeter is based on the meter, which in turn is defined by how far light travels in a vacuum over a particular fraction of a second, with some accommodations for the effects predicted by general relativity.
No, I’m not going to attempt to explain more than that. I just want everyone to be happy we aren’t discussing something as complicated as a ruler. Also, in case you’re wondering, scientists are currently working on new ways to measure time that will work in other solar systems. I don’t know what that will do to the meter, millimeter and, subsequently, the inch.
Given that the Hunger Vital Sign will not be used in deep space, we can take a more pragmatic view of proving out its reliability. Let’s start with the ways that it remains in active research.
One thing that's been really exciting to see is the proliferation of research and the building of the evidence base around the Hunger Vital Sign since the Children's HealthWatch group developed and validated it in 2010. In the years that followed, the Hunger Vital Sign questions have been validated in adolescent populations, and they've also been validated in the general adult population by other researchers. Other researchers have also examined the response options, the three-item response versus a binary yes or no response, to understand how changing the questions would affect the validity, the sensitivity, and the specificity of the tool. We've been really excited about the ongoing research and development of understanding best practices. And one thing that we've done at Children's HealthWatch to help foster and promote that continued evolution of future research was to create a national community of practice centered around the Hunger Vital Sign, where we, along with the Food Research and Action Center (FRAC), facilitate conversations and, importantly, collective action among a wide range of stakeholders who are interested in understanding how to properly identify and then address food insecurity through a healthcare and a community lens.
And so our goal is to rapidly share the best practices and data on food insecurity, screening, and interventions, and hopefully scale what works.
Let’s take those examples of ongoing research in a few categories. First, as promised in our introduction, while the Hunger Vital Sign began as validated for use with young children, subsequent research expanded that validation across age groups up to adults. In a health care context, screening adult patients may play out very differently than with young children, because this group is more likely to show diet-related health conditions or pre-conditions. In this context, the invisible factor being brought to light is what barriers might prevent patients from feeling they have a choice in what treatment plan to follow, or even from thinking they have any treatment options at all.
What I've seen is clinicians utilizing the Hunger Vital Sign as a tool to identify the prevalence of food insecurity in either a given population or particular patients, and to make the connection between food insecurity and certain chronic health conditions like diabetes, which would then warrant or trigger acceptance into some intervention, whether it's a medically tailored meal program or medically prescribed meal program, or a preventive food pantry or application to SNAP or WIC or some nutritional intervention. Oftentimes the Hunger Vital Sign is used as a way to identify that link, then trigger some action or intervention to address that nutrition related issue.
There is also testing of how the tool translates to different communities and cultures, not just if the words are translated, but if the meaning and cultural response is the same. For some demographics there are tests for different scenarios, such as adolescents with or without a guardian present.
There’s also research where it isn’t the group of patients changing, but the screen itself. For example, the Hunger Vital Sign appears as part of what we might call a composite screen, which puts together tools for different risks – unstable housing, lack of transportation, or difficulty affording medications, for example – and combines those into one screen.
We also have examples of what happens when the original screening tool is modified. Two common changes are asking only one question, or changing the three-part answer into a yes / no choice.
The issue of deviating from asking two questions was addressed in the original research. Remember, the goal there was to shorten the ‘gold standard’ food insecurity tool, the 18-question Household Food Security Survey (HFSS), to be as brief as possible. That meant testing variations on the original 18-question survey tool – and in fact, from other researchers with other needs, you also see 12-, 10-, and 6-question versions and, going in the other direction, 23-question versions. In our case, the researchers wanted to get it as short as possible.
In the process of developing the Hunger Vital Sign, we generated cross-tabulation tables for the first two questions of the HFSS to examine sensitivity and specificity. We explored four specific combinations and found that an affirmative response to the first question only, or the second question only, of the HFSS provided a sensitivity of 93% or 82%, and a specificity of 85% or 95%, respectively.
In other words, the study began with parameters setting an acceptable threshold for how often the tool missed somebody who was food insecure, or incorrectly flagged someone who wasn’t food insecure. And yes, they erred on the side of not missing people. With testing for different question combinations, asking only one question fell outside the boundaries for acceptable performance. Subsequent research has also added questions, but did not find improved performance . . . and in practice we’d expect diminishing returns with more detailed questions, since the screen is only supposed to be a first step to a more detailed conversation.
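To make the sensitivity and specificity arithmetic concrete, here is a minimal Python sketch. The household counts below are invented purely for illustration (they are not data from the study); only the formulas, which are the standard definitions of sensitivity and specificity, carry over.

```python
# Illustrative sketch: how sensitivity and specificity are computed
# from a 2x2 cross-tabulation of screen result vs. true status.
# The counts below are hypothetical, not the study's actual data.

def sensitivity(true_pos: int, false_neg: int) -> float:
    """Share of truly food-insecure households the screen flags."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg: int, false_pos: int) -> float:
    """Share of truly food-secure households the screen clears."""
    return true_neg / (true_neg + false_pos)

# Hypothetical cross-tab: 200 food-insecure and 800 food-secure households.
tp, fn = 194, 6    # screen positive / negative among the food-insecure
tn, fp = 664, 136  # screen negative / positive among the food-secure

print(f"sensitivity = {sensitivity(tp, fn):.0%}")  # prints: sensitivity = 97%
print(f"specificity = {specificity(tn, fp):.0%}")  # prints: specificity = 83%
```

A screen tuned to err on the side of not missing people, as described above, accepts a lower specificity (more false flags) in exchange for a higher sensitivity (fewer missed food-insecure households).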
Changing the wording on the response from “often / sometimes / never” to “yes / no” has been tested separately.
And there's been research looking into this. In 2017, a study was published in the American Journal of Public Health; the lead author was Jennifer Makelarski. It provides a cautionary tale as to why seemingly minor alterations are actually ill advised. What they did was replace the three-part response options to both the HFSS and the Hunger Vital Sign with simplified yes or no options. And what they found was that the yes or no response option resulted in missing nearly a quarter of food-insecure adults.
Missing a quarter is a big margin of error. That level of error resembles my ill-fated attempt to learn multivariable calculus in high school. Except the only consequences of those tests were me feeling bitter about it for 25 years, and being saved from the Good Will Hunting-inspired idea that I might enter college as a math major – the consequences of getting the food insecurity screen wrong could matter a lot more. We’ll talk about those consequences in the next episode.
These results on the answer phrasing also get to another area of research that we will touch on later – that’s the research into best practices for implementing a screening tool. One hypothesis for why the three-choice phrasing matters is that it makes it easier for patients to answer truthfully.
It needs to have a certain level of acceptability, meaning that that tool is not embarrassing or socially unacceptable, that it's acceptable to the individual completing it.
Acceptability is also about clarity on the fact that food security is a range. True or false, those options are so stark. . . “sometimes”, though, that seems reasonable. Most people go through these types of calculations. I smoke cigars. Mostly in summer, when I can be outside. So not a lot, certainly not every day or even every season, but sometimes. If a screen asks me to answer yes or no to tobacco use then, sure, technically the answer is yes – still, my inner rationalization doesn’t have a hard time telling me that the health care provider doesn’t really mean my kind of tobacco use. And maybe they don’t. I don’t know. If I mark sometimes, then perhaps we’ll have a follow up conversation about it.
Other elements of how the Hunger Vital Sign is presented can impact acceptability, which we’ll get to later in the series. A key element is whether the patient expects the answer matters, that it will result in a positive change to how their health is supported. Remember the ultimate goal from the first episode:
I hope that . . . structural social safety nets are adequate and robust enough to address food insecurity when it's identified.
Let’s recap reliability on our way to that goal.
Next up, we’ll ask how we know the Hunger Vital Sign tool is useful – and then we’ll get closer to the standardization question by asking whether the tool remains valid, reliable, and useful as its use expands to become common in health care practices across the country. You can find more resources on these topics plus other episodes by clicking on the link in the show notes.