New Report Highlights Difficulties in Assessing and Measuring Health Disparities
A new report from The Commonwealth Fund has found that:
…the lack of standardization across rating and regulatory health equity metrics may undermine efforts to achieve equity. Metrics and methodologies must be aligned to accurately assess health equity progress and drive institutional change and improvements for patient populations (Bonfiglio et al., 2025).
The overarching theme of the report is that large organizations, such as the American Hospital Association (AHA), the Centers for Medicare and Medicaid Services (CMS), U.S. News & World Report, and Vizient, all use broadly different metrics to measure health equity and disparities, with little overlap, little agreement on what to measure and how, and little oversight from any agency or authority that could establish standardized metrics and assessment tools.
Photo Source: The Commonwealth Fund | Photo: Anton Petrus/Getty Images
When PlusInc began highlighting health disparities for various disease states, we noticed significant differences in how those disparities were measured across organizations and disease states.
For example, gathering mortality data is a relatively straightforward task that should yield relatively standardized results across disease states or causes of death. We have, after all, a massively detailed International Classification of Diseases (ICD-10) system that should capture both the underlying and multiple causes of death. In practice, however, assigning ICD-10 codes relies primarily on best judgments about the causes of death, is hampered by significant underfunding, and is inconsistent from jurisdiction to jurisdiction.
The Commonwealth Fund report, rather than focusing on a singular problem, examines the overarching structures that large institutions use to measure outcomes. It looks at the following agencies and organizations:
Illinois Health and Hospital Association Racial Health Equity Progress Report
American Hospital Association Health Equity Transformation Assessment
Leapfrog Hospital Survey
Centers for Medicare and Medicaid Services Hospital Commitment to Health Equity and Health Equity Framework
The Joint Commission Standards for Health Equity
U.S. News & World Report Best Hospitals
Lown Institute Hospital Index
Vizient Quality and Accountability Scorecard
Each of these organizations uses different tools to measure health equity, including self-reported qualitative and quantitative data, regulatory frameworks (i.e., a system of rules and guidelines designed to ensure compliance and evaluate the broader implications of regulations), administrative data (i.e., data collected by administrative bodies, including insurance claims data), and clinical quantitative data (i.e., data that tend to focus on performance-, patient outcome-, and health outcome-based metrics).
The problem, argues The Commonwealth Fund, is that each of these tools has merit, but because each organization examines different aspects in different ways, how they score “equity” and measure disparities in outcomes may vary significantly due to the tools they use.
For example, self-reported data are an excellent way to measure population responses to both qualitative and quantitative questions. The problem with self-reported data is that, plainly put, people lie.
They may not mean to lie, or even think of it as lying, but several factors come into play when self-reported data are collected from most humans.
When asked about exhibiting behaviors and habits that are culturally perceived as being “good,” such as voting, exercising, or attending church, respondents are more likely to respond in ways that indicate regularly partaking in those behaviors.
Similarly, when asked about behaviors and habits that are culturally perceived as being “bad,” such as alcohol intake, smoking, illicit drug use, and having abortions, respondents are more likely to respond in ways that indicate lower levels of drinking, smoking, and illicit drug use, or to omit that they have had an abortion.
These tendencies aren’t necessarily intentional, but rather occur because of human desires for acceptance and approval from figures of authority. Furthermore, when it comes to interactions with medical authorities, patients who respond to hospital surveys tend to do so after a potentially harrowing experience. That experience may result in responses that, rather than benefiting from retrospective analysis and time to process, are based on immediate, gut reactions (e.g., “Were the physicians able to address the problem you complained about when admitted?” “Well, I’m still in pain, so no.”). And even with reflection upon past experiences, memory lapses or changes in perception of the incident may drastically shift responses if the surveys are completed after too long a period.
Despite these shortcomings, however, well-designed survey tools, and the researchers who use them, can control for potential biases, making self-reported data valuable for assessing the state of play and measuring disparities in perceptions of services rendered and health outcomes.
But not every agency uses the same tool, so different agencies may be accounting for different measures. Moreover, these assessment tools tend to be institutional, meaning that each is used primarily by a single hospital or group of providers, rather than being a single tool whose use is mandated across all providers, or even across all providers of a specific type (e.g., hospitals).
Even in the collection of more definitive data, such as mortality statistics, there is little agreement, regulation, or realism as to how deaths should be reported.
Take, for example, deaths related to extreme heat:
In a 2023 report that has since been removed by the Trump Administration for running afoul of its fiction that climate change is a hoax, the Centers for Disease Control and Prevention (CDC) reported that heat kills more people than any other extreme weather event, accounting for approximately 2,300 deaths in 2023—a number that the report indicated could increase based upon additional records processing (Selig, 2024). Those who research mortality rates, however, suggest that the number of deaths could actually be much higher than reported.
Why?
Because the CDC relies upon death certificates issued by local authorities to account for causes of death, but how those certificates are completed varies widely from place to place. Realistically, expecting accurate and timely reporting without providing financial support leads to rushed, incomplete, and potentially inaccurate results. This is particularly true in areas of the country where funding is bare bones and just one or two people in an entire county are responsible for reporting every cause of death. Some jurisdictions don’t even consider “heat” as a potential factor when filling out those certificates (Selig, 2024).
Heat deaths are actually an excellent place to start assessing health disparities across multiple demographic groups. Logically, places in the southern part of the United States are going to be hotter on average than those in the Pacific Northwest or New England. The populations of these hotter states also tend to have higher percentages of racial and ethnic minorities, higher rates of obesity and chronic illness, lower levels of educational attainment, and higher rates of poverty.
Dying from exposure to extreme heat is something many people believe occurs only to people who are exposed to high temperatures in direct sunlight, such as people who work in construction, farming, or other outdoor occupations. In reality, heat-related deaths impact a wide variety of people, including children, athletes, older adults, pregnant people, emergency responders, outdoor and indoor workers, people with disabilities, those with chronic health conditions (particularly cardiovascular and pulmonary conditions), and people experiencing homelessness (National Oceanic and Atmospheric Administration, n.d.).
When filling out death certificates, those responsible often select the most obvious causes of death in heat-related incidents: heart problems, organ failure, or poisoning by medications. If it’s 110°F in Maricopa County, AZ, everyone in that county is experiencing the same high temperatures, right?
No.
This is often where disparities related to demographic differences become apparent.
We’re not all experiencing the same heatwave. Those with the financial means to do so are often safely ensconced in well-air-conditioned, well-insulated homes and workplaces. Those with lower incomes often live in poorly insulated, poorly cooled homes and frequently work in indoor or outdoor jobs with poor ventilation and high exposure to heat. That exposure may be exacerbated by conditions that raise temperatures further, such as being surrounded by metal or working with high-temperature materials (such as paving materials), and such jobs may offer few opportunities for breaks during the workday.
All of this points to a clear need for accuracy in counting heat-related deaths. Yet there appears to be little to no appetite on the part of state or federal agencies to standardize how heat deaths are reported. The reason may be political.
Since at least 2013, Donald Trump has repeatedly downplayed extreme heat and warnings related to extreme heat as being the hysterical ravings of the “Disciples of Global Warming” (Twitter, 2013). Adherents to his way of thinking have since infiltrated every federal agency in our government, not only downplaying but openly working to reverse and repeal what few regulatory and statutory tools are in place to address climate change.
And so, it is unlikely that we will be able to standardize how heat-related deaths are accounted for, leaving us with inaccurate reporting at both the local and federal levels.
Ultimately, The Commonwealth Fund report recommends the following changes, guiding principles for a coordinated approach to health equity measurement derived from assessing the gaps in existing health equity tools:
Health equity metrics should include an all-payer approach, which would allow for analysis of population disparities.
Disparities in care should be measured at the patient, community, and institutional levels, rather than through wider, more generalized factors that might not fully capture how inequities play out in real-world settings.
Measurement should be across the domains of community, organization, employees, and patients.
For health equity, both inpatient and outpatient performance should be assessed.
Process and outcome measures should be defined, specifying both the numerator and denominator for each metric.
Methodologies must be transparent, validated, and subject to peer review.
Racial and other inequities in access to care and care outcomes should be identified and reported to the board, encouraging leadership to act on disparities (Bonfiglio et al., 2025).
As PlusInc continues to identify, research, and report on health inequities and disparities, we believe that standardization of reporting requirements (and subsequent funding to support those requirements) is essential for evaluating public health outcomes and working to improve them across all populations. This will require multiple parties, including healthcare providers, administrative officials, local, state, and federal government agencies, regulatory agencies, and payors, to come to the table and agree to operate from, at the very least, a similar playbook, measuring essentially the same things in the same ways.