When most families start researching nursing facilities, the first thing they see is a star rating: one through five. It looks like a hotel review. It is understandable, familiar, and designed to be easy to act on. It is also one of the most consistently misread data points in the entire Care Compare system.
That is not CMS's fault — they publish detailed methodology documentation and make every component transparent. The problem is that the composite star obscures the three very different things it is measuring, weights them in ways most people would not expect, and can be influenced in ways that do not always reflect what life in a facility is actually like.
By the end of this guide you will be able to look at a facility's star rating, decompose it into its parts, and understand which parts to weight most heavily for your situation. That is a fundamentally more useful skill than reading the composite number alone.
The composite is not an average — it is a formula
The overall star rating is built from three component ratings. They are not weighted equally, and they are not simply averaged. Understanding this is the foundation of reading the rating correctly.
Consider two hypothetical facilities. Facility A had a rough inspection cycle — perhaps a significant deficiency during one survey year that dragged down its health inspection score. But its day-to-day staffing is strong and its clinical quality measures are excellent. For a family looking at long-term placement, this is almost certainly the better choice.
Facility B sailed through its last inspection. But its staffing is critically low and its quality measures are poor — meaning the care being delivered to residents on a daily basis raises genuine flags. The composite number hides all of this. Never make a decision based on the composite alone.
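The stepwise combination works roughly like this. The sketch below is a simplified rendering of the rules in CMS's Five-Star Technical Users' Guide; the function name and exact thresholds here are illustrative, and the guide itself is the authoritative source:

```python
def overall_stars(inspection, staffing, quality):
    """Simplified sketch of how the three component star ratings
    combine into the overall rating. Thresholds follow the general
    shape of CMS's published rules but are not authoritative."""
    overall = inspection  # the health inspection rating is the base
    # Strong staffing (4+ stars, and better than the inspection
    # rating) adds a star; one-star staffing removes one.
    if staffing >= 4 and staffing > inspection:
        overall += 1
    elif staffing == 1:
        overall -= 1
    # Only the quality-measure extremes move the needle.
    if quality == 5:
        overall += 1
    elif quality == 1:
        overall -= 1
    # A one-star inspection caps how far the overall rating can climb.
    if inspection == 1:
        overall = min(overall, 2)
    return max(1, min(5, overall))

# Facility A: weak inspection, strong staffing, excellent quality measures
print(overall_stars(2, 4, 5))  # 4
# Facility B: good inspection, one-star staffing, middling quality measures
print(overall_stars(4, 1, 3))  # 3
```

Notice that the starting point is always the inspection rating: staffing and quality measures can only nudge it up or down, which is how two facilities with very different day-to-day care can land on similar overall numbers.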
The quality measures: what is and isn't being tracked
The quality measure component is built from 15 clinical indicators, split between residents who have been in the facility for more than 100 days (long-stay) and those admitted for shorter post-acute stays (short-stay). These are real and meaningful measures — they reflect outcomes that matter to residents and families.
Long-stay residents
- Residents with pressure ulcers (high-risk)
- Residents experiencing falls with major injury
- Residents with urinary tract infections
- Residents who self-report moderate to severe pain
- Residents with depressive symptoms
- Residents receiving antipsychotic medications
- Residents whose ability to perform daily activities has declined
- Residents with a urinary catheter inserted and left in their bladder
- Residents who lose too much weight
- Residents who have or had a physical restraint
- Residents successfully discharged to the community
Short-stay residents
- Residents with new or worsened pressure ulcers
- Residents who self-report moderate to severe pain
- Residents who newly received antipsychotic medication
- Residents assessed and given seasonal influenza vaccine
These measures are meaningful. A facility with high rates of resident falls with major injury, unexplained weight loss, or rapidly increasing antipsychotic use is telling you something important about how it operates. Look at the individual measures in a facility's profile, not just the aggregate QM star — a facility can average out to three stars on quality while performing particularly poorly on the specific measures most relevant to your loved one's condition.
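In practice, that check is just a per-measure comparison against a benchmark. A minimal sketch, using made-up figures — real per-measure rates and national averages appear on each facility's Care Compare profile:

```python
# Hypothetical per-measure rates (percent of residents affected).
# Real values come from the facility's Care Compare profile.
facility = {
    "falls with major injury": 5.2,
    "antipsychotic medications": 21.0,
    "pressure ulcers (high-risk)": 6.8,
}
national_average = {
    "falls with major injury": 3.4,
    "antipsychotic medications": 14.5,
    "pressure ulcers (high-risk)": 7.9,
}

# Flag measures where the facility is meaningfully worse than average
# (the 20% margin here is an arbitrary illustrative threshold).
flags = [name for name, rate in facility.items()
         if rate > 1.2 * national_average[name]]
print(flags)
```

A facility like this one could still carry a middling aggregate QM star, even though the two flagged measures would matter a great deal to a family worried about falls or dementia care.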
The self-reporting caveat: how quality measures are generated
Nearly all quality measures are derived from the MDS — the Minimum Data Set, a standardized clinical assessment that facilities conduct on every resident at admission, quarterly, annually, and when a resident's condition changes significantly. The MDS is completed by the facility itself, not by external reviewers.
This creates an inherent tension. Facilities that code the MDS carefully and accurately produce quality measure data that reflects reality. Facilities whose coding diverges from reality — whether through genuine error, selective interpretation, or deliberate manipulation — can influence their quality measure scores without changing what actually happens to residents.
The most scrutinized area is antipsychotic medication use, which CMS has tracked closely because of historical overuse in dementia care. Facilities have strong incentives to keep that number low. Whether they do so by actually reducing unnecessary prescriptions or by reclassifying medications in ways that exclude them from the measure is something the number itself cannot tell you.
This does not mean quality measures are unreliable — they are the best standardized clinical data available. It means they should be read alongside the inspection record and your own site visit observations, not treated as a standalone verdict.
Where the rating can break down
The automatic one-star penalty
Any facility that receives an Immediate Jeopardy citation during a health inspection — a J, K, or L severity finding, as discussed in our Form 2567 guide — is automatically assigned one star for the health inspection component, regardless of all other performance. This override exists because CMS treats immediate jeopardy as categorically serious.
The implication cuts both ways. A facility that received a single immediate jeopardy citation two years ago, has since corrected the deficiency, and has maintained strong staffing and quality measures may carry a suppressed overall rating that doesn't reflect its current state. Conversely, a facility that has never received an IJ citation but has accumulated a consistent pattern of D and E level deficiencies may look better than it should.
The override is a blunt instrument applied to a nuanced situation. When you see a low health inspection star, read the 2567 before drawing conclusions.
Two additional limitations are worth understanding. First, stars are assigned on a curve. For the health inspection component, facilities are ranked against the other facilities in their own state, and stars reflect relative position within that state's distribution; staffing and quality measure cut points are set nationally. A three-star health inspection rating in a state with uniformly poor care may therefore reflect objectively worse care than the same rating in a high-performing state. The number is relative, not absolute.
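Curved assignment can be sketched directly. The band widths below approximate the percentile cut points CMS describes for the health inspection rating (roughly the best 10% of a peer group earn five stars and the worst 20% earn one); the exact mechanics live in the Five-Star Technical Users' Guide:

```python
def stars_from_curve(scores):
    """Sketch of curved star assignment: each facility's star reflects
    its rank among peers, not an absolute standard. Band widths
    (best 10% -> 5 stars, worst 20% -> 1 star) approximate CMS's
    published health inspection cut points."""
    ranked = sorted(scores)               # lower deficiency score = better
    n = len(scores)
    bands = [(0.10, 5), (0.3333, 4), (0.5667, 3), (0.80, 2)]
    def star(score):
        better = ranked.index(score) / n  # share of peers doing better
        for cut, stars in bands:
            if better < cut:
                return stars
        return 1
    return [star(s) for s in scores]

# Two peer groups with identical internal spread but very different
# absolute scores produce identical star distributions.
group_a = stars_from_curve(list(range(100)))
group_b = stars_from_curve(list(range(500, 600)))
print(group_a == group_b)  # True
```

This is the mechanical reason the number is relative: shift every facility's underlying score by the same amount and the stars do not move at all.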
Second, the Special Focus Facility program. Facilities with persistent, serious quality problems — defined by a pattern of deficiencies across multiple survey cycles — can be designated by CMS as Special Focus Facilities and subjected to more frequent inspections. This designation is a significant signal that goes beyond anything a star rating communicates. It is publicly visible on Care Compare and worth checking for any facility on your serious consideration list.
How to read the stars correctly
The composite star is a useful triage tool. Use it to quickly eliminate facilities with consistently low ratings across all three components. Then put it down and look at the parts.
A five-star rating from two years ago, paired with a recent inspection cycle that hasn't been reflected yet, may be optimistic. A three-star rating driven by one bad inspection year, with strong staffing and excellent quality measures, may be underselling a genuinely good facility. The composite is a starting point. It is not a conclusion.
The series of guides we have built in this space — understanding the 2567, reading staffing data, evaluating acuity context, conducting a site visit — all exist to give you tools that go deeper than a single number. Use them together. The star rating points you toward a door. Everything else in this series helps you understand what is behind it.
How NursingHomeIQ approaches the rating differently
The NursingHomeIQ Score does not simply reproduce the CMS composite. We weight the three components differently — giving greater emphasis to staffing and quality measures, which reflect day-to-day care reality, and contextualizing health inspection scores against the actual content of deficiency citations rather than treating all citations as equivalent.
We also incorporate trend data across multiple survey cycles, flag acuity-adjusted staffing gaps, and surface community feedback signals that the government rating system does not capture. The goal is a score that more accurately reflects what a resident's daily experience in a facility is likely to be — not just what the most recent inspection found on a given Tuesday.
Our score is not perfect either. No single number can be. But we believe families deserve a synthesis that is more honest about what it measures and more transparent about what it cannot.