
    Understanding the Star Rating — and Why It's Both Useful and Misleading

    The CMS five-star rating is the most visible signal in nursing home research. It is a reasonable starting point. It is a poor ending point. Here is how it is built, where it breaks down, and how to look past it.

    NursingHomeIQ Editorial

    When most families start researching nursing facilities, the first thing they see is a number of stars. One through five. It looks like a hotel review. It is understandable, familiar, and designed to be easy to act on. It is also one of the most consistently misread data points in the entire Care Compare system.

    That is not CMS's fault — they publish detailed methodology documentation and make every component transparent. The problem is that the composite star blends three very different measurements into one number, weights them in ways most people would not expect, and can be influenced by factors that do not always reflect what life in a facility is actually like.

    By the end of this guide you will be able to look at a facility's star rating, decompose it into its parts, and understand which parts to weight most heavily for your situation. That is a fundamentally more useful skill than reading the composite number alone.

    The composite is not an average — it is a formula

    The overall star rating is built from three component ratings. They are not weighted equally, and they are not simply averaged. Understanding this is the foundation of reading the rating correctly.

    How the three components contribute to the composite

    Health inspections · Dominant component

    Based on the three most recent standard annual surveys plus any complaint investigations over the prior 36 months. The cycles are not weighted equally: the most recent survey cycle counts for one-half of this component score, the prior cycle one-third, and the cycle before that one-sixth. The composite star rating starts here — health inspections set the baseline, and staffing and quality measures adjust it up or down.
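    To make the weighting concrete, here is a minimal sketch of how deficiency scores from three survey cycles combine into one health inspection score. The point totals are invented for illustration; the cycle weights follow CMS's published methodology, and lower scores are better.

```python
# Hypothetical deficiency-point totals for the three most recent
# survey cycles, most recent first (lower is better).
cycle_points = [40.0, 60.0, 30.0]

# Cycle weights per the CMS Five-Star Technical Users' Guide:
# most recent 1/2, prior cycle 1/3, second prior 1/6.
weights = [1 / 2, 1 / 3, 1 / 6]

weighted_score = sum(p * w for p, w in zip(cycle_points, weights))
print(round(weighted_score, 1))  # 45.0
```

    Note how the most recent cycle dominates: adding 10 points to the newest survey moves the weighted score three times as far as adding 10 points to the oldest.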

    Staffing · Adjusts composite

    Based on acuity-adjusted RN hours per resident per day and total nurse staffing hours per resident per day. Drawn from Payroll-Based Journal data — actual payroll, not self-reported. Five stars here can add to the composite; one star can subtract from it.

    Quality measures · Adjusts composite

    Based on 15 clinical quality measures drawn from resident assessments — covering long-stay residents and short-stay residents separately. Five stars here can add to the composite; one star can subtract from it. Data is largely self-reported through the MDS assessment process.

    The practical consequence of this structure: a facility with excellent staffing and quality care can carry a depressed composite rating if it had one difficult inspection year. Conversely, a facility with mediocre staffing and genuinely problematic care indicators can maintain a decent composite if its last survey went smoothly. The star you see on the search page is heavily influenced by when the surveyors last visited and what they happened to find.

    Same composite, completely different facilities

    This is the most important practical implication of the formula — and the one the composite number most thoroughly hides. Two facilities can share an identical overall star rating while representing entirely different care realities.

    Facility A: Health inspections 2 stars · Staffing 5 stars · Quality measures 5 stars
    Facility B: Health inspections 5 stars · Staffing 1 star · Quality measures 2 stars

    Facility A had a rough inspection cycle — perhaps a significant deficiency during one survey year that dragged down its health inspection score. But its day-to-day staffing is strong and its clinical quality measures are excellent. For a family looking at long-term placement, this is almost certainly the better choice.

    Facility B sailed through its last inspection. But its staffing is critically low and its quality measures are poor — meaning the care being delivered to residents on a daily basis raises genuine flags. The composite number hides all of this. Never make a decision based on the composite alone.
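    The adjustment logic can be sketched in a few lines. This is a simplified paraphrase of the publicly documented CMS stepwise rules, not CMS's actual code, and it glosses over edge cases; the two calls at the bottom use the hypothetical Facility A and Facility B above.

```python
def overall_stars(inspection: int, staffing: int, quality: int) -> int:
    """Simplified sketch of the CMS composite star calculation.

    Start from the health inspection stars, then adjust up or down
    based on the staffing and quality-measure stars.
    """
    overall = inspection
    # Staffing: 4 or 5 stars (and above the inspection rating) adds one;
    # 1 star subtracts one.
    if staffing >= 4 and staffing > inspection:
        overall += 1
    elif staffing == 1:
        overall -= 1
    # Quality measures: 5 stars adds one; 1 star subtracts one.
    if quality == 5:
        overall += 1
    elif quality == 1:
        overall -= 1
    # A 1-star inspection rating limits how far the composite can rise.
    if inspection == 1:
        overall = min(overall, 2)
    return max(1, min(5, overall))

print(overall_stars(2, 5, 5))  # Facility A: prints 4
print(overall_stars(5, 1, 2))  # Facility B: also prints 4
```

    Under these rules both facilities land on the same composite — which is exactly the ambiguity this section describes.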

    The quality measures: what is and isn't being tracked

    The quality measure component is built from 15 clinical indicators, split between residents who have been in the facility for more than 100 days (long-stay) and those admitted for shorter post-acute stays (short-stay). These are real and meaningful measures — they reflect outcomes that matter to residents and families.

    Long-stay residents

    • Residents with pressure ulcers (high-risk)
    • Residents experiencing falls with major injury
    • Residents with urinary tract infections
    • Residents who self-report moderate to severe pain
    • Residents with depressive symptoms
    • Residents receiving antipsychotic medications
    • Residents whose ability to perform daily activities has declined
    • Residents with a urinary catheter inserted and left in their bladder
    • Residents who lose too much weight
    • Residents who have or had a physical restraint
    • Residents successfully discharged to the community

    Short-stay residents

    • Residents with new or worsened pressure ulcers
    • Residents who self-report moderate to severe pain
    • Residents who newly received antipsychotic medication
    • Residents assessed and given seasonal influenza vaccine

    These measures are meaningful. A facility with high rates of resident falls with major injury, unexplained weight loss, or rapidly increasing antipsychotic use is telling you something important about how it operates. Look at the individual measures in a facility's profile, not just the aggregate QM star — a facility can average out to 3 stars on quality while performing particularly poorly on the specific measures most relevant to your loved one's condition.

    The self-reporting caveat: how quality measures are generated

    Nearly all quality measures are derived from the MDS — the Minimum Data Set, a standardized clinical assessment that facilities conduct on every resident at admission, quarterly, annually, and when a resident's condition changes significantly. The MDS is completed by the facility itself, not by external reviewers.

    This creates an inherent tension. Facilities that code the MDS carefully and accurately produce quality measure data that reflects reality. Facilities that code it strategically — either through genuine error, selective interpretation, or deliberate manipulation — can influence their quality measure scores without changing what actually happens to residents.

    The most scrutinized area is antipsychotic medication use, which CMS has tracked closely because of historical overuse in dementia care. Facilities have strong incentives to keep that number low. Whether they do so by actually reducing unnecessary prescriptions or by reclassifying medications in ways that exclude them from the measure is something the number itself cannot tell you.

    This does not mean quality measures are unreliable — they are the best standardized clinical data available. It means they should be read alongside the inspection record and your own site visit observations, not treated as a standalone verdict.

    Where the rating can break down

    The automatic one-star penalty

    Any facility that receives an Immediate Jeopardy citation during a health inspection — a J, K, or L severity finding, as discussed in our Form 2567 guide — is automatically assigned one star for the health inspection component, regardless of all other performance. This override exists because CMS treats immediate jeopardy as categorically serious.

    The implication cuts both ways. A facility that received a single immediate jeopardy citation two years ago, has since corrected the deficiency, and has maintained strong staffing and quality measures may carry a suppressed overall rating that doesn't reflect its current state. Conversely, a facility that has never received an IJ citation but has accumulated a consistent pattern of D and E level deficiencies may look better than it should.

    The override is a blunt instrument applied to a nuanced situation. When you see a low health inspection star, read the 2567 before drawing conclusions.

    Two additional limitations are worth understanding. First, stars are assigned on a national curve — facilities are ranked against the distribution of all facilities nationally, and stars reflect relative position within that distribution. A three-star facility in a state with uniformly poor care may be objectively less safe than a three-star facility in a high-performing state. The number is relative, not absolute.

    Second, the Special Focus Facility program. Facilities with persistent, serious quality problems — defined by a pattern of deficiencies across multiple survey cycles — can be designated by CMS as Special Focus Facilities and subjected to more frequent inspections. This designation is a significant signal that goes beyond anything a star rating communicates. It is publicly visible on Care Compare and worth checking for any facility on your serious consideration list.

    How to read the stars correctly

    The composite star is a useful triage tool. Use it to quickly eliminate facilities with consistently low ratings across all three components. Then put it down and look at the parts.

    Weight these more heavily
    • Staffing stars — reflects daily care reality
    • Individual QM scores on relevant measures
    • Trend across multiple survey cycles
    • Acuity-adjusted staffing comparison
    • Special Focus Facility designation
    Read in context
    • Health inspection stars — check the actual 2567
    • QM composite — check individual measures
    • Overall stars — decompose before judging
    • Any sudden drop — may reflect one survey year
    • Any recent rise — may not yet reflect current state

    A five-star rating from two years ago, paired with a recent inspection cycle that hasn't been reflected yet, may be optimistic. A three-star rating driven by one bad inspection year, with strong staffing and excellent quality measures, may be underselling a genuinely good facility. The composite is a starting point. It is not a conclusion.

    The series of guides we have built in this space — understanding the 2567, reading staffing data, evaluating acuity context, conducting a site visit — all exist to give you tools that go deeper than a single number. Use them together. The star rating points you toward a door. Everything else in this series helps you understand what is behind it.

    How NursingHomeIQ approaches the rating differently

    The NursingHomeIQ Score does not simply reproduce the CMS composite. We weight the three components differently — giving greater emphasis to staffing and quality measures, which reflect day-to-day care reality, and contextualizing health inspection scores against the actual content of deficiency citations rather than treating all citations as equivalent.

    We also incorporate trend data across multiple survey cycles, flag acuity-adjusted staffing gaps, and surface community feedback signals that the government rating system does not capture. The goal is a score that more accurately reflects what a resident's daily experience in a facility is likely to be — not just what the most recent inspection found on a given Tuesday.

    Our score is not perfect either. No single number can be. But we believe families deserve a synthesis that is more honest about what it measures and more transparent about what it cannot.

    About NursingHomeIQ · NursingHomeIQ is a consumer resource offering free and paid data and insights. We do not accept payment from facilities or operators for placement, ratings, or featured listings. Our IQ Score is proprietary but methodologically transparent. If you have questions about our methodology or want to share a story from inside a facility, we want to hear from you.
