What Is RADV? The Complete Medicare Advantage Audit Guide
Tags: RADV, Medicare Advantage, Risk Adjustment, CMS Audits, HCC Coding, Compliance

Dr. Anica
February 27, 2026
16 min read

RADV — Risk Adjustment Data Validation — is CMS's primary audit program for verifying that the diagnosis codes submitted by Medicare Advantage organizations are supported by medical record documentation. CMS uses RADV to identify and recover improper payments made to MA plans based on unsupported or inaccurate HCC codes. With the 2025 RADV Final Rule now authorizing extrapolation of audit findings to entire contract populations, a single RADV audit can result in tens of millions of dollars in payment recoveries. Every risk adjustment team operating in Medicare Advantage needs to understand how RADV works, what triggers it, and how to prepare for it.

Why Does RADV Exist?

Medicare Advantage plans receive risk-adjusted capitated payments from CMS based on the health status of their enrolled populations. The CMS-HCC model assigns higher payments for beneficiaries with documented chronic conditions and complex diagnoses. This payment structure creates a financial incentive to code as many conditions as possible — and CMS has long recognized that this incentive can lead to overcoding, unsupported diagnoses, and inflated risk scores.

The scale of the problem is significant. The HHS Office of Inspector General (OIG) has repeatedly estimated that Medicare Advantage overpayments attributable to unsupported diagnoses amount to billions of dollars annually. In a 2023 OIG report, auditors found that unsubstantiated diagnoses accounted for an estimated $12 billion in excess payments in a single year. CMS created RADV as the enforcement mechanism to validate coding accuracy and recover improper payments at scale.

RADV is not a new program — CMS has conducted RADV audits since 2008. However, the program's impact has increased dramatically with the finalization of the extrapolation rule, which allows CMS to project audit findings from a sample of beneficiaries to the plan's full enrollment, multiplying the financial consequences of coding errors by orders of magnitude.

How Does the RADV Audit Process Work?

RADV audits follow a structured, multi-phase process that CMS administers through its Center for Program Integrity (CPI). Understanding each phase is critical for risk adjustment teams preparing for or responding to an audit.

Step 1: Contract Selection

CMS selects MA contracts for RADV audit based on statistical analysis of risk score patterns, outlier detection, and sometimes random sampling. Contracts that exhibit unusually high risk scores relative to their demographic and geographic benchmarks are more likely to be selected. CMS does not publicly disclose the exact selection methodology, but it has confirmed that data-driven anomaly detection plays a central role.

Step 2: Beneficiary Sampling

Once a contract is selected, CMS draws a stratified random sample of beneficiaries enrolled under that contract. The sample size is typically 200 beneficiaries, drawn to be statistically representative of the contract's population. CMS stratifies the sample to ensure coverage across different risk score ranges and condition categories.
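
A stratified draw of this kind can be sketched in a few lines. This is an illustrative sketch only: the strata boundaries, the `risk_score` field name, and the proportional allocation are assumptions for the example, since CMS does not publish its exact stratification methodology.

```python
import random
from collections import defaultdict

def stratified_sample(beneficiaries, total_n=200, seed=42):
    """Draw a stratified random sample of beneficiaries.

    `beneficiaries` is a list of dicts with a `risk_score` field.
    Strata boundaries are illustrative, not CMS's published cut points.
    """
    rng = random.Random(seed)
    strata = defaultdict(list)
    for b in beneficiaries:
        if b["risk_score"] < 1.0:
            strata["low"].append(b)
        elif b["risk_score"] < 2.0:
            strata["medium"].append(b)
        else:
            strata["high"].append(b)

    total = sum(len(members) for members in strata.values())
    sample = []
    for members in strata.values():
        # Allocate sample slots proportionally to stratum size
        n = min(len(members), round(total_n * len(members) / total))
        sample.extend(rng.sample(members, n))
    return sample
```

Because each stratum's allocation is rounded, the final sample can land a member or two above or below the 200 target; an exact-size variant would distribute the rounding remainder across strata.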

Step 3: Medical Record Request

CMS issues a medical record request to the MA organization for every sampled beneficiary. The plan must produce the medical records that support the diagnosis codes submitted for risk adjustment during the payment year under audit. Plans typically have 60 to 90 days to collect and submit records from providers.

Step 4: Coding Review

CMS-contracted medical record reviewers — certified coders with risk adjustment expertise — examine each medical record against the diagnosis codes submitted. For every HCC that was submitted for a sampled beneficiary, reviewers determine whether the medical record contains documentation that supports the diagnosis according to ICD-10-CM coding guidelines and MEAT criteria (Monitor, Evaluate, Assess/Address, Treat).

Step 5: Error Rate Calculation and Payment Recovery

CMS calculates the error rate — the percentage of submitted HCCs that were not supported by medical record documentation. Under the extrapolation rule, this error rate is then applied to the contract's entire enrolled population to estimate total overpayment. CMS issues a payment recovery demand for the extrapolated amount, adjusted for statistical confidence intervals.

RADV Audit Types and Timelines

CMS conducts RADV at multiple levels, each with different scopes, timelines, and financial implications.

| Audit Type | Scope | Sample Size | Extrapolation | Typical Timeline |
|---|---|---|---|---|
| Contract-Level RADV | Individual MA contract | ~200 beneficiaries | Yes (post-2025 rule) | 2–4 years from payment year |
| National RADV | Cross-industry national sample | Varies | Used for benchmarking | Ongoing |
| HHS-OIG Audits | Targeted by OIG investigators | Varies | Case-specific | 1–3 years |
| Plan-Initiated (Internal) | Self-audit by MA organization | Custom | N/A (internal) | Continuous |

Key Timeline Considerations

RADV audits are retrospective. CMS audits payment years that are typically two to four years in the past. For example, a RADV audit initiated in 2026 may target payment year 2023 or 2024 diagnosis submissions. This retrospective nature means that coding errors made years ago can result in financial consequences today — and that organizations must maintain medical record retrieval capabilities for extended periods.

The audit cycle from initial notification to final payment recovery can span 12 to 24 months. During this period, plans go through record submission, initial findings, appeals, and final determination. CMS provides a formal appeals process, but the burden of proof rests on the MA organization to demonstrate that submitted codes were supported by documentation.

The 2025 RADV Final Rule: Extrapolation Changes Everything

The most consequential regulatory development in RADV history is the CMS Final Rule on RADV extrapolation (CMS-4201-F), finalized in January 2025. Prior to this rule, CMS could only recover overpayments for the specific sampled beneficiaries — meaning that even if an audit found widespread coding errors, the financial recovery was limited to the sample.

Under the finalized rule, CMS now has the authority to extrapolate audit findings from the sample to the contract's full population. This means that if a RADV audit of 200 sampled beneficiaries finds a 10% error rate, CMS can project that error rate across all 50,000 or 100,000 members enrolled under the contract and calculate overpayment accordingly.

What the Extrapolation Rule Means Financially

The math is straightforward and severe:

  • Without extrapolation: A 10% error rate on 200 sampled members might result in $200,000 to $500,000 in recovery.
  • With extrapolation: That same 10% error rate projected across 50,000 members could result in $50 million to $100 million or more in recovery, depending on the average risk score impact per error.
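
That arithmetic can be made concrete in a few lines. The average overpayment per error below (roughly $12,500) is an illustrative assumption, not a CMS figure, and the `ffs_adjuster` parameter simply nets a baseline error rate out of the raw rate before projecting.

```python
def extrapolated_recovery(error_rate, enrolled, avg_overpayment_per_error,
                          ffs_adjuster=0.0):
    """Project a sample error rate across a contract's enrollment.

    `ffs_adjuster`, if any, is subtracted from the raw error rate.
    All dollar figures are illustrative assumptions, not CMS parameters.
    """
    net_rate = max(error_rate - ffs_adjuster, 0.0)
    return net_rate * enrolled * avg_overpayment_per_error

# Sample-only recovery: 10% error rate, 200 members, ~$12,500 per error
sample_only = extrapolated_recovery(0.10, 200, 12_500)       # 250000.0
# Extrapolated: same error rate across 50,000 enrolled members
full_contract = extrapolated_recovery(0.10, 50_000, 12_500)  # 62500000.0
```

The jump from roughly $250,000 to $62.5 million with no change in the underlying audit findings is the entire financial story of the extrapolation rule.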

CMS has stated that it will apply a Fee-for-Service Adjuster (FFS Adjuster) to account for the baseline level of coding errors in FFS Medicare, ensuring that MA plans are not penalized for error rates that also exist in the traditional Medicare program. However, the FFS Adjuster does not eliminate exposure — it only reduces the net error rate used for extrapolation.

Applicable Payment Years

The extrapolation rule applies to payment year 2018 and subsequent years. This means CMS can retroactively apply extrapolation to audits of payment years dating back to 2018, substantially increasing the financial exposure for plans that assumed extrapolation would never be implemented.

Common RADV Findings and Error Types

Understanding the most frequent RADV audit findings helps organizations focus their preparation efforts on the areas of highest risk. Based on published CMS audit results and OIG reports, the following error categories account for the majority of RADV findings.

| Error Type | Description | Prevalence | Prevention Strategy |
|---|---|---|---|
| Unsupported HCC | Diagnosis code submitted but no documentation in the medical record supports the condition | 40–50% of errors | Pre-submission chart review with documentation verification |
| Insufficient MEAT Documentation | Condition is mentioned but the record lacks evidence of monitoring, evaluation, assessment, or treatment | 25–35% of errors | MEAT-specific documentation training for providers; automated MEAT validation |
| Coding Error | Wrong ICD-10-CM code assigned — correct condition documented but mapped to an incorrect or non-specific code | 10–15% of errors | Certified coder review; AI-assisted code validation |
| Linkage Error | Diagnosis documented by a provider who does not have a valid encounter linked to the beneficiary for that service date | 5–10% of errors | Encounter data reconciliation; submission filtering |
| One-Time Diagnosis | Condition coded in a prior year and carried forward without re-documentation in the current measurement year | 5–10% of errors | Annual recapture workflows; prospective chart review |

Unsupported HCCs: The Highest-Risk Category

The single most common RADV finding is an HCC submission that has no supporting documentation in the medical record. This occurs when a diagnosis code is submitted for risk adjustment but the provider note for the corresponding encounter does not contain any clinical documentation of the condition. Common causes include reliance on problem lists that are not refreshed during the encounter, retrospective chart reviews that add codes without verifying current-year documentation, and data entry errors in the coding workflow.

MEAT Criteria Failures

Even when a condition is documented in the medical record, the documentation must meet MEAT criteria to be considered valid for risk adjustment. A progress note that mentions diabetes in the assessment section but does not document any monitoring (such as an A1c test), evaluation, or treatment plan may fail RADV review. CMS reviewers apply MEAT standards rigorously, and documentation that might seem adequate for clinical purposes may fall short of RADV requirements.
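
As a toy illustration of the kind of screen this implies, a keyword check can flag notes that mention a condition without any MEAT evidence. The term lists below are invented for the example; real validation requires clinical NLP and coder judgment, not substring matching.

```python
# Simplified, keyword-based MEAT screen. All term lists are illustrative
# assumptions; a production system would use clinical NLP, not keywords.
MEAT_SIGNALS = {
    "Monitor":  ["a1c", "monitored", "labs reviewed", "follow-up"],
    "Evaluate": ["exam", "reviewed results", "evaluated"],
    "Assess":   ["assessment", "stable", "worsening", "improved"],
    "Treat":    ["metformin", "prescribed", "referred", "continue"],
}

def meat_elements_present(note_text):
    """Return the MEAT elements with at least one signal term in the note."""
    text = note_text.lower()
    return {element for element, terms in MEAT_SIGNALS.items()
            if any(term in text for term in terms)}

def passes_meat(note_text):
    # At least one MEAT element must be documented for the condition
    return len(meat_elements_present(note_text)) >= 1
```

Under this screen, "Diabetes: A1c 7.2 reviewed, continue metformin" passes (Monitor and Treat signals are present), while "Diabetes noted on problem list" fails, mirroring the progress-note example above.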

How to Prepare for a RADV Audit

RADV preparation is not a one-time project — it is an ongoing operational discipline that must be integrated into the risk adjustment coding workflow. Organizations that treat RADV readiness as prospective (built into every chart before submission) rather than retrospective (scrambling after audit notification) consistently achieve better outcomes.

1. Implement Prospective Chart Validation

Every chart should be reviewed for RADV compliance before diagnosis codes are submitted for risk adjustment. This means verifying that each HCC code is supported by medical record documentation that meets MEAT criteria, that the ICD-10-CM code is correctly assigned, and that the encounter is properly linked to the beneficiary.
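
A minimal sketch of that pre-submission gate, assuming the per-HCC checks have already been computed upstream (the class and field names here are illustrative, not a real schema):

```python
from dataclasses import dataclass

@dataclass
class HccSubmission:
    """One candidate HCC for a beneficiary; field names are illustrative."""
    icd10_code: str
    hcc: str
    meat_compliant: bool
    encounter_linked: bool
    code_verified: bool

def validate_chart(submissions):
    """Split candidate HCCs into submit-ready and held-for-review lists."""
    ready, held = [], []
    for s in submissions:
        if s.meat_compliant and s.encounter_linked and s.code_verified:
            ready.append(s)
        else:
            held.append(s)  # route back for review or documentation fixes
    return ready, held
```

The point of the gate is that a failed check blocks submission rather than generating a report to be read later; unsupported codes never reach CMS.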

2. Establish Documentation Standards

Work with provider networks to establish clear documentation standards for risk-adjustable conditions. Providers should understand that every condition submitted for risk adjustment must be actively addressed during the encounter — not merely listed on a problem list. Documentation templates, CDI (Clinical Documentation Improvement) programs, and provider education are essential tools.

3. Conduct Internal RADV Simulations

Run internal audits that mirror the CMS RADV methodology. Sample beneficiaries from your own population, pull medical records, and have certified coders review the documentation against submitted HCCs. Calculate your internal error rate and use it to estimate your extrapolated exposure. Organizations that conduct regular RADV simulations can identify and correct systematic coding issues before CMS does.
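
A simulation along those lines can be sketched as follows. Everything here is an assumption about your internal data model: `reviewer` stands in for a certified coder's determination, and the normal-approximation interval is one simple way to bound the estimated error rate from a 200-chart sample.

```python
import math
import random

def simulate_radv(population, reviewer, sample_size=200, seed=7):
    """Mirror the CMS sample-and-review flow on internal data.

    `population` is a list of chart records; `reviewer(chart)` returns True
    when documentation supports the chart's HCCs. Both are assumptions about
    your internal data model, not a CMS interface.
    """
    rng = random.Random(seed)
    sample = rng.sample(population, min(sample_size, len(population)))
    errors = sum(1 for chart in sample if not reviewer(chart))
    p = errors / len(sample)
    # 95% normal-approximation confidence interval on the error rate
    half_width = 1.96 * math.sqrt(p * (1 - p) / len(sample))
    return p, max(p - half_width, 0.0), min(p + half_width, 1.0)
```

Multiplying the upper bound of the interval by enrollment and an assumed average overpayment per error yields a conservative estimate of extrapolated exposure, which is the number worth presenting to leadership.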

4. Maintain Robust Record Retrieval Capabilities

RADV audits require rapid production of medical records, often for encounters that occurred two to four years prior. Organizations must maintain relationships with provider networks that enable timely record retrieval, and they should have systems in place to track which records have been collected, which are outstanding, and which are at risk of being unavailable. Missing records are treated as unsupported HCCs in RADV review.

5. Train Coders on RADV-Specific Standards

Coding accuracy is necessary but not sufficient for RADV compliance. Coders must understand the specific evidentiary standards that RADV reviewers apply — which may differ from general coding guidelines. Training should cover MEAT criteria application, documentation specificity requirements under the V28 CMS-HCC model, and common RADV error patterns.

How AI Helps With RADV Preparation and Compliance

The RADV extrapolation rule has raised the stakes for coding accuracy to a level that makes manual-only workflows increasingly untenable. When every coding error on a sampled chart can be projected across an entire contract population, the cost of a single unsupported HCC is no longer limited to that one beneficiary — it is multiplied by tens of thousands. AI-assisted coding tools address this challenge by embedding RADV readiness into the coding process itself.

Automated MEAT Validation

AI systems can analyze clinical documentation in real time and evaluate whether MEAT criteria are satisfied for each captured diagnosis. Rather than relying on coders to mentally verify MEAT compliance for every condition on every chart, AI performs this validation automatically and flags conditions that lack sufficient documentation support.

ANICA, Jivica's multi-agent AI coding engine, deploys specialized agents for MEAT validation that evaluate each condition against the four MEAT criteria — Monitor, Evaluate, Assess/Address, and Treat — and generate a compliance determination with specific evidence citations from the medical record. Conditions that fail MEAT validation are flagged before submission, preventing unsupported HCCs from entering the risk adjustment pipeline.

Per-Chart RADV Readiness Scoring

Rather than waiting for an audit to discover coding vulnerabilities, AI can assign a RADV readiness score to every chart at the point of coding. This score reflects the likelihood that the chart's submitted HCCs would survive RADV review based on documentation quality, MEAT compliance, code accuracy, and encounter linkage.

ANICA's per-chart RADV readiness scoring gives risk adjustment teams a quantified, auditable measure of compliance risk for every submission. Charts that score below threshold can be routed for additional review, sent back for provider documentation improvement, or flagged for exclusion from submission until documentation gaps are resolved.
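
One simple way to picture such a score is a weighted checklist. The weights, field names, and 0.80 threshold below are invented for illustration and are not ANICA's actual model.

```python
def radv_readiness_score(chart):
    """Weighted readiness score in [0, 1].

    Weights and field names are illustrative assumptions only.
    """
    weights = {
        "meat_compliant": 0.40,
        "code_accurate": 0.25,
        "encounter_linked": 0.20,
        "documentation_specific": 0.15,
    }
    return sum(w for check, w in weights.items() if chart.get(check))

def route_chart(chart, threshold=0.80):
    """Route a chart based on its readiness score."""
    if radv_readiness_score(chart) >= threshold:
        return "submit"
    return "hold_for_review"  # review, provider query, or exclusion
```

Note how heavily MEAT compliance is weighted in this sketch: a chart that fails it cannot clear the threshold regardless of the other checks, which matches MEAT failures being among the most common RADV findings.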

Full Evidence Trails

One of the most time-consuming aspects of RADV response is locating and presenting the specific documentation that supports each submitted HCC. AI coding systems that generate evidence trails — linking every assigned code to the specific text in the medical record that supports it — dramatically reduce the time and effort required to respond to RADV record requests.

ANICA produces a complete evidence trail for every code assignment, identifying the exact clinical language in the provider note that supports the diagnosis. When a RADV audit request arrives, the evidence trail is already compiled and available, reducing record preparation time from weeks to hours.
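
The shape of such a trail can be sketched as a simple link record that ties each code to a character span in a note. The schema below is a hypothetical illustration, not ANICA's internal format.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EvidenceLink:
    """Ties one assigned code to the note text that supports it.

    Field names are a hypothetical illustration of an evidence-trail schema.
    """
    icd10_code: str
    hcc: str
    note_id: str
    char_start: int  # offset of the supporting span in the note text
    char_end: int

def supporting_text(note_text, link):
    """Recover the exact clinical language cited for a code assignment."""
    return note_text[link.char_start:link.char_end]
```

Storing offsets rather than copied text keeps the trail verifiable: the cited evidence can always be re-extracted from the note of record, so an auditor can confirm the citation was not edited after the fact.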

Prospective Audit Simulation

AI can simulate the RADV audit process across an organization's entire submission population before codes are submitted to CMS. By running the same validation logic that CMS auditors apply — verifying documentation support, MEAT compliance, code accuracy, and encounter linkage — AI systems can identify the exact charts and conditions that are most vulnerable to RADV findings.

With ANICA's 92.6% accuracy across ICD-10, HCC, and E/M coding categories, organizations can reduce their baseline error rate substantially before submission, minimizing the population of unsupported HCCs that could be caught in a RADV sample and extrapolated to the full contract.

Frequently Asked Questions

How often does CMS conduct RADV audits?

CMS conducts RADV audits on an ongoing basis and does not publish a fixed annual schedule. The number of contracts audited each year varies, but CMS has been increasing audit frequency as the program matures and the extrapolation rule takes effect. Any MA contract can be selected for RADV in any given year, and organizations should assume that they will be audited at some point and prepare accordingly.

Can MA plans appeal RADV audit findings?

Yes. CMS provides a formal appeals process for RADV findings. After receiving initial audit results, plans can submit additional documentation, request re-review of specific findings, and escalate disputes through CMS's administrative appeals process. However, the appeals process is time-consuming and the burden of proof falls on the plan to demonstrate that submitted codes were supported. The most effective strategy is to ensure documentation compliance before submission rather than relying on appeals after the fact.

Does RADV apply to all Medicare Advantage plans regardless of size?

RADV applies to all MA organizations that submit diagnosis codes for risk adjustment, regardless of contract size. However, CMS's selection methodology means that larger contracts and those with statistical risk score outliers are more likely to be selected for audit. Small plans are not exempt — CMS can and does audit contracts of all sizes.

What is the difference between RADV and a Risk Adjustment Data Certification (RADC)?

RADC is the annual attestation process in which MA organizations certify that the risk adjustment data they submitted is accurate and complete. RADC is a compliance requirement for all MA plans. RADV is the audit process through which CMS independently verifies the accuracy of submitted data by reviewing medical records. RADC is a self-certification; RADV is an external validation. Both are part of CMS's broader risk adjustment integrity framework, but RADV carries direct financial consequences through payment recovery.

How far back can CMS go with RADV audits?

Under the 2025 Final Rule, CMS can apply extrapolation to payment year 2018 and subsequent years. CMS generally initiates RADV audits for payment years that are two to four years in the past, but there is no statutory limitation that prevents auditing older payment years. Organizations should maintain medical records and coding documentation for a minimum of ten years to ensure they can respond to any retrospective audit request.

Conclusion: RADV Readiness Is a Financial Imperative

The combination of CMS's extrapolation authority and the ongoing scrutiny of Medicare Advantage coding accuracy means that RADV is no longer a distant compliance concern — it is an immediate financial risk that can be quantified in tens of millions of dollars per contract. Organizations that build RADV readiness into their prospective coding workflow — validating every chart, documenting every condition to MEAT standards, and generating evidence trails for every code — will be positioned to withstand RADV scrutiny. Those that continue to rely on retrospective correction and hope-based compliance will face escalating exposure.

AI-assisted coding is the most effective tool available for embedding RADV readiness at scale. ANICA is purpose-built for this challenge, with automated MEAT validation, per-chart RADV readiness scoring, full evidence trails, and prospective audit simulation integrated into every coding workflow. Schedule a demo to see how ANICA can reduce your RADV exposure and protect your risk adjustment revenue.


References: CMS Risk Adjustment Data Validation (RADV), CMS RADV Final Rule (CMS-4201-F), HHS OIG Medicare Advantage Audit Reports, CMS 2024 Rate Announcement and Final Call Letter, AAPC Risk Adjustment Coding Resources.