ACA risk adjustment transfers will be on EDGE

In 2019, the Centers for Medicare and Medicaid Services (CMS) will begin partially calibrating the HHS Hierarchical Condition Category (HHS-HCC) commercial risk adjustment model using actual Patient Protection and Affordable Care Act (ACA) experience from the 2016 EDGE server data submissions. Until now, CMS has calibrated the model solely on non-ACA data.
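In practical terms, a "partial" calibration amounts to blending coefficients solved separately on each calibration dataset. The sketch below is a hypothetical illustration of that blending; the condition categories, coefficient values, and weights are assumptions chosen for demonstration, not CMS's published 2019 figures.

```python
# Hypothetical illustration of blending risk adjustment model coefficients
# from multiple calibration datasets. The condition categories, coefficient
# values, and blending weights are made up for demonstration; they are not
# CMS's actual 2019 HHS-HCC figures.

# Coefficients solved separately on each calibration dataset
marketscan_coeffs = {"HCC_A": 1.20, "HCC_B": 0.45}   # non-ACA commercial data
edge_coeffs       = {"HCC_A": 1.05, "HCC_B": 0.60}   # 2016 EDGE server data

# Illustrative blending weights (assumed, not CMS's published weights)
weights = {"marketscan": 2 / 3, "edge": 1 / 3}

blended = {
    hcc: weights["marketscan"] * marketscan_coeffs[hcc]
         + weights["edge"] * edge_coeffs[hcc]
    for hcc in marketscan_coeffs
}

for hcc, coeff in blended.items():
    print(f"{hcc}: blended coefficient = {coeff:.3f}")
```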

This article by Milliman’s Zach Davis, Phil Ellenberg, and Brian Sweatman contains four interactive exhibits that let issuers review coefficients from the 2019 model, see how the EDGE data incorporated into the model will affect risk scores, and gauge the magnitude of the impact on an issuer’s risk adjustment transfer.

Maximize ACA risk adjustment with EDGE server action plan

Effective management of information entered into an External Data Gathering Environment (EDGE) server may save health plans millions of dollars in risk adjustment transfer payments. In this paper, Milliman’s Jason Petroske and Alan Vandagriff outline best practices that issuers should consider as part of their annual EDGE server submission cycles to maximize risk adjustment results.

Here’s an excerpt:

Complete and accurate data is a critical element in capturing—and, more importantly, in receiving compensation for—a health plan’s true level of risk. While navigating the first two years of EDGE submissions, we have mapped out a comprehensive action plan focused on three main areas that any issuer can integrate into its data management framework:

Establish a robust review and reconciliation process: Create a continuous process for reviewing and reconciling EDGE submissions against internal data sources. Identify key metrics for data completeness, and use the test environment to ensure each EDGE submission meets these standards before it is finalized in production.

Prioritize error corrections: Not all errors are created equal, so develop a strategic plan for correcting errors and improving data quality. Understand the economics of risk adjustment so that resources can be deployed where they will have the greatest effect.

Track data quality and establish benchmarks: Track and benchmark data quality and submission results over time. Look for patterns in errors or outliers relative to prior submissions, as these can signal systemic weaknesses in the overall data management process. (A sketch of these checks in code follows below.)
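To make the three practices concrete, here is a minimal sketch of how an issuer might automate them, assuming it can extract accepted-record counts from its EDGE submission reports and matching counts from an internal claims warehouse. Every function name, threshold, and dollar figure below is an illustrative assumption, not an EDGE server specification or CMS parameter.

```python
# Minimal sketch of an EDGE submission review process. All field names,
# thresholds, and dollar figures are illustrative assumptions; they are
# not EDGE server specifications or CMS parameters.

# 1. Review and reconciliation: compare accepted EDGE counts to internal data.
def completeness(edge_accepted: int, internal_total: int) -> float:
    """Share of internal records accepted on the EDGE server."""
    return edge_accepted / internal_total if internal_total else 0.0

# 2. Prioritize error corrections by estimated transfer impact. In a highly
# simplified view of the HHS transfer formula, a missed diagnosis lowers the
# plan's risk score, and the forgone transfer revenue is roughly the risk
# score shortfall times the statewide average premium.
def estimated_dollar_impact(risk_score_delta: float,
                            statewide_avg_premium: float) -> float:
    return risk_score_delta * statewide_avg_premium

# 3. Track quality over time and flag outliers against a benchmark built
# from prior submissions.
def flag_outlier(current_rate: float, historical_rates: list[float],
                 tolerance: float = 0.005) -> bool:
    benchmark = sum(historical_rates) / len(historical_rates)
    return abs(current_rate - benchmark) > tolerance

# Example: this cycle's medical claims submission (illustrative numbers).
rate = completeness(edge_accepted=97_500, internal_total=100_000)
print(f"Completeness: {rate:.1%}")

# An uncorrected error dropping 0.01 of plan-average risk score, at a $450
# monthly statewide average premium, costs roughly this per member per month:
print(f"Est. impact: ${estimated_dollar_impact(0.01, 450):.2f} PMPM")

if flag_outlier(rate, historical_rates=[0.985, 0.982, 0.988]):
    print("Completeness is an outlier vs. prior submissions - investigate.")
```

Under these assumed figures, the completeness rate falls noticeably below the historical benchmark and is flagged for investigation, which is exactly the kind of signal the excerpt suggests watching for across submission cycles.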