Tag Archives: Lynn Nemiccolo

Full steam ahead for Medicare/Medicaid EHR incentives

As reported in Modern Healthcare, the Centers for Medicare and Medicaid Services (CMS) has provided nearly $4.5 billion in electronic health record (EHR) incentive payments. About $2.4 billion of that is under Medicare, which began paying EHR incentives in May 2011. The rest is under Medicaid, which began in January 2011. Hospitals have received the bulk of the total in both programs, about $3.1 billion, and the program has more than 225,000 active accounts. The incentives are part of the American Recovery and Reinvestment Act of 2009, which authorized $19 billion for the program.

For a couple of Milliman perspectives on the EHR program, check out this paper on state healthcare data exchanges, which points out how:

Community-based data pooling initiatives in Minnesota, Massachusetts, Oregon, Washington, and Wisconsin (commonly known as Chartered Value Exchanges or CVEs) have already shown that, at least using administrative data, it is possible to bring stakeholders to the table, get appropriate infrastructure in place, and begin using community health data to improve quality and transparency. These organizations may serve as models or building blocks for more meaningful use of EHR data nationally.

It’s also worth taking a look at this 2009 research report, “The Convergence of Quality and Efficiency and the Role of Information Technology in Healthcare Reform,” which discusses how EHR can move beyond administrative efficiency to become a decision support tool for physicians.

You can download the latest EHR program data from CMS here.

Accountability starts with reducing waste

The Naples News takes a local look at a national trend: The move toward accountable care organizations (ACOs). Here’s an excerpt:

The American Hospital Association sent a nine-page letter Nov. 17 to the federal Centers for Medicare and Medicaid Services, which outlines the need for flexibility in how ACOs are structured, for starting slowly and initially focusing on a small set of quality measurements, and for addressing how seasonal residents will be treated, among other issues.

The reimbursement piece of an ACO is the hard part, [Dr. Allen] Weiss [president and chief executive officer of the NCH Healthcare System in Collier County] said. For certain, all the ACO partners would have to agree to share clinical information about patients and use a care approach that’s proven effective, what’s called evidence-based medicine, he said.

Patients also would have a role to play, taking part in prevention and wellness programs, he said. Each would be assigned a mid-level professional as a caregiver.

“Eighty percent of what a primary-care doctor does can be done by a mid-level provider and 64 percent of what patients go to the doctor for can be done over the phone, when a mid-level provider knows the patient,” Weiss said…

A leading national health-care group, Milliman, determined that 25 percent of total health-care spending in 2008, or about $600 billion, was wasted, Weiss said.

“Milliman defines waste or inefficiency in the health-care system as treatment that is unnecessary, redundant, or ineffective and is contrary to, or not demonstrably associated with health-care quality and outcomes,” he said. “What are they (the federal government) going to use for money? The 25 percent waste is what we are really going after.”

Barriers to the adoption of clinical decision support tools

What follows is excerpted from “The convergence of quality and efficiency,” by Helen Blumen and Lynn Nemiccolo. Today’s discussion continues where yesterday left off, explaining resistance to the convergence concept.

One of the major barriers identified in the NEJM study was physician resistance: in the hospitals surveyed, 36% of those without an EHR cited physician resistance as a barrier.[34] Physicians resist the use of CDS tools for a variety of reasons, chief among them the belief that using an EHR and CDS tools will decrease clinical productivity and affect financial reimbursement. Other reasons range from not wanting a computer system to infringe on their decision making to a phenomenon known as alert fatigue, which develops when physicians are exposed to poorly implemented EHRs that continuously warn them of possible problems as they work in the system. Moreover, many of the CDS tools in use today were developed without clinician input, increasing resistance to their use. But if guidelines can be used to fine-tune EHRs, and EHRs can inform the creation of more user-efficient guidelines, what can emerge is a usability feedback loop: a self-improving system that avoids the dynamic of alert fatigue.

Resistance to best-practice guidelines

What follows is excerpted from “The convergence of quality and efficiency,” by Helen Blumen and Lynn Nemiccolo.

Unfortunately, even when guidelines score well using the AGREE instrument, providers may still view them negatively. One example is an examination of guidelines conducted by researchers at RAND. One question within the AGREE rigor-of-development domain requires that guidelines be reviewed by experts in both clinical content and guideline methodology who are external to the developing organization, but it does not provide detailed specifications for the review. RAND researchers examined a set of guidelines for common, expensive diagnostic testing and treatment of musculoskeletal disorders of the spine and extremities. The researchers gave these guidelines high scores using the AGREE instrument. They then asked a panel of providers (recommended by their specialty societies as leaders in their clinical fields) to rate the guidelines' relevance to common clinical situations and their consistency with clinical understanding of existing evidence and expert opinion. The panelists thought that the guidelines did not address common clinical situations, and reported that they often disagreed with the guideline developers' interpretation of published evidence and clinical experience. In the face of findings of this nature, it is not surprising that guideline adherence in real-world situations may be imperfect, or that implementation of guidelines in healthcare systems may be met with resistance.

Delivering best-practice guidelines

What follows is excerpted from “The convergence of quality and efficiency.” Click here to read the full paper or view citations.

Clearly, acquisition of high-quality evidence is essential to improving the quality of healthcare and reducing costly errors. Tools such as evidence-based guidelines exist to deliver the evidence to providers so they can easily use the information in clinical practice. Such guidelines can reduce the variation in care generated by differences in practice style, especially when uncertainty exists about the best treatment approach.

Independently developed, evidence-based guidelines can standardize the delivery of healthcare to best practices. This means that only efficacious treatments and tests are recommended, and only for patients likely to benefit from them. It also means that patients are treated in hospitals only for as long as needed, reducing the hazards of infections, falls, and medication errors. By recommending care for which there is evidence of appropriateness, and dissuading care that the evidence shows is inappropriate, providers can improve quality and efficiency at the same time.

Of course, acceptance of guidelines and wide implementation depend on providers’ confidence that the guidelines are developed by truly independent arbiters of what defines appropriate healthcare. To date, unfortunately, acceptance has been slow to develop. Thus, the promise of guidelines to influence practice style and change providers’ behavior has not been realized.

One reason for the lack of acceptance is that the evidence base is incomplete and multiple competing standards exist, leading at times to uncertainty about best care practices. A natural question to ask, then, is: are there "standards for the standards"? How can policymakers, payors, and providers determine which guidelines and standards should be adopted? In response, guideline developers have created a set of standards, the AGREE instrument, by which to judge the quality of healthcare guidelines.

Setting standards for the standards

What follows is excerpted from “The convergence of quality and efficiency.” Click here to read the full paper or view citations.

The Appraisal of Guidelines for Research & Evaluation (AGREE) instrument consists of 23 items organized into six domains, each of which captures a separate dimension of guideline quality: scope and purpose, stakeholder involvement, rigor of development, clarity and presentation, applicability, and editorial independence.
