August 2019
CAC: Temper Expectations With Reality
By Elizabeth S. Goar
For The Record
Vol. 31 No. 7 P. 18
When it comes to computer-assisted coding, perhaps it’s best to follow the teachings of a great Stoic philosopher: Don’t demand that things happen as you wish, but wish that they happen as they do happen, and you will go on well.
Since it burst onto the scene more than 15 years ago, computer-assisted coding (CAC) has undergone a major image transformation. Originally coined by an AHIMA task force, the term referred to any technology that would enhance the manual coding process, including encoder and grouping software, natural language processing systems, and chargemaster optimization software.
Over time, the definition has changed.
“Today, it has evolved to refer to primarily software used to generate ICD and CPT codes based on documentation electronically digested and interpreted as the most likely codes to be utilized in a particular document or episode of care,” says Darice Grzybowski, MA, RHIA, FAHIMA, president of HIMentors, LLC, who was a member of that task force. “CAC was coined as a much preferred industry phrase to use rather than ‘autocoding,’ as it was important to distinguish that in most cases, human coder intervention and editing was, and still is, going to be required for a significant number of cases in order to achieve the accuracy required that coding and reimbursement guidelines demand.”
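To make the distinction between “autocoding” and computer-assisted coding concrete, the minimal Python sketch below suggests ICD-10-CM codes from a snippet of documentation and flags low-confidence suggestions for coder review. The keyword table, confidence scores, and review threshold are illustrative assumptions only; production CAC engines rely on trained natural language processing models and curated terminologies, not keyword lookups.

```python
# Minimal sketch of CAC-style code suggestion (illustrative assumptions only).
from dataclasses import dataclass

# Hypothetical phrase-to-code map with assumed confidence scores.
CODE_HINTS = {
    "community-acquired pneumonia": ("J18.9", 0.92),
    "type 2 diabetes": ("E11.9", 0.88),
    "chest pain": ("R07.9", 0.55),
}

REVIEW_THRESHOLD = 0.80  # assumed cutoff below which a coder must edit or confirm

@dataclass
class Suggestion:
    code: str
    confidence: float
    evidence: str       # documentation text that triggered the suggestion
    needs_review: bool  # True when human intervention is clearly required

def suggest_codes(note: str) -> list[Suggestion]:
    """Return candidate codes; every suggestion remains subject to coder editing."""
    text = note.lower()
    return [
        Suggestion(code, conf, phrase, conf < REVIEW_THRESHOLD)
        for phrase, (code, conf) in CODE_HINTS.items()
        if phrase in text
    ]

if __name__ == "__main__":
    note = ("Assessment: community-acquired pneumonia; history of type 2 diabetes. "
            "Patient also reports chest pain.")
    for s in suggest_codes(note):
        action = "route to coder" if s.needs_review else "present for coder approval"
        print(f"{s.code}  conf={s.confidence:.2f}  evidence='{s.evidence}'  -> {action}")
```

Even the high-confidence candidates are presented for approval rather than billed automatically, which is the human-in-the-loop distinction the term CAC was meant to preserve.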
A Brief History
While the phrase CAC may have been adopted to avoid the impression that the technology was meant to replace coders, the name did little to dispel that notion. According to Alex Lennox-Miller, a senior analyst with Chilmark Research, CAC first emerged in response to health care organizations’ concerns about the impact the upcoming move to ICD-10 would have on their coding processes.
“This is a significant factor in the revenue cycle for health care organizations and something almost all struggle with. If they encountered a major disruption with ICD-10, it would be a huge issue,” he says. “One of the promises with CAC was that it would ease the transition. Another was that it would replace [some] of the coders needed … because it was going to improve accuracy and have the potential to not just preserve but also add to revenues by [allowing] billing for the highest possible severity.”
Lennox-Miller adds, “It failed most spectacularly with replacing coders.”
ICD-10 was far more complex than ICD-9, both in terms of coding and of contracting between payers and providers. Coder proficiency in the new code set therefore required familiarity not only with the codes themselves but also with how each individual payer required them to be used for billing. As a result, coders needed to factor many more variables into the process.
“The other side, which is only now beginning to be appreciated, is that a lot of this comes down to the provider documentation and notes. There are a lot of variations there, and one of the things that a really good coder can do is pull all those details out from very inconsistent documentation,” Lennox-Miller says.
Even today’s natural language processing systems, with their sophisticated algorithms, cannot scan documentation and extract the appropriate codes with full accuracy. Without coders at least reviewing the output, CAC wound up being a bane to the revenue cycle as denials and appeals delayed reimbursement.
“The biggest takeaway is that CAC works best when supplementing the human process rather than replacing it,” Lennox-Miller says. “When you talk about setting unrealistic expectations, the biggest one is that you’re going to take people totally out of the picture. … The better an organization is at understanding that, the more effective CAC becomes.”
Unrealistic Expectations
Replacing coders is one of the most enduring myths about CAC, but it’s far from the only unrealistic expectation health care organizations hold about the technology. Grzybowski notes that a common misperception is that CAC is optimal technology for every type of patient encounter and every type of facility.
Successful use of CAC has more to do with the types of manual processes in place and the robustness and integrity of the existing electronic documentation, she says.
“If you work in an organization that currently does not have a good electronic forms inventory, format, and control process; has existing problems in workflow that prevent identification of records that are in a true ‘completed’ stage; and has issues with excessive copy-and-paste duplication, errors in patient status categorization, delays in diagnostic report results making their way to the final legal medical record, and frequent text corrections, you may not be ready to tackle a CAC system,” Grzybowski says.
In terms of time savings, a topic often broached in discussions of expectations for CAC, Grzybowski notes that there are documented cases where implementation of CAC technologies has accelerated the coding process. However, the gains were usually due to preimplementation improvements to workflow and documentation or because the staff stopped verifying the system’s coding output.
The idea that CAC is infallible is another unrealistic expectation, according to Heather Eminger-Gladden, CCS, CAC/clinical documentation improvement (CDI) product manager with Dolbey Systems. No one, she says, should ever expect perfect code suggestions from any system.
“Anyone familiar with the complexities of coding or the variability in the documentation will understand that there will always be codes that need to be added or edited by coding staff. … Even with the expansion of the code sets with ICD-10, medical coding is not binary; there are nuances based upon the original documentation or facility-based policy that require oversight by the coder,” she says.
It is equally unrealistic to view CAC as merely a tool for suggesting codes, Eminger-Gladden adds.
“When the technology was originally designed, this was its main purpose. The technology has evolved to now automate workflow, prioritize charts, provide a collaborative workspace, and be a comprehensive reporting tool,” she says.
Grounded in Reality
Eminger-Gladden notes that CAC played a key role in helping many facilities through the transition to ICD-10. More recently, its purpose has expanded beyond coding to include CDI, case management, patient placement, and auditing.
As a result, it is realistic to expect CAC to help improve processes built around the data points these groups share by giving them real-time access to that data, thereby elevating the quality of documentation and, ultimately, patient outcomes.
Additionally, “it is safe to expect that coder productivity will increase with a robust CAC solution,” Eminger-Gladden says. “Each customer experience will differ slightly based on how the system is designed, [which] will always be affected by the amount of data that are available electronically and which tertiary applications can feed data to your CAC system. Manual facility processes that can be streamlined and staff enthusiasm for positive change will also affect these outcomes.”
A successful implementation, according to Eminger-Gladden, can produce decreases in accounts receivable days, discharged not final coded, contract coder expenses, and coder error. It should also result in fewer denials, improved case coverage and case mix index, more accurate quality ratings, greater auditing bandwidth and data analytics, and an improved ability to trend and identify educational opportunities.
Grzybowski notes that “CAC is a large financial and time commitment, and an organization must be prepared for excellent data transmission and interfacing to many different feeds and systems that touch the patient’s record.”
However, organizations should expect their CAC system to assist in creating enhanced workflows and to ease the identification and attribution of text as part of the coding process. This will, in turn, save coding staff time as they review what can often be a voluminous amount of documentation and data.
According to Raemarie Jimenez, CPC, CPB, CPMA, CPPM, CPC-I, CCS, vice president for certifications and member development for AAPC, CAC’s ability to improve productivity and accuracy means it’s realistic to expect coders to be freed to handle the more difficult cases. Permitting CAC to generate codes for more routine, standardized items such as radiology, so that coders need only approve and move on to the next case, translates into more time for them to apply their expertise and critical thinking to more complex needs such as surgical cases and denials.
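As a rough illustration of that division of labor, the hypothetical Python sketch below routes encounters either to a quick-approve queue backed by CAC suggestions or to a complex-case queue worked primarily by coders. The case types, queue names, and routing rules are assumptions made up for this example, not AAPC guidance or any vendor’s workflow.

```python
# Hypothetical work-queue triage: routine cases get CAC suggestions for quick
# approval, while complex cases go straight to experienced coders.
ROUTINE_CASE_TYPES = {"radiology", "screening", "routine office visit"}  # assumed
COMPLEX_CASE_TYPES = {"surgery", "denial rework", "multi-trauma"}        # assumed

def route_case(case_type: str, cac_confidence: float) -> str:
    """Decide which work queue an encounter lands in (illustrative rules only)."""
    if case_type in ROUTINE_CASE_TYPES and cac_confidence >= 0.90:
        return "quick-approve queue: coder approves and moves on"
    if case_type in COMPLEX_CASE_TYPES:
        return "complex queue: coder codes the case with CAC output as a reference"
    return "standard queue: coder reviews and edits the CAC suggestions"

if __name__ == "__main__":
    for case, conf in [("radiology", 0.95), ("surgery", 0.97), ("office visit", 0.70)]:
        print(f"{case:15} -> {route_case(case, conf)}")
```

The point of the sketch is the split itself: the technology clears routine volume while coders keep the judgment calls that, as Jimenez notes, computers cannot make.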
“One thing that concerns our members is … displacing people, [but CAC] actually enhances coders’ roles and allows them to focus on more difficult tasks that computers can’t do,” Jimenez says. “It lets them focus on other, more important, things.”
The key, she notes, is to involve coders in the decision-making process during CAC selection and implementation.
“Anytime you buy technology, it’s so important to involve the people who will be using it before you buy. Let them test it,” Jimenez says. “Too often, they have to build a workaround or use something else because they didn’t think through that aspect. Hospitals are notorious for having this happen because they don’t vet out the idea with those who will use it.”
Getting Expectations Right
When expectations are properly managed, it sets the stage for a successful long-term CAC relationship. That was the experience of Children’s Health in Dallas, an early adopter of the technology, having gone live in 2012 in preparation for ICD-10 and the state’s move to prospective payment for Medicaid.
To effectively manage the two large-scale initiatives, Children’s Health identified the need for a scalable technology that would integrate seamlessly into coder workflow. The organization opted for a system that included an advanced analyzer, all patient refined diagnosis-related groups (DRGs), CDI, CAC, and medical necessity. Two years later, potentially preventable events were added in response to Texas’ focus on potentially preventable readmissions, complications, and emergency department events.
In 2015, an incremental rollout of automatic specificity queries was launched, followed in 2016 by a professional coding tool that uses machine learning to compare evaluation and management codes assigned by providers with the supporting documentation. Finally, in late 2017, work began on automated generation of working DRGs.
“Our early expectation with CAC was that it would increase productivity; it did, by 20%,” says Katherine Lusk, MHSM, RHIA, FAHIMA, chief HIM and exchange officer at Children’s Health. “We did not expect to see an increase in the number of diagnoses captured, but we did. In fact, we increased from an average of 6.7 diagnoses per inpatient chart to 9.6. This trend has continued since 2013. As the number of diagnoses captured grew, so did our case mix index. This was a welcome surprise for us as we were preparing for prospective payment.”
Lusk says that as machine learning within CAC continues to improve, it increases efficiencies by giving coders the extra time they need to become more adept at classifying complex cases. For example, the autogenerated specificity queries that leverage CAC produce savings Lusk says equate to about two full-time employees, “once again freeing up our staff to concentrate on the more complex cases.
“Technology is a part of our everyday lives. Having these tools available to assist with documentation at the time it is needed—while not taking away from the discipline-specific knowledge—has been key to successful adoption in our institution.”
— Elizabeth S. Goar is a freelance writer based in Tampa, Florida.