ments and/or screenshots to present the various data tables typically allows for rapid distribution of content and feedback exchange without getting tied down in the mechanics of the system. Once consensus on content is gained and the data tables are approved, they can be assigned to the design-build team for translation into an application staging environment. From here, the CSWG can review, validate form and function, comment on the look and feel, and determine whether or not it will meet the business and specialty requirements.

Kevin Mullen is project director, Massachusetts eHealth Collaborative.

For more on Massachusetts eHealth Collaborative: www.rsleads.com/204ht-206
A data evaluation and migration plan was developed for each practice install. Wherever possible and appropriate, the project leveraged shared tables for referring providers, insurances, ICD-9/CPT tables, pharmacy data, visit and appointment codes, browse tables and pick lists, etc. One specific area that created some downstream problems was the variability of insurance tables. Practices that utilized a third-party billing system or had elected to have a PM interface needed to retain links to existing insurance tables. The resulting variability and mix of insurance codes, identifiers and clearinghouse vendors created an attribution dilemma down the road when attempting to track payer- and plan-specific activity for pay-for-performance (P4P) incentives. An external mapping and assignment process had to be developed as a bridge solution.

At a high level, certain specialty content areas can and should be designed in advance of any provider-specific customization. The primary focus should be on developing the right framework to facilitate quality- and specialty-relevant data capture. One lesson learned was that in some areas we were over-building the system, and it took some trial and error to find a balance between a blank-slate, out-of-the-box environment and fully prescribed and regimented progress notes and treatment plans. In the case of progress-note templates, we found that too much detail was being applied in advance. It became confusing, and it was not conducive for a provider to customize. However, we did find value in developing other areas, such as clinical decision-support utilities – alerts (for prescription, lab, immunization, etc.) that can be established for select patient populations. Also, setting up the framework for order sets, with a recommended starter set of treatment protocols for common visit types and disease conditions, was useful.
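The external mapping bridge described above can be sketched as a simple crosswalk: each practice's local insurance code is normalized to a single master payer/plan identifier before P4P attribution is attempted. The codes, names and `normalize_payer` helper below are hypothetical illustrations of the idea, not the project's actual implementation.

```python
# Hypothetical crosswalk from practice-local insurance codes to a
# master (payer, plan) pair used for P4P attribution. All codes and
# payer names here are illustrative, not the project's real data.
CROSSWALK = {
    # (practice_id, local_insurance_code) -> (payer, plan)
    ("practice_a", "BCBS01"): ("Blue Cross Blue Shield", "HMO Blue"),
    ("practice_b", "BC-17"):  ("Blue Cross Blue Shield", "HMO Blue"),
    ("practice_a", "HPHC"):   ("Harvard Pilgrim", "Standard"),
}

def normalize_payer(practice_id, local_code):
    """Map a practice-local insurance code to a master (payer, plan) pair.

    Unmapped codes are flagged rather than silently dropped, so that
    attribution gaps remain visible for manual assignment.
    """
    key = (practice_id, local_code.strip().upper())
    return CROSSWALK.get(key, ("UNMAPPED", local_code))

# Two practices' different local codes resolve to the same payer/plan,
# which is exactly what payer-specific incentive tracking needs.
assert normalize_payer("practice_a", "bcbs01") == normalize_payer("practice_b", "BC-17")
```

The key design choice is that unmapped codes surface as "UNMAPPED" records instead of disappearing, mirroring the article's point that variable insurance tables create attribution gaps that must be worked off manually.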
Again, the pre-build process only makes sense if you have adequate specialty representation in the CSWG (i.e., pediatricians informing pediatric content). The EHR vendor can and will pre-load a lot of specialty content, but our experience suggests that a deliberate effort to provide a quality, relevant framework of content went a long way toward adoption.
Workflow optimization and training

The remaining application customization became a component of the practice-level workflow-optimization process with providers and clinical and administrative support teams. Through interview and observation, practice consultant teams performed workflow assessments, identified gaps, determined specialty and practice requirements and prepared future-state workflows and transition plans. The implementation team developed a set of best-practice workflow recommendations for key functions within the practice: registration/scheduling, new/established patient flow, e-prescribing and refills, referrals, doc-folder management, in-office testing, and orders and lab/radiology management. Within each of the workflows, a deliberate effort was made to highlight key data-capture points and preferred entry methods, with specific emphasis on data sets that fed quality and performance objectives. Finally, a master training plan was developed to support the workflow plan and to reaffirm the critical, high-value areas.
Quality measurement and data acquisition

It is important to assemble and organize the various quality-reporting recipients at stake, including CMS-MU, CMS-PQRS, Public Health, NCQA and Commercial Payer P4P, so that you can compare and prioritize the individual quality measures for the group at large and for the individual practices. If possible, the group should conduct a thorough evaluation of the measures, their definitions, target populations, inclusion/exclusion criteria and reporting periods. Although there is substantial overlap between measures, very few have the exact same definition across the criteria categories. Nonetheless, you should compare the measure definitions, identify duplication and, if needed, develop a clear measure-consolidation process.

Key objectives for quality measurement and reporting:
• Define measure definitions and specifications;
• Prioritize measures;
• Conduct a gap analysis between new and existing measures;
• Build upon synergies between meaningful use and other reporting requirements;
• Identify data-capture requirements for measures;
• Develop policies and procedures for capturing data and the frequency of data capture;
• Determine format and structure of reports;
• Identify frequency of report generation; and
• Identify types of reports needed (quality and management reports).
For BIDPO, the initial priority measure set included:
• 44 meaningful-use measures;
• 24 PQRS measures; and
• 35 contract-incentive measures.
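The comparison step above — lining up measure definitions across programs to find duplication — can be sketched by grouping measures on a normalized definition key. The measure names, fields and programs below are illustrative placeholders, not the actual BIDPO measure set.

```python
# Hypothetical sketch of measure-consolidation triage: group quality
# measures from different reporting programs by the fields that define
# them, so near-duplicates surface as consolidation candidates.
# Measure names, populations and periods are illustrative only.
from collections import defaultdict

measures = [
    {"program": "CMS-MU", "name": "Diabetes: HbA1c poor control",
     "population": "18-75", "period": "12mo"},
    {"program": "PQRS", "name": "Diabetes: HbA1c Poor Control",
     "population": "18-75", "period": "12mo"},
    {"program": "P4P", "name": "Hypertension: BP control",
     "population": "18-85", "period": "12mo"},
]

def definition_key(measure):
    """Normalize the fields that define a measure for overlap detection."""
    return (measure["name"].lower(), measure["population"], measure["period"])

groups = defaultdict(list)
for m in measures:
    groups[definition_key(m)].append(m["program"])

# Measures reported under more than one program are candidates for a
# single consolidated definition; the rest stay program-specific.
overlaps = {key: programs for key, programs in groups.items() if len(programs) > 1}
```

In this sketch the two diabetes measures collapse into one candidate spanning CMS-MU and PQRS, while the hypertension measure stays program-specific — mirroring the article's observation that overlap is substantial but exact definition matches are rare, since any difference in population or period keeps measures separate.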
This is a critical process and represents a juncture for many health systems as they evaluate the need for investing in enterprise business intelligence, community analytics and quality data-management solutions. Regardless, the group
HEALTH MANAGEMENT TECHNOLOGY April 2012 19