Friday, May 27, 2011

That Harmonious Spring Break Happened

Well, my big piano recital, for which I had been practicing for a year, took place last weekend. I was happy to play, and now I'll be a little less intense for a while before I start preparing for the MusicAppassionata piano festival in West Chester, PA, August 1-6.

I've uploaded most of my recital performances to YouTube. To find them, search in YouTube for David Tao piano recital and they'll show up. The camcorder sound was compressed, meaning that the soft parts were made louder and the loud parts were softened, so there appears to be very little dynamic variation. Trust me, I actually can play soft, sometimes!

I've related music and its performance to interoperability several times in this blog. One thing a recital teaches you is that things go awry that you totally didn't expect, but you move on. You can't anticipate everything: some keys making unwanted noises, wheel locks failing to keep the piano from rolling away, slight movements in the audience causing a brief distraction and memory lapse, and so on. Despite avoiding sugar and caffeine and consciously trying to stay under control (knowing adrenaline would amp me up anyway), I still played faster than planned. Similarly, implementing interoperability in the real world seldom goes as you envisioned it, or as cleanly as what's written in the standards or even developed in a product, because no participant has control over the other participants' systems or the environment. But every time you do it, you learn and improve for the next time.

Happy Memorial Day Holiday Weekend to everyone!

David

Friday, May 13, 2011

Certification Retrospective Part 4 – Lessons Learned

Note: this blog post concludes my discussion of experiences in CCHIT. The opinions are mine, and do not necessarily reflect the views of CCHIT or Siemens.

In life in general, and in the development of software, standards, and certification criteria, we learn and grow more by trial and error than we do when everything goes smoothly. Frederick Brooks’ classic software engineering book The Mythical Man-Month said, “Plan to throw one away; you will, anyhow.” Hence we need prototyping and iterative refinement. The CDA Consolidation Project is fresh in my mind. I voted negative in its HL7 ballot, but that doesn’t mean it won’t succeed, only that it has issues to fix. It would have been a miracle to get that complex task right the first time.

So it is with certification. The CCHIT Interoperability Workgroup (IOWG) experienced growing pains in its early years: 2006 produced only a roadmap, 2007 produced the first (modest) interoperability criteria, 2008 brought stronger criteria and a roadmap, and 2009 was poised to make big strides toward semantic interoperability. What were my main lessons learned through these years?

What We Got Right (IMHO):
  • The “methodical march” sequence of building blocks. Putting the horse before the cart sure helps! As described in Method to our Madness?, we proposed a logical sequence for interoperability:
    1. Data Capture
    2. Liquidity
    3. Standardized Content
    4. Data Consumption
Our views crystallized especially in 2008-2009. CCHIT’s criteria included many of the standards that were eventually endorsed by ONC: for example, Lab Results, CCD, e-prescribing, vocabularies (LOINC, RxNorm, SNOMED CT), and leveraging HITSP.
  • The Roadmap (“Glide Path”) Concept. I love maps (online, GPS, paper) and want to know ahead of time where I’m going and the milestones along the way. CCHIT published roadmaps for the next two years beyond each certification year. The roadmaps aimed to provide adequate lead time and to encourage developers to plan and head in the right direction. While the HIT Standards Committee did a great job proposing a “Glide Path” to more robust standards in late 2009 (see Jamie Ferguson’s August 20, 2009 Clinical Operations Workgroup presentation), that concept was mostly lost when the ONC regulations were published (see the comments on John Halamka’s Feb. 10, 2010 blog post). For interoperability, developers need to know the target standards, more than a high-level meaningful use matrix, to move forward.
  • Staying the Course. Once we defined the roadmap, we might adjust timing and provide more detailed criteria in our next development cycle, but overall there was continuity and stability with a minimum of “surprises.”
  • Transparency. CCHIT posted responses to every public comment on its website. Of course not everyone would agree, but we thought everyone deserved to know that their comments had been considered and to understand our rationale for decisions.
  • Teamwork. The IOWG built relationships and trust among multiple stakeholders within CCHIT, as mentioned in Part 1. That enabled us to work through differences and get things done, despite conflicting opinions. In contrast, it’s harder to capture the “team” element in groups where 50-100 people are nominally on a project but attendance is sporadic, yet I understand the need for openness and transparency. Ultimately, a small group forms a cohesive team within the larger group. For example, the Documentation and Testing workgroup within the Direct Project gradually developed a “team” spirit similar to what the IOWG had.
Regrets:
  • Lack of Opportunity to Finish What We Started. Last week I used football; now here’s my first baseball analogy. CCHIT, as the “starting pitcher” for EHR certification, was relieved by ONC in the 6th inning – enough to get a quality start, but not enough to win the game. Our methodical march didn’t reach its goal, but I hope that ONC will be a good “closer” to save the win for interoperable EHRs, coordination of care, and the good of the patient.
  • Lack of End-to-End Coverage. Interoperability involves senders, receivers/requesters, and perhaps intermediaries. To succeed, it must encompass all participants (e.g., EHRs, HIEs, PHRs, labs, pharmacies, public agencies), because so much exchange goes beyond EHRs. We didn’t achieve this in CCHIT, but not for lack of trying! We envisioned covering some end-to-end interoperability through certification programs for HIEs and PHRs in addition to EHRs. Excellent people were in the HIE and PHR workgroups, and the IOWG had good dialogues with both. The HIE program was launched but saw poor uptake, and it was suspended in light of impending ONC program changes. The PHR program was not launched.
  • All-or-Nothing Approach. CCHIT pondered a modular certification approach early on, especially for inpatient EHRs, but didn’t implement it until ONC mandated it in the Certification Final Rule. In hindsight, I wish we had done that sooner. I don’t think we needed an “all or nothing” approach. Truth in labeling would have disclosed each product’s gaps, to help people make informed decisions. Vendors with complete EHR functionality could still differentiate themselves as a “one-stop shop” but others (“little guys”) would not be excluded from certification just because they didn’t provide 100% of all capabilities.
  • A year in limbo. From mid-2009 through mid-2010, the industry was paralyzed by uncertainty and lost momentum on interoperability. It was clear that CCHIT criteria wouldn’t be adopted as is by ONC, but no one knew what would replace them. There was no assurance of staying the course or a clear glide path, and people who had done their best to follow the CCHIT roadmap were concerned and confused. Interoperability features (such as standardized coding and discrete data consumption) that the IOWG expected to be in products in 2010 have been delayed, possibly to MU Stages 2 or 3.

Whether from what we got right or from what I regret, there are lessons learned that can be applied going forward.
  • Follow the methodical march – a logical sequence of capture, liquidity, standardization, and consumption.
  • Define a roadmap (glide path) with ample lead time.
  • Stay the course.
  • Take an end-to-end perspective.
  • Be transparent.
  • Be flexible, not “all or nothing.”
That’s it! This has been a long blog series, and if you made it to the end, thanks!

Thursday, May 5, 2011

Certification Retrospective Part 3 – Controversies Along the Way

Note: this blog post continues my discussion of experiences in CCHIT. The opinions are mine only and do not necessarily reflect the views of CCHIT or Siemens. Please bear this in mind, especially considering that the topic is “Controversies.”

In my last Certification Retrospective post, I promised to share lessons learned and some of the “agonizing controversies” we faced in the CCHIT Interoperability Workgroup (IOWG). I thought I could do both in one post, but now realize that I’ll need a “Fourth Movement” after this. Each controversy could easily take up a post of its own, but I’ll try to squeeze them into this post, deferring Lessons Learned to Part 4.
  1. When should HITSP specs be incorporated into certification? Once HITSP was formed, it (not CCHIT) was designated to select standards. CCHIT, however, could decide when/if those standards would be certified, striking a difficult balance between supporting the federal strategy (HITSP was HHS-funded, CCHIT was partially HHS-funded) and being pragmatic. It would have been difficult to require EHRs to develop too much at once when certification was “all or nothing” and not modular. Having worked in both CCHIT and HITSP, I saw that CCHIT tended to give more weight to considerations such as market adoption and lead times, not just whether a suitable standard existed. Jamie Ferguson and I were appointed to co-chair the 10-member HITSP-CCHIT joint working group, whose goal was to coordinate the work of HITSP and CCHIT and address how/when to include HITSP specs in certification. But some HITSP-selected standards were still in “trial implementation” with little adoption, and the approval of a standard (HL7, HITSP, or any other) didn’t mean it should be mandatory within a specific timeframe. And there were just so many of them to digest, let alone certify! Some people weren’t pleased, thinking that CCHIT should be a more aggressive “enforcement arm” of HITSP. In the end, we included more of HITSP in the 2008 and 2009 certification criteria than ONC did in its 2011 certification criteria, most notably the HL7 2.5.1 Laboratory results with LOINC vocabulary starting in 2007, the C32 CCD starting in 2008 (adding RxNorm, UNII, and SNOMED CT vocabularies in 2009), and HITSP TP13 (IHE XDS.b – see point #2 below). But we also roadmapped (2-3 years out) most other HITSP specs when we published our 2009 criteria. I believe this was a sensible middle-of-the-road approach. IMHO, if we had mandated many more HITSP specs in certification, there would be few or zero certified products today!
  2. Should certification require transport standards? Fourth & Ten – we’d better punt! Seriously, we debated this during 2008 and 2009. We recognized this as a glaring gap in interoperability, and wanted to make headway, realizing that lack of secure transport standards would hinder interoperability even if content were standardized. The Direct Project didn’t exist back then. We were aware of diverse ways that HL7 messages were transported, that IHE had defined SOAP-based web services for document transport, and that HITSP had selected many IHE specifications. Many vendors had already tested IHE Cross-Enterprise Document Sharing (XDS.b) at Connectathon. So the IOWG approved XDS.b as 2009 certification criteria (along with PIX/PDQ to support it). But the Board of Commissioners decided to designate these as optional interoperability criteria, not required for certification, because despite the IOWG’s recommendation and strong EHRA support, some commented against it. This was the one case I can recall where my workgroup’s recommendation was overridden by the Board, which I know was a tough decision since the Board usually didn’t do that. This was also a case where we were less aggressive in CCHIT than the EHR Association had proposed. EHRA had sometimes criticized CCHIT for not providing enough lead time, but in this case they were very disappointed that CCHIT had not required this in certification. This one still bothers me as a missed opportunity.
  3. Should we require discrete data import, and who owns that decision anyway? We felt that the main point of requiring standardized content formats and vocabularies, not just human readability/liquidity, was for the discrete data to be consumable. Following our “methodical march” mentioned in Part 2, we thought in 2008 that the time had come to propose discrete data import (e.g., updating your medication, allergy, or problem lists using data from other providers). But now we were into a “boundary condition.” Where did interoperability’s “turf” end and functionality begin? We had a series of “discrete data import” meetings between the IOWG and the Ambulatory and Inpatient Functionality workgroups. We recognized that data must not only be exchanged, it must also be reconciled (medication reconciliation being an example), and that you can’t just “slam in” all meds, allergies, and problems from many external sources to automatically update the active med/allergy/problem list. The functionality workgroups didn’t think the time was right to require such capability in certification for 2009 or 2010. But how were we going to keep making progress toward semantic interoperability? In the end, we proposed criteria to import very specific discrete data elements for medications, allergies, demographics, and immunizations (with problems to come later), as described in my last post; a brief illustrative sketch of this kind of discrete data consumption follows this list. But we didn’t define the functionality, as long as the discrete data were stored somewhere. And in any case, reconciliation criteria would belong to the functionality workgroups, not the IOWG. I think this would have been a reasonable “baby step,” but because of the change in certification that started in 2009, those roadmap criteria never had the chance to be certified. I hope that similar careful thought will go into decisions in ONC certification, whenever it gets to that point. End-to-end interoperability involves content creation, secure standardized transport, and content consumption, all of which should be considered holistically.
  4. Should we push the industry toward a single standard for clinical problems? This was really part of the broader topic of converging on a single standard vocabulary for each type of data, but SNOMED CT for problems was the lightning-rod topic for several reasons: a) the difficulties of getting physicians to create medical problem lists at all; b) ICD-9 diagnosis codes were already required for reimbursement, and some physicians’ systems were built around ICD-9; c) ICD-10 was already complicating things by looming on the horizon; d) anything that imposed more work on physicians could hinder usability and adoption. So though SNOMED CT seemed a clear winner for an interoperable clinical vocabulary and was HITSP-endorsed, we received considerable pushback from some members of the other workgroups as to how fast SNOMED CT could be required. I was pleased that many in the IOWG stepped up to provide statistics and research, such as analyses of ICD-9-to-SNOMED CT mapping and clarification that standard codes don’t have to be visible in the clinician’s UI (the second sketch after this list illustrates that point). We knew this was a contentious issue, but staked out our positions in the 2009 roadmap and a Q&A document.
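A technical aside on item 3, for readers who like to see things concretely. Below is a minimal Python sketch, not a conformant C32/CCD consumer, of what “discrete data consumption” might look like on the receiving side: find the medications section of a CCD and pull out the coded drug entries so they can be staged for clinician reconciliation rather than written straight into the active medication list. The section LOINC code and RxNorm OID shown are the commonly cited values, and the element handling is greatly simplified; a real implementation would follow the CDA R2 and HITSP C32 specifications, including templateIds, statuses, and null flavors.

import xml.etree.ElementTree as ET

HL7_NS = "{urn:hl7-org:v3}"            # CDA R2 namespace used by CCD/C32
NS = {"hl7": "urn:hl7-org:v3"}
MEDS_SECTION_LOINC = "10160-0"         # "History of medication use" section code (illustrative)

def extract_medications(ccd_xml):
    """Return (code, codeSystem, displayName) tuples from a CCD's medications section."""
    root = ET.fromstring(ccd_xml)
    meds = []
    for section in root.iter(HL7_NS + "section"):
        section_code = section.find("hl7:code", NS)
        if section_code is None or section_code.get("code") != MEDS_SECTION_LOINC:
            continue
        # Each medication entry is a substanceAdministration whose consumable's
        # manufacturedMaterial carries the drug code (RxNorm in C32, OID 2.16.840.1.113883.6.88).
        for material in section.iter(HL7_NS + "manufacturedMaterial"):
            drug = material.find("hl7:code", NS)
            if drug is not None:
                meds.append((drug.get("code"), drug.get("codeSystem"), drug.get("displayName")))
    return meds

# The "baby step" we proposed: store these discrete elements and present them for
# clinician reconciliation, rather than overwriting the active medication list.

The same pattern would apply to allergies, immunizations, and eventually problems; as our joint meetings made clear, the hard part is the reconciliation workflow, not the parsing.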
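And a second aside, on item 4’s point that standard codes need not appear in the clinician’s UI. The sketch below is hypothetical: the clinician keeps selecting the familiar local problem description, and the system attaches a SNOMED CT concept behind the scenes for outbound exchange. The mapping table is invented for illustration; real mappings would come from curated sources (e.g., the NLM/UMLS maps), with a workflow for reviewing unmapped terms.

# Hypothetical illustration: the clinician-facing problem list keeps its local text,
# while the outbound document carries a SNOMED CT concept mapped behind the scenes.
# The map below is a made-up fragment; real mappings come from curated sources.
LOCAL_TO_SNOMED = {
    "HTN, benign": ("59621000", "Essential hypertension"),
    "DM Type 2":   ("44054006", "Diabetes mellitus type 2"),
}

def code_problem_for_exchange(local_text):
    """Return (display_text, snomed_code) for an outbound problem entry."""
    mapped = LOCAL_TO_SNOMED.get(local_text)
    if mapped is None:
        return (local_text, None)   # unmapped: send text only, flag for review
    code, preferred_name = mapped
    # The clinician still sees "HTN, benign" on screen; only the exchanged
    # document carries the SNOMED CT code and its preferred name.
    return (preferred_name, code)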
The above four were prototypical of the controversies we faced in content, vocabulary, and transport. I could go on with more examples. That’s part of the challenge of interoperability work, as evidenced by the fact that ONC and the HIT Standards Committee still have all of the above issues to ponder for Stages 2 and beyond. I should mention that one more big controversy arose in the very first year, when CCHIT was newly formed and struggling to figure out how to set the interoperability bar. Eyes were upon those first members to resolve the CDA vs. CCR debate. Unlike the four issues above, where we took a stand, we weren’t ready to force a decision in 2006, and then HITSP was formed by ONC to deal with such “standards harmonization” issues. So at least our punt had a receiver (and football wasn’t locked out)!

I promise that Part 4 will really conclude this series with my thoughts on lessons learned, proudest accomplishments, regrets, and hopes for the future, stemming from my CCHIT experience. Thanks for listening as I shared from “inside the trenches” of discussions of which the public was not aware.

Monday, May 2, 2011

A Harmonious Musical Spring Break

I expect to conclude my three-part certification retrospective within the week. Since it will end with a discussion of lessons learned as well as some tough controversies, it will need some thoughtful review before I just fire it off. In the meantime, I’m going to take a brief break to return to my non-HIT passion, music, which has many parallels with Health IT, as I introduced in my first blog post in December. One of my physician colleagues recently shared with me an article from the March 15th edition of the Annals of Internal Medicine, entitled What Musicians Can Teach Doctors. No, I’m not going to use this post to teach doctors, but I recommend the article, as it compares musical training and performance to medical training and practice. Furthermore, the aspects of teamwork, rehearsal, and specialization are key to HIT interoperability. Both music and interoperability must be performed by collaborating participants, not just specified in theory, and there’s no substitute for doing it over and over, refining it, and continuously improving.

I’ve been applying the principle of “practice” (musical, not medical) for the past year, preparing for an upcoming solo piano recital. For too long, I didn’t make time for it, as the demands of family, work, and “life” kept getting in the way, but now on May 20th and 21st I’ll perform two concerts. Information is available on the web about the recital program, a YouTube video preview “trailer,” and the location at Immaculata University. So if any blog readers happen to be in the Philadelphia, PA, area on those dates and would like to come by, you’re most welcome! I realize that solo piano doesn’t illustrate the “teamwork” principle as well as ensemble music would. But I’d like to convey a faithful and yet personal interpretation of the composers’ musical “specifications” to connect with my audience, so that they realize the genius and beauty of what Bach, Beethoven, Liszt, et al. created. So even though it’s a solo recital, I’m trying to be the “interface” between the composers and my audience. Hopefully, not too much will be lost in translation!

Anyway, after this “break,” next time I’ll return to conclude the CCHIT Certification Retrospective.