VA data glitch mimics MIT’s

Bob Brewin writes today in NextGov that the VA discovered a glitch in a system interface that could display the wrong patient’s information under peak load circumstances. The VA handled it in an exemplary fashion: they immediately issued a safety alert and shut down the connection; the bug (a memory leak) has reportedly been fixed and the link will be live again Tuesday.

The glitch came to light when a doctor noticed that a female veteran had a prescription for erectile dysfunction. Hm.

I spoke with the writer today about data quality in health IT. Only a few of my words made it into the article, but an eerie parallel came to mind:

I’m class secretary for my college class, so I occasionally go into the alumni database. This week I clicked the link to update my profile, and lo, somebody else’s came to the screen!

I was seeing (and editing) the data for the next person alphabetically, not my own.

Here’s the thing: this is MIT’s alumni database. Yeah, the geek school. Accidentally letting me edit somebody else’s info.

Reality: data doesn’t flow automatically to the right place. You gotta engineer the workflow carefully, and it’s a good idea to build in safety checks.

This doesn’t change my belief that good-quality data, well managed, can improve things. (I didn’t say “solve everything,” I said “improve.”) Airlines have it figured out (you never pull up the wrong person’s reservation), banks have it figured out (long ago you might get the wrong person’s statement, but no more), etc.

As we work on health IT, let’s:

  1. Build reliability into the workflow. (Methods exist. Use them.)
  2. Check frequently for glitches.
  3. Don’t be shocked when one appears. Tell people, and fix it.
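One cheap way to build such a check into the workflow (a hypothetical sketch, not the VA’s or MIT’s actual code; all names here are invented) is to have the display layer refuse to render any record whose stored ID doesn’t match the ID that was requested:

```python
# Hypothetical sketch of a "right patient?" safety check; names are invented.

class RecordMismatchError(Exception):
    """Raised when the record returned is not the record requested."""

def fetch_patient_record(patient_id, database):
    """Look up a record, then verify it actually belongs to the patient asked for."""
    record = database[patient_id]  # stand-in for the real query
    # The safety check: never trust that the lookup returned the right row.
    if record["patient_id"] != patient_id:
        raise RecordMismatchError(
            f"asked for {patient_id}, got {record['patient_id']}"
        )
    return record

# Tiny demo with an in-memory "database", including a deliberately bad row
# (the kind of cross-linked record the VA glitch could produce under load).
db = {
    "A1": {"patient_id": "A1", "name": "Pat"},
    "B2": {"patient_id": "XX", "name": "Sam"},  # corrupted cross-link
}

print(fetch_patient_record("A1", db)["name"])  # prints "Pat"
try:
    fetch_patient_record("B2", db)
except RecordMismatchError as err:
    print("caught:", err)  # the bad row is flagged instead of displayed
```

The point isn’t this particular code; it’s that a one-line identity check at the display boundary turns a silent wrong-patient display into a loud, fixable error.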

p.s. I notified the Alumni Association and it was fixed within a day.


Posted in: medical records




4 Responses to “VA data glitch mimics MIT’s”

  1. ben says:

    To be fair, those banks you refer to are largely using (very expensive) solutions based on CICS, which have been in development for 20 or more years with teams of hundreds and hundreds of very well-paid programmers.

    I’ve worked in higher ed, and political considerations drive alumni websites more than anything else. They usually develop them in-house, which means a new one every few years, each time written by an employee who leaves once he finishes his free master’s degree. :) Not exactly a recipe for robust software development.

    • Good thoughts, Ben. Re CICS – boy, haven’t heard THAT in forever.

      My point was solely that regardless of system, a workflow can be reliable or not; it’s a separate issue, no? I’ve begun learning about Lean, and from everything I hear and read (from people experienced in it), it commonly reduces errors AND cost, by building quality/reliability into the process. I guess expensive systems can have that or not.

      My point was just “Look, this is not an unsolvable problem.” Yes?

      Re alumni systems: sigh, it just grosses me out that the “alumni interface” department can be inept. It took us (the ragtag band of class secretaries) years to teach them to use an alias like “classnotes@” so there wouldn’t be chaos every time someone left.

      And that brings us back to: a system can be reliable whether or not the people involved in it are smart. I assert that in matters of (potentially) life and death, it’s IMPORTANT to build reliability into the workflow.

      • Alexandra Albin says:

        Workflow is one of the most important pieces in a large multi-tiered, multi-user database system (I have worked in online publishing systems), yet it is often the most overlooked (or at best given a cursory look) and least mapped-out aspect of the process. It is expensive, time consuming, and tends to show the weak links, and it is often done as an afterthought, usually while the system is being built. The metaphor we have typically used is that we are building the airplane while we are flying it. This has been my experience in larger organizations: an exec decides, oh, this vendor is a good idea, they make a deal, and then it is up to the engineers to make it work. The people who actually use the system get pulled in somewhere down the line, and then all hell breaks loose. Usually user acceptance testing is done quickly so the product can get out the door and someone can say their goals have been met. Also, the QA test cases are often run through crazy quick. And don’t forget those end users, whom we usually do forget (in the case of an EMR, that is …. you know who: nurses, doctors, and maybe patients). Most systems don’t carry data that affects life and death, so the consequences are less critical.

  2. ePatientDave says:

    @TalkStandards @emmanator VA glitch matches MIT Alumni Association's: #EHR
