Wednesday, August 26, 2015

Turning Point?

Image courtesy thisfabtrek.com

There was a meeting involving everyone with a stake in our PACS problem. This included radiologists, IT folks, administrators, and representatives from the vendor. The purpose of the meeting was to outline where we were, where we are, and where we are going. I left the meeting feeling cautiously optimistic for the first time in quite a while.

I'll not delve into specifics much, as they probably won't help you with your PACS problems, nor will they help you help me with mine. The generalities, however, should prove more valuable.

The bottom line is quite simple, really. Our problems stem in greatest part from lack of communication. This gap occurred primarily between IT and the vendor, but there was also a rather large chasm between the radiologist end-users and the other two entities. Let's talk about that one first.

As an electrical engineer, I'm well-versed in the concept of the feedback loop, particularly the need in a circuit for negative feedback.


You've all heard the screeching "feedback" from a public-address system going out of control. Most amplifiers include a negative feedback loop, a connection that routes some of the amplifier's output back, inverted, to "tone down" the input. So it is with life in general: if you think you are doing everything properly and you receive no negative modulation, you will keep doing the same thing whether or not your actions are correct.
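
For the engineers in the audience, the textbook closed-loop gain relation (a standard result, nothing to do with our PACS in particular) says it more compactly than I can:

    A_closed = A / (1 + A*B)

where A is the amplifier's open-loop gain and B is the fraction of the output fed back with a negative sign. Make A huge and the closed-loop gain settles near 1/B, with any excursion of the output automatically corrected; cut the feedback path (B = 0) and you're back to the screeching PA system. The PACS analogy writes itself.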

We rads had (or more honestly, exercised) only limited feedback options when it came to PACS. We never really put our heads together to create a grievance list. Yes, one rad here or there would get upset with slow speeds, workstation crashes, slow scrolling, etc. Sometimes these could be fixed by the PACS administrators, sometimes only marginally improved. Someone would get a personal profile remade (although we never learned why this would help); someone else would get his worklists redone. And so it went. In this way, little problems propagated into big problems.

I'll take some level of personal responsibility. I was so, well, jaded may not be the word...disheartened? Complacent? Resigned? Well, I had more or less decided that nothing would change until we had a major upgrade, whenever that might be, and I found ways to tolerate the glitches. It took two of my former partners, now bosses, who recently started working nights, to say "ENOUGH!" The minor slow-downs become quite major when you're trying to pump out dozens and dozens of studies through the wee hours of the morning. To be fair, there was further deterioration of the system during their initiation into the dark side (I mean dark hours), to the point that the workstations crashed, and ultimately there were a number of system outages, which certainly brought the whole situation to a head. The newly-minted night-stalkers began the campaign that has brought us to the brink of...a solution.

Communication is paramount as always, and we've made some great strides in that realm. First, we insisted on having a call-team from both IT and the vendor. We had briefly been relegated to the "Help" desk; when they actually answered, they were little more than a rather slow answering service for IT. Second, we streamlined the process for rads to report problems directly. This was prompted by a response to a complaint implying that no one had ever mentioned the problem before. When Donald Trump's cell phone number was published, instead of having a little hissy-fit as we saw with a certain Senator, Trump simply put a campaign ad on his line. This inspired us to create an email thread wherein any and all PACS complaints could be reported directly to the right people. And report we did. Initially, there were tens of emails per day; this has tapered off to only one or two. We have definitely made progress.

While we physicians can be quite problematic, the deeper institutional snafus lie with IT, the vendor, and their somewhat dysfunctional relationship. There will be yet another meeting to more precisely define just how that relationship will progress, and while I detest meetings, that is one that should have been held years ago. You see, there really wasn't a single point of failure, but there were quite a few, shall we say, lost opportunities for improvement.

It turns out that one of our major outages was a network problem, caused by an update push that got out of hand. Another slow-down was the result of the EMR grabbing too much bandwidth. There was a bug in a NetApp image server that took us down. OK, we accept that these things happen and can be fixed.

But it took a village full of angry radiologists to bring to light that yearly service on some of the servers might not do the trick, particularly when a couple of the critical servers running SunOS/Solaris weren't touched at all. The latter had been running on an elderly version of the OS, and had a bug that was fixed umpteen versions ago. Update the OS, kill the bug. And here is where we had trouble. Putting it simply, everyone assumed the other entity was going to take care of stuff like this, if they assumed anything at all about it. And so nothing happened until the recent unpleasantness.
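
To be clear, the little script below is purely my own illustration, not anything IT or the vendor actually runs; the host names and the minimum release are made up. It just shows how trivially an automated audit could have flagged a critical Solaris box sitting umpteen releases behind:

```python
#!/usr/bin/env python3
"""Hypothetical audit: flag servers whose reported OS release is older than
the minimum we believe contains the fix. Hosts and versions are made up."""

import subprocess

REQUIRED_RELEASE = (5, 11)                 # e.g., Solaris 11 reports "5.11" from uname -r
SERVERS = ["pacs-db-01", "pacs-img-02"]    # hypothetical critical PACS hosts


def parse_release(text: str) -> tuple:
    """Turn a release string like '5.10' into a comparable tuple such as (5, 10)."""
    return tuple(int(part) for part in text.strip().split(".") if part.isdigit())


def check(host: str) -> None:
    # 'uname -r' reports the kernel/OS release on Solaris and most other Unixes.
    result = subprocess.run(["ssh", host, "uname", "-r"],
                            capture_output=True, text=True, timeout=30)
    release = parse_release(result.stdout)
    status = "OK" if release >= REQUIRED_RELEASE else "NEEDS UPDATE"
    print(f"{host}: release {result.stdout.strip() or 'unknown'} -> {status}")


if __name__ == "__main__":
    for server in SERVERS:
        check(server)
```

Run something like that on a schedule and "nobody ever looked at that server" stops being an available answer.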

We found that hardware was sometimes purchased without consulting the vendor, and then retrofitted with the vendor's help when it wasn't quite right. Perhaps both parties could be a little more proactive here, so we can all ask for permission instead of forgiveness.

Computers are made by imperfect human beings, and are thus imperfect themselves. To assume otherwise is naive at best. And so in a mission-critical area such as PACS, one must be ready for the inevitable glitch. There has to be a downtime plan, and an out-and-out disaster recovery solution. Guess what? We have neither, at least not in any workable form. To my knowledge, the downtime plan hasn't been changed since I spoke at RANZCR in Perth in 2010: after four hours of outage, we start printing to film. Unfortunately, we no longer have any film printers. The next best thing, which we have had to do, is to read directly from the modality's monitor. It isn't optimal, but it works, sort of. As for a full-fledged disaster, data is stored offsite as required. But it's on tape, and recovering might take a very long time. If we could muster the resources. Let's hope it doesn't happen.
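
To put "a very long time" in perspective, here's a back-of-the-envelope estimate. Every number in it is a made-up, illustrative assumption (I don't know our actual archive size or tape throughput), but the shape of the answer is the point:

```python
# Back-of-the-envelope tape-restore estimate. Every value here is hypothetical.
archive_tb = 40              # assumed size of the image archive, in terabytes
restore_rate_mb_s = 150      # assumed sustained restore rate per tape drive, MB/s
drives = 2                   # assumed number of drives restoring in parallel

archive_mb = archive_tb * 1_000_000
seconds = archive_mb / (restore_rate_mb_s * drives)
print(f"Estimated restore time: {seconds / 3600:.0f} hours "
      f"({seconds / 86400:.1f} days)")
```

Even with those fairly generous assumptions, that comes out to roughly a day and a half before the archive is back, and it ignores tape retrieval, cataloging, and verification. Hence: let's hope it doesn't happen.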

PACS, as it turns out, is the only Tier One service that does NOT have a proper downtime solution. Why did we get left out? Money. It was hard to justify a complete, mirrored, automatic fail-over that would only be used a small fraction of the time. Unless you happen to be a trauma patient in the Emergency Department, where life-and-death decisions are put on hold while someone fiddles with the server. Then it seems perfectly justified.

In the end, we all serve one customer, and that is YOU, the patient. Everything we do in this business, every decision we make, every scrap of hardware and line of code we purchase and use is meant to promote your health and well-being. It was said by some that we radiologists were "paying the price" for the various challenges I've outlined. That's true to some extent, but the real victims, at least potentially, are the patients, and that CANNOT be allowed to happen.

I've been blogging about PACS for almost 11 years, and my basic message hasn't really changed much. PACS IS the Radiology Department, and the hospital cannot function without it. Making this all work, and work properly, is in huge part a matter of communication. You have seen what happens when the discourse fails or doesn't happen at all. The downtime plan, or lack thereof, illustrates what happens when one of the groups involved in PACS, the rads, becomes disenfranchised with respect to the decision process. One of us could have very easily convinced the powers that be that we cannot tolerate a four-hour gap in service. We weren't asked to do so; we didn't even know the question had been posed. Now we do. And I am cautiously optimistic that this will improve, as will the rest of our experience.

I would be remiss if I didn't take the opportunity to excoriate the majority of PACS and EMR vendors while I'm on this particular rant. You are still not making user-friendly software. We all know it. PACS is bad enough, but our EMR and its CPOE (Computerized Physician Order Entry) is so very poorly written and implemented as to drive a good number of physicians into early retirement. Seriously. This garbage is served up as caviar to, and purchased by, those who DON'T HAVE TO USE IT, and again the physicians are disenfranchised. This too will negatively impact patient care, and it CANNOT, well, it SHOULD NOT be allowed to happen. But it is.

Let's have a meeting about THAT, shall we?

3 comments:

stacey said...

Among the questions that need to be asked:
1. Why is it that the vendor (all vendors) seems uninterested in solving problems that affect their product, regardless of the source of the issue? If they really cared about what the users thought of their product (KLAS???), wouldn't stellar support be a way to have the customers singing their praises? Vendor support seems more interested in closing tickets, or in proving that the source of the problem does not lie with them, than in actually fixing it. This is not "customer-centric", despite slick marketing brochures to the contrary.
2. Why do the IT overlords insist on doing what they want regarding hardware configurations, without consulting or despite vendor recommendations to the contrary?
3. Is there an IT department that considers the users customers and treats them as such, or do IT departments fancy themselves the master that must be served and, under the guise of the mantra "We have to protect the data," create numerous stumbling blocks for their minions, err, uh, users, which do little to actually protect data?
4. Why does the customer have to drive the solution? If the computer kiosk and/or cash register at McDonalds stopped working and the customer was no longer able to order meals, is the customer the one that needs to engage the kiosk vendor and drive it to a solution? Who owns the problem? The customer's needs must be addressed, much like the PACS must be able to keep "serving up images" to the radiologists, downstream users, and ultimately the patient. It should be standard operating procedure that the vendor teams with the IT team as soon as a ticket is logged where the solution is not an easy/quick fix. This would include all of those ethereal types of problems that seem to improve "slightly" but never really get "fixed".

Just my 2gb's.

Unknown said...

The advent of virtualization in newer versions of your vendor's PACS is of some help, as is the presence of a virtual all-in-one test system. At least you can point modalities to the test system and have some place to store data. The obvious single point of failure is the NAS/SAN archive common to all of the systems, or the fiber-optic switches connecting to it. You really do need some sort of tier II archive, preferably in a different location, to go along with your PACS.

PaddyO' said...

As one of the evil IT people, I will say that we are a risk-averse, change-resistant lot because of the utter complexity of the systems we are entrusted to maintain. PACS is connected to the voice recognition application, which interfaces with the RIS (EMR), which is connected to various peripheral systems like Digisonics, TeraRecon, DynaCad, etc., etc. One little change to ANY of these systems almost always affects the other systems in unpredictable ways. We recently added an extra question to our EMR, and it caused all CTs and MRIs to go across PACS unverified, which meant rads couldn't read them until we figured out what had happened. We had an emergency workaround, but it took 2-3 weeks working with multiple vendors and IT teams to figure out what was happening.

So, the next time you come up with a special request that you just know will make everyone's lives better, realize it will take hundreds of man-hours to build, test, debug, train, and roll out, AND it will cause something else totally unrelated to break, usually at 2 in the morning.