I hope everyone has had a happy and enjoyable Labor Day weekend. Sadly, it's now time to think about going back to the regular grind, you know, work. There is indeed a reason why work is a four-letter word.
I really don't mind going back after a long weekend, although when I've been off for one week, or more rarely two, it's a bit harder. And part of the joy is my realization that depending upon the site to which I'm assigned on that black Monday, I will have to wrestle with one balky PACS or another to accomplish my tasks for the day.
As I'm not interested in generating trouble for anyone in the form of letters from acronymical government agencies, I will avoid the pleasure of naming names and reciting litanies of problems. But it seems some of the problems we are having with one of our systems were fixed by an update that came out a long time ago. However, on many PACS, applying updates can be a real pain that has the potential to shut down the enterprise. Thus, in the interest of avoiding yet another trip to the servers, we decided to wait for the update that was supposed to be available this January. Ah, but alas, here it is September 1st, and this wonderful update that fixes even more stuff is nowhere to be seen. And why is this? Well, the only word that has filtered down to my peon level is that the company has been adding more features to the update to satisfy various customers.
How should we feel about this turn of events? In my bowel-unrest over this situation, I have to conclude that the update process for many if not most companies is broken. Waiting this long for critical fixes is unacceptable, but there doesn't seem to be a way to get anything but an absolutely critical, emergency, life-saving patch out the door in any reasonable amount of time. This might have something to do with the way the code is written or managed. Perhaps there is a more modular approach that would allow the repair of one section without tearing apart another? That's beyond my expertise, but maybe I'm somewhere near the right track.
And there needs to be a much better method of deployment. Updates should not be major events that are life-threatening to the PACS staff (who are in danger from the radiologists if they can't get the system back up in 5 minutes). Why is it that even horrible bloatware like Windows can automatically update in the background, but PACS server software can't? I know that some companies use a three-server configuration: a main production server, a test server, and a fail-over server. While updating is still a pain with this setup, the whole thing doesn't crumble during the process, we don't have to point workstations to different servers mid-update, and so on. There has got to be a better way.
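For the curious, the three-server idea can be sketched in a few lines of pseudocode-grade Python. This is purely illustrative, assuming a staged rollout: patch the test server first, run a smoke test, and only then touch production while the fail-over box keeps serving. Every name here (`Server`, `health_check`, `staged_update`) is my own invention, not any vendor's actual API.

```python
from dataclasses import dataclass

@dataclass
class Server:
    role: str        # "production", "test", or "failover"
    version: str
    healthy: bool = True

def health_check(server: Server) -> bool:
    # Stand-in for real smoke tests (DICOM echo, test study retrieval, etc.)
    return server.healthy

def staged_update(servers: dict, new_version: str) -> bool:
    """Patch the test server first; touch production only if it passes."""
    servers["test"].version = new_version
    if not health_check(servers["test"]):
        return False                      # production left untouched
    # The fail-over server keeps serving while production is patched.
    servers["production"].version = new_version
    return True
```

The point of the sketch: a bad update dies on the test server, and the enterprise never notices.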
Finally, in our particular situation, the blame for the rather long delay in the update is laid at the feet of the customers (might be us, but I don't think it is) who are demanding new and different functionality. I'm assuming these requests are coming from existing customers; if the additions are for potential customers, well, let's just say Hell hath no fury like a Dalai scorned, and someone better be ready to take one for the team.
Lemme give every company out there a big hint: FIX what's wrong with what's out there BEFORE you start adding NEW things that will break. I don't think that's a radical concept.
So, I'm begging PACS vendors to consider this request: Put some work into your update process. And your updates. And the rest of your software, as needed. It would be most appreciated.
I'll be sure to, ummmm, update everyone when I see some progress. Enjoy the rest of Labor Day!