Monday, August 29, 2005
From the cyberworld back to the hardware world for a moment...
I have yet to get the Contour Jog-Shuttle Wheel (remember Darth Vader's codpiece?) to work the way I would like with my PACS. Fortunately, it does work with my video editing software, so it isn't a total loss.
I've been thinking about the problem of navigating a 3D dataset, which can be a dangerous and expensive thing (me thinking, that is). How is this done in real life? First, we have to find an analogous situation. Airplane pilots have to navigate their aircraft through three dimensions, though, in general, they had better be going forward, and they have to deal with gravity. Helicopters don't have to go forward, but the collective, the up-and-down control, adds complexity to this discussion. So, how about video games where one pilots a spaceship around, well, space, without regard to gravity? Most use some form of joystick navigation, though this might manifest as a limited D-pad on an Xbox or GameCube controller. The joystick was the one device Sherbondy did not test for his article.
An overwhelming number of joystick options are available. Take a look at the Saitek "Cyborg evo Force" pictured above. This joystick has lots of buttons as well as mini-joysticks mounted on it, and some of the buttons themselves even have switches. Our younger generation is being prepared for something, although I'm not sure we're going to like it when it arrives. But I digress.
Seems to me that this is the way to pilot through the inner space of the body, and would be especially valuable with virtual colonoscopies. I haven't, of course, worked out the logistics of this approach as yet, but one could use the main stick for navigating the slice you are in, be it axial, coronal, or sagittal, and one of the little mini-joysticks for diving up and down or in and out of the plane you are in at the moment.
My son has an old Sidewinder joystick, and since he has lost his computer privileges for a week (forgot his homework or some other onerous offense), I'm going to confiscate the stick for my experiments. We'll see how it goes.
Sunday, August 28, 2005
I am starting a new series of blog-installments discussing various tools within PACS viewers. I'll try to describe the versions of the tool I use, with my (biased) evaluations and suggestions for improvement. At least this gives me material for the foreseeable future!
Let's start with the spine labeling tool. By the way, here is my stylized icon which I will be glad to sell to any interested PACS company for a really reasonable price:
Hands-down, my favorite incarnation of this tool comes from Amicas. This is the way it should be, simple, intuitive, quick, and effective. You click the button on the menu bar, and then point at the middle of S1 and click. Then you point to the middle of L5 and click...and so on until the spine is labeled. The magic of this is that 3D information within the DICOM header is utilized to deploy labels in the axial plane:
This whole process literally takes 5 seconds.
A drop-down menu gives several options with further control available from the labeling dialogue:
Now this is really neat: "Preferences" lets you set up a degree of automation. If the spine study is properly named (cervical, thoracic, or lumbar), the labeling tool automatically starts at the level you choose. (This is actually an unusually deep level of adjustment access for Amicas tools, by the way, and the initial set-up will probably be adequate for the vast majority of users.)
The other systems we use either don't have this tool at all, or have such a poorly designed version that using it would take 10 minutes; therefore, we don't.
I don't have any significant improvements to suggest for this tool; Amicas did it right. I would perhaps like to have the ability to change the font, or at least the size, of the labels. It would also be nice if they would propagate to the coronal projections as well if these are available. Finally, it would be really nice (but really difficult to accomplish) to have the labels appear within the embedded Voxar 3D window. I can dream, can't I?
Tune in next time, when Heidi and Al get caught up in the magnification tool!
Thursday, August 25, 2005
Sorry to get on a tirade, but I just got handed a bill from the Geek Squad serviceman for a service call on my Gateway, which I have had for 2 years now. I just had a service call last week where a worn-out piece was replaced, and my contract states that I get 1 service a year for free. Well, the hard drive failed, as they sometimes do, and I needed it repaired. Here comes the fun part: for a 50 gig hard drive I was charged $667.18. Yes, that is correct, over $600.00 for a hard drive I could get for less than $100.00. I was charged at a rate of $240.00 per hour to install it, which took 3 hours, and $325.00 for them to drive to my office, which is less than 25 miles from their office here. I was told that all of these are the standard rates.

Well, I asked the rep if I charged him $667.18 for a contrast injection, would he be upset? He said of course he would, but he has no control over the pricing. As far as I'm concerned, this is all horse shit. I'm sick of being charged $95.00/hour for IT support while they sit at one of my computers and wait 30 minutes for a download to finish or for a program to scan my hard drive. Plus they charge me 1/2 the hourly rate for travel to my office.

We (physicians) have been taking it in the shorts for way too long, and I for one am pissed off (sorry about the language). I want to organize just like every other business in the free world. I would like to see them put all of us in jail for collusion. I have said it before, but it is time to strike!!! Close the doors for 2-3 days, don't answer the phone; all of us go see our families, take them fishing, or go play golf; round on your hospital patients if you have them, but otherwise stop. Only by a massive shutdown will anyone, inscos and gummit, see that we aren't going to take this crap any longer.

My prices just went up 50%, and no more free work of any kind unless it is something I want to do, not what some patient or inscos think I have to or should do!!! Sorry, but I'm pissed!
Among other posters, Dalai responds:
Shoulda got a Dell with a 4 year service policy. Or learn to open the can of your computer and swap the drive yourself. 5 minute hardware procedure, 1 hour or so to reload the drive.
On the more serious medical front, I think in the end docs are their own worst enemies. How 'bout them expert witnesses? Without them, the lawyers are SOL. But Google the term "medical expert witness" (I've made it easy by linking it for you), and you get 40,000+ sites that will connect you with a doc that will say the sun didn't come up this morning if you pay him/her enough. My personal solution is to make it ILLEGAL to pay for such expert testimony beyond travel expenses and such. Some of these plaintiff whores make $5,000-$20,000 per case that goes to trial, so they have every motivation to drag your sorry backside into court, even if there is no case. This must stop. The tort-reform bills going through various state legislatures, and even Congress, are not dealing with this issue as strongly as they should.
For us imagers, the clinicians with their own scanners are presenting an interesting problem. They are generating more imaging business, and the more upstanding of them contract real rads to read their stuff, but their out-of-control self-referral is bringing down the house on ALL of us. This baby doesn't want to go out with the bathwater. Then you have the ERs that order everything but a corpora-cavernosagram in the middle of the night (at least I haven't had to do one yet), and want the answer yesterday, overtaxing the system at that end.
$700 for an hour of work to replace a hard-drive? Maybe I'll join the Geek Squad....
I've been wanting to post something like this for a while, but just never got around to it. I've been sued twice, both for ridiculous reasons. Don't get me wrong: I make mistakes, as do all rads and even all doctors, and all human beings in general. However, the tort system in this country, especially with the contingency basis upon which many of these cases are pursued, does nothing to correct mistakes; it simply pads lawyers' pockets. In my first case, I and two of my colleagues were accused of missing a lesion on a mammogram that wasn't there. The "expert" was a general radiologist who had been sued 7 times himself. His deposition would have been funny if it wasn't directed at ME. That case was dropped. The second case, which is awaiting summary judgment, accuses me and another rad of not seeing a chest tube on a radiograph that wasn't there. I kid you not. It is one of those classic "shotgun" suits where every doc who had the bad fortune to get his or her name on the chart was sued, with no regard for level of involvement with the patient. Once I get summary judgment, I will seriously consider seeking sanctions against the litigating attorney. I think he was expecting all NINE of us docs to settle just to make him go away. He is in for a bit of a nasty surprise.
I am very serious about my solution to the "expert witness" problem. Look at it this way: if you can pay for testimony, then it can be bought. If I am called to testify, I refuse payment, and I require a subpoena. Therefore, I am there as a free-agent, not paid by anyone, and I can speak the truth. Period. Now, the standard answer an "expert" gives when asked on the stand how much he is being paid is something like, "(muffled thousand dollars) for the time I put into this case, but I am here to give my expert opinion and payment doesn't change what I'm going to say." Sorry, I don't buy it.
Many professional societies are starting to sanction their members that grossly perjure themselves on the stand. That's a start. However, the ONLY way, in my humble, simplistic opinion, to completely stop the abuse of (and by) the legal system is to decouple the testimony from payment. That is what I practice personally. I have had lawyers look at me like I have three heads when I tell them this, which maybe is indicative of how it impacts them.
I refer to this declaration of independence of testimony from reimbursement as the "Dalai Manifesto". Outrageous? Yes, but I haven't heard a better idea yet.
Saturday, August 20, 2005
Docderwood has joined the league of Bloggers with his new blog NuclearVision. Check it out...it is very nicely done. Like me, Doc is a Nuclear Radiologist, double boarded in Radiology and Nucs. His blog is a lot more erudite than mine!
Take a look at the former Stentor website....It is now called the "Philips Global PACS Business Unit", and barely refers to Stentor by name at all. The introductory paragraph says it all:
Philips iSite® PACS is the leading enterprise-wide medical image and information management system on the market today. iSite® PACS is an innovative image and information management system that delivers on-demand diagnostic-quality images over existing hospital networks, advanced radiology reading stations for radiologists, and "always online" long-term storage.
Sure sounds like Stentor to me. Now don't get me wrong, I have the utmost respect for both companies, but the rapid transition feels a little like Orwellian doublethink. Oh well. Good luck to all Philips users, both the Sectrans and the Stentorians.
Wednesday, August 17, 2005
ADDENDUM...I just couldn't resist posting FHS's response....
We enjoyed your rants about ScImage! Our favorite was "Dear ScImage." Don't know how many times I have had to walk our radiologists, over the phone at night, through how to re-install the Picom client. One of them asked in a huff one night, "...now why do I have to keep doing this?!?" I do not know, sir. We enjoy your site and will most certainly be keeping up with it! Take care!
Tuesday, August 16, 2005
Inside joke for Star Trek fans....
This blog is finally starting to stimulate the kind of banter I had in mind. The discussion of web-based PACS has generated even more commentary. You can read the full versions by clicking on the "Comments" link below the last post. Here are some pertinent excerpts:
"Once you are in the application (after it is launched), the web is really out of the equation. You are running an application that is installed on your local machine. Most (all I would hope) applications have some capability to auto download new clients as they are available. So, to make a long story short, web or not, if an application can be downloaded and installed/configured from a web browser and if that same application will communicate on standard web ports, then to me everything else is the same."
Peter then comments:
"I would much rather see a well designed, small fat client install which uses the internet to communicate to a backend database than...a “thin” web app.....An easily available web site with a small, intuitive install would be so much more preferable to a web app. Limiting the customizations possible by the end-user saves a lot of support time in the long run."

Uhhhh...can a client be small and fat? Sorry, don't set me up with a straight line! I think we all may be talking about the same thing, but with different buzz words. Let's take a look at the way a generic "web-based" system functions, and see where we might have some common ground on this part of the wider issue....
Those I've tried all seem to work a good deal like both Anonymous and Peter suggest. You go to a website with your browser, which acts as a gateway. Once you have logged in, the actual viewing client is examined and updated if needed, then launched. This client is generally a Java applet or ActiveX control or the like. A third component acts as a conduit for the image transmission through the web (usually on https port 443), and a worklist of some sort is displayed within the browser, either with HTML or maybe Java. If you have Voxar 3D, that is a completely separate program that either taps your PACS database or intertwines somehow with locally-cached images.
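That check-update-launch sequence can be sketched in a few lines. This is purely a hypothetical illustration of the pattern; the manifest URL, the version format, and the helper functions are all invented, not any vendor's actual mechanism:

```python
import json
import urllib.request

# Hypothetical update manifest -- the URL and JSON layout are invented
# for illustration of the "check version, update if stale, launch" pattern.
MANIFEST_URL = "https://pacs.example.com/client/manifest.json"
LOCAL_VERSION = "4.2.1"

def needs_update(local: str, remote: str) -> bool:
    """Compare dotted version strings numerically, so 4.2.1 < 4.10.0."""
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(remote) > as_tuple(local)

def launch_viewer():
    """Check the server's manifest, update if needed, then start the client."""
    with urllib.request.urlopen(MANIFEST_URL) as response:
        manifest = json.load(response)
    if needs_update(LOCAL_VERSION, manifest["version"]):
        download_and_install(manifest["installer_url"])  # hypothetical helper
    start_local_client()  # hand off to the locally installed thick client
```

The point of the sketch is that the browser and the web server only bootstrap things; once the client is current and launched, everything runs locally.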
Now, what do we mean by "thin" and "thick" clients? Here is one of the better definitions I found whilst Googling, from
"Solving the ‘thin client’—‘thick client’ dilemma" by Robert Barnett in "A Forms Perspective":
‘Thin Client’

A simple program or hardware device that relies on having most or all of its functionality supplied by a network server. It is similar to a dumb terminal in that it gets all of its information from the network. For example, a simple HTML form filled out in a web browser is considered to be processed by a ‘thin client’ since much of the form's functionality is supplied by the server.

‘Thick Client’ (or ‘Fat Client’)

A program that is stored locally on the user's computer rather than the server. For example, word processing software used to write letters and other documents generally resides on the user's computer rather than the server. Even when the software resides on the server, it is actually on space allocated to the user and is, in reality, just an extension of the user's computer. The term can also be hardware related, referring to fast stand-alone PCs that have large amounts of memory and high-volume hard drives that run programs locally rather than off the server.

I think we will find that essentially all systems use mainly thick clients for viewing. You are not just tunneling into the server and watching the images being manipulated there; rather, you are pulling the images to your client and playing with them on your very own computer. I can think of two instances in which I have used a thin client, based on the above definition. First, when I was in junior high sometime in the last century, we had the great privilege of using a mainframe via acoustic-coupled modem over POTS (Plain Old Telephone Service) with a teletype at 110 baud. Much more recently, I have tried out a TeraRecon Aquarius via the Internet. I loaded a thin client on my laptop, and all the image manipulation was done on the Aquarius, wherever it was. The images were, of course, spectacular, but the process was completely bogged down by bandwidth.

Therein lies the problem with thin clients: while it might be feasible to do all the crunching on a central computer, you still have to get the results out to the viewers in the boonies. That's not a big problem in-house, especially with gigabit Ethernet, but even DSL or cable speeds may not be up to the task.

I'm going to go out on a limb on this one and come down hard in favor of the thick(er) clients. I just bought a number of Dell Precision 670 computers from the Dell Outlet site for my group. For $4,000 each, we get dual Xeon 3.6 GHz processors with 1 MB cache, 4 GB of RAM, 200 or so GB hard drives, and a 256 MB dual-DVI nVidia graphics card. That's more computer power than all of NASA had at the time of the moon shots (and probably more than the Shuttles themselves have today). These machines can do a great job with 3D processing and cine-style viewing of 8 or 16 windows simultaneously. In the end, they represent a cheaper approach; the bandwidth needed to accomplish all this centrally would be prohibitive, at least outside the main hospital. So, to me, the thick client approach wins.

A thick-client viewer is not bound to a web browser, but the two can play nicely. Think of Adobe Reader, a client used to read .pdf files. It is downloaded (though the user has to initiate this) from the web, and its various incarnations can work as a plug-in within the browser or as an independent app. Most importantly (and, depending on its timing, sometimes annoyingly), Adobe will give you the opportunity to upgrade to the latest and greatest when such is available. Likewise with PACS viewers: usually they are downloaded with the initial connection, and the opportunity to upgrade is usually given upon subsequent sign-ons.
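A back-of-envelope calculation makes the bandwidth argument concrete. The numbers below are my own assumptions (a 500-slice CT at 512×512 pixels, 16 bits per pixel, uncompressed, against a ~1.5 Mbps DSL line and gigabit Ethernet), not measurements from any particular system:

```python
# Rough arithmetic: why a thin client chokes on DSL but not on gigabit Ethernet.
SLICES = 500                       # assumed size of a large CT study
BYTES_PER_SLICE = 512 * 512 * 2    # 512x512 pixels, 2 bytes (16 bits) each
study_bytes = SLICES * BYTES_PER_SLICE   # 250 MB uncompressed

def transfer_minutes(bits_per_second: float) -> float:
    """Idealized transfer time, ignoring protocol overhead and compression."""
    return study_bytes * 8 / bits_per_second / 60

dsl_min = transfer_minutes(1.5e6)   # ~1.5 Mbps home DSL
lan_min = transfer_minutes(1e9)     # gigabit Ethernet in-house

print(f"Study size: {study_bytes / 2**20:.0f} MB")
print(f"DSL: {dsl_min:.0f} min, LAN: {lan_min * 60:.1f} s")
# -> Study size: 250 MB
# -> DSL: 23 min, LAN: 2.1 s
```

Twenty-odd minutes per study over home broadband is exactly the kind of bog-down I hit with the remote Aquarius; the same study crosses the in-house LAN in a couple of seconds.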
As far as communications are concerned, many transmission problems have been solved by the internet long ago. The 'net is designed to transmit information from one point to another in packets with self-healing redundancy. If one route is cut, another is found. From the computer's-eye view, the transmission of data via the 'net is not particularly different than through the local intranet; the same TCP/IP protocol is utilized by both. For those who have suffered through dial-up and even ISDN connections, home broadband is nearly a miracle for telerad/remote PACS applications.
I think most of us agree that a web-based system should operate from one main database. There should be direct access to this archive, whether from within the enterprise or without. Here seems to be the main differentiator between classic and web-based systems: The old architecture requires an additional "box" with a partially-mirrored database for outside consumption. I have posted elsewhere that a web-based system should have each slice or image addressed by its URL, thus adhering to the internet's conventions.
The sum of all this drivel is that a web-based PACS system mimics any other web product; it uses the web's protocols and tools rather than reinventing the wheel. The example of Adobe Reader actually is quite pertinent...instead of reading .pdf files, we look at DICOM images with a web-based PACS. Now here is a little riddle for you: If I have a conventional PACS, say, like Agfa IMPAX 4.5, and I put new software on its web appendage, in this case the Web1000, such that this former appendage is now a true web server, tapping the main IMPAX database, what do I have? Answer: IMPAX 6.0. Riddle 2: Is this new system now web-based? Answer: Probably......
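As for the each-image-has-a-URL convention, the DICOM folks have actually standardized such a scheme: WADO (Web Access to DICOM Objects, DICOM Part 18, circa 2004). A sketch of what one of those image URLs looks like follows; the server name and UIDs are invented for illustration:

```python
from urllib.parse import urlencode

# Hypothetical WADO-style request: every slice is reachable by URL.
# The base address and the UID values are made up; the parameter names
# (requestType, studyUID, seriesUID, objectUID, contentType) come from
# the WADO standard.
BASE = "https://pacs.example.com/wado"

def wado_url(study_uid: str, series_uid: str, object_uid: str) -> str:
    """Build a URL addressing one DICOM object on a WADO-speaking server."""
    params = {
        "requestType": "WADO",
        "studyUID": study_uid,
        "seriesUID": series_uid,
        "objectUID": object_uid,
        "contentType": "application/dicom",  # or image/jpeg for a plain browser
    }
    return BASE + "?" + urlencode(params)

print(wado_url("1.2.840.1", "1.2.840.1.2", "1.2.840.1.2.3"))
```

If vendors adopt something like this, "web-based" stops being a marketing term and becomes an address you can type.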
Monday, August 15, 2005
"I still do not understand how all of the advantages that Brad describes are gained via a Web based PACS. Whether the PACS is web or not, a system can still be brokerless, flexible, inexpensive, easy to deploy, easy to upgrade, etc. A web based PACS still requires somewhat complex servers and configurations, etc. The advantage for me, and the only one, of a web based system is the ability to 'launch' the application from the web. This opens up opportunities that are endless. Now granted, that is a huge advantage, but I just don't think that people are conveying the advantages of web based PACS very well, and especially not in this tidbit by Brad. I would love to see data on whether web based PACS perform better, are more reliable, are more secure, etc. And for those requests, I would love to see real examples, and not any high level, marketing focused, buzz word based responses."
I can't answer this question as well as, say, Brad Levin might, but I'll try. As you probably know by now, I have Agfa Impax 4.5 at one hospital, and Amicas LightBeam at another. The former is arguably at the pinnacle of development of a non-web-based system, the latter is typical of the modern breed of web-based architecture. What are the differences to me, the end-user?
If you discount differences in the clients themselves (and I could wax poetic about that for hours and hours), there is no obvious difference in the two approaches (again, from my point of view) whilst working within the hospital. I open the studies and read them. Rocket science here, right? I should add, however, that the Amicas system checks the software on my station upon each sign-on, and allows the installation of any available upgrade (that is already on the server) before the reading session begins. Could this be done with Impax? I suppose it could, but it isn't at the moment.
The real difference to me is how the system works when I am outside the hospital. Deployment of a client is much easier with a web-based system, and there is no discrepancy in the software I use at home, in the hospital, or in Timbuktu (or in the North Woods of Wisconsin if I should happen to be there.) Again, could this be accomplished with Impax? Maybe, but Agfa chooses instead to add another box, the Web1000, as an entirely separate server and client. The later versions of Web1000 look a little more like Impax, but they remain two very separate programs. At 3AM, it is a lot easier to use what you have been using all day than adjust to something different, trust me. Moreover, being able to sit at any computer in the world with broadband Internet access (well, any Windows computer anyway) and be up and running with minimal effort is truly mind-boggling when you think about it.
I'm not well-enough versed in the underlying architecture (or I haven't had enough Versed) to discuss the relative merits of each approach. My rather simplistic view is this: The 'net was designed for rapid, error-proof, interruption-resistant transmission of data. That's what we need for PACS, yes? So why reinvent the wheel?
I think we would all love to hear from experts on both sides of this issue.
Saturday, August 13, 2005
First a bit of a disclaimer -- As my screen name explicitly describes, I'm Brad Levin and I'm the Director of Strategic Marketing for AMICAS. Prior to AMICAS, I was a PACS Subject Matter Expert for both Cap Gemini Ernst & Young and prior to that, Xtria Healthcare. I've experienced PACS for 10 years+, as the industry has made generational changes from the earliest military days of MDIS which was highly proprietary PACS (resulting in the Unix guts of most of today's traditional PACS), to the first DIN-PACS which spurred industry to go the route of rudimentary integrated RIS/PACS and heavily brokered systems, and most recently to Web-based PACS.
I'll try to provide you a snapshot of this market segment as it exists today -- it's a long response, but I hope this provides some clarity for you:
What is Web-based PACS? While there is no Webster's definition, Web-based PACS is PACS with the guts of a Web server under the hood. In other words, Web-based PACS delivers images and reports via a URL-based mechanism (e.g., what you typed in your browser to get to AuntMinnie.com). Some Web-based PACS vendors have URLs literally in their graphical user interface (GUI), while others use this mechanism, but choose not to have the URL accessible explicitly in the GUI.
Why Web-based PACS? Traditional approaches to PACS are tried and true - there's no argument there, as every vendor can ultimately move around images and reports. But to continue the automobile metaphor, these methods require significant "elbow grease" to be successful. This level of effort has both frustrated PACS customers and vendors alike. Why? Because these approaches lead to PACS with brokers, multiple databases, multiple operating systems, restrictive Radiology-centric workflow, expensive workstations/clinical viewers, multiple levels of archive, proprietary hardware (purchased through PACS vendors), a separate/non-scaleable Web-server and multiple user interfaces for different PACS viewing applications: telerad, distribution, clinical viewing and diagnostic workstations. The paradigm shift of PACS is challenging enough on its own (e.g., change management, training, system rollout) let alone to be hampered by the technical complexity of these disparate systems that must be implemented, maintained, and synchronized. It's complex because it's inherent in the model.
What has been the result of these valiant efforts for PACS? Vendors have had no choice but to pass through (with profits) the complex development, support and integration costs onto the PACS consumer marketplace. By virtue of real-world experience alone, the majority of industry consultants are most familiar with this complex model of PACS and it is continually demonstrated in their RFPs. RFPs have marginally changed from the early days of PACS despite the significant differences in approaches to PACS. So, rather than make generational changes in architecture, many traditional vendors have continued to deliver "complex" PACS via this model, charging several million dollars per PACS, plus several hundred thousand dollars per year for support. That is why so many traditional PACS buyers have a hard time cost justifying PACS, because ROI (using "hard" numbers) is difficult to achieve in a model that does not have the tools to eliminate the production of film. And if you can't eliminate the film, you'll chase, but never get to ROI. That doesn't mean that PACS can't be justified with this model, because consultants and PACS customers have learned how to be creative using this approach with so called "soft and hard" savings. Just go to any conference or read the mags and you'll learn how. Early adopters have worked the system in this fashion simply because the technology and inherent high costs have led them in this not so pleasant direction.
So why Web-based PACS now? Simply speaking, the market dynamics changed and PACS customers are more demanding - on their terms. While literally one or two vendors have been in Web-based PACS for years, the PACS marketplace has clearly taken a decisive turn in the past 3 years. Today, there are probably a half dozen or more vendors offering Web-based PACS, and a similar handful of RIS vendors offer Web-based solutions as well.
As I said earlier, PACS early adopters (e.g., academics > 400 beds) are in upgrade mode now, but have had great difficulty accepting the costly terms of upgrading their traditional PACS. They are questioning, "Why spend millions on a model that was created in the early/mid 90s?" I don't think anyone would purchase a 286/386 PC today. The same rationale exists for PACS, except when you purchase/upgrade your system, you are tied to your purchase for at least 5 years+. Combined with this is the reality that the broader marketplace (e.g., <400>
So, the only way for traditional PACS vendors to address this market opportunity is to come up with a model that meets the "demand" with a "solution" that can be delivered with scale (to provide wide image/report access and thus, allow film printing to nearly cease); reduce architectural complexity to "simplistic" models that can be deployed fast, supported over the Web with minimal resources, upgraded over the Web, etc.; provide integration platforms to electronic medical record (EMR) systems through Web server calls; leverage PACS for the enterprise and not just for Radiology; integrate through brokerless interface engines; and perhaps most important of all, to be able to be sold for less to meet the market demand head on.
And what is the outcome of the above? PACS powered by the Web allows adopters to move to PACS faster, with far greater simplicity, and thus, far more affordably than has historically been the case for PACS. It's a confusing time in the marketplace for sure -- but make no mistake -- the rules for PACS have been and are continuing to be rewritten, allowing ROI to be a reality, not the fallacy from the past. There are many flavors to Web-based PACS out there. Some use proprietary means, others focus on standards. Some offer more restrictive solutions than others for enterprise workflow, integration, off the shelf hardware, etc.
In closing, the main message of my response is that for those who have been in the industry a long time the momentum is fairly obvious -- the Web is clearly the direction of the emerging PACS vendors, and most of the traditional vendors are either on board with the Web, have released partial Web-based products, or you can almost certainly be sure the Web will play a part in their future releases. If they don't move, they and their customers will be left behind. And as in the past, one day the market dynamics will have their way with the Web --- it is almost inevitable. But when that day will come is anyone's guess. The wave of the Web is in its infancy and will likely ride for many years to come. Just remember that the PACS penetration rate outside of the academics is in single digits today, and this is the target market for all of the traditional vendors. Some can play in this space today, others can't.
The real final message is that astute and novice PACS buyers have no choice but to filter out PACS for their own best fit model. The change from the past is that the business of Radiology for the <300~400>
Thanks for listening and I hope this was helpful.
It was very helpful, Brad. Based on what we've seen over the past 2 years, I think he hit pretty close to the mark.
This information has formed the basis of my own definition of web-based PACS, i.e., that it is a web-server at its core, utilizing web technology overall, and not simply adding a server as an appendage to a more traditional architecture.
Note that the top KLAS-mates over the past few years have all been web-based products, Amicas, DR, Stentor. I said in an AM post after SCAR 2003 that most major vendors either had or will have web-based architecture based on the above definition. This seems to be coming true, slowly but surely.
Friday, August 05, 2005
You must be a huge Michael Moore fan. You seem to employ a similar style.
Jim @ ScImage
Well, now, what can I say about this? First off, I had to realize that ScImage is headquartered in California, so comparing me with Michael Moore might well be a compliment. I must admit I am not a fan of Mr. Moore, although there might be some physical resemblance: