Thursday, September 29, 2005

The Missing Link

Once in a while (a GREAT while, it seems, for me), one is struck with a simple solution to a complex problem. I have mentioned in earlier posts that linking between a new and an old CT (or MR for that matter) is critical for interpretation. This would be an easy proposition if patients could be very precisely positioned at the same spot on the gantry every single time, and of course if they held completely still. Then there's the breathing aspect, but I've found in general that patients who don't breathe at all tend not to pay their bills, so we have to live with that.

In the good old days, a couple of years ago that is, the only real option for comparing studies was to link by image number. In other words, if you scrolled down three slices on the new, you would scroll down three slices on the old. This would be OK if your scans were both performed with the same slice thickness. That is not always the case, especially if you have installed a new scanner since the patient's last exam. So, the modern systems (except, of course, for our friends at Image Technology Laboratories, who don't think this situation ever occurs) match scans by table position. This represents a considerable improvement, although all it really does is advance the old study intermittently to more or less match the position of the pertinent slice of the new scan. One still has to orient one study with the other. I try to pick some landmark, say the carina, or the SMA, find the slice on each study that demonstrates it well, and then I link the two together.

OK, here's where my idea comes in. I won't bother to try to patent it, because it is really just an extension (or a subset) of the fusion software used to match PETs and CTs. Those get automatically matched these days because the gantries are combined and the patient (hopefully) doesn't move much. We have a computer from Hermes that supposedly will stretch and deform and magnify the PET to conform to the CT. Sadly, it doesn't work very well unless you tweak it to death. But it tries to match without any real help from humans.

My idea is to use a simplified version of this approach to link new and old scans. Instead of having the computer grind away forever trying to match the scans, let the user do it: mark congruent points on each study, say the sternal notch, the carina, the SMA, and the symphysis, just to use my personal favorites. In the simplest implementation, slice incrementation could be adjusted to match the position relative to those marked points, rather than table position per se. The more points you place, the better the match, although I assume most people aren't going to want to place more than three. The wider the distribution in the z-axis (head-to-toe), the better the match as well. In a really whiz-bang set-up, the scans could be treated as volumes and the old one morphed to the points marked. Some folks from Voxar hinted to me that they were working on a surface-mapping approach to this problem, but so far, several years later, no such luck. My approach is a lot easier, and therefore cheaper, and therefore more likely to appear on a PACS near you.
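To show how little machinery the simplest version of this needs: given the table positions of the landmarks marked on each study, the matching position on the old study is just a piecewise-linear interpolation between the landmark pairs. This is a back-of-the-envelope sketch of the idea, not any vendor's implementation, and the landmark coordinates are made up:

```python
from bisect import bisect_right

def map_z(z_new, marks_new, marks_old):
    """Map a table position (mm) on the new study to the matching
    position on the old study, interpolating piecewise-linearly
    between the user-marked landmark pairs."""
    pairs = sorted(zip(marks_new, marks_old))
    zs_new = [p[0] for p in pairs]
    zs_old = [p[1] for p in pairs]
    if len(pairs) == 1:
        # One landmark: a plain offset, like linking on the carina alone.
        return z_new + (zs_old[0] - zs_new[0])
    # Find the landmark interval containing z_new; clamp the index so
    # positions beyond the first or last landmark extrapolate linearly.
    i = max(0, min(bisect_right(zs_new, z_new) - 1, len(pairs) - 2))
    t = (z_new - zs_new[i]) / (zs_new[i + 1] - zs_new[i])
    return zs_old[i] + t * (zs_old[i + 1] - zs_old[i])

# Invented landmark positions: sternal notch, carina, SMA on each study.
marks_new = [0.0, 55.0, 230.0]
marks_old = [4.0, 62.0, 241.0]
print(map_z(110.0, marks_new, marks_old))  # lands between the old carina and SMA
```

Placing a second or third landmark just adds interpolation intervals, which is why the match improves with more points and with wider z-axis coverage.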

If someone gets the volumetric approach down, the next step would of course be linked 3D studies, including MPRs and volume renderings. Again, I have heard that Siemens was working on that sort of thing for InSpace (courtesy of Eliot Fishman, responding to a question of mine), but again, this has yet to see the light of day. My pals at ScImage did create a dual-MPR display (one can do the same with GE AW4.1 and Philips, I mean Sectra). ScI's program suffers from their usual confusion as to where it thinks you have clicked, not to mention half-a-dozen other problems, and using the dual-MPR is so tedious that I don't bother with it. Now, watch them be the first to run with my idea. That's OK, as long as everyone else does so as well.

Phew. Having ideas is hard work. I think I'll go back to bashing.


Anonymous said...

The Dalai Lama,

I am interested in your wisdom regarding AGFA vs Amicas for Radiologist functionality and overall ease of use.

Regards, Jay

Anonymous said...

What you've proposed for CT/PET fusion has indeed already been thought of, and standard interchangeable DICOM objects to store the registration of CT and PET datasets (among others) were defined and approved as part of the DICOM standard in 2003 -- i.e., by now it is possible some manufacturers might actually be generating and/or supporting input and processing of these objects in their products.

DICOM Supplement 73 defined two different but related objects. One is the Spatial Registration object, which includes the transformation matrices needed to rotate, translate, scale, and shear the pixels of one set of images such that they line up with, and are the same size as, the images in a second set. The object doesn't specify how to render the pixels of the two datasets such that they can be presented in a fused display. It only specifies how an application needs to transform one (or both) of the image datasets so their pixel data correspond to the same volumetric space. However, there are limits to the types of deformation which can be expressed in transformation matrices.
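As a rough illustration of what such a matrix buys you: applying a 4x4 homogeneous transform to an (x, y, z) coordinate is a single matrix-vector multiply. The matrix below is invented for the example (a scale in x plus a shift in z), not taken from any actual Spatial Registration object:

```python
def apply_affine(matrix, point):
    """Apply a 4x4 homogeneous transform (row-major, combining any
    rotation, translation, scale, and shear) to an (x, y, z) point."""
    h = (point[0], point[1], point[2], 1.0)
    # Only the first three rows matter for the output coordinate.
    return tuple(sum(matrix[r][c] * h[c] for c in range(4)) for r in range(3))

# An invented registration: scale x by 1.5 and shift z by 5 mm.
M = [
    [1.5, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 5.0],
    [0.0, 0.0, 0.0, 1.0],
]
print(apply_affine(M, (10.0, 20.0, 30.0)))  # -> (15.0, 20.0, 35.0)
```

The catch, as noted above, is that any single such matrix applies the same linear deformation everywhere in the volume, which is exactly what squishy patients refuse to cooperate with.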

Human beings are wiggly, squishy sorts of objects which can deform themselves in ways not easily expressed in a general formula that can be uniformly applied across values distributed in a volumetric space. The second object defined in DICOM Sup 73, the Spatial Fiducials object, contains a list of corresponding points and their locations in the coordinate spaces defined for both sets of images.

While this type of object doesn't tell a receiver exactly how to compute the spatial transform, it does provide the data, which might be captured in one application, needed by another application to compute a non-rigid transform for fusing the two image sets.
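To make that concrete: from a handful of corresponding fiducial points, a receiving application could fit even a very simple transform -- say, an independent least-squares scale and offset per axis. The sketch below (with invented coordinates) just illustrates the idea; a real application would fit something far more capable, such as a rigid or non-rigid registration:

```python
def fit_axis(moving, fixed):
    """Least-squares fit of fixed ~= a * moving + b along one axis."""
    n = len(moving)
    mx, fx = sum(moving) / n, sum(fixed) / n
    var = sum((m - mx) ** 2 for m in moving)
    cov = sum((m - mx) * (f - fx) for m, f in zip(moving, fixed))
    a = cov / var
    return a, fx - a * mx

def fit_fiducials(moving_pts, fixed_pts):
    """Fit an independent scale and offset per axis from corresponding
    fiducial points (a crude fit: no rotation or shear)."""
    return [fit_axis([p[i] for p in moving_pts],
                     [p[i] for p in fixed_pts]) for i in range(3)]

def apply_fit(axes, point):
    return tuple(a * c + b for (a, b), c in zip(axes, point))

# Invented fiducials: the fixed study is stretched 5% in x and shifted.
moving = [(0.0, 0.0, 0.0), (100.0, 0.0, 0.0),
          (0.0, 100.0, 0.0), (0.0, 0.0, 100.0)]
fixed = [(2.0, 3.0, 4.0), (107.0, 3.0, 4.0),
         (2.0, 103.0, 4.0), (2.0, 3.0, 104.0)]
axes = fit_fiducials(moving, fixed)
print(apply_fit(axes, (50.0, 50.0, 50.0)))  # approximately (54.5, 53.0, 54.0)
```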

Other things included in the DICOM Supplement were identifiers for some standard coordinate/frame-of-reference mappings -- i.e., identifiers one would use when expressing a transform of an image dataset to a brain or other anatomical atlas.