Sunday, August 17, 2014

Dalai The Glasshole




We've all heard the hype about Google Glass, but bleeding-edger though I am, I have not yet succumbed to the pressure to invest $1,500 in the future of the future. Fortunately, a friend in the healthcare software business has allowed me to borrow his for a prolonged trial. My conclusion? Nice first effort, Google, but it needs some work.

I'm not going to attempt a full review of Glass, nor will I dabble in the discussions about privacy and so forth. That's all been done many, many times out there on the web, by folks much more eloquent than I. On the privacy issue, my only real concern would be someone wearing Glass in a public restroom. Otherwise, have at it, Glass-wearers. I try very hard not to do anything in public that would embarrass me, videoed or not. Remember, most every cell phone has a camera, too.

But back to Glass. Technologically, this little strip of electronics attached to an eyeglass frame is pretty amazing. Specs (pun intended) as outlined in the Wikipedia article include:

Technical specifications


The Explorer's LCoS display optics use a polarizing beam splitter (PBS), a partially reflecting out-coupling mirror, and an astigmatism-correcting, collimating reflector formed on the nose end of the optical assembly.
(For the developer Explorer units:)
  • Android 4.4
  • 640×360 Himax HX7309 LCoS display
  • 5-megapixel camera, capable of 720p video recording
  • Wi-Fi 802.11b/g
  • Bluetooth
  • 16 GB storage (12 GB available)
  • Texas Instruments OMAP 4430 SoC, 1.2 GHz dual-core (ARMv7)
  • 2 GB RAM
  • 3-axis gyroscope
  • 3-axis accelerometer
  • 3-axis magnetometer (compass)
  • Ambient light sensor and proximity sensor
  • Bone conduction audio transducer

In the end, it is a super-duper Bluetooth headset, with the addition of video viewing and a still and video camera. (I had invented the Bluetooth headset camera idea myself in 2008! Too bad I never patented it.) And position sensors, etc., out the wazoo. But an appendage it is, and it needs a smartphone in your pocket to perform all of its tricks, although a Wi-Fi connection will go a long way. This is almost a full-fledged computer system you wear on your face, but it's not quite capable of independent operation. Still, the technology is truly incredible, and quite an achievement for a first pass.

In actual use, I was not as impressed as I wanted to be. Battery life was horrible, giving me just over an hour of heavy use. Of course, you could bring along a battery pack and cable and keep Glass plugged in. But even if you go to that length, Glass will shut down periodically due to overheating, and it does get quite warm to the touch.

I wear bifocals, and my dominant right eye is more nearsighted than my left. I've reached the age of presbyopia (this actually happened when I was 40, which was quite a while back), so I need close-up correction as well. The Glass display lives somewhere in your right upper outer visual quadrant, and as you can see in the bathroom mirror pic above, one has to look up to see it. To me, the display went in and out of sharpness, and my perceived resolution was fair. Text had to be pretty much full-screen to be readable. The size of the "virtual" screen is about the same as my 70" TV as seen from 20 feet away. But my TV is much sharper. We need some better optical correction.
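If you want to put numbers on that impression, here's a rough back-of-the-envelope sketch. I'm assuming a 1080p panel in the 70-inch TV (that resolution is my assumption) and that the Glass image fills roughly the same field of view:

```python
import math

# Rough comparison of angular pixel density: a 70" 1080p TV at 20 feet
# versus Glass's 640x360 panel filling about the same apparent area.
# The 1080p resolution and 16:9 shape of the TV are assumptions.

def angular_width_deg(screen_width_in, distance_in):
    """Horizontal angle subtended by a flat screen at a given viewing distance."""
    return 2 * math.degrees(math.atan((screen_width_in / 2) / distance_in))

tv_width_in = 70 * 16 / math.hypot(16, 9)      # ~61" wide for a 70" 16:9 diagonal
fov = angular_width_deg(tv_width_in, 20 * 12)  # ~14.5 degrees at 20 feet

print(f"Apparent width: {fov:.1f} degrees")
print(f"TV:    {1920 / fov:.0f} pixels per degree")
print(f"Glass: {640 / fov:.0f} pixels per degree")
```

Call it roughly 130 pixels per degree versus 45. No wonder the TV wins.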

Control of Glass might be its worst aspect. There are two ways in. First, there is a limited touchpad at the temple piece. You can tap in the manner of a mouse-click, or slide back and forth, evoking a linear menu of sorts, depending on where you are in the OS. Stroking down dismisses whatever screen you have up. I'm not terribly impressed with this, but the second input, speech recognition, is a deal-breaker for me here, just as it is in transcription. To be fair, the limited voice commands actually do work, as long as you wait for the proper prompt and begin with "OK, Glass". As in, "OK, Glass, Google why are people looking at me funny?" But therein lies the rub. Out here in the real world, you simply cannot go around talking to yourself and not get funny looks at the very least. It looks odd, it sounds odd, in the work environment it will bother other people, and at a bar it will inspire large gentlemen to assist you in divesting yourself of, and ultimately destroying, the $1,500 toy. Making a spectacle (ha ha) of myself is something I try to avoid. And think about the joy of having a bunch of Glasses operating in a single room. Which "OK, Glass" will any given headset actually believe? There is also a third, limited input that uses a strong eye-blink to activate the camera. So if a Glasshole winks at you, don't wink back unless you want to see it on Facebook.

Ultimately, Glass attempts to be the interface between the real world of the user and the virtual world of Google. A laudable goal. However, neither the software nor the hardware itself is quite there, though the potential is obvious. Glass offers connectivity of sight and sound and position. It has a camera that sees what you see, and a display to feed you information visually. The microphone hears what you hear, and the bone-conduction speaker talks only to you. Glass knows where you are (via the phone's GPS) and where your head is. Assembling one or more of these capabilities can yield tremendous power, limited only by the imagination. Google outlines many of the tasks already available, such as Googling (duh) things, asking for directions, taking and sending photos and videos, and making phone calls. While it isn't particularly limiting, Glass lives in the Google universe, and your communications are predicated on using Gmail, Google+, Google Habitats, and Google Porn (gotcha). They all work, but they're not necessarily my favorite way to do things.
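For the technically curious, much of the Glassware out there doesn't really run on the headset at all; it pushes "timeline cards" at you from a server through Google's Mirror API. Here's a minimal sketch of what that looks like, assuming you've already survived the OAuth 2.0 dance (the access token below is obviously a placeholder):

```python
import requests

# Minimal sketch: insert a plain-text timeline card on a user's Glass via the
# Mirror API. Assumes a valid OAuth 2.0 access token with the glass.timeline
# scope; ACCESS_TOKEN here is just a placeholder.
ACCESS_TOKEN = "ya29.placeholder-token"

card = {
    "text": "OK, Glass, remind me to plug you in again.",
    "menuItems": [{"action": "DELETE"}],  # gives the wearer a 'delete' menu option
}

resp = requests.post(
    "https://www.googleapis.com/mirror/v1/timeline",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=card,
)
resp.raise_for_status()
print("Inserted card:", resp.json().get("id"))
```

The card then pops up in the wearer's timeline, to be read, read aloud, or swiped away.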

The onboard software and additional Glassware apps (loaded via the MyGlass app for iOS or Android) take advantage of one or more of the headset's properties. My favorite is Star Chart, which reveals the secrets of the night sky as you gaze directly at the Heavens (or at your ceiling). It will focus on the star or celestial body at center screen and verbally describe it to you via the earpiece. Here, we are using the headset's proprioception (its orientation sensors) and GPS to figure out where you are looking, and the display to show the proper star map.


I was shooting for Polaris, but by the time I captured the image on the iPhone's MyGlass app, I had moved my head. But you get the idea. See the Big Dipper in the center?
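Under the hood, that trick is mostly spherical trigonometry: the orientation sensors give an altitude and azimuth for where the headset is pointed, GPS and the clock give a site and a sidereal time, and a coordinate conversion tells the app which patch of sky sits at center screen. Here is a simplified sketch of the math (my own approximation, not Star Chart's code; it ignores atmospheric refraction, uses a rough sidereal-time formula, and the location is a placeholder):

```python
import math
from datetime import datetime, timezone

def local_sidereal_time_deg(utc: datetime, lon_deg: float) -> float:
    """Approximate local sidereal time in degrees (fine for a star-chart sketch)."""
    jd = utc.timestamp() / 86400.0 + 2440587.5   # Unix time -> Julian date
    d = jd - 2451545.0                           # days since the J2000.0 epoch
    return (280.46061837 + 360.98564736629 * d + lon_deg) % 360.0

def gaze_to_ra_dec(alt_deg, az_deg, lat_deg, lon_deg, utc):
    """Convert head pointing (altitude/azimuth, azimuth measured from north
    through east) plus GPS position and time into equatorial RA/Dec."""
    alt, az, lat = map(math.radians, (alt_deg, az_deg, lat_deg))
    dec = math.asin(math.sin(alt) * math.sin(lat) +
                    math.cos(alt) * math.cos(lat) * math.cos(az))
    # Hour angle; the sign comes from the azimuth (east of the meridian => negative)
    sin_h = -math.sin(az) * math.cos(alt) / math.cos(dec)
    cos_h = (math.sin(alt) - math.sin(lat) * math.sin(dec)) / (math.cos(lat) * math.cos(dec))
    ha_deg = math.degrees(math.atan2(sin_h, cos_h))
    ra_deg = (local_sidereal_time_deg(utc, lon_deg) - ha_deg) % 360.0
    return ra_deg, math.degrees(dec)

# Example: head tilted 35 degrees up, facing due north, from a placeholder spot
# at 34 N, 81 W -- the answer should land near the north celestial pole.
ra, dec = gaze_to_ra_dec(35.0, 0.0, 34.0, -81.0, datetime.now(timezone.utc))
print(f"Center of view: RA {ra / 15:.1f} h, Dec {dec:+.1f} deg")
```

From there it's just a lookup against a star catalog to decide what to draw and what to read aloud.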

There have been a number of attempts to use Glass in the healthcare field. For the most part, these simply use the camera as a live feed for sharing operations and such, or the display for piping imaging studies or other data in real time to the surgeon or whoever needs them. If I may be so bold, these are really mundane applications piped through novel equipment.

My patron, the kind fellow who loaned me his Glass, wanted my impressions of how Glass could be used in Radiology. I'm not sure where he wanted me to go, but I'm going to do my best to think outside the box. And I'll probably disappoint him and you, dear readers.

Being an imager, my first thought was to use Glass to analyze images, perhaps to recognize pathology or to send a scan or slice thereof to a colleague for consultation. But the more I thought about it, the less sense that made. Why add extra links to the imaging chain? Look at the specs of the specs. Yes, the camera is 5MP, but the lens is really, really tiny. I pulled up a CT image from the 'net to simulate this process, and with an "OK, Glass," took a photo of it. (Which prompted Mrs. Dalai to suggest that I TAKE THE DAMN THING OFF AND STOP TALKING TO IT.  See what I meant above?) Anyway, here's what I got with my face about 4 inches from the screen:


OK, Glass, this is workable, although I don't like to stick my face that close to the monitor. The nose-prints get nasty after a while. But does it make sense to do it this way? Not really. We have the full-resolution image right there ON THE SCREEN. It doesn't make sense to get the image into the system in a roundabout way when the image is already in some system. Perhaps the best approach would be to add software to the workstation (or laptop?) itself that talks with Glass. Perhaps the heads-up display (HUD) could overlay a cross-hair to tell the software where you are concentrating. But, no, that's foolish too. Point at it with the mouse and be done with it. Maybe we could use voice commands to decide which images to capture? Ummm...why bother? Proper PACS software should make that a lot easier. Scratch that idea.
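For completeness: if the workstation really were going to talk to Glass, the sane direction is the reverse, pushing the already-digital key image out to the headset rather than photographing the monitor. A rough sketch against the Mirror API's attachment upload (the access token and file name are placeholders, and obviously you'd de-identify the image first):

```python
import requests

# Sketch: the PACS workstation pushes a rendered key image to Glass as a
# timeline card with an image attachment, instead of Glass photographing the
# monitor. ACCESS_TOKEN and key_image.png are placeholders.
ACCESS_TOKEN = "ya29.placeholder-token"
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

# 1. Create the card itself.
card = requests.post(
    "https://www.googleapis.com/mirror/v1/timeline",
    headers=HEADERS,
    json={"text": "Key image, CT abdomen/pelvis"},
)
card.raise_for_status()
item_id = card.json()["id"]

# 2. Attach the exported PNG to that card via the media upload endpoint.
with open("key_image.png", "rb") as f:
    requests.post(
        f"https://www.googleapis.com/upload/mirror/v1/timeline/{item_id}/attachments?uploadType=media",
        headers={**HEADERS, "Content-Type": "image/png"},
        data=f,
    ).raise_for_status()
```

Still a solution in search of a problem, but at least it spares the monitor the nose-prints.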

Similarly, looking at images on the HUD doesn't make a lot of sense. The display has 640 x 360 pixels, or 0.23 MP. And with my eye, it doesn't even look that good. I can miss stuff at 3MP. I don't want to even contemplate what will get by me at less than 10% of that.
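Just to spell out the arithmetic (taking 2048 × 1536 as a stand-in for a "3 MP" diagnostic monitor):

```python
# Raw pixel budget: Glass HUD versus a nominal 3 MP diagnostic monitor.
glass_px = 640 * 360        # 230,400 pixels, about 0.23 MP
monitor_px = 2048 * 1536    # 3,145,728 pixels, about 3.1 MP
print(f"Glass has {100 * glass_px / monitor_px:.0f}% of the monitor's pixels")  # ~7%
```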

You see the pattern. Glass is meant for roaming away from your computer. It has some great possibilities for situations where you don't have access to a "real" computer and, in particular, a monitor. Glass pales miserably in comparison to a proper workstation, and really shouldn't be compared at all. Radiology, being a workstation-based field, at least from my end of it, just does not as yet lend itself to this iteration of wearable technology.

At this point in time, Glass isn't a lot more than an expensive toy for bleeding-edgers. It has too many problems and limitations. But it is certainly the first step in a major revolution. We will need to see some major improvements for Glass to be more practical even for its current limited applications. Battery life has to improve, and the interface needs to be trashed and redesigned. I'm not really sure what would work better than the unholy combination of voice and a very limited touchpad, but there has to be something. Maybe using the camera to watch hand motion? Of course, this would bring a new meaning to the term "hand-waving"...

Most important for imaging is the image. The itty-bitty HUD is a technological tour-de-force, but it isn't adequate for my purposes. The optics are not good for me, and several other Glass users have had the same problem. Google will have to improve upon the lensing of this tiny display. I would assume the actual display piece would have to be larger to allow for more pixels, which would add weight and bulk. A stereo display with bilateral HUDs would be wonderful, though incredibly odd-looking. The possibility of a 3D HUD brings to mind some Sci-Fi-level approaches, such as superimposing volume-rendered scans over a surgical field. "Cut Here" becomes a reality at last.

OK, Glass. We've had some good times, but I'm afraid it just isn't going to work. Can we still be friends? OK, Glass, I know I was a Glasshole, but it's time to move on. Google it.

1 comment:

stacey said...

The Sci Fi crowd will like it.