Image courtesy TMONews.com
One of our in-house transcriptionists has been using speech recognition in what I call "editor mode". In other words, my dictation is run through an SR engine (I don't know which one) to produce the first draft of the report. She then edits it while listening to the dictation, and places the final report on the RIS for my review. This should give us the best of both worlds: the speed of SR and the accuracy of a human.
Except it doesn't work.
This particular transcriptionist is superb at what she does, and before this little experiment, her reports rarely had any mistakes at all. But throw in SR, and all bets are off. Almost every report produced in this manner had at least two or three mistakes. And these were NOT typos, making them much harder to spot. For example, I dictated something about activity in the stump of an amputee. The SR translated "stump" as "stomach", and this made it past the editor and would have made it past me if I hadn't remembered the case itself. I can cite dozens and dozens of other, similar glitches.
I asked that SR be turned off for this week as an experiment, and wouldn't you know it? Absolutely NO mistakes on our reports. None. Zero. Nada.
Once again, I'll yell it from the hills and the valleys: Speech Recognition is NOT READY FOR PRIMETIME! Period. Maybe in five years.
Even Siri agrees with me on this, although she knows I'll turn her off if she doesn't.