An AI Lesson from DataStew

An AI Lesson (from twenty years ago)

A couple of decades ago I used to support a variety of attorneys’ offices. One year a product called “Dragon NaturallySpeaking” made a splash and we got a flood of calls to install it for clients. Dragon was a speech-to-text application and was pretty neat tech for its time. In addition to the software, we would install a headset and often a sound card (as those weren’t standard on PCs yet). As part of the setup process the user would spend some time speaking so the system could learn their speech. There was a lot of excitement around the software, and it had users dreaming of big gains in productivity and the elimination of tedious tasks. After everyone who wanted the software had it, and the excitement died down, we went back to our everyday work of taking care of clients’ IT systems.

Over time I noticed our attorney clients were using the software less and less, and eventually I rarely ran across it at all. Occasionally I would ask about it, and the general answer was that it just didn’t save them any time, or didn’t work as promised. At the time the software was claiming a 95% accuracy rate (it’s been a while, don’t hold me to this number) and I believed it did what it said. The issue was that for most of our clients, 95% wasn’t accurate enough. For example, if an attorney needed to write a pleading, and the text of the pleading was only 95% accurate, that would be a disaster. The reality was that the work product from Dragon would still have to go to an assistant to review and correct (or, if the attorney was solo, the attorney themselves), and that didn’t save any labor — it actually added labor.


Speech-to-text didn’t go away; it just wasn’t right or ready for that task at that point. We now use speech-to-text all the time for things like Alexa, Siri, texting apps, closed captions on TV, and unfortunately the dreaded call center bots. I believe the lesson wasn’t that it didn’t work — it just wasn’t a good solution for that problem set, at that point in its development.

What I see in the hype around AI these days reminds me a lot of that time trying to implement Dragon. It’s not that it doesn’t work, it’s just that we are early. There are some use cases it seems excellent for, and some that maybe we should wait a bit on. There are also HUNDREDS of AI models out there, and the integrators who build these solutions don’t know the ins and outs of them all. The talent pool with experience in these things also isn’t very deep (yet). In time the experience will broaden out, and the good AI models will rise and become known for the specialties they serve.

My conclusion is that this is like most cutting-edge technology: whether or not it is right for you yet depends on how far out on the technology curve you are willing to push. If you like the risk of early adoption, and have a healthy budget, this stuff may be great for you (also, call me). But if your company values a more predictable return on investment, give it a bit of time (but also call me, we do both).

 

Key Takeaways & Summary

  • Sometimes the brochure is better than the product
  • Good solution architecting will save money and prevent failed projects
  • AI is a great solution, if it solves your problem
  • Today’s cutting edge is tomorrow’s mainstream and next week’s old tech. Matching where you sit on the tech curve to the risk profile of your company is a recipe for successful projects.