As technology races ahead to provide new and interesting applications, let’s hope we don’t lose our common sense in applying them.

A recent Reuters report provides an interesting example. According to it, Total Immersion, a pioneer in the emerging field of augmented reality (AR), is working with Intel to bring AR features, such as gesture recognition, into Intel's chipsets. Of course, this research is being done to help develop “total immersion” systems that will be cool for games, but perhaps less so for real-world applications.

Suppose we apply the concept of gesture recognition to automobile systems. It could be useful to have one’s car computer recognize that a wave of the hand means “shut down the engine,” but how does the car know you are communicating with it rather than greeting your neighbor as you drive by? And if an accident results from a computer misreading a driver’s intentions, what are the insurance implications?

Clearly, gestures are both highly individual and highly specific to context. If I touch my thumb to my forefinger, I might be signaling “OK,” or maybe I just happened to let the digits touch. If my car is sensitive to my gestures, however, I need to take care to avoid using those same gestures when communicating with other passengers in the vehicle. As for individual users, I’m sure auto systems could be trained in much the same way that voice recognition systems are, but what happens when someone else drives? Sounds like a lot of system training for relatively little return.

On the other hand, particular forms of gesture recognition could prove to be lifesavers if, for example, my car’s systems “recognize” when I am dozing at the wheel (a feature already being marketed by some). Again, training will be needed for the system to accurately identify what it is seeing, since incidentals like a driver wearing sunglasses could prove confounding.

It makes perfect sense that drivers would want more and more of their tasks automated, but as I have said previously, the technology itself must not become a distraction. If I make a gesture and the onboard computer misrecognizes it, my attention suddenly shifts to the errant computer and away from my driving. For this reason, any such system needs to be rigorously tested both for its reliability and for the danger it could pose when it makes a mistake. I’m not sure we’re getting that level of testing with the systems now being discussed for marketing to the public.

I’m looking forward to all kinds of safety-enhancing technologies in cars and homes. Let’s be sure that these technologies actually do increase safety before we rush headlong to adopt them. I have no idea whether or not gesture recognition will be suggested for driver systems, but at the moment it doesn’t seem like a very practical idea.

Ara C. Trembly (www.aratremblytechnology.com) is the founder of Ara Trembly, The Tech Consultant, and a longtime observer of technology in insurance and financial services.

Readers are encouraged to respond to Ara using the “Add Your Comments” box below. He can also be reached at ara@aratremblytechnology.com.

This blog was exclusively written for Insurance Networking News. It may not be reposted or reused without permission from Insurance Networking News.

The opinions of bloggers on www.insurancenetworking.com do not necessarily reflect those of Insurance Networking News.
