Some artificial intelligence (AI) systems can already perform certain tasks as well as or better than humans. Machine learning models, for example, can interpret diagnostic imaging studies much as radiologists do, only in a matter of seconds or faster.


What about voice AI? This technology is expected to change the way we interface with machines and, consequently, how humans interact with each other. Tech giants such as Amazon, Google and Apple have come out with voice-activated gadgets that are now catching on in people's daily lives.  


Meanwhile, Mycroft, the Kansas City-based voice platform company, has announced an open source voice-activated private assistant, the Mark II. This portable speaker, the company says, is designed for everyday users with varying levels of technical expertise. According to Mycroft, the Mark II's privacy, customisation, user agency and open data capabilities set it apart from the proprietary voice assistants currently on the market.




There is also extensive development underway to bring voice technology fully into healthcare. Some leading hospitals are already reshaping how patients experience healthcare with voice AI. According to John Loughnane, MD, chief of clinical innovation at Commonwealth Care Alliance, the industry is on the verge of voice technologies that can be used to tailor individualised care regimens.


"Healthcare is at a tipping point with voice," said John Brownstein, chief innovation officer at Boston Children's Hospital. "We haven't seen it transform any industries. Healthcare could be a leading vertical in voice apps."  Brownstein further said that voice could play a role in the entire patient journey beginning with voice interactions in the home or chatbots at triage, to cite just two examples. 


UPMC Enterprises Executive Vice President Dr. Shivdev Rao said 75 to 80 percent of the signal in a hospital is voice-driven. Dr. Rao added that at UPMC, capturing the metadata of a patient's history could help clinicians recognise when someone presenting with chest pain is suffering from acid reflux rather than a heart attack.


For her part, Sara Holoubek, CEO of Luminary Labs, called 2018 the year of the voice tech pilot. "We're in this extensive period of trial and error," she said.


There are growing pains to be endured, according to tech experts, who advised hospitals and tech developers to anticipate and plan for some stumbling as proofs of concept fizzle out and pilot programmes fail to make it into production.

 

Source: Healthcare IT News

Image credit: Pixabay



