Revelations about Amazon’s “secret” 1492 project, in which AI is to be applied to the health domain, have continued to fuel the discussion about AI and the future of digital health. Amazon’s 1492 seems to be the latest example of yet another attempt to utilise next-generation networks, ever-increasing computing power and more sophisticated data-mining strategies. However, while smart algorithms, neural networks, machine learning and related strategies are all well-established techniques, AI in the context of digital health certainly needs to be considered a double-edged sword. (The recent Zuckerberg–Musk argument could be considered an instantiation of this dichotomy.)

One of the problems is clearly the lack of a crisp definition of AI.


How does AI actually scale into the individual and population health context? When discussing intelligence, people frequently associate artificial intelligence with “logic”, following philosophical and mathematical principles such as those introduced by Aristotle, Kant, Boole, Zadeh and others. While their work has been instrumental in enhancing precision in mathematical processes, there are alternative expressions of intelligence, such as “emotional intelligence”. This might best be explained by referring to the typology research of Myers and Briggs (the Myers-Briggs Type Indicator, MBTI), based on original work by Carl Jung. In very simplistic terms, Myers and Briggs found that “intuitive” approaches can be as successful as analytical strategies and that some individuals may even rely predominantly on an intuitive approach.


To medical professionals this might not come as much of a surprise. Experienced clinicians know very well that medicine is anything but a “precise” science; just think of homeopathy or the power of placebos. One of the prime tasks of medical professionals is to process information and compile it according to the requirements of their patients, which are determined by a huge variety of medical, social and psychological factors.


While AI is getting closer to providing individual predictions (precision medicine) based on mathematical algorithms, it is miles away from operationalising and mimicking the information processing that underpins human interaction. Some might argue that Google Assistant and Amazon’s Alexa already include AI elements. However, none of the existing systems comes even close to managing, or even recognising, complex behavioural and psychological concepts such as moods, emotions, deception and delusion, let alone detecting how these oscillate within an individual. Surprisingly, this seems not to be fully understood by some in the industry. Google DeepMind’s uncritical large-scale data grab in London, where well over a million rich patient records were shared without explicit patient consent and in breach of UK data protection law, did not lead to significant progress. On the contrary, it showed the ugly face of AI and built considerable prejudice amongst the global stakeholder community. Now that Mr. Hyde has shown himself and awareness has been raised, this mistake is unlikely to be repeated.

The introduction of AI-based strategies in digital health requires a key ingredient from data owners and healthcare providers: trust. Although it is well known that the appearance of Mr. Hyde cannot be entirely prevented, it is important to see much more of Dr. Jekyll. Amazon, Microsoft, Siemens Healthineers, Apple and Google have to ensure transparency, collaborate closely with stakeholders by applying requirements engineering principles, and demonstrate compliance with major data protection regulations such as the GDPR. The industry should also embrace and contribute to responsible research and innovation (RRI) activities. This would not only keep Hyde under control but at the same time create ample business opportunities on a global scale.
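To make concrete what “individual predictions based on mathematical algorithms” means in practice, the sketch below shows a minimal logistic-regression risk score of the kind that underlies many precision-medicine predictions. Every variable name and coefficient is hypothetical and chosen purely for illustration, not taken from any real clinical model; the point is that only quantifiable inputs enter the score.

```python
import math

# Hypothetical coefficients for an illustrative readmission-risk model.
# These numbers are invented for demonstration; a real model would be
# fitted to clinical data and validated before any use.
INTERCEPT = -4.2
COEFFICIENTS = {
    "age_years": 0.03,          # risk rises slightly with age
    "prior_admissions": 0.45,   # strongest quantifiable signal here
    "hba1c_percent": 0.20,      # glycaemic control
    "lives_alone": 0.60,        # crude proxy for social support
}

def readmission_risk(patient: dict) -> float:
    """Return a probability in [0, 1] from a logistic model.

    Only quantifiable inputs enter the score. Moods, emotions,
    deception or delusion - the factors current AI cannot recognise -
    have no representation here at all.
    """
    z = INTERCEPT + sum(COEFFICIENTS[k] * patient[k] for k in COEFFICIENTS)
    return 1.0 / (1.0 + math.exp(-z))

if __name__ == "__main__":
    patient = {"age_years": 72, "prior_admissions": 3,
               "hba1c_percent": 8.1, "lives_alone": 1}
    print(f"Predicted 30-day readmission risk: {readmission_risk(patient):.1%}")
```

Running the sketch prints a single probability; everything the preceding paragraph lists as beyond current systems never enters the calculation.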

