Radiology reporting has always been more than documentation, but ECR 2026’s session “AI in radiology communication: expressionism or dadaism?” sharpened that point by tracing how meaning can be created, distorted or lost as reports move from dictation to interpretation to patient-facing dialogue. The throughline was practical: AI can reduce friction and unlock scale, but communication fails when nuance, accountability and human context are treated as optional.


Speech Recognition and Reporting as Clinical Intent

Prof. Charles Edward Kahn (Philadelphia, United States) framed speech-to-text as the first communication bottleneck, arguing that the radiology report functions simultaneously as a clinical decision tool, a legal document and a handoff, where “small errors” in wording can change management and liability. He contrasted “the precise articulation of clinical intent” with the risk that AI-enabled language becomes fragmented or misleading, warning against outputs that “certainly look like language but aren’t really”. Traditional systems, he noted, struggle with homophones, negation and laterality, and they remain sensitive to noise, accent and context, driving constant corrections.


He described newer approaches built on deep neural networks and large language models that are context-aware and can even learn an individual radiologist’s reporting style, supporting real-time correction, structured formatting and consistency checks during dictation. These tools can keep attention on the images while populating structured fields, and can flag contradictions as the report is being created. Yet he stressed that verification is non-negotiable: automated impressions can overreach, adding plausible-sounding findings or misreading expressed uncertainty. As he put it, “we really want to view the radiology report as an act of communication”, but “AI is a two-edged sword”, so radiologists must remain the final interpreters.
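
Kahn did not walk through implementations, but the kind of consistency check he described can be pictured with a toy rule: compare the laterality stated in the findings with the laterality stated in the impression and flag mismatches. The Python sketch below is a hypothetical illustration, not any system shown in the session; a deployed tool would rely on context-aware language models rather than a single regular expression.

```python
import re

# Hypothetical illustration of an in-dictation consistency check:
# flag a draft impression whose laterality contradicts the findings.
LATERALITY = re.compile(r"\b(left|right|bilateral)\b", re.IGNORECASE)

def laterality_terms(text: str) -> set:
    """Collect the laterality words mentioned in a block of report text."""
    return {m.group(1).lower() for m in LATERALITY.finditer(text)}

def check_laterality(findings: str, impression: str) -> list:
    """Warn when the impression uses a side the findings never mention."""
    stated = laterality_terms(impression) - laterality_terms(findings)
    return [f"Impression says '{side}' but the findings never mention it."
            for side in stated]

findings = "There is a 2.1 cm nodule in the left upper lobe."
impression = "Right upper lobe nodule; recommend follow-up CT."
for warning in check_laterality(findings, impression):
    print(warning)  # Impression says 'right' but the findings never mention it.
```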


Turning Narrative Reports into Computable Clinical Knowledge

Dr. Lisa C. Adams (Munich, Germany) shifted to the next step: extracting structured, computable information from narrative reports so systems can aggregate, query and act at scale. She described the core problem bluntly: “Radiology departments generate millions of free text reports every year”, rich with findings, measurements and recommendations, yet much of this value is locked in natural language and cannot be operationalised without extensive manual review. Her definition of “understanding” was specific: transforming free text into structured data that a machine can process, store and act on. She called this “the bridge between dictation and clinical intelligence”.
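
Her definition can be made concrete with a toy example. The sketch below is purely illustrative and reflects no system she presented: it pulls a size measurement out of one free-text sentence, normalises the unit and emits a record that software can store, query or act on. Real pipelines must also handle negation, hedging and synonyms, which is where the methods she surveyed come in.

```python
import re

# Illustrative sketch: turn one free-text sentence into a structured record.
MEASUREMENT = re.compile(r"(\d+(?:\.\d+)?)\s*(mm|cm)\b")

def structure_sentence(sentence: str) -> dict:
    """Extract a size measurement and normalise it to millimetres."""
    record = {"source_text": sentence, "size_mm": None}
    match = MEASUREMENT.search(sentence)
    if match:
        value, unit = float(match.group(1)), match.group(2)
        record["size_mm"] = value * 10 if unit == "cm" else value
    return record

print(structure_sentence("A 2.1 cm nodule is seen in the left upper lobe."))
# {'source_text': 'A 2.1 cm nodule is seen in the left upper lobe.', 'size_mm': 21.0}
```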


To organise progress, she proposed a three-level framework: extraction (identifying entities), relation (linking entities and tracking change over time) and inference (deriving clinical meaning). She mapped this onto the evolution from rule-based tools such as negation detection, through domain-specific transformers trained on radiology reports, to generative large language models that can often perform these tasks without bespoke training.

Her examples emphasised why this matters operationally: NLP can detect incidental findings with high accuracy, but the safety gap often sits downstream, where recommendations are not acted on. She highlighted applications where scale and consistency are decisive, such as post hoc structuring across languages, integration into clinical systems via interoperability standards, and automated registry and coding workflows. Still, she flagged open constraints: smaller models may outperform larger ones on some tasks at far lower cost, sustainability and energy use matter at clinical scale, and European multilingual validation remains a major need. For her, the field’s inflection point is implementation: “The field is moving from the question ‘can we extract?’ to the much harder question, ‘can we integrate?’.”
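
What “integrate” means in practice can be pictured with an interoperability sketch. The snippet below packages one extracted finding as a simplified HL7 FHIR Observation resource; the field choices are illustrative, not a complete profile, and it reproduces no specific system from her talk.

```python
import json

# Simplified sketch: package one extracted finding as an HL7 FHIR
# Observation resource. Field choices are illustrative, not a full profile.
observation = {
    "resourceType": "Observation",
    "status": "final",                       # required FHIR status code
    "code": {"text": "Pulmonary nodule size"},
    "valueQuantity": {
        "value": 21.0,                       # size extracted from the report
        "unit": "mm",
        "system": "http://unitsofmeasure.org",  # UCUM units
        "code": "mm",
    },
}
print(json.dumps(observation, indent=2))
```

Once findings travel as standard resources like this, downstream systems can aggregate them, trigger follow-up reminders or feed registries without re-reading the narrative text.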


From Understanding to Empathy in Patient Communication

Judy Birch (Poole, United Kingdom) brought the sequence into the consultation room, focusing on what happens when machine-generated clarity meets human uncertainty, pain and trust. Speaking from a communication and patient-representation perspective, she argued that the stakes of wording are immediate because “language in itself, how it’s said and what is said is key to how a patient perceives, interprets content and reaches a decision.” A technically “negative” examination can still leave a patient stranded when the phrasing offers no direction and their lived experience remains severe symptoms without answers. For Birch, the danger is not simply imperfect automation but the erosion of the human capabilities that care depends on: intuition, empathy, professional experience and context.


She outlined where AI can help responsibly: maintaining updated summaries “in one place” and tailoring explanations to different literacy levels, alongside professional chatbots that help patients navigate guideline information. But she repeatedly returned to accountability and equity. If AI has been used, she argued, it must be checked by an accountable clinician, and its use must not amplify exclusion or bias. She also challenged the assumption that throughput can compensate for weak communication, insisting that “trust is in short supply” and that empathy prevents panic when results are difficult to process. Her closing insistence on co-design anchored the session’s theme in governance: patient organisations must be involved, because care cannot be rebuilt around automation alone.
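
Her point about literacy levels maps naturally onto a prompt-templating pattern. The sketch below is a hypothetical illustration (no specific model or product was named in the session), and it assumes, as Birch insisted, that any generated text is still checked by an accountable clinician.

```python
# Illustrative prompt template for rewriting a report summary at a chosen
# reading level. The model call is deliberately omitted; per Birch's point,
# any generated text must be reviewed by an accountable clinician.
READING_LEVELS = {
    "plain": "Rewrite for a general audience; avoid jargon and explain terms.",
    "clinical": "Keep clinical terminology; write for a referring physician.",
}

def build_rewrite_prompt(report_summary: str, level: str = "plain") -> str:
    """Compose a rewriting prompt for a language model (call not shown)."""
    return (
        f"{READING_LEVELS[level]}\n"
        "Do not add findings that are not in the text. "
        "Flag anything uncertain instead of guessing.\n\n"
        f"Report summary:\n{report_summary}"
    )

print(build_rewrite_prompt("No acute intracranial abnormality."))
```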


Taken together, the session presented a clear chain: speech becomes text, text becomes structured knowledge and knowledge must still become humane communication. Kahn’s message was that AI can reduce reporting friction but can also distort meaning unless radiologists continuously verify and remain accountable. Adams showed that once reports become computable, entirely new system-level functions become possible, but deployment hinges on integration, validation and multilingual realities. Birch reminded the room that patient-facing communication is not a formatting exercise: trust, context, literacy and equity determine whether “understanding” helps or harms. The shared endpoint was not automation for its own sake, but AI that amplifies clinical intent without fragmenting meaning or displacing the human connection that patients rely on.


Source & Image Credit: ECR 2026