Healthcare organisations face sustained social engineering threats that are increasingly difficult to detect. Attackers blend convincing email lures with text and voice tactics, while artificial intelligence helps craft messages that appear legitimate. Health systems are responding with layered defences that combine secure email gateways, multifactor checks and user education. Yet questions remain about how much traditional training reduces risk in real settings. Recent experience across several providers suggests that success depends on continuous, context-specific reinforcement, stronger verification processes and tools that reduce exposure before messages reach staff. The objective is not elimination of risk but consistent reduction of the likelihood and impact of incidents.
Escalating Social Engineering Across Channels
The scale of exposure in large health systems underlines the challenge. At UC San Diego Health, a workforce of about 23,000 staff receives roughly 30 million emails each month. Screening blocks a substantial share, yet even high filtering success still leaves a meaningful number of potentially malicious messages in circulation. This reality places significant responsibility on employees to spot and report suspicious content, which is why user awareness remains central to defence.
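To put that filtering arithmetic in perspective, a short illustration may help. The monthly volume of roughly 30 million emails comes from the article; the filter effectiveness rates below are assumptions chosen for the example, not figures reported by UC San Diego Health.

```python
# Illustrative only: the monthly volume is from the article; the
# filter effectiveness rates are assumptions for this sketch.
MONTHLY_EMAILS = 30_000_000  # approximate inbound volume at UC San Diego Health

def residual_messages(volume: int, filter_rate: float) -> int:
    """Messages that slip past screening at a given filter effectiveness."""
    return round(volume * (1 - filter_rate))

for rate in (0.99, 0.999, 0.9999):
    print(f"filter {rate:.2%} -> {residual_messages(MONTHLY_EMAILS, rate):,} messages/month")
```

Even at an assumed 99.99% block rate, thousands of messages a month would still reach inboxes, which is why the article treats user reporting as a necessary second layer rather than a fallback.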
Adversaries are widening their approach beyond email phishing. Smishing and vishing now complement inbox tactics, increasing the number of channels that staff must assess under time pressure. The intent is to exploit human trust and operational urgency, whether by imitating internal communications or by targeting topics that matter to employees. Healthcare leaders report that evolving attacker techniques, bolstered by artificial intelligence, make messages appear more credible and tailored to the recipient, raising the stakes for frontline vigilance.
Providers are reinforcing technical controls and tightening human processes. Email security platforms screen inbound traffic and support targeted simulations. Verification rules require staff to validate sensitive requests through separate, trusted channels before acting. Training aims to help employees recognise indicators of manipulation and to normalise a culture of caution. The combined goal is to keep the number of successful lures as low as possible and to ensure swift reporting when something appears wrong.
Training Models Under Scrutiny
UC San Diego Health uses layered training that includes mandatory annual modules through a learning management system and monthly phishing simulations. Experience has prompted a further shift toward department-specific sessions delivered in person or by video. This adaptation followed internal research indicating that conventional approaches offered limited measurable benefits, with trained users showing only a modest reduction in simulated phishing failures compared with untrained peers. Tailoring content to the risks of particular roles aims to make guidance more relevant and memorable.
The organisation continues to run simulations because they sustain awareness and support ongoing conversations with staff and faculty across the year. When someone clicks a simulated lure, explanations highlight cues that were missed. Over time, high-profile breach coverage and staff members’ personal encounters with scams have reinforced formal training, and more employees now forward questionable messages to the security team for review. The emphasis is on behaviour change through repetition, context and clarity rather than reliance on a single annual touchpoint.
A broader view echoes concerns about inconsistent training impact. Decision-makers have cited insufficient or ineffective employee education as a strategic risk in 2024. At UC San Diego Health, researchers conducted an eight-month randomised evaluation of 19,500 users. Click-through rates rose when phishing emails referenced topics employees cared about, such as leave or dress code changes, and how recently users had completed annual training had no measurable effect on their susceptibility. Just-in-time guidance after simulated clicks produced only a small improvement, with the best group achieving a 1.7% lower failure rate than untrained peers. The findings point to the need for evidence-based cybersecurity that blends technology with improved human processes, rather than reliance on a single intervention.
Operational Lessons from Strive Health and Luminis Health
Strive Health has integrated cybersecurity into its culture from the outset. New hires complete a short phishing module during onboarding before receiving access, establishing expectations early. With clinicians and nurses working remotely and delivering mainly telehealth visits, the organisation augments annual video instruction with regular phishing tests that target different groups each month. The security team also runs spear-phishing simulations that mirror personalised attacks and follows up failures with brief remedial guidance that explains the warning signs. Technical controls, including multifactor authentication and tools that classify data to apply appropriate policies, complement the training effort.
Luminis Health has concentrated on the help desk, recognising it as a prime target for social engineering. Attackers can impersonate clinicians and press for urgent password resets, sometimes using blurred cameras or multiple communication channels to mask identity. To counter this, strict verification protocols govern resets, and employees are trained to validate suspicious requests using known contact details or official collaboration platforms. Because attacks increasingly arrive via text, staff are encouraged to confirm messages through established channels rather than respond to unverified numbers.
The organisation combines a secure email gateway with multifactor authentication and requires cybersecurity training for new starters with annual refreshers for all staff. Monthly phishing simulations provide continual reinforcement. Just-in-time microtraining appears immediately after a failed test, turning mistakes into teachable moments. Repeated failures trigger additional steps, including supervisor notification after three incidents and mandatory video training with exercises after five. The aim is a sceptical posture that treats unexpected requests with caution and verifies before acting.
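The escalation ladder above can be sketched as a simple lookup. Only the thresholds come from the article (microtraining after any failure, supervisor notification after three, mandatory video training with exercises after five); the function name and the exact action labels are illustrative assumptions.

```python
def escalation_action(failure_count: int) -> str:
    """Map a user's cumulative simulation failures to the next response.

    Thresholds follow the policy described in the article; the action
    labels and this function itself are illustrative, not Luminis
    Health's actual tooling.
    """
    if failure_count >= 5:
        return "mandatory video training with exercises"
    if failure_count >= 3:
        return "supervisor notification"
    if failure_count >= 1:
        return "just-in-time microtraining"
    return "no action"
```

For example, `escalation_action(3)` returns `"supervisor notification"`. Encoding the ladder this explicitly makes the policy predictable for staff: consequences scale with repetition rather than arriving as a surprise.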
Evidence from real-world programmes and a large internal evaluation indicate practical limits of traditional awareness activities when used in isolation. Click-through rates are strongly influenced by the relevance and quality of lures, and annual training alone does not reliably reduce susceptibility months later. Simulations provide marginal measured gains but help maintain dialogue and attention. The most promising path blends strong technical filtering, clear verification rules and targeted, role-specific education delivered repeatedly and at the point of need. As attackers use artificial intelligence to refine lures, providers are exploring equally adaptive defences, combining improved detection with human processes that are simple to follow under pressure. For healthcare leaders, the priority is not a single remedy but sustained risk reduction through culture, controls and continuous reinforcement that protect patients, operations and reputations.
Source: HealthTech
Image Credit: iStock