AI in Healthcare Innovation

Explore top LinkedIn content from expert professionals.

  • Vas Narasimhan (Influencer)

    Reimagining medicine as CEO of Novartis

    441,503 followers

    Right now, every CEO is wondering the same thing: “How can artificial intelligence help maximize our impact?”

    Delivering on the promise of AI isn’t just good business; it has the potential to help us address some of society’s most pressing challenges. So today, I wanted to offer a closer look at how AI is helping us discover new medicines at Novartis.

    The process of identifying a new drug, running patient clinical trials, and bringing it to market takes over a decade. Each new medicine costs on average $2 billion to develop, and we know nearly 9 in 10 of the treatments we work on will fail before they ever reach patients.

    A major early step in that process is identifying individual targets in the body that we want to design a drug for. Once we identify that target, which is most commonly a protein, we look for molecules that might address the target’s underlying issue; ultimately, those molecular structures form the basis for every successful treatment.

    Unlocking the right protein and molecular structures is complex work: each step often takes years to get right, and our scientists consider billions of potential chemical structures that might lead to effective and safe drug candidates.

    AI offers us the chance to accelerate that process. Working with partners at Isomorphic Labs, including members of the Google DeepMind team who were awarded the Nobel Prize this year, we’re now able to do things like model how a protein folds and interacts with the molecules we design. AI models also make it possible for us to analyze different chemical structures simultaneously. That can add up to significant time savings for our drug development scientists as they work to predict which molecules might treat specific diseases, better and faster.

    We’re just at the beginning of what this technology can do. As we incorporate AI throughout Novartis’ work, I’m excited to see all the ways it helps us unlock the mysteries of human biology, so we can deliver better medicines that improve and extend patients’ lives.
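The screening step described above — scoring enormous numbers of candidate molecules and keeping only the promising ones — can be sketched as a simple filter-and-rank loop. This is a toy illustration only: the field names, scoring thresholds, and molecule records below are invented, and it is not Novartis’s or Isomorphic Labs’ actual pipeline.

```python
# Toy virtual-screening sketch: keep hypothetical candidate molecules whose
# *predicted* properties look drug-like, then rank by predicted binding
# strength. All numbers and field names here are invented for illustration.

def screen(candidates, affinity_cutoff=-8.0, max_toxicity=0.2):
    """Keep candidates predicted to bind strongly, be non-toxic, and dissolve."""
    viable = [
        m for m in candidates
        if m["predicted_affinity_kcal"] <= affinity_cutoff  # more negative = stronger binding
        and m["predicted_toxicity"] <= max_toxicity
        and m["predicted_solubility"] > 0.5
    ]
    # Strongest predicted binder first
    return sorted(viable, key=lambda m: m["predicted_affinity_kcal"])

candidates = [
    {"id": "mol-001", "predicted_affinity_kcal": -9.1,  "predicted_toxicity": 0.05, "predicted_solubility": 0.8},
    {"id": "mol-002", "predicted_affinity_kcal": -10.4, "predicted_toxicity": 0.60, "predicted_solubility": 0.9},
    {"id": "mol-003", "predicted_affinity_kcal": -8.3,  "predicted_toxicity": 0.10, "predicted_solubility": 0.7},
]
print([m["id"] for m in screen(candidates)])  # mol-002 fails the toxicity filter
```

In a real pipeline the "predicted" fields would come from learned models (e.g., structure-based affinity predictors) evaluated over billions of structures, which is where the time savings described in the post come from.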

  • Simon Philip Rost (Influencer)

    Chief Marketing Officer | GE HealthCare | Digital Health & AI | LinkedIn Top Voice

    45,218 followers

    AI in Healthcare Is Ready. Are We in Europe 🇪🇺? I believe Europe has the tech! But do we have the trust, infrastructure, or interoperability to scale it? A new EU-commissioned study reveals both the promise and the pain points of deploying AI in healthcare. It’s one of the most comprehensive looks at where we stand and what’s holding us back.

    Key takeaways:

    AI has proven potential to:
    • Reduce waiting times (e.g., 70% fewer ER transfer delays)
    • Triage patients faster (e.g., a 63-minute cut in ambulance response time)
    • Relieve admin burden (60% of doctors’ time goes into documentation)
    • Improve cancer detection, treatment planning, and equity of care

    But deployment remains slow due to:
    • Fragmented data and lack of interoperability
    • Outdated hospital IT infrastructure, especially in rural areas
    • Hesitancy over trust, liability, and lack of clear local performance testing
    • Limited post-deployment monitoring and transparency

    Regulatory momentum is building:
    • The AI Act, MDR/IVDR, and EHDS now lay the foundation for trust and transparency
    • Yet only 26% of hospitals feel ready to comply with these frameworks

    Here is why this matters for healthcare leaders: AI is here; that’s a fact. But unless we address the real-world bottlenecks, from digital infrastructure and workforce training to regulatory clarity, we risk missing its most meaningful impact: improving outcomes and alleviating burnout. It’s time to move from pilot projects to scalable transformation, with governance, guardrails, and co-creation at the core.

    It’s encouraging to be part of these developments with GE HealthCare; I can’t think of a better sector than healthcare in which to explore the potential and address the challenges of AI.

    What’s your biggest barrier to AI adoption: tech, trust, or talent? Let’s start the conversation.

  • Rubin Pillay, PhD, MD, MBA, MSc, BSc(Hon)Pharm

    Marnix E. Heersink Professor of Medicine, Assistant Dean, Executive Director, Chief Innovation Officer, Medical Futurist, Global Leader in AI in Healthcare, TEDx and Keynote Speaker

    8,590 followers

    We just ran the largest AI trial in NHS history. 205 primary care practices. 1.5 million patients. A stethoscope that detects heart failure, atrial fibrillation, and valvular heart disease in 15 seconds, with regulatory approval and strong clinical evidence behind it.

    The technology worked. The population-level outcomes didn't move. That gap is the most important story in health AI right now, and it has nothing to do with algorithms.

    The TRICORDER trial, just published in The Lancet, found that when clinicians actually used the AI stethoscope, they detected 2.33× more heart failure, 3.45× more atrial fibrillation, and nearly twice as much valvular heart disease. But 40% of practices had stopped using the device entirely within 12 months.

    Why? No EHR integration. Extra workflow steps. A 15-second recording that added minutes of friction to an already stretched consultation. Clinicians weren't hostile. They were exhausted. And when asked what would most improve uptake, they ranked workflow integration above financial incentives. They didn't want to be paid more to use it. They wanted it to stop getting in their way.

    This is the lesson the health AI field keeps learning, and keeps forgetting:
    → Regulatory approval is not adoption.
    → Algorithmic accuracy is not clinical impact.
    → Integration is not a feature. It is the product.

    The technology works. The potential is real. The gap between potential and reality is almost entirely an implementation problem. And implementation problems are solvable, if we fund them, study them, and take them as seriously as we take the algorithms.

    I've written about what TRICORDER really teaches us, and what needs to change if AI is going to deliver on its promise in health care. Read the full blog here: https://lnkd.in/eNSAsZw8

    #HealthAI #DigitalHealth #Innovation #HealthcareLeadership #ImplementationScience #AIinMedicine

  • Robert McElroy

    CEO at McElroy Global. Enabling the acceleration of lifesaving treatments to patients who need it most via AI.

    19,046 followers

    🚨 AI JUST HIT ROCHE’S EARNINGS CALL 🚨

    Roche’s Q3 2025 earnings call quietly revealed something bigger than a quarterly update: it showed where diagnostics is heading. They announced the Kidney Klinrisk Algorithm, an AI-driven risk stratification tool that just received its CE mark in Europe.

    This isn’t just a new test. It’s the start of a new category of diagnostics, where routine lab results, imaging, and patient data combine to predict risk before symptoms even appear.

    “By combining AI with routine tests, Roche helps physicians identify patients at risk of kidney function decline early on, enabling more informed and confident decision-making.”

    💡 The signal beneath the noise:
    ✅ AI + multi-modal data: fusing clinical, biomarker, imaging, and real-world evidence to find patterns humans can’t see.
    ✅ Biomarker-driven precision: identifying patient subgroups that respond differently, turning reactive testing into proactive insight.
    ✅ Data governance & traceability: building regulated, audit-ready data environments to support CE-marked and FDA-cleared algorithms.
    ✅ Speed to insight: automating model development pipelines so clinicians don’t wait months for answers that data could reveal in days.

    For an industry where diagnostics has been the slowest to digitize, this marks a real inflection point: from test results ➜ to algorithms ➜ to earlier, smarter interventions. Roche may have lit the spark, but the opportunity runs across the entire ecosystem. The companies that can unify multi-omics, imaging, and clinical data under a compliant, AI-ready framework will define the next era of precision medicine.
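The core idea here — turning routine serial lab results into a risk signal before symptoms appear — can be illustrated with a deliberately simple sketch: fit a linear trend to a patient's eGFR results and flag rapid decline. This is NOT the Roche Klinrisk algorithm (its inputs and model are not described in the post); the threshold and data below are illustrative only.

```python
# Toy risk stratification from routine labs: least-squares trend over serial
# eGFR results, flagging patients whose kidney function is falling fast.
# Invented data and an illustrative threshold; not a clinical tool.

def egfr_slope(readings):
    """Least-squares slope of eGFR (mL/min/1.73m^2) per year."""
    n = len(readings)
    xs = [t for t, _ in readings]   # time in years
    ys = [v for _, v in readings]   # eGFR values
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

def risk_flag(readings, rapid_decline=-5.0):
    """Flag 'high' when eGFR falls faster than ~5 units per year."""
    return "high" if egfr_slope(readings) <= rapid_decline else "routine"

# (time_in_years, eGFR) pairs — invented patient data
patient = [(0.0, 72), (0.5, 68), (1.0, 65), (1.5, 61)]
print(risk_flag(patient))  # declining ~7 units/year -> "high"
```

A production algorithm would combine many such signals (labs, demographics, medications) in a validated model; the point of the sketch is only that routine results already contain a predictive trend.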

  • Gary Monk (Influencer)

    LinkedIn ‘Top Voice’ >> Follow for the Latest Trends, Insights, and Expert Analysis in Digital Health & AI

    46,283 followers

    Astellas Pharma becomes the latest pharma giant to join Evinova's AI platform, following Bristol Myers Squibb and parent AstraZeneca in backing cross-industry clinical trial collaboration >>

    🔘 Three major pharma companies are now sharing operational clinical trial data with Evinova's AI platform, marking a rare moment of cross-industry collaboration in drug development
    🔘 The platform uses multi-agent AI to tackle one of pharma's most persistent problems: fragmented systems and manual processes that drag out timelines and inflate costs
    🔘 It converts protocols into machine-readable formats and generates optimized study designs in minutes, benchmarked across cost, timelines, patient experience, and even carbon footprint, replacing weeks of manual work
    🔘 A single clinical trial requires over 200 interconnected document types. AI authoring agents now handle intelligent recommendations across regulatory, scientific, and operational inputs, cutting costly protocol amendments
    🔘 Early results show savings of at least 5 to 7 percent per study, translating to hundreds of millions of dollars across a top-10 pharma portfolio
    🔘 The architecture is modular and cloud-native, letting organizations plug in their own AI models with built-in privacy and regulatory compliance across global markets

    💬 The broader signal here: clinical development is finally moving from a document-heavy, siloed process to an AI-first workflow, and the opt-in data sharing model could set a new industry standard for how sponsors learn from each other

    #digitalhealth #pharma #AI
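The "machine-readable protocol" idea above is worth making concrete: once a study design lives in a structured record rather than a Word document, software can validate, benchmark, and compare designs automatically. The sketch below is hypothetical — the field names, checks, and values are invented and are not Evinova's schema.

```python
# Sketch of a protocol as structured data instead of a document.
# A machine-readable design can be validated and serialized automatically;
# all fields and rules here are invented for illustration.
import json
from dataclasses import dataclass, asdict

@dataclass
class StudyDesign:
    phase: str
    arms: int
    target_enrollment: int
    primary_endpoint: str
    visits_per_patient: int

def validate(design: StudyDesign) -> list:
    """Return design issues a reviewer would otherwise catch by hand."""
    issues = []
    if design.arms < 2:
        issues.append("single-arm design: no concurrent comparator")
    if design.target_enrollment < 30 * design.arms:
        issues.append("enrollment may be too small per arm")
    return issues

design = StudyDesign("II", 2, 120, "change in eGFR at 52 weeks", 8)
print(json.dumps(asdict(design)))  # machine-readable, shareable, diffable
print(validate(design))
```

Multi-agent authoring systems of the kind the post describes would operate over far richer schemas (endpoints, eligibility criteria, visit schedules), but the enabling step is the same: structure first, automation second.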

  • Jyothish Nair

    Doctoral Researcher in AI Strategy & Human-Centred AI | Technical Delivery Manager at Openreach

    19,236 followers

    AI in healthcare is not simply another technology upgrade. It is a matter of trust, safety, and ultimately, human life.

    In many sectors, an AI error might lead to inconvenience or financial loss. In healthcare, an AI error can mean a missed diagnosis, an inappropriate treatment pathway, or avoidable harm. That is why AI adoption in healthcare must be held to a higher standard than in almost any other industry. It requires deeper validation, stricter governance, and human guardrails at every stage.

    A framework I find particularly helpful is 𝐀𝐈 + 𝐑𝐀𝐂𝐓, strengthened through a Human-Centred AI lens.

    𝐑 = 𝐑𝐞𝐚𝐝𝐢𝐧𝐞𝐬𝐬
    The risk begins long before deployment. If clinical data is incomplete, biased, or unrepresentative, AI systems can fail quietly, often affecting the most vulnerable populations first. Readiness must include:
    → Data integrity and provenance
    → Regulatory compliance
    → Clear clinical problem definition
    → Ethical and patient safety accountability

    𝐀 = 𝐀𝐝𝐨𝐩𝐭𝐢𝐨𝐧
    In healthcare, adoption is not about installing a tool; it is about integrating it into clinical judgment. The risk is over-reliance, alert fatigue, or the introduction of friction into already pressured workflows. Human-centred adoption means:
    → Clinicians remain firmly in the loop
    → AI outputs are explainable and challengeable
    → Training supports human-AI collaboration, not replacement

    𝐂 = 𝐂𝐚𝐩𝐚𝐛𝐢𝐥𝐢𝐭𝐲
    Healthcare AI is not static. Models drift, populations change, and clinical practice evolves. The risk is that a system that appears safe today may not remain safe tomorrow. Capability requires:
    → Continuous monitoring and evaluation
    → Governance structures spanning clinicians, data, ethics, and risk
    → Ongoing validation, not one-off approval

    𝐓 = 𝐓𝐫𝐚𝐧𝐬𝐟𝐨𝐫𝐦𝐚𝐭𝐢𝐨𝐧
    True transformation is not automation for its own sake. The risk of scaling without safeguards is amplified inequity, diminished patient trust, and decision-making that feels outsourced. Transformation must prioritise:
    → Better patient outcomes and experience
    → Equity across communities
    → Shared decision-making, supported, not replaced, by AI

    The central truth is this: 𝐇𝐞𝐚𝐥𝐭𝐡𝐜𝐚𝐫𝐞 𝐀𝐈 𝐢𝐬 𝐧𝐨𝐭 𝐜𝐨𝐧𝐬𝐮𝐦𝐞𝐫 𝐭𝐞𝐜𝐡𝐧𝐨𝐥𝐨𝐠𝐲. 𝐈𝐭 𝐢𝐬 𝐬𝐚𝐟𝐞𝐭𝐲-𝐜𝐫𝐢𝐭𝐢𝐜𝐚𝐥.

    Progress must be ambitious, but responsibility must be uncompromising. The question is not whether AI will shape the future of care. It is whether we shape it with the rigour, humility, and human focus that patients deserve.

    What is the single most important gate check you insist on before scaling AI in clinical environments?

    ♻️ Share if this resonates
    ➕ Follow (Jyothish Nair) for reflections on AI, change, and human-centred AI

    #ResponsibleAI #AI #DigitalTransformation #HumanCentredAI
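The "continuous monitoring" point under Capability — models drift as populations change — can be made concrete with a minimal check: compare the distribution of a model input in live data against the data the model was validated on. The data, feature, and alert threshold below are invented; real clinical monitoring would cover many features, outcomes, and patient subgroups.

```python
# Minimal drift-monitoring sketch: has a model input feature shifted away
# from the validation-time distribution? Invented data and threshold.
from statistics import mean, stdev

def mean_shift(baseline, live):
    """Shift of the live mean, measured in baseline standard deviations."""
    return abs(mean(live) - mean(baseline)) / stdev(baseline)

baseline_age = [34, 45, 52, 61, 47, 58, 40, 66, 53, 49]  # validation cohort
live_age     = [68, 72, 75, 66, 70, 74, 69, 71, 73, 67]  # much older population now

shift = mean_shift(baseline_age, live_age)
if shift > 2.0:  # illustrative alert threshold
    print(f"drift alert: input shifted {shift:.1f} SDs from validation data")
```

A system that was validated on the baseline cohort has no evidence of safety on the shifted one — which is exactly why the post argues for ongoing validation rather than one-off approval.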

  • Namita Thapar

    Founder, Arth by Emcure

    480,994 followers

    AI in Healthcare

    Sepsis is one of the largest causes of death in hospitals, with an estimated 11 million deaths per year. AI can help. After a patient checks into the emergency ward of a hospital, AI can look at some 150 patient variables, such as lab results, vital signs, current medications, medical history, and demographics, to predict the patient's risk of sepsis. Staying vigilant in this way has brought down sepsis incidence in hospitals!

    That is just one example of how AI can help in healthcare. A few more:

    DIAGNOSIS – GE is using generative AI for multi-modal integration from sources like imaging, genomics, and pathology to help a clinician with diagnosis. Another example is ischemic stroke, where an image has to be read by a radiologist quickly to identify the clot in the brain. AI can do this when radiologists are busy or limited in number. This speed in diagnosis can save lives.

    REMOTE PATIENT CARE – We know that there is a demand-and-supply mismatch in doctors and nurses. Monitoring devices with AI can notify healthcare professionals to visit the patient as and when needed, saving time. Such efficient remote care limits the number of days a patient has to spend in the hospital, reducing the cost of stay, which is very helpful for patients and insurance companies. AI-trained chatbots have shown the potential to answer patient questions when doctors are not available.

    DRUG DISCOVERY – With millions of people waiting for the approval of new medicines, bringing a drug to market still takes more than 10 years and costs over 1.9 billion euros on average. Merck has launched drug discovery software that identifies compounds from over 60 billion possibilities based on key properties like non-toxicity, solubility, and stability in the body. Insilico Medicine, a biotech company out of Hong Kong, is the first company whose AI-discovered drug has entered Phase II clinical trials in the US and China.

    CLINICAL TRIALS – AI can help in trials through patient recruitment (analysing patient health records to identify the most suitable candidates, thereby reducing recruitment time), patient monitoring (identifying adverse events or complications in real time), protocol design, trial site selection, predicting enrolment rates, data analysis (AI can often spot patterns and correlations that humans might miss), and cost efficiency, by automating much of the admin paperwork involved in trials.

    MANUFACTURING – AI can predict machine failure and schedule equipment maintenance before a breakdown occurs. It can inspect products and detect defects more accurately than humans, and it can help ensure timely delivery of raw materials by analysing and predicting typical delays due to logistics, weather, shortages, and so on.

    The way ahead: I have only skimmed the surface and covered a few areas above. There is no doubt that AI can transform healthcare in many ways; however, the challenges of data privacy and related ethics, prohibitive costs, and unclear regulations remain.
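The sepsis example above — combining many patient variables into a single risk prediction — can be sketched as a toy logistic-style score. Everything below (the variables chosen, the weights, the cutoffs) is invented for illustration; real systems use on the order of 150 inputs with validated, learned weights, not four hand-picked ones.

```python
# Toy sepsis-risk sketch: combine a few (invented) indicators into a
# probability-like score via a logistic function. Not a clinical model.
import math

def sepsis_risk(vitals):
    """Logistic combination of a handful of illustrative risk indicators."""
    score = (
        -6.0                                             # baseline (low risk)
        + 0.04 * max(vitals["heart_rate"] - 90, 0)       # tachycardia
        + 0.5  * max(vitals["resp_rate"] - 20, 0)        # tachypnoea
        + 0.8  * max(vitals["lactate_mmol"] - 2.0, 0)    # elevated lactate
        + 1.2  * (1 if vitals["temp_c"] > 38.3 or vitals["temp_c"] < 36.0 else 0)
    )
    return 1 / (1 + math.exp(-score))  # squash to a 0..1 risk value

patient = {"heart_rate": 118, "resp_rate": 26, "lactate_mmol": 3.4, "temp_c": 38.9}
risk = sepsis_risk(patient)
print(f"{risk:.2f}", "flag for review" if risk > 0.5 else "routine")
```

The clinical value comes not from the arithmetic but from running a validated version of this continuously on live data, so deteriorating patients surface before a human would have re-checked them.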

  • Montgomery Singman (Influencer)

    Managing Partner @ Radiance Strategic Solutions | xSony, xElectronic Arts, xCapcom, xAtari

    27,547 followers

    AI’s impact on medicine is no longer theoretical: it’s redefining daily clinical practice, medical research, and the very fabric of physician training.

    Breakthroughs like Google DeepMind’s AlphaFold2 have let researchers predict the structure of nearly every known protein, accelerating new drug development and igniting a wave of biotech innovation. AI models are now outperforming traditional methods in detecting cancer, forecasting disease progression, and driving efficiencies in active compound discovery.

    On the operational side, hospitals are leveraging large language models to automate clinical documentation and summarize complex records. The result: clinicians spend less time on paperwork and more time with patients, helping combat burnout and improve satisfaction on both sides.

    Medical education is also evolving. Universities such as Stanford and Mount Sinai are weaving AI training into their curricula, recognizing that tomorrow’s doctors need to master not only clinical knowledge but also the critical thinking to collaborate with AI tools effectively. Simulated surgical training, AI-powered feedback, and new pharmacy protocols show that the skill set for modern medicine is expanding, and institutions are responding accordingly.

    Caution is warranted: algorithmic bias, data privacy, and the need for robust validation remain real concerns. Yet the pace of deployment and the scope of benefit make clear that AI is not a distant disruptor; it’s a core enabler of the industry’s future.

    Now is the time for healthcare leaders, educators, and innovators to shape policies, invest in talent, and reimagine workflows. Let’s ensure that AI’s integration into medicine truly elevates care, training, and research for all. https://lnkd.in/gwi3htAJ

    #AIinMedicine #HealthcareInnovation #MedicalResearch #ClinicalAI #HealthTech #AIEducation #FutureOfMedicine #DigitalHealth #MedTech #HealthcareLeadership

  • Graham Walker, MD (Influencer)

    Healthcare AI — MDCalc & Offcall Founder — ER Doctor @ TPMG (views are my own, not employers’)

    66,906 followers

    Everyone’s worried about GenAI hallucinations. Fake facts, wrong doses, phantom studies. But what if the real danger is 𝘤𝘰𝘨𝘯𝘪𝘵𝘪𝘷𝘦 𝘪𝘯𝘧𝘦𝘤𝘵𝘪𝘰𝘯? What if GenAI subtly reshapes how your doctor thinks, what she assumes, how she defaults, and what she believes about you?

    A recent NYT piece showed what happens when models drift in long conversations: delusion, detachment, reality distortion. It made me realize we are NOT talking about this at all in medicine, despite evidence of broad GenAI usage already in clinical care. I’ve had these failure modes zipping around in my head, along with what they might look like if your doctor uses GenAI for everything:

    1️⃣ 𝗗𝗲𝗴𝗿𝗮𝗱𝗮𝘁𝗶𝗼𝗻 𝗼𝘃𝗲𝗿 𝘁𝗶𝗺𝗲. Like the NYT example: the longer the chat goes, the weirder it gets. Now picture a hospitalized patient with weeks of notes, each auto-drafted by the same GenAI tool. Does the model start suggesting snake oil or bizarre diagnostic nonsense?

    2️⃣ 𝗠𝗼𝗱𝗲𝗹 𝗽𝗼𝗹𝗮𝗿𝗶𝘇𝗮𝘁𝗶𝗼𝗻 𝗼𝘃𝗲𝗿 𝘁𝗶𝗺𝗲. Like #1, LLMs also appear to polarize over time. Imagine these models shaping how your doctor thinks about your goals of care based on subtle information earlier in the chat history, recommending either that everyone is full code or that everyone is DNR.

    3️⃣ 𝗢𝘃𝗲𝗿𝗹𝘆 𝗮𝗴𝗿𝗲𝗲𝗮𝗯𝗹𝗲, 𝘀𝘆𝗰𝗼𝗽𝗵𝗮𝗻𝘁𝗶𝗰 𝗯𝗲𝗵𝗮𝘃𝗶𝗼𝗿. ChatGPT loves telling me I’m brilliant. But medicine requires friction to learn. “Why appendicitis? What else could it be?” LLMs don’t push back unless you prompt them to, and that’s literally how trainees learn. So if a resident relies on GenAI, are they getting sharper, or just cheered on about their erred thinking?

    4️⃣ 𝗟𝗲𝘀𝘀 𝗰𝗿𝗲𝗮𝘁𝗶𝘃𝗲 𝗽𝗿𝗼𝗯𝗹𝗲𝗺-𝘀𝗼𝗹𝘃𝗶𝗻𝗴. LLMs don’t imagine. They autocomplete. They give you what’s most 𝙨𝙩𝙖𝙩𝙞𝙨𝙩𝙞𝙘𝙖𝙡𝙡𝙮 𝙡𝙞𝙠𝙚𝙡𝙮, and that’s not always what’s 𝙘𝙡𝙞𝙣𝙞𝙘𝙖𝙡𝙡𝙮 𝙣𝙚𝙚𝙙𝙚𝙙. So much of medicine is taking the unique patient in front of you and figuring it all out, using your knowledge and experience creatively to come up with a plan.

    5️⃣ 𝗥𝗲𝘀𝗶𝘀𝘁𝗮𝗻𝗰𝗲 𝘁𝗼 𝗻𝗲𝘄 𝗲𝘃𝗶𝗱𝗲𝗻𝗰𝗲. Say a paper drops tomorrow, like HIGH DOSE ANTIBIOTICS CURE MELANOMA. No guidelines yet. Just one perfect study. How long before your GenAI integrates it? Months? Years? Ever? How does the model learn new medical knowledge when it’s spouting off the most popular stuff?

    You know I’m not the “ban GenAI in healthcare” guy. I’m saying: build playbooks. Write safety protocols. (My god, healthcare is great at 𝘴𝘢𝘧𝘦𝘵𝘺 𝘱𝘳𝘰𝘵𝘰𝘤𝘰𝘭𝘴.) Pressure-test these tools like we would any other clinical intervention. Because if GenAI is getting implanted into the brainstem of medical practice, it had better not hallucinate, drift, or flatter us into snake oil.

    Tomorrow, I’ll share some thoughts about how we can actually build guardrails that work. (And not just for trainees; I think attendings are just as vulnerable.)

  • Ashok Chennuru

    Chief Data & Digital AI Transformation Officer | Elevance Health | Board Member | Advisor | Mentor

    14,318 followers

    Ambient AI is no longer a future concept in healthcare; it’s already reshaping how care is delivered. AI-enabled clinical documentation is changing how physicians experience technology, making it feel supportive rather than burdensome. By reducing the administrative load of documentation, clinicians can spend more time practicing medicine instead of managing systems. At the same time, clinical documentation, which has long been a source of friction, burnout, and risk, has the potential to become a powerful source of real-time clinical insight.

    At Elevance Health, we’re focused on applying digital technologies such as ambient and clinical insights responsibly, not just to document care, but to enable earlier intervention, better coordination, and more effective cost management. Several principles guide our approach:

    🚣 Move upstream: Embed payer intelligence, such as risk signals and care gaps, directly into clinical workflows rather than surfacing insights after the fact.
    🕵 Focus on moments that matter: Earlier detection of risk allows action before acute events occur.
    🩺 Keep humans in the loop: AI should support clinical decision-making, not replace clinical judgment.
    🔃 Reduce friction, not add it: Seamless data flow means less manual work for providers and faster, more comprehensive care.

    By integrating real-time clinical documentation with actionable insights, ambient AI can help surface relevant information at the moment of care, supporting more comprehensive diagnosis, improved coordination, and more affordable outcomes without increasing burden or compliance risk.

    The opportunity ahead isn’t about adding more AI tools. It’s about turning data into action at the right time, in the right workflow, for the right member. I look forward to continued collaboration across payers, providers, and technology partners as we shape what responsible, AI-enabled healthcare should look like.
