
Industry Use Cases

Federated Learning is being adopted across industries where data cannot be centralized due to privacy, regulatory, or competitive constraints. NVIDIA FLARE provides the platform infrastructure to enable these deployments.

Healthcare & Life Sciences

Federated learning enables multi-hospital collaboration for medical AI model development without sharing patient data.

Key applications:

  • Medical image analysis (radiology, pathology, ophthalmology)
  • Drug discovery and molecular property prediction
  • Electronic health records (EHR) analysis
  • Clinical trial optimization
  • Genomics and precision medicine

Why federated? HIPAA, patient consent, and institutional data governance policies prevent centralized data aggregation. Federated learning allows hospitals to train better models together while keeping patient data local.

Cancer AI Alliance (CAIA): The Cancer AI Alliance -- a consortium of leading cancer centers -- uses NVIDIA FLARE with Rhino Federated Computing Platform to train AI models across multiple institutions while keeping sensitive patient data behind each center's firewall. Only model weights are exchanged; cancer centers retain full control over data selection, access policies, and local execution. The platform supports differential privacy and model encryption for projects requiring enhanced protection.
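The weights-only exchange described above can be sketched as a single federated-averaging (FedAvg) aggregation step. This is an illustrative sketch in plain Python, not NVIDIA FLARE's actual API; the site names and weight values are hypothetical.

```python
# Hypothetical model weights reported by three cancer centers after local
# training. Only these numbers leave each site; the patient data never does.
site_weights = {
    "center_a": [0.10, 0.30, 0.50],
    "center_b": [0.20, 0.40, 0.60],
    "center_c": [0.30, 0.50, 0.70],
}

def fedavg(updates):
    """Element-wise average of the weight vectors across sites (plain FedAvg)."""
    n = len(updates)
    return [sum(ws) / n for ws in zip(*updates)]

global_weights = fedavg(list(site_weights.values()))
print(global_weights)  # approximately [0.2, 0.4, 0.6]
```

In a real deployment the aggregator can also apply differential-privacy noise or operate on encrypted weights, as noted above, before broadcasting the new global model back to the sites.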

Eli Lilly TuneLab -- Federated Drug Discovery: In September 2025, Eli Lilly launched TuneLab, an AI/ML platform that gives biotech companies access to drug discovery models trained on over $1 billion of Lilly's research data. The platform uses federated learning so that biotech partners can fine-tune Lilly's models on their own proprietary molecular data without exposing it. In return, partners contribute training data that continuously improves the shared models for the entire ecosystem.

Federated AI for Therapeutic Engineering (FAITE) -- AbbVie, Amgen, AstraZeneca, J&J, UCB: Launched in 2025, FAITE is a cross-industry biopharmaceutical consortium that uses federated and active learning to train models for predicting biologics properties. Member companies contribute training on local proprietary molecular data without sharing it, enabling collaborative model improvement while maintaining competitive and regulatory data boundaries.


Financial Services

Financial institutions use federated learning for fraud detection, credit risk modeling, and anti-money laundering without exposing sensitive transaction data.

Key applications:

  • Fraud detection across payment networks
  • Credit scoring with broader data representations
  • Anti-money laundering (AML) model training
  • Market risk analysis

Why federated? Banking regulations (SOX, GDPR, PSD2), FINRA rules, and competitive sensitivity prevent sharing transaction data between institutions.

Swift Collaborative Fraud Defence: In September 2025, Swift partnered with 13 global banks -- including ANZ, BNY, and Intesa Sanpaolo -- to test federated learning for cross-border fraud detection. Using privacy-enhancing technologies, participating institutions trained AI models locally on their own data without sharing customer information. In trials involving 10 million artificial transactions, the collaborative federated model was twice as effective at detecting known fraudulent transactions compared to models trained on a single institution's data alone.


JP Morgan, BNY, and RBC -- Federated Financial AI: At GTC 2025, JP Morgan, BNY, and Royal Bank of Canada (RBC) presented their experiences applying federated learning to financial AI models, covering cross-institutional model training for risk and fraud use cases without sharing sensitive customer transaction data.

Government & National Security

Government agencies and national laboratories use federated learning to collaboratively train AI models on sensitive, geographically distributed datasets that cannot be centralized due to classification, data sovereignty, or security constraints.

Trilab Federated AI (Sandia, Los Alamos, Lawrence Livermore): In 2025, the three NNSA national security laboratories demonstrated a federated-learning prototype -- codenamed Chandler -- that trains a shared large language model across three geographically distributed classified systems without exchanging raw data. Using NVIDIA FLARE to orchestrate training, the labs exchange only model weights (parameters) between epochs while keeping each laboratory's unique datasets local. The prototype ran on both NVIDIA and AMD GPU hardware, including Lawrence Livermore's El Capitan, the world's fastest supercomputer.
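The between-epoch weight exchange can be illustrated with a toy training loop: each site takes a gradient step on its own local data, and only the resulting weights are averaged. Everything here — the one-dimensional model, the per-lab data values, and the learning rate — is a hypothetical stand-in; FLARE's real orchestration handles far more (job scheduling, secure channels, heterogeneous hardware).

```python
# Toy 1-D model: each lab fits w to minimize (w - x)^2 on its local value x.
# Only w travels between sites; the x values stay local, mimicking the
# weights-only exchange in the Trilab prototype. All numbers are hypothetical.
local_data = {"lab_a": 1.0, "lab_b": 2.0, "lab_c": 6.0}
lr = 0.1
w_global = 0.0

for epoch in range(20):
    # Each lab trains locally: one gradient step of d/dw (w - x)^2 = 2*(w - x).
    local_weights = []
    for x in local_data.values():
        w = w_global - lr * 2.0 * (w_global - x)
        local_weights.append(w)
    # The server aggregates weights only; no raw data crosses site boundaries.
    w_global = sum(local_weights) / len(local_weights)

print(round(w_global, 3))  # approaches 3.0, the mean of the local data
```

The global weight converges toward the value the labs would have reached by pooling their data, even though no lab ever sees another's dataset — the property that makes this pattern viable across classified, geographically separated systems.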

"Federated training is a critical tool to delivering a robust capability in a cost effective, performant and secure way." -- Si Hammond, NNSA Office of Advanced Simulation and Computing


Oak Ridge National Laboratory (ORNL) -- OLCF Scientific Research: The Oak Ridge Leadership Computing Facility (OLCF) uses NVIDIA FLARE for federated learning across distributed scientific datasets, enabling multi-site collaboration on large-scale scientific computing research without centralizing sensitive experimental data.

Taiwan International Federated Learning Center (Ministry of Health and Welfare): In January 2026, Taiwan's Ministry of Health and Welfare announced the establishment of an International High-Computing and Federated Learning Center for training smart medical AI models while preserving data privacy and sovereignty. The center completed proof of concept with 16 major hospitals and plans to scale to 100 regional hospitals and ultimately all Taiwanese hospitals. Medical data remains on local hospital servers while the central AI model learns from distributed data through federated learning. The center is also facilitating international collaboration with Thailand's Mahidol University to jointly develop standards for AI-based medical product verification across ASEAN markets.


Transportation

Federated Learning for Autonomous Vehicles: Automotive manufacturers and research teams use federated learning to train perception and safety models across distributed vehicle fleets and test facilities, improving model quality without centralizing proprietary driving data.

Edge AI & Scientific Computing

NVIDIA Holoscan Federated Analytics at the Edge: NVIDIA Holoscan enables federated analytics at the edge for medical devices and industrial AI applications, allowing inference data to be used for federated model improvement without transmitting raw sensor streams to a central server.

NVIDIA Data Federation Mesh -- Federated Data Processing in Scientific Computing: The NVIDIA Data Federation Mesh demonstrates federated data processing pipelines for large-scale scientific computing, enabling distributed analysis across facilities without moving raw scientific datasets.

FLARE Day -- Real-World Deployments

FLARE Day is an annual event showcasing real-world federated learning deployments across healthcare, finance, autonomous driving, and more. These talks feature practitioners sharing production experiences and lessons learned.

  • FLARE Day 2026 -- Coming September 2026
  • FLARE Day 2025 -- Real-world FL applications in healthcare, finance, autonomous driving, and more
  • FLARE Day 2024 -- Talks and demos featuring real-world FL deployments at NVIDIA, healthcare institutions, and industry partners

Getting Started with Your Industry

Regardless of your industry, the path to federated learning follows a similar pattern:

  1. Identify the use case -- What ML model do you want to improve with federated data?
  2. Start with simulation -- Use the :ref:`FL Simulator <fl_simulator>` to prototype with synthetic data
  3. Prove value with POC -- Run a :ref:`POC deployment <poc_command>` with 2-3 participating sites
  4. Scale to production -- Follow the :doc:`Deployment Guide <user_guide/admin_guide/deployment/overview>` for provisioning and infrastructure

For questions about industry-specific deployments, see the :doc:`publications_and_talks` page for talks and papers relevant to your domain.