Privacy-Preserving Federated Learning for Cross-Institution Anti-Money-Laundering Models

Authors

  • Kevin Clark, College of Computing, Georgia Institute of Technology, Atlanta, GA 30332, USA

DOI:

https://doi.org/10.71465/fbf553

Keywords:

Federated Learning, Anti-Money Laundering, Differential Privacy, Financial Crime Detection

Abstract

The proliferation of digital financial transactions has precipitated a commensurate rise in sophisticated financial crimes, specifically money laundering, which imposes significant stability risks on the global economic framework. Traditional Anti-Money Laundering (AML) systems, predominantly relying on rule-based engines or isolated machine learning models within single institutions, fail to capture the complex, cross-institutional topology of modern laundering networks. While collaborative learning offers a theoretical solution, strict data privacy regulations such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) inhibit the centralized aggregation of sensitive transaction data. This paper presents a comprehensive framework for Privacy-Preserving Federated Learning (PPFL) tailored specifically for AML applications. We propose a novel architecture that integrates Differential Privacy (DP) with Secure Multi-Party Computation (SMPC) to enable financial institutions to collaboratively train robust Deep Neural Networks (DNNs) without sharing raw transaction ledgers. Furthermore, we address the challenge of non-Independent and Identically Distributed (non-IID) data, a characteristic inherent to the heterogeneous customer bases of different banks. Our experimental results demonstrate that the proposed framework achieves detection rates comparable to centralized training baselines while mathematically guaranteeing data privacy, thereby resolving the dilemma between regulatory compliance and effective financial crime detection.
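To make the privacy mechanism described above concrete, the sketch below shows one round of federated averaging in which each institution's model update is clipped and perturbed with Gaussian noise before aggregation, in the style of differentially private FedAvg. This is an illustrative sketch only, not the paper's implementation; the function names and parameters (`clip_norm`, `noise_multiplier`) are assumptions, and the SMPC layer for secure aggregation is omitted.

```python
# Illustrative sketch of DP-protected federated averaging (not the paper's code).
# Each client clips its update to a fixed norm and adds Gaussian noise; the
# server then averages the privatized updates (FedAvg-style aggregation).
import numpy as np

def dp_clip_and_noise(update, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip a client's model update to clip_norm, then add Gaussian noise.

    clip_norm and noise_multiplier are assumed hyperparameters; in practice
    they would be chosen to meet a target (epsilon, delta) privacy budget.
    """
    rng = rng or np.random.default_rng(0)
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise

def federated_round(global_weights, client_updates):
    """Server step: average the privatized client updates and apply them."""
    privatized = [dp_clip_and_noise(u) for u in client_updates]
    return global_weights + np.mean(privatized, axis=0)

# Three simulated institutions each contribute a local model update.
w = np.zeros(4)
updates = [
    np.array([0.5, -0.2, 0.1, 0.3]),
    np.array([2.0, 1.0, -1.0, 0.5]),   # large update gets clipped
    np.array([0.1, 0.1, 0.1, 0.1]),
]
new_w = federated_round(w, updates)
```

Because only clipped, noised updates leave each institution, the server never observes raw gradients, let alone transaction ledgers; in the full framework, SMPC would additionally hide individual privatized updates from the server, which sees only their sum.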

Published

2025-12-31