Risk Mitigation
Although Federated Learning (FL) is an ML paradigm designed to preserve the privacy of users and clients, data leaks and other threats can still arise because the model updates that participants upload to the network can encode information about their raw training data. This section describes these risks and vulnerabilities, and how we mitigate them.
Risks
We identify various attacks that target FL and categorize them according to the phases of the FL process where they pose the greatest risk. We divide the FL process into four phases: (1) training, (2) parameter exchange, (3) parameter aggregation, and (4) prediction.
Data poisoning attacks [training] aim to degrade the model's performance by corrupting the training data, for example through label flipping or backdoor updates (see the sketch after this list).
Model poisoning attacks [training, parameter exchange, parameter aggregation] let malicious clients or external adversaries manipulate the FL training procedure by directly altering model gradients or parameters.
Inference attacks [parameter exchange, parameter aggregation, prediction] attempt to infer sensitive information about participants, their training data, and labels; they can be mounted using generative adversarial networks (GANs).
Byzantine attacks aim to degrade or prevent the convergence of the global model through compromised clients that send arbitrary or malicious updates.
Evasion attacks [prediction] are intended to deceive the target model by creating adversarial samples during the prediction phase.
Free-riding attacks occur when participants benefit from the global model while contributing little or nothing to its training.
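As a concrete illustration of the first attack in this list, the sketch below flips a fraction of one class's labels before local training, simulating a poisoned client. It is a minimal illustration only; the function name, class indices, and toy label array are hypothetical.

```python
import numpy as np

def flip_labels(y, source_class, target_class, fraction=1.0, seed=0):
    """Label-flipping poisoning: relabel a fraction of `source_class`
    samples as `target_class` before local training starts."""
    rng = np.random.default_rng(seed)
    y_poisoned = y.copy()
    candidates = np.where(y == source_class)[0]
    n_flip = int(len(candidates) * fraction)
    flipped = rng.choice(candidates, size=n_flip, replace=False)
    y_poisoned[flipped] = target_class
    return y_poisoned

# A malicious client relabels every sample of class 1 as class 7.
y_local = np.array([1, 7, 1, 3, 1, 7])
print(flip_labels(y_local, source_class=1, target_class=7))  # [7 7 7 3 7 7]
```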
Mitigation Techniques
To mitigate these risks, we utilize a hybrid protection approach incorporating several techniques:
Differential Privacy (DP)
Differential Privacy protects users' privacy in published data by adding randomly generated noise to data samples or model updates: intentionally introducing random variations masks each individual's contribution while still allowing the overall model to be trained accurately.
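As a minimal sketch of how a client could privatize its update before upload, the function below clips the update's L2 norm and adds Gaussian noise, in the style of the Gaussian mechanism. The function name and the `clip_norm` and `noise_multiplier` parameters are illustrative assumptions; a real deployment would also track the cumulative privacy budget (ε, δ) with a privacy accountant.

```python
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_multiplier=1.1, seed=None):
    """Gaussian-mechanism sketch: bound one client's influence by clipping
    the update's L2 norm, then mask it with calibrated Gaussian noise."""
    rng = np.random.default_rng(seed)
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise

# A client privatizes its model delta before sending it to the aggregator.
delta = np.array([0.8, -0.3, 1.5])
print(privatize_update(delta, seed=42))
```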
Multi-Layered Anti-Poisoning Mechanism
To strengthen our system against poisoning and Sybil attacks, we incorporate a multi-layered defense strategy that integrates the FoolsGold defense protocol, Multi-Krum, and FedG2L. FoolsGold mitigates Sybil attacks in federated learning by weighting contributions according to their diversity: it reduces the influence of updates that are suspiciously similar to one another, since these are likely to come from Sybil attackers using replicated or slightly modified data. Multi-Krum selects the most reliable subset of updates under the assumption that the majority of clients are honest: it scores each update by the sum of its distances to its nearest neighbors and keeps the updates with the lowest scores. FedG2L reduces the impact of poisoning attacks during the global model aggregation phase by employing a gradient-similarity-based consensus algorithm, which aims to discard malicious gradients and update the model without incorporating poisoned data.
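The following is a minimal sketch of the Multi-Krum selection rule, assuming the aggregator receives n flattened updates and tolerates at most f Byzantine clients; the function name and the toy data are illustrative.

```python
import numpy as np

def multi_krum(updates, f, m):
    """Multi-Krum sketch: score each update by the sum of squared distances
    to its n - f - 2 nearest neighbors, then average the m best-scored ones.
    `updates` is an (n, d) array; the rule assumes n >= 2f + 3."""
    n = len(updates)
    k = n - f - 2  # neighbors considered per update
    diffs = updates[:, None, :] - updates[None, :, :]
    dists = np.sum(diffs ** 2, axis=-1)  # pairwise squared distances
    scores = np.empty(n)
    for i in range(n):
        neighbor_dists = np.sort(np.delete(dists[i], i))
        scores[i] = neighbor_dists[:k].sum()
    selected = np.argsort(scores)[:m]
    return updates[selected].mean(axis=0), selected

# Nine benign updates near 1.0 plus one large outlier that the rule rejects.
rng = np.random.default_rng(0)
ups = np.concatenate([rng.normal(1.0, 0.1, (9, 4)), [[50.0] * 4]])
aggregate, kept = multi_krum(ups, f=1, m=5)
print(kept)  # indices of the five selected (benign) updates
```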

Secure Multiparty Computation (SMPC)
Secure Multiparty Computation (SMPC) is a cryptographic technique that enables multiple parties to jointly compute a function over their private inputs without revealing those inputs to one another. Federated Learning can itself be viewed as a form of multiparty computation, and when frameworks and techniques such as Differential Privacy, Homomorphic Encryption, and SMPC protocols are applied to it, FL becomes a secure multiparty computation in its own right.
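To make the idea concrete, below is a minimal additive-secret-sharing sketch over a finite field, in which each client splits its (fixed-point-encoded) update values into random shares so that no single aggregator ever sees an individual contribution, yet the sum of all updates is recovered exactly. The function name, modulus, and toy values are illustrative assumptions; production systems use dedicated protocols such as secure aggregation.

```python
import numpy as np

PRIME = 2**31 - 1  # field modulus for this toy example

def share(secret, n_parties, rng):
    """Additive secret sharing over Z_p: split `secret` into n random shares
    whose sum mod p recovers it; fewer than n shares reveal nothing."""
    shares = rng.integers(0, PRIME, size=n_parties - 1)
    last = (secret - shares.sum()) % PRIME
    return list(shares) + [last]

# Three clients secret-share one quantized update value each.
rng = np.random.default_rng(7)
secrets = [1234, 5678, 9012]
all_shares = [share(s, 3, rng) for s in secrets]
# Aggregator j sums only the j-th share of every client...
partials = [sum(cs[j] for cs in all_shares) % PRIME for j in range(3)]
# ...so combining the partial sums reveals only the total, never any input.
print(sum(partials) % PRIME, sum(secrets))  # 15924 15924
```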
The Blockchain
One of the most powerful tools in our arsenal is the blockchain itself. By employing a decentralized, immutable, and transparent ledger as the coordinator in FL, combined with the mitigation techniques above, we overcome major security and privacy risks. Blockchain proves to be the ultimate complementary technology to federated learning, enabling secure and privacy-preserving AI.