
Towards Trusted, Confidential AI: The Delegated Federated Learning Approach

June 6, 4:05 PM - 4:25 PM
Grand Ballroom Salon B

Imagine a vast ocean of information. That's the amount of data we're generating today. Combined with breakthroughs in Deep Learning and Natural Language Processing, this data fuels the creation of incredibly powerful AI models such as Large Language Models (LLMs), which are trained on massive corpora, often hundreds of gigabytes in size.

But there's a catch.

Training these models often involves sensitive data, such as patient health records, financial data, or intellectual property. This raises a critical question: how can we leverage this powerful AI technology without compromising data privacy?

Federated Learning (FL) offers a promising solution. In FL, multiple devices, such as phones or computers, collaborate to train a shared model. The raw data never leaves the devices during training, which protects each user's privacy.
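To make this concrete, here is a minimal sketch of one FL scheme, FedAvg-style weight averaging. Everything in it (the one-parameter linear model, the learning rate, the synthetic per-device datasets) is an illustrative assumption, not part of any particular FL framework: each device trains locally on its own private samples, and only the resulting model weights are sent back for averaging.

```python
import random

def local_update(w, data, lr=0.01, epochs=5):
    """One device's local training: plain SGD on the toy model y = w * x.
    Only w is ever shared; the (x, y) samples stay on the device."""
    for _ in range(epochs):
        for x, y in data:
            w -= lr * 2 * (w * x - y) * x
    return w

def federated_round(global_w, device_datasets):
    """Server step: average the devices' locally trained weights (FedAvg)."""
    local_ws = [local_update(global_w, d) for d in device_datasets]
    return sum(local_ws) / len(local_ws)

random.seed(0)
# Three devices, each holding private, noisy samples of the same trend y = 3x.
devices = [[(x, 3 * x + random.gauss(0, 0.1)) for x in range(1, 6)]
           for _ in range(3)]

w = 0.0
for _ in range(10):
    w = federated_round(w, devices)
print(w)  # the averaged weight approaches the true slope, 3
```

The server only ever sees the scalar weights returned by `local_update`, never the devices' samples, which is the core privacy property FL provides.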

However, FL requires participating devices to exchange messages containing model updates, which creates a security risk: malicious actors could extract sensitive information from these updates. We need a way to keep them private and unreadable by anyone involved. Mitigations such as Differential Privacy, Fully Homomorphic Encryption, and Multi-Party Computation exist, but they can significantly hurt the model's performance.
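The performance cost is easiest to see with Differential Privacy. The sketch below illustrates a Gaussian-mechanism-style treatment of a single shared update; the clipping bound and noise scale are hypothetical values chosen for illustration, not calibrated to any privacy budget.

```python
import random
import statistics

def privatize(update, clip=1.0, noise_std=0.5, rng=random):
    """Clip the update's magnitude, then add Gaussian noise before sharing.
    The noise hides any one user's contribution, but also corrupts the signal."""
    clipped = max(-clip, min(clip, update))
    return clipped + rng.gauss(0, noise_std)

rng = random.Random(42)
true_update = 0.8
noisy = [privatize(true_update, rng=rng) for _ in range(1000)]

# Any single privatized update is a poor estimate of the true one; only a
# large aggregate recovers it, which is exactly the accuracy/privacy trade-off.
print(statistics.mean(noisy))
```

A server averaging many such updates still learns the overall trend, but each individual message reveals little, at the price of slower, noisier training.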

Confidential computing (CC) protects sensitive information even while it is in use, so that no other party, not even the machine's operator, can access it. Unfortunately, most consumer-grade devices do not support CC yet.

Imagine a way to let people harness the power of Federated Learning and confidential computing even though they don't own a CC-enabled device. This is where Delegated Federated Learning comes into play: in this framework, users contribute their encrypted data while remote worker nodes perform the actual Federated Learning inside Trusted Execution Environments (TEEs). This way, workers perform the training while never gaining access to either the training data or the shared model.
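The flow above can be sketched in a few lines. Everything here is illustrative: the `Enclave` class merely simulates a TEE, the XOR stream cipher stands in for real authenticated encryption, and in practice the key would be provisioned to the enclave through remote attestation rather than shared in-process. None of these names reflect the iExec API.

```python
import hashlib
import json
import secrets

def keystream(key, n):
    """Derive a pseudo-random keystream from the key (toy stream cipher)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor_encrypt(key, data):
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

xor_decrypt = xor_encrypt  # an XOR stream cipher is its own inverse

class Enclave:
    """Stands in for a TEE: it alone holds the key and sees plaintext."""
    def __init__(self, key):
        self._key = key

    def train(self, encrypted_blob):
        samples = json.loads(xor_decrypt(self._key, encrypted_blob))
        # "Training" here is just averaging the labels inside the enclave.
        model = sum(y for _, y in samples) / len(samples)
        return xor_encrypt(self._key, json.dumps(model).encode())

# User side: encrypt private samples before delegating them.
key = secrets.token_bytes(32)
private_data = [[1, 2.0], [2, 4.0], [3, 6.0]]
blob = xor_encrypt(key, json.dumps(private_data).encode())

# Worker side: hosts the enclave but only ever handles ciphertext.
enclave = Enclave(key)
encrypted_model = enclave.train(blob)

# User side: decrypt the resulting model with their own key.
model = json.loads(xor_decrypt(key, encrypted_model))
print(model)  # prints 4.0; the worker process never saw the plaintext
```

The essential property is that both the input data and the returned model cross the worker's boundary only in encrypted form; decryption happens exclusively inside the (here simulated) enclave.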

In this presentation, we will show an initial design and proof of concept of a Delegated Federated Learning framework based on the iExec decentralized cloud computing marketplace. The framework relies on blockchain technology and several TEE technologies (Intel SGX, Intel TDX, and NVIDIA Hopper Confidential Computing) to implement an industry-grade Trusted and Confidential AI platform.

We believe that Delegated Federated Learning has the potential to revolutionize the field of Confidential AI, opening doors to a future where anyone can contribute to powerful and privacy-preserving AI development.

About the speaker

Anthony Simonet

Head of Research and Innovation, iExec

Dr. Anthony Simonet-Boulogne is the Head of Research and Innovation at iExec Blockchain Tech. He has been active in big data, distributed computing, and cloud computing research, with several publications in high-impact academic conferences and journals.

Anthony received his Ph.D. in Computer Science from the École Normale Supérieure de Lyon (France) in 2015 and his Master's degree from the University of Bordeaux. Before joining iExec, he was a Post-Doctoral Associate with Inria (France) and the Rutgers Discovery Informatics Institute (RDI2) at Rutgers University (New Jersey, USA). His research spans distributed computing, cloud and fog/edge computing, and blockchain, and he has contributed to several projects on confidential and trusted computing for big data and IoT applications.