Highlights From Confidential Computing
Confidential Computing and the Solution to Privacy-Preserving Generative AI
Raluca Ada Popa, Associate Professor, CS, UC Berkeley & Co-founder Opaque Systems
Raluca Ada Popa opens the Summit by discussing the factors driving the confidential computing market's considerable growth from 2023 to 2029, including the rise of data privacy laws and the emergence of new use cases across every industry. Ada Popa also highlights recent ground-breaking innovation across every layer of the confidential computing technology stack, confidential computing's position in the larger Privacy Enhancing Technology (PET) landscape, and why confidential computing is the next frontier in data security.
The Mission of the Confidential Computing Consortium and Driving Adoption of Confidential Computing
Mike Bursell, Executive Director, Confidential Computing Consortium
Mike Bursell speaks to the shared mission of the 45+ member organizations in the Confidential Computing Consortium, the CCC’s support of new open source standards and projects relating to confidential computing such as Keystone, Veracruz, Gramine and Occlum and how the CCC helps accelerate the acceptance and adoption of confidential computing.
Confidential Computing and Zero Trust
Vikas Bhatia, Head of Products, Microsoft Azure Confidential Computing
Zero Trust is top of mind for many organizations. Confidential computing supports Zero Trust in ways that can only be accomplished by industry collaboration between chip manufacturers, software platforms, and cloud providers. Learn how this collaboration is extending Zero Trust to data-in-use and letting organizations assume breaches occur in all components outside a tightly controlled and attested trusted computing base.
The Urgency for Confidential Computing from Zero-Trust Data Clean Rooms to Privacy-Preserving Generative AI
Rishabh Poddar, CEO & Co-founder Opaque Systems
Teresa Tung, Cloud First CTO, Accenture
Poddar discusses and demonstrates how to protect the confidentiality of organizational data, with a focus on privacy-preserving generative AI. He also details how multiple organizations can easily and securely analyze their combined confidential data with Zero Trust Data Clean Rooms without sharing or revealing the underlying raw data. Accenture's Cloud First CTO, Teresa Tung, shares case studies of Opaque's use for advanced, collaborative analytics in multi-party data clean rooms.
Panel: Generative AI Security and Privacy
Raluca Ada Popa, Associate Professor, CS, UC Berkeley & Co-founder Opaque Systems
Vikas Bhatia, Head of Product, Azure Confidential Computing (ACC)
Phil Rogers, Compute Server Architect at NVIDIA
Professor Joseph Gonzalez, UC Berkeley
Moderator: Ben Lorica, Host of The Data Exchange podcast; past Program Chair of Strataconf.com, TheAIconf.com, and Tensorflow.world.
Overcoming Barriers to Confidential Computing as a Universal Platform
John Manferdelli, Office of the CTO, VMware
Confidential computing provides simple, principled confidentiality and integrity for workloads wherever they run. Within multi-cloud infrastructures, it opens the door for a universal distributed computing solution that addresses verifiable program isolation, programs as authenticated security principals, secure key management, trust management, and the ability to prove these security properties cryptographically "over the wire" to relying parties using attestation. Yet the adoption of confidential computing has been slowed by the difficulty of writing CC-enabled programs quickly and securely, and across hardware technologies. Manferdelli will describe issues and requirements for a universal programming platform and introduce the open source “Certifier Framework for Confidential Computing” that provides a step towards overcoming development barriers.
Panel: Trending Applications and Use Cases in Confidential Computing*
Ron Perez, Fellow, Chief Security Architect, Intel Office of CTO
Nelly Porter, Head of Product, GCP Confidential Computing and Encryption
Vishal Gossain, Practice Leader, Risk Analytics and Strategy, Ernst & Young
Andrew Brust, Moderator, Founder & CEO, Blue Badge Insights
Panelists, including Intel's Chief Security Architect from the Office of the CTO, the Head of Product for GCP Confidential Computing, and experts from EY, will discuss practical applications of confidential computing across banking, healthcare, insurance, blockchain, AdTech, supply chain, and more. Panelists will discuss the trends and nuances across use cases, speak to examples, and cover multi-party computing, confidential AI, multi-party analytics, data clean rooms, and more.
*A. Any views expressed by EY speakers are their own and don't necessarily represent the views of their employers.
B. EY is participating under the Chatham House Rule.
Confidential Computing as a Cornerstone for Cybersecurity Strategies and Compliance
Xochitl Monteon, Chief Privacy Officer & VP Cybersecurity Risk & Governance, Intel
With the growth of new government-driven cybersecurity strategies and continued expansion of global regulations, organizations are facing increased pressure to transform while still protecting their sensitive data. We will explore how confidential computing technology, rooted in secure clouds and hardware, is uniquely suited today to help organizations meet these new cybersecurity requirements and our vision for tomorrow’s emerging data landscape.
Trusted Execution Environments and Private Messaging
Rolfe Schmidt, Senior Researcher, Signal Messenger
Rolfe works to bring security research into engineering practice at Signal Messenger, the world's most widely used truly private communications app. Keeping metadata private is just as important as protecting message contents, and in some cases it is more important given the intimate details that metadata can expose. Ideally, privacy is accomplished using cryptography to ensure that sensitive data never leaves a user device, but this isn’t always feasible. Attested, confidential TEEs offer another option. This talk will look at how Signal Messenger is using them as one part of a defense-in-depth strategy to offer a fully featured app that provides metadata privacy at a global scale.
Why Organizations Are Investing in Private Multi-Party Analytics
Ion Stoica, Professor, UC Berkeley; Executive Chairman, Anyscale; Executive Chairman, Databricks; Board Member, Opaque Systems
Ion Stoica, co-founder of Databricks and the Spark open source project, co-founder of Anyscale and the Ray open source project, co-founder of Opaque Systems, and co-creator of the Spark-based MC2 open source project for confidential computing, speaks to the criticality of confidential, private multi-party analytics and machine learning. Stoica highlights why organizations need it, the use cases that demand multi-party analytics and ML, and what's driving the increasing urgency.
Panel: The Surging Demand for Data Clean Rooms. Why Now?
Frank Badalamenti, Partner, Cyber, Risk and Anti-Fraud Technologies, PwC
Rishabh Poddar, CEO, Opaque Systems
Abhishek Chakraborty, Senior Product Manager, MiQ
From media and ad companies to packaged-goods firms and marketers, data clean rooms have become a necessity for ensuring data privacy, protecting PII, and enabling collaboration across multiple parties on confidential data. Understand the latest technologies, hear about use cases, and find out what's driving the surging demand.
VMs Are The Next Perimeter
Dr Jethro Beekman, VP Technology & CISO, Fortanix
Hardware manufacturers and infrastructure providers are growing an ever-denser forest of confidential computing options. It can be hard to understand the inherent security improvements offered by each of these. As the constant stream of data breaches in the news shows, perimeter security as a security baseline isn’t cutting it anymore. Trying to secure VM deployments, as commonly used today, directly with confidential computing moves the perimeter but doesn’t fundamentally change the security of data in use. In this talk, we'll look at the pitfalls and what it takes to truly separate security and infrastructure.
Citadel: Side-Channel-Resistant Enclaves on an Open-Source, Speculative, Out-of-Order Processor
Srini Devadas, Webster Professor of EECS, MIT
Citadel is a side-channel-resistant enclave platform running on a speculative, out-of-order, multicore processor with the RISC-V ISA. We developed a new hardware mechanism to prevent enclaves from speculatively accessing shared memory, effectively protecting them from speculative attacks. Our multicore processor runs on an FPGA and boots untrusted Linux. We open-source our end-to-end hardware and software infrastructure in the hope of sparking research and development to bridge the gap between architectural proposals and deployed enclaves.
Innovation and Collaboration at Scale: How Confidential Computing Empowers Enterprises to Fully Embrace the Public Cloud
Ayal Yogev, CEO & Founder, Anjuna Security
Despite the massive benefits of cloud computing, enterprises in strategic sectors such as financial services, healthcare, defense, and government remain hesitant to fully embrace the public cloud. This hesitancy effectively places a glass ceiling on the scale at which innovation and collaboration can occur. Yogev speaks to how the rapid maturation of confidential computing platforms is poised to serve as the catalyst to the next phase of cloud adoption by unlocking unprecedented data security and privacy. In his talk, Ayal will highlight real-world examples of how organizations across industries are already leveraging confidential computing, and demonstrate how doing so ultimately turns a security solution into a vehicle for increased innovation and fearless collaboration.
The Opaque Platform for Secure Multi-party Analytics
Jay Harel, VP Products, Opaque Systems
Russell Goodwin, Customer Solutions, Opaque Systems
Use cases across banking, healthcare, AdTech, insurance, manufacturing, and more that involve confidential and sensitive data require secure inter-company and intra-company collaborative analytics on encrypted data in TEEs, while ensuring each party is only privy to the data they own. Learn about Opaque's unique platform for collaborative, multi-party analytics and Data Clean Room capabilities, and experience it through a live demo.
Enabling Secure Multi-Party Collaboration With Confidential Computing
Rene Kolga, Product Manager, Google
Can we create a usable trusted execution environment that supports a trust model where the workload author, workload operator, and resource owners are separate, mutually distrusting parties? Most definitely!
GCP's Confidential Space is a system that uses confidential computing to protect the workload from an untrusted workload operator while providing code and data integrity, and data confidentiality guarantees. This unlocks multiple secure collaboration and privacy-preserving analytics use cases.
Securing Secrets on Edge with SGX Enclaves
Henry Wang, Network Platform Security, Software Engineer, Meta
• Introduction to SGX Enclaves
• Overview of FBEnclave Platform
• Production Use Case on Edge
• Deployment Challenges and Tradeoffs
• Some Performance Benchmarks
An Open Source Certifier Framework for Confidential Computing
John Manferdelli, Office of the CTO, VMware
Ye Li, Staff Engineer, VMware
Confidential computing is a foundational technology, but its adoption has been inhibited by the difficulty in implementing programs quickly even on a single platform. In addition, fragmentation in the TEE platform market has prevented software portability and reuse across TEE technologies. In this session, we will discuss the Certifier Framework for confidential computing, an open source project offering a simple, general API and accompanying service for managing attestation in scaled CC programs. With a half dozen or so API calls, a developer can incorporate CC into their software without deep expertise in security and platform-specific TEE technologies. Furthermore, the framework also decouples trust policy from program code and supports managed deployment. We’ll cover the programming model, trust model and support (including policy and key storage) that makes the Certifier Framework easy to use and broadly applicable.
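To make the "half dozen API calls" point concrete, here is a minimal sketch of what a Certifier-style programming model can look like. The names below (TrustDomain, certify_me, secure_channel) are illustrative stand-ins, not the actual Certifier Framework API; the point is the shape of the flow: initialize against a policy certificate, gather platform evidence, obtain an admissions certificate from a certifier service, and open mutually attested channels.

```python
"""Illustrative sketch of a Certifier-style programming model.

All names here (TrustDomain, certify_me, secure_channel, ...) are
hypothetical stand-ins, NOT the real Certifier Framework API.
"""
import hashlib
import os


class TrustDomain:
    """Models the handful of calls such a framework might expose."""

    def __init__(self, policy_cert: bytes):
        # Trust policy is decoupled from program code: the app only
        # pins the policy certificate of its certification service.
        self.policy_cert = policy_cert
        self.admissions_cert = None

    def attest(self) -> bytes:
        # Placeholder for platform-specific evidence (SEV-SNP, TDX, SGX...).
        # A real framework hides this hardware detail from the developer.
        return hashlib.sha256(b"measurement||" + os.urandom(16)).digest()

    def certify_me(self, service_url: str) -> None:
        # Send evidence to the certifier service; receive an admissions
        # certificate if the evidence satisfies the deployed policy.
        evidence = self.attest()
        self.admissions_cert = b"cert-over-" + evidence  # stand-in

    def secure_channel(self, peer: str):
        # Both sides present admissions certs, so mutual trust follows
        # from policy rather than platform-specific verification code.
        assert self.admissions_cert, "call certify_me() first"
        return (peer, self.admissions_cert)


if __name__ == "__main__":
    td = TrustDomain(policy_cert=b"policy-authority-cert")
    td.certify_me("https://certifier.example.internal")
    channel = td.secure_channel("peer-enclave.example.internal")
    print("established attested channel with:", channel[0])
```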
Lowering the Barriers to Confidential Computing
Thomas Fossati, Principal Engineer
Marc Meunier, Director Ecosystem Development, Arm
Providers of computing platforms are racing to deploy products that deliver on the promise of confidential computing. As with any new technology, the initial investment can be high, and pioneers face the risk of cost overrun and failure. In this presentation we explore some of the implementation choices, and the resources Arm is making available to simplify the process of building a platform that supports a “confidential by default” methodology.
Building Privacy-Preserving Multi-Party Apps on Azure
Graham Bury, Product Management, Azure Confidential Computing (ACC)
Learn about real-world multiparty computing scenarios enabled by Azure confidential computing, including solutions provided by Microsoft technology partners. Discover new Azure offerings that make it easier to develop privacy-preserving applications, including new confidential container offerings in Azure.
Application of Confidential Computing to Anti-Money Laundering in Canada*
Vishal Gossain, Practice Leader, Risk Analytics and Strategy, Ernst & Young
In Canada, financial institutions face regulatory and privacy challenges in sharing information with each other on their customers to build inter-institution models to better detect money laundering. In collaboration with UC Berkeley (MC2), EY is working with the Big 5 Canadian banks and regulators to create an AML consortium to share data for inter-FI human trafficking detection models. This talk will focus on the consortium framework, technology, progress made, challenges and future outlook.
*1. Any views expressed by speakers are their own, and don't necessarily represent the views of their employers.
2. EY is participating under the Chatham House Rule.
Confidential Computing in Eyecare
Jackie Sweet, Lead Software Engineer, Dr Tavel Optical Group
How an Indiana eyecare provider is leveraging confidential computing to build a zero trust infrastructure.
Accelerating Confidential Computing Adoption: CCC’s Open Source Project Highlights
Lily Sturmann, CCC and Office of the CTO Emerging Technologies, Red Hat
The CCC brings together hardware providers, software solutions and cloud providers to ease and accelerate the adoption of confidential computing. Learn also how the CCC embodies open governance and open collaboration, which includes driving commitments from numerous member organizations and actively supporting contributions from several open source projects such as Enarx, Keystone, Gramine, Open Enclave SDK, and many more.
Pervasive Confidential Computing From Cloud-to-Edge
Mona Vij, Principal Engineer, Intel Labs
Security and compliance solutions are not one-size-fits-all, and neither is there only one way to deliver confidential computing (CC). Confidentiality and Integrity can be delivered at the application, container, or VM level, with trust verified via a range of attestation mechanisms. We’ll discuss example usages and deployments for each, and how Intel is uniquely positioned to provide this comprehensive backbone for the underlying infrastructure and services. We’ll also introduce the continued evolution of CC to encompass confidential collaboration and distributed confidential computing.
Enabling Confidential Information Retrieval for Regulatory Compliance
Dr. Richard Searle, Vice President of Confidential Computing, Fortanix
With the continuing expansion of data protection legislation, organizations must ensure compliance with legal obligations and organizational risk and compliance policies. In this session, learn how confidential computing supports controlled access to data for analysts working inside and outside the organizational boundary. The session will provide a contextual overview of the business requirement for confidential information retrieval and details of how confidential computing is being used to protect regulated and classified data. A flexible and scalable architecture will be demonstrated, and comparisons provided to alternative solutions to the use-case requirement. The session will provide essential insights for data analysts, compliance officers, and those seeking to enhance the value of their available data assets.
Leveraging SGX/TDX for Secure and Efficient AI at ByteDance using BigDL PPML
Ruide Zhang, Security Software Engineer, ByteDance Inc.
Jiao Wang, AI Frameworks Engineer, Intel Corporation
While we can safeguard applications and data in memory using Intel SGX (Software Guard Extensions) or TDX (Trust Domain Extensions), ensuring the security of distributed AI workloads remains a complex challenge.
In this session, we will share our experience at ByteDance in developing end-to-end secure AI workloads utilizing Jeddak Sandbox and BigDL PPML. Our solution has been deployed in production for internal customers to build trusted AI applications on large-scale datasets.
Writing Digital Exams secured by Remote Attestation and Cloud Computing
Thore Sommer, Keylime Maintainer, FHNW
Digital exams in schools and universities are getting more and more common. This talk takes a closer look at how we, the University of Applied Sciences and Arts Northwestern Switzerland (FHNW), solved many of the related challenges with Trusted Platform Module (TPM) based remote attestation and cloud computing.
During this session we discuss the solutions we found and the challenges we faced implementing them in our exam system, the Cloud Assessment Management Platform (CAMPLA).
Unlock the Potential of AI with Confidential Computing on NVIDIA GPUs
Philip Rogers, VP System Software, NVIDIA
Confidential computing has made great strides on CPUs over the last several years, especially with the advent of Confidential VMs. This has provided important guarantees for many use cases, but more is needed to run the demanding workloads of Machine Learning and Artificial Intelligence. This session will provide an overview of the software stack and deployment modes for operating the NVIDIA H100 Tensor Core GPU as part of a fully attested confidential computing platform, delivering the performance levels for ML and AI that only GPUs can provide. We will announce the availability of the NVIDIA Confidential Computing Software Stack and describe how it enables cloud deployments using CVMs, as well as on-premise deployments using Confidential Containers and Kata. Confidential computing is a team sport, and we will discuss ongoing collaborations with Microsoft Azure and Intel to advance the confidential computing ecosystem.
From a No to On-the-Go! A Frontrunner’s Story on How We Introduced Confidential Computing in Telco
Jonas De Troy, Domain Manager Public Cloud & Edge, Proximus
Introducing emerging technologies is always difficult in larger companies, and especially so in regulated industries. We will articulate a use case that shows the process from a no-go to an on-the-go decision, and show how technology, vision, and legal/compliance teams need to collaborate to prove added value. As Domain Manager for Public Cloud & Edge, I had the opportunity to live this process from early concept through the introduction of confidential computing.
Wrapping Entire Kubernetes Clusters into a Confidential-Computing Envelope with Constellation
Felix Schuster, CEO, Edgeless Systems
Kubernetes is the most popular platform for managing and scaling containerized workloads. For confidential computing to become mainstream, comprehensive confidential-computing features must therefore be brought to Kubernetes.
Collaborative Account Recovery for End-to-End Encryption Systems
Mike Blaguszewski, Lead Backend Engineer, PreVeil Inc.
In this presentation we will describe a method that uses secure enclaves to recover user keys in PreVeil’s end-to-end encrypted system.
CONFIDENTIAL6G EU Research Project
Drasko Draskovic, CEO, Abstract Machines
Dušan Borovčanin, Software Architect, Ultraviolet
CONFIDENTIAL6G is a large-scale European research project, engaging seven academic and research institutes, three industrial partners, and three SMEs. The project develops tools, libraries, mechanisms, and architectural blueprints for confidentiality in 6G, confidential computing enablers, and a TEE software abstraction layer. This talk will describe the project’s concept, objectives, organization, and research strategies related to confidential computing.
TEEtime: A New Architecture for Bringing Sovereignty to Smartphones
Ivan Puddu, ETH Zurich
Srdjan Capkun, Professor, ETH Zurich
Shweta Shinde, Assistant Professor, ETH Zurich
Phone manufacturers, operators, OS vendors, and users have diverse interests but imbalanced security dynamics. Developers entrust their security to OS vendors, who can limit the user; OSes in turn rely on the firmware for protection.
In this talk, we present a new smartphone architecture called TEEtime that balances the ecosystem while maintaining compatibility. We create Arm TEE-based domains for users and OSes to isolate resources, peripherals, and interrupts, as demonstrated with case studies.
Accelerating your digital Business with Anjuna Confidential Computing Platform
Mark Bower, VP Product, Anjuna Security
Confidential computing is driving radical changes in how leading organizations innovate and transform their digital business. Beyond making transformation seamless and secure, it is enabling G2000 enterprises to embrace the cloud at scale, powering optimized digital experiences, privacy-enhanced analytics, and new business models. This session will provide an overview of Anjuna’s platform, how it can be deployed in minutes to effectively secure enterprise workloads without disruption, and how a global financial services organization is leveraging it to accelerate its cloud-first strategy.
CC for AI/ML Models: A Comprehensive Security Framework for In-Use Protection, Ownership, and Data
Jay Chetty, Cloud Security Architect, Confidential Computing, Intel
As AI/ML models increase in value and application across sectors, their protection becomes paramount. These models, which often demand substantial investment for training and optimization, face threats of tampering and theft. Furthermore, data used for inference holds significant business value and is often subject to regulations such as GDPR and HIPAA. This AI/ML Security Framework offers a protective layer to these models. It secures models at rest, during transit, and at run time, ensuring integrity, confidentiality, and control over their usage. It introduces model "ownership" and the concept of model licensing, allowing developers to monitor deployment and potentially revoke a model’s use if misbehavior or critical flaws are detected. The framework applies cryptographic techniques and Intel Trusted Execution Environments (TEEs) to protect models. TEE attestation, used in licensing, is facilitated via Secure Boot and Intel Platform Trust Technology or SGX-based DCAP. The model’s protection is extended to data streams used for various AI/ML analytics and output results. Designed for AI/ML specialists with limited security expertise, this framework is delivered with a set of easy-to-use tools and is open-sourced, compatible with Linux KVM (VT-x), Intel SGX (with Gramine), and Kubernetes Containers.
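As an illustration of the pattern described above (models encrypted at rest and in transit, with the key released only to an attested deployment), here is a minimal Python sketch. The AES-GCM usage relies on the real `cryptography` package, but the key-broker policy function is a hypothetical stand-in for the framework's attestation-gated licensing, not Intel's actual tooling.

```python
"""Minimal sketch of attestation-gated model protection.

Illustrates the general pattern only; release_key_if_attested is a
hypothetical key-broker policy, not Intel's actual framework.
"""
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM


def protect_model(weights: bytes) -> tuple[bytes, bytes, bytes]:
    """Encrypt model weights for storage and transit with AES-GCM."""
    key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, weights, b"model-v1")
    return key, nonce, ciphertext


def release_key_if_attested(evidence: bytes, expected_measurement: bytes,
                            key: bytes) -> bytes:
    """Hypothetical broker policy: release the model key only when the
    TEE's attested measurement matches the licensed deployment."""
    if evidence != expected_measurement:
        raise PermissionError("attestation failed; model use denied")
    return key


if __name__ == "__main__":
    key, nonce, blob = protect_model(b"\x00" * 1024)  # toy weights
    measurement = b"trusted-enclave-hash"             # stand-in value
    k = release_key_if_attested(measurement, measurement, key)
    assert AESGCM(k).decrypt(nonce, blob, b"model-v1") == b"\x00" * 1024
```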
Data Clean Rooms for Secure Multi-Party Collaborative Analytics on Confidential Data
Rishabh Poddar, CEO, Opaque Systems
Collaborating on confidential data has forced organizations to make trade-offs and compromises when sharing sensitive information. Learn how Confidential Data Clean Rooms eliminate these trade-offs and deliver increased data security, higher analytic quality, reduced costs, and improved collaboration.
Confidential Containers, Grow Up and Leave the Nest
Amar Gowda, Principal Product Manager, Microsoft
Learn how to deploy confidential pods on public clouds using Cloud API Adaptor (CAA), a sub-project of the Confidential Containers project. This open-source project enables the creation of CVMs on public clouds by integrating with Kubernetes and kata-containers. In this talk, we’ll discuss the technical details of CAA, its integration with Kubernetes, the challenges of deploying pods through this non-obvious approach, and attestation on the hardware that powers these virtual machines.
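As a rough sketch of what scheduling such a peer-pod CVM can look like from the Kubernetes side, the snippet below uses the official Python client to create a pod whose runtime class routes it to CAA. It assumes a cluster with CAA already installed; the runtime class name `kata-remote` and the image are illustrative and may differ in a given deployment.

```python
"""Sketch: scheduling a pod as a peer-pod CVM via Cloud API Adaptor.

Assumes CAA is installed in the cluster; "kata-remote" and the image
name are illustrative placeholders.
"""
from kubernetes import client, config


def make_confidential_pod() -> client.V1Pod:
    return client.V1Pod(
        metadata=client.V1ObjectMeta(name="confidential-demo"),
        spec=client.V1PodSpec(
            # The runtime class routes this pod to CAA, which creates a
            # CVM in the cloud instead of a local VM on the worker node.
            runtime_class_name="kata-remote",
            containers=[
                client.V1Container(
                    name="app",
                    image="registry.example.com/app:latest",  # placeholder
                )
            ],
        ),
    )


if __name__ == "__main__":
    config.load_kube_config()  # use the local kubeconfig
    core = client.CoreV1Api()
    core.create_namespaced_pod(namespace="default",
                               body=make_confidential_pod())
```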
An Open, Platform-Neutral Approach to Attestation
Mathias Brossard, Principal Security Architect, Arm
Attestation is one of the pillars on which confidential computing rests. A compute environment needs to prove its confidential characteristics before workloads can be executed. Methods of attestation are often platform-specific, leading to fragmentation as more confidential platforms and architectures emerge. This session shows how open-source, platform-neutral, standards-based abstractions can be applied in this space, and invites the community to collaborate and invest in them.
Collaborative Confidential Computing: FHE vs sMPC vs Confidential Computing. Security Models and Real World Use Cases
Bruno Grieder, CTO & Co-Founder, Cosmian
Poster Sessions
An Introduction to Huawei Qingtian Enclaves
Quoc Do Le, Confidential Computing Lead, Huawei Munich Research Center
QingTian Enclaves provide a secure and isolated environment for running sensitive workloads and data in the cloud. With QingTian Enclaves, customers can leverage the power of cloud computing in Huawei Cloud while maintaining the confidentiality and integrity of their applications.
Cocos AI—System for Confidential Collaborative AI
Darko Draskovic, Senior Software Engineer, Ultraviolet
Filip Bugarski, Software Engineer, Ultraviolet
Cocos AI is a distributed, microservice-based cloud solution that enables confidential and privacy-preserving AI/ML. It allows data scientists to train AI and ML models on confidential data that is never revealed, and it can be used for Secure Multi-Party Computation (SMPC).
Achieving Kata Confidential Containers Deployments on Azure for Your Zero Trust Operator Deployments
Amar Gowda, Principal Product Manager, Microsoft
AMD SEV-SNP-based Trusted Execution Environments (TEEs) provide confidential computing with remote attestation, memory and code protection, and isolation from the host. The Kata Confidential Containers open-source project, in turn, lets you achieve the highest form of isolation from other pods, the host, and Kubernetes components on a single Kubernetes container host. Combining the two can help deliver zero trust operator deployments.
Highlights From Confidential Computing
Welcome Remarks and Introduction to Confidential Computing
This keynote delivers welcome remarks, including some exciting statistics about this year's summit. It then gives an overview of confidential computing's capabilities compared to other technologies. Stay tuned for a small surprise as well!
New Trends in Confidential Computing
Advancing Confidential Computing and its Ecosystem
The Confidential Computing Consortium (CCC) plays a crucial role in promoting and advancing the field of confidential computing. This keynote delves into the recent activities of the CCC, highlights its open-source projects and members, and outlines upcoming plans. From this talk, you'll gain insights into how these developments can benefit you and your organization.
Confidential Computing: Elevating Cloud Security and Privacy
Confidential Computing (CC) is reshaping the future of cloud security by extending data protection to computation. This paradigm shift not only fortifies defenses against cyber threats but also fuels innovation, empowering businesses to explore new possibilities. Join us on a journey to uncover the transformative potential of CC and forge a future that is both secure and innovative, especially in the era of generative AI.
From Risk to Resilience: Strategies for Building a Secure Foundation for Gen AI
Generative AI is being embraced by companies across industries as the #1 lever for reinvention. But it's also introducing new risks around data privacy, bias and explainability, and security. As companies move from discrete use cases to larger-scale implementations, new risks are surfacing—from model disruption to data and IP confidentiality and protection. Your data is becoming more powerful and more vulnerable, all at once, and most companies simply aren't prepared. When all data could be accessible by AI, confidential computing is more important than ever—and companies need to scale it fast. Join me to learn valuable tips and strategies for scaling confidential computing and accelerating gen AI innovation. I'll discuss practical steps for integrating confidential computing as a standard practice.
Show Me the Money: The Business Value of Confidential Computing
More than another layer of security, confidential computing is a business enabler. Use it to unlock data silos, confidentially share data and monetize insights, deploy workloads in the cloud or remote locations with assurance, and reduce the time and effort spent on data sanitization. These aren't pipe dreams. We'll share stories of enterprises already deploying at scale and how they've realized new revenue streams, reduced CapEx and OpEx, and achieved expanded reach while remaining compliant, confidential, and protected.
The Age of Confidential AI
As AI's potential reshapes the world, the need to secure sensitive training data and AI models becomes paramount. Join this session to see Confidential Computing and AI in action!
Confidential Computing Use Cases Panel, Moderated by Ben Lorica
Trends in Confidential Data and AI
Toward AI Security Level 4: Protecting Model Weights
The most highly-scaled AI companies, through the Frontier Model Forum, have increasingly focused on ensuring safe and responsible development of frontier AI models. In Anthropic's approach to this, the Responsible Scaling Policy, AI Safety Level 4 systems are defined as those that will present critical catastrophic misuse risk such as becoming the primary source of a national security risk in one area (such as cyberattacks or biological weapons). Such AI systems should be defended from exfiltration and abuse by motivated nation state attackers. This talk will cover early thoughts on defining ASL-4 security hardening including the utilization of confidential computing for training and inference.
Towards Trustworthy Generative AI
With GenAI's unprecedented capabilities and wide adoption come challenges and risks in ensuring responsible use. This session will discuss the various challenges and issues in ensuring trustworthy GenAI, including hallucination, privacy leakage, adversarial robustness, harmlessness, and alignment. It will also explore the potential solution space in addressing these issues and defending against the misuse of GenAI technology.
Securing AI Model Weights - Confidential Computing and Beyond
Frontier AI systems are rapidly becoming more capable. AI model weights are a critical component to secure - they are the culmination of significant computing, training data (trillions of tokens), and algorithmic insights and optimizations. Their security is already a matter of increasing commercial interest, but depending on their future performance on hard-to-predict tasks (such as assistance in the development of bioweapons) their security could suddenly become a matter of national security. In this talk, we'll discuss potential future security needs for frontier AI, what labs can do today to prepare, and how confidential computing fits into the picture.
Confidential Computing for Generative AI
Generative AI is changing the world in spectacular ways, most of which we have yet to experience. One significant barrier to its widespread adoption is the concern over the exposure or leakage to LLM providers and outside parties of confidential or proprietary prompts and data during fine-tuning, as well as integrity concerns over the trustworthiness of the results. At the same time, enabling the power of generative AI to be used on users' everyday personal data or enterprise data can bring unprecedented productivity. In this panel, industry experts leading confidential computing and/or secure AI will discuss the exciting opportunities, and associated challenges, of using confidential computing for generative AI.
Protecting AI Data and Workloads with AMD Infinity Guard
Confidential computing is a critical technology for protecting data used in artificial intelligence (AI) applications. In this keynote, Mark Papermaster, EVP and CTO of AMD will share the AMD vision and strategy for delivering confidential AI solutions to the market. He'll discuss how AMD Infinity Guard technology in AMD EPYC™ CPUs, deployed on multiple cloud platforms, enables confidential AI today and how the next evolution of AMD Infinity Guard will expand the trust boundaries to include endpoints like AI accelerators. He'll also highlight the role of ecosystem collaboration and open standards in advancing confidential AI for the industry.
Navigating the AI Landscape: Exclusive Data Strategy
Join Jeremiah Owyang, General Partner at Blitzscaling Ventures, as he guides attendees through the layers of the AI technology stack. This presentation will demystify the components that power AI applications and highlight the role of exclusive data. Jeremiah will discuss how leveraging unique datasets can provide an advantage, accelerating the growth and impact of startups, corporations, and organizations in the AI landscape. Explore how exclusive data serves as a cornerstone for innovation and success in the AI industry. Whether you're a founder, strategist, or enthusiast, this talk will equip you with the knowledge to harness AI technologies for your ventures. Originally written by organic Jeremiah, with editing by GPT4.
The Need for Speed: How Confidential Computing Accelerates UX
This session will explain how Portal leverages confidential computing to unlock exceptional user experiences (UX) for our customers. The challenge falls into two main categories: performance and UX flows.
On the performance side, confidential computing allows us to streamline computations that typically take a long time on the client side by moving them to the server side. Because the computation runs in a confidential computing environment, we still ensure trustlessness in our offering through attestation. This approach has enabled us to achieve up to a 10x speedup in our signature generation protocols.
In today's world, how users access their data is crucial. One of the key challenges for us has been balancing the users' control over the keys to their assets while also abstracting storage away from them. For example, with a single Face ID scan, they can gain access to their keys. On top of this, we aimed to leverage the distributed trust nature of Multi-Party Computation (MPC). We discovered that a combination of confidential computing and technologies such as passkeys strikes the balance we were looking for.
This session will also delve into detail about how we optimized our computation and made the UX flows as simple as performing a Face ID scan, all by leveraging confidential computing.
Confidential Data Panel, Moderated by Aaron Fulkerson
In today's data-driven world, stringent data security, privacy, and sovereignty requirements often hinder the ability to harness AI's full potential. This panel will explore how confidential computing can unblock AI projects and unlock significant value for businesses. Moderated by Aaron Fulkerson, CEO of Opaque Systems, this discussion will feature insights from industry leaders who have successfully navigated these challenges.
Leveraging PETs for Generative AI and Consortium Applications in Financial Services
As a part of the Canadian financial ecosystem, EY has been leading the application of Privacy Enhancing Technologies (PETs) for a) Generative AI applications for financial services and b) data consortium applications for financial services. We will discuss some use-case applications for Generative AI (such as contact center support for financial services, automated code generation, complaints handling, and automated model documentation) and how EY has been leveraging PETs to ensure data privacy and security in its Gen AI applications for financial services. Technologies referenced will include OpenAI, Microsoft Azure, Python, and other relevant tools. Further, we will discuss one of the first consortiums in Canada finally moving to real-world applications in the field of anti-money laundering. The technologies leveraged will include confidential computing (TEEs), differential privacy, synthetic data generation, and end-to-end Gen AI evaluation (governance framework). Also included will be a privacy testing framework and controls (PII sanitization, differentially private fine-tuning, privacy penetration testing), hallucination testing (retrieval relevance, response relevance, and faithfulness), performance testing (e.g., BERTScore, ROUGE), and content filtering. We will talk about practical challenges in these applications and our approach to solving them.
THE SPEAKER HAS REQUESTED THAT WE DO NOT PUBLISH THEIR SESSION VIDEO AND SLIDE DECK.
Architects of Trust: Fireside Chat with Industry Leaders
Join us for an engaging fireside chat moderated by Mark Bower, Vice President of Product at Anjuna, as he sits down with leaders from the financial services, blockchain, and security sectors to discuss their adoption and utilization of confidential computing technology.
This discussion will delve into critical questions such as the importance of security to their businesses and customers, their previous security strategies and limitations, and what drove their decision to embrace confidential computing. Learn firsthand how these leaders have simplified their business operations, accelerated innovation, and unlocked new opportunities in cutting-edge use cases, including multi-party collaboration, ML inference models, and wallet-as-a-service. Discover the unique advantages and impacts on development time and costs, as well as insights into its broader implications for like-minded companies struggling to secure their cloud-native business stacks. Don't miss this insightful discussion on securing the future of cloud computing!
The Privacy Paradox: PETs as Catalysts for Data Collaboration
Data privacy is often treated by many enterprises as a necessary yet burdensome "cost of doing business." In response to a renewed global focus on consumer data protection, multinational corporations have created new leadership roles and team structures to ensure compliance with GDPR, CCPA, and other data privacy laws. In this session, we challenge the narrative that corporate privacy programs are cost-centers that simply function to manage risk and ensure legal compliance. We illustrate how the introduction of privacy-enhancing technologies (PETs) and confidential computing can actually serve as powerful mechanisms for value creation. Drawing from examples in healthcare, finance, and public policy, we will demonstrate how investments in data privacy programs are catalysts for impactful, data-driven collaborations. In turn, we show how these data collaborations can supercharge many different business functions, like product development, marketing, and customer service.
Confidential Computing: Preventing Fraud Through Secure Industry Collaboration
Annually, the volume of fraudulent activities in payments continues to rise, prompting banks to ramp up their investments in prevention and compliance measures to safeguard the integrity of the financial system. Recognizing the imperative for industry-wide collaboration, Swift, as a member-driven cooperative, is spearheading efforts to mitigate the impact of fraud through innovative approaches.
In this session, Swift will showcase its groundbreaking initiative to drive industry collaboration in fraud reduction. Leveraging its unparalleled network and community data, Swift is pioneering a foundation model for anomaly detection with unprecedented accuracy and speed. Central to this endeavor is Swift's strategic integration of confidential computing, ensuring the highest standards of security and privacy in data and AI collaboration.
By partnering with key technology vendors and leading an industry pilot group comprising the world's largest banks, Swift is tackling some of the toughest challenges that have plagued the industry for decades. This collaborative effort underscores the recognition that no single entity possesses all the answers, but together, industry stakeholders can forge solutions that benefit all.
Attendees will gain invaluable insights into Swift's holistic approach to combating fraud and how confidential computing serves as a linchpin in enabling secure collaboration among industry players. Join us to discover how this work is championing a global, inclusive economy that prioritizes the interests of end customers while maintaining the highest standards of security and privacy.
Confidential Computing for the Enterprise: How ServiceNow Puts Trusted AI to Work
In the rapidly evolving landscape of AI, businesses face significant challenges in bringing AI projects from concept to production—and ensuring data privacy and security remains a top concern. Join this session to hear about how ServiceNow is setting a new benchmark for confidential AI in the enterprise powered by the Opaque platform.
Towards the Goal of Confidential Computing (CC) Ubiquity -- Interoperability Efforts by Microsoft and Intel Towards Attestation Standardization
Microsoft recently announced the public preview of Confidential VMs (CVMs) with Intel TDX at the Nov 2023 MS-Ignite conference. A fundamental aspect of CVMs lies in empowering the customers to verify trustworthiness through the process of remote attestation. Customers can choose from different verifiers, such as CSP-offered services like Microsoft Azure Attestation, third-party services like Intel Trust Authority, or via security ISVs, etc. Regardless of the verifier employed, it is critical to enable seamless interoperability among these verifiers to ensure that downstream consuming services, also known as relying parties, can uniformly access and utilize the attestation results. This joint session from Microsoft and Intel will focus on our collaborative efforts in achieving interoperability, specifically with attestation result formats of TDX CVMs and the IETF draft on the TDX shared EAT profile.
A System for Minimizing LLM Hallucinations
When it comes to LLMs, hallucinations are a fact of life. To develop trustworthy AI, teams need to proactively find and fix hallucinations at scale. This starts with evaluations.
Join Galileo's Atin Sanyal as he discusses hallucinations: how they occur, how to monitor for them, and how teams can set up an end-to-end system for minimizing GenAI hallucinations.
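By way of illustration, a hallucination evaluation can start from something as simple as checking whether each answer sentence is supported by the retrieved context. The toy grounding check below is a deliberately simple stand-in for the richer metrics such systems use; it is not Galileo's actual product.

```python
"""Toy grounding check for hallucination monitoring.

A deliberately simple illustration; production evaluation systems use
far richer, model-based metrics.
"""


def token_overlap(claim: str, context: str) -> float:
    """Fraction of claim tokens that appear in the retrieved context."""
    claim_tokens = set(claim.lower().split())
    context_tokens = set(context.lower().split())
    if not claim_tokens:
        return 1.0
    return len(claim_tokens & context_tokens) / len(claim_tokens)


def flag_hallucinations(answer: str, context: str,
                        threshold: float = 0.5) -> list[str]:
    """Flag answer sentences that are poorly supported by the context."""
    sentences = [s.strip() for s in answer.split(".") if s.strip()]
    return [s for s in sentences if token_overlap(s, context) < threshold]


if __name__ == "__main__":
    ctx = "The H100 GPU supports confidential computing with attestation."
    ans = "The H100 supports confidential computing. It was released in 1999."
    for sentence in flag_hallucinations(ans, ctx):
        print("possible hallucination:", sentence)
```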
Strengthening Confidentiality with Multiple Attestations
This session is a collaboration between AMD's SEV-SNP team, the NSA's Trusted Mechanisms research group, and Invary.
We intend to showcase the NSA's open-source Maat Measurement and Attestation Framework, which orchestrates attestations of host and guest OS boot and runtime integrity and guest memory integrity (AMD SEV-SNP).
We will thus show how aggregated attestations benefit confidential computing workloads, serving both the workload owner utilizing the guest and the host's manager.
We will demonstrate multiple use cases with varying levels of confidentiality, providing optionality to end users, for example:
1. A confidential environment with all components having integrity, as seen through a single aggregated output via Maat.
2. An environment where the host OS lacks runtime integrity via a rootkit attack, but the guest maintains OS and memory runtime integrity. This scenario allows for a separation of response between the owner of a confidential workload and the owner of the host.
3. An environment where the guest lacks memory integrity, but the host and guest have OS integrity.
4. An environment where the guest lacks OS runtime integrity but maintains memory confidentiality.
The key takeaways are:
1. The benefits of open-source frameworks like Maat for aggregating and orchestrating multiple third-party attestations (see the sketch after this list).
2. An understanding of the layered architecture of a confidential computing environment and how each layer, if compromised, can impact the others.
3. The importance of attestation in confidential computing.
4. Exposure to the Copland language, used for expressing attestations, as described in "Flexible Mechanisms for Remote Attestation" (DOI: 10.1145/3470535).
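To make the aggregated-verdict idea concrete, here is a toy Python model of collapsing per-layer attestation results into a single report that separates host and guest responsibility, mirroring scenario 2 above. It is a hypothetical illustration, not Maat's actual interface or the Copland language.

```python
"""Toy aggregation of per-layer attestation verdicts.

A hypothetical model of a single aggregated output across host and
guest layers; NOT Maat's actual interface.
"""
from dataclasses import dataclass


@dataclass
class Verdict:
    layer: str       # e.g. "host-os-runtime", "guest-memory"
    intact: bool


def aggregate(verdicts: list[Verdict]) -> dict:
    """Collapse per-layer results into one report for each stakeholder."""
    failed = [v.layer for v in verdicts if not v.intact]
    return {
        "overall_intact": not failed,
        "compromised_layers": failed,
        # A host-layer failure with an intact guest lets responsibility
        # be split between the host owner and the workload owner.
        "host_action_needed": any(l.startswith("host") for l in failed),
        "guest_action_needed": any(l.startswith("guest") for l in failed),
    }


if __name__ == "__main__":
    report = aggregate([
        Verdict("host-os-boot", True),
        Verdict("host-os-runtime", False),  # e.g. rootkit detected
        Verdict("guest-os-runtime", True),
        Verdict("guest-memory", True),      # SEV-SNP protected
    ])
    print(report)
```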
Sovereign Private Cloud - A Confidential Computing Solution for the Italian Public Administration
The Cloud Italy Strategy, created by the Department for Digital Transformation and the National Cybersecurity Agency, contains the strategic directions for migrating the data and digital services of the Public Administration to the cloud. The strategy responds to three main challenges: ensuring the country's technological autonomy, guaranteeing control over data, and increasing the resilience of digital services. In line with the objectives of the National Recovery and Resilience Plan (PNRR), approximately 75% of Italian PAs are migrating data and IT applications to a cloud environment. The strategy rests on a highly reliable infrastructure whose objective is to provide cloud infrastructures with the highest guarantees of reliability, resilience, scalability, interoperability, and environmental sustainability.
One objective of the Italian strategy is to design and provide a secure infrastructure supporting this qualified cloud. One requirement for this infrastructure is the capability to technically enforce isolation of cloud end-user data from the infrastructure team. One of the technologies chosen to implement this isolation is confidential computing applied at the level of the virtual machines. Confidential computing protects the data in use by a VM and provides the capacity to verify the activation of memory isolation and the integrity of some code running within the VMs. Based on these capabilities, CYSEC designed a solution that protects VM data in all states (at rest, in transit, and in use) and allows the detection of abnormal behavior in the infrastructure hosting the qualified cloud. This solution includes attestation of the launch of VMs and a regular runtime auditing mechanism for the VMs.
CYSEC will present the high-level design of the hosted private cloud solution for the Italian administrations and will present the design of the confidential computing solution embedded within this hosted private cloud.
Securing Services at Meta with CVM Lift and Shift
Over the past year, Meta has significantly increased its investments in TEE technologies, with a focus on AMD SEV-SNP Confidential VMs. We will discuss how Meta leverages CVMs to secure services, particularly the infrastructure for Lift and Shift services.
The primary class of use cases we've addressed is defense in depth. Some high-criticality services want the confidentiality and integrity guarantees provided by CVMs. For example, a Key Management Service wants a higher security posture for the keys it stores in main memory. CVMs offer strong security guarantees with cheaper hardware, making them an attractive solution for securing Meta's vast number of services deployed across a global fleet.
Attestation Infra
Consider interactions where attestation is done at the application level. Attesters generate evidence, verifiers validate evidence, and encrypted channels must be established. Both attester and verifier services would need to integrate TEE-specific attestation software, which violates Lift and Shift. Even if these dependencies were abstracted into a library, it would have to support multiple programming languages and TEE technologies, and we thought such a library would be difficult to maintain.
Instead of doing explicit remote attestation at the application layer, our solution was to do implicit remote attestation at the transport layer. CVM Services use X509 certificates that encode attested identity – which is something all Meta services can understand because we already encode regular service identities this way. We refer to this concept as Implicit RA-TLS via Attested Service Identities, which is an implementation of the RATS Passport model.
The major steps are:
1. An agent in the CVM requests a cert from a special Certificate Authority by providing attestation evidence bound to the CSR.
2. The CA appraises the evidence according to some policy and mints (i.e., signs) a cert representing the Attested Service Identity.
3. The CVM Service uses this cert to interact with other entities in Meta infra. The relying party does not need to perform explicit remote attestation because it trusts the CA has attested the peer.
The main benefits of this attestation model (a minimal sketch of the flow follows the list):
• Reusing TLS to establish encrypted channels instead of redoing them at the application layer. The protocol ensures the X509 private key is owned by the CVM.
• Reusing the concept of identities in X509 certs to allow for seamless integration with Meta's Auth framework, authenticating services (plus their attested state) and authorizing access to resources.
• Application code can remain agnostic to the runtime environment (e.g., CVM, container, bare metal). Service owners do not need deep technical knowledge or effort to use CVMs.
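A simplified model of this passport-style flow is sketched below. All names (AttestingCA, Evidence, the measurement strings) are hypothetical stand-ins, not Meta's implementation; the sketch only shows how binding evidence to the CSR lets the CA mint a certificate that relying parties can trust without performing their own attestation.

```python
"""Simplified model of the RATS Passport flow described above.

Hypothetical stand-ins throughout; NOT Meta's implementation.
"""
from dataclasses import dataclass


@dataclass
class Evidence:
    measurement: str   # e.g. launch digest of the CVM image
    csr_binding: str   # hash binding the evidence to the CSR


@dataclass
class Certificate:
    service_identity: str
    attested: bool
    signer: str


class AttestingCA:
    """CA that appraises evidence before minting identity certs."""

    def __init__(self, policy: set[str]):
        self.policy = policy  # allow-listed measurements

    def mint(self, identity: str, ev: Evidence, csr_hash: str) -> Certificate:
        # Step 2: appraise evidence and check the CSR binding, so the
        # minted key is provably held inside the attested CVM.
        if ev.measurement not in self.policy or ev.csr_binding != csr_hash:
            raise PermissionError("evidence rejected by policy")
        return Certificate(identity, attested=True, signer="attesting-ca")


if __name__ == "__main__":
    # Step 1: the agent in the CVM gathers evidence bound to its CSR.
    csr_hash = "h(csr)"
    evidence = Evidence(measurement="cvm-launch-digest",
                        csr_binding=csr_hash)

    # Steps 2-3: the CA mints the cert; relying parties then just do
    # TLS and trust the CA's appraisal instead of verifying evidence.
    ca = AttestingCA(policy={"cvm-launch-digest"})
    print(ca.mint("kms.service.example", evidence, csr_hash))
```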
Trade-Offs
While CVMs allow for the convenience of Lift and Shift, they trade this off against an increased TCB. Including the kernel, OS, and entire service application bloats the TCB, which increases the attack surface. This property also makes them unsuitable for use cases that aim to minimize the TCB.
The current deployment model requires service owners to build entire VM images, which adds significant overhead compared to containers, Meta's standard unit of deployment.
There is a noticeable latency overhead for IO-bound workloads. We may present preliminary performance benchmarks for early services that we have onboarded.
Future Work
We are exploring a deployment model that runs critical code inside a small CVM (e.g., a sidecar) alongside the main process. This will allow for relevant use cases to minimize TCB.
We are also exploring a deployment model similar to Red Hat's Confidential Containers, but built on Meta's internal container orchestration instead of Kubernetes. The main idea is a base VM image containing a management layer that stages the container layer. This would relieve service owners of building entire VM images.
How Confidential Computing is Changing the World
Confidential Computing has changed how we securely exchange information and is literally creating a safer and more secure world. Join us to learn how Confidential Computing has lowered fraud rates across competing companies, led to medical breakthroughs in disease detection and treatment, and broken down data silos within organizations to help companies drive revenue from the data they already possess.
Enterprise Case Studies in Data Sharing and Generative AI
Confidential Computing for Enhanced Genomic Epidemiology
Advances in DNA sequencing technology are expected to continue over the next decade, accompanied by continued decreases in cost. The COVID-19 pandemic clearly demonstrated the value of these technologies for infectious disease surveillance in various settings across public health, hospital systems, and government entities. However, the insights gained from genomics tools are most powerful when combined with complementary epidemiological and patient data.
Unfortunately, privacy and security challenges have hindered the combination of genomics data with relevant patient data. Confidential computing technology offers a solution to these problems by providing a secure, encrypted environment where genomics and patient data can be combined to drive insights while keeping both data and models secure. Palmona Pathogenomics has developed a platform (P3) for combining these data sets to improve the management of infectious diseases by fostering multi-party collaboration across stakeholders leveraging confidential computing.
We have implemented predictive models of pathogen properties based on genome sequences to predict antibiotic resistance and virulence risk. This information is combined with epidemiological data to uncover factors driving the spread of pathogens across regions and facilities. Insights related to patient risk based on demographic factors (age, gender, co-morbidities) are presented. Epidemiological factors such as travel history are incorporated for improved outbreak tracing. Trend analysis highlights changes in pathogens and resistance mechanisms across time and geographies.
The P3 Platform is currently used in Public Health, Medical Centers, Diagnostics, and Life Science Tools companies. This session will describe the use cases and insights offered to these customers, leveraging privacy-preserving architecture and supporting cloud, data, and AI technology.
Ensuring Trust and Privacy in AI
This talk will address the intertwined issues of trust and privacy in Gen AI. We will delve into the significant privacy concerns in Gen AI, discussing the challenges of protecting user data while maintaining performance. We will also explore the importance of having a verifiable AI pipeline and how to ensure the integrity and authenticity of AI outputs across various entities.
Expanding The Trust Boundaries and Supply Chain Security
Advancements in confidential computing by AMD and other participants in the ecosystem have been significant in the last few years. There is still more to do to secure a new and wider range of use cases in multi-tenant data centers. The next evolution of confidential computing technologies aims to extend the boundaries of the trusted execution environment to trusted device I/O virtualization with TDISP, workload migration, and supply chain security. This session covers these emerging technologies, identifies the next set of problems to solve in confidential computing, and calls for industry action to drive broad adoption and forward progress in building scalable and effective confidential computing solutions for AI and general workloads.
Horizontal Federated Learning with Intel® Trust Domain Extensions (Intel® TDX) on VMware® ESXi
Intel® Trust Domain Extensions (Intel® TDX) is designed to isolate Trust Domain (TD) VMs from the hypervisor and other non-TD software on the host platform, protecting confidential workloads from a broad range of software attacks. Federated learning is a machine learning approach that allows a model to be trained across multiple decentralized nodes without explicitly exchanging local data samples. In this session, we describe the use case of running federated learning to collaboratively train a global machine learning model across a network of trust domains running on VMware® ESXi while keeping the sensitive data localized.
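For readers unfamiliar with the algorithmic side, the sketch below shows minimal federated averaging (FedAvg) in Python, where each client stands in for a TD VM holding local data: only model updates leave a node, never raw samples. It is illustrative only and says nothing about the actual TDX or ESXi plumbing.

```python
"""Minimal federated averaging (FedAvg) sketch.

Each client stands in for a TD VM with private data; only model
updates are shared with the aggregator. Illustrative only.
"""
import numpy as np


def local_step(weights: np.ndarray, X: np.ndarray, y: np.ndarray,
               lr: float = 0.1) -> np.ndarray:
    """One gradient step of linear regression on a node's private data."""
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad


def fedavg(global_w: np.ndarray, clients: list, rounds: int = 20):
    for _ in range(rounds):
        # Each trust domain trains locally; raw data never leaves it.
        updates = [local_step(global_w.copy(), X, y) for X, y in clients]
        sizes = np.array([len(y) for _, y in clients], dtype=float)
        # The aggregator combines updates weighted by dataset size.
        global_w = np.average(updates, axis=0, weights=sizes)
    return global_w


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -1.0])
    clients = []
    for n in (50, 80, 120):  # three nodes with private datasets
        X = rng.normal(size=(n, 2))
        clients.append((X, X @ true_w + 0.01 * rng.normal(size=n)))
    print(fedavg(np.zeros(2), clients))  # converges toward [2, -1]
```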
The Mesh is the New Web, Secured From Within by Confidential Computing
It is hard to believe, but as a 35-year-old technology, the Web is a contemporary of the Intel 80486 processor and the brick phone! Built on the even older Domain Name System (DNS), the Web delegates authority to domain administrators without any built-in security at all. So, for decades, the best we have been able to do is to bolt security on the side, resulting in cybersecurity patchworks and Whac-A-Moles.
Fast forward to today, and confidential computing (CC) brings built-in cryptographic capabilities at the chip level. In this session, we will outline how CC enables the creation of "the Mesh," a global information space like the Web, but with built-in cryptographic security for each individual person and non-person entity at internet scale.
Built on the fully automated Universal Name System (UNS) and Universal Certificate Authority (UCA), the Mesh provides global assurance of provenance, integrity, authenticity, confidentiality, reputation, and privacy for all bits within it, be they code or data. In particular, we will discuss how the Mesh enables organizations to deploy their own globally attested and verified software agents to secure everything for everyone.
Download SlidesA Secure and Private Platform for Transparent Research Access via Trusted Execution Environments
The recent shift in regulatory demands, aimed at fostering an understanding of systemic risks, is pushing large online platforms and search engines towards greater transparency. Some regulations require these companies to grant privileged data access to vetted researchers. However, this could lead to unintentional misuse or leaks of private user data. Designing a platform that balances transparency with robust protections against these privacy risks is hard, yet important. This session will discuss a solution that leverages trusted execution environments (TEEs) in the cloud to balance transparency and user privacy. We demonstrate how TEEs can provide data confidentiality and execution integrity to both data owners and data scientists. We will also discuss future opportunities and other use cases for the solution.
Enhancing End-User Devices with Confidential Computing: Protecting AI Applications and Improving Gaming Experiences
At Samsung Research, we are working on confidential computing technologies for end-user devices. As part of this work, we actively promote confidential computing by contributing to our open-source project Islet, a CC software platform based on Arm CCA that recently joined the Confidential Computing Consortium (CCC) as a project. In addition, with the growing importance of Confidential AI, we are developing the Samsung Confidential AI Framework, deployed on Islet, to protect users, their data, and the AI models employed by different mobile applications. Our ongoing efforts involve exploring potential use cases on the user's device, introducing distinctive enhancements to the confidential computing platform for user devices, and building upon the insights gained from these use cases.
In this session, we plan to introduce use case scenarios related to AI and gaming and demonstrate them utilizing our Islet platform.
AI Scenario:
Generative AI models, e.g. Large Language Models (LLMs), are capable of generating harmful and undesired content involving abuse, violence, or self-harm. Unsafe content can erode user trust and create legal issues for device and service providers. Content moderation (also known as Guardrails) is therefore essential for the safety of Generative AI applications. Current safety solutions are designed for cloud-based protection, not on-device protection.
In our scenario demonstration, we introduce how AI Guards can protect Generative AI models from jailbreak attacks, as sketched below. Firstly, we consider a scenario where the attacks are mounted by malicious users of different Generative AI applications, such as text summarization and chatbots. Secondly, adversarial AI models deployed in the applications are exploited as malicious agents.
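As a toy illustration of the guardrail idea (not Samsung's actual AI Guard), the sketch below screens prompts before they reach a generative model; a deployed guard would use a learned classifier running inside the TEE rather than the hypothetical patterns shown here.

```python
# Toy on-device guardrail: block prompts matching known jailbreak patterns
# before they reach the generative model. Patterns are illustrative only.
import re

JAILBREAK_PATTERNS = [
    r"ignore (all|your) (previous|prior) instructions",
    r"pretend (you are|to be) .* without (any )?restrictions",
]

def guard_prompt(prompt: str) -> str | None:
    """Return the prompt if it passes the guard, else None (blocked)."""
    for pattern in JAILBREAK_PATTERNS:
        if re.search(pattern, prompt, re.IGNORECASE):
            return None
    return prompt

print(guard_prompt("Summarize this article for me."))            # passes
print(guard_prompt("Ignore all previous instructions and ..."))  # blocked -> None
```

Running the guard itself inside the TEE matters: otherwise a malicious app could simply strip it out or tamper with its rules.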
Game Scenario:
There has been a long battle between game players and game service providers due to mutual distrust. We introduce how CC can rebuild this trust and the advantages both sides derive from it.
Firstly, by executing a game app in a trusted environment, users gain trustworthiness in the eyes of the service provider, making offline gaming viable; this enhances responsiveness and allows uninterrupted gameplay regardless of network conditions. Secondly, users can personally confirm the transparency of the game service on their own devices, improving overall service reliability. As an example, we will demonstrate this by running a simple Randombox program on our Islet platform.
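To make the transparency claim concrete, here is a minimal commit-reveal sketch of a verifiable random box; it is our illustration of the general idea rather than the Islet demo itself, and the item table and derivation are hypothetical.

```python
# Toy "Randombox": the service commits to its seed before the draw, so the
# player can verify afterwards that the outcome was not rigged.
import hashlib, secrets

seed = secrets.token_bytes(32)
commitment = hashlib.sha256(seed).hexdigest()   # published before the draw

def open_box(seed: bytes, items: list[str]) -> str:
    index = int.from_bytes(hashlib.sha256(seed + b"draw").digest(), "big")
    return items[index % len(items)]

items = ["common sword", "rare shield", "epic mount"]
prize = open_box(seed, items)

# Player side: check the commitment, then recompute the draw.
assert hashlib.sha256(seed).hexdigest() == commitment
assert open_box(seed, items) == prize
print(prize)
```

On a CC platform, attestation additionally proves that this exact draw logic is what actually ran on the device.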
Download SlidesOpaque Systems Demonstration
In this talk, we'll highlight some of the newest key features and capabilities of the next generation of the Opaque Confidential AI platform, enabled by confidential computing. Join us for a hands-on tour to see how we've built on confidential computing to make it accessible for all.
Download SlidesLatest Azure Confidential Computing Services and Use Cases
Microsoft Azure continues to add new IaaS and PaaS services for our customers, including internal Microsoft customers. This session will give an overview of our latest services and use cases, including the migration of core Microsoft services to Azure confidential computing.
Download SlidesClosing the Gap
As AI transcends the data center and enhances our lives in remarkable ways, we need primitives to guarantee our data is used responsibly. From the supply chain to the confidential VM and the application itself, this diverse AI landscape requires a robust security story. This talk will take apart an end-to-end AI use case that will benefit from confidential computing and look at the implications for the overall hardware and software architectures. We will present best practices for planning a deployment. Finally, we will discuss Arm's technological contributions and open-source ecosystem leadership to make end-to-end security and privacy the new reality.
The Intersection of Confidential Computing and Zero Trust
Zero Trust implementations have been challenging due to the complexity of applying end-to-end least privilege access across enterprises' digital infrastructure. While Secure Access Service Edge (SASE) and other access solutions have made great progress in verifying the access request, there is still a gap in verifying the network resource being accessed. Ironically, this lack of two-way verification forces many organizations to implicitly trust their computing resources – violating Zero Trust's central tenet of no implicit trust.
Confidential Computing, such as Intel Trust Services, provides workload verification as part of an end-to-end Zero Trust architecture – facilitating a "never trust, always verify" security strategy. Intel offers hardware-based isolation and encryption with attestation of computing confidentiality and integrity at scale. When used with SASE and other access solutions, Intel Confidential Computing solutions help ensure that every access attempt is thoroughly verified, end to end.
Join us to understand how Intel Trust Services, Zero Trust, and Confidential Computing intersect to create a robust security framework. Discover best practices for implementing these technologies and safeguarding your data against modern threats.
Download SlidesConfidential Accelerators for AI Workloads
The future of cloud computing is shifting to private, encrypted services where people can be confident that their workloads stay verifiably isolated and protected. Explore the latest advances in confidential computing and confidential accelerators, and how they can be used not only to preserve your data's confidentiality, but to free it for secure collaboration across teams, companies, and borders. You'll see demos focusing on providing holistic data confidentiality and AI/ML model protection. Welcome to the future of secure data freedom!
Download SlidesMaking Workplace Applications Confidential: Google Workspace, Microsoft Office, Business AI
Discover the power of confidential computing in fortifying workplace applications like Google Workspace and Microsoft Office, safeguarding sensitive information such as emails. During the talk, we will also explore how confidential computing can be leveraged to add confidential AI capabilities to these applications, ensuring unparalleled data security and privacy. Join us as we delve into the future of secure workplace solutions.
Auditable and Verifiable Transparency with Trusted Execution Environment
Transparency is one of the key components of building trust with the public, customers, and regulators. In discussions of trustworthy and responsible AI in particular, transparency is a keyword that every organization emphasizes. There are multiple ways to provide transparency for a platform, e.g., open-sourcing its code or inviting third parties to evaluate the safety of AI models. However, providing transparency with a complete chain of trust that individual members of the public can verify remains a challenging task. Confidential computing and its underlying technology, the Trusted Execution Environment (TEE), provide the confidentiality, integrity, and attestability properties that can unlock auditable and verifiable transparency for AI platforms. In this talk, we will discuss some missing pieces of current transparency efforts. Then, we will introduce a potential solution for auditable and verifiable transparency using TEEs. Lastly, we will discuss opportunities and challenges that call for collaborative effort from the community.
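As one concrete ingredient of such a design, consider an append-only, hash-chained audit log that a TEE-backed platform could expose to auditors. This is a minimal sketch of the general technique, with hypothetical event contents, not the talk's specific proposal.

```python
# Append-only audit log: each entry chains to the previous entry's hash, so
# any later tampering or reordering is detectable by an auditor.
import hashlib, json

def append(log: list[dict], event: dict) -> None:
    prev = log[-1]["hash"] if log else "0" * 64
    body = json.dumps({"prev": prev, "event": event}, sort_keys=True)
    log.append({"prev": prev, "event": event,
                "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(log: list[dict]) -> bool:
    prev = "0" * 64
    for entry in log:
        body = json.dumps({"prev": prev, "event": entry["event"]}, sort_keys=True)
        if entry["prev"] != prev or entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev = entry["hash"]
    return True

log: list[dict] = []
append(log, {"action": "model_evaluated", "model": "demo-v1"})
append(log, {"action": "weights_released", "digest": "00000000"})
print(verify(log))  # True; altering any recorded event makes this False
```

Attestation then closes the loop by proving that the code maintaining the log is the code auditors reviewed.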
Download SlidesEnterprise GenAI: Confidential and Nimble
As enterprises adopt Generative AI, it is crucial to secure these high-value models. Threats such as exposure of highly sensitive training data, model theft, and sensitive data leakage may become impediments to business transformation. Confidential computing offers a foundation for addressing AI security challenges with isolation and verification, while helping address concerns around privacy, provenance, and access control for models, parameters, and data. In this talk, you will learn how Intel® Xeon® platforms are well suited for nimble and confidential AI enabled by Intel Trusted Execution Environments, Intel® Advanced Matrix Extensions (Intel® AMX), and upcoming scaling to GPUs and other accelerators with Intel® TDX Connect.
Download SlidesSecuring AI in the Enterprise: Overcoming the Data Privacy Hurdle
This talk will share learnings from Opaque's enterprise customer engagements. We uncover the challenges that data and AI leaders face in moving projects from pilot to production as a result of data privacy issues, and illustrate how to overcome them.
Download SlidesSecuring GenAI Rollouts: Strategies for Effectively Deploying Trustworthy Generative AI Solutions
Deploying secure and trustworthy generative AI solutions is crucial for large-scale enterprise adoption of impactful Gen AI. This talk will explore essential strategies to safeguard generative AI applications. Attendees will learn how to identify and mitigate generative AI risks such as jailbreaking, harmful or toxic content generation, and bias using advanced red-teaming efforts. The talk will then cover mitigating those risks with alignment training and effective guardrails, and how to build and deploy robust generative AI systems with continuous risk assessment, real-time threat detection, and automated compliance enforcement. This session is designed for AI practitioners, security experts, and business leaders who aim to harness the potential of Gen AI while ensuring their applications remain secure and trustworthy.
Download SlidesTEEs in the Post-Cookie World of Advertising
With legacy, third-party cookie-based approaches going away, Adtechs and the broader advertising industry seek a future-proof solution. Let's talk about how TEEs can enable privacy-preserving Adtech capabilities. Can we have both customer privacy and a thriving advertising ecosystem? Join this session to find out!
Download SlidesTrusted Execution is Dead. And We Killed It.
Using cryptography and distributed systems, we eliminate trust assumptions for confidential computing.
Download SlidesEnabling Arm Confidential Compute Architecture (CCA) on Open Source
Over the past year, Linaro has been deeply involved in enabling the Arm Confidential Compute Architecture (CCA) on various open-source projects. Working with Arm and other companies in the community, we have focused on ensuring that firmware, operating systems, virtual machine monitors, and container management environments can work correctly with Arm CCA. Our goal has been to provide the basic support to run Trusted Execution Environments (TEEs), which are referred to as Realms within the CCA specification, and to ensure that we have a cohesive attestation verification story.
During this session, we will highlight our progress to date in many projects, including TrustedFirmware-A, EDK2, Linux, QEMU, Kata Containers, and Confidential Containers. Additionally, we will discuss the next steps in our roadmap.
Download SlidesGPU-Enhanced Confidential Computing—Unlocking New Privacy Preserving AI Scenarios in Azure
Microsoft Azure recently started the gated preview of Azure confidential VMs with NVIDIA H100 Tensor Core GPUs. This session will delve into the collaborative efforts between Azure and NVIDIA Confidential Computing to showcase how this cutting-edge technology paves the way for new confidential AI scenarios, including retrieval augmented generation (RAG) and federated learning (FL).
Download SlidesMoving Beyond VMs with CoCo on Arm
As confidential virtual machines, supported by new hardware extensions such as AMD SEV-SNP and Intel TDX, are becoming the prevalent paradigm for confidential computing, the Confidential Containers (CoCo) project provides a simple adoption path to the cloud-native world. The Arm Confidential Computing Architecture (Arm CCA) includes the Realm Management Extension (RME) hardware extensions in the new Armv9-A architecture and the required software stack to support confidential computing. This session will examine our work, in terms of protocol/software standardization and implementation, to integrate Arm CCA with CoCo. The main components impacted were the Attestation Agent in charge of collecting attestation evidence and the Attestation Service responsible for validating it. We developed a Rust crate providing Arm CCA attestation primitives to enable CoCo to gather, verify and appraise attestation evidence. This library also acts as an endorsement and reference values store abstraction. We will also present how CoCo's Attestation Service integrates with Veraison, an open-source attestation verifier, in a chained deployment topology. Finally, we will showcase an end-to-end attestation flow in a demo setup.
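To make the appraisal flow concrete, here is a hedged Python schematic of the gather–verify–appraise loop an Attestation Service performs. The HMAC "signature," claim names, and reference values are illustrative stand-ins (real deployments use the CCA attestation token format and verifiers such as Veraison), and this is not the Rust crate described above.

```python
# Schematic evidence appraisal: check evidence authenticity, then compare
# claims against reference values and emit an attestation result.
import hashlib, hmac

CPAK = b"platform attestation key"  # stand-in for the CCA platform key
REFERENCE_VALUES = {
    "realm_initial_measurement": hashlib.sha256(b"approved realm image").hexdigest(),
}

def gather_evidence() -> dict:
    """Attester side: produce signed claims (signature simulated with an HMAC)."""
    claims = {"realm_initial_measurement":
              hashlib.sha256(b"approved realm image").hexdigest()}
    sig = hmac.new(CPAK, repr(sorted(claims.items())).encode(),
                   hashlib.sha256).hexdigest()
    return {"claims": claims, "signature": sig}

def appraise(evidence: dict) -> str:
    """Verifier side: authenticate the evidence, then appraise its claims."""
    expected = hmac.new(CPAK, repr(sorted(evidence["claims"].items())).encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(evidence["signature"], expected):
        return "signature-invalid"
    for claim, reference in REFERENCE_VALUES.items():
        if evidence["claims"].get(claim) != reference:
            return f"rejecting: {claim} mismatch"
    return "affirming"

print(appraise(gather_evidence()))  # affirming
```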
Download SlidesLeveraging Formal Programming Techniques for Robust Confidential Computing
The advent of trusted execution environments (TEEs) has ushered in a new era of confidential computing. However, using such TEEs is still non-trivial. To use them well, especially while keeping the trusted computing base small, developers have to understand how to program with them, the exact security guarantees they provide, and how to correctly set them up and attest programs within them. This is especially cumbersome as many different TEEs now exist, differing substantially in their programming interfaces and usage, security guarantees, and performance characteristics, making it hard to port programs from one to another, let alone develop programs that use several TEEs together.
At Securified, we specialize in leveraging formal programming techniques in the context of confidential computing. In particular, we have devised static type systems based on information flow that allow us to automatically and precisely reason about non-interference-style guarantees at every step of a computation across a variety of TEEs, including software-based cryptographic primitives for confidentiality, and even to combine them within the same program. We believe that such techniques are key to more secure, efficient, and effective use of TEEs.
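For readers unfamiliar with the property, a standard Goguen–Meseguer-style statement of non-interference, the guarantee these type systems target, reads:

```latex
% A program $P$ is non-interferent if secret (high) inputs cannot influence
% what a low (public) observer sees:
\[
\forall s_1, s_2 .\quad s_1 =_L s_2 \;\Longrightarrow\; P(s_1) =_L P(s_2)
\]
% where $s_1 =_L s_2$ means the two states agree on all low-labelled parts.
```

An information-flow type system enforces this statically: if the program type-checks, no execution can leak a high-labelled value into a low-labelled output.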
Part of our work was described in an article published at ACM PLDI'24.
Download SlidesSecurely Collaborating Across Multiple Cloud Providers
Secure collaboration across teams and organizations powered by Trusted Execution Environments (TEEs) helps break down silos, unlock business value, and enable previously impractical use cases. However, the sensitive data or intellectual property of the collaborators is often distributed among multiple locations or cloud environments, requiring special care to interoperate. In this talk we'll show how we freed TEEs from data-location constraints and allowed them to work with data across multiple cloud service providers (CSPs).
Download SlidesPerformant Confidential Computing on NVIDIA GPUs
Confidential computing with NVIDIA gives every customer the ability to securely adopt performant Generative AI and LLMs. As NVIDIA continues the confidential computing journey that began with Hopper, join us to learn about NVIDIA Blackwell - the first TEE I/O-capable GPU in the industry. With key architectural enhancements to support inline encryption for PCIe and NVLink traffic, the Blackwell architecture delivers key performance improvements across single- and multi-GPU workloads in HW/SW-based TEEs without compromising security.
Download SlidesFalsifiability in Confidential Computing: A Philosophical Approach
In this session, we re-examine the nature of software and hardware claims through a philosophical lens, offering fresh insights into the realm of confidential computing.
Typically, a product claim describes a desired property of an artifact, such as "ensures data privacy during processing," "it produces no observable side effects," "it is rollback protected," and "it is a non-invertible function."
While the common discourse focuses on the "verifiability" of such claims (which, in most cases, is not technically achievable), we propose a shift towards evaluating their "falsifiability," drawing inspiration from Karl Popper's philosophy of science. This approach, where a claim's validity is tested by its potential to be proven false, has profound implications not only in philosophical terms but also in practical applications such as software supply chains, testing, formal verification, and enhancing transparency in remote attestation. At the heart of confidential computing lies the challenge of asserting claims about software and hardware systems outside direct user control. Our discussion aims to establish a consistent mental model and terminology for these assertions, highlighting how the principle of falsifiability can be the foundation for useful claims, both technical and informal.
Download SlidesAttested TLS and Formalization
Transport Layer Security (TLS) is a widely used protocol for secure channel establishment. However, it lacks any inherent mechanism for validating the security state of the workload and its platform. To address this, remote attestation can be integrated into TLS, yielding what is known as attested TLS. In this session, we present a survey of the three approaches to this integration, namely pre-handshake attestation, post-handshake attestation, and intra-handshake attestation. We also present our ongoing research on formal verification of the three approaches using the state-of-the-art symbolic security analysis tool ProVerif to provide high confidence for use in security-critical applications.
Key takeaways: Our preliminary analysis shows that pre-handshake attestation is potentially vulnerable to replay and relay attacks, while post-handshake attestation incurs high latency. Intra-handshake attestation, which offers high security via formal verification and low latency by avoiding an additional round trip, forms a good basis for further research and analysis.
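To illustrate the relay concern and its mitigation, here is a minimal sketch of attestation evidence bound to a specific channel; the exporter key, context label, and evidence format are stand-ins (a real implementation would derive the binding from a TLS exporter secret and carry hardware-rooted evidence).

```python
# Binding attestation evidence to a specific TLS channel: evidence relayed
# from another channel fails verification because the binding won't match.
import hashlib, hmac, secrets

exporter_key = secrets.token_bytes(32)  # stand-in for a TLS exporter secret

def produce_evidence(measurement: bytes, exporter_key: bytes) -> dict:
    binding = hmac.new(exporter_key, b"attested-tls", hashlib.sha256).digest()
    return {"measurement": measurement, "channel_binding": binding}

def verify_evidence(evidence: dict, expected: bytes, exporter_key: bytes) -> bool:
    binding = hmac.new(exporter_key, b"attested-tls", hashlib.sha256).digest()
    return (hmac.compare_digest(evidence["channel_binding"], binding)
            and evidence["measurement"] == expected)

measurement = hashlib.sha256(b"workload image").digest()
evidence = produce_evidence(measurement, exporter_key)
print(verify_evidence(evidence, measurement, exporter_key))             # True
print(verify_evidence(evidence, measurement, secrets.token_bytes(32)))  # False: relayed
```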
Technologies discussed in this session: Arm CCA.
Download SlidesTowards Trusted, Confidential AI: The Delegated Federated Learning Approach
Imagine a vast ocean of information. That's the amount of data we're generating today. Combined with breakthroughs in Deep Learning and Natural Language Processing, this data fuels the creation of incredibly powerful AI models like Large Language Models (LLMs). These powerful models are trained on massive amounts of information, often hundreds of gigabytes. But there's a catch. Training these models often involves sensitive data, such as patient health records, financial data, or intellectual property. This raises a critical question: how can we leverage this powerful AI technology without compromising data privacy? Federated Learning (FL) offers a promising solution. In FL, multiple devices – like phones or computers – collaborate to train a model. Data stays on the devices during the entire machine learning process, thus protecting the privacy of the user's data.
However, FL requires messages containing model updates to be exchanged between participating devices, which can create a security risk: malicious actors could potentially extract sensitive information from these messages. We need a way to keep them private and unreadable by anyone involved. Some solutions, such as Differential Privacy, Fully Homomorphic Encryption, and Multi-Party Computation, exist, but they can significantly hurt the model's performance.
Confidential computing (CC) protects sensitive information even while it is in use, in such a way that no one else can access it. Unfortunately, most consumer-grade devices do not yet support CC.
Imagine a way to let people use the power of Federated Learning and confidential computing even though they don't own a CC-enabled device. This is where Delegated Federated Learning comes into play: in this framework, users contribute their encrypted data while remote worker nodes perform the actual Federated Learning using Trusted Execution Environments. This way, workers perform the training while never obtaining access to either the training data or the shared model.
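The sketch below illustrates the delegated flow under loudly stated assumptions: Fernet stands in for the data-wrapping key that attestation would release only to a verified enclave, an ordinary function stands in for the TEE boundary, and the "training" is a toy slope estimate.

```python
# Delegated FL in miniature: users submit encrypted data; the worker can only
# decrypt and train inside the (simulated) enclave, never on the host.
from cryptography.fernet import Fernet
import json, statistics

data_key = Fernet.generate_key()   # released only after attestation (simulated)
fernet = Fernet(data_key)

# Users contribute encrypted points; plaintext never reaches the worker host.
contributions = [fernet.encrypt(json.dumps({"x": x, "y": 2 * x + 1}).encode())
                 for x in (1.0, 2.0, 3.0)]

def enclave_train(encrypted_batch, key: bytes) -> float:
    """Runs inside the TEE: decrypt, fit a toy slope estimate, return the update."""
    f = Fernet(key)
    points = [json.loads(f.decrypt(c)) for c in encrypted_batch]
    return statistics.mean((p["y"] - 1) / p["x"] for p in points)

print(enclave_train(contributions, data_key))  # ~2.0, without the host seeing the data
```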
In this presentation, we will show a first design and proof-of-concept of a Delegated Federated Learning framework based on the iExec decentralized cloud computing marketplace. The framework relies on blockchain technology and several TEE technologies (Intel SGX, TDX, and NVIDIA Hopper Confidential Compute) to implement an industry-grade Trusted and Confidential AI platform. We believe that Delegated Federated Learning has the potential to revolutionize the field of Confidential AI, opening doors to a future where anyone can contribute to powerful and privacy-preserving AI development.
Download SlidesTrustless Attestation Service for TEEs with Zero-Knowledge Proofs
In the remote attestation procedures of confidential computing, the verifier (often called the attestation service) plays a critical role: it verifies the evidence and produces attestation results. In current implementations, however, attestation services are designed and deployed as trusted components of the remote attestation architecture. That is to say, the relying party, who usually owns the secrets, has to trust these attestation services provided by cloud service providers – breaking TEE's promise of excluding cloud service providers from the trust boundary. Thus, a critical question emerges: can we exclude attestation services from the trust boundary? In this session, we will present a possible solution: a trustless attestation service for TEEs. By leveraging recent developments in zero-knowledge proofs, the proposed attestation service can be deployed in an untrusted environment outside the trust boundary while providing provable attestation results.
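Schematically, such a service would publish a proof for a relation of roughly this shape (our own formalization of the idea, not a specific scheme from the talk):

```latex
% The attestation service proves knowledge of evidence $w$ such that appraising
% it under the public policy yields the published result:
\[
\mathcal{R} \;=\; \bigl\{\, (x, w) \;:\; x = (\textsf{policy}, \textsf{result}),\;
\mathrm{Appraise}(w, \textsf{policy}) = \textsf{result} \,\bigr\}
\]
% Zero-knowledge ensures the relying party learns the result without seeing the
% evidence $w$, and soundness removes the need to trust the service itself.
```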
Download Slides