Poster Sessions
TEE'fying the Service Oriented Architecture
Jiankun Lu
Software Engineer
Modern applications are no longer monoliths; rather, they are often composed of a number of microservices. But what if you need to run your microservices in TEEs? How do you connect a web of TEEs securely to each other? In this talk, we discuss our approach to tackling the complex issue of inter-service trust and how to convey this web of trust to relying parties.
Confidential Compute on Arm… Where do I start?
Pareena Verma
Principal Solutions Architect
Arm
To speed up development timelines with RME-enabled hardware, Arm has equipped developers with a pre-silicon simulation platform. In this session, developers will get an overview of the reference integration software stack for Arm CCA and learn how to start developing confidential applications. Emphasis will be placed on modern design principles with ready-made container images for the software stack. We will explore multiple recipes providing a ready-to-use guide for developers. Join us for this session to discover the tools and resources, learn best practices, and accelerate your path to a confidential future.
Open More and Trust Less in Binary Verification Service -- Towards a Trusted App Store for Confidential Computing
Hongbo Chen
Research Assistant
Indiana University
In the contemporary landscape of AI and big data analytics, ensuring the secure and private exchange of data is crucial. Yet, safeguarding data-in-use presents a substantial challenge, underscoring the need for innovative solutions. The National Science Foundation's Center for Distributed Confidential Computing (CDCC) is at the forefront of addressing this issue, pioneering research to lay down the technical groundwork for scalable data protection across cloud and edge computing environments. This talk aims to introduce CDCC, a multifaceted initiative supported by the National Science Foundation, highlighting its multidisciplinary approach and multi-institutional collaboration. We will delve into the center's principal research avenues, such as the verification of Trusted Execution Environment (TEE) code, the integrity of computing nodes, the coherence of workflows, and the alignment with stakeholder expectations. These efforts converge to forge a resilient ecosystem for confidential computing, characterized by open certification for TEE code, adaptable policy models for data use, extensible data control at runtime, and rigorous adherence to data-use policies across workflows. Moreover, this presentation will showcase the practical impacts of these endeavors, particularly through the lens of enabling secure applications like confidential disease prognosis. A focal point of our discussion will be the development of a Trusted Application Store (TAPStore), beginning with the foundational step of creating a verifier tailored for TEE binaries. This initiative has led to the design of an innovative verification service that balances openness with trustworthiness. At the heart of this approach lies a strategic insight: certain tasks can be delegated to untrusted entities, while the corresponding validators are securely housed within the trusted computing base (TCB). The service can validate untrusted proofs generated for versatile policies.
Through a novel blockchain-based bounty task manager, it also utilizes crowdsourcing to remove trust in theorem provers. These synergistic techniques successfully ameliorate the TCB size burden associated with two procedures: binary analysis and theorem proving. Therefore, it allows untrusted parties to participate in these complex processes. Moreover, because the optimized TCB runs within trusted execution environments and the verification process is recorded on a blockchain, the public can audit the correctness of verification results. We also implement verification workflows for a software-based fault isolation policy and side-channel mitigation to demonstrate its efficacy. Through this exploration, attendees will gain insight into the cutting-edge research and developments emerging from the CDCC.
EscrowAI - A Confidential Computing Based ML Lifecycle Collaboration Platform
Alan Czeszynski
Vice President, Product
BeeKeeperAI, Inc.
EscrowAI is a patent-protected, privacy-enhancing ML/AI lifecycle collaboration platform enabling secure collaborations between algorithm developers and stewards of protected data. EscrowAI automates the use of Trusted Execution Environments (TEE) using confidential computing, along with other privacy-enhancing technologies, to ensure data sovereignty, security, and individual privacy, and to protect the algorithm's intellectual property throughout the ML lifecycle. The entire computing workflow in the TEE is fully automated, democratizing confidential computing for everyday data science tasks (training, test, validation, federation) across all industries that rely on protected data and IP security. EscrowAI enables data to remain within the data steward's secure environment, where it is made available for computation in a hardware-based TEE instance. Encrypted algorithms are brought into the TEE, and the algorithms and data are decrypted in the TEE instance's protected memory with user-created private keys stored in an HSM key vault. The computation is executed, and only a predetermined output is allowed out of the TEE instance after verification by EscrowAI. After the computation is complete, the TEE instance is decommissioned. A variety of confidential computing resources are available, including application enclaves, confidential containers, and confidential virtual machines. A confidential ledger is used to create immutable records of AI/ML artifacts from computing cycles to create trustworthy AI. As an example, BeeKeeperAI is working with the US Novartis Foundation's Beacon of Hope initiative, along with the leading Historically Black Colleges and Universities (HBCU) medical schools, to use EscrowAI to investigate racial bias in clinical algorithms. These data stewards have rich data sets, including social determinants of health, that cannot be de-identified or otherwise obscured.
EscrowAI enables HBCU medical schools to participate in AI/ML innovation using these data sets from typically under-represented populations. This critical research hopes to help resolve the lack of access to diverse data when training and validating algorithms for use within the clinical environment, which was identified as a critical shortcoming by the US Department of Health and Human Services' Office of Minority Health. EscrowAI is available in the Azure Marketplace.
Business Models for Attestation Services - Expert Panel
Mike Bursell
Executive Director
Confidential Computing Consortium
Attestation is a core part of Confidential Computing, and the lack of attestation services is one of the main barriers to entry for many potential customers and use cases of the various technologies. Technical work around attestation is proceeding quickly to reduce friction for users, but there is no consensus on the best business model(s) for services, other than an acknowledgement that the worst party to be providing them is the operator of the Confidential Computing resources. In this panel, moderated by Mike Bursell, Executive Director of the Confidential Computing Consortium, and comprising experts from the worlds of hardware, cloud computing and software, we will explore some of the possible business models and look at the pros and cons of each. Expect to hear discussion on: - ISV-provided attestation services - customer-managed attestation - sector-specific services - government-operated services. We expect a lively discussion and input from attendees to raise the profile of this vital examination of business models for the ecosystem. Panel members confirmed: - Marc Meunier, Director of Ecosystem Development (Arm) - Ofir Azoulay-Rozanes, Director of Product Management (Anjuna Security) - Ijlal Loutfi, Ubuntu Security tech - Product Lead (Canonical)
Extending Confidential Containers for enhanced security and privacy
Anbazhagan Mani
Distinguished Engineer
IBM
Leveraging the security advantages of the IBM Z and LinuxONE platform, we have developed a confidential computing platform (IBM Hyper Protect) that secures the end-to-end lifecycle of mission-critical workloads, including digital asset custody and blockchain-based workloads. Now we are extending these capabilities to protect container-based orchestration environments like Kubernetes. Confidential Containers leverage IBM Secure Execution for Linux with Kubernetes-based OpenShift to allow for the deployment of containers into secured pods, providing all the advantages of an excellent operational experience while also being designed to protect a tenant's containers from privileged user access. IBM is further adding zero-trust principles designed to increase security through zero-knowledge proofs and multi-party architecture. We focus on ease of use and not requiring any additional or external services to deploy, manage and protect confidential containers.
How Red Hat plans to facilitate the adoption of Confidential Computing using stateful vTPMs in Confidential Virtual Machines
Yash Mankad
Principal Software Engineer
Red Hat
Confidential Computing technologies have long been available in bare-metal environments, yet complexity around standards and implementation has hindered widespread adoption. Despite this, there is a strong ecosystem built around Trusted Computing Group standards like TPM and IMA. Red Hat's vision is to streamline the adoption of Confidential Computing for customers by leveraging emerging hardware features such as TDX and SEV-SNP through an easily accessible Confidential Virtual Machine (CVM) in a hybrid cloud environment. This discussion will focus on how Red Hat is using Virtualization technologies and stateful virtual Trusted Platform Modules (vTPMs) in CVMs to enable advanced features like Runtime Integrity Measurements and an enhanced Virtual Firmware known as "Paravisor." Join this session to learn how Red Hat's exploration of these innovative technologies is shaping the confidential computing landscape.
Enabling secure multiparty data collaboration via Confidential Clean Room
Mayank Thapliyal
Product Lead
Microsoft Azure
In a world of data-driven decision-making, safeguarding confidential information is critical. Confidential Data Clean Rooms emerge as a pioneering solution, providing a secure platform for collaboration without compromising data privacy. This session aims to explain the concept of Confidential Clean Rooms and highlight their benefits through a detailed use case. The session will provide an overview of Confidential Clean Rooms, outlining the need for them and the key features that ensure data confidentiality, integrity, and availability. Attendees will gain insights into the workings of Confidential Clean Rooms for cross-organizational collaboration, where sensitive data can be shared for analysis without being directly exposed. We will then delve deeper into a Confidential ML modeling use case which will explain the applicability of Confidential Clean Rooms. The use case will highlight how Clean Rooms enable seamless integration of diverse data sources to train a model and generate valuable insights, all within the confines of a confidential and controlled environment. At the end of the session, we will discuss possible applications for Confidential Data Clean Rooms across verticals.
CoVE: Confidential computing for heterogeneous RISC-V platforms
Ravi Sahita
Principal MTS, Security Architecture
Rivos Inc.
Multi-tenant computing platforms are typically composed of several software and hardware components including programmable logic, platform firmware, operating systems, virtualization monitors, and tenant workloads (typically operating in a virtual machine, container, or application context). In this complex TCB environment, Confidential Computing is gaining traction in large-scale commercial deployments, with the majority of hyper-scalers adopting this computing model. As computing becomes increasingly heterogeneous with a mix of general-purpose, fixed-function and accelerator engines, new platform components, operators and software frameworks have to be reasoned about to manage the impact on the Trusted Computing Base (TCB) for workloads. Confidential computing, with its strong requirements around HW-backed attestation, presents a good stepping-stone towards providing a quantifiable TCB for such a complex computing model. The RISC-V architecture especially presents a strong foundation for meeting the requirements of both Heterogeneous Computing and Confidential Computing in a clean-slate manner with an open ISA. This session describes the reference architecture and discusses RISC-V ISA, non-ISA (SW/HW) and system-on-chip (SoC) infrastructure capabilities that are currently in definition as part of the Confidential Virtual-machine Extension for RISC-V (referred to as CoVE). The session also covers common standards that RISC-V CoVE enables and supports to provide interoperability across RISC-V and non-RISC-V implementations. The goals of this session are to share status and ideas with the confidential computing community, to get feedback on the ISA and non-ISA specifications in development, and to identify common building-block projects that can enhance the open-source development and deliverables of the RISC-V task groups working on confidential computing.
SecurAI - Enhancing Large Language Model Inference with Confidential Computing
Jordan Brandt
CEO and Cofounder
Inpher, Inc.
In the rapidly evolving landscape of Generative Artificial Intelligence (AI), organizations are exploring applications to enhance productivity and unlock substantial business benefits. However, utilizing AI for applications like code development, content creation, anomaly detection, automation, healthcare analytics or personalization often involves handling sensitive data and intellectual property. The visibility of this data, specifically through prompts and completions shared with a model service provider, raises serious governance concerns, often hindering organizations from fully leveraging AI capabilities. Inpher SecurAI ensures that both the prompt and the completion are secured during model inference, thus enabling large organizations to solve the sensitive data challenge. Our approach utilizes TEEs that protect virtual machines (VMs) from various threats. We employ cutting-edge techniques like memory encryption, secure boot, secure nested paging, secure encapsulation and remote attestation. Hosted on Microsoft Azure, our service utilizes Azure Confidential Computing, today based on AMD SEV-SNP technology, reinforcing our commitment to robust security and a roadmap to continue pushing the edge of security and utility. With SecurAI, bringing ChatGPT and other LLMs into organizations can be achieved without compromising privacy. Employing TEEs, advanced encryption, and remote attestation creates a multi-layered defense that maintains integrity even in the face of potential threats. For organizations seeking to harness the predictive power of AI in a manner that aligns with ethical and regulatory demands, Inpher SecurAI provides an innovative approach to navigating the complexities of AI security.
Confidential Neural Computing - Hosting Generative AI Model Workloads in a Trusted Execution Environment
Joe Woodworth
Software Engineer
As generative AI models grow more and more capable, products increasingly want to leverage these models to provide personalized generative experiences for their users. This personalization relies on fine-tuning and running these models with sensitive user data. The sensitivity of this user data motivates the need to train and run these models in a privacy-safe way that provides strong safety guarantees to the user, and earns user trust. The Confidential Neural Computing project builds an ML framework focused on enabling generative AI training and inference in secure enclaves. In this talk, we give an overview of some of the core components of the Confidential Neural Computing framework, explain how the framework leverages current CPU and GPU confidential computing technologies, and share results from our current and ongoing areas of investigation.
Unlocking Unprecedented Insights -- Confidential Data Collaboration in Finance
Frederic Lebeau
Co-founder and CEO
Datavillage
Datavillage aims to assist organizations in solving challenges that cannot be addressed within their data silos, one of the main hurdles that currently limit the true potential of data analytics. Our vision is to empower a collaborative future by tackling challenges beyond organizations' data silos. For instance, only 1% of money laundering activities are recovered today; by working together, financial institutions can boost their detection rates by 50%. Similarly, with 80% of total advertising revenue dominated by global players, local players have the opportunity to jointly capture advertising revenue from those global players. Huge amounts of data are collected, duplicated and kept locked away in organizational silos. Access to them remains a complex issue, legally and technically. The current means of data sharing are limited: they favor the copying of data and ad hoc anonymization mechanisms, which raise fears of data breaches, risks of non-compliance or competitive disadvantages, and often result in data not being shared at all. Our Confidential Data Collaboration Platform revolutionizes data collaboration by seamlessly connecting information across organizations. With our plug-and-play technology, organizations can effortlessly link data without uploads, ensuring a high level of security. Users can then create personalized collaboration spaces powered by confidential computing technology, applying custom algorithms, a flexibility not limited to predefined models. This unique approach, combined with the platform's strict compliance with EU regulations, guarantees that data remains under control, used only for defined purposes, and always in accordance with the highest standards of confidentiality, security, and trust. In finance, relying on your own data is no longer enough. You can now use data collectively to better detect fraudsters and money launderers, evaluate your partners, or interact with your customers without disclosing sensitive information.
Utilizing a certificate-based confidential computing attestation mechanism, the solution seamlessly connects to any data source while ensuring a high level of security and confidentiality. This approach facilitates a quick and easy integration process, allowing customers to connect their existing systems more efficiently, following a plug-and-play model. Our technology is fully API-based, offering unparalleled flexibility. This design choice allows our customers to seamlessly integrate their own brands with our product, reflecting our commitment to customization and adaptability in the ever-evolving business landscape. Key takeaways: concrete use cases (finance, energy, media); customer experience and confidential computing; confidential computing across enterprise landscapes.
Confidential AI and Intel Trust Authority
Raghuram Yeluri
Senior Principal Engineer, Lead Security Architect
Intel Corporation
Attestation is the ground truth of Confidential AI. As a model owner and data owner, you want very precise clarity on the integrity and confidentiality of the infrastructure, the workloads, and the environment in which the models and data are accessed and operated on. Attestation provides that clarity and definitive proof. This proof of attestation has to be acquired in a very simple and straightforward way, with near-zero changes to your applications/workloads and workflows. This is where Intel Trust Authority (ITA) comes in. ITA has a singular goal: make attestation boring and a non-event, so data scientists and data owners can focus on what is important to them, with the assurance that their models and data are protected. In this session we will share how ITA simplifies getting attestation of the CPU TEEs and/or the attached GPU TEEs, with 1 (that is right!) call to the ITA Client CLI. The ITA Client will manage the details of communicating with the TEE hardware (both CPUs and GPUs) for evidence, transmitting the evidence to the ITA Service, and responding with a signed composite attestation token. The application can use this token to make decisions from there, e.g., provide the token as proof to a Relying Party to get some secrets, or decide to accelerate training using a trusted GPU. You will learn the details about ITA, the ITA Client (SDK), and how to use it with Intel TDX and NVIDIA H100 TEEs. There will also be a discussion of the Intel Trust Authority roadmap, to give the audience a view of what to expect for the rest of 2024.
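The token flow described above (collect evidence in the TEE, exchange it for a signed composite token, present the token to a relying party) can be sketched as follows. This is a minimal Python illustration with made-up claim names and an HMAC stand-in for the signature; the real ITA service issues asymmetrically signed JWTs, and the ITA Client handles evidence collection against the actual hardware.

```python
import base64
import hashlib
import hmac
import json

SHARED_KEY = b"demo-verifier-key"  # stand-in; real tokens use asymmetric JWT signatures

def issue_token(claims: dict) -> str:
    """Simulate the attestation service signing a composite token."""
    body = base64.urlsafe_b64encode(json.dumps(claims, sort_keys=True).encode())
    sig = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return body.decode() + "." + sig

def relying_party_accepts(token: str) -> bool:
    """Verify the signature, then check the TEE claims before releasing secrets."""
    body_b64, sig = token.rsplit(".", 1)
    expected = hmac.new(SHARED_KEY, body_b64.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # tampered or forged token
    claims = json.loads(base64.urlsafe_b64decode(body_b64))
    # Example policy: require a TDX CPU TEE and an attested attached GPU
    return claims.get("tee_type") == "TDX" and claims.get("gpu_attested") is True

token = issue_token({"tee_type": "TDX", "gpu_attested": True})
print(relying_party_accepts(token))  # True
```

The relying party never talks to the TEE directly; it trusts the attestation service's signature over the composite claims, which is what lets one token cover both the CPU and GPU TEEs.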
Gramine-TDX: A Lightweight OS Kernel for Confidential Virtual Machines
Dmitrii Kuvaiskii
Research Scientist
Intel Labs
Confidential Virtual Machines (CVMs) have emerged to protect data in use by performing computations in hardware-based Trusted Execution Environments (TEEs). Typically, a legacy feature-rich VM is re-packed into an encrypted CVM, such that the entire VM is protected from privileged insider attackers, running cloud-native workloads in a secure and isolated fashion. However, this primary usage of CVMs is not suitable for small, specialized, security-critical workloads: legacy VMs with their conventional OS distributions and a plethora of applications, tools, and files result in unnecessarily bloated CVMs that expose a large set of attack vectors. Moreover, CVMs are commonly based on the Linux kernel, which was never intended for Confidential Computing (CC) and thus does not protect against certain attack vectors. We propose the Gramine-TDX OS kernel to execute slim, single-purpose, security-first unmodified Linux workloads with a minimal attack surface, using Intel TDX. We base our work on Gramine-SGX, a battle-tested TEE runtime tailored for CC usages. This allows us to build on the existing protections of Gramine-SGX and focus only on the CVM-specific attack surface. In comparison to a typical Linux kernel, the Gramine-TDX codebase is roughly 50x smaller in binary size and has a significantly smaller attack surface, which makes it a perfect match for emerging cloud-native confidential-computing workloads.
Secure ML Deployment: Using Confidential Containers for IP Protection and Data Privacy
Prashanth Harshangi
Founder
Enkrypt AI
Tanay Baswa, Pradipta Banerjee, Sahil Agarwal, Prashanth Harshangi. Machine Learning (ML) providers are increasingly concerned with protecting their intellectual property (IP) and the confidentiality of their model weights. This concern naturally gravitates towards a cloud-based ML as a Service (MLaaS) solution. Concurrently, there is a significant demand from enterprises to leverage these ML services without exposing their confidential data to external infrastructures or untrusted parties. A growing number of customers are insisting on deploying ML services on-premises to maintain control over data flow. This scenario presents an opportunity to employ confidential computing solutions, utilizing the Confidential Containers (CoCo) project to securely deliver ML models and services directly to customers' premises. Such an approach ensures the protection of the ML provider's IP and model integrity while enabling customers to run ML services within their infrastructure securely. By encrypting data sent to the ML service using asymmetric cryptography, customers are assured that their sensitive information is only decrypted in a secure environment, invisible even to themselves while it is in use. Key Insights and Technologies: Confidential Containers: At the forefront of this solution, the CoCo project serves as the pivotal technology for securely delivering ML models. Using CoCo methods allows us to create a trusted execution environment on the customer's premises to run encrypted ML containers, effectively safeguarding the model's IP alongside the customer's data. Workload Container Encryption: We allow ML providers to encrypt their containers using the CoCo key-provider. This encrypted workload can only be decrypted inside a TEE by a corresponding CoCo Key Broker Service.
This simplifies the storage and delivery process for ML providers by guaranteeing that their workload will only be decrypted and executed within an attested and confidential environment, thereby guaranteeing the protection of their IP. Simplified Deployment Process: The deployment of confidential containers is made accessible through streamlined scripts. This ease of deployment ensures that both ML providers and customers can effortlessly set up the confidential computing environment, facilitating the secure execution of ML workloads without requiring extensive technical expertise. Seamless Deployment with End-to-End Privacy: Our solution offers a streamlined deployment process that handles all aspects of remote attestation and verification automatically. Additionally, by hosting a verifier container that also facilitates the encryption of data entering and leaving the TEE, we can ensure end-to-end privacy. Key Takeaways: Protecting IP and Data: The use of confidential containers and encryption technologies ensures that ML models are securely delivered and executed, protecting both the provider's IP and the customer's data. Secure Deployment on Customer Premises: This solution enables the secure deployment of ML services on customers' premises, addressing concerns about data privacy and control over data flow. Ease of Use: The streamlined process for encryption and deployment, facilitated by scripts and comprehensive guidelines, makes it easier for both providers and customers to implement confidential computing solutions. Enhanced Security with Encryption and Isolation: The combination of encrypted memory execution and isolated runtime environments significantly boosts the security of executing ML workloads, minimizing the risks associated with data leaks or unauthorized access. Conclusion For ML providers wanting to distribute their models without compromising on IP safety and data privacy, our approach offers a seamless and efficient solution. 
For customers, it provides a way to utilize advanced ML services while maintaining control over their confidential data, ensuring that both insights and integrity are preserved. The deployment of confidential encrypted containers represents a forward-thinking solution to the challenges of data privacy and IP protection in the cloud-based MLaaS landscape.
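The core gate in the workflow above, a key broker that releases the container decryption key only to an attested environment, can be sketched in a few lines. This is a conceptual Python illustration, not the actual CoCo Key Broker Service protocol; the constant names and the direct hash comparison are simplifications of real attestation evidence appraisal.

```python
import hashlib
import secrets
from typing import Optional

# Hypothetical reference values the ML provider registers with the key broker.
WORKLOAD_IMAGE = b"encrypted-ml-container-bytes"
REFERENCE_MEASUREMENT = hashlib.sha256(WORKLOAD_IMAGE).hexdigest()
WORKLOAD_KEY = secrets.token_bytes(32)  # the key that decrypts the container

def key_broker_release(evidence_measurement: str) -> Optional[bytes]:
    """Release the decryption key only when the attested measurement of the
    loaded workload matches the provider's registered reference value."""
    if evidence_measurement == REFERENCE_MEASUREMENT:
        return WORKLOAD_KEY
    return None  # unattested or modified environment: key withheld

# The TEE presents the measurement of the workload it actually loaded.
good = key_broker_release(hashlib.sha256(WORKLOAD_IMAGE).hexdigest())
bad = key_broker_release(hashlib.sha256(b"tampered-image").hexdigest())
print(good == WORKLOAD_KEY, bad is None)
```

Because the key never leaves the broker unless the measurement matches, a tampered container or an unattested host simply cannot decrypt the provider's model, which is what protects the IP on customer premises.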
Zero-Trust on the cloud with Confidential Computing and WebAssembly.
Etienne Bosse
Head of Platform
Secretarium Ltd
Klave is a cloud platform providing privacy-enabling capabilities and a zero-trust architecture for WebAssembly-based applications. It enables all developers to create stateful, attestable, tamper-proof applications protected by secure hardware and cryptography at scale, in the cloud, without any infrastructure setup or maintenance required. It enables organisations to maintain data and code privacy, integrity and auditability by design whilst facilitating use cases pertaining to the management of sensitive data, protection of intellectual property and secure data collaboration. Technologies to be discussed: - Confidential Computing (TEEs, Intel SGX) - Distributed Ledger Technology (DLT) - WebAssembly (WASM) Key Takeaways: - Enabling a Zero-trust architecture with TEEs and DLT - Creating a PaaS/Serverless cloud platform - Micro-segmentation and deployment of Wasm apps within enclaves - Enabling accessibility and adoption for all developers through Wasm Information on the platform architecture to be discussed: Our open platform is designed to provide solid cryptographic evidence at all levels in a world where trust is based on reputation and accountability is hard to come by, delivering integrity and privacy at every step. It provides a scalable and resilient infrastructure where individuals and businesses can build and run their applications without fear of data tampering or theft by platform providers or other parties. Klave also provides unparalleled auditability, enabling you to ensure the integrity of your data and code and maintain unassailable records for audits and adherence to regulations. This is achieved by combining Confidential Computing and Distributed Ledger Technology (DLT). Confidential Computing ensures that data remains protected even while in use, protecting data and code at their most vulnerable.
This helps organisations to control the exposure of their data to third parties and to demonstrate transparency regarding data management, usage, and protection. The DLT ensures data integrity and auditability, streamlining the process of evidence generation and gathering to comply with regulations, detect threats or mitigate legal challenges. Klave utilises hardware-based secure enclaves to create trusted execution environments where sensitive data remains encrypted and isolated from both Klave and the underlying system. This protects against unauthorised access and mitigates the risk of insider threats and malicious attacks. Inside these enclaves, a Zero-Trust runtime uses certificate cryptography mechanisms to continuously verify the identity of every platform component, application, user and device, reducing the risk of unauthorised access and insider threats. Memory sandboxing techniques are in place to create isolated WebAssembly execution contexts, ensuring that access to data is strictly controlled during execution. Additionally, by anchoring every applicative transaction deterministically onto a distributed ledger, Klave ensures integrity and immutability of both data and process, thus enhancing trust and accountability in a code-is-law fashion while defending against destructive attacks such as ransomware. Klave empowers organisations to unlock their data's full potential in several ways, including securely deriving insights, facilitating sensitive data collaboration and collective intelligence, helping with evidence gathering for compliance and conformity, and bolstering internal and external threat protection measures.
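The anchoring of every applicative transaction onto a ledger, as described above, amounts to chaining each record to the hash of its predecessor so that any later tampering is detectable. The following is a minimal Python sketch of that general hash-chain idea, not Klave's actual DLT implementation:

```python
import hashlib
import json

def anchor(ledger: list, transaction: dict) -> None:
    """Append a transaction, chaining it to the hash of the previous entry."""
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    record = {"tx": transaction, "prev": prev_hash}
    # Hash the record body (tx + prev link) with a canonical serialization.
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    ledger.append(record)

def verify(ledger: list) -> bool:
    """Recompute every link; any tampering breaks the chain."""
    prev = "0" * 64
    for record in ledger:
        body = {"tx": record["tx"], "prev": record["prev"]}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if record["prev"] != prev or record["hash"] != expected:
            return False
        prev = record["hash"]
    return True

ledger = []
anchor(ledger, {"op": "store", "data_id": "d1"})
anchor(ledger, {"op": "compute", "app": "collab-1"})
print(verify(ledger))  # True
ledger[0]["tx"]["op"] = "delete"  # tampering any earlier record...
print(verify(ledger))  # False: ...is detectable downstream
```

Distributing such a chain across independent nodes is what turns detectability into the immutability and auditability guarantees the platform relies on.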
GuaranTEE: Towards Attestable and private ML with CCA
Sandra Siby
Research Associate
Imperial College London
Machine-learning (ML) models are increasingly being deployed on edge devices to provide a variety of services. However, their deployment is accompanied by challenges in model privacy and auditability. Model providers want to ensure that (i) their proprietary models are not exposed to third parties; and (ii) they can obtain attestations that their genuine models are operating on edge devices in accordance with the service agreement with the user. Existing measures to address these challenges have been hindered by issues such as high overheads and limited capability (processing/secure memory) on edge devices. In this work, we propose GuaranTEE, a framework to provide attestable private machine learning on the edge. GuaranTEE uses the Confidential Computing Architecture (CCA), Arm's latest architectural extension that allows for the creation and deployment of dynamic Trusted Execution Environments (TEEs) within which models can be executed. We evaluate CCA's feasibility to deploy ML models by developing, evaluating, and openly releasing a prototype. We also suggest improvements to CCA to facilitate its use in protecting the entire ML deployment pipeline on edge devices.
Confidential computing beyond the CPU: the case of Data Processing Units (DPU)
Marc Meunier
Director of Ecosystem Development
Arm
Confidential computing, which protects data in use, is a hot topic within the semiconductor industry at large. Hardware-based enclaves such as AMD SEV, Intel SGX/TDX, and Arm CCA are now considered the most pragmatic approach to it. So far, the focus has been on implementing confidential computing in CPUs. Without extending confidential computing to other classes of devices, how could an AI business outsource the training of its models to a remote cloud platform (a complex system combining AI accelerators, storage devices, network interfaces, and more) and have a strong guarantee that its data is protected from both the platform and other customers sharing the same resources? This limitation is detrimental to the wider adoption of confidential computing, particularly in the datacenter.
We present a case study of confidential computing extended to Data Processing Units (DPUs), devices optimized for high-throughput I/O and used by hyperscalers to offload tasks such as TCP/TLS, over-the-network storage operations and asymmetric cryptography. We will go through what makes the intersection of confidential computing and DPUs so interesting, and will present our ongoing work exploring new use cases such as vicarious attestation and confidential TLS offload. Technologies discussed during the talk: Confidential VMs, DPUs, TLS, Virtual Private Cloud, remote attestation, PCIe/TDISP.
Confidential computing with Always Encrypted using enclaves
Pieter Vanhove
Program Manager
Microsoft
Imagine a database system that can perform computations on sensitive data without ever having access to the data in plaintext. With such confidential computing capabilities, you could protect your sensitive data from powerful, high-privileged but unauthorized users, including cloud operators and DBAs, while preserving the database system's processing power. With Always Encrypted using secure enclave technologies, this powerful vision has become a reality. Join us for this session to learn about this game-changing technology. We will demonstrate the main benefits of Always Encrypted with secure enclaves, discuss best practices for configuring the feature, and address the latest Always Encrypted investments in Azure SQL and other Azure data services.
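As a rough illustration of the idea (not the actual Always Encrypted protocol, which uses AEAD column encryption keys managed by the client driver), the following hypothetical Python sketch shows why deterministic encryption lets a server match values for equality without ever seeing plaintext; enclaves extend this to richer operations such as range comparisons and pattern matching:

```python
import hashlib
import hmac

# Hypothetical sketch: HMAC stands in for a deterministic AEAD cipher.
# The key is held only by the client; the server stores and compares
# ciphertexts without ever learning the plaintext.
CLIENT_KEY = b"column-encryption-key"  # never leaves the client

def det_encrypt(value: str) -> bytes:
    # Deterministic: the same plaintext always yields the same
    # ciphertext, so the server can test equality on ciphertexts.
    return hmac.new(CLIENT_KEY, value.encode(), hashlib.sha256).digest()

# Client encrypts before sending; server indexes only ciphertext.
stored = {det_encrypt("alice"): "row-1", det_encrypt("bob"): "row-2"}
query = det_encrypt("bob")   # WHERE name = @p, encrypted client-side
print(stored.get(query))     # prints "row-2": the server matched blind
```

The trade-off is that deterministic encryption leaks value equality; enclave-based evaluation avoids even that by decrypting only inside attested trusted memory.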
Auditable and Verifiable Transparency with Trusted Execution Environment
Mingshen Sun
Research Scientist
TikTok Privacy Innovation Lab
Transparency is one of the key components of building trust with the public, customers and regulators. Particularly when discussing trustworthy and responsible AI, transparency is a keyword that every organization emphasizes. There are multiple ways to provide transparency for a platform, e.g., open-sourcing the platform's code to the public, or inviting third parties to evaluate the safety of AI models. However, providing transparency with a complete chain of trust that can be verified by individual members of the public is a challenging task. Confidential Computing and its underlying technology, Trusted Execution Environments, provide the confidentiality, integrity and attestability properties that can unlock auditable and verifiable transparency for AI platforms. In this talk, we will discuss some missing pieces of current transparency efforts. Then, we will introduce a potential solution for auditable and verifiable transparency using Trusted Execution Environments. Lastly, we will discuss potential opportunities and challenges that need collaborative efforts from the community.
Fulfilling the Promise of Confidential Computing with Direct Attestation on Intel TDX
Alex Wu
Software Engineer
As the Confidential Computing Wikipedia page states: "In addition to trusted execution environments, remote cryptographic attestation is an essential part of confidential computing." At the same time, attestation can be rooted in software, in hardware, or in a combination of the two. Intel TDX provides the ability to achieve fully hardware-rooted attestation using Run-Time Measurement Registers (RTMRs). This talk will cover RTMRs, their properties, and how to achieve attestation without the cloud provider in the TCB. We will also introduce new libraries for working with RTMRs in a manner similar to TPM PCRs.
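The PCR-like behaviour of RTMRs comes down to extend-only hash chaining. As a minimal illustration (a toy model, not the TDX module interface; real RTMRs are SHA-384 registers maintained by the TDX module), the semantics can be sketched as:

```python
import hashlib

class MeasurementRegister:
    """Toy extend-only measurement register (RTMR/PCR-style)."""

    def __init__(self):
        self.value = b"\x00" * 48  # SHA-384-sized register, starts zeroed

    def extend(self, event: bytes) -> bytes:
        # new_value = SHA384(old_value || SHA384(event))
        digest = hashlib.sha384(event).digest()
        self.value = hashlib.sha384(self.value + digest).digest()
        return self.value

rtmr = MeasurementRegister()
rtmr.extend(b"bootloader")
rtmr.extend(b"kernel")
# The final value depends on every event and on their order, so a
# verifier can replay an expected event log and compare the result
# against the register value reported in the hardware-signed quote.
```

Because the register can only be extended, never set, an attacker who loads different code cannot forge a register value that matches the verifier's replay.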
Islet: Empowering End-User Devices with Confidential Computing
Bokdeuk Jeong
Principal Engineer
Samsung Electronics
The primary focus of confidential computing has been on protecting large amounts of data on the server side. In our opinion, safeguarding user data and privacy on end-user devices is equally important, as they are often the initial point of data collection. In this talk, we share the potential benefits of employing confidential computing on these devices, presenting prospective use cases, including AI. Our contribution to this field is an open-source project called Islet, which aims to create a secure on-device confidential computing platform on Arm CCA. We introduce Islet with an overview of its unique features and our approach to designing and verifying the platform, leveraging the safety properties of the Rust programming language.
Optimizing Data Utility with Differential Privacy
Rishi Balakrishnan
Software Engineer
Oasis Labs
Ensuring privacy together with confidentiality is a key requirement when it comes to using and sharing highly regulated, sensitive personal data. In this talk, we present differential privacy via query rewriting for protecting individual privacy in aggregate statistics for analytics. We will show how it obviates the need for traditional data anonymization and protects against re-identification risks, balancing data utility with privacy. We will then present Oasis PrivateSQL, powered by differential privacy, showcasing its seamless integration into data workflows for data analysis and collaboration among multiple parties. Join us to unlock data's potential while safeguarding privacy.
Building an Attestation Verification Service using Project Veraison open source components
Thomas Fossati
Principal Engineer
Arm Ltd
Attestation is a core requirement for confidential computing platforms and workloads. Scaling a solution to support multiple TEE architectures and deployment use cases presents challenges. Project Veraison provides the building blocks required for attestation verification and the collection of supply-chain data. This session will cover the features of an attestation service and the corresponding components from Project Veraison. We will then present a case study of how these components are used in Oracle's implementation.
Private Data Vault with Confidential Computing
Sivanarayana Gaddam
Lead Software Engineer
Cohesity
Customers use enterprise data protection platforms to protect themselves from data loss and cyber incidents, and expect these platforms to guarantee data privacy and availability at all times. Enterprises typically follow a 3-2-1 backup strategy when deploying data protection platforms (e.g., Cohesity) for backup and off-site copies. Security-sensitive customers mitigate data privacy issues by deploying data protection appliances in an isolated, customer-controlled environment; for the off-site copy, however, they must trust the data protection service provider with data privacy. Customers are increasingly challenging these trust assumptions and demanding private off-site storage solutions. To meet these emerging demands, we propose using confidential computing (e.g., Intel SGX) to guarantee off-site storage privacy. In this session, we present how confidential computing primitives help achieve data privacy guarantees even if an attacker compromises the off-site storage provider.
An Architecture Reference for Confidential Decentralized Identity
Stefano Tempesta
Chief Technology Officer
Aetlas
The identity and credentials economy has been relatively stable for a very long period. For hundreds of years, it has been a quasi-institutional monopoly centered around governments and universities, with relatively little change and innovation. Recently, however, this industry has begun to be disrupted by the globalization of higher education and labor markets, requiring credentials to work at a much larger scale. The increasing need for access to digital services that require multiple forms of digital identity (digital wallets, online banking, social media accounts, etc.) carries significant risk if identity is not thoughtfully designed and carefully implemented. A new form of identity is needed, one that weaves together technologies and standards to deliver key identity attributes, such as self-ownership and censorship resistance, that are difficult to achieve with existing systems. Cryptographically secure, decentralized identity systems could provide greater privacy protection for users, while also allowing for portability and verifiability. This session describes an architecture reference for Decentralized Identifiers (DIDs) that fits the self-sovereign identity (SSI) framework's issuer-holder-verifier process. The solution architecture demonstrates how to make this "trust triangle" trustworthy and confidential. Confidential Identity Hubs, hosted on confidential computing infrastructure in multiple "hubs" around the world, create the distributed network needed to store and secure identity elements at scale. Security of sensitive data is ensured by the redundancy of the distributed network, and governance remains decentralized by removing sole ownership of the provided infrastructure.