Attested TLS is an essential ingredient of every confidential computing solution. In this talk, we explore the identity crisis that results from ill-defined notions of identity for attested TLS in confidential computing. We present a formal approach, based on the state-of-the-art symbolic security analysis tool ProVerif, for comparing the security strengths of attested TLS protocols. We also present a couple of vulnerabilities in two state-of-the-art protocols, namely Interoperable RA-TLS [1] and the proposed standard draft-fossati-tls-attestation [2].
Further details: [3]
Key takeaways: Combining web PKI with remote attestation can provide stronger security than replacing one with the other.
Technologies discussed in this talk: Remote attestation, TLS, Attested TLS
[1] https://github.com/CCC-Attestation/interoperable-ra-tls
[2] https://datatracker.ietf.org/doc/draft-fossati-tls-attestation/
[3] https://mailarchive.ietf.org/arch/msg/tls/Jx_yPoYWMIKaqXmPsytKZBDq23o/
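To make the key takeaway concrete, here is a minimal Python sketch of a relying party that accepts an attested-TLS peer only when both checks pass: the web-PKI certificate chain and the TEE attestation evidence, with the evidence bound to the session's TLS key. The function names, evidence fields, and measurement values are hypothetical placeholders, not the formats used by [1] or [2].

```python
"""Illustrative sketch only: combine web PKI with remote attestation instead of
replacing one with the other. All helpers and field names are hypothetical."""
import hashlib

def verify_cert_chain(cert_chain: list[bytes]) -> bool:
    # Placeholder for real X.509 path validation against trusted web-PKI roots.
    return bool(cert_chain)

def appraise_evidence(evidence: dict) -> bool:
    # Placeholder for a verifier that checks the TEE quote signature and measurements.
    return evidence.get("measurement") == "expected-measurement"

def accept_attested_tls(cert_chain: list[bytes], evidence: dict, tls_pubkey: bytes) -> bool:
    if not verify_cert_chain(cert_chain):   # web PKI: who the peer claims to be
        return False
    if not appraise_evidence(evidence):     # attestation: what the peer is running
        return False
    # Channel binding: the attested key must be the TLS key of this very session,
    # so evidence cannot be relayed from another (genuine but unrelated) TEE.
    return evidence.get("key_binding") == hashlib.sha256(tls_pubkey).hexdigest()
```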
In this session, we introduce our framework designed to address these issues. Our solution ensures the confidentiality and integrity of the intellectual property (input datasets, ML code, and models) while providing mechanisms to measure privacy leaks from the trained models and balance privacy with utility.
Real-time credit card fraud detection is a great use case. At the Hot Chips 2024 conference, we introduced more enhancements to the on-chip AI accelerator for zNext and a PCIe-based AI accelerator (Spyre) for running encoder models. Within the IBM Z system, we have the unique opportunity to protect the data pipeline as well as the AI models - this is what we call Confidential AI.
Implementing effective TCB recovery mechanisms for both the sealing provider and CVM instances
Nagaraju Kodalapura Nagabhushana Rao
Postdoctoral Researcher, Intel
Quoc Do Le
Principal Research Engineer, Huawei Munich Research Center
Confidential Multi-Stakeholder Machine Learning (CML) utilizing Trusted Execution Environments (TEEs) enables collaborative machine learning computations without exposing intellectual property, including ML source code and input training datasets. However, this method faces challenges in safeguarding the privacy of input datasets, as shared trained models may reveal sensitive information.
Ethereum block builders aim to maximize arbitrage opportunities across thousands of decentralized exchanges (DEXs) and limit order books, yet they often lack the specialized expertise and resources to identify these opportunities efficiently. We present Bob, a low-latency Trusted Execution Environment (TEE)-enforced data room designed to address this challenge by enabling secure collaboration between block builders and high-frequency trading (HFT) firms. Named after bottom-of-block arbitrage, Bob allows builders to leverage HFT expertise to enhance arbitrage capture while ensuring robust protection for both parties’ sensitive information.
With mobile devices becoming increasingly essential, extending corporate assets to edge/mobile devices and utilizing user data raise privacy concerns similar to those encountered in cloud computing.
Key Takeaways
We are presenting an end-to-end solution for building a secure stateful Function-as-a-Service (FaaS) platform using Trusted Execution Environments. For example, we use the dynamic memory mapping capabilities of Intel SGX2 to create an in-enclave WebAssembly runtime with just-in-time compilation and shared libraries. We use a distributed, tamper-proof, confidential storage system to coordinate eventually consistent state updates among FaaS workers with release-consistent locking for transactional capabilities.
Secure handling of state in Function-as-a-Service (FaaS) systems is challenging. When handling sensitive data, the FaaS model forces clients to trust the FaaS providers. In other words, we must trust the entire privileged software stack of the host as well as the FaaS provider admins. Trusted execution environments (TEEs) utilize secure hardware to create a secure context that prevents even privileged software from reading or tampering with it. TEEs can also provide cryptographic proof that they have loaded a particular program. Instead of trusting the FaaS providers, we can use trusted hardware with a secure root of trust. We introduce a FaaS execution service that enables secure and stateful execution for both cloud and edge environments, while at the same time being easy to use. The FaaS workers, called Paranoid Stateful Lambdas (PSLs), can collaborate to perform large parallel computations that span the globe and securely exploit resources from many domains. We provide an easy-to-use data model and automatically manage cryptographic keys for compute and storage resources.
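As a rough illustration of the release-consistent state handling described above (not the actual PSL implementation), the following Python sketch buffers writes while a lock is held and publishes them to the shared store only on release; the storage backend and API are simplified stand-ins.

```python
"""Illustrative sketch of release-consistent state updates for stateful FaaS workers."""
import threading

class ReleaseConsistentStore:
    def __init__(self):
        self._shared = {}            # stands in for the distributed confidential store
        self._lock = threading.Lock()

    def transaction(self, fn):
        with self._lock:             # acquire: updates released by other workers are visible now
            local = dict(self._shared)
            fn(local)                # the worker mutates a private buffer
            self._shared.update(local)  # release: buffered writes become visible to others

store = ReleaseConsistentStore()
store.transaction(lambda s: s.update(counter=s.get("counter", 0) + 1))
```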
BuilderNet represents a significant advancement in addressing one of Ethereum's critical challenges: the centralization of block building. With 90 percent of blocks currently built by just two parties, this concentration threatens the network's foundational principles of resilience and neutrality.
Our solution leverages Trusted Execution Environments (TEEs) to create a decentralized block-building network that democratizes participation while maintaining high performance and security. The innovation lies in our multi-operator system, where multiple parties can collaboratively participate in block building through verifiable TEE instances. This approach creates a new paradigm for secure transaction processing. By enabling orderflow sharing between verified instances, BuilderNet establishes a more equitable playing field for all participants while maintaining the sophistication necessary for efficient block production.
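As an illustration of orderflow sharing between verified instances, here is a hedged Python sketch in which each side releases orderflow only after appraising the other's TEE quote against a governed set of expected measurements. The quote format, field names, and measurement values are hypothetical.

```python
"""Hedged sketch of sharing orderflow only between mutually verified TEE instances;
quote fields and measurement values are hypothetical placeholders."""

EXPECTED_MEASUREMENTS = {"builder-image-v1-hash", "builder-image-v2-hash"}  # governed set

def verify_peer(quote: dict) -> bool:
    # Placeholder for real quote-signature verification plus measurement check.
    return bool(quote.get("signature_ok")) and quote.get("measurement") in EXPECTED_MEASUREMENTS

def share_orderflow(local_quote: dict, peer_quote: dict, orderflow: list):
    # Mutual attestation: orderflow is released only if both sides prove they run
    # an expected builder image inside a genuine TEE.
    if verify_peer(peer_quote) and verify_peer(local_quote):
        return orderflow
    return None
```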
As we implement BuilderNet, we've encountered several critical challenges that we believe require broader industry collaboration and innovation in the TEE space. These include:
Building robust mutual attestation systems between network participants
Establishing effective governance mechanisms for expected measurements
Ensuring strong censorship resistance in distributed TEE networks
Maintaining reproducibility and transparency of underlying firmware
Defining best practices for security auditing TEE applications
Our real-world deployment of BuilderNet serves as a practical case study in applying TEEs to blockchain infrastructure, highlighting both the potential and current limitations of confidential computing in this context. This presentation will share our experiences and, more importantly, open a dialogue about these crucial challenges. We aim to engage with the confidential computing community to develop solutions that can advance not just BuilderNet, but the broader ecosystem of distributed TEE applications.
We believe solving these challenges is essential for the future of both confidential computing and decentralized systems, and we invite collaboration from the TEE industry to help shape this future.
The role of GKE Control Plane Authority in managing encryption keys and auditing cluster operations
Confidential Virtual Machines (CVMs) have emerged as a powerful solution for hardware-based isolation in cloud environments, providing strong security guarantees for sensitive workloads. However, current CVM implementations often lack native sealing capabilities - a critical feature for applications that need to maintain state persistence across restarts or crashes, which has traditionally been available in other TEE technologies like Intel SGX.
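To illustrate what a sealing capability provides, here is a minimal Python sketch, assuming a hypothetical sealing provider and launch measurement: it derives a state-encryption key bound to the CVM's measurement and uses it to seal and unseal persistent state. How the provider releases the root secret after attestation, and how TCB recovery is handled, is exactly the open problem and is stubbed out here.

```python
"""Minimal sealing sketch for a CVM; the root secret and measurement are placeholders."""
import os, hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def sealing_key(root_secret: bytes, launch_measurement: bytes) -> bytes:
    # Binding the key to the measurement means a modified image cannot unseal old state.
    return hashlib.sha256(root_secret + launch_measurement).digest()

def seal(state: bytes, key: bytes) -> bytes:
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, state, None)

def unseal(blob: bytes, key: bytes) -> bytes:
    return AESGCM(key).decrypt(blob[:12], blob[12:], None)

key = sealing_key(b"secret-from-sealing-provider", b"cvm-launch-measurement")
assert unseal(seal(b"app state", key), key) == b"app state"
```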
How GKE tenant isolation underpins a confidential MaaS workflow.
Enabling confidential computing across entire Kubernetes clusters ensures data remains encrypted at all times for cloud customers. This approach, adopted by major cloud providers such as Microsoft Azure and Google Cloud, significantly enhances security and privacy. However, state-of-the-art systems face challenges in providing secure rolling updates, particularly in managing complex coordination across multiple nodes, ensuring compatibility between different hardware and software components and maintaining service integrity and availability during the update process.
How confidential GPU nodes matter for LLM inference
Datacenter-Colocated MPC Scaling: By co-locating MPC parties within a single datacenter but isolating them in TEEs of different vendors, CoVault ensures horizontal scalability without sacrificing security (i.e., without making unrealistic assumptions about the MPC parties' independence).
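For readers unfamiliar with the underlying technique, the following toy Python sketch shows two-party additive secret sharing, the textbook building block behind splitting trust across MPC parties hosted in different vendors' TEEs; it is an illustration, not CoVault's actual protocol.

```python
"""Toy two-party additive secret sharing: neither share reveals the secret alone,
yet sums can be computed locally on shares and reconstructed afterwards."""
import secrets

P = 2**61 - 1  # prime modulus for arithmetic sharing

def share(x: int) -> tuple[int, int]:
    r = secrets.randbelow(P)
    return r, (x - r) % P                     # share 1 to TEE A, share 2 to TEE B

def add_shares(a: tuple[int, int], b: tuple[int, int]) -> tuple[int, int]:
    return (a[0] + b[0]) % P, (a[1] + b[1]) % P   # each party adds its shares locally

def reconstruct(s: tuple[int, int]) -> int:
    return (s[0] + s[1]) % P

assert reconstruct(add_shares(share(20), share(22))) == 42
```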
FLIPS is a middleware system that manages data and participant heterogeneity in federated learning (FL) training workloads. In particular, we examine the benefits of label distribution clustering on participant selection in federated learning. FLIPS clusters parties involved in an FL training job based on the label distribution of their data a priori, and during FL training, ensures that each cluster is equitably represented among the participants selected. To manage platform heterogeneity and dynamic resource availability, FLIPS incorporates a straggler management mechanism to handle changing capacities in distributed, smart community applications.
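A minimal Python sketch of the selection idea, using hypothetical data and cluster counts: cluster parties by their label histograms with k-means, then pick participants round-robin so every cluster is represented in each pass.

```python
"""Illustrative label-distribution clustering and equitable participant selection."""
import numpy as np
from sklearn.cluster import KMeans

def select_participants(label_histograms: np.ndarray, num_clusters: int, budget: int) -> list[int]:
    # label_histograms[i] is party i's normalized label distribution (known a priori).
    clusters = KMeans(n_clusters=num_clusters, n_init=10).fit_predict(label_histograms)
    selected, rng = [], np.random.default_rng(0)
    while len(selected) < budget:
        for c in range(num_clusters):   # equitable: one pick per cluster per pass
            members = [i for i in np.where(clusters == c)[0] if i not in selected]
            if members and len(selected) < budget:
                selected.append(int(rng.choice(members)))
    return selected

hists = np.random.default_rng(1).dirichlet(np.ones(10), size=40)  # 40 parties, 10 labels
print(select_participants(hists, num_clusters=4, budget=8))
```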
Perry Alexander
Co-Founder, Invary LLC and Distinguished Professor, University of Kansas
Invary, LLC and University of Kansas
Remote attestation is a process of gathering evidence from a remote system with the intent of establishing its trustworthiness. A relying party requests evidence from a target. The target responds by gathering allowable evidence and meta-evidence. Target evidence and meta-evidence are together appraised to establish whether the target is in a good operational state.
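A small Python sketch of this request/appraise flow, with hypothetical evidence fields, keys, and golden values: the appraiser first checks the meta-evidence (here a MAC vouching for how the evidence was produced) and then compares the evidence itself against expected measurements.

```python
"""Illustrative appraisal of evidence plus meta-evidence; formats are hypothetical."""
import hmac, hashlib

GOLDEN_VALUES = {"kernel": "a" * 64, "app": "b" * 64}   # relying party's expected measurements
ATTESTER_KEY = b"shared-or-derived attestation key"     # stands in for a real signing key

def _canonical(evidence: dict) -> bytes:
    return "".join(f"{k}:{v};" for k, v in sorted(evidence.items())).encode()

def appraise(evidence: dict, meta_evidence: bytes) -> bool:
    # Meta-evidence vouches for how the evidence was gathered (here: a MAC over it).
    expected_mac = hmac.new(ATTESTER_KEY, _canonical(evidence), hashlib.sha256).digest()
    if not hmac.compare_digest(meta_evidence, expected_mac):
        return False
    # The evidence itself must match the golden values for a good operational state.
    return all(evidence.get(k) == v for k, v in GOLDEN_VALUES.items())

evidence = dict(GOLDEN_VALUES)
meta = hmac.new(ATTESTER_KEY, _canonical(evidence), hashlib.sha256).digest()
assert appraise(evidence, meta)
```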
Any modern attestation target comprises many subsystems and depends on many others. Thus, attestation of a single component provides a limited picture of an appraisal target. Instead, attestation targets should be treated as collections of interacting, distributed components. Attestation should gather and compose evidence for entire systems.
Layered attestation is an enhanced attestation process where attestation managers execute protocols that perform multiple component measurements and bundle resulting evidence for appraisal. The MAESTRO tool suite provides a mechanism for building layered attestation systems around the execution of Copland protocols. Users specify a protocol to be executed and MAESTRO configures a common attestation manager core and attestation service providers to execute that protocol on target systems. With few exceptions, MAESTRO components are either formally verified or synthesized from formal specifications providing assurance that protocol execution faithfully implements the Copland semantics.
Our presentation will cover an overview of layered attestation using MAESTRO. We will present a brief overview of layered attestation and the Copland attestation protocol language. We will then present an attestation architecture for a cross-domain system. The attestation architecture includes a measured boot using TPM, IMA and LKIM that transitions to run-time attestation using MAESTRO that reports execution state. We will cover both formal treatment and empirical evaluation results.
Enterprises want to run inference and fine-tune closed-source models on their own sensitive data without risking IP theft or data leakage. Google Kubernetes Engine (GKE) offers a path to enable such Model-as-a-Service (MaaS) scenarios through strict isolation, encryption, and identity management. In this session, we’ll show how GKE’s tenant projects and locked-down node pools can keep proprietary model weights and customer data shielded from one another—and from anyone with direct node access.
Raghu Yeluri
Intel Fellow, Lead Architect Confidential Computing Services and Confidential AI, Intel Corp
As Artificial Intelligence adoption intensifies across highly regulated sectors, enterprises are confronting an increasingly complex landscape of cross-border compliance, sovereign data restrictions, AI model safety requirements, and regulator-mandated accountability frameworks. Whole AI + DigiDaaS deliver confidential AI architecture and platform governance services that embed these requirements directly into enterprise AI systems – balancing model accountability, safety, privacy, and postmarket compliance while maintaining client-specific use case adaptability.
Our Sovereign AI Governance & Deployment Framework enables regulated enterprises – including MedTech, Software as a Medical Device (SaMD), healthcare diagnostics, financial services, and industrial AI safety systems – to architect, deploy, and operate Confidential AI pipelines with jurisdictional safety and verifiable auditability. We combine full-stack confidential compute engineering, sovereign key management, Trusted Execution Environment orchestration, encrypted model inference, federated data collaboration, and cross-border compliance tooling. This allows enterprises to confidently align with FDA, EU MDR, HIPAA, NYDFS, EU AI Act, GDPR, ISO 13485/14971, IEC 62304/60601 standards, while maintaining real-world client deployment velocity and regulator cooperation readiness.
Whole AI + DigiDaaS operate as a sovereign-grade AI platform architecture and advisory firm, partnering with regulated enterprises, design partners, co-development clients, public-private governance alliances, and (prospectively) capital Limited Partner syndicates to deliver regulated AI trust infrastructure. Our client-centric, consulting-led model ensures platform capabilities are grounded in real enterprise needs, while preserving board-level fiduciary governance, regulator collaboration pathways, and sovereign cloud partner alignment.
At Confidential Computing Summit 2025, we look forward to meeting cross-disciplinary professionals passionate about sovereign systems (architecture, engineering, zero trust compute), confidential AI, regulatory excellence, public-private governance alliances, IP chain of custody, data sovereignty, emerging progress in regulated sectors, business development, and client success.
Confidential computing, and in particular Confidential Containers (CoCo), enables secure processing of sensitive data and applications by leveraging Trusted Execution Environments (TEEs). It protects data in use by performing computation in a hardware-based, attested TEE, so information cannot be extracted from a memory dump because the memory is encrypted. This work explores the deployment of a text summarization AI application for medical report analysis within the CoCo framework: the application accepts individual medical reports and summarizes them using a large language model (LLM). IBM Secure Execution for Linux provides technical assurance that administrators cannot access the PII contained in the medical reports, instead of relying on trust that they will not, and it uses an encrypted contract to enforce a zero-trust policy at deployment.
The contract defines different personas and ensures access isolation between them through encryption. It also defines restrictions for the workload that will run in the pod and any other authorizations required for the server. Such contracts can further incorporate tokens, credentials, or certificates that the workload needs but that should not be shared with others, such as the Kubernetes admin. The Workload Provider supplies the workload and defines the workload policies, for example restricting access to the deployed workload by disallowing kubectl exec operations or other API endpoints considered untrusted. Since the contract is encrypted, its contents are confidential and cannot be modified by anyone, including Kubernetes admins.
The LLM used for summarization was fine-tuned for a specific purpose; it may contain business IP or have value in itself, and it is deployed in an environment operated neither by the healthcare institution nor by the model provider. The solution ensures data privacy and model protection by enforcing zero-trust policies through signed container images and immutable policy definitions. Unauthorized access and tampering attempts, even by privileged administrators, are blocked, safeguarding both sensitive personal data and the proprietary AI model. With this solution we no longer need to rely on trust in Kubernetes or IaaS admins. This approach demonstrates how confidential containers can meet strict security requirements in healthcare and other industries handling critical information.
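As a rough illustration (field names and the encryption scheme are simplified stand-ins, not the actual IBM Hyper Protect / CoCo contract format), the following Python sketch builds a contract-like document with per-persona sections and workload policies, then encrypts it before it would ever reach the cluster.

```python
"""Illustrative sketch of an encrypted deployment contract; fields are hypothetical."""
import json, os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

contract = {
    "env": {"logging_endpoint": "https://logs.example.com"},           # environment persona
    "workload": {                                                      # workload provider persona
        "images": [{"name": "medical-summarizer:1.0", "signature_required": True}],
        "policy": {"allow_kubectl_exec": False, "allowed_api_endpoints": []},
        "secrets": {"model_key": "<wrapped key reference>"},           # never visible to k8s admins
    },
}

def encrypt_contract(doc: dict, key: bytes) -> bytes:
    # In the real flow each section is encrypted to the TEE's public key;
    # a symmetric key is used here purely to keep the sketch short.
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, json.dumps(doc).encode(), None)

blob = encrypt_contract(contract, AESGCM.generate_key(bit_length=256))
```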
Optimizing Large Language Models (LLMs) for Efficient Inference: Techniques and Best Practices