Confidential Computing Summit
Summit Posters
Confidential Computing Summit 2025

2025 Poster Gallery


Attested TLS: Design Space, Tradeoffs and Standardization

Muhammad Usama Sardar
Research Associate, TU Dresden

Download Video

Attested TLS is an essential ingredient of every confidential computing solution. In this talk, we explore the identity crisis that results from ill-defined notions of identity for attested TLS in confidential computing. We present a formal approach, based on the state-of-the-art symbolic security analysis tool ProVerif, for comparing the security strengths of attested TLS protocols. We also present vulnerabilities in two state-of-the-art protocols: Interoperable RA-TLS [1] and the proposed standard draft-fossati-tls-attestation [2].

Further details: [3]

Key takeaways: Combining web PKI with remote attestation can provide stronger security than replacing one with the other.
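As a toy illustration of this takeaway (our sketch, not the talk's protocol), a relying party can require both the web-PKI check and the attestation check to pass before accepting a peer; all names below are hypothetical:

```python
# Illustrative sketch only: accept a peer only if BOTH the web-PKI
# certificate validation AND the remote attestation evidence check succeed.
from dataclasses import dataclass

@dataclass
class Peer:
    cert_chain_valid: bool    # outcome of standard web-PKI validation
    attestation_valid: bool   # outcome of verifying the TEE evidence

def accept(peer: Peer) -> bool:
    # Combining the checks: an attacker must defeat both the CA ecosystem
    # and the hardware attestation to impersonate the peer.
    return peer.cert_chain_valid and peer.attestation_valid
```

Under this composition, either mechanism failing alone is enough to reject the connection, which is the sense in which combining is stronger than replacing.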

Technologies discussed in this talk: Remote attestation, TLS, Attested TLS

[1] https://github.com/CCC-Attestation/interoperable-ra-tls

[2] https://datatracker.ietf.org/doc/draft-fossati-tls-attestation/

[3] https://mailarchive.ietf.org/arch/msg/tls/Jx_yPoYWMIKaqXmPsytKZBDq23o/

Confidential VMs to empower Decentralized Health Data Platform

Ali Farahanchi
Founder and CEO, Lavita AI

Download Video

Lavita's mission is to empower people with full control of their health data and no barriers to harnessing it, so they can securely access the latest medical knowledge and AI models to generate personalized insights that improve their health and well-being.

In pursuit of this vision, Lavita is building a trusted platform where researchers conduct cutting-edge medical and AI research using a decentralized-first approach to collect data, access compute resources, and validate and share results with communities — ultimately pushing the boundaries of science in medicine.

Lavita’s implementation of confidential computing (using confidential VMs on Azure with NVIDIA H100 Tensor Core GPUs) brings some of the most advanced open-source models from Llama, DeepSeek and others to users’ fingertips, enabling them to access reliable, secure, and personalized medical information.

Putting attestation into Confidential Computing - Introducing Google Cloud Attestation

Ruide Zhang
Founder & CEO, Lavita AI

Download Poster

In this session, we introduce our framework designed to address these issues. Our solution ensures the confidentiality and integrity of the intellectual property (input datasets, ML code, and models) while providing mechanisms to measure privacy leaks from the trained models and balance privacy with utility.

Computer Science is Broken and the File System is the Reason Why.

Steve Guilford
Founder and President, AsterionDB

Download Poster
Download Video

TL;DR: The legacy file system is a major problem. Find out how to move beyond it by embracing a data-layer microservice approach.
Bill Gates was once asked if there was a product he regretted not being able to bring to market. In his response, he described a system that married the best aspects of the file system and relational databases. That technology was called WinFS (Windows Future Storage). Unfortunately, it never saw broad public release.
“We had a rich database as the client/cloud store that was part of a Windows release that was before its time. This is an idea that will reemerge since your cloud store will be rich with schema rather than just a bunch of files, and the client will be a partial replica of it with rich schema understanding.”
Gates predicted the technology would reemerge, and it has in the form of AsterionDB as described by Chris Steffan, VP of Research at Enterprise Management Associates:
“AsterionDB’s approach to managing unstructured data aligns with the WinFS strategy and its ‘rich schema’ concept. WinFS aimed to unify structured and unstructured data by layering a relational schema over diverse data types, enabling powerful querying and relationships. Similarly, AsterionDB integrates unstructured data directly into Oracle’s relational environment, leveraging its robust schema capabilities to provide rich metadata, security, and search functionality.”
Using a database to replace the file system has been tried before, but back then, technology and market forces prevented its realization. Now, nearly 20 years later, technology and market forces (e.g. cybersecurity threats) demand that this approach be given another chance.
What happens when you move file-based user assets out of the legacy file system? What happens if you move all of your business logic out of the middle tier? Does this present any architectural benefits? Does this approach help solve the growing cybersecurity threats we face? Can it provide an answer to the harvest-now-decrypt-later threat posed by quantum computing?
Find out the answers to these questions and more when the inventor of AsterionDB describes the benefits and use cases for the AsterionDB Converged Computing Platform.

Acai: Protecting Accelerator Execution with Arm Confidential Computing Architecture

Supraja Sridhara
PhD Student, ETH Zurich

Download Poster

Trusted execution environments in several existing and upcoming CPUs demonstrate the success of confidential computing, with the caveat that tenants cannot securely use accelerators such as GPUs and FPGAs. In this paper, we reconsider the Arm Confidential Computing Architecture (CCA) design, an upcoming TEE feature in Armv9-A, to address this gap. We observe that CCA offers the right abstraction and mechanisms to allow confidential VMs to use accelerators as a first-class abstraction. We build ACAI, a CCA-based solution, with a principled approach of extending CCA security invariants to device-side access to address several critical security gaps. Our experimental results on GPU and FPGA demonstrate the feasibility of ACAI while maintaining security guarantees.

TeeMate: Fast and Efficient Confidential Container using Shared Enclave

Jaewon Hur
Postdoctoral Researcher, Georgia Tech

Download Poster

Confidential containers are becoming increasingly popular as they meet both cloud providers' need for efficient resource management and cloud users' need for data protection. Specifically, confidential containers integrate the container and the enclave, aiming to inherit the design advantages of both (i.e., resource management and data protection). However, current confidential containers suffer from large performance overheads caused by i) high startup latency due to enclave creation, and ii) a large memory footprint due to the non-sharable nature of enclave memory.

We explore a design conundrum of confidential containers, examining why they impose such large performance overheads. Surprisingly, we found a universal misconception that an enclave can only be used by the single (containerized) process that created it. In fact, an enclave can be shared across multiple processes, because an enclave is merely a set of physical resources, while the process is an abstraction constructed by the host kernel. Based on this observation, we introduce TeeMate, a new approach to utilizing enclaves on the host system.

Specifically, TeeMate provides primitives to i) share enclave memory between processes, preserving the memory abstraction, and ii) assign in-enclave threads between processes, preserving the thread abstraction. We implemented TeeMate on Intel SGX and built confidential serverless computing and a confidential database on top of TeeMate-based confidential containers. Our evaluation demonstrates the strong practical impact of TeeMate, achieving at least 4.5 times lower latency and 2.8 times lower memory usage compared to applications built on conventional confidential containers.

Driving Security Assurance of Intel TDX (Trust Domain Extensions): An Offensive Security Research Approach

Nagaraju N Kodalapura

Principal Engineer: Offensive Security Research, Intel Corporation

Download Poster
Download Video

In this presentation, we provide a high-level architectural overview and the security objectives of Intel TDX (Trust Domain Extensions) technology for confidential computing. We then focus on the security assurance practices, and the corresponding results, of the offensive security research approach applied to Intel TDX, covering topics such as vulnerability research and mitigation, emerging threat analysis, and platform-level pen-testing. The presentation provides real examples of vulnerabilities (CVEs - Common Vulnerabilities and Exposures, SAs - Security Advisories, etc.) that were uncovered and mitigated through this process in the hardware, firmware, and software components of Intel TDX. Finally, we give a quick overview of how Intel drove security research activities on Intel TDX with external industry partners from Google and Microsoft, and share some fruitful results from that collaboration, followed by Q&A.
Presentation Outline:
•  Introduction to TDX architecture, security objectives, and key ingredients
•  Driving security assurance through offensive security research
•  Example vulnerabilities uncovered and mitigated in Hardware, Firmware and Software components and the associated CVE (Common Vulnerabilities and Exposures) and Intel SA (Security Advisory) details
•  External research collaboration
•  Hackathons, a method of driving security research on Intel’s TDX and the results from the effort
•  Background on how Intel enabled Intel’s TDX Module software for external auditing
•  External research collaboration on TDX research with industry partners from Google and Microsoft
Associated results and publication details:
Intel report: https://www.intel.com/content/www/us/en/developer/articles/technical/software-security-guidance/technical-documentation/tdx-security-research-and-assurance.html
Google report: https://cloud.google.com/blog/products/identity-security/rsa-google-intel-confidential-computing-more-secure/


Confidential and Privacy-Preserving Multi-Stakeholder Machine Learning with Trusted Execution Environments

Quoc Do Le

Principal Research Engineer, Huawei Munich Research Center

Download Poster
Download Video

Confidential Multi-Stakeholder Machine Learning (CML) utilizing Trusted Execution Environments (TEEs) enables collaborative machine learning computations without exposing intellectual property, including ML source code and input training datasets. However, this method faces challenges in safeguarding the privacy of input datasets, as shared trained models may reveal sensitive information.

TEE-enforced data clean rooms

Angela Lu

Product Lead, Flashbots

Download Poster 

Ethereum block builders aim to maximize arbitrage opportunities across thousands of decentralized exchanges (DEXs) and limit order books, yet they often lack the specialized expertise and resources to identify these opportunities efficiently. We present Bob, a low-latency Trusted Execution Environment (TEE)-enforced data room designed to address this challenge by enabling secure collaboration between block builders and high-frequency trading (HFT) firms. Named after bottom-of-block arbitrage, Bob allows builders to leverage HFT expertise to enhance arbitrage capture while ensuring robust protection for both parties’ sensitive information.

A Security Monitor Framework For On-Device Confidential Computing

Jinbum Park

Staff Engineer, Samsung Research

Download Poster
Download Video

With mobile devices becoming increasingly essential, corporate asset extension to edge/mobile devices and user data utilization raises privacy concerns similar to those encountered in cloud computing.

Lessons Learned from Building CI/CD Pipelines for Confidential Services

Kritika Partha

Software Engineer, Google

Download Poster

Deploying confidential services, particularly those leveraging CVM technologies like SEV-SNP, introduces unique challenges to traditional CI/CD pipelines. This presentation delves into the lessons learned from building robust pipelines for such services, focusing on the critical aspects of security, reproducibility, and transparency.

Paranoid Stateful Lambda

Shubham Mishra

PhD Student, UC Berkeley

Download Poster

We are presenting an end-to-end solution for building a secure stateful Function-as-a-Service (FaaS) platform using Trusted Execution Environments. For example, we use the dynamic memory mapping capabilities of Intel SGX2 to create an in-enclave WebAssembly runtime with just-in-time compilation and shared libraries. We use a distributed, tamper-proof, confidential storage system to coordinate eventually consistent state updates among FaaS workers with release-consistent locking for transactional capabilities.

Secure handling of state in Function-as-a-Service (FaaS) systems is challenging. When handling sensitive data, the FaaS model forces clients to trust the FaaS providers. In other words, we must trust the entire privileged software stack of the host as well as the FaaS provider admins. Trusted execution environments (TEE) utilize secure hardware to create a secure context that prevents even privileged software from reading or tampering with it. TEEs can also provide cryptographic proof that it has loaded a particular program. Instead of trusting the FaaS providers, we can use trusted hardware with a secure root of trust. We introduce a FaaS execution service that enables secure and stateful execution for both cloud and edge environments, while at the same time being easy to use. The FaaS workers, called Paranoid Stateful Lambdas (PSLs), can collaborate to perform large parallel computations that span the globe and securely exploit resources from many domains. We provide an easy-to-use data model and automatically manage cryptographic keys for compute and storage resources.
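As a rough sketch of the release-consistent locking mentioned above (our toy model, not the PSL implementation), a worker's writes stay in a local buffer and are published to the shared store only on lock release, which is what makes the updates transactional:

```python
# Toy model of release consistency: writes made while holding the lock
# are buffered locally and become visible to others only on release.
class Store:
    """Stand-in for the shared, eventually consistent storage system."""
    def __init__(self):
        self.data = {}

class Worker:
    """Stand-in for a FaaS worker performing a transactional update."""
    def __init__(self, store: Store):
        self.store = store
        self.buffer = {}
        self.holding = False

    def acquire(self):
        self.holding = True

    def put(self, key, value):
        assert self.holding, "writes require the lock"
        self.buffer[key] = value          # local until release

    def release(self):
        # Publish all buffered writes atomically, then drop the lock.
        self.store.data.update(self.buffer)
        self.buffer.clear()
        self.holding = False

store = Store()
w = Worker(store)
w.acquire()
w.put("balance", 100)   # not yet visible in store.data
w.release()             # now visible
```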

BuilderNet: Decentralizing Block Building via TEEs

Shea Ketsdever

Sr. Software Engineer (TEE Engineer), Flashbots

Download Poster

BuilderNet represents a significant advancement in addressing one of Ethereum's critical challenges: the centralization of block building. With 90 percent of blocks currently built by just two parties, this concentration threatens the network's foundational principles of resilience and neutrality.

Our solution leverages Trusted Execution Environments (TEEs) to create a decentralized block-building network that democratizes participation while maintaining high performance and security. The innovation lies in our multi-operator system, where multiple parties can collaboratively participate in block building through verifiable TEE instances. This approach creates a new paradigm for secure transaction processing. By enabling orderflow sharing between verified instances, BuilderNet establishes a more equitable playing field for all participants while maintaining the sophistication necessary for efficient block production.

As we implement BuilderNet, we've encountered several critical challenges that we believe require broader industry collaboration and innovation in the TEE space. These include:

Building robust mutual attestation systems between network participants
Establishing effective governance mechanisms for expected measurements
Ensuring strong censorship resistance in distributed TEE networks
Maintaining reproducibility and transparency of underlying firmware
Defining best practices for security auditing TEE applications

Our real-world deployment of BuilderNet serves as a practical case study in applying TEEs to blockchain infrastructure, highlighting both the potential and current limitations of confidential computing in this context. This presentation will share our experiences and, more importantly, open a dialogue about these crucial challenges. We aim to engage with the confidential computing community to develop solutions that can advance not just BuilderNet, but the broader ecosystem of distributed TEE applications.

We believe solving these challenges is essential for the future of both confidential computing and decentralized systems, and we invite collaboration from the TEE industry to help shape this future.

Loose SEAL: Restoring Sealed Storage Capabilities in Intel TDX Through SGX Sidecar

Moe Mahhouk

Sr. Software Engineer (TEE Engineer), Flashbots

Download Poster

Confidential Virtual Machines (CVMs) have emerged as a powerful solution for hardware-based isolation in cloud environments, providing strong security guarantees for sensitive workloads. However, current CVM implementations often lack native sealing capabilities - a critical feature for applications that need to maintain state persistence across restarts or crashes, which has traditionally been available in other TEE technologies like Intel SGX.

Toward a 100% Open Source Platform for Secure and Scalable Confidential Services

Sida Chen

Software Engineer, Google

Download Poster

Confidential computing promises to revolutionize data security by encrypting data in use within trusted execution environments (TEEs). However, existing CVM solutions are often tied to specific public cloud providers, limiting portability and transparency due to proprietary software stacks. This presentation previews an open-source and transparent platform designed to deploy, manage, and scale Confidential VMs in production environments.

Enabling secure rolling updates for confidential K8S

Christof Fetzer

Professor and Co-Founder, Scontain GmbH

Download Poster
Download Video

Enabling confidential computing across entire Kubernetes clusters ensures data remains encrypted at all times for cloud customers. This approach, adopted by major cloud providers such as Microsoft Azure and Google Cloud, significantly enhances security and privacy. However, state-of-the-art systems face challenges in providing secure rolling updates, particularly in managing complex coordination across multiple nodes, ensuring compatibility between different hardware and software components and maintaining service integrity and availability during the update process.

Lessons Learned from Building CI/CD Pipelines for Confidential Services

Kevin Garbe

Software Engineer, Google

Download Poster

Deploying confidential services, particularly those leveraging CVM technologies like SEV-SNP, introduces unique challenges to traditional CI/CD pipelines. This presentation delves into the lessons learned from building robust pipelines for such services, focusing on the critical aspects of security, reproducibility, and transparency.

Toward a 100% Open Source Platform for Secure and Scalable Confidential Services

Alex Orozco

Software Engineer, Google

Download Poster

Confidential computing promises to revolutionize data security by encrypting data in use within trusted execution environments (TEEs). However, existing CVM solutions are often tied to specific public cloud providers, limiting portability and transparency due to proprietary software stacks. This presentation previews an open-source and transparent platform designed to deploy, manage, and scale Confidential VMs in production environments.

Bridging SGX and TDX with Conker: Advancing Decentralized Confidential Computing

Anthony Simonet-Boulogne

Head of Research & Innovation, iExec Blockchain Tech

Download Poster

Trusted Execution Environments (TEEs) are vital to Decentralized Confidential Computing (DeCC), ensuring computational integrity and confidentiality in decentralized and cloud infrastructures. Intel SGX introduced secure enclave-based execution but posed compatibility challenges, requiring application rewrites, which hindered adoption. Intel’s Trust Domain Extensions (TDX) enables confidential virtual machines (CVMs), allowing virtually any application to run securely without requiring adaptation. However, TDX lacks features like fine-grained code measurement and attestation, affecting trust verification and security guarantees critical to DeCC use-cases.

At iExec, our experience with SGX and TDX revealed key usability and security challenges, leading us to develop Conker, an open-source framework that unifies the strengths of both technologies. SGX provides code-level measurement, ensuring fine-grained security, while TDX offers seamless compatibility for unmodified applications. Conker bridges this gap by integrating fine-grained security with ease of use, making confidential computing more accessible to our users. It enhances the security of applications and assets, optimizes development workflows, and facilitates deployment across decentralized infrastructures, enabling a more practical and scalable approach to DeCC.

This talk will present our hands-on experience with both SGX and TDX, detailing the practical insights gained from real-world deployments. We will discuss the key trade-offs between these two technologies, including performance implications, security considerations, and ease of integration. Additionally, we will demonstrate how Conker simplifies the use of confidential computing, making it more accessible to developers and enterprises alike.

Attendees will gain:

A deep dive into the evolution of confidential computing, from SGX to TDX.

Practical lessons from deploying SGX-based and TDX-based solutions at scale.

Insights into the security and performance trade-offs between SGX and TDX.

An introduction to Conker and how it bridges the gaps between these technologies.

Guidance on how to leverage confidential computing for real-world applications.

CoVault: Secure, Scalable Analytics of Personal Data

Roberta De Viti

PhD Student, Max Planck Institute for Software Systems (MPI-SWS)

Download Poster

Datacenter-Colocated MPC Scaling: By co-locating MPC parties within a single datacenter but isolating them in TEEs of different vendors, CoVault ensures horizontal scalability without sacrificing security (i.e., without making unrealistic assumptions about the MPC parties’ independence).

FLIPS: Federated Learning using Intelligent Participant Selection

Rahul Atul Bhope

PhD Candidate, University of California, Irvine

Download Poster

FLIPS is a middleware system that manages data and participant heterogeneity in federated learning (FL) training workloads. In particular, we examine the benefits of label distribution clustering on participant selection in federated learning. FLIPS clusters the parties involved in an FL training job based on the label distribution of their data a priori and, during FL training, ensures that each cluster is equitably represented among the selected participants. To manage platform heterogeneity and dynamic resource availability, FLIPS incorporates a straggler management mechanism to handle changing capacities in distributed, smart community applications.
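The cluster-then-select-equitably idea might be sketched as follows. This is a deliberate simplification we made up for illustration (majority-label bucketing in place of real label-distribution clustering); FLIPS's actual mechanism is more sophisticated:

```python
# Sketch: group parties by their label histograms, then pick one
# participant per cluster each round so every cluster is represented.
from collections import defaultdict

def cluster_by_labels(parties: dict) -> dict:
    # Crude stand-in for clustering: bucket each party by majority label.
    clusters = defaultdict(list)
    for name, hist in parties.items():
        majority = max(hist, key=hist.get)
        clusters[majority].append(name)
    return dict(clusters)

def select_round(clusters: dict, rnd: int) -> list:
    # One participant per cluster, rotating within the cluster by round.
    return [members[rnd % len(members)] for members in clusters.values()]

parties = {
    "p1": {"cat": 0.9, "dog": 0.1},
    "p2": {"cat": 0.8, "dog": 0.2},
    "p3": {"dog": 0.7, "cat": 0.3},
}
clusters = cluster_by_labels(parties)
picked = select_round(clusters, rnd=0)
```

Rotating within each cluster spreads participation across rounds while keeping every label distribution represented in every round.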

Trusted Computing Through Layering Attestation

Perry Alexander

Co-Founder, Invary LLC and Distinguished Professor, University of Kansas


Download Poster
Download Video

Remote attestation is a process of gathering evidence from a remote system with the intent of establishing its trustworthiness. A relying party requests evidence from a target. The target responds by gathering allowable evidence and meta-evidence. Target evidence and meta-evidence are together appraised to establish whether the target is in a good operational state.

Any modern attestation target comprises many subsystems and depends on many others. Thus, attestation of a single component provides only a limited picture of an appraisal target. Instead, attestation targets should be treated as collections of interacting, distributed components, and attestation should gather and compose evidence for entire systems.

Layered attestation is an enhanced attestation process where attestation managers execute protocols that perform multiple component measurements and bundle resulting evidence for appraisal. The MAESTRO tool suite provides a mechanism for building layered attestation systems around the execution of Copland protocols. Users specify a protocol to be executed and MAESTRO configures a common attestation manager core and attestation service providers to execute that protocol on target systems. With few exceptions, MAESTRO components are either formally verified or synthesized from formal specifications providing assurance that protocol execution faithfully implements the Copland semantics.

Our presentation will cover an overview of layered attestation using MAESTRO. We will present a brief overview of layered attestation and the Copland attestation protocol language.  We will then present an attestation architecture for a cross-domain system. The attestation architecture includes a measured boot using TPM, IMA and LKIM that transitions to run-time attestation using MAESTRO that reports execution state.  We will cover both formal treatment and empirical evaluation results.
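A toy model of bundling per-component evidence with meta-evidence for appraisal (our assumption about the general shape of the process, not MAESTRO or Copland code) might look like:

```python
# Sketch: an attestation manager measures components, bundles the
# measurements, and adds meta-evidence (here a MAC over the bundle) so
# the appraiser can check the bundle itself came from the manager.
import hashlib
import hmac

KEY = b"attestation-manager-key"   # hypothetical manager signing key

def measure(component: bytes) -> str:
    return hashlib.sha256(component).hexdigest()

def _blob(measurements: dict) -> bytes:
    return "".join(f"{k}={v};" for k, v in sorted(measurements.items())).encode()

def bundle(measurements: dict):
    tag = hmac.new(KEY, _blob(measurements), hashlib.sha256).hexdigest()
    return measurements, tag

def appraise(measurements: dict, tag: str, golden: dict) -> bool:
    # Check the meta-evidence first, then every measurement against its
    # known-good ("golden") reference value.
    expected = hmac.new(KEY, _blob(measurements), hashlib.sha256).hexdigest()
    ok_meta = hmac.compare_digest(tag, expected)
    return ok_meta and all(measurements.get(c) == m for c, m in golden.items())

golden = {"kernel": measure(b"kernel-image"), "app": measure(b"app-binary")}
evidence, tag = bundle({"kernel": measure(b"kernel-image"),
                        "app": measure(b"app-binary")})
```

Tampering with any single measurement invalidates both the per-component check and the meta-evidence over the bundle.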

Isolated IaaS: Enabling Sovereign cloud through Confidential Computing

Matthieu Legre

VP of Product, CYSEC SA

Download Video

The Italian Cloud Strategy sets the strategic directions for migrating the data and digital services of the public administration to the cloud. The strategy responds to three main challenges: ensuring the country's technological autonomy, guaranteeing control over data, and increasing the resilience of digital services. To address these needs, CYSEC has developed an IaaS isolation solution that is integrated as one building block in sovereign cloud environments used by several public administrations.

CYSEC's solution exploits the security properties of confidential computing technologies to bootstrap two security functionalities: encryption of data in all states, and enforcement of the authenticity of the execution environment. These properties apply to all data and code hosted by virtual machines deployed on a third-party IaaS. By doing so, the CYSEC isolation solution gives VM owners hardware-based control over the protection of their data.

Despite present discrepancies in approaches to confidential VMs, CYSEC's solution can be deployed on various environments such as Proxmox, Azure, or VMware by Broadcom. For each environment, the solution has to be adapted so that end-users benefit from the best security guarantees. The aim of this presentation is to share the strategies CYSEC has developed to adapt its solution to these different virtualization environments.

Safeguarding Sensitive Data Access At Scale with Confidential Computing

Mingshen Sun

Research Scientist, TikTok

Download Video
Download Poster

Microservice architecture has become a popular system design option in recent years due to its agility and scalability. Business logic is decoupled into multiple services, and data can easily flow among them. However, this introduces a critical challenge for data protection: how to safeguard sensitive data while maintaining the benefits of microservices. In this talk, we present our solution for protecting the sensitive data lifecycle in a large-scale microservice architecture. By utilizing confidential computing and other privacy-enhancing technologies, our solution guarantees that sensitive data is accessible only to minimal logic running in trusted execution environments. In addition, to provide better transparency into sensitive data usage, computations in the TEE can be audited and verified by external parties through the remote attestation mechanism. Finally, we will discuss the challenges and opportunities of applying confidential computing at scale.

Using HPCR to Host Trustee for Secure and Trusted Remote Attestation

Chathurya Adapa

Software Engineer, IBM

Download Video

In confidential computing, and especially in the confidential containers (CoCo) landscape, ensuring the integrity and trustworthiness of the environment in which workloads run is crucial. The IBM Hyper Protect Container Runtime (HPCR) offers a hardware-backed Trusted Execution Environment (TEE) that ensures data is protected while in use, isolated from both the host and potential malicious actors. To enable secure remote attestation in this environment, we propose hosting Trustee, an attestation service, within HPCR to manage the root of trust for confidential workloads.

Trustee consists of multiple key components, including the Attestation Service (AS), Key Broker Service (KBS), and Reference Value Provider Service (RVPS), which work together to validate the trustworthiness of remote environments. When an application or workload needs to be executed in a confidential container, Trustee's attestation services verify that the environment is running within a trusted TEE and that the underlying software and hardware are in the expected, secure state. This ensures that sensitive data is protected both at rest and while in use. 

By deploying Trustee within HPCR, we leverage HPCR's secure, hardware-backed infrastructure to manage and verify attestation evidence, making it the ideal platform to host Trustee's services. In this setup, HPCR handles the root of trust, providing hardware-level security features, such as secure boot and encrypted data protection, to safeguard the execution environment. Trustee then performs the attestation process, ensuring that only workloads operating within verified and secure TEEs are granted access to sensitive secrets, which could be encryption keys, API tokens, etc. This integration ensures that the entire lifecycle of confidential data at storage, transit, and processing is secured.
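The evidence-for-secrets flow described above can be sketched as follows; the class and method names here are ours for illustration, not the Trustee API:

```python
# Sketch of the flow: a workload submits attestation evidence; the
# attestation service appraises it against reference values; only then
# does the key broker release the requested secret.
class ReferenceValues:           # stands in for RVPS
    def __init__(self, expected_measurement: str):
        self.expected = expected_measurement

class AttestationService:        # stands in for AS
    def __init__(self, rvps: ReferenceValues):
        self.rvps = rvps
    def verify(self, evidence: dict) -> bool:
        return evidence.get("measurement") == self.rvps.expected

class KeyBroker:                 # stands in for KBS
    def __init__(self, attestation: AttestationService, secrets: dict):
        self.attestation = attestation
        self.secrets = secrets
    def get_secret(self, name: str, evidence: dict):
        # Secrets are released only to workloads whose evidence appraises.
        if not self.attestation.verify(evidence):
            return None
        return self.secrets.get(name)

kbs = KeyBroker(AttestationService(ReferenceValues("abc123")),
                {"db-key": "s3cr3t"})
granted = kbs.get_secret("db-key", {"measurement": "abc123"})
denied = kbs.get_secret("db-key", {"measurement": "tampered"})
```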

This solution supports a range of use cases in industries that require stringent security and compliance, such as healthcare, finance, and government. By using HPCR as the secure foundation for hosting Trustee, organizations can confidently deploy confidential containers, knowing that both the environment and the data are protected by the highest levels of attestation and hardware-based security. This integrated approach delivers a scalable, flexible, and trusted platform for running sensitive workloads in hybrid cloud or on-premises environments.

A unique differentiator of HPCR lies in its ability to address the longstanding "bootstrap problem" in secure systems. Traditionally, remote attestation requires a trusted environment for the attestation service itself, creating a cyclical dependency: how do you trust the environment before attesting it? HPCR solves this challenge through the use of encrypted images and a novel mechanism for Integrated Trust Initialization. This approach starts from a pre-established trusted environment, effectively bypassing the traditional dependency loop. By doing so, HPCR not only streamlines the setup of secure and trusted systems but also positions Trustee hosted on HPCR as a groundbreaking solution in the realm of remote attestation.

eBPF Integrity and Attestation for Confidential Computing

Steve Perry

Principal Architect, Invary

Download Video

Initially created as a mechanism for efficient packet filtering in Linux, eBPF has evolved into a cross-platform, general-purpose framework to run custom code in kernel space for tracing, observability, security enforcement, performance monitoring, and network packet processing.  As a result, eBPF's versatility and efficiency have established it as a cornerstone technology inside and outside of container environments.

Confidential virtual machines based on AMD SEV-SNP, Arm CCA, or Intel TDX run a full guest OS like Linux, allowing technologies like eBPF to be easily adopted.  If that confidential VM is acting as a Kubernetes node, then it’s likely already using one of the eBPF-powered container network interfaces like Cilium for load balancing, observability, and network security.  If that confidential VM is running endpoint protection or other security monitoring applications, those are also likely to incorporate eBPF programs.

In this session, we examine the benefits of running eBPF inside a confidential VM, as well as the challenges to integrity, attestability, and even confidentiality that can arise. We propose an approach to integrity measurement and attestation for the OS kernel, eBPF subsystem, and eBPF programs that addresses these challenges, effectively enabling the use of modern observability and security applications inside confidential computing environments.
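One way to picture such a measurement scheme is below. This is a hedged sketch, not the presented system: it assumes a verifier can obtain the bytecode of loaded eBPF programs and holds an allowlist of known-good measurements:

```python
import hashlib

def measure(bytecode: bytes) -> str:
    """Measurement of an eBPF program: a hash over its instruction stream."""
    return hashlib.sha256(bytecode).hexdigest()

def attest_loaded_programs(loaded: dict, allowlist: set) -> list:
    """Report loaded eBPF programs whose measurement is not on the allowlist,
    so a verifier can flag unexpected or tampered programs."""
    return sorted(name for name, bytecode in loaded.items()
                  if measure(bytecode) not in allowlist)
```

A real implementation would also have to measure the kernel and the eBPF subsystem itself, since a compromised loader could lie about what is running; that is precisely the runtime-attestation gap the session addresses.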

Attendees will learn:

How modern observability and security applications make use of both static and dynamic eBPF

How threat actors can violate the integrity of eBPF

How runtime attestation of the TEE, guest OS kernel, and running eBPF programs can validate that a confidential VM is in an intended operating state

Confidential Model-as-a-Service on GKE: Tenant Isolation, Future GPU Confidential Nodes, and Control Plane Authority

Alex Bulankou

Senior Engineering Manager, Google

Download Poster

Confidential Computing on GKE
What is Confidential GKE?
GKE aims to be the world’s preferred platform for container-based confidential computing by providing secure isolation and attestation of workloads and the hardware they run on.
Why Confidential GKE?

Growing Demand for AI Data Security and Privacy

Rapid growth and adoption of AI introduce new data security and privacy considerations.

 

Increasing Insider Risk

SaaS and CSP consumers are increasingly aware of insider risk and are seeking solutions to ensure customer trust.

 

New Opportunities in Trusted Multi-Party Collaboration

AI combined with large data sets and advanced cloud computing power presents new opportunities for organizations to gain additional insight.

 

Evolving Regulatory Landscape and Compliance Mandates

Compliance, regulatory, and sovereignty mandates continue to evolve, bringing forward hardened data security and privacy requirements.

Google Cloud Confidential Space and Intel Trust Authority – better together

Raghu Yeluri

Intel Fellow, Lead Architect Confidential Computing Services and Confidential AI,
Intel Corp

Download Poster

Google Cloud Confidential Space is a trusted execution environment (TEE) designed to enable secure multi-party collaboration on sensitive data (for example, regulated or proprietary information) using a pre-defined workload, while ensuring that each party retains the confidentiality and ownership of that data.

For customers who have fully bought into the value proposition of confidential computing, a provider-independent Attestation Verifier Service is an important part of the overall story. The Intel-operated Intel Trust Authority (ITA) is an example of such an independent service that can help customers verify the authenticity and integrity of TEEs.

In this session, we will demonstrate how Google Cloud's Confidential Space TEE can seamlessly leverage ITA. We will walk through the components of Confidential Space and how ITA is integrated into the workflow. The audience will walk away understanding:

What Google Cloud Confidential Space is, how it protects sensitive data from operators, and how the confidentiality, integrity, and ownership of data are ascertained.
The design and workflow for ITA integration with Confidential Space.
An end-to-end demonstration of Confidential Space seamlessly leveraging ITA for attestation verification.
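The relying party's side of such a workflow reduces to checking the claims a verifier like ITA asserts in an attestation token. The sketch below is illustrative only: the claim names (`image_digest`, `debug_enabled`) are hypothetical, and real tokens are signed JWTs whose signature must be verified first:

```python
def validate_attestation_claims(claims: dict, expected_issuer: str,
                                expected_digest: str) -> bool:
    """Check claims from an already-signature-verified attestation token:
    who issued it, which workload image ran, and that TEE debug was off."""
    return (claims.get("iss") == expected_issuer
            and claims.get("image_digest") == expected_digest
            and claims.get("debug_enabled") is False)
```

Only if all checks pass should the relying party release data or keys to the workload.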

Achieving auditability and trust through Azure’s Cloud Transparency Service

Kartik Prabhu

Senior Product Manager, Microsoft Corporation

Download Poster

This session presents a solution based on the confidential computing framework SCONE to enable secure rolling updates for confidential Kubernetes clusters.

Using TEEs as a Credential Proxy for Interop and more

Andrew Miller

Entrepreneur in Residence, Flashbots[X]

Download Poster

This session explains why using TEEs for credential management is a provocative idea for user empowerment.

First, a TEE credential proxy can be used for fine-grained delegation. We built teleport.best, a TEE-based X (formerly Twitter) app that creates single-use "tweet once" tokens. Existing delegation mechanisms like OAuth2 support expiry but not "one-time" counters. The TEE creates a scope restriction beyond what the original account provider envisioned.
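The "tweet once" restriction boils down to a one-time counter that the TEE proxy enforces. A minimal sketch of that mechanism, with hypothetical names (the real teleport.best implementation is not shown here):

```python
import secrets

class OneTimeTokenProxy:
    """Issue single-use delegation tokens: each token authorizes exactly
    one action (e.g. one tweet), unlike expiry-only OAuth2 scopes."""

    def __init__(self):
        self._unused = set()

    def issue(self) -> str:
        token = secrets.token_hex(16)
        self._unused.add(token)
        return token

    def redeem(self, token: str) -> bool:
        """Returns True exactly once per token; replays are refused."""
        if token in self._unused:
            self._unused.discard(token)
            return True
        return False
```

The point of running this inside a TEE is that not even the proxy operator can mint extra uses or replay a spent token, which is what makes the delegation credible.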

Second, TEEs can also be used to create credible commitments. A package manager can encumber the 2FA account for their package, provably enforcing mandatory policies. I'll also present TeeHeeHe, an AI agent that has provably exclusive control of its Twitter account, having changed the password from within the TEE proxy.

References:

https://teleport.best/
DelegaTEE: Brokered Delegation Using Trusted Execution Environments (USENIX Security 2018)
https://medium.com/@helltech/deal-with-the-devil-24c3f2681200

Securing LLMs with Lift and Shift Strategy for AI using Confidential Containers with NVIDIA GPU

Hema Shankar Bontha

Senior Product Manager, NVIDIA

Download Poster

As organizations deploy AI/ML workloads in cloud environments, ensuring data confidentiality and workload isolation is critical. This session explores how Confidential Containers, powered by Kata Containers and NVIDIA GPUs, create a secure execution environment for AI models, including NVIDIA NIMs and other LLMs, without exposing sensitive data.

A lift-and-shift approach enables GPU acceleration within confidential containers. Combined with virtualization reference architectures and attestation capabilities, it allows businesses to migrate existing AI applications into protected environments without modifying code, helping customers adopt confidential computing seamlessly.

Attendees will gain insights into key security mechanisms such as attestation, workload isolation, and key management. Our demo will highlight how confidential computing safeguards AI assets. Additionally, we will explore the role of Kubernetes in orchestrating GPU-enabled confidential workloads, ensuring that security is maintained at scale.

Securing Data and Records Through Azure Confidential Ledger

Shubhra Sinha Kamath

Senior Product Manager, Microsoft

Download Poster

Data integrity and security are paramount in today's digital age! Organizations find value in blockchain technology to ensure the immutability and transparency of their records. In this session, learn how you can leverage it with your existing data sources like Azure SQL and Azure Blob Storage, and for computation transparency! Whether in AI, healthcare, finance, legal, or government services, the use cases are broad and support compliance scenarios too!

Azure Confidential Ledger (ACL) is a managed, decentralized ledger service that leverages blockchain technology to provide tamper-proof storage for sensitive data records. It helps maintain data integrity with end-to-end protection and confidentiality: ACL can provide verifiable proof against unauthorized modifications, shield data from unauthorized access, and ensure a permanent record of transactions or changes.
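The tamper evidence such a ledger provides rests on an append-only hash chain: each entry commits to its predecessor, so modifying any past record invalidates everything after it. A minimal, generic sketch of the idea (not ACL's actual receipt format):

```python
import hashlib
import json

def append_entry(chain: list, record: dict) -> list:
    """Append a record, binding it to the previous entry's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps(record, sort_keys=True)
    entry_hash = hashlib.sha256((prev + payload).encode()).hexdigest()
    chain.append({"record": record, "prev": prev, "hash": entry_hash})
    return chain

def verify_chain(chain: list) -> bool:
    """Recompute every link; any edited record breaks the chain."""
    prev = "0" * 64
    for e in chain:
        payload = json.dumps(e["record"], sort_keys=True)
        if e["prev"] != prev:
            return False
        if e["hash"] != hashlib.sha256((prev + payload).encode()).hexdigest():
            return False
        prev = e["hash"]
    return True
```

ACL additionally runs this ledger inside hardware-backed TEEs and issues cryptographic receipts, so clients can verify inclusion without trusting the service operator.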

Ethics, Transparency, and Industry Transformation in the Age of Confidential Computing and AI

Nicole Yue

CEO, Founder, Whole AI
Co-Founder, DigiDaaS

Download Poster

As Artificial Intelligence (AI) adoption accelerates across healthcare, financial services, industrial systems, and clinical-grade diagnostics, enterprise leaders are confronting increasingly complex challenges around regulatory obligations, cross-border compliance risks, AI model safety, and sovereign data governance. The emerging landscape requires not only technical solutions, but engineered governance frameworks capable of sustaining regulatory-grade accountability across diverse jurisdictions.
Whole AI + DigiDaaS are engineering the Sovereign AI Foundry -- an applied Confidential AI architecture and governance initiative purpose-built for regulated enterprise environments. Our work combines applied research, platform systems engineering, and client-centered co-development to address compliance, model accountability, safety, and jurisdictional control as core design imperatives. Areas of active focus include cross-border sovereignty enforcement models, end-to-end AI chain-of-custody tracking with cryptographically verifiable auditability, and confidential multi-component AI pipelines, structured to align with national and regional governance frameworks, regulated AI compliance standards, and emerging public-private sovereign cloud partnerships.
The Sovereign AI Foundry framework enables regulated enterprises -- including MedTech, Software as a Medical Device (SaMD), clinical-grade diagnostics, financial services, and industrial systems -- to engineer, deploy, and operate Confidential AI pipelines with jurisdictional safety, data residency assurance, and regulator-grade auditability. We are engineering an approach that integrates full-stack confidential compute orchestration, sovereign key management, Trusted Execution Environment (TEE) control, encrypted model inference, federated data collaboration, and cross-border compliance pipelines, architected to support FDA, EU MDR, HIPAA, NYDFS, EU AI Act, GDPR, ISO 13485/14971, IEC 62304/60601 standards while maintaining real-world enterprise deployment velocity and regulatory cooperation readiness.
At Confidential Computing Summit 2025, we are actively engaging co-development partners and professionals in sovereign AI architecture, confidential compute engineering, zero trust infrastructure, regulated sector compliance, public-private governance alliances, sovereign AI chain-of-custody governance, and fiduciary-grade auditability -- across healthcare, financial services, industrial systems, sovereign cloud infrastructure, and regulated data environments.

Protecting PII and AI models with Confidential Containers

Savitri Hunasheekatti

Senior Software Engineer, IBM

Download Video

Confidential computing, and in particular Confidential Containers (CoCo), enables secure processing of sensitive data and applications by leveraging Trusted Execution Environments (TEEs). It protects data in use by performing computation in a hardware-based, attested TEE, making it impossible to extract information from a memory dump because memory is encrypted.

This work explores the deployment of a text-summarization AI application for medical report analysis within the CoCo framework. The application accepts individual medical reports and summarizes them using a large language model (LLM). IBM Secure Execution for Linux provides technical assurance: administrators cannot access the PII-containing medical reports, rather than merely being trusted not to. The solution also uses an encrypted contract to enforce a zero-trust policy at deployment.

The contract defines different personas and ensures access isolation between them through encryption. It also defines restrictions on the workload that will run in the pod, along with any other authorizations required by the server. Such contracts can incorporate further tokens, credentials, or certificates that are needed by the workload and should not be shared with others, such as the Kubernetes admin. The Workload Provider supplies the workload and defines the workload policies, such as restricting access to the deployed workload, for example disallowing any kubectl exec operations or other API endpoints considered untrusted.

Since the contract is encrypted, everything within it is confidential and cannot be modified by anyone, including Kubernetes admins. The LLM used for summarization was fine-tuned for a specific purpose; it may contain business IP or have value in itself, as it is deployed in an environment operated by neither the healthcare institution nor the model provider. The solution ensures data privacy and model protection by enforcing zero-trust policies through signed container images and immutable policy definitions. Unauthorized access and tampering attempts, even by privileged administrators, are blocked, safeguarding both sensitive personal data and the proprietary AI model. With this solution, we do not need to rely on trust in Kubernetes or IaaS admins. This approach demonstrates how confidential containers can meet strict security requirements in healthcare and other industries handling critical information.
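The policy enforcement described above can be pictured as a simple allow/deny check over the (decrypted) contract. The structure and field names below are hypothetical illustrations, not the actual IBM contract schema:

```python
# Hypothetical, simplified shape of a decrypted deployment contract:
# a workload policy that forbids exec and whitelists agent API endpoints.
CONTRACT = {
    "workload": {
        "policy": {
            "allow_exec": False,  # disallow kubectl exec into the pod
            "allowed_apis": ["CreateContainer", "StartContainer"],
        },
    },
}

def authorize(contract: dict, operation: str) -> bool:
    """Enforce the workload policy: deny exec unless explicitly allowed,
    and permit only the listed API endpoints."""
    policy = contract["workload"]["policy"]
    if operation == "exec":
        return policy["allow_exec"]
    return operation in policy["allowed_apis"]
```

Because the contract is encrypted and evaluated inside the TEE, a cluster admin cannot loosen this policy after deployment; that is what makes the enforcement zero-trust rather than trust-based.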

Optimizing Large Language Models (LLMs) for Efficient Inference: Techniques and Best Practices

Kailash Thiyagarajan

Senior Machine Learning Engineer, Apple

Download Video


Organized by OPAQUE

© 2024 OPAQUE. All rights reserved.  |  OPAQUE, The Confidential AI Company™  |  Privacy Policy  |  Terms of Service