This keynote opens the summit with welcome remarks.
As a global-scale agentic web emerges—powered by AI agents with human-like reasoning but operating at machine speed—the stakes have never been higher. Our legacy trust frameworks—contracts, checklists, retroactive audits—can’t keep up. In this keynote, Aaron Fulkerson, CEO of OPAQUE, calls on the ecosystem to stop treating infrastructure as a product and start treating it as a shared responsibility. Drawing on real-world momentum from Meta, Microsoft, ServiceNow, Google, and more, he explains why the systems underpinning this new world must be built on verifiable proof—not assumptions. This talk goes beyond Confidential Computing; it’s about designing systems that can hold up under global pressure. The call to action? Build the trust layer the world needs. Together.
Download Slides
Learn how Google Cloud is fortifying the entire data lifecycle -- protecting your AI/ML models and sensitive information not just while in use with confidential computing, but also safeguarding it against future quantum attacks when at rest and in transit. Discover how tools like Gemini and our PQC strategy are essential for true end-to-end, quantum-resistant confidentiality.
Download Slides
Confidential computing (CC) is reshaping the future of cloud security by extending data protection to computation. This paradigm shift not only fortifies defenses against cyber threats but also fuels innovation, empowering businesses to explore new possibilities.
Join us as we highlight notable new use cases that uncover the transformative potential of CC and forge a future that is both secure and innovative, especially in the era of generative AI.
Download Slides
Today, business insights and opportunities are revealed through analysis of large data sets, often aggregated from different countries, brokers, and internal department silos, each with specific confidentiality, regulatory, security, and ownership considerations.
This session features a discussion with Graham Mudd, a longtime navigator of data collaboration challenges at Meta and founder of privacy-preserving AdTech platform Anonym, now part of Mozilla. The discussion will reveal how confidential computing can be successfully applied in many industries to combine diverse data sets while satisfying the seemingly impossible security and privacy requirements.
Download Slides
How enterprises can deploy autonomous agents without chaos or compliance risk
Key Takeaways:
As generative and retrieval-augmented AI systems scale across the enterprise, explainability is no longer enough. In industries like high-tech, advanced manufacturing, and finance, the data exhaust from GenAI isn’t just noise—it’s signal. And when that signal includes sensitive IP, decision-making logic, or user prompts, it becomes a strategic asset—or a liability.
This keynote panel brings together leaders from Microsoft, Google, Intel, and McKinsey to explore a new foundation for enterprise AI: provable guarantees of privacy, integrity, and security. From silicon to system policy, these organizations are rethinking how AI gets built and deployed in environments where compliance, competition, and control are non-negotiable.
Attendees will be engaged in an interactive Q&A segment, fostering deeper understanding and active dialogue around the topics covered on day one of the summit. The session places particular emphasis on how the summit experience can empower you to more effectively leverage confidential computing within your respective organizations and projects.
Since the late 2010s, Confidential Computing has evolved from a promising concept into a foundational security technology for modern workloads. In this keynote, Ravi Kuppuswamy, SVP of Server Product & Engineering at AMD, will explore how the rise of AI has elevated the importance of Confidential Computing—making it essential for protecting models, data, and workloads while enabling organizations to harness the full potential of AI. He will walk through real-world deployments of AMD Infinity Guard, showcasing how it helps enterprises meet evolving security and privacy demands. Ravi will be joined on stage by BeekeeperAI, who will share how they leverage AMD’s technology to protect sensitive healthcare AI workloads.
Download Slides
AI models have had a ~4x YoY increase in compute for the last 70 years. In the security domain, what has 4x effective compute brought us in 2025 and what will it bring us in 2026? In this session, Jason will give us a survey of the bleeding edge of security applications, from a frontier AI lab perspective, including advanced persistent threats, and what new security threats are coming from AI and can be defended by AI in 2026 and beyond.
Download Slides
Confidential Computing provides obvious workload isolation benefits for sensitive workloads, but the use cases extend much further. In this session we'll explore two new ways to look at problems in your sector and to identify whether Confidential Computing's combination of TEEs and attestation offers a good fit to address them.
Download Slides
The exponential growth of GPU demand, driven by AI model scaling laws and compute-intensive workloads, is reshaping the global technology landscape.
This talk explores how the intersection of GPU scaling dynamics and confidential computing is unlocking new frontiers in secure AI.
We analyze market trends, architectural breakthroughs, and use cases demonstrating how organizations can harness GPU-powered confidential environments.
Download Slides
Organizations scaling AI today are shaping the future of their industries, creating new growth opportunities and redefining how business operates. We have analyzed over 2,000 gen AI projects delivered to clients, conducted surveys with more than 3,000 C-level executives and drawn on insights from our own practitioners.
Scaling AI starts with data. 70% of the 2000 companies we surveyed recognize its importance, yet few fully leverage their proprietary data. Learn what companies that are AI reinvention-ready are doing and how they have built the essential capabilities needed to scale AI successfully.
Download Slides
The explosive growth of AI relies on unprecedented amounts of sensitive data. While Confidential Computing promises to protect data during processing using hardware enclaves, realizing this promise for large-scale AI workloads presents significant systems challenges. The next frontier of enterprise AI and data-driven decision-making hinges on one critical pillar: Confidential AI. In this keynote, Ion Stoica will reveal how this technology is reshaping how organizations build, deploy, and scale AI securely — accelerating innovation while preserving privacy and compliance. With insights from academic research and industry-leading initiatives, Ion will map the path forward for enterprises looking to turn secure computation of large-scale sensitive data from a defensive measure into a competitive advantage.
Download Slides
As enterprise AI systems scale into production, confidential AI has emerged as a critical enabler—transforming how organizations process sensitive data, enforce security policies, and build trust in AI. With Apple’s Private Cloud Compute, Meta’s Private Processing, and Microsoft’s Confidential Whisper API leading the way, this panel will unpack the architectural and industry shifts driving adoption across regulated sectors like finance, healthcare, and telecom. Panelists will explore how hardware-backed guarantees—from TEEs to secure GPUs and attested runtimes—are being used to protect prompts, model behavior, and proprietary data across distributed, multi-cloud environments. We’ll share lessons from real-world deployments, examine how confidential computing intersects with responsible AI and compliance, and debate what’s still missing—from full-stack attestation to fine-grained policy enforcement—to make confidential AI scalable and production-ready.
Flower has established itself as an industry standard for building production-grade federated AI, gaining widespread adoption through its flexible "any cloud, any hardware" approach. It provides the industry's broadest support for confidential compute across all major cloud platforms -- including AWS, GCP, and Azure -- and leading hardware providers such as Nvidia, AMD, and Intel. This talk briefly outlines how Flower seamlessly integrates confidential computing into critical federated AI workflows, from fine-tuning, inference, and RAG, to even pre- and post-training. Attendees will learn how Flower's open-source framework delivers best-in-class ease of development and maintenance, enhancing security, ensuring compliance, and driving innovation in enterprise AI deployments.
To close the Summit, we’ll kick off with a high-energy live audience Q&A, where attendees can ask rapid-fire questions with answers from a panel of experts.
Next, we’ll transition into an Insights session, inviting attendees to share key takeaways, learnings, or benefits they’ve gained from the Summit.
And finally, we will share a Summit Wrap-Up, highlighting major themes, memorable quotes, and what’s ahead.
As AI agents gain autonomous capabilities to interact with enterprise systems, they introduce unprecedented data security risks. This session explores the emerging field of confidential AI agents - a breakthrough approach that enables autonomous AI to operate on sensitive data without exposing it to security or compliance vulnerabilities.
Learn how cryptographic policy enforcement, secure enclaves, and real-time governance allow organizations to deploy AI agents across previously restricted data sources while maintaining regulatory compliance. Discover how leading enterprises are using these techniques to accelerate AI deployment cycles, eliminate manual compliance checks, and turn inaccessible sensitive datasets into strategic assets through secure automation. This talk provides a practical roadmap for organizations looking to balance innovation speed with strict security requirements.
Download Slides
In this talk, we introduce ManaTEE, an open-source framework designed to enable privacy-preserving data analytics for public research. Private data is invaluable not only for businesses but also for critical research domains such as public health, economics, social sciences, and civic engagement.
However, conducting public research with private or proprietary data presents significant challenges. Directly sharing such data poses privacy risks, is often prohibited by regulations, and may conflict with business interests. ManaTEE addresses these challenges by integrating Privacy Enhancing Techniques (PETs), including confidential computing and differential privacy (DP), to protect sensitive data while maintaining usability. By leveraging Trusted Execution Environments (TEEs), ManaTEE ensures data confidentiality and code integrity while also providing proof of execution, allowing researchers to cryptographically verify result integrity through remote attestation. We demonstrate two key applications of ManaTEE in public research: (1) Trusted Research Environments (TREs) – securely enabling researchers to analyze private datasets from multiple organizations without direct access to raw data; and (2) Private AI Model Evaluation – allowing researchers to evaluate AI models (e.g., assessing fairness, reasoning, and bias) while preserving model privacy.
As an open-source, easy-to-deploy, and user-friendly framework, ManaTEE empowers both businesses and researchers to collaborate on privacy-preserving public research, bridging the gap between data protection and societal benefit.
GenAI LLM firewalls function as Man-in-the-Middle (MITM) proxies to inspect traffic for safety, security, and accuracy. However, this process breaks end-to-end TLS encryption, as firewalls intercept and decrypt traffic. This introduces security and privacy concerns and makes breach detection more difficult. In this presentation, we introduce a new approach where each network intermediary performs self-attestation, allowing enterprises to assess the trustworthiness of every link in the traffic chain. By integrating confidential computing, this self-verification improves security, visibility, and data integrity. The session will conclude with a live demonstration of this solution.
Details: While TLS/SSL is commonly seen as providing end-to-end confidentiality, the reality is more complicated. The rise of technologies such as LLM firewalls, alongside other security solutions, means that TLS connections are frequently terminated mid-traffic—sometimes multiple times. This allows for deep content inspection and the creation of new TLS connections to upstream services, effectively disrupting the end-to-end security model. Enterprises prioritize both security and content inspection, but tracking the various intermediary services that a connection traverses is a challenge. Currently, there is no simple way for enterprises to gain visibility into these intermediary services or evaluate their security postures. This lack of transparency complicates troubleshooting and impedes the identification of potential culprits in cases of identity theft or data breaches.
This presentation introduces a novel approach in which each intermediary MITM entity performs self-attestation in every session using a unique workload signing key. This method enables both client-owned security services (e.g., LLM firewalls) and service provider-owned security entities to map out the MITM entities involved in the traffic path, along with their locations and respective owners.
Using confidential computing, the workload signing keys (both private and public) are generated by the MITM entities. The signing public key, along with the workload identity (e.g., code hash, organization signing public key) and authorized confidential computing host hardware IDs, is stored in a transparency service. Each MITM entity then produces hardware-attested evidence, which includes the workload identity, the workload signing public key, a nonce provided by the verifier, and the confidential computing host hardware ID. Enterprises and third parties can verify the trustworthiness of traffic paths by using this hardware-attested evidence and the details stored in the transparency service, thereby improving both security and visibility across networks.
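As a rough illustration of the verification flow described above, the sketch below checks one intermediary's evidence against a transparency-service record. The evidence fields, the record layout, and the hardware signature check are placeholders for whatever quote-verification routine the underlying confidential computing platform provides.

```python
import secrets
from dataclasses import dataclass

@dataclass
class Evidence:
    # Fields named in the abstract; the concrete wire format is illustrative.
    workload_code_hash: str
    org_signing_pubkey: str
    workload_signing_pubkey: str
    host_hardware_id: str
    nonce: str
    hw_attestation_sig: bytes  # signature rooted in the CC hardware/firmware

def verify_mitm_entity(evidence: Evidence, expected_nonce: str,
                       transparency_record: dict, verify_hw_signature) -> bool:
    """Check one intermediary's self-attestation against its transparency-service entry.

    `transparency_record` holds the registered code hash, signing public key, and
    authorized hardware IDs; `verify_hw_signature` stands in for the platform
    vendor's quote-verification routine.
    """
    if evidence.nonce != expected_nonce:                                   # freshness
        return False
    if evidence.workload_code_hash != transparency_record["code_hash"]:    # workload identity
        return False
    if evidence.workload_signing_pubkey != transparency_record["signing_pubkey"]:
        return False                                                       # key binding
    if evidence.host_hardware_id not in transparency_record["authorized_hw_ids"]:
        return False                                                       # approved CC host
    return verify_hw_signature(evidence)                                   # hardware-rooted proof

# The verifier issues a fresh nonce per session to prevent evidence replay.
session_nonce = secrets.token_hex(16)
```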
To further enhance trust, privacy-preserving selective disclosure techniques, such as “Encrypted Content-Encoding for HTTP” (RFC 8188), can be applied. For instance, the egress LLM firewall might perform only data loss inspection. In this case, the client encrypts the HTTP request body using the ingress LLM firewall’s public key, rendering the body anonymous to the egress LLM firewall. The ingress LLM firewall then decrypts the body with its private key and performs content inspection.
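To make the selective-disclosure idea concrete, here is a small sketch using PyNaCl sealed boxes in place of RFC 8188's HKDF/AES-GCM content encoding: the client seals the request body to the ingress firewall's public key, so the egress firewall can forward the message and inspect metadata but never sees the plaintext. Key names and the payload are illustrative.

```python
from nacl.public import PrivateKey, SealedBox

# Hypothetical key pair: the ingress LLM firewall publishes its public key.
ingress_key = PrivateKey.generate()
ingress_pub = ingress_key.public_key

# Client side: seal the HTTP request body to the ingress firewall's public key.
# The egress firewall (doing only data-loss inspection) forwards the ciphertext
# and never sees the plaintext body.
body = b'{"prompt": "summarize the attached quarterly figures"}'
sealed_body = SealedBox(ingress_pub).encrypt(body)

# Ingress firewall side: decrypt with its private key and run content inspection.
plaintext = SealedBox(ingress_key).decrypt(sealed_body)
assert plaintext == body
```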
Download Slides
This talk provides an overview and demo of Azure’s upcoming Cloud Transparency Service (CTS). The Cloud Transparency Service (CTS) is a confidential-grade Azure service designed to provide a comprehensive, tamper-proof history of signed statements (e.g., release artifacts) authorized by configurable registration policies. CTS enables forensic investigation, creates strong deterrents for malicious behavior, and supports the trust-but-verify model. In this talk we will provide an overview of how we intend to leverage CTS to be a key differentiator for Azure and talk about exciting upcoming releases such as Confidential AI Inferencing and OCP SAFE FW.
Download Slides
Advances in Genomics technology and healthcare AI, coupled with decreasing processing and analytics costs, are leading to wider adoption in areas such as infectious disease surveillance & therapeutics, early-stage cancer detection & drug resistance, immunology & auto-immune diseases, and identification of rare genetic diseases.
Unfortunately, privacy and security challenges continue to hinder combining disparate, granular Genomics data (owned by different stakeholders) with relevant patient and related data sets. Makers of healthcare AI are equally concerned about IP protection while refining their models’ accuracy and lowering bias using “trusted granular data” distributed across ecosystems. Confidential computing technology, coupled with user-friendly “Attestation of Data & AI Trust,” offers a differentiated solution to these challenges.
Palmona Pathogenomics has developed Genomics-based Anti-Microbial Resistance prediction AI models for training and inferencing on data sets provided by our customers. We have leveraged the confidential computing organization's principles and architectural recommendations to develop a “trusted insights” architecture for Infectious Diseases & AMR. To demonstrate “evidence of trust” while customers’ data and our AI assets were “in use”, we partnered with the Google confidential computing team to pilot multi-party collaboration using “Google Confidential Spaces” and “Attestation of Trust” reporting.
This session highlights key learnings, feedback from customers and partners as we work on a scalable architecture, instill rigor of AIOps, and further “simplify” the Attestation of Trust required by clinicians, bioinformaticians, compliance experts, and regulatory authorities. This pilot has generated tremendous interest in refining the architecture for oncology & immunology use cases. We will share the next steps, such as the inclusion of infectious disease and oncology data sets across Africa and responding to EU AI regulators’ compliance mandates.
Download Slides
Enterprise leaders know the future of AI will be agentic, autonomous, and everywhere. But most architectures weren’t built for that future—and the result is a trust gap. AI projects stall, security concerns dominate board conversations, and high-value data remains off limits.
This panel brings together some of the most forward-thinking technology executives across cloud, enterprise software, product-led growth, and consulting to explore what it takes to build AI systems you can stake your business on. From runtime guarantees to cryptographic auditability, they’ll share how trust is shifting from a governance afterthought to a first-class system design principle.
Whether it’s orchestrating confidential agents across teams, enforcing data policy without slowing developers, or giving customers provable guarantees of integrity and sovereignty, these leaders are rethinking the entire foundation of enterprise AI. The result: faster deployment, broader access to sensitive data, and architectures that are ready for what’s next.
As Teresa Tung of Accenture’s Trusted Data Services puts it: “The winners in enterprise AI will be the ones who can act on the most valuable data—without ever losing control.”
If you're an ISV building a solution that processes sensitive data — AI or not — you need to think about security. This is particularly true when your solution is deployed in environments that are not under your control: public clouds or private clouds belonging to customers. It becomes even more pressing when your solution processes data from multiple, distributed vendors, where there is no single trusted infrastructure. In these cases, security is not just a technical challenge; it can be a business blocker.
Confidential computing offers a unique opportunity to resolve the tension between the need to process data, extract value from it, and keep it secure. But here’s the catch: acquiring hardware with confidential computing capabilities is only part of the solution. The software stack must catch up.
In this talk, we will show you how Canonical and Ubuntu enable ISVs to build confidential computing solutions with Ubuntu Confidential VMs. We’ll explore how to deploy these solutions securely across public, private, and edge cloud environments using Canonical's MicroCloud platform. This session will provide concrete guidance on how to build and deploy secure solutions that protect data and ensure privacy, even in multi-tenant environments.
Confidential computing is not a luxury — it’s a necessity for securing data in today’s complex, multi-cloud world.
Download Slides
Secure AI relies on operating-system-enabled confidential communication between the CPU and GPU. As enterprise use of AI to extract business value grows, in-use protection of proprietary data and of model parameters evolved through fine-tuning adds defense in depth to existing IT controls. In this session, Nvidia, Intel and Canonical will showcase improved ease of adoption and a glimpse at the performance of an Nvidia H100 GPU with its confidential-mode driver, running in an Intel TDX protected confidential VM on the Canonical Ubuntu operating system. This session will explain solution components and share resources that enable audience members to reproduce the solution.
Download Slides
The relentless advance of AI and Machine Learning has ushered in an era of unprecedented innovation, yet it simultaneously escalates the critical need for robust data security and privacy. As global regulations tighten and cyber threats grow more sophisticated, safeguarding both sensitive user data and proprietary AI/ML models has become a non-negotiable imperative.
Join us to explore how Confidential Computing stands as the foundational technology for addressing these challenges, ensuring data and models remain encrypted and protected throughout their entire lifecycle – even during processing. We’ll examine how Google is using Confidential AI not only to mitigate critical risks and protect valuable AI IP, but also to unlock new possibilities for secure multi-party collaboration on sensitive datasets, driving compliance and trust in an AI-driven world.
Download Slides
Data integrity and security are paramount in today's digital age. Organizations find value in blockchain technology to ensure the immutability and transparency of their records. In this session, learn how you can leverage it with existing data sources such as Azure SQL and Azure Blob Storage, and for computation transparency. Across AI, healthcare, finance, legal, and government services, the use cases are broad and support compliance scenarios as well.
Azure confidential ledger (ACL) is a managed, decentralized ledger service that leverages blockchain technology to provide tamper-proof storage for sensitive data records. It helps maintain data integrity with end-to-end protection and confidentiality: ACL provides verifiable proof against unauthorized modifications, shields data from unauthorized access, and ensures a permanent record of transactions or changes.
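To picture the tamper-evidence property (this is a generic sketch of the underlying idea, not the Azure confidential ledger API), each appended record can commit to the digest of its predecessor, so altering any earlier entry invalidates every digest that follows:

```python
import hashlib
import json

def chain_append(ledger: list, payload: dict) -> dict:
    """Append a record whose digest covers the payload and the previous digest."""
    prev = ledger[-1]["digest"] if ledger else "0" * 64
    body = json.dumps({"prev": prev, "payload": payload}, sort_keys=True)
    entry = {"prev": prev, "payload": payload,
             "digest": hashlib.sha256(body.encode()).hexdigest()}
    ledger.append(entry)
    return entry

def chain_verify(ledger: list) -> bool:
    """Recompute every digest; tampering with any earlier entry is detected."""
    prev = "0" * 64
    for entry in ledger:
        body = json.dumps({"prev": prev, "payload": entry["payload"]}, sort_keys=True)
        if entry["prev"] != prev or hashlib.sha256(body.encode()).hexdigest() != entry["digest"]:
            return False
        prev = entry["digest"]
    return True

ledger = []
chain_append(ledger, {"event": "record-created", "id": 42})
chain_append(ledger, {"event": "record-updated", "id": 42})
assert chain_verify(ledger)
```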
Download Slides
The gap between AI pilots and production is where most enterprise initiatives stall—especially when sensitive data, multi-stakeholder workflows, or regulated domains are involved. This panel brings together technical and business leaders from healthcare, SaaS, consulting, and AI startups to explore how verifiable trust—not just explainability—is becoming the cornerstone of scalable AI. From ServiceNow’s internal platform transformation to J&J’s medical imaging workflows, and from cross-organizational data orchestration at Accenture to revenue acceleration at Bloomfilter, these leaders will share how Confidential Agents and runtime guarantees are enabling them to deploy faster, govern confidently, and win stakeholder trust. Topics will include real-time policy enforcement, secure RAG, cryptographic audit trails, and how shifting control to the user is accelerating adoption across industries.
As AI systems evolve into distributed, agentic architectures—interacting autonomously across data silos, models, and organizational boundaries—the need for trust-by-design becomes existential. This talk introduces a blueprint for establishing trust in multi-agent systems -- enabling secure AI deployment without compromising data privacy, model integrity, or digital sovereignty. We explore how to define trust for diverse stakeholders -- users, model creators, infrastructure providers -- and how to enforce security, usage policies, and accountability in a non-deterministic AI ecosystem. The talk presents key design tenets, threat assumptions, and enforcement primitives for enabling confidential computation across multi-party AI workflows. Attendees will leave with a practical framework and vocabulary for building and evaluating secure AI systems that can withstand adversarial environments and regulatory scrutiny.
Download Slides
This presentation provides an update on the work done by Linaro, Arm, and community members to advance the support of confidential computing on Arm platforms since our presentation at the Confidential Computing Summit last year.
From there, we will outline our plans for the coming year in areas such as device assignment, memory encryption, endorsement API, and how containers can leverage all this technology to protect your workload.
Download Slides
There are many well-known challenges when building scalable confidential services such as AI inference: How do clients verify attestation? How do we translate the measurements reported by attestation into meaningful security and privacy properties, for instance that prompts and model outputs remain confidential even from the service operator and the cloud service provider? How do we handle updates to the service that may break its advertised security goals? In this session, we explore how to address these challenges with Azure's suite of services by looking at the design of Azure's confidential inferencing service.
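One way to frame the "interpreting measurements" question is as a client-side policy check: raw attestation claims are mapped onto the properties the caller actually cares about before any prompt is sent. The sketch below is hypothetical; claim names, values, and the policy format are illustrative rather than any service's actual token schema.

```python
# Hypothetical attestation claims as a client might see them after the hardware
# quote has been verified; names and values are illustrative only.
claims = {
    "tee_type": "confidential-gpu-vm",
    "launch_measurement": "9f2c...e1",   # digest of the inference stack
    "debug_enabled": False,
    "service_version": "2025.06.1",
}

# The client's policy encodes the security properties it actually wants:
# a known-good code identity, no debug access, and an approved service version.
policy = {
    "allowed_measurements": {"9f2c...e1", "a77b...03"},
    "allowed_versions": {"2025.06.0", "2025.06.1"},
}

def prompts_stay_confidential(claims: dict, policy: dict) -> bool:
    return (
        claims["tee_type"] == "confidential-gpu-vm"
        and not claims["debug_enabled"]
        and claims["launch_measurement"] in policy["allowed_measurements"]
        and claims["service_version"] in policy["allowed_versions"]
    )

assert prompts_stay_confidential(claims, policy)
```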
Download Slides
The session will provide an overview of the AI market and why companies are struggling to unlock its full potential. We will discuss some of the major roadblocks to AI adoption, ranging from outdated data protection techniques to data fragmentation and compliance with regulations. From there we will go deeper into how technology, and Confidential Computing in particular, is filling that gap as part of enterprise platforms that enable scalable AI solutions. We will conclude by looking at practical applications and use cases such as Confidential AI and Agents, Multi-Party Collaboration, and Infrastructure Extension. You will gain actionable insights on how confidential computing can enable enterprises to scale AI solutions.
Download Slides
Retrieval Augmented Generation (RAG) is revolutionizing how we interact with large language models, enabling access to up-to-date and domain-specific knowledge. However, sensitive data often limits its application in regulated industries. This talk introduces a novel approach to Confidential RAG, leveraging confidential computing principles to protect both the knowledge base and the query process.
We'll explore how to build a secure and scalable RAG pipeline using a combination of cutting-edge technologies:
Confidential GPUs: We'll demonstrate how to utilize Confidential GPUs (e.g., on cloud platforms) to perform sensitive vector embeddings and similarity searches within a hardware-protected enclave. This ensures that both the embeddings and the RAG model parameters remain encrypted and isolated from the underlying infrastructure.
OpenSearch as a Secure Knowledge Base: We'll highlight the use of OpenSearch as a document store for confidential data. This includes considerations for encryption at rest and in transit, along with access control mechanisms. We will show how OpenSearch can be tuned to provide low-latency retrieval while respecting data security constraints.
Kubeflow for Orchestration and Reproducibility: We'll showcase how Kubeflow orchestrates the entire RAG pipeline, from data ingestion and embedding generation to query processing and response generation, in a secure and reproducible manner. This approach allows for consistent and auditable deployments across different environments.
Security Best Practices:
Attendees will gain a practical understanding of how to build a Confidential RAG system that balances security, performance, and scalability, enabling the deployment of RAG in even the most sensitive environments.
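As a concrete flavor of the retrieval step in the pipeline above, the sketch below issues a k-NN query to OpenSearch over TLS using the opensearch-py client. The endpoint, credentials, index, and field names are placeholders, and in the architecture described here the embedding and search client would run inside the confidential GPU enclave.

```python
from opensearchpy import OpenSearch

# Placeholder endpoint and credentials; in the pipeline above this client runs
# inside the TEE and talks to OpenSearch over TLS with encryption at rest enabled.
client = OpenSearch(
    hosts=[{"host": "opensearch.internal", "port": 9200}],
    http_auth=("rag-service", "example-password"),
    use_ssl=True,
    verify_certs=True,
)

def retrieve(query_embedding, k=4, index="confidential-docs"):
    """k-NN search against a vector field named 'embedding' (k-NN plugin syntax)."""
    body = {
        "size": k,
        "query": {"knn": {"embedding": {"vector": query_embedding, "k": k}}},
    }
    hits = client.search(index=index, body=body)["hits"]["hits"]
    return [h["_source"]["text"] for h in hits]
```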
Download Slides
In this session, Lisa Loud, Executive Director of the Secret Network Foundation, will explore the transformative power of confidential computing in enabling privacy-first applications within the rapidly evolving generative AI landscape. Attendees will gain actionable insights into real-world implementations of privacy-preserving technologies, including how they are utilized to safeguard sensitive data and empower decentralized applications.
Key takeaways include:
This session is ideal for professionals seeking to enhance their knowledge of privacy-first solutions in AI and blockchain.
Download Slides
Fujitsu is developing FUJITSU-MONAKA, a next-generation energy-efficient processor based on Armv9-A architecture, scheduled for release in 2027. This processor aims to meet the future performance and energy efficiency demands of AI and datacenter workloads, contributing to a carbon-neutral digital society through Fujitsu's innovative 3D many-core and ultra-low-voltage microarchitecture technologies. FUJITSU-MONAKA will also incorporate Arm Confidential Compute Architecture (CCA) to protect highly sensitive data in workloads such as financial transactions, healthcare data analysis, and government operations.
This session will discuss Fujitsu's open-source software (OSS) development for Arm CCA and Confidential Computing (CC). Fujitsu has a long history of contributing to OSS, including the Linux kernel and Kubernetes, and understands its crucial role in accelerating industry innovation. We are currently focused on building a comprehensive software stack for cloud and data center use cases based on Confidential VMs (CVM) and Confidential Containers (CoCo). We are actively working on Arm CCA enablement for CVM software, such as libvirt, OpenStack, and KubeVirt. Furthermore, we have started development on CoCo, focusing on both Arm CCA-specific and architecture-independent features. For example, while CoCo Peer-Pods currently only support hyperscaler infrastructures, we plan to extend support to OpenStack and KubeVirt, enabling OSS-based cloud infrastructures to provide CoCo services. We will also share the future outlook for FUJITSU-MONAKA with Arm CCA, including its target use cases.
Download Slides
Confidential computing relies on Trusted Execution Environments (TEEs) to ensure the integrity and confidentiality of sensitive workloads. A cornerstone of TEE security is remote attestation, which enables users to verify that a given environment is authentic and uncompromised. However, as modern applications increasingly adopt microservices architecture—often across cloud-based container infrastructures—attestation becomes more complex. Verifying each service instance or container individually can be prohibitively expensive and cumbersome, particularly when these services must also mutually attest to each other. In this work, we propose a novel framework that combines a centralized attestation verification service with zero-knowledge proofs (ZKP) to mitigate trust concerns. This centralized approach eases the burden on users, who no longer need to verify multiple microservices individually. Moreover, by leveraging zero-knowledge proofs, the attestation verification service can establish its trustworthiness and validate critical security properties of TEEs and applications without disclosing the cluster infrastructure or the source code of the applications, which is a vital requirement for applications whose code holds substantial commercial value.
Confidential computing is critical for many use cases across various regulated industries and markets. This session provides an overview of various use cases and a deep dive into a healthcare use case, exploring how a confidential computing solution supports the implementation of a nationwide digital healthcare transformation and enables confidential computing at scale.
Download Slides
At the end of 2024, the European Union ratified the Cyber Resilience Act, a piece of legislation that will apply to almost all hardware and software sold or distributed within the EU. Most of the measures don't start coming into force until the end of 2026, but the requirements on businesses are complex and wide-ranging -- and the penalties for non-compliance severe. If we're not preparing our businesses, our products and our processes now, we will run into trouble later in a number of areas, from vulnerability management and incident disclosure to risk management, architectural review, and compliance.
This session will give a brief introduction to the CRA and discuss likely impacts in terms of those creating, selling or deploying PDEs ("Products with Digital Elements") which use Trusted Execution Environments and/or Attestation Verification Services.
Microservice architecture has become a popular system design option in recent years due to its agility and scalability. Business logic is decoupled into multiple services, and data can easily flow among them. However, this brings another critical challenge in data protection - how to safeguard sensitive data while maintaining the benefits of microservices. In this talk, we present our innovative solution to protect the sensitive data lifecycle in a large-scale microservice architecture. By utilizing confidential computing and other privacy-enhancing technologies, our solution guarantees that sensitive data can only be accessed by minimal logic running in trusted execution environments. In addition, to provide better transparency on sensitive data usage, computations in the TEE can be audited and verified by external parties through the remote attestation mechanism. Finally, we will also discuss the challenges and opportunities of applying confidential computing at scale.
Download Slides
Korea Telecom is launching its Secure Public Cloud solution with Azure confidential computing to address the unique needs of regulated industries and public sector customers. In Korea, the importance of secure cloud solutions cannot be overstated due to stringent data privacy regulations, increasing cyberattack threats, and the growing need for data sovereignty in AI innovation.
KT's journey of adopting Azure confidential computing has been pivotal in enhancing their solution with stronger data protection capabilities and providing a more trusted environment for KT developers and users. This journey involved careful considerations to migrate legacy on-premise apps onto confidential computing at scale and ensure that KT could manage the new computing environment efficiently. KT will share lessons from their initial migration and scale-up journey. Looking ahead, KT is committed to a future where confidential computing plays a central role in its cloud strategy, ensuring that its data remains secure and private while innovating with Microsoft.
Download Slides
AMD SEV Confidential Computing has been in production for years, powering mission-critical workloads across Microsoft Azure, Google Cloud, AWS, Oracle Cloud, and more. The latest AMD EPYC™ “Turin” processors, featuring Trusted IO support built on the TDISP standard, strengthen this robust platform. This session provides a focused ecosystem update: what’s deployable now, broad cloud support, mature Linux kernel integration, and readiness for emerging AI workloads.
We’ll feature BeeKeeperAI's EscrowAI™ platform as a real-world example. This zero-trust architecture allows life science companies and research institutions to collaborate securely on encrypted healthcare data—from rare disease datasets to genomic analysis—while keeping algorithms and sensitive data cryptographically isolated throughout execution. These advances show that confidential computing has moved beyond proofs of concept to real-world solutions in regulated industries.
Download Slides
Data may be encrypted at rest and in transit, yet today’s RAG and vector-search stacks still leak sensitive embeddings in use. This talk introduces CyborgDB, a drop-in proxy that keeps every vector, index, and filter encrypted end-to-end while sustaining sub-10 ms retrieval. We’ll dissect the technical details (forward-secret cryptography, performance optimizations), share benchmarks (CPU & GPU), and map the real-world trade-offs so you can ship confidential AI in production.
Download Slides
This session features a panel of leaders from the Confidential Computing Consortium (CCC), including representatives from the Linux Foundation, Outreach Committee, Technical Advisory Council (TAC), and the Governing Board. It will explore the consortium’s mission, key project milestones, cross-industry initiatives, and strategic priorities.
Attendees will gain insights into how CCC is shaping the future of confidential computing through collaborative development, community engagement, and ecosystem alignment. Learn how to drive industry-wide confidential computing initiatives and take advantage of this unique opportunity to hear directly from CCC leadership on how to get involved.
Download Slides
Confidential computing is rapidly evolving with Intel TDX, AMD SEV-SNP, and Arm CCA. However, unlike TDX and SEV-SNP, Arm CCA lacks publicly available hardware, making performance evaluation difficult. While Arm's hardware simulation provides functional correctness, it lacks cycle accuracy, forcing researchers to build best-effort performance prototypes by transplanting their CCA-bound implementations onto non-CCA Arm boards and estimating CCA overheads in software. This leads to duplicated efforts, inconsistent comparisons, and high barriers to entry.
In this talk, I will present OpenCCA, our open research framework that enables CCA-bound code execution on commodity Arm hardware. OpenCCA systematically adapts the software stack—from bootloader to hypervisor—to emulate CCA operations for performance evaluation while preserving functional correctness. Our approach allows researchers to lift-and-shift implementations from Arm’s simulation to real hardware, providing a framework for performance analysis, even without publicly available Arm CPUs with CCA.
I will discuss the key challenges in OpenCCA's design, implementation, and evaluation, demonstrating its effectiveness through life-cycle measurements and case studies inspired by prior CCA research. OpenCCA runs on an affordable Armv8.2 Rockchip RK3588 board ($250), making it a practical and accessible platform for Arm CCA research.
Download Slides
Confidential computing (CC) on GPU-based systems is a critical technology for securing data in use. By leveraging encryption and virtual machine (VM) level isolation, CC allows existing code to run without modification. However, the performance overhead of CC can hinder its adoption in real-world GPU-based systems.
This talk provides a detailed evaluation of GPU-based CC, offering empirical characterizations of diverse workloads and perspectives. Specifically, we evaluate several benchmarks and micro-benchmarks with a focus on data transfer, cryptographic operations, kernel launch, and kernel execution. We further characterize privacy-critical workloads to see how CC overhead could affect real-world applications. Finally, we will discuss some optimization techniques to address the overheads of confidential computing with GPUs.
This presentation will be based on the paper:
Yang Yang, Mohammad Sonji, Adwait Jog, Dissecting Performance Overheads of Confidential Computing on GPU-based Systems, In the Proceedings of IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS), Ghent, Belgium, May 2025.
Download Slides
As a society, we are highly dependent on the security and correct functioning of computer systems in safety-critical industries such as energy, health care, and transportation. These systems often must be certified to a high level of security and safety standards. Unfortunately, securing this class of systems is difficult due to their complex, low-level nature and large trusted computing base, as recent cyber-attacks on U.S. infrastructure have demonstrated.
Confidential computing technology can play a crucial role in improving isolation and curtailing the size of the TCB. However, today’s confidential computing technology focuses primarily on high-end servers for cloud computing and lacks crucial support for the embedded computing space where key high-assurance safety-critical systems are deployed. Moreover, achieving the highest levels of certification requires formal verification, a costly and complex process even without confidential computing in play. To address both the lack of confidential computing designs in the embedded system space and the need for tractable formal verification of such systems, we present ACE, an open-source, VM-based confidential computing technology designed for formal verification and targeted at mid- and high-end embedded processors.
We will present ACE's design and its formally provable qualities. In addition, we have implemented a first-of-its-kind prototype of ACE on a RISC-V CPU development board and will present our findings including challenges and lessons learned in bootstrapping a confidential VM on such hardware. The audience will better understand the current RISC-V confidential computing landscape for embedded systems, the challenges and opportunities for VM-based confidential computing on RISC-V, and what it takes to formally prove security properties of such systems.
Download Slides
AI is moving fast—but enterprise adoption is hitting a wall. The most valuable data is often the most restricted, held back by legitimate concerns over security, compliance, and intellectual property protection. If we want to move beyond prototypes and pilots, we have to solve for AI that enterprises can trust with their most sensitive data.
In this panel, top venture capitalists will share how they’re thinking about the next generation of enterprise AI companies—from model infrastructure and vertical apps to the trust, security, and control layers that will unlock real value. These investors may come from different focus areas, but they share one common belief: security and verifiability are no longer features—they’re preconditions for scale.
Topics will include:
This session is built for founders, product leaders, and enterprise buyers navigating the path from AI exploration to enterprise deployment.
As the boundaries of commerce are redrawn by real-time payment rails like FedNow and RTP, the next generation of P2P and B2B payments must be built on a trustless infrastructure, where data security isn’t just a feature, but a foundational design principle.
Payfinia is leading this transformation with a bold, zero-trust architecture anchored by confidential computing. In its Paze Digital Wallet solution, encryption keys remain entirely invisible, even to internal stakeholders such as developers and infrastructure operators, ensuring uncompromised security against both internal and external threats.
Building on this success, Payfinia is extending confidential computing to its Instant Payment eXchange (IPX) product, where confidential computing secures memory-in-use for instant transactions. This elevates compliance with PCI-DSS, FedNow, and NACHA standards. By safeguarding data during computation, Payfinia unlocks new opportunities in regulated markets previously deemed too risky due to security concerns.
Confidential computing is no longer just a cybersecurity strategy—it is the blueprint for creating scalable, secure, and inclusive digital payment ecosystems. Join this session to explore how confidential computing transforms payment systems into zero-trust environments where sensitive data remains protected at every stage of processing.
Download Slides
Securing the modern enterprise means extending trust beyond traditional boundaries. This session dives into the design and deployment of an attestation service that spans cloud, multi-cloud, hybrid, and on-prem environments.
Independent attestation is designed for secure collaboration and resilience against emerging threats. We will explore real-world use cases for AI-driven business decisions, privacy-preserving data analysis for healthcare, and protecting post-quantum cryptography infrastructure.
You will gain actionable insights into building scalable, future-proof attestation frameworks for the next era of confidential computing.
Download Slides
As the LLM-driven agentic industry is maturing, we are moving towards a marketplace of independent tools for agent builders to choose from, often hosted in the cloud and from third-party vendors. On the other hand, for AI agents to be useful, they need to process data and have access to systems that matter, which are often confidential and private. This fast pace of growth, coupled with the sensitivity of the data being utilized, along with the need to not "chain" or restrict the agents' capabilities, can result in catastrophic security lapses.
Confidential and trusted execution environments (TEEs) leveraging both confidential CPU and GPU technologies have the potential to work seamlessly with AI agents while ensuring user privacy. Thus far, harnessing confidential computing technology in an agile and seamless way for fast-moving agentic workloads has been elusive. In this talk, we address this challenge by discussing how to leverage Confidential Container technology with confidential GPUs to host and deploy a secure and distributed AI Agent. Specifically, we will describe the security concerns of agentic platforms, the agent-specific policies, how the policy gets mapped to different enforcement points in Kubernetes running Kata containers, and how different components of the agent get distributed, especially components handling sensitive data. The audience will take away the security implications of a distributed AI Agent in the cloud and how to leverage Confidential Container technology with confidential GPU to unlock enterprise use cases.
Download Slides
In the world of digital advertising, businesses want to use their first-party data to reach customers and measure the impact of their digital ad campaigns, while respecting customers' privacy. This requires new approaches to data security, isolation and transparency - all made available through the use of Trusted Execution Environments (TEEs).
In this talk we provide insight into reference architecture patterns for how Google Ads uses TEEs as Privacy Enhancing Technologies (PETs) to revolutionize how we handle sensitive data, enabling secure and intuitive products without compromising user privacy. We'll delve into the challenges of various use cases, how TEEs address privacy concerns while enabling processing, and key architectural insights for achieving scalability, cost-efficiency, and robust security. Join us to discover how Google Ads leverages TEEs to unlock the value of data while upholding user trust.
Download Slides
Distributed logs are increasingly used in industry to guarantee strong integrity protection and high availability. They spread trust across multiple administrative domains, and rely on the fact that it is unlikely that all parties will be compromised at once. Tamper-proof logs have been used for applications such as code transparency, key recovery for encrypted messaging, decentralized identity management, privacy-preserving advertising, multi-party data sharing, and confidential machine learning. Often, these logs are utilized to record a small amount of highly sensitive state, such as secret keys, serving as the root of trust for a much larger application.
Download Slides
This session discusses the evolution of confidential containers, integrating them with the GPU cloud-native stack for AI/ML workloads. We explore transitioning from traditional to secure, isolated environments crucial for sensitive data processing. Our choice of Kata for confidential container enablement ensures security while maintaining container flexibility.
Alongside this, a virtualization reference architecture supports advanced scenarios like GPUdirect RDMA. A key aspect of our strategy is the lift-and-shift approach, allowing seamless migration of existing AI/ML workloads to these confidential environments. This integration combines LLMs with GPU-accelerated computing, leveraging Kubernetes for effective orchestration and balancing computational power and data privacy.
Download Slides
Model-as-a-Service (MaaS) on Kubernetes involves three key parties: the Model Producer (owning model IP), the Model Consumer (using the model with sensitive data), and the MaaS Environment Owner, who provides the Kubernetes platform and a dedicated, isolated operational environment. Though producers and consumers may inherently distrust each other with their assets, both rely on the MaaS Environment Owner to ensure a secure, isolated space for their managed interactions.
This session presents a conceptual architecture for MaaS using open-source Kubernetes and tooling. A core principle is establishing a strong isolation boundary provided and controlled by the MaaS Environment Owner for each MaaS instance.
A critical evolutionary step for this architecture is the adoption of Trusted Execution Environments (TEEs) that keep the data encrypted while it is processed and only decrypted by the processor itself. As a result, the MaaS environment owner cannot access the decrypted data or the model weights during execution, reducing the need for the Model Producer and Consumer to trust the MaaS environment owner.
Key Takeaways
Key Technologies
This session will discuss how confidential containers and confidential-computing features in NVIDIA hardware can be used to create an end-to-end secure and scalable GenAI-as-a-Service.
We’ll dive into how these technologies enable confidential orchestration, enforce workload attestation, and securely integrate GPUs. We’ll also discuss advanced features and concepts like service-provider exclusion (in addition to infrastructure-provider exclusion) and confidential RAG.
Download Slides
Confidential computing has emerged as a game-changer in the realm of cloud computing, offering enhanced security and privacy for sensitive data. While the Cloud has been at the forefront of confidential computing advancements, the EDGE also presents a unique set of opportunities for leveraging this technology. In this presentation, we will delve into the distinct opportunities that Confidential Computing at the EDGE offers, focusing on 1p model export and isolation, strict AI regulatory compliance and attestation, and minimizing digital footprint for highly sensitive data.
We will discuss Google's approach to confidential computing at the Edge and the latest technologies, products, and commercial strategies, including Google's vision to protect proprietary models at the EDGE using TEEs, allowing zero-trust intermediaries to manage semi-connected or disconnected EDGE solutions, and the ability to meet strict regulatory requirements, drawing on experience with the EU regulatory sandbox on EDGE model attestation and TEEs.
Related article: Gemini in Google Distributed Cloud
Download Slides
Today, users can "lift-and-shift" unmodified applications into modern, VM-based Trusted Execution Environments (TEEs) in order to gain hardware-based security guarantees. However, TEEs do not protect applications against disk rollback attacks, where persistent storage can be reverted to an earlier state after a crash. Existing solutions that protect against rollback attacks either only support a subset of applications or require code modification, breaking the lift-and-shift abstraction.
We present ROLLBACCINE, a device mapper that provides lift-and-shift rollback resistance for all applications. ROLLBACCINE achieves this with low overheads: it intercepts and replicates writes to disk by leveraging the observation that only synchronous writes need to be replicated on the critical path. We introduce a formal specification for the crash consistency of block devices and show that ROLLBACCINE is correct and consistent. Our experiments show that ROLLBACCINE introduces minimal throughput and latency overhead on two real applications, PostgreSQL and HDFS, and two file systems, ext4 and xfs, adding only 13 percent overhead across benchmarks, except for the fsync-heavy Filebench Varmail. In addition, ROLLBACCINE outperforms a state-of-the-art, non-automatic rollback-resistant solution by 160×.
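The key observation, that only synchronous writes must be replicated before acknowledgement, can be pictured with a toy sketch (purely illustrative, not the actual device-mapper implementation):

```python
import queue
import threading

class Replica:
    """Stand-in for a remote backup holding a copy of replicated blocks."""
    def __init__(self):
        self.blocks = {}
    def store(self, block_no, data):
        self.blocks[block_no] = data

class ReplicatedDisk:
    """Toy model: synchronous writes wait for the replica, others replicate lazily."""
    def __init__(self, replica):
        self.local = {}
        self.replica = replica
        self.backlog = queue.Queue()
        threading.Thread(target=self._drain, daemon=True).start()

    def write(self, block_no, data, synchronous):
        self.local[block_no] = data
        if synchronous:
            # Critical path: acknowledge only once the replica has the data, so a
            # rolled-back local disk can later be detected and repaired from it.
            self.replica.store(block_no, data)
        else:
            # Off the critical path: replicate in the background.
            self.backlog.put((block_no, data))

    def _drain(self):
        while True:
            block_no, data = self.backlog.get()
            self.replica.store(block_no, data)

disk = ReplicatedDisk(Replica())
disk.write(0, b"superblock", synchronous=True)   # fsync-style write, replicated inline
disk.write(7, b"page cache", synchronous=False)  # async write, replicated lazily
```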
Download Slides
The rapid advancements in artificial intelligence (AI), particularly in large language models (LLMs), have facilitated a wide range of applications, including chatbots, work assistants, and code-generation tools. However, these advancements also introduce significant risks, such as misuse in malicious contexts, non-compliance with regulatory requirements, and the potential for harmful content generation.
While governments and regulators push for greater transparency and accountability in AI development, verifying claims about model training processes remains a challenge. Current auditing approaches rely heavily on trust, with limited mechanisms for verifying the model claims. On one hand, malicious model providers may fabricate details about their models, believing that their claims cannot be verified. On the other hand, even cooperative model providers may hesitate to participate in auditing due to concerns about exposing proprietary datasets or training techniques. From the users’ perspective, establishing trust in model providers’ claims—such as the datasets used or the hardware involved—can be inherently difficult. This issue is exemplified by recent debates surrounding the DeepSeek models, where concerns have emerged over whether they were distilled from other advanced models and whether high-end GPUs were used in their training.
To address these challenges, we present TAITEE, a system that integrates model training provenance with Confidential Computing (CC) to establish a secure and verifiable training pipeline. TAITEE leverages Trusted Execution Environments (TEEs) to generate an immutable training certificate that attests to the integrity of the training process while safeguarding sensitive data. This certificate records critical details such as the dataset used, the hardware involved, and even the carbon emissions associated with training. Additionally, TAITEE supports flexible, customized training algorithms by recording proprietary functions—such as advanced data processing techniques or reward functions in reinforcement learning—as cryptographic hashes. This ensures that the training process remains both reproducible and verifiable without exposing sensitive algorithms. Because all training metrics are truthfully recorded at runtime and securely signed by the TEE, model providers—whether of closed or open-weight models—cannot falsify these aspects. Furthermore, TAITEE’s recording system operates in a sandboxed environment, preventing malicious training scripts from compromising the framework.
Beyond initial training, TAITEE extends its verification capabilities to fine-tuning and knowledge distillation, ensuring continued model training provenance throughout the AI lifecycle. Moreover, TAITEE prioritizes ease of integration by allowing developers to import the TAITEE module to replace key training components, such as data loaders and model trainers, without modifying a single line of code in the original training script. To demonstrate its broad applicability, we conducted experiments on various models, including the Llama family and DeepSeek models, across tasks such as mathematics and reasoning benchmarks. Our results confirm that TAITEE provides strong guarantees of authenticity, compliance, privacy, and faithfulness, offering a robust solution for trustworthy AI model development.
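A simplified sketch of the training-certificate idea follows; field names are illustrative, and in TAITEE the signing key would be rooted in the TEE and the recorder would run sandboxed. The record hashes the dataset and any proprietary training function, captures run metadata, and is signed so it cannot later be falsified.

```python
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def sha256_file(path, chunk=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def issue_training_certificate(dataset_path, reward_fn_source, run_metadata, tee_key):
    """Build and sign a training-provenance record.

    `tee_key` stands in for a signing key that never leaves the TEE; `run_metadata`
    might carry hardware IDs, hyperparameters, or measured energy/carbon figures.
    """
    record = {
        "dataset_sha256": sha256_file(dataset_path),
        "reward_fn_sha256": hashlib.sha256(reward_fn_source.encode()).hexdigest(),
        "metadata": run_metadata,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    return {"record": record, "signature": tee_key.sign(payload).hex()}

# Illustration only: a real deployment would use an attestation-bound key
# generated and held inside the TEE rather than one created here.
tee_key = Ed25519PrivateKey.generate()
```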
Download Slides
This talk explores key design considerations for enabling Confidential Computing (CC) in chiplet-based systems. We cover foundational topics such as hardware-enforced isolation, secure memory encryption, and trusted execution environments (TEEs), as well as advanced concepts like cache coherence, chiplet interconnects, unified roots of trust, and dynamic trust domains. The session also examines trade-offs in TEE deployment across heterogeneous chiplets and strategies for secure boot, communication, and debug isolation in disaggregated SoCs.
Download Slides
In an AI-first world, autonomous agents are becoming active participants in the software development lifecycle—analyzing workflows, suggesting changes, and even executing tasks. But for these agents to contribute meaningfully, they must access and interpret sensitive process data—raising critical concerns around privacy, confidentiality, and control.
This panel explores how to design Process Intelligence systems that balance human-agent collaboration with robust data governance. We’ll examine secure data architectures, trust boundaries between humans and AI, and strategies for enabling agents to act autonomously without exposing proprietary or personal information. The future of intelligent software delivery depends on getting this balance right.
This presentation focuses on verifiability challenges in confidential compute workloads. We will discuss the emerging problem of making verifiable promises to end-users about data handling and privacy.
Our aim is to kick off a conversation on how the industry can create a sustainable verification model for user-focused confidential computing.
Open source machine learning (ML) models, datasets, and external services for training or fine-tuning are rapidly becoming central to building AI applications. While this trend accelerates innovation and democratizes AI, it exposes applications to security risks like data poisoning and supply chain attacks. Threats like malicious backdoors hidden in pre-trained ML models hosted on major hubs like Hugging Face emphasize that compromises can happen at any stage of ML model development. So, how do we build trust in the ML lifecycle?
In this talk, we present Atlas, a framework that combines trusted execution environments (TEEs) and transparency logs with open specifications for data and software supply chain provenance like Coalition for Content Provenance and Authenticity (C2PA) and Supply-chain Levels for Software Artifacts (SLSA) to create fully attestable ML pipelines. We start by motivating the need to safeguard ML artifacts, systems and processes. We highlight the role TEEs can play in enhancing ML system integrity and demonstrate how Atlas’s three core mechanisms enable verification: (1) cryptographic artifact authentication, (2) hardware-based runtime integrity and attestation for ML systems and processes, and (3) provenance tracking across ML pipelines. Our Atlas demo integrates several open-source tools to generate and verify provenance metadata, building an end-to-end ML lifecycle transparency system.
Attendees can expect to come away with a clear idea that end-to-end ML lifecycle attestation and transparency are achievable with technologies that are available today: TEEs, KubeFlow, Sigstore Rekor, C2PA, and SLSA. Our aim for this talk is to catalyze community discussion about deploying an approach like Atlas at scale, explore other ways in which TEEs could further enhance ML lifecycle integrity, and seed conversations about their relevance in adjacent areas like privacy and compliance in the ML lifecycle.
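As a rough illustration of mechanism (3), the snippet below chains per-stage provenance statements so that tampering with any earlier stage invalidates everything after it. The statement layout and the in-memory `transparency_log` list are simplified assumptions for this sketch; Atlas itself builds on C2PA/SLSA statement formats and a Rekor-style append-only log rather than these toy structures.

```python
import hashlib, json

transparency_log = []  # stand-in for an append-only log such as Sigstore Rekor

def record_stage(stage: str, artifact: bytes) -> dict:
    """Append a provenance statement binding this stage's output to the previous entry."""
    prev = transparency_log[-1]["entry_hash"] if transparency_log else "genesis"
    statement = {
        "stage": stage,
        "artifact_sha256": hashlib.sha256(artifact).hexdigest(),
        "previous_entry": prev,
    }
    statement["entry_hash"] = hashlib.sha256(
        json.dumps(statement, sort_keys=True).encode()
    ).hexdigest()
    transparency_log.append(statement)
    return statement

record_stage("dataset-ingest", b"<raw dataset>")
record_stage("training", b"<model weights v1>")
record_stage("fine-tuning", b"<model weights v2>")

# A verifier replays the chain and rejects it if any link does not match.
for earlier, later in zip(transparency_log, transparency_log[1:]):
    assert later["previous_entry"] == earlier["entry_hash"]
```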
Download Slides
The rapid evolution of high-performance computing (HPC) clusters has been instrumental in driving transformative advancements in AI research and applications. These sophisticated systems enable the processing of complex datasets and support groundbreaking innovation. However, as their adoption grows, so do the critical security challenges they face, particularly when handling sensitive data in multi-tenant environments where diverse users and workloads coexist. Organizations are increasingly turning to confidential computing as a framework to protect AI workloads, emphasizing the need for robust HPC architectures that incorporate runtime attestation capabilities to ensure trust and integrity.
In this session, we present an advanced HPC cluster architecture designed to address these challenges, focusing on how runtime attestation of critical components—such as the kernel, Trusted Execution Environments (TEEs), and eBPF layers—can effectively fortify HPC clusters for AI applications operating across disjoint tenants. This architecture leverages cutting-edge security practices, enabling real-time verification and anomaly detection without compromising the performance essential to HPC systems.
Attendees will learn:
Through detailed use cases and examples, we will illustrate how runtime attestation integrates seamlessly into HPC environments, offering a scalable and efficient solution for securing AI workloads. Participants will leave this session equipped with a deeper understanding of how to leverage runtime attestation and confidential computing principles to build secure, reliable, and high-performing HPC clusters tailored for AI innovations.
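As a framework-agnostic sketch of the admission idea: before a node joins the cluster, its runtime measurements (kernel image, TEE report, loaded eBPF programs) are compared against a reference policy. The measurement names and the candidate evidence below are hypothetical; a real deployment would consume signed attestation evidence from the hardware and the eBPF subsystem rather than plain dictionaries.

```python
# Reference values the cluster operator trusts (digests are illustrative placeholders).
REFERENCE_POLICY = {
    "kernel_sha256": "a3f1...",            # measured kernel image
    "tee_report_ok": True,                  # TEE quote verified by an attestation service
    "allowed_ebpf_programs": {"net_mon_v2", "sched_probe_v1"},
}

def admit_node(measurements: dict) -> bool:
    """Return True only if every runtime measurement matches the reference policy."""
    if measurements.get("kernel_sha256") != REFERENCE_POLICY["kernel_sha256"]:
        return False
    if not measurements.get("tee_report_ok"):
        return False
    loaded = set(measurements.get("ebpf_programs", []))
    return loaded <= REFERENCE_POLICY["allowed_ebpf_programs"]

# Hypothetical evidence gathered from a candidate node.
candidate = {
    "kernel_sha256": "a3f1...",
    "tee_report_ok": True,
    "ebpf_programs": ["net_mon_v2"],
}
print("admit" if admit_node(candidate) else "deny")
```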
Download Slides
In the agentic era, true zero-trust security demands that any two software agents be able to exchange messages securely, without any trusted role in the middle. This means every agent must be able to connect, identify, authenticate, and encrypt its messages with any other agent in a fully automated and decentralized manner.
The natural topology that emerges is that of a cryptographic mesh — hence "Meshaging:" messaging within a global cryptographically secure mesh. In such a topology, confidential computing is essential for the attestation and verification of agents at the hardware level, to achieve high assurance while eliminating reliance on trusted roles. This approach solves entire classes of cyber threats, from identity theft and data breaches to fakes, fraud, and man-in-the-middle attacks.
This session will explore the "least trust" approach to building such an architecture, and the role of Meshaging in securing autonomous interactions at scale. We’ll present how confidential computing enables a global network of "TrusTEEs", how automated pairwise cryptographic security is necessary for universal zero-trust, and how these principles combine to form a resilient, globally verified security fabric. Attendees will gain a deep understanding of how Meshaging redefines cybersecurity, enabling a decentralized, trustless, and tamper-proof foundation for the agentic era.
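To give a feel for the pairwise layer, the sketch below uses X25519 key agreement and AES-GCM (via the `cryptography` package) so two agents derive a shared key and exchange an authenticated message with no intermediary. The `verify_attestation` gate is a stand-in for the hardware-level verification the talk describes; it is an assumption for illustration, not the session's actual protocol.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def verify_attestation(peer_evidence: dict) -> bool:
    # Stand-in: a real mesh would verify a TEE quote binding the peer's public key.
    return peer_evidence.get("quote_verified", False)

# Each agent generates its own key pair; no central broker holds secrets.
agent_a, agent_b = X25519PrivateKey.generate(), X25519PrivateKey.generate()

# Agent A checks B's (hypothetical) attestation evidence before trusting B's key.
assert verify_attestation({"quote_verified": True, "public_key": agent_b.public_key()})

def session_key(own_priv, peer_pub) -> bytes:
    shared = own_priv.exchange(peer_pub)
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"meshaging-demo").derive(shared)

key_a = session_key(agent_a, agent_b.public_key())
key_b = session_key(agent_b, agent_a.public_key())
assert key_a == key_b  # both sides derive the same pairwise key

nonce = os.urandom(12)
ciphertext = AESGCM(key_a).encrypt(nonce, b"task: reconcile invoices", b"agent-a->agent-b")
print(AESGCM(key_b).decrypt(nonce, ciphertext, b"agent-a->agent-b"))
```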
Download Slides
Confidential computing has evolved from a nascent technology to a fundamental requirement for AI deployment. This session explores the ongoing mission to make confidential computing the default mode of computing, summarizing the current NVIDIA GPU confidential computing capabilities, the current roadmap, and where we see the industry moving.
And it doesn't stop at isolation. Learn how to bring measurability, verifiability through attestation services, and zero-trust design to every layer of the stack. The session also covers NVIDIA's plans for deep integration across the ecosystem, so you'll learn how to leverage the full breadth of NVIDIA technologies to deliver a comprehensive, secure, and performant AI solution.
Download Slides
Today's internet wasn't built for AI agents that need to collaborate, reason, and take action together. As these agents become essential to business operations, scientific discovery, and even physical robotics, they need a new infrastructure foundation to communicate.
Enter the Internet of Agents - an open, interoperable collaboration layer enabling AI agents to discover each other, establish trust, and collaborate securely across organizational boundaries. Join Papi Menon, VP of Product Management at Outshift, to understand why now is the time to build the Internet of Agents and how it will transform industries.
Download Slides
The “New NORMAL” workshop at Confidential Computing Summit 2025 introduces a practical enterprise architecture for deploying secure, production-grade Generative AI systems. Designed for AI leaders, engineers, and architects, the session outlines the six-layer NORMAL Stack—covering modular design, observability, ModelOps, secure workload orchestration, and confidential AI deployment. Attendees will learn actionable strategies to move from prototypes to scalable, compliant GenAI applications, with insights from industry experts across NVIDIA, LangChain, Accenture, and more.
Smaller, specialized Open Source models are growing in popularity, providing a more private, cost-effective, and customizable alternative to cloud-based AI solutions. In this session, we'll discuss the benefits of Open Source models, using the Granite models as an example of a family of smaller, purpose-built models for enterprises. Then we'll highlight Open Source AI and LLM tools that can be installed and run locally on your laptop or via Google Colab. To follow along on your laptop with the live demos at the same time they're shown on screen, please complete the lab pre-work.
Download Slides
We are entering a new era of building agentic applications—ones that not only understand context but plan, take action and adapt. With the advent of the Model Context Protocol (MCP) and a wave of orchestration frameworks, developers can now construct complex, long-running agentic systems with far greater modularity than ever before. But functionality is not enough — building "reliable" AI applications is the true challenge.
In this talk, I'll explore what it means to build AI agents you can trust to act autonomously, while detecting, root-causing, and quickly acting on errors. From defining "five nines" of agentic reliability to designing systems that can reason, revise, and recover gracefully, we'll walk through the architectural principles that elevate agents from impressive prototypes to dependable production systems. No matter what agentic app you are building, the core question remains: can your AI agents be trusted to act correctly, every single time?
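One concrete building block for this kind of reliability is a verify-and-retry loop around every agent action, so failures are detected and attributed rather than silently propagated. The sketch below is a generic pattern rather than any specific framework's API; `run_step` and `check_result` are hypothetical callables supplied by the application.

```python
import logging
from typing import Any, Callable

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent")

def reliable_step(run_step: Callable[[], Any],
                  check_result: Callable[[Any], bool],
                  max_attempts: int = 3) -> Any:
    """Execute an agent action, verify its output, and retry with logging on failure."""
    last_error = None
    for attempt in range(1, max_attempts + 1):
        try:
            result = run_step()
            if check_result(result):
                return result
            last_error = f"verification failed on attempt {attempt}"
        except Exception as exc:  # capture the root cause for later analysis
            last_error = f"exception on attempt {attempt}: {exc!r}"
        log.warning(last_error)
    # Escalate instead of guessing: a human or a supervisor agent takes over.
    raise RuntimeError(f"step did not succeed after {max_attempts} attempts: {last_error}")

# Hypothetical usage: an action that drafts a reply, checked before it is sent.
draft = reliable_step(lambda: "Dear customer, ...",
                      lambda text: text.startswith("Dear"))
print(draft)
```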
Download Slides
This session explores how combining Retrieval-Augmented Generation (RAG), knowledge graphs, and grounding techniques can enhance the transparency and reliability of AI systems. It shows how RAG retrieves verifiable external knowledge, how knowledge graphs provide structured context, and how grounding ties responses back to trusted sources. Together, these methods offer a robust foundation for building AI that users can understand, verify, and trust.
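A minimal sketch of how the three pieces fit together: retrieved passages supply evidence, a small knowledge graph supplies structured relations, and the final prompt forces the answer to cite its sources. The toy corpus, graph contents, retrieval function, and prompt layout are illustrative assumptions rather than any specific product's pipeline.

```python
# Toy corpus and knowledge graph; a real system would use a vector store and a graph DB.
DOCUMENTS = {
    "doc-1": "Confidential computing protects data in use inside TEEs.",
    "doc-2": "Attestation lets a verifier check what code a TEE is running.",
}
KNOWLEDGE_GRAPH = [("TEE", "provides", "isolation"), ("attestation", "verifies", "TEE")]

def retrieve(question: str, k: int = 2) -> list[tuple[str, str]]:
    """Naive keyword retrieval standing in for embedding-based search."""
    scored = sorted(
        DOCUMENTS.items(),
        key=lambda kv: -sum(w in kv[1].lower() for w in question.lower().split()),
    )
    return scored[:k]

def build_grounded_prompt(question: str) -> str:
    passages = retrieve(question)
    graph_facts = "; ".join(f"{s} {p} {o}" for s, p, o in KNOWLEDGE_GRAPH)
    evidence = "\n".join(f"[{doc_id}] {text}" for doc_id, text in passages)
    return (f"Answer using only the evidence below and cite sources by id.\n"
            f"Graph facts: {graph_facts}\nEvidence:\n{evidence}\nQuestion: {question}")

print(build_grounded_prompt("How does attestation relate to a TEE?"))
```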
Join CrewAI as we explore how agentic AI is moving from theory to practice—enabling real-world deployment of collaborative agents across industries. This talk will share lessons from designing, deploying, and governing multi-agent systems using CrewAI’s flexible, LLM-agnostic platform.
Download Slides
In this session, we'll explore how LangGraph enables developers to build robust, scalable, and controllable multi-agent systems. We'll begin by examining LangGraph's architecture, focusing on features like stateful orchestration, modular agent design, and integration with observability tools. Following this, a live demonstration will showcase a multi-agent workflow coordinated by a supervisor agent, illustrating practical applications of LangGraph in orchestrating complex tasks with reliability and efficiency.
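For readers who want a preview of the pattern, here is a deliberately small supervisor-routed graph written against LangGraph's `StateGraph` API as we understand it; the node logic, state fields, and routing rule are illustrative assumptions, and the live demo will use richer agents and real LLM calls.

```python
# pip install langgraph  (API assumed as of recent versions; details may differ)
from typing import TypedDict
from langgraph.graph import StateGraph, END

class State(TypedDict):
    task: str
    result: str

def supervisor(state: State) -> State:
    # In the real demo, the supervisor would call an LLM to pick the next worker.
    return state

def route(state: State) -> str:
    return "researcher" if "research" in state["task"] else "writer"

def researcher(state: State) -> State:
    return {"task": state["task"], "result": "findings: ..."}

def writer(state: State) -> State:
    return {"task": state["task"], "result": "draft: ..."}

builder = StateGraph(State)
builder.add_node("supervisor", supervisor)
builder.add_node("researcher", researcher)
builder.add_node("writer", writer)
builder.set_entry_point("supervisor")
builder.add_conditional_edges("supervisor", route,
                              {"researcher": "researcher", "writer": "writer"})
builder.add_edge("researcher", END)
builder.add_edge("writer", END)

graph = builder.compile()
print(graph.invoke({"task": "research confidential computing adoption", "result": ""}))
```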
Download Slides
This two-part session covers:
Part 1 – Building with NeMo: Scalable, Sovereign, and Secure AI Workflows
Discover how NVIDIA NeMo and NIM microservices enable enterprises to build agentic and generative AI workflows in the cloud, at the edge, or on-prem, while preserving full data sovereignty and control. Explore the tools that make this a reality, including NeMo Safety, NIM, the Data Flywheel Blueprint, and more, through real-world enterprise use cases. Learn how to deploy secure, air-gapped systems that scale from prototype to production, followed by a hands-on demo.
Part 2 – Full Spectrum Model IP Protection – How Confidential Computing Secures Federated Learning and Inference Against Theft and Unauthorized Access
Deploying AI models on untrusted infrastructure presents serious IP risks. Simply running code inside a TEE is not sufficient to protect your model. We show how a zero-trust, Confidential VM-based approach with hardened images protects model assets from deployment through execution. We will walk through building and securing a federated fine-tuning pipeline using NeMo and NVIDIA FLARE—mitigating boot-time tampering, checkpoint leaks, and attestation bypass. The same approach applies to general inference workloads.
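As a simplified illustration of one of these mitigations (checkpoint leak protection), the snippet below encrypts each fine-tuning checkpoint before it leaves the Confidential VM, and only releases the data-encryption key when a stand-in attestation check passes. The key-release gate and helper functions are assumptions for illustration; they are not the NeMo or NVIDIA FLARE interfaces shown in the session.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def attestation_verified() -> bool:
    # Stand-in: in practice a key broker releases the key only after verifying
    # the Confidential VM's attestation report.
    return True

def get_checkpoint_key() -> bytes:
    if not attestation_verified():
        raise PermissionError("attestation failed; refusing to release checkpoint key")
    return os.urandom(32)  # illustrative; a real KMS would return a managed key

def seal_checkpoint(key: bytes, checkpoint_bytes: bytes, step: int) -> bytes:
    """Encrypt a checkpoint so it is unreadable on untrusted storage."""
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, checkpoint_bytes, f"step-{step}".encode())

def open_checkpoint(key: bytes, sealed: bytes, step: int) -> bytes:
    nonce, ciphertext = sealed[:12], sealed[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, f"step-{step}".encode())

key = get_checkpoint_key()
sealed = seal_checkpoint(key, b"<model weights after round 3>", step=3)
assert open_checkpoint(key, sealed, step=3) == b"<model weights after round 3>"
```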
Attendees will learn how to:
Organized by OPAQUE