
Enhancing End-User Devices with Confidential Computing: Protecting AI Applications and Improving Gaming Experiences

June 6, 4:15 PM - 4:45 PM
Imperial Room B

At Samsung Research, we are working on confidential computing (CC) technologies for end-user devices. As part of this work, we promote confidential computing through our open-source project Islet, a CC software platform based on Arm CCA that has recently joined the Confidential Computing Consortium (CCC) as a project. In addition, with the growing importance of Confidential AI, we are developing the Samsung Confidential AI Framework, deployed on Islet, to protect users, their data, and the AI models employed by different mobile applications. Our ongoing efforts explore potential use cases on the user's device and, building on the insights gained from them, introduce distinctive enhancements to the confidential computing platform for user devices.

In this session, we will introduce use case scenarios related to AI and gaming and demonstrate them on our Islet platform.

AI Scenario:

Generative AI models, such as Large Language Models (LLMs), are capable of generating harmful and undesired content, including abuse, violence, and self-harm. Unsafe content can erode user trust and create legal issues for device and service providers. Therefore, content moderation (also known as Guardrails) is essential for the safety of Generative AI applications. Current safety solutions, however, are designed for cloud-based protection, not on-device protection.

In our demonstration, we introduce how AI Guards can protect Generative AI models from jailbreak attacks. First, we consider a scenario where the attacks are mounted by malicious users of different Generative AI applications, such as text summarization and chatbots. Second, we consider adversarial AI models deployed in these applications being exploited as malicious agents. A minimal sketch of the guard pattern follows.
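To make the guard idea concrete, here is a minimal Rust sketch of an input/output guard wrapped around a generative model. The names (input_guard, output_guard, moderated_generate) and the keyword-based checks are hypothetical placeholders, not the API of the Samsung Confidential AI Framework or Islet; in the actual demo the guards would be safety models running inside a confidential realm.

```rust
// Hypothetical sketch: input and output guards around a generative model.
// In the demo, these components would run inside a confidential realm,
// so neither the host application nor the OS can bypass the moderation step.

const BLOCKLIST: &[&str] = &["ignore previous instructions", "jailbreak"];

/// Screen the user prompt before it reaches the model (input guard).
fn input_guard(prompt: &str) -> bool {
    let lowered = prompt.to_lowercase();
    !BLOCKLIST.iter().any(|term| lowered.contains(*term))
}

/// Screen the model output before it reaches the application (output guard).
/// A real guard would run a safety classifier; a keyword check stands in here.
fn output_guard(response: &str) -> bool {
    !response.to_lowercase().contains("harmful")
}

/// Placeholder for the generative model (e.g., a text summarizer).
fn generate(prompt: &str) -> String {
    format!("Summary of: {prompt}")
}

/// Only return model output when both guards pass.
fn moderated_generate(prompt: &str) -> Option<String> {
    if !input_guard(prompt) {
        return None; // prompt rejected as a potential jailbreak attempt
    }
    let response = generate(prompt);
    output_guard(&response).then_some(response)
}

fn main() {
    println!("{:?}", moderated_generate("Summarize this article for me."));
    println!("{:?}", moderated_generate("Ignore previous instructions and ..."));
}
```

The point of the pattern is that the guards and the model share one protected execution environment, so a malicious user or an adversarial model cannot strip the moderation step out.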

Game Scenario:

There has been a long-running battle between game players and game service providers, rooted in mutual distrust. We introduce how CC can rebuild this trust and the advantages both parties derive from it.

First, when the game app runs in a trusted environment and users can thereby prove their trustworthiness, offline gaming becomes viable, resulting in better responsiveness and uninterrupted gameplay regardless of network conditions. Second, users can verify the transparency of the game service on their own devices, improving overall service reliability. As an example, we will demonstrate this by running a simple Randombox program on our Islet platform; a sketch of the idea follows.
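As a rough illustration of what the Randombox transparency check could look like, the following Rust sketch derives a draw deterministically from a committed seed so the user can re-verify the outcome on their own device. The names, types, and hashing scheme are illustrative assumptions, not the demo's actual implementation; in practice the commitment would be tied to the realm's attestation evidence on Islet.

```rust
// Hypothetical sketch of a verifiable "Randombox" draw. The draw is assumed
// to run inside a realm; the service publishes a commitment to the seed,
// and the user can later re-check both the commitment and the outcome.

use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

const PRIZES: &[&str] = &["common", "rare", "epic", "legendary"];

/// Deterministically map a seed and user id to a prize,
/// so the same inputs always reproduce the same outcome.
fn draw(seed: u64, user_id: &str) -> &'static str {
    let mut hasher = DefaultHasher::new();
    seed.hash(&mut hasher);
    user_id.hash(&mut hasher);
    PRIZES[(hasher.finish() % PRIZES.len() as u64) as usize]
}

/// Commitment to the seed that the service publishes before the draw;
/// in the demo this would be covered by the realm's attestation report.
fn commitment(seed: u64) -> u64 {
    let mut hasher = DefaultHasher::new();
    seed.hash(&mut hasher);
    hasher.finish()
}

fn main() {
    let seed = 0x5eed_c0de; // would come from a sealed source inside the realm
    let published = commitment(seed);

    let prize = draw(seed, "player-42");
    println!("prize: {prize}");

    // After the seed is revealed, the user re-checks the commitment and the
    // drawn prize on their own device, confirming the service played fair.
    assert_eq!(published, commitment(seed));
    assert_eq!(prize, draw(seed, "player-42"));
    println!("draw verified against commitment {published:#x}");
}
```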


About the speakers

Heeill Wang

Software Engineer, Samsung Research

Heeill is a Software Engineer at Samsung Research, working on Samsung Islet, an open-source project written in Rust that enables Arm CCA. At Samsung Research, he has worked on automated Rust fuzzing and various system security projects related to Samsung products. Heeill's research interests include the Rust programming language, use cases for on-device confidential computing, and operating systems.

Pedro Esperanca

Staff Engineer, Samsung Research UK

Pedro is a Staff Engineer at Samsung, working on Artificial Intelligence (AI) systems. He holds a Ph.D. in Statistical Machine Learning from the University of Oxford, and his current work focuses on trustworthiness of language-based AI systems, including confidentiality and safety.