© 2022 Springer. Personal use of this material is permitted. Permission from Springer must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. This is the author's version of the work. The final authenticated version is available online at doi.org/10.1007/978-3-031-16092-9_7 and has been published in the proceedings of the 22nd International Conference on Distributed Applications and Interoperable Systems (DAIS'22).
## Attestation Mechanisms
## for Trusted Execution Environments Demystified
Jämes Ménétrey¹, Christian Göttel¹, Anum Khurshid², Marcelo Pasin¹, Pascal Felber¹, Valerio Schiavoni¹, and Shahid Raza²

¹ University of Neuchâtel, Neuchâtel, Switzerland, first.last@unine.ch
² RISE Research Institutes of Sweden, Stockholm, Sweden, first.last@ri.se
Abstract. Attestation is a fundamental building block to establish trust over software systems. When used in conjunction with trusted execution environments, it guarantees the genuineness of the code executed against powerful attackers and threats, paving the way for adoption in several sensitive application domains. This paper reviews remote attestation principles and explains how the modern and industrially well-established trusted execution environments Intel SGX, Arm TrustZone and AMD SEV, as well as emerging RISC-V solutions, leverage these mechanisms.
Keywords: Trusted execution environments, Attestation, Intel SGX, Arm TrustZone, AMD SEV, RISC-V
## 1 Introduction
Confidentiality and integrity are essential features when building secure computer systems. This is particularly important when the underlying system cannot be fully trusted or controlled. For example, video broadcasting software can be tampered with by end-users to circumvent digital rights management, or virtual machines are candidly open to the indiscretion of their cloud-based untrusted hosts. The introduction of Trusted Execution Environments (TEEs), such as Intel SGX, AMD SEV, RISC-V and Arm TrustZone-A/-M, into commodity processors significantly reduces the attack surface exposed to powerful attackers. In a nutshell, TEEs let a piece of software be executed with stronger security guarantees, including privacy and integrity properties, without relying on a trustworthy operating system. Each of these enabling technologies offers different degrees of guarantees that can be leveraged to increase the confidentiality and integrity of applications.
Remote attestation allows establishing a trusting relationship with a specific software by verifying its authenticity and integrity. Through remote attestation, one ensures to be communicating with a specific, trusted (attested) program remotely. TEEs can support and strengthen the attestation process, ensuring that programs are shielded against many powerful attacks by isolating critical security software, assets and private information from the rest of the system. However, to the best of our knowledge, there is not a clear systematisation of attestation mechanisms supported by modern and industrially well-established TEEs. Hence, the main contribution of this work is to describe the state-of-the-art best practices
regarding remote attestation mechanisms of TEEs, covering a necessarily incomplete selection that includes the four major technology families available for commodity hardware: Intel SGX, Arm TrustZone-A/-M, AMD SEV, and the many emerging TEEs built on the open RISC-V ISA. We complement previous work [36, 37] with an updated analysis of TEEs (e.g., the introduction of Intel SGX and Arm TrustZone variants), a thorough analysis of remote attestation mechanisms, and coverage of the upcoming TEEs of Intel and Arm.
## 2 Attestation
## 2.1 Local attestation
Local attestation enables a trusted environment to prove its identity to other trusted environments hosted on the same system (more precisely, on the same CPU when the secret provisioned for attestation is bound to the processor). The target environment that receives the local attestation request can assess whether the issued proof is genuine by verifying its authenticity, usually based on a symmetric-key scheme using a message authentication code (MAC). This mechanism is required to establish secure communication channels between trusted environments, often used to delegate computing tasks securely. As an example, Intel SGX's remote attestation (detailed in Section 3.3) leverages local attestation to sign proofs in another trusted environment through a secure communication channel.
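The MAC-based flow described above can be sketched as follows. This is a minimal illustration, not any vendor's actual interface: the per-CPU secret, report layout and function names are hypothetical stand-ins for the key material and report structure that real hardware derives and exposes only to trusted environments.

```python
import hmac
import hashlib
import os

# Hypothetical stand-in for the secret bound to the processor; in real
# hardware this key is derived on-die and never leaves the CPU.
CPU_SECRET = os.urandom(32)

def issue_local_report(measurement: bytes, target_id: bytes) -> dict:
    """The source environment asks the hardware to MAC its claims with a
    key that only trusted environments on the same CPU can re-derive."""
    body = measurement + target_id
    mac = hmac.new(CPU_SECRET, body, hashlib.sha256).digest()
    return {"measurement": measurement, "target": target_id, "mac": mac}

def verify_local_report(report: dict) -> bool:
    """The target environment re-computes the MAC with the CPU-bound key;
    a mismatch means the report was forged or tampered with."""
    body = report["measurement"] + report["target"]
    expected = hmac.new(CPU_SECRET, body, hashlib.sha256).digest()
    return hmac.compare_digest(expected, report["mac"])
```

Because both environments share the CPU-bound secret, a symmetric MAC suffices here; remote attestation, discussed next, cannot make that assumption.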
## 2.2 Remote attestation
Remote attestation allows establishing trust between different devices and provides cryptographic proofs that the executing software is genuine and untampered [18]. In the remainder, we adopt the terminology proposed by the IETF to describe remote attestation and related architectures [13]. Under these terms, a relying party wishes to establish a trusted relationship with an attester, thanks to the help of a verifier. The attester provides the state of its system, indicating the hardware and the software stack that runs on its device by collecting a set of claims of trustworthiness. A claim is a piece of asserted information collected by an attesting environment, e.g., a TEE. An example of a claim is the code measurement (i.e., a cryptographic hash of the application's code) of an executing program within a TEE. TEEs also create additional claims that identify the trusted computing base (TCB, i.e., the set of hardware and software that must be trusted), so verifiers are able to evaluate the genuineness of the platform. Claims are collected and cryptographically signed to form evidence, later observed and accepted (or denied) by the verifier. Once the attester is proven genuine, the relying party can safely interact with it and transfer confidential data or delegate computations.
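The attester/verifier roles above can be illustrated with a hedged sketch: the attester collects claims (code measurement, TCB information, a freshness nonce), signs them into evidence with a hardware-rooted secret, and the verifier appraises the evidence against a reference value. For brevity the sketch substitutes an HMAC keyed by a fused secret for the asymmetric signatures real TEEs use; all names are illustrative.

```python
import hashlib
import hmac
import os

# Stand-in for a root-of-trust secret fused in the die (see Section 2.2);
# real attestation schemes use asymmetric keys so verifiers hold no secret.
HW_SECRET = os.urandom(32)

def measure(code: bytes) -> bytes:
    """Code measurement: a cryptographic hash of the application's code."""
    return hashlib.sha256(code).digest()

def collect_evidence(code: bytes, tcb_claims: bytes, nonce: bytes) -> dict:
    """Attester: gather claims and sign them with the root of trust."""
    claims = measure(code) + tcb_claims + nonce
    sig = hmac.new(HW_SECRET, claims, hashlib.sha256).digest()
    return {"claims": claims, "sig": sig}

def appraise(evidence: dict, reference: bytes,
             tcb_claims: bytes, nonce: bytes) -> bool:
    """Verifier: check the signature, the expected measurement and the
    nonce (freshness) before declaring the attester genuine."""
    good_sig = hmac.compare_digest(
        hmac.new(HW_SECRET, evidence["claims"], hashlib.sha256).digest(),
        evidence["sig"])
    expected_claims = reference + tcb_claims + nonce
    return good_sig and hmac.compare_digest(evidence["claims"], expected_claims)
```

The nonce prevents replaying old evidence; the reference measurement is what the verifier's appraisal policy considers a known-good build of the program.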
The problem of remotely attesting software has been extensively studied in academia, and industrial implementations already exist. Three leading families of remote attestation methods exist: (i) software-based, (ii) hardware-based, and (iii) hybrid (software- and hardware-based). Software-based remote attestation [47] does not depend on any particular hardware. This method is particularly adapted to low-cost use cases. Hardware-based remote attestation relies on a root of trust , which is one or many cryptographic values rooted in hardware
to ensure that the claims are trustworthy. Typically, a root of trust can be implemented using tamper-resistant hardware, such as a trusted platform module (TPM) [55], a physical unclonable function (PUF) that prevents impersonation by using unique hardware marks produced at manufacture [30], or a hardware secret fused in a die (e.g., CPU) exposed exclusively to the trusted environment. Hybrid solutions combine hardware devices and software implementations [21], in an attempt to leverage advantages from both sides. Researchers used hardware/software co-design techniques to propose a hybrid design with a formal proof of correctness [40]. Finally, remote attestation mechanisms are popular among TEEs due to their carefully controlled environments and their ability to generate code measurements. Section 3 delivers an extensive analysis of state-of-the-art TEEs, including their support for remote attestation.
## 2.3 Mutual attestation
Trusted applications may need stronger trust assurances by ensuring both ends of a secure channel are attested. For example, when retrieving confidential data from a sensing IoT device (where data is particularly sensitive), the device must authenticate the remote party, while the latter must ensure the sensing device has not been spoofed or tampered with. Mutual attestation protocols have been designed to appraise the trustworthiness of both end devices involved in a communication. Mutual attestation has also been studied in the context of TEEs [51], as we further detail in Section 3.
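The symmetry of mutual attestation can be sketched as two endpoints that each challenge the other with a fresh nonce, issue evidence over their own code measurement, and appraise the peer's evidence; trust is established only if both appraisals succeed. As before, this is a hypothetical model with an HMAC standing in for hardware-backed signatures, and all class and function names are illustrative.

```python
import hashlib
import hmac
import os

# Illustrative shared root of trust; real deployments use per-device
# asymmetric credentials rooted in each device's hardware.
HW_KEY = os.urandom(32)

class Endpoint:
    """Hypothetical endpoint acting as both attester and verifier."""
    def __init__(self, code: bytes):
        self.code = code
        self.nonce = os.urandom(16)  # freshness challenge for the peer

    def evidence(self, peer_nonce: bytes) -> bytes:
        """Issue evidence over our code measurement and the peer's nonce."""
        claims = hashlib.sha256(self.code).digest() + peer_nonce
        return claims + hmac.new(HW_KEY, claims, hashlib.sha256).digest()

    def appraise(self, evidence: bytes, reference: bytes) -> bool:
        """Check the signature and that the claims match the expected
        measurement bound to the nonce we issued."""
        claims, sig = evidence[:-32], evidence[-32:]
        good_sig = hmac.compare_digest(
            hmac.new(HW_KEY, claims, hashlib.sha256).digest(), sig)
        return good_sig and hmac.compare_digest(claims, reference + self.nonce)

def mutually_attest(a: Endpoint, b: Endpoint,
                    ref_a: bytes, ref_b: bytes) -> bool:
    """Exchange nonces and evidence in both directions; trust holds only
    if each side accepts the other's evidence."""
    return (a.appraise(b.evidence(a.nonce), ref_b) and
            b.appraise(a.evidence(b.nonce), ref_a))
```

In the IoT scenario above, the sensing device and the data collector would each play both roles, so a spoofed sensor fails the collector's appraisal and vice versa.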
## 3 Issuing attestations using TEEs
Several solutions exist to implement hardware support for trusted computing, and TEEs are particularly promising. Typically, a TEE isolates critical components of the system (e.g., portions of the memory), denying access to more privileged but untrusted software, such as kernel and machine modes. Depending on the implementation, it guarantees the confidentiality and the integrity of the code and data of trusted applications, thanks to the assistance of CPU security features. This work surveys modern and prevailing TEEs from processor designers and vendors with remote attestation capabilities for commodity or server-grade processors, namely Intel SGX [19], AMD SEV [3], and Arm TrustZone [42]. Besides, RISC-V, an open ISA with multiple open-source core implementations, ratified the physical memory protection (PMP) mechanism, which offers memory protection capabilities similar to the aforementioned technologies. As such, we also include several emerging academic and proprietary frameworks that capitalise on standard RISC-V primitives, namely Keystone [33], Sanctum [20], TIMBER-V [54] and LIRA-V [49]. Finally, among the many other technologies in the literature, we omitted the TEEs lacking remote attestation mechanisms (e.g., IBM PEF [26]) as well as the TEEs not supported on currently available CPUs (e.g., Intel TDX [27], Realm [11] from Arm CCA [8]).
Table 1: Comparison of the state-of-the-art TEEs.
| | SGX | TrustZone | SEV | RISC-V |
|---|---|---|---|---|
| Variants | Client SGX, Scalable SGX | TrustZone-A, TrustZone-M | Vanilla, SEV-ES, SEV-SNP | Keystone, Sanctum, TIMBER-V, LIRA-V |
| Isolation and attestation granularity | Intra-address space | Secure world | VM | Secure world / intra-address space |
| System support for isolation | µcode + XuCode | SMC, MPU | Firmware | SMC + PMP, tag + MPU, PMP |
Table 2: Features of the state-of-the-art TEEs.
| Feature | Description |
|---------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Integrity | An active mechanism preventing the DRAM of TEE instances from being tampered with. Partial fulfilment means no protection against physical attacks. |
| Freshness | Protecting DRAM of TEE instances against replay and rollback attacks. Partial fulfilment means no protection against physical attacks. |
| Encryption | DRAM of TEE instances is encrypted to assure that no unauthorised access or memory snooping of the enclave occurs. |
| Unlimited domains | Many TEE instances can run concurrently, while the TEE boundaries (e.g., isolation, integrity) between these instances are guaranteed by hardware. Partial fulfilment means that the number of domains is capped. |
| Open source | Indicates whether the solution is partially or fully publicly available. |
| Local attestation | A TEE instance attests itself to another instance running on the same system. |
| Remote attestation | A TEE instance attests genuineness to remote parties. Partial fulfilment means no built-in support but is extended by the literature. |
| API for attestation | An API is available by the trusted applications to interact with the process of remote attestation. Partial fulfilment means no built-in support but is extended by the literature. |
| Mutual attestation | The identity of the attester and the verifier are authenticated upon remote attestations. Partial fulfilment means no built-in support but is extended by the literature. |
| User mode support | States whether trusted applications are hosted in user mode, according to the processor architecture. |
| Industrial TEE | Distinguishes TEEs used in production and built by industry from research prototypes designed in academia. |
| Isolation and attestation granularity | The level of granularity where the TEE operates for providing isolation and attestation of the trusted software. |
| System support for isolation | The hardware mechanisms used to isolate trusted applications. |
## 3.1 TEE cornerstone features
We propose a series of cornerstone features of TEEs and remote attestation capabilities and compare many emerging and well-established state-of-the-art solutions in Table 1. Each feature is detailed in Table 2 and can either be missing (○), partially (◐) or fully (●) available. Besides, we elaborate further on each TEE in the remainder of the section.
Fig. 1: The workflow of deployment and attestation of TEEs.
## 3.2 Trusted environments and remote attestation
The attestation of software and hardware components requires an environment to issue evidence securely. This role is usually assigned to some software or hardware mechanism that cannot be tampered with. These environments rely on the code measurement of the executed software and combine that claim with cryptographic values derived from the root of trust. We analysed how today's leading processor vendors issue cryptographically signed evidence.
Figure 1 illustrates the generic workflow TEE developers usually follow for the deployment of trusted applications. Initially, the application is compiled and measured on the developers' premises. It is later transferred to an untrusted system, where it is executed in the TEE. Once the trusted application is loaded and is required to receive sensitive data, it communicates with a verifier to establish a trusted channel. The TEE must facilitate this transaction by exposing evidence to the trusted application, which adds key material to bootstrap a secure channel from the TEE, thus preventing an attacker from eavesdropping on the communication. The verifier examines the evidence, maintaining a list of reference values to identify genuine instances of trusted applications. If recognised as trustworthy, the verifier can proceed to data exchanges.
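The workflow above can be sketched end to end: a build-time step on the developer's premises records the reference measurement, and the verifier later appraises evidence in which the TEE binds the measurement to key material for the secure channel. The compilation step, evidence layout and class names below are hypothetical simplifications, with an HMAC standing in for hardware-backed signatures.

```python
import hashlib
import hmac
import os

def build_and_measure(source: bytes) -> tuple[bytes, bytes]:
    """Developer's premises: compile (stubbed here) and measure the binary,
    producing the reference value the verifier will later expect."""
    binary = b"compiled:" + source
    return binary, hashlib.sha256(binary).digest()

def make_evidence(binary: bytes, channel_key: bytes, hw_key: bytes) -> dict:
    """Attester side: the TEE signs the code measurement together with key
    material used to bootstrap a secure channel from inside the TEE."""
    measurement = hashlib.sha256(binary).digest()
    sig = hmac.new(hw_key, measurement + channel_key, hashlib.sha256).digest()
    return {"measurement": measurement, "channel_key": channel_key, "sig": sig}

class Verifier:
    """Maintains reference values identifying genuine trusted applications."""
    def __init__(self) -> None:
        self.references: set[bytes] = set()

    def register(self, measurement: bytes) -> None:
        self.references.add(measurement)

    def appraise(self, evidence: dict, hw_key: bytes) -> bool:
        claims = evidence["measurement"] + evidence["channel_key"]
        good_sig = hmac.compare_digest(
            hmac.new(hw_key, claims, hashlib.sha256).digest(), evidence["sig"])
        return good_sig and evidence["measurement"] in self.references
```

Because the channel key is covered by the signature, an attacker on the untrusted host cannot swap in its own key material without invalidating the evidence.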
## 3.3 Intel SGX
Intel Software Guard Extensions (SGX) [19] introduced TEEs for mass-market processors in 2015. Figure 2a illustrates the high-level architecture of SGX. Specifically, Intel's Skylake architecture introduced a new set of processor instructions to create encrypted regions of memory, called enclaves, living within the processes of the user space. Intel SGX exists in two flavours: client SGX and scalable SGX [12]. The former is the technology released in 2015, designed and implemented into consumer-grade processors, while the latter was released in 2021, focusing on server-grade processors. The key differences between the two variants are: (i) the volatile memory available to enclaves, 128 MB and 512 GB, respectively, (ii) the multi-socket support and (iii) the lack of integrity and replay protections against hardware attacks for the latter. Researchers are working on bringing integrity protection to scalable SGX [12].
These instructions form their own ISA, implemented in XuCode [28], which together with model-specific registers provides the foundation of the SGX implementation. XuCode is a technology that Intel developed and integrated into selected processor families to deliver new features more quickly and,
Fig. 2: Overview of the industrial TEE architectures (marked elements are attested; elements marked as conditionally trusted are trusted for SEV/SEV-ES, but untrusted for SEV-SNP).
particularly for SGX, reduce the impact a (complex) hardware implementation would have had on the features. XuCode operates from protected system memory in a special execution mode of the CPU, both set up by the system firmware. SGX is, to date, the only technology making use of XuCode.
A memory region is reserved at boot time for storing code and data of encrypted enclaves. This memory area, called the enclave page cache (EPC), is inaccessible to other programs running on the same machine, including the operating system and the hypervisor. The traffic between the CPU and the system memory remains confidential thanks to the memory encryption engine (MEE). The EPC also stores verification codes to ensure that the DRAM corresponding to the EPC was not modified by any software external to the enclave.
A trusted application executing in an enclave may establish a local attestation with another enclave running on the same hardware. Toward this end, Intel SGX issues a set of claims, called a report, that contains identities, attributes (i.e., modes and other properties), the trustworthiness of the TCB, additional information for the target enclave, and a MAC. Unlike local attestation, remote attestation uses an asymmetric-key scheme, made possible by a special enclave, called the quoting enclave, that has access to a device-specific private key. Intel designed their remote attestation protocol based on the SIGMA protocol [31]
Fig. 3: The remote attestation flow of Intel SGX.
and extended it to the enhanced privacy ID (EPID). The EPID scheme does not identify unique entities, but rather a group of attesters. Each attester belongs to a group, and the verifier checks the group's public key. Evidence is signed by the EPID key, which guarantees the trustworthiness of the hardware and is bound to the firmware version of the processor.
In a remote attestation scenario, a verifier submits a challenge with a nonce to the attester (i.e., the application enclave) (Fig. 3-①). The attester prepares a response to the challenge by creating a set of claims and a public key (Fig. 3-③), and performs a local attestation with the quoting enclave. After verifying the set of claims (i.e., the report), the quoting enclave signs the report to form evidence with the EPID key obtained using the EGETKEY instruction (Fig. 3-⑥) and returns the evidence to the attester (Fig. 3-⑦), which sends it back to the verifier (Fig. 3-⑧). The public key contained in the evidence enables the creation of a confidential communication channel. Finally, the verifier examines the signature of the evidence (Fig. 3-⑨) using the Intel attestation service (IAS) [5, 14]. If deemed trustworthy, the verifier may provision sensitive data to the attester using the secure channel.
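The flow above can be sketched with symmetric stand-ins. This is a hypothetical illustration only: HMAC replaces both the CPU-derived report key used for local attestation and the asymmetric EPID group signature, and every name below is ours, not Intel's API.

```python
import hashlib
import hmac
import secrets

# Illustrative stand-ins (NOT Intel's API): HMAC emulates both the report
# key shared by enclaves on one CPU and the EPID device-specific key.
REPORT_KEY = secrets.token_bytes(32)
EPID_KEY = secrets.token_bytes(32)

def ereport(mrenclave: bytes, user_data: bytes) -> dict:
    """Enclave side: build a report bound to the claims (emulating EREPORT)."""
    body = mrenclave + user_data
    return {"body": body,
            "mac": hmac.new(REPORT_KEY, body, hashlib.sha256).digest()}

def quote(report: dict) -> dict:
    """Quoting enclave: verify the report locally, then sign it
    (emulating EGETKEY followed by the EPID signature)."""
    assert hmac.compare_digest(
        report["mac"],
        hmac.new(REPORT_KEY, report["body"], hashlib.sha256).digest())
    sig = hmac.new(EPID_KEY, report["body"], hashlib.sha256).digest()
    return {"body": report["body"], "sig": sig}

def verify(q: dict, nonce: bytes, expected_mrenclave: bytes) -> bool:
    """Verifier side: the IAS (or a DCAP infrastructure) checks the
    signature; the verifier then compares the claims against references."""
    ok_sig = hmac.compare_digest(
        q["sig"], hmac.new(EPID_KEY, q["body"], hashlib.sha256).digest())
    return ok_sig and q["body"] == expected_mrenclave + nonce

# End-to-end flow: challenge -> report -> quote -> verification.
nonce = secrets.token_bytes(16)                 # verifier's challenge
mr = hashlib.sha256(b"enclave code").digest()   # enclave measurement
q = quote(ereport(mr, nonce))
assert verify(q, nonce, mr)
```

The key point the sketch captures is the indirection: the application enclave only authenticates its report locally, and only the quoting enclave holds the device-specific key that makes the evidence remotely verifiable.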
More recently, Intel introduced the Data Center Attestation Primitives (DCAP) [46], an alternative solution to EPID enabling third-party attestation. Thanks to DCAP, verifiers operate their own attestation infrastructure, avoiding the dependency on external services (e.g., IAS) during the attestation procedure. DCAP introduces an additional step, where the quote (Fig. 3-⑦) is signed using the elliptic curve digital signature algorithm (ECDSA) with the attestation collateral of the attestation infrastructure. Instead of contacting the IAS (Fig. 3-⑨), the service retrieves the attestation collateral associated with the received evidence from the attestation infrastructure in order to validate the signature.
While the quoting enclave, the microcode and XuCode are closed-source, recent work formally analysed the TEE and its attestation mechanism [50, 45]. The other components of SGX (i.e., the kernel driver and the SDK) are open source. MAGE [16] further extended the remote attestation scheme of Intel SGX by offering mutual attestation for a group of enclaves without trusted third parties. Similarly, OPERA [17] proposes a decentralised attestation scheme, unchaining the attesters from the IAS while conducting attestation.
Intel SGX has many advantages but also suffers from a few limitations. First, most SGX implementations limit the EPC size to 93.5 MB [52]. While smaller programs offer smaller attack surfaces, exceeding this threshold increases the memory access latency because of the paging mechanism. Newer Intel Xeon processors extend that limit to 512 GB, but drop integrity protection and freshness guarantees against physical attacks. Besides, the enclave model prevents performing system calls and direct hardware access, since the threat model distrusts the outer world, leading to the development of partitioned applications.
## 3.4 Arm TrustZone architectures
Depending on the architecture of Arm's processors, TrustZone comes in two flavours: TrustZone-A (for Cortex-A) and TrustZone-M (for Cortex-M). While they share many design aspects, we detail their differences in the remainder of this section.
Arm TrustZone-A provides the hardware elements to establish a single TEE per system [42]. Figure 2c illustrates the high-level architecture of TrustZone-A. Broadly adopted by commodity devices, TrustZone splits the processor into two states: the secure world (TEE) and the normal world (untrusted environment). A secure monitor, entered through the secure monitor call (SMC) instruction, switches between the worlds, and each world operates with its own user and kernel spaces. The secure world runs a trusted operating system (e.g., OP-TEE) hosting trusted applications (TAs) as isolated processes, while the normal world uses a traditional operating system (e.g., Linux).
Despite the commercial success of TrustZone-A, it lacks built-in attestation mechanisms, preventing relying parties from validating and trusting the state of TrustZone-A remotely. Nevertheless, researchers proposed several variants of one-way remote attestation protocols for Arm TrustZone [56, 34], as well as mutual remote attestation [2, 48], thus extending the capabilities of the architecture. All of these propositions require the availability of hardware primitives on the system-on-chip (SoC): (i) a root of trust in the secure world, (ii) a secure source of randomness for cryptographic operations, and (iii) a secure boot mechanism, ensuring a sound system state upon boot. Indeed, devices lacking built-in attestation mechanisms may rely on a root of trust to derive private cryptographic materials (e.g., a private key for evidence issuance). Secure boot measures the integrity of the individual boot stages and prevents tampered systems from booting. As a result, remote parties can verify evidence issued in the TEE and ensure the trustworthiness of the attesters.
We describe the remote attestation mechanism of Shepherd et al. [48] as a case study. This solution establishes mutually trusted channels for bi-directional attestation based on a trusted measurer (TM), a software component located in the secure world and authenticated by the TEE's secure boot, which generates claims and issues evidence based on the OS and TA states. A private key is provisioned and sealed in the TEE's secure storage and used by the TM to sign evidence, similarly to a firmware TPM [43]. Using a dedicated protocol for remote attestation, the bi-directional attestation is accomplished in three rounds:
1. The attester sends a handshake request to the verifier containing the identities of both parties and the cryptographic materials to initiate key establishment.
2. The verifier answers the handshake by including similar information (i.e., both identities and cryptographic materials), as well as evidence of the verifier's TEE, based on the computed common secret (i.e., using Diffie-Hellman).
3. Finally, the attester sends back signed evidence of the attester's TEE, based on the same common secret.
Once the two parties have validated the genuineness of the evidence, they can derive further shared secrets to establish a trusted communication channel.
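The three rounds above can be sketched as follows. This is a toy illustration under loud assumptions: Diffie-Hellman over the Mersenne prime 2^127-1 (far too small for real use), and HMAC standing in for the trusted measurer's signature over the evidence; all names are ours.

```python
import hashlib
import hmac
import secrets

# Toy Diffie-Hellman parameters: the Mersenne prime 2**127 - 1 keeps the
# sketch readable but is NOT a secure group size.
P, G = 2**127 - 1, 3

def dh_keypair():
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

def evidence(tm_key: bytes, state_hash: bytes, secret: bytes) -> bytes:
    # The trusted measurer signs claims bound to the common DH secret
    # (HMAC stands in for the signature with the sealed private key).
    return hmac.new(tm_key, state_hash + secret, hashlib.sha256).digest()

# Round 1: attester -> verifier (identities + DH share).
a_priv, a_pub = dh_keypair()
# Round 2: verifier -> attester (DH share + verifier's evidence).
v_priv, v_pub = dh_keypair()
secret = hashlib.sha256(str(pow(a_pub, v_priv, P)).encode()).digest()
v_key, v_state = secrets.token_bytes(32), hashlib.sha256(b"verifier TEE").digest()
ev_v = evidence(v_key, v_state, secret)
# Round 3: attester -> verifier (attester's evidence over the same secret).
secret_a = hashlib.sha256(str(pow(v_pub, a_priv, P)).encode()).digest()
a_key, a_state = secrets.token_bytes(32), hashlib.sha256(b"attester TEE").digest()
ev_a = evidence(a_key, a_state, secret_a)

assert secret == secret_a  # both sides derived the same common secret
assert hmac.compare_digest(ev_v, evidence(v_key, v_state, secret_a))
assert hmac.compare_digest(ev_a, evidence(a_key, a_state, secret))
```

Binding each piece of evidence to the freshly computed common secret is what ties the attestation to this particular session, preventing evidence replay across handshakes.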
Arm TrustZone-A also presents advantages and drawbacks. Hardware is independently accessible by both worlds, which is helpful for TEE applications utilising peripherals. On the other hand, the reference open-source trusted OS, i.e., OP-TEE, limits the memory available to TAs to a few MB [24]. Due to this constraint, software needs to be partitioned to leverage TrustZone. Besides, the system must be set up in a particular way: a trusted OS is required, instead of creating TEE instances directly from the regular OS, adding complexity. Finally, OP-TEE is small and does not implement a POSIX API, making the development of TAs difficult, notably when porting legacy code. While most components of TrustZone have open-source implementations (e.g., the firmware and the trusted OS), many vendors do not disclose the implementation of their secure monitor.
Arm TrustZone-M (TZ-M), much like its predecessor TrustZone-A, provides an efficient mechanism to isolate the system into two distinct states/processing environments [7]. The TZ-M extension brings trusted execution to resource-constrained IoT devices (e.g., Cortex-M23/M33/M35P/M55). When a TZ-M-enabled device boots, it always starts in the secure world, where the memory is initialised before transferring control to the normal world. Despite the similarity of the high-level concept, TrustZone-M differs from TrustZone-A in the low-level implementation of some features. The switch between the secure and the normal world is handled in hardware and is much faster than a secure monitor [6], making context switching efficient and suitable for constrained devices. Normal world applications directly call secure world functions through the non-secure callable (NSC) region (Figure 2d). TrustZone-M lacks complex memory management hardware such as the memory management unit (MMU) and only supports the memory protection unit (MPU) to enforce access control and memory protection [9]. In TZ-M-enabled IoT devices, the secure world runs a concise trusted firmware providing secure processing in the form of secure services (e.g., TrustedFirmware-M, a reference implementation of the Platform Security Architecture (PSA) [10]), while the normal world supports real-time operating systems (e.g., Zephyr OS, Arm Mbed OS, FreeRTOS).
Since TZ-M is a relatively new addition, only recently available for the IoT infrastructure, existing work on attestation mechanisms for its hardware/software is scarce. Nonetheless, TZ-M fulfils some basic requirements for attestation, like (i) secure storage, (ii) secure boot, (iii) secure inter-world communication and (iv) isolation of software. Thus, schemes like [1] have leveraged TZ-M to develop attestation and use TZ-M's TEE capabilities to establish a chain of trust. TrustedFirmware-M, following the guidelines of PSA, also supports initial attestation of device-specific data in the form of a secure service. We provide further details of the remote attestation mechanism introduced in DIAT [1]. It aims at providing run-time attestation of on-device data integrity in autonomous embedded systems in the absence of a central verifier. DIAT attests data integrity by identifying the software components (or modules), i.e., the claims, that process the data of concern and verifying that these modules are not modified, ensuring that all software modules that influence the data are benign and that the sensitive data is processed correctly. The main steps of the protocol are described below:
- The verifier sends a request for data to the attester along with a nonce. The data can represent collected environmental information (e.g., from a sensing edge device) or compute-bound results (e.g., machine learning).
- The attester generates the requested data and issues evidence, called the attestation results: the list of all software modules that affect the data, with the control flow of each module derived from its control-flow graph. The attester signs the data and the evidence with its secret key and sends the authenticated data to the verifier.
- The verifier assesses the authenticity and integrity of the data by tracing the software modules from the evidence. Since the evidence comprises the software modules that process the data and the frequency of execution of each module, unauthorised data modifications and code-reuse attacks are detected.
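A minimal sketch of this verification logic follows, under the assumption that the evidence is an authenticated list of module measurements. HMAC stands in for the device's signature, and names such as `attest` and `verify` are ours, not DIAT's API.

```python
import hashlib
import hmac
import secrets

# Illustrative stand-in for the device's signing key.
DEVICE_KEY = secrets.token_bytes(32)

# Verifier's reference measurements of the benign software modules.
REFERENCE = {
    "sensor_driver": hashlib.sha256(b"sensor v1").hexdigest(),
    "filter": hashlib.sha256(b"filter v1").hexdigest(),
}

def attest(data: bytes, nonce: bytes, modules: dict) -> dict:
    """Attester: bind the produced data, the verifier's nonce and the
    list of modules that influenced the data into authenticated evidence."""
    claims = repr(sorted(modules.items())).encode()
    tag = hmac.new(DEVICE_KEY, data + nonce + claims, hashlib.sha256).digest()
    return {"data": data, "modules": modules, "tag": tag}

def verify(report: dict, nonce: bytes) -> bool:
    """Verifier: check the authentication tag, then check that every
    module that touched the data matches a benign reference measurement."""
    claims = repr(sorted(report["modules"].items())).encode()
    ok = hmac.compare_digest(
        report["tag"],
        hmac.new(DEVICE_KEY, report["data"] + nonce + claims,
                 hashlib.sha256).digest())
    return ok and all(m in REFERENCE and h == REFERENCE[m]
                      for m, h in report["modules"].items())

nonce = secrets.token_bytes(16)
report = attest(b"temp=21C", nonce, dict(REFERENCE))
assert verify(report, nonce)
```

The sketch omits DIAT's control-flow frequencies, but shows the core idea: the data is only trusted if every module in its provenance chain matches a known-good measurement.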
TrustZone-M provides several advantages as a TEE to support remote attestation but also has a few drawbacks. It provides efficient isolation of the software modules and a faster context switch between the secure and normal world. This is advantageous as it is critical to have minimum attestation latency in the real-time operations of embedded systems like autonomous vehicles, industrial control systems, unmanned aerial vehicles, etc. The availability of hardware-unique keys in TZ-M enabled devices further ensures that the evidence generated by the TCB cannot be forged. Besides, the software stack may be fully open source, thanks to the absence of a secure monitor. On the other hand, since the components involved in measuring, attesting, and verifying the data/system need to be protected as part of the TCB, it increases the TCB size on the attested devices, raising the attack surface.
## 3.5 AMD SEV
AMD Secure Encrypted Virtualization (SEV) [3] isolates virtualised environments (e.g., containers and virtual machines) from untrusted hypervisors. Figure 2b illustrates the high-level architecture of SEV. SEV uses an embedded hardware AES engine, which relies on multiple keys to encrypt memory seamlessly. It relies on a closed Arm Cortex-A5 processor as a secure co-processor, used to generate cryptographic materials kept in the CPU. Each virtual machine (VM) and hypervisor is assigned a particular key and tagged with an address space identifier (ASID), preventing cross-TEE attacks. The tag restricts the use of code and data to the owner with the same ASID, protecting them from unauthorised usage inside the processor. Outside the processor package, code and data are protected by AES encryption with a 128-bit key tied to the tag.
The original version of SEV could leak sensitive information during interrupts from guests to the hypervisor through registers [25]. This issue was addressed with SEV Encrypted State (SEV-ES) [29], where register states are encrypted, and the guest operating system needs to grant the hypervisor access to specific guest
Fig. 4: The remote attestation flow of AMD SEV.
registers. With SEV-ES, register states are stored for each VM in a virtual machine control block (VMCB), divided into an unencrypted control area and an encrypted virtual machine save area (VMSA). The hypervisor manages the control area to indicate event and interrupt handling, while the VMSA contains the register states. Integrity protection ensures that encrypted register values in the VMSA cannot be modified unnoticed, so VMs resume with the same state. Service requests to the hypervisor due to interrupts in VMs are communicated over the guest-hypervisor communication block (GHCB), which is accessible through shared memory. Hypervisors no longer need to be trusted with SEV-ES because they cannot access guest registers. However, the remote attestation protocol was recently proven insecure [15], exposing the system to rollback attacks and allowing a malicious cloud provider with physical access to SEV machines to install malicious firmware and read in clear the (otherwise protected) system memory. The next iteration of this technology, SEV Secure Nested Paging (SEV-SNP) [4], plans to overcome these limitations, notably by means of in-silicon redesigns.
At its core, SEV leverages a root of trust called the chip endorsement key, a secret fused in the die of the processor and issued by AMD for its attestation mechanism. The three editions of SEV may start VMs from an unencrypted state, similarly to Intel SGX enclaves. In such cases, secrets and confidential data must be provisioned using remote attestation. The AMD secure processor creates a claim based on the measurement of the content of the VM. In addition, SEV-SNP measures the metadata associated with memory pages, ensuring the digest also covers the layout of the initial guest memory. While SEV and SEV-ES only support remote attestation during the launch of the guest operating system, SEV-SNP supports a more flexible model. The latter bootstraps private communication keys, enabling the guest VM to request evidence at any time and to obtain cryptographic materials for data sealing, i.e., storing data securely at rest.
The remote attestation process takes place when SEV starts the VMs. First, the attester, called the hypervisor, executes the LAUNCH_START command (Fig. 4-①), which creates a guest context in the firmware with the public key of the verifier, called the guest owner. As the attester loads the VM into memory, the LAUNCH_UPDATE_DATA/LAUNCH_UPDATE_VMSA commands (Fig. 4-②) are called to encrypt the memory and calculate the claims. When the VM is loaded, the attester calls the LAUNCH_MEASURE command (Fig. 4-③), which produces evidence of the encrypted VM. The SEV firmware provides the verifier with this evidence of the state of the VM to prove that it is in the expected state. The verifier examines the evidence to verify that the VM has not been tampered with. Finally, sensitive data, such as image decryption keys, is provisioned through the LAUNCH_SECRET command (Fig. 4-④), after which the attester calls the LAUNCH_FINISHED command (Fig. 4-⑤) to indicate that the VM can be executed.
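The launch sequence can be sketched as follows. This is a hedged emulation: the `SevFirmware` class and the HMAC keyed with the chip endorsement key are illustrative stand-ins for the real firmware interface and its signature scheme, and memory encryption is omitted.

```python
import hashlib
import hmac
import secrets

# Illustrative stand-in for the chip endorsement key fused in the die.
CEK = secrets.token_bytes(32)

class SevFirmware:
    """Toy model of the SEV firmware's launch-attestation commands."""

    def launch_start(self, guest_owner_pub: bytes):
        # 1. Create a guest context holding the guest owner's public key.
        self.ctx = {"owner": guest_owner_pub, "digest": hashlib.sha256()}

    def launch_update_data(self, page: bytes):
        # 2. Measure (and, in real SEV, encrypt) each guest memory page.
        self.ctx["digest"].update(page)

    def launch_measure(self) -> dict:
        # 3. Produce evidence over the accumulated measurement.
        m = self.ctx["digest"].digest()
        return {"measurement": m,
                "sig": hmac.new(CEK, m, hashlib.sha256).digest()}

    def launch_secret(self, sealed_secret: bytes):
        # 4. Inject the guest owner's secret, only sent after verification.
        self.secret = sealed_secret

    def launch_finish(self) -> bool:
        # 5. The VM may now run.
        return True

fw = SevFirmware()
fw.launch_start(guest_owner_pub=b"owner-pk")
for page in (b"kernel", b"initrd"):
    fw.launch_update_data(page)
ev = fw.launch_measure()

# Guest owner: recompute the expected measurement, check the signature,
# and only then provision the sensitive secret.
expected = hashlib.sha256(b"kernel" + b"initrd").digest()
assert ev["measurement"] == expected
assert hmac.compare_digest(
    ev["sig"], hmac.new(CEK, expected, hashlib.sha256).digest())
fw.launch_secret(b"disk-decryption-key")
assert fw.launch_finish()
```

The ordering is the security-relevant part: the guest owner releases its secret only after the measurement of the loaded VM image has been verified.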
Software development is eased, as AMD SEV protects whole VMs, including the operating system, unlike Intel SGX, where applications are split into untrusted and trusted parts. Nonetheless, this approach increases the attack surface of the secure environment, since the TCB is enlarged. The guest operating system must also support SEV, cannot access host devices (e.g., via PCI passthrough), and the first edition of SEV (called vanilla in Table 1) is limited to 16 VMs.
## 3.6 RISC-V architectures
Several TEE designs have been proposed for RISC-V based on the physical memory protection (PMP) mechanism. These proposals include support for remote attestation, like the technologies previously described. We survey the most important ones in the following.
Keystone [33] is a modular framework providing the building blocks to create trusted execution environments, rather than an inflexible all-in-one solution representing yet another fixed design point. Instead, its authors advocate that hardware should provide security primitives instead of point-wise solutions. Keystone implements a secure monitor in machine mode (M-mode) and relies on RISC-V PMP to provide isolated execution, therefore requiring no hardware change. Since Keystone leverages feature composition, framework users can select their own set of security primitives, e.g., memory encryption, dynamic memory management and cache partitioning. Each trusted application executes in user mode (U-mode) and embeds a runtime executing in supervisor mode (S-mode). The runtime decouples the infrastructure aspects of the TEE (e.g., memory management, scheduling) from the security aspects handled by the secure monitor. As such, Keystone programmers can roll their own runtime for fine-grained control of the computing resources without managing the TEE's security. Keystone uses a secure boot mechanism that measures the secure monitor image, generates an attestation key and signs them using a root of trust. The secure monitor exposes a supervisor binary interface (SBI) for the enclaves to communicate. A subset of the SBI is dedicated to issuing evidence signed by provisioned keys (i.e., endorsed by the verifier), based on the measurements of the secure monitor, the runtime and the enclave's application. Arbitrary data can be attached to evidence, enabling an attester to create a secure communication channel with a verifier using key establishment protocols (e.g., Diffie-Hellman). When a remote attestation request takes place, the verifier sends a challenge to the trusted application. The response contains evidence with the public session key of the attester.
Finally, the verifier examines the evidence based on the public signature and the claims (i.e., the measurements of the components), leading to the establishment of a secure communication channel. While Keystone does not describe the protocol in depth, the authors provide a case study of remote attestation.
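Assuming the evidence covers the three measurements plus enclave-chosen data, the scheme might be sketched as below. HMAC with a provisioned key stands in for the asymmetric attestation key of the secure boot chain; all names are illustrative, not Keystone's SBI.

```python
import hashlib
import hmac
import secrets

# Illustrative stand-in for the attestation key generated at secure boot
# and endorsed by the verifier.
ATTESTATION_KEY = secrets.token_bytes(32)

def sm_attest(sm: bytes, runtime: bytes, eapp: bytes, data: bytes) -> dict:
    """Secure monitor: sign the measurements of itself, the runtime and
    the enclave application, plus arbitrary enclave-chosen data."""
    claims = b"".join(hashlib.sha256(x).digest()
                      for x in (sm, runtime, eapp)) + data
    return {"claims": claims,
            "sig": hmac.new(ATTESTATION_KEY, claims, hashlib.sha256).digest()}

# The enclave answers a challenge with evidence carrying its ephemeral
# session public key, from which a secure channel can later be derived.
session_pub = secrets.token_bytes(32)
ev = sm_attest(b"sm image", b"eyrie runtime", b"enclave app", session_pub)

# Verifier: check the signature, then compare the three measurements
# and extract the bound session key.
expected = b"".join(hashlib.sha256(x).digest()
                    for x in (b"sm image", b"eyrie runtime",
                              b"enclave app")) + session_pub
assert ev["claims"] == expected
assert hmac.compare_digest(
    ev["sig"], hmac.new(ATTESTATION_KEY, expected, hashlib.sha256).digest())
```

Attaching the session key to the signed claims is what lets the subsequent key establishment inherit the trust established by the attestation.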
Sanctum [20] was the first proposition with support for attesting trusted applications. It offers promises similar to Intel's SGX by providing provable and robust software isolation, running in enclaves. The authors replaced Intel's opaque microcode/XuCode with two open-source components, the measurement root (mroot) and a secure monitor, to provide verifiable protection. A remote attestation protocol is proposed, as well as a comprehensive design for deriving trust from a root of trust. Upon booting the system, mroot generates the cryptographic materials for signing (if started for the first time) and hands off to the secure monitor. Similarly to SGX, Sanctum uses a signing enclave, which receives a derived private key from the secure monitor for evidence generation. The remote attestation protocol requires the attester, called an enclave, to establish a session key with a verifier, called the remote party. Afterwards, an enclave can request evidence from the signing enclave based on multiple claims, such as the hash of the code of the requesting enclave and information from the key exchange messages. The evidence is then forwarded to the verifier through the secure channel for examination. This work has been further extended to establish a secure boot mechanism and an alternative method for remote attestation, deriving a cryptographic identity from manufacturing variation using a physical unclonable function (PUF), which is useful when a hardware secret is not present [32].
TIMBER-V [54] achieves isolation of execution on small embedded processors thanks to hardware-assisted memory tagging. Tagged memory transparently associates blocks of memory with additional metadata. Unlike Sanctum, TIMBER-V aims to bring enclaves to smaller RISC-V devices featuring only limited physical memory. Similarly to TrustZone, the user mode (U-mode) and the supervisor mode (S-mode) are split into a secure and a normal world. The secure supervisor mode runs a trust manager, called TagRoot, which manages the tagging of the memory. The secure user mode improves on the model of TrustZone, as it can handle multiple concurrent enclaves that are isolated from each other. Tagged memory is combined with an MPU to support an arbitrary number of processes while avoiding the overhead of large tags. The trust manager exposes an API for the enclaves to retrieve evidence, based on a given enclave identity, a root of trust called the secret platform key, and an arbitrary identifier provided by the enclave. The remote attestation protocol has two steps: the verifier (i.e., the remote party) sends a challenge to the attester (i.e., the enclave); the challenge is then forwarded to the trust manager as an identifier to issue evidence, which is authenticated using a MAC. The usage of symmetric cryptography is unusual in remote attestation, because the verifier must own the secret key to verify the evidence. The authors note that TIMBER-V could be extended to leverage public-key cryptography for remote attestation.
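TIMBER-V's symmetric scheme is simple enough to sketch directly. Names such as `tagroot_attest` are illustrative, and the MAC construction below is our assumption of the general shape, not TIMBER-V's exact format.

```python
import hashlib
import hmac
import secrets

# Illustrative stand-in for the secret platform key held by TagRoot.
# In a symmetric scheme, the verifier must hold the same key.
PLATFORM_KEY = secrets.token_bytes(32)

def tagroot_attest(enclave_identity: bytes, challenge: bytes) -> bytes:
    """TagRoot: evidence = MAC(platform key, enclave identity || challenge)."""
    return hmac.new(PLATFORM_KEY, enclave_identity + challenge,
                    hashlib.sha256).digest()

identity = hashlib.sha256(b"enclave code+data").digest()
challenge = secrets.token_bytes(16)          # nonce from the verifier
ev = tagroot_attest(identity, challenge)

# Verifier (sharing PLATFORM_KEY) recomputes and compares the MAC.
assert hmac.compare_digest(
    ev, hmac.new(PLATFORM_KEY, identity + challenge, hashlib.sha256).digest())
```

The sketch makes the limitation noted above concrete: anyone able to verify the MAC also holds the key needed to forge it, which is why public-key signatures are the norm for remote attestation.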
LIRA-V [49] drafts a mutual remote attestation scheme for constrained edge devices. While this solution does not enable the execution of arbitrary code in a TEE, it introduces a comprehensive remote attestation mechanism that leverages PMP to protect the code of the attesting environment and relies on a root of trust to issue evidence. The proposed protocol runs exclusively in machine mode (M-mode), or in machine and user modes (M-mode and U-mode). The claim, which is the code measurement, is computed over parts of the physical memory regions by a program stored in ROM. LIRA-V's mutual attestation protocol works similarly to the protocol illustrated for TrustZone-A: it completes in three rounds and requires provisioned keys as a root of trust. The first device (i.e., verifier) sends a challenge with a public session key. Next, the second device (i.e., attester) answers with its own challenge and public session key, as well as evidence bound to that device and encrypted using the established shared session key. Finally, if the first device validates the evidence, the roles swap: it becomes the attester and issues evidence for the second device, now acting as the verifier. This protocol has been formally verified and enables the creation of a trusted communication channel upon the validation of evidence.
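The three-round exchange can be sketched as follows. This is a heavily simplified illustration under stated assumptions: a toy Diffie-Hellman group and HMAC replace LIRA-V's actual curve operations, authenticated encryption, and ROM-based measurement; all names are hypothetical.

```python
# Toy three-round mutual attestation in the spirit of LIRA-V.
# Illustrative only: toy DH group, HMAC in place of real crypto.
import hashlib
import hmac
import os

P = 0xFFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD1  # toy 192-bit modulus
G = 2

def keypair():
    x = int.from_bytes(os.urandom(16), "big") % P
    return x, pow(G, x, P)

def evidence(device_key: bytes, meas: bytes, n_a: bytes, n_b: bytes) -> bytes:
    """Evidence bound to both nonces (freshness) and the code measurement."""
    return hmac.new(device_key, meas + n_a + n_b, hashlib.sha256).digest()

k_a, k_b = b"provisioned key A", b"provisioned key B"   # roots of trust
meas = hashlib.sha256(b"M-mode code region").digest()   # ROM-computed claim

# Round 1: device A (verifier) sends a nonce and a public session key.
a_priv, a_pub = keypair()
n_a = os.urandom(16)

# Round 2: device B (attester) replies with its nonce, public key, and
# evidence (encrypted under the shared session key in the real protocol).
b_priv, b_pub = keypair()
n_b = os.urandom(16)
session_b = hashlib.sha256(pow(a_pub, b_priv, P).to_bytes(24, "big")).digest()
ev_b = evidence(k_b, meas, n_a, n_b)

# Round 3: A derives the same session key, validates B's evidence, then
# the roles swap: A issues its own evidence for B, completing mutual
# attestation and opening a trusted channel keyed by the session key.
session_a = hashlib.sha256(pow(b_pub, a_priv, P).to_bytes(24, "big")).digest()
assert session_a == session_b
assert hmac.compare_digest(ev_b, evidence(k_b, meas, n_a, n_b))
ev_a = evidence(k_a, meas, n_a, n_b)
assert hmac.compare_digest(ev_a, evidence(k_a, meas, n_a, n_b))
```

Binding the evidence to both nonces and encrypting it under the freshly established session key is what ties the attestation result to the subsequent secure channel.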
Lastly, we omitted other emerging RISC-V TEEs because they lack remote attestation mechanisms; bringing such capabilities to them remains open research. We briefly introduce them here for completeness. Hex-Five Security proposes MultiZone [23], a zero-trust computing architecture enabling the isolation of software components, called zones . The MultiZone kernel ensures a sane system state using secure boot and PMP, and runs unmodified applications by trapping and emulating privileged instructions. HECTOR-V [39] is a design for building hardened TEEs with a reduced TCB. Thanks to a tight coupling of the TEE and the SoC, the authors provide runtime and peripheral services directly from the hardware, and leverage a dedicated processor and a hardware-based security monitor to ensure the isolation and control-flow integrity of the trusted applications, called trustlets . Finally, Lindemer et al. [35] enable simultaneous thread isolation and TEE separation on devices with a flat address space (i.e., without an MMU), thanks to a minor change to the PMP specification.
## 4 Future work
TEEs and remote attestation are fast-moving research areas, in which we expect many technological and paradigm shifts over the coming decades. This section introduces the next trusted environments announced by Intel and Arm, describes the shift towards VM-based TEEs, and concludes with attestation uniformity.
Intel unveiled Trust Domain Extensions (TDX) [27] in 2020 as its upcoming TEE, introducing the deployment of hardware-isolated virtual machines, called trust domains . Similarly to AMD SEV, Intel TDX is designed to isolate legacy applications running on regular operating systems, unlike Intel SGX, which requires tailored software built on a split architecture (i.e., untrusted and trusted parts). TDX builds on Intel Virtual Machine Extensions and Intel Multi-Key Total Memory Encryption, and proposes an attestation process guaranteeing the trustworthiness of trust domains to relying parties. In particular, it extends the remote attestation mechanisms of Intel SGX to issue claims and evidence, a process that has been formally verified by researchers [44].
Arm announced Confidential Compute Architecture (CCA) [8] as part of their future Armv9 processor architecture, consolidating TrustZone to isolate secure virtual machines. With this aim in mind, Arm CCA leverages Arm Realm Management Extension [11] to create a trusted third world called realm , next to the existing normal and secure worlds. Arm designed CCA to provide remote attestation mechanisms, assuring that relying parties can trust data and transactions.
These two recent initiatives highlight a convergence towards the VM-based isolation paradigm. Pioneered by AMD, this TEE architecture has several advantages. In particular, it reduces developer friction when writing applications, since the underlying operating system and APIs are standard and no different from those outside the TEE. Furthermore, a convergence of the paradigm may ease the development of unified, hardware-agnostic solutions for trusted software deployment, such as the Open Enclave SDK [41] or the recent initiatives promoting WebAssembly as an abstract, portable executable format for TEEs [22, 53, 38]. Remote attestation may also benefit from these unified solutions by abstracting the attestation process behind standard interfaces.
## 5 Conclusion
This work compares state-of-the-art remote attestation schemes that leverage hardware-assisted TEEs to deploy and run trusted applications, from commodity devices to cloud providers. TEE-based remote attestation has not yet been extensively studied and remains an industrial challenge.
Our survey highlights four architectural extensions: Intel SGX, Arm TrustZone, AMD SEV, and upcoming RISC-V TEEs. While SGX competes with SEV, the two pursue significantly different approaches. The former provides a complete built-in remote attestation protocol for multiple independent trusted applications. The latter is designed for virtualised environments, shielding VMs from untrusted hypervisors, and provides instructions that help attest independent VMs. Arm TrustZone and native RISC-V do not provide means to attest software running in the trusted environment, relying on the community to develop alternatives. TrustZone-M, however, supports a root of trust, which helps in building an adequately trusted implementation. RISC-V extensions differ considerably, offering various combinations of software and hardware mechanisms, some of which support a root of trust and multiple trusted applications.
Whether provided by manufacturers or academia, remote attestation remains an essential part of trusted computing solutions. It is the foundation of trust for remote computing whenever the target environments are not fully trusted. Current solutions widely differ in maturity and security: whereas some TEEs are developed by leading processor companies and provide built-in attestation mechanisms, others still lack proper hardware attestation support and require software solutions instead. Our study sheds light on the limitations of state-of-the-art TEEs and identifies promising directions for future work.
Acknowledgments This publication incorporates results from the VEDLIoT project, which received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 957197, and from the Swedish Foundation for Strategic Research (SSF) aSSIsT.
## References
1. Abera, T., Bahmani, R., Brasser, F., et al. : DIAT: data integrity attestation for resilient collaboration of autonomous systems. In: NDSS '19
2. Ahn, J., Lee, I.-G., and Kim, M.: Design and implementation of hardware-based remote attestation for a secure internet of things. Wireless Personal Communications 114(1) (2020)
3. AMD: Secure Encrypted Virtualization API: technical preview. Tech. rep. 55766, (2019)
4. AMD: Strengthening VM isolation with integrity protection and more. White Paper (2020)
5. Anati, I., Gueron, S., Johnson, S., et al. : Innovative technology for CPU based attestation and sealing. In: HASP '13
6. Arm: Arm trustzone technology for the Armv8-M architecture. Tech. rep. 100690, (2016)
7. Arm: Armv8-M architecture reference manual. Tech. rep. DDI0553 (2016)
8. Arm: Introducing Arm confidential compute architecture. Tech. rep. DEN0125, (2021)
9. Arm: Memory protection unit. Tech. rep. 100699, version 2.1 (2018)
10. Arm: Platform security architecture application guide. Tech. rep., version 2 (2019)
11. Arm: The Realm Management Extension (RME), for Armv9-A. Tech. rep. DDI0615 (2021)
12. Aublin, P.L., Mahhouk, M., and Kapitza, R.: Towards TEEs with large secure memory and integrity protection against HW attacks. In: SysTEX '22
13. Birkholz, H., Thaler, D., Richardson, M., et al. : Remote attestation procedures architecture. Tech. rep. draft-ietf-rats-architecture-12, Internet Engineering Task Force (2021)
14. Brickell, E., and Li, J.: Enhanced privacy ID: a direct anonymous attestation scheme with enhanced revocation capabilities. In: WPES '07
15. Buhren, R., Werling, C., and Seifert, J.-P.: Insecure until proven updated: analyzing AMD SEV's remote attestation. In: CCS '19. ACM
16. Chen, G., and Zhang, Y.: Mage: mutual attestation for a group of enclaves without trusted third parties. arXiv preprint arXiv:2008.09501 (2020)
17. Chen, G., Zhang, Y., and Lai, T.-H.: Opera: open remote attestation for Intel's secure enclaves. In: CCS '19. ACM
18. Coker, G., Guttman, J., Loscocco, P., et al. : Principles of remote attestation. Int. J. Inf. Secur. 10(2) (2011)
19. Costan, V., and Devadas, S.: Intel SGX explained. Cryptology ePrint Archive (2016)
20. Costan, V., Lebedev, I., and Devadas, S.: Sanctum: minimal hardware extensions for strong software isolation. In: USENIX Security 16
21. De Oliveira Nunes, I., Jakkamsetti, S., Rattanavipanon, N., et al. : On the TOCTOU problem in remote attestation. In: CCS '21. ACM
22. Enarx, https://enarx.dev
23. Garlati, C., and Pinto, S.: A clean slate approach to Linux security RISC-V enclaves. In: EW '20
24. Göttel, C., Felber, P., and Schiavoni, V.: Developing secure services for IoT with OP-TEE: a first look at performance and usability. In: IFIP DAIS '19. Springer
25. Hetzelt, F., and Buhren, R.: Security analysis of encrypted virtual machines. In: VEE '17. ACM
26. Hunt, G.D.H., Pai, R., Le, M.V., et al. : Confidential computing for OpenPOWER. In: EuroSys '21. ACM
27. Intel: Trust Domain Extensions, (2020). https://intel.ly/3L901wS
28. Intel: XuCode, (2021). https://intel.ly/3rYAhMI
29. Kaplan, D.: Protecting VM register state with SEV-ES. Tech. rep., (2017)
30. Kong, J., Koushanfar, F., Pendyala, P.K., et al. : PUFatt: embedded platform attestation based on novel processor-based PUFs. In: DAC '14. IEEE
31. Krawczyk, H.: SIGMA: the 'SIGn-and-MAc' approach to authenticated Diffie-Hellman and its use in the IKE protocols. In: Boneh, D. (ed.) CRYPTO '03. Springer
32. Lebedev, I., Hogan, K., and Devadas, S.: Invited paper: secure boot and remote attestation in the Sanctum processor. In: CSF '18. IEEE
33. Lee, D., Kohlbrenner, D., Shinde, S., et al. : Keystone: an open framework for architecting trusted execution environments. In: EuroSys '20. ACM
34. Li, W., Li, H., Chen, H., et al. : Adattester: secure online mobile advertisement attestation using trustzone. In: MobiSys '15. ACM
35. Lindemer, S., Midéus, G., and Raza, S.: Real-time thread isolation and trusted execution on embedded RISC-V. In: SECRISC-V '20
36. Maene, P., Götzfried, J., Clercq, R. de, et al.: Hardware-based trusted computing architectures for isolation and attestation. IEEE Transactions on Computers 67(3) (2018)
37. Ménétrey, J., Pasin, M., Felber, P., et al.: An exploratory study of attestation mechanisms for trusted execution environments. In: SysTEX '22
38. Ménétrey, J., Pasin, M., Felber, P., et al.: Twine: an embedded trusted runtime for WebAssembly. In: ICDE '21. IEEE
39. Nasahl, P., Schilling, R., Werner, M., et al. : HECTOR-V: a heterogeneous CPU architecture for a secure RISC-V execution environment. In: ASIA CCS '21. ACM
40. Nunes, I.D.O., Eldefrawy, K., Rattanavipanon, N., et al. : VRASED: a verified hardware/software co-design for remote attestation. In: USENIX Security 19
41. Open Enclave SDK, https://openenclave.io
42. Pinto, S., and Santos, N.: Demystifying Arm TrustZone: a comprehensive survey. ACM Computing Surveys 51(6) (2019)
43. Raj, H., Saroiu, S., Wolman, A., et al. : fTPM: a Software-Only implementation of a TPM chip. In: USENIX Security 16
44. Sardar, M.U., Musaev, S., and Fetzer, C.: Demystifying attestation in Intel Trust Domain Extensions via formal verification. IEEE Access 9 (2021)
45. Sardar, M.U., Quoc, D.L., and Fetzer, C.: Towards formalization of enhanced privacy ID (EPID)-based remote attestation in Intel SGX. In: DSD '20. IEEE
46. Scarlata, V., Johnson, S., Beaney, J., et al. : Supporting third party attestation for Intel SGX with Intel data center attestation primitives. White paper (2018)
47. Seshadri, A., Luk, M., Shi, E., et al. : Pioneer: verifying integrity and guaranteeing execution of code on legacy platforms. In: SOSP '05. ACM
48. Shepherd, C., Akram, R.N., and Markantonakis, K.: Establishing mutually trusted channels for remote sensing devices with trusted execution environments. In: ARES '17. ACM
49. Shepherd, C., Markantonakis, K., and Jaloyan, G.-A.: LIRA-V: lightweight remote attestation for constrained RISC-V devices. In: SPW '21. IEEE
50. Subramanyan, P., Sinha, R., Lebedev, I., et al. : A formal foundation for secure remote execution of enclaves. In: CCS '17. ACM
51. Turan, F., and Verbauwhede, I.: Propagating trusted execution through mutual attestation. In: SysTEX '19. ACM
52. Vaucher, S., Pires, R., Felber, P., et al.: SGX-aware container orchestration for heterogeneous clusters. In: ICDCS '18. IEEE
53. Veracruz, https://veracruz-project.com
54. Weiser, S., Werner, M., Brasser, F., et al. : TIMBER-V: tag-isolated memory bringing fine-grained enclaves to RISC-V. In: NDSS '19
55. Xu, W., Zhang, X., Hu, H., et al. : Remote attestation with domain-based integrity model and policy analysis. IEEE TDSC 9(3) (2012)
56. Zhao, S., Zhang, Q., Qin, Y., et al. : SecTEE: a software-based approach to secure enclave architecture using TEE. In: CCS '19. ACM