# Confidential Computing
## Data Privacy and Protection: Operational vs. Technical Assurance
Many cloud providers rely primarily on operational assurance — policies, processes, and administrative controls — to protect customer data. While necessary, this model leaves a residual risk: providers (and in some jurisdictions, government authorities) can compel access because the provider ultimately operates the infrastructure and often holds the encryption keys. Phoeniqs takes a different approach. As a Swiss-owned provider operating in Switzerland and building on client-controlled encryption and confidential computing, we prioritize technical assurance: cryptographic and hardware-enforced controls that make access to plaintext data technically infeasible — even for our own administrators (illustrated in Figure 1). Our philosophy toward confidential computing is rooted in protecting data across its entire lifecycle: in transit, at rest, and in use.
Figure 1 — Operational vs. Technical Assurance (Conceptual Illustration)
Customers bring or keep their own keys (BYOK/KYOK), and workloads run inside attested enclaves that protect data in transit, at rest, and in use.
By combining hardware security module (HSM)-based encryption, tamper-proof attestation, confidential VMs/containers, and GPU-enabled enclaves, we provide sovereign-grade security, unmatched by hyperscaler operational models:
- Data in Transit: All network traffic is encrypted with TLS 1.3, so plaintext cannot be read while in motion.
- Data at Rest: Physical and logical storage encryption protects data stored in our infrastructure. Clients can bring or keep their own keys (BYOK/KYOK), ensuring Phoeniqs cannot decrypt data.
- Data in Use: The most critical protection. Workloads run inside hardware-based trusted execution environments (TEEs) using AMD SEV, Intel TDX, or IBM Hyper Protect Secure Execution. This prevents administrators, hypervisors, or external actors from accessing plaintext data or models during computation.
- Confidential VMs & Containers: Containerized workloads are isolated with attestation, ensuring integrity before any secret provisioning.
- Confidential Data Services: Application-level services run in the same secure enclaves, maintaining full-stack confidentiality.
- Tamper-proof Attestation: Cryptographic attestation validates the environment, guaranteeing only trusted code and hardware are used.
Figure 2 — Technically Assured Data Protection Across States
Why this matters:
- CLOUD Act exposure: Hyperscalers subject to the U.S. CLOUD Act may be compelled to provide data or assistance.
  By contrast, Phoeniqs is not a U.S. provider and implements a zero-access architecture: we do not possess customer keys, and enclave protections prevent privileged access.
- Zero-access Operations: Even if compelled, we are technically unable to decrypt customer content.
- Auditability: Hardware attestation and tamper-evident measurements prove that only verified code runs before any secrets are released.
## Applying Confidential Containers (CoCo) for Confidential AI
Confidential Containers (CoCo) leverage trusted execution environments (TEEs) to protect containerized workloads, isolating sensitive applications from the host OS, other workloads, and the cloud provider.
Key features include:
- Hardware-Based Security: Utilizes TEEs like AMD SEV-SNP, Intel TDX, and IBM Z Secure Execution Environment to protect container memory.
- Attestation: Verifies the integrity of the execution environment.
- Secure Key Management: Integrates with KMS for secret provisioning.
- Open-Source Ecosystem: Built on Kubernetes and Kata Containers, supporting cloud-native workloads.
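As a rough illustration of how a workload opts into CoCo on Kubernetes, the sketch below shows the shape of a pod manifest, expressed here as a Python dict. The runtime class name and image are hypothetical; actual names depend on the cluster's CoCo installation and underlying TEE hardware.

```python
# Illustrative sketch of a pod manifest for a Confidential Containers (CoCo)
# workload. "kata-qemu-snp" is an example runtime class name; real clusters
# expose classes matching their TEE hardware (SEV-SNP, TDX, Secure Execution).
coco_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "confidential-inference"},
    "spec": {
        # Selecting a Kata/CoCo runtime class places the pod inside a
        # hardware-backed TEE instead of a plain runc sandbox.
        "runtimeClassName": "kata-qemu-snp",
        "containers": [
            {
                "name": "inference",
                # Hypothetical image reference.
                "image": "registry.example.com/llm-inference:latest",
                # Secrets are not mounted statically; a key-broker service
                # releases them only after successful attestation.
            }
        ],
    },
}
```

The single meaningful change versus an ordinary pod is the `runtimeClassName`: everything else about the cloud-native workflow stays the same, which is what makes CoCo attractive for existing Kubernetes workloads.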
To address the security requirements of confidential AI, IBM Research has proposed a robust architecture for protecting AI workflows — particularly for large language model (LLM) inference.
This architecture ensures end-to-end protection across requestors, proxy, and inference engine, leveraging CoCo and confidential computing technologies.
Figure 3 — LLM Workflow for Confidential AI
- Requestor-to-Proxy Communication: Protected using TLS/SSL to encrypt input and output tokens, preventing intermediaries from accessing plaintext data.
- Proxy POD: Runs in a Confidential Container POD, utilizing SEV-SNP to protect memory from the infrastructure provider or compromised OpenShift cluster.
- Inference Server POD: Operates in a Confidential Container POD with GPU Confidential Computing mode enabled (e.g., NVIDIA H100 PCIe), ensuring data in the GPU is inaccessible outside the POD. Data transferred between CPU and GPU over the PCIe bus is encrypted.
- Proxy-to-Inference Server Communication: Secured using POD-to-POD transparent encryption — developed and upstreamed to the CoCo project by IBM Research.
- Model Protection: Proprietary models are stored encrypted and decrypted only within the vLLM POD.
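The model-protection step can be sketched as a seal/unseal pair: weights are stored only in sealed form and unsealed in memory inside the attested inference pod. This is a minimal illustrative sketch; the SHA-256-based toy keystream stands in for a real AEAD cipher such as AES-256-GCM, and all function names are hypothetical.

```python
import hashlib
import secrets

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy keystream built from SHA-256 in counter mode. A stand-in for a
    # real AEAD cipher (e.g., AES-256-GCM); not for actual protection.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def seal_model(weights: bytes, data_key: bytes):
    # Model weights are persisted only in sealed (encrypted) form.
    nonce = secrets.token_bytes(16)
    ks = _keystream(data_key, nonce, len(weights))
    return nonce, bytes(a ^ b for a, b in zip(weights, ks))

def unseal_model(nonce: bytes, sealed: bytes, data_key: bytes) -> bytes:
    # Runs only inside the attested inference pod, after the key broker
    # has released data_key to a verified enclave.
    ks = _keystream(data_key, nonce, len(sealed))
    return bytes(a ^ b for a, b in zip(sealed, ks))
```

Because `data_key` is released only after attestation, plaintext weights never exist outside the enclave boundary.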
## Applying Confidential AI for Model-as-a-Service (MaaS) Data Privacy and Protection
Phoeniqs extends confidential computing protections to GPU workloads, ensuring that prompts, model weights, and inference data remain encrypted and isolated even during GPU execution.
This leverages IBM LinuxONE, Hyper Protect virtualization, and OpenShift AI for orchestration, combined with attestation-based trust guarantees.
Key principles:
- Encrypted Models: Model weights remain encrypted outside the enclave, safeguarding them from unauthorized access.
- Scalability: Handles high request volumes for large-scale AI inference.
- System Integration: Orchestrates attestation and resource allocation across the platform.
- CoCo Alignment: Workloads run in TEE-protected Confidential Containers.
When fully deployed, user prompts, model weights, and runtime memory execute inside confidential computing enclaves with attestation — ensuring even privileged admins cannot access plaintext workloads.
Architecture features:
- Hardware-based security: TEEs (AMD SEV-SNP, Intel TDX, IBM Secure Execution)
- Confidential Containers: Containerized AI workloads run inside TEEs.
- GPU Confidential Mode: Encrypts GPU memory and PCIe bus data.
- Attestation: Verifies enclave and GPU integrity before workload execution.
- Secure Key Management: HSM integration and BYOK/KYOK.
- Proxy-to-Inference Encryption: POD-to-POD traffic is transparently encrypted.
- Encrypted Models: Decrypted only within enclave-protected inference servers.
- OpenShift & Kubernetes Integration: Provides scalability and consistency.
- Attestation Services (Trustee): Enables verifiable trust chains.
All workloads are processed inside secure enclaves with no provider access, exclusively in Switzerland, ensuring sovereign-grade compliance and zero-access operations.
## Architecture & Data Flow
- Client encrypts request; sends via TLS to AI Gateway enclave.
- Inside enclave, request is decrypted and policy-checked; re-encrypted for the Model Serving enclave.
- Model executes inside enclave; output is re-encrypted and returned to client.
- Only minimal metadata leaves the enclaves; it is retained in accordance with the DPA.
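The steps above can be sketched end to end as follows. This is a minimal illustrative sketch: the toy XOR cipher stands in for real TLS/AEAD encryption, the policy check is hypothetical, and all function and key names are invented for illustration.

```python
import hashlib

def toy_encrypt(data: bytes, key: bytes) -> bytes:
    # Toy XOR cipher as a stand-in for TLS/AEAD encryption (illustrative only).
    ks = hashlib.sha256(key).digest() * (len(data) // 32 + 1)
    return bytes(a ^ b for a, b in zip(data, ks))

toy_decrypt = toy_encrypt  # XOR is its own inverse

def gateway_enclave(request_ct: bytes, client_key: bytes, model_key: bytes) -> bytes:
    # Step 2: inside the AI Gateway enclave the request is decrypted,
    # policy-checked, and re-encrypted for the Model Serving enclave.
    request = toy_decrypt(request_ct, client_key)
    assert b"forbidden" not in request  # hypothetical policy check
    return toy_encrypt(request, model_key)

def model_enclave(request_ct: bytes, model_key: bytes, client_key: bytes) -> bytes:
    # Step 3: the model executes inside its enclave; output is re-encrypted
    # for the client before it leaves.
    prompt = toy_decrypt(request_ct, model_key)
    output = b"answer:" + prompt  # stand-in for inference
    return toy_encrypt(output, client_key)

# Steps 1 and 4: the client encrypts its request and decrypts the response.
client_key, model_key = b"client-secret", b"model-secret"
ct = toy_encrypt(b"hello", client_key)
response = model_enclave(gateway_enclave(ct, client_key, model_key), model_key, client_key)
```

Note that plaintext exists only inside the two enclave functions; on every hop between trust boundaries the payload is ciphertext.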
## Attestation & Trust
- Remote attestation ensures only verified code runs before any decryption or secret provisioning.
- Secure Boot and sealed memory prevent tampering.
- If attestation fails, workloads do not start and keys are never released.
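The fail-closed behavior can be sketched as a key-broker check, assuming a simple hash-based measurement comparison. Real attestation (e.g., SEV-SNP reports verified via a Trustee key broker) validates signed hardware evidence rather than a bare hash; all names here are illustrative.

```python
import hashlib
import hmac

# Reference measurement of the approved workload, pinned in policy.
APPROVED_MEASUREMENT = hashlib.sha256(b"trusted-workload-image-v1").hexdigest()

def release_key(reported_measurement: str, key: bytes):
    # The key broker compares the runtime-reported measurement against the
    # pinned reference (constant-time comparison). On mismatch no key is
    # released, so the workload cannot decrypt anything and does not start.
    if hmac.compare_digest(reported_measurement, APPROVED_MEASUREMENT):
        return key
    return None
```

The important property is that the decision sits with the key broker, outside the workload: a tampered environment is not merely flagged, it is starved of the secrets it needs to run.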
## Key Management
- HSM-backed BYOK/KYOK: customers own and control keys.
- Multi-party approval/escrow: optional.
- Keys protect data in transit, at rest, and in use.
## Encryption
- In transit: TLS 1.3
- At rest: AES-256
- In use: Enclave/TEE memory encryption and runtime isolation
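As a concrete example of the in-transit control, a client can refuse anything below TLS 1.3. The sketch below uses Python's standard `ssl` module; endpoint names would be deployment-specific and are omitted.

```python
import ssl

# Client-side TLS context pinned to TLS 1.3, matching the "in transit"
# control above. create_default_context also enables certificate and
# hostname verification by default.
ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
ctx.minimum_version = ssl.TLSVersion.TLSv1_3  # refuse anything below TLS 1.3
```

A connection wrapped with this context will fail the handshake against any endpoint that cannot negotiate TLS 1.3, rather than silently downgrading.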
## Storage, Residency & Telemetry
- Prompts/outputs processed only inside enclaves (no persistent storage).
- Operational telemetry: minimal and pseudonymized; stored in Switzerland under ISO 27001.
- Bulk/batch modes (if enabled) use encrypted payloads with controlled retention.
## Access Controls & Admin Boundaries
- Enclave consoles gated by attestation, IAM, and RBAC.
- Zero-access operations: platform admins cannot see plaintext.
- Customer SOC teams retain audit rights.
## Locations
All processing and storage occur exclusively in Switzerland.
## Compliance & Documentation
- ISO 27001-aligned ISMS
- Swiss nFADP/GDPR support
- Sub-Annex for client-specific controls (e.g., retention, telemetry)
## Benefits of the Phoeniqs Approach
Phoeniqs provides technical assurance and sovereign guarantees beyond operational policies and limited TEEs offered by hyperscalers.
These commitments align with our published Data Processing Agreement (DPA) and Data Privacy Policy available on our website.