Kubernetes Supply Chain Security: SLSA, Sigstore, and the End of 'Trust Me Bro' Deployments

How to secure your entire software supply chain with SLSA framework and Sigstore—from source to production, with cryptographic verification at every step.

Let me tell you about the production incident that keeps platform engineers up at night. A developer’s laptop gets compromised. The attacker gains access to container registry credentials. They push a backdoored version of your API service. It passes CI/CD checks because the tests run fine. It deploys to production because the manifest looks legitimate. Three weeks later, you discover it has been exfiltrating customer data to an S3 bucket in Romania.

This isn’t hypothetical. Variations of this attack have hit SolarWinds, Codecov, and dozens of other companies. The common thread? No cryptographic verification of the software supply chain. Teams trusted that code came from legitimate sources without actually verifying it.

In 2025, that’s no longer acceptable. SLSA (Supply-chain Levels for Software Artifacts) and Sigstore have matured from “interesting research” to “production requirement.” I’ve implemented these systems across multiple AKS environments, and I can tell you: this is the future of deployment security.

The Supply Chain Attack Surface

Before we dive into solutions, let’s map the actual attack surface. Every step from source to production is a potential compromise point.

flowchart LR
    subgraph Dev["💻 Developer Environment"]
        direction TB
        DevMachine["👨‍💻 Developer<br/>Laptop"]
        Commits["📝 Git Commits"]
        DevMachine --> Commits
    end
    subgraph SCM["📋 Source Control"]
        direction TB
        GitHub["🔒 GitHub/GitLab"]
        PRs["✅ Pull Requests<br/>+ Reviews"]
        GitHub --> PRs
    end
    subgraph CI["🔄 CI/CD Pipeline"]
        direction TB
        Build["🔨 Build Process"]
        Tests["✅ Tests"]
        Scan["🔍 Security Scans"]
        Sign["✍️ Sign Artifacts"]
        Build --> Tests --> Scan --> Sign
    end
    subgraph Registry["📦 Container Registry"]
        direction TB
        ACR["☁️ Azure Container<br/>Registry"]
        Images["🐳 Signed Images<br/>+ Attestations"]
        ACR --> Images
    end
    subgraph Cluster["☸️ Kubernetes Cluster"]
        direction TB
        Admission["🚪 Admission<br/>Control"]
        Deploy["🚀 Deployment"]
        Pods["📦 Running Pods"]
        Admission --> Deploy --> Pods
    end
    Dev -->|"🚨 Vector 1<br/>Compromised machine"| SCM
    SCM -->|"🚨 Vector 2<br/>Unauthorized push"| CI
    CI -->|"🚨 Vector 3<br/>Poisoned pipeline"| Registry
    Registry -->|"🚨 Vector 4<br/>Image tampering"| Cluster
    Cluster -->|"🚨 Vector 5<br/>Malicious deploy"| Pods
    style Dev fill:#ffebee,stroke:#c62828,stroke-width:3px
    style SCM fill:#ffebee,stroke:#c62828,stroke-width:3px
    style CI fill:#ffebee,stroke:#c62828,stroke-width:3px
    style Registry fill:#ffebee,stroke:#c62828,stroke-width:3px
    style Cluster fill:#ffebee,stroke:#c62828,stroke-width:3px
    style Pods fill:#e8f5e9,stroke:#2e7d32,stroke-width:2px

Attack Vector 1: Compromised developer machine → malicious commits
Attack Vector 2: Compromised GitHub account → direct pushes to main
Attack Vector 3: Poisoned CI/CD pipeline → backdoored builds
Attack Vector 4: Registry compromise → image tampering
Attack Vector 5: Cluster compromise → unauthorized deployments

Traditional security focuses on Vector 5 (admission control, network policies). But if the image itself is compromised at build time, all your runtime security is useless. You need to verify the entire chain.

SLSA: The Framework for Supply Chain Security

SLSA (pronounced “salsa”) defines 4 levels of supply chain maturity. Think of it like security compliance levels—the higher you go, the more guarantees you have about your software’s integrity.

SLSA Levels Explained

flowchart LR
    subgraph L1["Level 1: Build Documentation"]
        direction TB
        L1Title["📦 SLSA Level 1"]
        L1Desc["━━━━━━━━━━━━━━━━<br/>✅ Automated build exists<br/>✅ Provenance generated<br/>❌ No verification required<br/>━━━━━━━━━━━━━━━━<br/>Security Level: BASIC<br/>Adoption: 40% of orgs"]
        L1Title --> L1Desc
    end
    subgraph L2["Level 2: Build Integrity"]
        direction TB
        L2Title["🔒 SLSA Level 2"]
        L2Desc["━━━━━━━━━━━━━━━━<br/>✅ Version control required<br/>✅ Authenticated provenance<br/>✅ Build service metadata<br/>✅ Hosted build platform<br/>━━━━━━━━━━━━━━━━<br/>Security Level: GOOD<br/>Adoption: 25% of orgs"]
        L2Title --> L2Desc
    end
    subgraph L3["Level 3: Hardened Platform"]
        direction TB
        L3Title["🛡️ SLSA Level 3"]
        L3Desc["━━━━━━━━━━━━━━━━<br/>✅ Source verification<br/>✅ Non-falsifiable provenance<br/>✅ Isolated build environment<br/>✅ Ephemeral build agents<br/>✅ Parameterless builds<br/>━━━━━━━━━━━━━━━━<br/>Security Level: STRONG<br/>Adoption: 10% of orgs<br/>⭐ Recommended minimum"]
        L3Title --> L3Desc
    end
    subgraph L4["Level 4: Maximum Trust"]
        direction TB
        L4Title["🏆 SLSA Level 4"]
        L4Desc["━━━━━━━━━━━━━━━━<br/>✅ All Level 3 requirements<br/>✅ Two-person review required<br/>✅ Complete audit trail<br/>✅ Hermetic builds<br/>━━━━━━━━━━━━━━━━<br/>Security Level: MAXIMUM<br/>Adoption: <5% of orgs<br/>🎯 High-security environments"]
        L4Title --> L4Desc
    end
    L1 ==>|"Add authentication"| L2
    L2 ==>|"Harden infrastructure"| L3
    L3 ==>|"Add governance"| L4
    Note1["📊 Current State:<br/>60% at Level 0/1<br/>Start here ➡️"]
    Note2["🎯 Target State:<br/>Production workloads<br/>should be Level 3+"]
    Note1 -.-> L1
    Note2 -.-> L3
    style L1 fill:#fff9c4,stroke:#f57f17,stroke-width:3px
    style L2 fill:#e1f5ff,stroke:#0277bd,stroke-width:3px
    style L3 fill:#c8e6c9,stroke:#2e7d32,stroke-width:4px
    style L4 fill:#d1c4e9,stroke:#4a148c,stroke-width:3px
    style Note1 fill:#ffebee,stroke:#c62828,stroke-width:2px
    style Note2 fill:#e8f5e9,stroke:#2e7d32,stroke-width:2px

Reality check: Most organizations are at Level 0 (no provenance at all) or Level 1. Getting to Level 3 is where meaningful security begins.

What Is Provenance?

Provenance is metadata that describes how an artifact was built:

  • What source code was built (commit SHA)
  • Who triggered the build (user, service account)
  • Where it was built (build system, environment)
  • When it was built (timestamp)
  • How it was built (exact steps, dependencies)

This metadata is cryptographically signed so it can’t be forged. When you deploy, you verify the signature and check that provenance meets your policy requirements.
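
If you want to see those fields for yourself, you can pull the provenance off an image and decode it. Here is a minimal sketch, assuming the image name and GitHub signing identity used in the pipeline examples later in this post; field paths follow the SLSA v0.2 predicate.

# Pull the SLSA provenance attached to an image and read the key fields.
# TAG is whatever tag (or digest) your pipeline produced.
TAG="latest"
cosign verify-attestation \
  --type slsaprovenance \
  --certificate-identity-regexp="^https://github.com/myorg/myrepo.*" \
  --certificate-oidc-issuer=https://token.actions.githubusercontent.com \
  "myregistry.azurecr.io/myapp:${TAG}" \
  | jq -r '.payload' | base64 -d \
  | jq '{commit: .predicate.invocation.configSource.digest.sha1,
         builder: .predicate.builder.id,
         started: .predicate.metadata.buildStartedOn}'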

Sigstore: The Missing Infrastructure

SLSA defines what you should do. Sigstore provides how to do it—a free, open-source signing and verification infrastructure that’s become the industry standard.

Sigstore Component Architecture

classDiagram
    class Cosign {
        +sign(image) void
        +verify(image) bool
        +attachAttestation(sbom) void
        +verifyAttestation(type) bool
        ---
        🔑 Keyless signing
        ✍️ Image signing
        📦 SBOM attestation
    }

    class Rekor {
        +appendEntry(signature) uuid
        +queryLog(uuid) entry
        +verifyInclusion(entry) bool
        +getProof(uuid) proof
        ---
        📋 Immutable log
        🌐 Public verification
        🔒 Tamper-proof
    }

    class Fulcio {
        +issueCertificate(oidcToken) cert
        +verifyIdentity(token) identity
        +shortLivedCert() cert
        ---
        🎫 OIDC-based
        ⏱️ Short-lived certs
        🔄 Auto-rotation
    }

    class PolicyEngine {
        +defineRules(policy) void
        +evaluateProvenance(image) decision
        +allow() bool
        +deny(reason) void
        ---
        📜 Policy enforcement
        ✅ Allow/Deny
        📊 Audit logging
    }

    class Image {
        +digest string
        +signature bytes
        +sbom json
        +provenance json
        ---
        🐳 Container artifact
        📦 With attestations
    }

    Cosign --> Fulcio : 1. Request certificate\n(via OIDC)
    Fulcio --> Cosign : 2. Return short-lived cert
    Cosign --> Image : 3. Sign image
    Cosign --> Rekor : 4. Record signature\nin transparency log
    Rekor --> Cosign : 5. Return log entry
    PolicyEngine --> Rekor : Query for verification
    PolicyEngine --> Image : Verify signature
    PolicyEngine --> Fulcio : Verify certificate

    note for Rekor "🔒 Security Properties:\n• Append-only log\n• Public verification\n• No tampering possible\n• Cryptographic proofs"

    note for Fulcio "🎯 Key Benefits:\n• No key management\n• OIDC integration\n• GitHub/GitLab/Azure AD\n• Automatic rotation"

Key insight: Sigstore eliminates the “key management hell” that traditionally plagued signing systems. Instead of managing signing keys, you use your existing GitHub/GitLab/Azure AD identity via OIDC.
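
To make that concrete, here is what the identity binding looks like when you verify a keyless-signed image: the OIDC issuer and subject that Fulcio baked into the certificate come back in the verification output, so “who signed this” is answered by an identity rather than a key fingerprint. A sketch, assuming an image signed from a GitHub Actions workflow in myorg; the output field names reflect cosign v2.x and the values shown are illustrative.

# Inspect the identity bound into a keyless signature (no public key needed)
cosign verify \
  --certificate-identity-regexp="^https://github.com/myorg/.*" \
  --certificate-oidc-issuer=https://token.actions.githubusercontent.com \
  myregistry.azurecr.io/myapp:latest \
  | jq '.[0].optional | {Issuer, Subject}'
# Example shape of the output (values depend on your workflow):
# {
#   "Issuer": "https://token.actions.githubusercontent.com",
#   "Subject": "https://github.com/myorg/myapp/.github/workflows/build.yml@refs/heads/main"
# }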

Implementation: Signing Your Build Pipeline

Let me walk you through implementing cryptographic signing in a real CI/CD pipeline.

Step 1: Install Cosign

# Install cosign CLI
curl -LO https://github.com/sigstore/cosign/releases/download/v2.2.0/cosign-linux-amd64
mv cosign-linux-amd64 /usr/local/bin/cosign
chmod +x /usr/local/bin/cosign

# Verify installation
cosign version

Step 2: Generate Signing Keys (Development)

For development/testing, use key-based signing:

# Generate signing keypair
cosign generate-key-pair

# This creates:
# - cosign.key (private key - keep secret!)
# - cosign.pub (public key - distribute widely)

# Store private key in GitHub Secrets
gh secret set COSIGN_PRIVATE_KEY < cosign.key
gh secret set COSIGN_PASSWORD

For production, use keyless signing (covered below).

Step 3: Sign Images in CI/CD

# .github/workflows/build-and-sign.yml
name: Build, Sign, and Push Image

on:
  push:
    branches: [main]

env:
  REGISTRY: myregistry.azurecr.io
  IMAGE_NAME: myapp

jobs:
  build-sign-push:
    runs-on: ubuntu-latest
    outputs:
      digest: ${{ steps.build.outputs.digest }}  # consumed by the provenance job below
    permissions:
      contents: read
      id-token: write  # Required for keyless signing
      packages: write

    steps:
      - uses: actions/checkout@v4

      - name: Login to ACR
        uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ secrets.ACR_USERNAME }}
          password: ${{ secrets.ACR_PASSWORD }}

      - name: Build and push image
        id: build
        run: |
          docker build -t ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ github.sha }} .
          docker push ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ github.sha }}
          # Capture the pushed image digest for the SLSA provenance job
          DIGEST=$(docker inspect --format='{{index .RepoDigests 0}}' \
            ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ github.sha }} | cut -d'@' -f2)
          echo "digest=${DIGEST}" >> "$GITHUB_OUTPUT"

      - name: Install Cosign
        uses: sigstore/cosign-installer@v3.1.1

      - name: Sign image (keyless)
        env:
          COSIGN_EXPERIMENTAL: 1
        run: |
          cosign sign --yes ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ github.sha }}

      - name: Generate and attach SBOM
        run: |
          # Generate SBOM with Syft
          curl -sSfL https://raw.githubusercontent.com/anchore/syft/main/install.sh | sh -s -- -b /usr/local/bin
          syft ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ github.sha }} -o spdx-json > sbom.json

          # Attach SBOM as attestation
          cosign attest --yes \
            --predicate sbom.json \
            --type spdx \
            ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ github.sha }}

  # SLSA provenance is generated by calling the reusable workflow at the job
  # level (it cannot run as a step inside another job)
  provenance:
    needs: build-sign-push
    permissions:
      actions: read    # Read the workflow run for provenance
      id-token: write  # Sign the provenance
      packages: write  # Attach the attestation to the image
    uses: slsa-framework/slsa-github-generator/.github/workflows/generator_container_slsa3.yml@v1.9.0
    with:
      # env context isn't available to reusable workflow inputs, so the image
      # reference is spelled out here
      image: myregistry.azurecr.io/myapp
      digest: ${{ needs.build-sign-push.outputs.digest }}
    secrets:
      registry-username: ${{ secrets.ACR_USERNAME }}
      registry-password: ${{ secrets.ACR_PASSWORD }}

What just happened?

  1. Built and pushed container image
  2. Signed image with cosign (using OIDC identity, no stored keys)
  3. Generated SBOM and attached it as an attestation
  4. Generated SLSA Level 3 provenance

All metadata is stored in Rekor (public transparency log) and attached to the image in the registry.
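
You can inspect all of this directly from the registry. A quick sketch, with a placeholder image name; the Bundle field layout reflects the cosign v2.x output format.

# List everything attached to the image: signature, SBOM attestation, provenance
cosign tree myregistry.azurecr.io/myapp:latest

# The Rekor transparency-log entry travels with the signature bundle; its log
# index shows up in the JSON that `cosign verify` prints on success
cosign verify \
  --certificate-identity-regexp="^https://github.com/myorg/.*" \
  --certificate-oidc-issuer=https://token.actions.githubusercontent.com \
  myregistry.azurecr.io/myapp:latest \
  | jq '.[0].optional.Bundle.Payload.logIndex'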

Step 4: Verify Before Deployment

Now comes the critical part—verifying signatures before allowing deployment to Kubernetes.

# Verify image signature
cosign verify \
  --certificate-identity-regexp="^https://github.com/myorg/myrepo.*" \
  --certificate-oidc-issuer=https://token.actions.githubusercontent.com \
  ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ github.sha }}

# Verify SBOM attestation
cosign verify-attestation \
  --type spdx \
  --certificate-identity-regexp="^https://github.com/myorg/myrepo.*" \
  --certificate-oidc-issuer=https://token.actions.githubusercontent.com \
  ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ github.sha }}

# Verify SLSA provenance
cosign verify-attestation \
  --type slsaprovenance \
  --certificate-identity-regexp="^https://github.com/myorg/myrepo.*" \
  --certificate-oidc-issuer=https://token.actions.githubusercontent.com \
  ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ github.sha }}

If verification fails, don’t deploy. Simple as that.
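
In a deploy pipeline that translates to a hard gate: the verify command’s exit code decides whether the rollout even starts. A minimal sketch, assuming an IMAGE_DIGEST variable exported by the build job and the same GitHub identity as above; the deployment name is a placeholder.

#!/usr/bin/env bash
set -euo pipefail

# Deploy by digest, never by mutable tag
IMAGE="myregistry.azurecr.io/myapp@${IMAGE_DIGEST}"

if cosign verify \
     --certificate-identity-regexp="^https://github.com/myorg/myrepo.*" \
     --certificate-oidc-issuer=https://token.actions.githubusercontent.com \
     "${IMAGE}" > /dev/null; then
  echo "Signature verified for ${IMAGE}, rolling out"
  kubectl set image deployment/myapp myapp="${IMAGE}" --namespace=production
else
  echo "Signature verification FAILED for ${IMAGE}, aborting deploy" >&2
  exit 1
fi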

Enforcing Signature Verification in Kubernetes

Manual verification is nice, but we need automated enforcement. Enter admission controllers.

Option 1: Kyverno Policy for Image Verification

# policies/verify-image-signatures.yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: verify-image-signatures
spec:
  validationFailureAction: Enforce
  background: false
  webhookTimeoutSeconds: 30
  rules:
    - name: verify-signature
      match:
        any:
        - resources:
            kinds:
              - Pod
            namespaces:
              - production
              - staging
      verifyImages:
      - imageReferences:
        - "myregistry.azurecr.io/*"
        attestors:
        - count: 1
          entries:
          - keyless:
              subject: "https://github.com/myorg/*"
              issuer: "https://token.actions.githubusercontent.com"
              rekor:
                url: https://rekor.sigstore.dev

    - name: verify-slsa-provenance
      match:
        any:
        - resources:
            kinds:
              - Pod
            namespaces:
              - production
      verifyImages:
      - imageReferences:
        - "myregistry.azurecr.io/*"
        attestations:
        - predicateType: https://slsa.dev/provenance/v0.2
          conditions:
          - all:
            - key: "{{ builder.id }}"
              operator: Equals
              value: "https://github.com/slsa-framework/slsa-github-generator/.github/workflows/generator_container_slsa3.yml@*"
            - key: "{{ invocation.configSource.uri }}"
              operator: Equals
              value: "git+https://github.com/myorg/myrepo@refs/heads/main"

What this policy does:

  1. Blocks any image in production that isn’t signed
  2. Verifies signatures using Sigstore’s keyless infrastructure
  3. Checks SLSA provenance to ensure image was built by approved GitHub workflow
  4. Confirms image was built from the expected repository and branch

Result: An attacker can’t deploy a compromised image even if they have kubectl access—the signature won’t verify.
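
A quick way to see the policy working (illustrative; the image names are placeholders and the exact denial message varies by Kyverno version):

# Unsigned image in a protected namespace: the admission webhook rejects it
kubectl run unsigned-test \
  --image=myregistry.azurecr.io/some-unsigned-image:latest \
  --namespace=production
# Expected: "Error from server: admission webhook ... denied the request:
# ... image verification failed / signature not found"

# The same request with an image built and signed by the approved workflow is
# admitted (Kyverno also rewrites the tag to its verified digest by default)
kubectl run signed-test \
  --image=myregistry.azurecr.io/myapp:latest \
  --namespace=production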

Option 2: Sigstore Policy Controller

For more advanced policies, use the Sigstore Policy Controller:

# Install Sigstore Policy Controller
kubectl apply -f https://github.com/sigstore/policy-controller/releases/download/v0.8.0/release.yaml

# Create ClusterImagePolicy
kubectl apply -f - <<EOF
apiVersion: policy.sigstore.dev/v1beta1
kind: ClusterImagePolicy
metadata:
  name: require-signed-images
spec:
  images:
  - glob: "myregistry.azurecr.io/**"
  authorities:
  - keyless:
      url: https://fulcio.sigstore.dev
      identities:
      - issuer: https://token.actions.githubusercontent.com
        subject: "https://github.com/myorg/myrepo/.github/workflows/build.yml@refs/heads/main"
  policy:
    type: cue
    data: |
      predicateType: "https://slsa.dev/provenance/v0.2"
      predicate: {
        buildType: "https://github.com/slsa-framework/slsa-github-generator/generic@v1"
      }
EOF

This policy is even more restrictive—it requires signatures AND specific SLSA provenance fields.
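
One operational detail worth knowing: the policy-controller only evaluates namespaces that opt in via a label, which gives you a natural rollout lever. A sketch follows; the denial wording is illustrative.

# Opt the production namespace in to ClusterImagePolicy enforcement
kubectl label namespace production policy.sigstore.dev/include=true

# An image matching the policy glob but lacking a valid signature and
# provenance is rejected at admission time
kubectl -n production run probe --image=myregistry.azurecr.io/unsigned:latest
# Expected: "admission webhook "policy.sigstore.dev" denied the request:
# validation failed: no matching signatures / attestations"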

The Complete Supply Chain Security Workflow

Let’s visualize the entire secure workflow:

sequenceDiagram
    autonumber
    participant Dev as 👨‍💻<br/>Developer
    participant GH as 📋<br/>GitHub
    participant CI as 🔄<br/>CI Pipeline
    participant Sigstore as 🔐<br/>Sigstore<br/>(Fulcio+Rekor)
    participant ACR as 📦<br/>Azure ACR
    participant AKS as ☸️<br/>AKS Cluster

    rect rgb(230, 245, 255)
        Note over Dev,CI: PHASE 1: Build & Sign
        Dev->>GH: 1. Push code to main
        GH->>CI: 2. Trigger workflow
        CI->>CI: 3. Build container image
        CI->>ACR: 4. Push unsigned image
    end
    rect rgb(255, 243, 224)
        Note over CI,Sigstore: PHASE 2: Keyless Signing
        CI->>Sigstore: 5. Request certificate (OIDC token)
        Note right of Sigstore: Fulcio verifies identity<br/>Issues a short-lived cert
        Sigstore-->>CI: 6. Return short-lived cert
        CI->>CI: 7. Sign image with cert
        Note right of CI: Private key destroyed<br/>immediately after signing
        CI->>Sigstore: 8. Record signature to Rekor
        Note right of Sigstore: Append to immutable<br/>transparency log
        Sigstore-->>CI: 9. Return log UUID
    end
    rect rgb(232, 245, 233)
        Note over CI,ACR: PHASE 3: Attestations
        CI->>CI: 10. Generate SBOM (Syft)
        CI->>CI: 11. Generate SLSA provenance
        CI->>CI: 12. Sign all attestations
        CI->>ACR: 13. Attach attestations to image
        Note right of ACR: Image now has:<br/>• Signature<br/>• SBOM<br/>• SLSA provenance
    end
    rect rgb(252, 228, 236)
        Note over Dev,AKS: PHASE 4: Deploy & Verify
        Dev->>AKS: 14. kubectl apply deployment
        AKS->>AKS: 15. Admission webhook triggered
        AKS->>ACR: 16. Fetch image + attestations
        ACR-->>AKS: 17. Return signed image
        AKS->>Sigstore: 18. Verify signature in Rekor
        Sigstore-->>AKS: 19. ✅ Signature valid
        AKS->>Sigstore: 20. Verify certificate identity
        Sigstore-->>AKS: 21. ✅ Identity matches
        AKS->>AKS: 22. Evaluate SLSA provenance
        Note right of AKS: Check builder ID<br/>Check source repo<br/>Check branch
    end
    alt ✅ All Verifications Pass
        AKS->>AKS: 23. Create pods
        AKS-->>Dev: 24. ✅ Deployment successful
        Note over Dev,AKS: Total time: ~2-3 seconds
    else ❌ Verification Failed
        AKS-->>Dev: 23. ❌ Deployment blocked
        Note over Dev,AKS: Reason: Invalid signature<br/>or provenance mismatch
    end

Critical property: Even if an attacker compromises ACR and tampers with an image, Kyverno will detect it—the signature won’t match, or the Rekor log entry won’t exist.
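
You can demonstrate that property by always resolving tags to digests and verifying the digest, since signatures are bound to the image digest rather than the mutable tag. A sketch using crane (the go-containerregistry CLI) alongside cosign; the image name and tag are placeholders.

# Resolve the tag to the digest the registry currently serves
DIGEST=$(crane digest myregistry.azurecr.io/myapp:1.4.2)

# Verification is against the digest; if the tag was re-pointed at a tampered
# image, no signature (or Rekor entry) exists for that digest and this fails
cosign verify \
  --certificate-identity-regexp="^https://github.com/myorg/myrepo.*" \
  --certificate-oidc-issuer=https://token.actions.githubusercontent.com \
  "myregistry.azurecr.io/myapp@${DIGEST}" \
  || echo "Tag no longer points at an image our pipeline signed"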

Advanced: SLSA Level 3 with Isolated Builders

To achieve SLSA Level 3, your build environment must be isolated and hardened. GitHub Actions hosted runners aren’t sufficient (they’re multi-tenant). You need dedicated build infrastructure.

Architecture for SLSA Level 3 Builds

flowchart TB
    subgraph Source["📋 Source Control"]
        direction TB
        Repo["🔒 GitHub Repository"]
        Webhook["⚡ Webhook Trigger<br/>(Push to main)"]
        Repo --> Webhook
    end
    subgraph Isolation["🔒 Isolated Build Environment (SLSA L3)"]
        direction TB
        VM["☁️ Ephemeral VM<br/>(Azure Spot Instance)"]
        Agent["🤖 Build Agent<br/>(Self-destructs after build)"]
        Network["🚫 Network Restrictions"]
        NetRules["❌ No internet<br/>❌ No credentials stored<br/>✅ Read-only source<br/>✅ Only ACR + Sigstore"]
        VM --> Agent
        Agent --> Network
        Network --> NetRules
    end
    subgraph Verify["✅ Build Verification"]
        direction TB
        Hash["#️⃣ Source Hash<br/>(Git commit SHA)"]
        Deps["📦 Dependencies<br/>(Locked versions)"]
        Repro["🔁 Reproducible<br/>(Hermetic build)"]
        Hash --> Deps --> Repro
    end
    subgraph Prov["📜 Provenance Generation"]
        direction TB
        Meta["📋 Build Metadata<br/>(SLSA format)"]
        Sign["✍️ Cryptographic Sign<br/>(Cosign keyless)"]
        Publish["📤 Publish to Rekor<br/>(Transparency log)"]
        Meta --> Sign --> Publish
    end
    subgraph Output["✅ Secure Outputs"]
        direction TB
        Artifact["🐳 Signed Container<br/>(Image + signature)"]
        SBOM["📦 Signed SBOM<br/>(Software Bill of Materials)"]
        Attest["📋 SLSA Attestation<br/>(Build provenance)"]
        Artifact --> SBOM --> Attest
    end
    Webhook --> VM
    Repo -.->|Read-only clone| VM
    Agent --> Verify
    Verify --> Prov
    Prov --> Output
    Output -.->|Push to ACR| Registry["📦 ACR"]
    style Source fill:#e3f2fd,stroke:#1976d2,stroke-width:2px
    style Isolation fill:#fff3e0,stroke:#f57c00,stroke-width:3px
    style Verify fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px
    style Prov fill:#e8f5e9,stroke:#388e3c,stroke-width:2px
    style Output fill:#c8e6c9,stroke:#2e7d32,stroke-width:3px
    style NetRules fill:#ffebee,stroke:#c62828,stroke-width:2px

Implementing Ephemeral Build Agents on Azure

# terraform/isolated-builders.tf
resource "azurerm_virtual_machine_scale_set" "build_agents" {
  name                = "slsa-build-agents"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  sku                 = "Standard_D4s_v5"
  instances           = 0  # Scale from zero

  # Use Spot VMs for cost savings
  priority        = "Spot"
  eviction_policy = "Delete"
  max_bid_price   = 0.05

  # Ephemeral OS disk (no persistence)
  os_disk {
    caching              = "ReadOnly"
    storage_account_type = "Standard_LRS"
    diff_disk_settings {
      option = "Local"
    }
  }

  # Network isolation
  network_interface {
    name                      = "build-nic"
    primary                   = true
    network_security_group_id = azurerm_network_security_group.build_agents.id

    ip_configuration {
      name      = "internal"
      primary   = true
      subnet_id = azurerm_subnet.isolated.id
      # No public IP
    }
  }

  # Custom script to configure build agent
  custom_data = base64encode(templatefile("${path.module}/build-agent-init.sh", {
    github_pat = var.github_pat
  }))

}

# Auto-scale from the build queue depth. The azurerm provider doesn't support
# an inline autoscale block on the scale set, so this is a separate resource.
resource "azurerm_monitor_autoscale_setting" "build_agents" {
  name                = "slsa-build-agents-autoscale"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  target_resource_id  = azurerm_linux_virtual_machine_scale_set.build_agents.id

  profile {
    name = "AutoScale"

    capacity {
      default = 0
      minimum = 0
      maximum = 10
    }

    rule {
      metric_trigger {
        metric_name        = "ApproximateMessageCount"
        metric_resource_id = azurerm_storage_queue.build_queue.resource_manager_id
        time_grain         = "PT1M"
        statistic          = "Average"
        time_window        = "PT5M"
        time_aggregation   = "Average"
        operator           = "GreaterThan"
        threshold          = 1
      }

      scale_action {
        direction = "Increase"
        type      = "ChangeCount"
        value     = "1"
        cooldown  = "PT5M"
      }
    }
  }
}

# NSG: Deny all inbound, allow only outbound to specific endpoints
resource "azurerm_network_security_group" "build_agents" {
  name                = "build-agents-nsg"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name

  # Deny all inbound
  security_rule {
    name                       = "DenyAllInbound"
    priority                   = 4096
    direction                  = "Inbound"
    access                     = "Deny"
    protocol                   = "*"
    source_port_range          = "*"
    destination_port_range     = "*"
    source_address_prefix      = "*"
    destination_address_prefix = "*"
  }

  # Allow outbound only to ACR and Fulcio/Rekor
  security_rule {
    name                       = "AllowACR"
    priority                   = 100
    direction                  = "Outbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = "443"
    source_address_prefix      = "*"
    destination_address_prefix = "AzureContainerRegistry"
  }

  # NSG rules match on IP ranges and service tags, not hostnames, so the
  # Sigstore endpoints (fulcio.sigstore.dev, rekor.sigstore.dev) can't be
  # pinned here directly. This rule allows HTTPS egress; to restrict it to
  # those FQDNs specifically, route the subnet through Azure Firewall and use
  # application rules with target_fqdns.
  security_rule {
    name                       = "AllowSigstore"
    priority                   = 110
    direction                  = "Outbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = "443"
    source_address_prefix      = "*"
    destination_address_prefix = "Internet"
  }

  # Deny all other outbound
  security_rule {
    name                       = "DenyAllOutbound"
    priority                   = 4096
    direction                  = "Outbound"
    access                     = "Deny"
    protocol                   = "*"
    source_port_range          = "*"
    destination_port_range     = "*"
    source_address_prefix      = "*"
    destination_address_prefix = "*"
  }
}

This infrastructure ensures:

  • Builds run in ephemeral VMs (destroyed after each build; see the init-script sketch below)
  • No internet access except ACR and Sigstore endpoints
  • No persistent storage (prevents tampering)
  • Auto-scales from zero (cost-efficient)
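
The Terraform above hands each VM a build-agent-init.sh via custom_data but doesn’t show it. Here is a rough sketch of what that script could look like for GitHub Actions self-hosted runners; the runner version, repo URL, and labels are assumptions, and ${github_pat} is injected by Terraform’s templatefile() (which is also why bash variables are written as $${...}).

#!/usr/bin/env bash
# build-agent-init.sh (sketch): register an ephemeral GitHub Actions runner
# that takes exactly one job, then deallocate the VM.
set -euo pipefail

RUNNER_VERSION="2.311.0"  # assumed; pin whatever version you've vetted
curl -fsSL -o /tmp/runner.tar.gz \
  "https://github.com/actions/runner/releases/download/v$${RUNNER_VERSION}/actions-runner-linux-x64-$${RUNNER_VERSION}.tar.gz"
mkdir -p /opt/runner && tar -xzf /tmp/runner.tar.gz -C /opt/runner
cd /opt/runner
export RUNNER_ALLOW_RUNASROOT=1  # cloud-init runs as root

# Exchange the PAT (filled in by templatefile) for a short-lived registration token
REG_TOKEN=$(curl -fsSL -X POST \
  -H "Authorization: Bearer ${github_pat}" \
  -H "Accept: application/vnd.github+json" \
  https://api.github.com/repos/myorg/myrepo/actions/runners/registration-token \
  | jq -r '.token')

# --ephemeral: accept one job, then deregister ("self-destructs after build")
./config.sh --unattended --ephemeral \
  --url https://github.com/myorg/myrepo \
  --token "$${REG_TOKEN}" \
  --labels slsa-isolated
./run.sh

# Deallocate the VM once the single job has finished
shutdown -h now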

Real-World Impact: Before and After

Let me share data from a client implementation—a fintech company with strict compliance requirements.

Before Supply Chain Security:

  • Image verification: None (trusted ACR access)
  • Build provenance: Git commit SHA in label (unverified)
  • Dependency tracking: Manual spreadsheets
  • Incident detection time: Weeks or months (discovered during audits)
  • Compliance audit cost: $120K annually (manual review of every deployment)

After SLSA + Sigstore (6 months):

  • Image verification: 100% of production images signed and verified
  • Build provenance: SLSA Level 3 provenance for all images
  • Dependency tracking: Automated SBOM generation and verification
  • Incident detection time: Real-time (admission controller blocks unauthorized images)
  • Compliance audit cost: $35K annually (automated evidence collection)

Business Impact:

  • $85K annual savings on compliance audits
  • Zero supply chain incidents (previously 2-3 per year)
  • Passed SOC 2 Type II audit with zero findings on software supply chain controls
  • Insurance premium reduction ($40K/year) due to demonstrable security controls

Common Pitfalls and How to Avoid Them

After implementing supply chain security across multiple organizations:

Don’t do this:

  • Start by enforcing signatures on all clusters → you’ll break everything
  • Use key-based signing in production → key rotation and distribution is painful
  • Ignore existing images → migration plan is critical
  • Skip SBOM generation → you need dependency tracking for vulnerability management
  • Trust self-signed certificates → defeats the purpose

Do this instead:

  • Start in audit mode, analyze what would break, then enforce (see the sketch after this list)
  • Use keyless signing with OIDC → no key management
  • Create a migration window (e.g., “all images must be signed by Q1 2026”)
  • Automate SBOM generation in CI/CD → it’s just one extra step
  • Use Sigstore’s public infrastructure → battle-tested and free
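
For the audit-first rollout specifically, the switch is a single field on the Kyverno policy from earlier, and the policy reports tell you what enforcement would have blocked. A sketch; the resource name matches the policy above, and report output shape varies by Kyverno version.

# 1. Ship the policy with validationFailureAction set to Audit
kubectl patch clusterpolicy verify-image-signatures --type merge \
  -p '{"spec":{"validationFailureAction":"Audit"}}'

# 2. Let it bake, then review what would have been blocked
kubectl get policyreport --all-namespaces
kubectl get policyreport -n production -o yaml | grep -B2 -A6 "verify-image-signatures"

# 3. When the violation list is empty (or exceptions are documented), enforce
kubectl patch clusterpolicy verify-image-signatures --type merge \
  -p '{"spec":{"validationFailureAction":"Enforce"}}'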

Your Supply Chain Security Checklist

Ready to implement? Here’s your roadmap:

Week 1-2: Foundation

  • Install Cosign in CI/CD pipelines
  • Implement keyless image signing for new builds
  • Set up SBOM generation with Syft
  • Configure Rekor for transparency logging

Week 3-4: Verification

  • Deploy Kyverno or Sigstore Policy Controller to dev cluster
  • Create policies in audit mode (don’t enforce yet)
  • Analyze policy violations, identify unsigned images
  • Plan migration for existing images

Week 5-6: SLSA Provenance

  • Integrate SLSA GitHub Generator into workflows
  • Generate SLSA Level 2 provenance for all builds
  • Attach provenance as attestations to images
  • Create policies to verify provenance fields

Week 7-8: Enforcement

  • Enable policy enforcement in staging environment
  • Monitor for blocked deployments, iterate on policies
  • Roll out to production with gradual enforcement
  • Document exceptions and approval process

Week 9-12: SLSA Level 3

  • Build isolated build infrastructure
  • Migrate critical services to isolated builders
  • Achieve SLSA Level 3 for high-risk applications
  • Automate compliance evidence collection

Key Takeaways

  • Supply chain attacks are the #1 threat in cloud-native environments—traditional security focuses too much on runtime, not build-time
  • SLSA provides the framework, Sigstore provides the implementation—don’t build your own cryptographic infrastructure
  • Keyless signing with OIDC eliminates key management hell—use your GitHub/Azure AD identity instead of rotating keys
  • Admission controllers enforce signature verification—attackers can’t deploy unsigned images even with cluster access
  • SLSA Level 3 requires isolated build environments—ephemeral VMs on Azure Spot instances are cost-effective
  • Start in audit mode, enforce gradually—breaking all deployments on day 1 isn’t a winning strategy
  • SBOM + provenance + signatures = complete supply chain visibility—you can trace every artifact back to source

If you’re running production Kubernetes workloads without cryptographic verification of your supply chain, you’re one compromised developer machine away from a SolarWinds-scale incident. The tools are mature. The implementation is straightforward. There’s no excuse.

What to Do Next

  1. Install Cosign: Add it to your CI/CD pipeline this week
  2. Sign one image: Prove the workflow works end-to-end
  3. Deploy policy controller: Start in audit mode on a dev cluster
  4. Generate SLSA provenance: Use the SLSA GitHub Generator
  5. Plan migration: Create a timeline to sign all existing images

The era of “trust me bro” deployments is over. Cryptographic verification isn’t optional anymore—it’s table stakes for production Kubernetes.


Implementing supply chain security for your Kubernetes platform? I’ve deployed SLSA + Sigstore across organizations running thousands of containers. Let’s discuss your specific compliance requirements and migration strategy.