Policy Result Evaluation
Introduction to policy results
The policy evaluation outcome is a report in SARIF format, optionally encapsulated as evidence: an in-toto statement (the default) or an attestation. By default, this evidence is pushed to the attestation store and can be referenced by other policies.
In this context, the in-toto statement or attestation has the predicate type http://docs.oasis-open.org/sarif/sarif/2.1.0, the target type policy-results, and contains the SARIF report under the .predicate.content path.
The pure SARIF format consists solely of the SARIF output from the policy evaluation, designed for seamless integration with other tools.
In addition to the evidence output, the results are also presented in the log as a table, providing a quick overview of both failed and passed rules.
Creating attestations out of policy results
The results of policy evaluation are stored as evidence by default. To disable this behavior, use the --skip-report option.
The --format option (or -o for short) specifies the output format. Supported values include attest-sarif (or simply attest), statement-sarif (also referred to as statement), and sarif (plain SARIF JSON). The default value is statement.
Additionally, you can save a local copy of the uploaded statement using the --output-file /path/to/file option.
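As an illustration, the options above can be combined as follows (the busybox:latest target is only an example; the flags are those documented in this section):

```shell
# Evaluate a target, store the results as a signed SARIF attestation,
# and keep a local copy of the uploaded report.
valint verify busybox:latest -o attest --output-file ./policy-results.json

# Evaluate the same target without storing any evidence.
valint verify busybox:latest --skip-report
```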
Tuning policy results output
It's also possible to determine how policy results are included in the output. The supported options are:
- result-per-violation – each violation is pushed to SARIF as a separate result. This is the default behavior.
- result-per-rule – all rule violations are aggregated into one result per rule. This option can be enabled separately for each rule by specifying aggregate-results: true in the rule configuration.
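A minimal sketch of such a rule configuration (only the aggregate-results key is documented here; the surrounding structure and rule name are illustrative):

```yaml
rules:
  - uses: example/some-rule@v1   # hypothetical rule reference
    aggregate-results: true      # one SARIF result per rule, instead of one per violation
```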
Policy results in valint logs
After policy evaluation, the results are shown in the output log as a table. This table provides a quick overview of the evaluation results for each rule, as well as the overall control results. For an example of the output, see the Example section.
The results of the control verification are presented in a table format. The table consists of the following columns:
RULE ID: The unique identifier of the rule.
RULE NAME: The name of the rule.
LEVEL: The severity level of the rule. Only rules with the "error" level can fail the control.
VERIFIED: A boolean value indicating whether the evidence signature was verified. Verification failure causes the rule to fail only if the rule requires a signed attestation.
RESULT: The result of the rule verification. It can be "pass", "fail" or "open".
SUMMARY: The reason for the rule result.
TARGET: The target asset of the rule verification.
CONTROL RESULT: The result of the control verification. Every rule with the "error" level must pass for the control to pass. If at least one such rule fails, the control fails. If no such rule fails but at least one returns "open", the control is considered "open". If all rules pass, the control passes. Rules with the "warning" or "note" levels do not affect the control result.
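The control aggregation rules above can be sketched in Python (an illustration of the documented semantics, not valint's actual implementation):

```python
def control_result(rules):
    """Aggregate (level, result) pairs into a control result.

    Only rules at the "error" level can affect the control;
    "warning" and "note" rules are ignored for aggregation.
    """
    gating = [result for level, result in rules if level == "error"]
    if any(r == "fail" for r in gating):
        return "fail"
    if any(r == "open" for r in gating):
        return "open"
    return "pass"

# A failing warning-level rule does not fail the control:
print(control_result([("error", "pass"), ("warning", "fail")]))  # pass
# A single failing error-level rule fails it:
print(control_result([("error", "fail"), ("note", "pass")]))     # fail
```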
The results of the initiative verification are also presented in a table format. The table consists of the following columns:
CONTROL ID: The unique identifier of the control.
CONTROL NAME: The name of the control.
RULE LIST: A list of rules that were verified for the control. Each rule is mentioned as many times as it was verified. In parentheses, the rule's result is shown with consideration of the rule level: for example, if the rule failed but its level was set to warning, the result of the rule evaluation will also be warning.
RESULT: The result of the control verification. It can be "pass", "fail" or "open".
INITIATIVE RESULT: The result of the initiative verification. If at least one control fails, the initiative fails. If no control fails but at least one control returns "open", the initiative is considered "open". If all controls pass, the initiative passes.
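The initiative-level aggregation follows the same pattern and can be sketched as (again an illustration, not valint's implementation):

```python
def initiative_result(control_results):
    """Aggregate control results ("pass"/"fail"/"open") into an initiative result."""
    if "fail" in control_results:
        return "fail"
    if "open" in control_results:
        return "open"
    return "pass"

print(initiative_result(["pass", "open", "pass"]))  # open
```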
Example
To illustrate the process of creating attestations and evaluating policy results, consider the following example. In this case, we'll create a signed SBOM (Software Bill of Materials) evidence for the busybox image and then evaluate it against a policy named image-fresh.
Create SBOM Evidence:
valint bom busybox:latest -o attest
First, the valint bom command generates signed SBOM evidence for the busybox image; the -o attest option outputs it as a signed in-toto attestation.
Evaluate Initiative:
valint verify busybox:latest --initiative ssdf@v2
Next, the valint verify command evaluates the busybox image against the corresponding set of rules from the SSDF initiative.
After executing these commands, the results of the evaluation are displayed in the output log as a table, summarizing the evaluation for each rule:
INFO PS/PS.2/PS.2.1: Control "Make software integrity verification information available to software acquirers" Evaluation Summary:
┌──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│ [PS/PS.2/PS.2.1] Control "Make software integrity verification information available to software acquirers" Evaluati │
│ on Summary │
├────────────────┬──────────────────┬───────┬──────────┬────────┬─────────────────────────────┬────────────────────────┤
│ RULE ID │ RULE NAME │ LEVEL │ VERIFIED │ RESULT │ SUMMARY │ TARGET │
├────────────────┼──────────────────┼───────┼──────────┼────────┼─────────────────────────────┼────────────────────────┤
│ sbom-is-signed │ Image-verifiable │ none │ true │ pass │ Evidence signature verified │ busybox:1.36.1 (image) │