feat(core): scoring data customizable added with config file and CLI parameter
    scoring threshold not reached returns exit code 1, otherwise returns 0
    scoring data tests and readme documentation added
PagoNxt-Trade committed Dec 2, 2022
1 parent 810f591 commit aa3cbc2
Showing 30 changed files with 1,191 additions and 20 deletions.
87 changes: 87 additions & 0 deletions docs/guides/2-cli.md
@@ -39,6 +39,7 @@ Other options include:
--stdin-filepath path to a file to pretend that stdin comes from [string]
--resolver path to custom json-ref-resolver instance [string]
-r, --ruleset path/URL to a ruleset file [string]
-s, --scoring-config path/URL to a scoring config file [string]
-F, --fail-severity results of this level or above will trigger a failure exit code
[string] [choices: "error", "warn", "info", "hint"] [default: "error"]
-D, --display-only-failures only output results equal to or greater than --fail-severity [boolean] [default: false]
@@ -60,6 +61,92 @@ Here you can build a [custom ruleset](../getting-started/3-rulesets.md), or exte
- [OpenAPI ruleset](../reference/openapi-rules.md)
- [AsyncAPI ruleset](../reference/asyncapi-rules.md)

## Scoring the API

Scoring an API definition is a way to understand, at a high level, how compliant the API definition is with the rulesets provided. This helps teams understand the quality of their APIs at the definition level.

The scoring is reported as two metrics:

- A numeric score, obtained by subtracting a percentage from 100% for the errors and warnings found.
- A letter, which groups numeric scores into grades from A (best) downwards.

It also introduces a quality gate: an API scoring below the configured threshold will fail in a pipeline.

Scoring is enabled with the new `--scoring-config` (or `-s`) parameter, which takes the path or URL of a scoring configuration file where you define how errors and warnings affect the score.

Usage:

```bash
spectral lint ./reference/**/*.oas*.{json,yml,yaml} --ruleset mycustomruleset.js --scoring-config ./scoringFile.json
```

or

```bash
spectral lint ./reference/**/*.oas*.{json,yml,yaml} -r mycustomruleset.js -s ./scoringFile.json
```

Here's an example of the scoring config file:

```json
{
  "scoringSubtract": {
    "error": {
      "1": 55,
      "2": 65,
      "3": 75,
      "6": 85,
      "10": 95
    },
    "warn": {
      "1": 3,
      "2": 7,
      "3": 10,
      "6": 15,
      "10": 18
    }
  },
  "scoringLetter": {
    "A": 75,
    "B": 65,
    "C": 55,
    "D": 45,
    "E": 0
  },
  "threshold": 50,
  "warningsSubtract": true,
  "uniqueErrors": false
}
```

Where:

- `scoringSubtract`: an object with an entry for each result level (for example `error`, `warn`) that should reduce the score; each entry maps a number of results to the percentage subtracted once that count is reached.
- `scoringLetter`: an object mapping each scoring letter to the minimum percentage the score must reach to be awarded that letter.
- `threshold`: the minimum percentage the checked file must score to be considered valid.
- `warningsSubtract`: a boolean that controls whether all result levels accumulate to reduce the score, or counting stops at the most critical result level.
- `uniqueErrors`: a boolean that controls whether only unique errors are counted, or all of them.

A rough TypeScript sketch of this shape is shown below.
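
The interface name `ScoringConfigSketch` below is made up for illustration; the actual `ScoringConfig` type used by the CLI may differ:

```typescript
// Hypothetical sketch of the scoring config shape described above.
interface ScoringConfigSketch {
  // result level -> (number of results -> percentage to subtract)
  scoringSubtract: {
    error: Record<number, number>;
    warn: Record<number, number>;
  };
  // scoring letter -> minimum percentage required for that letter
  scoringLetter: Record<string, number>;
  // minimum percentage for the linted document to pass
  threshold: number;
  // whether warnings also reduce the score
  warningsSubtract: boolean;
  // whether only unique errors are counted
  uniqueErrors: boolean;
}
```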

Example:

With the previous scoring config file, if we have:

- 1 error, the scoring is 45% and D
- 2 errors, the scoring is 35% and E
- 3 errors, the scoring is 25% and E
- 4 errors, the scoring is 25% and E
- and so on (the sketch below shows how these numbers are derived)
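
This sketch is an illustration only, not the actual Spectral implementation; the helper names `subtractFor` and `computeScore` are hypothetical. It assumes the subtraction for a level comes from the largest configured result count that is reached, and the letter is the best grade whose minimum percentage the score still meets, which reproduces the numbers above:

```typescript
// Illustrative sketch only - not the actual Spectral code.
type SubtractTable = Record<string, number>; // e.g. { "1": 55, "2": 65, "3": 75, "6": 85, "10": 95 }

// Subtraction for the largest configured count that the measured count reaches.
function subtractFor(count: number, table: SubtractTable): number {
  const reached = Object.keys(table)
    .map(Number)
    .filter(bucket => count >= bucket);
  return reached.length > 0 ? table[String(Math.max(...reached))] : 0;
}

function computeScore(errorCount: number, warnCount: number): { score: number; letter: string } {
  // Values taken from the example scoring config above.
  const errorTable: SubtractTable = { '1': 55, '2': 65, '3': 75, '6': 85, '10': 95 };
  const warnTable: SubtractTable = { '1': 3, '2': 7, '3': 10, '6': 15, '10': 18 };
  const letters: Record<string, number> = { A: 75, B: 65, C: 55, D: 45, E: 0 };
  const warningsSubtract = true;

  let score = 100 - subtractFor(errorCount, errorTable);
  if (warningsSubtract) {
    score -= subtractFor(warnCount, warnTable);
  }
  score = Math.max(score, 0);

  // The 1-error example (45% -> D) implies "greater than or equal".
  const letter =
    Object.entries(letters)
      .sort(([, a], [, b]) => b - a)
      .find(([, min]) => score >= min)?.[0] ?? 'E';

  return { score, letter };
}

// computeScore(1, 0) -> { score: 45, letter: 'D' }
// computeScore(2, 0) -> { score: 35, letter: 'E' }
// computeScore(4, 0) -> { score: 25, letter: 'E' }
```

With this config, 5 errors would still fall in the `3` bucket (25%), while 6 errors would drop the score to 15%.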

Output:

At the bottom of the output log you can see the scoring, for example:

✖ SCORING: A (93%)

## Error Results

Spectral has a few different error severities: `error`, `warn`, `info`, and `hint`, and they are in "order" from highest to lowest. By default, all results will be shown regardless of severity, but since v5.0, only the presence of errors will cause a failure status code of 1. Seeing results and getting a failure code for it are now two different things.
20 changes: 18 additions & 2 deletions packages/cli/src/commands/__tests__/lint.test.ts
@@ -146,6 +146,22 @@ describe('lint', () => {
);
});

it('calls lint with document, ruleset and scoring config file', async () => {
const doc = './__fixtures__/empty-oas2-document.json';
const ruleset = 'custom-ruleset.json';
const configFile = 'scoring-config.json';
await run(`lint -r ${ruleset} -s ${configFile} ${doc}`);
expect(lint).toBeCalledWith([doc], {
encoding: 'utf8',
format: ['stylish'],
output: { stylish: '<stdout>' },
ruleset: 'custom-ruleset.json',
stdinFilepath: undefined,
ignoreUnknownFormat: false,
failOnUnmatchedGlobs: false,
});
});

it.each(['json', 'stylish'])('calls formatOutput with %s format', async format => {
await run(`lint -f ${format} ./__fixtures__/empty-oas2-document.json`);
expect(formatOutput).toBeCalledWith(results, format, { failSeverity: DiagnosticSeverity.Error });
@@ -244,13 +260,13 @@ describe('lint', () => {
expect(process.stderr.write).nthCalledWith(2, `Error #1: ${chalk.red('some unhandled exception')}\n`);
expect(process.stderr.write).nthCalledWith(
3,
expect.stringContaining(`packages/cli/src/commands/__tests__/lint.test.ts:236`),
expect.stringContaining(`packages/cli/src/commands/__tests__/lint.test.ts:252`),
);

expect(process.stderr.write).nthCalledWith(4, `Error #2: ${chalk.red('another one')}\n`);
expect(process.stderr.write).nthCalledWith(
5,
expect.stringContaining(`packages/cli/src/commands/__tests__/lint.test.ts:237`),
expect.stringContaining(`packages/cli/src/commands/__tests__/lint.test.ts:253`),
);

expect(process.stderr.write).nthCalledWith(6, `Error #3: ${chalk.red('original exception')}\n`);
47 changes: 45 additions & 2 deletions packages/cli/src/commands/lint.ts
@@ -14,6 +14,14 @@ import { formatOutput, writeOutput } from '../services/output';
import { FailSeverity, ILintConfig, OutputFormat } from '../services/config';

import { CLIError } from '../errors';
import { ScoringConfig } from './../formatters/types';
import {
getScoringConfig,
getScoringLevel,
groupBySource,
getCountsBySeverity,
uniqueErrors,
} from '../formatters/utils';

const formatOptions = Object.values(OutputFormat);

@@ -127,6 +135,11 @@ const lintCommand: CommandModule = {
description: 'path/URL to a ruleset file',
type: 'string',
},
'scoring-config': {
alias: 's',
description: 'path/URL to a scoring config file',
type: 'string',
},
'fail-severity': {
alias: 'F',
description: 'results of this level or above will trigger a failure exit code',
@@ -168,6 +181,7 @@ const lintCommand: CommandModule = {
failSeverity,
displayOnlyFailures,
ruleset,
scoringConfig,
stdinFilepath,
format,
output,
@@ -197,20 +211,30 @@ const lintCommand: CommandModule = {
results = filterResultsBySeverity(results, failSeverity);
}

const scoringConfigData = getScoringConfig(scoringConfig);

await Promise.all(
format.map(f => {
const formattedOutput = formatOutput(results, f, { failSeverity: getDiagnosticSeverity(failSeverity) });
const formattedOutput = formatOutput(results, f, {
failSeverity: getDiagnosticSeverity(failSeverity),
scoringConfig: scoringConfigData,
});
return writeOutput(formattedOutput, output?.[f] ?? '<stdout>');
}),
);

if (results.length > 0) {
process.exit(severeEnoughToFail(results, failSeverity) ? 1 : 0);
process.exit(
scoringThresholdNotEnough(results, scoringConfigData) ? 1 : severeEnoughToFail(results, failSeverity) ? 1 : 0,
);
} else if (config.quiet !== true) {
const isErrorSeverity = getDiagnosticSeverity(failSeverity) === DiagnosticSeverity.Error;
process.stdout.write(
`No results with a severity of '${failSeverity}' ${isErrorSeverity ? '' : 'or higher '}found!\n`,
);
if (scoringConfig !== void 0) {
process.stdout.write(`SCORING: (100%)\nPASSED!`);
}
}
} catch (ex) {
fail(isError(ex) ? ex : new Error(String(ex)), config.verbose === true);
@@ -273,6 +297,25 @@ const filterResultsBySeverity = (results: IRuleResult[], failSeverity: FailSever
return results.filter(r => r.severity <= diagnosticSeverity);
};

const scoringThresholdNotEnough = (results: IRuleResult[], scoringConfig: ScoringConfig | undefined): boolean => {
if (scoringConfig !== void 0) {
const groupedResults = groupBySource(results);
let groupedUniqueResults = { ...groupedResults };
if (scoringConfig.uniqueErrors) {
groupedUniqueResults = { ...groupBySource(uniqueErrors(results)) };
}
return (
scoringConfig.threshold >
getScoringLevel(
getCountsBySeverity(groupedUniqueResults),
scoringConfig.scoringSubtract,
scoringConfig.warningsSubtract,
)
);
}
return false;
};

export const severeEnoughToFail = (results: IRuleResult[], failSeverity: FailSeverity): boolean => {
const diagnosticSeverity = getDiagnosticSeverity(failSeverity);
return results.some(r => r.severity <= diagnosticSeverity);
33 changes: 30 additions & 3 deletions packages/cli/src/formatters/json.ts
@@ -1,6 +1,21 @@
import { Formatter } from './types';
import { ISpectralDiagnostic } from '@stoplight/spectral-core';
import { Formatter, FormatterOptions } from './types';

export const json: Formatter = results => {
import { groupBySource, uniqueErrors, getCountsBySeverity, getScoringText } from './utils';

const version = process.env.npm_package_version;

export const json: Formatter = (results: ISpectralDiagnostic[], options: FormatterOptions) => {
let spectralVersion = '';
let groupedResults;
let scoringText = '';
if (options.scoringConfig !== void 0) {
if (options.scoringConfig.customScoring !== undefined) {
spectralVersion = `${options.scoringConfig.customScoring} ${version as string}`;
}
groupedResults = groupBySource(uniqueErrors(results));
scoringText = getScoringText(getCountsBySeverity(groupedResults), options.scoringConfig);
}
const outputJson = results.map(result => {
return {
code: result.code,
@@ -11,5 +26,17 @@ export const json: Formatter = results => {
source: result.source,
};
});
return JSON.stringify(outputJson, null, '\t');
let objectOutput;
if (options.scoringConfig !== void 0) {
const scoring = +(scoringText !== null ? scoringText.replace('%', '').split(/[()]+/)[1] : 0);
objectOutput = {
version: spectralVersion,
scoring: scoringText.replace('SCORING:', '').trim(),
passed: scoring >= options.scoringConfig.threshold,
results: outputJson,
};
} else {
objectOutput = outputJson;
}
return JSON.stringify(objectOutput, null, '\t');
};
50 changes: 45 additions & 5 deletions packages/cli/src/formatters/pretty.ts
@@ -24,22 +40,40 @@
* @author Ava Thorn
*/

import { ISpectralDiagnostic } from '@stoplight/spectral-core';
import { printPath, PrintStyle } from '@stoplight/spectral-runtime';
import { IDiagnostic, IRange } from '@stoplight/types';
import { IDiagnostic, IRange, DiagnosticSeverity } from '@stoplight/types';
import chalk from 'chalk';

import { Formatter } from './types';
import { getColorForSeverity, getHighestSeverity, getSummary, getSeverityName, groupBySource } from './utils';
import { Formatter, FormatterOptions } from './types';
import {
getColorForSeverity,
getHighestSeverity,
getSummary,
getSeverityName,
groupBySource,
getScoringText,
getCountsBySeverity,
uniqueErrors,
} from './utils';

const { version } = require('../../package.json');

function formatRange(range?: IRange): string {
if (range === void 0) return '';

return ` ${range.start.line + 1}:${range.start.character + 1}`;
}

export const pretty: Formatter = results => {
export const pretty: Formatter = (results: ISpectralDiagnostic[], options: FormatterOptions) => {
const cliui = require('cliui');
let output = '\n';
if (options.scoringConfig !== void 0) {
if (options.scoringConfig.customScoring !== void 0) {
output += `${options.scoringConfig.customScoring}${version as string}\n`;
}
}
output += '\n';
const DEFAULT_TOTAL_WIDTH = process.stdout.columns;
const COLUMNS = [10, 13, 25, 20, 20];
const variableColumns = DEFAULT_TOTAL_WIDTH - COLUMNS.reduce((a, b) => a + b);
@@ -50,10 +68,23 @@ export const pretty: Formatter = results => {
const PAD_TOP1_LEFT0 = [1, 0, 0, 0];
const ui = cliui({ width: DEFAULT_TOTAL_WIDTH, wrap: true });

const uniqueResults = uniqueErrors(results);
const groupedResults = groupBySource(results);
const summaryColor = getColorForSeverity(getHighestSeverity(results));
const summaryColor = getColorForSeverity(getHighestSeverity(uniqueResults));
const summaryText = getSummary(groupedResults);

let groupedUniqueResults = { ...groupedResults };
let scoringColor = '';
let scoringText = null;

if (options.scoringConfig !== void 0) {
if (options.scoringConfig.uniqueErrors) {
groupedUniqueResults = { ...groupBySource(uniqueResults) };
}
scoringColor = getColorForSeverity(DiagnosticSeverity.Information);
scoringText = getScoringText(getCountsBySeverity(groupedUniqueResults), options.scoringConfig);
}

const uniqueIssues: IDiagnostic['code'][] = [];
Object.keys(groupedResults).forEach(i => {
const pathResults = groupedResults[i];
@@ -83,6 +114,15 @@
output += ui.toString();
output += chalk[summaryColor].bold(`${uniqueIssues.length} Unique Issue(s)\n`);
output += chalk[summaryColor].bold(`\u2716${summaryText !== null ? ` ${summaryText}` : ''}\n`);
if (options.scoringConfig !== void 0) {
output += chalk[scoringColor].bold(`\u2716${scoringText !== null ? ` ${scoringText}` : ''}\n`);
const scoring = +(scoringText !== null ? scoringText.replace('%', '').split(/[()]+/)[1] : 0);
if (scoring >= options.scoringConfig.threshold) {
output += chalk['green'].bold(`\u2716 PASSED!\n`);
} else {
output += chalk['red'].bold(`\u2716 NOT PASSED!\n`);
}
}

return output;
};
