> Scroll down for code samples, example requests and responses. Select a language for code samples from the tabs above or the mobile navigation menu.
-ClairV4 is a set of cooperating microservices which scan, index, and match your container's content with known vulnerabilities.
+Clair is a set of cooperating microservices which can index and match a container image's content with known vulnerabilities.
Email: Clair Team Web: Clair Team
License: Apache License 2.0
-
-
-|Name|In|Type|Required|Description|
-|---|---|---|---|---|
-|notification_id|path|string|false|A notification ID returned by a callback|
-
-> Example responses
-
-> 400 Response
-
-```json
-{
- "code": "string",
- "message": "string"
-}
-```
-
-
Responses
-
-|Status|Meaning|Description|Schema|
-|---|---|---|---|
-|200|[OK](https://tools.ietf.org/html/rfc7231#section-6.3.1)|OK|None|
-|400|[Bad Request](https://tools.ietf.org/html/rfc7231#section-6.5.1)|Bad Request|[Error](#schemaerror)|
-|405|[Method Not Allowed](https://tools.ietf.org/html/rfc7231#section-6.5.5)|Method Not Allowed|[Error](#schemaerror)|
-|500|[Internal Server Error](https://tools.ietf.org/html/rfc7231#section-6.6.1)|Internal Server Error|[Error](#schemaerror)|
-
-
-
-## Retrieve a paginated result of notifications for the provided id.
-
-
-
-> Code samples
-
-```python
-import requests
-headers = {
- 'Accept': 'application/json'
-}
-
-r = requests.get('/notifier/api/v1/notification/{notification_id}', headers = headers)
-
-print(r.json())
-
-```
-
-```go
-package main
-
-import (
- "bytes"
- "net/http"
-)
-
-func main() {
-
- headers := map[string][]string{
- "Accept": []string{"application/json"},
- }
-
- data := bytes.NewBuffer([]byte{jsonReq})
- req, err := http.NewRequest("GET", "/notifier/api/v1/notification/{notification_id}", data)
- req.Header = headers
-
- client := &http.Client{}
- resp, err := client.Do(req)
- // ...
-}
-
-```
-
-```javascript
-
-const headers = {
- 'Accept':'application/json'
-};
-
-fetch('/notifier/api/v1/notification/{notification_id}',
-{
- method: 'GET',
-
- headers: headers
-})
-.then(function(res) {
- return res.json();
-}).then(function(body) {
- console.log(body);
-});
-
-```
-
-`GET /notifier/api/v1/notification/{notification_id}`
-
-By performing a GET with a notification_id as a path parameter, the client will retrieve a paginated response of notification objects.
-
-
Parameters
-
-|Name|In|Type|Required|Description|
-|---|---|---|---|---|
-|notification_id|path|string|false|A notification ID returned by a callback|
-|page_size|query|int|false|The maximum number of notifications to deliver in a single page.|
-|next|query|string|false|The next page to fetch via id. Typically this number is provided on initial response in the page.next field. The first GET request may omit this field.|
-
-> Example responses
-
-> 200 Response
-
-```json
-{
- "page": {
- "size": 100,
- "next": "1b4d0db2-e757-4150-bbbb-543658144205"
- },
- "notifications": [
- {
- "id": "5e4b387e-88d3-4364-86fd-063447a6fad2",
- "manifest": "sha256:35c102085707f703de2d9eaad8752d6fe1b8f02b5d2149f1d8357c9cc7fb7d0a",
- "reason": "added",
- "vulnerability": {
- "name": "CVE-2009-5155",
- "fixed_in_version": "v0.0.1",
- "links": "http://link-to-advisory",
- "description": "In the GNU C Library (aka glibc or libc6) before 2.28, parse_reg_exp in posix/regcomp.c misparses alternatives, which allows attackers to cause a denial of service (assertion failure and application exit) or trigger an incorrect result by attempting a regular-expression match.\"",
- "normalized_severity": "Unknown",
- "package": {
- "id": "10",
- "name": "libapt-pkg5.0",
- "version": "1.6.11",
- "kind": "binary",
- "normalized_version": "",
- "arch": "x86",
- "module": "",
- "cpe": "",
- "source": {
- "id": "9",
- "name": "apt",
- "version": "1.6.11",
- "kind": "source",
- "source": null
- }
- },
- "distribution": {
- "id": "1",
- "did": "ubuntu",
- "name": "Ubuntu",
- "version": "18.04.3 LTS (Bionic Beaver)",
- "version_code_name": "bionic",
- "version_id": "18.04",
- "arch": "",
- "cpe": "",
- "pretty_name": "Ubuntu 18.04.3 LTS"
- },
- "repository": {
- "id": "string",
- "name": "string",
- "key": "string",
- "uri": "string",
- "cpe": "string"
- }
- }
- }
- ]
-}
-```
-
-
Responses
-
-|Status|Meaning|Description|Schema|
-|---|---|---|---|
-|200|[OK](https://tools.ietf.org/html/rfc7231#section-6.3.1)|A paginated list of notifications|[PagedNotifications](#schemapagednotifications)|
-|400|[Bad Request](https://tools.ietf.org/html/rfc7231#section-6.5.1)|Bad Request|[Error](#schemaerror)|
-|405|[Method Not Allowed](https://tools.ietf.org/html/rfc7231#section-6.5.5)|Method Not Allowed|[Error](#schemaerror)|
-|500|[Internal Server Error](https://tools.ietf.org/html/rfc7231#section-6.6.1)|Internal Server Error|[Error](#schemaerror)|
-
-
-
-
Indexer
+
indexer
## Index the contents of a Manifest
@@ -287,8 +42,8 @@ This operation does not require authentication
```python
import requests
headers = {
- 'Content-Type': 'application/json',
- 'Accept': 'application/json'
+ 'Content-Type': 'application/vnd.clair.manifest.v1+json',
+ 'Accept': 'application/vnd.clair.index_report.v1+json'
}
r = requests.post('/indexer/api/v1/index_report', headers = headers)
@@ -308,8 +63,8 @@ import (
func main() {
headers := map[string][]string{
- "Content-Type": []string{"application/json"},
- "Accept": []string{"application/json"},
+ "Content-Type": []string{"application/vnd.clair.manifest.v1+json"},
+ "Accept": []string{"application/vnd.clair.index_report.v1+json"},
}
data := bytes.NewBuffer([]byte(jsonReq))
@@ -325,25 +80,17 @@ func main() {
```javascript
const inputBody = '{
- "hash": "sha256:fc84b5febd328eccaa913807716887b3eb5ed08bc22cc6933a9ebf82766725e3",
+ "hash": "sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
"layers": [
{
- "hash": "sha256:fc84b5febd328eccaa913807716887b3eb5ed08bc22cc6933a9ebf82766725e3",
- "uri": "https://storage.example.com/blob/2f077db56abccc19f16f140f629ae98e904b4b7d563957a7fc319bd11b82ba36",
- "headers": {
- "property1": [
- "string"
- ],
- "property2": [
- "string"
- ]
- }
+ "hash": "sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b856",
+ "uri": "https://storage.example.com/blob/e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b856"
}
]
}';
const headers = {
- 'Content-Type':'application/json',
- 'Accept':'application/json'
+ 'Content-Type':'application/vnd.clair.manifest.v1+json',
+ 'Accept':'application/vnd.clair.index_report.v1+json'
};
fetch('/indexer/api/v1/index_report',
@@ -368,19 +115,11 @@ By submitting a Manifest object to this endpoint Clair will fetch the layers, sc
```json
{
- "hash": "sha256:fc84b5febd328eccaa913807716887b3eb5ed08bc22cc6933a9ebf82766725e3",
+ "hash": "sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
"layers": [
{
- "hash": "sha256:fc84b5febd328eccaa913807716887b3eb5ed08bc22cc6933a9ebf82766725e3",
- "uri": "https://storage.example.com/blob/2f077db56abccc19f16f140f629ae98e904b4b7d563957a7fc319bd11b82ba36",
- "headers": {
- "property1": [
- "string"
- ],
- "property2": [
- "string"
- ]
- }
+ "hash": "sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b856",
+ "uri": "https://storage.example.com/blob/e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b856"
}
]
}
@@ -390,7 +129,7 @@ By submitting a Manifest object to this endpoint Clair will fetch the layers, sc
|Name|In|Type|Required|Description|
|---|---|---|---|---|
-|body|body|[Manifest](#schemamanifest)|true|none|
+|body|body|[manifest.schema](#schemamanifest.schema)|true|none|
> Example responses
@@ -398,10 +137,29 @@ By submitting a Manifest object to this endpoint Clair will fetch the layers, sc
```json
{
- "manifest_hash": "sha256:fc84b5febd328eccaa913807716887b3eb5ed08bc22cc6933a9ebf82766725e3",
- "state": "IndexFinished",
+ "manifest_hash": "string",
+ "state": "string",
+ "err": "string",
+ "success": true,
"packages": {
- "10": {
+ "property1": {
+ "id": "10",
+ "name": "libapt-pkg5.0",
+ "version": "1.6.11",
+ "kind": "binary",
+ "normalized_version": "",
+ "arch": "x86",
+ "module": "",
+ "cpe": "",
+ "source": {
+ "id": "9",
+ "name": "apt",
+ "version": "1.6.11",
+ "kind": "source",
+ "source": null
+ }
+ },
+ "property2": {
"id": "10",
"name": "libapt-pkg5.0",
"version": "1.6.11",
@@ -420,29 +178,61 @@ By submitting a Manifest object to this endpoint Clair will fetch the layers, sc
}
},
"distributions": {
- "1": {
+ "property1": {
+ "id": "1",
+ "did": "ubuntu",
+ "name": "Ubuntu",
+ "version": "18.04.3 LTS (Bionic Beaver)",
+ "version_code_name": "bionic",
+ "version_id": "18.04",
+ "pretty_name": "Ubuntu 18.04.3 LTS"
+ },
+ "property2": {
"id": "1",
"did": "ubuntu",
"name": "Ubuntu",
"version": "18.04.3 LTS (Bionic Beaver)",
"version_code_name": "bionic",
"version_id": "18.04",
- "arch": "",
- "cpe": "",
"pretty_name": "Ubuntu 18.04.3 LTS"
}
},
+ "repository": {
+ "property1": {
+ "id": "string",
+ "name": "string",
+ "key": "string",
+ "uri": "http://example.com",
+ "cpe": null
+ },
+ "property2": {
+ "id": "string",
+ "name": "string",
+ "key": "string",
+ "uri": "http://example.com",
+ "cpe": null
+ }
+ },
"environments": {
- "10": [
+ "property1": [
+ {
+ "value": {
+ "package_db": "var/lib/dpkg/status",
+ "introduced_in": "sha256:35c102085707f703de2d9eaad8752d6fe1b8f02b5d2149f1d8357c9cc7fb7d0a",
+ "distribution_id": "1"
+ }
+ }
+ ],
+ "property2": [
{
- "package_db": "var/lib/dpkg/status",
- "introduced_in": "sha256:35c102085707f703de2d9eaad8752d6fe1b8f02b5d2149f1d8357c9cc7fb7d0a",
- "distribution_id": "1"
+ "value": {
+ "package_db": "var/lib/dpkg/status",
+ "introduced_in": "sha256:35c102085707f703de2d9eaad8752d6fe1b8f02b5d2149f1d8357c9cc7fb7d0a",
+ "distribution_id": "1"
+ }
}
]
- },
- "success": true,
- "err": ""
+ }
}
```
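The samples above assume a live server. As a server-free sketch using only the standard library (the `clair.example.com` host is a placeholder, not a real deployment), the same POST can be assembled and inspected before sending:

```python
import json
import urllib.request

# Manifest body matching the example above; the storage host is a placeholder.
manifest = {
    "hash": "sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
    "layers": [
        {
            "hash": "sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b856",
            "uri": "https://storage.example.com/blob/e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b856",
        }
    ],
}

req = urllib.request.Request(
    "https://clair.example.com/indexer/api/v1/index_report",
    data=json.dumps(manifest).encode(),
    headers={
        "Content-Type": "application/vnd.clair.manifest.v1+json",
        "Accept": "application/vnd.clair.index_report.v1+json",
    },
    method="POST",
)
# urllib.request.urlopen(req) would submit it; a 201 response carries the
# IndexReport body and a Location header for later retrieval.
```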
@@ -450,16 +240,28 @@ By submitting a Manifest object to this endpoint Clair will fetch the layers, sc
|Status|Meaning|Description|Schema|
|---|---|---|---|
-|201|[Created](https://tools.ietf.org/html/rfc7231#section-6.3.2)|IndexReport Created|[IndexReport](#schemaindexreport)|
-|400|[Bad Request](https://tools.ietf.org/html/rfc7231#section-6.5.1)|Bad Request|[Error](#schemaerror)|
-|405|[Method Not Allowed](https://tools.ietf.org/html/rfc7231#section-6.5.5)|Method Not Allowed|[Error](#schemaerror)|
-|500|[Internal Server Error](https://tools.ietf.org/html/rfc7231#section-6.6.1)|Internal Server Error|[Error](#schemaerror)|
+|200|[OK](https://tools.ietf.org/html/rfc7231#section-6.3.1)|none|None|
+|201|[Created](https://tools.ietf.org/html/rfc7231#section-6.3.2)|IndexReport created.
+
+Clients SHOULD NOT read the body when simply submitting the manifest for later vulnerability reporting.|[index_report.schema](#schemaindex_report.schema)|
+|400|[Bad Request](https://tools.ietf.org/html/rfc7231#section-6.5.1)|Bad Request|[error.schema](#schemaerror.schema)|
+|412|[Precondition Failed](https://tools.ietf.org/html/rfc7232#section-4.2)|none|None|
+|415|[Unsupported Media Type](https://tools.ietf.org/html/rfc7231#section-6.5.13)|Unsupported Media Type|[error.schema](#schemaerror.schema)|
+|default|Default|Internal Server Error|[error.schema](#schemaerror.schema)|
+
+### Response Headers
+
+|Status|Header|Type|Format|Description|
+|---|---|---|---|---|
+|200|Clair-Error|string||This is a trailer containing any errors encountered while writing the response.|
+|201|Location|string||HTTP [Location header](https://httpwg.org/specs/rfc9110.html#field.location)|
+|201|Link|string||Web Linking [Link header](https://httpwg.org/specs/rfc8288.html#header)|
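Per the table above, a 201 carries a Location header pointing at the created IndexReport, so fire-and-forget clients can keep only that URL. A small illustrative helper (the function name is ours, not part of any client library):

```python
def created_location(status, headers):
    """Return the new IndexReport's URL from a 201 response, else None.

    Clients submitting a manifest only for later vulnerability reporting
    can keep this URL and skip reading the response body.
    """
    if status == 201:
        return headers.get("Location")
    return None
```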
-## Delete the IndexReport and associated information for the given Manifest hashes, if they exist.
+## Delete the referenced manifests.
@@ -468,8 +270,8 @@ This operation does not require authentication
```python
import requests
headers = {
- 'Content-Type': 'application/json',
- 'Accept': 'application/json'
+ 'Content-Type': 'application/vnd.clair.bulk_delete.v1+json',
+ 'Accept': 'application/vnd.clair.bulk_delete.v1+json'
}
r = requests.delete('/indexer/api/v1/index_report', headers = headers)
@@ -489,8 +291,8 @@ import (
func main() {
headers := map[string][]string{
- "Content-Type": []string{"application/json"},
- "Accept": []string{"application/json"},
+ "Content-Type": []string{"application/vnd.clair.bulk_delete.v1+json"},
+ "Accept": []string{"application/vnd.clair.bulk_delete.v1+json"},
}
data := bytes.NewBuffer([]byte(jsonReq))
@@ -506,11 +308,11 @@ func main() {
```javascript
const inputBody = '[
- "sha256:fc84b5febd328eccaa913807716887b3eb5ed08bc22cc6933a9ebf82766725e3"
+ "string"
]';
const headers = {
- 'Content-Type':'application/json',
- 'Accept':'application/json'
+ 'Content-Type':'application/vnd.clair.bulk_delete.v1+json',
+ 'Accept':'application/vnd.clair.bulk_delete.v1+json'
};
fetch('/indexer/api/v1/index_report',
@@ -535,15 +337,15 @@ Given a Manifest's content addressable hash, any data related to it will be remo
```json
[
- "sha256:fc84b5febd328eccaa913807716887b3eb5ed08bc22cc6933a9ebf82766725e3"
+ "string"
]
```
-
Parameters
+
Parameters
|Name|In|Type|Required|Description|
|---|---|---|---|---|
-|body|body|[BulkDelete](#schemabulkdelete)|true|none|
+|body|body|[bulk_delete.schema](#schemabulk_delete.schema)|true|none|
> Example responses
@@ -551,23 +353,30 @@ Given a Manifest's content addressable hash, any data related to it will be remo
```json
[
- "sha256:fc84b5febd328eccaa913807716887b3eb5ed08bc22cc6933a9ebf82766725e3"
+ "string"
]
```
-
Responses
+
Responses
|Status|Meaning|Description|Schema|
|---|---|---|---|
-|200|[OK](https://tools.ietf.org/html/rfc7231#section-6.3.1)|OK|[BulkDelete](#schemabulkdelete)|
-|400|[Bad Request](https://tools.ietf.org/html/rfc7231#section-6.5.1)|Bad Request|[Error](#schemaerror)|
-|500|[Internal Server Error](https://tools.ietf.org/html/rfc7231#section-6.6.1)|Internal Server Error|[Error](#schemaerror)|
+|200|[OK](https://tools.ietf.org/html/rfc7231#section-6.3.1)|none|[bulk_delete.schema](#schemabulk_delete.schema)|
+|400|[Bad Request](https://tools.ietf.org/html/rfc7231#section-6.5.1)|Bad Request|[error.schema](#schemaerror.schema)|
+|415|[Unsupported Media Type](https://tools.ietf.org/html/rfc7231#section-6.5.13)|Unsupported Media Type|[error.schema](#schemaerror.schema)|
+|default|Default|Internal Server Error|[error.schema](#schemaerror.schema)|
+
+### Response Headers
+
+|Status|Header|Type|Format|Description|
+|---|---|---|---|---|
+|200|Clair-Error|string||This is a trailer containing any errors encountered while writing the response.|
-## Delete the IndexReport and associated information for the given Manifest hash, if exists.
+## Delete the referenced manifest.
@@ -576,10 +385,10 @@ This operation does not require authentication
```python
import requests
headers = {
- 'Accept': 'application/json'
+ 'Accept': 'application/vnd.clair.error.v1+json'
}
-r = requests.delete('/indexer/api/v1/index_report/{manifest_hash}', headers = headers)
+r = requests.delete('/indexer/api/v1/index_report/{digest}', headers = headers)
print(r.json())
@@ -596,11 +405,11 @@ import (
func main() {
headers := map[string][]string{
- "Accept": []string{"application/json"},
+ "Accept": []string{"application/vnd.clair.error.v1+json"},
}
data := bytes.NewBuffer([]byte(jsonReq))
- req, err := http.NewRequest("DELETE", "/indexer/api/v1/index_report/{manifest_hash}", data)
+ req, err := http.NewRequest("DELETE", "/indexer/api/v1/index_report/{digest}", data)
req.Header = headers
client := &http.Client{}
@@ -613,10 +422,10 @@ func main() {
```javascript
const headers = {
- 'Accept':'application/json'
+ 'Accept':'application/vnd.clair.error.v1+json'
};
-fetch('/indexer/api/v1/index_report/{manifest_hash}',
+fetch('/indexer/api/v1/index_report/{digest}',
{
method: 'DELETE',
@@ -630,15 +439,15 @@ fetch('/indexer/api/v1/index_report/{manifest_hash}',
```
-`DELETE /indexer/api/v1/index_report/{manifest_hash}`
+`DELETE /indexer/api/v1/index_report/{digest}`
Given a Manifest's content addressable hash, any data related to it will be removed if it exists.
-
Parameters
+
Parameters
|Name|In|Type|Required|Description|
|---|---|---|---|---|
-|manifest_hash|path|[Digest](#schemadigest)|true|A digest of a manifest that has been indexed previous to this request.|
+|digest|path|[digest.schema](#schemadigest.schema)|true|OCI-compatible digest of a referred object.|
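For reference, an OCI-compatible digest is the algorithm name, a colon, and the lowercase hex digest of the content. A minimal sketch for the sha256 case:

```python
import hashlib

def oci_digest(content: bytes) -> str:
    """Return the OCI-style digest ("sha256:" + lowercase hex) of raw bytes."""
    return "sha256:" + hashlib.sha256(content).hexdigest()

# Empty input reproduces the well-known digest used in the manifest examples:
print(oci_digest(b""))
# sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
```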
> Example responses
@@ -651,19 +460,27 @@ Given a Manifest's content addressable hash, any data related to it will be remo
}
```
-
Responses
+
Responses
|Status|Meaning|Description|Schema|
|---|---|---|---|
-|204|[No Content](https://tools.ietf.org/html/rfc7231#section-6.3.5)|OK|None|
-|400|[Bad Request](https://tools.ietf.org/html/rfc7231#section-6.5.1)|Bad Request|[Error](#schemaerror)|
-|500|[Internal Server Error](https://tools.ietf.org/html/rfc7231#section-6.6.1)|Internal Server Error|[Error](#schemaerror)|
+|200|[OK](https://tools.ietf.org/html/rfc7231#section-6.3.1)|none|None|
+|204|[No Content](https://tools.ietf.org/html/rfc7231#section-6.3.5)|none|None|
+|400|[Bad Request](https://tools.ietf.org/html/rfc7231#section-6.5.1)|Bad Request|[error.schema](#schemaerror.schema)|
+|415|[Unsupported Media Type](https://tools.ietf.org/html/rfc7231#section-6.5.13)|Unsupported Media Type|[error.schema](#schemaerror.schema)|
+|default|Default|Internal Server Error|[error.schema](#schemaerror.schema)|
+
+### Response Headers
+
+|Status|Header|Type|Format|Description|
+|---|---|---|---|---|
+|200|Clair-Error|string||This is a trailer containing any errors encountered while writing the response.|
-## Retrieve an IndexReport for the given Manifest hash if exists.
+## Retrieve the IndexReport for the referenced manifest.
@@ -672,10 +489,10 @@ This operation does not require authentication
```python
import requests
headers = {
- 'Accept': 'application/json'
+ 'Accept': 'application/vnd.clair.index_report.v1+json'
}
-r = requests.get('/indexer/api/v1/index_report/{manifest_hash}', headers = headers)
+r = requests.get('/indexer/api/v1/index_report/{digest}', headers = headers)
print(r.json())
@@ -692,11 +509,11 @@ import (
func main() {
headers := map[string][]string{
- "Accept": []string{"application/json"},
+ "Accept": []string{"application/vnd.clair.index_report.v1+json"},
}
data := bytes.NewBuffer([]byte(jsonReq))
- req, err := http.NewRequest("GET", "/indexer/api/v1/index_report/{manifest_hash}", data)
+ req, err := http.NewRequest("GET", "/indexer/api/v1/index_report/{digest}", data)
req.Header = headers
client := &http.Client{}
@@ -709,10 +526,10 @@ func main() {
```javascript
const headers = {
- 'Accept':'application/json'
+ 'Accept':'application/vnd.clair.index_report.v1+json'
};
-fetch('/indexer/api/v1/index_report/{manifest_hash}',
+fetch('/indexer/api/v1/index_report/{digest}',
{
method: 'GET',
@@ -726,15 +543,15 @@ fetch('/indexer/api/v1/index_report/{manifest_hash}',
```
-`GET /indexer/api/v1/index_report/{manifest_hash}`
+`GET /indexer/api/v1/index_report/{digest}`
-Given a Manifest's content addressable hash an IndexReport will be retrieved if exists.
+Given a Manifest's content addressable hash, an IndexReport will be retrieved if it exists.
-
|Status|Meaning|Description|Schema|
|---|---|---|---|
-|200|[OK](https://tools.ietf.org/html/rfc7231#section-6.3.1)|IndexReport retrieved|[IndexReport](#schemaindexreport)|
-|400|[Bad Request](https://tools.ietf.org/html/rfc7231#section-6.5.1)|Bad Request|[Error](#schemaerror)|
-|404|[Not Found](https://tools.ietf.org/html/rfc7231#section-6.5.4)|Not Found|[Error](#schemaerror)|
-|405|[Method Not Allowed](https://tools.ietf.org/html/rfc7231#section-6.5.5)|Method Not Allowed|[Error](#schemaerror)|
-|500|[Internal Server Error](https://tools.ietf.org/html/rfc7231#section-6.6.1)|Internal Server Error|[Error](#schemaerror)|
+|200|[OK](https://tools.ietf.org/html/rfc7231#section-6.3.1)|IndexReport retrieved|[index_report.schema](#schemaindex_report.schema)|
+|400|[Bad Request](https://tools.ietf.org/html/rfc7231#section-6.5.1)|Bad Request|[error.schema](#schemaerror.schema)|
+|404|[Not Found](https://tools.ietf.org/html/rfc7231#section-6.5.4)|Not Found|[error.schema](#schemaerror.schema)|
+|415|[Unsupported Media Type](https://tools.ietf.org/html/rfc7231#section-6.5.13)|Unsupported Media Type|[error.schema](#schemaerror.schema)|
+|default|Default|Internal Server Error|[error.schema](#schemaerror.schema)|
+
+### Response Headers
+
+|Status|Header|Type|Format|Description|
+|---|---|---|---|---|
+|200|Clair-Error|string||This is a trailer containing any errors encountered while writing the response.|
-
Matcher
+
matcher
-## Retrieve a VulnerabilityReport for a given manifest's content addressable hash.
+## Retrieve a VulnerabilityReport for the referenced manifest.
@@ -910,10 +784,10 @@ This operation does not require authentication
```python
import requests
headers = {
- 'Accept': 'application/json'
+ 'Accept': 'application/vnd.clair.vulnerability_report.v1+json'
}
-r = requests.get('/matcher/api/v1/vulnerability_report/{manifest_hash}', headers = headers)
+r = requests.get('/matcher/api/v1/vulnerability_report/{digest}', headers = headers)
print(r.json())
@@ -930,11 +804,11 @@ import (
func main() {
headers := map[string][]string{
- "Accept": []string{"application/json"},
+ "Accept": []string{"application/vnd.clair.vulnerability_report.v1+json"},
}
data := bytes.NewBuffer([]byte(jsonReq))
- req, err := http.NewRequest("GET", "/matcher/api/v1/vulnerability_report/{manifest_hash}", data)
+ req, err := http.NewRequest("GET", "/matcher/api/v1/vulnerability_report/{digest}", data)
req.Header = headers
client := &http.Client{}
@@ -947,10 +821,10 @@ func main() {
```javascript
const headers = {
- 'Accept':'application/json'
+ 'Accept':'application/vnd.clair.vulnerability_report.v1+json'
};
-fetch('/matcher/api/v1/vulnerability_report/{manifest_hash}',
+fetch('/matcher/api/v1/vulnerability_report/{digest}',
{
method: 'GET',
@@ -964,15 +838,15 @@ fetch('/matcher/api/v1/vulnerability_report/{manifest_hash}',
```
-`GET /matcher/api/v1/vulnerability_report/{manifest_hash}`
+`GET /matcher/api/v1/vulnerability_report/{digest}`
Given a Manifest's content addressable hash, a VulnerabilityReport will be created. The Manifest **must** have been Indexed first via the Index endpoint.
-
Parameters
+
Parameters
|Name|In|Type|Required|Description|
|---|---|---|---|---|
-|manifest_hash|path|[Digest](#schemadigest)|true|A digest of a manifest that has been indexed previous to this request.|
+|digest|path|[digest.schema](#schemadigest.schema)|true|OCI-compatible digest of a referred object.|
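Since the path parameter is an OCI-compatible digest, a client may want to validate it before building the URL. A hypothetical helper (the regex is our assumption about the digest shape, not the server's exact validation):

```python
import re

# Assumed digest shape: sha256 (64 hex chars) or sha512 (128 hex chars).
_DIGEST = re.compile(r"^sha256:[0-9a-f]{64}$|^sha512:[0-9a-f]{128}$")

def vulnerability_report_path(digest: str) -> str:
    """Build the matcher path for a manifest digest that was indexed earlier."""
    if not _DIGEST.match(digest):
        raise ValueError(f"not an OCI-style digest: {digest!r}")
    return f"/matcher/api/v1/vulnerability_report/{digest}"
```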
> Example responses
@@ -980,9 +854,26 @@ Given a Manifest's content addressable hash a VulnerabilityReport will be create
```json
{
- "manifest_hash": "sha256:fc84b5febd328eccaa913807716887b3eb5ed08bc22cc6933a9ebf82766725e3",
+ "manifest_hash": "string",
"packages": {
- "10": {
+ "property1": {
+ "id": "10",
+ "name": "libapt-pkg5.0",
+ "version": "1.6.11",
+ "kind": "binary",
+ "normalized_version": "",
+ "arch": "x86",
+ "module": "",
+ "cpe": "",
+ "source": {
+ "id": "9",
+ "name": "apt",
+ "version": "1.6.11",
+ "kind": "source",
+ "source": null
+ }
+ },
+ "property2": {
"id": "10",
"name": "libapt-pkg5.0",
"version": "1.6.11",
@@ -1001,44 +892,105 @@ Given a Manifest's content addressable hash a VulnerabilityReport will be create
}
},
"distributions": {
- "1": {
+ "property1": {
+ "id": "1",
+ "did": "ubuntu",
+ "name": "Ubuntu",
+ "version": "18.04.3 LTS (Bionic Beaver)",
+ "version_code_name": "bionic",
+ "version_id": "18.04",
+ "pretty_name": "Ubuntu 18.04.3 LTS"
+ },
+ "property2": {
"id": "1",
"did": "ubuntu",
"name": "Ubuntu",
"version": "18.04.3 LTS (Bionic Beaver)",
"version_code_name": "bionic",
"version_id": "18.04",
- "arch": "",
- "cpe": "",
"pretty_name": "Ubuntu 18.04.3 LTS"
}
},
+ "repository": {
+ "property1": {
+ "id": "string",
+ "name": "string",
+ "key": "string",
+ "uri": "http://example.com",
+ "cpe": null
+ },
+ "property2": {
+ "id": "string",
+ "name": "string",
+ "key": "string",
+ "uri": "http://example.com",
+ "cpe": null
+ }
+ },
"environments": {
- "10": [
+ "property1": [
+ {
+ "value": {
+ "package_db": "var/lib/dpkg/status",
+ "introduced_in": "sha256:35c102085707f703de2d9eaad8752d6fe1b8f02b5d2149f1d8357c9cc7fb7d0a",
+ "distribution_id": "1"
+ }
+ }
+ ],
+ "property2": [
{
- "package_db": "var/lib/dpkg/status",
- "introduced_in": "sha256:35c102085707f703de2d9eaad8752d6fe1b8f02b5d2149f1d8357c9cc7fb7d0a",
- "distribution_id": "1"
+ "value": {
+ "package_db": "var/lib/dpkg/status",
+ "introduced_in": "sha256:35c102085707f703de2d9eaad8752d6fe1b8f02b5d2149f1d8357c9cc7fb7d0a",
+ "distribution_id": "1"
+ }
}
]
},
"vulnerabilities": {
- "356835": {
+ "property1": {
+ "id": "356835",
+ "updater": "ubuntu",
+ "name": "CVE-2009-5155",
+ "description": "In the GNU C Library (aka glibc or libc6) before 2.28, parse_reg_exp in posix/regcomp.c misparses alternatives, which allows attackers to cause a denial of service (assertion failure and application exit) or trigger an incorrect result by attempting a regular-expression match.",
+ "links": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2009-5155 http://people.canonical.com/~ubuntu-security/cve/2009/CVE-2009-5155.html https://sourceware.org/bugzilla/show_bug.cgi?id=11053 https://debbugs.gnu.org/cgi/bugreport.cgi?bug=22793 https://debbugs.gnu.org/cgi/bugreport.cgi?bug=32806 https://debbugs.gnu.org/cgi/bugreport.cgi?bug=34238 https://sourceware.org/bugzilla/show_bug.cgi?id=18986",
+ "severity": "Low",
+ "normalized_severity": "Low",
+ "package": {
+ "id": "0",
+ "name": "glibc",
+ "kind": "binary",
+ "source": null
+ },
+ "dist": {
+ "id": "0",
+ "did": "ubuntu",
+ "name": "Ubuntu",
+ "version": "18.04.3 LTS (Bionic Beaver)",
+ "version_code_name": "bionic",
+ "version_id": "18.04",
+ "arch": "amd64"
+ },
+ "repo": {
+ "id": "0",
+ "name": "Ubuntu 18.04.3 LTS"
+ },
+ "issued": "2019-10-12T07:20:50.52Z",
+ "fixed_in_version": "2.28-0ubuntu1"
+ },
+ "property2": {
"id": "356835",
- "updater": "",
+ "updater": "ubuntu",
"name": "CVE-2009-5155",
- "description": "In the GNU C Library (aka glibc or libc6) before 2.28, parse_reg_exp in posix/regcomp.c misparses alternatives, which allows attackers to cause a denial of service (assertion failure and application exit) or trigger an incorrect result by attempting a regular-expression match.\"",
- "links": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2009-5155 http://people.canonical.com/~ubuntu-security/cve/2009/CVE-2009-5155.html https://sourceware.org/bugzilla/show_bug.cgi?id=11053 https://debbugs.gnu.org/cgi/bugreport.cgi?bug=22793 https://debbugs.gnu.org/cgi/bugreport.cgi?bug=32806 https://debbugs.gnu.org/cgi/bugreport.cgi?bug=34238 https://sourceware.org/bugzilla/show_bug.cgi?id=18986\"",
+ "description": "In the GNU C Library (aka glibc or libc6) before 2.28, parse_reg_exp in posix/regcomp.c misparses alternatives, which allows attackers to cause a denial of service (assertion failure and application exit) or trigger an incorrect result by attempting a regular-expression match.",
+ "links": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2009-5155 http://people.canonical.com/~ubuntu-security/cve/2009/CVE-2009-5155.html https://sourceware.org/bugzilla/show_bug.cgi?id=11053 https://debbugs.gnu.org/cgi/bugreport.cgi?bug=22793 https://debbugs.gnu.org/cgi/bugreport.cgi?bug=32806 https://debbugs.gnu.org/cgi/bugreport.cgi?bug=34238 https://sourceware.org/bugzilla/show_bug.cgi?id=18986",
"severity": "Low",
"normalized_severity": "Low",
"package": {
"id": "0",
"name": "glibc",
- "version": "",
- "kind": "",
- "source": null,
- "package_db": "",
- "repository_hint": ""
+ "kind": "binary",
+ "source": null
},
"dist": {
"id": "0",
@@ -1047,354 +999,779 @@ Given a Manifest's content addressable hash a VulnerabilityReport will be create
"version": "18.04.3 LTS (Bionic Beaver)",
"version_code_name": "bionic",
"version_id": "18.04",
- "arch": "",
- "cpe": "",
- "pretty_name": ""
+ "arch": "amd64"
},
"repo": {
"id": "0",
- "name": "Ubuntu 18.04.3 LTS",
- "key": "",
- "uri": ""
+ "name": "Ubuntu 18.04.3 LTS"
},
"issued": "2019-10-12T07:20:50.52Z",
"fixed_in_version": "2.28-0ubuntu1"
}
},
"package_vulnerabilities": {
- "10": [
- "356835"
+ "property1": [
+ "string"
+ ],
+ "property2": [
+ "string"
]
+ },
+ "enrichments": {
+ "property1": [],
+ "property2": []
}
}
```
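A VulnerabilityReport cross-references its id-keyed maps: `packages` and `vulnerabilities` are keyed by id, and `package_vulnerabilities` maps each package id to the vulnerability ids matched against it. A minimal sketch (using the concrete ids from the original example, "10" and "356835") of joining them back into names:

```python
def affected_packages(report):
    """Map package name -> names of the vulnerabilities affecting it."""
    out = {}
    for pkg_id, vuln_ids in report.get("package_vulnerabilities", {}).items():
        name = report["packages"][pkg_id]["name"]
        out[name] = [report["vulnerabilities"][v]["name"] for v in vuln_ids]
    return out

# Trimmed-down report shaped like the example above:
report = {
    "packages": {"10": {"name": "libapt-pkg5.0"}},
    "vulnerabilities": {"356835": {"name": "CVE-2009-5155"}},
    "package_vulnerabilities": {"10": ["356835"]},
}
print(affected_packages(report))
# {'libapt-pkg5.0': ['CVE-2009-5155']}
```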
-
Responses
+
Responses
|Status|Meaning|Description|Schema|
|---|---|---|---|
-|201|[Created](https://tools.ietf.org/html/rfc7231#section-6.3.2)|VulnerabilityReport Created|[VulnerabilityReport](#schemavulnerabilityreport)|
-|400|[Bad Request](https://tools.ietf.org/html/rfc7231#section-6.5.1)|Bad Request|[Error](#schemaerror)|
-|404|[Not Found](https://tools.ietf.org/html/rfc7231#section-6.5.4)|Not Found|[Error](#schemaerror)|
-|405|[Method Not Allowed](https://tools.ietf.org/html/rfc7231#section-6.5.5)|Method Not Allowed|[Error](#schemaerror)|
-|500|[Internal Server Error](https://tools.ietf.org/html/rfc7231#section-6.6.1)|Internal Server Error|[Error](#schemaerror)|
+|200|[OK](https://tools.ietf.org/html/rfc7231#section-6.3.1)|none|None|
+|201|[Created](https://tools.ietf.org/html/rfc7231#section-6.3.2)|Vulnerability Report Created|[vulnerability_report.schema](#schemavulnerability_report.schema)|
+|400|[Bad Request](https://tools.ietf.org/html/rfc7231#section-6.5.1)|Bad Request|[error.schema](#schemaerror.schema)|
+|404|[Not Found](https://tools.ietf.org/html/rfc7231#section-6.5.4)|Not Found|[error.schema](#schemaerror.schema)|
+|415|[Unsupported Media Type](https://tools.ietf.org/html/rfc7231#section-6.5.13)|Unsupported Media Type|[error.schema](#schemaerror.schema)|
+|default|Default|Internal Server Error|[error.schema](#schemaerror.schema)|
+
+### Response Headers
+
+|Status|Header|Type|Format|Description|
+|---|---|---|---|---|
+|200|Clair-Error|string||This is a trailer containing any errors encountered while writing the response.|
-# Schemas
-
-
-```
+## Delete the referenced notification set.
-Page
+
-### Properties
+> Code samples
-|Name|Type|Required|Restrictions|Description|
-|---|---|---|---|---|
-|size|int|false|none|The maximum number of elements in a page|
-|next|string|false|none|The next id to submit to the api to continue paging|
+```python
+import requests
+headers = {
+ 'Accept': 'application/vnd.clair.error.v1+json'
+}
-
PagedNotifications
-
-
-
-
-
+r = requests.delete('/notifier/api/v1/notification/{id}', headers = headers)
-```json
-{
- "page": {
- "size": 100,
- "next": "1b4d0db2-e757-4150-bbbb-543658144205"
- },
- "notifications": [
- {
- "id": "5e4b387e-88d3-4364-86fd-063447a6fad2",
- "manifest": "sha256:35c102085707f703de2d9eaad8752d6fe1b8f02b5d2149f1d8357c9cc7fb7d0a",
- "reason": "added",
- "vulnerability": {
- "name": "CVE-2009-5155",
- "fixed_in_version": "v0.0.1",
- "links": "http://link-to-advisory",
- "description": "In the GNU C Library (aka glibc or libc6) before 2.28, parse_reg_exp in posix/regcomp.c misparses alternatives, which allows attackers to cause a denial of service (assertion failure and application exit) or trigger an incorrect result by attempting a regular-expression match.\"",
- "normalized_severity": "Unknown",
- "package": {
- "id": "10",
- "name": "libapt-pkg5.0",
- "version": "1.6.11",
- "kind": "binary",
- "normalized_version": "",
- "arch": "x86",
- "module": "",
- "cpe": "",
- "source": {
- "id": "9",
- "name": "apt",
- "version": "1.6.11",
- "kind": "source",
- "source": null
- }
- },
- "distribution": {
- "id": "1",
- "did": "ubuntu",
- "name": "Ubuntu",
- "version": "18.04.3 LTS (Bionic Beaver)",
- "version_code_name": "bionic",
- "version_id": "18.04",
- "arch": "",
- "cpe": "",
- "pretty_name": "Ubuntu 18.04.3 LTS"
- },
- "repository": {
- "id": "string",
- "name": "string",
- "key": "string",
- "uri": "string",
- "cpe": "string"
- }
- }
- }
- ]
-}
+print(r.json())
```
-PagedNotifications
+```go
+package main
-### Properties
+import (
+ "bytes"
+ "net/http"
+)
-|Name|Type|Required|Restrictions|Description|
-|---|---|---|---|---|
-|page|object|false|none|A page object informing the client the next page to retrieve. If page.next becomes "-1" the client should stop paging.|
-|notifications|[[Notification](#schemanotification)]|false|none|A list of notifications within this page|
+func main() {
-
Callback
-
-
-
-
-
+ headers := map[string][]string{
+ "Accept": []string{"application/vnd.clair.error.v1+json"},
+ }
-```json
-{
- "notification_id": "269886f3-0146-4f08-9bf7-cb1138d48643",
- "callback": "http://clair-notifier/notifier/api/v1/notification/269886f3-0146-4f08-9bf7-cb1138d48643"
+ data := bytes.NewBuffer([]byte{jsonReq})
+ req, err := http.NewRequest("DELETE", "/notifier/api/v1/notification/{id}", data)
+ req.Header = headers
+
+ client := &http.Client{}
+ resp, err := client.Do(req)
+ // ...
}
```
-Callback
-
-### Properties
-
-|Name|Type|Required|Restrictions|Description|
-|---|---|---|---|---|
-|notification_id|string|false|none|the unique identifier for this set of notifications|
-|callback|string|false|none|the url where notifications can be retrieved|
+```javascript
-
VulnSummary
-
-
-
-
-
+const headers = {
+ 'Accept':'application/vnd.clair.error.v1+json'
+};
-```json
+fetch('/notifier/api/v1/notification/{id}',
{
- "name": "CVE-2009-5155",
- "fixed_in_version": "v0.0.1",
- "links": "http://link-to-advisory",
- "description": "In the GNU C Library (aka glibc or libc6) before 2.28, parse_reg_exp in posix/regcomp.c misparses alternatives, which allows attackers to cause a denial of service (assertion failure and application exit) or trigger an incorrect result by attempting a regular-expression match.\"",
- "normalized_severity": "Unknown",
- "package": {
- "id": "10",
- "name": "libapt-pkg5.0",
- "version": "1.6.11",
- "kind": "binary",
- "normalized_version": "",
- "arch": "x86",
- "module": "",
- "cpe": "",
- "source": {
- "id": "9",
- "name": "apt",
- "version": "1.6.11",
- "kind": "source",
- "source": null
- }
- },
- "distribution": {
- "id": "1",
- "did": "ubuntu",
- "name": "Ubuntu",
- "version": "18.04.3 LTS (Bionic Beaver)",
- "version_code_name": "bionic",
- "version_id": "18.04",
- "arch": "",
- "cpe": "",
- "pretty_name": "Ubuntu 18.04.3 LTS"
- },
- "repository": {
- "id": "string",
- "name": "string",
- "key": "string",
- "uri": "string",
- "cpe": "string"
- }
-}
+ method: 'DELETE',
+
+ headers: headers
+})
+.then(function(res) {
+ return res.json();
+}).then(function(body) {
+ console.log(body);
+});
```
-VulnSummary
+`DELETE /notifier/api/v1/notification/{id}`
-### Properties
+Deletes the provided notification ID and all associated notifications.
+After deletion, clients can no longer retrieve these notifications.
-|Name|Type|Required|Restrictions|Description|
+
Parameters
+
+|Name|In|Type|Required|Description|
|---|---|---|---|---|
-|name|string|false|none|the vulnerability name|
-|fixed_in_version|string|false|none|The version which the vulnerability is fixed in. Empty if not fixed.|
-|links|string|false|none|links to external information about vulnerability|
-|description|string|false|none|the vulnerability name|
-|normalized_severity|string|false|none|A well defined set of severity strings guaranteed to be present.|
-|package|[Package](#schemapackage)|false|none|A package discovered by indexing a Manifest|
-|distribution|[Distribution](#schemadistribution)|false|none|An indexed distribution discovered in a layer. See https://www.freedesktop.org/software/systemd/man/os-release.html for explanations and example of fields.|
-|repository|[Repository](#schemarepository)|false|none|A package repository|
+|id|path|[token](#schematoken)|true|A notification ID returned by a callback|
-#### Enumerated Values
+> Example responses
-|Property|Value|
-|---|---|
-|normalized_severity|Unknown|
-|normalized_severity|Negligible|
-|normalized_severity|Low|
-|normalized_severity|Medium|
-|normalized_severity|High|
-|normalized_severity|Critical|
-
-
-### Properties
+|Status|Meaning|Description|Schema|
+|---|---|---|---|
+|200|[OK](https://tools.ietf.org/html/rfc7231#section-6.3.1)|none|None|
+|204|[No Content](https://tools.ietf.org/html/rfc7231#section-6.3.5)|none|None|
+|400|[Bad Request](https://tools.ietf.org/html/rfc7231#section-6.5.1)|Bad Request|[error.schema](#schemaerror.schema)|
+|415|[Unsupported Media Type](https://tools.ietf.org/html/rfc7231#section-6.5.13)|Unsupported Media Type|[error.schema](#schemaerror.schema)|
+|default|Default|Internal Server Error|[error.schema](#schemaerror.schema)|
-|Name|Type|Required|Restrictions|Description|
+### Response Headers
+
+|Status|Header|Type|Format|Description|
|---|---|---|---|---|
-|id|string|false|none|a unique identifier for this notification|
-|manifest|string|false|none|The hash of the manifest affected by the provided vulnerability.|
-|reason|string|false|none|the reason for the notifcation, [added | removed]|
-|vulnerability|[VulnSummary](#schemavulnsummary)|false|none|A summary of a vulnerability|
+|200|Clair-Error|string||This is a trailer containing any errors encountered while writing the response.|
-
+
+|Name|In|Type|Required|Description|
+|---|---|---|---|---|
+|page_size|query|integer|false|The maximum number of notifications to deliver in a single page.|
+|next|query|string|false|The ID of the next page to fetch. This value is typically provided in the "page.next" field of the previous response. The first request should omit this field.|
+|id|path|[token](#schematoken)|true|A notification ID returned by a callback|
+
+> Example responses
+
+> 200 Response
+
+```json
+{
+ "page": {
+ "size": 0,
+ "next": "-1"
+ },
+ "notifications": []
+}
+```
+
+
Responses
+
+|Status|Meaning|Description|Schema|
+|---|---|---|---|
+|200|[OK](https://tools.ietf.org/html/rfc7231#section-6.3.1)|A paginated list of notifications|[notification_page.schema](#schemanotification_page.schema)|
+|304|[Not Modified](https://tools.ietf.org/html/rfc7232#section-4.1)|none|None|
+|400|[Bad Request](https://tools.ietf.org/html/rfc7231#section-6.5.1)|Bad Request|[error.schema](#schemaerror.schema)|
+|415|[Unsupported Media Type](https://tools.ietf.org/html/rfc7231#section-6.5.13)|Unsupported Media Type|[error.schema](#schemaerror.schema)|
+|default|Default|Internal Server Error|[error.schema](#schemaerror.schema)|
+
+### Response Headers
+
+|Status|Header|Type|Format|Description|
+|---|---|---|---|---|
+|200|Clair-Error|string||This is a trailer containing any errors encountered while writing the response.|
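The paging contract above (pass `page.next` back as the `next` query parameter, stop when it becomes `"-1"`) can be sketched as a small client loop. This is a minimal illustration, not part of the API surface: `iter_notifications` and `base_url` are hypothetical names, and it assumes the `requests` library and a reachable notifier.

```python
import requests


def iter_notifications(base_url, notification_id, page_size=100):
    """Yield notifications page by page until page.next becomes "-1"."""
    next_id = None
    while True:
        params = {"page_size": page_size}
        if next_id is not None:
            params["next"] = next_id  # the first request omits "next"
        r = requests.get(
            f"{base_url}/notifier/api/v1/notification/{notification_id}",
            params=params,
        )
        r.raise_for_status()
        body = r.json()
        yield from body.get("notifications", [])
        next_id = body.get("page", {}).get("next")
        if next_id is None or next_id == "-1":
            break  # "-1" signals the final page
```

The loop deliberately omits `next` on the first request, per the parameter description above.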
+
+
+
+
internal
+
+## Retrieve the set of manifests affected by the provided vulnerabilities.
+
+
+
+> Code samples
+
+```python
+import requests
+headers = {
+ 'Accept': 'application/vnd.clair.affected_manifests.v1+json'
+}
+
+r = requests.post('/indexer/api/v1/internal/affected_manifest', headers = headers)
+
+print(r.json())
+
+```
+
+```go
+package main
+
+import (
+ "bytes"
+ "net/http"
+)
+
+func main() {
+
+ headers := map[string][]string{
+ "Accept": []string{"application/vnd.clair.affected_manifests.v1+json"},
+ }
+
+ data := bytes.NewBuffer([]byte{jsonReq})
+ req, err := http.NewRequest("POST", "/indexer/api/v1/internal/affected_manifest", data)
+ req.Header = headers
+
+ client := &http.Client{}
+ resp, err := client.Do(req)
+ // ...
+}
+
+```
+
+```javascript
+
+const headers = {
+ 'Accept':'application/vnd.clair.affected_manifests.v1+json'
+};
+
+fetch('/indexer/api/v1/internal/affected_manifest',
+{
+ method: 'POST',
+
+ headers: headers
+})
+.then(function(res) {
+ return res.json();
+}).then(function(body) {
+ console.log(body);
+});
+
+```
+
+`POST /indexer/api/v1/internal/affected_manifest`
+
+> Example responses
+
+> 200 Response
+
+```json
+{
+ "vulnerabilities": {
+ "property1": {
+ "id": "356835",
+ "updater": "ubuntu",
+ "name": "CVE-2009-5155",
+ "description": "In the GNU C Library (aka glibc or libc6) before 2.28, parse_reg_exp in posix/regcomp.c misparses alternatives, which allows attackers to cause a denial of service (assertion failure and application exit) or trigger an incorrect result by attempting a regular-expression match.",
+ "links": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2009-5155 http://people.canonical.com/~ubuntu-security/cve/2009/CVE-2009-5155.html https://sourceware.org/bugzilla/show_bug.cgi?id=11053 https://debbugs.gnu.org/cgi/bugreport.cgi?bug=22793 https://debbugs.gnu.org/cgi/bugreport.cgi?bug=32806 https://debbugs.gnu.org/cgi/bugreport.cgi?bug=34238 https://sourceware.org/bugzilla/show_bug.cgi?id=18986",
+ "severity": "Low",
+ "normalized_severity": "Low",
+ "package": {
+ "id": "0",
+ "name": "glibc",
+ "kind": "binary",
+ "source": null
+ },
+ "dist": {
+ "id": "0",
+ "did": "ubuntu",
+ "name": "Ubuntu",
+ "version": "18.04.3 LTS (Bionic Beaver)",
+ "version_code_name": "bionic",
+ "version_id": "18.04",
+ "arch": "amd64"
+ },
+ "repo": {
+ "id": "0",
+ "name": "Ubuntu 18.04.3 LTS"
+ },
+ "issued": "2019-10-12T07:20:50.52Z",
+ "fixed_in_version": "2.28-0ubuntu1"
+ },
+ "property2": {
+ "id": "356835",
+ "updater": "ubuntu",
+ "name": "CVE-2009-5155",
+ "description": "In the GNU C Library (aka glibc or libc6) before 2.28, parse_reg_exp in posix/regcomp.c misparses alternatives, which allows attackers to cause a denial of service (assertion failure and application exit) or trigger an incorrect result by attempting a regular-expression match.",
+ "links": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2009-5155 http://people.canonical.com/~ubuntu-security/cve/2009/CVE-2009-5155.html https://sourceware.org/bugzilla/show_bug.cgi?id=11053 https://debbugs.gnu.org/cgi/bugreport.cgi?bug=22793 https://debbugs.gnu.org/cgi/bugreport.cgi?bug=32806 https://debbugs.gnu.org/cgi/bugreport.cgi?bug=34238 https://sourceware.org/bugzilla/show_bug.cgi?id=18986",
+ "severity": "Low",
+ "normalized_severity": "Low",
+ "package": {
+ "id": "0",
+ "name": "glibc",
+ "kind": "binary",
+ "source": null
+ },
+ "dist": {
+ "id": "0",
+ "did": "ubuntu",
+ "name": "Ubuntu",
+ "version": "18.04.3 LTS (Bionic Beaver)",
+ "version_code_name": "bionic",
+ "version_id": "18.04",
+ "arch": "amd64"
+ },
+ "repo": {
+ "id": "0",
+ "name": "Ubuntu 18.04.3 LTS"
+ },
+ "issued": "2019-10-12T07:20:50.52Z",
+ "fixed_in_version": "2.28-0ubuntu1"
+ }
+ },
+ "vulnerable_manifests": {
+ "property1": [
+ "string"
+ ],
+ "property2": [
+ "string"
+ ]
+ }
+}
+```
+
+
Responses
+
+|Status|Meaning|Description|Schema|
+|---|---|---|---|
+|200|[OK](https://tools.ietf.org/html/rfc7231#section-6.3.1)|none|[affected_manifests.schema](#schemaaffected_manifests.schema)|
+|400|[Bad Request](https://tools.ietf.org/html/rfc7231#section-6.5.1)|Bad Request|[error.schema](#schemaerror.schema)|
+|415|[Unsupported Media Type](https://tools.ietf.org/html/rfc7231#section-6.5.13)|Unsupported Media Type|[error.schema](#schemaerror.schema)|
+|default|Default|Internal Server Error|[error.schema](#schemaerror.schema)|
+
+### Response Headers
+
+|Status|Header|Type|Format|Description|
+|---|---|---|---|---|
+|200|Clair-Error|string||This is a trailer containing any errors encountered while writing the response.|
+
+
+
+# Schemas
+
+
token
+
+
+
+
+
+
+```json
+"string"
+
+```
+
+An opaque token previously obtained from the service.
+
+### Properties
+
+|Name|Type|Required|Restrictions|Description|
+|---|---|---|---|---|
+|*anonymous*|string|false|none|An opaque token previously obtained from the service.|
+
+
manifest.schema
+
+
+
+
+
+
+```json
+{
+ "hash": "sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
+ "layers": [
+ {
+ "hash": "sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b856",
+ "uri": "https://storage.example.com/blob/e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b856"
+ }
+ ]
+}
+
+```
+
+Manifest
+
+### Properties
+
+|Name|Type|Required|Restrictions|Description|
+|---|---|---|---|---|
+|hash|[digest.schema.json](#schemadigest.schema.json)|true|none|#/components/schemas/digest.schema|
+|layers|[[layer.schema](#schemalayer.schema)]|false|none|[Layer is a description of a container layer. It should contain enough information to fetch the layer.]|
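The `hash` fields above are algorithm-prefixed digest strings. A one-line sketch of producing one for a blob held in memory (the `oci_digest` helper name is illustrative; real layer blobs would be hashed in a streaming fashion):

```python
import hashlib


def oci_digest(blob: bytes) -> str:
    """Return an algorithm-prefixed digest string, e.g. "sha256:<hex>",
    in the form used by the "hash" fields of manifests and layers."""
    return "sha256:" + hashlib.sha256(blob).hexdigest()
```

For the empty blob this yields `sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855`, the digest used in the example above.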
+
+
+
+
+
+
+
+
+```json
+{
+ "id": "string",
+ "manifest": null,
+ "reason": "added",
+ "vulnerability": {
+ "name": "CVE-2009-5155",
+ "description": "In the GNU C Library (aka glibc or libc6) before 2.28, parse_reg_exp in posix/regcomp.c misparses alternatives, which allows attackers to cause a denial of service (assertion failure and application exit) or trigger an incorrect result by attempting a regular-expression match.",
+ "normalized_severity": "Low",
+ "fixed_in_version": "v0.0.1",
+ "links": "http://link-to-advisory",
+ "package": {
+ "id": "0",
+ "name": "glibc"
+ },
+ "dist": {
+ "id": "0",
"did": "ubuntu",
"name": "Ubuntu",
"version": "18.04.3 LTS (Bionic Beaver)",
"version_code_name": "bionic",
- "version_id": "18.04",
- "arch": "",
- "cpe": "",
- "pretty_name": "Ubuntu 18.04.3 LTS"
- }
- },
- "environments": {
- "10": [
- {
- "package_db": "var/lib/dpkg/status",
- "introduced_in": "sha256:35c102085707f703de2d9eaad8752d6fe1b8f02b5d2149f1d8357c9cc7fb7d0a",
- "distribution_id": "1"
- }
- ]
- },
- "vulnerabilities": {
- "356835": {
- "id": "356835",
- "updater": "",
- "name": "CVE-2009-5155",
- "description": "In the GNU C Library (aka glibc or libc6) before 2.28, parse_reg_exp in posix/regcomp.c misparses alternatives, which allows attackers to cause a denial of service (assertion failure and application exit) or trigger an incorrect result by attempting a regular-expression match.\"",
- "links": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2009-5155 http://people.canonical.com/~ubuntu-security/cve/2009/CVE-2009-5155.html https://sourceware.org/bugzilla/show_bug.cgi?id=11053 https://debbugs.gnu.org/cgi/bugreport.cgi?bug=22793 https://debbugs.gnu.org/cgi/bugreport.cgi?bug=32806 https://debbugs.gnu.org/cgi/bugreport.cgi?bug=34238 https://sourceware.org/bugzilla/show_bug.cgi?id=18986\"",
- "severity": "Low",
- "normalized_severity": "Low",
- "package": {
- "id": "0",
- "name": "glibc",
- "version": "",
- "kind": "",
- "source": null,
- "package_db": "",
- "repository_hint": ""
- },
- "dist": {
- "id": "0",
- "did": "ubuntu",
- "name": "Ubuntu",
- "version": "18.04.3 LTS (Bionic Beaver)",
- "version_code_name": "bionic",
- "version_id": "18.04",
- "arch": "",
- "cpe": "",
- "pretty_name": ""
- },
- "repo": {
- "id": "0",
- "name": "Ubuntu 18.04.3 LTS",
- "key": "",
- "uri": ""
- },
- "issued": "2019-10-12T07:20:50.52Z",
- "fixed_in_version": "2.28-0ubuntu1"
+ "version_id": "18.04"
+ },
+ "repo": {
+ "id": "0",
+ "name": "Ubuntu 18.04.3 LTS"
}
- },
- "package_vulnerabilities": {
- "10": [
- "356835"
- ]
}
}
```
-VulnerabilityReport
+Notification
### Properties
|Name|Type|Required|Restrictions|Description|
|---|---|---|---|---|
-|manifest_hash|[Digest](#schemadigest)|true|none|A digest string with prefixed algorithm. The format is described here: https://github.com/opencontainers/image-spec/blob/master/descriptor.md#digests Digests are used throughout the API to identify Layers and Manifests.|
-|packages|object|true|none|A map of Package objects indexed by Package.id|
-|» **additionalProperties**|[Package](#schemapackage)|false|none|A package discovered by indexing a Manifest|
-|distributions|object|true|none|A map of Distribution objects indexed by Distribution.id.|
-|» **additionalProperties**|[Distribution](#schemadistribution)|false|none|An indexed distribution discovered in a layer. See https://www.freedesktop.org/software/systemd/man/os-release.html for explanations and example of fields.|
-|environments|object|true|none|A mapping of Environment lists indexed by Package.id|
-|» **additionalProperties**|[[Environment](#schemaenvironment)]|false|none|[The environment a particular package was discovered in.]|
-|vulnerabilities|object|true|none|A map of Vulnerabilities indexed by Vulnerability.id|
-|» **additionalProperties**|[Vulnerability](#schemavulnerability)|false|none|A unique vulnerability indexed by Clair|
-|package_vulnerabilities|object|true|none|A mapping of Vulnerability.id lists indexed by Package.id.|
-|» **additionalProperties**|[string]|false|none|none|
+|id|string|true|none|Unique identifier for this notification.|
+|manifest|[digest.schema.json](#schemadigest.schema.json)|true|none|#/components/schemas/digest.schema|
+|reason|any|true|none|The reason for the notification.|
+|vulnerability|[vulnerability_summary.schema](#schemavulnerability_summary.schema)|true|none|A summary of a vulnerability.|
+
+#### Enumerated Values
+
+|Property|Value|
+|---|---|
+|reason|added|
+|reason|removed|
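Since `reason` is constrained to the two values above, a client consuming notifications can split a page on it. A trivial sketch (the `partition_by_reason` name is illustrative):

```python
def partition_by_reason(notifications):
    """Split notification objects into ("added", "removed") lists,
    keyed on the enumerated "reason" member."""
    added = [n for n in notifications if n.get("reason") == "added"]
    removed = [n for n in notifications if n.get("reason") == "removed"]
    return added, removed
```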
-
Vulnerability
+
error.schema
-
-
-
-
+
+
+
+
```json
{
- "id": "356835",
- "updater": "",
- "name": "CVE-2009-5155",
- "description": "In the GNU C Library (aka glibc or libc6) before 2.28, parse_reg_exp in posix/regcomp.c misparses alternatives, which allows attackers to cause a denial of service (assertion failure and application exit) or trigger an incorrect result by attempting a regular-expression match.\"",
- "links": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2009-5155 http://people.canonical.com/~ubuntu-security/cve/2009/CVE-2009-5155.html https://sourceware.org/bugzilla/show_bug.cgi?id=11053 https://debbugs.gnu.org/cgi/bugreport.cgi?bug=22793 https://debbugs.gnu.org/cgi/bugreport.cgi?bug=32806 https://debbugs.gnu.org/cgi/bugreport.cgi?bug=34238 https://sourceware.org/bugzilla/show_bug.cgi?id=18986\"",
- "severity": "Low",
- "normalized_severity": "Low",
- "package": {
- "id": "0",
- "name": "glibc",
- "version": "",
- "kind": "",
- "source": null,
- "package_db": "",
- "repository_hint": ""
- },
- "dist": {
- "id": "0",
- "did": "ubuntu",
- "name": "Ubuntu",
- "version": "18.04.3 LTS (Bionic Beaver)",
- "version_code_name": "bionic",
- "version_id": "18.04",
- "arch": "",
- "cpe": "",
- "pretty_name": ""
- },
- "repo": {
- "id": "0",
- "name": "Ubuntu 18.04.3 LTS",
- "key": "",
- "uri": ""
- },
- "issued": "2019-10-12T07:20:50.52Z",
- "fixed_in_version": "2.28-0ubuntu1",
- "x-widdershins-oldRef": "#/components/examples/Vulnerability/value"
+ "code": "string",
+ "message": "string"
}
```
-Vulnerability
+Error
### Properties
|Name|Type|Required|Restrictions|Description|
|---|---|---|---|---|
-|id|string|true|none|A unique ID representing this vulnerability.|
-|updater|string|true|none|A unique ID representing this vulnerability.|
-|name|string|true|none|Name of this specific vulnerability.|
-|description|string|true|none|A description of this specific vulnerability.|
-|links|string|true|none|A space separate list of links to any external information.|
-|severity|string|true|none|A severity keyword taken verbatim from the vulnerability source.|
-|normalized_severity|string|true|none|A well defined set of severity strings guaranteed to be present.|
-|package|[Package](#schemapackage)|false|none|A package discovered by indexing a Manifest|
-|distribution|[Distribution](#schemadistribution)|false|none|An indexed distribution discovered in a layer. See https://www.freedesktop.org/software/systemd/man/os-release.html for explanations and example of fields.|
-|repository|[Repository](#schemarepository)|false|none|A package repository|
-|issued|string|false|none|The timestamp in which the vulnerability was issued|
-|range|string|false|none|The range of package versions affected by this vulnerability.|
-|fixed_in_version|string|true|none|A unique ID representing this vulnerability.|
+|code|string|false|none|a code for this particular error|
+|message|string|true|none|a message with further detail|
-#### Enumerated Values
+
-
-
-
-
+
+
+
+
```json
{
- "id": "1",
- "did": "ubuntu",
- "name": "Ubuntu",
- "version": "18.04.3 LTS (Bionic Beaver)",
- "version_code_name": "bionic",
- "version_id": "18.04",
- "arch": "",
- "cpe": "",
- "pretty_name": "Ubuntu 18.04.3 LTS",
- "x-widdershins-oldRef": "#/components/examples/Distribution/value"
+ "hash": "string",
+ "uri": "string",
+ "headers": {},
+ "media_type": "string"
}
```
-Distribution
+Layer
### Properties
|Name|Type|Required|Restrictions|Description|
|---|---|---|---|---|
-|id|string|true|none|A unique ID representing this distribution|
-|did|string|true|none|none|
-|name|string|true|none|none|
-|version|string|true|none|none|
-|version_code_name|string|true|none|none|
-|version_id|string|true|none|none|
-|arch|string|true|none|none|
-|cpe|string|true|none|none|
-|pretty_name|string|true|none|none|
+|hash|[digest.schema](#schemadigest.schema)|true|none|Digest of the layer blob.|
+|uri|string|true|none|A URI indicating where the layer blob can be downloaded from.|
+|headers|object|false|none|Any additional HTTP-style headers needed for requesting layers.|
+|» ^[a-zA-Z0-9\-_]+$|[string]|false|none|none|
+|media_type|string|false|none|The OCI Layer media type for this layer.|
-
Package
+
package.schema
-
-
-
-
+
+
+
+
```json
{
@@ -1716,8 +2162,7 @@ Distribution
"version": "1.6.11",
"kind": "source",
"source": null
- },
- "x-widdershins-oldRef": "#/components/examples/Package/value"
+ }
}
```
@@ -1728,30 +2173,66 @@ Package
|Name|Type|Required|Restrictions|Description|
|---|---|---|---|---|
-|id|string|true|none|A unique ID representing this package|
-|name|string|true|none|Name of the Package|
-|version|string|true|none|Version of the Package|
-|kind|string|false|none|Kind of package. Source | Binary|
-|source|[Package](#schemapackage)|false|none|A package discovered by indexing a Manifest|
-|normalized_version|[Version](#schemaversion)|false|none|Version is a normalized claircore version, composed of a "kind" and an array of integers such that two versions of the same kind have the correct ordering when the integers are compared pair-wise.|
-|arch|string|false|none|The package's target system architecture|
-|module|string|false|none|A module further defining a namespace for a package|
-|cpe|string|false|none|A CPE identifying the package|
-
-
+
+
+
+
+
+
+```json
+{
+ "id": "1",
+ "did": "ubuntu",
+ "name": "Ubuntu",
+ "version": "18.04.3 LTS (Bionic Beaver)",
+ "version_code_name": "bionic",
+ "version_id": "18.04",
+ "pretty_name": "Ubuntu 18.04.3 LTS"
+}
+
+```
+
+Distribution
+
+### Properties
+
+|Name|Type|Required|Restrictions|Description|
+|---|---|---|---|---|
+|id|string|true|none|Unique ID for this Distribution. May be unique to the response document, not the whole system.|
+|did|string|false|none|A lower-case string (no spaces or other characters outside of 0–9, a–z, ".", "_", and "-") identifying the operating system, excluding any version information and suitable for processing by scripts or usage in generated filenames.|
+|name|string|false|none|A string identifying the operating system.|
+|version|string|false|none|A string identifying the operating system version, excluding any OS name information, possibly including a release code name, and suitable for presentation to the user.|
+|version_code_name|string|false|none|A lower-case string (no spaces or other characters outside of 0–9, a–z, ".", "_", and "-") identifying the operating system release code name, excluding any OS name information or release version, and suitable for processing by scripts or usage in generated filenames.|
+|version_id|string|false|none|A lower-case string (mostly numeric, no spaces or other characters outside of 0–9, a–z, ".", "_", and "-") identifying the operating system version, excluding any OS name information or release code name.|
+|arch|string|false|none|A string identifying the OS architecture.|
+|cpe|[cpe.schema.json](#schemacpe.schema.json)|false|none|#/components/schemas/cpe.schema|
+|pretty_name|string|false|none|A pretty operating system name in a format suitable for presentation to the user.|
+
+
-
-
-
-
+
+
+
+
```json
-"pep440:0.0.0.0.0.0.0.0.0"
+{
+ "value": {
+ "package_db": "var/lib/dpkg/status",
+ "introduced_in": "sha256:35c102085707f703de2d9eaad8752d6fe1b8f02b5d2149f1d8357c9cc7fb7d0a",
+ "distribution_id": "1"
+ }
+}
```
-Version
+Environment
### Properties
|Name|Type|Required|Restrictions|Description|
|---|---|---|---|---|
-|Version|string|false|none|Version is a normalized claircore version, composed of a "kind" and an array of integers such that two versions of the same kind have the correct ordering when the integers are compared pair-wise.|
+|package_db|string|false|none|The database the associated Package was discovered in.|
+|distribution_id|string|false|none|The ID of the Distribution of the associated Package.|
+|introduced_in|[digest.schema.json](#schemadigest.schema.json)|false|none|#/components/schemas/digest.schema|
+|repository_ids|[string]|false|none|The IDs of the Repositories of the associated Package.|
-
Manifest
+
cpe.schema
-
-
-
-
+
+
+
+
```json
-{
- "hash": "sha256:fc84b5febd328eccaa913807716887b3eb5ed08bc22cc6933a9ebf82766725e3",
- "layers": [
- {
- "hash": "sha256:fc84b5febd328eccaa913807716887b3eb5ed08bc22cc6933a9ebf82766725e3",
- "uri": "https://storage.example.com/blob/2f077db56abccc19f16f140f629ae98e904b4b7d563957a7fc319bd11b82ba36",
- "headers": {
- "property1": [
- "string"
- ],
- "property2": [
- "string"
- ]
- }
- }
- ]
-}
+"cpe:/a:microsoft:internet_explorer:8.0.6001:beta"
```
-Manifest
+Common Platform Enumeration Name
### Properties
|Name|Type|Required|Restrictions|Description|
|---|---|---|---|---|
-|hash|[Digest](#schemadigest)|true|none|A digest string with prefixed algorithm. The format is described here: https://github.com/opencontainers/image-spec/blob/master/descriptor.md#digests Digests are used throughout the API to identify Layers and Manifests.|
-|layers|[[Layer](#schemalayer)]|true|none|[A Layer within a Manifest and where Clair may retrieve it.]|
+|Common Platform Enumeration Name|string|false|none|This is a CPE Name in either v2.2 "URI" form or v2.3 "Formatted String" form.|
+
+oneOf
+
+|Name|Type|Required|Restrictions|Description|
+|---|---|---|---|---|
+|*anonymous*|string|false|none|none|
+
+xor
+
+|Name|Type|Required|Restrictions|Description|
+|---|---|---|---|---|
+|*anonymous*|string|false|none|none|
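The two accepted forms are distinguishable by prefix: v2.2 "URI" names begin with `cpe:/`, v2.3 "Formatted String" names with `cpe:2.3:`. A minimal classifier sketch, a prefix check only and not a full CPE Naming specification parser (`cpe_form` is an illustrative name):

```python
def cpe_form(name: str) -> str:
    """Classify a CPE name as v2.2 "uri" or v2.3 "formatted-string"."""
    if name.startswith("cpe:2.3:"):
        return "formatted-string"
    if name.startswith("cpe:/"):
        return "uri"
    raise ValueError(f"not a CPE name: {name!r}")
```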
-
Layer
+
vulnerability.schema
-
-
-
-
+
+
+
+
```json
{
- "hash": "sha256:fc84b5febd328eccaa913807716887b3eb5ed08bc22cc6933a9ebf82766725e3",
- "uri": "https://storage.example.com/blob/2f077db56abccc19f16f140f629ae98e904b4b7d563957a7fc319bd11b82ba36",
- "headers": {
- "property1": [
- "string"
- ],
- "property2": [
- "string"
- ]
- }
+ "id": "356835",
+ "updater": "ubuntu",
+ "name": "CVE-2009-5155",
+ "description": "In the GNU C Library (aka glibc or libc6) before 2.28, parse_reg_exp in posix/regcomp.c misparses alternatives, which allows attackers to cause a denial of service (assertion failure and application exit) or trigger an incorrect result by attempting a regular-expression match.",
+ "links": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2009-5155 http://people.canonical.com/~ubuntu-security/cve/2009/CVE-2009-5155.html https://sourceware.org/bugzilla/show_bug.cgi?id=11053 https://debbugs.gnu.org/cgi/bugreport.cgi?bug=22793 https://debbugs.gnu.org/cgi/bugreport.cgi?bug=32806 https://debbugs.gnu.org/cgi/bugreport.cgi?bug=34238 https://sourceware.org/bugzilla/show_bug.cgi?id=18986",
+ "severity": "Low",
+ "normalized_severity": "Low",
+ "package": {
+ "id": "0",
+ "name": "glibc",
+ "kind": "binary",
+ "source": null
+ },
+ "dist": {
+ "id": "0",
+ "did": "ubuntu",
+ "name": "Ubuntu",
+ "version": "18.04.3 LTS (Bionic Beaver)",
+ "version_code_name": "bionic",
+ "version_id": "18.04",
+ "arch": "amd64"
+ },
+ "repo": {
+ "id": "0",
+ "name": "Ubuntu 18.04.3 LTS"
+ },
+ "issued": "2019-10-12T07:20:50.52Z",
+ "fixed_in_version": "2.28-0ubuntu1"
}
```
-Layer
+Vulnerability
### Properties
|Name|Type|Required|Restrictions|Description|
|---|---|---|---|---|
-|hash|[Digest](#schemadigest)|true|none|A digest string with prefixed algorithm. The format is described here: https://github.com/opencontainers/image-spec/blob/master/descriptor.md#digests Digests are used throughout the API to identify Layers and Manifests.|
-|uri|string|true|none|A URI describing where the layer may be found. Implementations MUST support http(s) schemes and MAY support additional schemes.|
-|headers|object|true|none|map of arrays of header values keyed by header value. e.g. map[string][]string|
-|» **additionalProperties**|[string]|false|none|none|
+|id|string|true|none|none|
+|updater|string|true|none|none|
+|name|string|true|none|none|
+|description|string|false|none|none|
+|issued|string(date-time)|false|none|none|
+|links|string|false|none|none|
+|severity|string|false|none|none|
+|normalized_severity|[normalized_severity.schema.json](#schemanormalized_severity.schema.json)|true|none|#/components/schemas/normalized_severity.schema|
+|package|[package.schema.json](#schemapackage.schema.json)|false|none|#/components/schemas/package.schema|
+|distribution|[distribution.schema.json](#schemadistribution.schema.json)|false|none|#/components/schemas/distribution.schema|
+|repository|[repository.schema.json](#schemarepository.schema.json)|false|none|#/components/schemas/repository.schema|
+|fixed_in_version|string|false|none|none|
+|range|[range.schema.json](#schemarange.schema.json)|false|none|#/components/schemas/range.schema|
+|arch_op|string|false|none|Flag indicating how the referenced package's "arch" member should be interpreted.|
+
+anyOf
+
+|Name|Type|Required|Restrictions|Description|
+|---|---|---|---|---|
+|*anonymous*|object|false|none|none|
+
+or
+
+|Name|Type|Required|Restrictions|Description|
+|---|---|---|---|---|
+|*anonymous*|object|false|none|none|
+
+or
+
+|Name|Type|Required|Restrictions|Description|
+|---|---|---|---|---|
+|*anonymous*|object|false|none|none|
+
+#### Enumerated Values
+
+|Property|Value|
+|---|---|
+|arch_op|equals|
+|arch_op|not equals|
+|arch_op|pattern match|
-
-
-
-
-
+
+
+
+
```json
{
- "state": "aae368a064d7c5a433d0bf2c4f5554cc"
+ "name": "CVE-2009-5155",
+ "description": "In the GNU C Library (aka glibc or libc6) before 2.28, parse_reg_exp in posix/regcomp.c misparses alternatives, which allows attackers to cause a denial of service (assertion failure and application exit) or trigger an incorrect result by attempting a regular-expression match.",
+ "normalized_severity": "Low",
+ "fixed_in_version": "v0.0.1",
+ "links": "http://link-to-advisory",
+ "package": {
+ "id": "0",
+ "name": "glibc"
+ },
+ "dist": {
+ "id": "0",
+ "did": "ubuntu",
+ "name": "Ubuntu",
+ "version": "18.04.3 LTS (Bionic Beaver)",
+ "version_code_name": "bionic",
+ "version_id": "18.04"
+ },
+ "repo": {
+ "id": "0",
+ "name": "Ubuntu 18.04.3 LTS"
+ }
}
```
-State
+Vulnerability Summary
### Properties
|Name|Type|Required|Restrictions|Description|
|---|---|---|---|---|
-|state|string|true|none|an opaque identifier|
+|name|string|true|none|The vulnerability name.|
+|fixed_in_version|string|true|none|none|
+|links|string|false|none|none|
+|description|string|false|none|none|
+|normalized_severity|[normalized_severity.schema.json](#schemanormalized_severity.schema.json)|true|none|#/components/schemas/normalized_severity.schema|
+|package|[package.schema](#schemapackage.schema)|false|none|none|
+|distribution|[distribution.schema](#schemadistribution.schema)|false|none|Distribution is the accompanying system context of a Package.|
+|repository|[repository.schema](#schemarepository.schema)|false|none|none|
-
Digest
-
-
-
-
-
+anyOf
-```json
-"sha256:fc84b5febd328eccaa913807716887b3eb5ed08bc22cc6933a9ebf82766725e3"
+|Name|Type|Required|Restrictions|Description|
+|---|---|---|---|---|
+|*anonymous*|object|false|none|none|
-```
+or
-Digest
+|Name|Type|Required|Restrictions|Description|
+|---|---|---|---|---|
+|*anonymous*|object|false|none|none|
-### Properties
+or
|Name|Type|Required|Restrictions|Description|
|---|---|---|---|---|
-|Digest|string|false|none|A digest string with prefixed algorithm. The format is described here: https://github.com/opencontainers/image-spec/blob/master/descriptor.md#digests Digests are used throughout the API to identify Layers and Manifests.|
+|*anonymous*|object|false|none|none|
diff --git a/cmd/clairctl/client.go b/cmd/clairctl/client.go
index 2fed786230..0e327bcab3 100644
--- a/cmd/clairctl/client.go
+++ b/cmd/clairctl/client.go
@@ -211,7 +211,6 @@ func (c *Client) IndexReport(ctx context.Context, id claircore.Digest, m *clairc
}
var report claircore.IndexReport
dec := codec.GetDecoder(rd)
- defer codec.PutDecoder(dec)
if err := dec.Decode(&report); err != nil {
zlog.Debug(ctx).
Err(err).
@@ -275,7 +274,6 @@ func (c *Client) VulnerabilityReport(ctx context.Context, id claircore.Digest) (
}
var report claircore.VulnerabilityReport
dec := codec.GetDecoder(res.Body)
- defer codec.PutDecoder(dec)
if err := dec.Decode(&report); err != nil {
zlog.Debug(ctx).
Err(err).
diff --git a/cmd/clairctl/jsonformatter.go b/cmd/clairctl/jsonformatter.go
index ea734d7632..cb1425d84d 100644
--- a/cmd/clairctl/jsonformatter.go
+++ b/cmd/clairctl/jsonformatter.go
@@ -8,17 +8,16 @@ import (
var _ Formatter = (*jsonFormatter)(nil)
-// JsonFormatter is a very simple formatter; it just calls
-// (*json.Encoder).Encode.
+// jsonFormatter encodes each Result's report as JSON.
type jsonFormatter struct {
- enc *codec.Encoder
+ enc codec.Encoder
c io.Closer
}
func (f *jsonFormatter) Format(r *Result) error {
return f.enc.Encode(r.Report)
}
+
func (f *jsonFormatter) Close() error {
- codec.PutEncoder(f.enc)
return f.c.Close()
}
diff --git a/cmd/clairctl/manifest.go b/cmd/clairctl/manifest.go
index cafc3380f4..3d0994f2ba 100644
--- a/cmd/clairctl/manifest.go
+++ b/cmd/clairctl/manifest.go
@@ -8,6 +8,7 @@ import (
"os"
"path"
"strings"
+ "sync"
"github.com/google/go-containerregistry/pkg/name"
"github.com/google/go-containerregistry/pkg/v1/remote"
@@ -36,21 +37,15 @@ func manifestAction(c *cli.Context) error {
}
result := make(chan *claircore.Manifest)
- done := make(chan struct{})
eg, ctx := errgroup.WithContext(c.Context)
- go func() {
- defer close(done)
- enc := codec.GetEncoder(os.Stdout)
- defer codec.PutEncoder(enc)
- for m := range result {
- enc.MustEncode(m)
- }
- }()
+ var workers sync.WaitGroup
+ workers.Add(args.Len())
- for i := 0; i < args.Len(); i++ {
+ for i := range args.Len() {
name := args.Get(i)
zlog.Debug(ctx).Str("name", name).Msg("fetching")
eg.Go(func() error {
+ defer workers.Done()
m, err := Inspect(ctx, name)
if err != nil {
zlog.Debug(ctx).
@@ -66,11 +61,24 @@ func manifestAction(c *cli.Context) error {
return nil
})
}
+ eg.Go(func() error {
+ workers.Wait()
+ close(result)
+ return nil
+ })
+ eg.Go(func() error {
+ enc := codec.GetEncoder(os.Stdout)
+ for m := range result {
+ if err := enc.Encode(m); err != nil {
+ return err
+ }
+ }
+ return nil
+ })
+
if err := eg.Wait(); err != nil {
return err
}
- close(result)
- <-done
return nil
}
diff --git a/go.mod b/go.mod
index f6433863ac..02c75e51e3 100644
--- a/go.mod
+++ b/go.mod
@@ -14,6 +14,7 @@ require (
github.com/google/uuid v1.6.0
github.com/grafana/pyroscope-go/godeltaprof v0.1.9
github.com/jackc/pgx/v5 v5.7.6
+ github.com/kaptinlin/jsonschema v0.4.6
github.com/klauspost/compress v1.18.0
github.com/prometheus/client_golang v1.23.2
github.com/quay/clair/config v1.4.3
@@ -24,7 +25,6 @@ require (
github.com/rogpeppe/go-internal v1.14.1
github.com/rs/zerolog v1.34.0
github.com/tomnomnom/linkheader v0.0.0-20180905144013-02ca5825eb80
- github.com/ugorji/go/codec v1.2.14
github.com/urfave/cli/v2 v2.27.7
go.opentelemetry.io/contrib/instrumentation/net/http/httptrace/otelhttptrace v0.63.0
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.63.0
@@ -44,6 +44,7 @@ require (
golang.org/x/net v0.44.0
golang.org/x/sync v0.17.0
golang.org/x/time v0.13.0
+ golang.org/x/tools v0.36.0
google.golang.org/grpc v1.75.1
gopkg.in/yaml.v3 v3.0.1
)
@@ -62,6 +63,10 @@ require (
github.com/felixge/httpsnoop v1.0.4 // indirect
github.com/go-logr/logr v1.4.3 // indirect
github.com/go-logr/stdr v1.2.2 // indirect
+ github.com/goccy/go-json v0.10.5 // indirect
+ github.com/goccy/go-yaml v1.18.0 // indirect
+ github.com/gotnospirit/makeplural v0.0.0-20180622080156-a5f48d94d976 // indirect
+ github.com/gotnospirit/messageformat v0.0.0-20221001023931-dfe49f1eb092 // indirect
github.com/grafana/regexp v0.0.0-20240518133315-a468a5bfb3bc // indirect
github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.2 // indirect
github.com/jackc/chunkreader/v2 v2.0.1 // indirect
@@ -74,6 +79,7 @@ require (
github.com/jackc/pgx/v4 v4.18.3 // indirect
github.com/jackc/puddle v1.3.0 // indirect
github.com/jackc/puddle/v2 v2.2.2 // indirect
+ github.com/kaptinlin/go-i18n v0.1.4 // indirect
github.com/knqyf263/go-apk-version v0.0.0-20200609155635-041fdbb8563f // indirect
github.com/knqyf263/go-deb-version v0.0.0-20190517075300-09fca494f03d // indirect
github.com/knqyf263/go-rpm-version v0.0.0-20170716094938-74609b86c936 // indirect
@@ -110,7 +116,6 @@ require (
golang.org/x/mod v0.27.0 // indirect
golang.org/x/sys v0.36.0 // indirect
golang.org/x/text v0.29.0 // indirect
- golang.org/x/tools v0.36.0 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20250825161204-c5933d9347a5 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20250825161204-c5933d9347a5 // indirect
google.golang.org/protobuf v1.36.8 // indirect
diff --git a/go.sum b/go.sum
index 85e1bd575a..493b3bf493 100644
--- a/go.sum
+++ b/go.sum
@@ -50,6 +50,10 @@ github.com/go-sql-driver/mysql v1.4.1/go.mod h1:zAC/RDZ24gD3HViQzih4MyKcchzm+sOG
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
github.com/go-stomp/stomp/v3 v3.1.3 h1:5/wi+bI38O1Qkf2cc7Gjlw7N5beHMWB/BxpX+4p/MGI=
github.com/go-stomp/stomp/v3 v3.1.3/go.mod h1:ztzZej6T2W4Y6FlD+Tb5n7HQP3/O5UNQiuC169pIp10=
+github.com/goccy/go-json v0.10.5 h1:Fq85nIqj+gXn/S5ahsiTlK3TmC85qgirsdTP/+DeaC4=
+github.com/goccy/go-json v0.10.5/go.mod h1:oq7eo15ShAhp70Anwd5lgX2pLfOS3QCiwU/PULtXL6M=
+github.com/goccy/go-yaml v1.18.0 h1:8W7wMFS12Pcas7KU+VVkaiCng+kG8QiFeFwzFb+rwuw=
+github.com/goccy/go-yaml v1.18.0/go.mod h1:XBurs7gK8ATbW4ZPGKgcbrY1Br56PdM69F7LkFRi1kA=
github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/gofrs/uuid v4.0.0+incompatible h1:1SD/1F5pU8p29ybwgQSwpQk+mwdRrXCYuPhW6m+TnJw=
github.com/gofrs/uuid v4.0.0+incompatible/go.mod h1:b2aQJv3Z4Fp6yNu3cdSllBxTCLRxnplIgP/c0N/04lM=
@@ -71,6 +75,10 @@ github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm4
github.com/google/uuid v1.3.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
+github.com/gotnospirit/makeplural v0.0.0-20180622080156-a5f48d94d976 h1:b70jEaX2iaJSPZULSUxKtm73LBfsCrMsIlYCUgNGSIs=
+github.com/gotnospirit/makeplural v0.0.0-20180622080156-a5f48d94d976/go.mod h1:ZGQeOwybjD8lkCjIyJfqR5LD2wMVHJ31d6GdPxoTsWY=
+github.com/gotnospirit/messageformat v0.0.0-20221001023931-dfe49f1eb092 h1:c7gcNWTSr1gtLp6PyYi3wzvFCEcHJ4YRobDgqmIgf7Q=
+github.com/gotnospirit/messageformat v0.0.0-20221001023931-dfe49f1eb092/go.mod h1:ZZAN4fkkful3l1lpJwF8JbW41ZiG9TwJ2ZlqzQovBNU=
github.com/grafana/pyroscope-go/godeltaprof v0.1.9 h1:c1Us8i6eSmkW+Ez05d3co8kasnuOY813tbMN8i/a3Og=
github.com/grafana/pyroscope-go/godeltaprof v0.1.9/go.mod h1:2+l7K7twW49Ct4wFluZD3tZ6e0SjanjcUUBPVD/UuGU=
github.com/grafana/regexp v0.0.0-20240518133315-a468a5bfb3bc h1:GN2Lv3MGO7AS6PrRoT6yV5+wkrOpcszoIsO4+4ds248=
@@ -130,6 +138,10 @@ github.com/jackc/puddle v1.3.0 h1:eHK/5clGOatcjX3oWGBO/MpxpbHzSwud5EWTSCI+MX0=
github.com/jackc/puddle v1.3.0/go.mod h1:m4B5Dj62Y0fbyuIc15OsIqK0+JU8nkqQjsgx7dvjSWk=
github.com/jackc/puddle/v2 v2.2.2 h1:PR8nw+E/1w0GLuRFSmiioY6UooMp6KJv0/61nB7icHo=
github.com/jackc/puddle/v2 v2.2.2/go.mod h1:vriiEXHvEE654aYKXXjOvZM39qJ0q+azkZFrfEOc3H4=
+github.com/kaptinlin/go-i18n v0.1.4 h1:wCiwAn1LOcvymvWIVAM4m5dUAMiHunTdEubLDk4hTGs=
+github.com/kaptinlin/go-i18n v0.1.4/go.mod h1:g1fn1GvTgT4CiLE8/fFE1hboHWJ6erivrDpiDtCcFKg=
+github.com/kaptinlin/jsonschema v0.4.6 h1:vOSFg5tjmfkOdKg+D6Oo4fVOM/pActWu/ntkPsI1T64=
+github.com/kaptinlin/jsonschema v0.4.6/go.mod h1:1DUd7r5SdyB2ZnMtyB7uLv64dE3zTFTiYytDCd+AEL0=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/klauspost/compress v1.18.0 h1:c/Cqfb0r+Yi+JtIEq73FWXVkRonBlf0CRNYc8Zttxdo=
github.com/klauspost/compress v1.18.0/go.mod h1:2Pp+KzxcywXVXMr50+X0Q/Lsb43OQHYWRCY2AiWywWQ=
@@ -183,6 +195,8 @@ github.com/opencontainers/image-spec v1.1.1 h1:y0fUlFfIZhPF1W537XOLg0/fcx6zcHCJw
github.com/opencontainers/image-spec v1.1.1/go.mod h1:qpqAh3Dmcf36wStyyWU+kCeDgrGnAve2nCC8+7h8Q0M=
github.com/package-url/packageurl-go v0.1.3 h1:4juMED3hHiz0set3Vq3KeQ75KD1avthoXLtmE3I0PLs=
github.com/package-url/packageurl-go v0.1.3/go.mod h1:nKAWB8E6uk1MHqiS/lQb9pYBGH2+mdJ2PJc2s50dQY0=
+github.com/pelletier/go-toml/v2 v2.2.4 h1:mye9XuhQ6gvn5h28+VilKrrPoQVanw5PMw/TB0t5Ec4=
+github.com/pelletier/go-toml/v2 v2.2.4/go.mod h1:2gIqNv+qfxSVS7cM2xJQKtLSTLUE9V8t9Stt+h56mCY=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
@@ -250,8 +264,6 @@ github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu
github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
github.com/tomnomnom/linkheader v0.0.0-20180905144013-02ca5825eb80 h1:nrZ3ySNYwJbSpD6ce9duiP+QkD3JuLCcWkdaehUS/3Y=
github.com/tomnomnom/linkheader v0.0.0-20180905144013-02ca5825eb80/go.mod h1:iFyPdL66DjUD96XmzVL3ZntbzcflLnznH0fr99w5VqE=
-github.com/ugorji/go/codec v1.2.14 h1:yOQvXCBc3Ij46LRkRoh4Yd5qK6LVOgi0bYOXfb7ifjw=
-github.com/ugorji/go/codec v1.2.14/go.mod h1:UNopzCgEMSXjBc6AOMqYvWC1ktqTAfzJZUZgYf6w6lg=
github.com/ulikunitz/xz v0.5.15 h1:9DNdB5s+SgV3bQ2ApL10xRc35ck0DuIX/isZvIk+ubY=
github.com/ulikunitz/xz v0.5.15/go.mod h1:nbz6k7qbPmH4IRqmfOplQw/tblSgqTqBwxkY0oWt/14=
github.com/urfave/cli/v2 v2.27.7 h1:bH59vdhbjLv3LAvIu6gd0usJHgoTTPhCFib8qqOwXYU=
diff --git a/httptransport/api/lib/oapi.jq b/httptransport/api/lib/oapi.jq
new file mode 100644
index 0000000000..ba9b10f108
--- /dev/null
+++ b/httptransport/api/lib/oapi.jq
@@ -0,0 +1,64 @@
+# vim: set expandtab ts=2 sw=2:
+module {
+ name: "openapi",
+};
+
+# Some helper functions:
+
+def ref($ref): # Construct a JSON Schema reference object.
+ { "$ref": "\($ref)" }
+;
+
+def lref($kind; $id): # Construct a ref object to an OpenAPI component.
+ ref("#/components/\($kind)/\($id)")
+;
+
+def param_ref($id): # Construct a ref object to an OpenAPI parameter component.
+ lref("parameters"; $id)
+;
+
+def response_ref($id): # Construct a ref object to an OpenAPI response component.
+ lref("responses"; $id)
+;
+
+def header_ref($id): # Construct a ref object to an OpenAPI header component.
+ lref("headers"; $id)
+;
+
+def schema_ref($id): # Construct a ref object to an OpenAPI schema component.
+ lref("schemas"; $id)
+;
+
+def mediatype($t; $v): # Return the local vendor mediatype for $t, version $v.
+ "application/vnd.clair.\($t).\($v)+json"
+;
+
+def mediatype($t): # As mediatype/2, but with the default of "v1".
+ mediatype($t; "v1")
+;
+
+def contenttype($t; $v): # Construct an OpenAPI content type object for $t, version $v.
+ { (mediatype($t; $v)): { "schema": schema_ref($t) } }
+;
+
+def contenttype($t): # As contenttype/2, but with the default version.
+ { (mediatype($t)): { "schema": schema_ref($t) } }
+;
+
+def cli_hints: # Add some hints that CLI tools can pick up on to ignore our internal paths.
+ (.paths[][] | select(objects and (.tags|contains(["internal"]))) ) |= . + {"x-cli-ignore": true}
+;
+
+def sort_paths: # Sort the paths object.
+ .paths |= (. | to_entries | sort_by(.key) | from_entries)
+;
+
+def content_defaults: # All responses that don't have a "default" type, pick the first one.
+ "application/json" as $t |
+ [["example"], ["examples"]] as $rm |
+ ( .paths[][] | select(objects) | .responses[].content | select(objects and (has($t)|not)) ) |= (. + { $t: (to_entries[0].value | delpaths($rm)) })
+ |
+ ( .paths[][] | select(objects) | .requestBody.content | select(objects and (has($t)|not)) ) |= (. + { $t: (to_entries[0].value | delpaths($rm)) })
+ |
+ ( .components.responses[].content | select(has($t)|not) ) |= (. + { $t: (to_entries[0].value | delpaths($rm)) })
+;
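
The `mediatype` helper above defines Clair's vendor media-type convention. The same string can be sketched in Go (a hypothetical helper for illustration, not part of the codebase):

```go
package main

import "fmt"

// mediaType mirrors the jq helper: "application/vnd.clair.<type>.<version>+json".
func mediaType(t, v string) string {
	return fmt.Sprintf("application/vnd.clair.%s.%s+json", t, v)
}

func main() {
	fmt.Println(mediaType("manifest", "v1"))
	// prints application/vnd.clair.manifest.v1+json
}
```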
diff --git a/httptransport/api/openapi.zsh b/httptransport/api/openapi.zsh
new file mode 100755
index 0000000000..58cebc7864
--- /dev/null
+++ b/httptransport/api/openapi.zsh
@@ -0,0 +1,73 @@
+#!/usr/bin/zsh
+set -euo pipefail
+
+# This script builds the OpenAPI documents, rendering them into YAML and JSON.
+#
+# The main inputs for this are the "openapi.jq" files in the "v?" directories.
+# These are jq(1) scripts that are executed with no input in the relevant
+# directory; they're expected to output a valid OpenAPI document. All the JSON
+# Schema documents in the matching "httptransport/types/v?" directory are
+# copied into the working directory. Matching files in the "examples"
+# subdirectory will be slipstreamed to the expected field.
+#
+# The result is then "bundled" into one document, then linted, rendered out to
+# both YAML and JSON, and strings to be used as HTTP Etags are written out.
+
+for cmd in sha256sum git jq yq npx; do
+ if ! command -v "$cmd" &>/dev/null; then
+ print missing needed command: "$cmd" >&2
+ exit 1
+ fi
+done
+
+function jq() {
+ command jq --exit-status --compact-output "$@"
+}
+
+function yq() {
+ command yq --exit-status "$@"
+}
+
+function schemalint() {
+ npx --yes @sourcemeta/jsonschema metaschema --resolve "$1" "$1"
+ npx --yes @sourcemeta/jsonschema lint --resolve "$1" "$1"
+}
+
+function render() {
+ function TRAPEXIT() {
+ rm openapi.*.{json,yaml}(N) *.schema.json(N)
+ popd -q
+ }
+ pushd -q "${1?missing directory argument}"
+ local v=${1:A:t}
+ local t=${1:A:h:h}/types/v1
+
+ schemalint "$t"
+ for f in ${t}/*.schema.json; do
+ local ex=examples/${${f:t}%.schema.json}.json
+ if [[ -f "$ex" ]]; then
+ jq --slurpfile ex "${ex}" 'setpath(["examples"]; $ex)' "$f" > "${f:t}"
+ else
+ cp "$f" .
+ fi
+ done
+
+ jq --null-input \
+ 'reduce (inputs|(.["$id"]|split("/")|.[-1]|rtrimstr(".schema.json")) as $k|{components:{schemas:{$k:.}}}) as $it({};. * $it)'\
+ *.schema.json >openapi.types.json
+
+
+ jq --null-input -L "${1:A:h}/lib" --from-file openapi.jq >openapi.frag.json
+ jq --null-input 'reduce inputs as $it({};. * $it)' openapi.{frag,types}.json >openapi.json
+
+  yq -pj eval . openapi.json >openapi.yaml
+ # Need some validator that actually works >:(
+
+ sha256sum openapi.{json,yaml} |
+ awk '{printf "\"%s\"", $1 >$2".etag" }'
+}
+
+local root=$(git rev-parse --show-toplevel)
+for d in ${root}/httptransport/api/v*/; do
+ render "$d"
+done
diff --git a/httptransport/api/v1/examples/cpe.json b/httptransport/api/v1/examples/cpe.json
new file mode 100644
index 0000000000..c05fb5ab1d
--- /dev/null
+++ b/httptransport/api/v1/examples/cpe.json
@@ -0,0 +1,2 @@
+"cpe:/a:microsoft:internet_explorer:8.0.6001:beta"
+"cpe:2.3:a:microsoft:internet_explorer:8.0.6001:beta:*:*:*:*:*:*"
diff --git a/httptransport/api/v1/examples/distribution.json b/httptransport/api/v1/examples/distribution.json
new file mode 100644
index 0000000000..19b1fd83a0
--- /dev/null
+++ b/httptransport/api/v1/examples/distribution.json
@@ -0,0 +1,9 @@
+{
+ "id": "1",
+ "did": "ubuntu",
+ "name": "Ubuntu",
+ "version": "18.04.3 LTS (Bionic Beaver)",
+ "version_code_name": "bionic",
+ "version_id": "18.04",
+ "pretty_name": "Ubuntu 18.04.3 LTS"
+}
diff --git a/httptransport/api/v1/examples/environment.json b/httptransport/api/v1/examples/environment.json
new file mode 100644
index 0000000000..d55cdb1c9b
--- /dev/null
+++ b/httptransport/api/v1/examples/environment.json
@@ -0,0 +1,7 @@
+{
+ "value": {
+ "package_db": "var/lib/dpkg/status",
+ "introduced_in": "sha256:35c102085707f703de2d9eaad8752d6fe1b8f02b5d2149f1d8357c9cc7fb7d0a",
+ "distribution_id": "1"
+ }
+}
diff --git a/httptransport/api/v1/examples/manifest.json b/httptransport/api/v1/examples/manifest.json
new file mode 100644
index 0000000000..4a36f4fce6
--- /dev/null
+++ b/httptransport/api/v1/examples/manifest.json
@@ -0,0 +1,9 @@
+{
+ "hash": "sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
+ "layers": [
+ {
+ "hash": "sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b856",
+ "uri": "https://storage.example.com/blob/e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b856"
+ }
+ ]
+}
diff --git a/httptransport/api/v1/examples/notification_page.json b/httptransport/api/v1/examples/notification_page.json
new file mode 100644
index 0000000000..1653cafd01
--- /dev/null
+++ b/httptransport/api/v1/examples/notification_page.json
@@ -0,0 +1,7 @@
+{
+ "page": {
+ "size": 0,
+ "next": "-1"
+ },
+ "notifications": []
+}
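
The `page.next` token in this example drives the `GetNotification` pagination described in the OpenAPI document: clients echo it back as the `next` query parameter, omitting it on the first request. A sketch of building such a request URL (the host and token values are made up for illustration):

```go
package main

import (
	"fmt"
	"net/url"
	"strconv"
)

// nextPageURL builds a GetNotification URL; parameter names follow the
// OpenAPI document, while the host is a placeholder.
func nextPageURL(id, next string, pageSize int) string {
	u := url.URL{
		Scheme: "https",
		Host:   "clair.example.com",
		Path:   "/notifier/api/v1/notification/" + id,
	}
	q := u.Query()
	q.Set("page_size", strconv.Itoa(pageSize))
	if next != "" { // the first request omits "next"
		q.Set("next", next)
	}
	u.RawQuery = q.Encode()
	return u.String()
}

func main() {
	fmt.Println(nextPageURL("deadbeef", "", 100))
	fmt.Println(nextPageURL("deadbeef", "1b4d0db2", 100))
}
```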
diff --git a/httptransport/api/v1/examples/package.json b/httptransport/api/v1/examples/package.json
new file mode 100644
index 0000000000..d86c65698b
--- /dev/null
+++ b/httptransport/api/v1/examples/package.json
@@ -0,0 +1,17 @@
+{
+ "id": "10",
+ "name": "libapt-pkg5.0",
+ "version": "1.6.11",
+ "kind": "binary",
+ "normalized_version": "",
+ "arch": "x86",
+ "module": "",
+ "cpe": "",
+ "source": {
+ "id": "9",
+ "name": "apt",
+ "version": "1.6.11",
+ "kind": "source",
+ "source": null
+ }
+}
diff --git a/httptransport/api/v1/examples/vulnerability.json b/httptransport/api/v1/examples/vulnerability.json
new file mode 100644
index 0000000000..92a0944aca
--- /dev/null
+++ b/httptransport/api/v1/examples/vulnerability.json
@@ -0,0 +1,31 @@
+{
+ "id": "356835",
+ "updater": "ubuntu",
+ "name": "CVE-2009-5155",
+ "description": "In the GNU C Library (aka glibc or libc6) before 2.28, parse_reg_exp in posix/regcomp.c misparses alternatives, which allows attackers to cause a denial of service (assertion failure and application exit) or trigger an incorrect result by attempting a regular-expression match.",
+ "links": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2009-5155 http://people.canonical.com/~ubuntu-security/cve/2009/CVE-2009-5155.html https://sourceware.org/bugzilla/show_bug.cgi?id=11053 https://debbugs.gnu.org/cgi/bugreport.cgi?bug=22793 https://debbugs.gnu.org/cgi/bugreport.cgi?bug=32806 https://debbugs.gnu.org/cgi/bugreport.cgi?bug=34238 https://sourceware.org/bugzilla/show_bug.cgi?id=18986",
+ "severity": "Low",
+ "normalized_severity": "Low",
+ "package": {
+ "id": "0",
+ "name": "glibc",
+ "version": "2.27-0ubuntu1",
+ "kind": "binary",
+ "source": null
+ },
+ "dist": {
+ "id": "0",
+ "did": "ubuntu",
+ "name": "Ubuntu",
+ "version": "18.04.3 LTS (Bionic Beaver)",
+ "version_code_name": "bionic",
+ "version_id": "18.04",
+ "arch": "amd64"
+ },
+ "repo": {
+ "id": "0",
+ "name": "Ubuntu 18.04.3 LTS"
+ },
+ "issued": "2019-10-12T07:20:50.52Z",
+ "fixed_in_version": "2.28-0ubuntu1"
+}
diff --git a/httptransport/api/v1/examples/vulnerability_summary.json b/httptransport/api/v1/examples/vulnerability_summary.json
new file mode 100644
index 0000000000..2f90e97b9b
--- /dev/null
+++ b/httptransport/api/v1/examples/vulnerability_summary.json
@@ -0,0 +1,24 @@
+{
+ "name": "CVE-2009-5155",
+ "description": "In the GNU C Library (aka glibc or libc6) before 2.28, parse_reg_exp in posix/regcomp.c misparses alternatives, which allows attackers to cause a denial of service (assertion failure and application exit) or trigger an incorrect result by attempting a regular-expression match.",
+ "normalized_severity": "Low",
+ "fixed_in_version": "v0.0.1",
+ "links": "http://link-to-advisory",
+ "package": {
+ "id": "0",
+ "name": "glibc",
+ "version": "v0.0.1-rc1"
+ },
+ "dist": {
+ "id": "0",
+ "did": "ubuntu",
+ "name": "Ubuntu",
+ "version": "18.04.3 LTS (Bionic Beaver)",
+ "version_code_name": "bionic",
+ "version_id": "18.04"
+ },
+ "repo": {
+ "id": "0",
+ "name": "Ubuntu 18.04.3 LTS"
+ }
+}
diff --git a/httptransport/api/v1/openapi.jq b/httptransport/api/v1/openapi.jq
new file mode 100644
index 0000000000..f31ceda374
--- /dev/null
+++ b/httptransport/api/v1/openapi.jq
@@ -0,0 +1,438 @@
+# vim: set expandtab ts=2 sw=2:
+include "oapi";
+
+# Some helper functions:
+def example_ref($id): ref("examples/\($id).json"); # Files are local at build time.
+def responses($r):
+{
+ "200": {
+ description: "Success",
+ headers: {
+ "Clair-Error": header_ref("Clair-Error"),
+ },
+ },
+ "400": response_ref("bad_request"),
+ "415": response_ref("unsupported_media_type"),
+ default: response_ref("oops"),
+} * $r
+;
+
+# Some variables:
+"/notifier/api/v1" as $path_notif |
+"/matcher/api/v1" as $path_match |
+"/indexer/api/v1" as $path_index |
+
+# The OpenAPI object:
+{
+ openapi: "3.1.0",
+ info: {
+ title: "Clair Container Analyzer",
+ description: ([
+ "Clair is a set of cooperating microservices which can index and match a container image's content with known vulnerabilities.",
+ "",
+ "**Note:** Any endpoints tagged \"internal\" or \"unstable\" are documented for completeness but are considered exempt from versioning.",
+ ""] | join("\n") | sub("[[:space:]]*$"; "")),
+ version: "1.2.0",
+ contact: {
+ name: "Clair Team",
+ url: "http://github.com/quay/clair",
+ email: "quay-devel@redhat.com",
+ },
+ license: {
+ name: "Apache License 2.0",
+ url: "http://www.apache.org/licenses/",
+ }
+ },
+ externalDocs: {url: "https://quay.github.io/clair/"},
+ tags: [
+ { name: "indexer" },
+ { name: "matcher" },
+ { name: "notifier" },
+ { name: "internal" },
+ { name: "unstable" }
+ ],
+ paths: {
+ "\($path_notif)/notification/{id}": {
+ parameters: [ {
+ in: "path",
+ name: "id",
+ required: true,
+ schema: schema_ref("token"),
+ description: "A notification ID returned by a callback"
+ } ],
+ delete: {
+ operationId: "DeleteNotification",
+ responses: responses({"204": {description: "TODO"}}),
+ },
+ get: {
+ operationId: "GetNotification",
+ parameters: [
+ {
+ in: "query",
+ name: "page_size",
+ schema: {"type": "integer"},
+ description: "The maximum number of notifications to deliver in a single page."
+ },
+ {
+ in: "query",
+ name: "next",
+ schema: {"type": "string"},
+ description: "The next page to fetch via id. Typically this number is provided on initial response in the \"page.next\" field. The first request should omit this field."
+ }
+ ],
+ responses: responses({
+ "200": {
+ description: "A paginated list of notifications",
+ content: contenttype("notification_page"),
+ },
+ "304": {
+ description: "Not modified",
+ },
+ })
+ }
+ },
+ "\($path_index)/index_report": {
+ post: {
+ operationId: "Index",
+ requestBody: {
+ description: "Manifest to index.",
+ required: true,
+ content: contenttype("manifest"),
+ },
+ responses: (responses({
+ "201": {
+ description: "IndexReport created.\n\nClients may want to avoid reading the body if simply submitting the manifest for later vulnerability reporting.",
+ content: contenttype("index_report"),
+ headers: {
+ Location: header_ref("Location"),
+ Link: header_ref("Link"),
+ },
+ links: {
+ retrieve: {
+ operationId: "GetIndexReport",
+ parameters: {
+ digest: "$request.body#/hash"
+ },
+ },
+ delete: {
+ operationId: "DeleteManifest",
+ parameters: {
+ digest: "$request.body#/hash"
+ },
+ },
+ report: {
+ operationId: "GetVulnerabilityReport",
+ parameters: {
+ digest: "$request.body#/hash"
+ },
+ },
+ },
+ },
+ "412": {
+ description: "Precondition Failed",
+ },
+ }) | del(.["200"])),
+ },
+ delete: {
+ operationId: "DeleteManifests",
+ requestBody: {
+ description: "Array of manifest digests to delete.",
+ required: true,
+ content: contenttype("bulk_delete"),
+ },
+ responses: responses({
+ "200": {
+ description: "Successfully deleted manifests.",
+ content: contenttype("bulk_delete"),
+ },
+ }),
+ }
+ },
+ "\($path_index)/index_report/{digest}": {
+ delete: {
+ operationId: "DeleteManifest",
+ responses: (responses({"204": {
+ description: "Success",
+ }}) |
+ del(.["200"])),
+ },
+ get: {
+ operationId: "GetIndexReport",
+ responses: responses({
+ "200": {
+ description: "IndexReport retrieved",
+ content: contenttype("index_report"),
+ },
+ "404": response_ref("not_found"),
+ }),
+ },
+ parameters: [ param_ref("digest") ],
+ },
+ "\($path_index)/internal/affected_manifest": {
+ post: {
+ tags: [ "internal", "unstable"],
+ operationId: "AffectedManifests",
+ responses: responses({
+ "200": {
+ description: "TODO",
+ content: contenttype("affected_manifests"),
+ },
+ }),
+ },
+ },
+ "\($path_index)/index_state": {
+ get: {
+ operationId: "IndexState",
+ responses: {
+ "200": {
+ description: "Indexer State",
+ headers: {
+ Etag: header_ref("Etag"),
+ },
+ content: contenttype("index_state"),
+ },
+ "304": {
+ description: "Not Modified",
+ },
+ }
+ }
+ },
+ "\($path_match)/vulnerability_report/{digest}": {
+ get: {
+ operationId: "GetVulnerabilityReport",
+ responses: (responses({
+ "201": {
+ description: "Vulnerability Report Created",
+ content: contenttype("vulnerability_report"),
+ }
+ ,
+ "404": response_ref("not_found"),
+ }) | del(.["200"])),
+ },
+ parameters: [ param_ref("digest") ],
+ },
+ "\($path_match)/internal/update_operation": {
+ post: {
+ tags: [ "internal", "unstable"],
+ operationId: "UpdateOperation",
+ responses: responses({
+ "200": {
+ description: "TODO",
+ content: contenttype("affected_manifests"),
+ },
+ }),
+ },
+ },
+ "\($path_match)/internal/update_diff": {
+ get: {
+ tags: [ "internal", "unstable"],
+ operationId: "GetUpdateDiff",
+ responses: responses({
+ "200": {
+ description: "TODO",
+ content: contenttype("update_diff"),
+ },
+ }),
+ parameters: [
+ {
+ in: "query",
+ name: "cur",
+ schema: schema_ref("token"),
+ description: "TKTK"
+ },
+ {
+ in: "query",
+ name: "prev",
+ schema: schema_ref("token"),
+ description: "TKTK"
+ }
+ ],
+ },
+ },
+ },
+ security: [
+ #{},
+ #{"psk": []},
+ ],
+ webhooks: {
+ notification: {
+ post: {
+ tags: ["notifier"],
+ requestBody: {
+ content: contenttype("notification"),
+ },
+ responses: {
+ "200": {
+ description: "TODO",
+ },
+ },
+ },
+ },
+ },
+ components: {
+ schemas: {
+ # Anything here will get overwritten by standalone JSON Schema objects
+ # if the keys are duplicated.
+ #
+ # Generally, anything that goes in a response/request body should have a
+ # schema over in the types directory.
+ token: {
+ "type": "string",
+ description: "An opaque token previously obtained from the service.",
+ },
+ },
+ responses: {
+ bad_request: {
+ description: "Bad Request",
+ content: contenttype("error"),
+ },
+ oops: {
+ description: "Internal Server Error",
+ content: contenttype("error"),
+ },
+ not_found: {
+ description: "Not Found",
+ content: contenttype("error"),
+ },
+ # Not expressible in OpenAPI:
+ #method_not_allowed: {
+ # description: "Method Not Allowed",
+ # headers: {
+ # Allow: header_ref("Allow"),
+ # },
+ # content: contenttype("error"),
+ #},
+ unsupported_media_type: {
+ description: "Unsupported Media Type",
+ content: contenttype("error"),
+ },
+ },
+ parameters: {
+ digest: {
+ description: "OCI-compatible digest of a referred object.",
+ name: "digest",
+ in: "path",
+ schema: schema_ref("digest"),
+ required: true,
+ }
+ },
+ headers: {
+      # Only used for 405 Method Not Allowed responses, which aren't expressible in OpenAPI.
+ #Allow: {
+ # description: "TKTK",
+ # style: "simple",
+ # schema: { "type": "string" },
+ # required: true,
+ #},
+ "Clair-Error": {
+ description: "This is a trailer containing any errors encountered while writing the response.",
+ style: "simple",
+ schema: { "type": "string" },
+ },
+ Etag: {
+ description: "HTTP [ETag header](https://httpwg.org/specs/rfc9110.html#field.etag)",
+ style: "simple",
+ schema: {"type": "string"}
+ },
+ Link: {
+ description: "Web Linking [Link header](https://httpwg.org/specs/rfc8288.html#header)",
+ style: "simple",
+ schema: { "type": "string" },
+ },
+ Location: {
+ description: "HTTP [Location header](https://httpwg.org/specs/rfc9110.html#field.location)",
+ style: "simple",
+ required: true,
+ schema: { "type": "string" },
+ },
+ },
+ securitySchemes: {
+ psk: {
+ "type": "http",
+ scheme: "bearer",
+ bearerFormat: "JWT with preshared key and allow-listed issuers",
+ description: "Clair's authentication scheme.",
+ },
+ },
+ },
+}
+|
+# And now, a bunch of fixups:
+def add_tags: # Match the path prefixes and add default tags.
+ .paths |= with_entries(
+ (
+ if (.key|startswith($path_index)) then
+ "indexer"
+ elif (.key|startswith($path_match)) then
+ "matcher"
+ elif (.key|startswith($path_notif)) then
+ "notifier"
+ else
+ ""
+ end
+ ) as $k |
+ if ($k=="") then
+ .
+ else
+ (.value[]|select(objects)) |= . + {
+ tags: ((.tags//[]) + [$k]),
+ }
+ end
+ )
+;
+def operation_metadata: # Slipstream some metadata into response objects.
+ {
+ AffectedManifests: {
+ summary: "Retrieve the set of manifests affected by the provided vulnerabilities.",
+ description: "",
+ },
+ DeleteManifest: {
+ summary: "Delete the referenced manifest.",
+      description: "Given a Manifest's content addressable hash, any data related to it will be removed if it exists.",
+ },
+ DeleteManifests: {
+ summary: "Delete the referenced manifests.",
+      description: "Given a list of Manifests' content addressable hashes, any data related to them will be removed if it exists.",
+ },
+ DeleteNotification: {
+ summary: "Delete the referenced notification set.",
+ description: "Issues a delete of the provided notification id and all associated notifications.\nAfter this delete clients will no longer be able to retrieve notifications.",
+ },
+ GetIndexReport: {
+ summary: "Retrieve the IndexReport for the referenced manifest.",
+ description: "Given a Manifest's content addressable hash, an IndexReport will be retrieved if it exists.",
+ },
+ GetNotification: {
+ summary: "Retrieve pages of the referenced notification set.",
+ description: "By performing a GET with an id as a path parameter, the client will retrieve a paginated response of notification objects.",
+ },
+ GetVulnerabilityReport: {
+ summary: "Retrieve a VulnerabilityReport for the referenced manifest.",
+      description: "Given a Manifest's content addressable hash, a VulnerabilityReport will be created. The Manifest **must** have been Indexed first via the Index endpoint.",
+ },
+ Index: {
+ summary: "Index the contents of a Manifest",
+ description: "By submitting a Manifest object to this endpoint Clair will fetch the layers, scan each layer's contents, and provide an index of discovered packages, repository and distribution information.",
+ },
+ IndexState: {
+ summary: "Report the indexer's internal configuration and state.",
+ description: "The index state endpoint returns a json structure indicating the indexer's internal configuration state.\nA client may be interested in this as a signal that manifests may need to be re-indexed.",
+ },
+ } as $m |
+ ( .paths[][] | select(objects) ) |= (
+ .operationId as $id |
+ ($m[$id]?) as $m |
+ if ($m) then
+ . + $m
+ else
+ .
+ end
+ )
+;
+
+sort_paths |
+content_defaults |
+add_tags |
+operation_metadata |
+cli_hints |
+.
diff --git a/httptransport/api/v1/openapi.json b/httptransport/api/v1/openapi.json
new file mode 100644
index 0000000000..6ecc7e9489
--- /dev/null
+++ b/httptransport/api/v1/openapi.json
@@ -0,0 +1 @@
+{"openapi":"3.1.0","info":{"title":"Clair Container Analyzer","description":"Clair is a set of cooperating microservices which can index and match a container image's content with known vulnerabilities.\n\n**Note:** Any endpoints tagged \"internal\" or \"unstable\" are documented for completeness but are considered exempt from versioning.","version":"1.2.0","contact":{"name":"Clair Team","url":"http://github.com/quay/clair","email":"quay-devel@redhat.com"},"license":{"name":"Apache License 2.0","url":"http://www.apache.org/licenses/"}},"externalDocs":{"url":"https://quay.github.io/clair/"},"tags":[{"name":"indexer"},{"name":"matcher"},{"name":"notifier"},{"name":"internal"},{"name":"unstable"}],"paths":{"/indexer/api/v1/index_report":{"post":{"operationId":"Index","requestBody":{"description":"Manifest to index.","required":true,"content":{"application/vnd.clair.manifest.v1+json":{"schema":{"$ref":"#/components/schemas/manifest"}},"application/json":{"schema":{"$ref":"#/components/schemas/manifest"}}}},"responses":{"400":{"$ref":"#/components/responses/bad_request"},"415":{"$ref":"#/components/responses/unsupported_media_type"},"default":{"$ref":"#/components/responses/oops"},"201":{"description":"IndexReport created.\n\nClients may want to avoid reading the body if simply submitting the manifest for later vulnerability reporting.","content":{"application/vnd.clair.index_report.v1+json":{"schema":{"$ref":"#/components/schemas/index_report"}},"application/json":{"schema":{"$ref":"#/components/schemas/index_report"}}},"headers":{"Location":{"$ref":"#/components/headers/Location"},"Link":{"$ref":"#/components/headers/Link"}},"links":{"retrieve":{"operationId":"GetIndexReport","parameters":{"digest":"$request.body#/hash"}},"delete":{"operationId":"DeleteManifest","parameters":{"digest":"$request.body#/hash"}},"report":{"operationId":"GetVulnerabilityReport","parameters":{"digest":"$request.body#/hash"}}}},"412":{"description":"Precondition 
Failed"}},"tags":["indexer"],"summary":"Index the contents of a Manifest","description":"By submitting a Manifest object to this endpoint Clair will fetch the layers, scan each layer's contents, and provide an index of discovered packages, repository and distribution information."},"delete":{"operationId":"DeleteManifests","requestBody":{"description":"Array of manifest digests to delete.","required":true,"content":{"application/vnd.clair.bulk_delete.v1+json":{"schema":{"$ref":"#/components/schemas/bulk_delete"}},"application/json":{"schema":{"$ref":"#/components/schemas/bulk_delete"}}}},"responses":{"200":{"description":"Successfully deleted manifests.","headers":{"Clair-Error":{"$ref":"#/components/headers/Clair-Error"}},"content":{"application/vnd.clair.bulk_delete.v1+json":{"schema":{"$ref":"#/components/schemas/bulk_delete"}},"application/json":{"schema":{"$ref":"#/components/schemas/bulk_delete"}}}},"400":{"$ref":"#/components/responses/bad_request"},"415":{"$ref":"#/components/responses/unsupported_media_type"},"default":{"$ref":"#/components/responses/oops"}},"tags":["indexer"],"summary":"Delete the referenced manifests.","description":"Given a Manifest's content addressable hash, any data related to it will be removed if it exists."}},"/indexer/api/v1/index_report/{digest}":{"delete":{"operationId":"DeleteManifest","responses":{"400":{"$ref":"#/components/responses/bad_request"},"415":{"$ref":"#/components/responses/unsupported_media_type"},"default":{"$ref":"#/components/responses/oops"},"204":{"description":"Success"}},"tags":["indexer"],"summary":"Delete the referenced manifest.","description":"Given a Manifest's content addressable hash, any data related to it will be removed if it exists."},"get":{"operationId":"GetIndexReport","responses":{"200":{"description":"IndexReport 
retrieved","headers":{"Clair-Error":{"$ref":"#/components/headers/Clair-Error"}},"content":{"application/vnd.clair.index_report.v1+json":{"schema":{"$ref":"#/components/schemas/index_report"}},"application/json":{"schema":{"$ref":"#/components/schemas/index_report"}}}},"400":{"$ref":"#/components/responses/bad_request"},"415":{"$ref":"#/components/responses/unsupported_media_type"},"default":{"$ref":"#/components/responses/oops"},"404":{"$ref":"#/components/responses/not_found"}},"tags":["indexer"],"summary":"Retrieve the IndexReport for the referenced manifest.","description":"Given a Manifest's content addressable hash, an IndexReport will be retrieved if it exists."},"parameters":[{"$ref":"#/components/parameters/digest"}]},"/indexer/api/v1/index_state":{"get":{"operationId":"IndexState","responses":{"200":{"description":"Indexer State","headers":{"Etag":{"$ref":"#/components/headers/Etag"}},"content":{"application/vnd.clair.index_state.v1+json":{"schema":{"$ref":"#/components/schemas/index_state"}},"application/json":{"schema":{"$ref":"#/components/schemas/index_state"}}}},"304":{"description":"Not Modified"}},"tags":["indexer"],"summary":"Report the indexer's internal configuration and state.","description":"The index state endpoint returns a json structure indicating the indexer's internal configuration state.\nA client may be interested in this as a signal that manifests may need to be 
re-indexed."}},"/indexer/api/v1/internal/affected_manifest":{"post":{"tags":["internal","unstable","indexer"],"operationId":"AffectedManifests","responses":{"200":{"description":"TODO","headers":{"Clair-Error":{"$ref":"#/components/headers/Clair-Error"}},"content":{"application/vnd.clair.affected_manifests.v1+json":{"schema":{"$ref":"#/components/schemas/affected_manifests"}},"application/json":{"schema":{"$ref":"#/components/schemas/affected_manifests"}}}},"400":{"$ref":"#/components/responses/bad_request"},"415":{"$ref":"#/components/responses/unsupported_media_type"},"default":{"$ref":"#/components/responses/oops"}},"summary":"Retrieve the set of manifests affected by the provided vulnerabilities.","description":"","x-cli-ignore":true}},"/matcher/api/v1/internal/update_diff":{"get":{"tags":["internal","unstable","matcher"],"operationId":"GetUpdateDiff","responses":{"200":{"description":"TODO","headers":{"Clair-Error":{"$ref":"#/components/headers/Clair-Error"}},"content":{"application/vnd.clair.update_diff.v1+json":{"schema":{"$ref":"#/components/schemas/update_diff"}},"application/json":{"schema":{"$ref":"#/components/schemas/update_diff"}}}},"400":{"$ref":"#/components/responses/bad_request"},"415":{"$ref":"#/components/responses/unsupported_media_type"},"default":{"$ref":"#/components/responses/oops"}},"parameters":[{"in":"query","name":"cur","schema":{"$ref":"#/components/schemas/token"},"description":"TKTK"},{"in":"query","name":"prev","schema":{"$ref":"#/components/schemas/token"},"description":"TKTK"}],"x-cli-ignore":true}},"/matcher/api/v1/internal/update_operation":{"post":{"tags":["internal","unstable","matcher"],"operationId":"UpdateOperation","responses":{"200":{"description":"TODO","headers":{"Clair-Error":{"$ref":"#/components/headers/Clair-Error"}},"content":{"application/vnd.clair.affected_manifests.v1+json":{"schema":{"$ref":"#/components/schemas/affected_manifests"}},"application/json":{"schema":{"$ref":"#/components/schemas/affected_manifests"}
}}},"400":{"$ref":"#/components/responses/bad_request"},"415":{"$ref":"#/components/responses/unsupported_media_type"},"default":{"$ref":"#/components/responses/oops"}},"x-cli-ignore":true}},"/matcher/api/v1/vulnerability_report/{digest}":{"get":{"operationId":"GetVulnerabilityReport","responses":{"400":{"$ref":"#/components/responses/bad_request"},"415":{"$ref":"#/components/responses/unsupported_media_type"},"default":{"$ref":"#/components/responses/oops"},"201":{"description":"Vulnerability Report Created","content":{"application/vnd.clair.vulnerability_report.v1+json":{"schema":{"$ref":"#/components/schemas/vulnerability_report"}},"application/json":{"schema":{"$ref":"#/components/schemas/vulnerability_report"}}}},"404":{"$ref":"#/components/responses/not_found"}},"tags":["matcher"],"summary":"Retrieve a VulnerabilityReport for the referenced manifest.","description":"Given a Manifest's content addressable hash a VulnerabilityReport will be created. The Manifest **must** have been Indexed first via the Index endpoint."},"parameters":[{"$ref":"#/components/parameters/digest"}]},"/notifier/api/v1/notification/{id}":{"parameters":[{"in":"path","name":"id","required":true,"schema":{"$ref":"#/components/schemas/token"},"description":"A notification ID returned by a callback"}],"delete":{"operationId":"DeleteNotification","responses":{"200":{"description":"Success","headers":{"Clair-Error":{"$ref":"#/components/headers/Clair-Error"}}},"400":{"$ref":"#/components/responses/bad_request"},"415":{"$ref":"#/components/responses/unsupported_media_type"},"default":{"$ref":"#/components/responses/oops"},"204":{"description":"TODO"}},"tags":["notifier"],"summary":"Delete the referenced notification set.","description":"Issues a delete of the provided notification id and all associated notifications.\nAfter this delete clients will no longer be able to retrieve 
notifications."},"get":{"operationId":"GetNotification","parameters":[{"in":"query","name":"page_size","schema":{"type":"integer"},"description":"The maximum number of notifications to deliver in a single page."},{"in":"query","name":"next","schema":{"type":"string"},"description":"The next page to fetch via id. Typically this number is provided on initial response in the \"page.next\" field. The first request should omit this field."}],"responses":{"200":{"description":"A paginated list of notifications","headers":{"Clair-Error":{"$ref":"#/components/headers/Clair-Error"}},"content":{"application/vnd.clair.notification_page.v1+json":{"schema":{"$ref":"#/components/schemas/notification_page"}},"application/json":{"schema":{"$ref":"#/components/schemas/notification_page"}}}},"400":{"$ref":"#/components/responses/bad_request"},"415":{"$ref":"#/components/responses/unsupported_media_type"},"default":{"$ref":"#/components/responses/oops"},"304":{"description":"Not modified"}},"tags":["notifier"],"summary":"Retrieve pages of the referenced notification set.","description":"By performing a GET with an id as a path parameter, the client will retrieve a paginated response of notification objects."}}},"security":[],"webhooks":{"notification":{"post":{"tags":["notifier"],"requestBody":{"content":{"application/vnd.clair.notification.v1+json":{"schema":{"$ref":"#/components/schemas/notification"}}}},"responses":{"200":{"description":"TODO"}}}}},"components":{"schemas":{"token":{"type":"string","description":"An opaque token previously obtained from the service."},"affected_manifests":{"$id":"https://clairproject.org/api/http/v1/affected_manifests.schema.json","$schema":"https://json-schema.org/draft/2020-12/schema","title":"Affected Manifests","type":"object","description":"**This is an internal type, documented for completeness.**\n\nManifests affected by the specified vulnerability objects.","properties":{"vulnerabilities":{"type":"object","description":"Vulnerability 
objects.","additionalProperties":{"$ref":"vulnerability.schema.json"}},"vulnerable_manifests":{"type":"object","description":"Mapping of manifest digests to vulnerability identifiers.","additionalProperties":{"type":"array","items":{"type":"string","description":"An identifier to be used in the \"#/vulnerabilities\" object."}}}},"required":["vulnerable_manifests"]},"bulk_delete":{"$id":"https://clairproject.org/api/http/v1/bulk_delete.schema.json","$schema":"https://json-schema.org/draft/2020-12/schema","title":"Bulk Delete","type":"array","description":"Array of manifest digests to delete from the system.","items":{"$ref":"digest.schema.json","description":"Manifest digest to delete from the system."}},"cpe":{"$id":"https://clairproject.org/api/http/v1/cpe.schema.json","$schema":"https://json-schema.org/draft/2020-12/schema","title":"Common Platform Enumeration Name","description":"This is a CPE Name in either v2.2 \"URI\" form or v2.3 \"Formatted String\" form.","$comment":"Clair only produces v2.3 CPE Names. 
Any v2.2 Names will be normalized into v2.3 form.","oneOf":[{"description":"This is the CPE 2.2 regexp: https://cpe.mitre.org/specification/2.2/cpe-language_2.2.xsd","type":"string","pattern":"^[c][pP][eE]:/[AHOaho]?(:[A-Za-z0-9\\._\\-~%]*){0,6}$"},{"description":"This is the CPE 2.3 regexp: https://csrc.nist.gov/schema/cpe/2.3/cpe-naming_2.3.xsd","type":"string","pattern":"^cpe:2\\.3:[aho\\*\\-](:(((\\?*|\\*?)([a-zA-Z0-9\\-\\._]|(\\\\[\\\\\\*\\?!\"#$$%&'\\(\\)\\+,/:;<=>@\\[\\]\\^`\\{\\|}~]))+(\\?*|\\*?))|[\\*\\-])){5}(:(([a-zA-Z]{2,3}(-([a-zA-Z]{2}|[0-9]{3}))?)|[\\*\\-]))(:(((\\?*|\\*?)([a-zA-Z0-9\\-\\._]|(\\\\[\\\\\\*\\?!\"#$$%&'\\(\\)\\+,/:;<=>@\\[\\]\\^`\\{\\|}~]))+(\\?*|\\*?))|[\\*\\-])){4}$"}],"examples":["cpe:/a:microsoft:internet_explorer:8.0.6001:beta","cpe:2.3:a:microsoft:internet_explorer:8.0.6001:beta:*:*:*:*:*:*"]},"digest":{"$id":"https://clairproject.org/api/http/v1/digest.schema.json","$schema":"https://json-schema.org/draft/2020-12/schema","title":"Digest","description":"A digest acts as a content identifier, enabling content addressability.","oneOf":[{"$comment":"SHA256: MUST be implemented","description":"SHA256","type":"string","pattern":"^sha256:[a-f0-9]{64}$"},{"$comment":"SHA512: MAY be implemented","description":"SHA512","type":"string","pattern":"^sha512:[a-f0-9]{128}$"},{"$comment":"BLAKE3: MAY be implemented","description":"BLAKE3\n\n**Currently not implemented.**","type":"string","pattern":"^blake3:[a-f0-9]{64}$"}]},"distribution":{"$id":"https://clairproject.org/api/http/v1/distribution.schema.json","$schema":"https://json-schema.org/draft/2020-12/schema","title":"Distribution","type":"object","description":"Distribution is the accompanying system context of a Package.","properties":{"id":{"description":"Unique ID for this Distribution. 
May be unique to the response document, not the whole system.","type":"string"},"did":{"description":"A lower-case string (no spaces or other characters outside of 0–9, a–z, \".\", \"_\", and \"-\") identifying the operating system, excluding any version information and suitable for processing by scripts or usage in generated filenames.","type":"string"},"name":{"description":"A string identifying the operating system.","type":"string"},"version":{"description":"A string identifying the operating system version, excluding any OS name information, possibly including a release code name, and suitable for presentation to the user.","type":"string"},"version_code_name":{"description":"A lower-case string (no spaces or other characters outside of 0–9, a–z, \".\", \"_\", and \"-\") identifying the operating system release code name, excluding any OS name information or release version, and suitable for processing by scripts or usage in generated filenames.","type":"string"},"version_id":{"description":"A lower-case string (mostly numeric, no spaces or other characters outside of 0–9, a–z, \".\", \"_\", and \"-\") identifying the operating system version, excluding any OS name information or release code name.","type":"string"},"arch":{"description":"A string identifying the OS architecture.","type":"string"},"cpe":{"description":"Common Platform Enumeration name.","$ref":"cpe.schema.json"},"pretty_name":{"description":"A pretty operating system name in a format suitable for presentation to the user.","type":"string"}},"additionalProperties":false,"required":["id"],"examples":[{"id":"1","did":"ubuntu","name":"Ubuntu","version":"18.04.3 LTS (Bionic Beaver)","version_code_name":"bionic","version_id":"18.04","pretty_name":"Ubuntu 18.04.3 LTS"}]},"environment":{"$id":"https://clairproject.org/api/http/v1/environment.schema.json","$schema":"https://json-schema.org/draft/2020-12/schema","title":"Environment","type":"object","description":"Environment describes the surrounding 
environment a package was discovered in.","properties":{"package_db":{"description":"The database the associated Package was discovered in.","type":"string"},"distribution_id":{"description":"The ID of the Distribution of the associated Package.","type":"string"},"introduced_in":{"description":"The Layer the associated Package was introduced in.","$ref":"digest.schema.json"},"repository_ids":{"description":"The IDs of the Repositories of the associated Package.","type":"array","items":{"type":"string"}}},"additionalProperties":false,"examples":[{"value":{"package_db":"var/lib/dpkg/status","introduced_in":"sha256:35c102085707f703de2d9eaad8752d6fe1b8f02b5d2149f1d8357c9cc7fb7d0a","distribution_id":"1"}}]},"error":{"$id":"https://clairproject.org/api/http/v1/error.schema.json","$schema":"https://json-schema.org/draft/2020-12/schema","title":"Error","type":"object","description":"A general error response.","properties":{"code":{"type":"string","description":"a code for this particular error"},"message":{"type":"string","description":"a message with further detail"}},"required":["message"]},"index_report":{"$id":"https://clairproject.org/api/http/v1/index_report.schema.json","$schema":"https://json-schema.org/draft/2020-12/schema","title":"IndexReport","type":"object","description":"An index of the contents of a Manifest.","properties":{"manifest_hash":{"$ref":"digest.schema.json","description":"The Manifest's digest."},"state":{"type":"string","description":"The current state of the index operation"},"err":{"type":"string","description":"An error message in the event of an unsuccessful index"},"success":{"type":"boolean","description":"A bool indicating successful 
index"},"packages":{"type":"object","additionalProperties":{"$ref":"package.schema.json"}},"distributions":{"type":"object","additionalProperties":{"$ref":"distribution.schema.json"}},"repository":{"type":"object","additionalProperties":{"$ref":"repository.schema.json"}},"environments":{"type":"object","additionalProperties":{"type":"array","items":{"$ref":"environment.schema.json"}}}},"additionalProperties":false,"required":["manifest_hash","state","success"]},"index_state":{"$id":"https://clairproject.org/api/http/v1/index_state.schema.json","$schema":"https://json-schema.org/draft/2020-12/schema","title":"Index State","type":"object","description":"Information on the state of the indexer system.","properties":{"state":{"type":"string","description":"an opaque token"}},"required":["state"]},"layer":{"$id":"https://clairproject.org/api/http/v1/layer.schema.json","$schema":"https://json-schema.org/draft/2020-12/schema","title":"Layer","type":"object","description":"Layer is a description of a container layer. 
It should contain enough information to fetch the layer.","properties":{"hash":{"$ref":"digest.schema.json","description":"Digest of the layer blob."},"uri":{"type":"string","description":"A URI indicating where the layer blob can be downloaded from."},"headers":{"description":"Any additional HTTP-style headers needed for requesting layers.","type":"object","patternProperties":{"^[a-zA-Z0-9\\-_]+$":{"type":"array","items":{"type":"string"}}}},"media_type":{"description":"The OCI Layer media type for this layer.","type":"string","pattern":"^application/vnd\\.oci\\.image\\.layer\\.v1\\.tar(\\+(gzip|zstd))?$"}},"additionalProperties":false,"required":["hash","uri"]},"manifest":{"$id":"https://clairproject.org/api/http/v1/manifest.schema.json","$schema":"https://json-schema.org/draft/2020-12/schema","title":"Manifest","type":"object","description":"A description of an OCI Image Manifest.","properties":{"hash":{"$ref":"digest.schema.json","description":"The OCI Image Manifest's digest.\n\nThis is used as an identifier throughout the system. 
This **SHOULD** be the same as the OCI Image Manifest's digest, but this is not enforced."},"layers":{"type":"array","description":"The OCI Layers making up the Image, in order.","items":{"$ref":"layer.schema.json"}}},"additionalProperties":false,"required":["hash"],"examples":[{"hash":"sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855","layers":[{"hash":"sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b856","uri":"https://storage.example.com/blob/e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b856"}]}]},"normalized_severity":{"$id":"https://clairproject.org/api/http/v1/normalized_severity.schema.json","$schema":"https://json-schema.org/draft/2020-12/schema","title":"Normalized Severity","description":"Standardized severity values.","enum":["Unknown","Negligible","Low","Medium","High","Critical"]},"notification_page":{"$id":"https://clairproject.org/api/http/v1/notification_page.schema.json","$schema":"https://json-schema.org/draft/2020-12/schema","title":"Notification Page","type":"object","description":"A page description and list of notifications.","properties":{"page":{"description":"An object informing the client the next page to retrieve.","type":"object","properties":{"size":{"type":"integer"},"next":{"oneOf":[{"type":"string"},{"const":"-1"}]}},"additionalProperties":false,"required":["size"]},"notifications":{"description":"Notifications within this page.","type":"array","items":{"$ref":"notification.schema.json"}}},"additionalProperties":false,"required":["page","notifications"],"examples":[{"page":{"size":0,"next":"-1"},"notifications":[]}]},"notification":{"$id":"https://clairproject.org/api/http/v1/notification.schema.json","$schema":"https://json-schema.org/draft/2020-12/schema","title":"Notification","type":"object","description":"A change in a manifest affected by a vulnerability.","properties":{"id":{"description":"Unique identifier for this 
notification.","type":"string"},"manifest":{"$ref":"digest.schema.json","description":"The digest of the manifest affected by the provided vulnerability."},"reason":{"description":"The reason for the notification.","enum":["added","removed"]},"vulnerability":{"$ref":"vulnerability_summary.schema.json"}},"additionalProperties":false,"required":["id","manifest","reason","vulnerability"]},"package":{"$id":"https://clairproject.org/api/http/v1/package.schema.json","$schema":"https://json-schema.org/draft/2020-12/schema","title":"Package","type":"object","description":"Description of installed software.","properties":{"id":{"description":"Unique ID for this Package. May be unique to the response document, not the whole system.","type":"string"},"name":{"description":"Identifier of this Package.\n\nThe uniqueness and scoping of this name depends on the packaging system.","type":"string"},"version":{"description":"Version of this Package, as reported by the packaging system.","type":"string"},"kind":{"description":"The \"kind\" of this Package.","enum":["binary","source"],"default":"binary"},"source":{"$ref":"package.schema.json","description":"Source Package that produced the current binary Package, if known."},"normalized_version":{"description":"Normalized representation of the discovered version.\n\nThe format is not specific, but is guaranteed to be forward compatible.","type":"string"},"module":{"description":"An identifier for intra-Repository grouping of packages.\n\nLikely only relevant on rpm-based systems.","type":"string"},"arch":{"description":"Native architecture for the Package.","type":"string","$comment":"This should become an enum in the future."},"cpe":{"$ref":"cpe.schema.json","description":"CPE Name for the 
Package."}},"additionalProperties":false,"required":["name","version"],"examples":[{"id":"10","name":"libapt-pkg5.0","version":"1.6.11","kind":"binary","normalized_version":"","arch":"x86","module":"","cpe":"","source":{"id":"9","name":"apt","version":"1.6.11","kind":"source","source":null}}]},"range":{"$id":"https://clairproject.org/api/http/v1/range.schema.json","$schema":"https://json-schema.org/draft/2020-12/schema","title":"Range","type":"object","description":"A range of versions.","properties":{"[":{"type":"string","description":"Lower bound, inclusive."},")":{"type":"string","description":"Upper bound, exclusive."}},"minProperties":1,"additionalProperties":false},"repository":{"$id":"https://clairproject.org/api/http/v1/repository.schema.json","$schema":"https://json-schema.org/draft/2020-12/schema","title":"Repository","type":"object","description":"Description of a software repository","properties":{"id":{"description":"Unique ID for this Repository. May be unique to the response document, not the whole system.","type":"string"},"name":{"description":"Human-relevant name for the Repository.","type":"string"},"key":{"description":"Machine-relevant name for the Repository.","type":"string"},"uri":{"description":"URI describing the Repository.","type":"string","format":"uri"},"cpe":{"description":"CPE name for the Repository.","$ref":"cpe.schema.json"}},"additionalProperties":false,"required":["id"]},"update_diff":{"$id":"https://clairproject.org/api/http/v1/update_diff.schema.json","$schema":"https://json-schema.org/draft/2020-12/schema","title":"Update Difference","type":"object","description":"**This is an internal type, documented for completeness.**\n\nTKTK","additionalProperties":false,"required":[]},"vulnerability_core":{"$id":"https://clairproject.org/api/http/v1/vulnerability_core.schema.json","$schema":"https://json-schema.org/draft/2020-12/schema","title":"Vulnerability Core","type":"object","description":"The core elements of vulnerabilities in 
the Clair system.","properties":{"name":{"type":"string","description":"Human-readable name, as presented in the vendor data."},"fixed_in_version":{"type":"string","description":"Version string, as presented in the vendor data."},"severity":{"type":"string","description":"Severity, as presented in the vendor data."},"normalized_severity":{"$ref":"normalized_severity.schema.json","description":"A well defined set of severity strings guaranteed to be present."},"range":{"$ref":"range.schema.json","description":"Range of versions the vulnerability applies to."},"arch_op":{"description":"Flag indicating how the referenced package's \"arch\" member should be interpreted.","enum":["equals","not equals","pattern match"]},"package":{"$ref":"package.schema.json","description":"A package description"},"distribution":{"$ref":"distribution.schema.json","description":"A distribution description"},"repository":{"$ref":"repository.schema.json","description":"A repository description"}},"required":["name","normalized_severity"],"dependentRequired":{"package":["arch_op"]},"anyOf":[{"required":["package"]},{"required":["repository"]},{"required":["distribution"]}]},"vulnerability_report":{"$id":"https://clairproject.org/api/http/v1/vulnerability_report.schema.json","$schema":"https://json-schema.org/draft/2020-12/schema","title":"VulnerabilityReport","type":"object","description":"A report expressing discovered packages, package environments, and package vulnerabilities within a Manifest.","properties":{"manifest_hash":{"$ref":"digest.schema.json"},"packages":{"type":"object","description":"A map of Package objects indexed by \"/id\"","additionalProperties":{"$ref":"package.schema.json"}},"distributions":{"type":"object","description":"A map of Distribution objects indexed by \"/id\"","additionalProperties":{"$ref":"distribution.schema.json"}},"repository":{"type":"object","description":"A map of Repository objects indexed by 
\"/id\"","additionalProperties":{"$ref":"repository.schema.json"}},"environments":{"type":"object","description":"A map of Environment arrays indexed by a Package \"/id\"","additionalProperties":{"type":"array","items":{"$ref":"environment.schema.json"}}},"vulnerabilities":{"type":"object","description":"A map of Vulnerabilities indexed by \"/id\"","additionalProperties":{"$ref":"vulnerability.schema.json"}},"package_vulnerabilities":{"type":"object","description":"A mapping of Vulnerability \"/id\" lists indexed by Package \"/id\"","additionalProperties":{"type":"array","items":{"type":"string"}}},"enrichments":{"type":"object","description":"A mapping of extra \"enrichment\" data by type","additionalProperties":{"type":"array"}}},"additionalProperties":false,"required":["distributions","environments","manifest_hash","packages","package_vulnerabilities","vulnerabilities"]},"vulnerability":{"$id":"https://clairproject.org/api/http/v1/vulnerability.schema.json","$schema":"https://json-schema.org/draft/2020-12/schema","title":"Vulnerability","type":"object","description":"Description of a software flaw.","$ref":"vulnerability_core.schema.json","properties":{"id":{"description":"","type":"string"},"updater":{"description":"","type":"string"},"description":{"description":"","type":"string"},"issued":{"description":"","type":"string","format":"date-time"},"links":{"description":"","type":"string"}},"unevaluatedProperties":false,"required":["id","updater"],"examples":[{"id":"356835","updater":"ubuntu","name":"CVE-2009-5155","description":"In the GNU C Library (aka glibc or libc6) before 2.28, parse_reg_exp in posix/regcomp.c misparses alternatives, which allows attackers to cause a denial of service (assertion failure and application exit) or trigger an incorrect result by attempting a regular-expression match.","links":"https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2009-5155 http://people.canonical.com/~ubuntu-security/cve/2009/CVE-2009-5155.html 
https://sourceware.org/bugzilla/show_bug.cgi?id=11053 https://debbugs.gnu.org/cgi/bugreport.cgi?bug=22793 https://debbugs.gnu.org/cgi/bugreport.cgi?bug=32806 https://debbugs.gnu.org/cgi/bugreport.cgi?bug=34238 https://sourceware.org/bugzilla/show_bug.cgi?id=18986","severity":"Low","normalized_severity":"Low","package":{"id":"0","name":"glibc","version":"2.27-0ubuntu1","kind":"binary","source":null},"dist":{"id":"0","did":"ubuntu","name":"Ubuntu","version":"18.04.3 LTS (Bionic Beaver)","version_code_name":"bionic","version_id":"18.04","arch":"amd64"},"repo":{"id":"0","name":"Ubuntu 18.04.3 LTS"},"issued":"2019-10-12T07:20:50.52Z","fixed_in_version":"2.28-0ubuntu1"}]},"vulnerability_summary":{"$id":"https://clairproject.org/api/http/v1/vulnerability_summary.schema.json","$schema":"https://json-schema.org/draft/2020-12/schema","title":"Vulnerability Summary","type":"object","description":"A summary of a vulnerability.","$ref":"vulnerability_core.schema.json","unevaluatedProperties":false,"examples":[{"name":"CVE-2009-5155","description":"In the GNU C Library (aka glibc or libc6) before 2.28, parse_reg_exp in posix/regcomp.c misparses alternatives, which allows attackers to cause a denial of service (assertion failure and application exit) or trigger an incorrect result by attempting a regular-expression match.","normalized_severity":"Low","fixed_in_version":"v0.0.1","links":"http://link-to-advisory","package":{"id":"0","name":"glibc","version":"v0.0.1-rc1"},"dist":{"id":"0","did":"ubuntu","name":"Ubuntu","version":"18.04.3 LTS (Bionic Beaver)","version_code_name":"bionic","version_id":"18.04"},"repo":{"id":"0","name":"Ubuntu 18.04.3 LTS"}}]}},"responses":{"bad_request":{"description":"Bad Request","content":{"application/vnd.clair.error.v1+json":{"schema":{"$ref":"#/components/schemas/error"}},"application/json":{"schema":{"$ref":"#/components/schemas/error"}}}},"oops":{"description":"Internal Server 
Error","content":{"application/vnd.clair.error.v1+json":{"schema":{"$ref":"#/components/schemas/error"}},"application/json":{"schema":{"$ref":"#/components/schemas/error"}}}},"not_found":{"description":"Not Found","content":{"application/vnd.clair.error.v1+json":{"schema":{"$ref":"#/components/schemas/error"}},"application/json":{"schema":{"$ref":"#/components/schemas/error"}}}},"unsupported_media_type":{"description":"Unsupported Media Type","content":{"application/vnd.clair.error.v1+json":{"schema":{"$ref":"#/components/schemas/error"}},"application/json":{"schema":{"$ref":"#/components/schemas/error"}}}}},"parameters":{"digest":{"description":"OCI-compatible digest of a referred object.","name":"digest","in":"path","schema":{"$ref":"#/components/schemas/digest"},"required":true}},"headers":{"Clair-Error":{"description":"This is a trailer containing any errors encountered while writing the response.","style":"simple","schema":{"type":"string"}},"Etag":{"description":"HTTP [ETag header](https://httpwg.org/specs/rfc9110.html#field.etag)","style":"simple","schema":{"type":"string"}},"Link":{"description":"Web Linking [Link header](https://httpwg.org/specs/rfc8288.html#header)","style":"simple","schema":{"type":"string"}},"Location":{"description":"HTTP [Location header](https://httpwg.org/specs/rfc9110.html#field.location)","style":"simple","required":true,"schema":{"type":"string"}}},"securitySchemes":{"psk":{"type":"http","scheme":"bearer","bearerFormat":"JWT with preshared key and allow-listed issuers","description":"Clair's authentication scheme."}}}}
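The `digest` schema above accepts three algorithm prefixes (sha256 required, sha512 optional, blake3 defined but not yet implemented), each with a lowercase-hex pattern. A minimal sketch of client-side validation against those patterns; the function and dictionary names are ours, not part of the spec:

```python
import re

# Patterns copied from the v1 digest schema: sha256 MUST be implemented,
# sha512 MAY be implemented, blake3 is defined but currently unimplemented.
DIGEST_PATTERNS = {
    "sha256": re.compile(r"^sha256:[a-f0-9]{64}$"),
    "sha512": re.compile(r"^sha512:[a-f0-9]{128}$"),
    "blake3": re.compile(r"^blake3:[a-f0-9]{64}$"),
}

def is_valid_digest(digest: str) -> bool:
    """Return True if the string matches any digest form the schema accepts."""
    return any(p.match(digest) for p in DIGEST_PATTERNS.values())
```

Note that the patterns only admit lowercase hex, so an uppercase digest such as `sha256:ABC…` is rejected even though it names the same bytes.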
diff --git a/httptransport/api/v1/openapi.json.etag b/httptransport/api/v1/openapi.json.etag
new file mode 100644
index 0000000000..4d249479fd
--- /dev/null
+++ b/httptransport/api/v1/openapi.json.etag
@@ -0,0 +1 @@
+"5bd75557472d7c0d60e8114f3bb61cceed81fd28b2209147333bd4d24e6cd4bf"
\ No newline at end of file
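To make the Index operation in the spec concrete, here is a sketch of how a client could build the `POST /indexer/api/v1/index_report` request. The base URL is a placeholder for a real deployment; the path and `application/vnd.clair.*` media types come from the spec itself, and per the spec a 201 response carries a `Location` header pointing at the created IndexReport:

```python
import json
import urllib.request

BASE = "http://clair.example.com"  # placeholder; substitute a real Clair endpoint

def build_index_request(manifest: dict) -> urllib.request.Request:
    """Build the Index POST; on success (201) the response's Location header
    references the created IndexReport."""
    return urllib.request.Request(
        BASE + "/indexer/api/v1/index_report",
        data=json.dumps(manifest).encode(),
        headers={
            # Versioned media types from the spec; plain application/json
            # is also accepted.
            "Content-Type": "application/vnd.clair.manifest.v1+json",
            "Accept": "application/vnd.clair.index_report.v1+json",
        },
        method="POST",
    )
```

A manifest body needs at least a `hash` digest, plus `layers` entries with `hash` and `uri` so the indexer can fetch each blob; the operation's `links` then let the client follow up with `GetIndexReport`, `GetVulnerabilityReport`, or `DeleteManifest` using that same digest.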
diff --git a/httptransport/api/v1/openapi.yaml b/httptransport/api/v1/openapi.yaml
new file mode 100644
index 0000000000..6eea56e4a8
--- /dev/null
+++ b/httptransport/api/v1/openapi.yaml
@@ -0,0 +1,1111 @@
+openapi: 3.1.0
+info:
+ title: Clair Container Analyzer
+ description: |-
+ Clair is a set of cooperating microservices which can index and match a container image's content with known vulnerabilities.
+
+ **Note:** Any endpoints tagged "internal" or "unstable" are documented for completeness but are considered exempt from versioning.
+ version: 1.2.0
+ contact:
+ name: Clair Team
+ url: http://github.com/quay/clair
+ email: quay-devel@redhat.com
+ license:
+ name: Apache License 2.0
+ url: http://www.apache.org/licenses/
+externalDocs:
+ url: https://quay.github.io/clair/
+tags:
+ - name: indexer
+ - name: matcher
+ - name: notifier
+ - name: internal
+ - name: unstable
+paths:
+ /indexer/api/v1/index_report:
+ post:
+ operationId: Index
+ requestBody:
+ description: Manifest to index.
+ required: true
+ content:
+ application/vnd.clair.manifest.v1+json:
+ schema:
+ $ref: '#/components/schemas/manifest'
+ application/json:
+ schema:
+ $ref: '#/components/schemas/manifest'
+ responses:
+ "400":
+ $ref: '#/components/responses/bad_request'
+ "415":
+ $ref: '#/components/responses/unsupported_media_type'
+ default:
+ $ref: '#/components/responses/oops'
+ "201":
+ description: |-
+ IndexReport created.
+
+ Clients may want to avoid reading the body if simply submitting the manifest for later vulnerability reporting.
+ content:
+ application/vnd.clair.index_report.v1+json:
+ schema:
+ $ref: '#/components/schemas/index_report'
+ application/json:
+ schema:
+ $ref: '#/components/schemas/index_report'
+ headers:
+ Location:
+ $ref: '#/components/headers/Location'
+ Link:
+ $ref: '#/components/headers/Link'
+ links:
+ retrieve:
+ operationId: GetIndexReport
+ parameters:
+ digest: $request.body#/hash
+ delete:
+ operationId: DeleteManifest
+ parameters:
+ digest: $request.body#/hash
+ report:
+ operationId: GetVulnerabilityReport
+ parameters:
+ digest: $request.body#/hash
+ "412":
+ description: Precondition Failed
+ tags:
+ - indexer
+ summary: Index the contents of a Manifest
+      description: By submitting a Manifest object to this endpoint, Clair will fetch the layers, scan each layer's contents, and provide an index of discovered packages, repositories, and distribution information.
+ delete:
+ operationId: DeleteManifests
+ requestBody:
+ description: Array of manifest digests to delete.
+ required: true
+ content:
+ application/vnd.clair.bulk_delete.v1+json:
+ schema:
+ $ref: '#/components/schemas/bulk_delete'
+ application/json:
+ schema:
+ $ref: '#/components/schemas/bulk_delete'
+ responses:
+ "200":
+ description: Successfully deleted manifests.
+ headers:
+ Clair-Error:
+ $ref: '#/components/headers/Clair-Error'
+ content:
+ application/vnd.clair.bulk_delete.v1+json:
+ schema:
+ $ref: '#/components/schemas/bulk_delete'
+ application/json:
+ schema:
+ $ref: '#/components/schemas/bulk_delete'
+ "400":
+ $ref: '#/components/responses/bad_request'
+ "415":
+ $ref: '#/components/responses/unsupported_media_type'
+ default:
+ $ref: '#/components/responses/oops'
+ tags:
+ - indexer
+ summary: Delete the referenced manifests.
+ description: Given a Manifest's content addressable hash, any data related to it will be removed if it exists.
+ /indexer/api/v1/index_report/{digest}:
+ delete:
+ operationId: DeleteManifest
+ responses:
+ "400":
+ $ref: '#/components/responses/bad_request'
+ "415":
+ $ref: '#/components/responses/unsupported_media_type'
+ default:
+ $ref: '#/components/responses/oops'
+ "204":
+ description: Success
+ tags:
+ - indexer
+ summary: Delete the referenced manifest.
+      description: Given a Manifest's content addressable hash, any data related to it will be removed if it exists.
+ get:
+ operationId: GetIndexReport
+ responses:
+ "200":
+ description: IndexReport retrieved
+ headers:
+ Clair-Error:
+ $ref: '#/components/headers/Clair-Error'
+ content:
+ application/vnd.clair.index_report.v1+json:
+ schema:
+ $ref: '#/components/schemas/index_report'
+ application/json:
+ schema:
+ $ref: '#/components/schemas/index_report'
+ "400":
+ $ref: '#/components/responses/bad_request'
+ "415":
+ $ref: '#/components/responses/unsupported_media_type'
+ default:
+ $ref: '#/components/responses/oops'
+ "404":
+ $ref: '#/components/responses/not_found'
+ tags:
+ - indexer
+ summary: Retrieve the IndexReport for the referenced manifest.
+ description: Given a Manifest's content addressable hash, an IndexReport will be retrieved if it exists.
+ parameters:
+ - $ref: '#/components/parameters/digest'
+ /indexer/api/v1/index_state:
+ get:
+ operationId: IndexState
+ responses:
+ "200":
+ description: Indexer State
+ headers:
+ Etag:
+ $ref: '#/components/headers/Etag'
+ content:
+ application/vnd.clair.index_state.v1+json:
+ schema:
+ $ref: '#/components/schemas/index_state'
+ application/json:
+ schema:
+ $ref: '#/components/schemas/index_state'
+ "304":
+ description: Not Modified
+ tags:
+ - indexer
+ summary: Report the indexer's internal configuration and state.
+ description: |-
+        The index state endpoint returns a JSON structure indicating the indexer's internal configuration state.
+ A client may be interested in this as a signal that manifests may need to be re-indexed.
+ /indexer/api/v1/internal/affected_manifest:
+ post:
+ tags:
+ - internal
+ - unstable
+ - indexer
+ operationId: AffectedManifests
+ responses:
+ "200":
+ description: TODO
+ headers:
+ Clair-Error:
+ $ref: '#/components/headers/Clair-Error'
+ content:
+ application/vnd.clair.affected_manifests.v1+json:
+ schema:
+ $ref: '#/components/schemas/affected_manifests'
+ application/json:
+ schema:
+ $ref: '#/components/schemas/affected_manifests'
+ "400":
+ $ref: '#/components/responses/bad_request'
+ "415":
+ $ref: '#/components/responses/unsupported_media_type'
+ default:
+ $ref: '#/components/responses/oops'
+ summary: Retrieve the set of manifests affected by the provided vulnerabilities.
+ description: ""
+ x-cli-ignore: true
+ /matcher/api/v1/internal/update_diff:
+ get:
+ tags:
+ - internal
+ - unstable
+ - matcher
+ operationId: GetUpdateDiff
+ responses:
+ "200":
+ description: TODO
+ headers:
+ Clair-Error:
+ $ref: '#/components/headers/Clair-Error'
+ content:
+ application/vnd.clair.update_diff.v1+json:
+ schema:
+ $ref: '#/components/schemas/update_diff'
+ application/json:
+ schema:
+ $ref: '#/components/schemas/update_diff'
+ "400":
+ $ref: '#/components/responses/bad_request'
+ "415":
+ $ref: '#/components/responses/unsupported_media_type'
+ default:
+ $ref: '#/components/responses/oops'
+ parameters:
+ - in: query
+ name: cur
+ schema:
+ $ref: '#/components/schemas/token'
+ description: TKTK
+ - in: query
+ name: prev
+ schema:
+ $ref: '#/components/schemas/token'
+ description: TKTK
+ x-cli-ignore: true
+ /matcher/api/v1/internal/update_operation:
+ post:
+ tags:
+ - internal
+ - unstable
+ - matcher
+ operationId: UpdateOperation
+ responses:
+ "200":
+ description: TODO
+ headers:
+ Clair-Error:
+ $ref: '#/components/headers/Clair-Error'
+ content:
+ application/vnd.clair.affected_manifests.v1+json:
+ schema:
+ $ref: '#/components/schemas/affected_manifests'
+ application/json:
+ schema:
+ $ref: '#/components/schemas/affected_manifests'
+ "400":
+ $ref: '#/components/responses/bad_request'
+ "415":
+ $ref: '#/components/responses/unsupported_media_type'
+ default:
+ $ref: '#/components/responses/oops'
+ x-cli-ignore: true
+ /matcher/api/v1/vulnerability_report/{digest}:
+ get:
+ operationId: GetVulnerabilityReport
+ responses:
+ "400":
+ $ref: '#/components/responses/bad_request'
+ "415":
+ $ref: '#/components/responses/unsupported_media_type'
+ default:
+ $ref: '#/components/responses/oops'
+ "201":
+ description: Vulnerability Report Created
+ content:
+ application/vnd.clair.vulnerability_report.v1+json:
+ schema:
+ $ref: '#/components/schemas/vulnerability_report'
+ application/json:
+ schema:
+ $ref: '#/components/schemas/vulnerability_report'
+ "404":
+ $ref: '#/components/responses/not_found'
+ tags:
+ - matcher
+ summary: Retrieve a VulnerabilityReport for the referenced manifest.
+      description: Given a Manifest's content addressable hash, a VulnerabilityReport will be created. The Manifest **must** have been Indexed first via the Index endpoint.
+ parameters:
+ - $ref: '#/components/parameters/digest'
+ /notifier/api/v1/notification/{id}:
+ parameters:
+ - in: path
+ name: id
+ required: true
+ schema:
+ $ref: '#/components/schemas/token'
+ description: A notification ID returned by a callback
+ delete:
+ operationId: DeleteNotification
+ responses:
+ "200":
+ description: Success
+ headers:
+ Clair-Error:
+ $ref: '#/components/headers/Clair-Error'
+ "400":
+ $ref: '#/components/responses/bad_request'
+ "415":
+ $ref: '#/components/responses/unsupported_media_type'
+ default:
+ $ref: '#/components/responses/oops'
+ "204":
+ description: TODO
+ tags:
+ - notifier
+ summary: Delete the referenced notification set.
+ description: |-
+ Issues a delete of the provided notification id and all associated notifications.
+        After this delete, clients will no longer be able to retrieve notifications.
+ get:
+ operationId: GetNotification
+ parameters:
+ - in: query
+ name: page_size
+ schema:
+ type: integer
+ description: The maximum number of notifications to deliver in a single page.
+ - in: query
+ name: next
+ schema:
+ type: string
+        description: The next page to fetch, by ID. This value is typically provided in the initial response's "page.next" field. The first request should omit this field.
+ responses:
+ "200":
+ description: A paginated list of notifications
+ headers:
+ Clair-Error:
+ $ref: '#/components/headers/Clair-Error'
+ content:
+ application/vnd.clair.notification_page.v1+json:
+ schema:
+ $ref: '#/components/schemas/notification_page'
+ application/json:
+ schema:
+ $ref: '#/components/schemas/notification_page'
+ "400":
+ $ref: '#/components/responses/bad_request'
+ "415":
+ $ref: '#/components/responses/unsupported_media_type'
+ default:
+ $ref: '#/components/responses/oops'
+ "304":
+ description: Not modified
+ tags:
+ - notifier
+ summary: Retrieve pages of the referenced notification set.
+ description: By performing a GET with an id as a path parameter, the client will retrieve a paginated response of notification objects.
+security: []
+webhooks:
+ notification:
+ post:
+ tags:
+ - notifier
+ requestBody:
+ content:
+ application/vnd.clair.notification.v1+json:
+ schema:
+ $ref: '#/components/schemas/notification'
+ responses:
+ "200":
+ description: TODO
+components:
+ schemas:
+ token:
+ type: string
+ description: An opaque token previously obtained from the service.
+ affected_manifests:
+ $id: https://clairproject.org/api/http/v1/affected_manifests.schema.json
+ $schema: https://json-schema.org/draft/2020-12/schema
+ title: Affected Manifests
+ type: object
+ description: |-
+ **This is an internal type, documented for completeness.**
+
+ Manifests affected by the specified vulnerability objects.
+ properties:
+ vulnerabilities:
+ type: object
+ description: Vulnerability objects.
+ additionalProperties:
+ $ref: vulnerability.schema.json
+ vulnerable_manifests:
+ type: object
+ description: Mapping of manifest digests to vulnerability identifiers.
+ additionalProperties:
+ type: array
+ items:
+ type: string
+ description: An identifier to be used in the "#/vulnerabilities" object.
+ required:
+ - vulnerable_manifests
+ bulk_delete:
+ $id: https://clairproject.org/api/http/v1/bulk_delete.schema.json
+ $schema: https://json-schema.org/draft/2020-12/schema
+ title: Bulk Delete
+ type: array
+ description: Array of manifest digests to delete from the system.
+ items:
+ $ref: digest.schema.json
+ description: Manifest digest to delete from the system.
+ cpe:
+ $id: https://clairproject.org/api/http/v1/cpe.schema.json
+ $schema: https://json-schema.org/draft/2020-12/schema
+ title: Common Platform Enumeration Name
+ description: This is a CPE Name in either v2.2 "URI" form or v2.3 "Formatted String" form.
+ $comment: Clair only produces v2.3 CPE Names. Any v2.2 Names will be normalized into v2.3 form.
+ oneOf:
+ - description: 'This is the CPE 2.2 regexp: https://cpe.mitre.org/specification/2.2/cpe-language_2.2.xsd'
+ type: string
+ pattern: ^[c][pP][eE]:/[AHOaho]?(:[A-Za-z0-9\._\-~%]*){0,6}$
+ - description: 'This is the CPE 2.3 regexp: https://csrc.nist.gov/schema/cpe/2.3/cpe-naming_2.3.xsd'
+ type: string
+ pattern: ^cpe:2\.3:[aho\*\-](:(((\?*|\*?)([a-zA-Z0-9\-\._]|(\\[\\\*\?!"#$$%&'\(\)\+,/:;<=>@\[\]\^`\{\|}~]))+(\?*|\*?))|[\*\-])){5}(:(([a-zA-Z]{2,3}(-([a-zA-Z]{2}|[0-9]{3}))?)|[\*\-]))(:(((\?*|\*?)([a-zA-Z0-9\-\._]|(\\[\\\*\?!"#$$%&'\(\)\+,/:;<=>@\[\]\^`\{\|}~]))+(\?*|\*?))|[\*\-])){4}$
+ examples:
+ - cpe:/a:microsoft:internet_explorer:8.0.6001:beta
+ - cpe:2.3:a:microsoft:internet_explorer:8.0.6001:beta:*:*:*:*:*:*
+ digest:
+ $id: https://clairproject.org/api/http/v1/digest.schema.json
+ $schema: https://json-schema.org/draft/2020-12/schema
+ title: Digest
+ description: A digest acts as a content identifier, enabling content addressability.
+ oneOf:
+ - $comment: 'SHA256: MUST be implemented'
+ description: SHA256
+ type: string
+ pattern: ^sha256:[a-f0-9]{64}$
+ - $comment: 'SHA512: MAY be implemented'
+ description: SHA512
+ type: string
+ pattern: ^sha512:[a-f0-9]{128}$
+ - $comment: 'BLAKE3: MAY be implemented'
+ description: |-
+ BLAKE3
+
+ **Currently not implemented.**
+ type: string
+ pattern: ^blake3:[a-f0-9]{64}$
+ distribution:
+ $id: https://clairproject.org/api/http/v1/distribution.schema.json
+ $schema: https://json-schema.org/draft/2020-12/schema
+ title: Distribution
+ type: object
+ description: Distribution is the accompanying system context of a Package.
+ properties:
+ id:
+ description: Unique ID for this Distribution. May be unique to the response document, not the whole system.
+ type: string
+ did:
+ description: A lower-case string (no spaces or other characters outside of 0–9, a–z, ".", "_", and "-") identifying the operating system, excluding any version information and suitable for processing by scripts or usage in generated filenames.
+ type: string
+ name:
+ description: A string identifying the operating system.
+ type: string
+ version:
+ description: A string identifying the operating system version, excluding any OS name information, possibly including a release code name, and suitable for presentation to the user.
+ type: string
+ version_code_name:
+ description: A lower-case string (no spaces or other characters outside of 0–9, a–z, ".", "_", and "-") identifying the operating system release code name, excluding any OS name information or release version, and suitable for processing by scripts or usage in generated filenames.
+ type: string
+ version_id:
+ description: A lower-case string (mostly numeric, no spaces or other characters outside of 0–9, a–z, ".", "_", and "-") identifying the operating system version, excluding any OS name information or release code name.
+ type: string
+ arch:
+ description: A string identifying the OS architecture.
+ type: string
+ cpe:
+ description: Common Platform Enumeration name.
+ $ref: cpe.schema.json
+ pretty_name:
+ description: A pretty operating system name in a format suitable for presentation to the user.
+ type: string
+ additionalProperties: false
+ required:
+ - id
+ examples:
+ - id: "1"
+ did: ubuntu
+ name: Ubuntu
+ version: 18.04.3 LTS (Bionic Beaver)
+ version_code_name: bionic
+ version_id: "18.04"
+ pretty_name: Ubuntu 18.04.3 LTS
+ environment:
+ $id: https://clairproject.org/api/http/v1/environment.schema.json
+ $schema: https://json-schema.org/draft/2020-12/schema
+ title: Environment
+ type: object
+ description: Environment describes the surrounding environment a package was discovered in.
+ properties:
+ package_db:
+ description: The database the associated Package was discovered in.
+ type: string
+ distribution_id:
+ description: The ID of the Distribution of the associated Package.
+ type: string
+ introduced_in:
+ description: The Layer the associated Package was introduced in.
+ $ref: digest.schema.json
+ repository_ids:
+ description: The IDs of the Repositories of the associated Package.
+ type: array
+ items:
+ type: string
+ additionalProperties: false
+ examples:
+ - value:
+ package_db: var/lib/dpkg/status
+ introduced_in: sha256:35c102085707f703de2d9eaad8752d6fe1b8f02b5d2149f1d8357c9cc7fb7d0a
+ distribution_id: "1"
+ error:
+ $id: https://clairproject.org/api/http/v1/error.schema.json
+ $schema: https://json-schema.org/draft/2020-12/schema
+ title: Error
+ type: object
+ description: A general error response.
+ properties:
+ code:
+ type: string
+ description: a code for this particular error
+ message:
+ type: string
+ description: a message with further detail
+ required:
+ - message
+ index_report:
+ $id: https://clairproject.org/api/http/v1/index_report.schema.json
+ $schema: https://json-schema.org/draft/2020-12/schema
+ title: IndexReport
+ type: object
+ description: An index of the contents of a Manifest.
+ properties:
+ manifest_hash:
+ $ref: digest.schema.json
+ description: The Manifest's digest.
+ state:
+ type: string
+          description: The current state of the index operation.
+ err:
+ type: string
+          description: An error message, in the event of an unsuccessful index.
+ success:
+ type: boolean
+          description: A bool indicating a successful index.
+ packages:
+ type: object
+ additionalProperties:
+ $ref: package.schema.json
+ distributions:
+ type: object
+ additionalProperties:
+ $ref: distribution.schema.json
+ repository:
+ type: object
+ additionalProperties:
+ $ref: repository.schema.json
+ environments:
+ type: object
+ additionalProperties:
+ type: array
+ items:
+ $ref: environment.schema.json
+ additionalProperties: false
+ required:
+ - manifest_hash
+ - state
+ - success
+ index_state:
+ $id: https://clairproject.org/api/http/v1/index_state.schema.json
+ $schema: https://json-schema.org/draft/2020-12/schema
+ title: Index State
+ type: object
+ description: Information on the state of the indexer system.
+ properties:
+ state:
+ type: string
+ description: an opaque token
+ required:
+ - state
+ layer:
+ $id: https://clairproject.org/api/http/v1/layer.schema.json
+ $schema: https://json-schema.org/draft/2020-12/schema
+ title: Layer
+ type: object
+ description: Layer is a description of a container layer. It should contain enough information to fetch the layer.
+ properties:
+ hash:
+ $ref: digest.schema.json
+ description: Digest of the layer blob.
+ uri:
+ type: string
+ description: A URI indicating where the layer blob can be downloaded from.
+ headers:
+ description: Any additional HTTP-style headers needed for requesting layers.
+ type: object
+ patternProperties:
+ ^[a-zA-Z0-9\-_]+$:
+ type: array
+ items:
+ type: string
+ media_type:
+ description: The OCI Layer media type for this layer.
+ type: string
+ pattern: ^application/vnd\.oci\.image\.layer\.v1\.tar(\+(gzip|zstd))?$
+ additionalProperties: false
+ required:
+ - hash
+ - uri
+ manifest:
+ $id: https://clairproject.org/api/http/v1/manifest.schema.json
+ $schema: https://json-schema.org/draft/2020-12/schema
+ title: Manifest
+ type: object
+ description: A description of an OCI Image Manifest.
+ properties:
+ hash:
+ $ref: digest.schema.json
+ description: |-
+ The OCI Image Manifest's digest.
+
+ This is used as an identifier throughout the system. This **SHOULD** be the same as the OCI Image Manifest's digest, but this is not enforced.
+ layers:
+ type: array
+ description: The OCI Layers making up the Image, in order.
+ items:
+ $ref: layer.schema.json
+ additionalProperties: false
+ required:
+ - hash
+ examples:
+ - hash: sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
+ layers:
+ - hash: sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b856
+ uri: https://storage.example.com/blob/e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b856
+ normalized_severity:
+ $id: https://clairproject.org/api/http/v1/normalized_severity.schema.json
+ $schema: https://json-schema.org/draft/2020-12/schema
+ title: Normalized Severity
+ description: Standardized severity values.
+ enum:
+ - Unknown
+ - Negligible
+ - Low
+ - Medium
+ - High
+ - Critical
+ notification_page:
+ $id: https://clairproject.org/api/http/v1/notification_page.schema.json
+ $schema: https://json-schema.org/draft/2020-12/schema
+ title: Notification Page
+ type: object
+ description: A page description and list of notifications.
+ properties:
+ page:
+ description: An object informing the client the next page to retrieve.
+ type: object
+ properties:
+ size:
+ type: integer
+ next:
+ oneOf:
+ - type: string
+ - const: "-1"
+ additionalProperties: false
+ required:
+ - size
+ notifications:
+ description: Notifications within this page.
+ type: array
+ items:
+ $ref: notification.schema.json
+ additionalProperties: false
+ required:
+ - page
+ - notifications
+ examples:
+ - page:
+ size: 0
+ next: "-1"
+ notifications: []
+ notification:
+ $id: https://clairproject.org/api/http/v1/notification.schema.json
+ $schema: https://json-schema.org/draft/2020-12/schema
+ title: Notification
+ type: object
+ description: A change in a manifest affected by a vulnerability.
+ properties:
+ id:
+ description: Unique identifier for this notification.
+ type: string
+ manifest:
+ $ref: digest.schema.json
+ description: The digest of the manifest affected by the provided vulnerability.
+ reason:
+          description: The reason for the notification.
+ enum:
+ - added
+ - removed
+ vulnerability:
+ $ref: vulnerability_summary.schema.json
+ additionalProperties: false
+ required:
+ - id
+ - manifest
+ - reason
+ - vulnerability
+ package:
+ $id: https://clairproject.org/api/http/v1/package.schema.json
+ $schema: https://json-schema.org/draft/2020-12/schema
+ title: Package
+ type: object
+ description: Description of installed software.
+ properties:
+ id:
+ description: Unique ID for this Package. May be unique to the response document, not the whole system.
+ type: string
+ name:
+ description: |-
+ Identifier of this Package.
+
+ The uniqueness and scoping of this name depends on the packaging system.
+ type: string
+ version:
+ description: Version of this Package, as reported by the packaging system.
+ type: string
+ kind:
+ description: The "kind" of this Package.
+ enum:
+ - binary
+ - source
+ default: binary
+ source:
+ $ref: package.schema.json
+ description: Source Package that produced the current binary Package, if known.
+ normalized_version:
+ description: |-
+            Normalized representation of the discovered version.
+
+            The format is not specified, but is guaranteed to be forward compatible.
+ type: string
+ module:
+ description: |-
+ An identifier for intra-Repository grouping of packages.
+
+ Likely only relevant on rpm-based systems.
+ type: string
+ arch:
+ description: Native architecture for the Package.
+ type: string
+          $comment: This should become an enum in the future.
+ cpe:
+ $ref: cpe.schema.json
+ description: CPE Name for the Package.
+ additionalProperties: false
+ required:
+ - name
+ - version
+ examples:
+ - id: "10"
+ name: libapt-pkg5.0
+ version: 1.6.11
+ kind: binary
+ normalized_version: ""
+ arch: x86
+ module: ""
+ cpe: ""
+ source:
+ id: "9"
+ name: apt
+ version: 1.6.11
+ kind: source
+ source: null
+ range:
+ $id: https://clairproject.org/api/http/v1/range.schema.json
+ $schema: https://json-schema.org/draft/2020-12/schema
+ title: Range
+ type: object
+ description: A range of versions.
+ properties:
+ '[':
+ type: string
+ description: Lower bound, inclusive.
+ ):
+ type: string
+ description: Upper bound, exclusive.
+ minProperties: 1
+ additionalProperties: false
+ repository:
+ $id: https://clairproject.org/api/http/v1/repository.schema.json
+ $schema: https://json-schema.org/draft/2020-12/schema
+ title: Repository
+ type: object
+      description: Description of a software repository.
+ properties:
+ id:
+ description: Unique ID for this Repository. May be unique to the response document, not the whole system.
+ type: string
+ name:
+ description: Human-relevant name for the Repository.
+ type: string
+ key:
+ description: Machine-relevant name for the Repository.
+ type: string
+ uri:
+ description: URI describing the Repository.
+ type: string
+ format: uri
+ cpe:
+ description: CPE name for the Repository.
+ $ref: cpe.schema.json
+ additionalProperties: false
+ required:
+ - id
+ update_diff:
+ $id: https://clairproject.org/api/http/v1/update_diff.schema.json
+ $schema: https://json-schema.org/draft/2020-12/schema
+ title: Update Difference
+ type: object
+ description: |-
+ **This is an internal type, documented for completeness.**
+
+ TKTK
+ additionalProperties: false
+ required: []
+ vulnerability_core:
+ $id: https://clairproject.org/api/http/v1/vulnerability_core.schema.json
+ $schema: https://json-schema.org/draft/2020-12/schema
+ title: Vulnerability Core
+ type: object
+ description: The core elements of vulnerabilities in the Clair system.
+ properties:
+ name:
+ type: string
+ description: Human-readable name, as presented in the vendor data.
+ fixed_in_version:
+ type: string
+ description: Version string, as presented in the vendor data.
+ severity:
+ type: string
+ description: Severity, as presented in the vendor data.
+ normalized_severity:
+ $ref: normalized_severity.schema.json
+          description: A well-defined set of severity strings guaranteed to be present.
+ range:
+ $ref: range.schema.json
+ description: Range of versions the vulnerability applies to.
+ arch_op:
+ description: Flag indicating how the referenced package's "arch" member should be interpreted.
+ enum:
+ - equals
+ - not equals
+ - pattern match
+ package:
+ $ref: package.schema.json
+ description: A package description
+ distribution:
+ $ref: distribution.schema.json
+ description: A distribution description
+ repository:
+ $ref: repository.schema.json
+ description: A repository description
+ required:
+ - name
+ - normalized_severity
+ dependentRequired:
+ package:
+ - arch_op
+ anyOf:
+ - required:
+ - package
+ - required:
+ - repository
+ - required:
+ - distribution
+ vulnerability_report:
+ $id: https://clairproject.org/api/http/v1/vulnerability_report.schema.json
+ $schema: https://json-schema.org/draft/2020-12/schema
+ title: VulnerabilityReport
+ type: object
+ description: A report expressing discovered packages, package environments, and package vulnerabilities within a Manifest.
+ properties:
+ manifest_hash:
+ $ref: digest.schema.json
+ packages:
+ type: object
+ description: A map of Package objects indexed by "/id"
+ additionalProperties:
+ $ref: package.schema.json
+ distributions:
+ type: object
+ description: A map of Distribution objects indexed by "/id"
+ additionalProperties:
+ $ref: distribution.schema.json
+ repository:
+ type: object
+ description: A map of Repository objects indexed by "/id"
+ additionalProperties:
+ $ref: repository.schema.json
+ environments:
+ type: object
+ description: A map of Environment arrays indexed by a Package "/id"
+ additionalProperties:
+ type: array
+ items:
+ $ref: environment.schema.json
+ vulnerabilities:
+ type: object
+ description: A map of Vulnerabilities indexed by "/id"
+ additionalProperties:
+ $ref: vulnerability.schema.json
+ package_vulnerabilities:
+ type: object
+ description: A mapping of Vulnerability "/id" lists indexed by Package "/id"
+ additionalProperties:
+ type: array
+ items:
+ type: string
+ enrichments:
+ type: object
+ description: A mapping of extra "enrichment" data by type
+ additionalProperties:
+ type: array
+ additionalProperties: false
+ required:
+ - distributions
+ - environments
+ - manifest_hash
+ - packages
+ - package_vulnerabilities
+ - vulnerabilities
+ vulnerability:
+ $id: https://clairproject.org/api/http/v1/vulnerability.schema.json
+ $schema: https://json-schema.org/draft/2020-12/schema
+ title: Vulnerability
+ type: object
+ description: Description of a software flaw.
+ $ref: vulnerability_core.schema.json
+ properties:
+ id:
+ description: ""
+ type: string
+ updater:
+ description: ""
+ type: string
+ description:
+ description: ""
+ type: string
+ issued:
+ description: ""
+ type: string
+ format: date-time
+ links:
+ description: ""
+ type: string
+ unevaluatedProperties: false
+ required:
+ - id
+ - updater
+ examples:
+ - id: "356835"
+ updater: ubuntu
+ name: CVE-2009-5155
+ description: In the GNU C Library (aka glibc or libc6) before 2.28, parse_reg_exp in posix/regcomp.c misparses alternatives, which allows attackers to cause a denial of service (assertion failure and application exit) or trigger an incorrect result by attempting a regular-expression match.
+ links: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2009-5155 http://people.canonical.com/~ubuntu-security/cve/2009/CVE-2009-5155.html https://sourceware.org/bugzilla/show_bug.cgi?id=11053 https://debbugs.gnu.org/cgi/bugreport.cgi?bug=22793 https://debbugs.gnu.org/cgi/bugreport.cgi?bug=32806 https://debbugs.gnu.org/cgi/bugreport.cgi?bug=34238 https://sourceware.org/bugzilla/show_bug.cgi?id=18986
+ severity: Low
+ normalized_severity: Low
+ package:
+ id: "0"
+ name: glibc
+ version: 2.27-0ubuntu1
+ kind: binary
+ source: null
+ dist:
+ id: "0"
+ did: ubuntu
+ name: Ubuntu
+ version: 18.04.3 LTS (Bionic Beaver)
+ version_code_name: bionic
+ version_id: "18.04"
+ arch: amd64
+ repo:
+ id: "0"
+ name: Ubuntu 18.04.3 LTS
+ issued: "2019-10-12T07:20:50.52Z"
+ fixed_in_version: 2.28-0ubuntu1
+ vulnerability_summary:
+ $id: https://clairproject.org/api/http/v1/vulnerability_summary.schema.json
+ $schema: https://json-schema.org/draft/2020-12/schema
+ title: Vulnerability Summary
+ type: object
+ description: A summary of a vulnerability.
+ $ref: vulnerability_core.schema.json
+ unevaluatedProperties: false
+ examples:
+ - name: CVE-2009-5155
+ description: In the GNU C Library (aka glibc or libc6) before 2.28, parse_reg_exp in posix/regcomp.c misparses alternatives, which allows attackers to cause a denial of service (assertion failure and application exit) or trigger an incorrect result by attempting a regular-expression match.
+ normalized_severity: Low
+ fixed_in_version: v0.0.1
+ links: http://link-to-advisory
+ package:
+ id: "0"
+ name: glibc
+ version: v0.0.1-rc1
+ dist:
+ id: "0"
+ did: ubuntu
+ name: Ubuntu
+ version: 18.04.3 LTS (Bionic Beaver)
+ version_code_name: bionic
+ version_id: "18.04"
+ repo:
+ id: "0"
+ name: Ubuntu 18.04.3 LTS
+ responses:
+ bad_request:
+ description: Bad Request
+ content:
+ application/vnd.clair.error.v1+json:
+ schema:
+ $ref: '#/components/schemas/error'
+ application/json:
+ schema:
+ $ref: '#/components/schemas/error'
+ oops:
+ description: Internal Server Error
+ content:
+ application/vnd.clair.error.v1+json:
+ schema:
+ $ref: '#/components/schemas/error'
+ application/json:
+ schema:
+ $ref: '#/components/schemas/error'
+ not_found:
+ description: Not Found
+ content:
+ application/vnd.clair.error.v1+json:
+ schema:
+ $ref: '#/components/schemas/error'
+ application/json:
+ schema:
+ $ref: '#/components/schemas/error'
+ unsupported_media_type:
+ description: Unsupported Media Type
+ content:
+ application/vnd.clair.error.v1+json:
+ schema:
+ $ref: '#/components/schemas/error'
+ application/json:
+ schema:
+ $ref: '#/components/schemas/error'
+ parameters:
+ digest:
+ description: OCI-compatible digest of a referred object.
+ name: digest
+ in: path
+ schema:
+ $ref: '#/components/schemas/digest'
+ required: true
+ headers:
+ Clair-Error:
+ description: This is a trailer containing any errors encountered while writing the response.
+ style: simple
+ schema:
+ type: string
+ Etag:
+ description: HTTP [ETag header](https://httpwg.org/specs/rfc9110.html#field.etag)
+ style: simple
+ schema:
+ type: string
+ Link:
+ description: Web Linking [Link header](https://httpwg.org/specs/rfc8288.html#header)
+ style: simple
+ schema:
+ type: string
+ Location:
+ description: HTTP [Location header](https://httpwg.org/specs/rfc9110.html#field.location)
+ style: simple
+ required: true
+ schema:
+ type: string
+ securitySchemes:
+ psk:
+ type: http
+ scheme: bearer
+ bearerFormat: JWT with preshared key and allow-listed issuers
+ description: Clair's authentication scheme.
diff --git a/httptransport/api/v1/openapi.yaml.etag b/httptransport/api/v1/openapi.yaml.etag
new file mode 100644
index 0000000000..488ba1d6ca
--- /dev/null
+++ b/httptransport/api/v1/openapi.yaml.etag
@@ -0,0 +1 @@
+"3dabe4315e4538bd3636a169eaf8c2d91a9159823f74171029afef965b1a27c4"
\ No newline at end of file
diff --git a/httptransport/client/indexer.go b/httptransport/client/indexer.go
index bc14b82067..60f734495a 100644
--- a/httptransport/client/indexer.go
+++ b/httptransport/client/indexer.go
@@ -52,7 +52,6 @@ func (s *HTTP) AffectedManifests(ctx context.Context, v []claircore.Vulnerabilit
switch ct := req.Header.Get("content-type"); ct {
case "", `application/json`:
dec := codec.GetDecoder(resp.Body)
- defer codec.PutDecoder(dec)
if err := dec.Decode(&a); err != nil {
return nil, err
}
@@ -98,7 +97,6 @@ func (s *HTTP) Index(ctx context.Context, manifest *claircore.Manifest) (*clairc
switch ct := resp.Header.Get("content-type"); ct {
case "", `application/json`:
dec := codec.GetDecoder(resp.Body)
- defer codec.PutDecoder(dec)
if err := dec.Decode(&ir); err != nil {
return nil, err
}
@@ -142,7 +140,6 @@ func (s *HTTP) IndexReport(ctx context.Context, manifest claircore.Digest) (*cla
ir := &claircore.IndexReport{}
dec := codec.GetDecoder(resp.Body)
- defer codec.PutDecoder(dec)
if err := dec.Decode(ir); err != nil {
return nil, false, &clairerror.ErrBadIndexReport{E: err}
}
@@ -199,7 +196,6 @@ func (s *HTTP) DeleteManifests(ctx context.Context, d ...claircore.Digest) ([]cl
}
var ret []claircore.Digest
dec := codec.GetDecoder(resp.Body)
- defer codec.PutDecoder(dec)
if err := dec.Decode(&ret); err != nil {
return nil, fmt.Errorf("failed to decode response: %w", err)
}
diff --git a/httptransport/client/matcher.go b/httptransport/client/matcher.go
index ba57359385..7e8f19937c 100644
--- a/httptransport/client/matcher.go
+++ b/httptransport/client/matcher.go
@@ -51,7 +51,6 @@ func (c *HTTP) Scan(ctx context.Context, ir *claircore.IndexReport) (*claircore.
switch ct := req.Header.Get("content-type"); ct {
case "", `application/json`:
dec := codec.GetDecoder(resp.Body)
- defer codec.PutDecoder(dec)
if err := dec.Decode(&vr); err != nil {
return nil, err
}
@@ -203,7 +202,6 @@ func (c *HTTP) updateOperations(ctx context.Context, req *http.Request, cache *u
case http.StatusOK:
m := make(map[string][]driver.UpdateOperation)
dec := codec.GetDecoder(res.Body)
- defer codec.PutDecoder(dec)
if err := dec.Decode(&m); err != nil {
return nil, err
}
@@ -254,7 +252,6 @@ func (c *HTTP) UpdateDiff(ctx context.Context, prev, cur uuid.UUID) (*driver.Upd
}
d := driver.UpdateDiff{}
dec := codec.GetDecoder(res.Body)
- defer codec.PutDecoder(dec)
if err := dec.Decode(&d); err != nil {
return nil, err
}
diff --git a/httptransport/common.go b/httptransport/common.go
index 375e2411b7..08832bd651 100644
--- a/httptransport/common.go
+++ b/httptransport/common.go
@@ -53,6 +53,7 @@ func pickContentType(w http.ResponseWriter, r *http.Request, allow []string) err
w.Header().Set("content-type", allow[0])
return nil
}
+ w.Header().Add("Vary", "Accept")
var acceptable []accept
for _, part := range as {
for _, s := range strings.Split(part, ",") {
@@ -84,6 +85,7 @@ func pickContentType(w http.ResponseWriter, r *http.Request, allow []string) err
}
}
}
+ // TODO(hank) This isn't quite right.
w.WriteHeader(http.StatusUnsupportedMediaType)
return ErrMediaType
}
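The hunk above adds a `Vary: Accept` header so caches key responses on content negotiation. As a simplified sketch of the kind of q-value negotiation `pickContentType` performs (not Clair's actual implementation: wildcards and media-type parameters other than `q` are ignored):

```go
package main

import (
	"fmt"
	"sort"
	"strconv"
	"strings"
)

// pick returns the highest-q acceptable media type from allow, or "".
// Simplified: a full RFC 9110 implementation also handles wildcards
// and other media-type parameters.
func pick(accept string, allow []string) string {
	type cand struct {
		mt string
		q  float64
	}
	var cands []cand
	for _, part := range strings.Split(accept, ",") {
		mt, params, _ := strings.Cut(strings.TrimSpace(part), ";")
		q := 1.0 // q defaults to 1 when absent
		for _, p := range strings.Split(params, ";") {
			if k, v, ok := strings.Cut(strings.TrimSpace(p), "="); ok && k == "q" {
				if f, err := strconv.ParseFloat(v, 64); err == nil {
					q = f
				}
			}
		}
		cands = append(cands, cand{strings.TrimSpace(mt), q})
	}
	// Stable sort preserves the client's order among equal q-values.
	sort.SliceStable(cands, func(i, j int) bool { return cands[i].q > cands[j].q })
	for _, c := range cands {
		for _, a := range allow {
			if c.mt == a {
				return a
			}
		}
	}
	return ""
}

func main() {
	allow := []string{"application/openapi+json", "application/json"}
	fmt.Println(pick("application/json; q=0.4, application/openapi+json", allow))
}
```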
diff --git a/httptransport/discoveryhandler.go b/httptransport/discoveryhandler.go
index 2d8d0acff0..c4e2524a89 100644
--- a/httptransport/discoveryhandler.go
+++ b/httptransport/discoveryhandler.go
@@ -3,7 +3,7 @@ package httptransport
import (
"bytes"
"context"
- _ "embed" // for json and etag
+ _ "embed" // for OpenAPI docs and etags
"errors"
"io"
"net/http"
@@ -16,24 +16,30 @@ import (
"github.com/quay/clair/v4/middleware/compress"
)
-//go:generate go run openapigen.go
+//go:generate env -C api zsh ./openapi.zsh
var (
- //go:embed openapi.json
+ //go:embed api/v1/openapi.json
openapiJSON []byte
- //go:embed openapi.etag
+ //go:embed api/v1/openapi.json.etag
openapiJSONEtag string
+ //go:embed api/v1/openapi.yaml
+ openapiYAML []byte
+ //go:embed api/v1/openapi.yaml.etag
+ openapiYAMLEtag string
)
// DiscoveryHandler serves the embedded OpenAPI spec.
func DiscoveryHandler(_ context.Context, prefix string, topt otelhttp.Option) http.Handler {
- allow := []string{`application/json`, `application/vnd.oai.openapi+json`}
+ allow := []string{
+ `application/openapi+json`, `application/openapi+yaml`, // New types: https://datatracker.ietf.org/doc/draft-ietf-httpapi-rest-api-mediatypes/
+ `application/json`, `application/yaml`, // Format types.
+ `application/vnd.oai.openapi+json`, `application/vnd.oai.openapi+yaml`, // Older vendor-tree types.
+ }
// These functions are written back-to-front.
var inner http.Handler = http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
- if r.Method != http.MethodGet {
- apiError(ctx, w, http.StatusMethodNotAllowed, "endpoint only allows GET")
- }
+ checkMethod(ctx, w, r, http.MethodGet)
switch err := pickContentType(w, r, allow); {
case errors.Is(err, nil):
case errors.Is(err, ErrMediaType):
@@ -41,10 +47,23 @@ func DiscoveryHandler(_ context.Context, prefix string, topt otelhttp.Option) ht
default:
apiError(ctx, w, http.StatusInternalServerError, "unexpected error: %v", err)
}
- w.Header().Set("etag", openapiJSONEtag)
+ h := w.Header()
+ // The [pickContentType] call will have populated this or errored.
+ kind := h.Get(`Content-Type`)
+ var src *bytes.Reader
+ switch kind[len(kind)-4:] {
+ case "json":
+ h.Set("etag", openapiJSONEtag)
+ src = bytes.NewReader(openapiJSON)
+ case "yaml":
+ h.Set("etag", openapiYAMLEtag)
+ src = bytes.NewReader(openapiYAML)
+ default:
+ apiError(ctx, w, http.StatusInternalServerError, "unexpected error: unknown content-type kind: %q", kind)
+ }
var err error
defer writerError(w, &err)()
- _, err = io.Copy(w, bytes.NewReader(openapiJSON))
+ _, err = io.Copy(w, src)
})
inner = otelhttp.NewHandler(
compress.Handler(discoverywrapper.wrap(prefix, inner)),
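The handler above dispatches on the last four characters of the negotiated Content-Type to pick the JSON or YAML document. A standalone sketch of that dispatch using `strings.HasSuffix` instead of slicing (file names here are illustrative, not the embedded spec paths):

```go
package main

import (
	"fmt"
	"strings"
)

// specFor maps a negotiated content type to which embedded spec to
// serve, keyed on the "+json"/"+yaml" style suffix. HasSuffix avoids
// the panic a fixed-width slice would hit on a too-short string.
func specFor(contentType string) (string, bool) {
	switch {
	case strings.HasSuffix(contentType, "json"):
		return "openapi.json", true
	case strings.HasSuffix(contentType, "yaml"):
		return "openapi.yaml", true
	}
	return "", false
}

func main() {
	fmt.Println(specFor("application/vnd.oai.openapi+yaml"))
}
```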
diff --git a/httptransport/discoveryhandler_test.go b/httptransport/discoveryhandler_test.go
index 4ebee7d0fd..17c645bf88 100644
--- a/httptransport/discoveryhandler_test.go
+++ b/httptransport/discoveryhandler_test.go
@@ -1,108 +1,72 @@
package httptransport
import (
- "bytes"
"context"
"encoding/json"
"io"
"net/http"
"net/http/httptest"
- "os"
- "os/exec"
- "path/filepath"
- "strings"
"testing"
- "github.com/google/go-cmp/cmp"
- "github.com/google/go-cmp/cmp/cmpopts"
"github.com/quay/zlog"
"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
- "go.opentelemetry.io/otel/trace"
+ "go.opentelemetry.io/otel/trace/noop"
)
-func TestDiscoveryEndpoint(t *testing.T) {
- ctx := zlog.Test(context.Background(), t)
- h := DiscoveryHandler(ctx, OpenAPIV1Path, otelhttp.WithTracerProvider(trace.NewNoopTracerProvider()))
+func TestDiscovery(t *testing.T) {
+ t.Run("Endpoint", func(t *testing.T) {
+ ctx := zlog.Test(context.Background(), t)
+ h := DiscoveryHandler(ctx, OpenAPIV1Path, otelhttp.WithTracerProvider(noop.NewTracerProvider()))
- r := httptest.NewRecorder()
- req := httptest.NewRequest("GET", OpenAPIV1Path, nil).WithContext(ctx)
- req.Header.Set("Accept", "application/yaml, application/json; q=0.4, application/vnd.oai.openapi+json; q=1.0")
- h.ServeHTTP(r, req)
-
- resp := r.Result()
- if resp.StatusCode != http.StatusOK {
- t.Fatalf("got status code: %v want status code: %v", resp.StatusCode, http.StatusOK)
- }
- if got, want := resp.Header.Get("content-type"), "application/vnd.oai.openapi+json"; got != want {
- t.Errorf("got: %q, want: %q", got, want)
- }
-
- buf, err := io.ReadAll(resp.Body)
- if err != nil {
- t.Fatalf("failed to ready response body: %v", err)
- }
-
- m := map[string]interface{}{}
- err = json.Unmarshal(buf, &m)
- if err != nil {
- t.Fatalf("failed to json parse returned bytes: %v", err)
- }
-
- if _, ok := m["openapi"]; !ok {
- t.Fatalf("returned json did not container openapi key at the root")
- }
- t.Logf("openapi verion: %v", m["openapi"])
-}
-
-func TestDiscoveryFailure(t *testing.T) {
- ctx := zlog.Test(context.Background(), t)
- h := DiscoveryHandler(ctx, OpenAPIV1Path, otelhttp.WithTracerProvider(trace.NewNoopTracerProvider()))
-
- r := httptest.NewRecorder()
- // Needed because handlers exit the goroutine.
- done := make(chan struct{})
- go func() {
- defer close(done)
+ r := httptest.NewRecorder()
req := httptest.NewRequest("GET", OpenAPIV1Path, nil).WithContext(ctx)
- req.Header.Set("Accept", "application/yaml")
+ req.Header.Set("Accept", "application/yaml; q=0.4, application/json; q=0.4, application/vnd.oai.openapi+json; q=0.6, application/openapi+json")
h.ServeHTTP(r, req)
- }()
- <-done
-
- resp := r.Result()
- t.Log(resp.Status)
- if got, want := resp.StatusCode, http.StatusUnsupportedMediaType; got != want {
- t.Errorf("got status code: %v want status code: %v", got, want)
- }
-}
-func TestEmbedding(t *testing.T) {
- d := t.TempDir()
- var buf bytes.Buffer
- cmd := exec.Command("go", "run", "openapigen.go", "-in", "../openapi.yaml", "-out", d)
- cmd.Stdout = &buf
- cmd.Stderr = &buf
- t.Log(cmd.Args)
- if err := cmd.Run(); err != nil {
- t.Error(err)
- t.Error(buf.String())
- }
+ resp := r.Result()
+ if resp.StatusCode != http.StatusOK {
+ t.Fatalf("got status code: %v want status code: %v", resp.StatusCode, http.StatusOK)
+ }
+ if got, want := resp.Header.Get("content-type"), "application/openapi+json"; got != want {
+ t.Errorf("got: %q, want: %q", got, want)
+ }
- for _, n := range []string{
- "openapi.json", "openapi.etag"} {
- nf, err := os.ReadFile(filepath.Join(d, n))
+ buf, err := io.ReadAll(resp.Body)
if err != nil {
- t.Error(err)
- continue
+			t.Fatalf("failed to read response body: %v", err)
}
- of, err := os.ReadFile(n)
+
+ m := make(map[string]any)
+ err = json.Unmarshal(buf, &m)
if err != nil {
- t.Error(err)
- continue
+			t.Fatalf("failed to parse returned JSON: %v", err)
+ }
+
+ if _, ok := m["openapi"]; !ok {
+			t.Fatalf("returned JSON did not contain an openapi key at the root")
}
- if got, want := string(nf), string(of); !cmp.Equal(got, want) {
- t.Error(cmp.Diff(got, want, cmpopts.AcyclicTransformer("normalizeWhitespace", func(s string) []string { return strings.Split(s, "\n") })))
- t.Log("\n\tYou probably edited the openapi.yaml and forgot to run `go generate` here.")
+		t.Logf("openapi version: %v", m["openapi"])
+ })
+
+ t.Run("Failure", func(t *testing.T) {
+ ctx := zlog.Test(context.Background(), t)
+ h := DiscoveryHandler(ctx, OpenAPIV1Path, otelhttp.WithTracerProvider(noop.NewTracerProvider()))
+
+ r := httptest.NewRecorder()
+ // Needed because handlers exit the goroutine.
+ done := make(chan struct{})
+ go func() {
+ defer close(done)
+ req := httptest.NewRequest("GET", OpenAPIV1Path, nil).WithContext(ctx)
+ req.Header.Set("Accept", "application/xml")
+ h.ServeHTTP(r, req)
+ }()
+ <-done
+
+ resp := r.Result()
+ t.Log(resp.Status)
+ if got, want := resp.StatusCode, http.StatusUnsupportedMediaType; got != want {
+ t.Errorf("got status code: %v want status code: %v", got, want)
}
- }
+ })
}
diff --git a/httptransport/error.go b/httptransport/error.go
index f24a1bb110..f7ff28e6b1 100644
--- a/httptransport/error.go
+++ b/httptransport/error.go
@@ -1,14 +1,17 @@
package httptransport
import (
- "bytes"
"context"
- "encoding/json"
"errors"
"fmt"
"net/http"
+ "slices"
+ "strings"
"github.com/quay/zlog"
+
+ types "github.com/quay/clair/v4/httptransport/types/v1"
+ "github.com/quay/clair/v4/internal/codec"
)
// StatusClientClosedRequest is a nonstandard HTTP status code used when the
@@ -17,12 +20,13 @@ import (
// This convention is cribbed from Nginx.
const statusClientClosedRequest = 499
-// ApiError writes an untyped (that is, "application/json") error with the
+// ApiError writes a v1 error ("application/vnd.clair.error.v1+json") with the
// provided HTTP status code and message.
//
// ApiError does not return, but instead causes the goroutine to exit.
-func apiError(ctx context.Context, w http.ResponseWriter, code int, f string, v ...interface{}) {
+func apiError(ctx context.Context, w http.ResponseWriter, code int, f string, v ...any) {
const errheader = `Clair-Error`
+ const ctype = `application/vnd.clair.error.v1+json`
disconnect := false
select {
case <-ctx.Done():
@@ -45,37 +49,25 @@ func apiError(ctx context.Context, w http.ResponseWriter, code int, f string, v
}
h := w.Header()
- h.Del("link")
- h.Set("content-type", "application/json")
+ // Remove the links that use API relations: they should only be used on
+ // successful responses.
+ h[`Link`] = slices.DeleteFunc(h[`Link`], func(v string) bool {
+ return strings.Contains(v, `rel="https://projectquay.io/clair/v1`)
+ })
+ h.Set("content-type", ctype)
h.Set("x-content-type-options", "nosniff")
h.Set("trailer", errheader)
w.WriteHeader(code)
- var buf bytes.Buffer
- buf.WriteString(`{"code":"`)
- switch code {
- case http.StatusBadRequest:
- buf.WriteString("bad-request")
- case http.StatusMethodNotAllowed:
- buf.WriteString("method-not-allowed")
- case http.StatusNotFound:
- buf.WriteString("not-found")
- case http.StatusTooManyRequests:
- buf.WriteString("too-many-requests")
- default:
- buf.WriteString("internal-error")
+ enc := codec.GetEncoder(w, codec.SchemeV1)
+ val := types.Error{
+ Code: code,
+ Message: fmt.Sprintf(f, v...),
}
- buf.WriteByte('"')
- if f != "" {
- buf.WriteString(`,"message":`)
- b, _ := json.Marshal(fmt.Sprintf(f, v...)) // OK use of encoding/json.
- buf.Write(b)
- }
- buf.WriteByte('}')
-
- if _, err := buf.WriteTo(w); err != nil {
+ if err := enc.Encode(&val); err != nil {
h.Set(errheader, err.Error())
}
+
switch err := http.NewResponseController(w).Flush(); {
case errors.Is(err, nil):
case errors.Is(err, http.ErrNotSupported):
@@ -87,3 +79,13 @@ func apiError(ctx context.Context, w http.ResponseWriter, code int, f string, v
}
panic(http.ErrAbortHandler)
}
+
+// CheckMethod returns normally if the request method is in the "allow" slice;
+// otherwise it sets the Allow header and calls [apiError], which does not return.
+func checkMethod(ctx context.Context, w http.ResponseWriter, r *http.Request, allow ...string) {
+ if slices.Contains(allow, r.Method) {
+ return
+ }
+ w.Header().Set(`Allow`, strings.Join(allow, ", "))
+ apiError(ctx, w, http.StatusMethodNotAllowed, "method %q disallowed", r.Method)
+}
diff --git a/httptransport/indexer_v1.go b/httptransport/indexer_v1.go
index 86fa050bb3..a4ccf9cae6 100644
--- a/httptransport/indexer_v1.go
+++ b/httptransport/indexer_v1.go
@@ -53,7 +53,7 @@ type IndexerV1 struct {
var _ http.Handler = (*IndexerV1)(nil)
-// ServeHTTP implements http.Handler.
+// ServeHTTP implements [http.Handler].
func (h *IndexerV1) ServeHTTP(w http.ResponseWriter, r *http.Request) {
start := time.Now()
r = withRequestID(r)
@@ -84,15 +84,10 @@ func (h *IndexerV1) ServeHTTP(w http.ResponseWriter, r *http.Request) {
func (h *IndexerV1) indexReport(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
- switch r.Method {
- case http.MethodPost:
- case http.MethodDelete:
- default:
- apiError(ctx, w, http.StatusMethodNotAllowed, "method disallowed: %s", r.Method)
- }
+ checkMethod(ctx, w, r, http.MethodPost, http.MethodDelete)
+
defer r.Body.Close()
dec := codec.GetDecoder(r.Body)
- defer codec.PutDecoder(dec)
switch r.Method {
case http.MethodPost:
state, err := h.srv.State(ctx)
@@ -132,7 +127,6 @@ func (h *IndexerV1) indexReport(w http.ResponseWriter, r *http.Request) {
defer writerError(w, &err)()
w.WriteHeader(http.StatusCreated)
enc := codec.GetEncoder(w)
- defer codec.PutEncoder(enc)
err = enc.Encode(report)
case http.MethodDelete:
var ds []claircore.Digest
@@ -149,7 +143,6 @@ func (h *IndexerV1) indexReport(w http.ResponseWriter, r *http.Request) {
defer writerError(w, &err)()
w.WriteHeader(http.StatusOK)
enc := codec.GetEncoder(w)
- defer codec.PutEncoder(enc)
err = enc.Encode(ds)
}
}
@@ -161,19 +154,19 @@ const (
func (h *IndexerV1) indexReportOne(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
- switch r.Method {
- case http.MethodGet:
- case http.MethodDelete:
- default:
- apiError(ctx, w, http.StatusMethodNotAllowed, "method disallowed: %s", r.Method)
- }
+ checkMethod(ctx, w, r, http.MethodGet, http.MethodDelete)
+
d, err := getDigest(w, r)
if err != nil {
apiError(ctx, w, http.StatusBadRequest, "malformed path: %v", err)
}
switch r.Method {
case http.MethodGet:
- allow := []string{"application/vnd.clair.indexreport.v1+json", "application/json"}
+ allow := []string{
+ "application/vnd.clair.index_report.v1+json",
+ "application/json",
+ "application/vnd.clair.indexreport.v1+json", // Previous spelling, kept for backwards compatibility.
+ }
switch err := pickContentType(w, r, allow); {
case errors.Is(err, nil): // OK
case errors.Is(err, ErrMediaType):
@@ -203,7 +196,6 @@ func (h *IndexerV1) indexReportOne(w http.ResponseWriter, r *http.Request) {
w.Header().Add("etag", validator)
defer writerError(w, &err)()
enc := codec.GetEncoder(w)
- defer codec.PutEncoder(enc)
err = enc.Encode(report)
case http.MethodDelete:
if _, err := h.srv.DeleteManifests(ctx, d); err != nil {
@@ -215,10 +207,13 @@ func (h *IndexerV1) indexReportOne(w http.ResponseWriter, r *http.Request) {
func (h *IndexerV1) indexState(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
- if r.Method != http.MethodGet {
- apiError(ctx, w, http.StatusMethodNotAllowed, "method disallowed: %s", r.Method)
+ checkMethod(ctx, w, r, http.MethodGet)
+
+ allow := []string{
+ "application/vnd.clair.index_state.v1+json",
+ "application/json",
+ "application/vnd.clair.indexstate.v1+json", // Previous spelling, kept for backwards compatibility.
}
- allow := []string{"application/vnd.clair.indexstate.v1+json", "application/json"}
switch err := pickContentType(w, r, allow); {
case errors.Is(err, nil): // OK
case errors.Is(err, ErrMediaType):
@@ -242,7 +237,6 @@ func (h *IndexerV1) indexState(w http.ResponseWriter, r *http.Request) {
defer writerError(w, &err)()
// TODO(hank) Don't use an encoder to write out like 40 bytes of json.
enc := codec.GetEncoder(w)
- defer codec.PutEncoder(enc)
err = enc.Encode(struct {
State string `json:"state"`
}{
@@ -252,10 +246,9 @@ func (h *IndexerV1) indexState(w http.ResponseWriter, r *http.Request) {
func (h *IndexerV1) affectedManifests(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
- if r.Method != http.MethodPost {
- apiError(ctx, w, http.StatusMethodNotAllowed, "method disallowed: %s", r.Method)
- }
- allow := []string{"application/vnd.clair.affectedmanifests.v1+json", "application/json"}
+ checkMethod(ctx, w, r, http.MethodPost)
+
+ allow := []string{"application/vnd.clair.affected_manifests.v1+json", "application/json"}
switch err := pickContentType(w, r, allow); {
case errors.Is(err, nil): // OK
case errors.Is(err, ErrMediaType):
@@ -268,7 +261,6 @@ func (h *IndexerV1) affectedManifests(w http.ResponseWriter, r *http.Request) {
V []claircore.Vulnerability `json:"vulnerabilities"`
}
dec := codec.GetDecoder(r.Body)
- defer codec.PutDecoder(dec)
if err := dec.Decode(&vulnerabilities); err != nil {
apiError(ctx, w, http.StatusBadRequest, "failed to deserialize vulnerabilities: %v", err)
}
@@ -280,7 +272,6 @@ func (h *IndexerV1) affectedManifests(w http.ResponseWriter, r *http.Request) {
defer writerError(w, &err)
enc := codec.GetEncoder(w)
- defer codec.PutEncoder(enc)
err = enc.Encode(affected)
}
diff --git a/httptransport/matcher_v1.go b/httptransport/matcher_v1.go
index b93661455f..c5bc765cde 100644
--- a/httptransport/matcher_v1.go
+++ b/httptransport/matcher_v1.go
@@ -95,17 +95,27 @@ func (h *MatcherV1) ServeHTTP(w http.ResponseWriter, r *http.Request) {
func (h *MatcherV1) vulnerabilityReport(w http.ResponseWriter, r *http.Request) {
ctx := zlog.ContextWithValues(r.Context(),
"component", "httptransport/MatcherV1.vulnerabilityReport")
+ checkMethod(ctx, w, r, http.MethodGet)
- if r.Method != http.MethodGet {
- apiError(ctx, w, http.StatusMethodNotAllowed, "endpoint only allows GET")
- }
ctx, done := context.WithCancel(ctx)
defer done()
ctx = httptrace.WithClientTrace(ctx, oteltrace.NewClientTrace(ctx))
+ allow := []string{
+ "application/vnd.clair.vulnerability_report.v1+json",
+ "application/json",
+ }
+ switch err := pickContentType(w, r, allow); {
+ case errors.Is(err, nil): // OK
+ case errors.Is(err, ErrMediaType):
+ apiError(ctx, w, http.StatusUnsupportedMediaType, "unable to negotiate common media type for %v", allow)
+ default:
+ apiError(ctx, w, http.StatusBadRequest, "malformed request: %v", err)
+ }
+
manifestStr := path.Base(r.URL.Path)
if manifestStr == "" {
- apiError(ctx, w, http.StatusBadRequest, "malformed path. provide a single manifest hash")
+ apiError(ctx, w, http.StatusBadRequest, "malformed path: provide a single manifest hash")
}
manifest, err := claircore.ParseDigest(manifestStr)
if err != nil {
@@ -137,22 +147,18 @@ func (h *MatcherV1) vulnerabilityReport(w http.ResponseWriter, r *http.Request)
apiError(ctx, w, http.StatusInternalServerError, "failed to start scan: %v", err)
}
- w.Header().Set("content-type", "application/json")
setCacheControl(w, h.Cache)
defer writerError(w, &err)()
enc := codec.GetEncoder(w)
- defer codec.PutEncoder(enc)
err = enc.Encode(vulnReport)
}
func (h *MatcherV1) updateDiffHandler(w http.ResponseWriter, r *http.Request) {
ctx := zlog.ContextWithValues(r.Context(),
"component", "httptransport/MatcherV1.updateDiffHandler")
+ checkMethod(ctx, w, r, http.MethodGet)
- if r.Method != http.MethodGet {
- apiError(ctx, w, http.StatusMethodNotAllowed, "endpoint only allows GET")
- }
// prev param is optional.
var prev uuid.UUID
var err error
@@ -180,19 +186,13 @@ func (h *MatcherV1) updateDiffHandler(w http.ResponseWriter, r *http.Request) {
defer writerError(w, &err)()
enc := codec.GetEncoder(w)
- defer codec.PutEncoder(enc)
err = enc.Encode(&diff)
}
func (h *MatcherV1) updateOperationHandlerGet(w http.ResponseWriter, r *http.Request) {
ctx := zlog.ContextWithValues(r.Context(),
"component", "httptransport/MatcherV1.updateOperationHandlerGet")
-
- switch r.Method {
- case http.MethodGet:
- default:
- apiError(ctx, w, http.StatusMethodNotAllowed, "method disallowed: %s", r.Method)
- }
+ checkMethod(ctx, w, r, http.MethodGet)
kind := driver.VulnerabilityKind
switch k := r.URL.Query().Get("kind"); k {
@@ -229,18 +229,13 @@ func (h *MatcherV1) updateOperationHandlerGet(w http.ResponseWriter, r *http.Req
defer writerError(w, &err)()
enc := codec.GetEncoder(w)
- defer codec.PutEncoder(enc)
err = enc.Encode(&uos)
}
func (h *MatcherV1) updateOperationHandlerDelete(w http.ResponseWriter, r *http.Request) {
ctx := zlog.ContextWithValues(r.Context(),
"component", "httptransport/MatcherV1.updateOperationHandlerDelete")
- switch r.Method {
- case http.MethodDelete:
- default:
- apiError(ctx, w, http.StatusMethodNotAllowed, "method disallowed: %s", r.Method)
- }
+ checkMethod(ctx, w, r, http.MethodDelete)
path := r.URL.Path
id := filepath.Base(path)
diff --git a/httptransport/notification_v1.go b/httptransport/notification_v1.go
index e27f1bc579..7568d00c3a 100644
--- a/httptransport/notification_v1.go
+++ b/httptransport/notification_v1.go
@@ -83,18 +83,20 @@ func (h *NotificationV1) ServeHTTP(w http.ResponseWriter, r *http.Request) {
}
func (h *NotificationV1) serveHTTP(w http.ResponseWriter, r *http.Request) {
+ ctx := r.Context()
+ checkMethod(ctx, w, r, http.MethodGet, http.MethodDelete)
switch r.Method {
case http.MethodGet:
- h.get(w, r)
+ h.get(ctx, w, r)
case http.MethodDelete:
- h.delete(w, r)
+ h.delete(ctx, w, r)
default:
- apiError(r.Context(), w, http.StatusMethodNotAllowed, "endpoint only allows GET or DELETE")
+ panic("unreachable")
}
}
-func (h *NotificationV1) delete(w http.ResponseWriter, r *http.Request) {
- ctx := zlog.ContextWithValues(r.Context(), "component", "httptransport/NotificationV1.delete")
+func (h *NotificationV1) delete(ctx context.Context, w http.ResponseWriter, r *http.Request) {
+ ctx = zlog.ContextWithValues(ctx, "component", "httptransport/NotificationV1.delete")
path := r.URL.Path
id := filepath.Base(path)
notificationID, err := uuid.Parse(id)
@@ -111,8 +113,8 @@ func (h *NotificationV1) delete(w http.ResponseWriter, r *http.Request) {
}
// Get will return paginated notifications to the caller.
-func (h *NotificationV1) get(w http.ResponseWriter, r *http.Request) {
- ctx := zlog.ContextWithValues(r.Context(), "component", "httptransport/NotificationV1.get")
+func (h *NotificationV1) get(ctx context.Context, w http.ResponseWriter, r *http.Request) {
+ ctx = zlog.ContextWithValues(ctx, "component", "httptransport/NotificationV1.get")
path := r.URL.Path
id := filepath.Base(path)
notificationID, err := uuid.Parse(id)
@@ -171,7 +173,6 @@ func (h *NotificationV1) get(w http.ResponseWriter, r *http.Request) {
defer writerError(w, &err)()
enc := codec.GetEncoder(w)
- defer codec.PutEncoder(enc)
err = enc.Encode(&response)
}
diff --git a/httptransport/notification_v1_test.go b/httptransport/notification_v1_test.go
index 0aa8a560a6..73bcdec8f9 100644
--- a/httptransport/notification_v1_test.go
+++ b/httptransport/notification_v1_test.go
@@ -59,7 +59,7 @@ func testNotificationHandlerDelete(ctx context.Context) func(*testing.T) {
t.Error(err)
}
- h.delete(rr, req)
+ h.delete(ctx, rr, req)
res := rr.Result()
if res.StatusCode != http.StatusOK {
t.Fatalf("got: %v, wanted: %v", res.StatusCode, http.StatusOK)
@@ -110,7 +110,7 @@ func testNotificationHandlerGet(ctx context.Context) func(*testing.T) {
t.Error(err)
}
- h.get(rr, req)
+ h.get(ctx, rr, req)
res := rr.Result()
if res.StatusCode != http.StatusOK {
t.Errorf("got: %v, wanted: %v", res.StatusCode, http.StatusOK)
@@ -177,7 +177,7 @@ func testNotificationHandlerGetParams(ctx context.Context) func(*testing.T) {
t.Error(err)
}
- h.get(rr, req)
+ h.get(ctx, rr, req)
res := rr.Result()
if res.StatusCode != http.StatusOK {
t.Errorf("got: %v, wanted: %v", res.StatusCode, http.StatusOK)
diff --git a/httptransport/openapi.etag b/httptransport/openapi.etag
deleted file mode 100644
index 1da246c82e..0000000000
--- a/httptransport/openapi.etag
+++ /dev/null
@@ -1 +0,0 @@
-"a16d4a25e54a4cfe7fbf4e234af1c7585e840fef19c4f84aba1e814233c3b281"
\ No newline at end of file
diff --git a/httptransport/openapi.json b/httptransport/openapi.json
deleted file mode 100644
index 69a4c1a385..0000000000
--- a/httptransport/openapi.json
+++ /dev/null
@@ -1 +0,0 @@
-{"components":{"examples":{"Distribution":{"value":{"arch":"","cpe":"","did":"ubuntu","id":"1","name":"Ubuntu","pretty_name":"Ubuntu 18.04.3 LTS","version":"18.04.3 LTS (Bionic Beaver)","version_code_name":"bionic","version_id":"18.04"}},"Environment":{"value":{"distribution_id":"1","introduced_in":"sha256:35c102085707f703de2d9eaad8752d6fe1b8f02b5d2149f1d8357c9cc7fb7d0a","package_db":"var/lib/dpkg/status"}},"Package":{"value":{"arch":"x86","cpe":"","id":"10","kind":"binary","module":"","name":"libapt-pkg5.0","normalized_version":"","source":{"id":"9","kind":"source","name":"apt","source":null,"version":"1.6.11"},"version":"1.6.11"}},"VulnSummary":{"value":{"description":"In the GNU C Library (aka glibc or libc6) before 2.28, parse_reg_exp in posix/regcomp.c misparses alternatives, which allows attackers to cause a denial of service (assertion failure and application exit) or trigger an incorrect result by attempting a regular-expression match.\"","dist":{"arch":"","cpe":"","did":"ubuntu","id":"0","name":"Ubuntu","pretty_name":"","version":"18.04.3 LTS (Bionic Beaver)","version_code_name":"bionic","version_id":"18.04"},"fixed_in_version":"v0.0.1","links":"http://link-to-advisory","name":"CVE-2009-5155","normalized_severity":"Low","package":{"id":"0","kind":"","name":"glibc","package_db":"","repository_hint":"","source":null,"version":""},"repo":{"id":"0","key":"","name":"Ubuntu 18.04.3 LTS","uri":""}}},"Vulnerability":{"value":{"description":"In the GNU C Library (aka glibc or libc6) before 2.28, parse_reg_exp in posix/regcomp.c misparses alternatives, which allows attackers to cause a denial of service (assertion failure and application exit) or trigger an incorrect result by attempting a regular-expression match.\"","dist":{"arch":"","cpe":"","did":"ubuntu","id":"0","name":"Ubuntu","pretty_name":"","version":"18.04.3 LTS (Bionic 
Beaver)","version_code_name":"bionic","version_id":"18.04"},"fixed_in_version":"2.28-0ubuntu1","id":"356835","issued":"2019-10-12T07:20:50.52Z","links":"https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2009-5155 http://people.canonical.com/~ubuntu-security/cve/2009/CVE-2009-5155.html https://sourceware.org/bugzilla/show_bug.cgi?id=11053 https://debbugs.gnu.org/cgi/bugreport.cgi?bug=22793 https://debbugs.gnu.org/cgi/bugreport.cgi?bug=32806 https://debbugs.gnu.org/cgi/bugreport.cgi?bug=34238 https://sourceware.org/bugzilla/show_bug.cgi?id=18986\"","name":"CVE-2009-5155","normalized_severity":"Low","package":{"id":"0","kind":"","name":"glibc","package_db":"","repository_hint":"","source":null,"version":""},"repo":{"id":"0","key":"","name":"Ubuntu 18.04.3 LTS","uri":""},"severity":"Low","updater":""}}},"responses":{"BadRequest":{"content":{"application/json":{"schema":{"$ref":"#/components/schemas/Error"}}},"description":"Bad Request"},"InternalServerError":{"content":{"application/json":{"schema":{"$ref":"#/components/schemas/Error"}}},"description":"Internal Server Error"},"MethodNotAllowed":{"content":{"application/json":{"schema":{"$ref":"#/components/schemas/Error"}}},"description":"Method Not Allowed"},"NotFound":{"content":{"application/json":{"schema":{"$ref":"#/components/schemas/Error"}}},"description":"Not Found"}},"schemas":{"BulkDelete":{"description":"An array of Digests to be deleted.","items":{"$ref":"#/components/schemas/Digest"},"title":"BulkDelete","type":"array"},"Callback":{"description":"A callback for clients to retrieve notifications","properties":{"callback":{"description":"the url where notifications can be retrieved","example":"http://clair-notifier/notifier/api/v1/notification/269886f3-0146-4f08-9bf7-cb1138d48643","type":"string"},"notification_id":{"description":"the unique identifier for this set of 
notifications","example":"269886f3-0146-4f08-9bf7-cb1138d48643","type":"string"}},"title":"Callback","type":"object"},"Digest":{"description":"A digest string with prefixed algorithm. The format is described here: https://github.com/opencontainers/image-spec/blob/master/descriptor.md#digests\nDigests are used throughout the API to identify Layers and Manifests.","example":"sha256:fc84b5febd328eccaa913807716887b3eb5ed08bc22cc6933a9ebf82766725e3","title":"Digest","type":"string"},"Distribution":{"description":"An indexed distribution discovered in a layer. See https://www.freedesktop.org/software/systemd/man/os-release.html for explanations and example of fields.","example":{"$ref":"#/components/examples/Distribution/value"},"properties":{"arch":{"type":"string"},"cpe":{"type":"string"},"did":{"type":"string"},"id":{"description":"A unique ID representing this distribution","type":"string"},"name":{"type":"string"},"pretty_name":{"type":"string"},"version":{"type":"string"},"version_code_name":{"type":"string"},"version_id":{"type":"string"}},"required":["id","did","name","version","version_code_name","version_id","arch","cpe","pretty_name"],"title":"Distribution","type":"object"},"Environment":{"description":"The environment a particular package was discovered in.","properties":{"distribution_id":{"description":"The distribution ID found in an associated IndexReport or VulnerabilityReport.","example":"1","type":"string"},"introduced_in":{"$ref":"#/components/schemas/Digest"},"package_db":{"description":"The filesystem path or unique identifier of a package database.","example":"var/lib/dpkg/status","type":"string"}},"required":["package_db","introduced_in","distribution_id"],"title":"Environment","type":"object"},"Error":{"description":"A general error schema returned when status is not 200 OK","properties":{"code":{"description":"a code for this particular error","type":"string"},"message":{"description":"a message with further 
detail","type":"string"}},"title":"Error","type":"object"},"IndexReport":{"description":"A report of the Index process for a particular manifest. A client's usage of this is largely information. Clair uses this report for matching Vulnerabilities.","properties":{"distributions":{"additionalProperties":{"$ref":"#/components/schemas/Distribution"},"description":"A map of Distribution objects keyed by their Distribution.id discovered in the manifest.","example":{"1":{"$ref":"#/components/examples/Distribution/value"}},"type":"object"},"environments":{"additionalProperties":{"items":{"$ref":"#/components/schemas/Environment"},"type":"array"},"description":"A map of lists containing Environment objects keyed by the associated Package.id.","example":{"10":[{"distribution_id":"1","introduced_in":"sha256:35c102085707f703de2d9eaad8752d6fe1b8f02b5d2149f1d8357c9cc7fb7d0a","package_db":"var/lib/dpkg/status"}]},"type":"object"},"err":{"description":"An error message on event of unsuccessful index","example":"","type":"string"},"manifest_hash":{"$ref":"#/components/schemas/Digest"},"packages":{"additionalProperties":{"$ref":"#/components/schemas/Package"},"description":"A map of Package objects indexed by Package.id","example":{"10":{"$ref":"#/components/examples/Package/value"}},"type":"object"},"state":{"description":"The current state of the index operation","example":"IndexFinished","type":"string"},"success":{"description":"A bool indicating succcessful index","example":true,"type":"boolean"}},"required":["manifest_hash","state","packages","distributions","environments","success","err"],"title":"IndexReport","type":"object"},"Layer":{"description":"A Layer within a Manifest and where Clair may retrieve it.","properties":{"hash":{"$ref":"#/components/schemas/Digest"},"headers":{"additionalProperties":{"items":{"type":"string"},"type":"array"},"description":"map of arrays of header values keyed by header value. e.g. 
map[string][]string","type":"object"},"uri":{"description":"A URI describing where the layer may be found. Implementations MUST support http(s) schemes and MAY support additional schemes.","example":"https://storage.example.com/blob/2f077db56abccc19f16f140f629ae98e904b4b7d563957a7fc319bd11b82ba36","type":"string"}},"required":["hash","uri","headers"],"title":"Layer","type":"object"},"Manifest":{"description":"A Manifest representing a container. The 'layers' array must preserve the original container's layer order for accurate usage.","properties":{"hash":{"$ref":"#/components/schemas/Digest"},"layers":{"items":{"$ref":"#/components/schemas/Layer"},"type":"array"}},"required":["hash","layers"],"title":"Manifest","type":"object"},"Notification":{"description":"A notification expressing a change in a manifest affected by a vulnerability.","properties":{"id":{"description":"a unique identifier for this notification","example":"5e4b387e-88d3-4364-86fd-063447a6fad2","type":"string"},"manifest":{"description":"The hash of the manifest affected by the provided vulnerability.","example":"sha256:35c102085707f703de2d9eaad8752d6fe1b8f02b5d2149f1d8357c9cc7fb7d0a","type":"string"},"reason":{"description":"the reason for the notifcation, [added | removed]","example":"added","type":"string"},"vulnerability":{"$ref":"#/components/schemas/VulnSummary"}},"title":"Notification","type":"object"},"Package":{"description":"A package discovered by indexing a Manifest","example":{"$ref":"#/components/examples/Package/value"},"properties":{"arch":{"description":"The package's target system architecture","type":"string"},"cpe":{"description":"A CPE identifying the package","type":"string"},"id":{"description":"A unique ID representing this package","type":"string"},"kind":{"description":"Kind of package. 
Source | Binary","type":"string"},"module":{"description":"A module further defining a namespace for a package","type":"string"},"name":{"description":"Name of the Package","type":"string"},"normalized_version":{"$ref":"#/components/schemas/Version"},"source":{"$ref":"#/components/schemas/Package"},"version":{"description":"Version of the Package","type":"string"}},"required":["id","name","version"],"title":"Package","type":"object"},"Page":{"description":"A page object indicating to the client how to retrieve multiple pages of a particular entity.","properties":{"next":{"description":"The next id to submit to the api to continue paging","example":"1b4d0db2-e757-4150-bbbb-543658144205","type":"string"},"size":{"description":"The maximum number of elements in a page","example":1,"type":"int"}},"title":"Page"},"PagedNotifications":{"description":"A page object followed by a list of notifications","properties":{"notifications":{"description":"A list of notifications within this page","items":{"$ref":"#/components/schemas/Notification"},"type":"array"},"page":{"description":"A page object informing the client the next page to retrieve. 
If page.next becomes \"-1\" the client should stop paging.","example":{"next":"1b4d0db2-e757-4150-bbbb-543658144205","size":100},"type":"object"}},"title":"PagedNotifications","type":"object"},"Repository":{"description":"A package repository","properties":{"cpe":{"type":"string"},"id":{"type":"string"},"key":{"type":"string"},"name":{"type":"string"},"uri":{"type":"string"}},"title":"Repository","type":"object"},"State":{"description":"an opaque identifier","example":{"state":"aae368a064d7c5a433d0bf2c4f5554cc"},"properties":{"state":{"description":"an opaque identifier","type":"string"}},"required":["state"],"title":"State","type":"object"},"Version":{"description":"Version is a normalized claircore version, composed of a \"kind\" and an array of integers such that two versions of the same kind have the correct ordering when the integers are compared pair-wise.","example":"pep440:0.0.0.0.0.0.0.0.0","title":"Version","type":"string"},"VulnSummary":{"description":"A summary of a vulnerability","properties":{"description":{"description":"the vulnerability name","example":"In the GNU C Library (aka glibc or libc6) before 2.28, parse_reg_exp in posix/regcomp.c misparses alternatives, which allows attackers to cause a denial of service (assertion failure and application exit) or trigger an incorrect result by attempting a regular-expression match.\"","type":"string"},"distribution":{"$ref":"#/components/schemas/Distribution"},"fixed_in_version":{"description":"The version which the vulnerability is fixed in. 
Empty if not fixed.","example":"v0.0.1","type":"string"},"links":{"description":"links to external information about vulnerability","example":"http://link-to-advisory","type":"string"},"name":{"description":"the vulnerability name","example":"CVE-2009-5155","type":"string"},"normalized_severity":{"description":"A well defined set of severity strings guaranteed to be present.","enum":["Unknown","Negligible","Low","Medium","High","Critical"],"type":"string"},"package":{"$ref":"#/components/schemas/Package"},"repository":{"$ref":"#/components/schemas/Repository"}},"title":"VulnSummary","type":"object"},"Vulnerability":{"description":"A unique vulnerability indexed by Clair","example":{"$ref":"#/components/examples/Vulnerability/value"},"properties":{"description":{"description":"A description of this specific vulnerability.","type":"string"},"distribution":{"$ref":"#/components/schemas/Distribution"},"fixed_in_version":{"description":"A unique ID representing this vulnerability.","type":"string"},"id":{"description":"A unique ID representing this vulnerability.","type":"string"},"issued":{"description":"The timestamp in which the vulnerability was issued","type":"string"},"links":{"description":"A space separate list of links to any external information.","type":"string"},"name":{"description":"Name of this specific vulnerability.","type":"string"},"normalized_severity":{"description":"A well defined set of severity strings guaranteed to be present.","enum":["Unknown","Negligible","Low","Medium","High","Critical"],"type":"string"},"package":{"$ref":"#/components/schemas/Package"},"range":{"description":"The range of package versions affected by this vulnerability.","type":"string"},"repository":{"$ref":"#/components/schemas/Repository"},"severity":{"description":"A severity keyword taken verbatim from the vulnerability source.","type":"string"},"updater":{"description":"A unique ID representing this 
vulnerability.","type":"string"}},"required":["id","updater","name","description","links","severity","normalized_severity","fixed_in_version"],"title":"Vulnerability","type":"object"},"VulnerabilityReport":{"description":"A report expressing discovered packages, package environments, and package vulnerabilities within a Manifest.","properties":{"distributions":{"additionalProperties":{"$ref":"#/components/schemas/Distribution"},"description":"A map of Distribution objects indexed by Distribution.id.","example":{"1":{"$ref":"#/components/examples/Distribution/value"}},"type":"object"},"environments":{"additionalProperties":{"items":{"$ref":"#/components/schemas/Environment"},"type":"array"},"description":"A mapping of Environment lists indexed by Package.id","example":{"10":[{"distribution_id":"1","introduced_in":"sha256:35c102085707f703de2d9eaad8752d6fe1b8f02b5d2149f1d8357c9cc7fb7d0a","package_db":"var/lib/dpkg/status"}]},"type":"object"},"manifest_hash":{"$ref":"#/components/schemas/Digest"},"package_vulnerabilities":{"additionalProperties":{"items":{"type":"string"},"type":"array"},"description":"A mapping of Vulnerability.id lists indexed by Package.id.","example":{"10":["356835"]}},"packages":{"additionalProperties":{"$ref":"#/components/schemas/Package"},"description":"A map of Package objects indexed by Package.id","example":{"10":{"$ref":"#/components/examples/Package/value"}},"type":"object"},"vulnerabilities":{"additionalProperties":{"$ref":"#/components/schemas/Vulnerability"},"description":"A map of Vulnerabilities indexed by Vulnerability.id","example":{"356835":{"$ref":"#/components/examples/Vulnerability/value"}},"type":"object"}},"required":["manifest_hash","packages","distributions","environments","vulnerabilities","package_vulnerabilities"],"title":"VulnerabilityReport","type":"object"}}},"info":{"contact":{"email":"quay-devel@redhat.com","name":"Clair Team","url":"http://github.com/quay/clair"},"description":"ClairV4 is a set of cooperating 
microservices which scan, index, and match your container's content with known vulnerabilities.","license":{"name":"Apache License 2.0","url":"http://www.apache.org/licenses/"},"termsOfService":"","title":"ClairV4","version":"1.1"},"openapi":"3.0.2","paths":{"/indexer/api/v1/index_report":{"delete":{"description":"Given a Manifest's content addressable hash, any data related to it will be removed if it exists.","operationId":"DeleteManifests","requestBody":{"content":{"application/json":{"schema":{"$ref":"#/components/schemas/BulkDelete"}}},"required":true},"responses":{"200":{"content":{"application/json":{"schema":{"$ref":"#/components/schemas/BulkDelete"}}},"description":"OK"},"400":{"$ref":"#/components/responses/BadRequest"},"500":{"$ref":"#/components/responses/InternalServerError"}},"summary":"Delete the IndexReport and associated information for the given Manifest hashes, if they exist.","tags":["Indexer"]},"post":{"description":"By submitting a Manifest object to this endpoint Clair will fetch the layers, scan each layer's contents, and provide an index of discovered packages, repository and distribution information.","operationId":"Index","requestBody":{"content":{"application/json":{"schema":{"$ref":"#/components/schemas/Manifest"}}},"required":true},"responses":{"201":{"content":{"application/json":{"schema":{"$ref":"#/components/schemas/IndexReport"}}},"description":"IndexReport Created"},"400":{"$ref":"#/components/responses/BadRequest"},"405":{"$ref":"#/components/responses/MethodNotAllowed"},"500":{"$ref":"#/components/responses/InternalServerError"}},"summary":"Index the contents of a Manifest","tags":["Indexer"]}},"/indexer/api/v1/index_report/{manifest_hash}":{"delete":{"description":"Given a Manifest's content addressable hash, any data related to it will be removed it it exists.","operationId":"DeleteManifest","parameters":[{"description":"A digest of a manifest that has been indexed previous to this 
request.","in":"path","name":"manifest_hash","required":true,"schema":{"$ref":"#/components/schemas/Digest"}}],"responses":{"204":{"description":"OK"},"400":{"$ref":"#/components/responses/BadRequest"},"500":{"$ref":"#/components/responses/InternalServerError"}},"summary":"Delete the IndexReport and associated information for the given Manifest hash, if exists.","tags":["Indexer"]},"get":{"description":"Given a Manifest's content addressable hash an IndexReport will be retrieved if exists.","operationId":"GetIndexReport","parameters":[{"description":"A digest of a manifest that has been indexed previous to this request.","in":"path","name":"manifest_hash","required":true,"schema":{"$ref":"#/components/schemas/Digest"}}],"responses":{"200":{"content":{"application/json":{"schema":{"$ref":"#/components/schemas/IndexReport"}}},"description":"IndexReport retrieved"},"400":{"$ref":"#/components/responses/BadRequest"},"404":{"$ref":"#/components/responses/NotFound"},"405":{"$ref":"#/components/responses/MethodNotAllowed"},"500":{"$ref":"#/components/responses/InternalServerError"}},"summary":"Retrieve an IndexReport for the given Manifest hash if exists.","tags":["Indexer"]}},"/indexer/api/v1/index_state":{"get":{"description":"The index state endpoint returns a json structure indicating the indexer's internal configuration state.\nA client may be interested in this as a signal that manifests may need to be re-indexed.","operationId":"IndexState","responses":{"200":{"content":{"application/json":{"schema":{"$ref":"#/components/schemas/State"}}},"description":"Indexer State","headers":{"Etag":{"description":"Entity Tag","schema":{"type":"string"}}}},"304":{"description":"Indexer State Unchanged"}},"summary":"Report the indexer's internal configuration and state.","tags":["Indexer"]}},"/matcher/api/v1/vulnerability_report/{manifest_hash}":{"get":{"description":"Given a Manifest's content addressable hash a VulnerabilityReport will be created. 
The Manifest **must** have been Indexed first via the Index endpoint.","operationId":"GetVulnerabilityReport","parameters":[{"description":"A digest of a manifest that has been indexed previous to this request.","in":"path","name":"manifest_hash","required":true,"schema":{"$ref":"#/components/schemas/Digest"}}],"responses":{"201":{"content":{"application/json":{"schema":{"$ref":"#/components/schemas/VulnerabilityReport"}}},"description":"VulnerabilityReport Created"},"400":{"$ref":"#/components/responses/BadRequest"},"404":{"$ref":"#/components/responses/NotFound"},"405":{"$ref":"#/components/responses/MethodNotAllowed"},"500":{"$ref":"#/components/responses/InternalServerError"}},"summary":"Retrieve a VulnerabilityReport for a given manifest's content addressable hash.","tags":["Matcher"]}},"/notifier/api/v1/notification/{notification_id}":{"delete":{"description":"Issues a delete of the provided notification id and all associated notifications. After this delete clients will no longer be able to retrieve notifications.","operationId":"DeleteNotification","parameters":[{"description":"A notification ID returned by a callback","in":"path","name":"notification_id","schema":{"type":"string"}}],"responses":{"200":{"description":"OK"},"400":{"$ref":"#/components/responses/BadRequest"},"405":{"$ref":"#/components/responses/MethodNotAllowed"},"500":{"$ref":"#/components/responses/InternalServerError"}},"tags":["Notifier"]},"get":{"description":"By performing a GET with a notification_id as a path parameter, the client will retrieve a paginated response of notification objects.","operationId":"GetNotification","parameters":[{"description":"A notification ID returned by a callback","in":"path","name":"notification_id","schema":{"type":"string"}},{"description":"The maximum number of notifications to deliver in a single page.","in":"query","name":"page_size","schema":{"type":"int"}},{"description":"The next page to fetch via id. 
Typically this number is provided on initial response in the page.next field. The first GET request may omit this field.","in":"query","name":"next","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"$ref":"#/components/schemas/PagedNotifications"}}},"description":"A paginated list of notifications"},"400":{"$ref":"#/components/responses/BadRequest"},"405":{"$ref":"#/components/responses/MethodNotAllowed"},"500":{"$ref":"#/components/responses/InternalServerError"}},"summary":"Retrieve a paginated result of notifications for the provided id.","tags":["Notifier"]}}}}
\ No newline at end of file
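The notifier endpoint above pages results via `page.next` (clients stop when it becomes `"-1"`) with an optional `page_size` query parameter. A minimal client-side loop, sketched in Python with a stand-in `get_page` callable instead of a real HTTP request; the function names here are illustrative, not part of Clair:

```python
# Client-side pagination loop for
# GET /notifier/api/v1/notification/{notification_id}.
# `get_page(notification_id, page_size, next_id)` stands in for the
# actual HTTP call and returns one decoded PagedNotifications object.

def fetch_all_notifications(get_page, notification_id, page_size=100):
    """Collect every notification, following page.next until it is "-1"."""
    notifications = []
    next_id = None  # The first request may omit the `next` parameter.
    while True:
        page = get_page(notification_id, page_size, next_id)
        notifications.extend(page.get("notifications", []))
        next_id = page["page"].get("next")
        # Per the spec, a page.next of "-1" means the client should stop.
        if next_id is None or next_id == "-1":
            return notifications
```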
diff --git a/httptransport/openapigen.go b/httptransport/openapigen.go
deleted file mode 100644
index 84146868d7..0000000000
--- a/httptransport/openapigen.go
+++ /dev/null
@@ -1,83 +0,0 @@
-//go:build tools
-// +build tools
-
-// Openapigen is a script to take the OpenAPI YAML file, turn it into a JSON
-// document, and write out files for use with the "embed" package.
-package main
-
-import (
- "bytes"
- "crypto/sha256"
- "encoding/json"
- "flag"
- "fmt"
- "io"
- "log"
- "os"
- "path/filepath"
-
- "gopkg.in/yaml.v3"
-)
-
-func main() {
- inFile := flag.String("in", "../openapi.yaml", "input YAML file")
- outDir := flag.String("out", ".", "output directory")
- flag.Parse()
-
- inF, err := os.Open(*inFile)
- if inF != nil {
- defer inF.Close()
- }
- if err != nil {
- log.Fatal(err)
- }
-
- tmp := map[interface{}]interface{}{}
- if err := yaml.NewDecoder(inF).Decode(&tmp); err != nil {
- log.Fatal(err)
- }
- embed, err := json.Marshal(convert(tmp))
- if err != nil {
- log.Fatal(err)
- }
- ck := sha256.Sum256(embed)
-
- outF, err := os.OpenFile(filepath.Join(*outDir, `openapi.json`), os.O_WRONLY|os.O_TRUNC|os.O_CREATE, 0644)
- if err != nil {
- log.Fatal(err)
- }
- defer outF.Close()
- if _, err := io.Copy(outF, bytes.NewReader(embed)); err != nil {
- log.Fatal(err)
- }
- outF, err = os.OpenFile(filepath.Join(*outDir, `openapi.etag`), os.O_WRONLY|os.O_TRUNC|os.O_CREATE, 0644)
- if err != nil {
- log.Fatal(err)
- }
- defer outF.Close()
- if _, err := fmt.Fprintf(outF, `"%x"`, ck); err != nil {
- log.Fatal(err)
- }
-}
-
-// Convert yoinked from:
-// https://stackoverflow.com/questions/40737122/convert-yaml-to-json-without-struct/40737676#40737676
-func convert(i interface{}) interface{} {
- switch x := i.(type) {
- case map[interface{}]interface{}:
- m2 := map[string]interface{}{}
- for k, v := range x {
- m2[fmt.Sprint(k)] = convert(v)
- }
- return m2
- case []interface{}:
- for i, v := range x {
- x[i] = convert(v)
- }
- case map[string]interface{}:
- for k, v := range x {
- x[k] = convert(v)
- }
- }
- return i
-}
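The deleted generator above converted `openapi.yaml` to JSON and wrote a double-quoted, hex-encoded SHA-256 of the JSON bytes for use as an ETag. A rough Python equivalent of the serialize-and-checksum step (YAML parsing is omitted, since it needs a third-party library):

```python
# Serialize a document to compact JSON and derive the strong ETag the
# server can hand out alongside it, mirroring the deleted Go tool's
# sha256.Sum256 + fmt.Fprintf(`"%x"`) steps.
import hashlib
import json

def embed_and_etag(doc):
    data = json.dumps(doc, separators=(",", ":")).encode("utf-8")
    etag = '"%s"' % hashlib.sha256(data).hexdigest()
    return data, etag
```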
diff --git a/httptransport/server.go b/httptransport/server.go
index 30bd95cffa..2d674cbb7f 100644
--- a/httptransport/server.go
+++ b/httptransport/server.go
@@ -6,6 +6,8 @@ import (
"context"
"fmt"
"net/http"
+ "slices"
+ "strings"
"time"
"github.com/quay/clair/config"
@@ -117,7 +119,27 @@ func New(ctx context.Context, conf *config.Config, indexer indexer.Service, matc
return final, nil
}
mux.Handle("/robots.txt", robotsHandler)
- return mux, nil
+ return responseHeaders(mux), nil
+}
+
+func responseHeaders(next http.Handler) http.Handler {
+ descs := []string{
+ `<` + OpenAPIV1Path + `>; rel="service-desc"; title="V1 API"; type="application/openapi+json"`,
+ }
+ return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+ switch p := r.URL.EscapedPath(); {
+ case p == "/robots.txt": // Do nothing.
+ case strings.HasPrefix(p, apiRoot):
+ // See https://datatracker.ietf.org/doc/html/rfc8631 and https://datatracker.ietf.org/doc/html/rfc5988
+ w.Header().Add("Link", descs[0])
+ default: // Some unknown path, insert relevant links.
+ w.Header().Add("Link", `; rel="service-doc"; title="Documentation"`)
+ for _, l := range descs {
+ w.Header().Add("Link", l)
+ }
+ }
+ next.ServeHTTP(w, r)
+ })
}
// IntraserviceIssuer is the issuer that will be used if Clair is configured to
@@ -126,14 +148,8 @@ const IntraserviceIssuer = `clair-intraservice`
// Unmodified determines whether to return a conditional response.
func unmodified(r *http.Request, v string) bool {
- if vs, ok := r.Header["If-None-Match"]; ok {
- for _, rv := range vs {
- if rv == v {
- return true
- }
- }
- }
- return false
+ vs, ok := r.Header["If-None-Match"]
+ return ok && slices.Contains(vs, v)
}
// WriterError is a helper that closes over an error that may be returned after
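The simplified `unmodified` helper reduces to membership of the current entity tag in the request's `If-None-Match` values, using exact (strong) comparison. The same check, sketched in Python with illustrative names:

```python
# Conditional-request check equivalent to `unmodified` above: the
# resource is unmodified when any If-None-Match value exactly equals
# its current entity tag (no weak-comparison handling).

def unmodified(headers, etag):
    # `headers` maps header names to lists of values, mirroring Go's
    # http.Header; a missing header simply yields no match.
    return etag in headers.get("If-None-Match", [])
```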
diff --git a/httptransport/types/v1/affected_manifests.schema.json b/httptransport/types/v1/affected_manifests.schema.json
new file mode 100644
index 0000000000..1f12fee324
--- /dev/null
+++ b/httptransport/types/v1/affected_manifests.schema.json
@@ -0,0 +1,30 @@
+{
+ "$id": "https://clairproject.org/api/http/v1/affected_manifests.schema.json",
+ "$schema": "https://json-schema.org/draft/2020-12/schema",
+ "title": "Affected Manifests",
+ "type": "object",
+ "description": "**This is an internal type, documented for completeness.**\n\nManifests affected by the specified vulnerability objects.",
+ "properties": {
+ "vulnerabilities": {
+ "type": "object",
+ "description": "Vulnerability objects.",
+ "additionalProperties": {
+ "$ref": "vulnerability.schema.json"
+ }
+ },
+ "vulnerable_manifests": {
+ "type": "object",
+ "description": "Mapping of manifest digests to vulnerability identifiers.",
+ "additionalProperties": {
+ "type": "array",
+ "items": {
+ "type": "string",
+ "description": "An identifier to be used in the \"#/vulnerabilities\" object."
+ }
+ }
+ }
+ },
+ "required": [
+ "vulnerable_manifests"
+ ]
+}
diff --git a/httptransport/types/v1/bulk_delete.schema.json b/httptransport/types/v1/bulk_delete.schema.json
new file mode 100644
index 0000000000..9306e2c893
--- /dev/null
+++ b/httptransport/types/v1/bulk_delete.schema.json
@@ -0,0 +1,11 @@
+{
+ "$id": "https://clairproject.org/api/http/v1/bulk_delete.schema.json",
+ "$schema": "https://json-schema.org/draft/2020-12/schema",
+ "title": "Bulk Delete",
+ "type": "array",
+ "description": "Array of manifest digests to delete from the system.",
+ "items": {
+ "$ref": "digest.schema.json",
+ "description": "Manifest digest to delete from the system."
+ }
+}
diff --git a/httptransport/types/v1/cpe.schema.json b/httptransport/types/v1/cpe.schema.json
new file mode 100644
index 0000000000..9b5a9aed81
--- /dev/null
+++ b/httptransport/types/v1/cpe.schema.json
@@ -0,0 +1,19 @@
+{
+ "$id": "https://clairproject.org/api/http/v1/cpe.schema.json",
+ "$schema": "https://json-schema.org/draft/2020-12/schema",
+ "title": "Common Platform Enumeration Name",
+ "description": "This is a CPE Name in either v2.2 \"URI\" form or v2.3 \"Formatted String\" form.",
+ "$comment": "Clair only produces v2.3 CPE Names. Any v2.2 Names will be normalized into v2.3 form.",
+ "oneOf": [
+ {
+ "description": "This is the CPE 2.2 regexp: https://cpe.mitre.org/specification/2.2/cpe-language_2.2.xsd",
+ "type": "string",
+ "pattern": "^[c][pP][eE]:/[AHOaho]?(:[A-Za-z0-9\\._\\-~%]*){0,6}$"
+ },
+ {
+ "description": "This is the CPE 2.3 regexp: https://csrc.nist.gov/schema/cpe/2.3/cpe-naming_2.3.xsd",
+ "type": "string",
+ "pattern": "^cpe:2\\.3:[aho\\*\\-](:(((\\?*|\\*?)([a-zA-Z0-9\\-\\._]|(\\\\[\\\\\\*\\?!\"#$$%&'\\(\\)\\+,/:;<=>@\\[\\]\\^`\\{\\|}~]))+(\\?*|\\*?))|[\\*\\-])){5}(:(([a-zA-Z]{2,3}(-([a-zA-Z]{2}|[0-9]{3}))?)|[\\*\\-]))(:(((\\?*|\\*?)([a-zA-Z0-9\\-\\._]|(\\\\[\\\\\\*\\?!\"#$$%&'\\(\\)\\+,/:;<=>@\\[\\]\\^`\\{\\|}~]))+(\\?*|\\*?))|[\\*\\-])){4}$"
+ }
+ ]
+}
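As a sanity check, the CPE 2.2 "URI form" pattern from the schema above can be exercised directly once its JSON string escaping is translated into a raw Python regex (a quick illustration, not part of the schema tooling):

```python
# The CPE 2.2 pattern from cpe.schema.json, with JSON escapes undone.
import re

CPE22 = re.compile(r"^[c][pP][eE]:/[AHOaho]?(:[A-Za-z0-9\._\-~%]*){0,6}$")

def is_cpe22(name):
    """Return True when `name` is a CPE 2.2 URI-form name."""
    return CPE22.fullmatch(name) is not None
```

Per the schema's `$comment`, Clair itself only produces v2.3 names, so a validator in practice needs both branches of the `oneOf`.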
diff --git a/httptransport/types/v1/digest.schema.json b/httptransport/types/v1/digest.schema.json
new file mode 100644
index 0000000000..8ca37862ce
--- /dev/null
+++ b/httptransport/types/v1/digest.schema.json
@@ -0,0 +1,26 @@
+{
+ "$id": "https://clairproject.org/api/http/v1/digest.schema.json",
+ "$schema": "https://json-schema.org/draft/2020-12/schema",
+ "title": "Digest",
+ "description": "A digest acts as a content identifier, enabling content addressability.",
+ "oneOf": [
+ {
+ "$comment": "SHA256: MUST be implemented",
+ "description": "SHA256",
+ "type": "string",
+ "pattern": "^sha256:[a-f0-9]{64}$"
+ },
+ {
+ "$comment": "SHA512: MAY be implemented",
+ "description": "SHA512",
+ "type": "string",
+ "pattern": "^sha512:[a-f0-9]{128}$"
+ },
+ {
+ "$comment": "BLAKE3: MAY be implemented",
+ "description": "BLAKE3\n\n**Currently not implemented.**",
+ "type": "string",
+ "pattern": "^blake3:[a-f0-9]{64}$"
+ }
+ ]
+}
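A digest under this schema is just the algorithm name, a colon, and lowercase hex. For example, producing and validating the mandatory SHA-256 form in Python:

```python
# Build a content identifier in the form digest.schema.json requires
# implementations to support, then check it against the schema's pattern.
import hashlib
import re

SHA256_DIGEST = re.compile(r"^sha256:[a-f0-9]{64}$")

def digest_sha256(blob: bytes) -> str:
    return "sha256:" + hashlib.sha256(blob).hexdigest()
```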
diff --git a/httptransport/types/v1/distribution.schema.json b/httptransport/types/v1/distribution.schema.json
new file mode 100644
index 0000000000..fa473bb4f6
--- /dev/null
+++ b/httptransport/types/v1/distribution.schema.json
@@ -0,0 +1,49 @@
+{
+ "$id": "https://clairproject.org/api/http/v1/distribution.schema.json",
+ "$schema": "https://json-schema.org/draft/2020-12/schema",
+ "title": "Distribution",
+ "type": "object",
+ "description": "Distribution is the accompanying system context of a Package.",
+ "properties": {
+ "id": {
+ "description": "Unique ID for this Distribution. May be unique to the response document, not the whole system.",
+ "type": "string"
+ },
+ "did": {
+ "description": "A lower-case string (no spaces or other characters outside of 0–9, a–z, \".\", \"_\", and \"-\") identifying the operating system, excluding any version information and suitable for processing by scripts or usage in generated filenames.",
+ "type": "string"
+ },
+ "name": {
+ "description": "A string identifying the operating system.",
+ "type": "string"
+ },
+ "version": {
+ "description": "A string identifying the operating system version, excluding any OS name information, possibly including a release code name, and suitable for presentation to the user.",
+ "type": "string"
+ },
+ "version_code_name": {
+ "description": "A lower-case string (no spaces or other characters outside of 0–9, a–z, \".\", \"_\", and \"-\") identifying the operating system release code name, excluding any OS name information or release version, and suitable for processing by scripts or usage in generated filenames.",
+ "type": "string"
+ },
+ "version_id": {
+ "description": "A lower-case string (mostly numeric, no spaces or other characters outside of 0–9, a–z, \".\", \"_\", and \"-\") identifying the operating system version, excluding any OS name information or release code name.",
+ "type": "string"
+ },
+ "arch": {
+ "description": "A string identifying the OS architecture.",
+ "type": "string"
+ },
+ "cpe": {
+ "description": "Common Platform Enumeration name.",
+ "$ref": "cpe.schema.json"
+ },
+ "pretty_name": {
+ "description": "A pretty operating system name in a format suitable for presentation to the user.",
+ "type": "string"
+ }
+ },
+ "additionalProperties": false,
+ "required": [
+ "id"
+ ]
+}
diff --git a/httptransport/types/v1/environment.schema.json b/httptransport/types/v1/environment.schema.json
new file mode 100644
index 0000000000..20d7a5f0b8
--- /dev/null
+++ b/httptransport/types/v1/environment.schema.json
@@ -0,0 +1,29 @@
+{
+ "$id": "https://clairproject.org/api/http/v1/environment.schema.json",
+ "$schema": "https://json-schema.org/draft/2020-12/schema",
+ "title": "Environment",
+ "type": "object",
+ "description": "Environment describes the surrounding environment a package was discovered in.",
+ "properties": {
+ "package_db": {
+ "description": "The database the associated Package was discovered in.",
+ "type": "string"
+ },
+ "distribution_id": {
+ "description": "The ID of the Distribution of the associated Package.",
+ "type": "string"
+ },
+ "introduced_in": {
+ "description": "The Layer the associated Package was introduced in.",
+ "$ref": "digest.schema.json"
+ },
+ "repository_ids": {
+ "description": "The IDs of the Repositories of the associated Package.",
+ "type": "array",
+ "items": {
+ "type": "string"
+ }
+ }
+ },
+ "additionalProperties": false
+}
diff --git a/httptransport/types/v1/error.schema.json b/httptransport/types/v1/error.schema.json
new file mode 100644
index 0000000000..ffb1209425
--- /dev/null
+++ b/httptransport/types/v1/error.schema.json
@@ -0,0 +1,20 @@
+{
+ "$id": "https://clairproject.org/api/http/v1/error.schema.json",
+ "$schema": "https://json-schema.org/draft/2020-12/schema",
+ "title": "Error",
+ "type": "object",
+ "description": "A general error response.",
+ "properties": {
+ "code": {
+ "type": "string",
+ "description": "a code for this particular error"
+ },
+ "message": {
+ "type": "string",
+ "description": "a message with further detail"
+ }
+ },
+ "required": [
+ "message"
+ ]
+}
diff --git a/httptransport/types/v1/index_report.schema.json b/httptransport/types/v1/index_report.schema.json
new file mode 100644
index 0000000000..f6fbb17f3f
--- /dev/null
+++ b/httptransport/types/v1/index_report.schema.json
@@ -0,0 +1,62 @@
+{
+ "$id": "https://clairproject.org/api/http/v1/index_report.schema.json",
+ "$schema": "https://json-schema.org/draft/2020-12/schema",
+ "title": "Index Report",
+ "type": "object",
+ "description": "An index of the contents of a Manifest.",
+ "properties": {
+ "manifest_hash": {
+ "$ref": "digest.schema.json",
+ "description": "The Manifest's digest."
+ },
+ "state": {
+ "type": "string",
+ "description": "The current state of the index operation"
+ },
+ "err": {
+ "type": "string",
+ "description": "An error message on event of unsuccessful index"
+ },
+ "success": {
+ "type": "boolean",
+ "description": "A bool indicating succcessful index"
+ },
+ "packages": {
+ "type": "object",
+ "description": "A map of Package objects indexed by a document-local identifier.",
+ "additionalProperties": {
+ "$ref": "package.schema.json"
+ }
+ },
+ "distributions": {
+ "type": "object",
+ "description": "A map of Distribution objects indexed by a document-local identifier.",
+ "additionalProperties": {
+ "$ref": "distribution.schema.json"
+ }
+ },
+ "repository": {
+ "type": "object",
+ "description": "A map of Repository objects indexed by a document-local identifier.",
+ "additionalProperties": {
+ "$ref": "repository.schema.json"
+ }
+ },
+ "environments": {
+ "type": "object",
+ "description": "A map of Environment arrays indexed by a Package's identifier.",
+ "additionalProperties": {
+ "type": "array",
+ "items": {
+ "$ref": "environment.schema.json"
+ }
+ }
+ }
+ },
+ "additionalProperties": false,
+ "required": [
+ "manifest_hash",
+ "state",
+ "success"
+ ]
+}
diff --git a/httptransport/types/v1/index_state.schema.json b/httptransport/types/v1/index_state.schema.json
new file mode 100644
index 0000000000..19645519cb
--- /dev/null
+++ b/httptransport/types/v1/index_state.schema.json
@@ -0,0 +1,16 @@
+{
+ "$id": "https://clairproject.org/api/http/v1/index_state.schema.json",
+ "$schema": "https://json-schema.org/draft/2020-12/schema",
+ "title": "Index State",
+ "type": "object",
+ "description": "Information on the state of the indexer system.",
+ "properties": {
+ "state": {
+ "type": "string",
+ "description": "an opaque token"
+ }
+ },
+ "required": [
+ "state"
+ ]
+}
diff --git a/httptransport/types/v1/layer.schema.json b/httptransport/types/v1/layer.schema.json
new file mode 100644
index 0000000000..f69f81aac0
--- /dev/null
+++ b/httptransport/types/v1/layer.schema.json
@@ -0,0 +1,39 @@
+{
+ "$id": "https://clairproject.org/api/http/v1/layer.schema.json",
+ "$schema": "https://json-schema.org/draft/2020-12/schema",
+ "title": "Layer",
+ "type": "object",
+ "description": "Layer is a description of a container layer. It should contain enough information to fetch the layer.",
+ "properties": {
+ "hash": {
+ "$ref": "digest.schema.json",
+ "description": "Digest of the layer blob."
+ },
+ "uri": {
+ "type": "string",
+ "description": "A URI indicating where the layer blob can be downloaded from."
+ },
+ "headers": {
+ "description": "Any additional HTTP-style headers needed for requesting layers.",
+ "type": "object",
+ "patternProperties": {
+ "^[a-zA-Z0-9\\-_]+$": {
+ "type": "array",
+ "items": {
+ "type": "string"
+ }
+ }
+ }
+ },
+ "media_type": {
+ "description": "The OCI Layer media type for this layer.",
+ "type": "string",
+ "pattern": "^application/vnd\\.oci\\.image\\.layer\\.v1\\.tar(\\+(gzip|zstd))?$"
+ }
+ },
+ "additionalProperties": false,
+ "required": [
+ "hash",
+ "uri"
+ ]
+}
diff --git a/httptransport/types/v1/manifest.schema.json b/httptransport/types/v1/manifest.schema.json
new file mode 100644
index 0000000000..15face0a91
--- /dev/null
+++ b/httptransport/types/v1/manifest.schema.json
@@ -0,0 +1,24 @@
+{
+ "$id": "https://clairproject.org/api/http/v1/manifest.schema.json",
+ "$schema": "https://json-schema.org/draft/2020-12/schema",
+ "title": "Manifest",
+ "type": "object",
+ "description": "A description of an OCI Image Manifest.",
+ "properties": {
+ "hash": {
+ "$ref": "digest.schema.json",
+ "description": "The OCI Image Manifest's digest.\n\nThis is used as an identifier throughout the system. This **SHOULD** be the same as the OCI Image Manifest's digest, but this is not enforced."
+ },
+ "layers": {
+ "type": "array",
+ "description": "The OCI Layers making up the Image, in order.",
+ "items": {
+ "$ref": "layer.schema.json"
+ }
+ }
+ },
+ "additionalProperties": false,
+ "required": [
+ "hash"
+ ]
+}
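Putting the Manifest and Layer schemas together, an index request body can be assembled like this (the digests and URI are invented for illustration; per the API above, the body is POSTed to `/indexer/api/v1/index_report`):

```python
# Assemble a Manifest payload per manifest.schema.json and
# layer.schema.json. Layer order must preserve the original image order.
import json

def make_manifest(manifest_digest, layers):
    """layers: iterable of (digest, uri) pairs, in original image order."""
    return {
        "hash": manifest_digest,
        "layers": [{"hash": h, "uri": u, "headers": {}} for h, u in layers],
    }

manifest = make_manifest(
    "sha256:" + "0" * 64,
    [("sha256:" + "1" * 64, "https://storage.example.com/blob/1")],
)
body = json.dumps(manifest)
```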
diff --git a/httptransport/types/v1/normalized_severity.schema.json b/httptransport/types/v1/normalized_severity.schema.json
new file mode 100644
index 0000000000..c3ef09d055
--- /dev/null
+++ b/httptransport/types/v1/normalized_severity.schema.json
@@ -0,0 +1,14 @@
+{
+ "$id": "https://clairproject.org/api/http/v1/normalized_severity.schema.json",
+ "$schema": "https://json-schema.org/draft/2020-12/schema",
+ "title": "Normalized Severity",
+ "description": "Standardized severity values.",
+ "enum": [
+ "Unknown",
+ "Negligible",
+ "Low",
+ "Medium",
+ "High",
+ "Critical"
+ ]
+}
diff --git a/httptransport/types/v1/notification.schema.json b/httptransport/types/v1/notification.schema.json
new file mode 100644
index 0000000000..d9deb2d665
--- /dev/null
+++ b/httptransport/types/v1/notification.schema.json
@@ -0,0 +1,34 @@
+{
+ "$id": "https://clairproject.org/api/http/v1/notification.schema.json",
+ "$schema": "https://json-schema.org/draft/2020-12/schema",
+ "title": "Notification",
+ "type": "object",
+ "description": "A change in a manifest affected by a vulnerability.",
+ "properties": {
+ "id": {
+ "description": "Unique identifier for this notification.",
+ "type": "string"
+ },
+ "manifest": {
+ "$ref": "digest.schema.json",
+ "description": "The digest of the manifest affected by the provided vulnerability."
+ },
+ "reason": {
+ "description": "The reason for the notification.",
+ "enum": [
+ "added",
+ "removed"
+ ]
+ },
+ "vulnerability": {
+ "$ref": "vulnerability_summary.schema.json"
+ }
+ },
+ "additionalProperties": false,
+ "required": [
+ "id",
+ "manifest",
+ "reason",
+ "vulnerability"
+ ]
+}
diff --git a/httptransport/types/v1/notification_page.schema.json b/httptransport/types/v1/notification_page.schema.json
new file mode 100644
index 0000000000..8a01866856
--- /dev/null
+++ b/httptransport/types/v1/notification_page.schema.json
@@ -0,0 +1,44 @@
+{
+ "$id": "https://clairproject.org/api/http/v1/notification_page.schema.json",
+ "$schema": "https://json-schema.org/draft/2020-12/schema",
+ "title": "Notification Page",
+ "type": "object",
+ "description": "A page description and list of notifications.",
+ "properties": {
+ "page": {
+ "description": "An object informing the client the next page to retrieve.",
+ "type": "object",
+ "properties": {
+ "size": {
+ "type": "integer"
+ },
+ "next": {
+ "oneOf": [
+ {
+ "type": "string"
+ },
+ {
+ "const": "-1"
+ }
+ ]
+ }
+ },
+ "additionalProperties": false,
+ "required": [
+ "size"
+ ]
+ },
+ "notifications": {
+ "description": "Notifications within this page.",
+ "type": "array",
+ "items": {
+ "$ref": "notification.schema.json"
+ }
+ }
+ },
+ "additionalProperties": false,
+ "required": [
+ "page",
+ "notifications"
+ ]
+}
diff --git a/httptransport/types/v1/package.schema.json b/httptransport/types/v1/package.schema.json
new file mode 100644
index 0000000000..a4a54cd9af
--- /dev/null
+++ b/httptransport/types/v1/package.schema.json
@@ -0,0 +1,55 @@
+{
+ "$id": "https://clairproject.org/api/http/v1/package.schema.json",
+ "$schema": "https://json-schema.org/draft/2020-12/schema",
+ "title": "Package",
+ "type": "object",
+ "description": "Description of installed software.",
+ "properties": {
+ "id": {
+ "description": "Unique ID for this Package. May be unique to the response document, not the whole system.",
+ "type": "string"
+ },
+ "name": {
+ "description": "Identifier of this Package.\n\nThe uniqueness and scoping of this name depends on the packaging system.",
+ "type": "string"
+ },
+ "version": {
+ "description": "Version of this Package, as reported by the packaging system.",
+ "type": "string"
+ },
+ "kind": {
+ "description": "The \"kind\" of this Package.",
+ "enum": [
+ "BINARY",
+ "SOURCE"
+ ],
+ "default": "BINARY"
+ },
+ "source": {
+ "$ref": "#",
+ "description": "Source Package that produced the current binary Package, if known."
+ },
+ "normalized_version": {
+ "description": "Normalized representation of the discovered version.\n\nThe format is not specified, but is guaranteed to be forward compatible.",
+ "type": "string"
+ },
+ "module": {
+ "description": "An identifier for intra-Repository grouping of packages.\n\nLikely only relevant on rpm-based systems.",
+ "type": "string"
+ },
+ "arch": {
+ "description": "Native architecture for the Package.",
+ "type": "string",
+ "$comment": "This should become an enum in the future."
+ },
+ "cpe": {
+ "$ref": "cpe.schema.json",
+ "description": "CPE Name for the Package."
+ }
+ },
+ "additionalProperties": false,
+ "required": [
+ "name",
+ "version"
+ ]
+}
diff --git a/httptransport/types/v1/range.schema.json b/httptransport/types/v1/range.schema.json
new file mode 100644
index 0000000000..924a430f80
--- /dev/null
+++ b/httptransport/types/v1/range.schema.json
@@ -0,0 +1,19 @@
+{
+ "$id": "https://clairproject.org/api/http/v1/range.schema.json",
+ "$schema": "https://json-schema.org/draft/2020-12/schema",
+ "title": "Range",
+ "type": "object",
+ "description": "A range of versions.",
+ "properties": {
+ "[": {
+ "type": "string",
+ "description": "Lower bound, inclusive."
+ },
+ ")": {
+ "type": "string",
+ "description": "Upper bound, exclusive."
+ }
+ },
+ "minProperties": 1,
+ "additionalProperties": false
+}
diff --git a/httptransport/types/v1/repository.schema.json b/httptransport/types/v1/repository.schema.json
new file mode 100644
index 0000000000..1bb0bd9410
--- /dev/null
+++ b/httptransport/types/v1/repository.schema.json
@@ -0,0 +1,34 @@
+{
+ "$id": "https://clairproject.org/api/http/v1/repository.schema.json",
+ "$schema": "https://json-schema.org/draft/2020-12/schema",
+ "title": "Repository",
+ "type": "object",
+ "description": "Description of a software repository.",
+ "properties": {
+ "id": {
+ "description": "Unique ID for this Repository. May be unique to the response document, not the whole system.",
+ "type": "string"
+ },
+ "name": {
+ "description": "Human-relevant name for the Repository.",
+ "type": "string"
+ },
+ "key": {
+ "description": "Machine-relevant name for the Repository.",
+ "type": "string"
+ },
+ "uri": {
+ "description": "URI describing the Repository.",
+ "type": "string",
+ "format": "uri"
+ },
+ "cpe": {
+ "description": "CPE name for the Repository.",
+ "$ref": "cpe.schema.json"
+ }
+ },
+ "additionalProperties": false,
+ "required": [
+ "id"
+ ]
+}
diff --git a/httptransport/types/v1/types.go b/httptransport/types/v1/types.go
new file mode 100644
index 0000000000..a0f14687bb
--- /dev/null
+++ b/httptransport/types/v1/types.go
@@ -0,0 +1,157 @@
+// Package types provides concrete types for the HTTP API.
+package types
+
+import (
+ "embed"
+ "encoding/json"
+ "fmt"
+ "time"
+)
+
+//go:embed *.schema.json
+var Schema embed.FS
+
+// Indexer types
+type (
+ Manifest struct {
+ Hash string `json:"hash"`
+ Layers []Layer `json:"layers,omitempty"`
+ }
+
+ Layer struct {
+ Hash string `json:"hash"`
+ URI string `json:"uri"`
+ Headers map[string][]string `json:"headers,omitempty"`
+ }
+
+ IndexReport struct {
+ Hash string `json:"manifest_hash"`
+ State string `json:"state"`
+ Err string `json:"err,omitempty"`
+ Packages map[string]*Package `json:"packages,omitempty"`
+ Distributions map[string]*Distribution `json:"distributions,omitempty"`
+ Repositories map[string]*Repository `json:"repository,omitempty"`
+ Environments map[string][]*Environment `json:"environments,omitempty"`
+ Success bool `json:"success"`
+ }
+
+ Package struct {
+ ID string `json:"id"`
+ Name string `json:"name,omitempty"`
+ Version string `json:"version,omitempty"`
+ Kind string `json:"kind,omitempty"`
+ Source *Package `json:"source,omitempty"`
+ NormalizedVersion string `json:"normalized_version,omitempty"`
+ Module string `json:"module,omitempty"`
+ Arch string `json:"arch,omitempty"`
+ CPE string `json:"cpe,omitempty"`
+ }
+
+ Distribution struct {
+ ID string `json:"id"`
+ DID string `json:"did,omitempty"`
+ Name string `json:"name,omitempty"`
+ Version string `json:"version,omitempty"`
+ VersionCodeName string `json:"version_code_name,omitempty"`
+ VersionID string `json:"version_id,omitempty"`
+ Arch string `json:"arch,omitempty"`
+ CPE string `json:"cpe,omitempty"`
+ PrettyName string `json:"pretty_name,omitempty"`
+ }
+
+ Repository struct {
+ ID string `json:"id,omitempty"`
+ Name string `json:"name,omitempty"`
+ Key string `json:"key,omitempty"`
+ URI string `json:"uri,omitempty"`
+ CPE string `json:"cpe,omitempty"`
+ }
+
+ Environment struct {
+ IntroducedIn string `json:"introduced_in"`
+ PackageDB string `json:"package_db,omitempty"`
+ DistributionID string `json:"distribution_id,omitempty"`
+ RepositoryIDs []string `json:"repository_ids,omitempty"`
+ }
+
+ IndexerState struct {
+ State string
+ }
+
+ VulnerabilityBatch struct {
+ Vulnerabilities []Vulnerability
+ }
+)
+
+// Matcher types
+type (
+ VulnerabilityReport struct {
+ Hash string `json:"manifest_hash"`
+ Packages map[string]*Package `json:"packages,omitempty"`
+ Vulnerabilities map[string]*Vulnerability `json:"vulnerabilities,omitempty"`
+ Environments map[string][]*Environment `json:"environments,omitempty"`
+ PackageVulnerabilities map[string][]string `json:"package_vulnerabilities,omitempty"`
+ Distributions map[string]*Distribution `json:"distributions,omitempty"`
+ Repositories map[string]*Repository `json:"repository,omitempty"`
+ Enrichments map[string][]json.RawMessage `json:"enrichments,omitempty"`
+ }
+
+ Vulnerability struct {
+ ID string `json:"id"`
+ Updater string `json:"updater,omitempty"`
+ Name string `json:"name,omitempty"`
+ Issued time.Time `json:"issued"`
+ Severity string `json:"severity,omitempty"`
+ NormalizedSeverity string `json:"normalized_severity,omitempty"`
+ Description string `json:"description,omitempty"`
+ Links string `json:"links,omitempty"`
+ Package *Package `json:"package,omitempty"`
+ Dist *Distribution `json:"distribution,omitempty"`
+ Repo *Repository `json:"repository,omitempty"`
+ FixedInVersion string `json:"fixed_in_version"`
+ Range *Range `json:"range,omitempty"`
+ ArchOperation string `json:"arch_op,omitempty"`
+ }
+
+ Range struct {
+ Lower string `json:"[,omitempty"`
+ Upper string `json:"),omitempty"`
+ }
+
+ UpdateKind int
+
+ UpdateOperation struct {
+ Ref string `json:"ref"`
+ Updater string `json:"updater"`
+ Fingerprint []byte `json:"fingerprint"`
+ Date time.Time `json:"date"`
+ Kind UpdateKind `json:"kind"`
+ }
+
+ UpdateDiff struct {
+ Prev UpdateOperation `json:"prev"`
+ Cur UpdateOperation `json:"cur"`
+ Added []Vulnerability `json:"added"`
+ Removed []Vulnerability `json:"removed"`
+ }
+)
+
+//go:generate go run golang.org/x/tools/cmd/stringer@latest -type UpdateKind -linecomment
+
+const (
+ _ UpdateKind = iota
+ UpdateVulnerability // vulnerability
+ UpdateEnrichment // enrichment
+)
+
+// API types
+type (
+ Error struct {
+ Code int
+ Message string
+ }
+)
+
+func (e *Error) Error() string {
+ return fmt.Sprintf("%s (HTTP %d)", e.Message, e.Code)
+}
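The `Error` type at the end of types.go is the API-level error for this package. A minimal standalone sketch of its formatting behavior (the type body is copied from the diff above; the sample code and message are illustrative):

```go
package main

import "fmt"

// Error carries an HTTP status code plus a human-readable message and
// formats as "message (HTTP code)", matching the method in types.go.
type Error struct {
	Code    int
	Message string
}

func (e *Error) Error() string {
	return fmt.Sprintf("%s (HTTP %d)", e.Message, e.Code)
}

func main() {
	var err error = &Error{Code: 404, Message: "manifest not found"}
	fmt.Println(err) // manifest not found (HTTP 404)
}
```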
diff --git a/httptransport/types/v1/update_diff.schema.json b/httptransport/types/v1/update_diff.schema.json
new file mode 100644
index 0000000000..6986db50b8
--- /dev/null
+++ b/httptransport/types/v1/update_diff.schema.json
@@ -0,0 +1,9 @@
+{
+ "$id": "https://clairproject.org/api/http/v1/update_diff.schema.json",
+ "$schema": "https://json-schema.org/draft/2020-12/schema",
+ "title": "Update Difference",
+ "type": "object",
+ "description": "**This is an internal type, documented for completeness.**\n\nTKTK",
+ "additionalProperties": false,
+ "required": [ ]
+}
diff --git a/httptransport/types/v1/updatekind_string.go b/httptransport/types/v1/updatekind_string.go
new file mode 100644
index 0000000000..39d8627236
--- /dev/null
+++ b/httptransport/types/v1/updatekind_string.go
@@ -0,0 +1,25 @@
+// Code generated by "stringer -type UpdateKind -linecomment"; DO NOT EDIT.
+
+package types
+
+import "strconv"
+
+func _() {
+ // An "invalid array index" compiler error signifies that the constant values have changed.
+ // Re-run the stringer command to generate them again.
+ var x [1]struct{}
+ _ = x[UpdateVulnerability-1]
+ _ = x[UpdateEnrichment-2]
+}
+
+const _UpdateKind_name = "vulnerabilityenrichment"
+
+var _UpdateKind_index = [...]uint8{0, 13, 23}
+
+func (i UpdateKind) String() string {
+ i -= 1
+ if i < 0 || i >= UpdateKind(len(_UpdateKind_index)-1) {
+ return "UpdateKind(" + strconv.FormatInt(int64(i+1), 10) + ")"
+ }
+ return _UpdateKind_name[_UpdateKind_index[i]:_UpdateKind_index[i+1]]
+}
diff --git a/httptransport/types/v1/vulnerability.schema.json b/httptransport/types/v1/vulnerability.schema.json
new file mode 100644
index 0000000000..2cb9f7501c
--- /dev/null
+++ b/httptransport/types/v1/vulnerability.schema.json
@@ -0,0 +1,36 @@
+{
+ "$id": "https://clairproject.org/api/http/v1/vulnerability.schema.json",
+ "$schema": "https://json-schema.org/draft/2020-12/schema",
+ "title": "Vulnerability",
+ "type": "object",
+ "description": "Description of a software flaw.",
+ "$ref": "vulnerability_core.schema.json",
+ "properties": {
+ "id": {
+ "description": "Unique ID for this Vulnerability. May be unique to the response document, not the whole system.",
+ "type": "string"
+ },
+ "updater": {
+ "description": "The updater that discovered this Vulnerability.",
+ "type": "string"
+ },
+ "description": {
+ "description": "Human-readable description, as presented in the vendor data.",
+ "type": "string"
+ },
+ "issued": {
+ "description": "Date of issue, as presented in the vendor data.",
+ "type": "string",
+ "format": "date-time"
+ },
+ "links": {
+ "description": "Links to additional information, as presented in the vendor data.",
+ "type": "string"
+ }
+ },
+ "unevaluatedProperties": false,
+ "required": [
+ "id",
+ "updater"
+ ]
+}
diff --git a/httptransport/types/v1/vulnerability_core.schema.json b/httptransport/types/v1/vulnerability_core.schema.json
new file mode 100644
index 0000000000..b5f665b97d
--- /dev/null
+++ b/httptransport/types/v1/vulnerability_core.schema.json
@@ -0,0 +1,75 @@
+{
+ "$id": "https://clairproject.org/api/http/v1/vulnerability_core.schema.json",
+ "$schema": "https://json-schema.org/draft/2020-12/schema",
+ "title": "Vulnerability Core",
+ "type": "object",
+ "description": "The core elements of vulnerabilities in the Clair system.",
+ "properties": {
+ "name": {
+ "type": "string",
+ "description": "Human-readable name, as presented in the vendor data."
+ },
+ "fixed_in_version": {
+ "type": "string",
+ "description": "Version string, as presented in the vendor data."
+ },
+ "severity": {
+ "type": "string",
+ "description": "Severity, as presented in the vendor data."
+ },
+ "normalized_severity": {
+ "$ref": "normalized_severity.schema.json",
+ "description": "A well-defined set of severity strings guaranteed to be present.",
+ },
+ "range": {
+ "$ref": "range.schema.json",
+ "description": "Range of versions the vulnerability applies to."
+ },
+ "arch_op": {
+ "description": "Flag indicating how the referenced package's \"arch\" member should be interpreted.",
+ "enum": [
+ "equals",
+ "not equals",
+ "pattern match"
+ ]
+ },
+ "package": {
+ "$ref": "package.schema.json",
+ "description": "A package description"
+ },
+ "distribution": {
+ "$ref": "distribution.schema.json",
+ "description": "A distribution description"
+ },
+ "repository": {
+ "$ref": "repository.schema.json",
+ "description": "A repository description"
+ }
+ },
+ "required": [
+ "name",
+ "normalized_severity"
+ ],
+ "dependentRequired": {
+ "package": [
+ "arch_op"
+ ]
+ },
+ "anyOf": [
+ {
+ "required": [
+ "package"
+ ]
+ },
+ {
+ "required": [
+ "repository"
+ ]
+ },
+ {
+ "required": [
+ "distribution"
+ ]
+ }
+ ]
+}
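Putting the constraints above together: `dependentRequired` means any instance carrying a `package` must also carry `arch_op`, and the `anyOf` requires at least one of `package`, `repository`, or `distribution` to be present. A minimal instance satisfying the schema (the name and version values are illustrative, not real vendor data):

```json
{
  "name": "CVE-2024-0001",
  "normalized_severity": "High",
  "package": {
    "name": "openssl",
    "version": "3.0.1-4"
  },
  "arch_op": "equals"
}
```

Dropping `arch_op` from this instance would fail `dependentRequired`; dropping `package` entirely would fail the `anyOf`.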
diff --git a/httptransport/types/v1/vulnerability_report.schema.json b/httptransport/types/v1/vulnerability_report.schema.json
new file mode 100644
index 0000000000..8f2658560d
--- /dev/null
+++ b/httptransport/types/v1/vulnerability_report.schema.json
@@ -0,0 +1,77 @@
+{
+ "$id": "https://clairproject.org/api/http/v1/vulnerability_report.schema.json",
+ "$schema": "https://json-schema.org/draft/2020-12/schema",
+ "title": "Vulnerability Report",
+ "type": "object",
+ "description": "A report with discovered packages, package environments, and package vulnerabilities within a Manifest.",
+ "properties": {
+ "manifest_hash": {
+ "$ref": "digest.schema.json",
+ "description": "The Manifest's digest."
+ },
+ "packages": {
+ "type": "object",
+ "description": "A map of Package objects indexed by a document-local identifier.",
+ "additionalProperties": {
+ "$ref": "package.schema.json"
+ }
+ },
+ "distributions": {
+ "type": "object",
+ "description": "A map of Distribution objects indexed by a document-local identifier.",
+ "additionalProperties": {
+ "$ref": "distribution.schema.json"
+ }
+ },
+ "repository": {
+ "type": "object",
+ "description": "A map of Repository objects indexed by a document-local identifier.",
+ "additionalProperties": {
+ "$ref": "repository.schema.json"
+ }
+ },
+ "environments": {
+ "type": "object",
+ "description": "A map of Environment arrays indexed by a Package's identifier.",
+ "additionalProperties": {
+ "type": "array",
+ "items": {
+ "$ref": "environment.schema.json"
+ }
+ }
+ },
+ "vulnerabilities": {
+ "type": "object",
+ "description": "A map of Vulnerabilities indexed by a document-local identifier.",
+ "additionalProperties": {
+ "$ref": "vulnerability.schema.json"
+ }
+ },
+ "package_vulnerabilities": {
+ "type": "object",
+ "description": "A mapping of Vulnerability identifier lists indexed by Package identifier.",
+ "additionalProperties": {
+ "type": "array",
+ "items": {
+ "type": "string"
+ }
+ }
+ },
+ "enrichments": {
+ "type": "object",
+ "description": "A mapping of extra \"enrichment\" data by type.",
+ "additionalProperties": {
+ "type": "array"
+ }
+ }
+ },
+ "additionalProperties": false,
+ "required": [
+ "distributions",
+ "environments",
+ "manifest_hash",
+ "packages",
+ "package_vulnerabilities",
+ "vulnerabilities"
+ ]
+}
diff --git a/httptransport/types/v1/vulnerability_summary.schema.json b/httptransport/types/v1/vulnerability_summary.schema.json
new file mode 100644
index 0000000000..8fac0115e5
--- /dev/null
+++ b/httptransport/types/v1/vulnerability_summary.schema.json
@@ -0,0 +1,9 @@
+{
+ "$id": "https://clairproject.org/api/http/v1/vulnerability_summary.schema.json",
+ "$schema": "https://json-schema.org/draft/2020-12/schema",
+ "title": "Vulnerability Summary",
+ "type": "object",
+ "description": "A summary of a vulnerability.",
+ "$ref": "vulnerability_core.schema.json",
+ "unevaluatedProperties": false
+}
diff --git a/internal/alias_gen.go b/internal/alias_gen.go
new file mode 100644
index 0000000000..a065f7725e
--- /dev/null
+++ b/internal/alias_gen.go
@@ -0,0 +1,287 @@
+// Copyright 2025 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+//go:build ignore
+
+// AliasGen collects all top-level, exported declarations and
+// produces aliases that reference declarations in another package
+// that actually implements each declaration.
+//
+// Usage:
+//
+// go run alias_gen.go {TargetPkgPath} {WorkingDir}
+//
+// Where:
+// - TargetPkgPath is the package path that implements each declaration.
+// - WorkingDir is the directory that contains Go source files.
+// All top-level, exported declarations are collected and transformed
+// into aliases to the equivalent declaration in the target package
+// and written to an alias.go output file.
+package main
+
+import (
+ "bytes"
+ "cmp"
+ "fmt"
+ "go/ast"
+ "go/format"
+ "go/parser"
+ "go/token"
+ "maps"
+ "os"
+ "path/filepath"
+ "slices"
+ "strconv"
+ "strings"
+)
+
+func main() {
+ targetPkgPath := os.Args[1]
+ workingDir := os.Args[2]
+ generateAliases(targetPkgPath, workingDir)
+}
+
+func generateAliases(targetPkgPath, workingDir string) {
+ fset := token.NewFileSet()
+ var files []*ast.File
+ for _, fi := range mustGet(os.ReadDir(workingDir)) {
+ if !strings.HasSuffix(fi.Name(), ".go") || strings.HasSuffix(fi.Name(), "_test.go") || fi.Name() == "alias.go" || fi.Name() == "alias_gen.go" {
+ continue
+ }
+ b := mustGet(os.ReadFile(filepath.Join(workingDir, fi.Name())))
+ f := mustGet(parser.ParseFile(fset, fi.Name(), b, parser.ParseComments))
+ files = append(files, f)
+ }
+ slices.SortFunc(files, func(x, y *ast.File) int {
+ return cmp.Compare(fset.File(x.Pos()).Name(), fset.File(y.Pos()).Name())
+ })
+
+ var aliasFile, aliasDecls bytes.Buffer
+
+ // Print copyright.
+ aliasFile.WriteString("// Copyright 2025 The Go Authors. All rights reserved.\n")
+ aliasFile.WriteString("// Use of this source code is governed by a BSD-style\n")
+ aliasFile.WriteString("// license that can be found in the LICENSE file.\n")
+ aliasFile.WriteString("\n")
+
+ // Print build tag.
+ aliasFile.WriteString("// Code generated by alias_gen.go; DO NOT EDIT.\n")
+ aliasFile.WriteString("\n")
+ aliasFile.WriteString("//go:build goexperiment.jsonv2 && go1.25\n")
+ aliasFile.WriteString("\n")
+
+ // Print package docs.
+ packageName := files[0].Name.String()
+ for _, f := range files {
+ writeComments(&aliasFile, f.Doc)
+ }
+ aliasFile.WriteString("package " + packageName + "\n")
+ aliasFile.WriteString("\n")
+
+ // Print the imports.
+ imports := make(map[string]struct{})
+ writeType := func(expr ast.Expr) {
+ ast.Walk(astVisitor(func(node ast.Node) bool {
+ if sel, ok := node.(*ast.SelectorExpr); ok {
+ if id, ok := sel.X.(*ast.Ident); ok {
+ switch pkgName := id.String(); pkgName {
+ case "io", "bytes":
+ imports[pkgName] = struct{}{}
+ case "jsontext":
+ imports[`encoding/json/jsontext`] = struct{}{}
+ default:
+ panic(fmt.Sprintf("unknown package %q", id.String()))
+ }
+ }
+ }
+ return true
+ }), expr)
+ mustDo(format.Node(&aliasDecls, fset, expr))
+ }
+ writeImports := func() {
+ aliasFile.WriteString("import (\n")
+ imports := append(slices.Collect(maps.Keys(imports)), targetPkgPath)
+ slices.Sort(imports)
+ for _, pkgPath := range imports {
+ aliasFile.WriteString(strconv.Quote(pkgPath) + "\n")
+ }
+ aliasFile.WriteString(")\n")
+ aliasFile.WriteString("\n")
+ }
+
+ // Print aliases to every exported top-level declaration.
+ for _, f := range files {
+ for _, d := range f.Decls {
+ switch d := d.(type) {
+ case *ast.GenDecl:
+ switch d.Tok {
+ case token.IMPORT:
+ case token.CONST, token.VAR:
+ // Check whether there are any exported declarations.
+ var hasExported bool
+ for _, s := range d.Specs {
+ for _, name := range s.(*ast.ValueSpec).Names {
+ hasExported = hasExported || (name.IsExported() && name.String() != "Internal")
+ }
+ }
+ if !hasExported {
+ continue
+ }
+
+ // Print the declaration.
+ writeComments(&aliasDecls, d.Doc)
+ if d.Lparen > 0 {
+ aliasDecls.WriteString(d.Tok.String())
+ aliasDecls.WriteString(" (\n")
+ }
+ for _, s := range d.Specs {
+ s := s.(*ast.ValueSpec)
+ writeComments(&aliasDecls, s.Doc)
+ if d.Lparen == 0 {
+ aliasDecls.WriteString(d.Tok.String())
+ aliasDecls.WriteByte(' ')
+ }
+ var hasExported bool
+ for _, name := range s.Names {
+ if name.IsExported() {
+ aliasDecls.WriteString(name.String())
+ aliasDecls.WriteByte(',')
+ hasExported = true
+ }
+ }
+ if !hasExported {
+ continue
+ }
+ trimRight(&aliasDecls, ",")
+ aliasDecls.WriteByte('=')
+ for _, name := range s.Names {
+ if name.IsExported() {
+ aliasDecls.WriteString(packageName)
+ aliasDecls.WriteByte('.')
+ aliasDecls.WriteString(name.String())
+ aliasDecls.WriteByte(',')
+ }
+ }
+ trimRight(&aliasDecls, ",")
+ aliasDecls.WriteString("\n")
+ }
+ if d.Rparen > 0 {
+ aliasDecls.WriteString(")")
+ }
+ aliasDecls.WriteString("\n")
+ case token.TYPE:
+ for _, s := range d.Specs {
+ s := s.(*ast.TypeSpec)
+ if !s.Name.IsExported() {
+ continue
+ }
+ writeComments(&aliasDecls, d.Doc)
+ aliasDecls.WriteString(d.Tok.String())
+ aliasDecls.WriteByte(' ')
+ aliasDecls.WriteString(s.Name.String())
+ aliasDecls.WriteByte('=')
+ aliasDecls.WriteString(packageName)
+ aliasDecls.WriteByte('.')
+ aliasDecls.WriteString(s.Name.String())
+ aliasDecls.WriteString("\n")
+ aliasDecls.WriteString("\n")
+ }
+ default:
+ panic(fmt.Sprintf("unknown token.Token: %v", d.Tok))
+ }
+ case *ast.FuncDecl:
+ if !d.Name.IsExported() || d.Recv != nil {
+ continue // ignore unexported functions or methods
+ }
+
+ // Print the comment.
+ writeComments(&aliasDecls, d.Doc)
+ aliasDecls.WriteString(token.FUNC.String())
+ aliasDecls.WriteByte(' ')
+ aliasDecls.WriteString(d.Name.String())
+ writeFields := func(fields *ast.FieldList, leftDelim, rightDelim byte, withType bool) {
+ if fields == nil {
+ return
+ }
+ aliasDecls.WriteByte(leftDelim)
+ for i, field := range fields.List {
+ for j, name := range field.Names {
+ aliasDecls.WriteString(name.String())
+ if j < len(field.Names)-1 {
+ aliasDecls.WriteByte(',')
+ }
+ }
+ if withType {
+ aliasDecls.WriteByte(' ')
+ writeType(field.Type)
+ } else if _, ok := field.Type.(*ast.Ellipsis); ok {
+ aliasDecls.WriteString("...")
+ }
+ if i < len(fields.List)-1 {
+ aliasDecls.WriteByte(',')
+ }
+ }
+ aliasDecls.WriteByte(rightDelim)
+ }
+
+ writeFields(d.Type.TypeParams, '[', ']', true)
+ writeFields(d.Type.Params, '(', ')', true)
+ writeFields(d.Type.Results, '(', ')', true)
+
+ aliasDecls.WriteString("{\n")
+ if d.Type.Results != nil {
+ aliasDecls.WriteString(token.RETURN.String())
+ aliasDecls.WriteByte(' ')
+ }
+ aliasDecls.WriteString(packageName)
+ aliasDecls.WriteByte('.')
+ aliasDecls.WriteString(d.Name.String())
+ writeFields(d.Type.TypeParams, '[', ']', false)
+ writeFields(d.Type.Params, '(', ')', false)
+ aliasDecls.WriteString("\n")
+ aliasDecls.WriteString("}\n")
+ aliasDecls.WriteString("\n")
+ default:
+ panic(fmt.Sprintf("unknown ast.Decl type: %T", d))
+ }
+ }
+ }
+ writeImports()
+ aliasFile.Write(aliasDecls.Bytes())
+
+ // Write to the output file.
+ b := mustGet(format.Source(aliasFile.Bytes()))
+ mustDo(os.WriteFile(filepath.Join(workingDir, "alias.go"), b, 0664))
+}
+
+func mustDo(err error) {
+ if err != nil {
+ panic(err)
+ }
+}
+
+func mustGet[T any](v T, err error) T {
+ mustDo(err)
+ return v
+}
+
+func writeComments(out *bytes.Buffer, comments *ast.CommentGroup) {
+ for line := range strings.Lines(comments.Text()) {
+ out.WriteString("// ")
+ out.WriteString(line)
+ }
+}
+
+func trimRight(out *bytes.Buffer, cutset string) {
+ out.Truncate(len(bytes.TrimRight(out.Bytes(), cutset)))
+}
+
+type astVisitor func(ast.Node) bool
+
+func (f astVisitor) Visit(node ast.Node) ast.Visitor {
+ if !f(node) {
+ return nil
+ }
+ return f
+}
diff --git a/internal/codec/codec.go b/internal/codec/codec.go
index c55cd3a4ab..771925f6a5 100644
--- a/internal/codec/codec.go
+++ b/internal/codec/codec.go
@@ -3,66 +3,85 @@
package codec
import (
+ "errors"
+ "fmt"
"io"
- "sync"
-
- "github.com/ugorji/go/codec"
)
-var jsonHandle codec.JsonHandle
+// Encoder encodes.
+type Encoder interface {
+ Encode(in any) error
+}
-func init() {
- // This is documented to cause "smart buffering".
- jsonHandle.WriterBufferSize = 4096
- jsonHandle.ReaderBufferSize = 4096
- // Force calling time.Time's Marshal function. This causes an allocation on
- // every time.Time value, but is the same behavior as the stdlib json
- // encoder. If we decide nulls are OK, this should get removed.
- jsonHandle.TimeNotBuiltin = true
+// Decoder decodes.
+type Decoder interface {
+ Decode(out any) error
}
-// Encoder and decoder pools, to reuse if possible.
-var (
- encPool = sync.Pool{
- New: func() interface{} {
- return codec.NewEncoder(nil, &jsonHandle)
- },
- }
- decPool = sync.Pool{
- New: func() interface{} {
- return codec.NewDecoder(nil, &jsonHandle)
- },
- }
+// Scheme indicates an API type scheme.
+//
+// This allows the same program type to have different wire representations.
+type Scheme uint
+
+//go:generate go run golang.org/x/tools/cmd/stringer -type Scheme -trimprefix Scheme
+
+const (
+ _ Scheme = iota
+ // SchemeV1 outputs v1 HTTP API objects for the relevant domain objects.
+ SchemeV1
)
-// Encoder encodes.
-type Encoder = codec.Encoder
+// SchemeDefault is the [Scheme] selected when no [Scheme] argument is passed to
+// [GetEncoder]/[GetDecoder].
+const SchemeDefault = SchemeV1
-// GetEncoder returns an encoder configured to write to w.
-func GetEncoder(w io.Writer) *Encoder {
- e := encPool.Get().(*Encoder)
- e.Reset(w)
- return e
-}
+var _ error = invalidScheme(0)
+
+type invalidScheme Scheme
-// PutEncoder returns an encoder to the pool.
-func PutEncoder(e *Encoder) {
- e.Reset(nil)
- encPool.Put(e)
+func (i invalidScheme) Error() string {
+ return fmt.Sprintf("programmer error: bad encoding scheme: %v", Scheme(i).String())
}
-// Decoder decodes.
-type Decoder = codec.Decoder
+var errExtraArgs = errors.New("programmer error: multiple extra arguments")
+
+// All the exported functions delegate to an unexported version, which is
+// provided by whichever implementation is selected at compile time.
-// GetDecoder returns a decoder configured to read from r.
-func GetDecoder(r io.Reader) *Decoder {
- d := decPool.Get().(*Decoder)
- d.Reset(r)
- return d
+// GetEncoder returns an [Encoder] configured to write to "w".
+//
+// An optional [Scheme] may be passed to change the encoding scheme.
+func GetEncoder(w io.Writer, v ...Scheme) Encoder {
+ s := SchemeDefault
+ switch len(v) {
+ case 0:
+ case 1:
+ s = v[0]
+ default:
+ panic(errExtraArgs)
+ }
+ switch s {
+ case SchemeV1:
+ return v1Encoder(w)
+ }
+ panic(invalidScheme(s))
}
-// PutDecoder returns a decoder to the pool.
-func PutDecoder(d *Decoder) {
- d.Reset(nil)
- decPool.Put(d)
+// GetDecoder returns a [Decoder] configured to read from "r".
+//
+// An optional [Scheme] may be passed to change the encoding scheme.
+func GetDecoder(r io.Reader, v ...Scheme) Decoder {
+ s := SchemeDefault
+ switch len(v) {
+ case 0:
+ case 1:
+ s = v[0]
+ default:
+ panic(errExtraArgs)
+ }
+ switch s {
+ case SchemeV1:
+ return v1Decoder(r)
+ }
+ panic(invalidScheme(s))
}
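The variadic `...Scheme` parameter is how `GetEncoder` and `GetDecoder` simulate an optional argument without changing existing call sites: zero values means the default scheme, one value overrides it, and more than one is treated as a programmer error. A standalone sketch of that idiom (names mirror the codec package; `pickScheme` is an invented helper, not part of the package):

```go
package main

import "fmt"

// Scheme selects a wire representation, as in the codec package.
type Scheme uint

const (
	_ Scheme = iota
	SchemeV1
)

// SchemeDefault is used when the caller passes no Scheme at all.
const SchemeDefault = SchemeV1

// pickScheme implements the optional-argument idiom: zero arguments
// selects the default, one argument overrides it, and anything more
// panics as a programmer error.
func pickScheme(v ...Scheme) Scheme {
	switch len(v) {
	case 0:
		return SchemeDefault
	case 1:
		return v[0]
	}
	panic("programmer error: multiple extra arguments")
}

func main() {
	fmt.Println(pickScheme() == SchemeV1)         // true
	fmt.Println(pickScheme(SchemeV1) == SchemeV1) // true
}
```

Panicking rather than returning an error keeps the happy-path signatures clean; misuse here can only come from code, never from runtime input.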
diff --git a/internal/codec/codec_test.go b/internal/codec/codec_test.go
index 70377ccd71..f2411502c5 100644
--- a/internal/codec/codec_test.go
+++ b/internal/codec/codec_test.go
@@ -3,23 +3,33 @@ package codec
import (
"bytes"
"encoding/json"
+ "errors"
"fmt"
+ "io"
"os"
+ "path"
+ "reflect"
+ "slices"
"strings"
+ "sync"
"testing"
"time"
"github.com/google/go-cmp/cmp"
+ "github.com/kaptinlin/jsonschema"
+ "github.com/quay/claircore"
+ "golang.org/x/tools/txtar"
+
+ "github.com/quay/clair/v4/httptransport/types/v1"
)
func Example() {
enc := GetEncoder(os.Stdout)
- defer PutEncoder(enc)
- enc.MustEncode([]string{"a", "slice", "of", "strings"})
+ enc.Encode([]string{"a", "slice", "of", "strings"})
fmt.Fprintln(os.Stdout)
- enc.MustEncode(nil)
+ enc.Encode(nil)
fmt.Fprintln(os.Stdout)
- enc.MustEncode(map[string]string{})
+ enc.Encode(map[string]string{})
fmt.Fprintln(os.Stdout)
// Output: ["a","slice","of","strings"]
// null
@@ -35,12 +45,10 @@ func BenchmarkDecode(b *testing.B) {
"d": strings.Repeat(`D`, 2048),
}
got := make(map[string]string, len(want))
- b.ResetTimer()
- for i := 0; i < b.N; i++ {
+ for b.Loop() {
dec := GetDecoder(JSONReader(want))
err := dec.Decode(&got)
- PutDecoder(dec)
if err != nil {
b.Error(err)
}
@@ -59,9 +67,8 @@ func BenchmarkDecodeStdlib(b *testing.B) {
"d": strings.Repeat(`D`, 2048),
}
got := make(map[string]string, len(want))
- b.ResetTimer()
- for i := 0; i < b.N; i++ {
+ for b.Loop() {
x, err := json.Marshal(want)
if err != nil {
b.Error(err)
@@ -81,7 +88,6 @@ func TestTimeNotNull(t *testing.T) {
}
var b bytes.Buffer
enc := GetEncoder(&b)
- defer PutEncoder(enc)
// Example encoding of a populated time:
if err := enc.Encode(s{Time: time.Unix(0, 0).UTC()}); err != nil {
@@ -99,3 +105,255 @@ func TestTimeNotNull(t *testing.T) {
t.Error("wanted non-null encoding")
}
}
+
+func TestScheme(t *testing.T) {
+ t.Logf("Default: %v", SchemeDefault)
+ t.Run("Decoder", func(t *testing.T) {
+ t.Run("Implicit", func(t *testing.T) {
+ dec := GetDecoder(bytes.NewBufferString(`true`))
+ var got bool
+ if err := dec.Decode(&got); err != nil {
+ t.Error(err)
+ }
+ if want := true; got != want {
+ t.Errorf("got: %v, want: %v", got, want)
+ }
+ })
+ t.Run("Explicit", func(t *testing.T) {
+ dec := GetDecoder(bytes.NewBufferString(`true`), SchemeV1)
+ var got bool
+ if err := dec.Decode(&got); err != nil {
+ t.Error(err)
+ }
+ if want := true; got != want {
+ t.Errorf("got: %v, want: %v", got, want)
+ }
+ })
+ t.Run("TooManyArgs", func(t *testing.T) {
+ defer func() {
+ r := recover()
+ if r == nil {
+ t.Error("expected panic")
+ return
+ }
+ err, ok := r.(error)
+ if !ok {
+ t.Error("expected to recover an error")
+ return
+ }
+ t.Log(err)
+ if !errors.Is(err, errExtraArgs) {
+ t.Error("unexpected recover")
+ }
+ }()
+ GetDecoder(bytes.NewBufferString(`true`), SchemeV1, SchemeV1)
+ })
+ t.Run("Invalid", func(t *testing.T) {
+ defer func() {
+ r := recover()
+ if r == nil {
+ t.Error("expected panic")
+ return
+ }
+ err, ok := r.(error)
+ if !ok {
+ t.Error("expected to recover an error")
+ return
+ }
+ t.Log(err)
+ var invalid invalidScheme
+ if !errors.As(err, &invalid) {
+ t.Error("unexpected recover")
+ }
+ }()
+ GetDecoder(bytes.NewBufferString(`true`), Scheme(999))
+ })
+ })
+ t.Run("Encoder", func(t *testing.T) {
+ t.Run("Implicit", func(t *testing.T) {
+ t.Skip("TODO")
+ })
+ t.Run("Explicit", func(t *testing.T) {
+ t.Skip("TODO")
+ })
+ t.Run("TooManyArgs", func(t *testing.T) {
+ defer func() {
+ r := recover()
+ if r == nil {
+ t.Error("expected panic")
+ return
+ }
+ err, ok := r.(error)
+ if !ok {
+ t.Error("expected to recover an error")
+ return
+ }
+ t.Log(err)
+ if !errors.Is(err, errExtraArgs) {
+ t.Error("unexpected recover")
+ }
+ }()
+ GetEncoder(io.Discard, SchemeV1, SchemeV1)
+ })
+ t.Run("Invalid", func(t *testing.T) {
+ defer func() {
+ r := recover()
+ if r == nil {
+ t.Error("expected panic")
+ return
+ }
+ err, ok := r.(error)
+ if !ok {
+ t.Error("expected to recover an error")
+ return
+ }
+ t.Log(err)
+ var invalid invalidScheme
+ if !errors.As(err, &invalid) {
+ t.Error("unexpected recover")
+ }
+ }()
+ GetEncoder(io.Discard, Scheme(999))
+ })
+ })
+}
+
+func TestCustom(t *testing.T) {
+ t.Run("Roundtrip", func(t *testing.T) {
+ roundtripFromArchive[claircore.Manifest](t)
+ roundtripFromArchive[claircore.Layer](t)
+ roundtripFromArchive[claircore.Package](t)
+ roundtripFromArchive[claircore.Distribution](t)
+ roundtripFromArchive[claircore.Repository](t)
+ roundtripFromArchive[claircore.Environment](t)
+ roundtripFromArchive[claircore.Vulnerability](t)
+ roundtripFromArchive[claircore.Range](t)
+ roundtripFromArchive[claircore.IndexReport](t)
+ roundtripFromArchive[claircore.VulnerabilityReport](t)
+ })
+}
+
+const jsonschemaRoot = `https://clairproject.org/api/http/v1/`
+
+var jsonschemaCompiler = sync.OnceValue(func() *jsonschema.Compiler {
+ loaderFunc := func(u string) (io.ReadCloser, error) {
+ if strings.HasPrefix(u, jsonschemaRoot) {
+ return types.Schema.Open(path.Base(u))
+ }
+ return nil, errors.ErrUnsupported
+ }
+ return jsonschema.GetDefaultCompiler().
+ SetDefaultBaseURI(jsonschemaRoot).
+ RegisterLoader(`http`, loaderFunc).
+ RegisterLoader(`https`, loaderFunc)
+})
+
+func roundtripFromArchive[T any](t *testing.T) {
+ typ := reflect.TypeFor[T]().Name()
+ file := path.Join(`testdata`, typ+`.txtar`)
+ ar, err := txtar.ParseFile(file)
+ if err != nil {
+ t.Skip(err)
+ }
+ var testnames []string
+ for _, f := range ar.Files {
+ n := f.Name
+ n = strings.TrimSuffix(n, ".in.json")
+ n = strings.TrimSuffix(n, ".want.json")
+ testnames = append(testnames, n)
+ }
+ slices.Sort(testnames)
+ testnames = slices.Compact(testnames)
+ var tcs []roundtripTestcase[T]
+
+ for _, n := range testnames {
+ var tc roundtripTestcase[T]
+ tc.Name = n
+ for _, f := range ar.Files {
+ switch strings.TrimPrefix(f.Name, n) {
+ case ".in.json":
+ tc.In = f.Data
+ case ".want.json":
+ tc.Want = f.Data
+ default:
+ }
+ }
+ if tc.In != nil && tc.Want != nil {
+ tcs = append(tcs, tc)
+ }
+ }
+
+ t.Run(typ, func(t *testing.T) {
+ if len(tcs) == 0 {
+ t.Skip("no fixtures found")
+ }
+ t.Log("found tests:", strings.Join(testnames, ", "))
+ for _, tc := range tcs {
+ t.Run(tc.Name, tc.Run)
+ }
+ })
+}
+
+type roundtripTestcase[T any] struct {
+ Name string
+ In []byte
+ Want []byte
+}
+
+func (tc *roundtripTestcase[T]) Run(t *testing.T) {
+ s := tc.GetSchema(t)
+ var b bytes.Buffer
+ func() {
+ var v T
+ dec := GetDecoder(bytes.NewReader(tc.In))
+ if err := dec.Decode(&v); err != nil {
+ t.Error(err)
+ }
+
+ enc := GetEncoder(&b)
+ if err := enc.Encode(&v); err != nil {
+ t.Error(err)
+ }
+ }()
+ if t.Failed() {
+ return
+ }
+
+ var got, want map[string]any
+ err := errors.Join(json.Unmarshal(b.Bytes(), &got), json.Unmarshal(tc.Want, &want))
+ if err != nil {
+ t.Error(err)
+ }
+ which := [2]string{"got", "want"}
+ for i, res := range []*jsonschema.EvaluationResult{
+ s.ValidateMap(got), s.ValidateMap(want),
+ } {
+ if res.Valid {
+ continue
+ }
+ for k, v := range res.Errors {
+ t.Errorf("%s: %s: %v", which[i], k, v)
+ }
+ }
+ if !cmp.Equal(got, want) {
+ t.Error(cmp.Diff(got, want))
+ }
+}
+
+func (tc *roundtripTestcase[T]) GetSchema(t *testing.T) *jsonschema.Schema {
+ typ := reflect.TypeFor[T]().Name()
+ ref, ok := schemaName[typ]
+ if !ok {
+ ref = jsonschemaRoot + strings.ToLower(typ) + ".schema.json"
+ }
+ s, err := jsonschemaCompiler().GetSchema(ref)
+ if err != nil {
+ t.Fatalf("unable to get schema for %q (%q): %v", typ, ref, err)
+ }
+ return s
+}
+
+var schemaName = map[string]string{
+ "IndexReport": jsonschemaRoot + "index_report.schema.json",
+ "VulnerabilityReport": jsonschemaRoot + "vulnerability_report.schema.json",
+}
diff --git a/internal/codec/jsonv2.go b/internal/codec/jsonv2.go
new file mode 100644
index 0000000000..4ee5d9066f
--- /dev/null
+++ b/internal/codec/jsonv2.go
@@ -0,0 +1,1285 @@
+package codec
+
+import (
+ "encoding/base64"
+ jsonv1 "encoding/json"
+ "fmt"
+ "io"
+ "reflect"
+ "time"
+ "unicode/utf8"
+
+ "github.com/quay/claircore"
+ "github.com/quay/claircore/libvuln/driver"
+
+ types "github.com/quay/clair/v4/httptransport/types/v1"
+ "github.com/quay/clair/v4/internal/json"
+ "github.com/quay/clair/v4/internal/json/jsontext"
+)
+
+// The interface built on json/v2 does not use its own pool and instead relies
+// on the json package's pooling.
+
+var (
+ v1Options = json.JoinOptions(
+ json.DefaultOptionsV2(),
+ jsontext.Multiline(false),
+ jsontext.SpaceAfterColon(false),
+ jsontext.SpaceAfterComma(false),
+ json.OmitZeroStructFields(true),
+ json.FormatNilMapAsNull(true),
+ json.FormatNilSliceAsNull(true),
+ json.WithMarshalers(v1Marshalers),
+ json.WithUnmarshalers(v1Unmarshalers),
+ )
+ v1Marshalers = json.JoinMarshalers(
+ // API-only types:
+ json.MarshalToFunc(v1ErrorMarshal),
+ // Indexer types:
+ json.MarshalToFunc(v1ManifestMarshal),
+ json.MarshalToFunc(v1LayerMarshal),
+ json.MarshalToFunc(v1IndexReportMarshal),
+ json.MarshalToFunc(v1PackageMarshal),
+ json.MarshalToFunc(v1RepositoryMarshal),
+ json.MarshalToFunc(v1DistributionMarshal),
+ json.MarshalToFunc(v1EnvironmentMarshal),
+ // Matcher types:
+ json.MarshalToFunc(v1VulnerabilityReportMarshal),
+ json.MarshalToFunc(v1VulnerabilityMarshal),
+ json.MarshalToFunc(v1RangeMarshal),
+ json.MarshalToFunc(v1UpdateOperationMarshal),
+ json.MarshalToFunc(v1UpdateDiffMarshal),
+ )
+ v1Unmarshalers = json.JoinUnmarshalers(
+ // Indexer types:
+ json.UnmarshalFromFunc(v1ManifestUnmarshal),
+ json.UnmarshalFromFunc(v1LayerUnmarshal),
+ json.UnmarshalFromFunc(v1IndexReportUnmarshal),
+ json.UnmarshalFromFunc(v1PackageUnmarshal),
+ json.UnmarshalFromFunc(v1DistributionUnmarshal),
+ json.UnmarshalFromFunc(v1RepositoryUnmarshal),
+ json.UnmarshalFromFunc(v1EnvironmentUnmarshal),
+ json.UnmarshalFromFunc(v1VulnerabilityReportUnmarshal),
+ json.UnmarshalFromFunc(v1VulnerabilityUnmarshal),
+ json.UnmarshalFromFunc(v1RangeUnmarshal),
+ )
+)
+
+func v1Encoder(w io.Writer) Encoder {
+ return &fwdWriter{w: w}
+}
+
+type fwdWriter struct {
+ w io.Writer
+}
+
+func (w *fwdWriter) Encode(in any) error {
+ return json.MarshalWrite(w.w, in, v1Options)
+}
+
+func v1Decoder(r io.Reader) Decoder {
+ return &fwdReader{r}
+}
+
+type fwdReader struct {
+ r io.Reader
+}
+
+func (r *fwdReader) Decode(out any) error {
+ return json.UnmarshalRead(r.r, out, v1Options)
+}
+
+// All these functions look like long ways to do what the json package already
+// does for us. That's true currently, but it allows us to change the claircore
+// types and not have the serialization change!
+
+func v1ManifestMarshal(enc *jsontext.Encoder, v *claircore.Manifest) error {
+ if err := enc.WriteToken(jsontext.BeginObject); err != nil {
+ return err
+ }
+ defer enc.WriteToken(jsontext.EndObject)
+
+ if err := enc.WriteToken(hashKey); err != nil {
+ return err
+ }
+ if err := enc.WriteToken(jsontext.String(v.Hash.String())); err != nil {
+ return err
+ }
+
+ if v.Layers != nil {
+ if err := enc.WriteToken(layersKey); err != nil {
+ return err
+ }
+ if err := enc.WriteToken(jsontext.BeginArray); err != nil {
+ return err
+ }
+ defer enc.WriteToken(jsontext.EndArray)
+
+ for _, l := range v.Layers {
+ if err := v1LayerMarshal(enc, l); err != nil {
+ return err
+ }
+ }
+ }
+
+ return nil
+}
+
+func v1LayerMarshal(enc *jsontext.Encoder, v *claircore.Layer) error {
+ if err := enc.WriteToken(jsontext.BeginObject); err != nil {
+ return err
+ }
+ defer enc.WriteToken(jsontext.EndObject)
+
+ if err := enc.WriteToken(hashKey); err != nil {
+ return err
+ }
+ if err := enc.WriteToken(jsontext.String(v.Hash.String())); err != nil {
+ return err
+ }
+
+ if err := enc.WriteToken(uriKey); err != nil {
+ return err
+ }
+ if err := enc.WriteToken(jsontext.String(v.URI)); err != nil {
+ return err
+ }
+
+ if err := enc.WriteToken(headersKey); err != nil {
+ return err
+ }
+ if err := json.MarshalEncode(enc, v.Headers); err != nil {
+ return err
+ }
+
+ return nil
+}
+
+func v1IndexReportMarshal(enc *jsontext.Encoder, v *claircore.IndexReport) error {
+ if err := enc.WriteToken(jsontext.BeginObject); err != nil {
+ return err
+ }
+ defer enc.WriteToken(jsontext.EndObject)
+
+ if err := enc.WriteToken(reporthashKey); err != nil {
+ return err
+ }
+ if err := enc.WriteToken(jsontext.String(v.Hash.String())); err != nil {
+ return err
+ }
+
+ if err := enc.WriteToken(stateKey); err != nil {
+ return err
+ }
+ if err := enc.WriteToken(jsontext.String(v.State)); err != nil {
+ return err
+ }
+
+ if err := enc.WriteToken(successKey); err != nil {
+ return err
+ }
+ if err := enc.WriteToken(jsontext.Bool(v.Success)); err != nil {
+ return err
+ }
+
+ if e := v.Err; e != "" {
+ if err := enc.WriteToken(errKey); err != nil {
+ return err
+ }
+ if err := enc.WriteToken(jsontext.String(e)); err != nil {
+ return err
+ }
+ }
+
+ if err := v1DoMap(enc, packagesKey, v.Packages, v1PackageMarshal); err != nil {
+ return err
+ }
+ if err := v1DoMap(enc, distributionsKey, v.Distributions, v1DistributionMarshal); err != nil {
+ return err
+ }
+ if err := v1DoMap(enc, repositoryKey, v.Repositories, v1RepositoryMarshal); err != nil {
+ return err
+ }
+ if err := v1DoMapArray(enc, environmentsKey, v.Environments, v1EnvironmentMarshal); err != nil {
+ return err
+ }
+
+ return nil
+}
+
+func v1DoMap[T any](enc *jsontext.Encoder, t jsontext.Token, m map[string]*T, f func(*jsontext.Encoder, *T) error) error {
+ if len(m) == 0 {
+ return nil
+ }
+ if err := enc.WriteToken(t); err != nil {
+ return err
+ }
+
+ if err := enc.WriteToken(jsontext.BeginObject); err != nil {
+ return err
+ }
+ defer enc.WriteToken(jsontext.EndObject)
+ for k, v := range m {
+ if err := enc.WriteToken(jsontext.String(k)); err != nil {
+ return err
+ }
+ if err := f(enc, v); err != nil {
+ return err
+ }
+ }
+ return nil
+}
+
+func v1DoMapArray[T any](enc *jsontext.Encoder, t jsontext.Token, m map[string][]T, f func(*jsontext.Encoder, T) error) error {
+ if len(m) == 0 {
+ return nil
+ }
+ if err := enc.WriteToken(t); err != nil {
+ return err
+ }
+
+ if err := enc.WriteToken(jsontext.BeginObject); err != nil {
+ return err
+ }
+ defer enc.WriteToken(jsontext.EndObject)
+
+ writeArray := func(v []T) error {
+ if err := enc.WriteToken(jsontext.BeginArray); err != nil {
+ return err
+ }
+ defer enc.WriteToken(jsontext.EndArray)
+ for _, v := range v {
+ if err := f(enc, v); err != nil {
+ return err
+ }
+ }
+ return nil
+ }
+ for k, v := range m {
+ if err := enc.WriteToken(jsontext.String(k)); err != nil {
+ return err
+ }
+ if err := writeArray(v); err != nil {
+ return err
+ }
+ }
+ return nil
+}
+
+func v1PackageMarshal(enc *jsontext.Encoder, v *claircore.Package) error {
+ if err := enc.WriteToken(jsontext.BeginObject); err != nil {
+ return err
+ }
+ defer enc.WriteToken(jsontext.EndObject)
+
+ fs := []struct {
+ Key jsontext.Token
+ Value string
+ }{
+ {idKey, v.ID},
+ {nameKey, v.Name},
+ {versionKey, v.Version},
+ {kindKey, v.Kind},
+ {moduleKey, v.Module},
+ {archKey, v.Arch},
+ }
+ for _, f := range fs {
+ if f.Value == "" {
+ continue
+ }
+ if err := enc.WriteToken(f.Key); err != nil {
+ return err
+ }
+ if err := enc.WriteToken(jsontext.String(f.Value)); err != nil {
+ return err
+ }
+ }
+
+ if v.NormalizedVersion.Kind != "" {
+ if err := enc.WriteToken(normVersionKey); err != nil {
+ return err
+ }
+ v, err := v.NormalizedVersion.MarshalText()
+ if err != nil {
+ return err
+ }
+ b, err := jsontext.AppendQuote(enc.AvailableBuffer(), v)
+ if err != nil {
+ return err
+ }
+ if err := enc.WriteValue(b); err != nil {
+ return err
+ }
+ }
+
+ if v.CPE.Valid() == nil {
+ if err := enc.WriteToken(cpeKey); err != nil {
+ return err
+ }
+ if err := enc.WriteToken(jsontext.String(v.CPE.String())); err != nil {
+ return err
+ }
+ }
+
+ if src := v.Source; src != nil {
+ if err := enc.WriteToken(sourceKey); err != nil {
+ return err
+ }
+ if err := json.MarshalEncode(enc, src); err != nil {
+ return err
+ }
+ }
+
+ return nil
+}
+
+func v1DistributionMarshal(enc *jsontext.Encoder, v *claircore.Distribution) error {
+ if err := enc.WriteToken(jsontext.BeginObject); err != nil {
+ return err
+ }
+ defer enc.WriteToken(jsontext.EndObject)
+ fs := []struct {
+ Key jsontext.Token
+ Value string
+ }{
+ {idKey, v.ID},
+ {didKey, v.DID},
+ {nameKey, v.Name},
+ {versionKey, v.Version},
+ {versionCodeNameKey, v.VersionCodeName},
+ {versionIDKey, v.VersionID},
+ {archKey, v.Arch},
+ {prettyNameKey, v.PrettyName},
+ }
+ for _, f := range fs {
+ if f.Value == "" {
+ continue
+ }
+ if err := enc.WriteToken(f.Key); err != nil {
+ return err
+ }
+ if err := enc.WriteToken(jsontext.String(f.Value)); err != nil {
+ return err
+ }
+ }
+
+ if v.CPE.Valid() == nil {
+ if err := enc.WriteToken(cpeKey); err != nil {
+ return err
+ }
+ if err := enc.WriteToken(jsontext.String(v.CPE.String())); err != nil {
+ return err
+ }
+ }
+
+ return nil
+}
+
+func v1RepositoryMarshal(enc *jsontext.Encoder, v *claircore.Repository) error {
+ if err := enc.WriteToken(jsontext.BeginObject); err != nil {
+ return err
+ }
+ defer enc.WriteToken(jsontext.EndObject)
+ fs := []struct {
+ Key jsontext.Token
+ Value string
+ }{
+ {idKey, v.ID},
+ {nameKey, v.Name},
+ {keyKey, v.Key},
+ {uriKey, v.URI},
+ }
+ for _, f := range fs {
+ if f.Value == "" {
+ continue
+ }
+ if err := enc.WriteToken(f.Key); err != nil {
+ return err
+ }
+ if err := enc.WriteToken(jsontext.String(f.Value)); err != nil {
+ return err
+ }
+ }
+
+ if v.CPE.Valid() == nil {
+ if err := enc.WriteToken(cpeKey); err != nil {
+ return err
+ }
+ if err := enc.WriteToken(jsontext.String(v.CPE.String())); err != nil {
+ return err
+ }
+ }
+
+ return nil
+}
+
+func v1EnvironmentMarshal(enc *jsontext.Encoder, v *claircore.Environment) error {
+ if err := enc.WriteToken(jsontext.BeginObject); err != nil {
+ return err
+ }
+ defer enc.WriteToken(jsontext.EndObject)
+ fs := []struct {
+ Key jsontext.Token
+ Value string
+ }{
+ {packageDBKey, v.PackageDB},
+ {distributionIDKey, v.DistributionID},
+ }
+ for _, f := range fs {
+ if f.Value == "" {
+ continue
+ }
+ if err := enc.WriteToken(f.Key); err != nil {
+ return err
+ }
+ if err := enc.WriteToken(jsontext.String(f.Value)); err != nil {
+ return err
+ }
+ }
+
+ if v.IntroducedIn.Algorithm() != "" {
+ if err := enc.WriteToken(introducedKey); err != nil {
+ return err
+ }
+ if err := enc.WriteToken(jsontext.String(v.IntroducedIn.String())); err != nil {
+ return err
+ }
+ }
+
+ if len(v.RepositoryIDs) != 0 {
+ if err := enc.WriteToken(repositoryIDsKey); err != nil {
+ return err
+ }
+ if err := enc.WriteToken(jsontext.BeginArray); err != nil {
+ return err
+ }
+ defer enc.WriteToken(jsontext.EndArray)
+ for _, id := range v.RepositoryIDs {
+ if err := enc.WriteToken(jsontext.String(id)); err != nil {
+ return err
+ }
+ }
+ }
+
+ return nil
+}
+
+func v1VulnerabilityReportMarshal(enc *jsontext.Encoder, v *claircore.VulnerabilityReport) error {
+ if err := enc.WriteToken(jsontext.BeginObject); err != nil {
+ return err
+ }
+ defer enc.WriteToken(jsontext.EndObject)
+
+ if err := enc.WriteToken(reporthashKey); err != nil {
+ return err
+ }
+ if err := enc.WriteToken(jsontext.String(v.Hash.String())); err != nil {
+ return err
+ }
+
+ if err := v1DoMap(enc, packagesKey, v.Packages, v1PackageMarshal); err != nil {
+ return err
+ }
+ if err := v1DoMap(enc, distributionsKey, v.Distributions, v1DistributionMarshal); err != nil {
+ return err
+ }
+ if err := v1DoMap(enc, repositoryKey, v.Repositories, v1RepositoryMarshal); err != nil {
+ return err
+ }
+ if err := v1DoMapArray(enc, environmentsKey, v.Environments, v1EnvironmentMarshal); err != nil {
+ return err
+ }
+ if err := v1DoMap(enc, vulnerabilitiesKey, v.Vulnerabilities, v1VulnerabilityMarshal); err != nil {
+ return err
+ }
+ if err := v1DoMapArray(enc, packageVulnerabilitiesKey, v.PackageVulnerabilities, func(enc *jsontext.Encoder, v string) error {
+ return enc.WriteToken(jsontext.String(v))
+ }); err != nil {
+ return err
+ }
+ if err := v1DoMapArray(enc, enrichmentsKey, v.Enrichments, func(enc *jsontext.Encoder, v jsonv1.RawMessage) error {
+ return enc.WriteValue(jsontext.Value(v))
+ }); err != nil {
+ return err
+ }
+
+ return nil
+}
+
+func v1VulnerabilityMarshal(enc *jsontext.Encoder, v *claircore.Vulnerability) error {
+ if err := enc.WriteToken(jsontext.BeginObject); err != nil {
+ return err
+ }
+ defer enc.WriteToken(jsontext.EndObject)
+
+ fs := []struct {
+ Key jsontext.Token
+ Value string
+ }{
+ {idKey, v.ID},
+ {updaterKey, v.Updater},
+ {nameKey, v.Name},
+ {descriptionKey, v.Description},
+ {linksKey, v.Links},
+ {severityKey, v.Severity},
+ {fixedInKey, v.FixedInVersion},
+ }
+ for _, f := range fs {
+ if f.Value == "" {
+ continue
+ }
+ if err := enc.WriteToken(f.Key); err != nil {
+ return err
+ }
+ if err := enc.WriteToken(jsontext.String(f.Value)); err != nil {
+ return err
+ }
+ }
+
+ if err := enc.WriteToken(normSeverityKey); err != nil {
+ return err
+ }
+ if err := enc.WriteToken(jsontext.String(v.NormalizedSeverity.String())); err != nil {
+ return err
+ }
+
+ if !v.Issued.IsZero() {
+ if err := enc.WriteToken(issuedKey); err != nil {
+ return err
+ }
+ b := enc.AvailableBuffer()
+ b = append(b, '"')
+ b = v.Issued.AppendFormat(b, time.RFC3339)
+ b = append(b, '"')
+ if err := enc.WriteValue(b); err != nil {
+ return err
+ }
+ }
+
+ if v.Package != nil {
+ if err := enc.WriteToken(packageKey); err != nil {
+ return err
+ }
+ if err := v1PackageMarshal(enc, v.Package); err != nil {
+ return err
+ }
+ }
+ if v.Dist != nil {
+ if err := enc.WriteToken(distributionKey); err != nil {
+ return err
+ }
+ if err := v1DistributionMarshal(enc, v.Dist); err != nil {
+ return err
+ }
+ }
+ if v.Repo != nil {
+ if err := enc.WriteToken(repositoryKey); err != nil {
+ return err
+ }
+ if err := v1RepositoryMarshal(enc, v.Repo); err != nil {
+ return err
+ }
+ }
+
+ if v.Range != nil {
+ if err := enc.WriteToken(rangeKey); err != nil {
+ return err
+ }
+ if err := v1RangeMarshal(enc, v.Range); err != nil {
+ return err
+ }
+ }
+
+ if v.ArchOperation != claircore.ArchOp(0) {
+ if err := enc.WriteToken(archOpKey); err != nil {
+ return err
+ }
+ if err := enc.WriteToken(jsontext.String(v.ArchOperation.String())); err != nil {
+ return err
+ }
+ }
+
+ return nil
+}
+
+func v1RangeMarshal(enc *jsontext.Encoder, v *claircore.Range) error {
+ if err := enc.WriteToken(jsontext.BeginObject); err != nil {
+ return err
+ }
+ defer enc.WriteToken(jsontext.EndObject)
+
+ f := func(k string, v *claircore.Version) error {
+ if v.Kind == "" {
+ return nil
+ }
+ r, err := v.MarshalText()
+ if err != nil {
+ return err
+ }
+ if err := enc.WriteToken(jsontext.String(k)); err != nil {
+ return err
+ }
+ b := enc.AvailableBuffer()
+ b = append(b, '"')
+ b = append(b, r...)
+ b = append(b, '"')
+ return enc.WriteValue(b)
+ }
+
+ if err := f(`[`, &v.Lower); err != nil {
+ return err
+ }
+ if err := f(`)`, &v.Upper); err != nil {
+ return err
+ }
+ return nil
+}
+
+func v1ErrorMarshal(enc *jsontext.Encoder, v *types.Error) error {
+ if err := enc.WriteToken(jsontext.BeginObject); err != nil {
+ return err
+ }
+ defer enc.WriteToken(jsontext.EndObject)
+ code := jsontext.String("code")
+	message := jsontext.String("message")
+
+ if err := enc.WriteToken(code); err != nil {
+ return err
+ }
+ var err error
+ // Add the status codes numerically here to avoid pulling in the whole http
+ // package.
+ switch v.Code {
+ case 400:
+ err = enc.WriteToken(jsontext.String("bad-request"))
+ case 404:
+ err = enc.WriteToken(jsontext.String("not-found"))
+ case 415:
+ err = enc.WriteToken(jsontext.String("method-not-allowed"))
+ case 429:
+ err = enc.WriteToken(jsontext.String("too-many-requests"))
+ default:
+ err = enc.WriteToken(jsontext.String("internal-error"))
+ }
+ if err != nil {
+ return err
+ }
+
+ if err := enc.WriteToken(message); err != nil {
+ return err
+ }
+ b, err := jsontext.AppendQuote(enc.AvailableBuffer(), v.Message)
+ if err != nil {
+ return err
+ }
+ if err := enc.WriteValue(b); err != nil {
+ return err
+ }
+
+ return nil
+}
+
+func v1UpdateOperationMarshal(enc *jsontext.Encoder, v *driver.UpdateOperation) error {
+ if err := enc.WriteToken(jsontext.BeginObject); err != nil {
+ return err
+ }
+ defer enc.WriteToken(jsontext.EndObject)
+
+ if err := enc.WriteToken(jsontext.String("ref")); err != nil {
+ return err
+ }
+ b, err := v.Ref.MarshalText()
+ if err != nil {
+ return err
+ }
+ b, err = jsontext.AppendQuote(enc.AvailableBuffer(), b)
+ if err != nil {
+ return err
+ }
+ if err := enc.WriteValue(b); err != nil {
+ return err
+ }
+
+ if err := enc.WriteToken(jsontext.String("updater")); err != nil {
+ return err
+ }
+ if err := enc.WriteToken(jsontext.String(v.Updater)); err != nil {
+ return err
+ }
+
+ if err := enc.WriteToken(jsontext.String("fingerprint")); err != nil {
+ return err
+ }
+
+ if fp := []byte(v.Fingerprint); utf8.Valid(fp) {
+ err = enc.WriteToken(jsontext.String(string(v.Fingerprint)))
+ } else {
+ b := enc.AvailableBuffer()
+ b = append(b, '"')
+ b = base64.StdEncoding.AppendEncode(b, fp)
+ b = append(b, '"')
+ err = enc.WriteValue(b)
+ }
+ if err != nil {
+ return err
+ }
+
+ if err := enc.WriteToken(jsontext.String("date")); err != nil {
+ return err
+ }
+ if err := enc.WriteToken(jsontext.String(v.Date.Format(time.RFC3339))); err != nil {
+ return err
+ }
+
+ if err := enc.WriteToken(jsontext.String("kind")); err != nil {
+ return err
+ }
+ if err := enc.WriteToken(jsontext.String(string(v.Kind))); err != nil {
+ return err
+ }
+
+ return nil
+}
+
+func v1UpdateDiffMarshal(enc *jsontext.Encoder, v *driver.UpdateDiff) error {
+ if err := enc.WriteToken(jsontext.BeginObject); err != nil {
+ return err
+ }
+ defer enc.WriteToken(jsontext.EndObject)
+
+ writeOp := func(k string, op *driver.UpdateOperation) error {
+ if err := enc.WriteToken(jsontext.String(k)); err != nil {
+ return err
+ }
+ if err := v1UpdateOperationMarshal(enc, op); err != nil {
+ return err
+ }
+ return nil
+ }
+ writeSlice := func(k string, vs []claircore.Vulnerability) error {
+ if len(vs) == 0 {
+ return nil
+ }
+ if err := enc.WriteToken(jsontext.String(k)); err != nil {
+ return err
+ }
+
+ if err := enc.WriteToken(jsontext.BeginArray); err != nil {
+ return err
+ }
+ defer enc.WriteToken(jsontext.EndArray)
+
+ for i := range vs {
+ if err := v1VulnerabilityMarshal(enc, &vs[i]); err != nil {
+ return err
+ }
+ }
+
+ return nil
+ }
+
+ if !reflect.ValueOf(v.Prev).IsZero() {
+ if err := writeOp("prev", &v.Prev); err != nil {
+ return err
+ }
+ }
+ if err := writeOp("cur", &v.Cur); err != nil {
+ return err
+ }
+ if err := writeSlice("added", v.Added); err != nil {
+ return err
+ }
+ if err := writeSlice("removed", v.Removed); err != nil {
+ return err
+ }
+
+ return nil
+}
+
+// These Unmarshal functions are implemented as state machines using the
+// return-a-function pattern. See jsonv2_unmarshal.go for the generic
+// bits.
+
+func v1ManifestUnmarshal(dec *jsontext.Decoder, v *claircore.Manifest) error {
+ return runUnmarshalMachine(dec, v, unmarshalObjectBegin(uV1ManifestKeys))
+}
+
+func uV1ManifestKeys(m *unmarshalMachine[claircore.Manifest]) uStateFn[claircore.Manifest] {
+ tok, err := m.dec.ReadToken()
+ if err != nil {
+ return m.error(err)
+ }
+
+ switch tok.Kind() {
+ case '"':
+ switch tok.String() {
+ case "hash":
+ return m.doText(&m.out.Hash, uV1ManifestKeys)
+ case "layers":
+ return unmarshalArray(m, &m.out.Layers, uV1ManifestKeys)
+ default: // Unexpected key, skip
+ if err := m.dec.SkipValue(); err != nil {
+ return m.error(err)
+ }
+ return uV1ManifestKeys
+ }
+ case '}':
+ return nil
+ default:
+ err := fmt.Errorf("unexpected token (at %s): %q", m.dec.StackPointer(), tok)
+ return m.error(err)
+ }
+}
+
+func v1LayerUnmarshal(dec *jsontext.Decoder, v *claircore.Layer) error {
+ return runUnmarshalMachine(dec, v, unmarshalObjectBegin(uV1LayerKeys))
+}
+
+func uV1LayerKeys(m *unmarshalMachine[claircore.Layer]) uStateFn[claircore.Layer] {
+ tok, err := m.dec.ReadToken()
+ if err != nil {
+ return m.error(err)
+ }
+
+ switch tok.Kind() {
+ case '"':
+ switch tok.String() {
+ case "hash":
+ return m.doText(&m.out.Hash, uV1LayerKeys)
+ case "uri":
+ return m.doString(&m.out.URI, uV1LayerKeys)
+ case "headers":
+ out := make(map[string][]string)
+ if err := json.UnmarshalDecode(m.dec, &out); err != nil {
+ return m.error(err)
+ }
+ m.out.Headers = out
+ return uV1LayerKeys
+ default: // Unexpected key, skip
+ if err := m.dec.SkipValue(); err != nil {
+ return m.error(err)
+ }
+ return uV1LayerKeys
+ }
+ case '}':
+ return nil
+ default:
+ return m.invalidObjectKey()
+ }
+}
+
+func v1IndexReportUnmarshal(dec *jsontext.Decoder, v *claircore.IndexReport) error {
+ return runUnmarshalMachine(dec, v, unmarshalObjectBegin(uV1IndexReportKeys))
+}
+
+func uV1IndexReportKeys(m *unmarshalMachine[claircore.IndexReport]) uStateFn[claircore.IndexReport] {
+ tok, err := m.dec.ReadToken()
+ if err != nil {
+ return m.error(err)
+ }
+
+ switch tok.Kind() {
+ case '"':
+ switch tok.String() {
+ case "manifest_hash":
+ return m.doText(&m.out.Hash, uV1IndexReportKeys)
+ case "state":
+ return m.doString(&m.out.State, uV1IndexReportKeys)
+ case "err":
+ return m.doString(&m.out.Err, uV1IndexReportKeys)
+ case "success":
+ return m.doBool(&m.out.Success, uV1IndexReportKeys)
+ case "packages":
+ m.out.Packages = make(map[string]*claircore.Package)
+ return unmarshalMap(m, &m.out.Packages, uV1IndexReportKeys)
+ case "distributions":
+ m.out.Distributions = make(map[string]*claircore.Distribution)
+ return unmarshalMap(m, &m.out.Distributions, uV1IndexReportKeys)
+ case "repository":
+ m.out.Repositories = make(map[string]*claircore.Repository)
+ return unmarshalMap(m, &m.out.Repositories, uV1IndexReportKeys)
+ case "environments":
+ m.out.Environments = make(map[string][]*claircore.Environment)
+ return unmarshalMap(m, &m.out.Environments, uV1IndexReportKeys)
+ default: // Unexpected key, skip
+ if err := m.dec.SkipValue(); err != nil {
+ return m.error(err)
+ }
+ return uV1IndexReportKeys
+ }
+ case '}':
+ return nil
+ default:
+ return m.invalidObjectKey()
+ }
+}
+
+func v1PackageUnmarshal(dec *jsontext.Decoder, v *claircore.Package) error {
+ return runUnmarshalMachine(dec, v, unmarshalObjectBegin(uV1PackageKeys))
+}
+
+func uV1PackageKeys(m *unmarshalMachine[claircore.Package]) uStateFn[claircore.Package] {
+ tok, err := m.dec.ReadToken()
+ if err != nil {
+ return m.error(err)
+ }
+
+ switch tok.Kind() {
+ case '"':
+ switch tok.String() {
+ case "id":
+ return m.doString(&m.out.ID, uV1PackageKeys)
+ case "name":
+ return m.doString(&m.out.Name, uV1PackageKeys)
+ case "version":
+ return m.doString(&m.out.Version, uV1PackageKeys)
+ case "kind":
+ return m.doString(&m.out.Kind, uV1PackageKeys)
+ case "module":
+ return m.doString(&m.out.Module, uV1PackageKeys)
+ case "arch":
+ return m.doString(&m.out.Arch, uV1PackageKeys)
+ case "normalized_version":
+ return m.doText(&m.out.NormalizedVersion, uV1PackageKeys)
+ case "cpe":
+ return m.doText(&m.out.CPE, uV1PackageKeys)
+ case "source":
+ m.out.Source = new(claircore.Package)
+ if err := json.UnmarshalDecode(m.dec, m.out.Source); err != nil {
+ return m.error(err)
+ }
+ return uV1PackageKeys
+ default: // Unexpected key, skip
+ if err := m.dec.SkipValue(); err != nil {
+ return m.error(err)
+ }
+ return uV1PackageKeys
+ }
+ case '}':
+ return nil
+ default:
+ return m.invalidObjectKey()
+ }
+}
+
+func v1DistributionUnmarshal(dec *jsontext.Decoder, v *claircore.Distribution) error {
+ return runUnmarshalMachine(dec, v, unmarshalObjectBegin(uV1DistributionKeys))
+}
+
+func uV1DistributionKeys(m *unmarshalMachine[claircore.Distribution]) uStateFn[claircore.Distribution] {
+ tok, err := m.dec.ReadToken()
+ if err != nil {
+ return m.error(err)
+ }
+
+ switch tok.Kind() {
+ case '"':
+ switch tok.String() {
+ case "id":
+ return m.doString(&m.out.ID, uV1DistributionKeys)
+ case "did":
+ return m.doString(&m.out.DID, uV1DistributionKeys)
+ case "name":
+ return m.doString(&m.out.Name, uV1DistributionKeys)
+ case "version":
+ return m.doString(&m.out.Version, uV1DistributionKeys)
+ case "version_code_name":
+ return m.doString(&m.out.VersionCodeName, uV1DistributionKeys)
+ case "version_id":
+ return m.doString(&m.out.VersionID, uV1DistributionKeys)
+ case "arch":
+ return m.doString(&m.out.Arch, uV1DistributionKeys)
+ case "pretty_name":
+ return m.doString(&m.out.PrettyName, uV1DistributionKeys)
+ case "cpe":
+ return m.doText(&m.out.CPE, uV1DistributionKeys)
+ default: // Unexpected key, skip
+ if err := m.dec.SkipValue(); err != nil {
+ return m.error(err)
+ }
+ return uV1DistributionKeys
+ }
+ case '}':
+ return nil
+ default:
+ return m.invalidObjectKey()
+ }
+}
+
+func v1RepositoryUnmarshal(dec *jsontext.Decoder, v *claircore.Repository) error {
+ return runUnmarshalMachine(dec, v, unmarshalObjectBegin(uV1RepositoryKeys))
+}
+
+func uV1RepositoryKeys(m *unmarshalMachine[claircore.Repository]) uStateFn[claircore.Repository] {
+ tok, err := m.dec.ReadToken()
+ if err != nil {
+ return m.error(err)
+ }
+
+ switch tok.Kind() {
+ case '"':
+ switch tok.String() {
+ case "id":
+ return m.doString(&m.out.ID, uV1RepositoryKeys)
+ case "name":
+ return m.doString(&m.out.Name, uV1RepositoryKeys)
+ case "key":
+ return m.doString(&m.out.Key, uV1RepositoryKeys)
+ case "uri":
+ return m.doString(&m.out.URI, uV1RepositoryKeys)
+ case "cpe":
+ return m.doText(&m.out.CPE, uV1RepositoryKeys)
+ default: // Unexpected key, skip
+ if err := m.dec.SkipValue(); err != nil {
+ return m.error(err)
+ }
+ return uV1RepositoryKeys
+ }
+ case '}':
+ return nil
+ default:
+ return m.invalidObjectKey()
+ }
+}
+
+func v1EnvironmentUnmarshal(dec *jsontext.Decoder, v *claircore.Environment) error {
+ return runUnmarshalMachine(dec, v, unmarshalObjectBegin(uV1EnvironmentKeys))
+}
+
+func uV1EnvironmentKeys(m *unmarshalMachine[claircore.Environment]) uStateFn[claircore.Environment] {
+ tok, err := m.dec.ReadToken()
+ if err != nil {
+ return m.error(err)
+ }
+
+ switch tok.Kind() {
+ case '"':
+ switch tok.String() {
+ case "package_db":
+ return m.doString(&m.out.PackageDB, uV1EnvironmentKeys)
+ case "distribution_id":
+ return m.doString(&m.out.DistributionID, uV1EnvironmentKeys)
+ case "introduced_in":
+ return m.doText(&m.out.IntroducedIn, uV1EnvironmentKeys)
+ case "repository_ids":
+ return unmarshalArray(m, &m.out.RepositoryIDs, uV1EnvironmentKeys)
+ default: // Unexpected key, skip
+ if err := m.dec.SkipValue(); err != nil {
+ return m.error(err)
+ }
+ return uV1EnvironmentKeys
+ }
+ case '}':
+ return nil
+ default:
+ return m.invalidObjectKey()
+ }
+}
+
+func v1VulnerabilityUnmarshal(dec *jsontext.Decoder, v *claircore.Vulnerability) error {
+ return runUnmarshalMachine(dec, v, unmarshalObjectBegin(uV1VulnerabilityKeys))
+}
+
+func uV1VulnerabilityKeys(m *unmarshalMachine[claircore.Vulnerability]) uStateFn[claircore.Vulnerability] {
+ tok, err := m.dec.ReadToken()
+ if err != nil {
+ return m.error(err)
+ }
+
+ switch tok.Kind() {
+ case '"':
+ switch tok.String() {
+ case "id":
+ return m.doString(&m.out.ID, uV1VulnerabilityKeys)
+ case "updater":
+ return m.doString(&m.out.Updater, uV1VulnerabilityKeys)
+ case "name":
+ return m.doString(&m.out.Name, uV1VulnerabilityKeys)
+ case "description":
+ return m.doString(&m.out.Description, uV1VulnerabilityKeys)
+ case "links":
+ return m.doString(&m.out.Links, uV1VulnerabilityKeys)
+ case "severity":
+ return m.doString(&m.out.Severity, uV1VulnerabilityKeys)
+ case "fixed_in_version":
+ return m.doString(&m.out.FixedInVersion, uV1VulnerabilityKeys)
+ case "issued":
+ return m.doText(&m.out.Issued, uV1VulnerabilityKeys)
+ case "normalized_severity":
+ return m.doText(&m.out.NormalizedSeverity, uV1VulnerabilityKeys)
+ case "arch_op":
+ return m.doText(&m.out.ArchOperation, uV1VulnerabilityKeys)
+ case "package":
+ v := new(claircore.Package)
+ if err := json.UnmarshalDecode(m.dec, v); err != nil {
+ return m.error(err)
+ }
+ if !reflect.ValueOf(v).Elem().IsZero() {
+ m.out.Package = v
+ }
+ return uV1VulnerabilityKeys
+ case "distribution":
+ v := new(claircore.Distribution)
+ if err := json.UnmarshalDecode(m.dec, v); err != nil {
+ return m.error(err)
+ }
+ if !reflect.ValueOf(v).Elem().IsZero() {
+ m.out.Dist = v
+ }
+ return uV1VulnerabilityKeys
+ case "repository":
+ v := new(claircore.Repository)
+ if err := json.UnmarshalDecode(m.dec, v); err != nil {
+ return m.error(err)
+ }
+ if !reflect.ValueOf(v).Elem().IsZero() {
+ m.out.Repo = v
+ }
+ return uV1VulnerabilityKeys
+ case "range":
+ v := new(claircore.Range)
+ if err := json.UnmarshalDecode(m.dec, v); err != nil {
+ return m.error(err)
+ }
+ if v.Lower.Kind != "" || v.Upper.Kind != "" {
+ m.out.Range = v
+ }
+ return uV1VulnerabilityKeys
+ default: // Unexpected key, skip
+ if err := m.dec.SkipValue(); err != nil {
+ return m.error(err)
+ }
+ return uV1VulnerabilityKeys
+ }
+ case '}':
+ return nil
+ default:
+ return m.invalidObjectKey()
+ }
+}
+
+func v1RangeUnmarshal(dec *jsontext.Decoder, v *claircore.Range) error {
+ return runUnmarshalMachine(dec, v, unmarshalObjectBegin(uV1RangeKeys))
+}
+
+func uV1RangeKeys(m *unmarshalMachine[claircore.Range]) uStateFn[claircore.Range] {
+ tok, err := m.dec.ReadToken()
+ if err != nil {
+ return m.error(err)
+ }
+
+ switch tok.Kind() {
+ case '"':
+ switch tok.String() {
+ case "[":
+ return m.doText(&m.out.Lower, uV1RangeKeys)
+ case ")":
+ return m.doText(&m.out.Upper, uV1RangeKeys)
+ default: // Unexpected key, skip
+ if err := m.dec.SkipValue(); err != nil {
+ return m.error(err)
+ }
+ return uV1RangeKeys
+ }
+ case '}':
+ return nil
+ default:
+ return m.invalidObjectKey()
+ }
+}
+
+func v1VulnerabilityReportUnmarshal(dec *jsontext.Decoder, v *claircore.VulnerabilityReport) error {
+ return runUnmarshalMachine(dec, v, unmarshalObjectBegin(uV1VulnerabilityReportKeys))
+}
+
+func uV1VulnerabilityReportKeys(m *unmarshalMachine[claircore.VulnerabilityReport]) uStateFn[claircore.VulnerabilityReport] {
+ tok, err := m.dec.ReadToken()
+ if err != nil {
+ return m.error(err)
+ }
+
+ switch tok.Kind() {
+ case '"':
+ switch tok.String() {
+ case "manifest_hash":
+ return m.doText(&m.out.Hash, uV1VulnerabilityReportKeys)
+ case "packages":
+ m.out.Packages = make(map[string]*claircore.Package)
+ return unmarshalMap(m, &m.out.Packages, uV1VulnerabilityReportKeys)
+ case "distributions":
+ m.out.Distributions = make(map[string]*claircore.Distribution)
+ return unmarshalMap(m, &m.out.Distributions, uV1VulnerabilityReportKeys)
+ case "repository":
+ m.out.Repositories = make(map[string]*claircore.Repository)
+ return unmarshalMap(m, &m.out.Repositories, uV1VulnerabilityReportKeys)
+ case "environments":
+ m.out.Environments = make(map[string][]*claircore.Environment)
+ return unmarshalMap(m, &m.out.Environments, uV1VulnerabilityReportKeys)
+ case "vulnerabilities":
+ m.out.Vulnerabilities = make(map[string]*claircore.Vulnerability)
+ return unmarshalMap(m, &m.out.Vulnerabilities, uV1VulnerabilityReportKeys)
+ case "package_vulnerabilities":
+ m.out.PackageVulnerabilities = make(map[string][]string)
+ return unmarshalMap(m, &m.out.PackageVulnerabilities, uV1VulnerabilityReportKeys)
+ case "enrichments":
+ m.out.Enrichments = make(map[string][]jsonv1.RawMessage)
+ return unmarshalMap(m, &m.out.Enrichments, uV1VulnerabilityReportKeys)
+ default: // Unexpected key, skip
+ if err := m.dec.SkipValue(); err != nil {
+ return m.error(err)
+ }
+ return uV1VulnerabilityReportKeys
+ }
+ case '}':
+ return nil
+ default:
+ return m.invalidObjectKey()
+ }
+}
+
+var (
+ archKey = jsontext.String(`arch`)
+ cpeKey = jsontext.String(`cpe`)
+ descriptionKey = jsontext.String(`description`)
+ didKey = jsontext.String(`did`)
+ distributionIDKey = jsontext.String(`distribution_id`)
+ distributionsKey = jsontext.String(`distributions`)
+ distributionKey = jsontext.String(`distribution`)
+ enrichmentsKey = jsontext.String(`enrichments`)
+ environmentsKey = jsontext.String(`environments`)
+ errKey = jsontext.String(`err`)
+ fixedInKey = jsontext.String(`fixed_in_version`)
+ hashKey = jsontext.String(`hash`)
+ headersKey = jsontext.String(`headers`)
+ idKey = jsontext.String(`id`)
+ introducedKey = jsontext.String(`introduced_in`)
+ issuedKey = jsontext.String(`issued`)
+ keyKey = jsontext.String(`key`)
+ kindKey = jsontext.String(`kind`)
+ layersKey = jsontext.String(`layers`)
+ linksKey = jsontext.String(`links`)
+ moduleKey = jsontext.String(`module`)
+ nameKey = jsontext.String(`name`)
+ normVersionKey = jsontext.String(`normalized_version`)
+ normSeverityKey = jsontext.String(`normalized_severity`)
+ packageDBKey = jsontext.String(`package_db`)
+ packagesKey = jsontext.String(`packages`)
+ packageKey = jsontext.String(`package`)
+ packageVulnerabilitiesKey = jsontext.String(`package_vulnerabilities`)
+ prettyNameKey = jsontext.String(`pretty_name`)
+ reporthashKey = jsontext.String(`manifest_hash`)
+ repositoryKey = jsontext.String(`repository`)
+ repositoryIDsKey = jsontext.String(`repository_ids`)
+ severityKey = jsontext.String(`severity`)
+ sourceKey = jsontext.String(`source`)
+ stateKey = jsontext.String(`state`)
+ successKey = jsontext.String(`success`)
+ updaterKey = jsontext.String(`updater`)
+ uriKey = jsontext.String(`uri`)
+ versionCodeNameKey = jsontext.String(`version_code_name`)
+ versionIDKey = jsontext.String(`version_id`)
+ versionKey = jsontext.String(`version`)
+ vulnerabilitiesKey = jsontext.String(`vulnerabilities`)
+ rangeKey = jsontext.String(`range`)
+ archOpKey = jsontext.String(`arch_op`)
+)
diff --git a/internal/codec/jsonv2_unmarshal.go b/internal/codec/jsonv2_unmarshal.go
new file mode 100644
index 0000000000..0d52582c09
--- /dev/null
+++ b/internal/codec/jsonv2_unmarshal.go
@@ -0,0 +1,200 @@
+package codec
+
+import (
+ "encoding"
+ "fmt"
+ "reflect"
+
+ "github.com/quay/clair/v4/internal/json"
+ "github.com/quay/clair/v4/internal/json/jsontext"
+)
+
+type unmarshalMachine[V any] struct {
+ dec *jsontext.Decoder
+ out *V
+ err error
+ state uStateFn[V]
+}
+
+type uStateFn[V any] func(*unmarshalMachine[V]) uStateFn[V]
+
+func runUnmarshalMachine[V any](dec *jsontext.Decoder, out *V, init uStateFn[V]) error {
+ m := unmarshalMachine[V]{
+ dec: dec,
+ out: out,
+ err: nil,
+ }
+ state := init
+ for state != nil {
+ state = state(&m)
+ }
+
+ return m.err
+}
+
+func (m *unmarshalMachine[V]) error(err error) uStateFn[V] {
+ m.err = err
+ return nil
+}
+
+func (m *unmarshalMachine[V]) invalidObjectKey() uStateFn[V] {
+ m.err = fmt.Errorf("invalid object key (at %s)", m.dec.StackPointer())
+ return nil
+}
+
+func (m *unmarshalMachine[V]) expectKind(want jsontext.Kind) (jsontext.Token, error) {
+ tok, err := m.dec.ReadToken()
+ if err != nil {
+ return jsontext.Token{}, err
+ }
+ if got := tok.Kind(); got != want {
+ err := fmt.Errorf("unexpected token (at %s): got %q, want %q: %w", m.dec.StackPointer(), tok, want, m.dec.SkipValue())
+ return jsontext.Token{}, err
+ }
+ return tok, nil
+}
+
+func unmarshalArray[V any, T any](m *unmarshalMachine[V], out *[]T, after uStateFn[V]) uStateFn[V] {
+ tok, err := m.dec.ReadToken()
+ if err != nil {
+ return m.error(err)
+ }
+ switch tok.Kind() {
+ case '[':
+ *out = make([]T, 0)
+ case 'n':
+ *out = nil
+ return after
+ default:
+ err := fmt.Errorf("unexpected token (at %s): got %q: %w", m.dec.StackPointer(), tok, m.dec.SkipValue())
+ return m.error(err)
+ }
+
+ var arrayElem uStateFn[V]
+ arrayElem = func(m *unmarshalMachine[V]) uStateFn[V] {
+ if m.dec.PeekKind() == ']' {
+ if _, err := m.dec.ReadToken(); err != nil {
+ return m.error(err)
+ }
+ return after
+ }
+ var v T
+ if err := json.UnmarshalDecode(m.dec, &v); err != nil {
+ return m.error(err)
+ }
+ *out = append(*out, v)
+ return arrayElem
+ }
+ return arrayElem
+}
+
+func unmarshalObjectBegin[V any](keys uStateFn[V]) uStateFn[V] {
+ return func(m *unmarshalMachine[V]) uStateFn[V] {
+ tok, err := m.dec.ReadToken()
+ if err != nil {
+ return m.error(err)
+ }
+ switch k := tok.Kind(); k {
+ case '{':
+ return keys
+ case 'n':
+ return nil
+ default:
+ err := fmt.Errorf("unexpected token kind: got: %q, need: %q", k, jsontext.BeginObject)
+ return m.error(err)
+ }
+ }
+}
+
+func unmarshalMap[V any, T any](m *unmarshalMachine[V], out *map[string]T, after uStateFn[V]) uStateFn[V] {
+ typ := reflect.TypeFor[T]()
+ needPtr := typ.Kind() == reflect.Pointer
+ if needPtr {
+ typ = typ.Elem()
+ }
+
+ tok, err := m.dec.ReadToken()
+ if err != nil {
+ return m.error(err)
+ }
+ switch tok.Kind() {
+ case '{':
+ *out = make(map[string]T)
+ case 'n':
+ *out = nil
+ return after
+ default:
+ err := fmt.Errorf("unexpected token (at %s): got %q: %w", m.dec.StackPointer(), tok, m.dec.SkipValue())
+ return m.error(err)
+ }
+
+ var mapKV uStateFn[V]
+ mapKV = func(m *unmarshalMachine[V]) uStateFn[V] {
+ tok, err := m.dec.ReadToken()
+ if err != nil {
+ return m.error(err)
+ }
+
+ var key string
+ switch k := tok.Kind(); k {
+ case '}':
+ return after
+ case '"':
+ key = tok.String()
+ default:
+ return m.error(fmt.Errorf("unexpected token: %v", tok))
+ }
+
+ rv := reflect.New(typ)
+ if err := json.UnmarshalDecode(m.dec, rv.Interface()); err != nil {
+ return m.error(err)
+ }
+ if !needPtr {
+ rv = rv.Elem()
+ }
+ v, ok := rv.Interface().(T)
+ // TODO(go1.25) This should be more efficient:
+ // v, ok := reflect.TypeAssert[T](rv)
+ if !ok {
+ panic("unreachable: all the weirdness should be contained to this function")
+ }
+ (*out)[key] = v
+
+ return mapKV
+ }
+ return mapKV
+}
+
+func (m *unmarshalMachine[V]) doBool(out *bool, next uStateFn[V]) uStateFn[V] {
+ tok, err := m.dec.ReadToken()
+ if err != nil {
+ return m.error(err)
+ }
+ switch tok.Kind() {
+ case 't', 'f':
+ default:
+ err := fmt.Errorf("unexpected token (at %s): got %q, want %q: %w", m.dec.StackPointer(), tok, "t/f", m.dec.SkipValue())
+ return m.error(err)
+ }
+ *out = tok.Bool()
+ return next
+}
+
+func (m *unmarshalMachine[V]) doString(out *string, next uStateFn[V]) uStateFn[V] {
+ tok, err := m.expectKind(jsontext.Kind('"'))
+ if err != nil {
+ return m.error(err)
+ }
+ *out = tok.String()
+ return next
+}
+
+func (m *unmarshalMachine[V]) doText(out encoding.TextUnmarshaler, next uStateFn[V]) uStateFn[V] {
+ tok, err := m.expectKind(jsontext.Kind('"'))
+ if err != nil {
+ return m.error(err)
+ }
+ if err := out.UnmarshalText([]byte(tok.String())); err != nil {
+ err = fmt.Errorf("at %s: %w", m.dec.StackPointer(), err)
+ return m.error(err)
+ }
+ return next
+}
diff --git a/internal/codec/reader.go b/internal/codec/reader.go
index 86d9ffe65c..c7636d8b68 100644
--- a/internal/codec/reader.go
+++ b/internal/codec/reader.go
@@ -2,15 +2,14 @@ package codec
import "io"
-// JSONReader returns an io.ReadCloser backed by a pipe being fed by a JSON
+// JSONReader returns an [io.ReadCloser] backed by a pipe being fed by a JSON
// encoder.
-func JSONReader(v interface{}) io.ReadCloser {
+func JSONReader(v any) io.ReadCloser {
r, w := io.Pipe()
// This unsupervised goroutine should be fine, because the writer will error
// once the reader is closed.
go func() {
enc := GetEncoder(w)
- defer PutEncoder(enc)
defer w.Close()
if err := enc.Encode(v); err != nil {
w.CloseWithError(err)
diff --git a/internal/codec/scheme_string.go b/internal/codec/scheme_string.go
new file mode 100644
index 0000000000..420da6f7dc
--- /dev/null
+++ b/internal/codec/scheme_string.go
@@ -0,0 +1,24 @@
+// Code generated by "stringer -type Scheme -trimprefix Scheme"; DO NOT EDIT.
+
+package codec
+
+import "strconv"
+
+func _() {
+ // An "invalid array index" compiler error signifies that the constant values have changed.
+ // Re-run the stringer command to generate them again.
+ var x [1]struct{}
+ _ = x[SchemeV1-1]
+}
+
+const _Scheme_name = "V1"
+
+var _Scheme_index = [...]uint8{0, 2}
+
+func (i Scheme) String() string {
+ i -= 1
+ if i >= Scheme(len(_Scheme_index)-1) {
+ return "Scheme(" + strconv.FormatInt(int64(i+1), 10) + ")"
+ }
+ return _Scheme_name[_Scheme_index[i]:_Scheme_index[i+1]]
+}
diff --git a/internal/codec/testdata/Distribution.txtar b/internal/codec/testdata/Distribution.txtar
new file mode 100644
index 0000000000..e92681515e
--- /dev/null
+++ b/internal/codec/testdata/Distribution.txtar
@@ -0,0 +1,25 @@
+-- Simple.in.json --
+{
+ "id": "0",
+ "foo": "bar",
+ "did": "linux",
+ "name": "Linux 3.11",
+ "version": "3.11",
+ "version_code_name": "For Workgroups",
+ "version_id": "3",
+ "arch": "amd64",
+ "pretty_name": "Linux 3.11 For Workgroups",
+ "cpe": "cpe:2.3:o:*:*:*:*:*:*:*:*:*:*"
+}
+-- Simple.want.json --
+{
+ "id": "0",
+ "did": "linux",
+ "name": "Linux 3.11",
+ "version": "3.11",
+ "version_code_name": "For Workgroups",
+ "version_id": "3",
+ "arch": "amd64",
+ "pretty_name": "Linux 3.11 For Workgroups",
+ "cpe": "cpe:2.3:o:*:*:*:*:*:*:*:*:*:*"
+}
diff --git a/internal/codec/testdata/Environment.txtar b/internal/codec/testdata/Environment.txtar
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/internal/codec/testdata/IndexReport.txtar b/internal/codec/testdata/IndexReport.txtar
new file mode 100644
index 0000000000..1c27de2e02
--- /dev/null
+++ b/internal/codec/testdata/IndexReport.txtar
@@ -0,0 +1,117 @@
+-- Simple.in.json --
+{
+ "manifest_hash": "sha256:01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b",
+ "state": "Success",
+ "packages": {
+ "0": {
+ "id": "0",
+ "name": "hello",
+ "version": "1.0.0",
+ "kind": "BINARY",
+ "normalized_version": "semver:1.0.0",
+ "arch": "amd64",
+ "cpe": "cpe:/a"
+ }
+ },
+ "distributions": {
+ "0": {
+ "id": "0",
+ "did": "linux",
+ "name": "Linux 3.11",
+ "version": "3.11",
+ "version_code_name": "For Workgroups",
+ "version_id": "3",
+ "arch": "amd64",
+ "cpe": "cpe:/o",
+ "pretty_name": "Linux 3.11 For Workgroups"
+ }
+ },
+ "repository": {
+ "0": {
+ "id": "0",
+ "name": "a",
+ "key": "b",
+ "uri": "tag:example",
+ "cpe": "cpe:/o"
+ }
+ },
+ "environments": {
+ "0": [
+ {
+ "package_db": "rpm:/var/lib/rpm",
+ "introduced_in": "sha256:01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b",
+ "distribution_id": "0",
+ "repository_ids": [
+ "0"
+ ]
+ }
+ ]
+ },
+ "success": true,
+ "otherkey:": null
+}
+-- Simple.want.json --
+{
+ "manifest_hash": "sha256:01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b",
+ "state": "Success",
+ "success": true,
+ "packages": {
+ "0": {
+ "id": "0",
+ "name": "hello",
+ "version": "1.0.0",
+ "kind": "BINARY",
+ "arch": "amd64",
+ "normalized_version": "semver:1.0.0.0.0.0.0.0.0.0",
+ "cpe": "cpe:2.3:a:*:*:*:*:*:*:*:*:*:*"
+ }
+ },
+ "distributions": {
+ "0": {
+ "id": "0",
+ "did": "linux",
+ "name": "Linux 3.11",
+ "version": "3.11",
+ "version_code_name": "For Workgroups",
+ "version_id": "3",
+ "arch": "amd64",
+ "pretty_name": "Linux 3.11 For Workgroups",
+ "cpe": "cpe:2.3:o:*:*:*:*:*:*:*:*:*:*"
+ }
+ },
+ "repository": {
+ "0": {
+ "id": "0",
+ "name": "a",
+ "key": "b",
+ "uri": "tag:example",
+ "cpe": "cpe:2.3:o:*:*:*:*:*:*:*:*:*:*"
+ }
+ },
+ "environments": {
+ "0": [
+ {
+ "package_db": "rpm:/var/lib/rpm",
+ "distribution_id": "0",
+ "introduced_in": "sha256:01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b",
+ "repository_ids": [
+ "0"
+ ]
+ }
+ ]
+ }
+}
+-- Error.in.json --
+{
+ "manifest_hash": "sha256:01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b",
+ "state": "Error",
+ "success": false,
+ "err": "test error"
+}
+-- Error.want.json --
+{
+ "manifest_hash": "sha256:01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b",
+ "state": "Error",
+ "success": false,
+ "err": "test error"
+}
diff --git a/internal/codec/testdata/Layer.txtar b/internal/codec/testdata/Layer.txtar
new file mode 100644
index 0000000000..53b0de0a15
--- /dev/null
+++ b/internal/codec/testdata/Layer.txtar
@@ -0,0 +1,21 @@
+-- Simple.in.json --
+{
+ "hash": "sha256:01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b",
+ "uri": "tag:example",
+ "otherkey": null,
+ "headers": {
+ "a": [
+ "b"
+ ]
+ }
+}
+-- Simple.want.json --
+{
+ "hash": "sha256:01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b",
+ "uri": "tag:example",
+ "headers": {
+ "a": [
+ "b"
+ ]
+ }
+}
diff --git a/internal/codec/testdata/Manifest.txtar b/internal/codec/testdata/Manifest.txtar
new file mode 100644
index 0000000000..a505538313
--- /dev/null
+++ b/internal/codec/testdata/Manifest.txtar
@@ -0,0 +1,44 @@
+-- Simple.in.json --
+{
+ "hash": "sha256:01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b",
+ "layers": [
+ {
+ "hash": "sha256:01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b",
+ "uri": "tag:example",
+ "headers": { "a": [ "b" ] }
+ }
+ ],
+ "otherkey": null
+}
+-- Simple.want.json --
+{
+ "hash": "sha256:01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b",
+ "layers": [
+ {
+ "hash": "sha256:01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b",
+ "uri": "tag:example",
+ "headers": { "a": [ "b" ] }
+ }
+ ]
+}
+-- NoLayers.in.json --
+{
+ "hash": "sha256:01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b",
+ "layers": [],
+ "otherkey": null
+}
+-- NoLayers.want.json --
+{
+ "hash": "sha256:01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b",
+ "layers": []
+}
+-- NullLayers.in.json --
+{
+ "hash": "sha256:01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b",
+ "layers": null,
+ "otherkey": null
+}
+-- NullLayers.want.json --
+{
+ "hash": "sha256:01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b"
+}
diff --git a/internal/codec/testdata/Package.txtar b/internal/codec/testdata/Package.txtar
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/internal/codec/testdata/Range.txtar b/internal/codec/testdata/Range.txtar
new file mode 100644
index 0000000000..b017b1004d
--- /dev/null
+++ b/internal/codec/testdata/Range.txtar
@@ -0,0 +1,8 @@
+-- Simple.in.json --
+{
+ "[": "semver:1.0"
+}
+-- Simple.want.json --
+{
+ "[": "semver:1.0.0.0.0.0.0.0.0.0"
+}
diff --git a/internal/codec/testdata/Repository.txtar b/internal/codec/testdata/Repository.txtar
new file mode 100644
index 0000000000..457ff1f748
--- /dev/null
+++ b/internal/codec/testdata/Repository.txtar
@@ -0,0 +1,33 @@
+-- Simple.in.json --
+{
+ "id": "0",
+ "extrakey": null,
+ "name": "a",
+ "key": "b",
+ "uri": "tag:example",
+ "cpe": "cpe:2.3:o:*:*:*:*:*:*:*:*:*:*"
+}
+-- Simple.want.json --
+{
+ "id": "0",
+ "name": "a",
+ "key": "b",
+ "uri": "tag:example",
+ "cpe": "cpe:2.3:o:*:*:*:*:*:*:*:*:*:*"
+}
+-- CPE22.in.json --
+{
+ "id": "0",
+ "name": "a",
+ "key": "b",
+ "uri": "tag:example",
+ "cpe": "cpe:/o"
+}
+-- CPE22.want.json --
+{
+ "id": "0",
+ "name": "a",
+ "key": "b",
+ "uri": "tag:example",
+ "cpe": "cpe:2.3:o:*:*:*:*:*:*:*:*:*:*"
+}
diff --git a/internal/codec/testdata/Vulnerability.txtar b/internal/codec/testdata/Vulnerability.txtar
new file mode 100644
index 0000000000..32a8d8404b
--- /dev/null
+++ b/internal/codec/testdata/Vulnerability.txtar
@@ -0,0 +1,38 @@
+-- Simple.in.json --
+{
+ "id": "0",
+ "updater": "test",
+ "name": "CVE-2025-7777",
+ "issued": "2025-08-27T15:45:00Z",
+ "links": "https://security.access.redhat.com/data/csaf/v2/vex/2025/cve-2025-7777.json",
+ "severity": "not-quite-heartbleed",
+ "normalized_severity": "Unknown",
+ "package": null,
+ "repository": {
+ "id": "0",
+ "name": "a",
+ "key": "b",
+ "uri": "tag:example",
+ "cpe": "cpe:/o"
+ },
+ "arch_op": "equals",
+ "otherkey": null
+}
+-- Simple.want.json --
+{
+ "id": "0",
+ "updater": "test",
+ "name": "CVE-2025-7777",
+ "issued": "2025-08-27T15:45:00Z",
+ "links": "https://security.access.redhat.com/data/csaf/v2/vex/2025/cve-2025-7777.json",
+ "severity": "not-quite-heartbleed",
+ "normalized_severity": "Unknown",
+ "repository": {
+ "id": "0",
+ "name": "a",
+ "key": "b",
+ "uri": "tag:example",
+ "cpe": "cpe:2.3:o:*:*:*:*:*:*:*:*:*:*"
+ },
+ "arch_op": "equals"
+}
diff --git a/internal/codec/testdata/VulnerabilityReport.txtar b/internal/codec/testdata/VulnerabilityReport.txtar
new file mode 100644
index 0000000000..8563a92c25
--- /dev/null
+++ b/internal/codec/testdata/VulnerabilityReport.txtar
@@ -0,0 +1,139 @@
+-- Simple.in.json --
+{
+ "manifest_hash": "sha256:01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b",
+ "packages": {
+ "0": {
+ "id": "0",
+ "name": "hello",
+ "version": "1.0.0",
+ "kind": "BINARY",
+ "normalized_version": "semver:1.0.0",
+ "arch": "amd64",
+ "cpe": "cpe:/a"
+ }
+ },
+ "distributions": {
+ "0": {
+ "id": "0",
+ "did": "linux",
+ "name": "Linux 3.11",
+ "version": "3.11",
+ "version_code_name": "For Workgroups",
+ "version_id": "3",
+ "arch": "amd64",
+ "cpe": "cpe:/o",
+ "pretty_name": "Linux 3.11 For Workgroups"
+ }
+ },
+ "repository": {
+ "0": {
+ "id": "0",
+ "name": "a",
+ "key": "b",
+ "uri": "tag:example",
+ "cpe": "cpe:/o"
+ }
+ },
+ "environments": {
+ "0": [
+ {
+ "package_db": "rpm:/var/lib/rpm",
+ "introduced_in": "sha256:01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b",
+ "distribution_id": "0",
+ "repository_ids": [
+ "0"
+ ]
+ }
+ ]
+ },
+ "vulnerabilities": {
+ "0": {
+ "id": "0",
+ "updater": "test",
+ "name": "CVE-2025-7777",
+ "issued": "2025-08-27T15:45:00Z",
+ "links": "https://security.access.redhat.com/data/csaf/v2/vex/2025/cve-2025-7777.json",
+ "severity": "not-quite-heartbleed",
+ "normalized_severity": "Unknown",
+ "package": null,
+ "repository": {
+ "id": "0",
+ "name": "a",
+ "key": "b",
+ "uri": "tag:example",
+ "cpe": "cpe:/o"
+ },
+ "arch_op": "equals"
+ }
+ },
+ "package_vulnerabilities": {"0":["0"]}
+}
+-- Simple.want.json --
+{
+ "manifest_hash": "sha256:01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b",
+ "packages": {
+ "0": {
+ "id": "0",
+ "name": "hello",
+ "version": "1.0.0",
+ "kind": "BINARY",
+ "normalized_version": "semver:1.0.0.0.0.0.0.0.0.0",
+ "arch": "amd64",
+ "cpe": "cpe:2.3:a:*:*:*:*:*:*:*:*:*:*"
+ }
+ },
+ "distributions": {
+ "0": {
+ "id": "0",
+ "did": "linux",
+ "name": "Linux 3.11",
+ "version": "3.11",
+ "version_code_name": "For Workgroups",
+ "version_id": "3",
+ "arch": "amd64",
+ "cpe": "cpe:2.3:o:*:*:*:*:*:*:*:*:*:*",
+ "pretty_name": "Linux 3.11 For Workgroups"
+ }
+ },
+ "repository": {
+ "0": {
+ "id": "0",
+ "name": "a",
+ "key": "b",
+ "uri": "tag:example",
+ "cpe": "cpe:2.3:o:*:*:*:*:*:*:*:*:*:*"
+ }
+ },
+ "environments": {
+ "0": [
+ {
+ "package_db": "rpm:/var/lib/rpm",
+ "introduced_in": "sha256:01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b",
+ "distribution_id": "0",
+ "repository_ids": [
+ "0"
+ ]
+ }
+ ]
+ },
+ "vulnerabilities": {
+ "0": {
+ "id": "0",
+ "updater": "test",
+ "name": "CVE-2025-7777",
+ "issued": "2025-08-27T15:45:00Z",
+ "links": "https://security.access.redhat.com/data/csaf/v2/vex/2025/cve-2025-7777.json",
+ "severity": "not-quite-heartbleed",
+ "normalized_severity": "Unknown",
+ "repository": {
+ "id": "0",
+ "name": "a",
+ "key": "b",
+ "uri": "tag:example",
+ "cpe": "cpe:2.3:o:*:*:*:*:*:*:*:*:*:*"
+ },
+ "arch_op": "equals"
+ }
+ },
+ "package_vulnerabilities": {"0":["0"]}
+}
diff --git a/internal/json/alias.go b/internal/json/alias.go
new file mode 100644
index 0000000000..ebbdffbfe0
--- /dev/null
+++ b/internal/json/alias.go
@@ -0,0 +1,985 @@
+// Copyright 2025 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// Code generated by alias_gen.go; DO NOT EDIT.
+
+//go:build goexperiment.jsonv2 && go1.25
+
+// Package json implements semantic processing of JSON as specified in RFC 8259.
+// JSON is a simple data interchange format that can represent
+// primitive data types such as booleans, strings, and numbers,
+// in addition to structured data types such as objects and arrays.
+//
+// [Marshal] and [Unmarshal] encode and decode Go values
+// to/from JSON text contained within a []byte.
+// [MarshalWrite] and [UnmarshalRead] operate on JSON text
+// by writing to or reading from an [io.Writer] or [io.Reader].
+// [MarshalEncode] and [UnmarshalDecode] operate on JSON text
+// by encoding to or decoding from a [jsontext.Encoder] or [jsontext.Decoder].
+// [Options] may be passed to each of the marshal or unmarshal functions
+// to configure the semantic behavior of marshaling and unmarshaling
+// (i.e., alter how JSON data is understood as Go data and vice versa).
+// [jsontext.Options] may also be passed to the marshal or unmarshal functions
+// to configure the syntactic behavior of encoding or decoding.
+//
+// The data types of JSON are mapped to/from the data types of Go based on
+// the closest logical equivalent between the two type systems. For example,
+// a JSON boolean corresponds with a Go bool,
+// a JSON string corresponds with a Go string,
+// a JSON number corresponds with a Go int, uint or float,
+// a JSON array corresponds with a Go slice or array, and
+// a JSON object corresponds with a Go struct or map.
+// See the documentation on [Marshal] and [Unmarshal] for a comprehensive list
+// of how the JSON and Go type systems correspond.
+//
+// Arbitrary Go types can customize their JSON representation by implementing
+// [Marshaler], [MarshalerTo], [Unmarshaler], or [UnmarshalerFrom].
+// This provides authors of Go types with control over how their types are
+// serialized as JSON. Alternatively, users can implement functions that match
+// [MarshalFunc], [MarshalToFunc], [UnmarshalFunc], or [UnmarshalFromFunc]
+// to specify the JSON representation for arbitrary types.
+// This provides callers of JSON functionality with control over
+// how any arbitrary type is serialized as JSON.
+//
+// # JSON Representation of Go structs
+//
+// A Go struct is naturally represented as a JSON object,
+// where each Go struct field corresponds with a JSON object member.
+// When marshaling, all Go struct fields are recursively encoded in depth-first
+// order as JSON object members except those that are ignored or omitted.
+// When unmarshaling, JSON object members are recursively decoded
+// into the corresponding Go struct fields.
+// Object members that do not match any struct fields,
+// also known as “unknown members”, are ignored by default or rejected
+// if [RejectUnknownMembers] is specified.
+//
+// The representation of each struct field can be customized in the
+// "json" struct field tag, where the tag is a comma separated list of options.
+// As a special case, if the entire tag is `json:"-"`,
+// then the field is ignored with regard to its JSON representation.
+// Some options also have equivalent behavior controlled by a caller-specified [Options].
+// Field-specified options take precedence over caller-specified options.
+//
+// The first option is the JSON object name override for the Go struct field.
+// If the name is not specified, then the Go struct field name
+// is used as the JSON object name. JSON names containing commas or quotes,
+// or names identical to "" or "-", can be specified using
+// a single-quoted string literal, where the syntax is identical to
+// the Go grammar for a double-quoted string literal,
+// but instead uses single quotes as the delimiters.
+// By default, unmarshaling uses case-sensitive matching to identify
+// the Go struct field associated with a JSON object name.
+//
+// After the name, the following tag options are supported:
+//
+// - omitzero: When marshaling, the "omitzero" option specifies that
+// the struct field should be omitted if the field value is zero
+// as determined by the "IsZero() bool" method if present,
+// otherwise based on whether the field is the zero Go value.
+// This option has no effect when unmarshaling.
+//
+// - omitempty: When marshaling, the "omitempty" option specifies that
+// the struct field should be omitted if the field value would have been
+// encoded as a JSON null, empty string, empty object, or empty array.
+// This option has no effect when unmarshaling.
+//
+// - string: The "string" option specifies that [StringifyNumbers]
+// be set when marshaling or unmarshaling a struct field value.
+// This causes numeric types to be encoded as a JSON number
+// within a JSON string, and to be decoded from a JSON string
+// containing the JSON number without any surrounding whitespace.
+// This extra level of encoding is often necessary since
+// many JSON parsers cannot precisely represent 64-bit integers.
+//
+// - case: When unmarshaling, the "case" option specifies how
+// JSON object names are matched with the JSON name for Go struct fields.
+// The option is a key-value pair specified as "case:value" where
+// the value must either be 'ignore' or 'strict'.
+// The 'ignore' value specifies that matching is case-insensitive
+// where dashes and underscores are also ignored. If multiple fields match,
+// the first declared field in breadth-first order takes precedence.
+// The 'strict' value specifies that matching is case-sensitive.
+// This takes precedence over the [MatchCaseInsensitiveNames] option.
+//
+// - inline: The "inline" option specifies that
+// the JSON representable content of this field type is to be promoted
+// as if they were specified in the parent struct.
+// It is the JSON equivalent of Go struct embedding.
+// A Go embedded field is implicitly inlined unless an explicit JSON name
+// is specified. The inlined field must be a Go struct
+// (that does not implement any JSON methods), [jsontext.Value],
+// map[~string]T, or an unnamed pointer to such types. When marshaling,
+// inlined fields from a pointer type are omitted if it is nil.
+// Inlined fields of type [jsontext.Value] and map[~string]T are called
+// “inlined fallbacks” as they can represent all possible
+// JSON object members not directly handled by the parent struct.
+// Only one inlined fallback field may be specified in a struct,
+// while many non-fallback fields may be specified. This option
+// must not be specified with any other option (including the JSON name).
+//
+// - unknown: The "unknown" option is a specialized variant
+// of the inlined fallback to indicate that this Go struct field
+// contains any number of unknown JSON object members. The field type must
+// be a [jsontext.Value], map[~string]T, or an unnamed pointer to such types.
+// If [DiscardUnknownMembers] is specified when marshaling,
+// the contents of this field are ignored.
+// If [RejectUnknownMembers] is specified when unmarshaling,
+// any unknown object members are rejected regardless of whether
+// an inlined fallback with the "unknown" option exists. This option
+// must not be specified with any other option (including the JSON name).
+//
+// - format: The "format" option specifies a format flag
+// used to specialize the formatting of the field value.
+// The option is a key-value pair specified as "format:value" where
+// the value must be either a literal consisting of letters and numbers
+// (e.g., "format:RFC3339") or a single-quoted string literal
+// (e.g., "format:'2006-01-02'"). The interpretation of the format flag
+// is determined by the struct field type.
+//
+// The "omitzero" and "omitempty" options are mostly semantically identical.
+// The former is defined in terms of the Go type system,
+// while the latter in terms of the JSON type system.
+// Consequently they behave differently in some circumstances.
+// For example, only a nil slice or map is omitted under "omitzero", while
+// an empty slice or map is omitted under "omitempty" regardless of nilness.
+// The "omitzero" option is useful for types with a well-defined zero value
+// (e.g., [net/netip.Addr]) or that have an IsZero method (e.g., [time.Time.IsZero]).
+//
+// Every Go struct corresponds to a list of JSON representable fields
+// which is constructed by performing a breadth-first search over
+// all struct fields (excluding unexported or ignored fields),
+// where the search recursively descends into inlined structs.
+// The set of non-inlined fields in a struct must have unique JSON names.
+// If multiple fields all have the same JSON name, then the one
+// at the shallowest depth takes precedence and the other fields at deeper depths
+// are excluded from the list of JSON representable fields.
+// If multiple fields at the shallowest depth have the same JSON name,
+// but exactly one is explicitly tagged with a JSON name,
+// then that field takes precedence and all others are excluded from the list.
+// This is analogous to Go visibility rules for struct field selection
+// with embedded struct types.
+//
+// Marshaling or unmarshaling a non-empty struct
+// without any JSON representable fields results in a [SemanticError].
+// Unexported fields must not have any `json` tags except for `json:"-"`.
+//
+// # Security Considerations
+//
+// JSON is frequently used as a data interchange format to communicate
+// between different systems, possibly implemented in different languages.
+// For interoperability and security reasons, it is important that
+// all implementations agree upon the semantic meaning of the data.
+//
+// [For example, suppose we have two micro-services.]
+// The first service is responsible for authenticating a JSON request,
+// while the second service is responsible for executing the request
+// (having assumed that the prior service authenticated the request).
+// If an attacker were able to maliciously craft a JSON request such that
+// both services believe that the same request is from different users,
+// it could bypass the authenticator with valid credentials for one user,
+// but maliciously perform an action on behalf of a different user.
+//
+// According to RFC 8259, there unfortunately exist many JSON texts
+// that are syntactically valid but semantically ambiguous.
+// For example, the standard does not define how to interpret duplicate
+// names within an object.
+//
+// The v1 [encoding/json] and [encoding/json/v2] packages
+// interpret some inputs in different ways. In particular:
+//
+// - The standard specifies that JSON must be encoded using UTF-8.
+// By default, v1 replaces invalid bytes of UTF-8 in JSON strings
+// with the Unicode replacement character,
+// while v2 rejects inputs with invalid UTF-8.
+// To change the default, specify the [jsontext.AllowInvalidUTF8] option.
+// The replacement of invalid UTF-8 is a form of data corruption
+// that alters the precise meaning of strings.
+//
+// - The standard does not specify a particular behavior when
+// duplicate names are encountered within a JSON object,
+// which means that different implementations may behave differently.
+// By default, v1 allows for the presence of duplicate names,
+// while v2 rejects duplicate names.
+// To change the default, specify the [jsontext.AllowDuplicateNames] option.
+// If allowed, object members are processed in the order they are observed,
+// meaning that later values will replace or be merged into prior values,
+// depending on the Go value type.
+//
+// - The standard defines a JSON object as an unordered collection of name/value pairs.
+// While ordering can be observed through the underlying [jsontext] API,
+// both v1 and v2 generally avoid exposing the ordering.
+// No application should semantically depend on the order of object members.
+// Allowing duplicate names is a vector through which ordering of members
+// can accidentally be observed and depended upon.
+//
+// - The standard suggests that JSON object names are typically compared
+// based on equality of the sequence of Unicode code points,
+// which implies that comparing names is often case-sensitive.
+// When unmarshaling a JSON object into a Go struct,
+// by default, v1 uses a (loose) case-insensitive match on the name,
+// while v2 uses a (strict) case-sensitive match on the name.
+// To change the default, specify the [MatchCaseInsensitiveNames] option.
+// The use of case-insensitive matching provides another vector through
+// which duplicate names can occur. Allowing case-insensitive matching
+// means that v1 or v2 might interpret JSON objects differently from most
+// other JSON implementations (which typically use a case-sensitive match).
+//
+// - The standard does not specify a particular behavior when
+// an unknown name in a JSON object is encountered.
+// When unmarshaling a JSON object into a Go struct, by default
+// both v1 and v2 ignore unknown names and their corresponding values.
+// To change the default, specify the [RejectUnknownMembers] option.
+//
+// - The standard suggests that implementations may use a float64
+// to represent a JSON number. Consequently, large JSON integers
+// may lose precision when stored as a floating-point type.
+// Both v1 and v2 correctly preserve precision when marshaling and
+// unmarshaling a concrete integer type. However, even if v1 and v2
+// preserve precision for concrete types, other JSON implementations
+// may not be able to preserve precision for outputs produced by v1 or v2.
+// The `string` tag option can be used to specify that an integer type
+// is to be quoted within a JSON string to avoid loss of precision.
+// Furthermore, v1 and v2 may still lose precision when unmarshaling
+// into an any interface value, where unmarshal uses a float64
+// by default to represent a JSON number.
+// To change the default, specify the [WithUnmarshalers] option
+// with a custom unmarshaler that pre-populates the interface value
+// with a concrete Go type that can preserve precision.
+//
+// RFC 8785 specifies a canonical form for any JSON text,
+// which explicitly defines specific behaviors that RFC 8259 leaves undefined.
+// In theory, if a text can be canonicalized with [jsontext.Value.Canonicalize]
+// without changing the semantic meaning of the data, then it provides a
+// greater degree of confidence that the data is more secure and interoperable.
+//
+// The v2 API generally chooses more secure defaults than v1,
+// but care should still be taken with large integers or unknown members.
+//
+// [For example, suppose we have two micro-services.]: https://www.youtube.com/watch?v=avilmOcHKHE&t=1057s
+package json
+
+import (
+ "encoding/json/jsontext"
+ "encoding/json/v2"
+ "io"
+)
+
+// Marshal serializes a Go value as a []byte according to the provided
+// marshal and encode options (while ignoring unmarshal or decode options).
+// It does not terminate the output with a newline.
+//
+// Type-specific marshal functions and methods take precedence
+// over the default representation of a value.
+// Functions or methods that operate on *T are only called when encoding
+// a value of type T (by taking its address) or a non-nil value of *T.
+// Marshal ensures that a value is always addressable
+// (by boxing it on the heap if necessary) so that
+// these functions and methods can be consistently called. For performance,
+// it is recommended that Marshal be passed a non-nil pointer to the value.
+//
+// The input value is encoded as JSON according to the following rules:
+//
+// - If any type-specific functions in a [WithMarshalers] option match
+// the value type, then those functions are called to encode the value.
+// If all applicable functions return [SkipFunc],
+// then the value is encoded according to subsequent rules.
+//
+// - If the value type implements [MarshalerTo],
+// then the MarshalJSONTo method is called to encode the value.
+//
+// - If the value type implements [Marshaler],
+// then the MarshalJSON method is called to encode the value.
+//
+// - If the value type implements [encoding.TextAppender],
+// then the AppendText method is called to encode the value and
+// subsequently encode its result as a JSON string.
+//
+// - If the value type implements [encoding.TextMarshaler],
+// then the MarshalText method is called to encode the value and
+// subsequently encode its result as a JSON string.
+//
+// - Otherwise, the value is encoded according to the value's type
+// as described in detail below.
+//
+// Most Go types have a default JSON representation.
+// Certain types support specialized formatting according to
+// a format flag optionally specified in the Go struct tag
+// for the struct field that contains the current value
+// (see the “JSON Representation of Go structs” section for more details).
+//
+// The representation of each type is as follows:
+//
+// - A Go boolean is encoded as a JSON boolean (e.g., true or false).
+// It does not support any custom format flags.
+//
+// - A Go string is encoded as a JSON string.
+// It does not support any custom format flags.
+//
+// - A Go []byte or [N]byte is encoded as a JSON string containing
+// the binary value encoded using RFC 4648.
+// If the format is "base64" or unspecified, then this uses RFC 4648, section 4.
+// If the format is "base64url", then this uses RFC 4648, section 5.
+// If the format is "base32", then this uses RFC 4648, section 6.
+// If the format is "base32hex", then this uses RFC 4648, section 7.
+// If the format is "base16" or "hex", then this uses RFC 4648, section 8.
+// If the format is "array", then the bytes value is encoded as a JSON array
+// where each byte is recursively JSON-encoded as each JSON array element.
+//
+// - A Go integer is encoded as a JSON number without fractions or exponents.
+// If [StringifyNumbers] is specified or encoding a JSON object name,
+// then the JSON number is encoded within a JSON string.
+// It does not support any custom format flags.
+//
+// - A Go float is encoded as a JSON number.
+// If [StringifyNumbers] is specified or encoding a JSON object name,
+// then the JSON number is encoded within a JSON string.
+// If the format is "nonfinite", then NaN, +Inf, and -Inf are encoded as
+// the JSON strings "NaN", "Infinity", and "-Infinity", respectively.
+// Otherwise, the presence of non-finite numbers results in a [SemanticError].
+//
+// - A Go map is encoded as a JSON object, where each Go map key and value
+// is recursively encoded as a name and value pair in the JSON object.
+// The Go map key must encode as a JSON string, otherwise this results
+// in a [SemanticError]. The Go map is traversed in a non-deterministic order.
+// For deterministic encoding, consider using the [Deterministic] option.
+// If the format is "emitnull", then a nil map is encoded as a JSON null.
+// If the format is "emitempty", then a nil map is encoded as an empty JSON object,
+// regardless of whether [FormatNilMapAsNull] is specified.
+// Otherwise by default, a nil map is encoded as an empty JSON object.
+//
+// - A Go struct is encoded as a JSON object.
+// See the “JSON Representation of Go structs” section
+// in the package-level documentation for more details.
+//
+// - A Go slice is encoded as a JSON array, where each Go slice element
+// is recursively JSON-encoded as the elements of the JSON array.
+// If the format is "emitnull", then a nil slice is encoded as a JSON null.
+// If the format is "emitempty", then a nil slice is encoded as an empty JSON array,
+// regardless of whether [FormatNilSliceAsNull] is specified.
+// Otherwise by default, a nil slice is encoded as an empty JSON array.
+//
+// - A Go array is encoded as a JSON array, where each Go array element
+// is recursively JSON-encoded as the elements of the JSON array.
+// The JSON array length is always identical to the Go array length.
+// It does not support any custom format flags.
+//
+// - A Go pointer is encoded as a JSON null if nil, otherwise it is
+// the recursively JSON-encoded representation of the underlying value.
+// Format flags are forwarded to the encoding of the underlying value.
+//
+// - A Go interface is encoded as a JSON null if nil, otherwise it is
+// the recursively JSON-encoded representation of the underlying value.
+// It does not support any custom format flags.
+//
+// - A Go [time.Time] is encoded as a JSON string containing the timestamp
+// formatted in RFC 3339 with nanosecond precision.
+// If the format matches one of the format constants declared
+// in the time package (e.g., RFC1123), then that format is used.
+// If the format is "unix", "unixmilli", "unixmicro", or "unixnano",
+// then the timestamp is encoded as a possibly fractional JSON number
+// of the number of seconds (or milliseconds, microseconds, or nanoseconds)
+// since the Unix epoch, which is January 1st, 1970 at 00:00:00 UTC.
+// To avoid a fractional component, round the timestamp to the relevant unit.
+// Otherwise, the format is used as-is with [time.Time.Format] if non-empty.
+//
+// - A Go [time.Duration] currently has no default representation and
+// requires an explicit format to be specified.
+// If the format is "sec", "milli", "micro", or "nano",
+// then the duration is encoded as a possibly fractional JSON number
+// of the number of seconds (or milliseconds, microseconds, or nanoseconds).
+// To avoid a fractional component, round the duration to the relevant unit.
+// If the format is "units", it is encoded as a JSON string formatted using
+// [time.Duration.String] (e.g., "1h30m" for 1 hour 30 minutes).
+// If the format is "iso8601", it is encoded as a JSON string using the
+// ISO 8601 standard for durations (e.g., "PT1H30M" for 1 hour 30 minutes)
+// using only accurate units of hours, minutes, and seconds.
+//
+// - All other Go types (e.g., complex numbers, channels, and functions)
+// have no default representation and result in a [SemanticError].
+//
+// JSON cannot represent cyclic data structures and Marshal does not handle them.
+// Passing cyclic structures will result in an error.
+func Marshal(in any, opts ...Options) (out []byte, err error) {
+ return json.Marshal(in, opts...)
+}
+
+// MarshalWrite serializes a Go value into an [io.Writer] according to the provided
+// marshal and encode options (while ignoring unmarshal or decode options).
+// It does not terminate the output with a newline.
+// See [Marshal] for details about the conversion of a Go value into JSON.
+func MarshalWrite(out io.Writer, in any, opts ...Options) (err error) {
+ return json.MarshalWrite(out, in, opts...)
+}
+
+// MarshalEncode serializes a Go value into a [jsontext.Encoder] according to
+// the provided marshal options (while ignoring unmarshal, encode, or decode options).
+// Any marshal-relevant options already specified on the [jsontext.Encoder]
+// take lower precedence than the set of options provided by the caller.
+// Unlike [Marshal] and [MarshalWrite], encode options are ignored because
+// they must have already been specified on the provided [jsontext.Encoder].
+//
+// See [Marshal] for details about the conversion of a Go value into JSON.
+func MarshalEncode(out *jsontext.Encoder, in any, opts ...Options) (err error) {
+ return json.MarshalEncode(out, in, opts...)
+}
+
+// Unmarshal decodes a []byte input into a Go value according to the provided
+// unmarshal and decode options (while ignoring marshal or encode options).
+// The input must be a single JSON value with optional whitespace interspersed.
+// The output must be a non-nil pointer.
+//
+// Type-specific unmarshal functions and methods take precedence
+// over the default representation of a value.
+// Functions or methods that operate on *T are only called when decoding
+// a value of type T (by taking its address) or a non-nil value of *T.
+// Unmarshal ensures that a value is always addressable
+// (by boxing it on the heap if necessary) so that
+// these functions and methods can be consistently called.
+//
+// The input is decoded into the output according to the following rules:
+//
+// - If any type-specific functions in a [WithUnmarshalers] option match
+// the value type, then those functions are called to decode the JSON
+// value. If all applicable functions return [SkipFunc],
+// then the input is decoded according to subsequent rules.
+//
+// - If the value type implements [UnmarshalerFrom],
+// then the UnmarshalJSONFrom method is called to decode the JSON value.
+//
+// - If the value type implements [Unmarshaler],
+// then the UnmarshalJSON method is called to decode the JSON value.
+//
+// - If the value type implements [encoding.TextUnmarshaler],
+// then the input is decoded as a JSON string and
+// the UnmarshalText method is called with the decoded string value.
+// This fails with a [SemanticError] if the input is not a JSON string.
+//
+// - Otherwise, the JSON value is decoded according to the value's type
+// as described in detail below.
+//
+// Most Go types have a default JSON representation.
+// Certain types support specialized formatting according to
+// a format flag optionally specified in the Go struct tag
+// for the struct field that contains the current value
+// (see the “JSON Representation of Go structs” section for more details).
+// A JSON null may be decoded into every supported Go value where
+// it is equivalent to storing the zero value of the Go value.
+// If the input JSON kind is not handled by the current Go value type,
+// then this fails with a [SemanticError]. Unless otherwise specified,
+// the decoded value replaces any pre-existing value.
+//
+// The representation of each type is as follows:
+//
+// - A Go boolean is decoded from a JSON boolean (e.g., true or false).
+// It does not support any custom format flags.
+//
+// - A Go string is decoded from a JSON string.
+// It does not support any custom format flags.
+//
+// - A Go []byte or [N]byte is decoded from a JSON string
+// containing the binary value encoded using RFC 4648.
+// If the format is "base64" or unspecified, then this uses RFC 4648, section 4.
+// If the format is "base64url", then this uses RFC 4648, section 5.
+// If the format is "base32", then this uses RFC 4648, section 6.
+// If the format is "base32hex", then this uses RFC 4648, section 7.
+// If the format is "base16" or "hex", then this uses RFC 4648, section 8.
+// If the format is "array", then the Go slice or array is decoded from a
+// JSON array where each JSON element is recursively decoded for each byte.
+// When decoding into a non-nil []byte, the slice length is reset to zero
+// and the decoded input is appended to it.
+// When decoding into a [N]byte, the input must decode to exactly N bytes,
+// otherwise it fails with a [SemanticError].
+//
+// - A Go integer is decoded from a JSON number.
+// It must be decoded from a JSON string containing a JSON number
+// if [StringifyNumbers] is specified or decoding a JSON object name.
+// It fails with a [SemanticError] if the JSON number
+// has a fractional or exponent component.
+// It also fails if it overflows the representation of the Go integer type.
+// It does not support any custom format flags.
+//
+// - A Go float is decoded from a JSON number.
+// It must be decoded from a JSON string containing a JSON number
+// if [StringifyNumbers] is specified or decoding a JSON object name.
+// It fails if it overflows the representation of the Go float type.
+// If the format is "nonfinite", then the JSON strings
+// "NaN", "Infinity", and "-Infinity" are decoded as NaN, +Inf, and -Inf.
+// Otherwise, the presence of such strings results in a [SemanticError].
+//
+// - A Go map is decoded from a JSON object,
+// where each JSON object name and value pair is recursively decoded
+// as the Go map key and value. Maps are not cleared.
+// If the Go map is nil, then a new map is allocated to decode into.
+// If the decoded key matches an existing Go map entry, the entry value
+// is reused by decoding the JSON object value into it.
+// The formats "emitnull" and "emitempty" have no effect when decoding.
+//
+// - A Go struct is decoded from a JSON object.
+// See the “JSON Representation of Go structs” section
+// in the package-level documentation for more details.
+//
+// - A Go slice is decoded from a JSON array, where each JSON element
+// is recursively decoded and appended to the Go slice.
+// Before appending into a Go slice, a new slice is allocated if it is nil,
+// otherwise the slice length is reset to zero.
+// The formats "emitnull" and "emitempty" have no effect when decoding.
+//
+// - A Go array is decoded from a JSON array, where each JSON array element
+// is recursively decoded as each corresponding Go array element.
+// Each Go array element is zeroed before decoding into it.
+// It fails with a [SemanticError] if the JSON array does not contain
+// the exact same number of elements as the Go array.
+// It does not support any custom format flags.
+//
+// - A Go pointer is decoded based on the JSON kind and underlying Go type.
+// If the input is a JSON null, then this stores a nil pointer.
+// Otherwise, it allocates a new underlying value if the pointer is nil,
+// and recursively JSON decodes into the underlying value.
+// Format flags are forwarded to the decoding of the underlying type.
+//
+// - A Go interface is decoded based on the JSON kind and underlying Go type.
+// If the input is a JSON null, then this stores a nil interface value.
+// Otherwise, a nil interface value of an empty interface type is initialized
+// with a zero Go bool, string, float64, map[string]any, or []any if the
+// input is a JSON boolean, string, number, object, or array, respectively.
+// If the interface value is still nil, then this fails with a [SemanticError]
+// since decoding could not determine an appropriate Go type to decode into.
+// For example, unmarshaling into a nil [io.Reader] fails since
+// there is no concrete type to populate the interface value with.
+// Otherwise an underlying value exists and it recursively decodes
+// the JSON input into it. It does not support any custom format flags.
+//
+// - A Go [time.Time] is decoded from a JSON string containing the time
+// formatted in RFC 3339 with nanosecond precision.
+// If the format matches one of the format constants declared in
+// the time package (e.g., RFC1123), then that format is used for parsing.
+// If the format is "unix", "unixmilli", "unixmicro", or "unixnano",
+// then the timestamp is decoded from an optionally fractional JSON number
+// of the number of seconds (or milliseconds, microseconds, or nanoseconds)
+// since the Unix epoch, which is January 1st, 1970 at 00:00:00 UTC.
+// Otherwise, the format is used as-is with [time.Time.Parse] if non-empty.
+//
+// - A Go [time.Duration] currently has no default representation and
+// requires an explicit format to be specified.
+// If the format is "sec", "milli", "micro", or "nano",
+// then the duration is decoded from an optionally fractional JSON number
+// of the number of seconds (or milliseconds, microseconds, or nanoseconds).
+// If the format is "units", it is decoded from a JSON string parsed using
+// [time.ParseDuration] (e.g., "1h30m" for 1 hour 30 minutes).
+// If the format is "iso8601", it is decoded from a JSON string using the
+// ISO 8601 standard for durations (e.g., "PT1H30M" for 1 hour 30 minutes)
+// accepting only accurate units of hours, minutes, or seconds.
+//
+// - All other Go types (e.g., complex numbers, channels, and functions)
+// have no default representation and result in a [SemanticError].
+//
+// In general, unmarshaling follows merge semantics (similar to RFC 7396)
+// where the decoded Go value replaces the destination value
+// for any JSON kind other than an object.
+// For JSON objects, the input object is merged into the destination value
+// where matching object members recursively apply merge semantics.
+func Unmarshal(in []byte, out any, opts ...Options) (err error) {
+ return json.Unmarshal(in, out, opts...)
+}
+
+// UnmarshalRead deserializes a Go value from an [io.Reader] according to the
+// provided unmarshal and decode options (while ignoring marshal or encode options).
+// The input must be a single JSON value with optional whitespace interspersed.
+// It consumes the entirety of [io.Reader] until [io.EOF] is encountered,
+// without reporting an error for EOF. The output must be a non-nil pointer.
+// See [Unmarshal] for details about the conversion of JSON into a Go value.
+func UnmarshalRead(in io.Reader, out any, opts ...Options) (err error) {
+ return json.UnmarshalRead(in, out, opts...)
+}
+
+// UnmarshalDecode deserializes a Go value from a [jsontext.Decoder] according to
+// the provided unmarshal options (while ignoring marshal, encode, or decode options).
+// Any unmarshal-relevant options already specified on the [jsontext.Decoder]
+// take lower precedence than the set of options provided by the caller.
+// Unlike [Unmarshal] and [UnmarshalRead], decode options are ignored because
+// they must have already been specified on the provided [jsontext.Decoder].
+//
+// The input may be a stream of one or more JSON values,
+// where this unmarshals only the next JSON value in the stream.
+// The output must be a non-nil pointer.
+// See [Unmarshal] for details about the conversion of JSON into a Go value.
+func UnmarshalDecode(in *jsontext.Decoder, out any, opts ...Options) (err error) {
+ return json.UnmarshalDecode(in, out, opts...)
+}
+
+// SkipFunc may be returned by [MarshalToFunc] and [UnmarshalFromFunc] functions.
+//
+// Any function that returns SkipFunc must not cause observable side effects
+// on the provided [jsontext.Encoder] or [jsontext.Decoder].
+// For example, it is permissible to call [jsontext.Decoder.PeekKind],
+// but not permissible to call [jsontext.Decoder.ReadToken] or
+// [jsontext.Encoder.WriteToken] since such methods mutate the state.
+var SkipFunc = json.SkipFunc
+
+// Marshalers is a list of functions that may override the marshal behavior
+// of specific types. Populate [WithMarshalers] to use it with
+// [Marshal], [MarshalWrite], or [MarshalEncode].
+// A nil *Marshalers is equivalent to an empty list.
+// There are no exported fields or methods on Marshalers.
+type Marshalers = json.Marshalers
+
+// JoinMarshalers constructs a flattened list of marshal functions.
+// If multiple functions in the list are applicable for a value of a given type,
+// then those earlier in the list take precedence over those that come later.
+// If a function returns [SkipFunc], then the next applicable function is called,
+// otherwise the default marshaling behavior is used.
+//
+// For example:
+//
+// m1 := JoinMarshalers(f1, f2)
+// m2 := JoinMarshalers(f0, m1, f3) // equivalent to m3
+// m3 := JoinMarshalers(f0, f1, f2, f3) // equivalent to m2
+func JoinMarshalers(ms ...*Marshalers) *Marshalers {
+ return json.JoinMarshalers(ms...)
+}
+
+// Unmarshalers is a list of functions that may override the unmarshal behavior
+// of specific types. Populate [WithUnmarshalers] to use it with
+// [Unmarshal], [UnmarshalRead], or [UnmarshalDecode].
+// A nil *Unmarshalers is equivalent to an empty list.
+// There are no exported fields or methods on Unmarshalers.
+type Unmarshalers = json.Unmarshalers
+
+// JoinUnmarshalers constructs a flattened list of unmarshal functions.
+// If multiple functions in the list are applicable for a value of a given type,
+// then those earlier in the list take precedence over those that come later.
+// If a function returns [SkipFunc], then the next applicable function is called,
+// otherwise the default unmarshaling behavior is used.
+//
+// For example:
+//
+// u1 := JoinUnmarshalers(f1, f2)
+// u2 := JoinUnmarshalers(f0, u1, f3) // equivalent to u3
+// u3 := JoinUnmarshalers(f0, f1, f2, f3) // equivalent to u2
+func JoinUnmarshalers(us ...*Unmarshalers) *Unmarshalers {
+ return json.JoinUnmarshalers(us...)
+}
+
+// MarshalFunc constructs a type-specific marshaler that
+// specifies how to marshal values of type T.
+// T can be any type except a named pointer.
+// The function is always provided with a non-nil pointer value
+// if T is an interface or pointer type.
+//
+// The function must marshal exactly one JSON value.
+// The value of T must not be retained outside the function call.
+// It may not return [SkipFunc].
+func MarshalFunc[T any](fn func(T) ([]byte, error)) *Marshalers {
+ return json.MarshalFunc[T](fn)
+}
+
+// MarshalToFunc constructs a type-specific marshaler that
+// specifies how to marshal values of type T.
+// T can be any type except a named pointer.
+// The function is always provided with a non-nil pointer value
+// if T is an interface or pointer type.
+//
+// The function must marshal exactly one JSON value by calling write methods
+// on the provided encoder. It may return [SkipFunc] such that marshaling can
+// move on to the next marshal function. However, no mutable method calls may
+// be called on the encoder if [SkipFunc] is returned.
+// The pointer to [jsontext.Encoder] and the value of T
+// must not be retained outside the function call.
+func MarshalToFunc[T any](fn func(*jsontext.Encoder, T) error) *Marshalers {
+ return json.MarshalToFunc[T](fn)
+}
+
+// UnmarshalFunc constructs a type-specific unmarshaler that
+// specifies how to unmarshal values of type T.
+// T must be an unnamed pointer or an interface type.
+// The function is always provided with a non-nil pointer value.
+//
+// The function must unmarshal exactly one JSON value.
+// The input []byte must not be mutated.
+// The input []byte and value T must not be retained outside the function call.
+// It may not return [SkipFunc].
+func UnmarshalFunc[T any](fn func([]byte, T) error) *Unmarshalers {
+ return json.UnmarshalFunc[T](fn)
+}
+
+// UnmarshalFromFunc constructs a type-specific unmarshaler that
+// specifies how to unmarshal values of type T.
+// T must be an unnamed pointer or an interface type.
+// The function is always provided with a non-nil pointer value.
+//
+// The function must unmarshal exactly one JSON value by calling read methods
+// on the provided decoder. It may return [SkipFunc] such that unmarshaling can
+// move on to the next unmarshal function. However, no mutable method calls may
+// be called on the decoder if [SkipFunc] is returned.
+// The pointer to [jsontext.Decoder] and the value of T
+// must not be retained outside the function call.
+func UnmarshalFromFunc[T any](fn func(*jsontext.Decoder, T) error) *Unmarshalers {
+ return json.UnmarshalFromFunc[T](fn)
+}
+
+// Marshaler is implemented by types that can marshal themselves.
+// It is recommended that types implement [MarshalerTo] unless the implementation
+// is trying to avoid a hard dependency on the "jsontext" package.
+//
+// It is recommended that implementations return a buffer that is safe
+// for the caller to retain and potentially mutate.
+type Marshaler = json.Marshaler
+
+// MarshalerTo is implemented by types that can marshal themselves.
+// It is recommended that types implement MarshalerTo instead of [Marshaler]
+// since this is both more performant and flexible.
+// If a type implements both Marshaler and MarshalerTo,
+// then MarshalerTo takes precedence. In such a case, both implementations
+// should aim to have equivalent behavior for the default marshal options.
+//
+// The implementation must write only one JSON value to the Encoder and
+// must not retain the pointer to [jsontext.Encoder].
+type MarshalerTo = json.MarshalerTo
+
+// Unmarshaler is implemented by types that can unmarshal themselves.
+// It is recommended that types implement [UnmarshalerFrom] unless the implementation
+// is trying to avoid a hard dependency on the "jsontext" package.
+//
+// The input can be assumed to be a valid encoding of a JSON value
+// if called from unmarshal functionality in this package.
+// UnmarshalJSON must copy the JSON data if it is retained after returning.
+// It is recommended that UnmarshalJSON implement merge semantics when
+// unmarshaling into a pre-populated value.
+//
+// Implementations must not retain or mutate the input []byte.
+type Unmarshaler = json.Unmarshaler
+
+// UnmarshalerFrom is implemented by types that can unmarshal themselves.
+// It is recommended that types implement UnmarshalerFrom instead of [Unmarshaler]
+// since this is both more performant and flexible.
+// If a type implements both Unmarshaler and UnmarshalerFrom,
+// then UnmarshalerFrom takes precedence. In such a case, both implementations
+// should aim to have equivalent behavior for the default unmarshal options.
+//
+// The implementation must read only one JSON value from the Decoder.
+// It is recommended that UnmarshalJSONFrom implement merge semantics when
+// unmarshaling into a pre-populated value.
+//
+// Implementations must not retain the pointer to [jsontext.Decoder].
+type UnmarshalerFrom = json.UnmarshalerFrom
+
+// ErrUnknownName indicates that a JSON object member could not be
+// unmarshaled because the name is not known to the target Go struct.
+// This error is directly wrapped within a [SemanticError] when produced.
+//
+// The name of an unknown JSON object member can be extracted as:
+//
+// err := ...
+// var serr json.SemanticError
+// if errors.As(err, &serr) && serr.Err == json.ErrUnknownName {
+// ptr := serr.JSONPointer // JSON pointer to unknown name
+// name := ptr.LastToken() // unknown name itself
+// ...
+// }
+//
+// This error is only returned if [RejectUnknownMembers] is true.
+var ErrUnknownName = json.ErrUnknownName
+
+// SemanticError describes an error determining the meaning
+// of JSON data as Go data or vice-versa.
+//
+// The contents of this error as produced by this package may change over time.
+type SemanticError = json.SemanticError
+
+// Options configure [Marshal], [MarshalWrite], [MarshalEncode],
+// [Unmarshal], [UnmarshalRead], and [UnmarshalDecode] with specific features.
+// Each function takes in a variadic list of options, where properties
+// set in later options override the value of previously set properties.
+//
+// The Options type is identical to [encoding/json.Options] and
+// [encoding/json/jsontext.Options]. Options from the other packages can
+// be used interchangeably with functionality in this package.
+//
+// Options represent either a singular option or a set of options.
+// It can be functionally thought of as a Go map of option properties
+// (even though the underlying implementation avoids Go maps for performance).
+//
+// The constructors (e.g., [Deterministic]) return a singular option value:
+//
+// opt := Deterministic(true)
+//
+// which is analogous to creating a single entry map:
+//
+// opt := Options{"Deterministic": true}
+//
+// [JoinOptions] composes multiple option values together:
+//
+// out := JoinOptions(opts...)
+//
+// which is analogous to making a new map and copying the options over:
+//
+// out := make(Options)
+// for _, m := range opts {
+// for k, v := range m {
+// out[k] = v
+// }
+// }
+//
+// [GetOption] looks up the value of a particular option:
+//
+// v, ok := GetOption(opts, Deterministic)
+//
+// which is analogous to a Go map lookup:
+//
+// v, ok := Options["Deterministic"]
+//
+// There is a single Options type, which is used with both marshal and unmarshal.
+// Some options affect both operations, while others only affect one operation:
+//
+// - [StringifyNumbers] affects marshaling and unmarshaling
+// - [Deterministic] affects marshaling only
+// - [FormatNilSliceAsNull] affects marshaling only
+// - [FormatNilMapAsNull] affects marshaling only
+// - [OmitZeroStructFields] affects marshaling only
+// - [MatchCaseInsensitiveNames] affects marshaling and unmarshaling
+// - [DiscardUnknownMembers] affects marshaling only
+// - [RejectUnknownMembers] affects unmarshaling only
+// - [WithMarshalers] affects marshaling only
+// - [WithUnmarshalers] affects unmarshaling only
+//
+// Options that do not affect a particular operation are ignored.
+type Options = json.Options
+
+// JoinOptions coalesces the provided list of options into a single Options.
+// Properties set in later options override the value of previously set properties.
+func JoinOptions(srcs ...Options) Options {
+ return json.JoinOptions(srcs...)
+}
+
+// GetOption returns the value stored in opts with the provided setter,
+// reporting whether the value is present.
+//
+// Example usage:
+//
+// v, ok := json.GetOption(opts, json.Deterministic)
+//
+// Options are most commonly introspected to alter the JSON representation of
+// [MarshalerTo.MarshalJSONTo] and [UnmarshalerFrom.UnmarshalJSONFrom] methods, and
+// [MarshalToFunc] and [UnmarshalFromFunc] functions.
+// In such cases, the presence bit should generally be ignored.
+func GetOption[T any](opts Options, setter func(T) Options) (T, bool) {
+ return json.GetOption[T](opts, setter)
+}
+
+// DefaultOptionsV2 is the full set of all options that define v2 semantics.
+// It is equivalent to all options under [Options], [encoding/json.Options],
+// and [encoding/json/jsontext.Options] being set to false or the zero value,
+// except for the options related to whitespace formatting.
+func DefaultOptionsV2() Options {
+ return json.DefaultOptionsV2()
+}
+
+// StringifyNumbers specifies that numeric Go types should be marshaled
+// as a JSON string containing the equivalent JSON number value.
+// When unmarshaling, numeric Go types are parsed from a JSON string
+// containing the JSON number without any surrounding whitespace.
+//
+// According to RFC 8259, section 6, a JSON implementation may choose to
+// limit the representation of a JSON number to an IEEE 754 binary64 value.
+// This may cause decoders to lose precision for int64 and uint64 types.
+// Quoting JSON numbers as a JSON string preserves the exact precision.
+//
+// This affects both marshaling and unmarshaling.
+func StringifyNumbers(v bool) Options {
+ return json.StringifyNumbers(v)
+}
+
+// Deterministic specifies that the same input value will be serialized
+// as the exact same output bytes. Different processes of
+// the same program will serialize equal values to the same bytes,
+// but different versions of the same program are not guaranteed
+// to produce the exact same sequence of bytes.
+//
+// This only affects marshaling and is ignored when unmarshaling.
+func Deterministic(v bool) Options {
+ return json.Deterministic(v)
+}
+
+// FormatNilSliceAsNull specifies that a nil Go slice should marshal as a
+// JSON null instead of the default representation as an empty JSON array
+// (or an empty JSON string in the case of ~[]byte).
+// Slice fields explicitly marked with `format:emitempty` still marshal
+// as an empty JSON array.
+//
+// This only affects marshaling and is ignored when unmarshaling.
+func FormatNilSliceAsNull(v bool) Options {
+ return json.FormatNilSliceAsNull(v)
+}
+
+// FormatNilMapAsNull specifies that a nil Go map should marshal as a
+// JSON null instead of the default representation as an empty JSON object.
+// Map fields explicitly marked with `format:emitempty` still marshal
+// as an empty JSON object.
+//
+// This only affects marshaling and is ignored when unmarshaling.
+func FormatNilMapAsNull(v bool) Options {
+ return json.FormatNilMapAsNull(v)
+}
+
+// OmitZeroStructFields specifies that a Go struct should marshal in such a way
+// that all zero struct fields are omitted from the marshaled output,
+// where zero is determined by the "IsZero() bool" method if present,
+// or otherwise by whether the field is the zero Go value.
+// This is semantically equivalent to specifying the `omitzero` tag option
+// on every field in a Go struct.
+//
+// This only affects marshaling and is ignored when unmarshaling.
+func OmitZeroStructFields(v bool) Options {
+ return json.OmitZeroStructFields(v)
+}
+
+// MatchCaseInsensitiveNames specifies that JSON object members are matched
+// against Go struct fields using a case-insensitive match of the name.
+// Go struct fields explicitly marked with `case:strict` or `case:ignore`
+// always use case-sensitive (or case-insensitive, respectively) name matching,
+// regardless of the value of this option.
+//
+// This affects both marshaling and unmarshaling.
+// For marshaling, this option may alter the detection of duplicate names
+// (assuming [jsontext.AllowDuplicateNames] is false) from inlined fields
+// if it matches one of the declared fields in the Go struct.
+func MatchCaseInsensitiveNames(v bool) Options {
+ return json.MatchCaseInsensitiveNames(v)
+}
+
+// DiscardUnknownMembers specifies that marshaling should ignore any
+// JSON object members stored in Go struct fields dedicated to storing
+// unknown JSON object members.
+//
+// This only affects marshaling and is ignored when unmarshaling.
+func DiscardUnknownMembers(v bool) Options {
+ return json.DiscardUnknownMembers(v)
+}
+
+// RejectUnknownMembers specifies that unknown members should be rejected
+// when unmarshaling a JSON object, regardless of whether there is a field
+// to store unknown members.
+//
+// This only affects unmarshaling and is ignored when marshaling.
+func RejectUnknownMembers(v bool) Options {
+ return json.RejectUnknownMembers(v)
+}
+
+// WithMarshalers specifies a list of type-specific marshalers to use,
+// which can be used to override the default marshal behavior for values
+// of particular types.
+//
+// This only affects marshaling and is ignored when unmarshaling.
+func WithMarshalers(v *Marshalers) Options {
+ return json.WithMarshalers(v)
+}
+
+// WithUnmarshalers specifies a list of type-specific unmarshalers to use,
+// which can be used to override the default unmarshal behavior for values
+// of particular types.
+//
+// This only affects unmarshaling and is ignored when marshaling.
+func WithUnmarshalers(v *Unmarshalers) Options {
+ return json.WithUnmarshalers(v)
+}
diff --git a/internal/json/arshal.go b/internal/json/arshal.go
new file mode 100644
index 0000000000..56fe8d882a
--- /dev/null
+++ b/internal/json/arshal.go
@@ -0,0 +1,581 @@
+// Copyright 2020 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+//go:build !goexperiment.jsonv2 || !go1.25
+
+package json
+
+import (
+ "bytes"
+ "encoding"
+ "io"
+ "reflect"
+ "slices"
+ "strings"
+ "sync"
+ "time"
+
+ "github.com/quay/clair/v4/internal/json/internal"
+ "github.com/quay/clair/v4/internal/json/internal/jsonflags"
+ "github.com/quay/clair/v4/internal/json/internal/jsonopts"
+ "github.com/quay/clair/v4/internal/json/jsontext"
+)
+
+// Reference encoding and time packages to assist pkgsite
+// in being able to hotlink references to those packages.
+var (
+ _ encoding.TextMarshaler
+ _ encoding.TextAppender
+ _ encoding.TextUnmarshaler
+ _ time.Time
+ _ time.Duration
+)
+
+// export exposes internal functionality of the "jsontext" package.
+var export = jsontext.Internal.Export(&internal.AllowInternalUse)
+
+// Marshal serializes a Go value as a []byte according to the provided
+// marshal and encode options (while ignoring unmarshal or decode options).
+// It does not terminate the output with a newline.
+//
+// Type-specific marshal functions and methods take precedence
+// over the default representation of a value.
+// Functions or methods that operate on *T are only called when encoding
+// a value of type T (by taking its address) or a non-nil value of *T.
+// Marshal ensures that a value is always addressable
+// (by boxing it on the heap if necessary) so that
+// these functions and methods can be consistently called. For performance,
+// it is recommended that Marshal be passed a non-nil pointer to the value.
+//
+// The input value is encoded as JSON according to the following rules:
+//
+// - If any type-specific functions in a [WithMarshalers] option match
+// the value type, then those functions are called to encode the value.
+// If all applicable functions return [SkipFunc],
+// then the value is encoded according to subsequent rules.
+//
+// - If the value type implements [MarshalerTo],
+// then the MarshalJSONTo method is called to encode the value.
+//
+// - If the value type implements [Marshaler],
+// then the MarshalJSON method is called to encode the value.
+//
+// - If the value type implements [encoding.TextAppender],
+// then the AppendText method is called to encode the value and
+// subsequently encode its result as a JSON string.
+//
+// - If the value type implements [encoding.TextMarshaler],
+// then the MarshalText method is called to encode the value and
+// subsequently encode its result as a JSON string.
+//
+// - Otherwise, the value is encoded according to the value's type
+// as described in detail below.
+//
+// Most Go types have a default JSON representation.
+// Certain types support specialized formatting according to
+// a format flag optionally specified in the Go struct tag
+// for the struct field that contains the current value
+// (see the “JSON Representation of Go structs” section for more details).
+//
+// The representation of each type is as follows:
+//
+// - A Go boolean is encoded as a JSON boolean (e.g., true or false).
+// It does not support any custom format flags.
+//
+// - A Go string is encoded as a JSON string.
+// It does not support any custom format flags.
+//
+// - A Go []byte or [N]byte is encoded as a JSON string containing
+// the binary value encoded using RFC 4648.
+// If the format is "base64" or unspecified, then this uses RFC 4648, section 4.
+// If the format is "base64url", then this uses RFC 4648, section 5.
+// If the format is "base32", then this uses RFC 4648, section 6.
+// If the format is "base32hex", then this uses RFC 4648, section 7.
+// If the format is "base16" or "hex", then this uses RFC 4648, section 8.
+// If the format is "array", then the bytes value is encoded as a JSON array
+// where each byte is recursively JSON-encoded as each JSON array element.
+//
+// - A Go integer is encoded as a JSON number without fractions or exponents.
+// If [StringifyNumbers] is specified or encoding a JSON object name,
+// then the JSON number is encoded within a JSON string.
+// It does not support any custom format flags.
+//
+// - A Go float is encoded as a JSON number.
+// If [StringifyNumbers] is specified or encoding a JSON object name,
+// then the JSON number is encoded within a JSON string.
+// If the format is "nonfinite", then NaN, +Inf, and -Inf are encoded as
+// the JSON strings "NaN", "Infinity", and "-Infinity", respectively.
+// Otherwise, the presence of non-finite numbers results in a [SemanticError].
+//
+// - A Go map is encoded as a JSON object, where each Go map key and value
+// is recursively encoded as a name and value pair in the JSON object.
+// The Go map key must encode as a JSON string, otherwise this results
+// in a [SemanticError]. The Go map is traversed in a non-deterministic order.
+// For deterministic encoding, consider using the [Deterministic] option.
+// If the format is "emitnull", then a nil map is encoded as a JSON null.
+// If the format is "emitempty", then a nil map is encoded as an empty JSON object,
+// regardless of whether [FormatNilMapAsNull] is specified.
+// Otherwise by default, a nil map is encoded as an empty JSON object.
+//
+// - A Go struct is encoded as a JSON object.
+// See the “JSON Representation of Go structs” section
+// in the package-level documentation for more details.
+//
+// - A Go slice is encoded as a JSON array, where each Go slice element
+// is recursively JSON-encoded as the elements of the JSON array.
+// If the format is "emitnull", then a nil slice is encoded as a JSON null.
+// If the format is "emitempty", then a nil slice is encoded as an empty JSON array,
+// regardless of whether [FormatNilSliceAsNull] is specified.
+// Otherwise by default, a nil slice is encoded as an empty JSON array.
+//
+// - A Go array is encoded as a JSON array, where each Go array element
+// is recursively JSON-encoded as the elements of the JSON array.
+// The JSON array length is always identical to the Go array length.
+// It does not support any custom format flags.
+//
+// - A Go pointer is encoded as a JSON null if nil, otherwise it is
+// the recursively JSON-encoded representation of the underlying value.
+// Format flags are forwarded to the encoding of the underlying value.
+//
+// - A Go interface is encoded as a JSON null if nil, otherwise it is
+// the recursively JSON-encoded representation of the underlying value.
+// It does not support any custom format flags.
+//
+// - A Go [time.Time] is encoded as a JSON string containing the timestamp
+// formatted in RFC 3339 with nanosecond precision.
+// If the format matches one of the format constants declared
+// in the time package (e.g., RFC1123), then that format is used.
+// If the format is "unix", "unixmilli", "unixmicro", or "unixnano",
+// then the timestamp is encoded as a possibly fractional JSON number
+// of the number of seconds (or milliseconds, microseconds, or nanoseconds)
+// since the Unix epoch, which is January 1st, 1970 at 00:00:00 UTC.
+// To avoid a fractional component, round the timestamp to the relevant unit.
+// Otherwise, the format is used as-is with [time.Time.Format] if non-empty.
+//
+// - A Go [time.Duration] currently has no default representation and
+// requires an explicit format to be specified.
+// If the format is "sec", "milli", "micro", or "nano",
+// then the duration is encoded as a possibly fractional JSON number
+// of the number of seconds (or milliseconds, microseconds, or nanoseconds).
+// To avoid a fractional component, round the duration to the relevant unit.
+// If the format is "units", it is encoded as a JSON string formatted using
+// [time.Duration.String] (e.g., "1h30m" for 1 hour 30 minutes).
+// If the format is "iso8601", it is encoded as a JSON string using the
+// ISO 8601 standard for durations (e.g., "PT1H30M" for 1 hour 30 minutes)
+// using only accurate units of hours, minutes, and seconds.
+//
+// - All other Go types (e.g., complex numbers, channels, and functions)
+// have no default representation and result in a [SemanticError].
+//
+// JSON cannot represent cyclic data structures and Marshal does not handle them.
+// Passing cyclic structures will result in an error.
+func Marshal(in any, opts ...Options) (out []byte, err error) {
+ enc := export.GetBufferedEncoder(opts...)
+ defer export.PutBufferedEncoder(enc)
+ xe := export.Encoder(enc)
+ xe.Flags.Set(jsonflags.OmitTopLevelNewline | 1)
+ err = marshalEncode(enc, in, &xe.Struct)
+ if err != nil && xe.Flags.Get(jsonflags.ReportErrorsWithLegacySemantics) {
+ return nil, internal.TransformMarshalError(in, err)
+ }
+ return bytes.Clone(xe.Buf), err
+}
+
+// MarshalWrite serializes a Go value into an [io.Writer] according to the provided
+// marshal and encode options (while ignoring unmarshal or decode options).
+// It does not terminate the output with a newline.
+// See [Marshal] for details about the conversion of a Go value into JSON.
+func MarshalWrite(out io.Writer, in any, opts ...Options) (err error) {
+ enc := export.GetStreamingEncoder(out, opts...)
+ defer export.PutStreamingEncoder(enc)
+ xe := export.Encoder(enc)
+ xe.Flags.Set(jsonflags.OmitTopLevelNewline | 1)
+ err = marshalEncode(enc, in, &xe.Struct)
+ if err != nil && xe.Flags.Get(jsonflags.ReportErrorsWithLegacySemantics) {
+ return internal.TransformMarshalError(in, err)
+ }
+ return err
+}
+
+// MarshalEncode serializes a Go value into a [jsontext.Encoder] according to
+// the provided marshal options (while ignoring unmarshal, encode, or decode options).
+// Any marshal-relevant options already specified on the [jsontext.Encoder]
+// take lower precedence than the set of options provided by the caller.
+// Unlike [Marshal] and [MarshalWrite], encode options are ignored because
+// they must have already been specified on the provided [jsontext.Encoder].
+//
+// See [Marshal] for details about the conversion of a Go value into JSON.
+func MarshalEncode(out *jsontext.Encoder, in any, opts ...Options) (err error) {
+ xe := export.Encoder(out)
+ if len(opts) > 0 {
+ optsOriginal := xe.Struct
+ defer func() { xe.Struct = optsOriginal }()
+ xe.Struct.JoinWithoutCoderOptions(opts...)
+ }
+ err = marshalEncode(out, in, &xe.Struct)
+ if err != nil && xe.Flags.Get(jsonflags.ReportErrorsWithLegacySemantics) {
+ return internal.TransformMarshalError(in, err)
+ }
+ return err
+}
+
+func marshalEncode(out *jsontext.Encoder, in any, mo *jsonopts.Struct) (err error) {
+ v := reflect.ValueOf(in)
+ if !v.IsValid() || (v.Kind() == reflect.Pointer && v.IsNil()) {
+ return out.WriteToken(jsontext.Null)
+ }
+ // Shallow copy non-pointer values to obtain an addressable value.
+ // It is beneficial to performance to always pass pointers to avoid this.
+ forceAddr := v.Kind() != reflect.Pointer
+ if forceAddr {
+ v2 := reflect.New(v.Type())
+ v2.Elem().Set(v)
+ v = v2
+ }
+ va := addressableValue{v.Elem(), forceAddr} // dereferenced pointer is always addressable
+ t := va.Type()
+
+ // Lookup and call the marshal function for this type.
+ marshal := lookupArshaler(t).marshal
+ if mo.Marshalers != nil {
+ marshal, _ = mo.Marshalers.(*Marshalers).lookup(marshal, t)
+ }
+ if err := marshal(out, va, mo); err != nil {
+ if !mo.Flags.Get(jsonflags.AllowDuplicateNames) {
+ export.Encoder(out).Tokens.InvalidateDisabledNamespaces()
+ }
+ return err
+ }
+ return nil
+}
+
+// Unmarshal decodes a []byte input into a Go value according to the provided
+// unmarshal and decode options (while ignoring marshal or encode options).
+// The input must be a single JSON value with optional whitespace interspersed.
+// The output must be a non-nil pointer.
+//
+// Type-specific unmarshal functions and methods take precedence
+// over the default representation of a value.
+// Functions or methods that operate on *T are only called when decoding
+// a value of type T (by taking its address) or a non-nil value of *T.
+// Unmarshal ensures that a value is always addressable
+// (by boxing it on the heap if necessary) so that
+// these functions and methods can be consistently called.
+//
+// The input is decoded into the output according to the following rules:
+//
+// - If any type-specific functions in a [WithUnmarshalers] option match
+// the value type, then those functions are called to decode the JSON
+// value. If all applicable functions return [SkipFunc],
+// then the input is decoded according to subsequent rules.
+//
+// - If the value type implements [UnmarshalerFrom],
+// then the UnmarshalJSONFrom method is called to decode the JSON value.
+//
+// - If the value type implements [Unmarshaler],
+// then the UnmarshalJSON method is called to decode the JSON value.
+//
+// - If the value type implements [encoding.TextUnmarshaler],
+// then the input is decoded as a JSON string and
+// the UnmarshalText method is called with the decoded string value.
+// This fails with a [SemanticError] if the input is not a JSON string.
+//
+// - Otherwise, the JSON value is decoded according to the value's type
+// as described in detail below.
+//
+// Most Go types have a default JSON representation.
+// Certain types support specialized formatting according to
+// a format flag optionally specified in the Go struct tag
+// for the struct field that contains the current value
+// (see the “JSON Representation of Go structs” section for more details).
+// A JSON null may be decoded into every supported Go value where
+// it is equivalent to storing the zero value of the Go value.
+// If the input JSON kind is not handled by the current Go value type,
+// then this fails with a [SemanticError]. Unless otherwise specified,
+// the decoded value replaces any pre-existing value.
+//
+// The representation of each type is as follows:
+//
+// - A Go boolean is decoded from a JSON boolean (e.g., true or false).
+// It does not support any custom format flags.
+//
+// - A Go string is decoded from a JSON string.
+// It does not support any custom format flags.
+//
+// - A Go []byte or [N]byte is decoded from a JSON string
+// containing the binary value encoded using RFC 4648.
+// If the format is "base64" or unspecified, then this uses RFC 4648, section 4.
+// If the format is "base64url", then this uses RFC 4648, section 5.
+// If the format is "base32", then this uses RFC 4648, section 6.
+// If the format is "base32hex", then this uses RFC 4648, section 7.
+// If the format is "base16" or "hex", then this uses RFC 4648, section 8.
+// If the format is "array", then the Go slice or array is decoded from a
+// JSON array where each JSON element is recursively decoded for each byte.
+// When decoding into a non-nil []byte, the slice length is reset to zero
+// and the decoded input is appended to it.
+// When decoding into a [N]byte, the input must decode to exactly N bytes,
+// otherwise it fails with a [SemanticError].
+//
+// - A Go integer is decoded from a JSON number.
+// It must be decoded from a JSON string containing a JSON number
+// if [StringifyNumbers] is specified or decoding a JSON object name.
+// It fails with a [SemanticError] if the JSON number
+// has a fractional or exponent component.
+// It also fails if it overflows the representation of the Go integer type.
+// It does not support any custom format flags.
+//
+// - A Go float is decoded from a JSON number.
+// It must be decoded from a JSON string containing a JSON number
+// if [StringifyNumbers] is specified or decoding a JSON object name.
+// It fails if it overflows the representation of the Go float type.
+// If the format is "nonfinite", then the JSON strings
+// "NaN", "Infinity", and "-Infinity" are decoded as NaN, +Inf, and -Inf.
+// Otherwise, the presence of such strings results in a [SemanticError].
+//
+// - A Go map is decoded from a JSON object,
+// where each JSON object name and value pair is recursively decoded
+// as the Go map key and value. Maps are not cleared.
+// If the Go map is nil, then a new map is allocated to decode into.
+// If the decoded key matches an existing Go map entry, the entry value
+// is reused by decoding the JSON object value into it.
+// The formats "emitnull" and "emitempty" have no effect when decoding.
+//
+// - A Go struct is decoded from a JSON object.
+// See the “JSON Representation of Go structs” section
+// in the package-level documentation for more details.
+//
+// - A Go slice is decoded from a JSON array, where each JSON element
+// is recursively decoded and appended to the Go slice.
+// Before appending into a Go slice, a new slice is allocated if it is nil,
+// otherwise the slice length is reset to zero.
+// The formats "emitnull" and "emitempty" have no effect when decoding.
+//
+// - A Go array is decoded from a JSON array, where each JSON array element
+// is recursively decoded as each corresponding Go array element.
+// Each Go array element is zeroed before decoding into it.
+// It fails with a [SemanticError] if the JSON array does not contain
+// the exact same number of elements as the Go array.
+// It does not support any custom format flags.
+//
+// - A Go pointer is decoded based on the JSON kind and underlying Go type.
+// If the input is a JSON null, then this stores a nil pointer.
+// Otherwise, it allocates a new underlying value if the pointer is nil,
+// and recursively JSON decodes into the underlying value.
+// Format flags are forwarded to the decoding of the underlying type.
+//
+// - A Go interface is decoded based on the JSON kind and underlying Go type.
+// If the input is a JSON null, then this stores a nil interface value.
+// Otherwise, a nil interface value of an empty interface type is initialized
+// with a zero Go bool, string, float64, map[string]any, or []any if the
+// input is a JSON boolean, string, number, object, or array, respectively.
+// If the interface value is still nil, then this fails with a [SemanticError]
+// since decoding could not determine an appropriate Go type to decode into.
+// For example, unmarshaling into a nil io.Reader fails since
+// there is no concrete type to populate the interface value with.
+// Otherwise an underlying value exists and it recursively decodes
+// the JSON input into it. It does not support any custom format flags.
+//
+// - A Go [time.Time] is decoded from a JSON string containing the time
+// formatted in RFC 3339 with nanosecond precision.
+// If the format matches one of the format constants declared in
+// the time package (e.g., RFC1123), then that format is used for parsing.
+// If the format is "unix", "unixmilli", "unixmicro", or "unixnano",
+// then the timestamp is decoded from an optionally fractional JSON number
+// of the number of seconds (or milliseconds, microseconds, or nanoseconds)
+// since the Unix epoch, which is January 1st, 1970 at 00:00:00 UTC.
+// Otherwise, the format is used as-is with [time.Time.Parse] if non-empty.
+//
+// - A Go [time.Duration] currently has no default representation and
+// requires an explicit format to be specified.
+// If the format is "sec", "milli", "micro", or "nano",
+// then the duration is decoded from an optionally fractional JSON number
+// of the number of seconds (or milliseconds, microseconds, or nanoseconds).
+// If the format is "units", it is decoded from a JSON string parsed using
+// [time.ParseDuration] (e.g., "1h30m" for 1 hour 30 minutes).
+// If the format is "iso8601", it is decoded from a JSON string using the
+// ISO 8601 standard for durations (e.g., "PT1H30M" for 1 hour 30 minutes)
+// accepting only accurate units of hours, minutes, or seconds.
+//
+// - All other Go types (e.g., complex numbers, channels, and functions)
+// have no default representation and result in a [SemanticError].
+//
+// In general, unmarshaling follows merge semantics (similar to RFC 7396)
+// where the decoded Go value replaces the destination value
+// for any JSON kind other than an object.
+// For JSON objects, the input object is merged into the destination value
+// where matching object members recursively apply merge semantics.
+func Unmarshal(in []byte, out any, opts ...Options) (err error) {
+ dec := export.GetBufferedDecoder(in, opts...)
+ defer export.PutBufferedDecoder(dec)
+ xd := export.Decoder(dec)
+ err = unmarshalFull(dec, out, &xd.Struct)
+ if err != nil && xd.Flags.Get(jsonflags.ReportErrorsWithLegacySemantics) {
+ return internal.TransformUnmarshalError(out, err)
+ }
+ return err
+}
+
+// UnmarshalRead deserializes a Go value from an [io.Reader] according to the
+// provided unmarshal and decode options (while ignoring marshal or encode options).
+// The input must be a single JSON value with optional whitespace interspersed.
+// It consumes the entirety of [io.Reader] until [io.EOF] is encountered,
+// without reporting an error for EOF. The output must be a non-nil pointer.
+// See [Unmarshal] for details about the conversion of JSON into a Go value.
+func UnmarshalRead(in io.Reader, out any, opts ...Options) (err error) {
+ dec := export.GetStreamingDecoder(in, opts...)
+ defer export.PutStreamingDecoder(dec)
+ xd := export.Decoder(dec)
+ err = unmarshalFull(dec, out, &xd.Struct)
+ if err != nil && xd.Flags.Get(jsonflags.ReportErrorsWithLegacySemantics) {
+ return internal.TransformUnmarshalError(out, err)
+ }
+ return err
+}
+
+func unmarshalFull(in *jsontext.Decoder, out any, uo *jsonopts.Struct) error {
+ switch err := unmarshalDecode(in, out, uo); err {
+ case nil:
+ return export.Decoder(in).CheckEOF()
+ case io.EOF:
+ offset := in.InputOffset() + int64(len(in.UnreadBuffer()))
+ return &jsontext.SyntacticError{ByteOffset: offset, Err: io.ErrUnexpectedEOF}
+ default:
+ return err
+ }
+}
+
+// UnmarshalDecode deserializes a Go value from a [jsontext.Decoder] according to
+// the provided unmarshal options (while ignoring marshal, encode, or decode options).
+// Any unmarshal options already specified on the [jsontext.Decoder]
+// take lower precedence than the set of options provided by the caller.
+// Unlike [Unmarshal] and [UnmarshalRead], decode options are ignored because
+// they must have already been specified on the provided [jsontext.Decoder].
+//
+// The input may be a stream of one or more JSON values,
+// where this only unmarshals the next JSON value in the stream.
+// The output must be a non-nil pointer.
+// See [Unmarshal] for details about the conversion of JSON into a Go value.
+func UnmarshalDecode(in *jsontext.Decoder, out any, opts ...Options) (err error) {
+ xd := export.Decoder(in)
+ if len(opts) > 0 {
+ optsOriginal := xd.Struct
+ defer func() { xd.Struct = optsOriginal }()
+ xd.Struct.JoinWithoutCoderOptions(opts...)
+ }
+ err = unmarshalDecode(in, out, &xd.Struct)
+ if err != nil && xd.Flags.Get(jsonflags.ReportErrorsWithLegacySemantics) {
+ return internal.TransformUnmarshalError(out, err)
+ }
+ return err
+}
+
+func unmarshalDecode(in *jsontext.Decoder, out any, uo *jsonopts.Struct) (err error) {
+ v := reflect.ValueOf(out)
+ if v.Kind() != reflect.Pointer || v.IsNil() {
+ return &SemanticError{action: "unmarshal", GoType: reflect.TypeOf(out), Err: internal.ErrNonNilReference}
+ }
+ va := addressableValue{v.Elem(), false} // dereferenced pointer is always addressable
+ t := va.Type()
+
+ // In legacy semantics, the entirety of the next JSON value
+ // was validated before attempting to unmarshal it.
+ if uo.Flags.Get(jsonflags.ReportErrorsWithLegacySemantics) {
+ if err := export.Decoder(in).CheckNextValue(); err != nil {
+ return err
+ }
+ }
+
+ // Lookup and call the unmarshal function for this type.
+ unmarshal := lookupArshaler(t).unmarshal
+ if uo.Unmarshalers != nil {
+ unmarshal, _ = uo.Unmarshalers.(*Unmarshalers).lookup(unmarshal, t)
+ }
+ if err := unmarshal(in, va, uo); err != nil {
+ if !uo.Flags.Get(jsonflags.AllowDuplicateNames) {
+ export.Decoder(in).Tokens.InvalidateDisabledNamespaces()
+ }
+ return err
+ }
+ return nil
+}
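The non-nil-pointer check at the top of unmarshalDecode mirrors v1 behavior: a nil or non-pointer output produces an error, never a panic. A standard-library sketch of the same contract:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// badTargets exercises the output-argument contract: nil and
// non-pointer targets fail, a valid pointer succeeds.
func badTargets() (nilErr, nonPtrErr, okErr error) {
	var n int
	nilErr = json.Unmarshal([]byte(`1`), nil)   // nil output
	nonPtrErr = json.Unmarshal([]byte(`1`), n)  // non-pointer output
	okErr = json.Unmarshal([]byte(`1`), &n)     // valid output
	return
}

func main() {
	a, b, c := badTargets()
	fmt.Println(a != nil, b != nil, c == nil) // true true true
}
```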
+
+// addressableValue is a reflect.Value that is guaranteed to be addressable
+// such that calling the Addr and Set methods does not panic.
+//
+// There is no compile-time magic that enforces this property;
+// rather, the need to construct this type makes it easier to examine each
+// construction site to ensure that the property is upheld.
+type addressableValue struct {
+ reflect.Value
+
+ // forcedAddr reports whether this value is addressable
+ // only through the use of [newAddressableValue].
+ // This is only used for [jsonflags.CallMethodsWithLegacySemantics].
+ forcedAddr bool
+}
+
+// newAddressableValue constructs a new addressable value of type t.
+func newAddressableValue(t reflect.Type) addressableValue {
+ return addressableValue{reflect.New(t).Elem(), true}
+}
+
+// TODO: Remove *jsonopts.Struct argument from [marshaler] and [unmarshaler].
+// This can be directly accessed on the encoder or decoder.
+
+// All marshal and unmarshal behavior is implemented using these signatures.
+// The *jsonopts.Struct argument is guaranteed to be identical to or at least
+// a strict super-set of the options in Encoder.Struct or Decoder.Struct.
+// It is identical for Marshal, Unmarshal, MarshalWrite, and UnmarshalRead.
+// It is a super-set for MarshalEncode and UnmarshalDecode.
+type (
+ marshaler = func(*jsontext.Encoder, addressableValue, *jsonopts.Struct) error
+ unmarshaler = func(*jsontext.Decoder, addressableValue, *jsonopts.Struct) error
+)
+
+type arshaler struct {
+ marshal marshaler
+ unmarshal unmarshaler
+ nonDefault bool
+}
+
+var lookupArshalerCache sync.Map // map[reflect.Type]*arshaler
+
+func lookupArshaler(t reflect.Type) *arshaler {
+ if v, ok := lookupArshalerCache.Load(t); ok {
+ return v.(*arshaler)
+ }
+
+ fncs := makeDefaultArshaler(t)
+ fncs = makeMethodArshaler(fncs, t)
+ fncs = makeTimeArshaler(fncs, t)
+
+ // Use the last value stored so that duplicate arshalers can be garbage collected.
+ v, _ := lookupArshalerCache.LoadOrStore(t, fncs)
+ return v.(*arshaler)
+}
+
+var stringsPools = &sync.Pool{New: func() any { return new(stringSlice) }}
+
+type stringSlice []string
+
+// getStrings returns a non-nil pointer to a slice with length n.
+func getStrings(n int) *stringSlice {
+ s := stringsPools.Get().(*stringSlice)
+ if cap(*s) < n {
+ *s = make([]string, n)
+ }
+ *s = (*s)[:n]
+ return s
+}
+
+func putStrings(s *stringSlice) {
+ if cap(*s) > 1<<10 {
+ *s = nil // avoid pinning arbitrarily large amounts of memory
+ }
+ stringsPools.Put(s)
+}
+
+func (ss *stringSlice) Sort() {
+ slices.SortFunc(*ss, func(x, y string) int { return strings.Compare(x, y) })
+}
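The pooled string-slice helpers above follow a standard sync.Pool pattern: reuse a slice when its capacity suffices, and drop oversized slices so the pool does not pin large allocations. A standalone sketch of the same pattern (independent of this package):

```go
package main

import (
	"fmt"
	"sync"
)

var bufPool = sync.Pool{New: func() any { return new([]string) }}

// getBuf returns a pooled slice resized to length n, reallocating
// only when the pooled capacity is insufficient.
func getBuf(n int) *[]string {
	s := bufPool.Get().(*[]string)
	if cap(*s) < n {
		*s = make([]string, n)
	}
	*s = (*s)[:n]
	return s
}

// putBuf returns a slice to the pool, discarding very large backing
// arrays to avoid pinning memory, as putStrings does above.
func putBuf(s *[]string) {
	if cap(*s) > 1<<10 {
		*s = nil
	}
	bufPool.Put(s)
}

func main() {
	s := getBuf(3)
	(*s)[0], (*s)[1], (*s)[2] = "a", "b", "c"
	fmt.Println(len(*s)) // 3
	putBuf(s)
}
```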
diff --git a/internal/json/arshal_any.go b/internal/json/arshal_any.go
new file mode 100644
index 0000000000..b5a1be4d81
--- /dev/null
+++ b/internal/json/arshal_any.go
@@ -0,0 +1,283 @@
+// Copyright 2022 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+//go:build !goexperiment.jsonv2 || !go1.25
+
+package json
+
+import (
+ "cmp"
+ "reflect"
+ "strconv"
+
+ "github.com/quay/clair/v4/internal/json/internal"
+ "github.com/quay/clair/v4/internal/json/internal/jsonflags"
+ "github.com/quay/clair/v4/internal/json/internal/jsonopts"
+ "github.com/quay/clair/v4/internal/json/internal/jsonwire"
+ "github.com/quay/clair/v4/internal/json/jsontext"
+)
+
+// This file contains an optimized marshal and unmarshal implementation
+// for the any type. This type is often used when the Go program has
+// no knowledge of the JSON schema. This is a common enough occurrence
+// to justify the complexity of adding logic for this.
+
+// marshalValueAny marshals a Go any as a JSON value.
+// This assumes that there are no special formatting directives
+// for any possible nested value.
+func marshalValueAny(enc *jsontext.Encoder, val any, mo *jsonopts.Struct) error {
+ switch val := val.(type) {
+ case nil:
+ return enc.WriteToken(jsontext.Null)
+ case bool:
+ return enc.WriteToken(jsontext.Bool(val))
+ case string:
+ return enc.WriteToken(jsontext.String(val))
+ case float64:
+ return enc.WriteToken(jsontext.Float(val))
+ case map[string]any:
+ return marshalObjectAny(enc, val, mo)
+ case []any:
+ return marshalArrayAny(enc, val, mo)
+ default:
+ v := newAddressableValue(reflect.TypeOf(val))
+ v.Set(reflect.ValueOf(val))
+ marshal := lookupArshaler(v.Type()).marshal
+ if mo.Marshalers != nil {
+ marshal, _ = mo.Marshalers.(*Marshalers).lookup(marshal, v.Type())
+ }
+ return marshal(enc, v, mo)
+ }
+}
+
+// unmarshalValueAny unmarshals a JSON value as a Go any.
+// This assumes that there are no special formatting directives
+// for any possible nested value.
+// Duplicate names must be rejected since this does not implement merging.
+func unmarshalValueAny(dec *jsontext.Decoder, uo *jsonopts.Struct) (any, error) {
+ switch k := dec.PeekKind(); k {
+ case '{':
+ return unmarshalObjectAny(dec, uo)
+ case '[':
+ return unmarshalArrayAny(dec, uo)
+ default:
+ xd := export.Decoder(dec)
+ var flags jsonwire.ValueFlags
+ val, err := xd.ReadValue(&flags)
+ if err != nil {
+ return nil, err
+ }
+ switch val.Kind() {
+ case 'n':
+ return nil, nil
+ case 'f':
+ return false, nil
+ case 't':
+ return true, nil
+ case '"':
+ val = jsonwire.UnquoteMayCopy(val, flags.IsVerbatim())
+ if xd.StringCache == nil {
+ xd.StringCache = new(stringCache)
+ }
+ return makeString(xd.StringCache, val), nil
+ case '0':
+ if uo.Flags.Get(jsonflags.UnmarshalAnyWithRawNumber) {
+ return internal.RawNumberOf(val), nil
+ }
+ fv, ok := jsonwire.ParseFloat(val, 64)
+ if !ok {
+ return fv, newUnmarshalErrorAfterWithValue(dec, float64Type, strconv.ErrRange)
+ }
+ return fv, nil
+ default:
+ panic("BUG: invalid kind: " + k.String())
+ }
+ }
+}
+
+// marshalObjectAny marshals a Go map[string]any as a JSON object
+// (or as a JSON null if nil and [jsonflags.FormatNilMapAsNull]).
+func marshalObjectAny(enc *jsontext.Encoder, obj map[string]any, mo *jsonopts.Struct) error {
+ // Check for cycles.
+ xe := export.Encoder(enc)
+ if xe.Tokens.Depth() > startDetectingCyclesAfter {
+ v := reflect.ValueOf(obj)
+ if err := visitPointer(&xe.SeenPointers, v); err != nil {
+ return newMarshalErrorBefore(enc, anyType, err)
+ }
+ defer leavePointer(&xe.SeenPointers, v)
+ }
+
+ // Handle empty maps.
+ if len(obj) == 0 {
+ if mo.Flags.Get(jsonflags.FormatNilMapAsNull) && obj == nil {
+ return enc.WriteToken(jsontext.Null)
+ }
+ // Optimize for marshaling an empty map without any preceding whitespace.
+ if !mo.Flags.Get(jsonflags.AnyWhitespace) && !xe.Tokens.Last.NeedObjectName() {
+ xe.Buf = append(xe.Tokens.MayAppendDelim(xe.Buf, '{'), "{}"...)
+ xe.Tokens.Last.Increment()
+ if xe.NeedFlush() {
+ return xe.Flush()
+ }
+ return nil
+ }
+ }
+
+ if err := enc.WriteToken(jsontext.BeginObject); err != nil {
+ return err
+ }
+ // A Go map guarantees that each entry has a unique key.
+ // The only possibility of duplicates is due to invalid UTF-8.
+ if !mo.Flags.Get(jsonflags.AllowInvalidUTF8) {
+ xe.Tokens.Last.DisableNamespace()
+ }
+ if !mo.Flags.Get(jsonflags.Deterministic) || len(obj) <= 1 {
+ for name, val := range obj {
+ if err := enc.WriteToken(jsontext.String(name)); err != nil {
+ return err
+ }
+ if err := marshalValueAny(enc, val, mo); err != nil {
+ return err
+ }
+ }
+ } else {
+ names := getStrings(len(obj))
+ var i int
+ for name := range obj {
+ (*names)[i] = name
+ i++
+ }
+ names.Sort()
+ for _, name := range *names {
+ if err := enc.WriteToken(jsontext.String(name)); err != nil {
+ return err
+ }
+ if err := marshalValueAny(enc, obj[name], mo); err != nil {
+ return err
+ }
+ }
+ putStrings(names)
+ }
+ if err := enc.WriteToken(jsontext.EndObject); err != nil {
+ return err
+ }
+ return nil
+}
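marshalObjectAny sorts member names only when the Deterministic flag is set; the v1 standard library always sorts map keys, which makes the resulting deterministic output easy to demonstrate:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// marshalMap returns the JSON encoding of m. encoding/json (v1)
// always emits map keys in sorted order, the behavior the
// Deterministic flag opts into in the code above.
func marshalMap(m map[string]int) string {
	b, err := json.Marshal(m)
	if err != nil {
		panic(err)
	}
	return string(b)
}

func main() {
	fmt.Println(marshalMap(map[string]int{"b": 2, "a": 1, "c": 3}))
	// {"a":1,"b":2,"c":3}
}
```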
+
+// unmarshalObjectAny unmarshals a JSON object as a Go map[string]any.
+// It panics if not decoding a JSON object.
+func unmarshalObjectAny(dec *jsontext.Decoder, uo *jsonopts.Struct) (map[string]any, error) {
+ switch tok, err := dec.ReadToken(); {
+ case err != nil:
+ return nil, err
+ case tok.Kind() != '{':
+ panic("BUG: invalid kind: " + tok.Kind().String())
+ }
+ obj := make(map[string]any)
+ // A Go map guarantees that each entry has a unique key.
+ // The only possibility of duplicates is due to invalid UTF-8.
+ if !uo.Flags.Get(jsonflags.AllowInvalidUTF8) {
+ export.Decoder(dec).Tokens.Last.DisableNamespace()
+ }
+ var errUnmarshal error
+ for dec.PeekKind() != '}' {
+ tok, err := dec.ReadToken()
+ if err != nil {
+ return obj, err
+ }
+ name := tok.String()
+
+ // Manually check for duplicate names.
+ if _, ok := obj[name]; ok {
+ // TODO: Unread the object name.
+ name := export.Decoder(dec).PreviousTokenOrValue()
+ err := newDuplicateNameError(dec.StackPointer(), nil, dec.InputOffset()-len64(name))
+ return obj, err
+ }
+
+ val, err := unmarshalValueAny(dec, uo)
+ obj[name] = val
+ if err != nil {
+ if isFatalError(err, uo.Flags) {
+ return obj, err
+ }
+ errUnmarshal = cmp.Or(err, errUnmarshal)
+ }
+ }
+ if _, err := dec.ReadToken(); err != nil {
+ return obj, err
+ }
+ return obj, errUnmarshal
+}
+
+// marshalArrayAny marshals a Go []any as a JSON array
+// (or as a JSON null if nil and [jsonflags.FormatNilSliceAsNull]).
+func marshalArrayAny(enc *jsontext.Encoder, arr []any, mo *jsonopts.Struct) error {
+ // Check for cycles.
+ xe := export.Encoder(enc)
+ if xe.Tokens.Depth() > startDetectingCyclesAfter {
+ v := reflect.ValueOf(arr)
+ if err := visitPointer(&xe.SeenPointers, v); err != nil {
+ return newMarshalErrorBefore(enc, sliceAnyType, err)
+ }
+ defer leavePointer(&xe.SeenPointers, v)
+ }
+
+ // Handle empty slices.
+ if len(arr) == 0 {
+ if mo.Flags.Get(jsonflags.FormatNilSliceAsNull) && arr == nil {
+ return enc.WriteToken(jsontext.Null)
+ }
+ // Optimize for marshaling an empty slice without any preceding whitespace.
+ if !mo.Flags.Get(jsonflags.AnyWhitespace) && !xe.Tokens.Last.NeedObjectName() {
+ xe.Buf = append(xe.Tokens.MayAppendDelim(xe.Buf, '['), "[]"...)
+ xe.Tokens.Last.Increment()
+ if xe.NeedFlush() {
+ return xe.Flush()
+ }
+ return nil
+ }
+ }
+
+ if err := enc.WriteToken(jsontext.BeginArray); err != nil {
+ return err
+ }
+ for _, val := range arr {
+ if err := marshalValueAny(enc, val, mo); err != nil {
+ return err
+ }
+ }
+ if err := enc.WriteToken(jsontext.EndArray); err != nil {
+ return err
+ }
+ return nil
+}
+
+// unmarshalArrayAny unmarshals a JSON array as a Go []any.
+// It panics if not decoding a JSON array.
+func unmarshalArrayAny(dec *jsontext.Decoder, uo *jsonopts.Struct) ([]any, error) {
+ switch tok, err := dec.ReadToken(); {
+ case err != nil:
+ return nil, err
+ case tok.Kind() != '[':
+ panic("BUG: invalid kind: " + tok.Kind().String())
+ }
+ arr := []any{}
+ var errUnmarshal error
+ for dec.PeekKind() != ']' {
+ val, err := unmarshalValueAny(dec, uo)
+ arr = append(arr, val)
+ if err != nil {
+ if isFatalError(err, uo.Flags) {
+ return arr, err
+ }
+ errUnmarshal = cmp.Or(errUnmarshal, err)
+ }
+ }
+ if _, err := dec.ReadToken(); err != nil {
+ return arr, err
+ }
+ return arr, errUnmarshal
+}
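The default Go shapes produced when unmarshaling into `any` (objects as map[string]any, arrays as []any, numbers as float64) match the v1 defaults, so the standard library can illustrate them:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// decodeAny unmarshals into an untyped any, yielding map[string]any,
// []any, string, float64, bool, or nil depending on the JSON kind.
func decodeAny(data string) any {
	var v any
	if err := json.Unmarshal([]byte(data), &v); err != nil {
		panic(err)
	}
	return v
}

func main() {
	v := decodeAny(`{"n": 1, "arr": [true, "x"]}`)
	obj := v.(map[string]any)
	fmt.Println(obj["n"].(float64), obj["arr"].([]any)[1].(string)) // 1 x
}
```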
diff --git a/internal/json/arshal_default.go b/internal/json/arshal_default.go
new file mode 100644
index 0000000000..bf83542fb6
--- /dev/null
+++ b/internal/json/arshal_default.go
@@ -0,0 +1,1912 @@
+// Copyright 2020 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+//go:build !goexperiment.jsonv2 || !go1.25
+
+package json
+
+import (
+ "bytes"
+ "cmp"
+ "encoding"
+ "encoding/base32"
+ "encoding/base64"
+ "encoding/hex"
+ "errors"
+ "fmt"
+ "math"
+ "reflect"
+ "slices"
+ "strconv"
+ "strings"
+ "sync"
+
+ "github.com/quay/clair/v4/internal/json/internal"
+ "github.com/quay/clair/v4/internal/json/internal/jsonflags"
+ "github.com/quay/clair/v4/internal/json/internal/jsonopts"
+ "github.com/quay/clair/v4/internal/json/internal/jsonwire"
+ "github.com/quay/clair/v4/internal/json/jsontext"
+)
+
+// optimizeCommon specifies whether to use optimizations targeted for certain
+// common patterns, rather than using the slower, but more general logic.
+// All tests should pass regardless of whether this is true or not.
+const optimizeCommon = true
+
+var (
+ // Most natural Go types that correspond with each JSON type.
+ anyType = reflect.TypeFor[any]() // JSON value
+ boolType = reflect.TypeFor[bool]() // JSON bool
+ stringType = reflect.TypeFor[string]() // JSON string
+ float64Type = reflect.TypeFor[float64]() // JSON number
+ mapStringAnyType = reflect.TypeFor[map[string]any]() // JSON object
+ sliceAnyType = reflect.TypeFor[[]any]() // JSON array
+
+ bytesType = reflect.TypeFor[[]byte]()
+ emptyStructType = reflect.TypeFor[struct{}]()
+)
+
+const startDetectingCyclesAfter = 1000
+
+type seenPointers = map[any]struct{}
+
+type typedPointer struct {
+ typ reflect.Type
+ ptr any // always stores unsafe.Pointer, but avoids depending on unsafe
+ len int // remember slice length to avoid false positives
+}
+
+// visitPointer visits the pointer in v, reporting an error if it was seen before.
+// If successfully visited, then the caller must eventually call leavePointer.
+func visitPointer(m *seenPointers, v reflect.Value) error {
+ p := typedPointer{v.Type(), v.UnsafePointer(), sliceLen(v)}
+ if _, ok := (*m)[p]; ok {
+ return internal.ErrCycle
+ }
+ if *m == nil {
+ *m = make(seenPointers)
+ }
+ (*m)[p] = struct{}{}
+ return nil
+}
+func leavePointer(m *seenPointers, v reflect.Value) {
+ p := typedPointer{v.Type(), v.UnsafePointer(), sliceLen(v)}
+ delete(*m, p)
+}
+
+func sliceLen(v reflect.Value) int {
+ if v.Kind() == reflect.Slice {
+ return v.Len()
+ }
+ return 0
+}
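visitPointer and leavePointer implement the cycle check applied once nesting exceeds startDetectingCyclesAfter. The v1 encoder performs an equivalent check, so a self-referential value can demonstrate the effect:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// marshalCycle attempts to marshal a map that contains itself;
// cycle detection turns what would be unbounded recursion into an error.
func marshalCycle() error {
	m := map[string]any{}
	m["self"] = m
	_, err := json.Marshal(m)
	return err
}

func main() {
	fmt.Println(marshalCycle() != nil) // true
}
```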
+
+func len64[Bytes ~[]byte | ~string](in Bytes) int64 {
+ return int64(len(in))
+}
+
+func makeDefaultArshaler(t reflect.Type) *arshaler {
+ switch t.Kind() {
+ case reflect.Bool:
+ return makeBoolArshaler(t)
+ case reflect.String:
+ return makeStringArshaler(t)
+ case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
+ return makeIntArshaler(t)
+ case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
+ return makeUintArshaler(t)
+ case reflect.Float32, reflect.Float64:
+ return makeFloatArshaler(t)
+ case reflect.Map:
+ return makeMapArshaler(t)
+ case reflect.Struct:
+ return makeStructArshaler(t)
+ case reflect.Slice:
+ fncs := makeSliceArshaler(t)
+ if t.Elem().Kind() == reflect.Uint8 {
+ return makeBytesArshaler(t, fncs)
+ }
+ return fncs
+ case reflect.Array:
+ fncs := makeArrayArshaler(t)
+ if t.Elem().Kind() == reflect.Uint8 {
+ return makeBytesArshaler(t, fncs)
+ }
+ return fncs
+ case reflect.Pointer:
+ return makePointerArshaler(t)
+ case reflect.Interface:
+ return makeInterfaceArshaler(t)
+ default:
+ return makeInvalidArshaler(t)
+ }
+}
+
+func makeBoolArshaler(t reflect.Type) *arshaler {
+ var fncs arshaler
+ fncs.marshal = func(enc *jsontext.Encoder, va addressableValue, mo *jsonopts.Struct) error {
+ xe := export.Encoder(enc)
+ if mo.Format != "" && mo.FormatDepth == xe.Tokens.Depth() {
+ return newInvalidFormatError(enc, t, mo)
+ }
+
+ // Optimize for marshaling without preceding whitespace.
+ if optimizeCommon && !mo.Flags.Get(jsonflags.AnyWhitespace|jsonflags.StringifyBoolsAndStrings) && !xe.Tokens.Last.NeedObjectName() {
+ xe.Buf = strconv.AppendBool(xe.Tokens.MayAppendDelim(xe.Buf, 't'), va.Bool())
+ xe.Tokens.Last.Increment()
+ if xe.NeedFlush() {
+ return xe.Flush()
+ }
+ return nil
+ }
+
+ if mo.Flags.Get(jsonflags.StringifyBoolsAndStrings) {
+ if va.Bool() {
+ return enc.WriteToken(jsontext.String("true"))
+ } else {
+ return enc.WriteToken(jsontext.String("false"))
+ }
+ }
+ return enc.WriteToken(jsontext.Bool(va.Bool()))
+ }
+ fncs.unmarshal = func(dec *jsontext.Decoder, va addressableValue, uo *jsonopts.Struct) error {
+ xd := export.Decoder(dec)
+ if uo.Format != "" && uo.FormatDepth == xd.Tokens.Depth() {
+ return newInvalidFormatError(dec, t, uo)
+ }
+ tok, err := dec.ReadToken()
+ if err != nil {
+ return err
+ }
+ k := tok.Kind()
+ switch k {
+ case 'n':
+ if !uo.Flags.Get(jsonflags.MergeWithLegacySemantics) {
+ va.SetBool(false)
+ }
+ return nil
+ case 't', 'f':
+ if !uo.Flags.Get(jsonflags.StringifyBoolsAndStrings) {
+ va.SetBool(tok.Bool())
+ return nil
+ }
+ case '"':
+ if uo.Flags.Get(jsonflags.StringifyBoolsAndStrings) {
+ switch tok.String() {
+ case "true":
+ va.SetBool(true)
+ case "false":
+ va.SetBool(false)
+ default:
+ if uo.Flags.Get(jsonflags.StringifyWithLegacySemantics) && tok.String() == "null" {
+ if !uo.Flags.Get(jsonflags.MergeWithLegacySemantics) {
+ va.SetBool(false)
+ }
+ return nil
+ }
+ return newUnmarshalErrorAfterWithValue(dec, t, strconv.ErrSyntax)
+ }
+ return nil
+ }
+ }
+ return newUnmarshalErrorAfterWithSkipping(dec, uo, t, nil)
+ }
+ return &fncs
+}
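The StringifyBoolsAndStrings flag handled above corresponds to v1's `,string` struct-tag option, which quotes the bool on output and accepts a quoted bool on input:

```go
package main

import (
	"encoding/json"
	"fmt"
)

type flags struct {
	Enabled bool `json:"enabled,string"`
}

// roundTrip marshals a stringified bool and unmarshals it back,
// exercising both directions of the quoting behavior.
func roundTrip(v bool) (string, bool) {
	b, err := json.Marshal(flags{Enabled: v})
	if err != nil {
		panic(err)
	}
	var out flags
	if err := json.Unmarshal(b, &out); err != nil {
		panic(err)
	}
	return string(b), out.Enabled
}

func main() {
	s, v := roundTrip(true)
	fmt.Println(s, v) // {"enabled":"true"} true
}
```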
+
+func makeStringArshaler(t reflect.Type) *arshaler {
+ var fncs arshaler
+ fncs.marshal = func(enc *jsontext.Encoder, va addressableValue, mo *jsonopts.Struct) error {
+ xe := export.Encoder(enc)
+ if mo.Format != "" && mo.FormatDepth == xe.Tokens.Depth() {
+ return newInvalidFormatError(enc, t, mo)
+ }
+
+ // Optimize for marshaling without preceding whitespace.
+ s := va.String()
+ if optimizeCommon && !mo.Flags.Get(jsonflags.AnyWhitespace|jsonflags.StringifyBoolsAndStrings) && !xe.Tokens.Last.NeedObjectName() {
+ b := xe.Buf
+ b = xe.Tokens.MayAppendDelim(b, '"')
+ b, err := jsonwire.AppendQuote(b, s, &mo.Flags)
+ if err == nil {
+ xe.Buf = b
+ xe.Tokens.Last.Increment()
+ if xe.NeedFlush() {
+ return xe.Flush()
+ }
+ return nil
+ }
+ // Otherwise, the string contains invalid UTF-8,
+ // so let the logic below construct the proper error.
+ }
+
+ if mo.Flags.Get(jsonflags.StringifyBoolsAndStrings) {
+ b, err := jsonwire.AppendQuote(nil, s, &mo.Flags)
+ if err != nil {
+ return newMarshalErrorBefore(enc, t, &jsontext.SyntacticError{Err: err})
+ }
+ q, err := jsontext.AppendQuote(nil, b)
+ if err != nil {
+ panic("BUG: second AppendQuote should never fail: " + err.Error())
+ }
+ return enc.WriteValue(q)
+ }
+ return enc.WriteToken(jsontext.String(s))
+ }
+ fncs.unmarshal = func(dec *jsontext.Decoder, va addressableValue, uo *jsonopts.Struct) error {
+ xd := export.Decoder(dec)
+ if uo.Format != "" && uo.FormatDepth == xd.Tokens.Depth() {
+ return newInvalidFormatError(dec, t, uo)
+ }
+ var flags jsonwire.ValueFlags
+ val, err := xd.ReadValue(&flags)
+ if err != nil {
+ return err
+ }
+ k := val.Kind()
+ switch k {
+ case 'n':
+ if !uo.Flags.Get(jsonflags.MergeWithLegacySemantics) {
+ va.SetString("")
+ }
+ return nil
+ case '"':
+ val = jsonwire.UnquoteMayCopy(val, flags.IsVerbatim())
+ if uo.Flags.Get(jsonflags.StringifyBoolsAndStrings) {
+ val, err = jsontext.AppendUnquote(nil, val)
+ if err != nil {
+ return newUnmarshalErrorAfter(dec, t, err)
+ }
+ if uo.Flags.Get(jsonflags.StringifyWithLegacySemantics) && string(val) == "null" {
+ if !uo.Flags.Get(jsonflags.MergeWithLegacySemantics) {
+ va.SetString("")
+ }
+ return nil
+ }
+ }
+ if xd.StringCache == nil {
+ xd.StringCache = new(stringCache)
+ }
+ str := makeString(xd.StringCache, val)
+ va.SetString(str)
+ return nil
+ }
+ return newUnmarshalErrorAfter(dec, t, nil)
+ }
+ return &fncs
+}
+
+var (
+ appendEncodeBase16 = hex.AppendEncode
+ appendEncodeBase32 = base32.StdEncoding.AppendEncode
+ appendEncodeBase32Hex = base32.HexEncoding.AppendEncode
+ appendEncodeBase64 = base64.StdEncoding.AppendEncode
+ appendEncodeBase64URL = base64.URLEncoding.AppendEncode
+ encodedLenBase16 = hex.EncodedLen
+ encodedLenBase32 = base32.StdEncoding.EncodedLen
+ encodedLenBase32Hex = base32.HexEncoding.EncodedLen
+ encodedLenBase64 = base64.StdEncoding.EncodedLen
+ encodedLenBase64URL = base64.URLEncoding.EncodedLen
+ appendDecodeBase16 = hex.AppendDecode
+ appendDecodeBase32 = base32.StdEncoding.AppendDecode
+ appendDecodeBase32Hex = base32.HexEncoding.AppendDecode
+ appendDecodeBase64 = base64.StdEncoding.AppendDecode
+ appendDecodeBase64URL = base64.URLEncoding.AppendDecode
+)
+
+func makeBytesArshaler(t reflect.Type, fncs *arshaler) *arshaler {
+ // NOTE: This handles both []~byte and [N]~byte.
+ // The v2 default is to treat a []namedByte as equivalent to []T
+ // since being able to convert []namedByte to []byte relies on
+ // dubious Go reflection behavior (see https://go.dev/issue/24746).
+ // For v1 emulation, we use jsonflags.FormatBytesWithLegacySemantics
+ // to forcibly treat []namedByte as a []byte.
+ marshalArray := fncs.marshal
+ isNamedByte := t.Elem().PkgPath() != ""
+ hasMarshaler := implementsAny(t.Elem(), allMarshalerTypes...)
+ fncs.marshal = func(enc *jsontext.Encoder, va addressableValue, mo *jsonopts.Struct) error {
+ if !mo.Flags.Get(jsonflags.FormatBytesWithLegacySemantics) && isNamedByte {
+ return marshalArray(enc, va, mo) // treat as []T or [N]T
+ }
+ xe := export.Encoder(enc)
+ appendEncode := appendEncodeBase64
+ if mo.Format != "" && mo.FormatDepth == xe.Tokens.Depth() {
+ switch mo.Format {
+ case "base64":
+ appendEncode = appendEncodeBase64
+ case "base64url":
+ appendEncode = appendEncodeBase64URL
+ case "base32":
+ appendEncode = appendEncodeBase32
+ case "base32hex":
+ appendEncode = appendEncodeBase32Hex
+ case "base16", "hex":
+ appendEncode = appendEncodeBase16
+ case "array":
+ mo.Format = ""
+ return marshalArray(enc, va, mo)
+ default:
+ return newInvalidFormatError(enc, t, mo)
+ }
+ } else if mo.Flags.Get(jsonflags.FormatByteArrayAsArray) && va.Kind() == reflect.Array {
+ return marshalArray(enc, va, mo)
+ } else if mo.Flags.Get(jsonflags.FormatBytesWithLegacySemantics) && hasMarshaler {
+ return marshalArray(enc, va, mo)
+ }
+ if mo.Flags.Get(jsonflags.FormatNilSliceAsNull) && va.Kind() == reflect.Slice && va.IsNil() {
+ // TODO: Provide an "emitempty" format override?
+ return enc.WriteToken(jsontext.Null)
+ }
+ return xe.AppendRaw('"', true, func(b []byte) ([]byte, error) {
+ return appendEncode(b, va.Bytes()), nil
+ })
+ }
+ unmarshalArray := fncs.unmarshal
+ fncs.unmarshal = func(dec *jsontext.Decoder, va addressableValue, uo *jsonopts.Struct) error {
+ if !uo.Flags.Get(jsonflags.FormatBytesWithLegacySemantics) && isNamedByte {
+ return unmarshalArray(dec, va, uo) // treat as []T or [N]T
+ }
+ xd := export.Decoder(dec)
+ appendDecode, encodedLen := appendDecodeBase64, encodedLenBase64
+ if uo.Format != "" && uo.FormatDepth == xd.Tokens.Depth() {
+ switch uo.Format {
+ case "base64":
+ appendDecode, encodedLen = appendDecodeBase64, encodedLenBase64
+ case "base64url":
+ appendDecode, encodedLen = appendDecodeBase64URL, encodedLenBase64URL
+ case "base32":
+ appendDecode, encodedLen = appendDecodeBase32, encodedLenBase32
+ case "base32hex":
+ appendDecode, encodedLen = appendDecodeBase32Hex, encodedLenBase32Hex
+ case "base16", "hex":
+ appendDecode, encodedLen = appendDecodeBase16, encodedLenBase16
+ case "array":
+ uo.Format = ""
+ return unmarshalArray(dec, va, uo)
+ default:
+ return newInvalidFormatError(dec, t, uo)
+ }
+ } else if uo.Flags.Get(jsonflags.FormatByteArrayAsArray) && va.Kind() == reflect.Array {
+ return unmarshalArray(dec, va, uo)
+ } else if uo.Flags.Get(jsonflags.FormatBytesWithLegacySemantics) && dec.PeekKind() == '[' {
+ return unmarshalArray(dec, va, uo)
+ }
+ var flags jsonwire.ValueFlags
+ val, err := xd.ReadValue(&flags)
+ if err != nil {
+ return err
+ }
+ k := val.Kind()
+ switch k {
+ case 'n':
+ if !uo.Flags.Get(jsonflags.MergeWithLegacySemantics) || va.Kind() != reflect.Array {
+ va.SetZero()
+ }
+ return nil
+ case '"':
+ // NOTE: The v2 default is to strictly comply with RFC 4648.
+ // Section 3.2 specifies that padding is required.
+ // Section 3.3 specifies that non-alphabet characters
+ // (e.g., '\r' or '\n') must be rejected.
+ // Section 3.5 specifies that unnecessary non-zero bits in
+ // the last quantum may be rejected. Since this is optional,
+ // we do not reject such inputs.
+ val = jsonwire.UnquoteMayCopy(val, flags.IsVerbatim())
+ b, err := appendDecode(va.Bytes()[:0], val)
+ if err != nil {
+ return newUnmarshalErrorAfter(dec, t, err)
+ }
+ if len(val) != encodedLen(len(b)) && !uo.Flags.Get(jsonflags.ParseBytesWithLooseRFC4648) {
+ // TODO(https://go.dev/issue/53845): RFC 4648, section 3.3,
+ // specifies that non-alphabet characters must be rejected.
+ // Unfortunately, the "base32" and "base64" packages allow
+ // '\r' and '\n' characters by default.
+ i := bytes.IndexAny(val, "\r\n")
+ err := fmt.Errorf("illegal character %s at offset %d", jsonwire.QuoteRune(val[i:]), i)
+ return newUnmarshalErrorAfter(dec, t, err)
+ }
+
+ if va.Kind() == reflect.Array {
+ dst := va.Bytes()
+ clear(dst[copy(dst, b):]) // no-op if len(b) >= len(dst)
+ if len(b) != len(dst) && !uo.Flags.Get(jsonflags.UnmarshalArrayFromAnyLength) {
+ err := fmt.Errorf("decoded length of %d mismatches array length of %d", len(b), len(dst))
+ return newUnmarshalErrorAfter(dec, t, err)
+ }
+ } else {
+ if b == nil {
+ b = []byte{}
+ }
+ va.SetBytes(b)
+ }
+ return nil
+ }
+ return newUnmarshalErrorAfter(dec, t, nil)
+ }
+ return fncs
+}
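The default []byte representation is padded base64 (RFC 4648), as in v1; the arshaler above layers the alternate "base32", "base16"/"hex", and "array" formats on top of that default. The baseline behavior, using the standard library:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// bytesJSON shows the default base64 string encoding of a byte slice
// and the decode back to the original bytes.
func bytesJSON(b []byte) (string, []byte) {
	enc, err := json.Marshal(b)
	if err != nil {
		panic(err)
	}
	var dec []byte
	if err := json.Unmarshal(enc, &dec); err != nil {
		panic(err)
	}
	return string(enc), dec
}

func main() {
	s, dec := bytesJSON([]byte("hi"))
	fmt.Println(s, string(dec)) // "aGk=" hi
}
```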
+
+func makeIntArshaler(t reflect.Type) *arshaler {
+ var fncs arshaler
+ bits := t.Bits()
+ fncs.marshal = func(enc *jsontext.Encoder, va addressableValue, mo *jsonopts.Struct) error {
+ xe := export.Encoder(enc)
+ if mo.Format != "" && mo.FormatDepth == xe.Tokens.Depth() {
+ return newInvalidFormatError(enc, t, mo)
+ }
+
+ // Optimize for marshaling without preceding whitespace or string escaping.
+ if optimizeCommon && !mo.Flags.Get(jsonflags.AnyWhitespace|jsonflags.StringifyNumbers) && !xe.Tokens.Last.NeedObjectName() {
+ xe.Buf = strconv.AppendInt(xe.Tokens.MayAppendDelim(xe.Buf, '0'), va.Int(), 10)
+ xe.Tokens.Last.Increment()
+ if xe.NeedFlush() {
+ return xe.Flush()
+ }
+ return nil
+ }
+
+ k := stringOrNumberKind(xe.Tokens.Last.NeedObjectName() || mo.Flags.Get(jsonflags.StringifyNumbers))
+ return xe.AppendRaw(k, true, func(b []byte) ([]byte, error) {
+ return strconv.AppendInt(b, va.Int(), 10), nil
+ })
+ }
+ fncs.unmarshal = func(dec *jsontext.Decoder, va addressableValue, uo *jsonopts.Struct) error {
+ xd := export.Decoder(dec)
+ if uo.Format != "" && uo.FormatDepth == xd.Tokens.Depth() {
+ return newInvalidFormatError(dec, t, uo)
+ }
+ stringify := xd.Tokens.Last.NeedObjectName() || uo.Flags.Get(jsonflags.StringifyNumbers)
+ var flags jsonwire.ValueFlags
+ val, err := xd.ReadValue(&flags)
+ if err != nil {
+ return err
+ }
+ k := val.Kind()
+ switch k {
+ case 'n':
+ if !uo.Flags.Get(jsonflags.MergeWithLegacySemantics) {
+ va.SetInt(0)
+ }
+ return nil
+ case '"':
+ if !stringify {
+ break
+ }
+ val = jsonwire.UnquoteMayCopy(val, flags.IsVerbatim())
+ if uo.Flags.Get(jsonflags.StringifyWithLegacySemantics) && string(val) == "null" {
+ if !uo.Flags.Get(jsonflags.MergeWithLegacySemantics) {
+ va.SetInt(0)
+ }
+ return nil
+ }
+ fallthrough
+ case '0':
+ if stringify && k == '0' {
+ break
+ }
+ var negOffset int
+ neg := len(val) > 0 && val[0] == '-'
+ if neg {
+ negOffset = 1
+ }
+ n, ok := jsonwire.ParseUint(val[negOffset:])
+ maxInt := uint64(1) << (bits - 1)
+ overflow := (neg && n > maxInt) || (!neg && n > maxInt-1)
+ if !ok {
+ if n != math.MaxUint64 {
+ return newUnmarshalErrorAfterWithValue(dec, t, strconv.ErrSyntax)
+ }
+ overflow = true
+ }
+ if overflow {
+ return newUnmarshalErrorAfterWithValue(dec, t, strconv.ErrRange)
+ }
+ if neg {
+ va.SetInt(int64(-n))
+ } else {
+ va.SetInt(int64(+n))
+ }
+ return nil
+ }
+ return newUnmarshalErrorAfter(dec, t, nil)
+ }
+ return &fncs
+}
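Overflow on integer unmarshal surfaces as an error rather than silent truncation, matching v1; a small standard-library demonstration with int8:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// intOverflow reports whether unmarshaling data into an int8 fails.
// 127 fits; 128 overflows and must be rejected.
func intOverflow(data string) bool {
	var n int8
	return json.Unmarshal([]byte(data), &n) != nil
}

func main() {
	fmt.Println(intOverflow(`127`), intOverflow(`128`)) // false true
}
```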
+
+func makeUintArshaler(t reflect.Type) *arshaler {
+ var fncs arshaler
+ bits := t.Bits()
+ fncs.marshal = func(enc *jsontext.Encoder, va addressableValue, mo *jsonopts.Struct) error {
+ xe := export.Encoder(enc)
+ if mo.Format != "" && mo.FormatDepth == xe.Tokens.Depth() {
+ return newInvalidFormatError(enc, t, mo)
+ }
+
+ // Optimize for marshaling without preceding whitespace or string escaping.
+ if optimizeCommon && !mo.Flags.Get(jsonflags.AnyWhitespace|jsonflags.StringifyNumbers) && !xe.Tokens.Last.NeedObjectName() {
+ xe.Buf = strconv.AppendUint(xe.Tokens.MayAppendDelim(xe.Buf, '0'), va.Uint(), 10)
+ xe.Tokens.Last.Increment()
+ if xe.NeedFlush() {
+ return xe.Flush()
+ }
+ return nil
+ }
+
+ k := stringOrNumberKind(xe.Tokens.Last.NeedObjectName() || mo.Flags.Get(jsonflags.StringifyNumbers))
+ return xe.AppendRaw(k, true, func(b []byte) ([]byte, error) {
+ return strconv.AppendUint(b, va.Uint(), 10), nil
+ })
+ }
+ fncs.unmarshal = func(dec *jsontext.Decoder, va addressableValue, uo *jsonopts.Struct) error {
+ xd := export.Decoder(dec)
+ if uo.Format != "" && uo.FormatDepth == xd.Tokens.Depth() {
+ return newInvalidFormatError(dec, t, uo)
+ }
+ stringify := xd.Tokens.Last.NeedObjectName() || uo.Flags.Get(jsonflags.StringifyNumbers)
+ var flags jsonwire.ValueFlags
+ val, err := xd.ReadValue(&flags)
+ if err != nil {
+ return err
+ }
+ k := val.Kind()
+ switch k {
+ case 'n':
+ if !uo.Flags.Get(jsonflags.MergeWithLegacySemantics) {
+ va.SetUint(0)
+ }
+ return nil
+ case '"':
+ if !stringify {
+ break
+ }
+ val = jsonwire.UnquoteMayCopy(val, flags.IsVerbatim())
+ if uo.Flags.Get(jsonflags.StringifyWithLegacySemantics) && string(val) == "null" {
+ if !uo.Flags.Get(jsonflags.MergeWithLegacySemantics) {
+ va.SetUint(0)
+ }
+ return nil
+ }
+ fallthrough
+ case '0':
+ if stringify && k == '0' {
+ break
+ }
+ n, ok := jsonwire.ParseUint(val)
+ maxUint := uint64(1) << bits
+ overflow := n > maxUint-1
+ if !ok {
+ if n != math.MaxUint64 {
+ return newUnmarshalErrorAfterWithValue(dec, t, strconv.ErrSyntax)
+ }
+ overflow = true
+ }
+ if overflow {
+ return newUnmarshalErrorAfterWithValue(dec, t, strconv.ErrRange)
+ }
+ va.SetUint(n)
+ return nil
+ }
+ return newUnmarshalErrorAfter(dec, t, nil)
+ }
+ return &fncs
+}
+
+func makeFloatArshaler(t reflect.Type) *arshaler {
+ var fncs arshaler
+ bits := t.Bits()
+ fncs.marshal = func(enc *jsontext.Encoder, va addressableValue, mo *jsonopts.Struct) error {
+ xe := export.Encoder(enc)
+ var allowNonFinite bool
+ if mo.Format != "" && mo.FormatDepth == xe.Tokens.Depth() {
+ if mo.Format == "nonfinite" {
+ allowNonFinite = true
+ } else {
+ return newInvalidFormatError(enc, t, mo)
+ }
+ }
+
+ fv := va.Float()
+ if math.IsNaN(fv) || math.IsInf(fv, 0) {
+ if !allowNonFinite {
+ err := fmt.Errorf("unsupported value: %v", fv)
+ return newMarshalErrorBefore(enc, t, err)
+ }
+ return enc.WriteToken(jsontext.Float(fv))
+ }
+
+ // Optimize for marshaling without preceding whitespace or string escaping.
+ if optimizeCommon && !mo.Flags.Get(jsonflags.AnyWhitespace|jsonflags.StringifyNumbers) && !xe.Tokens.Last.NeedObjectName() {
+ xe.Buf = jsonwire.AppendFloat(xe.Tokens.MayAppendDelim(xe.Buf, '0'), fv, bits)
+ xe.Tokens.Last.Increment()
+ if xe.NeedFlush() {
+ return xe.Flush()
+ }
+ return nil
+ }
+
+ k := stringOrNumberKind(xe.Tokens.Last.NeedObjectName() || mo.Flags.Get(jsonflags.StringifyNumbers))
+ return xe.AppendRaw(k, true, func(b []byte) ([]byte, error) {
+ return jsonwire.AppendFloat(b, va.Float(), bits), nil
+ })
+ }
+ fncs.unmarshal = func(dec *jsontext.Decoder, va addressableValue, uo *jsonopts.Struct) error {
+ xd := export.Decoder(dec)
+ var allowNonFinite bool
+ if uo.Format != "" && uo.FormatDepth == xd.Tokens.Depth() {
+ if uo.Format == "nonfinite" {
+ allowNonFinite = true
+ } else {
+ return newInvalidFormatError(dec, t, uo)
+ }
+ }
+ stringify := xd.Tokens.Last.NeedObjectName() || uo.Flags.Get(jsonflags.StringifyNumbers)
+ var flags jsonwire.ValueFlags
+ val, err := xd.ReadValue(&flags)
+ if err != nil {
+ return err
+ }
+ k := val.Kind()
+ switch k {
+ case 'n':
+ if !uo.Flags.Get(jsonflags.MergeWithLegacySemantics) {
+ va.SetFloat(0)
+ }
+ return nil
+ case '"':
+ val = jsonwire.UnquoteMayCopy(val, flags.IsVerbatim())
+ if allowNonFinite {
+ switch string(val) {
+ case "NaN":
+ va.SetFloat(math.NaN())
+ return nil
+ case "Infinity":
+ va.SetFloat(math.Inf(+1))
+ return nil
+ case "-Infinity":
+ va.SetFloat(math.Inf(-1))
+ return nil
+ }
+ }
+ if !stringify {
+ break
+ }
+ if uo.Flags.Get(jsonflags.StringifyWithLegacySemantics) && string(val) == "null" {
+ if !uo.Flags.Get(jsonflags.MergeWithLegacySemantics) {
+ va.SetFloat(0)
+ }
+ return nil
+ }
+ if n, err := jsonwire.ConsumeNumber(val); n != len(val) || err != nil {
+ return newUnmarshalErrorAfterWithValue(dec, t, strconv.ErrSyntax)
+ }
+ fallthrough
+ case '0':
+ if stringify && k == '0' {
+ break
+ }
+ fv, ok := jsonwire.ParseFloat(val, bits)
+ va.SetFloat(fv)
+ if !ok {
+ return newUnmarshalErrorAfterWithValue(dec, t, strconv.ErrRange)
+ }
+ return nil
+ }
+ return newUnmarshalErrorAfter(dec, t, nil)
+ }
+ return &fncs
+}
+
+func makeMapArshaler(t reflect.Type) *arshaler {
+ // NOTE: The logic below disables namespaces for tracking duplicate names
+ // when handling map keys with a unique representation.
+
+ // NOTE: Values retrieved from a map are not addressable,
+ // so we shallow copy the values to make them addressable and
+ // store them back into the map afterwards.
+
+ var fncs arshaler
+ var (
+ once sync.Once
+ keyFncs *arshaler
+ valFncs *arshaler
+ )
+ init := func() {
+ keyFncs = lookupArshaler(t.Key())
+ valFncs = lookupArshaler(t.Elem())
+ }
+ nillableLegacyKey := t.Key().Kind() == reflect.Pointer &&
+ implementsAny(t.Key(), textMarshalerType, textAppenderType)
+ fncs.marshal = func(enc *jsontext.Encoder, va addressableValue, mo *jsonopts.Struct) error {
+ // Check for cycles.
+ xe := export.Encoder(enc)
+ if xe.Tokens.Depth() > startDetectingCyclesAfter {
+ if err := visitPointer(&xe.SeenPointers, va.Value); err != nil {
+ return newMarshalErrorBefore(enc, t, err)
+ }
+ defer leavePointer(&xe.SeenPointers, va.Value)
+ }
+
+ emitNull := mo.Flags.Get(jsonflags.FormatNilMapAsNull)
+ if mo.Format != "" && mo.FormatDepth == xe.Tokens.Depth() {
+ switch mo.Format {
+ case "emitnull":
+ emitNull = true
+ mo.Format = ""
+ case "emitempty":
+ emitNull = false
+ mo.Format = ""
+ default:
+ return newInvalidFormatError(enc, t, mo)
+ }
+ }
+
+ // Handle empty maps.
+ n := va.Len()
+ if n == 0 {
+ if emitNull && va.IsNil() {
+ return enc.WriteToken(jsontext.Null)
+ }
+ // Optimize for marshaling an empty map without any preceding whitespace.
+ if optimizeCommon && !mo.Flags.Get(jsonflags.AnyWhitespace) && !xe.Tokens.Last.NeedObjectName() {
+ xe.Buf = append(xe.Tokens.MayAppendDelim(xe.Buf, '{'), "{}"...)
+ xe.Tokens.Last.Increment()
+ if xe.NeedFlush() {
+ return xe.Flush()
+ }
+ return nil
+ }
+ }
+
+ once.Do(init)
+ if err := enc.WriteToken(jsontext.BeginObject); err != nil {
+ return err
+ }
+ if n > 0 {
+ nonDefaultKey := keyFncs.nonDefault
+ marshalKey := keyFncs.marshal
+ marshalVal := valFncs.marshal
+ if mo.Marshalers != nil {
+ var ok bool
+ marshalKey, ok = mo.Marshalers.(*Marshalers).lookup(marshalKey, t.Key())
+ marshalVal, _ = mo.Marshalers.(*Marshalers).lookup(marshalVal, t.Elem())
+ nonDefaultKey = nonDefaultKey || ok
+ }
+ k := newAddressableValue(t.Key())
+ v := newAddressableValue(t.Elem())
+
+ // A Go map guarantees that each entry has a unique key.
+ // As such, disable the expensive duplicate name check if we know
+ // that every Go key will serialize as a unique JSON string.
+ if !nonDefaultKey && mapKeyWithUniqueRepresentation(k.Kind(), mo.Flags.Get(jsonflags.AllowInvalidUTF8)) {
+ xe.Tokens.Last.DisableNamespace()
+ }
+
+ switch {
+ case !mo.Flags.Get(jsonflags.Deterministic) || n <= 1:
+ for iter := va.Value.MapRange(); iter.Next(); {
+ k.SetIterKey(iter)
+ err := marshalKey(enc, k, mo)
+ if err != nil {
+ if mo.Flags.Get(jsonflags.CallMethodsWithLegacySemantics) &&
+ errors.Is(err, jsontext.ErrNonStringName) && nillableLegacyKey && k.IsNil() {
+ err = enc.WriteToken(jsontext.String(""))
+ }
+ if err != nil {
+ if serr, ok := err.(*jsontext.SyntacticError); ok && serr.Err == jsontext.ErrNonStringName {
+ err = newMarshalErrorBefore(enc, k.Type(), err)
+ }
+ return err
+ }
+ }
+ v.SetIterValue(iter)
+ if err := marshalVal(enc, v, mo); err != nil {
+ return err
+ }
+ }
+ case !nonDefaultKey && t.Key().Kind() == reflect.String:
+ names := getStrings(n)
+ for i, iter := 0, va.Value.MapRange(); i < n && iter.Next(); i++ {
+ k.SetIterKey(iter)
+ (*names)[i] = k.String()
+ }
+ names.Sort()
+ for _, name := range *names {
+ if err := enc.WriteToken(jsontext.String(name)); err != nil {
+ return err
+ }
+ // TODO(https://go.dev/issue/57061): Use v.SetMapIndexOf.
+ k.SetString(name)
+ v.Set(va.MapIndex(k.Value))
+ if err := marshalVal(enc, v, mo); err != nil {
+ return err
+ }
+ }
+ putStrings(names)
+ default:
+ type member struct {
+ name string // unquoted name
+ key addressableValue
+ val addressableValue
+ }
+ members := make([]member, n)
+ keys := reflect.MakeSlice(reflect.SliceOf(t.Key()), n, n)
+ vals := reflect.MakeSlice(reflect.SliceOf(t.Elem()), n, n)
+ for i, iter := 0, va.Value.MapRange(); i < n && iter.Next(); i++ {
+ // Marshal the member name.
+ k := addressableValue{keys.Index(i), true} // indexed slice element is always addressable
+ k.SetIterKey(iter)
+ v := addressableValue{vals.Index(i), true} // indexed slice element is always addressable
+ v.SetIterValue(iter)
+ err := marshalKey(enc, k, mo)
+ if err != nil {
+ if mo.Flags.Get(jsonflags.CallMethodsWithLegacySemantics) &&
+ errors.Is(err, jsontext.ErrNonStringName) && nillableLegacyKey && k.IsNil() {
+ err = enc.WriteToken(jsontext.String(""))
+ }
+ if err != nil {
+ if serr, ok := err.(*jsontext.SyntacticError); ok && serr.Err == jsontext.ErrNonStringName {
+ err = newMarshalErrorBefore(enc, k.Type(), err)
+ }
+ return err
+ }
+ }
+ name := xe.UnwriteOnlyObjectMemberName()
+ members[i] = member{name, k, v}
+ }
+ // TODO: If AllowDuplicateNames is enabled, then sort according
+ // to reflect.Value as well if the names are equal.
+ // See internal/fmtsort.
+ slices.SortFunc(members, func(x, y member) int {
+ return strings.Compare(x.name, y.name)
+ })
+ for _, member := range members {
+ if err := enc.WriteToken(jsontext.String(member.name)); err != nil {
+ return err
+ }
+ if err := marshalVal(enc, member.val, mo); err != nil {
+ return err
+ }
+ }
+ }
+ }
+ if err := enc.WriteToken(jsontext.EndObject); err != nil {
+ return err
+ }
+ return nil
+ }
+ fncs.unmarshal = func(dec *jsontext.Decoder, va addressableValue, uo *jsonopts.Struct) error {
+ xd := export.Decoder(dec)
+ if uo.Format != "" && uo.FormatDepth == xd.Tokens.Depth() {
+ switch uo.Format {
+ case "emitnull", "emitempty":
+ uo.Format = "" // only relevant for marshaling
+ default:
+ return newInvalidFormatError(dec, t, uo)
+ }
+ }
+ tok, err := dec.ReadToken()
+ if err != nil {
+ return err
+ }
+ k := tok.Kind()
+ switch k {
+ case 'n':
+ va.SetZero()
+ return nil
+ case '{':
+ once.Do(init)
+ if va.IsNil() {
+ va.Set(reflect.MakeMap(t))
+ }
+
+ nonDefaultKey := keyFncs.nonDefault
+ unmarshalKey := keyFncs.unmarshal
+ unmarshalVal := valFncs.unmarshal
+ if uo.Unmarshalers != nil {
+ var ok bool
+ unmarshalKey, ok = uo.Unmarshalers.(*Unmarshalers).lookup(unmarshalKey, t.Key())
+ unmarshalVal, _ = uo.Unmarshalers.(*Unmarshalers).lookup(unmarshalVal, t.Elem())
+ nonDefaultKey = nonDefaultKey || ok
+ }
+ k := newAddressableValue(t.Key())
+ v := newAddressableValue(t.Elem())
+
+			// Manually detect duplicate entries by checking whether the
+			// unmarshaled key already exists in the destination Go map.
+ // Consequently, syntactically different names (e.g., "0" and "-0")
+ // will be rejected as duplicates since they semantically refer
+ // to the same Go value. This is an unusual interaction
+ // between syntax and semantics, but is more correct.
+ if !nonDefaultKey && mapKeyWithUniqueRepresentation(k.Kind(), uo.Flags.Get(jsonflags.AllowInvalidUTF8)) {
+ xd.Tokens.Last.DisableNamespace()
+ }
+
+ // In the rare case where the map is not already empty,
+ // then we need to manually track which keys we already saw
+ // since existing presence alone is insufficient to indicate
+ // whether the input had a duplicate name.
+ var seen reflect.Value
+ if !uo.Flags.Get(jsonflags.AllowDuplicateNames) && va.Len() > 0 {
+ seen = reflect.MakeMap(reflect.MapOf(k.Type(), emptyStructType))
+ }
+
+ var errUnmarshal error
+ for dec.PeekKind() != '}' {
+ // Unmarshal the map entry key.
+ k.SetZero()
+ err := unmarshalKey(dec, k, uo)
+ if err != nil {
+ if isFatalError(err, uo.Flags) {
+ return err
+ }
+ if err := dec.SkipValue(); err != nil {
+ return err
+ }
+ errUnmarshal = cmp.Or(errUnmarshal, err)
+ continue
+ }
+ if k.Kind() == reflect.Interface && !k.IsNil() && !k.Elem().Type().Comparable() {
+ err := newUnmarshalErrorAfter(dec, t, fmt.Errorf("invalid incomparable key type %v", k.Elem().Type()))
+ if !uo.Flags.Get(jsonflags.ReportErrorsWithLegacySemantics) {
+ return err
+ }
+ if err2 := dec.SkipValue(); err2 != nil {
+ return err2
+ }
+ errUnmarshal = cmp.Or(errUnmarshal, err)
+ continue
+ }
+
+ // Check if a pre-existing map entry value exists for this key.
+ if v2 := va.MapIndex(k.Value); v2.IsValid() {
+ if !uo.Flags.Get(jsonflags.AllowDuplicateNames) && (!seen.IsValid() || seen.MapIndex(k.Value).IsValid()) {
+ // TODO: Unread the object name.
+ name := xd.PreviousTokenOrValue()
+ return newDuplicateNameError(dec.StackPointer(), nil, dec.InputOffset()-len64(name))
+ }
+ if !uo.Flags.Get(jsonflags.MergeWithLegacySemantics) {
+ v.Set(v2)
+ } else {
+ v.SetZero()
+ }
+ } else {
+ v.SetZero()
+ }
+
+ // Unmarshal the map entry value.
+ err = unmarshalVal(dec, v, uo)
+ va.SetMapIndex(k.Value, v.Value)
+ if seen.IsValid() {
+ seen.SetMapIndex(k.Value, reflect.Zero(emptyStructType))
+ }
+ if err != nil {
+ if isFatalError(err, uo.Flags) {
+ return err
+ }
+ errUnmarshal = cmp.Or(errUnmarshal, err)
+ }
+ }
+ if _, err := dec.ReadToken(); err != nil {
+ return err
+ }
+ return errUnmarshal
+ }
+ return newUnmarshalErrorAfterWithSkipping(dec, uo, t, nil)
+ }
+ return &fncs
+}
+
+// mapKeyWithUniqueRepresentation reports whether all possible values of k
+// marshal to a different JSON value, and whether all possible JSON values
+// that can unmarshal into k unmarshal to different Go values.
+// In other words, the representation must be a bijection.
+func mapKeyWithUniqueRepresentation(k reflect.Kind, allowInvalidUTF8 bool) bool {
+ switch k {
+ case reflect.Bool,
+ reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64,
+ reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
+ return true
+ case reflect.String:
+		// For strings, we have to be careful since names with invalid UTF-8
+		// may unescape to the same Go string value.
+ return !allowInvalidUTF8
+ default:
+ // Floating-point kinds are not listed above since NaNs
+ // can appear multiple times and all serialize as "NaN".
+ return false
+ }
+}
+
+var errNilField = errors.New("cannot set embedded pointer to unexported struct type")
+
+func makeStructArshaler(t reflect.Type) *arshaler {
+ // NOTE: The logic below disables namespaces for tracking duplicate names
+ // and does the tracking locally with an efficient bit-set based on which
+ // Go struct fields were seen.
+
+ var fncs arshaler
+ var (
+ once sync.Once
+ fields structFields
+ errInit *SemanticError
+ )
+ init := func() {
+ fields, errInit = makeStructFields(t)
+ }
+ fncs.marshal = func(enc *jsontext.Encoder, va addressableValue, mo *jsonopts.Struct) error {
+ xe := export.Encoder(enc)
+ if mo.Format != "" && mo.FormatDepth == xe.Tokens.Depth() {
+ return newInvalidFormatError(enc, t, mo)
+ }
+ once.Do(init)
+ if errInit != nil && !mo.Flags.Get(jsonflags.ReportErrorsWithLegacySemantics) {
+ return newMarshalErrorBefore(enc, errInit.GoType, errInit.Err)
+ }
+ if err := enc.WriteToken(jsontext.BeginObject); err != nil {
+ return err
+ }
+ var seenIdxs uintSet
+ prevIdx := -1
+ xe.Tokens.Last.DisableNamespace() // we manually ensure unique names below
+ for i := range fields.flattened {
+ f := &fields.flattened[i]
+ v := addressableValue{va.Field(f.index0), va.forcedAddr} // addressable if struct value is addressable
+ if len(f.index) > 0 {
+ v = v.fieldByIndex(f.index, false)
+ if !v.IsValid() {
+ continue // implies a nil inlined field
+ }
+ }
+
+ // OmitZero skips the field if the Go value is zero,
+ // which we can determine up front without calling the marshaler.
+ if (f.omitzero || mo.Flags.Get(jsonflags.OmitZeroStructFields)) &&
+ ((f.isZero == nil && v.IsZero()) || (f.isZero != nil && f.isZero(v))) {
+ continue
+ }
+
+ // Check for the legacy definition of omitempty.
+ if f.omitempty && mo.Flags.Get(jsonflags.OmitEmptyWithLegacySemantics) && isLegacyEmpty(v) {
+ continue
+ }
+
+ marshal := f.fncs.marshal
+ nonDefault := f.fncs.nonDefault
+ if mo.Marshalers != nil {
+ var ok bool
+ marshal, ok = mo.Marshalers.(*Marshalers).lookup(marshal, f.typ)
+ nonDefault = nonDefault || ok
+ }
+
+ // OmitEmpty skips the field if the marshaled JSON value is empty,
+ // which we can know up front if there are no custom marshalers,
+ // otherwise we must marshal the value and unwrite it if empty.
+ if f.omitempty && !mo.Flags.Get(jsonflags.OmitEmptyWithLegacySemantics) &&
+ !nonDefault && f.isEmpty != nil && f.isEmpty(v) {
+ continue // fast path for omitempty
+ }
+
+ // Write the object member name.
+ //
+ // The logic below is semantically equivalent to:
+ // enc.WriteToken(String(f.name))
+ // but specialized and simplified because:
+ // 1. The Encoder must be expecting an object name.
+ // 2. The object namespace is guaranteed to be disabled.
+ // 3. The object name is guaranteed to be valid and pre-escaped.
+ // 4. There is no need to flush the buffer (for unwrite purposes).
+ // 5. There is no possibility of an error occurring.
+ if optimizeCommon {
+ // Append any delimiters or optional whitespace.
+ b := xe.Buf
+ if xe.Tokens.Last.Length() > 0 {
+ b = append(b, ',')
+ if mo.Flags.Get(jsonflags.SpaceAfterComma) {
+ b = append(b, ' ')
+ }
+ }
+ if mo.Flags.Get(jsonflags.Multiline) {
+ b = xe.AppendIndent(b, xe.Tokens.NeedIndent('"'))
+ }
+
+ // Append the token to the output and to the state machine.
+ n0 := len(b) // offset before calling AppendQuote
+ if !f.nameNeedEscape {
+ b = append(b, f.quotedName...)
+ } else {
+ b, _ = jsonwire.AppendQuote(b, f.name, &mo.Flags)
+ }
+ xe.Buf = b
+ xe.Names.ReplaceLastQuotedOffset(n0)
+ xe.Tokens.Last.Increment()
+ } else {
+ if err := enc.WriteToken(jsontext.String(f.name)); err != nil {
+ return err
+ }
+ }
+
+ // Write the object member value.
+ flagsOriginal := mo.Flags
+ if f.string {
+ if !mo.Flags.Get(jsonflags.StringifyWithLegacySemantics) {
+ mo.Flags.Set(jsonflags.StringifyNumbers | 1)
+ } else if canLegacyStringify(f.typ) {
+ mo.Flags.Set(jsonflags.StringifyNumbers | jsonflags.StringifyBoolsAndStrings | 1)
+ }
+ }
+ if f.format != "" {
+ mo.FormatDepth = xe.Tokens.Depth()
+ mo.Format = f.format
+ }
+ err := marshal(enc, v, mo)
+ mo.Flags = flagsOriginal
+ mo.Format = ""
+ if err != nil {
+ return err
+ }
+
+ // Try unwriting the member if empty (slow path for omitempty).
+ if f.omitempty && !mo.Flags.Get(jsonflags.OmitEmptyWithLegacySemantics) {
+ var prevName *string
+ if prevIdx >= 0 {
+ prevName = &fields.flattened[prevIdx].name
+ }
+ if xe.UnwriteEmptyObjectMember(prevName) {
+ continue
+ }
+ }
+
+ // Remember the previous written object member.
+ // The set of seen fields only needs to be updated to detect
+ // duplicate names with those from the inlined fallback.
+ if !mo.Flags.Get(jsonflags.AllowDuplicateNames) && fields.inlinedFallback != nil {
+ seenIdxs.insert(uint(f.id))
+ }
+ prevIdx = f.id
+ }
+ if fields.inlinedFallback != nil && !(mo.Flags.Get(jsonflags.DiscardUnknownMembers) && fields.inlinedFallback.unknown) {
+ var insertUnquotedName func([]byte) bool
+ if !mo.Flags.Get(jsonflags.AllowDuplicateNames) {
+ insertUnquotedName = func(name []byte) bool {
+ // Check that the name from inlined fallback does not match
+ // one of the previously marshaled names from known fields.
+ if foldedFields := fields.lookupByFoldedName(name); len(foldedFields) > 0 {
+ if f := fields.byActualName[string(name)]; f != nil {
+ return seenIdxs.insert(uint(f.id))
+ }
+ for _, f := range foldedFields {
+ if f.matchFoldedName(name, &mo.Flags) {
+ return seenIdxs.insert(uint(f.id))
+ }
+ }
+ }
+
+ // Check that the name does not match any other name
+ // previously marshaled from the inlined fallback.
+ return xe.Namespaces.Last().InsertUnquoted(name)
+ }
+ }
+ if err := marshalInlinedFallbackAll(enc, va, mo, fields.inlinedFallback, insertUnquotedName); err != nil {
+ return err
+ }
+ }
+ if err := enc.WriteToken(jsontext.EndObject); err != nil {
+ return err
+ }
+ return nil
+ }
+ fncs.unmarshal = func(dec *jsontext.Decoder, va addressableValue, uo *jsonopts.Struct) error {
+ xd := export.Decoder(dec)
+ if uo.Format != "" && uo.FormatDepth == xd.Tokens.Depth() {
+ return newInvalidFormatError(dec, t, uo)
+ }
+ tok, err := dec.ReadToken()
+ if err != nil {
+ return err
+ }
+ k := tok.Kind()
+ switch k {
+ case 'n':
+ if !uo.Flags.Get(jsonflags.MergeWithLegacySemantics) {
+ va.SetZero()
+ }
+ return nil
+ case '{':
+ once.Do(init)
+ if errInit != nil && !uo.Flags.Get(jsonflags.ReportErrorsWithLegacySemantics) {
+ return newUnmarshalErrorAfter(dec, errInit.GoType, errInit.Err)
+ }
+ var seenIdxs uintSet
+ xd.Tokens.Last.DisableNamespace()
+ var errUnmarshal error
+ for dec.PeekKind() != '}' {
+ // Process the object member name.
+ var flags jsonwire.ValueFlags
+ val, err := xd.ReadValue(&flags)
+ if err != nil {
+ return err
+ }
+ name := jsonwire.UnquoteMayCopy(val, flags.IsVerbatim())
+ f := fields.byActualName[string(name)]
+ if f == nil {
+ for _, f2 := range fields.lookupByFoldedName(name) {
+ if f2.matchFoldedName(name, &uo.Flags) {
+ f = f2
+ break
+ }
+ }
+ if f == nil {
+ if uo.Flags.Get(jsonflags.RejectUnknownMembers) && (fields.inlinedFallback == nil || fields.inlinedFallback.unknown) {
+ err := newUnmarshalErrorAfter(dec, t, ErrUnknownName)
+ if !uo.Flags.Get(jsonflags.ReportErrorsWithLegacySemantics) {
+ return err
+ }
+ errUnmarshal = cmp.Or(errUnmarshal, err)
+ }
+ if !uo.Flags.Get(jsonflags.AllowDuplicateNames) && !xd.Namespaces.Last().InsertUnquoted(name) {
+ // TODO: Unread the object name.
+ return newDuplicateNameError(dec.StackPointer(), nil, dec.InputOffset()-len64(val))
+ }
+
+ if fields.inlinedFallback == nil {
+ // Skip unknown value since we have no place to store it.
+ if err := dec.SkipValue(); err != nil {
+ return err
+ }
+ } else {
+						// Unmarshal into a value capable of storing arbitrary object members.
+ if err := unmarshalInlinedFallbackNext(dec, va, uo, fields.inlinedFallback, val, name); err != nil {
+ if isFatalError(err, uo.Flags) {
+ return err
+ }
+ errUnmarshal = cmp.Or(errUnmarshal, err)
+ }
+ }
+ continue
+ }
+ }
+ if !uo.Flags.Get(jsonflags.AllowDuplicateNames) && !seenIdxs.insert(uint(f.id)) {
+ // TODO: Unread the object name.
+ return newDuplicateNameError(dec.StackPointer(), nil, dec.InputOffset()-len64(val))
+ }
+
+ // Process the object member value.
+ unmarshal := f.fncs.unmarshal
+ if uo.Unmarshalers != nil {
+ unmarshal, _ = uo.Unmarshalers.(*Unmarshalers).lookup(unmarshal, f.typ)
+ }
+ flagsOriginal := uo.Flags
+ if f.string {
+ if !uo.Flags.Get(jsonflags.StringifyWithLegacySemantics) {
+ uo.Flags.Set(jsonflags.StringifyNumbers | 1)
+ } else if canLegacyStringify(f.typ) {
+ uo.Flags.Set(jsonflags.StringifyNumbers | jsonflags.StringifyBoolsAndStrings | 1)
+ }
+ }
+ if f.format != "" {
+ uo.FormatDepth = xd.Tokens.Depth()
+ uo.Format = f.format
+ }
+ v := addressableValue{va.Field(f.index0), va.forcedAddr} // addressable if struct value is addressable
+ if len(f.index) > 0 {
+ v = v.fieldByIndex(f.index, true)
+ if !v.IsValid() {
+ err := newUnmarshalErrorBefore(dec, t, errNilField)
+ if !uo.Flags.Get(jsonflags.ReportErrorsWithLegacySemantics) {
+ return err
+ }
+ errUnmarshal = cmp.Or(errUnmarshal, err)
+ unmarshal = func(dec *jsontext.Decoder, _ addressableValue, _ *jsonopts.Struct) error {
+ return dec.SkipValue()
+ }
+ }
+ }
+ err = unmarshal(dec, v, uo)
+ uo.Flags = flagsOriginal
+ uo.Format = ""
+ if err != nil {
+ if isFatalError(err, uo.Flags) {
+ return err
+ }
+ errUnmarshal = cmp.Or(errUnmarshal, err)
+ }
+ }
+ if _, err := dec.ReadToken(); err != nil {
+ return err
+ }
+ return errUnmarshal
+ }
+ return newUnmarshalErrorAfterWithSkipping(dec, uo, t, nil)
+ }
+ return &fncs
+}
+
+func (va addressableValue) fieldByIndex(index []int, mayAlloc bool) addressableValue {
+ for _, i := range index {
+ va = va.indirect(mayAlloc)
+ if !va.IsValid() {
+ return va
+ }
+ va = addressableValue{va.Field(i), va.forcedAddr} // addressable if struct value is addressable
+ }
+ return va
+}
+
+func (va addressableValue) indirect(mayAlloc bool) addressableValue {
+ if va.Kind() == reflect.Pointer {
+ if va.IsNil() {
+ if !mayAlloc || !va.CanSet() {
+ return addressableValue{}
+ }
+ va.Set(reflect.New(va.Type().Elem()))
+ }
+ va = addressableValue{va.Elem(), false} // dereferenced pointer is always addressable
+ }
+ return va
+}
+
+// isLegacyEmpty reports whether a value is empty according to the v1 definition.
+func isLegacyEmpty(v addressableValue) bool {
+ // Equivalent to encoding/json.isEmptyValue@v1.21.0.
+ switch v.Kind() {
+ case reflect.Bool:
+ return v.Bool() == false
+ case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
+ return v.Int() == 0
+ case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
+ return v.Uint() == 0
+ case reflect.Float32, reflect.Float64:
+ return v.Float() == 0
+ case reflect.String, reflect.Map, reflect.Slice, reflect.Array:
+ return v.Len() == 0
+ case reflect.Pointer, reflect.Interface:
+ return v.IsNil()
+ }
+ return false
+}
+
+// canLegacyStringify reports whether t can be stringified according to v1,
+// where t is a bool, string, or number (or unnamed pointer to such).
+// In v1, the `string` option does not apply recursively to nested types within
+// a composite Go type (e.g., an array, slice, struct, map, or interface).
+func canLegacyStringify(t reflect.Type) bool {
+ // Based on encoding/json.typeFields#L1126-L1143@v1.23.0
+	if t.Name() == "" && t.Kind() == reflect.Pointer {
+ t = t.Elem()
+ }
+ switch t.Kind() {
+ case reflect.Bool, reflect.String,
+ reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64,
+ reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr,
+ reflect.Float32, reflect.Float64:
+ return true
+ }
+ return false
+}
+
+func makeSliceArshaler(t reflect.Type) *arshaler {
+ var fncs arshaler
+ var (
+ once sync.Once
+ valFncs *arshaler
+ )
+ init := func() {
+ valFncs = lookupArshaler(t.Elem())
+ }
+ fncs.marshal = func(enc *jsontext.Encoder, va addressableValue, mo *jsonopts.Struct) error {
+ // Check for cycles.
+ xe := export.Encoder(enc)
+ if xe.Tokens.Depth() > startDetectingCyclesAfter {
+ if err := visitPointer(&xe.SeenPointers, va.Value); err != nil {
+ return newMarshalErrorBefore(enc, t, err)
+ }
+ defer leavePointer(&xe.SeenPointers, va.Value)
+ }
+
+ emitNull := mo.Flags.Get(jsonflags.FormatNilSliceAsNull)
+ if mo.Format != "" && mo.FormatDepth == xe.Tokens.Depth() {
+ switch mo.Format {
+ case "emitnull":
+ emitNull = true
+ mo.Format = ""
+ case "emitempty":
+ emitNull = false
+ mo.Format = ""
+ default:
+ return newInvalidFormatError(enc, t, mo)
+ }
+ }
+
+ // Handle empty slices.
+ n := va.Len()
+ if n == 0 {
+ if emitNull && va.IsNil() {
+ return enc.WriteToken(jsontext.Null)
+ }
+ // Optimize for marshaling an empty slice without any preceding whitespace.
+ if optimizeCommon && !mo.Flags.Get(jsonflags.AnyWhitespace) && !xe.Tokens.Last.NeedObjectName() {
+ xe.Buf = append(xe.Tokens.MayAppendDelim(xe.Buf, '['), "[]"...)
+ xe.Tokens.Last.Increment()
+ if xe.NeedFlush() {
+ return xe.Flush()
+ }
+ return nil
+ }
+ }
+
+ once.Do(init)
+ if err := enc.WriteToken(jsontext.BeginArray); err != nil {
+ return err
+ }
+ marshal := valFncs.marshal
+ if mo.Marshalers != nil {
+ marshal, _ = mo.Marshalers.(*Marshalers).lookup(marshal, t.Elem())
+ }
+ for i := range n {
+ v := addressableValue{va.Index(i), false} // indexed slice element is always addressable
+ if err := marshal(enc, v, mo); err != nil {
+ return err
+ }
+ }
+ if err := enc.WriteToken(jsontext.EndArray); err != nil {
+ return err
+ }
+ return nil
+ }
+ emptySlice := reflect.MakeSlice(t, 0, 0)
+ fncs.unmarshal = func(dec *jsontext.Decoder, va addressableValue, uo *jsonopts.Struct) error {
+ xd := export.Decoder(dec)
+ if uo.Format != "" && uo.FormatDepth == xd.Tokens.Depth() {
+ switch uo.Format {
+ case "emitnull", "emitempty":
+ uo.Format = "" // only relevant for marshaling
+ default:
+ return newInvalidFormatError(dec, t, uo)
+ }
+ }
+
+ tok, err := dec.ReadToken()
+ if err != nil {
+ return err
+ }
+ k := tok.Kind()
+ switch k {
+ case 'n':
+ va.SetZero()
+ return nil
+ case '[':
+ once.Do(init)
+ unmarshal := valFncs.unmarshal
+ if uo.Unmarshalers != nil {
+ unmarshal, _ = uo.Unmarshalers.(*Unmarshalers).lookup(unmarshal, t.Elem())
+ }
+ mustZero := true // we do not know the cleanliness of unused capacity
+ cap := va.Cap()
+ if cap > 0 {
+ va.SetLen(cap)
+ }
+ var i int
+ var errUnmarshal error
+ for dec.PeekKind() != ']' {
+ if i == cap {
+ va.Value.Grow(1)
+ cap = va.Cap()
+ va.SetLen(cap)
+ mustZero = false // reflect.Value.Grow ensures new capacity is zero-initialized
+ }
+ v := addressableValue{va.Index(i), false} // indexed slice element is always addressable
+ i++
+ if mustZero && !uo.Flags.Get(jsonflags.MergeWithLegacySemantics) {
+ v.SetZero()
+ }
+ if err := unmarshal(dec, v, uo); err != nil {
+ if isFatalError(err, uo.Flags) {
+ va.SetLen(i)
+ return err
+ }
+ errUnmarshal = cmp.Or(errUnmarshal, err)
+ }
+ }
+ if i == 0 {
+ va.Set(emptySlice)
+ } else {
+ va.SetLen(i)
+ }
+ if _, err := dec.ReadToken(); err != nil {
+ return err
+ }
+ return errUnmarshal
+ }
+ return newUnmarshalErrorAfterWithSkipping(dec, uo, t, nil)
+ }
+ return &fncs
+}
+
+var errArrayUnderflow = errors.New("too few array elements")
+var errArrayOverflow = errors.New("too many array elements")
+
+func makeArrayArshaler(t reflect.Type) *arshaler {
+ var fncs arshaler
+ var (
+ once sync.Once
+ valFncs *arshaler
+ )
+ init := func() {
+ valFncs = lookupArshaler(t.Elem())
+ }
+ n := t.Len()
+ fncs.marshal = func(enc *jsontext.Encoder, va addressableValue, mo *jsonopts.Struct) error {
+ xe := export.Encoder(enc)
+ if mo.Format != "" && mo.FormatDepth == xe.Tokens.Depth() {
+ return newInvalidFormatError(enc, t, mo)
+ }
+ once.Do(init)
+ if err := enc.WriteToken(jsontext.BeginArray); err != nil {
+ return err
+ }
+ marshal := valFncs.marshal
+ if mo.Marshalers != nil {
+ marshal, _ = mo.Marshalers.(*Marshalers).lookup(marshal, t.Elem())
+ }
+ for i := range n {
+ v := addressableValue{va.Index(i), va.forcedAddr} // indexed array element is addressable if array is addressable
+ if err := marshal(enc, v, mo); err != nil {
+ return err
+ }
+ }
+ if err := enc.WriteToken(jsontext.EndArray); err != nil {
+ return err
+ }
+ return nil
+ }
+ fncs.unmarshal = func(dec *jsontext.Decoder, va addressableValue, uo *jsonopts.Struct) error {
+ xd := export.Decoder(dec)
+ if uo.Format != "" && uo.FormatDepth == xd.Tokens.Depth() {
+ return newInvalidFormatError(dec, t, uo)
+ }
+ tok, err := dec.ReadToken()
+ if err != nil {
+ return err
+ }
+ k := tok.Kind()
+ switch k {
+ case 'n':
+ if !uo.Flags.Get(jsonflags.MergeWithLegacySemantics) {
+ va.SetZero()
+ }
+ return nil
+ case '[':
+ once.Do(init)
+ unmarshal := valFncs.unmarshal
+ if uo.Unmarshalers != nil {
+ unmarshal, _ = uo.Unmarshalers.(*Unmarshalers).lookup(unmarshal, t.Elem())
+ }
+ var i int
+ var errUnmarshal error
+ for dec.PeekKind() != ']' {
+ if i >= n {
+ if err := dec.SkipValue(); err != nil {
+ return err
+ }
+ err = errArrayOverflow
+ continue
+ }
+ v := addressableValue{va.Index(i), va.forcedAddr} // indexed array element is addressable if array is addressable
+ if !uo.Flags.Get(jsonflags.MergeWithLegacySemantics) {
+ v.SetZero()
+ }
+ if err := unmarshal(dec, v, uo); err != nil {
+ if isFatalError(err, uo.Flags) {
+ return err
+ }
+ errUnmarshal = cmp.Or(errUnmarshal, err)
+ }
+ i++
+ }
+ for ; i < n; i++ {
+ va.Index(i).SetZero()
+ err = errArrayUnderflow
+ }
+ if _, err := dec.ReadToken(); err != nil {
+ return err
+ }
+ if err != nil && !uo.Flags.Get(jsonflags.UnmarshalArrayFromAnyLength) {
+ return newUnmarshalErrorAfter(dec, t, err)
+ }
+ return errUnmarshal
+ }
+ return newUnmarshalErrorAfterWithSkipping(dec, uo, t, nil)
+ }
+ return &fncs
+}
+
+func makePointerArshaler(t reflect.Type) *arshaler {
+ var fncs arshaler
+ var (
+ once sync.Once
+ valFncs *arshaler
+ )
+ init := func() {
+ valFncs = lookupArshaler(t.Elem())
+ }
+ fncs.marshal = func(enc *jsontext.Encoder, va addressableValue, mo *jsonopts.Struct) error {
+ // Check for cycles.
+ xe := export.Encoder(enc)
+ if xe.Tokens.Depth() > startDetectingCyclesAfter {
+ if err := visitPointer(&xe.SeenPointers, va.Value); err != nil {
+ return newMarshalErrorBefore(enc, t, err)
+ }
+ defer leavePointer(&xe.SeenPointers, va.Value)
+ }
+
+ // NOTE: Struct.Format is forwarded to underlying marshal.
+ if va.IsNil() {
+ return enc.WriteToken(jsontext.Null)
+ }
+ once.Do(init)
+ marshal := valFncs.marshal
+ if mo.Marshalers != nil {
+ marshal, _ = mo.Marshalers.(*Marshalers).lookup(marshal, t.Elem())
+ }
+ v := addressableValue{va.Elem(), false} // dereferenced pointer is always addressable
+ return marshal(enc, v, mo)
+ }
+ fncs.unmarshal = func(dec *jsontext.Decoder, va addressableValue, uo *jsonopts.Struct) error {
+ // NOTE: Struct.Format is forwarded to underlying unmarshal.
+ if dec.PeekKind() == 'n' {
+ if _, err := dec.ReadToken(); err != nil {
+ return err
+ }
+ va.SetZero()
+ return nil
+ }
+ once.Do(init)
+ unmarshal := valFncs.unmarshal
+ if uo.Unmarshalers != nil {
+ unmarshal, _ = uo.Unmarshalers.(*Unmarshalers).lookup(unmarshal, t.Elem())
+ }
+ if va.IsNil() {
+ va.Set(reflect.New(t.Elem()))
+ }
+ v := addressableValue{va.Elem(), false} // dereferenced pointer is always addressable
+ if err := unmarshal(dec, v, uo); err != nil {
+ return err
+ }
+ if uo.Flags.Get(jsonflags.StringifyWithLegacySemantics) &&
+ uo.Flags.Get(jsonflags.StringifyNumbers|jsonflags.StringifyBoolsAndStrings) {
+ // A JSON null quoted within a JSON string should take effect
+ // within the pointer value, rather than the indirect value.
+ //
+ // TODO: This does not correctly handle escaped nulls
+ // (e.g., "\u006e\u0075\u006c\u006c"), but is good enough
+ // for such an esoteric use case of the `string` option.
+ if string(export.Decoder(dec).PreviousTokenOrValue()) == `"null"` {
+ va.SetZero()
+ }
+ }
+ return nil
+ }
+ return &fncs
+}
+
+var errNilInterface = errors.New("cannot derive concrete type for nil interface with finite type set")
+
+func makeInterfaceArshaler(t reflect.Type) *arshaler {
+ // NOTE: Values retrieved from an interface are not addressable,
+ // so we shallow copy the values to make them addressable and
+ // store them back into the interface afterwards.
+
+ var fncs arshaler
+ var whichMarshaler reflect.Type
+ for _, iface := range allMarshalerTypes {
+ if t.Implements(iface) {
+			whichMarshaler = iface
+ break
+ }
+ }
+ fncs.marshal = func(enc *jsontext.Encoder, va addressableValue, mo *jsonopts.Struct) error {
+ xe := export.Encoder(enc)
+ if mo.Format != "" && mo.FormatDepth == xe.Tokens.Depth() {
+ return newInvalidFormatError(enc, t, mo)
+ }
+ if va.IsNil() {
+ return enc.WriteToken(jsontext.Null)
+ } else if mo.Flags.Get(jsonflags.CallMethodsWithLegacySemantics) && whichMarshaler != nil {
+ // The marshaler for a pointer never calls the method on a nil receiver.
+ // Wrap the nil pointer within a struct type so that marshal
+ // instead appears on a value receiver and may be called.
+ if va.Elem().Kind() == reflect.Pointer && va.Elem().IsNil() {
+ v2 := newAddressableValue(whichMarshaler)
+ switch whichMarshaler {
+ case jsonMarshalerToType:
+ v2.Set(reflect.ValueOf(struct{ MarshalerTo }{va.Elem().Interface().(MarshalerTo)}))
+ case jsonMarshalerType:
+ v2.Set(reflect.ValueOf(struct{ Marshaler }{va.Elem().Interface().(Marshaler)}))
+ case textAppenderType:
+ v2.Set(reflect.ValueOf(struct{ encoding.TextAppender }{va.Elem().Interface().(encoding.TextAppender)}))
+ case textMarshalerType:
+ v2.Set(reflect.ValueOf(struct{ encoding.TextMarshaler }{va.Elem().Interface().(encoding.TextMarshaler)}))
+ }
+ va = v2
+ }
+ }
+ v := newAddressableValue(va.Elem().Type())
+ v.Set(va.Elem())
+ marshal := lookupArshaler(v.Type()).marshal
+ if mo.Marshalers != nil {
+ marshal, _ = mo.Marshalers.(*Marshalers).lookup(marshal, v.Type())
+ }
+ // Optimize for the any type if there are no special options.
+ if optimizeCommon &&
+ t == anyType && !mo.Flags.Get(jsonflags.StringifyNumbers|jsonflags.StringifyBoolsAndStrings) && mo.Format == "" &&
+ (mo.Marshalers == nil || !mo.Marshalers.(*Marshalers).fromAny) {
+ return marshalValueAny(enc, va.Elem().Interface(), mo)
+ }
+ return marshal(enc, v, mo)
+ }
+ fncs.unmarshal = func(dec *jsontext.Decoder, va addressableValue, uo *jsonopts.Struct) error {
+ xd := export.Decoder(dec)
+ if uo.Format != "" && uo.FormatDepth == xd.Tokens.Depth() {
+ return newInvalidFormatError(dec, t, uo)
+ }
+ if uo.Flags.Get(jsonflags.MergeWithLegacySemantics) && !va.IsNil() {
+ // Legacy merge behavior is difficult to explain.
+ // In general, it only merges for non-nil pointer kinds.
+ // As a special case, unmarshaling a JSON null into a pointer
+ // sets a concrete nil pointer of the underlying type
+ // (rather than setting the interface value itself to nil).
+ e := va.Elem()
+ if e.Kind() == reflect.Pointer && !e.IsNil() {
+ if dec.PeekKind() == 'n' && e.Elem().Kind() == reflect.Pointer {
+ if _, err := dec.ReadToken(); err != nil {
+ return err
+ }
+ va.Elem().Elem().SetZero()
+ return nil
+ }
+ } else {
+ va.SetZero()
+ }
+ }
+ if dec.PeekKind() == 'n' {
+ if _, err := dec.ReadToken(); err != nil {
+ return err
+ }
+ va.SetZero()
+ return nil
+ }
+ var v addressableValue
+ if va.IsNil() {
+ // Optimize for the any type if there are no special options.
+ // We do not care about stringified numbers since JSON strings
+ // are always unmarshaled into an any value as Go strings.
+ // Duplicate name check must be enforced since unmarshalValueAny
+ // does not implement merge semantics.
+ if optimizeCommon &&
+ t == anyType && !uo.Flags.Get(jsonflags.AllowDuplicateNames) && uo.Format == "" &&
+ (uo.Unmarshalers == nil || !uo.Unmarshalers.(*Unmarshalers).fromAny) {
+ v, err := unmarshalValueAny(dec, uo)
+ // We must check for nil interface values up front.
+ // See https://go.dev/issue/52310.
+ if v != nil {
+ va.Set(reflect.ValueOf(v))
+ }
+ return err
+ }
+
+ k := dec.PeekKind()
+ if !isAnyType(t) {
+ return newUnmarshalErrorBeforeWithSkipping(dec, uo, t, errNilInterface)
+ }
+ switch k {
+ case 'f', 't':
+ v = newAddressableValue(boolType)
+ case '"':
+ v = newAddressableValue(stringType)
+ case '0':
+ if uo.Flags.Get(jsonflags.UnmarshalAnyWithRawNumber) {
+ v = addressableValue{reflect.ValueOf(internal.NewRawNumber()).Elem(), true}
+ } else {
+ v = newAddressableValue(float64Type)
+ }
+ case '{':
+ v = newAddressableValue(mapStringAnyType)
+ case '[':
+ v = newAddressableValue(sliceAnyType)
+ default:
+ // If k is invalid (e.g., due to an I/O or syntax error), then
+ // that will be cached by PeekKind and returned by ReadValue.
+ // If k is '}' or ']', then ReadValue must error since
+ // those are invalid kinds at the start of a JSON value.
+ _, err := dec.ReadValue()
+ return err
+ }
+ } else {
+ // Shallow copy the existing value to keep it addressable.
+ // Any mutations at the top-level of the value will be observable
+ // since we always store this value back into the interface value.
+ v = newAddressableValue(va.Elem().Type())
+ v.Set(va.Elem())
+ }
+ unmarshal := lookupArshaler(v.Type()).unmarshal
+ if uo.Unmarshalers != nil {
+ unmarshal, _ = uo.Unmarshalers.(*Unmarshalers).lookup(unmarshal, v.Type())
+ }
+ err := unmarshal(dec, v, uo)
+ va.Set(v.Value)
+ return err
+ }
+ return &fncs
+}
+
+// isAnyType reports whether t is equivalent to the any interface type.
+func isAnyType(t reflect.Type) bool {
+ // This is forward compatible if the Go language permits type sets within
+ // ordinary interfaces where an interface with zero methods does not
+ // necessarily mean it can hold every possible Go type.
+ // See https://go.dev/issue/45346.
+ return t == anyType || anyType.Implements(t)
+}
+
+func makeInvalidArshaler(t reflect.Type) *arshaler {
+ var fncs arshaler
+ fncs.marshal = func(enc *jsontext.Encoder, va addressableValue, mo *jsonopts.Struct) error {
+ return newMarshalErrorBefore(enc, t, nil)
+ }
+ fncs.unmarshal = func(dec *jsontext.Decoder, va addressableValue, uo *jsonopts.Struct) error {
+ return newUnmarshalErrorBefore(dec, t, nil)
+ }
+ return &fncs
+}
+
+func stringOrNumberKind(isString bool) jsontext.Kind {
+ if isString {
+ return '"'
+ } else {
+ return '0'
+ }
+}
+
+type uintSet64 uint64
+
+func (s uintSet64) has(i uint) bool { return s&(1<<i) > 0 }
+func (s *uintSet64) set(i uint) { *s |= 1 << i }
+
+// uintSet is a set of unsigned integers.
+// It is optimized for most integers being close to zero.
+type uintSet struct {
+ lo uintSet64
+ hi []uintSet64
+}
+
+// has reports whether i is in the set.
+func (s *uintSet) has(i uint) bool {
+ if i < 64 {
+ return s.lo.has(i)
+ } else {
+ i -= 64
+ iHi, iLo := int(i/64), i%64
+ return iHi < len(s.hi) && s.hi[iHi].has(iLo)
+ }
+}
+
+// insert inserts i into the set and reports whether it was the first insertion.
+func (s *uintSet) insert(i uint) bool {
+ // TODO: Make this inlinable at least for the lower 64-bit case.
+ if i < 64 {
+ has := s.lo.has(i)
+ s.lo.set(i)
+ return !has
+ } else {
+ i -= 64
+ iHi, iLo := int(i/64), i%64
+ if iHi >= len(s.hi) {
+ s.hi = append(s.hi, make([]uintSet64, iHi+1-len(s.hi))...)
+ s.hi = s.hi[:cap(s.hi)]
+ }
+ has := s.hi[iHi].has(iLo)
+ s.hi[iHi].set(iLo)
+ return !has
+ }
+}
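> Example

The `uintSet` type at the end of this file keeps small integers in a single 64-bit word (`lo`) and spills larger ones into a slice of additional words (`hi`). The following is a self-contained sketch of that design for illustration; it mirrors the vendored helper but is not the package's internal API:

```go
package main

import "fmt"

// uintSet64 is a bitset over the integers [0, 64).
type uintSet64 uint64

func (s uintSet64) has(i uint) bool { return s&(1<<i) > 0 }
func (s *uintSet64) set(i uint)     { *s |= 1 << i }

// uintSet tracks arbitrary unsigned integers,
// optimized for most values being close to zero.
type uintSet struct {
	lo uintSet64
	hi []uintSet64
}

// has reports whether i is in the set.
func (s *uintSet) has(i uint) bool {
	if i < 64 {
		return s.lo.has(i)
	}
	i -= 64
	iHi, iLo := int(i/64), i%64
	return iHi < len(s.hi) && s.hi[iHi].has(iLo)
}

// insert inserts i and reports whether it was the first insertion.
func (s *uintSet) insert(i uint) bool {
	if i < 64 {
		had := s.lo.has(i)
		s.lo.set(i)
		return !had
	}
	i -= 64
	iHi, iLo := int(i/64), i%64
	if iHi >= len(s.hi) {
		// Grow the spill slice with zeroed words as needed.
		s.hi = append(s.hi, make([]uintSet64, iHi+1-len(s.hi))...)
	}
	had := s.hi[iHi].has(iLo)
	s.hi[iHi].set(iLo)
	return !had
}

func main() {
	var s uintSet
	fmt.Println(s.insert(3))   // true: first insertion, stored in lo
	fmt.Println(s.insert(3))   // false: duplicate
	fmt.Println(s.insert(500)) // true: lands in the hi spill slice
}
```

The caller in this package uses the boolean result of `insert` to detect duplicate JSON object member names.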
diff --git a/internal/json/arshal_funcs.go b/internal/json/arshal_funcs.go
new file mode 100644
index 0000000000..2ddad14800
--- /dev/null
+++ b/internal/json/arshal_funcs.go
@@ -0,0 +1,432 @@
+// Copyright 2020 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+//go:build !goexperiment.jsonv2 || !go1.25
+
+package json
+
+import (
+ "errors"
+ "fmt"
+ "reflect"
+ "sync"
+
+ "github.com/quay/clair/v4/internal/json/internal"
+ "github.com/quay/clair/v4/internal/json/internal/jsonflags"
+ "github.com/quay/clair/v4/internal/json/internal/jsonopts"
+ "github.com/quay/clair/v4/internal/json/jsontext"
+)
+
+// SkipFunc may be returned by [MarshalToFunc] and [UnmarshalFromFunc] functions.
+//
+// Any function that returns SkipFunc must not cause observable side effects
+// on the provided [jsontext.Encoder] or [jsontext.Decoder].
+// For example, it is permissible to call [jsontext.Decoder.PeekKind],
+// but not permissible to call [jsontext.Decoder.ReadToken] or
+// [jsontext.Encoder.WriteToken] since such methods mutate the state.
+var SkipFunc = errors.New("json: skip function")
+
+var errSkipMutation = errors.New("must not read or write any tokens when skipping")
+var errNonSingularValue = errors.New("must read or write exactly one value")
+
+// Marshalers is a list of functions that may override the marshal behavior
+// of specific types. Populate [WithMarshalers] to use it with
+// [Marshal], [MarshalWrite], or [MarshalEncode].
+// A nil *Marshalers is equivalent to an empty list.
+// There are no exported fields or methods on Marshalers.
+type Marshalers = typedMarshalers
+
+// JoinMarshalers constructs a flattened list of marshal functions.
+// If multiple functions in the list are applicable for a value of a given type,
+// then those earlier in the list take precedence over those that come later.
+// If a function returns [SkipFunc], then the next applicable function is called,
+// otherwise the default marshaling behavior is used.
+//
+// For example:
+//
+// m1 := JoinMarshalers(f1, f2)
+// m2 := JoinMarshalers(f0, m1, f3) // equivalent to m3
+// m3 := JoinMarshalers(f0, f1, f2, f3) // equivalent to m2
+func JoinMarshalers(ms ...*Marshalers) *Marshalers {
+ return newMarshalers(ms...)
+}
+
+// Unmarshalers is a list of functions that may override the unmarshal behavior
+// of specific types. Populate [WithUnmarshalers] to use it with
+// [Unmarshal], [UnmarshalRead], or [UnmarshalDecode].
+// A nil *Unmarshalers is equivalent to an empty list.
+// There are no exported fields or methods on Unmarshalers.
+type Unmarshalers = typedUnmarshalers
+
+// JoinUnmarshalers constructs a flattened list of unmarshal functions.
+// If multiple functions in the list are applicable for a value of a given type,
+// then those earlier in the list take precedence over those that come later.
+// If a function returns [SkipFunc], then the next applicable function is called,
+// otherwise the default unmarshaling behavior is used.
+//
+// For example:
+//
+// u1 := JoinUnmarshalers(f1, f2)
+// u2 := JoinUnmarshalers(f0, u1, f3) // equivalent to u3
+// u3 := JoinUnmarshalers(f0, f1, f2, f3) // equivalent to u2
+func JoinUnmarshalers(us ...*Unmarshalers) *Unmarshalers {
+ return newUnmarshalers(us...)
+}
+
+type typedMarshalers = typedArshalers[jsontext.Encoder]
+type typedUnmarshalers = typedArshalers[jsontext.Decoder]
+type typedArshalers[Coder any] struct {
+ nonComparable
+
+ fncVals []typedArshaler[Coder]
+ fncCache sync.Map // map[reflect.Type]arshaler
+
+	// fromAny reports whether any of the Go types used to represent arbitrary JSON
+ // (i.e., any, bool, string, float64, map[string]any, or []any) matches
+ // any of the provided type-specific arshalers.
+ //
+ // This bit of information is needed in arshal_default.go to determine
+ // whether to use the specialized logic in arshal_any.go to handle
+ // the any interface type. The logic in arshal_any.go does not support
+ // type-specific arshal functions, so we must avoid using that logic
+ // if this is true.
+ fromAny bool
+}
+type typedMarshaler = typedArshaler[jsontext.Encoder]
+type typedUnmarshaler = typedArshaler[jsontext.Decoder]
+type typedArshaler[Coder any] struct {
+ typ reflect.Type
+ fnc func(*Coder, addressableValue, *jsonopts.Struct) error
+ maySkip bool
+}
+
+func newMarshalers(ms ...*Marshalers) *Marshalers { return newTypedArshalers(ms...) }
+func newUnmarshalers(us ...*Unmarshalers) *Unmarshalers { return newTypedArshalers(us...) }
+func newTypedArshalers[Coder any](as ...*typedArshalers[Coder]) *typedArshalers[Coder] {
+ var a typedArshalers[Coder]
+ for _, a2 := range as {
+ if a2 != nil {
+ a.fncVals = append(a.fncVals, a2.fncVals...)
+ a.fromAny = a.fromAny || a2.fromAny
+ }
+ }
+ if len(a.fncVals) == 0 {
+ return nil
+ }
+ return &a
+}
+
+func (a *typedArshalers[Coder]) lookup(fnc func(*Coder, addressableValue, *jsonopts.Struct) error, t reflect.Type) (func(*Coder, addressableValue, *jsonopts.Struct) error, bool) {
+ if a == nil {
+ return fnc, false
+ }
+ if v, ok := a.fncCache.Load(t); ok {
+ if v == nil {
+ return fnc, false
+ }
+ return v.(func(*Coder, addressableValue, *jsonopts.Struct) error), true
+ }
+
+ // Collect a list of arshalers that can be called for this type.
+ // This list may be longer than 1 since some arshalers can be skipped.
+ var fncs []func(*Coder, addressableValue, *jsonopts.Struct) error
+ for _, fncVal := range a.fncVals {
+ if !castableTo(t, fncVal.typ) {
+ continue
+ }
+ fncs = append(fncs, fncVal.fnc)
+ if !fncVal.maySkip {
+ break // subsequent arshalers will never be called
+ }
+ }
+
+ if len(fncs) == 0 {
+ a.fncCache.Store(t, nil) // nil to indicate that no funcs found
+ return fnc, false
+ }
+
+ // Construct an arshaler that may call every applicable arshaler.
+ fncDefault := fnc
+ fnc = func(c *Coder, v addressableValue, o *jsonopts.Struct) error {
+ for _, fnc := range fncs {
+ if err := fnc(c, v, o); err != SkipFunc {
+ return err // may be nil or non-nil
+ }
+ }
+ return fncDefault(c, v, o)
+ }
+
+	// Use the first function stored so that duplicate work can be garbage collected.
+ v, _ := a.fncCache.LoadOrStore(t, fnc)
+ return v.(func(*Coder, addressableValue, *jsonopts.Struct) error), true
+}
+
+// MarshalFunc constructs a type-specific marshaler that
+// specifies how to marshal values of type T.
+// T can be any type except a named pointer.
+// The function is always provided with a non-nil pointer value
+// if T is an interface or pointer type.
+//
+// The function must marshal exactly one JSON value.
+// The value of T must not be retained outside the function call.
+// It may not return [SkipFunc].
+func MarshalFunc[T any](fn func(T) ([]byte, error)) *Marshalers {
+ t := reflect.TypeFor[T]()
+ assertCastableTo(t, true)
+ typFnc := typedMarshaler{
+ typ: t,
+ fnc: func(enc *jsontext.Encoder, va addressableValue, mo *jsonopts.Struct) error {
+ val, err := fn(va.castTo(t).Interface().(T))
+ if err != nil {
+ err = wrapSkipFunc(err, "marshal function of type func(T) ([]byte, error)")
+ if mo.Flags.Get(jsonflags.ReportErrorsWithLegacySemantics) {
+ return internal.NewMarshalerError(va.Addr().Interface(), err, "MarshalFunc") // unlike unmarshal, always wrapped
+ }
+ err = newMarshalErrorBefore(enc, t, err)
+ return collapseSemanticErrors(err)
+ }
+ if err := enc.WriteValue(val); err != nil {
+ if mo.Flags.Get(jsonflags.ReportErrorsWithLegacySemantics) {
+ return internal.NewMarshalerError(va.Addr().Interface(), err, "MarshalFunc") // unlike unmarshal, always wrapped
+ }
+ if isSyntacticError(err) {
+ err = newMarshalErrorBefore(enc, t, err)
+ }
+ return err
+ }
+ return nil
+ },
+ }
+ return &Marshalers{fncVals: []typedMarshaler{typFnc}, fromAny: castableToFromAny(t)}
+}
+
+// MarshalToFunc constructs a type-specific marshaler that
+// specifies how to marshal values of type T.
+// T can be any type except a named pointer.
+// The function is always provided with a non-nil pointer value
+// if T is an interface or pointer type.
+//
+// The function must marshal exactly one JSON value by calling write methods
+// on the provided encoder. It may return [SkipFunc] such that marshaling can
+// move on to the next marshal function. However, no mutable method calls may
+// be called on the encoder if [SkipFunc] is returned.
+// The pointer to [jsontext.Encoder] and the value of T
+// must not be retained outside the function call.
+func MarshalToFunc[T any](fn func(*jsontext.Encoder, T) error) *Marshalers {
+ t := reflect.TypeFor[T]()
+ assertCastableTo(t, true)
+ typFnc := typedMarshaler{
+ typ: t,
+ fnc: func(enc *jsontext.Encoder, va addressableValue, mo *jsonopts.Struct) error {
+ xe := export.Encoder(enc)
+ prevDepth, prevLength := xe.Tokens.DepthLength()
+ xe.Flags.Set(jsonflags.WithinArshalCall | 1)
+ err := fn(enc, va.castTo(t).Interface().(T))
+ xe.Flags.Set(jsonflags.WithinArshalCall | 0)
+ currDepth, currLength := xe.Tokens.DepthLength()
+ if err == nil && (prevDepth != currDepth || prevLength+1 != currLength) {
+ err = errNonSingularValue
+ }
+ if err != nil {
+ if err == SkipFunc {
+ if prevDepth == currDepth && prevLength == currLength {
+ return SkipFunc
+ }
+ err = errSkipMutation
+ }
+ if mo.Flags.Get(jsonflags.ReportErrorsWithLegacySemantics) {
+ return internal.NewMarshalerError(va.Addr().Interface(), err, "MarshalToFunc") // unlike unmarshal, always wrapped
+ }
+ if !export.IsIOError(err) {
+ err = newSemanticErrorWithPosition(enc, t, prevDepth, prevLength, err)
+ }
+ return err
+ }
+ return nil
+ },
+ maySkip: true,
+ }
+ return &Marshalers{fncVals: []typedMarshaler{typFnc}, fromAny: castableToFromAny(t)}
+}
+
+// UnmarshalFunc constructs a type-specific unmarshaler that
+// specifies how to unmarshal values of type T.
+// T must be an unnamed pointer or an interface type.
+// The function is always provided with a non-nil pointer value.
+//
+// The function must unmarshal exactly one JSON value.
+// The input []byte must not be mutated.
+// The input []byte and value T must not be retained outside the function call.
+// It may not return [SkipFunc].
+func UnmarshalFunc[T any](fn func([]byte, T) error) *Unmarshalers {
+ t := reflect.TypeFor[T]()
+ assertCastableTo(t, false)
+ typFnc := typedUnmarshaler{
+ typ: t,
+ fnc: func(dec *jsontext.Decoder, va addressableValue, uo *jsonopts.Struct) error {
+ val, err := dec.ReadValue()
+ if err != nil {
+ return err // must be a syntactic or I/O error
+ }
+ err = fn(val, va.castTo(t).Interface().(T))
+ if err != nil {
+ err = wrapSkipFunc(err, "unmarshal function of type func([]byte, T) error")
+ if uo.Flags.Get(jsonflags.ReportErrorsWithLegacySemantics) {
+ return err // unlike marshal, never wrapped
+ }
+ err = newUnmarshalErrorAfter(dec, t, err)
+ return collapseSemanticErrors(err)
+ }
+ return nil
+ },
+ }
+ return &Unmarshalers{fncVals: []typedUnmarshaler{typFnc}, fromAny: castableToFromAny(t)}
+}
+
+// UnmarshalFromFunc constructs a type-specific unmarshaler that
+// specifies how to unmarshal values of type T.
+// T must be an unnamed pointer or an interface type.
+// The function is always provided with a non-nil pointer value.
+//
+// The function must unmarshal exactly one JSON value by calling read methods
+// on the provided decoder. It may return [SkipFunc] such that unmarshaling can
+// move on to the next unmarshal function. However, no mutable method calls may
+// be called on the decoder if [SkipFunc] is returned.
+// The pointer to [jsontext.Decoder] and the value of T
+// must not be retained outside the function call.
+func UnmarshalFromFunc[T any](fn func(*jsontext.Decoder, T) error) *Unmarshalers {
+ t := reflect.TypeFor[T]()
+ assertCastableTo(t, false)
+ typFnc := typedUnmarshaler{
+ typ: t,
+ fnc: func(dec *jsontext.Decoder, va addressableValue, uo *jsonopts.Struct) error {
+ xd := export.Decoder(dec)
+ prevDepth, prevLength := xd.Tokens.DepthLength()
+ xd.Flags.Set(jsonflags.WithinArshalCall | 1)
+ err := fn(dec, va.castTo(t).Interface().(T))
+ xd.Flags.Set(jsonflags.WithinArshalCall | 0)
+ currDepth, currLength := xd.Tokens.DepthLength()
+ if err == nil && (prevDepth != currDepth || prevLength+1 != currLength) {
+ err = errNonSingularValue
+ }
+ if err != nil {
+ if err == SkipFunc {
+ if prevDepth == currDepth && prevLength == currLength {
+ return SkipFunc
+ }
+ err = errSkipMutation
+ }
+ if uo.Flags.Get(jsonflags.ReportErrorsWithLegacySemantics) {
+ if err2 := xd.SkipUntil(prevDepth, prevLength+1); err2 != nil {
+ return err2
+ }
+ return err // unlike marshal, never wrapped
+ }
+ if !isSyntacticError(err) && !export.IsIOError(err) {
+ err = newSemanticErrorWithPosition(dec, t, prevDepth, prevLength, err)
+ }
+ return err
+ }
+ return nil
+ },
+ maySkip: true,
+ }
+ return &Unmarshalers{fncVals: []typedUnmarshaler{typFnc}, fromAny: castableToFromAny(t)}
+}
+
+// assertCastableTo asserts that "to" is a valid type to be casted to.
+// These are the Go types that type-specific arshalers may operate upon.
+//
+// Let AllTypes be the universal set of all possible Go types.
+// This function generally asserts that:
+//
+// len([from for from in AllTypes if castableTo(from, to)]) > 0
+//
+// otherwise it panics.
+//
+// As a special-case if marshal is false, then we forbid any non-pointer or
+// non-interface type since it is almost always a bug trying to unmarshal
+// into something where the end-user caller did not pass in an addressable value
+// since they will not observe the mutations.
+func assertCastableTo(to reflect.Type, marshal bool) {
+ switch to.Kind() {
+ case reflect.Interface:
+ return
+ case reflect.Pointer:
+ // Only allow unnamed pointers to be consistent with the fact that
+ // taking the address of a value produces an unnamed pointer type.
+ if to.Name() == "" {
+ return
+ }
+ default:
+ // Technically, non-pointer types are permissible for unmarshal.
+ // However, they are often a bug since the receiver would be immutable.
+ // Thus, only allow them for marshaling.
+ if marshal {
+ return
+ }
+ }
+ if marshal {
+ panic(fmt.Sprintf("input type %v must be an interface type, an unnamed pointer type, or a non-pointer type", to))
+ } else {
+ panic(fmt.Sprintf("input type %v must be an interface type or an unnamed pointer type", to))
+ }
+}
+
+// castableTo checks whether values of type "from" can be casted to type "to".
+// Nil pointer or interface "from" values are never considered castable.
+//
+// This function must be kept in sync with addressableValue.castTo.
+func castableTo(from, to reflect.Type) bool {
+ switch to.Kind() {
+ case reflect.Interface:
+ // TODO: This breaks when ordinary interfaces can have type sets
+ // since interfaces now exist where only the value form of a type (T)
+ // implements the interface, but not the pointer variant (*T).
+ // See https://go.dev/issue/45346.
+ return reflect.PointerTo(from).Implements(to)
+ case reflect.Pointer:
+ // Common case for unmarshaling.
+ // From must be a concrete or interface type.
+ return reflect.PointerTo(from) == to
+ default:
+ // Common case for marshaling.
+ // From must be a concrete type.
+ return from == to
+ }
+}
+
+// castTo casts va to the specified type.
+// If the type is an interface, then the underlying type will always
+// be a non-nil pointer to a concrete type.
+//
+// Requirement: castableTo(va.Type(), to) must hold.
+func (va addressableValue) castTo(to reflect.Type) reflect.Value {
+ switch to.Kind() {
+ case reflect.Interface:
+ return va.Addr().Convert(to)
+ case reflect.Pointer:
+ return va.Addr()
+ default:
+ return va.Value
+ }
+}
+
+// castableToFromAny reports whether "to" can be casted to from any
+// of the dynamic types used to represent arbitrary JSON.
+func castableToFromAny(to reflect.Type) bool {
+ for _, from := range []reflect.Type{anyType, boolType, stringType, float64Type, mapStringAnyType, sliceAnyType} {
+ if castableTo(from, to) {
+ return true
+ }
+ }
+ return false
+}
+
+func wrapSkipFunc(err error, what string) error {
+ if err == SkipFunc {
+ return errors.New(what + " cannot be skipped")
+ }
+ return err
+}
diff --git a/internal/json/arshal_inlined.go b/internal/json/arshal_inlined.go
new file mode 100644
index 0000000000..a53537aaaa
--- /dev/null
+++ b/internal/json/arshal_inlined.go
@@ -0,0 +1,230 @@
+// Copyright 2020 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+//go:build !goexperiment.jsonv2 || !go1.25
+
+package json
+
+import (
+ "bytes"
+ "errors"
+ "io"
+ "reflect"
+
+ "github.com/quay/clair/v4/internal/json/internal/jsonflags"
+ "github.com/quay/clair/v4/internal/json/internal/jsonopts"
+ "github.com/quay/clair/v4/internal/json/internal/jsonwire"
+ "github.com/quay/clair/v4/internal/json/jsontext"
+)
+
+// This package supports "inlining" a Go struct field, where the contents
+// of the serialized field (which must be a JSON object) are treated as if
+// they are part of the parent Go struct (which represents a JSON object).
+//
+// Generally, inlined fields are of a Go struct type, where the fields of the
+// nested struct are virtually hoisted up to the parent struct using rules
+// similar to how Go embedding works (but operating within the JSON namespace).
+//
+// However, inlined fields may also be of a Go map type with a string key or
+// a jsontext.Value. Such inlined fields are called "fallback" fields since they
+// represent any arbitrary JSON object member. Explicitly named fields take
+// precedence over the inlined fallback. Only one inlined fallback is allowed.
+
+var errRawInlinedNotObject = errors.New("inlined raw value must be a JSON object")
+
+var jsontextValueType = reflect.TypeFor[jsontext.Value]()
+
+// marshalInlinedFallbackAll marshals all the members in an inlined fallback.
+func marshalInlinedFallbackAll(enc *jsontext.Encoder, va addressableValue, mo *jsonopts.Struct, f *structField, insertUnquotedName func([]byte) bool) error {
+ v := addressableValue{va.Field(f.index0), va.forcedAddr} // addressable if struct value is addressable
+ if len(f.index) > 0 {
+ v = v.fieldByIndex(f.index, false)
+ if !v.IsValid() {
+ return nil // implies a nil inlined field
+ }
+ }
+ v = v.indirect(false)
+ if !v.IsValid() {
+ return nil
+ }
+
+ if v.Type() == jsontextValueType {
+ // TODO(https://go.dev/issue/62121): Use reflect.Value.AssertTo.
+ b := *v.Addr().Interface().(*jsontext.Value)
+ if len(b) == 0 { // TODO: Should this be nil? What if it were all whitespace?
+ return nil
+ }
+
+ dec := export.GetBufferedDecoder(b)
+ defer export.PutBufferedDecoder(dec)
+ xd := export.Decoder(dec)
+ xd.Flags.Set(jsonflags.AllowDuplicateNames | jsonflags.AllowInvalidUTF8 | 1)
+
+ tok, err := dec.ReadToken()
+ if err != nil {
+ if err == io.EOF {
+ err = io.ErrUnexpectedEOF
+ }
+ return newMarshalErrorBefore(enc, v.Type(), err)
+ }
+ if tok.Kind() != '{' {
+ return newMarshalErrorBefore(enc, v.Type(), errRawInlinedNotObject)
+ }
+ for dec.PeekKind() != '}' {
+ // Parse the JSON object name.
+ var flags jsonwire.ValueFlags
+ val, err := xd.ReadValue(&flags)
+ if err != nil {
+ return newMarshalErrorBefore(enc, v.Type(), err)
+ }
+ if insertUnquotedName != nil {
+ name := jsonwire.UnquoteMayCopy(val, flags.IsVerbatim())
+ if !insertUnquotedName(name) {
+ return newDuplicateNameError(enc.StackPointer().Parent(), val, enc.OutputOffset())
+ }
+ }
+ if err := enc.WriteValue(val); err != nil {
+ return err
+ }
+
+ // Parse the JSON object value.
+ val, err = xd.ReadValue(&flags)
+ if err != nil {
+ return newMarshalErrorBefore(enc, v.Type(), err)
+ }
+ if err := enc.WriteValue(val); err != nil {
+ return err
+ }
+ }
+ if _, err := dec.ReadToken(); err != nil {
+ return newMarshalErrorBefore(enc, v.Type(), err)
+ }
+ if err := xd.CheckEOF(); err != nil {
+ return newMarshalErrorBefore(enc, v.Type(), err)
+ }
+ return nil
+ } else {
+ m := v // must be a map[~string]V
+ n := m.Len()
+ if n == 0 {
+ return nil
+ }
+ mk := newAddressableValue(m.Type().Key())
+ mv := newAddressableValue(m.Type().Elem())
+ marshalKey := func(mk addressableValue) error {
+ b, err := jsonwire.AppendQuote(enc.AvailableBuffer(), mk.String(), &mo.Flags)
+ if err != nil {
+ return newMarshalErrorBefore(enc, m.Type().Key(), err)
+ }
+ if insertUnquotedName != nil {
+ isVerbatim := bytes.IndexByte(b, '\\') < 0
+ name := jsonwire.UnquoteMayCopy(b, isVerbatim)
+ if !insertUnquotedName(name) {
+ return newDuplicateNameError(enc.StackPointer().Parent(), b, enc.OutputOffset())
+ }
+ }
+ return enc.WriteValue(b)
+ }
+ marshalVal := f.fncs.marshal
+ if mo.Marshalers != nil {
+ marshalVal, _ = mo.Marshalers.(*Marshalers).lookup(marshalVal, mv.Type())
+ }
+ if !mo.Flags.Get(jsonflags.Deterministic) || n <= 1 {
+ for iter := m.MapRange(); iter.Next(); {
+ mk.SetIterKey(iter)
+ if err := marshalKey(mk); err != nil {
+ return err
+ }
+ mv.Set(iter.Value())
+ if err := marshalVal(enc, mv, mo); err != nil {
+ return err
+ }
+ }
+ } else {
+ names := getStrings(n)
+ for i, iter := 0, m.Value.MapRange(); i < n && iter.Next(); i++ {
+ mk.SetIterKey(iter)
+ (*names)[i] = mk.String()
+ }
+ names.Sort()
+ for _, name := range *names {
+ mk.SetString(name)
+ if err := marshalKey(mk); err != nil {
+ return err
+ }
+ // TODO(https://go.dev/issue/57061): Use mv.SetMapIndexOf.
+ mv.Set(m.MapIndex(mk.Value))
+ if err := marshalVal(enc, mv, mo); err != nil {
+ return err
+ }
+ }
+ putStrings(names)
+ }
+ return nil
+ }
+}
+
+// unmarshalInlinedFallbackNext unmarshals only the next member in an inlined fallback.
+func unmarshalInlinedFallbackNext(dec *jsontext.Decoder, va addressableValue, uo *jsonopts.Struct, f *structField, quotedName, unquotedName []byte) error {
+ v := addressableValue{va.Field(f.index0), va.forcedAddr} // addressable if struct value is addressable
+ if len(f.index) > 0 {
+ v = v.fieldByIndex(f.index, true)
+ }
+ v = v.indirect(true)
+
+ if v.Type() == jsontextValueType {
+ b := v.Addr().Interface().(*jsontext.Value)
+ if len(*b) == 0 { // TODO: Should this be nil? What if it were all whitespace?
+ *b = append(*b, '{')
+ } else {
+ *b = jsonwire.TrimSuffixWhitespace(*b)
+ if jsonwire.HasSuffixByte(*b, '}') {
+ // TODO: When merging into an object for the first time,
+ // should we verify that it is valid?
+ *b = jsonwire.TrimSuffixByte(*b, '}')
+ *b = jsonwire.TrimSuffixWhitespace(*b)
+ if !jsonwire.HasSuffixByte(*b, ',') && !jsonwire.HasSuffixByte(*b, '{') {
+ *b = append(*b, ',')
+ }
+ } else {
+ return newUnmarshalErrorAfterWithSkipping(dec, uo, v.Type(), errRawInlinedNotObject)
+ }
+ }
+ *b = append(*b, quotedName...)
+ *b = append(*b, ':')
+ val, err := dec.ReadValue()
+ if err != nil {
+ return err
+ }
+ *b = append(*b, val...)
+ *b = append(*b, '}')
+ return nil
+ } else {
+ name := string(unquotedName) // TODO: Intern this?
+
+ m := v // must be a map[~string]V
+ if m.IsNil() {
+ m.Set(reflect.MakeMap(m.Type()))
+ }
+ mk := reflect.ValueOf(name)
+ if mkt := m.Type().Key(); mkt != stringType {
+ mk = mk.Convert(mkt)
+ }
+ mv := newAddressableValue(m.Type().Elem()) // TODO: Cache across calls?
+ if v2 := m.MapIndex(mk); v2.IsValid() {
+ mv.Set(v2)
+ }
+
+ unmarshal := f.fncs.unmarshal
+ if uo.Unmarshalers != nil {
+ unmarshal, _ = uo.Unmarshalers.(*Unmarshalers).lookup(unmarshal, mv.Type())
+ }
+ err := unmarshal(dec, mv, uo)
+ m.SetMapIndex(mk, mv.Value)
+ if err != nil {
+ return err
+ }
+ return nil
+ }
+}
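> Example

When the inlined fallback is a `jsontext.Value`, `unmarshalInlinedFallbackNext` merges one member into the raw object buffer by trimming trailing whitespace, reopening the object (dropping the closing `}` and adding a comma if needed), then appending `"name":value}`. A minimal sketch of just that buffer surgery, with validation and error handling elided:

```go
package main

import (
	"bytes"
	"fmt"
)

// appendMember merges one quotedName:value member into a raw JSON object
// buffer, following the same trim-and-reopen steps as the fallback above.
func appendMember(b, quotedName, value []byte) []byte {
	if len(b) == 0 {
		// First member: start a fresh object.
		b = append(b, '{')
	} else {
		// Reopen the existing object: drop trailing whitespace and '}',
		// then add a separating comma unless the object is empty or
		// already ends with one.
		b = bytes.TrimRight(b, " \t\r\n")
		b = bytes.TrimSuffix(b, []byte("}"))
		b = bytes.TrimRight(b, " \t\r\n")
		if !bytes.HasSuffix(b, []byte(",")) && !bytes.HasSuffix(b, []byte("{")) {
			b = append(b, ',')
		}
	}
	b = append(b, quotedName...)
	b = append(b, ':')
	b = append(b, value...)
	return append(b, '}')
}

func main() {
	var b []byte
	b = appendMember(b, []byte(`"a"`), []byte(`1`))
	fmt.Println(string(b)) // {"a":1}
	b = appendMember(b, []byte(`"b"`), []byte(`true`))
	fmt.Println(string(b)) // {"a":1,"b":true}
}
```

The real implementation uses the `jsonwire` trim helpers instead of the `bytes` calls shown here, and returns `errRawInlinedNotObject` when the existing buffer does not end in `}`.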
diff --git a/internal/json/arshal_methods.go b/internal/json/arshal_methods.go
new file mode 100644
index 0000000000..c58fa9cbad
--- /dev/null
+++ b/internal/json/arshal_methods.go
@@ -0,0 +1,337 @@
+// Copyright 2020 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+//go:build !goexperiment.jsonv2 || !go1.25
+
+package json
+
+import (
+ "encoding"
+ "errors"
+ "reflect"
+
+ "github.com/quay/clair/v4/internal/json/internal"
+ "github.com/quay/clair/v4/internal/json/internal/jsonflags"
+ "github.com/quay/clair/v4/internal/json/internal/jsonopts"
+ "github.com/quay/clair/v4/internal/json/internal/jsonwire"
+ "github.com/quay/clair/v4/internal/json/jsontext"
+)
+
+var errNonStringValue = errors.New("JSON value must be string type")
+
+// Interfaces for custom serialization.
+var (
+ jsonMarshalerType = reflect.TypeFor[Marshaler]()
+ jsonMarshalerToType = reflect.TypeFor[MarshalerTo]()
+ jsonUnmarshalerType = reflect.TypeFor[Unmarshaler]()
+ jsonUnmarshalerFromType = reflect.TypeFor[UnmarshalerFrom]()
+ textAppenderType = reflect.TypeFor[encoding.TextAppender]()
+ textMarshalerType = reflect.TypeFor[encoding.TextMarshaler]()
+ textUnmarshalerType = reflect.TypeFor[encoding.TextUnmarshaler]()
+
+ allMarshalerTypes = []reflect.Type{jsonMarshalerToType, jsonMarshalerType, textAppenderType, textMarshalerType}
+ allUnmarshalerTypes = []reflect.Type{jsonUnmarshalerFromType, jsonUnmarshalerType, textUnmarshalerType}
+ allMethodTypes = append(allMarshalerTypes, allUnmarshalerTypes...)
+)
+
+// Marshaler is implemented by types that can marshal themselves.
+// It is recommended that types implement [MarshalerTo] unless the implementation
+// is trying to avoid a hard dependency on the "jsontext" package.
+//
+// It is recommended that implementations return a buffer that is safe
+// for the caller to retain and potentially mutate.
+type Marshaler interface {
+ MarshalJSON() ([]byte, error)
+}
+
+// MarshalerTo is implemented by types that can marshal themselves.
+// It is recommended that types implement MarshalerTo instead of [Marshaler]
+// since this is both more performant and flexible.
+// If a type implements both Marshaler and MarshalerTo,
+// then MarshalerTo takes precedence. In such a case, both implementations
+// should aim to have equivalent behavior for the default marshal options.
+//
+// The implementation must write only one JSON value to the Encoder and
+// must not retain the pointer to [jsontext.Encoder].
+type MarshalerTo interface {
+ MarshalJSONTo(*jsontext.Encoder) error
+
+ // TODO: Should users call the MarshalEncode function or
+ // should/can they call this method directly? Does it matter?
+}
+
+// Unmarshaler is implemented by types that can unmarshal themselves.
+// It is recommended that types implement [UnmarshalerFrom] unless the implementation
+// is trying to avoid a hard dependency on the "jsontext" package.
+//
+// The input can be assumed to be a valid encoding of a JSON value
+// if called from unmarshal functionality in this package.
+// UnmarshalJSON must copy the JSON data if it is retained after returning.
+// It is recommended that UnmarshalJSON implement merge semantics when
+// unmarshaling into a pre-populated value.
+//
+// Implementations must not retain or mutate the input []byte.
+type Unmarshaler interface {
+ UnmarshalJSON([]byte) error
+}
+
+// UnmarshalerFrom is implemented by types that can unmarshal themselves.
+// It is recommended that types implement UnmarshalerFrom instead of [Unmarshaler]
+// since this is both more performant and flexible.
+// If a type implements both Unmarshaler and UnmarshalerFrom,
+// then UnmarshalerFrom takes precedence. In such a case, both implementations
+// should aim to have equivalent behavior for the default unmarshal options.
+//
+// The implementation must read only one JSON value from the Decoder.
+// It is recommended that UnmarshalJSONFrom implement merge semantics when
+// unmarshaling into a pre-populated value.
+//
+// Implementations must not retain the pointer to [jsontext.Decoder].
+type UnmarshalerFrom interface {
+ UnmarshalJSONFrom(*jsontext.Decoder) error
+
+ // TODO: Should users call the UnmarshalDecode function or
+ // should/can they call this method directly? Does it matter?
+}
+
+func makeMethodArshaler(fncs *arshaler, t reflect.Type) *arshaler {
+ // Avoid injecting method arshaler on the pointer or interface version
+ // to avoid ever calling the method on a nil pointer or interface receiver.
+ // Let it be injected on the value receiver (which is always addressable).
+ if t.Kind() == reflect.Pointer || t.Kind() == reflect.Interface {
+ return fncs
+ }
+
+ if needAddr, ok := implements(t, textMarshalerType); ok {
+ fncs.nonDefault = true
+ prevMarshal := fncs.marshal
+ fncs.marshal = func(enc *jsontext.Encoder, va addressableValue, mo *jsonopts.Struct) error {
+ if mo.Flags.Get(jsonflags.CallMethodsWithLegacySemantics) &&
+ (needAddr && va.forcedAddr) {
+ return prevMarshal(enc, va, mo)
+ }
+ marshaler := va.Addr().Interface().(encoding.TextMarshaler)
+ if err := export.Encoder(enc).AppendRaw('"', false, func(b []byte) ([]byte, error) {
+ b2, err := marshaler.MarshalText()
+ return append(b, b2...), err
+ }); err != nil {
+ err = wrapSkipFunc(err, "marshal method")
+ if mo.Flags.Get(jsonflags.ReportErrorsWithLegacySemantics) {
+ return internal.NewMarshalerError(va.Addr().Interface(), err, "MarshalText") // unlike unmarshal, always wrapped
+ }
+ if !isSemanticError(err) && !export.IsIOError(err) {
+ err = newMarshalErrorBefore(enc, t, err)
+ }
+ return err
+ }
+ return nil
+ }
+ }
+
+ if needAddr, ok := implements(t, textAppenderType); ok {
+ fncs.nonDefault = true
+ prevMarshal := fncs.marshal
+ fncs.marshal = func(enc *jsontext.Encoder, va addressableValue, mo *jsonopts.Struct) (err error) {
+ if mo.Flags.Get(jsonflags.CallMethodsWithLegacySemantics) &&
+ (needAddr && va.forcedAddr) {
+ return prevMarshal(enc, va, mo)
+ }
+ appender := va.Addr().Interface().(encoding.TextAppender)
+ if err := export.Encoder(enc).AppendRaw('"', false, appender.AppendText); err != nil {
+ err = wrapSkipFunc(err, "append method")
+ if mo.Flags.Get(jsonflags.ReportErrorsWithLegacySemantics) {
+ return internal.NewMarshalerError(va.Addr().Interface(), err, "AppendText") // unlike unmarshal, always wrapped
+ }
+ if !isSemanticError(err) && !export.IsIOError(err) {
+ err = newMarshalErrorBefore(enc, t, err)
+ }
+ return err
+ }
+ return nil
+ }
+ }
+
+ if needAddr, ok := implements(t, jsonMarshalerType); ok {
+ fncs.nonDefault = true
+ prevMarshal := fncs.marshal
+ fncs.marshal = func(enc *jsontext.Encoder, va addressableValue, mo *jsonopts.Struct) error {
+ if mo.Flags.Get(jsonflags.CallMethodsWithLegacySemantics) &&
+ ((needAddr && va.forcedAddr) || export.Encoder(enc).Tokens.Last.NeedObjectName()) {
+ return prevMarshal(enc, va, mo)
+ }
+ marshaler := va.Addr().Interface().(Marshaler)
+ val, err := marshaler.MarshalJSON()
+ if err != nil {
+ err = wrapSkipFunc(err, "marshal method")
+ if mo.Flags.Get(jsonflags.ReportErrorsWithLegacySemantics) {
+ return internal.NewMarshalerError(va.Addr().Interface(), err, "MarshalJSON") // unlike unmarshal, always wrapped
+ }
+ err = newMarshalErrorBefore(enc, t, err)
+ return collapseSemanticErrors(err)
+ }
+ if err := enc.WriteValue(val); err != nil {
+ if mo.Flags.Get(jsonflags.ReportErrorsWithLegacySemantics) {
+ return internal.NewMarshalerError(va.Addr().Interface(), err, "MarshalJSON") // unlike unmarshal, always wrapped
+ }
+ if isSyntacticError(err) {
+ err = newMarshalErrorBefore(enc, t, err)
+ }
+ return err
+ }
+ return nil
+ }
+ }
+
+ if needAddr, ok := implements(t, jsonMarshalerToType); ok {
+ fncs.nonDefault = true
+ prevMarshal := fncs.marshal
+ fncs.marshal = func(enc *jsontext.Encoder, va addressableValue, mo *jsonopts.Struct) error {
+ if mo.Flags.Get(jsonflags.CallMethodsWithLegacySemantics) &&
+ ((needAddr && va.forcedAddr) || export.Encoder(enc).Tokens.Last.NeedObjectName()) {
+ return prevMarshal(enc, va, mo)
+ }
+ xe := export.Encoder(enc)
+ prevDepth, prevLength := xe.Tokens.DepthLength()
+ xe.Flags.Set(jsonflags.WithinArshalCall | 1)
+ err := va.Addr().Interface().(MarshalerTo).MarshalJSONTo(enc)
+ xe.Flags.Set(jsonflags.WithinArshalCall | 0)
+ currDepth, currLength := xe.Tokens.DepthLength()
+ if (prevDepth != currDepth || prevLength+1 != currLength) && err == nil {
+ err = errNonSingularValue
+ }
+ if err != nil {
+ err = wrapSkipFunc(err, "marshal method")
+ if mo.Flags.Get(jsonflags.ReportErrorsWithLegacySemantics) {
+ return internal.NewMarshalerError(va.Addr().Interface(), err, "MarshalJSONTo") // unlike unmarshal, always wrapped
+ }
+ if !export.IsIOError(err) {
+ err = newSemanticErrorWithPosition(enc, t, prevDepth, prevLength, err)
+ }
+ return err
+ }
+ return nil
+ }
+ }
+
+ if _, ok := implements(t, textUnmarshalerType); ok {
+ fncs.nonDefault = true
+ fncs.unmarshal = func(dec *jsontext.Decoder, va addressableValue, uo *jsonopts.Struct) error {
+ xd := export.Decoder(dec)
+ var flags jsonwire.ValueFlags
+ val, err := xd.ReadValue(&flags)
+ if err != nil {
+ return err // must be a syntactic or I/O error
+ }
+ if val.Kind() == 'n' {
+ if !uo.Flags.Get(jsonflags.MergeWithLegacySemantics) {
+ va.SetZero()
+ }
+ return nil
+ }
+ if val.Kind() != '"' {
+ return newUnmarshalErrorAfter(dec, t, errNonStringValue)
+ }
+ s := jsonwire.UnquoteMayCopy(val, flags.IsVerbatim())
+ unmarshaler := va.Addr().Interface().(encoding.TextUnmarshaler)
+ if err := unmarshaler.UnmarshalText(s); err != nil {
+ err = wrapSkipFunc(err, "unmarshal method")
+ if uo.Flags.Get(jsonflags.ReportErrorsWithLegacySemantics) {
+ return err // unlike marshal, never wrapped
+ }
+ if !isSemanticError(err) && !isSyntacticError(err) && !export.IsIOError(err) {
+ err = newUnmarshalErrorAfter(dec, t, err)
+ }
+ return err
+ }
+ return nil
+ }
+ }
+
+ if _, ok := implements(t, jsonUnmarshalerType); ok {
+ fncs.nonDefault = true
+ prevUnmarshal := fncs.unmarshal
+ fncs.unmarshal = func(dec *jsontext.Decoder, va addressableValue, uo *jsonopts.Struct) error {
+ if uo.Flags.Get(jsonflags.CallMethodsWithLegacySemantics) &&
+ export.Decoder(dec).Tokens.Last.NeedObjectName() {
+ return prevUnmarshal(dec, va, uo)
+ }
+ val, err := dec.ReadValue()
+ if err != nil {
+ return err // must be a syntactic or I/O error
+ }
+ unmarshaler := va.Addr().Interface().(Unmarshaler)
+ if err := unmarshaler.UnmarshalJSON(val); err != nil {
+ err = wrapSkipFunc(err, "unmarshal method")
+ if uo.Flags.Get(jsonflags.ReportErrorsWithLegacySemantics) {
+ return err // unlike marshal, never wrapped
+ }
+ err = newUnmarshalErrorAfter(dec, t, err)
+ return collapseSemanticErrors(err)
+ }
+ return nil
+ }
+ }
+
+ if _, ok := implements(t, jsonUnmarshalerFromType); ok {
+ fncs.nonDefault = true
+ prevUnmarshal := fncs.unmarshal
+ fncs.unmarshal = func(dec *jsontext.Decoder, va addressableValue, uo *jsonopts.Struct) error {
+ if uo.Flags.Get(jsonflags.CallMethodsWithLegacySemantics) &&
+ export.Decoder(dec).Tokens.Last.NeedObjectName() {
+ return prevUnmarshal(dec, va, uo)
+ }
+ xd := export.Decoder(dec)
+ prevDepth, prevLength := xd.Tokens.DepthLength()
+ xd.Flags.Set(jsonflags.WithinArshalCall | 1)
+ err := va.Addr().Interface().(UnmarshalerFrom).UnmarshalJSONFrom(dec)
+ xd.Flags.Set(jsonflags.WithinArshalCall | 0)
+ currDepth, currLength := xd.Tokens.DepthLength()
+ if (prevDepth != currDepth || prevLength+1 != currLength) && err == nil {
+ err = errNonSingularValue
+ }
+ if err != nil {
+ err = wrapSkipFunc(err, "unmarshal method")
+ if uo.Flags.Get(jsonflags.ReportErrorsWithLegacySemantics) {
+ if err2 := xd.SkipUntil(prevDepth, prevLength+1); err2 != nil {
+ return err2
+ }
+ return err // unlike marshal, never wrapped
+ }
+ if !isSyntacticError(err) && !export.IsIOError(err) {
+ err = newSemanticErrorWithPosition(dec, t, prevDepth, prevLength, err)
+ }
+ return err
+ }
+ return nil
+ }
+ }
+
+ return fncs
+}
+
+// implementsAny is like t.Implements(ifaceType) for a list of interfaces,
+// but checks whether either t or reflect.PointerTo(t) implements the interface.
+func implementsAny(t reflect.Type, ifaceTypes ...reflect.Type) bool {
+ for _, ifaceType := range ifaceTypes {
+ if _, ok := implements(t, ifaceType); ok {
+ return true
+ }
+ }
+ return false
+}
+
+// implements is like t.Implements(ifaceType) but checks whether
+// either t or reflect.PointerTo(t) implements the interface.
+// It also reports whether the value needs to be addressed
+// in order to satisfy the interface.
+func implements(t, ifaceType reflect.Type) (needAddr, ok bool) {
+ switch {
+ case t.Implements(ifaceType):
+ return false, true
+ case reflect.PointerTo(t).Implements(ifaceType):
+ return true, true
+ default:
+ return false, false
+ }
+}
diff --git a/internal/json/arshal_time.go b/internal/json/arshal_time.go
new file mode 100644
index 0000000000..5cc6eebcd0
--- /dev/null
+++ b/internal/json/arshal_time.go
@@ -0,0 +1,777 @@
+// Copyright 2020 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+//go:build !goexperiment.jsonv2 || !go1.25
+
+package json
+
+import (
+ "bytes"
+ "cmp"
+ "errors"
+ "fmt"
+ "math"
+ "math/bits"
+ "reflect"
+ "strconv"
+ "strings"
+ "time"
+
+ "github.com/quay/clair/v4/internal/json/internal"
+ "github.com/quay/clair/v4/internal/json/internal/jsonflags"
+ "github.com/quay/clair/v4/internal/json/internal/jsonopts"
+ "github.com/quay/clair/v4/internal/json/internal/jsonwire"
+ "github.com/quay/clair/v4/internal/json/jsontext"
+)
+
+var (
+ timeDurationType = reflect.TypeFor[time.Duration]()
+ timeTimeType = reflect.TypeFor[time.Time]()
+)
+
+func makeTimeArshaler(fncs *arshaler, t reflect.Type) *arshaler {
+ // Ideally, time types would implement MarshalerTo and UnmarshalerFrom,
+ // but that would incur a dependency on package json from package time.
+ // Given how widely used time is, it is more acceptable that we incur a
+ // dependency on time from json.
+ //
+ // Injecting the arshaling functionality like this will not be identical
+ // to actually declaring methods on the time types since embedding of the
+ // time types will not be able to forward this functionality.
+ switch t {
+ case timeDurationType:
+ fncs.nonDefault = true
+ marshalNano := fncs.marshal
+ fncs.marshal = func(enc *jsontext.Encoder, va addressableValue, mo *jsonopts.Struct) error {
+ xe := export.Encoder(enc)
+ var m durationArshaler
+ if mo.Format != "" && mo.FormatDepth == xe.Tokens.Depth() {
+ if !m.initFormat(mo.Format) {
+ return newInvalidFormatError(enc, t, mo)
+ }
+ } else if mo.Flags.Get(jsonflags.FormatDurationAsNano) {
+ return marshalNano(enc, va, mo)
+ } else {
+ // TODO(https://go.dev/issue/71631): Decide on default duration representation.
+ return newMarshalErrorBefore(enc, t, errors.New("no default representation (see https://go.dev/issue/71631); specify an explicit format"))
+ }
+
+ // TODO(https://go.dev/issue/62121): Use reflect.Value.AssertTo.
+ m.td = *va.Addr().Interface().(*time.Duration)
+ k := stringOrNumberKind(!m.isNumeric() || xe.Tokens.Last.NeedObjectName() || mo.Flags.Get(jsonflags.StringifyNumbers))
+ if err := xe.AppendRaw(k, true, m.appendMarshal); err != nil {
+ if !isSyntacticError(err) && !export.IsIOError(err) {
+ err = newMarshalErrorBefore(enc, t, err)
+ }
+ return err
+ }
+ return nil
+ }
+ unmarshalNano := fncs.unmarshal
+ fncs.unmarshal = func(dec *jsontext.Decoder, va addressableValue, uo *jsonopts.Struct) error {
+ xd := export.Decoder(dec)
+ var u durationArshaler
+ if uo.Format != "" && uo.FormatDepth == xd.Tokens.Depth() {
+ if !u.initFormat(uo.Format) {
+ return newInvalidFormatError(dec, t, uo)
+ }
+ } else if uo.Flags.Get(jsonflags.FormatDurationAsNano) {
+ return unmarshalNano(dec, va, uo)
+ } else {
+ // TODO(https://go.dev/issue/71631): Decide on default duration representation.
+ return newUnmarshalErrorBeforeWithSkipping(dec, uo, t, errors.New("no default representation (see https://go.dev/issue/71631); specify an explicit format"))
+ }
+
+ stringify := !u.isNumeric() || xd.Tokens.Last.NeedObjectName() || uo.Flags.Get(jsonflags.StringifyNumbers)
+ var flags jsonwire.ValueFlags
+ td := va.Addr().Interface().(*time.Duration)
+ val, err := xd.ReadValue(&flags)
+ if err != nil {
+ return err
+ }
+ switch k := val.Kind(); k {
+ case 'n':
+ if !uo.Flags.Get(jsonflags.MergeWithLegacySemantics) {
+ *td = time.Duration(0)
+ }
+ return nil
+ case '"':
+ if !stringify {
+ break
+ }
+ val = jsonwire.UnquoteMayCopy(val, flags.IsVerbatim())
+ if err := u.unmarshal(val); err != nil {
+ return newUnmarshalErrorAfter(dec, t, err)
+ }
+ *td = u.td
+ return nil
+ case '0':
+ if stringify {
+ break
+ }
+ if err := u.unmarshal(val); err != nil {
+ return newUnmarshalErrorAfter(dec, t, err)
+ }
+ *td = u.td
+ return nil
+ }
+ return newUnmarshalErrorAfter(dec, t, nil)
+ }
+ case timeTimeType:
+ fncs.nonDefault = true
+ fncs.marshal = func(enc *jsontext.Encoder, va addressableValue, mo *jsonopts.Struct) (err error) {
+ xe := export.Encoder(enc)
+ var m timeArshaler
+ if mo.Format != "" && mo.FormatDepth == xe.Tokens.Depth() {
+ if !m.initFormat(mo.Format) {
+ return newInvalidFormatError(enc, t, mo)
+ }
+ }
+
+ // TODO(https://go.dev/issue/62121): Use reflect.Value.AssertTo.
+ m.tt = *va.Addr().Interface().(*time.Time)
+ k := stringOrNumberKind(!m.isNumeric() || xe.Tokens.Last.NeedObjectName() || mo.Flags.Get(jsonflags.StringifyNumbers))
+ if err := xe.AppendRaw(k, !m.hasCustomFormat(), m.appendMarshal); err != nil {
+ if mo.Flags.Get(jsonflags.ReportErrorsWithLegacySemantics) {
+ return internal.NewMarshalerError(va.Addr().Interface(), err, "MarshalJSON") // unlike unmarshal, always wrapped
+ }
+ if !isSyntacticError(err) && !export.IsIOError(err) {
+ err = newMarshalErrorBefore(enc, t, err)
+ }
+ return err
+ }
+ return nil
+ }
+ fncs.unmarshal = func(dec *jsontext.Decoder, va addressableValue, uo *jsonopts.Struct) (err error) {
+ xd := export.Decoder(dec)
+ var u timeArshaler
+ if uo.Format != "" && uo.FormatDepth == xd.Tokens.Depth() {
+ if !u.initFormat(uo.Format) {
+ return newInvalidFormatError(dec, t, uo)
+ }
+ } else if uo.Flags.Get(jsonflags.ParseTimeWithLooseRFC3339) {
+ u.looseRFC3339 = true
+ }
+
+ stringify := !u.isNumeric() || xd.Tokens.Last.NeedObjectName() || uo.Flags.Get(jsonflags.StringifyNumbers)
+ var flags jsonwire.ValueFlags
+ tt := va.Addr().Interface().(*time.Time)
+ val, err := xd.ReadValue(&flags)
+ if err != nil {
+ return err
+ }
+ switch k := val.Kind(); k {
+ case 'n':
+ if !uo.Flags.Get(jsonflags.MergeWithLegacySemantics) {
+ *tt = time.Time{}
+ }
+ return nil
+ case '"':
+ if !stringify {
+ break
+ }
+ val = jsonwire.UnquoteMayCopy(val, flags.IsVerbatim())
+ if err := u.unmarshal(val); err != nil {
+ if uo.Flags.Get(jsonflags.ReportErrorsWithLegacySemantics) {
+ return err // unlike marshal, never wrapped
+ }
+ return newUnmarshalErrorAfter(dec, t, err)
+ }
+ *tt = u.tt
+ return nil
+ case '0':
+ if stringify {
+ break
+ }
+ if err := u.unmarshal(val); err != nil {
+ if uo.Flags.Get(jsonflags.ReportErrorsWithLegacySemantics) {
+ return err // unlike marshal, never wrapped
+ }
+ return newUnmarshalErrorAfter(dec, t, err)
+ }
+ *tt = u.tt
+ return nil
+ }
+ return newUnmarshalErrorAfter(dec, t, nil)
+ }
+ }
+ return fncs
+}
+
+type durationArshaler struct {
+ td time.Duration
+
+ // base records the representation where:
+ // - 0 uses time.Duration.String
+ // - 1e0, 1e3, 1e6, or 1e9 use a decimal encoding of the duration as
+ // nanoseconds, microseconds, milliseconds, or seconds.
+ // - 8601 uses ISO 8601
+ base uint64
+}
+
+func (a *durationArshaler) initFormat(format string) (ok bool) {
+ switch format {
+ case "units":
+ a.base = 0
+ case "sec":
+ a.base = 1e9
+ case "milli":
+ a.base = 1e6
+ case "micro":
+ a.base = 1e3
+ case "nano":
+ a.base = 1e0
+ case "iso8601":
+ a.base = 8601
+ default:
+ return false
+ }
+ return true
+}
+
+func (a *durationArshaler) isNumeric() bool {
+ return a.base != 0 && a.base != 8601
+}
+
+func (a *durationArshaler) appendMarshal(b []byte) ([]byte, error) {
+ switch a.base {
+ case 0:
+ return append(b, a.td.String()...), nil
+ case 8601:
+ return appendDurationISO8601(b, a.td), nil
+ default:
+ return appendDurationBase10(b, a.td, a.base), nil
+ }
+}
+
+func (a *durationArshaler) unmarshal(b []byte) (err error) {
+ switch a.base {
+ case 0:
+ a.td, err = time.ParseDuration(string(b))
+ case 8601:
+ a.td, err = parseDurationISO8601(b)
+ default:
+ a.td, err = parseDurationBase10(b, a.base)
+ }
+ return err
+}
+
+type timeArshaler struct {
+ tt time.Time
+
+ // base records the representation where:
+ // - 0 uses RFC 3339 encoding of the timestamp
+ // - 1e0, 1e3, 1e6, or 1e9 use a decimal encoding of the timestamp as
+ // seconds, milliseconds, microseconds, or nanoseconds since Unix epoch.
+ // - math.MaxUint uses time.Time.Format to encode the timestamp
+ base uint64
+ format string // time format passed to time.Parse
+
+ looseRFC3339 bool
+}
+
+func (a *timeArshaler) initFormat(format string) bool {
+ // We assume that an exported constant in the time package will
+ // always start with an uppercase ASCII letter.
+ if len(format) == 0 {
+ return false
+ }
+ a.base = math.MaxUint // implies custom format
+ if c := format[0]; !('a' <= c && c <= 'z') && !('A' <= c && c <= 'Z') {
+ a.format = format
+ return true
+ }
+ switch format {
+ case "ANSIC":
+ a.format = time.ANSIC
+ case "UnixDate":
+ a.format = time.UnixDate
+ case "RubyDate":
+ a.format = time.RubyDate
+ case "RFC822":
+ a.format = time.RFC822
+ case "RFC822Z":
+ a.format = time.RFC822Z
+ case "RFC850":
+ a.format = time.RFC850
+ case "RFC1123":
+ a.format = time.RFC1123
+ case "RFC1123Z":
+ a.format = time.RFC1123Z
+ case "RFC3339":
+ a.base = 0
+ a.format = time.RFC3339
+ case "RFC3339Nano":
+ a.base = 0
+ a.format = time.RFC3339Nano
+ case "Kitchen":
+ a.format = time.Kitchen
+ case "Stamp":
+ a.format = time.Stamp
+ case "StampMilli":
+ a.format = time.StampMilli
+ case "StampMicro":
+ a.format = time.StampMicro
+ case "StampNano":
+ a.format = time.StampNano
+ case "DateTime":
+ a.format = time.DateTime
+ case "DateOnly":
+ a.format = time.DateOnly
+ case "TimeOnly":
+ a.format = time.TimeOnly
+ case "unix":
+ a.base = 1e0
+ case "unixmilli":
+ a.base = 1e3
+ case "unixmicro":
+ a.base = 1e6
+ case "unixnano":
+ a.base = 1e9
+ default:
+ // Reject any Go identifier in case new constants are supported.
+ if strings.TrimFunc(format, isLetterOrDigit) == "" {
+ return false
+ }
+ a.format = format
+ }
+ return true
+}
+
+func (a *timeArshaler) isNumeric() bool {
+ return int(a.base) > 0
+}
+
+func (a *timeArshaler) hasCustomFormat() bool {
+ return a.base == math.MaxUint
+}
+
+func (a *timeArshaler) appendMarshal(b []byte) ([]byte, error) {
+ switch a.base {
+ case 0:
+ format := cmp.Or(a.format, time.RFC3339Nano)
+ n0 := len(b)
+ b = a.tt.AppendFormat(b, format)
+ // Not all Go timestamps can be represented as valid RFC 3339.
+ // Explicitly check for these edge cases.
+ // See https://go.dev/issue/4556 and https://go.dev/issue/54580.
+ switch b := b[n0:]; {
+ case b[len("9999")] != '-': // year must be exactly 4 digits wide
+ return b, errors.New("year outside of range [0,9999]")
+ case b[len(b)-1] != 'Z':
+ c := b[len(b)-len("Z07:00")]
+ if ('0' <= c && c <= '9') || parseDec2(b[len(b)-len("07:00"):]) >= 24 {
+ return b, errors.New("timezone hour outside of range [0,23]")
+ }
+ }
+ return b, nil
+ case math.MaxUint:
+ return a.tt.AppendFormat(b, a.format), nil
+ default:
+ return appendTimeUnix(b, a.tt, a.base), nil
+ }
+}
+
+func (a *timeArshaler) unmarshal(b []byte) (err error) {
+ switch a.base {
+ case 0:
+ // Use time.Time.UnmarshalText to avoid possible string allocation.
+ if err := a.tt.UnmarshalText(b); err != nil {
+ return err
+ }
+ // TODO(https://go.dev/issue/57912):
+ // RFC 3339 specifies the grammar for a valid timestamp.
+ // However, the parsing functionality in "time" is too loose and
+ // incorrectly accepts invalid timestamps as valid.
+ // Remove these manual checks when "time" checks it for us.
+ newParseError := func(layout, value, layoutElem, valueElem, message string) error {
+ return &time.ParseError{Layout: layout, Value: value, LayoutElem: layoutElem, ValueElem: valueElem, Message: message}
+ }
+ switch {
+ case a.looseRFC3339:
+ return nil
+ case b[len("2006-01-02T")+1] == ':': // hour must be two digits
+ return newParseError(time.RFC3339, string(b), "15", string(b[len("2006-01-02T"):][:1]), "")
+ case b[len("2006-01-02T15:04:05")] == ',': // sub-second separator must be a period
+ return newParseError(time.RFC3339, string(b), ".", ",", "")
+ case b[len(b)-1] != 'Z':
+ switch {
+ case parseDec2(b[len(b)-len("07:00"):]) >= 24: // timezone hour must be in range
+ return newParseError(time.RFC3339, string(b), "Z07:00", string(b[len(b)-len("Z07:00"):]), ": timezone hour out of range")
+ case parseDec2(b[len(b)-len("00"):]) >= 60: // timezone minute must be in range
+ return newParseError(time.RFC3339, string(b), "Z07:00", string(b[len(b)-len("Z07:00"):]), ": timezone minute out of range")
+ }
+ }
+ return nil
+ case math.MaxUint:
+ a.tt, err = time.Parse(a.format, string(b))
+ return err
+ default:
+ a.tt, err = parseTimeUnix(b, a.base)
+ return err
+ }
+}
+
+// appendDurationBase10 appends d formatted as a decimal fractional number,
+// where pow10 is a power-of-10 used to scale down the number.
+func appendDurationBase10(b []byte, d time.Duration, pow10 uint64) []byte {
+ b, n := mayAppendDurationSign(b, d) // append sign
+ whole, frac := bits.Div64(0, n, uint64(pow10)) // compute whole and frac fields
+ b = strconv.AppendUint(b, whole, 10) // append whole field
+ return appendFracBase10(b, frac, pow10) // append frac field
+}
+
+// parseDurationBase10 parses d from a decimal fractional number,
+// where pow10 is a power-of-10 used to scale up the number.
+func parseDurationBase10(b []byte, pow10 uint64) (time.Duration, error) {
+ suffix, neg := consumeSign(b, false) // consume sign
+ wholeBytes, fracBytes := bytesCutByte(suffix, '.', true) // consume whole and frac fields
+ whole, okWhole := jsonwire.ParseUint(wholeBytes) // parse whole field; may overflow
+ frac, okFrac := parseFracBase10(fracBytes, pow10) // parse frac field
+ hi, lo := bits.Mul64(whole, uint64(pow10)) // overflow if hi > 0
+ sum, co := bits.Add64(lo, uint64(frac), 0) // overflow if co > 0
+ switch d := mayApplyDurationSign(sum, neg); { // overflow if neg != (d < 0)
+ case (!okWhole && whole != math.MaxUint64) || !okFrac:
+ return 0, fmt.Errorf("invalid duration %q: %w", b, strconv.ErrSyntax)
+ case !okWhole || hi > 0 || co > 0 || neg != (d < 0):
+ return 0, fmt.Errorf("invalid duration %q: %w", b, strconv.ErrRange)
+ default:
+ return d, nil
+ }
+}
+
+// appendDurationISO8601 appends an ISO 8601 duration with a restricted grammar,
+// where leading and trailing zeroes and zero-value designators are omitted.
+// It only uses hour, minute, and second designators since ISO 8601 defines
+// those as being "accurate", while year, month, week, and day are "nominal".
+func appendDurationISO8601(b []byte, d time.Duration) []byte {
+ if d == 0 {
+ return append(b, "PT0S"...)
+ }
+ b, n := mayAppendDurationSign(b, d)
+ b = append(b, "PT"...)
+ n, nsec := bits.Div64(0, n, 1e9) // compute nsec field
+ n, sec := bits.Div64(0, n, 60) // compute sec field
+ hour, min := bits.Div64(0, n, 60) // compute hour and min fields
+ if hour > 0 {
+ b = append(strconv.AppendUint(b, hour, 10), 'H')
+ }
+ if min > 0 {
+ b = append(strconv.AppendUint(b, min, 10), 'M')
+ }
+ if sec > 0 || nsec > 0 {
+ b = append(appendFracBase10(strconv.AppendUint(b, sec, 10), nsec, 1e9), 'S')
+ }
+ return b
+}
+
+// daysPerYear is the exact average number of days in a year according to
+// the Gregorian calender, which has an extra day each year that is
+// a multiple of 4, unless it is evenly divisible by 100 but not by 400.
+// This does not take into account leap seconds, which are not deterministic.
+const daysPerYear = 365.2425
+
+var errInaccurateDateUnits = errors.New("inaccurate year, month, week, or day units")
+
+// parseDurationISO8601 parses a duration according to ISO 8601-1:2019,
+// section 5.5.2.2 and 5.5.2.3 with the following restrictions or extensions:
+//
+// - A leading minus sign is permitted for negative duration according
+// to ISO 8601-2:2019, section 4.4.1.9. We do not permit negative values
+// for each "time scale component", which is permitted by section 4.4.1.1,
+// but rarely supported by parsers.
+//
+// - A leading plus sign is permitted (and ignored).
+// This is not required by ISO 8601, but not forbidden either.
+// There is some precedent for this as it is supported by the principle of
+// duration arithmetic as specified in ISO 8601-2:2019, section 14.1.
+// Of note, the JavaScript grammar for ISO 8601 permits a leading plus sign.
+//
+// - A fractional value is only permitted for accurate units
+// (i.e., hour, minute, and seconds) in the last time component,
+// which is permissible by ISO 8601-1:2019, section 5.5.2.3.
+//
+// - Both periods ('.') and commas (',') are supported as the separator
+// between the integer part and fraction part of a number,
+// as specified in ISO 8601-1:2019, section 3.2.6.
+// While ISO 8601 recommends comma as the default separator,
+// most formatters use a period.
+//
+// - Leading zeros are ignored. This is not required by ISO 8601,
+// but also not forbidden by the standard. Many parsers support this.
+//
+// - Lowercase designators are supported. This is not required by ISO 8601,
+// but also not forbidden by the standard. Many parsers support this.
+//
+// If the nominal units of year, month, week, or day are present,
+// this produces a best-effort value and also reports [errInaccurateDateUnits].
+//
+// The accepted grammar is identical to JavaScript's Duration:
+//
+// https://tc39.es/proposal-temporal/#prod-Duration
+//
+// We follow JavaScript's grammar as JSON itself is derived from JavaScript.
+// The Temporal.Duration.toJSON method is guaranteed to produce an output
+// that can be parsed by this function so long as arithmetic in JavaScript
+// does not use a largestUnit value higher than "hours" (which is the default).
+// Even if it does, this will do a best-effort parsing with inaccurate units,
+// but report [errInaccurateDateUnits].
+func parseDurationISO8601(b []byte) (time.Duration, error) {
+ var invalid, overflow, inaccurate, sawFrac bool
+ var sumNanos, n, co uint64
+
+ // cutBytes is like [bytes.Cut], but uses either c0 or c1 as the separator.
+ cutBytes := func(b []byte, c0, c1 byte) (prefix, suffix []byte, ok bool) {
+ for i, c := range b {
+ if c == c0 || c == c1 {
+ return b[:i], b[i+1:], true
+ }
+ }
+ return b, nil, false
+ }
+
+ // mayParseUnit attempts to parse another date or time number
+ // identified by the desHi and desLo unit characters.
+	// If the part is absent for the current unit, it returns b as is.
+ mayParseUnit := func(b []byte, desHi, desLo byte, unit time.Duration) []byte {
+ number, suffix, ok := cutBytes(b, desHi, desLo)
+ if !ok || sawFrac {
+ return b // designator is not present or already saw fraction, which can only be in the last component
+ }
+
+ // Parse the number.
+		// A fraction is allowed only for the accurate units in the last part.
+ whole, frac, ok := cutBytes(number, '.', ',')
+ if ok {
+ sawFrac = true
+ invalid = invalid || len(frac) == len("") || unit > time.Hour
+ if unit == time.Second {
+ n, ok = parsePaddedBase10(frac, uint64(time.Second))
+ invalid = invalid || !ok
+ } else {
+ f, err := strconv.ParseFloat("0."+string(frac), 64)
+ invalid = invalid || err != nil || len(bytes.Trim(frac[len("."):], "0123456789")) > 0
+ n = uint64(math.Round(f * float64(unit))) // never overflows since f is within [0..1]
+ }
+ sumNanos, co = bits.Add64(sumNanos, n, 0) // overflow if co > 0
+ overflow = overflow || co > 0
+ }
+ for len(whole) > 1 && whole[0] == '0' {
+ whole = whole[len("0"):] // trim leading zeros
+ }
+ n, ok := jsonwire.ParseUint(whole) // overflow if !ok && MaxUint64
+ hi, lo := bits.Mul64(n, uint64(unit)) // overflow if hi > 0
+ sumNanos, co = bits.Add64(sumNanos, lo, 0) // overflow if co > 0
+ invalid = invalid || (!ok && n != math.MaxUint64)
+ overflow = overflow || (!ok && n == math.MaxUint64) || hi > 0 || co > 0
+ inaccurate = inaccurate || unit > time.Hour
+ return suffix
+ }
+
+ suffix, neg := consumeSign(b, true)
+ prefix, suffix, okP := cutBytes(suffix, 'P', 'p')
+ durDate, durTime, okT := cutBytes(suffix, 'T', 't')
+ invalid = invalid || len(prefix) > 0 || !okP || (okT && len(durTime) == 0) || len(durDate)+len(durTime) == 0
+ if len(durDate) > 0 { // nominal portion of the duration
+ durDate = mayParseUnit(durDate, 'Y', 'y', time.Duration(daysPerYear*24*60*60*1e9))
+ durDate = mayParseUnit(durDate, 'M', 'm', time.Duration(daysPerYear/12*24*60*60*1e9))
+ durDate = mayParseUnit(durDate, 'W', 'w', time.Duration(7*24*60*60*1e9))
+ durDate = mayParseUnit(durDate, 'D', 'd', time.Duration(24*60*60*1e9))
+ invalid = invalid || len(durDate) > 0 // unknown elements
+ }
+ if len(durTime) > 0 { // accurate portion of the duration
+ durTime = mayParseUnit(durTime, 'H', 'h', time.Duration(60*60*1e9))
+ durTime = mayParseUnit(durTime, 'M', 'm', time.Duration(60*1e9))
+ durTime = mayParseUnit(durTime, 'S', 's', time.Duration(1e9))
+ invalid = invalid || len(durTime) > 0 // unknown elements
+ }
+ d := mayApplyDurationSign(sumNanos, neg)
+ overflow = overflow || (neg != (d < 0) && d != 0) // overflows signed duration
+
+ switch {
+ case invalid:
+ return 0, fmt.Errorf("invalid ISO 8601 duration %q: %w", b, strconv.ErrSyntax)
+ case overflow:
+ return 0, fmt.Errorf("invalid ISO 8601 duration %q: %w", b, strconv.ErrRange)
+ case inaccurate:
+ return d, fmt.Errorf("invalid ISO 8601 duration %q: %w", b, errInaccurateDateUnits)
+ default:
+ return d, nil
+ }
+}
+
+// mayAppendDurationSign appends a negative sign if d is negative.
+func mayAppendDurationSign(b []byte, d time.Duration) ([]byte, uint64) {
+ if d < 0 {
+ b = append(b, '-')
+ d *= -1
+ }
+ return b, uint64(d)
+}
+
+// mayApplyDurationSign negates n if neg is specified.
+func mayApplyDurationSign(n uint64, neg bool) time.Duration {
+ if neg {
+ return -1 * time.Duration(n)
+ } else {
+ return +1 * time.Duration(n)
+ }
+}
+
+// appendTimeUnix appends t formatted as a decimal fractional number,
+// where pow10 is a power-of-10 used to scale up the number.
+func appendTimeUnix(b []byte, t time.Time, pow10 uint64) []byte {
+ sec, nsec := t.Unix(), int64(t.Nanosecond())
+ if sec < 0 {
+ b = append(b, '-')
+ sec, nsec = negateSecNano(sec, nsec)
+ }
+ switch {
+ case pow10 == 1e0: // fast case where units is in seconds
+ b = strconv.AppendUint(b, uint64(sec), 10)
+ return appendFracBase10(b, uint64(nsec), 1e9)
+ case uint64(sec) < 1e9: // intermediate case where units is not seconds, but no overflow
+ b = strconv.AppendUint(b, uint64(sec)*uint64(pow10)+uint64(uint64(nsec)/(1e9/pow10)), 10)
+ return appendFracBase10(b, (uint64(nsec)*pow10)%1e9, 1e9)
+ default: // slow case where units is not seconds and overflow would occur
+ b = strconv.AppendUint(b, uint64(sec), 10)
+ b = appendPaddedBase10(b, uint64(nsec)/(1e9/pow10), pow10)
+ return appendFracBase10(b, (uint64(nsec)*pow10)%1e9, 1e9)
+ }
+}
+
+// parseTimeUnix parses t formatted as a decimal fractional number,
+// where pow10 is a power-of-10 used to scale down the number.
+func parseTimeUnix(b []byte, pow10 uint64) (time.Time, error) {
+ suffix, neg := consumeSign(b, false) // consume sign
+ wholeBytes, fracBytes := bytesCutByte(suffix, '.', true) // consume whole and frac fields
+ whole, okWhole := jsonwire.ParseUint(wholeBytes) // parse whole field; may overflow
+ frac, okFrac := parseFracBase10(fracBytes, 1e9/pow10) // parse frac field
+ var sec, nsec int64
+ switch {
+ case pow10 == 1e0: // fast case where units is in seconds
+ sec = int64(whole) // check overflow later after negation
+ nsec = int64(frac) // cannot overflow
+ case okWhole: // intermediate case where units is not seconds, but no overflow
+ sec = int64(whole / pow10) // check overflow later after negation
+ nsec = int64((whole%pow10)*(1e9/pow10) + frac) // cannot overflow
+ case !okWhole && whole == math.MaxUint64: // slow case where units is not seconds and overflow occurred
+ width := int(math.Log10(float64(pow10))) // compute len(strconv.Itoa(pow10-1))
+ whole, okWhole = jsonwire.ParseUint(wholeBytes[:len(wholeBytes)-width]) // parse the upper whole field
+ mid, _ := parsePaddedBase10(wholeBytes[len(wholeBytes)-width:], pow10) // parse the lower whole field
+ sec = int64(whole) // check overflow later after negation
+ nsec = int64(mid*(1e9/pow10) + frac) // cannot overflow
+ }
+ if neg {
+ sec, nsec = negateSecNano(sec, nsec)
+ }
+ switch t := time.Unix(sec, nsec).UTC(); {
+ case (!okWhole && whole != math.MaxUint64) || !okFrac:
+ return time.Time{}, fmt.Errorf("invalid time %q: %w", b, strconv.ErrSyntax)
+ case !okWhole || neg != (t.Unix() < 0):
+ return time.Time{}, fmt.Errorf("invalid time %q: %w", b, strconv.ErrRange)
+ default:
+ return t, nil
+ }
+}
+
+// negateSecNano negates a Unix timestamp, where nsec must be within [0, 1e9).
+func negateSecNano(sec, nsec int64) (int64, int64) {
+	sec = ^sec // twos-complement negation (i.e., -1*sec - 1)
+ nsec = -nsec + 1e9 // negate nsec and add 1e9 (which is the extra +1 from sec negation)
+ sec += int64(nsec / 1e9) // handle possible overflow of nsec if it started as zero
+ nsec %= 1e9 // ensure nsec stays within [0, 1e9)
+ return sec, nsec
+}
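The two's-complement trick in `negateSecNano` is subtle: `^sec` equals `-sec - 1`, borrowing one whole second, and adding `1e9` to the negated nanoseconds repays that borrowed second. Since the helper is unexported, the following standalone sketch duplicates it to show the normalized results:

```go
package main

import "fmt"

// negateSecNano mirrors the unexported helper above: it negates a
// (sec, nsec) Unix timestamp while keeping nsec within [0, 1e9).
func negateSecNano(sec, nsec int64) (int64, int64) {
	sec = ^sec         // -sec - 1: borrow one whole second
	nsec = -nsec + 1e9 // repay the borrowed second in nanoseconds
	sec += nsec / 1e9  // carry back if nsec started as zero
	nsec %= 1e9        // keep nsec within [0, 1e9)
	return sec, nsec
}

func main() {
	fmt.Println(negateSecNano(1, 250000000)) // -(1.25s) is -2s + 0.75s normalized
	fmt.Println(negateSecNano(0, 0))         // zero stays zero
}
```

Note that negating twice round-trips back to the original pair, which is why the same helper serves both formatting and parsing of negative timestamps.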
+
+// appendFracBase10 appends the fraction of n/max10,
+// where max10 is a power-of-10 that is larger than n.
+func appendFracBase10(b []byte, n, max10 uint64) []byte {
+ if n == 0 {
+ return b
+ }
+ return bytes.TrimRight(appendPaddedBase10(append(b, '.'), n, max10), "0")
+}
+
+// parseFracBase10 parses the fraction of n/max10,
+// where max10 is a power-of-10 that is larger than n.
+func parseFracBase10(b []byte, max10 uint64) (n uint64, ok bool) {
+ switch {
+ case len(b) == 0:
+ return 0, true
+ case len(b) < len(".0") || b[0] != '.':
+ return 0, false
+ }
+ return parsePaddedBase10(b[len("."):], max10)
+}
+
+// appendPaddedBase10 appends a zero-padded encoding of n,
+// where max10 is a power-of-10 that is larger than n.
+func appendPaddedBase10(b []byte, n, max10 uint64) []byte {
+ if n < max10/10 {
+ // Formatting of n is shorter than log10(max10),
+ // so add max10/10 to ensure the length is equal to log10(max10).
+ i := len(b)
+ b = strconv.AppendUint(b, n+max10/10, 10)
+ b[i]-- // subtract the addition of max10/10
+ return b
+ }
+ return strconv.AppendUint(b, n, 10)
+}
+
+// parsePaddedBase10 parses b as the zero-padded encoding of n,
+// where max10 is a power-of-10 that is larger than n.
+// Truncated suffix is treated as implicit zeros.
+// Extended suffix is ignored, but verified to contain only digits.
+func parsePaddedBase10(b []byte, max10 uint64) (n uint64, ok bool) {
+ pow10 := uint64(1)
+ for pow10 < max10 {
+ n *= 10
+ if len(b) > 0 {
+ if b[0] < '0' || '9' < b[0] {
+ return n, false
+ }
+ n += uint64(b[0] - '0')
+ b = b[1:]
+ }
+ pow10 *= 10
+ }
+ if len(b) > 0 && len(bytes.TrimRight(b, "0123456789")) > 0 {
+ return n, false // trailing characters are not digits
+ }
+ return n, true
+}
+
+// consumeSign consumes an optional leading negative or positive sign.
+func consumeSign(b []byte, allowPlus bool) ([]byte, bool) {
+ if len(b) > 0 {
+ if b[0] == '-' {
+ return b[len("-"):], true
+ } else if b[0] == '+' && allowPlus {
+ return b[len("+"):], false
+ }
+ }
+ return b, false
+}
+
+// bytesCutByte is similar to bytes.Cut(b, []byte{c}),
+// except c may optionally be included as part of the suffix.
+func bytesCutByte(b []byte, c byte, include bool) ([]byte, []byte) {
+ if i := bytes.IndexByte(b, c); i >= 0 {
+ if include {
+ return b[:i], b[i:]
+ }
+ return b[:i], b[i+1:]
+ }
+ return b, nil
+}
+
+// parseDec2 parses b as an unsigned, base-10, 2-digit number.
+// The result is undefined if digits are not base-10.
+func parseDec2(b []byte) byte {
+ if len(b) < 2 {
+ return 0
+ }
+ return 10*(b[0]-'0') + (b[1] - '0')
+}
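The padded base-10 helpers above rely on a small formatting trick: `appendPaddedBase10` zero-pads without a format string by adding `max10/10` before formatting and then decrementing the leading digit, while `parsePaddedBase10` treats a truncated digit string as implicit trailing zeros. A simplified standalone sketch (omitting the original's trailing-digit validation):

```go
package main

import (
	"fmt"
	"strconv"
)

// appendPaddedBase10 zero-pads n to log10(max10) digits:
// adding max10/10 forces the full width, and decrementing
// the leading digit undoes the addition.
func appendPaddedBase10(b []byte, n, max10 uint64) []byte {
	if n < max10/10 {
		i := len(b)
		b = strconv.AppendUint(b, n+max10/10, 10)
		b[i]-- // subtract the max10/10 added above
		return b
	}
	return strconv.AppendUint(b, n, 10)
}

// parsePaddedBase10 reads up to log10(max10) digits,
// treating a truncated suffix as implicit zeros.
func parsePaddedBase10(b []byte, max10 uint64) (uint64, bool) {
	var n uint64
	for pow10 := uint64(1); pow10 < max10; pow10 *= 10 {
		n *= 10
		if len(b) > 0 {
			if b[0] < '0' || '9' < b[0] {
				return n, false
			}
			n += uint64(b[0] - '0')
			b = b[1:]
		}
	}
	return n, true
}

func main() {
	fmt.Println(string(appendPaddedBase10(nil, 7, 1e3))) // "007"
	n, _ := parsePaddedBase10([]byte("5"), 1e9)
	fmt.Println(n) // ".5" seconds parses as 500000000 nanoseconds
}
```

The implicit-zeros behavior is what lets a fractional field like `.5` scale correctly to nanoseconds regardless of how many digits were written.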
diff --git a/internal/json/doc.go b/internal/json/doc.go
new file mode 100644
index 0000000000..a463168589
--- /dev/null
+++ b/internal/json/doc.go
@@ -0,0 +1,264 @@
+// Copyright 2020 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+//go:build !goexperiment.jsonv2 || !go1.25
+
+// Package json implements semantic processing of JSON as specified in RFC 8259.
+// JSON is a simple data interchange format that can represent
+// primitive data types such as booleans, strings, and numbers,
+// in addition to structured data types such as objects and arrays.
+//
+// [Marshal] and [Unmarshal] encode and decode Go values
+// to/from JSON text contained within a []byte.
+// [MarshalWrite] and [UnmarshalRead] operate on JSON text
+// by writing to or reading from an [io.Writer] or [io.Reader].
+// [MarshalEncode] and [UnmarshalDecode] operate on JSON text
+// by encoding to or decoding from a [jsontext.Encoder] or [jsontext.Decoder].
+// [Options] may be passed to each of the marshal or unmarshal functions
+// to configure the semantic behavior of marshaling and unmarshaling
+// (i.e., alter how JSON data is understood as Go data and vice versa).
+// [jsontext.Options] may also be passed to the marshal or unmarshal functions
+// to configure the syntactic behavior of encoding or decoding.
+//
+// The data types of JSON are mapped to/from the data types of Go based on
+// the closest logical equivalent between the two type systems. For example,
+// a JSON boolean corresponds with a Go bool,
+// a JSON string corresponds with a Go string,
+// a JSON number corresponds with a Go int, uint or float,
+// a JSON array corresponds with a Go slice or array, and
+// a JSON object corresponds with a Go struct or map.
+// See the documentation on [Marshal] and [Unmarshal] for a comprehensive list
+// of how the JSON and Go type systems correspond.
+//
+// Arbitrary Go types can customize their JSON representation by implementing
+// [Marshaler], [MarshalerTo], [Unmarshaler], or [UnmarshalerFrom].
+// This provides authors of Go types with control over how their types are
+// serialized as JSON. Alternatively, users can implement functions that match
+// [MarshalFunc], [MarshalToFunc], [UnmarshalFunc], or [UnmarshalFromFunc]
+// to specify the JSON representation for arbitrary types.
+// This provides callers of JSON functionality with control over
+// how any arbitrary type is serialized as JSON.
+//
+// # JSON Representation of Go structs
+//
+// A Go struct is naturally represented as a JSON object,
+// where each Go struct field corresponds with a JSON object member.
+// When marshaling, all Go struct fields are recursively encoded in depth-first
+// order as JSON object members except those that are ignored or omitted.
+// When unmarshaling, JSON object members are recursively decoded
+// into the corresponding Go struct fields.
+// Object members that do not match any struct fields,
+// also known as “unknown members”, are ignored by default or rejected
+// if [RejectUnknownMembers] is specified.
+//
+// The representation of each struct field can be customized in the
+// "json" struct field tag, where the tag is a comma separated list of options.
+// As a special case, if the entire tag is `json:"-"`,
+// then the field is ignored with regard to its JSON representation.
+// Some options also have equivalent behavior controlled by a caller-specified [Options].
+// Field-specified options take precedence over caller-specified options.
+//
+// The first option is the JSON object name override for the Go struct field.
+// If the name is not specified, then the Go struct field name
+// is used as the JSON object name. JSON names containing commas or quotes,
+// or names identical to "" or "-", can be specified using
+// a single-quoted string literal, where the syntax is identical to
+// the Go grammar for a double-quoted string literal,
+// but instead uses single quotes as the delimiters.
+// By default, unmarshaling uses case-sensitive matching to identify
+// the Go struct field associated with a JSON object name.
+//
+// After the name, the following tag options are supported:
+//
+// - omitzero: When marshaling, the "omitzero" option specifies that
+// the struct field should be omitted if the field value is zero
+// as determined by the "IsZero() bool" method if present,
+// otherwise based on whether the field is the zero Go value.
+// This option has no effect when unmarshaling.
+//
+// - omitempty: When marshaling, the "omitempty" option specifies that
+// the struct field should be omitted if the field value would have been
+// encoded as a JSON null, empty string, empty object, or empty array.
+// This option has no effect when unmarshaling.
+//
+// - string: The "string" option specifies that [StringifyNumbers]
+// be set when marshaling or unmarshaling a struct field value.
+// This causes numeric types to be encoded as a JSON number
+// within a JSON string, and to be decoded from a JSON string
+// containing the JSON number without any surrounding whitespace.
+// This extra level of encoding is often necessary since
+// many JSON parsers cannot precisely represent 64-bit integers.
+//
+// - case: When unmarshaling, the "case" option specifies how
+// JSON object names are matched with the JSON name for Go struct fields.
+// The option is a key-value pair specified as "case:value" where
+// the value must either be 'ignore' or 'strict'.
+// The 'ignore' value specifies that matching is case-insensitive
+// where dashes and underscores are also ignored. If multiple fields match,
+// the first declared field in breadth-first order takes precedence.
+// The 'strict' value specifies that matching is case-sensitive.
+// This takes precedence over the [MatchCaseInsensitiveNames] option.
+//
+// - inline: The "inline" option specifies that
+// the JSON representable content of this field type is to be promoted
+//   as if it were specified in the parent struct.
+// It is the JSON equivalent of Go struct embedding.
+// A Go embedded field is implicitly inlined unless an explicit JSON name
+// is specified. The inlined field must be a Go struct
+// (that does not implement any JSON methods), [jsontext.Value],
+// map[~string]T, or an unnamed pointer to such types. When marshaling,
+// inlined fields from a pointer type are omitted if it is nil.
+// Inlined fields of type [jsontext.Value] and map[~string]T are called
+// “inlined fallbacks” as they can represent all possible
+// JSON object members not directly handled by the parent struct.
+// Only one inlined fallback field may be specified in a struct,
+// while many non-fallback fields may be specified. This option
+// must not be specified with any other option (including the JSON name).
+//
+// - unknown: The "unknown" option is a specialized variant
+// of the inlined fallback to indicate that this Go struct field
+// contains any number of unknown JSON object members. The field type must
+// be a [jsontext.Value], map[~string]T, or an unnamed pointer to such types.
+// If [DiscardUnknownMembers] is specified when marshaling,
+// the contents of this field are ignored.
+// If [RejectUnknownMembers] is specified when unmarshaling,
+// any unknown object members are rejected regardless of whether
+// an inlined fallback with the "unknown" option exists. This option
+// must not be specified with any other option (including the JSON name).
+//
+// - format: The "format" option specifies a format flag
+// used to specialize the formatting of the field value.
+// The option is a key-value pair specified as "format:value" where
+// the value must be either a literal consisting of letters and numbers
+// (e.g., "format:RFC3339") or a single-quoted string literal
+// (e.g., "format:'2006-01-02'"). The interpretation of the format flag
+// is determined by the struct field type.
+//
+// The "omitzero" and "omitempty" options are mostly semantically identical.
+// The former is defined in terms of the Go type system,
+// while the latter in terms of the JSON type system.
+// Consequently they behave differently in some circumstances.
+// For example, only a nil slice or map is omitted under "omitzero", while
+// an empty slice or map is omitted under "omitempty" regardless of nilness.
+// The "omitzero" option is useful for types with a well-defined zero value
+// (e.g., [net/netip.Addr]) or have an IsZero method (e.g., [time.Time.IsZero]).
+//
+// Every Go struct corresponds to a list of JSON representable fields
+// which is constructed by performing a breadth-first search over
+// all struct fields (excluding unexported or ignored fields),
+// where the search recursively descends into inlined structs.
+// The set of non-inlined fields in a struct must have unique JSON names.
+// If multiple fields all have the same JSON name, then the one
+// at shallowest depth takes precedence and the other fields at deeper depths
+// are excluded from the list of JSON representable fields.
+// If multiple fields at the shallowest depth have the same JSON name,
+// but exactly one is explicitly tagged with a JSON name,
+// then that field takes precedence and all others are excluded from the list.
+// This is analogous to Go visibility rules for struct field selection
+// with embedded struct types.
+//
+// Marshaling or unmarshaling a non-empty struct
+// without any JSON representable fields results in a [SemanticError].
+// Unexported fields must not have any `json` tags except for `json:"-"`.
+//
+// # Security Considerations
+//
+// JSON is frequently used as a data interchange format to communicate
+// between different systems, possibly implemented in different languages.
+// For interoperability and security reasons, it is important that
+// all implementations agree upon the semantic meaning of the data.
+//
+// [For example, suppose we have two micro-services.]
+// The first service is responsible for authenticating a JSON request,
+// while the second service is responsible for executing the request
+// (having assumed that the prior service authenticated the request).
+// If an attacker were able to maliciously craft a JSON request such that
+// both services believe that the same request is from different users,
+// it could bypass the authenticator with valid credentials for one user,
+// but maliciously perform an action on behalf of a different user.
+//
+// According to RFC 8259, there unfortunately exist many JSON texts
+// that are syntactically valid but semantically ambiguous.
+// For example, the standard does not define how to interpret duplicate
+// names within an object.
+//
+// The v1 [encoding/json] and [encoding/json/v2] packages
+// interpret some inputs in different ways. In particular:
+//
+// - The standard specifies that JSON must be encoded using UTF-8.
+// By default, v1 replaces invalid bytes of UTF-8 in JSON strings
+// with the Unicode replacement character,
+// while v2 rejects inputs with invalid UTF-8.
+// To change the default, specify the [jsontext.AllowInvalidUTF8] option.
+// The replacement of invalid UTF-8 is a form of data corruption
+// that alters the precise meaning of strings.
+//
+// - The standard does not specify a particular behavior when
+// duplicate names are encountered within a JSON object,
+// which means that different implementations may behave differently.
+// By default, v1 allows for the presence of duplicate names,
+// while v2 rejects duplicate names.
+// To change the default, specify the [jsontext.AllowDuplicateNames] option.
+// If allowed, object members are processed in the order they are observed,
+// meaning that later values will replace or be merged into prior values,
+// depending on the Go value type.
+//
+// - The standard defines a JSON object as an unordered collection of name/value pairs.
+// While ordering can be observed through the underlying [jsontext] API,
+// both v1 and v2 generally avoid exposing the ordering.
+// No application should semantically depend on the order of object members.
+// Allowing duplicate names is a vector through which ordering of members
+// can accidentally be observed and depended upon.
+//
+// - The standard suggests that JSON object names are typically compared
+// based on equality of the sequence of Unicode code points,
+// which implies that comparing names is often case-sensitive.
+// When unmarshaling a JSON object into a Go struct,
+// by default, v1 uses a (loose) case-insensitive match on the name,
+// while v2 uses a (strict) case-sensitive match on the name.
+// To change the default, specify the [MatchCaseInsensitiveNames] option.
+// The use of case-insensitive matching provides another vector through
+// which duplicate names can occur. Allowing case-insensitive matching
+// means that v1 or v2 might interpret JSON objects differently from most
+// other JSON implementations (which typically use a case-sensitive match).
+//
+// - The standard does not specify a particular behavior when
+// an unknown name in a JSON object is encountered.
+// When unmarshaling a JSON object into a Go struct, by default
+// both v1 and v2 ignore unknown names and their corresponding values.
+// To change the default, specify the [RejectUnknownMembers] option.
+//
+// - The standard suggests that implementations may use a float64
+// to represent a JSON number. Consequently, large JSON integers
+// may lose precision when stored as a floating-point type.
+// Both v1 and v2 correctly preserve precision when marshaling and
+// unmarshaling a concrete integer type. However, even if v1 and v2
+// preserve precision for concrete types, other JSON implementations
+// may not be able to preserve precision for outputs produced by v1 or v2.
+// The `string` tag option can be used to specify that an integer type
+// is to be quoted within a JSON string to avoid loss of precision.
+// Furthermore, v1 and v2 may still lose precision when unmarshaling
+// into an any interface value, where unmarshal uses a float64
+// by default to represent a JSON number.
+// To change the default, specify the [WithUnmarshalers] option
+// with a custom unmarshaler that pre-populates the interface value
+// with a concrete Go type that can preserve precision.
+//
+// RFC 8785 specifies a canonical form for any JSON text,
+// which explicitly defines specific behaviors that RFC 8259 leaves undefined.
+// In theory, if a text can successfully [jsontext.Value.Canonicalize]
+// without changing the semantic meaning of the data, then it provides a
+// greater degree of confidence that the data is more secure and interoperable.
+//
+// The v2 API generally chooses more secure defaults than v1,
+// but care should still be taken with large integers or unknown members.
+//
+// [For example, suppose we have two micro-services.]: https://www.youtube.com/watch?v=avilmOcHKHE&t=1057s
+package json
+
+// requireKeyedLiterals can be embedded in a struct to require keyed literals.
+type requireKeyedLiterals struct{}
+
+// nonComparable can be embedded in a struct to prevent comparability.
+type nonComparable [0]func()
diff --git a/internal/json/errors.go b/internal/json/errors.go
new file mode 100644
index 0000000000..302a715583
--- /dev/null
+++ b/internal/json/errors.go
@@ -0,0 +1,420 @@
+// Copyright 2020 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+//go:build !goexperiment.jsonv2 || !go1.25
+
+package json
+
+import (
+ "cmp"
+ "errors"
+ "fmt"
+ "reflect"
+ "strconv"
+ "strings"
+ "sync"
+
+ "github.com/quay/clair/v4/internal/json/internal/jsonflags"
+ "github.com/quay/clair/v4/internal/json/internal/jsonopts"
+ "github.com/quay/clair/v4/internal/json/internal/jsonwire"
+ "github.com/quay/clair/v4/internal/json/jsontext"
+)
+
+// ErrUnknownName indicates that a JSON object member could not be
+// unmarshaled because the name is not known to the target Go struct.
+// This error is directly wrapped within a [SemanticError] when produced.
+//
+// The name of an unknown JSON object member can be extracted as:
+//
+// err := ...
+// var serr json.SemanticError
+// if errors.As(err, &serr) && serr.Err == json.ErrUnknownName {
+// ptr := serr.JSONPointer // JSON pointer to unknown name
+// name := ptr.LastToken() // unknown name itself
+// ...
+// }
+//
+// This error is only returned if [RejectUnknownMembers] is true.
+var ErrUnknownName = errors.New("unknown object member name")
+
+const errorPrefix = "json: "
+
+func isSemanticError(err error) bool {
+ _, ok := err.(*SemanticError)
+ return ok
+}
+
+func isSyntacticError(err error) bool {
+ _, ok := err.(*jsontext.SyntacticError)
+ return ok
+}
+
+// isFatalError reports whether this error must terminate marshaling or unmarshaling.
+// All errors are considered fatal unless operating under
+// [jsonflags.ReportErrorsWithLegacySemantics] in which case only
+// syntactic errors and I/O errors are considered fatal.
+func isFatalError(err error, flags jsonflags.Flags) bool {
+ return !flags.Get(jsonflags.ReportErrorsWithLegacySemantics) ||
+ isSyntacticError(err) || export.IsIOError(err)
+}
+
+// SemanticError describes an error determining the meaning
+// of JSON data as Go data or vice-versa.
+//
+// The contents of this error as produced by this package may change over time.
+type SemanticError struct {
+ requireKeyedLiterals
+ nonComparable
+
+ action string // either "marshal" or "unmarshal"
+
+ // ByteOffset indicates that an error occurred after this byte offset.
+ ByteOffset int64
+ // JSONPointer indicates that an error occurred within this JSON value
+ // as indicated using the JSON Pointer notation (see RFC 6901).
+ JSONPointer jsontext.Pointer
+
+ // JSONKind is the JSON kind that could not be handled.
+ JSONKind jsontext.Kind // may be zero if unknown
+ // JSONValue is the JSON number or string that could not be unmarshaled.
+ // It is not populated during marshaling.
+ JSONValue jsontext.Value // may be nil if irrelevant or unknown
+ // GoType is the Go type that could not be handled.
+ GoType reflect.Type // may be nil if unknown
+
+ // Err is the underlying error.
+ Err error // may be nil
+}
+
+// coder is implemented by [jsontext.Encoder] or [jsontext.Decoder].
+type coder interface{ StackPointer() jsontext.Pointer }
+
+// newInvalidFormatError wraps err in a SemanticError because
+// the current type t cannot handle the provided options format.
+// This error must be called before producing or consuming the next value.
+//
+// If [jsonflags.ReportErrorsWithLegacySemantics] is specified,
+// then this automatically skips the next value when unmarshaling
+// to ensure that the value is fully consumed.
+func newInvalidFormatError(c coder, t reflect.Type, o *jsonopts.Struct) error {
+ err := fmt.Errorf("invalid format flag %q", o.Format)
+ switch c := c.(type) {
+ case *jsontext.Encoder:
+ err = newMarshalErrorBefore(c, t, err)
+ case *jsontext.Decoder:
+ err = newUnmarshalErrorBeforeWithSkipping(c, o, t, err)
+ }
+ return err
+}
+
+// newMarshalErrorBefore wraps err in a SemanticError assuming that e
+// is positioned right before the next token or value, which causes an error.
+func newMarshalErrorBefore(e *jsontext.Encoder, t reflect.Type, err error) error {
+ return &SemanticError{action: "marshal", GoType: t, Err: err,
+ ByteOffset: e.OutputOffset() + int64(export.Encoder(e).CountNextDelimWhitespace()),
+ JSONPointer: jsontext.Pointer(export.Encoder(e).AppendStackPointer(nil, +1))}
+}
+
+// newUnmarshalErrorBefore wraps err in a SemanticError assuming that d
+// is positioned right before the next token or value, which causes an error.
+// It does not record the next JSON kind as this error is used to indicate
+// the receiving Go value is invalid to unmarshal into (and not a JSON error).
+func newUnmarshalErrorBefore(d *jsontext.Decoder, t reflect.Type, err error) error {
+ return &SemanticError{action: "unmarshal", GoType: t, Err: err,
+ ByteOffset: d.InputOffset() + int64(export.Decoder(d).CountNextDelimWhitespace()),
+ JSONPointer: jsontext.Pointer(export.Decoder(d).AppendStackPointer(nil, +1))}
+}
+
+// newUnmarshalErrorBeforeWithSkipping is like [newUnmarshalErrorBefore],
+// but automatically skips the next value if
+// [jsonflags.ReportErrorsWithLegacySemantics] is specified.
+func newUnmarshalErrorBeforeWithSkipping(d *jsontext.Decoder, o *jsonopts.Struct, t reflect.Type, err error) error {
+ err = newUnmarshalErrorBefore(d, t, err)
+ if o.Flags.Get(jsonflags.ReportErrorsWithLegacySemantics) {
+ if err2 := export.Decoder(d).SkipValue(); err2 != nil {
+ return err2
+ }
+ }
+ return err
+}
+
+// newUnmarshalErrorAfter wraps err in a SemanticError assuming that d
+// is positioned right after the previous token or value, which caused an error.
+func newUnmarshalErrorAfter(d *jsontext.Decoder, t reflect.Type, err error) error {
+ tokOrVal := export.Decoder(d).PreviousTokenOrValue()
+ return &SemanticError{action: "unmarshal", GoType: t, Err: err,
+ ByteOffset: d.InputOffset() - int64(len(tokOrVal)),
+ JSONPointer: jsontext.Pointer(export.Decoder(d).AppendStackPointer(nil, -1)),
+ JSONKind: jsontext.Value(tokOrVal).Kind()}
+}
+
+// newUnmarshalErrorAfterWithValue is like [newUnmarshalErrorAfter],
+// but also stores a copy of the last JSON value if it is a string or number.
+func newUnmarshalErrorAfterWithValue(d *jsontext.Decoder, t reflect.Type, err error) error {
+ serr := newUnmarshalErrorAfter(d, t, err).(*SemanticError)
+ if serr.JSONKind == '"' || serr.JSONKind == '0' {
+ serr.JSONValue = jsontext.Value(export.Decoder(d).PreviousTokenOrValue()).Clone()
+ }
+ return serr
+}
+
+// newUnmarshalErrorAfterWithSkipping is like [newUnmarshalErrorAfter],
+// but automatically skips the remainder of the current value if
+// [jsonflags.ReportErrorsWithLegacySemantics] is specified.
+func newUnmarshalErrorAfterWithSkipping(d *jsontext.Decoder, o *jsonopts.Struct, t reflect.Type, err error) error {
+ err = newUnmarshalErrorAfter(d, t, err)
+ if o.Flags.Get(jsonflags.ReportErrorsWithLegacySemantics) {
+ if err2 := export.Decoder(d).SkipValueRemainder(); err2 != nil {
+ return err2
+ }
+ }
+ return err
+}
+
+// newSemanticErrorWithPosition wraps err in a SemanticError assuming that
+// the error occurred at the provided depth, and length.
+// If err is already a SemanticError, then position information is only
+// injected if it is currently unpopulated.
+//
+// If the position is unpopulated, it is ambiguous where the error occurred
+// in the user code, whether it was before or after the current position.
+// For the byte offset, we assume that the error occurred before the last read
+// token or value when decoding, or before the next value when encoding.
+// For the JSON pointer, we point to the parent object or array unless
+// we can be certain that it happened with an object member.
+//
+// This is used to annotate errors returned by user-provided
+// v2 MarshalJSON or UnmarshalJSON methods or functions.
+func newSemanticErrorWithPosition(c coder, t reflect.Type, prevDepth int, prevLength int64, err error) error {
+ serr, _ := err.(*SemanticError)
+ if serr == nil {
+ serr = &SemanticError{Err: err}
+ }
+ var currDepth int
+ var currLength int64
+ var coderState interface{ AppendStackPointer([]byte, int) []byte }
+ var offset int64
+ switch c := c.(type) {
+ case *jsontext.Encoder:
+ e := export.Encoder(c)
+ serr.action = cmp.Or(serr.action, "marshal")
+ currDepth, currLength = e.Tokens.DepthLength()
+ offset = c.OutputOffset() + int64(export.Encoder(c).CountNextDelimWhitespace())
+ coderState = e
+ case *jsontext.Decoder:
+ d := export.Decoder(c)
+ serr.action = cmp.Or(serr.action, "unmarshal")
+ currDepth, currLength = d.Tokens.DepthLength()
+ tokOrVal := d.PreviousTokenOrValue()
+ offset = c.InputOffset() - int64(len(tokOrVal))
+ if (prevDepth == currDepth && prevLength == currLength) || len(tokOrVal) == 0 {
+ // If no Read method was called in the user-defined method or
+ // if the Peek method was called, then use the offset of the next value.
+ offset = c.InputOffset() + int64(export.Decoder(c).CountNextDelimWhitespace())
+ }
+ coderState = d
+ }
+ serr.ByteOffset = cmp.Or(serr.ByteOffset, offset)
+ if serr.JSONPointer == "" {
+ where := 0 // default to ambiguous positioning
+ switch {
+ case prevDepth == currDepth && prevLength+0 == currLength:
+ where = +1
+ case prevDepth == currDepth && prevLength+1 == currLength:
+ where = -1
+ }
+ serr.JSONPointer = jsontext.Pointer(coderState.AppendStackPointer(nil, where))
+ }
+ serr.GoType = cmp.Or(serr.GoType, t)
+ return serr
+}
+
+// collapseSemanticErrors collapses double SemanticErrors at the outer levels
+// into a single SemanticError by preserving the inner error,
+// but prepending the ByteOffset and JSONPointer with the outer error.
+//
+// For example:
+//
+// collapseSemanticErrors(&SemanticError{
+// ByteOffset: len64(`[0,{"alpha":[0,1,`),
+// JSONPointer: "/1/alpha/2",
+// GoType: reflect.TypeFor[outerType](),
+// Err: &SemanticError{
+// ByteOffset: len64(`{"foo":"bar","fizz":[0,`),
+// JSONPointer: "/fizz/1",
+// GoType: reflect.TypeFor[innerType](),
+// Err: ...,
+// },
+// })
+//
+// results in:
+//
+// &SemanticError{
+// ByteOffset: len64(`[0,{"alpha":[0,1,`) + len64(`{"foo":"bar","fizz":[0,`),
+// JSONPointer: "/1/alpha/2" + "/fizz/1",
+// GoType: reflect.TypeFor[innerType](),
+// Err: ...,
+// }
+//
+// This is used to annotate errors returned by user-provided
+// v1 MarshalJSON or UnmarshalJSON methods with precise position information
+// if they themselves happened to return a SemanticError.
+// Since MarshalJSON and UnmarshalJSON are not operating on the root JSON value,
+// their positioning must be relative to the nested JSON value
+// returned by UnmarshalJSON or passed to MarshalJSON.
+// Therefore, we can construct an absolute position by concatenating
+// the outer with the inner positions.
+//
+// Note that we do not use collapseSemanticErrors with user-provided functions
+// that take in an [jsontext.Encoder] or [jsontext.Decoder] since they contain
+// methods to report position relative to the root JSON value.
+// We assume user-constructed errors are correctly precise about position.
+func collapseSemanticErrors(err error) error {
+ if serr1, ok := err.(*SemanticError); ok {
+ if serr2, ok := serr1.Err.(*SemanticError); ok {
+ serr2.ByteOffset = serr1.ByteOffset + serr2.ByteOffset
+ serr2.JSONPointer = serr1.JSONPointer + serr2.JSONPointer
+ *serr1 = *serr2
+ }
+ }
+ return err
+}
+
+// errorModalVerb is a modal verb like "cannot" or "unable to".
+//
+// Once per process, Hyrum-proof the error message by deliberately
+// switching between equivalent renderings of the same error message.
+// The randomization is tied to the Hyrum-proofing already applied
+// on map iteration in Go.
+var errorModalVerb = sync.OnceValue(func() string {
+ for phrase := range map[string]struct{}{"cannot": {}, "unable to": {}} {
+ return phrase // use whichever phrase we get in the first iteration
+ }
+ return ""
+})
+
+func (e *SemanticError) Error() string {
+ var sb strings.Builder
+ sb.WriteString(errorPrefix)
+ sb.WriteString(errorModalVerb())
+
+ // Format action.
+ var preposition string
+ switch e.action {
+ case "marshal":
+ sb.WriteString(" marshal")
+ preposition = " from"
+ case "unmarshal":
+ sb.WriteString(" unmarshal")
+ preposition = " into"
+ default:
+ sb.WriteString(" handle")
+ preposition = " with"
+ }
+
+ // Format JSON kind.
+ switch e.JSONKind {
+ case 'n':
+ sb.WriteString(" JSON null")
+ case 'f', 't':
+ sb.WriteString(" JSON boolean")
+ case '"':
+ sb.WriteString(" JSON string")
+ case '0':
+ sb.WriteString(" JSON number")
+ case '{', '}':
+ sb.WriteString(" JSON object")
+ case '[', ']':
+ sb.WriteString(" JSON array")
+ default:
+ if e.action == "" {
+ preposition = ""
+ }
+ }
+ if len(e.JSONValue) > 0 && len(e.JSONValue) < 100 {
+ sb.WriteByte(' ')
+ sb.Write(e.JSONValue)
+ }
+
+ // Format Go type.
+ if e.GoType != nil {
+ typeString := e.GoType.String()
+ if len(typeString) > 100 {
+ // An excessively long type string most likely occurs for
+ // an anonymous struct declaration with many fields.
+ // Reduce the noise by just printing the kind,
+ // and optionally prepending it with the package name
+ // if the struct happens to include an unexported field.
+ typeString = e.GoType.Kind().String()
+ if e.GoType.Kind() == reflect.Struct && e.GoType.Name() == "" {
+ for i := range e.GoType.NumField() {
+ if pkgPath := e.GoType.Field(i).PkgPath; pkgPath != "" {
+ typeString = pkgPath[strings.LastIndexByte(pkgPath, '/')+len("/"):] + ".struct"
+ break
+ }
+ }
+ }
+ }
+ sb.WriteString(preposition)
+ sb.WriteString(" Go ")
+ sb.WriteString(typeString)
+ }
+
+ // Special handling for unknown names.
+ if e.Err == ErrUnknownName {
+ sb.WriteString(": ")
+ sb.WriteString(ErrUnknownName.Error())
+ sb.WriteString(" ")
+ sb.WriteString(strconv.Quote(e.JSONPointer.LastToken()))
+ if parent := e.JSONPointer.Parent(); parent != "" {
+ sb.WriteString(" within ")
+ sb.WriteString(strconv.Quote(jsonwire.TruncatePointer(string(parent), 100)))
+ }
+ return sb.String()
+ }
+
+ // Format where.
+ // Avoid printing if it overlaps with a wrapped SyntacticError.
+ switch serr, _ := e.Err.(*jsontext.SyntacticError); {
+ case e.JSONPointer != "":
+ if serr == nil || !e.JSONPointer.Contains(serr.JSONPointer) {
+ sb.WriteString(" within ")
+ sb.WriteString(strconv.Quote(jsonwire.TruncatePointer(string(e.JSONPointer), 100)))
+ }
+ case e.ByteOffset > 0:
+ if serr == nil || !(e.ByteOffset <= serr.ByteOffset) {
+ sb.WriteString(" after offset ")
+ sb.WriteString(strconv.FormatInt(e.ByteOffset, 10))
+ }
+ }
+
+ // Format underlying error.
+ if e.Err != nil {
+ errString := e.Err.Error()
+ if isSyntacticError(e.Err) {
+ errString = strings.TrimPrefix(errString, "jsontext: ")
+ }
+ sb.WriteString(": ")
+ sb.WriteString(errString)
+ }
+
+ return sb.String()
+}
+
+func (e *SemanticError) Unwrap() error {
+ return e.Err
+}
+
+func newDuplicateNameError(ptr jsontext.Pointer, quotedName []byte, offset int64) error {
+ if quotedName != nil {
+ name, _ := jsonwire.AppendUnquote(nil, quotedName)
+ ptr = ptr.AppendToken(string(name))
+ }
+ return &jsontext.SyntacticError{
+ ByteOffset: offset,
+ JSONPointer: ptr,
+ Err: jsontext.ErrDuplicateName,
+ }
+}
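The position arithmetic in `collapseSemanticErrors` above can be exercised in isolation. In this sketch, `semanticError` is a hypothetical stand-in for the package's `SemanticError`, carrying only the fields the function touches; `collapse` mirrors the diff's logic of preserving the inner error while prepending the outer error's position:

```go
package main

import "fmt"

// semanticError is a hypothetical stand-in for the package's SemanticError,
// carrying only the fields that collapseSemanticErrors touches.
type semanticError struct {
	ByteOffset  int64
	JSONPointer string
	Err         error
}

func (e *semanticError) Error() string {
	return fmt.Sprintf("at %q (offset %d)", e.JSONPointer, e.ByteOffset)
}

// collapse mirrors the diff's collapseSemanticErrors: the inner error is
// preserved, but its ByteOffset and JSONPointer are prepended with the
// outer error's position, yielding an absolute position in the root value.
func collapse(err error) error {
	if serr1, ok := err.(*semanticError); ok {
		if serr2, ok := serr1.Err.(*semanticError); ok {
			serr2.ByteOffset = serr1.ByteOffset + serr2.ByteOffset
			serr2.JSONPointer = serr1.JSONPointer + serr2.JSONPointer
			*serr1 = *serr2
		}
	}
	return err
}

func main() {
	err := collapse(&semanticError{
		ByteOffset:  int64(len(`[0,{"alpha":[0,1,`)), // offset of the nested value in the root document
		JSONPointer: "/1/alpha/2",
		Err: &semanticError{
			ByteOffset:  int64(len(`{"foo":"bar","fizz":[0,`)), // offset relative to the nested value
			JSONPointer: "/fizz/1",
		},
	})
	fmt.Println(err) // pointer is now the concatenation "/1/alpha/2/fizz/1"
}
```

This concatenation is sound because a v1 `MarshalJSON`/`UnmarshalJSON` method only ever sees the nested JSON value, so its reported position is relative to that value.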
diff --git a/internal/json/fields.go b/internal/json/fields.go
new file mode 100644
index 0000000000..606c608ba3
--- /dev/null
+++ b/internal/json/fields.go
@@ -0,0 +1,654 @@
+// Copyright 2021 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+//go:build !goexperiment.jsonv2 || !go1.25
+
+package json
+
+import (
+ "cmp"
+ "errors"
+ "fmt"
+ "io"
+ "reflect"
+ "slices"
+ "strconv"
+ "strings"
+ "unicode"
+ "unicode/utf8"
+
+ "github.com/quay/clair/v4/internal/json/internal/jsonflags"
+ "github.com/quay/clair/v4/internal/json/internal/jsonwire"
+)
+
+type isZeroer interface {
+ IsZero() bool
+}
+
+var isZeroerType = reflect.TypeFor[isZeroer]()
+
+type structFields struct {
+ flattened []structField // listed in depth-first ordering
+ byActualName map[string]*structField
+ byFoldedName map[string][]*structField
+ inlinedFallback *structField
+}
+
+// reindex recomputes index to avoid bounds check during runtime.
+//
+// During the construction of each [structField] in [makeStructFields],
+// the index field is 0-indexed. However, before it returns,
+// the 0th field is stored in index0 and index stores the remainder.
+func (sf *structFields) reindex() {
+ reindex := func(f *structField) {
+ f.index0 = f.index[0]
+ f.index = f.index[1:]
+ if len(f.index) == 0 {
+ f.index = nil // avoid pinning the backing slice
+ }
+ }
+ for i := range sf.flattened {
+ reindex(&sf.flattened[i])
+ }
+ if sf.inlinedFallback != nil {
+ reindex(sf.inlinedFallback)
+ }
+}
+
+// lookupByFoldedName looks up name by a case-insensitive match
+// that also ignores the presence of dashes and underscores.
+func (fs *structFields) lookupByFoldedName(name []byte) []*structField {
+ return fs.byFoldedName[string(foldName(name))]
+}
+
+type structField struct {
+ id int // unique numeric ID in breadth-first ordering
+ index0 int // 0th index into a struct according to [reflect.Type.FieldByIndex]
+ index []int // 1st index and remainder according to [reflect.Type.FieldByIndex]
+ typ reflect.Type
+ fncs *arshaler
+ isZero func(addressableValue) bool
+ isEmpty func(addressableValue) bool
+ fieldOptions
+}
+
+var errNoExportedFields = errors.New("Go struct has no exported fields")
+
+func makeStructFields(root reflect.Type) (fs structFields, serr *SemanticError) {
+ orErrorf := func(serr *SemanticError, t reflect.Type, f string, a ...any) *SemanticError {
+ return cmp.Or(serr, &SemanticError{GoType: t, Err: fmt.Errorf(f, a...)})
+ }
+
+	// Set up a queue for a breadth-first search.
+ var queueIndex int
+ type queueEntry struct {
+ typ reflect.Type
+ index []int
+ visitChildren bool // whether to recursively visit inlined field in this struct
+ }
+ queue := []queueEntry{{root, nil, true}}
+ seen := map[reflect.Type]bool{root: true}
+
+ // Perform a breadth-first search over all reachable fields.
+ // This ensures that len(f.index) will be monotonically increasing.
+ var allFields, inlinedFallbacks []structField
+ for queueIndex < len(queue) {
+ qe := queue[queueIndex]
+ queueIndex++
+
+ t := qe.typ
+ inlinedFallbackIndex := -1 // index of last inlined fallback field in current struct
+ namesIndex := make(map[string]int) // index of each field with a given JSON object name in current struct
+ var hasAnyJSONTag bool // whether any Go struct field has a `json` tag
+ var hasAnyJSONField bool // whether any JSON serializable fields exist in current struct
+ for i := range t.NumField() {
+ sf := t.Field(i)
+ _, hasTag := sf.Tag.Lookup("json")
+ hasAnyJSONTag = hasAnyJSONTag || hasTag
+ options, ignored, err := parseFieldOptions(sf)
+ if err != nil {
+ serr = cmp.Or(serr, &SemanticError{GoType: t, Err: err})
+ }
+ if ignored {
+ continue
+ }
+ hasAnyJSONField = true
+ f := structField{
+ // Allocate a new slice (len=N+1) to hold both
+ // the parent index (len=N) and the current index (len=1).
+ // Do this to avoid clobbering the memory of the parent index.
+ index: append(append(make([]int, 0, len(qe.index)+1), qe.index...), i),
+ typ: sf.Type,
+ fieldOptions: options,
+ }
+ if sf.Anonymous && !f.hasName {
+ if indirectType(f.typ).Kind() != reflect.Struct {
+ serr = orErrorf(serr, t, "embedded Go struct field %s of non-struct type must be explicitly given a JSON name", sf.Name)
+ } else {
+ f.inline = true // implied by use of Go embedding without an explicit name
+ }
+ }
+ if f.inline || f.unknown {
+ // Handle an inlined field that serializes to/from
+ // zero or more JSON object members.
+
+ switch f.fieldOptions {
+ case fieldOptions{name: f.name, quotedName: f.quotedName, inline: true}:
+ case fieldOptions{name: f.name, quotedName: f.quotedName, unknown: true}:
+ case fieldOptions{name: f.name, quotedName: f.quotedName, inline: true, unknown: true}:
+ serr = orErrorf(serr, t, "Go struct field %s cannot have both `inline` and `unknown` specified", sf.Name)
+ f.inline = false // let `unknown` take precedence
+ default:
+ serr = orErrorf(serr, t, "Go struct field %s cannot have any options other than `inline` or `unknown` specified", sf.Name)
+ if f.hasName {
+ continue // invalid inlined field; treat as ignored
+ }
+ f.fieldOptions = fieldOptions{name: f.name, quotedName: f.quotedName, inline: f.inline, unknown: f.unknown}
+ if f.inline && f.unknown {
+ f.inline = false // let `unknown` take precedence
+ }
+ }
+
+			// Reject any types with custom serialization; otherwise
+			// it becomes impossible to know what sub-fields to inline.
+ tf := indirectType(f.typ)
+ if implementsAny(tf, allMethodTypes...) && tf != jsontextValueType {
+ serr = orErrorf(serr, t, "inlined Go struct field %s of type %s must not implement marshal or unmarshal methods", sf.Name, tf)
+ }
+
+ // Handle an inlined field that serializes to/from
+ // a finite number of JSON object members backed by a Go struct.
+ if tf.Kind() == reflect.Struct {
+ if f.unknown {
+ serr = orErrorf(serr, t, "inlined Go struct field %s of type %s with `unknown` tag must be a Go map of string key or a jsontext.Value", sf.Name, tf)
+ continue // invalid inlined field; treat as ignored
+ }
+ if qe.visitChildren {
+ queue = append(queue, queueEntry{tf, f.index, !seen[tf]})
+ }
+ seen[tf] = true
+ continue
+ } else if !sf.IsExported() {
+ serr = orErrorf(serr, t, "inlined Go struct field %s is not exported", sf.Name)
+ continue // invalid inlined field; treat as ignored
+ }
+
+			// Handle an inlined field that serializes to/from any number of
+			// JSON object members backed by a Go map or jsontext.Value.
+ switch {
+ case tf == jsontextValueType:
+ f.fncs = nil // specially handled in arshal_inlined.go
+ case tf.Kind() == reflect.Map && tf.Key().Kind() == reflect.String:
+ if implementsAny(tf.Key(), allMethodTypes...) {
+ serr = orErrorf(serr, t, "inlined map field %s of type %s must have a string key that does not implement marshal or unmarshal methods", sf.Name, tf)
+ continue // invalid inlined field; treat as ignored
+ }
+ f.fncs = lookupArshaler(tf.Elem())
+ default:
+ serr = orErrorf(serr, t, "inlined Go struct field %s of type %s must be a Go struct, Go map of string key, or jsontext.Value", sf.Name, tf)
+ continue // invalid inlined field; treat as ignored
+ }
+
+ // Reject multiple inlined fallback fields within the same struct.
+ if inlinedFallbackIndex >= 0 {
+ serr = orErrorf(serr, t, "inlined Go struct fields %s and %s cannot both be a Go map or jsontext.Value", t.Field(inlinedFallbackIndex).Name, sf.Name)
+ // Still append f to inlinedFallbacks as there is still a
+ // check for a dominant inlined fallback before returning.
+ }
+ inlinedFallbackIndex = i
+
+ inlinedFallbacks = append(inlinedFallbacks, f)
+ } else {
+ // Handle normal Go struct field that serializes to/from
+ // a single JSON object member.
+
+ // Unexported fields cannot be serialized except for
+ // embedded fields of a struct type,
+ // which might promote exported fields of their own.
+ if !sf.IsExported() {
+ tf := indirectType(f.typ)
+ if !(sf.Anonymous && tf.Kind() == reflect.Struct) {
+ serr = orErrorf(serr, t, "Go struct field %s is not exported", sf.Name)
+ continue
+ }
+ // Unfortunately, methods on the unexported field
+ // still cannot be called.
+ if implementsAny(tf, allMethodTypes...) ||
+ (f.omitzero && implementsAny(tf, isZeroerType)) {
+ serr = orErrorf(serr, t, "Go struct field %s is not exported for method calls", sf.Name)
+ continue
+ }
+ }
+
+ // Provide a function that uses a type's IsZero method.
+ switch {
+ case sf.Type.Kind() == reflect.Interface && sf.Type.Implements(isZeroerType):
+ f.isZero = func(va addressableValue) bool {
+ // Avoid panics calling IsZero on a nil interface or
+ // non-nil interface with nil pointer.
+ return va.IsNil() || (va.Elem().Kind() == reflect.Pointer && va.Elem().IsNil()) || va.Interface().(isZeroer).IsZero()
+ }
+ case sf.Type.Kind() == reflect.Pointer && sf.Type.Implements(isZeroerType):
+ f.isZero = func(va addressableValue) bool {
+ // Avoid panics calling IsZero on nil pointer.
+ return va.IsNil() || va.Interface().(isZeroer).IsZero()
+ }
+ case sf.Type.Implements(isZeroerType):
+ f.isZero = func(va addressableValue) bool { return va.Interface().(isZeroer).IsZero() }
+ case reflect.PointerTo(sf.Type).Implements(isZeroerType):
+ f.isZero = func(va addressableValue) bool { return va.Addr().Interface().(isZeroer).IsZero() }
+ }
+
+ // Provide a function that can determine whether the value would
+ // serialize as an empty JSON value.
+ switch sf.Type.Kind() {
+ case reflect.String, reflect.Map, reflect.Array, reflect.Slice:
+ f.isEmpty = func(va addressableValue) bool { return va.Len() == 0 }
+ case reflect.Pointer, reflect.Interface:
+ f.isEmpty = func(va addressableValue) bool { return va.IsNil() }
+ }
+
+ // Reject multiple fields with same name within the same struct.
+ if j, ok := namesIndex[f.name]; ok {
+ serr = orErrorf(serr, t, "Go struct fields %s and %s conflict over JSON object name %q", t.Field(j).Name, sf.Name, f.name)
+ // Still append f to allFields as there is still a
+ // check for a dominant field before returning.
+ }
+ namesIndex[f.name] = i
+
+ f.id = len(allFields)
+ f.fncs = lookupArshaler(sf.Type)
+ allFields = append(allFields, f)
+ }
+ }
+
+ // NOTE: New users to the json package are occasionally surprised that
+ // unexported fields are ignored. This occurs by necessity due to our
+ // inability to directly introspect such fields with Go reflection
+ // without the use of unsafe.
+ //
+ // To reduce friction here, refuse to serialize any Go struct that
+ // has no JSON serializable fields, has at least one Go struct field,
+ // and does not have any `json` tags present. For example,
+ // errors returned by errors.New would fail to serialize.
+ isEmptyStruct := t.NumField() == 0
+ if !isEmptyStruct && !hasAnyJSONTag && !hasAnyJSONField {
+ serr = cmp.Or(serr, &SemanticError{GoType: t, Err: errNoExportedFields})
+ }
+ }
+
+ // Sort the fields by exact name (breaking ties by depth and
+ // then by presence of an explicitly provided JSON name).
+ // Select the dominant field from each set of fields with the same name.
+ // If multiple fields have the same name, then the dominant field
+ // is the one that exists alone at the shallowest depth,
+ // or the one that is uniquely tagged with a JSON name.
+ // Otherwise, no dominant field exists for the set.
+ flattened := allFields[:0]
+ slices.SortStableFunc(allFields, func(x, y structField) int {
+ return cmp.Or(
+ strings.Compare(x.name, y.name),
+ cmp.Compare(len(x.index), len(y.index)),
+ boolsCompare(!x.hasName, !y.hasName))
+ })
+ for len(allFields) > 0 {
+ n := 1 // number of fields with the same exact name
+ for n < len(allFields) && allFields[n-1].name == allFields[n].name {
+ n++
+ }
+ if n == 1 || len(allFields[0].index) != len(allFields[1].index) || allFields[0].hasName != allFields[1].hasName {
+ flattened = append(flattened, allFields[0]) // only keep field if there is a dominant field
+ }
+ allFields = allFields[n:]
+ }
+
+ // Sort the fields according to a breadth-first ordering
+ // so that we can re-number IDs with the smallest possible values.
+ // This optimizes use of uintSet such that it fits in the 64-entry bit set.
+ slices.SortFunc(flattened, func(x, y structField) int {
+ return cmp.Compare(x.id, y.id)
+ })
+ for i := range flattened {
+ flattened[i].id = i
+ }
+
+ // Sort the fields according to a depth-first ordering
+ // as the typical order that fields are marshaled.
+ slices.SortFunc(flattened, func(x, y structField) int {
+ return slices.Compare(x.index, y.index)
+ })
+
+ // Compute the mapping of fields in the byActualName map.
+ // Pre-fold all names so that we can lookup folded names quickly.
+ fs = structFields{
+ flattened: flattened,
+ byActualName: make(map[string]*structField, len(flattened)),
+ byFoldedName: make(map[string][]*structField, len(flattened)),
+ }
+ for i, f := range fs.flattened {
+ foldedName := string(foldName([]byte(f.name)))
+ fs.byActualName[f.name] = &fs.flattened[i]
+ fs.byFoldedName[foldedName] = append(fs.byFoldedName[foldedName], &fs.flattened[i])
+ }
+ for foldedName, fields := range fs.byFoldedName {
+ if len(fields) > 1 {
+ // The precedence order for conflicting ignoreCase names
+ // is by breadth-first order, rather than depth-first order.
+ slices.SortFunc(fields, func(x, y *structField) int {
+ return cmp.Compare(x.id, y.id)
+ })
+ fs.byFoldedName[foldedName] = fields
+ }
+ }
+ if n := len(inlinedFallbacks); n == 1 || (n > 1 && len(inlinedFallbacks[0].index) != len(inlinedFallbacks[1].index)) {
+ fs.inlinedFallback = &inlinedFallbacks[0] // dominant inlined fallback field
+ }
+ fs.reindex()
+ return fs, serr
+}
+
+// indirectType unwraps one level of pointer indirection
+// similar to how Go only allows embedding either T or *T,
+// but not **T or P (which is a named pointer).
+func indirectType(t reflect.Type) reflect.Type {
+ if t.Kind() == reflect.Pointer && t.Name() == "" {
+ t = t.Elem()
+ }
+ return t
+}
+
+// matchFoldedName matches a case-insensitive name depending on the options.
+// It assumes that foldName(f.name) == foldName(name).
+//
+// Case-insensitive matching is used if the `case:ignore` tag option is specified
+// or the MatchCaseInsensitiveNames call option is specified
+// (and the `case:strict` tag option is not specified).
+// Functionally, the `case:ignore` and `case:strict` tag options take precedence.
+//
+// The v1 definition of case-insensitivity operated under strings.EqualFold
+// and would strictly compare dashes and underscores,
+// while the v2 definition would ignore the presence of dashes and underscores.
+// Thus, if the MatchCaseSensitiveDelimiter call option is specified,
+// the match is further restricted to using strings.EqualFold.
+func (f *structField) matchFoldedName(name []byte, flags *jsonflags.Flags) bool {
+ if f.casing == caseIgnore || (flags.Get(jsonflags.MatchCaseInsensitiveNames) && f.casing != caseStrict) {
+ if !flags.Get(jsonflags.MatchCaseSensitiveDelimiter) || strings.EqualFold(string(name), f.name) {
+ return true
+ }
+ }
+ return false
+}
+
+const (
+ caseIgnore = 1
+ caseStrict = 2
+)
+
+type fieldOptions struct {
+ name string
+ quotedName string // quoted name per RFC 8785, section 3.2.2.2.
+ hasName bool
+ nameNeedEscape bool
+ casing int8 // either 0, caseIgnore, or caseStrict
+ inline bool
+ unknown bool
+ omitzero bool
+ omitempty bool
+ string bool
+ format string
+}
+
+// parseFieldOptions parses the `json` tag in a Go struct field as
+// a structured set of options configuring parameters such as
+// the JSON member name and other features.
+func parseFieldOptions(sf reflect.StructField) (out fieldOptions, ignored bool, err error) {
+ tag, hasTag := sf.Tag.Lookup("json")
+ tagOrig := tag
+
+ // Check whether this field is explicitly ignored.
+ if tag == "-" {
+ return fieldOptions{}, true, nil
+ }
+
+ // Check whether this field is unexported and not embedded,
+ // which Go reflection cannot mutate for the sake of serialization.
+ //
+ // An embedded field of an unexported type is still capable of
+ // forwarding exported fields, which may be JSON serialized.
+ // This technically operates on the edge of what is permissible by
+ // the Go language, but the most recent decision is to permit this.
+ //
+ // See https://go.dev/issue/24153 and https://go.dev/issue/32772.
+ if !sf.IsExported() && !sf.Anonymous {
+ // Tag options specified on an unexported field suggests user error.
+ if hasTag {
+ err = cmp.Or(err, fmt.Errorf("unexported Go struct field %s cannot have non-ignored `json:%q` tag", sf.Name, tag))
+ }
+ return fieldOptions{}, true, err
+ }
+
+ // Determine the JSON member name for this Go field. A user-specified name
+ // may be provided as either an identifier or a single-quoted string.
+ // The single-quoted string allows arbitrary characters in the name.
+ // See https://go.dev/issue/2718 and https://go.dev/issue/3546.
+ out.name = sf.Name // always starts with an uppercase character
+ if len(tag) > 0 && !strings.HasPrefix(tag, ",") {
+ // For better compatibility with v1, accept almost any unescaped name.
+ n := len(tag) - len(strings.TrimLeftFunc(tag, func(r rune) bool {
+ return !strings.ContainsRune(",\\'\"`", r) // reserve comma, backslash, and quotes
+ }))
+ name := tag[:n]
+
+ // If the next character is not a comma, then the name is either
+ // malformed (if n > 0) or a single-quoted name.
+ // In either case, call consumeTagOption to handle it further.
+ var err2 error
+ if !strings.HasPrefix(tag[n:], ",") && len(name) != len(tag) {
+ name, n, err2 = consumeTagOption(tag)
+ if err2 != nil {
+ err = cmp.Or(err, fmt.Errorf("Go struct field %s has malformed `json` tag: %v", sf.Name, err2))
+ }
+ }
+ if !utf8.ValidString(name) {
+ err = cmp.Or(err, fmt.Errorf("Go struct field %s has JSON object name %q with invalid UTF-8", sf.Name, name))
+ name = string([]rune(name)) // replace invalid UTF-8 with utf8.RuneError
+ }
+ if name == "-" && tag[0] == '-' {
+ defer func() { // defer to let other errors take precedence
+ err = cmp.Or(err, fmt.Errorf("Go struct field %s has JSON object name %q; either "+
+ "use `json:\"-\"` to ignore the field or "+
+ "use `json:\"'-'%s` to specify %q as the name", sf.Name, out.name, strings.TrimPrefix(strconv.Quote(tagOrig), `"-`), name))
+ }()
+ }
+ if err2 == nil {
+ out.hasName = true
+ out.name = name
+ }
+ tag = tag[n:]
+ }
+ b, _ := jsonwire.AppendQuote(nil, out.name, &jsonflags.Flags{})
+ out.quotedName = string(b)
+ out.nameNeedEscape = jsonwire.NeedEscape(out.name)
+
+ // Handle any additional tag options (if any).
+ var wasFormat bool
+ seenOpts := make(map[string]bool)
+ for len(tag) > 0 {
+ // Consume comma delimiter.
+ if tag[0] != ',' {
+ err = cmp.Or(err, fmt.Errorf("Go struct field %s has malformed `json` tag: invalid character %q before next option (expecting ',')", sf.Name, tag[0]))
+ } else {
+ tag = tag[len(","):]
+ if len(tag) == 0 {
+ err = cmp.Or(err, fmt.Errorf("Go struct field %s has malformed `json` tag: invalid trailing ',' character", sf.Name))
+ break
+ }
+ }
+
+ // Consume and process the tag option.
+ opt, n, err2 := consumeTagOption(tag)
+ if err2 != nil {
+ err = cmp.Or(err, fmt.Errorf("Go struct field %s has malformed `json` tag: %v", sf.Name, err2))
+ }
+ rawOpt := tag[:n]
+ tag = tag[n:]
+ switch {
+ case wasFormat:
+ err = cmp.Or(err, fmt.Errorf("Go struct field %s has `format` tag option that was not specified last", sf.Name))
+ case strings.HasPrefix(rawOpt, "'") && strings.TrimFunc(opt, isLetterOrDigit) == "":
+ err = cmp.Or(err, fmt.Errorf("Go struct field %s has unnecessarily quoted appearance of `%s` tag option; specify `%s` instead", sf.Name, rawOpt, opt))
+ }
+ switch opt {
+ case "case":
+ if !strings.HasPrefix(tag, ":") {
+ err = cmp.Or(err, fmt.Errorf("Go struct field %s is missing value for `case` tag option; specify `case:ignore` or `case:strict` instead", sf.Name))
+ break
+ }
+ tag = tag[len(":"):]
+ opt, n, err2 := consumeTagOption(tag)
+ if err2 != nil {
+ err = cmp.Or(err, fmt.Errorf("Go struct field %s has malformed value for `case` tag option: %v", sf.Name, err2))
+ break
+ }
+ rawOpt := tag[:n]
+ tag = tag[n:]
+ if strings.HasPrefix(rawOpt, "'") {
+ err = cmp.Or(err, fmt.Errorf("Go struct field %s has unnecessarily quoted appearance of `case:%s` tag option; specify `case:%s` instead", sf.Name, rawOpt, opt))
+ }
+ switch opt {
+ case "ignore":
+ out.casing |= caseIgnore
+ case "strict":
+ out.casing |= caseStrict
+ default:
+ err = cmp.Or(err, fmt.Errorf("Go struct field %s has unknown `case:%s` tag value", sf.Name, rawOpt))
+ }
+ case "inline":
+ out.inline = true
+ case "unknown":
+ out.unknown = true
+ case "omitzero":
+ out.omitzero = true
+ case "omitempty":
+ out.omitempty = true
+ case "string":
+ out.string = true
+ case "format":
+ if !strings.HasPrefix(tag, ":") {
+ err = cmp.Or(err, fmt.Errorf("Go struct field %s is missing value for `format` tag option", sf.Name))
+ break
+ }
+ tag = tag[len(":"):]
+ opt, n, err2 := consumeTagOption(tag)
+ if err2 != nil {
+ err = cmp.Or(err, fmt.Errorf("Go struct field %s has malformed value for `format` tag option: %v", sf.Name, err2))
+ break
+ }
+ tag = tag[n:]
+ out.format = opt
+ wasFormat = true
+ default:
+ // Reject keys that resemble one of the supported options.
+ // This catches invalid mutants such as "omitEmpty" or "omit_empty".
+ normOpt := strings.ReplaceAll(strings.ToLower(opt), "_", "")
+ switch normOpt {
+ case "case", "inline", "unknown", "omitzero", "omitempty", "string", "format":
+ err = cmp.Or(err, fmt.Errorf("Go struct field %s has invalid appearance of `%s` tag option; specify `%s` instead", sf.Name, opt, normOpt))
+ }
+
+ // NOTE: Everything else is ignored. This does not mean it is
+ // forward compatible to insert arbitrary tag options since
+ // a future version of this package may understand that tag.
+ }
+
+ // Reject duplicates.
+ switch {
+ case out.casing == caseIgnore|caseStrict:
+ err = cmp.Or(err, fmt.Errorf("Go struct field %s cannot have both `case:ignore` and `case:strict` tag options", sf.Name))
+ case seenOpts[opt]:
+ err = cmp.Or(err, fmt.Errorf("Go struct field %s has duplicate appearance of `%s` tag option", sf.Name, rawOpt))
+ }
+ seenOpts[opt] = true
+ }
+ return out, false, err
+}
+
+// consumeTagOption consumes the next option,
+// which is either a Go identifier or a single-quoted string.
+// If the next option is invalid, it returns all of in until the next comma,
+// and reports an error.
+func consumeTagOption(in string) (string, int, error) {
+ // For legacy compatibility with v1, assume options are comma-separated.
+ i := strings.IndexByte(in, ',')
+ if i < 0 {
+ i = len(in)
+ }
+
+ switch r, _ := utf8.DecodeRuneInString(in); {
+ // Option as a Go identifier.
+ case r == '_' || unicode.IsLetter(r):
+ n := len(in) - len(strings.TrimLeftFunc(in, isLetterOrDigit))
+ return in[:n], n, nil
+ // Option as a single-quoted string.
+ case r == '\'':
+ // The grammar is nearly identical to a double-quoted Go string literal,
+ // but uses single quotes as the terminators. The reason for a custom
+ // grammar is because both backtick and double quotes cannot be used
+ // verbatim in a struct tag.
+ //
+ // Convert a single-quoted string to a double-quote string and rely on
+ // strconv.Unquote to handle the rest.
+ var inEscape bool
+ b := []byte{'"'}
+ n := len(`'`)
+ for len(in) > n {
+ r, rn := utf8.DecodeRuneInString(in[n:])
+ switch {
+ case inEscape:
+ if r == '\'' {
+ b = b[:len(b)-1] // remove escape character: `\'` => `'`
+ }
+ inEscape = false
+ case r == '\\':
+ inEscape = true
+ case r == '"':
+ b = append(b, '\\') // insert escape character: `"` => `\"`
+ case r == '\'':
+ b = append(b, '"')
+ n += len(`'`)
+ out, err := strconv.Unquote(string(b))
+ if err != nil {
+ return in[:i], i, fmt.Errorf("invalid single-quoted string: %s", in[:n])
+ }
+ return out, n, nil
+ }
+ b = append(b, in[n:][:rn]...)
+ n += rn
+ }
+ if n > 10 {
+ n = 10 // limit the amount of context printed in the error
+ }
+ return in[:i], i, fmt.Errorf("single-quoted string not terminated: %s...", in[:n])
+ case len(in) == 0:
+ return in[:i], i, io.ErrUnexpectedEOF
+ default:
+ return in[:i], i, fmt.Errorf("invalid character %q at start of option (expecting Unicode letter or single quote)", r)
+ }
+}
+
+func isLetterOrDigit(r rune) bool {
+ return r == '_' || unicode.IsLetter(r) || unicode.IsNumber(r)
+}
+
+// boolsCompare compares x and y, ordering false before true.
+func boolsCompare(x, y bool) int {
+ switch {
+ case !x && y:
+ return -1
+ default:
+ return 0
+ case x && !y:
+ return +1
+ }
+}
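The single-quoted option grammar handled by `consumeTagOption` above can be illustrated with a standalone sketch. `unquoteSingle` is a hypothetical helper, not part of the diff; it makes the simplifying assumption that its input is exactly one complete single-quoted string with no trailing tag options, whereas the real function also consumes whatever follows:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"unicode/utf8"
)

// unquoteSingle converts a single-quoted tag option into its value using the
// same trick as the diff's consumeTagOption: rewrite it as a double-quoted
// Go string literal and let strconv.Unquote handle the escape sequences.
// Simplifying assumption: the input is one complete single-quoted string.
func unquoteSingle(in string) (string, error) {
	if len(in) < 2 || in[0] != '\'' || in[len(in)-1] != '\'' {
		return "", fmt.Errorf("not a single-quoted string: %q", in)
	}
	var b strings.Builder
	b.WriteByte('"')
	body := in[1 : len(in)-1]
	for i := 0; i < len(body); {
		r, n := utf8.DecodeRuneInString(body[i:])
		switch {
		case r == '\\' && i+n < len(body) && body[i+n] == '\'':
			b.WriteByte('\'') // drop the escape: `\'` => `'`
			i += n + 1
			continue
		case r == '"':
			b.WriteString(`\"`) // insert an escape: `"` => `\"`
		default:
			b.WriteString(body[i : i+n])
		}
		i += n
	}
	b.WriteByte('"')
	return strconv.Unquote(b.String())
}

func main() {
	// A quoted name permits characters that a bare identifier cannot,
	// such as spaces, commas, and quotes.
	s, err := unquoteSingle(`'fizz, buzz'`)
	fmt.Println(s, err)
}
```

The custom grammar exists because neither backticks nor double quotes can appear verbatim inside a Go struct tag, so single quotes are the only delimiter left.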
diff --git a/internal/json/fold.go b/internal/json/fold.go
new file mode 100644
index 0000000000..973f52e73a
--- /dev/null
+++ b/internal/json/fold.go
@@ -0,0 +1,58 @@
+// Copyright 2020 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+//go:build !goexperiment.jsonv2 || !go1.25
+
+package json
+
+import (
+ "unicode"
+ "unicode/utf8"
+)
+
+// foldName returns a folded string such that foldName(x) == foldName(y)
+// behaves similarly to strings.EqualFold(x, y), but ignores underscores
+// and dashes. This allows foldName to match common naming conventions.
+func foldName(in []byte) []byte {
+ // This is inlinable to take advantage of "function outlining".
+ // See https://blog.filippo.io/efficient-go-apis-with-the-inliner/
+ var arr [32]byte // large enough for most JSON names
+ return appendFoldedName(arr[:0], in)
+}
+func appendFoldedName(out, in []byte) []byte {
+ for i := 0; i < len(in); {
+ // Handle single-byte ASCII.
+ if c := in[i]; c < utf8.RuneSelf {
+ if c != '_' && c != '-' {
+ if 'a' <= c && c <= 'z' {
+ c -= 'a' - 'A'
+ }
+ out = append(out, c)
+ }
+ i++
+ continue
+ }
+ // Handle multi-byte Unicode.
+ r, n := utf8.DecodeRune(in[i:])
+ out = utf8.AppendRune(out, foldRune(r))
+ i += n
+ }
+ return out
+}
+
+// foldRune is a variation on unicode.SimpleFold that returns the same rune
+// for all runes in the same fold set.
+//
+// Invariant:
+//
+// foldRune(x) == foldRune(y) ⇔ strings.EqualFold(string(x), string(y))
+func foldRune(r rune) rune {
+ for {
+ r2 := unicode.SimpleFold(r)
+ if r2 <= r {
+ return r2 // smallest character in the fold set
+ }
+ r = r2
+ }
+}
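Since `foldName` and `foldRune` are unexported, their effect is easiest to see with a standalone copy: snake_case, kebab-case, and CamelCase spellings of a name all fold to the same key. This reimplements the logic above over `string` instead of `[]byte` for the demo.

```go
package main

import (
	"fmt"
	"unicode"
	"unicode/utf8"
)

// foldName upper-cases ASCII letters, drops '_' and '-', and maps any
// other rune to a canonical member of its Unicode fold set.
func foldName(in string) string {
	out := make([]byte, 0, len(in))
	for i := 0; i < len(in); {
		// Fast path for single-byte ASCII.
		if c := in[i]; c < utf8.RuneSelf {
			if c != '_' && c != '-' {
				if 'a' <= c && c <= 'z' {
					c -= 'a' - 'A'
				}
				out = append(out, c)
			}
			i++
			continue
		}
		r, n := utf8.DecodeRuneInString(in[i:])
		out = utf8.AppendRune(out, foldRune(r))
		i += n
	}
	return string(out)
}

// foldRune returns the smallest rune in r's fold set.
func foldRune(r rune) rune {
	for {
		r2 := unicode.SimpleFold(r)
		if r2 <= r {
			return r2
		}
		r = r2
	}
}

func main() {
	fmt.Println(foldName("first_name")) // FIRSTNAME
	fmt.Println(foldName("FirstName"))  // FIRSTNAME
	fmt.Println(foldName("FIRST-NAME")) // FIRSTNAME
}
```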
diff --git a/internal/json/intern.go b/internal/json/intern.go
new file mode 100644
index 0000000000..1bfb8ca633
--- /dev/null
+++ b/internal/json/intern.go
@@ -0,0 +1,88 @@
+// Copyright 2022 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+//go:build !goexperiment.jsonv2 || !go1.25
+
+package json
+
+import (
+ "encoding/binary"
+ "math/bits"
+)
+
+// stringCache is a cache for strings converted from a []byte.
+type stringCache = [256]string // 256*unsafe.Sizeof(string("")) => 4KiB
+
+// makeString returns the string form of b.
+// It returns a pre-allocated string from c if present, otherwise
+// it allocates a new string, inserts it into the cache, and returns it.
+func makeString(c *stringCache, b []byte) string {
+ const (
+ minCachedLen = 2 // single byte strings are already interned by the runtime
+ maxCachedLen = 256 // large enough for UUIDs, IPv6 addresses, SHA-256 checksums, etc.
+ )
+ if c == nil || len(b) < minCachedLen || len(b) > maxCachedLen {
+ return string(b)
+ }
+
+ // Compute a hash from the fixed-width prefix and suffix of the string.
+ // This ensures hashing a string is a constant time operation.
+ var h uint32
+ switch {
+ case len(b) >= 8:
+ lo := binary.LittleEndian.Uint64(b[:8])
+ hi := binary.LittleEndian.Uint64(b[len(b)-8:])
+ h = hash64(uint32(lo), uint32(lo>>32)) ^ hash64(uint32(hi), uint32(hi>>32))
+ case len(b) >= 4:
+ lo := binary.LittleEndian.Uint32(b[:4])
+ hi := binary.LittleEndian.Uint32(b[len(b)-4:])
+ h = hash64(lo, hi)
+ case len(b) >= 2:
+ lo := binary.LittleEndian.Uint16(b[:2])
+ hi := binary.LittleEndian.Uint16(b[len(b)-2:])
+ h = hash64(uint32(lo), uint32(hi))
+ }
+
+ // Check the cache for the string.
+ i := h % uint32(len(*c))
+ if s := (*c)[i]; s == string(b) {
+ return s
+ }
+ s := string(b)
+ (*c)[i] = s
+ return s
+}
+
+// hash64 returns the hash of two uint32s as a single uint32.
+func hash64(lo, hi uint32) uint32 {
+ // If avalanche=true, this is identical to XXH32 hash on a 8B string:
+ // var b [8]byte
+ // binary.LittleEndian.PutUint32(b[:4], lo)
+ // binary.LittleEndian.PutUint32(b[4:], hi)
+ // return xxhash.Sum32(b[:])
+ const (
+ prime1 = 0x9e3779b1
+ prime2 = 0x85ebca77
+ prime3 = 0xc2b2ae3d
+ prime4 = 0x27d4eb2f
+ prime5 = 0x165667b1
+ )
+ h := prime5 + uint32(8)
+ h += lo * prime3
+ h = bits.RotateLeft32(h, 17) * prime4
+ h += hi * prime3
+ h = bits.RotateLeft32(h, 17) * prime4
+ // Skip final mix (avalanche) step of XXH32 for performance reasons.
+ // Empirical testing shows that the improvements in unbiased distribution
+ // does not outweigh the extra cost in computational complexity.
+ const avalanche = false
+ if avalanche {
+ h ^= h >> 15
+ h *= prime2
+ h ^= h >> 13
+ h *= prime3
+ h ^= h >> 16
+ }
+ return h
+}
diff --git a/internal/json/internal/internal.go b/internal/json/internal/internal.go
new file mode 100644
index 0000000000..a675387329
--- /dev/null
+++ b/internal/json/internal/internal.go
@@ -0,0 +1,41 @@
+// Copyright 2023 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+//go:build !goexperiment.jsonv2 || !go1.25
+
+package internal
+
+import "errors"
+
+// NotForPublicUse is a marker type that an API is for internal use only.
+// It does not perfectly prevent usage of that API, but helps to restrict usage.
+// Anything with this marker is not covered by the Go compatibility agreement.
+type NotForPublicUse struct{}
+
+// AllowInternalUse is passed from "json" to "jsontext" to authenticate
+// that the caller can have access to internal functionality.
+var AllowInternalUse NotForPublicUse
+
+// Sentinel error values internally shared between jsonv1 and jsonv2.
+var (
+ ErrCycle = errors.New("encountered a cycle")
+ ErrNonNilReference = errors.New("value must be passed as a non-nil pointer reference")
+)
+
+var (
+ // TransformMarshalError converts a v2 error into a v1 error.
+ // It is called only at the top-level of a Marshal function.
+ TransformMarshalError func(any, error) error
+ // NewMarshalerError constructs a jsonv1.MarshalerError.
+ // It is called after a user-defined Marshal method/function fails.
+ NewMarshalerError func(any, error, string) error
+ // TransformUnmarshalError converts a v2 error into a v1 error.
+ // It is called only at the top-level of a Unmarshal function.
+ TransformUnmarshalError func(any, error) error
+
+ // NewRawNumber returns new(jsonv1.Number).
+ NewRawNumber func() any
+ // RawNumberOf returns jsonv1.Number(b).
+ RawNumberOf func(b []byte) any
+)
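The `NotForPublicUse` marker works because only packages that can name the internal type can declare a method accepting it, which keeps the set of interface implementations closed to the module. A single-file sketch of the pattern (all names here are illustrative):

```go
package main

import "fmt"

// notForPublicUse plays the role of internal.NotForPublicUse. In the
// real layout it lives under internal/, so code outside the module
// cannot reference it and therefore cannot implement Options below.
type notForPublicUse struct{}

// Options mirrors jsonopts.Options: the marker-typed parameter means
// only packages able to name notForPublicUse can satisfy the interface.
type Options interface {
	JSONOptions(notForPublicUse)
}

// indent is a sample in-module option type.
type indent string

func (indent) JSONOptions(notForPublicUse) {}

func main() {
	var o Options = indent("\t") // compiles: same package can implement
	fmt.Printf("%T\n", o)
}
```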
diff --git a/internal/json/internal/jsonflags/flags.go b/internal/json/internal/jsonflags/flags.go
new file mode 100644
index 0000000000..39843f0a28
--- /dev/null
+++ b/internal/json/internal/jsonflags/flags.go
@@ -0,0 +1,215 @@
+// Copyright 2023 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+//go:build !goexperiment.jsonv2 || !go1.25
+
+// jsonflags implements all the optional boolean flags.
+// These flags are shared across both "json", "jsontext", and "jsonopts".
+package jsonflags
+
+import "github.com/quay/clair/v4/internal/json/internal"
+
+// Bools represents zero or more boolean flags, all set to true or false.
+// The least-significant bit is the boolean value of all flags in the set.
+// The remaining bits identify which particular flags.
+//
+// In common usage, this is OR'd with 0 or 1. For example:
+// - (AllowInvalidUTF8 | 0) means "AllowInvalidUTF8 is false"
+// - (Multiline | Indent | 1) means "Multiline and Indent are true"
+type Bools uint64
+
+func (Bools) JSONOptions(internal.NotForPublicUse) {}
+
+const (
+ // AllFlags is the set of all flags.
+ AllFlags = AllCoderFlags | AllArshalV2Flags | AllArshalV1Flags
+
+ // AllCoderFlags is the set of all encoder/decoder flags.
+ AllCoderFlags = (maxCoderFlag - 1) - initFlag
+
+ // AllArshalV2Flags is the set of all v2 marshal/unmarshal flags.
+ AllArshalV2Flags = (maxArshalV2Flag - 1) - (maxCoderFlag - 1)
+
+ // AllArshalV1Flags is the set of all v1 marshal/unmarshal flags.
+ AllArshalV1Flags = (maxArshalV1Flag - 1) - (maxArshalV2Flag - 1)
+
+ // NonBooleanFlags is the set of non-boolean flags,
+ // where the value is some other concrete Go type.
+ // The value of the flag is stored within jsonopts.Struct.
+ NonBooleanFlags = 0 |
+ Indent |
+ IndentPrefix |
+ ByteLimit |
+ DepthLimit |
+ Marshalers |
+ Unmarshalers
+
+ // DefaultV1Flags is the set of booleans flags that default to true under
+ // v1 semantics. None of the non-boolean flags differ between v1 and v2.
+ DefaultV1Flags = 0 |
+ AllowDuplicateNames |
+ AllowInvalidUTF8 |
+ EscapeForHTML |
+ EscapeForJS |
+ PreserveRawStrings |
+ Deterministic |
+ FormatNilMapAsNull |
+ FormatNilSliceAsNull |
+ MatchCaseInsensitiveNames |
+ CallMethodsWithLegacySemantics |
+ FormatByteArrayAsArray |
+ FormatBytesWithLegacySemantics |
+ FormatDurationAsNano |
+ MatchCaseSensitiveDelimiter |
+ MergeWithLegacySemantics |
+ OmitEmptyWithLegacySemantics |
+ ParseBytesWithLooseRFC4648 |
+ ParseTimeWithLooseRFC3339 |
+ ReportErrorsWithLegacySemantics |
+ StringifyWithLegacySemantics |
+ UnmarshalArrayFromAnyLength
+
+ // AnyWhitespace reports whether the encoded output might have any whitespace.
+ AnyWhitespace = Multiline | SpaceAfterColon | SpaceAfterComma
+
+ // WhitespaceFlags is the set of flags related to whitespace formatting.
+ // In contrast to AnyWhitespace, this includes Indent and IndentPrefix
+ // as those settings take no effect if Multiline is false.
+ WhitespaceFlags = AnyWhitespace | Indent | IndentPrefix
+
+ // AnyEscape is the set of flags related to escaping in a JSON string.
+ AnyEscape = EscapeForHTML | EscapeForJS
+
+ // CanonicalizeNumbers is the set of flags related to raw number canonicalization.
+ CanonicalizeNumbers = CanonicalizeRawInts | CanonicalizeRawFloats
+)
+
+// Encoder and decoder flags.
+const (
+ initFlag Bools = 1 << iota // reserved for the boolean value itself
+
+ AllowDuplicateNames // encode or decode
+ AllowInvalidUTF8 // encode or decode
+ WithinArshalCall // encode or decode; for internal use by json.Marshal and json.Unmarshal
+ OmitTopLevelNewline // encode only; for internal use by json.Marshal and json.MarshalWrite
+ PreserveRawStrings // encode only
+ CanonicalizeRawInts // encode only
+ CanonicalizeRawFloats // encode only
+ ReorderRawObjects // encode only
+ EscapeForHTML // encode only
+ EscapeForJS // encode only
+ Multiline // encode only
+ SpaceAfterColon // encode only
+ SpaceAfterComma // encode only
+ Indent // encode only; non-boolean flag
+ IndentPrefix // encode only; non-boolean flag
+ ByteLimit // encode or decode; non-boolean flag
+ DepthLimit // encode or decode; non-boolean flag
+
+ maxCoderFlag
+)
+
+// Marshal and Unmarshal flags (for v2).
+const (
+ _ Bools = (maxCoderFlag >> 1) << iota
+
+ StringifyNumbers // marshal or unmarshal
+ Deterministic // marshal only
+ FormatNilMapAsNull // marshal only
+ FormatNilSliceAsNull // marshal only
+ OmitZeroStructFields // marshal only
+ MatchCaseInsensitiveNames // marshal or unmarshal
+ DiscardUnknownMembers // marshal only
+ RejectUnknownMembers // unmarshal only
+ Marshalers // marshal only; non-boolean flag
+ Unmarshalers // unmarshal only; non-boolean flag
+
+ maxArshalV2Flag
+)
+
+// Marshal and Unmarshal flags (for v1).
+const (
+ _ Bools = (maxArshalV2Flag >> 1) << iota
+
+ CallMethodsWithLegacySemantics // marshal or unmarshal
+ FormatByteArrayAsArray // marshal or unmarshal
+ FormatBytesWithLegacySemantics // marshal or unmarshal
+ FormatDurationAsNano // marshal or unmarshal
+ MatchCaseSensitiveDelimiter // marshal or unmarshal
+ MergeWithLegacySemantics // unmarshal
+ OmitEmptyWithLegacySemantics // marshal
+ ParseBytesWithLooseRFC4648 // unmarshal
+ ParseTimeWithLooseRFC3339 // unmarshal
+ ReportErrorsWithLegacySemantics // marshal or unmarshal
+ StringifyWithLegacySemantics // marshal or unmarshal
+ StringifyBoolsAndStrings // marshal or unmarshal; for internal use by jsonv2.makeStructArshaler
+ UnmarshalAnyWithRawNumber // unmarshal; for internal use by jsonv1.Decoder.UseNumber
+ UnmarshalArrayFromAnyLength // unmarshal
+
+ maxArshalV1Flag
+)
+
+// bitsUsed is the number of bits used in the 64-bit boolean flags
+const bitsUsed = 42
+
+// Static compile check that bitsUsed and maxArshalV1Flag are in sync.
+const _ = uint64((1 << bitsUsed) - maxArshalV1Flag) // negative (compile error) if bitsUsed is too small
+
+// Flags is a set of boolean flags.
+// If the presence bit is zero, then the value bit must also be zero.
+// The least-significant bit of both fields is always zero.
+//
+// Unlike Bools, which can only represent a set of flags that are all
+// true or false, Flags can represent each flag as individually
+// unspecified, true, or false.
+type Flags struct{ Presence, Values uint64 }
+
+// Join joins two sets of flags such that the latter takes precedence.
+func (dst *Flags) Join(src Flags) {
+ // Copy over all source presence bits to the destination (using OR),
+ // then invert the source presence bits to clear out the overridden
+ // values (using AND-NOT), then copy over the source value bits (using OR).
+ // e.g., dst := Flags{Presence: 0b_1100_0011, Values: 0b_1000_0011}
+ // e.g., src := Flags{Presence: 0b_0101_1010, Values: 0b_1001_0010}
+ dst.Presence |= src.Presence // e.g., 0b_1100_0011 | 0b_0101_1010 -> 0b_110_11011
+ dst.Values &= ^src.Presence // e.g., 0b_1000_0011 & 0b_1010_0101 -> 0b_100_00001
+ dst.Values |= src.Values // e.g., 0b_1000_0001 | 0b_1001_0010 -> 0b_100_10011
+}
+
+// Set sets both the presence and value for the provided bool (or set of bools).
+func (fs *Flags) Set(f Bools) {
+ // Select out the bits for the flag identifiers (everything except LSB),
+ // then set the presence for all the identifier bits (using OR),
+ // then invert the identifier bits to clear out the values (using AND-NOT),
+ // then copy over all the identifier bits to the value if LSB is 1.
+ // e.g., fs := Flags{Presence: 0b_0101_0010, Value: 0b_0001_0010}
+ // e.g., f := 0b_1001_0001
+ id := uint64(f) &^ uint64(1) // e.g., 0b_1001_0001 & 0b_1111_1110 -> 0b_1001_0000
+ fs.Presence |= id // e.g., 0b_0101_0010 | 0b_1001_0000 -> 0b_1101_0011
+ fs.Values &= ^id // e.g., 0b_0001_0010 & 0b_0110_1111 -> 0b_0000_0010
+ fs.Values |= uint64(f&1) * id // e.g., 0b_0000_0010 | 0b_1001_0000 -> 0b_1001_0010
+}
+
+// Get reports whether the bool (or any of the bools) is true.
+// This is generally only used with a singular bool.
+// The value bit of f (i.e., the LSB) is ignored.
+func (fs Flags) Get(f Bools) bool {
+ return fs.Values&uint64(f) > 0
+}
+
+// Has reports whether the bool (or any of the bools) is set.
+// The value bit of f (i.e., the LSB) is ignored.
+func (fs Flags) Has(f Bools) bool {
+ return fs.Presence&uint64(f) > 0
+}
+
+// Clear clears both the presence and value for the provided bool or bools.
+// The value bit of f (i.e., the LSB) is ignored.
+func (fs *Flags) Clear(f Bools) {
+ // Invert f to produce a mask to clear all bits in f (using AND).
+ // e.g., fs := Flags{Presence: 0b_0101_0010, Value: 0b_0001_0010}
+ // e.g., f := 0b_0001_1000
+ mask := uint64(^f) // e.g., 0b_0001_1000 -> 0b_1110_0111
+ fs.Presence &= mask // e.g., 0b_0101_0010 & 0b_1110_0111 -> 0b_0100_0010
+ fs.Values &= mask // e.g., 0b_0001_0010 & 0b_1110_0111 -> 0b_0000_0010
+}
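The Presence/Values split lets a later options set override individual flags while leaving unset ones untouched, which is what `Join`'s three bitwise operations implement. A small walkthrough using the same conventions (the flag bit positions are illustrative):

```go
package main

import "fmt"

// Flags mirrors jsonflags.Flags: Presence records which flags were
// explicitly set; Values records what they were set to.
type Flags struct{ Presence, Values uint64 }

// Illustrative flag bits; bit 0 is reserved for the boolean value in
// the Bools encoding, so flags start at bit 1.
const (
	flagA uint64 = 1 << 1
	flagB uint64 = 1 << 2
)

// Set marks f as present with value v.
func (fs *Flags) Set(f uint64, v bool) {
	fs.Presence |= f
	fs.Values &^= f
	if v {
		fs.Values |= f
	}
}

// Join overlays src onto dst; src wins wherever it has presence.
func (dst *Flags) Join(src Flags) {
	dst.Presence |= src.Presence
	dst.Values &= ^src.Presence
	dst.Values |= src.Values
}

func main() {
	var defaults, overrides Flags
	defaults.Set(flagA, true)
	defaults.Set(flagB, true)
	overrides.Set(flagA, false) // override A only; B keeps its default
	defaults.Join(overrides)
	fmt.Println(defaults.Values&flagA != 0, defaults.Values&flagB != 0) // false true
}
```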
diff --git a/internal/json/internal/jsonopts/options.go b/internal/json/internal/jsonopts/options.go
new file mode 100644
index 0000000000..5dd2458aa1
--- /dev/null
+++ b/internal/json/internal/jsonopts/options.go
@@ -0,0 +1,202 @@
+// Copyright 2023 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+//go:build !goexperiment.jsonv2 || !go1.25
+
+package jsonopts
+
+import (
+ "github.com/quay/clair/v4/internal/json/internal"
+ "github.com/quay/clair/v4/internal/json/internal/jsonflags"
+)
+
+// Options is the common options type shared across json packages.
+type Options interface {
+ // JSONOptions is exported so related json packages can implement Options.
+ JSONOptions(internal.NotForPublicUse)
+}
+
+// Struct is the combination of all options in struct form.
+// This is efficient to pass down the call stack and to query.
+type Struct struct {
+ Flags jsonflags.Flags
+
+ CoderValues
+ ArshalValues
+}
+
+type CoderValues struct {
+ Indent string // jsonflags.Indent
+ IndentPrefix string // jsonflags.IndentPrefix
+ ByteLimit int64 // jsonflags.ByteLimit
+ DepthLimit int // jsonflags.DepthLimit
+}
+
+type ArshalValues struct {
+ // The Marshalers and Unmarshalers fields use the any type to avoid a
+ // concrete dependency on *json.Marshalers and *json.Unmarshalers,
+ // which would in turn create a dependency on the "reflect" package.
+
+ Marshalers any // jsonflags.Marshalers
+ Unmarshalers any // jsonflags.Unmarshalers
+
+ Format string
+ FormatDepth int
+}
+
+// DefaultOptionsV2 is the set of all options that define default v2 behavior.
+var DefaultOptionsV2 = Struct{
+ Flags: jsonflags.Flags{
+ Presence: uint64(jsonflags.AllFlags & ^jsonflags.WhitespaceFlags),
+ Values: uint64(0),
+ },
+}
+
+// DefaultOptionsV1 is the set of all options that define default v1 behavior.
+var DefaultOptionsV1 = Struct{
+ Flags: jsonflags.Flags{
+ Presence: uint64(jsonflags.AllFlags & ^jsonflags.WhitespaceFlags),
+ Values: uint64(jsonflags.DefaultV1Flags),
+ },
+}
+
+func (*Struct) JSONOptions(internal.NotForPublicUse) {}
+
+// GetUnknownOption is injected by the "json" package to handle Options
+// declared in that package so that "jsonopts" can handle them.
+var GetUnknownOption = func(Struct, Options) (any, bool) { panic("unknown option") }
+
+func GetOption[T any](opts Options, setter func(T) Options) (T, bool) {
+ // Collapse the options to *Struct to simplify lookup.
+ structOpts, ok := opts.(*Struct)
+ if !ok {
+ var structOpts2 Struct
+ structOpts2.Join(opts)
+ structOpts = &structOpts2
+ }
+
+ // Lookup the option based on the return value of the setter.
+ var zero T
+ switch opt := setter(zero).(type) {
+ case jsonflags.Bools:
+ v := structOpts.Flags.Get(opt)
+ ok := structOpts.Flags.Has(opt)
+ return any(v).(T), ok
+ case Indent:
+ if !structOpts.Flags.Has(jsonflags.Indent) {
+ return zero, false
+ }
+ return any(structOpts.Indent).(T), true
+ case IndentPrefix:
+ if !structOpts.Flags.Has(jsonflags.IndentPrefix) {
+ return zero, false
+ }
+ return any(structOpts.IndentPrefix).(T), true
+ case ByteLimit:
+ if !structOpts.Flags.Has(jsonflags.ByteLimit) {
+ return zero, false
+ }
+ return any(structOpts.ByteLimit).(T), true
+ case DepthLimit:
+ if !structOpts.Flags.Has(jsonflags.DepthLimit) {
+ return zero, false
+ }
+ return any(structOpts.DepthLimit).(T), true
+ default:
+ v, ok := GetUnknownOption(*structOpts, opt)
+ return v.(T), ok
+ }
+}
+
+// JoinUnknownOption is injected by the "json" package to handle Options
+// declared in that package so that "jsonopts" can handle them.
+var JoinUnknownOption = func(Struct, Options) Struct { panic("unknown option") }
+
+func (dst *Struct) Join(srcs ...Options) {
+ dst.join(false, srcs...)
+}
+
+func (dst *Struct) JoinWithoutCoderOptions(srcs ...Options) {
+ dst.join(true, srcs...)
+}
+
+func (dst *Struct) join(excludeCoderOptions bool, srcs ...Options) {
+ for _, src := range srcs {
+ switch src := src.(type) {
+ case nil:
+ continue
+ case jsonflags.Bools:
+ if excludeCoderOptions {
+ src &= ^jsonflags.AllCoderFlags
+ }
+ dst.Flags.Set(src)
+ case Indent:
+ if excludeCoderOptions {
+ continue
+ }
+ dst.Flags.Set(jsonflags.Multiline | jsonflags.Indent | 1)
+ dst.Indent = string(src)
+ case IndentPrefix:
+ if excludeCoderOptions {
+ continue
+ }
+ dst.Flags.Set(jsonflags.Multiline | jsonflags.IndentPrefix | 1)
+ dst.IndentPrefix = string(src)
+ case ByteLimit:
+ if excludeCoderOptions {
+ continue
+ }
+ dst.Flags.Set(jsonflags.ByteLimit | 1)
+ dst.ByteLimit = int64(src)
+ case DepthLimit:
+ if excludeCoderOptions {
+ continue
+ }
+ dst.Flags.Set(jsonflags.DepthLimit | 1)
+ dst.DepthLimit = int(src)
+ case *Struct:
+ srcFlags := src.Flags // shallow copy the flags
+ if excludeCoderOptions {
+ srcFlags.Clear(jsonflags.AllCoderFlags)
+ }
+ dst.Flags.Join(srcFlags)
+ if srcFlags.Has(jsonflags.NonBooleanFlags) {
+ if srcFlags.Has(jsonflags.Indent) {
+ dst.Indent = src.Indent
+ }
+ if srcFlags.Has(jsonflags.IndentPrefix) {
+ dst.IndentPrefix = src.IndentPrefix
+ }
+ if srcFlags.Has(jsonflags.ByteLimit) {
+ dst.ByteLimit = src.ByteLimit
+ }
+ if srcFlags.Has(jsonflags.DepthLimit) {
+ dst.DepthLimit = src.DepthLimit
+ }
+ if srcFlags.Has(jsonflags.Marshalers) {
+ dst.Marshalers = src.Marshalers
+ }
+ if srcFlags.Has(jsonflags.Unmarshalers) {
+ dst.Unmarshalers = src.Unmarshalers
+ }
+ }
+ default:
+ *dst = JoinUnknownOption(*dst, src)
+ }
+ }
+}
+
+type (
+ Indent string // jsontext.WithIndent
+ IndentPrefix string // jsontext.WithIndentPrefix
+ ByteLimit int64 // jsontext.WithByteLimit
+ DepthLimit int // jsontext.WithDepthLimit
+ // type for jsonflags.Marshalers declared in "json" package
+ // type for jsonflags.Unmarshalers declared in "json" package
+)
+
+func (Indent) JSONOptions(internal.NotForPublicUse) {}
+func (IndentPrefix) JSONOptions(internal.NotForPublicUse) {}
+func (ByteLimit) JSONOptions(internal.NotForPublicUse) {}
+func (DepthLimit) JSONOptions(internal.NotForPublicUse) {}
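The jsonopts pattern above gives each option its own named type and merges them left to right, so later options take precedence. A reduced sketch of that join logic (the types and fields are illustrative, not the real API):

```go
package main

import "fmt"

// Each option is a distinct named type implementing a marker interface,
// mirroring jsonopts.Indent and jsonopts.ByteLimit.
type (
	Indent    string
	ByteLimit int64
)

type Option interface{ isOption() }

func (Indent) isOption()    {}
func (ByteLimit) isOption() {}

// Struct collects options in struct form, as jsonopts.Struct does.
type Struct struct {
	Indent    string
	ByteLimit int64
}

// Join applies options left to right, so later options take precedence.
func (dst *Struct) Join(srcs ...Option) {
	for _, src := range srcs {
		switch src := src.(type) {
		case Indent:
			dst.Indent = string(src)
		case ByteLimit:
			dst.ByteLimit = int64(src)
		}
	}
}

func main() {
	var s Struct
	s.Join(Indent("  "), ByteLimit(1<<20), Indent("\t")) // last Indent wins
	fmt.Printf("%q %d\n", s.Indent, s.ByteLimit)
}
```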
diff --git a/internal/json/internal/jsonwire/decode.go b/internal/json/internal/jsonwire/decode.go
new file mode 100644
index 0000000000..6a5acb8ec0
--- /dev/null
+++ b/internal/json/internal/jsonwire/decode.go
@@ -0,0 +1,629 @@
+// Copyright 2023 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+//go:build !goexperiment.jsonv2 || !go1.25
+
+package jsonwire
+
+import (
+ "io"
+ "math"
+ "slices"
+ "strconv"
+ "unicode/utf16"
+ "unicode/utf8"
+)
+
+type ValueFlags uint
+
+const (
+ _ ValueFlags = (1 << iota) / 2 // powers of two starting with zero
+
+ stringNonVerbatim // string cannot be naively treated as valid UTF-8
+ stringNonCanonical // string not formatted according to RFC 8785, section 3.2.2.2.
+ // TODO: Track whether a number is a non-integer?
+)
+
+func (f *ValueFlags) Join(f2 ValueFlags) { *f |= f2 }
+func (f ValueFlags) IsVerbatim() bool { return f&stringNonVerbatim == 0 }
+func (f ValueFlags) IsCanonical() bool { return f&stringNonCanonical == 0 }
+
+// ConsumeWhitespace consumes leading JSON whitespace per RFC 7159, section 2.
+func ConsumeWhitespace(b []byte) (n int) {
+ // NOTE: The arguments and logic are kept simple to keep this inlinable.
+ for len(b) > n && (b[n] == ' ' || b[n] == '\t' || b[n] == '\r' || b[n] == '\n') {
+ n++
+ }
+ return n
+}
+
+// ConsumeNull consumes the next JSON null literal per RFC 7159, section 3.
+// It returns 0 if it is invalid, in which case consumeLiteral should be used.
+func ConsumeNull(b []byte) int {
+ // NOTE: The arguments and logic are kept simple to keep this inlinable.
+ const literal = "null"
+ if len(b) >= len(literal) && string(b[:len(literal)]) == literal {
+ return len(literal)
+ }
+ return 0
+}
+
+// ConsumeFalse consumes the next JSON false literal per RFC 7159, section 3.
+// It returns 0 if it is invalid, in which case consumeLiteral should be used.
+func ConsumeFalse(b []byte) int {
+ // NOTE: The arguments and logic are kept simple to keep this inlinable.
+ const literal = "false"
+ if len(b) >= len(literal) && string(b[:len(literal)]) == literal {
+ return len(literal)
+ }
+ return 0
+}
+
+// ConsumeTrue consumes the next JSON true literal per RFC 7159, section 3.
+// It returns 0 if it is invalid, in which case consumeLiteral should be used.
+func ConsumeTrue(b []byte) int {
+ // NOTE: The arguments and logic are kept simple to keep this inlinable.
+ const literal = "true"
+ if len(b) >= len(literal) && string(b[:len(literal)]) == literal {
+ return len(literal)
+ }
+ return 0
+}
+
+// ConsumeLiteral consumes the next JSON literal per RFC 7159, section 3.
+// If the input appears truncated, it returns io.ErrUnexpectedEOF.
+func ConsumeLiteral(b []byte, lit string) (n int, err error) {
+ for i := 0; i < len(b) && i < len(lit); i++ {
+ if b[i] != lit[i] {
+ return i, NewInvalidCharacterError(b[i:], "in literal "+lit+" (expecting "+strconv.QuoteRune(rune(lit[i]))+")")
+ }
+ }
+ if len(b) < len(lit) {
+ return len(b), io.ErrUnexpectedEOF
+ }
+ return len(lit), nil
+}
+
+// ConsumeSimpleString consumes the next JSON string per RFC 7159, section 7
+// but is limited to the grammar for an ASCII string without escape sequences.
+// It returns 0 if it is invalid or more complicated than a simple string,
+// in which case consumeString should be called.
+//
+// It rejects '<', '>', and '&' for compatibility reasons since these were
+// always escaped in the v1 implementation. Thus, if this function reports
+// non-zero then we know that the string would be encoded the same way
+// under both v1 or v2 escape semantics.
+func ConsumeSimpleString(b []byte) (n int) {
+ // NOTE: The arguments and logic are kept simple to keep this inlinable.
+ if len(b) > 0 && b[0] == '"' {
+ n++
+ for len(b) > n && b[n] < utf8.RuneSelf && escapeASCII[b[n]] == 0 {
+ n++
+ }
+ if uint(len(b)) > uint(n) && b[n] == '"' {
+ n++
+ return n
+ }
+ }
+ return 0
+}
+
+// ConsumeString consumes the next JSON string per RFC 7159, section 7.
+// If validateUTF8 is false, then this allows the presence of invalid UTF-8
+// characters within the string itself.
+// It reports the number of bytes consumed and whether an error was encountered.
+// If the input appears truncated, it returns io.ErrUnexpectedEOF.
+func ConsumeString(flags *ValueFlags, b []byte, validateUTF8 bool) (n int, err error) {
+ return ConsumeStringResumable(flags, b, 0, validateUTF8)
+}
+
+// ConsumeStringResumable is identical to consumeString but supports resuming
+// from a previous call that returned io.ErrUnexpectedEOF.
+func ConsumeStringResumable(flags *ValueFlags, b []byte, resumeOffset int, validateUTF8 bool) (n int, err error) {
+ // Consume the leading double quote.
+ switch {
+ case resumeOffset > 0:
+ n = resumeOffset // already handled the leading quote
+ case uint(len(b)) == 0:
+ return n, io.ErrUnexpectedEOF
+ case b[0] == '"':
+ n++
+ default:
+ return n, NewInvalidCharacterError(b[n:], `at start of string (expecting '"')`)
+ }
+
+ // Consume every character in the string.
+ for uint(len(b)) > uint(n) {
+ // Optimize for long sequences of unescaped characters.
+ noEscape := func(c byte) bool {
+ return c < utf8.RuneSelf && ' ' <= c && c != '\\' && c != '"'
+ }
+ for uint(len(b)) > uint(n) && noEscape(b[n]) {
+ n++
+ }
+ if uint(len(b)) <= uint(n) {
+ return n, io.ErrUnexpectedEOF
+ }
+
+ // Check for terminating double quote.
+ if b[n] == '"' {
+ n++
+ return n, nil
+ }
+
+ switch r, rn := utf8.DecodeRune(b[n:]); {
+ // Handle UTF-8 encoded byte sequence.
+ // Due to specialized handling of ASCII above, we know that
+ // all normal sequences at this point must be 2 bytes or larger.
+ case rn > 1:
+ n += rn
+ // Handle escape sequence.
+ case r == '\\':
+ flags.Join(stringNonVerbatim)
+ resumeOffset = n
+ if uint(len(b)) < uint(n+2) {
+ return resumeOffset, io.ErrUnexpectedEOF
+ }
+ switch r := b[n+1]; r {
+ case '/':
+ // Forward slash is the only character with 3 representations.
+ // Per RFC 8785, section 3.2.2.2., this must not be escaped.
+ flags.Join(stringNonCanonical)
+ n += 2
+ case '"', '\\', 'b', 'f', 'n', 'r', 't':
+ n += 2
+ case 'u':
+ if uint(len(b)) < uint(n+6) {
+ if hasEscapedUTF16Prefix(b[n:], false) {
+ return resumeOffset, io.ErrUnexpectedEOF
+ }
+ flags.Join(stringNonCanonical)
+ return n, NewInvalidEscapeSequenceError(b[n:])
+ }
+ v1, ok := parseHexUint16(b[n+2 : n+6])
+ if !ok {
+ flags.Join(stringNonCanonical)
+ return n, NewInvalidEscapeSequenceError(b[n : n+6])
+ }
+ // Only certain control characters can use the \uFFFF notation
+ // for canonical formatting (per RFC 8785, section 3.2.2.2.).
+ switch v1 {
+ // \uFFFF notation not permitted for these characters.
+ case '\b', '\f', '\n', '\r', '\t':
+ flags.Join(stringNonCanonical)
+ default:
+ // \uFFFF notation only permitted for control characters.
+ if v1 >= ' ' {
+ flags.Join(stringNonCanonical)
+ } else {
+ // \uFFFF notation must be lower case.
+ for _, c := range b[n+2 : n+6] {
+ if 'A' <= c && c <= 'F' {
+ flags.Join(stringNonCanonical)
+ }
+ }
+ }
+ }
+ n += 6
+
+ r := rune(v1)
+ if validateUTF8 && utf16.IsSurrogate(r) {
+ if uint(len(b)) < uint(n+6) {
+ if hasEscapedUTF16Prefix(b[n:], true) {
+ return resumeOffset, io.ErrUnexpectedEOF
+ }
+ flags.Join(stringNonCanonical)
+ return n - 6, NewInvalidEscapeSequenceError(b[n-6:])
+ } else if v2, ok := parseHexUint16(b[n+2 : n+6]); b[n] != '\\' || b[n+1] != 'u' || !ok {
+ flags.Join(stringNonCanonical)
+ return n - 6, NewInvalidEscapeSequenceError(b[n-6 : n+6])
+ } else if r = utf16.DecodeRune(rune(v1), rune(v2)); r == utf8.RuneError {
+ flags.Join(stringNonCanonical)
+ return n - 6, NewInvalidEscapeSequenceError(b[n-6 : n+6])
+ } else {
+ n += 6
+ }
+ }
+ default:
+ flags.Join(stringNonCanonical)
+ return n, NewInvalidEscapeSequenceError(b[n : n+2])
+ }
+ // Handle invalid UTF-8.
+ case r == utf8.RuneError:
+ if !utf8.FullRune(b[n:]) {
+ return n, io.ErrUnexpectedEOF
+ }
+ flags.Join(stringNonVerbatim | stringNonCanonical)
+ if validateUTF8 {
+ return n, ErrInvalidUTF8
+ }
+ n++
+ // Handle invalid control characters.
+ case r < ' ':
+ flags.Join(stringNonVerbatim | stringNonCanonical)
+ return n, NewInvalidCharacterError(b[n:], "in string (expecting non-control character)")
+ default:
+ panic("BUG: unhandled character " + QuoteRune(b[n:]))
+ }
+ }
+ return n, io.ErrUnexpectedEOF
+}
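The `\uXXXX` handling in `ConsumeStringResumable` pairs UTF-16 surrogate halves via `utf16.DecodeRune`. A self-contained sketch of just that step (the function below is illustrative, not part of the package, and reduces error handling to a boolean):

```go
package main

import (
	"fmt"
	"strconv"
	"unicode/utf16"
	"unicode/utf8"
)

// decodeEscaped decodes a JSON \uXXXX escape at the start of s, pairing
// a high surrogate with a following \uXXXX low surrogate as the code
// above does. It reports the decoded rune and bytes consumed.
func decodeEscaped(s string) (r rune, n int, ok bool) {
	if len(s) < 6 || s[0] != '\\' || s[1] != 'u' {
		return 0, 0, false
	}
	v1, err := strconv.ParseUint(s[2:6], 16, 16)
	if err != nil {
		return 0, 0, false
	}
	r, n = rune(v1), 6
	if utf16.IsSurrogate(r) {
		// A surrogate half must be followed by its partner escape.
		if len(s) < 12 || s[6] != '\\' || s[7] != 'u' {
			return 0, 0, false
		}
		v2, err := strconv.ParseUint(s[8:12], 16, 16)
		if err != nil {
			return 0, 0, false
		}
		if r = utf16.DecodeRune(rune(v1), rune(v2)); r == utf8.RuneError {
			return 0, 0, false // unpaired or reversed surrogates
		}
		n = 12
	}
	return r, n, true
}

func main() {
	r, n, ok := decodeEscaped(`\ud83d\ude00`) // U+1F600 as a surrogate pair
	fmt.Println(string(r), n, ok)
}
```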
+
+// AppendUnquote appends the unescaped form of a JSON string in src to dst.
+// Any invalid UTF-8 within the string will be replaced with utf8.RuneError,
+// but the error will be specified as having encountered such an error.
+// The input must be an entire JSON string with no surrounding whitespace.
+func AppendUnquote[Bytes ~[]byte | ~string](dst []byte, src Bytes) (v []byte, err error) {
+ dst = slices.Grow(dst, len(src))
+
+ // Consume the leading double quote.
+ var i, n int
+ switch {
+ case uint(len(src)) == 0:
+ return dst, io.ErrUnexpectedEOF
+ case src[0] == '"':
+ i, n = 1, 1
+ default:
+ return dst, NewInvalidCharacterError(src, `at start of string (expecting '"')`)
+ }
+
+ // Consume every character in the string.
+ for uint(len(src)) > uint(n) {
+ // Optimize for long sequences of unescaped characters.
+ noEscape := func(c byte) bool {
+ return c < utf8.RuneSelf && ' ' <= c && c != '\\' && c != '"'
+ }
+ for uint(len(src)) > uint(n) && noEscape(src[n]) {
+ n++
+ }
+ if uint(len(src)) <= uint(n) {
+ dst = append(dst, src[i:n]...)
+ return dst, io.ErrUnexpectedEOF
+ }
+
+ // Check for terminating double quote.
+ if src[n] == '"' {
+ dst = append(dst, src[i:n]...)
+ n++
+ if n < len(src) {
+ err = NewInvalidCharacterError(src[n:], "after string value")
+ }
+ return dst, err
+ }
+
+ switch r, rn := utf8.DecodeRuneInString(string(truncateMaxUTF8(src[n:]))); {
+ // Handle UTF-8 encoded byte sequence.
+ // Due to specialized handling of ASCII above, we know that
+ // all normal sequences at this point must be 2 bytes or larger.
+ case rn > 1:
+ n += rn
+ // Handle escape sequence.
+ case r == '\\':
+ dst = append(dst, src[i:n]...)
+
+ // Handle escape sequence.
+ if uint(len(src)) < uint(n+2) {
+ return dst, io.ErrUnexpectedEOF
+ }
+ switch r := src[n+1]; r {
+ case '"', '\\', '/':
+ dst = append(dst, r)
+ n += 2
+ case 'b':
+ dst = append(dst, '\b')
+ n += 2
+ case 'f':
+ dst = append(dst, '\f')
+ n += 2
+ case 'n':
+ dst = append(dst, '\n')
+ n += 2
+ case 'r':
+ dst = append(dst, '\r')
+ n += 2
+ case 't':
+ dst = append(dst, '\t')
+ n += 2
+ case 'u':
+ if uint(len(src)) < uint(n+6) {
+ if hasEscapedUTF16Prefix(src[n:], false) {
+ return dst, io.ErrUnexpectedEOF
+ }
+ return dst, NewInvalidEscapeSequenceError(src[n:])
+ }
+ v1, ok := parseHexUint16(src[n+2 : n+6])
+ if !ok {
+ return dst, NewInvalidEscapeSequenceError(src[n : n+6])
+ }
+ n += 6
+
+ // Check whether this is a surrogate half.
+ r := rune(v1)
+ if utf16.IsSurrogate(r) {
+ r = utf8.RuneError // assume failure unless the following succeeds
+ if uint(len(src)) < uint(n+6) {
+ if hasEscapedUTF16Prefix(src[n:], true) {
+ return utf8.AppendRune(dst, r), io.ErrUnexpectedEOF
+ }
+ err = NewInvalidEscapeSequenceError(src[n-6:])
+ } else if v2, ok := parseHexUint16(src[n+2 : n+6]); src[n] != '\\' || src[n+1] != 'u' || !ok {
+ err = NewInvalidEscapeSequenceError(src[n-6 : n+6])
+ } else if r = utf16.DecodeRune(rune(v1), rune(v2)); r == utf8.RuneError {
+ err = NewInvalidEscapeSequenceError(src[n-6 : n+6])
+ } else {
+ n += 6
+ }
+ }
+
+ dst = utf8.AppendRune(dst, r)
+ default:
+ return dst, NewInvalidEscapeSequenceError(src[n : n+2])
+ }
+ i = n
+ // Handle invalid UTF-8.
+ case r == utf8.RuneError:
+ dst = append(dst, src[i:n]...)
+ if !utf8.FullRuneInString(string(truncateMaxUTF8(src[n:]))) {
+ return dst, io.ErrUnexpectedEOF
+ }
+ // NOTE: An unescaped string may be longer than the escaped string
+ // because invalid UTF-8 bytes are being replaced.
+ dst = append(dst, "\uFFFD"...)
+ n += rn
+ i = n
+ err = ErrInvalidUTF8
+ // Handle invalid control characters.
+ case r < ' ':
+ dst = append(dst, src[i:n]...)
+ return dst, NewInvalidCharacterError(src[n:], "in string (expecting non-control character)")
+ default:
+ panic("BUG: unhandled character " + QuoteRune(src[n:]))
+ }
+ }
+ dst = append(dst, src[i:n]...)
+ return dst, io.ErrUnexpectedEOF
+}
+
+// hasEscapedUTF16Prefix reports whether b is possibly
+// the truncated prefix of a \uFFFF escape sequence.
+func hasEscapedUTF16Prefix[Bytes ~[]byte | ~string](b Bytes, lowerSurrogateHalf bool) bool {
+ for i := range len(b) {
+ switch c := b[i]; {
+ case i == 0 && c != '\\':
+ return false
+ case i == 1 && c != 'u':
+ return false
+ case i == 2 && lowerSurrogateHalf && c != 'd' && c != 'D':
+ return false // not within ['\uDC00':'\uDFFF']
+ case i == 3 && lowerSurrogateHalf && !('c' <= c && c <= 'f') && !('C' <= c && c <= 'F'):
+ return false // not within ['\uDC00':'\uDFFF']
+ case i >= 2 && i < 6 && !('0' <= c && c <= '9') && !('a' <= c && c <= 'f') && !('A' <= c && c <= 'F'):
+ return false
+ }
+ }
+ return true
+}
+
+// UnquoteMayCopy returns the unescaped form of b.
+// If there are no escaped characters, the output is simply a subslice of
+// the input with the surrounding quotes removed.
+// Otherwise, a new buffer is allocated for the output.
+// It assumes the input is valid.
+func UnquoteMayCopy(b []byte, isVerbatim bool) []byte {
+ // NOTE: The arguments and logic are kept simple to keep this inlinable.
+ if isVerbatim {
+ return b[len(`"`) : len(b)-len(`"`)]
+ }
+ b, _ = AppendUnquote(nil, b)
+ return b
+}
+
+// ConsumeSimpleNumber consumes the next JSON number per RFC 7159, section 6
+// but is limited to the grammar for a positive integer.
+// It returns 0 if it is invalid or more complicated than a simple integer,
+// in which case ConsumeNumber should be called.
+func ConsumeSimpleNumber(b []byte) (n int) {
+ // NOTE: The arguments and logic are kept simple to keep this inlinable.
+ if len(b) > 0 {
+ if b[0] == '0' {
+ n++
+ } else if '1' <= b[0] && b[0] <= '9' {
+ n++
+ for len(b) > n && ('0' <= b[n] && b[n] <= '9') {
+ n++
+ }
+ } else {
+ return 0
+ }
+ if uint(len(b)) <= uint(n) || (b[n] != '.' && b[n] != 'e' && b[n] != 'E') {
+ return n
+ }
+ }
+ return 0
+}
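The fast path above accepts only a non-negative integer with no leading zeros and bails out (returning 0) the moment a fraction or exponent appears. A standalone copy of the function, with illustrative inputs, shows the behavior:

```go
package main

import "fmt"

// consumeSimpleNumber is a standalone copy of the fast path above:
// it accepts only a non-negative integer with no leading zeros and
// returns 0 when the input needs the full number grammar.
func consumeSimpleNumber(b []byte) (n int) {
	if len(b) > 0 {
		if b[0] == '0' {
			n++
		} else if '1' <= b[0] && b[0] <= '9' {
			n++
			for len(b) > n && ('0' <= b[n] && b[n] <= '9') {
				n++
			}
		} else {
			return 0
		}
		if uint(len(b)) <= uint(n) || (b[n] != '.' && b[n] != 'e' && b[n] != 'E') {
			return n
		}
	}
	return 0
}

func main() {
	fmt.Println(consumeSimpleNumber([]byte("123,")))  // 3: stops at the comma
	fmt.Println(consumeSimpleNumber([]byte("123.5"))) // 0: fractions need the full grammar
	fmt.Println(consumeSimpleNumber([]byte("-1")))    // 0: negatives need the full grammar
}
```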
+
+type ConsumeNumberState uint
+
+const (
+ consumeNumberInit ConsumeNumberState = iota
+ beforeIntegerDigits
+ withinIntegerDigits
+ beforeFractionalDigits
+ withinFractionalDigits
+ beforeExponentDigits
+ withinExponentDigits
+)
+
+// ConsumeNumber consumes the next JSON number per RFC 7159, section 6.
+// It reports the number of bytes consumed and whether an error was encountered.
+// If the input appears truncated, it returns io.ErrUnexpectedEOF.
+//
+// Note that JSON numbers are not self-terminating.
+// If the entire input is consumed, then the caller needs to consider whether
+// there may be subsequent unread data that may still be part of this number.
+func ConsumeNumber(b []byte) (n int, err error) {
+ n, _, err = ConsumeNumberResumable(b, 0, consumeNumberInit)
+ return n, err
+}
+
+// ConsumeNumberResumable is identical to ConsumeNumber but supports resuming
+// from a previous call that returned io.ErrUnexpectedEOF.
+func ConsumeNumberResumable(b []byte, resumeOffset int, state ConsumeNumberState) (n int, _ ConsumeNumberState, err error) {
+ // Jump to the right state when resuming from a partial consumption.
+ n = resumeOffset
+ if state > consumeNumberInit {
+ switch state {
+ case withinIntegerDigits, withinFractionalDigits, withinExponentDigits:
+ // Consume leading digits.
+ for uint(len(b)) > uint(n) && ('0' <= b[n] && b[n] <= '9') {
+ n++
+ }
+ if uint(len(b)) <= uint(n) {
+ return n, state, nil // still within the same state
+ }
+ state++ // switches "withinX" to "beforeY" where Y is the state after X
+ }
+ switch state {
+ case beforeIntegerDigits:
+ goto beforeInteger
+ case beforeFractionalDigits:
+ goto beforeFractional
+ case beforeExponentDigits:
+ goto beforeExponent
+ default:
+ return n, state, nil
+ }
+ }
+
+ // Consume required integer component (with optional minus sign).
+beforeInteger:
+ resumeOffset = n
+ if uint(len(b)) > 0 && b[0] == '-' {
+ n++
+ }
+ switch {
+ case uint(len(b)) <= uint(n):
+ return resumeOffset, beforeIntegerDigits, io.ErrUnexpectedEOF
+ case b[n] == '0':
+ n++
+ state = beforeFractionalDigits
+ case '1' <= b[n] && b[n] <= '9':
+ n++
+ for uint(len(b)) > uint(n) && ('0' <= b[n] && b[n] <= '9') {
+ n++
+ }
+ state = withinIntegerDigits
+ default:
+ return n, state, NewInvalidCharacterError(b[n:], "in number (expecting digit)")
+ }
+
+ // Consume optional fractional component.
+beforeFractional:
+ if uint(len(b)) > uint(n) && b[n] == '.' {
+ resumeOffset = n
+ n++
+ switch {
+ case uint(len(b)) <= uint(n):
+ return resumeOffset, beforeFractionalDigits, io.ErrUnexpectedEOF
+ case '0' <= b[n] && b[n] <= '9':
+ n++
+ default:
+ return n, state, NewInvalidCharacterError(b[n:], "in number (expecting digit)")
+ }
+ for uint(len(b)) > uint(n) && ('0' <= b[n] && b[n] <= '9') {
+ n++
+ }
+ state = withinFractionalDigits
+ }
+
+ // Consume optional exponent component.
+beforeExponent:
+ if uint(len(b)) > uint(n) && (b[n] == 'e' || b[n] == 'E') {
+ resumeOffset = n
+ n++
+ if uint(len(b)) > uint(n) && (b[n] == '-' || b[n] == '+') {
+ n++
+ }
+ switch {
+ case uint(len(b)) <= uint(n):
+ return resumeOffset, beforeExponentDigits, io.ErrUnexpectedEOF
+ case '0' <= b[n] && b[n] <= '9':
+ n++
+ default:
+ return n, state, NewInvalidCharacterError(b[n:], "in number (expecting digit)")
+ }
+ for uint(len(b)) > uint(n) && ('0' <= b[n] && b[n] <= '9') {
+ n++
+ }
+ state = withinExponentDigits
+ }
+
+ return n, state, nil
+}
+
+// parseHexUint16 is similar to strconv.ParseUint,
+// but operates directly on []byte and is optimized for base-16.
+// See https://go.dev/issue/42429.
+func parseHexUint16[Bytes ~[]byte | ~string](b Bytes) (v uint16, ok bool) {
+ if len(b) != 4 {
+ return 0, false
+ }
+ for i := range 4 {
+ c := b[i]
+ switch {
+ case '0' <= c && c <= '9':
+ c = c - '0'
+ case 'a' <= c && c <= 'f':
+ c = 10 + c - 'a'
+ case 'A' <= c && c <= 'F':
+ c = 10 + c - 'A'
+ default:
+ return 0, false
+ }
+ v = v*16 + uint16(c)
+ }
+ return v, true
+}
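Each branch of the switch maps one hex digit to its value, so the four digits of a `\uXXXX` escape accumulate into a single code unit. A standalone copy with illustrative inputs:

```go
package main

import "fmt"

// parseHexUint16 decodes exactly four case-insensitive hex digits,
// mirroring the loop above.
func parseHexUint16(b string) (v uint16, ok bool) {
	if len(b) != 4 {
		return 0, false
	}
	for i := 0; i < 4; i++ {
		c := b[i]
		switch {
		case '0' <= c && c <= '9':
			c = c - '0'
		case 'a' <= c && c <= 'f':
			c = 10 + c - 'a'
		case 'A' <= c && c <= 'F':
			c = 10 + c - 'A'
		default:
			return 0, false
		}
		v = v*16 + uint16(c)
	}
	return v, true
}

func main() {
	fmt.Println(parseHexUint16("00e9")) // 233 true  (U+00E9)
	fmt.Println(parseHexUint16("D83D")) // 55357 true (a high surrogate half)
	fmt.Println(parseHexUint16("12g4")) // 0 false   (not a hex digit)
}
```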
+
+// ParseUint parses b as a decimal unsigned integer according to
+// a strict subset of the JSON number grammar, returning the value if valid.
+// It returns (0, false) if there is a syntax error and
+// returns (math.MaxUint64, false) if there is an overflow.
+func ParseUint(b []byte) (v uint64, ok bool) {
+ const unsafeWidth = 20 // len(fmt.Sprint(uint64(math.MaxUint64)))
+ var n int
+ for ; len(b) > n && ('0' <= b[n] && b[n] <= '9'); n++ {
+ v = 10*v + uint64(b[n]-'0')
+ }
+ switch {
+ case n == 0 || len(b) != n || (b[0] == '0' && string(b) != "0"):
+ return 0, false
+ case n >= unsafeWidth && (b[0] != '1' || v < 1e19 || n > unsafeWidth):
+ return math.MaxUint64, false
+ }
+ return v, true
+}
+
+// ParseFloat parses a floating point number according to the Go float grammar.
+// Note that the JSON number grammar is a strict subset.
+//
+// If the number overflows the finite representation of a float,
+// then we return MaxFloat since any finite value will always be infinitely
+// more accurate at representing another finite value than an infinite value.
+func ParseFloat(b []byte, bits int) (v float64, ok bool) {
+ fv, err := strconv.ParseFloat(string(b), bits)
+ if math.IsInf(fv, 0) {
+ switch {
+ case bits == 32 && math.IsInf(fv, +1):
+ fv = +math.MaxFloat32
+ case bits == 64 && math.IsInf(fv, +1):
+ fv = +math.MaxFloat64
+ case bits == 32 && math.IsInf(fv, -1):
+ fv = -math.MaxFloat32
+ case bits == 64 && math.IsInf(fv, -1):
+ fv = -math.MaxFloat64
+ }
+ }
+ return fv, err == nil
+}
diff --git a/internal/json/internal/jsonwire/encode.go b/internal/json/internal/jsonwire/encode.go
new file mode 100644
index 0000000000..38acbbbcd8
--- /dev/null
+++ b/internal/json/internal/jsonwire/encode.go
@@ -0,0 +1,290 @@
+// Copyright 2023 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+//go:build !goexperiment.jsonv2 || !go1.25
+
+package jsonwire
+
+import (
+ "math"
+ "slices"
+ "strconv"
+ "unicode/utf16"
+ "unicode/utf8"
+
+ "github.com/quay/clair/v4/internal/json/internal/jsonflags"
+)
+
+// escapeASCII reports whether the ASCII character needs to be escaped.
+// It conservatively assumes EscapeForHTML.
+var escapeASCII = [...]uint8{
+ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, // escape control characters
+ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, // escape control characters
+ 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, // escape '"' and '&'
+ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, // escape '<' and '>'
+ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, // escape '\\'
+ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+}
+
+// NeedEscape reports whether src needs escaping of any characters.
+// It conservatively assumes EscapeForHTML and EscapeForJS.
+// It reports true for inputs with invalid UTF-8.
+func NeedEscape[Bytes ~[]byte | ~string](src Bytes) bool {
+ var i int
+ for uint(len(src)) > uint(i) {
+ if c := src[i]; c < utf8.RuneSelf {
+ if escapeASCII[c] > 0 {
+ return true
+ }
+ i++
+ } else {
+ r, rn := utf8.DecodeRuneInString(string(truncateMaxUTF8(src[i:])))
+ if r == utf8.RuneError || r == '\u2028' || r == '\u2029' {
+ return true
+ }
+ i += rn
+ }
+ }
+ return false
+}
+
+// AppendQuote appends src to dst as a JSON string per RFC 7159, section 7.
+//
+// It takes in flags and respects the following:
+// - EscapeForHTML escapes '<', '>', and '&'.
+// - EscapeForJS escapes '\u2028' and '\u2029'.
+// - AllowInvalidUTF8 avoids reporting an error for invalid UTF-8.
+//
+// Regardless of whether AllowInvalidUTF8 is specified,
+// invalid bytes are replaced with the Unicode replacement character ('\ufffd').
+// If no escape flags are set, then the shortest representable form is used,
+// which is also the canonical form for strings (RFC 8785, section 3.2.2.2).
+func AppendQuote[Bytes ~[]byte | ~string](dst []byte, src Bytes, flags *jsonflags.Flags) ([]byte, error) {
+ var i, n int
+ var hasInvalidUTF8 bool
+ dst = slices.Grow(dst, len(`"`)+len(src)+len(`"`))
+ dst = append(dst, '"')
+ for uint(len(src)) > uint(n) {
+ if c := src[n]; c < utf8.RuneSelf {
+ // Handle single-byte ASCII.
+ n++
+ if escapeASCII[c] == 0 {
+ continue // no escaping possibly needed
+ }
+ // Handle escaping of single-byte ASCII.
+ if !(c == '<' || c == '>' || c == '&') || flags.Get(jsonflags.EscapeForHTML) {
+ dst = append(dst, src[i:n-1]...)
+ dst = appendEscapedASCII(dst, c)
+ i = n
+ }
+ } else {
+ // Handle multi-byte Unicode.
+ r, rn := utf8.DecodeRuneInString(string(truncateMaxUTF8(src[n:])))
+ n += rn
+ if r != utf8.RuneError && r != '\u2028' && r != '\u2029' {
+ continue // no escaping possibly needed
+ }
+ // Handle escaping of multi-byte Unicode.
+ switch {
+ case isInvalidUTF8(r, rn):
+ hasInvalidUTF8 = true
+ dst = append(dst, src[i:n-rn]...)
+ dst = append(dst, "\ufffd"...)
+ i = n
+ case (r == '\u2028' || r == '\u2029') && flags.Get(jsonflags.EscapeForJS):
+ dst = append(dst, src[i:n-rn]...)
+ dst = appendEscapedUnicode(dst, r)
+ i = n
+ }
+ }
+ }
+ dst = append(dst, src[i:n]...)
+ dst = append(dst, '"')
+ if hasInvalidUTF8 && !flags.Get(jsonflags.AllowInvalidUTF8) {
+ return dst, ErrInvalidUTF8
+ }
+ return dst, nil
+}
+
+func appendEscapedASCII(dst []byte, c byte) []byte {
+ switch c {
+ case '"', '\\':
+ dst = append(dst, '\\', c)
+ case '\b':
+ dst = append(dst, "\\b"...)
+ case '\f':
+ dst = append(dst, "\\f"...)
+ case '\n':
+ dst = append(dst, "\\n"...)
+ case '\r':
+ dst = append(dst, "\\r"...)
+ case '\t':
+ dst = append(dst, "\\t"...)
+ default:
+ dst = appendEscapedUTF16(dst, uint16(c))
+ }
+ return dst
+}
+
+func appendEscapedUnicode(dst []byte, r rune) []byte {
+ if r1, r2 := utf16.EncodeRune(r); r1 != '\ufffd' && r2 != '\ufffd' {
+ dst = appendEscapedUTF16(dst, uint16(r1))
+ dst = appendEscapedUTF16(dst, uint16(r2))
+ } else {
+ dst = appendEscapedUTF16(dst, uint16(r))
+ }
+ return dst
+}
+
+func appendEscapedUTF16(dst []byte, x uint16) []byte {
+ const hex = "0123456789abcdef"
+ return append(dst, '\\', 'u', hex[(x>>12)&0xf], hex[(x>>8)&0xf], hex[(x>>4)&0xf], hex[(x>>0)&0xf])
+}
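Runes outside the Basic Multilingual Plane cannot fit in one `\uXXXX` escape, so they are split into a surrogate pair before hex-escaping. A standalone sketch of the two helpers above, with illustrative inputs:

```go
package main

import (
	"fmt"
	"unicode/utf16"
)

// appendEscapedUTF16 writes one \uXXXX escape for a single UTF-16 code unit.
func appendEscapedUTF16(dst []byte, x uint16) []byte {
	const hex = "0123456789abcdef"
	return append(dst, '\\', 'u', hex[(x>>12)&0xf], hex[(x>>8)&0xf], hex[(x>>4)&0xf], hex[(x>>0)&0xf])
}

// appendEscapedUnicode escapes r, using a surrogate pair when needed.
func appendEscapedUnicode(dst []byte, r rune) []byte {
	if r1, r2 := utf16.EncodeRune(r); r1 != '\ufffd' && r2 != '\ufffd' {
		dst = appendEscapedUTF16(dst, uint16(r1))
		dst = appendEscapedUTF16(dst, uint16(r2))
	} else {
		dst = appendEscapedUTF16(dst, uint16(r))
	}
	return dst
}

func main() {
	fmt.Println(string(appendEscapedUnicode(nil, '\u2028')))     // \u2028
	fmt.Println(string(appendEscapedUnicode(nil, '\U0001F600'))) // \ud83d\ude00
}
```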
+
+// ReformatString consumes a JSON string from src and appends it to dst,
+// reformatting it if necessary according to the specified flags.
+// It returns the appended output and the number of consumed input bytes.
+func ReformatString(dst, src []byte, flags *jsonflags.Flags) ([]byte, int, error) {
+ // TODO: Should this update ValueFlags as input?
+ var valFlags ValueFlags
+ n, err := ConsumeString(&valFlags, src, !flags.Get(jsonflags.AllowInvalidUTF8))
+ if err != nil {
+ return dst, n, err
+ }
+
+ // If the output requires no special escapes, and the input
+ // is already in canonical form or should be preserved verbatim,
+ // then directly copy the input to the output.
+ if !flags.Get(jsonflags.AnyEscape) &&
+ (valFlags.IsCanonical() || flags.Get(jsonflags.PreserveRawStrings)) {
+ dst = append(dst, src[:n]...) // copy the string verbatim
+ return dst, n, nil
+ }
+
+ // Under [jsonflags.PreserveRawStrings], any pre-escaped sequences
+ // remain escaped, however we still need to respect the
+ // [jsonflags.EscapeForHTML] and [jsonflags.EscapeForJS] options.
+ if flags.Get(jsonflags.PreserveRawStrings) {
+ var i, lastAppendIndex int
+ for i < n {
+ if c := src[i]; c < utf8.RuneSelf {
+ if (c == '<' || c == '>' || c == '&') && flags.Get(jsonflags.EscapeForHTML) {
+ dst = append(dst, src[lastAppendIndex:i]...)
+ dst = appendEscapedASCII(dst, c)
+ lastAppendIndex = i + 1
+ }
+ i++
+ } else {
+ r, rn := utf8.DecodeRune(truncateMaxUTF8(src[i:]))
+ if (r == '\u2028' || r == '\u2029') && flags.Get(jsonflags.EscapeForJS) {
+ dst = append(dst, src[lastAppendIndex:i]...)
+ dst = appendEscapedUnicode(dst, r)
+ lastAppendIndex = i + rn
+ }
+ i += rn
+ }
+ }
+ return append(dst, src[lastAppendIndex:n]...), n, nil
+ }
+
+ // The input contains characters that might need escaping,
+ // unnecessary escape sequences, or invalid UTF-8.
+ // Perform a round-trip unquote and quote to properly reformat
+// these sequences according to the current flags.
+ b, _ := AppendUnquote(nil, src[:n])
+ dst, _ = AppendQuote(dst, b, flags)
+ return dst, n, nil
+}
+
+// AppendFloat appends src to dst as a JSON number per RFC 7159, section 6.
+// It formats numbers similar to the ES6 number-to-string conversion.
+// See https://go.dev/issue/14135.
+//
+// The output is identical to ECMA-262, 6th edition, section 7.1.12.1 and to
+// RFC 8785, section 3.2.2.3 for 64-bit floating-point numbers, except for -0,
+// which is formatted as -0 instead of just 0.
+//
+// For 32-bit floating-point numbers,
+// the output is a 32-bit equivalent of the algorithm.
+// Note that ECMA-262 specifies no algorithm for 32-bit numbers.
+func AppendFloat(dst []byte, src float64, bits int) []byte {
+ if bits == 32 {
+ src = float64(float32(src))
+ }
+
+ abs := math.Abs(src)
+ fmt := byte('f')
+ if abs != 0 {
+ if bits == 64 && (float64(abs) < 1e-6 || float64(abs) >= 1e21) ||
+ bits == 32 && (float32(abs) < 1e-6 || float32(abs) >= 1e21) {
+ fmt = 'e'
+ }
+ }
+ dst = strconv.AppendFloat(dst, src, fmt, -1, bits)
+ if fmt == 'e' {
+ // Clean up e-09 to e-9.
+ n := len(dst)
+ if n >= 4 && dst[n-4] == 'e' && dst[n-3] == '-' && dst[n-2] == '0' {
+ dst[n-2] = dst[n-1]
+ dst = dst[:n-1]
+ }
+ }
+ return dst
+}
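The thresholds above mirror the ES6 number-to-string rules: magnitudes in [1e-6, 1e21) print in fixed notation, everything else in exponent notation, with a zero-padded negative exponent trimmed afterwards. A standalone copy of the 64-bit case, with illustrative inputs:

```go
package main

import (
	"fmt"
	"math"
	"strconv"
)

// appendFloat is a standalone copy of the formatter above (64-bit case).
func appendFloat(dst []byte, src float64) []byte {
	abs := math.Abs(src)
	format := byte('f')
	if abs != 0 && (abs < 1e-6 || abs >= 1e21) {
		format = 'e' // ES6-style exponent notation outside [1e-6, 1e21)
	}
	dst = strconv.AppendFloat(dst, src, format, -1, 64)
	if format == 'e' {
		// Clean up e-09 to e-9.
		n := len(dst)
		if n >= 4 && dst[n-4] == 'e' && dst[n-3] == '-' && dst[n-2] == '0' {
			dst[n-2] = dst[n-1]
			dst = dst[:n-1]
		}
	}
	return dst
}

func main() {
	fmt.Println(string(appendFloat(nil, 0.000001))) // 0.000001 (still fixed notation)
	fmt.Println(string(appendFloat(nil, 1e-7)))     // 1e-7 (exponent, zero trimmed)
	fmt.Println(string(appendFloat(nil, 1e21)))     // 1e+21
}
```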
+
+// ReformatNumber consumes a JSON string from src and appends it to dst,
+// canonicalizing it if specified.
+// It returns the appended output and the number of consumed input bytes.
+func ReformatNumber(dst, src []byte, flags *jsonflags.Flags) ([]byte, int, error) {
+ n, err := ConsumeNumber(src)
+ if err != nil {
+ return dst, n, err
+ }
+ if !flags.Get(jsonflags.CanonicalizeNumbers) {
+ dst = append(dst, src[:n]...) // copy the number verbatim
+ return dst, n, nil
+ }
+
+ // Identify the kind of number.
+ var isFloat bool
+ for _, c := range src[:n] {
+ if c == '.' || c == 'e' || c == 'E' {
+ isFloat = true // has fraction or exponent
+ break
+ }
+ }
+
+	// Check whether this kind of number needs to be canonicalized.
+ switch {
+ case string(src[:n]) == "-0":
+ break // canonicalize -0 as 0 regardless of kind
+ case isFloat:
+ if !flags.Get(jsonflags.CanonicalizeRawFloats) {
+ dst = append(dst, src[:n]...) // copy the number verbatim
+ return dst, n, nil
+ }
+ default:
+ // As an optimization, we can copy integer numbers below 2⁵³ verbatim
+ // since the canonical form is always identical.
+ const maxExactIntegerDigits = 16 // len(strconv.AppendUint(nil, 1<<53, 10))
+ if !flags.Get(jsonflags.CanonicalizeRawInts) || n < maxExactIntegerDigits {
+ dst = append(dst, src[:n]...) // copy the number verbatim
+ return dst, n, nil
+ }
+ }
+
+ // Parse and reformat the number (which uses a canonical format).
+ fv, _ := strconv.ParseFloat(string(src[:n]), 64)
+ switch {
+ case fv == 0:
+ fv = 0 // normalize negative zero as just zero
+ case math.IsInf(fv, +1):
+ fv = +math.MaxFloat64
+ case math.IsInf(fv, -1):
+ fv = -math.MaxFloat64
+ }
+ return AppendFloat(dst, fv, 64), n, nil
+}
diff --git a/internal/json/internal/jsonwire/wire.go b/internal/json/internal/jsonwire/wire.go
new file mode 100644
index 0000000000..a0622c65b8
--- /dev/null
+++ b/internal/json/internal/jsonwire/wire.go
@@ -0,0 +1,217 @@
+// Copyright 2023 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+//go:build !goexperiment.jsonv2 || !go1.25
+
+// Package jsonwire implements stateless functionality for handling JSON text.
+package jsonwire
+
+import (
+ "cmp"
+ "errors"
+ "strconv"
+ "strings"
+ "unicode"
+ "unicode/utf16"
+ "unicode/utf8"
+)
+
+// TrimSuffixWhitespace trims JSON whitespace from the end of b.
+func TrimSuffixWhitespace(b []byte) []byte {
+ // NOTE: The arguments and logic are kept simple to keep this inlinable.
+ n := len(b) - 1
+ for n >= 0 && (b[n] == ' ' || b[n] == '\t' || b[n] == '\r' || b[n] == '\n') {
+ n--
+ }
+ return b[:n+1]
+}
+
+// TrimSuffixString trims a valid JSON string at the end of b.
+// The behavior is undefined if there is not a valid JSON string present.
+func TrimSuffixString(b []byte) []byte {
+ // NOTE: The arguments and logic are kept simple to keep this inlinable.
+ if len(b) > 0 && b[len(b)-1] == '"' {
+ b = b[:len(b)-1]
+ }
+ for len(b) >= 2 && !(b[len(b)-1] == '"' && b[len(b)-2] != '\\') {
+ b = b[:len(b)-1] // trim all characters except an unescaped quote
+ }
+ if len(b) > 0 && b[len(b)-1] == '"' {
+ b = b[:len(b)-1]
+ }
+ return b
+}
+
+// HasSuffixByte reports whether b ends with c.
+func HasSuffixByte(b []byte, c byte) bool {
+ // NOTE: The arguments and logic are kept simple to keep this inlinable.
+ return len(b) > 0 && b[len(b)-1] == c
+}
+
+// TrimSuffixByte removes c from the end of b if it is present.
+func TrimSuffixByte(b []byte, c byte) []byte {
+ // NOTE: The arguments and logic are kept simple to keep this inlinable.
+ if len(b) > 0 && b[len(b)-1] == c {
+ return b[:len(b)-1]
+ }
+ return b
+}
+
+// QuoteRune quotes the first rune in the input.
+func QuoteRune[Bytes ~[]byte | ~string](b Bytes) string {
+ r, n := utf8.DecodeRuneInString(string(truncateMaxUTF8(b)))
+ if r == utf8.RuneError && n == 1 {
+ return `'\x` + strconv.FormatUint(uint64(b[0]), 16) + `'`
+ }
+ return strconv.QuoteRune(r)
+}
+
+// CompareUTF16 lexicographically compares x to y according
+// to the UTF-16 codepoints of the UTF-8 encoded input strings.
+// This implements the ordering specified in RFC 8785, section 3.2.3.
+func CompareUTF16[Bytes ~[]byte | ~string](x, y Bytes) int {
+ // NOTE: This is an optimized, mostly allocation-free implementation
+ // of CompareUTF16Simple in wire_test.go. FuzzCompareUTF16 verifies that the
+ // two implementations agree on the result of comparing any two strings.
+ isUTF16Self := func(r rune) bool {
+ return ('\u0000' <= r && r <= '\uD7FF') || ('\uE000' <= r && r <= '\uFFFF')
+ }
+
+ for {
+ if len(x) == 0 || len(y) == 0 {
+ return cmp.Compare(len(x), len(y))
+ }
+
+ // ASCII fast-path.
+ if x[0] < utf8.RuneSelf || y[0] < utf8.RuneSelf {
+ if x[0] != y[0] {
+ return cmp.Compare(x[0], y[0])
+ }
+ x, y = x[1:], y[1:]
+ continue
+ }
+
+ // Decode next pair of runes as UTF-8.
+ rx, nx := utf8.DecodeRuneInString(string(truncateMaxUTF8(x)))
+ ry, ny := utf8.DecodeRuneInString(string(truncateMaxUTF8(y)))
+
+ selfx := isUTF16Self(rx)
+ selfy := isUTF16Self(ry)
+ switch {
+ // The x rune is a single UTF-16 codepoint, while
+ // the y rune is a surrogate pair of UTF-16 codepoints.
+ case selfx && !selfy:
+ ry, _ = utf16.EncodeRune(ry)
+ // The y rune is a single UTF-16 codepoint, while
+ // the x rune is a surrogate pair of UTF-16 codepoints.
+ case selfy && !selfx:
+ rx, _ = utf16.EncodeRune(rx)
+ }
+ if rx != ry {
+ return cmp.Compare(rx, ry)
+ }
+
+ // Check for invalid UTF-8, in which case,
+ // we just perform a byte-for-byte comparison.
+ if isInvalidUTF8(rx, nx) || isInvalidUTF8(ry, ny) {
+ if x[0] != y[0] {
+ return cmp.Compare(x[0], y[0])
+ }
+ }
+ x, y = x[nx:], y[ny:]
+ }
+}
+
+// truncateMaxUTF8 truncates b such that it still contains at least one rune.
+//
+// The utf8 package currently lacks generic variants, which complicates
+// generic functions that operate on either []byte or string.
+// As a hack, we always call the utf8 function operating on strings,
+// but always truncate the input such that the result is identical.
+//
+// Example usage:
+//
+// utf8.DecodeRuneInString(string(truncateMaxUTF8(b)))
+//
+// Converting a []byte to a string is stack allocated since
+// truncateMaxUTF8 guarantees that the []byte is short.
+func truncateMaxUTF8[Bytes ~[]byte | ~string](b Bytes) Bytes {
+ // TODO(https://go.dev/issue/56948): Remove this function and
+ // instead directly call generic utf8 functions wherever used.
+ if len(b) > utf8.UTFMax {
+ return b[:utf8.UTFMax]
+ }
+ return b
+}
+
+// TODO(https://go.dev/issue/70547): Use utf8.ErrInvalid instead.
+var ErrInvalidUTF8 = errors.New("invalid UTF-8")
+
+func NewInvalidCharacterError[Bytes ~[]byte | ~string](prefix Bytes, where string) error {
+ what := QuoteRune(prefix)
+ return errors.New("invalid character " + what + " " + where)
+}
+
+func NewInvalidEscapeSequenceError[Bytes ~[]byte | ~string](what Bytes) error {
+ label := "escape sequence"
+ if len(what) > 6 {
+ label = "surrogate pair"
+ }
+ needEscape := strings.IndexFunc(string(what), func(r rune) bool {
+ return r == '`' || r == utf8.RuneError || unicode.IsSpace(r) || !unicode.IsPrint(r)
+ }) >= 0
+ if needEscape {
+ return errors.New("invalid " + label + " " + strconv.Quote(string(what)) + " in string")
+ } else {
+ return errors.New("invalid " + label + " `" + string(what) + "` in string")
+ }
+}
+
+// TruncatePointer optionally truncates the JSON pointer,
+// enforcing that the length roughly does not exceed n.
+func TruncatePointer(s string, n int) string {
+ if len(s) <= n {
+ return s
+ }
+ i := n / 2
+ j := len(s) - n/2
+
+ // Avoid truncating a name if there are multiple names present.
+ if k := strings.LastIndexByte(s[:i], '/'); k > 0 {
+ i = k
+ }
+ if k := strings.IndexByte(s[j:], '/'); k >= 0 {
+ j += k + len("/")
+ }
+
+ // Avoid truncation in the middle of a UTF-8 rune.
+ for i > 0 && isInvalidUTF8(utf8.DecodeLastRuneInString(s[:i])) {
+ i--
+ }
+ for j < len(s) && isInvalidUTF8(utf8.DecodeRuneInString(s[j:])) {
+ j++
+ }
+
+ // Determine the right middle fragment to use.
+ var middle string
+ switch strings.Count(s[i:j], "/") {
+ case 0:
+ middle = "…"
+ case 1:
+ middle = "…/…"
+ default:
+ middle = "…/…/…"
+ }
+ if strings.HasPrefix(s[i:j], "/") && middle != "…" {
+ middle = strings.TrimPrefix(middle, "…")
+ }
+ if strings.HasSuffix(s[i:j], "/") && middle != "…" {
+ middle = strings.TrimSuffix(middle, "…")
+ }
+ return s[:i] + middle + s[j:]
+}
+
+func isInvalidUTF8(r rune, rn int) bool {
+ return r == utf8.RuneError && rn == 1
+}
diff --git a/internal/json/jsontext/alias.go b/internal/json/jsontext/alias.go
new file mode 100644
index 0000000000..dc18d5d55d
--- /dev/null
+++ b/internal/json/jsontext/alias.go
@@ -0,0 +1,536 @@
+// Copyright 2025 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// Code generated by alias_gen.go; DO NOT EDIT.
+
+//go:build goexperiment.jsonv2 && go1.25
+
+// Package jsontext implements syntactic processing of JSON
+// as specified in RFC 4627, RFC 7159, RFC 7493, RFC 8259, and RFC 8785.
+// JSON is a simple data interchange format that can represent
+// primitive data types such as booleans, strings, and numbers,
+// in addition to structured data types such as objects and arrays.
+//
+// The [Encoder] and [Decoder] types are used to encode or decode
+// a stream of JSON tokens or values.
+//
+// # Tokens and Values
+//
+// A JSON token refers to the basic structural elements of JSON:
+//
+// - a JSON literal (i.e., null, true, or false)
+// - a JSON string (e.g., "hello, world!")
+// - a JSON number (e.g., 123.456)
+// - a begin or end delimiter for a JSON object (i.e., '{' or '}')
+// - a begin or end delimiter for a JSON array (i.e., '[' or ']')
+//
+// A JSON token is represented by the [Token] type in Go. Technically,
+// there are two additional structural characters (i.e., ':' and ','),
+// but there is no [Token] representation for them since their presence
+// can be inferred by the structure of the JSON grammar itself.
+// For example, there must always be an implicit colon between
+// the name and value of a JSON object member.
+//
+// A JSON value refers to a complete unit of JSON data:
+//
+// - a JSON literal, string, or number
+// - a JSON object (e.g., `{"name":"value"}`)
+// - a JSON array (e.g., `[1,2,3]`)
+//
+// A JSON value is represented by the [Value] type in Go and is a []byte
+// containing the raw textual representation of the value. There is some overlap
+// between tokens and values as both contain literals, strings, and numbers.
+// However, only a value can represent the entirety of a JSON object or array.
+//
+// The [Encoder] and [Decoder] types contain methods to read or write the next
+// [Token] or [Value] in a sequence. They maintain a state machine to validate
+// whether the sequence of JSON tokens and/or values produces valid JSON.
+// [Options] may be passed to the [NewEncoder] or [NewDecoder] constructors
+// to configure the syntactic behavior of encoding and decoding.
+//
+// # Terminology
+//
+// The terms "encode" and "decode" are used for syntactic functionality
+// that is concerned with processing JSON based on its grammar, and
+// the terms "marshal" and "unmarshal" are used for semantic functionality
+// that determines the meaning of JSON values as Go values and vice-versa.
+// This package (i.e., [jsontext]) deals with JSON at a syntactic layer,
+// while [encoding/json/v2] deals with JSON at a semantic layer.
+// The goal is to provide a clear distinction between functionality that
+// is purely concerned with encoding versus that of marshaling.
+// For example, one can directly encode a stream of JSON tokens without
+// needing to marshal a concrete Go value representing them.
+// Similarly, one can decode a stream of JSON tokens without
+// needing to unmarshal them into a concrete Go value.
+//
+// This package uses JSON terminology when discussing JSON, which may differ
+// from related concepts in Go or elsewhere in computing literature.
+//
+// - a JSON "object" refers to an unordered collection of name/value members.
+// - a JSON "array" refers to an ordered sequence of elements.
+// - a JSON "value" refers to either a literal (i.e., null, false, or true),
+// string, number, object, or array.
+//
+// See RFC 8259 for more information.
+//
+// # Specifications
+//
+// Relevant specifications include RFC 4627, RFC 7159, RFC 7493, RFC 8259,
+// and RFC 8785. Each RFC is generally a stricter subset of another RFC.
+// In increasing order of strictness:
+//
+// - RFC 4627 and RFC 7159 do not require (but recommend) the use of UTF-8
+// and also do not require (but recommend) that object names be unique.
+// - RFC 8259 requires the use of UTF-8,
+// but does not require (but recommends) that object names be unique.
+// - RFC 7493 requires the use of UTF-8
+// and also requires that object names be unique.
+// - RFC 8785 defines a canonical representation. It requires the use of UTF-8
+// and also requires that object names be unique and in a specific ordering.
+// It specifies exactly how strings and numbers must be formatted.
+//
+// The primary difference between RFC 4627 and RFC 7159 is that the former
+// restricted top-level values to only JSON objects and arrays, while
+// RFC 7159 and subsequent RFCs permit top-level values to additionally be
+// JSON nulls, booleans, strings, or numbers.
+//
+// By default, this package operates on RFC 7493, but can be configured
+// to operate according to the other RFC specifications.
+// RFC 7493 is a stricter subset of RFC 8259 and fully compliant with it.
+// In particular, it makes specific choices about behavior that RFC 8259
+// leaves as undefined in order to ensure greater interoperability.
+//
+// # Security Considerations
+//
+// See the "Security Considerations" section in [encoding/json/v2].
+package jsontext
+
+import (
+ "encoding/json/jsontext"
+ "io"
+)
+
+// Decoder is a streaming decoder for raw JSON tokens and values.
+// It is used to read a stream of top-level JSON values,
+// each separated by optional whitespace characters.
+//
+// [Decoder.ReadToken] and [Decoder.ReadValue] calls may be interleaved.
+// For example, the following JSON value:
+//
+// {"name":"value","array":[null,false,true,3.14159],"object":{"k":"v"}}
+//
+// can be parsed with the following calls (ignoring errors for brevity):
+//
+// d.ReadToken() // {
+// d.ReadToken() // "name"
+// d.ReadToken() // "value"
+// d.ReadValue() // "array"
+// d.ReadToken() // [
+// d.ReadToken() // null
+// d.ReadToken() // false
+// d.ReadValue() // true
+// d.ReadToken() // 3.14159
+// d.ReadToken() // ]
+// d.ReadValue() // "object"
+// d.ReadValue() // {"k":"v"}
+// d.ReadToken() // }
+//
+// The above is one of many possible sequences of calls and
+// may not represent the most sensible method to call for any given token/value.
+// For example, it is probably more common to call [Decoder.ReadToken] to obtain a
+// string token for object names.
+type Decoder = jsontext.Decoder
+
+// NewDecoder constructs a new streaming decoder reading from r.
+//
+// If r is a [bytes.Buffer], then the decoder parses directly from the buffer
+// without first copying the contents to an intermediate buffer.
+// Additional writes to the buffer must not occur while the decoder is in use.
+func NewDecoder(r io.Reader, opts ...Options) *Decoder {
+ return jsontext.NewDecoder(r, opts...)
+}
+
+// Encoder is a streaming encoder from raw JSON tokens and values.
+// It is used to write a stream of top-level JSON values,
+// each terminated with a newline character.
+//
+// [Encoder.WriteToken] and [Encoder.WriteValue] calls may be interleaved.
+// For example, the following JSON value:
+//
+// {"name":"value","array":[null,false,true,3.14159],"object":{"k":"v"}}
+//
+// can be composed with the following calls (ignoring errors for brevity):
+//
+// e.WriteToken(BeginObject) // {
+// e.WriteToken(String("name")) // "name"
+// e.WriteToken(String("value")) // "value"
+// e.WriteValue(Value(`"array"`)) // "array"
+// e.WriteToken(BeginArray) // [
+// e.WriteToken(Null) // null
+// e.WriteToken(False) // false
+// e.WriteValue(Value("true")) // true
+// e.WriteToken(Float(3.14159)) // 3.14159
+// e.WriteToken(EndArray) // ]
+// e.WriteValue(Value(`"object"`)) // "object"
+// e.WriteValue(Value(`{"k":"v"}`)) // {"k":"v"}
+// e.WriteToken(EndObject) // }
+//
+// The above is one of many possible sequences of calls and
+// may not represent the most sensible method to call for any given token/value.
+// For example, it is probably more common to call [Encoder.WriteToken] with a string
+// for object names.
+type Encoder = jsontext.Encoder
+
+// NewEncoder constructs a new streaming encoder writing to w
+// configured with the provided options.
+// It flushes the internal buffer when the buffer is sufficiently full or
+// when a top-level value has been written.
+//
+// If w is a [bytes.Buffer], then the encoder appends directly into the buffer
+// without copying the contents from an intermediate buffer.
+func NewEncoder(w io.Writer, opts ...Options) *Encoder {
+ return jsontext.NewEncoder(w, opts...)
+}
+
+// SyntacticError is a description of a syntactic error that occurred when
+// encoding or decoding JSON according to the grammar.
+//
+// The contents of this error as produced by this package may change over time.
+type SyntacticError = jsontext.SyntacticError
+
+// Options configures [NewEncoder], [Encoder.Reset], [NewDecoder],
+// and [Decoder.Reset] with specific features.
+// Each function takes in a variadic list of options, where properties
+// set in latter options override the value of previously set properties.
+//
+// There is a single Options type, which is used with both encoding and decoding.
+// Some options affect both operations, while others only affect one operation:
+//
+// - [AllowDuplicateNames] affects encoding and decoding
+// - [AllowInvalidUTF8] affects encoding and decoding
+// - [EscapeForHTML] affects encoding only
+// - [EscapeForJS] affects encoding only
+// - [PreserveRawStrings] affects encoding only
+// - [CanonicalizeRawInts] affects encoding only
+// - [CanonicalizeRawFloats] affects encoding only
+// - [ReorderRawObjects] affects encoding only
+// - [SpaceAfterColon] affects encoding only
+// - [SpaceAfterComma] affects encoding only
+// - [Multiline] affects encoding only
+// - [WithIndent] affects encoding only
+// - [WithIndentPrefix] affects encoding only
+//
+// Options that do not affect a particular operation are ignored.
+//
+// The Options type is identical to [encoding/json.Options] and
+// [encoding/json/v2.Options]. Options from the other packages may
+// be passed to functionality in this package, but are ignored.
+// Options from this package may be used with the other packages.
+type Options = jsontext.Options
+
+// AllowDuplicateNames specifies that JSON objects may contain
+// duplicate member names. Disabling the duplicate name check may provide
+// performance benefits, but breaks compliance with RFC 7493, section 2.3.
+// The input or output will still be compliant with RFC 8259,
+// which leaves the handling of duplicate names as unspecified behavior.
+//
+// This affects either encoding or decoding.
+func AllowDuplicateNames(v bool) Options {
+ return jsontext.AllowDuplicateNames(v)
+}
+
+// AllowInvalidUTF8 specifies that JSON strings may contain invalid UTF-8,
+// which will be mangled as the Unicode replacement character, U+FFFD.
+// This causes the encoder or decoder to break compliance with
+// RFC 7493, section 2.1, and RFC 8259, section 8.1.
+//
+// This affects either encoding or decoding.
+func AllowInvalidUTF8(v bool) Options {
+ return jsontext.AllowInvalidUTF8(v)
+}
+
+// EscapeForHTML specifies that '<', '>', and '&' characters within JSON strings
+// should be escaped as a hexadecimal Unicode codepoint (e.g., \u003c) so that
+// the output is safe to embed within HTML.
+//
+// This only affects encoding and is ignored when decoding.
+func EscapeForHTML(v bool) Options {
+ return jsontext.EscapeForHTML(v)
+}
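
The behavior requested by EscapeForHTML is the default in the stable v1 `encoding/json` encoder, which can serve as a runnable illustration (the `htmlSafeJSON` helper below is only for this sketch, not part of any package API):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// htmlSafeJSON marshals v with v1 encoding/json, which escapes
// '<', '>', and '&' by default -- the behavior EscapeForHTML(true)
// opts into for this package's encoder.
func htmlSafeJSON(v any) string {
	b, _ := json.Marshal(v)
	return string(b)
}

func main() {
	// The angle brackets and ampersand become \u003c, \u003e, \u0026.
	fmt.Println(htmlSafeJSON("<script>&"))
}
```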
+
+// EscapeForJS specifies that U+2028 and U+2029 characters within JSON strings
+// should be escaped as a hexadecimal Unicode codepoint (e.g., \u2028) so that
+// the output is valid to embed within JavaScript. See RFC 8259, section 12.
+//
+// This only affects encoding and is ignored when decoding.
+func EscapeForJS(v bool) Options {
+ return jsontext.EscapeForJS(v)
+}
+
+// PreserveRawStrings specifies that when encoding a raw JSON string in a
+// [Token] or [Value], pre-escaped sequences
+// in a JSON string are preserved to the output.
+// However, raw strings still respect [EscapeForHTML] and [EscapeForJS]
+// such that the relevant characters are escaped.
+// If [AllowInvalidUTF8] is enabled, bytes of invalid UTF-8
+// are preserved to the output.
+//
+// This only affects encoding and is ignored when decoding.
+func PreserveRawStrings(v bool) Options {
+ return jsontext.PreserveRawStrings(v)
+}
+
+// CanonicalizeRawInts specifies that when encoding a raw JSON
+// integer number (i.e., a number without a fraction and exponent) in a
+// [Token] or [Value], the number is canonicalized
+// according to RFC 8785, section 3.2.2.3. As a special case,
+// the number -0 is canonicalized as 0.
+//
+// JSON numbers are treated as IEEE 754 double precision numbers.
+// Any numbers with precision beyond what is representable by that form
+// will lose their precision when canonicalized. For example,
+// integer values beyond ±2⁵³ lose precision:
+// 1234567890123456789 is formatted as 1234567890123456800.
+//
+// This only affects encoding and is ignored when decoding.
+func CanonicalizeRawInts(v bool) Options {
+ return jsontext.CanonicalizeRawInts(v)
+}
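
The precision loss described above can be reproduced with plain `strconv`, since RFC 8785 renders numbers through their IEEE 754 double-precision value; the `canonInt` helper here is a hypothetical sketch of that rule, not this package's implementation:

```go
package main

import (
	"fmt"
	"strconv"
)

// canonInt renders an integer the way an RFC 8785 canonicalizer would:
// via its float64 value, using the shortest decimal form that round-trips.
func canonInt(n int64) string {
	return strconv.FormatFloat(float64(n), 'f', -1, 64)
}

func main() {
	// Values beyond ±2^53 cannot be represented exactly in a float64,
	// so the canonical form differs from the original digits.
	fmt.Println(canonInt(1234567890123456789))
}
```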
+
+// CanonicalizeRawFloats specifies that when encoding a raw JSON
+// floating-point number (i.e., a number with a fraction or exponent) in a
+// [Token] or [Value], the number is canonicalized
+// according to RFC 8785, section 3.2.2.3. As a special case,
+// the number -0 is canonicalized as 0.
+//
+// JSON numbers are treated as IEEE 754 double precision numbers.
+// It is safe to canonicalize a serialized single precision number and
+// parse it back as a single precision number and expect the same value.
+// If a number exceeds ±1.7976931348623157e+308, which is the maximum
+// finite number, then it is saturated at that value and formatted as such.
+//
+// This only affects encoding and is ignored when decoding.
+func CanonicalizeRawFloats(v bool) Options {
+ return jsontext.CanonicalizeRawFloats(v)
+}
+
+// ReorderRawObjects specifies that when encoding a raw JSON object in a
+// [Value], the object members are reordered according to
+// RFC 8785, section 3.2.3.
+//
+// This only affects encoding and is ignored when decoding.
+func ReorderRawObjects(v bool) Options {
+ return jsontext.ReorderRawObjects(v)
+}
+
+// SpaceAfterColon specifies that the JSON output should emit a space character
+// after each colon separator following a JSON object name.
+// If false, then no space character appears after the colon separator.
+//
+// This only affects encoding and is ignored when decoding.
+func SpaceAfterColon(v bool) Options {
+ return jsontext.SpaceAfterColon(v)
+}
+
+// SpaceAfterComma specifies that the JSON output should emit a space character
+// after each comma separator following a JSON object value or array element.
+// If false, then no space character appears after the comma separator.
+//
+// This only affects encoding and is ignored when decoding.
+func SpaceAfterComma(v bool) Options {
+ return jsontext.SpaceAfterComma(v)
+}
+
+// Multiline specifies that the JSON output should expand to multiple lines,
+// where every JSON object member or JSON array element appears on
+// a new, indented line according to the nesting depth.
+//
+// If [SpaceAfterColon] is not specified, then the default is true.
+// If [SpaceAfterComma] is not specified, then the default is false.
+// If [WithIndent] is not specified, then the default is "\t".
+//
+// If set to false, then the output is a single line,
+// where the only whitespace emitted is determined by the current
+// values of [SpaceAfterColon] and [SpaceAfterComma].
+//
+// This only affects encoding and is ignored when decoding.
+func Multiline(v bool) Options {
+ return jsontext.Multiline(v)
+}
+
+// WithIndent specifies that the encoder should emit multiline output
+// where each element in a JSON object or array begins on a new, indented line
+// beginning with the indent prefix (see [WithIndentPrefix])
+// followed by one or more copies of indent according to the nesting depth.
+// The indent must only be composed of space or tab characters.
+//
+// If the intent is to emit indented output without a preference for
+// the particular indent string, then use [Multiline] instead.
+//
+// This only affects encoding and is ignored when decoding.
+// Use of this option implies [Multiline] being set to true.
+func WithIndent(indent string) Options {
+ return jsontext.WithIndent(indent)
+}
+
+// WithIndentPrefix specifies that the encoder should emit multiline output
+// where each element in a JSON object or array begins on a new, indented line
+// beginning with the indent prefix followed by one or more copies of indent
+// (see [WithIndent]) according to the nesting depth.
+// The prefix must only be composed of space or tab characters.
+//
+// This only affects encoding and is ignored when decoding.
+// Use of this option implies [Multiline] being set to true.
+func WithIndentPrefix(prefix string) Options {
+ return jsontext.WithIndentPrefix(prefix)
+}
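
The prefix/indent interplay mirrors v1's `json.MarshalIndent(v, prefix, indent)`: every new line begins with the prefix, followed by one copy of the indent per nesting depth. A stable-stdlib sketch (the `indentJSON` helper is illustrative only):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// indentJSON formats v with a line prefix and a per-depth indent,
// analogous to combining WithIndentPrefix and WithIndent.
func indentJSON(v any, prefix, indent string) string {
	b, _ := json.MarshalIndent(v, prefix, indent)
	return string(b)
}

func main() {
	// Each nested level gets one more tab after the "> " prefix.
	fmt.Println(indentJSON(map[string]any{"k": []int{1}}, "> ", "\t"))
}
```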
+
+// AppendQuote appends a double-quoted JSON string literal representing src
+// to dst and returns the extended buffer.
+// It uses the minimal string representation per RFC 8785, section 3.2.2.2.
+// Invalid UTF-8 bytes are replaced with the Unicode replacement character
+// and an error is returned at the end indicating the presence of invalid UTF-8.
+// The dst must not overlap with the src.
+func AppendQuote[Bytes ~[]byte | ~string](dst []byte, src Bytes) ([]byte, error) {
+ return jsontext.AppendQuote[Bytes](dst, src)
+}
+
+// AppendUnquote appends the decoded interpretation of src as a
+// double-quoted JSON string literal to dst and returns the extended buffer.
+// The input src must be a JSON string without any surrounding whitespace.
+// Invalid UTF-8 bytes are replaced with the Unicode replacement character
+// and an error is returned at the end indicating the presence of invalid UTF-8.
+// Any trailing bytes after the JSON string literal results in an error.
+// The dst must not overlap with the src.
+func AppendUnquote[Bytes ~[]byte | ~string](dst []byte, src Bytes) ([]byte, error) {
+ return jsontext.AppendUnquote[Bytes](dst, src)
+}
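
A rough analog of AppendQuote/AppendUnquote exists in v1 `encoding/json`, which likewise replaces invalid UTF-8 with U+FFFD, though silently rather than with the trailing error this package reports; the `quote`/`unquote` helpers below are a sketch under that assumption:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// quote produces a double-quoted JSON string literal for s.
// v1 encoding/json replaces invalid UTF-8 with the escape \ufffd.
func quote(s string) string {
	b, _ := json.Marshal(s)
	return string(b)
}

// unquote decodes a double-quoted JSON string literal back to a Go string.
func unquote(lit string) (string, error) {
	var s string
	err := json.Unmarshal([]byte(lit), &s)
	return s, err
}

func main() {
	lit := quote("hello, \xffworld") // invalid byte becomes \ufffd
	fmt.Println(lit)
	s, _ := unquote(lit)
	fmt.Println(s)
}
```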
+
+// ErrDuplicateName indicates that a JSON token could not be
+// encoded or decoded because it results in a duplicate JSON object name.
+// This error is directly wrapped within a [SyntacticError] when produced.
+//
+// The name of a duplicate JSON object member can be extracted as:
+//
+// err := ...
+// var serr jsontext.SyntacticError
+// if errors.As(err, &serr) && serr.Err == jsontext.ErrDuplicateName {
+// ptr := serr.JSONPointer // JSON pointer to duplicate name
+// name := ptr.LastToken() // duplicate name itself
+// ...
+// }
+//
+// This error is only returned if [AllowDuplicateNames] is false.
+var ErrDuplicateName = jsontext.ErrDuplicateName
+
+// ErrNonStringName indicates that a JSON token could not be
+// encoded or decoded because it is not a string,
+// as required for JSON object names according to RFC 8259, section 4.
+// This error is directly wrapped within a [SyntacticError] when produced.
+var ErrNonStringName = jsontext.ErrNonStringName
+
+// Pointer is a JSON Pointer (RFC 6901) that references a particular JSON value
+// relative to the root of the top-level JSON value.
+//
+// A Pointer is a slash-separated list of tokens, where each token is
+// either a JSON object name or an index to a JSON array element
+// encoded as a base-10 integer value.
+// It is impossible to distinguish between an array index and an object name
+// (that happens to be a base-10 encoded integer) without also knowing
+// the structure of the top-level JSON value that the pointer refers to.
+//
+// There is exactly one representation of a pointer to a particular value,
+// so comparability of Pointer values is equivalent to checking whether
+// they both point to the exact same value.
+type Pointer = jsontext.Pointer
+
+// Token represents a lexical JSON token, which may be one of the following:
+// - a JSON literal (i.e., null, true, or false)
+// - a JSON string (e.g., "hello, world!")
+// - a JSON number (e.g., 123.456)
+// - a begin or end delimiter for a JSON object (i.e., { or } )
+// - a begin or end delimiter for a JSON array (i.e., [ or ] )
+//
+// A Token cannot represent entire array or object values, while a [Value] can.
+// There is no Token to represent commas and colons since
+// these structural tokens can be inferred from the surrounding context.
+type Token = jsontext.Token
+
+var (
+ Null = jsontext.Null
+ False = jsontext.False
+ True = jsontext.True
+ BeginObject = jsontext.BeginObject
+ EndObject = jsontext.EndObject
+ BeginArray = jsontext.BeginArray
+ EndArray = jsontext.EndArray
+)
+
+// Bool constructs a Token representing a JSON boolean.
+func Bool(b bool) Token {
+ return jsontext.Bool(b)
+}
+
+// String constructs a Token representing a JSON string.
+// The provided string should contain valid UTF-8, otherwise invalid characters
+// may be mangled as the Unicode replacement character.
+func String(s string) Token {
+ return jsontext.String(s)
+}
+
+// Float constructs a Token representing a JSON number.
+// The values NaN, +Inf, and -Inf will be represented
+// as a JSON string with the values "NaN", "Infinity", and "-Infinity".
+func Float(n float64) Token {
+ return jsontext.Float(n)
+}
+
+// Int constructs a Token representing a JSON number from an int64.
+func Int(n int64) Token {
+ return jsontext.Int(n)
+}
+
+// Uint constructs a Token representing a JSON number from a uint64.
+func Uint(n uint64) Token {
+ return jsontext.Uint(n)
+}
+
+// Kind represents each possible JSON token kind with a single byte,
+// which is conveniently the first byte of that kind's grammar
+// with the restriction that numbers always be represented with '0':
+//
+// - 'n': null
+// - 'f': false
+// - 't': true
+// - '"': string
+// - '0': number
+// - '{': object begin
+// - '}': object end
+// - '[': array begin
+// - ']': array end
+//
+// An invalid kind is usually represented using 0,
+// but may be non-zero due to invalid JSON data.
+type Kind = jsontext.Kind
+
+// AppendFormat formats the JSON value in src and appends it to dst
+// according to the specified options.
+// See [Value.Format] for more details about the formatting behavior.
+//
+// The dst and src may overlap.
+// If an error is reported, then the entirety of src is appended to dst.
+func AppendFormat(dst, src []byte, opts ...Options) ([]byte, error) {
+ return jsontext.AppendFormat(dst, src, opts...)
+}
+
+// Value represents a single raw JSON value, which may be one of the following:
+// - a JSON literal (i.e., null, true, or false)
+// - a JSON string (e.g., "hello, world!")
+// - a JSON number (e.g., 123.456)
+// - an entire JSON object (e.g., {"fizz":"buzz"} )
+// - an entire JSON array (e.g., [1,2,3] )
+//
+// Value can represent entire array or object values, while [Token] cannot.
+// Value may contain leading and/or trailing whitespace.
+type Value = jsontext.Value
diff --git a/internal/json/jsontext/decode.go b/internal/json/jsontext/decode.go
new file mode 100644
index 0000000000..31a8fbef97
--- /dev/null
+++ b/internal/json/jsontext/decode.go
@@ -0,0 +1,1168 @@
+// Copyright 2020 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+//go:build !goexperiment.jsonv2 || !go1.25
+
+package jsontext
+
+import (
+ "bytes"
+ "errors"
+ "io"
+
+ "github.com/quay/clair/v4/internal/json/internal/jsonflags"
+ "github.com/quay/clair/v4/internal/json/internal/jsonopts"
+ "github.com/quay/clair/v4/internal/json/internal/jsonwire"
+)
+
+// NOTE: The logic for decoding is complicated by the fact that reading from
+// an io.Reader into a temporary buffer means that the buffer may contain a
+// truncated portion of some valid input, requiring the need to fetch more data.
+//
+// This file is structured in the following way:
+//
+// - consumeXXX functions parse an exact JSON token from a []byte.
+// If the buffer appears truncated, then it returns io.ErrUnexpectedEOF.
+// The consumeSimpleXXX functions are so named because they only handle
+// a subset of the grammar for the JSON token being parsed.
+// They do not handle the full grammar to keep these functions inlinable.
+//
+// - Decoder.consumeXXX methods parse the next JSON token from Decoder.buf,
+// automatically fetching more input if necessary. These methods take
+// a position relative to the start of Decoder.buf as an argument and
+// return the end of the consumed JSON token as a position,
+// also relative to the start of Decoder.buf.
+//
+// - In the event of I/O errors or state machine violations,
+// the implementation avoids mutating the state of Decoder
+// (aside from the book-keeping needed to implement Decoder.fetch).
+// For this reason, only Decoder.ReadToken and Decoder.ReadValue are
+// responsible for updated Decoder.prevStart and Decoder.prevEnd.
+//
+// - For performance, much of the implementation uses the pattern of calling
+// the inlinable consumeXXX functions first, and if more work is necessary,
+// then it calls the slower Decoder.consumeXXX methods.
+// TODO: Revisit this pattern if the Go compiler provides finer control
+// over exactly which calls are inlined or not.
+
+// Decoder is a streaming decoder for raw JSON tokens and values.
+// It is used to read a stream of top-level JSON values,
+// each separated by optional whitespace characters.
+//
+// [Decoder.ReadToken] and [Decoder.ReadValue] calls may be interleaved.
+// For example, the following JSON value:
+//
+// {"name":"value","array":[null,false,true,3.14159],"object":{"k":"v"}}
+//
+// can be parsed with the following calls (ignoring errors for brevity):
+//
+// d.ReadToken() // {
+// d.ReadToken() // "name"
+// d.ReadToken() // "value"
+// d.ReadValue() // "array"
+// d.ReadToken() // [
+// d.ReadToken() // null
+// d.ReadToken() // false
+// d.ReadValue() // true
+// d.ReadToken() // 3.14159
+// d.ReadToken() // ]
+// d.ReadValue() // "object"
+// d.ReadValue() // {"k":"v"}
+// d.ReadToken() // }
+//
+// The above is one of many possible sequences of calls and
+// may not represent the most sensible method to call for any given token/value.
+// For example, it is probably more common to call [Decoder.ReadToken] to obtain a
+// string token for object names.
+type Decoder struct {
+ s decoderState
+}
+
+// decoderState is the low-level state of Decoder.
+// It has exported fields and methods for use by the "json" package.
+type decoderState struct {
+ state
+ decodeBuffer
+ jsonopts.Struct
+
+ StringCache *[256]string // only used when unmarshaling; identical to json.stringCache
+}
+
+// decodeBuffer is a buffer split into 4 segments:
+//
+// - buf[0:prevEnd] // already read portion of the buffer
+// - buf[prevStart:prevEnd] // previously read value
+// - buf[prevEnd:len(buf)] // unread portion of the buffer
+// - buf[len(buf):cap(buf)] // unused portion of the buffer
+//
+// Invariants:
+//
+// 0 ≤ prevStart ≤ prevEnd ≤ len(buf) ≤ cap(buf)
+type decodeBuffer struct {
+ peekPos int // non-zero if valid offset into buf for start of next token
+ peekErr error // implies peekPos is -1
+
+ buf []byte // may alias rd if it is a bytes.Buffer
+ prevStart int
+ prevEnd int
+
+ // baseOffset is added to prevStart and prevEnd to obtain
+ // the absolute offset relative to the start of io.Reader stream.
+ baseOffset int64
+
+ rd io.Reader
+}
+
+// NewDecoder constructs a new streaming decoder reading from r.
+//
+// If r is a [bytes.Buffer], then the decoder parses directly from the buffer
+// without first copying the contents to an intermediate buffer.
+// Additional writes to the buffer must not occur while the decoder is in use.
+func NewDecoder(r io.Reader, opts ...Options) *Decoder {
+ d := new(Decoder)
+ d.Reset(r, opts...)
+ return d
+}
+
+// Reset resets a decoder such that it is reading afresh from r and
+// configured with the provided options. Reset must not be called on
+// a Decoder passed to the [encoding/json/v2.UnmarshalerFrom.UnmarshalJSONFrom] method
+// or the [encoding/json/v2.UnmarshalFromFunc] function.
+func (d *Decoder) Reset(r io.Reader, opts ...Options) {
+ switch {
+ case d == nil:
+ panic("jsontext: invalid nil Decoder")
+ case r == nil:
+ panic("jsontext: invalid nil io.Reader")
+ case d.s.Flags.Get(jsonflags.WithinArshalCall):
+ panic("jsontext: cannot reset Decoder passed to json.UnmarshalerFrom")
+ }
+ d.s.reset(nil, r, opts...)
+}
+
+func (d *decoderState) reset(b []byte, r io.Reader, opts ...Options) {
+ d.state.reset()
+ d.decodeBuffer = decodeBuffer{buf: b, rd: r}
+ opts2 := jsonopts.Struct{} // avoid mutating d.Struct in case it is part of opts
+ opts2.Join(opts...)
+ d.Struct = opts2
+}
+
+// Options returns the options used to construct the decoder and
+// may additionally contain semantic options passed to a
+// [encoding/json/v2.UnmarshalDecode] call.
+//
+// If operating within
+// a [encoding/json/v2.UnmarshalerFrom.UnmarshalJSONFrom] method call or
+// a [encoding/json/v2.UnmarshalFromFunc] function call,
+// then the returned options are only valid within the call.
+func (d *Decoder) Options() Options {
+ return &d.s.Struct
+}
+
+var errBufferWriteAfterNext = errors.New("invalid bytes.Buffer.Write call after calling bytes.Buffer.Next")
+
+// fetch reads at least 1 byte from the underlying io.Reader.
+// It returns io.ErrUnexpectedEOF if zero bytes were read and io.EOF was seen.
+func (d *decoderState) fetch() error {
+ if d.rd == nil {
+ return io.ErrUnexpectedEOF
+ }
+
+ // Inform objectNameStack that we are about to fetch new buffer content.
+ d.Names.copyQuotedBuffer(d.buf)
+
+ // Specialize bytes.Buffer for better performance.
+ if bb, ok := d.rd.(*bytes.Buffer); ok {
+ switch {
+ case bb.Len() == 0:
+ return io.ErrUnexpectedEOF
+ case len(d.buf) == 0:
+ d.buf = bb.Next(bb.Len()) // "read" all data in the buffer
+ return nil
+ default:
+ // This only occurs if a partially filled bytes.Buffer was provided
+ // and more data is written to it while Decoder is reading from it.
+ // This practice will lead to data corruption since future writes
+ // may overwrite the contents of the current buffer.
+ //
+ // The user is trying to use a bytes.Buffer as a pipe,
+// but a bytes.Buffer is a poor implementation of a pipe;
+// the purpose-built io.Pipe should be used instead.
+ return &ioError{action: "read", err: errBufferWriteAfterNext}
+ }
+ }
+
+ // Allocate initial buffer if empty.
+ if cap(d.buf) == 0 {
+ d.buf = make([]byte, 0, 64)
+ }
+
+ // Check whether to grow the buffer.
+ const maxBufferSize = 4 << 10
+ const growthSizeFactor = 2 // higher value is faster
+ const growthRateFactor = 2 // higher value is slower
+ // By default, grow if below the maximum buffer size.
+ grow := cap(d.buf) <= maxBufferSize/growthSizeFactor
+ // Growing can be expensive, so only grow
+ // if a sufficient number of bytes have been processed.
+ grow = grow && int64(cap(d.buf)) < d.previousOffsetEnd()/growthRateFactor
+ // If prevStart==0, then fetch was called in order to fetch more data
+ // to finish consuming a large JSON value contiguously.
+ // Grow if less than 25% of the remaining capacity is available.
+ // Note that this may cause the input buffer to exceed maxBufferSize.
+ grow = grow || (d.prevStart == 0 && len(d.buf) >= 3*cap(d.buf)/4)
+
+ if grow {
+ // Allocate a new buffer and copy the contents of the old buffer over.
+ // TODO: Provide a hard limit on the maximum internal buffer size?
+ buf := make([]byte, 0, cap(d.buf)*growthSizeFactor)
+ d.buf = append(buf, d.buf[d.prevStart:]...)
+ } else {
+ // Move unread portion of the data to the front.
+ n := copy(d.buf[:cap(d.buf)], d.buf[d.prevStart:])
+ d.buf = d.buf[:n]
+ }
+ d.baseOffset += int64(d.prevStart)
+ d.prevEnd -= d.prevStart
+ d.prevStart = 0
+
+ // Read more data into the internal buffer.
+ for {
+ n, err := d.rd.Read(d.buf[len(d.buf):cap(d.buf)])
+ switch {
+ case n > 0:
+ d.buf = d.buf[:len(d.buf)+n]
+ return nil // ignore errors if any bytes are read
+ case err == io.EOF:
+ return io.ErrUnexpectedEOF
+ case err != nil:
+ return &ioError{action: "read", err: err}
+ default:
+ continue // Read returned (0, nil)
+ }
+ }
+}
+
+const invalidateBufferByte = '#' // invalid starting character for JSON grammar
+
+// invalidatePreviousRead invalidates buffers returned by Peek and Read calls
+// so that the first byte is an invalid character.
+// This Hyrum-proofs the API against faulty application code that assumes
+// values returned by ReadValue remain valid past subsequent Read calls.
+func (d *decodeBuffer) invalidatePreviousRead() {
+ // Avoid mutating the buffer if d.rd is nil which implies that d.buf
+ // is provided by the user code and may not expect mutations.
+ isBytesBuffer := func(r io.Reader) bool {
+ _, ok := r.(*bytes.Buffer)
+ return ok
+ }
+ if d.rd != nil && !isBytesBuffer(d.rd) && d.prevStart < d.prevEnd && uint(d.prevStart) < uint(len(d.buf)) {
+ d.buf[d.prevStart] = invalidateBufferByte
+ d.prevStart = d.prevEnd
+ }
+}
+
+// needMore reports whether there are no more unread bytes.
+func (d *decodeBuffer) needMore(pos int) bool {
+ // NOTE: The arguments and logic are kept simple to keep this inlinable.
+ return pos == len(d.buf)
+}
+
+func (d *decodeBuffer) offsetAt(pos int) int64 { return d.baseOffset + int64(pos) }
+func (d *decodeBuffer) previousOffsetStart() int64 { return d.baseOffset + int64(d.prevStart) }
+func (d *decodeBuffer) previousOffsetEnd() int64 { return d.baseOffset + int64(d.prevEnd) }
+func (d *decodeBuffer) previousBuffer() []byte { return d.buf[d.prevStart:d.prevEnd] }
+func (d *decodeBuffer) unreadBuffer() []byte { return d.buf[d.prevEnd:len(d.buf)] }
+
+// PreviousTokenOrValue returns the previously read token or value
+// unless it has been invalidated by a call to PeekKind.
+// If a token is just a delimiter, then this returns a 1-byte buffer.
+// This method is used for error reporting at the semantic layer.
+func (d *decodeBuffer) PreviousTokenOrValue() []byte {
+ b := d.previousBuffer()
+ // If peek was called, then the previous token or buffer is invalidated.
+ if d.peekPos > 0 || len(b) > 0 && b[0] == invalidateBufferByte {
+ return nil
+ }
+ // ReadToken does not preserve the buffer for null, bools, or delimiters.
+ // Manually re-construct that buffer.
+ if len(b) == 0 {
+ b = d.buf[:d.prevEnd] // entirety of the previous buffer
+ for _, tok := range []string{"null", "false", "true", "{", "}", "[", "]"} {
+ if len(b) >= len(tok) && string(b[len(b)-len(tok):]) == tok {
+ return b[len(b)-len(tok):]
+ }
+ }
+ }
+ return b
+}
+
+// PeekKind retrieves the next token kind, but does not advance the read offset.
+//
+// It returns 0 if an error occurs. Any such error is cached until
+// the next read call and it is the caller's responsibility to eventually
+// follow up a PeekKind call with a read call.
+func (d *Decoder) PeekKind() Kind {
+ return d.s.PeekKind()
+}
+func (d *decoderState) PeekKind() Kind {
+ // Check whether we have a cached peek result.
+ if d.peekPos > 0 {
+ return Kind(d.buf[d.peekPos]).normalize()
+ }
+
+ var err error
+ d.invalidatePreviousRead()
+ pos := d.prevEnd
+
+ // Consume leading whitespace.
+ pos += jsonwire.ConsumeWhitespace(d.buf[pos:])
+ if d.needMore(pos) {
+ if pos, err = d.consumeWhitespace(pos); err != nil {
+ if err == io.ErrUnexpectedEOF && d.Tokens.Depth() == 1 {
+ err = io.EOF // EOF possibly if no Tokens present after top-level value
+ }
+ d.peekPos, d.peekErr = -1, wrapSyntacticError(d, err, pos, 0)
+ return invalidKind
+ }
+ }
+
+ // Consume colon or comma.
+ var delim byte
+ if c := d.buf[pos]; c == ':' || c == ',' {
+ delim = c
+ pos += 1
+ pos += jsonwire.ConsumeWhitespace(d.buf[pos:])
+ if d.needMore(pos) {
+ if pos, err = d.consumeWhitespace(pos); err != nil {
+ err = wrapSyntacticError(d, err, pos, 0)
+ d.peekPos, d.peekErr = -1, d.checkDelimBeforeIOError(delim, err)
+ return invalidKind
+ }
+ }
+ }
+ next := Kind(d.buf[pos]).normalize()
+ if d.Tokens.needDelim(next) != delim {
+ d.peekPos, d.peekErr = -1, d.checkDelim(delim, next)
+ return invalidKind
+ }
+
+ // This may set peekPos to zero, which is indistinguishable from
+ // the uninitialized state. While a small hit to performance, it is correct
+ // since ReadValue and ReadToken will disregard the cached result and
+ // recompute the next kind.
+ d.peekPos, d.peekErr = pos, nil
+ return next
+}
+
+// checkDelimBeforeIOError checks whether the delim is even valid
+// before returning an IO error, which occurs after the delim.
+func (d *decoderState) checkDelimBeforeIOError(delim byte, err error) error {
+ // Since an IO error occurred, we do not know what the next kind is.
+ // However, knowing the next kind is necessary to validate
+ // whether the current delim is at least potentially valid.
+ // Since a JSON string is always valid as the next token,
+ // conservatively assume that is the next kind for validation.
+ const next = Kind('"')
+ if d.Tokens.needDelim(next) != delim {
+ err = d.checkDelim(delim, next)
+ }
+ return err
+}
+
+// CountNextDelimWhitespace counts the number of upcoming bytes of
+// delimiter or whitespace characters.
+// This method is used for error reporting at the semantic layer.
+func (d *decoderState) CountNextDelimWhitespace() int {
+ d.PeekKind() // populate unreadBuffer
+ return len(d.unreadBuffer()) - len(bytes.TrimLeft(d.unreadBuffer(), ",: \n\r\t"))
+}
+
+// checkDelim checks whether delim is valid for the given next kind.
+func (d *decoderState) checkDelim(delim byte, next Kind) error {
+ where := "at start of value"
+ switch d.Tokens.needDelim(next) {
+ case delim:
+ return nil
+ case ':':
+ where = "after object name (expecting ':')"
+ case ',':
+ if d.Tokens.Last.isObject() {
+ where = "after object value (expecting ',' or '}')"
+ } else {
+ where = "after array element (expecting ',' or ']')"
+ }
+ }
+ pos := d.prevEnd // restore position to right after leading whitespace
+ pos += jsonwire.ConsumeWhitespace(d.buf[pos:])
+ err := jsonwire.NewInvalidCharacterError(d.buf[pos:], where)
+ return wrapSyntacticError(d, err, pos, 0)
+}
+
+// SkipValue is semantically equivalent to calling [Decoder.ReadValue] and discarding
+// the result except that memory is not wasted trying to hold the entire result.
+func (d *Decoder) SkipValue() error {
+ return d.s.SkipValue()
+}
+func (d *decoderState) SkipValue() error {
+ switch d.PeekKind() {
+ case '{', '[':
+ // For JSON objects and arrays, keep skipping all tokens
+ // until the depth matches the starting depth.
+ depth := d.Tokens.Depth()
+ for {
+ if _, err := d.ReadToken(); err != nil {
+ return err
+ }
+ if depth >= d.Tokens.Depth() {
+ return nil
+ }
+ }
+ default:
+ // Trying to skip a value when the next token is a '}' or ']'
+ // will result in an error being returned here.
+ var flags jsonwire.ValueFlags
+ if _, err := d.ReadValue(&flags); err != nil {
+ return err
+ }
+ return nil
+ }
+}
+
+// SkipValueRemainder skips the remainder of a value
+// after reading a '{' or '[' token.
+func (d *decoderState) SkipValueRemainder() error {
+ if d.Tokens.Depth()-1 > 0 && d.Tokens.Last.Length() == 0 {
+ for n := d.Tokens.Depth(); d.Tokens.Depth() >= n; {
+ if _, err := d.ReadToken(); err != nil {
+ return err
+ }
+ }
+ }
+ return nil
+}
+
+// SkipUntil skips all tokens until the state machine
+// is at or past the specified depth and length.
+func (d *decoderState) SkipUntil(depth int, length int64) error {
+ for d.Tokens.Depth() > depth || (d.Tokens.Depth() == depth && d.Tokens.Last.Length() < length) {
+ if _, err := d.ReadToken(); err != nil {
+ return err
+ }
+ }
+ return nil
+}
+
+// ReadToken reads the next [Token], advancing the read offset.
+// The returned token is only valid until the next Peek, Read, or Skip call.
+// It returns [io.EOF] if there are no more tokens.
+func (d *Decoder) ReadToken() (Token, error) {
+ return d.s.ReadToken()
+}
+func (d *decoderState) ReadToken() (Token, error) {
+ // Determine the next kind.
+ var err error
+ var next Kind
+ pos := d.peekPos
+ if pos != 0 {
+ // Use cached peek result.
+ if d.peekErr != nil {
+ err := d.peekErr
+ d.peekPos, d.peekErr = 0, nil // possibly a transient I/O error
+ return Token{}, err
+ }
+ next = Kind(d.buf[pos]).normalize()
+ d.peekPos = 0 // reset cache
+ } else {
+ d.invalidatePreviousRead()
+ pos = d.prevEnd
+
+ // Consume leading whitespace.
+ pos += jsonwire.ConsumeWhitespace(d.buf[pos:])
+ if d.needMore(pos) {
+ if pos, err = d.consumeWhitespace(pos); err != nil {
+ if err == io.ErrUnexpectedEOF && d.Tokens.Depth() == 1 {
+ err = io.EOF // EOF possibly if no Tokens present after top-level value
+ }
+ return Token{}, wrapSyntacticError(d, err, pos, 0)
+ }
+ }
+
+ // Consume colon or comma.
+ var delim byte
+ if c := d.buf[pos]; c == ':' || c == ',' {
+ delim = c
+ pos += 1
+ pos += jsonwire.ConsumeWhitespace(d.buf[pos:])
+ if d.needMore(pos) {
+ if pos, err = d.consumeWhitespace(pos); err != nil {
+ err = wrapSyntacticError(d, err, pos, 0)
+ return Token{}, d.checkDelimBeforeIOError(delim, err)
+ }
+ }
+ }
+ next = Kind(d.buf[pos]).normalize()
+ if d.Tokens.needDelim(next) != delim {
+ return Token{}, d.checkDelim(delim, next)
+ }
+ }
+
+ // Handle the next token.
+ var n int
+ switch next {
+ case 'n':
+ if jsonwire.ConsumeNull(d.buf[pos:]) == 0 {
+ pos, err = d.consumeLiteral(pos, "null")
+ if err != nil {
+ return Token{}, wrapSyntacticError(d, err, pos, +1)
+ }
+ } else {
+ pos += len("null")
+ }
+ if err = d.Tokens.appendLiteral(); err != nil {
+ return Token{}, wrapSyntacticError(d, err, pos-len("null"), +1) // report position at start of literal
+ }
+ d.prevStart, d.prevEnd = pos, pos
+ return Null, nil
+
+ case 'f':
+ if jsonwire.ConsumeFalse(d.buf[pos:]) == 0 {
+ pos, err = d.consumeLiteral(pos, "false")
+ if err != nil {
+ return Token{}, wrapSyntacticError(d, err, pos, +1)
+ }
+ } else {
+ pos += len("false")
+ }
+ if err = d.Tokens.appendLiteral(); err != nil {
+ return Token{}, wrapSyntacticError(d, err, pos-len("false"), +1) // report position at start of literal
+ }
+ d.prevStart, d.prevEnd = pos, pos
+ return False, nil
+
+ case 't':
+ if jsonwire.ConsumeTrue(d.buf[pos:]) == 0 {
+ pos, err = d.consumeLiteral(pos, "true")
+ if err != nil {
+ return Token{}, wrapSyntacticError(d, err, pos, +1)
+ }
+ } else {
+ pos += len("true")
+ }
+ if err = d.Tokens.appendLiteral(); err != nil {
+ return Token{}, wrapSyntacticError(d, err, pos-len("true"), +1) // report position at start of literal
+ }
+ d.prevStart, d.prevEnd = pos, pos
+ return True, nil
+
+ case '"':
+ var flags jsonwire.ValueFlags // TODO: Preserve this in Token?
+ if n = jsonwire.ConsumeSimpleString(d.buf[pos:]); n == 0 {
+ oldAbsPos := d.baseOffset + int64(pos)
+ pos, err = d.consumeString(&flags, pos)
+ newAbsPos := d.baseOffset + int64(pos)
+ n = int(newAbsPos - oldAbsPos)
+ if err != nil {
+ return Token{}, wrapSyntacticError(d, err, pos, +1)
+ }
+ } else {
+ pos += n
+ }
+ if d.Tokens.Last.NeedObjectName() {
+ if !d.Flags.Get(jsonflags.AllowDuplicateNames) {
+ if !d.Tokens.Last.isValidNamespace() {
+ return Token{}, wrapSyntacticError(d, errInvalidNamespace, pos-n, +1)
+ }
+ if d.Tokens.Last.isActiveNamespace() && !d.Namespaces.Last().insertQuoted(d.buf[pos-n:pos], flags.IsVerbatim()) {
+ err = wrapWithObjectName(ErrDuplicateName, d.buf[pos-n:pos])
+ return Token{}, wrapSyntacticError(d, err, pos-n, +1) // report position at start of string
+ }
+ }
+ d.Names.ReplaceLastQuotedOffset(pos - n) // only replace if insertQuoted succeeds
+ }
+ if err = d.Tokens.appendString(); err != nil {
+ return Token{}, wrapSyntacticError(d, err, pos-n, +1) // report position at start of string
+ }
+ d.prevStart, d.prevEnd = pos-n, pos
+ return Token{raw: &d.decodeBuffer, num: uint64(d.previousOffsetStart())}, nil
+
+ case '0':
+ // NOTE: Since JSON numbers are not self-terminating,
+ // we need to make sure that the next byte is not part of a number.
+ if n = jsonwire.ConsumeSimpleNumber(d.buf[pos:]); n == 0 || d.needMore(pos+n) {
+ oldAbsPos := d.baseOffset + int64(pos)
+ pos, err = d.consumeNumber(pos)
+ newAbsPos := d.baseOffset + int64(pos)
+ n = int(newAbsPos - oldAbsPos)
+ if err != nil {
+ return Token{}, wrapSyntacticError(d, err, pos, +1)
+ }
+ } else {
+ pos += n
+ }
+ if err = d.Tokens.appendNumber(); err != nil {
+ return Token{}, wrapSyntacticError(d, err, pos-n, +1) // report position at start of number
+ }
+ d.prevStart, d.prevEnd = pos-n, pos
+ return Token{raw: &d.decodeBuffer, num: uint64(d.previousOffsetStart())}, nil
+
+ case '{':
+ if err = d.Tokens.pushObject(); err != nil {
+ return Token{}, wrapSyntacticError(d, err, pos, +1)
+ }
+ d.Names.push()
+ if !d.Flags.Get(jsonflags.AllowDuplicateNames) {
+ d.Namespaces.push()
+ }
+ pos += 1
+ d.prevStart, d.prevEnd = pos, pos
+ return BeginObject, nil
+
+ case '}':
+ if err = d.Tokens.popObject(); err != nil {
+ return Token{}, wrapSyntacticError(d, err, pos, +1)
+ }
+ d.Names.pop()
+ if !d.Flags.Get(jsonflags.AllowDuplicateNames) {
+ d.Namespaces.pop()
+ }
+ pos += 1
+ d.prevStart, d.prevEnd = pos, pos
+ return EndObject, nil
+
+ case '[':
+ if err = d.Tokens.pushArray(); err != nil {
+ return Token{}, wrapSyntacticError(d, err, pos, +1)
+ }
+ pos += 1
+ d.prevStart, d.prevEnd = pos, pos
+ return BeginArray, nil
+
+ case ']':
+ if err = d.Tokens.popArray(); err != nil {
+ return Token{}, wrapSyntacticError(d, err, pos, +1)
+ }
+ pos += 1
+ d.prevStart, d.prevEnd = pos, pos
+ return EndArray, nil
+
+ default:
+ err = jsonwire.NewInvalidCharacterError(d.buf[pos:], "at start of value")
+ return Token{}, wrapSyntacticError(d, err, pos, +1)
+ }
+}
+
+// ReadValue returns the next raw JSON value, advancing the read offset.
+// The value is stripped of any leading or trailing whitespace and
+// contains the exact bytes of the input, which may contain invalid UTF-8
+// if [AllowInvalidUTF8] is specified.
+//
+// The returned value is only valid until the next Peek, Read, or Skip call and
+// may not be mutated while the Decoder remains in use.
+// If the decoder is currently at the end token for an object or array,
+// then it reports a [SyntacticError] and the internal state remains unchanged.
+// It returns [io.EOF] if there are no more values.
+func (d *Decoder) ReadValue() (Value, error) {
+ var flags jsonwire.ValueFlags
+ return d.s.ReadValue(&flags)
+}
+func (d *decoderState) ReadValue(flags *jsonwire.ValueFlags) (Value, error) {
+ // Determine the next kind.
+ var err error
+ var next Kind
+ pos := d.peekPos
+ if pos != 0 {
+ // Use cached peek result.
+ if d.peekErr != nil {
+ err := d.peekErr
+ d.peekPos, d.peekErr = 0, nil // possibly a transient I/O error
+ return nil, err
+ }
+ next = Kind(d.buf[pos]).normalize()
+ d.peekPos = 0 // reset cache
+ } else {
+ d.invalidatePreviousRead()
+ pos = d.prevEnd
+
+ // Consume leading whitespace.
+ pos += jsonwire.ConsumeWhitespace(d.buf[pos:])
+ if d.needMore(pos) {
+ if pos, err = d.consumeWhitespace(pos); err != nil {
+ if err == io.ErrUnexpectedEOF && d.Tokens.Depth() == 1 {
+ err = io.EOF // EOF possibly if no Tokens present after top-level value
+ }
+ return nil, wrapSyntacticError(d, err, pos, 0)
+ }
+ }
+
+ // Consume colon or comma.
+ var delim byte
+ if c := d.buf[pos]; c == ':' || c == ',' {
+ delim = c
+ pos += 1
+ pos += jsonwire.ConsumeWhitespace(d.buf[pos:])
+ if d.needMore(pos) {
+ if pos, err = d.consumeWhitespace(pos); err != nil {
+ err = wrapSyntacticError(d, err, pos, 0)
+ return nil, d.checkDelimBeforeIOError(delim, err)
+ }
+ }
+ }
+ next = Kind(d.buf[pos]).normalize()
+ if d.Tokens.needDelim(next) != delim {
+ return nil, d.checkDelim(delim, next)
+ }
+ }
+
+ // Handle the next value.
+ oldAbsPos := d.baseOffset + int64(pos)
+ pos, err = d.consumeValue(flags, pos, d.Tokens.Depth())
+ newAbsPos := d.baseOffset + int64(pos)
+ n := int(newAbsPos - oldAbsPos)
+ if err != nil {
+ return nil, wrapSyntacticError(d, err, pos, +1)
+ }
+ switch next {
+ case 'n', 't', 'f':
+ err = d.Tokens.appendLiteral()
+ case '"':
+ if d.Tokens.Last.NeedObjectName() {
+ if !d.Flags.Get(jsonflags.AllowDuplicateNames) {
+ if !d.Tokens.Last.isValidNamespace() {
+ err = errInvalidNamespace
+ break
+ }
+ if d.Tokens.Last.isActiveNamespace() && !d.Namespaces.Last().insertQuoted(d.buf[pos-n:pos], flags.IsVerbatim()) {
+ err = wrapWithObjectName(ErrDuplicateName, d.buf[pos-n:pos])
+ break
+ }
+ }
+ d.Names.ReplaceLastQuotedOffset(pos - n) // only replace if insertQuoted succeeds
+ }
+ err = d.Tokens.appendString()
+ case '0':
+ err = d.Tokens.appendNumber()
+ case '{':
+ if err = d.Tokens.pushObject(); err != nil {
+ break
+ }
+ if err = d.Tokens.popObject(); err != nil {
+ panic("BUG: popObject should never fail immediately after pushObject: " + err.Error())
+ }
+ case '[':
+ if err = d.Tokens.pushArray(); err != nil {
+ break
+ }
+ if err = d.Tokens.popArray(); err != nil {
+ panic("BUG: popArray should never fail immediately after pushArray: " + err.Error())
+ }
+ }
+ if err != nil {
+ return nil, wrapSyntacticError(d, err, pos-n, +1) // report position at start of value
+ }
+ d.prevEnd = pos
+ d.prevStart = pos - n
+ return d.buf[pos-n : pos : pos], nil
+}
+
+// CheckNextValue checks whether the next value is syntactically valid,
+// but does not advance the read offset.
+func (d *decoderState) CheckNextValue() error {
+ d.PeekKind() // populates d.peekPos and d.peekErr
+ pos, err := d.peekPos, d.peekErr
+ d.peekPos, d.peekErr = 0, nil
+ if err != nil {
+ return err
+ }
+
+ var flags jsonwire.ValueFlags
+ if pos, err := d.consumeValue(&flags, pos, d.Tokens.Depth()); err != nil {
+ return wrapSyntacticError(d, err, pos, +1)
+ }
+ return nil
+}
+
+// CheckEOF verifies that the input has no more data.
+func (d *decoderState) CheckEOF() error {
+ switch pos, err := d.consumeWhitespace(d.prevEnd); err {
+ case nil:
+ err := jsonwire.NewInvalidCharacterError(d.buf[pos:], "after top-level value")
+ return wrapSyntacticError(d, err, pos, 0)
+ case io.ErrUnexpectedEOF:
+ return nil
+ default:
+ return err
+ }
+}
+
+// consumeWhitespace consumes all whitespace starting at d.buf[pos:].
+// It returns the new position in d.buf immediately after the last whitespace.
+// If it returns nil, there is guaranteed to be at least one unread byte.
+//
+// The following pattern is common in this implementation:
+//
+// pos += jsonwire.ConsumeWhitespace(d.buf[pos:])
+// if d.needMore(pos) {
+// if pos, err = d.consumeWhitespace(pos); err != nil {
+// return ...
+// }
+// }
+//
+// It is difficult to simplify this without sacrificing performance since
+// consumeWhitespace must be inlined. The body of the if statement is
+// executed only in rare situations where we need to fetch more data.
+// Since fetching may return an error, we also need to check the error.
+func (d *decoderState) consumeWhitespace(pos int) (newPos int, err error) {
+ for {
+ pos += jsonwire.ConsumeWhitespace(d.buf[pos:])
+ if d.needMore(pos) {
+ absPos := d.baseOffset + int64(pos)
+ err = d.fetch() // will mutate d.buf and invalidate pos
+ pos = int(absPos - d.baseOffset)
+ if err != nil {
+ return pos, err
+ }
+ continue
+ }
+ return pos, nil
+ }
+}
+
+// consumeValue consumes a single JSON value starting at d.buf[pos:].
+// It returns the new position in d.buf immediately after the value.
+func (d *decoderState) consumeValue(flags *jsonwire.ValueFlags, pos, depth int) (newPos int, err error) {
+ for {
+ var n int
+ var err error
+ switch next := Kind(d.buf[pos]).normalize(); next {
+ case 'n':
+ if n = jsonwire.ConsumeNull(d.buf[pos:]); n == 0 {
+ n, err = jsonwire.ConsumeLiteral(d.buf[pos:], "null")
+ }
+ case 'f':
+ if n = jsonwire.ConsumeFalse(d.buf[pos:]); n == 0 {
+ n, err = jsonwire.ConsumeLiteral(d.buf[pos:], "false")
+ }
+ case 't':
+ if n = jsonwire.ConsumeTrue(d.buf[pos:]); n == 0 {
+ n, err = jsonwire.ConsumeLiteral(d.buf[pos:], "true")
+ }
+ case '"':
+ if n = jsonwire.ConsumeSimpleString(d.buf[pos:]); n == 0 {
+ return d.consumeString(flags, pos)
+ }
+ case '0':
+ // NOTE: Since JSON numbers are not self-terminating,
+ // we need to make sure that the next byte is not part of a number.
+ if n = jsonwire.ConsumeSimpleNumber(d.buf[pos:]); n == 0 || d.needMore(pos+n) {
+ return d.consumeNumber(pos)
+ }
+ case '{':
+ return d.consumeObject(flags, pos, depth)
+ case '[':
+ return d.consumeArray(flags, pos, depth)
+ default:
+ if (d.Tokens.Last.isObject() && next == ']') || (d.Tokens.Last.isArray() && next == '}') {
+ return pos, errMismatchDelim
+ }
+ return pos, jsonwire.NewInvalidCharacterError(d.buf[pos:], "at start of value")
+ }
+ if err == io.ErrUnexpectedEOF {
+ absPos := d.baseOffset + int64(pos)
+ err = d.fetch() // will mutate d.buf and invalidate pos
+ pos = int(absPos - d.baseOffset)
+ if err != nil {
+ return pos + n, err
+ }
+ continue
+ }
+ return pos + n, err
+ }
+}
+
+// consumeLiteral consumes a single JSON literal starting at d.buf[pos:].
+// It returns the new position in d.buf immediately after the literal.
+func (d *decoderState) consumeLiteral(pos int, lit string) (newPos int, err error) {
+ for {
+ n, err := jsonwire.ConsumeLiteral(d.buf[pos:], lit)
+ if err == io.ErrUnexpectedEOF {
+ absPos := d.baseOffset + int64(pos)
+ err = d.fetch() // will mutate d.buf and invalidate pos
+ pos = int(absPos - d.baseOffset)
+ if err != nil {
+ return pos + n, err
+ }
+ continue
+ }
+ return pos + n, err
+ }
+}
+
+// consumeString consumes a single JSON string starting at d.buf[pos:].
+// It returns the new position in d.buf immediately after the string.
+func (d *decoderState) consumeString(flags *jsonwire.ValueFlags, pos int) (newPos int, err error) {
+ var n int
+ for {
+ n, err = jsonwire.ConsumeStringResumable(flags, d.buf[pos:], n, !d.Flags.Get(jsonflags.AllowInvalidUTF8))
+ if err == io.ErrUnexpectedEOF {
+ absPos := d.baseOffset + int64(pos)
+ err = d.fetch() // will mutate d.buf and invalidate pos
+ pos = int(absPos - d.baseOffset)
+ if err != nil {
+ return pos + n, err
+ }
+ continue
+ }
+ return pos + n, err
+ }
+}
+
+// consumeNumber consumes a single JSON number starting at d.buf[pos:].
+// It returns the new position in d.buf immediately after the number.
+func (d *decoderState) consumeNumber(pos int) (newPos int, err error) {
+ var n int
+ var state jsonwire.ConsumeNumberState
+ for {
+ n, state, err = jsonwire.ConsumeNumberResumable(d.buf[pos:], n, state)
+ // NOTE: Since JSON numbers are not self-terminating,
+ // we need to make sure that the next byte is not part of a number.
+ if err == io.ErrUnexpectedEOF || d.needMore(pos+n) {
+ mayTerminate := err == nil
+ absPos := d.baseOffset + int64(pos)
+ err = d.fetch() // will mutate d.buf and invalidate pos
+ pos = int(absPos - d.baseOffset)
+ if err != nil {
+ if mayTerminate && err == io.ErrUnexpectedEOF {
+ return pos + n, nil
+ }
+ return pos, err
+ }
+ continue
+ }
+ return pos + n, err
+ }
+}
+
+// consumeObject consumes a single JSON object starting at d.buf[pos:].
+// It returns the new position in d.buf immediately after the object.
+func (d *decoderState) consumeObject(flags *jsonwire.ValueFlags, pos, depth int) (newPos int, err error) {
+ var n int
+ var names *objectNamespace
+ if !d.Flags.Get(jsonflags.AllowDuplicateNames) {
+ d.Namespaces.push()
+ defer d.Namespaces.pop()
+ names = d.Namespaces.Last()
+ }
+
+ // Handle before start.
+ if uint(pos) >= uint(len(d.buf)) || d.buf[pos] != '{' {
+ panic("BUG: consumeObject must be called with a buffer that starts with '{'")
+ } else if depth == maxNestingDepth+1 {
+ return pos, errMaxDepth
+ }
+ pos++
+
+ // Handle after start.
+ pos += jsonwire.ConsumeWhitespace(d.buf[pos:])
+ if d.needMore(pos) {
+ if pos, err = d.consumeWhitespace(pos); err != nil {
+ return pos, err
+ }
+ }
+ if d.buf[pos] == '}' {
+ pos++
+ return pos, nil
+ }
+
+ depth++
+ for {
+ // Handle before name.
+ pos += jsonwire.ConsumeWhitespace(d.buf[pos:])
+ if d.needMore(pos) {
+ if pos, err = d.consumeWhitespace(pos); err != nil {
+ return pos, err
+ }
+ }
+ var flags2 jsonwire.ValueFlags
+ if n = jsonwire.ConsumeSimpleString(d.buf[pos:]); n == 0 {
+ oldAbsPos := d.baseOffset + int64(pos)
+ pos, err = d.consumeString(&flags2, pos)
+ newAbsPos := d.baseOffset + int64(pos)
+ n = int(newAbsPos - oldAbsPos)
+ flags.Join(flags2)
+ if err != nil {
+ return pos, err
+ }
+ } else {
+ pos += n
+ }
+ quotedName := d.buf[pos-n : pos]
+ if !d.Flags.Get(jsonflags.AllowDuplicateNames) && !names.insertQuoted(quotedName, flags2.IsVerbatim()) {
+ return pos - n, wrapWithObjectName(ErrDuplicateName, quotedName)
+ }
+
+ // Handle after name.
+ pos += jsonwire.ConsumeWhitespace(d.buf[pos:])
+ if d.needMore(pos) {
+ if pos, err = d.consumeWhitespace(pos); err != nil {
+ return pos, wrapWithObjectName(err, quotedName)
+ }
+ }
+ if d.buf[pos] != ':' {
+ err := jsonwire.NewInvalidCharacterError(d.buf[pos:], "after object name (expecting ':')")
+ return pos, wrapWithObjectName(err, quotedName)
+ }
+ pos++
+
+ // Handle before value.
+ pos += jsonwire.ConsumeWhitespace(d.buf[pos:])
+ if d.needMore(pos) {
+ if pos, err = d.consumeWhitespace(pos); err != nil {
+ return pos, wrapWithObjectName(err, quotedName)
+ }
+ }
+ pos, err = d.consumeValue(flags, pos, depth)
+ if err != nil {
+ return pos, wrapWithObjectName(err, quotedName)
+ }
+
+ // Handle after value.
+ pos += jsonwire.ConsumeWhitespace(d.buf[pos:])
+ if d.needMore(pos) {
+ if pos, err = d.consumeWhitespace(pos); err != nil {
+ return pos, err
+ }
+ }
+ switch d.buf[pos] {
+ case ',':
+ pos++
+ continue
+ case '}':
+ pos++
+ return pos, nil
+ default:
+ return pos, jsonwire.NewInvalidCharacterError(d.buf[pos:], "after object value (expecting ',' or '}')")
+ }
+ }
+}
+
+// consumeArray consumes a single JSON array starting at d.buf[pos:].
+// It returns the new position in d.buf immediately after the array.
+func (d *decoderState) consumeArray(flags *jsonwire.ValueFlags, pos, depth int) (newPos int, err error) {
+ // Handle before start.
+ if uint(pos) >= uint(len(d.buf)) || d.buf[pos] != '[' {
+ panic("BUG: consumeArray must be called with a buffer that starts with '['")
+ } else if depth == maxNestingDepth+1 {
+ return pos, errMaxDepth
+ }
+ pos++
+
+ // Handle after start.
+ pos += jsonwire.ConsumeWhitespace(d.buf[pos:])
+ if d.needMore(pos) {
+ if pos, err = d.consumeWhitespace(pos); err != nil {
+ return pos, err
+ }
+ }
+ if d.buf[pos] == ']' {
+ pos++
+ return pos, nil
+ }
+
+ var idx int64
+ depth++
+ for {
+ // Handle before value.
+ pos += jsonwire.ConsumeWhitespace(d.buf[pos:])
+ if d.needMore(pos) {
+ if pos, err = d.consumeWhitespace(pos); err != nil {
+ return pos, err
+ }
+ }
+ pos, err = d.consumeValue(flags, pos, depth)
+ if err != nil {
+ return pos, wrapWithArrayIndex(err, idx)
+ }
+
+ // Handle after value.
+ pos += jsonwire.ConsumeWhitespace(d.buf[pos:])
+ if d.needMore(pos) {
+ if pos, err = d.consumeWhitespace(pos); err != nil {
+ return pos, err
+ }
+ }
+ switch d.buf[pos] {
+ case ',':
+ pos++
+ idx++
+ continue
+ case ']':
+ pos++
+ return pos, nil
+ default:
+ return pos, jsonwire.NewInvalidCharacterError(d.buf[pos:], "after array element (expecting ',' or ']')")
+ }
+ }
+}
+
+// InputOffset returns the current input byte offset. It gives the location
+// of the next byte immediately after the most recently returned token or value.
+// The number of bytes actually read from the underlying [io.Reader] may be more
+// than this offset due to internal buffering effects.
+func (d *Decoder) InputOffset() int64 {
+ return d.s.previousOffsetEnd()
+}
+
+// UnreadBuffer returns the data remaining in the unread buffer,
+// which may contain zero or more bytes.
+// The returned buffer must not be mutated while Decoder continues to be used.
+// The buffer contents are valid until the next Peek, Read, or Skip call.
+func (d *Decoder) UnreadBuffer() []byte {
+ return d.s.unreadBuffer()
+}
+
+// StackDepth returns the depth of the state machine for read JSON data.
+// Each level on the stack represents a nested JSON object or array.
+// It is incremented whenever a [BeginObject] or [BeginArray] token is encountered
+// and decremented whenever an [EndObject] or [EndArray] token is encountered.
+// The depth is zero-indexed, where zero represents the top-level JSON value.
+func (d *Decoder) StackDepth() int {
+ // NOTE: Keep in sync with Encoder.StackDepth.
+ return d.s.Tokens.Depth() - 1
+}
+
+// StackIndex returns information about the specified stack level.
+// It must be a number between 0 and [Decoder.StackDepth], inclusive.
+// For each level, it reports the kind:
+//
+// - 0 for a level of zero,
+// - '{' for a level representing a JSON object, and
+// - '[' for a level representing a JSON array.
+//
+// It also reports the length of that JSON object or array.
+// Each name and value in a JSON object is counted separately,
+// so the effective number of members would be half the length.
+// A complete JSON object must have an even length.
+func (d *Decoder) StackIndex(i int) (Kind, int64) {
+ // NOTE: Keep in sync with Encoder.StackIndex.
+ switch s := d.s.Tokens.index(i); {
+ case i > 0 && s.isObject():
+ return '{', s.Length()
+ case i > 0 && s.isArray():
+ return '[', s.Length()
+ default:
+ return 0, s.Length()
+ }
+}
+
+// StackPointer returns a JSON Pointer (RFC 6901) to the most recently read value.
+func (d *Decoder) StackPointer() Pointer {
+ return Pointer(d.s.AppendStackPointer(nil, -1))
+}
+
+func (d *decoderState) AppendStackPointer(b []byte, where int) []byte {
+ d.Names.copyQuotedBuffer(d.buf)
+ return d.state.appendStackPointer(b, where)
+}
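The vendored decoder above exposes a token-at-a-time API (`ReadToken`) alongside whole-value reads (`ReadValue`). As a rough, hedged illustration of the same token-loop pattern using the standard library's v1 `encoding/json.Decoder.Token` (an analogue, not this vendored package):

```go
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"strings"
)

// tokenKinds reads a JSON document token by token, in the same spirit as
// the ReadToken loop implemented above, and returns one string per token.
// Delimiters format as "{", "[", etc.; a JSON null comes back as a nil token.
func tokenKinds(src string) ([]string, error) {
	dec := json.NewDecoder(strings.NewReader(src))
	var out []string
	for {
		tok, err := dec.Token()
		if err == io.EOF { // no more tokens after the top-level value
			return out, nil
		}
		if err != nil {
			return nil, err
		}
		out = append(out, fmt.Sprintf("%v", tok))
	}
}

func main() {
	toks, err := tokenKinds(`{"a":[null,true,3.14]}`)
	if err != nil {
		panic(err)
	}
	fmt.Println(toks)
}
```

Note that, like the vendored `ReadToken`, the stdlib decoder reports `io.EOF` only once the top-level value has been fully consumed; structural characters `:` and `,` never surface as tokens.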
diff --git a/internal/json/jsontext/doc.go b/internal/json/jsontext/doc.go
new file mode 100644
index 0000000000..22081df053
--- /dev/null
+++ b/internal/json/jsontext/doc.go
@@ -0,0 +1,111 @@
+// Copyright 2023 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+//go:build !goexperiment.jsonv2 || !go1.25
+
+// Package jsontext implements syntactic processing of JSON
+// as specified in RFC 4627, RFC 7159, RFC 7493, RFC 8259, and RFC 8785.
+// JSON is a simple data interchange format that can represent
+// primitive data types such as booleans, strings, and numbers,
+// in addition to structured data types such as objects and arrays.
+//
+// The [Encoder] and [Decoder] types are used to encode or decode
+// a stream of JSON tokens or values.
+//
+// # Tokens and Values
+//
+// A JSON token refers to the basic structural elements of JSON:
+//
+// - a JSON literal (i.e., null, true, or false)
+// - a JSON string (e.g., "hello, world!")
+// - a JSON number (e.g., 123.456)
+// - a begin or end delimiter for a JSON object (i.e., '{' or '}')
+// - a begin or end delimiter for a JSON array (i.e., '[' or ']')
+//
+// A JSON token is represented by the [Token] type in Go. Technically,
+// there are two additional structural characters (i.e., ':' and ','),
+// but there is no [Token] representation for them since their presence
+// can be inferred by the structure of the JSON grammar itself.
+// For example, there must always be an implicit colon between
+// the name and value of a JSON object member.
+//
+// A JSON value refers to a complete unit of JSON data:
+//
+// - a JSON literal, string, or number
+// - a JSON object (e.g., `{"name":"value"}`)
+// - a JSON array (e.g., `[1,2,3]`)
+//
+// A JSON value is represented by the [Value] type in Go and is a []byte
+// containing the raw textual representation of the value. There is some overlap
+// between tokens and values as both contain literals, strings, and numbers.
+// However, only a value can represent the entirety of a JSON object or array.
+//
+// The [Encoder] and [Decoder] types contain methods to read or write the next
+// [Token] or [Value] in a sequence. They maintain a state machine to validate
+// whether the sequence of JSON tokens and/or values produces valid JSON.
+// [Options] may be passed to the [NewEncoder] or [NewDecoder] constructors
+// to configure the syntactic behavior of encoding and decoding.
+//
+// # Terminology
+//
+// The terms "encode" and "decode" are used for syntactic functionality
+// that is concerned with processing JSON based on its grammar, and
+// the terms "marshal" and "unmarshal" are used for semantic functionality
+// that determines the meaning of JSON values as Go values and vice-versa.
+// This package (i.e., [jsontext]) deals with JSON at a syntactic layer,
+// while [encoding/json/v2] deals with JSON at a semantic layer.
+// The goal is to provide a clear distinction between functionality that
+// is purely concerned with encoding versus that of marshaling.
+// For example, one can directly encode a stream of JSON tokens without
+// needing to marshal a concrete Go value representing them.
+// Similarly, one can decode a stream of JSON tokens without
+// needing to unmarshal them into a concrete Go value.
+//
+// This package uses JSON terminology when discussing JSON, which may differ
+// from related concepts in Go or elsewhere in computing literature.
+//
+// - a JSON "object" refers to an unordered collection of name/value members.
+// - a JSON "array" refers to an ordered sequence of elements.
+// - a JSON "value" refers to either a literal (i.e., null, false, or true),
+// string, number, object, or array.
+//
+// See RFC 8259 for more information.
+//
+// # Specifications
+//
+// Relevant specifications include RFC 4627, RFC 7159, RFC 7493, RFC 8259,
+// and RFC 8785. Each RFC is generally a stricter subset of another RFC.
+// In increasing order of strictness:
+//
+// - RFC 4627 and RFC 7159 do not require (but recommend) the use of UTF-8
+// and also do not require (but recommend) that object names be unique.
+// - RFC 8259 requires the use of UTF-8,
+// but does not require (but recommends) that object names be unique.
+// - RFC 7493 requires the use of UTF-8
+// and also requires that object names be unique.
+// - RFC 8785 defines a canonical representation. It requires the use of UTF-8
+// and also requires that object names be unique and in a specific ordering.
+// It specifies exactly how strings and numbers must be formatted.
+//
+// The primary difference between RFC 4627 and RFC 7159 is that the former
+// restricted top-level values to only JSON objects and arrays, while
+// RFC 7159 and subsequent RFCs permit top-level values to additionally be
+// JSON nulls, booleans, strings, or numbers.
+//
+// By default, this package operates on RFC 7493, but can be configured
+// to operate according to the other RFC specifications.
+// RFC 7493 is a stricter subset of RFC 8259 and fully compliant with it.
+// In particular, it makes specific choices about behavior that RFC 8259
+// leaves as undefined in order to ensure greater interoperability.
+//
+// # Security Considerations
+//
+// See the "Security Considerations" section in [encoding/json/v2].
+package jsontext
+
+// requireKeyedLiterals can be embedded in a struct to require keyed literals.
+type requireKeyedLiterals struct{}
+
+// nonComparable can be embedded in a struct to prevent comparability.
+type nonComparable [0]func()
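The package documentation above distinguishes tokens from values: only a value can capture an entire object or array as raw bytes. A minimal sketch of value-granularity reading using the stdlib v1 `json.RawMessage` (loosely analogous to this package's `Value`; the helper name `rawMember` is illustrative, not part of any API):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// rawMember extracts one object member as raw JSON text, loosely analogous
// to reading a whole Value rather than individual Tokens.
func rawMember(doc, name string) (string, error) {
	var obj map[string]json.RawMessage
	if err := json.Unmarshal([]byte(doc), &obj); err != nil {
		return "", err
	}
	// RawMessage preserves the member's exact input bytes.
	return string(obj[name]), nil
}

func main() {
	v, err := rawMember(`{"name":"value","array":[1,2,3]}`, "array")
	if err != nil {
		panic(err)
	}
	fmt.Println(v) // the raw array text, not a decoded Go slice
}
```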
diff --git a/internal/json/jsontext/encode.go b/internal/json/jsontext/encode.go
new file mode 100644
index 0000000000..cfe5b50a73
--- /dev/null
+++ b/internal/json/jsontext/encode.go
@@ -0,0 +1,972 @@
+// Copyright 2020 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+//go:build !goexperiment.jsonv2 || !go1.25
+
+package jsontext
+
+import (
+ "bytes"
+ "io"
+ "math/bits"
+
+ "github.com/quay/clair/v4/internal/json/internal/jsonflags"
+ "github.com/quay/clair/v4/internal/json/internal/jsonopts"
+ "github.com/quay/clair/v4/internal/json/internal/jsonwire"
+)
+
+// Encoder is a streaming encoder from raw JSON tokens and values.
+// It is used to write a stream of top-level JSON values,
+// each terminated with a newline character.
+//
+// [Encoder.WriteToken] and [Encoder.WriteValue] calls may be interleaved.
+// For example, the following JSON value:
+//
+// {"name":"value","array":[null,false,true,3.14159],"object":{"k":"v"}}
+//
+// can be composed with the following calls (ignoring errors for brevity):
+//
+// e.WriteToken(BeginObject) // {
+// e.WriteToken(String("name")) // "name"
+// e.WriteToken(String("value")) // "value"
+// e.WriteValue(Value(`"array"`)) // "array"
+// e.WriteToken(BeginArray) // [
+// e.WriteToken(Null) // null
+// e.WriteToken(False) // false
+// e.WriteValue(Value("true")) // true
+// e.WriteToken(Float(3.14159)) // 3.14159
+// e.WriteToken(EndArray) // ]
+// e.WriteValue(Value(`"object"`)) // "object"
+// e.WriteValue(Value(`{"k":"v"}`)) // {"k":"v"}
+// e.WriteToken(EndObject) // }
+//
+// The above is one of many possible sequences of calls and
+// may not represent the most sensible method to call for any given token/value.
+// For example, it is probably more common to call [Encoder.WriteToken] with a string
+// for object names.
+type Encoder struct {
+ s encoderState
+}
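The Encoder contract above (a stream of top-level values, each newline-terminated) can be sketched with the stdlib v1 `json.Encoder`, which follows the same convention; this is an analogue for illustration, not the vendored token-level API:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

// encodeStream writes two top-level JSON values, each terminated with a
// newline, mirroring the streaming contract described above.
func encodeStream() string {
	var buf bytes.Buffer
	enc := json.NewEncoder(&buf)
	enc.Encode(map[string]string{"name": "value"}) // errors ignored for brevity
	enc.Encode([]int{1, 2, 3})
	return buf.String()
}

func main() {
	fmt.Print(encodeStream())
}
```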
+
+// encoderState is the low-level state of Encoder.
+// It has exported fields and method for use by the "json" package.
+type encoderState struct {
+ state
+ encodeBuffer
+ jsonopts.Struct
+
+ SeenPointers map[any]struct{} // only used when marshaling; identical to json.seenPointers
+}
+
+// encodeBuffer is a buffer split into 2 segments:
+//
+// - buf[0:len(buf)] // written (but unflushed) portion of the buffer
+// - buf[len(buf):cap(buf)] // unused portion of the buffer
+type encodeBuffer struct {
+ Buf []byte // may alias wr if it is a bytes.Buffer
+
+ // baseOffset is added to len(buf) to obtain the absolute offset
+ // relative to the start of io.Writer stream.
+ baseOffset int64
+
+ wr io.Writer
+
+ // maxValue is the approximate maximum Value size passed to WriteValue.
+ maxValue int
+ // availBuffer is the buffer returned by the AvailableBuffer method.
+ availBuffer []byte // always has zero length
+ // bufStats is statistics about buffer utilization.
+ // It is only used with pooled encoders in pools.go.
+ bufStats bufferStatistics
+}
+
+// NewEncoder constructs a new streaming encoder writing to w
+// configured with the provided options.
+// It flushes the internal buffer when the buffer is sufficiently full or
+// when a top-level value has been written.
+//
+// If w is a [bytes.Buffer], then the encoder appends directly into the buffer
+// without copying the contents from an intermediate buffer.
+func NewEncoder(w io.Writer, opts ...Options) *Encoder {
+ e := new(Encoder)
+ e.Reset(w, opts...)
+ return e
+}
+
+// Reset resets an encoder such that it is writing afresh to w and
+// configured with the provided options. Reset must not be called on
+// a Encoder passed to the [encoding/json/v2.MarshalerTo.MarshalJSONTo] method
+// or the [encoding/json/v2.MarshalToFunc] function.
+func (e *Encoder) Reset(w io.Writer, opts ...Options) {
+ switch {
+ case e == nil:
+ panic("jsontext: invalid nil Encoder")
+ case w == nil:
+ panic("jsontext: invalid nil io.Writer")
+ case e.s.Flags.Get(jsonflags.WithinArshalCall):
+ panic("jsontext: cannot reset Encoder passed to json.MarshalerTo")
+ }
+ e.s.reset(nil, w, opts...)
+}
+
+func (e *encoderState) reset(b []byte, w io.Writer, opts ...Options) {
+ e.state.reset()
+ e.encodeBuffer = encodeBuffer{Buf: b, wr: w, bufStats: e.bufStats}
+ if bb, ok := w.(*bytes.Buffer); ok && bb != nil {
+ e.Buf = bb.AvailableBuffer() // alias the unused buffer of bb
+ }
+ opts2 := jsonopts.Struct{} // avoid mutating e.Struct in case it is part of opts
+ opts2.Join(opts...)
+ e.Struct = opts2
+ if e.Flags.Get(jsonflags.Multiline) {
+ if !e.Flags.Has(jsonflags.SpaceAfterColon) {
+ e.Flags.Set(jsonflags.SpaceAfterColon | 1)
+ }
+ if !e.Flags.Has(jsonflags.SpaceAfterComma) {
+ e.Flags.Set(jsonflags.SpaceAfterComma | 0)
+ }
+ if !e.Flags.Has(jsonflags.Indent) {
+ e.Flags.Set(jsonflags.Indent | 1)
+ e.Indent = "\t"
+ }
+ }
+}
+
+// Options returns the options used to construct the encoder and
+// may additionally contain semantic options passed to a
+// [encoding/json/v2.MarshalEncode] call.
+//
+// If operating within
+// a [encoding/json/v2.MarshalerTo.MarshalJSONTo] method call or
+// a [encoding/json/v2.MarshalToFunc] function call,
+// then the returned options are only valid within the call.
+func (e *Encoder) Options() Options {
+ return &e.s.Struct
+}
+
+// NeedFlush determines whether to flush at this point.
+func (e *encoderState) NeedFlush() bool {
+ // NOTE: This function is carefully written to be inlinable.
+
+ // Avoid flushing if e.wr is nil since there is no underlying writer.
+ // Flush if less than 25% of the capacity remains.
+ // Flushing at some constant fraction ensures that the buffer stops growing
+ // so long as the largest Token or Value fits within that unused capacity.
+ return e.wr != nil && (e.Tokens.Depth() == 1 || len(e.Buf) > 3*cap(e.Buf)/4)
+}
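The flush threshold above can be sketched as a standalone function. Note that `needFlush` here is a hypothetical helper for illustration, not the actual `encoderState` method, and it omits the nil-writer check:

```go
package main

import "fmt"

// needFlush mirrors the heuristic described above: flush once a complete
// top-level value has been written (depth == 1), or when less than 25% of
// the buffer's capacity remains unused.
func needFlush(buf []byte, depth int) bool {
	return depth == 1 || len(buf) > 3*cap(buf)/4
}

func main() {
	buf := make([]byte, 0, 100)
	fmt.Println(needFlush(buf[:80], 2)) // true: over 75% full
	fmt.Println(needFlush(buf[:50], 2)) // false: plenty of room left
	fmt.Println(needFlush(buf[:10], 1)) // true: top-level value finished
}
```

Flushing at a constant fraction of capacity (rather than when full) is what lets the buffer stop growing once the largest single token fits in the reserved headroom.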
+
+// Flush flushes the buffer to the underlying io.Writer.
+// It may append a trailing newline after the top-level value.
+func (e *encoderState) Flush() error {
+ if e.wr == nil || e.avoidFlush() {
+ return nil
+ }
+
+ // In streaming mode, always emit a newline after the top-level value.
+ if e.Tokens.Depth() == 1 && !e.Flags.Get(jsonflags.OmitTopLevelNewline) {
+ e.Buf = append(e.Buf, '\n')
+ }
+
+ // Inform objectNameStack that we are about to flush the buffer content.
+ e.Names.copyQuotedBuffer(e.Buf)
+
+ // Specialize bytes.Buffer for better performance.
+ if bb, ok := e.wr.(*bytes.Buffer); ok {
+ // If e.buf already aliases the internal buffer of bb,
+ // then the Write call simply increments the internal offset,
+ // otherwise Write operates as expected.
+ // See https://go.dev/issue/42986.
+ n, _ := bb.Write(e.Buf) // never fails unless bb is nil
+ e.baseOffset += int64(n)
+
+ // If the internal buffer of bytes.Buffer is too small,
+ // append operations elsewhere in the Encoder may grow the buffer.
+ // This would be semantically correct, but hurts performance.
+ // As such, ensure 25% of the current length is always available
+ // to reduce the probability that other appends must allocate.
+ if avail := bb.Available(); avail < bb.Len()/4 {
+ bb.Grow(avail + 1)
+ }
+
+ e.Buf = bb.AvailableBuffer()
+ return nil
+ }
+
+ // Flush the internal buffer to the underlying io.Writer.
+ n, err := e.wr.Write(e.Buf)
+ e.baseOffset += int64(n)
+ if err != nil {
+ // In the event of an error, preserve the unflushed portion.
+ // Thus, write errors aren't fatal so long as the io.Writer
+ // maintains consistent state after errors.
+ if n > 0 {
+ e.Buf = e.Buf[:copy(e.Buf, e.Buf[n:])]
+ }
+ return &ioError{action: "write", err: err}
+ }
+ e.Buf = e.Buf[:0]
+
+ // Check whether to grow the buffer.
+ // Note that cap(e.buf) may already exceed maxBufferSize since
+ // an append elsewhere already grew it to store a large token.
+ const maxBufferSize = 4 << 10
+ const growthSizeFactor = 2 // higher value is faster
+ const growthRateFactor = 2 // higher value is slower
+ // By default, grow if below the maximum buffer size.
+ grow := cap(e.Buf) <= maxBufferSize/growthSizeFactor
+ // Growing can be expensive, so only grow
+ // if a sufficient number of bytes have been processed.
+ grow = grow && int64(cap(e.Buf)) < e.previousOffsetEnd()/growthRateFactor
+ if grow {
+ e.Buf = make([]byte, 0, cap(e.Buf)*growthSizeFactor)
+ }
+
+ return nil
+}
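The post-flush growth policy at the end of Flush can be isolated into a small predicate. This is a sketch with the constants copied from the code above; `shouldGrow` is a hypothetical helper, not part of the package:

```go
package main

import "fmt"

// shouldGrow reproduces the growth decision described above: double the
// buffer only while its capacity is at most half the 4KiB ceiling, and
// only once enough total bytes have been processed to justify the cost.
func shouldGrow(capacity int, processed int64) bool {
	const maxBufferSize = 4 << 10
	const growthSizeFactor = 2 // higher value is faster
	const growthRateFactor = 2 // higher value is slower
	return capacity <= maxBufferSize/growthSizeFactor &&
		int64(capacity) < processed/growthRateFactor
}

func main() {
	fmt.Println(shouldGrow(1024, 4096))  // true: small buffer, lots of traffic
	fmt.Println(shouldGrow(1024, 1024))  // false: not enough bytes processed yet
	fmt.Println(shouldGrow(4096, 1<<20)) // false: already at the size ceiling
}
```

Tying growth to bytes processed means a short-lived encoder never pays for a large allocation, while a long-running stream converges on the 4KiB steady state.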
+func (e *encodeBuffer) offsetAt(pos int) int64 { return e.baseOffset + int64(pos) }
+func (e *encodeBuffer) previousOffsetEnd() int64 { return e.baseOffset + int64(len(e.Buf)) }
+func (e *encodeBuffer) unflushedBuffer() []byte { return e.Buf }
+
+// avoidFlush indicates whether to avoid flushing to ensure there is always
+// enough in the buffer to unwrite the last object member if it were empty.
+func (e *encoderState) avoidFlush() bool {
+ switch {
+ case e.Tokens.Last.Length() == 0:
+ // Never flush after BeginObject or BeginArray since we don't know yet
+ // if the object or array will end up being empty.
+ return true
+ case e.Tokens.Last.needObjectValue():
+ // Never flush before the object value since we don't know yet
+ // if the object value will end up being empty.
+ return true
+ case e.Tokens.Last.NeedObjectName() && len(e.Buf) >= 2:
+ // Never flush after the object value if it does turn out to be empty.
+ switch string(e.Buf[len(e.Buf)-2:]) {
+ case `ll`, `""`, `{}`, `[]`: // last two bytes of every empty value
+ return true
+ }
+ }
+ return false
+}
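The two-byte suffix trick used by avoidFlush works because every empty JSON value ends in one of four distinct byte pairs. A minimal standalone sketch (the helper name is hypothetical):

```go
package main

import "fmt"

// endsWithEmptyValue reports whether buf ends in the last two bytes of an
// empty JSON value: "null", `""`, "{}", or "[]".
func endsWithEmptyValue(buf []byte) bool {
	if len(buf) < 2 {
		return false
	}
	switch string(buf[len(buf)-2:]) {
	case "ll", `""`, "{}", "[]":
		return true
	}
	return false
}

func main() {
	fmt.Println(endsWithEmptyValue([]byte(`{"a":null`))) // true
	fmt.Println(endsWithEmptyValue([]byte(`{"a":1`)))    // false
}
```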
+
+// UnwriteEmptyObjectMember unwrites the last object member if it is empty
+// and reports whether it performed an unwrite operation.
+func (e *encoderState) UnwriteEmptyObjectMember(prevName *string) bool {
+ if last := e.Tokens.Last; !last.isObject() || !last.NeedObjectName() || last.Length() == 0 {
+ panic("BUG: must be called on an object after writing a value")
+ }
+
+ // The flushing logic is modified to never flush a trailing empty value.
+ // The encoder never writes trailing whitespace eagerly.
+ b := e.unflushedBuffer()
+
+ // Detect whether the last value was empty.
+ var n int
+ if len(b) >= 3 {
+ switch string(b[len(b)-2:]) {
+ case "ll": // last two bytes of `null`
+ n = len(`null`)
+ case `""`:
+ // It is possible for a non-empty string to have `""` as a suffix
+ // if the second to the last quote was escaped.
+ if b[len(b)-3] == '\\' {
+ return false // e.g., `"\""` is not empty
+ }
+ n = len(`""`)
+ case `{}`:
+ n = len(`{}`)
+ case `[]`:
+ n = len(`[]`)
+ }
+ }
+ if n == 0 {
+ return false
+ }
+
+ // Unwrite the value, whitespace, colon, name, whitespace, and comma.
+ b = b[:len(b)-n]
+ b = jsonwire.TrimSuffixWhitespace(b)
+ b = jsonwire.TrimSuffixByte(b, ':')
+ b = jsonwire.TrimSuffixString(b)
+ b = jsonwire.TrimSuffixWhitespace(b)
+ b = jsonwire.TrimSuffixByte(b, ',')
+ e.Buf = b // store back truncated unflushed buffer
+
+ // Undo state changes.
+ e.Tokens.Last.decrement() // for object member value
+ e.Tokens.Last.decrement() // for object member name
+ if !e.Flags.Get(jsonflags.AllowDuplicateNames) {
+ if e.Tokens.Last.isActiveNamespace() {
+ e.Namespaces.Last().removeLast()
+ }
+ }
+ e.Names.clearLast()
+ if prevName != nil {
+ e.Names.copyQuotedBuffer(e.Buf) // required by objectNameStack.replaceLastUnquotedName
+ e.Names.replaceLastUnquotedName(*prevName)
+ }
+ return true
+}
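The suffix-trimming sequence above can be illustrated with a simplified standalone version. This sketch assumes the member name contains no escape sequences and that the buffer ends exactly with a `null` value (the real method also handles `""`, `{}`, and `[]` and updates the encoder state machine):

```go
package main

import (
	"bytes"
	"fmt"
)

// unwriteEmptyMember trims a trailing `,"name":null` member from an
// unflushed object buffer, mirroring the value/colon/name/comma trimming
// order used above. Hypothetical helper; escapes in names are not handled.
func unwriteEmptyMember(b []byte) []byte {
	b = b[:len(b)-len("null")]           // drop the empty value
	b = bytes.TrimRight(b, " \t\n\r")    // drop whitespace before it
	b = bytes.TrimSuffix(b, []byte(":")) // drop the colon
	if i := bytes.LastIndexByte(b[:len(b)-1], '"'); i >= 0 {
		b = b[:i] // drop the quoted name (no escaped quotes assumed)
	}
	b = bytes.TrimRight(b, " \t\n\r")
	b = bytes.TrimSuffix(b, []byte(","))
	return b
}

func main() {
	fmt.Printf("%s\n", unwriteEmptyMember([]byte(`{"a":1,"b":null`))) // {"a":1
}
```

This only works because Flush deliberately never flushes a trailing empty value, so the bytes to unwrite are guaranteed to still be in the buffer.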
+
+// UnwriteOnlyObjectMemberName unwrites the only object member name
+// and returns the unquoted name.
+func (e *encoderState) UnwriteOnlyObjectMemberName() string {
+ if last := e.Tokens.Last; !last.isObject() || last.Length() != 1 {
+ panic("BUG: must be called on an object after writing first name")
+ }
+
+ // Unwrite the name and whitespace.
+ b := jsonwire.TrimSuffixString(e.Buf)
+ isVerbatim := bytes.IndexByte(e.Buf[len(b):], '\\') < 0
+ name := string(jsonwire.UnquoteMayCopy(e.Buf[len(b):], isVerbatim))
+ e.Buf = jsonwire.TrimSuffixWhitespace(b)
+
+ // Undo state changes.
+ e.Tokens.Last.decrement()
+ if !e.Flags.Get(jsonflags.AllowDuplicateNames) {
+ if e.Tokens.Last.isActiveNamespace() {
+ e.Namespaces.Last().removeLast()
+ }
+ }
+ e.Names.clearLast()
+ return name
+}
+
+// WriteToken writes the next token and advances the internal write offset.
+//
+// The provided token kind must be consistent with the JSON grammar.
+// For example, it is an error to provide a number when the encoder
+// is expecting an object name (which is always a string), or
+// to provide an end object delimiter when the encoder is finishing an array.
+// If the provided token is invalid, then it reports a [SyntacticError] and
+// the internal state remains unchanged. The offset reported
+// in [SyntacticError] will be relative to the [Encoder.OutputOffset].
+func (e *Encoder) WriteToken(t Token) error {
+ return e.s.WriteToken(t)
+}
+func (e *encoderState) WriteToken(t Token) error {
+ k := t.Kind()
+ b := e.Buf // use local variable to avoid mutating e in case of error
+
+ // Append any delimiters or optional whitespace.
+ b = e.Tokens.MayAppendDelim(b, k)
+ if e.Flags.Get(jsonflags.AnyWhitespace) {
+ b = e.appendWhitespace(b, k)
+ }
+ pos := len(b) // offset before the token
+
+ // Append the token to the output and to the state machine.
+ var err error
+ switch k {
+ case 'n':
+ b = append(b, "null"...)
+ err = e.Tokens.appendLiteral()
+ case 'f':
+ b = append(b, "false"...)
+ err = e.Tokens.appendLiteral()
+ case 't':
+ b = append(b, "true"...)
+ err = e.Tokens.appendLiteral()
+ case '"':
+ if b, err = t.appendString(b, &e.Flags); err != nil {
+ break
+ }
+ if e.Tokens.Last.NeedObjectName() {
+ if !e.Flags.Get(jsonflags.AllowDuplicateNames) {
+ if !e.Tokens.Last.isValidNamespace() {
+ err = errInvalidNamespace
+ break
+ }
+ if e.Tokens.Last.isActiveNamespace() && !e.Namespaces.Last().insertQuoted(b[pos:], false) {
+ err = wrapWithObjectName(ErrDuplicateName, b[pos:])
+ break
+ }
+ }
+ e.Names.ReplaceLastQuotedOffset(pos) // only replace if insertQuoted succeeds
+ }
+ err = e.Tokens.appendString()
+ case '0':
+ if b, err = t.appendNumber(b, &e.Flags); err != nil {
+ break
+ }
+ err = e.Tokens.appendNumber()
+ case '{':
+ b = append(b, '{')
+ if err = e.Tokens.pushObject(); err != nil {
+ break
+ }
+ e.Names.push()
+ if !e.Flags.Get(jsonflags.AllowDuplicateNames) {
+ e.Namespaces.push()
+ }
+ case '}':
+ b = append(b, '}')
+ if err = e.Tokens.popObject(); err != nil {
+ break
+ }
+ e.Names.pop()
+ if !e.Flags.Get(jsonflags.AllowDuplicateNames) {
+ e.Namespaces.pop()
+ }
+ case '[':
+ b = append(b, '[')
+ err = e.Tokens.pushArray()
+ case ']':
+ b = append(b, ']')
+ err = e.Tokens.popArray()
+ default:
+ err = errInvalidToken
+ }
+ if err != nil {
+ return wrapSyntacticError(e, err, pos, +1)
+ }
+
+ // Finish off the buffer and store it back into e.
+ e.Buf = b
+ if e.NeedFlush() {
+ return e.Flush()
+ }
+ return nil
+}
+
+// AppendRaw appends either a raw string (without double quotes) or number.
+// Specify safeASCII if the string output is guaranteed to be ASCII
+// without any characters (including '<', '>', and '&') that need escaping;
+// otherwise, this will validate whether the string needs escaping.
+// The appended bytes for a JSON number must be valid.
+//
+// This is a specialized implementation of Encoder.WriteValue
+// that allows appending directly into the buffer.
+// It is only called from marshal logic in the "json" package.
+func (e *encoderState) AppendRaw(k Kind, safeASCII bool, appendFn func([]byte) ([]byte, error)) error {
+ b := e.Buf // use local variable to avoid mutating e in case of error
+
+ // Append any delimiters or optional whitespace.
+ b = e.Tokens.MayAppendDelim(b, k)
+ if e.Flags.Get(jsonflags.AnyWhitespace) {
+ b = e.appendWhitespace(b, k)
+ }
+ pos := len(b) // offset before the token
+
+ var err error
+ switch k {
+ case '"':
+ // Append directly into the encoder buffer by assuming that
+ // most of the time none of the characters need escaping.
+ b = append(b, '"')
+ if b, err = appendFn(b); err != nil {
+ return err
+ }
+ b = append(b, '"')
+
+ // Check whether we need to escape the string and if necessary
+ // copy it to a scratch buffer and then escape it back.
+ isVerbatim := safeASCII || !jsonwire.NeedEscape(b[pos+len(`"`):len(b)-len(`"`)])
+ if !isVerbatim {
+ var err error
+ b2 := append(e.availBuffer, b[pos+len(`"`):len(b)-len(`"`)]...)
+ b, err = jsonwire.AppendQuote(b[:pos], string(b2), &e.Flags)
+ e.availBuffer = b2[:0]
+ if err != nil {
+ return wrapSyntacticError(e, err, pos, +1)
+ }
+ }
+
+ // Update the state machine.
+ if e.Tokens.Last.NeedObjectName() {
+ if !e.Flags.Get(jsonflags.AllowDuplicateNames) {
+ if !e.Tokens.Last.isValidNamespace() {
+ return wrapSyntacticError(e, errInvalidNamespace, pos, +1)
+ }
+ if e.Tokens.Last.isActiveNamespace() && !e.Namespaces.Last().insertQuoted(b[pos:], isVerbatim) {
+ err = wrapWithObjectName(ErrDuplicateName, b[pos:])
+ return wrapSyntacticError(e, err, pos, +1)
+ }
+ }
+ e.Names.ReplaceLastQuotedOffset(pos) // only replace if insertQuoted succeeds
+ }
+ if err := e.Tokens.appendString(); err != nil {
+ return wrapSyntacticError(e, err, pos, +1)
+ }
+ case '0':
+ if b, err = appendFn(b); err != nil {
+ return err
+ }
+ if err := e.Tokens.appendNumber(); err != nil {
+ return wrapSyntacticError(e, err, pos, +1)
+ }
+ default:
+ panic("BUG: invalid kind")
+ }
+
+ // Finish off the buffer and store it back into e.
+ e.Buf = b
+ if e.NeedFlush() {
+ return e.Flush()
+ }
+ return nil
+}
+
+// WriteValue writes the next raw value and advances the internal write offset.
+// The Encoder does not simply copy the provided value verbatim, but
+// parses it to ensure that it is syntactically valid and reformats it
+// according to how the Encoder is configured to format whitespace and strings.
+// If [AllowInvalidUTF8] is specified, then any invalid UTF-8 is mangled
+// as the Unicode replacement character, U+FFFD.
+//
+// The provided value kind must be consistent with the JSON grammar
+// (see examples on [Encoder.WriteToken]). If the provided value is invalid,
+// then it reports a [SyntacticError] and the internal state remains unchanged.
+// The offset reported in [SyntacticError] will be relative to the
+// [Encoder.OutputOffset] plus the offset into v of any encountered syntax error.
+func (e *Encoder) WriteValue(v Value) error {
+ return e.s.WriteValue(v)
+}
+func (e *encoderState) WriteValue(v Value) error {
+ e.maxValue |= len(v) // bitwise OR is a fast approximation of max
+
+ k := v.Kind()
+ b := e.Buf // use local variable to avoid mutating e in case of error
+
+ // Append any delimiters or optional whitespace.
+ b = e.Tokens.MayAppendDelim(b, k)
+ if e.Flags.Get(jsonflags.AnyWhitespace) {
+ b = e.appendWhitespace(b, k)
+ }
+ pos := len(b) // offset before the value
+
+ // Append the value to the output.
+ var n int
+ n += jsonwire.ConsumeWhitespace(v[n:])
+ b, m, err := e.reformatValue(b, v[n:], e.Tokens.Depth())
+ if err != nil {
+ return wrapSyntacticError(e, err, pos+n+m, +1)
+ }
+ n += m
+ n += jsonwire.ConsumeWhitespace(v[n:])
+ if len(v) > n {
+ err = jsonwire.NewInvalidCharacterError(v[n:], "after top-level value")
+ return wrapSyntacticError(e, err, pos+n, 0)
+ }
+
+ // Append the kind to the state machine.
+ switch k {
+ case 'n', 'f', 't':
+ err = e.Tokens.appendLiteral()
+ case '"':
+ if e.Tokens.Last.NeedObjectName() {
+ if !e.Flags.Get(jsonflags.AllowDuplicateNames) {
+ if !e.Tokens.Last.isValidNamespace() {
+ err = errInvalidNamespace
+ break
+ }
+ if e.Tokens.Last.isActiveNamespace() && !e.Namespaces.Last().insertQuoted(b[pos:], false) {
+ err = wrapWithObjectName(ErrDuplicateName, b[pos:])
+ break
+ }
+ }
+ e.Names.ReplaceLastQuotedOffset(pos) // only replace if insertQuoted succeeds
+ }
+ err = e.Tokens.appendString()
+ case '0':
+ err = e.Tokens.appendNumber()
+ case '{':
+ if err = e.Tokens.pushObject(); err != nil {
+ break
+ }
+ if err = e.Tokens.popObject(); err != nil {
+ panic("BUG: popObject should never fail immediately after pushObject: " + err.Error())
+ }
+ if e.Flags.Get(jsonflags.ReorderRawObjects) {
+ mustReorderObjects(b[pos:])
+ }
+ case '[':
+ if err = e.Tokens.pushArray(); err != nil {
+ break
+ }
+ if err = e.Tokens.popArray(); err != nil {
+ panic("BUG: popArray should never fail immediately after pushArray: " + err.Error())
+ }
+ if e.Flags.Get(jsonflags.ReorderRawObjects) {
+ mustReorderObjects(b[pos:])
+ }
+ }
+ if err != nil {
+ return wrapSyntacticError(e, err, pos, +1)
+ }
+
+ // Finish off the buffer and store it back into e.
+ e.Buf = b
+ if e.NeedFlush() {
+ return e.Flush()
+ }
+ return nil
+}
+
+// CountNextDelimWhitespace counts the number of delimiter and
+// whitespace bytes, assuming the upcoming token is a JSON value.
+// This method is used for error reporting at the semantic layer.
+func (e *encoderState) CountNextDelimWhitespace() (n int) {
+ const next = Kind('"') // arbitrary kind as next JSON value
+ delim := e.Tokens.needDelim(next)
+ if delim > 0 {
+ n += len(",") | len(":")
+ }
+ if delim == ':' {
+ if e.Flags.Get(jsonflags.SpaceAfterColon) {
+ n += len(" ")
+ }
+ } else {
+ if delim == ',' && e.Flags.Get(jsonflags.SpaceAfterComma) {
+ n += len(" ")
+ }
+ if e.Flags.Get(jsonflags.Multiline) {
+ if m := e.Tokens.NeedIndent(next); m > 0 {
+ n += len("\n") + len(e.IndentPrefix) + (m-1)*len(e.Indent)
+ }
+ }
+ }
+ return n
+}
+
+// appendWhitespace appends whitespace that immediately precedes the next token.
+func (e *encoderState) appendWhitespace(b []byte, next Kind) []byte {
+ if delim := e.Tokens.needDelim(next); delim == ':' {
+ if e.Flags.Get(jsonflags.SpaceAfterColon) {
+ b = append(b, ' ')
+ }
+ } else {
+ if delim == ',' && e.Flags.Get(jsonflags.SpaceAfterComma) {
+ b = append(b, ' ')
+ }
+ if e.Flags.Get(jsonflags.Multiline) {
+ b = e.AppendIndent(b, e.Tokens.NeedIndent(next))
+ }
+ }
+ return b
+}
+
+// AppendIndent appends the appropriate number of indentation characters
+// for the current nested level, n.
+func (e *encoderState) AppendIndent(b []byte, n int) []byte {
+ if n == 0 {
+ return b
+ }
+ b = append(b, '\n')
+ b = append(b, e.IndentPrefix...)
+ for ; n > 1; n-- {
+ b = append(b, e.Indent...)
+ }
+ return b
+}
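The indentation layout produced by AppendIndent is easy to verify with a freestanding copy of the same logic (a sketch; the real method reads the prefix and indent strings from the encoder state):

```go
package main

import "fmt"

// appendIndent mirrors AppendIndent above: a newline, the fixed prefix,
// then one copy of the indent string per nesting level beyond the first.
func appendIndent(b []byte, prefix, indent string, n int) []byte {
	if n == 0 {
		return b
	}
	b = append(b, '\n')
	b = append(b, prefix...)
	for ; n > 1; n-- {
		b = append(b, indent...)
	}
	return b
}

func main() {
	fmt.Printf("%q\n", appendIndent(nil, "", "\t", 3)) // "\n\t\t"
}
```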
+
+// reformatValue parses a JSON value from the start of src and
+// appends it to the end of dst, reformatting whitespace and strings as needed.
+// It returns the extended dst buffer and the number of consumed input bytes.
+func (e *encoderState) reformatValue(dst []byte, src Value, depth int) ([]byte, int, error) {
+ // TODO: Should this update ValueFlags as input?
+ if len(src) == 0 {
+ return dst, 0, io.ErrUnexpectedEOF
+ }
+ switch k := Kind(src[0]).normalize(); k {
+ case 'n':
+ if jsonwire.ConsumeNull(src) == 0 {
+ n, err := jsonwire.ConsumeLiteral(src, "null")
+ return dst, n, err
+ }
+ return append(dst, "null"...), len("null"), nil
+ case 'f':
+ if jsonwire.ConsumeFalse(src) == 0 {
+ n, err := jsonwire.ConsumeLiteral(src, "false")
+ return dst, n, err
+ }
+ return append(dst, "false"...), len("false"), nil
+ case 't':
+ if jsonwire.ConsumeTrue(src) == 0 {
+ n, err := jsonwire.ConsumeLiteral(src, "true")
+ return dst, n, err
+ }
+ return append(dst, "true"...), len("true"), nil
+ case '"':
+ if n := jsonwire.ConsumeSimpleString(src); n > 0 {
+ dst = append(dst, src[:n]...) // copy simple strings verbatim
+ return dst, n, nil
+ }
+ return jsonwire.ReformatString(dst, src, &e.Flags)
+ case '0':
+ if n := jsonwire.ConsumeSimpleNumber(src); n > 0 && !e.Flags.Get(jsonflags.CanonicalizeNumbers) {
+ dst = append(dst, src[:n]...) // copy simple numbers verbatim
+ return dst, n, nil
+ }
+ return jsonwire.ReformatNumber(dst, src, &e.Flags)
+ case '{':
+ return e.reformatObject(dst, src, depth)
+ case '[':
+ return e.reformatArray(dst, src, depth)
+ default:
+ return dst, 0, jsonwire.NewInvalidCharacterError(src, "at start of value")
+ }
+}
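The dispatch in reformatValue relies on the fact that the kind of a JSON value is fully determined by its first byte. A minimal sketch of that normalization (the helper name is hypothetical; the real code goes through `Kind.normalize`):

```go
package main

import "fmt"

// kindOf maps the first byte of a JSON value to its kind byte, with '-'
// and every digit normalized to the number kind '0'. A zero return means
// the byte cannot start a JSON value.
func kindOf(c byte) byte {
	switch {
	case c == 'n', c == 'f', c == 't', c == '"', c == '{', c == '[':
		return c
	case c == '-' || ('0' <= c && c <= '9'):
		return '0'
	default:
		return 0 // invalid start of a JSON value
	}
}

func main() {
	fmt.Printf("%c %c %c\n", kindOf('-'), kindOf('7'), kindOf('{')) // 0 0 {
}
```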
+
+// reformatObject parses a JSON object from the start of src and
+// appends it to the end of dst, reformatting whitespace and strings as needed.
+// It returns the extended dst buffer and the number of consumed input bytes.
+func (e *encoderState) reformatObject(dst []byte, src Value, depth int) ([]byte, int, error) {
+ // Append object begin.
+ if len(src) == 0 || src[0] != '{' {
+ panic("BUG: reformatObject must be called with a buffer that starts with '{'")
+ } else if depth == maxNestingDepth+1 {
+ return dst, 0, errMaxDepth
+ }
+ dst = append(dst, '{')
+ n := len("{")
+
+ // Append (possible) object end.
+ n += jsonwire.ConsumeWhitespace(src[n:])
+ if uint(len(src)) <= uint(n) {
+ return dst, n, io.ErrUnexpectedEOF
+ }
+ if src[n] == '}' {
+ dst = append(dst, '}')
+ n += len("}")
+ return dst, n, nil
+ }
+
+ var err error
+ var names *objectNamespace
+ if !e.Flags.Get(jsonflags.AllowDuplicateNames) {
+ e.Namespaces.push()
+ defer e.Namespaces.pop()
+ names = e.Namespaces.Last()
+ }
+ depth++
+ for {
+ // Append optional newline and indentation.
+ if e.Flags.Get(jsonflags.Multiline) {
+ dst = e.AppendIndent(dst, depth)
+ }
+
+ // Append object name.
+ n += jsonwire.ConsumeWhitespace(src[n:])
+ if uint(len(src)) <= uint(n) {
+ return dst, n, io.ErrUnexpectedEOF
+ }
+ m := jsonwire.ConsumeSimpleString(src[n:])
+ isVerbatim := m > 0
+ if isVerbatim {
+ dst = append(dst, src[n:n+m]...)
+ } else {
+ dst, m, err = jsonwire.ReformatString(dst, src[n:], &e.Flags)
+ if err != nil {
+ return dst, n + m, err
+ }
+ }
+ quotedName := src[n : n+m]
+ if !e.Flags.Get(jsonflags.AllowDuplicateNames) && !names.insertQuoted(quotedName, isVerbatim) {
+ return dst, n, wrapWithObjectName(ErrDuplicateName, quotedName)
+ }
+ n += m
+
+ // Append colon.
+ n += jsonwire.ConsumeWhitespace(src[n:])
+ if uint(len(src)) <= uint(n) {
+ return dst, n, wrapWithObjectName(io.ErrUnexpectedEOF, quotedName)
+ }
+ if src[n] != ':' {
+ err = jsonwire.NewInvalidCharacterError(src[n:], "after object name (expecting ':')")
+ return dst, n, wrapWithObjectName(err, quotedName)
+ }
+ dst = append(dst, ':')
+ n += len(":")
+ if e.Flags.Get(jsonflags.SpaceAfterColon) {
+ dst = append(dst, ' ')
+ }
+
+ // Append object value.
+ n += jsonwire.ConsumeWhitespace(src[n:])
+ if uint(len(src)) <= uint(n) {
+ return dst, n, wrapWithObjectName(io.ErrUnexpectedEOF, quotedName)
+ }
+ dst, m, err = e.reformatValue(dst, src[n:], depth)
+ if err != nil {
+ return dst, n + m, wrapWithObjectName(err, quotedName)
+ }
+ n += m
+
+ // Append comma or object end.
+ n += jsonwire.ConsumeWhitespace(src[n:])
+ if uint(len(src)) <= uint(n) {
+ return dst, n, io.ErrUnexpectedEOF
+ }
+ switch src[n] {
+ case ',':
+ dst = append(dst, ',')
+ if e.Flags.Get(jsonflags.SpaceAfterComma) {
+ dst = append(dst, ' ')
+ }
+ n += len(",")
+ continue
+ case '}':
+ if e.Flags.Get(jsonflags.Multiline) {
+ dst = e.AppendIndent(dst, depth-1)
+ }
+ dst = append(dst, '}')
+ n += len("}")
+ return dst, n, nil
+ default:
+ return dst, n, jsonwire.NewInvalidCharacterError(src[n:], "after object value (expecting ',' or '}')")
+ }
+ }
+}
+
+// reformatArray parses a JSON array from the start of src and
+// appends it to the end of dst, reformatting whitespace and strings as needed.
+// It returns the extended dst buffer and the number of consumed input bytes.
+func (e *encoderState) reformatArray(dst []byte, src Value, depth int) ([]byte, int, error) {
+ // Append array begin.
+ if len(src) == 0 || src[0] != '[' {
+ panic("BUG: reformatArray must be called with a buffer that starts with '['")
+ } else if depth == maxNestingDepth+1 {
+ return dst, 0, errMaxDepth
+ }
+ dst = append(dst, '[')
+ n := len("[")
+
+ // Append (possible) array end.
+ n += jsonwire.ConsumeWhitespace(src[n:])
+ if uint(len(src)) <= uint(n) {
+ return dst, n, io.ErrUnexpectedEOF
+ }
+ if src[n] == ']' {
+ dst = append(dst, ']')
+ n += len("]")
+ return dst, n, nil
+ }
+
+ var idx int64
+ var err error
+ depth++
+ for {
+ // Append optional newline and indentation.
+ if e.Flags.Get(jsonflags.Multiline) {
+ dst = e.AppendIndent(dst, depth)
+ }
+
+ // Append array value.
+ n += jsonwire.ConsumeWhitespace(src[n:])
+ if uint(len(src)) <= uint(n) {
+ return dst, n, io.ErrUnexpectedEOF
+ }
+ var m int
+ dst, m, err = e.reformatValue(dst, src[n:], depth)
+ if err != nil {
+ return dst, n + m, wrapWithArrayIndex(err, idx)
+ }
+ n += m
+
+ // Append comma or array end.
+ n += jsonwire.ConsumeWhitespace(src[n:])
+ if uint(len(src)) <= uint(n) {
+ return dst, n, io.ErrUnexpectedEOF
+ }
+ switch src[n] {
+ case ',':
+ dst = append(dst, ',')
+ if e.Flags.Get(jsonflags.SpaceAfterComma) {
+ dst = append(dst, ' ')
+ }
+ n += len(",")
+ idx++
+ continue
+ case ']':
+ if e.Flags.Get(jsonflags.Multiline) {
+ dst = e.AppendIndent(dst, depth-1)
+ }
+ dst = append(dst, ']')
+ n += len("]")
+ return dst, n, nil
+ default:
+ return dst, n, jsonwire.NewInvalidCharacterError(src[n:], "after array value (expecting ',' or ']')")
+ }
+ }
+}
+
+// OutputOffset returns the current output byte offset. It gives the location
+// of the next byte immediately after the most recently written token or value.
+// The number of bytes actually written to the underlying [io.Writer] may be less
+// than this offset due to internal buffering effects.
+func (e *Encoder) OutputOffset() int64 {
+ return e.s.previousOffsetEnd()
+}
+
+// AvailableBuffer returns a zero-length buffer with a possible non-zero capacity.
+// This buffer is intended to be used to populate a [Value]
+// being passed to an immediately succeeding [Encoder.WriteValue] call.
+//
+// Example usage:
+//
+// b := e.AvailableBuffer()
+// b = append(b, '"')
+// b = appendString(b, v) // append the string formatting of v
+// b = append(b, '"')
+// ... := e.WriteValue(b)
+//
+// It is the user's responsibility to ensure that the value is valid JSON.
+func (e *Encoder) AvailableBuffer() []byte {
+ // NOTE: We don't return e.buf[len(e.buf):cap(e.buf)] since WriteValue would
+ // need to take special care to avoid mangling the data while reformatting.
+ // WriteValue can't easily identify whether the input Value aliases e.buf
+ // without using unsafe.Pointer. Thus, we just return a different buffer.
+ // Should this ever alias e.buf, we need to consider how it operates with
+ // the specialized performance optimization for bytes.Buffer.
+ n := 1 << bits.Len(uint(e.s.maxValue|63)) // fast approximation for max length
+ if cap(e.s.availBuffer) < n {
+ e.s.availBuffer = make([]byte, 0, n)
+ }
+ return e.s.availBuffer
+}
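The sizing expression above rounds the largest value length seen so far up to a power of two, with a 64-byte floor. A standalone sketch of the same arithmetic (hypothetical helper name):

```go
package main

import (
	"fmt"
	"math/bits"
)

// nextBufferSize reproduces the expression used in AvailableBuffer:
// OR-ing with 63 enforces a 64-byte minimum, and bits.Len rounds the
// result up to the next power of two.
func nextBufferSize(maxValue int) int {
	return 1 << bits.Len(uint(maxValue|63))
}

func main() {
	fmt.Println(nextBufferSize(0))   // 64
	fmt.Println(nextBufferSize(100)) // 128
	fmt.Println(nextBufferSize(200)) // 256
}
```

Combined with the bitwise-OR "fast max" in WriteValue, this keeps the scratch buffer within a factor of two of the largest value written, without tracking an exact maximum.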
+
+// StackDepth returns the depth of the state machine for written JSON data.
+// Each level on the stack represents a nested JSON object or array.
+// It is incremented whenever a [BeginObject] or [BeginArray] token is encountered
+// and decremented whenever an [EndObject] or [EndArray] token is encountered.
+// The depth is zero-indexed, where zero represents the top-level JSON value.
+func (e *Encoder) StackDepth() int {
+ // NOTE: Keep in sync with Decoder.StackDepth.
+ return e.s.Tokens.Depth() - 1
+}
+
+// StackIndex returns information about the specified stack level.
+// It must be a number between 0 and [Encoder.StackDepth], inclusive.
+// For each level, it reports the kind:
+//
+// - 0 for a level of zero,
+// - '{' for a level representing a JSON object, and
+// - '[' for a level representing a JSON array.
+//
+// It also reports the length of that JSON object or array.
+// Each name and value in a JSON object is counted separately,
+// so the effective number of members would be half the length.
+// A complete JSON object must have an even length.
+func (e *Encoder) StackIndex(i int) (Kind, int64) {
+ // NOTE: Keep in sync with Decoder.StackIndex.
+ switch s := e.s.Tokens.index(i); {
+ case i > 0 && s.isObject():
+ return '{', s.Length()
+ case i > 0 && s.isArray():
+ return '[', s.Length()
+ default:
+ return 0, s.Length()
+ }
+}
+
+// StackPointer returns a JSON Pointer (RFC 6901) to the most recently written value.
+func (e *Encoder) StackPointer() Pointer {
+ return Pointer(e.s.AppendStackPointer(nil, -1))
+}
+
+func (e *encoderState) AppendStackPointer(b []byte, where int) []byte {
+ e.Names.copyQuotedBuffer(e.Buf)
+ return e.state.appendStackPointer(b, where)
+}
diff --git a/internal/json/jsontext/errors.go b/internal/json/jsontext/errors.go
new file mode 100644
index 0000000000..2e6fee1a32
--- /dev/null
+++ b/internal/json/jsontext/errors.go
@@ -0,0 +1,182 @@
+// Copyright 2020 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+//go:build !goexperiment.jsonv2 || !go1.25
+
+package jsontext
+
+import (
+ "bytes"
+ "io"
+ "strconv"
+
+ "github.com/quay/clair/v4/internal/json/internal/jsonwire"
+)
+
+const errorPrefix = "jsontext: "
+
+type ioError struct {
+ action string // either "read" or "write"
+ err error
+}
+
+func (e *ioError) Error() string {
+ return errorPrefix + e.action + " error: " + e.err.Error()
+}
+func (e *ioError) Unwrap() error {
+ return e.err
+}
+
+// SyntacticError is a description of a syntactic error that occurred when
+// encoding or decoding JSON according to the grammar.
+//
+// The contents of this error as produced by this package may change over time.
+type SyntacticError struct {
+ requireKeyedLiterals
+ nonComparable
+
+ // ByteOffset indicates that an error occurred after this byte offset.
+ ByteOffset int64
+ // JSONPointer indicates that an error occurred within this JSON value
+ // as indicated using the JSON Pointer notation (see RFC 6901).
+ JSONPointer Pointer
+
+ // Err is the underlying error.
+ Err error
+}
+
+// wrapSyntacticError wraps an error and annotates it with a precise location
+// using the provided [encoderState] or [decoderState].
+// If err is an [ioError] or [io.EOF], then it is not wrapped.
+//
+// It takes a relative offset pos that can be resolved into
+// an absolute offset using state.offsetAt.
+//
+// It takes a where argument that specifies how the JSON pointer is derived.
+// If the underlying error is a [pointerSuffixError],
+// then the suffix is appended to the derived pointer.
+func wrapSyntacticError(state interface {
+ offsetAt(pos int) int64
+ AppendStackPointer(b []byte, where int) []byte
+}, err error, pos, where int) error {
+ if _, ok := err.(*ioError); err == io.EOF || ok {
+ return err
+ }
+ offset := state.offsetAt(pos)
+ ptr := state.AppendStackPointer(nil, where)
+ if serr, ok := err.(*pointerSuffixError); ok {
+ ptr = serr.appendPointer(ptr)
+ err = serr.error
+ }
+ if d, ok := state.(*decoderState); ok && err == errMismatchDelim {
+ where := "at start of value"
+ if len(d.Tokens.Stack) > 0 && d.Tokens.Last.Length() > 0 {
+ switch {
+ case d.Tokens.Last.isArray():
+ where = "after array element (expecting ',' or ']')"
+ ptr = []byte(Pointer(ptr).Parent()) // problem is with parent array
+ case d.Tokens.Last.isObject():
+ where = "after object value (expecting ',' or '}')"
+ ptr = []byte(Pointer(ptr).Parent()) // problem is with parent object
+ }
+ }
+ err = jsonwire.NewInvalidCharacterError(d.buf[pos:], where)
+ }
+ return &SyntacticError{ByteOffset: offset, JSONPointer: Pointer(ptr), Err: err}
+}
+
+func (e *SyntacticError) Error() string {
+ pointer := e.JSONPointer
+ offset := e.ByteOffset
+ b := []byte(errorPrefix)
+ if e.Err != nil {
+ b = append(b, e.Err.Error()...)
+ if e.Err == ErrDuplicateName {
+ b = strconv.AppendQuote(append(b, ' '), pointer.LastToken())
+ pointer = pointer.Parent()
+ offset = 0 // not useful to print offset for duplicate names
+ }
+ } else {
+ b = append(b, "syntactic error"...)
+ }
+ if pointer != "" {
+ b = strconv.AppendQuote(append(b, " within "...), jsonwire.TruncatePointer(string(pointer), 100))
+ }
+ if offset > 0 {
+ b = strconv.AppendInt(append(b, " after offset "...), offset, 10)
+ }
+ return string(b)
+}
+
+func (e *SyntacticError) Unwrap() error {
+ return e.Err
+}
+
+// pointerSuffixError represents a JSON pointer suffix to be appended
+// to [SyntacticError.JSONPointer]. It is an internal error type
+// used within this package and does not appear in the public API.
+//
+// This type is primarily used to annotate errors in Encoder.WriteValue
+// and Decoder.ReadValue with precise positions.
+// At the time WriteValue or ReadValue is called, a JSON pointer to the
+// upcoming value can be constructed using the Encoder/Decoder state.
+// However, tracking pointers within values during normal operation
+// would incur a performance penalty in the error-free case.
+//
+// To provide precise error locations without this overhead,
+// the error is wrapped with object names or array indices
+// as the call stack is popped when an error occurs.
+// Since this happens in reverse order, pointerSuffixError holds
+// the pointer in reverse and is only later reversed when appending to
+// the pointer prefix.
+//
+// For example, if the encoder is at "/alpha/bravo/charlie"
+// and an error occurs in WriteValue at "/xray/yankee/zulu", then
+// the final pointer should be "/alpha/bravo/charlie/xray/yankee/zulu".
+//
+// As pointerSuffixError is populated during the error return path,
+// it first contains "/zulu", then "/zulu/yankee",
+// and finally "/zulu/yankee/xray".
+// These tokens are reversed and concatenated to "/alpha/bravo/charlie"
+// to form the full pointer.
+type pointerSuffixError struct {
+ error
+
+ // reversePointer is a JSON pointer, but with each token in reverse order.
+ reversePointer []byte
+}
+
+// wrapWithObjectName wraps err with a JSON object name access,
+// which must be a valid quoted JSON string.
+func wrapWithObjectName(err error, quotedName []byte) error {
+ serr, _ := err.(*pointerSuffixError)
+ if serr == nil {
+ serr = &pointerSuffixError{error: err}
+ }
+ name := jsonwire.UnquoteMayCopy(quotedName, false)
+ serr.reversePointer = appendEscapePointerName(append(serr.reversePointer, '/'), name)
+ return serr
+}
+
+// wrapWithArrayIndex wraps err with a JSON array index access.
+func wrapWithArrayIndex(err error, index int64) error {
+ serr, _ := err.(*pointerSuffixError)
+ if serr == nil {
+ serr = &pointerSuffixError{error: err}
+ }
+ serr.reversePointer = strconv.AppendUint(append(serr.reversePointer, '/'), uint64(index), 10)
+ return serr
+}
+
+// appendPointer appends the path encoded in e to the end of pointer.
+func (e *pointerSuffixError) appendPointer(pointer []byte) []byte {
+ // Copy each token in reversePointer to the end of pointer in reverse order.
+ // Double reversal means that the appended suffix is now in forward order.
+ bi, bo := e.reversePointer, pointer
+ for len(bi) > 0 {
+ i := bytes.LastIndexByte(bi, '/')
+ bi, bo = bi[:i], append(bo, bi[i:]...)
+ }
+ return bo
+}
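The double-reversal described in the comment above can be sketched with a self-contained stand-in for `pointerSuffixError` (the real type also wraps the underlying error and escapes names per RFC 6901; both are omitted here):

```go
package main

import (
	"bytes"
	"fmt"
	"strconv"
)

// suffixError is a stand-in for pointerSuffixError: tokens are appended
// as the call stack unwinds (deepest token first), and appendPointer
// reverses them back into forward order.
type suffixError struct {
	reversePointer []byte
}

// wrapName records an object-name access (RFC 6901 escaping is omitted).
func (e *suffixError) wrapName(name string) {
	e.reversePointer = append(append(e.reversePointer, '/'), name...)
}

// wrapIndex records an array-index access.
func (e *suffixError) wrapIndex(index int64) {
	e.reversePointer = strconv.AppendInt(append(e.reversePointer, '/'), index, 10)
}

// appendPointer copies each "/token" chunk of reversePointer onto pointer
// from last to first, so the double reversal yields a forward-order suffix.
func (e *suffixError) appendPointer(pointer []byte) []byte {
	bi, bo := e.reversePointer, pointer
	for len(bi) > 0 {
		i := bytes.LastIndexByte(bi, '/')
		bi, bo = bi[:i], append(bo, bi[i:]...)
	}
	return bo
}

func main() {
	// An error at "/xray/yankee/zulu": the innermost frame records "zulu"
	// first, then "yankee", then "xray" as the stack is popped.
	var e suffixError
	e.wrapName("zulu")
	e.wrapName("yankee")
	e.wrapName("xray")
	fmt.Println(string(e.appendPointer([]byte("/alpha/bravo/charlie"))))
	// /alpha/bravo/charlie/xray/yankee/zulu
}
```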
diff --git a/internal/json/jsontext/export.go b/internal/json/jsontext/export.go
new file mode 100644
index 0000000000..aff068d3c9
--- /dev/null
+++ b/internal/json/jsontext/export.go
@@ -0,0 +1,77 @@
+// Copyright 2023 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+//go:build !goexperiment.jsonv2 || !go1.25
+
+package jsontext
+
+import (
+ "io"
+
+ "github.com/quay/clair/v4/internal/json/internal"
+)
+
+// Internal is for internal use only.
+// This is exempt from the Go compatibility agreement.
+var Internal exporter
+
+type exporter struct{}
+
+// Export exposes internal functionality from "jsontext" to "json".
+// This cannot be dynamically called by other packages since
+// they cannot obtain a reference to the internal.AllowInternalUse value.
+func (exporter) Export(p *internal.NotForPublicUse) export {
+ if p != &internal.AllowInternalUse {
+ panic("unauthorized call to Export")
+ }
+ return export{}
+}
+
+// The export type exposes functionality to packages with visibility to
+// the internal.AllowInternalUse variable. The "json" package uses this
+// to modify low-level state in the Encoder and Decoder types.
+// It mutates the state directly instead of calling ReadToken or WriteToken
+// since this is more performant. The public APIs need to track state to ensure
+// that users are constructing a valid JSON value, but the "json" implementation
+// guarantees that it emits valid JSON by the structure of the code itself.
+type export struct{}
+
+// Encoder returns a pointer to the underlying encoderState.
+func (export) Encoder(e *Encoder) *encoderState { return &e.s }
+
+// Decoder returns a pointer to the underlying decoderState.
+func (export) Decoder(d *Decoder) *decoderState { return &d.s }
+
+func (export) GetBufferedEncoder(o ...Options) *Encoder {
+ return getBufferedEncoder(o...)
+}
+func (export) PutBufferedEncoder(e *Encoder) {
+ putBufferedEncoder(e)
+}
+
+func (export) GetStreamingEncoder(w io.Writer, o ...Options) *Encoder {
+ return getStreamingEncoder(w, o...)
+}
+func (export) PutStreamingEncoder(e *Encoder) {
+ putStreamingEncoder(e)
+}
+
+func (export) GetBufferedDecoder(b []byte, o ...Options) *Decoder {
+ return getBufferedDecoder(b, o...)
+}
+func (export) PutBufferedDecoder(d *Decoder) {
+ putBufferedDecoder(d)
+}
+
+func (export) GetStreamingDecoder(r io.Reader, o ...Options) *Decoder {
+ return getStreamingDecoder(r, o...)
+}
+func (export) PutStreamingDecoder(d *Decoder) {
+ putStreamingDecoder(d)
+}
+
+func (export) IsIOError(err error) bool {
+ _, ok := err.(*ioError)
+ return ok
+}
diff --git a/internal/json/jsontext/options.go b/internal/json/jsontext/options.go
new file mode 100644
index 0000000000..9c541c0b8c
--- /dev/null
+++ b/internal/json/jsontext/options.go
@@ -0,0 +1,304 @@
+// Copyright 2023 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+//go:build !goexperiment.jsonv2 || !go1.25
+
+package jsontext
+
+import (
+ "strings"
+
+ "github.com/quay/clair/v4/internal/json/internal/jsonflags"
+ "github.com/quay/clair/v4/internal/json/internal/jsonopts"
+ "github.com/quay/clair/v4/internal/json/internal/jsonwire"
+)
+
+// Options configures [NewEncoder], [Encoder.Reset], [NewDecoder],
+// and [Decoder.Reset] with specific features.
+// Each function takes in a variadic list of options, where properties
+// set in latter options override the value of previously set properties.
+//
+// There is a single Options type, which is used with both encoding and decoding.
+// Some options affect both operations, while others only affect one operation:
+//
+// - [AllowDuplicateNames] affects encoding and decoding
+// - [AllowInvalidUTF8] affects encoding and decoding
+// - [EscapeForHTML] affects encoding only
+// - [EscapeForJS] affects encoding only
+// - [PreserveRawStrings] affects encoding only
+// - [CanonicalizeRawInts] affects encoding only
+// - [CanonicalizeRawFloats] affects encoding only
+// - [ReorderRawObjects] affects encoding only
+// - [SpaceAfterColon] affects encoding only
+// - [SpaceAfterComma] affects encoding only
+// - [Multiline] affects encoding only
+// - [WithIndent] affects encoding only
+// - [WithIndentPrefix] affects encoding only
+//
+// Options that do not affect a particular operation are ignored.
+//
+// The Options type is identical to [encoding/json.Options] and
+// [encoding/json/v2.Options]. Options from the other packages may
+// be passed to functionality in this package, but are ignored.
+// Options from this package may be used with the other packages.
+type Options = jsonopts.Options
+
+// AllowDuplicateNames specifies that JSON objects may contain
+// duplicate member names. Disabling the duplicate name check may provide
+// performance benefits, but breaks compliance with RFC 7493, section 2.3.
+// The input or output will still be compliant with RFC 8259,
+// which leaves the handling of duplicate names as unspecified behavior.
+//
+// This affects either encoding or decoding.
+func AllowDuplicateNames(v bool) Options {
+ if v {
+ return jsonflags.AllowDuplicateNames | 1
+ } else {
+ return jsonflags.AllowDuplicateNames | 0
+ }
+}
+
+// AllowInvalidUTF8 specifies that JSON strings may contain invalid UTF-8,
+// which will be mangled as the Unicode replacement character, U+FFFD.
+// This causes the encoder or decoder to break compliance with
+// RFC 7493, section 2.1, and RFC 8259, section 8.1.
+//
+// This affects either encoding or decoding.
+func AllowInvalidUTF8(v bool) Options {
+ if v {
+ return jsonflags.AllowInvalidUTF8 | 1
+ } else {
+ return jsonflags.AllowInvalidUTF8 | 0
+ }
+}
+
+// EscapeForHTML specifies that '<', '>', and '&' characters within JSON strings
+// should be escaped as a hexadecimal Unicode codepoint (e.g., \u003c) so that
+// the output is safe to embed within HTML.
+//
+// This only affects encoding and is ignored when decoding.
+func EscapeForHTML(v bool) Options {
+ if v {
+ return jsonflags.EscapeForHTML | 1
+ } else {
+ return jsonflags.EscapeForHTML | 0
+ }
+}
+
+// EscapeForJS specifies that U+2028 and U+2029 characters within JSON strings
+// should be escaped as a hexadecimal Unicode codepoint (e.g., \u2028) so that
+// the output is valid to embed within JavaScript. See RFC 8259, section 12.
+//
+// This only affects encoding and is ignored when decoding.
+func EscapeForJS(v bool) Options {
+ if v {
+ return jsonflags.EscapeForJS | 1
+ } else {
+ return jsonflags.EscapeForJS | 0
+ }
+}
+
+// PreserveRawStrings specifies that when encoding a raw JSON string in a
+// [Token] or [Value], pre-escaped sequences
+// in a JSON string are preserved to the output.
+// However, raw strings still respect [EscapeForHTML] and [EscapeForJS]
+// such that the relevant characters are escaped.
+// If [AllowInvalidUTF8] is enabled, bytes of invalid UTF-8
+// are preserved to the output.
+//
+// This only affects encoding and is ignored when decoding.
+func PreserveRawStrings(v bool) Options {
+ if v {
+ return jsonflags.PreserveRawStrings | 1
+ } else {
+ return jsonflags.PreserveRawStrings | 0
+ }
+}
+
+// CanonicalizeRawInts specifies that when encoding a raw JSON
+// integer number (i.e., a number without a fraction and exponent) in a
+// [Token] or [Value], the number is canonicalized
+// according to RFC 8785, section 3.2.2.3. As a special case,
+// the number -0 is canonicalized as 0.
+//
+// JSON numbers are treated as IEEE 754 double precision numbers.
+// Any numbers with precision beyond what is representable by that form
+// will lose their precision when canonicalized. For example,
+// integer values beyond ±2⁵³ will lose their precision.
+// For example, 1234567890123456789 is formatted as 1234567890123456800.
+//
+// This only affects encoding and is ignored when decoding.
+func CanonicalizeRawInts(v bool) Options {
+ if v {
+ return jsonflags.CanonicalizeRawInts | 1
+ } else {
+ return jsonflags.CanonicalizeRawInts | 0
+ }
+}
+
+// CanonicalizeRawFloats specifies that when encoding a raw JSON
+// floating-point number (i.e., a number with a fraction or exponent) in a
+// [Token] or [Value], the number is canonicalized
+// according to RFC 8785, section 3.2.2.3. As a special case,
+// the number -0 is canonicalized as 0.
+//
+// JSON numbers are treated as IEEE 754 double precision numbers.
+// It is safe to canonicalize a serialized single precision number and
+// parse it back as a single precision number and expect the same value.
+// If a number exceeds ±1.7976931348623157e+308, which is the maximum
+// finite number, then it is saturated at that value and formatted as such.
+//
+// This only affects encoding and is ignored when decoding.
+func CanonicalizeRawFloats(v bool) Options {
+ if v {
+ return jsonflags.CanonicalizeRawFloats | 1
+ } else {
+ return jsonflags.CanonicalizeRawFloats | 0
+ }
+}
+
+// ReorderRawObjects specifies that when encoding a raw JSON object in a
+// [Value], the object members are reordered according to
+// RFC 8785, section 3.2.3.
+//
+// This only affects encoding and is ignored when decoding.
+func ReorderRawObjects(v bool) Options {
+ if v {
+ return jsonflags.ReorderRawObjects | 1
+ } else {
+ return jsonflags.ReorderRawObjects | 0
+ }
+}
+
+// SpaceAfterColon specifies that the JSON output should emit a space character
+// after each colon separator following a JSON object name.
+// If false, then no space character appears after the colon separator.
+//
+// This only affects encoding and is ignored when decoding.
+func SpaceAfterColon(v bool) Options {
+ if v {
+ return jsonflags.SpaceAfterColon | 1
+ } else {
+ return jsonflags.SpaceAfterColon | 0
+ }
+}
+
+// SpaceAfterComma specifies that the JSON output should emit a space character
+// after each comma separator following a JSON object value or array element.
+// If false, then no space character appears after the comma separator.
+//
+// This only affects encoding and is ignored when decoding.
+func SpaceAfterComma(v bool) Options {
+ if v {
+ return jsonflags.SpaceAfterComma | 1
+ } else {
+ return jsonflags.SpaceAfterComma | 0
+ }
+}
+
+// Multiline specifies that the JSON output should expand to multiple lines,
+// where every JSON object member or JSON array element appears on
+// a new, indented line according to the nesting depth.
+//
+// If [SpaceAfterColon] is not specified, then the default is true.
+// If [SpaceAfterComma] is not specified, then the default is false.
+// If [WithIndent] is not specified, then the default is "\t".
+//
+// If set to false, then the output is on a single line,
+// where the only whitespace emitted is determined by the current
+// values of [SpaceAfterColon] and [SpaceAfterComma].
+//
+// This only affects encoding and is ignored when decoding.
+func Multiline(v bool) Options {
+ if v {
+ return jsonflags.Multiline | 1
+ } else {
+ return jsonflags.Multiline | 0
+ }
+}
+
+// WithIndent specifies that the encoder should emit multiline output
+// where each element in a JSON object or array begins on a new, indented line
+// beginning with the indent prefix (see [WithIndentPrefix])
+// followed by one or more copies of indent according to the nesting depth.
+// The indent must only be composed of space or tab characters.
+//
+// If the intent is to emit indented output without a preference for
+// the particular indent string, then use [Multiline] instead.
+//
+// This only affects encoding and is ignored when decoding.
+// Use of this option implies [Multiline] being set to true.
+func WithIndent(indent string) Options {
+ // Fast-path: Return a constant for common indents, which avoids allocating.
+ // These are derived from analyzing the Go module proxy on 2023-07-01.
+ switch indent {
+ case "\t":
+ return jsonopts.Indent("\t") // ~14k usages
+ case "    ":
+ return jsonopts.Indent("    ") // ~18k usages
+ case "   ":
+ return jsonopts.Indent("   ") // ~1.7k usages
+ case "  ":
+ return jsonopts.Indent("  ") // ~52k usages
+ case " ":
+ return jsonopts.Indent(" ") // ~12k usages
+ case "":
+ return jsonopts.Indent("") // ~1.5k usages
+ }
+
+ // Otherwise, allocate for this unique value.
+ if s := strings.Trim(indent, " \t"); len(s) > 0 {
+ panic("json: invalid character " + jsonwire.QuoteRune(s) + " in indent")
+ }
+ return jsonopts.Indent(indent)
+}
+
+// WithIndentPrefix specifies that the encoder should emit multiline output
+// where each element in a JSON object or array begins on a new, indented line
+// beginning with the indent prefix followed by one or more copies of indent
+// (see [WithIndent]) according to the nesting depth.
+// The prefix must only be composed of space or tab characters.
+//
+// This only affects encoding and is ignored when decoding.
+// Use of this option implies [Multiline] being set to true.
+func WithIndentPrefix(prefix string) Options {
+ if s := strings.Trim(prefix, " \t"); len(s) > 0 {
+ panic("json: invalid character " + jsonwire.QuoteRune(s) + " in indent prefix")
+ }
+ return jsonopts.IndentPrefix(prefix)
+}
+
+/*
+// TODO(https://go.dev/issue/56733): Implement WithByteLimit and WithDepthLimit.
+// Remember to also update the "Security Considerations" section.
+
+// WithByteLimit sets a limit on the number of bytes of input or output bytes
+// that may be consumed or produced for each top-level JSON value.
+// If a [Decoder] or [Encoder] method call would need to consume/produce
+// more than a total of n bytes to make progress on the top-level JSON value,
+// then the call will report an error.
+// Whitespace before and within the top-level value is counted against the limit.
+// Whitespace after a top-level value is counted against the limit
+// for the next top-level value.
+//
+// A non-positive limit is equivalent to no limit at all.
+// If unspecified, the default limit is no limit at all.
+// This affects either encoding or decoding.
+func WithByteLimit(n int64) Options {
+ return jsonopts.ByteLimit(max(n, 0))
+}
+
+// WithDepthLimit sets a limit on the maximum depth of JSON nesting
+// that may be consumed or produced for each top-level JSON value.
+// If a [Decoder] or [Encoder] method call would need to consume or produce
+// a depth greater than n to make progress on the top-level JSON value,
+// then the call will report an error.
+//
+// A non-positive limit is equivalent to no limit at all.
+// If unspecified, the default limit is 10000.
+// This affects either encoding or decoding.
+func WithDepthLimit(n int) Options {
+ return jsonopts.DepthLimit(max(n, 0))
+}
+*/
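Each boolean option above is built as a flag bitmask OR'd with a value bit (`Flag | 1` or `Flag | 0`). A minimal sketch of that representation, assuming a presence/values scheme like the internal jsonflags package (the names and bit layout here are hypothetical):

```go
package main

import "fmt"

// Bools packs flag identity bits with bit 0 reserved as the value bit,
// so "Flag | 1" means explicitly true and "Flag | 0" explicitly false.
type Bools uint64

const (
	AllowDuplicateNames Bools = 1 << (iota + 1) // bit 1
	AllowInvalidUTF8                            // bit 2
	Multiline                                   // bit 3
)

// Flags tracks which options were set (Presence) and their values (Values),
// so "explicitly false" is distinguishable from "unset".
type Flags struct{ Presence, Values uint64 }

// Set records the flags carried by b with the value given by its low bit.
func (f *Flags) Set(b Bools) {
	mask := uint64(b &^ 1) // which flag bits this word carries
	value := uint64(0)
	if b&1 != 0 {
		value = mask // value bit set: carried flags become true
	}
	f.Presence |= mask
	f.Values = (f.Values &^ mask) | value
}

// Get reports the stored value of a flag; Has reports whether it was set.
func (f Flags) Get(b Bools) bool { return f.Values&uint64(b&^1) != 0 }
func (f Flags) Has(b Bools) bool { return f.Presence&uint64(b&^1) != 0 }

func main() {
	var f Flags
	f.Set(AllowDuplicateNames | 1) // explicitly true
	f.Set(Multiline | 0)           // explicitly false, but present
	fmt.Println(f.Get(AllowDuplicateNames), f.Has(Multiline), f.Get(Multiline))
}
```

This is why "properties set in latter options override the value of previously set properties": a later Set overwrites both the presence and value bits for the same flag.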
diff --git a/internal/json/jsontext/pools.go b/internal/json/jsontext/pools.go
new file mode 100644
index 0000000000..cf59d99b92
--- /dev/null
+++ b/internal/json/jsontext/pools.go
@@ -0,0 +1,152 @@
+// Copyright 2020 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+//go:build !goexperiment.jsonv2 || !go1.25
+
+package jsontext
+
+import (
+ "bytes"
+ "io"
+ "math/bits"
+ "sync"
+)
+
+// TODO(https://go.dev/issue/47657): Use sync.PoolOf.
+
+var (
+ // This owns the internal buffer since there is no io.Writer to output to.
+ // Since the buffer can get arbitrarily large in normal usage,
+ // there is statistical tracking logic to determine whether to recycle
+ // the internal buffer or not based on a history of utilization.
+ bufferedEncoderPool = &sync.Pool{New: func() any { return new(Encoder) }}
+
+ // This owns the internal buffer, but it is only used to temporarily store
+ // buffered JSON before flushing it to the underlying io.Writer.
+ // In a sufficiently efficient streaming mode, we do not expect the buffer
+ // to grow arbitrarily large. Thus, we avoid recycling large buffers.
+ streamingEncoderPool = &sync.Pool{New: func() any { return new(Encoder) }}
+
+ // This does not own the internal buffer since
+ // it is taken directly from the provided bytes.Buffer.
+ bytesBufferEncoderPool = &sync.Pool{New: func() any { return new(Encoder) }}
+)
+
+// bufferStatistics is statistics to track buffer utilization.
+// It is used to determine whether to recycle a buffer or not
+// to avoid https://go.dev/issue/23199.
+type bufferStatistics struct {
+ strikes int // number of times the buffer was under-utilized
+ prevLen int // length of previous buffer
+}
+
+func getBufferedEncoder(opts ...Options) *Encoder {
+ e := bufferedEncoderPool.Get().(*Encoder)
+ if e.s.Buf == nil {
+ // Round up to nearest 2ⁿ to make best use of malloc size classes.
+ // See runtime/sizeclasses.go on Go1.15.
+ // Logical OR with 63 to ensure 64 as the minimum buffer size.
+ n := 1 << bits.Len(uint(e.s.bufStats.prevLen|63))
+ e.s.Buf = make([]byte, 0, n)
+ }
+ e.s.reset(e.s.Buf[:0], nil, opts...)
+ return e
+}
+func putBufferedEncoder(e *Encoder) {
+ // Recycle large buffers only if sufficiently utilized.
+ // If a buffer is under-utilized enough times sequentially,
+ // then it is discarded, ensuring that a single large buffer
+ // won't be kept alive by a continuous stream of small usages.
+ //
+ // The worst case utilization is computed as:
+ // MIN_UTILIZATION_THRESHOLD / (1 + MAX_NUM_STRIKES)
+ //
+ // For the constants chosen below, this is (25%)/(1+4) ⇒ 5%.
+ // This may seem low, but it ensures a lower bound on
+ // the absolute worst-case utilization. Without this check,
+ // this would be theoretically 0%, which is infinitely worse.
+ //
+ // See https://go.dev/issue/27735.
+ switch {
+ case cap(e.s.Buf) <= 4<<10: // always recycle buffers smaller than 4KiB
+ e.s.bufStats.strikes = 0
+ case cap(e.s.Buf)/4 <= len(e.s.Buf): // at least 25% utilization
+ e.s.bufStats.strikes = 0
+ case e.s.bufStats.strikes < 4: // at most 4 strikes
+ e.s.bufStats.strikes++
+ default: // discard the buffer; too large and too often under-utilized
+ e.s.bufStats.strikes = 0
+ e.s.bufStats.prevLen = len(e.s.Buf) // heuristic for size to allocate next time
+ e.s.Buf = nil
+ }
+ bufferedEncoderPool.Put(e)
+}
+
+func getStreamingEncoder(w io.Writer, opts ...Options) *Encoder {
+ if _, ok := w.(*bytes.Buffer); ok {
+ e := bytesBufferEncoderPool.Get().(*Encoder)
+ e.s.reset(nil, w, opts...) // buffer taken from bytes.Buffer
+ return e
+ } else {
+ e := streamingEncoderPool.Get().(*Encoder)
+ e.s.reset(e.s.Buf[:0], w, opts...) // preserve existing buffer
+ return e
+ }
+}
+func putStreamingEncoder(e *Encoder) {
+ if _, ok := e.s.wr.(*bytes.Buffer); ok {
+ bytesBufferEncoderPool.Put(e)
+ } else {
+ if cap(e.s.Buf) > 64<<10 {
+ e.s.Buf = nil // avoid pinning arbitrarily large amounts of memory
+ }
+ streamingEncoderPool.Put(e)
+ }
+}
+
+var (
+ // This does not own the internal buffer since it is externally provided.
+ bufferedDecoderPool = &sync.Pool{New: func() any { return new(Decoder) }}
+
+ // This owns the internal buffer, but it is only used to temporarily store
+ // buffered JSON fetched from the underlying io.Reader.
+ // In a sufficiently efficient streaming mode, we do not expect the buffer
+ // to grow arbitrarily large. Thus, we avoid recycling large buffers.
+ streamingDecoderPool = &sync.Pool{New: func() any { return new(Decoder) }}
+
+ // This does not own the internal buffer since
+ // it is taken directly from the provided bytes.Buffer.
+ bytesBufferDecoderPool = bufferedDecoderPool
+)
+
+func getBufferedDecoder(b []byte, opts ...Options) *Decoder {
+ d := bufferedDecoderPool.Get().(*Decoder)
+ d.s.reset(b, nil, opts...)
+ return d
+}
+func putBufferedDecoder(d *Decoder) {
+ bufferedDecoderPool.Put(d)
+}
+
+func getStreamingDecoder(r io.Reader, opts ...Options) *Decoder {
+ if _, ok := r.(*bytes.Buffer); ok {
+ d := bytesBufferDecoderPool.Get().(*Decoder)
+ d.s.reset(nil, r, opts...) // buffer taken from bytes.Buffer
+ return d
+ } else {
+ d := streamingDecoderPool.Get().(*Decoder)
+ d.s.reset(d.s.buf[:0], r, opts...) // preserve existing buffer
+ return d
+ }
+}
+func putStreamingDecoder(d *Decoder) {
+ if _, ok := d.s.rd.(*bytes.Buffer); ok {
+ bytesBufferDecoderPool.Put(d)
+ } else {
+ if cap(d.s.buf) > 64<<10 {
+ d.s.buf = nil // avoid pinning arbitrarily large amounts of memory
+ }
+ streamingDecoderPool.Put(d)
+ }
+}
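The strike-based recycling heuristic in `putBufferedEncoder` can be isolated into a small sketch (constants taken from the code above; the helper name is made up for illustration):

```go
package main

import "fmt"

const (
	smallBufferSize = 4 << 10 // always recycle buffers up to 4KiB
	minUtilization  = 4       // require at least 1/4 (25%) utilization
	maxStrikes      = 4       // tolerate at most 4 consecutive misses
)

// shouldRecycle reports whether a buffer with the given length and capacity
// should keep its backing array when returned to the pool, updating the
// strike counter exactly as putBufferedEncoder does.
func shouldRecycle(length, capacity int, strikes *int) bool {
	switch {
	case capacity <= smallBufferSize:
		*strikes = 0
	case capacity/minUtilization <= length:
		*strikes = 0
	case *strikes < maxStrikes:
		*strikes++
	default:
		*strikes = 0
		return false // discard: too large and too often under-utilized
	}
	return true
}

func main() {
	strikes := 0
	// A 1MiB buffer used for tiny payloads survives four strikes,
	// then its backing array is dropped on the fifth small usage,
	// bounding worst-case utilization at 25%/(1+4) = 5%.
	for i := 0; i < 4; i++ {
		fmt.Println(shouldRecycle(100, 1<<20, &strikes)) // true
	}
	fmt.Println(shouldRecycle(100, 1<<20, &strikes)) // false
}
```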
diff --git a/internal/json/jsontext/quote.go b/internal/json/jsontext/quote.go
new file mode 100644
index 0000000000..401a291b2c
--- /dev/null
+++ b/internal/json/jsontext/quote.go
@@ -0,0 +1,41 @@
+// Copyright 2023 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+//go:build !goexperiment.jsonv2 || !go1.25
+
+package jsontext
+
+import (
+ "github.com/quay/clair/v4/internal/json/internal/jsonflags"
+ "github.com/quay/clair/v4/internal/json/internal/jsonwire"
+)
+
+// AppendQuote appends a double-quoted JSON string literal representing src
+// to dst and returns the extended buffer.
+// It uses the minimal string representation per RFC 8785, section 3.2.2.2.
+// Invalid UTF-8 bytes are replaced with the Unicode replacement character
+// and an error is returned at the end indicating the presence of invalid UTF-8.
+// The dst must not overlap with the src.
+func AppendQuote[Bytes ~[]byte | ~string](dst []byte, src Bytes) ([]byte, error) {
+ dst, err := jsonwire.AppendQuote(dst, src, &jsonflags.Flags{})
+ if err != nil {
+ err = &SyntacticError{Err: err}
+ }
+ return dst, err
+}
+
+// AppendUnquote appends the decoded interpretation of src as a
+// double-quoted JSON string literal to dst and returns the extended buffer.
+// The input src must be a JSON string without any surrounding whitespace.
+// Invalid UTF-8 bytes are replaced with the Unicode replacement character
+// and an error is returned at the end indicating the presence of invalid UTF-8.
+// Any trailing bytes after the JSON string literal result in an error.
+// The dst must not overlap with the src.
+func AppendUnquote[Bytes ~[]byte | ~string](dst []byte, src Bytes) ([]byte, error) {
+ dst, err := jsonwire.AppendUnquote(dst, src)
+ if err != nil {
+ err = &SyntacticError{Err: err}
+ }
+ return dst, err
+}
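A simplified sketch of what `AppendQuote` produces, assuming the minimal quoting behavior documented above. It escapes all control characters with `\u` forms, whereas the real function prefers the short escapes like `\n`, and it returns a bool where the real function returns an error:

```go
package main

import (
	"fmt"
	"unicode/utf8"
)

// appendQuote emits a minimal double-quoted JSON string, escaping only the
// required characters and replacing invalid UTF-8 bytes with U+FFFD.
// The bool reports whether any invalid UTF-8 was encountered.
func appendQuote(dst []byte, src string) ([]byte, bool) {
	invalid := false
	dst = append(dst, '"')
	for i := 0; i < len(src); {
		r, n := utf8.DecodeRuneInString(src[i:])
		switch {
		case r == utf8.RuneError && n == 1: // invalid UTF-8 byte
			invalid = true
			dst = utf8.AppendRune(dst, utf8.RuneError)
		case r == '"' || r == '\\':
			dst = append(dst, '\\', byte(r))
		case r < 0x20: // control characters must be escaped
			dst = append(dst, fmt.Sprintf(`\u%04x`, r)...)
		default:
			dst = utf8.AppendRune(dst, r)
		}
		i += n
	}
	return append(dst, '"'), invalid
}

func main() {
	out, invalid := appendQuote(nil, "a\"b\\c\xff")
	fmt.Printf("%s %v\n", out, invalid)
}
```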
diff --git a/internal/json/jsontext/state.go b/internal/json/jsontext/state.go
new file mode 100644
index 0000000000..3d9fbaa4c2
--- /dev/null
+++ b/internal/json/jsontext/state.go
@@ -0,0 +1,828 @@
+// Copyright 2020 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+//go:build !goexperiment.jsonv2 || !go1.25
+
+package jsontext
+
+import (
+ "errors"
+ "iter"
+ "math"
+ "strconv"
+ "strings"
+ "unicode/utf8"
+
+ "github.com/quay/clair/v4/internal/json/internal/jsonwire"
+)
+
+// ErrDuplicateName indicates that a JSON token could not be
+// encoded or decoded because it results in a duplicate JSON object name.
+// This error is directly wrapped within a [SyntacticError] when produced.
+//
+// The name of a duplicate JSON object member can be extracted as:
+//
+// err := ...
+// var serr jsontext.SyntacticError
+// if errors.As(err, &serr) && serr.Err == jsontext.ErrDuplicateName {
+// ptr := serr.JSONPointer // JSON pointer to duplicate name
+// name := ptr.LastToken() // duplicate name itself
+// ...
+// }
+//
+// This error is only returned if [AllowDuplicateNames] is false.
+var ErrDuplicateName = errors.New("duplicate object member name")
+
+// ErrNonStringName indicates that a JSON token could not be
+// encoded or decoded because it is not a string,
+// as required for JSON object names according to RFC 8259, section 4.
+// This error is directly wrapped within a [SyntacticError] when produced.
+var ErrNonStringName = errors.New("object member name must be a string")
+
+var (
+ errMissingValue = errors.New("missing value after object name")
+ errMismatchDelim = errors.New("mismatching structural token for object or array")
+ errMaxDepth = errors.New("exceeded max depth")
+
+ errInvalidNamespace = errors.New("object namespace is in an invalid state")
+)
+
+// Per RFC 8259, section 9, implementations may enforce a maximum depth.
+// Such a limit is necessary to prevent stack overflows.
+const maxNestingDepth = 10000
+
+type state struct {
+ // Tokens validates whether the next token kind is valid.
+ Tokens stateMachine
+
+ // Names is a stack of object names.
+ Names objectNameStack
+
+ // Namespaces is a stack of object namespaces.
+ // For performance reasons, Encoder or Decoder may not update this
+ // if Marshal or Unmarshal is able to track names in a more efficient way.
+ // See makeMapArshaler and makeStructArshaler.
+ // Not used if AllowDuplicateNames is true.
+ Namespaces objectNamespaceStack
+}
+
+// needObjectValue reports whether the next token should be an object value.
+// This method is used by [wrapSyntacticError].
+func (s *state) needObjectValue() bool {
+ return s.Tokens.Last.needObjectValue()
+}
+
+func (s *state) reset() {
+ s.Tokens.reset()
+ s.Names.reset()
+ s.Namespaces.reset()
+}
+
+// Pointer is a JSON Pointer (RFC 6901) that references a particular JSON value
+// relative to the root of the top-level JSON value.
+//
+// A Pointer is a slash-separated list of tokens, where each token is
+// either a JSON object name or an index to a JSON array element
+// encoded as a base-10 integer value.
+// It is impossible to distinguish between an array index and an object name
+// (that happens to be an base-10 encoded integer) without also knowing
+// the structure of the top-level JSON value that the pointer refers to.
+//
+// There is exactly one representation of a pointer to a particular value,
+// so comparability of Pointer values is equivalent to checking whether
+// they both point to the exact same value.
+type Pointer string
+
+// IsValid reports whether p is a valid JSON Pointer according to RFC 6901.
+// Note that the concatenation of two valid pointers produces a valid pointer.
+func (p Pointer) IsValid() bool {
+ for i, r := range p {
+ switch {
+ case r == '~' && (i+1 == len(p) || (p[i+1] != '0' && p[i+1] != '1')):
+ return false // invalid escape
+ case r == '\ufffd' && !strings.HasPrefix(string(p[i:]), "\ufffd"):
+ return false // invalid UTF-8
+ }
+ }
+ return len(p) == 0 || p[0] == '/'
+}
+
+// Contains reports whether the JSON value that p points to
+// is equal to or contains the JSON value that pc points to.
+func (p Pointer) Contains(pc Pointer) bool {
+ // Invariant: len(p) <= len(pc) if p.Contains(pc)
+ suffix, ok := strings.CutPrefix(string(pc), string(p))
+ return ok && (suffix == "" || suffix[0] == '/')
+}
+
+// Parent strips off the last token and returns the remaining pointer.
+// The parent of an empty p is an empty string.
+func (p Pointer) Parent() Pointer {
+ return p[:max(strings.LastIndexByte(string(p), '/'), 0)]
+}
+
+// LastToken returns the last token in the pointer.
+// The last token of an empty p is an empty string.
+func (p Pointer) LastToken() string {
+ last := p[max(strings.LastIndexByte(string(p), '/'), 0):]
+ return unescapePointerToken(strings.TrimPrefix(string(last), "/"))
+}
+
+// AppendToken appends a token to the end of p and returns the full pointer.
+func (p Pointer) AppendToken(tok string) Pointer {
+ return Pointer(appendEscapePointerName([]byte(p+"/"), tok))
+}
+
+// TODO: Add Pointer.AppendTokens,
+// but should this take in a ...string or an iter.Seq[string]?
+
+// Tokens returns an iterator over the reference tokens in the JSON pointer,
+// starting from the first token until the last token (unless stopped early).
+func (p Pointer) Tokens() iter.Seq[string] {
+ return func(yield func(string) bool) {
+ for len(p) > 0 {
+ p = Pointer(strings.TrimPrefix(string(p), "/"))
+ i := min(uint(strings.IndexByte(string(p), '/')), uint(len(p)))
+ if !yield(unescapePointerToken(string(p)[:i])) {
+ return
+ }
+ p = p[i:]
+ }
+ }
+}
+
+func unescapePointerToken(token string) string {
+ if strings.Contains(token, "~") {
+ // Per RFC 6901, section 3, unescape '~' and '/' characters.
+ token = strings.ReplaceAll(token, "~1", "/")
+ token = strings.ReplaceAll(token, "~0", "~")
+ }
+ return token
+}
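The RFC 6901 token escaping used by `appendEscapePointerName` and `unescapePointerToken` can be shown in isolation. Note the replacement order: unescaping must rewrite `~1` before `~0`, otherwise `~01` would incorrectly decode to `/` instead of the literal `~1`:

```go
package main

import (
	"fmt"
	"strings"
)

// escapeToken escapes a JSON Pointer reference token per RFC 6901,
// section 3: '~' becomes "~0" and '/' becomes "~1".
func escapeToken(tok string) string {
	tok = strings.ReplaceAll(tok, "~", "~0") // must run first
	return strings.ReplaceAll(tok, "/", "~1")
}

// unescapeToken reverses escapeToken. Replacing "~1" first ensures that
// "~0" sequences are not misread after '~' characters are reintroduced.
func unescapeToken(tok string) string {
	tok = strings.ReplaceAll(tok, "~1", "/") // must run first
	return strings.ReplaceAll(tok, "~0", "~")
}

func main() {
	tok := "a/b~c"
	esc := escapeToken(tok)
	fmt.Println(esc, unescapeToken(esc)) // a~1b~0c a/b~c
}
```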
+
+// appendStackPointer appends a JSON Pointer (RFC 6901) to the current value.
+//
+// - If where is -1, then it points to the previously processed token.
+//
+// - If where is 0, then it points to the parent JSON object or array,
+// or an object member if in-between an object member key and value.
+// This is useful when the position is ambiguous whether
+// we are interested in the previous or next token, or
+// when we are uncertain whether the next token
+// continues or terminates the current object or array.
+//
+// - If where is +1, then it points to the next expected value,
+// assuming that it continues the current JSON object or array.
+// As a special case, if the next token is a JSON object name,
+// then it points to the parent JSON object.
+//
+// Invariant: Must call s.names.copyQuotedBuffer beforehand.
+func (s state) appendStackPointer(b []byte, where int) []byte {
+ var objectDepth int
+ for i := 1; i < s.Tokens.Depth(); i++ {
+ e := s.Tokens.index(i)
+ arrayDelta := -1 // by default point to previous array element
+ if isLast := i == s.Tokens.Depth()-1; isLast {
+ switch {
+ case where < 0 && e.Length() == 0 || where == 0 && !e.needObjectValue() || where > 0 && e.NeedObjectName():
+ return b
+ case where > 0 && e.isArray():
+ arrayDelta = 0 // point to next array element
+ }
+ }
+ switch {
+ case e.isObject():
+ b = appendEscapePointerName(append(b, '/'), s.Names.getUnquoted(objectDepth))
+ objectDepth++
+ case e.isArray():
+ b = strconv.AppendUint(append(b, '/'), uint64(e.Length()+int64(arrayDelta)), 10)
+ }
+ }
+ return b
+}
+
+func appendEscapePointerName[Bytes ~[]byte | ~string](b []byte, name Bytes) []byte {
+ for _, r := range string(name) {
+ // Per RFC 6901, section 3, escape '~' and '/' characters.
+ switch r {
+ case '~':
+ b = append(b, "~0"...)
+ case '/':
+ b = append(b, "~1"...)
+ default:
+ b = utf8.AppendRune(b, r)
+ }
+ }
+ return b
+}
+
+// stateMachine is a push-down automaton that validates whether
+// a sequence of tokens is valid or not according to the JSON grammar.
+// It is useful for both encoding and decoding.
+//
+// It is a stack where each entry represents a nested JSON object or array.
+// The stack has a minimum depth of 1 where the first level is a
+// virtual JSON array to handle a stream of top-level JSON values.
+// The top-level virtual JSON array is special in that it doesn't require commas
+// between each JSON value.
+//
+// For performance, most methods are carefully written to be inlinable.
+// The zero value is a valid state machine ready for use.
+type stateMachine struct {
+ Stack []stateEntry
+ Last stateEntry
+}
+
+// reset resets the state machine.
+// The machine always starts with a minimum depth of 1.
+func (m *stateMachine) reset() {
+ m.Stack = m.Stack[:0]
+ if cap(m.Stack) > 1<<10 {
+ m.Stack = nil
+ }
+ m.Last = stateTypeArray
+}
+
+// Depth is the current nested depth of JSON objects and arrays.
+// It is one-indexed (i.e., top-level values have a depth of 1).
+func (m stateMachine) Depth() int {
+ return len(m.Stack) + 1
+}
+
+// index returns a reference to the ith entry.
+// It is only valid until the next push method call.
+func (m *stateMachine) index(i int) *stateEntry {
+ if i == len(m.Stack) {
+ return &m.Last
+ }
+ return &m.Stack[i]
+}
+
+// DepthLength reports the current nested depth and
+// the length of the last JSON object or array.
+func (m stateMachine) DepthLength() (int, int64) {
+ return m.Depth(), m.Last.Length()
+}
+
+// appendLiteral appends a JSON literal as the next token in the sequence.
+// If an error is returned, the state is not mutated.
+func (m *stateMachine) appendLiteral() error {
+ switch {
+ case m.Last.NeedObjectName():
+ return ErrNonStringName
+ case !m.Last.isValidNamespace():
+ return errInvalidNamespace
+ default:
+ m.Last.Increment()
+ return nil
+ }
+}
+
+// appendString appends a JSON string as the next token in the sequence.
+// If an error is returned, the state is not mutated.
+func (m *stateMachine) appendString() error {
+ switch {
+ case !m.Last.isValidNamespace():
+ return errInvalidNamespace
+ default:
+ m.Last.Increment()
+ return nil
+ }
+}
+
+// appendNumber appends a JSON number as the next token in the sequence.
+// If an error is returned, the state is not mutated.
+func (m *stateMachine) appendNumber() error {
+ return m.appendLiteral()
+}
+
+// pushObject appends a JSON begin object token as next in the sequence.
+// If an error is returned, the state is not mutated.
+func (m *stateMachine) pushObject() error {
+ switch {
+ case m.Last.NeedObjectName():
+ return ErrNonStringName
+ case !m.Last.isValidNamespace():
+ return errInvalidNamespace
+ case len(m.Stack) == maxNestingDepth:
+ return errMaxDepth
+ default:
+ m.Last.Increment()
+ m.Stack = append(m.Stack, m.Last)
+ m.Last = stateTypeObject
+ return nil
+ }
+}
+
+// popObject appends a JSON end object token as next in the sequence.
+// If an error is returned, the state is not mutated.
+func (m *stateMachine) popObject() error {
+ switch {
+ case !m.Last.isObject():
+ return errMismatchDelim
+ case m.Last.needObjectValue():
+ return errMissingValue
+ case !m.Last.isValidNamespace():
+ return errInvalidNamespace
+ default:
+ m.Last = m.Stack[len(m.Stack)-1]
+ m.Stack = m.Stack[:len(m.Stack)-1]
+ return nil
+ }
+}
+
+// pushArray appends a JSON begin array token as next in the sequence.
+// If an error is returned, the state is not mutated.
+func (m *stateMachine) pushArray() error {
+ switch {
+ case m.Last.NeedObjectName():
+ return ErrNonStringName
+ case !m.Last.isValidNamespace():
+ return errInvalidNamespace
+ case len(m.Stack) == maxNestingDepth:
+ return errMaxDepth
+ default:
+ m.Last.Increment()
+ m.Stack = append(m.Stack, m.Last)
+ m.Last = stateTypeArray
+ return nil
+ }
+}
+
+// popArray appends a JSON end array token as next in the sequence.
+// If an error is returned, the state is not mutated.
+func (m *stateMachine) popArray() error {
+ switch {
+ case !m.Last.isArray() || len(m.Stack) == 0: // forbid popping top-level virtual JSON array
+ return errMismatchDelim
+ case !m.Last.isValidNamespace():
+ return errInvalidNamespace
+ default:
+ m.Last = m.Stack[len(m.Stack)-1]
+ m.Stack = m.Stack[:len(m.Stack)-1]
+ return nil
+ }
+}
+
+// NeedIndent reports whether indent whitespace should be injected.
+// A zero value means that no whitespace should be injected.
+// A positive value means '\n', indentPrefix, and (n-1) copies of indentBody
+// should be appended to the output immediately before the next token.
+func (m stateMachine) NeedIndent(next Kind) (n int) {
+ willEnd := next == '}' || next == ']'
+ switch {
+ case m.Depth() == 1:
+ return 0 // top-level values are never indented
+ case m.Last.Length() == 0 && willEnd:
+ return 0 // an empty object or array is never indented
+ case m.Last.Length() == 0 || m.Last.needImplicitComma(next):
+ return m.Depth()
+ case willEnd:
+ return m.Depth() - 1
+ default:
+ return 0
+ }
+}
+
+// MayAppendDelim appends a colon or comma that may precede the next token.
+func (m stateMachine) MayAppendDelim(b []byte, next Kind) []byte {
+ switch {
+ case m.Last.needImplicitColon():
+ return append(b, ':')
+ case m.Last.needImplicitComma(next) && len(m.Stack) != 0: // comma not needed for top-level values
+ return append(b, ',')
+ default:
+ return b
+ }
+}
+
+// needDelim reports whether a colon or comma token should be implicitly emitted
+// before the next token of the specified kind.
+// A zero value means no delimiter should be emitted.
+func (m stateMachine) needDelim(next Kind) (delim byte) {
+ switch {
+ case m.Last.needImplicitColon():
+ return ':'
+ case m.Last.needImplicitComma(next) && len(m.Stack) != 0: // comma not needed for top-level values
+ return ','
+ default:
+ return 0
+ }
+}
+
+// InvalidateDisabledNamespaces marks all disabled namespaces as invalid.
+//
+// For efficiency, Marshal and Unmarshal may disable namespaces since there are
+// more efficient ways to track duplicate names. However, if an error occurs,
+// the namespaces in Encoder or Decoder will be left in an inconsistent state.
+// Mark the namespaces as invalid so that future method calls on
+// Encoder or Decoder will return an error.
+func (m *stateMachine) InvalidateDisabledNamespaces() {
+ for i := range m.Depth() {
+ e := m.index(i)
+ if !e.isActiveNamespace() {
+ e.invalidateNamespace()
+ }
+ }
+}
+
+// stateEntry encodes several artifacts within a single unsigned integer:
+// - whether this represents a JSON object or array,
+// - whether this object should check for duplicate names, and
+// - how many elements are in this JSON object or array.
+type stateEntry uint64
+
+const (
+ // The type mask (1 bit) records whether this is a JSON object or array.
+ stateTypeMask stateEntry = 0x8000_0000_0000_0000
+ stateTypeObject stateEntry = 0x8000_0000_0000_0000
+ stateTypeArray stateEntry = 0x0000_0000_0000_0000
+
+	// The name check mask (2 bits) records whether to update
+ // the namespaces for the current JSON object and
+ // whether the namespace is valid.
+ stateNamespaceMask stateEntry = 0x6000_0000_0000_0000
+ stateDisableNamespace stateEntry = 0x4000_0000_0000_0000
+ stateInvalidNamespace stateEntry = 0x2000_0000_0000_0000
+
+ // The count mask (61 bits) records the number of elements.
+ stateCountMask stateEntry = 0x1fff_ffff_ffff_ffff
+ stateCountLSBMask stateEntry = 0x0000_0000_0000_0001
+ stateCountOdd stateEntry = 0x0000_0000_0000_0001
+ stateCountEven stateEntry = 0x0000_0000_0000_0000
+)
+
+// Length reports the number of elements in the JSON object or array.
+// Each name and value in an object entry is treated as a separate element.
+func (e stateEntry) Length() int64 {
+ return int64(e & stateCountMask)
+}
+
+// isObject reports whether this is a JSON object.
+func (e stateEntry) isObject() bool {
+ return e&stateTypeMask == stateTypeObject
+}
+
+// isArray reports whether this is a JSON array.
+func (e stateEntry) isArray() bool {
+ return e&stateTypeMask == stateTypeArray
+}
+
+// NeedObjectName reports whether the next token must be a JSON string,
+// which is necessary for JSON object names.
+func (e stateEntry) NeedObjectName() bool {
+ return e&(stateTypeMask|stateCountLSBMask) == stateTypeObject|stateCountEven
+}
+
+// needImplicitColon reports whether a colon should occur next,
+// which always occurs after JSON object names.
+func (e stateEntry) needImplicitColon() bool {
+ return e.needObjectValue()
+}
+
+// needObjectValue reports whether the next token must be a JSON value,
+// which is necessary after every JSON object name.
+func (e stateEntry) needObjectValue() bool {
+ return e&(stateTypeMask|stateCountLSBMask) == stateTypeObject|stateCountOdd
+}
+
+// needImplicitComma reports whether a comma should occur next,
+// which always occurs after a value in a JSON object or array
+// before the next value (or name).
+func (e stateEntry) needImplicitComma(next Kind) bool {
+ return !e.needObjectValue() && e.Length() > 0 && next != '}' && next != ']'
+}
+
+// Increment increments the number of elements for the current object or array.
+// This assumes that overflow won't practically be an issue since
+// 1<<61 is sufficiently large.
+func (e *stateEntry) Increment() {
+	(*e)++
+}
+
+// decrement decrements the number of elements for the current object or array.
+// It is the caller's responsibility to ensure that e.Length() > 0.
+func (e *stateEntry) decrement() {
+	(*e)--
+}
+
+// DisableNamespace disables the JSON object namespace such that the
+// Encoder or Decoder no longer updates the namespace.
+func (e *stateEntry) DisableNamespace() {
+ *e |= stateDisableNamespace
+}
+
+// isActiveNamespace reports whether the JSON object namespace is actively
+// being updated and used for duplicate name checks.
+func (e stateEntry) isActiveNamespace() bool {
+ return e&(stateDisableNamespace) == 0
+}
+
+// invalidateNamespace marks the JSON object namespace as being invalid.
+func (e *stateEntry) invalidateNamespace() {
+ *e |= stateInvalidNamespace
+}
+
+// isValidNamespace reports whether the JSON object namespace is valid.
+func (e stateEntry) isValidNamespace() bool {
+ return e&(stateInvalidNamespace) == 0
+}
+
+// objectNameStack is a stack of names when descending into a JSON object.
+// In contrast to objectNamespaceStack, this only has to remember a single name
+// per JSON object.
+//
+// This data structure may contain offsets to encodeBuffer or decodeBuffer.
+// It violates clean abstraction of layers, but is significantly more efficient.
+// This ensures that popping and pushing in the common case is a trivial
+// push/pop of an offset integer.
+//
+// The zero value is an empty names stack ready for use.
+type objectNameStack struct {
+ // offsets is a stack of offsets for each name.
+ // A non-negative offset is the ending offset into the local names buffer.
+ // A negative offset is the bit-wise inverse of a starting offset into
+ // a remote buffer (e.g., encodeBuffer or decodeBuffer).
+ // A math.MinInt offset at the end implies that the last object is empty.
+ // Invariant: Positive offsets always occur before negative offsets.
+ offsets []int
+ // unquotedNames is a back-to-back concatenation of names.
+ unquotedNames []byte
+}
+
+func (ns *objectNameStack) reset() {
+ ns.offsets = ns.offsets[:0]
+ ns.unquotedNames = ns.unquotedNames[:0]
+ if cap(ns.offsets) > 1<<6 {
+ ns.offsets = nil // avoid pinning arbitrarily large amounts of memory
+ }
+ if cap(ns.unquotedNames) > 1<<10 {
+ ns.unquotedNames = nil // avoid pinning arbitrarily large amounts of memory
+ }
+}
+
+func (ns *objectNameStack) length() int {
+ return len(ns.offsets)
+}
+
+// getUnquoted retrieves the ith unquoted name in the stack.
+// It returns an empty string if the last object is empty.
+//
+// Invariant: Must call copyQuotedBuffer beforehand.
+func (ns *objectNameStack) getUnquoted(i int) []byte {
+ ns.ensureCopiedBuffer()
+ if i == 0 {
+ return ns.unquotedNames[:ns.offsets[0]]
+ } else {
+ return ns.unquotedNames[ns.offsets[i-1]:ns.offsets[i-0]]
+ }
+}
+
+// invalidOffset indicates that the last JSON object currently has no name.
+const invalidOffset = math.MinInt
+
+// push descends into a nested JSON object.
+func (ns *objectNameStack) push() {
+ ns.offsets = append(ns.offsets, invalidOffset)
+}
+
+// ReplaceLastQuotedOffset replaces the last name with the starting offset
+// to the quoted name in some remote buffer. All offsets provided must be
+// relative to the same buffer until copyQuotedBuffer is called.
+func (ns *objectNameStack) ReplaceLastQuotedOffset(i int) {
+ // Use bit-wise inversion instead of naive multiplication by -1 to avoid
+ // ambiguity regarding zero (which is a valid offset into the names field).
+ // Bit-wise inversion is mathematically equivalent to -i-1,
+ // such that 0 becomes -1, 1 becomes -2, and so forth.
+ // This ensures that remote offsets are always negative.
+ ns.offsets[len(ns.offsets)-1] = ^i
+}
+
+// replaceLastUnquotedName replaces the last name with the provided name.
+//
+// Invariant: Must call copyQuotedBuffer beforehand.
+func (ns *objectNameStack) replaceLastUnquotedName(s string) {
+ ns.ensureCopiedBuffer()
+ var startOffset int
+ if len(ns.offsets) > 1 {
+ startOffset = ns.offsets[len(ns.offsets)-2]
+ }
+ ns.unquotedNames = append(ns.unquotedNames[:startOffset], s...)
+ ns.offsets[len(ns.offsets)-1] = len(ns.unquotedNames)
+}
+
+// clearLast removes any name in the last JSON object.
+// It is semantically equivalent to ns.push followed by ns.pop.
+func (ns *objectNameStack) clearLast() {
+ ns.offsets[len(ns.offsets)-1] = invalidOffset
+}
+
+// pop ascends out of a nested JSON object.
+func (ns *objectNameStack) pop() {
+ ns.offsets = ns.offsets[:len(ns.offsets)-1]
+}
+
+// copyQuotedBuffer copies names from the remote buffer into the local names
+// buffer so that there are no more offset references into the remote buffer.
+// This allows the remote buffer to change contents without affecting
+// the names that this data structure is trying to remember.
+func (ns *objectNameStack) copyQuotedBuffer(b []byte) {
+ // Find the first negative offset.
+ var i int
+ for i = len(ns.offsets) - 1; i >= 0 && ns.offsets[i] < 0; i-- {
+ continue
+ }
+
+ // Copy each name from the remote buffer into the local buffer.
+ for i = i + 1; i < len(ns.offsets); i++ {
+ if i == len(ns.offsets)-1 && ns.offsets[i] == invalidOffset {
+ if i == 0 {
+ ns.offsets[i] = 0
+ } else {
+ ns.offsets[i] = ns.offsets[i-1]
+ }
+ break // last JSON object had a push without any names
+ }
+
+ // As a form of Hyrum proofing, we write an invalid character into the
+ // buffer to make misuse of Decoder.ReadToken more obvious.
+ // We need to undo that mutation here.
+ quotedName := b[^ns.offsets[i]:]
+ if quotedName[0] == invalidateBufferByte {
+ quotedName[0] = '"'
+ }
+
+ // Append the unquoted name to the local buffer.
+ var startOffset int
+ if i > 0 {
+ startOffset = ns.offsets[i-1]
+ }
+ if n := jsonwire.ConsumeSimpleString(quotedName); n > 0 {
+ ns.unquotedNames = append(ns.unquotedNames[:startOffset], quotedName[len(`"`):n-len(`"`)]...)
+ } else {
+ ns.unquotedNames, _ = jsonwire.AppendUnquote(ns.unquotedNames[:startOffset], quotedName)
+ }
+ ns.offsets[i] = len(ns.unquotedNames)
+ }
+}
+
+func (ns *objectNameStack) ensureCopiedBuffer() {
+ if len(ns.offsets) > 0 && ns.offsets[len(ns.offsets)-1] < 0 {
+ panic("BUG: copyQuotedBuffer not called beforehand")
+ }
+}
+
+// objectNamespaceStack is a stack of object namespaces.
+// This data structure assists in detecting duplicate names.
+type objectNamespaceStack []objectNamespace
+
+// reset resets the object namespace stack.
+func (nss *objectNamespaceStack) reset() {
+ if cap(*nss) > 1<<10 {
+ *nss = nil
+ }
+ *nss = (*nss)[:0]
+}
+
+// push starts a new namespace for a nested JSON object.
+func (nss *objectNamespaceStack) push() {
+ if cap(*nss) > len(*nss) {
+ *nss = (*nss)[:len(*nss)+1]
+ nss.Last().reset()
+ } else {
+ *nss = append(*nss, objectNamespace{})
+ }
+}
+
+// Last returns a pointer to the last JSON object namespace.
+func (nss objectNamespaceStack) Last() *objectNamespace {
+ return &nss[len(nss)-1]
+}
+
+// pop terminates the namespace for a nested JSON object.
+func (nss *objectNamespaceStack) pop() {
+ *nss = (*nss)[:len(*nss)-1]
+}
+
+// objectNamespace is the namespace for a JSON object.
+// In contrast to objectNameStack, this needs to remember all names
+// per JSON object.
+//
+// The zero value is an empty namespace ready for use.
+type objectNamespace struct {
+ // It relies on a linear search over all the names before switching
+ // to use a Go map for direct lookup.
+
+ // endOffsets is a list of offsets to the end of each name in buffers.
+ // The length of offsets is the number of names in the namespace.
+ endOffsets []uint
+ // allUnquotedNames is a back-to-back concatenation of every name in the namespace.
+ allUnquotedNames []byte
+ // mapNames is a Go map containing every name in the namespace.
+ // Only valid if non-nil.
+ mapNames map[string]struct{}
+}
+
+// reset resets the namespace to be empty.
+func (ns *objectNamespace) reset() {
+ ns.endOffsets = ns.endOffsets[:0]
+ ns.allUnquotedNames = ns.allUnquotedNames[:0]
+ ns.mapNames = nil
+ if cap(ns.endOffsets) > 1<<6 {
+ ns.endOffsets = nil // avoid pinning arbitrarily large amounts of memory
+ }
+ if cap(ns.allUnquotedNames) > 1<<10 {
+ ns.allUnquotedNames = nil // avoid pinning arbitrarily large amounts of memory
+ }
+}
+
+// length reports the number of names in the namespace.
+func (ns *objectNamespace) length() int {
+ return len(ns.endOffsets)
+}
+
+// getUnquoted retrieves the ith unquoted name in the namespace.
+func (ns *objectNamespace) getUnquoted(i int) []byte {
+ if i == 0 {
+ return ns.allUnquotedNames[:ns.endOffsets[0]]
+ } else {
+ return ns.allUnquotedNames[ns.endOffsets[i-1]:ns.endOffsets[i-0]]
+ }
+}
+
+// lastUnquoted retrieves the last name in the namespace.
+func (ns *objectNamespace) lastUnquoted() []byte {
+ return ns.getUnquoted(ns.length() - 1)
+}
+
+// insertQuoted inserts a name and reports whether it was inserted,
+// which only occurs if name is not already in the namespace.
+// The provided name must be a valid JSON string.
+func (ns *objectNamespace) insertQuoted(name []byte, isVerbatim bool) bool {
+ if isVerbatim {
+ name = name[len(`"`) : len(name)-len(`"`)]
+ }
+ return ns.insert(name, !isVerbatim)
+}
+func (ns *objectNamespace) InsertUnquoted(name []byte) bool {
+ return ns.insert(name, false)
+}
+func (ns *objectNamespace) insert(name []byte, quoted bool) bool {
+ var allNames []byte
+ if quoted {
+ allNames, _ = jsonwire.AppendUnquote(ns.allUnquotedNames, name)
+ } else {
+ allNames = append(ns.allUnquotedNames, name...)
+ }
+ name = allNames[len(ns.allUnquotedNames):]
+
+ // Switch to a map if the buffer is too large for linear search.
+ // This does not add the current name to the map.
+ if ns.mapNames == nil && (ns.length() > 64 || len(ns.allUnquotedNames) > 1024) {
+ ns.mapNames = make(map[string]struct{})
+ var startOffset uint
+ for _, endOffset := range ns.endOffsets {
+ name := ns.allUnquotedNames[startOffset:endOffset]
+ ns.mapNames[string(name)] = struct{}{} // allocates a new string
+ startOffset = endOffset
+ }
+ }
+
+ if ns.mapNames == nil {
+ // Perform linear search over the buffer to find matching names.
+ // It provides O(n) lookup, but does not require any allocations.
+ var startOffset uint
+ for _, endOffset := range ns.endOffsets {
+ if string(ns.allUnquotedNames[startOffset:endOffset]) == string(name) {
+ return false
+ }
+ startOffset = endOffset
+ }
+ } else {
+ // Use the map if it is populated.
+ // It provides O(1) lookup, but requires a string allocation per name.
+ if _, ok := ns.mapNames[string(name)]; ok {
+ return false
+ }
+ ns.mapNames[string(name)] = struct{}{} // allocates a new string
+ }
+
+ ns.allUnquotedNames = allNames
+ ns.endOffsets = append(ns.endOffsets, uint(len(ns.allUnquotedNames)))
+ return true
+}
+
+// removeLast removes the last name in the namespace.
+func (ns *objectNamespace) removeLast() {
+ if ns.mapNames != nil {
+ delete(ns.mapNames, string(ns.lastUnquoted()))
+ }
+ if ns.length()-1 == 0 {
+ ns.endOffsets = ns.endOffsets[:0]
+ ns.allUnquotedNames = ns.allUnquotedNames[:0]
+ } else {
+ ns.endOffsets = ns.endOffsets[:ns.length()-1]
+ ns.allUnquotedNames = ns.allUnquotedNames[:ns.endOffsets[ns.length()-1]]
+ }
+}
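The `appendEscapePointerName` helper in this hunk performs the JSON Pointer escaping mandated by RFC 6901, section 3: `~` becomes `~0` and `/` becomes `~1`, with all other runes copied through. As a standalone sketch (not part of the patch, using a hypothetical `escapePointerName` name), the same transform can be exercised outside the package:

```go
package main

import (
	"fmt"
	"unicode/utf8"
)

// escapePointerName mirrors the RFC 6901 escaping done by
// appendEscapePointerName: '~' becomes "~0" and '/' becomes "~1".
// Any other rune is appended unchanged (invalid UTF-8 becomes U+FFFD).
func escapePointerName(name string) string {
	var b []byte
	for _, r := range name {
		switch r {
		case '~':
			b = append(b, "~0"...)
		case '/':
			b = append(b, "~1"...)
		default:
			b = utf8.AppendRune(b, r)
		}
	}
	return string(b)
}

func main() {
	// A name containing both special characters round-trips
	// into an unambiguous pointer token.
	fmt.Println(escapePointerName("a/b~c")) // a~1b~0c
}
```

The escape order matters when unescaping: `~1` must be decoded to `/` before `~0` is decoded to `~`, otherwise a literal `~01` in the input would be mis-decoded.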
diff --git a/internal/json/jsontext/token.go b/internal/json/jsontext/token.go
new file mode 100644
index 0000000000..7f39aa8359
--- /dev/null
+++ b/internal/json/jsontext/token.go
@@ -0,0 +1,527 @@
+// Copyright 2020 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+//go:build !goexperiment.jsonv2 || !go1.25
+
+package jsontext
+
+import (
+ "bytes"
+ "errors"
+ "math"
+ "strconv"
+
+ "github.com/quay/clair/v4/internal/json/internal/jsonflags"
+ "github.com/quay/clair/v4/internal/json/internal/jsonwire"
+)
+
+// NOTE: Token is analogous to v1 json.Token.
+
+const (
+ maxInt64 = math.MaxInt64
+ minInt64 = math.MinInt64
+ maxUint64 = math.MaxUint64
+ minUint64 = 0 // for consistency and readability purposes
+
+ invalidTokenPanic = "invalid jsontext.Token; it has been voided by a subsequent json.Decoder call"
+)
+
+var errInvalidToken = errors.New("invalid jsontext.Token")
+
+// Token represents a lexical JSON token, which may be one of the following:
+// - a JSON literal (i.e., null, true, or false)
+// - a JSON string (e.g., "hello, world!")
+// - a JSON number (e.g., 123.456)
+// - a begin or end delimiter for a JSON object (i.e., { or } )
+// - a begin or end delimiter for a JSON array (i.e., [ or ] )
+//
+// A Token cannot represent entire array or object values, while a [Value] can.
+// There is no Token to represent commas and colons since
+// these structural tokens can be inferred from the surrounding context.
+type Token struct {
+ nonComparable
+
+ // Tokens can exist in either a "raw" or an "exact" form.
+ // Tokens produced by the Decoder are in the "raw" form.
+ // Tokens returned by constructors are usually in the "exact" form.
+ // The Encoder accepts Tokens in either the "raw" or "exact" form.
+ //
+ // The following chart shows the possible values for each Token type:
+ // ╔═════════════════╦════════════╤════════════╤════════════╗
+ // ║ Token type ║ raw field │ str field │ num field ║
+ // ╠═════════════════╬════════════╪════════════╪════════════╣
+ // ║ null (raw) ║ "null" │ "" │ 0 ║
+ // ║ false (raw) ║ "false" │ "" │ 0 ║
+ // ║ true (raw) ║ "true" │ "" │ 0 ║
+ // ║ string (raw) ║ non-empty │ "" │ offset ║
+ // ║ string (string) ║ nil │ non-empty │ 0 ║
+ // ║ number (raw) ║ non-empty │ "" │ offset ║
+ // ║ number (float) ║ nil │ "f" │ non-zero ║
+ // ║ number (int64) ║ nil │ "i" │ non-zero ║
+ // ║ number (uint64) ║ nil │ "u" │ non-zero ║
+ // ║ object (delim) ║ "{" or "}" │ "" │ 0 ║
+ // ║ array (delim) ║ "[" or "]" │ "" │ 0 ║
+ // ╚═════════════════╩════════════╧════════════╧════════════╝
+ //
+ // Notes:
+ // - For tokens stored in "raw" form, the num field contains the
+ // absolute offset determined by raw.previousOffsetStart().
+ // The buffer itself is stored in raw.previousBuffer().
+ // - JSON literals and structural characters are always in the "raw" form.
+ // - JSON strings and numbers can be in either "raw" or "exact" forms.
+	// - The exact zero values of JSON strings and numbers in the "exact" form
+	//   have an ambiguous representation. Thus, they are always represented
+ // in the "raw" form.
+
+ // raw contains a reference to the raw decode buffer.
+ // If non-nil, then its value takes precedence over str and num.
+ // It is only valid if num == raw.previousOffsetStart().
+ raw *decodeBuffer
+
+ // str is the unescaped JSON string if num is zero.
+ // Otherwise, it is "f", "i", or "u" if num should be interpreted
+ // as a float64, int64, or uint64, respectively.
+ str string
+
+ // num is a float64, int64, or uint64 stored as a uint64 value.
+ // It is non-zero for any JSON number in the "exact" form.
+ num uint64
+}
+
+// TODO: Does representing 1-byte delimiters as *decodeBuffer cause performance issues?
+
+var (
+ Null Token = rawToken("null")
+ False Token = rawToken("false")
+ True Token = rawToken("true")
+
+ BeginObject Token = rawToken("{")
+ EndObject Token = rawToken("}")
+ BeginArray Token = rawToken("[")
+ EndArray Token = rawToken("]")
+
+ zeroString Token = rawToken(`""`)
+ zeroNumber Token = rawToken(`0`)
+
+ nanString Token = String("NaN")
+ pinfString Token = String("Infinity")
+ ninfString Token = String("-Infinity")
+)
+
+func rawToken(s string) Token {
+ return Token{raw: &decodeBuffer{buf: []byte(s), prevStart: 0, prevEnd: len(s)}}
+}
+
+// Bool constructs a Token representing a JSON boolean.
+func Bool(b bool) Token {
+ if b {
+ return True
+ }
+ return False
+}
+
+// String constructs a Token representing a JSON string.
+// The provided string should contain valid UTF-8, otherwise invalid characters
+// may be mangled as the Unicode replacement character.
+func String(s string) Token {
+ if len(s) == 0 {
+ return zeroString
+ }
+ return Token{str: s}
+}
+
+// Float constructs a Token representing a JSON number.
+// The values NaN, +Inf, and -Inf will be represented
+// as a JSON string with the values "NaN", "Infinity", and "-Infinity".
+func Float(n float64) Token {
+ switch {
+ case math.Float64bits(n) == 0:
+ return zeroNumber
+ case math.IsNaN(n):
+ return nanString
+ case math.IsInf(n, +1):
+ return pinfString
+ case math.IsInf(n, -1):
+ return ninfString
+ }
+ return Token{str: "f", num: math.Float64bits(n)}
+}
+
+// Int constructs a Token representing a JSON number from an int64.
+func Int(n int64) Token {
+ if n == 0 {
+ return zeroNumber
+ }
+ return Token{str: "i", num: uint64(n)}
+}
+
+// Uint constructs a Token representing a JSON number from a uint64.
+func Uint(n uint64) Token {
+ if n == 0 {
+ return zeroNumber
+ }
+ return Token{str: "u", num: uint64(n)}
+}
+
+// Clone makes a copy of the Token such that its value remains valid
+// even after a subsequent [Decoder.Read] call.
+func (t Token) Clone() Token {
+ // TODO: Allow caller to avoid any allocations?
+ if raw := t.raw; raw != nil {
+ // Avoid copying globals.
+ if t.raw.prevStart == 0 {
+ switch t.raw {
+ case Null.raw:
+ return Null
+ case False.raw:
+ return False
+ case True.raw:
+ return True
+ case BeginObject.raw:
+ return BeginObject
+ case EndObject.raw:
+ return EndObject
+ case BeginArray.raw:
+ return BeginArray
+ case EndArray.raw:
+ return EndArray
+ }
+ }
+
+ if uint64(raw.previousOffsetStart()) != t.num {
+ panic(invalidTokenPanic)
+ }
+ buf := bytes.Clone(raw.previousBuffer())
+ return Token{raw: &decodeBuffer{buf: buf, prevStart: 0, prevEnd: len(buf)}}
+ }
+ return t
+}
+
+// Bool returns the value for a JSON boolean.
+// It panics if the token kind is not a JSON boolean.
+func (t Token) Bool() bool {
+ switch t.raw {
+ case True.raw:
+ return true
+ case False.raw:
+ return false
+ default:
+ panic("invalid JSON token kind: " + t.Kind().String())
+ }
+}
+
+// appendString appends a JSON string to dst and returns it.
+// It panics if t is not a JSON string.
+func (t Token) appendString(dst []byte, flags *jsonflags.Flags) ([]byte, error) {
+ if raw := t.raw; raw != nil {
+ // Handle raw string value.
+ buf := raw.previousBuffer()
+ if Kind(buf[0]) == '"' {
+ if jsonwire.ConsumeSimpleString(buf) == len(buf) {
+ return append(dst, buf...), nil
+ }
+ dst, _, err := jsonwire.ReformatString(dst, buf, flags)
+ return dst, err
+ }
+ } else if len(t.str) != 0 && t.num == 0 {
+ // Handle exact string value.
+ return jsonwire.AppendQuote(dst, t.str, flags)
+ }
+
+ panic("invalid JSON token kind: " + t.Kind().String())
+}
+
+// String returns the unescaped string value for a JSON string.
+// For other JSON kinds, this returns the raw JSON representation.
+func (t Token) String() string {
+ // This is inlinable to take advantage of "function outlining".
+ // This avoids an allocation for the string(b) conversion
+ // if the caller does not use the string in an escaping manner.
+ // See https://blog.filippo.io/efficient-go-apis-with-the-inliner/
+ s, b := t.string()
+ if len(b) > 0 {
+ return string(b)
+ }
+ return s
+}
+func (t Token) string() (string, []byte) {
+ if raw := t.raw; raw != nil {
+ if uint64(raw.previousOffsetStart()) != t.num {
+ panic(invalidTokenPanic)
+ }
+ buf := raw.previousBuffer()
+ if buf[0] == '"' {
+ // TODO: Preserve ValueFlags in Token?
+ isVerbatim := jsonwire.ConsumeSimpleString(buf) == len(buf)
+ return "", jsonwire.UnquoteMayCopy(buf, isVerbatim)
+ }
+ // Handle tokens that are not JSON strings for fmt.Stringer.
+ return "", buf
+ }
+ if len(t.str) != 0 && t.num == 0 {
+ return t.str, nil
+ }
+ // Handle tokens that are not JSON strings for fmt.Stringer.
+ if t.num > 0 {
+ switch t.str[0] {
+ case 'f':
+ return string(jsonwire.AppendFloat(nil, math.Float64frombits(t.num), 64)), nil
+ case 'i':
+ return strconv.FormatInt(int64(t.num), 10), nil
+ case 'u':
+ return strconv.FormatUint(uint64(t.num), 10), nil
+ }
+ }
+ return "", nil
+}
+
+// appendNumber appends a JSON number to dst and returns it.
+// It panics if t is not a JSON number.
+func (t Token) appendNumber(dst []byte, flags *jsonflags.Flags) ([]byte, error) {
+ if raw := t.raw; raw != nil {
+ // Handle raw number value.
+ buf := raw.previousBuffer()
+ if Kind(buf[0]).normalize() == '0' {
+ dst, _, err := jsonwire.ReformatNumber(dst, buf, flags)
+ return dst, err
+ }
+ } else if t.num != 0 {
+ // Handle exact number value.
+ switch t.str[0] {
+ case 'f':
+ return jsonwire.AppendFloat(dst, math.Float64frombits(t.num), 64), nil
+ case 'i':
+ return strconv.AppendInt(dst, int64(t.num), 10), nil
+ case 'u':
+ return strconv.AppendUint(dst, uint64(t.num), 10), nil
+ }
+ }
+
+ panic("invalid JSON token kind: " + t.Kind().String())
+}
+
+// Float returns the floating-point value for a JSON number.
+// It returns a NaN, +Inf, or -Inf value for any JSON string
+// with the values "NaN", "Infinity", or "-Infinity".
+// It panics for all other cases.
+func (t Token) Float() float64 {
+ if raw := t.raw; raw != nil {
+ // Handle raw number value.
+ if uint64(raw.previousOffsetStart()) != t.num {
+ panic(invalidTokenPanic)
+ }
+ buf := raw.previousBuffer()
+ if Kind(buf[0]).normalize() == '0' {
+ fv, _ := jsonwire.ParseFloat(buf, 64)
+ return fv
+ }
+ } else if t.num != 0 {
+ // Handle exact number value.
+ switch t.str[0] {
+ case 'f':
+ return math.Float64frombits(t.num)
+ case 'i':
+ return float64(int64(t.num))
+ case 'u':
+ return float64(uint64(t.num))
+ }
+ }
+
+ // Handle string values with "NaN", "Infinity", or "-Infinity".
+ if t.Kind() == '"' {
+ switch t.String() {
+ case "NaN":
+ return math.NaN()
+ case "Infinity":
+ return math.Inf(+1)
+ case "-Infinity":
+ return math.Inf(-1)
+ }
+ }
+
+ panic("invalid JSON token kind: " + t.Kind().String())
+}
+
+// Int returns the signed integer value for a JSON number.
+// The fractional component of any number is ignored (truncation toward zero).
+// Any number beyond the representation of an int64 will be saturated
+// to the closest representable value.
+// It panics if the token kind is not a JSON number.
+func (t Token) Int() int64 {
+ if raw := t.raw; raw != nil {
+ // Handle raw integer value.
+ if uint64(raw.previousOffsetStart()) != t.num {
+ panic(invalidTokenPanic)
+ }
+ neg := false
+ buf := raw.previousBuffer()
+ if len(buf) > 0 && buf[0] == '-' {
+ neg, buf = true, buf[1:]
+ }
+ if numAbs, ok := jsonwire.ParseUint(buf); ok {
+ if neg {
+ if numAbs > -minInt64 {
+ return minInt64
+ }
+ return -1 * int64(numAbs)
+ } else {
+ if numAbs > +maxInt64 {
+ return maxInt64
+ }
+ return +1 * int64(numAbs)
+ }
+ }
+ } else if t.num != 0 {
+ // Handle exact integer value.
+ switch t.str[0] {
+ case 'i':
+ return int64(t.num)
+ case 'u':
+ if t.num > maxInt64 {
+ return maxInt64
+ }
+ return int64(t.num)
+ }
+ }
+
+ // Handle JSON number that is a floating-point value.
+ if t.Kind() == '0' {
+ switch fv := t.Float(); {
+ case fv >= maxInt64:
+ return maxInt64
+ case fv <= minInt64:
+ return minInt64
+ default:
+ return int64(fv) // truncation toward zero
+ }
+ }
+
+ panic("invalid JSON token kind: " + t.Kind().String())
+}
+
+// Uint returns the unsigned integer value for a JSON number.
+// The fractional component of any number is ignored (truncation toward zero).
+// Any number beyond the representation of an uint64 will be saturated
+// to the closest representable value.
+// It panics if the token kind is not a JSON number.
+func (t Token) Uint() uint64 {
+ // NOTE: This accessor returns 0 for any negative JSON number,
+ // which might be surprising, but is at least consistent with the behavior
+ // of saturating out-of-bounds numbers to the closest representable number.
+
+ if raw := t.raw; raw != nil {
+ // Handle raw integer value.
+ if uint64(raw.previousOffsetStart()) != t.num {
+ panic(invalidTokenPanic)
+ }
+ neg := false
+ buf := raw.previousBuffer()
+ if len(buf) > 0 && buf[0] == '-' {
+ neg, buf = true, buf[1:]
+ }
+ if num, ok := jsonwire.ParseUint(buf); ok {
+ if neg {
+ return minUint64
+ }
+ return num
+ }
+ } else if t.num != 0 {
+ // Handle exact integer value.
+ switch t.str[0] {
+ case 'u':
+ return t.num
+ case 'i':
+ if int64(t.num) < minUint64 {
+ return minUint64
+ }
+ return uint64(int64(t.num))
+ }
+ }
+
+ // Handle JSON number that is a floating-point value.
+ if t.Kind() == '0' {
+ switch fv := t.Float(); {
+ case fv >= maxUint64:
+ return maxUint64
+ case fv <= minUint64:
+ return minUint64
+ default:
+ return uint64(fv) // truncation toward zero
+ }
+ }
+
+ panic("invalid JSON token kind: " + t.Kind().String())
+}
+
+// Kind returns the token kind.
+func (t Token) Kind() Kind {
+ switch {
+ case t.raw != nil:
+ raw := t.raw
+ if uint64(raw.previousOffsetStart()) != t.num {
+ panic(invalidTokenPanic)
+ }
+ return Kind(t.raw.buf[raw.prevStart]).normalize()
+ case t.num != 0:
+ return '0'
+ case len(t.str) != 0:
+ return '"'
+ default:
+ return invalidKind
+ }
+}
+
+// Kind represents each possible JSON token kind with a single byte,
+// which is conveniently the first byte of that kind's grammar
+// with the restriction that numbers always be represented with '0':
+//
+// - 'n': null
+// - 'f': false
+// - 't': true
+// - '"': string
+// - '0': number
+// - '{': object begin
+// - '}': object end
+// - '[': array begin
+// - ']': array end
+//
+// An invalid kind is usually represented using 0,
+// but may be non-zero due to invalid JSON data.
+type Kind byte
+
+const invalidKind Kind = 0
+
+// String prints the kind in a human-readable fashion.
+func (k Kind) String() string {
+ switch k {
+ case 'n':
+ return "null"
+ case 'f':
+ return "false"
+ case 't':
+ return "true"
+ case '"':
+ return "string"
+ case '0':
+ return "number"
+ case '{':
+ return "{"
+ case '}':
+ return "}"
+ case '[':
+ return "["
+ case ']':
+ return "]"
+ default:
+ return ""
+ }
+}
+
+// normalize coalesces all possible starting characters of a number as just '0'.
+func (k Kind) normalize() Kind {
+ if k == '-' || ('0' <= k && k <= '9') {
+ return '0'
+ }
+ return k
+}
diff --git a/internal/json/jsontext/value.go b/internal/json/jsontext/value.go
new file mode 100644
index 0000000000..646de92a8d
--- /dev/null
+++ b/internal/json/jsontext/value.go
@@ -0,0 +1,395 @@
+// Copyright 2020 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+//go:build !goexperiment.jsonv2 || !go1.25
+
+package jsontext
+
+import (
+ "bytes"
+ "errors"
+ "io"
+ "slices"
+ "sync"
+
+ "github.com/quay/clair/v4/internal/json/internal/jsonflags"
+ "github.com/quay/clair/v4/internal/json/internal/jsonwire"
+)
+
+// NOTE: Value is analogous to v1 json.RawMessage.
+
+// AppendFormat formats the JSON value in src and appends it to dst
+// according to the specified options.
+// See [Value.Format] for more details about the formatting behavior.
+//
+// The dst and src may overlap.
+// If an error is reported, then the entirety of src is appended to dst.
+func AppendFormat(dst, src []byte, opts ...Options) ([]byte, error) {
+ e := getBufferedEncoder(opts...)
+ defer putBufferedEncoder(e)
+ e.s.Flags.Set(jsonflags.OmitTopLevelNewline | 1)
+ if err := e.s.WriteValue(src); err != nil {
+ return append(dst, src...), err
+ }
+ return append(dst, e.s.Buf...), nil
+}
+
+// Value represents a single raw JSON value, which may be one of the following:
+// - a JSON literal (i.e., null, true, or false)
+// - a JSON string (e.g., "hello, world!")
+// - a JSON number (e.g., 123.456)
+// - an entire JSON object (e.g., {"fizz":"buzz"} )
+// - an entire JSON array (e.g., [1,2,3] )
+//
+// Value can represent entire array or object values, while [Token] cannot.
+// Value may contain leading and/or trailing whitespace.
+type Value []byte
+
+// Clone returns a copy of v.
+func (v Value) Clone() Value {
+ return bytes.Clone(v)
+}
+
+// String returns the string formatting of v.
+func (v Value) String() string {
+ if v == nil {
+ return "null"
+ }
+ return string(v)
+}
+
+// IsValid reports whether the raw JSON value is syntactically valid
+// according to the specified options.
+//
+// By default (if no options are specified), it validates according to RFC 7493.
+// It verifies whether the input is properly encoded as UTF-8,
+// that escape sequences within strings decode to valid Unicode codepoints, and
+// that all names in each object are unique.
+// It does not verify whether numbers are representable within the limits
+// of any common numeric type (e.g., float64, int64, or uint64).
+//
+// Relevant options include:
+// - [AllowDuplicateNames]
+// - [AllowInvalidUTF8]
+//
+// All other options are ignored.
+func (v Value) IsValid(opts ...Options) bool {
+ // TODO: Document support for [WithByteLimit] and [WithDepthLimit].
+ d := getBufferedDecoder(v, opts...)
+ defer putBufferedDecoder(d)
+ _, errVal := d.ReadValue()
+ _, errEOF := d.ReadToken()
+ return errVal == nil && errEOF == io.EOF
+}
+
+// Format formats the raw JSON value in place.
+//
+// By default (if no options are specified), it validates according to RFC 7493
+// and produces the minimal JSON representation, where
+// all whitespace is elided and JSON strings use the shortest encoding.
+//
+// Relevant options include:
+// - [AllowDuplicateNames]
+// - [AllowInvalidUTF8]
+// - [EscapeForHTML]
+// - [EscapeForJS]
+// - [PreserveRawStrings]
+// - [CanonicalizeRawInts]
+// - [CanonicalizeRawFloats]
+// - [ReorderRawObjects]
+// - [SpaceAfterColon]
+// - [SpaceAfterComma]
+// - [Multiline]
+// - [WithIndent]
+// - [WithIndentPrefix]
+//
+// All other options are ignored.
+//
+// It is guaranteed to succeed if the value is valid according to the same options.
+// If the value is already formatted, then the buffer is not mutated.
+func (v *Value) Format(opts ...Options) error {
+ // TODO: Document support for [WithByteLimit] and [WithDepthLimit].
+ return v.format(opts, nil)
+}
+
+// format accepts two []Options to avoid the allocation of appending them together.
+// It is equivalent to v.Format(append(opts1, opts2...)...).
+func (v *Value) format(opts1, opts2 []Options) error {
+ e := getBufferedEncoder(opts1...)
+ defer putBufferedEncoder(e)
+ e.s.Join(opts2...)
+ e.s.Flags.Set(jsonflags.OmitTopLevelNewline | 1)
+ if err := e.s.WriteValue(*v); err != nil {
+ return err
+ }
+ if !bytes.Equal(*v, e.s.Buf) {
+ *v = append((*v)[:0], e.s.Buf...)
+ }
+ return nil
+}
+
+// Compact removes all whitespace from the raw JSON value.
+//
+// It does not reformat JSON strings or numbers to use any other representation.
+// To maximize the set of JSON values that can be formatted,
+// this permits values with duplicate names and invalid UTF-8.
+//
+// Compact is equivalent to calling [Value.Format] with the following options:
+// - [AllowDuplicateNames](true)
+// - [AllowInvalidUTF8](true)
+// - [PreserveRawStrings](true)
+//
+// Any options specified by the caller are applied after the initial set
+// and may deliberately override prior options.
+func (v *Value) Compact(opts ...Options) error {
+ return v.format([]Options{
+ AllowDuplicateNames(true),
+ AllowInvalidUTF8(true),
+ PreserveRawStrings(true),
+ }, opts)
+}
+
+// Indent reformats the whitespace in the raw JSON value so that each element
+// in a JSON object or array begins on an indented line according to the nesting.
+//
+// It does not reformat JSON strings or numbers to use any other representation.
+// To maximize the set of JSON values that can be formatted,
+// this permits values with duplicate names and invalid UTF-8.
+//
+// Indent is equivalent to calling [Value.Format] with the following options:
+// - [AllowDuplicateNames](true)
+// - [AllowInvalidUTF8](true)
+// - [PreserveRawStrings](true)
+// - [Multiline](true)
+//
+// Any options specified by the caller are applied after the initial set
+// and may deliberately override prior options.
+func (v *Value) Indent(opts ...Options) error {
+ return v.format([]Options{
+ AllowDuplicateNames(true),
+ AllowInvalidUTF8(true),
+ PreserveRawStrings(true),
+ Multiline(true),
+ }, opts)
+}
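For comparison, the closest v1 standard-library analogue to Value.Indent is json.Indent, which likewise rewrites only whitespace and never reformats JSON strings or numbers. `indent` is a hypothetical wrapper, not part of the vendored package:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

// indent reformats src with tab indentation using the v1 json.Indent,
// touching only whitespace (strings and numbers pass through verbatim).
func indent(src string) string {
	var buf bytes.Buffer
	if err := json.Indent(&buf, []byte(src), "", "\t"); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() {
	fmt.Println(indent(`{"a":[1,2]}`))
}
```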
+
+// Canonicalize canonicalizes the raw JSON value according to the
+// JSON Canonicalization Scheme (JCS) as defined by RFC 8785,
+// which produces a stable representation of a JSON value.
+//
+// JSON strings are formatted to use their minimal representation,
+// JSON numbers are formatted as double precision numbers according
+// to some stable serialization algorithm.
+// JSON object members are sorted in ascending order by name.
+// All whitespace is removed.
+//
+// The output stability is dependent on the stability of the application data
+// (see RFC 8785, Appendix E). It cannot produce stable output from
+// fundamentally unstable input. For example, if the JSON value
+// contains ephemeral data (e.g., a frequently changing timestamp),
+// then the value is still unstable regardless of whether this is called.
+//
+// Canonicalize is equivalent to calling [Value.Format] with the following options:
+// - [CanonicalizeRawInts](true)
+// - [CanonicalizeRawFloats](true)
+// - [ReorderRawObjects](true)
+//
+// Any options specified by the caller are applied after the initial set
+// and may deliberately override prior options.
+//
+// Note that JCS treats all JSON numbers as IEEE 754 double precision numbers.
+// Any numbers with precision beyond what is representable by that form
+// will lose their precision when canonicalized. For example, integer values
+// beyond ±2⁵³ will lose their precision. To preserve the original representation
+// of JSON integers, additionally set [CanonicalizeRawInts] to false:
+//
+// v.Canonicalize(jsontext.CanonicalizeRawInts(false))
+func (v *Value) Canonicalize(opts ...Options) error {
+ return v.format([]Options{
+ CanonicalizeRawInts(true),
+ CanonicalizeRawFloats(true),
+ ReorderRawObjects(true),
+ }, opts)
+}
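The precision caveat above is easy to demonstrate with plain float64 arithmetic; `lossy` is a hypothetical helper showing the round-trip that JCS effectively performs on every JSON number:

```go
package main

import "fmt"

// lossy round-trips an integer through IEEE 754 binary64,
// which is how JCS models all JSON numbers.
func lossy(n int64) int64 {
	return int64(float64(n))
}

func main() {
	// 2^53 + 1 is the first integer that binary64 cannot represent,
	// so it silently rounds down during canonicalization.
	fmt.Println(lossy(9007199254740993)) // 9007199254740992
}
```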
+
+// MarshalJSON returns v as the JSON encoding of v.
+// It returns the stored value as the raw JSON output without any validation.
+// If v is nil, then this returns a JSON null.
+func (v Value) MarshalJSON() ([]byte, error) {
+ // NOTE: This matches the behavior of v1 json.RawMessage.MarshalJSON.
+ if v == nil {
+ return []byte("null"), nil
+ }
+ return v, nil
+}
+
+// UnmarshalJSON sets v as the JSON encoding of b.
+// It stores a copy of the provided raw JSON input without any validation.
+func (v *Value) UnmarshalJSON(b []byte) error {
+ // NOTE: This matches the behavior of v1 json.RawMessage.UnmarshalJSON.
+ if v == nil {
+ return errors.New("jsontext.Value: UnmarshalJSON on nil pointer")
+ }
+ *v = append((*v)[:0], b...)
+ return nil
+}
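As the NOTE comments say, these methods match v1 json.RawMessage. A small standard-library sketch (with hypothetical helpers `roundTrip` and `marshalNil`) shows the same two behaviors: raw bytes stored verbatim on unmarshal, and nil marshaling as a JSON null:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// roundTrip unmarshals src into a RawMessage, which stores the
// raw JSON bytes verbatim without any validation or reformatting.
func roundTrip(src string) string {
	var msg json.RawMessage
	if err := json.Unmarshal([]byte(src), &msg); err != nil {
		panic(err)
	}
	return string(msg)
}

// marshalNil shows that a nil RawMessage marshals as a JSON null,
// matching Value.MarshalJSON's nil behavior.
func marshalNil() string {
	out, err := json.Marshal(json.RawMessage(nil))
	if err != nil {
		panic(err)
	}
	return string(out)
}

func main() {
	fmt.Println(roundTrip(`{"fizz":"buzz"}`)) // {"fizz":"buzz"}
	fmt.Println(marshalNil())                 // null
}
```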
+
+// Kind returns the starting token kind.
+// For a valid value, this will never include '}' or ']'.
+func (v Value) Kind() Kind {
+ if v := v[jsonwire.ConsumeWhitespace(v):]; len(v) > 0 {
+ return Kind(v[0]).normalize()
+ }
+ return invalidKind
+}
+
+const commaAndWhitespace = ", \n\r\t"
+
+type objectMember struct {
+ // name is the unquoted name.
+ name []byte // e.g., "name"
+ // buffer is the entirety of the raw JSON object member
+ // starting from right after the previous member (or opening '{')
+ // until right after the member value.
+ buffer []byte // e.g., `, \n\r\t"name": "value"`
+}
+
+func (x objectMember) Compare(y objectMember) int {
+ if c := jsonwire.CompareUTF16(x.name, y.name); c != 0 {
+ return c
+ }
+ // With [AllowDuplicateNames] or [AllowInvalidUTF8],
+ // names could be identical, so also sort using the member value.
+ return jsonwire.CompareUTF16(
+ bytes.TrimLeft(x.buffer, commaAndWhitespace),
+ bytes.TrimLeft(y.buffer, commaAndWhitespace))
+}
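The UTF-16 ordering used by CompareUTF16 (per RFC 8785) can differ from byte-wise UTF-8 ordering for characters outside the Basic Multilingual Plane; `utf16Less` is a hypothetical comparator illustrating the classic flip between U+E000 and U+10000:

```go
package main

import (
	"fmt"
	"unicode/utf16"
)

// utf16Less reports whether a sorts before b when compared by
// UTF-16 code units, the ordering RFC 8785 prescribes for member names.
func utf16Less(a, b string) bool {
	ua, ub := utf16.Encode([]rune(a)), utf16.Encode([]rune(b))
	for i := 0; i < len(ua) && i < len(ub); i++ {
		if ua[i] != ub[i] {
			return ua[i] < ub[i]
		}
	}
	return len(ua) < len(ub)
}

func main() {
	a, b := "\U00010000", "\uE000"
	fmt.Println(a < b)           // false: byte-wise UTF-8 puts U+E000 first
	fmt.Println(utf16Less(a, b)) // true: the surrogate 0xD800 sorts first
}
```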
+
+var objectMemberPool = sync.Pool{New: func() any { return new([]objectMember) }}
+
+func getObjectMembers() *[]objectMember {
+ ns := objectMemberPool.Get().(*[]objectMember)
+ *ns = (*ns)[:0]
+ return ns
+}
+func putObjectMembers(ns *[]objectMember) {
+ if cap(*ns) < 1<<10 {
+ clear(*ns) // avoid pinning name and buffer
+ objectMemberPool.Put(ns)
+ }
+}
+
+// mustReorderObjects reorders in-place all object members in a JSON value,
+// which must be valid otherwise it panics.
+func mustReorderObjects(b []byte) {
+ // Obtain a buffered encoder just to use its internal buffer as
+ // a scratch buffer for reordering object members.
+ e2 := getBufferedEncoder()
+ defer putBufferedEncoder(e2)
+
+ // Disable unnecessary checks to syntactically parse the JSON value.
+ d := getBufferedDecoder(b)
+ defer putBufferedDecoder(d)
+ d.s.Flags.Set(jsonflags.AllowDuplicateNames | jsonflags.AllowInvalidUTF8 | 1)
+ mustReorderObjectsFromDecoder(d, &e2.s.Buf) // per RFC 8785, section 3.2.3
+}
+
+// mustReorderObjectsFromDecoder recursively reorders all object members in place
+// according to the ordering specified in RFC 8785, section 3.2.3.
+//
+// Pre-conditions:
+// - The value is valid (i.e., no decoder errors should ever occur).
+// - Initial call is provided a Decoder reading from the start of v.
+//
+// Post-conditions:
+// - Exactly one JSON value is read from the Decoder.
+// - All fully-parsed JSON objects are reordered by directly moving
+// the members in the value buffer.
+//
+// The runtime is approximately O(n·log(n)) + O(m·log(m)),
+// where n is len(v) and m is the total number of object members.
+func mustReorderObjectsFromDecoder(d *Decoder, scratch *[]byte) {
+ switch tok, err := d.ReadToken(); tok.Kind() {
+ case '{':
+ // Iterate and collect the name and offsets for every object member.
+ members := getObjectMembers()
+ defer putObjectMembers(members)
+ var prevMember objectMember
+ isSorted := true
+
+ beforeBody := d.InputOffset() // offset after '{'
+ for d.PeekKind() != '}' {
+ beforeName := d.InputOffset()
+ var flags jsonwire.ValueFlags
+ name, _ := d.s.ReadValue(&flags)
+ name = jsonwire.UnquoteMayCopy(name, flags.IsVerbatim())
+ mustReorderObjectsFromDecoder(d, scratch)
+ afterValue := d.InputOffset()
+
+ currMember := objectMember{name, d.s.buf[beforeName:afterValue]}
+ if isSorted && len(*members) > 0 {
+ isSorted = objectMember.Compare(prevMember, currMember) < 0
+ }
+ *members = append(*members, currMember)
+ prevMember = currMember
+ }
+ afterBody := d.InputOffset() // offset before '}'
+ d.ReadToken()
+
+ // Sort the members; return early if it's already sorted.
+ if isSorted {
+ return
+ }
+ firstBufferBeforeSorting := (*members)[0].buffer
+ slices.SortFunc(*members, objectMember.Compare)
+ firstBufferAfterSorting := (*members)[0].buffer
+
+ // Append the reordered members to a new buffer,
+ // then copy the reordered members back over the original members.
+ // Avoid swapping in place since each member may be a different size
+ // where moving a member over a smaller member may corrupt the data
+ // for subsequent members before they have been moved.
+ //
+ // The following invariant must hold:
+ // sum([m.after-m.before for m in members]) == afterBody-beforeBody
+ commaAndWhitespacePrefix := func(b []byte) []byte {
+ return b[:len(b)-len(bytes.TrimLeft(b, commaAndWhitespace))]
+ }
+ sorted := (*scratch)[:0]
+ for i, member := range *members {
+ switch {
+ case i == 0 && &member.buffer[0] != &firstBufferBeforeSorting[0]:
+ // First member after sorting is not the first member before sorting,
+ // so use the prefix of the first member before sorting.
+ sorted = append(sorted, commaAndWhitespacePrefix(firstBufferBeforeSorting)...)
+ sorted = append(sorted, bytes.TrimLeft(member.buffer, commaAndWhitespace)...)
+ case i != 0 && &member.buffer[0] == &firstBufferBeforeSorting[0]:
+ // Later member after sorting is the first member before sorting,
+ // so use the prefix of the first member after sorting.
+ sorted = append(sorted, commaAndWhitespacePrefix(firstBufferAfterSorting)...)
+ sorted = append(sorted, bytes.TrimLeft(member.buffer, commaAndWhitespace)...)
+ default:
+ sorted = append(sorted, member.buffer...)
+ }
+ }
+ if int(afterBody-beforeBody) != len(sorted) {
+ panic("BUG: length invariant violated")
+ }
+ copy(d.s.buf[beforeBody:afterBody], sorted)
+
+ // Update scratch buffer to the largest amount ever used.
+ if len(sorted) > len(*scratch) {
+ *scratch = sorted
+ }
+ case '[':
+ for d.PeekKind() != ']' {
+ mustReorderObjectsFromDecoder(d, scratch)
+ }
+ d.ReadToken()
+ default:
+ if err != nil {
+ panic("BUG: " + err.Error())
+ }
+ }
+}
diff --git a/internal/json/options.go b/internal/json/options.go
new file mode 100644
index 0000000000..dfc1b21ce2
--- /dev/null
+++ b/internal/json/options.go
@@ -0,0 +1,289 @@
+// Copyright 2023 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+//go:build !goexperiment.jsonv2 || !go1.25
+
+package json
+
+import (
+ "fmt"
+
+ "github.com/quay/clair/v4/internal/json/internal"
+ "github.com/quay/clair/v4/internal/json/internal/jsonflags"
+ "github.com/quay/clair/v4/internal/json/internal/jsonopts"
+)
+
+// Options configure [Marshal], [MarshalWrite], [MarshalEncode],
+// [Unmarshal], [UnmarshalRead], and [UnmarshalDecode] with specific features.
+// Each function takes in a variadic list of options, where properties
+// set in later options override the value of previously set properties.
+//
+// The Options type is identical to [encoding/json.Options] and
+// [encoding/json/jsontext.Options]. Options from the other packages can
+// be used interchangeably with functionality in this package.
+//
+// Options represent either a singular option or a set of options.
+// It can be functionally thought of as a Go map of option properties
+// (even though the underlying implementation avoids Go maps for performance).
+//
+// The constructors (e.g., [Deterministic]) return a singular option value:
+//
+// opt := Deterministic(true)
+//
+// which is analogous to creating a single entry map:
+//
+// opt := Options{"Deterministic": true}
+//
+// [JoinOptions] composes multiple option values together:
+//
+// out := JoinOptions(opts...)
+//
+// which is analogous to making a new map and copying the options over:
+//
+// out := make(Options)
+// for _, m := range opts {
+// for k, v := range m {
+// out[k] = v
+// }
+// }
+//
+// [GetOption] looks up the value of the options parameter:
+//
+// v, ok := GetOption(opts, Deterministic)
+//
+// which is analogous to a Go map lookup:
+//
+// v, ok := opts["Deterministic"]
+//
+// There is a single Options type, which is used with both marshal and unmarshal.
+// Some options affect both operations, while others only affect one operation:
+//
+// - [StringifyNumbers] affects marshaling and unmarshaling
+// - [Deterministic] affects marshaling only
+// - [FormatNilSliceAsNull] affects marshaling only
+// - [FormatNilMapAsNull] affects marshaling only
+// - [OmitZeroStructFields] affects marshaling only
+// - [MatchCaseInsensitiveNames] affects marshaling and unmarshaling
+// - [DiscardUnknownMembers] affects marshaling only
+// - [RejectUnknownMembers] affects unmarshaling only
+// - [WithMarshalers] affects marshaling only
+// - [WithUnmarshalers] affects unmarshaling only
+//
+// Options that do not affect a particular operation are ignored.
+type Options = jsonopts.Options
+
+// JoinOptions coalesces the provided list of options into a single Options.
+// Properties set in later options override the value of previously set properties.
+func JoinOptions(srcs ...Options) Options {
+ var dst jsonopts.Struct
+ dst.Join(srcs...)
+ return &dst
+}
+
+// GetOption returns the value stored in opts with the provided setter,
+// reporting whether the value is present.
+//
+// Example usage:
+//
+// v, ok := json.GetOption(opts, json.Deterministic)
+//
+// Options are most commonly introspected to alter the JSON representation of
+// [MarshalerTo.MarshalJSONTo] and [UnmarshalerFrom.UnmarshalJSONFrom] methods, and
+// [MarshalToFunc] and [UnmarshalFromFunc] functions.
+// In such cases, the presence bit should generally be ignored.
+func GetOption[T any](opts Options, setter func(T) Options) (T, bool) {
+ return jsonopts.GetOption(opts, setter)
+}
+
+// DefaultOptionsV2 is the full set of all options that define v2 semantics.
+// It is equivalent to all options under [Options], [encoding/json.Options],
+// and [encoding/json/jsontext.Options] being set to false or the zero value,
+// except for the options related to whitespace formatting.
+func DefaultOptionsV2() Options {
+ return &jsonopts.DefaultOptionsV2
+}
+
+// StringifyNumbers specifies that numeric Go types should be marshaled
+// as a JSON string containing the equivalent JSON number value.
+// When unmarshaling, numeric Go types are parsed from a JSON string
+// containing the JSON number without any surrounding whitespace.
+//
+// According to RFC 8259, section 6, a JSON implementation may choose to
+// limit the representation of a JSON number to an IEEE 754 binary64 value.
+// This may cause decoders to lose precision for int64 and uint64 types.
+// Quoting JSON numbers as a JSON string preserves the exact precision.
+//
+// This affects either marshaling or unmarshaling.
+func StringifyNumbers(v bool) Options {
+ if v {
+ return jsonflags.StringifyNumbers | 1
+ } else {
+ return jsonflags.StringifyNumbers | 0
+ }
+}
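The precision argument above is the same one behind the v1 `,string` field tag, which quotes a single field's number the way StringifyNumbers quotes all of them: the value travels as a JSON string and never passes through a float64. `Payload` and `decodeID` below are hypothetical:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Payload quotes its ID on the wire via the v1 `,string` tag,
// the per-field equivalent of StringifyNumbers.
type Payload struct {
	ID uint64 `json:"id,string"`
}

// decodeID unmarshals src and returns the ID with full uint64 precision.
func decodeID(src string) uint64 {
	var p Payload
	if err := json.Unmarshal([]byte(src), &p); err != nil {
		panic(err)
	}
	return p.ID
}

func main() {
	// math.MaxUint64 cannot round-trip through float64,
	// but survives intact as a quoted JSON number.
	fmt.Println(decodeID(`{"id":"18446744073709551615"}`))
}
```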
+
+// Deterministic specifies that the same input value will be serialized
+// as the exact same output bytes. Different processes of
+// the same program will serialize equal values to the same bytes,
+// but different versions of the same program are not guaranteed
+// to produce the exact same sequence of bytes.
+//
+// This only affects marshaling and is ignored when unmarshaling.
+func Deterministic(v bool) Options {
+ if v {
+ return jsonflags.Deterministic | 1
+ } else {
+ return jsonflags.Deterministic | 0
+ }
+}
+
+// FormatNilSliceAsNull specifies that a nil Go slice should marshal as a
+// JSON null instead of the default representation as an empty JSON array
+// (or an empty JSON string in the case of ~[]byte).
+// Slice fields explicitly marked with `format:emitempty` still marshal
+// as an empty JSON array.
+//
+// This only affects marshaling and is ignored when unmarshaling.
+func FormatNilSliceAsNull(v bool) Options {
+ if v {
+ return jsonflags.FormatNilSliceAsNull | 1
+ } else {
+ return jsonflags.FormatNilSliceAsNull | 0
+ }
+}
+
+// FormatNilMapAsNull specifies that a nil Go map should marshal as a
+// JSON null instead of the default representation as an empty JSON object.
+// Map fields explicitly marked with `format:emitempty` still marshal
+// as an empty JSON object.
+//
+// This only affects marshaling and is ignored when unmarshaling.
+func FormatNilMapAsNull(v bool) Options {
+ if v {
+ return jsonflags.FormatNilMapAsNull | 1
+ } else {
+ return jsonflags.FormatNilMapAsNull | 0
+ }
+}
+
+// OmitZeroStructFields specifies that a Go struct should marshal in such a way
+// that all struct fields that are zero are omitted from the marshaled output
+// if the value is zero as determined by the "IsZero() bool" method if present,
+// otherwise based on whether the field is the zero Go value.
+// This is semantically equivalent to specifying the `omitzero` tag option
+// on every field in a Go struct.
+//
+// This only affects marshaling and is ignored when unmarshaling.
+func OmitZeroStructFields(v bool) Options {
+ if v {
+ return jsonflags.OmitZeroStructFields | 1
+ } else {
+ return jsonflags.OmitZeroStructFields | 0
+ }
+}
+
+// MatchCaseInsensitiveNames specifies that JSON object members are matched
+// against Go struct fields using a case-insensitive match of the name.
+// Go struct fields explicitly marked with `case:strict` or `case:ignore`
+// always use case-sensitive (or case-insensitive) name matching,
+// regardless of the value of this option.
+//
+// This affects either marshaling or unmarshaling.
+// For marshaling, this option may alter the detection of duplicate names
+// (assuming [jsontext.AllowDuplicateNames] is false) from inlined fields
+// if it matches one of the declared fields in the Go struct.
+func MatchCaseInsensitiveNames(v bool) Options {
+ if v {
+ return jsonflags.MatchCaseInsensitiveNames | 1
+ } else {
+ return jsonflags.MatchCaseInsensitiveNames | 0
+ }
+}
+
+// DiscardUnknownMembers specifies that marshaling should ignore any
+// JSON object members stored in Go struct fields dedicated to storing
+// unknown JSON object members.
+//
+// This only affects marshaling and is ignored when unmarshaling.
+func DiscardUnknownMembers(v bool) Options {
+ if v {
+ return jsonflags.DiscardUnknownMembers | 1
+ } else {
+ return jsonflags.DiscardUnknownMembers | 0
+ }
+}
+
+// RejectUnknownMembers specifies that unknown members should be rejected
+// when unmarshaling a JSON object, regardless of whether there is a field
+// to store unknown members.
+//
+// This only affects unmarshaling and is ignored when marshaling.
+func RejectUnknownMembers(v bool) Options {
+ if v {
+ return jsonflags.RejectUnknownMembers | 1
+ } else {
+ return jsonflags.RejectUnknownMembers | 0
+ }
+}
+
+// WithMarshalers specifies a list of type-specific marshalers to use,
+// which can be used to override the default marshal behavior for values
+// of particular types.
+//
+// This only affects marshaling and is ignored when unmarshaling.
+func WithMarshalers(v *Marshalers) Options {
+ return (*marshalersOption)(v)
+}
+
+// WithUnmarshalers specifies a list of type-specific unmarshalers to use,
+// which can be used to override the default unmarshal behavior for values
+// of particular types.
+//
+// This only affects unmarshaling and is ignored when marshaling.
+func WithUnmarshalers(v *Unmarshalers) Options {
+ return (*unmarshalersOption)(v)
+}
+
+// These option types are declared here instead of "jsonopts"
+// to avoid a dependency on "reflect" from "jsonopts".
+type (
+ marshalersOption Marshalers
+ unmarshalersOption Unmarshalers
+)
+
+func (*marshalersOption) JSONOptions(internal.NotForPublicUse) {}
+func (*unmarshalersOption) JSONOptions(internal.NotForPublicUse) {}
+
+// Inject support into "jsonopts" to handle these types.
+func init() {
+ jsonopts.GetUnknownOption = func(src jsonopts.Struct, zero jsonopts.Options) (any, bool) {
+ switch zero.(type) {
+ case *marshalersOption:
+ if !src.Flags.Has(jsonflags.Marshalers) {
+ return (*Marshalers)(nil), false
+ }
+ return src.Marshalers.(*Marshalers), true
+ case *unmarshalersOption:
+ if !src.Flags.Has(jsonflags.Unmarshalers) {
+ return (*Unmarshalers)(nil), false
+ }
+ return src.Unmarshalers.(*Unmarshalers), true
+ default:
+ panic(fmt.Sprintf("unknown option %T", zero))
+ }
+ }
+ jsonopts.JoinUnknownOption = func(dst jsonopts.Struct, src jsonopts.Options) jsonopts.Struct {
+ switch src := src.(type) {
+ case *marshalersOption:
+ dst.Flags.Set(jsonflags.Marshalers | 1)
+ dst.Marshalers = (*Marshalers)(src)
+ case *unmarshalersOption:
+ dst.Flags.Set(jsonflags.Unmarshalers | 1)
+ dst.Unmarshalers = (*Unmarshalers)(src)
+ default:
+ panic(fmt.Sprintf("unknown option %T", src))
+ }
+ return dst
+ }
+}
diff --git a/internal/vendor_json.sh b/internal/vendor_json.sh
new file mode 100755
index 0000000000..ea83f30b87
--- /dev/null
+++ b/internal/vendor_json.sh
@@ -0,0 +1,39 @@
+#!/usr/bin/env bash
+# This is a port of the migrate.sh from go-json-experiment.
+set -euo pipefail
+
+GOROOT=$(go env GOROOT)
+JSONROOT=$(realpath json)
+echo copying from "$GOROOT" '('$(go version)')'
+
+#rm -r $JSONROOT ||:
+#mkdir -p $JSONROOT
+rsync \
+ --recursive --verbose --delete \
+ --exclude={**/testdata,**/jsontest,*_test.go} \
+ $GOROOT/src/encoding/json/{v2/,internal,jsontext} \
+ $JSONROOT/
+
+sedscript=$(mktemp --tmpdir migrate_jsonv2.XXXXXX)
+cat <<'.' >$sedscript
+s/go:build goexperiment.jsonv2$/go:build !goexperiment.jsonv2 || !go1.25/;
+s|"encoding/json/v2"|"github.com/quay/clair/v4/internal/json"|;
+s|"encoding/json/internal"|"github.com/quay/clair/v4/internal/json/internal"|;
+s|"encoding/json/internal/jsonflags"|"github.com/quay/clair/v4/internal/json/internal/jsonflags"|;
+s|"encoding/json/internal/jsonopts"|"github.com/quay/clair/v4/internal/json/internal/jsonopts"|;
+s|"encoding/json/internal/jsonwire"|"github.com/quay/clair/v4/internal/json/internal/jsonwire"|;
+s|"encoding/json/jsontext"|"github.com/quay/clair/v4/internal/json/jsontext"|;
+.
+trap "rm -f '$sedscript'" EXIT
+
+find "$JSONROOT" \
+ -type f -name '*.go' \
+ -exec sed -f "$sedscript" -i '{}' \; \
+ -exec goimports -w '{}' \+
+find "$JSONROOT" \
+ -type f -name 'doc.go' \
+ -exec sed -i '/This package .* is experimental/,+4d' '{}' \+
+
+go run alias_gen.go "encoding/json/v2" $JSONROOT
+go run alias_gen.go "encoding/json/jsontext" $JSONROOT/jsontext
+go test -run none $JSONROOT/...
diff --git a/openapi.yaml b/openapi.yaml
deleted file mode 100644
index 81fab1f116..0000000000
--- a/openapi.yaml
+++ /dev/null
@@ -1,934 +0,0 @@
----
-openapi: "3.0.2"
-info:
- title: "ClairV4"
- description: >-
- ClairV4 is a set of cooperating microservices which scan, index, and
- match your container's content with known vulnerabilities.
- version: "1.1"
- termsOfService: ""
- contact:
- name: "Clair Team"
- url: "http://github.com/quay/clair"
- email: "quay-devel@redhat.com"
- license:
- name: "Apache License 2.0"
- url: "http://www.apache.org/licenses/"
-
-paths:
- /notifier/api/v1/notification/{notification_id}:
- delete:
- tags:
- - Notifier
- operationId: "DeleteNotification"
- description: >-
- Issues a delete of the provided notification id and all associated
- notifications. After this delete clients will no longer be able to
- retrieve notifications.
- parameters:
- - in: path
- name: notification_id
- schema:
- type: string
- description: "A notification ID returned by a callback"
- responses:
- 200:
- description: "OK"
- 400:
- $ref: '#/components/responses/BadRequest'
- 405:
- $ref: '#/components/responses/MethodNotAllowed'
- 500:
- $ref: '#/components/responses/InternalServerError'
- get:
- tags:
- - Notifier
- operationId: "GetNotification"
- summary: Retrieve a paginated result of notifications for the provided id.
- description: >-
- By performing a GET with a notification_id as a path parameter, the
- client will retrieve a paginated response of notification objects.
- parameters:
- - in: path
- name: notification_id
- schema:
- type: string
- description: "A notification ID returned by a callback"
- - in: query
- name: page_size
- schema:
- type: int
- description: >-
- The maximum number of notifications to deliver in a single page.
- - in: query
- name: next
- schema:
- type: string
- description: >-
- The next page to fetch via id. Typically this number is provided
- on initial response in the page.next field.
- The first GET request may omit this field.
- responses:
- 200:
- description: "A paginated list of notifications"
- content:
- application/json:
- schema:
- $ref: '#/components/schemas/PagedNotifications'
- 400:
- $ref: '#/components/responses/BadRequest'
- 405:
- $ref: '#/components/responses/MethodNotAllowed'
- 500:
- $ref: '#/components/responses/InternalServerError'
-
- /indexer/api/v1/index_report:
- post:
- tags:
- - Indexer
- operationId: "Index"
- summary: "Index the contents of a Manifest"
- description: >-
- By submitting a Manifest object to this endpoint Clair will fetch the
- layers, scan each layer's contents, and provide an index of discovered
- packages, repository and distribution information.
- requestBody:
- required: true
- content:
- application/json:
- schema:
- $ref: '#/components/schemas/Manifest'
- responses:
- 201:
- description: IndexReport Created
- content:
- application/json:
- schema:
- $ref: '#/components/schemas/IndexReport'
- 400:
- $ref: '#/components/responses/BadRequest'
- 405:
- $ref: '#/components/responses/MethodNotAllowed'
- 500:
- $ref: '#/components/responses/InternalServerError'
- delete:
- tags:
- - Indexer
- operationId: "DeleteManifests"
- summary: >-
- Delete the IndexReport and associated information for the given
- Manifest hashes, if they exist.
- description: >-
- Given a Manifest's content addressable hash, any data related to it
- will be removed if it exists.
- requestBody:
- required: true
- content:
- application/json:
- schema:
- $ref: '#/components/schemas/BulkDelete'
- responses:
- 200:
- description: "OK"
- content:
- application/json:
- schema:
- $ref: '#/components/schemas/BulkDelete'
- 400:
- $ref: '#/components/responses/BadRequest'
- 500:
- $ref: '#/components/responses/InternalServerError'
-
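Constructing the request body for the Index operation amounts to building a Manifest object with its required fields. A sketch under placeholder values; the digests, storage URL, and token below are illustrative, not real blobs:

```python
import json

# Illustrative request body for POST /indexer/api/v1/index_report.
# The digests, storage URL, and bearer token are placeholders.
manifest = {
    "hash": "sha256:fc84b5febd328eccaa913807716887b3eb5ed08bc22cc6933a9ebf82766725e3",
    "layers": [
        # Layer order must match the original container's layer order.
        {
            "hash": "sha256:2f077db56abccc19f16f140f629ae98e904b4b7d563957a7fc319bd11b82ba36",
            "uri": "https://storage.example.com/blob/2f077db56abccc19f16f140f629ae98e904b4b7d563957a7fc319bd11b82ba36",
            "headers": {"Authorization": ["Bearer <token>"]},
        },
    ],
}

# hash and layers are required on the Manifest; hash, uri, and headers
# are required on each Layer.
assert {"hash", "layers"} <= manifest.keys()
assert all({"hash", "uri", "headers"} <= layer.keys()
           for layer in manifest["layers"])

body = json.dumps(manifest)  # POST with Content-Type: application/json
```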
- /indexer/api/v1/index_report/{manifest_hash}:
- delete:
- tags:
- - Indexer
- operationId: "DeleteManifest"
- summary: >-
- Delete the IndexReport and associated information for the given
- Manifest hash, if it exists.
- description: >-
- Given a Manifest's content addressable hash, any data related to it
- will be removed if it exists.
- parameters:
- - name: manifest_hash
- in: path
- description: >-
- A digest of a manifest that has been indexed prior to this
- request.
- required: true
- schema:
- $ref: '#/components/schemas/Digest'
- responses:
- 204:
- description: "OK"
- 400:
- $ref: '#/components/responses/BadRequest'
- 500:
- $ref: '#/components/responses/InternalServerError'
- get:
- tags:
- - Indexer
- operationId: "GetIndexReport"
- summary: "Retrieve an IndexReport for the given Manifest hash if exists."
- description: >-
- Given a Manifest's content addressable hash, an IndexReport will
- be retrieved, if it exists.
- parameters:
- - name: manifest_hash
- in: path
- description: >-
- A digest of a manifest that has been indexed prior to this
- request.
- required: true
- schema:
- $ref: '#/components/schemas/Digest'
- responses:
- 200:
- description: IndexReport retrieved
- content:
- application/json:
- schema:
- $ref: '#/components/schemas/IndexReport'
- 400:
- $ref: '#/components/responses/BadRequest'
- 404:
- $ref: '#/components/responses/NotFound'
- 405:
- $ref: '#/components/responses/MethodNotAllowed'
- 500:
- $ref: '#/components/responses/InternalServerError'
-
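A client that polls for an IndexReport will typically gate further work on the `state`, `success`, and `err` fields. A minimal sketch over a canned report in the shape the schema describes (the helper name `index_succeeded` is illustrative):

```python
# A canned IndexReport; a real client would receive this from
# GET /indexer/api/v1/index_report/{manifest_hash}.
report = {
    "manifest_hash": "sha256:35c102085707f703de2d9eaad8752d6fe1b8f02b5d2149f1d8357c9cc7fb7d0a",
    "state": "IndexFinished",
    "packages": {"10": {"id": "10", "name": "libapt-pkg5.0", "version": "1.6.11"}},
    "distributions": {"1": {"id": "1", "did": "ubuntu", "name": "Ubuntu"}},
    "environments": {"10": [{
        "package_db": "var/lib/dpkg/status",
        "introduced_in": "sha256:35c102085707f703de2d9eaad8752d6fe1b8f02b5d2149f1d8357c9cc7fb7d0a",
        "distribution_id": "1",
    }]},
    "success": True,
    "err": "",
}

def index_succeeded(report):
    # "IndexFinished" matches the state example in the schema; treat any
    # other state, a false success flag, or a non-empty err as not ready.
    return (report["success"]
            and not report["err"]
            and report["state"] == "IndexFinished")

ready = index_succeeded(report)
```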
- /matcher/api/v1/vulnerability_report/{manifest_hash}:
- get:
- tags:
- - Matcher
- operationId: "GetVulnerabilityReport"
- summary: >-
- Retrieve a VulnerabilityReport for a given manifest's content
- addressable hash.
- description: >-
- Given a Manifest's content addressable hash, a VulnerabilityReport
- will be created. The Manifest **must** have been Indexed first
- via the Index endpoint.
- parameters:
- - name: manifest_hash
- in: path
- description: >-
- A digest of a manifest that has been indexed prior to this
- request.
- required: true
- schema:
- $ref: '#/components/schemas/Digest'
- responses:
- 201:
- description: VulnerabilityReport Created
- content:
- application/json:
- schema:
- $ref: '#/components/schemas/VulnerabilityReport'
- 400:
- $ref: '#/components/responses/BadRequest'
- 404:
- $ref: '#/components/responses/NotFound'
- 405:
- $ref: '#/components/responses/MethodNotAllowed'
- 500:
- $ref: '#/components/responses/InternalServerError'
-
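The VulnerabilityReport links its maps by id: `package_vulnerabilities` maps a `Package.id` to a list of `Vulnerability.id`s, which can be resolved against the `packages` and `vulnerabilities` maps. A sketch of that join over minimal canned data (the helper name `affected` is illustrative):

```python
# Joining package_vulnerabilities with the packages and vulnerabilities
# maps to list affected packages by name. The report dict mirrors the
# VulnerabilityReport schema with minimal canned data.
report = {
    "packages": {"10": {"id": "10", "name": "libapt-pkg5.0", "version": "1.6.11"}},
    "vulnerabilities": {"356835": {"id": "356835", "name": "CVE-2009-5155",
                                   "normalized_severity": "Low"}},
    "package_vulnerabilities": {"10": ["356835"]},
}

def affected(report):
    """Return (package name, vulnerability name, severity) triples."""
    out = []
    for pkg_id, vuln_ids in report["package_vulnerabilities"].items():
        pkg = report["packages"][pkg_id]
        for vid in vuln_ids:
            v = report["vulnerabilities"][vid]
            out.append((pkg["name"], v["name"], v["normalized_severity"]))
    return out

rows = affected(report)
```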
- /indexer/api/v1/index_state:
- get:
- tags:
- - Indexer
- operationId: IndexState
- summary: Report the indexer's internal configuration and state.
- description: >-
- The index state endpoint returns a JSON structure indicating the
- indexer's internal configuration state.
-
- A client may be interested in this as a signal that manifests may need
- to be re-indexed.
- responses:
- 200:
- description: Indexer State
- headers:
- Etag:
- description: 'Entity Tag'
- schema: {type: string}
- content:
- application/json:
- schema:
- $ref: '#/components/schemas/State'
- 304:
- description: Indexer State Unchanged
-
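Since the endpoint returns an Etag and a 304 when the state is unchanged, a poller can send the last seen value back as `If-None-Match` and only re-index on a 200. A sketch with the HTTP layer stubbed (`get_state` and `CURRENT_STATE` are illustrative names, not part of any client library):

```python
# Sketch of Etag-based polling of /indexer/api/v1/index_state. get_state
# stands in for a real HTTP GET; a real client would send the previously
# seen Etag in an If-None-Match header and treat 304 as "unchanged".
CURRENT_STATE = "aae368a064d7c5a433d0bf2c4f5554cc"

def get_state(if_none_match=None):
    if if_none_match == CURRENT_STATE:
        return 304, None  # Indexer State Unchanged
    return 200, {"state": CURRENT_STATE}

status1, body1 = get_state()                                # first poll
status2, body2 = get_state(if_none_match=body1["state"])    # unchanged
```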
-components:
- responses:
- BadRequest:
- description: Bad Request
- content:
- application/json:
- schema:
- $ref: '#/components/schemas/Error'
-
- MethodNotAllowed:
- description: Method Not Allowed
- content:
- application/json:
- schema:
- $ref: '#/components/schemas/Error'
-
- InternalServerError:
- description: Internal Server Error
- content:
- application/json:
- schema:
- $ref: '#/components/schemas/Error'
-
- NotFound:
- description: Not Found
- content:
- application/json:
- schema:
- $ref: '#/components/schemas/Error'
-
- examples:
- Environment:
- value:
- package_db: "var/lib/dpkg/status"
- introduced_in: >-
- sha256:35c102085707f703de2d9eaad8752d6fe1b8f02b5d2149f1d8357c9cc7fb7d0a
- distribution_id: "1"
-
- Vulnerability:
- value:
- id: "356835"
- updater: ""
- name: "CVE-2009-5155"
- description: >-
- In the GNU C Library (aka glibc or libc6) before 2.28,
- parse_reg_exp in posix/regcomp.c misparses alternatives,
- which allows attackers to cause a denial of service (assertion
- failure and application exit) or trigger an incorrect result
- by attempting a regular-expression match.
- links: >-
- https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2009-5155
- http://people.canonical.com/~ubuntu-security/cve/2009/CVE-2009-5155.html
- https://sourceware.org/bugzilla/show_bug.cgi?id=11053
- https://debbugs.gnu.org/cgi/bugreport.cgi?bug=22793
- https://debbugs.gnu.org/cgi/bugreport.cgi?bug=32806
- https://debbugs.gnu.org/cgi/bugreport.cgi?bug=34238
- https://sourceware.org/bugzilla/show_bug.cgi?id=18986
- severity: "Low"
- normalized_severity: "Low"
- package:
- id: "0"
- name: "glibc"
- version: ""
- kind: ""
- source: null
- package_db: ""
- repository_hint: ""
- dist:
- id: "0"
- did: "ubuntu"
- name: "Ubuntu"
- version: "18.04.3 LTS (Bionic Beaver)"
- version_code_name: "bionic"
- version_id: "18.04"
- arch: ""
- cpe: ""
- pretty_name: ""
- repo:
- id: "0"
- name: "Ubuntu 18.04.3 LTS"
- key: ""
- uri: ""
- issued: "2019-10-12T07:20:50.52Z"
- fixed_in_version: "2.28-0ubuntu1"
-
- Distribution:
- value:
- id: "1"
- did: "ubuntu"
- name: "Ubuntu"
- version: "18.04.3 LTS (Bionic Beaver)"
- version_code_name: "bionic"
- version_id: "18.04"
- arch: ""
- cpe: ""
- pretty_name: "Ubuntu 18.04.3 LTS"
-
- Package:
- value:
- id: "10"
- name: "libapt-pkg5.0"
- version: "1.6.11"
- kind: "binary"
- normalized_version: ""
- arch: "x86"
- module: ""
- cpe: ""
- source:
- id: "9"
- name: "apt"
- version: "1.6.11"
- kind: "source"
- source: null
-
- VulnSummary:
- value:
- name: "CVE-2009-5155"
- description: >-
- In the GNU C Library (aka glibc or libc6) before 2.28,
- parse_reg_exp in posix/regcomp.c misparses alternatives,
- which allows attackers to cause a denial of service (assertion
- failure and application exit) or trigger an incorrect result
- by attempting a regular-expression match.
- normalized_severity: "Low"
- fixed_in_version: "v0.0.1"
- links: "http://link-to-advisory"
- package:
- id: "0"
- name: "glibc"
- version: ""
- kind: ""
- source: null
- package_db: ""
- repository_hint: ""
- dist:
- id: "0"
- did: "ubuntu"
- name: "Ubuntu"
- version: "18.04.3 LTS (Bionic Beaver)"
- version_code_name: "bionic"
- version_id: "18.04"
- arch: ""
- cpe: ""
- pretty_name: ""
- repo:
- id: "0"
- name: "Ubuntu 18.04.3 LTS"
- key: ""
- uri: ""
-
- schemas:
- Page:
- title: Page
- description: >-
- A page object indicating to the client how to retrieve multiple pages of
- a particular entity.
- properties:
- size:
- description: "The maximum number of elements in a page"
- type: integer
- example: 1
- next:
- description: "The next id to submit to the api to continue paging"
- type: string
- example: "1b4d0db2-e757-4150-bbbb-543658144205"
-
- PagedNotifications:
- title: PagedNotifications
- type: object
- description: "A page object followed by a list of notifications"
- properties:
- page:
- description: >-
- A page object informing the client of the next page to retrieve.
- If page.next becomes "-1" the client should stop paging.
- type: object
- example:
- size: 100
- next: "1b4d0db2-e757-4150-bbbb-543658144205"
- notifications:
- description: "A list of notifications within this page"
- type: array
- items:
- $ref: '#/components/schemas/Notification'
-
- Callback:
- title: Callback
- type: object
- description: "A callback for clients to retrieve notifications"
- properties:
- notification_id:
- description: "the unique identifier for this set of notifications"
- type: string
- example: "269886f3-0146-4f08-9bf7-cb1138d48643"
- callback:
- description: "the url where notifications can be retrieved"
- type: string
- example: >-
- http://clair-notifier/notifier/api/v1/notification/269886f3-0146-4f08-9bf7-cb1138d48643
-
- VulnSummary:
- title: VulnSummary
- type: object
- description: "A summary of a vulnerability"
- properties:
- name:
- description: "the vulnerability name"
- type: string
- example: "CVE-2009-5155"
- fixed_in_version:
- description: >-
- The version which the vulnerability is fixed in. Empty if not fixed.
- type: string
- example: "v0.0.1"
- links:
- description: "links to external information about vulnerability"
- type: string
- example: "http://link-to-advisory"
- description:
- description: "the vulnerability name"
- type: string
- example: >-
- In the GNU C Library (aka glibc or libc6) before 2.28,
- parse_reg_exp in posix/regcomp.c misparses alternatives,
- which allows attackers to cause a denial of service (assertion
- failure and application exit) or trigger an incorrect result
- by attempting a regular-expression match.
- normalized_severity:
- description: >-
- A well defined set of severity strings guaranteed to be present.
- type: string
- enum: [Unknown, Negligible, Low, Medium, High, Critical]
- package:
- $ref: '#/components/schemas/Package'
- distribution:
- $ref: '#/components/schemas/Distribution'
- repository:
- $ref: '#/components/schemas/Repository'
-
- Notification:
- title: Notification
- type: object
- description: >-
- A notification expressing a change in a manifest affected by a
- vulnerability.
- properties:
- id:
- description: "a unique identifier for this notification"
- type: string
- example: "5e4b387e-88d3-4364-86fd-063447a6fad2"
- manifest:
- description: >-
- The hash of the manifest affected by the provided vulnerability.
- type: string
- example: >-
- sha256:35c102085707f703de2d9eaad8752d6fe1b8f02b5d2149f1d8357c9cc7fb7d0a
- reason:
- description: "the reason for the notifcation, [added | removed]"
- type: string
- example: "added"
- vulnerability:
- $ref: '#/components/schemas/VulnSummary'
-
- Environment:
- title: Environment
- type: object
- description: "The environment a particular package was discovered in."
- properties:
- package_db:
- description: >-
- The filesystem path or unique identifier of a package database.
- type: string
- example: "var/lib/dpkg/status"
- introduced_in:
- $ref: '#/components/schemas/Digest'
- distribution_id:
- description: >-
- The distribution ID found in an associated IndexReport or
- VulnerabilityReport.
- type: string
- example: "1"
- required:
- - package_db
- - introduced_in
- - distribution_id
-
- IndexReport:
- title: IndexReport
- type: object
- description: >-
- A report of the Index process for a particular manifest. A
- client's usage of this is largely informational. Clair uses this
- report for matching Vulnerabilities.
- properties:
- manifest_hash:
- $ref: '#/components/schemas/Digest'
- state:
- description: "The current state of the index operation"
- type: string
- example: "IndexFinished"
- packages:
- type: object
- description: "A map of Package objects indexed by Package.id"
- example:
- "10":
- $ref: '#/components/examples/Package/value'
- additionalProperties:
- $ref: '#/components/schemas/Package'
- distributions:
- type: object
- description: >-
- A map of Distribution objects keyed by their Distribution.id
- discovered in the manifest.
- example:
- "1":
- $ref: '#/components/examples/Distribution/value'
- additionalProperties:
- $ref: '#/components/schemas/Distribution'
- environments:
- type: object
- description: >-
- A map of lists containing Environment objects keyed by the
- associated Package.id.
- example:
- "10":
- # swagger bug does not allow inline reference here -_-
- # - $ref: '#/components/examples/Environment/value'
- - package_db: "var/lib/dpkg/status"
- introduced_in: >-
- sha256:35c102085707f703de2d9eaad8752d6fe1b8f02b5d2149f1d8357c9cc7fb7d0a
- distribution_id: "1"
-
- additionalProperties:
- type: array
- items:
- $ref: '#/components/schemas/Environment'
- success:
- type: boolean
- description: "A bool indicating succcessful index"
- example: true
- err:
- type: string
- description: "An error message on event of unsuccessful index"
- example: ""
- required:
- - manifest_hash
- - state
- - packages
- - distributions
- - environments
- - success
- - err
-
- VulnerabilityReport:
- title: VulnerabilityReport
- type: object
- description: >-
- A report expressing discovered packages, package environments,
- and package vulnerabilities within a Manifest.
- properties:
- manifest_hash:
- $ref: '#/components/schemas/Digest'
- packages:
- type: object
- description: "A map of Package objects indexed by Package.id"
- example:
- "10":
- $ref: '#/components/examples/Package/value'
- additionalProperties:
- $ref: '#/components/schemas/Package'
- distributions:
- type: object
- description: >-
- A map of Distribution objects indexed by Distribution.id.
- example:
- "1":
- $ref: '#/components/examples/Distribution/value'
- additionalProperties:
- $ref: '#/components/schemas/Distribution'
- environments:
- type: object
- description: "A mapping of Environment lists indexed by Package.id"
- example:
- "10":
- # swagger bug does not allow inline reference here -_-
- # - $ref: '#/components/examples/Environment/value'
- - package_db: "var/lib/dpkg/status"
- introduced_in: >-
- sha256:35c102085707f703de2d9eaad8752d6fe1b8f02b5d2149f1d8357c9cc7fb7d0a
- distribution_id: "1"
- additionalProperties:
- type: array
- items:
- $ref: '#/components/schemas/Environment'
- vulnerabilities:
- description: "A map of Vulnerabilities indexed by Vulnerability.id"
- type: object
- example:
- "356835":
- $ref: '#/components/examples/Vulnerability/value'
- additionalProperties:
- $ref: '#/components/schemas/Vulnerability'
- package_vulnerabilities:
- description: >-
- A mapping of Vulnerability.id lists indexed by Package.id.
- example:
- "10":
- - "356835"
- additionalProperties:
- type: array
- items:
- type: string
- required:
- - manifest_hash
- - packages
- - distributions
- - environments
- - vulnerabilities
- - package_vulnerabilities
-
- Vulnerability:
- title: Vulnerability
- type: object
- description: "A unique vulnerability indexed by Clair"
- example:
- $ref: '#/components/examples/Vulnerability/value'
- properties:
- id:
- description: "A unique ID representing this vulnerability."
- type: string
- updater:
- description: "A unique ID representing this vulnerability."
- type: string
- name:
- description: "Name of this specific vulnerability."
- type: string
- description:
- description: "A description of this specific vulnerability."
- type: string
- links:
- description: >-
- A space-separated list of links to any external information.
- type: string
- severity:
- description: >-
- A severity keyword taken verbatim from the vulnerability source.
- type: string
- normalized_severity:
- description: >-
- A well defined set of severity strings guaranteed to be present.
- type: string
- enum: [Unknown, Negligible, Low, Medium, High, Critical]
- package:
- $ref: '#/components/schemas/Package'
- distribution:
- $ref: '#/components/schemas/Distribution'
- repository:
- $ref: '#/components/schemas/Repository'
- issued:
- description: >-
- The timestamp at which the vulnerability was issued.
- type: string
- range:
- description: >-
- The range of package versions affected by this vulnerability.
- type: string
- fixed_in_version:
- description: "A unique ID representing this vulnerability."
- type: string
- required:
- - id
- - updater
- - name
- - description
- - links
- - severity
- - normalized_severity
- - fixed_in_version
-
- Distribution:
- title: Distribution
- type: object
- description: >-
- An indexed distribution discovered in a layer. See
- https://www.freedesktop.org/software/systemd/man/os-release.html
- for explanations and example of fields.
- example:
- $ref: '#/components/examples/Distribution/value'
- properties:
- id:
- description: "A unique ID representing this distribution"
- type: string
- did:
- type: string
- name:
- type: string
- version:
- type: string
- version_code_name:
- type: string
- version_id:
- type: string
- arch:
- type: string
- cpe:
- type: string
- pretty_name:
- type: string
- required:
- - id
- - did
- - name
- - version
- - version_code_name
- - version_id
- - arch
- - cpe
- - pretty_name
-
- Package:
- title: Package
- type: object
- description: "A package discovered by indexing a Manifest"
- example:
- $ref: '#/components/examples/Package/value'
- properties:
- id:
- description: "A unique ID representing this package"
- type: string
- name:
- description: "Name of the Package"
- type: string
- version:
- description: "Version of the Package"
- type: string
- kind:
- description: "Kind of package. Source | Binary"
- type: string
- source:
- $ref: '#/components/schemas/Package'
- normalized_version:
- $ref: '#/components/schemas/Version'
- arch:
- description: "The package's target system architecture"
- type: string
- module:
- description: "A module further defining a namespace for a package"
- type: string
- cpe:
- description: "A CPE identifying the package"
- type: string
- required:
- - id
- - name
- - version
-
- Repository:
- title: Repository
- type: object
- description: "A package repository"
- properties:
- id:
- type: string
- name:
- type: string
- key:
- type: string
- uri:
- type: string
- cpe:
- type: string
-
- Version:
- title: Version
- type: string
- description: >-
- Version is a normalized claircore version, composed of a "kind" and an
- array of integers such that two versions of the same kind have the
- correct ordering when the integers are compared pair-wise.
- example: >-
- pep440:0.0.0.0.0.0.0.0.0
-
- Manifest:
- title: Manifest
- type: object
- description: >-
- A Manifest representing a container. The 'layers' array must
- preserve the original container's layer order for accurate indexing.
- properties:
- hash:
- $ref: '#/components/schemas/Digest'
- layers:
- type: array
- items:
- $ref: '#/components/schemas/Layer'
- required:
- - hash
- - layers
-
- Layer:
- title: Layer
- type: object
- description: "A Layer within a Manifest and where Clair may retrieve it."
- properties:
- hash:
- $ref: '#/components/schemas/Digest'
- uri:
- type: string
- description: >-
- A URI describing where the layer may be found. Implementations
- MUST support http(s) schemes and MAY support additional
- schemes.
- example: >-
- https://storage.example.com/blob/2f077db56abccc19f16f140f629ae98e904b4b7d563957a7fc319bd11b82ba36
- headers:
- type: object
- description: >-
- A map of arrays of header values keyed by header
- name, e.g. map[string][]string.
- additionalProperties:
- type: array
- items:
- type: string
- required:
- - hash
- - uri
- - headers
-
- BulkDelete:
- title: 'BulkDelete'
- type: array
- description: 'An array of Digests to be deleted.'
- items:
- $ref: '#/components/schemas/Digest'
-
- Error:
- title: Error
- type: object
- description: "A general error schema returned when status is not 200 OK"
- properties:
- code:
- type: string
- description: "a code for this particular error"
- message:
- type: string
- description: "a message with further detail"
-
- State:
- title: State
- type: object
- description: an opaque identifier
- example:
- state: "aae368a064d7c5a433d0bf2c4f5554cc"
- properties:
- state:
- type: string
- description: an opaque identifier
- required:
- - state
-
- Digest:
- title: Digest
- type: string
- description: >-
- A digest string with prefixed algorithm. The format is described here:
- https://github.com/opencontainers/image-spec/blob/master/descriptor.md#digests
-
- Digests are used throughout the API to identify Layers and Manifests.
- example: >-
- sha256:fc84b5febd328eccaa913807716887b3eb5ed08bc22cc6933a9ebf82766725e3
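Since Digests follow the OCI descriptor format of `algorithm:hex`, computing one for a blob is a one-liner over the standard hash libraries. A sketch (the function name `digest` is illustrative):

```python
import hashlib

# Compute a Digest string ("algorithm:hex-encoded-hash") for a blob, per
# the OCI descriptor format referenced above.
def digest(blob: bytes, algorithm: str = "sha256") -> str:
    h = hashlib.new(algorithm)
    h.update(blob)
    return f"{algorithm}:{h.hexdigest()}"

d = digest(b"")  # digest of the empty blob
```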