diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 5d3f99b839..ccd63e3557 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -227,7 +227,7 @@ Semantic conventions are validated for name formatting and backward compatibilit
 Here's [the full list of compatibility checks](./policies/compatibility.rego).
 
 Removing attributes, metrics, or enum members is not allowed, they should be deprecated instead.
-It applies to stable and experimental conventions and prevents semantic conventions auto-generated libraries from introducing breaking changes.
+It applies to stable and unstable conventions and prevents auto-generated semantic convention libraries from introducing breaking changes.
 
 You can run backward compatibility check (along with other policies) in all yaml files with the following command:
 
diff --git a/docs/general/semantic-convention-groups.md b/docs/general/semantic-convention-groups.md
index 137d7eaf12..54b57d2387 100644
--- a/docs/general/semantic-convention-groups.md
+++ b/docs/general/semantic-convention-groups.md
@@ -35,13 +35,13 @@ for the details.
 
 ## Group Stability
 
-
+Semantic Convention groups can have the following [stability levels][MaturityLevel]:
+`development`, `alpha`, `beta`, `release_candidate`, and `stable`.
-Semantic Convention groups can be `stable` (corresponds to
-[Stable maturity level][MaturityLevel]) or `experimental` (corresponds to [Development maturity level][MaturityLevel])
-if stability level is not specified, it's assumed to be `experimental`.
+A stability level is required on groups of all types except `attribute_group`.
+If the stability level is not specified, it is assumed to be `development`.
 
-Group stability MUST NOT change from `stable` to `experimental`.
+Group stability MUST NOT change from `stable` to any other level.
 
 Semantic convention group of any stability level MUST NOT be removed to preserve
 code generation and documentation for legacy instrumentations.
 
@@ -60,31 +60,31 @@ Stability guarantees on a group apply to the group properties (such as type, id
 signal-specific properties) as well as overridden properties of stable attributes
 referenced by this group.
 
-Stability guarantees on a group level **do not** apply to experimental attribute references.
+Stability guarantees on a group level **do not** apply to unstable attribute references.
 
-**Experimental groups:**
+**Unstable groups:**
 
-- MAY add or remove references to stable or experimental attributes
+- MAY add or remove references to stable or unstable attributes
 - MAY change requirement level and other properties of attribute references
 
 **Stable groups:**
 
-- MAY add or remove references to experimental attributes with `opt_in`
+- MAY add or remove references to unstable attributes with `opt_in`
   requirement level.
-- SHOULD NOT have references to experimental attributes with requirement level
+- SHOULD NOT have references to unstable attributes with requirement level
   other than `opt_in`.
-  The requirement level of an experimental attribute reference
+  The requirement level of an unstable attribute reference
   MAY be changed when this attribute becomes stable in cases allowed by the
   [Versioning and Stability][Stability].
 - MUST NOT remove references to stable attributes.
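+
+For illustration, a stable group that references an unstable attribute only at the
+`opt_in` level might look like the following sketch (the group and attribute names
+are hypothetical and not part of the registry):
+
+```yaml
+groups:
+  - id: span.foo.client              # hypothetical group
+    type: span
+    span_kind: client
+    stability: stable
+    brief: Describes hypothetical `foo` client spans.
+    attributes:
+      - ref: foo.stable.attribute    # stable attribute reference
+        requirement_level: required
+      - ref: foo.unstable.attribute  # unstable attribute, allowed only as opt_in
+        requirement_level: opt_in
+```
+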
 
-Stable instrumentations MUST NOT report telemetry following the experimental part
-of semantic conventions by default. They MAY support experimental part and allow
+Stable instrumentations MUST NOT report telemetry following the unstable part
+of semantic conventions by default. They MAY support the unstable part and allow
 users to opt into it.
 
 [Stability]: https://opentelemetry.io/docs/specs/otel/versioning-and-stability/#semantic-conventions-stability
-[MaturityLevel]: https://github.com/open-telemetry/oteps/blob/main/text/0232-maturity-of-otel.md
+[MaturityLevel]: https://github.com/open-telemetry/opentelemetry-specification/tree/v1.41.0/oteps/0232-maturity-of-otel.md
 [DocumentStatus]: https://opentelemetry.io/docs/specs/otel/document-status
diff --git a/docs/non-normative/code-generation.md b/docs/non-normative/code-generation.md
index 6d71a18194..abd456e9ba 100644
--- a/docs/non-normative/code-generation.md
+++ b/docs/non-normative/code-generation.md
@@ -28,7 +28,7 @@ Language SIGs that ship semantic conventions library may decide to ship a stable
 Possible solutions include:
 
 - Generate all Semantic Conventions for a given version in specific folder while keeping old versions intact. It is used by [opentelemetry-go](https://github.com/open-telemetry/opentelemetry-go/tree/main/semconv/) but could be problematic if the artifact size is a concern.
-- Follow language-specific conventions to annotate experimental parts. For example, Semantic Conventions in Python puts experimental attributes in `opentelemetry.semconv._incubating` import path which is considered (following Python underscore convention) to be internal and subject to change.
+- Follow language-specific conventions to annotate unstable parts. For example, Semantic Conventions in Python put unstable attributes in the `opentelemetry.semconv._incubating` import path, which is considered (following the Python underscore convention) to be internal and subject to change.
 - Ship two different artifacts: one that contains stable Semantic Conventions and another one with all available conventions. For example, [semantic-conventions in Java](https://github.com/open-telemetry/semantic-conventions-java) are shipped in two artifacts: `opentelemetry-semconv` and `opentelemetry-semconv-incubating`.
 
 > Note:
@@ -37,16 +37,16 @@ Possible solutions include:
 > experimental conventions, the latter would be resolved leading to compilation or runtime issues in the application.
 
 Instrumentation libraries should depend on the stable (part of) semantic convention artifact or copy relevant definitions into their own code base.
-Experimental semantic conventions are intended for end-user applications.
+The unstable semantic conventions artifact is intended for end-user applications.
 
 ### Deprecated Conventions
 
 It's recommended to generate code for deprecated attributes, metrics, and other conventions. Use appropriate annotations to mark them as deprecated.
-Conventions have a `stability` property which provide the stability level at the deprecation time (`experimental` or `stable`) and
+Conventions have a `stability` property which provides the stability level at the time of deprecation (`development`, `alpha`, `beta`, `release_candidate`, or `stable`) and
 the `deprecated` property that describes deprecation reason which can be used to generate documentation.
 
 - Deprecated conventions that reached stability should not be removed without major version update according to SemVer.
-- Conventions that were deprecated while being experimental should still be generated and kept in the preview (part of) semantic conventions artifact. It minimizes runtime issues
+- Conventions that were deprecated while being unstable should still be generated and kept in the preview (part of) semantic conventions artifact. This minimizes runtime issues
 and breaking changes in user applications.
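+
+For illustration, a deprecated attribute definition carrying both properties might look
+like the following sketch (the group, attribute, and replacement names are hypothetical):
+
+```yaml
+groups:
+  - id: registry.foo.deprecated        # hypothetical registry group
+    type: attribute_group
+    brief: Deprecated `foo` attributes.
+    attributes:
+      - id: foo.size
+        type: int
+        stability: development         # stability level at the time of deprecation
+        deprecated: "Replaced by `foo.total_size`."
+        brief: "Deprecated, use `foo.total_size` instead."
+```
+
+Code generators can use `stability` to decide which artifact the definition belongs to and
+surface the `deprecated` text in doc comments and deprecation annotations.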
 
 Keep stable convention definitions inside the preview (part of) semantic conversions artifact. It prevents user code from breaking when semantic convention stabilizes.
 Deprecate stable definitions inside the preview artifact and point users to the stable location in generated documentation.
 
@@ -58,7 +58,7 @@ This section contains suggestions on how to structure semantic convention artifa
 
 - Artifact name:
   - `opentelemetry-semconv` - stable conventions
-  - `opentelemetry-semconv-incubating` - (if applicable) the preview artifact containing all (stable and experimental) conventions
+  - `opentelemetry-semconv-incubating` - (if applicable) the preview artifact containing all (stable and unstable) conventions
 - Namespace: `opentelemetry.semconv` and `opentelemetry.semconv.incubating`
 - All supported Schema URLs should be listed to allow different instrumentations in the same application to provide the exact version of conventions they follow.
 - Attributes, metrics, and other convention definitions should be grouped by the convention type and the root namespace. See the example below:
@@ -181,7 +181,6 @@ Notable changes on helper methods:
 - `attr.brief | to_doc_brief | indent` -> `attr.brief | comment(indent=4)`, check out extensive [comment formatting configuration](https://github.com/open-telemetry/weaver/blob/main/crates/weaver_forge/README.md#comment-filter)
 - stability/deprecation checks:
   - `attribute is stable` if checking one attribute, `attributes | select("stable")` to filter stable attributes
-  - `attribute is experimental` if checking one attribute, `attributes | select("experimental")` to filter experimental attributes
   - `attribute is deprecated` if checking one attribute, `attributes | select("deprecated")` to filter deprecated attributes
 - check if attribute is a template: `attribute.type is template_type`
 - new way to simplify switch-like logic: `key | map_text("map_name")`. Maps can be defined in the weaver config.
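+
+  For example, such a map could be declared in the weaver configuration along the
+  following lines (an illustrative sketch; the map name and entries are hypothetical,
+  check the weaver configuration documentation for the exact schema):
+
+  ```yaml
+  text_maps:
+    java_types:        # hypothetical map name
+      int: int
+      double: double
+      boolean: boolean
+      string: String
+  ```
+
+  A template can then apply it as `{{ attribute.type | map_text("java_types") }}`.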
diff --git a/docs/non-normative/db-migration.md b/docs/non-normative/db-migration.md
index 582397599c..1ec7ce73ce 100644
--- a/docs/non-normative/db-migration.md
+++ b/docs/non-normative/db-migration.md
@@ -94,7 +94,7 @@ See [Metric `db.client.operation.duration` v1.28.0 (RC)](https://github.com/open
 
 ### Experimental connection metrics
 
-Database connection metrics are still experimental, but there have been several changes in the latest release.
+Database connection metrics are not stable yet, but there have been several changes in the latest release.
 
 #### Database client connection count
 
diff --git a/docs/runtime/jvm-metrics.md b/docs/runtime/jvm-metrics.md
index 76d7b2fb7c..d6e395173a 100644
--- a/docs/runtime/jvm-metrics.md
+++ b/docs/runtime/jvm-metrics.md
@@ -433,7 +433,7 @@ Note that the JVM does not provide a definition of what "recent" means.
 
 **Status**: [Development][DocumentStatus]
 
-**Description:** Experimental Java Virtual Machine (JVM) metrics captured under `jvm.`
+**Description:** In-development Java Virtual Machine (JVM) metrics captured under `jvm.`
 
 ### Metric: `jvm.memory.init`
 
diff --git a/docs/runtime/nodejs-metrics.md b/docs/runtime/nodejs-metrics.md
index f205da8990..aeeb7a5b75 100644
--- a/docs/runtime/nodejs-metrics.md
+++ b/docs/runtime/nodejs-metrics.md
@@ -10,30 +10,25 @@ This document describes semantic conventions for Node.js Runtime metrics in Open
 
-- [Development](#development)
-  - [Metric: `nodejs.eventloop.delay.min`](#metric-nodejseventloopdelaymin)
-  - [Metric: `nodejs.eventloop.delay.max`](#metric-nodejseventloopdelaymax)
-  - [Metric: `nodejs.eventloop.delay.mean`](#metric-nodejseventloopdelaymean)
-  - [Metric: `nodejs.eventloop.delay.stddev`](#metric-nodejseventloopdelaystddev)
-  - [Metric: `nodejs.eventloop.delay.p50`](#metric-nodejseventloopdelayp50)
-  - [Metric: `nodejs.eventloop.delay.p90`](#metric-nodejseventloopdelayp90)
-  - [Metric: `nodejs.eventloop.delay.p99`](#metric-nodejseventloopdelayp99)
-  - [Metric: `nodejs.eventloop.utilization`](#metric-nodejseventlooputilization)
-  - [Metric: `nodejs.eventloop.time`](#metric-nodejseventlooptime)
+- [Metric: `nodejs.eventloop.delay.min`](#metric-nodejseventloopdelaymin)
+- [Metric: `nodejs.eventloop.delay.max`](#metric-nodejseventloopdelaymax)
+- [Metric: `nodejs.eventloop.delay.mean`](#metric-nodejseventloopdelaymean)
+- [Metric: `nodejs.eventloop.delay.stddev`](#metric-nodejseventloopdelaystddev)
+- [Metric: `nodejs.eventloop.delay.p50`](#metric-nodejseventloopdelayp50)
+- [Metric: `nodejs.eventloop.delay.p90`](#metric-nodejseventloopdelayp90)
+- [Metric: `nodejs.eventloop.delay.p99`](#metric-nodejseventloopdelayp99)
+- [Metric: `nodejs.eventloop.utilization`](#metric-nodejseventlooputilization)
+- [Metric: `nodejs.eventloop.time`](#metric-nodejseventlooptime)
 
-## Development
-
-**Status**: [Development][DocumentStatus]
-
-**Description:** Experimental Node.js Runtime metrics captured under `nodejs`.
+**Description:** In-development Node.js Runtime metrics captured under `nodejs`.
 
 Note: The metrics for eventloop delay are split into separated values instead of a single histogram, because node runtime only returns single values through [`perf_hooks.monitorEventLoopDelay([options])`][Eventloop] and not the entire histogram, so it's not possible to convert it to an OpenTelemetry histogram.
 
-### Metric: `nodejs.eventloop.delay.min`
+## Metric: `nodejs.eventloop.delay.min`
 
 This metric is [recommended][MetricRecommended].
 
@@ -55,7 +50,7 @@ This metric is [recommended][MetricRecommended].
 
-### Metric: `nodejs.eventloop.delay.max`
+## Metric: `nodejs.eventloop.delay.max`
 
 This metric is [recommended][MetricRecommended].
 
@@ -77,7 +72,7 @@ This metric is [recommended][MetricRecommended].
 
-### Metric: `nodejs.eventloop.delay.mean`
+## Metric: `nodejs.eventloop.delay.mean`
 
 This metric is [recommended][MetricRecommended].
 
@@ -99,7 +94,7 @@ This metric is [recommended][MetricRecommended].
 
-### Metric: `nodejs.eventloop.delay.stddev`
+## Metric: `nodejs.eventloop.delay.stddev`
 
 This metric is [recommended][MetricRecommended].
 
@@ -121,7 +116,7 @@ This metric is [recommended][MetricRecommended].
 
-### Metric: `nodejs.eventloop.delay.p50`
+## Metric: `nodejs.eventloop.delay.p50`
 
 This metric is [recommended][MetricRecommended].
 
@@ -143,7 +138,7 @@ This metric is [recommended][MetricRecommended].
 
-### Metric: `nodejs.eventloop.delay.p90`
+## Metric: `nodejs.eventloop.delay.p90`
 
 This metric is [recommended][MetricRecommended].
 
@@ -165,7 +160,7 @@ This metric is [recommended][MetricRecommended].
 
-### Metric: `nodejs.eventloop.delay.p99`
+## Metric: `nodejs.eventloop.delay.p99`
 
 This metric is [recommended][MetricRecommended].
 
@@ -187,7 +182,7 @@ This metric is [recommended][MetricRecommended].
 
-### Metric: `nodejs.eventloop.utilization`
+## Metric: `nodejs.eventloop.utilization`
 
 This metric is [recommended][MetricRecommended].
 
@@ -209,7 +204,7 @@ This metric is [recommended][MetricRecommended].
 
-### Metric: `nodejs.eventloop.time`
+## Metric: `nodejs.eventloop.time`
 
 This metric is [recommended][MetricRecommended].
 
diff --git a/docs/runtime/v8js-metrics.md b/docs/runtime/v8js-metrics.md
index ac9f36e69a..f6655314f3 100644
--- a/docs/runtime/v8js-metrics.md
+++ b/docs/runtime/v8js-metrics.md
@@ -10,22 +10,17 @@ This document describes semantic conventions for V8 JS Engine Runtime metrics in
 
-- [Development](#development)
-  - [Metric: `v8js.gc.duration`](#metric-v8jsgcduration)
-  - [Metric: `v8js.memory.heap.limit`](#metric-v8jsmemoryheaplimit)
-  - [Metric: `v8js.memory.heap.used`](#metric-v8jsmemoryheapused)
-  - [Metric: `v8js.heap.space.available_size`](#metric-v8jsheapspaceavailable_size)
-  - [Metric: `v8js.heap.space.physical_size`](#metric-v8jsheapspacephysical_size)
+- [Metric: `v8js.gc.duration`](#metric-v8jsgcduration)
+- [Metric: `v8js.memory.heap.limit`](#metric-v8jsmemoryheaplimit)
+- [Metric: `v8js.memory.heap.used`](#metric-v8jsmemoryheapused)
+- [Metric: `v8js.heap.space.available_size`](#metric-v8jsheapspaceavailable_size)
+- [Metric: `v8js.heap.space.physical_size`](#metric-v8jsheapspacephysical_size)
 
-## Development
+**Description:** In-development V8 JS Engine Runtime metrics captured under `v8js`.
 
-**Status**: [Development][DocumentStatus]
-
-**Description:** Experimental V8 JS Engine Runtime metrics captured under `v8js`.
-
-### Metric: `v8js.gc.duration`
+## Metric: `v8js.gc.duration`
 
 This metric is [recommended][MetricRecommended].
 
@@ -66,7 +61,7 @@ of `[ 0.01, 0.1, 1, 10 ]`.
 
-### Metric: `v8js.memory.heap.limit`
+## Metric: `v8js.memory.heap.limit`
 
 This metric is [recommended][MetricRecommended].
 
@@ -106,7 +101,7 @@ This metric is [recommended][MetricRecommended].
 
-### Metric: `v8js.memory.heap.used`
+## Metric: `v8js.memory.heap.used`
 
 This metric is [recommended][MetricRecommended].
 
@@ -146,7 +141,7 @@ This metric is [recommended][MetricRecommended].
 
-### Metric: `v8js.heap.space.available_size`
+## Metric: `v8js.heap.space.available_size`
 
 This metric is [recommended][MetricRecommended].
 
@@ -186,7 +181,7 @@ This metric is [recommended][MetricRecommended].
 
-### Metric: `v8js.heap.space.physical_size`
+## Metric: `v8js.heap.space.physical_size`
 
 This metric is [recommended][MetricRecommended].
diff --git a/policies_test/group_stability_test.rego b/policies_test/group_stability_test.rego
index 25070d49cd..8e9a0a528b 100644
--- a/policies_test/group_stability_test.rego
+++ b/policies_test/group_stability_test.rego
@@ -2,23 +2,29 @@ package after_resolution
 
 import future.keywords
 
 test_fails_on_experimental_not_opt_in_attribute_in_stable_group if {
-    count(deny) == 1 with input as {"groups": [{ "id": "span.foo",
+    experimental_stabilities := ["experimental", "development", "alpha", "beta", "release_candidate"]
+    every stability in experimental_stabilities {
+        count(deny) == 1 with input as {"groups": [{ "id": "span.foo",
+        "type": "span",
+        "stability": "stable",
+        "attributes": [{
+            "name": "test.experimental",
+            "stability": stability,
+            "requirement_level": "required"
+        }]}]}
+    }
+}
+
+test_passes_on_experimental_opt_in_attribute_in_stable_group if {
+    experimental_stabilities := ["experimental", "development", "alpha", "beta", "release_candidate"]
+    every stability in experimental_stabilities {
+        count(deny) == 0 with input as {"groups": [{ "id": "span.foo",
         "type": "span",
         "stability": "stable",
         "attributes": [{
             "name": "test.experimental",
-            "stability": "experimental",
-            "requirement_level": "required"
+            "stability": stability,
+            "requirement_level": "opt_in"
         }]}]}
-}
-
-test_passes_on_experimental_opt_in_attribute_in_stable_group if {
-    count(deny) == 0 with input as {"groups": [{ "id": "span.foo",
-        "type": "span",
-        "stability": "stable",
-        "attributes": [{
-            "name": "test.experimental",
-            "stability": "experimental",
-            "requirement_level": "opt_in"
-        }]}]}
+    }
 }