# Updating experimental -> development in docs #1895

Merged: 2 commits, Feb 13, 2025
2 changes: 1 addition & 1 deletion CONTRIBUTING.md
@@ -227,7 +227,7 @@ Semantic conventions are validated for name formatting and backward compatibilit
Here's [the full list of compatibility checks](./policies/compatibility.rego).

Removing attributes, metrics, or enum members is not allowed; they should be deprecated instead.
It applies to stable and experimental conventions and prevents semantic conventions auto-generated libraries from introducing breaking changes.
It applies to stable and unstable conventions and prevents semantic conventions auto-generated libraries from introducing breaking changes.
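
This policy can be illustrated with a minimal sketch (hypothetical data model and messages; the actual enforcement is the Rego policy linked below): a backward-compatibility check compares two versions of the conventions and flags anything that disappeared outright instead of being deprecated.

```python
# Sketch of a backward-compatibility check: removing a definition is a
# breaking change; the allowed path is to keep it and mark it deprecated.
# (Hypothetical data; the real check lives in policies/compatibility.rego.)

def find_breaking_changes(old: dict, new: dict) -> list[str]:
    """Return ids present in `old` that were removed (not deprecated) in `new`."""
    breaking = []
    for conv_id in old:
        if conv_id not in new:
            breaking.append(f"{conv_id} was removed; deprecate it instead")
    return breaking

old_version = {
    "db.connection_string": {"stability": "development"},
    "server.address": {"stability": "stable"},
}
new_version = {
    # db.connection_string dropped entirely -> breaking
    "server.address": {"stability": "stable"},
}

print(find_breaking_changes(old_version, new_version))
```

A compliant change would instead keep the `db.connection_string` entry and add a `deprecated` note to it.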

You can run the backward compatibility check (along with other policies) on all YAML files with the following command:

28 changes: 14 additions & 14 deletions docs/general/semantic-convention-groups.md
@@ -35,13 +35,13 @@ for the details.

## Group Stability

<!-- TODO: this section will need to change when https://github.com/open-telemetry/semantic-conventions/issues/1096 is implemented -->
Semantic Convention groups can have the following [stability levels][MaturityLevel]:
`development`, `alpha`, `beta`, `release_candidate`, and `stable`.

Semantic Convention groups can be `stable` (corresponds to
[Stable maturity level][MaturityLevel]) or `experimental` (corresponds to [Development maturity level][MaturityLevel])
if stability level is not specified, it's assumed to be `experimental`.
Stability level is required on groups of all types except `attribute_group`.
If stability level is not specified, it's assumed to be `development`.

Group stability MUST NOT change from `stable` to `experimental`.
Group stability MUST NOT change from `stable` to any other level.

Semantic convention group of any stability level MUST NOT be removed
to preserve code generation and documentation for legacy instrumentations.
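
The two rules above (no downgrade from `stable`, no group removal) could be enforced with a check along these lines (a hypothetical sketch, not the repository's actual policy code):

```python
# Sketch of a group-evolution check between two semconv versions.
# Rule 1: a group's stability MUST NOT change from `stable` to any other level.
# Rule 2: a group of any stability level MUST NOT be removed.

def check_group_evolution(old_groups: dict, new_groups: dict) -> list[str]:
    """Flag removed groups and stability downgrades from `stable`."""
    violations = []
    for group_id, old in old_groups.items():
        new = new_groups.get(group_id)
        if new is None:
            violations.append(f"{group_id}: groups MUST NOT be removed")
        elif old["stability"] == "stable" and new["stability"] != "stable":
            violations.append(f"{group_id}: stability MUST NOT change from stable")
    return violations

old = {"span.db": {"stability": "stable"}, "span.foo": {"stability": "development"}}
new = {"span.db": {"stability": "development"}}  # downgraded AND span.foo removed
print(check_group_evolution(old, new))
```
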
@@ -60,31 +60,31 @@ Stability guarantees on a group apply to the group properties (such as type, id and
signal-specific properties) as well as overridden properties of stable attributes
referenced by this group.

Stability guarantees on a group level **do not** apply to experimental attribute references.
Stability guarantees on a group level **do not** apply to unstable attribute references.

**Experimental groups:**
**Unstable groups:**

- MAY add or remove references to stable or experimental attributes
- MAY add or remove references to stable or unstable attributes
- MAY change requirement level and other properties of attribute references

**Stable groups:**

- MAY add or remove references to experimental attributes with `opt_in`
- MAY add or remove references to unstable attributes with `opt_in`
requirement level.
- SHOULD NOT have references to experimental attributes with requirement level
- SHOULD NOT have references to unstable attributes with requirement level
other than `opt_in`.
The requirement level of an experimental attribute reference
The requirement level of an unstable attribute reference
MAY be changed when this attribute becomes stable in cases allowed by the
[Versioning and Stability][Stability].
- MUST NOT remove references to stable attributes.
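
A sketch of how the stable-group rules above might be checked (hypothetical helper; the real enforcement is the Rego policy in this repository):

```python
# A stable group may reference unstable attributes only at `opt_in`
# requirement level; anything stricter should be denied.

UNSTABLE = {"development", "alpha", "beta", "release_candidate", "experimental"}

def deny_reasons(group: dict) -> list[str]:
    """Return policy violations for a single semantic convention group."""
    reasons = []
    if group["stability"] != "stable":
        return reasons  # unstable groups may change their references freely
    for attr in group["attributes"]:
        if attr["stability"] in UNSTABLE and attr["requirement_level"] != "opt_in":
            reasons.append(f"{attr['name']}: unstable reference must be opt_in")
    return reasons

group = {
    "id": "span.foo",
    "stability": "stable",
    "attributes": [
        {"name": "test.stable", "stability": "stable", "requirement_level": "required"},
        {"name": "test.dev", "stability": "development", "requirement_level": "required"},
    ],
}
print(deny_reasons(group))
```
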

Stable instrumentations MUST NOT report telemetry following the experimental part
of semantic conventions by default. They MAY support experimental part and allow
Stable instrumentations MUST NOT report telemetry following the unstable part
of semantic conventions by default. They MAY support the unstable part and allow
users to opt into it.
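
For example (a hedged sketch: the environment variable and attribute names here are illustrative, loosely modeled on the `OTEL_SEMCONV_STABILITY_OPT_IN` pattern used elsewhere in OpenTelemetry), an instrumentation could gate unstable attributes behind an explicit opt-in:

```python
import os

def collect_attributes(opt_in: set[str]) -> dict:
    """Emit stable attributes always; unstable ones only when opted in."""
    attributes = {"server.address": "example.com"}  # stable: always reported
    if "database/dup" in opt_in:  # illustrative opt-in token
        attributes["db.operation.batch.size"] = 10  # unstable: opt-in only
    return attributes

# Users opt in via configuration, e.g. a comma-separated environment variable.
opt_in = set(filter(None, os.environ.get("OTEL_SEMCONV_STABILITY_OPT_IN", "").split(",")))
print(collect_attributes(opt_in))
```
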

<!-- TODO: SchemaURL needs to contain some indication of stability level, e.g. as a suffix -->
<!-- https://github.com/open-telemetry/semantic-conventions/issues/1511 -->

[Stability]: https://opentelemetry.io/docs/specs/otel/versioning-and-stability/#semantic-conventions-stability
[MaturityLevel]: https://github.com/open-telemetry/oteps/blob/main/text/0232-maturity-of-otel.md
[MaturityLevel]: https://github.com/open-telemetry/opentelemetry-specification/tree/v1.41.0/oteps/0232-maturity-of-otel.md
[DocumentStatus]: https://opentelemetry.io/docs/specs/otel/document-status
11 changes: 5 additions & 6 deletions docs/non-normative/code-generation.md
@@ -28,7 +28,7 @@ Language SIGs that ship semantic conventions library may decide to ship a stable
Possible solutions include:

- Generate all Semantic Conventions for a given version in specific folder while keeping old versions intact. It is used by [opentelemetry-go](https://github.com/open-telemetry/opentelemetry-go/tree/main/semconv/) but could be problematic if the artifact size is a concern.
- Follow language-specific conventions to annotate experimental parts. For example, Semantic Conventions in Python puts experimental attributes in `opentelemetry.semconv._incubating` import path which is considered (following Python underscore convention) to be internal and subject to change.
- Follow language-specific conventions to annotate unstable parts. For example, Semantic Conventions in Python puts unstable attributes in `opentelemetry.semconv._incubating` import path which is considered (following Python underscore convention) to be internal and subject to change.
- Ship two different artifacts: one that contains stable Semantic Conventions and another one with all available conventions. For example, [semantic-conventions in Java](https://github.com/open-telemetry/semantic-conventions-java) are shipped in two artifacts: `opentelemetry-semconv` and `opentelemetry-semconv-incubating`.

> Note:
@@ -37,16 +37,16 @@ Possible solutions include:
> experimental conventions, the latter would be resolved leading to compilation or runtime issues in the application.

Instrumentation libraries should depend on the stable (part of) semantic convention artifact or copy relevant definitions into their own code base.
Experimental semantic conventions are intended for end-user applications.
The unstable semantic conventions artifact is intended for end-user applications.

### Deprecated Conventions

It's recommended to generate code for deprecated attributes, metrics, and other conventions. Use appropriate annotations to mark them as deprecated.
Conventions have a `stability` property which provide the stability level at the deprecation time (`experimental` or `stable`) and
Conventions have a `stability` property which provides the stability level at the deprecation time (`development`, `alpha`, `beta`, `release_candidate`, or `stable`) and
the `deprecated` property that describes deprecation reason which can be used to generate documentation.

- Deprecated conventions that reached stability should not be removed without major version update according to SemVer.
- Conventions that were deprecated while being experimental should still be generated and kept in the preview (part of) semantic conventions artifact. It minimizes runtime issues
- Conventions that were deprecated while being unstable should still be generated and kept in the preview (part of) semantic conventions artifact. It minimizes runtime issues
and breaking changes in user applications.
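
As a sketch of what such generated code could look like in Python (attribute names and deprecation reasons here are hypothetical):

```python
import warnings

# Hypothetical generated registry: deprecated entries keep their key so
# existing user code still works, and carry the `deprecated` reason from
# the YAML definition for documentation and runtime warnings.
_DEPRECATED = {
    "DB_CONNECTION_STRING": ("db.connection_string",
                             "Replaced by server.address and server.port."),
}
_STABLE = {"SERVER_ADDRESS": "server.address"}

def attribute_key(name: str) -> str:
    """Resolve a generated constant, warning if it is deprecated."""
    if name in _STABLE:
        return _STABLE[name]
    if name in _DEPRECATED:
        key, reason = _DEPRECATED[name]
        warnings.warn(f"{name} is deprecated: {reason}",
                      DeprecationWarning, stacklevel=2)
        return key  # still resolvable, so user code does not break at runtime
    raise AttributeError(name)

print(attribute_key("DB_CONNECTION_STRING"))
```
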

Keep stable convention definitions inside the preview (part of) semantic conventions artifact. It prevents user code from breaking when a semantic convention stabilizes. Deprecate stable definitions inside the preview artifact and point users to the stable location in generated documentation.
@@ -58,7 +58,7 @@ This section contains suggestions on how to structure semantic convention artifa

- Artifact name:
- `opentelemetry-semconv` - stable conventions
- `opentelemetry-semconv-incubating` - (if applicable) the preview artifact containing all (stable and experimental) conventions
- `opentelemetry-semconv-incubating` - (if applicable) the preview artifact containing all (stable and unstable) conventions
- Namespace: `opentelemetry.semconv` and `opentelemetry.semconv.incubating`
- All supported Schema URLs should be listed to allow different instrumentations in the same application to provide the exact version of conventions they follow.
- Attributes, metrics, and other convention definitions should be grouped by the convention type and the root namespace. See the example below:
@@ -181,7 +181,6 @@ Notable changes on helper methods:
- `attr.brief | to_doc_brief | indent` -> `attr.brief | comment(indent=4)`, check out extensive [comment formatting configuration](https://github.com/open-telemetry/weaver/blob/main/crates/weaver_forge/README.md#comment-filter)
- stability/deprecation checks:
- `attribute is stable` if checking one attribute, `attributes | select("stable")` to filter stable attributes
- `attribute is experimental` if checking one attribute, `attributes | select("experimental")` to filter experimental attributes
- `attribute is deprecated` if checking one attribute, `attributes | select("deprecated")` to filter deprecated attributes
- check if attribute is a template: `attribute.type is template_type`
- new way to simplify switch-like logic: `key | map_text("map_name")`. Maps can be defined in the weaver config.
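
The semantics of the stability and deprecation filters above can be illustrated in plain Python (a sketch of the filters' behavior, not weaver's implementation):

```python
# Rough Python equivalents of the weaver/Jinja helpers:
#   `attributes | select("stable")` -> keep only attributes with that stability
#   `attribute is deprecated`       -> attribute has a truthy `deprecated` note

def select(attributes: list[dict], stability: str) -> list[dict]:
    return [a for a in attributes if a.get("stability") == stability]

def is_deprecated(attribute: dict) -> bool:
    return bool(attribute.get("deprecated"))

attrs = [
    {"name": "a.stable", "stability": "stable"},
    {"name": "a.dev", "stability": "development", "deprecated": "use a.stable"},
]
print([a["name"] for a in select(attrs, "stable")])
print(is_deprecated(attrs[1]))
```
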
2 changes: 1 addition & 1 deletion docs/non-normative/db-migration.md
@@ -94,7 +94,7 @@ See [Metric `db.client.operation.duration` v1.28.0 (RC)](https://github.com/open

### Experimental connection metrics

Database connection metrics are still experimental, but there have been several changes in the latest release.
Database connection metrics are not stable yet, but there have been several changes in the latest release.

#### Database client connection count

2 changes: 1 addition & 1 deletion docs/runtime/jvm-metrics.md
@@ -433,7 +433,7 @@ Note that the JVM does not provide a definition of what "recent" means.

**Status**: [Development][DocumentStatus]

**Description:** Experimental Java Virtual Machine (JVM) metrics captured under `jvm.`
**Description:** In-development Java Virtual Machine (JVM) metrics captured under `jvm.`

### Metric: `jvm.memory.init`

43 changes: 19 additions & 24 deletions docs/runtime/nodejs-metrics.md
@@ -10,30 +10,25 @@ This document describes semantic conventions for Node.js Runtime metrics in Open

<!-- toc -->

- [Development](#development)
- [Metric: `nodejs.eventloop.delay.min`](#metric-nodejseventloopdelaymin)
- [Metric: `nodejs.eventloop.delay.max`](#metric-nodejseventloopdelaymax)
- [Metric: `nodejs.eventloop.delay.mean`](#metric-nodejseventloopdelaymean)
- [Metric: `nodejs.eventloop.delay.stddev`](#metric-nodejseventloopdelaystddev)
- [Metric: `nodejs.eventloop.delay.p50`](#metric-nodejseventloopdelayp50)
- [Metric: `nodejs.eventloop.delay.p90`](#metric-nodejseventloopdelayp90)
- [Metric: `nodejs.eventloop.delay.p99`](#metric-nodejseventloopdelayp99)
- [Metric: `nodejs.eventloop.utilization`](#metric-nodejseventlooputilization)
- [Metric: `nodejs.eventloop.time`](#metric-nodejseventlooptime)
- [Metric: `nodejs.eventloop.delay.min`](#metric-nodejseventloopdelaymin)
- [Metric: `nodejs.eventloop.delay.max`](#metric-nodejseventloopdelaymax)
- [Metric: `nodejs.eventloop.delay.mean`](#metric-nodejseventloopdelaymean)
- [Metric: `nodejs.eventloop.delay.stddev`](#metric-nodejseventloopdelaystddev)
- [Metric: `nodejs.eventloop.delay.p50`](#metric-nodejseventloopdelayp50)
- [Metric: `nodejs.eventloop.delay.p90`](#metric-nodejseventloopdelayp90)
- [Metric: `nodejs.eventloop.delay.p99`](#metric-nodejseventloopdelayp99)
- [Metric: `nodejs.eventloop.utilization`](#metric-nodejseventlooputilization)
- [Metric: `nodejs.eventloop.time`](#metric-nodejseventlooptime)

<!-- tocstop -->

## Development

**Status**: [Development][DocumentStatus]

**Description:** Experimental Node.js Runtime metrics captured under `nodejs`.
**Description:** In-development Node.js Runtime metrics captured under `nodejs`.

Note: The event loop delay metrics are split into separate values instead of a single histogram, because the Node.js runtime
only returns single values through [`perf_hooks.monitorEventLoopDelay([options])`][Eventloop] and not the entire
histogram, so it's not possible to convert them to an OpenTelemetry histogram.

### Metric: `nodejs.eventloop.delay.min`
## Metric: `nodejs.eventloop.delay.min`

This metric is [recommended][MetricRecommended].

@@ -55,7 +50,7 @@ This metric is [recommended][MetricRecommended].
<!-- END AUTOGENERATED TEXT -->
<!-- endsemconv -->

### Metric: `nodejs.eventloop.delay.max`
## Metric: `nodejs.eventloop.delay.max`

This metric is [recommended][MetricRecommended].

@@ -77,7 +72,7 @@ This metric is [recommended][MetricRecommended].
<!-- END AUTOGENERATED TEXT -->
<!-- endsemconv -->

### Metric: `nodejs.eventloop.delay.mean`
## Metric: `nodejs.eventloop.delay.mean`

This metric is [recommended][MetricRecommended].

@@ -99,7 +94,7 @@ This metric is [recommended][MetricRecommended].
<!-- END AUTOGENERATED TEXT -->
<!-- endsemconv -->

### Metric: `nodejs.eventloop.delay.stddev`
## Metric: `nodejs.eventloop.delay.stddev`

This metric is [recommended][MetricRecommended].

@@ -121,7 +116,7 @@ This metric is [recommended][MetricRecommended].
<!-- END AUTOGENERATED TEXT -->
<!-- endsemconv -->

### Metric: `nodejs.eventloop.delay.p50`
## Metric: `nodejs.eventloop.delay.p50`

This metric is [recommended][MetricRecommended].

@@ -143,7 +138,7 @@ This metric is [recommended][MetricRecommended].
<!-- END AUTOGENERATED TEXT -->
<!-- endsemconv -->

### Metric: `nodejs.eventloop.delay.p90`
## Metric: `nodejs.eventloop.delay.p90`

This metric is [recommended][MetricRecommended].

@@ -165,7 +160,7 @@ This metric is [recommended][MetricRecommended].
<!-- END AUTOGENERATED TEXT -->
<!-- endsemconv -->

### Metric: `nodejs.eventloop.delay.p99`
## Metric: `nodejs.eventloop.delay.p99`

This metric is [recommended][MetricRecommended].

@@ -187,7 +182,7 @@ This metric is [recommended][MetricRecommended].
<!-- END AUTOGENERATED TEXT -->
<!-- endsemconv -->

### Metric: `nodejs.eventloop.utilization`
## Metric: `nodejs.eventloop.utilization`

This metric is [recommended][MetricRecommended].

@@ -209,7 +204,7 @@ This metric is [recommended][MetricRecommended].
<!-- END AUTOGENERATED TEXT -->
<!-- endsemconv -->

### Metric: `nodejs.eventloop.time`
## Metric: `nodejs.eventloop.time`

This metric is [recommended][MetricRecommended].

27 changes: 11 additions & 16 deletions docs/runtime/v8js-metrics.md
@@ -10,22 +10,17 @@ This document describes semantic conventions for V8 JS Engine Runtime metrics in

<!-- toc -->

- [Development](#development)
- [Metric: `v8js.gc.duration`](#metric-v8jsgcduration)
- [Metric: `v8js.memory.heap.limit`](#metric-v8jsmemoryheaplimit)
- [Metric: `v8js.memory.heap.used`](#metric-v8jsmemoryheapused)
- [Metric: `v8js.heap.space.available_size`](#metric-v8jsheapspaceavailable_size)
- [Metric: `v8js.heap.space.physical_size`](#metric-v8jsheapspacephysical_size)
- [Metric: `v8js.gc.duration`](#metric-v8jsgcduration)
- [Metric: `v8js.memory.heap.limit`](#metric-v8jsmemoryheaplimit)
- [Metric: `v8js.memory.heap.used`](#metric-v8jsmemoryheapused)
- [Metric: `v8js.heap.space.available_size`](#metric-v8jsheapspaceavailable_size)
- [Metric: `v8js.heap.space.physical_size`](#metric-v8jsheapspacephysical_size)

<!-- tocstop -->

## Development
**Description:** In-development V8 JS Engine Runtime metrics captured under `v8js`.

**Status**: [Development][DocumentStatus]

**Description:** Experimental V8 JS Engine Runtime metrics captured under `v8js`.

### Metric: `v8js.gc.duration`
## Metric: `v8js.gc.duration`

This metric is [recommended][MetricRecommended].

@@ -66,7 +61,7 @@ of `[ 0.01, 0.1, 1, 10 ]`.
<!-- END AUTOGENERATED TEXT -->
<!-- endsemconv -->

### Metric: `v8js.memory.heap.limit`
## Metric: `v8js.memory.heap.limit`

This metric is [recommended][MetricRecommended].

@@ -106,7 +101,7 @@ This metric is [recommended][MetricRecommended].
<!-- END AUTOGENERATED TEXT -->
<!-- endsemconv -->

### Metric: `v8js.memory.heap.used`
## Metric: `v8js.memory.heap.used`

This metric is [recommended][MetricRecommended].

@@ -146,7 +141,7 @@ This metric is [recommended][MetricRecommended].
<!-- END AUTOGENERATED TEXT -->
<!-- endsemconv -->

### Metric: `v8js.heap.space.available_size`
## Metric: `v8js.heap.space.available_size`

This metric is [recommended][MetricRecommended].

@@ -186,7 +181,7 @@ This metric is [recommended][MetricRecommended].
<!-- END AUTOGENERATED TEXT -->
<!-- endsemconv -->

### Metric: `v8js.heap.space.physical_size`
## Metric: `v8js.heap.space.physical_size`

This metric is [recommended][MetricRecommended].

34 changes: 20 additions & 14 deletions policies_test/group_stability_test.rego
@@ -2,23 +2,29 @@ package after_resolution
import future.keywords

test_fails_on_experimental_not_opt_in_attribute_in_stable_group if {
    experimental_stabilities := ["experimental", "development", "alpha", "beta", "release_candidate"]
    every stability in experimental_stabilities {
        count(deny) == 1 with input as {"groups": [{"id": "span.foo",
            "type": "span",
            "stability": "stable",
            "attributes": [{
                "name": "test.experimental",
                "stability": stability,
                "requirement_level": "required"
            }]}]}
    }
}

test_passes_on_experimental_opt_in_attribute_in_stable_group if {
    experimental_stabilities := ["experimental", "development", "alpha", "beta", "release_candidate"]
    every stability in experimental_stabilities {
        count(deny) == 0 with input as {"groups": [{"id": "span.foo",
            "type": "span",
            "stability": "stable",
            "attributes": [{
                "name": "test.experimental",
                "stability": stability,
                "requirement_level": "opt_in"
            }]}]}
    }
}