-
To capture the span ID, you can use the `get_span_id` helper from OpenInference. Here is the relevant code snippet:

```python
from opentelemetry.trace import Span
from openinference.instrumentation.helpers import get_span_id

# Assuming `span` is an instance of `Span`
span_id = get_span_id(span)
print(span_id)
```

Make sure that the span is properly initialized before you read its ID. Additionally, you can refer to the following example from a manually instrumented LangChain application to ensure proper span initialization and attribute setting:

```python
from opentelemetry import trace as trace_api
from opentelemetry.sdk import trace as trace_sdk
from opentelemetry.sdk.trace.export import SimpleSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

# Set up the tracer provider and span processor
tracer_provider = trace_sdk.TracerProvider()
tracer_provider.add_span_processor(
    SimpleSpanProcessor(OTLPSpanExporter("http://localhost:6006/v1/traces"))
)
trace_api.set_tracer_provider(tracer_provider)
tracer = trace_api.get_tracer(__name__)

# Example of starting a span and setting attributes
with tracer.start_as_current_span("LangChain Span") as span:
    span.set_attribute("example.attribute", "value")
    span_id = span.get_span_context().span_id.to_bytes(8, "big").hex()
    print(span_id)
```

Ensure that your environment variables and tracer setup are correctly configured to avoid issues with span context propagation. For additional verification, you can refer to the following test case from OpenInference, which shows the expected hex formatting of span and trace IDs:

```python
from opentelemetry.trace import NonRecordingSpan, SpanContext
from openinference.instrumentation.helpers import get_span_id, get_trace_id

def test_get_span_and_trace_ids() -> None:
    span = NonRecordingSpan(
        SpanContext(
            trace_id=83298525428589810613581548256892812612,
            span_id=8006530202346048876,
            is_remote=False,
        )
    )
    assert get_span_id(span) == "6f1ce8cc7245cd6c"
    assert get_trace_id(span) == "3eaab662c550df264f0fbd19bd8bfd44"
```

This test case demonstrates the correct usage of `get_span_id` and `get_trace_id`. For a complete setup, you can follow the instructions provided in the LangChain integration documentation to ensure that your LangChain application is properly instrumented and connected to the Phoenix server for observability and evaluation [1][2][3][4].
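As a rough sketch of what that documentation covers (assuming the tracer provider configured in the snippet above), auto-instrumenting LangChain comes down to a single call:

```python
from openinference.instrumentation.langchain import LangChainInstrumentor

# Instrument LangChain so every chain/LLM invocation emits OpenInference spans
# through the tracer provider configured above.
LangChainInstrumentor().instrument(tracer_provider=tracer_provider)
```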
-
I figured out the answer to that, after digging a bit deeper in the LangChain instrumentation code. So the solution is:

```python
from openinference.instrumentation.langchain import get_current_span

...

rag_chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | basic_rag_prompt
    | llm
    | {"output": StrOutputParser()}
    | RunnablePassthrough.assign(
        span_id=lambda x: get_current_span().get_span_context().span_id.to_bytes(8, "big").hex()
    )
)
```

Perhaps it's worth adding this to the documentation?
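For context, a minimal usage sketch of this chain (the question string is hypothetical): the final `assign` step means the invocation returns a dict carrying both the parsed output and the span ID, which the front end can hold onto for feedback:

```python
result = rag_chain.invoke("What is covered in the indexed documents?")
answer = result["output"]    # parsed LLM output from StrOutputParser
span_id = result["span_id"]  # hex span ID captured while the chain's span was active
```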
-
I'm trying to capture the span ID so that the front-end application can collect human feedback (following the guide here: https://docs.arize.com/phoenix/tracing/how-to-tracing/capture-feedback).
The problem is that I'm using (for now) a simple LangChain chain which is automatically instrumented using:
and when I try to get the span after the call, I always get a span id of '0000000000000000'.
I tried to add the span extraction into the chain in the following way:
but this ends up giving the same results. Is there a better / smarter way to get the current span_id?
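(For reference, the all-zero value is what OpenTelemetry reports for `INVALID_SPAN`, the fallback returned by `get_current_span()` when no span is active in the current context. A minimal sketch of that fallback, independent of the chain code:)

```python
from opentelemetry import trace as trace_api

# With no span active, get_current_span() returns INVALID_SPAN,
# whose span_id is 0 — which formats to sixteen zeros.
span = trace_api.get_current_span()
print(span.get_span_context().span_id.to_bytes(8, "big").hex())  # '0000000000000000'
```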
In the final app I'll probably move to LangGraph, with a much more complex setup, but I'd like to iron out all of these "prerequisites" at the very simple level first.
Thanks!