FAQ

Why are the input and output of my trace empty?

Having input and output on your traces affects several important features:

  • Browsing traces: The traces list shows input/output previews, making it easier to find what you’re looking for
  • Evaluations: LLM-as-a-judge evaluators use trace input/output to assess quality. Empty fields mean evaluations won’t work
  • Search: You can search traces by their input/output content, but only if it’s populated

How trace input/output works

Before jumping into solutions, it helps to understand how Langfuse populates trace input/output.

Traces and observations

In Langfuse, a trace represents a complete request or operation. Inside each trace, you have observations.

Trace: "User asks about weather"
├── Observation: Parse user intent
├── Observation: Call weather API
└── Observation: Generate response (LLM call)

Both traces and observations can have their own input/output fields. They serve different purposes:

|                    | Trace input/output                                | Observation input/output     |
|--------------------|---------------------------------------------------|------------------------------|
| What it represents | The overall request and response                  | Each individual step         |
| Where you see it   | Traces list, evaluations                          | Inside the trace detail view |
| How it's set       | Inherited from root observation OR set explicitly | Set on each observation      |

The “root observation” rule

By default, trace input/output is copied from the observation at the top level (called the “root observation”).

This means:

  • If your root observation has input/output → the trace will too
  • If your root observation has no input/output → your trace will be empty (unless you set it explicitly, see below)

This is why your trace might show empty fields even though you can see data in the individual observations inside it.
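The fallback rule can be modeled in a few lines of plain Python. This is a conceptual sketch only, not Langfuse internals: the trace falls back to its root observation's input/output unless values were set explicitly.

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Observation:
    name: str
    input: Optional[Any] = None
    output: Optional[Any] = None

@dataclass
class Trace:
    root: Observation
    # Explicitly set trace input/output takes precedence over inheritance
    explicit_input: Optional[Any] = None
    explicit_output: Optional[Any] = None

    @property
    def input(self) -> Optional[Any]:
        # Fall back to the root observation when nothing was set explicitly
        return self.explicit_input if self.explicit_input is not None else self.root.input

    @property
    def output(self) -> Optional[Any]:
        return self.explicit_output if self.explicit_output is not None else self.root.output

# Root observation has input/output -> the trace inherits it
trace = Trace(root=Observation("my-pipeline", input="What's the weather?", output="Sunny."))
print(trace.input)   # What's the weather?

# Root observation has no input/output and nothing set explicitly -> empty trace
empty = Trace(root=Observation("my-pipeline"))
print(empty.input)   # None
```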

Troubleshooting

The most common reasons for empty trace input/output are:

1. You’re using a short-lived application (scripts, serverless, notebooks)

Symptoms: Traces sometimes appear with missing data, or don’t appear at all.

Langfuse sends data in the background to keep your application fast. If your script or serverless function exits before the data is sent, it gets lost.
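To illustrate the failure mode, here is a simplified stand-in (not Langfuse's actual implementation): events sit in an in-memory queue that a background worker drains, so anything still queued when the process exits is lost unless you flush first.

```python
import queue
import threading
import time

class TinyClient:
    """Toy client that batches events on a background thread."""

    def __init__(self):
        self._queue: queue.Queue = queue.Queue()
        self.sent: list = []  # stands in for "delivered to the server"
        worker = threading.Thread(target=self._drain, daemon=True)
        worker.start()

    def track(self, event: str) -> None:
        self._queue.put(event)  # returns immediately; sending happens later

    def _drain(self) -> None:
        while True:
            event = self._queue.get()
            time.sleep(0.01)  # simulate network latency
            self.sent.append(event)
            self._queue.task_done()

    def flush(self) -> None:
        # Block until every queued event has been "sent"
        self._queue.join()

client = TinyClient()
client.track("trace-created")

# Without flush(), a short-lived script can exit before `sent` is populated.
client.flush()
print(client.sent)  # ['trace-created']
```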

Solution: Call flush() before your application exits.

from langfuse import get_client
 
langfuse = get_client()
 
# Your code here...
 
# Before your script ends:
langfuse.flush()

If using the @observe() decorator:

from langfuse import observe, get_client
 
@observe()
def main():
    # Your code here...
    pass
 
main()
get_client().flush()

2. You haven’t set input/output on your root span

Symptoms: Trace input/output is always empty, but observations inside the trace have data.

If you’re manually creating spans, you need to either:

  • Set input/output on your root span, OR
  • Explicitly set input/output on the trace itself

Solution A: Set input/output on your root span

from langfuse import get_client
 
langfuse = get_client()
 
with langfuse.start_as_current_observation(
    as_type="span",
    name="my-pipeline"
) as root_span:
    user_input = "What's the weather like?"
    result = process_request(user_input)
 
    # Set input/output on the root span
    # This will automatically populate the trace
    root_span.update(
        input={"query": user_input},
        output={"response": result}
    )

Solution B: Set input/output directly on the trace

Sometimes the trace input/output should be different from any observation (e.g., you want a clean summary). You can set it explicitly:

from langfuse import get_client
 
langfuse = get_client()
 
with langfuse.start_as_current_observation(
    as_type="span",
    name="my-pipeline"
) as root_span:
    user_input = "What's the weather like?"
    result = process_request(user_input)
 
    # Set trace input/output explicitly (separate from span)
    root_span.update_trace(
        input={"user_question": user_input},
        output={"answer": result}
    )

3. You’re using the @observe() decorator but input/output capture is disabled

Symptoms: Decorated functions don’t show input/output, even though they return values.

The @observe() decorator automatically captures function arguments as input and return values as output. But this can be disabled.

Check if capture is disabled:

# Check your environment variables
echo $LANGFUSE_OBSERVE_DECORATOR_IO_CAPTURE_ENABLED

If this is set to false, input/output won’t be captured.

Solution: Enable capture

# In your environment
export LANGFUSE_OBSERVE_DECORATOR_IO_CAPTURE_ENABLED=true

Or enable it per-function:

from langfuse import observe
 
@observe(capture_input=True, capture_output=True)
def my_function(data):
    return process(data)
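Conceptually, the decorator wraps the function and records its arguments as input and its return value as output. Here is a simplified stand-in (not Langfuse's code) that honors per-function capture flags:

```python
import functools

captured = []  # stands in for the observation store

def observe_sketch(capture_input: bool = True, capture_output: bool = True):
    """Illustrative sketch of @observe-style input/output capture."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            record = {"name": fn.__name__, "input": None, "output": None}
            if capture_input:
                record["input"] = {"args": args, "kwargs": kwargs}
            result = fn(*args, **kwargs)
            if capture_output:
                record["output"] = result
            captured.append(record)
            return result
        return wrapper
    return decorator

@observe_sketch(capture_input=True, capture_output=False)
def greet(name: str) -> str:
    return f"Hello, {name}!"

greet("Ada")
print(captured[0]["input"])   # {'args': ('Ada',), 'kwargs': {}}
print(captured[0]["output"])  # None, because capture_output=False
```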

4. You’re using an OpenTelemetry-based integration

Symptoms: Traces from OTEL integrations (OpenLLMetry, Logfire, etc.) show empty input/output.

Different OpenTelemetry providers use different attribute names for input/output. Langfuse looks for specific attributes and may not find them if your provider uses different names.

Solution: Set the attributes Langfuse expects

Langfuse maps these OTEL span attributes to observation input/output (checked in this order):

| Observation field | OTEL attributes (in priority order)                                              |
|-------------------|----------------------------------------------------------------------------------|
| input             | langfuse.observation.input, gen_ai.prompt, input.value, mlflow.spanInputs        |
| output            | langfuse.observation.output, gen_ai.completion, output.value, mlflow.spanOutputs |
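The priority order amounts to a first-match lookup over the span's attributes. An illustrative sketch of the mapping logic (not Langfuse's source):

```python
# Attribute names checked for observation input, highest priority first
INPUT_ATTRIBUTES = [
    "langfuse.observation.input",
    "gen_ai.prompt",
    "input.value",
    "mlflow.spanInputs",
]

def resolve_input(span_attributes: dict) -> object:
    """Return the first matching attribute value, or None if absent."""
    for key in INPUT_ATTRIBUTES:
        if key in span_attributes:
            return span_attributes[key]
    return None

# A span carrying both a langfuse.* and an OpenInference attribute:
attrs = {"input.value": "low priority", "langfuse.observation.input": "wins"}
print(resolve_input(attrs))  # wins
print(resolve_input({}))     # None
```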

For trace-level input/output, Langfuse looks for:

  • langfuse.trace.input / langfuse.trace.output, OR
  • The root span’s observation input/output (using the attributes above)

See the complete property mapping reference for all supported attributes.

Example: Manually set the attributes Langfuse recognizes:

from opentelemetry import trace
import json
 
tracer = trace.get_tracer(__name__)
 
with tracer.start_as_current_span("my-operation") as span:
    # Set attributes that Langfuse recognizes
    span.set_attribute("input.value", str(input_data))
    span.set_attribute("output.value", str(output_data))
 
    # Or use the langfuse namespace for guaranteed mapping
    span.set_attribute("langfuse.observation.input", json.dumps(input_data))
    span.set_attribute("langfuse.observation.output", json.dumps(output_data))

Which attributes does my OTEL provider use?

Different providers use different semantic conventions. Here’s how to find out what your provider sends:

  1. Enable debug logging in your OTEL exporter to see the raw span attributes
  2. Check the trace in Langfuse: Open a trace, click on an observation, and look at the “Metadata” tab to see which attributes were received
  3. Consult your provider’s documentation for their semantic conventions

Common providers and their conventions:

  • OpenLLMetry: Uses gen_ai.prompt and gen_ai.completion
  • OpenInference: Uses input.value and output.value
  • MLflow: Uses mlflow.spanInputs and mlflow.spanOutputs
  • Pydantic Logfire: Uses custom attributes (Langfuse has specific support since PR #5841)
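Once you can see the raw attributes, identifying the convention is a simple keyword check. A hypothetical triage helper (real spans may mix conventions):

```python
def detect_convention(span_attributes: dict) -> str:
    """Guess which semantic convention a span's attributes follow."""
    keys = set(span_attributes)
    if {"gen_ai.prompt", "gen_ai.completion"} & keys:
        return "OpenLLMetry (gen_ai.*)"
    if {"input.value", "output.value"} & keys:
        return "OpenInference (input.value / output.value)"
    if {"mlflow.spanInputs", "mlflow.spanOutputs"} & keys:
        return "MLflow (mlflow.span*)"
    return "unknown"

print(detect_convention({"gen_ai.prompt": "Hi"}))  # OpenLLMetry (gen_ai.*)
print(detect_convention({"custom.attr": 1}))       # unknown
```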

If your provider uses different attribute names, you have two options:

  1. Manually set the attributes Langfuse expects (as shown above)
  2. Open a GitHub issue requesting support for your provider’s conventions

Many OTEL-specific issues have been fixed in recent Langfuse versions. If you’re self-hosting, make sure you’re on the latest version.


Still having issues?

If none of the above solutions work:

  1. Enable debug logging to see what’s being sent:
from langfuse import Langfuse
 
langfuse = Langfuse(debug=True)
  2. Check your SDK version and update if needed:
pip install --upgrade langfuse
  3. Look at the trace structure in the Langfuse dashboard:

    • Open a trace and look at the observation tree
    • Is there a single root observation?
    • Do the individual observations have input/output?
  4. Ask for help: Open a GitHub discussion with:

    • Your SDK version
    • A code snippet showing how you’re creating traces
    • A screenshot of the trace structure in the dashboard