CrewAI tracing with IBM Instana and Langtrace

Karthik Kalyanaraman

Cofounder and CTO

Oct 31, 2024

Introduction

From day one, we set out to build an observability solution that is:

  • Open source

  • OpenTelemetry compatible

  • Free of vendor lock-in

As a result of these early design decisions, developers today can use Langtrace's OpenTelemetry-based tracing SDKs with any observability tool of their choice. In this post, we showcase how you can use Langtrace to trace your GenAI stack with IBM Instana.

IBM recently announced an integration with CrewAI. In this guide, we walk you through how easy it is to set up tracing for your CrewAI agents in IBM's observability tool, Instana, using Langtrace's SDK, which has native support for CrewAI.

Setup

  1. Ensure you have an IBM Instana account

  2. Install Langtrace SDK

pip install -U langtrace-python-sdk
  3. Set the OpenTelemetry collector endpoint environment variable in your .env file, for example:

export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4317"
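Before initializing the SDK, it can be useful to sanity-check that the endpoint variable is actually set and well-formed. The helper below is a hypothetical sketch (not part of the Langtrace SDK) using only the standard library:

import os
from urllib.parse import urlparse

# Hypothetical helper: read the OTLP endpoint from the environment,
# falling back to a local collector, and fail fast on malformed values.
def resolve_otlp_endpoint(default="http://localhost:4317"):
    endpoint = os.environ.get("OTEL_EXPORTER_OTLP_ENDPOINT", default)
    parsed = urlparse(endpoint)
    if parsed.scheme not in ("http", "https") or parsed.port is None:
        raise ValueError(f"Unexpected OTLP endpoint: {endpoint!r}")
    return endpoint

You can then pass the returned value straight to the exporter's `endpoint` argument instead of reading the environment variable inline.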
  4. Initialize the Langtrace SDK in your application with your OTLP collector endpoint:

import os
from openai import OpenAI

from langtrace_python_sdk import langtrace
from langtrace_python_sdk.utils.with_root_span import with_langtrace_root_span

from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import \
    OTLPSpanExporter

# Set up the OTLP exporter with the endpoint from your collector
otlp_exporter = OTLPSpanExporter(
    # This should match your collector's OTLP gRPC endpoint
    endpoint=os.environ.get("OTEL_EXPORTER_OTLP_ENDPOINT"),
    insecure=True  # Set to False if using HTTPS
)

# set up langtrace
langtrace.init(custom_remote_exporter=otlp_exporter)

# rest of your code
@with_langtrace_root_span() # Optional: creates a parent root span for the entire application
def my_agents():
    # Add your CrewAI calls below, for example:
    # crew = Crew(...)
    # crew.kickoff()
    pass

my_agents()
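For context, a minimal crew inside my_agents might look like the sketch below. The agent role, goal, and task text are illustrative only; Agent, Task, and Crew come from the crewai package, and actually running this requires an LLM API key (e.g. OPENAI_API_KEY) to be configured:

from crewai import Agent, Crew, Task

# Illustrative single-agent crew; the role/goal/task wording is made up.
researcher = Agent(
    role="Research Analyst",
    goal="Summarize recent developments in AI observability",
    backstory="You are a concise, detail-oriented analyst.",
)

summary_task = Task(
    description="Write a three-bullet summary of AI observability trends.",
    expected_output="Three bullet points.",
    agent=researcher,
)

crew = Crew(agents=[researcher], tasks=[summary_task])
result = crew.kickoff()  # Agent and task steps are traced by Langtrace and exported via OTLP
print(result)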
  5. Note: in this example, the OTLP collector is set up using a config file, shown below:

receivers:
  otlp:
    protocols:
      http:
        endpoint: "localhost:4318"
      grpc:
        endpoint: "localhost:4317"

processors:
  batch:

exporters:
  otlp:
    endpoint: "otlp-coral-saas.instana.io:4317"
    headers:
      "x-instana-key": "<YOUR_INSTANA_API_KEY>" # Replace with your Instana API key (from the confirmation email you received after signing up for Instana)

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]
  6. With the environment variables set, run your application with OpenTelemetry instrumentation using the following command:

python main.py
  7. Note: make sure your collector is running and configured correctly. To run the collector locally, you can use the following command:

otelcol-contrib --config otel-collector-config.yaml
  8. Once the application is running, you should see traces appear in your IBM Instana dashboard.

For more information, check out our docs: https://docs.langtrace.ai/supported-integrations/observability-tools/ibm-instana

Useful Resources

Ready to try Langtrace?

Try out the Langtrace SDK with just 2 lines of code.


Want to learn more?

Check out our documentation to learn more about how Langtrace works.

Join the Community

Check out our Discord community to ask questions and meet customers