Integrating Langtrace with Elastic APM for Comprehensive LLM Observability

Yemi Adejumobi

Platform Engineer

May 31, 2024

In today's world of microservices and distributed systems, observability has become a crucial aspect of maintaining and troubleshooting applications. In my conversations with enterprise customers, most prefer to integrate new AI tools into their existing infrastructure rather than adopt entirely new solutions. Langtrace makes this kind of integration seamless.

Langtrace is a powerful, developer-friendly open-source tool for adding observability to your AI applications, and Elastic APM is a robust solution for monitoring application performance. In this blog post, we'll show you how to use Langtrace to send traces to Elastic APM and how to index and query data using LlamaIndex.

The demo project is a Q&A RAG application that leverages Langtrace for observability and LlamaIndex for data indexing and querying. Users can ask questions about an indexed essay, and the system provides relevant answers by querying the indexed content. This setup is a practical example of how to combine AI observability with data indexing and querying in a real-world application.

Prerequisites

Before we get started, make sure you have the following:

  • A recent Python 3 installation

  • A Langtrace API key (optional, only if you'd also like to send traces to Langtrace Cloud)

  • An Elastic deployment with APM enabled

  • An OpenAI API key exported as OPENAI_API_KEY, since LlamaIndex uses OpenAI models by default

Download Data

First, we need to download the file we'll index into a local data directory:

mkdir -p 'data'
wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt' \
  -O 'data/paul_graham_essay.txt'

Setting Up the Environment

Let's set up a virtual environment and install the necessary packages:

python -m venv venv
source venv/bin/activate
pip install langtrace-python-sdk \
  'elastic-apm[opentelemetry]' \
  opentelemetry-instrumentation \
  opentelemetry-exporter-otlp \
  opentelemetry-api \
  opentelemetry-sdk \
  llama-index
opentelemetry-bootstrap -a install

Creating Your Script

Create a file named main.py and add the following code:

import logging
import os.path
import sys

from langtrace_python_sdk import langtrace, with_langtrace_root_span
from llama_index.core import (
    VectorStoreIndex,
    SimpleDirectoryReader,
    StorageContext,
    load_index_from_storage,
)

langtrace.init(api_key="YOUR_LANGTRACE_API_KEY")

@with_langtrace_root_span()
def main():
    # basicConfig already attaches a stdout handler; adding a second
    # StreamHandler would duplicate every log line
    logging.basicConfig(stream=sys.stdout, level=logging.INFO)

    # check if storage already exists
    PERSIST_DIR = "./storage"
    if not os.path.exists(PERSIST_DIR):
        # load the documents and create the index
        documents = SimpleDirectoryReader("data").load_data()
        index = VectorStoreIndex.from_documents(documents)
        # store it for later
        index.storage_context.persist(persist_dir=PERSIST_DIR)
    else:
        # load the existing index
        storage_context = StorageContext.from_defaults(persist_dir=PERSIST_DIR)
        index = load_index_from_storage(storage_context)

    # Either way we can now query the index
    query_engine = index.as_query_engine()
    response = query_engine.query("What is the summary in opposite?")
    print(response)

if __name__ == "__main__":
    main()
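The @with_langtrace_root_span decorator wraps main() in a root span. Because Langtrace emits standard OpenTelemetry spans, you can also nest your own spans under it using the regular OpenTelemetry API. Here is a minimal sketch you could drop inside main() (the span name is arbitrary; it assumes query_engine is already defined as above):

from opentelemetry import trace

# Spans started inside main() will nest under Langtrace's root span
tracer = trace.get_tracer(__name__)

with tracer.start_as_current_span("query_paul_graham_essay"):
    response = query_engine.query("What did the author do growing up?")
    print(response)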

Replace YOUR_LANGTRACE_API_KEY with your actual Langtrace API key if you’d like to send traces to Langtrace Cloud.
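If you'd rather not hardcode the key, you can read it from the environment instead. A minimal sketch, assuming you export a variable named LANGTRACE_API_KEY before running the script (the variable name here is our choice, not one the code above requires):

import os

from langtrace_python_sdk import langtrace

# Assumes you've run: export LANGTRACE_API_KEY=your-key
langtrace.init(api_key=os.environ.get("LANGTRACE_API_KEY"))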

Configuring OpenTelemetry for Elastic APM

To send traces to Elastic APM, we need to set up some environment variables. These variables configure OpenTelemetry to use Elastic APM as the trace exporter.

  • Log in to your Elastic deployment, then click Observability

  • Navigate to the Integrations tab, then select APM

Select OpenTelemetry, then set the following environment variables in your terminal:

export OTEL_SERVICE_NAME=demo-elastic-service
export OTEL_RESOURCE_ATTRIBUTES=service.name=demo-elastic-service
export OTEL_EXPORTER_OTLP_ENDPOINT="https://your-elastic-apm-endpoint:443"
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer%20your-elastic-apm-token"
export OTEL_EXPORTER_OTLP_PROTOCOL=grpc
export OTEL_LOGS_EXPORTER=otlp

Replace your-elastic-apm-endpoint and your-elastic-apm-token with your actual Elastic APM endpoint and secret token. Note that OTEL_SERVICE_NAME takes precedence over a service.name set in OTEL_RESOURCE_ATTRIBUTES, so keep the two consistent, and that valid values for OTEL_EXPORTER_OTLP_PROTOCOL are grpc, http/protobuf, and http/json.
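If you prefer configuring the exporter in code rather than through environment variables, the standard OpenTelemetry SDK supports that too. A minimal sketch using the same endpoint and token placeholders as above (opentelemetry-instrument normally does this wiring for you, so use one approach or the other):

from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

# Point an OTLP/gRPC exporter at Elastic APM and install it as the
# global tracer provider
provider = TracerProvider(
    resource=Resource.create({"service.name": "demo-elastic-service"})
)
provider.add_span_processor(
    BatchSpanProcessor(
        OTLPSpanExporter(
            endpoint="https://your-elastic-apm-endpoint:443",
            headers={"authorization": "Bearer your-elastic-apm-token"},
        )
    )
)
trace.set_tracer_provider(provider)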

Running the Application

With the environment variables set, we can now run our application with OpenTelemetry instrumentation:

opentelemetry-instrument python main.py

The opentelemetry-instrument wrapper picks up the OTEL_* variables and auto-instruments supported libraries before starting the script.

Verifying the Setup

Once the application is running, you should see the output of your query printed to the console, and traces in your Elastic APM dashboard corresponding to the operations performed in the script. This integration works because Langtrace emits standard OpenTelemetry spans, which Elastic APM ingests natively.
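If no traces appear in Elastic, a quick sanity check is to switch the exporter to the console first; if spans print locally, the problem is likely the endpoint or token rather than the instrumentation:

export OTEL_TRACES_EXPORTER=console
opentelemetry-instrument python main.py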

Conclusion

In this blog post, we demonstrated how to use Langtrace to send traces to Elastic APM and how to index and query data using LlamaIndex. By integrating Langtrace into your Python or TypeScript application and configuring OpenTelemetry, you can gain valuable insight into the behavior and performance of your AI applications. We encourage you to try this setup in your own projects and explore the observability features Elastic APM provides.

Join our Discord and let us know if we can help in any way.

