Integrate Langtrace with LiteLLM

Karthik Kalyanaraman

Cofounder and CTO

Oct 15, 2024

Introduction

We are excited to announce that Langtrace now supports LiteLLM natively. This means:

  • If you are using LiteLLM for inference, you can set up Langtrace to automatically capture metrics such as token usage, cost, latency, and request metadata.

  • Additionally, you can add Langtrace to your LiteLLM proxy and configure it to send metrics to any OpenTelemetry-compatible observability tool in addition to Langtrace (see the sketch after this list).
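As a minimal sketch of that second point: Langtrace's self-hosting docs show that langtrace.init accepts an api_host parameter, which you can point at an alternative trace endpoint. The URL below is a placeholder, not a real endpoint; substitute the trace endpoint of whatever OpenTelemetry-compatible backend you run.

import os

from langtrace_python_sdk import langtrace  # Must precede any LLM module imports

# Sketch: send traces to a self-hosted or otherwise OpenTelemetry-compatible
# endpoint instead of the default Langtrace Cloud. The api_host value is a
# hypothetical placeholder.
langtrace.init(
    api_key=os.environ['LANGTRACE_API_KEY'],
    api_host='http://localhost:3000/api/trace',  # hypothetical collector endpoint
)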

Setup

  1. Sign up for Langtrace, create a project, and get a Langtrace API key

  2. Install Langtrace SDK

pip install -U langtrace-python-sdk

  3. Set the LANGTRACE_API_KEY environment variable

export LANGTRACE_API_KEY=YOUR_LANGTRACE_API_KEY

  4. Initialize Langtrace in your code

import os
from langtrace_python_sdk import langtrace  # Must precede any LLM module imports
langtrace.init(api_key=os.environ['LANGTRACE_API_KEY'])

# Your code here

  5. See the traces in Langtrace
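To see a trace end to end, you can run a quick LiteLLM completion after initializing Langtrace. A minimal sketch, assuming OPENAI_API_KEY is set in your environment and using gpt-4o-mini purely as a placeholder model:

import os

from langtrace_python_sdk import langtrace  # Must precede any LLM module imports

langtrace.init(api_key=os.environ['LANGTRACE_API_KEY'])

import litellm

# Any LiteLLM-supported model works here; gpt-4o-mini is a placeholder
# and assumes OPENAI_API_KEY is set in the environment.
response = litellm.completion(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)

# Token usage, cost, and latency for this call should now appear
# as a trace in your Langtrace project.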

For more information, check out our docs: https://docs.langtrace.ai/supported-integrations/llm-frameworks/litellm

Useful Resources

Ready to try Langtrace?

Try out the Langtrace SDK with just 2 lines of code.


Want to learn more?

Check out our documentation to learn more about how Langtrace works.

Join the Community

Check out our Discord community to ask questions and meet other users.