Monitor, evaluate & improve your LLM apps

Langtrace is an open-source observability tool that collects and analyzes traces and metrics to help you improve your LLM apps.

Advanced Security

Langtrace is built with security first: our cloud platform is SOC 2 Type II certified, providing top-tier protection for your data.

Trusted and recognized by

Comcast, Mayflower, Elastic, SearchStax, Neoskop, Pulse Energy

Simple, non-intrusive setup

Set up the Langtrace SDK with 2 lines of code:

from langtrace_python_sdk import langtrace

langtrace.init(api_key="<your_api_key>")
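
Once initialized, requests made with supported libraries are captured automatically. Below is a minimal sketch assuming the standard OpenAI Python client with an OPENAI_API_KEY set in the environment; the model name and prompt are illustrative:

# Initialize Langtrace before importing the LLM client so its calls are traced.
from langtrace_python_sdk import langtrace

langtrace.init(api_key="<your_api_key>")

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": "Summarize observability in one sentence."}],
)
print(response.choices[0].message.content)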

Supports popular LLMs, frameworks and vector databases

Why Langtrace?

Open-Source & Secure

Langtrace can be self-hosted and emits OpenTelemetry-standard traces that can be ingested by any observability tool of your choice, so there is no vendor lock-in.
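
For self-hosted deployments, the SDK can be pointed at your own endpoint instead of the Langtrace cloud. A sketch, assuming init accepts an api_host parameter (the URL below is hypothetical; check the SDK docs for the exact option name):

from langtrace_python_sdk import langtrace

# Send traces to a self-hosted Langtrace instance rather than the cloud.
# api_host and the URL here are assumptions for illustration.
langtrace.init(
    api_key="<your_api_key>",
    api_host="https://langtrace.example.internal/api/trace",
)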

End-to-end Observability

Get visibility and insights into your entire ML pipeline, whether it is a RAG pipeline or a fine-tuned model, with traces and logs that cut across framework, vector DB, and LLM requests.

Establish a Feedback Loop

Annotate and create golden datasets with traced LLM interactions, and use them to continuously test and enhance your AI applications. Langtrace includes built-in heuristic, statistical, and model-based evaluations to support this process.

Build and deploy with confidence

Traces

Trace requests, detect bottlenecks, and optimize performance.

Annotate

Annotate and manually evaluate LLM requests, and create golden datasets.

Evaluations

Run LLM-based automated evaluations to track performance over time.

Playground

Compare the performance of your prompts across different models.

Metrics

Track cost and latency at the project, model, and user levels.

Built by a world-class team of builders from

Join the Langtrace community