Integrate Langtrace with Mistral

Karthik Kalyanaraman

Cofounder and CTO

Nov 14, 2024

Introduction

We are excited to announce that Langtrace now natively supports Mistral's family of models. This means Langtrace automatically captures traces and metrics, including token usage, cost, latency, and model hyperparameters, whenever you use a Mistral model.

Setup

  1. Sign up for Langtrace, create a project, and get a Langtrace API key

  2. Install the Langtrace SDK

pip install -U langtrace-python-sdk
  3. Set the environment variable

export LANGTRACE_API_KEY=YOUR_LANGTRACE_API_KEY
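
If you prefer keeping keys in a .env file instead of exporting them in your shell, you can load them at startup. A minimal sketch, assuming the third-party python-dotenv package (pip install python-dotenv); it is not part of the Langtrace SDK:

# Load LANGTRACE_API_KEY (and MISTRAL_API_KEY) from a .env file in the
# working directory. python-dotenv is an optional, assumed dependency.
from dotenv import load_dotenv

load_dotenv()  # copies key=value pairs from .env into os.environ
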
  4. Initialize Langtrace in your code

import os
from langtrace_python_sdk import langtrace, with_langtrace_root_span
from mistralai import Mistral

langtrace.init()  # picks up LANGTRACE_API_KEY from the environment

@with_langtrace_root_span("chat_complete")
def chat_complete():
    model = "mistral-large-latest"
    client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])
    chat_response = client.chat.complete(
        model=model,
        messages=[
            {
                "role": "user",
                "content": "I need 10 cocktail recipes with tequila other than the classics like margarita, tequila"
            },
        ]
    )
    print(chat_response.choices[0].message.content)


chat_complete()
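
Streaming responses are captured the same way. Here is a minimal sketch, assuming the chat.stream interface of the mistralai v1 SDK; the event fields below come from that SDK, not from Langtrace:

@with_langtrace_root_span("chat_stream")
def chat_stream():
    client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])
    # chat.stream yields completion events, each carrying a delta chunk.
    with client.chat.stream(
        model="mistral-large-latest",
        messages=[{"role": "user", "content": "Name three agave spirits."}],
    ) as event_stream:
        for event in event_stream:
            content = event.data.choices[0].delta.content
            if content:  # the final event's delta may be empty
                print(content, end="", flush=True)


chat_stream()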
  5. See the traces in Langtrace
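
Once traces are flowing, related calls can be grouped under a single root span so they show up as one trace in the dashboard. A minimal sketch reusing the with_langtrace_root_span decorator from the example above; the span name and two-step prompt flow are illustrative:

@with_langtrace_root_span("recipe_pipeline")
def recipe_pipeline():
    client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])
    # First call: ask for cocktail names.
    ideas = client.chat.complete(
        model="mistral-large-latest",
        messages=[{"role": "user", "content": "Name 3 cocktails with tequila."}],
    ).choices[0].message.content
    # Second call: nests under the same root span as the first, so both
    # appear together as one trace in Langtrace.
    recipes = client.chat.complete(
        model="mistral-large-latest",
        messages=[{"role": "user", "content": "Write short recipes for: " + ideas}],
    )
    print(recipes.choices[0].message.content)


recipe_pipeline()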

For more information, check out our docs here: https://docs.langtrace.ai/supported-integrations/llm-tools/mistral-ai#mistral-ai

Useful Resources

Ready to try Langtrace?

Try out the Langtrace SDK with just 2 lines of code.

Want to learn more?

Check out our documentation to learn more about how Langtrace works

Join the Community

Check out our Discord community to ask questions and meet customers