Langtrace adds support for Cleanlab TLM

Karthik Kalyanaraman

Cofounder and CTO

Mar 5, 2025

Introduction

We are thrilled to announce our latest integration update: Langtrace now supports Cleanlab's TLM natively! This integration lets you automatically capture traces that include the trustworthiness score, explanation, and other metadata returned by Cleanlab's APIs, so you can gain deeper insight into your language model interactions while leveraging Cleanlab's trustworthiness evaluation.

Setup

Follow these steps to integrate Langtrace with Cleanlab:

  1. Create Your Langtrace Project
    Sign up on Langtrace, create a new project, and obtain your Langtrace API key.

  2. Install the Langtrace Python SDK
    pip install langtrace-python-sdk

  3. Configure Your Environment
    Set up your environment variables by creating a .env file and adding your Langtrace API key:
    export LANGTRACE_API_KEY=<YOUR_LANGTRACE_API_KEY>

  4. Add Your Cleanlab API Key
    Similarly, make your Cleanlab API key available as an environment variable:
    export CLEANLAB_API_KEY=<YOUR_CLEANLAB_API_KEY>

  5. Initialize Langtrace in Your Code
    Initialize Langtrace and set up the TLM client as shown in the code sample below (notes on the sample's additional dependencies and on reading the returned score follow these steps):


    import os
    
    from dotenv import find_dotenv, load_dotenv
    from langtrace_python_sdk import langtrace
    from langtrace_python_sdk.utils.with_root_span import with_langtrace_root_span
    
    # Load API keys (LANGTRACE_API_KEY, CLEANLAB_API_KEY, etc.) from the .env file.
    _ = load_dotenv(find_dotenv())
    
    # Initialize Langtrace before importing the libraries it instruments.
    langtrace.init()
    
    from cleanlab_tlm import TLM
    from openai import OpenAI
    
    # The OpenAI client reads OPENAI_API_KEY from the environment.
    openai_client = OpenAI()
    
    # Request the explanation log so traces capture TLM's reasoning
    # alongside the trustworthiness score.
    tlm = TLM(
        api_key=os.getenv("CLEANLAB_API_KEY"),
        options={"log": ["explanation"], "model": "gpt-4o-mini"},
    )
    
    
    def inference(prompt: str):
        # Generate a response with OpenAI; Langtrace traces this call automatically.
        response = openai_client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "user", "content": prompt},
            ],
            stream=False,
        )
        response_text = response.choices[0].message.content
        return response_text
    
    
    @with_langtrace_root_span("Get Trustworthiness Score")
    def inference_get_trustworthiness_score(prompt: str):
        # Group the LLM call and the TLM scoring call under a single root span.
        response = inference(prompt)
        return tlm.get_trustworthiness_score(prompt, response)
    
    
    print(inference_get_trustworthiness_score("How many r's are in strawberry?"))


  6. Review Your Traces in Langtrace

    Once you run your code, Langtrace will automatically capture and log all relevant interactions. Visit your Langtrace dashboard to see the traces.
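
The code sample in step 5 also imports the python-dotenv, openai, and cleanlab-tlm packages. If they are not already installed, add them alongside the Langtrace SDK (the PyPI package names here are inferred from the imports; adjust them if your environment differs):

    pip install python-dotenv openai cleanlab-tlm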
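
Because the explanation log is requested in the TLM options, get_trustworthiness_score returns an explanation alongside the trustworthiness score. The short sketch below assumes the result is a dict-like object with trustworthiness_score and log keys; the exact shape may vary across cleanlab-tlm versions:

    # Hypothetical follow-up to the sample above; key names are assumed from the
    # options passed to TLM and may differ across cleanlab-tlm versions.
    result = inference_get_trustworthiness_score("How many r's are in strawberry?")
    print(result["trustworthiness_score"])  # numeric trustworthiness score
    print(result["log"]["explanation"])     # TLM's reasoning behind the score

The same score and explanation also show up in the trace metadata in your Langtrace dashboard.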




Additional Resources

Ready to try Langtrace?

Try out the Langtrace SDK with just 2 lines of code.

Want to learn more?

Check out our documentation to learn more about how Langtrace works.

Join the Community

Check out our Discord community to ask questions and meet other users.