Track Prompts in Your Traces with Langtrace

Dylan Zuber

· 3 min read
Prompt tracking in Langtrace

Langtrace has introduced a new feature that allows you to attach prompt IDs and versions to your traces. This enhancement enables you to filter and group traces based on prompt information, which is especially useful for debugging and monitoring. In this blog post, we’ll guide you through the process of using this feature in both TypeScript and Python.

Why Track Prompt IDs and Versions?

Tracking prompt IDs and versions in your traces allows you to:

  • Easily identify which prompts are being used in your application.
  • Monitor the performance and behavior of specific prompts.
  • Debug issues more effectively by tracing back to the exact prompt version used.

Getting Prompts from the Langtrace Platform

You can start by retrieving prompts from the Langtrace platform (see the how-to guide) and including them in your trace. Here’s an example of how to do this:

# Must precede any llm module imports
from langtrace_python_sdk import get_prompt_from_registry

langtrace_prompt = get_prompt_from_registry(<Prompt Registry ID>)

# access the following values to send in trace:
prompt = langtrace_prompt["value"]
prompt_id = langtrace_prompt["id"]
prompt_version = langtrace_prompt["version"]
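To make the shape of that data concrete, here is an illustrative sketch of collecting the retrieved id and version into the attributes dictionary used when attaching prompt info to a trace. The `build_trace_attributes` helper and the literal registry response are hypothetical, not part of the SDK:

```python
# Hypothetical registry response, shaped like the fields accessed above.
langtrace_prompt = {
    "value": "Talk like a pirate",
    "id": "prompt1234",
    "version": 3,
}

def build_trace_attributes(registry_prompt):
    """Collect the prompt id and version into the attributes dict
    attached to a trace (illustrative helper, not part of the SDK)."""
    return {
        "prompt_id": registry_prompt["id"],
        "prompt_version": str(registry_prompt["version"]),
    }

attributes = build_trace_attributes(langtrace_prompt)
print(attributes)  # {'prompt_id': 'prompt1234', 'prompt_version': '3'}
```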

How to Attach Prompt IDs and Versions

Langtrace provides simple methods to attach prompt IDs and versions to your traces. Depending on the programming language you are using, you can utilize either the withAdditionalAttributes function for TypeScript or the inject_additional_attributes function for Python.
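Conceptually, both helpers wrap a callable so that the extra key-value pairs are visible to any spans created while it runs. As a mental model only, here is a minimal Python stand-in built on `contextvars`; this is an illustrative sketch of the idea, not the Langtrace SDK's actual implementation:

```python
import contextvars

# Context-local storage for extra span attributes (illustrative only;
# this is NOT how the Langtrace SDK is implemented internally).
_extra_attributes = contextvars.ContextVar("extra_attributes", default={})

def inject_additional_attributes(fn, attributes):
    """Run fn with the given attributes visible to instrumentation."""
    token = _extra_attributes.set({**_extra_attributes.get(), **attributes})
    try:
        return fn()
    finally:
        _extra_attributes.reset(token)

def current_attributes():
    """What an instrumented call would read when creating a span."""
    return dict(_extra_attributes.get())

result = inject_additional_attributes(
    lambda: current_attributes(),
    {"prompt_id": "prompt1234", "prompt_version": "1"},
)
print(result)  # {'prompt_id': 'prompt1234', 'prompt_version': '1'}
```

The attributes are scoped to the wrapped call: once `fn` returns, the context is restored, so spans created afterwards do not inherit the prompt id.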

TypeScript Example

In TypeScript, you can wrap your code with the withAdditionalAttributes function to add prompt information to your trace spans. Here’s how you can do it:

import * as Langtrace from "@langtrace/typescript-sdk";
import OpenAI from "openai";

Langtrace.init({
  write_spans_to_console: true,
});

const openai = new OpenAI();

export const run = async () => {
  const response = await Langtrace.withAdditionalAttributes(
    async () => {
      return await openai.chat.completions.create({
        model: "gpt-4",
        messages: [
          { role: "system", content: "Talk like a pirate" },
          { role: "user", content: "Tell me a story in 3 sentences or less." },
        ],
        stream: false,
      });
    },
    { prompt_id: "prompt1234", prompt_version: "1" }
  );
};

run().then(() => console.log("done"));

Python Example

In Python, you can use the inject_additional_attributes function to achieve the same result. Below is an example demonstrating how to add prompt information to your trace spans:

from openai import OpenAI
from langtrace_python_sdk import inject_additional_attributes

client = OpenAI()

def do_llm_stuff(name=""):
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "Say this is a test three times"}],
        stream=False,
    )
    return response

def main():
    response = inject_additional_attributes(
        lambda: do_llm_stuff(name="llm"),
        {'prompt_id': 'promptid1234', 'prompt_version': '1'},
    )

    # If the function does not take arguments, this syntax also works:
    response = inject_additional_attributes(
        do_llm_stuff,
        {'prompt_id': 'promptid1234', 'prompt_version': '1'},
    )

main()

Conclusion

Now, when viewing your traces in the “Traces” tab, you can see the Prompt ID and Prompt Version included with the ingested traces! With the ability to attach prompt IDs and versions to your traces, Langtrace provides a powerful tool for monitoring and debugging your applications. By following the examples provided, you can easily integrate this feature into your workflow and gain deeper insights into your prompt usage.

We’d love to hear from you!

We’d love to hear your feedback on Langtrace! We invite you to join our community on Discord or reach out at [email protected] and share your experiences, insights, and suggestions. Together, we can continue to set new standards of observability and evaluations in LLM application development.

Happy tracing!

About Dylan Zuber

Software Engineer at Scale3 Labs