Using Langtrace within Langtrace

Dylan Zuber

Software Engineer

Aug 21, 2024

Introduction

At Langtrace, our mission is to make it easier for developers to monitor, evaluate and improve their LLM apps. We recently took on a challenge to further push the boundaries of our platform by building something unique: a chatbot within Langtrace itself. This isn't just any chatbot; it's a Langtrace-powered bot that can help users with technical difficulties, answer questions about our platform, and provide code samples—all while being fully integrated into our observability platform. In this blog post, I'll walk you through how we achieved this, step by step, and how Langtrace's SDK played a pivotal role in making it all possible.

Step 1: Indexing Our Documentation in Pinecone

The foundation of our chatbot is the wealth of documentation we've created for our users. To make this information easily accessible, we first indexed all our documentation and stored it in a vector database using Pinecone. This allowed us to perform efficient, contextually relevant searches to provide accurate answers to users' questions.
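For illustration, the indexing pipeline looks roughly like the sketch below. This is a minimal sketch rather than our production pipeline: the index name ("langtrace-docs"), the embedding model, and the one-chunk-at-a-time flow are assumptions, and a real pipeline would add batching and a deliberate chunking strategy.

import { Pinecone } from "@pinecone-database/pinecone";
import OpenAI from "openai";

const pc = new Pinecone({ apiKey: process.env.PINECONE_API_KEY! });
const openai = new OpenAI();

// Assumed index name; its dimension must match the embedding model's output.
const index = pc.index("langtrace-docs");

async function indexDocChunk(id: string, text: string, url: string) {
  // Embed the documentation chunk.
  const res = await openai.embeddings.create({
    model: "text-embedding-3-small", // assumed embedding model
    input: text,
  });

  // Upsert the vector along with metadata we can surface in answers later.
  await index.upsert([
    { id, values: res.data[0].embedding, metadata: { text, url } },
  ]);
}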

Step 2: Building the Chat Bot UI

Next, we created a user-friendly chatbot UI within our platform using Next.js and Tailwind CSS. The UI serves as the front end for users to interact with the bot, enabling them to ask questions or report technical issues seamlessly. This interface is tightly integrated into our existing platform, making it a natural extension of the Langtrace experience.
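As a rough sketch of what such a component can look like (not our exact UI; the route path and styling are placeholders), the Vercel AI SDK's useChat hook handles message state and response streaming for a Tailwind-styled chat panel:

"use client";

import { useChat } from "ai/react";

export function DocsChatbot() {
  // useChat manages message state and streams responses from the API route.
  const { messages, input, handleInputChange, handleSubmit } = useChat({
    api: "/api/chat", // placeholder route
  });

  return (
    <div className="flex h-full flex-col gap-2 p-4">
      <div className="flex-1 overflow-y-auto">
        {messages.map((m) => (
          <p key={m.id} className={m.role === "user" ? "text-right" : ""}>
            {m.content}
          </p>
        ))}
      </div>
      <form onSubmit={handleSubmit}>
        <input
          className="w-full rounded border p-2"
          value={input}
          onChange={handleInputChange}
          placeholder="Ask about Langtrace..."
        />
      </form>
    </div>
  );
}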

Step 3: Leveraging GPT-4o through the Vercel AI SDK with Contextual Answers

To make our chatbot truly powerful, we combined GPT-4o, called through the Vercel AI SDK, with the context provided by our indexed documentation. When a user asks a question, the bot retrieves relevant information from the vector database and uses it to generate a comprehensive response. This approach ensures that the answers are not only accurate but also tailored to the specific needs of our users. For technical questions, the bot can even provide code samples, making it a valuable tool for developers.
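In outline, the API route behind the chat does three things: embed the question, query Pinecone for relevant documentation, and stream a grounded GPT-4o answer back to the UI. The sketch below shows the shape of that route; the index name, embedding model, topK, and system prompt are assumptions rather than our exact values.

import { openai } from "@ai-sdk/openai";
import { convertToCoreMessages, streamText } from "ai";
import { Pinecone } from "@pinecone-database/pinecone";
import OpenAI from "openai";

const pc = new Pinecone({ apiKey: process.env.PINECONE_API_KEY! });
const embedder = new OpenAI();

export async function POST(req: Request) {
  const { messages } = await req.json();
  const question = messages[messages.length - 1].content;

  // Embed the question and pull the most relevant documentation chunks.
  const { data } = await embedder.embeddings.create({
    model: "text-embedding-3-small", // assumed embedding model
    input: question,
  });
  const results = await pc.index("langtrace-docs").query({
    vector: data[0].embedding,
    topK: 5,
    includeMetadata: true,
  });
  const context = results.matches
    .map((m) => m.metadata?.text)
    .join("\n---\n");

  // Answer with GPT-4o, grounded in the retrieved documentation.
  const result = await streamText({
    model: openai("gpt-4o"),
    system: `Answer questions about Langtrace using the documentation below. Include code samples where helpful.\n\n${context}`,
    messages: convertToCoreMessages(messages),
  });
  return result.toDataStreamResponse();
}

The Langtrace.init call that follows registers the pinecone and ai modules with the SDK, so each of these calls is traced automatically: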

import * as Langtrace from "@langtrase/typescript-sdk";
import * as ai from "ai"; // Vercel AI SDK module, handed to Langtrace for instrumentation
import * as pinecone from "@pinecone-database/pinecone";

Langtrace.init({
  // Send traces to our own endpoint rather than the default Langtrace Cloud host
  api_host: `${process.env.NEXT_PUBLIC_HOST}/api/trace`,
  api_key: process.env.NEXT_PUBLIC_LANGTRACE_DEMO_API_KEY,
  // Instrument the Pinecone and Vercel AI modules so their calls are captured as traces
  instrumentations: {
    pinecone: pinecone,
    ai: ai,
  },
  logging: { disable: true },
  disable_latest_version_check: true,
});

Step 4: Capturing Traces with Langtrace SDK

One of the key aspects of building this chatbot was ensuring that we could monitor and improve it over time. To achieve this, we imported Langtrace's TypeScript SDK into our platform. This integration allows us to capture every interaction, including Pinecone queries, embeddings, and GPT-4o input/output, as traces. These traces are stored in the Langtrace Demo Project and are viewable within the Traces tab for all users, giving us full visibility into how the chatbot operates.
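Because the instrumentation is registered at init time, no per-call changes are needed to capture these traces. When we want every span from a single chat turn grouped under one trace, the SDK's root-span helper can wrap the route handler. Here is a sketch, assuming the withLangTraceRootSpan export from the TypeScript SDK (check the SDK docs for the exact name) and a hypothetical handleChatRequest standing in for the RAG handler sketched above:

import { withLangTraceRootSpan } from "@langtrase/typescript-sdk";

export async function POST(req: Request) {
  // Group the embedding, Pinecone query, and GPT-4o spans under one
  // root span so each chat turn appears as a single trace.
  // handleChatRequest is a hypothetical stand-in for the RAG route logic.
  return withLangTraceRootSpan(async () => handleChatRequest(req));
}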

Step 5: Integrating the Evaluate-LLM Component

Improvement is an ongoing process, and user feedback is essential. To facilitate this, we imported our evaluate-llm React component into the chatbot. This component allows users to evaluate the bot's responses directly within the UI. Not only does this provide us with valuable feedback to improve the bot, but it also serves as a straightforward example for our users on how to implement evaluations in their own LLM applications. All evaluations are captured and viewable in the Annotations tab within the Demo Project.
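The evaluate-llm component's actual props are documented in its README; as a purely illustrative sketch of the underlying pattern (the endpoint and prop names here are made up), the key idea is to attach the user's rating to the span ID of the response being rated:

// Hypothetical sketch only; see the evaluate-llm component for its real props.
function ResponseFeedback({ spanId }: { spanId: string }) {
  // Record a user rating against the span ID of the rated response so it
  // lands alongside the trace in the Annotations tab.
  const rate = (score: 1 | -1) =>
    fetch("/api/evaluate", { // assumed endpoint
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ spanId, score }),
    });

  return (
    <span className="flex gap-2">
      <button onClick={() => rate(1)} aria-label="Good response">👍</button>
      <button onClick={() => rate(-1)} aria-label="Bad response">👎</button>
    </span>
  );
}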

Step 6: Utilizing Stored Prompts

Finally, we made use of stored prompts from the Prompts tab for each input to the LLM. These prompts are crucial in guiding the model to generate accurate and contextually appropriate responses. By capturing and storing these prompts as part of the traces, we can analyze and refine them over time, further improving the chatbot's performance.
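The exact API for fetching stored prompts is covered in our Prompts documentation; as a hypothetical sketch of the pattern (fetchStoredPrompt, the prompt ID, and the variable syntax are all assumptions), the point is that the live prompt version is fetched and filled at request time rather than hardcoded:

// Hypothetical sketch; fetchStoredPrompt stands in for however your setup
// pulls the live prompt version from the Prompts tab.
declare function fetchStoredPrompt(promptId: string): Promise<string>;

async function buildSystemPrompt(context: string): Promise<string> {
  // Fetch the currently live prompt version instead of hardcoding it, so
  // prompt edits in the registry take effect without a redeploy.
  const template = await fetchStoredPrompt("docs-chatbot-system"); // assumed ID
  // Substitute the retrieved documentation into the template.
  return template.replace("{{context}}", context); // assumed variable syntax
}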

Conclusion

By following these steps, we were able to create a powerful tool that not only helps our users but also demonstrates the ease of integrating the Langtrace SDK into any LLM-powered application. The ability to monitor, evaluate, and improve the chatbot through Langtrace's observability features has made it much easier to ensure that our bot continues to deliver high-quality responses. We hope this project inspires you to explore the full potential of Langtrace in your own LLM applications.

Ready to Get the Best Out of Your LLM Models?

Sign up for Langtrace today and start running comprehensive evaluations to ensure the accuracy and effectiveness of your models. Don’t ship on just vibes. Make data-driven decisions and be more confident in your LLM applications.

Support Our Open Source Project

If you found this tutorial helpful, please consider giving us a star on GitHub. Your support helps us continue to improve and share valuable resources with the community. Thank you!
