Integrating Langtrace into Your Next.js LLM Project
Obinna Okafor ⸱ Software Engineer ⸱ Jul 11, 2024
Introduction
In this post, I’ll walk you through the steps to integrate Langtrace into your Next.js project.
Langtrace helps you monitor, evaluate, and manage prompts in your LLM applications, ensuring optimal performance and reliability. Let's get started!
To demonstrate, I created a chat application (GitHub link) that leverages OpenAI's API.

Prerequisites
To follow along, we'll need the following:
A Langtrace account. If you don't already have one, sign up to create one.
An API key for the LLM integration you want to trace (e.g., OpenAI, Cohere).
Installation
First, you'll need to install the Langtrace TypeScript SDK. Open your terminal, navigate to your Next.js project directory, and run the following command:
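At the time of writing, the SDK is published on npm as @langtrase/typescript-sdk (note the spelling of the package scope); with npm the install looks like this:

```bash
npm install @langtrase/typescript-sdk
```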
Configure Webpack
Next, update your Next.js configuration to handle .node files and ignore warnings related to OpenTelemetry. Open or create a next.config.(m)js file in the root of your project and add the following code:
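Here's a minimal sketch of that configuration. It assumes the node-loader package for handling .node files (install it as a dev dependency if you take this approach); treat it as a starting point and compare against the Langtrace docs:

```js
// next.config.mjs — for next.config.js, use module.exports instead of export default
/** @type {import('next').NextConfig} */
const nextConfig = {
  webpack: (config, { isServer }) => {
    // Load native .node addons pulled in by the OpenTelemetry stack
    config.module.rules.push({
      test: /\.node$/,
      loader: "node-loader",
    });
    if (isServer) {
      // Suppress noisy OpenTelemetry dependency warnings in server builds
      config.ignoreWarnings = [{ module: /opentelemetry/ }];
    }
    return config;
  },
};

export default nextConfig;
```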
Generate a Langtrace API key
Skip this step if you already have an API key.
Log in to your Langtrace account and create a new project if you don't already have one.
Select the project, go to settings, then click on API Key to generate an API key.

Initialize Langtrace
To initialize Langtrace in a Next.js project, we'll need to do the following:
Import the Langtrace SDK
Import the module of the integration(s) we want to monitor (OpenAI for this project) and pass it to Langtrace as an instrumentation, as shown in the sketch below. Langtrace supports instrumentations for different LLMs, vector DBs, and frameworks. The instrumentations object is a key-value pair where the key is the name of the LLM, vector DB, or framework and the value is the imported module. For a full list of instrumentations, visit the docs.
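Putting those two steps together, a minimal sketch looks like the following. The route path, model name, and LANGTRACE_API_KEY environment variable name are illustrative assumptions; the init options follow the key-value shape described above:

```ts
// app/api/chat/route.ts — illustrative location for a server-side route
import * as Langtrace from "@langtrase/typescript-sdk";
// Import the OpenAI module itself so it can be passed as an instrumentation
import * as openai from "openai";

// Initialize Langtrace once, before the OpenAI client is created.
// Key: integration name; value: the imported module.
Langtrace.init({
  api_key: process.env.LANGTRACE_API_KEY,
  instrumentations: { openai: openai },
});

const client = new openai.OpenAI({ apiKey: process.env.OPENAI_API_KEY });

export async function POST(req: Request) {
  const { messages } = await req.json();
  // This call is instrumented automatically and will show up as a trace
  const completion = await client.chat.completions.create({
    model: "gpt-4o-mini", // assumed model; use whichever your app targets
    messages,
  });
  return Response.json(completion.choices[0].message);
}
```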
Run the application
Now we’ll run the application to see what shows up on our Langtrace dashboard. To do this, run the following command in the terminal from the root folder of our chat app.
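Assuming the standard Next.js dev script in package.json:

```bash
npm run dev
```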
Next, we’ll open our browser and navigate to http://localhost:3000 to start interacting with our LLM (in this instance, OpenAI).

You should start seeing traces in your Langtrace dashboard.


Conclusion
By following these steps, you've successfully integrated Langtrace into your Next.js application. You can now capture traces, annotate requests, manage prompts, evaluate your application's performance, and track key metrics, all of which help you optimize your LLM usage and ensure your application runs efficiently.
For more detailed information and advanced configurations, visit the Langtrace documentation.
Feel free to reach out if you have any questions or need further assistance.
Happy Langtracing!