
Integrating Langtrace into Your Next.js LLM Project

Obinna Okafor · 3 min read

In this post, I’ll walk you through the steps to integrate Langtrace into your Next.js project.

Langtrace helps you monitor, evaluate, and manage prompts in your LLM applications, ensuring optimal performance and reliability. Let's get started!

I created a chat application (GitHub link) that leverages OpenAI's API to demonstrate how to do this.



Prerequisites

To follow along, we’ll need the following:

  • A Langtrace account. Sign up to create one if you don’t already have one.
  • An API key for the LLM you’re integrating (e.g., OpenAI, Cohere). We’ll store the keys as environment variables, as shown below.
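Both keys end up as environment variables read by the app. A minimal .env.local at the project root might look like this (the variable names are assumptions: LANGTRACE_API_KEY matches the initialization code later in this post, and OPENAI_API_KEY is the variable the OpenAI SDK reads by default):

# .env.local
LANGTRACE_API_KEY=your-langtrace-api-key
OPENAI_API_KEY=your-openai-api-key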


Installation

First, you'll need to install the Langtrace TypeScript SDK. Open your terminal and navigate to your Next.js project directory. Then, run the following command:

npm i @langtrase/typescript-sdk
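The webpack configuration in the next step uses node-loader to handle native .node files. If it isn’t already in your project, install it as a dev dependency as well:

npm i -D node-loader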

Configure Webpack

Next, you need to update your Next.js configuration to handle .node files and ignore warnings related to OpenTelemetry. Open or create a next.config.(m)js file in the root of your project and add the following code:

/** @type {import('next').NextConfig} */
const nextConfig = {
  webpack: (config, { isServer }) => {
    // Handle native .node binary files with node-loader
    config.module.rules.push({
      test: /\.node$/,
      loader: "node-loader",
    });
    // Silence noisy OpenTelemetry warnings in the server build
    if (isServer) {
      config.ignoreWarnings = [{ module: /opentelemetry/ }];
    }
    return config;
  },
  // rest of config (if any)
};

export default nextConfig;
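Note that the export default syntax assumes an ES module config (next.config.mjs, or "type": "module" in package.json). If you’re using a CommonJS next.config.js, replace the export with module.exports = nextConfig.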

Generate a Langtrace API key

Skip this step if you already have an API key.

  • Log in to your Langtrace account and create a new project if you don’t already have one.
  • Select the project, go to Settings, then click on API Key to generate an API key.

Initialize Langtrace

To initialize Langtrace in a Next.js project, we’ll need to do the following:

  • Import the Langtrace SDK
  • Import the module of each integration we want to monitor (OpenAI for this project) and pass it to Langtrace as an instrumentation. Langtrace supports instrumentations for different LLMs, vector DBs, and frameworks. The instrumentations object is a key-value map where the key is the name of the LLM, vector DB, or framework and the value is the imported module. To see the full list of instrumentations, visit the docs.

// The Langtrace import must come before any LLM imports
import * as Langtrace from "@langtrase/typescript-sdk";
// The whole module is passed to Langtrace as the instrumentation target
import * as openai from "openai";
// The default export is still used to create the client as usual
import OpenAI from "openai";

Langtrace.init({
  instrumentations: { openai: openai },
  api_key: process.env.LANGTRACE_API_KEY as string,
});
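To make this concrete, here is a sketch of how the initialization and an instrumented OpenAI call could sit together in an App Router route handler. The route path, request shape, and model here are assumptions for illustration, not the exact code of the demo app:

// app/api/chat/route.ts (hypothetical path; adjust to your app)

// The Langtrace import must come before any LLM imports
import * as Langtrace from "@langtrase/typescript-sdk";
import * as openai from "openai";
import OpenAI from "openai";

Langtrace.init({
  instrumentations: { openai: openai },
  api_key: process.env.LANGTRACE_API_KEY as string,
});

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

export async function POST(request: Request) {
  // Assumed request body shape: { messages: [{ role, content }, ...] }
  const { messages } = await request.json();

  // Because the openai module is instrumented, this call is traced
  // automatically and shows up on the Langtrace dashboard.
  const completion = await client.chat.completions.create({
    model: "gpt-4o-mini", // example model; use whichever you prefer
    messages,
  });

  return Response.json({ reply: completion.choices[0].message.content });
}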

Run the application

Now we’ll run the application to see what shows up on our Langtrace dashboard. To do this, run the following command from the root folder of the chat app:

npm run dev

Next, we’ll open the browser and navigate to http://localhost:3000 to start interacting with our LLM (in this instance, OpenAI).
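If you prefer the terminal, you can also send a request directly (this assumes the hypothetical app/api/chat route sketched earlier):

curl -X POST http://localhost:3000/api/chat \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello!"}]}'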



You should start seeing traces in your Langtrace dashboard.


Conclusion

By following these steps, you’ve successfully integrated Langtrace into your Next.js application. You can now capture traces, annotate requests, manage prompts, evaluate your application’s performance, and track key metrics to optimize your LLM usage.

For more detailed information and advanced configurations, visit the Langtrace documentation.

Feel free to reach out if you have any questions or need further assistance.

Happy Langtracing!


About Obinna Okafor

A software engineer passionate about building lasting products on the web.