Integrate Langtrace with Arch

Obinna Okafor

Software Engineer

Jan 17, 2025

Introduction

We're excited to announce that Langtrace now supports the Arch gateway. This means Langtrace will automatically capture traces and metrics originating from Arch.

Arch is an intelligent gateway designed to protect, observe, and personalize AI agents with your APIs. It is engineered with specialized (sub-billion-parameter) LLMs that are optimized for fast, cost-effective, and accurate handling of prompts, and it is designed to run alongside your application servers as a self-contained process.

Setup

Prerequisites

Before you begin installing and running Arch, ensure you have the following:

  1. Docker (v24)

  2. Docker Compose (v2.29)
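
You can quickly confirm both are installed with the standard Docker version commands (exact output will vary with your setup):

docker --version
docker compose version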

You'll be using Arch's CLI to manage and interact with the Arch gateway.

Note: We recommend you create a new Python environment to isolate dependencies. To do that, run the following:

python -m venv venv
source venv/bin/activate


Install Arch

pip install archgw==0.1.8


Create the Arch config file (arch_config.yaml):

version: v0.1

listener:
  address: 0.0.0.0
  port: 12000
  message_format: huggingface
  connect_timeout: 0.005s

llm_providers:
  - name: gpt-4o
    access_key: $OPENAI_API_KEY
    provider: openai
    model: gpt-4o

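Arch can front more than one model. Here is a sketch of what a multi-provider block could look like, extrapolated from the single-provider format above; the second provider entry and the default flag are assumptions, so check the Arch docs for your version:

llm_providers:
  - name: gpt-4o
    access_key: $OPENAI_API_KEY
    provider: openai
    model: gpt-4o
    default: true

  - name: gpt-4o-mini
    access_key: $OPENAI_API_KEY
    provider: openai
    model: gpt-4o-mini

With a setup like this, the model field in your client request selects which provider handles the call.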

Start the Arch gateway with the config file (the $OPENAI_API_KEY reference in the config is resolved from your environment):

archgw up arch_config.yaml

2024-12-05 16:56:27,979 - cli.main - INFO - Starting archgw cli version: 0.1.8
...
2024-12-05 16:56:28,485 - cli.utils - INFO - Schema validation successful!
2024-12-05 16:56:28,485 - cli.main - INFO - Starging arch model server and arch gateway
...
2024-12-05 16:56:51,647 - cli.core - INFO -

Once the gateway is up, you can start interacting with it on port 12000 using OpenAI's client (Arch is OpenAI client compatible).

Before we start making LLM requests, we need to do the following:

  1. Sign up for Langtrace, create a project, and get a Langtrace API key.

  2. Install the Langtrace SDK.

pip install -U langtrace-python-sdk

Setup environment variables:

export LANGTRACE_API_KEY=YOUR_LANGTRACE_API_KEY
export OPENAI_API_KEY=YOUR_OPENAI_API_KEY

Now we can start making requests:

import os
from langtrace_python_sdk import langtrace  # Must precede any LLM module imports
from openai import OpenAI

langtrace.init(api_key=os.environ["LANGTRACE_API_KEY"])

# Point the OpenAI client at the Arch gateway instead of api.openai.com
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"], base_url="http://localhost:12000/v1")

response = client.chat.completions.create(
    model="gpt-4o",  # must match the provider name configured in arch_config.yaml
    messages=[
        {"role": "system", "content": "You are a helpful assistant"},
        {"role": "user", "content": "Hello"},
    ],
)

print(response.choices[0].message.content)

Now you can see the traces in Langtrace.
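
If a single workflow makes several calls through the gateway, you can group them under one trace with the Langtrace SDK's with_langtrace_root_span decorator. Here is a minimal sketch, reusing the client from the example above; the function name and prompt are placeholders:

from langtrace_python_sdk import with_langtrace_root_span

@with_langtrace_root_span()
def ask(question: str) -> str:
    # The root span and the nested LLM call show up under a single trace in Langtrace
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

print(ask("What does the Arch gateway do?"))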


Useful Resources

Ready to try Langtrace?

Try out the Langtrace SDK with just 2 lines of code.

Want to learn more?

Check out our documentation to learn more about how Langtrace works.

Join the Community

Check out our Discord community to ask questions and meet other users.