Vercel AI SDK Integration

Effortlessly integrate the Vercel AI SDK into your Next.js app using llmonitor. We've built a custom hook that makes tracking your AI-driven chats a breeze.

1. Import and Initialize

Import llmonitor and the AI SDK helper hook, then initialize the monitor with your app ID.

import monitor, { useMonitorVercelAI } from "llmonitor/react"

monitor.init({
  appId: "YOUR APP ID",
})

2. Wrap the useChat hook

"use client"

import { useEffect } from "react"
import { useChat } from "ai/react"
import monitor, { useMonitorVercelAI } from "llmonitor/react"

export default function Chat() {
  const ai = useChat({
    // This is necessary to reconcile LLM calls made on the backend
    sendExtraMessageFields: true,
  })

  // Use the hook to wrap and track the AI SDK
  const {
    trackFeedback, // this is a new function you can use to track feedback (see the sketch below)
    messages,
    input,
    handleInputChange,
    handleSubmit,
  } = useMonitorVercelAI(ai)

  // Optional: identify the user
  useEffect(() => {
    monitor.identify("elon", {
      name: "Elon Musk",
      email: "elon@tesla.com",
    })
  }, [])

  return null // TODO: your chat UI (see the sketch below)
}
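
For illustration, here's a minimal sketch of what the chat UI might look like when wired to the hook's return values, including a button that reports positive feedback through trackFeedback. This sketch assumes trackFeedback takes the message's id and a feedback object, and the payload shape ({ thumbs: "up" }) is an assumption made for this example; check the llmonitor docs for the exact fields it accepts.

return (
  <div>
    {messages.map((m) => (
      <div key={m.id}>
        {m.role}: {m.content}
        {/* Feedback payload shape assumed for illustration */}
        <button onClick={() => trackFeedback(m.id, { thumbs: "up" })}>👍</button>
      </div>
    ))}
    <form onSubmit={handleSubmit}>
      <input value={input} onChange={handleInputChange} />
    </form>
  </div>
)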

3. Set up the monitor on the backend

We need to reconcile the OpenAI calls made on the backend with the messages sent from the frontend. To do this, use the backend version of the monitor.

import OpenAI from "openai";
import monitor from "llmonitor";
import { monitorOpenAI } from "llmonitor/openai";

monitor.init({
  appId: "YOUR APP ID",
});

// Create an OpenAI API client and monitor it
const openai = monitorOpenAI(
  new OpenAI({
    apiKey: process.env.OPENAI_API_KEY,
  })
);

4. Reconcile messages with OpenAI calls

Once your OpenAI client is monitored, you can chain the setParent method onto the completion call to reconcile it with the frontend message that triggered it. Because sendExtraMessageFields is enabled on the frontend, each message arrives with its id, so the last message's id serves as the parent:

const response = await openai.chat.completions
  .create({
    model: "gpt-4",
    stream: true,
    messages,
  })
  // The setParent method reconciles the frontend message with the backend call
  .setParent(lastMessageId);

Full API Route Example

// ./app/api/chat/route.ts
import OpenAI from "openai";
import { OpenAIStream, StreamingTextResponse } from "ai";

// Import the backend version of the monitor
import monitor from "llmonitor";
import { monitorOpenAI } from "llmonitor/openai";

monitor.init({
  appId: "YOUR APP ID",
});

// Create an OpenAI API client and monitor it
const openai = monitorOpenAI(
  new OpenAI({
    apiKey: process.env.OPENAI_API_KEY,
  })
);

export const runtime = "edge";

export async function POST(req: Request) {
  const data = await req.json();
  const { messages: rawMessages } = data;

  // Keep only the content and role of each message, otherwise OpenAI throws an error
  const messages = rawMessages.map(({ content, role }) => ({ role, content }));

  // Get the last message's run ID
  const lastMessageId = rawMessages[rawMessages.length - 1].id;

  // Ask OpenAI for a streaming chat completion given the prompt
  const response = await openai.chat.completions
    .create({
      model: "gpt-4",
      stream: true,
      messages,
    })
    // The setParent method reconciles the frontend message with the backend call
    .setParent(lastMessageId);

  const stream = OpenAIStream(response);
  return new StreamingTextResponse(stream);
}
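
As a side note on why rawMessages[rawMessages.length - 1].id works: with sendExtraMessageFields enabled, useChat includes each message's extra fields (notably its id) in the request body it POSTs to the route. The payload looks roughly like this; the values and exact extra fields shown are illustrative and may vary with your AI SDK version:

{
  "messages": [
    { "id": "aBc123", "role": "user", "content": "Hello!", "createdAt": "2023-11-01T12:00:00.000Z" },
    { "id": "dEf456", "role": "assistant", "content": "Hi! How can I help?", "createdAt": "2023-11-01T12:00:05.000Z" }
  ]
}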

Questions? We're here to help.