Vercel AI SDK Integration
Effortlessly integrate the Vercel AI SDK into your Next.js app using llmonitor. We've built a custom hook that makes tracking your AI-driven chats a breeze.
This guide assumes you are using Next.js. If you are using another framework, contact us and we'll help you integrate it.
Import and Initialize
Import llmonitor and the AI SDK helper hook, then initialize the monitor with your app ID.
```ts
import monitor, { useMonitorVercelAI } from "llmonitor/react"

monitor.init({ appId: "YOUR APP ID" })
```
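If you'd rather not hardcode the app ID, you can load it from an environment variable instead. A minimal sketch, assuming you've added a `NEXT_PUBLIC_LLMONITOR_APP_ID` entry (a name chosen here for illustration) to your `.env.local`; the `NEXT_PUBLIC_` prefix is what Next.js requires to expose a variable to client-side code:

```ts
import monitor from "llmonitor/react"

// NEXT_PUBLIC_LLMONITOR_APP_ID is an assumed variable name; Next.js only
// exposes env vars prefixed with NEXT_PUBLIC_ to the browser
monitor.init({ appId: process.env.NEXT_PUBLIC_LLMONITOR_APP_ID! })
```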
Wrap the useChat hook
```tsx
"use client"

import { useEffect } from "react"
import { useChat } from "ai/react"
import monitor, { useMonitorVercelAI } from "llmonitor/react"

export default function Chat() {
  const ai = useChat({
    // This is necessary to reconcile LLM calls made on the backend
    sendExtraMessageFields: true,
  })

  // Use the hook to wrap and track the AI SDK
  const {
    trackFeedback, // a new function you can use to track feedback
    messages,
    input,
    handleInputChange,
    handleSubmit,
  } = useMonitorVercelAI(ai)

  // Optional: identify the user
  useEffect(() => {
    monitor.identify("elon", {
      name: "Elon Musk",
      email: "elon@tesla.com",
    })
  }, [])

  return (
    // ... your chat UI ...
  )
}
```
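The docs above only name `trackFeedback`, so the payload shape in the sketch below is an assumption for illustration rather than a documented API. One way to wire it up is a thumbs-up button next to each assistant message, inside the component's return:

```tsx
// Hypothetical usage inside Chat's return; the { thumbs: "up" } payload
// is an assumed shape, not a confirmed llmonitor API
<div>
  {messages.map((m) => (
    <div key={m.id}>
      {m.role}: {m.content}
      {m.role === "assistant" && (
        <button onClick={() => trackFeedback(m.id, { thumbs: "up" })}>
          👍
        </button>
      )}
    </div>
  ))}
</div>
```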
Set up the monitor on the backend
We need to reconcile the OpenAI calls made on the backend with the messages sent from the frontend. To do this, use the backend version of the monitor.
import monitor from "llmonitor";import { monitorOpenAI } from "llmonitor/openai";monitor.init({appId: "YOUR APP ID",})// Create an OpenAI API client and monitor itconst openai = monitorOpenAI(new OpenAI({apiKey: process.env.OPENAI_API_KEY}));
Reconcile messages with OpenAI calls
Once your `openai` client is monitored, you can use the `setParent` method to reconcile the frontend message IDs with the backend call:
```ts
const response = await openai.chat.completions.create({
  model: "gpt-4",
  stream: true,
  messages: messages,
})

// The setParent method reconciles the frontend message with the backend call
setParent(lastMessageId)
```
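For reference, once `sendExtraMessageFields` is enabled, the request body the frontend sends includes each message's `id`, which is where `lastMessageId` comes from in the route handler below. An illustrative shape (the IDs and contents are made up):

```ts
// Illustrative request body; field values are examples only
const exampleBody = {
  messages: [
    { id: "msg_1", role: "user", content: "Hello!" },
    { id: "msg_2", role: "assistant", content: "Hi! How can I help?" },
  ],
}
```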
Full API Function Example
Make sure you've enabled `sendExtraMessageFields` on the `useChat` hook so that message IDs are also sent.
```ts
// ./app/api/chat/route.ts
import OpenAI from "openai"
import { OpenAIStream, StreamingTextResponse } from "ai"

// Import the backend version of the monitor
import monitor from "llmonitor"
import { monitorOpenAI } from "llmonitor/openai"

monitor.init({
  appId: "YOUR APP ID",
})

// Create an OpenAI API client and monitor it
const openai = monitorOpenAI(
  new OpenAI({ apiKey: process.env.OPENAI_API_KEY })
)

export const runtime = "edge"

export async function POST(req: Request) {
  const data = await req.json()
  const { messages: rawMessages } = data

  // Keep only the content and role of each message, otherwise OpenAI throws an error
  const messages = rawMessages.map(({ content, role }) => ({ role, content }))

  // Get the last message's run ID
  const lastMessageId = rawMessages[rawMessages.length - 1].id

  // Ask OpenAI for a streaming chat completion given the prompt
  const response = await openai.chat.completions.create({
    model: "gpt-4",
    stream: true,
    messages: messages,
  })

  // The setParent method reconciles the frontend message with the backend call
  setParent(lastMessageId)

  const stream = OpenAIStream(response)
  return new StreamingTextResponse(stream)
}
```
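With `setParent` in place, the backend completion is linked to the frontend message it answers, so llmonitor can treat the chat message and the LLM call as one reconciled run rather than two unrelated events.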