LangChain

We provide a callback handler that can be used to track LangChain calls, chains, and agents.

Setup

First, install the relevant llmonitor package:

Python:

```shell
pip install llmonitor
```

JavaScript:

```shell
npm install llmonitor
```

Then, set the LLMONITOR_APP_ID environment variable to your app tracking id.

```shell
LLMONITOR_APP_ID="YOUR APP ID"
```
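You can also set the variable from code, as a minimal sketch (assuming a standard Python environment); it must run before the callback handler is initialized:

```python
import os

# Equivalent to exporting LLMONITOR_APP_ID in the shell, as long as this
# executes before LLMonitorCallbackHandler is created
os.environ["LLMONITOR_APP_ID"] = "YOUR APP ID"

print(os.environ["LLMONITOR_APP_ID"])  # → YOUR APP ID
```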

If you'd prefer not to set an environment variable, you can pass the app ID directly when initializing the callback handler:

Python:

```python
from langchain.callbacks import LLMonitorCallbackHandler

handler = LLMonitorCallbackHandler(app_id="YOUR APP ID")
```

JavaScript:

```typescript
import { LLMonitorHandler } from "langchain/callbacks/handlers/llmonitor"

const handler = new LLMonitorHandler({
  appId: "YOUR APP ID",
  // apiUrl: 'custom self hosting url'
})
```

Usage with LLM calls

You can use the callback handler with any LLM or chat model class from LangChain.

Python:

```python
from langchain.chat_models import ChatOpenAI
from langchain.callbacks import LLMonitorCallbackHandler

handler = LLMonitorCallbackHandler()

chat = ChatOpenAI(
    callbacks=[handler],
)
```

JavaScript:

```typescript
import { ChatOpenAI } from "langchain/chat_models/openai"
import { LLMonitorHandler } from "langchain/callbacks/handlers/llmonitor"

const model = new ChatOpenAI({
  callbacks: [new LLMonitorHandler()],
})
```

Usage with agents

The callback handler works seamlessly with LangChain agents and chains.

For agents, it is recommended to pass a name in the metadata so you can identify them in the dashboard.

Example:

Python:

```python
from langchain.agents import load_tools, initialize_agent, AgentType
from langchain.llms import OpenAI
from langchain.callbacks import LLMonitorCallbackHandler

handler = LLMonitorCallbackHandler()

llm = OpenAI()
tools = load_tools(["llm-math"], llm=llm)
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION)

agent.run(
    "What is the approximate result of 78 to the power of 5?",
    callbacks=[handler],  # Add the handler to the agent
    metadata={"agentName": "SuperCalculator"},  # Identify the agent in the LLMonitor dashboard
)
```

JavaScript:

```typescript
import { LLMonitorHandler } from "langchain/callbacks/handlers/llmonitor"
import { initializeAgentExecutorWithOptions } from "langchain/agents"
import { ChatOpenAI } from "langchain/chat_models/openai"
import { Calculator } from "langchain/tools/calculator"

const tools = [new Calculator()]
const chat = new ChatOpenAI()

const executor = await initializeAgentExecutorWithOptions(tools, chat, {
  agentType: "openai-functions",
})

const result = await executor.run(
  "What is the approximate result of 78 to the power of 5?",
  {
    callbacks: [new LLMonitorHandler()], // Add the handler to the agent
    metadata: { agentName: "SuperCalculator" }, // Identify the agent in the LLMonitor dashboard
  }
)
```

Usage with custom agents

If you're only partially using LangChain, you can combine the callback handler with the llmonitor module to track custom agents:

Python:

```python
from langchain.schema.messages import HumanMessage, SystemMessage
from langchain.chat_models import ChatOpenAI
from llmonitor import agent

chat = ChatOpenAI()

# Wrapping the function with the @agent decorator automatically tracks
# its inputs, outputs and errors in LLMonitor
@agent()
def TranslatorAgent(query):
    messages = [
        SystemMessage(
            content="You are a translator agent that hides jokes in each translation."
        ),
        HumanMessage(content=f"Translate this sentence from English to French: {query}"),
    ]
    return chat.invoke(messages)

res = TranslatorAgent("Good morning")
```

JavaScript:

```typescript
import { ChatOpenAI } from "langchain/chat_models/openai"
import { HumanMessage, SystemMessage } from "langchain/schema"
import { LLMonitorHandler } from "langchain/callbacks/handlers/llmonitor"
import monitor from "llmonitor"

const chat = new ChatOpenAI({
  callbacks: [new LLMonitorHandler()], // <- Add the LLMonitor callback handler here
})

async function TranslatorAgent(query) {
  const res = await chat.call([
    new SystemMessage(
      "You are a translator agent that hides jokes in each translation."
    ),
    new HumanMessage(
      `Translate this sentence from English to French: ${query}`
    ),
  ])

  return res.content
}

// By wrapping the agent with wrapAgent, we automatically track all inputs, outputs and errors,
// and tools and logs will be tied to the correct agent
const translate = monitor.wrapAgent(TranslatorAgent)
const res = await translate("Good morning")
```

Questions? We're here to help.