HTTP Integration
The HTTP integration allows you to send events directly to the LLMonitor API endpoint. This is useful if you want to integrate LLMonitor from a language or environment for which our SDKs are not available.
The endpoint accepts POST requests with a JSON body containing an array of Event objects.
Endpoint
Authentication
No authentication is required to send events to the API endpoint. However, you will need to provide a valid app ID in each Event object, or the request will be rejected.
Example
Here is an example with cURL to send a POST request to the API endpoint:
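As a minimal sketch, here is the equivalent request in Python using only the standard library. The endpoint URL is a placeholder (use the one shown in your LLMonitor dashboard), and the payload shape (a JSON array of Event objects) follows the description above:

```python
import json
import urllib.request
import uuid
from datetime import datetime, timezone

# Placeholder endpoint URL -- replace with the one from your LLMonitor
# dashboard. The body is a JSON array of Event objects, as described above.
API_URL = "https://app.llmonitor.com/api/report"

def send_events(events, url=API_URL):
    """POST a list of Event objects as a JSON array to the API endpoint."""
    req = urllib.request.Request(
        url,
        data=json.dumps(events).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# A `start` event for an LLM call:
start_event = {
    "type": "llm",
    "app": "your-app-id",                 # replace with your real app ID
    "event": "start",
    "runId": str(uuid.uuid4()),           # UUID recommended
    "name": "gpt-4",
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "input": [{"role": "user", "content": "Hello!"}],
}

# send_events([start_event])  # uncomment to actually send the event
```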
Once your LLM call succeeds, you need to send an `end` event to the API endpoint with the `output` data from the LLM call.
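The corresponding `end` event might look like the following sketch. The app ID is a placeholder, and `runId` must be the exact value sent with the matching `start` event:

```python
from datetime import datetime, timezone

# Sketch of the matching `end` event. `runId` ties this event back to
# its `start` event, so reuse the exact same value here.
end_event = {
    "type": "llm",
    "app": "your-app-id",                       # placeholder app ID
    "event": "end",
    "runId": "the-same-uuid-as-the-start-event",
    "name": "gpt-4",
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "output": {"role": "assistant", "content": "Hi! How can I help?"},
}
```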
Input / output format
You can use any valid JSON for the `input` and `output` fields. However, for LLM calls you can use the chat message format:
Example:
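A sketch of what chat-formatted `input` and `output` could look like. The `role`/`content` field names follow the common chat-completion convention and are an assumption here:

```python
# Chat-message-style input and output for an `llm` run. The role/content
# field names mirror the usual chat-completion convention (assumption).
input_messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the capital of France?"},
]
output_message = {
    "role": "assistant",
    "content": "The capital of France is Paris.",
}
```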
Event definition
The Event object has the following properties:
Property | Type | Required | Description |
---|---|---|---|
type | string | Yes | The type of the event. Can be one of `"llm"`, `"agent"`, `"tool"`, `"chain"`, `"chat"`, `"convo"`. |
app | string | Yes | The app ID of the application. |
event | string | No | The name of the event. Can be one of `"start"`, `"end"`, `"error"`, `"feedback"`. |
runId | string | Yes | The ID of the run (UUID recommended). |
parentRunId | string | No | The ID of the parent run, if any. |
timestamp | string | Yes | Timestamp in ISO 8601 format. |
input | any | No | Input data (with `start` events). |
tags | string[] | No | Array of tags. |
name | string | No | The name of the current model, agent, tool, etc. |
output | any | No | Output data (with `end` events). |
extra | any | No | Extra data associated with the run. |
feedback | any | No | Feedback data associated with the run (only with `feedback` events). |
tokensUsage | object | No | An object containing the number of prompt and completion tokens used (only for `llm` runs). |
error | object | No | An object containing the error message and stack trace, if an error occurred. |
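As an illustration only (not an official client), the required properties from the table can be checked with a small helper:

```python
# Illustrative sanity check derived from the Event definition: verify
# that an Event dict has every required property and a known type.
REQUIRED = {"type", "app", "runId", "timestamp"}
TYPES = {"llm", "agent", "tool", "chain", "chat", "convo"}

def validate_event(event):
    """Return a list of problems; an empty list means the event looks valid."""
    problems = sorted(
        f"missing required property: {p}" for p in REQUIRED - event.keys()
    )
    if event.get("type") not in TYPES:
        problems.append(f"unknown event type: {event.get('type')!r}")
    return problems
```

For instance, `validate_event({"type": "llm"})` would flag the three missing required properties.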
The `tokensUsage` object has the following properties:
Property | Type | Required | Description |
---|---|---|---|
prompt | number | No | The number of prompt tokens used. |
completion | number | No | The number of completion tokens used. |
If `tokensUsage` is not provided, the number of tokens used will be calculated from the `input` and `output` fields. This works with models from OpenAI, Anthropic and Google at the moment.
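For example, an `end` event that reports token usage explicitly (a sketch with placeholder IDs) would carry a `tokensUsage` object alongside the `output`:

```python
# An `end` event with explicit token counts, so the server does not
# have to estimate them from the input/output fields.
event_with_usage = {
    "type": "llm",
    "app": "your-app-id",     # placeholder
    "event": "end",
    "runId": "your-run-id",   # placeholder
    "timestamp": "2023-01-01T00:00:05Z",
    "output": {"role": "assistant", "content": "Paris."},
    "tokensUsage": {"prompt": 12, "completion": 2},
}
```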
The `error` object has the following properties:
Property | Type | Required | Description |
---|---|---|---|
message | string | Yes | The error message. |
stack | string | No | The stack trace of the error. |
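A sketch of building an `error` event from a caught exception. The helper name is illustrative, not part of the API:

```python
import traceback
from datetime import datetime, timezone

def make_error_event(app_id, run_id, exc):
    """Build an `error` Event from a caught exception (illustrative helper)."""
    return {
        "type": "llm",
        "app": app_id,
        "event": "error",
        "runId": run_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "error": {
            "message": str(exc),  # required
            "stack": "".join(     # optional stack trace
                traceback.format_exception(type(exc), exc, exc.__traceback__)
            ),
        },
    }
```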
For the `feedback` field, refer to the Feedback page for more information.