Each LLM is an adapter around a language model provider and a specific model version, e.g. gpt-4o-mini. Each Agent can pick its own model, and a ZeeWorkflow can be configured to use a specific LLM as its default.
// The import path below assumes the AI Agent SDK package name; adjust to your install.
import { LLM } from "@covalenthq/ai-agent-sdk";

const llm = new LLM({
    provider: "openai",
    id: "gpt-4o-mini",
});
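To make the relationship concrete, here is a minimal sketch of an Agent that picks its own model while the workflow supplies a default. The model and defaultModel option names (and the other fields shown) are illustrative assumptions, not confirmed signatures; check the Agent and ZeeWorkflow reference for the exact shapes.

import { Agent, LLM, ZeeWorkflow } from "@covalenthq/ai-agent-sdk";

// Sketch only: "model" and "defaultModel" are assumed option names.
const summarizer = new Agent({
    name: "summarizer",
    description: "Summarizes incoming documents",
    model: { provider: "openai", id: "gpt-4o-mini" }, // agent-specific model
});

const zee = new ZeeWorkflow({
    description: "Summarize and review documents",
    agents: { summarizer },
    // Agents that do not set a model fall back to this default LLM.
    defaultModel: new LLM({ provider: "openai", id: "gpt-4o" }),
});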
List of supported LLM Model IDs
"gpt-4o"
"gpt-4o-mini"
"gpt-4o-2024-05-13"
"gpt-4o-2024-08-06"
"gpt-4o-2024-11-20"
"gpt-4o-audio-preview"
"gpt-4o-audio-preview-2024-10-01"
"gpt-4o-audio-preview-2024-12-17"
"gpt-4o-mini-2024-07-18"
"gpt-4-turbo"
"gpt-4-turbo-2024-04-09"
"gpt-4-turbo-preview"
"gpt-4-0125-preview"
"gpt-4-1106-preview"
"gpt-4"
"gpt-4-0613"
"gpt-3.5-turbo-0125"
"gpt-3.5-turbo"
"gpt-3.5-turbo-1106"
Environment Variables
For the openai provider, the API key is typically supplied via the OPENAI_API_KEY environment variable, which must be set before the LLM is constructed.
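A small startup guard surfaces a missing key early; this sketch assumes the conventional OPENAI_API_KEY variable:

// Fail fast if the key is missing rather than erroring on the first request.
if (!process.env.OPENAI_API_KEY) {
    throw new Error("OPENAI_API_KEY is not set");
}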
Use Cases
Image Analysis
LLMs can also process images alongside text by sending image-URL messages. Here’s an example of analyzing an image with the LLM:
// userMessage is a message helper assumed to be exported by the same SDK as LLM.
const messages = [
    // The text prompt and the image are sent as separate user messages.
    userMessage("What's in this image?"),
    userMessage([
        {
            type: "image",
            image: "https://example.com/logo.png",
        },
    ]),
];
import { z } from "zod";

// Zod schema describing the structured fields we want back from the model.
const schema = z.object({
    description: z.string(),
    colors: z.array(z.string()),
    text_content: z.string().optional(),
    improvements: z.string().optional(),
});
const result = await llm.generate({
    messages,
    schema,
    temperature: 0.8,
});
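Assuming generate resolves to an object matching the schema, the fields can then be read directly:

// Field names come from the schema above.
console.log(result.description);
console.log(result.colors.join(", "));
if (result.improvements) {
    console.log("Suggested improvements:", result.improvements);
}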
The LLM will analyze the image and return a structured response containing the description, the colors used, and potential improvements. You can also use base64-encoded images by providing a data URL:
const messages = [
    userMessage("What's in this image?"),
    userMessage([
        {
            type: "image",
            image: "data:image/png;base64,...",
        },
    ]),
];
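To build such a data URL from a local file, standard Node APIs suffice; this sketch assumes a PNG at a hypothetical ./logo.png path:

import { readFileSync } from "node:fs";

// Read a local PNG (hypothetical path) and encode it as a base64 data URL.
const base64 = readFileSync("./logo.png").toString("base64");
const dataUrl = `data:image/png;base64,${base64}`;

const messages = [
    userMessage("What's in this image?"),
    userMessage([{ type: "image", image: dataUrl }]),
];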