# LLM Actions

LLM actions generate text using AI models. Define a prompt template with dynamic placeholders, and the action fills them in and calls the LLM at execution time.


## Registration

```bash
curl -X POST http://localhost:3009/actions \
  -H "X-Org-Id: acme-corp" \
  -H "Content-Type: application/json" \
  -d '{
    "action_name": "summarize_document",
    "kind": "llm",
    "description": "Summarize a document in 3 bullet points",
    "template": "Summarize the following document in 3 bullet points:\n\n{{document_text}}",
    "model": "gpt-4",
    "max_tokens": 500,
    "outputKey": "summary"
  }'
```

## Properties

| Property | Type | Required | Description |
|---|---|---|---|
| `action_name` | string | Yes | Unique name for this action |
| `kind` | `"llm"` | Yes | Must be `"llm"` |
| `template` | string | Yes | Prompt template with `{{ }}` placeholders |
| `model` | string | No | LLM model (default: configured in environment) |
| `max_tokens` | number | No | Maximum tokens in the response (default: 1024) |
| `temperature` | number | No | Sampling temperature 0–1 (default: 0.7) |
| `outputKey` | string | No | Context key for the generated text |
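Only `action_name`, `kind`, and `template` are required; omitted optional fields fall back to their defaults. For example, this minimal registration body (the `detect_language` action is illustrative, not part of the examples above) uses the environment-configured model with the default `max_tokens` and `temperature`:

```json
{
  "action_name": "detect_language",
  "kind": "llm",
  "template": "Identify the language of the following text. Reply with the language name only:\n\n{{text}}"
}
```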

## Prompt Templates

Use `{{ }}` syntax for dynamic content in the prompt:

```json
{
  "action_name": "extract_invoice_fields",
  "kind": "llm",
  "template": "Extract the following fields from this invoice text and return as JSON:\n- invoice_number\n- vendor_name\n- amount\n- due_date\n\nInvoice text:\n{{invoice_text}}",
  "model": "gpt-4",
  "max_tokens": 256
}
```

Multiple placeholders are supported:

```json
{
  "template": "You are a {{role}} at {{company}}.\n\nAnalyze this customer complaint and draft a response:\n\n{{complaint_text}}"
}
```
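The substitution happens server-side at execution time, so the exact implementation isn't shown here. Conceptually, each `{{name}}` placeholder is replaced by the value of the matching property. A minimal sketch in Python (the function name and the strict missing-placeholder behavior are assumptions for illustration, not the service's actual code):

```python
import re

def render_template(template: str, properties: dict) -> str:
    """Replace each {{name}} placeholder with the matching property value.

    Raises KeyError for a placeholder with no matching property --
    a hypothetical validation policy chosen for this sketch.
    """
    def substitute(match: re.Match) -> str:
        key = match.group(1)
        if key not in properties:
            raise KeyError(f"missing property for placeholder: {key}")
        return str(properties[key])

    return re.sub(r"\{\{\s*([A-Za-z0-9_]+)\s*\}\}", substitute, template)

prompt = render_template(
    "You are a {{role}} at {{company}}.",
    {"role": "support engineer", "company": "Acme"},
)
# prompt == "You are a support engineer at Acme."
```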

## Usage in a Workflow

```json
{
  "type": "summarize_document",
  "properties": {
    "document_text": "@input.document_content"
  },
  "outputKey": "summary"
}
```

The `document_text` property fills the `{{document_text}}` placeholder in the template. The LLM response is stored at `@summary`.
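Conceptually, the engine resolves `@`-references against the shared workflow context, renders the prompt, and stores the model's reply under the step's `outputKey`. A simplified sketch (the context layout, helper name, and `call_llm` stand-in are assumptions for illustration):

```python
def run_llm_step(step: dict, context: dict, call_llm) -> dict:
    """Resolve @references, fill the template, and store the LLM reply.

    `call_llm` stands in for the real model call; the context layout
    ("input" holding workflow input, top-level keys for step outputs)
    is assumed for this sketch.
    """
    # Resolve properties like "@input.document_content" against the context.
    resolved = {}
    for name, value in step["properties"].items():
        if isinstance(value, str) and value.startswith("@"):
            node = context
            for part in value[1:].split("."):
                node = node[part]
            resolved[name] = node
        else:
            resolved[name] = value

    # Fill {{placeholders}} with the resolved values (naive substitution).
    prompt = step["template"]
    for name, value in resolved.items():
        prompt = prompt.replace("{{" + name + "}}", str(value))

    # Store the reply so later steps can reference it as @<outputKey>.
    context[step["outputKey"]] = call_llm(prompt)
    return context

context = {"input": {"document_content": "Q3 revenue grew 12%..."}}
step = {
    "template": "Summarize the following document in 3 bullet points:\n\n{{document_text}}",
    "properties": {"document_text": "@input.document_content"},
    "outputKey": "summary",
}
run_llm_step(step, context, call_llm=lambda p: "- bullet one")
# context["summary"] == "- bullet one"
```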


## Usage as an Agent Tool

```json
{
  "mode": "react",
  "objective": "Read the contract and identify key terms",
  "tools": [
    { "type": "action", "name": "summarize_document" },
    { "type": "action", "name": "extract_invoice_fields" }
  ]
}
```

The agent sees the action's description and template parameters in its tool list. It can call `summarize_document` with the required `document_text` parameter whenever it needs a summary during its reasoning.


## Structured Output

For actions that need structured responses (JSON, specific fields), instruct the model in the template:

```json
{
  "action_name": "classify_ticket",
  "kind": "llm",
  "template": "Classify this support ticket. Return ONLY a JSON object with:\n- category: one of [billing, technical, account, other]\n- priority: one of [low, medium, high, critical]\n- summary: one sentence summary\n\nTicket:\n{{ticket_text}}",
  "model": "gpt-4",
  "max_tokens": 200
}
```
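Even with instructions like these, the model's reply arrives as plain text, so callers typically parse and validate it defensively. A sketch of one way to do that (the fallback policy and function name are assumptions, not documented service behavior):

```python
import json

VALID_CATEGORIES = {"billing", "technical", "account", "other"}
VALID_PRIORITIES = {"low", "medium", "high", "critical"}

def parse_classification(reply: str) -> dict:
    """Parse the classify_ticket reply into the expected fields.

    Falls back to category "other" / priority "low" when the reply
    is not valid JSON or uses out-of-range values -- a policy chosen
    here for illustration.
    """
    fallback = {"category": "other", "priority": "low", "summary": reply.strip()}
    try:
        data = json.loads(reply)
    except json.JSONDecodeError:
        return fallback
    if data.get("category") not in VALID_CATEGORIES:
        return fallback
    if data.get("priority") not in VALID_PRIORITIES:
        return fallback
    return data

result = parse_classification(
    '{"category": "billing", "priority": "high", "summary": "Double charge"}'
)
# result["priority"] == "high"
```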

**LLM actions vs ReAct agents.** LLM actions run a single prompt and return the response — no iteration, no tool use. Use LLM actions for one-shot tasks: summarization, classification, extraction. Use ReAct agents when the task requires multi-step reasoning or tool interaction.

→ Next: [DB Actions](/actions/db)