# Hyph-en - Documentation Context > SYSTEM INFO: This document is optimized for LLM/AI agent ingestion. > It contains the complete documentation in a structured, context-dense format. > Navigation chrome, styling, and interactive elements have been stripped. > GENERATED: 2026-02-07T20:38:26.896Z --- ## PATH: Actions > Http (Source: actions/01-http.md) HTTP Actions ============ HTTP actions call external REST API endpoints. Use them to integrate with any system that has an API — CRMs, ERPs, payment processors, notification services. Registration ============ ``bash curl -X POST http://localhost:3009/actions \ -H "X-Org-Id: acme-corp" \ -H "Content-Type: application/json" \ -d '{ "actionname": "createsalesforcelead", "kind": "http", "description": "Create a new lead in Salesforce", "url": "https://api.salesforce.com/services/data/v58.0/sobjects/Lead", "httpmethod": "POST", "headers": { "Authorization": "Bearer orgconfig:api:salesforcetoken", "Content-Type": "application/json" }, "passthrough": true, "outputKey": "newlead" }' ` Properties ========== | Property | Type | Required | Description | |----------|------|----------|-------------| | actionname | string | Yes | Unique name for this action | | kind | "http" | Yes | | | url | string | Yes | Endpoint URL. Supports {{ }} template syntax for path parameters | | httpmethod | string | Yes | "GET", "POST", "PUT", "PATCH", "DELETE" | | headers | object | No | Request headers. Values support orgconfig: prefix for secrets | | contenttype | string | No | Request content type (default: "application/json") | | passthrough | boolean | No | If true, the full response body is available in context | | outputKey | string | No | Context key for the response | URL Templates ============= Use {{ }} syntax for dynamic URL segments: `json { "actionname": "getorder", "kind": "http", "url": "https://api.store.com/orders/{{orderid}}", "httpmethod": "GET" } ` When used in a workflow step, the orderid comes from the step properties: `json { "type": "getorder", "properties": { "orderid": "@input.orderid" } } ` Secret References ================= Use orgconfig: to reference encrypted secrets stored in org config (/platform/multi-tenancy): `json { "headers": { "Authorization": "Bearer orgconfig:api:stripekey", "X-Custom-Token": "orgconfig:api:vendortoken" } } ` The execution engine resolves orgconfig: references at runtime. The actual secret value never appears in the workflow definition or reasoning traces. Usage in a Workflow =================== `json { "type": "createsalesforcelead", "properties": { "FirstName": "@input.firstname", "LastName": "@input.lastname", "Email": "@input.email", "Company": "@input.company" }, "outputKey": "newlead" } ` With passthrough: true, the full API response is available at @newlead for subsequent steps. Usage as an Agent Tool ====================== List the action name in the agent's tools array: `json { "mode": "react", "objective": "Look up customer and update their record", "tools": [{ "type": "action", "name": "createsalesforcelead" }, { "type": "action", "name": "getorder" }] } ` The agent can call createsalesforcelead` with the required parameters. The action schema is included in the agent's prompt so it knows what parameters are needed. Error Handling ============== If the HTTP request fails (non-2xx status, timeout, connection error), the action returns an error result. In a workflow step, this fails the run. In a ReAct agent, the error is returned as an observation and the agent can decide how to proceed. 
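For intuition, here is a sketch of how a ReAct agent might recover from a failed call to the getorder action defined above. The values and the error observation shape are illustrative only; the exact payload depends on the engine and the failing endpoint.

```text
Iteration 4:
  Thought: "I need the order details before I can assess the refund."
  Action: getorder
  Input: { "orderid": "12345" }
  Observation: { "error": "HTTP 404: order not found" }

Iteration 5:
  Thought: "The lookup failed with a 404. The ID may be missing its prefix, so I'll retry with the normalized ID before escalating."
  Action: getorder
  Input: { "orderid": "ORD-12345" }
```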
HTTP timeouts default to 30 seconds. For long-running API calls, consider using an async pattern with polling. → Next: LLM Actions (/actions/llm) --- ## PATH: Agents > Deployment Patterns > Agent As Step (Source: agents/deployment-patterns/01-agent-as-step.md) Pattern A: Agent as Workflow Step ================================= The agent operates as one step within a larger deterministic workflow. The workflow handles sequencing, data flow, and branching. The agent handles the reasoning-intensive step. ``mermaid flowchart LR S1["Step 1: Matcher
(deterministic)"] --> S2["Step 2: ReAct Agent
(reasoning)"] S2 --> S3["Step 3: PbotApproval
(human review)"] S3 --> S4["Step 4: Custom Table
(log results)"] style S2 fill:#f3e8ff,stroke:#9333ea ` When to Use =========== - Most of the process is deterministic, but one step requires judgment - You want the workflow to control overall flow while delegating reasoning to the agent - The agent is a specialist embedded in a structured pipeline - You need predictable sequencing before and after the reasoning step Complete Example: Invoice Exception Investigation ================================================= A reconciliation workflow that matches invoices to payments, then uses an agent to investigate any unmatched exceptions. `json { "name": "invoicereconciliationwithinvestigation", "definition": { "actions": [ { "type": "matcher", "properties": { "left": "@input.invoices", "right": "@input.payments", "matchOn": ["invoiceid"], "tolerance": 0.02, "outputMatched": "reconciled", "outputUnmatchedLeft": "exceptions" } }, { "type": "loop", "filter": { "condition": { "greaterThan": [{ "length": "@exceptions" }, 0] } }, "properties": { "mode": "foreach", "itemspath": "@exceptions", "itemvariablename": "exception", "actionstoexecute": [ { "type": "loop", "properties": { "mode": "react", "objective": "Investigate why invoice {{exception.invoiceid}} for ${{exception.amount}} from {{exception.vendor}} has no matching payment. Check for partial payments, alternate vendor names, or recent credits. Recommend: approve, reject, or escalate.", "tools": [ { "type": "action", "name": "searchpayments" }, { "type": "action", "name": "lookupvendor" }, { "type": "action", "name": "checkcredits" } ], "maxiterations": 8, "onstuck": { "iterations": 3, "action": "escalate" }, "resultkey": "investigation" } } ], "maxconcurrency": 3, "failurestrategy": "continueonerror", "collectresults": true, "resultkey": "allinvestigations" } }, { "type": "PbotApproval", "filter": { "condition": { "greaterThan": [{ "length": "@exceptions" }, 0] } }, "properties": { "comment": "Review {{exceptions.length}} investigated exceptions", "requestpayload": { "investigations": "@allinvestigations" } } }, { "type": "custom-table", "properties": { "table": "reconciliationlog", "operation": "write", "keys": ["runid"], "values": ["@runid"], "fields": { "matchedcount": "@reconciled.length", "exceptioncount": "@exceptions.length", "processedat": "@now" } } } ] } } ` How It Works ============ 1. Matcher (deterministic) — matches invoices to payments by ID with 2% tolerance 2. Foreach + ReAct (agent) — for each unmatched invoice, an agent investigates why. Up to 3 run in parallel 3. PbotApproval (human) — a reviewer sees all investigations and approves or rejects 4. Custom Table (deterministic) — logs the reconciliation run The workflow controls the pipeline. The agent only runs within its step. It can't skip the approval step or modify the matcher configuration. Key Properties ============== - filter on the agent step — the agent loop only runs if there are exceptions. If everything matched, it's skipped entirely - maxconcurrency: 3 — limits parallel agent executions to avoid LLM rate limits - onstuck: escalate — if an agent can't figure out an exception, it pauses for human help rather than failing - collectresults: true — all investigation results are gathered into @all_investigations` for the approval step → Next: Pattern B: Agent as Trigger (/agents/deployment-patterns/agent-as-trigger) --- ## PATH: Agents > React Loop (Source: agents/01-react-loop.md) ReAct Loop ========== ReAct stands for Reasoning + Acting. 
The agent thinks about its current state, chooses an action, observes the result, and repeats until it completes the objective or hits a limit. Iteration Lifecycle =================== Each iteration follows a strict cycle: ``mermaid sequenceDiagram participant Engine participant LLM participant Tool Engine->>LLM: System prompt + history + "What next?" LLM->>Engine: { thought, action, actioninput } Engine->>Engine: Validate action is in tool allowlist Engine->>Tool: Execute action with actioninput Tool->>Engine: Observation (result) Engine->>Engine: Append to history, check limits Note over Engine: Repeat until complete
or maxiterations reached ` Step 1 — Prompt. The engine constructs a prompt containing the objective, available tools, and the history of all previous iterations. Step 2 — Think + Decide. The LLM responds with a structured JSON object: what it's thinking, what action to take, and with what parameters. Step 3 — Validate. The engine checks that the chosen action is in the declared tool list. Undeclared actions are rejected. Step 4 — Execute. The engine runs the tool and captures the result as an observation. Step 5 — Accumulate. The thought, action, and observation are appended to the iteration history. The engine checks whether to continue, stop, or trigger stuck detection. LLM Response Format =================== The agent must respond with this JSON structure on every iteration: `json { "thought": "I need to look up the customer's order history before I can assess the refund request.", "action": "getorderhistory", "actioninput": { "customerid": "cust-12345", "limit": 10 } } ` | Field | Type | Description | |-------|------|-------------| | thought | string | The agent's reasoning — what it's considering, what it knows, what it needs | | action | string | The tool to call — must be in the declared tools list | | actioninput | object | Parameters for the tool call | The thought field is critical for auditability. It captures why the agent chose this action, not just what it did. Every thought is stored in the reasoning trace (/agents/reasoning-traces). Prompt Construction =================== The engine constructs the LLM prompt from three parts: System Prompt ============= Sets the agent's role, objective, and available tools: ` You are an AI agent. Your objective is: {objective} You have access to these tools: - lookupticket: Look up a support ticket by ID - analyzesentiment: Analyze text sentiment and urgency - slackpost: Post a message to a Slack channel - complete: Signal that you have finished the task - pauseforhuman: Request human input On each turn, respond with a JSON object containing: - "thought": your reasoning about what to do next - "action": the tool to use - "actioninput": the parameters for the tool When you have completed the task, use complete with your final answer. ` Each tool's description and parameter schema are included so the agent knows what each tool does and what parameters it expects. Iteration History ================= Every previous iteration is appended as context: ` Iteration 1: Thought: "I need to look up ticket #12345 to understand the issue." Action: lookupticket Input: { "ticketid": "12345" } Observation: { "subject": "Billing Error", "status": "open", "priority": "high" } Iteration 2: Thought: "The ticket is about a billing error. Let me analyze the sentiment." Action: analyzesentiment Input: { "text": "I've been charged twice for my subscription..." } Observation: { "sentiment": "negative", "urgency": "high" } ` Current Turn Prompt =================== After the history, the engine asks: "Given the above, what is your next step?" Context Accumulation ==================== As iterations progress, the agent builds up knowledge: | Iteration | What the Agent Knows | |-----------|---------------------| | 1 | Objective only | | 2 | Objective + first tool result | | 3 | Objective + two tool results + its own reasoning about them | | N | Full history of thoughts, actions, and observations | Each observation becomes available for the agent to reference in subsequent reasoning. 
The agent effectively has a growing "working memory" of everything it has done and learned. Token Management ================ The iteration history grows with each cycle. For long-running agents, this can approach the LLM's context window limit. Hyphen manages this by: - Including the full history up to the context window limit - Summarizing older iterations when the history exceeds available tokens - Prioritizing recent iterations and the original objective Set maxiterations conservatively to prevent excessive token consumption. Most tasks complete in 5–10 iterations. Complex orchestration tasks may need 15–20. Termination Conditions ====================== The loop ends when any of these occur: | Condition | Result | |-----------|--------| | Agent calls complete | Success — the agent's answer is stored in context | | maxiterations reached | Failure — timeout, no answer produced | | timeoutms exceeded | Failure — execution time limit | | Agent calls _pauseforhuman | Paused — waiting for human input | | Stuck detection triggers with action: "fail" | Failure — agent was looping | | Stuck detection triggers with action: "escalate" | Paused — routed to human | Configuration Reference ======================= `json { "type": "loop", "properties": { "mode": "react", "objective": "Your task description here", "tools": [ { "type": "action", "name": "toola" }, { "type": "action", "name": "toolb" } ], "model": "gpt-4", "maxiterations": 10, "timeoutms": 300000, "temperature": 0.7, "onstuck": { "iterations": 3, "action": "escalate" }, "includereasoningtrace": true, "resultkey": "agentResult" } } ` Implicit tools are auto-injected. Built-in tools like complete, pauseforhuman, storememory, retrievememory, and logprogress__` are always available — you don't need to declare them. Only declare your custom action and workflow tools. See Tool Declarations (/agents/tool-declarations) for the full format reference. → Next: Built-in Tools (/agents/built-in-tools) --- ## PATH: Getting Started > Quickstart (Source: getting-started/01-quickstart.md) Quickstart ========== From zero to a governed operational workflow in 15 minutes. Who This Is For =============== Operations lead — "My team spends hours investigating exceptions manually. I need AI that works within our approval process." → Skip to Your First Governed Workflow (#your-first-governed-workflow) Platform engineer — "We need to embed governed agent execution into our product." → Start at The Graduated Pipeline (#the-graduated-pipeline) Developer — "Just show me the API." → Skip to API Fast Path (#api-fast-path) What You'll Build ================= A working governed workflow that demonstrates the core pattern: deterministic rules handle clear cases, a bounded AI agent investigates exceptions, a human reviewer makes the final call on edge cases, and every decision is logged. We'll use invoice-to-payment reconciliation as the example — but the same architecture applies to incident response, contract review, employee onboarding, claims processing, or any operational process where you need autonomous AI within boundaries. The Graduated Pipeline ====================== Every Hyphen deployment follows this architecture: ``mermaid flowchart TD A["Operational Event"] --> B["Deterministic Rules"] B -->|"~80% auto-resolved"| F["✅ Done"] B -->|"Exceptions"| C["ReAct Agent — Bounded AI"] C -->|"~15% AI-resolved"| F C -->|"Low confidence"| D["PbotApproval — Human Review"] D -->|"~5% human-resolved"| F F --> E["Audit Trail"] ` Rules handle the clear cases. 
AI handles the ambiguous middle. Humans handle the edge cases. Every layer feeds the next. Nothing falls through. This pattern works because it matches how real operations teams already work — experienced staff handle the obvious cases, specialists investigate the tricky ones, managers approve the exceptions. Hyphen encodes that same graduated judgment into infrastructure. Your First Governed Workflow ============================ Step 1: Prepare Your Data ========================= Two datasets — invoices and payments — with real-world messiness: Invoices: `json [ { "invoiceid": "INV-001", "vendor": "Acme Corp", "amount": 10000.00, "date": "2026-01-15" }, { "invoiceid": "INV-002", "vendor": "Globex Inc", "amount": 4500.00, "date": "2026-01-18" }, { "invoiceid": "INV-003", "vendor": "Initech", "amount": 7250.00, "date": "2026-01-20" }, { "invoiceid": "INV-004", "vendor": "Umbrella Ltd", "amount": 15000.00, "date": "2026-01-22" }, { "invoiceid": "INV-005", "vendor": "Stark Industries", "amount": 3200.00, "date": "2026-01-25" } ] ` Payments: `json [ { "invoiceid": "INV-001", "vendor": "Acme Corp", "amount": 10000.00, "date": "2026-01-16" }, { "invoiceid": "INV-002", "vendor": "Globex Inc", "amount": 4480.00, "date": "2026-01-19" }, { "invoiceid": "INV-004", "vendor": "Umbrela Ltd", "amount": 15000.00, "date": "2026-01-23" }, { "invoiceid": "INV-006", "vendor": "Wayne Enterprises", "amount": 8800.00, "date": "2026-01-27" } ] ` Notice: INV-002 has a $20 discrepancy, INV-004 has a vendor typo ("Umbrela"), INV-003 and INV-005 have no payments, and INV-006 is a payment without an invoice. Step 2: Create the Workflow =========================== api POST /workflows Create a workflow with matching, AI investigation, human approval, and audit logging. `bash curl -X POST http://localhost:3009/workflows \ -H "X-Org-Id: your-org" \ -H "Content-Type: application/json" \ -d '{ "name": "invoicepaymentreconciliation", "definition": { "actions": [ { "type": "matcher", "properties": { "left": "@input.invoices", "right": "@input.payments", "matchOn": ["invoiceid"], "tolerance": 0.02, "dateWindowDays": 3, "fuzzyThreshold": 85, "descriptionKey": "vendor", "outputMatched": "matched", "outputUnmatchedLeft": "unmatchedinvoices", "outputUnmatchedRight": "unmatchedpayments" } }, { "type": "loop", "filter": { "condition": { "greaterThan": [{ "length": "@unmatchedinvoices" }, 0] } }, "properties": { "mode": "react", "objective": "Investigate these unmatched invoices. For each, determine the most likely reason (timing delay, data entry error, duplicate, or missing payment). Recommend: wait, follow up with vendor, or escalate.", "tools": [ { "type": "action", "name": "lookupvendorhistory" } ], "maxiterations": 10, "onstuck": { "iterations": 3, "action": "escalate" }, "resultkey": "investigation" } }, { "type": "PbotApproval", "filter": { "condition": { "greaterThan": [{ "length": "@unmatchedinvoices" }, 0] } }, "properties": { "comment": "AI investigated unmatched invoices. 
Review findings and approve recommended actions.", "requestpayload": { "matchedcount": "@matched.length", "unmatchedinvoices": "@unmatchedinvoices", "aiinvestigation": "@investigation" } } }, { "type": "custom-table", "properties": { "table": "reconciliationlog", "operation": "write", "keys": ["runid", "timestamp"], "values": ["@_runid", "@now"], "fields": { "matchedcount": "@matched.length", "exceptioncount": "@unmatchedinvoices.length", "status": "completed" } } } ] } }' ` Note on tools: Built-in agent tools (complete, pauseforhuman, storememory, retrievememory, logprogress_) are automatically injected into every ReAct agent — you don't need to declare them. Only declare your custom action tools and workflow tools. What this defines: | Step | Primitive | What Happens | |------|-----------|-------------| | 1 | matcher | Compares invoices to payments: exact invoiceid, 2% amount tolerance, ±3 day date window, 85% fuzzy vendor match | | 2 | loop (react) | AI agent investigates unmatched invoices — bounded to 10 iterations with stuck detection | | 3 | PbotApproval | Human reviewer sees results + AI reasoning — makes final call | | 4 | custom-table | Logs the run to an audit table | Step 3: Execute =============== api POST /workflows/:id/execute Run the workflow with your invoice and payment data. `bash curl -X POST http://localhost:3009/workflows/{workflowid}/execute \ -H "X-Org-Id: your-org" \ -H "Content-Type: application/json" \ -d '{ "invoices": [ { "invoiceid": "INV-001", "vendor": "Acme Corp", "amount": 10000.00, "date": "2026-01-15" }, { "invoiceid": "INV-002", "vendor": "Globex Inc", "amount": 4500.00, "date": "2026-01-18" }, { "invoiceid": "INV-003", "vendor": "Initech", "amount": 7250.00, "date": "2026-01-20" }, { "invoiceid": "INV-004", "vendor": "Umbrella Ltd", "amount": 15000.00, "date": "2026-01-22" }, { "invoiceid": "INV-005", "vendor": "Stark Industries", "amount": 3200.00, "date": "2026-01-25" } ], "payments": [ { "invoiceid": "INV-001", "vendor": "Acme Corp", "amount": 10000.00, "date": "2026-01-16" }, { "invoiceid": "INV-002", "vendor": "Globex Inc", "amount": 4480.00, "date": "2026-01-19" }, { "invoiceid": "INV-004", "vendor": "Umbrela Ltd", "amount": 15000.00, "date": "2026-01-23" }, { "invoiceid": "INV-006", "vendor": "Wayne Enterprises", "amount": 8800.00, "date": "2026-01-27" } ] }' ` Response: `json { "id": "runabc123", "status": "running" } ` Step 4: Check Results ===================== api GET /runs/:runId/status Poll for status. The run will pause at PbotApproval waiting for human review. `bash curl http://localhost:3009/runs/runabc123/status \ -H "X-Org-Id: your-org" ` Expected matcher output: | Record | Result | Why | |--------|--------|-----| | INV-001 | ✅ Matched | Exact match, 1-day date offset | | INV-002 | ✅ Matched | $4,500 vs $4,480 — within 2% tolerance | | INV-004 | ✅ Matched | "Umbrella" vs "Umbrela" — 87% fuzzy match | | INV-003 | ❌ Unmatched | No payment found | | INV-005 | ❌ Unmatched | No payment found | | INV-006 | ❌ Unmatched payment | No invoice found | The matcher handled 3 of 5 invoices automatically — including the amount discrepancy and vendor typo. No AI cost. Near-instant. The 2 unmatched invoices flow to the ReAct agent. Sample reasoning trace: ` Iteration 1: Thought: "INV-003 from Initech — no payment match. Dated Jan 20, only 16 days ago. Standard terms are net-30." Action: _logprogress Input: { "message": "INV-003 likely within payment terms" } Iteration 3: Thought: "Both INV-003 and INV-005 are recent. 
Recommend: monitor for 30 days, then follow up if still unmatched." Action: complete_ Input: { "answer": "2 unmatched invoices — both within payment terms. Recommend: wait 30 days.", "confidence": 0.85 } ` Step 5: Approve =============== The workflow pauses at PbotApproval. The reviewer sees matched records, exceptions, and the AI investigation. api POST /approvals/:runId/:stepIndex Submit the human decision. It becomes part of the permanent audit trail. `bash curl -X POST http://localhost:3009/approvals/runabc123/2 \ -H "X-Org-Id: your-org" \ -H "Content-Type: application/json" \ -d '{ "approved": true, "comments": "Agree with AI recommendation. Monitor and follow up in 30 days.", "data": { "reviewer": "jane.doe@company.com" } }' ` The workflow resumes, logs to the audit table, and completes. What just happened. One API call matched 60% of records automatically, got AI investigation of exceptions, routed edge cases to a human with full context, and logged everything. Manual process: 4+ hours. Hyphen: ~30 seconds of compute. API Fast Path ============= For developers who want the five-minute tour. Setup ===== `bash BASEURL=http://localhost:3009 ORGID=your-org ` 1. Health Check =============== api GET /health `bash curl $BASEURL/health → { "status": "ok" } ==================== ` 2. Register an Action ===================== api POST /actions Register a reusable operation — HTTP, LLM, DB, matcher, or custom-table. `bash curl -X POST $BASEURL/actions \ -H "X-Org-Id: $ORGID" \ -H "Content-Type: application/json" \ -d '{ "actionname": "fetchcustomer", "kind": "http", "url": "https://api.example.com/customers/{{customerid}}", "httpmethod": "GET", "passthrough": true }' → { "id": "actabc123", "actionname": "fetchcustomer" } ========================================================= ` 3. Create a Workflow ==================== api POST /workflows `bash curl -X POST $BASEURL/workflows \ -H "X-Org-Id: $ORGID" \ -H "Content-Type: application/json" \ -d '{ "name": "myfirstworkflow", "definition": { "actions": [ { "type": "fetchcustomer", "properties": { "customerid": "@input.id" } }, { "type": "custom-table", "properties": { "table": "auditlog", "operation": "write", "keys": ["action"], "values": ["customerlookup"] }} ] } }' → { "id": "wfxyz789" } ======================= ` 4. Execute ========== api POST /workflows/:id/execute `bash curl -X POST $BASEURL/workflows/wfxyz789/execute \ -H "X-Org-Id: $ORGID" \ -H "Content-Type: application/json" \ -d '{ "id": "cust-12345" }' → { "id": "runabc", "status": "running" } ========================================== ` 5. Check Status =============== api GET /runs/:runId/status `bash curl $BASEURL/runs/runabc/status -H "X-Org-Id: $ORGID" → { "status": "completed", "context": { ... } } =============================================== ` 6. Run a Standalone Agent ========================= api POST /agents/execute Execute an AI agent outside of a workflow — agent-as-trigger or agent-as-orchestrator pattern. `bash curl -X POST $BASEURL/agents/execute \ -H "X-Org-Id: $ORGID" \ -H "Content-Type: application/json" \ -d '{ "objective": "Analyze this support ticket and draft a response", "tools": [ { "type": "action", "name": "fetchcustomer" }, { "type": "action", "name": "gmailsend" } ], "config": { "model": "gpt-4", "maxiterations": 10, "onstuck": { "iterations": 3, "action": "escalate" } } }' ` 7. Get the Reasoning Trace ========================== api GET /agents/:id/trace Full audit trail — every thought, action, and observation. 
`bash curl $BASEURL/agents/{agentrunid}/trace -H "X-Org-Id: $ORGID" ` Next Steps: Connect Your Systems ================================ Register actions to connect your systems, then reference them as agent tools using typed declarations. HTTP action (any internal API): `bash curl -X POST $BASEURL/actions \ -H "X-Org-Id: $ORGID" \ -H "Content-Type: application/json" \ -d '{ "actionname": "lookuprecord", "kind": "http", "url": "https://your-system.com/api/records/{{recordid}}", "httpmethod": "GET", "headers": { "Authorization": "Bearer orgconfig:api:systemtoken" } }' ` LLM action (AI analysis): `bash curl -X POST $BASEURL/actions \ -H "X-Org-Id: $ORGID" \ -H "Content-Type: application/json" \ -d '{ "actionname": "analyzeexception", "kind": "llm", "template": "Analyze this operational exception and recommend a resolution:\n\n{{exceptiondata}}", "model": "gpt-4", "maxtokens": 300 }' ` Then use them as typed tool references in any agent: `json { "mode": "react", "objective": "Investigate this exception and notify the responsible party", "tools": [ { "type": "action", "name": "lookuprecord" }, { "type": "action", "name": "analyzeexception" }, { "type": "action", "name": "gmailsend" }, { "type": "workflow", "id": "wfescalationprocess" } ] } ` Note: Built-in tools (complete, _pauseforhuman, storememory, retrievememory, logprogress__`) are automatically injected — you never need to list them. Same Architecture, Different Domains ==================================== The reconciliation example above demonstrates the graduated pipeline. Here's how the same pattern applies across industries: | Domain | What plays the "Matcher" role | What the Agent investigates | Who reviews | |--------|-----|------|------| | IT Security | Correlate alerts against known-benign indicators | Enrich with threat intel, assess severity, recommend containment | SOC analyst | | Legal | Compare extracted contract terms against standard playbook | Classify deviation risk, search precedent, draft redlines | Legal counsel | | People Ops | N/A — agent orchestrates directly | Coordinate IT provisioning, benefits, training, equipment | HR coordinator | | Healthcare | Match claims to policies, detect duplicates | Analyze denial codes, check medical necessity, draft appeals | Clinical reviewer | See all templates → (/templates) Three Things to Remember ======================== 1. You don't need AI for everything. The matcher handles the majority of operational data work with zero AI cost. Start with deterministic rules. Add AI only for exceptions that require judgment. 2. Governance is structural, not policy. The agent can only use tools you've declared, can only iterate up to the cap you set, and escalates automatically when stuck. This is architecture, not a policy document. 3. Every decision is auditable. Matcher results, agent reasoning traces, human approval decisions — all captured, all queryable, all persistent. Continue learning: Core Concepts (/getting-started/core-concepts) explains the six building blocks. Your First Workflow (/getting-started/your-first-workflow) walks through building a more complex pipeline step by step. Templates (/templates) show production-ready patterns across finance, IT, legal, healthcare, and people operations. --- ## PATH: Guides > Invoice Matching (Source: guides/01-invoice-matching.md) End-to-End: Invoice Reconciliation ================================== This guide walks through building a complete invoice reconciliation workflow from scratch. 
By the end, you'll have a pipeline that matches invoices to payments, investigates exceptions with an AI agent, routes edge cases to human reviewers, and logs everything to an audit table — running on a daily schedule. Prerequisites ============= You need a running Hyphen instance and curl. All examples use X-Org-Id: acme-corp. Step 1: Register Org Config =========================== Store the API keys your workflow actions will use: ``bash curl -X POST http://localhost:3009/org-config \ -H "X-Org-Id: acme-corp" \ -H "Content-Type: application/json" \ -d '{ "key": "api:llmapikey", "value": "sk-your-openai-key" }' ` `bash curl -X POST http://localhost:3009/org-config \ -H "X-Org-Id: acme-corp" \ -H "Content-Type: application/json" \ -d '{ "key": "api:erptoken", "value": "your-erp-api-token" }' ` Config values are encrypted at rest and referenced in workflows with the orgconfig: prefix. Step 2: Register HTTP Actions ============================= Register the actions the AI agent will use as tools: `bash Look up a purchase order from your ERP ====================================== curl -X POST http://localhost:3009/actions \ -H "X-Org-Id: acme-corp" \ -H "Content-Type: application/json" \ -d '{ "actionname": "lookuppurchaseorder", "kind": "http", "url": "https://erp.acme.com/api/purchase-orders/{{ponumber}}", "httpmethod": "GET", "headers": { "Authorization": "Bearer orgconfig:api:erptoken" }, "passthrough": true }' ` `bash Search payment history by vendor ================================ curl -X POST http://localhost:3009/actions \ -H "X-Org-Id: acme-corp" \ -H "Content-Type: application/json" \ -d '{ "actionname": "checkpaymenthistory", "kind": "db", "datasource": "orgconfig:db:financepg", "query": "SELECT FROM payments WHERE vendorid = $1 AND paymentdate BETWEEN $2 AND $3", "params": ["@input.vendorid", "@input.startdate", "@input.enddate"], "passthrough": true }' ` `bash Check for duplicate invoices ============================ curl -X POST http://localhost:3009/actions \ -H "X-Org-Id: acme-corp" \ -H "Content-Type: application/json" \ -d '{ "actionname": "searchduplicateinvoices", "kind": "db", "datasource": "orgconfig:db:financepg", "query": "SELECT FROM invoices WHERE amount = $1 AND vendorid = $2 AND invoicedate BETWEEN $3 AND $4 AND invoiceid != $5", "params": ["@input.amount", "@input.vendorid", "@input.startdate", "@input.enddate", "@input.invoiceid"], "passthrough": true }' ` Step 3: Create the Matcher Workflow =================================== Start simple — just the matcher step: `bash curl -X POST http://localhost:3009/workflows \ -H "X-Org-Id: acme-corp" \ -H "Content-Type: application/json" \ -d '{ "name": "invoicereconciliationv1", "definition": { "actions": [ { "type": "matcher", "properties": { "left": "@input.invoices", "right": "@input.payments", "matchOn": ["ponumber", "vendorid"], "tolerance": 0.02, "dateWindowDays": 5, "fuzzyThreshold": 85, "descriptionKey": "vendorname", "outputMatched": "reconciled", "outputUnmatchedLeft": "unmatchedinvoices", "outputUnmatchedRight": "unmatchedpayments" } } ] } }' ` Response: `json { "id": "wfabc123", "name": "invoicereconciliationv1" } ` Step 4: Execute with Sample Data ================================ `bash curl -X POST http://localhost:3009/workflows/wfabc123/execute \ -H "X-Org-Id: acme-corp" \ -H "Content-Type: application/json" \ -d '{ "invoices": [ { "invoiceid": "INV-001", "ponumber": "PO-100", "vendorid": "V-50", "vendorname": "Acme Corp", "amount": 1000.00, "date": "2026-01-15" }, { "invoiceid": "INV-002", "ponumber": 
"PO-101", "vendorid": "V-51", "vendorname": "Beta LLC", "amount": 2500.00, "date": "2026-01-16" }, { "invoiceid": "INV-003", "ponumber": "PO-999", "vendorid": "V-52", "vendorname": "Gamma Inc", "amount": 750.00, "date": "2026-01-17" } ], "payments": [ { "paymentid": "PAY-001", "ponumber": "PO-100", "vendorid": "V-50", "vendorname": "ACME Corporation", "amount": 1000.00, "date": "2026-01-18" }, { "paymentid": "PAY-002", "ponumber": "PO-101", "vendorid": "V-51", "vendorname": "Beta LLC", "amount": 2450.00, "date": "2026-01-20" } ] }' ` Check the result: `bash curl http://localhost:3009/runs/runxyz789/status \ -H "X-Org-Id: acme-corp" ` You should see INV-001 matched to PAY-001 (exact match), INV-002 matched to PAY-002 (within 2% tolerance), and INV-003 as an unmatched exception (no matching PO-999 payment). Step 5: Add the Agent Step ========================== Update the workflow to investigate exceptions: `bash curl -X PUT http://localhost:3009/workflows/wfabc123 \ -H "X-Org-Id: acme-corp" \ -H "Content-Type: application/json" \ -d '{ "name": "invoicereconciliationv2", "definition": { "actions": [ { "type": "matcher", "properties": { "left": "@input.invoices", "right": "@input.payments", "matchOn": ["ponumber", "vendorid"], "tolerance": 0.02, "dateWindowDays": 5, "fuzzyThreshold": 85, "descriptionKey": "vendorname", "outputMatched": "reconciled", "outputUnmatchedLeft": "unmatchedinvoices", "outputUnmatchedRight": "unmatchedpayments" } }, { "type": "loop", "filter": { "condition": { "greaterThan": [{ "length": "@unmatchedinvoices" }, 0] } }, "properties": { "mode": "foreach", "itemspath": "@unmatchedinvoices", "itemvariablename": "exception", "actionstoexecute": [ { "type": "loop", "properties": { "mode": "react", "objective": "Investigate unmatched invoice {{exception.invoiceid}} from {{exception.vendorname}} for ${{exception.amount}}. Determine root cause and recommend action.", "tools": [ { "type": "action", "name": "lookuppurchaseorder" }, { "type": "action", "name": "checkpaymenthistory" }, { "type": "action", "name": "searchduplicateinvoices" } ], "maxiterations": 8, "onstuck": { "iterations": 3, "action": "retrywithhint", "hint": "Complete with your best assessment." }, "resultkey": "investigation" } } ], "maxconcurrency": 5, "failurestrategy": "continueonerror", "collectresults": true, "resultkey": "allinvestigations" } } ] } }' ` Step 6: Add Approval for Write-Offs =================================== Add a PbotApproval step so a human reviews the agent's findings before any write-offs: Add this step after the loop in the actions array: `json { "type": "PbotApproval", "filter": { "condition": { "greaterThan": [{ "length": "@unmatchedinvoices" }, 0] } }, "properties": { "comment": "{{unmatchedinvoices.length}} exceptions investigated. Review AI findings.", "requestpayload": { "reconciledcount": "@reconciled.length", "investigations": "@allinvestigations", "unmatchedpayments": "@unmatchedpayments" } } } ` When the workflow pauses, submit the approval: `bash curl -X POST http://localhost:3009/approvals/runxyz789/2 \ -H "X-Org-Id: acme-corp" \ -H "Content-Type: application/json" \ -d '{ "approved": true, "comments": "Reviewed. 
Write off INV-003 as vendor error.", "data": { "reviewer": "jane@acme.com" } }' ` Step 7: Add Audit Logging ========================= Add a custom-table step to persist the reconciliation results: `json { "type": "custom-table", "properties": { "table": "reconciliationlog", "operation": "write", "keys": ["runid", "rundate"], "values": ["@runid", "@now"], "fields": { "totalinvoices": "@input.invoices.length", "autoreconciled": "@reconciled.length", "exceptions": "@unmatchedinvoices.length", "status": "completed" } } } ` First, create the table: `bash curl -X POST http://localhost:3009/custom-tables \ -H "X-Org-Id: acme-corp" \ -H "Content-Type: application/json" \ -d '{ "name": "reconciliationlog", "fields": [ { "name": "runid", "type": "text", "required": true }, { "name": "rundate", "type": "timestamptz", "required": true }, { "name": "totalinvoices", "type": "integer" }, { "name": "autoreconciled", "type": "integer" }, { "name": "exceptions", "type": "integer" }, { "name": "status", "type": "text", "required": true } ] }' ` Step 8: Schedule for Daily Execution ==================================== Add a schedule block to the workflow definition: `json { "name": "invoicereconciliationv3", "definition": { "schedule": { "every": "1d", "at": "02:00", "timezone": "America/New_York" }, "actions": [ ... ] } } `` The workflow now runs automatically at 2 AM Eastern every day. What You Built ============== A complete reconciliation pipeline that: 1. Matches invoices to payments on PO number, vendor ID, amount (within 2%), and date (within 5 days) 2. Investigates unmatched exceptions using an AI agent with access to your ERP and payment database 3. Routes edge cases to a human reviewer with full context and agent reasoning 4. Logs every run to an audit table for compliance and trend analysis 5. Runs automatically on a daily schedule For the full production-ready version with all steps combined, see the AP Invoice Reconciliation template (/templates/ap-invoice-reconciliation). --- ## PATH: Integrations > Gmail (Source: integrations/01-gmail.md) Gmail ===== Send, read, and reply to emails from workflows and agents using the Gmail API. Available Actions ================= | Action | Description | |--------|-------------| | gmailsend | Send a new email | | gmailread | Read emails matching a query | | gmailreply | Reply to an existing email thread | Setup ===== 1. Store OAuth App Credentials ============================== Register your Google OAuth app credentials with Hyphen. You'll need a Google Cloud project with the Gmail API enabled and OAuth consent screen configured. ``bash curl -X POST http://localhost:3009/oauth/gmail/app-credentials \ -H "X-Org-Id: acme-corp" \ -H "Content-Type: application/json" \ -d '{ "clientid": "123456789.apps.googleusercontent.com", "clientsecret": "GOCSPX-your-client-secret" }' ` api POST /oauth/:provider/app-credentials Store OAuth app credentials (clientid, clientsecret) for a provider. Credentials are encrypted at rest. 2. Authorize a User Account =========================== Get the authorization URL and redirect the user to grant access: `bash curl "http://localhost:3009/oauth/gmail/authorize?returnurl=https://yourapp.com/oauth/callback" \ -H "X-Org-Id: acme-corp" ` api GET /oauth/:provider/authorize Returns the OAuth authorization URL. The returnurl parameter specifies where to redirect after authorization. The user completes the Google consent flow. Hyphen handles the callback, exchanges the auth code for tokens, and stores them encrypted. 3. 
Verify Connection ==================== `bash curl http://localhost:3009/oauth/connections \ -H "X-Org-Id: acme-corp" ` Response: `json [ { "provider": "gmail", "account": "notifications@acme.com", "status": "active", "scopes": ["gmail.send", "gmail.readonly"], "connectedat": "2026-02-01T10:00:00Z" } ] ` Using in Workflows ================== Reference Gmail actions as workflow steps. The _oauthaccount_ property specifies which connected account to use. Send Email ========== `json { "type": "gmailsend", "properties": { "_oauthaccount_": "notifications@acme.com", "to": "@input.recipientemail", "subject": "Invoice {{input.invoiceid}} Processed", "body": "Dear {{input.customername}},\n\nYour invoice has been processed successfully.\n\nAmount: ${{input.amount}}\nReference: {{input.invoiceid}}" } } ` Read Emails =========== `json { "type": "gmailread", "properties": { "_oauthaccount_": "billing@acme.com", "query": "from:vendor@supplier.com subject:invoice after:2026/01/01", "maxresults": 10 } } ` Reply to Thread =============== `json { "type": "gmailreply", "properties": { "oauthaccount_": "support@acme.com", "threadid": "@input.threadid", "body": "Thank you for your inquiry. We've processed your request." } } ` Using as Agent Tools ==================== Include Gmail actions in an agent's tools array. The agent decides when and how to use them. `json { "mode": "react", "objective": "Process this support request and send a response", "tools": [ "lookupcustomer", "checkorderhistory", { "type": "action", "name": "gmailsend" }, { "type": "action", "name": "gmailread" } ], "maxiterations": 10 } ` The agent might search for previous emails from the customer, draft a response, and send it — or pause for human review if the issue is complex. Complete Example: Batch Statement Sender ======================================== `json { "name": "monthlystatementsender", "definition": { "actions": [ { "type": "loop", "properties": { "mode": "foreach", "itemspath": "@input.customers", "itemvariablename": "customer", "actionstoexecute": [ { "type": "gmailsend", "properties": { "oauthaccount_": "billing@acme.com", "to": "@customer.email", "subject": "Monthly Statement — {{customer.name}}", "body": "Dear {{customer.name}},\n\nPlease find your statement for {{input.month}}.\n\nBalance: ${{customer.balance}}\nDue: {{input.duedate}}" } } ], "maxconcurrency": 10, "failurestrategy": "continueonerror", "collectresults": true, "resultkey": "sendResults" } } ] } } `` Token Management ================ Hyphen automatically refreshes Gmail OAuth tokens before they expire. If a token refresh fails (e.g., user revoked access), the workflow step fails with an authentication error. You can check token status and force a refresh: api GET /oauth/connections/:provider/:account Validate a specific OAuth connection. api POST /oauth/connections/:provider/:account/refresh Force refresh an OAuth token. --- ## PATH: Platform > Architecture (Source: platform/01-architecture.md) Architecture ============ Hyphen operates on a three-phase execution model unified by a single idea: AI as Compiler. ``mermaid sequenceDiagram participant User participant AI Compiler participant Spec participant Engine participant Agent participant Human User->>AI Compiler: "Match invoices to payments,
investigate exceptions,
get approval for write-offs" AI Compiler->>Spec: JSON workflow specification Note over Spec: Frozen intent —
deterministic from here Spec->>Engine: Execute step by step Engine->>Engine: Step 1: Matcher (deterministic) Engine->>Agent: Step 2: ReAct loop (bounded) Agent->>Agent: Think → Act → Observe → Repeat Agent->>Engine: complete with answer Engine->>Human: Step 3: PbotApproval (pause) Human->>Engine: Approved Engine->>Engine: Step 4: Custom Table (log) Engine->>User: Run completed ✓ ` Phase 1: AI as Compiler (Design Time) ===================================== A user describes intent in plain language: > "Create a workflow that matches invoices to payments by invoice number with 2% tolerance. For unmatched invoices over $5,000, have an AI agent investigate. For anything the agent can't resolve, get human approval." Hyphen's AI compiler translates this into a precise JSON workflow specification — the conditions, data references, branching logic, escalation paths. `json { "name": "invoicereconciliation", "definition": { "actions": [ { "type": "matcher", "properties": { "left": "@input.invoices", "right": "@input.payments", "matchOn": ["invoiceid"], "tolerance": 0.02, "outputMatched": "matched", "outputUnmatchedLeft": "exceptions" } }, { "type": "loop", "filter": { "condition": { "greaterThan": ["@exceptions.length", 0] } }, "properties": { "mode": "react", "objective": "Investigate unmatched invoices...", "tools": [ { "type": "action", "name": "lookuppurchaseorder" }, { "type": "action", "name": "checkpaymenthistory" } ], "maxiterations": 10, "resultkey": "investigation" } }, { "type": "PbotApproval", "properties": { "comment": "Review AI investigation findings", "requestpayload": { "investigation": "@investigation" } } } ] } } ` The human describes the what. The AI produces the how. Humans are bad at writing recursive JSON specs with nested condition trees and context path references. AI is great at it. AI as Compiler, not AI as Runtime. The AI's job ends once the spec is produced. It doesn't make decisions during execution — it compiles intent into a deterministic blueprint. This distinction is critical for auditability: you can review the spec before it runs, and the execution engine follows it exactly. api POST /ai/generate-workflow Submit a natural language description. Returns a generationid for polling. api POST /workflows/create-from-ai Deploy the generated spec — creates the workflow, registers any needed actions, and provisions custom tables. Phase 2: Deterministic Runtime (Execution) ========================================== The execution engine runs the compiled specification exactly as written. It does not improvise. If the spec says "wait for human approval," it waits. If a condition evaluates to false, it follows the defined fallback path. If a step has an onFalse branch, it takes it. No hallucination. No skipped steps. `mermaid flowchart TD A["Workflow Spec"] --> B{"Top-level
condition?"} B -->|"true"| C["Step 1"] B -->|"false"| X["conditionnotmet"] C --> D{"Step 2
filter?"} D -->|"true"| E["Execute step"] D -->|"false"| F["Execute onFalse"] E --> G["Step 3"] F --> G G --> H["Step N..."] H --> Z["Run completed"] ` Each step executes in sequence. The engine resolves @path references from the accumulated context, evaluates filter conditions, and routes to onFalse branches when conditions fail. Every step output is captured and added to context for subsequent steps. Run statuses: | Status | Meaning | |--------|---------| | running | Steps executing | | paused | Waiting for human approval or form input | | completed | All steps finished | | failed | A step threw an error | | conditionnotmet | Top-level gate evaluated false | Phase 3: Bounded Agentic Runtime (Governed Autonomy) ==================================================== For tasks that require reasoning — analyzing exceptions, interpreting documents, making judgment calls — Hyphen provides ReAct (Reasoning + Acting) loops. Within these loops, an AI agent thinks, uses tools, observes results, and iterates toward an objective. But the agent operates inside a cage defined by the spec: Structural permissioning. Only tools explicitly declared in the workflow definition are available. The agent cannot discover or invent capabilities. This isn't a policy layer — it's architectural. If you declare ["lookupticket", "gmailsend", "complete"], those are the only three actions the agent can take. Bounded iteration. maxiterations caps prevent runaway execution. A typical production agent runs 5–15 iterations. Stuck detection. The engine identifies when an agent is looping without progress (repeating the same action with the same parameters) and triggers the configured recovery strategy: fail, retry with a hint, or escalate to a human. Human-in-the-loop. The agent can call pauseforhuman when confidence is low. Execution pauses, the human sees full context, their decision becomes part of the reasoning trace, and the agent resumes. Reasoning traces. Every iteration captures the agent's thought, the action it chose, the parameters it used, and the observation it received. This is the audit trail — every step of the agent's reasoning, queryable and persistent. Checkpoint and resume. Agent state is persisted. If execution pauses for human input or fails mid-run, it can resume from where it left off. `mermaid flowchart LR subgraph "Agent Cage (defined by spec)" T["Think"] --> A["Act
(declared tools only)"] A --> O["Observe"] O --> T end O -->|"maxiterations
or complete"| Done["Result"] O -->|"stuck detected"| Recovery["Escalate / Retry / Fail"] O -->|"_pauseforhuman_"| Human["Human Input"] Human --> T ` *The spec defines what the agent can do. The agent decides what it will do. Everything it does is recorded.* This is governed autonomy — the agent reasons independently within explicit boundaries. Enterprises audit specs, not vibes. Position in the Stack ===================== Hyphen sits between the AI models (which provide intelligence) and your business systems (which provide capabilities): ` ┌─────────────────────────────────────┐ │ Your Application / UI │ ├─────────────────────────────────────┤ │ LLM Provider (OpenAI, Anthropic) │ ← Intelligence ├─────────────────────────────────────┤ │ Hyphen │ ← Orchestration, governance, audit ├─────────────────────────────────────┤ │ Your APIs, databases, services │ ← Capabilities └─────────────────────────────────────┘ `` Hyphen integrates with any LLM provider. It connects to your existing infrastructure via registered actions. It doesn't require rearchitecting your backend — it wraps what you already have in a governed execution environment. → Next: Workflow DSL (/platform/workflow-dsl) — the complete JSON specification reference --- ## PATH: Primitives > Matcher (Source: primitives/01-matcher.md) Matcher ======= The matcher takes two datasets and finds records that correspond to each other. It outputs three arrays: matched pairs, unmatched records from the left side, and unmatched records from the right side. ``mermaid flowchart LR L["Left Dataset
(e.g. invoices)"] --> M{"Matcher"} R["Right Dataset
(e.g. payments)"] --> M M --> Matched["Matched Pairs"] M --> UL["Unmatched Left"] M --> UR["Unmatched Right"] ` Basic Usage =========== `json { "type": "matcher", "properties": { "left": "@input.invoices", "right": "@input.payments", "matchOn": ["invoiceid"], "outputMatched": "matched", "outputUnmatchedLeft": "unmatchedinvoices", "outputUnmatchedRight": "unmatchedpayments" } } ` This matches invoices to payments by exact invoiceid. Records with matching IDs land in @matched. Invoices with no payment go to @unmatchedinvoices. Payments with no invoice go to @unmatchedpayments. Properties Reference ==================== | Property | Type | Required | Description | |----------|------|----------|-------------| | left | array / @path | Yes | First dataset | | right | array / @path | Yes | Second dataset | | matchOn | string[] | Yes | Fields that must match exactly | | tolerance | number | No | Numeric tolerance as a decimal (0.02 = 2%). Applied to numeric fields not in matchOn | | dateWindowDays | number | No | Date tolerance in days (±N). Applied to date fields | | fuzzyThreshold | number | No | Text similarity threshold 0–100. Applied to the field specified by descriptionKey | | descriptionKey | string | No | Field name for fuzzy text matching | | rules | array | No | Custom matching rules evaluated via conditionEvaluator | | outputMatched | string | No | Context key for matched pairs (default: "matched") | | outputUnmatchedLeft | string | No | Context key for unmatched left records (default: "unmatchedLeft") | | outputUnmatchedRight | string | No | Context key for unmatched right records (default: "unmatchedRight") | Matching Criteria ================= Exact Key Matching (matchOn) ============================== Fields listed in matchOn must match exactly. This is the primary matching criteria — records are only compared if their matchOn fields align. `json { "matchOn": ["invoiceid"] } ` Multiple keys create a composite match — all must match: `json { "matchOn": ["vendorid", "invoicenumber"] } ` Numeric Tolerance (tolerance) =============================== For numeric fields (amounts, quantities), allow a percentage deviation. A tolerance of 0.02 means a 2% difference is still considered a match. `json { "matchOn": ["invoiceid"], "tolerance": 0.02 } ` With this configuration, an invoice for $1,000 would match a payment of $980–$1,020. Date Window (dateWindowDays) ============================== Allow date fields to differ by up to N days: `json { "matchOn": ["invoiceid"], "dateWindowDays": 3 } ` An invoice dated January 10 would match a payment dated January 7–13. Fuzzy Text Matching (fuzzyThreshold + descriptionKey) ========================================================= Compare text fields using the fuzzball similarity algorithm. The threshold is 0–100 where 100 is an exact match: `json { "matchOn": ["vendorid"], "fuzzyThreshold": 85, "descriptionKey": "description" } ` This matches records where vendorid is identical and the description fields are at least 85% similar. Useful for matching line-item descriptions that may be worded differently across systems. Custom Rules (rules) ====================== Define additional matching rules evaluated by the condition engine: `json { "matchOn": ["invoiceid"], "rules": [ { "condition": { "lessOrEqual": [ { "abs": { "subtract": ["@left.amount", "@right.amount"] } }, 50 ] } } ] } ` Custom rules use the same condition operators as workflow conditions (/platform/conditional-logic), with @left and @right referencing the current pair being compared. 
Output Format ============= Matched Pairs ============= Each matched record contains both the left and right record: `json [ { "a": { "invoiceid": "INV-001", "amount": 1000, "vendor": "Acme" }, "b": { "invoiceid": "INV-001", "amount": 1000, "vendor": "Acme Corp" }, "matchscore": 0.95, "amountdifference": 0 } ] ` The a field is the left record, b is the right record. matchscore reflects overall match quality. amountdifference shows numeric deviation when tolerance matching is used. Unmatched Records ================= Unmatched arrays contain the original records with no modifications: `json [ { "invoiceid": "INV-099", "amount": 5000, "vendor": "NewVendor" } ] ` Worked Example ============== Input: `json { "invoices": [ { "invoiceid": "INV-001", "amount": 1000.00, "date": "2025-01-10", "description": "Monthly service fee" }, { "invoiceid": "INV-002", "amount": 2500.00, "date": "2025-01-15", "description": "Equipment rental" }, { "invoiceid": "INV-003", "amount": 750.00, "date": "2025-01-20", "description": "Consulting hours" } ], "payments": [ { "invoiceid": "INV-001", "amount": 1000.00, "date": "2025-01-12", "description": "Monthly service" }, { "invoiceid": "INV-002", "amount": 2475.00, "date": "2025-01-15", "description": "Equip rental Jan" } ] } ` Matcher configuration: `json { "type": "matcher", "properties": { "left": "@input.invoices", "right": "@input.payments", "matchOn": ["invoice_id"], "tolerance": 0.02, "dateWindowDays": 3, "fuzzyThreshold": 80, "descriptionKey": "description", "outputMatched": "reconciled", "outputUnmatchedLeft": "exceptions" } } ` Results: - @reconciled: INV-001 (exact match), INV-002 (amount within 2% tolerance, descriptions 80%+ similar) - @exceptions: INV-003 (no matching payment found) Redis Optimization ================== For large datasets (10,000+ records per side), the matcher automatically uses Redis for indexing when available. This provides significant performance improvements by pre-indexing records by their matchOn` keys rather than performing pairwise comparison. No configuration change is needed — the matcher detects Redis availability and dataset size automatically. Matcher as the foundation. Most Hyphen workflows start with a matcher step. The matched records flow into deterministic processing, while exceptions route to AI agents or human review. This is the graduated exception handling pattern: deterministic rules for clear cases, AI for ambiguous cases, humans for edge cases. → Next: Loop (/primitives/loop) --- ## PATH: Templates > Ap Invoice Reconciliation (Source: templates/01-ap-invoice-reconciliation.md) AP Invoice Reconciliation ========================= Three-way match across purchase orders, invoices, and payments. The universal starting point — every company has this problem, most solve it with spreadsheets. What Gets Automated =================== PO-to-invoice matching on PO number, amount tolerance, and date windows. Fuzzy vendor name matching catches data entry inconsistencies. Agent investigates discrepancies: wrong amounts, missing POs, duplicate invoices, partial payments. What Humans Still Own ===================== Write-off approvals above threshold. Vendor dispute resolution. New vendor onboarding decisions. 
Pipeline ======== ``mermaid flowchart TD A[Input: Invoices + Payments + POs] --> B[Matcher] B -->|~80% matched| C[Auto-Reconcile] B -->|~20% exceptions| D{Exception Type} D -->|Amount discrepancy| E[ReAct Agent] D -->|Missing PO| E D -->|Duplicate suspected| E E -->|High confidence| F[Auto-Resolve + Log] E -->|Low confidence| G[Human Approval] G -->|Approved| H[Process + Log] G -->|Rejected| I[Flag for Review] C --> J[Custom Table: Reconciliation Log] F --> J H --> J I --> J style B fill:#e8a84c,color:#09090b,stroke:none style E fill:#4ade80,color:#09090b,stroke:none style G fill:#60a5fa,color:#09090b,stroke:none ` Workflow Definition =================== `json { "name": "apinvoicereconciliation", "definition": { "actions": [ { "type": "matcher", "properties": { "left": "@input.invoices", "right": "@input.payments", "matchOn": ["ponumber", "vendorid"], "tolerance": 0.02, "dateWindowDays": 5, "fuzzyThreshold": 85, "descriptionKey": "vendorname", "outputMatched": "reconciled", "outputUnmatchedLeft": "unmatchedinvoices", "outputUnmatchedRight": "unmatchedpayments" } }, { "type": "loop", "filter": { "condition": { "greaterThan": [{ "length": "@unmatchedinvoices" }, 0] } }, "properties": { "mode": "foreach", "itemspath": "@unmatchedinvoices", "itemvariablename": "exception", "actionstoexecute": [ { "type": "loop", "properties": { "mode": "react", "objective": "Investigate this unmatched invoice: {{exception.invoiceid}} from {{exception.vendorname}} for ${{exception.amount}}. Determine the root cause: timing delay (check payment terms), amount discrepancy (compare to PO), duplicate invoice, missing PO, or data entry error. Recommend: auto-resolve, follow up with vendor, or escalate for write-off review.", "tools": [ { "type": "action", "name": "lookuppurchaseorder" }, { "type": "action", "name": "checkpaymenthistory" }, { "type": "action", "name": "searchduplicateinvoices" } ], "maxiterations": 8, "onstuck": { "iterations": 3, "action": "retrywithhint", "hint": "If you cannot determine the root cause, complete with your best assessment and low confidence." }, "resultkey": "investigation" } } ], "maxconcurrency": 5, "failurestrategy": "continueonerror", "collectresults": true, "resultkey": "allinvestigations" } }, { "type": "PbotApproval", "filter": { "condition": { "greaterThan": [{ "length": "@unmatchedinvoices" }, 0] } }, "properties": { "comment": "{{unmatchedinvoices.length}} exceptions investigated. Review AI findings and approve recommended actions.", "requestpayload": { "reconciledcount": "@reconciled.length", "exceptioncount": "@unmatchedinvoices.length", "investigations": "@allinvestigations", "unmatchedpayments": "@unmatchedpayments" } } }, { "type": "custom-table", "properties": { "table": "reconciliationlog", "operation": "write", "keys": ["runid", "rundate"], "values": ["@_runid", "@now"], "fields": { "totalinvoices": "@input.invoices.length", "autoreconciled": "@reconciled.length", "exceptionsinvestigated": "@unmatchedinvoices.length", "unmatchedpayments": "@unmatchedpayments.length", "status": "completed" } } } ] } } ` Required Registered Actions =========================== | Action | Kind | Purpose | |--------|------|---------| | lookuppurchaseorder | http | Query ERP for PO details by PO number | | checkpaymenthistory | db | Search payment records for a vendor within date range | | searchduplicateinvoices | db | Check for invoices with matching amounts and close dates | Customization Notes =================== Tolerance. The default 2% (0.02`) handles typical rounding differences. 
Lower to 0.5% for high-precision environments; raise to 5% if partial payments are common. Date window. 5 days covers standard payment processing lag. Extend to 15–30 days for international vendors with longer settlement cycles. Fuzzy threshold. 85 catches minor name variations ("Acme Corp" vs "ACME Corporation"). Lower to 75 if vendor names are highly inconsistent across systems. Agent iterations. 8 iterations gives the agent room to check multiple data sources. Reduce to 5 for simpler investigations; increase to 12 if your exception patterns are complex. --- ## PATH: Actions > Llm (Source: actions/02-llm.md) LLM Actions =========== LLM actions generate text using AI models. Define a prompt template with dynamic placeholders, and the action fills them in and calls the LLM at execution time. Registration ============ ``bash curl -X POST http://localhost:3009/actions \ -H "X-Org-Id: acme-corp" \ -H "Content-Type: application/json" \ -d '{ "actionname": "summarizedocument", "kind": "llm", "description": "Summarize a document in 3 bullet points", "template": "Summarize the following document in 3 bullet points:\n\n{{documenttext}}", "model": "gpt-4", "maxtokens": 500, "outputKey": "summary" }' ` Properties ========== | Property | Type | Required | Description | |----------|------|----------|-------------| | actionname | string | Yes | Unique name for this action | | kind | "llm" | Yes | | | template | string | Yes | Prompt template with {{ }} placeholders | | model | string | No | LLM model (default: configured in environment) | | maxtokens | number | No | Maximum tokens in the response (default: 1024) | | temperature | number | No | Sampling temperature 0–1 (default: 0.7) | | outputKey | string | No | Context key for the generated text | Prompt Templates ================ Use {{ }} syntax for dynamic content in the prompt: `json { "actionname": "extractinvoicefields", "kind": "llm", "template": "Extract the following fields from this invoice text and return as JSON:\n- invoicenumber\n- vendorname\n- amount\n- duedate\n\nInvoice text:\n{{invoicetext}}", "model": "gpt-4", "maxtokens": 256 } ` Multiple placeholders are supported: `json { "template": "You are a {{role}} at {{company}}.\n\nAnalyze this customer complaint and draft a response:\n\n{{complainttext}}" } ` Usage in a Workflow =================== `json { "type": "summarizedocument", "properties": { "documenttext": "@input.documentcontent" }, "outputKey": "summary" } ` The documenttext property fills the {{documenttext}} placeholder in the template. The LLM response is stored at @summary. Usage as an Agent Tool ====================== `json { "mode": "react", "objective": "Read the contract and identify key terms", "tools": [{ "type": "action", "name": "summarizedocument" }, { "type": "action", "name": "extractinvoicefields" }] } ` The agent sees the action's description and template parameters in its tool list. It can call summarizedocument with the required documenttext parameter whenever it needs a summary during its reasoning. Structured Output ================= For actions that need structured responses (JSON, specific fields), instruct the model in the template: `json { "actionname": "classifyticket", "kind": "llm", "template": "Classify this support ticket. Return ONLY a JSON object with:\n- category: one of [billing, technical, account, other]\n- priority: one of [low, medium, high, critical]\n- summary: one sentence summary\n\nTicket:\n{{tickettext}}", "model": "gpt-4", "max_tokens": 200 } `` LLM actions vs ReAct agents. 
LLM actions run a single prompt and return the response — no iteration, no tool use. Use LLM actions for one-shot tasks: summarization, classification, extraction. Use ReAct agents when the task requires multi-step reasoning or tool interaction. → Next: DB Actions (/actions/db) --- ## PATH: Agents > Built In Tools (Source: agents/02-built-in-tools.md) Built-in Tools ============== Every agent has access to these built-in tools. Include them in your tools array by name. | Tool | Purpose | |------|---------| | complete | Signal task completion with final answer | | _runworkflow | Trigger another Hyphen workflow | | pauseforhuman | Pause and request human input | | storememory | Save data for later retrieval within the session | | retrievememory | Recall previously stored data | | logprogress | Record milestones for observability | complete ============== Signal that the agent has finished its task and provide the final answer. Parameters ========== | Parameter | Type | Required | Description | |-----------|------|----------|-------------| | answer | string | Yes | The final answer or result | | confidence | number | No | Confidence score 0–1 (default: 1.0) | | summary | string | No | Brief summary of work performed | Example ======= ``json { "thought": "I've verified the receipt, checked policy compliance, and the expense is within limits. Ready to approve.", "action": "complete", "actioninput": { "answer": "Approve. Expense of $847 for client dinner is within the $1,000 entertainment policy limit. Receipt verified, date matches credit card statement.", "confidence": 0.95, "summary": "Verified receipt authenticity, checked against entertainment policy, confirmed credit card match." } } ` Result ====== The agent loop terminates successfully. The answer, confidence, and summary are stored in context under the resultkey configured on the loop. `json { "@agentResult": { "answer": "Approve. Expense of $847...", "confidence": 0.95, "summary": "Verified receipt authenticity..." } } ` Always include complete in your tools list. Without it, the agent has no way to finish and will exhaust maxiterations. _runworkflow_ ================== Trigger another Hyphen workflow from within the agent. The agent can dispatch sub-tasks to specialized workflows and use the results in its reasoning. Parameters ========== | Parameter | Type | Required | Description | |-----------|------|----------|-------------| | workflowid | string | No | ID of the workflow to run | | workflowname | string | No | Name of the workflow (alternative to ID) | | input | object | Yes | Input payload for the workflow | | wait | boolean | No | Wait for completion (default: true) | | timeoutms | number | No | Timeout in milliseconds (default: 60000) | *Either workflowid or workflowname is required. Example ======= `json { "thought": "I need to run the sanctions screening workflow for this customer before proceeding with onboarding.", "action": "_runworkflow_", "actioninput": { "workflowname": "sanctionsscreening", "input": { "customername": "Acme Corp", "country": "DE", "registrationnumber": "HRB 12345" }, "wait": true, "timeoutms": 120000 } } ` Result ====== `json { "workflowrunid": "runxyz789", "workflowid": "wfabc123", "status": "completed", "result": { "screeningresult": "clear", "matchesfound": 0 }, "context": { } } ` The agent receives the workflow's output as an observation and can reason about it in the next iteration. Async dispatch. Set wait: false to trigger a workflow without waiting. 
The agent gets back a workflowrunid immediately and can check status later or move on. _pauseforhuman ===================== Pause the agent and request human input. The agent's state is checkpointed — when the human responds, the agent resumes from exactly where it left off. Parameters ========== | Parameter | Type | Required | Description | |-----------|------|----------|-------------| | question | string | Yes | Question for the human reviewer | | options | string[] | No | List of choices to present | | contextsummary | string | No | Summary of work done so far | | urgency | string | No | "low", "medium", or "high" | Example ======= `json { "thought": "I found two matching customers. I can't determine which one is correct from the data alone — I need a human to decide.", "action": "_pauseforhuman", "actioninput": { "question": "Two customers match 'John Smith'. Which should I use for this refund?", "options": [ "John Smith (ID: C-001) — Premium, member since 2019, $45K lifetime value", "John Smith (ID: C-047) — Standard, member since 2023, $2K lifetime value" ], "contextsummary": "Processing refund #RF-789 for order #12345. Amount: $234.00.", "urgency": "medium" } } ` What Happens ============ 1. Agent state is persisted (checkpointed) 2. Agent status changes to paused 3. An approval-style request is created 4. Human reviews context and responds via POST /agents/:id/resume 5. Agent resumes with the human's response as the next observation Resuming ======== `bash curl -X POST http://localhost:3009/agents/{agentRunId}/resume \ -H "X-Org-Id: acme-corp" \ -H "Content-Type: application/json" \ -d '{ "humaninput": "Use John Smith C-001, the premium customer.", "approved": true }' ` The agent's next iteration sees the human's response as an observation and continues reasoning. _storememory_ ================== Store a key-value pair for later retrieval within the same agent execution. Use this to save intermediate results, extracted data, or decisions that you'll need in a later iteration. Parameters ========== | Parameter | Type | Required | Description | |-----------|------|----------|-------------| | key | string | Yes | Storage key | | value | any | Yes | Value to store (string, number, object, array) | | ttlseconds | number | No | Time-to-live before automatic expiration | Example ======= `json { "thought": "I found the customer's account details. Let me save this for when I draft the email.", "action": "_storememory_", "actioninput": { "key": "customerinfo", "value": { "name": "Jane Doe", "email": "jane@acme.com", "tier": "premium", "accountageyears": 4 } } } ` Result ====== `json { "stored": true, "key": "customerinfo", "message": "Stored value under key \"customerinfo\"" } ` retrievememory ===================== Retrieve a previously stored value by key. Parameters ========== | Parameter | Type | Required | Description | |-----------|------|----------|-------------| | key | string | Yes | Storage key to retrieve | Example ======= `json { "thought": "Now I need the customer info I saved earlier to personalize this email.", "action": "retrievememory", "actioninput": { "key": "customerinfo" } } ` Result (found) ============== `json { "found": true, "key": "customerinfo", "value": { "name": "Jane Doe", "email": "jane@acme.com", "tier": "premium", "accountageyears": 4 }, "storedat": "2026-02-01T10:30:00Z" } ` Result (not found) ================== `json { "found": false, "key": "unknownkey", "message": "No memory found for key \"unknownkey\"" } ` Memory scope. 
storememory and retrievememory are scoped to a single agent execution. Memory does not persist across separate agent runs. For persistent storage across runs, use Custom Tables (/primitives/custom-table). logprogress ================== Record a progress milestone without affecting execution flow. Useful for observability — tracking where the agent is in a long-running task. Parameters ========== | Parameter | Type | Required | Description | |-----------|------|----------|-------------| | message | string | Yes | Progress description | | milestone | string | No | Named milestone identifier | | data | object | No | Structured data associated with the milestone | Example ======= `json { "thought": "Research phase is done. I found 3 articles and 2 contacts. Moving to drafting.", "action": "logprogress", "actioninput": { "message": "Research phase complete. Found 3 relevant articles and 2 key contacts.", "milestone": "researchcomplete", "data": { "articlesfound": 3, "contactsfound": 2, "sources": ["TechCrunch", "Company Blog", "SEC Filing"] } } } ` Result ====== `json { "logged": true, "milestone": "researchcomplete", "message": "Progress logged" } `` Progress logs appear in the agent's reasoning trace (/agents/reasoning-traces) and are queryable for monitoring dashboards. → Next: Tool Declarations (/agents/tool-declarations) --- ## PATH: Agents > Deployment Patterns > Agent As Trigger (Source: agents/deployment-patterns/02-agent-as-trigger.md) Pattern B: Agent as Workflow Trigger ==================================== The agent operates as a smart ingestion layer. It receives unstructured input, reasons about it, and triggers the appropriate workflow with structured parameters. You don't know upfront which workflow to run — the agent decides. ``mermaid flowchart LR Input["Unstructured Input
(email, document, webhook)"] --> Agent["ReAct Agent"] Agent -->|"invoice detected"| W1["Invoice Processing
Workflow"] Agent -->|"support request"| W2["Support Ticket
Workflow"] Agent -->|"vendor inquiry"| W3["Vendor Onboarding
Workflow"] style Agent fill:#f3e8ff,stroke:#9333ea ` When to Use =========== - Input arrives in unstructured form (emails, documents, free-text) - You don't know upfront which workflow to run - Classification and extraction are required before processing - The agent acts as an intelligent router — analyzing input and dispatching Complete Example: Document Classifier and Router ================================================ A standalone agent that receives a document, classifies it, extracts relevant fields, and triggers the correct processing workflow. `json { "objective": "Classify this incoming document and route it to the correct processing workflow.\n\nDocument content:\n{{input.documenttext}}\n\nSender: {{input.senderemail}}\nReceived: {{input.receivedat}}", "tools": [ { "type": "action", "name": "documentclassifier" }, { "type": "action", "name": "fieldextractor" }, { "type": "workflow", "name": "invoiceprocessing" }, { "type": "workflow", "name": "purchaseorderprocessing" }, { "type": "workflow", "name": "supporttickethandler" }, { "type": "workflow", "name": "contractreview" } ], "config": { "model": "gpt-4", "maxiterations": 8, "onstuck": { "iterations": 3, "action": "escalate" } } } ` Execute via the standalone agent API: `bash curl -X POST http://localhost:3009/agents/execute \ -H "X-Org-Id: acme-corp" \ -H "Content-Type: application/json" \ -d '{ "objective": "Classify this incoming document and route it...", "tools": [ ... ], "config": { "model": "gpt-4", "maxiterations": 8 } }' ` Sample Reasoning Trace ====================== ` Iteration 1: Thought: "I need to classify this document first." Action: classifydocument Input: { "text": "Invoice #INV-2024-0891..." } Observation: { "category": "invoice", "confidence": 0.97 } Iteration 2: Thought: "It's an invoice with high confidence. Let me extract the key fields." Action: extractfields Input: { "text": "Invoice #INV-2024-0891...", "documenttype": "invoice" } Observation: { "invoicenumber": "INV-2024-0891", "vendor": "Acme Supplies", "amount": 4250.00, "duedate": "2026-03-15" } Iteration 3: Thought: "I have the classification and extracted fields. Let me log progress and trigger the invoice workflow." Action: logprogress Input: { "message": "Document classified as invoice", "milestone": "classified" } Iteration 4: Thought: "Now I'll trigger the invoice processing workflow with the extracted fields." Action: runworkflow Input: { "workflowname": "invoiceprocessing", "input": { "invoicenumber": "INV-2024-0891", "vendor": "Acme Supplies", "amount": 4250.00, "duedate": "2026-03-15", "senderemail": "billing@acme.com", "source": "emailingestion" }, "wait": false } Observation: { "workflowrunid": "rundef456", "status": "running" } Iteration 5: Thought: "Invoice workflow triggered. I'm done." Action: complete Input: { "answer": "Document classified as invoice (97% confidence). Triggered invoiceprocessing workflow (rundef456) with extracted fields.", "confidence": 0.97 } ` Key Properties ============== - Workflow tools are essential — declaring { "type": "workflow", "name": "..." } gives the agent the ability to dispatch to specialized workflows - wait: false on the workflow trigger — the agent doesn't need to wait for invoice processing to complete. It classifies and routes - Standalone execution — this agent runs via POST /agents/execute, not inside a workflow. 
It is the entry point - Implicit tools — _logprogress, complete, and pauseforhuman_ are auto-injected; no need to declare them Scaling the Pattern =================== For high-volume ingestion, combine with a foreach loop in a wrapper workflow: `json { "name": "batchdocumentprocessor", "definition": { "actions": [ { "type": "loop", "properties": { "mode": "foreach", "itemspath": "@input.documents", "itemvariablename": "doc", "actionstoexecute": [ { "type": "loop", "properties": { "mode": "react", "objective": "Classify and route: {{doc.text}}", "tools": [ { "type": "action", "name": "classifydocument" }, { "type": "action", "name": "extractfields" }, { "type": "workflow", "name": "invoiceprocessing" }, { "type": "workflow", "name": "supporttickethandler" } ], "maxiterations": 6 } } ], "maxconcurrency": 5, "failurestrategy": "continueonerror" } } ] } } `` → Next: Pattern C: Agent as Orchestrator (/agents/deployment-patterns/agent-as-orchestrator) --- ## PATH: Getting Started > Core Concepts (Source: getting-started/02-core-concepts.md) Core Concepts ============= Six concepts cover everything you need to build with Hyphen. ``mermaid graph LR W["Workflow
(JSON spec)"] --> P["Primitives
(built-in steps)"] W --> A["Actions
(registered ops)"] P --> C["Context
(@path data flow)"] A --> C C --> AG["Agents
(ReAct loops)"] AG --> R["Runs
(execution + audit)"] P --> R style W fill:#e3f2fd,stroke:#1565c0 style P fill:#e8f5e9,stroke:#2e7d32 style A fill:#fff3e0,stroke:#e65100 style C fill:#f3e5f5,stroke:#6a1b9a style AG fill:#fce4ec,stroke:#c62828 style R fill:#e0f2f1,stroke:#00695c ` 1. Workflows ============ A workflow is a JSON spec that defines a process. It has a name, an optional top-level condition gate, and an ordered array of actions (steps). `json { "name": "invoicereconciliation", "definition": { "condition": { "greaterThan": ["@input.invoices.length", 0] }, "actions": [ { "type": "matcher", "properties": { ... } }, { "type": "loop", "properties": { "mode": "react", ... } }, { "type": "PbotApproval", "properties": { ... } } ] } } ` Workflows are created via POST /workflows, executed via POST /workflows/:id/execute, and produce runs with full audit trails. They can also be generated from plain language via POST /ai/generate-workflow. AI as Compiler. Hyphen's core philosophy: AI generates the workflow spec upfront (compile time), then the engine executes it deterministically (runtime). AI doesn't improvise during execution — it produces the blueprint, and the engine follows it exactly. 2. Actions ========== Actions are reusable operations you register once and reference by name in any workflow. Five kinds: | Kind | What It Does | Example | |------|-------------|---------| | http | Calls REST APIs | POST to Salesforce, GET from Stripe | | llm | AI text generation | Summarize a document, extract entities | | db | Database queries | SELECT from your warehouse | | matcher | Pre-configured matching | Matching rules saved as a reusable action | | custom-table | Table operations | Read/write to Hyphen-managed tables | Register an action: `bash curl -X POST http://localhost:3009/actions \ -H "X-Org-Id: your-org" \ -H "Content-Type: application/json" \ -d '{ "actionname": "fetchcustomer", "kind": "http", "url": "https://api.example.com/customers/{{customerid}}", "httpmethod": "GET", "passthrough": true }' ` Use it in a workflow step: `json { "type": "fetchcustomer", "properties": { "customerid": "@input.id" } } ` Use it as a ReAct agent tool: `json { "tools": [{ "type": "action", "name": "fetchcustomer" }] } ` All action kinds work everywhere — in workflow steps, foreach loops, and as agent tools. → Full reference: Actions (/actions) 3. Primitives ============= Primitives are built-in workflow steps that don't require registration. They handle core orchestration patterns: | Primitive | Purpose | When to Use | |-----------|---------|-------------| | Matcher (/primitives/matcher) | Multi-criteria data matching | Reconciling two datasets (invoices↔payments, claims↔records) | | Loop (/primitives/loop) | Batch processing (foreach) or AI reasoning (react) | Processing N items, or letting an agent investigate a problem | | PbotApproval (/primitives/approval) | Human-in-the-loop | Manager sign-off, compliance review, edge case decisions | | PbotForm (/primitives/form) | External input collection | Vendor submits shipping details, customer provides documents | | Custom Table (/primitives/custom-table) | Multi-tenant data storage | Audit logs, operational state, cross-run memory | Primitives use the step type field directly — no registration required: `json { "type": "matcher", "properties": { "left": "@input.invoices", "right": "@input.payments", ... } } { "type": "PbotApproval", "properties": { "comment": "Review this transaction" } } ` 4. Context ========== Context is how data flows between steps. 
Every workflow run maintains a context object that grows as steps execute. @path References ================ The @ prefix references values from context: | Path | What It References | |------|-------------------| | @input.field | Data passed when the workflow was executed | | @matched | Matched records from a matcher step | | @unmatchedLeft | Left-side unmatched records from a matcher | | @item | Current item inside a foreach loop | | @_runid | The current run's ID | | @_approved | Boolean result from a PbotApproval step | | @stepOutput.field | Output from a named step (via outputKey) | Template Interpolation ====================== Use {{ }} for string interpolation inside property values: `json { "message": "Hello {{input.customername}}, your balance is ${{context.balance}}" } ` Data Flow Example ================= `mermaid flowchart LR subgraph "Step 1: Matcher" M["@input.invoices + @input.payments"] end subgraph "Step 2: Loop" L["@unmatchedinvoices → @item"] end subgraph "Step 3: Approval" A["@investigation (from agent resultkey)"] end M -->|"matched, unmatchedinvoices"| L L -->|"investigation"| A ` → Full reference: Context Resolution (/platform/context-resolution) 5. Agents ========= Agents are ReAct (Reasoning + Acting) loops — AI that thinks, uses tools, observes results, and iterates toward an objective. They operate within boundaries defined by the workflow spec. `json { "type": "loop", "properties": { "mode": "react", "objective": "Investigate this unmatched invoice and recommend an action", "tools": [ { "type": "action", "name": "lookuperpinvoice" }, { "type": "action", "name": "gmailsend" } ], "maxiterations": 10, "onstuck": { "iterations": 3, "action": "escalate" }, "resultkey": "investigation" } } ` Governance Model ================ Every agent runs inside a cage defined by its spec: Structural permissioning — only tools explicitly declared in tools are available. The agent cannot discover or invent capabilities. Bounded iteration — maxiterations caps prevent runaway execution. Stuck detection — onstuck identifies when an agent loops without progress and triggers recovery (fail, retry with hint, or escalate to human). Human escalation — pauseforhuman lets the agent request human input when confidence is low. The human's decision becomes part of the reasoning trace. Reasoning traces — every iteration captures thought, action, parameters, and observation. Full chain of reasoning, queryable and auditable. Tool Declaration ================ Tools support two formats — string shorthand for simple references, and full object definitions for explicit parameter schemas: `json { "tools": [ { "type": "action", "name": "lookupticket" }, { "type": "action", "name": "verifyidentity" }, { "type": "workflow", "id": "wfescalationreview" } ] } ` Action tools reference registered actions by name — the resolver automatically fetches descriptions and parameters. Workflow tools reference workflows by ID, letting the agent trigger sub-workflows. Built-in tools (complete, pauseforhuman, storememory, retrievememory, logprogress_) are always auto-injected. 
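The string shorthand is interchangeable with the typed form. As a minimal sketch (reusing the registered actions from the example above), the following declares one tool as a plain string and one as a typed object; both resolve to typed action declarations internally: `json { "tools": [ "lookupticket", { "type": "action", "name": "verifyidentity" } ] } `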
Three Deployment Patterns ========================= | Pattern | Agent Role | Example | |---------|-----------|---------| | Agent as Step | One step in a larger workflow | Matcher finds exceptions → agent investigates them | | Agent as Trigger | Smart ingestion layer | Agent classifies incoming document → triggers the right workflow | | Agent as Orchestrator | Coordinates multiple workflows | Agent runs KYC, sanctions screening, and account setup in sequence | → Full reference: Agents (/agents) 6. Runs ======= A run is a single execution of a workflow. It captures everything: input, step outputs, agent reasoning traces, human decisions, timing, and final status. `bash Execute a workflow → get a run ============================== curl -X POST http://localhost:3009/workflows/{id}/execute \ -H "X-Org-Id: your-org" \ -d '{ "invoices": [...], "payments": [...] }' → { "id": "runabc123", "status": "running" } ============================================= Check status ============ curl http://localhost:3009/runs/runabc123/status \ -H "X-Org-Id: your-org" ` Run Statuses ============ | Status | Meaning | |--------|---------| | running | Execution in progress | | paused | Waiting for human approval or form input | | completed | Successfully finished | | failed | Execution failed | | conditionnot_met | Top-level condition evaluated to false | What's Captured =============== Every run persists: the full input payload, each step's output, all agent reasoning traces (thought → action → observation for every iteration), human approval decisions with reviewer identity and comments, and timestamps for every state transition. This is the audit trail — queryable, exportable, and persistent. → API reference: Run Status (/api) How They Fit Together ===================== A typical production deployment: 1. Register actions — connect your ERP, CRM, databases, and communication tools 2. Define a workflow — compose primitives and actions with conditional logic 3. Execute — send data in, get a run back 4. Monitor — poll status, handle approvals, read reasoning traces 5. Audit — every decision has a paper trail `mermaid sequenceDiagram participant You participant Hyphen participant AI participant Human You->>Hyphen: POST /workflows/:id/execute Hyphen->>Hyphen: Step 1: Matcher (deterministic) Hyphen->>AI: Step 2: ReAct loop (bounded) AI->>Hyphen: complete with answer Hyphen->>Human: Step 3: PbotApproval (pause) Human->>Hyphen: Approved Hyphen->>Hyphen: Step 4: Custom Table (log) Hyphen->>You: Run completed ✓ `` Continue building: Your First Workflow (/getting-started/your-first-workflow) walks through creating a multi-step workflow with conditional branching. Your First Agent (/getting-started/your-first-agent) builds a standalone ReAct agent with custom tools. --- ## PATH: Guides > Building An Agent Orchestrator (Source: guides/02-building-an-agent-orchestrator.md) End-to-End: Agent Orchestrator ============================== Build a support ticket processing agent that looks up ticket details, analyzes sentiment, drafts a response, and notifies the team on Slack when urgency is high. This guide covers action registration, ReAct workflow creation, reasoning trace monitoring, and human escalation handling. Prerequisites ============= A running Hyphen instance with curl. Slack OAuth configured for your org (see Slack integration (/integrations/slack)). 
Step 1: Register the Zendesk Lookup Action ========================================== ``bash curl -X POST http://localhost:3009/actions \ -H "X-Org-Id: acme-corp" \ -H "Content-Type: application/json" \ -d '{ "actionname": "lookupticket", "kind": "http", "url": "https://acme.zendesk.com/api/v2/tickets/{{ticketid}}.json", "httpmethod": "GET", "headers": { "Authorization": "Bearer orgconfig:api:zendesktoken" }, "passthrough": true }' ` Step 2: Register the Sentiment Analysis Action ============================================== This is an LLM action — it uses an AI model to analyze text: `bash curl -X POST http://localhost:3009/actions \ -H "X-Org-Id: acme-corp" \ -H "Content-Type: application/json" \ -d '{ "actionname": "analyzesentiment", "kind": "llm", "template": "Analyze this support ticket. Return JSON with: sentiment (positive/negative/neutral), urgency (low/medium/high), category (billing/technical/account/other), and a one-sentence summary.\n\nSubject: {{subject}}\nBody: {{body}}", "model": "gpt-4o-mini", "maxtokens": 256 }' ` Step 3: Create the Orchestrator Workflow ======================================== `bash curl -X POST http://localhost:3009/workflows \ -H "X-Org-Id: acme-corp" \ -H "Content-Type: application/json" \ -d '{ "name": "supportticketorchestrator", "definition": { "actions": [ { "type": "loop", "properties": { "mode": "react", "objective": "Process support ticket {{input.ticketid}}. Steps: 1) Look up the ticket details, 2) Analyze sentiment and urgency, 3) Draft a response approach, 4) If high urgency, notify #support-escalations on Slack, 5) Complete with your recommendation and the drafted response.", "tools": [ { "type": "action", "name": "lookupticket" }, { "type": "action", "name": "analyzesentiment" }, { "type": "action", "name": "slackpost" } ], "model": "gpt-4o", "maxiterations": 10, "onstuck": { "iterations": 3, "action": "escalate" }, "resultkey": "agentresult" } }, { "type": "custom-table", "properties": { "table": "ticketprocessinglog", "operation": "write", "keys": ["ticketid"], "values": ["@input.ticketid"], "fields": { "result": "@agentresult.answer", "confidence": "@agentresult.confidence", "runid": "@runid", "processedat": "@now" } } } ] } }' ` Step 4: Execute and Monitor =========================== Trigger the workflow: `bash curl -X POST http://localhost:3009/workflows/wfabc123/execute \ -H "X-Org-Id: acme-corp" \ -H "Content-Type: application/json" \ -d '{ "ticketid": "12345" }' ` Poll for status: `bash curl http://localhost:3009/runs/runxyz789/status \ -H "X-Org-Id: acme-corp" ` Once complete, retrieve the full reasoning trace to see exactly how the agent worked through the problem: `bash curl http://localhost:3009/agents/agentabc/trace \ -H "X-Org-Id: acme-corp" ` The trace shows each iteration — the agent's thought process, which tool it chose, the parameters it used, and the observation it received: `json { "reasoningtrace": [ { "iteration": 1, "thought": "First I need to look up the ticket details", "action": "lookupticket", "actioninput": { "ticketid": "12345" }, "observation": { "subject": "Billing Error", "description": "I was charged twice...", "status": "open" } }, { "iteration": 2, "thought": "Now I'll analyze the sentiment and urgency", "action": "analyzesentiment", "actioninput": { "subject": "Billing Error", "body": "I was charged twice..." }, "observation": { "sentiment": "negative", "urgency": "high", "category": "billing" } }, { "iteration": 3, "thought": "High urgency billing issue. 
I should notify the team on Slack.", "action": "slackpost", "actioninput": { "channel": "#support-escalations", "text": "🚨 High urgency: Ticket #12345 — Billing error, customer charged twice" }, "observation": { "ok": true, "ts": "1706..." } }, { "iteration": 4, "thought": "I've gathered all info and notified the team. Time to complete.", "action": "complete", "actioninput": { "answer": "High urgency billing issue. Customer double-charged. Team notified on Slack.", "confidence": 0.95 } } ] } ` Step 5: Handle Human Escalation =============================== If the agent encounters something it can't resolve — say stuck detection triggers, or the agent explicitly calls _pauseforhuman — the run pauses. Check for paused runs: `bash curl "http://localhost:3009/agents?status=paused" \ -H "X-Org-Id: acme-corp" ` Resume with human guidance: `bash curl -X POST http://localhost:3009/agents/agentabc/resume \ -H "X-Org-Id: acme-corp" \ -H "Content-Type: application/json" \ -d '{ "humaninput": "This is a known issue with our payment processor. Refund the customer and escalate to payments team.", "approved": true }' ` The agent continues from where it left off, incorporating the human's guidance into its next iteration. Step 6: Add Slack Notification on Completion ============================================ You can also use webhooks to notify your team when any agent completes. Configure a webhook listener for the agentcompleted event, or add a Slack step after the agent in the workflow: `json { "type": "slackpost", "properties": { "oauthaccount_": "acme-workspace", "channel": "#support-processed", "text": "✅ Ticket {{input.ticketid}} processed. Confidence: {{agentresult.confidence}}. Result: {{agentresult.answer}}" } } `` What You Built ============== A support ticket processing system that: 1. Looks up ticket details from Zendesk 2. Analyzes sentiment and urgency using an LLM action 3. Escalates high-urgency tickets to Slack 4. Pauses for human input when stuck or uncertain 5. Logs every decision with full reasoning traces for audit 6. Notifies the team on completion For more complex orchestration patterns where agents coordinate multiple workflows, see Pattern C: Agent as Orchestrator (/agents/deployment-patterns/agent-as-orchestrator). --- ## PATH: Integrations > Slack (Source: integrations/02-slack.md) Slack ===== Post messages to channels, read channel history, and send direct messages from workflows and agents. Available Actions ================= | Action | Description | |--------|-------------| | slackpost | Post a message to a channel | | slackreadchannel | Read recent messages from a channel | | slackdm | Send a direct message to a user | Setup ===== 1. Store OAuth App Credentials ============================== Create a Slack app at api.slack.com/apps (https://api.slack.com/apps) with the required scopes, then store the credentials: ``bash curl -X POST http://localhost:3009/oauth/slack/app-credentials \ -H "X-Org-Id: acme-corp" \ -H "Content-Type: application/json" \ -d '{ "clientid": "1234567890.1234567890", "clientsecret": "your-slack-client-secret" }' ` 2. Authorize a Workspace ======================== `bash curl "http://localhost:3009/oauth/slack/authorize?returnurl=https://yourapp.com/oauth/callback" \ -H "X-Org-Id: acme-corp" ` A workspace admin completes the Slack OAuth flow. Hyphen stores the bot token and workspace access. 3. 
Verify Connection ==================== `bash curl http://localhost:3009/oauth/connections \ -H "X-Org-Id: acme-corp" ` Using in Workflows ================== Post to Channel =============== `json { "type": "slackpost", "properties": { "_oauthaccount_": "acme-workspace", "channel": "#finance-alerts", "text": "🔔 Reconciliation complete: {{matched.length}} matched, {{exceptions.length}} exceptions requiring review." } } ` Read Channel History ==================== `json { "type": "slackreadchannel", "properties": { "oauthaccount_": "acme-workspace", "channel": "#vendor-communications", "limit": 20 } } ` Send Direct Message =================== `json { "type": "slackdm", "properties": { "_oauthaccount_": "acme-workspace", "user": "U0123ABCDEF", "text": "Your expense report for ${{input.amount}} has been approved." } } ` Using as Agent Tools ==================== Agents use Slack to notify teams, read context from channels, and alert on-call personnel: `json { "mode": "react", "objective": "Investigate this alert and notify the team if action is needed", "tools": [ "checksystemstatus", "querylogs", { "type": "action", "name": "slackpost" }, { "type": "action", "name": "slackreadchannel" } ], "maxiterations": 10 } ` Complete Example: Exception Alert Workflow ========================================== Post a summary to Slack when reconciliation finds exceptions, with conditional urgency: `json { "name": "reconciliationwithslackalert", "definition": { "actions": [ { "type": "matcher", "properties": { "left": "@input.invoices", "right": "@input.payments", "matchOn": ["invoiceid"], "tolerance": 0.02, "outputMatched": "matched", "outputUnmatchedLeft": "exceptions" } }, { "type": "slackpost", "filter": { "condition": { "greaterThan": [{ "length": "@exceptions" }, 0] } }, "properties": { "oauthaccount_": "acme-workspace", "channel": "#finance-exceptions", "text": "⚠️ Reconciliation Alert\n\nMatched: {{matched.length}} records\nExceptions: {{exceptions.length}} unmatched invoices\n\nPlease review in the Hyphen dashboard." }, "onFalse": { "type": "slackpost", "properties": { "_oauthaccount__": "acme-workspace", "channel": "#finance-ops", "text": "✅ Reconciliation complete — all {{matched.length}} records matched." } } } ] } } `` --- ## PATH: Platform > Workflow Dsl (Source: platform/02-workflow-dsl.md) Workflow DSL ============ Every Hyphen workflow is a JSON document with a defined structure. This page is the complete specification. 
Workflow Structure ================== ``json { "name": "workflowname", "definition": { "condition": { }, "actions": [ ], "schedule": { } } } ` Top-Level Fields ================ | Field | Type | Required | Description | |-------|------|----------|-------------| | name | string | Yes | Unique identifier within the organization | | definition | object | Yes | The workflow specification | | definition.condition | object | No | Top-level gate — if false, workflow ends with conditionnotmet | | definition.actions | array | Yes | Ordered array of steps to execute | | definition.schedule | object | No | Scheduling configuration (see Scheduling (/platform/scheduling)) | Step Structure ============== Each element in the actions array is a step: `json { "type": "steptype", "properties": { }, "filter": { }, "onFalse": { }, "outputKey": "resultname" } ` Step Fields =========== | Field | Type | Required | Description | |-------|------|----------|-------------| | type | string | Yes | Primitive name (matcher, loop, PbotApproval, PbotForm, custom-table) or registered action name | | properties | object | Yes | Step-specific configuration — varies by type | | filter | object | No | Condition evaluated before execution. If false, step is skipped (or onFalse runs) | | filter.condition | object | No | The condition expression (see Conditional Logic (/platform/conditional-logic)) | | onFalse | object | No | Alternative step to execute when filter evaluates to false. Same structure as a step | | outputKey | string | No | Custom key name for storing this step's output in context | Step Types ========== Primitives (built-in, no registration required): | Type | Reference | |------|-----------| | matcher | Matcher (/primitives/matcher) | | loop | Loop (/primitives/loop) — foreach or react mode | | PbotApproval | Approval (/primitives/approval) | | PbotForm | Form (/primitives/form) | | custom-table | Custom Table (/primitives/custom-table) | Registered actions — use the actionname you registered as the type: `json { "type": "fetchcustomer", "properties": { "customerid": "@input.id" } } ` OAuth business tools — pre-registered actions available after OAuth setup: `json { "type": "gmailsend", "properties": { "oauthaccount_": "ops@company.com", "to": "@input.email", "subject": "Update" } } ` → See Actions (/actions) for registration details Context Resolution: @path =========================== The @ prefix references values from the execution context. Context accumulates as steps execute — each step's output is added for subsequent steps to reference. | Path | Description | Example | |------|-------------|---------| | @input.field | Workflow input payload | @input.customerid | | @input.field.nested | Nested input fields | @input.invoice.amount | | @outputKey.field | Step output by outputKey | @matched.0.invoiceid | | @item | Current item in a foreach loop | @item.email | | @item.field | Field on current loop item | @item.amount | | @runid | Current run ID | | | @step | Current step index | | | @now | Current timestamp | | | @approved | Boolean from PbotApproval | | Array indexing is supported: @matched.0.invoiceid references the first matched record's invoiceid. → Full deep dive: Context Resolution (/platform/context-resolution) Template Syntax: {{ }} ======================== Double braces perform string interpolation inside property values. 
Use for constructing dynamic strings: `json { "subject": "Invoice {{input.invoiceid}} requires review", "body": "Dear {{input.vendorname}},\n\nYour invoice for ${{input.amount}} is pending." } ` @path returns the actual value (object, array, number). {{template}} performs string interpolation — it converts the value to a string and embeds it in surrounding text. Use @path for data references in properties. Use {{ }} when building human-readable strings. Condition Expressions ===================== Conditions are used in definition.condition (top-level gate) and filter.condition (step-level branching). `json { "condition": { "and": [ { "greaterThan": ["@input.amount", 1000] }, { "equal": ["@input.status", "pending"] } ] } } ` → Complete operator reference: Conditional Logic (/platform/conditional-logic) Schedule Configuration ====================== Optional — makes the workflow run on a recurring basis: `json { "schedule": { "every": "1d", "at": "02:00", "timezone": "America/NewYork" } } ` → Full reference: Scheduling (/platform/scheduling) Complete Example ================ A production workflow combining all DSL features: `json { "name": "dailyinvoicereconciliation", "definition": { "condition": { "and": [ { "equal": ["@input.type", "invoice"] }, { "greaterThan": ["@input.items.length", 0] } ] }, "actions": [ { "type": "matcher", "properties": { "left": "@input.items", "right": "@input.purchaseorders", "matchOn": ["ponumber", "amount"], "tolerance": 0.02, "outputMatched": "matched", "outputUnmatchedLeft": "unmatcheditems" } }, { "type": "loop", "filter": { "condition": { "greaterThan": ["@unmatcheditems.length", 0] } }, "properties": { "mode": "react", "objective": "Review {{unmatcheditems.length}} unmatched items", "tools": [ { "type": "action", "name": "searchpurchaseorders" }, { "type": "action", "name": "slackpost" } ], "maxiterations": 15, "onstuck": { "iterations": 3, "action": "escalate" }, "resultkey": "reviewresult" } }, { "type": "custom-table", "properties": { "table": "processinglog", "operation": "write", "keys": ["invoiceid"], "values": ["@input.invoiceid"], "fields": { "matchedcount": "@matched.length", "unmatchedcount": "@unmatcheditems.length", "reviewresult": "@reviewresult", "processedat": "@now" } } } ], "schedule": { "every": "1d", "at": "02:00", "timezone": "America/New_York" } } } ` api POST /workflows Create the workflow definition. api POST /workflows/:id/execute Execute with an input payload. The @input.*` paths resolve to the data you send. --- ## PATH: Primitives > Loop (Source: primitives/02-loop.md) Loop ==== The loop primitive has two modes: foreach for deterministic batch processing and react for AI agent reasoning. Both share the same type: "loop" — the mode property determines behavior. Foreach Mode ============ Process an array of items with configurable concurrency and failure handling. ``json { "type": "loop", "properties": { "mode": "foreach", "itemspath": "@input.customers", "itemvariablename": "customer", "actionstoexecute": [ { "type": "gmailsend", "properties": { "_oauthaccount_": "notifications@company.com", "to": "@customer.email", "subject": "Monthly Statement for {{customer.name}}", "body": "Dear {{customer.name}},\n\nPlease find your statement attached." 
} } ], "maxconcurrency": 10, "failurestrategy": "continueonerror", "collectresults": true, "resultkey": "emailResults" } } ` Foreach Properties ================== | Property | Type | Required | Description | |----------|------|----------|-------------| | mode | "foreach" | Yes | Selects foreach mode | | itemspath | @path | Yes | Array of items to iterate over | | itemvariablename | string | Yes | Variable name for the current item (accessible as @{name}) | | actionstoexecute | array | Yes | Steps to run for each item — same structure as workflow actions | | maxconcurrency | number | No | Maximum parallel executions (default: 1) | | failurestrategy | string | No | "stoponerror" (default) or "continueonerror" | | collectresults | boolean | No | Whether to gather results from all iterations (default: false) | | resultkey | string | No | Context key for collected results | Item Access =========== Inside actionstoexecute, reference the current item using the variable name: `json { "itemvariablename": "order", "actionstoexecute": [ { "type": "processpayment", "properties": { "orderid": "@order.id", "amount": "@order.total", "currency": "@order.currency" } } ] } ` Failure Strategies ================== stoponerror (default): If any item fails, the loop stops and the run fails. Use when all items must succeed. continueonerror: Failed items are logged but processing continues. Use for best-effort batch operations like sending notifications. React Mode ========== Run an AI agent that reasons step-by-step toward an objective. See Agents (/agents) for full details. `json { "type": "loop", "properties": { "mode": "react", "objective": "Investigate this expense report. Check policy compliance, verify receipts, recommend approval or rejection.", "tools": [ { "type": "action", "name": "lookupemployee" }, { "type": "action", "name": "checkexpensepolicy" }, { "type": "action", "name": "verifyreceipt" } ], "model": "gpt-4", "maxiterations": 15, "timeoutms": 300000, "temperature": 0.7, "onstuck": { "action": "escalate", "iterations": 3 }, "includereasoningtrace": true, "resultkey": "expenseDecision" } } ` React Properties ================ | Property | Type | Required | Description | |----------|------|----------|-------------| | mode | "react" | Yes | Selects react mode | | objective | string | Yes | What the agent should accomplish. Supports {{ }} templates | | tools | array | Yes | Available tools — typed declarations or strings (action names). 
See Tool Declarations (/agents/tool-declarations) | | model | string | No | LLM model to use (default: configured in environment) | | maxiterations | number | No | Maximum think-act-observe cycles (default: 10) | | timeoutms | number | No | Maximum execution time in milliseconds (default: 300000) | | temperature | number | No | LLM temperature 0–1 (default: 0.7) | | onstuck | object | No | Recovery when agent loops without progress | | onstuck.iterations | number | No | Repeated iterations before triggering (default: 3) | | onstuck.action | string | No | "fail", "escalate", or "retrywithhint" | | onstuck.hint | string | No | Guidance text for retrywithhint | | includereasoningtrace | boolean | No | Store full reasoning trace (default: true) | | resultkey | string | No | Context key for the agent's final answer | When to Use Which Mode ====================== | Use Foreach When | Use React When | |-----------------|---------------| | You know exactly what to do with each item | The task requires judgment or reasoning | | Processing is deterministic | The approach depends on intermediate results | | Items are independent of each other | The agent needs to decide what to do next | | You need parallel processing | You need natural language understanding | Nested Loops: Foreach Containing React ====================================== A common pattern: iterate over a list of items, and for each item, run an AI agent: `json { "type": "loop", "properties": { "mode": "foreach", "itemspath": "@unmatchedinvoices", "itemvariablename": "exception", "actionstoexecute": [ { "type": "loop", "properties": { "mode": "react", "objective": "Investigate why invoice {{exception.invoiceid}} for ${{exception.amount}} has no matching payment. Check vendor history, look for partial payments, and recommend next steps.", "tools": [ { "type": "action", "name": "searchpayments" }, { "type": "action", "name": "lookupvendor" } ], "maxiterations": 8, "resultkey": "investigation" } } ], "maxconcurrency": 3, "failurestrategy": "continueonerror", "collectresults": true, "resultkey": "allinvestigations" } } ` This processes each exception in parallel (up to 3 at a time), with an AI agent investigating each one independently. Results are collected into @allinvestigations. Concurrency for nested react loops. When running agents in parallel via foreach, keep maxconcurrency moderate (3–5). Each agent makes LLM API calls, so high concurrency can hit rate limits. Use continueon_error` to ensure one failed investigation doesn't block the others. → Next: Approval (/primitives/approval) --- ## PATH: Templates > Insurance Claims Adjudication (Source: templates/02-insurance-claims-adjudication.md) Insurance Claims Adjudication ============================= Claim-to-policy matching, coverage verification, and graduated adjudication. The insurance industry processes billions of claims annually — the majority through manual review queues that this pipeline replaces. What Gets Automated =================== Matching claims to policies on policy number, member ID, and date of service. Auto-adjudicating clean claims that fall within coverage limits and have standard procedure codes. AI analysis of coding ambiguities, coverage edge cases, and multi-provider claims. What Humans Still Own ===================== Complex multi-party claims. Fraud referrals. Policy exception decisions. Appeal adjudication on contested denials. 
Pipeline ======== ``mermaid flowchart TD A[Input: New Claims + Policy Records] --> B[Matcher: Claim-to-Policy] B -->|Policy found| C[Matcher: Duplicate Detection] B -->|No policy match| D[Deny: No Coverage] C -->|No duplicate| E{Auto-Approve Eligible?} C -->|Duplicate found| F[Flag + Halt] E -->|Clean claim ~60%| G[Auto-Adjudicate] E -->|Requires review ~40%| H[ReAct Agent] H --> H1[Verify coverage terms] H1 --> H2[Validate procedure codes] H2 --> H3[Calculate allowed amount] H3 -->|High confidence| I[Auto-Resolve] H3 -->|Low confidence| J[Human Adjudicator] J -->|Approve| K[Process Payment] J -->|Deny| L[Issue Denial Code] J -->|Refer SIU| M[Flag for Investigation] D --> N[Custom Table: Adjudication Log] G --> N I --> N K --> N L --> N F --> N style B fill:#e8a84c,color:#09090b,stroke:none style C fill:#e8a84c,color:#09090b,stroke:none style H fill:#4ade80,color:#09090b,stroke:none style J fill:#60a5fa,color:#09090b,stroke:none ` This template uses two matcher steps in sequence — first for claim-to-policy matching, then for duplicate detection. This is the only template with chained matchers. Workflow Definition =================== `json { "name": "claimsadjudication", "definition": { "actions": [ { "type": "matcher", "properties": { "left": "@input.claims", "right": "@input.policies", "matchOn": ["policynumber", "memberid"], "dateWindowDays": 0, "outputMatched": "claimswithpolicy", "outputUnmatchedLeft": "nopolicyclaims", "outputUnmatchedRight": "unusedpolicies" } }, { "type": "matcher", "filter": { "condition": { "greaterThan": [{ "length": "@claimswithpolicy" }, 0] } }, "properties": { "left": "@claimswithpolicy", "right": "@input.recentclaims", "matchOn": ["memberid", "procedurecode", "servicedate"], "tolerance": 0, "dateWindowDays": 1, "outputMatched": "potentialduplicates", "outputUnmatchedLeft": "uniqueclaims", "outputUnmatchedRight": "nodupmatch" } }, { "type": "loop", "filter": { "condition": { "greaterThan": [{ "length": "@uniqueclaims" }, 0] } }, "properties": { "mode": "foreach", "itemspath": "@uniqueclaims", "itemvariablename": "claim", "actionstoexecute": [ { "type": "loop", "properties": { "mode": "react", "objective": "Adjudicate claim {{claim.a.claimid}}. Policy: {{claim.b.policynumber}}. Procedure: {{claim.a.procedurecode}}. Billed: ${{claim.a.billedamount}}. Verify: 1) Coverage active for service date, 2) Procedure covered under plan, 3) Deductible and benefit limits, 4) In-network status. Recommend: approve with allowed amount, deny with reason code, or escalate.", "tools": [ { "type": "action", "name": "verifycoverage" }, { "type": "action", "name": "lookupprocedurecodes" }, { "type": "action", "name": "checkbenefitlimits" }, { "type": "action", "name": "calculateallowedamount" } ], "maxiterations": 12, "onstuck": { "iterations": 4, "action": "escalate" }, "resultkey": "adjudication" } } ], "maxconcurrency": 10, "failurestrategy": "continueonerror", "collectresults": true, "resultkey": "alladjudications" } }, { "type": "PbotApproval", "properties": { "comment": "{{uniqueclaims.length}} claims adjudicated. {{potentialduplicates.length}} duplicates flagged. 
Review AI decisions.", "requestpayload": { "adjudications": "@alladjudications", "duplicates": "@potentialduplicates", "nopolicydenials": "@nopolicyclaims.length" } } }, { "type": "custom-table", "properties": { "table": "adjudicationlog", "operation": "write", "keys": ["batchid"], "values": ["@runid"], "fields": { "claimsprocessed": "@input.claims.length", "autodeniednopolicy": "@nopolicyclaims.length", "duplicatesflagged": "@potentialduplicates.length", "adjudicated": "@uniqueclaims.length", "completedat": "@now" } } } ] } } ` Required Registered Actions =========================== | Action | Kind | Purpose | |--------|------|---------| | verifycoverage | http | Check policy status and effective dates | | lookupprocedurecodes | db | Retrieve allowed procedure codes for policy type | | checkbenefitlimits | db | Query remaining benefit limits for member/year | | calculateallowed_amount | http | Compute allowed amount per fee schedule | Customization Notes =================== Duplicate detection window. The 1-day dateWindowDays on the second matcher catches same-day duplicate submissions. Extend to 3–5 days if your intake has batch delays. Agent concurrency. 10 concurrent adjudications works for typical batch sizes. Reduce if your coverage verification API has rate limits. Stuck detection. The escalate` action on stuck sends complex claims directly to a human adjudicator rather than retrying — appropriate for claims where incorrect auto-decisions have financial and regulatory consequences. --- ## PATH: Actions > Db (Source: actions/03-db.md) DB Actions ========== DB actions execute queries against your databases. Connection strings are stored encrypted in org config (/platform/multi-tenancy) and referenced via orgconfig: prefix. Registration ============ ``bash curl -X POST http://localhost:3009/actions \ -H "X-Org-Id: acme-corp" \ -H "Content-Type: application/json" \ -d '{ "actionname": "getpendingorders", "kind": "db", "description": "Fetch pending orders from the warehouse database", "datasource": "orgconfig:db:warehousepg", "query": "SELECT FROM orders WHERE status = $1 AND createdat > $2", "params": ["pending", "@input.sincedate"], "passthrough": true, "outputKey": "orders" }' ` Properties ========== | Property | Type | Required | Description | |----------|------|----------|-------------| | actionname | string | Yes | Unique name for this action | | kind | "db" | Yes | | | datasource | string | Yes | Connection string, typically orgconfig:db:keyname | | query | string | Yes | SQL query or database command. Uses $1, $2, etc. for parameters | | params | array | No | Ordered parameters for the query. Supports @path and literal values | | passthrough | boolean | No | If true, raw query results are passed to context | | outputKey | string | No | Context key for query results | Parameterized Queries ===================== Always use parameterized queries ($1, $2, etc.) 
instead of string interpolation for security: `json { "query": "SELECT FROM customers WHERE region = $1 AND annualrevenue > $2", "params": ["@input.region", "@input.minrevenue"] } ` Parameters can be literal values or @path references: `json { "query": "UPDATE invoices SET status = $1 WHERE invoiceid = $2", "params": ["reconciled", "@matched.0.invoiceid"] } ` Datasource Configuration ======================== Store connection strings in org config: `bash PostgreSQL ========== curl -X POST http://localhost:3009/org-config \ -H "X-Org-Id: acme-corp" \ -d '{ "key": "db:warehousepg", "value": "postgresql://user:pass@host:5432/warehouse" }' MySQL ===== curl -X POST http://localhost:3009/org-config \ -H "X-Org-Id: acme-corp" \ -d '{ "key": "db:crmmysql", "value": "mysql://user:pass@host:3306/crm" }' ` Then reference in the action: `json { "datasource": "orgconfig:db:warehousepg" } ` The connection string is resolved at execution time and never stored in the action definition. Usage in a Workflow =================== `json { "type": "getpendingorders", "properties": { "sincedate": "@input.startdate" }, "outputKey": "orders" } ` Query results are stored at @orders — an array of row objects. Usage as an Agent Tool ====================== `json { "mode": "react", "objective": "Find all overdue invoices and summarize by vendor", "tools": [{ "type": "action", "name": "getpendingorders" }, { "type": "action", "name": "getvendor_info" }] } ` The agent can query the database to gather information during its reasoning process. Results are returned as observations the agent can analyze. Supported Databases =================== | Database | Datasource Format | |----------|-------------------| | PostgreSQL | postgresql://user:pass@host:5432/dbname | | MySQL | mysql://user:pass@host:3306/dbname | | MongoDB | mongodb://user:pass@host:27017/dbname | | Neo4j | neo4j://user:pass@host:7687` | Security. DB actions execute queries with the permissions of the connection string's user. Use database users with the minimum necessary privileges — read-only users for query actions, limited write access for mutation actions. Never use admin credentials. → Next: Matcher Actions (/actions/matcher) --- ## PATH: Agents > Custom Tools (Source: agents/03-custom-tools.md) Tool Declarations ================= Tools give ReAct agents the ability to take actions. Hyphen uses a typed tool declaration system — you declare what tools are available, and the resolver handles the rest. The Two Tool Types ================== Action Tools ============ Reference any registered action by name. The resolver automatically fetches the action's description, parameters, and kind from the database. ``json { "type": "action", "name": "classifydocument" } ` That's it. The resolver looks up classifydocument in your registered actions, builds parameter hints from its kind and properties, and presents a fully enriched tool definition to the LLM. You don't need to repeat the description or parameters. Workflow Tools ============== Reference a workflow by ID. The agent can trigger it during execution via _runworkflow_. `json { "type": "workflow", "id": "wfabc123" } ` The resolver fetches the workflow's name and description, then presents it to the LLM by its human-readable name. When the agent invokes the tool, Hyphen executes the workflow as a sub-process. 
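As a rough sketch of what such an invocation looks like inside a reasoning trace, the agent calls the built-in _runworkflow_ tool. Here wfabc123 is the workflow ID from the declaration above, and the input fields are illustrative assumptions: `json { "thought": "The enrichment pipeline can fill in the missing company details.", "action": "_runworkflow_", "actioninput": { "workflowid": "wfabc123", "input": { "companyname": "Acme Corp" }, "wait": true } } ` The workflow's output is returned to the agent as its next observation.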
Complete Example ================ An agent that researches companies, enriches data through a pipeline, and sends outreach: `json { "type": "loop", "properties": { "mode": "react", "objective": "Research {{input.companyname}} and draft an outreach email", "tools": [ { "type": "action", "name": "searchcompany" }, { "type": "action", "name": "searchnews" }, { "type": "action", "name": "gmailsend" }, { "type": "workflow", "id": "wfenrichmentpipeline" } ], "model": "gpt-4", "maxiterations": 10, "onstuck": { "iterations": 3, "action": "escalate" }, "resultkey": "agentResult" } } ` The agent gets four tools (searchcompany, searchnews, gmailsend, and the enrichment workflow) plus all implicit tools — without any inline descriptions or parameter specs. Implicit Tools (Always Available) ================================= These tools are automatically injected into every ReAct agent. You never need to declare them: | Tool | Purpose | |------|---------| | complete | Signal task completion with answer and confidence | | _pauseforhuman | Pause and request human input | | storememory | Store data for later retrieval within the session | | retrievememory | Retrieve previously stored data | | logprogress_ | Record milestones for observability | See Built-in Tools (/agents/built-in-tools) for the full parameter reference. Tool Resolution Order ===================== The resolver processes the tools array in this order: 1. Typed objects ({ type: "action" | "workflow" }) — preferred format. Resolved via batched DB queries (max 2 queries regardless of tool count). 2. Legacy strings ("actionname") — still supported. Treated as { type: "action", name: "..." } internally. 3. Legacy inline objects ({ name, action, description, parameters }) — still supported. Routed as custom tool definitions with full parameter specs. 4. Implicit tools — always auto-injected after all declared tools are resolved. Unresolved actions (not found in registered actions or built-in OAuth actions) are still added to the tool set — the agent receives feedback if it tries to use an unknown tool. Structural Permissioning ======================== The tools array is an architectural constraint, not a policy layer. The agent cannot discover, invent, or access capabilities beyond what you've declared: `json { "tools": [ { "type": "action", "name": "blockip" }, { "type": "action", "name": "isolatehost" } ] } ` This agent can block IPs and isolate hosts. It cannot delete firewall rules, shut down servers, or take any other action — even if those actions are registered in your org. The tool list defines the boundary. This is what makes governed autonomy possible. The agent reasons freely within the boundary. The boundary itself is structural. Legacy Formats (Backward Compatible) ==================================== String References ================= The simplest legacy format. Still fully supported and will remain so indefinitely: `json { "tools": ["lookupcustomer", "gmailsend"] } ` Internally converted to { type: "action", name: "lookupcustomer" }. Inline Object Definitions ========================= Full control over what the agent sees. 
Still supported for backward compatibility, but not recommended for new workflows: `json { "tools": [ { "name": "calculatediscount", "action": "discountcalculator", "description": "Calculate discount based on customer tier and order value", "parameters": { "customertier": { "type": "string", "required": true, "description": "Customer tier: bronze, silver, gold, platinum" }, "ordervalue": { "type": "number", "required": true, "description": "Total order value in USD" } } } ] } ` | Field | Type | Required | Description | |-------|------|----------|-------------| | name | string | Yes | Tool name the agent uses in the action field | | action | string | Yes | Registered action this tool maps to | | description | string | Yes | What the tool does — shown to the agent in its prompt | | parameters | object | Yes | Parameter definitions with type, required, description, enum | The inline object format may be deprecated in a future version. New workflows should use typed declarations exclusively. Mixing Formats ============== All three formats can coexist in the same tools array: `json { "tools": [ { "type": "action", "name": "searchcompany" }, "gmailsend", { "name": "customscorer", "action": "leadscoringapi", "description": "Score a lead based on engagement data", "parameters": { "lead_id": { "type": "string", "required": true } } } ] } ` The resolver handles each format appropriately. However, for consistency and maintainability, prefer typed declarations for all new work. When to Use Each Format ======================= | Format | When to Use | |--------|-------------| | Typed action { type: "action" } | Default for all new workflows. Let the resolver pull metadata from the action registration. | | Typed workflow { type: "workflow" } | When the agent needs to trigger sub-workflows (Pattern B or C). | | Legacy string "name" | Quick prototyping, or when migrating existing workflows that already use strings. | | Legacy inline object | Only when you need to override the registered action's description or expose a subset of parameters to the agent. | Tips for Effective Tool Sets ============================ Fewer tools = better agent performance. Agents reason more effectively with 3–7 tools than with 15. Scope the tool set to what the specific task requires. Action descriptions matter. The resolver pulls descriptions from your registered actions. Write good action descriptions at registration time, and every agent that references the action benefits. Use workflow tools for complex sub-processes. Instead of giving the agent 10 granular tools, consider wrapping related steps into a workflow and giving the agent one workflow tool. Test with reasoning traces. After executing an agent, inspect the reasoning trace at GET /agents/:id/trace`. If the agent is misusing tools, the trace shows exactly where the confusion occurs — usually a description problem. → Next: Stuck Detection (/agents/stuck-detection) --- ## PATH: Agents > Deployment Patterns > Agent As Orchestrator (Source: agents/deployment-patterns/03-agent-as-orchestrator.md) Pattern C: Agent as Orchestrator ================================ The agent coordinates multiple workflows, making decisions about what to trigger, in what order, and how to synthesize results across them. The agent is the decision-maker; each workflow is a capability it can invoke. ``mermaid flowchart TD Agent["Orchestrator Agent"] --> KYC["KYC Verification
Workflow"] Agent --> Sanctions["Sanctions Screening
Workflow"] Agent --> EDD["Enhanced Due Diligence
Workflow"] Agent --> Account["Account Provisioning
Workflow"] KYC -->|result| Agent Sanctions -->|result| Agent EDD -->|result| Agent Agent -->|"conflicts detected"| Human["Human Review"] Human -->|decision| Agent Agent --> Account style Agent fill:#f3e8ff,stroke:#9333ea ` When to Use =========== - The process requires dynamic coordination across multiple sub-processes - The sequence of steps depends on intermediate results - You need to synthesize information from multiple workflows before deciding the next step - The agent is the decision-maker, not just a participant Complete Example: Customer Onboarding ===================================== An orchestrator agent that onboards a business customer — running identity verification, sanctions screening, and conditionally triggering enhanced due diligence. `json { "objective": "Onboard business customer: {{input.companyname}}\n\nCustomer details:\n- Company: {{input.companyname}}\n- Country: {{input.country}}\n- Registration: {{input.registrationnumber}}\n- Contact: {{input.contactname}} ({{input.contactemail}})\n\nProcess:\n1. Run identity verification\n2. Run sanctions screening\n3. If any concerns, run enhanced due diligence\n4. If still uncertain, request human review\n5. If approved, provision the account\n6. Send welcome email", "tools": [ { "type": "workflow", "name": "identityverification" }, { "type": "workflow", "name": "sanctionsscreening" }, { "type": "workflow", "name": "enhancedduediligence" }, { "type": "workflow", "name": "accountprovisioning" }, { "type": "action", "name": "gmailsend" } ], "config": { "model": "gpt-4", "maxiterations": 20, "timeoutms": 600000, "onstuck": { "iterations": 3, "action": "escalate" } } } ` Sample Reasoning Trace ====================== ` Iteration 1: Thought: "Starting onboarding for Acme GmbH. First step is identity verification." Action: _logprogress Input: { "message": "Starting onboarding", "milestone": "begin" } Iteration 2: Thought: "Let me run the identity verification workflow." Action: runworkflow Input: { "workflowname": "identityverification", "input": { "companyname": "Acme GmbH", "country": "DE", "registrationnumber": "HRB 12345" }, "wait": true, "timeoutms": 120000 } Observation: { "status": "completed", "result": { "verified": true, "confidence": 0.92, "source": "traderegister" } } Iteration 3: Thought: "Identity verified. Now I need to run sanctions screening." Action: storememory_ Input: { "key": "kycresult", "value": { "verified": true, "confidence": 0.92 } } Iteration 4: Thought: "Running sanctions screening." Action: _runworkflow_ Input: { "workflowname": "sanctionsscreening", "input": { "entityname": "Acme GmbH", "country": "DE", "registrationnumber": "HRB 12345" }, "wait": true } Observation: { "status": "completed", "result": { "matchesfound": 1, "matchtype": "partialname", "risklevel": "medium" } } Iteration 5: Thought: "Sanctions screening found a partial name match with medium risk. This needs enhanced due diligence before I can proceed." Action: logprogress_ Input: { "message": "Partial sanctions match — triggering EDD", "milestone": "eddrequired" } Iteration 6: Thought: "Running enhanced due diligence workflow." 
Action: _runworkflow_ Input: { "workflowname": "enhancedduediligence", "input": { "entityname": "Acme GmbH", "country": "DE", "sanctionsmatch": { "matchtype": "partialname", "risklevel": "medium" } }, "wait": true, "timeoutms": 180000 } Observation: { "status": "completed", "result": { "conclusion": "falsepositive", "confidence": 0.78, "reasoning": "Name similarity to sanctioned entity 'Acme Holdings Ltd' but different jurisdiction and industry." } } Iteration 7: Thought: "EDD concluded false positive but confidence is only 78%. Since this is below 85%, I should get human confirmation before provisioning." Action: pauseforhuman Input: { "question": "EDD concluded false positive (78% confidence) for Acme GmbH sanctions match. Should I proceed with account provisioning?", "options": ["Approve — proceed with onboarding", "Reject — deny application", "Request additional documentation"], "contextsummary": "KYC verified (92%), sanctions partial match on name, EDD says false positive (78%)", "urgency": "medium" } — Agent pauses. Human reviews and responds: "Approve — proceed with onboarding" — Iteration 8: Thought: "Human approved. Let me provision the account." Action: _runworkflow_ Input: { "workflowname": "accountprovisioning", "input": { "companyname": "Acme GmbH", "country": "DE", "contactemail": "hans@acme.de", "approvedby": "humanreview" }, "wait": true } Observation: { "status": "completed", "result": { "accountid": "accxyz789" } } Iteration 9: Thought: "Account provisioned. Sending welcome email." Action: gmailsend Input: { "_oauthaccount_": "onboarding@ourcompany.com", "to": "hans@acme.de", "subject": "Welcome to Our Platform — Acme GmbH", "body": "Dear Hans,\n\nYour account (accxyz789) has been provisioned..." } Observation: { "sent": true, "messageid": "msgabc" } Iteration 10: Thought: "Onboarding complete. All steps passed." Action: complete Input: { "answer": "Onboarding complete for Acme GmbH. KYC verified, sanctions false positive confirmed by human review, account accxyz789 provisioned, welcome email sent.", "confidence": 0.95, "summary": "4 workflows executed, 1 human review, account provisioned" } ` What Makes This Pattern Powerful ================================ Dynamic sequencing. The agent decided to run EDD because sanctions screening returned a partial match. If screening had returned clean, it would have skipped EDD entirely. Cross-workflow synthesis. The agent combined results from KYC, sanctions, and EDD to make a holistic decision — and recognized that 78% confidence was too low to proceed without human confirmation. Conditional escalation. The agent used a confidence threshold to decide when to involve a human. Clear cases flow through automatically; uncertain cases get human review. Full audit trail. Every workflow triggered, every result received, every human decision, and the agent's reasoning about all of it — captured in one trace. Key Design Considerations ========================= - Set maxiterations higher for orchestrator agents (15–20). They make more tool calls because they coordinate multiple workflows - Set timeoutms appropriately. Orchestrators wait for sub-workflows, so total time is the sum of all sub-workflow durations plus agent reasoning time - Use storememory to save intermediate results. The agent's context window grows with each iteration, but explicit memory storage makes retrieval reliable - Include pauseforhuman as a tool.
Orchestrators make high-stakes decisions and should have the option to escalate → Back to Deployment Patterns overview (/agents/deployment-patterns) --- ## PATH: Getting Started > Your First Workflow (Source: getting-started/03-your-first-workflow.md) Your First Workflow =================== Build a working workflow from scratch: register an action, create a matcher with conditional branching, execute it with sample data, add human approval, and read the final output. Prerequisites ============= A running Hyphen instance at http://localhost:3009. All examples use X-Org-Id: tutorial-org. Step 1: Register a Mock HTTP Action =================================== Before building the workflow, register an action the workflow can call. This simulates updating an ERP system: ``bash curl -X POST http://localhost:3009/actions \ -H "X-Org-Id: tutorial-org" \ -H "Content-Type: application/json" \ -d '{ "actionname": "updateerpstatus", "kind": "http", "description": "Mark a record as reconciled in the ERP", "url": "https://httpbin.org/post", "httpmethod": "POST", "passthrough": true }' ` We're using httpbin.org as a mock endpoint. In production, this would be your actual ERP API. Verify it was registered: `bash curl http://localhost:3009/actions \ -H "X-Org-Id: tutorial-org" ` Step 2: Create the Workflow =========================== Create a workflow that matches invoices to payments and conditionally processes the results: `bash curl -X POST http://localhost:3009/workflows \ -H "X-Org-Id: tutorial-org" \ -H "Content-Type: application/json" \ -d '{ "name": "firstworkflow", "definition": { "actions": [ { "type": "matcher", "properties": { "left": "@input.invoices", "right": "@input.payments", "matchOn": ["invoiceid"], "tolerance": 0.05, "outputMatched": "matched", "outputUnmatchedLeft": "unmatchedinvoices", "outputUnmatchedRight": "unmatchedpayments" } }, { "type": "updateerpstatus", "filter": { "condition": { "greaterThan": [{ "length": "@matched" }, 0] } }, "properties": { "records": "@matched", "action": "markreconciled" } } ] } }' ` Save the returned workflow ID — you'll need it to execute. What this does: Takes two datasets (invoices and payments), matches them on invoiceid with 5% amount tolerance, then calls the updateerpstatus action if any matches were found. Step 3: Execute with Sample Data ================================ `bash curl -X POST http://localhost:3009/workflows/WORKFLOWID/execute \ -H "X-Org-Id: tutorial-org" \ -H "Content-Type: application/json" \ -d '{ "invoices": [ { "invoiceid": "INV-001", "vendor": "Acme Corp", "amount": 1000.00 }, { "invoiceid": "INV-002", "vendor": "Beta LLC", "amount": 2500.00 }, { "invoiceid": "INV-003", "vendor": "Gamma Inc", "amount": 500.00 } ], "payments": [ { "invoiceid": "INV-001", "vendor": "Acme Corp", "amount": 1000.00 }, { "invoiceid": "INV-002", "vendor": "Beta LLC", "amount": 2480.00 } ] }' ` Replace WORKFLOWID with the ID returned in Step 2. Step 4: Read the Run Output =========================== `bash curl http://localhost:3009/runs/RUNID/status \ -H "X-Org-Id: tutorial-org" ` Expected output: `json { "status": "completed", "context": { "matched": [ { "a": { "invoiceid": "INV-001", "amount": 1000.00 }, "b": { "invoiceid": "INV-001", "amount": 1000.00 } }, { "a": { "invoiceid": "INV-002", "amount": 2500.00 }, "b": { "invoiceid": "INV-002", "amount": 2480.00 } } ], "unmatchedinvoices": [ { "invoiceid": "INV-003", "vendor": "Gamma Inc", "amount": 500.00 } ], "unmatchedpayments": [] } } ` INV-001 matched exactly. 
INV-002 matched within the 5% tolerance ($2500 vs $2480 = 0.8% difference). INV-003 had no matching payment, so it appears in unmatchedinvoices. Step 5: Add a PbotApproval Step =============================== Update the workflow to require human approval when there are unmatched invoices: `bash curl -X PUT http://localhost:3009/workflows/WORKFLOWID \ -H "X-Org-Id: tutorial-org" \ -H "Content-Type: application/json" \ -d '{ "name": "firstworkflowv2", "definition": { "actions": [ { "type": "matcher", "properties": { "left": "@input.invoices", "right": "@input.payments", "matchOn": ["invoiceid"], "tolerance": 0.05, "outputMatched": "matched", "outputUnmatchedLeft": "unmatchedinvoices", "outputUnmatchedRight": "unmatchedpayments" } }, { "type": "updateerpstatus", "filter": { "condition": { "greaterThan": [{ "length": "@matched" }, 0] } }, "properties": { "records": "@matched", "action": "markreconciled" } }, { "type": "PbotApproval", "filter": { "condition": { "greaterThan": [{ "length": "@unmatchedinvoices" }, 0] } }, "properties": { "comment": "{{unmatchedinvoices.length}} invoices have no matching payment. Review and decide.", "requestpayload": { "unmatched": "@unmatchedinvoices", "matchedcount": "@matched.length" } } } ] } }' ` Execute it again with the same data. This time the run will pause at the approval step. Step 6: Submit Approval ======================= Check the run status — it should show paused: `bash curl http://localhost:3009/runs/RUNID/status \ -H "X-Org-Id: tutorial-org" ` Submit the approval: `bash curl -X POST http://localhost:3009/approvals/RUNID/2 \ -H "X-Org-Id: tutorial-org" \ -H "Content-Type: application/json" \ -d '{ "approved": true, "comments": "INV-003 is a known timing issue. Will resolve next cycle.", "data": { "reviewer": "tutorial-user" } }' ` The /2 in the URL refers to step index 2 (the third step, zero-indexed). Step 7: Check Final State ========================= `bash curl http://localhost:3009/runs/RUNID/status \ -H "X-Org-Id: tutorial-org" ` The run should now show completed with the approval decision captured: `json { "status": "completed", "context": { "matched": [ ... ], "unmatchedinvoices": [ ... ], "approved": true, "approvaldata": { "reviewer": "tutorial-user", "comments": "INV-003 is a known timing issue." } } } ` What You Built ============== A workflow that: 1. Matches invoices to payments on invoiceid` with 5% amount tolerance 2. Calls an external action to mark matched records as reconciled 3. Pauses for human review when unmatched invoices exist 4. Captures the reviewer's decision as part of the permanent audit trail Next: Your First Agent (/getting-started/your-first-agent) — add an AI agent that investigates the unmatched exceptions. --- ## PATH: Guides > Human In The Loop (Source: guides/03-human-in-the-loop.md) Human-in-the-Loop ================= Hyphen provides two mechanisms for human involvement: PbotApproval in workflows and _pauseforhuman in agents. Both pause execution, present context to a reviewer, capture a structured decision, and resume. This guide covers both patterns, webhook notifications, building review UIs, and resuming with structured data. PbotApproval in Workflows ========================= The PbotApproval primitive pauses a deterministic workflow at a defined point. Use it for known decision gates — expense approvals, reconciliation sign-offs, compliance reviews. 
Basic Usage =========== ``json { "type": "PbotApproval", "properties": { "comment": "Review expense report for ${{input.amount}} from {{input.employeename}}", "requestpayload": { "expenseid": "@input.expenseid", "amount": "@input.amount", "category": "@input.category", "receipts": "@input.receipturls" } } } ` When the workflow reaches this step, execution pauses and the run status changes to paused. Conditional Approval ==================== Gate the approval step so it only triggers when needed: `json { "type": "PbotApproval", "filter": { "condition": { "greaterThan": ["@input.amount", 10000] } }, "properties": { "comment": "High-value expense: ${{input.amount}} requires manager approval" } } ` Expenses under $10,000 skip the approval and continue to the next step. Submitting an Approval ====================== `bash POST /approvals/:runId/:stepIndex X-Org-Id: acme-corp Content-Type: application/json { "approved": true, "comments": "Verified receipts, approved.", "data": { "reviewer": "manager@acme.com", "overrideamount": null, "notes": "Standard business travel" } } ` After submission, the workflow resumes. The approval decision is available in context: | Context Path | Value | |-------------|-------| | @approved | true or false | | @approvaldata.reviewer | "manager@acme.com" | | @_approvaldata.notes | "Standard business travel" | Branching on Approval ===================== Use a filter on subsequent steps to branch based on the approval decision: `json { "type": "processpayment", "filter": { "condition": { "equal": ["@approved", true] } }, "properties": { "expenseid": "@input.expenseid" }, "onFalse": { "type": "gmailsend", "properties": { "_oauthaccount_": "finance@acme.com", "to": "@input.employeeemail", "subject": "Expense report rejected", "body": "Your expense report was not approved. Reviewer notes: {{_approvaldata.notes}}" } } } ` _pauseforhuman in Agents =============================== Inside a ReAct loop, the agent decides when to escalate. Use this for dynamic situations where the agent discovers it needs human guidance during its reasoning process. How the Agent Calls It ====================== `json { "thought": "I found two possible matches and I'm not confident which is correct. I need human guidance.", "action": "pauseforhuman", "actioninput": { "question": "Which customer record should I use for this refund?", "options": [ "John Smith (ID: C-001) — Premium since 2019, 47 orders", "John Smith (ID: C-047) — Standard since 2023, 3 orders" ], "contextsummary": "Processing refund for order #12345. Found two matching customer records.", "urgency": "medium" } } ` What Happens ============ 1. Agent execution pauses — status becomes paused 2. The agentpaused webhook fires (if configured) 3. A human reviews the context and responds 4. The agent resumes with the human's input injected into its conversation history Resuming a Paused Agent ======================= `bash POST /agents/:agentRunId/resume X-Org-Id: acme-corp Content-Type: application/json { "humaninput": "Use C-001 — the premium customer. The order history confirms this is the right account.", "approved": true } ` The agent's next iteration sees the human's response as an observation and continues reasoning from there. When an agent pauses, its full state is checkpointed. You can resume hours or days later — the agent picks up exactly where it left off. But be aware that external state (API data, database records) may have changed in the interim. 
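Because external state can drift while a run is paused, one option is to fold a re-verification instruction into the resume payload itself. A minimal sketch, continuing the refund example above (the humaninput wording is illustrative): `json { "humaninput": "Use C-001, the premium customer. Before issuing the refund, re-check the order status in case anything changed while this run was paused.", "approved": true } ` The agent receives this as its next observation and can re-run its lookup tools before acting on stale data.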
Webhook Notifications ===================== Both approval mechanisms fire webhooks so your systems can react immediately. Workflow Approval Requested =========================== `json { "event": "pbotapprovalrequested", "timestamp": "2026-02-01T10:30:00Z", "orgid": "acme-corp", "data": { "runid": "runabc123", "workflowid": "wfxyz", "stepindex": 2, "comment": "Review expense report for $15,000", "requestpayload": { ... } } } ` Agent Paused ============ `json { "event": "agentpaused", "timestamp": "2026-02-01T10:30:00Z", "orgid": "acme-corp", "data": { "agentrunid": "agentabc123", "question": "Which customer record should I use?", "options": ["C-001", "C-047"], "contextsummary": "Processing refund for order #12345", "urgency": "medium", "iteration": 5 } } ` Use these webhooks to route notifications to Slack, email, or your custom review dashboard. Building a Review UI ==================== The approval API provides everything you need to build a custom review interface. List Pending Approvals ====================== `bash GET /approvals/:runId X-Org-Id: acme-corp ` Returns the approval request with full context — the comment, request payload, and any data the workflow provided. List Paused Agents ================== `bash GET /agents?status=paused X-Org-Id: acme-corp ` Returns all paused agent runs with their questions, options, and context summaries. Review UI Pattern ================= A typical review UI flow: `mermaid sequenceDiagram participant W as Webhook participant UI as Review Dashboard participant API as Hyphen API W->>UI: pbotapprovalrequested / agentpaused UI->>API: GET /approvals/:runId (or GET /agents/:id/status) API-->>UI: Full context + request payload Note over UI: Reviewer sees context,
makes decision UI->>API: POST /approvals/:runId/:step (or POST /agents/:id/resume) API-->>UI: 200 OK — execution resumed ` Structured Data in Approvals ============================ The data field in approval submissions lets reviewers pass structured information back to the workflow: `json { "approved": true, "data": { "adjustedamount": 8500, "reasoncode": "partialapproval", "reviewernotes": "Approved hotel and flights, rejected entertainment expenses" } } ` These values are available downstream as @approvaldata.adjustedamount, @approvaldata.reasoncode, etc. When to Use Which ================= | Scenario | Mechanism | Why | |----------|-----------|-----| | Known decision gate (expense > $10K) | PbotApproval | Predictable, defined at design time | | Agent discovers it needs help | pauseforhuman | Dynamic, decided at runtime | | Batch review of AI findings | PbotApproval after a loop | Collect all results, review once | | Real-time escalation during investigation | pauseforhuman | Agent determines when confidence is low | | Compliance sign-off | PbotApproval | Regulatory requirement, always needed | | Ambiguous data during research | pauseforhuman_` | Agent can't resolve without human context | Both mechanisms produce audit records. Both support structured data in the response. Both can trigger webhooks. The difference is whether the decision point is known upfront (approval) or discovered at runtime (pause). --- ## PATH: Integrations > Outlook (Source: integrations/03-outlook.md) Outlook ======= Send emails, read inbox messages, and create calendar events from workflows and agents using the Microsoft Graph API. Available Actions ================= | Action | Description | |--------|-------------| | outlooksend | Send a new email | | outlookread | Read emails matching a query | | outlookcalendarcreate | Create a calendar event | Setup ===== 1. Store OAuth App Credentials ============================== Register an app in Azure Active Directory with Microsoft Graph permissions, then store the credentials: ``bash curl -X POST http://localhost:3009/oauth/outlook/app-credentials \ -H "X-Org-Id: acme-corp" \ -H "Content-Type: application/json" \ -d '{ "clientid": "your-azure-app-client-id", "clientsecret": "your-azure-app-client-secret" }' ` 2. Authorize a User Account =========================== `bash curl "http://localhost:3009/oauth/outlook/authorize?returnurl=https://yourapp.com/oauth/callback" \ -H "X-Org-Id: acme-corp" ` The user completes the Microsoft consent flow. Hyphen stores the tokens and handles refresh. 3. Verify Connection ==================== `bash curl http://localhost:3009/oauth/connections \ -H "X-Org-Id: acme-corp" ` Using in Workflows ================== Send Email ========== `json { "type": "outlooksend", "properties": { "_oauthaccount_": "ops@acme.com", "to": "@input.recipient", "subject": "Action Required: Review Pending Approvals", "body": "You have {{input.pendingcount}} approvals waiting for your review." } } ` Read Inbox ========== `json { "type": "outlookread", "properties": { "oauthaccount_": "procurement@acme.com", "query": "from:vendor@supplier.com subject:quote", "maxresults": 5 } } ` Create Calendar Event ===================== `json { "type": "outlookcalendarcreate", "properties": { "_oauthaccount_": "scheduling@acme.com", "subject": "Onboarding Call — {{input.companyname}}", "start": "@input.meetingtime", "durationminutes": 30, "attendees": ["@input.contactemail", "onboarding@acme.com"], "body": "Welcome call for new customer {{input.companyname}}." 
} } ` Using as Agent Tools ==================== `json { "mode": "react", "objective": "Schedule onboarding for the new customer and send confirmation", "tools": [ "lookupcustomer", { "type": "action", "name": "outlooksend" }, { "type": "action", "name": "outlookcalendarcreate" } ], "maxiterations": 8 } ` Complete Example: Post-Approval Notification ============================================ When an approval is granted, send an Outlook email and schedule a follow-up: `json { "name": "approvalfollowup", "definition": { "actions": [ { "type": "PbotApproval", "properties": { "comment": "Approve vendor {{input.vendorname}} for ${{input.contractvalue}}?", "requestpayload": { "vendor": "@input.vendorname", "value": "@input.contractvalue" } } }, { "type": "outlooksend", "filter": { "condition": { "equal": ["@approved", true] } }, "properties": { "oauthaccount": "procurement@acme.com", "to": "@input.vendoremail", "subject": "Vendor Approval Confirmed — {{input.vendorname}}", "body": "Your vendor application has been approved. Contract value: ${{input.contractvalue}}. Our team will be in touch to finalize onboarding." } }, { "type": "outlookcalendarcreate", "filter": { "condition": { "equal": ["@approved", true] } }, "properties": { "oauthaccount": "procurement@acme.com", "subject": "Vendor Onboarding — {{input.vendorname}}", "start": "@input.proposedmeetingtime", "durationminutes": 45, "attendees": ["@input.vendoremail", "procurement@acme.com"] } } ] } } `` --- ## PATH: Platform > Context Resolution (Source: platform/03-context-resolution.md) Context Resolution ================== Every workflow run maintains a context object. It starts with the input payload and grows as each step adds its output. The @path syntax lets any step reference data produced by earlier steps. How Context Accumulates ======================= ``mermaid flowchart TD subgraph "Context after input" I["@input.invoices
@input.payments"] end subgraph "Context after Step 1 (Matcher)" M["+ @matched
+ @unmatchedinvoices
+ @unmatchedpayments"] end subgraph "Context after Step 2 (ReAct)" R["+ @investigation"] end subgraph "Context after Step 3 (Approval)" A["+ @approved
+ @approvaldata"] end I --> M --> R --> A ` Each step can read anything from context that was set before it. Steps cannot read from steps that haven't executed yet. @path Reference Table ===================== | Path | Source | Description | |------|--------|-------------| | @input. | Workflow execution payload | The JSON body sent to POST /workflows/:id/execute | | @input.field.nested | Nested input | Dot notation traverses nested objects | | @outputKey. | Step output | References output by outputKey or default output name | | @matched | Matcher | Array of matched record pairs | | @unmatchedLeft | Matcher | Left-side records with no match (or custom name via outputUnmatchedLeft) | | @unmatchedRight | Matcher | Right-side records with no match (or custom name via outputUnmatchedRight) | | @item | Foreach loop | Current item being processed | | @item.field | Foreach loop | Field on the current loop item | | @runid | System | Current run identifier | | @step | System | Current step index (0-based) | | @now | System | Current ISO timestamp | | @approved | PbotApproval | Boolean — whether the reviewer approved | | @_approvaldata | PbotApproval | Additional data from the reviewer's response | Naming Step Outputs =================== By default, matcher outputs use their configured output names (outputMatched, outputUnmatchedLeft, outputUnmatchedRight). Other steps store results under their outputKey or under a default name. `json { "type": "loop", "properties": { "mode": "react", "objective": "Investigate exceptions", "tools": [{ "type": "action", "name": "lookuprecord" }], "resultkey": "investigation" } } ` The agent's final answer is stored at @investigation in context, accessible by all subsequent steps. For registered actions, outputKey controls the context key: `json { "type": "fetchcustomer", "properties": { "customerid": "@input.id" }, "outputKey": "customerdata" } ` Result available at @customerdata for subsequent steps. Nested Path Resolution ====================== Dot notation traverses nested objects to any depth: `json { "type": "sendnotification", "properties": { "email": "@customerdata.contact.email", "name": "@customerdata.contact.firstname", "company": "@customerdata.organization.name" } } ` Array Indexing ============== Access specific array elements by index: `json { "firstmatchid": "@matched.0.invoiceid", "secondmatchamount": "@matched.1.amount" } ` Access array length: `json { "condition": { "greaterThan": [{ "length": "@unmatchedinvoices" }, 0] } } ` Template Interpolation: {{ }} =============================== Double braces perform string interpolation — they convert a value to a string and embed it in surrounding text. Use for building human-readable messages, email subjects, Slack posts, and log entries. `json { "type": "gmailsend", "properties": { "subject": "Invoice {{input.invoiceid}} — {{input.vendorname}}", "body": "Dear {{input.vendorname}},\n\nYour invoice for ${{input.amount}} has been processed.\n\nMatched records: {{matched.length}}\nExceptions: {{unmatchedinvoices.length}}" } } ` @path vs {{ }} — These serve different purposes. @path returns the raw value (object, array, number, boolean) and is used for data references in properties. {{template}} converts to string and embeds in text. Use @path when passing data. Use {{ }} when constructing messages. 
`json { "datareference": "@input.invoices", "message": "Processing {{input.invoices.length}} invoices" } ` Worked Example: 4-Step Workflow =============================== `json { "name": "fullcontextexample", "definition": { "actions": [ { "type": "matcher", "properties": { "left": "@input.invoices", "right": "@input.payments", "matchOn": ["invoiceid"], "tolerance": 0.02, "outputMatched": "reconciled", "outputUnmatchedLeft": "exceptions" } }, { "type": "loop", "properties": { "mode": "foreach", "itemspath": "@exceptions", "itemvariablename": "exception", "actionstoexecute": [ { "type": "lookupvendor", "properties": { "vendorid": "@exception.vendorid" } } ], "collectresults": true, "resultkey": "vendorlookups" } }, { "type": "PbotApproval", "filter": { "condition": { "greaterThan": ["@exceptions.length", 5] } }, "properties": { "comment": "{{exceptions.length}} exceptions found — review before proceeding", "requestpayload": { "exceptioncount": "@exceptions.length", "vendordetails": "@vendorlookups", "reconciledcount": "@reconciled.length" } } }, { "type": "custom-table", "properties": { "table": "reconlog", "operation": "write", "keys": ["runid"], "values": ["@_runid"], "fields": { "matched": "@reconciled.length", "exceptions": "@exceptions.length", "approved": "@_approved", "timestamp": "@now" } } } ] } } ` Context at each step: | After Step | Keys Added | Available | |------------|-----------|-----------| | Input | @input.invoices, @input.payments | input | | Step 1 (Matcher) | @reconciled, @exceptions | input + matcher output | | Step 2 (Loop) | @vendorlookups | input + matcher + loop results | | Step 3 (Approval) | @approved, @approval_data` | input + matcher + loop + approval | | Step 4 (Table) | — (writes to storage) | everything | → Next: Conditional Logic (/platform/conditional-logic) --- ## PATH: Primitives > Approval (Source: primitives/03-approval.md) Approval (PbotApproval) ======================= PbotApproval pauses workflow execution and waits for a human to review and decide. The reviewer sees the context you provide, makes a decision, and the workflow resumes with their response in context. Basic Usage =========== ``json { "type": "PbotApproval", "properties": { "comment": "Please review this expense for ${{input.amount}}", "requestpayload": { "expenseid": "@input.id", "amount": "@input.amount", "category": "@input.category", "submittedby": "@input.employeename" } } } ` When execution reaches this step, the run status changes to paused. The workflow stays paused until the approval is submitted. Properties Reference ==================== | Property | Type | Required | Description | |----------|------|----------|-------------| | comment | string | Yes | Human-readable description of what needs review. Supports {{ }} templates | | requestpayload | object | No | Structured data presented to the reviewer | Approval Flow ============= `mermaid sequenceDiagram participant Workflow participant Engine participant Reviewer Workflow->>Engine: PbotApproval step Engine->>Engine: Run status → "paused" Engine-->>Reviewer: Webhook: pbotapprovalrequested Note over Reviewer: Reviews context,
makes decision Reviewer->>Engine: POST /approvals/:runId/:stepIndex Engine->>Engine: Run status → "running" Engine->>Workflow: Resume with approval context ` 1. Workflow pauses ================== The engine changes the run status to paused and emits a pbotapprovalrequested webhook. 2. Reviewer decides =================== The reviewer sees the comment and requestpayload. They submit their decision: `bash curl -X POST http://localhost:3009/approvals/{runId}/0 \ -H "X-Org-Id: acme-corp" \ -H "Content-Type: application/json" \ -d '{ "approved": true, "comments": "Reviewed and approved — receipt verified", "data": { "reviewer": "jane@company.com", "overrideamount": null, "notes": "All documentation present" } }' ` api POST /approvals/:runId/:stepIndex Submit an approval decision for a paused workflow. 3. Workflow resumes =================== After submission, the run resumes from the next step. The approval response is available in context. Approval Response Format ======================== | Field | Type | Required | Description | |-------|------|----------|-------------| | approved | boolean | Yes | Whether the reviewer approved | | comments | string | No | Reviewer's comments | | data | object | No | Additional structured data from the reviewer | Context After Approval ====================== After a PbotApproval step completes, the following keys are added to context: | Context Key | Value | |-------------|-------| | @approved | Boolean — true if approved, false if rejected | | @approvaldata | The data object from the reviewer's response | Use these in subsequent steps to branch on the approval result: `json { "type": "processexpense", "filter": { "condition": { "equal": ["@approved", true] } }, "properties": { "expenseid": "@input.id", "approvedby": "@approvaldata.reviewer" }, "onFalse": { "type": "notifyemployee", "properties": { "message": "Your expense was rejected. Reason: {{approvaldata.notes}}" } } } ` Conditional Approval ==================== Combine with filter to only require approval under certain conditions: `json { "type": "PbotApproval", "filter": { "condition": { "or": [ { "greaterThan": ["@input.amount", 10000] }, { "equal": ["@input.vendorstatus", "new"] } ] } }, "properties": { "comment": "High-value or new-vendor expense — requires review", "requestpayload": { "amount": "@input.amount", "vendor": "@input.vendorname" } } } ` When the filter evaluates to false, the step is skipped and the workflow continues. The @approved context key is not set in this case. Listing Pending Approvals ========================= api GET /approvals/:runId List all pending approval requests for a workflow run. Webhook Notification ==================== When a PbotApproval step pauses a workflow, a pbotapprovalrequested webhook event is emitted. Use this to notify reviewers via your application, Slack, email, or any external system. The webhook payload includes the runId, stepIndex, comment, and requestpayload` so your notification system can provide full context. → Next: Form (PbotForm) (/primitives/form) --- ## PATH: Templates > Healthcare Denial Management (Source: templates/03-healthcare-denial-management.md) Healthcare Denial Management ============================ Remittance-to-claim matching, denial pattern analysis, and automated appeal drafting. Revenue cycle teams spend 65+ hours per week on denials — most of which follow predictable patterns that an agent can identify, analyze, and draft responses for. 
What Gets Automated =================== Matching ERA/835 remittance records to submitted claims. Parsing denial reason codes (CARC/RARC). AI agent analyzes denial patterns, cross-references payer-specific rules, identifies root causes, and drafts appeal letters with supporting documentation references. What Humans Still Own ===================== Final review of appeal language before submission. Payer negotiation on contested amounts. Process changes based on systemic denial patterns. Pipeline ======== ``mermaid flowchart TD A[Input: Remittances + Submitted Claims] --> B[Matcher: Remittance-to-Claim] B -->|Paid in full ~45%| C[Close Claim] B -->|Denied or underpaid ~55%| D[Parse Denial Codes] B -->|No remittance found| E[Flag: No Response] D --> F[ReAct Agent: Denial Analysis] F --> F1[Decode CARC/RARC codes] F1 --> F2[Check payer-specific rules] F2 --> F3[Search prior appeal outcomes] F3 -->|Correctable| G[Draft Corrected Claim] F3 -->|Appealable| H[Draft Appeal Letter] F3 -->|Non-recoverable| I[Recommend Write-Off] G --> J[Human Review: RCM Analyst] H --> J I --> J J -->|Approve| K[Submit to Clearinghouse] J -->|Revise| L[Edit + Resubmit] C --> M[Custom Table: Denial Tracking] K --> M L --> M E --> M style B fill:#e8a84c,color:#09090b,stroke:none style F fill:#4ade80,color:#09090b,stroke:none style J fill:#60a5fa,color:#09090b,stroke:none ` Workflow Definition =================== `json { "name": "denialmanagement", "definition": { "actions": [ { "type": "matcher", "properties": { "left": "@input.submittedclaims", "right": "@input.remittances", "matchOn": ["claimid", "payerid"], "tolerance": 0.005, "dateWindowDays": 90, "outputMatched": "remittancematched", "outputUnmatchedLeft": "noresponseclaims", "outputUnmatchedRight": "unmatchedremittances" } }, { "type": "loop", "properties": { "mode": "foreach", "itemspath": "@remittancematched", "itemvariablename": "pair", "actionstoexecute": [ { "type": "loop", "filter": { "condition": { "notEqual": ["@pair.b.paymentstatus", "paidinfull"] } }, "properties": { "mode": "react", "objective": "Analyze denial for claim {{pair.a.claimid}}. Payer: {{pair.b.payername}}. Denial codes: {{pair.b.denialcodes}}. Billed: ${{pair.a.billedamount}}. Paid: ${{pair.b.paidamount}}. Steps: 1) Parse CARC/RARC codes and identify denial category, 2) Look up payer-specific rules for this denial type, 3) Search for prior appeals on same denial pattern and their outcomes, 4) Classify as correctable (resubmit with fix), appealable (submit appeal), or non-recoverable (write off), 5) If correctable or appealable, draft the response citing contract terms and clinical rationale.", "tools": [ { "type": "action", "name": "lookupdenialcodes" }, { "type": "action", "name": "getpayerrules" }, { "type": "action", "name": "searchdenialhistory" }, { "type": "action", "name": "getclinicaldocumentation" }, { "type": "action", "name": "draftappealletter" } ], "maxiterations": 10, "onstuck": { "iterations": 4, "action": "escalate" }, "resultkey": "denialanalysis" } } ], "maxconcurrency": 5, "failurestrategy": "continueonerror", "collectresults": true, "resultkey": "allanalyses" } }, { "type": "PbotApproval", "properties": { "comment": "Denial analyses complete. 
Review appeal drafts, corrected claims, and write-off recommendations before submission.", "requestpayload": { "totalmatched": "@remittancematched.length", "noresponse": "@noresponseclaims.length", "analyses": "@allanalyses" } } }, { "type": "custom-table", "properties": { "table": "denialtracking", "operation": "write", "keys": ["batchid", "rundate"], "values": ["@runid", "@now"], "fields": { "claimsanalyzed": "@remittancematched.length", "noresponseflagged": "@noresponseclaims.length", "status": "completed" } } } ] } } ` Required Registered Actions =========================== | Action | Kind | Purpose | |--------|------|---------| | lookupdenialcodes | db | Translate CARC/RARC codes to reasons and appeal guidance | | getpayerrules | http | Retrieve payer-specific billing and appeal rules | | searchdenialhistory | db | Find prior denials with same codes for pattern detection | | getclinicaldocumentation | http | Pull relevant clinical notes for appeal support | | draftappealletter | llm | Generate appeal letter from denial analysis and documentation | The draftappealletter action is an LLM action — it generates text using an AI model. The agent uses it as a tool to produce appeal drafts that the human reviewer edits before submission. Customization Notes =================== Date window. 90 days covers typical payer response timelines. Some payers may take longer — adjust based on your payer mix. Tolerance. The tight 0.5% tolerance (0.005) is appropriate for claims where payment amounts should closely match billed amounts. Widen for payers with known contractual adjustment patterns. Prior appeal search. The searchdenialhistory` action is critical for pattern detection. The agent uses historical appeal outcomes to predict which denials are worth appealing vs. writing off. Concurrency. 5 concurrent analyses balances throughput with the need for sequential tool calls within each denial investigation. --- ## PATH: Actions > Matcher (Source: actions/04-matcher.md) Matcher Actions =============== Matcher actions are pre-configured matching rules. Instead of specifying matcher properties in every workflow, register a configuration once and reference it by name. Registration ============ ``bash curl -X POST http://localhost:3009/actions \ -H "X-Org-Id: acme-corp" \ -H "Content-Type: application/json" \ -d '{ "actionname": "invoicepaymentmatcher", "kind": "matcher", "description": "Match invoices to payments by invoice number with 2% tolerance and 5-day date window", "left": "@input.invoices", "right": "@input.payments", "matchOn": ["invoicenumber", "vendorid"], "tolerance": 0.02, "dateWindowDays": 5, "fuzzyThreshold": 85, "descriptionKey": "description" }' ` Properties ========== | Property | Type | Required | Description | |----------|------|----------|-------------| | actionname | string | Yes | Unique name for this action | | kind | "matcher" | Yes | | | left | @path | Yes | Left dataset path | | right | @path | Yes | Right dataset path | | matchOn | string[] | Yes | Fields for exact matching | | tolerance | number | No | Numeric tolerance (decimal) | | dateWindowDays | number | No | Date window (±days) | | fuzzyThreshold | number | No | Text similarity 0–100 | | descriptionKey | string | No | Field for fuzzy matching | All matching properties work identically to the Matcher primitive (/primitives/matcher). The difference is that a matcher action saves the configuration for reuse. 
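Because properties passed at execution time override the registered defaults (see the usage notes below), a workflow can reuse this configuration while adjusting a single parameter for one run. A minimal sketch with an illustrative tightened tolerance: `json { "type": "invoicepaymentmatcher", "properties": { "tolerance": 0.005 } } ` Anything not overridden (matchOn, dateWindowDays, fuzzyThreshold) still comes from the registered configuration.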
Registered vs Primitive ======================= Use the matcher primitive when matching configuration varies per workflow: `json { "type": "matcher", "properties": { "matchOn": ["invoiceid"], "tolerance": 0.01 } } ` Use a registered matcher action when you have a standard matching configuration used across multiple workflows: `json { "type": "invoicepaymentmatcher", "properties": { } } ` Usage in a Workflow =================== `json { "type": "invoicepaymentmatcher", "properties": { "invoices": "@input.newinvoices", "payments": "@input.recentpayments" } } ` Properties passed at execution time can override the registered defaults. Usage as an Agent Tool ====================== `json { "mode": "react", "objective": "Reconcile this batch of transactions", "tools": [{ "type": "action", "name": "invoicepaymentmatcher" }] } `` The agent can invoke the matcher during its reasoning when it needs to compare datasets. → Next: Custom Table Actions (/actions/custom-table) --- ## PATH: Agents > Stuck Detection (Source: agents/04-stuck-detection.md) Stuck Detection =============== Agents can get stuck — repeating the same action, cycling between two tools, or failing to make progress toward the objective. Stuck detection identifies these patterns and triggers a recovery strategy. Configuration ============= ``json { "onstuck": { "iterations": 3, "action": "escalate", "hint": "Try a different approach or use complete with your best answer." } } ` | Property | Type | Required | Description | |----------|------|----------|-------------| | iterations | number | No | Number of repeated iterations before triggering (default: 3) | | action | string | No | Recovery strategy: "fail", "escalate", or "retrywithhint" (default: "fail") | | hint | string | No | Guidance text injected into the prompt (only used with "retrywithhint") | How Detection Works =================== The engine monitors the agent's action history across iterations. It detects stuck patterns when: - The agent calls the same action with the same parameters for N consecutive iterations - The agent alternates between two actions without producing new information - The agent produces repeated identical thoughts suggesting it's not incorporating new observations When any pattern is detected for the configured number of iterations, the recovery strategy fires. Recovery Strategies =================== "fail" — Terminate ==================== The agent loop ends immediately with a failure status. Use when you'd rather fail fast than risk incorrect results. `json { "onstuck": { "iterations": 3, "action": "fail" } } ` The run status becomes failed with a clear error message indicating the agent was stuck. "escalate" — Pause for Human ============================== The agent pauses and the run is routed to a human reviewer, just like _pauseforhuman. The human sees the agent's full reasoning trace up to the point it got stuck, plus context about the stuck pattern. `json { "onstuck": { "iterations": 3, "action": "escalate" } } ` The human can provide guidance via POST /agents/:id/resume, and the agent continues with the human's input as additional context. This is the recommended strategy for production agents in critical workflows. "retrywithhint" — Redirect ============================== The engine injects a hint into the agent's next prompt, nudging it toward a different approach. The agent gets one more chance to make progress. `json { "onstuck": { "iterations": 3, "action": "retrywithhint", "hint": "You seem to be repeating yourself.
Try using complete to provide your best answer with the information you have, or try a different tool." } } ` If the agent gets stuck again after the hint, it fails. The hint is a one-shot redirect, not an infinite retry. Setting Thresholds ================== iterations: 2 — Aggressive. Triggers quickly but may false-positive on agents that legitimately need to retry a tool (e.g., polling for an async result). iterations: 3 — Balanced. The default recommendation. Catches genuine loops while tolerating one legitimate retry. iterations: 5 — Conservative. Use for agents with tools that may legitimately return different results on repeated calls (e.g., searching with refined queries). Example: Stuck Detection in Practice ==================================== An agent investigating an unmatched invoice: ` Iteration 1: Thought: "Let me search for related payments." Action: searchpayments { "invoiceid": "INV-099" } Observation: { "results": [] } Iteration 2: Thought: "No results. Let me try searching again with a broader query." Action: searchpayments { "invoiceid": "INV-099" } ← same action, same params Observation: { "results": [] } Iteration 3: Thought: "Still nothing. Let me search once more." Action: searchpayments { "invoiceid": "INV-099" } ← stuck detected! ` With "action": "escalate", the agent pauses here and a human reviews the trace. The human might respond: "This invoice is from a new vendor not yet in the payment system. Mark as pending vendor setup." With "action": "retrywithhint" and hint "No payments exist for this invoice. Consider whether it might be from a new vendor or a different system.", the agent gets one more iteration with this additional context. Best Practices ============== - Always configure onstuck for production agents. The default "fail" is safe but gives you no recovery path. "escalate" is almost always better. - Pair stuck detection with maxiterations as a hard ceiling. Stuck detection catches loops; maxiterations catches everything else. - Write hints that suggest alternatives, not just "try again." Good: "If the search returns no results, use complete to report that no match was found." Bad: "Try harder." - Monitor stuck rates via the hyphenagentstuck_total` metric. High stuck rates indicate the objective is unclear, tools are insufficient, or the model needs more guidance. → Next: Reasoning Traces (/agents/reasoning-traces) --- ## PATH: Getting Started > Your First Agent (Source: getting-started/04-your-first-agent.md) Your First Agent ================ Build an AI agent that reasons through a task: register tools, create a workflow with a ReAct loop, execute it, watch the reasoning trace unfold, handle a human escalation, and read the final answer. Prerequisites ============= A running Hyphen instance at http://localhost:3009 with an LLM API key configured. All examples use X-Org-Id: tutorial-org. Store your LLM key if you haven't already: ``bash curl -X POST http://localhost:3009/org-config \ -H "X-Org-Id: tutorial-org" \ -H "Content-Type: application/json" \ -d '{ "key": "api:llmapikey", "value": "sk-your-openai-key" }' ` Step 1: Register Tools ====================== Agents need tools to work with. 
Register an HTTP action and an LLM action: `bash An HTTP action that looks up order details ========================================== curl -X POST http://localhost:3009/actions \ -H "X-Org-Id: tutorial-org" \ -H "Content-Type: application/json" \ -d '{ "actionname": "lookuporder", "kind": "http", "description": "Look up order details by order ID", "url": "https://httpbin.org/anything/orders/{{orderid}}", "httpmethod": "GET", "passthrough": true }' ` `bash An LLM action that drafts a customer response ============================================= curl -X POST http://localhost:3009/actions \ -H "X-Org-Id: tutorial-org" \ -H "Content-Type: application/json" \ -d '{ "actionname": "draftresponse", "kind": "llm", "description": "Draft a professional customer service response", "template": "Draft a professional, empathetic response to this customer issue:\n\nIssue: {{issue}}\nOrder details: {{orderdetails}}\n\nKeep it under 3 sentences.", "model": "gpt-4o-mini", "maxtokens": 256 }' ` Step 2: Create the Agent Workflow ================================= Create a workflow with a ReAct loop step. The agent gets an objective, a set of tools, and autonomy to reason through the problem: `bash curl -X POST http://localhost:3009/workflows \ -H "X-Org-Id: tutorial-org" \ -H "Content-Type: application/json" \ -d '{ "name": "customerissueagent", "definition": { "actions": [ { "type": "loop", "properties": { "mode": "react", "objective": "Handle this customer issue: {{input.issue}}. Order ID: {{input.orderid}}. Steps: 1) Look up the order details, 2) Draft a response to the customer, 3) If the issue involves a refund over $100, pause for human approval before completing. 4) Complete with your drafted response and a confidence score.", "tools": [ { "type": "action", "name": "lookuporder" }, { "type": "action", "name": "draftresponse" } ], "model": "gpt-4o", "maxiterations": 8, "onstuck": { "iterations": 3, "action": "escalate" }, "resultkey": "agentresult" } } ] } }' ` What this does: The agent receives an objective in natural language and a list of tools. It reasons step by step — thinking about what to do, choosing a tool, observing the result, and repeating until it calls complete. If it encounters a refund over $100, it pauses for human approval. Step 3: Execute with an Objective ================================= `bash curl -X POST http://localhost:3009/workflows/WORKFLOWID/execute \ -H "X-Org-Id: tutorial-org" \ -H "Content-Type: application/json" \ -d '{ "issue": "I was charged twice for my order and I want a refund of $150", "orderid": "ORD-12345" }' ` Step 4: Watch the Reasoning Trace ================================= Check the run status: `bash curl http://localhost:3009/runs/RUNID/status \ -H "X-Org-Id: tutorial-org" ` If the agent is still running, you'll see status: "running". Once it pauses or completes, the reasoning trace shows exactly how it worked through the problem: `json { "status": "paused", "reasoningtrace": [ { "iteration": 1, "thought": "The customer reports being charged twice and wants a $150 refund. Let me first look up the order details.", "action": "lookuporder", "actioninput": { "orderid": "ORD-12345" }, "observation": { "url": "https://httpbin.org/anything/orders/ORD-12345", "method": "GET" } }, { "iteration": 2, "thought": "I have the order details. Now I'll draft a response to the customer.", "action": "draftresponse", "actioninput": { "issue": "charged twice, requesting $150 refund", "orderdetails": "..." 
}, "observation": "Dear valued customer, I sincerely apologize for the duplicate charge..." }, { "iteration": 3, "thought": "The refund is $150, which is over $100. Per my instructions, I need to pause for human approval before completing.", "action": "pauseforhuman", "actioninput": { "question": "Customer requests $150 refund for duplicate charge on order ORD-12345. Approve the refund?", "options": ["Approve full refund ($150)", "Approve partial refund", "Deny — investigate further"], "contextsummary": "Order looked up. Response drafted. Awaiting refund approval.", "urgency": "medium" } } ] } ` The agent looked up the order, drafted a response, recognized the refund exceeds $100, and paused for human approval — exactly as instructed. Step 5: Handle the Pause ======================== The agent is waiting for human input. Provide it: `bash curl -X POST http://localhost:3009/agents/AGENTRUNID/resume \ -H "X-Org-Id: tutorial-org" \ -H "Content-Type: application/json" \ -d '{ "humaninput": "Approved. Process the full $150 refund. Also add a 10% discount code for their next order as a goodwill gesture.", "approved": true }' ` The agent resumes, incorporating the human's guidance into its next iteration. Step 6: Read the Final Answer ============================= After resuming, the agent completes: `bash curl http://localhost:3009/runs/RUNID/status \ -H "X-Org-Id: tutorial-org" ` `json { "status": "completed", "context": { "agentresult": { "answer": "Refund of $150 approved and processed. Customer response drafted with apology and 10% discount code for next order.", "confidence": 0.95, "summary": "Looked up order, drafted response, obtained human approval for refund." } } } ` The confidence score tells you how certain the agent was about its answer. The full reasoning trace is preserved as an audit trail. Step 7: View the Complete Trace =============================== For the full audit trail including all iterations (before and after the pause): `bash curl http://localhost:3009/agents/AGENTRUNID/trace \ -H "X-Org-Id: tutorial-org" ` This returns every thought, action, and observation — the complete record of the agent's reasoning and decisions. What You Built ============== An AI agent that: 1. Reasons through a customer issue step by step 2. Uses tools — looking up order data and drafting responses 3. Pauses for human approval when a refund exceeds a threshold 4. Resumes with human guidance incorporated into its reasoning 5. Completes with a confidence-scored answer and full audit trail Key concepts demonstrated: | Concept | How It Appeared | |---------|----------------| | ReAct loop | Agent thinks → acts → observes → repeats | | Tool usage | lookuporder (HTTP), draftresponse (LLM) | | Human escalation | _pauseforhuman_ when refund > $100 | | Reasoning trace | Full chain of thought captured per iteration | | Confidence scoring | Agent reports 0.95 confidence in final answer | | Stuck detection | Configured at 3 iterations with escalate` fallback | Next: Explore the templates (/templates) for production-ready patterns, or read the agent deployment patterns (/agents/deployment-patterns) guide to learn about agents as workflow triggers and orchestrators. --- ## PATH: Guides > Graduated Exception Handling (Source: guides/04-graduated-exception-handling.md) Graduated Exception Handling ============================ The core Hyphen philosophy: don't automate everything and don't review everything. 
Build a pipeline where deterministic rules handle the majority, an AI agent investigates the ambiguous middle, and humans make the final call on genuine edge cases. We call this the 80/15/5 pattern. Why Graduated Beats Binary ========================== Most automation projects fail because they frame the choice as binary: automate all of it (and accept error risk) or review all of it (and accept the labor cost). Neither works. Full automation fails because real data is messy. Tolerances, fuzzy matches, and exception patterns mean some percentage of cases will always need judgment. Full manual review fails because 80% of the work is straightforward. Reviewing clear cases is waste — it's expensive, slow, and demoralizing for the people doing it. The graduated approach matches the effort to the difficulty: | Tier | Volume | Handler | Cost per Case | |------|--------|---------|--------------| | Clear cases | ~80% | Matcher (deterministic) | Near zero | | Ambiguous cases | ~15% | ReAct agent (bounded AI) | Low | | Edge cases | ~5% | Human reviewer | High | The exact split varies by domain. Insurance claims might be 60/30/10. Bank reconciliation might be 90/8/2. The structure stays the same. Configuring the 80%: The Matcher ================================ The matcher handles clear cases — records that match on key fields within defined tolerances. Configuration determines what counts as "clear." ``json { "type": "matcher", "properties": { "left": "@input.invoices", "right": "@input.payments", "matchOn": ["ponumber", "vendorid"], "tolerance": 0.02, "dateWindowDays": 5, "fuzzyThreshold": 85, "descriptionKey": "vendorname", "outputMatched": "autoreconciled", "outputUnmatchedLeft": "exceptions", "outputUnmatchedRight": "unmatchedpayments" } } ` Tuning the Matcher ================== Tighter tolerances → more exceptions, higher accuracy on auto-reconciled records. Looser tolerances → fewer exceptions, but some auto-reconciled records may be incorrect. Start tight and loosen based on data: | Parameter | Conservative | Balanced | Aggressive | |-----------|-------------|----------|------------| | tolerance | 0.005 (0.5%) | 0.02 (2%) | 0.05 (5%) | | dateWindowDays | 1 | 5 | 14 | | fuzzyThreshold | 90 | 85 | 75 | Run the matcher against a historical dataset where you already know the correct matches. Compare the auto-reconciled output to your ground truth. This tells you your false positive rate at each tolerance level. Configuring the 15%: The Agent ============================== Exceptions from the matcher go to an AI agent for investigation. The agent uses registered actions as tools to look up additional data, cross-reference records, and form a recommendation. `json { "type": "loop", "filter": { "condition": { "greaterThan": [{ "length": "@exceptions" }, 0] } }, "properties": { "mode": "foreach", "itemspath": "@exceptions", "itemvariablename": "exception", "actionstoexecute": [ { "type": "loop", "properties": { "mode": "react", "objective": "Investigate exception: {{exception.invoiceid}} for ${{exception.amount}}. Determine root cause and recommend action.", "tools": [ { "type": "action", "name": "lookuppurchaseorder" }, { "type": "action", "name": "checkpaymenthistory" }, { "type": "action", "name": "searchduplicateinvoices" } ], "maxiterations": 8, "onstuck": { "iterations": 3, "action": "retrywithhint", "hint": "Complete with your best assessment and low confidence." 
}, "resultkey": "investigation" } } ], "maxconcurrency": 5, "failurestrategy": "continueonerror", "collectresults": true, "resultkey": "allinvestigations" } } ` Tuning the Agent ================ maxiterations controls how deep the agent can investigate. Too low and it gives up before finding the answer. Too high and you burn tokens on dead ends. | Complexity | Recommended Iterations | |-----------|----------------------| | Simple lookups (1–2 data sources) | 5 | | Standard investigation (3–4 sources) | 8 | | Complex analysis (multiple sources + reasoning) | 12–15 | onstuck determines what happens when the agent loops without progress. For the graduated pipeline, retrywithhint is usually best — give the agent one more chance with guidance before escalating: `json "onstuck": { "iterations": 3, "action": "retrywithhint", "hint": "If you cannot determine the root cause, complete with recommendation to escalate for human review." } ` maxconcurrency limits parallel investigations. Higher concurrency means faster batch processing but more load on your data source APIs. Configuring the 5%: Human Review ================================ Cases the agent can't resolve with high confidence get routed to a human. The approval step presents the agent's findings — not raw data — so the reviewer has context. `json { "type": "PbotApproval", "filter": { "condition": { "greaterThan": [{ "length": "@exceptions" }, 0] } }, "properties": { "comment": "{{exceptions.length}} exceptions investigated. Review AI findings and approve recommended actions.", "requestpayload": { "autoreconciledcount": "@autoreconciled.length", "exceptioncount": "@exceptions.length", "investigations": "@allinvestigations", "unmatchedpayments": "@unmatchedpayments" } } } ` What the Reviewer Sees ====================== The requestpayload gives the reviewer everything they need: - How many records matched automatically (provides confidence the pipeline is working) - The agent's investigation for each exception (root cause analysis, recommendation, confidence score) - Raw unmatched records for reference The reviewer's job is to approve, reject, or modify the agent's recommendations — not to redo the investigation from scratch. The Complete Pipeline ===================== Putting it together in one workflow: `json { "name": "graduatedreconciliation", "definition": { "actions": [ { "type": "matcher", "properties": { "left": "@input.invoices", "right": "@input.payments", "matchOn": ["ponumber", "vendorid"], "tolerance": 0.02, "dateWindowDays": 5, "fuzzyThreshold": 85, "descriptionKey": "vendorname", "outputMatched": "autoreconciled", "outputUnmatchedLeft": "exceptions", "outputUnmatchedRight": "unmatchedpayments" } }, { "type": "loop", "filter": { "condition": { "greaterThan": [{ "length": "@exceptions" }, 0] } }, "properties": { "mode": "foreach", "itemspath": "@exceptions", "itemvariablename": "exception", "actionstoexecute": [ { "type": "loop", "properties": { "mode": "react", "objective": "Investigate and recommend action for exception {{exception.invoiceid}}.", "tools": [ { "type": "action", "name": "lookuppurchaseorder" }, { "type": "action", "name": "checkpaymenthistory" } ], "maxiterations": 8, "onstuck": { "iterations": 3, "action": "retrywithhint", "hint": "Complete with best assessment." 
}, "resultkey": "investigation" } } ], "maxconcurrency": 5, "collectresults": true, "resultkey": "allinvestigations" } }, { "type": "PbotApproval", "filter": { "condition": { "greaterThan": [{ "length": "@exceptions" }, 0] } }, "properties": { "comment": "{{exceptions.length}} exceptions reviewed by AI. Approve actions.", "requestpayload": { "autocount": "@autoreconciled.length", "investigations": "@allinvestigations" } } }, { "type": "custom-table", "properties": { "table": "reconciliationmetrics", "operation": "write", "keys": ["runid"], "values": ["@runid"], "fields": { "total": "@input.invoices.length", "autoreconciled": "@autoreconciled.length", "agentinvestigated": "@exceptions.length", "completedat": "@now" } } } ] } } ` Measuring the Split =================== Track the ratio over time using the custom-table logging step. Query the reconciliationmetrics table to measure: - Auto-reconciled rate — autoreconciled / total. If this drops below 70%, your matcher tolerances may need adjustment or your data quality has degraded. - Exception rate — agent_investigated / total`. If this spikes, investigate whether new vendors, format changes, or system migrations are creating more mismatches. - Agent confidence distribution — Track confidence scores from agent completions. Low average confidence suggests the tools need better data or the objective needs refinement. The goal is continuous improvement: as you tune tolerances and agent objectives, the auto-reconciled rate should climb and the human review rate should shrink. --- ## PATH: Integrations > Webhooks (Source: integrations/04-webhooks.md) Webhooks ======== Webhooks notify your systems when events occur in Hyphen. Use them to trigger external processes, update dashboards, alert on-call teams, or build custom UIs around Hyphen workflows. Events ====== | Event | Fires When | |-------|-----------| | pbotapprovalrequested | A PbotApproval step pauses the workflow, waiting for a reviewer | | pbotformrequested | A PbotForm step pauses the workflow, waiting for external input | | workflowcompleted | A workflow run finishes successfully | | workflowfailed | A workflow run fails | | agentpaused | An agent calls pauseforhuman or stuck detection escalates | | agentcompleted | An agent run finishes successfully | | agentfailed | An agent run fails | Payload Format ============== Every webhook sends a JSON payload with a consistent envelope: ``json { "event": "pbotapprovalrequested", "timestamp": "2026-02-01T10:30:00Z", "orgid": "acme-corp", "data": { "runid": "runabc123", "workflowid": "wfdef456", "workflowname": "invoicereconciliation", "stepindex": 2, "steptype": "PbotApproval", "comment": "Review 5 unmatched invoices totaling $12,340", "requestpayload": { "unmatchedcount": 5, "totalamount": 12340 } } } ` | Field | Description | |-------|-------------| | event | The event type | | timestamp | When the event occurred (ISO 8601) | | orgid | The organization this event belongs to | | data | Event-specific payload — varies by event type | Event Payloads ============== pbotapprovalrequested ========================= `json { "event": "pbotapprovalrequested", "data": { "runid": "runabc123", "workflowid": "wfdef456", "workflowname": "invoicereconciliation", "stepindex": 2, "comment": "Review 5 unmatched invoices", "requestpayload": { } } } ` Use this to build custom approval UIs or route approval requests to Slack, email, or a ticketing system. Respond via POST /approvals/:runId/:stepIndex. 
pbotformrequested ===================== `json { "event": "pbotformrequested", "data": { "runid": "runabc123", "workflowid": "wfdef456", "workflowname": "vendoronboarding", "stepindex": 1, "expectedkeys": ["shippingaddress", "deliverydate"], "ttlseconds": 86400 } } ` workflowcompleted ==================== `json { "event": "workflowcompleted", "data": { "runid": "runabc123", "workflowid": "wfdef456", "workflowname": "invoicereconciliation", "durationms": 4520, "stepsexecuted": 4 } } ` workflowfailed ================= `json { "event": "workflowfailed", "data": { "runid": "runabc123", "workflowid": "wfdef456", "workflowname": "invoicereconciliation", "failedstep": 2, "error": "Action 'fetchpayments' returned HTTP 500" } } ` agentpaused ============== `json { "event": "agentpaused", "data": { "agentrunid": "agentxyz789", "reason": "humanrequested", "question": "Two customers match. Which should I use?", "options": ["John Smith C-001", "John Smith C-047"], "urgency": "medium", "iterationscompleted": 4 } } ` The reason field is either "humanrequested" (agent called pauseforhuman) or "stuckescalation" (stuck detection triggered escalation). Resume via POST /agents/:id/resume. agentcompleted ================= `json { "event": "agentcompleted", "data": { "agentrunid": "agentxyz789", "iterationsused": 6, "durationms": 12340, "result": { "answer": "Ticket resolved. Refund processed.", "confidence": 0.95 } } } ` agentfailed ============== `json { "event": "agentfailed", "data": { "agentrunid": "agentxyz789", "reason": "maxiterationsexceeded", "iterationsused": 10, "lastthought": "I still can't find the matching record..." } } ` Retry Logic =========== If your webhook endpoint returns a non-2xx status code or times out (30 second limit), Hyphen retries with exponential backoff: | Attempt | Delay | |---------|-------| | 1st retry | 10 seconds | | 2nd retry | 30 seconds | | 3rd retry | 2 minutes | | 4th retry | 10 minutes | | 5th retry | 1 hour | After 5 failed retries, the webhook is marked as failed for that event. The workflow or agent continues regardless — webhooks are fire-and-forget notifications, not blocking operations. Webhooks don't block execution. A failed webhook does not pause or fail the workflow. If you need guaranteed delivery, use the API to poll for status changes instead. Idempotency =========== Each webhook payload includes the runid or agentrunid as a natural idempotency key. Your handler should check for duplicates — retries may deliver the same event multiple times. Common Patterns =============== Custom approval UI. Listen for pbotapprovalrequested, display context in your app, submit the decision via POST /approvals/:runId/:stepIndex. Slack alerts. Listen for workflowfailed and agentfailed, post to an alerts channel. This runs outside of Hyphen's built-in Slack integration — useful when you want failure notifications regardless of whether the workflow itself uses Slack. Dashboard updates. Listen for workflowcompleted and agentcompleted, update a real-time dashboard with run statistics and outcomes. Escalation routing. Listen for agentpaused, check the urgency` field, and route to the appropriate team — high urgency to the on-call engineer, low urgency to a review queue. --- ## PATH: Platform > Conditional Logic (Source: platform/04-conditional-logic.md) Conditional Logic ================= Hyphen supports conditional execution at two levels: gating an entire workflow, and branching individual steps. 
Top-Level Conditions ==================== A condition on the workflow definition gates the entire run. If it evaluates to false, the workflow ends immediately with status conditionnotmet and no steps execute. ``json { "name": "processhighvalue", "definition": { "condition": { "greaterThan": ["@input.ordertotal", 10000] }, "actions": [ ] } } ` Use top-level conditions to prevent unnecessary execution — for example, only running a reconciliation workflow when there are actually records to process. Step-Level Branching: filter + onFalse ========================================== Each step can have a filter that determines whether it executes. If the filter is false, the step is skipped — unless an onFalse alternative is defined. `json { "type": "sendnotification", "filter": { "condition": { "greaterThan": ["@input.amount", 5000] } }, "properties": { "channel": "urgent", "message": "High-value: ${{input.amount}}" }, "onFalse": { "type": "sendnotification", "properties": { "channel": "standard", "message": "Processed: ${{input.amount}}" } } } ` `mermaid flowchart TD A{"amount > $5,000?"} -->|Yes| B["Send to #urgent"] A -->|No| C["Send to #standard"] B --> D["Next step"] C --> D ` The onFalse step has the same structure as any other step — it can have its own type, properties, and even nested filter/onFalse for multi-branch logic. Comparison Operators ==================== | Operator | Syntax | Description | |----------|--------|-------------| | equal | { "equal": [left, right] } | Strict equality | | notEqual | { "notEqual": [left, right] } | Not equal | | greaterThan | { "greaterThan": [left, right] } | Greater than (numeric) | | lessThan | { "lessThan": [left, right] } | Less than (numeric) | | greaterOrEqual | { "greaterOrEqual": [left, right] } | Greater or equal | | lessOrEqual | { "lessOrEqual": [left, right] } | Less or equal | | hasKey | { "hasKey": "path" } | Key exists in context | | in | { "in": [value, array] } | Value is in list | | matches | { "matches": [value, regex] } | Regex match | Operands can be literal values or @path references: `json { "equal": ["@input.status", "active"] } { "greaterThan": ["@input.amount", 1000] } { "in": ["@input.country", ["US", "CA", "MX"]] } { "matches": ["@input.email", ".@company\\.com$"] } { "hasKey": "@input.metadata.priority" } ` The length helper extracts array or string length: `json { "greaterThan": [{ "length": "@unmatchedinvoices" }, 0] } ` Combinators: and, or, not =============================== Combine conditions for complex logic: and — all conditions must be true: `json { "and": [ { "greaterThan": ["@input.amount", 1000] }, { "equal": ["@input.verified", true] } ] } ` or — at least one must be true: `json { "or": [ { "equal": ["@input.customertype", "premium"] }, { "greaterThan": ["@input.yearsactive", 5] } ] } ` not — inverts a condition: `json { "not": { "equal": ["@input.status", "cancelled"] } } ` Nested Condition Trees ====================== Combinators nest to any depth. 
Here's a real-world example that gates an approval step: Require approval when: amount over $10,000 AND (vendor is new OR country is high-risk) AND the invoice is not already pre-approved.* `json { "type": "PbotApproval", "filter": { "condition": { "and": [ { "greaterThan": ["@input.amount", 10000] }, { "or": [ { "equal": ["@input.vendorstatus", "new"] }, { "in": ["@input.country", ["RU", "CN", "IR", "KP"]] } ] }, { "not": { "equal": ["@input.preapproved", true] } } ] } }, "properties": { "comment": "High-value invoice from {{input.vendorstatus}} vendor in {{input.country}}", "requestpayload": { "amount": "@input.amount", "vendor": "@input.vendorname", "country": "@input.country" } } } ` Evaluation order. Conditions evaluate depth-first. Inner conditions resolve before outer combinators. All @path references resolve against the current execution context at the time the condition is evaluated. Multi-Branch Pattern ==================== Chain filter/onFalse for multi-way branching: `json { "type": "routepremium", "filter": { "condition": { "equal": ["@input.tier", "premium"] } }, "properties": { "queue": "premium" }, "onFalse": { "type": "routestandard", "filter": { "condition": { "equal": ["@input.tier", "standard"] } }, "properties": { "queue": "standard" }, "onFalse": { "type": "routebasic", "properties": { "queue": "basic" } } } } `` This evaluates as: premium → standard → basic (fallback). → Next: Scheduling (/platform/scheduling) --- ## PATH: Primitives > Form (Source: primitives/04-form.md) Form (PbotForm) =============== PbotForm pauses workflow execution to collect structured input from an external party. Unlike PbotApproval (which collects a decision from an internal reviewer), PbotForm collects data from anyone — customers, vendors, partners. Basic Usage =========== ``json { "type": "PbotForm", "properties": { "expectedkeys": ["shippingaddress", "deliverydate", "specialinstructions"], "ttlseconds": 86400, "reminderintervals": [3600, 7200] } } ` When execution reaches this step, the run pauses and waits for form submission. Properties Reference ==================== | Property | Type | Required | Description | |----------|------|----------|-------------| | expectedkeys | string[] | Yes | Fields the form expects. Submission must include these keys | | ttlseconds | number | No | Time-to-live — how long the form stays open before expiring (default: no expiration) | | reminderintervals | number[] | No | Seconds after creation to send reminders. [3600, 7200] sends reminders at 1 hour and 2 hours | Submission Flow =============== `mermaid sequenceDiagram participant Workflow participant Engine participant External Workflow->>Engine: PbotForm step Engine->>Engine: Run status → "paused" Note over External: Receives form link
(via your app/email) External->>Engine: POST /forms/:runId/:stepIndex/submit Engine->>Engine: Validate expectedkeys Engine->>Engine: Run status → "running" Engine->>Workflow: Resume with form data in context ` Submitting Form Data ==================== `bash curl -X POST http://localhost:3009/forms/{runId}/0/submit \ -H "X-Org-Id: acme-corp" \ -H "Content-Type: application/json" \ -d '{ "shippingaddress": "123 Main St, City, ST 12345", "deliverydate": "2025-02-15", "specialinstructions": "Leave at door" }' ` api POST /forms/:runId/:stepIndex/submit Submit form data for a paused PbotForm step. api GET /forms/pending List all pending form requests across the organization. Context After Submission ======================== The submitted data is added directly to the execution context. Each key from the submission becomes accessible via @path: `json { "type": "scheduledelivery", "properties": { "address": "@shippingaddress", "date": "@deliverydate", "notes": "@specialinstructions" } } ` TTL and Expiration ================== When ttlseconds is set, the form expires after the specified duration. If the form is not submitted before expiration, the workflow run fails with a timeout error. Use TTL for time-sensitive workflows where stale input would be harmful — for example, a vendor onboarding form that's only valid for 24 hours. Reminder Intervals ================== The reminderintervals array specifies when reminders should be sent (in seconds after the form is created). This triggers reminder webhook events that your application can use to send follow-up notifications. `json { "reminderintervals": [3600, 7200, 43200] } ` This sends reminders at 1 hour, 2 hours, and 12 hours after the form is created. PbotForm vs External Forms ========================== Hyphen has two form mechanisms: | Feature | PbotForm | External Forms | |---------|----------|----------------| | Purpose | Collect input within a workflow | Standalone data collection | | Tied to a run | Yes — pauses and resumes a workflow | No — independent | | Created via | Workflow step | POST /external-forms` | | Submissions | Single submission resumes the run | Multiple submissions collected | | Best for | Vendor provides shipping details mid-workflow | Customer feedback survey | → Next: Custom Table (/primitives/custom-table) --- ## PATH: Templates > Kyc Customer Onboarding (Source: templates/04-kyc-customer-onboarding.md) KYC Customer Onboarding ======================= Agent-as-orchestrator pattern. Unlike the matching-first templates, this one starts with an AI agent that coordinates multiple verification workflows, synthesizes results across them, and escalates when signals conflict. What Gets Automated =================== Identity verification, sanctions screening, adverse media scanning, and risk scoring — run as separate sub-workflows coordinated by an orchestrator agent. The agent synthesizes pass/fail/flag signals across all checks, identifies conflicts, and makes an onboarding recommendation. What Humans Still Own ===================== Final approval on flagged or conflicted cases. Enhanced due diligence investigations. Policy exception decisions for high-profile applicants. 
Pipeline ======== ``mermaid flowchart TD A[Input: Applicant Data] --> B[Orchestrator Agent] B --> C[runworkflow: Identity Verification] B --> D[runworkflow: Sanctions Screening] B --> E[runworkflow: Adverse Media Scan] C -->|pass / fail / flag| F[Agent Synthesizes Results] D -->|pass / fail / flag| F E -->|pass / fail / flag| F F -->|All pass| G[Auto-Approve] F -->|Conflict detected| H{Agent Resolves} F -->|Hard fail| I[Auto-Deny] H -->|Resolved| J[Approve with Note] H -->|Unresolvable| K[pauseforhuman: Compliance Review] K -->|Approved| L[Provision Account] K -->|Denied| M[Reject with Reason] K -->|EDD required| N[runworkflow: Enhanced Due Diligence] N --> K G --> O[Custom Table: Onboarding Log] J --> O L --> O M --> O I --> O style B fill:#4ade80,color:#09090b,stroke:none style K fill:#60a5fa,color:#09090b,stroke:none style C fill:#e8a84c,color:#09090b,stroke:none style D fill:#e8a84c,color:#09090b,stroke:none style E fill:#e8a84c,color:#09090b,stroke:none ` This template demonstrates Pattern C: Agent as Orchestrator (/agents/deployment-patterns/agent-as-orchestrator). The agent coordinates multiple sub-workflows using runworkflow and synthesizes their results to make a decision. Workflow Definition =================== `json { "name": "kyccustomeronboarding", "definition": { "actions": [ { "type": "loop", "properties": { "mode": "react", "objective": "Onboard applicant {{input.applicantname}} ({{input.applicantid}}). Execute verification sequence: 1) Run identity verification workflow, 2) Run sanctions screening workflow, 3) Run adverse media scan workflow, 4) Synthesize results — if all pass, approve with risk score. If any conflict (e.g., identity passes but sanctions flags a partial name match), investigate the discrepancy. If confidence below 0.8, pause for human compliance review with full context and reasoning. 5) Provide final recommendation with risk level (low/medium/high).", "tools": [ { "type": "workflow", "id": "wfidentityverification" }, { "type": "workflow", "id": "wfsanctionsscreening" }, { "type": "workflow", "id": "wfadversemediascan" } ], "maxiterations": 15, "timeoutms": 300000, "onstuck": { "iterations": 4, "action": "escalate" }, "resultkey": "onboardingdecision" } }, { "type": "loop", "filter": { "condition": { "equal": ["@onboardingdecision.answer.decision", "approved"] } }, "properties": { "mode": "react", "objective": "Provision account for approved applicant {{input.applicantid}}. Create account, assign tier based on risk score, and trigger welcome sequence.", "tools": [ { "type": "action", "name": "createcustomeraccount" }, { "type": "action", "name": "assignaccounttier" }, { "type": "workflow", "id": "wfwelcomesequence" } ], "maxiterations": 5, "resultkey": "provisioningresult" } }, { "type": "gmailsend", "filter": { "condition": { "equal": ["@onboardingdecision.answer.decision", "approved"] } }, "properties": { "oauthaccount": "onboarding@company.com", "to": "@input.applicantemail", "subject": "Welcome — your account is ready", "body": "Your account has been verified and provisioned." }, "onFalse": { "type": "gmailsend", "properties": { "oauthaccount": "onboarding@company.com", "to": "@input.applicantemail", "subject": "Application update", "body": "We were unable to complete your application. A team member will follow up within 2 business days."
} } }, { "type": "custom-table", "properties": { "table": "onboardinglog", "operation": "write", "keys": ["applicantid"], "values": ["@input.applicantid"], "fields": { "decision": "@onboardingdecision.answer.decision", "riskscore": "@onboardingdecision.answer.risklevel", "confidence": "@onboardingdecision.confidence", "runid": "@runid", "completedat": "@now" } } } ] } } ` Required Workflows ================== These workflows are triggered by the orchestrator agent via runworkflow_: | Workflow | Purpose | |----------|---------| | identityverification | Document verification, liveness check, PII validation | | sanctionsscreening | OFAC, EU, UN sanctions list matching | | adversemediascan | News and media screening for negative coverage | | enhancedduediligence | Deep investigation for flagged applicants | Required Registered Actions =========================== | Action | Kind | Purpose | |--------|------|---------| | createcustomeraccount | http | Provision account in your system | | assignaccounttier | http | Set account tier based on risk assessment | Customization Notes =================== Confidence threshold. The 0.8 confidence threshold in the objective controls when the agent escalates to a human. Lower to 0.7 for a more cautious approach; raise to 0.9 if you trust the verification workflows and want fewer escalations. Timeout. 300 seconds (5 minutes) covers the round-trip time for multiple sub-workflow executions. Extend if your verification providers have slow response times. Iterations. 15 iterations is higher than other templates because the orchestrator needs to run multiple workflows, store intermediate results, synthesize, and potentially escalate. Don't reduce below 10. Sub-workflows. Each verification workflow (identityverification, sanctions_screening`, etc.) must be created separately in Hyphen before running this template. They can be simple HTTP-action workflows that call your verification providers. --- ## PATH: Actions > Custom Table (Source: actions/05-custom-table.md) Custom Table Actions ==================== Custom table actions register pre-configured table operations. Like the Custom Table primitive (/primitives/custom-table), they read and write to Hyphen-managed tables — but as registered actions, they can be reused by name across workflows and called by agents as tools. Registration ============ ``bash curl -X POST http://localhost:3009/actions \ -H "X-Org-Id: acme-corp" \ -H "Content-Type: application/json" \ -d '{ "actionname": "logreconciliation", "kind": "custom-table", "description": "Log a reconciliation result to the audit table" }' ` Custom table actions are lightweight registrations — the specific table, operation, and fields are typically provided at the workflow step level or by the agent at runtime. 
Properties ========== | Property | Type | Required | Description | |----------|------|----------|-------------| | actionname | string | Yes | Unique name for this action | | kind | "custom-table" | Yes | | | description | string | No | Human-readable description (shown to agents) | Usage in a Workflow =================== When used as a workflow step, provide the table operation details in properties: `json { "type": "logreconciliation", "properties": { "table": "reconaudit", "operation": "write", "keys": ["runid", "timestamp"], "values": ["@_runid", "@now"], "fields": { "matchedcount": "@matched.length", "exceptioncount": "@exceptions.length", "status": "completed" } } } ` The operation types and field references work identically to the Custom Table primitive (/primitives/custom-table): read, write, update, and upsert. Usage as an Agent Tool ====================== `json { "mode": "react", "objective": "Check if this invoice was previously processed, and log the current result", "tools": [{ "type": "action", "name": "logreconciliation" }] } ` When used as an agent tool, the agent provides the table, operation, and fields in its actioninput. The agent sees the action's description to understand what the tool does. Example agent response: `json { "thought": "I should log this reconciliation result before completing.", "action": "logreconciliation", "actioninput": { "table": "reconaudit", "operation": "write", "keys": ["invoiceid"], "values": ["INV-001"], "fields": { "status": "matched", "confidence": 0.95 } } } ` Registered vs Primitive ======================= Use the custom-table primitive for simple, one-off table operations within a specific workflow: `json { "type": "custom-table", "properties": { "table": "log", "operation": "write", ... } } `` Use a registered custom-table action when you want agents to be able to use table operations as tools, or when the same table operation pattern is reused across multiple workflows. → Back to Actions overview (/actions) --- ## PATH: Agents > Reasoning Traces (Source: agents/05-reasoning-traces.md) Reasoning Traces ================ Every agent execution produces a reasoning trace — a complete record of every thought, action, input, and observation across all iterations. Traces are stored persistently and are queryable via API. Trace Format ============ Each iteration in the trace contains: ``json { "iteration": 1, "timestamp": "2026-02-01T10:30:01Z", "thought": "I need to look up the ticket details to understand the customer's issue.", "action": "lookupticket", "actioninput": { "ticketid": "TK-12345" }, "observation": { "subject": "Billing Error — Double Charge", "status": "open", "priority": "high", "customerid": "cust-789", "createdat": "2026-01-31T14:22:00Z" }, "durationms": 342 } ` | Field | Description | |-------|-------------| | iteration | Sequence number (1-indexed) | | timestamp | When this iteration executed | | thought | The agent's reasoning — why it chose this action | | action | Which tool was called | | actioninput | Parameters passed to the tool | | observation | The tool's response | | durationms | How long the tool call took | A complete trace is an array of these objects, one per iteration, plus metadata about the overall run. Accessing Traces ================ Get Full Trace ============== `bash curl http://localhost:3009/agents/{agentRunId}/trace \ -H "X-Org-Id: acme-corp" ` api GET /agents/:id/trace Returns the complete reasoning trace for an agent run. 
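For quick audit review outside the UI, the trace can be fetched and summarized per iteration. A minimal sketch, assuming Node 18+ (built-in fetch) and that the endpoint returns a reasoningtrace array shaped like the iteration objects above:

```javascript
const HYPHEN_URL = "http://localhost:3009";

// Print a one-line audit summary for each iteration of an agent run.
async function printTrace(agentRunId, orgId) {
  const res = await fetch(`${HYPHEN_URL}/agents/${agentRunId}/trace`, {
    headers: { "X-Org-Id": orgId },
  });
  const { reasoningtrace = [] } = await res.json();
  for (const step of reasoningtrace) {
    console.log(`#${step.iteration} [${step.durationms}ms] ${step.action}: ${step.thought}`);
  }
}

printTrace("agentabc123", "acme-corp");
```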
Get Status with Trace ===================== Include the trace inline with the status response: `bash curl http://localhost:3009/agents/{agentRunId}/status?includetrace=true \ -H "X-Org-Id: acme-corp" ` api GET /agents/:id/status?includetrace=true Returns agent status plus the full reasoning trace. Response ======== `json { "agentrunid": "agentabc123", "status": "completed", "objective": "Process support ticket TK-12345", "model": "gpt-4", "iterationsused": 4, "maxiterations": 10, "startedat": "2026-02-01T10:30:00Z", "completedat": "2026-02-01T10:30:12Z", "totaldurationms": 12340, "result": { "answer": "Ticket resolved. Refund of $49.99 processed, confirmation email sent.", "confidence": 0.95 }, "reasoningtrace": [ { "iteration": 1, "thought": "...", "action": "lookupticket", "..." : "..." }, { "iteration": 2, "thought": "...", "action": "checkbilling", "..." : "..." }, { "iteration": 3, "thought": "...", "action": "processrefund", "..." : "..." }, { "iteration": 4, "thought": "...", "action": "complete", "..." : "..." } ] } ` Querying Traces =============== List Agent Runs =============== Filter by status to find runs that need review: `bash All paused agents (waiting for human input) =========================================== curl "http://localhost:3009/agents?status=paused" \ -H "X-Org-Id: acme-corp" All failed agents ================= curl "http://localhost:3009/agents?status=failed" \ -H "X-Org-Id: acme-corp" Recent runs with pagination =========================== curl "http://localhost:3009/agents?limit=20&offset=0" \ -H "X-Org-Id: acme-corp" ` api GET /agents?status={status} List agent runs filtered by status: running, paused, completed, failed. Compliance Use Cases ==================== Reasoning traces serve critical compliance and governance functions: Regulatory audit. When a regulator asks "why was this transaction flagged?" you can show the exact reasoning chain — what the agent checked, what it found, and why it made its decision. Decision explainability. Every agent decision has a thought field explaining the reasoning. This satisfies explainability requirements in regulated industries (financial services, healthcare, insurance). Process verification. Traces prove that required steps were followed. If the process requires sanctions screening before account opening, the trace shows the screening tool was called and the result was checked. Incident investigation. When something goes wrong, the trace shows exactly where and why. You can see the agent's last thought, which tool failed, and what observation led to an incorrect decision. Human oversight documentation. When an agent pauses for human input, the trace records both the agent's question and the human's response. This documents the human-in-the-loop review for audit purposes. Secret Redaction ================ Secrets that appear in reasoning traces are automatically redacted before storage. This includes: - API keys and tokens referenced via orgconfig: - OAuth access tokens and refresh tokens - Database connection strings - Any value matching common secret patterns If an agent's thought or observation contains a secret value (because the LLM included it in its reasoning or a tool returned it), the redaction layer strips it: `json { "thought": "I'll call the API with the authentication token.", "action": "fetchdata", "actioninput": { "url": "https://api.example.com/data", "token": "[REDACTED]" }, "observation": { "status": 200, "data": { "..." : "..." } } } ` Redaction happens at storage time. 
The actual secret is used during execution but never persisted in the trace. Trace Storage ============= Traces are stored in the database alongside the agent run record. They are scoped per organization (same multi-tenant isolation (/platform/multi-tenancy) as all other data) and persist indefinitely unless explicitly deleted. For high-volume deployments, configure trace retention policies to manage storage. The includereasoning_trace property on the loop can be set to false` to disable trace storage for non-critical agents — though this is not recommended for production workflows that may need audit review. → Next: Deployment Patterns (/agents/deployment-patterns) --- ## PATH: Guides > Ai Workflow Generation (Source: guides/05-ai-workflow-generation.md) AI Workflow Generation ====================== Hyphen's AI compiler translates natural language descriptions into deterministic JSON workflow specifications. You describe the what. The AI produces the how — the step sequence, data references, conditions, agent objectives, and tool declarations. This guide walks through the generation flow: submitting a prompt, monitoring generation, reviewing the output, deploying the workflow, and iterating. Step 1: Submit a Generation Request =================================== Describe what you want the workflow to do in plain language: ``bash curl -X POST http://localhost:3009/ai/generate-workflow \ -H "X-Org-Id: acme-corp" \ -H "Content-Type: application/json" \ -d '{ "prompt": "Create a workflow that processes incoming invoices. For each invoice, match it against our payment records using PO number and vendor ID. If a match is found within 2% tolerance, auto-reconcile it. If no match is found, have an AI agent investigate the exception by looking up the purchase order and checking payment history. For invoices over $10,000 that the agent cannot resolve, require manager approval before writing off. Log everything to an audit table.", "llmOptions": { "model": "gpt-4o", "temperature": 0.7 } }' ` Response: `json { "generationid": "genabc123", "status": "processing", "message": "Workflow generation started" } ` The generation runs asynchronously. Complex workflows with multiple steps, conditions, and agent configurations typically take 10–30 seconds. Step 2: Poll Generation Status ============================== `bash curl http://localhost:3009/ai/generate-workflow/genabc123/status \ -H "X-Org-Id: acme-corp" ` Response while processing: `json { "generationid": "genabc123", "status": "processing" } ` Response when complete: `json { "generationid": "genabc123", "status": "completed" } ` Step 3: Retrieve the Generated Spec =================================== `bash curl http://localhost:3009/ai/generate-workflow/genabc123 \ -H "X-Org-Id: acme-corp" ` The response includes the complete workflow definition, any actions that need to be registered, and custom tables to create: `json { "generationid": "genabc123", "status": "completed", "hyphenworkflowdefinition": { "name": "invoiceprocessing", "definition": { "actions": [ { "type": "matcher", "properties": { "left": "@input.invoices", "right": "@input.payments", "matchOn": ["ponumber", "vendorid"], "tolerance": 0.02, "outputMatched": "reconciled", "outputUnmatchedLeft": "exceptions", "outputUnmatchedRight": "unmatchedpayments" } }, ... ] } }, "actionstoregister": [ { "actionname": "lookuppurchaseorder", "kind": "http", "description": "Look up purchase order details from ERP" } ], "customtablestocreate": [ { "name": "reconciliationaudit", "fields": [...] 
} ], "workflowdescription": "Invoice matching with agent exception handling and manager approval for high-value write-offs" } ` Review Checklist ================ Before deploying, verify: - Matcher configuration — Are the matchOn fields correct for your data schema? Is the tolerance appropriate? - Agent objective — Does it accurately describe the investigation you want? Are the right tools listed? - Conditions — Do the filter conditions match your business rules ($10K threshold, etc.)? - Action references — Do the action names match actions you've already registered, or do you need to register new ones? - Context paths — Are @path references pointing to the right data? Step 4: Deploy the Workflow =========================== Use the create-from-ai endpoint to create the workflow, register actions, and create tables in one call: `bash curl -X POST http://localhost:3009/workflows/create-from-ai \ -H "X-Org-Id: acme-corp" \ -H "Content-Type: application/json" \ -d '{ "hyphenworkflowdefinition": { ... }, "actionstoregister": [ ... ], "customtablestocreate": [ ... ], "workflowdescription": "Invoice matching with exception handling", "hasexception": false }' ` Response: `json { "processId": "procxyz789", "status": "processing" } ` Poll for completion: `bash curl http://localhost:3009/workflows/generation-status/procxyz789 \ -H "X-Org-Id: acme-corp" ` Once complete, the workflow is live and ready to execute. Actions registered via create-from-ai are stubs — they have the correct names and kinds but may need their URLs, queries, or templates filled in. Review and update each registered action before executing the workflow with real data. Step 5: Iterate on the Prompt ============================= If the generated workflow doesn't match your requirements, refine the prompt and regenerate. Tips for better results: Be specific about data fields. Instead of "match invoices to payments," say "match on ponumber and vendorid` with 2% amount tolerance." Name your thresholds. "Invoices over $10,000" is clearer than "high-value invoices." Describe the agent's investigation steps. "Have the agent check the purchase order, look at payment history for the vendor, and search for duplicate invoices" gives the AI compiler clear tool requirements. Specify what humans should review. "Require manager approval for write-offs" is more precise than "add a human review step." Include scheduling if needed. "Run this workflow daily at 2 AM Eastern" generates the schedule block. Prompt Refinement Example ========================= First attempt: > "Create a workflow to process invoices" Too vague — the AI has to guess your matching criteria, exception handling, and approval rules. Refined: > "Create a workflow that matches incoming invoices to payment records on PO number and vendor ID with 2% amount tolerance and 5-day date window. Use fuzzy matching on vendor names at 85% threshold. For unmatched invoices, have an AI agent investigate by looking up the PO in our ERP, checking payment history, and searching for duplicates. If the agent's confidence is below 0.8, pause for human review. Log all results to a reconciliationaudit table. Run daily at 6 AM Eastern." This prompt gives the AI compiler everything it needs to produce a complete, accurate specification on the first attempt. --- ## PATH: Platform > Scheduling (Source: platform/05-scheduling.md) Scheduling ========== Workflows can run on a recurring schedule. Add a schedule object to the workflow definition. 
Configuration ============= ``json { "name": "dailyreconciliation", "definition": { "schedule": { "every": "1d", "at": "02:00", "timezone": "America/NewYork" }, "actions": [ ] } } ` Schedule Fields =============== | Field | Type | Required | Description | |-------|------|----------|-------------| | every | string | Yes | Interval — 30m, 1h, 6h, 1d, 7d | | at | string | No | Time of day in 24h format ("02:00", "14:30"). Only applies when every is 1d or longer | | timezone | string | No | IANA timezone ("America/NewYork", "Europe/London", "Asia/Tokyo"). Defaults to UTC | Examples ======== Daily at 2:00 AM Eastern: `json { "schedule": { "every": "1d", "at": "02:00", "timezone": "America/NewYork" } } ` Every 6 hours: `json { "schedule": { "every": "6h" } } ` Every 30 minutes: `json { "schedule": { "every": "30m" } } ` Weekly (every 7 days) at midnight UTC: `json { "schedule": { "every": "7d", "at": "00:00" } } ` How Scheduled Runs Work ======================= Scheduled workflows execute with an empty @input unless a default input payload is configured. The @now context variable reflects the timestamp at execution time. Each scheduled run creates a standard workflow run with its own run_id, full context, and audit trail — identical to manually triggered runs. If a scheduled run is still in progress when the next interval fires, the new run is queued. Runs do not overlap by default. Scheduling + conditions. You can combine schedule with a top-level condition` to create conditional scheduled workflows — for example, a daily run that only proceeds if there are new records to process. The schedule fires the workflow, but the condition gates execution. → Next: Multi-Tenancy (/platform/multi-tenancy) --- ## PATH: Primitives > Custom Table (Source: primitives/05-custom-table.md) Custom Table ============ Custom tables provide multi-tenant data storage within Hyphen. Create tables with typed fields, then read, write, update, and upsert records from workflows. Every mutation is audit-logged. Creating a Table ================ Tables are created via API before use in workflows: ``bash curl -X POST http://localhost:3009/custom-tables \ -H "X-Org-Id: acme-corp" \ -H "Content-Type: application/json" \ -d '{ "name": "reconciliationlog", "fields": [ { "name": "runid", "type": "text", "required": true }, { "name": "invoiceid", "type": "text", "required": true }, { "name": "amount", "type": "numeric", "required": false }, { "name": "status", "type": "text", "required": true }, { "name": "matched", "type": "boolean", "required": false }, { "name": "processedat", "type": "timestamptz", "required": false } ] }' ` api POST /custom-tables Create a new custom table with typed field definitions. Field Types =========== | Type | Description | Example Values | |------|-------------|----------------| | text | String value, optional maxLength | "INV-001", "pending" | | integer | Whole numbers | 42, 0, -5 | | numeric | Decimal numbers | 1500.50, 0.02 | | boolean | True/false | true, false | | timestamptz | Timestamp with timezone | "2025-01-15T10:30:00Z" | Usage in Workflows ================== Use type: "custom-table" as a workflow step with the desired operation. 
Write (Insert) ============== `json { "type": "custom-table", "properties": { "table": "reconciliationlog", "operation": "write", "keys": ["runid", "invoiceid"], "values": ["@runid", "@input.invoiceid"], "fields": { "amount": "@input.amount", "status": "processed", "matched": true, "processedat": "@now" } } } ` Read (Query) ============ `json { "type": "custom-table", "properties": { "table": "reconciliationlog", "operation": "read", "keys": ["invoiceid"], "values": ["@input.invoiceid"] }, "outputKey": "previousruns" } ` Results are stored in context under the outputKey. Update ====== `json { "type": "custom-table", "properties": { "table": "reconciliationlog", "operation": "update", "keys": ["runid"], "values": ["@_runid"], "fields": { "status": "approved", "matched": true } } } ` Updates records matching the keys/values criteria with the new fields. Upsert (Insert or Update) ========================= `json { "type": "custom-table", "properties": { "table": "vendorpreferences", "operation": "upsert", "keys": ["vendorid"], "values": ["@input.vendorid"], "fields": { "preferredpayment": "@input.paymentmethod", "lastupdated": "@now" } } } ` If a record with the matching keys exists, it's updated. Otherwise, a new record is created. Properties Reference ==================== | Property | Type | Required | Description | |----------|------|----------|-------------| | table | string | Yes | Table name | | operation | string | Yes | "read", "write", "update", or "upsert" | | keys | string[] | Yes | Field names used to identify records | | values | array | Yes | Values corresponding to keys (supports @path) | | fields | object | No | Additional fields to write/update (supports @path values) | Direct API Access ================= Tables can also be accessed directly via REST API, outside of workflows: api GET /custom-tables/:tableName View all data in a custom table. api POST /custom-tables/:tableName/insert Insert a single record. api POST /custom-tables/:tableName/bulk-insert Insert multiple records at once. api PUT /custom-tables/:tableName/update Update records matching criteria. Audit Log ========= Every mutation (write, update, upsert) is logged in the table's audit trail. The audit log records who made the change, when, what the previous value was, and what the new value is. `bash curl -X GET http://localhost:3009/custom-tables/reconciliationlog/audit \ -H "X-Org-Id: acme-corp" ` api GET /custom-tables/:tableName/audit View the audit log for a custom table. Multi-Tenant Isolation ====================== Custom tables are fully scoped per organization. A table named reconciliationlog in org acme-corp is completely separate from a table with the same name in org globex-inc`. There is no cross-org data access. Common Patterns =============== Workflow audit log: Write a record after every workflow run with key metrics — matched count, exception count, approval status, timestamp. Operational memory: Store intermediate results across workflow runs. An agent can read from a custom table to check whether an invoice was previously processed before investigating it again. State tracking: Track the status of long-running multi-step processes. Each workflow run updates the record as it progresses through stages. → Next: Actions (/actions) — registering reusable operations --- ## PATH: Templates > Vendor Payment Reconciliation (Source: templates/05-vendor-payment-reconciliation.md) Vendor Payment Reconciliation ============================= Bank statement to AP ledger matching. 
Distinct from AP Invoice Reconciliation (/templates/ap-invoice-reconciliation) because the data is messier — bank descriptions rarely match vendor names cleanly, payments are often consolidated, and timing gaps are wider. Fuzzy matching and date windows do the heavy lifting. What Gets Automated =================== Matching bank transactions to AP ledger entries despite format mismatches. Investigating consolidated payments (one bank debit covering multiple invoices). Identifying timing differences between payment date and bank clearing date. Agent resolves partial payments and unknown transactions. What Humans Still Own ===================== Unidentified transactions after AI investigation. Month-end reconciliation sign-off. Bank fee disputes and error corrections. Pipeline ======== ``mermaid flowchart TD A[Input: Bank Statement + AP Ledger] --> B[Matcher: Fuzzy Match] B -->|Exact match ~50%| C[Auto-Clear] B -->|Fuzzy match ~25%| D{Within Tolerance?} B -->|No match ~25%| E[Exception Queue] D -->|Yes| F[Auto-Clear with Note] D -->|No| E E --> G[ReAct Agent: Investigate] G --> G1[Check consolidated payments] G1 --> G2[Search for timing delays] G2 --> G3[Fuzzy vendor name match] G3 -->|Identified| H[Auto-Resolve] G3 -->|Partial match| I[Human Review] G3 -->|Unidentified| I I -->|Matched| J[Manual Clear + Log] I -->|Unknown| K[Flag for Bank Inquiry] C --> L[Custom Table: Bank Recon Log] F --> L H --> L J --> L K --> L style B fill:#e8a84c,color:#09090b,stroke:none style G fill:#4ade80,color:#09090b,stroke:none style I fill:#60a5fa,color:#09090b,stroke:none ` This template includes a schedule block — it runs automatically every day at 6 AM Eastern. Remove or adjust the schedule for manual execution. Workflow Definition =================== `json { "name": "vendorpaymentreconciliation", "definition": { "schedule": { "every": "1d", "at": "06:00", "timezone": "America/NewYork" }, "actions": [ { "type": "matcher", "properties": { "left": "@input.banktransactions", "right": "@input.apledger", "matchOn": ["referencenumber"], "tolerance": 0.005, "dateWindowDays": 7, "fuzzyThreshold": 75, "descriptionKey": "description", "outputMatched": "cleared", "outputUnmatchedLeft": "unmatchedbank", "outputUnmatchedRight": "outstandingpayments" } }, { "type": "loop", "filter": { "condition": { "greaterThan": [{ "length": "@unmatchedbank" }, 0] } }, "properties": { "mode": "foreach", "itemspath": "@unmatchedbank", "itemvariablename": "txn", "actionstoexecute": [ { "type": "loop", "properties": { "mode": "react", "objective": "Investigate unmatched bank transaction: '{{txn.description}}' for ${{txn.amount}} on {{txn.date}}. Determine if this is: 1) A consolidated payment — search for AP entries that sum to this amount within ±7 days, 2) A timing delay — check for a matching AP entry in the next period, 3) A vendor name mismatch — the bank description may not match AP vendor name, try fuzzy matching, 4) A bank fee or interest charge. Provide: root cause, matched AP entries if found, and recommended action.", "tools": [ { "type": "action", "name": "searchapbyamount" }, { "type": "action", "name": "searchapbydaterange" }, { "type": "action", "name": "searchapbyvendor" }, { "type": "action", "name": "getknownbankfees" } ], "maxiterations": 8, "onstuck": { "iterations": 3, "action": "retrywithhint", "hint": "If you cannot identify the transaction, complete with recommendation to escalate for manual review." 
}, "resultkey": "investigation" } } ], "maxconcurrency": 5, "failurestrategy": "continueonerror", "collectresults": true, "resultkey": "bankinvestigations" } }, { "type": "PbotApproval", "filter": { "condition": { "greaterThan": [{ "length": "@unmatchedbank" }, 0] } }, "properties": { "comment": "Bank reconciliation: {{cleared.length}} auto-cleared, {{unmatchedbank.length}} investigated. Review findings and approve close entries.", "requestpayload": { "clearedcount": "@cleared.length", "outstandingcount": "@outstandingpayments.length", "investigations": "@bankinvestigations" } } }, { "type": "custom-table", "properties": { "table": "bankreconciliationlog", "operation": "write", "keys": ["runid", "reconciliationdate"], "values": ["@runid", "@now"], "fields": { "banktransactions": "@input.banktransactions.length", "autocleared": "@cleared.length", "investigated": "@unmatchedbank.length", "outstandingpayments": "@outstandingpayments.length", "status": "completed" } } } ] } } ` Required Registered Actions =========================== | Action | Kind | Purpose | |--------|------|---------| | searchapbyamount | db | Find AP entries matching an amount or sum of amounts | | searchapbydaterange | db | Find AP entries within a date window | | searchapbyvendor | db | Fuzzy search AP entries by vendor/payee name | | getknownbankfees | db | Retrieve known fee patterns for the bank account | Customization Notes =================== Fuzzy threshold. 75 is lower than other templates because bank descriptions are notoriously messy ("WIRE TRF ACME" vs. "Acme Corporation"). Lower to 65 if your bank formats are especially terse; raise to 85 if descriptions are reasonably clean. Date window. 7 days covers typical bank clearing delays. Extend to 10–14 days for international wires or ACH batches that clear slowly. Consolidated payment detection. The searchapbyamount action should support sum-matching — finding multiple AP entries whose amounts sum to the bank transaction amount. This is the most common exception type in bank reconciliation. Bank fee patterns. The getknownbank_fees` action returns known fee types and amounts for the bank account. Preloading common patterns (monthly maintenance fees, wire fees, etc.) lets the agent quickly classify these without investigation. --- ## PATH: Guides > Embedding Hyphen (Source: guides/06-embedding-hyphen.md) Embedding Hyphen ================ This guide is for companies embedding Hyphen into their own products — SaaS platforms that want to offer governed AI operations to their customers without building orchestration infrastructure. One Hyphen instance serves many customer organizations, each fully isolated. Multi-Tenant Architecture ========================= Hyphen is multi-tenant by default. Every API call includes an X-Org-Id header that scopes all data — workflows, runs, credentials, tables, agent traces — to that organization. No data leaks between tenants. 
`` Your Platform ├── Customer A → X-Org-Id: customer-a → Hyphen ├── Customer B → X-Org-Id: customer-b → Hyphen └── Customer C → X-Org-Id: customer-c → Hyphen ` Each customer gets their own: | Resource | Isolation | |----------|-----------| | Workflows | Scoped per org — customers can't see each other's definitions | | Runs | Scoped per org — execution history is private | | Actions | Scoped per org — each customer registers their own integrations | | OAuth connections | Scoped per org — each customer connects their own Gmail/Slack/Outlook | | Custom tables | Scoped per org — data storage is isolated | | Org config | Scoped per org — API keys and secrets are encrypted per tenant | Org Provisioning ================ When a new customer signs up on your platform, provision their Hyphen org: 1. Choose an Org ID =================== Use a stable, unique identifier from your system — a UUID, slug, or customer ID: ` X-Org-Id: custabc123 ` Hyphen creates the org implicitly on first API call. No explicit org creation step is required. 2. Store Their Credentials ========================== If the customer's workflows will call external APIs, store their credentials: `bash curl -X POST http://localhost:3009/org-config \ -H "X-Org-Id: custabc123" \ -H "Content-Type: application/json" \ -d '{ "key": "api:llmapikey", "value": "sk-customer-openai-key" }' ` Customers can use your platform's LLM key (stored once, shared across orgs) or bring their own. Reference shared keys via a common config prefix. 3. Create Default Workflows =========================== Pre-load workflow templates so customers have a starting point: `bash curl -X POST http://localhost:3009/workflows \ -H "X-Org-Id: custabc123" \ -H "Content-Type: application/json" \ -d '{ "name": "invoicereconciliation", "definition": { ... } }' ` 4. Enable Features ================== Use feature flags to control which capabilities each customer has access to: `bash curl -X POST http://localhost:3009/admin/feature-flags/reactloopsenabled/enable \ -H "Content-Type: application/json" \ -d '{ "orgid": "custabc123" }' ` Credential Management ===================== Per-Customer API Keys ===================== Each customer's external API credentials are stored encrypted in their org config: `bash Customer A's Salesforce token ============================= curl -X POST http://localhost:3009/org-config \ -H "X-Org-Id: customer-a" \ -d '{ "key": "api:salesforcetoken", "value": "customer-a-token" }' Customer B's Salesforce token (different instance) ================================================== curl -X POST http://localhost:3009/org-config \ -H "X-Org-Id: customer-b" \ -d '{ "key": "api:salesforcetoken", "value": "customer-b-token" }' ` Both customers can use the same workflow definition — the orgconfig:api:salesforcetoken reference resolves to each customer's own token at execution time. Per-Customer OAuth ================== Each customer connects their own business tool accounts: `bash Customer A stores their Google OAuth app ======================================== curl -X POST http://localhost:3009/oauth/gmail/app-credentials \ -H "X-Org-Id: customer-a" \ -d '{ "clientid": "customer-a-client-id", "clientsecret": "..." }' Customer A's user authorizes ============================ GET /oauth/gmail/authorize?returnurl=https://yourplatform.com/oauth/callback X-Org-Id: customer-a ` Your platform handles the OAuth redirect flow, but Hyphen stores and manages the tokens per org. 
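Provisioning and credential setup typically run together from your signup hook. A sketch reusing the hyphenFetch helper from earlier; the customer fields, plan check, and defaultDefinition placeholder are illustrative, and step 2 can be skipped when customers use your shared LLM key:

`javascript
// Placeholder: swap in the definition of whatever template you pre-load
const defaultDefinition = { actions: [] };

async function provisionHyphenOrg(customer) {
  // 1. Choose a stable org ID from your own system
  const orgId = customer.id;

  // 2. Store the credentials their workflows will need
  await hyphenFetch(orgId, '/org-config', {
    method: 'POST',
    body: JSON.stringify({ key: 'api:llmapikey', value: customer.llmApiKey })
  });

  // 3. Pre-load a default workflow so they have a starting point
  await hyphenFetch(orgId, '/workflows', {
    method: 'POST',
    body: JSON.stringify({ name: 'invoicereconciliation', definition: defaultDefinition })
  });

  // 4. Enable the capabilities included in their plan
  if (customer.plan === 'pro') {
    await fetch(HYPHEN_URL + '/admin/feature-flags/reactloopsenabled/enable', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ orgid: orgId })
    });
  }
}
`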
White-Labeling the Approval UX ============================== When workflows pause for approval, your platform needs to present the review UI to the customer's users. Use the approval API and webhooks to build this. Webhook → Your Platform → Customer's Reviewer ============================================= `mermaid sequenceDiagram participant H as Hyphen participant P as Your Platform participant R as Customer's Reviewer H->>P: Webhook: pbotapprovalrequested P->>P: Route to customer's review queue P->>R: Notification (email, in-app, Slack) R->>P: Opens review UI P->>H: GET /approvals/:runId (fetch context) H-->>P: Approval details + request payload P->>R: Display context in your branded UI R->>P: Submit decision P->>H: POST /approvals/:runId/:step H-->>P: 200 OK — workflow resumed ` Building Your Review UI ======================= Fetch the approval context: `bash curl http://localhost:3009/approvals/runabc123 \ -H "X-Org-Id: customer-a" ` Display it in your platform's UI with your branding. The requestpayload contains whatever the workflow author included — investigation results, matched records, agent reasoning. Submit the customer's decision: `bash curl -X POST http://localhost:3009/approvals/runabc123/2 \ -H "X-Org-Id: customer-a" \ -H "Content-Type: application/json" \ -d '{ "approved": true, "comments": "Looks good", "data": { "reviewer": "user@customer-a.com" } }' ` Your customers never interact with Hyphen directly — they see your platform's approval experience. Webhook Integration =================== Register webhook endpoints to receive events for each customer org. Common patterns: Routing Events to Customers =========================== Your webhook handler receives events for all orgs and routes them: `javascript app.post('/hyphen-webhook', (req, res) => { const { event, orgid, data } = req.body; switch (event) { case 'pbotapprovalrequested': // Add to customer's approval queue addToApprovalQueue(orgid, data); break; case 'workflowcompleted': // Update customer's dashboard updateDashboard(orgid, data); break; case 'workflowfailed': // Alert customer's admin sendAlert(orgid, data); break; } res.sendStatus(200); }); ` Customer-Facing Notifications ============================= Translate Hyphen events into notifications in your platform: | Hyphen Event | Customer Notification | |-------------|----------------------| | pbotapprovalrequested | "Action required: Review reconciliation exceptions" | | workflowcompleted | "Daily reconciliation complete: 450 matched, 12 exceptions" | | workflowfailed | "Reconciliation failed — check data source connectivity" | | agentpaused | "AI agent needs your input on ticket #12345" | Billing Considerations ====================== Track usage per org for billing. 
Hyphen's metrics endpoint provides per-org data: `bash curl http://localhost:3009/admin/metrics/org/customer-a ` Key metrics for billing: | Metric | Billing Model | |--------|--------------| | Workflow executions | Per-run pricing | | Agent iterations | Per-iteration or per-run | | Matcher records processed | Volume-based tiers | | Custom table storage | Storage tiers | | OAuth API calls | Pass-through or bundled | Usage Tracking Pattern ====================== Log each workflow execution to your billing system: `javascript // Webhook handler for completed workflows if (event === 'workflowcompleted') { billingService.recordUsage({ orgid: orgid, workflowid: data.workflowid, runid: data.runid, stepsexecuted: data.stepscompleted, agentiterations: data.agentiterations || 0, timestamp: data.completedat }); } `` Deployment Checklist ==================== Before going live with embedded Hyphen: 1. Org provisioning flow — Automated creation of org config, default workflows, and feature flags on customer signup 2. Credential management — Secure flow for customers to provide API keys and OAuth connections 3. Approval UX — Branded review interface using the approval API and webhooks 4. Webhook routing — Handler that routes events to the correct customer's notification channels 5. Usage tracking — Metrics collection for billing and capacity planning 6. Feature flags — Controlled rollout of capabilities per customer tier 7. Error handling — Graceful degradation when Hyphen is unavailable, with customer-facing error messages --- ## PATH: Platform > Multi Tenancy (Source: platform/06-multi-tenancy.md) Multi-Tenancy ============= Every Hyphen resource is scoped to an organization. Workflows, runs, actions, tables, OAuth credentials, and agent executions are all isolated per tenant. Organization Identification =========================== Every API call requires the X-Org-Id header: ``bash curl -X GET http://localhost:3009/workflows \ -H "X-Org-Id: acme-corp" ` The header determines which tenant's data is accessed. There is no cross-org data leakage — a request with X-Org-Id: acme-corp will never see workflows belonging to X-Org-Id: globex-inc. api GET /workflows Returns only workflows belonging to the organization identified by X-Org-Id. Encrypted Secret Storage (Org Config) ===================================== Each organization stores API keys, database connection strings, and other secrets in encrypted org config. Values are stored encrypted at rest and never returned in plaintext via API. `bash Store a secret ============== curl -X POST http://localhost:3009/org-config \ -H "X-Org-Id: acme-corp" \ -H "Content-Type: application/json" \ -d '{ "key": "api:openaikey", "value": "sk-proj-abc123..." }' ` `bash List all config (values are masked) =================================== curl -X GET http://localhost:3009/org-config \ -H "X-Org-Id: acme-corp" ` api POST /org-config Store or update an encrypted configuration key. api GET /org-config List all configuration keys (values masked). Key Naming Conventions ====================== | Prefix | Purpose | Example | |--------|---------|---------| | api: | Third-party API keys | api:openaikey, api:salesforcetoken | | db: | Database connection strings | db:orderspg, db:analyticsmongo | | oauth: | OAuth app credentials | oauth:gmailclientid | Referencing Secrets in Workflows ================================ Use the orgconfig: prefix to reference encrypted values in workflow steps. 
The execution engine resolves these at runtime — the actual secret value never appears in the workflow definition. `json { "actionname": "fetchcrmdata", "kind": "http", "url": "https://api.salesforce.com/query", "headers": { "Authorization": "Bearer orgconfig:api:salesforcetoken" } } ` `json { "actionname": "querywarehouse", "kind": "db", "datasource": "orgconfig:db:warehousepg", "query": "SELECT * FROM orders WHERE status = $1" } ` The orgconfig: prefix works in action registration, workflow properties, and any string value that the execution engine resolves. Data Isolation ============== Every data type is scoped per organization: | Resource | Isolation | |----------|-----------| | Workflows | Definitions visible only to owning org | | Runs | Execution history, context, and status per org | | Actions | Registered actions scoped per org | | Custom Tables | Table definitions and data per org | | OAuth Connections | Tokens and credentials per org | | Agent Runs | Agent executions, traces, and memory per org | | External Forms | Form definitions and submissions per org | | Org Config | Encrypted secrets per org | Credential Scoping ================== OAuth tokens are scoped to both the organization and the specific connected account. When an agent uses gmailsend, it sends from the account that was authorized for that org — not a shared platform account. `json { "type": "gmailsend", "properties": { "_oauthaccount": "ops@acme-corp.com", "to": "@input.recipient", "subject": "Invoice Update" } } ` The _oauthaccount property identifies which connected account to use. Each org connects their own accounts through the OAuth flow. Multi-Tenant Embedding ====================== Hyphen's multi-tenancy is designed for platform embedding. If you're building a product that offers workflow automation to your customers, each of your customers becomes a Hyphen organization: ` Your Platform └── Customer A → X-Org-Id: customer-a └── Customer B → X-Org-Id: customer-b └── Customer C → X-Org-Id: customer-c ` Each customer's workflows, data, credentials, and agent executions are fully isolated. Your platform makes API calls on behalf of your customers by setting the appropriate X-Org-Id header. → Next: Security (/platform/security) --- ## PATH: Templates > It Incident Response (Source: templates/06-it-incident-response.md) IT Incident Response ==================== Automate SOC alert triage and containment. The agent ingests security alerts, enriches indicators with threat intelligence, correlates across SIEM data, takes containment actions, and escalates to analysts when confidence is low. The reasoning trace is the incident report. Architecture ============ ``mermaid flowchart TD A["Security Alert Ingested
(SIEM, EDR, firewall)"] --> B["Matcher — Correlate
against known indicators"] B -->|"Known benign / duplicate"| F["✅ Auto-close"] B -->|"Unknown or suspicious"| C["ReAct Agent — Investigate
Enrich + correlate + recommend"] C -->|"Clear threat, auto-contain"| G["Containment Action
(block IP, isolate host)"] C -->|"Ambiguous / high severity"| D["Human Review
SOC Analyst"] D --> G G --> E["Audit Trail
Incident report auto-generated"] style B fill:#fef3c7,stroke:#d97706 style C fill:#dcfce7,stroke:#16a34a style D fill:#dbeafe,stroke:#2563eb ` Graduated response: Known-benign alerts auto-close (~70%). Clear threats get auto-containment with agent reasoning (~20%). Ambiguous or high-severity incidents escalate to SOC analysts (~10%). Required Actions ================ Register these before deploying the workflow: | Action | Kind | Purpose | |--------|------|---------| | enrichindicator | http | Query threat intel feeds (VirusTotal, AbuseIPDB, MISP) | | querysiem | http | Search SIEM for correlated events (Splunk, Elastic, Sentinel) | | lookupasset | http | Map IP/hostname to asset inventory and owner | | blockip | http | Add IP to firewall blocklist | | isolatehost | http | Trigger host isolation via EDR (CrowdStrike, SentinelOne) | | createticket | http | Open incident ticket in ServiceNow / Jira | | classifyseverity | llm | AI-assess severity from enrichment context | `bash curl -X POST http://localhost:3009/actions \ -H "X-Org-Id: your-org" \ -H "Content-Type: application/json" \ -d '{ "actionname": "enrichindicator", "kind": "http", "url": "https://www.virustotal.com/api/v3/ipaddresses/{{indicator}}", "httpmethod": "GET", "headers": { "x-apikey": "orgconfig:api:virustotalkey" }, "passthrough": true }' ` `bash curl -X POST http://localhost:3009/actions \ -H "X-Org-Id: your-org" \ -H "Content-Type: application/json" \ -d '{ "actionname": "classifyseverity", "kind": "llm", "template": "Given this security alert and enrichment data, classify severity as critical/high/medium/low and recommend containment action.\n\nAlert: {{alert}}\nEnrichment: {{enrichment}}\nAsset info: {{asset}}\n\nRespond with JSON: {\"severity\": \"...\", \"recommendation\": \"...\", \"confidence\": 0.0-1.0}", "model": "gpt-4", "maxtokens": 500 }' ` Workflow Definition =================== `json { "name": "incidentresponsepipeline", "definition": { "actions": [ { "type": "matcher", "properties": { "left": "@input.alerts", "right": "@input.knownbenignindicators", "matchOn": ["indicatorvalue"], "outputMatched": "knownbenign", "outputUnmatchedLeft": "alertstoinvestigate", "outputUnmatchedRight": "unusedbenign" } }, { "type": "loop", "filter": { "condition": { "greaterThan": [{ "length": "@alertstoinvestigate" }, 0] } }, "properties": { "mode": "foreach", "itemspath": "@alertstoinvestigate", "itemvariablename": "alert", "actionstoexecute": [ { "type": "loop", "properties": { "mode": "react", "objective": "Investigate this security alert and recommend containment:\n\nAlert type: {{alert.type}}\nIndicator: {{alert.indicatorvalue}}\nSource: {{alert.source}}\nTimestamp: {{alert.timestamp}}\n\nSteps: 1) Enrich the indicator with threat intelligence, 2) Look up the affected asset and its owner, 3) Query SIEM for correlated events in the last 24 hours, 4) Classify severity, 5) If critical/high severity with high confidence — take containment action (block IP or isolate host). If medium/low or uncertain — escalate for human review. 
6) Complete with your assessment.", "tools": [ { "type": "action", "name": "enrichindicator" }, { "type": "action", "name": "querysiem" }, { "type": "action", "name": "lookupasset" }, { "type": "action", "name": "classifyseverity" }, { "type": "action", "name": "blockip" }, { "type": "action", "name": "isolatehost" }, { "type": "action", "name": "createticket" } ], "model": "gpt-4", "maxiterations": 12, "timeoutms": 300000, "onstuck": { "iterations": 3, "action": "escalate" }, "resultkey": "investigation" } } ], "maxconcurrency": 5, "failurestrategy": "continueonerror", "resultkey": "investigationresults" } }, { "type": "custom-table", "properties": { "table": "incidentlog", "operation": "write", "keys": ["runid", "timestamp"], "values": ["@runid", "@now"], "fields": { "alertsreceived": "@input.alerts.length", "autoclosed": "@knownbenign.length", "investigated": "@alertstoinvestigate.length", "status": "completed" } } } ] } } ` Structural Permissioning in Action ================================== This is where governed autonomy matters most. The agent's tool list is an architectural constraint, not a policy: | Tool declared | Agent can... | |---------------|-------------| | blockip | Block an IP address at the firewall | | isolatehost | Isolate a host via EDR | | createticket | Open an incident ticket | | Tool NOT declared | Agent cannot... | |-------------------|----------------| | deletefirewallrule | Modify existing firewall rules | | shutdownserver | Shut down production systems | | accesscredentialsvault | Read or modify stored secrets | The agent cannot discover or invent tools. It works with exactly the capabilities you've declared. If you want the agent to block IPs but not isolate hosts, remove isolatehost from the tools list. No prompt engineering required — it's structural. Sample Reasoning Trace ====================== ` Iteration 1: Thought: "Suspicious IP 185.220.101.42 flagged by firewall. Let me check threat intel." Action: enrichindicator Input: { "indicator": "185.220.101.42" } Result: { "maliciousscore": 87, "tags": ["torexitnode", "scanning"], "lastseen": "2026-02-06" } Iteration 2: Thought: "Known Tor exit node with high malicious score. Let me check what asset it targeted." Action: lookupasset Input: { "ip": "10.0.1.45" } Result: { "hostname": "web-prod-03", "owner": "Platform Team", "criticality": "high" } Iteration 3: Thought: "High-criticality production server targeted by known malicious IP. Checking SIEM for more context." Action: querysiem Input: { "query": "srcip=185.220.101.42 last 24h", "limit": 50 } Result: { "eventcount": 247, "patterns": ["portscan", "sshbruteforce"] } Iteration 4: Thought: "247 events including SSH brute force against a production server. This is critical. Blocking immediately." Action: blockip Input: { "ip": "185.220.101.42", "reason": "Tor exit node — SSH brute force against web-prod-03" } Result: { "blocked": true, "ruleid": "fw-rule-8823" } Iteration 5: Thought: "IP blocked. Creating incident ticket for follow-up investigation." Action: createticket Input: { "severity": "high", "title": "SSH brute force from Tor exit node against web-prod-03", ... } Result: { "ticketid": "INC-4521" } Iteration 6: Thought: "Contained and documented. Completing with assessment." Action: complete Input: { "answer": "Critical: Tor exit node conducting SSH brute force against production server. IP blocked. Ticket INC-4521 created.", "confidence": 0.95 } ` This trace is the incident report. 
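To hand that report to auditors or attach it to the ticket, pull the trace through the agent trace endpoint once the run finishes. A minimal sketch, assuming you have captured the investigating agent's run ID; the function name and error handling are illustrative:

`javascript
// Retrieve the full reasoning trace (every thought, action, and observation)
async function getIncidentReport(orgId, agentRunId) {
  const res = await fetch('http://localhost:3009/agents/' + agentRunId + '/trace', {
    headers: { 'X-Org-Id': orgId }
  });
  if (!res.ok) throw new Error('Trace fetch failed with ' + res.status);
  return res.json();
}
`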
Every decision — why the IP was flagged, what intelligence confirmed it, what containment was taken — is queryable and auditable. Customization ============= Alert sources: Replace @input.alerts with webhook ingestion from your SIEM (Splunk, Elastic, Microsoft Sentinel, CrowdStrike). Threat intel feeds: Register additional enrichindicator actions for multiple feeds — VirusTotal, AbuseIPDB, MISP, Recorded Future. The agent will use whichever it deems most relevant. Containment actions: Control the blast radius by tuning the tool list. Start with blockip only, add isolatehost after trust is established. Escalation threshold: Adjust maxiterations and onstuck settings. For SOC teams with fast SLAs, set onstuck.iterations: 2 with action: "escalate"` to route to analysts quickly. Compliance alignment: The audit table + reasoning trace satisfies SOC 2 CC7.3 (incident response), ISO 27001 A.16 (incident management), and NIST CSF RS.AN (response analysis). No additional logging required. --- ## PATH: Platform > Security (Source: platform/07-security.md) Security ======== Hyphen's security model is architectural, not bolted on. Protections are enforced by the execution engine, not by policies that can be bypassed. Structural Permissioning ======================== Agents can only use tools explicitly declared in the workflow definition. This is enforced at the engine level — the agent's LLM prompt only includes declared tools, and the execution engine rejects any action call not in the allowlist. ``json { "tools": [{ "type": "action", "name": "lookupticket" }, { "type": "action", "name": "gmailsend" }] } ` With this declaration, the agent can look up tickets, send emails, and signal completion. It cannot access databases, post to Slack, trigger other workflows, or use any capability not in the list. If the LLM hallucinates a tool call to deletedatabase, the engine rejects it — the action isn't registered in the tool allowlist. This isn't a guardrail on top of broad access. The agent literally has no mechanism to invoke undeclared tools. Secret Leakage Prevention ========================= API keys, tokens, connection strings, and other secrets referenced via orgconfig: are automatically redacted from: - Workflow run context (returned by status endpoints) - Agent reasoning traces - Log output - Error messages If an agent's reasoning trace contains a secret value (because the LLM included it in its "thought"), the redaction layer strips it before storage. Secrets resolve at execution time only and are never persisted in plaintext. SSRF Protection =============== HTTP actions and agent tool calls that make outbound requests are protected against Server-Side Request Forgery (SSRF). The following are blocked: - Cloud metadata endpoints (169.254.169.254, metadata.google.internal, etc.) - Private/internal network ranges (10.x.x.x, 172.16.x.x, 192.168.x.x) - Loopback addresses (127.0.0.1, localhost) Requests to blocked addresses fail with a clear error. This prevents agents from being tricked into accessing internal infrastructure. Prompt Injection Defense ======================== Hyphen implements multiple layers of defense against prompt injection in ReAct agent loops: Input sanitization. User-provided data that flows into agent prompts is sanitized to prevent instruction override. Structured output parsing. The engine expects structured JSON responses from the LLM (thought, action, actioninput). 
Freeform responses that don't match the expected format are rejected, preventing the agent from being redirected by injected instructions. Tool allowlist enforcement. Even if a prompt injection convinces the LLM to attempt an unauthorized action, structural permissioning blocks it at the engine level. Fork Bomb Prevention ==================== Agents can trigger other workflows via _runworkflow_. To prevent infinite recursion (workflow A triggers workflow B which triggers workflow A), Hyphen enforces depth limits on nested workflow execution. If the nesting depth exceeds the configured limit, the execution fails with an error rather than running indefinitely. OAuth CSRF Protection ===================== The OAuth authorization flow uses signed JWT state tokens with short expiration times. This prevents cross-site request forgery attacks during the OAuth callback: - State token is generated server-side with a secret key - Token includes the org ID, provider, and expiration - Callback validates the token signature and expiration before exchanging the authorization code Rate Limiting ============= API requests are rate-limited per organization to prevent abuse and ensure fair resource allocation: | Resource | Limit | |----------|-------| | API requests | Per-org configurable | | Agent executions | Per-org configurable | | Concurrent workflow runs | Per-org configurable | | OAuth token refreshes | Per-provider limits | Rate limit headers are included in API responses. When limits are exceeded, requests return 429 Too Many Requests with a Retry-After header. Security Checklist ================== When deploying Hyphen in production: - Store all secrets in org config (never hardcode in workflow definitions) - Use the minimum necessary tool set for each agent (principle of least privilege) - Set appropriate maxiterations and timeoutms on ReAct loops - Configure onstuck recovery strategies to prevent runaway agents - Review agent reasoning traces for unexpected behavior patterns - Enable webhook notifications for agent_paused` events to ensure human review is timely - Use top-level conditions to gate workflows that should only run under specific circumstances → Next: Primitives (/primitives) — the five built-in building blocks --- ## PATH: Templates > Contract Review (Source: templates/07-contract-review.md) Contract Review & Extraction ============================ Automate contract analysis at scale. The agent reads uploaded contracts, extracts key terms into structured data, compares against your standard playbook, flags deviations and risky clauses, and routes findings for legal review. Every extraction and flag is auditable. Architecture ============ ``mermaid flowchart TD A["Contract Uploaded
(PDF, DOCX)"] --> B["Matcher — Compare
extracted terms against playbook"] B -->|"Standard terms"| F["✅ Auto-approved"] B -->|"Deviations found"| C["ReAct Agent — Analyze
Classify risk, draft comments"] C -->|"Low risk, minor deviations"| F C -->|"High risk / novel clauses"| D["Legal Review
PbotApproval"] D --> F F --> E["Custom Table
Contract term database"] style B fill:#fef3c7,stroke:#d97706 style C fill:#dcfce7,stroke:#16a34a style D fill:#dbeafe,stroke:#2563eb ` Graduated review: Standard-term contracts auto-approve (~60%). Minor deviations get AI risk analysis and recommended redlines (~25%). Novel clauses, high-value agreements, or high-risk provisions escalate to legal counsel (~15%). Required Actions ================ | Action | Kind | Purpose | |--------|------|---------| | extractcontractterms | llm | Parse contract and extract structured key terms | | lookupplaybook | http | Fetch standard acceptable terms for this contract type | | assessclauserisk | llm | Classify risk level of a specific clause | | draftredline | llm | Generate suggested alternative language | | searchprecedent | http | Search past executed contracts for similar clauses | | logcontract | custom-table | Write extracted terms to contract database | `bash curl -X POST http://localhost:3009/actions \ -H "X-Org-Id: your-org" \ -H "Content-Type: application/json" \ -d '{ "actionname": "extractcontractterms", "kind": "llm", "template": "Extract the following key terms from this contract as JSON:\n- parties (array of names)\n- effectivedate\n- terminationdate\n- autorenewal (boolean)\n- renewalnoticedays\n- governinglaw\n- liabilitycap (amount or \"unlimited\")\n- indemnificationscope (\"mutual\", \"one-way\", \"none\")\n- paymentterms\n- slacommitments (array)\n- terminationforconvenience (boolean)\n- changeofcontrolclause (boolean)\n- datahandlingprovisions (summary)\n- noncompetescope (summary or \"none\")\n\nContract text:\n{{contracttext}}\n\nRespond with valid JSON only.", "model": "gpt-4", "maxtokens": 2000 }' ` `bash curl -X POST http://localhost:3009/actions \ -H "X-Org-Id: your-org" \ -H "Content-Type: application/json" \ -d '{ "actionname": "assessclauserisk", "kind": "llm", "template": "Assess the risk of this contract clause for our organization.\n\nClause: {{clausetext}}\nOur standard position: {{standardposition}}\nContract value: {{contractvalue}}\n\nClassify as:\n- low: Minor deviation, commercially acceptable\n- medium: Notable deviation, should be negotiated but not a blocker\n- high: Material risk, requires legal review before signing\n- critical: Unacceptable as written, must be changed\n\nRespond with JSON: {\"risklevel\": \"...\", \"reasoning\": \"...\", \"recommendation\": \"...\"}", "model": "gpt-4", "maxtokens": 500 }' ` Workflow Definition =================== `json { "name": "contractreviewpipeline", "definition": { "actions": [ { "type": "extractcontractterms", "properties": { "contracttext": "@input.contracttext" }, "outputKey": "extractedterms" }, { "type": "matcher", "properties": { "left": "@extractedterms.clauses", "right": "@input.playbookterms", "matchOn": ["clausetype"], "fuzzyThreshold": 80, "descriptionKey": "clausetext", "outputMatched": "standardclauses", "outputUnmatchedLeft": "deviations", "outputUnmatchedRight": "missingclauses" } }, { "type": "loop", "filter": { "condition": { "or": [ { "greaterThan": [{ "length": "@deviations" }, 0] }, { "greaterThan": [{ "length": "@missingclauses" }, 0] } ] } }, "properties": { "mode": "react", "objective": "Review these contract deviations and missing clauses.\n\nDeviations from playbook: {{deviations}}\nMissing standard clauses: {{missingclauses}}\nContract value: {{input.contractvalue}}\nCounterparty: {{input.counterparty}}\n\nFor each deviation: 1) Assess the risk level, 2) Search for precedent in past contracts, 3) If high/critical risk, draft redline language. 
4) For missing clauses, note whether they're required for this contract type. 5) Complete with a summary and overall risk score.", "tools": [ { "type": "action", "name": "assessclauserisk" }, { "type": "action", "name": "searchprecedent" }, { "type": "action", "name": "draftredline" } ], "model": "gpt-4", "maxiterations": 15, "timeoutms": 300000, "onstuck": { "iterations": 4, "action": "escalate" }, "resultkey": "reviewanalysis" } }, { "type": "PbotApproval", "filter": { "condition": { "or": [ { "greaterThan": ["@input.contractvalue", 100000] }, { "in": ["@reviewanalysis.overallrisk", ["high", "critical"]] } ] } }, "properties": { "comment": "Contract review complete for {{input.counterparty}}. Value: ${{input.contractvalue}}. Risk: {{reviewanalysis.overallrisk}}. Review AI analysis and approve or request changes.", "requestpayload": { "counterparty": "@input.counterparty", "contractvalue": "@input.contractvalue", "extractedterms": "@extractedterms", "deviations": "@deviations", "missingclauses": "@missingclauses", "aianalysis": "@reviewanalysis" } } }, { "type": "custom-table", "properties": { "table": "contractregistry", "operation": "write", "keys": ["contractid"], "values": ["@input.contractid"], "fields": { "counterparty": "@input.counterparty", "contractvalue": "@input.contractvalue", "effectivedate": "@extractedterms.effectivedate", "terminationdate": "@extractedterms.terminationdate", "liabilitycap": "@extractedterms.liabilitycap", "overallrisk": "@reviewanalysis.overallrisk", "deviationscount": "@deviations.length", "reviewedat": "@now", "reviewerapproved": "@_approved" } } } ] } } ` What the Agent Investigates =========================== For each deviation from your standard playbook, the agent: 1. Classifies risk. Uses the assessclauserisk LLM action with your standard position as context. A liability cap of $1M on a $50K contract is low risk. Unlimited liability on a $500K contract is critical. 2. Searches precedent. Queries your contract database for past agreements with similar deviations. "We accepted a 12-month non-compete from Vendor X in 2024" provides negotiating context. 3. Drafts redlines. For high/critical risks, generates alternative language that protects your position while remaining commercially reasonable. 4. Flags missing clauses. If your playbook requires a data processing addendum and the contract doesn't include one, the agent notes this as a gap requiring attention. Sample Reasoning Trace ====================== ` Iteration 1: Thought: "3 deviations found. Starting with the most concerning: unlimited liability clause." Action: assessclauserisk Input: { "clausetext": "Vendor shall be liable for all damages without limitation...", "standardposition": "Liability capped at 2x annual contract value", "contractvalue": "250000" } Result: { "risklevel": "critical", "reasoning": "Unlimited liability on $250K contract exposes us...", "recommendation": "Require cap at 2x annual value ($500K)" } Iteration 2: Thought: "Critical risk. Let me check if we've accepted unlimited liability from this vendor before." Action: searchprecedent Input: { "counterparty": "TechVendor Inc", "clausetype": "liability" } Result: { "found": true, "precedent": "2024 contract: liability capped at $300K" } Iteration 3: Thought: "We had a cap last time. This is a regression. Drafting redline." 
Action: draftredline Input: { "original": "Vendor shall be liable for all damages without limitation...", "position": "Cap at 2x annual value, consistent with 2024 agreement" } Result: { "suggestedlanguage": "Vendor's aggregate liability shall not exceed two times the annual fees..." } Iteration 4: Thought: "Logging progress and moving to next deviation." Action: logprogress Input: { "message": "Liability clause: CRITICAL. Redline drafted. Precedent: 2024 cap at $300K." } ... [continues for remaining deviations] ... Iteration 8: Thought: "All deviations reviewed. 1 critical (liability), 1 medium (auto-renewal), 1 low (governing law)." Action: complete Input: { "answer": "Overall risk: HIGH. 1 critical clause (unlimited liability — redline drafted), 1 medium (auto-renewal without notice period), 1 low (governing law difference). Recommend negotiation before signing.", "confidence": 0.9 } ` Customization ============= Contract types: Adjust the extractcontractterms template for your contract types — SaaS agreements, NDAs, MSAs, SOWs, vendor agreements. Each type can have a different extraction schema. Playbook terms: The @input.playbookterms dataset contains your standard acceptable positions. Maintain it as a custom table that legal updates as policies change. Risk thresholds: The approval filter triggers on contracts over $100K or high/critical risk. Adjust to your organization's materiality thresholds. Approval routing: For different contract types, configure different approval flows — procurement approves vendor agreements, legal approves customer-facing terms, finance approves payment terms beyond standard. Term database: The contractregistry custom table builds a searchable database of all extracted terms across every contract. This enables portfolio-level analysis — "how many contracts have unlimited liability?" — and feeds the precedent search tool.
from HRIS"] --> B["Orchestrator Agent
Coordinates full onboarding"] B --> C["runworkflow
IT Provisioning"] B --> D["runworkflow
Benefits Enrollment"] B --> E["runworkflow
Training Assignment"] B --> F["runworkflow
Equipment Ordering"] C --> G{"All systems
provisioned?"} D --> G E --> G F --> G G -->|"Yes"| H["✅ Onboarding Complete
Welcome email sent"] G -->|"Conflicts / failures"| I["Human Review
HR Coordinator"] I --> H style B fill:#dcfce7,stroke:#16a34a style C fill:#f3e8ff,stroke:#7c3aed style D fill:#f3e8ff,stroke:#7c3aed style E fill:#f3e8ff,stroke:#7c3aed style F fill:#f3e8ff,stroke:#7c3aed style I fill:#dbeafe,stroke:#2563eb ` Deployment pattern: Agent-as-orchestrator (Pattern C). The agent is the decision-maker, each sub-workflow is a capability it invokes via runworkflow_. The agent decides sequencing, handles dependencies, and manages exceptions. Sub-Workflows ============= The orchestrator coordinates four domain-specific workflows. Each is a standard Hyphen workflow that can also be run independently. IT Provisioning Workflow ======================== Provisions accounts across all required systems: `json { "name": "itprovisioning", "definition": { "actions": [ { "type": "createadaccount", "properties": { "firstname": "@input.firstname", "lastname": "@input.lastname", "department": "@input.department", "role": "@input.role" } }, { "type": "provisionemail", "properties": { "username": "@input.email", "distributionlists": "@input.dlmembership" } }, { "type": "assignlicenses", "properties": { "useremail": "@input.email", "licenses": "@input.requiredlicenses" } }, { "type": "grantsystemaccess", "filter": { "condition": { "greaterThan": [{ "length": "@input.systemaccess" }, 0] } }, "properties": { "useremail": "@input.email", "systems": "@input.systemaccess" } }, { "type": "PbotApproval", "filter": { "condition": { "in": ["@input.accesslevel", ["admin", "elevated"]] } }, "properties": { "comment": "New hire {{input.firstname}} {{input.lastname}} requires {{input.accesslevel}} access. Manager approval needed.", "requestpayload": { "employee": "@input.firstname", "role": "@input.role", "systems": "@input.systemaccess", "accesslevel": "@input.accesslevel" } } } ] } } ` Benefits Enrollment Workflow ============================ `json { "name": "benefitsenrollment", "definition": { "actions": [ { "type": "checkeligibility", "properties": { "employeetype": "@input.employeetype", "startdate": "@input.startdate", "location": "@input.location" } }, { "type": "createbenefitsprofile", "properties": { "employeeid": "@input.employeeid", "eligibleplans": "@eligibility.plans" } }, { "type": "PbotForm", "properties": { "expectedkeys": ["healthplan", "dentalplan", "visionplan", "retirementcontribution", "lifeinsurance"], "ttlseconds": 604800, "reminderintervals": [172800, 432000] } }, { "type": "enrollbenefits", "properties": { "employeeid": "@input.employeeid", "selections": "@formData" } } ] } } ` Training Assignment Workflow ============================ `json { "name": "trainingassignment", "definition": { "actions": [ { "type": "lookuprolerequirements", "properties": { "role": "@input.role", "department": "@input.department", "location": "@input.location" } }, { "type": "loop", "properties": { "mode": "foreach", "itemspath": "@rolerequirements.requiredcourses", "itemvariablename": "course", "actionstoexecute": [ { "type": "assigncourse", "properties": { "employeeid": "@input.employeeid", "courseid": "@course.id", "duedate": "@course.duedate" } } ], "maxconcurrency": 5, "failurestrategy": "continueonerror", "resultkey": "enrollmentresults" } } ] } } ` Equipment Ordering Workflow =========================== `json { "name": "equipmentordering", "definition": { "actions": [ { "type": "lookupstandardkit", "properties": { "role": "@input.role", "location": "@input.location" } }, { "type": "PbotApproval", "filter": { "condition": { "greaterThan": ["@standardkit.totalcost", 3000] } }, 
"properties": { "comment": "Equipment order for {{input.firstname}} {{input.lastname}} exceeds $3,000. Items: {{standardkit.items}}. Approve?", "requestpayload": { "items": "@standardkit.items", "totalcost": "@standardkit.totalcost" } } }, { "type": "submitequipmentorder", "properties": { "employeeid": "@input.employeeid", "items": "@standardkit.items", "shipto": "@input.worklocation" } } ] } } ` Orchestrator Agent ================== The top-level agent coordinates everything: `json { "name": "employeeonboardingorchestrator", "definition": { "actions": [ { "type": "loop", "properties": { "mode": "react", "objective": "Onboard new employee:\n\nName: {{input.firstname}} {{input.lastname}}\nRole: {{input.role}}\nDepartment: {{input.department}}\nStart date: {{input.startdate}}\nLocation: {{input.location}}\nEmployee type: {{input.employeetype}}\n\nSteps:\n1) Run IT provisioning workflow — wait for completion\n2) Run benefits enrollment workflow — this may take days (PbotForm), run async\n3) Run training assignment workflow — wait for completion\n4) Run equipment ordering workflow — wait for completion\n5) If any workflow fails, log the failure and pause for HR review\n6) Once IT provisioning and training are done, send welcome email\n7) Complete with a summary of all onboarding steps and their status", "tools": [ { "type": "action", "name": "gmailsend" }, { "type": "workflow", "id": "wfitprovisioning" }, { "type": "workflow", "id": "wfbenefitsenrollment" }, { "type": "workflow", "id": "wftrainingassignment" }, { "type": "workflow", "id": "wfequipmentordering" } ], "model": "gpt-4", "maxiterations": 20, "timeoutms": 600000, "onstuck": { "iterations": 4, "action": "escalate" }, "resultkey": "onboardingresult" } }, { "type": "custom-table", "properties": { "table": "onboardinglog", "operation": "write", "keys": ["employeeid"], "values": ["@input.employeeid"], "fields": { "name": "@input.firstname", "role": "@input.role", "startdate": "@input.startdate", "onboardingstatus": "@onboardingresult.answer", "completedat": "@now" } } } ] } } ` Sample Reasoning Trace ====================== ` Iteration 1: Thought: "Starting onboarding for Sarah Chen, Senior Engineer, Platform Team. First: IT provisioning." Action: runworkflow_ Input: { "workflowname": "itprovisioning", "input": { "firstname": "Sarah", "lastname": "Chen", "department": "Engineering", "role": "Senior Engineer", "email": "sarah.chen@company.com", "requiredlicenses": ["GitHub Enterprise", "AWS Console", "Datadog"], "systemaccess": ["productionk8s", "stagingdb"], "accesslevel": "elevated" }, "wait": true } Result: { "status": "completed", "context": { "approved": true } } Iteration 2: Thought: "IT provisioned. Elevated access was approved by manager. Storing this and moving to benefits." Action: storememory Input: { "key": "itstatus", "value": "completed — elevated access approved" } Iteration 3: Thought: "Benefits enrollment requires employee input (plan selections). Running async — it may take days." Action: _runworkflow_ Input: { "workflowname": "benefitsenrollment", "input": { "employeeid": "EMP-4521", "employeetype": "fulltime", "startdate": "2026-02-15", "location": "San Francisco" }, "wait": false } Result: { "workflowrunid": "runben789", "status": "running" } Iteration 4: Thought: "Benefits running async. Proceeding with training and equipment in parallel." Action: logprogress Input: { "message": "IT ✅ | Benefits ⏳ (async) | Training: starting | Equipment: pending" } Iteration 5: Thought: "Assigning required training." 
Action: runworkflow Input: { "workflowname": "trainingassignment", "input": { "employeeid": "EMP-4521", "role": "Senior Engineer", "department": "Engineering", "location": "San Francisco" }, "wait": true } Result: { "status": "completed", "context": { "coursesassigned": 7 } } Iteration 6: Thought: "7 courses assigned. Now equipment." Action: runworkflow_ Input: { "workflowname": "equipmentordering", "input": { "employeeid": "EMP-4521", "firstname": "Sarah", "lastname": "Chen", "role": "Senior Engineer", "location": "San Francisco", "worklocation": "SF Office" }, "wait": true } Result: { "status": "completed", "context": { "orderid": "EQ-8832" } } Iteration 7: Thought: "IT, training, and equipment complete. Benefits still async. Sending welcome email now." Action: gmailsend Input: { "oauthaccount": "hr@company.com", "to": "sarah.chen@company.com", "subject": "Welcome to the team, Sarah!", "body": "Hi Sarah,\n\nWe're excited to have you join the Platform Team on Feb 15!\n\nYour accounts are set up:\n- Email: sarah.chen@company.com\n- GitHub, AWS, and Datadog access: ready\n- 7 training courses assigned in the LMS\n- Equipment shipping to SF Office\n\nYou'll receive a separate email about benefits enrollment. Please complete your plan selections within 7 days.\n\nSee you on the 15th!" } Result: { "sent": true } Iteration 8: Thought: "Welcome email sent. All synchronous steps complete. Benefits enrollment is async and will complete when Sarah submits her selections." Action: complete_ Input: { "answer": "Onboarding 90% complete. IT provisioned (elevated access approved), 7 training courses assigned, equipment ordered (EQ-8832), welcome email sent. Benefits enrollment pending employee input (async).", "confidence": 0.95, "summary": "4 workflows triggered, 3 completed, 1 async pending" } ` Why Agent-as-Orchestrator ========================= This template demonstrates why a human coordinator is the bottleneck in onboarding — they manually sequence across disconnected systems, track which steps are done, handle exceptions, and send follow-up communications. The orchestrator agent replicates that judgment: | Human coordinator does | Orchestrator agent does | |----------------------|------------------------| | Checks which systems need provisioning | Reads role requirements, triggers itprovisioning | | Follows up on pending approvals | Waits for PbotApproval, escalates if stuck | | Decides sequencing (IT before training) | Reasons about dependencies, runs workflows in logical order | | Sends welcome email once ready | Sends email after confirming IT + training are done | | Tracks overall status | _logprogress at each milestone | | Escalates when things go wrong | pauseforhuman__ when workflows fail | The difference: the agent does it in minutes, across any number of simultaneous new hires, with a complete audit trail. Customization ============= HRIS integration: Replace @input with a webhook trigger from your HRIS (Workday, BambooHR, Rippling) that fires when a new hire record is created. Sub-workflows: Add or remove sub-workflows based on your onboarding process — background check, office badge provisioning, parking assignment, team buddy assignment. Each is a standard Hyphen workflow the agent can invoke. Approval routing: IT provisioning requires manager approval for elevated access. Equipment requires finance approval above $3K. Adjust thresholds and approval routing per your policies. Timeline handling: Benefits enrollment uses PbotForm` with a 7-day TTL and reminders at 48h and 5 days. 
The orchestrator runs this async so it doesn't block the rest of onboarding. Offboarding: Mirror this template for employee offboarding — revoke access, recover equipment, process final pay, transfer knowledge base ownership. Same orchestrator pattern, reverse direction. --- ## PATH: Actions > Index (Source: actions/index.md) Actions ======= Actions are reusable operations you register once and reference by name. Every action has a kind that determines how it executes. The Five Action Kinds ===================== | Kind | What It Does | Example | |------|-------------|---------| | HTTP (/actions/http) | Call external REST APIs | POST to Salesforce, GET from Stripe | | LLM (/actions/llm) | AI text generation | Summarize a document, extract entities | | DB (/actions/db) | Query databases | SELECT from your data warehouse | | Matcher (/actions/matcher) | Pre-configured matching rules | Invoice-to-payment matching with saved config | | Custom Table (/actions/custom-table) | Table operations | Read/write to Hyphen-managed tables | Works Everywhere ================ All registered actions work in three contexts: | Context | How It's Used | |---------|--------------| | Workflow step | "type": "myaction" in the actions array | | Foreach loop | Inside actionstoexecute — runs once per item | | ReAct agent tool | Listed in tools — agent decides when to call it | Registration ============ Register an action via API, then use its actionname as a step type: ``bash Register ======== curl -X POST http://localhost:3009/actions \ -H "X-Org-Id: acme-corp" \ -H "Content-Type: application/json" \ -d '{ "actionname": "fetchcustomer", "kind": "http", "url": "https://api.crm.com/customers/{{customerid}}", "httpmethod": "GET", "headers": { "Authorization": "Bearer orgconfig:api:crmtoken" } }' ` `json // Use in a workflow { "type": "fetchcustomer", "properties": { "customerid": "@input.id" } } ` api POST /actions Register a new action. api GET /actions List all registered actions for the organization. api GET /actions/:id Get details of a specific action. api PUT /actions/:id Update an existing action. Common Properties ================= These properties apply to all action kinds: | Property | Type | Required | Description | |----------|------|----------|-------------| | actionname | string | Yes | Unique name within the org — used as type in workflows | | kind | string | Yes | "http", "llm", "db", "matcher", or "custom-table" | | description | string | No | Human-readable description | | passthrough | boolean | No | If true, the raw response is passed through to context (default: false) | | outputKey` | string | No | Custom context key for the action's output | --- ## PATH: Agents > Deployment Patterns > Index (Source: agents/deployment-patterns/index.md) Deployment Patterns =================== Hyphen agents operate in three patterns depending on where autonomy lives in your architecture. All three share the same governance model: structural permissioning, reasoning traces, stuck detection, and human escalation. 
Decision Matrix =============== | Pattern | Agent's Role | Best For | Autonomy Level | |---------|-------------|----------|---------------| | Agent as Step (/agents/deployment-patterns/agent-as-step) | One step in a deterministic workflow | Mostly-deterministic processes with one reasoning-heavy step | Low — workflow controls flow | | Agent as Trigger (/agents/deployment-patterns/agent-as-trigger) | Smart ingestion and routing layer | Unstructured input that needs classification before processing | Medium — agent decides which workflow | | Agent as Orchestrator (/agents/deployment-patterns/agent-as-orchestrator) | Coordinator of multiple workflows | Multi-step processes requiring dynamic coordination | High — agent controls the entire process | ``mermaid flowchart LR subgraph "Pattern A: Step" WA["Workflow"] --> AA["Agent Step"] --> WA2["More Steps"] end subgraph "Pattern B: Trigger" Input["Unstructured Input"] --> AB["Agent"] --> WB["Workflow"] end subgraph "Pattern C: Orchestrator" AC["Agent"] --> W1["Workflow 1"] AC --> W2["Workflow 2"] AC --> W3["Workflow 3"] end `` Choosing a Pattern ================== Start with Pattern A (agent as step) if you have an existing deterministic process that needs AI judgment at one point — for example, a reconciliation workflow where matched records are processed automatically but exceptions need investigation. Use Pattern B (agent as trigger) when input arrives in unstructured form and you need to classify, extract, and route before processing begins — for example, incoming emails that could be invoices, support requests, or vendor inquiries. Use Pattern C (agent as orchestrator) when the process itself is dynamic — the agent needs to decide what to do next based on results from previous steps — for example, customer onboarding where sanctions screening results determine whether enhanced due diligence is needed. --- ## PATH: Agents > Index (Source: agents/index.md) Agents ====== Hyphen agents are operational decision-makers. They reason step-by-step, use tools, observe results, and iterate toward an objective. They are not chatbots. They process, investigate, classify, recommend, and act — with every thought and action captured in a reasoning trace. The key constraint: agents operate inside a cage defined by the workflow spec. They can only use tools you've explicitly declared. They can't discover or invent capabilities. If the spec says they can look up tickets and send emails, that's all they can do — even if the LLM is capable of much more. ``mermaid flowchart TD subgraph "The Agent Cage" O["Objective"] --> T["Think"] T --> A["Act (declared tools only)"] A --> Ob["Observe result"] Ob --> T Ob --> C["Complete"] end Spec["Workflow Spec"] -.->|defines tools,
iterations, guardrails| O H["Human"] -.->|reviews when
agent escalates| A ` Two Ways to Run Agents ====================== As a workflow step. The agent is one step inside a larger deterministic workflow. Use type: "loop" with mode: "react" — see Loop primitive (/primitives/loop). `json { "type": "loop", "properties": { "mode": "react", "objective": "Investigate this exception", "tools": [{ "type": "action", "name": "lookuprecord" }], "maxiterations": 10 } } ` As a standalone agent. Execute directly via the agent API, outside any workflow. Use for ad-hoc tasks, agent-as-trigger, and orchestrator patterns. `bash POST /agents/execute ` api POST /agents/execute Execute a standalone agent. Supports sync (wait for result) and async (?async=true, poll for status) modes. api GET /agents/:id/status Get agent run status, optionally with full reasoning trace (?includetrace=true). api GET /agents/:id/trace Get the complete reasoning trace for an agent run. api POST /agents/:id/resume Resume a paused agent with human input. What's in This Section ====================== | Page | Description | |------|-------------| | ReAct Loop (/agents/react-loop) | The think → act → observe cycle, prompt construction, iteration lifecycle | | Built-in Tools (/agents/built-in-tools) | complete, runworkflow, pauseforhuman__`, memory, logging | | Tool Declarations (/agents/tool-declarations) | Typed tool declarations — action tools, workflow tools, and legacy formats | | Stuck Detection (/agents/stuck-detection) | Recovery strategies when agents loop without progress | | Reasoning Traces (/agents/reasoning-traces) | Audit trail format, querying, compliance, secret redaction | | Deployment Patterns (/agents/deployment-patterns) | Three patterns: agent as step, trigger, or orchestrator | --- ## PATH: Api > Index (Source: api/index.md) API Reference ============= Hyphen is API-first. Every capability — workflow creation, execution, agent orchestration, approvals, OAuth configuration — is an HTTP endpoint. Base URL ======== `` http://localhost:3009 ` Replace with your deployed instance URL in production. Authentication ============== All requests require the X-Org-Id header for multi-tenant isolation: `bash curl -H "X-Org-Id: your-org-id" http://localhost:3009/workflows ` Every workflow, run, action, table, and credential is scoped to the organization specified in this header. OpenAPI Specification ===================== The full interactive API spec is available here: - Hyphen API (OpenAPI 3.0) (/api/openapi) — Every endpoint, request/response schema, and parameter documented Endpoints at a Glance ===================== Health & Discovery ================== api GET /health Verify the API is running. api GET /builtin-actions List pre-registered OAuth business tools (Gmail, Slack, Outlook). Organization Configuration ========================== api POST /org-config Store or update an encrypted configuration key (API keys, connection strings, secrets). api GET /org-config List all configuration keys (values masked). Actions ======= api POST /actions Register a custom action (HTTP, LLM, DB, matcher, or custom-table). api GET /actions List all registered actions. api GET /actions/:id Get action details. api PUT /actions/:id Update an action. Workflows ========= api POST /workflows Create a workflow definition. api GET /workflows List all workflows. api GET /workflows/:id Get workflow details. api PUT /workflows/:id Update a workflow. api DELETE /workflows/:id Delete a workflow. Execution ========= api POST /workflows/:id/execute Execute a workflow with an input payload. 
Returns a run ID. api GET /runs/:runId/status Get current run status, context, and step outputs. Approvals ========= api POST /approvals/:runId/:stepIndex Submit an approval decision for a paused workflow. api GET /approvals/:runId List pending approvals for a run. Standalone Agents ================= api POST /agents/execute Execute an AI agent outside of a workflow. Supports sync and async modes (?async=true). api GET /agents/:id/status Get agent status. Add ?includetrace=true for reasoning trace. api GET /agents/:id/trace Get the full reasoning trace (audit trail) — every thought, action, and observation. api POST /agents/:id/resume Resume a paused agent with human input. api DELETE /agents/:id Cancel a running agent. api GET /agents List agent runs. Filter by ?status=paused, ?status=running, etc. Custom Tables ============= api POST /custom-tables Create a new table with typed fields. api POST /custom-tables/:name/insert Insert a record. api POST /custom-tables/:name/bulk-insert Insert multiple records. api PUT /custom-tables/:name/update Update records matching criteria. api GET /custom-tables/:name View all data in a table. api GET /custom-tables/:name/audit View the audit log for a table. External Forms ============== api POST /external-forms Create an external-facing form for data collection. api POST /external-forms/:id/submit Submit a form response. api GET /external-forms/:id/submissions Get all submissions. AI Workflow Generation ====================== api POST /ai/generate-workflow Generate a workflow from a natural language description. api GET /ai/generate-workflow/:id/status Check generation status. api GET /ai/generate-workflow/:id Retrieve the generated workflow spec. api POST /workflows/create-from-ai Create a workflow from AI-generated output (registers actions and tables too). OAuth Business Tools ==================== api GET /oauth/providers List available OAuth providers and configuration status. api POST /oauth/:provider/app-credentials Store OAuth app credentials (clientid, clientsecret). api GET /oauth/:provider/authorize Get authorization URL for user consent flow. api GET /oauth/connections List all OAuth connections for the organization. Error Responses =============== All endpoints return standard HTTP status codes: | Status | Meaning | |--------|---------| | 200 | Success | | 201 | Created | | 400 | Bad request — invalid parameters or workflow definition | | 404 | Not found — workflow, run, or action doesn't exist | | 409 | Conflict — duplicate action name or table name | | 500 | Server error | Error responses include a message field: `json { "error": "Workflow not found", "message": "No workflow with id 'wfxyz' for org 'your-org'" } ` Rate Limiting ============= Rate limits are applied per organization. Monitor usage via the admin metrics endpoint. For AI agents: Fetch /llms.txt` (/llms.txt) for the complete documentation in structured text optimized for LLM consumption. --- ## PATH: Api > Openapi (Source: api/openapi.json) # Hyphen API

Complete REST API for the Hyphen platform — infrastructure for governed autonomous AI. Build workflows, execute AI agents, manage approvals, and orchestrate business operations with full audit trails.

Version: 2.0.0 Base URL: http://localhost:3009 ## Actions

Register and manage custom actions

### GET /actions List All Actions

List all registered actions for the organization

**Parameters:** - `X-Org-Id` [header]: string (required) **Responses:** - 200: Success ### POST /actions Register DB Action

Register a DB action for database queries (MySQL, Postgres, MongoDB, Neo4j)

**Parameters:** - `X-Org-Id` [header]: string (required) **Request Body:** Content-Type: application/json **Responses:** - 200: Success - 201: Created ### GET /actions/{actionId} Get Action by ID

Get details of a specific registered action

**Parameters:** - `X-Org-Id` [header]: string (required) - `actionId` [path]: string (required) **Responses:** - 200: Success ### PUT /actions/{actionId} Update Action

Update an existing registered action

**Parameters:** - `X-Org-Id` [header]: string (required) - `actionId` [path]: string (required) **Request Body:** Content-Type: application/json **Responses:** - 200: Success ## Agents

Standalone AI agent execution
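A minimal sketch of the async pattern these endpoints describe. The paths and query parameters come from this spec; the request-body fields (mode, objective, tools, maxiterations) are assumed to mirror the react-loop step properties shown earlier and may differ in the actual schema:

```bash
# Start an agent asynchronously (?async=true) and capture the returned run ID
# (body fields assumed from the react-loop configuration)
curl -X POST "http://localhost:3009/agents/execute?async=true" \
  -H "X-Org-Id: acme-corp" \
  -H "Content-Type: application/json" \
  -d '{
    "mode": "react",
    "objective": "Investigate this exception",
    "tools": [{ "type": "action", "name": "lookuprecord" }],
    "maxiterations": 10
  }'

# Poll status; include_trace returns the full reasoning trace
curl "http://localhost:3009/agents/$AGENT_RUN_ID/status?include_trace=true" \
  -H "X-Org-Id: acme-corp"
```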

### GET /agents List Agent Runs by Status

List agent runs filtered by status (running, paused, completed, failed)

**Parameters:** - `X-Org-Id` [header]: string (required) - `status` [query]: string **Responses:** - 200: Success ### POST /agents/execute Execute Agent (Async)

Execute an agent asynchronously and poll for status

**Parameters:** - `X-Org-Id` [header]: string (required) - `async` [query]: string **Request Body:** Content-Type: application/json **Responses:** - 200: Success - 201: Created ### DELETE /agents/{agentRunId} Cancel Agent

Cancel a running agent

**Parameters:** - `X-Org-Id` [header]: string (required) - `agentRunId` [path]: string (required) **Request Body:** Content-Type: application/json **Responses:** - 200: Success - 201: Created ### GET /agents/{agentRunId}/status Get Agent Status with Trace

Resume a paused agent with human input

**Parameters:** - `X-Org-Id` [header]: string (required) - `agentRunId` [path]: string (required) - `include_trace` [query]: string **Responses:** - 200: Success ### GET /agents/{agentRunId}/trace Get Agent Reasoning Trace

Get agent status including full reasoning trace

**Parameters:** - `X-Org-Id` [header]: string (required) - `agentRunId` [path]: string (required) **Responses:** - 200: Success ## AI Generation

Get full reasoning trace for an agent run (audit trail)

**Parameters:** - `X-Org-Id` [header]: string (required) - `agentRunId` [path]: string (required) **Responses:** - 200: Success ## AI Generation

AI-powered workflow generation from natural language
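A hedged sketch of the generate-then-poll flow. The paths are from this spec; the `description` field name in the request body is an assumption, since the body schema isn't reproduced in this export:

```bash
# Kick off AI workflow generation from a natural language description
# ("description" field name assumed; check the request-body schema)
curl -X POST http://localhost:3009/ai/generate-workflow \
  -H "X-Org-Id: acme-corp" \
  -H "Content-Type: application/json" \
  -d '{ "description": "Match invoices to payments and escalate exceptions" }'

# Poll generation status, then fetch the generated workflow spec
curl "http://localhost:3009/ai/generate-workflow/$GENERATION_ID/status" \
  -H "X-Org-Id: acme-corp"
curl "http://localhost:3009/ai/generate-workflow/$GENERATION_ID" \
  -H "X-Org-Id: acme-corp"
```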

### POST /ai/generate-workflow Initiate AI Generation

Submit natural language description for AI workflow generation

**Parameters:** - `X-Org-Id` [header]: string (required) **Request Body:** Content-Type: application/json **Responses:** - 200: Success - 201: Created ### GET /ai/generate-workflow/{generationId} Get Generated Workflow

Retrieve the AI-generated workflow specification

**Parameters:** - `X-Org-Id` [header]: string (required) - `generationId` [path]: string (required) **Responses:** - 200: Success ### GET /ai/generate-workflow/{generationId}/status Get Generation Status

Check status of AI generation request

**Parameters:** - `X-Org-Id` [header]: string (required) - `generationId` [path]: string (required) **Responses:** - 200: Success ### POST /workflows/create-from-ai Create Workflow from AI Output

Create workflow, actions, and tables from AI-generated specification

**Parameters:** - `X-Org-Id` [header]: string (required) **Request Body:** Content-Type: application/json **Responses:** - 200: Success - 201: Created ### GET /workflows/generation-status/{processId} Get Creation Process Status

Get status of workflow creation process

**Parameters:** - `X-Org-Id` [header]: string (required) - `processId` [path]: string (required) **Responses:** - 200: Success ## Approvals

Human-in-the-loop approval management
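A sketch of checking and submitting an approval for a paused run. The decision-body fields shown (`approved`, `comment`) are illustrative only — the actual request-body schema isn't reproduced in this export:

```bash
# List pending approvals for a paused workflow run
curl "http://localhost:3009/approvals/$RUN_ID" \
  -H "X-Org-Id: acme-corp"

# Submit a decision for the paused step (body fields illustrative)
curl -X POST "http://localhost:3009/approvals/$RUN_ID/$STEP_INDEX" \
  -H "X-Org-Id: acme-corp" \
  -H "Content-Type: application/json" \
  -d '{ "approved": true, "comment": "Write-off within tolerance" }'
```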

### GET /approvals/{runId} List Pending Approvals

List all approval requests for a workflow run

**Parameters:** - `X-Org-Id` [header]: string (required) - `runId` [path]: string (required) **Responses:** - 200: Success ### POST /approvals/{runId}/{stepIndex} Submit Approval

Submit approval for a paused workflow step

**Parameters:** - `X-Org-Id` [header]: string (required) - `runId` [path]: string (required) - `stepIndex` [path]: integer (required) Step index of the approval step in the workflow **Request Body:** Content-Type: application/json **Responses:** - 200: Success - 201: Created ## Health

Health check and discovery
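A sketch of the discovery calls listed below. The spec lists no parameters for /health, while /builtin-actions is org-scoped:

```bash
# Liveness check
curl http://localhost:3009/health

# List the pre-registered OAuth business tools available to your org
curl http://localhost:3009/builtin-actions \
  -H "X-Org-Id: acme-corp"
```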

### GET /builtin-actions List Built-in Actions

List all pre-registered OAuth business tools (Gmail, Slack, Outlook)

**Parameters:** - `X-Org-Id` [header]: string (required) **Responses:** - 200: Success ### GET /health Health Check

Verify API is running and healthy

**Responses:** - 200: Success ## Custom Tables

Multi-tenant data storage
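A sketch of writing to and reading from a custom table. The table name and record fields are placeholders, and the insert payload shape is assumed — the export omits the request-body schemas:

```bash
# Insert a record into an existing custom table (payload shape assumed)
curl -X POST http://localhost:3009/custom-tables/reconciliationlog/insert \
  -H "X-Org-Id: acme-corp" \
  -H "Content-Type: application/json" \
  -d '{ "invoiceid": "INV-1042", "status": "matched", "amount": 1250.00 }'

# View table contents and the table's audit log
curl http://localhost:3009/custom-tables/reconciliationlog \
  -H "X-Org-Id: acme-corp"
curl http://localhost:3009/custom-tables/reconciliationlog/audit \
  -H "X-Org-Id: acme-corp"
```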

### POST /custom-tables Create Custom Table

Create a new custom table

**Parameters:** - `X-Org-Id` [header]: string (required) **Request Body:** Content-Type: application/json **Responses:** - 200: Success - 201: Created ### GET /custom-tables/{customTableName} View Table Data

View all data in a custom table

**Parameters:** - `X-Org-Id` [header]: string (required) - `customTableName` [path]: string (required) **Responses:** - 200: Success ### DELETE /custom-tables/{customTableName} Drop Custom Table

Delete a custom table

**Parameters:** - `X-Org-Id` [header]: string (required) - `customTableName` [path]: string (required) **Responses:** - 200: Success ### GET /custom-tables/{customTableName}/audit View Table Audit Log

View audit log for a custom table

**Parameters:** - `X-Org-Id` [header]: string (required) - `customTableName` [path]: string (required) **Responses:** - 200: Success ### POST /custom-tables/{customTableName}/bulk-insert Bulk Insert Records

Insert multiple records at once

**Parameters:** - `X-Org-Id` [header]: string (required) - `customTableName` [path]: string (required) **Request Body:** Content-Type: application/json **Responses:** - 200: Success - 201: Created ### POST /custom-tables/{customTableName}/insert Insert Record

Insert a single record into a custom table

**Parameters:** - `X-Org-Id` [header]: string (required) - `customTableName` [path]: string (required) **Request Body:** Content-Type: application/json **Responses:** - 200: Success - 201: Created ### PUT /custom-tables/{customTableName}/update Update Records

Update records matching criteria

**Parameters:** - `X-Org-Id` [header]: string (required) - `customTableName` [path]: string (required) **Request Body:** Content-Type: application/json **Responses:** - 200: Success ## External Forms

External-facing form management

### GET /external-forms List Forms

List all external forms

**Parameters:** - `X-Org-Id` [header]: string (required) **Responses:** - 200: Success ### POST /external-forms Create External Form

Create an external form

**Parameters:** - `X-Org-Id` [header]: string (required) **Request Body:** Content-Type: application/json **Responses:** - 200: Success - 201: Created ### GET /external-forms/{formId} Get Form

Get form details

**Parameters:** - `X-Org-Id` [header]: string (required) - `formId` [path]: string (required) **Responses:** - 200: Success ### PUT /external-forms/{formId} Update Form

Update an external form

**Parameters:** - `X-Org-Id` [header]: string (required) - `formId` [path]: string (required) **Request Body:** Content-Type: application/json **Responses:** - 200: Success ### DELETE /external-forms/{formId} Delete Form

Delete an external form

**Parameters:** - `X-Org-Id` [header]: string (required) - `formId` [path]: string (required) **Responses:** - 200: Success ### GET /external-forms/{formId}/analytics Get Form Analytics

Get analytics for a form

**Parameters:** - `X-Org-Id` [header]: string (required) - `formId` [path]: string (required) **Responses:** - 200: Success ### GET /external-forms/{formId}/submissions Get Form Submissions

Get all submissions for a form

**Parameters:** - `X-Org-Id` [header]: string (required) - `formId` [path]: string (required) **Responses:** - 200: Success ### POST /external-forms/{formId}/submit Submit Form

Submit a form response

**Parameters:** - `X-Org-Id` [header]: string (required) - `formId` [path]: string (required) **Request Body:** Content-Type: application/json **Responses:** - 200: Success - 201: Created ## Internal Forms

Workflow-bound form submissions (PbotForm)

### GET /forms/pending List Pending Forms

List all pending form requests

**Parameters:** - `X-Org-Id` [header]: string (required) **Responses:** - 200: Success ### POST /forms/{runId}/{stepIndex}/submit Submit Form Data

Submit form data for a paused PbotForm step

**Parameters:** - `X-Org-Id` [header]: string (required) - `runId` [path]: string (required) - `stepIndex` [path]: integer (required) Step index of the PbotForm step **Request Body:** Content-Type: application/json **Responses:** - 200: Success - 201: Created ## OAuth

OAuth business tool integrations
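A sketch of the provider setup flow described in the Integrations section: store app credentials, fetch the authorization URL, then list connections. The credential field names follow the index ("clientid, clientsecret") and the `gmail` path segment is assumed — check GET /oauth/providers for the exact provider keys:

```bash
# Store your OAuth app credentials for a provider (field names assumed)
curl -X POST http://localhost:3009/oauth/gmail/app-credentials \
  -H "X-Org-Id: acme-corp" \
  -H "Content-Type: application/json" \
  -d '{ "clientid": "your-client-id", "clientsecret": "your-client-secret" }'

# Get the authorization URL to redirect a user for consent
curl "http://localhost:3009/oauth/gmail/authorize?return_url=https://yourapp.example/callback" \
  -H "X-Org-Id: acme-corp"

# List connected accounts for the organization
curl http://localhost:3009/oauth/connections \
  -H "X-Org-Id: acme-corp"
```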

### GET /oauth/connections List OAuth Connections

List all OAuth connections for the organization

**Parameters:** - `X-Org-Id` [header]: string (required) **Responses:** - 200: Success ### GET /oauth/connections/{oauthProvider}/{accountEmail} Validate OAuth Connection

Validate a specific OAuth connection

**Parameters:** - `X-Org-Id` [header]: string (required) - `oauthProvider` [path]: string (required) - `accountEmail` [path]: string (required) Email of the connected OAuth account **Responses:** - 200: Success ### DELETE /oauth/connections/{oauthProvider}/{accountEmail} Remove OAuth Connection

Remove an OAuth connection

**Parameters:** - `X-Org-Id` [header]: string (required) - `oauthProvider` [path]: string (required) - `accountEmail` [path]: string (required) Email of the connected OAuth account **Responses:** - 200: Success ### POST /oauth/connections/{oauthProvider}/{accountEmail}/refresh Refresh OAuth Token

Force refresh an OAuth token

**Parameters:** - `X-Org-Id` [header]: string (required) - `oauthProvider` [path]: string (required) - `accountEmail` [path]: string (required) Email of the connected OAuth account **Responses:** - 200: Success - 201: Created ### GET /oauth/providers List OAuth Providers

List available OAuth providers and their configuration status

**Parameters:** - `X-Org-Id` [header]: string (required) **Responses:** - 200: Success ### GET /oauth/{oauthProvider}/app-credentials Check OAuth App Credentials

Check if OAuth app credentials are configured (doesn't return secrets)

**Parameters:** - `X-Org-Id` [header]: string (required) - `oauthProvider` [path]: string (required) **Responses:** - 200: Success ### POST /oauth/{oauthProvider}/app-credentials Store OAuth App Credentials

Store OAuth app credentials (clientid, clientsecret) for a provider

**Parameters:** - `X-Org-Id` [header]: string (required) - `oauthProvider` [path]: string (required) **Request Body:** Content-Type: application/json **Responses:** - 200: Success - 201: Created ### GET /oauth/{oauthProvider}/authorize Get Authorization URL

Get OAuth authorization URL to redirect user for consent

**Parameters:** - `X-Org-Id` [header]: string (required) - `oauthProvider` [path]: string (required) - `return_url` [query]: string **Responses:** - 200: Success ## Org Config

Organization-level configuration and secrets
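A sketch of storing a secret that workflows can later reference via the orgconfig: prefix. The body field names (`key`, `value`) are assumptions — the export doesn't include the request-body schema:

```bash
# Upsert an encrypted configuration key (field names assumed)
curl -X POST http://localhost:3009/org-config \
  -H "X-Org-Id: acme-corp" \
  -H "Content-Type: application/json" \
  -d '{ "key": "api:stripekey", "value": "sk_live_placeholder" }'

# List configuration keys — values come back masked
curl http://localhost:3009/org-config \
  -H "X-Org-Id: acme-corp"
```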

### GET /org-config List All Config

List all configuration keys for the organization (values are masked)

**Parameters:** - `X-Org-Id` [header]: string (required) **Responses:** - 200: Success ### POST /org-config Upsert Config Key

Store or update a configuration key for the organization

**Parameters:** - `X-Org-Id` [header]: string (required) **Request Body:** Content-Type: application/json **Responses:** - 200: Success - 201: Created ## Execution

Execute workflows and check run status
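A sketch of executing a workflow and polling its run. The input payload shape is illustrative — it is whatever your workflow's @input references expect, and it may need to be wrapped differently in the actual schema:

```bash
# Execute a workflow with an input payload; the response includes a run ID
curl -X POST "http://localhost:3009/workflows/$WORKFLOW_ID/execute" \
  -H "X-Org-Id: acme-corp" \
  -H "Content-Type: application/json" \
  -d '{ "orderid": "ORD-9921" }'

# Check run status, context, and step outputs
curl "http://localhost:3009/runs/$RUN_ID/status" \
  -H "X-Org-Id: acme-corp"
```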

### GET /runs/{runId}/status Get Run Status

Get current status of a workflow run including context and step details

**Parameters:** - `X-Org-Id` [header]: string (required) - `runId` [path]: string (required) **Responses:** - 200: Success ### POST /workflows/{workflowId}/execute Execute Workflow

Execute a workflow with input payload

**Parameters:** - `X-Org-Id` [header]: string (required) - `workflowId` [path]: string (required) **Request Body:** Content-Type: application/json **Responses:** - 200: Success - 201: Created ## Schemas

JSON schema registry
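A sketch of registering and retrieving a JSON schema. The request-body field names (`name`, `schema`) and the schema name are assumptions, shown only to illustrate the call shape:

```bash
# Register a JSON schema (body field names assumed)
curl -X POST http://localhost:3009/schemas \
  -H "X-Org-Id: acme-corp" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "leadrecord",
    "schema": {
      "type": "object",
      "properties": { "email": { "type": "string" } },
      "required": ["email"]
    }
  }'

# Retrieve it by name
curl http://localhost:3009/schemas/leadrecord \
  -H "X-Org-Id: acme-corp"
```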

### POST /schemas Create Schema

Register a JSON schema

**Parameters:** - `X-Org-Id` [header]: string (required) **Request Body:** Content-Type: application/json **Responses:** - 200: Success - 201: Created ### GET /schemas/{schemaName} Get Schema

Retrieve a registered schema

**Parameters:** - `X-Org-Id` [header]: string (required) - `schemaName` [path]: string (required) Name of the registered schema **Responses:** - 200: Success ## Utilities

Utility endpoints

### POST /utils/generate-dag Generate Workflow DAG

Generate DAG visualization data from workflow definition

**Parameters:** - `X-Org-Id` [header]: string (required) **Request Body:** Content-Type: application/json **Responses:** - 200: Success - 201: Created ## Workflows

Create and manage workflow definitions
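A sketch of the create and inspect calls. The workflow-definition body isn't reproduced in this export — see the Workflow DSL page (/platform/workflow-dsl) for the full field reference; the file name below is a placeholder:

```bash
# Create a workflow from a JSON definition file
# (definition schema documented at /platform/workflow-dsl)
curl -X POST http://localhost:3009/workflows \
  -H "X-Org-Id: acme-corp" \
  -H "Content-Type: application/json" \
  -d @invoice-reconciliation.workflow.json

# List and inspect workflows
curl http://localhost:3009/workflows \
  -H "X-Org-Id: acme-corp"
curl "http://localhost:3009/workflows/$WORKFLOW_ID" \
  -H "X-Org-Id: acme-corp"
```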

### GET /workflows List All Workflows

List all workflow definitions for the organization

**Parameters:** - `X-Org-Id` [header]: string (required) **Responses:** - 200: Success ### POST /workflows Create Workflow with ReAct Agent

Create a workflow with ReAct agent for AI-powered reasoning

**Parameters:** - `X-Org-Id` [header]: string (required) **Request Body:** Content-Type: application/json **Responses:** - 200: Success - 201: Created ### GET /workflows/{workflowId} Get Workflow by ID

Get details of a specific workflow

**Parameters:** - `X-Org-Id` [header]: string (required) - `workflowId` [path]: string (required) **Responses:** - 200: Success ### PUT /workflows/{workflowId} Update Workflow

Update an existing workflow definition

**Parameters:** - `X-Org-Id` [header]: string (required) - `workflowId` [path]: string (required) **Request Body:** Content-Type: application/json **Responses:** - 200: Success ### DELETE /workflows/{workflowId} Delete Workflow

Delete a workflow definition

**Parameters:** - `undefined` [undefined]: string - `workflowId` [path]: string (required) **Responses:** - 200: Success --- ## PATH: Getting Started > Index (Source: getting-started/index.md) Getting Started =============== Prerequisites ============= You need three things: - An API endpoint — Hyphen runs on http://localhost:3009 by default, or your deployed instance URL - An org ID — Any string; this isolates your data (X-Org-Id header on every request) - curl or Postman — Every example uses curl; the Postman collection (/api) works too No SDK required. Hyphen is API-first — every capability is an HTTP endpoint. Learning Path ============= | Page | Time | What You'll Learn | |------|------|-------------------| | Quickstart (/getting-started/quickstart) | 15 min | Register an action, build a workflow with matching + approval, execute it, check the audit trail | | Core Concepts (/getting-started/core-concepts) | 10 min | The six building blocks: workflows, actions, primitives, context, agents, runs | | Your First Workflow (/getting-started/your-first-workflow) | 20 min | Step-by-step: matcher → conditional → approval → custom table | | Your First Agent (/getting-started/your-first-agent) | 20 min | Build a ReAct agent with custom tools, stuck detection, and human escalation | Skip ahead if you're a developer. The Quickstart (/getting-started/quickstart) has a fast-path section — register an action, create a workflow, execute it, read the trace. No narrative. --- ## PATH: Guides > Index (Source: guides/index.md) Guides ====== Step-by-step walkthroughs that build real workflows from scratch. Each guide takes you from zero to a working system. | Guide | What You'll Build | Time | |-------|-------------------|------| | Invoice Reconciliation (/guides/invoice-matching) | Matcher + agent + approval pipeline for invoice-to-payment matching | 30 min | | Agent Orchestrator (/guides/building-an-agent-orchestrator) | Multi-workflow agent that processes support tickets | 30 min | | Human-in-the-Loop (/guides/human-in-the-loop) | Approval flows and agent pause/resume patterns | 20 min | | Graduated Exception Handling (/guides/graduated-exception-handling) | The 80/15/5 pattern — rules, AI, humans in one pipeline | 25 min | | AI Workflow Generation (/guides/ai-workflow-generation) | Natural language to workflow using the AI compiler | 15 min | | Embedding Hyphen (/guides/embedding-hyphen) | Multi-tenant platform integration for SaaS companies | 30 min | --- ## PATH: Home (Source: home.md) Hyphen ====== Infrastructure for governed autonomous AI. Hyphen is the execution layer for AI agents that make real decisions in real businesses. Define workflows in JSON or plain language, execute them deterministically, and let bounded AI agents handle the reasoning — with full audit trails, structural permissioning, and human-in-the-loop at every critical juncture. Start Building ============== - Quickstart Guide (/getting-started/quickstart) — Register an action, create a workflow, execute it. 15 minutes. - Core Concepts (/getting-started/core-concepts) — The six building blocks you need to understand. - API Reference (/api) — Full endpoint spec with request/response examples. - Templates (/templates) — Production-ready workflow patterns across industries. Governed Autonomy ================= AI agents that reason and act on their own — within boundaries you define. ``mermaid flowchart TD A["Operational Event
(alert, document, request, exception)"] --> B["Deterministic Rules
Matcher + Conditions"] B -->|"~80% auto-handled"| C["✅ Resolved"] B -->|"Exceptions"| D["ReAct Agent
Bounded AI — thinks, acts, observes"] D -->|"~15% AI-resolved"| C D -->|"Low confidence"| E["Human Review
PbotApproval"] E -->|"~5% human-resolved"| C C --> F["Audit Trail
Every step, every decision, every reasoning trace"] style B fill:#e8f5e9,stroke:#2e7d32 style D fill:#e3f2fd,stroke:#1565c0 style E fill:#fff3e0,stroke:#e65100 style F fill:#f3e5f5,stroke:#6a1b9a ` 80 / 15 / 5 — The exact ratio varies by use case, but the architecture is always the same: rules first, AI second, humans last. This graduated approach gives you automation speed with human-grade accuracy. How It Works ============ Hyphen operates on a three-phase execution model: Phase 1 — AI as Compiler. Describe what you want in plain language. Hyphen's AI compiles your intent into a precise JSON workflow specification — the conditions, data references, branching logic, and escalation paths. Humans describe the what. AI produces the how. Phase 2 — Deterministic Runtime. The execution engine runs the compiled spec exactly as written. No improvisation. If the spec says "wait for approval," it waits. If a condition fails, it follows the defined fallback. Enterprises audit specs, not vibes. Phase 3 — Bounded Agentic Runtime. For tasks requiring judgment — investigating exceptions, interpreting documents, making recommendations — ReAct agents reason within cages defined by the spec. Only declared tools are available. Every thought and action is captured. Iteration caps and stuck detection prevent runaway execution. What You Build With It ====================== The same primitives — matching, agents, approvals, storage — compose into workflows across any operational domain. | Domain | Use Case | Pattern | |--------|----------|---------| | Finance | Invoice reconciliation, payment matching, exception investigation | Matcher → Agent → Approval (/templates/ap-invoice-reconciliation) | | IT & Security | Alert triage, incident response, access reviews | Agent enriches → containment → escalation (/templates/it-incident-response) | | Legal | Contract review, clause extraction, playbook deviation flagging | LLM extraction → Matcher → Agent (/templates/contract-review) | | People Ops | Employee onboarding/offboarding, leave processing | Agent-as-orchestrator (/templates/employee-onboarding) | | Healthcare | Claims adjudication, denial management, prior authorization | Dual matcher → Agent → Clinical review (/templates/insurance-claims-adjudication) | | Supply Chain | Supplier risk, PO exception handling, RFP processing | Scheduled agent → threshold escalation | | Customer Ops | Escalation investigation, SLA monitoring, renewals | Agent pulls context → drafts resolution → approval | See all templates → (/templates) The Primitives ============== Four building blocks compose into any operational workflow: | Primitive | Purpose | Example | |-----------|---------|---------| | Matcher (/primitives/matcher) | Multi-criteria data matching | Compare invoices to payments, alerts to known indicators, contract terms to playbook | | Loop (/primitives/loop) | Batch processing (foreach) or AI reasoning (react) | Process 10K records, investigate an exception, orchestrate across systems | | PbotApproval (/primitives/approval) | Human-in-the-loop | Manager sign-off, legal review, SOC analyst escalation | | Custom Table (/primitives/custom-table) | Multi-tenant storage | Audit log, contract registry, incident database | Three Deployment Patterns ========================= The ReAct agent operates in three patterns depending on where autonomy lives: `mermaid flowchart LR subgraph A["Pattern A: Agent as Step"] direction TB A1["Workflow Step 1"] --> A2["🤖 Agent Step"] A2 --> A3["Workflow Step 3"] end subgraph B["Pattern B: Agent 
as Trigger"] direction TB B1["Inbound Event"] --> B2["🤖 Agent classifies"] B2 --> B3["→ Workflow A or B or C"] end subgraph C["Pattern C: Agent as Orchestrator"] direction TB C1["🤖 Agent coordinates"] --> C2["→ Workflow 1"] C1 --> C3["→ Workflow 2"] C1 --> C4["→ Workflow 3"] end ` | Pattern | When to Use | Example | |---------|-------------|---------| | Agent as Step (/agents/deployment-patterns/agent-as-step) | Most of the process is deterministic, one step requires judgment | Invoice matching → agent investigates exceptions | | Agent as Trigger (/agents/deployment-patterns/agent-as-trigger) | You don't know upfront which workflow to run | Agent classifies incoming document → routes to correct pipeline | | Agent as Orchestrator (/agents/deployment-patterns/agent-as-orchestrator) | Dynamic coordination across sub-processes | Agent coordinates onboarding across IT, HR, training, equipment | For AI Agents ============= If you're an AI agent or coding assistant, fetch /llms.txt` (/llms.txt) for the complete documentation in a single structured text file optimized for LLM consumption. Built by the Hyphen team. API Reference (/api) · GitHub (https://github.com/hyphen-platform) --- ## PATH: Integrations > Index (Source: integrations/index.md) Integrations ============ Hyphen includes built-in OAuth integrations for common business tools and a webhook system for event-driven architectures. Business Tools ============== | Provider | Actions | Auth | |----------|---------|------| | Gmail (/integrations/gmail) | gmailsend, gmailread, gmailreply | OAuth 2.0 | | Slack (/integrations/slack) | slackpost, slackreadchannel, slackdm | OAuth 2.0 | | Outlook (/integrations/outlook) | outlooksend, outlookread, outlookcalendarcreate | OAuth 2.0 | Multi-Tenant OAuth Model ======================== Each organization manages their own OAuth credentials. Hyphen stores and refreshes tokens automatically. ``mermaid sequenceDiagram participant Org as Your Org participant Hyphen participant Provider as Google / Slack / Microsoft Org->>Hyphen: Store OAuth app credentials (clientid, client_secret) Org->>Hyphen: Request authorization URL Hyphen->>Org: Redirect URL Org->>Provider: User authorizes access Provider->>Hyphen: OAuth callback with auth code Hyphen->>Hyphen: Store encrypted tokens Note over Hyphen: Tokens auto-refresh before expiry Org->>Hyphen: Use tools in workflows and agents `` The setup flow is the same for every provider: store your OAuth app credentials, authorize user accounts, then use the tools in workflows and agents. Webhooks ======== Webhooks (/integrations/webhooks) let you receive notifications when events occur in Hyphen — approval requests, workflow completions, agent pauses, and more. What's in This Section ====================== | Page | Description | |------|-------------| | Gmail (/integrations/gmail) | Send, read, and reply to emails via Gmail API | | Slack (/integrations/slack) | Post messages, read channels, and send DMs | | Outlook (/integrations/outlook) | Send emails, read inbox, and create calendar events | | Webhooks (/integrations/webhooks) | Event notifications for approval requests, completions, and agent pauses | --- ## PATH: Platform > Index (Source: platform/index.md) Platform ======== The Hyphen platform architecture, from execution model to security controls. 
| Page | What It Covers | |------|---------------| | Architecture (/platform/architecture) | Three-phase execution model — AI as Compiler, Deterministic Runtime, Bounded Agentic Runtime | | Workflow DSL (/platform/workflow-dsl) | Complete JSON DSL specification — every field, type, and option | | Context Resolution (/platform/context-resolution) | @path syntax, data flow between steps, template interpolation | | Conditional Logic (/platform/conditional-logic) | filter, onFalse, condition operators, combinators, nested trees | | Scheduling (/platform/scheduling) | Interval, time-of-day, and timezone-based execution | | Multi-Tenancy (/platform/multi-tenancy) | X-Org-Id isolation, encrypted secrets, credential scoping | | Security (/platform/security) | Structural permissioning, SSRF protection, prompt injection defense, secret redaction | --- ## PATH: Primitives > Index (Source: primitives/index.md) Primitives ========== Primitives are built into Hyphen. They don't need to be registered — use them directly as step types in any workflow. ``mermaid flowchart LR subgraph "Workflow" M["Matcher"] --> L["Loop"] L --> A["PbotApproval"] A --> F["PbotForm"] F --> T["Custom Table"] end ` | Primitive | Type Value | What It Does | |-----------|-----------|--------------| | Matcher (/primitives/matcher) | matcher | Compare two datasets — exact keys, numeric tolerance, date windows, fuzzy text. Outputs matched pairs and exceptions from both sides | | Loop (/primitives/loop) | loop | Two modes: foreach processes items in parallel with configurable concurrency; react runs an AI agent with tools, guardrails, and stuck detection | | Approval (/primitives/approval) | PbotApproval | Pause execution for human review. The reviewer sees context, makes a decision, and execution resumes | | Form (/primitives/form) | PbotForm | Pause execution to collect external input via a form. Supports TTL and reminder intervals | | Custom Table (/primitives/custom-table) | custom-table` | Multi-tenant data storage — read, write, update, and upsert operations with full audit logging | All primitives produce output that flows into context via @ path resolution (/platform/context-resolution), making their results available to subsequent steps. --- ## PATH: Templates > Index (Source: templates/index.md) Templates ========= Production-ready workflow patterns. Each template includes the complete JSON workflow definition, required action registrations, sample data, and expected execution flow. The Graduated Pipeline ====================== Every template follows the same architecture — deterministic rules first, AI reasoning second, human review last: ``mermaid flowchart LR A["Operational
Event"] --> B["Deterministic
Rules"] B -->|"Auto-resolved"| E["✅"] B -->|"Exceptions"| C["Bounded
Agent"] C -->|"AI-resolved"| E C -->|"Low confidence"| D["Human
Review"] D --> E style B fill:#fef3cd,stroke:#856404 style C fill:#d1ecf1,stroke:#0c5460 style D fill:#d4edda,stroke:#155724 ` The ratio varies — reconciliation workflows route ~80% through rules, while incident response may route ~70% — but the pattern is always the same. Finance & Accounting ==================== Templates for matching, reconciliation, and payment processing. AP Invoice Reconciliation (/templates/ap-invoice-reconciliation) ================================================================= Match invoices to payments with tolerance for amount discrepancies, date offsets, and vendor name variations. Agent investigates unmatched exceptions. Human approves write-offs and adjustments. Primitives used: Matcher → Loop (react) → PbotApproval → Custom Table Graduated split: ~75% auto-matched, ~15% AI-investigated, ~10% human-reviewed Vendor Payment Reconciliation (/templates/vendor-payment-reconciliation) ========================================================================= Cross-border payment reconciliation handling FX volatility, intermediary bank fees, and asynchronous settlement windows. Multi-stage matching with cascading tolerance. Primitives used: Matcher (two-pass) → Loop (foreach) → Loop (react) → PbotApproval → Custom Table Graduated split: ~70% auto-matched, ~20% AI-investigated, ~10% human-reviewed Healthcare ========== Templates for claims processing and revenue cycle management. Insurance Claims Adjudication (/templates/insurance-claims-adjudication) ========================================================================= Dual-matcher pipeline: first match claims to policies, then match service codes to fee schedules. Agent analyzes complex cases involving medical necessity, bundling disputes, and authorization gaps. Clinical reviewer handles edge cases. Primitives used: Matcher (dual) → Loop (react) → PbotApproval → Custom Table Graduated split: ~70% auto-adjudicated, ~20% AI-analyzed, ~10% clinical review Healthcare Denial Management (/templates/healthcare-denial-management) ======================================================================= Automated denial analysis, categorization, and appeal generation. Agent classifies denial reasons, searches for supporting documentation, and drafts appeal letters. Compliance reviewer approves before submission. Primitives used: Loop (foreach) → Loop (react) → PbotApproval → Custom Table Graduated split: ~60% auto-categorized, ~25% AI-drafted appeals, ~15% compliance review IT & Security ============= Templates for incident response and operational monitoring. IT Incident Response (/templates/it-incident-response) ======================================================= SOC alert triage workflow. Matcher correlates alerts against known benign indicators. Agent enriches with threat intelligence, queries SIEM, classifies severity, and recommends containment. SOC analyst reviews high-severity and ambiguous cases. Primitives used: Matcher → Loop (react) → PbotApproval → Custom Table Graduated split: ~70% known-benign auto-closed, ~20% auto-contained, ~10% SOC escalation Structural permissioning demo: Agent can blockip and isolatehost but cannot deletefirewallrule or shutdownserver — tool list = architectural constraint. Legal & Compliance ================== Templates for contract analysis and regulatory processing. Contract Review (/templates/contract-review) ============================================= Automated contract analysis pipeline. 
LLM extracts terms, matcher compares against standard playbook, agent assesses deviations and drafts redlines. Legal counsel reviews high-risk and high-value contracts. Custom table builds searchable contract registry. Primitives used: LLM extraction → Matcher → Loop (react) → PbotApproval → Custom Table Graduated split: ~60% standard-term auto-approved, ~25% AI-analyzed deviations, ~15% legal review People Operations ================= Templates for employee lifecycle management. Employee Onboarding (/templates/employee-onboarding) ===================================================== Agent-as-orchestrator pattern. A coordinator agent sequences four sub-workflows — IT provisioning, benefits enrollment, training assignment, equipment ordering — handling dependencies, async form collection, and failure recovery. Demonstrates why agent-as-orchestrator replaces human coordinator judgment. Primitives used: Loop (react) with runworkflow → PbotApproval (within sub-workflows) → PbotForm (benefits) → Custom Table Pattern: Agent as Orchestrator (Pattern C) Identity & Risk =============== Templates for customer verification and compliance screening. KYC Customer Onboarding (/templates/kyc-customer-onboarding) ============================================================= Identity verification, sanctions screening, risk assessment, and account provisioning. Agent coordinates across verification providers, handles conflicting signals, and escalates high-risk applications. Compliance officer makes final determination on edge cases. Primitives used: Loop (react) with runworkflow → PbotApproval → Custom Table Pattern: Agent as Orchestrator (Pattern C) Customization Guide =================== Matching Thresholds =================== Adjust based on your tolerance for false positives vs. false negatives: | Profile | tolerance | fuzzyThreshold | dateWindowDays | Use When | |---------|------------|-------------------|-------------------|----------| | Conservative | 0.005 (0.5%) | 95 | 1 | High-value transactions, regulated reporting | | Balanced | 0.02 (2%) | 85 | 3 | General reconciliation, invoice matching | | Permissive | 0.05 (5%) | 75 | 7 | Cross-border payments, legacy system data | Agent Boundaries ================ | Profile | maxiterations | onstuck | timeoutms | Use When | |---------|-----------------|------------|--------------|----------| | Tight | 5 | fail after 2 | 60000 | Simple lookups, low-risk decisions | | Standard | 10–15 | escalate after 3 | 300000 | Exception investigation, document analysis | | Exploratory | 20–25 | retrywithhint after 4 | 600000 | Complex research, multi-system coordination | Tool Declarations ================= Use typed tool declarations to give agents access to your registered actions and workflows: `json { "tools": [ { "type": "action", "name": "yourregisteredaction" }, { "type": "workflow", "id": "wfsubprocessid" } ] } ` Built-in tools (complete, pauseforhuman, storememory, retrievememory, logprogress__`) are always auto-injected. See Tool Declarations (/agents/tool-declarations) for the full reference. Building Your Own Template ========================== Every template follows the same structure. To create a custom workflow: 1. Identify the graduated split — What percentage of events can be handled by rules? What needs AI? What needs a human? 2. Define the matcher — What two datasets are you comparing? What fields should match? What tolerances make sense? 3. Scope the agent — What tools does the agent need? What's the objective? How many iterations is reasonable? 4. 
Set the approval trigger — What conditions require human review? Amount thresholds? Confidence thresholds? Risk levels? 5. Design the audit trail — What fields should the custom table capture for compliance and analysis? Start with the Quickstart (/getting-started/quickstart), then adapt the template closest to your use case. --- --- END OF DOCUMENTATION CONTEXT Total documents: 59 Generated at: 2026-02-07T20:38:26.896Z