Agent
Create powerful AI agents using any LLM provider
The Agent block serves as the interface between your workflow and Large Language Models (LLMs). It executes inference requests against various AI providers, processes natural language inputs according to defined instructions, and generates structured or unstructured outputs for downstream consumption.
Overview
The Agent block enables you to:
- Process natural language: Analyze user input and generate contextual responses
- Execute AI-powered tasks: Perform content analysis, generation, and decision-making
- Call external tools: Access APIs, databases, and services during processing
- Generate structured output: Return JSON data that matches your schema requirements
Configuration Options
System Prompt
The system prompt establishes the agent's operational parameters and behavioral constraints. This configuration defines the agent's role, response methodology, and processing boundaries for all incoming requests.
You are a helpful assistant that specializes in financial analysis.
Always provide clear explanations and cite sources when possible.
When responding to questions about investments, include risk disclaimers.
User Prompt
The user prompt represents the primary input data for inference processing. This parameter accepts natural language text or structured data that the agent will analyze and respond to. Input sources include:
- Static Configuration: Direct text input specified in the block configuration
- Dynamic Input: Data passed from upstream blocks through connection interfaces
- Runtime Generation: Programmatically generated content during workflow execution
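For example, a user prompt can mix static text with a reference to an upstream block's output, using the same <block.field> reference style shown under Accessing Results below. The <start.input> name here is illustrative; use whatever your input block actually exposes:

Summarize the following support ticket and flag anything urgent:
<start.input>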
Model Selection
The Agent block supports multiple LLM providers through a unified inference interface. Available models include:
- OpenAI Models: GPT-4o, o1, o3, o4-mini, gpt-4.1 (API-based inference)
- Anthropic Models: Claude 3.7 Sonnet (API-based inference)
- Google Models: Gemini 2.5 Pro, Gemini 2.0 Flash (API-based inference)
- Alternative Providers: Groq, Cerebras, xAI, DeepSeek (API-based inference)
- Local Deployment: Ollama-compatible models (self-hosted inference)
Temperature
Control the creativity and randomness of responses:
- Low temperature: More deterministic, focused responses. Best for factual tasks, customer support, and situations where accuracy is critical.
- Medium temperature: Balanced creativity and focus. Suitable for general-purpose applications that require both accuracy and some creativity.
- High temperature: More creative, varied responses. Ideal for creative writing, brainstorming, and generating diverse ideas.
The temperature range (0-1 or 0-2) varies depending on the selected model.
API Key
Your API key for the selected LLM provider. This is securely stored and used for authentication.
Tools
Tools extend the agent's capabilities through external API integrations and service connections. The tool system enables function calling, allowing the agent to execute operations beyond text generation.
Tool Integration Process:
- Access the Tools configuration section within the Agent block
- Select from 60+ pre-built integrations or define custom functions
- Configure authentication parameters and operational constraints
Available Tool Categories:
- Communication: Gmail, Slack, Telegram, WhatsApp, Microsoft Teams
- Data Sources: Notion, Google Sheets, Airtable, Supabase, Pinecone
- Web Services: Firecrawl, Google Search, Exa AI, browser automation
- Development: GitHub, Jira, Linear (repository and issue management)
- AI Services: OpenAI, Perplexity, Hugging Face, ElevenLabs
Tool Execution Control:
- Auto: Model determines tool invocation based on context and necessity
- Required: Tool must be called during every inference request
- None: Tool definition available but excluded from model context
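As a rough sketch of what defining a custom function involves: most providers' function calling expects a name, a description, and a JSON Schema describing the parameters. The lookup_order tool below is purely illustrative, and the exact fields the custom function editor expects may differ:

{
  "name": "lookup_order",
  "description": "Fetch the current status of an order from an internal API",
  "parameters": {
    "type": "object",
    "properties": {
      "order_id": { "type": "string", "description": "Internal order identifier" }
    },
    "required": ["order_id"]
  }
}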
Response Format
The Response Format parameter enforces structured output generation through JSON Schema validation. This ensures consistent, machine-readable responses that conform to predefined data structures:
{
  "name": "user_analysis",
  "schema": {
    "type": "object",
    "properties": {
      "sentiment": {
        "type": "string",
        "enum": ["positive", "negative", "neutral"]
      },
      "confidence": {
        "type": "number",
        "minimum": 0,
        "maximum": 1
      }
    },
    "required": ["sentiment", "confidence"]
  }
}
This configuration constrains the model's output to comply with the specified schema, preventing free-form text responses and ensuring structured data generation.
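With the schema above in place, the agent returns structured data rather than prose. The values below are only an example of schema-conforming output:

{
  "sentiment": "positive",
  "confidence": 0.87
}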
Accessing Results
After an agent completes, you can access its outputs:
- <agent.content>: The agent's response text or structured data
- <agent.tokens>: Token usage statistics (prompt, completion, total)
- <agent.tool_calls>: Details of any tools the agent used during execution
- <agent.cost>: Estimated cost of the API call (if available)
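These references can be used in any block that runs after the agent. For example, a Response block that replies to the user (as in the customer support use case below) might include:

Here is the analysis you requested:
<agent.content>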
Advanced Features
Memory Integration
Agents can maintain context across interactions using the memory system:
// In a Function block before the agent.
// previousMessages, userProfile, and currentSession are placeholders for
// values produced earlier in your workflow (e.g., by upstream blocks or variables).
const memory = {
  conversation_history: previousMessages,
  user_preferences: userProfile,
  session_data: currentSession
};
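The object built here can then be surfaced to the agent, for example by referencing the Function block's output in the system or user prompt so the model sees the accumulated context. The exact reference depends on how your blocks are named; something like <function.conversation_history> would mirror the <agent.*> references shown above, but treat that name as illustrative.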
Structured Output Validation
Use JSON Schema to ensure consistent, machine-readable responses:
{
  "type": "object",
  "properties": {
    "analysis": {"type": "string"},
    "confidence": {"type": "number", "minimum": 0, "maximum": 1},
    "categories": {"type": "array", "items": {"type": "string"}}
  },
  "required": ["analysis", "confidence"]
}
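A response conforming to this schema returns structured fields rather than free text, for example (values are illustrative):

{
  "analysis": "The document describes a phased rollout plan.",
  "confidence": 0.92,
  "categories": ["planning", "operations"]
}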
Error Handling
Agents automatically handle common errors:
- API rate limits with exponential backoff
- Invalid tool calls with retry logic
- Network failures with connection recovery
- Schema validation errors with fallback responses
Inputs and Outputs
Inputs:
- System Prompt: Instructions defining agent behavior and role
- User Prompt: Input text or data to process
- Model: AI model selection (OpenAI, Anthropic, Google, etc.)
- Temperature: Response randomness control (0-2)
- Tools: Array of available tools for function calling
- Response Format: JSON Schema for structured output
Outputs:
- agent.content: Agent's response text or structured data
- agent.tokens: Token usage statistics object
- agent.tool_calls: Array of tool execution details
- agent.cost: Estimated API call cost (if available)
- Content: Primary response output from the agent
- Metadata: Usage statistics and execution details
- Access: Available in blocks after the agent
Example Use Cases
Customer Support Automation
Scenario: Handle customer inquiries with database access
- User submits support ticket via API block
- Agent processes inquiry with product database tools
- Agent generates response and creates follow-up ticket
- Response block sends reply to customer
Multi-Model Content Analysis
Scenario: Analyze content with different AI models
- Function block processes uploaded document
- Agent with GPT-4o performs technical analysis
- Agent with Claude analyzes sentiment and tone
- Function block combines results for final report
Tool-Powered Research Assistant
Scenario: Research assistant with web search and document access
- User query received via input
- Agent searches web using Google Search tool
- Agent accesses Notion database for internal docs
- Agent compiles comprehensive research report
Best Practices
- Be specific in system prompts: Clearly define the agent's role, tone, and limitations. The more specific your instructions are, the better the agent will be able to fulfill its intended purpose.
- Choose the right temperature setting: Use lower temperatures (0-0.3) when accuracy is important, and higher temperatures (0.7-2.0) for more creative or varied responses.
- Leverage tools effectively: Integrate tools that complement the agent's purpose and enhance its capabilities. Be selective about which tools you provide to avoid overwhelming the agent. For sets of tasks with little overlap, split the work across separate Agent blocks for best results.