
Text Model Node

Generate text using large language models (LLMs) like GPT-4, Claude, Gemini, and more. The most versatile node in TopFlow for AI-powered text generation, analysis, and transformation.

Overview

The Text Model Node is the core AI node for text generation. It:

  • Connects to any LLM supported by the Vercel AI SDK
  • Accepts text input from upstream nodes
  • Generates text based on prompts and configuration
  • Outputs generated text to downstream nodes
  • Supports streaming, temperature control, and token limits (see the streaming sketch below)
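The node's streaming support corresponds to the AI SDK's streamText helper. A minimal sketch of streaming the same kind of prompt token by token (assuming an OpenAI key is set in the environment):

import { openai } from "@ai-sdk/openai"
import { streamText } from "ai"

// Stream tokens as they are generated instead of waiting for the full response.
const { textStream } = await streamText({
  model: openai("gpt-4"),
  prompt: "Explain quantum computing in simple terms",
})

for await (const chunk of textStream) {
  process.stdout.write(chunk)
}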

Common Use Cases
  • Text generation - Create blog posts, reports, emails, code
  • Analysis - Sentiment analysis, threat assessment, data extraction
  • Classification - Categorize, tag, or label text data
  • Summarization - Condense long documents into key points
  • Translation - Convert between languages
  • Q&A - Answer questions based on context

Configuration

Required Parameters

model

  • Type: string
  • Required: Yes
  • Description: Model identifier (e.g., "gpt-4", "claude-3-opus")
  • Example: "gpt-4", "gpt-4-turbo", "claude-3-sonnet-20240229"
  • Validation: Must be a valid model name for the selected provider

prompt

  • Type: string
  • Required: Yes
  • Description: The text prompt sent to the model. Supports variable interpolation ($input1, $input2)
  • Example: "Analyze this security log: $input1"
  • Validation: Cannot be empty
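Variable interpolation itself is plain string substitution. A minimal sketch of how $inputN placeholders could be resolved (the resolvePrompt helper is illustrative, not part of the TopFlow API):

// Replace $input1, $input2, ... with the outputs of connected upstream nodes.
function resolvePrompt(template: string, inputs: string[]): string {
  return template.replace(/\$input(\d+)/g, (match, n) => {
    const value = inputs[Number(n) - 1]
    return value !== undefined ? value : match // leave unresolved placeholders intact
  })
}

resolvePrompt("Analyze this security log: $input1", ["Failed login from IP 192.168.1.100"])
// => "Analyze this security log: Failed login from IP 192.168.1.100"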

Optional Parameters

temperature

  • Type: number
  • Range: 0.0 to 2.0
  • Default: 0.7
  • Description: Controls randomness. Lower = more focused/deterministic, higher = more creative/random
  • Use cases: 0.0-0.3 (factual), 0.5-0.8 (balanced), 0.9-2.0 (creative)

maxTokens

  • Type: number
  • Range: 1 to model maximum (e.g., 4096 for GPT-4)
  • Default: 1000
  • Description: Maximum number of tokens to generate
  • Note: Higher values increase cost and latency

topP

  • Type: number
  • Range: 0.0 to 1.0
  • Default: 1.0
  • Description: Nucleus sampling - only consider top P probability mass
  • Note: Alternative to temperature for controlling randomness

frequencyPenalty

  • Type: number
  • Range: -2.0 to 2.0
  • Default: 0
  • Description: Penalize tokens based on frequency in the text so far
  • Use case: Reduce repetition (positive values)

presencePenalty

  • Type: number
  • Range: -2.0 to 2.0
  • Default: 0
  • Description: Penalize tokens based on whether they appear in the text
  • Use case: Encourage topic diversity (positive values)

stop

  • Type: string[]
  • Default: undefined
  • Description: Stop sequences that halt generation
  • Example: ["\\n\\n", "END", "---"]

Supported Models

OpenAI
GPT-4 and GPT-3.5 models
  • gpt-4 - Most capable
  • gpt-4-turbo - Faster, cheaper
  • gpt-4-turbo-preview - Latest preview
  • gpt-3.5-turbo - Fast, affordable
API Key: openai

Anthropic
Claude 3 family
  • claude-3-opus-20240229 - Most intelligent
  • claude-3-sonnet-20240229 - Balanced
  • claude-3-haiku-20240307 - Fastest
  • claude-2.1 - Legacy
API Key: anthropic

Google
Gemini models
  • gemini-pro - Text only
  • gemini-pro-vision - Multimodal
  • gemini-1.5-pro - Extended context
API Key: google

Groq
Fast inference
  • llama-3-70b - High quality
  • llama-3-8b - Ultra fast
  • mixtral-8x7b - Balanced
API Key: groq
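Each provider corresponds to an AI SDK provider package, and switching models only changes the model factory. A sketch (assuming the relevant @ai-sdk/* packages are installed and API keys are set as environment variables; exact model ids can vary by SDK version):

import { openai } from "@ai-sdk/openai"
import { anthropic } from "@ai-sdk/anthropic"
import { google } from "@ai-sdk/google"
import { generateText } from "ai"

// The call shape is identical across providers; only the model changes.
const prompt = "Classify this log line as benign or suspicious: ..."

const gpt = await generateText({ model: openai("gpt-4"), prompt })
const claude = await generateText({ model: anthropic("claude-3-sonnet-20240229"), prompt })
const gemini = await generateText({ model: google("gemini-1.5-pro"), prompt })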

Node Data Interface

TypeScript Definition
export type TextModelNodeData = {
  // Required configuration
  model: string
  prompt: string

  // Optional - Generation settings
  temperature?: number
  maxTokens?: number
  topP?: number
  frequencyPenalty?: number
  presencePenalty?: number
  stop?: string[]

  // Execution state (managed by system)
  status?: "idle" | "running" | "completed" | "error"
  output?: any
  error?: string
}
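For reference, a fully specified node for the security-analysis use case below might look like this (values are illustrative):

const analyzeNode: TextModelNodeData = {
  model: "gpt-4",
  prompt: "Analyze this security log: $input1",
  temperature: 0.3, // low temperature for consistent, factual analysis
  maxTokens: 500,
  stop: ["---"],
}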

Usage Examples

Example 1: Simple Text Generation
Basic usage with minimal configuration
import { openai } from "@ai-sdk/openai"
import { generateText } from "ai"

const result = await generateText({
  model: openai("gpt-4"),
  prompt: "Explain quantum computing in simple terms"
})

console.log(result.text)
// Output: "Quantum computing is..."
Example 2: Security Log Analysis
Real-world use case with variable interpolation

Workflow setup:

const workflow = {
  nodes: [
    {
      id: "start",
      type: "start",
      data: { input: "Failed login from IP 192.168.1.100" }
    },
    {
      id: "analyze",
      type: "textModel",
      data: {
        model: "gpt-4",
        prompt: `Analyze this security event and provide:
1. Severity (LOW/MEDIUM/HIGH/CRITICAL)
2. Potential threat type
3. Recommended action

Event: $input1`,
        temperature: 0.3, // More deterministic for security analysis
        maxTokens: 500
      }
    }
  ],
  edges: [
    { id: "e1", source: "start", target: "analyze" }
  ]
}

Expected output:

Severity: HIGH

Potential threat type: Brute force attack or unauthorized access attempt

Recommended action:
1. Block IP 192.168.1.100 immediately
2. Review recent activity from this IP
3. Notify security team for further investigation
Example 3: Multi-Step Analysis
Chaining multiple Text Model nodes
const workflow = {
  nodes: [
    {
      id: "start",
      type: "start",
      data: { input: "Customer feedback text..." }
    },
    {
      id: "sentiment",
      type: "textModel",
      data: {
        model: "gpt-4",
        prompt: "Classify sentiment (positive/negative/neutral): $input1",
        temperature: 0.2,
        maxTokens: 50
      }
    },
    {
      id: "topics",
      type: "textModel",
      data: {
        model: "gpt-4",
        prompt: "Extract key topics from: $input1",
        temperature: 0.3,
        maxTokens: 200
      }
    },
    {
      id: "summary",
      type: "textModel",
      data: {
        model: "gpt-4",
        prompt: `Summarize this feedback analysis:
Sentiment: $input1
Topics: $input2

Original feedback: $input3`,
        temperature: 0.5,
        maxTokens: 300
      }
    }
  ],
  edges: [
    // Inputs to "summary" are numbered by incoming edge order,
    // matching the prompt above: $input1 = sentiment, $input2 = topics,
    // $input3 = the original start input.
    { id: "e1", source: "start", target: "sentiment" },
    { id: "e2", source: "start", target: "topics" },
    { id: "e3", source: "sentiment", target: "summary" },
    { id: "e4", source: "topics", target: "summary" },
    { id: "e5", source: "start", target: "summary" }
  ]
}
}
Example 4: Temperature Control
Different temperatures for different use cases
Factual (Low Temperature)
{
  model: "gpt-4",
  prompt: "What is 2+2?",
  temperature: 0.1
}
// Nearly always: "4"
Creative (High Temperature)
{
  model: "gpt-4",
  prompt: "Write a poem",
  temperature: 1.5
}
// Highly varied output

Validation Rules

Errors (Block Execution)
  • ❌ Missing Model - Model field cannot be empty
  • ❌ Empty Prompt - Prompt field cannot be empty
  • ❌ Missing API Key - No API key configured for the selected provider (openai, anthropic, google, groq)
  • ❌ Invalid Temperature - Temperature must be between 0 and 2

Warnings (Don't Block)
  • ⚠️ High Temperature - Temperature > 1.5 may produce inconsistent results
  • ⚠️ Large maxTokens - maxTokens > 2000 increases cost and latency significantly
  • ⚠️ PII in Prompt - Prompt contains potential personal data (email, SSN, credit card)
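A pre-execution check could enforce these rules along the following lines (the function name, messages, provider inference, and PII patterns are an illustrative sketch, not the exact checks TopFlow runs):

function validateTextModelNode(data: TextModelNodeData, apiKeys: Record<string, string>) {
  const errors: string[] = []
  const warnings: string[] = []

  if (!data.model) errors.push("Model field cannot be empty")
  if (!data.prompt?.trim()) errors.push("Prompt field cannot be empty")
  if (data.temperature !== undefined && (data.temperature < 0 || data.temperature > 2))
    errors.push("Temperature must be between 0 and 2")

  // Crude provider inference from the model id prefix.
  const provider = data.model.startsWith("gpt") ? "openai"
    : data.model.startsWith("claude") ? "anthropic"
    : data.model.startsWith("gemini") ? "google"
    : "groq"
  if (!apiKeys[provider]) errors.push(`No API key configured for provider "${provider}"`)

  if ((data.temperature ?? 0.7) > 1.5) warnings.push("Temperature > 1.5 may produce inconsistent results")
  if ((data.maxTokens ?? 1000) > 2000) warnings.push("maxTokens > 2000 increases cost and latency")
  // Very rough PII screen: email addresses and SSN-like patterns.
  if (/[\w.+-]+@[\w-]+\.\w+|\b\d{3}-\d{2}-\d{4}\b/.test(data.prompt ?? ""))
    warnings.push("Prompt contains potential personal data")

  return { errors, warnings, canRun: errors.length === 0 }
}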

Code Generation

Generated TypeScript Code
When exporting a workflow to code

Workflow Function Export:

import { openai } from "@ai-sdk/openai"
import { generateText } from "ai"

export async function runAgentWorkflow(initialInput?: string) {
  const node_start = initialInput || "Default input"

  // Text Model node generates this code
  const node_textModel = await generateText({
    model: openai("gpt-4"),
    prompt: `Analyze: ${node_start}`,
    temperature: 0.7,
    maxTokens: 1000,
  })

  return node_textModel.text
}

Route Handler Export:

import { openai } from "@ai-sdk/openai"
import { generateText } from "ai"

// The OpenAI provider reads OPENAI_API_KEY from the environment;
// openai() takes a model id, not the key itself.
export async function POST(req: Request) {
  const { input } = await req.json()

  const result = await generateText({
    model: openai("gpt-4"),
    prompt: `Analyze: ${input}`,
    temperature: 0.7,
    maxTokens: 1000,
  })

  return Response.json({
    output: result.text,
    usage: result.usage
  })
}
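Calling the exported route from a client is then a plain fetch (the /api/workflow path is illustrative):

const res = await fetch("/api/workflow", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ input: "Failed login from IP 192.168.1.100" }),
})
const { output, usage } = await res.json()
console.log(output, usage)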

Best Practices

Do
  • ✓ Use low temperature (0.1-0.3) for factual responses
  • ✓ Use medium temperature (0.5-0.8) for balanced tasks and higher (0.9+) for creative writing
  • ✓ Set maxTokens to the minimum needed for your use case
  • ✓ Use clear, specific prompts with examples
  • ✓ Test with multiple temperature values to find the optimal setting
  • ✓ Use stop sequences to prevent over-generation
Don't
  • ✗ Don't use high temperature for factual queries
  • ✗ Don't set maxTokens higher than needed (it wastes money)
  • ✗ Don't include PII in prompts without masking
  • ✗ Don't use vague prompts ("analyze this")
  • ✗ Don't assume the first output is best (test variations)
  • ✗ Don't hardcode API keys in prompts
Performance Tips
Model Selection:
  • GPT-4: Highest quality, slowest (3-10s)
  • GPT-3.5 Turbo: Good balance (1-3s)
  • Claude 3 Haiku: Fast, cost-effective (1-2s)
  • Groq Llama 3 70B: Fastest, near-GPT-4 quality (0.5-2s)
Cost Optimization:
  • Use cheaper models for simple tasks
  • Reduce maxTokens to the minimum needed
  • Cache common responses (see the sketch below)
  • Batch similar requests
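Caching common responses can be as simple as memoizing on model + prompt. A minimal in-memory sketch (illustrative; a production cache would add TTLs and size bounds):

import { openai } from "@ai-sdk/openai"
import { generateText } from "ai"

const cache = new Map<string, string>()

async function cachedGenerate(model: string, prompt: string): Promise<string> {
  const key = `${model}:${prompt}`
  const hit = cache.get(key)
  if (hit !== undefined) return hit // reuse a previous response for identical requests

  const result = await generateText({ model: openai(model), prompt })
  cache.set(key, result.text)
  return result.text
}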

Related Nodes

  • Prompt Node - Create and reuse prompt templates
  • Structured Output Node - Parse AI responses into validated JSON

Next Steps

Explore related nodes and patterns for building AI workflows:

  • HTTP Request Node
  • View All Nodes
  • Best Practices