Generate text using large language models (LLMs) like GPT-4, Claude, Gemini, and more. The most versatile node in TopFlow for AI-powered text generation, analysis, and transformation.
The Text Model Node is the core AI node for text generation.

Configuration fields:

| Field | Type | Example |
| --- | --- | --- |
| `model` | `string` | `"gpt-4"`, `"gpt-4-turbo"`, `"claude-3-sonnet-20240229"` |
| `prompt` | `string` | `"Analyze this security log: $input1"` |
| `temperature` | `number` | optional |
| `maxTokens` | `number` | optional |
| `topP` | `number` | optional |
| `frequencyPenalty` | `number` | optional |
| `presencePenalty` | `number` | optional |
| `stop` | `string[]` | defaults to `undefined`; e.g. `["\n\n", "END", "---"]` |

Supported models:

OpenAI:
- `gpt-4` - Most capable
- `gpt-4-turbo` - Faster, cheaper
- `gpt-4-turbo-preview` - Latest preview
- `gpt-3.5-turbo` - Fast, affordable

Anthropic:
- `claude-3-opus-20240229` - Most intelligent
- `claude-3-sonnet-20240229` - Balanced
- `claude-3-haiku-20240307` - Fastest
- `claude-2.1` - Legacy

Google:
- `gemini-pro` - Text only
- `gemini-pro-vision` - Multimodal
- `gemini-1.5-pro` - Extended context

Groq:
- `llama-3-70b` - High quality
- `llama-3-8b` - Ultra fast
- `mixtral-8x7b` - Balanced

Type definition:

```ts
export type TextModelNodeData = {
// Required configuration
model: string
prompt: string
// Optional - Generation settings
temperature?: number
maxTokens?: number
topP?: number
frequencyPenalty?: number
presencePenalty?: number
stop?: string[]
// Execution state (managed by system)
status?: "idle" | "running" | "completed" | "error"
output?: any
error?: string
}
```

Basic usage with the AI SDK:

```ts
import { openai } from "@ai-sdk/openai"
import { generateText } from "ai"
const result = await generateText({
model: openai("gpt-4"),
prompt: "Explain quantum computing in simple terms"
})
console.log(result.text)
// Output: "Quantum computing is..."
```

Workflow setup:

```ts
const workflow = {
nodes: [
{
id: "start",
type: "start",
data: { input: "Failed login from IP 192.168.1.100" }
},
{
id: "analyze",
type: "textModel",
data: {
model: "gpt-4",
prompt: `Analyze this security event and provide:
1. Severity (LOW/MEDIUM/HIGH/CRITICAL)
2. Potential threat type
3. Recommended action
Event: $input1`,
temperature: 0.3, // More deterministic for security analysis
maxTokens: 500
}
}
],
edges: [
{ id: "e1", source: "start", target: "analyze" }
]
}
```

Expected output:
```
Severity: HIGH
Potential threat type: Brute force attack or unauthorized access attempt
Recommended action:
1. Block IP 192.168.1.100 immediately
2. Review recent activity from this IP
3. Notify security team for further investigation
```
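The `$input1` placeholder in the prompt is resolved from the output of the connected upstream node. TopFlow's actual resolution logic isn't shown in this document; the sketch below illustrates how such a substitution could work (the `resolvePrompt` helper is hypothetical):

```ts
// Hypothetical helper: replaces $input1, $input2, ... with upstream node outputs.
// The real TopFlow resolution logic may differ.
function resolvePrompt(template: string, inputs: string[]): string {
  return template.replace(/\$input(\d+)/g, (match, n) => {
    const value = inputs[Number(n) - 1]
    // Leave unresolved placeholders intact rather than inserting "undefined"
    return value !== undefined ? value : match
  })
}

const prompt = resolvePrompt(
  "Analyze this security event: $input1",
  ["Failed login from IP 192.168.1.100"],
)
// → "Analyze this security event: Failed login from IP 192.168.1.100"
```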
A fan-out workflow that analyzes customer feedback along parallel branches (sentiment, topics) and merges the results into a summary:

```ts
const workflow = {
nodes: [
{
id: "start",
type: "start",
data: { input: "Customer feedback text..." }
},
{
id: "sentiment",
type: "textModel",
data: {
model: "gpt-4",
prompt: "Classify sentiment (positive/negative/neutral): $input1",
temperature: 0.2,
maxTokens: 50
}
},
{
id: "topics",
type: "textModel",
data: {
model: "gpt-4",
prompt: "Extract key topics from: $input1",
temperature: 0.3,
maxTokens: 200
}
},
{
id: "summary",
type: "textModel",
data: {
model: "gpt-4",
prompt: `Summarize this feedback analysis:
Sentiment: $input1
Topics: $input2
Original feedback: $input3`,
temperature: 0.5,
maxTokens: 300
}
}
],
edges: [
{ source: "start", target: "sentiment" },
{ source: "start", target: "topics" },
{ source: "sentiment", target: "summary" },
{ source: "topics", target: "summary" },
{ source: "start", target: "summary" }
]
}
```

The temperature setting controls randomness. A low temperature produces near-deterministic output:

```ts
{
model: "gpt-4",
prompt: "What is 2+2?",
temperature: 0.1
}
// Nearly always: "4"
```

A high temperature produces varied, creative output:

```ts
{
model: "gpt-4",
prompt: "Write a poem",
temperature: 1.5
}
// Highly varied output
```

Validation errors:
- Model field cannot be empty
- Prompt field cannot be empty
- No API key configured for the selected provider (openai, anthropic, google, groq)
- Temperature must be between 0 and 2

Validation warnings:
- Temperature > 1.5 may produce inconsistent results
- maxTokens > 2000 increases cost and latency significantly
- Prompt contains potential personal data (email, SSN, credit card)
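These checks can run before the node executes. The sketch below mirrors the rules above, assuming the `TextModelNodeData` shape from the type definition; the function name, message strings, and personal-data regexes are illustrative, not TopFlow's actual implementation (the API-key check is omitted since it depends on provider configuration):

```ts
type TextModelNodeData = {
  model: string
  prompt: string
  temperature?: number
  maxTokens?: number
}

// Illustrative validator mirroring the error/warning rules listed above.
function validateNode(data: TextModelNodeData): { errors: string[]; warnings: string[] } {
  const errors: string[] = []
  const warnings: string[] = []

  if (!data.model.trim()) errors.push("Model field cannot be empty")
  if (!data.prompt.trim()) errors.push("Prompt field cannot be empty")

  const t = data.temperature
  if (t !== undefined) {
    if (t < 0 || t > 2) errors.push("Temperature must be between 0 and 2")
    else if (t > 1.5) warnings.push("Temperature > 1.5 may produce inconsistent results")
  }

  if (data.maxTokens !== undefined && data.maxTokens > 2000) {
    warnings.push("maxTokens > 2000 increases cost and latency significantly")
  }

  // Simple heuristics for email / SSN-like / card-number-like patterns
  if (/[\w.+-]+@[\w-]+\.\w+|\b\d{3}-\d{2}-\d{4}\b|\b(?:\d[ -]?){13,16}\b/.test(data.prompt)) {
    warnings.push("Prompt contains potential personal data (email, SSN, credit card)")
  }

  return { errors, warnings }
}
```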
Workflow Function Export:

```ts
import { openai } from "@ai-sdk/openai"
import { generateText } from "ai"

export async function runAgentWorkflow(initialInput?: string) {
  const node_start = initialInput || "Default input"

  // Text Model node generates this code
  const node_textModel = await generateText({
    model: openai("gpt-4"),
    prompt: `Analyze: ${node_start}`,
    temperature: 0.7,
    maxTokens: 1000,
  })

  return node_textModel.text
}
```

Route Handler Export:

```ts
export async function POST(req: Request) {
  const { input } = await req.json()

  const result = await generateText({
    // Pass the model id here; the provider reads OPENAI_API_KEY from the environment
    model: openai("gpt-4"),
    prompt: `Analyze: ${input}`,
    temperature: 0.7,
    maxTokens: 1000,
  })

  return Response.json({
    output: result.text,
    usage: result.usage,
  })
}
```

Explore related nodes and patterns for building AI workflows: