Prompting
Learn how to use LLMs for text generation, completion, and prompting.
The starter kit provides utilities for applying LLMs to text generation, completion, and other prompting tasks beyond chatbots.
Basic Text Generation
Generate text using the generateText function (this runs on the server, since it needs your provider API key):
lib/ai/generate.ts
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';
export async function generateSummary(content: string) {
const { text } = await generateText({
model: openai('gpt-4o-mini'),
prompt: `Summarize the following content in 3 sentences:\n\n${content}`
});
return text;
}

Server Actions
Use AI in Server Actions:
app/actions/generate-content.ts
'use server';
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';
export async function generateBlogPost(topic: string) {
const { text } = await generateText({
model: openai('gpt-4o-mini'),
prompt: `Write a blog post about: ${topic}`,
maxTokens: 2000
});
return text;
}
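The action can then be called directly from a client component. A minimal sketch, assuming the file lives at components/blog-post-generator.tsx (the file name, import path, and UI are illustrative):

components/blog-post-generator.tsx
'use client';
import { useState } from 'react';
import { generateBlogPost } from '@/app/actions/generate-content';
export function BlogPostGenerator() {
  const [topic, setTopic] = useState('');
  const [post, setPost] = useState('');
  const [isLoading, setIsLoading] = useState(false);
  async function handleGenerate() {
    setIsLoading(true);
    try {
      // Calling the Server Action sends a request to the server
      setPost(await generateBlogPost(topic));
    } finally {
      setIsLoading(false);
    }
  }
  return (
    <div>
      <input
        value={topic}
        onChange={(e) => setTopic(e.target.value)}
        placeholder="Enter a topic..."
      />
      <button onClick={handleGenerate} disabled={isLoading}>
        Generate
      </button>
      <div>{post}</div>
    </div>
  );
}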
Structured Outputs

Generate structured JSON outputs:
lib/ai/generate-structured.ts
import { openai } from '@ai-sdk/openai';
import { generateObject } from 'ai';
import { z } from 'zod';
const ProductSchema = z.object({
name: z.string(),
description: z.string(),
price: z.number(),
features: z.array(z.string())
});
export async function generateProduct(productType: string) {
const { object } = await generateObject({
model: openai('gpt-4o-mini'),
schema: ProductSchema,
prompt: `Generate a product specification for: ${productType}`
});
return object; // Typed as z.infer<typeof ProductSchema>
}
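Because generateObject validates the response against the schema, the returned value can be used without manual parsing. A minimal usage sketch (the file name and formatting are illustrative):

lib/ai/use-structured.ts
import { generateProduct } from './generate-structured';
export async function formatProductListing(productType: string) {
  const product = await generateProduct(productType);
  // All fields are typed by the Zod schema, so this compiles safely
  return `${product.name} ($${product.price})\n${product.description}\nFeatures: ${product.features.join(', ')}`;
}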
Prompt Templates

Create reusable prompt templates:
lib/ai/prompts.ts
export const prompts = {
summarize: (content: string) =>
`Summarize the following content in 3 sentences:\n\n${content}`,
translate: (text: string, targetLanguage: string) =>
`Translate the following text to ${targetLanguage}:\n\n${text}`,
extractKeywords: (content: string) =>
`Extract 5 keywords from the following content:\n\n${content}`,
generateTitle: (content: string) =>
`Generate a compelling title for the following content:\n\n${content}`
};

Usage:
lib/ai/use-prompts.ts
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';
import { prompts } from './prompts';
export async function summarizeContent(content: string) {
const { text } = await generateText({
model: openai('gpt-4o-mini'),
prompt: prompts.summarize(content)
});
return text;
}

System Prompts
Use system prompts to guide model behavior:
lib/ai/generate-with-system.ts
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';
export async function generateResponse(userInput: string) {
const { text } = await generateText({
model: openai('gpt-4o-mini'),
system:
'You are a helpful assistant that provides concise, accurate answers.',
prompt: userInput
});
return text;
}

Temperature and Sampling
Control randomness and creativity:
lib/ai/generate-creative.ts
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';
// Creative writing (higher temperature)
export async function generateCreativeStory(prompt: string) {
const { text } = await generateText({
model: openai('gpt-4o-mini'),
prompt,
temperature: 0.9, // More creative
maxTokens: 1000
});
return text;
}
// Factual content (lower temperature)
export async function generateFactualContent(prompt: string) {
const { text } = await generateText({
model: openai('gpt-4o-mini'),
prompt,
temperature: 0.2, // More deterministic
maxTokens: 500
});
return text;
}

Streaming Text Generation
Stream text generation for better UX:
app/api/ai/generate/route.ts
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';
export async function POST(req: Request) {
const { prompt } = await req.json();
const result = streamText({
model: openai('gpt-4o-mini'),
prompt
});
return result.toDataStreamResponse();
}

Client-side usage:
components/streaming-generator.tsx
'use client';
import { useCompletion } from '@ai-sdk/react';
export function StreamingGenerator() {
const { completion, input, handleInputChange, handleSubmit, isLoading } =
useCompletion({
api: '/api/ai/generate'
});
return (
<div>
<div>{completion}</div>
<form onSubmit={handleSubmit}>
<input
value={input}
onChange={handleInputChange}
placeholder="Enter a prompt..."
/>
<button
type="submit"
disabled={isLoading}
>
Generate
</button>
</form>
</div>
);
}

Using Different Providers
Switch between AI providers:
lib/ai/provider.ts
import { anthropic } from '@ai-sdk/anthropic';
import { google } from '@ai-sdk/google';
import { openai } from '@ai-sdk/openai';
export function getModel(provider: 'openai' | 'anthropic' | 'google') {
switch (provider) {
case 'openai':
return openai('gpt-4o-mini');
case 'anthropic':
return anthropic('claude-3-haiku-20240307');
case 'google':
return google('gemini-pro');
default:
return openai('gpt-4o-mini');
}
}

Usage:
lib/ai/generate.ts
import { generateText as generateTextSDK } from 'ai';
import { getModel } from './provider';
export async function generateText(
  prompt: string,
  provider: 'openai' | 'anthropic' | 'google' = 'openai'
) {
const { text } = await generateTextSDK({
model: getModel(provider),
prompt
});
return text;
}

Error Handling
Handle API errors gracefully:
lib/ai/generate-safe.ts
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';
export async function generateTextSafely(prompt: string) {
try {
const { text } = await generateText({
model: openai('gpt-4o-mini'),
prompt
});
return { success: true as const, text };
} catch (error) {
console.error('AI generation error:', error);
return {
success: false as const,
error: error instanceof Error ? error.message : 'Unknown error'
};
}
}
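With the success flag narrowed to literal types (note the as const above), callers can branch on the result without their own try-catch. A minimal sketch, assuming a hypothetical file name:

lib/ai/use-safe.ts
import { generateTextSafely } from './generate-safe';
export async function summarizeOrFallback(prompt: string) {
  const result = await generateTextSafely(prompt);
  if (result.success) {
    return result.text; // Narrowed: text is available here
  }
  // Narrowed: error is available here
  return `Generation failed: ${result.error}`;
}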
Best Practices

- Use appropriate models - Choose models based on task complexity
- Set temperature wisely - Lower for factual, higher for creative
- Limit token usage - Set maxTokens to control costs
- Use system prompts - Guide model behavior with system messages
- Handle errors - Always wrap AI calls in try-catch
- Cache results - Cache expensive generations when possible (see the caching sketch below)
- Monitor usage - Track token usage and costs (see the tracking sketch below)
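For caching, even a simple in-memory map avoids regenerating identical prompts within one server process. This is a minimal sketch with a hypothetical generateSummaryCached helper; production code would more likely use Redis or the framework's caching layer:

lib/ai/generate-cached.ts
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';
// Hypothetical in-memory cache; entries live for the lifetime of the process
const cache = new Map<string, string>();
export async function generateSummaryCached(content: string) {
  const key = `summarize:${content}`;
  const cached = cache.get(key);
  if (cached !== undefined) return cached;
  const { text } = await generateText({
    model: openai('gpt-4o-mini'),
    prompt: `Summarize the following content in 3 sentences:\n\n${content}`
  });
  cache.set(key, text);
  return text;
}

For monitoring, generateText also returns token usage alongside the text. A sketch of logging it (the console destination is illustrative; a real setup would forward these numbers to a metrics system):

lib/ai/generate-tracked.ts
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';
export async function generateWithTracking(prompt: string) {
  const { text, usage } = await generateText({
    model: openai('gpt-4o-mini'),
    prompt
  });
  // usage reports prompt, completion, and total token counts
  console.log(
    `tokens: prompt=${usage.promptTokens}, completion=${usage.completionTokens}, total=${usage.totalTokens}`
  );
  return text;
}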