Chatbot

Build AI-powered chatbots with streaming responses and conversation history.

The starter kit includes a complete chatbot system with streaming responses, conversation history, and a beautiful UI.

Overview

The chatbot uses:

  • Vercel AI SDK - For streaming responses and state management
  • tRPC - For type-safe chat CRUD operations
  • OpenAI - For the LLM backend (configurable; see the provider swap sketch below)

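Because every model call goes through the AI SDK's provider interface, swapping the backend is a small change. A minimal sketch, assuming you have installed @ai-sdk/anthropic and set ANTHROPIC_API_KEY:

import { anthropic } from '@ai-sdk/anthropic';
import { streamText, type CoreMessage } from 'ai';

// Same call shape as the OpenAI examples below; only the provider changes.
export function streamWithClaude(messages: CoreMessage[]) {
  return streamText({
    model: anthropic('claude-3-5-sonnet-20241022'),
    messages
  });
}
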
Streaming Endpoint

The main AI chat logic resides in an API route to support real-time streaming of tokens to the client.

app/api/ai/chat/route.ts
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';
import { and, eq } from 'drizzle-orm';

import { getSession } from '@/lib/auth/server';
import { db } from '@/lib/db';
import { aiChatTable } from '@/lib/db/schema';

export async function POST(req: Request) {
  const session = await getSession();
  if (!session) {
    return Response.json({ error: 'Unauthorized' }, { status: 401 });
  }

  const { messages, chatId, organizationId } = await req.json();

  const result = streamText({
    model: openai('gpt-4o-mini'),
    messages,
    async onFinish({ text, usage }) {
      // `usage` carries token counts if you want to record them
      // Save the assistant's response to the database
      if (chatId) {
        const updatedMessages = [
          ...messages,
          { role: 'assistant', content: text }
        ];

        await db
          .update(aiChatTable)
          .set({ messages: JSON.stringify(updatedMessages) })
          .where(
            organizationId
              ? and(
                  eq(aiChatTable.id, chatId),
                  eq(aiChatTable.organizationId, organizationId)
                )
              : and(
                  eq(aiChatTable.id, chatId),
                  eq(aiChatTable.userId, session.user.id)
                )
          );
      }
    }
  });

  return result.toTextStreamResponse();
}
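
The route reads chatId and organizationId from the request body alongside the messages. On the client, useChat forwards extra fields through its body option; a minimal sketch (the component name and props here are illustrative):

'use client';

import { useChat } from '@ai-sdk/react';

export function ChatWithPersistence({
  chatId,
  organizationId
}: {
  chatId: string;
  organizationId?: string;
}) {
  // Extra `body` fields are merged into the JSON payload, so the
  // streaming route above can persist the conversation on finish.
  const chat = useChat({
    api: '/api/ai/chat',
    streamProtocol: 'text', // the route responds with a plain text stream
    body: { chatId, organizationId }
  });

  return null; // ...render chat.messages as in the examples below
}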

UI Components

Main Chat Component

The AiChat component provides a full conversation interface with a history sidebar.

app/(saas)/dashboard/ai/page.tsx
import { AiChat } from '@/components/ai/ai-chat';
import { getSession } from '@/lib/auth/server';

export default async function AiPage() {
  const session = await getSession();
  const organizationId = session?.session.activeOrganizationId;

  if (!organizationId) {
    return <div>No active organization</div>;
  }

  return <AiChat organizationId={organizationId} />;
}

Custom Hook

For more control, you can use the useChat hook directly from the Vercel AI SDK.

components/my-custom-ai.tsx
'use client';

import { useChat } from '@ai-sdk/react';

export function MyCustomAI() {
  const { messages, input, handleInputChange, handleSubmit, isLoading } =
    useChat({
      api: '/api/ai/chat',
      // The route returns a plain text stream, so tell useChat to expect it
      streamProtocol: 'text',
      onFinish: (message) => {
        // Handle message completion
        console.log('Message finished:', message);
      }
    });

  return (
    <div>
      {messages.map((message) => (
        <div key={message.id}>
          <strong>{message.role}:</strong> {message.content}
        </div>
      ))}
      <form onSubmit={handleSubmit}>
        <input
          value={input}
          onChange={handleInputChange}
          placeholder="Type a message..."
          disabled={isLoading}
        />
        <button
          type="submit"
          disabled={isLoading}
        >
          Send
        </button>
      </form>
    </div>
  );
}

Conversation History

Chats are stored in the database and can be retrieved via tRPC:

trpc/routers/organization/organization-ai-router.ts
import { createTRPCRouter, protectedOrganizationProcedure } from '@/trpc/init';
import { TRPCError } from '@trpc/server';
import { and, desc, eq, sql } from 'drizzle-orm';
import { z } from 'zod';

import { appConfig } from '@/config/app.config';
import { db } from '@/lib/db';
import { aiChatTable } from '@/lib/db/schema';

export const organizationAiRouter = createTRPCRouter({
  listChats: protectedOrganizationProcedure
    .input(
      z
        .object({
          limit: z
            .number()
            .min(1)
            .max(appConfig.pagination.maxLimit)
            .optional()
            .default(appConfig.pagination.defaultLimit),
          offset: z.number().min(0).optional().default(0)
        })
        .optional()
    )
    .query(async ({ ctx, input }) => {
      // Use SQL builder to select only needed columns and extract first message
      const chats = await db
        .select({
          id: aiChatTable.id,
          title: aiChatTable.title,
          pinned: aiChatTable.pinned,
          createdAt: aiChatTable.createdAt,
          firstMessageContent: sql<string | null>`
            CASE 
              WHEN ${aiChatTable.messages} IS NOT NULL 
                AND ${aiChatTable.messages}::jsonb != '[]'::jsonb 
              THEN (${aiChatTable.messages}::jsonb->0->>'content')
              ELSE NULL 
            END
          `.as('first_message_content')
        })
        .from(aiChatTable)
        .where(eq(aiChatTable.organizationId, ctx.organization.id))
        .orderBy(desc(aiChatTable.pinned), desc(aiChatTable.createdAt))
        .limit(input?.limit ?? appConfig.pagination.defaultLimit)
        .offset(input?.offset ?? 0);

      return { chats };
    }),

  getChat: protectedOrganizationProcedure
    .input(z.object({ id: z.string().uuid() }))
    .query(async ({ ctx, input }) => {
      const chat = await db.query.aiChatTable.findFirst({
        where: and(
          eq(aiChatTable.id, input.id),
          eq(aiChatTable.organizationId, ctx.organization.id)
        )
      });

      if (!chat) {
        throw new TRPCError({
          code: 'NOT_FOUND',
          message: 'Chat not found'
        });
      }

      return {
        chat: {
          ...chat,
          messages: chat.messages ? JSON.parse(chat.messages) : []
        }
      };
    }),

  createChat: protectedOrganizationProcedure
    .input(z.object({ title: z.string().optional() }).optional())
    .mutation(async ({ ctx, input }) => {
      const [chat] = await db
        .insert(aiChatTable)
        .values({
          organizationId: ctx.organization.id,
          title: input?.title || 'New Chat',
          messages: JSON.stringify([])
        })
        .returning();

      return { chat };
    }),

  deleteChat: protectedOrganizationProcedure
    .input(z.object({ id: z.string().uuid() }))
    .mutation(async ({ input, ctx }) => {
      await db
        .delete(aiChatTable)
        .where(
          and(
            eq(aiChatTable.id, input.id),
            eq(aiChatTable.organizationId, ctx.organization.id)
          )
        );
    })
});
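
On the client, these procedures are consumed through the kit's tRPC React client. A sketch assuming a conventional setup where the router is mounted under an organizationAi key (adjust the import path and key to your project):

'use client';

import { api } from '@/trpc/react'; // hypothetical client import

export function ChatHistorySidebar() {
  const { data, isLoading } = api.organizationAi.listChats.useQuery({
    limit: 20
  });

  if (isLoading) return <div>Loading chats...</div>;

  return (
    <ul>
      {data?.chats.map((chat) => (
        <li key={chat.id}>
          {chat.title}
          {chat.firstMessageContent && <p>{chat.firstMessageContent}</p>}
        </li>
      ))}
    </ul>
  );
}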

Tool Calling (Function Calling)

You can add tools that the model can call to perform actions such as searching your database or calling external APIs. With tools enabled, set maxSteps so the model can continue after a tool result and produce a final answer:

app/api/ai/chat/route.ts
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';
import { ilike } from 'drizzle-orm';
import { z } from 'zod';

import { db } from '@/lib/db';
import { leadTable } from '@/lib/db/schema';

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai('gpt-4o-mini'),
    messages,
    // Allow the model to continue after tool results and compose an answer
    maxSteps: 5,
    tools: {
      findLeads: {
        description: 'Find leads in the database by name',
        parameters: z.object({
          query: z.string().describe('The search query')
        }),
        execute: async ({ query }) => {
          const leads = await db.query.leadTable.findMany({
            where: ilike(leadTable.name, `%${query}%`),
            limit: 10
          });

          return leads;
        }
      },
      getWeather: {
        description: 'Get the current weather for a location',
        parameters: z.object({
          location: z.string().describe('The city name')
        }),
        execute: async ({ location }) => {
          // Example endpoint only; substitute your weather provider's API
          const response = await fetch(
            `https://api.weather.com/v1/current?location=${encodeURIComponent(location)}`
          );
          return await response.json();
        }
        }
      }
    }
  });

  return result.toTextStreamResponse();
}
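
On the client, useChat surfaces tool activity on each message so you can show progress while a tool runs. Note that tool invocations only reach the client over the data stream protocol, so the route would need to return result.toDataStreamResponse() instead of a text stream. A sketch assuming the AI SDK v4 message shape:

'use client';

import { useChat } from '@ai-sdk/react';

export function ChatWithTools() {
  // The default streamProtocol ('data') is required for tool invocations
  const { messages } = useChat({ api: '/api/ai/chat' });

  return (
    <div>
      {messages.map((message) => (
        <div key={message.id}>
          {message.content}
          {message.toolInvocations?.map((tool) => (
            <div key={tool.toolCallId}>
              {tool.state === 'result'
                ? `${tool.toolName} finished`
                : `Calling ${tool.toolName}...`}
            </div>
          ))}
        </div>
      ))}
    </div>
  );
}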

Customizing the Model

You can customize which model to use:

app/api/ai/chat/route.ts
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';

export async function POST(req: Request) {
  const { messages, model } = await req.json();

  const result = streamText({
    model: openai(model || 'gpt-4o-mini'), // Default to gpt-4o-mini
    messages,
    temperature: 0.7, // Control randomness
    maxTokens: 1000 // Limit response length
  });

  return result.toTextStreamResponse();
}
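
Since the model name arrives from the client, validate it before use so users cannot select an arbitrary (and possibly expensive) model. A minimal allowlist sketch; the set of allowed models is illustrative:

const ALLOWED_MODELS = new Set(['gpt-4o-mini', 'gpt-4o']);

function resolveModel(requested?: string): string {
  // Fall back to the cheap default for anything not on the allowlist
  return requested && ALLOWED_MODELS.has(requested)
    ? requested
    : 'gpt-4o-mini';
}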

Error Handling

Handle errors gracefully. This try/catch covers failures before the response is returned; once streaming has started, errors surface in the stream itself, so also handle them on the client (for example with useChat's onError callback, sketched after the route):

app/api/ai/chat/route.ts
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';

export async function POST(req: Request) {
  try {
    const { messages } = await req.json();

    const result = streamText({
      model: openai('gpt-4o-mini'),
      messages
    });

    return result.toTextStreamResponse();
  } catch (error) {
    console.error('AI chat error:', error);
    return Response.json(
      { error: 'Failed to process chat request' },
      { status: 500 }
    );
  }
}
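
A minimal client-side counterpart; the console call stands in for whatever notification system your app uses:

'use client';

import { useChat } from '@ai-sdk/react';

export function ChatWithErrorHandling() {
  const { error, reload } = useChat({
    api: '/api/ai/chat',
    onError: (error) => {
      // Replace with your toast/notification system
      console.error('Chat error:', error);
    }
  });

  if (error) {
    return <button onClick={() => reload()}>Something went wrong. Retry</button>;
  }

  return null; // ...render the conversation as shown above
}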

Rate Limiting

Implement rate limiting to control costs:

app/api/ai/chat/route.ts
import { getSession } from '@/lib/auth/server';
import { rateLimit } from '@/lib/rate-limit';

export async function POST(req: Request) {
  const session = await getSession();

  if (!session) {
    return Response.json({ error: 'Unauthorized' }, { status: 401 });
  }

  // Check rate limit
  const { success } = await rateLimit.limit(session.user.id);

  if (!success) {
    return Response.json({ error: 'Rate limit exceeded' }, { status: 429 });
  }

  // Process chat request
  // ...
}
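
The rateLimit helper can be backed by any sliding-window limiter. A minimal sketch of lib/rate-limit.ts using @upstash/ratelimit (an assumption; your kit may ship its own implementation):

lib/rate-limit.ts
import { Ratelimit } from '@upstash/ratelimit';
import { Redis } from '@upstash/redis';

// 20 requests per user per minute; tune to your cost tolerance
export const rateLimit = new Ratelimit({
  redis: Redis.fromEnv(), // reads UPSTASH_REDIS_REST_URL and _TOKEN
  limiter: Ratelimit.slidingWindow(20, '1 m')
});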

Best Practices

  1. Stream responses - Always use streaming for better UX
  2. Save conversations - Store chat history in the database
  3. Implement rate limiting - Control API costs
  4. Handle errors - Provide user-friendly error messages
  5. Use tools wisely - Add tools for database queries and external APIs
  6. Monitor usage - Track token usage and costs (a logging sketch follows)
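
The usage object already passed to onFinish in the streaming endpoint carries the token counts, so monitoring can start as simple structured logging. A sketch; where you send the numbers is up to you:

import { openai } from '@ai-sdk/openai';
import { streamText, type CoreMessage } from 'ai';

export function streamWithUsageLogging(userId: string, messages: CoreMessage[]) {
  return streamText({
    model: openai('gpt-4o-mini'),
    messages,
    async onFinish({ usage }) {
      // usage exposes promptTokens / completionTokens / totalTokens (AI SDK v4)
      console.log('[ai-usage]', {
        userId,
        promptTokens: usage.promptTokens,
        completionTokens: usage.completionTokens,
        totalTokens: usage.totalTokens
      });
    }
  });
}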