The Vercel AI SDK has become the standard way to build AI-powered applications in Next.js and React. It handles streaming, chat UI state, tool calling, and multi-model support out of the box. What it doesn't handle natively is web search -- you need to integrate a search API yourself.
This tutorial shows you how to add web search to Vercel AI SDK applications using SearchHive's SwiftSearch API. The pattern works for any search API, but SearchHive has the advantage of also providing web scraping (ScrapeForge) and content extraction (DeepDive) under the same API key.
## Key Takeaways
- Vercel AI SDK supports tool calling natively -- define a search tool and let the AI decide when to use it
- SearchHive SwiftSearch integrates as a simple fetch call in your tool handler
- The same pattern works for ScrapeForge -- scrape full page content when search snippets aren't enough
- Streaming works out of the box -- search results stream back through the AI SDK's response pipeline
- SearchHive's free tier is sufficient for development and low-traffic applications
## Setting Up the Project

```bash
npx create-next-app@latest ai-search-app --typescript --tailwind --app
cd ai-search-app
npm install ai @ai-sdk/openai zod
```

Create a `.env.local` file:

```bash
OPENAI_API_KEY=your_openai_key
SEARCHHIVE_API_KEY=your_searchhive_key
```
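Optionally, fail fast when a key is missing rather than debugging a confusing 401 from SearchHive at request time. This `requireEnv` helper is not part of the AI SDK -- it's a small hypothetical guard you can call once at module load in your API routes:

```typescript
// Hypothetical helper -- throws at startup instead of surfacing
// an auth error deep inside a tool call later.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value || value.length === 0) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage at the top of a route file:
// const SEARCHHIVE_API_KEY = requireEnv('SEARCHHIVE_API_KEY');
```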
## Pattern 1: Search Tool with AI SDK Tool Calling

The cleanest integration uses the AI SDK's built-in tool system: define a search tool, and the model decides when to call it based on the user's question.

### API Route
```typescript
// app/api/chat/route.ts
import { openai } from '@ai-sdk/openai';
import { streamText, tool } from 'ai';
import { z } from 'zod';

async function searchWeb(query: string, numResults = 5) {
  const resp = await fetch('https://api.searchhive.dev/v1/swiftsearch', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.SEARCHHIVE_API_KEY}`,
    },
    body: JSON.stringify({
      engine: 'google',
      query,
      num_results: numResults,
    }),
  });
  if (!resp.ok) throw new Error(`Search failed: ${resp.status}`);
  const data = await resp.json();
  const results = data.results || [];
  return results
    .slice(0, numResults)
    .map(
      (r: any) =>
        `- ${r.title}\n  URL: ${r.url}\n  ${r.snippet || ''}`
    )
    .join('\n\n');
}

export async function POST(req: Request) {
  const { messages } = await req.json();
  const result = streamText({
    model: openai('gpt-4o'),
    messages,
    tools: {
      searchWeb: tool({
        description:
          'Search the web for current information. Use when you need real-time data or information beyond your training data.',
        parameters: z.object({
          query: z.string().describe('The search query'),
          numResults: z
            .number()
            .optional()
            .default(5)
            .describe('Number of results to return (1-10)'),
        }),
        execute: async ({ query, numResults }) => {
          return await searchWeb(query, numResults);
        },
      }),
    },
    maxSteps: 5, // Allow multiple tool calls per turn
  });
  return result.toDataStreamResponse();
}
```
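The snippet formatting inside `searchWeb` is easy to factor into a pure function so it can be unit-tested without hitting the network. A sketch, assuming the result shape used above (`title`, `url`, optional `snippet`):

```typescript
// Assumed result shape, based on the SwiftSearch response fields used above.
interface SearchResult {
  title: string;
  url: string;
  snippet?: string;
}

// Pure formatter: produces the same bullet list as the inline .map()
// in searchWeb, extracted for testability.
function formatResults(results: SearchResult[], limit = 5): string {
  return results
    .slice(0, limit)
    .map((r) => `- ${r.title}\n  URL: ${r.url}\n  ${r.snippet ?? ''}`)
    .join('\n\n');
}
```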
### Frontend
```tsx
// app/page.tsx
'use client';
import { useChat } from 'ai/react';

export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit, isLoading } = useChat();
  return (
    <div className="max-w-2xl mx-auto p-4">
      <div className="space-y-4 mb-4">
        {messages.map((m) => (
          <div
            key={m.id}
            className={`p-3 rounded ${m.role === 'user' ? 'bg-blue-100 ml-8' : 'bg-gray-100 mr-8'}`}
          >
            <p className="font-semibold text-sm">{m.role === 'user' ? 'You' : 'AI'}</p>
            <p className="whitespace-pre-wrap">{m.content}</p>
          </div>
        ))}
      </div>
      <form onSubmit={handleSubmit} className="flex gap-2">
        <input
          value={input}
          onChange={handleInputChange}
          placeholder="Ask anything... (AI will search the web when needed)"
          className="flex-1 p-3 border rounded"
          disabled={isLoading}
        />
        <button type="submit" disabled={isLoading} className="bg-black text-white px-6 py-3 rounded">
          {isLoading ? 'Searching...' : 'Send'}
        </button>
      </form>
    </div>
  );
}
```
## Pattern 2: Search + Scrape for Deep Content

Search snippets are often not enough. When the AI needs full page content, combine SwiftSearch with ScrapeForge.
```typescript
// app/api/deep-research/route.ts
import { openai } from '@ai-sdk/openai';
import { streamText, tool } from 'ai';
import { z } from 'zod';

export async function POST(req: Request) {
  const { messages } = await req.json();
  const result = streamText({
    model: openai('gpt-4o'),
    messages,
    tools: {
      searchAndScrape: tool({
        description:
          'Search the web and scrape the top result for full content. Use when you need detailed information beyond search snippets.',
        parameters: z.object({
          query: z.string(),
        }),
        execute: async ({ query }) => {
          // Step 1: Search
          const searchResp = await fetch('https://api.searchhive.dev/v1/swiftsearch', {
            method: 'POST',
            headers: {
              'Content-Type': 'application/json',
              Authorization: `Bearer ${process.env.SEARCHHIVE_API_KEY}`,
            },
            body: JSON.stringify({ engine: 'google', query, num_results: 3 }),
          });
          if (!searchResp.ok) return `Search failed: ${searchResp.status}`;
          const searchData = await searchResp.json();
          const topUrl = searchData.results?.[0]?.url;
          if (!topUrl) return 'No results found.';
          // Step 2: Scrape the full page
          const scrapeResp = await fetch('https://api.searchhive.dev/v1/scrapeforge/scrape', {
            method: 'POST',
            headers: {
              'Content-Type': 'application/json',
              Authorization: `Bearer ${process.env.SEARCHHIVE_API_KEY}`,
            },
            body: JSON.stringify({ url: topUrl, format: 'markdown' }),
          });
          if (!scrapeResp.ok) return `Scrape failed: ${scrapeResp.status}`;
          const scrapeData = await scrapeResp.json();
          const markdown = scrapeData.markdown || scrapeData.content || '';
          // Return the first 10K chars to stay within context limits
          return `Source: ${topUrl}\n\n${markdown.slice(0, 10000)}`;
        },
      }),
    },
    maxSteps: 5,
  });
  return result.toDataStreamResponse();
}
```
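A hard `.slice(0, 10000)` can cut the scraped page mid-sentence. If that matters for your use case, a small hypothetical helper can truncate at the last paragraph break before the limit instead:

```typescript
// Hypothetical refinement of a hard character slice: cut at the last
// paragraph break (blank line) before the limit so the model never
// sees a half-sentence. Falls back to the hard limit if the text has
// no paragraph break in range.
function truncateAtParagraph(text: string, maxChars: number): string {
  if (text.length <= maxChars) return text;
  const head = text.slice(0, maxChars);
  const cut = head.lastIndexOf('\n\n');
  return cut > 0 ? head.slice(0, cut) : head;
}
```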
## Pattern 3: RAG with Search Results

For applications where every response should be grounded in web data, use a system message that forces the AI to search before answering.
```typescript
import { openai } from '@ai-sdk/openai';
import { streamText, tool } from 'ai';
import { z } from 'zod';

export async function POST(req: Request) {
  const { messages } = await req.json();
  const result = streamText({
    model: openai('gpt-4o'),
    system:
      'You are a research assistant. ALWAYS search the web before answering to ensure your information is current and accurate. Cite your sources.',
    messages,
    tools: {
      searchWeb: tool({
        description: 'Search the web for information',
        parameters: z.object({
          query: z.string(),
        }),
        execute: async ({ query }) => {
          const resp = await fetch('https://api.searchhive.dev/v1/swiftsearch', {
            method: 'POST',
            headers: {
              'Content-Type': 'application/json',
              Authorization: `Bearer ${process.env.SEARCHHIVE_API_KEY}`,
            },
            body: JSON.stringify({ engine: 'google', query, num_results: 5 }),
          });
          const data = await resp.json();
          return (data.results || [])
            .map((r: any) => `[${r.title}](${r.url}): ${r.snippet || ''}`)
            .join('\n');
        },
      }),
    },
    toolChoice: 'required', // Force at least one tool call
    maxSteps: 3,
  });
  return result.toDataStreamResponse();
}
```
## Comparison: SearchHive vs Alternatives for Vercel AI SDK

| Feature | SearchHive | SerpApi | Tavily | Brave Search |
|---|---|---|---|---|
| Free tier | Renewable credits | 100/mo | 1,000/mo | 2,000/mo |
| Search + Scrape | Yes (SwiftSearch + ScrapeForge) | Search only | Search only | Search only |
| Pricing per 1K searches | ~$1-2 | ~$10 | ~$3 | $3-5 |
| TypeScript friendly | Yes (REST, JSON) | Yes | Yes (native SDK) | Yes |
| Content extraction | DeepDive | No | Summarize only | No |
## Best Practices

- Set `maxSteps` to 3-5 to allow multiple tool calls without runaway loops
- Cache search results on the server side to avoid redundant API calls
- Rate limit tool calls to prevent abuse (Vercel's edge runtime can enforce this)
- Use `toolChoice: 'auto'` for general chat (let the AI decide) or `toolChoice: 'required'` for research apps
- Handle search failures gracefully -- return a message to the AI so it can still respond
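The caching and failure-handling advice can be sketched together in one wrapper. This is a minimal sketch assuming an in-process `Map` cache; on serverless you would want a shared store such as Vercel KV or Redis, since each instance has its own memory:

```typescript
// In-process TTL cache for search results, plus graceful failure:
// on error the wrapper returns a message the model can read, so the
// chat still produces an answer instead of a 500.
const cache = new Map<string, { value: string; expires: number }>();
const TTL_MS = 5 * 60 * 1000; // cache entries for 5 minutes

async function cachedSearch(
  query: string,
  search: (q: string) => Promise<string>,
): Promise<string> {
  const hit = cache.get(query);
  if (hit && hit.expires > Date.now()) return hit.value;
  try {
    const value = await search(query);
    cache.set(query, { value, expires: Date.now() + TTL_MS });
    return value;
  } catch (err) {
    return `Search failed (${err instanceof Error ? err.message : 'unknown error'}). Answer from existing knowledge and note that live search was unavailable.`;
  }
}
```

Inside a tool's `execute`, you would call `cachedSearch(query, searchWeb)` instead of `searchWeb(query)` directly.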
## Conclusion
Integrating web search into Vercel AI SDK applications takes about 30 lines of TypeScript. The AI SDK's tool system handles the complexity of multi-turn tool calling, streaming, and state management. SearchHive provides the search backbone with the added advantage of scraping and extraction capabilities in one API.
Related: Best Search APIs for AI Agents | Building an MCP Search Server | Best Web Scraping APIs for LLMs and RAG Pipelines
Add web search to your Next.js AI app with SearchHive's free tier -- SwiftSearch, ScrapeForge, and DeepDive, one API key.