Flowise Web Search Integration: Visual AI Builder Guide
Flowise is an open-source visual builder for LLM applications. It lets you drag-and-drop components like chat models, document loaders, and vector stores to build AI apps without writing code. Adding web search to Flowise unlocks real-time data in your chatbots and agents. This guide covers how to integrate search APIs with Flowise, including a comparison of the best options.
Key Takeaways
- Flowise supports web search through its Web Browser tool and SerpAPI integration
- SearchHive's SwiftSearch API gives you the cheapest search with the most complete feature set (search + scrape + research)
- You can integrate any REST search API into Flowise using the HTTP Request tool
- The best approach depends on your use case: simple search, research-heavy tasks, or full content extraction
Prerequisites
- Flowise installed locally (`npx flowise start`) or via Docker
- A search API key (the SearchHive free tier gives you 500 credits)
- Basic familiarity with the Flowise canvas
Step 1: Install and Start Flowise
Install Flowise globally:
```bash
npm install -g flowise
npx flowise start
```
Open http://localhost:3000 in your browser. Create a new chatflow by clicking the "+" button.
Step 2: Add a Chat Model
Your chatflow needs a language model. Flowise supports OpenAI, Anthropic, Google, and local models via Ollama.
- Click "Add" on the canvas
- Search for "ChatOpenAI" (or ChatAnthropic)
- Add your API key
- Select your model (e.g., gpt-4o, claude-sonnet-4-20250514)
Step 3: Add Web Search Using the Built-in Tool
Flowise has a built-in SerpAPI integration. Here is how to set it up:
- Add a "SerpAPI" component to the canvas
- Enter your SerpAPI key (pricing starts at $25/mo for 1K searches)
- Connect it to the "Tools" input on your chat model
This works, but SerpAPI is expensive for production use. A better option is to use SearchHive as your search provider.
Step 4: Using SearchHive with Flowise (Better Option)
Flowise does not have a native SearchHive component, but you can integrate it using the HTTP Request tool or by creating a custom API route. Here are two approaches:
Approach A: Custom API Wrapper
Create a small FastAPI server that wraps SearchHive's SwiftSearch API behind a local endpoint the Flowise HTTP Request tool can call:
```python
# search_bridge.py - A simple FastAPI wrapper for SearchHive
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware
import httpx

app = FastAPI()
app.add_middleware(
    CORSMiddleware,
    allow_origins=["http://localhost:3000"],
    allow_methods=["*"],
    allow_headers=["*"],
)

SEARCHHIVE_KEY = "your-api-key-here"

@app.get("/search")
async def search(q: str, n: int = 5):
    # Call SearchHive SwiftSearch
    async with httpx.AsyncClient() as client:
        resp = await client.post(
            "https://api.searchhive.dev/v1/swiftsearch",
            headers={
                "Authorization": f"Bearer {SEARCHHIVE_KEY}",
                "Content-Type": "application/json",
            },
            json={"query": q, "num_results": n},
        )
    data = resp.json()
    return {
        "results": [
            {
                "title": r.get("title", ""),
                "url": r.get("url", ""),
                "snippet": r.get("snippet", ""),
            }
            for r in data.get("results", [])
        ]
    }
```
Run this with `uvicorn search_bridge:app --port 8000`. Now Flowise can call your local endpoint.
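Once the bridge is running, the `/search` endpoint returns JSON shaped like `{"results": [...]}`. A small helper along these lines (a sketch; the field names simply mirror the wrapper above) flattens that payload into the plain text most chat prompts expect:

```python
def format_results(payload: dict) -> str:
    """Flatten the bridge's {"results": [...]} payload into plain text."""
    lines = [
        f"{r.get('title', '')}: {r.get('snippet', '')} ({r.get('url', '')})"
        for r in payload.get("results", [])
    ]
    return "\n".join(lines) or "No results found."

# Example payload matching the wrapper's response shape:
sample = {
    "results": [
        {
            "title": "Flowise docs",
            "url": "https://docs.flowiseai.com",
            "snippet": "Visual LLM builder.",
        }
    ]
}
print(format_results(sample))
# -> Flowise docs: Visual LLM builder. (https://docs.flowiseai.com)
```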
Approach B: Using SearchHive DeepDive for Research Workflows
For Flowise agents that need full page content (not just snippets), use DeepDive:
```python
@app.post("/deep-research")
async def deep_research(query: str):
    async with httpx.AsyncClient() as client:
        # Step 1: Search for relevant pages
        search_resp = await client.post(
            "https://api.searchhive.dev/v1/swiftsearch",
            headers={
                "Authorization": f"Bearer {SEARCHHIVE_KEY}",
                "Content-Type": "application/json",
            },
            json={"query": query, "num_results": 3},
        )
        urls = [r["url"] for r in search_resp.json().get("results", [])]

        # Step 2: Extract full content
        pages = []
        for url in urls[:3]:
            deep_resp = await client.post(
                "https://api.searchhive.dev/v1/deepdive",
                headers={
                    "Authorization": f"Bearer {SEARCHHIVE_KEY}",
                    "Content-Type": "application/json",
                },
                json={"url": url, "format": "markdown"},
            )
            pages.append(deep_resp.json().get("content", ""))
    return {"pages": pages, "query": query}
```
This gives your Flowise agent full article content to work with, producing much higher quality answers than snippets alone.
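Before handing those pages to the model, it helps to cap each one's length and label its source so the agent can cite it. A minimal helper (a sketch; `max_chars` is an assumed per-page budget, not a SearchHive parameter):

```python
def build_context(pages: list[str], max_chars: int = 3000) -> str:
    """Join extracted pages into one prompt block, truncating each page
    and tagging it with a numbered source marker for citation."""
    chunks = [
        f"[Source {i}]\n{page[:max_chars]}"
        for i, page in enumerate(pages, start=1)
    ]
    return "\n\n".join(chunks)

context = build_context(
    ["First article body...", "Second article body..."], max_chars=3000
)
print(context)
```

The resulting string can be returned from the `/deep-research` route instead of the raw page list, keeping responses inside your LLM's context window.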
Step 5: Configure Flowise to Use Your Search API
Back in Flowise, use the HTTP Request tool:
- Add the "HttpRequest" component to the canvas
- Configure:
  - Method: GET
  - URL: `http://localhost:8000/search?q={{query}}`
  - Headers: none needed for a local endpoint
- Connect it to your chat model's Tools input
- In your chat model's system prompt, add: "Use the web search tool when you need current information. Always cite your sources."
Step 6: Alternative - Python-Based Flow with LangChain
If you prefer code over the visual builder, Flowise uses LangChain under the hood. You can replicate the same search integration directly:
```python
from langchain_openai import ChatOpenAI
from langchain.tools import tool
from langchain.agents import create_tool_calling_agent, AgentExecutor
from langchain_core.prompts import ChatPromptTemplate
import httpx

SEARCHHIVE_KEY = "your-api-key-here"

@tool
def web_search(query: str) -> str:
    """Search the web for current information using SearchHive."""
    resp = httpx.post(
        "https://api.searchhive.dev/v1/swiftsearch",
        headers={
            "Authorization": f"Bearer {SEARCHHIVE_KEY}",
            "Content-Type": "application/json",
        },
        json={"query": query, "num_results": 5},
    )
    results = resp.json().get("results", [])
    return "\n".join(
        f"{r['title']}: {r['snippet']} ({r['url']})"
        for r in results
    )

llm = ChatOpenAI(model="gpt-4o", temperature=0)
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful research assistant. Use web search for current data."),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),
])
agent = create_tool_calling_agent(llm, [web_search], prompt)
executor = AgentExecutor(agent=agent, tools=[web_search])

result = executor.invoke({"input": "What are the latest developments in AI regulation?"})
print(result["output"])
```

Note that the `@tool` decorator requires a docstring -- LangChain uses it as the tool description the model sees when deciding whether to call it.
Search API Comparison for Flowise
| API | Price per 1K searches | Flowise Native | Content Extraction | Best For |
|---|---|---|---|---|
| SearchHive | ~$0.49 (Builder plan) | Via HTTP tool | SwiftSearch + ScrapeForge + DeepDive | Full-featured search + research |
| SerpAPI | $25.00 | Native | No | Quick setup, expensive |
| Serper | $1.00 | Via HTTP tool | No | Cheap Google SERP data |
| Tavily | $8.00 | Via HTTP tool | Built-in | AI-optimized search |
| Brave Search | $5.00 | Via HTTP tool | No | Privacy-focused |
SearchHive is the best fit for Flowise because it provides search, scraping, and deep research from a single API. With the Builder plan at $49/month (100K credits), you can handle complex research workflows that would require multiple APIs from other providers.
Common Issues
CORS errors: Your Flowise instance runs in the browser. If calling an external API, ensure the server has CORS headers or route through your backend.
Rate limits in Flowise: Flowise does not natively cache search results. Add caching in your wrapper API to avoid redundant calls.
Large context windows: Full page content from DeepDive can be long. Truncate to 2-3K chars per page to stay within your LLM's context limits.
Tool calling not working: Ensure your chat model supports function/tool calling. Not all models do -- GPT-4o, Claude, and Gemini Pro do; base models typically do not.
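The caching fix mentioned above can be sketched as a small in-memory TTL map inside the wrapper (an illustration only; a production setup might prefer Redis or similar, and `TTL_SECONDS` is an assumed freshness window):

```python
import time

_CACHE: dict[str, tuple[float, dict]] = {}
TTL_SECONDS = 300  # serve cached results for up to 5 minutes

def cached_search(query: str, fetch) -> dict:
    """Return a fresh-enough cached result, or call `fetch` and store it."""
    now = time.monotonic()
    hit = _CACHE.get(query)
    if hit is not None and now - hit[0] < TTL_SECONDS:
        return hit[1]  # cache hit: skip the API call (and the credit) entirely
    result = fetch(query)
    _CACHE[query] = (now, result)
    return result

# Repeated identical queries within the TTL cost only one API call:
calls = []
def fake_fetch(q):
    calls.append(q)
    return {"query": q, "results": []}

cached_search("flowise docs", fake_fetch)
cached_search("flowise docs", fake_fetch)
print(len(calls))  # -> 1
```

Wrap the `httpx` call in the `/search` route with this helper to deduplicate the repeated queries agents tend to issue.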
Next Steps
- Get a free SearchHive API key (500 credits, no credit card)
- Explore the SearchHive documentation for all three APIs
- Check our OpenAI function calling guide for more advanced search integration patterns
- See the LlamaIndex web search comparison for framework-level alternatives
Get Started with SearchHive
SearchHive provides search, scraping, and deep research through a single unified API. The free tier includes 500 credits -- enough to build and test your Flowise integration. The Builder plan ($49/mo) gives you 100K credits per month for production workloads.
Sign up free and start building smarter AI chatbots today.