How to Compare Developer API Tools — Step-by-Step Tutorial
Evaluating developer API tools is harder than it looks. Pricing pages hide real costs behind "credits" and "MAUs." Feature comparison tables are misleading. Free tiers look generous until you hit the limits. This tutorial walks you through a systematic approach to comparing API tools using SearchHive's SwiftSearch API to gather real data.
Introduction
Every developer has been there: you need to pick an API for search, scraping, auth, email, or payments. You open three competitor tabs, scan the pricing pages, and realize none of them make an apples-to-apples comparison easy.
This tutorial builds a practical comparison pipeline using Python and SearchHive's APIs. You'll learn to:
- Scrape competitor pricing pages automatically
- Extract structured pricing data
- Build comparison tables programmatically
- Calculate true costs at your actual usage volume
Prerequisites
- Python 3.8+ with the `requests` library
- A SearchHive API key (get one free)
- Basic familiarity with Python and JSON

```shell
pip install requests
```
Step 1: Set Up Your SearchHive API Key
First, get your API key from the SearchHive dashboard. The free tier includes 500 credits — enough to follow this entire tutorial.
```python
import requests

API_KEY = "sh_live_your_api_key_here"
BASE_URL = "https://api.searchhive.dev/v1"

headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}

# Test your connection
resp = requests.post(
    f"{BASE_URL}/search",
    headers=headers,
    json={"query": "test", "limit": 1},
)
print(f"API status: {resp.status_code}")
print(f"Credits remaining: {resp.headers.get('x-credits-remaining', 'unknown')}")
```
Step 2: Discover Competitor Pricing Pages with SwiftSearch
Use SwiftSearch to find the pricing pages of the tools you want to compare. This is faster than manually searching each competitor.
```python
# Search for competitor pricing pages
competitors = ["firecrawl", "scrapingbee", "serpapi", "serper.dev", "tavily"]

pricing_pages = {}
for competitor in competitors:
    resp = requests.post(
        f"{BASE_URL}/search",
        headers=headers,
        json={
            "query": f"{competitor} pricing API 2025",
            "limit": 3,
        },
    )
    results = resp.json().get("data", [])
    if results:
        # Pick the most relevant pricing page URL
        pricing_pages[competitor] = results[0]["url"]
        print(f"{competitor}: {results[0]['url']}")
```
Step 3: Scrape Each Pricing Page with ScrapeForge
Once you have the URLs, scrape each pricing page to extract raw content. ScrapeForge handles JavaScript rendering for modern sites.
```python
raw_data = {}
for name, url in pricing_pages.items():
    resp = requests.post(
        f"{BASE_URL}/scrape",
        headers=headers,
        json={
            "url": url,
            "render_js": True,
            "format": "markdown",
        },
    )
    content = resp.json().get("data", {}).get("content", "")
    raw_data[name] = content
    print(f"Scraped {name}: {len(content)} characters")
```
Step 4: Extract Structured Pricing Data
Parse the raw markdown to extract pricing tiers. This is the most flexible step — you can adapt the parsing logic to any competitor's pricing page format.
```python
# Simple price tier extraction from markdown.
# Looks for dollar amounts, optionally followed by /mo, /month, /yr, or /year.
import re

def extract_prices(markdown_text):
    pattern = r'\$(\d+(?:,\d+)?)\s*(?:/?\s*(?:mo|month|year|yr))?'
    prices = re.findall(pattern, markdown_text)
    # Convert to integers, de-duplicate, and sort
    return sorted(set(int(p.replace(",", "")) for p in prices))

pricing_summary = {}
for name, content in raw_data.items():
    prices = extract_prices(content)
    pricing_summary[name] = prices
    print(f"{name}: {prices}")
```
Step 5: Build a Comparison Table
Now compile the extracted data into a structured comparison:
```python
# Define your expected usage volume
MONTHLY_REQUESTS = 100000

# Known pricing data (verified from research)
competitor_data = {
    "SearchHive": {
        "free_tier": 500,
        "starter": {"price": 9, "volume": 5000, "per_unit": 0.0018},
        "popular": {"price": 49, "volume": 100000, "per_unit": 0.00049},
        "high_volume": {"price": 199, "volume": 500000, "per_unit": 0.000398},
    },
    "Firecrawl": {
        "free_tier": 500,
        "starter": {"price": 16, "volume": 3000, "per_unit": 0.0053},
        "popular": {"price": 83, "volume": 100000, "per_unit": 0.00083},
        "high_volume": {"price": 333, "volume": 500000, "per_unit": 0.000666},
    },
    "ScrapingBee": {
        "free_tier": 1000,
        "starter": {"price": 49, "volume": 250000, "per_unit": 0.000196},
        "popular": {"price": 99, "volume": 1000000, "per_unit": 0.000099},
        "high_volume": {"price": 249, "volume": 3000000, "per_unit": 0.000083},
    },
}

# Print comparison table
print(f"\n{'Tool':<20} {'Free Tier':<12} {'Popular Plan':<18} {'Cost/100K':<12}")
print("-" * 62)
for tool, data in competitor_data.items():
    popular = data["popular"]
    cost_per_100k = popular["per_unit"] * 100000
    plan = f"${popular['price']}/mo ({popular['volume']:,})"
    print(f"{tool:<20} {data['free_tier']:<12} {plan:<18} ${cost_per_100k:<10.2f}")

# Calculate cost at your usage volume
print(f"\n--- Cost at {MONTHLY_REQUESTS:,} requests/month ---")
for tool, data in competitor_data.items():
    tiers = {k: v for k, v in data.items() if k != "free_tier"}
    for tier_name, tier in tiers.items():
        if tier["volume"] >= MONTHLY_REQUESTS:
            print(f"{tool} ({tier_name}): ${tier['price']}/mo")
            break
    else:
        # No listed tier covers this volume -- extrapolate from the largest plan
        high = data["high_volume"]
        extrapolated = int(high["price"] * (MONTHLY_REQUESTS / high["volume"]))
        print(f"{tool} (extrapolated): ~${extrapolated}/mo")
```
Step 6: Evaluate Features Beyond Price
Price is important, but it's not the only factor. Use SwiftSearch to find user reviews and feature comparisons:
```python
# Search for real user experiences
resp = requests.post(
    f"{BASE_URL}/search",
    headers=headers,
    json={
        "query": "firecrawl vs scrapingbee developer experience review",
        "limit": 5,
    },
)

print("\n--- User Reviews and Comparisons ---")
for result in resp.json().get("data", []):
    print(f"  {result['title']}")
    print(f"  {result['url']}")
    print()
```
Key features to evaluate for any developer API:
- SDK quality — official SDKs for Python, Node.js, Go?
- Documentation — clear examples, quickstart guide, API reference?
- Rate limits — requests per second, per minute?
- Response time — P50 and P99 latency?
- Error handling — clear error codes, retry guidance?
- Support — community (Discord/Slack) vs email vs priority?
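One way to keep this evaluation honest is a simple weighted scorecard. A minimal sketch is below; the criteria weights and the 1-5 scores are illustrative placeholders, not real evaluations of any vendor, so fill them in from your own testing:

```python
# Illustrative weighted scorecard for comparing API tools.
# Weights and 1-5 scores are placeholders -- replace with your own research.
WEIGHTS = {
    "sdk": 0.20,
    "docs": 0.25,
    "rate_limits": 0.15,
    "latency": 0.20,
    "errors": 0.10,
    "support": 0.10,
}

def weighted_score(scores: dict) -> float:
    """Combine per-feature scores (1-5) into a single weighted score."""
    return round(sum(WEIGHTS[k] * v for k, v in scores.items()), 2)

example = {"sdk": 4, "docs": 5, "rate_limits": 3, "latency": 4, "errors": 4, "support": 3}
print(weighted_score(example))
```

Keeping the weights in one dictionary makes it easy to re-rank every tool when your priorities change.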
Step 7: Automate With DeepDive
For deeper analysis, use SearchHive's DeepDive API to research each tool comprehensively:
```python
# Deep research on a specific competitor
resp = requests.post(
    f"{BASE_URL}/research",
    headers=headers,
    json={
        "query": "SearchHive vs Firecrawl web scraping API comparison 2025",
        "depth": "standard",
    },
)
research = resp.json()
print(f"Research summary: {research.get('data', {}).get('summary', '')[:500]}")
```
DeepDive compiles information from multiple sources into a structured research report — useful for evaluating tools you haven't used before.
Complete Code Example
Here's the full pipeline in one script:
```python
import requests
import time

API_KEY = "sh_live_your_api_key_here"
BASE_URL = "https://api.searchhive.dev/v1"

headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}

# Competitors to compare
competitors = ["firecrawl", "scrapingbee", "serpapi", "tavily"]

# Step 1: Find pricing pages
print("=== Step 1: Finding pricing pages ===")
pricing_pages = {}
for comp in competitors:
    resp = requests.post(
        f"{BASE_URL}/search",
        headers=headers,
        json={"query": f"{comp} API pricing 2025", "limit": 3},
    )
    results = resp.json().get("data", [])
    if results:
        pricing_pages[comp] = results[0]["url"]
        print(f"  {comp}: {results[0]['url']}")
    time.sleep(1)

# Step 2: Scrape pricing data
print("\n=== Step 2: Scraping pricing pages ===")
for name, url in pricing_pages.items():
    resp = requests.post(
        f"{BASE_URL}/scrape",
        headers=headers,
        json={"url": url, "render_js": True, "format": "markdown"},
    )
    content = resp.json().get("data", {}).get("content", "")
    print(f"  {name}: {len(content)} chars scraped")
    time.sleep(1)

print("\n=== Done! ===")
```
Common Issues
- Empty search results? Try broader queries. SwiftSearch works best with 3-6 word queries.
- Blocked by Cloudflare? ScrapeForge handles this automatically with `"proxy": "auto"`. If you still get blocked, the site may require specific browser profiles.
- Running out of credits? Check your usage in the SearchHive dashboard. The Builder plan ($49/mo) gives 100K credits.
- Rate limited by SearchHive? Add delays between requests. The free tier has lower rate limits.
Next Steps
- Build a recurring comparison pipeline that runs monthly to track pricing changes
- Add automated alerts when competitors change their pricing
- Extend the pipeline to scrape feature comparison pages and changelogs
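The pricing-change alert is straightforward to sketch: fingerprint each scraped pricing page and compare fingerprints across runs. The snapshot file name and JSON storage format below are arbitrary assumptions, not part of the SearchHive API:

```python
import hashlib
import json
from pathlib import Path

SNAPSHOT_FILE = Path("pricing_snapshots.json")  # arbitrary local store

def page_fingerprint(content: str) -> str:
    """Stable fingerprint of a scraped pricing page."""
    return hashlib.sha256(content.encode("utf-8")).hexdigest()

def detect_changes(scraped: dict) -> list:
    """Compare new scrapes against the previous run; return tools whose pages changed."""
    old = json.loads(SNAPSHOT_FILE.read_text()) if SNAPSHOT_FILE.exists() else {}
    new = {name: page_fingerprint(content) for name, content in scraped.items()}
    SNAPSHOT_FILE.write_text(json.dumps(new, indent=2))
    # A tool "changed" only if we saw it before and its fingerprint differs
    return [name for name, fp in new.items() if old.get(name) not in (None, fp)]
```

Feed `detect_changes(raw_data)` the dictionary from Step 3 on each scheduled run; a non-empty result is your alert trigger. Hashing the raw markdown will also flag cosmetic page edits, so a stricter version could fingerprint `extract_prices(content)` instead.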
For more developer tool comparisons, check out /blog/best-api-authentication-methods-tools-2025 and /blog/best-marketplace-data-collection-tools-2025.
Start comparing tools with 500 free SearchHive credits — no credit card required.