You have 500 URLs. You need the content of all of them. You don't need a special batch endpoint — you need concurrent HTTP requests and a 15-minute cache. denkbot.dog handles both. Run your scrapes in parallel and let the caching layer handle duplicates.
Typical use cases: bulk data extraction, product catalog scraping, content migration pipelines, mass metadata extraction, and any workflow that starts with a list of URLs.
// Batch scraping with concurrency control
import PQueue from 'p-queue'

const queue = new PQueue({ concurrency: 10 })
const urls = await getUrlsFromDatabase() // your 500 URLs

const results = await Promise.all(
  urls.map(url =>
    queue.add(() =>
      fetch('https://api.denkbot.dog/scrape', {
        method: 'POST',
        headers: {
          'Authorization': `Bearer ${process.env.DENKBOT_KEY}`,
          'Content-Type': 'application/json',
        },
        body: JSON.stringify({ url }),
      }).then(r => r.json())
    )
  )
)

Is there a batch endpoint?
Not yet. Use parallel HTTP requests with a concurrency limit.
How many concurrent requests can I run?
We recommend at most 10-20 concurrent requests on the free tier to stay within rate limits.
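If you do hit the limit, a small retry wrapper keeps the batch moving. A minimal sketch, assuming the API signals rate limiting with HTTP 429 (the exact status code is an assumption, and `fetchWithRetry` and its parameters are illustrative names, not part of the API):

```javascript
// Sketch: retry a fetch-like call on HTTP 429 with exponential backoff.
// Assumes 429 is the rate-limit signal; adjust to the API's actual behavior.
async function fetchWithRetry(doFetch, url, { maxRetries = 3, baseDelayMs = 1000 } = {}) {
  for (let attempt = 0; ; attempt++) {
    const res = await doFetch(url)
    if (res.status !== 429) return res.json()
    if (attempt >= maxRetries) throw new Error(`still rate limited after ${maxRetries} retries`)
    // back off 1s, 2s, 4s, ... before the next attempt
    await new Promise(resolve => setTimeout(resolve, baseDelayMs * 2 ** attempt))
  }
}
```

Wrapping the `fetch` call inside `queue.add` this way means retries also respect the concurrency cap.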
Are duplicate requests cached?
Yes. The same URL requested twice within 15 minutes returns the cached version for free.
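Since repeats inside the cache window are free, the cheapest duplicate is the one you never send: dedupe the list up front. A minimal sketch using a plain `Set` (nothing API-specific; `dedupeUrls` is an illustrative name):

```javascript
// Collapse duplicate URLs before queueing; repeats within 15 minutes
// would hit the cache anyway, but skipping them saves the round-trip.
const dedupeUrls = urls => [...new Set(urls)]
```

Feed `dedupeUrls(urls)` into the `urls.map(...)` loop above instead of the raw list.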

€19/year. Unlimited requests. API key ready in 30 seconds.