🗺️ Crawling & Sitemaps

Scrape Multiple URLs in Batch

You have 500 URLs and need the content of all of them. There is no special batch endpoint, and you don't need one: fire concurrent HTTP requests from your side and let denkbot.dog's 15-minute cache absorb the duplicates. Run your scrapes in parallel with a simple client-side queue and the caching layer handles the rest.

What you'd use this for

Bulk data extraction, product catalog scraping, content migration pipelines, mass metadata extraction, and any workflow involving a list of URLs.

How it works

// Batch scraping with concurrency control
import PQueue from 'p-queue'

const queue = new PQueue({ concurrency: 10 })
const urls = await getUrlsFromDatabase() // your 500 URLs

const results = await Promise.all(
  urls.map(url =>
    queue.add(() =>
      fetch('https://api.denkbot.dog/scrape', {
        method: 'POST',
        headers: {
          'Authorization': `Bearer ${process.env.DENKBOT_KEY}`,
          'Content-Type': 'application/json',
        },
        body: JSON.stringify({ url }),
      })
        .then(r => r.json())
        // catch per-URL so one failed request doesn't reject the whole batch
        .catch(error => ({ url, error: error.message }))
    )
  )
)

Questions & Answers

Is there a native batch endpoint?

Not yet. Use parallel HTTP requests with a concurrency limit.

How many concurrent requests can I make?

We recommend max 10-20 concurrent requests on the free tier to stay within rate limits.
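If you push past that limit, requests may come back rate-limited. One way to stay resilient is to wrap each request in a retry helper with exponential backoff. A minimal sketch, assuming rate limiting is signalled by an HTTP 429 status (the helper name and parameters are illustrative, not part of any denkbot.dog SDK):

```javascript
// Hypothetical helper: retry a request-returning function when it is
// rate limited (status 429), backing off exponentially between attempts.
async function retryWithBackoff(fn, { retries = 3, baseDelayMs = 500 } = {}) {
  for (let attempt = 0; ; attempt++) {
    const res = await fn()
    if (res.status !== 429) return res // success, or a non-retryable error
    if (attempt >= retries) throw new Error('rate limited: retries exhausted')
    // exponential backoff: 500 ms, 1 s, 2 s, ...
    await new Promise(resolve => setTimeout(resolve, baseDelayMs * 2 ** attempt))
  }
}
```

Drop it inside the queue callback from the example above, e.g. `queue.add(() => retryWithBackoff(() => fetch(...)))`.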

Do duplicate URLs in the batch get cached?

Yes. The same URL requested twice within 15 minutes returns the cached version for free.
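Since cached repeats are free anyway, the main win from deduplicating client-side is fewer round trips, not lower cost. A one-line sketch (the helper name is illustrative):

```javascript
// Drop exact duplicate URLs before queuing. The cache would return
// repeats for free, but skipping them entirely saves round trips.
function dedupeUrls(urls) {
  // Set preserves insertion order and removes exact duplicates
  return [...new Set(urls)]
}
```

Feed the result into `urls.map(...)` in the batch example above instead of the raw list.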

Ready to start fetching?

€19/year. Unlimited requests. API key ready in 30 seconds.