πŸ—ΊοΈCrawling & Sitemaps

Crawl Entire Websites via API

You need all the URLs. Not just the homepage. All of them. The blog posts, the product pages, the forgotten /archive page from 2019. denkbot.dog's crawl endpoint fetches the whole site and returns a structured tree. The dog explores so you don't have to.

What you'd use this for

Site audits, content inventories, migration planning, broken link detection, SEO analysis, and building sitemaps for sites that don't have one.

How it works

Example
curl -X POST https://api.denkbot.dog/crawl \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://example.com",
    "maxPages": 200,
    "maxDepth": 5
  }'
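The same request can be issued from code. Here is a minimal Python sketch using only the standard library, assuming the endpoint and parameters shown in the curl example above (YOUR_API_KEY is a placeholder for your real key):

```python
import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # placeholder: substitute your real key

# Request body mirroring the curl example above
payload = {
    "url": "https://example.com",
    "maxPages": 200,
    "maxDepth": 5,
}

req = urllib.request.Request(
    "https://api.denkbot.dog/crawl",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)

# response = urllib.request.urlopen(req)  # uncomment to actually send the crawl request
```

Any HTTP client works the same way; the only requirements are the Bearer token header and a JSON body.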

Questions & Answers

How deep can it crawl?

Up to 500 pages, with configurable depth. The default is 3 levels deep and 50 pages; raise maxDepth and maxPages in the request body to go further.
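As a sketch, one way to keep requests within the documented limits client-side (the parameter names follow the request body above; how the API itself handles over-limit values isn't specified here):

```python
# Documented defaults and page ceiling from the answer above
DEFAULTS = {"maxPages": 50, "maxDepth": 3}
MAX_PAGES = 500

def crawl_options(max_pages=None, max_depth=None):
    """Build crawl options, falling back to the defaults and capping pages at the ceiling."""
    pages = max_pages if max_pages is not None else DEFAULTS["maxPages"]
    depth = max_depth if max_depth is not None else DEFAULTS["maxDepth"]
    return {"maxPages": min(pages, MAX_PAGES), "maxDepth": depth}
```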

Does it respect robots.txt?

Yes, by default. Pass respectRobotsTxt: false to override (use responsibly).

Will it follow external links?

No by default. Pass followExternalLinks: true to explore beyond the starting domain.
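Both optional flags slot into the same request body alongside the required url. A sketch of a crawl request that overrides both defaults (flag names taken from the answers above):

```python
import json

# Crawl request body overriding both defaults described above
body = {
    "url": "https://example.com",
    "respectRobotsTxt": False,    # skip robots.txt (use responsibly)
    "followExternalLinks": True,  # also follow links off the starting domain
}

print(json.dumps(body, indent=2))
```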

Ready to start fetching?

€19/year. Unlimited requests. API key ready in 30 seconds.