πŸ—ΊοΈCrawling & Sitemaps

Extract Sitemaps from Any Website

Sitemaps are XML. Nobody likes parsing XML. The dog parses it; you get a clean JSON array. It even follows sitemap index files recursively, so nested sitemaps aren't a problem.

What you'd use this for

SEO audits, content monitoring, building navigation trees, syncing content to databases, and finding every URL on a site without crawling page-by-page.

How it works

Example
curl -X POST https://api.denkbot.dog/sitemap \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{ "url": "https://example.com" }'
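If you'd rather call the endpoint from code, here's a minimal Python sketch using only the standard library. The endpoint, headers, and request body are taken from the curl example above; the response shape (a JSON array of URLs) is an assumption.

```python
import json
import urllib.request

API_ENDPOINT = "https://api.denkbot.dog/sitemap"

def build_sitemap_request(target_url: str, api_key: str) -> urllib.request.Request:
    """Build the POST request shown in the curl example above."""
    payload = json.dumps({"url": target_url}).encode("utf-8")
    return urllib.request.Request(
        API_ENDPOINT,
        data=payload,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# To actually send it (assumes the response body is a JSON array of URLs):
# with urllib.request.urlopen(build_sitemap_request("https://example.com", "YOUR_API_KEY")) as resp:
#     urls = json.load(resp)
```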

Questions & Answers

What if the site has multiple sitemaps?

We follow sitemap index files and return all URLs from all child sitemaps.
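For the curious, the recursion the dog does on your behalf looks roughly like this. It's a sketch, not the actual implementation, with a pluggable fetch function standing in for the HTTP layer:

```python
import xml.etree.ElementTree as ET
from typing import Callable

# XML namespace used by the sitemaps.org protocol.
NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def extract_urls(xml_text: str, fetch: Callable[[str], str]) -> list[str]:
    """Flatten a sitemap (or sitemap index) into a flat list of page URLs.

    `fetch` maps a child sitemap URL to its XML body (e.g. an HTTP GET).
    Index files are followed recursively; plain urlsets are returned as-is.
    """
    root = ET.fromstring(xml_text)
    if root.tag == NS + "sitemapindex":
        urls: list[str] = []
        for loc in root.iter(NS + "loc"):
            urls.extend(extract_urls(fetch(loc.text), fetch))
        return urls
    return [loc.text for loc in root.iter(NS + "loc")]
```

A production version would also guard against cycles and cap recursion depth; a sketch this short doesn't.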

What if there's no sitemap?

We return a 404 error with an empty array. No drama.

How many URLs can it return?

Up to 5000. Pass a limit param to cap it lower.
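The docs above only name the limit param, so where it lives is an assumption. This sketch passes it in the JSON request body alongside url:

```python
import json

# Assumption: limit rides in the same JSON body as url.
payload = json.dumps({"url": "https://example.com", "limit": 500})
```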

Ready to start fetching?

€19/year. Unlimited requests. API key ready in 30 seconds.