You need all the URLs. Not just the homepage. All of them. The blog posts, the product pages, the forgotten /archive page from 2019. denkbot.dog's crawl endpoint fetches the whole site and returns a structured tree. The dog explores so you don't have to.
Typical uses: site audits, content inventories, migration planning, broken link detection, SEO analysis, and building sitemaps for sites that don't have one.
curl -X POST https://api.denkbot.dog/crawl \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://example.com",
    "maxPages": 200,
    "maxDepth": 5
  }'

Crawls up to 500 pages at a configurable depth. The defaults are 3 levels deep and 50 pages.
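The request body can also be assembled programmatically. A minimal Python sketch, using only the parameters and limits documented above (the helper function itself is illustrative, not part of the API):

```python
import json

# Documented limits: defaults are 3 levels deep and 50 pages; hard cap of 500 pages.
DEFAULT_MAX_DEPTH = 3
DEFAULT_MAX_PAGES = 50
MAX_PAGES_CAP = 500

def build_crawl_request(url, max_pages=DEFAULT_MAX_PAGES, max_depth=DEFAULT_MAX_DEPTH):
    """Build the JSON body for POST /crawl (hypothetical helper name)."""
    if max_pages > MAX_PAGES_CAP:
        raise ValueError(f"maxPages cannot exceed {MAX_PAGES_CAP}")
    return json.dumps({"url": url, "maxPages": max_pages, "maxDepth": max_depth})

# Matches the curl example above.
body = build_crawl_request("https://example.com", max_pages=200, max_depth=5)
```

Omitting `max_pages` and `max_depth` falls back to the documented defaults, so a bare `build_crawl_request(url)` requests a 3-level, 50-page crawl.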
The crawler respects robots.txt by default. Pass respectRobotsTxt: false to override (use responsibly).
External links are not followed by default. Pass followExternalLinks: true to explore beyond the starting domain.
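Both options slot into the same request body as the other parameters. A short sketch showing a crawl that ignores robots.txt and follows external links (defaults are as described above; the payload structure mirrors the curl example):

```python
import json

payload = {
    "url": "https://example.com",
    "respectRobotsTxt": False,    # default is true; override with care
    "followExternalLinks": True,  # default is false; explores beyond the starting domain
}
body = json.dumps(payload)
```

Omit either key to keep its default behavior.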

€19/year. Unlimited requests. API key ready in 30 seconds.