πŸ—ΊοΈCrawling & Sitemaps

Check All Links on a Website

Broken links are an embarrassment and an SEO problem. Finding them means crawling the site, extracting the links, and checking each one. denkbot.dog handles the crawling and link extraction; you wire up the status-code checks. The dog finds the links. You decide which ones are broken.

What you'd use this for

Website maintenance, SEO health checks, automated 404 detection, pre-launch link audits, and ongoing link health monitoring.

How it works

example
// Step 1: Crawl to get all pages
const crawl = await fetch('https://api.denkbot.dog/crawl', {
  method: 'POST',
  headers: { 'Authorization': 'Bearer YOUR_API_KEY', 'Content-Type': 'application/json' },
  body: JSON.stringify({ url: 'https://example.com', maxPages: 100 }),
}).then(r => r.json())

// Step 2: Scrape each page and extract its links
// (getAllUrls and scrape are small helpers you define yourself:
// one flattens the crawl tree, the other wraps the /scrape endpoint)
const allLinks = new Set()
for (const page of getAllUrls(crawl.tree)) {
  const { links } = await scrape(page)
  links.forEach(l => allLinks.add(l.href))
}

// Step 3: Check each link (HEAD is cheap, but some servers reject it;
// treat a 405 as a candidate for a GET retry rather than a broken link)
const results = await Promise.all(
  [...allLinks].map(href =>
    fetch(href, { method: 'HEAD' })
      .then(r => ({ href, status: r.status, ok: r.ok }))
      .catch(() => ({ href, status: 0, ok: false })) // network or DNS failure
  )
)
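
The example above leans on two small helpers, getAllUrls and scrape, that you define yourself. Here is a minimal sketch; the url/children shape of the crawl tree is an assumption, so adjust it to whatever the /crawl response actually returns:

```javascript
// Flatten the crawl tree into a flat list of URLs.
// Assumes each node has a `url` string and an optional `children` array.
function getAllUrls(node, urls = []) {
  urls.push(node.url)
  for (const child of node.children ?? []) getAllUrls(child, urls)
  return urls
}

// Thin wrapper around /scrape that returns the parsed response,
// including its `links` array.
async function scrape(url) {
  return fetch('https://api.denkbot.dog/scrape', {
    method: 'POST',
    headers: { 'Authorization': 'Bearer YOUR_API_KEY', 'Content-Type': 'application/json' },
    body: JSON.stringify({ url }),
  }).then(r => r.json())
}
```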

Questions & Answers

Does denkbot.dog have a built-in link checker?

Not as a dedicated feature. You'd combine /crawl + /scrape and do the checking yourself.

Can it find broken external links too?

The links array from /scrape includes every href on the page, external ones included, so you can check those too.
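
If you want to report internal and external links separately, you can split the collected set by origin. A minimal sketch, assuming the allLinks set from step 2 (sample values shown here for illustration):

```javascript
// Split collected links into internal and external by comparing origins.
const allLinks = new Set([
  '/about',                       // relative hrefs resolve against the site origin
  'https://example.com/pricing',
  'https://other-site.test/page',
  'mailto:hi@example.com',        // non-HTTP schemes are skipped
])

const siteOrigin = new URL('https://example.com').origin
const internal = new Set()
const external = new Set()

for (const href of allLinks) {
  const u = new URL(href, siteOrigin)           // resolves relative hrefs
  if (!u.protocol.startsWith('http')) continue  // skip mailto:, tel:, etc.
  ;(u.origin === siteOrigin ? internal : external).add(u.href)
}
```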

How long does it take to check 1000 links?

Depends on your parallelism. At 10 concurrent requests with typical response times of a second or two, expect around 2-3 minutes.
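
The one-shot Promise.all in step 3 fires every request at once, which can hammer the target site. A small worker pool caps concurrency at a fixed limit instead; mapWithConcurrency is a hypothetical helper name, not part of denkbot.dog:

```javascript
// Run fn over items with at most `limit` calls in flight at a time,
// preserving input order in the results.
async function mapWithConcurrency(items, limit, fn) {
  const entries = [...items].map((item, i) => ({ item, i }))
  const results = new Array(entries.length)
  const workers = Array.from(
    { length: Math.min(limit, entries.length) },
    async () => {
      while (entries.length > 0) {
        const { item, i } = entries.shift() // claim the next pending item
        results[i] = await fn(item)
      }
    }
  )
  await Promise.all(workers)
  return results
}
```

Then `const results = await mapWithConcurrency([...allLinks], 10, checkLink)` replaces the Promise.all call, where checkLink is the HEAD-request check from step 3.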

Ready to start fetching?

€19/year. Unlimited requests. API key ready in 30 seconds.