πŸ—ΊοΈCrawling & Sitemaps

Extract All Internal Links from a Website

Internal links are the connective tissue of a website's SEO. Extracting them means either writing a scraper or using a tool. denkbot.dog's /scrape endpoint returns all links on the page, and /crawl follows them across the entire site. The dog fetches. The link map builds itself.

What you'd use this for

Internal link analysis for SEO, finding orphaned pages, building site navigation maps, detecting broken internal links, and content silo analysis.
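Several of these use cases reduce to the same job: build a link graph by fetching a page, recording its internal links, and repeating. A minimal breadth-first sketch in Python; here fetch_links is a hypothetical placeholder for whatever function calls the /scrape endpoint and returns its links array:

```python
from collections import deque
from urllib.parse import urlparse

def build_link_graph(start_url, fetch_links, limit=100):
    """BFS over internal links; returns {url: [internal hrefs]}.

    fetch_links(url) is assumed to return a list of {"href": ..., "text": ...}
    objects, matching the links array described below.
    """
    domain = urlparse(start_url).netloc
    graph, queue = {}, deque([start_url])
    while queue and len(graph) < limit:
        url = queue.popleft()
        if url in graph:
            continue  # already visited
        # Keep only links on the same host
        hrefs = [l["href"] for l in fetch_links(url)
                 if urlparse(l["href"]).netloc == domain]
        graph[url] = hrefs
        queue.extend(h for h in hrefs if h not in graph)
    return graph

# Demo with a canned fetcher standing in for real API calls
pages = {
    "https://example.com/": [
        {"href": "https://example.com/a", "text": "A"},
        {"href": "https://other.org/x", "text": "X"},
    ],
    "https://example.com/a": [
        {"href": "https://example.com/", "text": "Home"},
    ],
}
graph = build_link_graph("https://example.com/", lambda u: pages.get(u, []))
print(graph)
```

Pages that never appear as a value in the finished graph are orphan candidates, and hrefs whose fetch fails are broken internal links, so the same traversal covers those use cases too.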

How it works

Example
# Get all links from a page
curl -X POST https://api.denkbot.dog/scrape \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{ "url": "https://example.com" }' \
  | jq '[.links[] | select(.href | startswith("https://example.com"))]'

Questions & Answers

Are both internal and external links returned?

Yes. The links array contains all hrefs. Filter by domain in your code.
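The domain filter shown with jq above can just as well live in application code. A minimal Python sketch, assuming the links array holds objects with href and text properties as described here; treating subdomains as internal is a design choice you may not want:

```python
from urllib.parse import urlparse

def internal_links(links, base_domain):
    """Keep only links whose host belongs to the site's domain."""
    internal = []
    for link in links:
        host = urlparse(link["href"]).netloc
        # Exact host match, or a subdomain of the base domain
        if host == base_domain or host.endswith("." + base_domain):
            internal.append(link)
    return internal

links = [
    {"href": "https://example.com/about", "text": "About"},
    {"href": "https://blog.example.com/post", "text": "Post"},
    {"href": "https://other.org/page", "text": "Elsewhere"},
]
print(internal_links(links, "example.com"))
```

Unlike a prefix check on the URL string, comparing parsed hostnames won't be fooled by links like https://example.com.evil.org.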

Does it include the anchor text?

Yes. Each link object has href and text properties.

What about nofollow links?

All links are returned. rel attributes are in the raw HTML if you need to check them.
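With the raw HTML in hand, nofollow detection is a small parsing job. A sketch using Python's standard html.parser; the HTML string at the bottom is a stand-in for whatever page source you fetched:

```python
from html.parser import HTMLParser

class NofollowCollector(HTMLParser):
    """Collect hrefs of <a> tags whose rel attribute contains 'nofollow'."""

    def __init__(self):
        super().__init__()
        self.nofollow = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        attrs = dict(attrs)
        # rel is a space-separated token list, e.g. rel="nofollow noopener"
        rel = (attrs.get("rel") or "").lower().split()
        if "nofollow" in rel and "href" in attrs:
            self.nofollow.append(attrs["href"])

html = '<a href="/a" rel="nofollow">A</a> <a href="/b">B</a>'
parser = NofollowCollector()
parser.feed(html)
print(parser.nofollow)  # → ['/a']
```

Splitting rel into tokens matters: a substring check would also match values that merely contain the letters, while the token list handles combined values like rel="nofollow noopener" correctly.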

Ready to start fetching?

€19/year. Unlimited requests. API key ready in 30 seconds.