You have 10,000 URLs. You need the title and description of each. Writing a scraper for this is overkill. Making 10,000 hand-rolled cURL calls is tedious. denkbot.dog handles both efficiently, with 15-minute caching on repeated URLs. The dog retrieves. You spreadsheet.
Typical uses: bulk SEO data gathering, populating CMS fields from URLs, content audits, broken-metadata detection, and building navigation from URL metadata.
// Batch extract titles and descriptions
const urls = ['https://example.com', 'https://example.com/about']
const results = await Promise.all(
  urls.map(url =>
    fetch('https://api.denkbot.dog/scrape', {
      method: 'POST',
      headers: {
        'Authorization': 'Bearer YOUR_API_KEY',
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({ url }),
    })
      .then(r => r.json())
      .then(({ url, title, metadata }) => ({
        url,
        title,
        description: metadata.description,
      }))
  )
)

If a page has no meta description, metadata.description will be null.
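Because metadata.description can be null, a broken-metadata audit is just a filter over the batch results. A minimal sketch, assuming the result shape from the batch example above ({ url, title, description }); the helper name and sample data are illustrative:

```javascript
// Flag results whose description is missing or empty.
// Result shape assumed from the batch example: { url, title, description }.
function findMissingDescriptions(results) {
  return results.filter(r => !r.description || !r.description.trim())
}

// Example with inline sample data (not real API output):
const sample = [
  { url: 'https://example.com', title: 'Example', description: 'A demo site' },
  { url: 'https://example.com/about', title: 'About', description: null },
]
const missing = findMissingDescriptions(sample)
// missing contains only the /about entry
```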
Redirects are followed automatically. The title you get is from the final destination URL.
There is a rate limit on the free tier: 100 requests/day. Run batches with a small delay to be safe.
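The Promise.all call in the example fires every request at once. To add the small delay suggested above, you can process URLs in chunks with a pause between them. A sketch under stated assumptions: the batch size and delay are arbitrary, and scrapeOne stands in for the fetch().then() pipeline from the example:

```javascript
// Split an array into chunks of `size`.
function chunk(arr, size) {
  const out = []
  for (let i = 0; i < arr.length; i += size) out.push(arr.slice(i, i + size))
  return out
}

const sleep = ms => new Promise(res => setTimeout(res, ms))

// Run the scrape in batches with a pause between batches.
// scrapeOne is assumed to be the per-URL fetch pipeline from the example.
async function scrapeAll(urls, scrapeOne, batchSize = 10, delayMs = 1000) {
  const results = []
  for (const batch of chunk(urls, batchSize)) {
    results.push(...await Promise.all(batch.map(scrapeOne)))
    await sleep(delayMs)
  }
  return results
}
```

Tune batchSize and delayMs to your tier; 10 requests per second stays comfortably under most limits.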

€19/year. Unlimited requests. API key ready in 30 seconds.