Every Python AI agent eventually needs to read the web. Instead of installing Playwright, managing browser binaries, and writing async scraper boilerplate, call denkbot.dog. One HTTP request, structured JSON back. Works with every Python agent framework that can call an HTTP endpoint.
It fits CrewAI agents with web research tasks, AutoGen conversations that need factual grounding, custom agent loops built in bare Python, and any asyncio-based agent that needs non-blocking URL fetching.
```python
import httpx
from dataclasses import dataclass


@dataclass
class PageContent:
    url: str
    title: str
    text: str
    links: list[dict]
    cached: bool


class DenkbotClient:
    BASE = "https://api.denkbot.dog"

    def __init__(self, api_key: str):
        self.headers = {"Authorization": f"Bearer {api_key}"}

    def scrape(self, url: str, render_js: bool = False) -> PageContent:
        r = httpx.post(
            f"{self.BASE}/scrape",
            headers=self.headers,
            json={"url": url, "renderJs": render_js, "format": "json"},
            timeout=30,
        )
        r.raise_for_status()  # surface HTTP errors instead of parsing an error body
        d = r.json()
        return PageContent(d["url"], d["title"], d["text"], d["links"], d["cached"])

    async def scrape_async(self, url: str) -> PageContent:
        # Non-blocking variant for asyncio-based agent loops.
        async with httpx.AsyncClient() as client:
            r = await client.post(
                f"{self.BASE}/scrape",
                headers=self.headers,
                json={"url": url, "format": "json"},
                timeout=30,
            )
            r.raise_for_status()
            d = r.json()
            return PageContent(d["url"], d["title"], d["text"], d["links"], d["cached"])
```

Yes. Use scrape_async for non-blocking fetching inside any async agent loop or asyncio.gather call.
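The JSON contract the client assumes can be exercised offline. This sketch parses a hypothetical /scrape response payload (the field values below are invented for illustration, not real API output) into the same PageContent dataclass:

```python
from dataclasses import dataclass


@dataclass
class PageContent:
    url: str
    title: str
    text: str
    links: list[dict]
    cached: bool


def parse_page(d: dict) -> PageContent:
    # Maps the JSON fields the client reads onto the dataclass.
    return PageContent(d["url"], d["title"], d["text"], d["links"], d["cached"])


# Made-up payload matching the fields used above.
sample = {
    "url": "https://example.com",
    "title": "Example Domain",
    "text": "This domain is for use in illustrative examples.",
    "links": [{"href": "https://www.iana.org/domains/example", "text": "More information"}],
    "cached": False,
}
page = parse_page(sample)
```

Keeping the parsing in one helper means a future field rename only touches a single line.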
Yes. Pass the coroutines to asyncio.gather(*(client.scrape_async(url) for url in urls)); each request runs concurrently.
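A minimal sketch of the fan-out pattern, using a stand-in coroutine instead of the live API so it runs offline (fake_scrape is a stub; the real call would be client.scrape_async and return a PageContent):

```python
import asyncio


async def fake_scrape(url: str) -> dict:
    # Stand-in for DenkbotClient.scrape_async so the sketch needs no network.
    await asyncio.sleep(0.01)
    return {"url": url, "cached": False}


async def fetch_all(urls: list[str]) -> list[dict]:
    # gather takes coroutines as positional args, hence the * unpacking;
    # results come back in the same order as the input URLs.
    return await asyncio.gather(*(fake_scrape(u) for u in urls))


urls = ["https://example.com/a", "https://example.com/b", "https://example.com/c"]
pages = asyncio.run(fetch_all(urls))
```

Order preservation matters for agents: the nth result always corresponds to the nth URL, so no bookkeeping is needed.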
The paid plan has no request limit, and the 15-minute cache means repeated hits on the same URL are served without a fresh scrape.
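On the client side you can go a step further and deduplicate a batch of URLs before fetching, so exact repeats never leave the process at all. A minimal order-preserving sketch (unique_urls is an illustrative helper, not part of the API):

```python
def unique_urls(urls: list[str]) -> list[str]:
    # dict.fromkeys drops repeats while keeping first-seen order,
    # so a batch with duplicate URLs triggers only one fetch per URL.
    return list(dict.fromkeys(urls))


batch = [
    "https://example.com/a",
    "https://example.com/b",
    "https://example.com/a",  # repeat: the server cache would absorb it anyway
]
```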

€19/year. Unlimited requests. API key ready in 30 seconds.