LLMs have training cutoffs. The web doesn't. When your model needs to know about something that happened last week, or needs to read a specific page, denkbot.dog bridges the gap. Fetch any URL, get back clean text, inject it into your prompt. The model reads. You move on.
Common use cases: augmenting prompts with live documentation, grounding model responses with fresh web content, building ChatGPT-style tools that can browse specific URLs, and feeding models current pricing pages, changelogs, or news articles.
import os

import httpx

DENKBOT_API_KEY = os.environ["DENKBOT_API_KEY"]

def prompt_with_url(question: str, url: str) -> str:
    r = httpx.post(
        "https://api.denkbot.dog/scrape",
        headers={"Authorization": f"Bearer {DENKBOT_API_KEY}"},
        json={"url": url, "format": "json"},
        timeout=30,
    )
    r.raise_for_status()  # fail loudly on auth or fetch errors
    page = r.json()
    return f"""You are a helpful assistant. Use the following web page content to answer the question.

URL: {page['url']}
Title: {page['title']}
Content:
{page['text'][:6000]}

Question: {question}"""
# Use with any LLM API:
context = prompt_with_url(
    "What are the rate limits?",
    "https://docs.stripe.com/rate-limits",
)

How much text does a scrape return?
Varies by page: typically 2,000–30,000 characters of extracted text. Trim to fit your context window before injecting.
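A minimal trimming sketch; the helper name and the 6,000-character budget are illustrative choices, not part of the API. It cuts only the page text and prefers to break at a paragraph boundary when one falls near the limit:

```python
def trim_to_budget(text: str, max_chars: int = 6000) -> str:
    """Cut extracted page text to a character budget, breaking at the
    last paragraph boundary if it falls in the final 20% of the budget."""
    if len(text) <= max_chars:
        return text
    cut = text[:max_chars]
    nl = cut.rfind("\n\n")  # prefer a clean paragraph break near the limit
    if nl > max_chars * 0.8:
        cut = cut[:nl]
    return cut + "\n[truncated]"
```

Apply it to `page['text']` before building the prompt; marking the cut with `[truncated]` tells the model the page continues beyond what it sees.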
Does it work with any model?
Yes. The output is plain text; paste it into any prompt for any model: GPT-4, Claude, Gemini, Mistral, Llama, or anything else.
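Because the result is a plain string, wiring it into a chat-style API is one dict away. A sketch assuming the common OpenAI-compatible messages shape (nothing denkbot-specific):

```python
def as_messages(context: str) -> list[dict[str, str]]:
    """Wrap the scraped-page prompt as a single user turn in the
    chat-completions messages format used by most LLM clients."""
    return [{"role": "user", "content": context}]
```

Pass the result as `messages=` to any OpenAI-compatible client, or join it into one string for completion-style APIs.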
What about JavaScript-heavy pages?
Pass renderJs: true in the request body. Playwright renders the page before text extraction, so React, Vue, and Angular apps work correctly.
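A sketch of building that request body; the helper is an illustrative wrapper, and the field names (`url`, `format`, `renderJs`) are the ones used in the example above:

```python
def build_scrape_payload(url: str, render_js: bool = False) -> dict:
    """Request body for POST https://api.denkbot.dog/scrape."""
    payload = {"url": url, "format": "json"}
    if render_js:
        payload["renderJs"] = True  # server renders with Playwright first
    return payload
```

Rendering adds latency, so when you set renderJs it is worth raising the client timeout above the plain-fetch 30 seconds.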

€19/year. Unlimited requests. API key ready in 30 seconds.