🤖 AI & LLM Integrations

Feed Live Web Content into Your LLM's Context

LLMs have training cutoffs. The web doesn't. When your model needs to know about something that happened last week, or needs to read a specific page, denkbot.dog bridges the gap. Fetch any URL, get back clean text, inject it into your prompt. The model reads. You move on.

What you'd use this for

Augment prompts with live documentation. Ground model responses in fresh web content. Build ChatGPT-style tools that can browse specific URLs. Give models current pricing pages, changelogs, or news articles.

How it works

import os

import httpx

# API key comes from the environment; set DENKBOT_API_KEY before running.
DENKBOT_API_KEY = os.environ["DENKBOT_API_KEY"]

def prompt_with_url(question: str, url: str) -> str:
    r = httpx.post("https://api.denkbot.dog/scrape",
        headers={"Authorization": f"Bearer {DENKBOT_API_KEY}"},
        json={"url": url, "format": "json"}, timeout=30)
    r.raise_for_status()  # fail loudly on auth or quota errors
    page = r.json()

    return f"""You are a helpful assistant. Use the following web page content to answer the question.

URL: {page['url']}
Title: {page['title']}

Content:
{page['text'][:6000]}

Question: {question}"""

# Use with any LLM API:
context = prompt_with_url(
    "What are the rate limits?",
    "https://docs.stripe.com/rate-limits"
)
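The returned string plugs straight into any chat-style API. A minimal sketch of the common "list of role/content messages" shape used by OpenAI-compatible SDKs (the model name and client setup in the comments are illustrative, not part of denkbot.dog):

```python
def build_messages(context: str) -> list[dict]:
    """Wrap the assembled prompt into a single user message."""
    return [{"role": "user", "content": context}]

messages = build_messages("URL: https://example.com\nQuestion: What is this?")

# With the OpenAI SDK, for example, this would be sent as:
#   from openai import OpenAI
#   client = OpenAI()
#   resp = client.chat.completions.create(model="gpt-4o", messages=messages)
#   print(resp.choices[0].message.content)
```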

Questions & Answers

How much text does a scraped page return?

Varies by page — typically 2,000–30,000 characters of extracted text. Trim to fit your context window before injecting.
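One way to trim is a simple character budget, cut on a word boundary so the prompt doesn't end mid-word — a minimal sketch (the 6,000-character default matches the slice used in the example above):

```python
def trim_to_budget(text: str, budget: int = 6000) -> str:
    """Cut extracted page text to a character budget, backing up to the
    last space so the prompt doesn't end mid-word."""
    if len(text) <= budget:
        return text
    cut = text[:budget]
    space = cut.rfind(" ")
    return cut[:space] if space > 0 else cut

long_text = "word " * 2000           # ~10,000 characters of page text
trimmed = trim_to_budget(long_text)  # fits the 6,000-character budget
```

A character budget is a rough proxy for tokens (roughly 4 characters per token for English text); swap in a real tokenizer if you need exact counts.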

Does it work with any LLM?

Yes. The output is plain text — paste it into any prompt for any model: GPT-4, Claude, Gemini, Mistral, Llama, whatever.

How do I handle pages that need JavaScript?

Pass renderJs: true. Playwright renders the page before text extraction, so React/Vue/Angular apps work correctly.
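The request body only gains one field — a sketch of the payload, with the rest of the call as in the earlier example (the URL shown is a hypothetical single-page app):

```python
# Same request as the earlier example, plus renderJs for JS-heavy pages.
payload = {
    "url": "https://app.example.com/dashboard",  # hypothetical SPA URL
    "format": "json",
    "renderJs": True,  # render with Playwright before text extraction
}

# Sent exactly like the earlier call; rendering takes longer, so allow
# a more generous timeout:
#   r = httpx.post("https://api.denkbot.dog/scrape",
#       headers={"Authorization": f"Bearer {DENKBOT_API_KEY}"},
#       json=payload, timeout=60)
```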

Ready to start fetching?

€19/year. Unlimited requests. API key ready in 30 seconds.