Rookery turns crawl chaos into a quiet priority system: what changed, what matters, what to fix first, and how to prove it worked.
No card to start · JS-rendered crawls · SEO + AEO proof
Audit / demo-shop.com · live
Score 64.5 · 287 URLs · 20 issues
crit 2 · high 6 · med 8 · low 4
Missing title on 12 product pages · 0.92
Duplicate titles across 14 category URLs · 0.78
Thin content on 18 programmatic pages · 0.71
Redirect chains (>1 hop) on 11 URLs · 0.58
High-level updates, not data overload
Lift: +420 clicks/mo
Proof: $8.2k indexed
Queue: 1 approval
GSC: 92% confidence
The honest truth
The crawler isn't the hard part.
Screaming Frog tells you what's wrong. You still spend four hours turning that into a prioritized punch-list, a slide deck, and a message to your dev team. Rookery does that part.
15,000 rows
A fresh crawl CSV is useless without triage. Which of these actually matter this quarter?
No real priority
Sorting by severity ranks issues by how broken they are, not by how many clicks they cost. Those are different sorts.
Handwritten fixes
Every title tag, every meta rewrite, written from scratch for each client.
Still in Sheets
A dashboard stitched from pivot tables is not a deliverable. It's a confession.
How it works
A quieter loop for messy SEO work.
01
Crawl
We spider every URL on your site, honor robots.txt, and render JavaScript pages in headless Chromium. Up to 10,000 URLs per crawl on Pro (500 on Free).
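A minimal sketch of what the crawl step amounts to, with the per-plan URL cap applied to a breadth-first frontier. `fetch_links` is a hypothetical stand-in for the real fetcher and JavaScript renderer, not Rookery's actual code:

```python
# Breadth-first crawl frontier with a per-plan cap on unique URLs visited.
# fetch_links is an assumed stand-in for the real fetcher/renderer.
from collections import deque

def crawl(start: str, fetch_links, max_urls: int = 500) -> list[str]:
    """Visit at most `max_urls` unique URLs, breadth-first from `start`."""
    seen = {start}
    queue = deque([start])
    visited = []
    while queue and len(visited) < max_urls:
        url = queue.popleft()
        visited.append(url)
        for link in fetch_links(url):
            if link not in seen:       # dedupe: each URL is fetched once
                seen.add(link)
                queue.append(link)
    return visited

# Toy site graph standing in for real HTTP fetches:
site = {
    "/": ["/a", "/b"],
    "/a": ["/b", "/c"],
    "/b": [],
    "/c": [],
}
print(crawl("/", lambda u: site.get(u, []), max_urls=3))  # ['/', '/a', '/b']
```

The cap is the only plan-dependent piece: 500 on Free, 10,000 on Pro.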
02
Connect
Connect Search Console for real query context. Crawler-estimated signals keep the workflow useful before other data sources are ready.
03
Draft
Claude writes the fix: a rewritten title, a new meta description, a specific redirect recommendation. You review and edit before anything ships.
04
Deliver
One-click client report. CSV export. Shareable link. Jira-ready tickets (soon). Hand off, not homework.
AEO / new in 2026
Search isn't only Google anymore. Get cited by AI.
The same crawl now tells you what to fix for Google and what to prepare for ChatGPT, Claude, Perplexity, and AI Overviews.
Audit llms.txt, robots.txt rules for GPTBot / ClaudeBot / PerplexityBot, citation-friendly schema, and content structure that AI engines can quote.
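As an illustration of the robots.txt side of that audit, Python's standard `urllib.robotparser` can check what the published AI crawler user agents may fetch. The bot names are the real user-agent strings; the checking function itself is a sketch, not Rookery's implementation:

```python
# Sketch: which AI crawlers does this robots.txt allow on a given URL?
from urllib.robotparser import RobotFileParser

AI_BOTS = ["GPTBot", "ClaudeBot", "PerplexityBot"]

def ai_bot_access(robots_txt: str, url: str) -> dict:
    """Map each AI bot to whether robots.txt lets it fetch `url`."""
    rp = RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return {bot: rp.can_fetch(bot, url) for bot in AI_BOTS}

# Example policy: block GPTBot everywhere, allow everyone else.
robots = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Disallow:
"""
print(ai_bot_access(robots, "https://example.com/blog/post"))
# {'GPTBot': False, 'ClaudeBot': True, 'PerplexityBot': True}
```

A site that blocks these bots can still rank in Google while being invisible to AI answers, which is exactly the gap the audit surfaces.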
Tracked prompts
Pin the prompts that matter to your brand. Rookery runs them nightly across ChatGPT, Claude, Perplexity, and Google AI Overviews and reports your share-of-voice.
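Share-of-voice here can be read as the fraction of nightly runs in which your brand was cited at least once. A toy calculation; the data shape (one list of cited brands per run) is an assumption, not Rookery's schema:

```python
# Toy share-of-voice: fraction of runs where `brand` was cited at least once.
def share_of_voice(runs: list[list[str]], brand: str) -> float:
    """`runs` holds, per nightly run, the brands the AI engine cited."""
    if not runs:
        return 0.0
    cited = sum(1 for citations in runs if brand in citations)
    return cited / len(runs)

runs = [
    ["Rookery", "Ahrefs"],
    ["Ahrefs"],
    ["Rookery"],
    ["Semrush", "Ahrefs"],
]
print(share_of_voice(runs, "Rookery"))  # 0.5
```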
Referral traffic from AI
See which AI engines actually drove humans to your site, on which pages, with what intent. Validates the work the Prepare and Track sections set up.
How is this different from Screaming Frog?
Screaming Frog is a brilliant desktop crawler that hands you raw data. Rookery runs in the cloud, adds Search Console context when connected, prioritizes the work, and drafts the fix for each issue. Different layer of the stack.
How is this different from Ahrefs Site Audit?
Ahrefs Site Audit is a decent cloud crawler, but its analysis is intentionally shallow: it's one feature in a big suite. Rookery is focused: deeper crawl fidelity, plus an AI insights layer you can actually deliver from.
What counts as a crawl?
A single run against one site, up to 10,000 URLs per crawl on Pro (500 on Free). Re-running the same site still counts as one audit. We don't meter URLs, so there's no credit anxiety.
Do you store my Search Console data?
We store the Search Console metrics needed for your audits and do not resell them. OAuth tokens are stored server-side and can be disconnected from the site connector page.
Can I trust the AI-drafted fixes?
Drafts are drafts. You review and edit every one before it leaves the app. We never push to WordPress. Alt-text and title drafts are usually one-pass good; structural recommendations still need your judgment.
Why is it called Rookery?
A rookery is a colony of rooks. Rooks are corvids — curious, organized, and very good at noticing what's out of place. It felt right.
Your next audit runs itself.
Start your 14-day Pro trial. No card required. See the task board, the drafted fixes, the report — then decide.