▎ FAQ · 30 questions · 6 categories · ~3,200 words
▎ Frequently Asked Questions

The 30 most frequently asked questions about GEO, AEO, and AI search.

Citation-ready answers, organized in six categories. Each answer is sized for AI engine extraction (40-200 words) and grounded in the peer-reviewed Princeton KDD 2024 research where applicable.

▎ Definitions · 5 questions
What is GEO (Generative Engine Optimization)?
GEO is the practice of structuring web content to be visible and citable by generative AI search engines — ChatGPT, Claude, Perplexity, and Google Gemini. The discipline was formalized by Aggarwal et al. in their 2024 KDD paper, which tested 9 optimization tactics on 10,000 queries and demonstrated up to a +115% lift in citation likelihood from source emphasis alone. Full definition →
What is AEO (Answer Engine Optimization)?
AEO is the practice of structuring web content to be selected as the direct answer by featured snippets, voice assistants, Google AI Overviews, and Bing instant answers. The highest-leverage AEO tactic is FAQPage schema, which produces the highest single-signal answer extraction rate across surfaces. Full definition →
What is the difference between GEO, AEO, and SEO?
SEO targets traditional list-of-links engines and optimizes for ranking. AEO targets answer surfaces (featured snippets, voice, AI Overviews) and optimizes for being the answer. GEO targets generative engines (ChatGPT, Claude, Perplexity, Gemini) and optimizes for being a cited source. The three disciplines overlap heavily; most foundational tactics serve all three. Full disambiguation →
What is best-aeo-skill?
best-aeo-skill is the research-backed, evidence-first GEO/AEO skill for Claude Code. It audits, fixes, and monitors website visibility across ChatGPT, Claude, Perplexity, Gemini, and Google AI Overviews. It computes a 0-100 composite GEO Score across 4 vectors (Technical, Citability, Schema, Entity), backed by 33 evidence collectors and 100 numbered optimization rules. The methodology is built on the peer-reviewed Princeton KDD 2024 paper.
What is the Composite GEO Score?
A 0-100 number computed as a weighted sum of four vectors: Technical Accessibility (default 20%), Content Citability (35%), Structured Data (20%), Entity & Brand Signals (25%). Score bands: Excellent (86-100, cited frequently), Good (68-85, regular citation, gaps to fix), Foundation (36-67, indexed but rarely cited), Critical (0-35, effectively invisible). Weights adapt to your business profile.
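To make the arithmetic concrete, here is a minimal sketch of the weighted sum in Python (the names and example scores are illustrative, not the skill's internals):

```python
# Default vector weights quoted above; the skill re-weights these per business profile.
DEFAULT_WEIGHTS = {
    "technical": 0.20,   # Technical Accessibility
    "citability": 0.35,  # Content Citability
    "schema": 0.20,      # Structured Data
    "entity": 0.25,      # Entity & Brand Signals
}

def composite_geo_score(vector_scores: dict[str, float],
                        weights: dict[str, float] = DEFAULT_WEIGHTS) -> float:
    """Weighted sum of four 0-100 vector scores -> one 0-100 composite."""
    return round(sum(vector_scores[v] * w for v, w in weights.items()), 1)

# Strong technical/schema work can't fully offset weak citability: this lands at 75.5 ("Good").
print(composite_geo_score({"technical": 90, "citability": 60, "schema": 95, "entity": 70}))
```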
▎ GEO Mechanics · 5 questions
How do AI engines decide what to cite?
Three steps: retrieval (find candidate sources from the index, similar to traditional ranking), synthesis (LLM writes a coherent answer drawing from candidates), and attribution (engine selects which sources to cite). GEO is mostly about steps 2 and 3 — making your content quotable, attributable, and likely to be selected as a citation source.
What is the strongest GEO tactic?
Source emphasis. Princeton's 2024 paper found that pages which explicitly emphasize their sources — through inline citations, prominent attribution, or bolded reference indicators — are 2.15× more likely to be cited by generative engines (+115% citation likelihood). It costs nothing beyond formatting changes to existing content.
How many statistics should my content have?
Princeton's research suggests ~1 statistic per 200 words as a target density (i.e., 0.5 per 100 words). Pages meeting this density receive 40% more citations. For a 1500-word article, aim for 7-8 numeric claims. Each statistic should be paired with a citation to its primary source — that compounds with Princeton's source-emphasis tactic (+115%) on top.
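If you want a quick self-check on a draft, a simplistic sketch (the skill's statistic_density collector is more sophisticated; this just counts numeric tokens):

```python
import re

def statistic_density(text: str) -> float:
    """Rough statistics per 100 words: counts numeric tokens such as 40%, 2.15, 1,500."""
    words = len(text.split())
    numbers = re.findall(r"\d[\d,.]*", text)
    return len(numbers) / words * 100 if words else 0.0

draft = "...paste or load your article text here..."
print(f"{statistic_density(draft):.2f} statistics per 100 words (target: ~0.5)")
```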
Do I need expert quotes in every article?
Not in every article, but in your high-priority content. Princeton's research found that 2-4 attributed quotations per 1000 words raise citation likelihood by 41%. AI engines treat quoted passages as "anchor evidence" when synthesizing responses. Quotes must include the speaker's name and credential — anonymous "expert says" patterns reduce citation rate.
How fast does GEO show results?
Faster than SEO. AI engines re-crawl frequently and refresh the indexes they retrieve from. Sites that ship the top-3 fixes from a GEO audit typically see measurable AI citation increases within 2-4 weeks. Full reach takes 60-90 days as the engines re-process your content. Compare to SEO, where a major content investment can take 6-12 months to fully rank.
▎ AEO Mechanics · 5 questions
What surfaces does AEO target?
Five primary surfaces: (1) Google featured snippets — the boxed direct answer above search results; (2) Google AI Overviews — AI-generated summaries appearing on 25.11% of searches in 2026; (3) Voice assistants — Siri, Alexa, Google Assistant; (4) Bing instant answers and Copilot; (5) Apple Intelligence and other native-OS assistants. Each surface has slightly different mechanics, but they all reward extractability + clear formatting + structured data.
How is FAQPage schema configured?
FAQPage is a Schema.org markup type. Each Q&A pair becomes a Question entity with an acceptedAnswer Answer entity. Required fields: the Question's name (the question text, ending with ?) and the Answer's text (the answer, 30-200 words). Add 5-10 Q&A pairs per page. Use real user questions from your search console, support tickets, or sales calls — synthetic Q&A is detected and de-cited.
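A minimal FAQPage JSON-LD sketch, emitted here with Python's stdlib json module (the single Q&A pair is illustrative — populate mainEntity with your real questions):

```python
import json

faq = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "What is GEO (Generative Engine Optimization)?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "GEO is the practice of structuring web content to be visible "
                        "and citable by generative AI search engines such as ChatGPT, "
                        "Claude, Perplexity, and Google Gemini.",
            },
        },
        # ...add 5-10 real user questions, each answered in 30-200 words
    ],
}
print(f'<script type="application/ld+json">\n{json.dumps(faq, indent=2)}\n</script>')
```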
What is Speakable schema?
Speakable is the Schema.org property that marks sentences as suitable for voice playback. Voice assistants — Google Assistant, Apple Intelligence, Alexa — use Speakable to determine which passages should be read aloud. Mark passages by CSS selector or XPath. Best practice: marked passages should be 25-45 words, lead with the answer, and use natural language without symbols or metadata.
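Following the same pattern, a minimal Speakable sketch (the selectors and URL are placeholders — point them at whichever elements hold your 25-45 word, answer-first passages):

```python
import json

speakable_page = {
    "@context": "https://schema.org",
    "@type": "WebPage",
    "name": "Frequently Asked Questions",
    "speakable": {
        "@type": "SpeakableSpecification",
        "cssSelector": [".faq-answer-lead", ".tldr"],  # placeholder selectors
    },
    "url": "https://example.com/faq",
}
print(json.dumps(speakable_page, indent=2))
```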
How long should answers be for AEO?
By surface: featured snippets 40-60 words; voice responses 25-45 words; AI Overview synthesized answers 80-150 words. Format your direct-answer paragraphs in this range. Longer content can support longer surfaces, but the first answer paragraph should hit these word counts to maximize extraction probability.
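A trivial sketch for sanity-checking a lead paragraph against those ranges (the surface names are just dictionary keys, not an API):

```python
SURFACE_WORD_RANGES = {
    "featured_snippet": (40, 60),
    "voice": (25, 45),
    "ai_overview": (80, 150),
}

def fits_surface(paragraph: str, surface: str) -> bool:
    lo, hi = SURFACE_WORD_RANGES[surface]
    return lo <= len(paragraph.split()) <= hi

lead = ("GEO is the practice of structuring web content to be visible and "
        "citable by generative AI search engines.")
print(fits_surface(lead, "voice"))  # False: 18 words, pad toward 25-45
```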
Can I track AEO performance?
Yes, in three layers: (1) Featured snippet inclusion via Google Search Console "Search appearance" report; (2) AI Overview inclusion via newer tools like OtterlyAI that monitor mentions; (3) Voice search by proxy via Search Console position data for question-style queries (containing "how", "what", "why", "when"). Direct voice tracking is harder but possible with platform-specific tools.
▎ Tactical Questions · 5 questions
Should I add llms.txt to my site?
Yes, but don't over-weight it. As of Q1 2026, only ~10% of domains have an llms.txt file, and only ~0.1% of AI bot traffic actually fetches it. However, Anthropic officially honors it for ClaudeBot, the standard is gaining adoption, and the cost to generate is near-zero. Generate /llms.txt — but treat it as a small part of your strategy, not the foundation.
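If you do generate one, here is a minimal sketch of what /llms.txt can look like, following the proposed llms.txt convention (an H1 site name, a short blockquote summary, then sections of links — all content below is illustrative):

```python
from pathlib import Path

llms_txt = """\
# Example Site

> One-sentence description of what the site covers and who it is for.

## Docs

- [Getting started](https://example.com/docs/start): install and run a first audit
- [FAQ](https://example.com/faq): 30 common GEO/AEO questions

## Optional

- [Changelog](https://example.com/changelog)
"""

Path("llms.txt").write_text(llms_txt, encoding="utf-8")
```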
Which AI bots should my robots.txt allow?
All 27 in 2026: GPTBot, ChatGPT-User, OAI-SearchBot (OpenAI); ClaudeBot, anthropic-ai, Claude-Web, Claude-User, Claude-SearchBot (Anthropic); PerplexityBot, Perplexity-User (Perplexity); Google-Extended, GoogleOther (Google AI); Applebot, Applebot-Extended (Apple); FacebookBot, Meta-ExternalAgent (Meta); plus YouBot, cohere-ai, MistralAI-User, CCBot, Bytespider, Diffbot, Amazonbot, DuckDuckBot, YandexBot, Bingbot, Googlebot. Each provider may use multiple user-agents for different surfaces.
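To verify what your live robots.txt actually permits, Python's stdlib robotparser gives a quick check (the list below is a subset — extend it to the full 27; note this only evaluates robots.txt rules, not CDN-level blocks):

```python
from urllib.robotparser import RobotFileParser

AI_BOTS = ["GPTBot", "ClaudeBot", "PerplexityBot", "Google-Extended", "Applebot-Extended"]

rp = RobotFileParser("https://example.com/robots.txt")
rp.read()
for bot in AI_BOTS:
    status = "allowed" if rp.can_fetch(bot, "https://example.com/") else "BLOCKED"
    print(f"{bot:20} {status}")
```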
Does Cloudflare block AI bots even if my robots.txt allows them?
Yes, sometimes. Cloudflare has a dedicated "AI Bots" management category in its Security → Bots dashboard. Some sites have this enabled by default — blocking bots their robots.txt explicitly allows. Verify in your Cloudflare dashboard. Akamai has similar bot-management rules. Always test fetch-as-bot for the engines you care about: curl with the User-Agent header, parse the response, confirm content is in the markup.
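The same check the answer describes with curl, sketched with Python's stdlib (caveat: a 200 from your machine doesn't guarantee the real bot gets through, since CDN rules can also key on IP ranges — but a 403 is a strong hint to inspect your bot-management settings):

```python
import urllib.request, urllib.error

def fetch_as(user_agent: str, url: str) -> int:
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status
    except urllib.error.HTTPError as e:
        return e.code  # 403 here often means a CDN/bot-management rule, not robots.txt

for ua in ("GPTBot", "ClaudeBot", "PerplexityBot"):
    print(ua, fetch_as(ua, "https://example.com/"))
```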
Why does my SPA score low on GEO?
AI bots have inconsistent JavaScript execution. A pure SPA (single-page app) that renders content client-side often appears empty to many bots. Use server-side rendering (SSR), static generation, or a hybrid approach for content-bearing pages. Test fetch-as-GPTBot or fetch-as-ClaudeBot to verify content is in the initial HTML response, not added by JS post-load.
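A simple way to spot the problem: fetch the raw HTML with a bot User-Agent and check whether a key passage is present before any JavaScript runs (a rough sketch, not the skill's js_render collector):

```python
import urllib.request

def in_initial_html(url: str, phrase: str, user_agent: str = "GPTBot") -> bool:
    """True if the phrase appears in the raw HTML response, i.e. without JS execution."""
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    html = urllib.request.urlopen(req, timeout=10).read().decode("utf-8", errors="replace")
    return phrase in html

# If your hero copy only appears after client-side rendering, this returns False.
print(in_initial_html("https://example.com/", "What is GEO"))
```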
Should I block AI bots to "protect my content"?
No — at least not if you want AI citation traffic. Blocking GPTBot means invisibility in ChatGPT (87% of AI referral traffic). Blocking ClaudeBot means invisibility in Claude. Blocking PerplexityBot means invisibility in Perplexity. The "protect my content" framing assumes you can both block and be cited; you can't. The right strategy: allow all 27 AI bots, track AI referral traffic in analytics, optimize for citation conversion (which is 5× higher than Google organic).
▎ Tooling · 5 questions
What tools measure GEO performance?
Three layers: (1) Composite scoring tools like best-aeo-skill — run a free audit at bestaeoskill.com/audit; (2) AI search visibility tools like OtterlyAI, Profound, AI Rank Lab — monitor mentions across ChatGPT, Perplexity, Gemini; (3) Traditional SEO tools (Semrush, Ahrefs) which have started adding AI search modules. Use composite tools for diagnostics, mention-tracking tools for share-of-voice analysis.
How do I install best-aeo-skill?
Three install paths: (1) Claude Code: /plugin install best-aeo-skill; (2) Cursor / Codex / 35+ agents: npx skills add bestaeoskill/best-aeo-skill; (3) Manual: git clone https://github.com/bestaeoskill/best-aeo-skill.git ~/.claude/skills/best-aeo-skill. After install, ask Claude "audit https://yoursite.com" — the skill auto-activates.
Is best-aeo-skill free?
Yes. MIT licensed, fully open source, free to use, fork, and ship. The core audit operates standalone with zero external dependencies — no API key required. The hosted audit tool at bestaeoskill.com/audit is also free, no signup. Optional integrations (live SERP data, full-site crawl) are pluggable extensions if you need them.
Does best-aeo-skill replace Ahrefs or Semrush?
No, they're complementary. Ahrefs and Semrush measure traditional SEO signals: domain authority, backlinks, keyword positions. best-aeo-skill measures things AI engines use that traditional tools don't track: statistic density, expert quote count, AI bot accessibility, llms.txt presence, FAQPage schema coverage. Use both — Ahrefs/Semrush for SEO, best-aeo-skill for AEO/GEO.
Can I integrate the audit into CI/CD?
Yes. best-aeo-skill outputs SARIF (for GitHub Code Scanning) and JUnit XML (for GitHub Actions, GitLab, Jenkins). The monitor sub-skill includes a --fail-on-drop flag that exits with non-zero status if your composite score drops below a threshold. Wire this into your deploy pipeline as a gate.
▎ About best-aeo-skill · 5 questions
What's in the SKILL.md?
874 lines, 40KB of canonical specification: architecture overview, the 4-vector composite scoring methodology, 8 adaptive profiles (SaaS / e-com / publisher / local / agency / devtools / academic / default), 7 sub-skills with inputs/outputs/CLI, 5 specialist agents, 33 evidence collectors enumerated, 4 frameworks bundled (CORE-EEAT, CITE, Princeton, Confidence), 100 numbered optimization rules in 6 categories, output formats (JSON/Markdown/HTML/SARIF/JUnit), common workflows, anti-patterns, and 18 cited sources. Read full SKILL.md →
What are confidence labels?
Anti-hallucination labels attached to every audit finding. Confirmed means the finding was directly observed by an evidence collector. Likely means inferred from 2 or more collectors that agree. Hypothesis means LLM judgment or single weak signal — always flagged for human review. Other GEO tools present "issues" without labels, leading users to act on hallucinated problems.
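Conceptually, a labeled finding is just a confidence tag attached to each issue — a small illustrative sketch (the names and rule ID are made up, not the skill's internals):

```python
from dataclasses import dataclass
from enum import Enum

class Confidence(Enum):
    CONFIRMED = "confirmed"    # directly observed by an evidence collector
    LIKELY = "likely"          # inferred from 2+ collectors that agree
    HYPOTHESIS = "hypothesis"  # LLM judgment or a single weak signal -- review by hand

@dataclass
class Finding:
    rule_id: str      # illustrative identifier
    message: str
    confidence: Confidence

finding = Finding("EXAMPLE-001", "Statistic density below 0.5 per 100 words", Confidence.LIKELY)
print(finding.rule_id, finding.confidence.value)
```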
What are the 33 evidence collectors?
Technical (9): robots_check, ai_bot_access, js_render, cdn_blocking, response_codes, sitemap_check, http2_check, mobile_render, lazyload_check. Citability (10): statistic_density, quote_extractor, citation_check, freshness_check, readability, passage_score, fluency_check, hedge_density, claim_verifier, rag_chunk_score. Schema (7): schema_validate, faq_check, article_check, jsonld_lint, speakable_check, product_check, breadcrumb_check. Entity (7): entity_extractor, author_check, knowledge_graph, nap_consistency, brand_signal, sameas_links, expertise_signals.
What's the difference between best-aeo-skill and other GEO tools?
Five differentiators: (1) confidence-labeled findings — every issue marked Confirmed/Likely/Hypothesis to eliminate hallucinated recommendations; (2) multi-engine — optimizes for ChatGPT, Claude, Perplexity, Gemini, and AI Overviews simultaneously; (3) action-oriented — fix --apply rewrites content and ships schema, not just audit reports; (4) adaptive scoring — re-weights for SaaS, e-commerce, publisher, local, agency profiles; (5) peer-reviewed research foundation — built directly on Princeton KDD 2024 methodology.
How was best-aeo-skill built?
The code is written from scratch in pure Python stdlib — no external dependencies for the core audit. The methodology derives directly from the peer-reviewed Princeton KDD 2024 paper (arXiv:2311.09735). 11 evidence collectors implement the Princeton tactics plus industry-tracked signals. The confidence-labeled findings follow best practice from the broader audit-tooling community. Every scoring weight and every rule traces back to a citation in our research foundation.
▎ Next Steps