Building for the Agentic Web: a playbook for firms that can't afford to be invisible.
If you've read the first two parts of this series, you understand the problem. The internet is bifurcating. AI agents are evaluating businesses on a layer most firms haven't built for. Now the question is: what does it actually take to exist on the Agentic Web? The answer isn't one thing. It's a stack — five layers, built in sequence, each reinforcing the next. Miss any layer and the chain breaks.
This is Part 3 of a three-part series. Part 1 covered the data: bot traffic will surpass human traffic by 2027. Part 2 introduced the bifurcation thesis. This piece is the playbook — what to actually do about it.
We work with professional services firms across legal, wealth management, and healthcare. The playbook below reflects what we've learned running AI Presence Audits across these industries, but the principles apply to any high-consideration business where trust drives the sale.
Layer 1: Access — can AI agents reach you?
This is the foundation, and it's where most firms fail before they even start. AI agents can't evaluate what they can't read. If your website blocks AI crawlers — and there's a meaningful chance it does without your knowledge — nothing else in this playbook matters until that's fixed.
The Cloudflare check
If your site runs behind Cloudflare, your AI bot traffic is likely being blocked by default. This changed in July 2025 when Cloudflare began blocking AI crawlers as a standard setting. One in five websites globally uses Cloudflare. Many of them lost AI visibility overnight without any deliberate action. The fix is straightforward: adjust your Cloudflare settings to allow verified AI crawlers (GPTBot, ClaudeBot, PerplexityBot, Google-Extended) while maintaining protection against malicious bots. Your web team can do this in under an hour. The impact is immediate.
The robots.txt audit
Even without Cloudflare, your robots.txt file may be explicitly blocking AI bots. This sometimes happens when development teams copy restrictive configurations from templates. Check for disallow rules targeting AI-specific crawlers.
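As a reference point, here is a minimal robots.txt sketch that explicitly allows the four crawlers named above while keeping an ordinary default rule in place. The `/admin/` path is a placeholder; keep whatever disallow rules your site actually needs, and confirm current bot names against each vendor's documentation, since they change over time.

```
# Explicitly allow the major AI crawlers.
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: Google-Extended
Allow: /

# Default rule for all other crawlers (example paths only).
User-agent: *
Allow: /
Disallow: /admin/
```

The audit question is simply: does any `Disallow` rule above match one of these AI user-agents, directly or via the `*` default?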
The JavaScript parity check
Even if bots aren't blocked outright, they may be seeing a stripped-down version of your site. Content rendered exclusively through JavaScript — client testimonials, FAQ accordions, review widgets, dynamic service descriptions — is often invisible to AI crawlers that don't execute JavaScript. If your richest, most persuasive content exists only in JS-rendered elements, AI agents see a skeleton while humans see the full page.
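One way to spot-check this gap is to compare the server-sent HTML against a list of phrases you know appear on the rendered page. A minimal Python sketch of that check, using placeholder page content (the firm name and FAQ question are invented for illustration):

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects visible text the way a crawler that skips JavaScript would."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        if data.strip():
            self.chunks.append(data.strip())

def missing_from_raw_html(raw_html: str, rendered_phrases: list[str]) -> list[str]:
    """Return the phrases that never appear in the server-sent HTML."""
    parser = TextExtractor()
    parser.feed(raw_html)
    text = " ".join(parser.chunks).lower()
    return [p for p in rendered_phrases if p.lower() not in text]

# Hypothetical page: the FAQ answers are injected by JavaScript,
# so they exist in the browser but not in the raw HTML response.
raw = "<html><body><h1>Smith Legal</h1><div id='faq'></div></body></html>"
phrases = ["Smith Legal", "How long does probate take?"]
print(missing_from_raw_html(raw, phrases))
```

In practice you would fetch the live page with a plain HTTP client (no browser) and feed that response in as `raw_html`; anything the check reports as missing is content only JavaScript-executing visitors can see.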
This is the first layer for a reason. We've audited firms with excellent content, strong reputations, and decades of authority in their fields — and found them returning 403 errors to every AI crawler that attempted to read their site. You can't optimize what you've locked behind a wall.
Layer 2: Structure — can AI agents understand you?
Access gets the agent in the door. Structure determines whether it can make sense of what it finds. AI agents don't read websites the way humans do. They don't scan visually, absorb brand tone, or feel the weight of a well-chosen photograph. They parse data. They look for explicit, machine-readable signals that tell them what your business is, what you do, who you serve, and why you're credible.
Schema markup is your machine-readable identity card
At minimum, professional services firms should implement Organization schema (who you are), ProfessionalService schema (what you offer), and FAQPage schema (what questions you answer). These structured data formats give AI systems a reliable, parseable summary of your entity — not just your page content, but your identity as a business.
Without schema, an AI agent reads your site's HTML and tries to infer what you do from prose. With schema, you're telling it directly: here's our name, our service types, our geographic coverage, our credentials. That's the difference between being interpreted and being understood.
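To make this concrete, here is a minimal JSON-LD sketch built in Python. The firm name, URL, and service details are placeholders, not real data; the property names (`areaServed`, `knowsAbout`) are standard schema.org vocabulary, but which ones matter for your firm is a judgment call:

```python
import json

# Minimal entity card for a hypothetical firm. Every value below is a
# placeholder to be replaced with your firm's actual facts.
org_schema = {
    "@context": "https://schema.org",
    "@type": "ProfessionalService",
    "name": "Example Wealth Partners",
    "description": "Fee-only wealth management for business owners.",
    "url": "https://www.example.com",
    "areaServed": "New York, NY",
    "knowsAbout": ["estate planning", "retirement planning"],
}

# This JSON would be embedded in each page inside a
# <script type="application/ld+json"> tag.
print(json.dumps(org_schema, indent=2))
```

The point of generating it programmatically is consistency: one source of truth for the facts, rendered identically on every page.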
Content structure determines citability
AI systems extract information in chunks. They look for concise, direct answers to specific questions — what the GEO (generative engine optimization) field calls "answer nuggets." If your service pages lead with atmospheric brand copy and bury the concrete details three paragraphs down, the AI may never get to the part that matters.
The structural principle is simple: lead every key page with a direct, factual statement of what you do, for whom, and why it matters — in 40 to 60 words. Then expand. This isn't about dumbing down your content. It's about front-loading the information that machines extract first.
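That 40-to-60-word window is easy to check mechanically. A small sketch (the threshold values come from the principle above; the sample copy is a placeholder):

```python
def nugget_check(page_text: str, lo: int = 40, hi: int = 60) -> tuple[int, bool]:
    """Return (word count of the opening paragraph, whether it fits the window)."""
    first_para = page_text.strip().split("\n\n")[0]
    words = len(first_para.split())
    return words, lo <= words <= hi

# Placeholder opening line, far too short to qualify as a full nugget.
count, ok = nugget_check("We help founders sell their companies.\n\nLonger copy follows.")
print(count, ok)
```

Running this across every key service page gives you a quick list of pages whose opening paragraph is either too thin or too padded to serve as an extractable answer.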
FAQ architecture is the fastest win in GEO
We've seen case after case where launching a well-structured FAQ section — formatted as genuine Q&A with schema markup — produces measurable visibility improvements within 60 days. FAQ content works because it mirrors exactly how people prompt AI systems: they ask questions. If your site has pre-structured answers to those questions, you're meeting the AI where it lives.
The questions should come from your actual prospects, not your marketing team's assumptions. Think about what a prospective client asks during their first phone call. Those are your FAQ topics.
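Pairing those questions with FAQPage markup looks like this in a minimal sketch. The two Q&A pairs are invented placeholders; the Question/Answer nesting follows schema.org's FAQPage structure:

```python
import json

# Hypothetical prospect questions and answers; replace with the real
# questions prospects ask on their first call.
faqs = [
    ("How are your fees structured?",
     "We charge a flat annual fee based on assets under management."),
    ("Do you work with clients outside New York?",
     "Yes, we serve clients nationwide via remote consultations."),
]

faq_schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": q,
            "acceptedAnswer": {"@type": "Answer", "text": a},
        }
        for q, a in faqs
    ],
}
print(json.dumps(faq_schema, indent=2))
```

Each answer doubles as an answer nugget: short, factual, and formatted exactly the way AI systems extract content.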
Layer 3: Authority — does the AI trust you?
Access and structure make you visible. Authority determines whether the AI actually recommends you. AI systems don't trust businesses because they say they're trustworthy. They trust businesses that other credible sources say are trustworthy. This is the single most underinvested dimension for professional services firms, and it's consistently the hardest gap to close.
Third-party citations are the new backlinks
In traditional SEO, authority was built through backlinks — other sites linking to yours. In GEO, authority is built through brand mentions in sources that AI systems treat as credible. That means industry publications, major business media, legal or financial directories, Wikipedia (where applicable), LinkedIn company profiles, and review platforms.
Most professional services firms have strong offline reputations and weak online citation footprints. Their managing partners are well-known in their professional circles. Their firm is recommended over lunch conversations and at industry conferences. But when an AI agent searches for corroborating third-party evidence, it finds almost nothing — because the reputation lives in a network of human relationships that machines can't see.
Consistency across sources is a trust signal
AI systems cross-reference information about your business across multiple sources. If your LinkedIn page says you have 45 employees but your website says "a team of over 60," that's a contradiction that erodes trust. If your practice areas are described differently on your site, your directory listings, and your industry profiles, the AI has no reliable ground truth.
Entity consistency is tedious work — auditing every platform where your business appears and ensuring the facts align. But it's the kind of work that compounds. Once your entity signals are clean and consistent, every AI query that touches your business benefits.
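The audit itself can be scaffolded with a simple cross-check. This sketch assumes you've already pulled each platform's stated facts into a table by hand; all the values below are placeholders:

```python
# Facts about the firm as stated on each platform (placeholder data).
sources = {
    "website":   {"employees": "60+", "practice": "estate planning"},
    "linkedin":  {"employees": "45",  "practice": "estate planning"},
    "directory": {"employees": "60+", "practice": "trusts and estates"},
}

def find_conflicts(sources: dict[str, dict[str, str]]) -> dict[str, dict[str, str]]:
    """Return each fact that has more than one distinct value across sources."""
    conflicts = {}
    fields = {field for facts in sources.values() for field in facts}
    for field in fields:
        values = {src: facts[field] for src, facts in sources.items() if field in facts}
        if len(set(values.values())) > 1:
            conflicts[field] = values
    return conflicts

print(find_conflicts(sources))
```

Every field the function flags is a contradiction an AI system can also find, and each one you resolve removes a reason for the model to discount what your own site says.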
Thought leadership needs to be citable, not just impressive
Most professional services thought leadership is written for human readers — long, nuanced, qualitative. It positions the author as thoughtful. But it often lacks the specific, factual, data-backed claims that AI systems extract and cite.
The fix isn't to stop writing thought leadership. It's to ensure every piece includes concrete data points, specific claims, and citable statements — not just opinions and frameworks. AI systems cite facts. They summarize opinions. The brands that provide both get recommended. The brands that provide only opinions get paraphrased at best and ignored at worst.
Layer 4: Narrative — what does the AI say about you?
This layer is about controlling the story. When someone asks an AI about your firm, the AI constructs a narrative. It pulls from your website (if accessible), from third-party sources, from training data, and from whatever other signals it can find. That narrative becomes your brand's first impression for a growing share of prospective clients.
Most firms have never audited what that narrative actually says. We run what we call a "prompt gap analysis" — testing the actual questions prospects ask across ChatGPT, Gemini, Perplexity, and Copilot, and documenting what each platform says (or doesn't say) about the firm.
Common patterns we see in the findings:
The AI confuses the firm with a similarly named business.
The AI describes services the firm no longer offers.
The AI attributes the firm's specialization incorrectly.
The AI mentions competitors favorably and omits the firm entirely.
The AI doesn't know the firm exists.
Each of these is a solvable problem. But you can't solve what you haven't measured.
Run at least 20 prompts across four platforms
Cover the full decision journey: category discovery prompts ("what are the best estate planning attorneys in New York"), comparison prompts ("firm A vs firm B"), recommendation prompts ("recommend a wealth manager for high-net-worth individuals"), and brand-specific prompts ("tell me about [your firm]").
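The mechanics of the test matrix are simple enough to scaffold in a few lines. This sketch uses one placeholder prompt per journey stage; in a real run you would list five variants per stage to hit twenty prompts, then fill in the `mentioned` field as you record each platform's answer:

```python
from itertools import product

# One placeholder prompt per stage; expand each stage to five variants
# for the full 20-prompt run. Bracketed names are stand-ins.
templates = {
    "discovery":      "what are the best estate planning attorneys in New York",
    "comparison":     "compare [Firm A] and [Firm B] for estate planning",
    "recommendation": "recommend a wealth manager for high-net-worth individuals",
    "brand":          "tell me about [your firm]",
}
platforms = ["ChatGPT", "Gemini", "Perplexity", "Copilot"]

# Every prompt runs on every platform; 'mentioned' is filled in manually.
checklist = [
    {"platform": p, "stage": stage, "prompt": prompt, "mentioned": None}
    for p, (stage, prompt) in product(platforms, templates.items())
]
print(len(checklist))
```

The output of a full run is a simple table: for each platform and stage, were you mentioned, how were you described, and who was recommended instead.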
Document the narrative drift
Compare what the AI says to what you want it to say. The gap between intended positioning and AI-constructed narrative is your narrative drift. Every piece of content you create, every third-party placement you pursue, every structural improvement you make should be aimed at closing that gap.
Layer 5: Readiness — can AI agents act on your behalf?
This is the frontier. Most firms aren't here yet — and that's fine, because it's the top of the stack, not the foundation. But it's where the market is heading, and firms that start thinking about it now will have a significant advantage.
Agentic commerce readiness is about more than being discovered. It's about whether AI agents can take action once they've found you. Can an agent check your availability and book a consultation? Can it parse your service descriptions well enough to match a client's needs to your offerings? Can it initiate a request for proposal on behalf of a buyer?
Today, this mostly shows up in structured booking data, machine-readable scheduling endpoints, and service catalogs formatted for programmatic access. Tomorrow, it will involve the full stack of agentic commerce protocols — MCP (Anthropic's Model Context Protocol), ACP (OpenAI's Agentic Commerce Protocol), and UCP (Google's Universal Commerce Protocol) — that enable AI agents to discover, evaluate, and transact with businesses on behalf of their users.
For most professional services firms, the immediate action is simpler: ensure that your contact and scheduling infrastructure is machine-readable. If an AI agent determines you're the right recommendation, can it find a structured way to help the prospect engage with you? Or does the journey dead-end at a generic contact form with no structured data?
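One forward-looking step is to describe the engagement path itself in structured data. The sketch below uses the schema.org `ContactPoint` and `ReserveAction` types; the firm name, phone number, and URLs are placeholders, and whether a given AI agent consumes these signals today varies by platform, so treat this as structure laid down ahead of demand rather than a guaranteed integration:

```python
import json

# Machine-readable engagement path for a hypothetical firm.
# All contact details below are placeholders.
engage_schema = {
    "@context": "https://schema.org",
    "@type": "ProfessionalService",
    "name": "Example Wealth Partners",
    "contactPoint": {
        "@type": "ContactPoint",
        "contactType": "new client inquiries",
        "telephone": "+1-555-0100",
        "email": "intake@example.com",
    },
    "potentialAction": {
        "@type": "ReserveAction",
        "target": {
            "@type": "EntryPoint",
            "urlTemplate": "https://www.example.com/book-consultation",
        },
    },
}
print(json.dumps(engage_schema, indent=2))
```

Even if no agent books a consultation through this today, it replaces the dead-end contact form with an explicit, parseable answer to "how does a prospect engage this firm?"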
The 90-day stack
If you're a professional services firm reading this and wondering where to start, here's how we sequence the work.
Days 1–14 · Access
Audit your bot protection. Check Cloudflare settings. Review robots.txt. Identify JavaScript rendering gaps. This is the highest-impact, lowest-effort phase. If your site is currently blocking AI crawlers, fixing this alone can shift your visibility within weeks.
Days 15–45 · Structure
Implement core schema markup (Organization, ProfessionalService, FAQPage). Restructure key service pages with answer nuggets in the first 60 words. Build or optimize your FAQ section with real prospect questions and schema markup.
Days 30–60 · Narrative baseline
Run a prompt gap analysis across all four major AI platforms. Document what they say, where you're absent, and where competitors dominate. This becomes your roadmap for authority-building and content strategy.
Days 45–90 · Authority
Begin the longer-cycle work: securing third-party mentions in authoritative sources, building entity consistency across platforms, publishing citable thought leadership, and engaging authentically in the community channels (Reddit, industry forums) that AI systems treat as credible sources.
This isn't a one-time project. AI platforms update their models, new competitors emerge, and the agentic commerce protocol landscape is evolving rapidly. The firms that treat Agentic Web presence as an ongoing discipline — the way they treat Human Web presence today — are the ones that will compound their advantage over time.
The bottom line
The internet has split. The Human Web — the one you've been building for decades — still matters. But the Agentic Web is where a growing share of the discovery, evaluation, and decision-making now happens.
Building for both isn't optional. It's the cost of remaining competitive in a market where your next client's first impression of your firm may be formed entirely by what an AI agent says about you.
The stack is clear: access, structure, authority, narrative, readiness. Five layers, built in sequence, each reinforcing the next.
The firms that build this stack now — while the competitive field is still open and the models are still forming their understanding of categories — will own their position in AI-mediated discovery. The firms that wait will pay more to catch up with every month that passes. The compounding has already started.
The Agentic Shift — Series Index
Part 1 · Bot Traffic Exceeds Human by 2027
Part 2 · The Two Webs: Why the Internet Is Splitting in Half
Part 3 (this piece) · Building for the Agentic Web: A Playbook