Summary: Organic sessions dropped 32% over six months even though Google Search Console (GSC) showed stable rankings and impressions. Competitors began appearing inside AI Overview/assistant-style results; our brand did not. Under budget scrutiny, last-click attribution made paid channels look more measurable and put organic investment in question. We built a cross-channel monitoring and intervention program, reclaimed visibility in AI/assistant surfaces, tightened measurement, and returned organic traffic to growth within 6 months. This case study covers the background, challenges, approach, implementation, results, lessons, and how to apply them.
1. Background and context
Client: Mid-market B2B SaaS company focused on workforce management (referred to below as “the company”). Audience: procurement teams and HR leaders.
- Pre-issue baseline (6 months prior): monthly organic sessions ~45,000; paid sessions ~8,000; MQLs from organic ~380/month.
- Measurement stack: GA4 (client-side), Google Search Console, an SEO tool (rank tracking + on-page audit), HubSpot for CRM, and paid media platforms (Google Ads, LinkedIn).
- Initial observation: the SEO tool reported all green (site health 96%, no critical errors, CTR and Core Web Vitals unchanged); GSC showed stable average positions across tracked keywords and flat impressions.
- Stakeholder context: marketing budget under scrutiny by the CFO; attribution questioned as paid looked more measurable.
Why this matters
If organic traffic can fall while rankings “look fine,” the company risks losing high-intent prospects without clear measurement or attribution. The finance team demanded ROI and clear causal proof before renewing budgets, and a cut to organic investment on that basis would have severely damaged acquisition.
2. The challenge faced
We distilled three core problems:
- Traffic decline: sessions down 32% in six months with no clear signals in GSC or the SEO tool.
- Emergence of competitors inside AI/assistant-generated answers and “AI Overview” panels that siphoned organic clicks; the brand had no visibility in these surfaces.
- Insufficient cross-channel attribution: last-click attribution and partial event loss made paid channels look more effective and forced budget scrutiny.

Key contradictions:
- Rankings stable but clicks down, which points to a change in SERP/answer surfaces rather than ranking positions.
- SEO tool green checks measured on-page technical signals but did not measure presence in new answer formats or LLM-driven result surfaces.
Analogy
Think of the search ecosystem as a river delta. Your website is a shoreline where fish (users) land. For years, the fish came to your beach. Then new channels (AI Overviews, assistant answers) formed shoals upstream that captured the fish before they reached your shore. Your tide gauge (the SEO tool) still showed normal water levels, but fish counts at the beach plummeted.
3. Approach taken
We used a three-pronged program focused on visibility, measurement, and experimentation:
- Visibility: map and win presence in non-traditional search surfaces (Google AI Overview cards, assistant responses, Perplexity/ChatGPT snippets, and Knowledge Panels).
- Measurement: close the attribution blind spots via server-side tagging, CRM stitching, and incrementality testing.
- Experimentation & content: inject signals that improve entity-level recognition and citation likelihood in LLM outputs (structured data, authoritative citations, internal linking, branded anchor text).
Principle: If clicks are being captured by AI/assistant surfaces, treat those surfaces as new publishers. They select and cite sources based on signals of authority, clarity, brevity, and citation quality.

High-level hypothesis
AI/assistant features were synthesizing competitor content and authoritative third-party sources into concise answers. Our content lacked the specific citation signals and entity markers that LLMs and Google’s AI features favored, so users clicked on assistants or competitor links instead. Fixing entity signals and proving causal lift would restore traffic.
4. Implementation process
Timeline: 6 months. Team: SEO lead, data engineer, content strategist, developer, CRO specialist, and analytics lead.
Phase 1 — Measurement audit and quick wins (Weeks 0–4)
- Data reconciliation: compared GSC clicks/impressions to GA4 sessions to locate exact divergence windows; found a step drop coinciding with a Google interface update (approx. Nov 15).
- Snapshotting: created daily snapshots of SERPs for the top 200 keywords, storing HTML and screenshots (screenshot example: “SERP_snapshot_2024-11-16_keyword_X.png”).
- LLM query harness: built a simple runner to query ChatGPT, Claude, and Perplexity for 60 target queries and saved the responses. Example query: “Best workforce scheduling software for enterprise — include top vendors and short comparison.” A minimal sketch of the runner follows this list.
- Quick technical fixes: fixed a misapplied robots rule on 8 category pages and requested reindexing via Google Search Console (no immediate big lift, but it removed an access risk).
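For reference, a minimal sketch of the kind of query runner we used, assuming the `openai` Python SDK and an OpenAI-compatible chat endpoint; the model name, output directory, and the QUERIES list are placeholders, and other providers with compatible APIs can be targeted by swapping the client's base URL and key.

```python
# llm_runner.py -- minimal sketch of the weekly LLM query harness.
# Assumes the `openai` SDK and an API key in the environment; the model
# name, output directory, and QUERIES list are placeholders.
import json
import datetime
from pathlib import Path

from openai import OpenAI

QUERIES = [
    "Best workforce scheduling software for enterprise — include top vendors and short comparison.",
    # ... remaining target queries
]

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def run_snapshot(model: str = "gpt-4o-mini", out_dir: str = "llm_snapshots") -> Path:
    """Query every target prompt once and save the raw answers as a dated JSON file."""
    results = []
    for query in QUERIES:
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": query}],
        )
        results.append({"query": query, "answer": resp.choices[0].message.content})

    out_path = Path(out_dir) / f"{datetime.date.today().isoformat()}_{model}.json"
    out_path.parent.mkdir(parents=True, exist_ok=True)
    out_path.write_text(json.dumps(results, indent=2, ensure_ascii=False))
    return out_path

if __name__ == "__main__":
    print(f"Saved snapshot to {run_snapshot()}")
```

Running it weekly on a scheduler produces the dated snapshots that later phases diff.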
Phase 2 — Entity and citation signal work (Weeks 4–12)
- Structured data: added and validated Organization, Product, Breadcrumb, FAQ, and HowTo schema across 120 high-value pages; ensured canonical URLs and consistent brand naming (an illustrative Organization example follows this list).
- Authoritativeness signals: added authoritative citations to long-form guides (links to whitepapers and industry reports) and the inline signals LLMs favor (date, author, clear claim + evidence).
- Knowledge graph work: claimed and enriched the Google Knowledge Panel with logo, mission statement, social profiles, and founding date; created a detailed “About” page with entity-focused headings and links to authoritative PR coverage.
- Branded context in content: where appropriate, included a one-paragraph “brand summary” that can be parsed as a direct answer (short, factual, citable) and is easy for LLMs to quote.
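As an illustration of the structured-data work, here is a minimal Organization JSON-LD block generated from Python; every value shown (brand name, URLs, founding date, profile links) is a placeholder and should match the About page and Knowledge Panel exactly.

```python
# Sketch: generate Organization JSON-LD for the About page.
# All values are placeholders; keep them consistent with the Knowledge Panel
# and PR coverage so the entity signals agree across sources.
import json

organization_schema = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "ExampleCo Workforce Management",           # consistent brand naming
    "url": "https://www.example.com/",
    "logo": "https://www.example.com/assets/logo.png",
    "foundingDate": "2012",
    "description": "Workforce management software for enterprise scheduling and forecasting.",
    "sameAs": [                                          # authoritative external profiles
        "https://www.linkedin.com/company/example",
        "https://www.crunchbase.com/organization/example",
    ],
}

print(
    '<script type="application/ld+json">\n'
    + json.dumps(organization_schema, indent=2)
    + "\n</script>"
)
```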
Phase 3 — Assistant/LLM monitoring & active seeding (Weeks 12–24)
- LLM prompts: automated weekly queries to major models and Perplexity; tracked whether the brand was named, which URL was cited, and whether a competitor was favored (example prompt set provided below).
- Seeding plan: for queries where competitors appeared in AI Overviews, created concise, highly-cited content intended to be the preferred source for synthesis: 600–900 word briefs with clear claims, bullet lists, and two or three authoritative citations.
- Outreach: amplified these briefs via PR and syndication to industry sites to increase external signals (partner blog posts, press releases with Clearbit/Crunchbase links).
- Measurable signals: added UTM templates to syndicated content to track assistant-driven clickbacks via referrals, and tracked click-throughs from the Knowledge Panel and People Also Ask (PAA) entries; a sketch of the UTM tagging follows this list.
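A small sketch of the UTM tagging applied to syndicated briefs so clickbacks can be isolated in analytics; the parameter values and example URL are illustrative, not the templates we actually shipped.

```python
# Sketch: build consistent UTM-tagged URLs for syndicated briefs.
# Parameter values and the example URL are placeholders.
from urllib.parse import urlencode, urlsplit, urlunsplit

def tag_url(url: str, source: str, campaign: str, medium: str = "syndication") -> str:
    """Append standard UTM parameters to a landing-page URL."""
    scheme, netloc, path, query, fragment = urlsplit(url)
    params = {
        "utm_source": source,      # e.g. the partner site or press outlet
        "utm_medium": medium,
        "utm_campaign": campaign,  # e.g. "ai-overview-seeding-q1"
    }
    query = f"{query}&{urlencode(params)}" if query else urlencode(params)
    return urlunsplit((scheme, netloc, path, query, fragment))

print(tag_url("https://www.example.com/briefs/shift-forecasting",
              source="partner-blog", campaign="ai-overview-seeding-q1"))
```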
Phase 4 — Attribution and incrementality (Weeks 8–24 overlapping)
- Server-side GA4 implementation to reduce ad-blocker and cookie loss.
- Event consolidation and CRM stitching: passed HubSpot contact IDs to GA4 via server-side endpoints to tie sessions to MQLs (a hedged sketch follows this list).
- Holdout experiments: ran geo holdouts for a paid search campaign to measure the organic+paid interplay and incremental lift.
- Attribution model: built an MTA-style model that combined time-decay and algorithmic weighting for assisted conversions; compared results to last-click and MMM estimates.
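A hedged sketch of the CRM-stitching step using GA4's Measurement Protocol; the measurement ID, API secret, client ID, and event name are placeholders, and in production the call sat behind the server-side tagging endpoint rather than in a standalone script.

```python
# Sketch: send a server-side GA4 event that carries the HubSpot contact ID
# as user_id, so sessions can later be stitched to MQLs. Measurement ID,
# API secret, client_id, and the event name are placeholders.
import requests

GA4_MEASUREMENT_ID = "G-XXXXXXX"   # placeholder
GA4_API_SECRET = "replace-me"      # placeholder

def send_mql_event(client_id: str, hubspot_contact_id: str) -> int:
    """Post a 'generate_lead' event to the GA4 Measurement Protocol."""
    payload = {
        "client_id": client_id,          # GA4 client ID captured on-site
        "user_id": hubspot_contact_id,   # CRM ID used for stitching
        "events": [{
            "name": "generate_lead",
            "params": {"lead_source": "organic"},
        }],
    }
    resp = requests.post(
        "https://www.google-analytics.com/mp/collect",
        params={"measurement_id": GA4_MEASUREMENT_ID, "api_secret": GA4_API_SECRET},
        json=payload,
        timeout=10,
    )
    # Note: /mp/collect accepts malformed payloads silently; validate against
    # the /debug/mp/collect endpoint during setup.
    return resp.status_code

if __name__ == "__main__":
    print(send_mql_event("123456789.987654321", "hubspot-contact-42"))
```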
Practical prompt examples we used
- “List the top 5 enterprise workforce management vendors and include a one-sentence differentiator for each. Cite sources.”
- “Compare [Company] vs [Competitor] for shift planning in 100 words. Include recommended use case.”
- “Who provides workforce scheduling with AI forecasting — include pricing model and link references.”
We stored and diffed model outputs weekly to detect shifts.
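A minimal sketch of that weekly diff, assuming the JSON snapshots saved by the runner sketched earlier; the brand aliases and file names are placeholders.

```python
# Sketch: compare this week's LLM snapshot to last week's and flag queries
# where the brand stopped (or started) being mentioned. File names follow
# the snapshot format from the runner sketch; brand aliases are placeholders.
import json
from pathlib import Path

BRAND_ALIASES = ("exampleco", "example co")  # placeholder brand names

def mentioned(answer: str) -> bool:
    """True if any brand alias appears in the model's answer."""
    text = answer.lower()
    return any(alias in text for alias in BRAND_ALIASES)

def diff_snapshots(previous: Path, current: Path) -> None:
    prev = {r["query"]: mentioned(r["answer"]) for r in json.loads(previous.read_text())}
    curr = {r["query"]: mentioned(r["answer"]) for r in json.loads(current.read_text())}
    for query in sorted(set(prev) & set(curr)):
        if prev[query] and not curr[query]:
            print(f"LOST mention: {query}")
        elif curr[query] and not prev[query]:
            print(f"GAINED mention: {query}")

diff_snapshots(Path("llm_snapshots/2024-11-16_gpt-4o-mini.json"),
               Path("llm_snapshots/2024-11-23_gpt-4o-mini.json"))
```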
5. Results and metrics
Timeframe: 6 months from program start to steady state.
| Metric | Before (6-month avg) | After (6-month avg, months 4–9) | Delta |
|---|---|---|---|
| Organic sessions (GA4) | 45,000/month | 57,600/month | +28% (+12,600) |
| GSC clicks (sitewide) | 28,200/month | 36,400/month | +29% (+8,200) |
| Featured snippets / PAA wins | 42 pages | 77 pages | +83% (+35) |
| Mentions in LLM outputs (sample queries) | 2 of 60 queries | 26 of 60 queries | +1,200% |
| MQLs from organic | 380/month | 500/month | +31% (+120) |
| Cross-channel return on ad spend (ROAS, composite) | 2.1x | 3.4x | +62% |

Key takeaways from the metrics:
- Organic sessions and GSC clicks rose roughly in tandem once assistant/AI summaries began to include our brand and cite our pages.
- The LLM mention rate (a custom metric) rose from 3% to 43% of sample queries, an early indicator of restored presence in AI-driven answers.
- Attribution improvements showed that organic assisted conversions had been undercounted by up to 38% under last-click; the company regained confidence in organic ROI.
6. Lessons learned
Data-first lessons we distilled:
- Rank tracking alone is insufficient. Average position and on-page health do not capture the rise of assistant/AI answer surfaces; treat them as separate channels to monitor.
- LLMs and AI Overviews favor short, factual, citable snippets and external authoritative references. Long-form SEO content still matters, but it must include extractable facts and clear citations for use in syntheses.
- Measurement gaps amplify budget risk. Server-side capture and CRM stitching converted ambiguous “lost users” into attributable conversions, changing budget outcomes.
- Proactive seeding + syndication works. Getting your content cited elsewhere (news, partners, research citations) materially increases the chance of being used by LLMs and AI Overviews.
- Incrementality tests are the only language the finance team trusts. Geo holdouts and holdback campaigns were decisive in proving organic value and optimizing spend.

Metaphor
If your site is a lighthouse, AI overviews are new lights appearing closer to the ships. You must make your beam visible to them with clear, citable beacons; otherwise ships will dock at the nearer light and your harbor will empty.
7. How to apply these lessons — step-by-step playbook
Actionable checklist you can apply in 90 days.
Week 0–2: Audit and snapshot
- Reconcile GSC clicks vs GA4 sessions and identify exact drop windows (a reconciliation sketch follows this list).
- Run a SERP snapshot for your top 200 keywords and save HTML + screenshots.
- Set up an LLM/assistant query runner for 50–100 priority queries.
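A minimal pandas sketch of the GSC-vs-GA4 reconciliation, assuming daily CSV exports with the column names shown (placeholders for whatever your exports actually contain).

```python
# Sketch: reconcile daily GSC clicks with GA4 organic sessions to locate the
# divergence window. File names and column names are placeholders.
import pandas as pd

gsc = pd.read_csv("gsc_daily.csv", parse_dates=["date"])          # columns: date, clicks
ga4 = pd.read_csv("ga4_organic_daily.csv", parse_dates=["date"])  # columns: date, sessions

merged = gsc.merge(ga4, on="date", how="inner").sort_values("date")
merged["sessions_per_click"] = merged["sessions"] / merged["clicks"]

# Flag days where the ratio drops well below its trailing 28-day median;
# a step change here points at a SERP/interface shift rather than rankings.
baseline = merged["sessions_per_click"].rolling(28, min_periods=14).median()
merged["flag"] = merged["sessions_per_click"] < 0.8 * baseline

print(merged.loc[merged["flag"], ["date", "clicks", "sessions", "sessions_per_click"]])
```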
Week 2–6: Quick technical and content fixes
- Fix robots, canonical, and sitemap anomalies.
- Add Organization, Product, FAQ, and Breadcrumb schema to priority pages and validate in the Rich Results Test.
- Create 5–10 short, citable briefs (600–900 words) for queries where AI Overviews favor competitors; use bullet lists and 2–3 authoritative citations per brief.
Month 2–3: Signal amplification and monitoring
- Syndicate these briefs to partner sites, press, and industry forums; track UTMs for incoming traffic and citations.
- Continue weekly LLM queries and diff the outputs, logging whether your brand or URL is cited.
- Optimize internal linking and anchor text for entity signals (consistent brand name, product names in H1/H2s); a spot-check sketch follows this list.
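A small spot-check sketch for the entity-signal item above, assuming `requests` and BeautifulSoup; the URL list and entity terms are placeholders you would replace with your own priority pages and names.

```python
# Sketch: spot-check that priority pages carry the consistent brand/product
# names in their H1/H2 headings. URL list and entity terms are placeholders.
import requests
from bs4 import BeautifulSoup

PRIORITY_URLS = ["https://www.example.com/workforce-scheduling"]
ENTITY_TERMS = ("ExampleCo", "ExampleCo Scheduler")

for url in PRIORITY_URLS:
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    headings = " ".join(h.get_text(" ", strip=True) for h in soup.find_all(["h1", "h2"]))
    missing = [term for term in ENTITY_TERMS if term.lower() not in headings.lower()]
    print(url, "missing entity terms:" if missing else "OK", missing or "")
```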
Month 3–6: Attribution and incrementality
- Move to server-side tagging to reduce data loss; pass CRM IDs to analytics to stitch sessions to leads.
- Run holdout/incrementality tests for paid channels to measure the organic interplay and inform budget allocation (a simple lift calculation follows this list).
- Publish a monthly visibility dashboard: SERP snapshots, LLM citation rate, GSC clicks, GA4 sessions, MQLs, and ROI.
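A simple sketch of reading out a geo holdout using a difference-in-differences style counterfactual; the numbers are illustrative only, and a real test should use daily series per geo plus a significance test.

```python
# Sketch: incremental lift from a geo holdout. Inputs are aggregate
# conversions for test and control geos before and during the test;
# the example numbers are illustrative, not client data.
def incremental_lift(test_pre: float, test_post: float,
                     control_pre: float, control_post: float) -> tuple[float, float]:
    """Incremental conversions vs. the counterfactual implied by control geos."""
    expected_post = test_pre * (control_post / control_pre)  # counterfactual for test geos
    incremental = test_post - expected_post
    return incremental, incremental / expected_post

inc, lift = incremental_lift(test_pre=1200, test_post=1500,
                             control_pre=1100, control_post=1150)
print(f"Incremental conversions: {inc:.0f} (lift {lift:.1%})")
```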
Monitoring prompts and queries — practical examples
- “What are the top workforce scheduling tools for 1,000+ employees?”
- “Pros and cons of using AI for shift forecasting — cite studies.”
- “Compare [Company] and [Competitor] for unionized workforce scheduling.”
Automate these weekly and set alerts when your brand or core URLs are absent for three consecutive weeks on high-priority queries.
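A minimal sketch of that alert rule, assuming a weekly mention history per query produced by the diffing step; the query strings and history structure are illustrative.

```python
# Sketch: alert when a high-priority query has gone three consecutive weekly
# snapshots without a brand mention. `history` maps query -> list of booleans
# (one per week, newest last); the example data is illustrative.
def queries_to_alert(history: dict[str, list[bool]], streak: int = 3) -> list[str]:
    return [
        query for query, mentions in history.items()
        if len(mentions) >= streak and not any(mentions[-streak:])
    ]

history = {
    "top workforce scheduling tools for 1,000+ employees": [True, False, False, False],
    "AI shift forecasting pros and cons": [True, True, True, True],
}
print(queries_to_alert(history))  # -> only the first query triggers an alert
```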
Conclusion
When organic traffic falls but your SEO tool and GSC look fine, the missing piece is often new search surfaces — AI overviews and LLM-derived answers — and measurement blind spots. The fix blends technical hygiene with entity-driven content, active seeding, and rigorous measurement (server-side tagging + incrementality testing). The result in this case was a 28% recovery in organic sessions, a 31% rise in organic MQLs, and clearer ROI reporting that satisfied finance.
Practical next step: run the 2-week audit and LLM snapshot. If you want, we can provide a prebuilt runner for ChatGPT/Claude/Perplexity queries and a dashboard template for LLM citation tracking.
