RESOURCE · GUIDE

SEO and GEO measurement: how to tell if your organic visibility is actually growing

SEOCOM's pillar guide to building an SEO and GEO measurement system that drives decisions, not dashboard decoration.

14 min read

Most SEO projects that land at SEOCOM share the same problem before we even touch content or links: no one can answer the question “is it working?” with data. There is an inherited dashboard, there are impressions in Search Console, there is an organic-traffic curve in GA4, but no one can say with confidence whether SEO is gaining or losing ground against competitors.

This guide explains how to build an SEO and GEO measurement layer that holds up in a meeting with the leadership team. No magic tools, no invented metrics, no thirty-widget dashboards. Using the same data sources you already have, read differently.

Why classic SEO measurement no longer suffices

For years, SEO measurement boiled down to three metrics: positions, organic traffic and conversions from organic. Those three still matter, but two structural shifts have left them incomplete.

First shift: SERPs are no longer ten blue links. People Also Ask blocks, featured snippets, videos, local packs, Shopping carousels and now AI Overviews sit above the classic results. Position 1 in 2020 and position 1 in 2026 do not mean the same thing. A project can climb in positions while losing CTR because vertical blocks consume the user’s attention before the link is even reached.

Second shift: GEO. A growing share of users now run their queries in ChatGPT, Claude, Perplexity or Gemini, and never pass through Google. Those clicks never show up in Search Console. If your brand is cited in a ChatGPT response and the user converts, your organic-traffic dashboard will never credit it to SEO.

Modern measurement has to answer three questions at once: how much visibility in traditional search, how much in generative answers, and how is that translating into business. A dashboard that only covers one of the three is incomplete.

The guiding principle: measure what matters, not everything you can measure

At SEOCOM we apply a simple rule: any metric that does not have an associated decision behind it does not make it onto the dashboard. If nobody is going to change anything based on that number, it is in the way. A dashboard with fifty metrics is not a sign of maturity, it is a sign that the team has not done the hard work of prioritising.

We recommend a maximum of five primary KPIs at the business layer. More than five and leadership stops paying attention. Fewer than three and you lack context. That middle band is where the dashboards people actually look at live.

Measuring classic SEO

What follows are the metrics we ask for on every SEOCOM project before touching anything. Not every metric you could measure, just the ones you cannot skip.

Real visibility, not average position

Search Console’s average position blends thousands of keywords into a single mean. It is a terrible metric for decision-making. The three that replace it:

  • Keywords in top 3, top 10, top 20. Three separate curves. If top 3 and top 20 both rise, you are gaining authority. If top 3 falls and top 20 rises, your content is ageing, Google is downgrading it and new traffic is arriving through long-tail only.
  • Keywords with impressions but no clicks. That is your opportunity backlog. Filtering GSC for positions 5-15 with low CTR gives you the next 30 days of editorial work.
  • Share of voice on a target keyword set. Define 50-200 keywords that represent your business. Measure what percentage of that set is ranking top 10 for you versus each competitor. This is the metric that goes to leadership.
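The share-of-voice calculation itself is a few lines of Python. A sketch — `rankings` here is illustrative data standing in for whatever export your rank tracker produces, and the domain names are placeholders:

```python
# Share of voice on a target keyword set: for each domain, the percentage
# of target keywords where it ranks top 10. Missing = not ranking.
def share_of_voice(rankings, domains, top_n=10):
    """rankings: {keyword: {domain: position}}; returns {domain: pct in top_n}."""
    total = len(rankings)
    return {
        domain: round(100 * sum(
            1 for positions in rankings.values()
            if positions.get(domain, 999) <= top_n
        ) / total, 1)
        for domain in domains
    }

# Illustrative rank-tracker snapshot (4 keywords, 2 tracked domains).
rankings = {
    "seo agency barcelona": {"yoursite.com": 3,  "rival.com": 1},
    "ecommerce seo guide":  {"yoursite.com": 12, "rival.com": 7},
    "geo measurement":      {"yoursite.com": 5},
    "ai overview tracking": {"rival.com": 15},
}
print(share_of_voice(rankings, ["yoursite.com", "rival.com"]))
# each domain is top 10 on 2 of 4 keywords → 50.0 apiece
```

Tracked over months, those two percentages are the single chart leadership actually follows.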

Traffic by intent, not total traffic

An increase in organic traffic that does not convert is not a win, it is noise. Minimum segmentation:

  • Branded vs non-branded traffic. Crucial for separating the traffic the client would get anyway from the traffic your SEO work is generating. If total traffic goes up but non-branded does not move, the credit is not yours.
  • Traffic by funnel stage. Discovery (informational posts), consideration (comparative guides, product pages) and conversion (service or product pages). An increase in discovery without a matching lift in consideration signals an internal-linking problem.
  • Traffic by content type. Blog, service pages, product listings, category pages. Each grows for different reasons. Blending them hides the diagnosis.
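The branded/non-branded split is usually a regex over GSC query data. A minimal sketch — the brand pattern (`seocom` plus a spaced variant) is an assumption to replace with your own brand terms and common misspellings:

```python
import re

# Brand-term pattern: ASSUMED variants, adapt to your brand and its typos.
BRAND_RE = re.compile(r"\bseocom\b|\bseo com\b", re.IGNORECASE)

def split_branded(queries):
    """queries: {query: clicks}; returns (branded_clicks, non_branded_clicks)."""
    branded = sum(c for q, c in queries.items() if BRAND_RE.search(q))
    non_branded = sum(c for q, c in queries.items() if not BRAND_RE.search(q))
    return branded, non_branded

# Illustrative GSC export slice.
queries = {"seocom agency": 120, "seo measurement guide": 340, "geo audit": 90}
print(split_branded(queries))  # (120, 430)
```

The non-branded number is the one that goes on the dashboard; the branded one is context.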

Conversions attributed to organic

This is where most projects fail. Last-click attribution undervalues SEO: the user discovers your brand via organic, forgets about it, comes back through direct a week later and converts. Last-click says “direct conversion”. Organic did the work.

The minimum reasonable measurement:

  • Data-driven attribution model in GA4. Not perfect, but distributes credit more fairly across touchpoints.
  • Assisted conversions broken out: how many conversions touched an organic session at some point, even if it was not the last one.
  • If budget allows, a server-side measurement layer (Google Tag Gateway for example) to recover the data that Safari ITP and content blockers kill before GA4 ever sees it.

The tools we actually use

We have seen dashboards with eight different tools doing roughly the same job. It is counterproductive. Our minimum stack:

  • Google Search Console for Google search data: impressions, clicks, CTR, positions. The source of truth for Google organic traffic.
  • Google Analytics 4 for on-site behaviour and conversions, with the data-driven attribution model switched on by default.
  • One rank-tracking tool (Ahrefs, Semrush or Sistrix depending on market and client budget). Pick one and stay with it for at least 12 months so you can compare evolution without the noise of changing vendors.
  • Looker Studio to unify sources in a single dashboard. Decent alternatives: Databox, Power BI. If the client already runs one of those, adapt instead of imposing.

Those four answer 80% of the questions on a typical SEO project. If you need more, it is usually a symptom that the first four are misconfigured.

Common mistakes that wreck SEO reporting

The ones we keep repeating to clients arriving with an inherited dashboard:

  • Measuring impressions as if they were traffic. Impressions are opportunity, not outcome. A rise in impressions without a proportional rise in clicks may mean you are appearing in low positions with low CTR. Not always a good sign.
  • Not separating branded from non-branded. Any company with brand investment mixes both in the same number and cannot see which part of the traffic is SEO-attributable.
  • Last-click attribution. Explained above: kills SEO and over-credits paid and direct.
  • Ignoring long-tail because “it converts less”. Long-tail is what builds topical authority. Without long-tail volume, the top keywords do not rise.
  • Looking at GA4 only once a month. With the volumes typical projects handle, anomalies are only caught on a weekly cadence. Monthly is too late.

Concrete attribution examples

Attribution is the area where most projects argue over numbers without moving. Three examples with illustrative but realistic figures we see often when auditing accounts:

Ecommerce case

An ecommerce with 120,000 organic sessions per month and 1,800 total orders. With last-click attribution in GA4, 540 orders show up as organic (30%). With data-driven attribution organic moves to 720 (40%): the model distributes credit to prior organic sessions of users who ended up converting via direct or email. With a first-click view (not default in GA4 but retrievable through exploration reports), it rises to 900 (50%).

The gap between last-click and data-driven is not cosmetic. If your average ticket is €80, last-click says SEO generates €43,200/month and data-driven says €57,600/month. That is a €14,400 difference when you defend your content or link-building budget in the next meeting. We recommend data-driven as the default, but document which model you use: switching models mid-year makes comparisons impossible.

Search Console / GA4 cross-check: GSC counts clicks from Google; GA4 counts sessions from Google. A healthy gap sits between 5% and 15% (bounce before the tag fires, blocked cookies, Safari ITP). A gap above 30% is a configuration problem, not an attribution nuance.
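That cross-check is worth automating rather than eyeballing. A sketch using the bands above (5-15% healthy, over 30% a configuration problem):

```python
def gsc_ga4_gap(gsc_clicks, ga4_sessions):
    """Percentage gap between GSC clicks and GA4 organic sessions,
    with a rough health label based on the bands described above."""
    gap = 100 * (gsc_clicks - ga4_sessions) / gsc_clicks
    if gap > 30:
        status = "configuration problem"
    elif gap >= 5:
        status = "healthy"
    else:
        status = "unusually low - check filters"
    return round(gap, 1), status

print(gsc_ga4_gap(10_000, 8_900))  # (11.0, 'healthy')
```

Run it on the same date range in both tools; mismatched ranges are the most common false alarm.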

B2B SaaS case

A SaaS with a 45-day decision cycle, €12,000 annual ticket. The user searches “[software category] comparison”, reads three posts, downloads an ebook, forgets about you, returns via direct traffic 20 days later, books a demo, and closes 30 days after that.

With a 7-day attribution window (the shortest GA4 offers), all those conversions appear as direct traffic. With a 30-day window they appear as organic if the last organic session falls inside it. With a 90-day window attribution recovers the initial discovery touchpoint.

Practical recommendation: in B2B with cycles over 30 days, configure extended attribution windows in GA4 (up to 90 days for click, 30 for engaged view). Use conversion-path reports to see how many touchpoints an SQL has on average; if the answer is 4-5, the last-click model is lying to you by construction.

Complementary metric: content-assisted conversions. Filter GA4 for users who visited your blog in the last 90 days and later converted. Not a formal attribution model, but a real indicator of content value.

Content / media case

A publisher with ad revenue has a different problem: there is no “conversion”, there is engagement monetisation. Key metrics are sessions, pages/session, average time on page and 75% scroll depth. All segmented by traffic source.

Illustrative example: 500,000 organic sessions/month at 1.8 pages/session with an RPM of €4 produce €3,600/month of directly attributable revenue. But there is a second effect: organic traffic feeds the recurring audience that later comes back via direct or newsletter. If you can measure what share of your direct traffic has passed through organic in the last 30 days (GA4 allows this), you will see the real SEO value can be 2-3x what direct attribution shows.

A framing that works in leadership meetings: if you had to bring the same organic sessions via Google Ads, how much would they cost?

Example: 120,000 sessions/month at an average CPC of €1.50 (competitive sector) equal €180,000/month of cost saved vs paid. For an SEO project with a €6,000 monthly fee, gross ROI is 30x. That figure is defensible in a leadership meeting without further context and justifies maintaining or increasing investment.

Caveat: this calculation has assumptions. It assumes every organic session is replicable via paid (not true for informational long-tail) and that CPC holds at that volume (it would not). Use it as order of magnitude, not as an exact figure.
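The order-of-magnitude calculation, with that replicability caveat made explicit as a parameter rather than a footnote:

```python
def paid_equivalent_value(sessions, avg_cpc, replicable_share=1.0):
    """Monthly cost of buying the same sessions via paid search.
    replicable_share < 1.0 discounts sessions (e.g. informational
    long-tail) that paid could not realistically replicate."""
    return sessions * avg_cpc * replicable_share

# The article's example: 120,000 sessions at €1.50 average CPC.
print(paid_equivalent_value(120_000, 1.50))        # 180000.0
# Conservative read: assume only 70% is replicable via paid.
print(paid_equivalent_value(120_000, 1.50, 0.7))
```

Presenting both numbers side by side pre-empts the "but paid wouldn't buy all of that" objection in the meeting.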

Measuring GEO: knowing whether AI is citing you

GEO measurement is in a phase similar to SEO in 2005: it works, but there is no standard tool. What does work rests on three complementary blocks.

Why GEO measurement is not SEO measurement

In GEO there is no “position 1”. A ChatGPT response can cite four brands in a single paragraph, and they are not ordered as a ranking: they are justified by the context of the prompt. The concept of “top 10” does not apply. What does apply: you appear or you do not, and when you appear, with what framing.

That shifts the measurement question. In SEO we ask “at what position is my URL?”. In GEO we ask “am I cited? In which prompts? How am I described?”.

Periodic prompt audits

Define a set of 30-50 prompts representative of your business. Examples: “recommend an SEO agency for an international ecommerce”, “best CMSs for SEO”, “SEO agency Barcelona”. Run that set monthly in ChatGPT, Claude, Perplexity and Gemini. Log which prompts you appear in, at which position inside the response, and with what phrasing.

This is manual work; there is no Search Console for it. But it is the only way to know whether you are entering the answers or the AI is ignoring you. There are emerging tools (HubSpot AI Search Grader, Profound, Semrush AI Search Grade) that automate part of the process, but none yet replaces a full manual pass. We recommend starting manual and adding automation when the volume justifies it.
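Until automation justifies itself, a plain CSV log is enough. A sketch of the structure described above — the filename and column names are assumptions, not a standard:

```python
import csv
from datetime import date

# One row per prompt × engine × run date; `cited` and `framing` are
# filled in by hand during the monthly audit.
FIELDS = ["date", "engine", "prompt", "cited", "framing"]

def log_run(path, engine, prompt, cited, framing=""):
    with open(path, "a", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if f.tell() == 0:  # brand-new file: write the header once
            writer.writeheader()
        writer.writerow({
            "date": date.today().isoformat(),
            "engine": engine,
            "prompt": prompt,
            "cited": cited,
            "framing": framing,
        })

log_run("prompt_audit.csv", "ChatGPT",
        "recommend an SEO agency for an international ecommerce",
        cited=True, framing="listed third, described as technical specialists")
```

A shared Google Sheet with the same columns works just as well; what matters is that the set of prompts and the columns never change between runs.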

LLM referrals in GA4

Configure a GA4 segment filtering traffic with referrer chat.openai.com, perplexity.ai, claude.ai, copilot.microsoft.com, gemini.google.com. Low-volume traffic, very high intent: the user has already filtered their decision with the AI before clicking.

Create custom GA4 events for these referrers and give them their own KPI on the dashboard. Even if it is 10 clicks a month today, the growth trend is what counts.
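The classification behind that segment can be sketched as a simple hostname lookup. The list mirrors the referrers above, plus `www.` variants and `chatgpt.com`, which we add on the assumption it is now a live ChatGPT domain:

```python
from urllib.parse import urlparse

# Referrer hostname → LLM bucket. New engines get added here as they appear.
LLM_HOSTS = {
    "chat.openai.com": "ChatGPT",
    "chatgpt.com": "ChatGPT",          # assumed newer ChatGPT domain
    "perplexity.ai": "Perplexity",
    "www.perplexity.ai": "Perplexity",
    "claude.ai": "Claude",
    "copilot.microsoft.com": "Copilot",
    "gemini.google.com": "Gemini",
}

def classify_referrer(referrer_url):
    host = urlparse(referrer_url).hostname or ""
    return LLM_HOSTS.get(host, "other")

print(classify_referrer("https://chat.openai.com/c/abc123"))   # ChatGPT
print(classify_referrer("https://www.google.com/search?q=x"))  # other
```

In GA4 itself the equivalent is a regex match on session source; the table above is the maintained list either way.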

AI Overviews inside Google

For GEO traffic that happens inside Google (AI Overviews, residual SGE), Search Console does not yet expose differentiated data. What you can measure:

  • CTR drops on specific keywords while positions hold. Likely signal: Google is serving an AI Overview above and stealing clicks.
  • Featured-snippet appearances as a proxy. URLs that win a featured snippet are more likely to be used as a source in AI Overviews.
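The first signal can be screened for programmatically over two GSC exports. A sketch that flags keywords whose position held while CTR collapsed — the 30% drop and 1-position stability thresholds are illustrative, not a standard:

```python
# Flag keywords where position moved less than `max_pos_shift` but CTR
# fell by at least `ctr_drop` — the pattern consistent with an AI
# Overview appearing above the result and absorbing clicks.
def ai_overview_suspects(prev, curr, ctr_drop=0.30, max_pos_shift=1.0):
    """prev/curr: {keyword: (position, ctr)}; returns suspect keywords."""
    suspects = []
    for kw, (pos_now, ctr_now) in curr.items():
        if kw not in prev:
            continue
        pos_before, ctr_before = prev[kw]
        stable = abs(pos_now - pos_before) <= max_pos_shift
        dropped = ctr_before > 0 and (ctr_before - ctr_now) / ctr_before >= ctr_drop
        if stable and dropped:
            suspects.append(kw)
    return suspects

# Illustrative month-over-month GSC data: (avg position, CTR).
prev = {"seo measurement": (3.1, 0.12), "geo audit": (6.0, 0.05)}
curr = {"seo measurement": (3.4, 0.07), "geo audit": (5.8, 0.05)}
print(ai_overview_suspects(prev, curr))  # ['seo measurement']
```

A flagged keyword is a candidate for a manual SERP check, not a confirmed AI Overview: seasonality and SERP-feature changes produce the same pattern.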

Brand mentions in the public corpus

LLM responses reflect what the models read during training and what their live retrieval system surfaces. If your brand rarely appears in industry forums, specialised press or transcribed podcasts, LLMs will not cite you. Tracking the evolution of brand mentions across the open web is preventative GEO measurement: it explains why you appear or do not, even before you ask.

How to establish a baseline when there is no industry standard

The practice we recommend on new projects: define a fixed set of 30 target prompts today and run them this same week. Document which brands appear, including yours if it shows up, and save that as month-zero baseline. Every month repeat the same exercise with the same set. In three months you have evolution, in six you have a trend.

Not elegant but it works. And when stable industry tools arrive, you will have accumulated months of comparable data that no new tool can give you retroactively.

Setting up measurement step by step

What follows is the exact order in which we set up measurement on a new project. It is designed so that someone with admin access to the properties can follow it without needing to look anything else up.

Day 1 · Foundational base

  • Search Console with domain property, not URL prefix. Verified via DNS TXT. If you only have URL prefix, you are losing data from subdomains and http/https versions.
  • Sitemap.xml submitted (sitemap-index.xml if there are several). Reasonable priorities, no noindex URLs inside, and validated with GSC’s URL Inspection tool on 3-5 random URLs.
  • GA4 with Enhanced measurement on: scroll, external clicks, video, file downloads. Check that enhanced events do not duplicate your custom events.
  • Conversion events defined: do not mark everything as a conversion. Two or three real ones (lead, purchase, signup). Everything else stays as a normal event.
  • Cross-domain tracking if you have checkout on a subdomain or an external gateway. Without it, the conversion shows as a self-referral and attribution breaks.
  • Internal-IP and crawler filtering on the GA4 property. GA4 excludes known bots automatically; internal IPs need an internal-traffic rule plus an exclusion data filter.

Week 1-2 · Analysis layer

  • Search Console → Looker Studio connection via the official connector. Build a unified source that combines GSC + GA4 in the same report.
  • Base dashboard with primary KPIs: organic traffic, organic conversions, average CTR by keyword cluster, top 20 pages by clicks, top 20 keywords by clicks.
  • Segmentation by page type: URL contains /blog/ → blog; /product/ → product; /services/ → service; else → other. This segmentation is what lets you see which content type is growing.
  • GA4 custom dimensions for useful non-default fields: page type, editorial category, author (if applicable), funnel stage.
  • Saved audiences for “Session started via organic”, “Session started via non-branded organic”, “User with organic-assisted conversion”.
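In Looker Studio the page-type segmentation above is a CASE formula; expressed as code for clarity, with the same path prefixes (adjust them to your URL structure):

```python
# URL path → page type, mirroring the segmentation rule above.
# Order matters if prefixes can overlap; first match wins.
def page_type(url_path):
    if "/blog/" in url_path:
        return "blog"
    if "/product/" in url_path:
        return "product"
    if "/services/" in url_path:
        return "service"
    return "other"

print(page_type("/blog/seo-measurement-guide"))  # blog
print(page_type("/pricing"))                     # other
```

Whichever tool evaluates it, the rule should live in exactly one place so the blog/product/service totals match across reports.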

Month 1 · Competitive context

  • Keyword tracking tool (Ahrefs, Semrush or Sistrix) loaded with a set of 50-200 priority keywords. Not a thousand: that noise does not help. Pick keywords with the criterion “volume × commercial intent × realistic chance to rank”.
  • Alerts for sharp position moves (>5 positions in a week). In Ahrefs set them up from the project dashboard; in Semrush from Position Tracking.
  • Full site crawl with Screaming Frog once a month. Save exports to compare evolution of 4xx/5xx errors, duplicate meta descriptions, empty H1s.
  • Competitive benchmark: share of voice over the same keyword set, with 3-5 real competitors tracked over time.

When applicable · GEO layer

  • Baseline prompt set (30-50 representative prompts) executed manually in ChatGPT, Claude, Perplexity and Gemini. Documented in a Google Sheet with columns per tool and date.
  • Dedicated tools if volume justifies the investment: HubSpot AI Search Grader (free, limited), Profound, Semrush AI Search Grade. None is yet mandatory, all complement the manual baseline.
  • llms.txt at site root declaring the priority content you want LLMs to see. Still emerging as a standard, but low implementation cost.
  • Measure LLM referrers in GA4: segment with source chat.openai.com, perplexity.ai, claude.ai, copilot.microsoft.com, gemini.google.com. Promote it to a dedicated KPI even while volumes are small.

Month 2 onwards · Reporting layer

  • Executive dashboard without jargon, with a maximum of 5-8 metrics, the same ones every month. Audience: CEO/CMO. Format: public Looker Studio or a monthly email capture.
  • Narrative monthly report of 1-2 pages. Not a 40-slide PDF. Three questions answered: what happened, why, what will we do next month.
  • Proactive anomaly alerts: traffic loss >15% week-over-week, position drops on top-10 keywords, mass 5xx errors. Delivered via Slack or email, not buried in a dashboard nobody opens.
  • Quarterly strategy review with concrete decisions: what worked, what expands, what stops. That is governance, not reporting.
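The anomaly thresholds above fit in a trivial weekly script that posts to Slack or email. A sketch covering the two simplest checks (the 15% drop threshold is the article's; the message wording is ours):

```python
# Weekly anomaly check: organic sessions down >15% week-over-week,
# or any URLs returning 5xx. Returns human-readable alert strings.
def weekly_alerts(sessions_prev, sessions_now, errors_5xx, drop_threshold=0.15):
    alerts = []
    if sessions_prev and (sessions_prev - sessions_now) / sessions_prev > drop_threshold:
        pct = round(100 * (sessions_prev - sessions_now) / sessions_prev, 1)
        alerts.append(f"organic sessions down {pct}% week-over-week")
    if errors_5xx > 0:
        alerts.append(f"{errors_5xx} URLs returning 5xx")
    return alerts

print(weekly_alerts(28_000, 22_000, errors_5xx=0))
# ['organic sessions down 21.4% week-over-week']
```

Position drops on top-10 keywords would come from the rank tracker's own alerting; this script only needs to cover what the tools do not.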

This stack is the minimum viable setup for a serious SEO project. Adding layers (BigQuery with GA4 export, log file analysis, full server-side measurement) only makes sense once the team has squeezed the base stack. Building BigQuery before configuring GA4 events properly is building the roof before the walls.

Dashboards that actually serve

A useful SEO/GEO dashboard has three tabs, not twenty. Anything that does not fit into one of the three gets cut.

Tab 1 · Business

The only one leadership would see. Three big numbers:

  1. Organic conversions for the period vs same period last year. With data-driven attribution.
  2. Share of voice on the target keyword set. As a percentage of the whole set.
  3. Total organic clicks. Including GEO referrals as a separate segment, but summed into the total.

Below, up to four secondary KPIs: keywords in top 3, non-branded CTR, non-branded traffic and organic-assisted conversions. More than four and the tab blurs.

Tab 2 · Execution

For the SEO team. Weekly detail:

  • New keywords in top 10.
  • Keywords dropped out of top 10.
  • Newly indexed pages.
  • Pages with CTR drops.
  • GEO prompts executed this week and results.

Tab 3 · Technical

For the product team. Crawl health:

  • Core Web Vitals (LCP, INP, CLS) at the 75th percentile, which is how Google assesses them.
  • 5xx errors over 7 days.
  • Search Console indexing coverage: valid, excluded and reasons.
  • Googlebot crawl logs if available.

Review cadence by horizon

  • Weekly (operational): execution tab. Spot sharp dips or jumps and act.
  • Monthly (tactical): all three tabs together. Reporting committee with the client.
  • Quarterly (strategic): review of primary KPIs, alignment with business, adjustment of the target keyword set if the product has shifted.

Skipping the weekly rhythm is the most common failure. By the monthly review you can no longer correct things, only explain them.

Non-obvious mistakes

The ones above are the obvious ones. The ones below appear when the dashboard “looks fine” but the decisions do not arrive:

  • Measuring many metrics without prioritising. A dashboard with 40 widgets invites non-decision. People open the dashboard, look at five numbers, close it. The rest is expensive decoration.
  • Not cross-referencing data between tools. Search Console and GA4 will say different things for legitimate reasons (sampling, attribution windows, traffic vs session). If you do not understand why they differ, you will spend the meeting arguing numbers instead of the project.
  • Ignoring indirect impact. A post that does not convert but supports rankings of other pages via internal linking is valuable. Dashboards that measure only direct conversion kill it in the next content-pruning round.
  • Not evolving the measurement when the business changes. If the client adds a new product line or enters a new market, the target keyword set has to update next month, not next quarter.
  • Confusing pretty reports with useful reports. A 40-page PDF with charts is not good reporting. A navigable dashboard with three tabs and three stacks of pending decisions is.

How to present SEO and GEO reporting to leadership

Most SEO reports that end up in the hands of a CEO or CMO are unreadable to them. Too many keywords, too many CTRs, too many impressions without context. Leadership does not need to know how many impressions you have, they need to decide whether to invest more, hold or cut the SEO budget. Reports that do not serve that decision are decoration.

The problem with typical SEO reporting

A 30-slide monthly report with Search Console screenshots is not an executive report, it is a data dump. If leadership opens the document and cannot tell in 60 seconds whether SEO is working, the report has failed. SEO has not failed, the story around it has.

We have seen projects with excellent SEO work lose budget because the reporting did not know how to communicate it. And we have seen mediocre projects defend their budgets because the reporting told a coherent story. Knowing how to present is part of the job, not an extra.

The three-layer report

We recommend structuring each monthly report in three stacked layers. Every layer adds detail for whoever wants to dig deeper, but each previous layer stands on its own.

Layer 1 · Three lines of summary in the first paragraph. What happened, what it means, what we recommend. No jargon. A CEO should understand the state of SEO without reading the rest. Example: “In October non-branded organic traffic grew 14% YoY. The curve comes from the product-content cluster we launched in September. We recommend doubling editorial output in that vertical during Q1.”

Layer 2 · The 4-6 numbers that matter. Not 20, not 40. Maximum six. Our recommendation:

  • Non-branded organic traffic (YoY and MoM).
  • Leads or sales attributed to organic (data-driven model).
  • Equivalent value saved vs paid (with explicit assumptions).
  • Share of voice on the target keyword set.
  • Keywords in top 3 (curve).
  • Traffic from LLMs (even if small, to track the trend).

Each one with this month’s value, comparison with the previous month, and year-over-year. Nothing more.

Layer 3 · The context. What you did this month (not as a checklist, as a narrative) and what came from the environment. Leadership needs to separate “this is SEO’s merit” from “this is the market”. If there was a Google update, say how it affected you. If a competitor disappeared from the SERPs, explain why. If there is seasonality, anticipate it.

Cadence and ritual

  • Narrative monthly report, 1-2 pages. Not a 40-slide PDF. Sent as a document, not an attachment nobody opens. An email with the three-line summary plus a link to the live dashboard works better than a PDF.
  • Quarterly strategy review with concrete decisions: what to expand, what to stop, what to change. Governance, not reporting. 60 minutes with a pre-shared agenda.
  • Point alerts for anomalies: sudden traffic loss, unexplained spike, massive CTR drop on top keywords. Via Slack or direct email, not waiting for the monthly report. A month of no reaction can cost €15,000 in missed opportunity.

What NOT to do in a leadership report

  • Send raw Search Console exports. Nobody reads them.
  • Celebrate rankings that bring no conversions. “We are top 3 for [keyword with no volume]” does not impress a CFO. Rankings are input, conversions are output.
  • Hide bad news out of fear. Leadership respects transparency with an action plan. A bad month well explained generates more trust than a mediocre one disguised.
  • Use jargon without defining. If you write “CWV”, define it once. Do not assume technical knowledge.
  • Present without a recommendation. Each report should end with a requested action or a suggested decision. A report without a call to action is noise.

The mental template

Every monthly report should answer three questions, in this order:

  1. Is SEO growing as an investment? How much value in money we are generating.
  2. What happened this month? Narrative of facts, not metric dump.
  3. What will we do differently next month? Concrete decision, with owner and deadline.

If the report does not answer all three, rewrite it. It is a simple filter and leadership appreciates it.

Frequently asked questions

  • How long until SEO produces measurable results?

    In projects with a decent technical base and new content shipping every month, clear signals from month 3-4: new keywords entering top 20, rising impressions in GSC. Conversions realistically attributable to non-branded SEO, somewhere between month 6 and 9. Any agency promising measurable results in under 3 months is either selling smoke or working on branded keywords that would have converted anyway.

  • How do I measure SEO without Google Analytics (cookieless)?

    It can be done, with caveats. Three angles: (1) Search Console still reports impressions and clicks without relying on cookies, enough to measure SERP performance. (2) A server-side layer (Google Tag Gateway or similar) recovers a substantial share of GA4 data that Safari ITP and blockers would kill. (3) Self-hosted server-side analytics via logs or tools like Plausible/Matomo as a complement. Fully cookieless measurement is possible but loses granularity on multi-touch attribution.

  • Can ROI from SEO be predicted before starting?

    Honestly, not with precision. What you can do is scenario-estimate: calculate search volume for the target keyword set, apply expected CTR per position (CTR curves for your sector are publicly available), and cross-check against the site's historical CVR on organic traffic. The result is a range, not a number. Any agency giving you a precise ROI figure with decimals is making it up.

  • Is it worth investing in GEO now or waiting?

    Depends on horizon. If your client is B2B or SaaS with long decision cycles where the user investigates with AI before getting in touch, investing now is defensible: LLMs absorb the corpus over months, and the sooner you start appearing, the sooner it consolidates. If it is low-ticket ecommerce with impulse purchases, GEO weighs little today and classic SEO should stay the priority. The manual prompt baseline described above can be started this week at zero cost, and we recommend it to every project.
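The scenario estimate described in the ROI answer above can be sketched as follows. The CTR-by-position curve and conversion rates here are placeholders, not sector data — substitute published CTR curves for your vertical and the site's historical organic CVR:

```python
# ASSUMED CTR-by-position curve; replace with your sector's published data.
CTR_BY_POSITION = {1: 0.28, 2: 0.15, 3: 0.10, 5: 0.06, 10: 0.02}

def monthly_conversions(keyword_volumes, target_position, cvr):
    """Expected monthly conversions if the whole target set reached
    `target_position`, given a conversion rate `cvr` on organic traffic."""
    clicks = sum(keyword_volumes) * CTR_BY_POSITION[target_position]
    return clicks * cvr

volumes = [5_000, 3_000, 2_000]  # monthly searches per target keyword
low = monthly_conversions(volumes, 5, cvr=0.010)   # pessimistic scenario
high = monthly_conversions(volumes, 3, cvr=0.015)  # optimistic scenario
print(f"{low:.0f}-{high:.0f} conversions/month")   # a range, not a number
```

Reporting the low/high pair keeps the estimate honest: the moment it collapses into a single figure, it starts being quoted as a commitment.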

When it makes sense to ask for help

SEO and GEO measurement can be built with internal resources if the team has the technical judgement. But if:

  • The in-house team lacks experience defining reasonable GA4 attribution.
  • Nobody is running periodic LLM prompt audits.
  • The dashboard was inherited and no one knows how each metric is read.
  • Reports do not support business decisions.

Then outside consultancy accelerates the process. At SEOCOM we build the SEO and GEO measurement system as part of strategic consultancy in the first quarter. The deliverable is not a pretty dashboard, it is that by the end of that quarter you can answer the leadership question with data: “is SEO working?”.

LET'S TALK ABOUT YOUR PROJECT

We design SEO and GEO measurement systems from scratch

KPI definition, technical setup, attribution audit and the monthly reporting system leadership actually reads. Applied to your specific project.

Start the conversation →