Glossary
100 terms · 7 categories · Updated weekly

Every Google Ads term ecom founders actually search.

100 terms defined in plain operator language. Performance Max, AI Max, Demand Gen, GMC, server-side tracking, POAS, MER. Each entry tells you what it is, why it matters, and what most agencies get wrong about it.

100 defined terms · 7 categories · 2-4 sentences each · Free forever

Campaign types · 8 terms
Performance Max

Google's automated campaign that runs across Search, Shopping, YouTube, Display, Discover, and Maps from a single asset group.

PMax took over most ecom Shopping spend in 2024-2026 because Google routed inventory through it whether buyers asked or not. The campaign is a black box from the dashboard but you control three real levers: asset group structure, audience signals, and feed quality. Most agencies treat PMax as set-and-forget. We treat it as a structure problem with weekly tuning.

AI Max

Google's 2026 AI-optimised campaign that consolidates Search, Shopping, and Demand Gen with deeper LLM-driven matching.

AI Max went GA April 15, 2026 and changes how match types and asset selection work inside Search and Shopping. The configuration most agencies miss is the brand-exclusion list and feed labelling that prevents AI Max from cannibalising your existing PMax. If you turn it on without the structure work first, blended ROAS drops in week one.

Demand Gen

Google's YouTube + Discover + Gmail campaign type, the successor to Video Action Campaigns.

Demand Gen replaced VAC in late 2024 and runs on a different signal stack: it's audience-led rather than keyword-led. Creative volume matters more than creative polish: six scripts, three hooks per script, vertical and 16:9 from the same source. Without that velocity Demand Gen burns budget on the wrong audiences before the algorithm finds the right ones.

Standard Shopping

Manual-bid Google Shopping campaigns where you control bids and structure at the product-group level.

Standard Shopping mostly disappeared into PMax for ecom but it still earns its place for branded queries and high-margin SKUs that need protection from PMax's auto-allocation. The accounts winning in 2026 keep Standard Shopping alive on a thin layer for the products and queries that need a hand on the wheel.

YouTube Ads

Google's video-first campaign types covering in-stream, in-feed, Shorts, and Discover placements.

YouTube is the most underpriced attention surface in advertising for ecom in 2026. Most agencies run YouTube as a brand-awareness layer. We run it as direct-response with the script-hook-angle matrix: six scripts × three hooks per batch, mass-tested, format-matched for vertical, in-feed, and 16:9 from the same source.

Shopping

Google's product-listing ad placement on the SERP, Google Shopping tab, YouTube, and Discover.

Shopping is feed-driven: every dollar of Shopping performance traces back to feed quality, not bidding. Title structure, custom labels, store quality, and reviews aggregation are worth more than tactical bid changes. Most agencies tune the bid; we tune the feed.

Discovery campaigns (legacy)

The pre-2024 Discover/Gmail/YouTube placement campaign type Google rolled into Demand Gen.

Discovery campaigns are deprecated. Any account still running them is on the migration deadline and should be reading the Demand Gen onboarding flow, not optimising the legacy structure. The asset and audience setup is similar enough that the migration is mostly a remap rather than a rebuild.

Auction & bidding · 16 terms
Smart Bidding

Google's machine-learning bid strategies (tCPA, tROAS, Max Conversions, Max Conversion Value) that adjust bids per auction.

Smart Bidding only works if your conversion signal is clean. If tracking is broken or your conversion value is wrong, Smart Bidding optimises against the wrong outcome and you can't tell because the dashboard reports the same numbers it learned from. Server-side tracking and Enhanced Conversions are the upstream fix; bidding tweaks are downstream.

tCPA (Target CPA)

Smart Bidding strategy that targets a maximum cost per conversion.

tCPA shifts the optimisation goal from auction-by-auction to a CPA average across the portfolio. The trap is that it only optimises for *count* of conversions, not *value*. If you sell a $40 SKU and a $400 SKU through the same campaign, tCPA will pull spend toward whichever volume converts faster, which is usually the cheap one. Use tROAS instead when conversion values vary.
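
The count-versus-value trap above can be sketched as back-of-the-napkin arithmetic. All numbers are made up for illustration:

```python
# Hypothetical two-SKU campaign: tCPA only sees conversion COUNT,
# so both conversions look identical to it.
cheap = {"price": 40, "cpa": 25}     # converts fast, cheap to acquire
premium = {"price": 400, "cpa": 35}  # converts slower, slightly pricier CPA

def beats_target(sku, target_cpa=30):
    """Under a tCPA target, spend flows to SKUs that clear the CPA bar,
    regardless of the revenue behind each conversion."""
    return sku["cpa"] <= target_cpa

print(beats_target(cheap))    # True  -> spend flows here
print(beats_target(premium))  # False -> starved, despite 10x the revenue

# Value per acquisition dollar is what tROAS would optimise against instead:
print(cheap["price"] / cheap["cpa"])     # 1.6
print(round(premium["price"] / premium["cpa"], 1))  # 11.4
```

The premium SKU is over seven times more valuable per acquisition dollar, and tCPA still starves it.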

tROAS (Target ROAS)

Smart Bidding strategy that targets a return-on-ad-spend ratio (revenue per ad dollar).

tROAS is the right Smart Bidding strategy for ecom because it optimises against value, not count. The catch: it lags. Setting tROAS too aggressively starves the campaign of impression volume, and the algorithm never gets enough conversions to learn from. Most accounts need a 60-90 day calibration before tROAS produces the curve agencies promise on the sales call.

Max Conversions

Smart Bidding strategy that maximises the number of conversions inside a fixed budget, with no CPA target.

Max Conversions is the bidding strategy you start with when a campaign has insufficient conversion history for tCPA or tROAS to learn. It's a 30-60 day calibration step, not a permanent state. The accounts that get stuck on Max Conversions for six months are the accounts where the agency forgot to graduate them.

Max Conversion Value

Smart Bidding strategy that maximises total conversion value inside a fixed budget, with no tROAS target.

Max Conversion Value is what you use after Max Conversions has built enough learning history but before you can confidently set a tROAS target. It optimises against value rather than count, which matters when SKUs vary in price. Most ecom accounts should sit on this for 30-60 days before graduating to tROAS.

Manual CPC

The bid strategy where you set a max cost per click yourself instead of letting Smart Bidding do it.

Manual CPC is mostly a relic for ecom in 2026 because Smart Bidding outperforms it on accounts with sufficient conversion data. The exception is brand-protection campaigns where you want a flat ceiling and predictable cost per click on a small set of branded queries. Outside of that, leaving Manual CPC on a non-brand campaign is leaving performance on the table.

Enhanced CPC (eCPC)

A semi-automated bid strategy that adjusts your manual CPC up or down based on conversion likelihood.

eCPC is the bridge between Manual CPC and full Smart Bidding. It nudges your bid in real time but keeps your max-CPC ceiling in place. We see it most often on brand-protection campaigns or low-volume search campaigns that don't have enough data for tROAS or tCPA to learn cleanly.

CPM (cost per mille)

Cost per thousand impressions. The pricing model used for awareness-led video and display.

CPM is the bid currency for top-of-funnel YouTube and Display when the goal is reach, not clicks. For ecom direct response on YouTube and Demand Gen, CPM is a diagnostic rather than a target: a sudden CPM spike usually means the algorithm is competing for a shrinking audience pool, which is one of the earliest signs of creative fatigue.

CPV (cost per view)

Cost per video view. A YouTube-only metric where a view counts after 30 seconds or full completion of a shorter ad.

CPV is the unit cost on YouTube in-stream and skippable in-feed. A healthy CPV depends entirely on creative quality: a strong hook earns a 30-second view at a lower CPV than a weak one because more viewers self-select to keep watching. Tracking CPV alongside view-through CPA tells you whether the creative is doing its job.

Quality Score

Google's 1-10 score for a keyword, based on expected CTR, ad relevance, and landing page experience.

Quality Score still exists in Search but matters less than it did pre-PMax because so much spend now flows through automated campaign types where the score isn't surfaced. On the campaigns where it does matter, low scores correlate strongly with poor landing pages, not with bid level. Fix the page, the score follows.

Ad Rank

The score Google uses to decide whether and where your ad shows in the auction, combining bid, Quality Score, and expected impact.

Ad Rank is the auction output that determines whether your ad shows at all and which slot it lands in. The opaque variables are 'expected ad extension impact' and 'auction context', but the controllable inputs are bid, Quality Score, and asset extensions. Most poor positions are caused by missing extensions, not low bids.

CTR (click-through rate)

Clicks divided by impressions. The first signal that the ad headline plus extension is or isn't earning attention.

CTR matters most on Search and Shopping. A CTR below the category baseline almost always means the ad copy is generic or the asset extensions are missing. Inside PMax the surfaced CTR is a blended number across placements, so it's a directional signal rather than a diagnostic.

Impression share

The percentage of impressions you got out of the impressions you were eligible for in the auction.

Impression share is the diagnostic that tells you whether you're losing volume to budget, to rank, or to neither. High Search lost IS (rank) means raise the bid or fix Quality Score; high Search lost IS (budget) means raise the daily budget. Both elevated on the same campaign is a structural problem worth surfacing.
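
The diagnostic above reduces to a simple rule. This is a sketch with an illustrative 10% threshold, not a Google-published cut-off:

```python
def is_diagnosis(lost_is_rank, lost_is_budget, threshold=0.10):
    """Map 'Search lost IS' metrics (0-1 fractions) to the likely fix."""
    rank_issue = lost_is_rank > threshold
    budget_issue = lost_is_budget > threshold
    if rank_issue and budget_issue:
        return "structural: both rank and budget are capping volume"
    if rank_issue:
        return "raise the bid or fix Quality Score"
    if budget_issue:
        return "raise the daily budget"
    return "healthy: no material impression share loss"

print(is_diagnosis(0.25, 0.02))  # rank-limited campaign
print(is_diagnosis(0.03, 0.30))  # budget-limited campaign
```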

Auction Insights

The Google Ads report showing which competitors you're sharing the auction with and how often.

Auction Insights is the cleanest first look at the competitive set on Search and Shopping. The metric to watch is overlap rate per competitor: a sudden jump usually means a new entrant is bidding aggressively in your terms. The report does not surface for PMax.

Bid adjustments

Percentage modifiers applied to your base bid based on device, location, audience, or schedule.

Bid adjustments mostly disappeared into Smart Bidding's automated decisioning, but they're still relevant on Manual CPC, Performance Max audience signals, and Demand Gen optimisation. The rule of thumb: a +20% adjustment that doesn't move volume or efficiency is a signal that the segment is already being treated correctly by the algorithm.

Seasonality adjustments

A Smart Bidding tool to temporarily nudge conversion rates up or down for predictable events like Black Friday.

Seasonality adjustments are the lever you set the week before a sale or a launch so Smart Bidding doesn't get caught flat-footed when conversion rates spike. They cap at 14 days and they only work when the event is known and predictable: a one-day promo, a flash sale, a launch window. Outside of those windows, leave them off.

Targeting · 15 terms
Search Themes

Phrases inside an asset group that signal to PMax which search queries you want to compete for.

Search themes were Google's answer to the loss of keyword visibility inside PMax. They're a signal, not a guarantee; PMax can ignore them. The structural mistake most agencies make is stuffing too many search themes into one asset group, which dilutes the signal and makes the asset group harder for the algorithm to learn against.

Asset Groups

PMax's organisational unit holding a set of creative assets, audience signals, search themes, and a budget share.

Asset groups are the actual lever inside PMax. Most agencies set up one asset group per campaign and call it a day. The accounts winning in 2026 split asset groups by intent (branded / generic / competitor / problem-aware) so each one gets its own creative set, audience signal, and feed slice. Every asset group is its own mini-campaign with its own performance signal.

Audience Signals

First-party data, custom segments, and in-market signals you feed PMax to seed its targeting model.

Audience signals don't constrain who PMax targets; they tell the algorithm where to start learning. Customer match lists, recent buyers, high-LTV cohorts, and competitor in-market segments are the four signals that consistently move asset group performance in the first 30 days. Without them PMax burns the first month of spend on cold prospecting that converts at half the rate.

Custom Segments

Audience definitions built from competitor URLs, search queries, and apps to seed Google's audience model.

Custom segments are how you tell Google 'find me people who behave like the customers of these specific competitors.' Built right, a custom segment can move PMax CPA 15-30% in 60 days. Built lazy (one URL, no queries), it does nothing.

Customer Match

First-party customer email/phone list uploaded to Google to target or exclude existing customers.

Customer match lists are the highest-quality audience signal available to ecom advertisers in 2026. The two uses that compound: feeding high-LTV buyers as a positive signal to PMax, and excluding recent purchasers from prospecting campaigns so you stop paying to acquire customers you already have. Both require a working CRM sync, not a one-time upload.

In-market audiences

Google's audience segments of users actively researching a category right now.

In-market audiences sit at the bottom of the funnel and signal active research intent. They're useful as audience signals on PMax and Demand Gen and as targeting on Display, but the segment-name granularity is shallow: you can target 'beauty / skincare' but not 'mid-luxury serum buyers'. For finer granularity, layer custom segments on top.

Affinity audiences

Google's audience segments of users with long-term lifestyle interests in a category.

Affinity audiences sit at the top of the funnel: 'beauty enthusiasts' rather than 'currently shopping for a serum'. They're useful for awareness budgets on YouTube and Display, almost never useful as a primary signal for conversion-led ecom campaigns. If you're using affinity as the audience signal on a Demand Gen campaign with a sales target, swap it for in-market plus customer match.

Life events

Google's audience segments triggered by major milestones: moving, getting married, having a baby.

Life events targeting is sharp when the product maps to the event (home goods + recently moved, baby brand + new parent) and dilutive when it doesn't. Most ecom buyers either don't use it or don't audit it, then wonder why their YouTube spend is finding the wrong people. Audit it once a quarter and remove anything that doesn't have a direct line to the SKU.

Detailed demographics

Income, education, marital status, and parental status segments available as targeting or signals.

Detailed demographics are most useful as exclusions, not inclusions. Excluding the wrong income tier from a luxury brand or the wrong parental status from a single-target product usually produces a cleaner lift than narrowing inclusions, because the inclusions force the algorithm into a tighter funnel that may not have signal density.

Placement targeting

Specifying or excluding the websites, YouTube channels, or apps where your ads run on Display and Video campaigns.

Placement targeting is mostly an exclusion lever in 2026. Inclusion lists tend to starve the algorithm; exclusion lists clean out the long tail of low-quality placements (made-for-ads sites, irrelevant apps, kid content) that Google's automatic placement otherwise drips spend into. Run the YouTube placement report monthly and exclude anything with high spend and low conversions.

Negative keywords

Keyword exclusions at the campaign or ad-group level that prevent your ad from showing on those queries.

Negative keywords still matter on Search and on the campaign-level negative list inside PMax. The biggest leak in most accounts is missing brand negatives on non-brand campaigns, which lets non-brand campaigns claim credit for branded clicks. The second biggest is missing competitor or unrelated category terms that creep in via broad match.

Match types

How tightly Google matches a keyword to a search query: broad, phrase, or exact.

Match types in 2026 are looser than they read on the dashboard: broad has gotten broader, and even exact has gotten looser. The implication is that match-type strategy alone won't keep your search terms list clean; you have to layer aggressive negative keywords plus search themes and brand exclusions on top. Pure-exact campaigns are mostly dead for ecom non-brand.

Geo-targeting

Restricting ad delivery to specific countries, regions, cities, or postcode-level radii.

Geo-targeting comes down to two settings most accounts get wrong. The presence target ('people in your targeted locations') is what most ecom shops want; the interest target ('people interested in your targeted locations') leaks spend on out-of-market traffic. Verify the setting on every campaign once a quarter, and use geo-holdouts to measure incrementality on top of it.

Ad scheduling (day-parting)

Restricting ad delivery to specific days of the week or hours of the day.

Day-parting is mostly redundant under Smart Bidding because the algorithm already learns time-of-day conversion rates. The exception is when you have a non-conversion constraint, like a lean support team that can't service queries overnight, where pausing during off-hours protects the customer experience even if it slightly hurts conversion volume.

Optimised Targeting (audience expansion)

Google's setting that lets the algorithm expand beyond your audience signal to find similar converters.

Optimised Targeting is on by default in PMax and Demand Gen, and turning it off rarely helps because it removes the algorithm's main lever for finding net-new audiences. The accounts where it does cause problems are usually the ones with a weak audience signal (no customer match, generic in-market segment) where the expansion lands on irrelevant pools. Fix the signal, then leave optimisation on.

Tracking · 15 terms
Enhanced Conversions

Google's first-party data layer that hashes user identifiers and sends them with conversions for better attribution.

Enhanced Conversions recovers 5-15% of attribution lost to iOS, ad blockers, and consent banners. It's free, takes a few hours to wire properly, and most accounts haven't done it. The configuration trap is sending the wrong field as the user identifier: setups that get this wrong break silently and report nothing.
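
Google expects identifiers normalised (trimmed, lowercased) and SHA-256 hashed to hex before upload. A minimal sketch for email; phone numbers carry extra formatting rules covered in Google's docs:

```python
import hashlib

def normalize_and_hash(email: str) -> str:
    """Trim whitespace, lowercase, then SHA-256 to a hex digest:
    the shape Enhanced Conversions expects for an email identifier."""
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# The silent-failure trap: hashing the wrong field (an internal user ID
# instead of the email) still produces a valid-looking 64-char digest,
# but the platform can never match it to a signed-in user.
print(normalize_and_hash("  Jane.Doe@Example.com "))
```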

Server-side tagging

Tracking architecture where conversion events fire from your server (via GTM Server-side or equivalent), not the browser.

Server-side tagging is what survived iOS 14, ad blockers, and consent walls. Browser-side tracking has been broken for years; the dashboards reporting numbers from it are reporting fiction. The transition is a one-time engineering job, not an ongoing cost. The accounts running cleanly on server-side in 2026 are the accounts where Smart Bidding actually learns.

Conversions API

Google, Meta, and TikTok's server-to-server conversion endpoints, the alternative to browser-side pixel tracking.

Conversions API (CAPI for Meta, GCL Match for Google) is how you send tracking data the platform can actually trust in 2026. Setup quality is the variable: a CAPI integration that fires events with the wrong event_id produces duplicate conversions; one with no user identifiers produces noise. EMQ scores are the diagnostic.
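
The event_id duplicate problem above comes down to deduplication keys. A sketch of the platform-side behaviour, reduced to a dict; event names and fields are illustrative:

```python
def dedupe(events):
    """Keep the first event per (event_name, event_id) pair, the way
    platforms collapse a browser-pixel event and its server-side twin."""
    seen = {}
    for e in events:
        key = (e["event_name"], e["event_id"])
        seen.setdefault(key, e)
    return list(seen.values())

events = [
    {"event_name": "Purchase", "event_id": "ord-1001", "source": "browser"},
    {"event_name": "Purchase", "event_id": "ord-1001", "source": "server"},  # same order: deduped
    {"event_name": "Purchase", "event_id": "ord-1002", "source": "server"},
]
print(len(dedupe(events)))  # 2 conversions, not 3
```

Fire the server event with a different event_id and the platform counts the same order twice.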

GA4

Google's analytics product, replacement for Universal Analytics, event-based by default.

GA4 is the analytics layer Smart Bidding learns against by default. The mistake most ecom shops still make in 2026 is using GA4 as their attribution source of truth despite it under-counting purchases by 15-30% versus Shopify. GA4 is for behaviour; Shopify is for revenue. Don't optimise spend against the wrong number.

GTM (Google Tag Manager)

Tag deployment system that loads tracking pixels and event scripts without touching site code.

GTM is fine for the basic load but ecom accounts at scale should run GTM Server-side, not just web GTM. Server-side GTM gives you control over which pixels fire, what data they send, and how you transform events before they hit the platform. It's a one-time configuration that fixes most data-quality problems forever.

EMQ (Event Match Quality)

Meta's score (1-10) for how well conversion events match to a user identity; other platforms expose similar match-quality diagnostics.

EMQ is the diagnostic for whether your CAPI / Conversions API setup is actually working. An EMQ above 8 means the platform can match your conversions to ad clicks reliably. Below 6, you're losing attribution. Most ecom accounts running CAPI think they're set; the EMQ number tells the truth.

Conversion actions

The events you've defined in Google Ads as a conversion: purchase, lead, add-to-cart, etc.

The single most-common tracking misconfiguration we see is too many conversion actions counted as primary, which causes Smart Bidding to optimise against the wrong target. A clean ecom setup has Purchase as the only primary action, with view-content / add-to-cart / begin-checkout configured as secondary for diagnostic only. Audit which actions are flagged primary in every account on intake.

Conversion windows

How long after a click or view Google attributes a conversion to the ad: typically 30 days click, 1 day view.

Conversion windows are the underrated tracking lever. A 90-day click window inflates ROAS on slow-consideration products; a 7-day click window deflates it on fast-consideration ones. The right window matches the actual purchase consideration cycle: surface it from your GA4 path-length data, then set the window to match the modal lag.
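
Finding the modal lag is a one-liner once you have the time-lag export. A sketch with made-up lag data; in practice the list comes from GA4's path-length / time-lag report:

```python
from collections import Counter

def modal_lag_days(lags):
    """Most common click-to-purchase lag, in days."""
    return Counter(lags).most_common(1)[0][0]

# Illustrative click-to-purchase lags for one product, in days:
observed_lags = [0, 1, 1, 2, 2, 2, 3, 5, 8, 21]
print(modal_lag_days(observed_lags))  # 2 -> a 7-day click window covers the modal buyer
```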

View-through conversions

Conversions credited to an ad the user saw but did not click, recorded within the view-through window.

View-through conversions are most relevant on YouTube and Display where the click rate is structurally low. They inflate the apparent value of those campaigns, so the discipline is to look at view-through CPA separately from click-through CPA and treat the former as a secondary metric. A 7-day VTC window is the default; pulling that down to 1 day is the easiest sanity check on YouTube performance.

Attribution models

Rules for assigning conversion credit across multiple touchpoints: data-driven, last-click, first-click, linear, time-decay, position-based.

Data-driven attribution is the Google default and the right pick for accounts with sufficient conversion volume because it learns the actual touchpoint weights. Below the volume threshold, DDA falls back to last-click. The model only matters when your account has multiple Google touchpoints; for single-touch accounts the model selection is academic.

Offline conversion import (OCI)

Uploading conversions that happened off-platform (CRM, sales calls, store visits) so Smart Bidding can optimise against them.

OCI is essential for any ecom brand where the real conversion happens after a website event: returns-and-refunds-adjusted revenue, repeat purchases, lifetime value tiers. Smart Bidding then optimises against the better target rather than the noisy on-platform proxy. The configuration involves capturing the GCLID at the website level and then sending it back when the offline event fires.

GCLID

Google Click ID. The unique parameter Google appends to the URL of every paid click so it can match conversions back.

GCLID is what makes auto-tagging work. If your site or your CRM strips query parameters during checkout (some Shopify apps and most older landing-page builders do), the GCLID is lost and offline conversion import breaks silently. Every tracking audit should sanity-check that GCLID round-trips from the ad click to the order confirmation.
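
The round-trip sanity check above can be automated. A minimal sketch; the URLs are hypothetical:

```python
from urllib.parse import urlparse, parse_qs

def has_gclid(url: str) -> bool:
    """True if the Google Click ID survived in the URL's query string."""
    return "gclid" in parse_qs(urlparse(url).query)

landing = "https://shop.example.com/product?gclid=Cj0KCQiA_example"
confirmation = "https://shop.example.com/thank-you"  # parameter stripped en route

print(has_gclid(landing))       # True
print(has_gclid(confirmation))  # False -> offline import breaks silently
```

In a real audit, the GCLID captured on the landing page should be persisted (cookie or hidden field) and re-surfaced on the order record, not expected to survive in the URL itself.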

UTM parameters

URL query parameters (utm_source, utm_medium, utm_campaign, etc.) used to track campaign performance in GA4.

UTMs are the analytics-side counterpart to GCLIDs. Most ecom accounts have inconsistent UTMs across paid channels, which makes blended reporting useless because the same campaign shows up under three different source/medium combinations in GA4. Lock a UTM convention once at the agency level and enforce it on every paid channel.
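
Locking a convention is easiest when a tagging helper is the only way URLs get built. A sketch; the lowercase-and-underscores convention is one possible house style, not a GA4 requirement:

```python
from urllib.parse import urlencode

def tag_url(base_url, source, medium, campaign):
    """Build a consistently tagged URL so the same campaign never shows
    up under three different source/medium combinations in GA4."""
    params = {
        "utm_source": source.lower(),
        "utm_medium": medium.lower(),
        "utm_campaign": campaign.lower().replace(" ", "_"),
    }
    return f"{base_url}?{urlencode(params)}"

print(tag_url("https://shop.example.com", "Google", "CPC", "Spring Sale"))
# -> https://shop.example.com?utm_source=google&utm_medium=cpc&utm_campaign=spring_sale
```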

Pixel events

The named conversion events fired from a tracking pixel: PageView, ViewContent, AddToCart, InitiateCheckout, Purchase, etc.

Pixel events are the standard event taxonomy across Meta, Google, TikTok, and most other platforms. Mismatched event firing across pixels is one of the cheapest tracking fixes available: send the same event name with the same value on the same trigger and you get parity across all your bidding algorithms. Any deviation is a leak.

Feed & merchant · 17 terms
GMC (Google Merchant Center)

The product-feed hub Google uses to populate Shopping ads, Free Listings, and Performance Max.

GMC is where Shopping performance is decided. Item disapprovals, account suspensions, store-quality flags, and feed validation all happen here, and most of them go un-fixed for weeks because nobody is checking the GMC dashboard. The accounts that win in Shopping treat GMC monitoring as a daily job, not a quarterly cleanup.

GMC Suspension

Google's account-level penalty that stops all Shopping and PMax product traffic until the suspension is lifted.

GMC suspensions kill ecom revenue overnight and most agencies don't have a recovery playbook. The two most common causes are misrepresentation (price/availability mismatch between feed and PDP) and policy violations (restricted product category, missing pages). Recovery is feed surgery plus a written reconsideration request to a real Google reviewer, not a button click.

Item Disapproval

GMC flag on individual products that prevents them showing in Shopping ads, while the rest of the feed runs.

Item disapprovals look minor but compound: one disapproved product on a hero SKU is a revenue leak you never see in the campaign dashboard. Feed rules and custom labels can prevent them at submission time. The audit-ready accounts have a daily disapproval-count check that pages the buyer when the number spikes.

Feed columns

The 70+ structured fields (title, description, custom label, price, GTIN, etc.) that describe each product to Google.

Feed columns are the highest-leverage work in Shopping and most agencies don't do them. Title structure alone (brand → product type → key attribute → variant) is worth 10-30% of ROAS in most accounts. Custom labels turn into PMax asset-group splits. GTIN compliance keeps you out of restricted-category bans.
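
The title structure described above is mechanical enough to enforce in code. A sketch; the SKU values are invented, and the 150-character cap is GMC's documented title limit:

```python
def build_title(brand, product_type, attribute, variant, max_len=150):
    """Assemble a Shopping title in brand -> product type -> key
    attribute -> variant order. Truncating at the end keeps the
    highest-signal tokens first."""
    parts = (brand, product_type, attribute, variant)
    title = " ".join(p for p in parts if p)  # skip empty fields
    return title[:max_len]

print(build_title("Acme", "Running Shoes", "Carbon Plate", "Men's 10 Blue"))
# -> Acme Running Shoes Carbon Plate Men's 10 Blue
```

Wired into a feed rule or supplemental feed, the same ordering applies across the whole catalog instead of SKU by SKU.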

Custom Labels

Five user-defined feed columns (custom_label_0 through custom_label_4) used to group products inside PMax and Shopping.

Custom labels are the bridge between the feed and PMax structure. Tagged correctly (margin tier, seasonality, hero SKU flag, new arrival) they let you split asset groups and protect high-margin SKUs from PMax's auto-allocation. Tagged lazy or not at all, you're running PMax with one big asset group and praying.
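
Tagging a margin tier into custom_label_0 is a few lines against the catalog export. A sketch; the tier cut-offs are illustrative and should be set against your own P&L:

```python
def margin_tier(price, cogs):
    """Bucket a SKU into a custom_label_0 value by gross margin rate."""
    margin = (price - cogs) / price
    if margin >= 0.60:
        return "margin_high"
    if margin >= 0.35:
        return "margin_mid"
    return "margin_low"

# Hypothetical feed row, as it would go into a supplemental feed:
feed_row = {"id": "SKU-1", "price": 80.0, "cogs": 24.0}
feed_row["custom_label_0"] = margin_tier(feed_row["price"], feed_row["cogs"])
print(feed_row["custom_label_0"])  # margin_high (70% gross margin)
```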

Feed Rules

GMC's transformation engine that lets you modify feed values at submission without touching your Shopify export.

Feed rules let you patch a broken feed without engineering work. The pattern most agencies miss: build feed rules to enforce a title structure, fill missing custom labels from product type, exclude restricted SKUs by GTIN range. It's the difference between a feed audit and a feed fix.

Primary feed

The main product feed in Google Merchant Center that GMC reads first to populate your catalog.

Primary feed is the source of truth for the catalog: title, description, image, price, availability, GTIN, brand, product type. Almost every ecom account has exactly one primary feed (their Shopify product feed) and most of the leverage is in cleaning that feed up before reaching for supplemental feeds.

Supplemental feed

An additional feed that overrides or augments specific columns of the primary feed without replacing it.

Supplemental feeds are the surgical tool for fixing GMC issues without forking your Shopify feed. The most common use case is custom_label_0..4 (margin tier, performance tier, seasonality), title overrides for specific SKUs, and fixing GTIN errors on imported brands. Layer them rather than rewriting the primary.

GTIN

Global Trade Item Number. The barcode-style identifier Google uses to match your products to its catalog.

Missing or invalid GTINs are one of the top three causes of GMC item disapprovals on third-party-brand resellers. Google strongly prefers feeds with GTIN populated because it can then verify the product against its own catalog. For private-label brands without GTINs, the identifier_exists attribute set to FALSE is the correct workaround.

MPN

Manufacturer Part Number. A secondary product identifier Google accepts when GTIN is unavailable.

MPN is the fallback when a product has no GTIN, in combination with the brand attribute. Together they let GMC match the product to its catalog. Private-label ecom brands without GTINs can usually skip MPN too if they set identifier_exists to FALSE, but for resellers MPN is essential.

Brand attribute

The required feed field naming the manufacturer or designer of the product.

The brand attribute is non-negotiable for GMC submission and Google uses it for matching, search filtering, and brand-restriction logic. Misspelling brand names is a quiet leak across 1000-SKU feeds. Run a sanity diff against the canonical brand list quarterly.

Product types

Your own taxonomy of product categories, declared in the product_type feed column.

Product types are the most-skipped feed lever. They're what asset groups inside PMax can filter on cleanly: 'Apparel > Tops > T-shirts' lets you build an asset group for tops only. Most accounts leave product_type blank or duplicate the Google product category, and miss the routing leverage.

Google product category

Google's standardised product taxonomy, around 6000 categories deep. Used for matching and policy enforcement.

Google product category is set by Google's matching when you submit a product, but you can override it. Mis-categorisation triggers a wide range of policy issues (apparel categorised as healthcare, supplements categorised as cosmetics) and the wrong category sometimes restricts the product from auction eligibility entirely. Audit the assigned category against your product type for every category change.

Availability

The feed attribute declaring whether a product is in_stock, out_of_stock, preorder, or backorder.

Availability drift is one of the silent killers of ecom Shopping. Out-of-stock products that still show as in_stock in the feed get clicks that never convert and quietly burn budget. The fix is a feed connector that updates availability in near-real-time from the Shopify inventory state, not the once-a-day Shopify-default refresh.

Promotions feed

A separate GMC feed for promotional offers (percentage off, free shipping, BOGO) that show as ad annotations.

Promotions feeds add the discount badge to your Shopping ads and almost always lift CTR by a meaningful margin during a promotion window. The setup is a one-time GMC config plus a feed CSV that updates with each promo. Most brands launch a Black Friday promo and forget to wire the promotions feed; that's the easiest CTR lift available during the highest-traffic week of the year.

Product ratings

The 1-5 star rating shown next to your Shopping ad, sourced from approved review providers.

Product ratings need a minimum of 50 reviews per product across approved providers (Yotpo, Reviews.io, Trustpilot, etc.) before GMC will surface them. They lift CTR significantly on Shopping. The setup is a one-time provider integration; the maintenance is making sure the review feed continues to flow.

Shipping settings

GMC-level shipping costs, delivery times, and free-shipping thresholds shown in your Shopping ads.

Shipping settings are configured in GMC, not the feed, but they show as text annotations on the Shopping result. Inaccurate shipping cost is a top cause of item disapprovals and click-to-conversion drop-off. Audit shipping settings every time the rate card changes at the Shopify level.

Measurement

17 terms
R

ROAS (Return on Ad Spend)

#

Revenue attributed to ad spend divided by the spend itself. Reported per-channel.

ROAS is the metric every agency leads with and every founder runs their business on, except they shouldn't. ROAS doesn't account for product margin, refunds, return rate, or new-vs-repeat customer mix. A 5x ROAS on a 25% margin product clears only $0.25 of gross profit per ad dollar, and refunds, returns, and overhead routinely erase that. POAS is the correct version of this metric for any operator who looks at the P&L.
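The margin arithmetic is worth making explicit. A minimal sketch, with illustrative numbers rather than real account data:

```python
def breakeven_roas(gross_margin: float) -> float:
    """The ROAS at which ad spend exactly consumes gross profit."""
    return 1.0 / gross_margin

def profit_per_ad_dollar(roas: float, gross_margin: float) -> float:
    """Gross profit generated per $1 of spend, before refunds and returns."""
    return roas * gross_margin - 1.0

# A 25% margin product needs 4x ROAS just to break even on gross profit...
print(breakeven_roas(0.25))             # 4.0
# ...so a "healthy" 5x ROAS clears only $0.25 per ad dollar, which
# refunds, returns, and overhead can erase entirely.
print(profit_per_ad_dollar(5.0, 0.25))  # 0.25
```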

P

POAS (Profit on Ad Spend)

#

Profit (after COGS, refunds, returns) attributed to ad spend, divided by the spend itself.

POAS is what your business actually runs on. ROAS is what the dashboard reports. Ad-Lab measures and optimises against POAS, accounting for product margin, refunds, return rate, and net-new vs repeat customer mix, and reports ROAS alongside for context only. Agencies that don't talk about POAS in 2026 are agencies that haven't read your P&L.
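A minimal POAS calculation, with hypothetical figures, shows how a flattering ROAS shrinks once COGS, refunds, and returns come out:

```python
def poas(revenue: float, cogs: float, refunds: float, returns_cost: float,
         ad_spend: float) -> float:
    """Profit on Ad Spend: contribution profit per dollar of ads."""
    return (revenue - cogs - refunds - returns_cost) / ad_spend

# Hypothetical month: the dashboard reports 5x ROAS...
revenue, spend = 50_000.0, 10_000.0
print(revenue / spend)  # ROAS: 5.0
# ...but the P&L sees 1.5x POAS after COGS, refunds, and returns.
print(poas(revenue, cogs=30_000, refunds=2_500, returns_cost=2_500,
           ad_spend=spend))  # POAS: 1.5
```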

M

MER (Marketing Efficiency Ratio)

#

Total revenue divided by total marketing spend across all channels. The blended view of efficiency.

MER is the metric that answers the question 'is the marketing function as a whole making money?' Channel ROAS adds up to a lie when channels overlap (Meta drives Google branded search, Google drives email, etc.). MER cuts through that. The accounts running clean MER targets in 2026 are the accounts where the founder and the agency look at the same number on the same day.
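The over-counting is easy to see with hypothetical numbers, where per-channel attributed revenue sums to more than the store actually banked:

```python
def mer(total_revenue: float, total_marketing_spend: float) -> float:
    """Marketing Efficiency Ratio: all revenue over all marketing spend."""
    return total_revenue / total_marketing_spend

# Each channel dashboard claims credit for overlapping conversions...
channel_attributed = {"google": 120_000, "meta": 90_000, "email": 40_000}
channel_spend = {"google": 30_000, "meta": 25_000, "email": 5_000}
store_revenue = 180_000  # ...but this is all the store actually banked.

print(sum(channel_attributed.values()))  # 250000 attributed vs 180000 real
print(mer(store_revenue, sum(channel_spend.values())))  # 3.0
```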

B

Blended ROAS

#

Total store revenue divided by total ad spend across channels. Same-day approximation of MER.

Blended ROAS is the daily-readable cousin of MER. It's what you check on the way to the standup. The trap: it can mask channel-level decay because branded search bleed and repeat-customer revenue inflate it. Watch the trend, not the absolute number.

A

AOV (Average Order Value)

#

Total revenue divided by total orders. The lever between traffic volume and revenue.

AOV is the cheapest lever in ecom and the one most agencies forget exists. Bundles, threshold free shipping, and post-purchase upsells push AOV 10-25% in 30 days, which lets you absorb a higher CPA without hurting POAS. We track AOV alongside ROAS in every weekly readout because spend changes the mix and the mix changes AOV.
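The CPA-headroom effect can be sketched directly: at a fixed POAS target, the maximum affordable CPA scales linearly with AOV (margin rate and target below are illustrative, not a benchmark):

```python
def max_cpa(aov: float, contribution_margin_rate: float,
            target_poas: float) -> float:
    """Highest CPA that still hits the POAS target on a single order."""
    return aov * contribution_margin_rate / target_poas

# A 15% AOV lift ($80 -> $92) buys 15% more CPA headroom at the same POAS.
print(max_cpa(aov=80.0, contribution_margin_rate=0.40, target_poas=1.5))  # ~21.33
print(max_cpa(aov=92.0, contribution_margin_rate=0.40, target_poas=1.5))  # ~24.53
```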

C

CPA (Cost Per Acquisition)

#

Ad spend divided by conversions. The price of buying one new customer or sale.

CPA is useful for setting bid floors but not for evaluating campaign health in isolation. A campaign with a $40 CPA and $200 AOV is profitable; one with $15 CPA and $30 AOV is not. Always read CPA next to AOV and gross margin, never alone.

CAC (Customer Acquisition Cost)

#

Total marketing spend divided by new customers acquired. The blended cost of one new buyer.

CAC is CPA's blended cousin. It's the right metric for evaluating whether the marketing function is sustainable, especially when LTV is plotted against it. The 3:1 LTV:CAC rule is the industry shortcut; the actual answer depends on your repurchase rate, margin, and burn rate.

L

LTV (Lifetime Value)

#

Total profit a customer generates across their relationship with your brand, after COGS.

LTV is the upstream number that tells you how much CAC you can afford. The mistake most ecom shops make is calculating LTV on revenue not profit, which inflates the number 2-4x. Use 12-month or 24-month LTV with refund and return rate baked in. The realistic number is what determines whether you can scale Google + YouTube spend without going underwater.
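The revenue-vs-profit inflation is simple to demonstrate; the inputs below are hypothetical, chosen only to show the gap:

```python
def ltv_profit(aov: float, contribution_margin_rate: float,
               orders_12mo: float, refund_rate: float) -> float:
    """12-month LTV on profit, with refunds baked in."""
    per_order = aov * contribution_margin_rate * (1 - refund_rate)
    return per_order * orders_12mo

def ltv_revenue(aov: float, orders_12mo: float) -> float:
    """The inflated version: revenue-based LTV."""
    return aov * orders_12mo

# Same customer, two very different numbers (~3x apart here).
print(ltv_profit(aov=80.0, contribution_margin_rate=0.35,
                 orders_12mo=2.5, refund_rate=0.05))  # ~66.5
print(ltv_revenue(aov=80.0, orders_12mo=2.5))         # 200.0
```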

B

Branded Search Bleed

#

When a high percentage of reported Google Ads revenue comes from people searching your brand name (who would have bought anyway).

Branded search bleed is the single most common pattern across the $150M of accounts we audit. If 80% of your 'Google Ads revenue' is your own brand name, you don't have a Google Ads program, you have a brand-search reporter. Splitting branded vs non-brand campaigns surfaces the real ROAS on incremental traffic. Most accounts hate the truth this number tells.

I

Incrementality

#

The portion of attributed revenue that would not have happened without the ad spend.

Incrementality is the only attribution metric that survives every cookie change, attribution model swap, and iOS update. The cleanest measurement is geo-holdout testing: turn off ads in a matched market for 4-6 weeks and measure the revenue delta. Most agencies don't do incrementality testing because the answer is uncomfortable.

N

nCAC (new-customer CAC)

#

Customer acquisition cost calculated only against newly-acquired customers, ignoring repeat buyers.

nCAC is the metric most DTC brands should target instead of blended CAC because repeat customers don't need the same level of paid acquisition support. A blended CAC of $40 might hide a new-customer CAC of $90 once you strip out the repeat purchases. Set the bid target on nCAC, not blended.
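The blended-vs-new split from that example, as a sketch with made-up order volumes:

```python
def blended_cac(spend: float, total_customers: int) -> float:
    """Spend over every converting customer, repeat buyers included."""
    return spend / total_customers

def ncac(spend: float, new_customers: int) -> float:
    """Spend over newly-acquired customers only."""
    return spend / new_customers

spend = 36_000.0
total_customers, new_customers = 900, 400  # 500 orders were repeat buyers

print(blended_cac(spend, total_customers))  # 40.0 -- looks sustainable...
print(ncac(spend, new_customers))           # 90.0 -- ...the real acquisition cost
```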

R

Repeat purchase rate (RPR)

#

Percentage of customers who place a second order within a defined window.

RPR is the engine of LTV. A brand with a 25% 90-day RPR can sustain a much higher acquisition CAC than a brand with a 10% RPR because each acquired customer drives more downstream revenue. The right paid-spend posture follows from RPR: high-RPR brands lean into volume, low-RPR brands lean into margin.

N

New customer ratio

#

Percentage of orders in a window that come from first-time customers vs. repeat buyers.

New customer ratio is the diagnostic that tells you whether your paid budget is actually doing acquisition or just subsidising repeat customers who would have come back anyway. A ratio dropping from 60% to 40% over a quarter usually means the campaigns are over-targeting customer-match and remarketing pools. Push the ratio back up by widening the audience signal and excluding existing customers from acquisition campaigns.

C

Contribution margin

#

Revenue minus the variable costs of producing and delivering each unit. The margin available to spend on acquisition.

Contribution margin is the right denominator for POAS, not gross margin. It strips out the costs that scale with each order (COGS, payment processing, shipping, refunds, returns) and leaves the margin actually available to spend on customer acquisition. Most brands optimise on gross margin and quietly lose money on the orders they're celebrating.
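Per order, the calculation is just subtraction, but which lines you subtract is the whole point. A sketch with a hypothetical $100 order:

```python
def contribution_margin(revenue: float, cogs: float, payment_fees: float,
                        shipping: float, refund_allowance: float) -> float:
    """Margin left per order to fund acquisition, after variable costs."""
    return revenue - cogs - payment_fees - shipping - refund_allowance

# Gross margin says $55 of this order is available; contribution says $40.
print(contribution_margin(revenue=100.0, cogs=45.0, payment_fees=3.0,
                          shipping=8.0, refund_allowance=4.0))  # 40.0
```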

P

Payback period (CAC payback)

#

How many months it takes for a customer's contribution margin to recover the CAC spent to acquire them.

Payback period is the financial constraint that gates how aggressive you can run acquisition. A 3-month payback brand can scale paid spend faster than a 12-month payback brand because cash recycles faster. The right way to set the POAS target on a campaign is to start from the maximum payback you can finance, then back-solve.
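The back-solve the entry describes can be sketched as two steps; the figures below are illustrative, not targets:

```python
def max_cac(max_payback_months: float, monthly_contribution: float) -> float:
    """Highest CAC the payback ceiling lets you finance."""
    return max_payback_months * monthly_contribution

def first_order_poas_target(first_order_contribution: float,
                            allowed_cac: float) -> float:
    """The first-order POAS implied by the CAC ceiling."""
    return first_order_contribution / allowed_cac

cac_ceiling = max_cac(max_payback_months=3, monthly_contribution=25.0)  # 75.0
# ~0.53: you can spend past first-order breakeven because months 2-3 repay it.
print(first_order_poas_target(first_order_contribution=40.0,
                              allowed_cac=cac_ceiling))
```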

C

Cohort analysis

#

Grouping customers by their acquisition month and tracking each cohort's revenue, RPR, and LTV over time.

Cohort analysis is what turns a noisy month-over-month revenue chart into actionable signal. A cohort acquired during a deep discount month usually has a worse 90-day RPR than a full-price cohort, which means the discount-driven CAC was actually higher than it looked at acquisition. Most ecom brands don't run cohorts past the 30-day mark; the leverage is in the 90 and 180-day reads.
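A minimal cohort grouping needs nothing more than the order log. This sketch (fabricated orders, standard library only) keys each customer to their first-order month:

```python
from collections import defaultdict
from datetime import date

# Hypothetical order log: (customer_id, order_date, contribution_margin).
orders = [
    ("c1", date(2026, 1, 5), 30.0), ("c1", date(2026, 3, 2), 28.0),
    ("c2", date(2026, 1, 20), 25.0),
    ("c3", date(2026, 2, 14), 40.0), ("c3", date(2026, 2, 28), 35.0),
]

# Cohort = the year-month of each customer's first order.
first_order: dict[str, tuple[int, int]] = {}
for cid, d, _ in sorted(orders, key=lambda o: o[1]):
    first_order.setdefault(cid, (d.year, d.month))

cohort_margin = defaultdict(float)
cohort_repeaters = defaultdict(set)
for cid, d, margin in orders:
    cohort = first_order[cid]
    cohort_margin[cohort] += margin
    if (d.year, d.month) != cohort:  # crude: misses same-month repeats
        cohort_repeaters[cohort].add(cid)

print(dict(cohort_margin))     # Jan cohort: 83.0, Feb cohort: 75.0
print(dict(cohort_repeaters))  # Jan cohort repeat buyers: {'c1'}
```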

G

Geo-holdout test

#

Pausing ads in a matched geographic market and comparing revenue against an active market to measure incrementality.

Geo-holdouts are the cleanest incrementality test ecom brands have access to. The test design pairs two similar markets, runs ads in one and pauses in the other for 4-6 weeks, and reads the revenue delta. Done right, it's the answer to 'what would happen if we cut Google spend?' that no attribution model can give you.
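The readout itself is one subtraction, plus a scaling factor for any baseline size gap between the two markets (all figures below are hypothetical):

```python
def incremental_lift(test_revenue: float, holdout_revenue: float,
                     holdout_baseline_ratio: float = 1.0) -> float:
    """Revenue the ads-on market earned above the holdout's implied baseline.

    holdout_baseline_ratio: the holdout's historical revenue as a fraction
    of the test market's (e.g. 0.9 if it normally runs 10% smaller).
    """
    implied_baseline = holdout_revenue / holdout_baseline_ratio
    return test_revenue - implied_baseline

def incremental_roas(lift: float, spend: float) -> float:
    """ROAS on only the revenue the spend actually caused."""
    return lift / spend

lift = incremental_lift(120_000, 90_000, holdout_baseline_ratio=0.9)  # ~20000
print(incremental_roas(lift, spend=15_000))  # ~1.33 true ROAS on that spend
```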

Creative

12 terms
H

Hook

#

The first 1-3 seconds of a video ad, the part that earns or loses the next 30 seconds of attention.

Hooks are the variable that controls whether your video ad gets watched or skipped. The Ad-Lab system runs three hooks per script in a batch (eighteen ad units from six scripts). Most agencies write one hook per script and wonder why the cost-per-view is high. Hook variety is the cheapest creative lever in YouTube and Demand Gen.

V

VSL (Video Sales Letter)

#

Long-form video ad (typically 2-15 minutes) structured as a sales argument with a defined offer at the end.

VSLs are the video format that converts hardest in ecom for considered-purchase categories. The mistake most agencies make is treating VSLs as 'long YouTube ads.' VSLs are landing pages with a video at the top, optimised on per-second drop-off, with the offer copy locked in advance. The whole format is a conversion engine, not a content piece.

A

Advertorial

#

Editorial-style landing page that reads like an article, sells like an ad.

Advertorials are the highest-converting cold-traffic landing format for most ecom verticals in 2026. The format works because it reads as information first, sales second. The execution trap: skipping the editorial structure and writing a sales page with a magazine font. Real advertorials are journalism on the buyer's problem, with the brand as the answer in act three.

L

Listicle

#

Numbered-list landing page ("7 things to look for in X") used as a top-of-funnel ad destination.

Listicles work because they answer a question the buyer is already asking and rank the buyer's options. The Ad-Lab listicle format positions the brand as one of several options in act one, then earns position one through specific evidence in act three. Done right it converts cold traffic at 2-3x the rate of a generic PDP.

Landing pages

#

Purpose-built post-click pages that match the ad's message, in formats like advertorial, listicle, VSL, sales page, comparison page, and keyword-theme page.

Landing pages are the difference between paid traffic that converts and paid traffic that bounces back to Google. Most ecom shops route paid clicks to their generic PDP; we build five to ten purpose-built pages per client per month, format-matched to the ad and the audience signal. The traffic does not hit your homepage.

S

Sales page

#

A long-form, single-product landing page structured around a sales argument: problem, solution, proof, offer.

Sales pages are the right format for high-consideration ecom products (supplements, beauty, home goods) where the ad click happens before the buyer has decided to buy. The structure ties the ad's hook to a fuller version of the same argument and routes to the cart only after the proof and offer sections. Generic PDPs convert worse on this traffic by 30-60%.

C

Comparison page

#

A landing page that compares your product to one or more named competitors on specific attributes.

Comparison pages catch consideration-stage traffic mid-search and convert better than sales pages on those clicks because they answer the question the buyer typed. The honest version (where you concede the categories the competitor wins) outperforms the marketing version because AI crawlers and buyers both reward truthful framing.

P

Proof stack

#

The sequence of credibility signals (testimonials, ratings, third-party validation, results) inside an ad or page.

The proof stack is the part of every script and every landing page that earns the buyer's belief that the claim is real. The order matters: third-party evidence (press, ratings) before first-party (testimonials, before-and-after) before the offer. Reversing the order kills conversion because the buyer doesn't trust the testimonial until they trust the source.

C

CTA placement

#

Where in a video script or landing page the call-to-action is delivered, front-loaded vs back-loaded.

Front-loaded CTA in the first five seconds of a YouTube ad works for high-intent traffic where the buyer is already searching. Back-loaded CTA after the proof stack works for top-of-funnel where the buyer needs the argument first. Most accounts use one approach across all formats and audiences and miss the lift from matching the placement to the intent.

D

DCO (dynamic creative optimisation)

#

Algorithmic recombination of creative elements (headlines, images, CTAs) to find the highest-performing combination.

DCO is built into PMax, Demand Gen, and most modern ad systems. The implication for the creative team is that you stop authoring finished ads and start authoring matrix-friendly inputs: four headlines, four descriptions, six images; the algorithm assembles them. The catch is that asset variety has to be real (different angles, different proof) for DCO to find lift; cosmetic variation does nothing.

U

UGC (user-generated content)

#

Creative produced by customers or contracted creators in a customer voice, rather than studio-polished brand assets.

UGC outperforms studio creative on most ecom YouTube and Demand Gen tests because the format reads native to the platform and the buyer trusts the source. The mistake most brands make is treating UGC as a one-off batch rather than a continuous production pipeline; a healthy creative ops cadence ships ten to twenty new UGC clips a month.

F

Format-matched creative

#

Creative shot or assembled specifically for a target placement (Shorts vertical, in-feed square, in-stream 16:9), not re-cropped after the fact.

Re-cropping a 16:9 hero shot to a 9:16 vertical loses the framing that made the hook work and the CTA usually clips out of frame. Format-matched creative is shot or composed for each format from the source, which doubles the production cost and roughly triples the conversion rate vs re-crops. The math works almost every time.

Missing a term?

If you searched for a Google Ads term and it's not here, that's a glossary bug we fix in 48 hours.

The list grows from real search-query reports across the $150M+ of accounts we manage. Tell us in the audit call which term you couldn't find and it lands in the next deploy.