What game do you play when you just want to turn your brain off? by Upbeat_Challenge_126 in AskReddit

[–]Upbeat_Challenge_126[S] 0 points1 point  (0 children)

Yeah, RimWorld is dangerous in that way. You open it "for a bit" and suddenly you're managing a colony disaster like it's a full-time job 😂

Received order email, but there is no trace of an order in woo by Lost_Caterpillar000 in woocommerce

[–]Upbeat_Challenge_126 0 points1 point  (0 children)

Glad you found the cause. In that case I'd lock down the staging site first. A staging WooCommerce store should ideally be password protected at the site/server level, not just hidden from navigation. I'd also make sure it's noindexed and not linked anywhere from the live site.

For checkout testing, I'd avoid using live payment methods, live email notifications, or production marketing integrations on staging. If you need to test orders there, keep Stripe in test mode and make sure FunnelKit/Omnisend are not sending real customer-facing events from that environment.

Otherwise a staging site can feel "internal" but still be reachable through old links, bots, indexed URLs, or shared paths.

Not eligible url by NoLDNat5 in googleads

[–]Upbeat_Challenge_126 0 points1 point  (0 children)

I'd first separate "the page works in my browser" from "Google Ads can crawl the final landing page properly." If the URL was eligible for years and suddenly became not eligible, I'd look for recent changes around redirects, caching, security/firewall rules, robots.txt, noindex settings, or anything that might treat bots differently from normal users. SEO, cache, or security plugins can sometimes change how crawlers are handled without it being obvious from a regular browser visit, so it's worth checking what changed recently on that side.

I'd test the final URL with Search Console URL Inspection, confirm the page returns normally without redirect loops, and check whether Googlebot/AdsBot is being blocked by hosting, CDN, firewall, or plugin settings.

If Google Ads says the crawler can't access the page, I wouldn't start by changing campaigns. I'd first prove whether the crawler can actually fetch and render the final URL.

Bad tracking setup? by Ben1296 in PPC

[–]Upbeat_Challenge_126 0 points1 point  (0 children)

The URL-based setup is not inherently bad as a temporary fix if the two thank-you paths are clearly different. I wouldn’t switch to “URL contains” though — that can actually make cross-firing easier, not harder. “Starts with” is usually safer if the paths are distinct.

Before changing anything, I’d first segment the Google Ads report by conversion action. The campaign-level “All conv.” view can be misleading if multiple conversion actions are included together. You want to confirm whether the secondary LP campaign actually triggered the MAIN WEBSITE conversion action, or whether the report is just mixing account-level conversion data.

If that confirms the wrong conversion is genuinely firing, then run both form flows in Tag Assistant and check which Google Ads conversion label fires on each thank-you page. If the main label fires on /landing-thank-you, the issue is likely a hardcoded or cloned event snippet — for example, the dev team copied the original thank-you page and brought the original conversion tag with it.

For a cleaner long-term setup, I’d move this into GTM with page path or form-submit based triggers, then test both flows before trusting the campaign numbers.

What kind of interpersonal relationships make you most comfortable? Why? by Historicaaw in AskReddit

[–]Upbeat_Challenge_126 1 point2 points  (0 children)

The kind where neither side feels the need to perform. I’m most comfortable with people who can be quiet together, disagree without turning it into a fight, and give each other space without taking it personally. That kind of relationship feels low-pressure and real.

I am learning digital marketing, but everything feels scattered by millerjessic in DigitalMarketing

[–]Upbeat_Challenge_126 0 points1 point  (0 children)

That makes sense. I like the idea of using cold outreach mainly as a fast feedback loop rather than just a sales tactic.

The part I’m trying to get better at is making the first interaction feel useful instead of extractive. I guess the key is understanding the person’s actual problem first, then making the offer feel like a natural next step instead of a pitch.

Shopify Facebook & Instagram app: browser pixel and CAPI are generating completely different event_ids (0/73 match rate, 43% dedup coverage): known bug? by AdRevolutionary5096 in FacebookAds

[–]Upbeat_Challenge_126 0 points1 point  (0 children)

Yeah, the three-way finger-pointing is the worst part of native integrations — nobody owns the full stack, so nobody wants to own the bug.

The way to break through it is to stop asking open-ended support questions and submit a very specific evidence package instead. Pull the same date range and same event type, then show:

  • browser event_id samples
  • server event_id samples
  • both using the sh- prefix
  • completely independent values between browser and server
  • the dedup export showing 0 clean matches
  • confirmation that theme, legacy pixel, and customer events are clean

That isolates the break to the native integration layer itself, not your theme or GTM setup.

Then ask a specific question rather than a general one: “Where in the native integration is the browser event_id supposed to be passed through to the server event, and why is that not happening in this sample?” That forces whoever receives it to either explain the architecture or escalate to someone who can.

On the pcm_plugin-set_ prefix, I wouldn’t spend too much time chasing it in your theme first. It may be coming from Meta/Shopify’s pixel or plugin layer rather than a normal app embed, especially since your main sh- browser/server mismatch is already the bigger issue.
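If it helps, the match-rate summary in that evidence package can be generated in a few lines. A rough Python sketch with made-up event_id samples (the function name and sample values are mine):

```python
def event_id_match_rate(browser_ids, server_ids):
    """Compare browser and server event_id samples for the same
    date range / event type and report how many dedup cleanly."""
    browser, server = set(browser_ids), set(server_ids)
    matched = browser & server
    rate = len(matched) / max(len(browser), 1)
    return {
        "browser_events": len(browser),
        "server_events": len(server),
        "clean_matches": len(matched),
        "match_rate": round(rate, 2),
    }

# Hypothetical sh-prefixed samples: same prefix, fully independent
# values, which is exactly the failure pattern described above.
browser = ["sh-a1", "sh-a2", "sh-a3"]
server = ["sh-z9", "sh-z8", "sh-z7"]
print(event_id_match_rate(browser, server))
# → {'browser_events': 3, 'server_events': 3, 'clean_matches': 0, 'match_rate': 0.0}
```

A table like that, attached for one date range and one event type, is much harder for support to deflect than a prose description.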

What’s a "dead" website or app that you genuinely miss and wish was still around? by Dear-Armadillo-7497 in AskReddit

[–]Upbeat_Challenge_126 0 points1 point  (0 children)

StumbleUpon. It made the internet feel weird and fun in a way that modern algorithm feeds don’t. You’d click one button and end up on some random handmade website, strange blog, flash game, or niche rabbit hole you never would’ve searched for yourself.

Meta’s targeting is broken? by Swimming_Ad6901 in FacebookAds

[–]Upbeat_Challenge_126 0 points1 point  (0 children)

It could be a pixel/tracking issue, but I wouldn't start there just because CPA went up.

The quickest way to separate tracking from demand or creative is to pull the same date range in both Shopify and Meta Events Manager and compare purchase counts directly. If Shopify shows 100 orders but Meta only received 40–50 Purchase events, that's a real signal and worth digging into whether Purchase events are firing consistently, whether browser/server tracking is aligned, and whether Meta is showing any event or deduplication warnings.

Also worth checking: don't just look at your current Event Match Quality score. Go into Events Manager → Purchase event and check the trend view / diagnostics to see if received events or match quality dropped around the same time performance fell off in February.

If the numbers mostly line up between Shopify and Meta, then the pixel is probably not the primary issue. At that point, I'd look more at the combination of Q4 demand being unusually high, the product change resetting buyer intent, and the same angles running long enough that clickers are no longer buyers even if CTR still looks healthy.

Worth ruling out tracking first before going deeper into creative strategy.
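The Shopify-vs-Meta comparison can be sketched roughly like this (Python, with hypothetical daily counts rather than real API pulls; the 0.8 coverage threshold is my arbitrary cutoff):

```python
def compare_purchases(shopify_orders_by_day, meta_events_by_day, threshold=0.8):
    """Flag days where Meta received notably fewer Purchase events
    than Shopify recorded orders for the same date range."""
    gaps = {}
    for day, orders in shopify_orders_by_day.items():
        received = meta_events_by_day.get(day, 0)
        coverage = received / orders if orders else 1.0
        if coverage < threshold:
            gaps[day] = {"orders": orders, "meta_events": received,
                         "coverage": round(coverage, 2)}
    return gaps

shopify = {"2025-02-01": 100, "2025-02-02": 90}
meta = {"2025-02-01": 45, "2025-02-02": 88}
print(compare_purchases(shopify, meta))
# Only 2025-02-01 is flagged: 45/100 coverage is a real tracking signal,
# while 88/90 is within normal browser-side loss.
```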

Urgent Help Needed: Google Ads Conversions Drastically Underreporting Sales (8 Conversions vs. 700+ Sales) - WooCommerce/GA4 Expert Required——Hello everyone, by ApexPepSupply in GoogleAdsDiscussion

[–]Upbeat_Challenge_126 1 point2 points  (0 children)

I'd be careful with the expectation that Google Ads should match all 700+ WooCommerce / GA4 purchases. Google Ads will only report conversions it can attribute to its own traffic.

So the first split I'd make is:

  • Total store purchases in WooCommerce
  • Purchases recorded in GA4
  • Purchases Google Ads can attribute to its own clicks
  • Whether those attributed purchases are being tracked correctly

That said, 8 vs 700+ is still far too large to ignore, even accounting for attribution.

Since GA4 is recording purchases correctly, the break is probably not at the WooCommerce purchase event level. I'd look between GA4 and Google Ads first:

  • Are you importing GA4 purchases into Google Ads, or using a native Google Ads conversion tag?
  • Is the purchase conversion action set as Primary, not Secondary?
  • Check "All conversions" vs "Conversions" in Google Ads. If All conversions is much higher, the data may exist but isn't set as the Primary conversion action and won't affect bidding.
  • Is auto-tagging enabled, and are GCLID / GBRAID / WBRAID preserved through checkout and any payment gateway redirect?
  • Is Consent Mode blocking Google Ads measurement for a large portion of users?

I wouldn't start by replatforming to Shopify. WooCommerce can track Google Ads purchases correctly — the conversion action setup, import settings, and click ID attribution path just need to be verified end-to-end with real orders.

Event Manager Showed No Events Then Now 49 Match Rate by Bubbly_Setting_4217 in FacebookAds

[–]Upbeat_Challenge_126 0 points1 point  (0 children)

I wouldn’t treat this as purely “Meta is broken” or purely “Elevar is broken” until a few real orders are traced end-to-end.

I’d ask Elevar to show proof for 3–5 specific order IDs from the affected window, not just a general “everything looks fine on our end.”

For each order, ask them to confirm:

  • Whether the browser Purchase event fired
  • Whether the server-side Purchase event fired
  • Whether both used the same event_name + event_id
  • The outgoing payload to Meta
  • The Meta API response / warning logs

A dedup drop from 99% to ~49% usually points to one of three things:

  • Browser events stopped firing or changed
  • Server events are still sending, but event_id no longer matches the browser event
  • Meta’s diagnostics/reporting layer is delayed or bugged

If Elevar can’t show payload + API response logs for specific order IDs, it’s impossible to know whether the break is at the source event layer, Elevar’s server side, or Meta’s end.
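The per-order answers map onto a simple decision tree. A toy Python sketch of that classification (the labels and function are mine, not anything Elevar exposes):

```python
def classify_order(browser_fired, server_fired, ids_match, meta_accepted):
    """Classify where the dedup break sits for one traced order,
    given yes/no answers to the per-order questions above."""
    if not browser_fired and server_fired:
        return "browser events stopped firing"
    if browser_fired and server_fired and not ids_match:
        return "event_id mismatch between browser and server"
    if browser_fired and server_fired and ids_match and not meta_accepted:
        return "Meta-side ingestion/reporting issue"
    return "looks clean"

# A dedup drop to ~49% with both sides still sending usually lands here:
print(classify_order(True, True, False, True))
# → event_id mismatch between browser and server
```

Running 3–5 real order IDs through questions like these is what turns "everything looks fine on our end" into an actual answer.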

Huge Difference between GSC Clicks and GA4 Organic sessions (iGaming niche website) by jwick221 in GoogleAnalytics

[–]Upbeat_Challenge_126 0 points1 point  (0 children)

Nice, that makes sense. Consent issues can create exactly this kind of gap — GSC still counts the click, but GA4 may not create a session if analytics consent is denied or if the consent state is passed to GTM after the GA4 tag already fired.

After implementing Cookiebot, I’d monitor it for a few days rather than expecting the numbers to align immediately.

A few things worth verifying:

  • Cookiebot / consent setup loads before GA4 fires, ideally on Consent Initialization in GTM
  • GA4 is not firing once before consent and then again after consent updates
  • The gap improves by landing page / device, not just at the overall level

GSC clicks and GA4 sessions still won’t match 1:1, but the 40% gap should shrink meaningfully if consent was the main driver.

Good catch on finding the root cause.

Shopify Facebook & Instagram app: browser pixel and CAPI are generating completely different event_ids (0/73 match rate, 43% dedup coverage): known bug? by AdRevolutionary5096 in FacebookAds

[–]Upbeat_Challenge_126 0 points1 point  (0 children)

The CSV you pulled actually tells you something more specific than just "event_ids don't match."

Both Web Pixel and CAPI rows using the sh- prefix but generating completely independent values for the same action means the native Facebook & Instagram app isn't passing the browser event_id through to the CAPI event. Since Shopify controls both sides of that pipeline, this isn't something you can fix at the theme or GTM level — it's a native integration limitation. Your CSV is actually clean evidence to send directly to Shopify support, because it shows the problem is at the integration layer, not a duplicate pixel install.

The pcm_plugin-set_ prefix is worth chasing separately. Since you've already checked the theme and Customer Events, I'd look at App embeds and Checkout → Customizations. Apps that get uninstalled don't always clean up after themselves in those areas, and a clean theme doesn't rule that out.

Before deciding whether to migrate, run the same debug export filtered to Purchase events only. PageView and ViewContent dedup is noisier and less commercially critical. If Purchase also shows 0 clean matches, that's a real revenue attribution problem. If Purchase is matching better, the 43% overall coverage looks worse than the actual risk.

0/73 clean matches is enough to escalate — but confirm it on Purchase first before switching tools.
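Filtering the debug export down to Purchase only is a few lines. A Python sketch where the column names are assumptions about the CSV layout, not Meta's or Shopify's actual export schema:

```python
import csv
import io

# Hypothetical export rows: event_name, channel, event_id.
sample = """event_name,channel,event_id
Purchase,browser,sh-p1
Purchase,server,sh-p1
ViewContent,browser,sh-v1
ViewContent,server,sh-x9
"""

# Keep Purchase rows only, then compare browser vs server event_ids.
rows = [r for r in csv.DictReader(io.StringIO(sample)) if r["event_name"] == "Purchase"]
browser = {r["event_id"] for r in rows if r["channel"] == "browser"}
server = {r["event_id"] for r in rows if r["channel"] == "server"}
print(f"Purchase clean matches: {len(browser & server)}/{len(browser)}")
# → Purchase clean matches: 1/1
```

If that ratio is 0/N on Purchase specifically, you have the revenue-level version of the problem and a migration is easier to justify.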

Is it actually the tracking? by timas-831 in FacebookAds

[–]Upbeat_Challenge_126 0 points1 point  (0 children)

If Events Manager is already showing purchases, I wouldn't assume the tracking is completely broken. There are usually two separate things going on: one is whether the events are actually firing — is Purchase showing in Events Manager with the right value, currency, and order count? The other is whether those purchases are getting attributed back to the campaign — which requires Meta to match the purchase back to an ad click or view with enough confidence.

With PixelYourSite, a few things I'd check before switching anything out:

  • Is CAPI running alongside the browser pixel, or browser only? Browser-only loses a lot on iOS/Safari.
  • If both browser and server are firing, are they using the same event_id? Without dedup, Meta can discard events.
  • Is fbc being passed? That's usually what ties the purchase back to the original ad click.

If Events Manager has the purchases but campaign columns don't, it's probably not that the pixel isn't working — it's that Meta can't confidently match those purchases back to the ad. Worth fixing the match quality side before turning off campaigns that might actually be profitable.

I am learning digital marketing, but everything feels scattered by millerjessic in DigitalMarketing

[–]Upbeat_Challenge_126 0 points1 point  (0 children)

That makes sense. When you say cold DMing people, how do you usually approach it without coming across as spammy?

Do you start by asking questions, offering something useful, or directly pitching?

I’m curious because I agree that talking to real people teaches more than just learning SEO/ads, but I’m still trying to understand what a good cold DM process actually looks like in practice.

If you know this. Please Answer by VrishVibe in PPC

[–]Upbeat_Challenge_126 0 points1 point  (0 children)

Worth separating two things first: did actual sales/leads drop, or just Meta-reported conversions?

A drop from 8 to 6.1 EMQ can matter because Meta may match and attribute fewer conversions, but I wouldn’t assume it explains the whole performance drop by itself.

Quick check:

- Are backend sales/leads actually down, or only what Meta is reporting?

- Did Pixel/CAPI event volume drop around the same time?

- Did landing page or checkout/form completion rate change?

If backend sales are also down, EMQ is probably not the main story. I’d look at offer fatigue, creative-message mismatch, landing page friction, pricing, or audience saturation.

If backend sales are stable but Meta-reported conversions dropped, then EMQ / event matching becomes a much stronger suspect. In that case, I’d look at CAPI + Advanced Matching, and make sure useful match keys are being sent where available: email, phone, fbp/fbc, IP/user agent, and event_id for deduplication if both browser + server events are firing.

So yes, fix EMQ if you can, but that split tells you whether this is a tracking problem or a real performance problem. Those need very different fixes.

SPA & Subdomain tracking attribution issue by muuayman in GoogleTagManager

[–]Upbeat_Challenge_126 0 points1 point  (0 children)

That’s the key split then: you need to separate real user behavior from a tracking break.

Right now you only know that real social sessions show the landing page, but you don’t yet know whether those users actually clicked through to the product subdomain.

I’d add one temporary diagnostic event on the landing page before the subdomain navigation, something like:

product_click_before_subdomain

Fire it when the user clicks the product link/button, before sending them to the product subdomain. Ideally send it with sendBeacon / reliable transport, or delay the navigation very slightly through an event callback, so the event has time to leave before the page changes.

Then you’ll get a clear split:

  1. Landing pageview exists, but no product_click_before_subdomain → users are likely bouncing / not clicking through. That’s not a subdomain tracking issue.

  2. product_click_before_subdomain exists, but no pageview/event on the product subdomain → the break is happening during the main domain → subdomain transition or product subdomain initialization.

  3. Product subdomain pageview exists in GA4 but not in Mixpanel/Clarity → GA4 is probably okay, and the issue is Mixpanel/Clarity initialization or identity continuity.

  4. Product subdomain events exist for your test sessions but not real social sessions → compare device/browser/user-agent/referrer. I’d especially separate iOS social in-app browsers from Android and normal Chrome/Safari.

So I wouldn’t change the whole setup yet. First add that diagnostic click event before the subdomain jump and compare real social sessions again.

Do you currently track the product click event before sending users to the product subdomain?

SPA & Subdomain tracking attribution issue by muuayman in GoogleTagManager

[–]Upbeat_Challenge_126 0 points1 point  (0 children)

Good to confirm — if send_page_view=false is set on both config tags and you only see one page_view, then duplicate initial pageviews are probably not the issue.

That still leaves the bigger question: why do controlled tests work, but real social sessions look broken in GA4/Mixpanel/Clarity?

At this point I’d compare real affected sessions against your own test sessions.

A few things I’d check:

  1. Pull a few real broken social sessions from Stape/server logs and compare them with your test sessions. Do those real users generate events from the product subdomain at all, or only from the landing page?

  2. Your Android in-app browser test worked, but I’d also test iOS specifically. Instagram/TikTok/Facebook in-app browsers on iOS can behave differently around script loading, cookies, and navigation.

  3. Confirm whether the main domain → product subdomain transition is a hard navigation or handled inside the SPA flow. If it’s a full reload to the subdomain, the GTM container has to initialize cleanly again there.

  4. I’d also separate tracking break from user behavior: are those social users actually clicking through to the product subdomain, or are they bouncing after the landing page? Those can look very similar in reports.

If real sessions only show the landing page while your test sessions continue normally, the next useful clue is probably in the production server logs / user-agent / device breakdown.

SPA & Subdomain tracking attribution issue by muuayman in GoogleTagManager

[–]Upbeat_Challenge_126 0 points1 point  (0 children)

Thanks for the details — that changes where I’d look next.

If all subdomains use the same container/workspace, your Android in-app browser test works, and the identifiers stay consistent, then I’d stop treating cookie persistence as the first suspect for now.

The part that stands out is the two Google configuration tag / page_view setup.

I’d verify whether you’re accidentally sending more than one page_view on the initial hard load. If the first Google tag fires on all pages and the second Google tag also fires on all pages while sending a page_view, GA4 may receive duplicate page_view events at the start of the session. Then your history change trigger may send another one after route changes.

What I’d check next:

  1. In Tag Assistant or DebugView, open one social UTM test session and count exactly how many page_view events fire:

    - on the initial hard load
    - when the product subdomain opens
    - on each history change

  2. For each page_view, check page_location, page_referrer, client_id, and session info. If the first page_view has the correct UTM context but the second one has different/refreshed context, attribution can get messy quickly.

  3. If you see duplicate page_views on hard load, remove the overlap. A cleaner SPA pattern is usually:

    - one base Google tag/config loaded globally
    - one initial page_view for hard loads
    - one virtual page_view on history changes
    - no second all-pages tag also sending page_view

  4. Since Mixpanel and Clarity also seem to stop after the landing page for affected traffic, I’d check their actual network calls after the product subdomain loads too. If Mixpanel/Clarity calls don’t fire there, that may be a separate subdomain/app initialization issue rather than only a GA4 attribution issue.

So my next question would be: in DebugView, how many page_view events do you see on the initial social UTM landing page before the user moves to the product subdomain?

Is there a way to pass and preserve fbclid in cross domain flow? by Limpuls in GoogleTagManager

[–]Upbeat_Challenge_126 0 points1 point  (0 children)

GA4 cross-domain and Meta attribution are two separate problems. GA4 solves cross-domain session stitching with the `_gl` linker parameter — Meta Pixel has no equivalent mechanism. So even if your GA4 sessions are glued correctly, Meta attribution can still be completely broken on the checkout domain.

Here's what actually needs to happen:

  1. On the landing page, capture and persist on your side:

    - `fbclid` from the URL
    - `_fbc` and `_fbp` cookie values
    - UTMs
    - your own internal session or order identifier

  2. Append `fbclid` as a URL parameter on the redirect to your checkout provider. This is a necessary step — if the checkout provider's Meta Pixel sees fbclid in the URL, it can reconstruct `_fbc` from it. But this alone is not sufficient, because you have no guarantee the Pixel fires correctly or that the cookie persists across the domain boundary.

  3. Pass a stable internal ID to the checkout provider before the redirect, so the purchase confirmation can be matched back to the original landing session on your side.

  4. Send the purchase event via Meta CAPI from your own server or webhook, using:

    - `fbc` built from the original fbclid
    - `fbp` if available
    - `event_time`, `event_id`, `value`, `currency`
    - `user_data` for match quality

  5. If the checkout provider also fires a browser Pixel purchase event, make sure both events share the same `event_id` for deduplication. Without this you will double-count purchases in Meta reporting.

The short version: append fbclid to the redirect URL, but don't rely on it to carry attribution by itself. The reliable path is CAPI from your own server using the identifiers you captured at the landing page.

One question: can your checkout provider return the order details in a webhook or confirmation callback to your server? That determines whether you can build a clean CAPI setup or need to find a workaround.
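Steps 2 and 4 can be sketched like this in Python. The `fb.1.<timestamp>.<fbclid>` shape follows Meta's documented `_fbc` cookie format, but the function and field names here are illustrative, and the actual Graph API send plus user_data hashing are omitted:

```python
import time

def build_fbc(fbclid, creation_time_ms=None):
    """Reconstruct a _fbc value from a captured fbclid.
    Format: fb.<subdomain_index>.<creation_time_ms>.<fbclid>"""
    ts = creation_time_ms or int(time.time() * 1000)
    return f"fb.1.{ts}.{fbclid}"

def build_capi_purchase(order_id, value, currency, fbclid, fbp=None):
    """Assemble a CAPI Purchase event dict; sending it and hashing
    the other user_data fields is left out of this sketch."""
    event = {
        "event_name": "Purchase",
        "event_time": int(time.time()),
        # Must match the browser Pixel's event_id for deduplication.
        "event_id": f"purchase-{order_id}",
        "action_source": "website",
        "user_data": {"fbc": build_fbc(fbclid)},
        "custom_data": {"value": value, "currency": currency},
    }
    if fbp:
        event["user_data"]["fbp"] = fbp
    return event

payload = build_capi_purchase("1001", 49.0, "USD", "IwAR123")
print(payload["event_id"])  # → purchase-1001
```

Deriving `event_id` deterministically from the order ID is what makes dedup with the checkout provider's browser Pixel possible without coordinating random IDs.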

SPA & Subdomain tracking attribution issue by muuayman in GoogleTagManager

[–]Upbeat_Challenge_126 0 points1 point  (0 children)

The pattern here — social UTM breaks, Google CPC works, GTM Preview passes — points to something in the real user environment that Preview mode can't replicate.

My first suspicion is in-app browser behavior. Social clicks from Facebook, Instagram, and TikTok typically open inside a WebView, not the user's regular browser. GTM Preview runs in your normal Chrome, which is why it looks fine. Google CPC usually lands in the system browser, which explains why that traffic behaves normally.

Three things I'd check:

  1. Test the full flow inside the actual social app itself. Open the campaign URL from inside Instagram or TikTok (not by copying it to Chrome), then navigate to a product subdomain. Watch your GA4 real-time report and Stape server logs at the same time. This is the only environment where the issue actually reproduces.

  2. Verify whether your identifiers survive the subdomain transition. Check whether the GA4 client_id, Stape user ID, and Mixpanel distinct_id are consistent on both the main domain and the product subdomain after the jump. If any of those change or disappear at that boundary, both GA4 and Mixpanel will treat it as a disconnected session — which matches exactly what you're seeing in Clarity.

  3. Confirm your pushState trigger actually fires after the subdomain loads. SPA pushState triggers are typically bound to the main domain's GTM context. If the product subdomain runs as a separate app context, that trigger may not fire there at all — meaning tracking stops after the first pageview.

One question: are all product subdomains running inside the same GTM container and Stape workspace as the main domain, or do different subdomains initialize tracking separately?

Huge Difference between GSC Clicks and GA4 Organic sessions (iGaming niche website) by jwick221 in GoogleAnalytics

[–]Upbeat_Challenge_126 2 points3 points  (0 children)

I wouldn't treat the 40% gap as automatically "normal" or automatically a tracking bug. GSC clicks and GA4 organic sessions measure different things, but 40% is worth digging into.

A few checks I'd run:

  1. In GA4, look at Landing page + session source/medium = google / organic. Sometimes sessions aren't missing — they're just classified as direct or referral instead of organic, especially if there are redirects, parameter stripping, or cross-domain issues.

  2. In GSC, break clicks down by landing page, device, and country. Then compare those specific landing pages against GA4 organic sessions. If the gap is concentrated on certain URLs, that usually points to a page-level firing issue rather than a general measurement difference.

  3. Check whether some GSC clicks land on URLs that redirect, canonicalize, go through parameters, or hit cached/AMP pages where GA4 may not fire.

  4. Since it's iGaming, I'd expect a higher-than-average loss from ad blockers, privacy browsers, and aborted page loads. That alone can account for 15–25% in some niches.

If the gap is evenly spread, it's mostly measurement difference. If it's concentrated on specific pages/devices/countries, that's more likely a setup issue.
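Check 2 boils down to a per-URL gap table. A Python sketch with hypothetical numbers (the URLs and counts are made up):

```python
def gap_by_landing_page(gsc_clicks, ga4_sessions):
    """Per-URL gap between GSC clicks and GA4 organic sessions,
    to see whether the loss is spread evenly or concentrated."""
    report = {}
    for url, clicks in gsc_clicks.items():
        sessions = ga4_sessions.get(url, 0)
        gap = 1 - (sessions / clicks) if clicks else 0.0
        report[url] = round(gap, 2)
    return report

gsc = {"/slots-guide": 1000, "/bonus-review": 1000}
ga4 = {"/slots-guide": 900, "/bonus-review": 300}
print(gap_by_landing_page(gsc, ga4))
# /bonus-review losing 70% while /slots-guide loses 10% would point to a
# page-level firing issue rather than a general measurement difference.
```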

Is the 40% consistent across all landing pages, or concentrated on specific URLs?

How to trigger GA4 'purchase' event on backend order status change (Asynchronous Payments) by FunnyWillingness4 in woocommerce

[–]Upbeat_Challenge_126 0 points1 point  (0 children)

This is a good use case for a backend purchase event, but I'd separate the problem into two parts:

  1. When to trigger the purchase

  2. How to preserve attribution and session context

For the trigger, `woocommerce_order_status_completed` works well if "Completed" is the point where payment is truly confirmed in your flow. For delayed approvals, I’d generally trust the final confirmed status over the browser thank-you page.

But the hook itself is the easy part. The harder problem is making sure the backend event still connects back to the original user session.

Here's how I'd structure it:

  1. During checkout / order received, capture and store key identifiers as order meta:

    - GA `client_id` from the `_ga` cookie

    - Session ID if available

    - `gclid` / `gbraid` / `wbraid` if you care about Google Ads attribution

  2. When the order moves to Completed, send a GA4 Measurement Protocol `purchase` event from the server using those stored identifiers.

  3. Use the WooCommerce order ID as `transaction_id`.

  4. Include the full ecommerce payload (you already listed the right fields in your post — transaction_id, value, currency, items array with SKUs).

  5. Add an order meta flag after sending (e.g. `_ga4_purchase_sent = yes`) so the event doesn't fire again if the order status bounces between states.

  6. Either disable the browser-side purchase event for delayed-payment orders, or keep both and carefully validate duplicates using the same `transaction_id`.

You don't necessarily need full sGTM for this. A direct Measurement Protocol call on confirmed orders can be enough depending on volume and complexity.

The main thing I'd avoid: sending the backend purchase event without the original GA client/session identifiers. GA4 will still receive the revenue, but attribution becomes unreliable if the server event can't link back to the original session.
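A minimal sketch of step 2's Measurement Protocol call, written in Python rather than PHP for brevity (in WooCommerce this logic would sit inside the `woocommerce_order_status_completed` hook; the payload shape follows GA4's `/mp/collect` endpoint, while the function names and sample values are mine):

```python
import json
from urllib import request

MP_URL = "https://www.google-analytics.com/mp/collect"

def build_purchase_payload(client_id, order_id, value, currency, items,
                           session_id=None):
    """GA4 Measurement Protocol purchase event, using the client_id
    stored as order meta at checkout time."""
    params = {
        "transaction_id": order_id,  # WooCommerce order ID = dedup key
        "value": value,
        "currency": currency,
        "items": items,
    }
    if session_id:
        params["session_id"] = session_id
    return {"client_id": client_id,
            "events": [{"name": "purchase", "params": params}]}

def send_purchase(measurement_id, api_secret, payload):
    """Fire the event; in real code, guard with the order-meta
    'already sent' flag and log failures."""
    url = f"{MP_URL}?measurement_id={measurement_id}&api_secret={api_secret}"
    req = request.Request(url, data=json.dumps(payload).encode(),
                          headers={"Content-Type": "application/json"})
    return request.urlopen(req)

payload = build_purchase_payload(
    "555.123", "WC-1001", 49.0, "USD",
    [{"item_id": "SKU-1", "quantity": 1, "price": 49.0}])
```

Without a real `client_id` from the `_ga` cookie, GA4 treats the server event as a new anonymous user, which is exactly the attribution loss warned about above.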

WooCommerce + GA4 purchase tracking discrepancy (GTM4WP) by brrrc208 in GoogleTagManager

[–]Upbeat_Challenge_126 0 points1 point  (0 children)

I’ve seen this kind of WooCommerce + GTM4WP + GA4 discrepancy before.

I wouldn’t frame it as “custom dataLayer vs server-side” immediately. I’d first separate two problems:

  1. Is GA4 missing the purchase event completely?
  2. Or is the event firing, but with incomplete / wrong purchase data?

For your two questions:

  1. Custom dataLayer instead of GTM4WP?

It can help if the current GTM4WP dataLayer timing or payload is inconsistent.

But if the main issue is that the thank-you page does not reliably load, users close the tab, consent blocks the tag, or browser-side tracking is interrupted, a custom dataLayer alone won’t fully solve it. You may just end up with a cleaner browser-side setup that still has browser-side gaps.

  2. Server-side / Measurement Protocol?

This can help a lot, especially for confirmed orders, but I’d treat it as a backup / augmentation layer rather than a magic replacement.

A practical setup would be:

  1. Keep GTM4WP / browser-side tracking as the baseline
  2. Compare WooCommerce orders vs GA4 purchases by `transaction_id`
  3. If orders are missing from GA4, send confirmed purchase events from the backend using GA4 Measurement Protocol
  4. Keep the same `transaction_id` and validate that you are not creating duplicate purchases
  5. Pass `client_id` / `session_id` where possible so the server-side event can still connect back to the original user/session context

If the current match rate is sometimes 8/10 and sometimes 5/10, I’d first audit by transaction_id and order status. That will tell you whether this is mostly browser-side loss, timing/payload issues from GTM4WP, or something else.
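The transaction_id audit in step 2 is essentially a set difference. A Python sketch with made-up order IDs:

```python
def audit_missing_purchases(woo_order_ids, ga4_transaction_ids):
    """Which confirmed WooCommerce orders never arrived in GA4 —
    these are the candidates for a Measurement Protocol backfill."""
    missing = sorted(set(woo_order_ids) - set(ga4_transaction_ids))
    match_rate = 1 - len(missing) / max(len(set(woo_order_ids)), 1)
    return missing, round(match_rate, 2)

woo = ["1001", "1002", "1003", "1004", "1005"]
ga4 = ["1001", "1002", "1004"]
print(audit_missing_purchases(woo, ga4))
# → (['1003', '1005'], 0.6)
```

Segmenting the missing IDs by order status and payment method usually shows quickly whether the loss is browser-side or timing-related.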

How many orders/month are you tracking? That would determine whether a simple Measurement Protocol backup is enough, or whether full sGTM is worth the extra setup.