
Case Study: WordPress Theme Migration with Zero Downtime and Technical SEO Recovery

By Jasper Frumau · Case Studies

An established Australian sports betting tips site — 300+ published posts, a custom affiliates post type, and bookmaker reviews competing on commercial keywords — needed to move from a fully bespoke custom theme to a purpose-built affiliate theme (PokaTheme v4.1.3).

This was not a cosmetic redesign. It was a full theme swap: different template structure, different Custom Post Types, different heading output, different block editor compatibility. And it had to go live without losing rankings.

My job was to get staging SEO-ready, execute the cutover, and fix whatever broke afterward.

Why Theme Migrations Create Technical SEO Problems

When you swap WordPress themes, several things break quietly in ways that only show up after Google recrawls:

  • Template heading output changes — the old theme may have output H1 from the_title() in its single post template. If post body content also had an H1, you now have duplicate H1s across hundreds of pages.
  • Legacy HTML in post content — raw <h2>, <p>, and <script> tags embedded in classic editor content don’t know they’re in a new theme. JSON-LD schema blocks stored as raw HTML in that legacy content get re-parsed incorrectly by Gutenberg.
  • Custom Post Types — the old theme registered its own CPTs. The new theme registers different ones. Anything referencing the old CPT slugs becomes a 404.
  • URL structure shifts — even small permalink changes create redirect chains if not handled upfront.
  • Plugin compatibility — SEO plugins hook into the editor by post type. Change the CPT and the integration can break.

All of these happened on this project.

Phase 1: Pre-Launch Audit

Before touching production I ran a full Screaming Frog crawl of staging, cross-referenced it with a Yoast SEO keyword export, and worked through every issue category.

Internal 404 Errors — 14 Found

The crawl flagged 14 internal URLs returning 404. Root causes split three ways:

  • Old review slugs — the old theme used paths like /betfair-review and /ladbrokes-review. The new theme’s affiliates CPT uses /reviews/{slug}/. Every internal link using the old pattern was now broken.
  • Hardcoded domain URLs in post content — some posts linked to the live domain with absolute URLs. While staging was being tested on its own domain, these became cross-domain links to the wrong environment — and would break permanently after cutover.
  • Unpublished content — one bookmaker review was still in draft. Pages linking to its review URL were linking to a 404.

All 14 were resolved before cutover. The old-slug pattern got 301 redirects via Yoast SEO Premium. The hardcoded URLs were updated to relative paths. The draft post was flagged for the client.
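Re-checking fixes like these by hand is tedious, so each round of changes was followed by a scripted sweep of the affected URLs. A minimal sketch of that kind of check (the staging URL list here is a placeholder, not the real one):

```python
import urllib.request
import urllib.error

def fetch_status(url: str, timeout: int = 10) -> int:
    """Return the final HTTP status code for url (redirects are followed)."""
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status
    except urllib.error.HTTPError as e:
        return e.code

def broken_links(statuses: dict) -> list:
    """Given a {url: status} map, return the URLs resolving to 4xx/5xx."""
    return sorted(u for u, s in statuses.items() if s >= 400)

if __name__ == "__main__":
    # Hypothetical staging URLs — substitute the crawl's flagged list.
    urls = ["https://staging.example.com/reviews/betfair/"]
    report = {u: fetch_status(u) for u in urls}
    for u in broken_links(report):
        print("STILL BROKEN:", u)
```

Run after each redirect is added, an empty report means the fix list is clear.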

Redirect Audit — 149 Internal Redirects

Yoast SEO Premium was managing 149 internal redirects. Many were redirect chains — a URL pointing to another redirect before reaching the final destination. Chains are wasteful for crawl budget and fragment link equity.

I flattened the chains most likely to affect crawled pages:

  • /betr-review-and-promo-code → /reviews/betr/ — was a two-hop chain
  • /melbourne-cup-2021 and /melbourne-cup-betting-2020 → /melbourne-cup/ — both chained through intermediate URLs
  • Several promo code pages for rebranded bookmakers — I confirmed the rebrand history and pointed all of them directly to the current review
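Chain detection is mechanical once the redirect rules are exported from Yoast. A sketch of the logic on a hypothetical source-to-target map (the slugs below illustrate the pattern, not the exact rules):

```python
def redirect_chain(url: str, redirect_map: dict, max_hops: int = 10) -> list:
    """Follow url through redirect_map ({source: target}) and return every hop."""
    hops = [url]
    while hops[-1] in redirect_map and len(hops) <= max_hops:
        hops.append(redirect_map[hops[-1]])
    return hops

def flatten(redirect_map: dict) -> dict:
    """Point every source straight at its final destination, removing chains."""
    return {src: redirect_chain(src, redirect_map)[-1] for src in redirect_map}

# Hypothetical two-hop chain of the kind found in the audit:
rules = {
    "/betr-review-and-promo-code": "/betr-review",
    "/betr-review": "/reviews/betr/",
}
print(redirect_chain("/betr-review-and-promo-code", rules))
# ['/betr-review-and-promo-code', '/betr-review', '/reviews/betr/']
print(flatten(rules)["/betr-review-and-promo-code"])  # /reviews/betr/
```

Any chain longer than two entries is a candidate for flattening; the `max_hops` cap also guards against redirect loops in the rule set.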

One redirect conflict was more involved: a rebranded bookmaker had both a live page and a Yoast redirect rule pointing to its replacement. The page was taking priority and suppressing the redirect. This was documented and flagged for post-launch resolution.

Semantic HTML — Double H1s on 84 Pages

This was the most direct consequence of the theme swap.

The old custom theme output <h1><?php the_title(); ?></h1> from its single post template. Post body content — written against the old theme — also contained <h1> tags at the top of the content area. Under the old theme this was invisible as a duplicate because the template and content H1 were the same heading. Under the new theme, which has its own template heading output, both H1s now rendered.

Screaming Frog’s H1 duplicate report returned 84 affected pages.

I prioritised the 11 highest-value money pages (bookmaker reviews and key commercial landing pages) and changed the in-content H1 tags to H2. The bulk of the 84 were date-specific race tip posts — these were going noindex anyway, so they were lower priority.

The footer had a separate heading structure problem: two Custom HTML widgets each contained <h2> headings that appeared on every page of the site, driving a site-wide duplicate H2 count. These were changed to styled <p> tags in the WordPress widget admin.

A post-cutover crawl still showed a “Related Posts” <h2> appearing on 109 pages — a heading in a theme template file that outputs the same text site-wide. Identified and flagged for a template fix.
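Screaming Frog surfaces these at scale, but individual pages can also be spot-checked with the standard library. A minimal sketch that tallies heading tags in rendered HTML:

```python
from html.parser import HTMLParser

class HeadingCounter(HTMLParser):
    """Tally every h1-h6 start tag seen in an HTML document."""
    def __init__(self):
        super().__init__()
        self.counts = {}

    def handle_starttag(self, tag, attrs):
        if tag in ("h1", "h2", "h3", "h4", "h5", "h6"):
            self.counts[tag] = self.counts.get(tag, 0) + 1

def heading_counts(html: str) -> dict:
    parser = HeadingCounter()
    parser.feed(html)
    return parser.counts

# A page with both a template H1 and an in-content H1 (the duplicate case):
page = "<h1>Betfair Review</h1><article><h1>Betfair Review</h1><h2>Odds</h2></article>"
print(heading_counts(page))  # {'h1': 2, 'h2': 1}
```

Any page where `counts["h1"]` is greater than 1 goes on the fix list.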

Missing SEO Metadata — 16 Bookmaker Review Pages

The Yoast data export revealed 16 bookmaker review pages on staging with no SEO title, no meta description, and no focus keyphrase. These were all high-value commercial pages.

When content is migrated between themes — especially if the old theme used a different SEO plugin or stored meta in custom fields — this kind of data loss is common. The staging site had been built independently and the metadata simply hadn’t been carried across.

For each page I cross-referenced the equivalent live URL, extracted existing meta where it was present, and applied it to staging. Where meta was missing entirely I wrote new titles and descriptions within character limits (60 chars for title, 155 for description).
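The same character budgets can be enforced mechanically while working through an export. A sketch, assuming the 60/155 limits above (character counts are a working proxy; Google truncates by pixel width):

```python
TITLE_MAX = 60   # working budget for SEO titles
DESC_MAX = 155   # working budget for meta descriptions

def meta_issues(title: str, description: str) -> list:
    """Return a list of problems with a page's SEO title and meta description."""
    issues = []
    if not title:
        issues.append("missing title")
    elif len(title) > TITLE_MAX:
        issues.append(f"title too long ({len(title)}/{TITLE_MAX})")
    if not description:
        issues.append("missing description")
    elif len(description) > DESC_MAX:
        issues.append(f"description too long ({len(description)}/{DESC_MAX})")
    return issues

print(meta_issues("", "Compare Betfair's odds, markets and sign-up offers."))
# ['missing title']
```

Mapped over a Yoast CSV export, this turns "16 pages with missing meta" into a concrete worklist.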

Canonicals, robots.txt, and X-Robots-Tag

The staging host was returning X-Robots-Tag: noindex, nofollow, nosnippet, noarchive at server level and serving Disallow: / in robots.txt — correct for a staging environment. I verified these were hosting-environment settings that would resolve automatically on domain cutover, and added them to the cutover verification checklist. Canonical tags were absent on staging but confirmed present with the production domain after DNS propagation.
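Both checks are one request to verify, which is why they went on the cutover checklist. A minimal sketch of the X-Robots-Tag check (the URL is a placeholder, and the helper assumes exact-case header names, which is fine for a spot check):

```python
import urllib.request

def robots_directives(header_value: str) -> list:
    """Split an X-Robots-Tag header into normalised directives."""
    return [d.strip().lower() for d in header_value.split(",") if d.strip()]

def blocks_indexing(headers: dict) -> bool:
    """True if the response headers carry a noindex directive."""
    tag = headers.get("X-Robots-Tag", "")
    return "noindex" in robots_directives(tag)

if __name__ == "__main__":
    # Placeholder URL — run against the production domain after DNS propagation.
    with urllib.request.urlopen("https://www.example.com/") as resp:
        print("indexing blocked:", blocks_indexing(dict(resp.headers)))
```

On staging this should print True; after cutover it must print False on the production domain.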

Phase 2: Cutover

Cutover happened on 31 March 2026. Staging became the new production via a domain/DNS switch — no content migration, no database merge. The staging site was the new live site.

Pre-launch checklist:

  • Internal 404s resolved
  • Redirect chains flattened
  • Money page H1s fixed
  • Footer H2s corrected
  • SEO metadata added to all 16 bookmaker reviews
  • robots.txt and X-Robots-Tag verified post-DNS propagation
  • Canonical tags confirmed with production domain

Zero downtime. No ranking drop in the immediate post-cutover window.

Phase 3: Post-Launch Issues

Structured Data — 5 GSC “Incorrect Value Type” Errors

Within 48 hours of going live, Google Search Console flagged five pages under “Incorrect value type” — a structured data error that had been present since mid-2024 but was carried through the migration unfixed.

The root cause was a Gutenberg block editor compatibility issue specific to legacy content. The affected pages had <script type="application/ld+json"> JSON-LD schema blocks embedded as raw HTML inside classic editor content. When WordPress re-processed this content in the block editor, it did two things that broke the JSON:

  1. Wrapped the schema content in <p> (paragraph) blocks instead of Custom HTML blocks
  2. Inserted <br /> tags inside the JSON string — breaking the syntax entirely

Google’s parser read the malformed JSON as incorrect value types, disqualifying these pages from rich results (review stars, FAQ snippets).
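The failure is easy to reproduce: a single <br /> inside the JSON makes the whole block unparseable. A minimal illustration with a made-up schema fragment:

```python
import json

# Made-up fragment of the kind Gutenberg produced: a <br /> injected mid-JSON.
broken = '{"@type": "Review",<br /> "reviewRating": {"ratingValue": "4.5"}}'

try:
    json.loads(broken)
except json.JSONDecodeError as e:
    print("parser rejects it:", e.msg)

# Stripping the injected tags restores valid JSON.
cleaned = broken.replace("<br />", "")
data = json.loads(cleaned)
print(data["reviewRating"]["ratingValue"])  # 4.5
```

Google's parser fails the same way, which is why the error surfaced as a value-type problem rather than a missing-schema problem.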

The fix was manual but methodical:

  1. Open the affected page in Gutenberg
  2. Locate the JSON-LD content — rendered as a Paragraph block
  3. Convert to Custom HTML block
  4. Remove all <br /> tags from inside the JSON
  5. Validate JSON syntax
  6. Save and verify with curl

I used a curl + Python check to validate each page after saving:

curl -s "https://www.example.com/reviews/betfair/" \
  | python3 -c "
import sys, re, json
html = sys.stdin.read()
blocks = re.findall(r'<script type=\"application/ld\+json\"[^>]*>(.*?)</script>', html, re.DOTALL)
for i, block in enumerate(blocks):
    try:
        json.loads(block)
        print(f'Block {i+1}: VALID')
    except json.JSONDecodeError as e:
        print(f'Block {i+1}: INVALID — {e}')
"

All five pages were fixed the same day. Three confirmed clean immediately via curl. Two remained in Cloudflare and WP Rocket preload cache — confirmed clean in the editor and rechecked after cache expiry. After fixing all five, I clicked Validate Fix in Google Search Console to trigger a re-crawl.

Affiliate Link Redirect Loop

Shortly after cutover, all affiliate links starting with /offer/ started returning “too many redirects” for visitors who accessed them without a trailing slash. These are ThirstyAffiliates-managed links — the primary revenue links on the site.

The loop worked like this:

GET https://www.example.com/offer/bookmaker     (no trailing slash)
→ 301 example.com/offer/bookmaker              (www stripped — wrong)
→ 301 example.com/offer/bookmaker              (same URL — infinite loop)

With a trailing slash, ThirstyAffiliates’ rewrite rule matched and the redirect resolved correctly to the affiliate URL. Without it, WordPress/nginx fell through to a generic redirect that stripped www and looped.

A GTM tag was patched immediately to enforce trailing slashes on all on-site affiliate link clicks — stopping the bleed for live visitors. But this only protected on-site clicks. Direct URL access, external backlinks, and crawler requests remained broken.

The permanent fix required two nginx config changes:

  1. A location block in the www server block to 301-redirect /offer/{slug} (no trailing slash) to /offer/{slug}/ before any other rules run
  2. A clean non-www to www redirect in the non-www server block, to prevent the www-stripping behaviour
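A sketch of what those two changes look like in nginx config (domain names are placeholders, and exact server blocks depend on the host's setup):

```nginx
# 1) In the www server block: add the missing trailing slash on /offer/
#    URLs before the request ever reaches WordPress.
location ~ ^/offer/([^/]+)$ {
    return 301 /offer/$1/;
}

# 2) Dedicated non-www server block: one clean hop to www, no www-stripping.
server {
    listen 80;
    listen 443 ssl;
    server_name example.com;
    return 301 https://www.example.com$request_uri;
}
```

With both in place, a slashless /offer/ URL resolves in two clean hops at most, and ThirstyAffiliates' own rewrite handles the final redirect to the affiliate destination.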

There was also a secondary issue: the non-www intermediate URL was returning x-robots-tag: noindex, nofollow at server level — meaning Google could potentially refuse to follow the affiliate redirect from that hop. This was traced to a blanket noindex directive on the non-www vhost and flagged for removal.

SEO Plugin JS Errors — Custom Post Type Compatibility

On the day of cutover, Gutenberg editor pages for the custom affiliates post type were throwing JavaScript errors on load — thrown by Yoast SEO Premium’s AI scripts, and potentially causing intermittent post save failures.

The cause: Yoast’s AI scripts were not initialising correctly on non-standard custom post types registered by the new theme. A Yoast SEO plugin update resolved the compatibility issue.

A separate set of deprecation warnings appeared for all 11 custom Gutenberg blocks in the theme — all registered at API version 1, deprecated since WordPress 6.9. Blocks remained functional but the issue was raised with the theme developer for a future update.

Summary

  • Theme migration: custom bespoke theme to PokaTheme v4.1.3, full template audit
  • 404 resolution: 14 internal 404s resolved pre-cutover
  • Redirect management: 149 redirects audited, chains flattened, conflict resolved
  • Semantic HTML: double H1s fixed on 11 money pages, footer H2s corrected
  • SEO metadata: 16 bookmaker reviews with missing meta — all filled
  • Structured data: 5 GSC “Incorrect value type” errors — JSON-LD rebuilt in correct blocks
  • Affiliate redirects: redirect loop diagnosed, GTM patched, nginx fix specified
  • Editor compatibility: Yoast JS errors on custom CPT resolved; block API deprecations flagged
  • Cutover verification: robots.txt, X-Robots-Tag, canonicals, sitemap — all confirmed

No rankings were lost. GSC structured data errors were fixed within 48 hours of going live. The site was crawlable, indexable, and structured-data-valid from day one on the new theme.

A Note on Tooling — Claude Code in the Workflow

Throughout this project I used Claude Code (Anthropic’s AI coding assistant) as a technical workflow tool. Not for writing content — for the unglamorous parts of the work that eat time during a migration.

Specifically it helped with:

  • Processing Screaming Frog exports — the crawl data came out of Screaming Frog as Excel files. I used Claude Code to write Python scripts that converted these to CSV and extracted the specific issue categories (H1 duplicates, redirect chains, 4xx errors) into structured lists I could work from directly.
  • Writing the JSON-LD validation script — the curl + Python command used to verify structured data blocks after each fix was written with Claude Code. It saved running the blocks through an external validator for each of the five affected pages.
  • The bash check script — a shell script for re-verifying the staging site’s curl responses across all the 404 fixes, run repeatedly as each redirect was added.
  • Child theme code — the post expiry / auto-noindex system in the child theme, including the Yoast wpseo_robots filter hook that automatically noindexes posts past their expiry date. Claude Code produced the initial implementation; I reviewed every line, tested it on staging, and adjusted before it went anywhere near production. AI-generated PHP needs the same scrutiny as any other code, and you need to understand what it is doing to know whether it is right.
  • Maintaining the action plan — the project documentation (404 fix status, redirect audit log, pre-launch checklist) was kept in a structured markdown file that Claude Code helped update and cross-reference as items were resolved.

The project also had a CLAUDE.md file at the repo root — a persistent context document that gave Claude Code memory of the site architecture, environment URLs, plugin stack, and custom post type structure across sessions. This meant I could resume work without re-explaining the setup each time.

The point is not that AI did the SEO work. The crawl analysis, redirect decisions, heading fixes, and structured data debugging all required judgment calls that came from reading the site. But having a coding assistant that could turn a Screaming Frog Excel export into an actionable issue list in seconds — or write a validation script on the spot — meaningfully reduced the time between identifying a problem and confirming it was fixed.

What it cannot replace is knowing the code well enough to verify what gets generated, and doing the actual testing. Every script was reviewed before running. Every piece of PHP went through staging before touching production. Claude Code is a capable assistant — it is not a substitute for understanding what you are shipping.

Need This for Your Site?

If you are planning a WordPress theme migration or dealing with post-launch technical SEO issues, get in touch. I offer WordPress technical SEO audits and migration support for sites of all sizes.
