This is one of those posts that will only appeal to the nerdiest of readers. It’s a breakdown of some of the technical challenges — and the solutions we built — to keep a large catalog of listings (roughly 70k on average) in sync with GunBroker, when the way products get updated on the site doesn’t always give us a clean signal that anything changed at all.


The Core Problem: Silent Database Writes

WooCommerce, like most WordPress-based systems, is hook-driven. When a product is saved, hooks fire. When a price changes through the UI, hooks fire. You can listen for those events and react accordingly — push an update, invalidate a cache, log a change.

The problem is that not everyone plays by those rules.

We use a service called Garidium for distributor feed syncing. Garidium updates WooCommerce product prices — regular price, cost — by writing directly to the wp_postmeta table. No WordPress hooks. No WooCommerce events. It has to work this way because of the sheer volume of data it syncs and the frequency of its updates (most sites are on 20-minute sync schedules). From the application’s perspective, nothing happened. A hook-based sync architecture would never see these changes, and our GunBroker listings would quietly go stale.

With ~70k active listings, “quietly stale” is not an acceptable failure mode.


The Solution: Stop Watching for Changes, Start Measuring Them

Instead of trying to intercept writes we can’t reliably observe, we flipped the model entirely: on every scheduled sync cycle, we recompute what the GunBroker listing should look like right now, fingerprint it, and compare that fingerprint to the last one we pushed.

The fingerprint is an MD5 hash over the six values that actually appear in the GunBroker API payload:

$current_hash = md5( sprintf(
    '%s|%d|%.2f|%d|%.2f|%.2f',
    $local_sku,           // Transformed SKU (prefix applied, sanitized)
    (int) $local_quantity, // Effective stock (raw stock minus buffer)
    (float) $local_price,  // Final GB listing price (after markup + rounding + MAP)
    $local_can_offer ? 1 : 0,     // Offers enabled flag
    (float) $local_auto_accept,   // Auto-accept offer threshold
    (float) $local_auto_reject    // Auto-reject offer threshold
) );

If the hash matches what we last pushed, we skip the API call entirely. If it doesn’t match, we push and store the new hash.
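In sketch form, that compare-and-skip step looks roughly like this. The meta key name and the wgbs_push_listing() helper are illustrative assumptions, not the plugin’s actual API:

```php
// Illustrative only: the meta key and push helper names are assumptions.
$last_hash = get_post_meta( $product_id, '_wgbs_last_pushed_hash', true );

if ( $current_hash === $last_hash ) {
    return true; // Nothing API-facing changed; skip the call.
}

// Something in the derived payload changed: push, then remember the hash.
if ( wgbs_push_listing( $product_id, $payload ) ) {
    update_post_meta( $product_id, '_wgbs_last_pushed_hash', $current_hash );
}
```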

This works because we’re not hashing raw database values — we’re hashing derived outputs. The final listing price, for example, isn’t just regular_price from the database. It’s the result of a multi-stage pipeline:

WC regular_price → markup mode → rounding rules → minimum price floor → MAP adjustment → offer cascade → GunBroker constraints

So when Garidium writes a new cost to wp_postmeta, that value flows through the pipeline, the derived listing price changes, the hash changes, and the sync fires — even though no WordPress hook ever told us anything happened.
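To make that concrete, here is a simplified sketch of the derivation in plain PHP. The stage order follows the diagram above, but the function name, markup mode, rounding rule, and sample values are illustrative assumptions, not the plugin’s actual code:

```php
// Illustrative only: a simplified version of the price derivation pipeline.
// Real markup modes, rounding rules, and MAP handling are richer than this.
function wgbs_derive_listing_price( float $regular_price, array $rules ): float {
    // 1. Markup mode: flat percentage in this sketch.
    $price = $regular_price * ( 1 + $rules['markup_pct'] / 100 );

    // 2. Rounding rule: round up to a .99 ending.
    $price = floor( $price ) + 0.99;

    // 3. Minimum price floor.
    $price = max( $price, $rules['min_price'] );

    // 4. MAP adjustment: never list below the manufacturer's MAP.
    if ( $rules['map'] > 0 ) {
        $price = max( $price, $rules['map'] );
    }

    return round( $price, 2 );
}

// A Garidium cost write that changes regular_price changes the derived
// output, which changes the hash — no hook required.
$rules = array( 'markup_pct' => 10, 'min_price' => 4.99, 'map' => 0 );
echo wgbs_derive_listing_price( 100.00, $rules ); // 110.99
```

The point isn’t the specific rules — it’s that the hash is computed over the output of this function, so any input that moves the output triggers a push.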


Batch Architecture for Large Catalogs

Syncing 70k products isn’t something you do in a single request. The system uses WooCommerce’s built-in Action Scheduler for background execution, with a continuation-based chunking pattern:

  1. Acquire a transient lock, reset progress counters, schedule the first async action
  2. Process the next 50 product IDs — priming the WP object cache first with update_meta_cache() to avoid N+1 queries
  3. Schedule the next continuation and exit
  4. Repeat until the catalog is exhausted, then clear the lock and log a summary
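The steps above can be sketched as a continuation worker built on Action Scheduler’s as_enqueue_async_action(). The hook name, option keys, and the wgbs_maybe_sync_product() helper are illustrative assumptions:

```php
// Illustrative continuation worker. Hook and option names are assumptions.
function wgbs_process_batch() {
    $offset = (int) get_option( 'wgbs_sync_offset', 0 );

    // Pull the next 50 product IDs.
    $ids = wc_get_products( array(
        'limit'  => 50,
        'offset' => $offset,
        'status' => 'publish',
        'return' => 'ids',
    ) );

    if ( empty( $ids ) ) {
        delete_transient( 'wgbs_sync_lock' ); // Catalog exhausted.
        delete_option( 'wgbs_sync_offset' );
        return;
    }

    // Prime the WP object cache once per chunk to avoid N+1 meta queries.
    update_meta_cache( 'post', $ids );

    foreach ( $ids as $product_id ) {
        wgbs_maybe_sync_product( $product_id ); // Hash compare + push.
    }

    update_option( 'wgbs_sync_offset', $offset + 50 );

    // Schedule the next continuation and exit.
    as_enqueue_async_action( 'wgbs_sync_continue' );
}
add_action( 'wgbs_sync_continue', 'wgbs_process_batch' );
```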

A stall watchdog runs on every admin page load: if a batch has been “running” for more than 15 minutes with no pending continuation in Action Scheduler, it re-acquires the lock and kicks off a fresh continuation. This handles edge cases like server restarts mid-batch without requiring manual intervention.
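In sketch form, the watchdog might look like this, using Action Scheduler’s as_has_scheduled_action() to detect the missing continuation. The lock, option, and hook names are illustrative assumptions; the 15-minute threshold is the real one:

```php
// Illustrative stall watchdog, run on admin page loads.
// Lock, option, and hook names are assumptions.
add_action( 'admin_init', function () {
    $started = (int) get_option( 'wgbs_sync_started_at', 0 );

    // No batch in flight, or it started under 15 minutes ago: nothing to do.
    if ( ! get_transient( 'wgbs_sync_lock' )
        || time() - $started < 15 * MINUTE_IN_SECONDS ) {
        return;
    }

    // Batch claims to be running but no continuation is queued: it stalled.
    if ( ! as_has_scheduled_action( 'wgbs_sync_continue' ) ) {
        set_transient( 'wgbs_sync_lock', 1, HOUR_IN_SECONDS ); // Re-acquire.
        update_option( 'wgbs_sync_started_at', time() );
        as_enqueue_async_action( 'wgbs_sync_continue' );       // Fresh continuation.
    }
} );
```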


The Force Resync Escape Hatch

Hash-based diffing is efficient, but there are moments when you want a guaranteed full-catalog push — after a bulk pricing update, after changing a global markup rule, or just for peace of mind.

The admin UI has a “Force Price Re-Sync” button. When clicked, it:

  1. Cancels any in-progress inventory batch
  2. Sets a global flag: wgbs_force_price_resync = 1
  3. Immediately enqueues a new sync run
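Those three steps can be sketched as a single handler. The function and hook names are illustrative assumptions; the flag name comes from the text, and storing it as an option is assumed:

```php
// Illustrative force-resync handler. Function and hook names are assumptions.
function wgbs_handle_force_resync() {
    as_unschedule_all_actions( 'wgbs_sync_continue' ); // 1. Cancel in-progress batch.
    update_option( 'wgbs_force_price_resync', '1' );   // 2. Set the global flag.
    as_enqueue_async_action( 'wgbs_sync_continue' );   // 3. Enqueue a fresh run.
}
```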

The hash comparison logic checks for this flag:

$force_resync = get_option( 'wgbs_force_price_resync' );

if ( $current_hash === $last_hash && '1' !== $force_resync ) {
    return true; // Unchanged since last push and no force flag — skip.
}

When the flag is set, every product pushes to the API regardless of hash match. The flag clears automatically once the batch completes.


Why This Architecture Holds Up

The key design property is that the system doesn’t care who changed a value or how they changed it. It only cares whether the API-facing output is different from what was last pushed. That makes it resilient to:

  • Third-party services writing directly to the database (Garidium)
  • Admin changes to global markup or rounding rules
  • MAP rule changes at the manufacturer level
  • Stock adjustments from any source

Any change, anywhere in the pipeline, collapses into the same fingerprint comparison. The sync either fires or it doesn’t, based entirely on whether the answer changed — not on whether WordPress knew a question was asked.