What website monitoring actually means
“Website monitoring” collapses at least four different practices into a single bucket. Before picking tools, figure out which you actually need - most teams need more than one:
- Uptime monitoring - is the site responding with a 2xx?
- Change detection - has the content changed since last check?
- Performance monitoring - page weight, core web vitals, TTFB.
- Synthetic testing - is a multi-step user flow still functional?
Uptime tools (Pingdom, UptimeRobot, StatusCake) answer the first question. Change-detection tools (OnChange, Visualping, Distill) answer the second. They overlap in the margins but aren't substitutes - a site can be 100% up and completely broken, and a site can have a typo fix flagged by change detection that doesn't move any uptime needle.
The four categories you can monitor
Public web pages
Marketing pages, pricing pages, product pages, blog posts, terms of service. The classic change-detection use case. Tools that do this well handle JavaScript-rendered content via a real browser (not regex over the HTML).
REST and JSON APIs
Your own APIs or third-party endpoints you integrate with. The change you care about isn't visual; it's whether a field disappeared, a type changed, or the response shape drifted. Tools supporting this category let you configure HTTP method, headers, auth, and body, then diff against the previous JSON response structurally.
Screenshots of entire pages
For layout regressions, visual brand changes, or monitoring pages where the underlying HTML is obfuscated (think SPA frameworks with hashed class names). You capture a pixel-level screenshot and diff against the previous one.
Structural state (sitemaps, nav, page count)
Often ignored, but important for SEO and competitive intelligence. When a site silently ships a new navigation structure, adds a page to its sitemap, or removes a product category, that's a signal. Crawl-based change detection (site watches, sitemap diffs) handles this category.
Uptime monitoring vs change detection
The confusion here costs teams real money. Quick distinction:
- Uptime monitoring is a ping. It answers “is the HTTP layer alive?” at 30-second to 5-minute intervals across geographic regions, and pages you when a 5xx shows up.
- Change detection is a comparison. It fetches a page, parses it, and asks “is this different from the last time I saw it?” The HTTP layer can be perfectly healthy and the pricing page can still be silently wrong.
You want both. The teams that get burned are the ones who bought uptime monitoring, assumed it covered “content accuracy,” and discovered a week later that marketing changed the pricing block and nobody noticed.
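The distinction fits in a few lines of Python - an illustrative sketch only, since real monitors add retries, timeouts, and persistent storage:

```python
import hashlib

def uptime_check(status_code):
    """Uptime monitoring: is the HTTP layer alive (any 2xx)?"""
    return 200 <= status_code < 300

def change_check(previous_hash, body):
    """Change detection: is the content different from last time?
    Returns (changed, new_hash); store new_hash for the next check.
    The first check establishes a baseline and never fires."""
    new_hash = hashlib.sha256(body.encode()).hexdigest()
    changed = previous_hash is not None and new_hash != previous_hash
    return changed, new_hash
```

A 503 trips the first function; a silent pricing edit trips only the second.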
We cover this in more depth in a dedicated post.
Check intervals - how often should you poll?
Polling too often burns budget and risks rate-limiting the target. Polling too rarely defeats the point. A rough calibration:
- Every 5-30 seconds - flash sales, breaking-news sites, APIs feeding real-time features. Paid plans only on most providers.
- Every 1-5 minutes - product pages, pricing pages, high-traffic marketing pages. The default for most commerce monitoring.
- Every 15-60 minutes - competitor blogs, career pages, status pages. Changes matter but aren't time-critical.
- Daily to weekly - terms of service, privacy policies, sitemaps, regulatory pages. Compliance-driven.
- Monthly / quarterly - long-form content, documentation, rarely-updated pages. Mostly for an audit trail.
A hidden cost: visual diffs and full a11y scans are more expensive per check than text diffs. Don't run them every 30 seconds unless you have a specific reason.
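The budget math behind that calibration is simple - this sketch uses illustrative interval names and values, not any tool's real API:

```python
# Hypothetical per-monitor interval presets, in seconds.
INTERVALS = {
    "flash_sale": 15,
    "pricing_page": 120,
    "competitor_blog": 1800,
    "terms_of_service": 86400,  # daily
}

def checks_per_day(interval_seconds):
    """How many checks per day one monitor costs at a given interval."""
    return 86400 // interval_seconds
```

A 15-second monitor runs 5,760 checks a day; a daily one runs a single check - a 5,760x cost difference before visual diffs or a11y scans multiply it further.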
CSS selectors and the noise problem
The single biggest source of monitoring alert fatigue is watching the wrong part of the page. Example: you watch the whole homepage for a “price change” - and get alerted every time the recommended-products carousel rotates, the server timestamp updates, or the logged-in user's name renders.
CSS selectors are the fix. Point the monitor at just the element that matters - .pricing-table, #hero-headline, main article h1. Tools with a visual element picker make this painless; tools without one make you paste selectors by hand.
A related tactic: ignore-selectors. Monitor the full page but tell the tool “ignore .ad-banner, .timestamp, .cart-count”. Useful when the meaningful region is large or doesn't have a clean selector.
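The scoping idea can be sketched with Python's stdlib parser - a toy fingerprint of one class-selected region that skips ignored subtrees. The class names are hypothetical, and a production monitor would use a real selector engine (this sketch does not handle void tags like br or img, which unbalance the depth counters):

```python
from html.parser import HTMLParser
import hashlib

class RegionExtractor(HTMLParser):
    """Collects text inside elements carrying the `target` class,
    skipping any subtree whose class is in `ignore`."""
    def __init__(self, target, ignore=()):
        super().__init__()
        self.target, self.ignore = target, set(ignore)
        self.depth = 0       # nesting depth inside the watched region
        self.skip_depth = 0  # nesting depth inside an ignored subtree
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        classes = set((dict(attrs).get("class") or "").split())
        if self.skip_depth:
            self.skip_depth += 1
        elif self.ignore & classes:
            self.skip_depth = 1          # enter an ignored subtree
        elif self.depth:
            self.depth += 1
        elif self.target in classes:
            self.depth = 1               # enter the watched region

    def handle_endtag(self, tag):
        if self.skip_depth:
            self.skip_depth -= 1
        elif self.depth:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth and not self.skip_depth:
            self.chunks.append(data.strip())

def region_fingerprint(html, target, ignore=()):
    """Hash only the text of the watched region, minus ignored parts."""
    parser = RegionExtractor(target, ignore)
    parser.feed(html)
    text = " ".join(c for c in parser.chunks if c)
    return hashlib.sha256(text.encode()).hexdigest()
```

With the timestamp ignored, a clock tick produces an identical fingerprint and no alert, while a price edit changes it.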
Visual diffs: when they help, when they waste your time
Visual (screenshot) diffs compare pixels between two full-page screenshots. They catch things text diffs miss - a layout shift, a color change, a missing image, a font swap. They also generate a lot of noise on pages with dynamic content.
Use visual diffs when:
- You're monitoring a brand-critical page (homepage, landing page, checkout).
- The HTML is hashed/obfuscated and text diffs are unreliable.
- You want to catch layout regressions after a deploy.
Prefer text diffs when:
- The page has animations, carousels, or timestamps that change on every load.
- You're watching a specific content region that has stable markup.
- Budget matters (visual diffs cost more to compute and store).
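At its core a visual diff is a pixel comparison with a tolerance. A minimal sketch, abstracting away screenshot capture and image decoding - in practice the pixel tuples would come from a library such as Pillow:

```python
def diff_ratio(pixels_a, pixels_b, tolerance=10):
    """Fraction of pixels whose per-channel difference exceeds `tolerance`.
    Inputs are equal-length sequences of (r, g, b) tuples."""
    if len(pixels_a) != len(pixels_b):
        return 1.0  # dimensions changed: treat as a full-page change
    changed = sum(
        1 for a, b in zip(pixels_a, pixels_b)
        if any(abs(ca - cb) > tolerance for ca, cb in zip(a, b))
    )
    return changed / len(pixels_a)
```

The tolerance is what separates a usable visual monitor from a noisy one: anti-aliasing and font rendering shift pixels by a few values on every capture, so exact equality fires constantly.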
API and JSON monitoring
Most third-party APIs change without warning: fields get deprecated, types shift from string to number, rate limits tighten. If your integration relies on a specific response shape, API monitoring is the cheapest insurance policy in the category.
Look for tools that let you configure the full request (method, headers, body), handle authentication tokens, and diff the response structurally - a good tool shows you not “the bytes changed” but “the total_items field was removed, three new fields appeared, and user.id changed type from int to string.”
OnChange's API monitoring use-case page walks through a concrete example.
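A structural diff like that reduces to mapping every JSON path to its type and comparing the two maps - a toy sketch, with illustrative field names:

```python
def shape(value, path="$"):
    """Map every path in a JSON value to its Python type name."""
    out = {path: type(value).__name__}
    if isinstance(value, dict):
        for key, child in value.items():
            out.update(shape(child, f"{path}.{key}"))
    elif isinstance(value, list) and value:
        out.update(shape(value[0], f"{path}[0]"))  # sample the first element
    return out

def shape_diff(prev, curr):
    """Report removed paths, added paths, and paths whose type changed."""
    a, b = shape(prev), shape(curr)
    return {
        "removed": sorted(a.keys() - b.keys()),
        "added": sorted(b.keys() - a.keys()),
        "retyped": sorted(p for p in a.keys() & b.keys() if a[p] != b[p]),
    }
```

Byte-level diffing would flag every reordered key and refreshed timestamp; this shape comparison fires only when the contract your integration depends on actually drifts.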
Alert routing without burning out your team
Every change-detection tool will happily fire thousands of alerts. The practical art is shaping that firehose into something actionable:
- One channel per audience. Marketing doesn't need API-regression alerts; on-call engineers don't need pricing-block typos.
- Threshold rules. A 1% diff on a blog post is noise; a 40% diff on checkout is an outage. Good tools let you set per-monitor thresholds.
- Category filters. AI-categorized changes (pricing / content / layout / technical) can route to different destinations so product alerts don't drown ops alerts.
- Escalation paths. Email for the marketing review queue; Slack for the shared engineering channel; PagerDuty / Opsgenie only for genuinely on-call worthy events.
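Combined, threshold rules and category routing amount to a small dispatch function - channel names and threshold values below are hypothetical:

```python
ROUTES = {  # hypothetical per-category destinations
    "pricing": "slack:#product",
    "technical": "pagerduty:oncall",
    "content": "email:marketing-review",
}

def route_alert(category, diff_pct, page, thresholds=None):
    """Suppress sub-threshold diffs; send the rest to the category's channel.
    Returns (destination, message), or None when suppressed."""
    thresholds = thresholds or {"content": 5.0}  # e.g. ignore <5% blog edits
    if diff_pct < thresholds.get(category, 0.0):
        return None  # below this category's threshold: noise
    destination = ROUTES.get(category, "email:default")
    return destination, f"{page}: {diff_pct:.0f}% {category} change"
```

A 1% blog edit returns None and wakes nobody; a 40% pricing change lands in the product channel.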
Accessibility: the category most teams skip
WCAG 2.2 is the current baseline, Section 508 and the European Accessibility Act both lean on it, and ADA litigation is rising year over year. Change detection is the mechanism that keeps your conformance claim honest between formal audits.
A good workflow: run an initial full scan, mark it as the attested baseline, then auto-scan on every detected content change. Regressions get flagged with the specific WCAG criterion affected and the selector that broke. Fix fast, attest again, repeat.
See our WCAG 2.2 checklist. Or read how OnChange's a11y scanner works.
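The baseline-and-rescan loop reduces to a set comparison - a sketch, assuming each issue is a (WCAG criterion, selector) pair, which is an illustrative shape rather than any scanner's real output:

```python
def compare_to_baseline(baseline_issues, scan_issues):
    """Diff a fresh a11y scan against the attested baseline.
    New issues are regressions to flag; absent ones were fixed
    and can be folded into the next attested baseline."""
    baseline, current = set(baseline_issues), set(scan_issues)
    return {
        "regressions": sorted(current - baseline),
        "fixed": sorted(baseline - current),
    }
```

Without the attested baseline, every pre-existing issue resurfaces on every scan - which is exactly the "everything looks like a regression forever" pitfall below.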
Attribution: knowing why something changed
Every change-detection tool tells you what changed. Very few tell you why. But “why” is the question you're actually asking when the alert fires - which deploy caused this, which commit is responsible, which third-party script rev just broke everything.
Attribution closes that loop. At its simplest, it's correlation: for every detected change, look at what deploys, commits, or vendor script updates happened in the preceding window, and present the candidates alongside the alert. Done well, it collapses the post-detection forensics from half an hour to 30 seconds.
OnChange ships causal attribution via Git commit correlation + Claude-generated verdicts.
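The correlation step itself is simple to sketch: collect every deploy, commit, or vendor-script event in the window before the change and rank the freshest first. The 60-minute window and the event shape are arbitrary examples:

```python
from datetime import datetime, timedelta

def attribute_change(change_time, events, window_minutes=60):
    """Return events from the `window_minutes` before the change,
    most recent first - the candidate causes to show with the alert.
    Each event is a dict with a "time" datetime (illustrative shape)."""
    window = timedelta(minutes=window_minutes)
    candidates = [
        e for e in events
        if timedelta(0) <= change_time - e["time"] <= window
    ]
    return sorted(candidates, key=lambda e: e["time"], reverse=True)
```

Events after the change or outside the window are excluded, so a deploy from three hours earlier never shows up as a suspect.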
Common pitfalls
- Monitoring the whole page instead of a CSS-targeted region - every unrelated change fires.
- Polling too fast on a static page - wasting budget and rate-limiting yourself.
- Polling too slow on a flash-sale page - missing the event entirely.
- Sending every alert to one shared Slack channel - nobody triages it.
- Not versioning your alert rules - when false-positive rates climb you can't tell what changed.
- Skipping the baseline approval step on a11y scans - every change looks like a regression forever.
- Using an extension-based tool for production monitoring - checks stop when the browser closes.
How to pick a tool
A short decision tree:
- Are you monitoring fewer than 5 pages for personal use? A browser extension is probably enough - they're free and good at what they do.
- Do you need monitoring that runs 24/7, not tied to a browser? Cloud-based tools only.
- Do you need sub-minute checks? Filter to tools that advertise 30s or faster. Many consumer tools cap at 5-minute minimums.
- Do you need API / JSON monitoring alongside page monitoring? Check the tool's API endpoint support explicitly - not every change-detection tool does this well.
- Do you need WCAG accessibility scans in the same workflow? Filter further - most change-detection tools don't integrate a11y.
- Do you want to script monitors from CI/CD? You need a REST API that covers every operation, not a subset.
- Agency or in-house team with multiple clients? Look for per-client tags, branded reports, and no per-client pricing surcharge.
If most of those are yes, compare OnChange against the alternatives - we built it for exactly this intersection of needs.