
    Real-Time Monitoring Dashboard

    Build a complete monitoring system that tracks competitor pages for content changes, sends real-time webhook notifications, and displays change history on a custom dashboard.

    Prerequisites

    This tutorial combines several AlterLab features. Familiarity with the Change Detection and Webhooks guides is helpful but not required.

    Overview

    A real-time monitoring dashboard has four main components working together:

    1. Monitors

    Automated watchers that check target URLs on a schedule and detect content changes using semantic, exact, or CSS selector diffs.

    2. Webhooks

    Real-time notifications delivered to your endpoint when a monitor detects a change or encounters an error.

    3. Alerts

    Threshold-based rules that escalate repeated failures or significant changes to Slack, email, or PagerDuty.

    4. Dashboard

    A visual interface that aggregates change history, monitor health status, and diff timelines for quick decision-making.
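    These components communicate through webhook events. As an illustration, a `monitor.change_detected` event carries fields like the following (field names match what the webhook handler in Step 2 consumes; the IDs here are made up, and real payloads may include additional keys):

```python
# Illustrative shape of a monitor.change_detected event, based on the
# fields consumed by the webhook handler later in this tutorial.
sample_event = {
    "event": "monitor.change_detected",
    "monitor_id": "mon_abc123",            # hypothetical ID
    "monitor_name": "Competitor A - Pricing",
    "url": "https://competitor-a.com/pricing",
    "diff_mode": "semantic",
    "change_summary": "Pro plan price changed from $49 to $59",
    "timestamp": "2026-03-01T09:00:00Z",
    "snapshot_id": "snap_0124",            # hypothetical ID
    "previous_snapshot_id": "snap_0123",
}

# The dashboard aggregates these events per monitor.
print(sample_event["monitor_name"], "->", sample_event["change_summary"])
```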

    Step 1: Create Monitors

    Start by creating monitors for each page you want to track. Use semantic diff for product pages (ignores layout changes, catches content changes) and selector diff when you only care about specific elements like prices or stock status.

    Choosing a Diff Mode

    Semantic compares the structural meaning of content — ideal for articles, product descriptions, and legal pages. Selector targets specific CSS selectors — perfect for prices, inventory badges, or single data points. See the diff modes reference for details.
    import requests
    
    API_KEY = "YOUR_API_KEY"
    BASE_URL = "https://api.alterlab.io/api/v1"
    HEADERS = {"X-API-Key": API_KEY, "Content-Type": "application/json"}
    
    # Define pages to monitor
    targets = [
        {
            "name": "Competitor A - Pricing",
            "url": "https://competitor-a.com/pricing",
            "diff_mode": "semantic",
            "cron": "0 */6 * * *",  # Every 6 hours
        },
        {
            "name": "Competitor B - Product Page",
            "url": "https://competitor-b.com/product/widget",
            "diff_mode": "selector",
            "css_selector": ".price, .stock-status, .product-title",
            "cron": "0 */2 * * *",  # Every 2 hours
        },
        {
            "name": "Regulatory Filing",
            "url": "https://regulator.gov/filings/latest",
            "diff_mode": "semantic",
            "cron": "0 9 * * 1-5",  # Weekdays at 9 AM
        },
    ]
    
    monitor_ids = []
    for target in targets:
        payload = {
            "name": target["name"],
            "url": target["url"],
            "diff_mode": target["diff_mode"],
            "cron": target["cron"],
            "notify_on": ["change", "error"],
        }
        if "css_selector" in target:
            payload["css_selector"] = target["css_selector"]
    
        resp = requests.post(
            f"{BASE_URL}/monitors", headers=HEADERS, json=payload
        ).json()
        monitor_ids.append(resp["id"])
        print(f"Monitor created: {resp['id']} -> {target['name']}")
    
    print(f"\n{len(monitor_ids)} monitors active.")

    Step 2: Set Up Webhooks

    Register a webhook endpoint to receive real-time notifications when your monitors detect changes. The webhook payload includes the diff details, change severity, and snapshot references.

    Webhook Security

    Always verify the HMAC signature on incoming webhooks to ensure they come from AlterLab. See the signature verification guide for details.
    import hashlib
    import hmac
    import json
    from datetime import datetime
    from flask import Flask, request, jsonify
    
    app = Flask(__name__)
    
    WEBHOOK_SECRET = "your_webhook_secret"
    change_log = []  # In production, use a database
    
    def verify_signature(payload: bytes, signature: str) -> bool:
        """Verify the HMAC-SHA256 signature from AlterLab."""
        expected = hmac.new(
            WEBHOOK_SECRET.encode(), payload, hashlib.sha256
        ).hexdigest()
        return hmac.compare_digest(f"sha256={expected}", signature)
    
    @app.route("/webhooks/monitor", methods=["POST"])
    def handle_monitor_webhook():
        # Verify signature
        signature = request.headers.get("X-AlterLab-Signature", "")
        if not verify_signature(request.data, signature):
            return jsonify({"error": "Invalid signature"}), 401
    
        event = request.json
        event_type = event.get("event")
    
        if event_type == "monitor.change_detected":
            change = {
                "monitor_id": event["monitor_id"],
                "monitor_name": event["monitor_name"],
                "url": event["url"],
                "diff_mode": event["diff_mode"],
                "change_summary": event["change_summary"],
                "changed_at": event["timestamp"],
                "snapshot_id": event["snapshot_id"],
                "previous_snapshot_id": event["previous_snapshot_id"],
                "received_at": datetime.utcnow().isoformat(),
            }
            change_log.append(change)
            print(f"Change detected on {change['monitor_name']}: "
                  f"{change['change_summary']}")
    
            # Check if change is significant enough to alert
            if should_alert(change):
                send_alert(change)
    
        elif event_type == "monitor.error":
            print(f"Monitor error: {event['monitor_name']} - {event['error']}")
            track_failure(event["monitor_id"], event["error"])
    
        return jsonify({"status": "ok"}), 200
    
    def should_alert(change: dict) -> bool:
        """Determine if a change warrants an alert."""
        # Alert on all semantic changes (they filter noise already)
        if change["diff_mode"] == "semantic":
            return True
        # For selector diffs, alert if price or stock changed
        summary = change["change_summary"].lower()
        return any(kw in summary for kw in ["price", "stock", "unavailable"])
    
    def send_alert(change: dict):
        """Send alert to configured channels."""
        # See Step 3 for full implementation
        print(f"ALERT: {change['monitor_name']} changed!")
    
    def track_failure(monitor_id: str, error: str):
        """Track consecutive failures for alerting."""
        # See Step 3 for threshold-based alerting
        pass
    
    if __name__ == "__main__":
        app.run(port=8080)

    Next, register this endpoint with AlterLab:

    # Register webhook for monitor events (reuses BASE_URL and HEADERS from Step 1)
    webhook = requests.post(
        f"{BASE_URL}/webhooks",
        headers=HEADERS,
        json={
            "url": "https://your-app.com/webhooks/monitor",
            "events": ["monitor.change_detected", "monitor.error"],
            "description": "Monitoring dashboard - change notifications",
        },
    ).json()
    
    print(f"Webhook registered: {webhook['id']}")
    print(f"Save this secret for verification: {webhook['secret']}")

    Step 3: Configure Alerts

    Build an alerting layer on top of your webhook receiver. Track repeated failures per monitor within a rolling window and route significant changes to your team via Slack, email, or any HTTP endpoint.

    import requests
    from collections import defaultdict
    from datetime import datetime, timedelta
    
    # Failure tracking (in production, use a database or cache)
    failure_timestamps = defaultdict(list)
    FAILURE_THRESHOLD = 3  # Alert after 3 failures within the window
    FAILURE_WINDOW = timedelta(hours=1)
    
    # Alert channels
    SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK"
    ALERT_EMAIL = "[email protected]"
    
    def track_failure(monitor_id: str, error: str):
        """Track failures within a rolling window and alert past the threshold."""
        now = datetime.utcnow()
    
        # Drop failures that fall outside the rolling window
        failure_timestamps[monitor_id] = [
            ts for ts in failure_timestamps[monitor_id]
            if now - ts < FAILURE_WINDOW
        ]
        failure_timestamps[monitor_id].append(now)
    
        recent_failures = len(failure_timestamps[monitor_id])
        if recent_failures >= FAILURE_THRESHOLD:
            send_failure_alert(monitor_id, error, recent_failures)
            # Clear the window so one incident produces a single alert
            failure_timestamps[monitor_id] = []
    
    def send_failure_alert(monitor_id: str, error: str, count: int):
        """Send failure alert to Slack."""
        requests.post(SLACK_WEBHOOK_URL, json={
            "text": (
                f":red_circle: Monitor Alert\n"
                f"*Monitor*: {monitor_id}\n"
                f"*Failures*: {count} in the last hour\n"
                f"*Last Error*: {error}\n"
                f"*Action*: Check monitor status in the dashboard"
            ),
        })
    
    def send_change_alert(change: dict):
        """Send change alert with context to Slack."""
        severity = "large" if len(change["change_summary"]) > 200 else "small"
        emoji = ":large_orange_diamond:" if severity == "large" else ":small_blue_diamond:"
    
        requests.post(SLACK_WEBHOOK_URL, json={
            "blocks": [
                {
                    "type": "header",
                    "text": {
                        "type": "plain_text",
                        "text": f"{emoji} Change Detected",
                    },
                },
                {
                    "type": "section",
                    "fields": [
                        {"type": "mrkdwn", "text": f"*Monitor:*\n{change['monitor_name']}"},
                        {"type": "mrkdwn", "text": f"*URL:*\n{change['url']}"},
                        {"type": "mrkdwn", "text": f"*Mode:*\n{change['diff_mode']}"},
                        {"type": "mrkdwn", "text": f"*When:*\n{change['changed_at']}"},
                    ],
                },
                {
                    "type": "section",
                    "text": {
                        "type": "mrkdwn",
                        "text": f"*Summary:*\n{change['change_summary'][:500]}",
                    },
                },
            ],
        })

    Step 4: Build the Dashboard

    Query the AlterLab API to pull monitor status, snapshots, and change history. Combine this data into a dashboard view that shows at a glance which pages changed, when, and how.

    import requests
    from datetime import datetime
    
    API_KEY = "YOUR_API_KEY"
    BASE_URL = "https://api.alterlab.io/api/v1"
    HEADERS = {"X-API-Key": API_KEY}
    
    
    def get_dashboard_data():
        """Fetch all data needed for the monitoring dashboard."""
        # 1. Get all monitors with their current status
        monitors = requests.get(
            f"{BASE_URL}/monitors", headers=HEADERS
        ).json()
    
        dashboard = {"monitors": [], "recent_changes": [], "summary": {}}
        total_changes = 0
        active_count = 0
        error_count = 0
    
        for monitor in monitors.get("items", []):
            monitor_id = monitor["id"]
    
            # 2. Get recent snapshots for each monitor
            snapshots = requests.get(
                f"{BASE_URL}/monitors/{monitor_id}/snapshots",
                headers=HEADERS,
                params={"limit": 10},
            ).json()
    
            # 3. Get detected changes
            changes = requests.get(
                f"{BASE_URL}/monitors/{monitor_id}/changes",
                headers=HEADERS,
                params={"limit": 20},
            ).json()
    
            monitor_data = {
                "id": monitor_id,
                "name": monitor["name"],
                "url": monitor["url"],
                "status": monitor.get("status", "unknown"),
                "diff_mode": monitor["diff_mode"],
                "last_checked": monitor.get("last_checked_at"),
                "next_check": monitor.get("next_check_at"),
                "snapshot_count": len(snapshots.get("items", [])),
                "change_count": len(changes.get("items", [])),
                "recent_changes": [],
            }
    
            # Track recent changes with diffs
            for change in changes.get("items", [])[:5]:
                change_entry = {
                    "detected_at": change["detected_at"],
                    "summary": change.get("summary", ""),
                    "diff": change.get("diff", ""),
                    "snapshot_id": change.get("snapshot_id"),
                }
                monitor_data["recent_changes"].append(change_entry)
                dashboard["recent_changes"].append({
                    **change_entry,
                    "monitor_name": monitor["name"],
                    "url": monitor["url"],
                })
    
            total_changes += monitor_data["change_count"]
            if monitor_data["status"] == "active":
                active_count += 1
            elif monitor_data["status"] == "error":
                error_count += 1
    
            dashboard["monitors"].append(monitor_data)
    
        # Sort recent changes by time (newest first)
        dashboard["recent_changes"].sort(
            key=lambda c: c["detected_at"], reverse=True
        )
    
        dashboard["summary"] = {
            "total_monitors": len(dashboard["monitors"]),
            "active": active_count,
            "errors": error_count,
            "total_changes_detected": total_changes,
            "last_updated": datetime.utcnow().isoformat(),
        }
    
        return dashboard
    
    
    def print_dashboard(data):
        """Print a text-based dashboard summary."""
        s = data["summary"]
        print("=" * 60)
        print("  MONITORING DASHBOARD")
        print("=" * 60)
        print(f"  Monitors: {s['total_monitors']} total, "
              f"{s['active']} active, {s['errors']} errors")
        print(f"  Changes detected: {s['total_changes_detected']}")
        print(f"  Last updated: {s['last_updated']}")
        print("-" * 60)
    
        for m in data["monitors"]:
            status_icon = {
                "active": "+", "paused": "~", "error": "!"
            }.get(m["status"], "?")
            print(f"  [{status_icon}] {m['name']}")
            print(f"      URL: {m['url']}")
            print(f"      Mode: {m['diff_mode']} | "
                  f"Changes: {m['change_count']} | "
                  f"Last check: {m['last_checked'] or 'never'}")
    
            if m["recent_changes"]:
                latest = m["recent_changes"][0]
                print(f"      Latest: {latest['summary'][:80]}")
            print()
    
        if data["recent_changes"]:
            print("-" * 60)
            print("  RECENT CHANGES (last 5)")
            print("-" * 60)
            for change in data["recent_changes"][:5]:
                print(f"  [{change['detected_at']}] {change['monitor_name']}")
                print(f"    {change['summary'][:100]}")
                print()
    
    
    # Run the dashboard
    dashboard = get_dashboard_data()
    print_dashboard(dashboard)

    Step 5: Handle Baselines & False Positives

    Not every detected change is meaningful. Dynamic content like timestamps, session IDs, or ad banners can trigger false positives. Here's how to manage baselines and filter noise.

    When to Reset a Baseline

    Reset a monitor's baseline after expected changes like a product relaunch, website redesign, or planned content update. This prevents the monitor from endlessly flagging the same known change.
    import requests
    
    API_KEY = "YOUR_API_KEY"
    BASE_URL = "https://api.alterlab.io/api/v1"
    HEADERS = {"X-API-Key": API_KEY, "Content-Type": "application/json"}
    
    
    # --- Strategy 1: Use CSS selector mode to ignore noise ---
    # Instead of monitoring the full page (which catches ads, timestamps, etc.),
    # target only the content you care about.
    
    def create_focused_monitor(name: str, url: str, selector: str, cron: str):
        """Create a monitor that only watches specific page elements."""
        return requests.post(
            f"{BASE_URL}/monitors",
            headers=HEADERS,
            json={
                "name": name,
                "url": url,
                "diff_mode": "selector",
                "css_selector": selector,
                "cron": cron,
                "notify_on": ["change"],
            },
        ).json()
    
    
    # Example: Only watch the pricing table, ignore everything else
    create_focused_monitor(
        name="Competitor Pricing Table",
        url="https://competitor.com/pricing",
        selector=".pricing-table, .plan-card, .price-amount",
        cron="0 */4 * * *",
    )
    
    # Example: Watch product specs, ignore reviews and ads
    create_focused_monitor(
        name="Product Specs",
        url="https://competitor.com/product/widget",
        selector=".product-specs, .product-title, .product-price",
        cron="0 */6 * * *",
    )
    
    
    # --- Strategy 2: Reset baseline after expected changes ---
    
    def reset_baseline(monitor_id: str):
        """
        Reset the monitor baseline to the current page state.
        Future diffs will compare against this new snapshot.
        """
        # Take a fresh snapshot
        snapshot = requests.post(
            f"{BASE_URL}/monitors/{monitor_id}/snapshots",
            headers=HEADERS,
        ).json()
        print(f"Baseline reset. New snapshot: {snapshot['id']}")
        return snapshot
    
    
    # --- Strategy 3: Filter changes in your webhook handler ---
    
    IGNORE_PATTERNS = [
        "copyright 2025",    # Year updates
        "last updated",      # Timestamp changes
        "session",           # Session tokens in URLs
        "csrf",              # CSRF tokens
        "nonce",             # Script nonces
    ]
    
    def is_meaningful_change(change_summary: str) -> bool:
        """Filter out known false positive patterns."""
        summary_lower = change_summary.lower()
        for pattern in IGNORE_PATTERNS:
            if pattern in summary_lower:
                return False
        # Ignore very small changes (likely dynamic content)
        if len(change_summary) < 20:
            return False
        return True
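    A quick sanity check of the filter against typical change summaries (the function is repeated here so the snippet runs standalone):

```python
IGNORE_PATTERNS = ["copyright 2025", "last updated", "session", "csrf", "nonce"]

def is_meaningful_change(change_summary: str) -> bool:
    """Filter out known false positive patterns."""
    summary_lower = change_summary.lower()
    if any(pattern in summary_lower for pattern in IGNORE_PATTERNS):
        return False
    # Very short diffs are usually dynamic content, not real changes
    return len(change_summary) >= 20

assert is_meaningful_change("Pro plan price changed from $49 to $59")
assert not is_meaningful_change("Copyright 2025 footer refreshed")
assert not is_meaningful_change("ts=1719")  # too short to matter
```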

    Full Pipeline Example

    Here's a complete end-to-end example that ties everything together: creating monitors, registering webhooks, handling changes, and building the dashboard data structure.

    """
    Complete monitoring dashboard pipeline.
    Creates monitors, registers webhooks, and builds a dashboard.
    """
    import requests
    from datetime import datetime
    
    API_KEY = "YOUR_API_KEY"
    BASE_URL = "https://api.alterlab.io/api/v1"
    HEADERS = {"X-API-Key": API_KEY, "Content-Type": "application/json"}
    WEBHOOK_URL = "https://your-app.com/webhooks/monitor"
    
    # --- 1. Create monitors ---
    monitors_config = [
        {
            "name": "Competitor A - Pricing",
            "url": "https://competitor-a.com/pricing",
            "diff_mode": "semantic",
            "cron": "0 */6 * * *",
        },
        {
            "name": "Competitor B - Product",
            "url": "https://competitor-b.com/product/widget",
            "diff_mode": "selector",
            "css_selector": ".price, .stock-status",
            "cron": "0 */2 * * *",
        },
        {
            "name": "Regulatory Page",
            "url": "https://regulator.gov/filings/latest",
            "diff_mode": "semantic",
            "cron": "0 9 * * 1-5",
        },
    ]
    
    monitor_ids = []
    for config in monitors_config:
        payload = {**config, "notify_on": ["change", "error"]}
        resp = requests.post(
            f"{BASE_URL}/monitors", headers=HEADERS, json=payload
        ).json()
        monitor_ids.append(resp["id"])
        print(f"Created monitor: {resp['id']} -> {config['name']}")
    
    # --- 2. Register webhook ---
    webhook = requests.post(
        f"{BASE_URL}/webhooks",
        headers=HEADERS,
        json={
            "url": WEBHOOK_URL,
            "events": ["monitor.change_detected", "monitor.error"],
            "description": "Monitoring dashboard",
        },
    ).json()
    print(f"Webhook registered: {webhook['id']}")
    
    # --- 3. Fetch dashboard data ---
    dashboard = {"monitors": [], "changes": [], "updated": datetime.utcnow().isoformat()}
    for mid in monitor_ids:
        monitor = requests.get(f"{BASE_URL}/monitors/{mid}", headers=HEADERS).json()
        changes = requests.get(
            f"{BASE_URL}/monitors/{mid}/changes",
            headers=HEADERS,
            params={"limit": 10},
        ).json()
    
        dashboard["monitors"].append({
            "id": mid,
            "name": monitor["name"],
            "status": monitor.get("status"),
            "last_checked": monitor.get("last_checked_at"),
            "changes": changes.get("items", []),
        })
    
        for c in changes.get("items", []):
            dashboard["changes"].append({
                "monitor": monitor["name"],
                "detected_at": c["detected_at"],
                "summary": c.get("summary", ""),
            })
    
    # Sort all changes newest first
    dashboard["changes"].sort(key=lambda c: c["detected_at"], reverse=True)
    
    print(f"\nDashboard ready: {len(monitor_ids)} monitors, "
          f"{len(dashboard['changes'])} changes tracked.")

    Use Cases

    Competitor Monitoring

    Track competitor pricing pages, feature lists, and product catalogs. Get alerted when they change prices, add features, or launch new products.

    Recommended: Semantic diff with 6-hour intervals

    Regulatory Compliance

    Monitor government filing pages, policy documents, and regulatory databases. Catch updates to compliance requirements before they take effect.

    Recommended: Semantic diff with daily checks on weekdays

    Stock & Price Alerts

    Watch product pages for price drops, restocks, or availability changes. Perfect for limited-edition items or volatile pricing.

    Recommended: Selector diff targeting .price, .stock-status with 2-hour intervals

    Best Practices

    Use selector mode to reduce noise

    Full-page semantic diffs catch everything, including ads, timestamps, and session tokens. Use CSS selector mode to target only the content that matters to you — prices, product specs, or specific text blocks.

    Set appropriate check intervals

    Match your check frequency to how often the page actually changes. Pricing pages rarely change hourly — every 4–6 hours is usually enough. Regulatory filings need daily checks at most. Over-polling wastes credits.
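    To sanity-check a schedule before deploying it, a small helper can translate the interval-style cron strings used in this tutorial into checks per day. This is a sketch that handles only the `0 */N * * *` and fixed-hour patterns shown above, not full cron syntax:

```python
def checks_per_day(cron: str) -> int:
    """Rough checks-per-day for the simple cron patterns in this tutorial."""
    hour_field = cron.split()[1]
    if hour_field.startswith("*/"):       # e.g. "0 */6 * * *" -> every 6 hours
        return 24 // int(hour_field[2:])
    return len(hour_field.split(","))     # e.g. "0 9 * * 1-5" -> once per day

for cron in ["0 */6 * * *", "0 */2 * * *", "0 9 * * 1-5"]:
    print(cron, "->", checks_per_day(cron), "checks/day")
```

Multiply by your monitor count to estimate daily credit spend before the schedule goes live.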

    Verify webhook signatures

    Always validate the HMAC-SHA256 signature on incoming webhooks. Without verification, anyone can POST fake change events to your endpoint. See the signature verification guide.
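    Before pointing real traffic at your endpoint, you can exercise the verification logic locally by signing a test payload with the same scheme the receiver in Step 2 expects (the secret and payload here are made up):

```python
import hashlib
import hmac

WEBHOOK_SECRET = "test_secret"  # made-up secret for local testing

def sign(payload: bytes) -> str:
    """Produce a signature in the same sha256=<hexdigest> format."""
    digest = hmac.new(WEBHOOK_SECRET.encode(), payload, hashlib.sha256).hexdigest()
    return f"sha256={digest}"

def verify_signature(payload: bytes, signature: str) -> bool:
    """Constant-time comparison against the expected signature."""
    expected = hmac.new(WEBHOOK_SECRET.encode(), payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(f"sha256={expected}", signature)

body = b'{"event": "monitor.change_detected"}'
assert verify_signature(body, sign(body))          # valid signature passes
assert not verify_signature(b"tampered", sign(body))  # altered body fails
print("signature round-trip OK")
```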

    Reset baselines after known changes

    When a monitored page undergoes an expected update (redesign, product launch), reset the baseline to prevent the monitor from repeatedly flagging the same diff. Use the snapshots endpoint to set a new reference point.

    Store change history in your own database

    While AlterLab stores snapshots and changes, keeping a copy in your own database gives you full control over retention, querying, and visualization. Use the webhook handler to persist every change event.
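    A minimal sketch of that persistence layer using SQLite from the standard library (the table name and columns are illustrative; swap in your own database and schema in production):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # use a file path in production
conn.execute("""
    CREATE TABLE IF NOT EXISTS changes (
        monitor_id TEXT,
        monitor_name TEXT,
        url TEXT,
        change_summary TEXT,
        changed_at TEXT,
        snapshot_id TEXT
    )
""")

def persist_change(change: dict):
    """Call this from the webhook handler for every change event."""
    conn.execute(
        "INSERT INTO changes VALUES (?, ?, ?, ?, ?, ?)",
        (change["monitor_id"], change["monitor_name"], change["url"],
         change["change_summary"], change["changed_at"], change["snapshot_id"]),
    )
    conn.commit()

persist_change({
    "monitor_id": "mon_1", "monitor_name": "Competitor A - Pricing",
    "url": "https://competitor-a.com/pricing",
    "change_summary": "Price updated", "changed_at": "2026-03-01T09:00:00Z",
    "snapshot_id": "snap_1",
})
count = conn.execute("SELECT COUNT(*) FROM changes").fetchone()[0]
print(count, "change(s) stored")
```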

    Last updated: March 2026
