
    n8n Node

    Community node for workflow automation. Drag a node onto your canvas, connect your API key, and pipe live web data anywhere — no code required.

    Overview

    n8n-nodes-alterlab is a community node published on npm. Once installed, an AlterLab node appears in the n8n node picker. Drop it into any workflow to scrape a URL, extract structured data with a JSON schema or natural language prompt, and wire the result to Slack, Google Sheets, a database, or any other n8n-connected service.

    No Code

    Configure scrape targets, output formats, and extraction rules entirely through n8n's visual UI.

    Full Anti-Bot

    Every node execution runs through AlterLab's tier escalation — Cloudflare, CAPTCHA, and JS-heavy sites handled automatically.

    Any Destination

    Feed scraped data directly to Slack, Google Sheets, Postgres, Airtable, webhooks, or any of n8n's 400+ integrations.

    Installation

There are three ways to install the node, depending on how you run n8n.

    Option 1 — n8n Cloud or Docker (recommended)

    Use the built-in community node installer in the n8n UI:

    1. Open your n8n instance and go to Settings → Community Nodes.
    2. Click Install a community node.
    3. Enter n8n-nodes-alterlab and click Install.
    4. Reload the editor — the AlterLab node will appear in the palette.

    n8n Cloud restriction

Not every n8n Cloud plan allows community node installation. If your plan restricts community nodes, use the self-hosted path below.

    Option 2 — Self-hosted CLI

Install the package into the same Node.js environment where n8n runs, then restart n8n:

    Bash
    npm install n8n-nodes-alterlab
    
    # Restart n8n after installation
    n8n start

    Option 3 — Build from source

    For local development or to pin a specific commit:

    Bash
    git clone https://github.com/RapierCraft/n8n-nodes-alterlab.git
    cd n8n-nodes-alterlab
    npm install
    npm run build
    
    # Link into your local n8n install
    npm link
    cd /path/to/your/n8n
    npm link n8n-nodes-alterlab

    Authentication

    The AlterLab node supports two credential types. Both are configured under Credentials → New Credential in n8n.

    API Key

    Simple

    Best for personal workflows or single-user setups.

    1. Go to your AlterLab Dashboard → API Keys and copy your key.
    2. In n8n, create a new AlterLab API credential.
    3. Paste the key into the API Key field and save.

    OAuth2

    Recommended for teams

    Best for shared n8n instances where multiple team members run workflows.

    1. Create a new AlterLab OAuth2 credential in n8n.
    2. Click Connect — you will be redirected to AlterLab to sign in.
    3. Approve access. n8n stores and refreshes the token automatically.

    Operations

    The node exposes two operations selectable from the Operation dropdown inside the node editor.

    Scrape

    Default

    Fetches a URL and returns its content in one or more output formats. Supports full anti-bot tier escalation, JS rendering, proxies, screenshot capture, and structured data extraction. This is the operation you will use for the vast majority of workflows.

    Estimate Cost

    Previews the credit cost for a scrape request without actually executing it. Useful for building cost-gate logic in workflows — for example, only scraping when the estimate falls within budget. Returns a breakdown of base cost plus any add-on charges (rendering, proxies, screenshots).
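The cost-gate pattern can be sketched as a small Code-node helper that sums the estimate and compares it against a budget. The response shape below (a `base` credit cost plus per-add-on charges in `addons`) is an assumption for illustration, not a documented contract of the Estimate Cost output.

```javascript
// Sketch of cost-gate logic for an n8n Code node, run between an
// Estimate Cost call and the actual Scrape. The estimate shape
// (base credits plus add-on charges) is a hypothetical example.
function totalCredits(estimate) {
  const addons = Object.values(estimate.addons || {});
  return estimate.base + addons.reduce((sum, cost) => sum + cost, 0);
}

function shouldScrape(estimate, budget) {
  return totalCredits(estimate) <= budget;
}

// Example: base cost of 10 credits plus rendering and screenshot add-ons.
const estimate = { base: 10, addons: { renderJs: 6, screenshot: 2 } };
console.log(shouldScrape(estimate, 20)); // true: 18 credits <= 20 budget
```

An IF node downstream can then route only the items where the gate returned true into the Scrape operation.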

    Output Options

    Configure what the node returns under the Output section of the node editor.

formats
Values: markdown, json, html, text. One or more output formats. Default: [markdown, json]. JSON includes structured metadata; markdown is best for LLM pipelines.

timeout
Values: 1–300 seconds. Maximum time to wait for page load and rendering. Default: 90. Increase for slow or JS-heavy pages.

cache
Values: enabled/disabled, TTL 60–86400 seconds. Enable response caching to avoid re-scraping the same URL within a workflow run. Set the TTL in seconds (default: disabled). Cached hits use zero credits.
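A downstream Code node can pull out whichever format the workflow needs. The sketch below assumes each requested format appears as a top-level field on `item.json` in the node's output; that field layout is an assumption for illustration, not a documented contract.

```javascript
// Hypothetical sketch of a Code node that follows the AlterLab node and
// collects one requested format per item. Assumes each n8n item exposes
// the requested formats as top-level fields on item.json.
function pickFormat(items, format) {
  return items
    .map((item) => item.json[format])
    .filter((content) => content !== undefined);
}

// Example with mock items shaped like the assumed output:
const items = [
  { json: { url: "https://example.com", markdown: "# Example Domain" } },
  { json: { url: "https://example.org", markdown: "# Example Org" } },
];
console.log(pickFormat(items, "markdown")); // ["# Example Domain", "# Example Org"]
```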

    Advanced Options

    Expand the Advanced section in the node editor to access rendering, proxy, and interaction controls.

    renderJs

    Enable headless browser rendering for JavaScript-heavy pages (React, Vue, SPAs). Adds +$0.0006 per request. Required for sites where content loads after the initial HTML response.

    screenshot

Capture a full-page PNG screenshot alongside the scrape result. Adds +$0.0002 per request. The image is returned as a base64-encoded string in the node output, ready to attach to emails or upload to storage.

    useProxy

Route the request through a residential proxy pool. Adds +$0.0002 per request. Improves success rates on geo-restricted or bot-protected sites. Pair with proxyCountry to target a specific region.

    proxyCountry

    Two-letter country code for geo-targeted proxy exit nodes. Supported values include US, DE, GB, JP, FR, CA, AU, and 30+ more. Requires useProxy to be enabled.

    waitCondition

    Controls when the headless browser considers the page ready. Options: networkidle (no network activity for 500ms — most reliable), domcontentloaded (fast, before deferred scripts run), load (all resources including images). Only relevant when renderJs is enabled.

    removeCookieBanners

    Automatically detect and dismiss cookie consent overlays before capturing content. Enabled by default. Disable only if the site requires a specific consent state for content access.

    Extraction & Structured Data

    The node can extract structured data from scraped pages using three approaches, configured under the Extraction section.

    Built-in profiles

    Select a pre-built extraction profile to instantly get clean, structured output for common page types — no schema writing required.

auto, product, article, job_posting, faq, recipe, event

    The auto profile detects the page type and applies the best matching extractor. product returns name, price, currency, availability, images, and reviews. article returns title, author, publish date, and body text.

    Custom JSON Schema

    Define exactly what fields to extract by providing a JSON Schema. AlterLab maps page content to your schema and returns a validated JSON object. Ideal for repeatable pipelines where you own the output shape.

    JSON
    {
      "type": "object",
      "properties": {
        "title":       { "type": "string" },
        "price":       { "type": "number" },
        "currency":    { "type": "string" },
        "in_stock":    { "type": "boolean" },
        "rating":      { "type": "number" },
        "review_count":{ "type": "integer" }
      },
      "required": ["title", "price"]
    }
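Before writing extracted objects to a downstream store, it can be worth re-checking them in a Code node. A real pipeline would use a full JSON Schema validator such as Ajv; the minimal sketch below only verifies that the required keys from the schema above exist and have the expected primitive type.

```javascript
// Minimal, illustrative check of a scraped result against the required
// fields of a JSON Schema. Not a full validator; a production workflow
// should use a library such as Ajv instead.
const schema = {
  type: "object",
  properties: {
    title: { type: "string" },
    price: { type: "number" },
    in_stock: { type: "boolean" },
    review_count: { type: "integer" },
  },
  required: ["title", "price"],
};

function meetsSchema(schema, data) {
  return (schema.required || []).every((key) => {
    const expected = schema.properties[key].type;
    const actual = typeof data[key];
    // JSON Schema "integer" and "number" both map to JS numbers.
    return expected === "integer" || expected === "number"
      ? actual === "number"
      : actual === expected;
  });
}

console.log(meetsSchema(schema, { title: "Widget", price: 9.99 })); // true
console.log(meetsSchema(schema, { title: "Widget" }));              // false
```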

    Natural language prompts

    Describe what you want in plain English. The extraction engine interprets the prompt against the scraped content and returns the result as structured JSON. Good for one-off or exploratory workflows.

    TEXT
    Extract the product name, current sale price, original price,
    discount percentage, and whether the item ships for free.

    Extraction credits

Structured extraction (all three methods) uses the Cortex extraction engine and is included in the base scrape cost on Starter plans and above. On the Free plan, custom JSON schemas and prompts consume additional credits.

    Cost Controls

    Use these parameters to cap spend and tune the tier escalation behaviour for each node. Found under Cost Controls in the node editor.

maxCredits
Type: number. Hard credit ceiling per request. If escalation would exceed this, the request fails fast rather than spending over budget. Useful for bulk workflows where a runaway cost is unacceptable.

forceTier
Type: 1–4. Skip escalation and run exactly this tier. Use when you know the target always needs a specific tier (e.g., forceTier: 3 for a known Cloudflare-protected site).

maxTier
Type: 1–4. Allow escalation but cap it at this tier. Balances success rate against cost — for example, maxTier: 2 prevents spending on premium anti-bot passes for low-priority sources.

preferCost
Type: boolean. Hint to the escalation engine to bias toward lower-cost tiers when multiple options have similar predicted success rates. Mutually exclusive with preferSpeed.

preferSpeed
Type: boolean. Hint to bias toward faster tiers even if they cost slightly more. Use for time-sensitive monitoring workflows where latency matters more than per-request cost.

    forceTier overrides intelligence

    Setting forceTier bypasses AlterLab's learned domain strategies. Only use it when you have confirmed the tier requirement from prior runs or from Cortex analytics.
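In bulk workflows, these parameters are often set per item rather than once per node. The sketch below shows a hypothetical Code-node helper that assigns cost controls based on an item's priority; the parameter names come from the table above, while the `priority` field and the thresholds are illustrative assumptions.

```javascript
// Hypothetical helper for a Code node that assigns cost-control
// parameters per item before the AlterLab node runs. Thresholds and
// the "priority" classification are illustrative, not prescribed.
function costControls(priority) {
  if (priority === "high") {
    // Time-sensitive sources: allow full escalation, bias toward speed.
    return { maxTier: 4, preferSpeed: true };
  }
  // Low-priority bulk sources: cap both the tier and the spend, and
  // bias toward cost. preferCost and preferSpeed are mutually
  // exclusive, so only one hint is ever set.
  return { maxTier: 2, maxCredits: 5, preferCost: true };
}

console.log(costControls("high")); // { maxTier: 4, preferSpeed: true }
```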

    Example Workflows

    These are common patterns that n8n users build with the AlterLab node. Import any of them from the n8n workflow library by searching AlterLab.

    1

    Scrape product prices → Google Sheets

    A Schedule Trigger fires daily. The AlterLab node scrapes a list of product URLs (supplied by a Code node or a Google Sheets lookup) using the product extraction profile. The resulting {name, price, in_stock} objects are written back to a Google Sheets row via the Google Sheets node. A final IF node sends a Slack alert if any price dropped more than 10%.

    Schedule Trigger→AlterLab (product profile)→Google Sheets→IF (price drop)→Slack
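The "price drop" condition in the IF step can be sketched as a small helper. The function below is an illustrative implementation of the 10% rule, comparing the previously stored sheet price with the freshly scraped one; it is not part of the node itself.

```javascript
// Sketch of the IF-node condition from workflow 1: flag items whose
// newly scraped price dropped more than thresholdPct versus the value
// previously stored in the sheet.
function priceDropped(oldPrice, newPrice, thresholdPct = 10) {
  if (!oldPrice || oldPrice <= 0) return false; // no baseline to compare
  return ((oldPrice - newPrice) / oldPrice) * 100 > thresholdPct;
}

console.log(priceDropped(100, 85)); // true  (15% drop)
console.log(priceDropped(100, 95)); // false (5% drop)
```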
    2

    Monitor news articles → Slack digest

    An RSS Trigger (or Webhook) receives new article URLs. The AlterLab node scrapes each URL with the article profile, extracting title, author, publish date, and a clean markdown body. A Summarize node (or an OpenAI node) condenses the markdown to three bullet points. The result is posted to a Slack channel as a formatted digest card. Runs entirely without a paid news API subscription.

    RSS / Webhook→AlterLab (article profile)→OpenAI (summarize)→Slack
    3

    Extract job listings → database

    A Schedule Trigger runs every 6 hours. A Code node builds a list of job board search URLs. The AlterLab node scrapes each with a custom JSON schema requesting {title, company, location, salary_range, apply_url} and renderJs: true for JS-rendered boards. A Merge node deduplicates on apply_url, and new records are inserted into Postgres via the Postgres node. A final Webhook node notifies a downstream service of the new batch.

    Schedule Trigger→AlterLab (JSON schema)→Merge (dedup)→Postgres→Webhook
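The dedup step can be expressed as a Code-node helper that keeps the first listing seen per apply_url. The field name follows the custom schema described in the workflow; the plain-array input is a simplification for illustration.

```javascript
// Sketch of the dedup step from workflow 3: keep only the first
// listing per apply_url, dropping reposts of the same job.
function dedupeByApplyUrl(listings) {
  const seen = new Set();
  return listings.filter((job) => {
    if (seen.has(job.apply_url)) return false;
    seen.add(job.apply_url);
    return true;
  });
}

const listings = [
  { title: "Backend Engineer", apply_url: "https://jobs.example/1" },
  { title: "Backend Engineer (repost)", apply_url: "https://jobs.example/1" },
  { title: "Data Analyst", apply_url: "https://jobs.example/2" },
];
console.log(dedupeByApplyUrl(listings).length); // 2
```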

    Template library

    All three workflows above are available as one-click templates in the n8n workflow library. Search AlterLab in the Templates tab of your n8n instance to import them directly.
    Last updated: March 2026
