
    Error Codes Reference

    Complete reference for AlterLab API errors, status codes, and how to handle them gracefully.

    Error Response Structure

    All errors return a consistent JSON structure with error, message, and optional details fields. Check the HTTP status code first, then parse the response body for details.
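The check-status-then-parse pattern above can be sketched as a small helper. This is illustrative, not an official SDK function; it only assumes the documented error, message, and details fields:

```python
def parse_error(body: dict) -> tuple:
    """Pull the documented fields out of a decoded error body.

    `error` is the machine-readable code, `message` the human-readable
    description, and `details` an optional object with extra context.
    """
    return (
        body.get("error", "UNKNOWN"),
        body.get("message", ""),
        body.get("details"),  # may be absent
    )
```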

    HTTP Status Codes

    Code | Status | Description | Retry?
    200 | OK | Request succeeded (sync response with content) | N/A
    202 | Accepted | Job queued for async processing (poll job_id) | N/A
    204 | No Content | Successful deletion, no response body | N/A
    400 | Bad Request | Invalid request parameters or malformed JSON | No
    401 | Unauthorized | Missing, invalid, or expired API key/token | No
    402 | Payment Required | Insufficient balance or paywall detected | No
    403 | Forbidden | Access denied by target site or blocked by anti-bot | Maybe
    404 | Not Found | Resource not found (job, webhook, etc.) | No
    409 | Conflict | Duplicate resource (webhook URL already exists) | No
    413 | Payload Too Large | Content exceeds size limit (10MB) | No
    415 | Unsupported Media Type | URL content type not supported for requested mode | No
    422 | Unprocessable Entity | All scraping tiers failed or content extraction failed | Maybe
    429 | Too Many Requests | Rate limit exceeded, check Retry-After header | Yes (after delay)
    500 | Internal Error | Unexpected server error | Yes
    502 | Bad Gateway | Upstream error or worker queue issue | Yes
    503 | Service Unavailable | Service temporarily unavailable or target site down | Yes
    504 | Gateway Timeout | Request timed out waiting for response | Yes
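The Retry? column above can be encoded as a small classifier. This is a sketch of one reasonable policy, not part of the API itself; 403 and 422 are "Maybe" because they can succeed with different settings (e.g. a higher max_tier) rather than a blind retry:

```python
# Status classification mirroring the HTTP status code table.
NO_RETRY = {400, 401, 402, 404, 409, 413, 415}
RETRY = {429, 500, 502, 503, 504}
MAYBE = {403, 422}  # may succeed with different request settings

def retry_action(status: int) -> str:
    """Return 'ok', 'retry', 'maybe', or 'fail' for an HTTP status code."""
    if status in (200, 202, 204):
        return "ok"
    if status in RETRY:
        return "retry"
    if status in MAYBE:
        return "maybe"
    return "fail"
```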

    Scraping Errors

    These errors occur during the scraping process. Most result in automatic refunds.

    BLOCKED_BY_ANTIBOT
    403
    Refunded

    The target site detected and blocked the scraping attempt. All tier escalations failed.

    Solution: Try using max_tier: "4" to enable CAPTCHA solving, or contact support for sites with advanced protection.

    CHALLENGE_DETECTED
    403
    Refunded

    A bot detection challenge (Cloudflare, reCAPTCHA, hCaptcha) was encountered.

    Solution: Enable JavaScript rendering with mode: "js" or increase max tier.
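One way to apply this solution is to re-submit the scrape with JavaScript rendering enabled and a higher escalation ceiling. The payload below only uses the mode and max_tier parameters named on this page; check the API reference for the tiers your plan supports:

```python
def challenge_retry_payload(url: str) -> dict:
    """Build a retry payload for a CHALLENGE_DETECTED error."""
    return {
        "url": url,
        "mode": "js",     # render JavaScript before extraction
        "max_tier": "4",  # allow escalation up to CAPTCHA solving
    }
```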

    TIMEOUT
    504
    Refunded

    The scraping operation timed out before completing.

    Solution: Increase timeout parameter or use async mode with polling.
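The async-with-polling approach can be sketched as a generic poll loop. The status and field names here are illustrative; `get_status` stands in for whatever call you make to the job polling endpoint:

```python
import time

def poll_job(get_status, job_id: str, interval: float = 2.0, max_wait: float = 300.0):
    """Poll an async job until it finishes or max_wait elapses.

    `get_status` is any callable that returns the decoded job body for
    a job_id (e.g. a GET against the job polling endpoint).
    """
    deadline = time.monotonic() + max_wait
    while time.monotonic() < deadline:
        job = get_status(job_id)
        if job.get("status") in ("completed", "failed"):
            return job
        time.sleep(interval)
    raise TimeoutError(f"job {job_id} did not finish within {max_wait}s")
```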

    CONTENT_TOO_LARGE
    413
    Refunded

    The page content exceeds the 10MB size limit.

    Solution: Target a specific section of the page or use a different scraping strategy.

    UNSUPPORTED_CONTENT_TYPE
    415
    Refunded

    The URL returns a content type not supported by the selected mode.

    Solution: Use the appropriate mode - pdf for PDFs, ocr for images.

    EXTRACTION_FAILED
    422
    Refunded

    Content was fetched but could not be parsed or extracted.

    Solution: Check if the URL returns valid HTML/content. Try a different extraction mode.

    REQUIRES_AUTH
    401
    Refunded

    The page requires authentication to access.

    Solution: This URL is behind a login wall and cannot be scraped without credentials.

    PAYWALL_DETECTED
    402
    Refunded

    The content is behind a paywall and requires subscription to access.

    Solution: Content requires paid subscription. Cannot be accessed via scraping.

    NETWORK_ERROR
    503
    Refunded

    Network connection failed - DNS error, connection refused, or SSL error.

    Solution: Verify the URL is valid and the site is online. Retry after a brief delay.

    Authentication Errors

    INVALID_API_KEY
    401

    The API key provided is invalid, malformed, or has been revoked.

    Solution: Check that your API key starts with sk_live_ or sk_test_ and is active in your dashboard.

    MISSING_API_KEY
    401

    No API key was provided in the request headers.

    Solution: Include the X-API-Key header with your API key.

    SESSION_EXPIRED
    401

    The session token (for dashboard APIs) has expired.

    Solution: Log in again to get a fresh session token.

    Session Errors (Authenticated Scraping)

    Errors related to authenticated scraping (BYOS) session management and usage.

    SESSION_NOT_FOUND
    404

    The provided session_id does not exist or belongs to another user.

    Solution: Verify the UUID. Use GET /sessions to list your valid sessions.

    SESSION_EXPIRED
    410

    The session cookies have expired or the session was marked as expired after repeated validation failures.

    Solution: Log in to the target site again and call POST /sessions/:id/refresh with fresh cookies.
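A refresh call might be assembled like this. The cookie schema and the /api/v1 base path are assumptions (matching the scrape endpoint elsewhere on this page); consult the Sessions API docs for the exact shapes:

```python
def refresh_request(session_id: str, cookies: list) -> tuple:
    """Return the (url, json_body) pair for POST /sessions/:id/refresh."""
    url = f"https://api.alterlab.io/api/v1/sessions/{session_id}/refresh"
    return url, {"cookies": cookies}
```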

    SESSION_DOMAIN_MISMATCH
    400

    The scrape URL domain does not match the session's stored domain. For example, using an amazon.com session to scrape walmart.com.

    Solution: Create a separate session for each target domain.

    SESSION_INACTIVE
    410

    The session has been deleted or disabled.

    Solution: Create a new session with fresh cookies.

    SESSION_LIMIT_REACHED
    429

    Maximum number of sessions per account (50) has been reached.

    Solution: Delete unused or expired sessions. Use GET /sessions?status=expired to find candidates for cleanup.

    SESSION_COOKIES_CONFLICT
    400

    Both session_id and cookies were provided in the scrape request. These parameters are mutually exclusive.

    Solution: Use either session_id (for stored sessions) or cookies (for inline one-off use), not both.
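A client-side guard can catch this conflict before the request is sent. This is a hypothetical helper, not part of any SDK:

```python
def validate_auth_params(payload: dict) -> None:
    """Reject scrape payloads that would trigger SESSION_COOKIES_CONFLICT.

    session_id and cookies are mutually exclusive in scrape requests.
    """
    if "session_id" in payload and "cookies" in payload:
        raise ValueError(
            "Use either session_id (stored session) or cookies (inline), not both"
        )
```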

    Billing Errors

    INSUFFICIENT_CREDITS
    402

    Your account has insufficient balance to complete this request.

    Solution: Check your balance with GET /api/v1/usage and upgrade your plan or add funds.
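Before retrying after an INSUFFICIENT_CREDITS error, you can compare the credit fields from the error's details block (the field names match the error response example later on this page):

```python
def can_afford(details: dict) -> bool:
    """Check required vs. available credits from an error's details block."""
    return details.get("available_credits", 0) >= details.get("required_credits", 0)
```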

    EXCEEDS_COST_LIMIT
    400

    The estimated cost exceeds your max_credits limit.

    Solution: Increase max_credits or use a simpler scraping mode.

    SUBSCRIPTION_INACTIVE
    402

    Your subscription has been cancelled or payment failed.

    Solution: Check your billing status in the dashboard and update payment method if needed.

    Rate Limit Errors

    RATE_LIMIT_EXCEEDED
    429

    You have exceeded your plan's rate limit (requests per minute).

    Solution: Check the X-RateLimit-Reset header for when to retry.

    Rate Limit Headers:

    • X-RateLimit-Limit: Your plan's rate limit
    • X-RateLimit-Remaining: Requests remaining in window
    • X-RateLimit-Reset: Unix timestamp when limit resets
    Plan | Rate Limit | Burst Limit
    Free | 30 req/min | 5 req/sec
    Basic | 60 req/min | 10 req/sec
    Growth | 100 req/min | 20 req/sec
    Scale | 300 req/min | 30 req/sec
    High Volume | 500 req/min | 50 req/sec
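The X-RateLimit-Reset header above can be turned into a wait duration before retrying a 429. A minimal sketch, with a 60-second fallback when the header is absent:

```python
def rate_limit_wait(headers: dict, now: float) -> float:
    """Seconds to wait before retrying, from the rate limit headers.

    X-RateLimit-Reset is a Unix timestamp; falls back to 60s if missing.
    """
    reset = headers.get("X-RateLimit-Reset")
    if reset is None:
        return 60.0
    return max(float(reset) - now, 0.0)
```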

    Refund Policy

    Costs are automatically refunded for most error conditions where the scraping failed through no fault of the user.

    Costs Refunded

    • Site blocked access (403, anti-bot)
    • Request timeout
    • Network/connection errors
    • Content extraction failed
    • Unsupported content type
    • Page requires authentication
    • Paywall detected
    • Server errors (5xx)

    Costs NOT Refunded

    • Successful scrape (any tier)
    • Cache hit (free)
    • Invalid API key
    • Rate limit exceeded
    • Invalid request parameters
    • Cancelled by user

    Refund Timing

    Refunds are processed instantly when an error occurs. Check the billing.refunded field in error responses to confirm.
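Confirming a refund from an error body might look like this (a sketch assuming billing.refunded is a boolean nested under a billing object, as described above):

```python
def was_refunded(error_body: dict) -> bool:
    """Read billing.refunded from an error response.

    Treats a missing billing block as not refunded.
    """
    return bool(error_body.get("billing", {}).get("refunded", False))
```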

    Error Response Format

    JSON
    {
      "error": "INSUFFICIENT_CREDITS",
      "message": "Your account has insufficient balance for this request",
      "request_id": "req_550e8400-e29b-41d4-a716-446655440000",
      "details": {
        "required_credits": 5,
        "available_credits": 2,
        "estimated_tier": "2"
      }
    }
    Field | Type | Description
    error | string | Machine-readable error code
    message | string | Human-readable error description
    request_id | string? | Unique request ID for support inquiries
    details | object? | Additional context about the error

    Handling Errors

    Python
    import time
    import requests
    from requests.exceptions import RequestException
    
    def scrape_with_error_handling(url: str, api_key: str, max_retries: int = 3):
        """Scrape with proper error handling and retries."""
    
        for attempt in range(max_retries):
            try:
                response = requests.post(
                    "https://api.alterlab.io/api/v1/scrape",
                    headers={"X-API-Key": api_key},
                    json={"url": url},
                    timeout=60
                )
    
                # Success
                if response.status_code in (200, 202):
                    return response.json()
    
                # Parse error response
                error = response.json()
                error_code = error.get("error", "UNKNOWN")
    
                # Don't retry client errors (403 may succeed with different
                # settings, e.g. a higher max_tier, but is terminal as-is)
                if response.status_code in (400, 401, 402, 403, 404, 409, 413, 415):
                    raise Exception(f"Client error: {error_code} - {error.get('message')}")
    
                # Rate limited - wait and retry
                if response.status_code == 429:
                    reset_time = int(response.headers.get("X-RateLimit-Reset", 0))
                    wait_time = max(reset_time - time.time(), 60)
                    print(f"Rate limited. Waiting {wait_time}s...")
                    time.sleep(wait_time)
                    continue
    
                # Server error - retry with backoff
                if response.status_code >= 500:
                    wait_time = 2 ** attempt
                    print(f"Server error {response.status_code}. Retrying in {wait_time}s...")
                    time.sleep(wait_time)
                    continue
    
                # 422 - All tiers failed, might want to try different settings
                if response.status_code == 422:
                    raise Exception(f"Scraping failed: {error.get('message')}")
    
            except RequestException as e:
                if attempt < max_retries - 1:
                    time.sleep(2 ** attempt)
                    continue
                raise
    
        raise Exception(f"Max retries ({max_retries}) exceeded")
    Last updated: March 2026
