
Error Codes Reference

Complete reference for AlterLab API errors, status codes, and how to handle them gracefully.

Error Response Structure

All errors return a consistent JSON structure with error, message, and optional details fields. Check the HTTP status code first, then parse the response body for details.
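As a sketch of status-first handling (field names follow the response format documented on this page; the helper name is hypothetical):

```python
def summarize_error(status_code: int, body: dict) -> str:
    """Build a log-friendly summary from an AlterLab error body.

    Assumes the documented envelope: error, message, and the optional
    request_id field.
    """
    code = body.get("error", "UNKNOWN")
    message = body.get("message", "")
    request_id = body.get("request_id")
    summary = f"HTTP {status_code}: {code} - {message}"
    if request_id:
        summary += f" (request_id={request_id})"
    return summary
```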

HTTP Status Codes

| Code | Status | Description | Retry? |
|------|--------|-------------|--------|
| 200 | OK | Request succeeded (sync response with content) | N/A |
| 202 | Accepted | Job queued for async processing (poll job_id) | N/A |
| 204 | No Content | Successful deletion, no response body | N/A |
| 400 | Bad Request | Invalid request parameters or malformed JSON | No |
| 401 | Unauthorized | Missing, invalid, or expired API key/token | No |
| 402 | Payment Required | Insufficient balance or paywall detected | No |
| 403 | Forbidden | Access denied by target site or blocked by anti-bot | Maybe |
| 404 | Not Found | Resource not found (job, webhook, etc.) | No |
| 409 | Conflict | Duplicate resource (webhook URL already exists) | No |
| 413 | Payload Too Large | Content exceeds size limit (10MB) | No |
| 415 | Unsupported Media | URL content type not supported for requested mode | No |
| 422 | Unprocessable | All scraping tiers failed or content extraction failed | Maybe |
| 429 | Too Many Requests | Rate limit exceeded, check Retry-After header | Yes (after delay) |
| 500 | Internal Error | Unexpected server error | Yes |
| 502 | Bad Gateway | Upstream error or worker queue issue | Yes |
| 503 | Service Unavailable | Service temporarily unavailable or target site down | Yes |
| 504 | Gateway Timeout | Request timed out waiting for response | Yes |
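The Retry? column can be collapsed into a small predicate. Treating the two "Maybe" rows (403 and 422) as non-retryable here is a conservative assumption, not API guidance:

```python
def should_retry(status_code: int) -> bool:
    """Map an HTTP status to the retry guidance in the status-code table."""
    # 429 is retryable only after the delay in the Retry-After header;
    # 403 and 422 are "Maybe" - treated as non-retryable in this sketch.
    retryable = {429, 500, 502, 503, 504}
    return status_code in retryable
```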

Scraping Errors

These errors occur during the scraping process. Most result in automatic refunds.

BLOCKED_BY_ANTIBOT
403
Refunded

The target site detected and blocked the scraping attempt. All tier escalations failed.

Solution: Try using max_tier: "4" to enable CAPTCHA solving, or contact support for sites with advanced protection.

CHALLENGE_DETECTED
403
Refunded

A bot detection challenge (Cloudflare, reCAPTCHA, hCaptcha) was encountered.

Solution: Enable JavaScript rendering with mode: "js" or increase max tier.

TIMEOUT
504
Refunded

The scraping operation timed out before completing.

Solution: Increase timeout parameter or use async mode with polling.

CONTENT_TOO_LARGE
413
Refunded

The page content exceeds the 10MB size limit.

Solution: Target a specific section of the page or use a different scraping strategy.

UNSUPPORTED_CONTENT_TYPE
415
Refunded

The URL returns a content type not supported by the selected mode.

Solution: Use the appropriate mode - pdf for PDFs, ocr for images.

EXTRACTION_FAILED
422
Refunded

Content was fetched but could not be parsed or extracted.

Solution: Check if the URL returns valid HTML/content. Try a different extraction mode.

REQUIRES_AUTH
401
Refunded

The page requires authentication to access.

Solution: This URL is behind a login wall and cannot be scraped without credentials.

PAYWALL_DETECTED
402
Refunded

The content is behind a paywall and requires subscription to access.

Solution: Content requires paid subscription. Cannot be accessed via scraping.

NETWORK_ERROR
503
Refunded

Network connection failed - DNS error, connection refused, or SSL error.

Solution: Verify the URL is valid and the site is online. Retry after a brief delay.
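The per-error solutions above can be encoded as a retry-parameter table. This is a hypothetical sketch: the parameter names (max_tier, mode, timeout) come from the solutions on this page, but the mapping itself and the timeout value are assumptions:

```python
# Maps scraping error codes to the parameter changes suggested in each
# "Solution" entry. Errors with no entry are treated as unrecoverable.
REMEDIATIONS = {
    "BLOCKED_BY_ANTIBOT": {"max_tier": "4"},  # enable CAPTCHA solving
    "CHALLENGE_DETECTED": {"mode": "js"},     # render JavaScript
    "TIMEOUT": {"timeout": 120},              # assumed higher timeout value
}

def adjusted_params(error_code: str, params: dict):
    """Return new request params for a retry, or None if not recoverable."""
    fix = REMEDIATIONS.get(error_code)
    if fix is None:
        return None  # e.g. PAYWALL_DETECTED, REQUIRES_AUTH: no retry helps
    return {**params, **fix}
```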

Authentication Errors

INVALID_API_KEY
401

The API key provided is invalid, malformed, or has been revoked.

Solution: Check that your API key starts with sk_live_ or sk_test_ and is active in your dashboard.
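A cheap client-side sanity check on the documented prefixes can catch an obvious mistake (wrong environment variable, truncated paste) before a request is sent; the server remains the authority on validity. A minimal sketch:

```python
def looks_like_api_key(key: str) -> bool:
    """Check for the documented sk_live_ / sk_test_ prefixes.

    Only guards against obvious mistakes; it cannot tell whether
    the key is active or has been revoked.
    """
    return key.startswith(("sk_live_", "sk_test_")) and len(key) > len("sk_live_")
```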

MISSING_API_KEY
401

No API key was provided in the request headers.

Solution: Include the X-API-Key header with your API key.

SESSION_EXPIRED
401

The session token (for dashboard APIs) has expired.

Solution: Log in again to get a fresh session token.

Billing Errors

INSUFFICIENT_CREDITS
402

Your account has insufficient balance to complete this request.

Solution: Check your balance with GET /api/v1/usage and upgrade your plan or add funds.

EXCEEDS_COST_LIMIT
400

The estimated cost exceeds your max_credits limit.

Solution: Increase max_credits or use a simpler scraping mode.

SUBSCRIPTION_INACTIVE
402

Your subscription has been cancelled or payment failed.

Solution: Check your billing status in the dashboard and update payment method if needed.

Rate Limit Errors

RATE_LIMIT_EXCEEDED
429

You have exceeded your plan's rate limit (requests per minute).

Solution: Check the X-RateLimit-Reset header for when to retry.

Rate Limit Headers:

  • X-RateLimit-Limit: Your plan's rate limit
  • X-RateLimit-Remaining: Requests remaining in window
  • X-RateLimit-Reset: Unix timestamp when limit resets
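Since X-RateLimit-Reset is a Unix timestamp, the sleep duration has to be computed relative to the current time. A small helper (the default backoff of 60 seconds is an assumption):

```python
import time

def seconds_until_reset(headers: dict, default: float = 60.0) -> float:
    """Compute how long to wait from the X-RateLimit-Reset header.

    Falls back to a default backoff when the header is missing;
    returns 0 when the reset time has already passed.
    """
    reset = headers.get("X-RateLimit-Reset")
    if reset is None:
        return default
    return max(float(reset) - time.time(), 0.0)
```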

| Plan | Rate Limit | Burst Limit |
|------|------------|-------------|
| Free | 10 req/min | 5 req/sec |
| Starter | 60 req/min | 10 req/sec |
| Pro | 300 req/min | 30 req/sec |
| Ultra | 1000 req/min | 100 req/sec |

Refund Policy

Costs are automatically refunded for most error conditions where the scraping failed through no fault of the user.

Costs Refunded

  • Site blocked access (403, anti-bot)
  • Request timeout
  • Network/connection errors
  • Content extraction failed
  • Unsupported content type
  • Page requires authentication
  • Paywall detected
  • Server errors (5xx)

Costs NOT Refunded

  • Successful scrape (any tier)
  • Cache hit (free)
  • Invalid API key
  • Rate limit exceeded
  • Invalid request parameters
  • Cancelled by user

Refund Timing

Refunds are processed instantly when an error occurs. Check the billing.refunded field in error responses to confirm.
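A defensive check for that field might look like the following. Treating a missing billing object as "not refunded" is an assumption, since the docs only describe the field when a refund occurs:

```python
def was_refunded(error_body: dict) -> bool:
    """Check the billing.refunded field in an error response."""
    return bool(error_body.get("billing", {}).get("refunded", False))
```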

Error Response Format

{
  "error": "INSUFFICIENT_CREDITS",
  "message": "Your account has insufficient balance for this request",
  "request_id": "req_550e8400-e29b-41d4-a716-446655440000",
  "details": {
    "required_credits": 5,
    "available_credits": 2,
    "estimated_tier": "2"
  }
}

| Field | Type | Description |
|-------|------|-------------|
| error | string | Machine-readable error code |
| message | string | Human-readable error description |
| request_id | string? | Unique request ID for support inquiries |
| details | object? | Additional context about the error |

Handling Errors

import time
import requests
from requests.exceptions import RequestException

def scrape_with_error_handling(url: str, api_key: str, max_retries: int = 3):
    """Scrape with proper error handling and retries."""

    for attempt in range(max_retries):
        try:
            response = requests.post(
                "https://api.alterlab.io/api/v1/scrape",
                headers={"X-API-Key": api_key},
                json={"url": url},
                timeout=60
            )

            # Success
            if response.status_code in (200, 202):
                return response.json()

            # Parse error response
            error = response.json()
            error_code = error.get("error", "UNKNOWN")

            # Don't retry client errors (the status-code table marks these "No")
            if response.status_code in (400, 401, 402, 404, 409, 413, 415):
                raise Exception(f"Client error: {error_code} - {error.get('message')}")

            # Rate limited - wait and retry
            if response.status_code == 429:
                reset_time = int(response.headers.get("X-RateLimit-Reset", 0))
                wait_time = max(reset_time - time.time(), 60)
                print(f"Rate limited. Waiting {wait_time}s...")
                time.sleep(wait_time)
                continue

            # Server error - retry with backoff
            if response.status_code >= 500:
                wait_time = 2 ** attempt
                print(f"Server error {response.status_code}. Retrying in {wait_time}s...")
                time.sleep(wait_time)
                continue

            # 422 - all tiers failed; retrying with the same settings is
            # unlikely to help, so surface the error instead
            if response.status_code == 422:
                raise Exception(f"Scraping failed: {error.get('message')}")

            # Any other status: back off before the next attempt
            time.sleep(2 ** attempt)

        except RequestException as e:
            if attempt < max_retries - 1:
                time.sleep(2 ** attempt)
                continue
            raise

    raise Exception(f"Max retries ({max_retries}) exceeded")