Python • Async + Sync • Type hints
Web scraping in Python, one line
Official Python SDK for AlterLab. Anti-bot bypass, automatic retries, and clean markdown output — all with async/await.
pip install alterlab

Async usage
Async-first design for modern Python applications.
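One payoff of the async-first design is concurrency: many pages can be scraped in parallel on a single event loop with asyncio.gather. A minimal sketch of the pattern, using a stand-in coroutine (fetch here is illustrative, not the SDK API):

```python
import asyncio

async def fetch(url):
    """Stand-in for an async scrape call; illustrative only."""
    await asyncio.sleep(0.01)  # pretend network I/O
    return f"# markdown for {url}"

async def main():
    urls = [f"https://example.com/page/{i}" for i in range(5)]
    # gather schedules all five coroutines concurrently and
    # returns their results in the same order as the inputs
    return await asyncio.gather(*(fetch(u) for u in urls))

results = asyncio.run(main())
print(len(results))
```

With real network latency, the five requests overlap instead of running back to back, so total time is roughly one request's latency rather than five.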
import asyncio
from alterlab import AsyncAlterLabClient

async def main():
    client = AsyncAlterLabClient(api_key="your-api-key")
    # Scrape a URL — anti-bot bypass handled automatically
    result = await client.scrape(
        "https://example.com/product-page",
        formats=["markdown", "text"],
    )
    print(result.markdown)
    print(f"Credits used: {result.credits_used}")

asyncio.run(main())

Sync usage
Synchronous wrapper for scripts, notebooks, and frameworks that don't use async.
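Conceptually, a sync wrapper like this is a thin facade that drives the async client with asyncio.run on each call. A sketch of that mechanism with a stand-in class (FakeAsyncClient is illustrative, not the SDK's internals):

```python
import asyncio

class FakeAsyncClient:
    """Stand-in for an async client; illustrative only."""
    async def scrape(self, url):
        await asyncio.sleep(0)  # pretend network I/O
        return f"# markdown for {url}"

class SyncClient:
    """Sync facade: each call spins up an event loop and
    runs the underlying coroutine to completion."""
    def __init__(self):
        self._async = FakeAsyncClient()

    def scrape(self, url):
        return asyncio.run(self._async.scrape(url))

client = SyncClient()
print(client.scrape("https://example.com"))
```

This is why a sync wrapper cannot be called from inside an already-running event loop (e.g. a Jupyter cell with a live loop): asyncio.run refuses to nest. In that situation, use the async client directly.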
from alterlab import AlterLabClient
client = AlterLabClient(api_key="your-api-key")
# Scrape with bot-protected tier
result = client.scrape(
    "https://example.com",
    formats=["markdown"],
    tier=3,  # Stealth browser mode — bypasses Cloudflare JS challenges
)
print(result.markdown)

SDK vs raw HTTP requests
The SDK handles the plumbing so you don't have to.
Raw HTTP (manual)
import httpx
import time

def scrape(url, api_key):
    for attempt in range(3):
        resp = httpx.post(
            "https://api.alterlab.io/v1/scrape",
            headers={"X-API-Key": api_key},
            json={"url": url},
            timeout=30,
        )
        if resp.status_code == 200:
            return resp.json()["markdown"]
        time.sleep(2 ** attempt)
    raise RuntimeError("Scrape failed after 3 attempts")

AlterLab SDK (recommended)
from alterlab import AlterLabClient
client = AlterLabClient(api_key=api_key)
def scrape(url):
    # Retries, auth, tier selection,
    # error handling — all handled
    result = client.scrape(url)
    return result.markdown

What the SDK handles
Automatic retry on failure with exponential backoff
Anti-bot bypass — all 5 tiers accessible with one parameter
Returns clean markdown, text, or structured JSON
Async-first with sync wrapper for compatibility
Full type hints — works with mypy and pyright
No dependency bloat — minimal footprint
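The retry behavior the SDK automates can be sketched in plain Python. Exponential backoff with "full jitter" is a common variant: the delay ceiling doubles each attempt, and the actual sleep is a random fraction of that ceiling so that concurrent clients don't retry in lockstep. The parameters below (base, cap) are illustrative, not the SDK's exact schedule:

```python
import random

def backoff_delays(max_retries=3, base=1.0, cap=30.0, seed=None):
    """Exponential backoff with full jitter: the ceiling grows as
    base * 2**attempt (capped), and the delay is drawn uniformly
    from [0, ceiling]."""
    rng = random.Random(seed)
    delays = []
    for attempt in range(max_retries):
        ceiling = min(cap, base * 2 ** attempt)
        delays.append(rng.uniform(0, ceiling))
    return delays

delays = backoff_delays(max_retries=4, base=1.0, cap=30.0, seed=42)
print(delays)
```

In a real client each delay would be slept between failed attempts, typically only for retryable statuses (429, 5xx) rather than every error.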
Start scraping in Python
Free credits on signup. No subscription. Balance never expires.