Alerts & Notifications
Monitor your scraping operations and get notified when something needs attention. Set threshold-based rules for credit usage, failure rates, response times, and more.
Alert rules are organization-scoped.
How It Works
Define
Create an alert rule with a type, threshold conditions, and notification channels. Optionally scope it to specific domains.
Monitor
AlterLab continuously evaluates your scraping activity against your alert conditions — failure rates, credit burn, response times, and more.
Trigger
When a condition is breached, the alert fires. A cooldown period prevents repeated notifications for the same ongoing issue.
Notify
Notifications are delivered via your chosen channels — email, webhook, or both. Every triggered alert is recorded in your alert history.
Alert Types
AlterLab supports six alert types, each designed for a different monitoring scenario. Each type has its own set of required conditions.
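Because each type requires a different set of condition keys, it can be useful to validate a rule client-side before sending it. The following helper is a sketch, not part of the AlterLab API; the mapping mirrors the condition tables in this section:

```python
# Required condition keys per alert type, taken from the tables in this section.
REQUIRED_CONDITIONS = {
    "credit_threshold": {"threshold_percent"},
    "domain_failure_rate": {"threshold", "window_minutes", "min_requests"},
    "job_consecutive_failures": {"consecutive_count"},
    "response_time_spike": {"threshold_seconds", "window_minutes", "min_requests"},
    "daily_failure_count": {"max_failures"},
    "schedule_failure": set(),  # requires an empty conditions object
}

def missing_conditions(alert_type: str, conditions: dict) -> set:
    """Return the condition keys that are required but absent."""
    required = REQUIRED_CONDITIONS.get(alert_type)
    if required is None:
        raise ValueError(f"unknown alert_type: {alert_type}")
    return required - conditions.keys()
```

Calling `missing_conditions("domain_failure_rate", {"threshold": 50})` would flag `window_minutes` and `min_requests` as missing before the API rejects the request.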
Credit Threshold
Fires when your credit balance drops below a percentage of your total allocation. Use this to avoid running out of credits mid-pipeline.
| Condition | Type | Description |
|---|---|---|
| threshold_percent | number | Percentage of total credits remaining (e.g., 20 fires when balance drops below 20%) |
Domain Failure Rate
Fires when the failure rate for a domain exceeds a threshold within a rolling time window. Useful for detecting when a target site starts blocking your requests.
| Condition | Type | Description |
|---|---|---|
| threshold | number | Failure rate percentage (e.g., 50 for 50% failure rate) |
| window_minutes | integer | Rolling window in minutes (1 – 10,080 / 7 days) |
| min_requests | integer | Minimum requests in the window before the alert can fire (1 – 10,000) |
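The three conditions interact in a specific order: `min_requests` gates the check so that low-traffic windows never fire, and only then is the failure percentage compared against `threshold`. A client-side illustration of that logic (a sketch, not the server's actual implementation):

```python
def should_fire(failed: int, total: int, threshold: float, min_requests: int) -> bool:
    """Evaluate domain_failure_rate conditions over one rolling window."""
    if total < min_requests:
        return False  # not enough traffic in the window to judge
    return (failed / total) * 100 > threshold
```

With `threshold=50` and `min_requests=10`, a window of 24 failures out of 38 requests (63%) fires; 3 failures out of 5 requests does not, even at 60%, because the traffic floor is not met.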
Consecutive Job Failures
Fires after a set number of scrape jobs fail in a row. Catches systematic issues like expired sessions or site-wide blocks.
| Condition | Type | Description |
|---|---|---|
| consecutive_count | integer | Number of consecutive failures before alerting (e.g., 5) |
Response Time Spike
Fires when the average response time exceeds a threshold within a rolling window. Detects performance degradation early.
| Condition | Type | Description |
|---|---|---|
| threshold_seconds | number | Average response time in seconds to trigger on (e.g., 10 for 10s average) |
| window_minutes | integer | Rolling window in minutes (1 – 10,080) |
| min_requests | integer | Minimum requests in the window before the alert can fire |
Daily Failure Count
Fires when the total number of failed jobs in a day exceeds a threshold. A simple absolute cap on daily failures.
| Condition | Type | Description |
|---|---|---|
| max_failures | integer | Maximum failures per day before alerting (e.g., 100) |
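For instance, a rule body capping daily failures at 100 might look like this (the name and channel are illustrative):

```json
{
  "name": "Daily failure cap",
  "alert_type": "daily_failure_count",
  "conditions": { "max_failures": 100 },
  "channels": { "email": true }
}
```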
Schedule Failure
Fires when a scheduled scrape job fails. No additional conditions required — any scheduled run that fails will trigger this alert.
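A complete rule body for this type is accordingly minimal (the name is illustrative):

```json
{
  "name": "Nightly crawl failed",
  "alert_type": "schedule_failure",
  "conditions": {},
  "channels": { "email": true }
}
```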
No conditions needed
The schedule_failure type requires an empty conditions object: "conditions": {}.
Create an Alert Rule
POST /api/v1/alerts/rules

```shell
curl -X POST https://api.alterlab.io/api/v1/alerts/rules \
  -H "Authorization: Bearer your_jwt_token" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "High failure rate on amazon.com",
    "alert_type": "domain_failure_rate",
    "conditions": {
      "threshold": 50,
      "window_minutes": 60,
      "min_requests": 10
    },
    "domain_filter": ["amazon.com"],
    "channels": {
      "email": true
    },
    "cooldown_minutes": 120
  }'
```

Request Body
| Parameter | Type | Required | Description |
|---|---|---|---|
| name | string | Yes | Human-readable name (1–255 chars) |
| alert_type | string | Yes | One of: credit_threshold, domain_failure_rate, job_consecutive_failures, response_time_spike, daily_failure_count, schedule_failure |
| conditions | object | Yes | Threshold conditions (varies by alert type — see above) |
| domain_filter | string[] | No | Scope the alert to specific domains (max 50) |
| channels | object | No | Delivery channels (default: {"email": true}) |
| cooldown_minutes | integer | No | Minutes between re-alerts (default: 60, range: 5–1440) |
Manage Alert Rules
List Rules
GET /api/v1/alerts/rules

Returns all active alert rules for your workspace. Pass include_inactive=true to include disabled rules, or filter by alert_type.

```shell
curl https://api.alterlab.io/api/v1/alerts/rules \
  -H "Authorization: Bearer your_jwt_token"

# Filter by type
curl "https://api.alterlab.io/api/v1/alerts/rules?alert_type=credit_threshold" \
  -H "Authorization: Bearer your_jwt_token"

# Include disabled rules
curl "https://api.alterlab.io/api/v1/alerts/rules?include_inactive=true" \
  -H "Authorization: Bearer your_jwt_token"
```

Get a Rule
GET /api/v1/alerts/rules/:rule_id

```shell
curl https://api.alterlab.io/api/v1/alerts/rules/550e8400-e29b-41d4-a716-446655440000 \
  -H "Authorization: Bearer your_jwt_token"
```

Response:

```json
{
  "id": "550e8400-e29b-41d4-a716-446655440000",
  "organization_id": "org_abc123",
  "name": "High failure rate on amazon.com",
  "alert_type": "domain_failure_rate",
  "conditions": {
    "threshold": 50,
    "window_minutes": 60,
    "min_requests": 10
  },
  "domain_filter": ["amazon.com"],
  "channels": { "email": true },
  "is_active": true,
  "last_triggered_at": "2026-03-20T14:30:00Z",
  "cooldown_minutes": 120,
  "created_at": "2026-03-15T10:00:00Z",
  "updated_at": "2026-03-15T10:00:00Z"
}
```

Update a Rule
PATCH /api/v1/alerts/rules/:rule_id

All fields are optional. Send only the fields you want to change. Use is_active to enable or disable a rule without deleting it.

```shell
# Tighten the threshold and add webhook delivery
curl -X PATCH https://api.alterlab.io/api/v1/alerts/rules/550e8400-... \
  -H "Authorization: Bearer your_jwt_token" \
  -H "Content-Type: application/json" \
  -d '{
    "conditions": {
      "threshold": 30,
      "window_minutes": 30,
      "min_requests": 5
    },
    "channels": {
      "email": true,
      "webhook_id": "wh_abc123"
    }
  }'

# Disable a rule
curl -X PATCH https://api.alterlab.io/api/v1/alerts/rules/550e8400-... \
  -H "Authorization: Bearer your_jwt_token" \
  -H "Content-Type: application/json" \
  -d '{"is_active": false}'
```

Delete a Rule
DELETE /api/v1/alerts/rules/:rule_id

Permanently deletes the rule and all its associated alert history.

```shell
curl -X DELETE https://api.alterlab.io/api/v1/alerts/rules/550e8400-... \
  -H "Authorization: Bearer your_jwt_token"
```

Notification Channels
Each alert rule specifies how you want to be notified. At least one channel must be enabled.
Email
Set "email": true in the channels object. Alert emails are sent to all workspace members. This is the default channel.

```json
"channels": {
  "email": true
}
```

Webhook
Provide a webhook_id to receive alert payloads at your HTTP endpoint. You can combine webhook delivery with email.

```json
"channels": {
  "email": true,
  "webhook_id": "wh_abc123"
}
```

Webhook payload example:

```json
{
  "alert_rule_id": "550e8400-e29b-41d4-a716-446655440000",
  "alert_type": "domain_failure_rate",
  "message": "Failure rate for amazon.com exceeded 50% (63%) in the last 60 minutes",
  "details": {
    "domain": "amazon.com",
    "failure_rate": 63.2,
    "window_minutes": 60,
    "total_requests": 38,
    "failed_requests": 24
  },
  "created_at": "2026-03-24T10:15:00Z"
}
```

Webhook setup
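On the receiving side, a minimal endpoint for the payload above could look like this. It is a standard-library sketch only; the port and the plain 200 acknowledgement are assumptions, and the field names come from the example payload:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def summarize_alert(payload: dict) -> str:
    """One-line summary using fields from the example payload above."""
    return f"[{payload['alert_type']}] {payload['message']}"

class AlertWebhook(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        payload = json.loads(body)
        print(summarize_alert(payload))
        self.send_response(200)  # acknowledge receipt (assumed convention)
        self.end_headers()

# To run locally (port is an assumption):
# HTTPServer(("", 8080), AlertWebhook).serve_forever()
```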
webhook_id references a pre-configured endpoint.
Cooldown
The cooldown_minutes field controls how often an alert can re-fire after being triggered. This prevents notification spam during sustained issues.
| Range | Default | Recommendation |
|---|---|---|
| 5 – 1,440 minutes | 60 minutes | Use 5–15 min for critical alerts (credit depletion), 60–240 min for informational alerts (failure rate trends) |
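Conceptually, the cooldown is a gate on re-firing: an alert whose conditions are still breached stays silent until cooldown_minutes have elapsed since the last trigger. A client-side illustration of that rule (not the server's implementation):

```python
from datetime import datetime, timedelta

def may_refire(last_triggered_at, cooldown_minutes: int, now: datetime) -> bool:
    """True if the cooldown since the last trigger has fully elapsed."""
    if last_triggered_at is None:
        return True  # never fired before
    return now - last_triggered_at >= timedelta(minutes=cooldown_minutes)
```

With the default of 60 minutes, a rule triggered at 11:30 cannot fire again at 12:00, but can at 12:30.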
Domain Filtering
Scope alert rules to specific domains using the domain_filter field. When set, the alert only evaluates activity for those domains. When omitted, the alert applies to all domains.
```json
{
  "name": "Amazon & eBay failure rate",
  "alert_type": "domain_failure_rate",
  "conditions": {
    "threshold": 40,
    "window_minutes": 30,
    "min_requests": 20
  },
  "domain_filter": ["amazon.com", "ebay.com"],
  "channels": { "email": true }
}
```

You can specify up to 50 domains per alert rule.
Alert History
GET /api/v1/alerts/history

Browse all triggered alerts with optional filters. Each entry records the alert type, message, details, and delivery status.

```shell
# List recent alerts
curl "https://api.alterlab.io/api/v1/alerts/history?limit=20" \
  -H "Authorization: Bearer your_jwt_token"

# Filter by rule
curl "https://api.alterlab.io/api/v1/alerts/history?rule_id=550e8400-..." \
  -H "Authorization: Bearer your_jwt_token"

# Filter by type
curl "https://api.alterlab.io/api/v1/alerts/history?alert_type=credit_threshold" \
  -H "Authorization: Bearer your_jwt_token"
```

Query parameters:
| Parameter | Type | Description |
|---|---|---|
| rule_id | UUID | Filter by specific alert rule |
| alert_type | string | Filter by alert type |
| limit | integer | Results per page (default: 50, max: 200) |
| offset | integer | Pagination offset (default: 0) |
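Since limit caps at 200, collecting a longer history means paging with offset until total entries have been seen. A sketch with the HTTP call abstracted behind fetch_page, which is a stand-in here, not an AlterLab client method:

```python
def fetch_all_alerts(fetch_page, limit=200):
    """Collect the full alert history by paging with limit/offset.

    fetch_page(limit, offset) must return the documented response shape:
    {"alerts": [...], "total": N}.
    """
    alerts, offset = [], 0
    while True:
        page = fetch_page(limit, offset)
        alerts.extend(page["alerts"])
        offset += limit
        if offset >= page["total"] or not page["alerts"]:
            break  # all pages consumed (or an empty page as a safety stop)
    return alerts
```

In practice fetch_page would wrap a GET to /api/v1/alerts/history with the limit and offset query parameters shown above.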
Response:

```json
{
  "alerts": [
    {
      "id": "a1b2c3d4-...",
      "alert_rule_id": "550e8400-...",
      "alert_type": "domain_failure_rate",
      "message": "Failure rate for amazon.com exceeded 50% (63%) in the last 60 minutes",
      "details": {
        "domain": "amazon.com",
        "failure_rate": 63.2,
        "total_requests": 38,
        "failed_requests": 24
      },
      "delivered_via": {
        "email": true,
        "webhook": "delivered"
      },
      "created_at": "2026-03-24T10:15:00Z"
    }
  ],
  "total": 42
}
```

Python Example
```python
import requests

API_URL = "https://api.alterlab.io/api/v1"
HEADERS = {
    "Authorization": "Bearer your_jwt_token",
    "Content-Type": "application/json",
}

# Create a credit threshold alert
rule = requests.post(
    f"{API_URL}/alerts/rules",
    headers=HEADERS,
    json={
        "name": "Low credit warning",
        "alert_type": "credit_threshold",
        "conditions": {"threshold_percent": 20},
        "channels": {"email": True},
        "cooldown_minutes": 30,
    },
).json()
print(f"Alert rule created: {rule['id']}")
print(f"Type: {rule['alert_type']}, Active: {rule['is_active']}")

# Create a failure rate alert for specific domains
failure_rule = requests.post(
    f"{API_URL}/alerts/rules",
    headers=HEADERS,
    json={
        "name": "E-commerce failure spike",
        "alert_type": "domain_failure_rate",
        "conditions": {
            "threshold": 40,
            "window_minutes": 30,
            "min_requests": 10,
        },
        "domain_filter": ["amazon.com", "ebay.com", "walmart.com"],
        "channels": {"email": True, "webhook_id": "wh_abc123"},
        "cooldown_minutes": 60,
    },
).json()
print(f"Failure alert created: {failure_rule['id']}")

# List all active rules
rules = requests.get(f"{API_URL}/alerts/rules", headers=HEADERS).json()
for r in rules["rules"]:
    status = "active" if r["is_active"] else "disabled"
    print(f"  {r['name']} ({r['alert_type']}) — {status}")

# Check alert history
history = requests.get(f"{API_URL}/alerts/history?limit=10", headers=HEADERS).json()
print(f"\nRecent alerts ({history['total']} total):")
for alert in history["alerts"]:
    print(f"  [{alert['created_at']}] {alert['alert_type']}: {alert['message']}")

# Disable a rule
requests.patch(
    f"{API_URL}/alerts/rules/{rule['id']}",
    headers=HEADERS,
    json={"is_active": False},
)
print(f"\nRule {rule['id']} disabled")
```

Node.js Example
```javascript
const API_URL = "https://api.alterlab.io/api/v1";
const headers = {
  Authorization: "Bearer your_jwt_token",
  "Content-Type": "application/json",
};

// Create a response time spike alert
const rule = await fetch(`${API_URL}/alerts/rules`, {
  method: "POST",
  headers,
  body: JSON.stringify({
    name: "Slow response warning",
    alert_type: "response_time_spike",
    conditions: {
      threshold_seconds: 15,
      window_minutes: 30,
      min_requests: 5,
    },
    channels: { email: true },
    cooldown_minutes: 60,
  }),
}).then((r) => r.json());
console.log(`Rule created: ${rule.id} (${rule.alert_type})`);

// Create a consecutive failures alert
const consRule = await fetch(`${API_URL}/alerts/rules`, {
  method: "POST",
  headers,
  body: JSON.stringify({
    name: "Consecutive failure detector",
    alert_type: "job_consecutive_failures",
    conditions: { consecutive_count: 5 },
    channels: { email: true, webhook_id: "wh_abc123" },
    cooldown_minutes: 15,
  }),
}).then((r) => r.json());
console.log(`Rule created: ${consRule.id}`);

// List all rules (including disabled)
const list = await fetch(`${API_URL}/alerts/rules?include_inactive=true`, {
  headers,
}).then((r) => r.json());
for (const r of list.rules) {
  const status = r.is_active ? "active" : "disabled";
  console.log(`  ${r.name} (${r.alert_type}) — ${status}`);
}

// Browse alert history filtered by type
const history = await fetch(
  `${API_URL}/alerts/history?alert_type=response_time_spike&limit=10`,
  { headers }
).then((r) => r.json());
console.log(`\nRecent alerts (${history.total} total):`);
for (const a of history.alerts) {
  console.log(`  [${a.created_at}] ${a.message}`);
}

// Update rule — tighten threshold
await fetch(`${API_URL}/alerts/rules/${rule.id}`, {
  method: "PATCH",
  headers,
  body: JSON.stringify({
    conditions: {
      threshold_seconds: 10,
      window_minutes: 15,
      min_requests: 3,
    },
  }),
});
console.log(`\nRule ${rule.id} threshold tightened`);

// Delete a rule
await fetch(`${API_URL}/alerts/rules/${consRule.id}`, {
  method: "DELETE",
  headers,
});
console.log(`Rule ${consRule.id} deleted`);
```