The Write-Only Webhook Philosophy
Webhooks are the universal integration pattern for alerting. Any system that can send an HTTP POST can trigger an alert — your CI/CD pipeline, your APM tool, your custom monitoring scripts, your error tracking service.
But there's a problem: most webhook integrations are all-or-nothing. Either every event creates an alert, or you need to build filtering logic in the sending system. When you can't control the sender, you're stuck with a flood of events, many of which don't warrant human attention.
The solution is server-side webhook filtering: evaluate conditions on the incoming payload and only create alerts that match your criteria. Everything else gets a 200 response (so the sender thinks it succeeded) but no alert is created.
This is the write-only webhook philosophy: webhooks can only create alerts (never modify or delete them), and filters control which events are worth creating.
How Webhook Filtering Works
A webhook filter is a set of conditions evaluated against the incoming JSON payload. Each condition specifies a field, an operator, and a value.
The basic structure of a filter rule:
- Field: The JSON path in the incoming payload (e.g., "severity", "metadata.environment", "metadata.error.code")
- Operator: How to compare the field value (equals, contains, greater than, regex, etc.)
- Value: The expected value to compare against
- Type: The data type (string, number, boolean)
Multiple conditions are combined using a match mode:
- All (AND): Every condition must be true for the alert to be created
- Any (OR): At least one condition must be true
- None (NOT): No conditions can be true (exclusion mode)
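The three match modes can be sketched in a few lines. This is an illustrative sketch, not any specific product's API; it assumes each condition has already been evaluated to a boolean.

```python
# Hypothetical sketch: combining per-condition results under a match mode.
def combine(results, match_mode):
    """results: list of booleans, one per filter condition."""
    if match_mode == "all":    # AND: every condition must hold
        return all(results)
    if match_mode == "any":    # OR: at least one condition must hold
        return any(results)
    if match_mode == "none":   # NOT: no condition may hold (exclusion)
        return not any(results)
    raise ValueError(f"unknown match mode: {match_mode}")
```

An alert is created only when `combine(...)` returns True.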
Available Operators
A comprehensive filtering system supports operators across several categories:
Equality operators:
- eq (equals) — exact match
- ne (not equals) — anything except this value
Text matching operators:
- co (contains) — field value contains the substring
- nc (does not contain) — field value does not contain the substring
- sw (starts with) — field value starts with the string
- ew (ends with) — field value ends with the string
- regex — field value matches a regular expression pattern
Numeric operators:
- gt (greater than)
- gte (greater than or equal)
- lt (less than)
- lte (less than or equal)
Existence operators:
- is (is set) — the field exists in the payload
- ns (is not set) — the field does not exist
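A simple dispatch table covers most of these operators. The sketch below is illustrative (a production evaluator would add type coercion and error handling); it treats a missing field as `None`, which makes the existence operators fall out naturally.

```python
import re

# Hypothetical operator evaluator for the categories listed above.
def check(op, field_value, expected):
    if op == "is":             # field exists in the payload
        return field_value is not None
    if op == "ns":             # field does not exist
        return field_value is None
    if field_value is None:    # missing fields fail all other operators
        return False
    ops = {
        "eq":  lambda a, b: a == b,
        "ne":  lambda a, b: a != b,
        "co":  lambda a, b: b in a,
        "nc":  lambda a, b: b not in a,
        "sw":  lambda a, b: a.startswith(b),
        "ew":  lambda a, b: a.endswith(b),
        "gt":  lambda a, b: a > b,
        "gte": lambda a, b: a >= b,
        "lt":  lambda a, b: a < b,
        "lte": lambda a, b: a <= b,
        "regex": lambda a, b: re.search(b, a) is not None,
    }
    return ops[op](field_value, expected)
```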
Common Filtering Patterns
Here are practical filter configurations for real-world scenarios.
Pattern 1: Production-Only Alerts
Only create alerts for production environment events, ignoring staging, development, and local:
- Match mode: All
- Condition: field = "metadata.environment", operator = eq, value = "production"
This single condition eliminates all non-production noise. If your sending system includes an environment field in the metadata, this filter alone can reduce alert volume by 60-80%.
Pattern 2: Severity Threshold
Only create alerts for high and critical severity events:
- Match mode: Any
- Condition 1: field = "severity", operator = eq, value = "critical"
- Condition 2: field = "severity", operator = eq, value = "high"
Using "any" match mode means either condition being true creates an alert. Low and medium severity events are filtered out.
Pattern 3: Error Count Threshold
Only create alerts when the error count exceeds a threshold:
- Match mode: All
- Condition: field = "metadata.error_count", operator = gte, value = 100, type = number
This prevents alerts for one-off errors while catching sustained error spikes.
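The "type = number" hint matters because webhook payloads often carry numbers as JSON strings ("150" rather than 150). A sketch of numeric comparison with coercion, under that assumption:

```python
# Hypothetical numeric comparison: coerce the field value before comparing,
# since senders frequently serialize counts as strings.
def check_numeric(op, raw_value, threshold):
    try:
        value = float(raw_value)   # "150" -> 150.0, 150 -> 150.0
    except (TypeError, ValueError):
        return False               # non-numeric values never match
    return {
        "gt":  value > threshold,
        "gte": value >= threshold,
        "lt":  value < threshold,
        "lte": value <= threshold,
    }[op]
```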
Pattern 4: Exclude Test and Development Traffic
Create alerts for everything except test and development environments:
- Match mode: None
- Condition 1: field = "metadata.environment", operator = sw, value = "test"
- Condition 2: field = "metadata.environment", operator = sw, value = "dev"
- Condition 3: field = "metadata.url", operator = co, value = "localhost"
- Condition 4: field = "metadata.url", operator = co, value = "127.0.0.1"
Using "none" match mode means if any condition is true, the alert is not created. This is the exclusion pattern — block specific noise sources while allowing everything else through.
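The exclusion pattern above might look like this in code. This is a minimal sketch with a naive dot-path lookup and only the two operators the pattern uses; the exclusion list mirrors the conditions listed above.

```python
# Hypothetical "none" match mode: drop the event if ANY exclusion matches.
EXCLUSIONS = [
    ("metadata.environment", "sw", "test"),
    ("metadata.environment", "sw", "dev"),
    ("metadata.url", "co", "localhost"),
    ("metadata.url", "co", "127.0.0.1"),
]

def should_create_alert(payload):
    for path, op, expected in EXCLUSIONS:
        value = payload
        for key in path.split("."):   # naive dot-notation lookup
            value = value.get(key, "") if isinstance(value, dict) else ""
        matched = value.startswith(expected) if op == "sw" else expected in value
        if matched:
            return False  # an exclusion matched: filter the event out
    return True           # nothing matched: create the alert
```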
Pattern 5: Service-Specific Routing
Only create alerts from specific services:
- Match mode: Any
- Condition 1: field = "metadata.service", operator = eq, value = "payment-api"
- Condition 2: field = "metadata.service", operator = eq, value = "auth-service"
- Condition 3: field = "metadata.service", operator = eq, value = "order-processor"
This lets you create separate webhooks per service group, each with its own severity level and escalation policy.
Pattern 6: Regex-Based Matching
For complex matching needs, regex operators provide flexibility:
- Match mode: All
- Condition: field = "title", operator = regex, value = "(database|redis|postgres).*timeout"
This matches any title containing a database technology followed by "timeout" — catching "database connection timeout", "redis read timeout", "postgres query timeout", etc.
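In Python terms, the pattern behaves like this (search matches anywhere in the string, so no anchors are needed; add a case-insensitive flag if titles vary in case):

```python
import re

# The pattern from Pattern 6, compiled once and searched against each title.
PATTERN = re.compile(r"(database|redis|postgres).*timeout")

def matches(title):
    return PATTERN.search(title) is not None
```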
Nested Field Paths
Webhook payloads often contain nested data. Good filtering systems support dot-notation paths to access nested fields.
For a payload like:
{
  "title": "API Error",
  "severity": "high",
  "metadata": {
    "environment": "production",
    "service": "payment-api",
    "error": {
      "code": 503,
      "message": "Service Unavailable"
    }
  }
}
You can filter on any nested field:
- field = "metadata.environment" → "production"
- field = "metadata.error.code" → 503
- field = "metadata.error.message" → "Service Unavailable"
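A dot-notation lookup is a few lines of code. This minimal sketch assumes keys never themselves contain dots, and treats a missing path as "not set" (returning None):

```python
# Minimal dot-notation path resolution over a parsed JSON payload.
def get_path(payload, path):
    value = payload
    for key in path.split("."):
        if not isinstance(value, dict) or key not in value:
            return None   # missing path -> "not set"
        value = value[key]
    return value

payload = {
    "title": "API Error",
    "severity": "high",
    "metadata": {
        "environment": "production",
        "service": "payment-api",
        "error": {"code": 503, "message": "Service Unavailable"},
    },
}
```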
The Fail-Safe Principle
A well-designed filtering system should fail safe. If filter evaluation encounters an error — malformed payload, unexpected data type, regex compilation failure — the alert should be created, not silently dropped.
The reasoning: it's better to create a noisy alert that gets investigated than to silently drop an alert for a real incident because the filter had a bug.
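Fail-safe evaluation is a try/except around the filter logic. A sketch (the filter function and its bug are invented for illustration):

```python
# Fail-safe sketch: any exception during filter evaluation falls through
# to creating the alert rather than silently dropping the event.
def apply_filters(payload, evaluate):
    try:
        return evaluate(payload)   # True -> create the alert
    except Exception:
        # Malformed payload, type mismatch, bad regex: fail open.
        return True

# A deliberately buggy filter: raises KeyError when "severity" is absent.
def buggy_filter(payload):
    return payload["severity"] in ("high", "critical")
```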
Filter Response Transparency
When a webhook event is filtered out (no alert created), the response should indicate this clearly:
Status: 200 OK
{
  "status": "filtered",
  "message": "Event filtered by webhook rules",
  "reason": "No conditions matched (filterMatch: any)"
}
When an alert is created:
Status: 200 OK
{
  "alertId": "abc123",
  "status": "created",
  "message": "Alert created successfully"
}
Both return 200 to prevent the sending system from retrying. The status field tells the integrator whether an alert was actually created.
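On the sending side, this means the HTTP status alone is not enough: the body's status field must be inspected. A hedged sketch of sender-side handling (field names match the example responses above):

```python
import json

# Hypothetical sender-side handling: a 200 means "delivered", but only
# status == "created" means an alert actually exists.
def handle_response(body_text):
    body = json.loads(body_text)
    if body.get("status") == "created":
        return f"alert {body.get('alertId')} created"
    return f"event filtered: {body.get('reason', 'no reason given')}"
```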
Building vs. Buying Filtering
You could build webhook filtering logic in your own middleware — a Lambda function, a proxy service, or a custom API gateway rule. But this approach requires:
- Writing and maintaining the evaluation logic for every operator
- Building a UI for non-engineers to configure filters
- Handling edge cases (null fields, type mismatches, nested paths)
- Testing regex patterns safely (preventing ReDoS attacks)
- Logging filtered events for audit purposes
For most teams, using a tool with built-in filtering is significantly faster than building it.
Reducing Noise at the Source
OpShift includes webhook filtering with all the operators and patterns described above. Filters are configured per webhook endpoint using either a visual builder (WHEN/IF/THEN interface) or a JSON editor for advanced users. Filtered events return transparent responses, and filter evaluation is fail-safe.
Combined with alert grouping (so even the alerts that pass filtering get deduplicated), you get a low-noise alert pipeline from any system that can send a webhook.
Flat pricing: $14/month for up to 50 users, $39/month for up to 500 users. No per-seat charges. Configure your first webhook at opshift.io.