Enterprise observability
Table of contents
- Enterprise observability overview
- Log streaming
- Setting up log streaming
- Log streaming schema
- Troubleshooting log delivery failures
Enterprise observability overview
Enterprise observability lets enterprise customers monitor and understand system behavior across their entire digital stack, from infrastructure and applications to APIs and content delivery, in near real time.
Contentful handles delivery, while you manage storage, access, and analysis using your preferred tools.
Log streaming overview
Log streaming is an Enterprise observability capability that provides continuous access to activity data generated by Contentful APIs. It is useful for teams that need visibility into how Contentful is being used across your organization's applications and environments.
Logs are streamed in near real time to your configured storage destination, where they can be processed, analyzed, or retained using your existing tools and workflows. Contentful handles the delivery, while you manage storage, access, and downstream processing.
Delivery guarantees
Contentful’s log streaming follows an at-least-once delivery model, which means each log event is sent one or more times:
- Events are retried automatically if delivery fails; no action is needed on your part.
- In some cases, the same event may be delivered more than once.
Best-effort delivery
Log delivery operates on a best-effort basis. Due to typical challenges in any distributed system (such as network faults, latency, or temporary unavailability of your destination), delivery is not guaranteed in every circumstance.
In rare cases, a log record may be dropped to preserve system performance and stability. There are no SLAs on delivery timing or completeness.
Frequency and file output
Logs are sent every few minutes. The exact interval is not configurable. Each delivery cycle may produce multiple files.
Files are delivered in NDJSON format, compressed as .jsonl.gz. Each file follows a structured naming convention:
log_source=<api_type>/organization_id=<org_id>/year=YYYY/month=MM/day=DD/compacted-<api_type>-<uuid>.jsonl.gz
Example:
log_source=cda/organization_id=274qvl9SkVAlToItncE81X/year=2026/month=04/day=21/compacted-cda-023c67dd-564a-4ada-9bbf-570ae8f17067-36.jsonl.gz
This path structure allows multiple log sources to be delivered to the same cloud destination without files overwriting each other. There are no configuration options for delivery frequency, interval, batch size, or file format.
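Because the path convention encodes metadata as key=value segments, it is straightforward to parse programmatically. The following is a minimal sketch (not an official Contentful utility) that splits a delivered object key into its components:

```python
def parse_delivery_key(key: str) -> dict:
    """Split an object key following the documented convention, e.g.
    log_source=cda/organization_id=<org_id>/year=YYYY/month=MM/day=DD/<file>,
    into a dict of its key=value path segments plus the file name."""
    parts = key.split("/")
    # Every segment except the last is a key=value pair; the last is the file.
    fields = dict(part.split("=", 1) for part in parts[:-1])
    fields["file"] = parts[-1]
    return fields

info = parse_delivery_key(
    "log_source=cda/organization_id=274qvl9SkVAlToItncE81X/"
    "year=2026/month=04/day=21/"
    "compacted-cda-023c67dd-564a-4ada-9bbf-570ae8f17067-36.jsonl.gz"
)
```

A parser like this is useful when fanning delivered files out to per-source or per-day processing jobs.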
Setting up log streaming
Contentful uses log streaming to continuously deliver API activity data to your configured destination. Logs are emitted and sent in near real time, allowing you to process and analyze them using your existing observability tools.
Log streaming currently supports streaming Content Delivery API (CDA) logs to Amazon S3. It represents the first step in Contentful’s broader Observability capabilities, which will expand to support additional log sources and cloud storage providers over time.
Prerequisites and limits
Before setting up Enterprise Observability, ensure you have:
- An Enterprise plan subscription.
- Organization owner or admin role in Contentful.
Configuration limits: A maximum of 2 configurations per storage destination per log source per organization. For example, you can have up to 2 CDA to AWS S3 configurations. The same limit applies per provider as additional destinations become available.
Static IP addresses
Contentful uses static IP addresses to deliver logs, ensuring consistent and secure communication with your storage destination. You can use these IPs to configure allowlists or firewall rules for uninterrupted log delivery.
If your organization requires IP allowlisting, configure your firewall or network settings to include the static IP addresses Contentful uses for Enterprise Observability log delivery. The addresses vary by data residency region and are the same as those used for Audit Logs.
For the full list, read Audit Logs: Static IP addresses.
Configure logs
Set up log streaming to deliver API activity data from Contentful to your cloud storage destination. This allows you to monitor, analyze, and retain logs using your existing observability tools.
Step 1: Prepare your AWS infrastructure
The AWS setup for Enterprise Observability (creating an S3 bucket, an IAM policy, and a cross-account IAM role) is identical to the setup used for Audit Logs. Follow the Audit Logs AWS Configuration guide through Step 5: Configure your S3 bucket policy.
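As a rough illustration only (the authoritative policy values come from the Audit Logs AWS Configuration guide), the resulting bucket policy grants Contentful's cross-account role permission to write objects into your bucket. The principal ARN and bucket name below are placeholders, not real values:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowContentfulLogDelivery",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::<CONTENTFUL_ACCOUNT_ID>:role/<DELIVERY_ROLE>" },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::<your-bucket-name>/*"
    }
  ]
}
```

If delivery later fails with permission errors, this s3:PutObject grant is the first thing to re-check.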
Step 2: Configure the destination in Contentful
- Go to Organization settings → Observability in the Contentful web app.
- Click Create new configuration.
- Select a Log source. NOTE: CDA is currently the only supported API.
- Choose the destination where logs will be delivered: Amazon Web Services (AWS) S3. NOTE: AWS S3 is currently the only supported destination.
- Enter storage configuration details. To set up the AWS S3 destination, follow the steps here.
- Configuration name: enter a descriptive name to identify this log streaming configuration (for example, cda-logs-production).
- Click Save to begin streaming logs to your configured destination.
What happens next
After you save the configuration and Contentful manually enables it (within two business days), logs are delivered automatically in near real time.
Contentful’s log delivery operates on a best-effort basis, meaning logs are sent as quickly as possible (typically within minutes), but timing is not guaranteed and may vary depending on system load.
Log streaming schema
The following schema describes the structure of each log event generated by the Content Delivery API (CDA) and delivered through log streaming. Each event represents a single API request and response, including metadata such as request details, response status, and performance metrics.
When building parsers, expect additional attributes to be added over time, and reference fields by attribute name rather than by position.
| Field | Type | Description |
|---|---|---|
| contentful.organization.id | string | ID of the Contentful organization. |
| contentful.space.id | string | ID of the Contentful space that received the request. |
| contentful.cache.status | string | Whether the response was served from cache. Known values: HIT, MISS. |
| contentful.request.id | string | Unique identifier for the request. Use this field as a deduplication key in your pipeline. |
| event.start | string (ISO 8601) | Timestamp when the request was received. |
| event.end | string (ISO 8601) | Timestamp when the response was sent. |
| duration.microseconds | integer | End-to-end request duration in microseconds. |
| http.request.method | string | HTTP method. Supported value: GET. Additional methods will be available as more log sources are added. |
| http.route | string | Templatized route pattern with path parameters as placeholders (e.g., /spaces/:space/environments/:environment/entries). Useful for grouping requests by endpoint type regardless of which space or environment was accessed. |
| url.path | string | Actual request path with resolved values (e.g., /spaces/p4lm9x2q7rts/environments/master/entries). |
| url.query | string | Query string, excluding the leading ? (e.g., content_type=blogPost&limit=50). |
| http.response.status_code | integer | HTTP response status code (e.g., 200, 404). |
| http.response.body.size | integer | Response body size in bytes. |
| user_agent.original | string | Full User-Agent string from the request. |
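These fields lend themselves to simple aggregate analysis. The following is an illustrative sketch (not an official tool) that computes a cache hit ratio and mean latency from a list of already-parsed events, using the contentful.cache.status and duration.microseconds fields:

```python
def summarize(events: list) -> tuple:
    """Return (cache hit ratio, mean latency in milliseconds) for the events."""
    hits = sum(1 for e in events if e["contentful.cache.status"] == "HIT")
    mean_us = sum(e["duration.microseconds"] for e in events) / len(events)
    return hits / len(events), mean_us / 1000.0

# Two made-up events for illustration.
events = [
    {"contentful.cache.status": "HIT", "duration.microseconds": 12_000},
    {"contentful.cache.status": "MISS", "duration.microseconds": 842_731},
]
ratio, mean_ms = summarize(events)
```

Grouping the same computation by http.route gives per-endpoint latency and cache behavior regardless of which space or environment was accessed.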
Example event:
{
"contentful.organization.id": "<org id>",
"contentful.space.id": "<space id>",
"contentful.cache.status": "MISS",
"contentful.request.id": "d2f4b5a1-8c9e-4d7f-91ab-2f6c3e8b7a10",
"event.start": "2026-04-09T14:22:11+0000",
"event.end": "2026-04-09T14:22:12+0000",
"duration.microseconds": 842731,
"http.request.method": "GET",
"http.route": "/spaces/:space/environments/:environment/entries",
"url.path": "/spaces/p4lm9x2q7rts/environments/master/entries",
"url.query": "content_type=blogPost&limit=50",
"http.response.status_code": 200,
"http.response.body.size": 1287,
"user_agent.original": "Mozilla/5.0 (Linux; Android 10; K) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/146.0.0.0 Mobile Safari/537.36"
}
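Delivered files can be consumed with standard tooling. The following is a minimal Python sketch that reads a compressed NDJSON file one line at a time; the sample file written here is a stand-in for a real file fetched from your bucket:

```python
import gzip
import json

def read_log_file(path: str):
    """Yield one event dict per NDJSON line in a .jsonl.gz delivery file."""
    with gzip.open(path, "rt", encoding="utf-8") as fh:
        for line in fh:
            if line.strip():
                yield json.loads(line)

# Create a one-line sample file for demonstration purposes.
sample = {"http.route": "/spaces/:space/environments/:environment/entries",
          "http.response.status_code": 200}
with gzip.open("sample.jsonl.gz", "wt", encoding="utf-8") as fh:
    fh.write(json.dumps(sample) + "\n")

# Field names contain literal dots, so index by the full attribute name
# rather than treating the dot as a nesting separator.
for event in read_log_file("sample.jsonl.gz"):
    print(event["http.route"], event["http.response.status_code"])
```

Note that the dotted names are flat JSON keys, not nested objects, so a generic "dot means nesting" transform in your pipeline would mangle them.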
Monitoring delivery
Delivery statuses
Each configuration has a delivery status, visible in the Contentful web app and returned by the delivery status API in the Management API. You will see one of the following statuses:
- Verifying: the configuration has been saved. Waiting for activation or first successful delivery.
- Delivering: the logs are being delivered successfully.
- Failing: delivery to your destination is failing. Action required: verify in the Contentful web app that the configuration details are correct, and fix your cloud storage configuration within 24 hours to prevent data loss by following the troubleshooting steps below.
Email notifications
When delivery failures are detected and automatic retries have not resolved them, Contentful sends an email notification to all Organization Owners and Organization Admins for the affected organization.
Troubleshooting log delivery failures
Log streaming can fail when Contentful is unable to deliver log data to your configured storage destination. These issues can be caused by misconfiguration, such as invalid credentials or insufficient permissions.
When delivery fails, Contentful will notify organization admins and owners by email and continue retrying automatically.
Status shows failing
Delivery is failing due to an issue with your destination configuration. Common causes:
- Expired or invalid IAM role: the IAM role ARN is incorrect, the role has been deleted, or the trust relationship no longer allows Contentful to assume it.
- Insufficient write permissions: the IAM policy does not grant s3:PutObject on the target bucket.
- Incorrect bucket name or region: the values in your configuration do not match your actual S3 bucket.
- Bucket policy blocks cross-account access: the bucket policy does not allow Contentful's AWS account to write objects.
How delivery failures work
Contentful attempts to continuously stream logs to your configured destination. If delivery fails, logs are retried automatically.
- Organization owners and admins will receive an email notification once per day while failures persist.
- If the configuration is fixed within 24 hours, delivery resumes and no log data is lost.
- After 24 hours, undelivered logs for that period cannot be recovered.
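The 24-hour window above can be reasoned about explicitly. The following is a hypothetical sketch (the retention constant mirrors the documented 24-hour limit) that, given when a failure started and when the configuration was fixed, reports which logs, if any, were unrecoverable:

```python
from datetime import datetime, timedelta, timezone

# Undelivered logs older than this are permanently discarded (per the docs).
RETENTION = timedelta(hours=24)

def unrecoverable_window(failure_start: datetime, fixed_at: datetime):
    """Return (start, end) of the lost-log window, or None if fixed in time."""
    if fixed_at - failure_start <= RETENTION:
        return None  # all buffered logs were still within the retry window
    return failure_start, fixed_at - RETENTION

start = datetime(2026, 4, 9, 0, 0, tzinfo=timezone.utc)
print(unrecoverable_window(start, start + timedelta(hours=12)))
```

In short: fix the configuration within 24 hours of the first failure and nothing is lost; wait longer and everything older than 24 hours at the time of the fix is gone.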
How to resolve delivery failures
- Go to Organization settings → Observability in the Contentful web app.
- Locate the affected configuration.
- Review your storage configuration in your cloud provider (AWS S3).
- Update any invalid credentials, permissions, or configuration values.
- Save your changes in Contentful.
Once your configuration is corrected, Contentful will automatically resume delivery on the next retry. No manual retry is required.
Receiving duplicate events
Duplicate events are expected: Enterprise Observability uses an at-least-once delivery model by design. Use the contentful.request.id field to deduplicate events in your downstream pipeline.
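Deduplication can be a single pass over the stream keyed on the request ID. A minimal in-memory sketch (a production pipeline would typically use a bounded or time-windowed store instead of an unbounded set):

```python
def deduplicate(events):
    """Yield each event once, dropping repeated deliveries of the same
    event, keyed on its contentful.request.id."""
    seen = set()
    for event in events:
        request_id = event["contentful.request.id"]
        if request_id not in seen:
            seen.add(request_id)
            yield event

events = [{"contentful.request.id": "a1"},
          {"contentful.request.id": "a1"},   # duplicate delivery
          {"contentful.request.id": "b2"}]
unique = list(deduplicate(events))
```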
Logs are missing for a specific time period
If no working configuration existed during that period, those logs cannot be recovered. Enterprise Observability does not backfill historical data.
If a configuration was active but logs are missing, check the delivery status API for failures during that period. If failures occurred more than 24 hours ago, those logs have been permanently discarded.