Set up Audit Logs

What are audit logs?

Audit logs are currently in a private early access stage, and only selected customers have access to the feature. This stage is a testing phase: changes are to be expected, so do not rely on this feature for production use cases.

Audit logs allow customers to track and view all the changes made in their organization. They provide visibility and are useful for investigating an incident or getting a detailed report on relevant events (such as changes to roles and permissions, users invited, spaces deleted, etc.).

This feature is only available to Premium and Premium Plus customers.

The audit logs feature securely transfers this information to your own storage (an AWS S3 bucket), ensuring that you have a clear and accessible history of actions for monitoring and analysis purposes.

Audit log delivery

During the private early access phase, audit logs are shipped to an AWS S3 bucket owned by the customer. Keeping the audit logs in storage that you own gives you control over the data and ensures that the logs are retained for as long as necessary. Storing the data in your own storage gives you the following benefits:

  • Consistency: You can apply the same rules and policies to audit logs as you do to other similar data, and you control who has access to them.
  • Data retention: You can store the data for as long as you need to maintain compliance for your company.
  • Data analysis: You can serve the data to the tools you already use for analysis.

All delivered audit logs are provided in CSV format for compatibility and ease of analysis. The file name format is contentful-audit-unstable-beta-ORGANIZATION-ID-YYYYMMDDTHHmmsssssZ.csv.

Audit logs are updated and delivered on a daily schedule.
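
As an illustration, the following minimal Python sketch (using boto3, with a hypothetical bucket name and assuming the files are delivered at the bucket root) lists the delivered files and parses the most recent one as CSV:

import csv
import io

import boto3

# Placeholders: replace with your bucket name and any prefix you configured.
BUCKET = "my-audit-logs-bucket"
PREFIX = "contentful-audit-unstable-beta-"

s3 = boto3.client("s3")

# List the delivered audit log files.
response = s3.list_objects_v2(Bucket=BUCKET, Prefix=PREFIX)
keys = [obj["Key"] for obj in response.get("Contents", [])]

# Download the most recent file and parse it as CSV.
if keys:
    body = s3.get_object(Bucket=BUCKET, Key=sorted(keys)[-1])["Body"].read()
    for row in csv.DictReader(io.StringIO(body.decode("utf-8"))):
        print(row)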

Events captured by the audit log

Entities:

  • Spaces
  • Environments
  • UI config
  • Content model templates
  • References across spaces
  • Space enablements
  • Editor interface
  • UI Extensions
  • Entries
  • Assets
  • Locales
  • Tags
  • Webhooks
  • Roles
  • Snapshots
  • Space membership
  • API Keys
  • Comments
  • Workflows
  • Tasks
  • Releases
  • App installations

Actions logged:

  • Update of entities
  • Deletion of entities

Event details

Note the following about the private early access phase and breaking changes:

  • Unstable schema: The current schema is unstable and subject to change. Breaking changes are anticipated as the service evolves.
  • Integration advisory: Clients are strongly advised against building production integrations that rely on the current schema due to its instability.
  • Exclusions: The service does not currently log details of POST requests, which means that most actions that create entities are not tracked for now.
Each delivered CSV file contains the following fields:

  • request_time: The time when the action occurred.
  • request_method: The HTTP method used in the request. Included: PUT, DELETE, PATCH. Not included: GET and POST.
  • request_path: The full path that was called in the request.
  • request_query: The query parameters the request was called with; these can be used to determine how the state was potentially altered.
  • response_status: The HTTP response code of the request. Can be used to determine whether a request was successful.
  • content_length: The number of bytes returned in the response.
  • space: The ID of the space the request was sent to.
  • route: Similar to request_path, but without params; it only includes the route structure.
  • referrer: The URL the request came from.
  • actor_id: The ID of the user or app that made the request. Example: `user:2YVRzNgF2sE64ooav1eKSd`, `app:6zsefpijez5t/master/klhjl46h34j5hlh46`
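
As an example of how these fields can be used, the minimal sketch below (assuming a delivered CSV file has already been downloaded locally under a hypothetical file name) filters the rows down to successful deletions:

import csv

# Assumption: a delivered audit log CSV has been downloaded locally
# under this hypothetical file name.
with open("contentful-audit-log.csv", newline="") as f:
    rows = list(csv.DictReader(f))

# Keep only successful deletions (DELETE requests with a 2xx response).
successful_deletions = [
    row
    for row in rows
    if row["request_method"] == "DELETE" and row["response_status"].startswith("2")
]

for row in successful_deletions:
    print(row["request_time"], row["actor_id"], row["route"])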

Requirements

  1. AWS account: An active AWS account is necessary.

Disclaimer: As this service is in the private early access stage, customers should exercise caution and avoid relying on the current schema for critical integrations. The service is expected to undergo significant changes in the near future.

For further details, please contact our support.

Stopping the audit logs delivery

To stop the delivery of the audit logs, please contact our support.

Audit logs setup

As part of enabling audit log shipping to your AWS S3 bucket, you need to create an AWS IAM role that Contentful can assume. This will allow Contentful to securely transfer audit logs to your AWS S3 bucket without the need to store any credentials.

Prerequisites

  • An AWS account with permissions to create IAM roles and edit S3 bucket policies.
  • Contentful's AWS account ID: 606137763417.

Step 1: Create an S3 Bucket

  1. Log in to your AWS Management Console.
  2. Navigate to S3, click Create bucket.
  3. Enter a unique bucket name and select the region where you want the bucket to reside. Note: you will need to enter this name later.
  4. Configure options as required (e.g., versioning, logging, tags).
  5. Review and create the bucket.
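
If you prefer to script this step rather than use the console, a minimal boto3 sketch might look like the following; the bucket name and region are placeholders:

import boto3

BUCKET = "my-audit-logs-bucket"  # placeholder: choose a globally unique name
REGION = "eu-west-1"             # placeholder: the region the bucket should reside in

s3 = boto3.client("s3", region_name=REGION)

# Create the bucket. Note: for us-east-1, omit CreateBucketConfiguration.
s3.create_bucket(
    Bucket=BUCKET,
    CreateBucketConfiguration={"LocationConstraint": REGION},
)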

Step 2: Create a New IAM Policy

  1. Log in to your AWS Management Console.
  2. Navigate to IAM -> Policies -> Create policy.
  3. Select the JSON tab and paste the following policy, replacing <Your-S3-Bucket-Name> with the name of your S3 bucket (from Step 1). Make sure to keep the /* at the end:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::<Your-S3-Bucket-Name>/*"
    }
  ]
}
  4. Click Next, give it a meaningful name and description, and then click Create.
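
The same policy can also be created programmatically. A minimal boto3 sketch, with the policy name and bucket name as placeholders:

import json

import boto3

iam = boto3.client("iam")

BUCKET = "my-audit-logs-bucket"  # placeholder: the bucket from Step 1

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/*",  # keep the /* at the end
        }
    ],
}

response = iam.create_policy(
    PolicyName="contentful-audit-logs-put",  # placeholder name
    PolicyDocument=json.dumps(policy_document),
    Description="Allows writing Contentful audit logs to the S3 bucket",
)
policy_arn = response["Policy"]["Arn"]  # needed when attaching the policy in Step 4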

Step 3: Create a New IAM Role for Cross-Account Access

  1. In the IAM dashboard, go to Roles -> Create role.
  2. Select AWS Account under the "Trusted entity type" section, then in the section below select Another AWS account and enter Contentful's AWS account ID: 606137763417.
  3. Enable the option Require external ID and insert your Contentful organization ID. The primary function of the external ID is to address and prevent the confused deputy problem. You can find the organization ID in the Contentful web app.
  4. Click Next and skip attaching permissions policies for now (the policy created in Step 2 will be attached in Step 4).
  5. Review, name the role, and then create it.
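
Equivalently, the role and its trust policy can be created with boto3. The trust policy below allows Contentful's AWS account to assume the role only when it presents your organization ID as the external ID; the role name and organization ID are placeholders:

import json

import boto3

iam = boto3.client("iam")

CONTENTFUL_ACCOUNT_ID = "606137763417"      # Contentful's AWS account ID
ORGANIZATION_ID = "your-contentful-org-id"  # placeholder: used as the external ID

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{CONTENTFUL_ACCOUNT_ID}:root"},
            "Action": "sts:AssumeRole",
            # The external ID helps prevent the confused deputy problem.
            "Condition": {"StringEquals": {"sts:ExternalId": ORGANIZATION_ID}},
        }
    ],
}

iam.create_role(
    RoleName="contentful-audit-logs-delivery",  # placeholder name
    AssumeRolePolicyDocument=json.dumps(trust_policy),
    Description="Allows Contentful to deliver audit logs to the S3 bucket",
)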

Step 4: Attach the Policy to the IAM Role

  1. Go to the newly created role in IAM -> Roles.
  2. Under "Permissions" in the Add permissions dropdown, click Attach policies.
  3. Find the policy you created in Step 2, select it, and then click Add permissions.
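
Scripted with boto3, this step is a single call; the role name and policy ARN below are placeholders:

import boto3

iam = boto3.client("iam")

# Attach the policy from Step 2 to the role from Step 3.
iam.attach_role_policy(
    RoleName="contentful-audit-logs-delivery",  # placeholder role name
    PolicyArn="arn:aws:iam::111122223333:policy/contentful-audit-logs-put",  # placeholder ARN
)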

Step 5: Configure Your S3 Bucket Policy

  1. Go to S3, find your bucket from Step 1, and then click Permissions.
  2. Edit the Bucket policy and add the following statement, replacing <Your-IAM-Role-ARN> with the ARN of the IAM role you created in Step 3 and <Your-S3-Bucket-Name> with the name of your S3 bucket. Make sure to keep the /* at the end of the bucket ARN:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "<Your-IAM-Role-ARN>"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::<Your-S3-Bucket-Name>/*"
    }
  ]
}
  3. Save the changes.
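
The bucket policy can likewise be applied with boto3; the bucket name and role ARN are placeholders:

import json

import boto3

s3 = boto3.client("s3")

BUCKET = "my-audit-logs-bucket"  # placeholder bucket name
ROLE_ARN = "arn:aws:iam::111122223333:role/contentful-audit-logs-delivery"  # placeholder ARN

bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": ROLE_ARN},
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/*",  # keep the /* at the end
        }
    ],
}

s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(bucket_policy))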

Step 6: Provide Contentful with the Necessary Information

Send the following details to Contentful:

  • Your AWS account ID.
  • The ARN of your S3 Bucket.
  • The ARN of the IAM role you created.
  • AWS Region.
  • Your Contentful organization ID (the one you used as the external ID in Step 3).
  • Any specific paths or prefixes within your S3 bucket where logs should be placed.

By following these steps, you've securely enabled Contentful to ship logs to your AWS S3 bucket. Contentful will use AWS STS to assume the role you've created, ensuring a secure and efficient transfer of audit log data.
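
To illustrate the mechanism (this is not Contentful's actual implementation), the minimal sketch below shows how a caller in another account assumes such a role via AWS STS by presenting the external ID, and then writes an object with the temporary credentials; all names are placeholders:

import boto3

ROLE_ARN = "arn:aws:iam::111122223333:role/contentful-audit-logs-delivery"  # placeholder
EXTERNAL_ID = "your-contentful-org-id"                                      # placeholder
BUCKET = "my-audit-logs-bucket"                                             # placeholder

# Assume the cross-account role; the external ID must match the trust policy.
creds = boto3.client("sts").assume_role(
    RoleArn=ROLE_ARN,
    RoleSessionName="audit-log-delivery",
    ExternalId=EXTERNAL_ID,
)["Credentials"]

# Use the temporary credentials to upload an object to the bucket.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
s3.put_object(Bucket=BUCKET, Key="example-audit-log.csv", Body=b"request_time,...\n")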