
Set up Audit Logs


What are audit logs?

Audit logs are currently in the private early access stage, and only selected customers have access to the feature. This stage is a testing phase: changes are expected, so do not rely on this feature for production use cases.

Audit logs allow customers to track and view all the changes made in their organization. They provide visibility and are useful for investigating an incident or getting a detailed report on relevant events (such as changes to roles and permissions, users invited, spaces deleted, etc.).

NOTE: This is only available on specific plans. Reach out to your Sales representative for more information.

The audit logs feature securely transfers this information to your own storage (an AWS S3 bucket or Azure Blob Store), ensuring that you have a clear and accessible history of actions for monitoring and analysis purposes.

Audit log delivery

During the private early access phase, audit logs are shipped to your AWS S3 bucket or Azure Blob Store. Storing the audit logs in storage that you own gives you control and lets you ensure that the logs are kept for as long as necessary. Storing the data in your own storage has the following benefits:

  • Consistency: You can apply the same rules and policies to audit logs as you do to other similar data, and control who has access to them.
  • Data retention: You can store the logs for as long as your company's compliance requirements demand.
  • Data analysis: You can serve the data to the analysis tools you already use.

All delivered audit logs are provided in CSV format for compatibility and ease of analysis. Files are named `contentful-audit-unstable-beta-ORGANIZATION-ID-YYYYYYMMDDTHHmmsssssZ.csv`.

Audit logs are updated and delivered on a daily schedule.

Events captured by the audit log

Audit logs capture actions on the following entities:

  • Spaces
  • Environments
  • UI config
  • Content model templates
  • References across spaces
  • Space enablements
  • Editor interface
  • UI Extensions
  • Entries
  • Assets
  • Locales
  • Tags
  • Webhooks
  • Roles
  • Snapshots
  • Space membership
  • API Keys
  • Comments
  • Workflows
  • Tasks
  • Releases
  • App installations

Actions logged for each entity:

  • Update of entities
  • Deletion of entities

Event details

Notes on the private early access phase and breaking changes:

  • Unstable schema: The current schema is unstable and subject to change. Breaking changes are anticipated as the service evolves.
  • Integration advisory: Clients are strongly advised against building production integrations that rely on the current schema due to its instability.
  • Exclusions: The service does not currently log details of POST requests, which means that most actions that create entities are not tracked for now.

Each audit log record contains the following fields:

  • request_time: The time when the action occurred.
  • request_method: The HTTP method used in the request. Included: PUT, DELETE, PATCH. Not included: GET and POST.
  • request_path: The full path that was called in the request.
  • request_query: The query parameters the request was called with; can be used to determine how the state was potentially altered.
  • response_status: The HTTP response code of the request. Can be used to determine whether a request was successful.
  • content_length: The number of bytes returned in the response.
  • space: The ID of the space this request was sent to.
  • route: Similar to request_path but without params; only includes the route structure.
  • referrer: The URL the request came from.
  • actor_id: The user or app ID that made the request. Example: `user:2YVRzNgF2sE64ooav1eKSd`, `app:6zsefpijez5t/master/klhjl46h34j5hlh46`
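
As a quick illustration of working with these fields, the sketch below parses a small hand-made CSV sample shaped like the field list above and flags non-2xx responses. The sample rows and the exact header layout of delivered files are assumptions for illustration, not real delivered logs:

```python
import csv
import io

# Hypothetical sample matching the documented fields; the exact header
# layout of delivered files is an assumption of this sketch.
sample = """request_time,request_method,request_path,request_query,response_status,content_length,space,route,referrer,actor_id
2024-01-15T09:30:00Z,DELETE,/spaces/abc123/environments/staging,,204,0,abc123,/spaces/:id/environments/:env_id,https://app.contentful.com,user:2YVRzNgF2sE64ooav1eKSd
2024-01-15T10:02:11Z,PUT,/spaces/abc123/roles/editor,,403,85,abc123,/spaces/:id/roles/:role_id,https://app.contentful.com,user:2YVRzNgF2sE64ooav1eKSd
"""

rows = list(csv.DictReader(io.StringIO(sample)))

# Flag failed requests (non-2xx response_status) for investigation.
failed = [r for r in rows if not r["response_status"].startswith("2")]
for r in failed:
    print(r["request_time"], r["request_method"], r["request_path"], r["response_status"])
```

Because the logs are plain CSV, the same kind of filtering can be done in whatever analysis tooling you already use.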

Requirements

  1. AWS or Azure account: An active AWS or Azure account is necessary.

Disclaimer: As this service is in the private early access stage, customers should exercise caution and avoid relying on the current schema for critical integrations. The service is expected to undergo significant changes in the near future.

For further details, please contact our support.

Stopping the audit logs delivery

To stop the delivery of the audit logs, please contact our support.

Audit logs set up

To set up your infrastructure to receive Audit Logs, you will need to make some configuration changes and share some information with Contentful.

The exact process depends on your storage provider. Please follow the appropriate guide below:

Audit logs AWS Configuration

As part of enabling audit log shipping to your AWS S3 bucket, you need to create an AWS IAM role that Contentful can assume. This will allow Contentful to securely transfer audit logs to your AWS S3 bucket without the need to store any credentials.

Prerequisites

  • An AWS account with permissions to create IAM roles and edit S3 bucket policies.
  • Contentful's AWS account ID: 606137763417.

Step 1: Create an S3 Bucket

  1. Log in to your AWS Management Console.
  2. Navigate to S3, click Create bucket.
  3. Enter a unique bucket name and select the region where you want the bucket to reside. Note: you will need to enter this name later.
  4. Configure options as required (e.g., versioning, logging, tags).
  5. Review and create the bucket.

Step 2: Create a New IAM Policy

  1. Log in to your AWS Management Console.
  2. Navigate to IAM -> Policies -> Create policy.
  3. Select the JSON tab and paste the following policy, replacing `<Your-S3-Bucket-Name>` with the name of your S3 bucket (from Step 1). Make sure to keep the /* at the end:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::<Your-S3-Bucket-Name>/*"
    }
  ]
}
  4. Click Next, give it a meaningful name and description, and then click Create.

Step 3: Create a New IAM Role for Cross-Account Access

  1. In the IAM dashboard, go to Roles -> Create role.
  2. Select AWS Account under the "Trusted entity type" section, then in the section below select Another AWS account and enter Contentful's AWS account ID: 606137763417.
  3. Enable the option Require external ID and insert your Contentful organization ID. The primary function of the external ID is to address and prevent the confused deputy problem. You can find the organization ID in the Contentful web app.
  4. Click Next and skip attaching permissions policies for now (the policy created in Step 2 will be attached in the next step).
  5. Review, name the role, and then create it.
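
When you complete these selections, the console generates the role's trust policy for you. As a rough sketch of what that trust relationship looks like (the exact document the console produces may differ), it can also be built programmatically; the organization-ID value below is an illustrative placeholder:

```python
import json

CONTENTFUL_ACCOUNT_ID = "606137763417"  # Contentful's AWS account ID, from the prerequisites
ORG_ID = "<Your-Contentful-Organization-ID>"  # placeholder, not a real ID

# Sketch of a cross-account trust policy with a required external ID
# (the safeguard against the confused deputy problem mentioned above).
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{CONTENTFUL_ACCOUNT_ID}:root"},
            "Action": "sts:AssumeRole",
            "Condition": {"StringEquals": {"sts:ExternalId": ORG_ID}},
        }
    ],
}

print(json.dumps(trust_policy, indent=2))
```

The external ID condition means Contentful can only assume the role when it presents your organization ID, so no other Contentful customer's configuration can be pointed at your bucket.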

Step 4: Attach the Policy to the IAM Role

  1. Go to the newly created role in IAM -> Roles.
  2. Under "Permissions" in the Add permissions dropdown, click Attach policies.
  3. Find the policy you created in Step 2, select it, and then click Add permissions.

Step 5: Configure Your S3 Bucket Policy

  1. Go to S3, find your bucket from Step 1, and then click Permissions.
  2. Edit the Bucket policy and add the following statement, replacing `<Your-IAM-Role-ARN>` with the ARN of the IAM role you created in Step 3 and `<Your-S3-Bucket-Name>` with the name of your S3 bucket. Make sure to keep the /* at the end of the bucket ARN:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "<Your-IAM-Role-ARN>"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::<Your-S3-Bucket-Name>/*"
    }
  ]
}
  3. Save the changes.

Step 6: Provide Contentful with the Necessary Information

Send the following details to Contentful:

  • Your AWS account ID.
  • The ARN of your S3 Bucket.
  • The ARN of the IAM role you created.
  • AWS Region.
  • Your Contentful organization ID (the one you used as the external ID in Step 3).
  • Any specific paths or prefixes within your S3 bucket where logs should be placed.

By following these steps, you've securely enabled Contentful to ship logs to your AWS S3 bucket. Contentful will use AWS STS to assume the role you've created, ensuring a secure and efficient transfer of audit log data.

Audit Logs Azure Configuration

As part of enabling audit log shipping to your Azure Blob Storage container, you need to create a Shared Access Signature (SAS) user that Contentful can use. This will allow Contentful to securely transfer audit logs directly to your Azure Storage Account container. This guide will help you create a Shared Access Signature (SAS) user specifically for Contentful.

A shared access signature (SAS) provides secure delegated access to resources in your storage account. With a SAS, you have granular control over how a client can access your data. For example:

  • What resources the client may access

  • What permissions they have to those resources

  • How long the SAS is valid

We will use RSA 4096 encryption to secure your SAS Token in-transit and at rest. Access to decrypt is provided only to the Audit Logging system and a small number of people who manage the Audit Logging service. For more information please contact us via support.

Prerequisites

  1. An Azure account with access to create a Blob Store and a SAS Token

  2. Access to a linux/bsd command line

  3. OpenSSL (v3.2.0 or above recommended) with the pkeyutl module

    a. Try openssl version to validate the version number

    b. Try openssl pkeyutl to validate the pkeyutl module

  4. The Contentful public key used for encryption (see below)

Step 1: Create an Azure Storage Account

  1. Log in to your Azure Portal

  2. Navigate to Storage Accounts -> Create

  3. Select the Subscription under which to create the Storage Account

  4. Select or Create the Resource Group for the Storage Account

  5. Enter a unique storage account name and select the region where you want the account to reside. Note this name; you will need it later

  6. Configure options as required (e.g., performance, redundancy, etc)

  7. Click Review + create

  8. On the Review + create page, check that everything is correct and, if you're satisfied, click Create to create the storage account

Step 2: Create a Container

  1. Log in to your Azure Portal

  2. Navigate to Storage Accounts and click the one you created in Step 1 to open it

  3. On the left sidebar, under Data storage, click Containers

  4. In the top toolbar click the + Container button to create a new container

  5. Enter a unique container name. Note this name; you will need it later

  6. Configure options as required (e.g., encryption scope, versioning, etc)

  7. Review and click Create to create the container

Step 3: Create the SAS Token

  1. Log in to your Azure Portal

  2. Navigate to Storage Accounts and click the one you created in Step 1 to open it

  3. On the left sidebar, under Data storage, click Containers

  4. Click the name of the container you created in the steps above

  5. On the left sidebar, under Settings, click Shared access tokens

  6. Select the Permissions dropdown, deselect Read, then select Create and Write

  7. Set an expiry date that complies with your secret rotation policy

  8. Click Generate SAS token and URL

  9. Copy the value of the Blob SAS URL field that's displayed. You will use this URL in the next steps
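
Before moving on, you can sanity-check the generated Blob SAS URL: in Azure SAS URLs, the `sp` query parameter carries the granted permission letters (e.g. `c` for Create, `w` for Write, `r` for Read) and `se` holds the expiry time. A small sketch using a made-up URL in place of your real one:

```python
from urllib.parse import urlparse, parse_qs

# Made-up example Blob SAS URL; the real one comes from the Azure Portal.
sas_url = (
    "https://myaccount.blob.core.windows.net/audit-logs"
    "?sp=cw&st=2024-01-01T00:00:00Z&se=2024-12-31T23:59:59Z"
    "&sv=2022-11-02&sr=c&sig=REDACTED"
)

params = parse_qs(urlparse(sas_url).query)
permissions = params["sp"][0]

# Per the steps above, Create and Write should be granted and Read should not.
assert "c" in permissions and "w" in permissions
assert "r" not in permissions
print("permissions:", permissions, "expires:", params["se"][0])
```

This is purely a local check; nothing is sent anywhere, and the signature itself is never inspected.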

Step 4: Encrypt the SAS Token

Option 1: Use the Command Line and OpenSSL to encrypt your SAS Token

Public Keys

Public key for EU data residency:

-----BEGIN PUBLIC KEY-----
MIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu4E2RnFXoFdUl4WrbSzf
ZOsxk6S1Ir4ntGG4qiefhlWMJ+Ix7VDXm9jo3p7xvm//3I2BjZLZKe5hPwGGbuDk
kFwX3pVUY4TW3OV/fBP6qZmkOT2o1BI/FJH88tZdq5400pf37GaPBDfkZ10KRAn8
Y1kUS6fU7XT9Aa7BOyUTVLNOoo/A9QN9dfslPzL9ypZy8gZoboRJxKEAeHjGcysE
uqw2yhG3zJ+hnGnG4SP53Ltx0t4Byi/+TdG1d0CT6NF9nV9PeD8RvHDcJbVuNQu1
Q5hp72UhdGvKQrpm1AXTbQXaJydWmdardp2RrHyfHXGAU8ItRX+DscQbI32lk0sD
xeCFGWFRvIEl+NdGWUbq81ayaxou7LHord7xDzSwlZlxuFwazkvzwslqXNnroNTo
B5XldsYq3jftN+t0+Uran39JtTZUOFkRhtY74uIKtV/NJgD53CYkPpIwCDupXUCp
CXa+tshYgCi2qPul2ShqDs2r2f+uwAQfbndwzlCAnADPL+Gg+M7sZrEOYsLGWAB9
b6S9/gYVLE9vs3b3FTWMLLScfHD7jN/za19r1W22soK/tmq4KBrQLZ3S6l+4AeWy
HvQIxbJgGYkA6V/bDVOy3AeK1BgeKWkeAhi3UNAcQrjqBUkviwyIlMoV0kISFAFi
f848xOTgJVC1IbNBTmW9H3cCAwEAAQ==
-----END PUBLIC KEY-----

Public key for US/global data residency (most customers):

-----BEGIN PUBLIC KEY-----
MIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAsfMbvs0JwvAFKiyM6HPp
xxti7sLAxENAGykPkfr7Y5BomohXtgoApwXebfiFB1492s7dQuOiaU5wNVW8xjjG
cIxYBUVaa1+CSSy3by6LoWcoqbqsVk4ec1D//SMwkxo77HZwwIII1DiIQbcTOC8/
tEv2YeGxtEI9A9C8SqTSjqz6dd9AY0rpKI3038RvrblLyj8hUj86zuTRB2wMgj9Q
EQb0mzEIuPeDznHR/LRpqkWx12q0w5NzPygezThmLXq4OdJKM+qHPEThFATWrZZi
ZP1z+A6QsBMOianJ1ltE8V6D3+o8y/2BqkmMlhwZxSKyYLGVGpwSWKbLBLod78G9
BLQcf9FxjzNFrd+WUNrwLnENX5OV6beJzpJPk79j9smMHgldf0Iy6aVRVQKlQMat
rBkwaqQN0kNafpwHpMwj7GFvt+yEDbq0lh5f18j28tq2heaRnbR6Sjb/pqn0tuoA
EgUI85D/BkYkBCEVB9I5vNtJovJP9B2mj93ZVaqBCijCQ2Rkmu8igSD/zEKnlEJQ
F2fl+LCgdOuZJpO7FNuWNUrgX38M/t7ZWgtXjZUc+ct6+Zs1zjYh5D8MvuldG6Gw
TBfpiEVXMsluM4avp+17rqBtkgGMLGdXPaIfR3WC0IaakMzUsWlEd8/3/JR8dol3
40qKGlC3ALrjjxqHRGfGCu8CAwEAAQ==
-----END PUBLIC KEY-----

Steps

  1. Using the command line, create a directory to save any artefacts used during this process

  2. cd into the working directory

  3. Save the required public key (from above) into a file in the directory called contentful-audit-public-key.pem

  4. Create a new file called sas-token.plain in the directory

  5. Amend the sas-token.plain file to contain your SAS Token, being careful to remove any leading or trailing white space, including line feed characters

  6. You should now have a directory containing:

contentful-audit-public-key.pem
sas-token.plain
  7. Encrypt the SAS Token using OpenSSL:
openssl pkeyutl -encrypt \
  -in sas-token.plain \
  -out sas-token.enc \
  -pubin -keyform PEM \
  -inkey contentful-audit-public-key.pem \
  -pkeyopt rsa_padding_mode:oaep \
  -pkeyopt rsa_oaep_md:sha256
  8. Encode the SAS Token using Base64 so that it can be easily submitted to Contentful:
cat sas-token.enc | base64 > sas-token.enc.b64
  9. Display the content of sas-token.enc.b64 to share with Contentful:
cat sas-token.enc.b64
The base64 encoded SAS Token can be safely submitted in clear text to Contentful. It can only be decrypted to the original SAS Token once we have it, using our secure private key.
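
If you prefer to script parts of this, the whitespace trimming required when creating sas-token.plain and the Base64 encoding of the encrypted output can also be done in Python. The token and ciphertext below are placeholders; the actual encryption still happens with the openssl command above:

```python
import base64
from pathlib import Path

# Write the SAS token with surrounding whitespace (including trailing
# newlines) stripped, as required. The token here is a placeholder.
raw_token = "  https://myaccount.blob.core.windows.net/audit-logs?sp=cw&sig=REDACTED\n"
Path("sas-token.plain").write_text(raw_token.strip())

# After running the openssl command, Base64-encode the encrypted output
# so it can be pasted into the sign-up form. Placeholder bytes stand in
# for the contents of sas-token.enc here.
encrypted = b"\x00\x01placeholder-ciphertext"
encoded = base64.b64encode(encrypted).decode("ascii")
Path("sas-token.enc.b64").write_text(encoded)
print(encoded)
```

Stripping the whitespace matters: a stray trailing newline would be encrypted along with the token and make the decrypted value invalid.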

Option 2: Use this page to encrypt your SAS Token

All encryption takes place on this page; your SAS Token will not be stored or sent to us. Enter your SAS Token, select your Data Residency Region, and click the Encrypt button to generate the SAS Token Cipher. Pass the resulting text to Contentful in the SAS Token field of the Audit Logging Beta sign-up form.