Set up Audit Logs
Table of contents
- What are audit logs?
- Audit log delivery
- Event details
- Events captured by the audit log
- Requirements
- Audit logs set up
What are audit logs?
Audit logs allow customers to track and view all the changes made in their organization. They provide visibility and are useful for investigating an incident or getting a detailed report on relevant events (such as changes to roles and permissions, users invited, spaces deleted, etc.).
The audit logs feature securely transfers this information to your own storage (an AWS S3 bucket or Azure Blob Store), ensuring that you have a clear and accessible history of actions for monitoring and analysis purposes.
Audit log delivery
During the private early access phase, audit logs are shipped to your AWS S3 bucket or Azure Blob Store. Storing the audit logs in storage that you own gives you control and lets you retain them for as long as necessary. Keeping the data in your own storage has the following benefits:
- Consistency: You can apply the same rules and policies to this data as you do for other similar data, and control who has access to it.
- Data retention: You can store it for as long as you need to maintain compliance for your company.
- Data analysis: You can feed this data into the tools you already use for analysis.
All delivered audit logs are provided in CSV format for compatibility and ease of analysis. The file naming format is `contentful-audit-unstable-beta-ORGANIZATION-ID-YYYYYYMMDDTHHmmsssssZ.csv`.
Events captured by the audit log
| Entities | Actions logged |
| --- | --- |
| Field | Description |
| --- | --- |
| request_time | The time when the action occurred. |
| request_method | The HTTP method used in the request. Included: PUT, DELETE, PATCH. Not included: GET and POST. |
| request_path | The full path that was called in the request. |
| request_query | The query parameters the request was called with; can be used to determine how the state was potentially altered. |
| response_status | The HTTP response code of the request. Can be used to determine whether a request was successful. |
| content_length | The number of bytes returned in the response. |
| space | The space ID this request was sent to. |
| route | Similar to request_path, but without parameters; it contains only the route structure. |
| referrer | The URL the request came from. |
| actor_id | The user or app ID that made the request. Example: `user:2YVRzNgF2sE64ooav1eKSd`, `app:6zsefpijez5t/master/klhjl46h34j5hlh46` |
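Because the delivered files are plain CSV, they can be queried with standard command-line tools. The sketch below filters DELETE requests from a small sample file; it assumes the columns appear in the order listed in the table above, which you should verify against an actual delivered file.

```shell
# Create a small sample file mirroring the fields in the table above.
# (The values are illustrative, not real audit log data.)
cat > sample-audit.csv <<'EOF'
request_time,request_method,request_path,request_query,response_status,content_length,space,route,referrer,actor_id
2024-05-01T10:00:00Z,PUT,/spaces/abc/entries/1,,200,512,abc,/spaces/:id/entries/:id,https://app.contentful.com,user:2YVRzNgF2sE64ooav1eKSd
2024-05-01T10:05:00Z,DELETE,/spaces/abc/entries/1,,204,0,abc,/spaces/:id/entries/:id,https://app.contentful.com,user:2YVRzNgF2sE64ooav1eKSd
EOF

# Print the time, path, and actor of every DELETE request
awk -F',' '$2 == "DELETE" { print $1, $3, $10 }' sample-audit.csv
# prints: 2024-05-01T10:05:00Z /spaces/abc/entries/1 user:2YVRzNgF2sE64ooav1eKSd
```

The same pattern extends to any investigation, e.g. filtering by `space` or `actor_id` instead of `request_method`.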
Requirements
- AWS or Azure account: An active AWS or Azure account is necessary.
For further details, please contact our support.
Stopping the audit log delivery
To stop the delivery of audit logs, please contact our support.
Audit logs set up
To set up your infrastructure to receive audit logs, you will need to make some configuration changes and share some information with Contentful.
The exact process depends on your storage provider. Please follow the appropriate guide below:
Audit logs AWS Configuration
As part of enabling audit log shipping to your AWS S3 bucket, you need to create an AWS IAM role that Contentful can assume. This will allow Contentful to securely transfer audit logs to your AWS S3 bucket without the need to store any credentials.
Prerequisites
- An AWS account with permissions to create IAM roles and edit S3 bucket policies.
- Contentful's AWS account ID:
606137763417
.
Step 1: Create an S3 Bucket
- Log in to your AWS Management Console.
- Navigate to S3, click Create bucket.
- Enter a unique bucket name and select the region where you want the bucket to reside. Note: you will need to enter this name later.
- Configure options as required (e.g., versioning, logging, tags).
- Review and create the bucket.
Step 2: Create a New IAM Policy
- Log in to your AWS Management Console.
- Navigate to IAM -> Policies -> Create policy.
- Select the JSON tab and paste the following policy, replacing `<Your-S3-Bucket-Name>` with the name of your S3 bucket (from Step 1). Make sure to keep the `/*` at the end:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::<Your-S3-Bucket-Name>/*"
    }
  ]
}
```
- Click Next, give it a meaningful name and description, and then click Create.
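If you prefer to prepare the policy document on the command line, the sketch below fills in the bucket name and sanity-checks the result before you paste it into the console. `my-audit-log-bucket` is a placeholder; substitute your own bucket name.

```shell
# Placeholder bucket name; replace with your bucket from Step 1
BUCKET_NAME="my-audit-log-bucket"

# Template the policy document with the bucket name substituted in
cat > contentful-audit-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::${BUCKET_NAME}/*"
    }
  ]
}
EOF

# Sanity-check that the result is valid JSON before pasting it into the console
python3 -m json.tool contentful-audit-policy.json > /dev/null && echo "policy OK"
```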
Step 3: Create a New IAM Role for Cross-Account Access
- In the IAM dashboard, go to Roles -> Create role.
- Select AWS Account under the "Trusted entity type" section, then in the section below select Another AWS account and enter Contentful's AWS account ID: `606137763417`.
- Enable the option Require external ID and insert your Contentful organization ID. The primary function of the external ID is to prevent the confused deputy problem. You can find the organization ID in the Contentful web app.
- Click Next and skip attaching permissions policies for now (the policy created in Step 2 is attached in the next step).
- Review, name the role, and then create it.
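For reference, the trust policy that the console generates for this role should look roughly like the following sketch, with `<Your-Contentful-Organization-ID>` standing in for the external ID you entered:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::606137763417:root" },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": { "sts:ExternalId": "<Your-Contentful-Organization-ID>" }
      }
    }
  ]
}
```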
Step 4: Attach the Policy to the IAM Role
- Go to the newly created role in IAM -> Roles.
- Under "Permissions" in the Add permissions dropdown, click Attach policies.
- Find the policy you created in Step 2, select it, and then click Add permissions.
Step 5: Configure Your S3 Bucket Policy
- Go to S3, find your bucket from Step 1, and then click Permissions.
- Edit the Bucket policy and add the following statement, replacing `<Your-IAM-Role-ARN>` with the ARN of the IAM role you created in Step 3 and `<Your-S3-Bucket-Name>` with the name of your S3 bucket. Make sure to keep the `/*` at the end of the bucket ARN:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "<Your-IAM-Role-ARN>"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::<Your-S3-Bucket-Name>/*"
    }
  ]
}
```
- Save the changes.
Step 6: Provide Contentful with the Necessary Information
Send the following details to Contentful:
- Your AWS account ID.
- The ARN of your S3 Bucket.
- The ARN of the IAM role you created.
- AWS Region.
- Your Contentful organization ID (the one you used as the external ID in Step 3).
- Any specific paths or prefixes within your S3 bucket where logs should be placed.
By following these steps, you've securely enabled Contentful to ship logs to your AWS S3 bucket. Contentful will use AWS STS to assume the role you've created, ensuring a secure and efficient transfer of audit log data.
Audit Logs Azure Configuration
As part of enabling audit log shipping to your Azure Blob Storage container, you need to create a Shared Access Signature (SAS) token that Contentful can use. This allows Contentful to securely transfer audit logs directly to your Azure Storage Account container. This guide walks you through creating a SAS token specifically for Contentful.
A shared access signature (SAS) provides secure delegated access to resources in your storage account. With a SAS, you have granular control over how a client can access your data. For example:
- What resources the client may access
- What permissions they have to those resources
- How long the SAS is valid
We use RSA 4096 encryption to secure your SAS Token in transit and at rest. Access to decrypt it is granted only to the Audit Logging system and a small number of people who manage the Audit Logging service. For more information, please contact us via support.
Prerequisites
- An Azure account with access to create a Blob Store and a SAS Token
- Access to a linux/bsd command line
- OpenSSL (v3.2.0 or above recommended) with the `pkeyutl` module
  a. Run `openssl version` to validate the version number
  b. Run `openssl pkeyutl` to validate the `pkeyutl` module
- The Contentful public key used for encryption (see below)
Step 1: Create an Azure Storage Account
- Log in to your Azure Portal.
- Navigate to Storage Accounts -> Create.
- Select the Subscription under which to create the Storage Account.
- Select or Create the Resource Group for the Storage Account.
- Enter a unique storage account name and select the region where you want the account to reside. Note: you will need this name later.
- Configure options as required (e.g., performance, redundancy, etc.).
- Click Review + create.
- On the Review + create page, check that everything is correct and, if you're satisfied, click Create to create the storage account.
Step 2: Create a Container
- Log in to your Azure Portal.
- Navigate to Storage Accounts and click the one you created in Step 1 to open it.
- On the left sidebar, under Data storage, click Containers.
- In the top toolbar, click the + Container button to create a new container.
- Enter a unique container name.
- Configure options as required (e.g., encryption scope, versioning, etc.).
- Review and click Create to create the container.
Step 3: Create the SAS Token
- Log in to your Azure Portal.
- Navigate to Storage Accounts and click the one you created in Step 1 to open it.
- On the left sidebar, under Data storage, click Containers.
- Click the name of the container you created in Step 2.
- On the left sidebar, under Settings, click Shared access tokens.
- Select the Permissions dropdown, deselect Read, then select Create and Write.
- Set an expiry date that complies with your secret rotation policy.
- Click Generate SAS token and URL.
- Copy the value of the Blob SAS URL field that's displayed. You will use this URL in the next steps.
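The SAS token itself is the query string at the end of the Blob SAS URL. The sketch below extracts it on the command line; the URL shown is a made-up placeholder, so use the one you copied from the portal.

```shell
# Placeholder Blob SAS URL; replace with the value copied from the Azure Portal
SAS_URL='https://myaccount.blob.core.windows.net/audit-logs?sp=cw&st=2024-05-01T00:00:00Z&se=2024-11-01T00:00:00Z&sv=2022-11-02&sr=c&sig=EXAMPLESIGNATURE'

# Everything after the first "?" is the SAS token itself
SAS_TOKEN=$(printf '%s' "$SAS_URL" | cut -d'?' -f2)
echo "$SAS_TOKEN"
# prints: sp=cw&st=2024-05-01T00:00:00Z&se=2024-11-01T00:00:00Z&sv=2022-11-02&sr=c&sig=EXAMPLESIGNATURE
```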
Step 4: Encrypt the SAS Token
Option 1: Use the Command Line and OpenSSL to encrypt your SAS Token
Public Keys
Public key for EU data residency:
```
-----BEGIN PUBLIC KEY-----
MIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu4E2RnFXoFdUl4WrbSzf
ZOsxk6S1Ir4ntGG4qiefhlWMJ+Ix7VDXm9jo3p7xvm//3I2BjZLZKe5hPwGGbuDk
kFwX3pVUY4TW3OV/fBP6qZmkOT2o1BI/FJH88tZdq5400pf37GaPBDfkZ10KRAn8
Y1kUS6fU7XT9Aa7BOyUTVLNOoo/A9QN9dfslPzL9ypZy8gZoboRJxKEAeHjGcysE
uqw2yhG3zJ+hnGnG4SP53Ltx0t4Byi/+TdG1d0CT6NF9nV9PeD8RvHDcJbVuNQu1
Q5hp72UhdGvKQrpm1AXTbQXaJydWmdardp2RrHyfHXGAU8ItRX+DscQbI32lk0sD
xeCFGWFRvIEl+NdGWUbq81ayaxou7LHord7xDzSwlZlxuFwazkvzwslqXNnroNTo
B5XldsYq3jftN+t0+Uran39JtTZUOFkRhtY74uIKtV/NJgD53CYkPpIwCDupXUCp
CXa+tshYgCi2qPul2ShqDs2r2f+uwAQfbndwzlCAnADPL+Gg+M7sZrEOYsLGWAB9
b6S9/gYVLE9vs3b3FTWMLLScfHD7jN/za19r1W22soK/tmq4KBrQLZ3S6l+4AeWy
HvQIxbJgGYkA6V/bDVOy3AeK1BgeKWkeAhi3UNAcQrjqBUkviwyIlMoV0kISFAFi
f848xOTgJVC1IbNBTmW9H3cCAwEAAQ==
-----END PUBLIC KEY-----
```
Public key for US/global data residency (most customers):
```
-----BEGIN PUBLIC KEY-----
MIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAsfMbvs0JwvAFKiyM6HPp
xxti7sLAxENAGykPkfr7Y5BomohXtgoApwXebfiFB1492s7dQuOiaU5wNVW8xjjG
cIxYBUVaa1+CSSy3by6LoWcoqbqsVk4ec1D//SMwkxo77HZwwIII1DiIQbcTOC8/
tEv2YeGxtEI9A9C8SqTSjqz6dd9AY0rpKI3038RvrblLyj8hUj86zuTRB2wMgj9Q
EQb0mzEIuPeDznHR/LRpqkWx12q0w5NzPygezThmLXq4OdJKM+qHPEThFATWrZZi
ZP1z+A6QsBMOianJ1ltE8V6D3+o8y/2BqkmMlhwZxSKyYLGVGpwSWKbLBLod78G9
BLQcf9FxjzNFrd+WUNrwLnENX5OV6beJzpJPk79j9smMHgldf0Iy6aVRVQKlQMat
rBkwaqQN0kNafpwHpMwj7GFvt+yEDbq0lh5f18j28tq2heaRnbR6Sjb/pqn0tuoA
EgUI85D/BkYkBCEVB9I5vNtJovJP9B2mj93ZVaqBCijCQ2Rkmu8igSD/zEKnlEJQ
F2fl+LCgdOuZJpO7FNuWNUrgX38M/t7ZWgtXjZUc+ct6+Zs1zjYh5D8MvuldG6Gw
TBfpiEVXMsluM4avp+17rqBtkgGMLGdXPaIfR3WC0IaakMzUsWlEd8/3/JR8dol3
40qKGlC3ALrjjxqHRGfGCu8CAwEAAQ==
-----END PUBLIC KEY-----
```
Steps
- Using the command line, create a directory to save any artefacts used during this process
- `cd` into the working directory
- Save the required public key (from above) into a file in the directory called `contentful-audit-public-key.pem`
- Create a new file called `sas-token.plain` in the directory
- Amend the `sas-token.plain` file to contain your SAS Token, being careful to remove any leading or trailing white space, including line feed characters
- You should now have a directory containing:
  - `contentful-audit-public-key.pem`
  - `sas-token.plain`
- Encrypt the SAS Token using OpenSSL:

```shell
openssl pkeyutl -encrypt \
  -in sas-token.plain \
  -out sas-token.enc \
  -pubin -keyform PEM \
  -inkey contentful-audit-public-key.pem \
  -pkeyopt rsa_padding_mode:oaep \
  -pkeyopt rsa_oaep_md:sha256
```
- Encode the encrypted SAS Token using Base64 so that it can be easily submitted to Contentful:

```shell
cat sas-token.enc | base64 > sas-token.enc.b64
```

- Display the content of `sas-token.enc.b64` to share with Contentful:

```shell
cat sas-token.enc.b64
```
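If you want to confirm that your OpenSSL build supports the OAEP parameters used above before encrypting the real token, the sketch below round-trips a dummy token through a locally generated throwaway keypair. The real encryption uses Contentful's public key; this is only a local check, and the dummy token value is made up.

```shell
# Generate a throwaway RSA 4096 keypair (NOT Contentful's key)
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:4096 -out test-key.pem 2>/dev/null
openssl pkey -in test-key.pem -pubout -out test-key.pub.pem

# A dummy token stands in for the real SAS token
printf '%s' 'sv=2022-11-02&sig=DUMMY' > token.plain

# Encrypt with the same OAEP/SHA-256 parameters as the real command
openssl pkeyutl -encrypt \
  -in token.plain \
  -out token.enc \
  -pubin -keyform PEM \
  -inkey test-key.pub.pem \
  -pkeyopt rsa_padding_mode:oaep \
  -pkeyopt rsa_oaep_md:sha256

# Decrypt with the private half and compare against the original
openssl pkeyutl -decrypt \
  -in token.enc \
  -out token.dec \
  -inkey test-key.pem \
  -pkeyopt rsa_padding_mode:oaep \
  -pkeyopt rsa_oaep_md:sha256

cmp -s token.plain token.dec && echo "round-trip OK"
```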
Option 2: Use the interactive form on this page to encrypt your SAS Token
Enter your SAS Token, select your data residency region, and click Encrypt to generate the SAS Token cipher to share with Contentful.