Automated migrations with periodic metric checks

Database migrations applied to tables with millions of rows can run for a long time and can cause spikes or sustained increases in CPU, RAM, or IO usage (or all of them combined). In some cases, the migration combined with other concurrent database activity is enough to bring the database into an unhealthy state (e.g. statements timing out due to the load on the DB).

You can imagine that for a content platform such as Contentful, we want to prevent any performance issue in the databases which store our customers' content. The databases are automatically monitored 24/7, and when they enter an anomalous state, on-call engineers are paged to check the alert and take the appropriate measures. A migration could trigger some of these alerts, so to avoid unnecessary pages and ensure the availability and performance of the databases, the engineer running the migration had to be ready to stop it before alerts fired and restart it once it was considered safe to do so again. This process worked well until our scale made it unsustainable:

  • The migration could only be rolled out to a handful of database instances at a time because one person can't monitor multiple metrics across hundreds of instances. Even when running the migration on a handful of instances, I was sometimes late in stopping it and an alert was fired.

  • Keeping track of the status of the migration on each database instance is busy work which doesn't add any value.

  • The migration could only run while an engineer was overseeing it. Migrations were run during office hours, losing the possibility of running during low-traffic periods (when engineers like to sleep).

The gist of the problem was that relying on an engineer to constantly monitor the migration execution wasn't sustainable, and it brought no joy to the engineers (I can attest to that, as I ran many of these migrations) or value to the company (my team would rather have me working on something else). Conceptually, the solution to these problems is simple and mirrors exactly what an engineer did before: automate the periodic metric checks, stop the migration if some thresholds are crossed, and restart it only after the metrics have returned to safe values.

Some context

Our migration files are Node.js modules (exporting up and down functions) which are executed by a runner in our migrations SDK. The following diagram shows the main components of the old SDK: a thin migration runner which would take the migration and execute it until it finished or failed.

Graph depicting the separation of migration code vs migration runner

The solution

Let's start by enhancing the previous diagram with the key additions to our database migrations SDK. As before, at the top is the database migration the engineer wants to execute on the databases, and as before, its execution is controlled by the migration runner. The difference is that the runner now relies on two new components: the metrics service and the stoppable migration. These two components are what make sure that database migrations don't harm the databases' availability or performance.

Illustration of the solution to connect migration code and the migration runner

We will cover both components in the following sections.

Read metric. Analyze metric. Signal result.

For the first release of the new version of the database migrations SDK, the scope was to automatically monitor the same metrics the engineer was watching before (CPU, memory, and IO usage) while making it possible to extend this set in the future with things like API error rate, execution time of other concurrent jobs in the databases, etc. Due to this requirement, we designed the metrics service to delegate most of the work to what we called metric providers and to use the data returned from them to determine whether the migration could continue running.

typescript
export interface MetricsService {
  pollAllMetrics: (startTime: Timestamp, endTime: Timestamp) => Promise<Metric[]>
  analyzeAllMetrics: (metrics: Metric[]) => Signal
}

All that the metrics service knows about the metric providers is the interface they must implement. This allows us to create providers which are specific to one metrics source (e.g. CloudWatch) without having to change anything on the metrics service.

typescript
export interface MetricsProvider<T extends Metric> {
  pollMetrics: MetricDataSource<T>
  analyzeMetric: (metric: T) => Signal
  canAnalyzeMetric: (metric: Metric) => metric is T
}

For example, below is a snippet from the analyzeMetric method in the CloudWatch metrics provider. analyzeMetric takes the result of the pollMetrics function (not included in the snippet) and the thresholds we have configured for the migration (i.e. which values are considered safe for each migration). If, for example, the measurement for the CPU usage or the one for FreeableMemory is outside of the safe values, a NoGo signal will be returned to the metrics service, which in turn will forward it to the migrations runner.

typescript
function analyzeMetric(
  metric: DBMetric,
  thresholds: MetricsAnalysisThresholds
): Signal {
...
  switch (metric.type) {
    case DBMetricTypes.CPU:
      if (measure.value > thresholds.nogo.cpuOver) {
        const signal = makeNoGoSignal({
          measure,
          threshold: thresholds.nogo.cpuOver,
          reason: `CPU usage "${measure.value}" is over the threshold "${thresholds.nogo.cpuOver}"`,
        })
        return signal
      }
      return makeGoSignal()
    case DBMetricTypes.FreeableMemory:
      if (measure.value < thresholds.nogo.freeableMemoryBelow) {
        const signal = makeNoGoSignal({
          measure,
          threshold: thresholds.nogo.freeableMemoryBelow,
          reason: `The current value of FreeableMemory "${measure.value}" is below the threshold "${thresholds.nogo.freeableMemoryBelow}"`,
        })
        return signal
      }
      return makeGoSignal()
  }
}

The value returned from the analyzeMetric method in the metrics providers is a Signal. A Signal can be a Go (continue execution) or NoGo (stop execution) decision that a provider made after analyzing the polled metrics. The metrics service also returns a Signal from its analyzeAllMetrics method, which derives from the signals returned by the providers. We decided to use a Signal for two reasons:

  • The migration runner, the providers and the metrics service only share a very slim type, the Signal, which reduces the chances of coupling across different layers. At the same time, it is more meaningful than returning a primitive like a boolean or a string value.

  • The logic in both the metrics service and the runner is easier to read and understand since the dependencies on other parts of the SDK are smaller.

typescript
enum SignalKind {
  Go = 'go',
  NoGo = 'nogo',
}

interface Signal {
  kind: SignalKind
  // The details property can be used to store information to be included in logs or notifications.
  details?: any
}

The migration runner schedules a periodic call to the metrics service to determine whether the migration has to be stopped.
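As an illustration of how the pieces above could fit together, here is a sketch of one possible analyzeAllMetrics implementation: each metric is dispatched to the first provider that can analyze it, and a single NoGo from any provider wins. This aggregation rule and all helper names are assumptions, not the actual Contentful implementation; the Signal types are repeated from above to keep the sketch self-contained.

```typescript
enum SignalKind {
  Go = 'go',
  NoGo = 'nogo',
}

interface Signal {
  kind: SignalKind
  details?: any
}

interface Metric {
  type: string
}

// Only the analysis half of the provider interface is needed for this sketch.
interface MetricsProvider<T extends Metric> {
  analyzeMetric: (metric: T) => Signal
  canAnalyzeMetric: (metric: Metric) => metric is T
}

// One plausible aggregation rule: any NoGo stops the migration; only when
// every metric analyzes as Go does the migration keep running.
function analyzeAllMetrics(
  providers: Array<MetricsProvider<Metric>>,
  metrics: Metric[]
): Signal {
  for (const metric of metrics) {
    const provider = providers.find((p) => p.canAnalyzeMetric(metric))
    const signal = provider?.analyzeMetric(metric)
    if (signal?.kind === SignalKind.NoGo) {
      return signal
    }
  }
  return { kind: SignalKind.Go }
}
```

A metric with no matching provider is simply skipped here; a stricter service could instead treat an unanalyzable metric as a NoGo.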

Stoppable migrations

Continuing where we left off in the previous section, the migration runner must stop a migration if it receives a NoGo signal from the metrics service. This raises the obvious question: how do you stop a running migration? What we needed was to prevent a migration from executing more operations on the database and only allow them again once the migration runner received a Go signal. One way to stop executing database operations would have been to terminate the job that runs the migration, but that would have also terminated the metrics service and the migration runner. If that code was terminated, we wouldn't have a way to restart the migration after the metric values went back to safe levels, so we had to find a way to handle stopping and starting the migration from within the running job.

To better explain how we solved this, I'm going to use an example migration like the following one:

typescript
export async function up (db: DatabaseClient): Promise<void> {
  while (true) {
    const migrated = await db.tx(async (tx: ITx) => {
      const { rows } = await tx.result(`
        SELECT employees.name, employees.surname, titles.name AS title
        FROM employees JOIN titles ON employees.id = titles.employee_id
        WHERE employees.signature IS NULL
        ORDER BY employees.id DESC
        LIMIT 50000
      `);

      if (rows.length === 0) {
        return 0
      }

      const values = ... // do some transformation on the rows returned from the previous query
      const valuesForUpdate = values.map((val) => `(${val})`)
      await tx.query(`
        UPDATE employees SET signature = new_vals.signature
        FROM (VALUES ${valuesForUpdate}) AS new_vals (signature)
        WHERE signature IS NULL
      `)

      // Get the count of how many rows are left to migrate
      const result = await tx.query(`
        SELECT count(*) AS remaining
        FROM employees
        WHERE signature IS NULL
      `)

      console.log(`There are still "${result[0].remaining}" rows to be migrated`)
      return rows.length
    })

    if (migrated === 0) {
      return
    }
  }
}

This example migration reads some data from the tables employees and titles to generate the value of a recently added signature column. These operations will be repeated in a loop until there is no more data to transform (i.e. there are no more rows where the signature column is null). Now imagine that the migration runner receives a NoGo signal while the migration is counting the number of remaining rows to migrate. Because of that, the migration shouldn't execute the SELECT query to read the name, surname and title name on the next loop iteration but instead do it only once a Go signal is received.

Our first attempt was to decorate the database client the migration uses (the db argument the migration function receives) and, inside the decorator, control whether to forward calls to the real client or block them.

typescript
function createWrappedDBClient (db: DatabaseClient): { wrapped: DatabaseClient, enableStop: () => void, disableStop: () => void } {
  let deferrable = makeDeferrable()
  let mustStop = false

  const enableStop = (): void => {
    mustStop = true
  }
  const disableStop = (): void => {
    mustStop = false
    deferrable.resolve()
    deferrable = makeDeferrable()
  }

  const wrapped = {
    query: async (sql: string): Promise<IResult> => {
      if (mustStop) {
        await deferrable.promise
      }

      return db.query(sql)
    },
    ... // other decorated methods
  }

  return { wrapped, enableStop, disableStop }
}

The createWrappedDBClient function returns the decorated client and two helper functions (enableStop and disableStop) that enable and disable the forwarding of calls to the real database client. After enableStop is called, the next call to any decorated method will block on the deferrable, which will only be resolved, allowing the call to be forwarded to the real client, after a call to disableStop. This approach works: the migration runner can use enableStop to prevent the migration from making more queries to the database after a NoGo signal is received and disableStop to allow the queries to go through again.
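The makeDeferrable helper the decorator relies on isn't shown in the post. A minimal sketch (the Deferrable shape is an assumption) could look like this: it exposes a promise together with the function that resolves it, so one party can await the promise while another decides when to release it.

```typescript
// Hypothetical sketch of the Deferrable helper assumed by the post.
interface Deferrable {
  promise: Promise<void>
  resolve: () => void
}

function makeDeferrable(): Deferrable {
  // Capture the resolver from the Promise executor so callers can
  // resolve the promise from the outside.
  let resolve!: () => void
  const promise = new Promise<void>((res) => {
    resolve = res
  })
  return { promise, resolve }
}
```

With this helper, a blocked `await deferrable.promise` in the decorated client is released the moment the runner calls `deferrable.resolve()`.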

Blocking calls this way has, however, a big drawback. To explain why, we have to talk about transaction isolation in Postgres. With the READ COMMITTED isolation level, if a transaction in the application code wants to update or delete a row which has been modified by an unfinished transaction in the migration, the call made from the application has to wait until the transaction in the migration is committed or rolled back. In the example migration, if we blocked after updating the employees table but before the transaction finished, we could be blocking other concurrent application transactions, and that was something we couldn't permit.

We modified the database client decorator: instead of blocking before forwarding the call to the real db client, we now throw an exception, which implicitly rolls back any in-flight transaction and prevents any more calls from being sent to the database. The rest of the decorator is conceptually the same, still returning two functions, now called enableThrowing and disableThrowing, which control the two states of the decorator.

typescript
function createWrappedDBClient(
  db: ITask<Promise<void>>
): { wrapped: MigrationDBClient; enableThrowing: () => void; disableThrowing: () => void } {
  let mustThrow = false

  const enableThrowing = (): void => {
    mustThrow = true
  }
  const disableThrowing = (): void => {
    mustThrow = false
  }

  const wrapped: MigrationDBClient = {
    result<T = any>(query: QueryParam, values?: any, cb?: (value: IResultExt) => T, thisArg?: any): Promise<T> {
      if (mustThrow) {
        throw new RejectedDBClientUsageError('result', [query, values])
      }

      return db.result(query, values, cb, thisArg)
    },
    // other wrapped methods
    ...
  }

  return { wrapped, enableThrowing, disableThrowing }
}

Complementary to the changes to the decorator, we added a function that leverages the exception thrown from the decorated client (RejectedDBClientUsageError) to know when the migration has to be stopped in a controlled way.

  • Only exit if the migration has been fully executed or an exception different to RejectedDBClientUsageError is thrown from the migration code.

  • If a RejectedDBClientUsageError has been thrown, the function will block and not execute the migration again until the resume helper is used.

typescript
function createAbortableMigration(
  migration: Migration,
  db: ITask<Promise<void>>,
): { run: () => Promise<void>, abortMigration: () => void, restartMigration: () => void } {
  const { wrapped, enableThrowing, disableThrowing } = createWrappedDBClient(db)
  let deferrable: Deferrable | null = null

  const run = async (): Promise<void> => {
    let running = true

    while (running) {
      try {
        await deferrable?.promise
        await migration(wrapped)
        running = false
      } catch (e) {
        if (e.kind !== RejectedDBClientUsageError.kind) {
          throw e
        }
      }
    }
  }

  const abortMigration = (): void => {
    if (deferrable) {
      return
    }

    deferrable = makeDeferrable()

    /*
     * After calling "enableThrowing" the next call
     * to any db client method will throw a "RejectedDBClientUsageError".
     *
     * That together with the just created `deferrable` will stop execution
     * of the migration
     */
    enableThrowing()
  }

  const restartMigration = (): void => {
    /*
     * After calling "disableThrowing" calls to any
     * db client method won't throw a "RejectedDBClientUsageError"
     */
    disableThrowing()

    /*
     * Resolve the promise that keeps the migration from running
     * in the "run" function
     */
    deferrable?.resolve()
    deferrable = null
  }

  return { run, abortMigration, restartMigration }
}

Now the migration runner can use the abortMigration function to prevent the migration from making more queries to the database after a NoGo signal is received and the restartMigration function to resume it.

Putting all the pieces together

Everything we have explained so far comes together in the migration runner. The runner starts the periodic polling and analysis of the metrics (via the metrics service) and also creates a stoppable migration. It then executes the migration and uses the abortMigration and restartMigration functions to control its execution based on the signals the metrics service periodically returns.

typescript
if (signal.kind === SignalKind.NoGo) {
  logger.info({
    msg: 'Pausing migration',
    details: signal.details,
  })
  abortMigration()
}

if (signal.kind === SignalKind.Go) {
  logger.info('Resuming migration')
  restartMigration()
}
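The snippet above reacts to a single signal. For illustration, that decision could be factored into a small helper and driven by a timer; everything here except SignalKind, abortMigration and restartMigration is an assumption about how the runner might be wired, not the actual implementation.

```typescript
// Hypothetical sketch of the periodic check driving the stoppable migration.
enum SignalKind {
  Go = 'go',
  NoGo = 'nogo',
}

interface Signal {
  kind: SignalKind
  details?: any
}

// Decide what to do with one signal from the metrics service. Because
// abortMigration and restartMigration are idempotent (see the deferrable
// guard above), it is safe to call them on every tick.
function handleSignal(
  signal: Signal,
  abortMigration: () => void,
  restartMigration: () => void
): void {
  if (signal.kind === SignalKind.NoGo) {
    abortMigration()
  } else {
    restartMigration()
  }
}

// The runner could then poll on a timer, e.g.:
// setInterval(async () => {
//   const metrics = await metricsService.pollAllMetrics(start, end)
//   const signal = metricsService.analyzeAllMetrics(metrics)
//   handleSignal(signal, abortMigration, restartMigration)
// }, pollIntervalMs)
```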

Did we solve the problems we had? Let's see:

  • "The migration could only be rolled out at once to a handful of the database instances because one person can't monitor multiple metrics across hundreds of instances." We can now roll out the migration to all database instances at once. On each instance, the migration SDK will automatically start and stop the migration as many times as necessary depending on the metric values.

  • "Keeping track of the status of the migration on each database instance is busywork which doesn't add any value." No need to do this anymore: the migration SDK knows when the migration has finished and when it still has to run.

  • "The migration can only run when there is an engineer overseeing it." Now the migration can run 24/7.

This is the first step to improve the developer experience of the engineers who have to run database migrations and to ensure that the availability and performance of the content databases, a core component of Contentful, are not affected. There are still rough edges and many ideas for how to make this even more awesome to use. If you would like to work on projects like this, go check out our careers page. We are hiring!
