Platform Updates - March 25 - API updates

By François for the platform team

tl;dr

This week, the team is fully back on the Prismic DB migration.

- We have postponed last week's goal of releasing a new version of the Core API to this week.

- Understand whether we can massively parallelize the migration tasks.

- Release a new version of the Custom Type API supporting the new database.

Improving our dry run tooling

Because migrating a database is a critical path where we cannot afford to fail, we built a safety system called the shadowing system. It allows us to start using the new database without sunsetting the previous one, so we can keep comparing results between the two.
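As a rough illustration of the pattern rather than our actual implementation, a shadowed read can serve the legacy result while comparing it against the new database in the background. The interface, names, and comparison below are assumptions:

```typescript
// Hypothetical shadowing read: answer from the legacy database, query the
// new database in parallel, and report any mismatch between the two.
interface DocumentStore {
  getDocument(id: string): Promise<unknown>;
}

async function shadowedGetDocument(
  legacy: DocumentStore,    // current source of truth
  candidate: DocumentStore, // new database under evaluation
  id: string,
): Promise<unknown> {
  const legacyResult = await legacy.getDocument(id);

  // Compare in the background; the new database must never affect
  // the response returned to the caller.
  candidate
    .getDocument(id)
    .then((candidateResult) => {
      if (JSON.stringify(candidateResult) !== JSON.stringify(legacyResult)) {
        console.warn(`Shadow mismatch for document ${id}`);
      }
    })
    .catch((error) => console.warn(`Shadow read failed for ${id}`, error));

  return legacyResult;
}
```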

Over the last few weeks, we focused on assets to enable the new media library. This week we are getting back to the main migration.

There are two possible axes for improvement:

- migrating data with no errors,

- and migrating data with speed.

This week, we want to work on both of these axes: understand how to increase the speed of the migration, since we will have more data/entities (documents, locales, history, custom types, etc.) to migrate than we did for the asset migration, and make progress on fixing errors in the entity migration without necessarily finishing everything this week.
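To make the speed axis more concrete, here is a hypothetical sketch, not the actual migration code, of running entity migrations in parallel while collecting failures instead of aborting on the first error; the function names, concurrency level, and error handling are assumptions:

```typescript
// Hypothetical parallel migration: several workers pull entity ids from a
// shared queue and migrate them concurrently, collecting failures for review.
async function migrateAll(
  entityIds: string[],
  migrateOne: (id: string) => Promise<void>,
  concurrency = 8,
): Promise<string[]> {
  const queue = [...entityIds];
  const failed: string[] = [];

  // Each worker keeps taking ids until the queue is empty.
  const workers = Array.from({ length: concurrency }, async () => {
    for (let id = queue.shift(); id !== undefined; id = queue.shift()) {
      try {
        await migrateOne(id);
      } catch {
        failed.push(id); // record the error and keep migrating the rest
      }
    }
  });

  await Promise.all(workers);
  return failed; // ids that will need a retry or a manual fix
}
```

The point of the sketch is simply that more data to move calls for more concurrency, while errors are collected rather than stopping the whole run.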

Product adaptation before the actual migration

Last week, one of our goals was to define a strategy for implementing a new search engine. As a result, we identified actions that need to happen before the migration. We will go through a transition period where the previous search engine runs on top of the new database, and keep the majority of the search effort for after the actual migration.

This week, as part of our effort to adapt our system to the new database, we will release a new version of the Core API and a new version of the Custom Type API. These releases will give both APIs the ability to communicate with the new database as well as the legacy one.
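A minimal sketch of what "talking to both databases" can look like at the code level, with the repository interface and configuration flag being assumptions for illustration rather than the actual API internals:

```typescript
// Hypothetical dual-backend wiring: the API code depends on a single
// repository interface, and configuration decides which database backs it.
interface CustomTypeRepository {
  findById(id: string): Promise<unknown>;
  save(customType: unknown): Promise<void>;
}

function selectRepository(
  legacy: CustomTypeRepository, // backed by the legacy database
  next: CustomTypeRepository,   // backed by the new database
  backend: "legacy" | "new",
): CustomTypeRepository {
  // During the transition the flag can be flipped per environment (or even
  // per request) without touching any API code above this layer.
  return backend === "new" ? next : legacy;
}
```

Keeping the switch behind a single interface is what makes the shadowing mode described below cheap to turn on and off.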

As soon as the entire system is able to communicate with the new database, we will start the shadowing mode, i.e. migrate the data and compare the results between the new and the legacy databases. This will allow us to run the new system without taking the risk of a major incident involving data corruption if an issue occurs.