GCP Postgres CloudSQL Instance — Upgrade

Ritresh Girdhar
3 min read · Feb 11, 2022

Sharing my learnings — upgrading a GCP Cloud SQL Postgres instance with no export/import and no downtime, using logical replication (CDC)


Sharing my experience of upgrading a GCP Cloud SQL instance from Postgres 9.6 to Postgres 13.4.

The standard upgrade path (https://cloud.google.com/sql/docs/postgres/upgrade-major-db-version) involves the tasks below:

  • Create a new instance with version 13.4
  • Export data from the old version
  • Save it to cloud storage
  • Import it into a new version

These steps sound easy, but in practice they require downtime, as exporting and importing ~40 GB of data takes time.

The answer to the above problem is “Logical replication”.

Here, I am going to explain how I did that upgrade via logical replication, without any downtime or interruption to the service.

Pre-requisite — both SQL instances must be in the same GCP project. You can create them via Terraform or manually.

Left — Primary SQL instance & Right — Logical SQL instance

Steps:

  • Enable the Postgres logical replication flag on both instances: cloudsql.enable_pglogical=on
  • Connect to the instances via a SQL client and execute the command below on both instances:
CREATE EXTENSION pglogical;
  • Create a worker node on each instance.
primary-test$ SELECT pglogical.create_node(
node_name := 'primary',
dsn := 'host=<primary-ip> port=5432 dbname=postgres user=replication_user password=secret'
);
logical-test$ SELECT pglogical.create_node(
node_name := 'replica',
dsn := 'host=<replica-ip> port=5432 dbname=postgres user=replication_user password=secret'
);
  • Connect to the logical-test instance and create a subscription:
SELECT pglogical.create_subscription(
subscription_name := 'test_sub',
provider_dsn := 'host=<primary-ip> port=5432 dbname=postgres user=replication_user password=secret'
);

SELECT * FROM pglogical.show_subscription_status('test_sub');
  • Export the DDL from the primary-test instance and import it into the logical-test instance. For example — creating a table on both instances:
CREATE TABLE replica_test (id SERIAL PRIMARY KEY, data text);
  • Connect to the primary-test instance and add the table to the default replication set.
SELECT pglogical.replication_set_add_table('default', 'replica_test', true);
  • Connect to the primary-test instance and insert a few values
> INSERT INTO replica_test (data) VALUES ('BatMan'),('SuperMan'),('IronMan');
> INSERT 0 3
  • Connect to the logical-test instance and run a select query
> select * from replica_test;
 id |   data
----+----------
  1 | BatMan
  2 | SuperMan
  3 | IronMan
(3 rows)
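Before switching the application over, it is worth sanity-checking that the initial sync has caught up. A minimal check, using the same subscription and table names as above ('replicating' is the status pglogical reports for a healthy subscription):

logical-test$ SELECT subscription_name, status FROM pglogical.show_subscription_status('test_sub');
logical-test$ SELECT count(*) FROM replica_test;  -- compare against the same count on primary-test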

This way, without any export/import or application changes, I completed the DB upgrade and simply updated the application's DB endpoint configuration to point to the new database.

Clean up — behind the scenes, pglogical uses CDC (similar to the outbox pattern). Once the data is migrated and the application is tested, delete the subscription and the worker nodes.
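A minimal cleanup sketch, assuming the node and subscription names used above — drop the subscription first on logical-test, then drop each worker node on its own instance:

logical-test$ SELECT pglogical.drop_subscription('test_sub');
logical-test$ SELECT pglogical.drop_node('replica');
primary-test$ SELECT pglogical.drop_node('primary');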

Note: keep in mind that the Superuser and Replication roles must be assigned to the user you are using for replication.
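For example, the replication attribute can be granted like this (the user name is illustrative; on Cloud SQL you cannot grant real SUPERUSER, so the cloudsqlsuperuser role plays that part):

primary-test$ ALTER USER replication_user WITH REPLICATION;
primary-test$ GRANT cloudsqlsuperuser TO replication_user;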

Do also read about GCP's SQL Migration Job; we were not able to use it for unrelated reasons, but it provides similar functionality out of the box.

Thanks for reading!
