
The Ultimate Guide to Upgrading Your PostgreSQL Installation: From 17 to 18
PostgreSQL major version upgrades are one of those tasks that every DBA has to deal with regularly. They are routine, but they are also full of small, potentially dangerous details that can turn a straightforward maintenance window into an incident. Having performed hundreds of upgrades across different environments over the years, I want to share a comprehensive, practical guide to upgrading from PostgreSQL 17 to 18, with particular focus on what has changed and what has finally improved in the upgrade process itself.
This article is based on my PGConf.EU 2024 talk, updated to cover the PostgreSQL 17→18 upgrade path and the significant improvements that landed in version 18. This is also the time of year when we usually recommend that our customers upgrade: the current release, 18.3, is stable enough.
Why Upgrades Matter
Let’s start with a reality check. PostgreSQL major versions are supported for five years. If you are running a version that is past its end-of-life, you are exposed to unpatched security vulnerabilities and bugs that the community will never fix. But even within the support window, newer versions bring performance improvements, new features, and better tooling. The question is not whether to upgrade, but how to do it safely and with minimal downtime.
The upgrade itself is not rocket science. The tricky part is the combination of small details: replication slots, extension compatibility, configuration drift between old and new clusters, statistics collection, and the behaviour of your connection pooler during the switchover. Any one of these can bite you if you are not paying attention.
Preparation
Read release notes. If you want to jump from, let’s say, version 13 to version 18, read them all. There could be manual steps you need to perform.
Plan and test. Upgrades are simple and straightforward, even major ones. The problem is the small details that are easy to overlook: collation changes, checksums, extensions, a forgotten checkpoint, or customised statistics targets can all make your life complicated.
Three Upgrade Methods
There are three fundamentally different approaches to performing a PostgreSQL major version upgrade. Each has its own trade-offs, and the right choice depends on your environment, your downtime budget, and the size of your database.
1. pg_dump / pg_restore
This is the oldest and simplest method. You dump the entire database from the old cluster and restore it into a fresh cluster running the new version. It is conceptually straightforward and works across any version gap, for example, you can jump from PostgreSQL 12 straight to 18 if you want.
The downsides are significant for large databases. You need roughly double the disk space (old cluster plus the dump/new cluster), and the downtime is proportional to the size of your data. For a multi-TB database, this can mean hours or even days of downtime. On the other hand, for small databases or development environments, dump/restore is perfectly fine and sometimes the easiest option.
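To make the trade-off concrete, here is a quick back-of-the-envelope downtime estimate for dump/restore. The sizes and throughput numbers below are purely illustrative assumptions; substitute measurements from your own hardware:

```shell
# Rough dump/restore downtime estimate. All numbers are illustrative
# assumptions, not benchmarks; measure your own dump/restore throughput.
DB_SIZE_GB=2000      # logical size of the database
DUMP_MBPS=150        # sustained pg_dump throughput, MB/s
RESTORE_MBPS=100     # sustained restore throughput, MB/s (index builds included)

dump_s=$(( DB_SIZE_GB * 1024 / DUMP_MBPS ))
restore_s=$(( DB_SIZE_GB * 1024 / RESTORE_MBPS ))
echo "estimated dump: $(( dump_s / 3600 ))h, restore: $(( restore_s / 3600 ))h"
```

Even with optimistic throughput, a 2 TB database lands in the multi-hour range, which is exactly why dump/restore rarely fits a production maintenance window.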
One thing to remember: after a dump/restore, your new cluster has no optimiser statistics at all. You must run ANALYZE on every table before the planner can produce reasonable query plans. On large databases, this analysis pass itself can take a significant amount of time. If you use partitioning, plan the analyze pass with extra care: statistics for partitioned parent tables are only gathered by an explicit ANALYZE, because autovacuum does not process the parents themselves.
2. pg_upgrade
pg_upgrade is the purpose-built tool for major version upgrades. It works by replacing the system catalog (the pg_catalog schema) of the old cluster with one compatible with the new version, while keeping your actual data files in place. The key insight is that PostgreSQL’s on-disk data format rarely changes between major versions — it is the system catalog structure that evolves.
pg_upgrade supports two primary modes:
Copy mode creates a full copy of your data files. This is the safe option — if something goes wrong, your old cluster is untouched and you can simply start it back up. The downside is that you need enough disk space for a complete copy, and the process takes time proportional to the amount of data. I personally use this mode quite rarely: if you can afford a full copy, you can probably also afford pg_dump and pg_restore, which is simpler.
Link mode (--link) creates hard links to the existing data files instead of copying them. This is dramatically faster — the upgrade of a multi-terabyte database can complete in seconds rather than hours. The catch is that once you start the new cluster, the old data files are modified, and there is no going back. If you discover a problem after starting the upgraded cluster, you cannot roll back to the old version without restoring from backup.
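If hard-link semantics are unfamiliar, this tiny self-contained experiment (no PostgreSQL involved, just a temporary directory) shows why link mode is both instantaneous and irreversible: both names point to the same file on disk, so once the new cluster writes through its link, the "old" copy is modified too:

```shell
# Demonstrate hard-link semantics: two names, one inode.
tmp=$(mktemp -d)
echo 'page data' > "$tmp/old_relfile"      # stands in for a PG 17 data file
ln "$tmp/old_relfile" "$tmp/new_relfile"   # what pg_upgrade --link does, instantly

# "PG 18" writes through its link...
echo 'modified by PG 18' > "$tmp/new_relfile"

# ...and the old cluster's file is changed as well: no rollback possible.
cat "$tmp/old_relfile"
```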
PostgreSQL 18 introduces a third option: swap mode (--swap). This swaps the data directories between the old and new clusters. It is potentially the fastest mode, as it avoids both copying data files and creating hard links. The performance gain for an average database is probably quite small, but if you have a very large number of database objects, which is sometimes the case, it can be crucial.
In practice, for production upgrades, I recommend link mode with a plan B (promoting a physical replica if something goes wrong) and a plan C (a full pgBackRest backup taken immediately before the upgrade). This gives you the speed of link mode with a reliable, verified safety net you can restore from if anything goes wrong. pgBackRest supports parallel backup and restore, incremental backups, and backup verification — everything you need for a proper production rollback strategy.
3. Logical Replication
For environments where even a few minutes of downtime is unacceptable, logical replication provides a near-zero-downtime upgrade path. You set up the new cluster as a logical replica of the old one, let it catch up, then switch your application to point at the new cluster.
This approach requires more setup and has more moving parts. You need to handle DDL changes carefully (logical replication does not replicate DDL), you need to manage sequences, and you need a reliable switchover procedure. But for large, high-availability databases, it is often the only viable option. Sometimes the preparation phase for an upgrade with logical replication can be quite long and very expensive. This happens when the database schema was designed without logical replication in mind and must be reworked before logical replication can function (for example, every replicated table needs a replica identity, typically a primary key, before UPDATEs and DELETEs on it can be replicated).
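Sequences are a particularly easy thing to forget, since logical replication does not carry them over. Here is a minimal sketch of one common approach, assuming you have already captured each sequence's name and last_value from the old primary into a CSV (the file path and the +1000 safety margin are illustrative assumptions):

```shell
# Generate setval() statements for the new cluster from captured sequence values.
# The +1000 margin guards against writes that happened after the values were read.
cat > /tmp/seq_values.csv <<'EOF'
orders_id_seq,100057
customers_id_seq,4312
EOF

awk -F, '{ printf "SELECT setval('\''%s'\'', %d);\n", $1, $2 + 1000 }' /tmp/seq_values.csv
```

Feed the resulting statements to psql against the new cluster just before the switchover.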
The main advantage over pg_upgrade is that the new cluster is fully warmed up and serving traffic within seconds of the switchover. With pg_upgrade, even though the data files are in place, the buffer cache is cold, and the system needs time to warm up.
The Big Change in PostgreSQL 18: Statistics Preservation
For years, one of the most painful aspects of pg_upgrade has been the loss of optimiser statistics. After an upgrade, pg_stats was empty. The planner had no idea about data distribution, cardinality, or correlation until you ran ANALYZE on every table.
For small databases, this was an inconvenience — run ANALYZE and wait a few minutes. For large databases with hundreds of tables, some containing billions of rows and dozens of columns, the post-upgrade analyze could take hours. During this time, the planner was essentially flying blind, producing suboptimal query plans that could get a busy production system into serious trouble.
PostgreSQL 18 changes this fundamentally. The new version of pg_upgrade preserves optimizer statistics by default. When you run pg_upgrade, the statistics from your old cluster’s pg_stats — the n_distinct values, null fractions, average widths, most common values, histograms — are all carried over to the new cluster. The planner can produce good query plans immediately after the upgrade, without waiting for an analyze pass.
There are some caveats. Extended statistics (those created with CREATE STATISTICS) are not yet preserved — this is a limitation of the current implementation. And a new --no-statistics flag has been added if you explicitly want to skip statistics transfer (for example, if you want to start fresh for some reason).
Additionally, PostgreSQL 18 adds the --missing-stats-only flag to vacuumdb, which lets you analyze only those tables and columns that lack statistics. This is a smart complement to the statistics preservation feature — after an upgrade, you can run vacuumdb --all --analyze-in-stages --missing-stats-only to fill in any gaps (such as extended statistics) without re-analyzing tables that already have perfectly good statistics from the old cluster.
Other pg_upgrade Improvements in PostgreSQL 18
Beyond statistics preservation, PostgreSQL 18 brings several other improvements to the upgrade process:
Parallel database checks via --jobs: The per-database checks that pg_upgrade performs (data type compatibility, subscription state validation, relation information gathering, extension update detection, etc.) can now run in parallel across databases using libpq asynchronous connection API. On clusters with many databases, this significantly reduces the time spent in the checking phase.
The --swap option: As mentioned earlier, this new mode swaps data directories between old and new clusters, potentially offering the fastest upgrade path for environments where directory renaming is cheap (which it is on most filesystems).
pg_dump --statistics and --statistics-only: These new pg_dump options allow you to dump and restore optimizer statistics independently. This opens up interesting workflows — for example, you could dump statistics from a production database and load them into a staging environment to get realistic query plans without copying the actual data.
Practical Walkthrough: Upgrading from PostgreSQL 17 to 18
Let me walk through a complete upgrade using pg_upgrade with link mode on a Debian/Ubuntu system. All commands below were tested on an actual GCP Compute Engine instance running Ubuntu 22.04.
Step 1: Install Both Versions
First, ensure you have the PGDG repository configured and both PostgreSQL versions installed:
# Add PGDG repository (if not already configured)
sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" > /etc/apt/sources.list.d/pgdg.list'
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc |
sudo apt-key add -
sudo apt-get update
# Install PostgreSQL 18
sudo apt-get install -y postgresql-18
After installation, verify both clusters are present:
$ pg_lsclusters
Ver Cluster Port Status Owner Data directory
17 main 5432 online postgres /var/lib/postgresql/17/main
18 main 5433 down postgres /var/lib/postgresql/18/main
Step 2: Verify Pre-Upgrade Statistics (Baseline)
Before the upgrade, let us capture the current optimizer statistics so we can verify they survive the upgrade:
$ sudo -u postgres psql -p 5432 -d upgrade_test -c \
"SELECT relname, n_live_tup, last_analyze
FROM pg_stat_user_tables;"
relname | n_live_tup | last_analyze
---------+------------+-------------------------------
orders | 100000 | 2026-03-18 10:09:53.771214+00
$ sudo -u postgres psql -p 5432 -d upgrade_test -c \
"SELECT tablename, attname, n_distinct, null_frac, avg_width
FROM pg_stats WHERE tablename = 'orders' ORDER BY attname;"
tablename | attname | n_distinct | null_frac | avg_width
-----------+----------------+------------+-----------+-----------
orders | amount | -0.54117 | 0 | 6
orders | customer_name | -1 | 0 | 14
orders | id | -1 | 0 | 4
orders | order_date | 1 | 0 | 8
These are the values we want to see preserved after the upgrade.
Step 3: Stop Both Clusters
pg_upgrade requires both the old and new clusters to be stopped:
sudo pg_ctlcluster 17 main stop
sudo pg_ctlcluster 18 main stop
In real production, the order can differ. For example, you can make all the preparations first, then issue a CHECKPOINT command so that Postgres stops faster, and so on.
Step 4: Enable Data Checksums on the Old Cluster
Starting with PostgreSQL 18, pg_createcluster enables data page checksums by default on Debian/Ubuntu. If your PostgreSQL 17 cluster was created without checksums (which was the default before PG 18), pg_upgrade will refuse to proceed:
old cluster does not use data checksums but the new one does
Failure, exiting
The correct solution is to enable checksums on the old cluster using pg_checksums. The cluster must be stopped first:
sudo pg_ctlcluster 17 main stop
sudo -u postgres /usr/lib/postgresql/17/bin/pg_checksums \
  --enable -D /var/lib/postgresql/17/main
This is not just a workaround for the upgrade — checksums are your first line of defense against silent data corruption, and the performance overhead at runtime is negligible on modern hardware. If you have been running without checksums, the upgrade is a good opportunity to finally enable them.
Step 4.5: Dry Run with --check
Before performing the actual upgrade, always run pg_upgrade with the --check flag first. This performs all pre-flight consistency checks without touching any data:
sudo -u postgres /usr/lib/postgresql/18/bin/pg_upgrade \
  --old-datadir=/var/lib/postgresql/17/main \
  --new-datadir=/var/lib/postgresql/18/main \
  --old-bindir=/usr/lib/postgresql/17/bin \
  --new-bindir=/usr/lib/postgresql/18/bin \
  --link --check \
  -o '-c config_file=/etc/postgresql/17/main/postgresql.conf' \
  -O '-c config_file=/etc/postgresql/18/main/postgresql.conf'
If all checks pass, you know the real upgrade will succeed. If something fails — incompatible data types, missing extensions, replication slot issues — you can fix it before committing to the upgrade window. In production, I always run --check during a preparation phase days before the actual maintenance window. That said, --check is not a 100% guarantee that the actual upgrade will go smoothly; there are corner cases it cannot detect.
Step 5: Run pg_upgrade
Now run the upgrade with link mode. On Debian/Ubuntu, you need to pass the configuration file paths explicitly because the data directory layout differs from the upstream default:
cd /tmp
sudo -u postgres /usr/lib/postgresql/18/bin/pg_upgrade \
  --old-datadir=/var/lib/postgresql/17/main \
  --new-datadir=/var/lib/postgresql/18/main \
  --old-bindir=/usr/lib/postgresql/17/bin \
  --new-bindir=/usr/lib/postgresql/18/bin \
  --link \
  --jobs=2 \
  -o '-c config_file=/etc/postgresql/17/main/postgresql.conf' \
  -O '-c config_file=/etc/postgresql/18/main/postgresql.conf'
The output walks you through each phase:
Performing Consistency Checks
=============================
Checking cluster versions ok
Checking database connection settings ok
Checking database user is the install user ok
Checking for prepared transactions ok
Checking for contrib/isn with bigint-passing mismatch ok
Checking for valid logical replication slots ok
Checking for subscription state ok
Checking data type usage ok
Checking for objects affected by Unicode update ok
Checking for not-null constraint inconsistencies ok
Creating dump of database schemas ok
...
Performing Upgrade
==================
Setting locale and encoding for new cluster ok
Analyzing all rows in the new cluster ok
Freezing all rows in the new cluster ok
...
Linking user relation files ok
Setting next OID for new cluster ok
Sync data directory to disk ok
Creating script to delete old cluster ok
Checking for extension updates ok
Upgrade Complete
----------------
Notice the "Linking user relation files" line — this confirms hard links were used, making the data transfer nearly instantaneous.
The output also includes an important message:
Some statistics are not transferred by pg_upgrade.
Once you start the new server, consider running these two commands:
/usr/lib/postgresql/18/bin/vacuumdb --all --analyze-in-stages --missing-stats-only
/usr/lib/postgresql/18/bin/vacuumdb --all --analyze-only
Step 6: Start the New Cluster and Verify
sudo pg_ctlcluster 18 main start
Now, the moment of truth — let us check if the statistics survived:
$ sudo -u postgres psql -p 5433 -d upgrade_test -c "SELECT version();"
version
--------------------------------------------------------------------------------------------------------------
PostgreSQL 18.3 (Ubuntu 18.3-1.pgdg22.04+1) on x86_64-pc-linux-gnu, compiled by gcc ... 11.4.0, 64-bit
$ sudo -u postgres psql -p 5433 -d upgrade_test -c \
"SELECT tablename, attname, n_distinct, null_frac, avg_width
FROM pg_stats WHERE tablename = 'orders' ORDER BY attname;"
tablename | attname | n_distinct | null_frac | avg_width
-----------+----------------+------------+-----------+-----------
orders | amount | -0.54117 | 0 | 6
orders | customer_name | -1 | 0 | 14
orders | id | -1 | 0 | 4
orders | order_date | 1 | 0 | 8
The statistics are identical. The n_distinct, null_frac, and avg_width values match exactly what we had before the upgrade. The planner can produce optimal query plans immediately — no waiting for a lengthy analyze pass.
It is worth noting that pg_stat_user_tables counters (like n_live_tup and last_analyze) are reset — these are cumulative activity statistics, not optimizer statistics, and they are expected to be reset during an upgrade.
Step 7: Run Post-Upgrade Maintenance
Even with statistics preserved, you should still run the recommended vacuumdb commands. The --missing-stats-only flag ensures that only tables and columns lacking statistics get analysed — tables that already have statistics from the upgrade are skipped:
$ sudo -u postgres /usr/lib/postgresql/18/bin/vacuumdb \
  -p 5433 --all --analyze-in-stages --missing-stats-only
vacuumdb: processing database "postgres": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "template1": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "upgrade_test": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "postgres": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "template1": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "upgrade_test": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "postgres": Generating default (full) optimizer statistics
vacuumdb: processing database "template1": Generating default (full) optimizer statistics
vacuumdb: processing database "upgrade_test": Generating default (full) optimizer statistics
The three-stage approach (minimal → medium → full) is designed to get some statistics in place quickly, then progressively improve them. With --missing-stats-only, the heavy lifting is already done — this pass fills in the gaps.
For large databases, collecting statistics can still take a long time. One option is to run several vacuumdb commands concurrently, one per database; if you have many databases in your cluster, that can work better than parallelising a single vacuumdb with --jobs.
Step 8: Verify Query Plans
After starting the upgraded cluster, run EXPLAIN ANALYZE on a few of your most critical queries to confirm the planner is using the preserved statistics effectively:
sudo -u postgres psql -p 5433 -d upgrade_test -c \
"EXPLAIN ANALYZE SELECT * FROM orders WHERE amount > 500;"
Check that the planner’s row estimates are close to the actual row counts. If the estimates are wildly off, it may indicate that statistics were not transferred correctly for that particular table — run ANALYZE on the affected table manually.
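Eyeballing many plans gets tedious, and the estimate-versus-actual comparison can be scripted. Below is a small sketch that extracts both row counts from a captured plan line; the sample line and the sed patterns assume the standard EXPLAIN ANALYZE text output format:

```shell
# Compare the planner's row estimate with actual rows from an EXPLAIN ANALYZE line.
line='Seq Scan on orders  (cost=0.00..1887.00 rows=49713 width=32) (actual time=0.02..11.3 rows=49850 loops=1)'

est=$(printf '%s\n' "$line" | sed -n 's/.*cost=[^ ]* rows=\([0-9]*\).*/\1/p')
act=$(printf '%s\n' "$line" | sed -n 's/.*actual time=[^ ]* rows=\([0-9]*\).*/\1/p')
echo "estimated=$est actual=$act"
```

Here the estimate is within a fraction of a percent of reality, which is what preserved statistics should give you; an order-of-magnitude gap would be the signal to re-run ANALYZE on that table.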
Rollback Strategy
With link mode, there is no going back once you start the new cluster — the shared data files are now owned by PG 18. You need a solid rollback plan before you begin.
The first option is to fail over to a replica that is still running the old version. In a typical production setup, you should have at least one streaming replica that you intentionally do not upgrade together with the primary. If something goes wrong after the upgrade, you promote this replica and point your application at it. This is the fastest rollback path — it takes seconds.
If failing over to a replica is not possible (or you do not have one), the next option is restoring from a pgBackRest backup taken before the upgrade. Make sure you take a full backup and verify it before proceeding with the upgrade. pgBackRest’s parallel restore makes this significantly faster than you might expect, even for multi-terabyte databases — but it is still slower than a replica failover, so treat it as a second line of defense.
If the upgrade fails during pg_upgrade (before you start the new cluster), you can still start the old cluster. pg_upgrade explicitly warns about this with link mode: the old cluster’s pg_control file, which pg_upgrade renames to pg_control.old, must be renamed back to pg_control first.
If you used copy mode instead of link mode, rollback is trivial — the old cluster is completely untouched and can simply be restarted.
Things to Watch Out For
Replication Slots
If your cluster uses physical replication, replication slots from the old cluster will not work on the new one. You need to drop them before upgrading and recreate them afterward. pg_upgrade checks for this and will refuse to proceed if it finds active replication slots that would be problematic.
For logical replication slots, the situation is more nuanced in PG 18 — pg_upgrade now checks for valid logical replication slots and handles them more gracefully than previous versions.
Extension Compatibility
Every extension you use must be compatible with the new PostgreSQL version. pg_upgrade checks for required shared libraries, but it cannot verify that the extension’s internal state is compatible. Always check the extension release notes before upgrading.
Common extensions like pg_stat_statements, pg_partman, and PostGIS typically release updated versions alongside new PostgreSQL releases, but you should verify this.
Configuration Drift
pg_upgrade does not copy your postgresql.conf settings. After upgrading, your new cluster will use the default PG 18 configuration, not your carefully tuned PG 17 settings. Make sure to review and port your configuration parameters to the new cluster’s configuration files before starting it.
On Debian/Ubuntu, the config files live in /etc/postgresql/<version>/main/, so you need to copy your settings from the 17 directory to the 18 directory. Pay particular attention to shared_preload_libraries, max_connections, memory-related settings, and any custom GUCs.
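One low-tech but effective trick is to diff the two configurations and review every setting that would change. The sketch below fabricates two tiny sample files so it is runnable anywhere; on a real system you would diff the actual files under /etc/postgresql/17/main/ and /etc/postgresql/18/main/:

```shell
# Fabricated sample configs for illustration only; real paths would be
# /etc/postgresql/17/main/postgresql.conf and /etc/postgresql/18/main/postgresql.conf.
mkdir -p /tmp/cfg17 /tmp/cfg18
printf 'shared_buffers = 8GB\nmax_connections = 500\nshared_preload_libraries = pg_stat_statements\n' \
  > /tmp/cfg17/postgresql.conf
printf 'shared_buffers = 128MB\nmax_connections = 100\n' \
  > /tmp/cfg18/postgresql.conf

# Settings present in the tuned 17 config but missing or different in the fresh 18 one:
diff -u /tmp/cfg18/postgresql.conf /tmp/cfg17/postgresql.conf | grep '^+[^+]'
```

A missing shared_preload_libraries entry is the classic failure mode here: the new cluster starts fine, but pg_stat_statements and friends silently stop collecting data.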
PgBouncer Integration
If you run PgBouncer in front of PostgreSQL (and you should), you need to coordinate the switchover carefully. PgBouncer gives you a clean way to drain connections before the upgrade and resume traffic afterward. Here is the procedure:
# 1. Pause PgBouncer — stops assigning new server connections,
# waits for active transactions to finish
psql -p 6432 -U pgbouncer pgbouncer -c "PAUSE;"
# 2. Verify all server connections are released
psql -p 6432 -U pgbouncer pgbouncer -c "SHOW SERVERS;"
# state should be "idle" for all, active_transactions = 0
# 3. Stop old cluster, run pg_upgrade, start new cluster
# (see Steps 3–6 above)
# 4. If the new cluster runs on a different port, update pgbouncer.ini:
# host = 127.0.0.1
# port = 5433
# Then reload the config:
psql -p 6432 -U pgbouncer pgbouncer -c "RELOAD;"
# 5. Resume PgBouncer — clients reconnect transparently
psql -p 6432 -U pgbouncer pgbouncer -c "RESUME;"
With link mode, the actual upgrade between PAUSE and RESUME typically takes under a minute even for very large databases. From the application’s perspective, it looks like a brief connection stall — no errors, no reconnection logic needed, assuming your client-side connection timeout is generous enough.
Data Corruption Detection
If your old cluster has data corruption and checksums are not enabled, pg_upgrade will happily transfer the corrupted data to the new cluster. You will not discover the problem until you try to read the affected pages. This is another reason I strongly recommend enabling data page checksums.
Before upgrading, consider running pg_amcheck on the old cluster to verify data integrity. This is especially important if you have ever experienced unexpected shutdowns, hardware failures, or if you are running on storage that does not guarantee crash consistency.
Choosing the Right Method
Here is my decision framework, refined over hundreds of production upgrades:
Use pg_dump/pg_restore when your database is small (under 50 GB), you have ample downtime, or you are skipping multiple major versions. Also appropriate for development and staging environments where downtime is not a concern.
Use pg_upgrade with link mode for most production upgrades. It is fast, well-tested, and with PG 18’s statistics preservation, the post-upgrade warm-up period is dramatically shorter. Always take a full pgBackRest backup before upgrading so you have a verified rollback path.
Use logical replication when you have near-zero downtime requirements, when you need to upgrade across many versions at once, or when you want to perform the upgrade gradually (migrating tables one at a time). The operational complexity is higher, but the flexibility is unmatched.
Conclusion
PostgreSQL 18 represents a genuine leap forward in the upgrade experience. The preservation of optimiser statistics during pg_upgrade alone eliminates what was previously one of the most time-consuming and risky parts of any major version upgrade. Combined with parallel consistency checks, the new --swap mode, and the --missing-stats-only flag for vacuumdb, the tooling has matured significantly.
The core advice remains the same as it has always been: plan carefully, test in a non-production environment first, have a rollback strategy, and coordinate with your team on the maintenance window. But with PG 18, the upgrade process itself is faster, safer, and requires less post-upgrade maintenance than ever before.
If you are still running PostgreSQL 16 or earlier, now is an excellent time to plan your upgrade path, and going straight to 18 gives you the best upgrade tooling the PostgreSQL community has ever produced.
Every environment has its own quirks: replication topologies, extension stacks, traffic patterns, downtime constraints. If you are running on managed cloud infrastructure like AWS RDS, the upgrade process is a different beast entirely, with its own set of constraints and gotchas. My colleague Marat wrote about it separately in How to Upgrade RDS PostgreSQL with Minimal Downtime.
At Data Egret, we run upgrade workshops and consulting sessions where we work through the specifics of your setup, build a step-by-step upgrade plan, and help you test it before the real maintenance window. If you would like a second pair of eyes on your upgrade strategy, feel free to reach out.

