Ben · @benjdd.com · benjdd.com · databases @planetscale.com
This is a convenient mechanism for exposing multiple versions to different transactions. It's also the reason Postgres needs VACUUM. Over time, dead row versions cause table bloat: routine VACUUM reclaims them for reuse, while actually returning the space on disk takes VACUUM FULL or tools like pg_repack.
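For the curious, here's roughly how you'd watch and fight bloat (the table name is made up):

  -- Dead tuples piling up, waiting for (auto)vacuum to reclaim them
  SELECT relname, n_live_tup, n_dead_tup
  FROM pg_stat_user_tables
  ORDER BY n_dead_tup DESC;

  VACUUM my_table;        -- reclaims dead tuples for reuse, doesn't shrink the file
  VACUUM FULL my_table;   -- rewrites the table to give space back to the OS (exclusive lock!)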
November 12, 2025 at 7:37 PM
When an existing row is updated, a new tuple is created with a new version of that row. The ID of the transaction that made the change is set as the xmax for the old version and xmin for the new version.

Which version of a row a transaction sees depends on its transaction ID and isolation level.
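You can watch this happen in psql. The xmin/xmax values live in hidden system columns on every row version (table and values here are made up for illustration):

  CREATE TABLE accounts (id int PRIMARY KEY, balance int);
  INSERT INTO accounts VALUES (1, 100);

  SELECT xmin, xmax, * FROM accounts;   -- xmin = inserting txid, xmax = 0

  UPDATE accounts SET balance = 90 WHERE id = 1;

  -- The visible tuple now carries the updater's txid as its xmin;
  -- the old tuple (no longer visible) has that same txid as its xmax.
  SELECT xmin, xmax, * FROM accounts;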
November 12, 2025 at 7:37 PM
Writing the data page itself happens later, via checkpointing, background processes like the background writer, or forced flushes when pages get evicted from memory.

The WAL is the key to all of this! It facilitates high-performance I/O and crash recovery.
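A few commands for poking at this (the checkpointer stats view name assumes Postgres 17; older versions expose similar counters in pg_stat_bgwriter):

  -- Current WAL insert position
  SELECT pg_current_wal_lsn();

  -- How often dirty pages are being flushed in bulk by the checkpointer
  SELECT * FROM pg_stat_checkpointer;

  -- Force all dirty pages out to disk right now (normally left to the background)
  CHECKPOINT;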
November 7, 2025 at 3:47 PM
(3) A new record is inserted into the memory buffer for the WAL. It contains all the information needed to reconstruct the insert.

(4) The WAL is flushed to disk (via fsync or similar) to ensure the data resides in durable storage. After this succeeds, Postgres returns success to the client.
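That flush-before-ack behavior is controlled by synchronous_commit; turning it off trades a small durability window for lower commit latency:

  SHOW synchronous_commit;        -- 'on' by default: commit waits for the WAL flush

  -- Acknowledge commits before the WAL record reaches disk (riskier, faster)
  SET synchronous_commit = off;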
November 7, 2025 at 3:47 PM
(1) Postgres receives the query and determines which data page the new row belongs on. That page may already be in memory, or Postgres may need to load it from disk or create a new one.

(2) The record is written to this page in memory only. The page is marked as “dirty”, meaning it needs to be flushed to disk eventually, but not immediately.
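You can actually see those dirty pages sitting in shared buffers with the pg_buffercache extension (a sketch; the join ignores edge cases like mapped catalogs):

  CREATE EXTENSION IF NOT EXISTS pg_buffercache;

  -- Count buffered pages per relation that are dirty, i.e. not yet flushed to disk
  SELECT c.relname, count(*) AS dirty_pages
  FROM pg_buffercache b
  JOIN pg_class c ON b.relfilenode = pg_relation_filenode(c.oid)
  WHERE b.isdirty
  GROUP BY c.relname
  ORDER BY dirty_pages DESC;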
November 7, 2025 at 3:47 PM
I know a guy at @planetscale.com who can tell you all about it.
November 2, 2025 at 2:15 PM
Ok wow, @joshmgross.com informed me that the new gp3 limit is 80k IOPS! There are certainly still concerns about latency and performance, but this is a huge upgrade.

aws.amazon.com/about-aws/wh...
Amazon EBS increases the maximum size and provisioned performance of General Purpose (gp3) volumes - AWS
November 2, 2025 at 2:15 PM
The best performer? Local NVMe disks. These are the fastest and often the most cost-effective. So long as data is replicated safely, it's a huge W for database performance.
November 2, 2025 at 1:38 PM
gp3 is more cost-effective, but is limited on IOPS (16k max) and performs worse than io2.

io2 goes all the way up to 64k IOPS (256k for block-express) and generally performs quite well, but is EXPENSIVE. Upgrading from gp3 to io2 can easily 5x storage costs.
November 2, 2025 at 1:38 PM
Interesting. Yeah I see how having enforced constraints would help the optimizer in many scenarios.
October 29, 2025 at 10:32 PM
Removing them can yield performance improvements, but it requires shifting maintenance of the relationship to the application side. Apps need to ensure their transactions always leave these relationships in a valid state.
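A rough sketch of what that app-side discipline looks like (table names are made up; with the FK gone, nothing in the database enforces this):

  BEGIN;

  -- Verify the parent exists and lock it so a concurrent delete can't sneak in
  SELECT 1 FROM users WHERE id = 42 FOR SHARE;
  -- the application aborts the transaction if no row came back

  INSERT INTO orders (user_id, total) VALUES (42, 1999);

  COMMIT;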

Databases: the world of infinite tradeoffs.
October 29, 2025 at 3:02 PM