In this example we run pgbench with a scale factor of 1000, which equates to a database size of around 15GB. The Linux VM has 32GB of RAM, so we don’t expect to see many reads.
Using Prometheus with the Linux node exporter, we can see the disk I/O pattern from pgbench. As expected, the write pattern to the log disk (sda) is quite constant, while the write pattern to the database files (sdb) is bursty.
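The per-disk write pattern can be graphed with a PromQL query along these lines. This is a sketch assuming a recent node_exporter; older versions expose the metric as node_disk_bytes_written instead, and the device names match the sda/sdb layout described above.

```promql
# Per-device write throughput in bytes/s, averaged over 5 minutes.
# sda carries the log writes, sdb the database files.
rate(node_disk_written_bytes_total{device=~"sd[ab]"}[5m])
```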
I had to tune the parameter checkpoint_completion_target from its default of 0.5 to 0.9; otherwise the SCSI stack became overwhelmed during checkpoints, which caused log writes to stall.
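The change is made in postgresql.conf (a minimal sketch; 0.5 was the default in Postgres 11):

```
# postgresql.conf - spread checkpoint writes over 90% of the
# checkpoint interval instead of 50%, smoothing the I/O burst
checkpoint_completion_target = 0.9
```

A reload (e.g. via pg_ctl reload) is enough for this parameter to take effect; a full restart is not required.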
In this example, we use Postgres and the pgbench workload generator to drive some load in a virtual machine. I assume a Linux virtual machine with Postgres installed; specifically, a Bitnami virtual appliance.
Once the VM has started, connect to the console.
Allow access to port 5432 (the postgres DB port), or allow ssh instead.
$ sudo ufw allow 5432
Note the postgres user password (cat ./bitnami_credentials)
Log in to psql from the console or over ssh.
Optionally change the password (the password prompted for is the one from bitnami_credentials, for the postgres database user).
postgres=# alter user postgres with password 'NEW_PASSWORD';
Create a DB to run the pgbench workload against. In this case I name the DB pgbench-sf10 for “Scale Factor 10”. The scale factor determines the size of the database.
$ sudo -u postgres createdb pgbench-sf10
Initialise the DB with data ready to run the benchmark. The createdb step above just creates an empty schema.
$ sudo -u postgres pgbench -i -s 10 pgbench-sf10
-i means “initialize”
-s means “scale factor”, e.g. 10
pgbench-sf10 is the database to use, the one we just created
Now run a workload against the DB schema called pgbench-sf10:
$ sudo -u postgres pgbench pgbench-sf10
The workload pattern and the load on the system will vary greatly depending on the scale factor.
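Heavier load can be driven by raising the client count and run time. A sketch; the flag values below are illustrative, not from the original:

```shell
# -c: concurrent client sessions, -j: pgbench worker threads,
# -T: run for a fixed time in seconds instead of a fixed
#     number of transactions
sudo -u postgres pgbench -c 8 -j 2 -T 60 pgbench-sf10
```
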
For this experiment I am using Postgres v11 on a Linux 3.10 kernel. The goal was to see what gains can be made from using hugepages. I use the built-in pgbench benchmark to run a simple set of queries.
Since I am interested only in the gains from hugepages, I chose to use the “-S” parameter to pgbench, which means perform only the “select” statements. Obviously this masks any costs that might be seen when dirtying hugepages, but it kept the experiment from having to be concerned with writing to the filesystem.
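A select-only run looks like the following sketch; the database name pgbench-sf500 and the client/time values are my assumptions for a database initialized with -s 500:

```shell
# -S: built-in select-only script (no writes),
# -c: concurrent clients, -T: run time in seconds
sudo -u postgres pgbench -S -c 4 -T 300 pgbench-sf500
```
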
The workstation has 32GB of memory; Postgres is given 16GB of memory using the parameter
pgbench creates a ~7.4GB database using a scale factor of 500.
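The original does not name the parameter, but giving Postgres 16GB of shared memory backed by hugepages would typically look like this sketch; shared_buffers, huge_pages, and the kernel hugepage reservation are my assumptions, not settings stated above:

```
# postgresql.conf (assumed settings, not from the original)
shared_buffers = 16GB    # size of the shared buffer pool
huge_pages = on          # refuse to start if hugepages are unavailable

# /etc/sysctl.conf - reserve 2MB hugepages to cover 16GB
# (16GB / 2MB = 8192, plus some headroom for other shared memory)
vm.nr_hugepages = 8400
```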