PostgreSQL + ActiveMQ = (auto)Vacuum Hell

This is one of those "I had this weird problem that others are likely to encounter as well, so here's a detailed explanation that will hopefully come up in a search engine for those people when they are trying to figure it out" posts.

As most PostgreSQL administrators know, vacuuming is a necessity for the long-term survival of your database.  This is because PostgreSQL's Multi-Version Concurrency Control (MVCC) strategy never overwrites rows in place: an update writes a new version of the row, a delete merely marks the old one dead, and the old tuples stick around for as long as some existing transaction might still need to see them.  Eventually these old rows are no longer visible to any transaction, and they can be cleaned up for re-use, or discarded completely once an entire page is empty.

This works out really well, provided you have fairly straightforward database access going on: transactions start, stuff happens, transactions complete; repeat.  Things can go radically wrong if the "transactions complete" part doesn't happen, however.

Because PostgreSQL has no idea about what any given transaction might do next, it is not able to vacuum out any dead rows newer than the oldest transaction still in progress.  If you think about it, that makes sense.  Any given transaction could look at an older version of any row that was around at the time that transaction began, in any table in the database.

The symptom of this is that vacuum reports a large number of dead tuples, but says it removed none of them.  Running a "vacuum verbose" tells you there are a large number of "dead row versions" which "cannot be removed yet".  If you turn on autovacuum logging, you'll see messages like this:

Feb 27 02:57:41 slurm postgres[12345]: [3-1] LOG:  automatic vacuum of table "activemq.public.activemq_msgs": index scans: 0
Feb 27 02:57:41 slurm postgres[12345]: [3-2] #011pages: 0 removed, 14002 remain
Feb 27 02:57:41 slurm postgres[12345]: [3-3] #011tuples: 0 removed, 58897 remain
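
(If you want that logging in the first place, the relevant setting is log_autovacuum_min_duration in postgresql.conf; 0 logs every autovacuum run, while the default of -1 logs nothing:)

# postgresql.conf -- log every autovacuum action; raise the value (in ms)
# to log only runs that take at least that long
log_autovacuum_min_duration = 0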

The number that remain will keep increasing forever, and the number removed is always 0.  Like me, you might naively check for transactions running against activemq_msgs, and find none, or find only ones which are short-lived.  And that would be your mistake.  While autovacuum runs per-table, running transactions matter per-database.  You may well have a long-running transaction issuing statements against some other table that is preventing rows from being removed from the table you're watching.  Again, this is because PostgreSQL cannot predict the future; that long-running transaction might run a query against the table you're watching two seconds from now, and as long as that could happen, those old tuples cannot be removed.
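
A quick way to spot the culprit is to list every open transaction, oldest first, rather than filtering by table (a sketch; these pg_stat_activity columns exist in PostgreSQL 9.2 and later, and the output will vary with your workload):

activemq=# select pid, datname, xact_start, state, query
           from pg_stat_activity
           where xact_start is not null
           order by xact_start;

Any session whose xact_start is hours or days old is holding back vacuum for its entire database, no matter which table its statements touch.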

How does this relate to ActiveMQ, you ask?  If you're running ActiveMQ and using PostgreSQL as your backing/persistent store (and you may well have reasons to do this), and you don't do anything to change it, the default failover locking strategy is for the master to acquire a JDBC lock at startup, and hold onto it forever.  This translates into a transaction that starts when ActiveMQ starts, and never completes until ActiveMQ exits.  You can see this in progress from the command line:

activemq=# select xact_start, query from pg_stat_activity where xact_start is not null and datname='activemq';
          xact_start           |                                                query                                                
-------------------------------+-----------------------------------------------------------------------------------------------------
 2014-02-27 01:24:13.677693+00 | UPDATE ACTIVEMQ_LOCK SET TIME = $1 WHERE ID = 1

If you look at the xact_start timestamp, you'll see that this transaction has been open since ActiveMQ started (the query shown is simply the most recent statement it ran).  You can also see the locks it creates:


activemq=# select n_live_tup, n_dead_tup, relname, relid from pg_stat_user_tables order by n_dead_tup desc;
 n_live_tup | n_dead_tup |    relname    | relid 
------------+------------+---------------+-------
        628 |      58903 | activemq_msgs | 16387
          4 |          0 | activemq_acks | 16398
          1 |          0 | activemq_lock | 16419
activemq=# select locktype, mode from pg_locks where relation = 16419;
 locktype |       mode       
----------+------------------
 relation | RowShareLock
 relation | RowExclusiveLock


Again, as long as this transaction holding the ActiveMQ lock stays open, (auto)vacuum cannot reclaim any dead tuples anywhere in this database.

Fortunately, ActiveMQ 5.7 and later offer a workable solution to this problem in the form of the Lease Database Locker.  Instead of starting a transaction and holding it forever, the master opens a short-lived transaction just long enough to acquire a leased lock, which it then periodically renews (with timing that you specify in the configuration; see the ActiveMQ documentation for an example).  So long as the lease keeps getting renewed, the slave won't try to take over.  Your failover time then depends on the duration of the lease; it won't be nearly instantaneous the way it is when a held lock is released by a clean ActiveMQ exit (though it could be faster than waiting for the old transaction to die after a socket timeout following an unclean exit).
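
As a rough sketch of what that looks like in activemq.xml (the dataSource bean name and the interval values here are illustrative only; check the ActiveMQ pluggable-locker documentation for the exact options in your version):

<persistenceAdapter>
  <!-- lockKeepAlivePeriod: how often the master refreshes its lease (ms) -->
  <jdbcPersistenceAdapter dataDirectory="${activemq.data}"
                          dataSource="#postgres-ds"
                          lockKeepAlivePeriod="5000">
    <locker>
      <!-- retry interval for acquiring the lease (ms) -->
      <lease-database-locker lockAcquireSleepInterval="10000"/>
    </locker>
  </jdbcPersistenceAdapter>
</persistenceAdapter>

The keep-alive period should be comfortably shorter than the lease interval, so the master renews its lease well before a waiting slave could grab it.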

Because the locking transactions come and go, rather than persisting forever, the autovacuum process is able to reap your dead tuples.


So the moral of the story is this: if you're using PostgreSQL as the persistent store for ActiveMQ, make sure you configure the Lease Database Locker in your persistenceAdapter configuration.  Otherwise, PostgreSQL will never be able to vacuum out old tuples and you may suffer performance degradation and a database that bloats in size forever (or until you stop ActiveMQ, run a vacuum, and restart it).
