There is no reliable, authoritative record of the last modified time of a table.

Using the relfilenode is wrong for a lot of reasons:

- Writes are initially recorded to the write-ahead log (WAL), then lazily to the heap (the table files). Once the record is in WAL, Pg doesn't rush to write it to the heap, and it might not even get written until the next system checkpoint.
- Autovacuum and other maintenance that doesn't change the user-visible data still modifies the relation files.
- A simple SELECT can generate write activity to the underlying table due to hint-bit setting.
- Larger tables have multiple forks; you'd have to check all the forks and pick the newest timestamp.
- Some operations, like VACUUM FULL, will replace the relfilenode. It might not be where you expect if you're trying to look at it concurrently without taking an appropriate lock.

If you don't need reliability, you can potentially use the information in pg_stat_database and pg_stat_all_tables. These can give you the time of the last stats reset, and activity stats since the last stats reset. They don't tell you when the most recent activity was, only that it was since the last stats reset, and there's no information about what happened before that reset.

One option for doing it reliably is to use a trigger to update a table containing the last-modified times for each table. Be aware that doing so will serialize all writes to the table, destroying concurrency. It will also add a fair bit of overhead to every transaction. I don't recommend it.

A slightly less awful alternative is to use LISTEN and NOTIFY. Have an external daemon process connect to PostgreSQL and LISTEN for events. Use ON INSERT OR UPDATE OR DELETE triggers to send NOTIFYs when a table changes, with the table oid as the notify payload. These get sent when the transaction commits. Your daemon can accumulate change notifications and lazily write them back to a table in the database. If the system crashes, you lose your record of the most recent modifications, but that's OK; you just treat all tables as just-modified if you're starting up after a crash.

To avoid the worst of the concurrency issues you could instead log the change timestamps using a BEFORE INSERT OR UPDATE OR DELETE OR TRUNCATE ON tablename FOR EACH STATEMENT EXECUTE ... trigger, generalized to take the relation oid as a parameter. This would insert a (relation_oid, timestamp) pair into a change-logging table. You then have a helper process, on a separate connection or called periodically by your app, aggregate that table for the latest info, merge it into a summary table of the most recent changes, and truncate the log table. The only advantage of this over the listen/notify approach is that it doesn't lose information on crash - but it's even less efficient, too.

Another approach might be to write a C extension function that uses (e.g.) ProcessUtility_hook, ExecutorRun_hook, etc. to trap table changes and lazily update stats. I haven't looked to see how practical this would be; take a look at the various _hook options in the sources.

The best way would be to patch the statistics code to record this information and submit a patch to PostgreSQL for inclusion in core. Don't just start by writing code; raise your idea on -hackers once you've thought about it enough to have a well-defined way to do it (i.e. start by reading the code, don't just post asking "how do I ..."). It might be nice to add last-updated times to pg_stat_..., but you'd have to convince the community it was worth the overhead or provide a way to make it optionally tracked - and you'd have to write the code to keep the stats and submit a patch, because only somebody who wants this feature is going to bother with that.

If I had to do this, and didn't have the time to write a patch to do it properly, I'd probably use the listen/notify approach outlined above.

Update for PostgreSQL 9.5 commit timestamps

PostgreSQL 9.5 has commit timestamps. If you have them enabled in postgresql.conf (and did so in the past too), you can check the commit timestamp for the row with the greatest xmin to approximate the last modified time. It's only an approximation because if the most recent rows have been deleted they won't be counted. Also, commit timestamp records are only kept for a limited time.
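To illustrate the unreliable-but-easy statistics route: a sketch of the kind of queries you could run against pg_stat_all_tables and pg_stat_database. The table name "my_table" is a placeholder; remember these counters only cover activity since the last stats reset.

```sql
-- Activity counters and maintenance times for one table (placeholder name).
SELECT relname,
       n_tup_ins, n_tup_upd, n_tup_del,
       last_autovacuum, last_autoanalyze
FROM pg_stat_all_tables
WHERE relname = 'my_table';

-- When the counters for the current database were last reset.
SELECT stats_reset
FROM pg_stat_database
WHERE datname = current_database();
```

If stats_reset is recent, absence of activity in the counters tells you nothing about what happened before the reset.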
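The statement-level change-logging trigger described above could be sketched roughly as follows. This is an assumption-laden outline, not a tested implementation: the table and trigger names are hypothetical, and instead of passing the relation oid as a parameter it uses PL/pgSQL's built-in TG_RELID variable, which identifies the table that fired the trigger.

```sql
-- Hypothetical change-log table: one (relation oid, timestamp) row per
-- modifying statement.
CREATE TABLE table_change_log (
    relid      oid         NOT NULL,
    changed_at timestamptz NOT NULL DEFAULT current_timestamp
);

-- Generic trigger function; TG_RELID is the oid of the firing table.
CREATE OR REPLACE FUNCTION log_table_change() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
    INSERT INTO table_change_log (relid) VALUES (TG_RELID);
    RETURN NULL;  -- return value is ignored for statement-level triggers
END;
$$;

-- Attach to each table you care about ("my_table" is a placeholder).
CREATE TRIGGER my_table_change_log
    BEFORE INSERT OR UPDATE OR DELETE OR TRUNCATE ON my_table
    FOR EACH STATEMENT EXECUTE PROCEDURE log_table_change();
```

A helper process would then periodically run something like `SELECT relid, max(changed_at) FROM table_change_log GROUP BY relid`, merge the result into a summary table, and truncate the log.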
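The 9.5 commit-timestamp approximation can be expressed as a single query, assuming track_commit_timestamp was already on when the rows were written ("my_table" is again a placeholder). Note this scans the whole table and misses recently deleted rows.

```sql
-- Requires track_commit_timestamp = on in postgresql.conf (PostgreSQL 9.5+).
SELECT max(pg_xact_commit_timestamp(xmin)) AS approx_last_modified
FROM my_table;
```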