[Experiment] Use incremental vacuum to speed up delete? #2800
Conversation
Sounds interesting. I will test it later. Added it to the 1.22.0 milestone first.
Currently testing the experimental fix. I will report no later than this Friday on how it is going. Instructions for how I applied it to the Uptime-Kuma container, for anyone who may be interested:

Docker Compose file example:
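The commenter's actual file is not shown; below is a hypothetical sketch of one way to run a PR branch of Uptime Kuma via Compose by building from the git context instead of pulling the release image. The branch name, volume path, and port are placeholders, not values from this thread.

```yaml
# Hypothetical example only: build Uptime Kuma from a PR branch.
# Replace <pr-branch> with the branch backing the PR.
services:
  uptime-kuma:
    build: https://github.com/louislam/uptime-kuma.git#<pr-branch>
    volumes:
      - ./uptime-kuma-data:/app/data   # persist the SQLite database
    ports:
      - "3001:3001"
    restart: unless-stopped
```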
Reporting back (late, but still)! It did seem to accelerate my Uptime Kuma installation, but all the slowdowns/lock-ups still seem to exist; they just happen later, unfortunately. I still consider it an improvement, as it has prevented my Uptime Kuma instance from being so locked up that it couldn't send its monitoring pings to Healthchecks.io (a DMS to check if it's dead) in a timely manner.
Thanks for the feedback. I guess it's mainly the benefit of
@chakflying could you add
@chakflying I might have missed it, but you're never invoking
I still think we're better off with disabling autovacuum entirely and running a full-blown
The missing executions of
I want to start testing it during my development, so I will merge it first.
@louislam: Any news after testing?
I didn't see any negative effects, so I will keep this.
Our uptime-kuma instance with ~80 monitors and 365 days of history feels a lot snappier now. SQLite DB size for reference: 4800 MB.
Description
Background
In SQLite, data is stored in pages. When data is deleted, the pages which store that data are marked as free in the "freelist", which can be reused when new data is added. This means that no actual delete happens on disk.
In the past, we found that the database size increased a lot in long term use, and deleting data doesn't free up disk space.
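The freelist behavior described above is easy to observe directly. This is a standalone demonstration (not Uptime Kuma code, and the table schema is a simplified stand-in): with `auto_vacuum` left at its default of `NONE`, deleting every row moves pages onto the freelist but never shrinks the file on disk.

```python
import os
import sqlite3
import tempfile

# Demonstration only: show that DELETE marks pages free without
# returning disk space, when auto_vacuum is the default (NONE).
path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path, isolation_level=None)  # autocommit mode

conn.execute("CREATE TABLE heartbeat (id INTEGER PRIMARY KEY, msg TEXT)")
conn.execute("BEGIN")
conn.executemany(
    "INSERT INTO heartbeat (msg) VALUES (?)",
    [("x" * 500,) for _ in range(5000)],
)
conn.execute("COMMIT")

size_before = os.path.getsize(path)
conn.execute("DELETE FROM heartbeat")  # rows gone, pages go to the freelist

size_after = os.path.getsize(path)
freelist = conn.execute("PRAGMA freelist_count").fetchone()[0]
print(size_before, size_after, freelist)
# The file does not shrink; the freed pages just wait to be reused.
```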
`auto_vacuum = FULL` was added to solve this issue.

auto_vacuum
According to the SQLite documentation, when `auto_vacuum` is set to `FULL`, freelist pages are moved to the end of the file and truncated on every transaction. If you think about it, this is terrible for our use case, since all our heartbeat data is inserted interleaved. If monitor 1 is deleted, essentially all the pages in the table need to be shifted to reclaim the free space.
If `auto_vacuum` is set to `INCREMENTAL`, this behavior is disabled, and we can run a separate command, `incremental_vacuum`, to free up a limited number of pages.

Nightly cleanup
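A minimal sketch of this mode (not the PR's actual code; the table is a simplified stand-in for the heartbeat table): switch the database to incremental auto-vacuum, then reclaim a bounded number of free pages per call.

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
# Autocommit mode, because VACUUM cannot run inside an open transaction.
conn = sqlite3.connect(path, isolation_level=None)

# auto_vacuum only takes effect on a brand-new database, or after a full
# VACUUM rewrites the existing file.
conn.execute("PRAGMA auto_vacuum = INCREMENTAL")
conn.execute("VACUUM")

conn.execute("CREATE TABLE heartbeat (id INTEGER PRIMARY KEY, msg TEXT)")
conn.execute("BEGIN")
conn.executemany(
    "INSERT INTO heartbeat (msg) VALUES (?)",
    [("x" * 500,) for _ in range(5000)],
)
conn.execute("COMMIT")

conn.execute("DELETE FROM heartbeat")
before = conn.execute("PRAGMA freelist_count").fetchone()[0]

# Truncate at most 100 pages from the end of the file; unlike FULL mode,
# this bounds the amount of page-shuffling work done at one time.
conn.execute("PRAGMA incremental_vacuum(100)")
after = conn.execute("PRAGMA freelist_count").fetchone()[0]
print(before, after)
```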
Currently the server deletes heartbeat data older than the retention period every night. I think `auto_vacuum = FULL` theoretically has no benefit here either. Assuming the monitor config is not changed, the server will produce the same amount of data each day, so if we clean up the database every night, all the freelist pages can be reused the next day. The database size should not grow past the size of the data retention period. There is one caveat: if the retention period is decreased, the database size will not decrease correspondingly. But maybe triggering a vacuum when the retention period changes could be a solution?

Experiment
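The nightly cleanup plus the retention-change caveat could be sketched as follows. This is a hypothetical illustration, not Uptime Kuma's real code: the function names are invented, and the schema assumes a numeric `time` column for simplicity.

```python
import sqlite3
import time

def nightly_cleanup(conn: sqlite3.Connection, retention_days: int) -> None:
    """Delete heartbeats older than the retention period, then reclaim a
    bounded number of freelist pages (assumes auto_vacuum = INCREMENTAL)."""
    cutoff = time.time() - retention_days * 86400
    conn.execute("DELETE FROM heartbeat WHERE time < ?", (cutoff,))
    # Bounded reclamation keeps the nightly job cheap even on large files;
    # with a steady daily data volume, leftover free pages are reused anyway.
    conn.execute("PRAGMA incremental_vacuum(1000)")

def on_retention_decreased(conn: sqlite3.Connection) -> None:
    # Shrinking the retention period strands free pages that the steady-state
    # workload will never refill, so run a one-off full VACUUM to actually
    # shrink the file (the caveat discussed above).
    conn.execute("VACUUM")
```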
However, I have not researched enough to see if this is actually what happens on delete, whether the page layout in the heartbeat table really causes all the pages to be moved like that, or whether it just leads to more fragmentation. I also don't have a big enough database to test this on, and my 4 MB database showed no noticeable improvement.
Anyone who is interested can test this PR to see if delete is faster. I have added a logging function to time how long the delete takes. It is logged at the INFO level, so it might not show up by default.