Is your feature request related to a problem? Please describe.
File storage performance degrades significantly depending on the maximum size of the bbolt DB over its lifetime. Notably, this is true even if the DB is empty, as bbolt never frees disk space unless manually instructed to compact it (which is tricky to do online). The reason for the performance degradation is that bbolt keeps a freelist data structure to track storage space, and the default implementation of this structure is prone to fragmentation. In the default configuration, it also syncs this freelist to disk on every transaction, which can be expensive if it's large.
See the additional context section for benchmarks.
Describe the solution you'd like
- Disable freelist syncing for bbolt. This increases startup time by two orders of magnitude (~2 ms -> ~200 ms for a 2 GB DB file), but makes every operation close to 2x faster. See the benchmarks below and the bbolt documentation for reference.
- Switch to a different freelist data structure. This makes small DBs around 10% slower, but larger ones many orders of magnitude faster. See the bbolt documentation for freelist types for reference.
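As a sketch, the two changes map onto existing `go.etcd.io/bbolt` options (`NoFreelistSync` and `FreelistType` are real fields of `bolt.Options`; the path and mode here are illustrative):

```go
import bolt "go.etcd.io/bbolt"

// Open with the proposed settings: skip persisting the freelist on
// every commit, and use the hashmap freelist, which tolerates
// fragmentation far better than the default array freelist.
db, err := bolt.Open(path, 0o600, &bolt.Options{
	NoFreelistSync: true,
	FreelistType:   bolt.FreelistMapType,
})
```

With `NoFreelistSync` enabled, bbolt rebuilds the freelist by scanning the file on open, which is where the startup-time cost above comes from.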
Describe alternatives you've considered
Compaction solves this problem, but it must be enabled manually and requires restarting the application. It also requires additional disk space and increases startup time in much the same way that not syncing the freelist does.
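For reference, offline compaction with the stock bbolt CLI looks roughly like this (file names are illustrative, and the application must be stopped first):

```shell
# Copy live data into a fresh file, dropping all free pages,
# then swap it into place before restarting the application.
bbolt compact -o file_storage.db.compact file_storage.db
mv file_storage.db.compact file_storage.db
```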
We originally discovered this problem in core's persistent queue. It can also be mitigated there by explicitly rotating DB files, but that is fairly complex, and the option changes look like a performance win for everyone with little downside.
Additional context
I've added two additional benchmarks to the existing suite. BenchmarkClientSetLargeDB runs the same benchmark as BenchmarkClientSet, but prepares the DB by inserting 2000 1 MiB values and then deleting them. BenchmarkClientInitLargeDB does the same preparation and then reopens the DB, forcing it to regenerate the freelist. I'm going to submit a PR with these shortly.
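The preparation step for the LargeDB benchmarks looks roughly like this (a sketch only; the bucket and key names are illustrative, not the suite's actual ones):

```go
// Grow the DB file to ~2 GiB, then delete everything, leaving an
// empty DB with a large, fragmented freelist.
val := make([]byte, 1<<20) // 1 MiB payload
err := db.Update(func(tx *bolt.Tx) error {
	b, err := tx.CreateBucketIfNotExists([]byte("default"))
	if err != nil {
		return err
	}
	for i := 0; i < 2000; i++ {
		if err := b.Put([]byte(fmt.Sprintf("k%04d", i)), val); err != nil {
			return err
		}
	}
	return nil
})
// ... followed by a second Update transaction deleting the 2000 keys.
```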
[Benchmark results for three configurations: current settings; NoFreelistSync: true; NoFreelistSync: true, FreelistType: bbolt.FreelistMapType]