How to evaluate block allocator changes

This proposal has been accepted as a session.


One Line Summary

This session will discuss ways we can and should evaluate changes to a file system block allocator to avoid performance regressions.


Changing a file system's block allocator is a tricky thing to do on its own, and despite that we do not have any standardised way to evaluate such a change and prevent performance regressions.

Xfstests is currently the standard tool for testing file systems to catch regressions and bugs. It is widely used by file system developers, so it seems like the right place for performance regression tests as well. But that's just the beginning.

Having simple performance "unit" tests such as sequential/random/direct/async read/write is easy, but it does not give us the complete picture, especially since block allocator changes might have a bigger impact in the long run as the file system ages. That's why we also need workload examples: simulations of real applications.
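As an illustration, the "unit" tests above could be expressed as a single fio job file; the target directory, sizes and runtimes here are placeholder assumptions, not recommendations:

```ini
; Hypothetical fio job sketch of the basic allocator "unit" tests.
; /mnt/test, sizes and runtimes are illustrative only.
[global]
directory=/mnt/test
size=1g
runtime=60
time_based=1
group_reporting=1

[seq-write]
rw=write
bs=1m

[rand-write]
stonewall        ; run after the previous job finishes
rw=randwrite
bs=4k

[direct-rand-read]
stonewall
rw=randread
bs=4k
direct=1         ; bypass the page cache

[async-rand-write]
stonewall
rw=randwrite
bs=4k
direct=1
ioengine=libaio  ; async submission
iodepth=32
```

Each job is serialized with `stonewall` so the runs do not interfere with each other.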

Some of this may be simulated by fio, fsstress, filebench or others, but ultimately the best tests would be provided by the application developers themselves. Making them easy to set up and run, small, and free of complex dependencies will be key. Do we need a dedicated file system aging tool? Can we use workload simulation to age the file system with a particular workload? Given that we do not care about the data itself, how can we speed up the process?
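A crude aging pass could also be sketched with fio alone: fill the file system with many files of mixed sizes, then keep overwriting random ranges so the allocator has to work against an already-populated layout. Paths and sizes below are assumptions; note that fio by itself does not delete files, so real aging would interleave this with create/delete cycles (e.g. via fsstress):

```ini
; Hypothetical fio aging sketch; /mnt/test and all sizes are illustrative.
[global]
directory=/mnt/test
ioengine=libaio
direct=1

[fill-many-files]
rw=write
bs=64k
nrfiles=10000
filesize=4k-10m   ; mixed file sizes, picked from this range

[age-overwrite]
stonewall         ; start after the fill pass completes
rw=randwrite
bs=4k
nrfiles=10000
filesize=4k-10m
file_service_type=random
runtime=600
time_based=1
```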

Some problems might be discovered even more quickly by static analysis of the file system image. But what traits are we looking for? File fragmentation, free space fragmentation and data/metadata locality are the obvious ones, but what else? That's up for discussion.
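To make one such trait concrete, here is a minimal sketch of a free space fragmentation score. It assumes the list of free-extent sizes (in file system blocks) has already been extracted, e.g. from e2freefrag or dumpe2fs output on ext4; the threshold is an arbitrary illustrative choice:

```python
# Hypothetical free-space fragmentation metric: the fraction of free
# space held in extents smaller than a threshold. 0.0 means all free
# space sits in large contiguous runs (easy for the allocator);
# 1.0 means it is all small scattered pieces (hard).

def free_space_frag_score(extent_sizes, threshold_blocks=2048):
    total = sum(extent_sizes)
    if total == 0:
        return 0.0
    contiguous = sum(s for s in extent_sizes if s >= threshold_blocks)
    return 1.0 - contiguous / total


if __name__ == "__main__":
    # One big free run vs. the same space shredded into single blocks.
    print(free_space_frag_score([262144]))      # → 0.0
    print(free_space_frag_score([1] * 262144))  # → 1.0
```

Comparing such a score before and after an aging run, across allocator variants, would give a quick signal without replaying a full workload.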


performance, file system, block allocator, regression