
Re: Choice of min_free_size



On Wed, 16 Aug 2000, David Woodhouse wrote:
> I would _really_ like to see a mathematical proof that the GC can't kill
> itself like this, and that there'll always be room to delete files. 

So the real problem is this: we know we can always gain space by
collecting a sector that contains >0 bytes of obsolete data
(sector->sector), but can we put an upper limit on the amount of live
data we accept into the filesystem so that the global GC session is
guaranteed to stop, and how far can we stretch the defrag? Hey, that is
kind of Turing-like! :)

It's like playing a 15-game.. 

R  = total amount of obsolete data
Ls = live data in the sector we're collecting _from_
F  = total amount of free space the log can expand into
Fs = free space left in the sector we're collecting _to_

SS = sector size, 64kb for example

max_chunk_size = 32kb, the largest node we can write in a single write
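
For concreteness, here's roughly how I'd keep track of those quantities
in C. The struct and the names are just invented for this mail, not the
real JFFS data structures:

#include <stdint.h>

/* Sketch of the bookkeeping the rules below need (names invented). */
struct gc_state {
        uint32_t sector_size;     /* SS: erase sector size, e.g. 64kb      */
        uint32_t max_chunk_size;  /* largest node written in one go, 32kb  */
        uint32_t dirty;           /* R:  total obsolete bytes in the log   */
        uint32_t free;            /* F:  total free space in the log       */
        uint32_t live_in_src;     /* Ls: live bytes in the source sector   */
        uint32_t free_in_dest;    /* Fs: free bytes in the dest sector     */
};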

This discussion assumes we never write nodes across sector boundaries. I
don't think that is the case today, so we first need to fix that (without
it, we cannot support sector remapping).

As long as you start with an empty sector (remember that 15-game!) you can
always rearrange the layout to put all the obsolete data (R) consecutively
(that is, what was obsolete data is free space now). There's no risk of
deadlocking there - the risks come from how you judge when to stop, how
you judge when to start, and how you judge whether to defrag nodes or not.

How to start: well, you need that empty sector, and you need enough space
to write one max_chunk_size node. So you'll always want at least
SS + max_chunk_size of free space. You'll need to collect if you're
dangerously close to dropping below that, if a write would end up with
SS < F < SS + max_chunk_size, or if a write would _need_ more space than
that. If R < SS at this point, we should not accept any writes and just
say -ENOMEM, because we _know_ the GC will get sad; there's no point in
running it at all.
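
In code the start rule could look something like this (sketch only -
gc_check_before_write() and its return convention are made up, and it
uses the gc_state above):

#include <errno.h>

/* Before accepting a write of 'len' bytes:
 * 0 = write can go ahead, 1 = run the GC first, -ENOMEM = refuse. */
int gc_check_before_write(struct gc_state *s, uint32_t len)
{
        uint32_t reserve = s->sector_size + s->max_chunk_size;

        if (s->free >= reserve + len)
                return 0;       /* stays above SS + max_chunk_size  */

        if (s->dirty < s->sector_size)
                return -ENOMEM; /* R < SS: GC can't free a sector   */

        return 1;               /* collect first, then retry        */
}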

How to run the GC:
If R >= SS, you know that by performing the collect you'll eventually
gain a free sector. If R is much larger than SS, you can break out of the
GC when you think you've collected enough free sectors.
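
Roughly, the session could be driven like this (collect_one_sector() is
an invented helper, assumed to move the live data out of the dirtiest
sector, erase it, update the accounting and return how many obsolete
bytes it reclaimed, or a negative error):

int collect_one_sector(struct gc_state *s);

/* Run the GC until we've gained 'want_free_sectors' net free sectors
 * or there is nothing left to gain. */
int gc_run(struct gc_state *s, uint32_t want_free_sectors)
{
        uint32_t reclaimed = 0; /* obsolete bytes turned into free space */
        int n;

        while (s->dirty >= s->sector_size) {
                n = collect_one_sector(s);
                if (n < 0)
                        return n;       /* flash error, give up */
                reclaimed += n;

                /* each full sector's worth of reclaimed obsolete data
                 * is one net free sector gained - stop when enough */
                if (reclaimed / s->sector_size >= want_free_sectors)
                        break;
        }
        return 0;
}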

Now the easy criterion for saying OK to a node defrag is:

1. The nodes you'll smack together are in the same sector (that always
works, since you end up gaining a meta-data chunk or more).

The not-so-obvious criterion is:

2. You can bring an out-of-sector node X into the defrag if R(orig) -
sizeof(X) > SS (the global criterion!) AND Ls + sizeof(X) <= Fs (the local
criterion). The global rule only applies if X is not part of the set to
be collected at all - if it is part of the collect set, it obviously
won't affect the GC termination rule. The local rule makes sure that you
won't get stuck right away because you could not collect even a single
sector (thus throwing away that empty square in the 15-game). A small C
sketch of both rules follows below.
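
Here are both rules as a single check (gc_may_defrag() and its arguments
are made up: x_size is sizeof(X), same_sector says whether X lives in the
sector being collected, in_collect_set whether X is part of the collect
set anyway):

int gc_may_defrag(struct gc_state *s, uint32_t x_size,
                  int same_sector, int in_collect_set)
{
        /* rule 1: merging nodes within one sector always pays off */
        if (same_sector)
                return 1;

        /* rule 2, global: R(orig) - sizeof(X) must stay above SS,
         * unless X was going to be collected anyway */
        if (!in_collect_set && s->dirty <= s->sector_size + x_size)
                return 0;

        /* rule 2, local: Ls + sizeof(X) must still fit in Fs */
        if (s->live_in_src + x_size > s->free_in_dest)
                return 0;

        return 1;
}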

Notice that this is different from how it works now - currently we just
read Y bytes from the offset, but with the rule above we always smack
together existing nodes. That takes care of your problem with the holes
disappearing, David (add to the rule that you can't defrag the current
node if the next node is "further away" than would be sane to compactify).

How to stop the GC:

Either when you feel like it and you've freed at least one sector, or
when R drops below SS - at that point there's no point in continuing the
GC, of course. This check should be made after each scheduled erase of a
sector.
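
As a check function (same invented bookkeeping as above, freed_sectors
being the net free sectors gained so far in this session):

/* Run after each sector erase completes: should the session stop? */
int gc_should_stop(struct gc_state *s, uint32_t freed_sectors,
                   uint32_t want_free_sectors)
{
        if (s->dirty < s->sector_size)
                return 1;       /* R < SS: nothing more to gain */

        /* stop "when we feel like it", but only once at least one
         * sector has actually been freed */
        return freed_sectors >= 1 && freed_sectors >= want_free_sectors;
}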


I think the only non-obvious part of this is the defrag part. What do you
think?

-Bjorn