
Re: Garbage collection on nearly-full filesystem



On Fri, 14 Jul 2000, David Woodhouse wrote:
> On each scan through the flash, I believe that it's moving lots and lots of
> completely clean erase blocks from one location in the flash to another. 
> It would be nice if we could optimise this a little.

Yes, we've actually talked about this on the list a while ago.. :)

Someone proposed a weighting function for choosing which flash sector to
collect next (instead of always taking the sector just after the log tail).
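Such a weighting function might look something like the sketch below. This is purely illustrative (the struct fields, the `pick_gc_sector` name and the weighting formula are all assumptions, not JFFS code): it favors sectors with much obsolete data and a low erase count, which also gives a crude form of wear leveling.

```c
#include <stddef.h>

/* Hypothetical per-erase-sector bookkeeping; not actual JFFS structures. */
struct sector_info {
    unsigned dirty_bytes;   /* bytes occupied by obsolete nodes */
    unsigned erase_count;   /* how many times this sector has been erased */
};

/* Return the index of the sector most worth collecting:
 * lots of dirty data, few erases.  Returns -1 if nsectors == 0. */
int pick_gc_sector(const struct sector_info *s, int nsectors)
{
    int best = -1;
    long best_score = -1;
    int i;

    for (i = 0; i < nsectors; i++) {
        /* Weight reclaimable space against wear; +1 avoids
         * division by zero on a never-erased sector. */
        long score = (long)s[i].dirty_bytes * 100 / (s[i].erase_count + 1);
        if (score > best_score) {
            best_score = score;
            best = i;
        }
    }
    return best;
}
```

A completely clean sector has `dirty_bytes == 0` and therefore scores zero, so the collector would naturally skip it, which is exactly the case David describes.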

You also need to tell the jffs functions to align nodes to
sector-boundaries.

And you need code so that when the log has finished writing a full
sector, instead of moving to the physically next one it can continue at a
recently collected clean sector (which need not be the next one).
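One way to wire that up is a small free list that the garbage collector refills as it erases sectors, and that the log writer pops from when its current sector fills up. The names and the LIFO structure here are assumptions for illustration, not JFFS internals:

```c
/* Hypothetical free list of clean erase sectors. */
#define MAX_FREE 16

static int free_sectors[MAX_FREE];
static int nfree;

/* The GC calls this after fully collecting and erasing a sector. */
void gc_add_clean(int sector)
{
    if (nfree < MAX_FREE)
        free_sectors[nfree++] = sector;
}

/* The log writer calls this when its current sector is full.
 * Returns a clean sector number, or -1 if the GC must run first. */
int log_next_sector(void)
{
    return nfree > 0 ? free_sectors[--nfree] : -1;
}
```

With node writes aligned to sector boundaries, the log can then hop between arbitrary clean sectors without splitting a node across a sector it doesn't own.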

It's definitely something in the pipeline, but I think we should make 100%
sure the filesystem as it is works without bugs before adding this
optimization.

We're also considering compression but likewise, it would be best to make
sure the system works fine first (with regard to the O_APPEND issues, inode
invalidation stuff etc).

So the GC might be inefficient now, but at least it should not collect
indefinitely (there was a bug before that could make that happen
sometimes).

> It should be possible to leave those blocks in place, and have the log 
> write round them to the next unclean block on the flash, shouldn't it?

Yes. The log is still logically circular, but you have a virtual-sector ->
physical-sector map underneath it..

Fortunately we don't need to store this map in the flash itself. The scan
will make sense of the nodes even if it scans the sectors in a random
order.
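A minimal sketch of such a map, assuming a fixed sector count and illustrative names (none of this is JFFS code): the log walks virtual sector indices circularly, while the map decides where each virtual sector actually lives on flash, so clean physical sectors can stay put while the log writes around them. Because the mount-time scan can identify nodes in any order, the map can be rebuilt from the scan and never has to be stored in flash.

```c
/* Hypothetical virtual -> physical erase-sector map. */
#define NSECTORS 8

static int v2p[NSECTORS];   /* v2p[v] = physical sector holding virtual v */

/* Advance the log head to the next virtual sector, circularly. */
int next_virt_sector(int virt)
{
    return (virt + 1) % NSECTORS;
}

/* Where does virtual sector 'virt' actually live on flash? */
int phys_sector(int virt)
{
    return v2p[virt];
}

/* Remap a virtual sector, e.g. after the GC relocates its contents. */
void remap_sector(int virt, int phys)
{
    v2p[virt] = phys;
}
```

The log code then only ever deals in virtual indices; skipping a clean physical sector is just a matter of what the map points at.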

-Bjorn