Re: jffs2 fragmentation
On Fri, 31 October 2003 11:53:20 +0000, David Woodhouse wrote:
> On Fri, 2003-10-31 at 12:24 +0100, Jörn Engel wrote:
> > If your explanation is correct, a shift from 4 to 28 minutes would
> > correspond to 6 clean nodes reused for every 1 dirty node deleted and
> > new node written.
> > Doesn't make a lot of sense with a filesystem that should be >80% free
> > or dirty, does it?
> Hmmm. The figure of 87% was _with_ the large file, wasn't it? How full
> is it when the large file is deleted?
> When it's 80% full it does make sense. It's 80% full. 20% "free or
> dirty". Your 20% free space is mixed in with the clean data; you have to
> move 6 nodes out of the way for every node's worth of space you recover.
> Consider the case where every eraseblock has 80% clean data and 20% of
> each contains part of the large file you've just deleted, and is hence
> now dirty. Then you write the same large file again. Garbage collection
> happens -- each time we GC a full eraseblock we recover and rewrite 80%
> of an eraseblock of clean data, and we manage to write 20% of an
> eraseblock of the new file. The 80/20 ratio hence remains stable.
Sorry, I should have reread the original message. The fs under
pressure is permanently 87% full, not 87% empty. That makes perfect sense.
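The arithmetic above can be sketched in a few lines (my own illustration,
not JFFS2 code): if each eraseblock holds a fraction `clean` of live data,
garbage-collecting one block rewrites that fraction of a block and recovers
only the rest as free space, so the ratio of clean nodes moved per node of
space recovered is clean/(1 - clean).

```python
def gc_write_amplification(clean_fraction):
    """Clean nodes rewritten per node of free space recovered
    when every eraseblock is `clean_fraction` full of live data."""
    dirty_fraction = 1.0 - clean_fraction
    return clean_fraction / dirty_fraction

# At 80% full: 0.8 / 0.2 = 4 clean nodes moved per node recovered.
# At ~86% full (6/7): about 6 clean nodes moved per node recovered,
# which lines up with the observed 4 -> 28 minute (7x) slowdown:
# 6 rewrites of clean data plus 1 useful new write per node of progress.
```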
When in doubt, punt. When somebody actually complains, go back and fix it...
The 90% solution is a good thing.
-- Rob Landley
To unsubscribe from this list: send the line "unsubscribe jffs-dev" in
the body of a message to email@example.com