Re: jffs2 fragmentation
On Fri, 2003-10-31 at 12:24 +0100, Jörn Engel wrote:
> If your explanation is correct, a shift from 4 to 28 minutes would
> correspond to 6 clean nodes reused for every 1 dirty node deleted and
> new node written.
> Doesn't make a lot of sense with a filesystem that should be >80% free
> or dirty, does it?
Hmmm. The figure of 87% was _with_ the large file, wasn't it? How full
is it when the large file is deleted?
When it's 80% full it does make sense: the filesystem is 80% clean data
and 20% "free or dirty". Your 20% of free space is mixed in with the
clean data, so you have to move 6 clean nodes out of the way for every
node's worth of space you recover.
Consider the case where every eraseblock holds 80% clean data, while the
remaining 20% of each held part of the large file you've just deleted
and is hence now dirty. Then you write the same large file again.
Garbage collection
happens -- each time we GC a full eraseblock we recover and rewrite 80%
of an eraseblock of clean data, and we manage to write 20% of an
eraseblock of the new file. The 80/20 ratio hence remains stable.
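The steady state described above can be sketched as a bit of arithmetic
(this is an illustrative model, not actual JFFS2 code): if every
eraseblock is a fraction f clean, GC must copy f worth of clean nodes to
recover (1 - f) worth of space, so the number of clean nodes moved per
node of space recovered is f / (1 - f).

```python
# Hedged model of GC cost in the uniform-eraseblock steady state.
# f is the clean fraction of each eraseblock (an assumed parameter,
# not something read from the filesystem).

def clean_moved_per_recovered(f):
    """Clean nodes GC must rewrite per node's worth of space recovered."""
    return f / (1.0 - f)

# 80% clean gives a 4:1 ratio; the observed 6:1 ratio would correspond
# to eraseblocks being roughly 86% (6/7) clean.
for f in (0.80, 6.0 / 7.0):
    print(f"{f:.0%} clean -> {clean_moved_per_recovered(f):.1f} moved per recovered")
```

Either way the qualitative conclusion is the same: the write cost is
dominated by shuffling clean data, and the clean/dirty ratio of each
GC'd eraseblock is reproduced in the blocks it writes out, which is why
the ratio stays stable.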
To unsubscribe from this list: send the line "unsubscribe jffs-dev" in
the body of a message to email@example.com