
Re: Choice of min_free_size

On Tue, 15 Aug 2000, David Woodhouse wrote:
> Look at the worst case scenario. The erase block at the tail is entirely 
> full of tiny nodes, none of which are in the same file. 
> For each 60 bytes in the tail, you then need to write max_chunk_size bytes 
> to the head of the log.

Ouch... yes. The GC probably needs to tally the nodes in the sector
it's trying to erase _before_ moving any of them, to figure out how much
space it will need. In the meantime it's probably enough just to check
the first node to see whether it's meaningful to defrag that node (I
guess that's what you did in the patch you just posted; I haven't looked
too closely).

The choice of max_chunk_size and min_free_size was just a heuristic.

Another thing that would probably help is the sector allocation bitmap
we talked about before, so that we can finally use the GC in
combination with huge chunks of RO data in the filesystem without
massive recycling of the log.

I just got our 2.4 Linux port up and running, so I'll start trying to
get JFFS to run on 2.4 :) The only thing that bugs me is the lack of
partitioning support in mtd (if that's still the case). I'd then have
to put everything, including the kernel, in the same JFFS partition and
make a "lilo"-style loader resident in the first block or something to
locate the /vmlinux file in JFFS and decompress it. I still think a
partitioned flash device is better.