
Re: Choice of min_free_size




dwmw2@xxxxxxx.com said:
>  Was there a mathematical proof behind the choice of max_chunk_size
> and  min_free_size? 

I'll answer that myself - no.

However, we do need to be able to prove mathematically that in all cases, 
the GC will be able to recover space. 

jffs_garbage_collect_next() currently works by finding the node right at the
tail of the log, then writing out as much current data as possible from that
offset in the file. For files which have become large, that's going to be 
max_chunk_size (== sector_size / 2).

Look at the worst case scenario. The erase block at the tail is entirely 
full of tiny nodes, none of which are in the same file. 
For each 60 bytes in the tail, you then need to write max_chunk_size bytes 
to the head of the log.

With 128KiB erase blocks, this means that you need ((128K/60) * 196608), 
i.e. roughly 429MB of slack space between the head and the tail of the log.
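
Just to sanity-check that arithmetic, here's a quick throwaway program (not 
JFFS code; the 60-byte tail node and the 196608-byte write per obsoleted 
node are simply the figures used in the formula above):

	#include <stdio.h>

	int main(void)
	{
		double erase_block = 128 * 1024;  /* 128KiB erase block */
		double tail_node = 60;            /* tiny node at the tail */
		double write_per_node = 196608;   /* bytes written to the head per tail node */

		double nodes_in_block = erase_block / tail_node;
		double slack_needed = nodes_in_block * write_per_node;

		/* Prints roughly 429496730 bytes, i.e. ~429MB */
		printf("worst-case slack: %.0f bytes (~%.0fMB)\n",
		       slack_needed, slack_needed / 1e6);
		return 0;
	}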

In fact, you'll notice that we probably aren't going to _have_ 429MB of 
data on the filesystem, so we can't hit the worst case scenario. We _do_, 
however, sometimes try to use more space than there is available.

I'm going to try changing jffs_garbage_collect_next() so that it only writes 
nodes larger than the ones it's obsoleting if there's space to play with - 
otherwise it just writes out a new node covering the same data range.

Something along the lines of:
	space_needed = min_free_size - (oldnode->fm->offset % sector_size)

	space_slack = free_size - space_needed

	new_node_max_size = min(max_chunk_size, old_node_size + space_slack)
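
As C, that capping logic might look something like the sketch below (the 
function name and arguments are hypothetical, for illustration only - not 
the real jffs_garbage_collect_next() code):

	#include <stdint.h>

	static uint32_t gc_new_node_max_size(uint32_t old_offset,  /* oldnode->fm->offset */
					     uint32_t old_size,    /* size of the node being obsoleted */
					     uint32_t sector_size,
					     uint32_t max_chunk_size,
					     uint32_t min_free_size,
					     uint32_t free_size)
	{
		/* Space we still need in order to finish GCing this sector. */
		uint32_t space_needed = min_free_size - (old_offset % sector_size);

		/* Free space left over beyond that; if there's none, don't
		 * grow the node at all (the clamp to zero is an addition to
		 * the formula above). */
		uint32_t space_slack = free_size > space_needed ?
				       free_size - space_needed : 0;

		uint32_t max_size = old_size + space_slack;

		/* Never write more than max_chunk_size in a single node. */
		return max_size < max_chunk_size ? max_size : max_chunk_size;
	}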

Somebody feed me more coffee.


--
dwmw2