
Re: Anyone using mtd on NAND flash?




bjorn.wesen@xxxxxxx.com said:
>  Speaking of that, is there anyone who's actually looking at it yet ?

Not yet, although I'm trying to make time.


bjorn.wesen@xxxxxxx.com said:
> We've had some emails about what metric to use to decide when a block
> should be reused (based on wear-leveling, aging of data etc) but no
> actual code suggestions..  

I was going to start with something simple:

	/* Mostly reclaim the block with the most dirty space; roughly
	   one time in five, reclaim the oldest block instead so that
	   old, static data still gets moved around eventually. */
	if (jiffies % 5)
		recycle_the_block_with_most_dirty_space();
	else
		recycle_the_oldest_block();
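
Spelled out, that simple policy might look something like the function
below. The block list, the dirty_size/oldest_node bookkeeping and the
function name are all invented for illustration; jiffies is just the
kernel tick counter.

	/* Purely illustrative: 'struct jffs_block' and its fields are
	   invented bookkeeping, not structures JFFS has today. */
	struct jffs_block {
		struct jffs_block *next;
		unsigned long dirty_size;	/* bytes of obsoleted data */
		unsigned long oldest_node;	/* version of oldest node here */
	};

	static struct jffs_block *pick_block_for_gc(struct jffs_block *list)
	{
		struct jffs_block *b, *dirtiest = list, *oldest = list;

		if (!list)
			return NULL;

		for (b = list; b; b = b->next) {
			if (b->dirty_size > dirtiest->dirty_size)
				dirtiest = b;
			if (b->oldest_node < oldest->oldest_node)
				oldest = b;
		}

		/* Same four-out-of-five split as the fragment above. */
		return (jiffies % 5) ? dirtiest : oldest;
	}

Whatever we end up using for the wear-levelling input (per-block erase
counts, presumably) can be folded into the same loop later.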


bjorn.wesen@xxxxxxx.com said:
>  List Of Things To Do to make this work (might not be complete):
>    * Node-writing in JFFS should never cross a sector/block boundary
>      (just as it does not wrap around a node at the end of the flash 
>      today)

Yep. The code in jffs_file_write() is now happy to deal with
jffs_write_node() just returning a short write. It'll repeat until all the
data are written. The other callers of jffs_write_node() only write small 
nodes anyway, so it doesn't make sense to split them.
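
The caller-side pattern is roughly what's sketched below. The stand-in
function is not the real jffs_write_node() prototype (which takes rather
more arguments); only the return convention matters for the illustration.

	/* Hypothetical stand-in for jffs_write_node(): writes at most as
	   much of 'count' as fits before the next erase block boundary,
	   returning the number of bytes written or a negative error. */
	static int write_node_upto_boundary(const unsigned char *buf, int count);

	static int write_all_data(const unsigned char *buf, int count)
	{
		while (count > 0) {
			int written = write_node_upto_boundary(buf, count);

			if (written < 0)
				return written;	/* hard error: give up */

			/* Short write: the node stopped at a block
			   boundary, so write the rest as a new node. */
			buf += written;
			count -= written;
		}
		return 0;
	}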

>    * When we read the end of a block, instead of automatically picking
>      the next block, we ask a new function "get_next_usable_block()" or
>      whatever, which does the lookup using the reuse-metric etc...

>    * The GC should make the join/split choices according to the stuff
>      I wrote in a mail a month ago, to make certain no lockups can
>      occur. 

Definitely. With RW compression, this gets even more complicated. If you
have a node covering a certain range of a file, but a subsequent write has
changed some of the data, then it's entirely feasible that when you come to
GC the first node, the data no longer compress as well. 

In this case, you find that the new node _has_ to take up more physical
space on the flash than the node it's obsoleting. That makes it far
harder than before to place an upper bound on the amount of free space
you're required to keep between the head and tail of the log to ensure
that you don't get stuck.

One suggestion has been to allow the old data to be rewritten as-is in
the case where the data would 'expand' and there's no slack space
available. This would require an extra field, 'real_version', to
indicate that the data the rewritten node contains are not supposed to
obsolete any data which really were written after the original node.
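
As a sketch of the GC side of that (node layout, helper names and the
slack-space test below are all hypothetical; the interesting bit is the
fallback path):

	/* Illustrative only: node layout and helpers are made up. */
	struct gc_node {
		unsigned long version;		/* normal version number */
		unsigned long real_version;	/* proposed field: version of
						   the node this is a verbatim
						   copy of, so it obsoletes
						   nothing written after it */
		/* ... offset, length, data ... */
	};

	static int gc_one_node(struct gc_node *old)
	{
		int new_len = recompress_live_data(old);

		if (new_len <= node_length(old) || have_slack_space(new_len)) {
			/* Normal case: the still-valid data goes out as a
			   fresh node with a new version, obsoleting 'old'. */
			return write_fresh_node(old, new_len);
		}

		/* The data would expand and there's no slack left: copy the
		   old node verbatim, recording its original version in
		   real_version so it doesn't obsolete anything newer. */
		return rewrite_node_as_is(old, old->version);
	}

At scan time, a node with real_version set would then count as having
been written at that version when we decide which other nodes it
obsoletes.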


--
dwmw2