
Re: Endurance





daniel.haensse@xxxxxxx.ch said:
>  The log will be erased every 24 hours. So, if I choose a big enough
> chip that  can hold 24 hours, it will be 1 million days, which should
> be enough ;-)

For JFFS, you'll have the overhead of a struct jffs_raw_inode for every ten 
bytes of data, so each write will take 72 bytes. That means 40 minutes' worth 
(about a page of real data) will take up ~29KiB of flash, and a day's worth 
should take up about 1MiB.
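
To make that arithmetic explicit, here's a back-of-the-envelope sketch (not 
JFFS code) which assumes the struct jffs_raw_inode overhead is ~62 bytes, so 
each 10-byte append costs 72 bytes of flash, and takes the one-page-per-40-
minutes rate from above:

#include <stdio.h>

int main(void)
{
	const int data_per_write = 10;      /* bytes of real data per append   */
	const int node_size      = 72;      /* data + assumed raw inode overhead */
	const int page           = 4096;    /* "about a page" every 40 minutes */

	int writes_per_period = page / data_per_write;          /* ~409 writes */
	long flash_per_period = (long)writes_per_period * node_size;
	long flash_per_day    = flash_per_period * (24 * 60 / 40);

	printf("JFFS: ~%.1f KiB per 40 min, ~%.2f MiB per day\n",
	       flash_per_period / 1024.0,
	       flash_per_day / (1024.0 * 1024.0));
	return 0;
}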

For JFFS2, it's not as good, because JFFS2 will 'helpfully' merge data 
pages as you write them. So the first write will take about the same as in 
JFFS (80 bytes), but the second node written will contain all 20 bytes 
of data and obsolete the original one. The third node will contain 30 
bytes, and so on, up to the 409th write, which will contain 4090 bytes.

If your data are incompressible, then in 40 minutes you'll create about a 
page of data but churn 80 + 88 + 100 + ... = ~850KiB of flash. If you get 2:1 
compression that'll only be ~440KiB, but it's still a lot more than the ~29KiB 
that JFFS would take. This is a behaviour pattern that JFFS2 really doesn't 
handle very well.
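
The churn figure can be reproduced with a similar sketch. It assumes a 
~68-byte per-node overhead padded to a 4-byte boundary (which is what gives 
the 80 + 88 + 100 progression above), and that 2:1 compression halves the 
data but not the header:

#include <stdio.h>

int main(void)
{
	const int header = 68;      /* assumed per-node overhead */
	long churn = 0, churn2 = 0;

	for (int k = 1; k <= 409; k++) {       /* 409 ten-byte appends per page */
		int data  = k * 10;
		int node  = header + data;         /* incompressible data          */
		int cnode = header + data / 2;     /* ~2:1 compression, same header */

		churn  += (node  + 3) & ~3;        /* pad to 4-byte alignment */
		churn2 += (cnode + 3) & ~3;
	}

	printf("uncompressed: ~%ld KiB of churn per page of data\n", churn / 1024);
	printf("2:1 compressed: ~%ld KiB\n", churn2 / 1024);
	return 0;
}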

We should make JFFS2 merge pages only if the previous node in the page is in
a _different_ erase block to the one the current write is going to. That
way, we wouldn't get this kind of growth. Doing that should be fairly 
trivial.
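
Roughly, the decision in the write path would look something like the 
following. This is only an illustrative sketch with made-up names and an 
assumed 64KiB erase block size, not actual JFFS2 code:

#include <stdio.h>

#define ERASE_BLOCK_SIZE 0x10000     /* assumed 64KiB erase blocks */

struct node_ref {
	unsigned int flash_offset;       /* where the previous node lives on flash */
};

static int same_erase_block(unsigned int a, unsigned int b)
{
	return (a / ERASE_BLOCK_SIZE) == (b / ERASE_BLOCK_SIZE);
}

/* Decide whether to merge the previous node's data into the new write. */
static int should_merge(struct node_ref *prev, unsigned int write_offset)
{
	if (!prev)
		return 0;    /* nothing to merge */

	/* Merging within the same erase block just re-churns data that the
	 * garbage collector couldn't reclaim anyway; skip it. */
	if (same_erase_block(prev->flash_offset, write_offset))
		return 0;

	return 1;        /* previous node is elsewhere: merging helps GC */
}

int main(void)
{
	struct node_ref prev = { .flash_offset = 0x0100 };   /* earlier node */

	/* New write landing in the same erase block: don't merge. */
	printf("write at 0x2000: merge=%d\n", should_merge(&prev, 0x2000));
	/* New write landing in the next erase block: merging is worthwhile. */
	printf("write at 0x10200: merge=%d\n", should_merge(&prev, 0x10200));
	return 0;
}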

--
dwmw2


