
Re: Checksum handling in jffs_rewrite_data().

On Thu, 24 Aug 2000, David Woodhouse wrote:

> In jffs_rewrite_data(), is there a reason why the checksums are written to
> the raw_inode _after_ the data are written?

No. It is just a small optimization.

> When power is lost during the data write, this is causing jffs_scan_flash() 
> to find a node with invalid checksum. Then it finds the 'dirty' area 
> immediately after that node, which actually contains the start of the name 
> and data for the node which was being written. Then it scans the flash 
> until it finds another valid node, and marks the whole area dirty.

No, that's not so good. Scanning onwards for the next valid node is unreliable, because file data can itself contain things that look like valid nodes; imagine, for instance, a file containing a JFFS image.

> This isn't good because it's going to have been the largest free area on 
> the flash, and we end up with zero free space.

As far as I can see, you always lose this whole area of the flash if the
power is lost. However, it would be good to write the raw inode with correct
checksums first, and then only have to check whether the name and then the
data were written correctly. If the name checksum is wrong, jffs_scan_flash()
can try to find a node directly after the name field.

> Creating the checksum before-hand would mean we have to read in all the 
> data twice, but I can't see a better option.

No. Is it possible to reuse previously calculated checksums? One could
fetch the checksums from the nodes we are going to rewrite and simply add
them together. If that works, we don't have to read the data twice.


> I'll look at cleaning up the behaviour of jffs_scan_flash() when it finds 
> dirty space followed by large amounts of free space.