
Lost CRC error



Hello,

while doing some tests with JFFS2 (admittedly not the latest version) I
found a serious problem: a CRC error in the data part of a node
never gets through to an application reading the file containing
the erroneous node.

My test case was this:
On a read-only filesystem I took an md5sum of the file, then
changed some bytes in the data part of one of the nodes belonging
to that file through /dev/mtdblock1, and took the md5sum again.
I got the same sum and no I/O error.

Not too surprising, the file was still cached.

So I did an umount and after that a mount again.
At mount time, I got a CRC error message in syslog.
Fine.
But when I then read the file, the read succeeded: no I/O error,
just a different md5sum.

So I looked a bit into this and found that the scan at mount time
built an inode cache, but didn't include the node with the
CRC error.
On file read, only the nodes recorded in the inode cache were
used, and for the missing range a hole frag was created!
So an application reads zeros instead of getting an I/O error.
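
To make the sequence concrete, here is a minimal, self-contained
sketch of what I believe happens (plain C, with made-up structures
and a stand-in checksum, not the real JFFS2 code): the scan drops a
node whose data CRC fails, and the read path then fills the
uncovered range with zeros instead of failing.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical, simplified structures -- not the real JFFS2 ones. */
struct node {
    uint32_t offset;    /* file offset covered by this node's data */
    uint32_t len;       /* length of the data                      */
    uint32_t data_crc;  /* checksum stored in the node header      */
    uint8_t  data[16];
};

/* Stand-in for the real CRC32 -- just enough to detect corruption. */
static uint32_t csum(const uint8_t *buf, uint32_t len)
{
    uint32_t c = 0;
    while (len--)
        c = (c << 1) ^ *buf++;
    return c;
}

/* Scan time, as observed: a node whose data checksum fails is simply
 * not added to the cache, so the range it covers looks unmapped later. */
static int scan(struct node *nodes, int n, struct node **cache)
{
    int cached = 0;
    for (int i = 0; i < n; i++) {
        if (csum(nodes[i].data, nodes[i].len) != nodes[i].data_crc) {
            fprintf(stderr, "scan: CRC error in node at %u\n",
                    (unsigned)nodes[i].offset);
            continue;           /* the node silently disappears */
        }
        cache[cached++] = &nodes[i];
    }
    return cached;
}

/* Read time: any range not covered by a cached node becomes a hole
 * and reads back as zeros -- the application never gets -EIO. */
static void read_file(struct node **cache, int cached,
                      uint8_t *out, uint32_t size)
{
    memset(out, 0, size);       /* holes read as zeros */
    for (int i = 0; i < cached; i++)
        memcpy(out + cache[i]->offset, cache[i]->data, cache[i]->len);
}

int main(void)
{
    struct node nodes[2] = {
        { .offset = 0,  .len = 16 },
        { .offset = 16, .len = 16 },
    };
    struct node *cache[2];
    uint8_t out[32];
    int cached;

    memset(nodes[0].data, 'A', 16);
    memset(nodes[1].data, 'B', 16);
    nodes[0].data_crc = csum(nodes[0].data, 16);
    nodes[1].data_crc = csum(nodes[1].data, 16);

    nodes[1].data[3] ^= 0xff;   /* corrupt the second node on "flash" */

    cached = scan(nodes, 2, cache);
    read_file(cache, cached, out, sizeof(out));
    /* Bytes 16..31 are now zeros: wrong data, but no error reported. */
    printf("read %u bytes, no error reported\n", (unsigned)sizeof(out));
    return 0;
}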

Is the problem only in my old version of JFFS2 (I think December
2004 for kernel 2.4), or does it still exist?

The fix was pretty simple once the problem was found: I simply
added the node to the cache at scan time anyway, pretending the
error occurred only after the scan.
That way an application that tries to read the file gets a proper
I/O error.
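
In terms of the toy sketch above (again hypothetical code, not my
actual patch; it reuses struct node and csum() from that sketch,
with one extra field added), the idea looks like this:

/* struct node gains one extra field:
 *     int bad;    -- set when the data CRC check fails at scan time
 */

/* Scan time with the fix: keep the node in the cache anyway and only
 * mark it, pretending the corruption happened after the scan. */
static int scan_fixed(struct node *nodes, int n, struct node **cache)
{
    int cached = 0;
    for (int i = 0; i < n; i++) {
        nodes[i].bad =
            (csum(nodes[i].data, nodes[i].len) != nodes[i].data_crc);
        if (nodes[i].bad)
            fprintf(stderr, "scan: CRC error in node at %u\n",
                    (unsigned)nodes[i].offset);
        cache[cached++] = &nodes[i];    /* cached even if bad */
    }
    return cached;
}

/* Read time: the bad node still covers its range, so a reader that
 * touches it gets a real I/O error instead of a silent hole of zeros. */
static int read_file_fixed(struct node **cache, int cached,
                           uint8_t *out, uint32_t size)
{
    memset(out, 0, size);
    for (int i = 0; i < cached; i++) {
        if (cache[i]->bad)
            return -EIO;        /* needs #include <errno.h> */
        memcpy(out + cache[i]->offset, cache[i]->data, cache[i]->len);
    }
    return 0;
}

The point of keeping the bad node in the cache is that its range
stays covered, so the read path has something to stumble over
instead of inventing a hole.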

 Detlef

-- 
Detlef Vollmann   vollmann engineering gmbh
Linux and C++ for Embedded Systems    http://www.vollmann.ch/
Linux for PXA270 Colibri module: http://www.vollmann.ch/en/colibri/
