
Re: Major JFFS2 bug (?)




Vipin.Malik@xxxxxxx.com said:
>  The system does (seems to) do page writes to JFFS2, *and* commits
> them (by having valid CRCs/version IDs etc.). IMHO this is wrong.
> Until all the pages (i.e. the data sent down in a single write) are
> written to JFFS2, they must not be "committed" *logically* to the fs.

This was my interpretation of the POSIX spec too. I was argued down.
I'm fairly sure that none of the other journalling filesystems do this
either.

AFAICT, generic_file_write() just doesn't permit the semantics you desire. 
You need to provide your own jffs2_file_write() function, and the easiest 
way to make it conform is to prevent it from ever writing more than 
PAGE_SIZE bytes - it's perfectly entitled to return early, according to the 
spec. Then watch all your programs break - not even glibc expects this 
behaviour, even though it's permitted by the spec.
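
Something like this against the 2.4 VFS is what I mean - a minimal
sketch only; the prototype matches generic_file_write(), but this
jffs2_file_write() is hypothetical, not code that's in the tree:

  #include <linux/fs.h>   /* struct file, generic_file_write() */
  #include <linux/mm.h>   /* PAGE_SIZE */

  static ssize_t jffs2_file_write(struct file *filp, const char *buf,
                                  size_t count, loff_t *ppos)
  {
          /* Never accept more than one page per call, so a single
           * write() can never span pages.  The short return is legal;
           * POSIX expects the caller to loop on it. */
          if (count > PAGE_SIZE)
                  count = PAGE_SIZE;

          return generic_file_write(filp, buf, count, ppos);
  }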


Vipin.Malik@xxxxxxx.com said:
> Ok, I reduced the max allowed file size created by the program to 4000
> bytes. The system ran for approx. 55 power cycles before having a bad
> CRC again. Again it was a file that had last been written to when
> power failed. This time the size was 2710 bytes (huh?). The CRC was
> bad. So obviously, neither the older data (with its CRC) was
> preserved, nor did the new data (and its CRC) take.

Eep. Something's obsoleting the original data, on the basis that we have a 
newer node for that range, before actually checking the CRC on the data in 
the new node. Bad, naughty, and probably quite easy to fix. It's either in 
the scan code or in read_inode. 
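
The shape of the fix is just an ordering change - hedged sketch below,
with every name invented for illustration (the real logic lives in the
scan code or read_inode):

  /* All names here are made up; only the ordering matters. */
  struct data_node {
          unsigned char *data;     /* node payload */
          unsigned int dsize;      /* payload length */
          unsigned int data_crc;   /* CRC recorded in the node header */
  };

  extern unsigned int crc32(unsigned int seed, const void *data,
                            unsigned int len);
  extern void mark_node_obsolete(struct data_node *node);

  static void replace_node(struct data_node *new_node,
                           struct data_node *old_node)
  {
          /* Verify the new node's data CRC *first*... */
          if (crc32(0, new_node->data, new_node->dsize)
              != new_node->data_crc) {
                  /* Power died mid-write: the new node is garbage.
                   * Drop it and keep the old data (and its CRC). */
                  mark_node_obsolete(new_node);
                  return;
          }

          /* ...and only then obsolete the older node for that range. */
          if (old_node)
                  mark_node_obsolete(old_node);
  }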

--
dwmw2


