Re: Major JFFS2 bug (?)
I have the original f/s (image) that I reported on (in the mail you copied
below), with the 2710 byte file.
Do you want that or would you rather I reproduced the problem again (fresh :)?
David Woodhouse wrote:
> email@example.com said:
> > Vipin.Malik@xxxxxxx.com said:
> > > Ok, I reduced the max allowed file size created by the program to 4000
> > > bytes. The system ran till approx 55 power cycles before having a bad
> > > CRC again. Again it was a file that had last been written to when
> > > power failed. This time the size was 2710 bytes (Huh?). The CRC was
> > > bad. So obviously, neither the older data (with its CRC) was
> > > preserved, nor did the new data (and its CRC) take.
> > Eep. Something's obsoleting the original data, on the basis that we
> > have a newer node for that range, before actually checking the CRC on
> > the data in the new node. Bad, naughty, and probably quite easy to
> > fix. It's either in the scan code or in read_inode.
> I'm not sure my diagnosis was correct. The scan code shouldn't allow the
> half-completed node to get into the lists, because it _does_ already check
> the CRC.
> Vipin, can you reproduce this and show me verbose messages and/or images of
> the offending filesystem?
To unsubscribe from this list: send the line "unsubscribe jffs-dev" in
the body of a message to firstname.lastname@example.org