Re: JFFS2: powerfailtesting again
> StrongARM 1100 @ 133MHz ~ 124 BogoMIPS, 32MB SDRAM, 4MB 16-bit wide Intel
> boot block flash - 28F320C3, see
That's about 2.5 more BogoMIPS than mine ;)
> > There is something wrong for sure. On my 100MHz 486 with an 8MB flash
> > partition (JFFS2) with about 5MB of data on it, I don't see blocking for
> > more than a few seconds at a time.
> I have a big incompressible file on the fs; could this be a problem?
> I'm using Intel boot flash and it currently takes less than 0.5 sec to
> erase a sector, but it could take longer according to the specification.
Is this a static file, or is the file being written to? If it is a static
file, I don't see why it would cause a long mount time.
> > I have a small program that runs as a real-time POSIX task (in the
> > kernel). Writing out a few tens of bytes every 100ms, the worst-case
> > block I got on the wakeup time for the task was ~600ms. This was writing
> > to an fs that was ~60% full.
> > From what I remember, I let the task run for hours. Maybe I need to let
> > it run for days and see what the worst-case time would be. :)
> Try filling the fs to 100% and restarting immediately.
That is a big problem. Why would your fs become completely full in real
life? If it does, it means the GC task cannot, on average, reclaim more
space in the background than you are consuming in the foreground.
This means you are going to see inconsistent read/write performance on your
JFFS2 fs: good write times until the file system fills up, then a horrible
block while the GC clears space.
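To put numbers on that, here is a quick back-of-envelope sketch (plain C;
the 0.5 sec erase time is from your mail above, the 64KB main-sector size is
what I believe the 28F320C3 uses, and the GC copy bandwidth is a pure guess):

/* Back-of-envelope: how fast can GC reclaim space, as a function of how
 * much live data sits in the sector being collected?  All numbers here
 * are assumptions - only the shape of the argument matters. */
#include <stdio.h>

int main(void)
{
    double sector  = 64.0 * 1024;  /* bytes per main erase block (assumed) */
    double erase_s = 0.5;          /* seconds per sector erase (quoted above) */
    double copy_bw = 200.0 * 1024; /* bytes/sec GC can copy live nodes (guess) */

    for (double live = 0.1; live < 1.0; live += 0.2) {
        /* Collecting a sector that is `live` fraction valid costs one
         * erase plus copying the live data out, and reclaims only the
         * dead fraction. */
        double cost_s  = erase_s + live * sector / copy_bw;
        double reclaim = (1.0 - live) * sector / cost_s;
        printf("%2.0f%% live: GC reclaims ~%5.1f KB/s\n",
               live * 100, reclaim / 1024);
    }
    return 0;
}

If your foreground write rate stays above that reclaim rate for long enough,
the fs fills up no matter what the GC does.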
Ok, I re-ran my test (the one I mentioned above), this time letting it go
overnight (18 hrs), writing about 30 bytes per 100ms. (That's roughly 18
megabytes of data written to the 8MB fs, which is about 55% full of static
data.)
The worst case block that I got was ~3.4 seconds.
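For reference, the test loop is roughly the sketch below. The real thing
runs as an in-kernel task; this is a userspace approximation with SCHED_FIFO
and clock_nanosleep(), and the mount point, priority and write size are
made up:

/* Userspace sketch of the jitter test: wake every 100ms, log how late
 * we are, append ~30 bytes to a file on JFFS2. */
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <fcntl.h>
#include <unistd.h>
#include <sched.h>

#define PERIOD_MS 100

static double ms_between(struct timespec a, struct timespec b)
{
    return (b.tv_sec - a.tv_sec) * 1000.0 + (b.tv_nsec - a.tv_nsec) / 1e6;
}

int main(void)
{
    struct sched_param sp = { .sched_priority = 50 };
    sched_setscheduler(0, SCHED_FIFO, &sp);        /* real-time priority */

    int fd = open("/mnt/jffs2/testfile", O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd < 0) { perror("open"); return 1; }

    char buf[30];
    memset(buf, 'x', sizeof(buf));                 /* ~30 bytes per period */

    struct timespec next, last, now;
    clock_gettime(CLOCK_MONOTONIC, &next);
    last = next;

    for (;;) {
        next.tv_nsec += PERIOD_MS * 1000000L;
        if (next.tv_nsec >= 1000000000L) {
            next.tv_sec++;
            next.tv_nsec -= 1000000000L;
        }
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);

        clock_gettime(CLOCK_MONOTONIC, &now);
        double delay = ms_between(last, now);
        /* col 1: delay since last wakeup; col 2: jitter vs the 100ms ideal */
        printf("%f ms %f ms\n", delay, delay - PERIOD_MS);
        last = now;

        /* JFFS2 writes through to flash, so this can block on GC/erase */
        write(fd, buf, sizeof(buf));
    }
}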
But what is more interesting is that the majority of the blocks are in the
region of ~500-600ms, but are *NOT* always associated with chip erases.
Take a look at this excerpt from my o/p log file (column 1 = delay since the
last wakeup, ideal = 100ms; column 2 = wakeup latency in ms). The
"CHIP_ERASE" messages are from printk's I put in the MTD flash erase
routine, so I can see how often sectors are being erased and what the jitter
with each sector erase is:
124.643000 ms 24.643000 ms
139.394000 ms 39.394000 ms
124.368000 ms 24.368000 ms
129.335000 ms 29.335000 ms
131.789000 ms 31.789000 ms
2048.839000 ms 1948.839000 ms <----- Notice the 1.9 sec jitter. There is NO
                                     CHIP_ERASE here.
139.355000 ms 39.355000 ms
130.043000 ms 30.043000 ms
362.199000 ms 262.199000 ms <----- Again, notice the 262ms jitter right
                                   *before* the chip erase. GC task at work?
CHIP_ERASE
157.740000 ms 57.740000 ms  <----- The jitter from the above CHIP_ERASE is
                                   "only" 57+100ms = 157ms worst case. ??!!?
530.136000 ms 430.136000 ms <----- Again, GC task at work???
154.708000 ms 54.708000 ms
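For reference, the CHIP_ERASE hook itself is trivial - roughly the sketch
below, at the top of the driver's one-block erase routine (illustrative
only; the function name and signature are my assumption, not the exact
driver source):

/* Illustrative: a printk at the start of the per-sector erase path, so
 * every erase shows up in the log next to the task's wakeup timestamps. */
static int do_erase_oneblock(struct map_info *map, struct flchip *chip,
                             unsigned long adr)
{
    printk(KERN_NOTICE "CHIP_ERASE: sector at 0x%08lx\n", adr);

    /* ... original sequence: issue the block-erase command, then poll
     *     the status register until the erase completes ... */

    return 0;
}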
There is nothing going on in the system except writes to JFFS2 and GC's in
the background. What in the GC task is blocking for more than a few hundred
milliseconds, sometimes more than a second????
I think this is worth investigating.
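My (possibly wrong) mental model of where a foreground write ends up
blocked - simplified pseudo-C around the jffs2_reserve_space() /
jffs2_garbage_collect_pass() names, not the actual source:

/* Simplified sketch, NOT the real code: when free space runs low, the
 * writing process itself runs GC passes synchronously instead of
 * leaving everything to the background thread. */
int jffs2_reserve_space(struct jffs2_sb_info *c, __u32 minsize)
{
    while (not_enough_space(c, minsize)) {   /* hypothetical helper */
        /* Each pass may copy a live node to a new sector and wait on
         * flash I/O (or an in-progress erase), so the foreground write
         * can stall for hundreds of ms even when no erase is logged
         * at that moment. */
        jffs2_garbage_collect_pass(c);
    }
    return 0;
}

If that is what is happening, the ~500-600ms blocks without a CHIP_ERASE
would just be GC passes copying data.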
David: the GC task, even though it runs as a kernel thread, is its execution
preemptible? (Not the system calls it makes, of course.)
If yes, any idea where the extra-long blocking time is coming from (if not
from the chip erases)?
> My FS is almost unchanging, except for rapid updates of a single file up
> to 100% of the free space, followed by a fast reboot - try doing this
> several times. Does this trigger the problem?
Is this your normal expected system behavior? If not, why are you testing
this way?