
RE: JFFS2: powerfailtesting again



We have done some profiling with JFFS1 (not JFFS2 as of yet). What we've
found is that there is indeed very high variability in GC (and therefore,
mount) times, and there are many variables that can affect it. I think this
is something we'll all need to understand better and work to optimize (to
the extent possible) over time.

One interesting consideration is processor MIPS. GC is a completely
compute-bound process. Processor type, speed, and bus width can have a huge
effect on GC times. One thing that may be obscuring this effect is that much
of the development and evaluation is done using PCs with plenty of memory
and power. We are running with relatively small memory and low power - a
60 MHz ARM7 with a 16-bit data bus to FLASH and DRAM.

Many PDAs are running high-power RISC processors (a 200 MHz StrongARM, for
example). Interestingly, the new Linux palmtops (PDAs) coming out are
running lower speed devices - 60 MHz. Comparing these two classes of device,
a GC would take more than 3x longer on the 60 MHz PDA than it would on the
200 MHz PDA, assuming they both had the same bus width - it could be that
the low cost PDAs are running 16-bit busses vs. 32-bit busses on the high
end PDAs, in which case the 60 MHz PDA would be that much slower still.
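
Very rough back-of-the-envelope numbers (the 2x bus factor below is an
assumption for the data-movement portion of GC, not a measurement):

    200 MHz / 60 MHz clock:   ~3.3x longer for compute-bound work
    32-bit vs. 16-bit bus:    up to ~2x longer again (assumed)
    combined worst case:      ~6.6x longer on the low cost PDA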

In one test we did, we created a 1MB flash partition. We created 50 files,
then randomly wrote a block of between 10 and 100 bytes to each file. We ran
the random fill until the file system was "full" - there were 335K actual
data bytes and 526K JFFS header bytes (JFFS is VERY inefficient for small
files and small blocks) in 6140 nodes. At this point, the system was booted,
and it took 44 seconds to mount the file system (a full GC is normally done
during a mount)! Compare this to 3 seconds to mount an empty file system.
When half the files were deleted, a GC took 18 seconds. It is interesting to
note that if we had run this test with a 2MB partition, the times would have
roughly doubled.
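
For the curious, the fill loop looked roughly like the sketch below. This is
illustrative only - the /mnt/jffs mount point and the f%02d file names are
placeholders, not our actual test paths:

    /*
     * Sketch of the random-fill workload described above: append a
     * random 10..100 byte block to each of 50 files, round-robin,
     * until the file system fills up.
     */
    #include <stdio.h>
    #include <string.h>
    #include <stdlib.h>
    #include <fcntl.h>
    #include <unistd.h>

    #define NFILES 50

    int main(void)
    {
        char name[64], buf[100];
        int i, fd, len, full = 0;

        memset(buf, 0xA5, sizeof(buf));
        srand(1);                        /* fixed seed, repeatable runs */

        while (!full) {
            for (i = 0; i < NFILES; i++) {
                len = 10 + rand() % 91;  /* block of 10..100 bytes */
                sprintf(name, "/mnt/jffs/f%02d", i);
                fd = open(name, O_WRONLY | O_CREAT | O_APPEND, 0644);
                if (fd < 0 || write(fd, buf, len) != len) {
                    if (fd >= 0)
                        close(fd);
                    full = 1;            /* ENOSPC - file system "full" */
                    break;
                }
                close(fd);
            }
        }
        return 0;
    }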

Now, if you are running an appliance like a router or print server that
normally never gets turned off and sits in a closet, occasional long
startups may be tolerable. But if you are building a consumer device where
these delays will be plainly visible to a user, it could be a huge issue! In
this case, steps must be taken to understand how the file system is being
used and what can be done to prevent the user from experiencing these
delays.

As mentioned above, JFFS1 is very inefficient space-wise for storing files
which are frequently updated with small writes. As you might imagine, this
also affects GC performance. Each write incurs header overhead until a GC
consolidates these small nodes. Clearly, the more nodes you have, the longer
it takes. So, on a device where JFFS is used to store large, infrequently
changing code and data files (largely read-only), and where there is
adequate free space, you won't likely see performance issues. But on a
system that, for example, has large log files which are updated very
frequently with small entries, long and relatively frequent GC events can
create performance problems, especially if the device is interacting with a
user when the events occur.
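
To put numbers on the overhead, using the figures from the 1MB test above:

    header bytes per node:  526K / 6140 nodes ~= 88 bytes
    data bytes per node:    335K / 6140 nodes ~= 56 bytes

    => roughly 61% (88/144) of everything written to flash is header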

Interestingly, even if a log file, as described above, is fairly small
compared to the file system size and all the other files are large,
infrequently changing files, performance can still be a problem,
particularly if the file system is close to full. The problem is that as GC
moves through the file system, the wear leveling causes all the big,
unchanging data to be moved as well. One way around this is to have 2
volumes - one for large unchanging files (which will rarely GC) and one
sized optimally for the frequently changing files. The trade-off here is
overhead. JFFS1 requires around 2.5 free sectors per volume. Consider a
device with a single 1MB flash. 2.5 sectors is around 160K, or 16% of the
entire device! Having to move to 2 volumes to get better GC performance
reduces the available space by a further 16%. Even if you have a 2MB device,
you are still talking about an 8% loss. And note - as embedded Linux
development continues, you will see more and more devices that need to, and
will be able to, live in 2MB flash for cost reasons! The Linux PDAs coming
out are all looking like 4MB+ devices, but there is talk (note Agenda's
initial announcement claimed a 2MB version would be available, though they
only have 8MB now) about getting it down to 2MB for low cost versions (an
additional 2MB is a significant cost in a $199 device).
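
The arithmetic, assuming 64K erase sectors (which is what the 160K figure
implies):

    reserve per volume:  2.5 sectors x 64K ~= 160K
    1 volume on 1MB:     160K reserved      (16% of the device)
    2 volumes on 1MB:    2 x 160K = 320K    (an extra 16%)
    2 volumes on 2MB:    the extra 160K is 8% of the device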

As you can imagine, seeing discussions of 4+ free sectors required for JFFS2
has caused some anxiety, and we are hopeful that when the dust settles, it
will prove to require no more than JFFS1 (and hopefully less)!

To put the above in perspective, there are limits to what can be done and
limits to when it will help to do something. There is some overhead and
performance impact that must be endured if you want a power-fail-safe FLASH
replacement for hard disk/CMOS RAM persistent storage. The overhead will
_never_ be as low as it is for CMOS RAM storage. Also, given time, 4MB will
become the sweet spot on flash prices, and high end processors are getting
cheaper. What this says is that you have rapidly diminishing returns on
trying to improve JFFS space efficiency and GC performance beyond a certain
point, that point being bound by both some minimum overhead required to
achieve the desired functionality and the cost curves on FLASH and processor
speed. On the other hand, 1MB flash is still considerably less expensive
than 2MB (in the context of a $199 consumer device) and current projections
say that 1MB devices will be around for a while. Further, the ARM7 class
processor is just really hitting its stride now, being significantly less
expensive than StrongARM class processors and being built into many ASICs.

While we look at improving JFFS (1 and 2), we can also work to understand
the performance characteristics well enough to optimize their application.
One trade-off (separate volumes for frequently and infrequently updated
files) is mentioned above. Another is to force GC frequently and during
non-critical times, to minimize worst-case GC times, to minimize variation
in GC times, and to minimize the probability that a user will see any delays
at all. A sketch of that idea follows.
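
Neither JFFS1 nor JFFS2 exposes a force-GC call to userspace today, so the
ioctl and idle test in the sketch below (JFFS_IOC_FORCE_GC, system_is_idle)
are purely hypothetical names - this is only meant to show the shape of an
idle-time GC policy:

    /*
     * Hypothetical idle-time GC daemon. JFFS_IOC_FORCE_GC and
     * system_is_idle() are invented for illustration; no such ioctl
     * exists in JFFS1 or JFFS2 at present.
     */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>

    #define JFFS_IOC_FORCE_GC _IO('J', 1)    /* hypothetical ioctl */

    static int system_is_idle(void)
    {
        /* Application-specific: e.g. no user activity for N minutes. */
        return 1;
    }

    int main(void)
    {
        int fd = open("/mnt/jffs", O_RDONLY);    /* placeholder path */

        if (fd < 0) {
            perror("open");
            return 1;
        }
        for (;;) {
            sleep(600);                      /* wake every 10 minutes */
            if (system_is_idle())
                ioctl(fd, JFFS_IOC_FORCE_GC);    /* amortize GC early */
        }
    }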

Hope this is helpful.





> -----Original Message-----
> From:	Vladimir Serov [SMTP:vserov@xxxxxxx.com]
> Sent:	Wednesday, June 06, 2001 4:07 AM
> To:	David Woodhouse
> Cc:	jffs-dev@xxxxxxx.com
> Subject:	Re: JFFS2: powerfailtesting again
> 
> Hi David
> Sorry for some period of silence; I've encountered several problems in my
> setup (sysrq wasn't working on my console - patched; Expect sometimes loses
> data -> simulated jffs2 failures - made a workaround; several time-consuming
> runs to verify changes). But now I'm ready to supply you with more reliable
> data.
> 
> JFFS2 looks stable after ~ 300 reboots (and keeps running) --- VERY GOOD !!!
> 
> I've added a debug printout when a GC pass starts and stops and timed it,
> as well as the overall boot, init and copy stages. It appears that the mean
> times of these stages in a long run are 2, 117 and 182 seconds, while on a
> freshly flashed filesystem they take 2, 2 and 14 seconds respectively. The
> mean time of a GC pass is about 9 seconds, but there is huge variation in
> it; I've seen ones longer than 3 min. As you may remember, the GC daemon is
> supposed to be a low priority process, but it is implemented as a kernel
> thread and thus cannot be interrupted by the timer.
> 
> IMHO 3 min of uninterrupted running is way too long. This definitely looks
> like a hang. Is it possible to make calls to schedule() more frequent, not
> just between GC passes?
> 
> Did anybody perform profiling of JFFS? Where is the bottleneck? 10 minutes
> to boot from a 3MB partition is a rather long time.
> Do you have plans to improve performance? Is there room for this?
> 
> Thank you in advance.
> With best regards,
>                         Vladimir.
> 
>  << File: jffs.ps.gz >> 
