
Re: JFFS2 performance (fwd)



Nicolas Pitre wrote:

> Please reply to John G Dorsey <john+@xxxxxxx.edu> since I don't 
> know if he's subscribed to the list.

I am now. =)

Thanks to Nicolas, I've been able to track down one of the two
performance bottlenecks in my JFFS2 bootldr implementation. (The I/D
caches were off, despite code that appeared to enable them.) The other,
however, I think may be algorithmic in nature.

The file whose replay performance matters most to me is a Linux zImage,
which is mostly gzipped program text/data with a small uncompressed
program bolted onto the side. When creating the JFFS2 inodes for such a
file, jffs2_compress() emits a few ZLIB nodes, a few NONE nodes, and a
whole bunch of DYNRUBIN nodes. In the DYNRUBIN case, the compression
savings tend to be less than a dozen bytes per inode, but the runtime
cost of decompressing a DYNRUBIN inode seems to be higher than that of,
say, a ZLIB inode. The upshot is that I can replay the journal for my
8MB filesystem in two seconds, then wait several times that long for
the zImage to be reassembled.
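
To illustrate, here's a rough, self-contained model of that selection
behavior. The names and the stand-in compressors are mine, not the
actual JFFS2 compression code; the point is just that any saving at
all counts as "success", so DYNRUBIN wins on blocks it barely shrinks:

#include <stdint.h>
#include <stdio.h>

enum compr_type { COMPR_NONE, COMPR_ZLIB, COMPR_DYNRUBIN };

/* Stand-in "compressors": each returns the compressed length, or the
 * input length if it could not shrink the block at all. */
static uint32_t model_zlib(const uint8_t *in, uint32_t len)
{
        (void)in;
        return len;                      /* gzipped payload: zlib gains nothing */
}

static uint32_t model_dynrubin(const uint8_t *in, uint32_t len)
{
        (void)in;
        return len > 8 ? len - 8 : len;  /* shaves only a handful of bytes */
}

/* Try the expensive-but-effective compressor first, then the cheap
 * one; any reduction in size is treated as success. */
static enum compr_type pick_compressor(const uint8_t *in, uint32_t len,
                                       uint32_t *outlen)
{
        *outlen = model_zlib(in, len);
        if (*outlen < len)
                return COMPR_ZLIB;

        *outlen = model_dynrubin(in, len);
        if (*outlen < len)               /* a few bytes saved -> DYNRUBIN */
                return COMPR_DYNRUBIN;

        *outlen = len;
        return COMPR_NONE;               /* store the block uncompressed */
}

int main(void)
{
        uint8_t block[4096] = { 0 };     /* pretend this is gzipped data */
        uint32_t outlen;
        enum compr_type t = pick_compressor(block, sizeof block, &outlen);
        printf("type=%d, %u -> %u bytes\n", t,
               (unsigned)sizeof block, (unsigned)outlen);
        return 0;
}

With a model like this, a mostly-gzipped file comes out almost
entirely as DYNRUBIN nodes, each saving a few bytes but each paying
the full DYNRUBIN decompression cost on replay.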

I think there's a simple fix for this, which is to only report that a
given compression type has "succeeded" if it can produce at least, say,
10% savings. (This threshold might vary with algorithm cost.) I've tried
this in dynrubin_compress(), and the replay performance for zImage went
through the roof.
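
For reference, here's a minimal sketch of the kind of guard I mean;
the names and the integer-percentage check are mine, not the actual
dynrubin_compress() interface:

#include <stdint.h>

#define MIN_SAVINGS_PCT 10      /* might vary with algorithm cost */

/* Return nonzero only if compressing inlen bytes down to outlen bytes
 * saved at least MIN_SAVINGS_PCT percent. Otherwise the compressor
 * should report failure so the block falls through to a cheaper type
 * (ultimately NONE). */
static int savings_worth_it(uint32_t inlen, uint32_t outlen)
{
        /* outlen <= inlen * (100 - PCT) / 100, kept in integer math */
        return (uint64_t)outlen * 100 <=
               (uint64_t)inlen * (100 - MIN_SAVINGS_PCT);
}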

-jd
