
Re: JFFS compression and hard links.



On Wed, 10 Jan 2001, Joe deBlaquiere wrote:
> Another issue to consider is decompression buffer space. If you compress 
> a whole file, you could need a good bit of buffer space, whereas if you 
> limit the chunks that you're compressing to 32k-64k you can still get 
> good compression and don't have to worry too much about running out of RAM.

Actually, limiting compression to page-sized chunks still gives good
compression on files that compress at all. cramfs does this, for example.
The difference between cramfs images built with 4096-byte and 8192-byte
chunk sizes, for an image of around 350 kbyte, was only maybe 20 kbyte.
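
To make that concrete, here is a rough user-space sketch, not JFFS code:
plain zlib deflating the same file in independent chunks of two sizes and
comparing the totals. The default file name and the 4k/64k chunk sizes are
just examples.

/*
 * Build with: cc -o chunkcmp chunkcmp.c -lz
 */
#include <stdio.h>
#include <stdlib.h>
#include <zlib.h>

static uLong chunked_size(const Bytef *buf, uLong len, uLong chunk)
{
    uLong total = 0, off;

    for (off = 0; off < len; off += chunk) {
        uLong in = (len - off < chunk) ? len - off : chunk;
        uLongf out = compressBound(in);
        Bytef *tmp = malloc(out);

        /* each chunk is compressed on its own, like one node would be */
        if (!tmp || compress2(tmp, &out, buf + off, in,
                              Z_DEFAULT_COMPRESSION) != Z_OK) {
            free(tmp);
            return 0;
        }
        total += out;
        free(tmp);
    }
    return total;
}

int main(int argc, char **argv)
{
    FILE *f = fopen(argc > 1 ? argv[1] : "vmlinux", "rb");
    Bytef *buf;
    long len;

    if (!f)
        return 1;
    fseek(f, 0, SEEK_END);
    len = ftell(f);
    rewind(f);
    buf = malloc(len);
    if (!buf || fread(buf, 1, len, f) != (size_t)len)
        return 1;
    fclose(f);

    printf("4096-byte chunks:  %lu bytes compressed\n",
           chunked_size(buf, len, 4096));
    printf("65536-byte chunks: %lu bytes compressed\n",
           chunked_size(buf, len, 65536));
    free(buf);
    return 0;
}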

readpage gets awfully slow if we have to unpack into a buffer several times
the size of PAGE_CACHE_SIZE and then extract the requested page from it.
Sure, we could cache the unpacked data, but that runs into other problems:
many programs tend to jump around, accessing pages randomly, so the cache
doesn't help much. That would be a death warrant for nodes whose
uncompressed size is > PAGE_CACHE_SIZE.
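
Roughly what the per-page read looks like when a node holds more than one
page of data (user-space sketch, plain zlib instead of the kernel code;
struct jffs_node_blob and read_one_page are made-up names for
illustration): to satisfy a single page we have to inflate the *whole*
node into a temporary buffer and copy out just the page the caller asked
for.

#include <string.h>
#include <stdlib.h>
#include <zlib.h>

#define PAGE_CACHE_SIZE 4096UL

struct jffs_node_blob {         /* one compressed node, as found on flash */
    const Bytef *zdata;         /* compressed data */
    uLong zlen;                 /* compressed length */
    uLong ulen;                 /* uncompressed length, possibly many pages */
};

/* Cost of reading one page is O(ulen), not O(PAGE_CACHE_SIZE). */
static int read_one_page(const struct jffs_node_blob *n,
                         unsigned long pgidx, unsigned char *page)
{
    uLongf ulen = n->ulen;
    Bytef *tmp = malloc(ulen);          /* buffer for the *whole* node */
    uLong off = pgidx * PAGE_CACHE_SIZE;
    uLong copy;

    if (!tmp)
        return -1;
    if (uncompress(tmp, &ulen, n->zdata, n->zlen) != Z_OK || off >= ulen) {
        free(tmp);
        return -1;
    }
    copy = (ulen - off < PAGE_CACHE_SIZE) ? ulen - off : PAGE_CACHE_SIZE;
    memcpy(page, tmp + off, copy);      /* keep one page, discard the rest */
    free(tmp);
    return 0;
}

int main(void)
{
    unsigned char src[3 * PAGE_CACHE_SIZE], page[PAGE_CACHE_SIZE];
    uLongf zlen = compressBound(sizeof(src));
    Bytef *zbuf = malloc(zlen);
    struct jffs_node_blob node;

    memset(src, 'x', sizeof(src));      /* pretend this is a 3-page node */
    if (!zbuf || compress2(zbuf, &zlen, src, sizeof(src),
                           Z_DEFAULT_COMPRESSION) != Z_OK)
        return 1;

    node.zdata = zbuf;
    node.zlen = zlen;
    node.ulen = sizeof(src);

    /* reading page 2 still inflates all three pages behind the scenes */
    return read_one_page(&node, 2, page);
}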

So I definitely vote for storing only one page's worth of data per
jffs_node when compression is enabled. If compression is disabled, it's
not important.
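
Something like this on the write side, i.e. split at PAGE_CACHE_SIZE
boundaries before compressing, so no node carries more than one page of
uncompressed data. Sketch only; write_jffs_node() is a stand-in for
whatever actually compresses and lays the node out on flash, not a real
JFFS function.

#include <stdio.h>

#define PAGE_CACHE_SIZE 4096UL

static int write_jffs_node(const unsigned char *data, unsigned long len)
{
    (void)data;                     /* stand-in: would compress and write */
    printf("node holding %lu bytes of uncompressed data\n", len);
    return 0;
}

static int write_compressed(const unsigned char *data, unsigned long len)
{
    unsigned long off;

    for (off = 0; off < len; off += PAGE_CACHE_SIZE) {
        unsigned long chunk = (len - off < PAGE_CACHE_SIZE)
                                  ? len - off : PAGE_CACHE_SIZE;

        /* each node compresses (and later inflates) independently, so
         * readpage never has to unpack more than one page at a time */
        if (write_jffs_node(data + off, chunk) < 0)
            return -1;
    }
    return 0;
}

int main(void)
{
    static unsigned char data[10000];   /* arbitrary 10000-byte write */

    return write_compressed(data, sizeof(data));
}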

-BW


To unsubscribe from this list: send the line "unsubscribe jffs-dev" in
the body of a message to majordomo@xxxxxxx.com