
Re: JFFS compression and hard links.



64k should be fine as long as we're talking about uncompressed size.
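
To make that concrete, here's a rough userspace sketch (plain zlib, not
actual JFFS code; the 64k cap is just the figure from this thread). If
the uncompressed size of a node is capped, the decompression buffer is
one fixed allocation of that cap, however big the file is:

/*
 * Rough userspace sketch, not JFFS code: with a hard cap on the
 * uncompressed size of a node, the decompression buffer is a single
 * fixed allocation of that cap, no matter how big the file is.
 * Compile against zlib with -lz.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <zlib.h>

#define MAX_NODE_DATA (64UL * 1024UL)  /* assumed uncompressed cap per node */

int main(void)
{
	unsigned char *raw  = malloc(MAX_NODE_DATA);
	unsigned long  clen = compressBound(MAX_NODE_DATA);
	unsigned char *comp = malloc(clen);
	unsigned char *out  = malloc(MAX_NODE_DATA); /* whole decompress budget */
	unsigned long  rawlen = MAX_NODE_DATA;

	if (!raw || !comp || !out)
		return 1;
	memset(raw, 'x', MAX_NODE_DATA);	/* stand-in for file data */

	/* Compress one node's worth of data... */
	if (compress(comp, &clen, raw, MAX_NODE_DATA) != Z_OK)
		return 1;

	/* ...and get it back using only the fixed-size buffer. */
	if (uncompress(out, &rawlen, comp, clen) != Z_OK)
		return 1;

	printf("%lu bytes compressed, %lu back out\n", clen, rawlen);
	return 0;
}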

So what happens when we have 10k left in a sector and we're trying to 
write a big file? Do we just write into other empty sectors and wait 
until we have something that fits in that space? Do we try to fit a 
complete compressed node into that 10k, or should we go ahead and 
compress 64k and perhaps have a 'partial node/node continuation' record?

I really think the compressed nodes need to be complete unto themselves, 
but that makes it hard to choose how much data to bite off if you're 
trying to pack it efficiently into flash.
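
To make the trade-off concrete, here's a rough userspace sketch of the
'bite off and retry' approach (plain zlib, not actual JFFS code; the
header size, the 512-byte floor and the halving step are placeholders,
not JFFS's real numbers or strategy):

/*
 * Sketch of one way to bite off the right amount: shrink the chunk
 * until its compressed form plus the node header fits in what's left
 * of the sector.
 */
#include <stddef.h>
#include <zlib.h>

#define NODE_HDR_SIZE 68	/* assumed fixed per-node overhead */

/*
 * Try to fit one self-contained compressed node into 'space' bytes.
 * 'out' must hold at least space - NODE_HDR_SIZE bytes. Returns the
 * number of input bytes consumed, or 0 if nothing worthwhile fits
 * and the caller should move on to another sector.
 */
static size_t fit_node(const unsigned char *data, size_t len,
		       unsigned char *out, size_t space,
		       unsigned long *out_clen)
{
	size_t chunk = len;

	if (space <= NODE_HDR_SIZE)
		return 0;
	if (chunk > 64UL * 1024UL)
		chunk = 64UL * 1024UL;	/* respect the per-node size cap */

	while (chunk >= 512) {	/* assumed minimum worthwhile chunk */
		unsigned long clen = space - NODE_HDR_SIZE;

		/* compress() returns Z_BUF_ERROR if the output won't fit */
		if (compress(out, &clen, data, chunk) == Z_OK) {
			*out_clen = clen;
			return chunk;	/* this prefix fits; write it */
		}
		chunk /= 2;	/* too big: bite off less */
	}
	return 0;	/* leave the tail for another sector */
}

The catch is the repeated compression passes when the first attempt
doesn't fit, which is exactly the kind of inefficiency that makes this
choice hard.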

David Woodhouse wrote:

> jadb@xxxxxxx.com said:
> 
>>  Another issue to consider is decompression buffer space. If you
>> compress a whole file, you could need a good bit of buffer space,
>> whereas if you limit the chunks that you're compressing to 32k-64k
>> you can still get good compression and don't have to worry too much
>> about running out of RAM.
> 
> 
> We already restrict nodes to erasesize/2 in size, which is typically around 
> 64K. It's feasible to reduce that further to get sensible behaviour with 
> compression.
> 
> --
> dwmw2


-- 
Joe deBlaquiere
Red Hat, Inc.
307 Wynn Drive
Huntsville AL, 35805
voice : (256)-704-9200
fax   : (256)-837-3839

