
Re: JFFS compression and hard links.



I would actually prefer a user-space compression tool over "compress on
the fly" support in the filesystem. That simplifies the model greatly
and speeds up normal write performance. The user-space tool would need
to be able to call a "compressed_write" on a live mounted filesystem,
but that should not be much different from normal writes (which would be
uncompressed).
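
To make that concrete, here is a rough sketch of what such a
"compressed_write" entry point might look like. All of these names are
hypothetical, nothing like this exists in JFFS today; the point is just
that the caller hands over data that is already compressed, together
with the uncompressed length so the inode's logical size stays correct:

    /* Hypothetical interface sketch -- made-up names, not JFFS code. */
    struct jffs_compressed_write_req {
        off_t       uofs;   /* logical (uncompressed) file offset */
        size_t      ulen;   /* length the data expands to */
        size_t      clen;   /* length of the compressed payload */
        const void *cdata;  /* the pre-compressed data itself */
    };

    /* Could be exposed to user space as an ioctl on an open file. */
    #define JFFS_IOC_COMPRESSED_WRITE \
        _IOW('J', 1, struct jffs_compressed_write_req)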

The user-space tool would still yield a filesystem with the compression
benefits of cramfs plus full update capability. Exporting the
"compressed_write" API would then allow individual applications to
compress and write files in one pass if they choose. A gzip-like filter
interface would be even nicer, as apps would not need to deal with
compression at all, just pipe through the cjffs filter, e.g.:

tar cvf - * | cjffs_compress_filter_with_a_shorter_name_than_this > foo.tar

to get foo.tar compressed before it is written to flash.
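
For illustration, a minimal version of that filter could be built on
zlib's gz* interface. This is only a sketch: the real tool would emit
whatever framing the compressed_write API expects, not gzip framing:

    /* cjffs_filter.c -- compress stdin to stdout with zlib.
     * Build: cc -o cjffs_filter cjffs_filter.c -lz
     */
    #include <unistd.h>
    #include <zlib.h>

    int main(void)
    {
        char buf[4096];
        ssize_t n;
        gzFile out = gzdopen(STDOUT_FILENO, "wb");

        if (out == NULL)
            return 1;
        while ((n = read(STDIN_FILENO, buf, sizeof(buf))) > 0)
            if (gzwrite(out, buf, (unsigned)n) != n)
                return 1;
        /* n == 0 means clean EOF on stdin; anything else is an error. */
        return (gzclose(out) == Z_OK && n == 0) ? 0 : 1;
    }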

I'm not saying that transparent compression is a Bad Thing, just that it
is not nearly as urgent and is significantly more difficult to do right.
There are still read/write issues: what happens when an app opens a
very large compressed file and rewrites a few bytes in the middle? I
would say the rewritten data gets stored as uncompressed nodes and the
untouched compressed nodes stay where they were. This would mean the
entire file has to be recompressed to restore it to optimal
compression, but I can live with that.
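
One way to picture the resulting node list (the fields here are
hypothetical, not the real struct jffs_node): each node covers a
logical range and carries a flag saying whether its on-flash data is
compressed, so a mid-file rewrite just adds an uncompressed node that
shadows part of an older compressed one:

    /* Hypothetical in-core extent record, one per node on flash. */
    struct node_extent {
        off_t  uofs;        /* logical (uncompressed) start offset */
        size_t ulen;        /* logical bytes this node covers */
        size_t flen;        /* bytes actually stored on flash */
        int    compressed;  /* 0 = stored raw, 1 = compressed */
    };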

I do agree with David that both compressed and uncompressed size
information should be stored. Picture the read performance of mmap'ed
libraries: they frequently jump to large offsets, and since compression
ratios vary, a compressed-offset calculation can get way out of whack
for larger files.
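
With the uncompressed length stored in every node, that lookup becomes
a plain binary search over logical offsets, independent of how well
each node happened to compress. A sketch, using hypothetical per-node
arrays sorted by logical offset:

    #include <stddef.h>
    #include <sys/types.h>

    /* Return the index of the node covering logical offset 'want',
     * or -1 for a hole / past EOF.  uofs[]/ulen[] are hypothetical
     * arrays of per-node logical offsets and lengths, sorted by uofs.
     */
    static int find_node(const off_t *uofs, const size_t *ulen,
                         size_t nnodes, off_t want)
    {
        size_t lo = 0, hi = nnodes;

        while (lo < hi) {
            size_t mid = lo + (hi - lo) / 2;

            if (want < uofs[mid])
                hi = mid;
            else if (want >= uofs[mid] + (off_t)ulen[mid])
                lo = mid + 1;
            else
                return (int)mid;    /* this node covers 'want' */
        }
        return -1;
    }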

David Woodhouse wrote:
> 
> bjorn.wesen@xxxxxxx.com said:
> >  We definitely need the compressed footprint. It's not obvious that
> > the uncompressed size needs to be there - each jffs node on flash will
> > be a separate contiguous compressed block but with an "uncompressed"
> > data offset. So if we need to find bytes 20000-21000 of a file, we'd
> > find the node with the data offset closest under 20000, unpack it, and hope
> > all the data is there; otherwise look for the next node and unpack that
> > as well.
> 
> Hmmm... yeah, that could possibly work. It'll make the node lists more
> complex, so as we've already got to break compatibility I'm tempted just to
> add the extra field.
> 
> bjorn.wesen@xxxxxxx.com said:
> > This gets much simpler if we make the decision to only support
> > compress-once systems, where mkfs.jffs will do the compression and do
> > it so each jffs_node corresponds to one page, a la cramfs.
> 
> S'cheating. :)
> 
> We might as well bite the bullet and do dynamic compression right from the
> start - people want it.
> 
> bjorn.wesen@xxxxxxx.com said:
> >  Heh I don't even know hard-link semantics in normal filesystems so
> > I'll let that pass :)
> 
> Simple - the same inode is linked to by two or more directories.
> 
> Doing that in JFFS is going to be, erm, interesting. It basically means
> you can't have 'name' or 'parent' information in the inode itself - you
> have to have directory structure represented another way.
> 
> --
> dwmw2
> 

-- 
Tim Riker - http://rikers.org/ - short SIGs! <g>
All I need to know I could have learned in Kindergarten
... if I'd just been paying attention.

To unsubscribe from this list: send the line "unsubscribe jffs-dev" in
the body of a message to majordomo@xxxxxxx.com