Re: JFFS2 empty file overhead questions
> jffs2_inode_cache 25002 25012 20 148 148 1
> jffs2_node_frag 0 0 20 0 0 1
> jffs2_raw_node_ref 50063 50096 16 248 248 1
> jffs2_tmp_dnode 0 0 12 0 0 1
> jffs2_raw_inode 0 0 68 0 0 1
> jffs2_raw_dirent 0 92 40 0 1 1
> jffs2_full_dnode 1 202 16 1 1 1
OK, so you have ~1MiB of jffs2_raw_node_refs and ~0.5MiB of
jffs2_inode_caches. Each file you've created has two nodes associated with
it, and an entry in the inode hash table.
Without that bare minimum of information, we wouldn't be able to open the
file on demand -- we'd have to scan the whole medium for nodes belonging to
each file on read_inode(). There's very little we can do about it.
> size-64 25009 25016 64 424 424 1
Hmmm. Those will be the associated 25000-odd jffs2_full_dirents, each
containing the name. That's because you have the directory inode in-core,
and it has a complete list of the associated dirents and names. That's most
of another 2MiB.
If we want to optimise for this case, perhaps we could stop keeping the
name in memory. That would reduce the size of the object to 21 bytes. We
have a hash of the name anyway, which we use for pre-comparison, so we
wouldn't need to read the actual name from the flash very often in
lookup() -- and readdir() can just live with it.
We could probably drop some other members from the jffs2_full_dirent while
we're at it -- given that we're going to have to read from the flash in
lookup() and readdir() anyway, there's not a lot of point in keeping the
version, ino# and type either. So we may be able to get that down to 12
bytes per dirent, which would be only about 400KiB.
We'd then need to invent a 'jffs2_tmp_dirent_info' with all the original
information in it again, for use by the scan code much like the
jffs2_tmp_dnode_info -- because we don't want to have to go back to the
medium again during scan.