Re: JFFS II Performance
(Cc back to the list after I dropped the list from my one-line 'can you
profile it' - hope you don't mind).
> We did more testing. Looks like it spends a lot of time in
> "jffs2_build_inode" during the mount.
That's building the map of which range of bytes in the file(s) come from
which physical node on the flash. It keeps a linked list representing the
physical nodes, and jffs2_build_inode() builds this linked list by calling
jffs2_add_full_dnode_to_fraglist() for each physical node in turn.
Currently, jffs2_add_full_dnode_to_fraglist() starts at the beginning of
the linked list and follows it all the way to the end, and it does this
afresh for each node added - so building the map for a large file is
quadratic in the number of nodes. It may be possible to add a kind of hash
table for large files - something like keeping a separate list for each
half-megabyte of file offset rather than a single large list.
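The bucketing idea above could look something like this - a hedged sketch only, with invented names (`frag`, `frag_bucket`, `add_frag`) rather than the real JFFS2 structures. Each 512 KiB range of file offsets gets its own sorted list, so an insert only walks the short list for its own bucket instead of the whole file's list:

```c
#include <stdio.h>
#include <stdlib.h>

#define BUCKET_SHIFT 19              /* 512 KiB per bucket */
#define MAX_BUCKETS  64              /* covers files up to 32 MiB here */

struct frag {
    unsigned int ofs;                /* byte offset of this node's data */
    unsigned int size;
    struct frag *next;
};

struct frag_bucket {
    struct frag *heads[MAX_BUCKETS]; /* one list head per half-megabyte */
};

/* Insert keeping each per-bucket list sorted by offset: we only walk
 * the (short) list for this node's bucket, not the whole file. */
static void add_frag(struct frag_bucket *fb, struct frag *f)
{
    struct frag **p = &fb->heads[f->ofs >> BUCKET_SHIFT];

    while (*p && (*p)->ofs < f->ofs)
        p = &(*p)->next;
    f->next = *p;
    *p = f;
}

/* Find the fragment covering a given byte offset, or NULL. */
static struct frag *find_frag(struct frag_bucket *fb, unsigned int ofs)
{
    struct frag *f = fb->heads[ofs >> BUCKET_SHIFT];

    while (f) {
        if (ofs >= f->ofs && ofs < f->ofs + f->size)
            return f;
        f = f->next;
    }
    return NULL;
}
```

(A real version would have to handle fragments straddling a bucket boundary, which this sketch ignores.)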
> Also it spends a lot of time in "jffs2_lookup" when we do a "ls -al".
In jffs2_lookup()? That's odd; jffs2_lookup() takes time proportional to
the number of entries in the directory, not to the size of the files. But
it _does_ call iget(), which in turn will call jffs2_read_inode(). How were
you doing the profiling? Is it possible that your profiling missed the fact
that it ended up in jffs2_read_inode()? That function does much the same as
jffs2_build_inode() - it builds up the linked list of nodes again.
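To illustrate why the lookup itself should be cheap relative to read_inode: it is essentially a linear scan over the in-core directory entries, comparing a name hash first and the full name only on a hash match. This is a purely illustrative sketch (the structures and names are not the real JFFS2 code):

```c
#include <string.h>

struct dirent_node {
    unsigned int hash;               /* precomputed hash of the name */
    const char *name;
    unsigned int ino;
    struct dirent_node *next;
};

static unsigned int name_hash(const char *s)
{
    unsigned int h = 5381;

    while (*s)
        h = h * 33 + (unsigned char)*s++;
    return h;
}

/* Returns the inode number, or 0 if not found.  The real lookup would
 * then call iget(), which triggers jffs2_read_inode() and the node-list
 * rebuild described above - that, not this scan, is the heavy part. */
static unsigned int dir_lookup(struct dirent_node *list, const char *name)
{
    unsigned int h = name_hash(name);
    struct dirent_node *d;

    for (d = list; d; d = d->next)
        if (d->hash == h && strcmp(d->name, name) == 0)
            return d->ino;
    return 0;
}
```

So for `ls -al` on a directory of N files, the scan cost is O(N) per lookup, but each successful lookup also pays the read_inode cost for that file - which is where a profiler is likely to show the time going.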
> What are the things we can do to improve the overall performance of
> JFFS II ?
The long mount time is due to JFFS2 having to scan the entire flash, and
then build the above-mentioned list for _every_ inode on the flash during
mount, so that it knows which physical nodes are obsolete, and has an
accurate count of how much clean/dirty space there is on the flash.
The scan of the entire flash will be necessary until someone implements a
form of checkpointing, so we periodically (or just on a clean unmount) write
out to the beginning of an eraseblock the details of what nodes are where
(from the list of jffs2_raw_node_refs). On mount, we'd just look in the
beginning of each eraseblock to find the latest such checkpoint, then do
the complete scan only on blocks which are _newer_ than the checkpoint.
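The "find the latest checkpoint" step might look roughly like the following. Everything here is invented for illustration - the magic value, the header layout, and the idea of a monotonically increasing version number are assumptions, not an existing on-flash format:

```c
#include <stdint.h>

#define CKPT_MAGIC 0x4a434b50u        /* "JCKP" - hypothetical marker */

/* Hypothetical record written at the start of an eraseblock on clean
 * unmount (or periodically), summarising the jffs2_raw_node_refs. */
struct ckpt_header {
    uint32_t magic;
    uint32_t version;                 /* higher == newer checkpoint */
};

/* Read just the first few bytes of each eraseblock and return the index
 * of the block holding the newest valid checkpoint, or -1 if none.
 * Only blocks written after that checkpoint would then need a full scan. */
static int find_latest_checkpoint(const struct ckpt_header *blocks,
                                  int nblocks)
{
    int best = -1;
    uint32_t best_ver = 0;
    int i;

    for (i = 0; i < nblocks; i++) {
        if (blocks[i].magic != CKPT_MAGIC)
            continue;                 /* no checkpoint in this block */
        if (best == -1 || blocks[i].version > best_ver) {
            best = i;
            best_ver = blocks[i].version;
        }
    }
    return best;
}
```

The win is that mount reads one header per eraseblock instead of every byte of the flash, and falls back to the full scan only for the blocks the checkpoint doesn't cover.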
Also it doesn't really _need_ to work out which nodes are obsoleted until
the first _write_ attempt is made, so that stage could possibly be done in
the background, allowing read activity to happen before it's complete. A
write to the filesystem often happens fairly early in the boot sequence
though, so it's not clear this would be worth looking at.
However, on NOR flash we actually go back and clear an extra bit in nodes
which have been obsoleted, so it's possible we might be able to avoid the
build_inode part of the current mount process entirely.
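The NOR trick works because NOR flash lets you clear individual bits in place without an erase, so a single "valid" bit in the node header can be cleared when the node is obsoleted. A mount-time scan could then skip obsolete nodes directly, without rebuilding each inode's fragment list to discover them. A minimal sketch, with the field layout and flag value invented for illustration:

```c
#include <stdint.h>

#define NODE_OBSOLETE_BIT 0x01u       /* bit is 1 when valid; cleared
                                         in place when obsoleted */

struct raw_node {
    uint32_t flags;                   /* all bits start as 1 on erased NOR */
    uint32_t totlen;
};

static int node_is_obsolete(const struct raw_node *n)
{
    return (n->flags & NODE_OBSOLETE_BIT) == 0;
}

/* Count the nodes a scan would still have to process in full; the
 * obsolete ones contribute only their length to the dirty-space total. */
static int count_valid(const struct raw_node *nodes, int n)
{
    int valid = 0;
    int i;

    for (i = 0; i < n; i++)
        if (!node_is_obsolete(&nodes[i]))
            valid++;
    return valid;
}
```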
Those are the big things that come to mind - other than that, spending time
with the profiler and optimising individual routines has been known to be
productive.
To unsubscribe from this list: send the line "unsubscribe jffs-dev" in
the body of a message to firstname.lastname@example.org