
Re: What on Earth is JFFS2 GC doing?

Vipin.Malik@xxxxxxx.com said:
> The file is going up now to vipin@xxxxxxx.org:jittertest2-debug-s
> napshot.log. See top of file for my initial comments then search for
> [*** <comment> ***] for comments in middle.

OK, thanks again for the data.

I see two problems here. 

Firstly, you get to write a whole eraseblock full of data without doing
_any_ garbage collection. Then as soon as c->nr_free_blocks goes down to 4,
it makes you wait while it garbage-collects a whole eraseblock to make
space. We should use a metric for triggering GC which is less discrete -
maybe based on free_size instead of nr_free_blocks. Or more usefully on 
	free_size + ((gc_node->flash_offset & ~3) - gcblock->offset)
which should compensate for the amount of GC done already through the block.
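The metric above could be sketched roughly as follows. This is a minimal stand-alone illustration, not the real JFFS2 code: the struct names and fields are simplified stand-ins for the kernel structures, and masking the low two bits of flash_offset reflects JFFS2 using them as node-state flags.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical, simplified stand-ins for the JFFS2 structures involved. */
struct gc_block { uint32_t offset;       };  /* start of block being GC'd  */
struct gc_node  { uint32_t flash_offset; };  /* next node to be collected  */

/* A less discrete GC trigger: rather than counting whole free eraseblocks
 * (nr_free_blocks), count free bytes plus the bytes already collected from
 * the block currently under GC.  (flash_offset & ~3) strips the two low
 * bits, which JFFS2 overloads as node-state flags. */
static uint32_t effective_free(uint32_t free_size,
                               const struct gc_node *gc_node,
                               const struct gc_block *gcblock)
{
	return free_size + ((gc_node->flash_offset & ~3u) - gcblock->offset);
}
```

With a byte-granular metric like this, GC can be triggered a little at a time as the threshold is approached, instead of stalling a writer for a whole eraseblock once nr_free_blocks hits 4.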

Secondly, and more importantly, the garbage collection performance is 
sucking _hard_. The precise reason isn't clear. 

You appear to be under memory pressure here - or perhaps you're just running
a 2.4 kernel and the VM is b0rken :). Either way, the effect is the same -
each time GC does iget() and subsequently iput(), jffs2_clear_inode() is
called almost immediately to flush it. So the icache isn't helping you here
- each time you move a node you're having to go all the way through a full
read_inode() again.

Actually, it looks like you're not often doing a read_inode() for the same 
inode two or more times close together, anyway. So that's possibly not 
relevant - you'd need stuff to remain in the icache long-term for it to 
help you much.

The jffs2_get_ino_cache() function is going to suck quite hard here, if 
there are many inodes in the filesystem. The number of hash buckets is 
currently set to 1 - i.e. it's a big linked list which it trawls through 
each time. Increasing INOCACHE_HASHSIZE in include/linux/jffs2_fs_sb.h 
may improve that. 
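To illustrate why the bucket count matters: the sketch below is a simplified model of a hashed inode cache, not the actual jffs2_get_ino_cache() (the real structure lives in include/linux/jffs2_fs_sb.h and carries more fields). With INOCACHE_HASHSIZE at 1, every lookup walks one long chain; more buckets divide the average chain length accordingly.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define INOCACHE_HASHSIZE 16	/* was effectively 1 in the tree discussed */

/* Simplified stand-in for the JFFS2 inocache entry. */
struct ino_cache {
	uint32_t ino;
	struct ino_cache *next;
};

static struct ino_cache *inocache_list[INOCACHE_HASHSIZE];

/* Walk only the chain for this inode's bucket; with HASHSIZE == 1 this
 * degenerates into trawling a single big linked list on every call. */
static struct ino_cache *get_ino_cache(uint32_t ino)
{
	struct ino_cache *ic;

	for (ic = inocache_list[ino % INOCACHE_HASHSIZE]; ic; ic = ic->next)
		if (ic->ino == ino)
			break;
	return ic;
}
```

Since GC calls this for every node it examines, lookup cost is multiplied by the node count, which is why a filesystem with many inodes makes the single-bucket case hurt so much.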

Also, we're reading nodes from the medium every time we do this - and we 
may be taking a long time for that too. 

Can you build profiling in, then reset the profile data each time your test
program does a write? Once a write takes more than 60 seconds, print the
profile data instead of just resetting it. I'd like to know where it's
spending all the time.
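For reference, one way to drive that from userspace is the stock kernel profiler. This is a sketch assuming the kernel was booted with the `profile=2` command-line option (so /proc/profile is populated) and that readprofile(8) and a matching System.map are available; the test-program name is just a placeholder.

```shell
# Reset the profile counters before each write in the test program:
readprofile -r

# ...run one write from the test program (placeholder name)...
./jittertest

# If the write took >60s, dump the hottest kernel functions:
readprofile -m /boot/System.map | sort -rn | head -20
```

Hooking the reset/dump into the test program itself (e.g. via system()) around each write would give per-write profiles rather than a cumulative one.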

