
Re: Re: Node cache too large



On Tue, 12 Dec 2000, Dan wrote:
> It is clearly inefficient use of flash and RAM. JFFS is great

> Before JFFS can hit the "big-time" it needs work in this area.

Checkpointing will fix this, but I don't know if anybody is working on it.

I'm finally getting our Linux 2.4 system up and running, so I can
start looking at the JFFS code again... then I might be able to actually
contribute something beyond ramblings from the co-designer :) 

> I think one of the things on their list of things to do is
> being able to take a "bad erase unit" out of service. 
> This will require JFFS to become more "erase unit" aware.
> Which in-turn will make garbage collection easier, the file
> system does not have to be a contiguous chunk moving through
> flash. Garbage collection could de-frag these high node count
> files, then look for erase-units that can be freed with

Yeah, this is actually being worked on now by some people (I don't know
exactly who). It's not particularly difficult, but of course it requires
some knowledge of the JFFS internals.

As you note, having a block list enables better GC'ing with lots of
stationary data, better wear-levelling, managing bad blocks, etc.
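
Something like this is roughly what a block list would have to track per
erase unit (hypothetical names, not actual JFFS structures):

/* Hypothetical per-erase-block bookkeeping, just to illustrate what a
 * block list could track; none of these are real JFFS identifiers. */
#include <stdint.h>

struct eb_info {
        uint32_t offset;       /* start of this erase block on the flash */
        uint32_t used_size;    /* bytes occupied by valid (live) nodes */
        uint32_t dirty_size;   /* bytes of obsoleted nodes, reclaimable by GC */
        uint32_t erase_count;  /* for wear-levelling decisions */
        int      bad;          /* failed erase/write -> taken out of service */
        struct eb_info *next;  /* e.g. linked onto a clean/dirty/bad list */
};

GC could then pick the block with the most dirty space (or occasionally a
rarely-erased one, for wear-levelling) instead of just sweeping linearly
through the flash.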

GC already merges many small nodes into bigger ones.

Actually, this problem (bad GC performance with stationary data) is
orthogonal to the high node-count problem, which needs checkpointing so
that we don't have to keep the entire node tree in-core: when you need
memory, you just flush the structures, and rescan from the last checkpoint
when someone asks for a dentry you don't have.

We just need to figure out a good checkpoint record format that makes
on-demand scanning really fast :) 
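
Off the top of my head, a record would need roughly this kind of
information (purely a hypothetical layout, nothing that has been thought
through):

/* Hypothetical on-flash checkpoint record layout, just to give an idea of
 * what "fast on-demand scan" needs: enough per-inode summary that we can
 * skip scanning everything written before the checkpoint. */
#include <stdint.h>

#define CP_MAGIC 0x43504a46u   /* made-up magic number */

struct cp_inode_summary {
        uint32_t ino;
        uint32_t version;      /* highest node version covered */
        uint32_t isize;
        uint32_t nlink;
        uint32_t node_offset;  /* flash offset of the latest node, so a
                                * per-inode rescan can start right there */
} __attribute__((packed));

struct cp_record {
        uint32_t magic;        /* CP_MAGIC */
        uint32_t cp_version;   /* monotonically increasing, newest wins */
        uint32_t log_end;      /* flash offset the checkpoint covers up to */
        uint32_t num_inodes;   /* number of cp_inode_summary entries after */
        uint32_t crc;          /* over header + entries */
        /* struct cp_inode_summary entries[num_inodes]; */
} __attribute__((packed));

Mount (or an on-demand rescan) would then only need to find the newest
valid cp_record and scan the log from log_end onwards; everything before
that is already summarized in the entries.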

-BW

