Re: Node cache too large
I see what you mean.
My approach to solving (or at least lessening) the problem is the following:
- Use a slab cache to allocate/free jffs_node structures (saving a bit of RAM by
avoiding the rounding overhead of generic kmalloc() allocations).
- Make thread_should_wake() aware of the number of jffs_nodes allocated,
and trigger a GC pass when the number of jffs_nodes in the system is
excessive. GC will execute jffs_rewrite_data(), which should re-combine the
nodes of heavily fragmented files, thereby freeing jffs_node memory.
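To make the second point concrete, here is a minimal userspace sketch of the extended wake check. The structure, field names, and the JFFS_MAX_NODES threshold are all illustrative assumptions, not the real JFFS symbols; the actual limit would need tuning (or deriving from available RAM):

```c
#include <stdbool.h>

/* Hypothetical cap on in-RAM jffs_nodes; illustrative only. */
#define JFFS_MAX_NODES 4096UL

/* Simplified stand-in for the per-filesystem control structure. */
struct jffs_control_sketch {
	unsigned long num_nodes;   /* jffs_nodes currently allocated */
	unsigned long dirty_bytes; /* existing dirtiness criterion */
	unsigned long dirty_limit; /* dirtiness threshold for GC */
};

/* Extended wake check: wake the GC thread either on dirtiness
 * (as today) or when too many jffs_nodes are held in RAM. */
static bool thread_should_wake_sketch(const struct jffs_control_sketch *c)
{
	if (c->dirty_bytes >= c->dirty_limit)
		return true;
	return c->num_nodes > JFFS_MAX_NODES;
}
```

The point is just that node count becomes a second, independent trigger alongside the existing dirtiness check.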
Assuming I read the GC code (jffs_rewrite_data()) correctly: when called, it
will merge jffs_nodes together into chunks of up to PAGE_SIZE.
Do jffs experts agree?
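If that reading is right, the payoff is easy to quantify: after GC, a file needs at most ceil(file_size / PAGE_SIZE) nodes, no matter how many small writes created it. A tiny sketch (PAGE_SIZE hardcoded to the common 4096 for illustration):

```c
/* Typical PAGE_SIZE on x86; the real value is architecture-dependent. */
#define PAGE_SIZE_SKETCH 4096UL

/* If jffs_rewrite_data() merges nodes into chunks of up to PAGE_SIZE,
 * this is an upper bound on the nodes a file needs after GC. */
static unsigned long nodes_after_gc(unsigned long file_size)
{
	return (file_size + PAGE_SIZE_SKETCH - 1) / PAGE_SIZE_SKETCH;
}
```

So a log file built from thousands of one-line appends would collapse back to a handful of nodes, which is exactly the memory we want back.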
> Your example hits the Achilles' heel of JFFS. Each write
> creates a jffs_node both in RAM and in flash. By the authors'
> own admission garbage collection needs work.
> This is a good reason not to use JFFS for frequent logging.
> It is clearly inefficient use of flash and RAM. JFFS is great
> for Read-Only mounts and for Read-Write situations requiring
> the very occasional write (e.g. upgrading applications
> or config files, etc.).
> Before JFFS can hit the "big-time" it needs work in this area.
> It is still relatively new and I am sure they are accepting
> volunteers. :-)
> Dan McDonald
> The Late Night Software Shop
Colubris Networks (http://www.colubris.com)
To unsubscribe from this list: send the line "unsubscribe jffs-dev" in
the body of a message to email@example.com