
RAM usage, checkpointing, etc...

Some thoughts on what we really need to keep in-core... scribbled down 
before I forget what I was thinking about over the weekend.

I suspect we could get away with keeping in-core only a list of tuples of
 { inode_no, flash_offset, obsolete }
...one for each node on the flash. I forget now why I decided we needed the
'obsolete' info; more on that below.
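
Concretely, something like this; the struct name and field widths here are
just plucked out of the air for illustration, not anything in the tree:

#include <stdint.h>

/* One entry per node on the flash: 4 + 4 + 1 = 9 bytes before padding,
 * so the list stays fairly small even for a flash full of tiny nodes. */
struct raw_node_ref {
    uint32_t inode_no;     /* inode to which this node belongs */
    uint32_t flash_offset; /* where the node starts on the flash */
    uint8_t  obsolete;     /* set once a later node supersedes this one */
};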

Upon read_inode() for any given inode, we can fairly quickly read the 
headers of the nodes which are listed for that inode and rebuild the 
jffs_file structure. Actually the jffs_file struct doesn't need half the 
stuff we keep in it ATM, because that can be kept in the inode itself.
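
A sketch of what that rebuild might look like, with the flash read stubbed
out as a memcpy and every name invented so it stands alone; this is not the
real read_inode() path, just the shape of it:

#include <stddef.h>
#include <stdint.h>
#include <string.h>

struct raw_node_ref {
    uint32_t inode_no;
    uint32_t flash_offset;
    uint8_t  obsolete;
};

/* Hypothetical on-flash node header: just enough to rebuild the
 * per-file node list when the inode is read in. */
struct node_header {
    uint32_t inode_no;
    uint32_t version;
    uint32_t data_offset; /* offset of this node's data within the file */
    uint32_t data_len;
};

/* Stand-in for a flash read of one node header at the given offset. */
void read_node_header(const uint8_t *flash, uint32_t off,
                      struct node_header *hdr)
{
    memcpy(hdr, flash + off, sizeof(*hdr));
}

/* Walk the global node list, pick out the live nodes belonging to this
 * inode and hand each header to a callback which rebuilds whatever
 * per-file structure we end up keeping. */
void rebuild_inode(const uint8_t *flash, const struct raw_node_ref *refs,
                   size_t nrefs, uint32_t ino,
                   void (*add_node)(const struct node_header *hdr,
                                    uint32_t flash_offset, void *priv),
                   void *priv)
{
    struct node_header hdr;
    size_t i;

    for (i = 0; i < nrefs; i++) {
        if (refs[i].inode_no != ino || refs[i].obsolete)
            continue;
        read_node_header(flash, refs[i].flash_offset, &hdr);
        add_node(&hdr, refs[i].flash_offset, priv);
    }
}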

This means that the GC code will need to do an iget() on the inode for
each jffs_node it has to GC. But that's OK: if we have enough RAM and we're
GC'ing hard, they'll stay in core. Otherwise they'll get destroyed under
memory pressure and the GC code will just have to recreate them. We get to
deal nicely with VM pressure rather than just allocating shedloads of space
and trying to throw together some heuristics for keeping it small.
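
The per-block GC loop then ends up shaped roughly like the sketch below.
gc_iget()/gc_iput() are just stand-ins for the real iget()/iput(), and
gc_copy_node() stands in for the step that writes the node's data out to a
fresh location; none of these names exist anywhere yet:

#include <stddef.h>
#include <stdint.h>

struct raw_node_ref {
    uint32_t inode_no;
    uint32_t flash_offset;
    uint8_t  obsolete;
};

struct gc_inode; /* plays the role of the in-core struct inode */

/* Stand-ins, to be filled in with the real VFS calls and flash I/O. */
struct gc_inode *gc_iget(uint32_t ino);
void gc_iput(struct gc_inode *inode);
int gc_copy_node(struct gc_inode *inode, uint32_t flash_offset);

/* Garbage-collect the live nodes in one victim erase block: look up the
 * owning inode (recreating it if the VM threw it out under memory
 * pressure), copy the node forward, then drop the reference again. */
int gc_erase_block(struct raw_node_ref *refs, size_t nrefs)
{
    size_t i;

    for (i = 0; i < nrefs; i++) {
        struct gc_inode *inode;
        int err;

        if (refs[i].obsolete)
            continue; /* nothing to copy; it goes away with the erase */

        inode = gc_iget(refs[i].inode_no);
        if (!inode)
            return -1;

        err = gc_copy_node(inode, refs[i].flash_offset);
        gc_iput(inode);
        if (err)
            return err;

        refs[i].obsolete = 1; /* old copy is now superseded */
    }
    return 0;
}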

I still can't remember why I wanted the 'obsolete' part. It'll help us
recycle inode numbers if we ever burn through 4 milliard (2^32) inode
numbers in the lifetime of a filesystem, but I don't think that's likely to
happen in real life. (We'd need it because we can reuse an inum only when
all of that inode's obsolete nodes have actually been erased from the
flash, not just when they're all marked as deleted. So we have to track
them even after they're dead.)
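
With the list above, that check is at least cheap to express; a sketch,
again with made-up names:

#include <stddef.h>
#include <stdint.h>

struct raw_node_ref {
    uint32_t inode_no;
    uint32_t flash_offset;
    uint8_t  obsolete;
};

/* An inum is only safe to hand out again once no node on the flash still
 * carries it. Nodes that are merely marked obsolete are still physically
 * present until their erase block is erased, which is exactly why the
 * obsolete entries have to stay in the list. */
int inum_safe_to_reuse(const struct raw_node_ref *refs, size_t nrefs,
                       uint32_t ino)
{
    size_t i;

    for (i = 0; i < nrefs; i++)
        if (refs[i].inode_no == ino)
            return 0; /* still on the flash, obsolete or not */
    return 1;
}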

--
dwmw2


