
Re: max_chunk_size



On Mon, 10 Jul 2000, Nick Ivanter wrote:
> Does anybody remember (or know) why the maximum size of a data chunk
> is limited to half of a sector size?
> 
> By the way, in jffs_file_write() there is no check that the data
> being written does not exceed max_chunk_size. So if write() is called
> with a buffer larger than 32K, no splitting into multiple nodes is
> performed; instead one large node is written. That causes the node to
> be rejected when the filesystem is mounted the next time.

It's a heuristic value; Finn knows why it is exactly that, but he's
working through a heavy backlog of mail right now :) I expect him to
explain everything about JFFS now that he's back :)

Basically a node cannot be of unbounded size, because when the GC needs
to move part of it, you cannot modify the old in-flash version to be
smaller (which you would need to do, since you're about to erase a
sector containing part of it).
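
To make that concrete, here is a rough sketch (not the actual GC code,
and the numbers are made up) of why the whole node has to be rewritten
whenever the sector being erased contains any part of it:

/* Minimal sketch (not the real JFFS GC code): when the sector being
 * erased contains any part of a node, the whole node has to be
 * rewritten at the head of the log, because the old on-flash copy
 * cannot be shrunk in place.  The worst-case amount copied for one
 * node is therefore bounded by the maximum node size. */
#include <stdio.h>

struct node {
    unsigned int offset;    /* start of node data in flash */
    unsigned int size;      /* data length, <= max_chunk_size */
};

/* Nonzero if erasing [sec_start, sec_start + sec_size) forces this
 * node to be moved. */
static int node_must_move(const struct node *n,
                          unsigned int sec_start, unsigned int sec_size)
{
    unsigned int node_end = n->offset + n->size;
    unsigned int sec_end  = sec_start + sec_size;

    return n->offset < sec_end && node_end > sec_start;
}

int main(void)
{
    /* Hypothetical numbers: 64 KB sector, 32 KB node straddling its
     * end. */
    struct node n = { 0xFF00, 0x8000 };

    if (node_must_move(&n, 0x0000, 0x10000))
        printf("GC must rewrite all %u bytes of the node, even though "
               "only %u of them sit in the erased sector\n",
               n.size, 0x10000 - n.offset);
    return 0;
}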

So a node's max size needs to be tuned together with the GC mechanism.
There is a tradeoff between collect sizes and node sizes which is quite
hairy, I guess. Suppose you have 2 KB sectors instead: should you make
the GC work with those small sizes (requiring more fine-grained
splitting of nodes, wasting more space on node info), or should you
tell it to collect multiple adjacent sectors at a time (not really
taking advantage of the smaller sectors)?
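
A quick back-of-the-envelope calculation shows the header-overhead side
of that tradeoff; the ~70-byte node header below is an assumed figure
for illustration only, not the exact on-flash header size:

/* Rough sketch: smaller nodes mean a larger fraction of the log is
 * spent on per-node headers.  The header size is an assumption. */
#include <stdio.h>

int main(void)
{
    const double header = 70.0;               /* assumed header, bytes */
    const unsigned int chunks[] = { 512, 2048, 32768 };
    unsigned int i;

    for (i = 0; i < sizeof(chunks) / sizeof(chunks[0]); i++) {
        double overhead = header / (header + chunks[i]) * 100.0;
        printf("chunk %6u bytes -> %4.1f%% of the log goes to headers\n",
               chunks[i], overhead);
    }
    return 0;
}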

It should be mentioned that the sector size of the flash is really
_only_ relevant for the GC mechanism; the actual reading from and
writing to the log has nothing to do with the partitioning into sectors
(other than the load balancing that we talked about before).
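
As for the missing splitting in jffs_file_write() that Nick points out,
the fix would conceptually look something like this (just a sketch;
write_one_node() is a made-up stand-in, not part of the real JFFS
code):

/* Sketch of capping every node's data at max_chunk_size and looping
 * until the whole user buffer has been written out. */
#include <stddef.h>

#define MAX_CHUNK_SIZE (32 * 1024)  /* illustrative value from the thread */

/* Stand-in for whatever creates and writes one node; assumed here. */
extern int write_one_node(const char *buf, size_t len, long offset);

int write_split(const char *buf, size_t count, long offset)
{
    while (count > 0) {
        size_t chunk = count > MAX_CHUNK_SIZE ? MAX_CHUNK_SIZE : count;
        int err = write_one_node(buf, chunk, offset);

        if (err)
            return err;
        buf += chunk;
        offset += chunk;
        count -= chunk;
    }
    return 0;
}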

-Bjorn