
Re: max_chunk_size



On Mon, 10 Jul 2000, Bjorn Wesen wrote:

> On Mon, 10 Jul 2000, Nick Ivanter wrote:
> > Does anybody remember (or know) why the maximum size of a data chunk
> > is limited to half of a sector size?

This is something I would really like some feedback on.

First of all, there must be some kind of maximum size of a data
chunk. Suppose you have a flash device of size X and a data chunk
of size 0.51 * X has been written to it. Then there is no
possibility to garbage collect any sector that chunk occupies:
the GC would have to copy the whole chunk to free space before
erasing, and at most 0.49 * X of the device can ever be free.
I think everybody can agree on that.

So there must be an upper limit on the size of a chunk of data,
and one such limit could be the size of one sector. In that case
there always have to be two sectors of free space. (The chunks
of data do not have to be aligned to sector boundaries.)

In order to shrink the amount of flash that has to be kept free,
we also have to shrink the size of the largest allowed chunk of
data.

The reason why the maximum size of a chunk of data is 2^15 bytes
is that the smallest part of the flash device that has to be kept
unused can then shrink to 3 * 2^15 bytes.

Does this reasoning make any sense?
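
To put concrete numbers on the reasoning above, here is a minimal
sketch of the size relation I am describing. The macro and function
names are invented for illustration; they are not the identifiers
used in the JFFS sources.

    #define MAX_CHUNK_SIZE  (1 << 15)             /* 32768 bytes */
    #define MIN_FREE_SPACE  (3 * MAX_CHUNK_SIZE)  /* 98304 bytes */

    /* A node larger than MAX_CHUNK_SIZE could never be moved by the
     * garbage collector, so writes have to be split before that point. */
    static int chunk_size_ok(unsigned long size)
    {
            return size <= MAX_CHUNK_SIZE;
    }

    /* The GC needs somewhere to copy live nodes before it can erase a
     * sector, so at least MIN_FREE_SPACE bytes must always stay unused. */
    static int enough_free_space(unsigned long free_bytes)
    {
            return free_bytes >= MIN_FREE_SPACE;
    }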


> > By the way, in jffs_file_write() there is no check that the data being
> > written does not exceed max_chunk_size. So if write() is called with a
> > buffer larger than 32K, no splitting into multiple nodes is performed;
> > instead one large node is written. That causes the node to be rejected
> > when the filesystem is mounted the next time.

Okay. Correct me if I'm wrong, but I think the buffer that
jffs_file_write() is passed is the size of a page, or possibly two
pages. I never thought much about it since I never had any problems
with it. On the other hand, there is code in jffs_rewrite_data()
that prevents too-large data chunks from being written.
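
To make the missing check concrete, a splitting loop along these
lines would be enough. write_one_node() is a hypothetical stand-in
for whatever actually emits a single node to the log; it is not a
function from the JFFS sources.

    #define MAX_CHUNK_SIZE (1 << 15)

    /* Stand-in for the real node-writing path; it just pretends to
     * succeed so the sketch is self-contained. */
    static int write_one_node(const unsigned char *buf, unsigned long len,
                              unsigned long offset)
    {
            (void) buf; (void) len; (void) offset;
            return 0;
    }

    static long split_write(const unsigned char *buf, unsigned long count,
                            unsigned long pos)
    {
            unsigned long written = 0;

            while (written < count) {
                    unsigned long len = count - written;

                    if (len > MAX_CHUNK_SIZE)
                            len = MAX_CHUNK_SIZE;

                    /* Each node gets at most MAX_CHUNK_SIZE bytes of
                     * data, so the garbage collector can always move it. */
                    if (write_one_node(buf + written, len, pos + written) < 0)
                            return -1;

                    written += len;
            }
            return written;
    }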


> It's a heuristic value, Finn has the clue on why it is exactly that but
> he's on a heavy backlog of mails right now :) I expect him to explain
> everything about JFFS now that he's back :)

I'm sorry I went on vacation. :)


> Basically a node can not be of infinite size because when the GC needs to
> move part of it, you cannot modify the in-flash old version to be smaller
> (which you would need to since you're about to erase a sector containing
> part of it).
> 
> So a node's max size needs to be tuned while tuning the GC mechanism.
> There is a tradeoff between collect sizes and node-sizes which is quite
> hairy I guess.

Yes. It is hairy. I have left out some details in the reasoning
above. For instance, the garbage collector, to some extent, merges
a number of data chunks into a few larger ones. But the GC can only
merge data chunks as long as there is enough room for doing so. I
have also been thinking of adding an extra criterion for triggering
a garbage collect: too much metadata, for example in a log file
(many small chunks).
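
Purely as an idea sketch (nothing like this exists in the code, and
the structure, field names and threshold below are invented), such a
trigger could look roughly like this:

    /* Invented accounting structure for illustration only. */
    struct gc_stats {
            unsigned long data_bytes;    /* file data stored in nodes   */
            unsigned long header_bytes;  /* space taken by node headers */
    };

    /* Example trigger: collect when node headers make up more than a
     * quarter of the space occupied by nodes.  The 1/4 threshold is
     * just an example, not a tuned value. */
    static int too_much_metadata(const struct gc_stats *st)
    {
            unsigned long used = st->data_bytes + st->header_bytes;

            if (used == 0)
                    return 0;
            return st->header_bytes * 4 > used;
    }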


> Suppose you have 2 kB sector sizes instead - should you make
> the GC work with those small sizes (requiring more fine-grained splitting
> of nodes, wasting more space for node info), or should you tell it to try
> to collect multiple adjacent sectors at a time (not really taking
> advantage of the smaller sectors)?
> 
> It should be mentioned that the sector-size of the flash is _only_
> relevant really for the GC mechanism - the actual reading and writing to
> the log has nothing to do with the partitioning into sectors (other than
> the load-balancing that we talked about before).
> 
> -Bjorn
> 

/Finn