
Re: max_chunk_size



On Tue, 11 Jul 2000, Nick Ivanter wrote:

> Finn Hakansson wrote:
> 
> >
> >
> > The reason why the maximum size of a chunk of data is 2^15 bytes
> > is that we can then shrink the smallest unused part of the
> > flash device to 3 * 2^15 bytes.
> >
> 
> Did I understand correctly that the maximum size of a chunk should actually
> depend on the total size of the device rather than on the size of an
> erasable block?

A good question. To me, both the total size of the device and its
sector size matter in this decision. I cannot come up with a good
formula or definition, though, and I'd be more than happy if someone
could give me feedback on how to do this. My instinct told me that
today's solution was fair enough. :)
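
For what it's worth, the relationship as it stands today is just a
pair of constants. Here is a minimal sketch to illustrate the point;
the names below are made up and are not the actual JFFS identifiers:

/* Illustrative constants, not the real JFFS source. */
#define MAX_CHUNK_SIZE  (1 << 15)             /* 32768-byte data nodes */
#define MIN_FREE_SPACE  (3 * MAX_CHUNK_SIZE)  /* 3 * 2^15 bytes kept
                                                 unused so a garbage
                                                 collect always has room
                                                 to rewrite live nodes */

Note that neither constant is derived from the total device size or
from the sector size, which is exactly the question raised above.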


> > > > By the way, in jffs_file_write() there is no check that the data being
> > > > written does not exceed max_chunk_size. So if write() is called with a
> > > > buffer larger than 32K, no splitting into multiple nodes is performed;
> > > > instead one large node is written. That causes the node to be rejected
> > > > when the filesystem is mounted the next time.

> I have written a simple test program that creates a file and writes 64K of
> garbage to it in a single write() call. The write succeeds, but when the
> filesystem is mounted the next time, that node is rejected and the file
> appears to have zero length.

Okay. Then one should compute how large the chunk of data should be
in jffs_file_write(), set that size in the raw_inode struct and
return the number of bytes actually written. The calling process
should then make another write() with the rest of the data. How
about that? A sketch is below.
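
In jffs_file_write() that would look roughly like this. This is only
a sketch with simplified names and a simplified signature, not the
actual kernel code:

/* Sketch: clamp a write to one node's worth of data and return a
 * short count; the calling process loops and writes the rest.
 */
static ssize_t jffs_file_write(struct file *filp, const char *buf,
                               size_t count, loff_t *ppos)
{
        if (count > max_chunk_size)
                count = max_chunk_size;  /* at most one node per call */

        /* ... fill in the raw_inode with dsize = count and write
           exactly one node to the flash ... */

        *ppos += count;
        return count;  /* short write: caller retries with the rest */
}

A short count is a perfectly legal result of write(), so a correct
caller will simply issue another write() for the remaining data.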


> [...]

> When I am thinking about the GC algorithm in JFFS, I ask myself whether it
> is really reasonable to start it so often. Currently GC is run every time
> the amount of dirty flash memory becomes greater than a sector size. If
> that dirty space is far from the head of the log, it causes several
> unnecessary writes and erases. Doesn't this lead to rapid wear of the
> flash?

You are right. I have to apologize for this poor algorithm. It is a
simple algorithm that let us ship our first Linux product on time.
This is clearly a thing that could be tuned. I thought about many
different kinds of algorithms for quite some time before I chose
this simple one.

One reason for the way the algorithm is designed today is that we,
in Axis's products, have little space left for the JFFS partition
and thus we had to make sure that the flash was as clean as possible
all the time. So, as soon as we can do a garbage collect, we do one.
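
The trigger itself is nothing more than a dirty-space threshold.
Sketched with illustrative names (not the actual JFFS identifiers):

/* Eager policy: collect as soon as one sector's worth of dirty
 * space has accumulated, so the flash stays as clean as possible.
 */
while (fmc->dirty_size >= fmc->sector_size)
        jffs_garbage_collect(fmc);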

Also, I thought that if someone was writing a large file to the
device, the write shouldn't be interrupted by a slow garbage collect
in the middle.


> Another thing
> that I cannot even imagine the reason for is the attempt to start a GC from
> the write_super() function while it is already done in jffs_insert_node().

That is because the system could have been power cycled in the middle
of a write, or perhaps in the middle of another garbage collect, and
thus a new garbage collect must be performed. It is not an expensive
call if there isn't anything to garbage collect.

It is a way to ensure that the system starts up "fresh".
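
Sketched, again with illustrative names rather than the actual
source, the call from write_super() amounts to this:

static void jffs_write_super(struct super_block *sb)
{
        /* get_jffs_control() is a hypothetical helper standing in
           for however the per-mount state is reached. */
        struct jffs_control *c = get_jffs_control(sb);

        /* Cheap when the flash is clean: the threshold test fails
           and we return immediately. Only after an interrupted
           write or garbage collect is there real work to do. */
        if (c->fmc->dirty_size >= c->fmc->sector_size)
                jffs_garbage_collect(c);
}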


> Thanks,
> Nick

Thank _you_!
/Finn