
Re: max_chunk_size



Finn Hakansson wrote:

>
>
> The reason why the maximum size of a chunk of data is 2^15 bytes
> is that we can then shrink the smallest unused part of the
> flash device to 3 * 2^15 bytes.
>

Did I understand correctly that the maximum size of a chunk should actually
depend on the total size of the device rather than on the size of an
erasable block?

>
>
> > > By the way, in jffs_file_write() there is no check that the data being
> > > written does not exceed max_chunk_size. So if write() is called with a
> > > buffer larger than 32K, no splitting into multiple nodes is performed;
> > > instead one large node is written. That causes the node to be rejected
> > > when mounting the filesystem the next time.
>
> Okay. Correct me if I'm wrong, but I think that the buffer that
> jffs_file_write() is passed has the size of a page or possibly two
> pages.

The size of the buffer passed to jffs_file_write() is up to the user
program that calls write(). It is true that the programs from the GNU
fileutils package (cp, mv, etc.) allocate page-sized buffers, but other
programs may allocate larger ones.

> Never thought that much of it since I never had any problems
> with it.

I have written a simple test program that creates a file and writes 64K of
garbage to it in a single write() call. The write succeeds, but when the
filesystem is mounted the next time the node is rejected and the file
appears to have zero length.
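
For completeness, here is a minimal sketch of such a test program; the
mount point /mnt/jffs and the fill pattern are arbitrary assumptions:

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    #define BUF_SIZE 65536          /* 64K, twice max_chunk_size */

    int main(void)
    {
            char *buf = malloc(BUF_SIZE);
            int fd;

            if (!buf) {
                    perror("malloc");
                    return 1;
            }
            memset(buf, 0xA5, BUF_SIZE);    /* arbitrary garbage */

            /* /mnt/jffs is an assumed JFFS mount point */
            fd = open("/mnt/jffs/testfile",
                      O_WRONLY | O_CREAT | O_TRUNC, 0644);
            if (fd < 0) {
                    perror("open");
                    return 1;
            }

            /* One write() call larger than max_chunk_size (32K);
             * jffs_file_write() does not split it into several nodes. */
            if (write(fd, buf, BUF_SIZE) != BUF_SIZE)
                    perror("write");

            close(fd);
            free(buf);
            return 0;
    }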

> On the other hand there is code in jffs_rewrite_data() that
> prevents too large data chunks from being written.

jffs_rewrite_data() is, however, not called from jffs_file_write(), so that
check does not help on the ordinary write path.
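
As an illustration, a splitting loop of roughly the following shape would
be needed in the write path. This is only a sketch under my assumptions;
write_one_node() is a made-up helper, not a real JFFS function:

    #include <stddef.h>

    #define MAX_CHUNK_SIZE 32768    /* 2^15, as discussed above */

    /* Hypothetical stand-in for the code path that writes one node. */
    extern int write_one_node(const char *buf, long pos, size_t len);

    /* Split an arbitrarily large user write into nodes of at most
     * MAX_CHUNK_SIZE bytes each. */
    int split_write(const char *buf, long pos, size_t count)
    {
            while (count > 0) {
                    size_t chunk = count > MAX_CHUNK_SIZE
                                   ? MAX_CHUNK_SIZE : count;
                    int err = write_one_node(buf, pos, chunk);

                    if (err < 0)
                            return err;
                    buf += chunk;
                    pos += chunk;
                    count -= chunk;
            }
            return 0;
    }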

>
>
> data chunks into a few larger ones. But the GC can only merge data chunks
> as long as there is enough room for doing so. I have also been thinking
> of adding an extra criterion for triggering a garbage collect: too much
> meta data, for example in a log file (many small chunks).
>
>

When I think about the GC algorithm in JFFS, I ask myself whether it is
really reasonable to start it so often. Currently the GC is run every time
the amount of dirty flash memory becomes greater than a sector size. If that
dirty space is far from the head of the log, collecting it causes several
unnecessary writes and erases. Doesn't this lead to rapid wear of the flash?
Another thing that I cannot even imagine the reason for is the attempt to
start a GC from the write_super() function, while it is already done in
jffs_insert_node().
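
In other words, as I read the code, the trigger amounts to something like
the following sketch (all names here are hypothetical, not the actual JFFS
source):

    /* Sketch of the current trigger condition as I understand it: the
     * GC starts as soon as the accumulated dirty space exceeds one
     * erasable sector, regardless of where in the log that dirty space
     * lies. All names are hypothetical. */
    struct fs_info {
            unsigned long dirty_size;   /* bytes of obsoleted data */
            unsigned long sector_size;  /* one erasable block */
    };

    static int gc_should_trigger(const struct fs_info *fs)
    {
            return fs->dirty_size > fs->sector_size;
    }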



Thanks,
Nick