Does anybody remember (or know) why the maximum size of a data chunk is
limited to half of a sector size?
By the way, jffs_file_write() does not check that the data being written
stays within max_chunk_size. So if write() is called with a buffer larger
than 32K, the data is not split across multiple nodes; instead a single
oversized node is written. That node is then rejected the next time the
filesystem is mounted.