
Re: JFFS2 problems




David and Jorn, thank you for your replies.
And sorry for my slow reply; it is difficult for me to write in
English...



> On Fri, 2004-02-20 at 20:13 +0900, Shuichi Mitarai wrote:
> > Hi, all.
> 
> Hello Mitarai-san, thank you for your suggestions.
> 
> > *** 1. Mount time ***
> > 	<problem>
> > 	- It's important for consumer electronics appliances to
> > 	  reduce boot time, and mount time is part of that. JFFS2
> > 	  spends a lot of time here, because it scans all of
> > 	  its partitions.
> > 
> > 	<suggestion>
> > 	+ To avoid scanning all partitions, all the JFFS2 data
> > 	  held in RAM is backed up to ROM, but only when the OS
> > 	  halts cleanly. Otherwise, when the OS halts
> > 	  abnormally, all JFFS2 partitions are scanned, as in
> > 	  the current JFFS2.
> 
> That's a useful suggestion which has been made before. In fact, you can
> do something like this in a way that is useful even after an _unclean_
> shutdown, and without spending much time writing such a checkpoint
> during unmount.
> 
> Instead of a single 'dump' with information for the whole file system,
> each eraseblock can have such a data structure at the end of the
> eraseblock. Then, when mounting the file system, the JFFS2 code only
> needs to read in the information from the dump at the end of each
> eraseblock; it's not necessary to scan the whole of the flash as we do
> at the moment. 

Yes, I agree with your idea. But can an eraseblock really hold such a
data structure at the _end_ of the eraseblock? If so, I would like to
know how you think the replacement of that structure should be handled.
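To make the idea concrete, here is a rough sketch of what such a per-eraseblock summary might look like. This is purely my own illustration; the structure, the field names, and the reserved-area size are hypothetical, not existing JFFS2 code:

```c
#include <stdint.h>

/* Hypothetical layout of a per-eraseblock summary, written as the last
 * node in the block on clean unmount.  Structure and field names are my
 * own sketch, not existing JFFS2 code. */
struct sum_record {
	uint32_t offset;   /* node offset within the eraseblock       */
	uint32_t totlen;   /* total length of the node                */
	uint32_t inode;    /* owning inode number                     */
	uint32_t version;  /* node version, to spot obsolete nodes    */
};

struct sum_header {
	uint32_t magic;    /* identifies a valid summary              */
	uint32_t count;    /* number of records that follow           */
	uint32_t crc;      /* CRC over the records                    */
};

/* How many node records fit if 'reserved' bytes at the end of an
 * eraseblock are set aside for the summary? */
unsigned sum_capacity(unsigned reserved)
{
	if (reserved < sizeof(struct sum_header))
		return 0;
	return (reserved - sizeof(struct sum_header))
		/ sizeof(struct sum_record);
}
```

On mount, reading just this reserved tail of each eraseblock would replace the full scan; if the summary is missing or its CRC fails (e.g. after an unclean shutdown), that one block would fall back to a full scan, as the current JFFS2 does for everything.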


> 
> The implementation of this is high on the TODO list, but has not yet
> been requested by a customer.
> 
> > *** 2. GC algorithm ***
> > 	<problem>
> > 	- The block which will be garbage collected, depends
> > 	  on probability, which is defined by
> > 	  jffs2_find_gc_block(), like following.
> > 		<probability>		<selected list>
> > 		50/128			erasable_list
> > 		60/128			very_dirty_list
> > 		16/128			dirty_list
> > 		otherwise		clean_list
> > 					dirty_list
> > 					    ...
> > 	  If erasable_list, very_dirty_list and dirty_list
> > 	  are almost empty, it is difficult to get an
> > 	  erasable block even if writable regions exist
> > 	  on the partition. In that case, the function almost
> > 	  always selects clean_list. This means the GC can't
> > 	  make an erasable block, which causes the write
> > 	  throughput to fall.
> 
> If this is true, then it is an error. We should _definitely_ be
> favouring the dirtier lists, as you say. 
> 
> But read the code again carefully. Consider the case where the
> erasable_list is empty, the very_dirty_list is empty, and the dirty_list
> has a single block. Do you not agree that we will pick the block from
> the dirty_list with a probability of 127/128?
> 

Sorry, my example was bad.
Based on my own review of the source code, I think
jffs2_find_gc_block() has the following problems.

example:
	1) If erasable_list has only a few blocks and clean_list
	   holds all the others, jffs2_find_gc_block()
	   selects a block from clean_list with a probability
	   of 78/128. This can decrease write throughput,
	   because the function keeps selecting blocks from
	   clean_list with high probability, so the GC must
	   run many times to produce an erasable block.

	2) When clean_list holds all blocks except the GC
	   reserved ones (for example, when 512-byte data
	   items are written to a 10MB JFFS2 partition),
	   the GC must pick blocks from clean_list many,
	   many times to produce an erasable block. From the
	   user's point of view this can look like an infinite
	   loop. So JFFS2 should not always allow a write,
	   even if the total available area, including regions
	   that still need to be garbage collected, is large
	   enough for the data. Some criterion other than
	   simply summing the available area should be
	   considered.

But I haven't confirmed these on my reference machine yet,
so I will send a report once I have confirmed them.
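For reference, the selection logic I am describing could be sketched like this. It is a simplified reconstruction from the probability table quoted above, not the real jffs2_find_gc_block(); 'n' stands in for the jiffies-based random value:

```c
/* Simplified sketch of the weighted GC list choice, reconstructed from
 * the probability table quoted above.  NOT the actual kernel function. */
enum gc_pick { PICK_NONE, PICK_ERASABLE, PICK_VERY_DIRTY,
	       PICK_DIRTY, PICK_CLEAN };

struct list { int nonempty; };

/* n plays the role of jiffies % 128.  An empty list falls through to
 * the next candidate, so e.g. with only dirty_list populated, it is
 * chosen whenever n < 126. */
enum gc_pick pick_gc_list(int n, const struct list *erasable,
			  const struct list *very_dirty,
			  const struct list *dirty,
			  const struct list *clean)
{
	if (n < 50 && erasable->nonempty)
		return PICK_ERASABLE;
	if (n < 110 && very_dirty->nonempty)  /* 50/128 + 60/128 */
		return PICK_VERY_DIRTY;
	if (n < 126 && dirty->nonempty)       /* + 16/128        */
		return PICK_DIRTY;
	if (clean->nonempty)                  /* remaining 2/128, or fallback */
		return PICK_CLEAN;
	if (dirty->nonempty)
		return PICK_DIRTY;
	return PICK_NONE;
}
```

In this sketch, the scenario David describes (only dirty_list populated) does pick dirty_list for almost all values of n, while my example 1 (only erasable_list and clean_list populated) sends every n >= 50 to clean_list; whether the real code behaves exactly like this is what I still need to confirm on my reference machine.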


> There _are_ improvements which can be made to the algorithm, but I don't
> think it's as fundamentally broken as you suggest. Of course, feel free
> to demonstrate that I'm wrong, and I will apologise and fix it :)
> 
> > *** 3. Difficulty of estimating the available size ***
> > 	<suggestion>
> > 	+ Node size is defined as a constant, for example 512B.
> > 	  Then we don't need to care about obsolete nodes,
> > 	  don't need the fraglist structure, and can estimate
> > 	  the size we need when data is written.
> 
> The problem with this is that fixed-size blocks will lose you the
> benefit of compression. What would be the point in knowing precisely how
> much data can be squeezed onto the flash, if in doing so you halve the
> available space?
> 

For example, consider the case where we must always store
100 picture files, each 80KB in size. How much ROM is
needed to meet that specification? I can't estimate the
ROM size with the current JFFS2, because c->free_size (in
struct jffs2_sb_info) cannot be used directly without
garbage collecting first. That is why I suggested fixed-size
nodes. This does lose the benefit of compression, but it's
important for me to be able to estimate the ROM size
correctly before implementing applications.
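To show the kind of estimate I mean, here is a back-of-the-envelope worst-case calculation for the 100 x 80KB example. The fixed 512-byte data-node size and the 68-byte per-node header are both illustrative assumptions, not values taken from JFFS2:

```c
/* Worst-case flash usage with hypothetical fixed-size data nodes:
 * every file needs ceil(fsize / node_data) nodes, each costing
 * node_data bytes of payload plus hdr bytes of header on flash.
 * The node size and header size are assumptions for illustration. */
unsigned long flash_needed(unsigned long nfiles, unsigned long fsize,
			   unsigned long node_data, unsigned long hdr)
{
	unsigned long nodes_per_file = (fsize + node_data - 1) / node_data;
	return nfiles * nodes_per_file * (node_data + hdr);
}
```

With 100 files of 80KB each, 512-byte nodes and a 68-byte header, this gives 160 nodes per file and roughly 9.3MB in total, to which the GC-reserved eraseblocks would still have to be added. The point is that with fixed-size nodes this number can be computed up front, before any application is written.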



If I customize the current JFFS2 with these ideas, I'll send
a patch some day.

Thank you all :-)

> -- 
> dwmw2


To unsubscribe from this list: send the line "unsubscribe jffs-dev" in
the body of a message to majordomo@xxxxxxx.com