
JFFS2 problems

Hi, all.

We are evaluating JFFS2 for use in consumer electronics
appliances. We use the Innovator board as a reference
platform. Please see below for the Innovator hardware spec.

	CPU	:168MHz
	Mem	:32MB
	TC	:98MHz
	F-ROM	:32MB (NOR, Intel)
	JFFS2	:linux/fs/jffs2/TODO - v 1.9
		(I'm not sure of the exact version, because the
		 files only carry CVS revision numbers...)

In particular, we are testing the wear-leveling behavior of
JFFS2. As a result of these evaluations, we found the
problems described below.

*** 1. Mount time ***
	- Reducing boot time is important for consumer
	  electronics appliances, and mount time is part of
	  it. JFFS2 spends a lot of time at mount because it
	  scans all of its partitions.

	+ To avoid scanning all partitions, the in-RAM JFFS2
	  state could be backed up to ROM, but only when the
	  OS halts cleanly. If the OS halts abnormally, the
	  full scan of all JFFS2 partitions would still
	  occur, as in the current JFFS2.
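	  As a rough illustration of the idea (the structure,
	  names and marker value below are hypothetical, not
	  the real JFFS2 on-flash format): a clean-halt marker
	  plus a checksum, written on a correct shutdown and
	  invalidated on the first write after mount, lets the
	  mount path restore its state instead of scanning.

```c
/* Sketch only: a clean-unmount marker guarding a saved copy of the
 * in-RAM state. All names and the magic value are made up for
 * illustration; a real implementation would serialize the block
 * lists and use a proper CRC32. */
#include <stdint.h>

#define CLEAN_MAGIC 0x4A465332u  /* arbitrary "clean halt" marker */

struct saved_state {
    uint32_t magic;   /* CLEAN_MAGIC only after a clean halt */
    uint32_t crc;     /* guards against a partial write */
    /* ... serialized block lists would follow here ... */
};

/* Crude checksum standing in for a real CRC32. */
static uint32_t checksum(const struct saved_state *s)
{
    return s->magic ^ 0xFFFFFFFFu;
}

/* Called at mount: nonzero means we may skip the full scan. */
int state_is_valid(const struct saved_state *s)
{
    return s->magic == CLEAN_MAGIC && s->crc == checksum(s);
}

/* Called on clean unmount, before the final halt. */
void mark_clean(struct saved_state *s)
{
    s->magic = CLEAN_MAGIC;
    s->crc = checksum(s);
}

/* Called as the very first flash write after mount, so that a
 * crash afterwards forces the fallback scan on the next boot. */
void mark_dirty(struct saved_state *s)
{
    s->magic = 0;
}
```

	  An abnormal halt leaves the marker invalid, so the
	  next mount falls back to the full scan exactly as
	  today.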

*** 2. GC algorithm ***
	- The block to be garbage collected is chosen
	  probabilistically by jffs2_find_gc_block(), as
	  follows.
		<probability>		<selected list>
		50/128			erasable_list
		60/128			very_dirty_list
		16/128			dirty_list
		otherwise		clean_list
	  If erasable_list, very_dirty_list and dirty_list
	  are almost empty, it is difficult to obtain an
	  erasable block even if writable regions exist on
	  the partition. In this case, the function almost
	  always selects clean_list. This means the GC can't
	  produce an erasable block, which degrades write
	  throughput.
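	  The selection in the table above can be sketched as
	  follows (a simplified stand-in: the real
	  jffs2_find_gc_block() draws its randomness from
	  jiffies and also falls through when a list is
	  empty; here a caller-supplied value in [0,128)
	  stands in for that, and the names are illustrative):

```c
/* Sketch of the probabilistic list selection described in the
 * table above. Simplification of jffs2_find_gc_block(): empty-list
 * fallthrough is omitted, and `n` replaces the jiffies-based
 * randomness. */
enum gc_list { ERASABLE, VERY_DIRTY, DIRTY, CLEAN };

enum gc_list pick_list(unsigned n /* pseudo-random, 0..127 */)
{
    n %= 128;
    if (n < 50)            /* 50/128 -> erasable_list   */
        return ERASABLE;
    if (n < 50 + 60)       /* 60/128 -> very_dirty_list */
        return VERY_DIRTY;
    if (n < 50 + 60 + 16)  /* 16/128 -> dirty_list      */
        return DIRTY;
    return CLEAN;          /* remaining 2/128 -> clean_list */
}
```

	  When the first three lists are empty, every draw
	  effectively lands on clean_list, which is the
	  behavior complained about above.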

	+ I think jffs2_find_gc_block() should not touch the
	  clean_list; it should concentrate on producing
	  erasable blocks. Rotation, i.e. moving a clean
	  block's contents to another block, should be
	  handled by the init process at some other time,
	  for example when the system is idle.

*** 3. Difficulty of estimating the available size ***
	- It is difficult to evaluate whether the hardware
	  specification of a CE appliance is sufficient,
	  because the written size differs from case to
	  case, even when the same files are written to a
	  JFFS2 partition. This is caused by the following
	  reasons.
	  # Every node includes inode information, and we
	    have no control over how many nodes are created.
	    We can't estimate how much ROM the duplicated
	    inode information will take, because the number
	    of nodes per file can't be predicted.
	  # When a file is partially overwritten, an
	    overlapping region is created between the
	    previous node and the new node. That region of
	    the previous node can't be reused as writable
	    space, because the node is not completely
	    obsolete.

	+ If the node size is fixed, for example at 512
	  bytes, we don't need to worry about obsolete
	  nodes, we don't need the fraglist structure, and
	  we can estimate the required size whenever data
	  is written.
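	  With a fixed node size, the estimate becomes a
	  simple function of the write length, independent
	  of how the write overlaps earlier nodes. A minimal
	  sketch, assuming the 512-byte figure from the
	  proposal above (the function names are mine):

```c
/* Sketch of the size estimate made possible by a fixed node size.
 * NODE_SIZE is the 512-byte figure proposed above; per-node header
 * overhead is ignored for simplicity. */
#include <stddef.h>

#define NODE_SIZE 512u  /* fixed payload unit from the proposal */

/* Number of fixed-size nodes needed to store `len` bytes. */
size_t nodes_needed(size_t len)
{
    return (len + NODE_SIZE - 1) / NODE_SIZE;
}

/* Flash bytes consumed by writing `len` bytes of file data. */
size_t flash_needed(size_t len)
{
    return nodes_needed(len) * NODE_SIZE;
}
```

	  For example, a 1000-byte write always costs two
	  nodes (1024 bytes), no matter which part of the
	  file it replaces.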

I hope to get some opinions from the JFFS2 community.


To unsubscribe from this list: send the line "unsubscribe jffs-dev" in
the body of a message to majordomo@xxxxxxx.com