
RE: Slow jffs2 startup on DOM



I fixed it. The problem was what I suspected: the CLEANMARKER is never explicitly
obsoleted. This patch fixes it (against the latest 2_4_branch).

Jocke

--- /usr/local/src/MTD/mtd/fs/jffs2/nodemgmt.c	Sat Feb 23 16:07:18 2002
+++ fs/jffs2/nodemgmt.c	Mon Feb 25 12:04:18 2002
@@ -332,22 +332,25 @@
 		D2(printk(KERN_DEBUG "Not moving nextblock 0x%08x to dirty/erase_pending list\n", jeb->offset));
 	} else if (jeb == c->gcblock) {
 		D2(printk(KERN_DEBUG "Not moving gcblock 0x%08x to dirty/erase_pending list\n", jeb->offset));
-#if 0 /* We no longer do this here. It can screw the wear levelling. If you have a lot of static
-	 data and a few blocks free, and you just create new files and keep deleting/overwriting
-	 them, then you'd keep erasing and reusing those blocks without ever moving stuff around.
-	 So we leave completely obsoleted blocks on the dirty_list and let the GC delete them 
-	 when it finds them there. That way, we still get the 'once in a while, take a clean block'
-	 to spread out the flash usage */
-	} else if (!jeb->used_size) {
-		D1(printk(KERN_DEBUG "Eraseblock at 0x%08x completely dirtied. Removing from (dirty?) list...\n", jeb->offset));
+	} else if ((jeb->used_size == PAD(sizeof(struct jffs2_unknown_node)) && !jeb->first_node->next_in_ino)
+		   && (jiffies % 64)) {
+		/* We no longer do this unconditionally. It can screw the wear
+		   levelling. If you have a lot of static data and a few
+		   blocks free, and you just create new files and keep
+		   deleting/overwriting them, then you'd keep erasing and
+		   reusing those blocks without ever moving stuff around.  So
+		   occasionally we leave completely obsoleted blocks on the
+		   dirty_list and let the GC delete them when it finds them
+		   there. That way, we still get the 'once in a while, take a
+		   clean block' to spread out the flash usage.
+		*/
+		D1(printk(KERN_DEBUG "Eraseblock at 0x%08x completely dirtied. Removing from (dirty?) list...\n", jeb->offset));
 		list_del(&jeb->list);
 		D1(printk(KERN_DEBUG "...and adding to erase_pending_list\n"));
 		list_add_tail(&jeb->list, &c->erase_pending_list);
 		c->nr_erasing_blocks++;
 		jffs2_erase_pending_trigger(c);
-		//		OFNI_BS_2SFFJ(c)->s_dirt = 1;
 		D1(printk(KERN_DEBUG "Done OK\n"));
-#endif
 	} else if (jeb->dirty_size == ref->totlen) {
 		D1(printk(KERN_DEBUG "Eraseblock at 0x%08x is freshly dirtied. Removing from clean list...\n", jeb->offset));
 		list_del(&jeb->list);
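
For anyone wondering why the old "!jeb->used_size" test never fired: the
CLEANMARKER is never obsoleted, so used_size never drops to zero; it bottoms
out at the size of the CLEANMARKER node itself. The new condition checks for
exactly that case. A rough sketch of the idea (illustrative only, the helper
name is made up and not part of the patch):

	/* Illustrative only: true if the only live node left in this
	   eraseblock is the CLEANMARKER (a padded jffs2_unknown_node
	   that is not chained to any inode). */
	static int only_cleanmarker_left(struct jffs2_eraseblock *jeb)
	{
		return jeb->used_size == PAD(sizeof(struct jffs2_unknown_node)) &&
			jeb->first_node && !jeb->first_node->next_in_ino;
	}

The "(jiffies % 64)" term keeps the wear-levelling tweak from David's earlier
commit: roughly 63 times out of 64 such a block goes straight onto the
erase_pending_list, and about 1 time in 64 it is left on the dirty_list for
the GC to pick up later.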

> -----Original Message-----
> From: owner-jffs-dev@xxxxxxx.com [mailto:owner-jffs-dev@xxxxxxx.com] On Behalf
> Of Joakim Tjernlund
> Sent: Monday, February 25, 2002 10:58
> To: David Woodhouse
> Cc: Gad Hayisraeli; gleixner@xxxxxxx.de; Michael Michael;
> jffs-dev@xxxxxxx.com
> Subject: RE: Slow jffs2 startup on DOM 
> 
> 
> Hi again
> 
> Tried this on the 2_4_branch and it does not work. I enabled the 
> 
> D1(printk(KERN_DEBUG "Eraseblock at 0x%08x completely dirtied. Removing from (dirty?) list...\n", jeb->offset));
> 
> message but it never triggers. 
> Question: Is the CLEANMARKER ever obsoleted? If not, does jeb->used_size ever
> reach zero for a block with a CLEANMARKER?
> 
>  Jocke
> 
> > That one I can explain, although it shouldn't make any difference whether 
> > you reboot cleanly or not.
> > 
> > When we obsoleted the final node in an eraseblock, we used to stick it 
> > straight on the erase_pending_list for recycling. That screwed the wear 
> > levelling - if you continually create and remove {lots of,large} files, 
> > then you'll keep using the same blocks over and over again.
> > 
> > So I took that out, and we just file those blocks on the dirty_list for 
> > later erasure by the garbage-collection code. However, the mount code will 
> > observe that they're entirely empty and file them for immediate erasure, 
> > and this is what makes kupdated eat your CPU on remount.
> > 
> > I see two options for fixing this.
> > 
> > First, we could partially revert the earlier change - mostly just erase 
> > the block straight away as we used to, but _occasionally_ put it on the 
> > dirty_list to keep the wear levelling moving.
> > 
> > Secondly, we could prevent the code from erasing the all-dirty blocks
> > immediately on mount, by filing them at the front of the dirty_list so 
> > they're first up for garbage-collection. 
> > 
> > I've just committed the first option to CVS, which should render the second 
> > pointless - there will rarely be many such blocks anyway. 
> > 
> > Thomas - this may break the NAND support, as it takes some NAND-specific 
> > code out of #if 0. Although the NAND support was already broken, because 
> > that code really should have been used :)
> > 
> > In the 1 in 64 times that we just stick this block on the dirty_list instead
> > of erasing it immediately (or putting it on the erase_pending_wbuf list),
> > what stops the GC code from subsequently erasing it before the wbuf is
> > flushed? (Until today, that was 64 in 64 times.)
> > 
> > Index: nodemgmt.c
> > ===================================================================
> > RCS file: /home/cvs/mtd/fs/jffs2/nodemgmt.c,v
> > retrieving revision 1.55
> > retrieving revision 1.56
> > diff -u -p -r1.55 -r1.56
> > --- nodemgmt.c	2002/02/21 23:50:46	1.55
> > +++ nodemgmt.c	2002/02/24 11:55:23	1.56
> > @@ -31,7 +31,7 @@
> >   * provisions above, a recipient may use your version of this file
> >   * under either the RHEPL or the GPL.
> >   *
> > - * $Id: nodemgmt.c,v 1.55 2002/02/21 23:50:46 gleixner Exp $
> > + * $Id: nodemgmt.c,v 1.56 2002/02/24 11:55:23 dwmw2 Exp $
> >   *
> >   */
> >  
> > @@ -348,13 +348,17 @@ void jffs2_mark_node_obsolete(struct jff
> >  		D2(printk(KERN_DEBUG "Not moving nextblock 0x%08x to dirty/erase_pending list\n", jeb->offset));
> >  	} else if (jeb == c->gcblock) {
> >  		D2(printk(KERN_DEBUG "Not moving gcblock 0x%08x to dirty/erase_pending list\n", jeb->offset));
> > -#if 0 /* We no longer do this here. It can screw the wear levelling. If you have a lot of static
> > -	 data and a few blocks free, and you just create new files and keep deleting/overwriting
> > -	 them, then you'd keep erasing and reusing those blocks without ever moving stuff around.
> > -	 So we leave completely obsoleted blocks on the dirty_list and let the GC delete them 
> > -	 when it finds them there. That way, we still get the 'once in a while, take a clean block'
> > -	 to spread out the flash usage */
> > -	} else if (!jeb->used_size) {
> > +	} else if (!jeb->used_size && (jiffies % 64)) {
> > +		/* We no longer do this unconditionally. It can screw the wear
> > +		   levelling. If you have a lot of static data and a few
> > +		   blocks free, and you just create new files and keep
> > +		   deleting/overwriting them, then you'd keep erasing and
> > +		   reusing those blocks without ever moving stuff around.  So
> > +		   occasionally we leave completely obsoleted blocks on the
> > +		   dirty_list and let the GC delete them when it finds them
> > +		   there. That way, we still get the 'once in a while, take a
> > +		   clean block' to spread out the flash usage.
> > +		*/
> >  		D1(printk(KERN_DEBUG "Eraseblock at 0x%08x completely dirtied. Removing from (dirty?) list...\n", jeb->offset));
> >  		list_del(&jeb->list);
> >  		if (c->wbuf_len) {
> > @@ -387,7 +391,6 @@ void jffs2_mark_node_obsolete(struct jff
> >  			jffs2_erase_pending_trigger(c);
> >  		}
> >  		D1(printk(KERN_DEBUG "Done OK\n"));
> > -#endif
> >  	} else if (jeb->dirty_size == ref->totlen) {
> >  		D1(printk(KERN_DEBUG "Eraseblock at 0x%08x is freshly dirtied. Removing from clean list...\n", jeb->offset));
> >  		list_del(&jeb->list);
> > 
> > --
> > dwmw2
> > 
> > 
> > 
> > 
> > 
> > ______________________________________________________
> > Linux MTD discussion mailing list
> > http://lists.infradead.org/mailman/listinfo/linux-mtd/
> > 
> 
> 
> 


To unsubscribe from this list: send the line "unsubscribe jffs-dev" in
the body of a message to majordomo@xxxxxxx.com