Ok, maybe I am just completely missing something here, but I am stumped. I
am running some tests (similar to the jitter tests Vipin did a long time
ago), and I am seeing some very bad performance. Below is a summary of
what I am doing and what I am seeing.
I have a jffs2 filesystem that is mounted and starts out at ~87% full.
There are various small files on there, and one big file, ~15MiB (good
compression :). Here is what I do:
rm /opt/big_file; cp /home/big_file /opt/big_file; rm /opt/big_file; cp
/opt is the jffs2 filesystem, /home is an nfs mounted directory. big_file
and big_file2 are the same size, but have different content (i.e. possibly
different compression ratios).
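For reference, the rm/cp cycle described above could be scripted roughly like this (paths are the ones from my setup; the timing output is just to make the slowdown visible):

```python
import os
import shutil
import time

def copy_cycle(src, dst):
    """Remove the old copy (if any), then time a fresh copy."""
    if os.path.exists(dst):
        os.remove(dst)
    start = time.monotonic()
    shutil.copyfile(src, dst)
    return time.monotonic() - start

def run_test(iterations,
             sources=("/home/big_file", "/home/big_file2"),
             dst="/opt/big_file"):
    # Alternate the two source files so the content (and hence the
    # compression ratio on JFFS2) differs between iterations.
    for i in range(iterations):
        src = sources[i % len(sources)]
        elapsed = copy_cycle(src, dst)
        print(f"iteration {i}: {elapsed:.1f}s")
```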
Normally, an rm/cp pair takes about 2 minutes on my system. After about 10
iterations, the copies begin to take longer, around 3-4 minutes. After
roughly 10 more iterations they take upwards of half an hour.
Looking at the dirty, used, and wasted size from a df command, I see this:
So almost 13 eraseblocks worth of dirty space (NOR flash with 128KiB
eraseblocks). But, there is only 1 block on the dirty_list, 6 on the
free_list, and 44 on the clean_list. Garbage collection is going on during
the writes obviously, but it doesn't seem to be making any difference. In
fact, the dirty_size _increases_.
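Some back-of-the-envelope arithmetic on those numbers (assuming, worst case, that the dirty space is spread evenly over the dirty_list and clean_list blocks):

```python
ERASE_BLOCK = 128 * 1024              # NOR flash, 128KiB eraseblocks

dirty_total = 13 * ERASE_BLOCK        # "almost 13 eraseblocks worth" of dirty
spread_over = 1 + 44                  # 1 on dirty_list + 44 on clean_list

avg_dirty = dirty_total / spread_over
avg_live = ERASE_BLOCK - avg_dirty

print(f"avg dirty per block: {avg_dirty / 1024:.0f} KiB")  # ~37 KiB
print(f"avg live  per block: {avg_live / 1024:.0f} KiB")   # ~91 KiB
```

In other words, if the garbage really is spread that thinly, a GC pass has to rewrite ~91KiB of live data just to reclaim ~37KiB of dirty space.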
So here is my question: because of the size of the file involved and the
relative lack of free eraseblocks to start with, is it possible that the
filesystem is so fragmented that the dirty space is spread across many
eraseblocks, and garbage collection is incapable of actually freeing up any
of them? That is the only conclusion I could come to. Any thoughts would be
appreciated.
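A toy cost model illustrates the point (this is just an illustration, not the actual JFFS2 GC heuristics): if GC picks the dirtiest block and must rewrite that block's live data to reclaim its dirty space, the write amplification depends entirely on how concentrated the garbage is.

```python
def gc_write_amplification(dirty_fractions):
    """Bytes of live data rewritten per byte of dirty space reclaimed
    when GC erases the dirtiest block. dirty_fractions: one value in
    [0, 1] per eraseblock."""
    victim = max(dirty_fractions)
    if victim == 0:
        return float("inf")            # nothing reclaimable at all
    return (1.0 - victim) / victim

BLOCKS = 45                            # 1 dirty_list + 44 clean_list
TOTAL_DIRTY = 13.0                     # ~13 eraseblocks' worth of garbage

# Same total garbage, two distributions:
concentrated = [1.0] * 13 + [0.0] * (BLOCKS - 13)
spread = [TOTAL_DIRTY / BLOCKS] * BLOCKS

print(gc_write_amplification(concentrated))  # 0.0 -- just erase, no copying
print(gc_write_amplification(spread))        # ~2.46 -- copy 2.5x what you free
```

With the garbage spread thin, every pass costs ~2.5 bytes of copying per byte freed, and the copied nodes themselves can obsolete nodes elsewhere, which would also fit the dirty_size increasing rather than shrinking.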