
Re: Returning true available space on jffs f/s



Axel Barnitzke wrote:

> On Fri, 23 Feb 2001, Vipin Malik wrote:
>
> First of all, thanks to you (and David) for patching this
> I/O error. It came just in time for me :-)
>
> > The "df" command is really interesting to watch when one is writing to a
> > jffs file system where there are older deleted files.
> > The "% free" jumps all over the place, you can actually do a "df", write
> > some data to the jffs partition, do "df" again and see
> > the amount of free space actually *increase*!
>
> free space? you mean used space.

Same diff. My sentence was logically constructed around "free space",
but you are correct: df shows "used space" :)


>
> I wrote a small cp test script, which copies /usr/bin to /usr/local/bin
> (on my StrongARM/iPAQ), compares every file in /usr/local/bin and
> /usr/bin, and then removes /usr/local/bin again.
> Here are some of the df outputs of what I have done:
>
> Filesystem           1k-blocks      Used Available Use% Mounted on
> /dev/mtdblock6            2048      1508       540  74% /usr/local
> /dev/mtdblock6            2048        28      2020   1% /usr/local
> /dev/mtdblock6            2048      1532       516  75% /usr/local
> /dev/mtdblock6            2048       304      1744  15% /usr/local
> /dev/mtdblock6            2048      1556       492  76% /usr/local
> /dev/mtdblock6            2048        68      1980   3% /usr/local
> /dev/mtdblock6            2048      1576       472  77% /usr/local
> /dev/mtdblock6            2048        80      1968   4% /usr/local
> /dev/mtdblock6            2048      1584       464  77% /usr/local
> /dev/mtdblock6            2048       108      1940   5% /usr/local
> /dev/mtdblock6            2048      1616       432  79% /usr/local
> /dev/mtdblock6            2048       140      1908   7% /usr/local
>
> and so on ...
> As you mention, the used space increases.
> After reaching about 15% to 18% (I don't remember the gc policy
> exactly) the gc starts and you'll see (I saw) a used space of 1%
> again...
> For me that's OK, and I have seen no errors after more than
> 100000 writes/reads on that filesystem.

Actually, I have a more severe test that I will run next week.
It will test the power-down reliability of the file system.

Here is how it works:

1. A program creates 100 random-size binary files, each with a
checksum appended at the end (last 2 bytes).

2. On startup, a "checkfs" program verifies the integrity of the
100 files (by looking at the checksum). At most 1 file is allowed
to have a bad checksum; that file is recreated with random data
(and random size) and a new checksum is appended. If more than 1
file is bad, or any file is unreadable, the program stops. (A
sketch of this check appears after the list.)

3. An "ok-to-power-down" signal is sent out the serial port to a
power-down box, which shows the number of messages received on its
display, waits a random amount of time (0-20 secs), and then yanks
power to the box that sent the message.

4. After sending the "ok-to-power-down" message to that external
box, the checkfs program goes into an infinite loop recreating all
100 files (one by one, starting from a random file number).

5. This program was very effective in the past in testing the
power-down reliability of IDE flash disks in embedded systems.
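A minimal sketch of step 2, the integrity pass. The file names
("file000".."file099"), the additive checksum algorithm, and the size
limit are my assumptions for illustration; the description above only
specifies 100 random-size files with a 2-byte checksum at the end:

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define NFILES 100
    #define MAXLEN 4096                 /* assumed max file size */

    /* 16-bit additive checksum over buf[0..len-1] (assumed algorithm). */
    static unsigned short csum(const unsigned char *buf, long len)
    {
        unsigned short s = 0;
        for (long i = 0; i < len; i++)
            s = (unsigned short)(s + buf[i]);
        return s;
    }

    /* Rewrite one file with random data and a trailing 2-byte checksum. */
    static int recreate(const char *name)
    {
        unsigned char buf[MAXLEN + 2];
        long len = 1 + rand() % MAXLEN;  /* random size */
        for (long i = 0; i < len; i++)
            buf[i] = (unsigned char)(rand() & 0xff);
        unsigned short s = csum(buf, len);
        buf[len]     = (unsigned char)(s & 0xff);
        buf[len + 1] = (unsigned char)(s >> 8);

        FILE *f = fopen(name, "wb");
        if (!f)
            return -1;
        size_t n = fwrite(buf, 1, (size_t)len + 2, f);
        fclose(f);
        return n == (size_t)len + 2 ? 0 : -1;
    }

    int main(void)
    {
        unsigned char buf[MAXLEN + 2];
        char name[32];
        int bad = 0;

        srand((unsigned)time(NULL));
        for (int i = 0; i < NFILES; i++) {
            snprintf(name, sizeof(name), "file%03d", i);
            FILE *f = fopen(name, "rb");
            if (!f)
                return 1;               /* unreadable file: stop */
            long len = (long)fread(buf, 1, sizeof(buf), f);
            fclose(f);

            int corrupt = (len < 2);
            if (!corrupt) {
                unsigned short stored =
                    (unsigned short)(buf[len - 2] | (buf[len - 1] << 8));
                corrupt = (csum(buf, len - 2) != stored);
            }
            if (corrupt) {
                if (++bad > 1)
                    return 1;           /* more than 1 bad file: stop */
                if (recreate(name) != 0)
                    return 1;
            }
        }
        /* ...then send "ok-to-power-down" out the serial port and loop
         * recreating the files until power is yanked (steps 3 and 4). */
        return 0;
    }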

I'll let you guys know the result :)


>
>
> > But more important than interesting is the practical fact that
> > embedded systems have to garbage collect older logs etc. that
> > are only deleted when space for the new logs is getting thin. The only
> > way I know of getting that info is through statfs. If statfs
> > does not return a correct number, it is likely to throw off the log gc
> > program.
>
> How do you mean that? IMHO as long as there is min_free_size no
> logger has any problem. When some logfiles grow and free_size
> becomes less than min_free_size -- gc is started and you get
> more space. There are no problems with marked dirty pages.
> But I see your problem -- there is hidden free space on that device
> and statfs won't give you the right value.

What I mean is: say your embedded system is logging various data, and
has config files, error logs, etc. on it. The system has a finite
amount of flash NV memory. Now, you don't want the logging task (and
other tasks) to worry about whether a new log can be appended or
stored on the system (i.e. whether enough space is available).

You probably want a "log" gc thread that runs, say, every hour and
checks how much free space is left in the NV flash memory fs. If free
space drops below some threshold (say 10% free), it erases some old
logs, in increasing order of importance (e.g. apache logs first to
go, then data logs starting from the oldest date, etc.).
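To make the design concrete, here is a minimal sketch of such a check
using statfs(2). The mount point, the 10% threshold, and the
delete_oldest_log() helper are illustrative assumptions, not existing
code:

    #include <sys/statfs.h>

    #define MOUNT_POINT  "/usr/local"  /* assumed flash fs mount point */
    #define MIN_FREE_PCT 10            /* assumed free-space threshold */

    /* Hypothetical helper: removes one log in increasing order of
     * importance (apache logs first, then data logs, oldest first);
     * returns nonzero when nothing is left to delete. */
    extern int delete_oldest_log(void);

    void log_gc_pass(void)
    {
        struct statfs st;

        if (statfs(MOUNT_POINT, &st) != 0)
            return;

        /* This logic is only as good as the numbers statfs returns:
         * if dirty-but-reclaimable blocks are counted as used, we
         * delete logs even though the device really has enough space. */
        while ((unsigned long long)st.f_bavail * 100 <
               (unsigned long long)st.f_blocks * MIN_FREE_PCT) {
            if (delete_oldest_log() != 0)
                break;
            if (statfs(MOUNT_POINT, &st) != 0)
                break;
        }
    }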

If statfs (or "df") misreports the amount of usage on the flash file
system, it will play havoc with this program, causing it to start
deleting logs etc. even though there is enough space on the device.

Vipin

>
>
> --
> --------------------------------------
> ++ axel (barney) barnitzke
> ++ it consultant
> ++ lisa systems
>
> email :: mailto:barney@xxxxxxx.de


To unsubscribe from this list: send the line "unsubscribe jffs-dev" in
the body of a message to majordomo@xxxxxxx.com