File size to optimise wear leveling on UBIFS partition

I need to repeatedly write 8 bytes of data (a uint64_t counter) to a 256 MB UBIFS partition, and I am concerned about flash wear due to the repeated writes.

Looking at the ubinfo -a output for the partition, the minimum I/O unit size is 2048 bytes. So my first attempt was to do circular writes to a file of 2048 bytes, i.e. write the 8-byte counter at 256 consecutive offsets and then wrap back to the beginning, ad infinitum.

I set up a program to test this theory, and after two weeks I noticed that the Current maximum erase counter value (a.k.a. max_ec) had climbed to about 20,000 after roughly 1.8 billion writes. That is far more than I would expect if wear were spread perfectly evenly across all erase blocks. My next approach would be to try a file size closer to 124 KiB, i.e. the size of a Logical Erase Block, and see whether it makes any difference.
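To make "what I'd expect" concrete, this is the back-of-envelope arithmetic I am comparing against. It is only a rough sketch built from the numbers above, and it ignores UBIFS journal and index overhead (which would push the estimate even higher):

// Back-of-envelope wear estimate (rough sketch using only the
// numbers above; ignores UBIFS journal/index overhead).
#include <stdio.h>

int main(void){
    const double payload = 1.8e9 * 8;            // bytes of counter data written
    const double part    = 256.0 * 1024 * 1024;  // partition size in bytes
    const double page    = 2048;                 // minimum I/O unit in bytes

    // Naive expectation: payload spread perfectly over the partition
    printf("naive erases per block:  %.0f\n", payload / part);                 // ~54

    // Every O_DSYNC'd 8-byte write programs at least one full page
    printf("write amplification:     %.0f\n", page / 8);                       // 256

    // Lower bound once page-granularity programming is accounted for
    printf("amplified erases/block:  %.0f\n", (payload / part) * (page / 8));  // ~13700
    return 0;
}

If that arithmetic holds, the observed max_ec of ~20,000 is within a factor of 1.5 of the page-granularity lower bound, i.e. most of the wear would be explained by write amplification rather than by uneven leveling.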

I see three options:

  • Try different file sizes for comparison
  • Read the UBIFS driver code
  • Rebuild the kernel with debugging enabled and collect more UBIFS debug logs

Is there a better way?

Here is the little C program that does repeated writes:

#define _GNU_SOURCE  /* for O_LARGEFILE */
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>

int main(int argc, char *argv[]){
    int fd;
    char *end;
    uint64_t writeops, filesize, index, i;

    if (argc != 3){
        printf("Usage: %s filesize(uint64_t) writeops(uint64_t)\n", argv[0]);
        return 1;
    }

    // Sanitise args
    filesize = strtoull(argv[1], &end, 10);
    writeops = strtoull(argv[2], &end, 10);

    if (filesize == 0 || filesize % 8 != 0){
        printf("Filesize must be a non-zero multiple of 8\n");
        return 2;
    }

    // Open the test file; O_DSYNC makes every write synchronous so it
    // actually reaches the flash instead of sitting in the page cache
    fd = open("/mnt/user/magicnumber", O_CREAT|O_RDWR|O_DSYNC|O_LARGEFILE, S_IRUSR|S_IWUSR);
    if (fd < 0){
        perror("open");
        return 3;
    }

    // Extend the file to the requested size by writing one byte at the end
    lseek(fd, filesize - 1, SEEK_SET);
    write(fd, "", 1);

    // Begin actual testing: write the 8-byte counter at consecutive
    // offsets, wrapping back to offset 0 at the end of the file
    for (i = 1, index = 0; i <= writeops; i++, index += 8){
        if (index == filesize){
            index = 0;
        }
        lseek(fd, index, SEEK_SET);
        if (write(fd, &i, sizeof(i)) != sizeof(i)){
            perror("write");
            return 4;
        }
    }
    close(fd);
    return 0;
}
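
For reference, I build and run it along these lines (the file and binary names here are arbitrary):

gcc -O2 -o counterwrite counterwrite.c
./counterwrite 2048 1800000000

where 2048 is the file size matching the minimum I/O unit and 1800000000 is the number of write operations.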

1 Answer

Looking at the ubinfo -a output for the partition, the minimum I/O unit size is 2048 bytes. So my first attempt was to do circular writes to a file of 2048 bytes.

It looks like you have misunderstood the relationship between the minimum I/O unit size and file size. The minimum I/O unit size is usually the page size of the NAND flash chip, but a file of that size does not occupy just one minimum I/O unit, because every UBIFS file takes additional space for UBI + UBIFS management payload.

Performing write operations at such an extreme frequency rarely makes sense. Why not keep the data in a buffer and write the buffer back to NAND infrequently?
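
Here is a minimal sketch of that buffered approach. The flush interval, the loop bound and the reuse of your file path are illustrative choices, not requirements:

// Sketch: keep the counter in RAM and persist it only occasionally.
#include <stdint.h>
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

#define FLUSH_EVERY 4096  // accept losing up to 4095 increments on a power cut

int main(void){
    int fd = open("/mnt/user/magicnumber", O_CREAT | O_RDWR, 0600);
    if (fd < 0){
        perror("open");
        return 1;
    }

    for (uint64_t i = 1; i <= 1800000000ULL; i++){
        // The counter is updated in RAM on every iteration ...
        if (i % FLUSH_EVERY == 0){
            // ... but written to flash only once per FLUSH_EVERY updates,
            // cutting page programs (and hence erases) by the same factor
            if (pwrite(fd, &i, sizeof(i), 0) != (ssize_t)sizeof(i)){
                perror("pwrite");
                break;
            }
            fdatasync(fd);
        }
    }
    close(fd);
    return 0;
}

The trade-off is durability: on a power cut you lose at most FLUSH_EVERY - 1 increments. For a monotonically increasing counter that is often acceptable, since you can add FLUSH_EVERY to the recovered value at boot and be sure you never hand out a value twice.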