mirror of https://github.com/eledio-devices/thirdparty-littlefs.git
synced 2025-10-31 16:14:16 +01:00
	Added tests for power-cycled-relocations and fixed the bugs that fell out
The power-cycled-relocation test with random renames has been the most
aggressive test applied to littlefs so far, with:
- Random nested directory creation
- Random nested directory removal
- Random nested directory renames (this could make the
  threaded linked-list very interesting)
- Relocating blocks every write (maximum wear-leveling)
- Incrementally cycling power every write
Also added a couple of other tests to test_orphans and test_relocations.
The good news is the added testing worked well: it found quite a number
of complex, subtle bugs that had been difficult to find before.
1. It's actually possible for our parent to be relocated and go out of
   sync in lfs_mkdir. This can happen if our predecessor's predecessor
   is our parent as we are threading ourselves into the filesystem's
   threaded list. (note this doesn't happen if our predecessor _is_ our
   parent, as we then update our parent in a single commit).
   This is annoying because it only happens if our parent is a long (>1
   pair) directory, otherwise we wouldn't need to catch relocations.
   Fortunately we can reuse the internal open file/dir linked-list to
   catch relocations easily, as long as we're careful to unhook our
   parent whenever lfs_mkdir returns.
2. Even more surprising, it's possible for the child in lfs_remove
   to be relocated while we delete the entry from our parent. This
   can happen if we are our own parent's predecessor, since we then
   need to be updated if our parent relocates.
   Fortunately we can also hook into the open linked-list here.
   Note the same issue was present in lfs_rename.
   Fortunately, this means now all fetched dirs are hooked into the
   open linked-list if they are needed across a commit. This means
   we shouldn't need assumptions about tree movement for correctness.
3. lfs_rename("deja/vu", "deja/vu") with the same source and destination
   was broken; it tried to delete the entry twice.
4. Managing gstate deltas when we lose power during relocations was
   broken. And unfortunately complicated.
   The issue happens when we lose power during a relocation while
   removing a directory.
   When we remove a directory, we need to move the contents of its
   gstate delta to another directory or we'll corrupt littlefs gstate.
   (gstate is an xor of all deltas on the filesystem). We used to just
   xor the gstate into our parent's gstate, however this isn't correct.
   The gstate isn't built out of the directory tree, but rather out of
   the threaded linked-list (which exists to make collecting this
   gstate efficient).
   Because we have to remove our dir in two operations, there's a point
   where both the updated parent and the child can exist in the threaded
   linked-list and duplicate the child's gstate delta.
     .--------.
   ->| parent |-.
     | gstate | |
   .-|   a    |-'
   | '--------'
   |     X <- child is orphaned
   | .--------.
   '>| child  |->
     | gstate |
     |   a    |
     '--------'
   What we need to do is save our child's gstate and only give it to our
   predecessor, since this finalizes the removal of the child.
   However we still need to make valid updates to the gstate to mark
   that we've created an orphan when we start removing the child.
   This led to a small rework of how the gstate is handled. Now we have
   a separation of the gpending state that should be written out ASAP
   and the gdelta state that is collected from orphans awaiting
   deletion.
5. lfs_deorphan wasn't actually able to handle deorphaning/desyncing
   more than one orphan after a power-cycle. Having more than one orphan
   is very rare, but of course very possible. Fortunately this was just
   a mistake: a stray break in the deorphan loop, perhaps left over from
   v1, where multiple orphans weren't possible.
   Note that we use a continue to force a refetch of the orphaned block.
   This is needed in the case of a half-orphan, since the fetched
   half-orphan may have an outdated tail pointer.
@@ -1,8 +1,8 @@
# specific corner cases worth explicitly testing for

[[case]] # dangling split dir test
define.ITERATIONS = 20
define.COUNT = 10
define.LFS_BLOCK_CYCLES = [8, 1]
code = '''
    lfs_format(&lfs, &cfg) => 0;
    // fill up filesystem so only ~16 blocks are left
@@ -68,6 +68,7 @@ code = '''
[[case]] # outdated head test
define.ITERATIONS = 20
define.COUNT = 10
define.LFS_BLOCK_CYCLES = [8, 1]
code = '''
    lfs_format(&lfs, &cfg) => 0;
    // fill up filesystem so only ~16 blocks are left
@@ -141,3 +142,160 @@ code = '''
    }
    lfs_unmount(&lfs) => 0;
'''

[[case]] # reentrant testing for relocations, this is the same as the
         # orphan testing, except here we also set block_cycles so that
         # almost every tree operation needs a relocation
reentrant = true
define = [
    {FILES=6,  DEPTH=1, CYCLES=50, LFS_BLOCK_CYCLES=1},
    {FILES=26, DEPTH=1, CYCLES=50, LFS_BLOCK_CYCLES=1},
    {FILES=3,  DEPTH=3, CYCLES=50, LFS_BLOCK_CYCLES=1},
]
code = '''
    err = lfs_mount(&lfs, &cfg);
    if (err) {
        lfs_format(&lfs, &cfg) => 0;
        lfs_mount(&lfs, &cfg) => 0;
    }

    srand(1);
    const char alpha[] = "abcdefghijklmnopqrstuvwxyz";
    for (int i = 0; i < CYCLES; i++) {
        // create random path
        char full_path[256];
        for (int d = 0; d < DEPTH; d++) {
            sprintf(&full_path[2*d], "/%c", alpha[rand() % FILES]);
        }

        // if it does not exist, we create it, else we destroy
        int res = lfs_stat(&lfs, full_path, &info);
        if (res == LFS_ERR_NOENT) {
            // create each directory in turn, ignore if dir already exists
            for (int d = 0; d < DEPTH; d++) {
                strcpy(path, full_path);
                path[2*d+2] = '\0';
                err = lfs_mkdir(&lfs, path);
                assert(!err || err == LFS_ERR_EXIST);
            }

            for (int d = 0; d < DEPTH; d++) {
                strcpy(path, full_path);
                path[2*d+2] = '\0';
                lfs_stat(&lfs, path, &info) => 0;
                assert(strcmp(info.name, &path[2*d+1]) == 0);
                assert(info.type == LFS_TYPE_DIR);
            }
        } else {
            // is valid dir?
            assert(strcmp(info.name, &full_path[2*(DEPTH-1)+1]) == 0);
            assert(info.type == LFS_TYPE_DIR);

            // try to delete path in reverse order, ignore if dir is not empty
            for (int d = DEPTH-1; d >= 0; d--) {
                strcpy(path, full_path);
                path[2*d+2] = '\0';
                err = lfs_remove(&lfs, path);
                assert(!err || err == LFS_ERR_NOTEMPTY);
            }

            lfs_stat(&lfs, full_path, &info) => LFS_ERR_NOENT;
        }
    }
    lfs_unmount(&lfs) => 0;
'''

[[case]] # reentrant testing for relocations, but now with random renames!
reentrant = true
define = [
    {FILES=6,  DEPTH=1, CYCLES=50, LFS_BLOCK_CYCLES=1},
    {FILES=26, DEPTH=1, CYCLES=50, LFS_BLOCK_CYCLES=1},
    {FILES=3,  DEPTH=3, CYCLES=50, LFS_BLOCK_CYCLES=1},
]
code = '''
    err = lfs_mount(&lfs, &cfg);
    if (err) {
        lfs_format(&lfs, &cfg) => 0;
        lfs_mount(&lfs, &cfg) => 0;
    }

    srand(1);
    const char alpha[] = "abcdefghijklmnopqrstuvwxyz";
    for (int i = 0; i < CYCLES; i++) {
        // create random path
        char full_path[256];
        for (int d = 0; d < DEPTH; d++) {
            sprintf(&full_path[2*d], "/%c", alpha[rand() % FILES]);
        }

        // if it does not exist, we create it, else we destroy
        int res = lfs_stat(&lfs, full_path, &info);
        assert(!res || res == LFS_ERR_NOENT);
        if (res == LFS_ERR_NOENT) {
            // create each directory in turn, ignore if dir already exists
            for (int d = 0; d < DEPTH; d++) {
                strcpy(path, full_path);
                path[2*d+2] = '\0';
                err = lfs_mkdir(&lfs, path);
                assert(!err || err == LFS_ERR_EXIST);
            }

            for (int d = 0; d < DEPTH; d++) {
                strcpy(path, full_path);
                path[2*d+2] = '\0';
                lfs_stat(&lfs, path, &info) => 0;
                assert(strcmp(info.name, &path[2*d+1]) == 0);
                assert(info.type == LFS_TYPE_DIR);
            }
        } else {
            assert(strcmp(info.name, &full_path[2*(DEPTH-1)+1]) == 0);
            assert(info.type == LFS_TYPE_DIR);

            // create new random path
            char new_path[256];
            for (int d = 0; d < DEPTH; d++) {
                sprintf(&new_path[2*d], "/%c", alpha[rand() % FILES]);
            }

            // if new path does not exist, rename, otherwise destroy
            res = lfs_stat(&lfs, new_path, &info);
            assert(!res || res == LFS_ERR_NOENT);
            if (res == LFS_ERR_NOENT) {
                // stop once some dir is renamed
                for (int d = 0; d < DEPTH; d++) {
                    strcpy(&path[2*d], &full_path[2*d]);
                    path[2*d+2] = '\0';
                    strcpy(&path[128+2*d], &new_path[2*d]);
                    path[128+2*d+2] = '\0';
                    err = lfs_rename(&lfs, path, path+128);
                    assert(!err || err == LFS_ERR_NOTEMPTY);
                    if (!err) {
                        strcpy(path, path+128);
                    }
                }

                for (int d = 0; d < DEPTH; d++) {
                    strcpy(path, new_path);
                    path[2*d+2] = '\0';
                    lfs_stat(&lfs, path, &info) => 0;
                    assert(strcmp(info.name, &path[2*d+1]) == 0);
                    assert(info.type == LFS_TYPE_DIR);
                }

                lfs_stat(&lfs, full_path, &info) => LFS_ERR_NOENT;
            } else {
                // try to delete path in reverse order,
                // ignore if dir is not empty
                for (int d = DEPTH-1; d >= 0; d--) {
                    strcpy(path, full_path);
                    path[2*d+2] = '\0';
                    err = lfs_remove(&lfs, path);
                    assert(!err || err == LFS_ERR_NOTEMPTY);
                }

                lfs_stat(&lfs, full_path, &info) => LFS_ERR_NOENT;
            }
        }
    }
    lfs_unmount(&lfs) => 0;
'''