Also fixed a bug in dir splitting when there are a large number of open
files, which was the main reason I was trying to make it easier to debug
disk images.
One part of the recent test changes was to move away from the
file-per-block emubd and instead simulate storage with a single
contiguous file. The file-per-block format was marginally useful
at the beginning, but as the remaining bugs get more subtle, it
becomes more valuable to inspect littlefs through scripts that
render the underlying metadata in a human-readable form.
The key benefit of switching to a contiguous file is that these same
scripts can be reused for real disk images, and can even read directly
from /dev/sdb or similar.
- ./scripts/readblock.py disk block_size block
off data
00000000: 71 01 00 00 f0 0f ff f7 6c 69 74 74 6c 65 66 73 q.......littlefs
00000010: 2f e0 00 10 00 00 02 00 00 02 00 00 00 04 00 00 /...............
00000020: ff 00 00 00 ff ff ff 7f fe 03 00 00 20 00 04 19 ...............
00000030: 61 00 00 0c 00 62 20 30 0c 09 a0 01 00 00 64 00 a....b 0......d.
...
readblock.py prints a hex dump of a given block on disk. It's basically
just "dd if=disk bs=block_size count=1 skip=block | xxd -g1 -" but with
less typing.
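
For the curious, the core of readblock.py is small enough to sketch;
here's a rough C equivalent of the same seek-and-dump logic (argument
checking and error handling trimmed, and it assumes block_size is a
multiple of 16):

// minimal sketch of readblock.py's logic: seek to block*block_size and
// hex dump one block in the off/data/ascii format shown above
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    // usage: readblock disk block_size block
    FILE *disk = fopen(argv[1], "rb");
    long block_size = strtol(argv[2], NULL, 0);
    long block = strtol(argv[3], NULL, 0);

    unsigned char *buf = malloc(block_size);
    fseek(disk, block*block_size, SEEK_SET);
    fread(buf, 1, block_size, disk);

    for (long i = 0; i < block_size; i += 16) {
        printf("%08lx:", i);
        for (int j = 0; j < 16; j++) {
            printf(" %02x", buf[i+j]);
        }
        printf("  ");
        for (int j = 0; j < 16; j++) {
            putchar((buf[i+j] >= ' ' && buf[i+j] <= '~') ? buf[i+j] : '.');
        }
        putchar('\n');
    }

    free(buf);
    fclose(disk);
    return 0;
}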
- ./scripts/readmdir.py disk block_size block1 block2
off tag type id len data (truncated)
0000003b: 0020000a dir 0 10 63 6f 6c 64 63 6f 66 66 coldcoff
00000049: 20000008 dirstruct 0 8 02 02 00 00 03 02 00 00 ........
00000008: 00200409 dir 1 9 68 6f 74 63 6f 66 66 65 hotcoffe
00000015: 20000408 dirstruct 1 8 fe 01 00 00 ff 01 00 00 ........
readmdir.py prints info about the tags in a metadata pair on disk. It
can print the currently active tags as well as the raw log of the
metadata pair.
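
The tags these scripts decode are compact: each is a 32-bit word, stored
big-endian and xored with the previous tag in the log, packing a valid
bit, an 11-bit type, a 10-bit id, and a 10-bit length. A small sketch of
the decoding, which you can check against the dump above (type 0x002 =
dir name, type 0x200 = dirstruct):

#include <stdint.h>
#include <stdio.h>

// sketch of decoding one metadata tag:
// [1-bit valid][11-bit type][10-bit id][10-bit length]
void print_tag(uint32_t prevtag, uint32_t rawtag) {
    uint32_t tag = prevtag ^ rawtag;      // undo the on-disk xor chaining
    unsigned type = (tag >> 20) & 0x7ff;  // e.g. 0x002 dir, 0x200 dirstruct
    unsigned id   = (tag >> 10) & 0x3ff;  // file/dir id within the mdir
    unsigned len  = (tag >>  0) & 0x3ff;  // length of the tag's payload
    printf("%08x: type 0x%03x id %u len %u valid %u\n",
            tag, type, id, len, tag >> 31);
}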
- ./scripts/readtree.py disk block_size
superblock "littlefs"
version v2.0
block_size 512
block_count 1024
name_max 255
file_max 2147483647
attr_max 1022
gstate 0x000000000000000000000000
dir "/"
mdir {0x0, 0x1} rev 3
v id 0 superblock "littlefs" inline size 24
mdir {0x77, 0x78} rev 1
id 0 dir "coffee" dir {0x1fc, 0x1fd}
dir "/coffee"
mdir {0x1fd, 0x1fc} rev 2
id 0 dir "coldcoffee" dir {0x202, 0x203}
id 1 dir "hotcoffee" dir {0x1fe, 0x1ff}
dir "/coffee/coldcoffee"
mdir {0x202, 0x203} rev 1
dir "/coffee/warmcoffee"
mdir {0x200, 0x201} rev 1
readtree.py parses the littlefs tree and prints info about the
semantics of what's on disk. This includes the superblock,
global-state, and directories/metadata-pairs. It doesn't print
the filesystem tree, though; that could be a job for a different tool.
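
Under the hood, readtree.py finds these mdirs the same way littlefs
does: every metadata pair is reachable by following tail pointers,
starting from the superblock pair at blocks {0, 1}. A rough sketch of
that traversal, with a hypothetical read_mdir() helper standing in for
the actual log parsing:

#include <stdbool.h>
#include <stdint.h>

// hypothetical parsed form of one metadata pair
struct mdir {
    uint32_t rev;       // revision count, selects the newer block
    bool hastail;       // is there a tail tag pointing at another pair?
    uint32_t tail[2];   // the next pair in the linked-list
};

// stand-in for the real log parsing (xor-chained tags, crc-delimited
// commits); assume it fills in an mdir from a pair of blocks
extern int read_mdir(const uint32_t pair[2], struct mdir *mdir);

// walk the linked-list of metadata pairs from the superblock at {0, 1}
int traverse(void) {
    uint32_t pair[2] = {0, 1};
    while (true) {
        struct mdir mdir;
        int err = read_mdir(pair, &mdir);
        if (err) {
            return err;
        }

        // ... print superblock/gstate/dir info for this mdir ...

        if (!mdir.hastail) {
            return 0;
        }
        pair[0] = mdir.tail[0];
        pair[1] = mdir.tail[1];
    }
}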
Also finished migrating tests with test_relocations and test_exhaustion.
The issue I was running into when migrating these tests was a lack of
flexibility in what you could do with the block devices. It was
possible to hack in some hooks for things like bad blocks and power
loss, but it wasn't clean or easily extensible.
The solution here was to just put all of these test extensions into a
third block device, testbd, that uses the other two example block
devices internally.
testbd has several useful features for testing; a rough sketch of these
hooks follows the list below. Note this makes it a pretty terrible block
device _example_, since these hooks make it look more complicated than a
block device needs to be.
- testbd can simulate different erase values, supporting 1s, 0s, other byte
patterns, or no erases at all (which can cause surprising bugs). This
actually depends on the simulated erase values in rambd and filebd.
I did try to move this out of rambd/filebd, but it's not possible to
simulate erases in testbd without buffering entire blocks and creating
an excessive amount of extra write operations.
- testbd also helps simulate power-loss by keeping a "power cycles"
counter that is decremented on every write operation until, at zero,
it calls exit. This is notably faster than the previous gdb approach,
which is valuable since the reentrant tests tend to take a while to
resolve.
- testbd also tracks wear, which can be manually set and read. This is
very useful for testing things like bad block handling, wear leveling,
or even changing the effective size of the block device at runtime.
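
To give a feel for how these hooks fit together, here's a rough sketch
of the state behind them; the names are illustrative stand-ins patterned
on the features above, not the actual testbd interface:

#include <stdint.h>
#include <stdlib.h>

// illustrative testbd-like state; names are assumptions, not the real API
struct testbd {
    int32_t erase_value;    // byte "written" by erase (0xff, 0x00, ...),
                            // or -1 to simulate no erases at all
    uint32_t power_cycles;  // writes remaining before simulated power-loss,
                            // 0 disables the simulation
    uint32_t *wear;         // per-block erase counts, settable/readable
};

// every write operation passes through a hook like this
static void testbd_powerloss_hook(struct testbd *bd) {
    if (bd->power_cycles > 0) {
        bd->power_cycles -= 1;
        if (bd->power_cycles == 0) {
            // simulated power-loss: exit immediately and let the test
            // runner restart and remount to check consistency; much
            // faster than interrupting the process from gdb
            exit(33); // hypothetical exit code the runner watches for
        }
    }
}

// wear can be poked directly to fake bad blocks or to shrink the
// effective size of the block device at runtime
static void testbd_setwear(struct testbd *bd, uint32_t block, uint32_t wear) {
    bd->wear[block] = wear;
}

static uint32_t testbd_getwear(struct testbd *bd, uint32_t block) {
    return bd->wear[block];
}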