Compare commits

...

114 Commits

Author SHA1 Message Date
Christopher Haster
20b46ded5a WIP making good progress, found a solution for the cycle issue
Found a solution to the cycle issue for the tail+branch approach, though it
does require creating an extra metadata-pair on relocate.

Need to get the advanced relocation tests passing; they are currently
failing. Also need to figure out why I added the assert on the tail end, it
may be related.
2020-03-10 08:43:28 -05:00
Christopher Haster
eecb06a9dc WIP crazy new idea work in progress
passing non-reentrant tests already!
2020-03-06 20:14:27 -06:00
Christopher Haster
3ee291de59 WIP so close 2020-02-28 06:03:59 -06:00
Christopher Haster
f7bc22937a WIP branches in dir 2020-02-28 04:51:25 -06:00
Christopher Haster
26ed6dee7d WIP initial commit of branch idea to repair non-DAG trees 2020-02-26 15:50:38 -06:00
Christopher Haster
e33bef55d3 Added "evil" tests and detection/recovery from bad pointers and infinite loops
These two features have been much requested by users, and several PRs
have even been proposed to fix these cases. Before this, these error
conditions were usually caught by internal asserts; however, asserts
prevented users from implementing their own workarounds.

It's taken me a while to provide/accept a useful recovery mechanism
(returning LFS_ERR_CORRUPT instead of asserting) because my original thinking
was that these error conditions only occur due to bugs in the filesystem, and
these bugs should be fixed properly.

While I still think this is mostly true, the point has been made clear
that being able to recover from these conditions is definitely worth the
code cost. Hopefully this new behaviour helps the longevity of devices
even if the storage code fails.

Another, less important, reason I didn't want to accept fixes for these
situations was the lack of tests that prove the code's value. This has
been fixed with the new testing framework thanks to the addition of
"internal tests" which can call C static functions and really take
advantage of the internal information of the filesystem.
2020-02-24 08:18:28 -06:00
Chris Desjardins
cb26157880 Change assert to runtime check.
I had a system that was constantly hitting this assert; after making
this change it recovered immediately.
2020-02-23 22:18:08 -06:00
Christopher Haster
a7dfae4526 Minor tweaks to debugging scripts, fixed explode_asserts.py off-by-1
- Changed readmdir.py to print the metadata pair and revision count,
  which is useful when debugging commit issues.
- Added truncated data view to readtree.py by default. This does mean
  readtree.py must read all files on the filesystem to show the
  truncated data; hopefully this does not end up being a problem.
- Made overall representation hopefully more readable, including moving
  superblock under the root dir, userattrs under files, fixing a gstate
  rendering issue.
- Added rendering of soft-tails as dotted-arrows, hopefully this isn't
  too noisy.
- Fixed explode_asserts.py off-by-1 in #line mapping caused by a strip
  call in the assert generation eating newlines. The script matches
  line numbers between the original+modified files by emitting assert
  statements that use the same number of lines. An off-by-1 here causes
  the entire file to map lines incorrectly, which can be very annoying.
2020-02-22 23:50:03 -06:00
Christopher Haster
50fe8ae258 Renamed test_format -> test_superblocks, tweaked superblock tests
With the superblock expansion stuff, the test_format tests have grown
to test more advanced superblock-related features. This is fine but
deserves a rename so it's more clear.

Also fixed a typo that meant tests never ran with block cycles.
2020-02-22 23:35:28 -06:00
Christopher Haster
0990296619 Limited byte-level tests to native testing due to time
Byte-level writes are expensive and not suggested (caches >= 4 bytes
make much more sense), however there are many corner cases with
byte-level writes that can be easy to miss (power-loss leaving single
bytes written to disk).

Unfortunately, the combination of byte-level writes, power-loss testing,
the Travis infrastructure, and Arm Thumb instruction set simulation
exceeds the 50-minute budget Travis allocates for jobs.

For now I'm disabling the byte-level tests under Qemu, with the hope that
performance improvements in littlefs will let us turn these tests back
on in the future.
2020-02-18 18:05:08 -06:00
Christopher Haster
d04b077506 Fixed minor things to get CI passing again
- Added caching to Travis install dirs, because otherwise
  pip3 install fails randomly
- Increased size of littlefs-fuse disk because test script has
  a larger footprint now
- Skip a couple of reentrant tests under byte-level writes because
  the tests just take too long and cause Travis to bail due to no
  output for 10m
- Fixed various Valgrind errors
  - Suppressed uninit checks for tests where LFS_BLOCK_ERASE_VALUE == -1.
    In this case rambd goes uninitialized, which is fine for rambd's
    purposes. Note I couldn't figure out how to limit this suppression
    to only the malloc in rambd, this doesn't seem possible with Valgrind.
  - Fixed memory leaks in exhaustion tests
  - Fixed off-by-1 string null-terminator issue in paths tests
- Fixed lfs_file_sync issue revealed by fixing memory leaks
  in exhaustion tests. Getting ENOSPC during a file write puts the file
  in a bad state where littlefs doesn't know how to write it out safely.
  In this case, lfs_file_sync and lfs_file_close return 0 without
  writing out state so that device-side resources can still be cleaned
  up. To recover from ENOSPC, the file needs to be reopened and the
  writes recreated. Not sure if there is a better way to handle this.
- Added some quality-of-life improvements to Valgrind testing
  - Fit Valgrind messages into truncated output when not in verbose mode
  - Turned on origin tracking
2020-02-18 18:05:03 -06:00
Christopher Haster
c7987a3162 Restructured .travis.yml to span more jobs
The core of littlefs's CI testing is the full test suite, `make test`, run
under a number of configurations:

- Processor architecture:
  - x86 (native)
  - Arm Thumb
  - MIPS
  - PowerPC
- Storage geometry (rs=read_size, ps=prog_size, cs=cache_size, bs=block_size):
  - rs=16   ps=16   cs=64   bs=512   (default)
  - rs=1    ps=1    cs=64   bs=4KiB  (NOR flash)
  - rs=512  ps=512  cs=512  bs=512   (eMMC)
  - rs=4KiB ps=4KiB cs=4KiB bs=32KiB (NAND flash)
- Other corner cases:
  - no intrinsics
  - no inline
  - byte-level read/writes
  - single block-cycles
  - odd block counts
  - odd block sizes

The number of different configurations we need to test quickly exceeds the
50 minute time limit Travis has on jobs. Fortunately, we can split these
tests out into multiple jobs. This seems to be the intended course of
action for large CI "builds" in Travis, as this gives Travis a finer
grain of control over limiting builds.

Unfortunately, this created a couple issues:

1. The Travis configuration isn't actually that flexible. It allows a
   single "matrix expansion" which can be generated from top-level lists
   of different configurations. But it doesn't let you generate a matrix
   from two separate environment variable lists (for arch + geometry).

   Without multiple matrix expansions, we're stuck writing out each test
   permutation by hand.

   On the bright side, this was a good chance to really learn how YAML
   anchors work. I'm torn because on one hand anchors add what feels
   like unnecessary complexity to a config language; on the other hand,
   they did help quite a bit in working around Travis's limitations.

2. Now that we have 47 jobs instead of 7, reporting a separate status
   for each job stops making sense.

   What I've opted for here is to use a special NAME variable to
   deduplicate jobs, and a few stateless rules to hopefully have
   the reported status make sense most of the time.

   - Overwrite "pending" statuses so that the last job to start owns the
     most recent "pending" status
   - Don't overwrite "failure" statuses unless the job number matches
     our own (in the case of CI restarts)
   - Don't write "success" statuses unless the job number matches our
     own, this should delay a green check-mark until the last-to-start
     job finishes
   - Always overwrite non-failures with "failure" statuses

   This does mean a temporary "success" may appear if the last job
   terminates before earlier jobs. But this is the simplest solution
   I can think of without storing some complex state somewhere.

   Note we can only report the size this way because it's cheap to
   calculate in every job.
2020-02-18 17:34:23 -06:00
Christopher Haster
dcae185a00 Fixed typo in LFS_MKTAG_IF_ELSE 2020-02-12 11:31:34 -06:00
Christopher Haster
f4b17b379c Added test.py support for tmpfs-backed disks
RAM-backed testing is faster than file-backed testing. This is why
test.py uses rambd by default.

So why add support for tmpfs-backed disks if we can already run tests in
RAM? For reentrant testing.

Under reentrant testing we simulate power-loss by forcefully exiting the
test program at specific times. To make this power-loss meaningful, we need to
persist the disk across these power-losses. However, it's interesting to
note this persistence doesn't need to be actually backed by the
filesystem.

It may be possible to re-architect the tests to simulate power-loss a
different way, by say, using coroutines or setjmp/longjmp to leave
behind ongoing filesystem operations without terminating the program
completely. But at this point, I think it's best to work with what we
have.

And simply putting the test disks into a tmpfs mount-point seems to
work just fine.

Note this does force serialization of the tests, which isn't required
otherwise. Currently they are only serialized due to limitations in
test.py. If a future change wants to parallelize the tests, it may need
to rework RAM-backed reentrant tests.
2020-02-12 10:48:54 -06:00
Christopher Haster
9f546f154f Updated .travis.yml and added additional geometry constraints
Moved .travis.yml over to use the new test framework. A part of this
involved testing all of the configurations run on the old framework
and deciding which to carry over. The new framework duplicates some of
the cases tested by the configurations so some configurations could be
dropped.

The .travis.yml includes some extreme ones, such as no inline files,
relocations every cycle, no intrinsics, power-loss every byte, unaligned
block_count and lookahead, and odd read_sizes.

There were several configurations where some tests failed because of
limitations in the tests themselves, so many conditions were added
to make sure the configurations can run on as many tests as possible.
2020-02-11 16:01:57 -06:00
Christopher Haster
b69cf890e6 Fixed CRC check when prog_size causes multiple CRCs per commit
This is a bit of a strange case that can be caused by storage with
very large prog sizes, such as NAND flash. We only have 10 bits to store
the size of our padding, so when the prog_size gets larger than 1024
bytes, we have to use multiple padding tags to commit to the next
prog_size boundary.
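
A rough sketch of the padding math described above (hypothetical helper,
not littlefs's actual code): with only 10 bits in a tag's size field,
padding to the next prog_size boundary can require several tags once
prog_size exceeds 1024 bytes.

    #include <stdint.h>

    #define TAG_SIZE_MAX 0x3ff  // largest size a 10-bit field can describe

    // number of padding tags needed to reach the next prog_size boundary
    static uint32_t padding_tags_needed(uint32_t off, uint32_t prog_size) {
        // bytes left until the next prog_size boundary
        uint32_t padding = (prog_size - (off % prog_size)) % prog_size;
        // each padding tag can only cover up to TAG_SIZE_MAX bytes
        return (padding + TAG_SIZE_MAX - 1) / TAG_SIZE_MAX;
    }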

This causes some complication for the new logic that checks CRCs in case
our block becomes "readonly" and contains existing commits that just happen
to match our new commit size.

Here we just check the CRC of the first commit. This isn't perfect but
does protect against pure "readonly" blocks.
2020-02-09 22:43:20 -06:00
Christopher Haster
02c84ac5f4 Cleaned up dependent fixes on branch
These should probably have been cleaned up in each commit to allow
cherry-picking, but due to time I haven't been able to.

- Went with creating an mdir copy in lfs_dir_commit. This handles a
  number of related cleanup issues in lfs_dir_compact and it does so
  more robustly. As a plus we can use the copy to update dependencies
  in the mlist.

- Eliminated code left by the ENOSPC file outlining

- Cleaned up TODOs and lingering comments

- Changed the reentrant many directory create/rename/remove test to use
  a smaller set of directories because of space issues when
  READ/PROG_SIZE=512
2020-02-09 12:37:39 -06:00
Christopher Haster
6530cb3a61 Fixed lfs_fs_size doubling metadata-pairs
This was caused by the previous fix for allocations during
lfs_fs_deorphan in this branch. To catch half-orphans during block
allocations we needed to duplicate all metadata-pairs reported to
lfs_fs_traverse. Unfortunately this causes lfs_fs_size to report 2x the
number of metadata-pairs, which would undoubtedly confuse users.

The fix here is inelegantly simple: just do a different traversal for
allocations and size measurements. It reuses the same code but touches
slightly different sets of blocks.

Unfortunately, this causes the public lfs_fs_traverse and lfs_fs_size
functions to split in how they report blocks. This is technically
allowed, since lfs_fs_traverse may report blocks multiple times due to
CoW behavior, however it's undesirable and I'm sure there will be some
confusion.

But I don't have a better solution, so from this point lfs_fs_traverse
will be reporting 2x metadata-blocks and shouldn't be used for finding
the number of available blocks on the filesystem.
2020-02-09 12:00:23 -06:00
Christopher Haster
fe957de892 Fixed broken wear-leveling when block_cycles = 2n-1
This was an interesting issue found during a GitHub discussion with
rmollway and thrasher8390.

Blocks in the metadata-pair are relocated every "block_cycles", or, more
mathematically, when rev % block_cycles == 0, assuming rev += 1 every
block write.

But there's a problem: rev isn't incremented by 1 every block write.
There are two blocks in a metadata-pair, so from each block's
perspective, rev += 2 every block write.

This leads to a sort of aliasing issue, where, if block_cycles is
divisible by 2, one block in the metadata-pair is always relocated and
the other block is _never_ relocated, causing a complete failure of
block-level wear-leveling.

Fortunately, because of a previous workaround to avoid block_cycles = 1
(since this will cause the relocation algorithm to never terminate), the
actual math is rev % (block_cycles+1) == 0. This means the bug only
shows its head in the much less likely case where block_cycles is a
multiple of 2 plus 1, or, in more mathy terms, block_cycles = 2n+1 for
some n.

To work around this we can bitwise-or the effective modulus
(block_cycles+1) with 1 to force it to never be a multiple of 2.
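
A minimal sketch of the resulting relocation check (illustrative, not the
exact lfs.c code), assuming the modulus workaround described above:

    #include <stdbool.h>
    #include <stdint.h>

    static bool should_relocate(uint32_t rev, uint32_t block_cycles) {
        // block_cycles+1 avoids the degenerate block_cycles = 1 case;
        // or-ing with 1 keeps the modulus odd so neither block of the
        // pair aliases against the rev += 2 stride each block sees
        return block_cycles > 0 && rev % ((block_cycles + 1) | 1) == 0;
    }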

(Maybe we should do this during initialization? But then block_cycles
would need to be mutable.)

---

There are a few unrelated changes mixed into this commit that shouldn't
be there, since I added this as part of a branch of bug fixes I'm putting
together rather hastily; unfortunately this is not easily cherry-pickable.
2020-02-09 12:00:23 -06:00
Christopher Haster
6a550844f4 Modified readmdir/readtree to make reading non-truncated data easier
Added indentation so there is a clearer separation between the tag
description and tag data.

Also took the best parts of readmdir.py and added them to readtree.py.
Initially I was thinking it was best for these to have completely
independent data representations, since you could always call readtree
to get more info, but this becomes tedious when needing to look at
low-level tag info across multiple directories on the filesystem.
2020-02-09 12:00:23 -06:00
Christopher Haster
f9c2fd93f2 Removed file outlining on ENOSPC in lfs_file_sync
This was initially added as protection against the case where a file
grew to no longer fit in a metadata-pair. While in most cases this
should be caught by the math in lfs_file_write, it doesn't handle a
problem that can happen if the file's metadata is large enough that even
small inline files can't fit. This can happen if you combine a small
block size with large file names and many custom attributes.

But trying to outline on ENOSPC creates a lot of problems.

If we are actually low on space, this is one of the worst things we can
do. Inline files take up less space than CTZ skip-lists, but they are
rendered useless if we outline them as soon as we run low on space.

On top of this, the outlining logic tries multiple mdir commits if it
gets ENOSPC, which can hide errors if ENOSPC is returned for other
reasons.

In a perfect world, we would use different error codes for
no-room-in-metadata-pair and no-blocks-on-disk.

For now I've removed the outlining logic and we will need to figure out
how to handle this situation more robustly.
2020-02-09 12:00:23 -06:00
Christopher Haster
44d7112794 Fixed tests/*.toml.* in .gitignore
Running test.py creates a lot of garbage here
2020-02-09 12:00:22 -06:00
Christopher Haster
77e3078b9f Added/fixed tests for noop writes (where bd error can't be trusted)
It's interesting how many ways block devices can show failed writes:
1. prog can error
2. erase can error
3. read can error after writing (ECC failure)
4. prog doesn't error but doesn't write the data correctly
5. erase doesn't error but doesn't erase correctly

Can read fail without an error? Yes, though this appears the same as
prog and erase failing.

These weren't all simulated by testbd since I unintentionally assumed
the block device could always error. Fixed by adding additional
bad-block behaviors to testbd.

Note: This also includes a small fix where we can miss bad writes if the
underlying block device contains a valid commit with the exact same
size in the exact same offset.
2020-02-09 12:00:22 -06:00
Christopher Haster
517d3414c5 Fixed more bugs, mostly related to ENOSPC on different geometries
Fixes:
- Fixed reproducibility issue when we can't read a directory revision
- Fixed incorrect erase assumption if lfs_dir_fetch exceeds block size
- Fixed cleanup issue caused by lfs_fs_relocate failing when trying to
  outline a file in lfs_file_sync
- Fixed cleanup issue if we run out of space while extending a CTZ skip-list
- Fixed missing half-orphans when allocating blocks during lfs_fs_deorphan

Also:
- Added cycle-detection to readtree.py
- Allowed pseudo-C expressions in test conditions (and it's
  beautifully hacky, see line 187 of test.py)
- Better handling of ctrl-C during test runs
- Added build-only mode to test.py
- Limited stdout of test failures to 5 lines unless in verbose mode

Explanation of fixes below

1. Fixed reproducibility issue when we can't read a directory revision

   An interesting subtlety of the block-device layer is that the
   block-device is allowed to return LFS_ERR_CORRUPT on reads to
   untouched blocks. This can easily happen if a user is using ECC or
   some sort of CMAC on their blocks. Normally we never run into this,
   except for the optimization around directory revisions where we use
   uninitialized data to start our revision count.

   We correctly handle this case by ignoring what's on disk if the read
   fails, but end up using uninitialized RAM instead. This is not an issue
   for normal use, though it can lead to a small information leak.
   However it creates a big problem for reproducibility, which is very
   helpful for debugging.

   I ended up running into a case where the RAM values for the revision
   count were different, causing two identical runs to wear-level at
   different times, leading to one version running out of space before a
   bug occurred because it expanded the superblock early.

2. Fixed incorrect erase assumption if lfs_dir_fetch exceeds block size

   This could be caused if the previous tag was a valid commit and we
   lost power causing a partially written tag as the start of a new
   commit.

   Fortunately we already have a separate condition for exceeding the
   block size, so we can force that case to always treat the mdir as
   unerased.

3. Fixed cleanup issue caused by lfs_fs_relocate failing when trying to
   outline a file in lfs_file_sync

   Most operations involving metadata-pairs treat the mdir struct as
   entirely temporary and throw it out if any error occurs. Except for
   lfs_file_sync since the mdir is also a part of the file struct.

   This is relevant because of a cleanup issue in lfs_dir_compact that
   usually doesn't have side-effects. The issue is that lfs_fs_relocate
   can fail. It needs to allocate new blocks to relocate to, and as the
   disk reaches its end of life, it can fail with ENOSPC quite often.

   If lfs_fs_relocate fails, the containing lfs_dir_compact would return
   immediately without restoring the previous state of the mdir. If a new
   commit comes in on the same mdir, the old state left there could
   corrupt the filesystem.

   It's interesting to note this is forced to happen in lfs_file_sync,
   since it always tries to outline the file if it gets ENOSPC (ENOSPC
   can mean both no blocks to allocate and that the mdir is full). I'm
   not actually sure this bit of code is necessary anymore, we may be
   able to remove it.

4. Fixed cleanup issue if we run out of space while extending a CTZ
   skip-list

   The actual CTZ skip-list logic itself hasn't been touched in more
   than a year at this point, so I was surprised to find a bug here. But
   it turns out the CTZ skip-list could be put in an invalid state if we
   run out of space while trying to extend the skip-list.

   This only becomes a problem if we keep the file open, clean up some
   space elsewhere, and then continue to write to the open file without
   modifying it. Fortunately an easy fix.

5. Fixed missing half-orphans when allocating blocks during
   lfs_fs_deorphan

   This was a really interesting bug. Normally, we don't have to worry
   about allocations, since we force consistency before we are allowed
   to allocate blocks. But what about the deorphan operation itself?
   Don't we need to allocate blocks if we relocate while deorphaning?

   It turns out the deorphan operation can lead to allocating blocks
   while there's still orphans and half-orphans on the threaded
   linked-list. Orphans aren't an issue, but half-orphans may contain
   references to blocks in the outdated half, which doesn't get scanned
   during the normal allocation pass.

   Fortunately we already fetch directory entries to check CTZ lists, so
   we can also check half-orphans here. However this causes
   lfs_fs_traverse to duplicate all metadata-pairs, not sure what to do
   about this yet.
2020-02-09 11:54:22 -06:00
Christopher Haster
aab6aa0ed9 Cleaned up test script and directory naming
- Removed old tests and test scripts
- Reorganized the block devices to live under one directory
- Plugged new test framework into Makefile

renamed:
- scripts/test_.py -> scripts/test.py
- tests_ -> tests
- {file,ram,test}bd/* -> bd/*

It took a surprising amount of effort to make the Makefile behave since
it turns out the "test_%" rule could override "tests/test_%.toml.test"
which is generated as part of test.py.
2020-01-27 10:16:29 -06:00
Christopher Haster
52ef0c1c9e Fixed a crazy consistency issue in test.py
The root of the problem was the notorious Python quirk with mutable
default parameters. The default defines for the TestSuite class ended
up being mutated as the class determined the permutations to test,
corrupting other test's defines.

However, the only define that was mutated this way was the CACHE_SIZE
config in test_entries.

The crazy thing was how this small innocuous change would cause
"./scripts/test.py -nr test_relocations" and "./scripts/test.py -nr"
to drift out of sync only after a commit spanning the different cache
sizes would be written out with a different number of prog calls. This
offset the power-cycle counter enough to cause one case to make it to
an erase, and the other to not.

Normally, the difference between a successful/unsuccessful erase
wouldn't change the result of a test, but in this case it offset the
revision count used for wear-leveling, causing one run to expand the
superblock and the other to not.

This change to the filesystem would then propagate through the rest of
the test, making it difficult to reproduce test failures.

Fortunately the fix was to just make a copy of the default define
dictionary. This should also prevent accidental mutation of dicts
belonging to our caller.

Oh, also fixed a buffer overflow in test_files.
2020-01-26 23:53:53 -06:00
Christopher Haster
b9d0695e0a Rewrote explode_asserts.py to be more efficient
Normally I wouldn't consider optimizing this sort of script, but
explode_asserts.py proved to be terribly inefficient and dominated
the build time for running tests. It was slow enough to be distracting
when attempting to test patches while debugging. Just running
explode_asserts.py was ~10x slower than the rest of the compilation
process.

After implementing a proper tokenizer and switching to a handwritten
recursive descent parser, I was able to speed up explode_asserts.py
by ~5x and make test compilation much more tolerable.

I don't think this was a limitation of parsy, but rather switching
to a recursive descent parser made it much easier to find the hotspots
where parsing was wasting cycles (string slicing for one).

It's interesting to note that while the assert patterns can be parsed
with an LL(1) parser (by dumping seen tokens if a pattern fails),
I didn't bother as it's much easier to write the patterns with LL(k)
and parsing asserts is predicated on the "assert" string.

A few other tweaks:
- allowed combining different test modes in one run
- added a --no-internal option
- changed test_.py to start counting cases from 1
- added assert(memcmp(a, b) == 0) matching
- added better handling of string escapes in assert messages

time to run tests:
before: 1m31.122s
after:  0m41.447s
2020-01-26 23:53:53 -06:00
Christopher Haster
a5d614fbfb Added tests for power-cycled-relocations and fixed the bugs that fell out
The power-cycled-relocation test with random renames has been the most
aggressive test applied to littlefs so far, with:
- Random nested directory creation
- Random nested directory removal
- Random nested directory renames (this could make the
  threaded linked-list very interesting)
- Relocating blocks every write (maximum wear-leveling)
- Incrementally cycling power every write

Also added a couple other tests to test_orphans and test_relocations.

The good news is the added testing worked well; it found quite a number
of complex and subtle bugs that have been difficult to find.

1. It's actually possible for our parent to be relocated and go out of
   sync in lfs_mkdir. This can happen if our predecessor's predecessor
   is our parent as we are threading ourselves into the filesystem's
   threaded list. (note this doesn't happen if our predecessor _is_ our
   parent, as we then update our parent in a single commit).

   This is annoying because it only happens if our parent is a long (>1
   pair) directory, otherwise we wouldn't need to catch relocations.
   Fortunately we can reuse the internal open file/dir linked-list to
   catch relocations easily, as long as we're careful to unhook our
   parent whenever lfs_mkdir returns.

2. Even more surprising, it's possible for the child in lfs_remove
   to be relocated while we delete the entry from our parent. This
   can happen if we are our own parent's predecessor, since we need
   to be updated then if our parent relocates.

   Fortunately we can also hook into the open linked-list here.

   Note this same issue was present in lfs_rename.

   Fortunately, this means now all fetched dirs are hooked into the
   open linked-list if they are needed across a commit. This means
   we shouldn't need assumptions about tree movement for correctness.

3. lfs_rename("deja/vu", "deja/vu") with the same source and destination
   was broken and tried to delete the entry twice.

4. Managing gstate deltas when we lose power during relocations was
   broken. And unfortunately complicated.

   The issue happens when we lose power during a relocation while
   removing a directory.

   When we remove a directory, we need to move the contents of its
   gstate delta to another directory or we'll corrupt the littlefs gstate
   (gstate is an xor of all deltas on the filesystem). We used to just
   xor the delta into our parent's gstate; however, this isn't correct.

   The gstate isn't built out of the directory tree, but rather out of
   the threaded linked-list (which exists to make collecting this
   gstate efficient).

   Because we have to remove our dir in two operations, there's a point
   where both the updated parent and child can exist in the threaded
   linked-list and duplicate the child's gstate delta.

     .--------.
   ->| parent |-.
     | gstate | |
   .-|   a    |-'
   | '--------'
   |     X <- child is orphaned
   | .--------.
   '>| child  |->
     | gstate |
     |   a    |
     '--------'

   What we need to do is save our child's gstate and only give it to our
   predecessor, since this finalizes the removal of the child.

   However we still need to make valid updates to the gstate to mark
   that we've created an orphan when we start removing the child.

   This led to a small rework of how the gstate is handled. Now we have
   a separation of the gpending state that should be written out ASAP
   and the gdelta state that is collected from orphans awaiting
   deletion (see the sketch after this list).

5. lfs_deorphan wasn't actually able to handle deorphaning/desyncing
   more than one orphan after a power-cycle. Having more than one orphan
   is very rare, but of course very possible. Fortunately this was just
   a mistake with using a break in the deorphan loop, perhaps left over
   from v1 where multiple orphans weren't possible?

   Note that we use a continue to force a refetch of the orphaned block.
   This is needed in the case of a half-orphan, since the fetched
   half-orphan may have an outdated tail pointer.
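
The sketch referenced in point 4 above (types and names are illustrative,
not littlefs's actual structs): the filesystem's gstate is the xor of
every metadata-pair's delta along the threaded linked-list, so a delta
that appears twice cancels itself out, which is why duplicating the
child's delta corrupts the gstate.

    #include <stdint.h>

    typedef struct gdelta { uint32_t words[3]; } gdelta_t;

    // xor one pair's delta into the accumulated gstate while walking the
    // threaded linked-list
    static void gstate_xor(gdelta_t *gstate, const gdelta_t *delta) {
        for (int i = 0; i < 3; i++) {
            gstate->words[i] ^= delta->words[i];
        }
    }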
2020-01-26 23:45:54 -06:00
Christopher Haster
f4b6a6b328 Fixed issues with neighbor updates during moves
The root of the problem was some assumptions about what tags could be
sent to lfs_dir_commit.

- The first assumption is that there could be only one splice (create/delete)
  tag at a time, which is trivially broken by the core commit in lfs_rename.

- The second assumption is that there is at most one create and one delete in
  a single commit. This is less obvious but turns out to not be true in
  the case that we rename a file such that it overwrites another file in
  the same directory (1 delete for source file, 1 delete for destination).

- The third assumption was that there was an ordering to the
  delete/creates passed to lfs_dir_commit. It may be possible to force all
  deletes to follow creates by rearranging the tags in lfs_rename, but
  this risks overflowing tag ids.

The way lfs_dir_commit first collected the "deletetag" and "createtag"
broke all three of these assumptions. And because we lose the ordering
information we can no longer apply the directory changes to open files
correctly. The file ids may be shifted in a way that doesn't reflect the
actual operations on disk.

These problems were made worse by lfs_dir_commit cleaning up moves
implicitly, which also creates deletes implicitly. While cleaning up moves
in lfs_dir_commit may save some code size, it makes the commit logic much more
difficult to implement correctly.

This bug turned into pulling out a dead tree stump, roots and all.

I ended up reworking how lfs_dir_commit updates open files so that it
has less assumptions, now it just traverses the commit tags multiple
times in order to update file ids after a successful commit in the
correct order.

This also got rid of the dir copy by carefully updating split dirs
after all files have an up-to-date copy of the original dir.

I also just removed the implicit move cleanup. It turns out the only
commits that can occur before we have cleaned up the move are in
lfs_fs_relocate, so it was simple enough to explicitly handle this case
when we update our parent and pred during a relocate.

Cases where we may need to fix moves:
- In lfs_rename when we move a file/dir
- In lfs_demove if we lose power
- In lfs_fs_relocate if we have to relocate our parent and we find it
  had a pending move (or else the move will be outdated)
- In lfs_fs_relocate if we have to relocate our predecessor and we find it
  had a pending move (or else the move will be outdated)

Note the two cases in lfs_fs_relocate may be recursive. But
lfs_fs_relocate can only trigger other lfs_fs_relocates, so it's not
possible for pending moves to spill out into other filesystem commits.

And of course, I added several tests to cover these situations. Hopefully
the rename-with-open-files logic should be fairly locked down now.

found with initial fix by eastmoutain
2020-01-20 19:27:27 -06:00
Christopher Haster
9453ebd15d Added/improved disk-reading debug scripts
Also fixed a bug in dir splitting when there's a large number of open
files, which was the main reason I was trying to make it easier to debug
disk images.

One part of the recent test changes was to move away from the
file-per-block emubd and instead simulate storage with a single
contiguous file. The file-per-block format was marginally useful
at the beginning, but as the remaining bugs get more subtle, it
becomes more useful to inspect littlefs through scripts that
make the underlying metadata more human-readable.

The key benefit of switching to a contiguous file is these same
scripts can be reused for real disk images and can even read through
/dev/sdb or similar.

- ./scripts/readblock.py disk block_size block

  off       data
  00000000: 71 01 00 00 f0 0f ff f7 6c 69 74 74 6c 65 66 73  q.......littlefs
  00000010: 2f e0 00 10 00 00 02 00 00 02 00 00 00 04 00 00  /...............
  00000020: ff 00 00 00 ff ff ff 7f fe 03 00 00 20 00 04 19  ...............
  00000030: 61 00 00 0c 00 62 20 30 0c 09 a0 01 00 00 64 00  a....b 0......d.
  ...

  readblock.py prints a hex dump of a given block on disk. It's basically
  just "dd if=disk bs=block_size count=1 skip=block | xxd -g1 -" but with
  less typing.

- ./scripts/readmdir.py disk block_size block1 block2

  off       tag       type            id  len  data (truncated)
  0000003b: 0020000a  dir              0   10  63 6f 6c 64 63 6f 66 66 coldcoff
  00000049: 20000008  dirstruct        0    8  02 02 00 00 03 02 00 00 ........
  00000008: 00200409  dir              1    9  68 6f 74 63 6f 66 66 65 hotcoffe
  00000015: 20000408  dirstruct        1    8  fe 01 00 00 ff 01 00 00 ........

  readmdir.py prints info about the tags in a metadata pair on disk. It
  can print the currently active tags as well as the raw log of the
  metadata pair.

- ./scripts/readtree.py disk block_size

  superblock "littlefs"
    version v2.0
    block_size 512
    block_count 1024
    name_max 255
    file_max 2147483647
    attr_max 1022
  gstate 0x000000000000000000000000
  dir "/"
  mdir {0x0, 0x1} rev 3
  v id 0 superblock "littlefs" inline size 24
  mdir {0x77, 0x78} rev 1
    id 0 dir "coffee" dir {0x1fc, 0x1fd}
  dir "/coffee"
  mdir {0x1fd, 0x1fc} rev 2
    id 0 dir "coldcoffee" dir {0x202, 0x203}
    id 1 dir "hotcoffee" dir {0x1fe, 0x1ff}
  dir "/coffee/coldcoffee"
  mdir {0x202, 0x203} rev 1
  dir "/coffee/warmcoffee"
  mdir {0x200, 0x201} rev 1

  readtree.py parses the littlefs tree and prints info about the
  semantics of what's on disk. This includes the superblock,
  global-state, and directories/metadata-pairs. It doesn't print
  the filesystem tree though, that could be a different tool.
2020-01-20 19:27:27 -06:00
Christopher Haster
fb65057a3c Restructured block devices again for better test exploitation
Also finished migrating tests with test_relocations and test_exhaustion.

The issue I was running into when migrating these tests was a lack of
flexibility with what you could do with the block devices. It was
possible to hack in some hooks for things like bad blocks and power
loss, but it wasn't clean or easily extendable.

The solution here was to just put all of these test extensions into a
third block device, testbd, that uses the other two example block
devices internally.

testbd has several useful features for testing. Note this makes it a
pretty terrible block device _example_ since these hooks look more
complicated than a block device needs to be.

- testbd can simulate different erase values, supporting 1s, 0s, other byte
  patterns, or no erases at all (which can cause surprising bugs). This
  actually depends on the simulated erase values in rambd and filebd.

  I did try to move this out of rambd/filebd, but it's not possible to
  simulate erases in testbd without buffering entire blocks and creating
  an excessive amount of extra write operations.

- testbd also helps simulate power-loss by containing a "power cycles"
  counter that is decremented every write operation until it calls exit
  (see the sketch after this list).

  This is notably faster than the previous gdb approach, which is
  valuable since the reentrant tests tend to take a while to resolve.

- testbd also tracks wear, which can be manually set and read. This is
  very useful for testing things like bad block handling, wear leveling,
  or even changing the effective size of the block device at runtime.
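
A minimal sketch of the "power cycles" counter mentioned above (names
and the exit code are hypothetical, not testbd's actual implementation):

    #include <stdint.h>
    #include <stdlib.h>

    static uint32_t power_cycles;  // set by the test runner before the run

    // called from the prog/erase hooks; when the counter runs out we exit
    // immediately to simulate losing power mid-operation
    static void maybe_powerloss(void) {
        if (power_cycles > 0 && --power_cycles == 0) {
            exit(33);  // hypothetical exit code recognized by the test runner
        }
    }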
2020-01-20 19:27:24 -06:00
Christopher Haster
ecc2857c0e Migrated bad-block tests
Even with adding better reentrance testing, the bad-block tests are
still very useful at isolating the block eviction logic.

This also required rewriting a bit of the internal testing wiring
to allow custom block devices, which opens up quite a few more
strategies for testing.
2020-01-14 12:04:20 -06:00
Christopher Haster
5181ce66cd Migrated the first of the tests with internal knowledge
Both test_move and test_orphan needed internal knowledge which comes
with the addition of the "in" attribute. This was in the plan for the
test-revamp from the beginning as it really opens up the ability to
write more unit-style-tests using internal knowledge of how littlefs
works. More unit-style-tests should help _fix_ bugs by limiting the
scope of the test and where the bug could be hiding.

The "in" attribute effectively runs tests _inside_ the .c file
specified, giving the test access to all static members without
needing to change their visibility.
2020-01-14 09:14:01 -06:00
Christopher Haster
b06ce54279 Migrated the bulk of the feature-specific tests
This involved some minor tweaks for the various types of tests, adding
predicates to the test framework (necessary for test_entries and
test_alloc), and cleaning up some of the testing semantics, such as
reporting how many tests are filtered, showing permutation config on
the result screen, and properly inheriting suite config in cases.
2020-01-12 22:21:09 -06:00
Christopher Haster
1d2688a771 Migrated test_files, test_dirs, test_format suites to new framework
Also some tweaks to test_.py to capture Makefile warnings and print
test locations a bit better.
2020-01-11 15:58:17 -06:00
Christopher Haster
eeaf536eca Replaced emubd with rambd and filebd
The idea behind emubd (file per block) was neat, but it doesn't add much
value over a block device that just operates on a single linear file
(other than adding a significant amount of overhead). Initially it
helped with debugging, but when the metadata format became more complex
in v2, most debugging ended up going through the debug.py script anyway.

Aside from being simpler, moving to filebd means it is also possible to
mount disk images directly.

Also introduced rambd, which keeps the disk contents in RAM. This is
very useful for testing where it increases the speed _significantly_.
- test_dirs w/ filebd - 0m7.170s
- test_dirs w/ rambd  - 0m0.966s

These follow the emubd model of using the lfs_config for geometry. I'm
not convinced this is the best approach, but it gets the job done.

I've also added lfs_rambd_createcfg to add additional config similar to
lfs_file_opencfg. This is useful for specifying erase_value, which tells
the block device to simulate erases similar to flash devices. Note that
the default (-1) meets the minimum block device requirements and is the
most performant.
2020-01-02 18:36:53 -06:00
Christopher Haster
53d2b02f2a Added reentrant and gdb testing mechanisms to test framework
Aside from reworking the internals of test_.py to work well with
inherited TestCase classes, this also provides the two main features
that were the main reason for revamping the test framework:

1. ./scripts/test_.py --reentrant

   Runs reentrant tests (tests with reentrant=true in the .toml
   configuration) under gdb such that the program is killed on every
   call to lfs_emubd_prog or lfs_emubd_erase.

   Currently this just increments a number of prog/erases to skip, which
   means it doesn't necessarily check every possible branch of the test,
   but this should still provide a good coverage of power-loss tests.

2. ./scripts/test_.py --gdb

   Run the tests and if a failure is hit, drop into GDB. In theory this
   will be very useful for reproducing and debugging test failures.

   Note this can be combined with --reentrant to drop into GDB on the
   exact cycle of power-loss where the tests fail.
2019-12-31 11:51:52 -06:00
Christopher Haster
ed8341ec4c Reworked permutation generation in test framework and cleanup
- Reworked how permutations work
  - Now with global defines as well (apply to all code)
  - Also supports lists of different permutation sets
- Added better cleanup in tests and "make clean"
2019-12-30 13:01:08 -06:00
Christopher Haster
f42e007709 Created initial implementation of revamped test.py
This is the start of reworking littlefs's testing framework based on
lessons learned from the initial testing framework.

1. The testing framework needs to be _flexible_. It was hacky, which by
   itself isn't a downside, but it wasn't _flexible_. This limited what
   could be done with the tests and there ended up being many
   workarounds just to reproduce bugs.

   The idea behind this revamped framework is to separate the
   description of tests (tests/test_dirs.toml) and the running of tests
   (scripts/test.py).

   Now, with the logic moved entirely to python, it's possible to run
   the test under varying environments. In addition to the "just don't
   assert" run, I'm also looking to run the tests in valgrind for memory
   checking, and an environment with simulated power-loss.

   The test description can also contain abstract attributes that help
   control how tests can be run, such as "leaky" to identify tests where
   memory leaks are expected. This keeps test limitations at a minimum
   without limiting how the tests can be run.

2. Multi-stage-process tests didn't really add value and limited the
   testing environment.

   Unmounting + mounting can be done in a single process to test the
   same logic. It would be really difficult to make this fail only
   when memory is zeroed, though that can still be caught by
   power-resilient tests.

   Requiring every test to be a single process adds several options
   for test execution, such as using a RAM-backed block device for
   speed, or even running the tests on a device.

3. Added fancy assert interception. This wasn't really a requirement,
   but something I've been wanting to experiment with for a while.

   During testing, scripts/explode_asserts.py is added to the build
   process. This is a custom C-preprocessor that parses out assert
   statements and replaces them with _very_ verbose asserts that
   wouldn't normally be possible with just C macros.

   It even goes as far as to report the arguments to strcmp, since the
   lack of visibility here was very annoying.

   tests_/test_dirs.toml:186:assert: assert failed with "..", expected eq "..."
       assert(strcmp(info.name, "...") == 0);

   One downside is that simply parsing C in python is slower than the
   entire rest of the compilation, but fortunately this can be
   alleviated by parallelizing the test builds through make.

Other neat bits:
- All generated files are a suffix of the test description; this helps
  cleanup and means it's (theoretically) possible to parallelize the
  tests.
- The generated test.c is shoved base64 into an ad-hoc Makefile; this
  means it doesn't force a rebuild of tests all the time.
- Test parameterizing is now easier.
- Hopefully this framework can be repurposed also for benchmarks in the
  future.
2019-12-28 23:43:02 -06:00
Christopher Haster
ce2c01f098 Fixed lfs_dir_fetchmatch not understanding overwritten tags
Sometimes a small, single-line code change hides a complicated story
behind it. This is one of those times.

If you look at this diff, you may note that this is a case of
lfs_dir_fetchmatch not correctly handling a tag that invalidates a
callback used to search for some condition, in this case a search for a
parent, which is invalidated by a later dir tag overwriting the
previous dir pair.

But how can this happen? Dir-pair-tags are only overwritten during
relocations (when a block goes bad or exceeds the block_cycles config
option for dynamic wear-leveling). Other dir operations create new
directory entries. And the only lfs_dir_fetchmatch condition that relies
on overwrites (as opposed to proper deletes) is when we need to find a
directory's parent, an operation that only occurs during a _different_
relocation. And a false _positive_ can only happen if we don't have a
parent, which is really unlikely when we search for directory parents!

This bug and minimal test case were found by Matthew Renzelmann. In an
unfortunate series of events, first a file creation causes a directory
split to occur. This creates a new, orphaned metadata-pair containing
our new file. However, the revision count on this metadata-pair
indicates the pair is due for relocation as a part of wear-leveling.
Normally this is fine: even though this metadata-pair has no parent,
lfs_dir_find should return ENOENT and continue without error.
However, here we get hit by our fetchmatch bug. A previous, unrelated
relocation overwrites a pair which just happens to contain the block
allocated for a new metadata-pair. When we search for a parent,
lfs_dir_fetchmatch incorrectly finds this old, outdated metadata pair
and incorrectly tells our orphan it's found its parent.

As you can imagine the orphan's disappointment must be immense.

So an unfortunately timed dir split triggers a relocation which
incorrectly finds a previously written parent that has been outdated
by another relocation.

As a solution we can outdate our found tag if it is overwritten by
an exact match during lfs_dir_fetchmatch.

As a part of this I started adding a new set of tests,
tests/test_relocations, for aggressive relocation tests. This is already
being appended to by another PR. I suspect relocation is relatively
under-tested and is becoming more important due to recent improvements
in wear-leveling.
2019-12-01 16:32:01 -06:00
Christopher Haster
0197b18100 Fixed issue with superblock breaking lfs_dir_seek
The superblock entry takes up id 0 in the root directory (not all
entries are files, though currently the superblock is the only
exception). Normally, reading a directory correctly skips the
superblock and only reports non-superblock files.

However, this doesn't work perfectly for lfs_dir_seek, which tries
to be clever to not touch the disk.

Fortunately, we can fix this by adding an offset for the superblock.
This will only work while the superblock is the only non-file entry,
otherwise we would need to touch the disk to properly seek in a
directory (though we already touch the disk a bit to get dir-tails
during seeks).
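
A minimal sketch of the offset idea (hypothetical helper, not the actual
lfs_dir_seek code): in the root directory the superblock occupies id 0, so
positions and file ids are shifted by one relative to each other.

    #include <stdbool.h>
    #include <stdint.h>

    // map a file id to a directory position; only the root carries the
    // extra superblock entry
    static uint32_t dir_pos_from_id(bool dir_is_root, uint16_t id) {
        return (uint32_t)id + (dir_is_root ? 1 : 0);
    }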

Found by jhartika
2019-12-01 16:25:08 -06:00
Christopher Haster
1f11e6b78a Merge pull request #338 from ARMmbed/fix-readme-desc
README: fix incorrect description
2019-12-01 16:24:53 -06:00
Christopher Haster
9a7a3f637a Merge pull request #337 from ARMmbed/fix-null-fetchmatch
fix nullptr access in lfs_dir_fetchmatch (#185)
2019-12-01 16:24:44 -06:00
Christopher Haster
8188019cbf Merge pull request #334 from mon/bugfix/inttypes
Fix some LFS_TRACE format specifiers
2019-12-01 16:22:33 -06:00
Christopher Haster
d6dc728c87 Fixed some issues in lfs_migrate
- Bad size used for writing out the softtail tag
- Used sizeof on the address instead of the intended target
2019-12-01 16:22:15 -06:00
Christopher Haster
aeff2a28cf Stop wear-leveling during migration
Stop proactively relocating blocks during migration; this can cause a number
of failure states, such as clobbering the v1 superblock if we relocate root,
and invalidating directory pointers if we relocate the head of a directory.
On top of this, relocations increase the overall complexity of lfs_migrate,
which is already a delicate operation.
2019-12-01 16:21:57 -06:00
Christopher Haster
aae22c8256 Fixed issue with directories falling out of date after block relocation
This is caused by dir->head not being updated when dir->m.pair may be.
This causes the two to fall out of sync and later dir rewinds to fail.

This bug stems all the way back to the first commits of littlefs, so
it's surprising it has avoided detection for this long. Perhaps because
lfs_dir_rewind is not used often.
2019-12-01 16:21:57 -06:00
Christopher Haster
60e67ae080 Fixed implicit change-of-sign warning in lfs_dir_fetch
Warning on MDK v5.27.1
Found by geniusgogo
2019-11-26 16:42:49 -06:00
grunwald-m
64dedee2d1 prepare upstream bugfix of lfs
-> call lfs_dir_fetchmatch with ftag=-1 in order to set the invalid bit
   and never let the function match a dir
2019-11-26 11:48:53 -06:00
Will
5925db48da Fix some LFS_TRACE format specifiers 2019-11-22 14:29:57 +10:00
liaoweixiong
ab56dc5a8b README: fix incorrect description
From my point of view, file updates are committed to the filesystem only
on sync or close. There is an extra word 'no' here.

Fixes: bdff4bc59e ("Updated DESIGN.md to reflect v2 changes")
Signed-off-by: liaoweixiong <liaoweixiong@allwinnertech.com>
2019-11-15 18:53:53 +08:00
Christopher Haster
6b65737715 Merge pull request #308 from roykuper13/readme-example-update-block-cycles
Update readme example code in accordance to the block_cycles change
2019-10-15 10:36:42 -05:00
Christopher Haster
4ebe6030c5 Merge pull request #294 from ARMmbed/fix-max-null-tests
Fixed off-by-one null terminator in tests
2019-10-15 10:36:04 -05:00
Christopher Haster
7ae8d778f1 Merge pull request #299 from sipke/sipke/fix-types-for-16bit-machines-v2
fix types for 16bit machines v2
2019-10-15 10:35:47 -05:00
Roy Kupershmid
4d068a154d Update README example code in accordance to the block_cycles change
An addition to 38a2a8d. When executing the given example in the README,
you immediately get an assertion error because block_cycles is initialized to 0.
2019-10-13 20:27:18 +03:00
Sipke Vriend
ba088aa213 lfs_dir_*: Cast error return codes to int.
For correctness, cast the lfs_stag_t variables to int when returning a negative error code.
2019-10-01 15:24:17 +10:00
Sipke Vriend
955b296bcc lfs_file_rewind: Cast error return codes to int.
For correctness, cast the lfs_stag_t variables to int when returning a negative error code.
2019-10-01 14:22:25 +10:00
Sipke Vriend
241dbc6f86 lfs_stat: Cast error return codes to int.
For correctness, cast the lfs_stag_t variables to int when returning a negative error code.
2019-10-01 14:22:01 +10:00
Sipke Vriend
8cca58f1a6 lfs_file_truncate: ensure lfs_file_seek return code is lfs_soff_t and cast error returns
To ensure 16-bit devices do not invalidly truncate lfs_file_write return codes, change
the return variable to lfs_ssize_t (the lfs_file_write return type) and
cast to int if it is a negative error code.
2019-10-01 14:20:43 +10:00
Sipke Vriend
97f86af4e9 lfs_remove: Cast tag/error return codes to int.
For correctness, cast the lfs_stag_t variables to int when returning a negative error code.
2019-10-01 13:56:51 +10:00
Sipke Vriend
d40302c5e3 lfs_rename: Cast error return codes to int.
For correctness, cast the lfs_stag_t variables to int when returning a negative error code.
2019-10-01 13:51:52 +10:00
Sipke Vriend
0b5a78e2cd Adjust lfs_dir_find return code to ensure 32 bit value.
lfs_dir_find returns either a negative return code or a tag.
For 32-bit machines with a 32-bit int this coincides, but for processors
with smaller ints we need to ensure a 32-bit value is returned, so change
the return type to lfs_stag_t.
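
A hedged illustration of why the wide return type matters on targets
where int is 16 bits (types simplified here, not the actual lfs.c code):

    #include <stdint.h>

    typedef int32_t lfs_stag_t;  // either a negative error code or a tag

    // returning a tag through a plain int would truncate it on 16-bit
    // targets; keep the wide type and only cast negative error codes
    static int check_tag(lfs_stag_t tag) {
        if (tag < 0) {
            return (int)tag;  // error codes are small negative values
        }
        return 0;  // the tag itself stays in the lfs_stag_t domain
    }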
2019-10-01 11:52:02 +10:00
Christopher Haster
27b6cc829b Fixed off-by-one null terminator in tests
Found by mr-at-eo
2019-09-23 10:43:39 -05:00
Christopher Haster
fd204ac2fb Merge pull request #278 from roykuper13/validate-lfs-cfg-sizes
lfs: Validate lfs-cfg sizes before performing any arithmetic logics with them
2019-09-19 10:02:54 -05:00
Christopher Haster
bd99402d9a Merge pull request #281 from patrick--/fix-lfs-embud-file-resource-leak
Fix for issue #260
2019-09-19 10:02:42 -05:00
Christopher Haster
bce442a86b Merge pull request #282 from runderwo/master
Corrections for typos and grammar
2019-09-19 10:02:34 -05:00
Christopher Haster
f26e970a0e Merge pull request #286 from sipke/sipke/fix-warnings-shift-count
build: Fix warnings about shift count width difference for 16 bit com…
2019-09-19 10:02:25 -05:00
Sipke Vriend
965d29b887 build: Fix warnings about shift count width difference for 16 bit compiler
Build warnings exist on a gcc based 16 bit compiler. Cast relevant types
to fix.

littlefs/lfs.c: In function 'lfs_gstate_xororphans':
littlefs/lfs.c:355:5: warning: left shift count >= width of type
littlefs/lfs.c: In function 'lfs_dir_fetchmatch':
littlefs/lfs.c:849:17: warning: left shift count >= width of type
littlefs/lfs.c: In function 'lfs_dir_commitcrc':
littlefs/lfs.c:1278:9: warning: left shift count >= width of type
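
An illustrative fix for this class of warning (the tag layout mirrors
littlefs's LFS_MKTAG but is simplified here): make the shifted operand
explicitly 32-bit so a 16-bit int is never shifted past its width.

    #include <stdint.h>

    static uint32_t make_tag(uint16_t type, uint16_t id, uint16_t size) {
        // without the casts, type << 20 shifts a (16-bit) int on 16-bit
        // compilers, triggering "left shift count >= width of type"
        return ((uint32_t)type << 20) | ((uint32_t)id << 10) | size;
    }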
2019-09-09 13:53:50 +10:00
Ryan Underwood
f7fd7d966a Corrections for typos and grammar 2019-09-01 21:11:49 -07:00
Patrick Servello
d5aba27d60 Fix for issue #260
Certain functions within lfs_emubd.c were susceptible to file resource leaks because some code paths did not issue an fclose() before returning.
2019-08-31 20:47:26 -05:00
Roy Kupershmid
0c77123eee lfs: Validate lfs-cfg sizes before performing arithmetic logics with them 2019-08-31 16:57:56 +03:00
Christopher Haster
494dd6673d Merge pull request #263 from rojer/wundef
Fix build with -Wundef
2019-08-08 18:50:40 -05:00
Christopher Haster
fce2569005 Merge pull request #257 from pabigot/pr/20190803a
fix seek position corruption in truncate function
2019-08-08 18:50:28 -05:00
Christopher Haster
9d1f1211a9 Merge pull request #253 from pabigot/pr/20190730a
lfs: correct alignment restriction on lookahead buffer
2019-08-08 18:50:15 -05:00
Christopher Haster
151104c790 Changed CI to create release note for patches
This is a result of feedback that the current release notes made it too
difficult to see what changes happened on patch releases. From my
experience as well it became difficult to chase down which release a
commit landed on.

The risk is that this creates additional noise, both for the release
page and for user notifications. I am open to feedback if this causes a
problem.

Other tweaks on the CI side, these came from iteration with the same
scheme for coru and equeue:

- Changed version branch updates to be atomic (vN and vN-prefix). This
  makes it a bit easier to fix if one of the pushes fails due to a rogue
  branch with the same name.

- Added GEKY_BOT_DRAFT as a CI macro that can optionally switch between
  only creating drafts or immediately posting a release. The default is
  what I will be trying with littlefs, which is to draft minor/major
  releases but automatically create patch releases.

  The real benefit of automatic releases is for tiny repos that
  don't really have an active maintainer. Though this is definitely no
  longer the case with littlefs, and I'm happy it has gained this much
  attention.
2019-08-08 18:50:00 -05:00
Deomid "rojer" Ryabkov
303ffb2da4 Fix build with -Wundef
Part of https://github.com/mongoose-os-libs/vfs-fs-lfs/issues/2
2019-08-08 16:54:34 +01:00
Peter A. Bigot
5bf71fa43e lfs: do not reposition seek pointer on truncate
When using lfs_file_truncate() to make a file shorter, the file block and
off were incorrectly positioned at the new end, resulting in invalid
data being accessed when reading.  Lift the seek pointer restoration to
apply to both increasing and reducing truncates.
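
A small check illustrating the behavior this change restores (assumes
lfs.h, a mounted filesystem, and an already-open file; not part of the
patch itself):

    #include "lfs.h"

    static int check_truncate_keeps_pos(lfs_t *lfs, lfs_file_t *file,
            lfs_off_t new_size) {
        lfs_soff_t pos = lfs_file_tell(lfs, file);
        int err = lfs_file_truncate(lfs, file, new_size);
        if (err) {
            return err;
        }
        // the seek pointer is restored for both growing and shrinking
        // truncates, instead of being left at the new end
        return (lfs_file_tell(lfs, file) == pos) ? 0 : -1;
    }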

Signed-off-by: Peter A. Bigot <pab@pabigot.com>
2019-08-03 17:17:49 -05:00
Peter A. Bigot
55fb1416c7 lfs: initialize file offs field
The uninitialized value creates confusion when diagnosing anomalies.

Signed-off-by: Peter A. Bigot <pab@pabigot.com>
2019-08-03 09:59:27 -05:00
Peter A. Bigot
dc031ce1d9 lfs: use meaningful names for magic block identifiers
The difference between 0xffffffff and 0xfffffffe is too subtle.  Use
names that reflect what the value represents.

Signed-off-by: Peter A. Bigot <pab@pabigot.com>
2019-08-03 09:59:07 -05:00
Peter A. Bigot
f85ff1d2f8 lfs: correct alignment restriction on lookahead buffer
The buffer need only be 32-bit aligned.

Signed-off-by: Peter A. Bigot <pab@pabigot.com>
2019-07-30 20:02:42 -05:00
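A config sketch of what the relaxed restriction allows: the buffer only needs uint32_t alignment (its size must still be a multiple of 8 bytes). The 16-byte size and the omitted fields are just placeholders.

    #include <stdint.h>
    #include "lfs.h"

    // 32-bit aligned by virtue of its element type
    static uint32_t lookahead_buffer[16 / sizeof(uint32_t)];

    static const struct lfs_config cfg = {
        // ... block device callbacks and geometry omitted ...
        .lookahead_size   = 16,               // multiple of 8 bytes
        .lookahead_buffer = lookahead_buffer, // only uint32_t alignment needed
    };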
Christopher Haster
db054684a6 Bump version to v2.1 2019-07-29 01:42:28 -05:00
Christopher Haster
7872918ec8 Fixed issue where lfs_migrate would relocate root and corrupt superblock
Found during testing, the issue was with lfs_migrate in combination with
wear leveling.

Normally, we can expect lfs_migrate to be able to respect the user-configured
block_cycles. It already has allocation information on which blocks are
used by both v1 and v2, so it should be safe to relocate blocks as
needed.

However, this fell apart when root was relocated. If lfs_migrate found a
root that needed migration, it would happily relocate the root. This
would normally be fine, except relocating the root has the side effect of
needing to update the superblock, which, during migration, is in the
delicate state of containing both v1's and v2's superblocks in the same
metadata pair. If the superblock ends up needing to compact, this would
clobber the v1 superblock and corrupt the filesystem during migration.

The best fix I could come up with is to specifically disallow relocating
the root directory during migration. Fortunately this is behind the
LFS_MIGRATE macro, so the code cost for this check is not normally paid.
2019-07-29 01:42:06 -05:00
Christopher Haster
e249854858 Removed dependency on uninitialized value in lfs_file_t struct 2019-07-29 00:43:54 -05:00
Christopher Haster
501b0240a9 Merge pull request #232 from ARMmbed/debug-improvements
Debug improvements
2019-07-28 21:53:55 -05:00
Christopher Haster
e1f3b90b56 Merge remote-tracking branch 'origin/master' into debug-improvements 2019-07-28 21:53:13 -05:00
Christopher Haster
74fe46de3d Merge pull request #233 from ARMmbed/discourage-no-wear-leveling
Change block_cycles disable from 0 to -1
2019-07-28 21:35:48 -05:00
Christopher Haster
582b596ed1 Merge pull request #242 from ARMmbed/fix-2048-erase-size
Fix issues with large prog sizes (prog_size > 1KiB)
2019-07-28 21:35:22 -05:00
Christopher Haster
0d4c0b105c Fixed issue where inline files were not cleaned up
Due to the logging nature of metadata pairs, switching from inline files
(type3 = 0x201) to CTZ skip-lists (type3 = 0x202) does not explicitly
erase inline files, but instead leaves them up to compaction to omit.
To save code size, this is handled by the same logic that deduplicates
tags.

Unfortunately, this wasn't working. Due to a relatively late change in v2,
the struct's type field no longer takes part in determining a tag's
"uniqueness". Part of that change should have been updating the directory
traversal filtering to respect type-dependent uniqueness, but I missed
this.

The fix is to add correct type-dependent filtering. Some cleanup was also
necessary around removing delete tags during compaction and outlining
files.

Note that while this appears to conflict with the possibility of
combining inline + ctz files, we still have the device-side-only
LFS_TYPE_FROM tag that can be repurposed for 256 additional inline
"chunks".

Found by Johnxjj
2019-07-28 21:34:17 -05:00
Christopher Haster
4850e01e14 Changed rdonly/wronly mistakes to assert
Previously these returned LFS_ERR_BADF. But attempting to modify a file
opened read-only, or reading a write-only file, is a user error and
should not occur in normal use.

Changing this to an assert allows the logic to be omitted if the user
disables asserts to reduce the code footprint (not suggested unless the
user really really knows what they're doing).
2019-07-28 21:32:06 -05:00
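For clarity, here is the kind of misuse that now trips an assert (a sketch only, assuming a mounted filesystem and an existing "log.txt"):

    #include "lfs.h"

    static void rdonly_misuse(lfs_t *lfs) {
        lfs_file_t file;
        if (lfs_file_open(lfs, &file, "log.txt", LFS_O_RDONLY) != 0) {
            return;
        }

        // previously returned LFS_ERR_BADF; now caught by LFS_ASSERT,
        // and compiled out entirely when LFS_NO_ASSERT is defined
        // lfs_file_write(lfs, &file, "boom", 4);

        lfs_file_close(lfs, &file);
    }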
Christopher Haster
4ec4425272 Fixed overlapping memcpy in emubd
Found by DanielLyubin
2019-07-28 21:26:24 -05:00
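As a general illustration of this bug class (not the emubd code itself): when the source and destination ranges can overlap, memcpy is undefined behavior and memmove is the safe choice.

    #include <string.h>

    // shift the tail of a buffer to its start; regions overlap when off < len
    static void shift_left(unsigned char *buf, size_t len, size_t off) {
        memmove(buf, buf + off, len - off);  // memcpy here would be UB
    }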
Christopher Haster
31e28fddb7 Merge pull request #237 from Ar2rL/reverse_finalize_close
Protect (LFS_ASSERT) file operations against using not opened or closed files.
2019-07-28 21:26:03 -05:00
Christopher Haster
3806d88285 Fixed seek-related typos in lfs.h
- lfs_file_rewind == lfs_file_seek(lfs, file, 0, LFS_SEEK_SET)
- lfs_file_seek returns the _new_ position of the file
2019-07-28 21:25:18 -05:00
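A short usage sketch of the two corrected statements (assuming a mounted lfs_t and an open file); seek_demo() is illustrative only.

    #include "lfs.h"

    static int seek_demo(lfs_t *lfs, lfs_file_t *file) {
        // equivalent to lfs_file_rewind(lfs, file)
        lfs_soff_t pos = lfs_file_seek(lfs, file, 0, LFS_SEEK_SET);
        if (pos < 0) {
            return (int)pos;
        }

        // on success the return value is the *new* absolute position (16),
        // not the previous one
        pos = lfs_file_seek(lfs, file, 16, LFS_SEEK_CUR);
        return (pos < 0) ? (int)pos : 0;
    }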
Christopher Haster
de5972699a Fixed license header in lfs.c
Found by pabigot
2019-07-28 21:25:00 -05:00
Christopher Haster
0d8ffd6b86 Merge pull request #239 from pabigot/pr/20190723a
lfs: correct documentation on lookahead-related values
2019-07-28 21:24:39 -05:00
Christopher Haster
c0af471bc1 Merge pull request #227 from haneefmubarak/patch-1
removed <dirent.h> preventing compile on some archs
2019-07-28 21:24:22 -05:00
Christopher Haster
e8c023aab0 Changed FUSE branch to v2 (previously v2-alpha) 2019-07-28 20:43:12 -05:00
Christopher Haster
38a2a8d2a3 Minor improvement to documentation over block_cycles
Suggested by haneefmubarak
2019-07-28 20:42:13 -05:00
Christopher Haster
51fabc672b Switched to using hex for blocks and ids in debug output
This is a minor quality of life change to help debugging, specifically
when debugging test failures.

Before, the test framework used hex, while the log output used decimal.
This was slightly annoying to convert between.

Why not output lengths/offsets in hex? I don't have a big reason. I find
it easier to reason about lengths in decimal and ids (such as addresses
or block numbers) in hex. But this may just be me.
2019-07-26 20:09:24 -05:00
Christopher Haster
19838371fb Fixed issue where sed buffering (QUIET=1) caused Travis timeout 2019-07-26 19:51:20 -05:00
Christopher Haster
312326c4e4 Added a better solution for large prog sizes
A current limitation of the lfs tag is the 10-bit (1024) length field.
This field is used to indicate padding for commits and effectively
limits the size of commits to 1KiB. Because commits must be prog size
aligned, this is a problem on devices with prog size > 1024.

[----                   6KiB erase block                   ----]
[-- 2KiB prog size --|-- 2KiB prog size --|-- 2KiB prog size --]
[ 1KiB commit |  ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ]

This can be increased to 12-bit (4096), but for NAND devices this is
still too small to completely solve the issue.

The previous workaround was to just create unaligned commits. This can
occur naturally if littlefs is used on portable media, as the prog size
does not have to be consistent across different drivers. If littlefs sees
an unaligned commit, it treats the dir as unerased and must compact the
dir if it creates any new commits.

Unfortunately this isn't great. It effectively means that every small
commit forced an erase on devices with prog size > 1024. This is pretty
terrible.

[----                   6KiB erase block                   ----]
[-- 2KiB prog size --|-- 2KiB prog size --|-- 2KiB prog size --]
[ 1KiB commit |------------------- wasted ---------------------]

A different solution, implemented here, is to use multiple crc tags
to pad the commit until the remaining space fits in the padding. This
effectively looks like multiple empty commits and has a small runtime
cost to parse these tags, but otherwise does no harm.

[----                   6KiB erase block                   ----]
[-- 2KiB prog size --|-- 2KiB prog size --|-- 2KiB prog size --]
[ 1KiB commit | noop | 1KiB commit | noop | 1KiB commit | noop ]

It was a bit tricky to implement, but now we can effectively support
unlimited prog sizes since there's no limit to the number of commits
in a block.

found by kazink and joicetm
2019-07-26 19:51:15 -05:00
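The arithmetic behind the diagram above, as a standalone sketch (illustrative only, not littlefs internals): each ~1KiB commit is padded out to the next prog_size boundary by noop CRC commits, so a 6KiB block with a 2KiB prog size holds three commits.

    #include <stdio.h>

    int main(void) {
        unsigned block_size = 6 * 1024;
        unsigned prog_size  = 2 * 1024;
        unsigned commit     = 1 * 1024;

        // round each commit up to the prog_size boundary it must end on
        unsigned slot = ((commit + prog_size - 1) / prog_size) * prog_size;
        printf("commits per block: %u\n", block_size / slot);  // -> 3
        return 0;
    }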
Christopher Haster
ef1c926940 Increased testing to include geometries that can't be fully tested
This is primarily to get better test coverage over devices with very
large erase/prog/read sizes. The unfortunate state of the tests is
that most of them rely on a specific block device size, so that
ENOSPC and ECORRUPT errors occur in specific situations.

This should be improved in the future, but at least for now we can
open up some of the simpler tests to run on these different
configurations.

Also added testing over both 0x00 and 0xff erase values in emubd.

Also added a number of small file tests that expose issues prevalent
on NAND devices.
2019-07-26 19:50:17 -05:00
Christopher Haster
72e3bb4448 Refactored a handful of things in tests
- Now test errors have correct line reporting! #line directives
  are passed to the compiler that reference the relevant line in
  the test case shell script.

  --- Multi-block directory ---
  ./tests/test_dirs.sh:109: assert failed with 0, expected 1
      lfs_unmount(&lfs) => 1

- Cleaned up the number of implicit global variables provided to
  tests. A lot of these were infrequently used and made it difficult
  to remember what was provided. This isn't an MCU, so there's very
  little cost to stack allocations when needed.

- Minimized the results.py script (previously stats.py) output to
  match minimization of test output.
2019-07-26 11:11:34 -05:00
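For reference, this is how #line directives behave in general (a toy example, not the generated test code): diagnostics after the directive report the named file and line, which is what lets assert failures point back at the test script.

    #include <assert.h>

    int main(void) {
        #line 109 "tests/test_dirs.sh"
        assert(1 + 1 == 2);  // a failure here reports tests/test_dirs.sh:109
        return 0;
    }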
Christopher Haster
649640c605 Fixed workaround for erase sizes >1024 B
Introduced in 0b76635, the workaround for erase sizes >1024 is to
commit with an unaligned CRC tag. Upon reading an unaligned CRC,
littlefs should treat the metadata pair as "requiring erase". While
necessary for portability, this also lets us work around the lack of
handling of erase sizes >1024.

Unfortunately, this workaround wasn't implemented correctly (by me)
in the case that the metadata pair does not immediately compact. This
is solved here by adding the erase check to lfs_dir_commit.

Note this is still only a part of a workaround which should be replaced.
One potential solution is to pad the commit with multiple smaller CRC
tags until we reach the next prog_size boundary.

found by kazink
2019-07-24 14:45:21 -05:00
Peter A. Bigot
eb013e6dd6 lfs: correct documentation on lookahead-related values
The size of the lookahead buffer is required to be a multiple of 8 bytes
in anticipation of a future improvement.  The buffer itself need only be
aligned to support access through a uint32_t pointer.

Signed-off-by: Peter A. Bigot <pab@pabigot.com>
2019-07-23 11:05:04 -05:00
Ar2rL
7e1bad3eee Set LFS_F_OPENED flag at places required by lfs internal logic. 2019-07-21 14:36:40 +02:00
Ar2rL
72a3758958 Use LFS_F_OPENED flag to protect against use of not opened or closed file. 2019-07-21 11:34:53 +02:00
Ar2rL
df2e676562 Add necessary flag to mark file as being opened. 2019-07-21 11:34:14 +02:00
Christopher Haster
53a6e04712 Changed block_cycles disable from 0 to -1
As it is now, block_cycles = 0 disables wear leveling. This was a
mistake, as 0 is the "default" value for several other config options.
It's even worse when migrating from v1, as it's easy to miss the addition
of block_cycles and end up with a filesystem that is not actually
wear-leveling.

Clearly, block_cycles = 0 should do anything but disable wear-leveling.

Here, I've changed block_cycles = 0 to assert, forcing users to set a
value for block_cycles (500 is suggested). block_cycles can be set to -1
to explicitly disable wear leveling if desired.
2019-07-17 17:05:20 -05:00
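Config-wise the change looks like this (a sketch; callbacks and geometry omitted): block_cycles must now be set explicitly.

    #include "lfs.h"

    static const struct lfs_config cfg = {
        // ... block device callbacks and geometry omitted ...
        .block_cycles = 500,  // suggested value; -1 explicitly disables wear leveling
    };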
Christopher Haster
1aaf1cb6c0 Minor improvements to testing framework
- Moved scripts into scripts folder
- Removed what has been relatively unhelpful assert printing
2019-07-16 20:53:39 -05:00
Christopher Haster
52a90b8dcc Added asserts on positive return values from block device functions
This has been a large source of porting errors, partially my fault for
not having enough porting documentation, which is also planned.

In the short term, asserts should at least help catch these types of
errors instead of just letting the filesystem collapse after receiving
an odd error code.
2019-07-16 15:55:29 -05:00
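The contract these asserts enforce, sketched from the port's side: block device callbacks return 0 on success or a negative lfs error, never a positive value such as a byte count. user_bd_read_raw() is a hypothetical driver call.

    #include "lfs.h"

    extern int user_bd_read_raw(void *ctx, lfs_block_t block,
            lfs_off_t off, void *buffer, lfs_size_t size);

    static int user_bd_read(const struct lfs_config *c, lfs_block_t block,
            lfs_off_t off, void *buffer, lfs_size_t size) {
        int res = user_bd_read_raw(c->context, block, off, buffer, size);
        if (res < 0) {
            return LFS_ERR_IO;  // map driver failures to a negative lfs error
        }
        return 0;               // success is 0, not the number of bytes read
    }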
Christopher Haster
e279c8ff90 Tweaked debug output
- Changed "No more free space" to be an error as suggested by davidefer
- Tweaked output to be more parsable (no space between lfs and warn)
2019-07-16 15:40:26 -05:00
Christopher Haster
6a1ee91490 Added trace statements through LFS_YES_TRACE
To use, compile and run with LFS_YES_TRACE defined:
make CFLAGS+=-DLFS_YES_TRACE=1 test_format

The name LFS_YES_TRACE was chosen to match the LFS_NO_DEBUG and
LFS_NO_WARN defines for the similar levels of output. The YES is
necessary to avoid a conflict with the actual LFS_TRACE macro that
gets emitted. LFS_TRACE can also be defined directly to provide
a custom trace formatter.

Hopefully having trace statements at the littlefs C API helps
debugging and reproducing issues.
2019-07-16 15:14:32 -05:00
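A sketch of a custom trace formatter, as mentioned above; this assumes the default definition in lfs_util.h yields to a user-provided LFS_TRACE (for example one injected through CFLAGS). The extra "" argument and trailing %s only exist to allow format strings with no varargs.

    #include <stdio.h>

    #define LFS_TRACE_(fmt, ...) \
        printf("[lfs %s:%d] " fmt "%s\n", __FILE__, __LINE__, __VA_ARGS__)
    #define LFS_TRACE(...) LFS_TRACE_(__VA_ARGS__, "")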
Haneef Mubarak
2e92f7a49b actually removed <dirent.h> 2019-07-12 11:46:18 -07:00
Haneef Mubarak
2588948d70 removed <dirent.h> preventing compile on some archs 2019-07-11 15:46:17 -07:00
54 changed files with 12270 additions and 4727 deletions

1
.gitignore vendored

@@ -7,3 +7,4 @@
blocks/ blocks/
lfs lfs
test.c test.c
tests/*.toml.*

.travis.yml

@@ -1,36 +1,70 @@
# Environment variables # environment variables
env: env:
global: global:
- CFLAGS=-Werror - CFLAGS=-Werror
- MAKEFLAGS=-j
# Common test script # cache installation dirs
script: cache:
pip: true
directories:
- $HOME/.cache/apt
# common installation
_: &install-common
# need toml, also pip3 isn't installed by default?
- sudo apt-get install python3 python3-pip
- sudo pip3 install toml
# setup a ram-backed disk to speed up reentrant tests
- mkdir disks
- sudo mount -t tmpfs -o size=100m tmpfs disks
- export TFLAGS="$TFLAGS --disk=disks/disk"
# test cases
_: &test-example
# make sure example can at least compile # make sure example can at least compile
- sed -n '/``` c/,/```/{/```/d; p;}' README.md > test.c && - sed -n '/``` c/,/```/{/```/d; p}' README.md > test.c &&
make all CFLAGS+=" make all CFLAGS+="
-Duser_provided_block_device_read=NULL -Duser_provided_block_device_read=NULL
-Duser_provided_block_device_prog=NULL -Duser_provided_block_device_prog=NULL
-Duser_provided_block_device_erase=NULL -Duser_provided_block_device_erase=NULL
-Duser_provided_block_device_sync=NULL -Duser_provided_block_device_sync=NULL
-include stdio.h" -include stdio.h"
# default tests
_: &test-default
# normal+reentrant tests
- make test TFLAGS+="-nrk"
# common real-life geometries
_: &test-nor
# NOR flash: read/prog = 1 block = 4KiB
- make test TFLAGS+="-nrk -DLFS_READ_SIZE=1 -DLFS_BLOCK_SIZE=4096"
_: &test-emmc
# eMMC: read/prog = 512 block = 512
- make test TFLAGS+="-nrk -DLFS_READ_SIZE=512 -DLFS_BLOCK_SIZE=512"
_: &test-nand
# NAND flash: read/prog = 4KiB block = 32KiB
- make test TFLAGS+="-nrk -DLFS_READ_SIZE=4096 -DLFS_BLOCK_SIZE=\(32*1024\)"
# other extreme geometries that are useful for testing various corner cases
_: &test-no-intrinsics
- make test TFLAGS+="-nrk -DLFS_NO_INTRINSICS"
_: &test-no-inline
- make test TFLAGS+="-nrk -DLFS_INLINE_MAX=0"
_: &test-byte-writes
- make test TFLAGS+="-nrk -DLFS_READ_SIZE=1 -DLFS_CACHE_SIZE=1"
_: &test-block-cycles
- make test TFLAGS+="-nrk -DLFS_BLOCK_CYCLES=1"
_: &test-odd-block-count
- make test TFLAGS+="-nrk -DLFS_BLOCK_COUNT=1023 -DLFS_LOOKAHEAD_SIZE=256"
_: &test-odd-block-size
- make test TFLAGS+="-nrk -DLFS_READ_SIZE=11 -DLFS_BLOCK_SIZE=704"
# run tests # report size
- make test QUIET=1 _: &report-size
# run tests with a few different configurations
- make test QUIET=1 CFLAGS+="-DLFS_READ_SIZE=1 -DLFS_CACHE_SIZE=4"
- make test QUIET=1 CFLAGS+="-DLFS_READ_SIZE=512 -DLFS_CACHE_SIZE=512 -DLFS_BLOCK_CYCLES=16"
- make test QUIET=1 CFLAGS+="-DLFS_BLOCK_COUNT=1023 -DLFS_LOOKAHEAD_SIZE=256"
- make clean test QUIET=1 CFLAGS+="-DLFS_INLINE_MAX=0"
- make clean test QUIET=1 CFLAGS+="-DLFS_NO_INTRINSICS"
# compile and find the code size with the smallest configuration # compile and find the code size with the smallest configuration
- make clean size - make -j1 clean size
OBJ="$(ls lfs*.o | tr '\n' ' ')" OBJ="$(ls lfs*.c | sed 's/\.c/\.o/' | tr '\n' ' ')"
CFLAGS+="-DLFS_NO_ASSERT -DLFS_NO_DEBUG -DLFS_NO_WARN -DLFS_NO_ERROR" CFLAGS+="-DLFS_NO_ASSERT -DLFS_NO_DEBUG -DLFS_NO_WARN -DLFS_NO_ERROR"
| tee sizes | tee sizes
# update status if we succeeded, compare with master if possible # update status if we succeeded, compare with master if possible
- | - |
if [ "$TRAVIS_TEST_RESULT" -eq 0 ] if [ "$TRAVIS_TEST_RESULT" -eq 0 ]
@@ -38,10 +72,10 @@ script:
CURR=$(tail -n1 sizes | awk '{print $1}') CURR=$(tail -n1 sizes | awk '{print $1}')
PREV=$(curl -u "$GEKY_BOT_STATUSES" https://api.github.com/repos/$TRAVIS_REPO_SLUG/status/master \ PREV=$(curl -u "$GEKY_BOT_STATUSES" https://api.github.com/repos/$TRAVIS_REPO_SLUG/status/master \
| jq -re "select(.sha != \"$TRAVIS_COMMIT\") | jq -re "select(.sha != \"$TRAVIS_COMMIT\")
| .statuses[] | select(.context == \"$STAGE/$NAME\").description | .statuses[] | select(.context == \"${TRAVIS_BUILD_STAGE_NAME,,}/$NAME\").description
| capture(\"code size is (?<size>[0-9]+)\").size" \ | capture(\"code size is (?<size>[0-9]+)\").size" \
|| echo 0) || echo 0)
STATUS="Passed, code size is ${CURR}B" STATUS="Passed, code size is ${CURR}B"
if [ "$PREV" -ne 0 ] if [ "$PREV" -ne 0 ]
then then
@@ -49,261 +83,347 @@ script:
fi fi
fi fi
# CI matrix # stage control
stages:
- name: test
- name: deploy
if: branch = master AND type = push
# job control
jobs: jobs:
include: # native testing
# native testing - &x86
- stage: test stage: test
env: env:
- STAGE=test - NAME=littlefs-x86
- NAME=littlefs-x86 install: *install-common
script: [*test-example, *report-size]
- {<<: *x86, script: [*test-default, *report-size]}
- {<<: *x86, script: [*test-nor, *report-size]}
- {<<: *x86, script: [*test-emmc, *report-size]}
- {<<: *x86, script: [*test-nand, *report-size]}
- {<<: *x86, script: [*test-no-intrinsics, *report-size]}
- {<<: *x86, script: [*test-no-inline, *report-size]}
- {<<: *x86, script: [*test-byte-writes, *report-size]}
- {<<: *x86, script: [*test-block-cycles, *report-size]}
- {<<: *x86, script: [*test-odd-block-count, *report-size]}
- {<<: *x86, script: [*test-odd-block-size, *report-size]}
# cross-compile with ARM (thumb mode) # cross-compile with ARM (thumb mode)
- stage: test - &arm
env: stage: test
- STAGE=test env:
- NAME=littlefs-arm - NAME=littlefs-arm
- CC="arm-linux-gnueabi-gcc --static -mthumb" - CC="arm-linux-gnueabi-gcc --static -mthumb"
- EXEC="qemu-arm" - TFLAGS="$TFLAGS --exec=qemu-arm"
install: install:
- sudo apt-get install - *install-common
gcc-arm-linux-gnueabi - sudo apt-get install
libc6-dev-armel-cross gcc-arm-linux-gnueabi
qemu-user libc6-dev-armel-cross
- arm-linux-gnueabi-gcc --version qemu-user
- qemu-arm -version - arm-linux-gnueabi-gcc --version
- qemu-arm -version
script: [*test-example, *report-size]
- {<<: *arm, script: [*test-default, *report-size]}
- {<<: *arm, script: [*test-nor, *report-size]}
- {<<: *arm, script: [*test-emmc, *report-size]}
- {<<: *arm, script: [*test-nand, *report-size]}
- {<<: *arm, script: [*test-no-intrinsics, *report-size]}
- {<<: *arm, script: [*test-no-inline, *report-size]}
# it just takes way to long to run byte-level writes in qemu,
# note this is still tested in the native tests
#- {<<: *arm, script: [*test-byte-writes, *report-size]}
- {<<: *arm, script: [*test-block-cycles, *report-size]}
- {<<: *arm, script: [*test-odd-block-count, *report-size]}
- {<<: *arm, script: [*test-odd-block-size, *report-size]}
# cross-compile with PowerPC # cross-compile with MIPS
- stage: test - &mips
env: stage: test
- STAGE=test env:
- NAME=littlefs-powerpc - NAME=littlefs-mips
- CC="powerpc-linux-gnu-gcc --static" - CC="mips-linux-gnu-gcc --static"
- EXEC="qemu-ppc" - TFLAGS="$TFLAGS --exec=qemu-mips"
install: install:
- sudo apt-get install - *install-common
gcc-powerpc-linux-gnu - sudo apt-get install
libc6-dev-powerpc-cross gcc-mips-linux-gnu
qemu-user libc6-dev-mips-cross
- powerpc-linux-gnu-gcc --version qemu-user
- qemu-ppc -version - mips-linux-gnu-gcc --version
- qemu-mips -version
script: [*test-example, *report-size]
- {<<: *mips, script: [*test-default, *report-size]}
- {<<: *mips, script: [*test-nor, *report-size]}
- {<<: *mips, script: [*test-emmc, *report-size]}
- {<<: *mips, script: [*test-nand, *report-size]}
- {<<: *mips, script: [*test-no-intrinsics, *report-size]}
- {<<: *mips, script: [*test-no-inline, *report-size]}
# it just takes way to long to run byte-level writes in qemu,
# note this is still tested in the native tests
#- {<<: *mips, script: [*test-byte-writes, *report-size]}
- {<<: *mips, script: [*test-block-cycles, *report-size]}
- {<<: *mips, script: [*test-odd-block-count, *report-size]}
- {<<: *mips, script: [*test-odd-block-size, *report-size]}
# cross-compile with MIPS # cross-compile with PowerPC
- stage: test - &powerpc
env: stage: test
- STAGE=test env:
- NAME=littlefs-mips - NAME=littlefs-powerpc
- CC="mips-linux-gnu-gcc --static" - CC="powerpc-linux-gnu-gcc --static"
- EXEC="qemu-mips" - TFLAGS="$TFLAGS --exec=qemu-ppc"
install: install:
- sudo apt-get install - *install-common
gcc-mips-linux-gnu - sudo apt-get install
libc6-dev-mips-cross gcc-powerpc-linux-gnu
qemu-user libc6-dev-powerpc-cross
- mips-linux-gnu-gcc --version qemu-user
- qemu-mips -version - powerpc-linux-gnu-gcc --version
- qemu-ppc -version
script: [*test-example, *report-size]
- {<<: *powerpc, script: [*test-default, *report-size]}
- {<<: *powerpc, script: [*test-nor, *report-size]}
- {<<: *powerpc, script: [*test-emmc, *report-size]}
- {<<: *powerpc, script: [*test-nand, *report-size]}
- {<<: *powerpc, script: [*test-no-intrinsics, *report-size]}
- {<<: *powerpc, script: [*test-no-inline, *report-size]}
# it just takes way to long to run byte-level writes in qemu,
# note this is still tested in the native tests
#- {<<: *powerpc, script: [*test-byte-writes, *report-size]}
- {<<: *powerpc, script: [*test-block-cycles, *report-size]}
- {<<: *powerpc, script: [*test-odd-block-count, *report-size]}
- {<<: *powerpc, script: [*test-odd-block-size, *report-size]}
# self-host with littlefs-fuse for fuzz test # test under valgrind, checking for memory errors
- stage: test - &valgrind
env: stage: test
- STAGE=test env:
- NAME=littlefs-fuse - NAME=littlefs-valgrind
if: branch !~ -prefix$ install:
install: - *install-common
- sudo apt-get install libfuse-dev - sudo apt-get install valgrind
- git clone --depth 1 https://github.com/geky/littlefs-fuse -b v2-alpha - valgrind --version
- fusermount -V script:
- gcc --version - make test TFLAGS+="-k --valgrind"
before_script:
# setup disk for littlefs-fuse
- rm -rf littlefs-fuse/littlefs/*
- cp -r $(git ls-tree --name-only HEAD) littlefs-fuse/littlefs
- mkdir mount # self-host with littlefs-fuse for fuzz test
- sudo chmod a+rw /dev/loop0 - stage: test
- dd if=/dev/zero bs=512 count=4096 of=disk env:
- losetup /dev/loop0 disk - NAME=littlefs-fuse
script: if: branch !~ -prefix$
# self-host test install:
- make -C littlefs-fuse - *install-common
- sudo apt-get install libfuse-dev
- git clone --depth 1 https://github.com/geky/littlefs-fuse -b v2
- fusermount -V
- gcc --version
- littlefs-fuse/lfs --format /dev/loop0 # setup disk for littlefs-fuse
- littlefs-fuse/lfs /dev/loop0 mount - rm -rf littlefs-fuse/littlefs/*
- cp -r $(git ls-tree --name-only HEAD) littlefs-fuse/littlefs
- ls mount - mkdir mount
- mkdir mount/littlefs - sudo chmod a+rw /dev/loop0
- cp -r $(git ls-tree --name-only HEAD) mount/littlefs - dd if=/dev/zero bs=512 count=128K of=disk
- cd mount/littlefs - losetup /dev/loop0 disk
- stat . script:
- ls -flh # self-host test
- make -B test_dirs test_files QUIET=1 - make -C littlefs-fuse
# self-host with littlefs-fuse for fuzz test - littlefs-fuse/lfs --format /dev/loop0
- stage: test - littlefs-fuse/lfs /dev/loop0 mount
env:
- STAGE=test
- NAME=littlefs-migration
if: branch !~ -prefix$
install:
- sudo apt-get install libfuse-dev
- git clone --depth 1 https://github.com/geky/littlefs-fuse -b v2-alpha v2
- git clone --depth 1 https://github.com/geky/littlefs-fuse -b v1 v1
- fusermount -V
- gcc --version
before_script:
# setup disk for littlefs-fuse
- rm -rf v2/littlefs/*
- cp -r $(git ls-tree --name-only HEAD) v2/littlefs
- mkdir mount - ls mount
- sudo chmod a+rw /dev/loop0 - mkdir mount/littlefs
- dd if=/dev/zero bs=512 count=4096 of=disk - cp -r $(git ls-tree --name-only HEAD) mount/littlefs
- losetup /dev/loop0 disk - cd mount/littlefs
script: - stat .
# compile v1 and v2 - ls -flh
- make -C v1 - make -B test
- make -C v2
# run self-host test with v1 # test migration using littlefs-fuse
- v1/lfs --format /dev/loop0 - stage: test
- v1/lfs /dev/loop0 mount env:
- NAME=littlefs-migration
if: branch !~ -prefix$
install:
- *install-common
- sudo apt-get install libfuse-dev
- git clone --depth 1 https://github.com/geky/littlefs-fuse -b v2 v2
- git clone --depth 1 https://github.com/geky/littlefs-fuse -b v1 v1
- fusermount -V
- gcc --version
- ls mount # setup disk for littlefs-fuse
- mkdir mount/littlefs - rm -rf v2/littlefs/*
- cp -r $(git ls-tree --name-only HEAD) mount/littlefs - cp -r $(git ls-tree --name-only HEAD) v2/littlefs
- cd mount/littlefs
- stat .
- ls -flh
- make -B test_dirs test_files QUIET=1
# attempt to migrate - mkdir mount
- cd ../.. - sudo chmod a+rw /dev/loop0
- fusermount -u mount - dd if=/dev/zero bs=512 count=128K of=disk
- losetup /dev/loop0 disk
script:
# compile v1 and v2
- make -C v1
- make -C v2
- v2/lfs --migrate /dev/loop0 # run self-host test with v1
- v2/lfs /dev/loop0 mount - v1/lfs --format /dev/loop0
- v1/lfs /dev/loop0 mount
# run self-host test with v2 right where we left off - ls mount
- ls mount - mkdir mount/littlefs
- cd mount/littlefs - cp -r $(git ls-tree --name-only HEAD) mount/littlefs
- stat . - cd mount/littlefs
- ls -flh - stat .
- make -B test_dirs test_files QUIET=1 - ls -flh
- make -B test
# Automatically create releases # attempt to migrate
- stage: deploy - cd ../..
env: - fusermount -u mount
- STAGE=deploy
- NAME=deploy
script:
- |
bash << 'SCRIPT'
set -ev
# Find version defined in lfs.h
LFS_VERSION=$(grep -ox '#define LFS_VERSION .*' lfs.h | cut -d ' ' -f3)
LFS_VERSION_MAJOR=$((0xffff & ($LFS_VERSION >> 16)))
LFS_VERSION_MINOR=$((0xffff & ($LFS_VERSION >> 0)))
# Grab latests patch from repo tags, default to 0, needs finagling
# to get past github's pagination api
PREV_URL=https://api.github.com/repos/$TRAVIS_REPO_SLUG/git/refs/tags/v$LFS_VERSION_MAJOR.$LFS_VERSION_MINOR.
PREV_URL=$(curl -u "$GEKY_BOT_RELEASES" "$PREV_URL" -I \
| sed -n '/^Link/{s/.*<\(.*\)>; rel="last"/\1/;p;q0};$q1' \
|| echo $PREV_URL)
LFS_VERSION_PATCH=$(curl -u "$GEKY_BOT_RELEASES" "$PREV_URL" \
| jq 'map(.ref | match("\\bv.*\\..*\\.(.*)$";"g")
.captures[].string | tonumber) | max + 1' \
|| echo 0)
# We have our new version
LFS_VERSION="v$LFS_VERSION_MAJOR.$LFS_VERSION_MINOR.$LFS_VERSION_PATCH"
echo "VERSION $LFS_VERSION"
# Check that we're the most recent commit
CURRENT_COMMIT=$(curl -f -u "$GEKY_BOT_RELEASES" \
https://api.github.com/repos/$TRAVIS_REPO_SLUG/commits/master \
| jq -re '.sha')
[ "$TRAVIS_COMMIT" == "$CURRENT_COMMIT" ] || exit 0
# Create major branch
git branch v$LFS_VERSION_MAJOR HEAD
# Create major prefix branch
git config user.name "geky bot"
git config user.email "bot@geky.net"
git fetch https://github.com/$TRAVIS_REPO_SLUG.git \
--depth=50 v$LFS_VERSION_MAJOR-prefix || true
./scripts/prefix.py lfs$LFS_VERSION_MAJOR
git branch v$LFS_VERSION_MAJOR-prefix $( \
git commit-tree $(git write-tree) \
$(git rev-parse --verify -q FETCH_HEAD | sed -e 's/^/-p /') \
-p HEAD \
-m "Generated v$LFS_VERSION_MAJOR prefixes")
git reset --hard
# Update major version branches (vN and vN-prefix)
git push https://$GEKY_BOT_RELEASES@github.com/$TRAVIS_REPO_SLUG.git \
v$LFS_VERSION_MAJOR \
v$LFS_VERSION_MAJOR-prefix
# Create patch version tag (vN.N.N)
curl -f -u "$GEKY_BOT_RELEASES" -X POST \
https://api.github.com/repos/$TRAVIS_REPO_SLUG/git/refs \
-d "{
\"ref\": \"refs/tags/$LFS_VERSION\",
\"sha\": \"$TRAVIS_COMMIT\"
}"
# Create minor release?
[[ "$LFS_VERSION" == *.0 ]] || exit 0
# Build release notes
PREV=$(git tag --sort=-v:refname -l "v*.0" | head -1)
if [ ! -z "$PREV" ]
then
echo "PREV $PREV"
CHANGES=$'### Changes\n\n'$( \
git log --oneline $PREV.. --grep='^Merge' --invert-grep)
printf "CHANGES\n%s\n\n" "$CHANGES"
fi
# Create the release
curl -f -u "$GEKY_BOT_RELEASES" -X POST \
https://api.github.com/repos/$TRAVIS_REPO_SLUG/releases \
-d "{
\"tag_name\": \"$LFS_VERSION\",
\"name\": \"${LFS_VERSION%.0}\",
\"draft\": true,
\"body\": $(jq -sR '.' <<< "$CHANGES")
}" #"
SCRIPT
# Manage statuses - v2/lfs --migrate /dev/loop0
- v2/lfs /dev/loop0 mount
# run self-host test with v2 right where we left off
- ls mount
- cd mount/littlefs
- stat .
- ls -flh
- make -B test
# automatically create releases
- stage: deploy
env:
- NAME=deploy
script:
- |
bash << 'SCRIPT'
set -ev
# Find version defined in lfs.h
LFS_VERSION=$(grep -ox '#define LFS_VERSION .*' lfs.h | cut -d ' ' -f3)
LFS_VERSION_MAJOR=$((0xffff & ($LFS_VERSION >> 16)))
LFS_VERSION_MINOR=$((0xffff & ($LFS_VERSION >> 0)))
# Grab latests patch from repo tags, default to 0, needs finagling
# to get past github's pagination api
PREV_URL=https://api.github.com/repos/$TRAVIS_REPO_SLUG/git/refs/tags/v$LFS_VERSION_MAJOR.$LFS_VERSION_MINOR.
PREV_URL=$(curl -u "$GEKY_BOT_RELEASES" "$PREV_URL" -I \
| sed -n '/^Link/{s/.*<\(.*\)>; rel="last"/\1/;p;q0};$q1' \
|| echo $PREV_URL)
LFS_VERSION_PATCH=$(curl -u "$GEKY_BOT_RELEASES" "$PREV_URL" \
| jq 'map(.ref | match("\\bv.*\\..*\\.(.*)$";"g")
.captures[].string | tonumber) | max + 1' \
|| echo 0)
# We have our new version
LFS_VERSION="v$LFS_VERSION_MAJOR.$LFS_VERSION_MINOR.$LFS_VERSION_PATCH"
echo "VERSION $LFS_VERSION"
# Check that we're the most recent commit
CURRENT_COMMIT=$(curl -f -u "$GEKY_BOT_RELEASES" \
https://api.github.com/repos/$TRAVIS_REPO_SLUG/commits/master \
| jq -re '.sha')
[ "$TRAVIS_COMMIT" == "$CURRENT_COMMIT" ] || exit 0
# Create major branch
git branch v$LFS_VERSION_MAJOR HEAD
# Create major prefix branch
git config user.name "geky bot"
git config user.email "bot@geky.net"
git fetch https://github.com/$TRAVIS_REPO_SLUG.git \
--depth=50 v$LFS_VERSION_MAJOR-prefix || true
./scripts/prefix.py lfs$LFS_VERSION_MAJOR
git branch v$LFS_VERSION_MAJOR-prefix $( \
git commit-tree $(git write-tree) \
$(git rev-parse --verify -q FETCH_HEAD | sed -e 's/^/-p /') \
-p HEAD \
-m "Generated v$LFS_VERSION_MAJOR prefixes")
git reset --hard
# Update major version branches (vN and vN-prefix)
git push --atomic https://$GEKY_BOT_RELEASES@github.com/$TRAVIS_REPO_SLUG.git \
v$LFS_VERSION_MAJOR \
v$LFS_VERSION_MAJOR-prefix
# Build release notes
PREV=$(git tag --sort=-v:refname -l "v*" | head -1)
if [ ! -z "$PREV" ]
then
echo "PREV $PREV"
CHANGES=$(git log --oneline $PREV.. --grep='^Merge' --invert-grep)
printf "CHANGES\n%s\n\n" "$CHANGES"
fi
case ${GEKY_BOT_DRAFT:-minor} in
true) DRAFT=true ;;
minor) DRAFT=$(jq -R 'endswith(".0")' <<< "$LFS_VERSION") ;;
false) DRAFT=false ;;
esac
# Create the release and patch version tag (vN.N.N)
curl -f -u "$GEKY_BOT_RELEASES" -X POST \
https://api.github.com/repos/$TRAVIS_REPO_SLUG/releases \
-d "{
\"tag_name\": \"$LFS_VERSION\",
\"name\": \"${LFS_VERSION%.0}\",
\"target_commitish\": \"$TRAVIS_COMMIT\",
\"draft\": $DRAFT,
\"body\": $(jq -sR '.' <<< "$CHANGES")
}" #"
SCRIPT
# manage statuses
before_install: before_install:
- | - |
curl -u "$GEKY_BOT_STATUSES" -X POST \ # don't clobber other (not us) failures
https://api.github.com/repos/$TRAVIS_REPO_SLUG/statuses/${TRAVIS_PULL_REQUEST_SHA:-$TRAVIS_COMMIT} \ if ! curl https://api.github.com/repos/$TRAVIS_REPO_SLUG/status/${TRAVIS_PULL_REQUEST_SHA:-$TRAVIS_COMMIT} \
-d "{ | jq -e ".statuses[] | select(
\"context\": \"$STAGE/$NAME\", .context == \"${TRAVIS_BUILD_STAGE_NAME,,}/$NAME\" and
\"state\": \"pending\", .state == \"failure\" and
\"description\": \"${STATUS:-In progress}\", (.target_url | endswith(\"$TRAVIS_JOB_NUMBER\") | not))"
\"target_url\": \"https://travis-ci.org/$TRAVIS_REPO_SLUG/jobs/$TRAVIS_JOB_ID\" then
}" curl -u "$GEKY_BOT_STATUSES" -X POST \
https://api.github.com/repos/$TRAVIS_REPO_SLUG/statuses/${TRAVIS_PULL_REQUEST_SHA:-$TRAVIS_COMMIT} \
-d "{
\"context\": \"${TRAVIS_BUILD_STAGE_NAME,,}/$NAME\",
\"state\": \"pending\",
\"description\": \"${STATUS:-In progress}\",
\"target_url\": \"$TRAVIS_JOB_WEB_URL#$TRAVIS_JOB_NUMBER\"
}"
fi
after_failure: after_failure:
- | - |
curl -u "$GEKY_BOT_STATUSES" -X POST \ # don't clobber other (not us) failures
https://api.github.com/repos/$TRAVIS_REPO_SLUG/statuses/${TRAVIS_PULL_REQUEST_SHA:-$TRAVIS_COMMIT} \ if ! curl https://api.github.com/repos/$TRAVIS_REPO_SLUG/status/${TRAVIS_PULL_REQUEST_SHA:-$TRAVIS_COMMIT} \
-d "{ | jq -e ".statuses[] | select(
\"context\": \"$STAGE/$NAME\", .context == \"${TRAVIS_BUILD_STAGE_NAME,,}/$NAME\" and
\"state\": \"failure\", .state == \"failure\" and
\"description\": \"${STATUS:-Failed}\", (.target_url | endswith(\"$TRAVIS_JOB_NUMBER\") | not))"
\"target_url\": \"https://travis-ci.org/$TRAVIS_REPO_SLUG/jobs/$TRAVIS_JOB_ID\" then
}" curl -u "$GEKY_BOT_STATUSES" -X POST \
https://api.github.com/repos/$TRAVIS_REPO_SLUG/statuses/${TRAVIS_PULL_REQUEST_SHA:-$TRAVIS_COMMIT} \
-d "{
\"context\": \"${TRAVIS_BUILD_STAGE_NAME,,}/$NAME\",
\"state\": \"failure\",
\"description\": \"${STATUS:-Failed}\",
\"target_url\": \"$TRAVIS_JOB_WEB_URL#$TRAVIS_JOB_NUMBER\"
}"
fi
after_success: after_success:
- | - |
curl -u "$GEKY_BOT_STATUSES" -X POST \ # don't clobber other (not us) failures
https://api.github.com/repos/$TRAVIS_REPO_SLUG/statuses/${TRAVIS_PULL_REQUEST_SHA:-$TRAVIS_COMMIT} \ # only update if we were last job to mark in progress,
-d "{ # this isn't perfect but is probably good enough
\"context\": \"$STAGE/$NAME\", if ! curl https://api.github.com/repos/$TRAVIS_REPO_SLUG/status/${TRAVIS_PULL_REQUEST_SHA:-$TRAVIS_COMMIT} \
\"state\": \"success\", | jq -e ".statuses[] | select(
\"description\": \"${STATUS:-Passed}\", .context == \"${TRAVIS_BUILD_STAGE_NAME,,}/$NAME\" and
\"target_url\": \"https://travis-ci.org/$TRAVIS_REPO_SLUG/jobs/$TRAVIS_JOB_ID\" (.state == \"failure\" or .state == \"pending\") and
}" (.target_url | endswith(\"$TRAVIS_JOB_NUMBER\") | not))"
then
# Job control curl -u "$GEKY_BOT_STATUSES" -X POST \
stages: https://api.github.com/repos/$TRAVIS_REPO_SLUG/statuses/${TRAVIS_PULL_REQUEST_SHA:-$TRAVIS_COMMIT} \
- name: test -d "{
- name: deploy \"context\": \"${TRAVIS_BUILD_STAGE_NAME,,}/$NAME\",
if: branch = master AND type = push \"state\": \"success\",
\"description\": \"${STATUS:-Passed}\",
\"target_url\": \"$TRAVIS_JOB_WEB_URL#$TRAVIS_JOB_NUMBER\"
}"
fi

DESIGN.md

@@ -254,7 +254,7 @@ have weaknesses that limit their usefulness. But if we merge the two they can
mutually solve each other's limitations. mutually solve each other's limitations.
This is the idea behind littlefs. At the sub-block level, littlefs is built This is the idea behind littlefs. At the sub-block level, littlefs is built
out of small, two blocks logs that provide atomic updates to metadata anywhere out of small, two block logs that provide atomic updates to metadata anywhere
on the filesystem. At the super-block level, littlefs is a CObW tree of blocks on the filesystem. At the super-block level, littlefs is a CObW tree of blocks
that can be evicted on demand. that can be evicted on demand.
@@ -676,7 +676,7 @@ block, this cost is fairly reasonable.
--- ---
This is a new data structure, so we still have several questions. What is the This is a new data structure, so we still have several questions. What is the
storage overage? Can the number of pointers exceed the size of a block? How do storage overhead? Can the number of pointers exceed the size of a block? How do
we store a CTZ skip-list in our metadata pairs? we store a CTZ skip-list in our metadata pairs?
To find the storage overhead, we can look at the data structure as multiple To find the storage overhead, we can look at the data structure as multiple
@@ -742,8 +742,8 @@ where:
2. popcount(![x]) = the number of bits that are 1 in ![x] 2. popcount(![x]) = the number of bits that are 1 in ![x]
Initial tests of this surprising property seem to hold. As ![n] approaches Initial tests of this surprising property seem to hold. As ![n] approaches
infinity, we end up with an average overhead of 2 pointers, which matches what infinity, we end up with an average overhead of 2 pointers, which matches our
our assumption from earlier. During iteration, the popcount function seems to assumption from earlier. During iteration, the popcount function seems to
handle deviations from this average. Of course, just to make sure I wrote a handle deviations from this average. Of course, just to make sure I wrote a
quick script that verified this property for all 32-bit integers. quick script that verified this property for all 32-bit integers.
@@ -767,7 +767,7 @@ overflow, but we can avoid this by rearranging the equation a bit:
![off = N - (B-2w/8)n - (w/8)popcount(n)][ctz-formula7] ![off = N - (B-2w/8)n - (w/8)popcount(n)][ctz-formula7]
Our solution requires quite a bit of math, but computer are very good at math. Our solution requires quite a bit of math, but computers are very good at math.
Now we can find both our block index and offset from a size in _O(1)_, letting Now we can find both our block index and offset from a size in _O(1)_, letting
us store CTZ skip-lists with only a pointer and size. us store CTZ skip-lists with only a pointer and size.
@@ -850,7 +850,7 @@ nearly every write to the filesystem.
Normally, block allocation involves some sort of free list or bitmap stored on Normally, block allocation involves some sort of free list or bitmap stored on
the filesystem that is updated with free blocks. However, with power the filesystem that is updated with free blocks. However, with power
resilience, keeping these structure consistent becomes difficult. It doesn't resilience, keeping these structures consistent becomes difficult. It doesn't
help that any mistake in updating these structures can result in lost blocks help that any mistake in updating these structures can result in lost blocks
that are impossible to recover. that are impossible to recover.
@@ -894,9 +894,9 @@ high-risk error conditions.
--- ---
Our block allocator needs to find free blocks efficiently. You could traverse Our block allocator needs to find free blocks efficiently. You could traverse
through every block on storage and check each one against our filesystem tree, through every block on storage and check each one against our filesystem tree;
however the runtime would be abhorrent. We need to somehow collect multiple however, the runtime would be abhorrent. We need to somehow collect multiple
blocks each traversal. blocks per traversal.
Looking at existing designs, some larger filesystems that use a similar "drop Looking at existing designs, some larger filesystems that use a similar "drop
it on the floor" strategy store a bitmap of the entire storage in [RAM]. This it on the floor" strategy store a bitmap of the entire storage in [RAM]. This
@@ -920,8 +920,8 @@ a brute force traversal. Instead of a bitmap the size of storage, we keep track
of a small, fixed-size bitmap called the lookahead buffer. During block of a small, fixed-size bitmap called the lookahead buffer. During block
allocation, we take blocks from the lookahead buffer. If the lookahead buffer allocation, we take blocks from the lookahead buffer. If the lookahead buffer
is empty, we scan the filesystem for more free blocks, populating our lookahead is empty, we scan the filesystem for more free blocks, populating our lookahead
buffer. Each scan we use an increasing offset, circling the storage as blocks buffer. In each scan we use an increasing offset, circling the storage as
are allocated. blocks are allocated.
Here's what it might look like to allocate 4 blocks on a decently busy Here's what it might look like to allocate 4 blocks on a decently busy
filesystem with a 32 bit lookahead and a total of 128 blocks (512 KiB filesystem with a 32 bit lookahead and a total of 128 blocks (512 KiB
@@ -950,7 +950,7 @@ alloc = 112 lookahead: ffff8000
``` ```
This lookahead approach has a runtime complexity of _O(n&sup2;)_ to completely This lookahead approach has a runtime complexity of _O(n&sup2;)_ to completely
scan storage, however, bitmaps are surprisingly compact, and in practice only scan storage; however, bitmaps are surprisingly compact, and in practice only
one or two passes are usually needed to find free blocks. Additionally, the one or two passes are usually needed to find free blocks. Additionally, the
performance of the allocator can be optimized by adjusting the block size or performance of the allocator can be optimized by adjusting the block size or
size of the lookahead buffer, trading either write granularity or RAM for size of the lookahead buffer, trading either write granularity or RAM for
@@ -1173,9 +1173,9 @@ We may find that the new block is also bad, but hopefully after repeating this
cycle we'll eventually find a new block where a write succeeds. If we don't, cycle we'll eventually find a new block where a write succeeds. If we don't,
that means that all blocks in our storage are bad, and we've reached the end of that means that all blocks in our storage are bad, and we've reached the end of
our device's usable life. At this point, littlefs will return an "out of space" our device's usable life. At this point, littlefs will return an "out of space"
error, which is technically true, there are no more good blocks, but as an error. This is technically true, as there are no more good blocks, but as an
added benefit also matches the error condition expected by users of dynamically added benefit it also matches the error condition expected by users of
sized data. dynamically sized data.
--- ---
@@ -1187,7 +1187,7 @@ original data even after it has been corrupted. One such mechanism for this is
ECC is an extension to the idea of a checksum. Where a checksum such as CRC can ECC is an extension to the idea of a checksum. Where a checksum such as CRC can
detect that an error has occurred in the data, ECC can detect and actually detect that an error has occurred in the data, ECC can detect and actually
correct some amount of errors. However, there is a limit to how many errors ECC correct some amount of errors. However, there is a limit to how many errors ECC
can detect, call the [Hamming bound][wikipedia-hamming-bound]. As the number of can detect: the [Hamming bound][wikipedia-hamming-bound]. As the number of
errors approaches the Hamming bound, we may still be able to detect errors, but errors approaches the Hamming bound, we may still be able to detect errors, but
can no longer fix the data. If we've reached this point the block is can no longer fix the data. If we've reached this point the block is
unrecoverable. unrecoverable.
@@ -1202,7 +1202,7 @@ chip itself.
In littlefs, ECC is entirely optional. Read errors can instead be prevented In littlefs, ECC is entirely optional. Read errors can instead be prevented
proactively by wear leveling. But it's important to note that ECC can be used proactively by wear leveling. But it's important to note that ECC can be used
at the block device level to modestly extend the life of a device. littlefs at the block device level to modestly extend the life of a device. littlefs
respects any errors reported by the block device, allow a block device to respects any errors reported by the block device, allowing a block device to
provide additional aggressive error detection. provide additional aggressive error detection.
--- ---
@@ -1231,7 +1231,7 @@ Generally, wear leveling algorithms fall into one of two categories:
we need to consider all blocks, including blocks that already contain data. we need to consider all blocks, including blocks that already contain data.
As a tradeoff for code size and complexity, littlefs (currently) only provides As a tradeoff for code size and complexity, littlefs (currently) only provides
dynamic wear leveling. This is a best efforts solution. Wear is not distributed dynamic wear leveling. This is a best effort solution. Wear is not distributed
perfectly, but it is distributed among the free blocks and greatly extends the perfectly, but it is distributed among the free blocks and greatly extends the
life of a device. life of a device.
@@ -1378,7 +1378,7 @@ We can make several improvements. First, instead of giving each file its own
metadata pair, we can store multiple files in a single metadata pair. One way metadata pair, we can store multiple files in a single metadata pair. One way
to do this is to directly associate a directory with a metadata pair (or a to do this is to directly associate a directory with a metadata pair (or a
linked list of metadata pairs). This makes it easy for multiple files to share linked list of metadata pairs). This makes it easy for multiple files to share
the directory's metadata pair for logging and reduce the collective storage the directory's metadata pair for logging and reduces the collective storage
overhead. overhead.
The strict binding of metadata pairs and directories also gives users The strict binding of metadata pairs and directories also gives users
@@ -1816,12 +1816,12 @@ while manipulating the directory tree (foreshadowing!).
## The move problem ## The move problem
We have one last challenge. The move problem. Phrasing the problem is simple: We have one last challenge: the move problem. Phrasing the problem is simple:
How do you atomically move a file between two directories? How do you atomically move a file between two directories?
In littlefs we can atomically commit to directories, but we can't create In littlefs we can atomically commit to directories, but we can't create
an atomic commit that span multiple directories. The filesystem must go an atomic commit that spans multiple directories. The filesystem must go
through a minimum of two distinct states to complete a move. through a minimum of two distinct states to complete a move.
To make matters worse, file moves are a common form of synchronization for To make matters worse, file moves are a common form of synchronization for
@@ -1831,13 +1831,13 @@ atomic moves right.
So what can we do? So what can we do?
- We definitely can't just let power-loss result in duplicated or lost files. - We definitely can't just let power-loss result in duplicated or lost files.
This could easily break user's code and would only reveal itself in extreme This could easily break users' code and would only reveal itself in extreme
cases. We were only able to be lazy about the threaded linked-list because cases. We were only able to be lazy about the threaded linked-list because
it isn't user facing and we can handle the corner cases internally. it isn't user facing and we can handle the corner cases internally.
- Some filesystems propagate COW operations up the tree until finding a common - Some filesystems propagate COW operations up the tree until a common parent
parent. Unfortunately this interacts poorly with our threaded tree and brings is found. Unfortunately this interacts poorly with our threaded tree and
back the issue of upward propagation of wear. brings back the issue of upward propagation of wear.
- In a previous version of littlefs we tried to solve this problem by going - In a previous version of littlefs we tried to solve this problem by going
back and forth between the source and destination, marking and unmarking the back and forth between the source and destination, marking and unmarking the
@@ -1852,7 +1852,7 @@ introduction of a mechanism called "global state".
--- ---
Global state is a small set of state that can be updated from _any_ metadata Global state is a small set of state that can be updated from _any_ metadata
pair. Combining global state with metadata pair's ability to update multiple pair. Combining global state with metadata pairs' ability to update multiple
entries in one commit gives us a powerful tool for crafting complex atomic entries in one commit gives us a powerful tool for crafting complex atomic
operations. operations.
@@ -1910,7 +1910,7 @@ the filesystem is mounted.
You may have noticed that global state is very expensive. We keep a copy in You may have noticed that global state is very expensive. We keep a copy in
RAM and a delta in an unbounded number of metadata pairs. Even if we reset RAM and a delta in an unbounded number of metadata pairs. Even if we reset
the global state to its initial value we can't easily clean up the deltas on the global state to its initial value, we can't easily clean up the deltas on
disk. For this reason, it's very important that we keep the size of global disk. For this reason, it's very important that we keep the size of global
state bounded and extremely small. But, even with a strict budget, global state bounded and extremely small. But, even with a strict budget, global
state is incredibly valuable. state is incredibly valuable.

Makefile

@@ -7,15 +7,11 @@ CC ?= gcc
AR ?= ar AR ?= ar
SIZE ?= size SIZE ?= size
SRC += $(wildcard *.c emubd/*.c) SRC += $(wildcard *.c bd/*.c)
OBJ := $(SRC:.c=.o) OBJ := $(SRC:.c=.o)
DEP := $(SRC:.c=.d) DEP := $(SRC:.c=.d)
ASM := $(SRC:.c=.s) ASM := $(SRC:.c=.s)
TEST := $(patsubst tests/%.sh,%,$(wildcard tests/test_*))
SHELL = /bin/bash -o pipefail
ifdef DEBUG ifdef DEBUG
override CFLAGS += -O0 -g3 override CFLAGS += -O0 -g3
else else
@@ -24,12 +20,19 @@ endif
ifdef WORD ifdef WORD
override CFLAGS += -m$(WORD) override CFLAGS += -m$(WORD)
endif endif
ifdef TRACE
override CFLAGS += -DLFS_YES_TRACE
endif
override CFLAGS += -I. override CFLAGS += -I.
override CFLAGS += -std=c99 -Wall -pedantic override CFLAGS += -std=c99 -Wall -pedantic
override CFLAGS += -Wextra -Wshadow -Wjump-misses-init override CFLAGS += -Wextra -Wshadow -Wjump-misses-init -Wundef
# Remove missing-field-initializers because of GCC bug # Remove missing-field-initializers because of GCC bug
override CFLAGS += -Wno-missing-field-initializers override CFLAGS += -Wno-missing-field-initializers
ifdef VERBOSE
override TFLAGS += -v
endif
all: $(TARGET) all: $(TARGET)
@@ -38,18 +41,11 @@ asm: $(ASM)
size: $(OBJ) size: $(OBJ)
$(SIZE) -t $^ $(SIZE) -t $^
.SUFFIXES: test:
test: test_format test_dirs test_files test_seek test_truncate \ ./scripts/test.py $(TFLAGS)
test_entries test_interspersed test_alloc test_paths test_attrs \ .SECONDEXPANSION:
test_move test_orphan test_corrupt test%: tests/test$$(firstword $$(subst \#, ,%)).toml
@rm test.c ./scripts/test.py $@ $(TFLAGS)
test_%: tests/test_%.sh
ifdef QUIET
@./$< | sed -n '/^[-=]/p'
else
./$<
endif
-include $(DEP) -include $(DEP)
@@ -70,3 +66,4 @@ clean:
rm -f $(OBJ) rm -f $(OBJ)
rm -f $(DEP) rm -f $(DEP)
rm -f $(ASM) rm -f $(ASM)
rm -f tests/*.toml.*

README.md

@@ -53,6 +53,7 @@ const struct lfs_config cfg = {
.block_count = 128, .block_count = 128,
.cache_size = 16, .cache_size = 16,
.lookahead_size = 16, .lookahead_size = 16,
.block_cycles = 500,
}; };
// entry point // entry point
@@ -109,7 +110,7 @@ directory functions, with the deviation that the allocation of filesystem
structures must be provided by the user. structures must be provided by the user.
All POSIX operations, such as remove and rename, are atomic, even in event All POSIX operations, such as remove and rename, are atomic, even in event
of power-loss. Additionally, no file updates are not actually committed to of power-loss. Additionally, file updates are not actually committed to
the filesystem until sync or close is called on the file. the filesystem until sync or close is called on the file.
## Other notes ## Other notes

204
bd/lfs_filebd.c Normal file

@@ -0,0 +1,204 @@
/*
* Block device emulated in a file
*
* Copyright (c) 2017, Arm Limited. All rights reserved.
* SPDX-License-Identifier: BSD-3-Clause
*/
#include "bd/lfs_filebd.h"
#include <fcntl.h>
#include <unistd.h>
#include <errno.h>
int lfs_filebd_createcfg(const struct lfs_config *cfg, const char *path,
const struct lfs_filebd_config *bdcfg) {
LFS_TRACE("lfs_filebd_createcfg(%p {.context=%p, "
".read=%p, .prog=%p, .erase=%p, .sync=%p, "
".read_size=%"PRIu32", .prog_size=%"PRIu32", "
".block_size=%"PRIu32", .block_count=%"PRIu32"}, "
"\"%s\", "
"%p {.erase_value=%"PRId32"})",
(void*)cfg, cfg->context,
(void*)(uintptr_t)cfg->read, (void*)(uintptr_t)cfg->prog,
(void*)(uintptr_t)cfg->erase, (void*)(uintptr_t)cfg->sync,
cfg->read_size, cfg->prog_size, cfg->block_size, cfg->block_count,
path, (void*)bdcfg, bdcfg->erase_value);
lfs_filebd_t *bd = cfg->context;
bd->cfg = bdcfg;
// open file
bd->fd = open(path, O_RDWR | O_CREAT, 0666);
if (bd->fd < 0) {
int err = -errno;
LFS_TRACE("lfs_filebd_createcfg -> %d", err);
return err;
}
LFS_TRACE("lfs_filebd_createcfg -> %d", 0);
return 0;
}
int lfs_filebd_create(const struct lfs_config *cfg, const char *path) {
LFS_TRACE("lfs_filebd_create(%p {.context=%p, "
".read=%p, .prog=%p, .erase=%p, .sync=%p, "
".read_size=%"PRIu32", .prog_size=%"PRIu32", "
".block_size=%"PRIu32", .block_count=%"PRIu32"}, "
"\"%s\")",
(void*)cfg, cfg->context,
(void*)(uintptr_t)cfg->read, (void*)(uintptr_t)cfg->prog,
(void*)(uintptr_t)cfg->erase, (void*)(uintptr_t)cfg->sync,
cfg->read_size, cfg->prog_size, cfg->block_size, cfg->block_count,
path);
static const struct lfs_filebd_config defaults = {.erase_value=-1};
int err = lfs_filebd_createcfg(cfg, path, &defaults);
LFS_TRACE("lfs_filebd_create -> %d", err);
return err;
}
int lfs_filebd_destroy(const struct lfs_config *cfg) {
LFS_TRACE("lfs_filebd_destroy(%p)", (void*)cfg);
lfs_filebd_t *bd = cfg->context;
int err = close(bd->fd);
if (err < 0) {
err = -errno;
LFS_TRACE("lfs_filebd_destroy -> %d", err);
return err;
}
LFS_TRACE("lfs_filebd_destroy -> %d", 0);
return 0;
}
int lfs_filebd_read(const struct lfs_config *cfg, lfs_block_t block,
lfs_off_t off, void *buffer, lfs_size_t size) {
LFS_TRACE("lfs_filebd_read(%p, 0x%"PRIx32", %"PRIu32", %p, %"PRIu32")",
(void*)cfg, block, off, buffer, size);
lfs_filebd_t *bd = cfg->context;
// check if read is valid
LFS_ASSERT(off % cfg->read_size == 0);
LFS_ASSERT(size % cfg->read_size == 0);
LFS_ASSERT(block < cfg->block_count);
// zero for reproducability (in case file is truncated)
if (bd->cfg->erase_value != -1) {
memset(buffer, bd->cfg->erase_value, size);
}
// read
off_t res1 = lseek(bd->fd,
(off_t)block*cfg->block_size + (off_t)off, SEEK_SET);
if (res1 < 0) {
int err = -errno;
LFS_TRACE("lfs_filebd_read -> %d", err);
return err;
}
ssize_t res2 = read(bd->fd, buffer, size);
if (res2 < 0) {
int err = -errno;
LFS_TRACE("lfs_filebd_read -> %d", err);
return err;
}
LFS_TRACE("lfs_filebd_read -> %d", 0);
return 0;
}
int lfs_filebd_prog(const struct lfs_config *cfg, lfs_block_t block,
lfs_off_t off, const void *buffer, lfs_size_t size) {
LFS_TRACE("lfs_filebd_prog(%p, 0x%"PRIx32", %"PRIu32", %p, %"PRIu32")",
(void*)cfg, block, off, buffer, size);
lfs_filebd_t *bd = cfg->context;
// check if write is valid
LFS_ASSERT(off % cfg->prog_size == 0);
LFS_ASSERT(size % cfg->prog_size == 0);
LFS_ASSERT(block < cfg->block_count);
// check that data was erased? only needed for testing
if (bd->cfg->erase_value != -1) {
off_t res1 = lseek(bd->fd,
(off_t)block*cfg->block_size + (off_t)off, SEEK_SET);
if (res1 < 0) {
int err = -errno;
LFS_TRACE("lfs_filebd_prog -> %d", err);
return err;
}
for (lfs_off_t i = 0; i < size; i++) {
uint8_t c;
ssize_t res2 = read(bd->fd, &c, 1);
if (res2 < 0) {
int err = -errno;
LFS_TRACE("lfs_filebd_prog -> %d", err);
return err;
}
LFS_ASSERT(c == bd->cfg->erase_value);
}
}
// program data
off_t res1 = lseek(bd->fd,
(off_t)block*cfg->block_size + (off_t)off, SEEK_SET);
if (res1 < 0) {
int err = -errno;
LFS_TRACE("lfs_filebd_prog -> %d", err);
return err;
}
ssize_t res2 = write(bd->fd, buffer, size);
if (res2 < 0) {
int err = -errno;
LFS_TRACE("lfs_filebd_prog -> %d", err);
return err;
}
LFS_TRACE("lfs_filebd_prog -> %d", 0);
return 0;
}
int lfs_filebd_erase(const struct lfs_config *cfg, lfs_block_t block) {
LFS_TRACE("lfs_filebd_erase(%p, 0x%"PRIx32")", (void*)cfg, block);
lfs_filebd_t *bd = cfg->context;
// check if erase is valid
LFS_ASSERT(block < cfg->block_count);
// erase, only needed for testing
if (bd->cfg->erase_value != -1) {
off_t res1 = lseek(bd->fd, (off_t)block*cfg->block_size, SEEK_SET);
if (res1 < 0) {
int err = -errno;
LFS_TRACE("lfs_filebd_erase -> %d", err);
return err;
}
for (lfs_off_t i = 0; i < cfg->block_size; i++) {
ssize_t res2 = write(bd->fd, &(uint8_t){bd->cfg->erase_value}, 1);
if (res2 < 0) {
int err = -errno;
LFS_TRACE("lfs_filebd_erase -> %d", err);
return err;
}
}
}
LFS_TRACE("lfs_filebd_erase -> %d", 0);
return 0;
}
int lfs_filebd_sync(const struct lfs_config *cfg) {
LFS_TRACE("lfs_filebd_sync(%p)", (void*)cfg);
// file sync
lfs_filebd_t *bd = cfg->context;
int err = fsync(bd->fd);
if (err) {
err = -errno;
LFS_TRACE("lfs_filebd_sync -> %d", 0);
return err;
}
LFS_TRACE("lfs_filebd_sync -> %d", 0);
return 0;
}

65
bd/lfs_filebd.h Normal file

@@ -0,0 +1,65 @@
/*
* Block device emulated in a file
*
* Copyright (c) 2017, Arm Limited. All rights reserved.
* SPDX-License-Identifier: BSD-3-Clause
*/
#ifndef LFS_FILEBD_H
#define LFS_FILEBD_H
#include "lfs.h"
#include "lfs_util.h"
#ifdef __cplusplus
extern "C"
{
#endif
// filebd config (optional)
struct lfs_filebd_config {
// 8-bit erase value to use for simulating erases. -1 does not simulate
// erases, which can speed up testing by avoiding all the extra block-device
// operations to store the erase value.
int32_t erase_value;
};
// filebd state
typedef struct lfs_filebd {
int fd;
const struct lfs_filebd_config *cfg;
} lfs_filebd_t;
// Create a file block device using the geometry in lfs_config
int lfs_filebd_create(const struct lfs_config *cfg, const char *path);
int lfs_filebd_createcfg(const struct lfs_config *cfg, const char *path,
const struct lfs_filebd_config *bdcfg);
// Clean up memory associated with block device
int lfs_filebd_destroy(const struct lfs_config *cfg);
// Read a block
int lfs_filebd_read(const struct lfs_config *cfg, lfs_block_t block,
lfs_off_t off, void *buffer, lfs_size_t size);
// Program a block
//
// The block must have previously been erased.
int lfs_filebd_prog(const struct lfs_config *cfg, lfs_block_t block,
lfs_off_t off, const void *buffer, lfs_size_t size);
// Erase a block
//
// A block must be erased before being programmed. The
// state of an erased block is undefined.
int lfs_filebd_erase(const struct lfs_config *cfg, lfs_block_t block);
// Sync the block device
int lfs_filebd_sync(const struct lfs_config *cfg);
#ifdef __cplusplus
} /* extern "C" */
#endif
#endif
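Not part of the diff, but for orientation: a minimal sketch of wiring this file-backed block device into an lfs_config. The geometry values, the "disk.img" path, and the example_mount helper are illustrative assumptions, not anything mandated by the header.
#include "lfs.h"
#include "bd/lfs_filebd.h"
// sketch: route the lfs_config callbacks through filebd and point context
// at the filebd state (geometry below is an arbitrary example)
static lfs_filebd_t filebd;
static const struct lfs_config example_cfg = {
    .context        = &filebd,
    .read           = lfs_filebd_read,
    .prog           = lfs_filebd_prog,
    .erase          = lfs_filebd_erase,
    .sync           = lfs_filebd_sync,
    .read_size      = 16,
    .prog_size      = 16,
    .block_size     = 4096,
    .block_count    = 128,
    .block_cycles   = 500,
    .cache_size     = 16,
    .lookahead_size = 16,
};
int example_mount(lfs_t *lfs) {
    // back the block device with a file on the host filesystem
    int err = lfs_filebd_create(&example_cfg, "disk.img");
    if (err) {
        return err;
    }
    return lfs_mount(lfs, &example_cfg);
}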

138
bd/lfs_rambd.c Normal file
View File

@@ -0,0 +1,138 @@
/*
* Block device emulated in RAM
*
* Copyright (c) 2017, Arm Limited. All rights reserved.
* SPDX-License-Identifier: BSD-3-Clause
*/
#include "bd/lfs_rambd.h"
int lfs_rambd_createcfg(const struct lfs_config *cfg,
const struct lfs_rambd_config *bdcfg) {
LFS_TRACE("lfs_rambd_createcfg(%p {.context=%p, "
".read=%p, .prog=%p, .erase=%p, .sync=%p, "
".read_size=%"PRIu32", .prog_size=%"PRIu32", "
".block_size=%"PRIu32", .block_count=%"PRIu32"}, "
"%p {.erase_value=%"PRId32", .buffer=%p})",
(void*)cfg, cfg->context,
(void*)(uintptr_t)cfg->read, (void*)(uintptr_t)cfg->prog,
(void*)(uintptr_t)cfg->erase, (void*)(uintptr_t)cfg->sync,
cfg->read_size, cfg->prog_size, cfg->block_size, cfg->block_count,
(void*)bdcfg, bdcfg->erase_value, bdcfg->buffer);
lfs_rambd_t *bd = cfg->context;
bd->cfg = bdcfg;
// allocate buffer?
if (bd->cfg->buffer) {
bd->buffer = bd->cfg->buffer;
} else {
bd->buffer = lfs_malloc(cfg->block_size * cfg->block_count);
if (!bd->buffer) {
LFS_TRACE("lfs_rambd_createcfg -> %d", LFS_ERR_NOMEM);
return LFS_ERR_NOMEM;
}
}
// zero for reproducibility?
if (bd->cfg->erase_value != -1) {
memset(bd->buffer, bd->cfg->erase_value,
cfg->block_size * cfg->block_count);
}
LFS_TRACE("lfs_rambd_createcfg -> %d", 0);
return 0;
}
int lfs_rambd_create(const struct lfs_config *cfg) {
LFS_TRACE("lfs_rambd_create(%p {.context=%p, "
".read=%p, .prog=%p, .erase=%p, .sync=%p, "
".read_size=%"PRIu32", .prog_size=%"PRIu32", "
".block_size=%"PRIu32", .block_count=%"PRIu32"})",
(void*)cfg, cfg->context,
(void*)(uintptr_t)cfg->read, (void*)(uintptr_t)cfg->prog,
(void*)(uintptr_t)cfg->erase, (void*)(uintptr_t)cfg->sync,
cfg->read_size, cfg->prog_size, cfg->block_size, cfg->block_count);
static const struct lfs_rambd_config defaults = {.erase_value=-1};
int err = lfs_rambd_createcfg(cfg, &defaults);
LFS_TRACE("lfs_rambd_create -> %d", err);
return err;
}
int lfs_rambd_destroy(const struct lfs_config *cfg) {
LFS_TRACE("lfs_rambd_destroy(%p)", (void*)cfg);
// clean up memory
lfs_rambd_t *bd = cfg->context;
if (!bd->cfg->buffer) {
lfs_free(bd->buffer);
}
LFS_TRACE("lfs_rambd_destroy -> %d", 0);
return 0;
}
int lfs_rambd_read(const struct lfs_config *cfg, lfs_block_t block,
lfs_off_t off, void *buffer, lfs_size_t size) {
LFS_TRACE("lfs_rambd_read(%p, 0x%"PRIx32", %"PRIu32", %p, %"PRIu32")",
(void*)cfg, block, off, buffer, size);
lfs_rambd_t *bd = cfg->context;
// check if read is valid
LFS_ASSERT(off % cfg->read_size == 0);
LFS_ASSERT(size % cfg->read_size == 0);
LFS_ASSERT(block < cfg->block_count);
// read data
memcpy(buffer, &bd->buffer[block*cfg->block_size + off], size);
LFS_TRACE("lfs_rambd_read -> %d", 0);
return 0;
}
int lfs_rambd_prog(const struct lfs_config *cfg, lfs_block_t block,
lfs_off_t off, const void *buffer, lfs_size_t size) {
LFS_TRACE("lfs_rambd_prog(%p, 0x%"PRIx32", %"PRIu32", %p, %"PRIu32")",
(void*)cfg, block, off, buffer, size);
lfs_rambd_t *bd = cfg->context;
// check if write is valid
LFS_ASSERT(off % cfg->prog_size == 0);
LFS_ASSERT(size % cfg->prog_size == 0);
LFS_ASSERT(block < cfg->block_count);
// check that data was erased? only needed for testing
if (bd->cfg->erase_value != -1) {
for (lfs_off_t i = 0; i < size; i++) {
LFS_ASSERT(bd->buffer[block*cfg->block_size + off + i] ==
bd->cfg->erase_value);
}
}
// program data
memcpy(&bd->buffer[block*cfg->block_size + off], buffer, size);
LFS_TRACE("lfs_rambd_prog -> %d", 0);
return 0;
}
int lfs_rambd_erase(const struct lfs_config *cfg, lfs_block_t block) {
LFS_TRACE("lfs_rambd_erase(%p, 0x%"PRIx32")", (void*)cfg, block);
lfs_rambd_t *bd = cfg->context;
// check if erase is valid
LFS_ASSERT(block < cfg->block_count);
// erase, only needed for testing
if (bd->cfg->erase_value != -1) {
memset(&bd->buffer[block*cfg->block_size],
bd->cfg->erase_value, cfg->block_size);
}
LFS_TRACE("lfs_rambd_erase -> %d", 0);
return 0;
}
int lfs_rambd_sync(const struct lfs_config *cfg) {
LFS_TRACE("lfs_rambd_sync(%p)", (void*)cfg);
// sync does nothing because we aren't backed by anything real
(void)cfg;
LFS_TRACE("lfs_rambd_sync -> %d", 0);
return 0;
}

67
bd/lfs_rambd.h Normal file
View File

@@ -0,0 +1,67 @@
/*
* Block device emulated in RAM
*
* Copyright (c) 2017, Arm Limited. All rights reserved.
* SPDX-License-Identifier: BSD-3-Clause
*/
#ifndef LFS_RAMBD_H
#define LFS_RAMBD_H
#include "lfs.h"
#include "lfs_util.h"
#ifdef __cplusplus
extern "C"
{
#endif
// rambd config (optional)
struct lfs_rambd_config {
// 8-bit erase value to simulate erasing with. -1 indicates no erase
// occurs, which is still a valid block device
int32_t erase_value;
// Optional statically allocated buffer for the block device.
void *buffer;
};
// rambd state
typedef struct lfs_rambd {
uint8_t *buffer;
const struct lfs_rambd_config *cfg;
} lfs_rambd_t;
// Create a RAM block device using the geometry in lfs_config
int lfs_rambd_create(const struct lfs_config *cfg);
int lfs_rambd_createcfg(const struct lfs_config *cfg,
const struct lfs_rambd_config *bdcfg);
// Clean up memory associated with block device
int lfs_rambd_destroy(const struct lfs_config *cfg);
// Read a block
int lfs_rambd_read(const struct lfs_config *cfg, lfs_block_t block,
lfs_off_t off, void *buffer, lfs_size_t size);
// Program a block
//
// The block must have previously been erased.
int lfs_rambd_prog(const struct lfs_config *cfg, lfs_block_t block,
lfs_off_t off, const void *buffer, lfs_size_t size);
// Erase a block
//
// A block must be erased before being programmed. The
// state of an erased block is undefined.
int lfs_rambd_erase(const struct lfs_config *cfg, lfs_block_t block);
// Sync the block device
int lfs_rambd_sync(const struct lfs_config *cfg);
#ifdef __cplusplus
} /* extern "C" */
#endif
#endif
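Again not part of the diff: a minimal sketch of backing the RAM block device with a statically allocated buffer via lfs_rambd_createcfg, so no lfs_malloc is needed. The EXAMPLE_* geometry and the helper name are assumptions for illustration.
#include "lfs.h"
#include "bd/lfs_rambd.h"
#define EXAMPLE_BLOCK_SIZE  4096
#define EXAMPLE_BLOCK_COUNT 128
// static backing store sized to match the lfs_config geometry
static uint8_t rambd_buffer[EXAMPLE_BLOCK_SIZE * EXAMPLE_BLOCK_COUNT];
static lfs_rambd_t rambd;
static const struct lfs_rambd_config rambd_cfg = {
    .erase_value = -1,           // skip erase simulation for speed
    .buffer      = rambd_buffer, // avoids the lfs_malloc in createcfg
};
int example_rambd_init(const struct lfs_config *cfg) {
    // cfg->context must point at rambd, and cfg's block_size/block_count
    // must agree with EXAMPLE_BLOCK_SIZE/EXAMPLE_BLOCK_COUNT
    return lfs_rambd_createcfg(cfg, &rambd_cfg);
}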

300
bd/lfs_testbd.c Normal file
View File

@@ -0,0 +1,300 @@
/*
* Testing block device, wraps filebd and rambd while providing a bunch
* of hooks for testing littlefs in various conditions.
*
* Copyright (c) 2017, Arm Limited. All rights reserved.
* SPDX-License-Identifier: BSD-3-Clause
*/
#include "bd/lfs_testbd.h"
#include <stdlib.h>
int lfs_testbd_createcfg(const struct lfs_config *cfg, const char *path,
const struct lfs_testbd_config *bdcfg) {
LFS_TRACE("lfs_testbd_createcfg(%p {.context=%p, "
".read=%p, .prog=%p, .erase=%p, .sync=%p, "
".read_size=%"PRIu32", .prog_size=%"PRIu32", "
".block_size=%"PRIu32", .block_count=%"PRIu32"}, "
"\"%s\", "
"%p {.erase_value=%"PRId32", .erase_cycles=%"PRIu32", "
".badblock_behavior=%"PRIu8", .power_cycles=%"PRIu32", "
".buffer=%p, .wear_buffer=%p})",
(void*)cfg, cfg->context,
(void*)(uintptr_t)cfg->read, (void*)(uintptr_t)cfg->prog,
(void*)(uintptr_t)cfg->erase, (void*)(uintptr_t)cfg->sync,
cfg->read_size, cfg->prog_size, cfg->block_size, cfg->block_count,
path, (void*)bdcfg, bdcfg->erase_value, bdcfg->erase_cycles,
bdcfg->badblock_behavior, bdcfg->power_cycles,
bdcfg->buffer, bdcfg->wear_buffer);
lfs_testbd_t *bd = cfg->context;
bd->cfg = bdcfg;
// setup testing things
bd->persist = path;
bd->power_cycles = bd->cfg->power_cycles;
if (bd->cfg->erase_cycles) {
if (bd->cfg->wear_buffer) {
bd->wear = bd->cfg->wear_buffer;
} else {
bd->wear = lfs_malloc(sizeof(lfs_testbd_wear_t) * cfg->block_count);
if (!bd->wear) {
LFS_TRACE("lfs_testbd_createcfg -> %d", LFS_ERR_NOMEM);
return LFS_ERR_NOMEM;
}
}
memset(bd->wear, 0, sizeof(lfs_testbd_wear_t) * cfg->block_count);
}
// create underlying block device
if (bd->persist) {
bd->u.file.cfg = (struct lfs_filebd_config){
.erase_value = bd->cfg->erase_value,
};
int err = lfs_filebd_createcfg(cfg, path, &bd->u.file.cfg);
LFS_TRACE("lfs_testbd_createcfg -> %d", err);
return err;
} else {
bd->u.ram.cfg = (struct lfs_rambd_config){
.erase_value = bd->cfg->erase_value,
.buffer = bd->cfg->buffer,
};
int err = lfs_rambd_createcfg(cfg, &bd->u.ram.cfg);
LFS_TRACE("lfs_testbd_createcfg -> %d", err);
return err;
}
}
int lfs_testbd_create(const struct lfs_config *cfg, const char *path) {
LFS_TRACE("lfs_testbd_create(%p {.context=%p, "
".read=%p, .prog=%p, .erase=%p, .sync=%p, "
".read_size=%"PRIu32", .prog_size=%"PRIu32", "
".block_size=%"PRIu32", .block_count=%"PRIu32"}, "
"\"%s\")",
(void*)cfg, cfg->context,
(void*)(uintptr_t)cfg->read, (void*)(uintptr_t)cfg->prog,
(void*)(uintptr_t)cfg->erase, (void*)(uintptr_t)cfg->sync,
cfg->read_size, cfg->prog_size, cfg->block_size, cfg->block_count,
path);
static const struct lfs_testbd_config defaults = {.erase_value=-1};
int err = lfs_testbd_createcfg(cfg, path, &defaults);
LFS_TRACE("lfs_testbd_create -> %d", err);
return err;
}
int lfs_testbd_destroy(const struct lfs_config *cfg) {
LFS_TRACE("lfs_testbd_destroy(%p)", (void*)cfg);
lfs_testbd_t *bd = cfg->context;
if (bd->cfg->erase_cycles && !bd->cfg->wear_buffer) {
lfs_free(bd->wear);
}
if (bd->persist) {
int err = lfs_filebd_destroy(cfg);
LFS_TRACE("lfs_testbd_destroy -> %d", err);
return err;
} else {
int err = lfs_rambd_destroy(cfg);
LFS_TRACE("lfs_testbd_destroy -> %d", err);
return err;
}
}
/// Internal mapping to block devices ///
static int lfs_testbd_rawread(const struct lfs_config *cfg, lfs_block_t block,
lfs_off_t off, void *buffer, lfs_size_t size) {
lfs_testbd_t *bd = cfg->context;
if (bd->persist) {
return lfs_filebd_read(cfg, block, off, buffer, size);
} else {
return lfs_rambd_read(cfg, block, off, buffer, size);
}
}
static int lfs_testbd_rawprog(const struct lfs_config *cfg, lfs_block_t block,
lfs_off_t off, const void *buffer, lfs_size_t size) {
lfs_testbd_t *bd = cfg->context;
if (bd->persist) {
return lfs_filebd_prog(cfg, block, off, buffer, size);
} else {
return lfs_rambd_prog(cfg, block, off, buffer, size);
}
}
static int lfs_testbd_rawerase(const struct lfs_config *cfg,
lfs_block_t block) {
lfs_testbd_t *bd = cfg->context;
if (bd->persist) {
return lfs_filebd_erase(cfg, block);
} else {
return lfs_rambd_erase(cfg, block);
}
}
static int lfs_testbd_rawsync(const struct lfs_config *cfg) {
lfs_testbd_t *bd = cfg->context;
if (bd->persist) {
return lfs_filebd_sync(cfg);
} else {
return lfs_rambd_sync(cfg);
}
}
/// block device API ///
int lfs_testbd_read(const struct lfs_config *cfg, lfs_block_t block,
lfs_off_t off, void *buffer, lfs_size_t size) {
LFS_TRACE("lfs_testbd_read(%p, 0x%"PRIx32", %"PRIu32", %p, %"PRIu32")",
(void*)cfg, block, off, buffer, size);
lfs_testbd_t *bd = cfg->context;
// check if read is valid
LFS_ASSERT(off % cfg->read_size == 0);
LFS_ASSERT(size % cfg->read_size == 0);
LFS_ASSERT(block < cfg->block_count);
// block bad?
if (bd->cfg->erase_cycles && bd->wear[block] >= bd->cfg->erase_cycles &&
bd->cfg->badblock_behavior == LFS_TESTBD_BADBLOCK_READERROR) {
LFS_TRACE("lfs_testbd_read -> %d", LFS_ERR_CORRUPT);
return LFS_ERR_CORRUPT;
}
// read
int err = lfs_testbd_rawread(cfg, block, off, buffer, size);
LFS_TRACE("lfs_testbd_read -> %d", err);
return err;
}
int lfs_testbd_prog(const struct lfs_config *cfg, lfs_block_t block,
lfs_off_t off, const void *buffer, lfs_size_t size) {
LFS_TRACE("lfs_testbd_prog(%p, 0x%"PRIx32", %"PRIu32", %p, %"PRIu32")",
(void*)cfg, block, off, buffer, size);
lfs_testbd_t *bd = cfg->context;
// check if write is valid
LFS_ASSERT(off % cfg->prog_size == 0);
LFS_ASSERT(size % cfg->prog_size == 0);
LFS_ASSERT(block < cfg->block_count);
// block bad?
if (bd->cfg->erase_cycles && bd->wear[block] >= bd->cfg->erase_cycles) {
if (bd->cfg->badblock_behavior ==
LFS_TESTBD_BADBLOCK_PROGERROR) {
LFS_TRACE("lfs_testbd_prog -> %d", LFS_ERR_CORRUPT);
return LFS_ERR_CORRUPT;
} else if (bd->cfg->badblock_behavior ==
LFS_TESTBD_BADBLOCK_PROGNOOP ||
bd->cfg->badblock_behavior ==
LFS_TESTBD_BADBLOCK_ERASENOOP) {
LFS_TRACE("lfs_testbd_prog -> %d", 0);
return 0;
}
}
// prog
int err = lfs_testbd_rawprog(cfg, block, off, buffer, size);
if (err) {
LFS_TRACE("lfs_testbd_prog -> %d", err);
return err;
}
// lose power?
if (bd->power_cycles > 0) {
bd->power_cycles -= 1;
if (bd->power_cycles == 0) {
// sync to make sure we persist the last changes
assert(lfs_testbd_rawsync(cfg) == 0);
// simulate power loss
exit(33);
}
}
LFS_TRACE("lfs_testbd_prog -> %d", 0);
return 0;
}
int lfs_testbd_erase(const struct lfs_config *cfg, lfs_block_t block) {
LFS_TRACE("lfs_testbd_erase(%p, 0x%"PRIx32")", (void*)cfg, block);
lfs_testbd_t *bd = cfg->context;
// check if erase is valid
LFS_ASSERT(block < cfg->block_count);
// block bad?
if (bd->cfg->erase_cycles) {
if (bd->wear[block] >= bd->cfg->erase_cycles) {
if (bd->cfg->badblock_behavior ==
LFS_TESTBD_BADBLOCK_ERASEERROR) {
LFS_TRACE("lfs_testbd_erase -> %d", LFS_ERR_CORRUPT);
return LFS_ERR_CORRUPT;
} else if (bd->cfg->badblock_behavior ==
LFS_TESTBD_BADBLOCK_ERASENOOP) {
LFS_TRACE("lfs_testbd_erase -> %d", 0);
return 0;
}
} else {
// mark wear
bd->wear[block] += 1;
}
}
// erase
int err = lfs_testbd_rawerase(cfg, block);
if (err) {
LFS_TRACE("lfs_testbd_erase -> %d", err);
return err;
}
// lose power?
if (bd->power_cycles > 0) {
bd->power_cycles -= 1;
if (bd->power_cycles == 0) {
// sync to make sure we persist the last changes
assert(lfs_testbd_rawsync(cfg) == 0);
// simulate power loss
exit(33);
}
}
LFS_TRACE("lfs_testbd_erase -> %d", 0);
return 0;
}
int lfs_testbd_sync(const struct lfs_config *cfg) {
LFS_TRACE("lfs_testbd_sync(%p)", (void*)cfg);
int err = lfs_testbd_rawsync(cfg);
LFS_TRACE("lfs_testbd_sync -> %d", err);
return err;
}
/// simulated wear operations ///
lfs_testbd_swear_t lfs_testbd_getwear(const struct lfs_config *cfg,
lfs_block_t block) {
LFS_TRACE("lfs_testbd_getwear(%p, %"PRIu32")", (void*)cfg, block);
lfs_testbd_t *bd = cfg->context;
// check if block is valid
LFS_ASSERT(bd->cfg->erase_cycles);
LFS_ASSERT(block < cfg->block_count);
LFS_TRACE("lfs_testbd_getwear -> %"PRIu32, bd->wear[block]);
return bd->wear[block];
}
int lfs_testbd_setwear(const struct lfs_config *cfg,
lfs_block_t block, lfs_testbd_wear_t wear) {
LFS_TRACE("lfs_testbd_setwear(%p, %"PRIu32")", (void*)cfg, block);
lfs_testbd_t *bd = cfg->context;
// check if block is valid
LFS_ASSERT(bd->cfg->erase_cycles);
LFS_ASSERT(block < cfg->block_count);
bd->wear[block] = wear;
LFS_TRACE("lfs_testbd_setwear -> %d", 0);
return 0;
}

134
bd/lfs_testbd.h Normal file
View File

@@ -0,0 +1,134 @@
/*
* Testing block device, wraps filebd and rambd while providing a bunch
* of hooks for testing littlefs in various conditions.
*
* Copyright (c) 2017, Arm Limited. All rights reserved.
* SPDX-License-Identifier: BSD-3-Clause
*/
#ifndef LFS_TESTBD_H
#define LFS_TESTBD_H
#include "lfs.h"
#include "lfs_util.h"
#include "bd/lfs_rambd.h"
#include "bd/lfs_filebd.h"
#ifdef __cplusplus
extern "C"
{
#endif
// Mode determining how "bad blocks" behave during testing. This simulates
// some real-world circumstances such as progs not sticking (prog-noop),
// a readonly disk (erase-noop), and ECC failures (read-error).
//
// Note that read-noop is not allowed. Reads _must_ return a consistent (but
// possibly arbitrary) value on every read.
enum lfs_testbd_badblock_behavior {
LFS_TESTBD_BADBLOCK_PROGERROR,
LFS_TESTBD_BADBLOCK_ERASEERROR,
LFS_TESTBD_BADBLOCK_READERROR,
LFS_TESTBD_BADBLOCK_PROGNOOP,
LFS_TESTBD_BADBLOCK_ERASENOOP,
};
// Type for measuring wear
typedef uint32_t lfs_testbd_wear_t;
typedef int32_t lfs_testbd_swear_t;
// testbd config, this is required for testing
struct lfs_testbd_config {
// 8-bit erase value to use for simulating erases. -1 does not simulate
// erases, which can speed up testing by avoiding all the extra block-device
// operations to store the erase value.
int32_t erase_value;
// Number of erase cycles before a block becomes "bad". The exact behavior
// of bad blocks is controlled by badblock_behavior.
uint32_t erase_cycles;
// The mode determining how bad blocks fail
uint8_t badblock_behavior;
// Number of write operations (erase/prog) before forcefully killing
// the program with exit. Simulates power-loss. 0 disables.
uint32_t power_cycles;
// Optional buffer for RAM block device.
void *buffer;
// Optional buffer for wear
void *wear_buffer;
};
// testbd state
typedef struct lfs_testbd {
union {
struct {
lfs_filebd_t bd;
struct lfs_filebd_config cfg;
} file;
struct {
lfs_rambd_t bd;
struct lfs_rambd_config cfg;
} ram;
} u;
bool persist;
uint32_t power_cycles;
lfs_testbd_wear_t *wear;
const struct lfs_testbd_config *cfg;
} lfs_testbd_t;
/// Block device API ///
// Create a test block device using the geometry in lfs_config
//
// Note that filebd is used if a path is provided; if path is NULL,
// testbd will use rambd, which can be much faster.
int lfs_testbd_create(const struct lfs_config *cfg, const char *path);
int lfs_testbd_createcfg(const struct lfs_config *cfg, const char *path,
const struct lfs_testbd_config *bdcfg);
// Clean up memory associated with block device
int lfs_testbd_destroy(const struct lfs_config *cfg);
// Read a block
int lfs_testbd_read(const struct lfs_config *cfg, lfs_block_t block,
lfs_off_t off, void *buffer, lfs_size_t size);
// Program a block
//
// The block must have previously been erased.
int lfs_testbd_prog(const struct lfs_config *cfg, lfs_block_t block,
lfs_off_t off, const void *buffer, lfs_size_t size);
// Erase a block
//
// A block must be erased before being programmed. The
// state of an erased block is undefined.
int lfs_testbd_erase(const struct lfs_config *cfg, lfs_block_t block);
// Sync the block device
int lfs_testbd_sync(const struct lfs_config *cfg);
/// Additional extended API for driving test features ///
// Get simulated wear on a given block
lfs_testbd_swear_t lfs_testbd_getwear(const struct lfs_config *cfg,
lfs_block_t block);
// Manually set simulated wear on a given block
int lfs_testbd_setwear(const struct lfs_config *cfg,
lfs_block_t block, lfs_testbd_wear_t wear);
#ifdef __cplusplus
} /* extern "C" */
#endif
#endif
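A hedged sketch of driving the test hooks above: simulate erases, wear blocks out after a fixed number of erase cycles, make worn blocks fail on erase, and kill the process after a number of writes to simulate power loss. The specific numbers and the helper name are illustrative only.
#include "lfs.h"
#include "bd/lfs_testbd.h"
static lfs_testbd_t testbd;
static const struct lfs_testbd_config testbd_cfg = {
    .erase_value       = 0xff,                            // simulate erases with 0xff
    .erase_cycles      = 1000,                            // block goes "bad" after 1000 erases
    .badblock_behavior = LFS_TESTBD_BADBLOCK_ERASEERROR,  // bad blocks fail on erase
    .power_cycles      = 10000,                           // exit(33) after 10000 writes
};
int example_testbd_init(const struct lfs_config *cfg) {
    // cfg->context must point at testbd; a NULL path selects the rambd backend
    int err = lfs_testbd_createcfg(cfg, NULL, &testbd_cfg);
    if (err) {
        return err;
    }
    // pre-wear block 0 so it is already "bad" for targeted tests
    return lfs_testbd_setwear(cfg, 0, 1000);
}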

324
emubd/lfs_emubd.c
View File

@@ -1,324 +0,0 @@
/*
* Block device emulated on standard files
*
* Copyright (c) 2017, Arm Limited. All rights reserved.
* SPDX-License-Identifier: BSD-3-Clause
*/
#include "emubd/lfs_emubd.h"
#include <errno.h>
#include <string.h>
#include <stdlib.h>
#include <stdio.h>
#include <limits.h>
#include <dirent.h>
#include <sys/stat.h>
#include <unistd.h>
#include <assert.h>
#include <stdbool.h>
#include <inttypes.h>
// Emulated block device utils
static inline void lfs_emubd_tole32(lfs_emubd_t *emu) {
emu->cfg.read_size = lfs_tole32(emu->cfg.read_size);
emu->cfg.prog_size = lfs_tole32(emu->cfg.prog_size);
emu->cfg.block_size = lfs_tole32(emu->cfg.block_size);
emu->cfg.block_count = lfs_tole32(emu->cfg.block_count);
emu->stats.read_count = lfs_tole32(emu->stats.read_count);
emu->stats.prog_count = lfs_tole32(emu->stats.prog_count);
emu->stats.erase_count = lfs_tole32(emu->stats.erase_count);
for (unsigned i = 0; i < sizeof(emu->history.blocks) /
sizeof(emu->history.blocks[0]); i++) {
emu->history.blocks[i] = lfs_tole32(emu->history.blocks[i]);
}
}
static inline void lfs_emubd_fromle32(lfs_emubd_t *emu) {
emu->cfg.read_size = lfs_fromle32(emu->cfg.read_size);
emu->cfg.prog_size = lfs_fromle32(emu->cfg.prog_size);
emu->cfg.block_size = lfs_fromle32(emu->cfg.block_size);
emu->cfg.block_count = lfs_fromle32(emu->cfg.block_count);
emu->stats.read_count = lfs_fromle32(emu->stats.read_count);
emu->stats.prog_count = lfs_fromle32(emu->stats.prog_count);
emu->stats.erase_count = lfs_fromle32(emu->stats.erase_count);
for (unsigned i = 0; i < sizeof(emu->history.blocks) /
sizeof(emu->history.blocks[0]); i++) {
emu->history.blocks[i] = lfs_fromle32(emu->history.blocks[i]);
}
}
// Block device emulated on existing filesystem
int lfs_emubd_create(const struct lfs_config *cfg, const char *path) {
lfs_emubd_t *emu = cfg->context;
emu->cfg.read_size = cfg->read_size;
emu->cfg.prog_size = cfg->prog_size;
emu->cfg.block_size = cfg->block_size;
emu->cfg.block_count = cfg->block_count;
// Allocate buffer for creating children files
size_t pathlen = strlen(path);
emu->path = malloc(pathlen + 1 + LFS_NAME_MAX + 1);
if (!emu->path) {
return -ENOMEM;
}
strcpy(emu->path, path);
emu->path[pathlen] = '/';
emu->child = &emu->path[pathlen+1];
memset(emu->child, '\0', LFS_NAME_MAX+1);
// Create directory if it doesn't exist
int err = mkdir(path, 0777);
if (err && errno != EEXIST) {
return -errno;
}
// Load stats to continue incrementing
snprintf(emu->child, LFS_NAME_MAX, ".stats");
FILE *f = fopen(emu->path, "r");
if (!f) {
memset(&emu->stats, 0, sizeof(emu->stats));
} else {
size_t res = fread(&emu->stats, sizeof(emu->stats), 1, f);
lfs_emubd_fromle32(emu);
if (res < 1) {
return -errno;
}
err = fclose(f);
if (err) {
return -errno;
}
}
// Load history
snprintf(emu->child, LFS_NAME_MAX, ".history");
f = fopen(emu->path, "r");
if (!f) {
memset(&emu->history, 0, sizeof(emu->history));
} else {
size_t res = fread(&emu->history, sizeof(emu->history), 1, f);
lfs_emubd_fromle32(emu);
if (res < 1) {
return -errno;
}
err = fclose(f);
if (err) {
return -errno;
}
}
return 0;
}
void lfs_emubd_destroy(const struct lfs_config *cfg) {
lfs_emubd_sync(cfg);
lfs_emubd_t *emu = cfg->context;
free(emu->path);
}
int lfs_emubd_read(const struct lfs_config *cfg, lfs_block_t block,
lfs_off_t off, void *buffer, lfs_size_t size) {
lfs_emubd_t *emu = cfg->context;
uint8_t *data = buffer;
// Check if read is valid
assert(off % cfg->read_size == 0);
assert(size % cfg->read_size == 0);
assert(block < cfg->block_count);
// Zero out buffer for debugging
memset(data, 0, size);
// Read data
snprintf(emu->child, LFS_NAME_MAX, "%" PRIx32, block);
FILE *f = fopen(emu->path, "rb");
if (!f && errno != ENOENT) {
return -errno;
}
if (f) {
int err = fseek(f, off, SEEK_SET);
if (err) {
return -errno;
}
size_t res = fread(data, 1, size, f);
if (res < size && !feof(f)) {
return -errno;
}
err = fclose(f);
if (err) {
return -errno;
}
}
emu->stats.read_count += 1;
return 0;
}
int lfs_emubd_prog(const struct lfs_config *cfg, lfs_block_t block,
lfs_off_t off, const void *buffer, lfs_size_t size) {
lfs_emubd_t *emu = cfg->context;
const uint8_t *data = buffer;
// Check if write is valid
assert(off % cfg->prog_size == 0);
assert(size % cfg->prog_size == 0);
assert(block < cfg->block_count);
// Program data
snprintf(emu->child, LFS_NAME_MAX, "%" PRIx32, block);
FILE *f = fopen(emu->path, "r+b");
if (!f) {
return (errno == EACCES) ? 0 : -errno;
}
// Check that file was erased
assert(f);
int err = fseek(f, off, SEEK_SET);
if (err) {
return -errno;
}
size_t res = fwrite(data, 1, size, f);
if (res < size) {
return -errno;
}
err = fseek(f, off, SEEK_SET);
if (err) {
return -errno;
}
uint8_t dat;
res = fread(&dat, 1, 1, f);
if (res < 1) {
return -errno;
}
err = fclose(f);
if (err) {
return -errno;
}
// update history and stats
if (block != emu->history.blocks[0]) {
memcpy(&emu->history.blocks[1], &emu->history.blocks[0],
sizeof(emu->history) - sizeof(emu->history.blocks[0]));
emu->history.blocks[0] = block;
}
emu->stats.prog_count += 1;
return 0;
}
int lfs_emubd_erase(const struct lfs_config *cfg, lfs_block_t block) {
lfs_emubd_t *emu = cfg->context;
// Check if erase is valid
assert(block < cfg->block_count);
// Erase the block
snprintf(emu->child, LFS_NAME_MAX, "%" PRIx32, block);
struct stat st;
int err = stat(emu->path, &st);
if (err && errno != ENOENT) {
return -errno;
}
if (!err && S_ISREG(st.st_mode) && (S_IWUSR & st.st_mode)) {
err = unlink(emu->path);
if (err) {
return -errno;
}
}
if (err || (S_ISREG(st.st_mode) && (S_IWUSR & st.st_mode))) {
FILE *f = fopen(emu->path, "w");
if (!f) {
return -errno;
}
err = fclose(f);
if (err) {
return -errno;
}
}
emu->stats.erase_count += 1;
return 0;
}
int lfs_emubd_sync(const struct lfs_config *cfg) {
lfs_emubd_t *emu = cfg->context;
// Just write out info/stats for later lookup
snprintf(emu->child, LFS_NAME_MAX, ".config");
FILE *f = fopen(emu->path, "w");
if (!f) {
return -errno;
}
lfs_emubd_tole32(emu);
size_t res = fwrite(&emu->cfg, sizeof(emu->cfg), 1, f);
lfs_emubd_fromle32(emu);
if (res < 1) {
return -errno;
}
int err = fclose(f);
if (err) {
return -errno;
}
snprintf(emu->child, LFS_NAME_MAX, ".stats");
f = fopen(emu->path, "w");
if (!f) {
return -errno;
}
lfs_emubd_tole32(emu);
res = fwrite(&emu->stats, sizeof(emu->stats), 1, f);
lfs_emubd_fromle32(emu);
if (res < 1) {
return -errno;
}
err = fclose(f);
if (err) {
return -errno;
}
snprintf(emu->child, LFS_NAME_MAX, ".history");
f = fopen(emu->path, "w");
if (!f) {
return -errno;
}
lfs_emubd_tole32(emu);
res = fwrite(&emu->history, sizeof(emu->history), 1, f);
lfs_emubd_fromle32(emu);
if (res < 1) {
return -errno;
}
err = fclose(f);
if (err) {
return -errno;
}
return 0;
}

91
emubd/lfs_emubd.h
View File

@@ -1,91 +0,0 @@
/*
* Block device emulated on standard files
*
* Copyright (c) 2017, Arm Limited. All rights reserved.
* SPDX-License-Identifier: BSD-3-Clause
*/
#ifndef LFS_EMUBD_H
#define LFS_EMUBD_H
#include "lfs.h"
#include "lfs_util.h"
#ifdef __cplusplus
extern "C"
{
#endif
// Config options
#ifndef LFS_EMUBD_READ_SIZE
#define LFS_EMUBD_READ_SIZE 1
#endif
#ifndef LFS_EMUBD_PROG_SIZE
#define LFS_EMUBD_PROG_SIZE 1
#endif
#ifndef LFS_EMUBD_ERASE_SIZE
#define LFS_EMUBD_ERASE_SIZE 512
#endif
#ifndef LFS_EMUBD_TOTAL_SIZE
#define LFS_EMUBD_TOTAL_SIZE 524288
#endif
// The emu bd state
typedef struct lfs_emubd {
char *path;
char *child;
struct {
uint64_t read_count;
uint64_t prog_count;
uint64_t erase_count;
} stats;
struct {
lfs_block_t blocks[4];
} history;
struct {
uint32_t read_size;
uint32_t prog_size;
uint32_t block_size;
uint32_t block_count;
} cfg;
} lfs_emubd_t;
// Create a block device using path for the directory to store blocks
int lfs_emubd_create(const struct lfs_config *cfg, const char *path);
// Clean up memory associated with emu block device
void lfs_emubd_destroy(const struct lfs_config *cfg);
// Read a block
int lfs_emubd_read(const struct lfs_config *cfg, lfs_block_t block,
lfs_off_t off, void *buffer, lfs_size_t size);
// Program a block
//
// The block must have previously been erased.
int lfs_emubd_prog(const struct lfs_config *cfg, lfs_block_t block,
lfs_off_t off, const void *buffer, lfs_size_t size);
// Erase a block
//
// A block must be erased before being programmed. The
// state of an erased block is undefined.
int lfs_emubd_erase(const struct lfs_config *cfg, lfs_block_t block);
// Sync the block device
int lfs_emubd_sync(const struct lfs_config *cfg);
#ifdef __cplusplus
} /* extern "C" */
#endif
#endif

2329
lfs.c

File diff suppressed because it is too large

43
lfs.h
View File

@@ -21,7 +21,7 @@ extern "C"
 // Software library version
 // Major (top-nibble), incremented on backwards incompatible changes
 // Minor (bottom-nibble), incremented on feature additions
-#define LFS_VERSION 0x00020000
+#define LFS_VERSION 0x00020001
 #define LFS_VERSION_MAJOR (0xffff & (LFS_VERSION >> 16))
 #define LFS_VERSION_MINOR (0xffff & (LFS_VERSION >> 0))
@@ -111,12 +111,14 @@ enum lfs_type {
     LFS_TYPE_INLINESTRUCT = 0x201,
     LFS_TYPE_SOFTTAIL = 0x600,
     LFS_TYPE_HARDTAIL = 0x601,
+    LFS_TYPE_BRANCH = 0x681,
     LFS_TYPE_MOVESTATE = 0x7ff,
     // internal chip sources
     LFS_FROM_NOOP = 0x000,
     LFS_FROM_MOVE = 0x101,
-    LFS_FROM_USERATTRS = 0x102,
+    LFS_FROM_DROP = 0x102,
+    LFS_FROM_USERATTRS = 0x103,
 };
@@ -136,6 +138,7 @@ enum lfs_open_flags {
     LFS_F_READING = 0x040000, // File has been read since last flush
     LFS_F_ERRED = 0x080000, // An error occured during write
     LFS_F_INLINE = 0x100000, // Currently inlined in directory entry
+    LFS_F_OPENED = 0x200000, // File has been opened
 };
@@ -190,9 +193,13 @@ struct lfs_config {
     // Number of erasable blocks on the device.
     lfs_size_t block_count;
-    // Number of erase cycles before we should move data to another block.
-    // May be zero, in which case no block-level wear-leveling is performed.
-    uint32_t block_cycles;
+    // Number of erase cycles before littlefs evicts metadata logs and moves
+    // the metadata to another block. Suggested values are in the
+    // range 100-1000, with large values having better performance at the cost
+    // of less consistent wear distribution.
+    //
+    // Set to -1 to disable block-level wear-leveling.
+    int32_t block_cycles;
     // Size of block caches. Each cache buffers a portion of a block in RAM.
     // The littlefs needs a read cache, a program cache, and one additional
@@ -204,7 +211,7 @@ struct lfs_config {
     // Size of the lookahead buffer in bytes. A larger lookahead buffer
     // increases the number of blocks found during an allocation pass. The
     // lookahead buffer is stored as a compact bitmap, so each byte of RAM
-    // can track 8 blocks. Must be a multiple of 4.
+    // can track 8 blocks. Must be a multiple of 8.
     lfs_size_t lookahead_size;
@@ -216,7 +223,7 @@ struct lfs_config {
     void *prog_buffer;
     // Optional statically allocated lookahead buffer. Must be lookahead_size
-    // and aligned to a 64-bit boundary. By default lfs_malloc is used to
+    // and aligned to a 32-bit boundary. By default lfs_malloc is used to
     // allocate this buffer.
     void *lookahead_buffer;
@@ -305,8 +312,11 @@ typedef struct lfs_mdir {
     uint32_t etag;
     uint16_t count;
     bool erased;
+    bool first; // TODO come on
     bool split;
+    bool mustrelocate; // TODO not great either
     lfs_block_t tail[2];
+    lfs_block_t branch[2];
 } lfs_mdir_t;
@@ -350,12 +360,20 @@ typedef struct lfs_superblock {
     lfs_size_t attr_max;
 } lfs_superblock_t;
+typedef struct lfs_gstate {
+    uint32_t tag;
+    lfs_block_t pair[2];
+} lfs_gstate_t;
 // The littlefs filesystem type
 typedef struct lfs {
     lfs_cache_t rcache;
     lfs_cache_t pcache;
     lfs_block_t root[2];
+    lfs_block_t relocate_tail[2];
+    lfs_block_t relocate_end[2];
+    bool relocate_do_hack; // TODO fixme
     struct lfs_mlist {
         struct lfs_mlist *next;
         uint16_t id;
@@ -364,10 +382,9 @@ typedef struct lfs {
     } *mlist;
     uint32_t seed;
-    struct lfs_gstate {
-        uint32_t tag;
-        lfs_block_t pair[2];
-    } gstate, gpending, gdelta;
+    lfs_gstate_t gstate;
+    lfs_gstate_t gdisk;
+    lfs_gstate_t gdelta;
     struct lfs_free {
         lfs_block_t off;
@@ -528,7 +545,7 @@ lfs_ssize_t lfs_file_write(lfs_t *lfs, lfs_file_t *file,
 // Change the position of the file
 //
 // The change in position is determined by the offset and whence flag.
-// Returns the old position of the file, or a negative error code on failure.
+// Returns the new position of the file, or a negative error code on failure.
 lfs_soff_t lfs_file_seek(lfs_t *lfs, lfs_file_t *file,
         lfs_soff_t off, int whence);
@@ -545,7 +562,7 @@ lfs_soff_t lfs_file_tell(lfs_t *lfs, lfs_file_t *file);
 // Change the position of the file to the beginning of the file
 //
-// Equivalent to lfs_file_seek(lfs, file, 0, LFS_SEEK_CUR)
+// Equivalent to lfs_file_seek(lfs, file, 0, LFS_SEEK_SET)
 // Returns a negative error code on failure.
 int lfs_file_rewind(lfs_t *lfs, lfs_file_t *file);
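The block_cycles and lookahead changes above alter what counts as a valid configuration; the following is a minimal sketch of the new rules using a hypothetical example_check_cfg helper, not code from this PR.
#include "lfs.h"
// hypothetical helper, for illustration only
static int example_check_cfg(const struct lfs_config *cfg) {
    // block_cycles is now signed: -1 disables block-level wear-leveling,
    // and positive values (suggested range 100-1000) trade wear evenness
    // for performance; 0 is no longer a documented value
    if (cfg->block_cycles == 0) {
        return LFS_ERR_INVAL;
    }
    // the lookahead buffer must now be a multiple of 8 bytes (was 4)
    if (cfg->lookahead_size % 8 != 0) {
        return LFS_ERR_INVAL;
    }
    return 0;
}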

lfs_util.h
View File

@@ -31,7 +31,10 @@
 #ifndef LFS_NO_ASSERT
 #include <assert.h>
 #endif
-#if !defined(LFS_NO_DEBUG) || !defined(LFS_NO_WARN) || !defined(LFS_NO_ERROR)
+#if !defined(LFS_NO_DEBUG) || \
+        !defined(LFS_NO_WARN) || \
+        !defined(LFS_NO_ERROR) || \
+        defined(LFS_YES_TRACE)
 #include <stdio.h>
 #endif
@@ -46,23 +49,30 @@ extern "C"
 // code footprint
 // Logging functions
+#ifdef LFS_YES_TRACE
+#define LFS_TRACE(fmt, ...) \
+    printf("%s:%d:trace: " fmt "\n", __FILE__, __LINE__, __VA_ARGS__)
+#else
+#define LFS_TRACE(fmt, ...)
+#endif
 #ifndef LFS_NO_DEBUG
 #define LFS_DEBUG(fmt, ...) \
-    printf("lfs debug:%d: " fmt "\n", __LINE__, __VA_ARGS__)
+    printf("%s:%d:debug: " fmt "\n", __FILE__, __LINE__, __VA_ARGS__)
 #else
 #define LFS_DEBUG(fmt, ...)
 #endif
 #ifndef LFS_NO_WARN
 #define LFS_WARN(fmt, ...) \
-    printf("lfs warn:%d: " fmt "\n", __LINE__, __VA_ARGS__)
+    printf("%s:%d:warn: " fmt "\n", __FILE__, __LINE__, __VA_ARGS__)
 #else
 #define LFS_WARN(fmt, ...)
 #endif
 #ifndef LFS_NO_ERROR
 #define LFS_ERROR(fmt, ...) \
-    printf("lfs error:%d: " fmt "\n", __LINE__, __VA_ARGS__)
+    printf("%s:%d:error: " fmt "\n", __FILE__, __LINE__, __VA_ARGS__)
 #else
 #define LFS_ERROR(fmt, ...)
 #endif
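The new LFS_TRACE macro is opt-in; a minimal sketch of enabling it (typically via -DLFS_YES_TRACE on the compile line) and what the output format looks like, with the message contents being an illustrative example:
#define LFS_YES_TRACE   // or pass -DLFS_YES_TRACE to the compiler
#include "lfs_util.h"
void example_trace(void) {
    // expands to a printf tagged with file and line, e.g.
    // "example.c:7:trace: reading block 2"
    LFS_TRACE("reading block %"PRIu32, (uint32_t)2);
}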

383
scripts/explode_asserts.py Executable file
View File

@@ -0,0 +1,383 @@
#!/usr/bin/env python3
import re
import sys
PATTERN = ['LFS_ASSERT', 'assert']
PREFIX = 'LFS'
MAXWIDTH = 16
ASSERT = "__{PREFIX}_ASSERT_{TYPE}_{COMP}"
FAIL = """
__attribute__((unused))
static void __{prefix}_assert_fail_{type}(
const char *file, int line, const char *comp,
{ctype} lh, size_t lsize,
{ctype} rh, size_t rsize) {{
printf("%s:%d:assert: assert failed with ", file, line);
__{prefix}_assert_print_{type}(lh, lsize);
printf(", expected %s ", comp);
__{prefix}_assert_print_{type}(rh, rsize);
printf("\\n");
fflush(NULL);
raise(SIGABRT);
}}
"""
COMP = {
'==': 'eq',
'!=': 'ne',
'<=': 'le',
'>=': 'ge',
'<': 'lt',
'>': 'gt',
}
TYPE = {
'int': {
'ctype': 'intmax_t',
'fail': FAIL,
'print': """
__attribute__((unused))
static void __{prefix}_assert_print_{type}({ctype} v, size_t size) {{
(void)size;
printf("%"PRIiMAX, v);
}}
""",
'assert': """
#define __{PREFIX}_ASSERT_{TYPE}_{COMP}(file, line, lh, rh)
do {{
__typeof__(lh) _lh = lh;
__typeof__(lh) _rh = (__typeof__(lh))rh;
if (!(_lh {op} _rh)) {{
__{prefix}_assert_fail_{type}(file, line, "{comp}",
(intmax_t)_lh, 0, (intmax_t)_rh, 0);
}}
}} while (0)
"""
},
'bool': {
'ctype': 'bool',
'fail': FAIL,
'print': """
__attribute__((unused))
static void __{prefix}_assert_print_{type}({ctype} v, size_t size) {{
(void)size;
printf("%s", v ? "true" : "false");
}}
""",
'assert': """
#define __{PREFIX}_ASSERT_{TYPE}_{COMP}(file, line, lh, rh)
do {{
bool _lh = !!(lh);
bool _rh = !!(rh);
if (!(_lh {op} _rh)) {{
__{prefix}_assert_fail_{type}(file, line, "{comp}",
_lh, 0, _rh, 0);
}}
}} while (0)
"""
},
'mem': {
'ctype': 'const void *',
'fail': FAIL,
'print': """
__attribute__((unused))
static void __{prefix}_assert_print_{type}({ctype} v, size_t size) {{
const uint8_t *s = v;
printf("\\\"");
for (size_t i = 0; i < size && i < {maxwidth}; i++) {{
if (s[i] >= ' ' && s[i] <= '~') {{
printf("%c", s[i]);
}} else {{
printf("\\\\x%02x", s[i]);
}}
}}
if (size > {maxwidth}) {{
printf("...");
}}
printf("\\\"");
}}
""",
'assert': """
#define __{PREFIX}_ASSERT_{TYPE}_{COMP}(file, line, lh, rh, size)
do {{
const void *_lh = lh;
const void *_rh = rh;
if (!(memcmp(_lh, _rh, size) {op} 0)) {{
__{prefix}_assert_fail_{type}(file, line, "{comp}",
_lh, size, _rh, size);
}}
}} while (0)
"""
},
'str': {
'ctype': 'const char *',
'fail': FAIL,
'print': """
__attribute__((unused))
static void __{prefix}_assert_print_{type}({ctype} v, size_t size) {{
__{prefix}_assert_print_mem(v, size);
}}
""",
'assert': """
#define __{PREFIX}_ASSERT_{TYPE}_{COMP}(file, line, lh, rh)
do {{
const char *_lh = lh;
const char *_rh = rh;
if (!(strcmp(_lh, _rh) {op} 0)) {{
__{prefix}_assert_fail_{type}(file, line, "{comp}",
_lh, strlen(_lh), _rh, strlen(_rh));
}}
}} while (0)
"""
}
}
def mkdecls(outf, maxwidth=16):
outf.write("#include <stdio.h>\n")
outf.write("#include <stdbool.h>\n")
outf.write("#include <stdint.h>\n")
outf.write("#include <inttypes.h>\n")
outf.write("#include <signal.h>\n")
for type, desc in sorted(TYPE.items()):
format = {
'type': type.lower(), 'TYPE': type.upper(),
'ctype': desc['ctype'],
'prefix': PREFIX.lower(), 'PREFIX': PREFIX.upper(),
'maxwidth': maxwidth,
}
outf.write(re.sub('\s+', ' ',
desc['print'].strip().format(**format))+'\n')
outf.write(re.sub('\s+', ' ',
desc['fail'].strip().format(**format))+'\n')
for op, comp in sorted(COMP.items()):
format.update({
'comp': comp.lower(), 'COMP': comp.upper(),
'op': op,
})
outf.write(re.sub('\s+', ' ',
desc['assert'].strip().format(**format))+'\n')
def mkassert(type, comp, lh, rh, size=None):
format = {
'type': type.lower(), 'TYPE': type.upper(),
'comp': comp.lower(), 'COMP': comp.upper(),
'prefix': PREFIX.lower(), 'PREFIX': PREFIX.upper(),
'lh': lh.strip(' '),
'rh': rh.strip(' '),
'size': size,
}
if size:
return ((ASSERT + '(__FILE__, __LINE__, {lh}, {rh}, {size})')
.format(**format))
else:
return ((ASSERT + '(__FILE__, __LINE__, {lh}, {rh})')
.format(**format))
# simple recursive descent parser
LEX = {
'ws': [r'(?:\s|\n|#.*?\n|//.*?\n|/\*.*?\*/)+'],
'assert': PATTERN,
'string': [r'"(?:\\.|[^"])*"', r"'(?:\\.|[^'])\'"],
'arrow': ['=>'],
'paren': ['\(', '\)'],
'op': ['strcmp', 'memcmp', '->'],
'comp': ['==', '!=', '<=', '>=', '<', '>'],
'logic': ['\&\&', '\|\|'],
'sep': [':', ';', '\{', '\}', ','],
}
class ParseFailure(Exception):
def __init__(self, expected, found):
self.expected = expected
self.found = found
def __str__(self):
return "expected %r, found %s..." % (
self.expected, repr(self.found)[:70])
class Parse:
def __init__(self, inf, lexemes):
p = '|'.join('(?P<%s>%s)' % (n, '|'.join(l))
for n, l in lexemes.items())
p = re.compile(p, re.DOTALL)
data = inf.read()
tokens = []
while True:
m = p.search(data)
if m:
if m.start() > 0:
tokens.append((None, data[:m.start()]))
tokens.append((m.lastgroup, m.group()))
data = data[m.end():]
else:
tokens.append((None, data))
break
self.tokens = tokens
self.off = 0
def lookahead(self, *pattern):
if self.off < len(self.tokens):
token = self.tokens[self.off]
if token[0] in pattern or token[1] in pattern:
self.m = token[1]
return self.m
self.m = None
return self.m
def accept(self, *patterns):
m = self.lookahead(*patterns)
if m is not None:
self.off += 1
return m
def expect(self, *patterns):
m = self.accept(*patterns)
if not m:
raise ParseFailure(patterns, self.tokens[self.off:])
return m
def push(self):
return self.off
def pop(self, state):
self.off = state
def passert(p):
def pastr(p):
p.expect('assert') ; p.accept('ws') ; p.expect('(') ; p.accept('ws')
p.expect('strcmp') ; p.accept('ws') ; p.expect('(') ; p.accept('ws')
lh = pexpr(p) ; p.accept('ws')
p.expect(',') ; p.accept('ws')
rh = pexpr(p) ; p.accept('ws')
p.expect(')') ; p.accept('ws')
comp = p.expect('comp') ; p.accept('ws')
p.expect('0') ; p.accept('ws')
p.expect(')')
return mkassert('str', COMP[comp], lh, rh)
def pamem(p):
p.expect('assert') ; p.accept('ws') ; p.expect('(') ; p.accept('ws')
p.expect('memcmp') ; p.accept('ws') ; p.expect('(') ; p.accept('ws')
lh = pexpr(p) ; p.accept('ws')
p.expect(',') ; p.accept('ws')
rh = pexpr(p) ; p.accept('ws')
p.expect(',') ; p.accept('ws')
size = pexpr(p) ; p.accept('ws')
p.expect(')') ; p.accept('ws')
comp = p.expect('comp') ; p.accept('ws')
p.expect('0') ; p.accept('ws')
p.expect(')')
return mkassert('mem', COMP[comp], lh, rh, size)
def paint(p):
p.expect('assert') ; p.accept('ws') ; p.expect('(') ; p.accept('ws')
lh = pexpr(p) ; p.accept('ws')
comp = p.expect('comp') ; p.accept('ws')
rh = pexpr(p) ; p.accept('ws')
p.expect(')')
return mkassert('int', COMP[comp], lh, rh)
def pabool(p):
p.expect('assert') ; p.accept('ws') ; p.expect('(') ; p.accept('ws')
lh = pexprs(p) ; p.accept('ws')
p.expect(')')
return mkassert('bool', 'eq', lh, 'true')
def pa(p):
return p.expect('assert')
state = p.push()
lastf = None
for pa in [pastr, pamem, paint, pabool, pa]:
try:
return pa(p)
except ParseFailure as f:
p.pop(state)
lastf = f
else:
raise lastf
def pexpr(p):
res = []
while True:
if p.accept('('):
res.append(p.m)
while True:
res.append(pexprs(p))
if p.accept('sep'):
res.append(p.m)
else:
break
res.append(p.expect(')'))
elif p.lookahead('assert'):
res.append(passert(p))
elif p.accept('assert', 'ws', 'string', 'op', None):
res.append(p.m)
else:
return ''.join(res)
def pexprs(p):
res = []
while True:
res.append(pexpr(p))
if p.accept('comp', 'logic', ','):
res.append(p.m)
else:
return ''.join(res)
def pstmt(p):
ws = p.accept('ws') or ''
lh = pexprs(p)
if p.accept('=>'):
rh = pexprs(p)
return ws + mkassert('int', 'eq', lh, rh)
else:
return ws + lh
def main(args):
inf = open(args.input, 'r') if args.input else sys.stdin
outf = open(args.output, 'w') if args.output else sys.stdout
lexemes = LEX.copy()
if args.pattern:
lexemes['assert'] = args.pattern
p = Parse(inf, lexemes)
# write extra verbose asserts
mkdecls(outf, maxwidth=args.maxwidth)
if args.input:
outf.write("#line %d \"%s\"\n" % (1, args.input))
# parse and write out stmt at a time
try:
while True:
outf.write(pstmt(p))
if p.accept('sep'):
outf.write(p.m)
else:
break
except ParseFailure as f:
pass
for i in range(p.off, len(p.tokens)):
outf.write(p.tokens[i][1])
if __name__ == "__main__":
import argparse
parser = argparse.ArgumentParser(
description="Cpp step that increases assert verbosity")
parser.add_argument('input', nargs='?',
help="Input C file after cpp.")
parser.add_argument('-o', '--output', required=True,
help="Output C file.")
parser.add_argument('-p', '--pattern', action='append',
help="Patterns to search for starting an assert statement.")
parser.add_argument('--maxwidth', default=MAXWIDTH, type=int,
help="Maximum number of characters to display for strcmp and memcmp.")
main(parser.parse_args())
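To make the transformation concrete, a hedged before/after sketch in C: the rewritten macro names follow the ASSERT template above (__LFS_ASSERT_<TYPE>_<COMP>), and a real run additionally emits the print/fail helpers so the generated asserts print both operands and raise SIGABRT on failure.
// input, as seen by the script after the C preprocessor:
LFS_ASSERT(res == LFS_ERR_NOMEM);
assert(strcmp(path, "littlefs") == 0);
// roughly becomes (illustrative, not a verbatim capture of the output):
__LFS_ASSERT_INT_EQ(__FILE__, __LINE__, res, LFS_ERR_NOMEM);
__LFS_ASSERT_STR_EQ(__FILE__, __LINE__, path, "littlefs");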

26
scripts/readblock.py Executable file
View File

@@ -0,0 +1,26 @@
#!/usr/bin/env python3
import subprocess as sp
def main(args):
with open(args.disk, 'rb') as f:
f.seek(args.block * args.block_size)
block = (f.read(args.block_size)
.ljust(args.block_size, b'\xff'))
# what did you expect?
print("%-8s %-s" % ('off', 'data'))
return sp.run(['xxd', '-g1', '-'], input=block).returncode
if __name__ == "__main__":
import argparse
import sys
parser = argparse.ArgumentParser(
description="Hex dump a specific block in a disk.")
parser.add_argument('disk',
help="File representing the block device.")
parser.add_argument('block_size', type=lambda x: int(x, 0),
help="Size of a block in bytes.")
parser.add_argument('block', type=lambda x: int(x, 0),
help="Address of block to dump.")
sys.exit(main(parser.parse_args()))

358
scripts/readmdir.py Executable file
View File

@@ -0,0 +1,358 @@
#!/usr/bin/env python3
import struct
import binascii
import sys
import itertools as it
TAG_TYPES = {
'splice': (0x700, 0x400),
'create': (0x7ff, 0x401),
'delete': (0x7ff, 0x4ff),
'name': (0x700, 0x000),
'reg': (0x7ff, 0x001),
'dir': (0x7ff, 0x002),
'superblock': (0x7ff, 0x0ff),
'struct': (0x700, 0x200),
'dirstruct': (0x7ff, 0x200),
'ctzstruct': (0x7ff, 0x202),
'inlinestruct': (0x7ff, 0x201),
'userattr': (0x700, 0x300),
'tail': (0x700, 0x600), # TODO rename these?
'softtail': (0x7ff, 0x600),
'hardtail': (0x7ff, 0x601),
'branch': (0x7ff, 0x681),
'gstate': (0x700, 0x700),
'movestate': (0x7ff, 0x7ff),
'crc': (0x700, 0x500),
}
class Tag:
def __init__(self, *args):
if len(args) == 1:
self.tag = args[0]
elif len(args) == 3:
if isinstance(args[0], str):
type = TAG_TYPES[args[0]][1]
else:
type = args[0]
if isinstance(args[1], str):
id = int(args[1], 0) if args[1] not in 'x.' else 0x3ff
else:
id = args[1]
if isinstance(args[2], str):
size = int(args[2], 0) if args[2] not in 'x.' else 0x3ff
else:
size = args[2]
self.tag = (type << 20) | (id << 10) | size
else:
assert False
@property
def isvalid(self):
return not bool(self.tag & 0x80000000)
@property
def isattr(self):
return not bool(self.tag & 0x40000000)
@property
def iscompactable(self):
return bool(self.tag & 0x20000000)
@property
def isunique(self):
return not bool(self.tag & 0x10000000)
@property
def type(self):
return (self.tag & 0x7ff00000) >> 20
@property
def type1(self):
return (self.tag & 0x70000000) >> 20
@property
def type3(self):
return (self.tag & 0x7ff00000) >> 20
@property
def id(self):
return (self.tag & 0x000ffc00) >> 10
@property
def size(self):
return (self.tag & 0x000003ff) >> 0
@property
def dsize(self):
return 4 + (self.size if self.size != 0x3ff else 0)
@property
def chunk(self):
return self.type & 0xff
@property
def schunk(self):
return struct.unpack('b', struct.pack('B', self.chunk))[0]
def is_(self, type):
return (self.type & TAG_TYPES[type][0]) == TAG_TYPES[type][1]
def mkmask(self):
return Tag(
0x780 if self.is_('tail') else 0x700 if self.isunique else 0x7ff, # TODO best way?
0x3ff if self.isattr else 0,
0)
def chid(self, nid):
ntag = Tag(self.type, nid, self.size)
if hasattr(self, 'off'): ntag.off = self.off
if hasattr(self, 'data'): ntag.data = self.data
if hasattr(self, 'crc'): ntag.crc = self.crc
return ntag
def typerepr(self):
if self.is_('crc') and getattr(self, 'crc', 0xffffffff) != 0xffffffff:
return 'crc (bad)'
reverse_types = {v: k for k, v in TAG_TYPES.items()}
for prefix in range(12):
mask = 0x7ff & ~((1 << prefix)-1)
if (mask, self.type & mask) in reverse_types:
type = reverse_types[mask, self.type & mask]
if prefix > 0:
return '%s %#0*x' % (
type, prefix//4, self.type & ((1 << prefix)-1))
else:
return type
else:
return '%02x' % self.type
def idrepr(self):
return repr(self.id) if self.id != 0x3ff else '.'
def sizerepr(self):
return repr(self.size) if self.size != 0x3ff else 'x'
def __repr__(self):
return 'Tag(%r, %d, %d)' % (self.typerepr(), self.id, self.size)
def __lt__(self, other):
return (self.id, self.type) < (other.id, other.type)
def __bool__(self):
return self.isvalid
def __int__(self):
return self.tag
def __index__(self):
return self.tag
class MetadataPair:
def __init__(self, blocks):
if len(blocks) > 1:
self.pair = [MetadataPair([block]) for block in blocks]
self.pair = sorted(self.pair, reverse=True)
self.data = self.pair[0].data
self.rev = self.pair[0].rev
self.tags = self.pair[0].tags
self.ids = self.pair[0].ids
self.log = self.pair[0].log
self.all_ = self.pair[0].all_
return
self.pair = [self]
self.data = blocks[0]
block = self.data
self.rev, = struct.unpack('<I', block[0:4])
crc = binascii.crc32(block[0:4])
# parse tags
corrupt = False
tag = Tag(0xffffffff)
off = 4
self.log = []
self.all_ = []
while len(block) - off >= 4:
ntag, = struct.unpack('>I', block[off:off+4])
tag = Tag(int(tag) ^ ntag)
tag.off = off + 4
tag.data = block[off+4:off+tag.dsize]
if tag.is_('crc'):
crc = binascii.crc32(block[off:off+4+4], crc)
else:
crc = binascii.crc32(block[off:off+tag.dsize], crc)
tag.crc = crc
off += tag.dsize
self.all_.append(tag)
if tag.is_('crc'):
# is valid commit?
if crc != 0xffffffff:
corrupt = True
if not corrupt:
self.log = self.all_.copy()
# reset tag parsing
crc = 0
tag = Tag(int(tag) ^ ((tag.type & 1) << 31))
# find active ids
self.ids = list(it.takewhile(
lambda id: Tag('name', id, 0) in self,
it.count()))
# find most recent tags
self.tags = []
for tag in self.log:
if tag.is_('crc') or tag.is_('splice'):
continue
elif tag.id == 0x3ff:
if tag in self and self[tag] is tag:
self.tags.append(tag)
else:
# id could have changed, I know this is messy and slow
# but it works
for id in self.ids:
ntag = tag.chid(id)
if ntag in self and self[ntag] is tag:
self.tags.append(ntag)
self.tags = sorted(self.tags)
def __bool__(self):
return bool(self.log)
def __lt__(self, other):
# corrupt blocks don't count
if not self or not other:
return bool(other)
# use sequence arithmetic to avoid overflow
return not ((other.rev - self.rev) & 0x80000000)
def __contains__(self, args):
try:
self[args]
return True
except KeyError:
return False
def __getitem__(self, args):
if isinstance(args, tuple):
gmask, gtag = args
else:
gmask, gtag = args.mkmask(), args
gdiff = 0
for tag in reversed(self.log):
if (gmask.id != 0 and tag.is_('splice') and
tag.id <= gtag.id - gdiff):
if tag.is_('create') and tag.id == gtag.id - gdiff:
# creation point
break
gdiff += tag.schunk
if ((int(gmask) & int(tag)) ==
(int(gmask) & int(gtag.chid(gtag.id - gdiff)))):
if tag.size == 0x3ff:
# deleted
break
return tag
raise KeyError(gmask, gtag)
def _dump_tags(self, tags, f=sys.stdout, truncate=True):
f.write("%-8s %-8s %-13s %4s %4s" % (
'off', 'tag', 'type', 'id', 'len'))
if truncate:
f.write(' data (truncated)')
f.write('\n')
for tag in tags:
f.write("%08x: %08x %-13s %4s %4s" % (
tag.off, tag,
tag.typerepr(), tag.idrepr(), tag.sizerepr()))
if truncate:
f.write(" %-23s %-8s\n" % (
' '.join('%02x' % c for c in tag.data[:8]),
''.join(c if c >= ' ' and c <= '~' else '.'
for c in map(chr, tag.data[:8]))))
else:
f.write("\n")
for i in range(0, len(tag.data), 16):
f.write(" %08x: %-47s %-16s\n" % (
tag.off+i,
' '.join('%02x' % c for c in tag.data[i:i+16]),
''.join(c if c >= ' ' and c <= '~' else '.'
for c in map(chr, tag.data[i:i+16]))))
def dump_tags(self, f=sys.stdout, truncate=True):
self._dump_tags(self.tags, f=f, truncate=truncate)
def dump_log(self, f=sys.stdout, truncate=True):
self._dump_tags(self.log, f=f, truncate=truncate)
def dump_all(self, f=sys.stdout, truncate=True):
self._dump_tags(self.all_, f=f, truncate=truncate)
def main(args):
blocks = []
with open(args.disk, 'rb') as f:
for block in [args.block1, args.block2]:
if block is None:
continue
f.seek(block * args.block_size)
blocks.append(f.read(args.block_size)
.ljust(args.block_size, b'\xff'))
# find most recent pair
mdir = MetadataPair(blocks)
print("mdir {%s} rev %d%s%s" % (
', '.join('%#x' % b
for b in [args.block1, args.block2]
if b is not None),
mdir.rev,
' (was %s)' % ', '.join('%d' % m.rev for m in mdir.pair[1:])
if len(mdir.pair) > 1 else '',
' (corrupted!)' if not mdir else ''))
if args.all:
mdir.dump_all(truncate=not args.no_truncate)
elif args.log:
mdir.dump_log(truncate=not args.no_truncate)
else:
mdir.dump_tags(truncate=not args.no_truncate)
return 0 if mdir else 1
if __name__ == "__main__":
import argparse
import sys
parser = argparse.ArgumentParser(
description="Dump useful info about metadata pairs in littlefs.")
parser.add_argument('disk',
help="File representing the block device.")
parser.add_argument('block_size', type=lambda x: int(x, 0),
help="Size of a block in bytes.")
parser.add_argument('block1', type=lambda x: int(x, 0),
help="First block address for finding the metadata pair.")
parser.add_argument('block2', nargs='?', type=lambda x: int(x, 0),
help="Second block address for finding the metadata pair.")
parser.add_argument('-l', '--log', action='store_true',
help="Show tags in log.")
parser.add_argument('-a', '--all', action='store_true',
help="Show all tags in log, including tags in corrupted commits.")
parser.add_argument('-T', '--no-truncate', action='store_true',
help="Don't truncate large amounts of data.")
sys.exit(main(parser.parse_args()))
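For reference, the bit layout the Tag class decodes, written out as a small C sketch (the helper names here are illustrative, not the littlefs-internal API):
#include <stdbool.h>
#include <stdint.h>
// tag layout: [1-bit invalid][11-bit type][10-bit id][10-bit size]
static inline bool     tag_isvalid(uint32_t tag) { return !(tag & 0x80000000); }
static inline uint32_t tag_type(uint32_t tag)    { return (tag & 0x7ff00000) >> 20; }
static inline uint32_t tag_id(uint32_t tag)      { return (tag & 0x000ffc00) >> 10; }
static inline uint32_t tag_size(uint32_t tag)    { return (tag & 0x000003ff) >>  0; }
// on-disk footprint: 4-byte tag plus data, where size 0x3ff marks a deletion
static inline uint32_t tag_dsize(uint32_t tag) {
    return 4 + (tag_size(tag) != 0x3ff ? tag_size(tag) : 0);
}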

307
scripts/readtree.py Executable file
View File

@@ -0,0 +1,307 @@
#!/usr/bin/env python3
import struct
import sys
import json
import io
import itertools as it
import collections as c
from readmdir import Tag, MetadataPair
def popc(x):
return bin(x).count('1')
def ctz(x):
return len(bin(x)) - len(bin(x).rstrip('0'))
def dumpentries(args, mdir, mdirs, f):
for k, id_ in enumerate(mdir.ids):
name = mdir[Tag('name', id_, 0)]
struct_ = mdir[Tag('struct', id_, 0)]
desc = "id %d %s %s" % (
id_, name.typerepr(),
json.dumps(name.data.decode('utf8')))
if struct_.is_('dirstruct'):
pair = struct.unpack('<II', struct_.data[:8].ljust(8, b'\xff'))
desc += " dir {%#x, %#x}%s" % (
pair[0], pair[1],
'?' if frozenset(pair) not in mdirs else '')
if struct_.is_('ctzstruct'):
desc += " ctz {%#x} size %d" % struct.unpack(
'<II', struct_.data[:8].ljust(8, b'\xff'))
if struct_.is_('inlinestruct'):
desc += " inline size %d" % struct_.size
data = None
if struct_.is_('inlinestruct'):
data = struct_.data
elif struct_.is_('ctzstruct'):
block, size = struct.unpack(
'<II', struct_.data[:8].ljust(8, b'\xff'))
data = []
i = 0 if size == 0 else (size-1) // (args.block_size - 8)
if i != 0:
i = ((size-1) - 4*popc(i-1)+2) // (args.block_size - 8)
with open(args.disk, 'rb') as f2:
while i >= 0:
f2.seek(block * args.block_size)
dat = f2.read(args.block_size)
data.append(dat[4*(ctz(i)+1) if i != 0 else 0:])
block, = struct.unpack('<I', dat[:4].ljust(4, b'\xff'))
i -= 1
data = bytes(it.islice(
it.chain.from_iterable(reversed(data)), size))
f.write("%-45s%s\n" % (desc,
"%-23s %-8s" % (
' '.join('%02x' % c for c in data[:8]),
''.join(c if c >= ' ' and c <= '~' else '.'
for c in map(chr, data[:8])))
if not args.no_truncate and len(desc) < 45
and data is not None else ""))
if name.is_('superblock') and struct_.is_('inlinestruct'):
f.write(
" block_size %d\n"
" block_count %d\n"
" name_max %d\n"
" file_max %d\n"
" attr_max %d\n" % struct.unpack(
'<IIIII', struct_.data[4:4+20].ljust(20, b'\xff')))
for tag in mdir.tags:
if tag.id==id_ and tag.is_('userattr'):
desc = "%s size %d" % (tag.typerepr(), tag.size)
f.write(" %-43s%s\n" % (desc,
"%-23s %-8s" % (
' '.join('%02x' % c for c in tag.data[:8]),
''.join(c if c >= ' ' and c <= '~' else '.'
for c in map(chr, tag.data[:8])))
if not args.no_truncate and len(desc) < 43 else ""))
if args.no_truncate:
for i in range(0, len(tag.data), 16):
f.write(" %08x: %-47s %-16s\n" % (
i, ' '.join('%02x' % c for c in tag.data[i:i+16]),
''.join(c if c >= ' ' and c <= '~' else '.'
for c in map(chr, tag.data[i:i+16]))))
if args.no_truncate and data is not None:
for i in range(0, len(data), 16):
f.write(" %08x: %-47s %-16s\n" % (
i, ' '.join('%02x' % c for c in data[i:i+16]),
''.join(c if c >= ' ' and c <= '~' else '.'
for c in map(chr, data[i:i+16]))))
def main(args):
superblock = None
gstate = b'\0\0\0\0\0\0\0\0\0\0\0\0'
mdirs = c.OrderedDict()
corrupted = []
cycle = False
with open(args.disk, 'rb') as f:
tail = (args.block1, args.block2)
while tail:
if frozenset(tail) in mdirs:
# cycle detected
cycle = tail
break
# load mdir
data = []
blocks = {}
for block in tail:
f.seek(block * args.block_size)
data.append(f.read(args.block_size)
.ljust(args.block_size, b'\xff'))
blocks[id(data[-1])] = block
mdir = MetadataPair(data)
mdir.blocks = tuple(blocks[id(p.data)] for p in mdir.pair)
# fetch some key metadata as we scan
try:
mdir.tail = mdir[Tag('tail', 0, 0)]
if mdir.tail.size != 8 or mdir.tail.data == 8*b'\xff':
mdir.tail = None
except KeyError:
mdir.tail = None
try:
mdir.branch = mdir[Tag('branch', 0, 0)]
if mdir.branch.size != 8 or mdir.branch.data == 8*b'\xff':
mdir.branch = None
except KeyError:
mdir.branch = None
# have superblock?
try:
nsuperblock = mdir[
Tag(0x7ff, 0x3ff, 0), Tag('superblock', 0, 0)]
superblock = nsuperblock, mdir[Tag('inlinestruct', 0, 0)]
except KeyError:
pass
# have gstate?
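# (littlefs's global state is the xor of the movestate deltas committed
# across all metadata pairs, so accumulate it as we scan)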
try:
ngstate = mdir[Tag('movestate', 0, 0)]
gstate = bytes((a or 0) ^ (b or 0)
for a,b in it.zip_longest(gstate, ngstate.data))
except KeyError:
pass
# corrupted?
if not mdir:
corrupted.append(mdir)
# add to metadata-pairs
mdirs[frozenset(mdir.blocks)] = mdir
tail = (struct.unpack('<II', mdir.tail.data)
if mdir.tail else None)
# derive paths and build directories
dirs = {}
rogue = {}
pending = [('/', (args.block1, args.block2))]
while pending:
path, branch = pending.pop(0)
dir = []
while branch and frozenset(branch) in mdirs:
mdir = mdirs[frozenset(branch)]
dir.append(mdir)
for tag in mdir.tags:
if tag.is_('dir'):
try:
npath = path + '/' + tag.data.decode('utf8')
npath = npath.replace('//', '/')
dirstruct = mdir[Tag('dirstruct', tag.id, 0)]
npair = struct.unpack('<II', dirstruct.data)
pending.append((npath, npair))
except KeyError:
pass
branch = (struct.unpack('<II', mdir.branch.data)
if mdir.branch else None)
if not dir:
rogue[path] = branch
else:
dirs[path] = dir
# also find orphans
not_orphans = {frozenset(mdir.blocks)
for dir in dirs.values()
for mdir in dir}
orphans = []
for pair, mdir in mdirs.items():
if pair not in not_orphans:
if len(orphans) > 0 and (pair == frozenset(
struct.unpack('<II', orphans[-1][-1].tail.data))):
orphans[-1].append(mdir)
else:
orphans.append([mdir])
# print littlefs + version info
version = ('?', '?')
if superblock:
version = tuple(reversed(
struct.unpack('<HH', superblock[1].data[0:4].ljust(4, b'\xff'))))
print("%-47s%s" % ("littlefs v%s.%s" % version,
"data (truncated, if it fits)"
if not any([args.no_truncate, args.tags, args.log, args.all]) else ""))
# print gstate
badgstate = None
print("gstate 0x%s" % ''.join('%02x' % c for c in gstate))
tag = Tag(struct.unpack('<I', gstate[0:4].ljust(4, b'\xff'))[0])
blocks = struct.unpack('<II', gstate[4:4+8].ljust(8, b'\xff'))
if tag.size or not tag.isvalid:
print(" orphans >=%d" % max(tag.size, 1))
if tag.type:
if frozenset(blocks) not in mdirs:
badgstate = gstate
print(" move dir {%#x, %#x}%s id %d" % (
blocks[0], blocks[1],
'?' if frozenset(blocks) not in mdirs else '',
tag.id))
# print dir info
for path, dir in it.chain(
sorted(dirs.items()),
zip(it.repeat(None), orphans)):
print("dir %s" % json.dumps(path) if path else "orphaned")
for j, mdir in enumerate(dir):
print("mdir {%#x, %#x} rev %d (was %d)%s%s" % (
mdir.blocks[0], mdir.blocks[1], mdir.rev, mdir.pair[1].rev,
' (corrupted!)' if not mdir else '',
' -> {%#x, %#x}' % struct.unpack('<II', mdir.tail.data)
if mdir.tail else ''))
f = io.StringIO()
if args.tags:
mdir.dump_tags(f, truncate=not args.no_truncate)
elif args.log:
mdir.dump_log(f, truncate=not args.no_truncate)
elif args.all:
mdir.dump_all(f, truncate=not args.no_truncate)
else:
dumpentries(args, mdir, mdirs, f)
lines = list(filter(None, f.getvalue().split('\n')))
for k, line in enumerate(lines):
print("%s %s" % (
' ' if j == len(dir)-1 else
'v' if k == len(lines)-1 else
'|' if path else '.',
line))
errcode = 0
for mdir in corrupted:
errcode = errcode or 1
print("*** corrupted mdir {%#x, %#x}! ***" % (
mdir.blocks[0], mdir.blocks[1]))
for path, pair in rogue.items():
errcode = errcode or 2
print("*** couldn't find dir %s {%#x, %#x}! ***" % (
json.dumps(path), pair[0], pair[1]))
if badgstate:
errcode = errcode or 3
print("*** bad gstate 0x%s! ***" %
''.join('%02x' % c for c in gstate))
if cycle:
errcode = errcode or 4
print("*** cycle detected {%#x, %#x}! ***" % (
cycle[0], cycle[1]))
return errcode
if __name__ == "__main__":
import argparse
import sys
parser = argparse.ArgumentParser(
description="Dump semantic info about the metadata tree in littlefs")
parser.add_argument('disk',
help="File representing the block device.")
parser.add_argument('block_size', type=lambda x: int(x, 0),
help="Size of a block in bytes.")
parser.add_argument('block1', nargs='?', default=0,
type=lambda x: int(x, 0),
help="Optional first block address for finding the root.")
parser.add_argument('block2', nargs='?', default=1,
type=lambda x: int(x, 0),
help="Optional second block address for finding the root.")
parser.add_argument('-t', '--tags', action='store_true',
help="Show metadata tags instead of reconstructing entries.")
parser.add_argument('-l', '--log', action='store_true',
help="Show tags in log.")
parser.add_argument('-a', '--all', action='store_true',
help="Show all tags in log, included tags in corrupted commits.")
parser.add_argument('-T', '--no-truncate', action='store_true',
help="Show the full contents of files/attrs/tags.")
sys.exit(main(parser.parse_args()))

scripts/test.py Executable file

@@ -0,0 +1,781 @@
#!/usr/bin/env python3
# This script manages littlefs tests, which are configured with
# .toml files stored in the tests directory.
#
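# A test case looks roughly like this (illustrative sketch, see the
# .toml files under tests/ for real examples):
#
#   [[case]] # a test case
#   define.CYCLES = [1, 10]        # a list expands into one permutation per value
#   if = 'LFS_BLOCK_CYCLES == -1'  # optional condition on the defines
#   code = '''
#       lfs_format(&lfs, &cfg) => 0;
#   '''
#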
import toml
import glob
import re
import os
import io
import itertools as it
import collections.abc as abc
import subprocess as sp
import base64
import sys
import copy
import shlex
import pty
import errno
import signal
TESTDIR = 'tests'
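# makefile rules emitted for each test suite; FLATTEN pipes every source
# file through ./scripts/explode_asserts.py before it is compiled into
# the %.test binary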
RULES = """
define FLATTEN
tests/%$(subst /,.,$(target)): $(target)
./scripts/explode_asserts.py $$< -o $$@
endef
$(foreach target,$(SRC),$(eval $(FLATTEN)))
-include tests/*.d
.SECONDARY:
%.test: %.test.o $(foreach f,$(subst /,.,$(SRC:.c=.o)),%.$f)
$(CC) $(CFLAGS) $^ $(LFLAGS) -o $@
"""
GLOBALS = """
//////////////// AUTOGENERATED TEST ////////////////
#include "lfs.h"
#include "bd/lfs_testbd.h"
#include <stdio.h>
extern const char *lfs_testbd_path;
extern uint32_t lfs_testbd_cycles;
"""
DEFINES = {
'LFS_READ_SIZE': 16,
'LFS_PROG_SIZE': 'LFS_READ_SIZE',
'LFS_BLOCK_SIZE': 512,
'LFS_BLOCK_COUNT': 1024,
'LFS_BLOCK_CYCLES': -1,
'LFS_CACHE_SIZE': '(64 % LFS_PROG_SIZE == 0 ? 64 : LFS_PROG_SIZE)',
'LFS_LOOKAHEAD_SIZE': 16,
'LFS_ERASE_VALUE': 0xff,
'LFS_ERASE_CYCLES': 0,
'LFS_BADBLOCK_BEHAVIOR': 'LFS_TESTBD_BADBLOCK_PROGERROR',
}
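# fallback defaults for the test defines; -D on the command line,
# suite-level 'define' entries, and case-level 'define' entries all take
# precedence over these (see TestSuite.permute)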
PROLOGUE = """
// prologue
__attribute__((unused)) lfs_t lfs;
__attribute__((unused)) lfs_testbd_t bd;
__attribute__((unused)) lfs_file_t file;
__attribute__((unused)) lfs_dir_t dir;
__attribute__((unused)) struct lfs_info info;
__attribute__((unused)) char path[1024];
__attribute__((unused)) uint8_t buffer[1024];
__attribute__((unused)) lfs_size_t size;
__attribute__((unused)) int err;
__attribute__((unused)) const struct lfs_config cfg = {
.context = &bd,
.read = lfs_testbd_read,
.prog = lfs_testbd_prog,
.erase = lfs_testbd_erase,
.sync = lfs_testbd_sync,
.read_size = LFS_READ_SIZE,
.prog_size = LFS_PROG_SIZE,
.block_size = LFS_BLOCK_SIZE,
.block_count = LFS_BLOCK_COUNT,
.block_cycles = LFS_BLOCK_CYCLES,
.cache_size = LFS_CACHE_SIZE,
.lookahead_size = LFS_LOOKAHEAD_SIZE,
};
__attribute__((unused)) const struct lfs_testbd_config bdcfg = {
.erase_value = LFS_ERASE_VALUE,
.erase_cycles = LFS_ERASE_CYCLES,
.badblock_behavior = LFS_BADBLOCK_BEHAVIOR,
.power_cycles = lfs_testbd_cycles,
};
lfs_testbd_createcfg(&cfg, lfs_testbd_path, &bdcfg) => 0;
"""
EPILOGUE = """
// epilogue
lfs_testbd_destroy(&cfg) => 0;
"""
PASS = '\033[32m✓\033[0m'
FAIL = '\033[31m✗\033[0m'
class TestFailure(Exception):
def __init__(self, case, returncode=None, stdout=None, assert_=None):
self.case = case
self.returncode = returncode
self.stdout = stdout
self.assert_ = assert_
class TestCase:
def __init__(self, config, filter=filter,
suite=None, caseno=None, lineno=None, **_):
self.config = config
self.filter = filter
self.suite = suite
self.caseno = caseno
self.lineno = lineno
self.code = config['code']
self.code_lineno = config['code_lineno']
self.defines = config.get('define', {})
self.if_ = config.get('if', None)
self.in_ = config.get('in', None)
def __str__(self):
if hasattr(self, 'permno'):
if any(k not in self.case.defines for k in self.defines):
return '%s#%d#%d (%s)' % (
self.suite.name, self.caseno, self.permno, ', '.join(
'%s=%s' % (k, v) for k, v in self.defines.items()
if k not in self.case.defines))
else:
return '%s#%d#%d' % (
self.suite.name, self.caseno, self.permno)
else:
return '%s#%d' % (
self.suite.name, self.caseno)
def permute(self, class_=None, defines={}, permno=None, **_):
ncase = (class_ or type(self))(self.config)
for k, v in self.__dict__.items():
setattr(ncase, k, v)
ncase.case = self
ncase.perms = [ncase]
ncase.permno = permno
ncase.defines = defines
return ncase
def build(self, f, **_):
# prologue
for k, v in sorted(self.defines.items()):
if k not in self.suite.defines:
f.write('#define %s %s\n' % (k, v))
f.write('void test_case%d(%s) {' % (self.caseno, ','.join(
'\n'+8*' '+'__attribute__((unused)) intmax_t %s' % k
for k in sorted(self.perms[0].defines)
if k not in self.defines)))
f.write(PROLOGUE)
f.write('\n')
f.write(4*' '+'// test case %d\n' % self.caseno)
f.write(4*' '+'#line %d "%s"\n' % (self.code_lineno, self.suite.path))
# test case goes here
f.write(self.code)
# epilogue
f.write(EPILOGUE)
f.write('}\n')
for k, v in sorted(self.defines.items()):
if k not in self.suite.defines:
f.write('#undef %s\n' % k)
def shouldtest(self, **args):
if (self.filter is not None and
len(self.filter) >= 1 and
self.filter[0] != self.caseno):
return False
elif (self.filter is not None and
len(self.filter) >= 2 and
self.filter[1] != self.permno):
return False
elif args.get('no_internal', False) and self.in_ is not None:
return False
elif self.if_ is not None:
if_ = self.if_
while True:
for k, v in sorted(self.defines.items(),
key=lambda x: len(x[0]), reverse=True):
if k in if_:
if_ = if_.replace(k, '(%s)' % v)
break
else:
break
if_ = (
re.sub('(\&\&|\?)', ' and ',
re.sub('(\|\||:)', ' or ',
re.sub('!(?!=)', ' not ', if_))))
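# e.g. with LFS_BLOCK_CYCLES=-1, an if of 'LFS_BLOCK_CYCLES == -1'
# becomes '(-1) == -1' after substitution and evaluates to True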
return eval(if_)
else:
return True
def test(self, exec=[], persist=False, cycles=None,
gdb=False, failure=None, disk=None, **args):
# build command
cmd = exec + ['./%s.test' % self.suite.path,
repr(self.caseno), repr(self.permno)]
# persist disk or keep in RAM for speed?
if persist:
if not disk:
disk = self.suite.path + '.disk'
if persist != 'noerase':
try:
with open(disk, 'w') as f:
f.truncate(0)
if args.get('verbose', False):
print('truncate --size=0', disk)
except FileNotFoundError:
pass
cmd.append(disk)
# simulate power-loss after n cycles?
if cycles:
cmd.append(str(cycles))
# failed? drop into debugger?
if gdb and failure:
ncmd = ['gdb']
if gdb == 'assert':
ncmd.extend(['-ex', 'r'])
if failure.assert_:
ncmd.extend(['-ex', 'up 2'])
elif gdb == 'start' or isinstance(gdb, int):
ncmd.extend([
'-ex', 'b %s:%d' % (self.suite.path, self.code_lineno),
'-ex', 'r'])
ncmd.extend(['--args'] + cmd)
if args.get('verbose', False):
print(' '.join(shlex.quote(c) for c in ncmd))
signal.signal(signal.SIGINT, signal.SIG_IGN)
sys.exit(sp.call(ncmd))
# run test case!
mpty, spty = pty.openpty()
if args.get('verbose', False):
print(' '.join(shlex.quote(c) for c in cmd))
proc = sp.Popen(cmd, stdout=spty, stderr=spty)
os.close(spty)
mpty = os.fdopen(mpty, 'r', 1)
stdout = []
assert_ = None
try:
while True:
try:
line = mpty.readline()
except OSError as e:
if e.errno == errno.EIO:
break
raise
stdout.append(line)
if args.get('verbose', False):
sys.stdout.write(line)
# intercept asserts
m = re.match(
'^{0}([^:]+):(\d+):(?:\d+:)?{0}{1}:{0}(.*)$'
.format('(?:\033\[[\d;]*.| )*', 'assert'),
line)
if m and assert_ is None:
try:
with open(m.group(1)) as f:
lineno = int(m.group(2))
line = (next(it.islice(f, lineno-1, None))
.strip('\n'))
assert_ = {
'path': m.group(1),
'line': line,
'lineno': lineno,
'message': m.group(3)}
except:
pass
except KeyboardInterrupt:
raise TestFailure(self, 1, stdout, None)
proc.wait()
# did we pass?
if proc.returncode != 0:
raise TestFailure(self, proc.returncode, stdout, assert_)
else:
return PASS
class ValgrindTestCase(TestCase):
def __init__(self, config, **args):
self.leaky = config.get('leaky', False)
super().__init__(config, **args)
def shouldtest(self, **args):
return not self.leaky and super().shouldtest(**args)
def test(self, exec=[], **args):
verbose = args.get('verbose', False)
uninit = (self.defines.get('LFS_ERASE_VALUE', None) == -1)
exec = [
'valgrind',
'--leak-check=full',
] + (['--undef-value-errors=no'] if uninit else []) + [
] + (['--track-origins=yes'] if not uninit else []) + [
'--error-exitcode=4',
'--error-limit=no',
] + (['--num-callers=1'] if not verbose else []) + [
'-q'] + exec
return super().test(exec=exec, **args)
class ReentrantTestCase(TestCase):
def __init__(self, config, **args):
self.reentrant = config.get('reentrant', False)
super().__init__(config, **args)
def shouldtest(self, **args):
return self.reentrant and super().shouldtest(**args)
def test(self, persist=False, gdb=False, failure=None, **args):
for cycles in it.count(1):
# clear disk first?
if cycles == 1 and persist != 'noerase':
persist = 'erase'
else:
persist = 'noerase'
# exact cycle we should drop into debugger?
if gdb and failure and (
failure.cycleno == cycles or
(isinstance(gdb, int) and gdb == cycles)):
return super().test(gdb=gdb, persist=persist, cycles=cycles,
failure=failure, **args)
# run tests, but kill the program after prog/erase has
# been hit n cycles. We exit with a special return code if the
# program has not finished, since this isn't a test failure.
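# (the test binary exits with return code 33 in that case, which is
# why that code is treated as "keep going" rather than a failure)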
try:
return super().test(persist=persist, cycles=cycles, **args)
except TestFailure as nfailure:
if nfailure.returncode == 33:
continue
else:
nfailure.cycleno = cycles
raise
class TestSuite:
def __init__(self, path, classes=[TestCase], defines={},
filter=None, **args):
self.name = os.path.basename(path)
if self.name.endswith('.toml'):
self.name = self.name[:-len('.toml')]
self.path = path
self.classes = classes
self.defines = defines.copy()
self.filter = filter
with open(path) as f:
# load tests
config = toml.load(f)
# find line numbers
f.seek(0)
linenos = []
code_linenos = []
for i, line in enumerate(f):
if re.match(r'\[\[\s*case\s*\]\]', line):
linenos.append(i+1)
if re.match(r'code\s*=\s*(\'\'\'|""")', line):
code_linenos.append(i+2)
code_linenos.reverse()
# grab global config
for k, v in config.get('define', {}).items():
if k not in self.defines:
self.defines[k] = v
self.code = config.get('code', None)
if self.code is not None:
self.code_lineno = code_linenos.pop()
# create initial test cases
self.cases = []
for i, (case, lineno) in enumerate(zip(config['case'], linenos)):
# code lineno?
if 'code' in case:
case['code_lineno'] = code_linenos.pop()
# merge conditions if necessary
if 'if' in config and 'if' in case:
case['if'] = '(%s) && (%s)' % (config['if'], case['if'])
elif 'if' in config:
case['if'] = config['if']
# initialize test case
self.cases.append(TestCase(case, filter=filter,
suite=self, caseno=i+1, lineno=lineno, **args))
def __str__(self):
return self.name
def __lt__(self, other):
return self.name < other.name
def permute(self, **args):
for case in self.cases:
# let's find all parameterized definitions, in one of [args.D,
# suite.defines, case.defines, DEFINES]. Note that each of these
# can be either a dict of defines, or a list of dicts, expressing
# an initial set of permutations.
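# For example, a case with define.CYCLES = [1, 10] expands into two
# permutations, one with CYCLES=1 and one with CYCLES=10.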
pending = [{}]
for inits in [self.defines, case.defines, DEFINES]:
if not isinstance(inits, list):
inits = [inits]
npending = []
for init, pinit in it.product(inits, pending):
ninit = pinit.copy()
for k, v in init.items():
if k not in ninit:
try:
ninit[k] = eval(v)
except:
ninit[k] = v
npending.append(ninit)
pending = npending
# expand permutations
pending = list(reversed(pending))
expanded = []
while pending:
perm = pending.pop()
for k, v in sorted(perm.items()):
if not isinstance(v, str) and isinstance(v, abc.Iterable):
for nv in reversed(v):
nperm = perm.copy()
nperm[k] = nv
pending.append(nperm)
break
else:
expanded.append(perm)
# generate permutations
case.perms = []
for i, (class_, defines) in enumerate(
it.product(self.classes, expanded)):
case.perms.append(case.permute(
class_, defines, permno=i+1, **args))
# also track non-unique defines
case.defines = {}
for k, v in case.perms[0].defines.items():
if all(perm.defines[k] == v for perm in case.perms):
case.defines[k] = v
# track all perms and non-unique defines
self.perms = []
for case in self.cases:
self.perms.extend(case.perms)
self.defines = {}
for k, v in self.perms[0].defines.items():
if all(perm.defines.get(k, None) == v for perm in self.perms):
self.defines[k] = v
return self.perms
def build(self, **args):
# build test files
tf = open(self.path + '.test.c.t', 'w')
tf.write(GLOBALS)
if self.code is not None:
tf.write('#line %d "%s"\n' % (self.code_lineno, self.path))
tf.write(self.code)
tfs = {None: tf}
for case in self.cases:
if case.in_ not in tfs:
tfs[case.in_] = open(self.path+'.'+
case.in_.replace('/', '.')+'.t', 'w')
tfs[case.in_].write('#line 1 "%s"\n' % case.in_)
with open(case.in_) as f:
for line in f:
tfs[case.in_].write(line)
tfs[case.in_].write('\n')
tfs[case.in_].write(GLOBALS)
tfs[case.in_].write('\n')
case.build(tfs[case.in_], **args)
tf.write('\n')
tf.write('const char *lfs_testbd_path;\n')
tf.write('uint32_t lfs_testbd_cycles;\n')
tf.write('int main(int argc, char **argv) {\n')
tf.write(4*' '+'int case_ = (argc > 1) ? atoi(argv[1]) : 0;\n')
tf.write(4*' '+'int perm = (argc > 2) ? atoi(argv[2]) : 0;\n')
tf.write(4*' '+'lfs_testbd_path = (argc > 3) ? argv[3] : NULL;\n')
tf.write(4*' '+'lfs_testbd_cycles = (argc > 4) ? atoi(argv[4]) : 0;\n')
for perm in self.perms:
# test declaration
tf.write(4*' '+'extern void test_case%d(%s);\n' % (
perm.caseno, ', '.join(
'intmax_t %s' % k for k in sorted(perm.defines)
if k not in perm.case.defines)))
# test call
tf.write(4*' '+
'if (argc < 3 || (case_ == %d && perm == %d)) {'
' test_case%d(%s); '
'}\n' % (perm.caseno, perm.permno, perm.caseno, ', '.join(
str(v) for k, v in sorted(perm.defines.items())
if k not in perm.case.defines)))
tf.write('}\n')
for tf in tfs.values():
tf.close()
# write makefiles
with open(self.path + '.mk', 'w') as mk:
mk.write(RULES.replace(4*' ', '\t'))
mk.write('\n')
# add truly global defines globally
for k, v in sorted(self.defines.items()):
mk.write('%s: override CFLAGS += -D%s=%r\n' % (
self.path+'.test', k, v))
for path in tfs:
if path is None:
mk.write('%s: %s | %s\n' % (
self.path+'.test.c',
self.path,
self.path+'.test.c.t'))
else:
mk.write('%s: %s %s | %s\n' % (
self.path+'.'+path.replace('/', '.'),
self.path, path,
self.path+'.'+path.replace('/', '.')+'.t'))
mk.write('\t./scripts/explode_asserts.py $| -o $@\n')
self.makefile = self.path + '.mk'
self.target = self.path + '.test'
return self.makefile, self.target
def test(self, **args):
# run test suite!
if not args.get('verbose', True):
sys.stdout.write(self.name + ' ')
sys.stdout.flush()
for perm in self.perms:
if not perm.shouldtest(**args):
continue
try:
result = perm.test(**args)
except TestFailure as failure:
perm.result = failure
if not args.get('verbose', True):
sys.stdout.write(FAIL)
sys.stdout.flush()
if not args.get('keep_going', False):
if not args.get('verbose', True):
sys.stdout.write('\n')
raise
else:
perm.result = PASS
if not args.get('verbose', True):
sys.stdout.write(PASS)
sys.stdout.flush()
if not args.get('verbose', True):
sys.stdout.write('\n')
def main(**args):
# figure out explicit defines
defines = {}
for define in args['D']:
k, v, *_ = define.split('=', 2) + ['']
defines[k] = v
# and what class of TestCase to run
classes = []
if args.get('normal', False):
classes.append(TestCase)
if args.get('reentrant', False):
classes.append(ReentrantTestCase)
if args.get('valgrind', False):
classes.append(ValgrindTestCase)
if not classes:
classes = [TestCase]
suites = []
for testpath in args['testpaths']:
# optionally specified test case/perm
testpath, *filter = testpath.split('#')
filter = [int(f) for f in filter]
# figure out the suite's toml file
if os.path.isdir(testpath):
testpath = testpath + '/test_*.toml'
elif os.path.isfile(testpath):
testpath = testpath
elif testpath.endswith('.toml'):
testpath = TESTDIR + '/' + testpath
else:
testpath = TESTDIR + '/' + testpath + '.toml'
# find tests
for path in glob.glob(testpath):
suites.append(TestSuite(path, classes, defines, filter, **args))
# sort for reproducibility
suites = sorted(suites)
# generate permutations
for suite in suites:
suite.permute(**args)
# build tests in parallel
print('====== building ======')
makefiles = []
targets = []
for suite in suites:
makefile, target = suite.build(**args)
makefiles.append(makefile)
targets.append(target)
cmd = (['make', '-f', 'Makefile'] +
list(it.chain.from_iterable(['-f', m] for m in makefiles)) +
[target for target in targets])
mpty, spty = pty.openpty()
if args.get('verbose', False):
print(' '.join(shlex.quote(c) for c in cmd))
proc = sp.Popen(cmd, stdout=spty, stderr=spty)
os.close(spty)
mpty = os.fdopen(mpty, 'r', 1)
stdout = []
while True:
try:
line = mpty.readline()
except OSError as e:
if e.errno == errno.EIO:
break
raise
stdout.append(line)
if args.get('verbose', False):
sys.stdout.write(line)
# intercept warnings
m = re.match(
'^{0}([^:]+):(\d+):(?:\d+:)?{0}{1}:{0}(.*)$'
.format('(?:\033\[[\d;]*.| )*', 'warning'),
line)
if m and not args.get('verbose', False):
try:
with open(m.group(1)) as f:
lineno = int(m.group(2))
line = next(it.islice(f, lineno-1, None)).strip('\n')
sys.stdout.write(
"\033[01m{path}:{lineno}:\033[01;35mwarning:\033[m "
"{message}\n{line}\n\n".format(
path=m.group(1), line=line, lineno=lineno,
message=m.group(3)))
except:
pass
proc.wait()
if proc.returncode != 0:
if not args.get('verbose', False):
for line in stdout:
sys.stdout.write(line)
sys.exit(-3)
print('built %d test suites, %d test cases, %d permutations' % (
len(suites),
sum(len(suite.cases) for suite in suites),
sum(len(suite.perms) for suite in suites)))
filtered = 0
for suite in suites:
for perm in suite.perms:
filtered += perm.shouldtest(**args)
if filtered != sum(len(suite.perms) for suite in suites):
print('filtered down to %d permutations' % filtered)
# only requested to build?
if args.get('build', False):
return 0
print('====== testing ======')
try:
for suite in suites:
suite.test(**args)
except TestFailure:
pass
print('====== results ======')
passed = 0
failed = 0
for suite in suites:
for perm in suite.perms:
if not hasattr(perm, 'result'):
continue
if perm.result == PASS:
passed += 1
else:
sys.stdout.write(
"\033[01m{path}:{lineno}:\033[01;31mfailure:\033[m "
"{perm} failed with {returncode}\n".format(
perm=perm, path=perm.suite.path, lineno=perm.lineno,
returncode=perm.result.returncode or 0))
if perm.result.stdout:
if perm.result.assert_:
stdout = perm.result.stdout[:-1]
else:
stdout = perm.result.stdout
for line in stdout[-5:]:
sys.stdout.write(line)
if perm.result.assert_:
sys.stdout.write(
"\033[01m{path}:{lineno}:\033[01;31massert:\033[m "
"{message}\n{line}\n".format(
**perm.result.assert_))
sys.stdout.write('\n')
failed += 1
if args.get('gdb', False):
failure = None
for suite in suites:
for perm in suite.perms:
if getattr(perm, 'result', PASS) != PASS:
failure = perm.result
if failure is not None:
print('====== gdb ======')
# drop into gdb
failure.case.test(failure=failure, **args)
sys.exit(0)
print('tests passed: %d' % passed)
print('tests failed: %d' % failed)
return 1 if failed > 0 else 0
if __name__ == "__main__":
import argparse
parser = argparse.ArgumentParser(
description="Run parameterized tests in various configurations.")
parser.add_argument('testpaths', nargs='*', default=[TESTDIR],
help="Description of test(s) to run. By default, this is all tests \
found in the \"{0}\" directory. Here, you can specify a different \
directory of tests, a specific file, a suite by name, and even a \
specific test case by adding brackets. For example \
\"test_dirs[0]\" or \"{0}/test_dirs.toml[0]\".".format(TESTDIR))
parser.add_argument('-D', action='append', default=[],
help="Overriding parameter definitions.")
parser.add_argument('-v', '--verbose', action='store_true',
help="Output everything that is happening.")
parser.add_argument('-k', '--keep-going', action='store_true',
help="Run all tests instead of stopping on first error. Useful for CI.")
parser.add_argument('-p', '--persist', choices=['erase', 'noerase'],
nargs='?', const='erase',
help="Store disk image in a file.")
parser.add_argument('-b', '--build', action='store_true',
help="Only build the tests, do not execute.")
parser.add_argument('-g', '--gdb', metavar='{init,start,assert},CYCLE',
type=lambda n: n if n in {'init', 'start', 'assert'} else int(n, 0),
nargs='?', const='assert',
help="Drop into gdb on test failure.")
parser.add_argument('--no-internal', action='store_true',
help="Don't run tests that require internal knowledge.")
parser.add_argument('-n', '--normal', action='store_true',
help="Run tests normally.")
parser.add_argument('-r', '--reentrant', action='store_true',
help="Run reentrant tests with simulated power-loss.")
parser.add_argument('-V', '--valgrind', action='store_true',
help="Run non-leaky tests under valgrind to check for memory leaks.")
parser.add_argument('-e', '--exec', default=[], type=lambda e: e.split(' '),
help="Run tests with another executable prefixed on the command line.")
parser.add_argument('-d', '--disk',
help="Specify a file to use for persistent/reentrant tests.")
sys.exit(main(**vars(parser.parse_args())))


@@ -1,44 +0,0 @@
#!/usr/bin/env python2
import struct
import sys
import os
import argparse
def corrupt(block):
with open(block, 'r+b') as file:
# skip rev
file.read(4)
# go to last commit
tag = 0xffffffff
while True:
try:
ntag, = struct.unpack('>I', file.read(4))
except struct.error:
break
tag ^= ntag
size = (tag & 0x3ff) if (tag & 0x3ff) != 0x3ff else 0
file.seek(size, os.SEEK_CUR)
# lob off last 3 bytes
file.seek(-(size + 3), os.SEEK_CUR)
file.truncate()
def main(args):
if args.n or not args.blocks:
with open('blocks/.history', 'rb') as file:
for i in range(int(args.n or 1)):
last, = struct.unpack('<I', file.read(4))
args.blocks.append('blocks/%x' % last)
for block in args.blocks:
print 'corrupting %s' % block
corrupt(block)
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument('-n')
parser.add_argument('blocks', nargs='*')
main(parser.parse_args())


@@ -1,112 +0,0 @@
#!/usr/bin/env python2
import struct
import binascii
TYPES = {
(0x700, 0x400): 'splice',
(0x7ff, 0x401): 'create',
(0x7ff, 0x4ff): 'delete',
(0x700, 0x000): 'name',
(0x7ff, 0x001): 'name reg',
(0x7ff, 0x002): 'name dir',
(0x7ff, 0x0ff): 'name superblock',
(0x700, 0x200): 'struct',
(0x7ff, 0x200): 'struct dir',
(0x7ff, 0x202): 'struct ctz',
(0x7ff, 0x201): 'struct inline',
(0x700, 0x300): 'userattr',
(0x700, 0x600): 'tail',
(0x7ff, 0x600): 'tail soft',
(0x7ff, 0x601): 'tail hard',
(0x700, 0x700): 'gstate',
(0x7ff, 0x7ff): 'gstate move',
(0x700, 0x500): 'crc',
}
def typeof(type):
for prefix in range(12):
mask = 0x7ff & ~((1 << prefix)-1)
if (mask, type & mask) in TYPES:
return TYPES[mask, type & mask] + (
' %0*x' % (prefix/4, type & ((1 << prefix)-1))
if prefix else '')
else:
return '%02x' % type
def main(*blocks):
# find most recent block
file = None
rev = None
crc = None
versions = []
for block in blocks:
try:
nfile = open(block, 'rb')
ndata = nfile.read(4)
ncrc = binascii.crc32(ndata)
nrev, = struct.unpack('<I', ndata)
assert rev != nrev
if not file or ((rev - nrev) & 0x80000000):
file = nfile
rev = nrev
crc = ncrc
versions.append((nrev, '%s (rev %d)' % (block, nrev)))
except (IOError, struct.error):
pass
if not file:
print 'Bad metadata pair {%s}' % ', '.join(blocks)
return 1
print "--- %s ---" % ', '.join(v for _,v in sorted(versions, reverse=True))
# go through each tag, print useful information
print "%-4s %-8s %-14s %3s %4s %s" % (
'off', 'tag', 'type', 'id', 'len', 'dump')
tag = 0xffffffff
off = 4
while True:
try:
data = file.read(4)
crc = binascii.crc32(data, crc)
ntag, = struct.unpack('>I', data)
except struct.error:
break
tag ^= ntag
off += 4
type = (tag & 0x7ff00000) >> 20
id = (tag & 0x000ffc00) >> 10
size = (tag & 0x000003ff) >> 0
iscrc = (type & 0x700) == 0x500
data = file.read(size if size != 0x3ff else 0)
if iscrc:
crc = binascii.crc32(data[:4], crc)
else:
crc = binascii.crc32(data, crc)
print '%04x: %08x %-15s %3s %4s %-23s %-8s' % (
off, tag,
typeof(type) + (' bad!' if iscrc and ~crc else ''),
id if id != 0x3ff else '.',
size if size != 0x3ff else 'x',
' '.join('%02x' % ord(c) for c in data[:8]),
''.join(c if c >= ' ' and c <= '~' else '.' for c in data[:8]))
off += size if size != 0x3ff else 0
if iscrc:
crc = 0
tag ^= (type & 1) << 31
return 0
if __name__ == "__main__":
import sys
sys.exit(main(*sys.argv[1:]))


@@ -1,30 +0,0 @@
#!/usr/bin/env python2
import struct
import sys
import time
import os
import re
def main():
with open('blocks/.config') as file:
s = struct.unpack('<LLLL', file.read())
print 'read_size: %d' % s[0]
print 'prog_size: %d' % s[1]
print 'block_size: %d' % s[2]
print 'block_count: %d' % s[3]
print 'real_size: %d' % sum(
os.path.getsize(os.path.join('blocks', f))
for f in os.listdir('blocks') if re.match('\d+', f))
with open('blocks/.stats') as file:
s = struct.unpack('<QQQ', file.read())
print 'read_count: %d' % s[0]
print 'prog_count: %d' % s[1]
print 'erase_count: %d' % s[2]
print 'runtime: %.3f' % (time.time() - os.stat('blocks').st_ctime)
if __name__ == "__main__":
main(*sys.argv[1:])


@@ -1,116 +0,0 @@
/// AUTOGENERATED TEST ///
#include "lfs.h"
#include "emubd/lfs_emubd.h"
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
// test stuff
static void test_log(const char *s, uintmax_t v) {{
printf("%s: %jd\n", s, v);
}}
static void test_assert(const char *file, unsigned line,
const char *s, uintmax_t v, uintmax_t e) {{
static const char *last[6] = {{0, 0}};
if (v != e || !(last[0] == s || last[1] == s ||
last[2] == s || last[3] == s ||
last[4] == s || last[5] == s)) {{
test_log(s, v);
last[0] = last[1];
last[1] = last[2];
last[2] = last[3];
last[3] = last[4];
last[4] = last[5];
last[5] = s;
}}
if (v != e) {{
fprintf(stderr, "\033[31m%s:%u: assert %s failed with %jd, "
"expected %jd\033[0m\n", file, line, s, v, e);
exit(-2);
}}
}}
#define test_assert(s, v, e) test_assert(__FILE__, __LINE__, s, v, e)
// utility functions for traversals
static int __attribute__((used)) test_count(void *p, lfs_block_t b) {{
(void)b;
unsigned *u = (unsigned*)p;
*u += 1;
return 0;
}}
// lfs declarations
lfs_t lfs;
lfs_emubd_t bd;
lfs_file_t file[4];
lfs_dir_t dir[4];
struct lfs_info info;
uint8_t buffer[1024];
uint8_t wbuffer[1024];
uint8_t rbuffer[1024];
lfs_size_t size;
lfs_size_t wsize;
lfs_size_t rsize;
uintmax_t test;
#ifndef LFS_READ_SIZE
#define LFS_READ_SIZE 16
#endif
#ifndef LFS_PROG_SIZE
#define LFS_PROG_SIZE LFS_READ_SIZE
#endif
#ifndef LFS_BLOCK_SIZE
#define LFS_BLOCK_SIZE 512
#endif
#ifndef LFS_BLOCK_COUNT
#define LFS_BLOCK_COUNT 1024
#endif
#ifndef LFS_BLOCK_CYCLES
#define LFS_BLOCK_CYCLES 1024
#endif
#ifndef LFS_CACHE_SIZE
#define LFS_CACHE_SIZE 64
#endif
#ifndef LFS_LOOKAHEAD_SIZE
#define LFS_LOOKAHEAD_SIZE 16
#endif
const struct lfs_config cfg = {{
.context = &bd,
.read = &lfs_emubd_read,
.prog = &lfs_emubd_prog,
.erase = &lfs_emubd_erase,
.sync = &lfs_emubd_sync,
.read_size = LFS_READ_SIZE,
.prog_size = LFS_PROG_SIZE,
.block_size = LFS_BLOCK_SIZE,
.block_count = LFS_BLOCK_COUNT,
.block_cycles = LFS_BLOCK_CYCLES,
.cache_size = LFS_CACHE_SIZE,
.lookahead_size = LFS_LOOKAHEAD_SIZE,
}};
// Entry point
int main(void) {{
lfs_emubd_create(&cfg, "blocks");
{tests}
lfs_emubd_destroy(&cfg);
}}


@@ -1,61 +0,0 @@
#!/usr/bin/env python2
import re
import sys
import subprocess
import os
def generate(test):
with open("tests/template.fmt") as file:
template = file.read()
lines = []
for line in re.split('(?<=(?:.;| [{}]))\n', test.read()):
match = re.match('(?: *\n)*( *)(.*)=>(.*);', line, re.DOTALL | re.MULTILINE)
if match:
tab, test, expect = match.groups()
lines.append(tab+'test = {test};'.format(test=test.strip()))
lines.append(tab+'test_assert("{name}", test, {expect});'.format(
name = re.match('\w*', test.strip()).group(),
expect = expect.strip()))
else:
lines.append(line)
# Create test file
with open('test.c', 'w') as file:
file.write(template.format(tests='\n'.join(lines)))
# Remove build artifacts to force rebuild
try:
os.remove('test.o')
os.remove('lfs')
except OSError:
pass
def compile():
subprocess.check_call([
os.environ.get('MAKE', 'make'),
'--no-print-directory', '-s'])
def execute():
if 'EXEC' in os.environ:
subprocess.check_call([os.environ['EXEC'], "./lfs"])
else:
subprocess.check_call(["./lfs"])
def main(test=None):
if test and not test.startswith('-'):
with open(test) as file:
generate(file)
else:
generate(sys.stdin)
compile()
if test == '-s':
sys.exit(1)
execute()
if __name__ == "__main__":
main(*sys.argv[1:])


@@ -1,485 +0,0 @@
#!/bin/bash
set -eu
echo "=== Allocator tests ==="
rm -rf blocks
tests/test.py << TEST
lfs_format(&lfs, &cfg) => 0;
TEST
SIZE=15000
lfs_mkdir() {
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_mkdir(&lfs, "$1") => 0;
lfs_unmount(&lfs) => 0;
TEST
}
lfs_remove() {
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_remove(&lfs, "$1/eggs") => 0;
lfs_remove(&lfs, "$1/bacon") => 0;
lfs_remove(&lfs, "$1/pancakes") => 0;
lfs_remove(&lfs, "$1") => 0;
lfs_unmount(&lfs) => 0;
TEST
}
lfs_alloc_singleproc() {
tests/test.py << TEST
const char *names[] = {"bacon", "eggs", "pancakes"};
lfs_mount(&lfs, &cfg) => 0;
for (unsigned n = 0; n < sizeof(names)/sizeof(names[0]); n++) {
sprintf((char*)buffer, "$1/%s", names[n]);
lfs_file_open(&lfs, &file[n], (char*)buffer,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_APPEND) => 0;
}
for (unsigned n = 0; n < sizeof(names)/sizeof(names[0]); n++) {
size = strlen(names[n]);
for (int i = 0; i < $SIZE; i++) {
lfs_file_write(&lfs, &file[n], names[n], size) => size;
}
}
for (unsigned n = 0; n < sizeof(names)/sizeof(names[0]); n++) {
lfs_file_close(&lfs, &file[n]) => 0;
}
lfs_unmount(&lfs) => 0;
TEST
}
lfs_alloc_multiproc() {
for name in bacon eggs pancakes
do
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file[0], "$1/$name",
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_APPEND) => 0;
size = strlen("$name");
memcpy(buffer, "$name", size);
for (int i = 0; i < $SIZE; i++) {
lfs_file_write(&lfs, &file[0], buffer, size) => size;
}
lfs_file_close(&lfs, &file[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
done
}
lfs_verify() {
for name in bacon eggs pancakes
do
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file[0], "$1/$name", LFS_O_RDONLY) => 0;
size = strlen("$name");
for (int i = 0; i < $SIZE; i++) {
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "$name", size) => 0;
}
lfs_file_close(&lfs, &file[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
done
}
echo "--- Single-process allocation test ---"
lfs_mkdir singleproc
lfs_alloc_singleproc singleproc
lfs_verify singleproc
echo "--- Multi-process allocation test ---"
lfs_mkdir multiproc
lfs_alloc_multiproc multiproc
lfs_verify multiproc
lfs_verify singleproc
echo "--- Single-process reuse test ---"
lfs_remove singleproc
lfs_mkdir singleprocreuse
lfs_alloc_singleproc singleprocreuse
lfs_verify singleprocreuse
lfs_verify multiproc
echo "--- Multi-process reuse test ---"
lfs_remove multiproc
lfs_mkdir multiprocreuse
lfs_alloc_singleproc multiprocreuse
lfs_verify multiprocreuse
lfs_verify singleprocreuse
echo "--- Cleanup ---"
lfs_remove multiprocreuse
lfs_remove singleprocreuse
echo "--- Exhaustion test ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file[0], "exhaustion", LFS_O_WRONLY | LFS_O_CREAT);
size = strlen("exhaustion");
memcpy(buffer, "exhaustion", size);
lfs_file_write(&lfs, &file[0], buffer, size) => size;
lfs_file_sync(&lfs, &file[0]) => 0;
size = strlen("blahblahblahblah");
memcpy(buffer, "blahblahblahblah", size);
lfs_ssize_t res;
while (true) {
res = lfs_file_write(&lfs, &file[0], buffer, size);
if (res < 0) {
break;
}
res => size;
}
res => LFS_ERR_NOSPC;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file[0], "exhaustion", LFS_O_RDONLY);
size = strlen("exhaustion");
lfs_file_size(&lfs, &file[0]) => size;
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "exhaustion", size) => 0;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Exhaustion wraparound test ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_remove(&lfs, "exhaustion") => 0;
lfs_file_open(&lfs, &file[0], "padding", LFS_O_WRONLY | LFS_O_CREAT);
size = strlen("buffering");
memcpy(buffer, "buffering", size);
for (int i = 0; i < $SIZE; i++) {
lfs_file_write(&lfs, &file[0], buffer, size) => size;
}
lfs_file_close(&lfs, &file[0]) => 0;
lfs_remove(&lfs, "padding") => 0;
lfs_file_open(&lfs, &file[0], "exhaustion", LFS_O_WRONLY | LFS_O_CREAT);
size = strlen("exhaustion");
memcpy(buffer, "exhaustion", size);
lfs_file_write(&lfs, &file[0], buffer, size) => size;
lfs_file_sync(&lfs, &file[0]) => 0;
size = strlen("blahblahblahblah");
memcpy(buffer, "blahblahblahblah", size);
lfs_ssize_t res;
while (true) {
res = lfs_file_write(&lfs, &file[0], buffer, size);
if (res < 0) {
break;
}
res => size;
}
res => LFS_ERR_NOSPC;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file[0], "exhaustion", LFS_O_RDONLY);
size = strlen("exhaustion");
lfs_file_size(&lfs, &file[0]) => size;
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "exhaustion", size) => 0;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_remove(&lfs, "exhaustion") => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Dir exhaustion test ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
// find out max file size
lfs_mkdir(&lfs, "exhaustiondir") => 0;
size = strlen("blahblahblahblah");
memcpy(buffer, "blahblahblahblah", size);
lfs_file_open(&lfs, &file[0], "exhaustion", LFS_O_WRONLY | LFS_O_CREAT);
int count = 0;
int err;
while (true) {
err = lfs_file_write(&lfs, &file[0], buffer, size);
if (err < 0) {
break;
}
count += 1;
}
err => LFS_ERR_NOSPC;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_remove(&lfs, "exhaustion") => 0;
lfs_remove(&lfs, "exhaustiondir") => 0;
// see if dir fits with max file size
lfs_file_open(&lfs, &file[0], "exhaustion", LFS_O_WRONLY | LFS_O_CREAT);
for (int i = 0; i < count; i++) {
lfs_file_write(&lfs, &file[0], buffer, size) => size;
}
lfs_file_close(&lfs, &file[0]) => 0;
lfs_mkdir(&lfs, "exhaustiondir") => 0;
lfs_remove(&lfs, "exhaustiondir") => 0;
lfs_remove(&lfs, "exhaustion") => 0;
// see if dir fits with > max file size
lfs_file_open(&lfs, &file[0], "exhaustion", LFS_O_WRONLY | LFS_O_CREAT);
for (int i = 0; i < count+1; i++) {
lfs_file_write(&lfs, &file[0], buffer, size) => size;
}
lfs_file_close(&lfs, &file[0]) => 0;
lfs_mkdir(&lfs, "exhaustiondir") => LFS_ERR_NOSPC;
lfs_remove(&lfs, "exhaustion") => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Chained dir exhaustion test ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
// find out max file size
lfs_mkdir(&lfs, "exhaustiondir") => 0;
for (int i = 0; i < 10; i++) {
sprintf((char*)buffer, "dirwithanexhaustivelylongnameforpadding%d", i);
lfs_mkdir(&lfs, (char*)buffer) => 0;
}
size = strlen("blahblahblahblah");
memcpy(buffer, "blahblahblahblah", size);
lfs_file_open(&lfs, &file[0], "exhaustion", LFS_O_WRONLY | LFS_O_CREAT);
int count = 0;
int err;
while (true) {
err = lfs_file_write(&lfs, &file[0], buffer, size);
if (err < 0) {
break;
}
count += 1;
}
err => LFS_ERR_NOSPC;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_remove(&lfs, "exhaustion") => 0;
lfs_remove(&lfs, "exhaustiondir") => 0;
for (int i = 0; i < 10; i++) {
sprintf((char*)buffer, "dirwithanexhaustivelylongnameforpadding%d", i);
lfs_remove(&lfs, (char*)buffer) => 0;
}
// see that chained dir fails
lfs_file_open(&lfs, &file[0], "exhaustion", LFS_O_WRONLY | LFS_O_CREAT);
for (int i = 0; i < count+1; i++) {
lfs_file_write(&lfs, &file[0], buffer, size) => size;
}
lfs_file_sync(&lfs, &file[0]) => 0;
for (int i = 0; i < 10; i++) {
sprintf((char*)buffer, "dirwithanexhaustivelylongnameforpadding%d", i);
lfs_mkdir(&lfs, (char*)buffer) => 0;
}
lfs_mkdir(&lfs, "exhaustiondir") => LFS_ERR_NOSPC;
// shorten file to try a second chained dir
while (true) {
err = lfs_mkdir(&lfs, "exhaustiondir");
if (err != LFS_ERR_NOSPC) {
break;
}
lfs_ssize_t filesize = lfs_file_size(&lfs, &file[0]);
filesize > 0 => true;
lfs_file_truncate(&lfs, &file[0], filesize - size) => 0;
lfs_file_sync(&lfs, &file[0]) => 0;
}
err => 0;
lfs_mkdir(&lfs, "exhaustiondir2") => LFS_ERR_NOSPC;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Split dir test ---"
rm -rf blocks
tests/test.py << TEST
lfs_format(&lfs, &cfg) => 0;
TEST
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
// create one block hole for half a directory
lfs_file_open(&lfs, &file[0], "bump", LFS_O_WRONLY | LFS_O_CREAT) => 0;
for (lfs_size_t i = 0; i < cfg.block_size; i += 2) {
memcpy(&buffer[i], "hi", 2);
}
lfs_file_write(&lfs, &file[0], buffer, cfg.block_size) => cfg.block_size;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_file_open(&lfs, &file[0], "exhaustion", LFS_O_WRONLY | LFS_O_CREAT);
size = strlen("blahblahblahblah");
memcpy(buffer, "blahblahblahblah", size);
for (lfs_size_t i = 0;
i < (cfg.block_count-4)*(cfg.block_size-8);
i += size) {
lfs_file_write(&lfs, &file[0], buffer, size) => size;
}
lfs_file_close(&lfs, &file[0]) => 0;
// remount to force reset of lookahead
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, &cfg) => 0;
// open hole
lfs_remove(&lfs, "bump") => 0;
lfs_mkdir(&lfs, "splitdir") => 0;
lfs_file_open(&lfs, &file[0], "splitdir/bump",
LFS_O_WRONLY | LFS_O_CREAT) => 0;
for (lfs_size_t i = 0; i < cfg.block_size; i += 2) {
memcpy(&buffer[i], "hi", 2);
}
lfs_file_write(&lfs, &file[0], buffer, 2*cfg.block_size) => LFS_ERR_NOSPC;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Outdated lookahead test ---"
rm -rf blocks
tests/test.py << TEST
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
// fill completely with two files
lfs_file_open(&lfs, &file[0], "exhaustion1",
LFS_O_WRONLY | LFS_O_CREAT) => 0;
size = strlen("blahblahblahblah");
memcpy(buffer, "blahblahblahblah", size);
for (lfs_size_t i = 0;
i < ((cfg.block_count-2)/2)*(cfg.block_size-8);
i += size) {
lfs_file_write(&lfs, &file[0], buffer, size) => size;
}
lfs_file_close(&lfs, &file[0]) => 0;
lfs_file_open(&lfs, &file[0], "exhaustion2",
LFS_O_WRONLY | LFS_O_CREAT) => 0;
size = strlen("blahblahblahblah");
memcpy(buffer, "blahblahblahblah", size);
for (lfs_size_t i = 0;
i < ((cfg.block_count-2+1)/2)*(cfg.block_size-8);
i += size) {
lfs_file_write(&lfs, &file[0], buffer, size) => size;
}
lfs_file_close(&lfs, &file[0]) => 0;
// remount to force reset of lookahead
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, &cfg) => 0;
// rewrite one file
lfs_file_open(&lfs, &file[0], "exhaustion1",
LFS_O_WRONLY | LFS_O_TRUNC) => 0;
lfs_file_sync(&lfs, &file[0]) => 0;
size = strlen("blahblahblahblah");
memcpy(buffer, "blahblahblahblah", size);
for (lfs_size_t i = 0;
i < ((cfg.block_count-2)/2)*(cfg.block_size-8);
i += size) {
lfs_file_write(&lfs, &file[0], buffer, size) => size;
}
lfs_file_close(&lfs, &file[0]) => 0;
// rewrite second file, this requires lookahead does not
// use old population
lfs_file_open(&lfs, &file[0], "exhaustion2",
LFS_O_WRONLY | LFS_O_TRUNC) => 0;
lfs_file_sync(&lfs, &file[0]) => 0;
size = strlen("blahblahblahblah");
memcpy(buffer, "blahblahblahblah", size);
for (lfs_size_t i = 0;
i < ((cfg.block_count-2+1)/2)*(cfg.block_size-8);
i += size) {
lfs_file_write(&lfs, &file[0], buffer, size) => size;
}
lfs_file_close(&lfs, &file[0]) => 0;
TEST
echo "--- Outdated lookahead and split dir test ---"
rm -rf blocks
tests/test.py << TEST
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
// fill completely with two files
lfs_file_open(&lfs, &file[0], "exhaustion1",
LFS_O_WRONLY | LFS_O_CREAT) => 0;
size = strlen("blahblahblahblah");
memcpy(buffer, "blahblahblahblah", size);
for (lfs_size_t i = 0;
i < ((cfg.block_count-2)/2)*(cfg.block_size-8);
i += size) {
lfs_file_write(&lfs, &file[0], buffer, size) => size;
}
lfs_file_close(&lfs, &file[0]) => 0;
lfs_file_open(&lfs, &file[0], "exhaustion2",
LFS_O_WRONLY | LFS_O_CREAT) => 0;
size = strlen("blahblahblahblah");
memcpy(buffer, "blahblahblahblah", size);
for (lfs_size_t i = 0;
i < ((cfg.block_count-2+1)/2)*(cfg.block_size-8);
i += size) {
lfs_file_write(&lfs, &file[0], buffer, size) => size;
}
lfs_file_close(&lfs, &file[0]) => 0;
// remount to force reset of lookahead
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, &cfg) => 0;
// rewrite one file with a hole of one block
lfs_file_open(&lfs, &file[0], "exhaustion1",
LFS_O_WRONLY | LFS_O_TRUNC) => 0;
lfs_file_sync(&lfs, &file[0]) => 0;
size = strlen("blahblahblahblah");
memcpy(buffer, "blahblahblahblah", size);
for (lfs_size_t i = 0;
i < ((cfg.block_count-2)/2 - 1)*(cfg.block_size-8);
i += size) {
lfs_file_write(&lfs, &file[0], buffer, size) => size;
}
lfs_file_close(&lfs, &file[0]) => 0;
// try to allocate a directory, should fail!
lfs_mkdir(&lfs, "split") => LFS_ERR_NOSPC;
// file should not fail
lfs_file_open(&lfs, &file[0], "notasplit",
LFS_O_WRONLY | LFS_O_CREAT) => 0;
lfs_file_write(&lfs, &file[0], "hi", 2) => 2;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Results ---"
tests/stats.py

tests/test_alloc.toml Normal file

@@ -0,0 +1,569 @@
# allocator tests
# note: for these to work there are a number of constraints on the device geometry
if = 'LFS_BLOCK_CYCLES == -1'
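# (this suite-level 'if' is combined with each case's own 'if' using && by
# scripts/test.py)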
[[case]] # parallel allocation test
define.FILES = 3
define.SIZE = '(((LFS_BLOCK_SIZE-8)*(LFS_BLOCK_COUNT-6)) / FILES)'
code = '''
const char *names[FILES] = {"bacon", "eggs", "pancakes"};
lfs_file_t files[FILES];
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_mkdir(&lfs, "breakfast") => 0;
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, &cfg) => 0;
for (int n = 0; n < FILES; n++) {
sprintf(path, "breakfast/%s", names[n]);
lfs_file_open(&lfs, &files[n], path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_APPEND) => 0;
}
for (int n = 0; n < FILES; n++) {
size = strlen(names[n]);
for (lfs_size_t i = 0; i < SIZE; i += size) {
lfs_file_write(&lfs, &files[n], names[n], size) => size;
}
}
for (int n = 0; n < FILES; n++) {
lfs_file_close(&lfs, &files[n]) => 0;
}
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, &cfg) => 0;
for (int n = 0; n < FILES; n++) {
sprintf(path, "breakfast/%s", names[n]);
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
size = strlen(names[n]);
for (lfs_size_t i = 0; i < SIZE; i += size) {
lfs_file_read(&lfs, &file, buffer, size) => size;
assert(memcmp(buffer, names[n], size) == 0);
}
lfs_file_close(&lfs, &file) => 0;
}
lfs_unmount(&lfs) => 0;
'''
[[case]] # serial allocation test
define.FILES = 3
define.SIZE = '(((LFS_BLOCK_SIZE-8)*(LFS_BLOCK_COUNT-6)) / FILES)'
code = '''
const char *names[FILES] = {"bacon", "eggs", "pancakes"};
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_mkdir(&lfs, "breakfast") => 0;
lfs_unmount(&lfs) => 0;
for (int n = 0; n < FILES; n++) {
lfs_mount(&lfs, &cfg) => 0;
sprintf(path, "breakfast/%s", names[n]);
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_APPEND) => 0;
size = strlen(names[n]);
memcpy(buffer, names[n], size);
for (int i = 0; i < SIZE; i += size) {
lfs_file_write(&lfs, &file, buffer, size) => size;
}
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
}
lfs_mount(&lfs, &cfg) => 0;
for (int n = 0; n < FILES; n++) {
sprintf(path, "breakfast/%s", names[n]);
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
size = strlen(names[n]);
for (int i = 0; i < SIZE; i += size) {
lfs_file_read(&lfs, &file, buffer, size) => size;
assert(memcmp(buffer, names[n], size) == 0);
}
lfs_file_close(&lfs, &file) => 0;
}
lfs_unmount(&lfs) => 0;
'''
[[case]] # parallel allocation reuse test
define.FILES = 3
define.SIZE = '(((LFS_BLOCK_SIZE-8)*(LFS_BLOCK_COUNT-6)) / FILES)'
define.CYCLES = [1, 10]
code = '''
const char *names[FILES] = {"bacon", "eggs", "pancakes"};
lfs_file_t files[FILES];
lfs_format(&lfs, &cfg) => 0;
for (int c = 0; c < CYCLES; c++) {
lfs_mount(&lfs, &cfg) => 0;
lfs_mkdir(&lfs, "breakfast") => 0;
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, &cfg) => 0;
for (int n = 0; n < FILES; n++) {
sprintf(path, "breakfast/%s", names[n]);
lfs_file_open(&lfs, &files[n], path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_APPEND) => 0;
}
for (int n = 0; n < FILES; n++) {
size = strlen(names[n]);
for (int i = 0; i < SIZE; i += size) {
lfs_file_write(&lfs, &files[n], names[n], size) => size;
}
}
for (int n = 0; n < FILES; n++) {
lfs_file_close(&lfs, &files[n]) => 0;
}
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, &cfg) => 0;
for (int n = 0; n < FILES; n++) {
sprintf(path, "breakfast/%s", names[n]);
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
size = strlen(names[n]);
for (int i = 0; i < SIZE; i += size) {
lfs_file_read(&lfs, &file, buffer, size) => size;
assert(memcmp(buffer, names[n], size) == 0);
}
lfs_file_close(&lfs, &file) => 0;
}
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, &cfg) => 0;
for (int n = 0; n < FILES; n++) {
sprintf(path, "breakfast/%s", names[n]);
lfs_remove(&lfs, path) => 0;
}
lfs_remove(&lfs, "breakfast") => 0;
lfs_unmount(&lfs) => 0;
}
'''
[[case]] # serial allocation reuse test
define.FILES = 3
define.SIZE = '(((LFS_BLOCK_SIZE-8)*(LFS_BLOCK_COUNT-6)) / FILES)'
define.CYCLES = [1, 10]
code = '''
const char *names[FILES] = {"bacon", "eggs", "pancakes"};
lfs_format(&lfs, &cfg) => 0;
for (int c = 0; c < CYCLES; c++) {
lfs_mount(&lfs, &cfg) => 0;
lfs_mkdir(&lfs, "breakfast") => 0;
lfs_unmount(&lfs) => 0;
for (int n = 0; n < FILES; n++) {
lfs_mount(&lfs, &cfg) => 0;
sprintf(path, "breakfast/%s", names[n]);
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_APPEND) => 0;
size = strlen(names[n]);
memcpy(buffer, names[n], size);
for (int i = 0; i < SIZE; i += size) {
lfs_file_write(&lfs, &file, buffer, size) => size;
}
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
}
lfs_mount(&lfs, &cfg) => 0;
for (int n = 0; n < FILES; n++) {
sprintf(path, "breakfast/%s", names[n]);
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
size = strlen(names[n]);
for (int i = 0; i < SIZE; i += size) {
lfs_file_read(&lfs, &file, buffer, size) => size;
assert(memcmp(buffer, names[n], size) == 0);
}
lfs_file_close(&lfs, &file) => 0;
}
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, &cfg) => 0;
for (int n = 0; n < FILES; n++) {
sprintf(path, "breakfast/%s", names[n]);
lfs_remove(&lfs, path) => 0;
}
lfs_remove(&lfs, "breakfast") => 0;
lfs_unmount(&lfs) => 0;
}
'''
[[case]] # exhaustion test
code = '''
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file, "exhaustion", LFS_O_WRONLY | LFS_O_CREAT);
size = strlen("exhaustion");
memcpy(buffer, "exhaustion", size);
lfs_file_write(&lfs, &file, buffer, size) => size;
lfs_file_sync(&lfs, &file) => 0;
size = strlen("blahblahblahblah");
memcpy(buffer, "blahblahblahblah", size);
lfs_ssize_t res;
while (true) {
res = lfs_file_write(&lfs, &file, buffer, size);
if (res < 0) {
break;
}
res => size;
}
res => LFS_ERR_NOSPC;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file, "exhaustion", LFS_O_RDONLY);
size = strlen("exhaustion");
lfs_file_size(&lfs, &file) => size;
lfs_file_read(&lfs, &file, buffer, size) => size;
memcmp(buffer, "exhaustion", size) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
'''
[[case]] # exhaustion wraparound test
define.SIZE = '(((LFS_BLOCK_SIZE-8)*(LFS_BLOCK_COUNT-4)) / 3)'
code = '''
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file, "padding", LFS_O_WRONLY | LFS_O_CREAT);
size = strlen("buffering");
memcpy(buffer, "buffering", size);
for (int i = 0; i < SIZE; i += size) {
lfs_file_write(&lfs, &file, buffer, size) => size;
}
lfs_file_close(&lfs, &file) => 0;
lfs_remove(&lfs, "padding") => 0;
lfs_file_open(&lfs, &file, "exhaustion", LFS_O_WRONLY | LFS_O_CREAT);
size = strlen("exhaustion");
memcpy(buffer, "exhaustion", size);
lfs_file_write(&lfs, &file, buffer, size) => size;
lfs_file_sync(&lfs, &file) => 0;
size = strlen("blahblahblahblah");
memcpy(buffer, "blahblahblahblah", size);
lfs_ssize_t res;
while (true) {
res = lfs_file_write(&lfs, &file, buffer, size);
if (res < 0) {
break;
}
res => size;
}
res => LFS_ERR_NOSPC;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file, "exhaustion", LFS_O_RDONLY);
size = strlen("exhaustion");
lfs_file_size(&lfs, &file) => size;
lfs_file_read(&lfs, &file, buffer, size) => size;
memcmp(buffer, "exhaustion", size) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_remove(&lfs, "exhaustion") => 0;
lfs_unmount(&lfs) => 0;
'''
[[case]] # dir exhaustion test
code = '''
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
// find out max file size
lfs_mkdir(&lfs, "exhaustiondir") => 0;
size = strlen("blahblahblahblah");
memcpy(buffer, "blahblahblahblah", size);
lfs_file_open(&lfs, &file, "exhaustion", LFS_O_WRONLY | LFS_O_CREAT);
int count = 0;
while (true) {
err = lfs_file_write(&lfs, &file, buffer, size);
if (err < 0) {
break;
}
count += 1;
}
err => LFS_ERR_NOSPC;
lfs_file_close(&lfs, &file) => 0;
lfs_remove(&lfs, "exhaustion") => 0;
lfs_remove(&lfs, "exhaustiondir") => 0;
// see if dir fits with max file size
lfs_file_open(&lfs, &file, "exhaustion", LFS_O_WRONLY | LFS_O_CREAT);
for (int i = 0; i < count; i++) {
lfs_file_write(&lfs, &file, buffer, size) => size;
}
lfs_file_close(&lfs, &file) => 0;
lfs_mkdir(&lfs, "exhaustiondir") => 0;
lfs_remove(&lfs, "exhaustiondir") => 0;
lfs_remove(&lfs, "exhaustion") => 0;
// see if dir fits with > max file size
lfs_file_open(&lfs, &file, "exhaustion", LFS_O_WRONLY | LFS_O_CREAT);
for (int i = 0; i < count+1; i++) {
lfs_file_write(&lfs, &file, buffer, size) => size;
}
lfs_file_close(&lfs, &file) => 0;
lfs_mkdir(&lfs, "exhaustiondir") => LFS_ERR_NOSPC;
lfs_remove(&lfs, "exhaustion") => 0;
lfs_unmount(&lfs) => 0;
'''
# Below, I don't like these tests. They're fragile and depend _heavily_
# on the geometry of the block device. But they are valuable. Eventually they
# should be removed and replaced with generalized tests.
[[case]] # chained dir exhaustion test
define.LFS_BLOCK_SIZE = 512
define.LFS_BLOCK_COUNT = 1024
if = 'LFS_BLOCK_SIZE == 512 && LFS_BLOCK_COUNT == 1024'
code = '''
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
// find out max file size
lfs_mkdir(&lfs, "exhaustiondir") => 0;
for (int i = 0; i < 10; i++) {
sprintf(path, "dirwithanexhaustivelylongnameforpadding%d", i);
lfs_mkdir(&lfs, path) => 0;
}
size = strlen("blahblahblahblah");
memcpy(buffer, "blahblahblahblah", size);
lfs_file_open(&lfs, &file, "exhaustion", LFS_O_WRONLY | LFS_O_CREAT);
int count = 0;
while (true) {
err = lfs_file_write(&lfs, &file, buffer, size);
if (err < 0) {
break;
}
count += 1;
}
err => LFS_ERR_NOSPC;
lfs_file_close(&lfs, &file) => 0;
lfs_remove(&lfs, "exhaustion") => 0;
lfs_remove(&lfs, "exhaustiondir") => 0;
for (int i = 0; i < 10; i++) {
sprintf(path, "dirwithanexhaustivelylongnameforpadding%d", i);
lfs_remove(&lfs, path) => 0;
}
// see that chained dir fails
lfs_file_open(&lfs, &file, "exhaustion", LFS_O_WRONLY | LFS_O_CREAT);
for (int i = 0; i < count+1; i++) {
lfs_file_write(&lfs, &file, buffer, size) => size;
}
lfs_file_sync(&lfs, &file) => 0;
for (int i = 0; i < 10; i++) {
sprintf(path, "dirwithanexhaustivelylongnameforpadding%d", i);
lfs_mkdir(&lfs, path) => 0;
}
lfs_mkdir(&lfs, "exhaustiondir") => LFS_ERR_NOSPC;
// shorten file to try a second chained dir
while (true) {
err = lfs_mkdir(&lfs, "exhaustiondir");
if (err != LFS_ERR_NOSPC) {
break;
}
lfs_ssize_t filesize = lfs_file_size(&lfs, &file);
filesize > 0 => true;
lfs_file_truncate(&lfs, &file, filesize - size) => 0;
lfs_file_sync(&lfs, &file) => 0;
}
err => 0;
lfs_mkdir(&lfs, "exhaustiondir2") => LFS_ERR_NOSPC;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
'''
[[case]] # split dir test
define.LFS_BLOCK_SIZE = 512
define.LFS_BLOCK_COUNT = 1024
if = 'LFS_BLOCK_SIZE == 512 && LFS_BLOCK_COUNT == 1024'
code = '''
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
// create one block hole for half a directory
lfs_file_open(&lfs, &file, "bump", LFS_O_WRONLY | LFS_O_CREAT) => 0;
for (lfs_size_t i = 0; i < cfg.block_size; i += 2) {
memcpy(&buffer[i], "hi", 2);
}
lfs_file_write(&lfs, &file, buffer, cfg.block_size) => cfg.block_size;
lfs_file_close(&lfs, &file) => 0;
lfs_file_open(&lfs, &file, "exhaustion", LFS_O_WRONLY | LFS_O_CREAT);
size = strlen("blahblahblahblah");
memcpy(buffer, "blahblahblahblah", size);
for (lfs_size_t i = 0;
i < (cfg.block_count-4)*(cfg.block_size-8);
i += size) {
lfs_file_write(&lfs, &file, buffer, size) => size;
}
lfs_file_close(&lfs, &file) => 0;
// remount to force reset of lookahead
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, &cfg) => 0;
// open hole
lfs_remove(&lfs, "bump") => 0;
lfs_mkdir(&lfs, "splitdir") => 0;
lfs_file_open(&lfs, &file, "splitdir/bump",
LFS_O_WRONLY | LFS_O_CREAT) => 0;
for (lfs_size_t i = 0; i < cfg.block_size; i += 2) {
memcpy(&buffer[i], "hi", 2);
}
lfs_file_write(&lfs, &file, buffer, 2*cfg.block_size) => LFS_ERR_NOSPC;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
'''
[[case]] # outdated lookahead test
define.LFS_BLOCK_SIZE = 512
define.LFS_BLOCK_COUNT = 1024
if = 'LFS_BLOCK_SIZE == 512 && LFS_BLOCK_COUNT == 1024'
code = '''
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
// fill completely with two files
lfs_file_open(&lfs, &file, "exhaustion1",
LFS_O_WRONLY | LFS_O_CREAT) => 0;
size = strlen("blahblahblahblah");
memcpy(buffer, "blahblahblahblah", size);
for (lfs_size_t i = 0;
i < ((cfg.block_count-2)/2)*(cfg.block_size-8);
i += size) {
lfs_file_write(&lfs, &file, buffer, size) => size;
}
lfs_file_close(&lfs, &file) => 0;
lfs_file_open(&lfs, &file, "exhaustion2",
LFS_O_WRONLY | LFS_O_CREAT) => 0;
size = strlen("blahblahblahblah");
memcpy(buffer, "blahblahblahblah", size);
for (lfs_size_t i = 0;
i < ((cfg.block_count-2+1)/2)*(cfg.block_size-8);
i += size) {
lfs_file_write(&lfs, &file, buffer, size) => size;
}
lfs_file_close(&lfs, &file) => 0;
// remount to force reset of lookahead
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, &cfg) => 0;
// rewrite one file
lfs_file_open(&lfs, &file, "exhaustion1",
LFS_O_WRONLY | LFS_O_TRUNC) => 0;
lfs_file_sync(&lfs, &file) => 0;
size = strlen("blahblahblahblah");
memcpy(buffer, "blahblahblahblah", size);
for (lfs_size_t i = 0;
i < ((cfg.block_count-2)/2)*(cfg.block_size-8);
i += size) {
lfs_file_write(&lfs, &file, buffer, size) => size;
}
lfs_file_close(&lfs, &file) => 0;
// rewrite the second file; this requires that the lookahead
// does not reuse its old population
lfs_file_open(&lfs, &file, "exhaustion2",
LFS_O_WRONLY | LFS_O_TRUNC) => 0;
lfs_file_sync(&lfs, &file) => 0;
size = strlen("blahblahblahblah");
memcpy(buffer, "blahblahblahblah", size);
for (lfs_size_t i = 0;
i < ((cfg.block_count-2+1)/2)*(cfg.block_size-8);
i += size) {
lfs_file_write(&lfs, &file, buffer, size) => size;
}
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
'''
[[case]] # outdated lookahead and split dir test
define.LFS_BLOCK_SIZE = 512
define.LFS_BLOCK_COUNT = 1024
if = 'LFS_BLOCK_SIZE == 512 && LFS_BLOCK_COUNT == 1024'
code = '''
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
// fill completely with two files
lfs_file_open(&lfs, &file, "exhaustion1",
LFS_O_WRONLY | LFS_O_CREAT) => 0;
size = strlen("blahblahblahblah");
memcpy(buffer, "blahblahblahblah", size);
for (lfs_size_t i = 0;
i < ((cfg.block_count-2)/2)*(cfg.block_size-8);
i += size) {
lfs_file_write(&lfs, &file, buffer, size) => size;
}
lfs_file_close(&lfs, &file) => 0;
lfs_file_open(&lfs, &file, "exhaustion2",
LFS_O_WRONLY | LFS_O_CREAT) => 0;
size = strlen("blahblahblahblah");
memcpy(buffer, "blahblahblahblah", size);
for (lfs_size_t i = 0;
i < ((cfg.block_count-2+1)/2)*(cfg.block_size-8);
i += size) {
lfs_file_write(&lfs, &file, buffer, size) => size;
}
lfs_file_close(&lfs, &file) => 0;
// remount to force reset of lookahead
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, &cfg) => 0;
// rewrite one file with a hole of one block
lfs_file_open(&lfs, &file, "exhaustion1",
LFS_O_WRONLY | LFS_O_TRUNC) => 0;
lfs_file_sync(&lfs, &file) => 0;
size = strlen("blahblahblahblah");
memcpy(buffer, "blahblahblahblah", size);
for (lfs_size_t i = 0;
i < ((cfg.block_count-2)/2 - 1)*(cfg.block_size-8);
i += size) {
lfs_file_write(&lfs, &file, buffer, size) => size;
}
lfs_file_close(&lfs, &file) => 0;
// try to allocate a directory, should fail!
lfs_mkdir(&lfs, "split") => LFS_ERR_NOSPC;
// file should not fail
lfs_file_open(&lfs, &file, "notasplit",
LFS_O_WRONLY | LFS_O_CREAT) => 0;
lfs_file_write(&lfs, &file, "hi", 2) => 2;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
'''
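
The outdated-lookahead cases above check that the allocator rebuilds its knowledge of free blocks instead of reusing a population computed before files were truncated or rewritten. Below is a simplified, hypothetical sketch of such a lookahead window (not littlefs's actual implementation), assuming a fixed window size and a caller-supplied in-use predicate.

// A simplified, hypothetical lookahead allocator, only to illustrate why a
// stale bitmap ("old population") must not be reused after files are
// rewritten. This is NOT littlefs's implementation.
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define LOOKAHEAD_BLOCKS 64

struct lookahead {
    uint32_t start;                     // first block covered by the window
    uint8_t in_use[LOOKAHEAD_BLOCKS/8]; // 1 bit per block in the window
    uint32_t off;                       // next candidate within the window
};

// Rebuild the window by asking the filesystem which blocks are in use.
// is_block_in_use stands in for a full filesystem traversal.
static void lookahead_populate(struct lookahead *la, uint32_t start,
        bool (*is_block_in_use)(uint32_t block)) {
    la->start = start;
    la->off = 0;
    memset(la->in_use, 0, sizeof(la->in_use));
    for (uint32_t i = 0; i < LOOKAHEAD_BLOCKS; i++) {
        if (is_block_in_use(start + i)) {
            la->in_use[i/8] |= 1 << (i%8);
        }
    }
}

// Hand out the next free block in the window, or return false if the
// window is exhausted and must be repopulated further along.
static bool lookahead_alloc(struct lookahead *la, uint32_t *block) {
    while (la->off < LOOKAHEAD_BLOCKS) {
        uint32_t i = la->off++;
        if (!(la->in_use[i/8] & (1 << (i%8)))) {
            *block = la->start + i;
            return true;
        }
    }
    return false; // caller repopulates at la->start + LOOKAHEAD_BLOCKS
}

The remounts in the cases above throw this in-RAM state away, so the next allocation has to repopulate the window; if a stale population were reused, the allocator could hand out blocks the rewritten files now occupy, or report LFS_ERR_NOSPC for space that is actually free.
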

tests/test_attrs.sh → tests/test_attrs.toml Executable file → Normal file

@@ -1,24 +1,15 @@
-#!/bin/bash
-set -eu
-echo "=== Attr tests ==="
-rm -rf blocks
-tests/test.py << TEST
+[[case]] # set/get attribute
+code = '''
 lfs_format(&lfs, &cfg) => 0;
 lfs_mount(&lfs, &cfg) => 0;
 lfs_mkdir(&lfs, "hello") => 0;
-lfs_file_open(&lfs, &file[0], "hello/hello",
-        LFS_O_WRONLY | LFS_O_CREAT) => 0;
-lfs_file_write(&lfs, &file[0], "hello", strlen("hello"))
-        => strlen("hello");
-lfs_file_close(&lfs, &file[0]);
+lfs_file_open(&lfs, &file, "hello/hello", LFS_O_WRONLY | LFS_O_CREAT) => 0;
+lfs_file_write(&lfs, &file, "hello", strlen("hello")) => strlen("hello");
+lfs_file_close(&lfs, &file);
 lfs_unmount(&lfs) => 0;
-TEST
-echo "--- Set/get attribute ---"
-tests/test.py << TEST
 lfs_mount(&lfs, &cfg) => 0;
+memset(buffer, 0, sizeof(buffer));
 lfs_setattr(&lfs, "hello", 'A', "aaaa", 4) => 0;
 lfs_setattr(&lfs, "hello", 'B', "bbbbbb", 6) => 0;
 lfs_setattr(&lfs, "hello", 'C', "ccccc", 5) => 0;
@@ -68,9 +59,9 @@ tests/test.py << TEST
 lfs_getattr(&lfs, "hello", 'C', buffer+10, 5) => 5;
 lfs_unmount(&lfs) => 0;
-TEST
-tests/test.py << TEST
 lfs_mount(&lfs, &cfg) => 0;
+memset(buffer, 0, sizeof(buffer));
 lfs_getattr(&lfs, "hello", 'A', buffer, 4) => 4;
 lfs_getattr(&lfs, "hello", 'B', buffer+4, 9) => 9;
 lfs_getattr(&lfs, "hello", 'C', buffer+13, 5) => 5;
@@ -78,16 +69,25 @@ tests/test.py << TEST
 memcmp(buffer+4, "fffffffff", 9) => 0;
 memcmp(buffer+13, "ccccc", 5) => 0;
-lfs_file_open(&lfs, &file[0], "hello/hello", LFS_O_RDONLY) => 0;
-lfs_file_read(&lfs, &file[0], buffer, sizeof(buffer)) => strlen("hello");
+lfs_file_open(&lfs, &file, "hello/hello", LFS_O_RDONLY) => 0;
+lfs_file_read(&lfs, &file, buffer, sizeof(buffer)) => strlen("hello");
 memcmp(buffer, "hello", strlen("hello")) => 0;
-lfs_file_close(&lfs, &file[0]);
+lfs_file_close(&lfs, &file);
 lfs_unmount(&lfs) => 0;
-TEST
-echo "--- Set/get root attribute ---"
-tests/test.py << TEST
+'''
+[[case]] # set/get root attribute
+code = '''
+lfs_format(&lfs, &cfg) => 0;
 lfs_mount(&lfs, &cfg) => 0;
+lfs_mkdir(&lfs, "hello") => 0;
+lfs_file_open(&lfs, &file, "hello/hello", LFS_O_WRONLY | LFS_O_CREAT) => 0;
+lfs_file_write(&lfs, &file, "hello", strlen("hello")) => strlen("hello");
+lfs_file_close(&lfs, &file);
+lfs_unmount(&lfs) => 0;
+lfs_mount(&lfs, &cfg) => 0;
+memset(buffer, 0, sizeof(buffer));
 lfs_setattr(&lfs, "/", 'A', "aaaa", 4) => 0;
 lfs_setattr(&lfs, "/", 'B', "bbbbbb", 6) => 0;
 lfs_setattr(&lfs, "/", 'C', "ccccc", 5) => 0;
@@ -136,9 +136,9 @@ tests/test.py << TEST
 lfs_getattr(&lfs, "/", 'B', buffer+4, 6) => 9;
 lfs_getattr(&lfs, "/", 'C', buffer+10, 5) => 5;
 lfs_unmount(&lfs) => 0;
-TEST
-tests/test.py << TEST
 lfs_mount(&lfs, &cfg) => 0;
+memset(buffer, 0, sizeof(buffer));
 lfs_getattr(&lfs, "/", 'A', buffer, 4) => 4;
 lfs_getattr(&lfs, "/", 'B', buffer+4, 9) => 9;
 lfs_getattr(&lfs, "/", 'C', buffer+13, 5) => 5;
@@ -146,16 +146,25 @@ tests/test.py << TEST
 memcmp(buffer+4, "fffffffff", 9) => 0;
 memcmp(buffer+13, "ccccc", 5) => 0;
-lfs_file_open(&lfs, &file[0], "hello/hello", LFS_O_RDONLY) => 0;
-lfs_file_read(&lfs, &file[0], buffer, sizeof(buffer)) => strlen("hello");
+lfs_file_open(&lfs, &file, "hello/hello", LFS_O_RDONLY) => 0;
+lfs_file_read(&lfs, &file, buffer, sizeof(buffer)) => strlen("hello");
 memcmp(buffer, "hello", strlen("hello")) => 0;
-lfs_file_close(&lfs, &file[0]);
+lfs_file_close(&lfs, &file);
 lfs_unmount(&lfs) => 0;
-TEST
-echo "--- Set/get file attribute ---"
-tests/test.py << TEST
+'''
+[[case]] # set/get file attribute
+code = '''
+lfs_format(&lfs, &cfg) => 0;
 lfs_mount(&lfs, &cfg) => 0;
+lfs_mkdir(&lfs, "hello") => 0;
+lfs_file_open(&lfs, &file, "hello/hello", LFS_O_WRONLY | LFS_O_CREAT) => 0;
+lfs_file_write(&lfs, &file, "hello", strlen("hello")) => strlen("hello");
+lfs_file_close(&lfs, &file);
+lfs_unmount(&lfs) => 0;
+lfs_mount(&lfs, &cfg) => 0;
+memset(buffer, 0, sizeof(buffer));
 struct lfs_attr attrs1[] = {
     {'A', buffer, 4},
     {'B', buffer+4, 6},
@@ -163,55 +172,55 @@ tests/test.py << TEST
 };
 struct lfs_file_config cfg1 = {.attrs=attrs1, .attr_count=3};
-lfs_file_opencfg(&lfs, &file[0], "hello/hello", LFS_O_WRONLY, &cfg1) => 0;
+lfs_file_opencfg(&lfs, &file, "hello/hello", LFS_O_WRONLY, &cfg1) => 0;
 memcpy(buffer, "aaaa", 4);
 memcpy(buffer+4, "bbbbbb", 6);
 memcpy(buffer+10, "ccccc", 5);
-lfs_file_close(&lfs, &file[0]) => 0;
+lfs_file_close(&lfs, &file) => 0;
 memset(buffer, 0, 15);
-lfs_file_opencfg(&lfs, &file[0], "hello/hello", LFS_O_RDONLY, &cfg1) => 0;
-lfs_file_close(&lfs, &file[0]) => 0;
+lfs_file_opencfg(&lfs, &file, "hello/hello", LFS_O_RDONLY, &cfg1) => 0;
+lfs_file_close(&lfs, &file) => 0;
 memcmp(buffer, "aaaa", 4) => 0;
 memcmp(buffer+4, "bbbbbb", 6) => 0;
 memcmp(buffer+10, "ccccc", 5) => 0;
 attrs1[1].size = 0;
-lfs_file_opencfg(&lfs, &file[0], "hello/hello", LFS_O_WRONLY, &cfg1) => 0;
-lfs_file_close(&lfs, &file[0]) => 0;
+lfs_file_opencfg(&lfs, &file, "hello/hello", LFS_O_WRONLY, &cfg1) => 0;
+lfs_file_close(&lfs, &file) => 0;
 memset(buffer, 0, 15);
 attrs1[1].size = 6;
-lfs_file_opencfg(&lfs, &file[0], "hello/hello", LFS_O_RDONLY, &cfg1) => 0;
-lfs_file_close(&lfs, &file[0]) => 0;
+lfs_file_opencfg(&lfs, &file, "hello/hello", LFS_O_RDONLY, &cfg1) => 0;
+lfs_file_close(&lfs, &file) => 0;
 memcmp(buffer, "aaaa", 4) => 0;
 memcmp(buffer+4, "\0\0\0\0\0\0", 6) => 0;
 memcmp(buffer+10, "ccccc", 5) => 0;
 attrs1[1].size = 6;
-lfs_file_opencfg(&lfs, &file[0], "hello/hello", LFS_O_WRONLY, &cfg1) => 0;
+lfs_file_opencfg(&lfs, &file, "hello/hello", LFS_O_WRONLY, &cfg1) => 0;
 memcpy(buffer+4, "dddddd", 6);
-lfs_file_close(&lfs, &file[0]) => 0;
+lfs_file_close(&lfs, &file) => 0;
 memset(buffer, 0, 15);
 attrs1[1].size = 6;
-lfs_file_opencfg(&lfs, &file[0], "hello/hello", LFS_O_RDONLY, &cfg1) => 0;
-lfs_file_close(&lfs, &file[0]) => 0;
+lfs_file_opencfg(&lfs, &file, "hello/hello", LFS_O_RDONLY, &cfg1) => 0;
+lfs_file_close(&lfs, &file) => 0;
 memcmp(buffer, "aaaa", 4) => 0;
 memcmp(buffer+4, "dddddd", 6) => 0;
 memcmp(buffer+10, "ccccc", 5) => 0;
 attrs1[1].size = 3;
-lfs_file_opencfg(&lfs, &file[0], "hello/hello", LFS_O_WRONLY, &cfg1) => 0;
+lfs_file_opencfg(&lfs, &file, "hello/hello", LFS_O_WRONLY, &cfg1) => 0;
 memcpy(buffer+4, "eee", 3);
-lfs_file_close(&lfs, &file[0]) => 0;
+lfs_file_close(&lfs, &file) => 0;
 memset(buffer, 0, 15);
 attrs1[1].size = 6;
-lfs_file_opencfg(&lfs, &file[0], "hello/hello", LFS_O_RDONLY, &cfg1) => 0;
-lfs_file_close(&lfs, &file[0]) => 0;
+lfs_file_opencfg(&lfs, &file, "hello/hello", LFS_O_RDONLY, &cfg1) => 0;
+lfs_file_close(&lfs, &file) => 0;
 memcmp(buffer, "aaaa", 4) => 0;
 memcmp(buffer+4, "eee\0\0\0", 6) => 0;
 memcmp(buffer+10, "ccccc", 5) => 0;
 attrs1[0].size = LFS_ATTR_MAX+1;
-lfs_file_opencfg(&lfs, &file[0], "hello/hello", LFS_O_WRONLY, &cfg1)
+lfs_file_opencfg(&lfs, &file, "hello/hello", LFS_O_WRONLY, &cfg1)
         => LFS_ERR_NOSPC;
 struct lfs_attr attrs2[] = {
@@ -220,40 +229,52 @@ tests/test.py << TEST
     {'C', buffer+13, 5},
 };
 struct lfs_file_config cfg2 = {.attrs=attrs2, .attr_count=3};
-lfs_file_opencfg(&lfs, &file[0], "hello/hello", LFS_O_RDWR, &cfg2) => 0;
+lfs_file_opencfg(&lfs, &file, "hello/hello", LFS_O_RDWR, &cfg2) => 0;
 memcpy(buffer+4, "fffffffff", 9);
-lfs_file_close(&lfs, &file[0]) => 0;
+lfs_file_close(&lfs, &file) => 0;
 attrs1[0].size = 4;
-lfs_file_opencfg(&lfs, &file[0], "hello/hello", LFS_O_RDONLY, &cfg1) => 0;
-lfs_file_close(&lfs, &file[0]) => 0;
+lfs_file_opencfg(&lfs, &file, "hello/hello", LFS_O_RDONLY, &cfg1) => 0;
+lfs_file_close(&lfs, &file) => 0;
 lfs_unmount(&lfs) => 0;
-TEST
-tests/test.py << TEST
 lfs_mount(&lfs, &cfg) => 0;
-struct lfs_attr attrs2[] = {
+memset(buffer, 0, sizeof(buffer));
+struct lfs_attr attrs3[] = {
     {'A', buffer, 4},
     {'B', buffer+4, 9},
     {'C', buffer+13, 5},
 };
-struct lfs_file_config cfg2 = {.attrs=attrs2, .attr_count=3};
-lfs_file_opencfg(&lfs, &file[0], "hello/hello", LFS_O_RDONLY, &cfg2) => 0;
-lfs_file_close(&lfs, &file[0]) => 0;
+struct lfs_file_config cfg3 = {.attrs=attrs3, .attr_count=3};
+lfs_file_opencfg(&lfs, &file, "hello/hello", LFS_O_RDONLY, &cfg3) => 0;
+lfs_file_close(&lfs, &file) => 0;
 memcmp(buffer, "aaaa", 4) => 0;
 memcmp(buffer+4, "fffffffff", 9) => 0;
 memcmp(buffer+13, "ccccc", 5) => 0;
-lfs_file_open(&lfs, &file[0], "hello/hello", LFS_O_RDONLY) => 0;
-lfs_file_read(&lfs, &file[0], buffer, sizeof(buffer)) => strlen("hello");
+lfs_file_open(&lfs, &file, "hello/hello", LFS_O_RDONLY) => 0;
+lfs_file_read(&lfs, &file, buffer, sizeof(buffer)) => strlen("hello");
 memcmp(buffer, "hello", strlen("hello")) => 0;
-lfs_file_close(&lfs, &file[0]);
+lfs_file_close(&lfs, &file);
 lfs_unmount(&lfs) => 0;
-TEST
-echo "--- Deferred file attributes ---"
-tests/test.py << TEST
+'''
+[[case]] # deferred file attributes
+code = '''
+lfs_format(&lfs, &cfg) => 0;
 lfs_mount(&lfs, &cfg) => 0;
+lfs_mkdir(&lfs, "hello") => 0;
+lfs_file_open(&lfs, &file, "hello/hello", LFS_O_WRONLY | LFS_O_CREAT) => 0;
+lfs_file_write(&lfs, &file, "hello", strlen("hello")) => strlen("hello");
+lfs_file_close(&lfs, &file);
+lfs_unmount(&lfs) => 0;
+lfs_mount(&lfs, &cfg) => 0;
+lfs_setattr(&lfs, "hello/hello", 'B', "fffffffff", 9) => 0;
+lfs_setattr(&lfs, "hello/hello", 'C', "ccccc", 5) => 0;
+memset(buffer, 0, sizeof(buffer));
 struct lfs_attr attrs1[] = {
     {'B', "gggg", 4},
     {'C', "", 0},
@@ -261,7 +282,7 @@ tests/test.py << TEST
 };
 struct lfs_file_config cfg1 = {.attrs=attrs1, .attr_count=3};
-lfs_file_opencfg(&lfs, &file[0], "hello/hello", LFS_O_WRONLY, &cfg1) => 0;
+lfs_file_opencfg(&lfs, &file, "hello/hello", LFS_O_WRONLY, &cfg1) => 0;
 lfs_getattr(&lfs, "hello/hello", 'B', buffer, 9) => 9;
 lfs_getattr(&lfs, "hello/hello", 'C', buffer+9, 9) => 5;
@@ -270,7 +291,7 @@ tests/test.py << TEST
 memcmp(buffer+9, "ccccc\0\0\0\0", 9) => 0;
 memcmp(buffer+18, "\0\0\0\0\0\0\0\0\0", 9) => 0;
-lfs_file_sync(&lfs, &file[0]) => 0;
+lfs_file_sync(&lfs, &file) => 0;
 lfs_getattr(&lfs, "hello/hello", 'B', buffer, 9) => 4;
 lfs_getattr(&lfs, "hello/hello", 'C', buffer+9, 9) => 0;
 lfs_getattr(&lfs, "hello/hello", 'D', buffer+18, 9) => 4;
@@ -278,9 +299,6 @@ tests/test.py << TEST
 memcmp(buffer+9, "\0\0\0\0\0\0\0\0\0", 9) => 0;
 memcmp(buffer+18, "hhhh\0\0\0\0\0", 9) => 0;
-lfs_file_close(&lfs, &file[0]) => 0;
+lfs_file_close(&lfs, &file) => 0;
 lfs_unmount(&lfs) => 0;
-TEST
-echo "--- Results ---"
-tests/stats.py
+'''

tests/test_badblocks.toml Normal file

@@ -0,0 +1,241 @@
# bad blocks with block cycles should be tested in test_relocations
if = 'LFS_BLOCK_CYCLES == -1'
[[case]] # single bad blocks
define.LFS_BLOCK_COUNT = 256 # small bd so test runs faster
define.LFS_ERASE_CYCLES = 0xffffffff
define.LFS_ERASE_VALUE = [0x00, 0xff, -1]
define.LFS_BADBLOCK_BEHAVIOR = [
'LFS_TESTBD_BADBLOCK_PROGERROR',
'LFS_TESTBD_BADBLOCK_ERASEERROR',
'LFS_TESTBD_BADBLOCK_READERROR',
'LFS_TESTBD_BADBLOCK_PROGNOOP',
'LFS_TESTBD_BADBLOCK_ERASENOOP',
]
define.NAMEMULT = 64
define.FILEMULT = 1
code = '''
for (lfs_block_t badblock = 2; badblock < LFS_BLOCK_COUNT; badblock++) {
lfs_testbd_setwear(&cfg, badblock-1, 0) => 0;
lfs_testbd_setwear(&cfg, badblock, 0xffffffff) => 0;
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
for (int i = 1; i < 10; i++) {
for (int j = 0; j < NAMEMULT; j++) {
buffer[j] = '0'+i;
}
buffer[NAMEMULT] = '\0';
lfs_mkdir(&lfs, (char*)buffer) => 0;
buffer[NAMEMULT] = '/';
for (int j = 0; j < NAMEMULT; j++) {
buffer[j+NAMEMULT+1] = '0'+i;
}
buffer[2*NAMEMULT+1] = '\0';
lfs_file_open(&lfs, &file, (char*)buffer,
LFS_O_WRONLY | LFS_O_CREAT) => 0;
size = NAMEMULT;
for (int j = 0; j < i*FILEMULT; j++) {
lfs_file_write(&lfs, &file, buffer, size) => size;
}
lfs_file_close(&lfs, &file) => 0;
}
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, &cfg) => 0;
for (int i = 1; i < 10; i++) {
for (int j = 0; j < NAMEMULT; j++) {
buffer[j] = '0'+i;
}
buffer[NAMEMULT] = '\0';
lfs_stat(&lfs, (char*)buffer, &info) => 0;
info.type => LFS_TYPE_DIR;
buffer[NAMEMULT] = '/';
for (int j = 0; j < NAMEMULT; j++) {
buffer[j+NAMEMULT+1] = '0'+i;
}
buffer[2*NAMEMULT+1] = '\0';
lfs_file_open(&lfs, &file, (char*)buffer, LFS_O_RDONLY) => 0;
size = NAMEMULT;
for (int j = 0; j < i*FILEMULT; j++) {
uint8_t rbuffer[1024];
lfs_file_read(&lfs, &file, rbuffer, size) => size;
memcmp(buffer, rbuffer, size) => 0;
}
lfs_file_close(&lfs, &file) => 0;
}
lfs_unmount(&lfs) => 0;
}
'''
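
The case above only exercises bad blocks that the simulated block device already knows about; on real hardware the filesystem only learns a block is bad when the block device's read/prog/erase callbacks report an error. Below is a minimal sketch of a prog callback that does this, where is_marked_bad and raw_prog are hypothetical stand-ins for the board's own bad-block bookkeeping and flash driver.

// Sketch of a block-device prog hook that reports a bad block instead of
// silently corrupting data. is_marked_bad() and raw_prog() are hypothetical
// placeholders, not part of littlefs.
#include <stdbool.h>
#include "lfs.h"

static bool is_marked_bad(lfs_block_t block) {
    (void)block;
    return false; // placeholder: consult the board's bad-block table here
}

static int raw_prog(lfs_block_t block, lfs_off_t off,
        const void *buffer, lfs_size_t size) {
    (void)block; (void)off; (void)buffer; (void)size;
    return 0; // placeholder: call the real flash driver here
}

static int bd_prog(const struct lfs_config *c, lfs_block_t block,
        lfs_off_t off, const void *buffer, lfs_size_t size) {
    (void)c;
    if (is_marked_bad(block)) {
        // reporting an error here is the same situation the
        // LFS_TESTBD_BADBLOCK_PROGERROR behavior above simulates
        return LFS_ERR_CORRUPT;
    }
    return raw_prog(block, off, buffer, size);
}

Reporting the failure from prog is what gives the filesystem a chance to move the data elsewhere rather than silently keeping it on a block that no longer programs correctly.
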
[[case]] # region corruption (causes cascading failures)
define.LFS_BLOCK_COUNT = 256 # small bd so test runs faster
define.LFS_ERASE_CYCLES = 0xffffffff
define.LFS_ERASE_VALUE = [0x00, 0xff, -1]
define.LFS_BADBLOCK_BEHAVIOR = [
'LFS_TESTBD_BADBLOCK_PROGERROR',
'LFS_TESTBD_BADBLOCK_ERASEERROR',
'LFS_TESTBD_BADBLOCK_READERROR',
'LFS_TESTBD_BADBLOCK_PROGNOOP',
'LFS_TESTBD_BADBLOCK_ERASENOOP',
]
define.NAMEMULT = 64
define.FILEMULT = 1
code = '''
for (lfs_block_t i = 0; i < (LFS_BLOCK_COUNT-2)/2; i++) {
lfs_testbd_setwear(&cfg, i+2, 0xffffffff) => 0;
}
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
for (int i = 1; i < 10; i++) {
for (int j = 0; j < NAMEMULT; j++) {
buffer[j] = '0'+i;
}
buffer[NAMEMULT] = '\0';
lfs_mkdir(&lfs, (char*)buffer) => 0;
buffer[NAMEMULT] = '/';
for (int j = 0; j < NAMEMULT; j++) {
buffer[j+NAMEMULT+1] = '0'+i;
}
buffer[2*NAMEMULT+1] = '\0';
lfs_file_open(&lfs, &file, (char*)buffer,
LFS_O_WRONLY | LFS_O_CREAT) => 0;
size = NAMEMULT;
for (int j = 0; j < i*FILEMULT; j++) {
lfs_file_write(&lfs, &file, buffer, size) => size;
}
lfs_file_close(&lfs, &file) => 0;
}
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, &cfg) => 0;
for (int i = 1; i < 10; i++) {
for (int j = 0; j < NAMEMULT; j++) {
buffer[j] = '0'+i;
}
buffer[NAMEMULT] = '\0';
lfs_stat(&lfs, (char*)buffer, &info) => 0;
info.type => LFS_TYPE_DIR;
buffer[NAMEMULT] = '/';
for (int j = 0; j < NAMEMULT; j++) {
buffer[j+NAMEMULT+1] = '0'+i;
}
buffer[2*NAMEMULT+1] = '\0';
lfs_file_open(&lfs, &file, (char*)buffer, LFS_O_RDONLY) => 0;
size = NAMEMULT;
for (int j = 0; j < i*FILEMULT; j++) {
uint8_t rbuffer[1024];
lfs_file_read(&lfs, &file, rbuffer, size) => size;
memcmp(buffer, rbuffer, size) => 0;
}
lfs_file_close(&lfs, &file) => 0;
}
lfs_unmount(&lfs) => 0;
'''
[[case]] # alternating corruption (causes cascading failures)
define.LFS_BLOCK_COUNT = 256 # small bd so test runs faster
define.LFS_ERASE_CYCLES = 0xffffffff
define.LFS_ERASE_VALUE = [0x00, 0xff, -1]
define.LFS_BADBLOCK_BEHAVIOR = [
'LFS_TESTBD_BADBLOCK_PROGERROR',
'LFS_TESTBD_BADBLOCK_ERASEERROR',
'LFS_TESTBD_BADBLOCK_READERROR',
'LFS_TESTBD_BADBLOCK_PROGNOOP',
'LFS_TESTBD_BADBLOCK_ERASENOOP',
]
define.NAMEMULT = 64
define.FILEMULT = 1
code = '''
for (lfs_block_t i = 0; i < (LFS_BLOCK_COUNT-2)/2; i++) {
lfs_testbd_setwear(&cfg, (2*i) + 2, 0xffffffff) => 0;
}
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
for (int i = 1; i < 10; i++) {
for (int j = 0; j < NAMEMULT; j++) {
buffer[j] = '0'+i;
}
buffer[NAMEMULT] = '\0';
lfs_mkdir(&lfs, (char*)buffer) => 0;
buffer[NAMEMULT] = '/';
for (int j = 0; j < NAMEMULT; j++) {
buffer[j+NAMEMULT+1] = '0'+i;
}
buffer[2*NAMEMULT+1] = '\0';
lfs_file_open(&lfs, &file, (char*)buffer,
LFS_O_WRONLY | LFS_O_CREAT) => 0;
size = NAMEMULT;
for (int j = 0; j < i*FILEMULT; j++) {
lfs_file_write(&lfs, &file, buffer, size) => size;
}
lfs_file_close(&lfs, &file) => 0;
}
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, &cfg) => 0;
for (int i = 1; i < 10; i++) {
for (int j = 0; j < NAMEMULT; j++) {
buffer[j] = '0'+i;
}
buffer[NAMEMULT] = '\0';
lfs_stat(&lfs, (char*)buffer, &info) => 0;
info.type => LFS_TYPE_DIR;
buffer[NAMEMULT] = '/';
for (int j = 0; j < NAMEMULT; j++) {
buffer[j+NAMEMULT+1] = '0'+i;
}
buffer[2*NAMEMULT+1] = '\0';
lfs_file_open(&lfs, &file, (char*)buffer, LFS_O_RDONLY) => 0;
size = NAMEMULT;
for (int j = 0; j < i*FILEMULT; j++) {
uint8_t rbuffer[1024];
lfs_file_read(&lfs, &file, rbuffer, size) => size;
memcmp(buffer, rbuffer, size) => 0;
}
lfs_file_close(&lfs, &file) => 0;
}
lfs_unmount(&lfs) => 0;
'''
# other corner cases
[[case]] # bad superblocks (corrupt 1 or 0)
define.LFS_ERASE_CYCLES = 0xffffffff
define.LFS_ERASE_VALUE = [0x00, 0xff, -1]
define.LFS_BADBLOCK_BEHAVIOR = [
'LFS_TESTBD_BADBLOCK_PROGERROR',
'LFS_TESTBD_BADBLOCK_ERASEERROR',
'LFS_TESTBD_BADBLOCK_READERROR',
'LFS_TESTBD_BADBLOCK_PROGNOOP',
'LFS_TESTBD_BADBLOCK_ERASENOOP',
]
code = '''
lfs_testbd_setwear(&cfg, 0, 0xffffffff) => 0;
lfs_testbd_setwear(&cfg, 1, 0xffffffff) => 0;
lfs_format(&lfs, &cfg) => LFS_ERR_NOSPC;
lfs_mount(&lfs, &cfg) => LFS_ERR_CORRUPT;
'''
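
This last case pins down the error returns when both superblock blocks are bad: format reports LFS_ERR_NOSPC and mount reports LFS_ERR_CORRUPT instead of tripping an assert. A sketch of the mount-or-format fallback an application might build on top of that, assuming the data is expendable enough to wipe on corruption:

// Minimal mount-or-format fallback, assuming a cfg set up elsewhere.
// Note this wipes the filesystem on corruption, which is only acceptable
// if the data is expendable or backed up elsewhere.
#include "lfs.h"

int mount_or_format(lfs_t *lfs, const struct lfs_config *cfg) {
    int err = lfs_mount(lfs, cfg);
    if (err == LFS_ERR_CORRUPT) {
        // mount says the superblocks are unusable; try a fresh format
        err = lfs_format(lfs, cfg);
        if (err) {
            return err; // e.g. LFS_ERR_NOSPC if no block will hold a superblock
        }
        err = lfs_mount(lfs, cfg);
    }
    return err;
}
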

tests/test_corrupt.sh Deleted file

@@ -1,118 +0,0 @@
#!/bin/bash
set -eu
echo "=== Corrupt tests ==="
NAMEMULT=64
FILEMULT=1
lfs_mktree() {
tests/test.py ${1:-} << TEST
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
for (int i = 1; i < 10; i++) {
for (int j = 0; j < $NAMEMULT; j++) {
buffer[j] = '0'+i;
}
buffer[$NAMEMULT] = '\0';
lfs_mkdir(&lfs, (char*)buffer) => 0;
buffer[$NAMEMULT] = '/';
for (int j = 0; j < $NAMEMULT; j++) {
buffer[j+$NAMEMULT+1] = '0'+i;
}
buffer[2*$NAMEMULT+1] = '\0';
lfs_file_open(&lfs, &file[0], (char*)buffer,
LFS_O_WRONLY | LFS_O_CREAT) => 0;
size = $NAMEMULT;
for (int j = 0; j < i*$FILEMULT; j++) {
lfs_file_write(&lfs, &file[0], buffer, size) => size;
}
lfs_file_close(&lfs, &file[0]) => 0;
}
lfs_unmount(&lfs) => 0;
TEST
}
lfs_chktree() {
tests/test.py ${1:-} << TEST
lfs_mount(&lfs, &cfg) => 0;
for (int i = 1; i < 10; i++) {
for (int j = 0; j < $NAMEMULT; j++) {
buffer[j] = '0'+i;
}
buffer[$NAMEMULT] = '\0';
lfs_stat(&lfs, (char*)buffer, &info) => 0;
info.type => LFS_TYPE_DIR;
buffer[$NAMEMULT] = '/';
for (int j = 0; j < $NAMEMULT; j++) {
buffer[j+$NAMEMULT+1] = '0'+i;
}
buffer[2*$NAMEMULT+1] = '\0';
lfs_file_open(&lfs, &file[0], (char*)buffer, LFS_O_RDONLY) => 0;
size = $NAMEMULT;
for (int j = 0; j < i*$FILEMULT; j++) {
lfs_file_read(&lfs, &file[0], rbuffer, size) => size;
memcmp(buffer, rbuffer, size) => 0;
}
lfs_file_close(&lfs, &file[0]) => 0;
}
lfs_unmount(&lfs) => 0;
TEST
}
echo "--- Sanity check ---"
rm -rf blocks
lfs_mktree
lfs_chktree
BLOCKS="$(ls blocks | grep -vw '[01]')"
echo "--- Block corruption ---"
for b in $BLOCKS
do
rm -rf blocks
mkdir blocks
ln -s /dev/zero blocks/$b
lfs_mktree
lfs_chktree
done
echo "--- Block persistance ---"
for b in $BLOCKS
do
rm -rf blocks
mkdir blocks
lfs_mktree
chmod a-w blocks/$b || true
lfs_mktree
lfs_chktree
done
echo "--- Big region corruption ---"
rm -rf blocks
mkdir blocks
for i in {2..512}
do
ln -s /dev/zero blocks/$(printf '%x' $i)
done
lfs_mktree
lfs_chktree
echo "--- Alternating corruption ---"
rm -rf blocks
mkdir blocks
for i in {2..1024..2}
do
ln -s /dev/zero blocks/$(printf '%x' $i)
done
lfs_mktree
lfs_chktree
echo "--- Results ---"
tests/stats.py

tests/test_dirs.sh Deleted file

@@ -1,484 +0,0 @@
#!/bin/bash
set -eu
LARGESIZE=128
echo "=== Directory tests ==="
rm -rf blocks
tests/test.py << TEST
lfs_format(&lfs, &cfg) => 0;
TEST
echo "--- Root directory ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_dir_open(&lfs, &dir[0], "/") => 0;
lfs_dir_close(&lfs, &dir[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Directory creation ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_mkdir(&lfs, "potato") => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- File creation ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file[0], "burito", LFS_O_CREAT | LFS_O_WRONLY) => 0;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Directory iteration ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_dir_open(&lfs, &dir[0], "/") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "burito") => 0;
info.type => LFS_TYPE_REG;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "potato") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 0;
lfs_dir_close(&lfs, &dir[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Directory failures ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_mkdir(&lfs, "potato") => LFS_ERR_EXIST;
lfs_dir_open(&lfs, &dir[0], "tomato") => LFS_ERR_NOENT;
lfs_dir_open(&lfs, &dir[0], "burito") => LFS_ERR_NOTDIR;
lfs_file_open(&lfs, &file[0], "tomato", LFS_O_RDONLY) => LFS_ERR_NOENT;
lfs_file_open(&lfs, &file[0], "potato", LFS_O_RDONLY) => LFS_ERR_ISDIR;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Nested directories ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_mkdir(&lfs, "potato/baked") => 0;
lfs_mkdir(&lfs, "potato/sweet") => 0;
lfs_mkdir(&lfs, "potato/fried") => 0;
lfs_unmount(&lfs) => 0;
TEST
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_dir_open(&lfs, &dir[0], "potato") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "baked") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "fried") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "sweet") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 0;
lfs_dir_close(&lfs, &dir[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Multi-block directory ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_mkdir(&lfs, "cactus") => 0;
for (int i = 0; i < $LARGESIZE; i++) {
sprintf((char*)buffer, "cactus/test%03d", i);
lfs_mkdir(&lfs, (char*)buffer) => 0;
}
lfs_unmount(&lfs) => 0;
TEST
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_dir_open(&lfs, &dir[0], "cactus") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
info.type => LFS_TYPE_DIR;
for (int i = 0; i < $LARGESIZE; i++) {
sprintf((char*)buffer, "test%03d", i);
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, (char*)buffer) => 0;
info.type => LFS_TYPE_DIR;
}
lfs_dir_read(&lfs, &dir[0], &info) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Directory remove ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_remove(&lfs, "potato") => LFS_ERR_NOTEMPTY;
lfs_remove(&lfs, "potato/sweet") => 0;
lfs_remove(&lfs, "potato/baked") => 0;
lfs_remove(&lfs, "potato/fried") => 0;
lfs_dir_open(&lfs, &dir[0], "potato") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 0;
lfs_dir_close(&lfs, &dir[0]) => 0;
lfs_remove(&lfs, "potato") => 0;
lfs_dir_open(&lfs, &dir[0], "/") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "burito") => 0;
info.type => LFS_TYPE_REG;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "cactus") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 0;
lfs_dir_close(&lfs, &dir[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_dir_open(&lfs, &dir[0], "/") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "burito") => 0;
info.type => LFS_TYPE_REG;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "cactus") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 0;
lfs_dir_close(&lfs, &dir[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Directory rename ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_mkdir(&lfs, "coldpotato") => 0;
lfs_mkdir(&lfs, "coldpotato/baked") => 0;
lfs_mkdir(&lfs, "coldpotato/sweet") => 0;
lfs_mkdir(&lfs, "coldpotato/fried") => 0;
lfs_unmount(&lfs) => 0;
TEST
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_rename(&lfs, "coldpotato", "hotpotato") => 0;
lfs_unmount(&lfs) => 0;
TEST
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_dir_open(&lfs, &dir[0], "hotpotato") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "baked") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "fried") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "sweet") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 0;
lfs_dir_close(&lfs, &dir[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_mkdir(&lfs, "warmpotato") => 0;
lfs_mkdir(&lfs, "warmpotato/mushy") => 0;
lfs_rename(&lfs, "hotpotato", "warmpotato") => LFS_ERR_NOTEMPTY;
lfs_remove(&lfs, "warmpotato/mushy") => 0;
lfs_rename(&lfs, "hotpotato", "warmpotato") => 0;
lfs_unmount(&lfs) => 0;
TEST
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_dir_open(&lfs, &dir[0], "warmpotato") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "baked") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "fried") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "sweet") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 0;
lfs_dir_close(&lfs, &dir[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_mkdir(&lfs, "coldpotato") => 0;
lfs_rename(&lfs, "warmpotato/baked", "coldpotato/baked") => 0;
lfs_rename(&lfs, "warmpotato/sweet", "coldpotato/sweet") => 0;
lfs_rename(&lfs, "warmpotato/fried", "coldpotato/fried") => 0;
lfs_remove(&lfs, "coldpotato") => LFS_ERR_NOTEMPTY;
lfs_remove(&lfs, "warmpotato") => 0;
lfs_unmount(&lfs) => 0;
TEST
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_dir_open(&lfs, &dir[0], "coldpotato") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "baked") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "fried") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "sweet") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 0;
lfs_dir_close(&lfs, &dir[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Recursive remove ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_remove(&lfs, "coldpotato") => LFS_ERR_NOTEMPTY;
lfs_dir_open(&lfs, &dir[0], "coldpotato") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
while (true) {
int err = lfs_dir_read(&lfs, &dir[0], &info);
err >= 0 => 1;
if (err == 0) {
break;
}
strcpy((char*)buffer, "coldpotato/");
strcat((char*)buffer, info.name);
lfs_remove(&lfs, (char*)buffer) => 0;
}
lfs_remove(&lfs, "coldpotato") => 0;
TEST
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_dir_open(&lfs, &dir[0], "/") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "burito") => 0;
info.type => LFS_TYPE_REG;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "cactus") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 0;
lfs_dir_close(&lfs, &dir[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Multi-block rename ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
for (int i = 0; i < $LARGESIZE; i++) {
sprintf((char*)buffer, "cactus/test%03d", i);
sprintf((char*)wbuffer, "cactus/tedd%03d", i);
lfs_rename(&lfs, (char*)buffer, (char*)wbuffer) => 0;
}
lfs_unmount(&lfs) => 0;
TEST
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_dir_open(&lfs, &dir[0], "cactus") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
info.type => LFS_TYPE_DIR;
for (int i = 0; i < $LARGESIZE; i++) {
sprintf((char*)buffer, "tedd%03d", i);
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, (char*)buffer) => 0;
info.type => LFS_TYPE_DIR;
}
lfs_dir_read(&lfs, &dir[0], &info) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Multi-block remove ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_remove(&lfs, "cactus") => LFS_ERR_NOTEMPTY;
for (int i = 0; i < $LARGESIZE; i++) {
sprintf((char*)buffer, "cactus/tedd%03d", i);
lfs_remove(&lfs, (char*)buffer) => 0;
}
lfs_remove(&lfs, "cactus") => 0;
lfs_unmount(&lfs) => 0;
TEST
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_dir_open(&lfs, &dir[0], "/") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "burito") => 0;
info.type => LFS_TYPE_REG;
lfs_dir_read(&lfs, &dir[0], &info) => 0;
lfs_dir_close(&lfs, &dir[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Multi-block directory with files ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_mkdir(&lfs, "prickly-pear") => 0;
for (int i = 0; i < $LARGESIZE; i++) {
sprintf((char*)buffer, "prickly-pear/test%03d", i);
lfs_file_open(&lfs, &file[0], (char*)buffer,
LFS_O_WRONLY | LFS_O_CREAT) => 0;
size = 6;
memcpy(wbuffer, "Hello", size);
lfs_file_write(&lfs, &file[0], wbuffer, size) => size;
lfs_file_close(&lfs, &file[0]) => 0;
}
lfs_unmount(&lfs) => 0;
TEST
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_dir_open(&lfs, &dir[0], "prickly-pear") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
info.type => LFS_TYPE_DIR;
for (int i = 0; i < $LARGESIZE; i++) {
sprintf((char*)buffer, "test%03d", i);
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, (char*)buffer) => 0;
info.type => LFS_TYPE_REG;
info.size => 6;
}
lfs_dir_read(&lfs, &dir[0], &info) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Multi-block rename with files ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
for (int i = 0; i < $LARGESIZE; i++) {
sprintf((char*)buffer, "prickly-pear/test%03d", i);
sprintf((char*)wbuffer, "prickly-pear/tedd%03d", i);
lfs_rename(&lfs, (char*)buffer, (char*)wbuffer) => 0;
}
lfs_unmount(&lfs) => 0;
TEST
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_dir_open(&lfs, &dir[0], "prickly-pear") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
info.type => LFS_TYPE_DIR;
for (int i = 0; i < $LARGESIZE; i++) {
sprintf((char*)buffer, "tedd%03d", i);
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, (char*)buffer) => 0;
info.type => LFS_TYPE_REG;
info.size => 6;
}
lfs_dir_read(&lfs, &dir[0], &info) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Multi-block remove with files ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_remove(&lfs, "prickly-pear") => LFS_ERR_NOTEMPTY;
for (int i = 0; i < $LARGESIZE; i++) {
sprintf((char*)buffer, "prickly-pear/tedd%03d", i);
lfs_remove(&lfs, (char*)buffer) => 0;
}
lfs_remove(&lfs, "prickly-pear") => 0;
lfs_unmount(&lfs) => 0;
TEST
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_dir_open(&lfs, &dir[0], "/") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "burito") => 0;
info.type => LFS_TYPE_REG;
lfs_dir_read(&lfs, &dir[0], &info) => 0;
lfs_dir_close(&lfs, &dir[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Results ---"
tests/stats.py

tests/test_dirs.toml Normal file

@@ -0,0 +1,843 @@
[[case]] # root
code = '''
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_dir_open(&lfs, &dir, "/") => 0;
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(info.type == LFS_TYPE_DIR);
assert(strcmp(info.name, ".") == 0);
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(info.type == LFS_TYPE_DIR);
assert(strcmp(info.name, "..") == 0);
lfs_dir_read(&lfs, &dir, &info) => 0;
lfs_dir_close(&lfs, &dir) => 0;
lfs_unmount(&lfs) => 0;
'''
[[case]] # many directory creation
define.N = 'range(0, 100, 3)'
code = '''
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
for (int i = 0; i < N; i++) {
sprintf(path, "dir%03d", i);
lfs_mkdir(&lfs, path) => 0;
}
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_dir_open(&lfs, &dir, "/") => 0;
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(info.type == LFS_TYPE_DIR);
assert(strcmp(info.name, ".") == 0);
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(info.type == LFS_TYPE_DIR);
assert(strcmp(info.name, "..") == 0);
for (int i = 0; i < N; i++) {
sprintf(path, "dir%03d", i);
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(info.type == LFS_TYPE_DIR);
assert(strcmp(info.name, path) == 0);
}
lfs_dir_read(&lfs, &dir, &info) => 0;
lfs_dir_close(&lfs, &dir) => 0;
lfs_unmount(&lfs) => 0;
'''
[[case]] # many directory removal
define.N = 'range(3, 100, 11)'
code = '''
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
for (int i = 0; i < N; i++) {
sprintf(path, "removeme%03d", i);
lfs_mkdir(&lfs, path) => 0;
}
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_dir_open(&lfs, &dir, "/") => 0;
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(info.type == LFS_TYPE_DIR);
assert(strcmp(info.name, ".") == 0);
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(info.type == LFS_TYPE_DIR);
assert(strcmp(info.name, "..") == 0);
for (int i = 0; i < N; i++) {
sprintf(path, "removeme%03d", i);
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(info.type == LFS_TYPE_DIR);
assert(strcmp(info.name, path) == 0);
}
lfs_dir_read(&lfs, &dir, &info) => 0;
lfs_dir_close(&lfs, &dir) => 0;
lfs_unmount(&lfs);
lfs_mount(&lfs, &cfg) => 0;
for (int i = 0; i < N; i++) {
sprintf(path, "removeme%03d", i);
lfs_remove(&lfs, path) => 0;
}
lfs_unmount(&lfs);
lfs_mount(&lfs, &cfg) => 0;
lfs_dir_open(&lfs, &dir, "/") => 0;
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(info.type == LFS_TYPE_DIR);
assert(strcmp(info.name, ".") == 0);
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(info.type == LFS_TYPE_DIR);
assert(strcmp(info.name, "..") == 0);
lfs_dir_read(&lfs, &dir, &info) => 0;
lfs_dir_close(&lfs, &dir) => 0;
lfs_unmount(&lfs) => 0;
'''
[[case]] # many directory rename
define.N = 'range(3, 100, 11)'
code = '''
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
for (int i = 0; i < N; i++) {
sprintf(path, "test%03d", i);
lfs_mkdir(&lfs, path) => 0;
}
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_dir_open(&lfs, &dir, "/") => 0;
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(info.type == LFS_TYPE_DIR);
assert(strcmp(info.name, ".") == 0);
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(info.type == LFS_TYPE_DIR);
assert(strcmp(info.name, "..") == 0);
for (int i = 0; i < N; i++) {
sprintf(path, "test%03d", i);
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(info.type == LFS_TYPE_DIR);
assert(strcmp(info.name, path) == 0);
}
lfs_dir_read(&lfs, &dir, &info) => 0;
lfs_dir_close(&lfs, &dir) => 0;
lfs_unmount(&lfs);
lfs_mount(&lfs, &cfg) => 0;
for (int i = 0; i < N; i++) {
char oldpath[128];
char newpath[128];
sprintf(oldpath, "test%03d", i);
sprintf(newpath, "tedd%03d", i);
lfs_rename(&lfs, oldpath, newpath) => 0;
}
lfs_unmount(&lfs);
lfs_mount(&lfs, &cfg) => 0;
lfs_dir_open(&lfs, &dir, "/") => 0;
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(info.type == LFS_TYPE_DIR);
assert(strcmp(info.name, ".") == 0);
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(info.type == LFS_TYPE_DIR);
assert(strcmp(info.name, "..") == 0);
for (int i = 0; i < N; i++) {
sprintf(path, "tedd%03d", i);
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(info.type == LFS_TYPE_DIR);
assert(strcmp(info.name, path) == 0);
}
lfs_dir_read(&lfs, &dir, &info) => 0;
lfs_dir_close(&lfs, &dir) => 0;
lfs_unmount(&lfs);
'''
[[case]] # reentrant many directory creation/rename/removal
define.N = [5, 11]
reentrant = true
code = '''
err = lfs_mount(&lfs, &cfg);
if (err) {
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
}
for (int i = 0; i < N; i++) {
sprintf(path, "hi%03d", i);
err = lfs_mkdir(&lfs, path);
assert(err == 0 || err == LFS_ERR_EXIST);
}
for (int i = 0; i < N; i++) {
sprintf(path, "hello%03d", i);
err = lfs_remove(&lfs, path);
assert(err == 0 || err == LFS_ERR_NOENT);
}
lfs_dir_open(&lfs, &dir, "/") => 0;
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(info.type == LFS_TYPE_DIR);
assert(strcmp(info.name, ".") == 0);
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(info.type == LFS_TYPE_DIR);
assert(strcmp(info.name, "..") == 0);
for (int i = 0; i < N; i++) {
sprintf(path, "hi%03d", i);
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(info.type == LFS_TYPE_DIR);
assert(strcmp(info.name, path) == 0);
}
lfs_dir_read(&lfs, &dir, &info) => 0;
lfs_dir_close(&lfs, &dir) => 0;
for (int i = 0; i < N; i++) {
char oldpath[128];
char newpath[128];
sprintf(oldpath, "hi%03d", i);
sprintf(newpath, "hello%03d", i);
// YES this can overwrite an existing newpath
lfs_rename(&lfs, oldpath, newpath) => 0;
}
lfs_dir_open(&lfs, &dir, "/") => 0;
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(info.type == LFS_TYPE_DIR);
assert(strcmp(info.name, ".") == 0);
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(info.type == LFS_TYPE_DIR);
assert(strcmp(info.name, "..") == 0);
for (int i = 0; i < N; i++) {
sprintf(path, "hello%03d", i);
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(info.type == LFS_TYPE_DIR);
assert(strcmp(info.name, path) == 0);
}
lfs_dir_read(&lfs, &dir, &info) => 0;
lfs_dir_close(&lfs, &dir) => 0;
for (int i = 0; i < N; i++) {
sprintf(path, "hello%03d", i);
lfs_remove(&lfs, path) => 0;
}
lfs_dir_open(&lfs, &dir, "/") => 0;
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(info.type == LFS_TYPE_DIR);
assert(strcmp(info.name, ".") == 0);
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(info.type == LFS_TYPE_DIR);
assert(strcmp(info.name, "..") == 0);
lfs_dir_read(&lfs, &dir, &info) => 0;
lfs_dir_close(&lfs, &dir) => 0;
lfs_unmount(&lfs) => 0;
'''
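
The reentrant case above depends on lfs_rename replacing an existing destination (the "YES this can overwrite an existing newpath" comment). A small standalone illustration of that semantic, written in the same test style but not one of the cases in this diff:

// sketch: renaming onto an existing (empty) directory replaces it
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_mkdir(&lfs, "old") => 0;
lfs_mkdir(&lfs, "new") => 0;
// "new" already exists, but the rename still succeeds and "old" is gone
lfs_rename(&lfs, "old", "new") => 0;
lfs_stat(&lfs, "old", &info) => LFS_ERR_NOENT;
lfs_stat(&lfs, "new", &info) => 0;
assert(info.type == LFS_TYPE_DIR);
lfs_unmount(&lfs) => 0;
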
[[case]] # file creation
define.N = 'range(3, 100, 11)'
code = '''
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
for (int i = 0; i < N; i++) {
sprintf(path, "file%03d", i);
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_EXCL) => 0;
lfs_file_close(&lfs, &file) => 0;
}
// TODO rm me
lfs_mkdir(&lfs, "a") => 0;
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_dir_open(&lfs, &dir, "/") => 0;
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(info.type == LFS_TYPE_DIR);
assert(strcmp(info.name, ".") == 0);
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(info.type == LFS_TYPE_DIR);
assert(strcmp(info.name, "..") == 0);
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(info.type == LFS_TYPE_DIR);
assert(strcmp(info.name, "a") == 0);
for (int i = 0; i < N; i++) {
sprintf(path, "file%03d", i);
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(info.type == LFS_TYPE_REG);
assert(strcmp(info.name, path) == 0);
}
lfs_dir_read(&lfs, &dir, &info) => 0;
lfs_dir_close(&lfs, &dir) => 0;
lfs_unmount(&lfs);
'''
[[case]] # file removal
define.N = 'range(0, 100, 3)'
code = '''
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
for (int i = 0; i < N; i++) {
sprintf(path, "removeme%03d", i);
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_EXCL) => 0;
lfs_file_close(&lfs, &file) => 0;
}
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_dir_open(&lfs, &dir, "/") => 0;
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(info.type == LFS_TYPE_DIR);
assert(strcmp(info.name, ".") == 0);
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(info.type == LFS_TYPE_DIR);
assert(strcmp(info.name, "..") == 0);
for (int i = 0; i < N; i++) {
sprintf(path, "removeme%03d", i);
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(info.type == LFS_TYPE_REG);
assert(strcmp(info.name, path) == 0);
}
lfs_dir_read(&lfs, &dir, &info) => 0;
lfs_dir_close(&lfs, &dir) => 0;
lfs_unmount(&lfs);
lfs_mount(&lfs, &cfg) => 0;
for (int i = 0; i < N; i++) {
sprintf(path, "removeme%03d", i);
lfs_remove(&lfs, path) => 0;
}
lfs_unmount(&lfs);
lfs_mount(&lfs, &cfg) => 0;
lfs_dir_open(&lfs, &dir, "/") => 0;
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(info.type == LFS_TYPE_DIR);
assert(strcmp(info.name, ".") == 0);
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(info.type == LFS_TYPE_DIR);
assert(strcmp(info.name, "..") == 0);
lfs_dir_read(&lfs, &dir, &info) => 0;
lfs_dir_close(&lfs, &dir) => 0;
lfs_unmount(&lfs) => 0;
'''
[[case]] # file rename
define.N = 'range(0, 100, 3)'
code = '''
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
for (int i = 0; i < N; i++) {
sprintf(path, "test%03d", i);
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_EXCL) => 0;
lfs_file_close(&lfs, &file) => 0;
}
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_dir_open(&lfs, &dir, "/") => 0;
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(info.type == LFS_TYPE_DIR);
assert(strcmp(info.name, ".") == 0);
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(info.type == LFS_TYPE_DIR);
assert(strcmp(info.name, "..") == 0);
for (int i = 0; i < N; i++) {
sprintf(path, "test%03d", i);
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(info.type == LFS_TYPE_REG);
assert(strcmp(info.name, path) == 0);
}
lfs_dir_read(&lfs, &dir, &info) => 0;
lfs_dir_close(&lfs, &dir) => 0;
lfs_unmount(&lfs);
lfs_mount(&lfs, &cfg) => 0;
for (int i = 0; i < N; i++) {
char oldpath[128];
char newpath[128];
sprintf(oldpath, "test%03d", i);
sprintf(newpath, "tedd%03d", i);
lfs_rename(&lfs, oldpath, newpath) => 0;
}
lfs_unmount(&lfs);
lfs_mount(&lfs, &cfg) => 0;
lfs_dir_open(&lfs, &dir, "/") => 0;
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(info.type == LFS_TYPE_DIR);
assert(strcmp(info.name, ".") == 0);
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(info.type == LFS_TYPE_DIR);
assert(strcmp(info.name, "..") == 0);
for (int i = 0; i < N; i++) {
sprintf(path, "tedd%03d", i);
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(info.type == LFS_TYPE_REG);
assert(strcmp(info.name, path) == 0);
}
lfs_dir_read(&lfs, &dir, &info) => 0;
lfs_dir_close(&lfs, &dir) => 0;
lfs_unmount(&lfs);
'''
[[case]] # reentrant file creation/rename/removal
define.N = [5, 25]
reentrant = true
code = '''
err = lfs_mount(&lfs, &cfg);
if (err) {
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
}
for (int i = 0; i < N; i++) {
sprintf(path, "hi%03d", i);
lfs_file_open(&lfs, &file, path, LFS_O_CREAT | LFS_O_WRONLY) => 0;
lfs_file_close(&lfs, &file) => 0;
}
for (int i = 0; i < N; i++) {
sprintf(path, "hello%03d", i);
err = lfs_remove(&lfs, path);
assert(err == 0 || err == LFS_ERR_NOENT);
}
lfs_dir_open(&lfs, &dir, "/") => 0;
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(info.type == LFS_TYPE_DIR);
assert(strcmp(info.name, ".") == 0);
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(info.type == LFS_TYPE_DIR);
assert(strcmp(info.name, "..") == 0);
for (int i = 0; i < N; i++) {
sprintf(path, "hi%03d", i);
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(info.type == LFS_TYPE_REG);
assert(strcmp(info.name, path) == 0);
}
lfs_dir_read(&lfs, &dir, &info) => 0;
lfs_dir_close(&lfs, &dir) => 0;
for (int i = 0; i < N; i++) {
char oldpath[128];
char newpath[128];
sprintf(oldpath, "hi%03d", i);
sprintf(newpath, "hello%03d", i);
// YES this can overwrite an existing newpath
lfs_rename(&lfs, oldpath, newpath) => 0;
}
lfs_dir_open(&lfs, &dir, "/") => 0;
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(info.type == LFS_TYPE_DIR);
assert(strcmp(info.name, ".") == 0);
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(info.type == LFS_TYPE_DIR);
assert(strcmp(info.name, "..") == 0);
for (int i = 0; i < N; i++) {
sprintf(path, "hello%03d", i);
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(info.type == LFS_TYPE_REG);
assert(strcmp(info.name, path) == 0);
}
lfs_dir_read(&lfs, &dir, &info) => 0;
lfs_dir_close(&lfs, &dir) => 0;
for (int i = 0; i < N; i++) {
sprintf(path, "hello%03d", i);
lfs_remove(&lfs, path) => 0;
}
lfs_dir_open(&lfs, &dir, "/") => 0;
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(info.type == LFS_TYPE_DIR);
assert(strcmp(info.name, ".") == 0);
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(info.type == LFS_TYPE_DIR);
assert(strcmp(info.name, "..") == 0);
lfs_dir_read(&lfs, &dir, &info) => 0;
lfs_dir_close(&lfs, &dir) => 0;
lfs_unmount(&lfs) => 0;
'''
[[case]] # nested directories
code = '''
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_mkdir(&lfs, "potato") => 0;
lfs_file_open(&lfs, &file, "burito",
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_EXCL) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_mkdir(&lfs, "potato/baked") => 0;
lfs_mkdir(&lfs, "potato/sweet") => 0;
lfs_mkdir(&lfs, "potato/fried") => 0;
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_dir_open(&lfs, &dir, "potato") => 0;
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(strcmp(info.name, ".") == 0);
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(strcmp(info.name, "..") == 0);
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(strcmp(info.name, "baked") == 0);
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(strcmp(info.name, "fried") == 0);
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(strcmp(info.name, "sweet") == 0);
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir, &info) => 0;
lfs_dir_close(&lfs, &dir) => 0;
lfs_unmount(&lfs) => 0;
// try removing?
lfs_mount(&lfs, &cfg) => 0;
lfs_remove(&lfs, "potato") => LFS_ERR_NOTEMPTY;
lfs_unmount(&lfs) => 0;
// try renaming?
lfs_mount(&lfs, &cfg) => 0;
lfs_rename(&lfs, "potato", "coldpotato") => 0;
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_rename(&lfs, "coldpotato", "warmpotato") => 0;
lfs_rename(&lfs, "warmpotato", "hotpotato") => 0;
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_remove(&lfs, "potato") => LFS_ERR_NOENT;
lfs_remove(&lfs, "coldpotato") => LFS_ERR_NOENT;
lfs_remove(&lfs, "warmpotato") => LFS_ERR_NOENT;
lfs_remove(&lfs, "hotpotato") => LFS_ERR_NOTEMPTY;
lfs_unmount(&lfs) => 0;
// try cross-directory renaming
lfs_mount(&lfs, &cfg) => 0;
lfs_mkdir(&lfs, "coldpotato") => 0;
lfs_rename(&lfs, "hotpotato/baked", "coldpotato/baked") => 0;
lfs_rename(&lfs, "coldpotato", "hotpotato") => LFS_ERR_NOTEMPTY;
lfs_remove(&lfs, "coldpotato") => LFS_ERR_NOTEMPTY;
lfs_remove(&lfs, "hotpotato") => LFS_ERR_NOTEMPTY;
lfs_rename(&lfs, "hotpotato/fried", "coldpotato/fried") => 0;
lfs_rename(&lfs, "coldpotato", "hotpotato") => LFS_ERR_NOTEMPTY;
lfs_remove(&lfs, "coldpotato") => LFS_ERR_NOTEMPTY;
lfs_remove(&lfs, "hotpotato") => LFS_ERR_NOTEMPTY;
lfs_rename(&lfs, "hotpotato/sweet", "coldpotato/sweet") => 0;
lfs_rename(&lfs, "coldpotato", "hotpotato") => 0;
lfs_remove(&lfs, "coldpotato") => LFS_ERR_NOENT;
lfs_remove(&lfs, "hotpotato") => LFS_ERR_NOTEMPTY;
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_dir_open(&lfs, &dir, "hotpotato") => 0;
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(strcmp(info.name, ".") == 0);
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(strcmp(info.name, "..") == 0);
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(strcmp(info.name, "baked") == 0);
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(strcmp(info.name, "fried") == 0);
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(strcmp(info.name, "sweet") == 0);
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir, &info) => 0;
lfs_dir_close(&lfs, &dir) => 0;
lfs_unmount(&lfs) => 0;
// final remove
lfs_mount(&lfs, &cfg) => 0;
lfs_remove(&lfs, "hotpotato") => LFS_ERR_NOTEMPTY;
lfs_remove(&lfs, "hotpotato/baked") => 0;
lfs_remove(&lfs, "hotpotato") => LFS_ERR_NOTEMPTY;
lfs_remove(&lfs, "hotpotato/fried") => 0;
lfs_remove(&lfs, "hotpotato") => LFS_ERR_NOTEMPTY;
lfs_remove(&lfs, "hotpotato/sweet") => 0;
lfs_remove(&lfs, "hotpotato") => 0;
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_dir_open(&lfs, &dir, "/") => 0;
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(strcmp(info.name, ".") == 0);
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(strcmp(info.name, "..") == 0);
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(strcmp(info.name, "burito") == 0);
info.type => LFS_TYPE_REG;
lfs_dir_read(&lfs, &dir, &info) => 0;
lfs_dir_close(&lfs, &dir) => 0;
lfs_unmount(&lfs) => 0;
'''
[[case]] # recursive remove
define.N = [10, 100]
code = '''
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_mkdir(&lfs, "prickly-pear") => 0;
for (int i = 0; i < N; i++) {
sprintf(path, "prickly-pear/cactus%03d", i);
lfs_mkdir(&lfs, path) => 0;
}
lfs_dir_open(&lfs, &dir, "prickly-pear") => 0;
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(info.type == LFS_TYPE_DIR);
assert(strcmp(info.name, ".") == 0);
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(info.type == LFS_TYPE_DIR);
assert(strcmp(info.name, "..") == 0);
for (int i = 0; i < N; i++) {
sprintf(path, "cactus%03d", i);
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(info.type == LFS_TYPE_DIR);
assert(strcmp(info.name, path) == 0);
}
lfs_dir_read(&lfs, &dir, &info) => 0;
lfs_dir_close(&lfs, &dir) => 0;
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_remove(&lfs, "prickly-pear") => LFS_ERR_NOTEMPTY;
lfs_dir_open(&lfs, &dir, "prickly-pear") => 0;
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(info.type == LFS_TYPE_DIR);
assert(strcmp(info.name, ".") == 0);
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(info.type == LFS_TYPE_DIR);
assert(strcmp(info.name, "..") == 0);
for (int i = 0; i < N; i++) {
sprintf(path, "cactus%03d", i);
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(info.type == LFS_TYPE_DIR);
assert(strcmp(info.name, path) == 0);
sprintf(path, "prickly-pear/%s", info.name);
lfs_remove(&lfs, path) => 0;
}
lfs_dir_read(&lfs, &dir, &info) => 0;
lfs_dir_close(&lfs, &dir) => 0;
lfs_remove(&lfs, "prickly-pear") => 0;
lfs_remove(&lfs, "prickly-pear") => LFS_ERR_NOENT;
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_remove(&lfs, "prickly-pear") => LFS_ERR_NOENT;
lfs_unmount(&lfs) => 0;
'''
[[case]] # other error cases
code = '''
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_mkdir(&lfs, "potato") => 0;
lfs_file_open(&lfs, &file, "burito",
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_EXCL) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_mkdir(&lfs, "potato") => LFS_ERR_EXIST;
lfs_mkdir(&lfs, "burito") => LFS_ERR_EXIST;
lfs_file_open(&lfs, &file, "burito",
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_EXCL) => LFS_ERR_EXIST;
lfs_file_open(&lfs, &file, "potato",
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_EXCL) => LFS_ERR_EXIST;
lfs_dir_open(&lfs, &dir, "tomato") => LFS_ERR_NOENT;
lfs_dir_open(&lfs, &dir, "burito") => LFS_ERR_NOTDIR;
lfs_file_open(&lfs, &file, "tomato", LFS_O_RDONLY) => LFS_ERR_NOENT;
lfs_file_open(&lfs, &file, "potato", LFS_O_RDONLY) => LFS_ERR_ISDIR;
lfs_file_open(&lfs, &file, "tomato", LFS_O_WRONLY) => LFS_ERR_NOENT;
lfs_file_open(&lfs, &file, "potato", LFS_O_WRONLY) => LFS_ERR_ISDIR;
lfs_file_open(&lfs, &file, "potato",
LFS_O_WRONLY | LFS_O_CREAT) => LFS_ERR_ISDIR;
lfs_mkdir(&lfs, "/") => LFS_ERR_EXIST;
lfs_file_open(&lfs, &file, "/",
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_EXCL) => LFS_ERR_EXIST;
lfs_file_open(&lfs, &file, "/", LFS_O_RDONLY) => LFS_ERR_ISDIR;
lfs_file_open(&lfs, &file, "/", LFS_O_WRONLY) => LFS_ERR_ISDIR;
lfs_file_open(&lfs, &file, "/",
LFS_O_WRONLY | LFS_O_CREAT) => LFS_ERR_ISDIR;
// check that errors did not corrupt directory
lfs_dir_open(&lfs, &dir, "/") => 0;
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(info.type == LFS_TYPE_DIR);
assert(strcmp(info.name, ".") == 0);
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(info.type == LFS_TYPE_DIR);
assert(strcmp(info.name, "..") == 0);
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(info.type == LFS_TYPE_REG);
assert(strcmp(info.name, "burito") == 0);
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(info.type == LFS_TYPE_DIR);
assert(strcmp(info.name, "potato") == 0);
lfs_dir_read(&lfs, &dir, &info) => 0;
lfs_dir_close(&lfs, &dir) => 0;
lfs_unmount(&lfs) => 0;
// or on disk
lfs_mount(&lfs, &cfg) => 0;
lfs_dir_open(&lfs, &dir, "/") => 0;
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(info.type == LFS_TYPE_DIR);
assert(strcmp(info.name, ".") == 0);
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(info.type == LFS_TYPE_DIR);
assert(strcmp(info.name, "..") == 0);
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(info.type == LFS_TYPE_REG);
assert(strcmp(info.name, "burito") == 0);
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(info.type == LFS_TYPE_DIR);
assert(strcmp(info.name, "potato") == 0);
lfs_dir_read(&lfs, &dir, &info) => 0;
lfs_dir_close(&lfs, &dir) => 0;
lfs_unmount(&lfs) => 0;
'''
[[case]] # directory seek
define.COUNT = [4, 128, 132]
code = '''
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_mkdir(&lfs, "hello") => 0;
for (int i = 0; i < COUNT; i++) {
sprintf(path, "hello/kitty%03d", i);
lfs_mkdir(&lfs, path) => 0;
}
lfs_unmount(&lfs) => 0;
for (int j = 2; j < COUNT; j++) {
lfs_mount(&lfs, &cfg) => 0;
lfs_dir_open(&lfs, &dir, "hello") => 0;
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(strcmp(info.name, ".") == 0);
assert(info.type == LFS_TYPE_DIR);
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(strcmp(info.name, "..") == 0);
assert(info.type == LFS_TYPE_DIR);
lfs_soff_t pos;
for (int i = 0; i < j; i++) {
sprintf(path, "kitty%03d", i);
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(strcmp(info.name, path) == 0);
assert(info.type == LFS_TYPE_DIR);
pos = lfs_dir_tell(&lfs, &dir);
assert(pos >= 0);
}
lfs_dir_seek(&lfs, &dir, pos) => 0;
sprintf(path, "kitty%03d", j);
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(strcmp(info.name, path) == 0);
assert(info.type == LFS_TYPE_DIR);
lfs_dir_rewind(&lfs, &dir) => 0;
sprintf(path, "kitty%03d", 0);
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(strcmp(info.name, ".") == 0);
assert(info.type == LFS_TYPE_DIR);
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(strcmp(info.name, "..") == 0);
assert(info.type == LFS_TYPE_DIR);
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(strcmp(info.name, path) == 0);
assert(info.type == LFS_TYPE_DIR);
lfs_dir_seek(&lfs, &dir, pos) => 0;
sprintf(path, "kitty%03d", j);
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(strcmp(info.name, path) == 0);
assert(info.type == LFS_TYPE_DIR);
lfs_dir_close(&lfs, &dir) => 0;
lfs_unmount(&lfs) => 0;
}
'''
[[case]] # root seek
define.COUNT = [4, 128, 132]
code = '''
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
for (int i = 0; i < COUNT; i++) {
sprintf(path, "hi%03d", i);
lfs_mkdir(&lfs, path) => 0;
}
lfs_unmount(&lfs) => 0;
for (int j = 2; j < COUNT; j++) {
lfs_mount(&lfs, &cfg) => 0;
lfs_dir_open(&lfs, &dir, "/") => 0;
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(strcmp(info.name, ".") == 0);
assert(info.type == LFS_TYPE_DIR);
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(strcmp(info.name, "..") == 0);
assert(info.type == LFS_TYPE_DIR);
lfs_soff_t pos;
for (int i = 0; i < j; i++) {
sprintf(path, "hi%03d", i);
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(strcmp(info.name, path) == 0);
assert(info.type == LFS_TYPE_DIR);
pos = lfs_dir_tell(&lfs, &dir);
assert(pos >= 0);
}
lfs_dir_seek(&lfs, &dir, pos) => 0;
sprintf(path, "hi%03d", j);
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(strcmp(info.name, path) == 0);
assert(info.type == LFS_TYPE_DIR);
lfs_dir_rewind(&lfs, &dir) => 0;
sprintf(path, "hi%03d", 0);
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(strcmp(info.name, ".") == 0);
assert(info.type == LFS_TYPE_DIR);
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(strcmp(info.name, "..") == 0);
assert(info.type == LFS_TYPE_DIR);
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(strcmp(info.name, path) == 0);
assert(info.type == LFS_TYPE_DIR);
lfs_dir_seek(&lfs, &dir, pos) => 0;
sprintf(path, "hi%03d", j);
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(strcmp(info.name, path) == 0);
assert(info.type == LFS_TYPE_DIR);
lfs_dir_close(&lfs, &dir) => 0;
lfs_unmount(&lfs) => 0;
}
'''


@@ -1,221 +0,0 @@
#!/bin/bash
set -eu
# Note: These tests are intended for a 512 byte inline size. At different
# inline sizes they should still pass, but won't be testing anything
echo "=== Entry tests ==="
rm -rf blocks
function read_file {
cat << TEST
size = $2;
lfs_file_open(&lfs, &file[0], "$1", LFS_O_RDONLY) => 0;
lfs_file_read(&lfs, &file[0], rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file[0]) => 0;
TEST
}
function write_file {
cat << TEST
size = $2;
lfs_file_open(&lfs, &file[0], "$1",
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
memset(wbuffer, 'c', size);
lfs_file_write(&lfs, &file[0], wbuffer, size) => size;
lfs_file_close(&lfs, &file[0]) => 0;
TEST
}
echo "--- Entry grow test ---"
tests/test.py << TEST
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
$(write_file "hi0" 20)
$(write_file "hi1" 20)
$(write_file "hi2" 20)
$(write_file "hi3" 20)
$(read_file "hi1" 20)
$(write_file "hi1" 200)
$(read_file "hi0" 20)
$(read_file "hi1" 200)
$(read_file "hi2" 20)
$(read_file "hi3" 20)
lfs_unmount(&lfs) => 0;
TEST
echo "--- Entry shrink test ---"
tests/test.py << TEST
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
$(write_file "hi0" 20)
$(write_file "hi1" 200)
$(write_file "hi2" 20)
$(write_file "hi3" 20)
$(read_file "hi1" 200)
$(write_file "hi1" 20)
$(read_file "hi0" 20)
$(read_file "hi1" 20)
$(read_file "hi2" 20)
$(read_file "hi3" 20)
lfs_unmount(&lfs) => 0;
TEST
echo "--- Entry spill test ---"
tests/test.py << TEST
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
$(write_file "hi0" 200)
$(write_file "hi1" 200)
$(write_file "hi2" 200)
$(write_file "hi3" 200)
$(read_file "hi0" 200)
$(read_file "hi1" 200)
$(read_file "hi2" 200)
$(read_file "hi3" 200)
lfs_unmount(&lfs) => 0;
TEST
echo "--- Entry push spill test ---"
tests/test.py << TEST
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
$(write_file "hi0" 200)
$(write_file "hi1" 20)
$(write_file "hi2" 200)
$(write_file "hi3" 200)
$(read_file "hi1" 20)
$(write_file "hi1" 200)
$(read_file "hi0" 200)
$(read_file "hi1" 200)
$(read_file "hi2" 200)
$(read_file "hi3" 200)
lfs_unmount(&lfs) => 0;
TEST
echo "--- Entry push spill two test ---"
tests/test.py << TEST
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
$(write_file "hi0" 200)
$(write_file "hi1" 20)
$(write_file "hi2" 200)
$(write_file "hi3" 200)
$(write_file "hi4" 200)
$(read_file "hi1" 20)
$(write_file "hi1" 200)
$(read_file "hi0" 200)
$(read_file "hi1" 200)
$(read_file "hi2" 200)
$(read_file "hi3" 200)
$(read_file "hi4" 200)
lfs_unmount(&lfs) => 0;
TEST
echo "--- Entry drop test ---"
tests/test.py << TEST
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
$(write_file "hi0" 200)
$(write_file "hi1" 200)
$(write_file "hi2" 200)
$(write_file "hi3" 200)
lfs_remove(&lfs, "hi1") => 0;
lfs_stat(&lfs, "hi1", &info) => LFS_ERR_NOENT;
$(read_file "hi0" 200)
$(read_file "hi2" 200)
$(read_file "hi3" 200)
lfs_remove(&lfs, "hi2") => 0;
lfs_stat(&lfs, "hi2", &info) => LFS_ERR_NOENT;
$(read_file "hi0" 200)
$(read_file "hi3" 200)
lfs_remove(&lfs, "hi3") => 0;
lfs_stat(&lfs, "hi3", &info) => LFS_ERR_NOENT;
$(read_file "hi0" 200)
lfs_remove(&lfs, "hi0") => 0;
lfs_stat(&lfs, "hi0", &info) => LFS_ERR_NOENT;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Create too big ---"
tests/test.py << TEST
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
memset(buffer, 'm', 200);
buffer[200] = '\0';
size = 400;
lfs_file_open(&lfs, &file[0], (char*)buffer,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
memset(wbuffer, 'c', size);
lfs_file_write(&lfs, &file[0], wbuffer, size) => size;
lfs_file_close(&lfs, &file[0]) => 0;
size = 400;
lfs_file_open(&lfs, &file[0], (char*)buffer, LFS_O_RDONLY) => 0;
lfs_file_read(&lfs, &file[0], rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Resize too big ---"
tests/test.py << TEST
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
memset(buffer, 'm', 200);
buffer[200] = '\0';
size = 40;
lfs_file_open(&lfs, &file[0], (char*)buffer,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
memset(wbuffer, 'c', size);
lfs_file_write(&lfs, &file[0], wbuffer, size) => size;
lfs_file_close(&lfs, &file[0]) => 0;
size = 40;
lfs_file_open(&lfs, &file[0], (char*)buffer, LFS_O_RDONLY) => 0;
lfs_file_read(&lfs, &file[0], rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file[0]) => 0;
size = 400;
lfs_file_open(&lfs, &file[0], (char*)buffer,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
memset(wbuffer, 'c', size);
lfs_file_write(&lfs, &file[0], wbuffer, size) => size;
lfs_file_close(&lfs, &file[0]) => 0;
size = 400;
lfs_file_open(&lfs, &file[0], (char*)buffer, LFS_O_RDONLY) => 0;
lfs_file_read(&lfs, &file[0], rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Results ---"
tests/stats.py

tests/test_entries.toml Normal file

@@ -0,0 +1,611 @@
# These tests are for some specific corner cases with neighboring inline files.
# Note that these tests are intended for 512 byte inline sizes. They should
# still pass with other inline sizes but wouldn't be testing anything.
define.LFS_CACHE_SIZE = 512
if = 'LFS_CACHE_SIZE % LFS_PROG_SIZE == 0 && LFS_CACHE_SIZE == 512'
[[case]] # entry grow test
code = '''
uint8_t wbuffer[1024];
uint8_t rbuffer[1024];
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
// write hi0 20
sprintf(path, "hi0"); size = 20;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
memset(wbuffer, 'c', size);
lfs_file_write(&lfs, &file, wbuffer, size) => size;
lfs_file_size(&lfs, &file) => size;
lfs_file_close(&lfs, &file) => 0;
// write hi1 20
sprintf(path, "hi1"); size = 20;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
memset(wbuffer, 'c', size);
lfs_file_write(&lfs, &file, wbuffer, size) => size;
lfs_file_size(&lfs, &file) => size;
lfs_file_close(&lfs, &file) => 0;
// write hi2 20
sprintf(path, "hi2"); size = 20;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
memset(wbuffer, 'c', size);
lfs_file_write(&lfs, &file, wbuffer, size) => size;
lfs_file_size(&lfs, &file) => size;
lfs_file_close(&lfs, &file) => 0;
// write hi3 20
sprintf(path, "hi3"); size = 20;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
memset(wbuffer, 'c', size);
lfs_file_write(&lfs, &file, wbuffer, size) => size;
lfs_file_size(&lfs, &file) => size;
lfs_file_close(&lfs, &file) => 0;
// read hi1 20
sprintf(path, "hi1"); size = 20;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => size;
lfs_file_read(&lfs, &file, rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
// write hi1 200
sprintf(path, "hi1"); size = 200;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
memset(wbuffer, 'c', size);
lfs_file_write(&lfs, &file, wbuffer, size) => size;
lfs_file_size(&lfs, &file) => size;
lfs_file_close(&lfs, &file) => 0;
// read hi0 20
sprintf(path, "hi0"); size = 20;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => size;
lfs_file_read(&lfs, &file, rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
// read hi1 200
sprintf(path, "hi1"); size = 200;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => size;
lfs_file_read(&lfs, &file, rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
// read hi2 20
sprintf(path, "hi2"); size = 20;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => size;
lfs_file_read(&lfs, &file, rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
// read hi3 20
sprintf(path, "hi3"); size = 20;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => size;
lfs_file_read(&lfs, &file, rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
'''
[[case]] # entry shrink test
code = '''
uint8_t wbuffer[1024];
uint8_t rbuffer[1024];
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
// write hi0 20
sprintf(path, "hi0"); size = 20;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
memset(wbuffer, 'c', size);
lfs_file_write(&lfs, &file, wbuffer, size) => size;
lfs_file_size(&lfs, &file) => size;
lfs_file_close(&lfs, &file) => 0;
// write hi1 200
sprintf(path, "hi1"); size = 200;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
memset(wbuffer, 'c', size);
lfs_file_write(&lfs, &file, wbuffer, size) => size;
lfs_file_size(&lfs, &file) => size;
lfs_file_close(&lfs, &file) => 0;
// write hi2 20
sprintf(path, "hi2"); size = 20;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
memset(wbuffer, 'c', size);
lfs_file_write(&lfs, &file, wbuffer, size) => size;
lfs_file_size(&lfs, &file) => size;
lfs_file_close(&lfs, &file) => 0;
// write hi3 20
sprintf(path, "hi3"); size = 20;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
memset(wbuffer, 'c', size);
lfs_file_write(&lfs, &file, wbuffer, size) => size;
lfs_file_size(&lfs, &file) => size;
lfs_file_close(&lfs, &file) => 0;
// read hi1 200
sprintf(path, "hi1"); size = 200;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => size;
lfs_file_read(&lfs, &file, rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
// write hi1 20
sprintf(path, "hi1"); size = 20;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
memset(wbuffer, 'c', size);
lfs_file_write(&lfs, &file, wbuffer, size) => size;
lfs_file_size(&lfs, &file) => size;
lfs_file_close(&lfs, &file) => 0;
// read hi0 20
sprintf(path, "hi0"); size = 20;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => size;
lfs_file_read(&lfs, &file, rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
// read hi1 20
sprintf(path, "hi1"); size = 20;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => size;
lfs_file_read(&lfs, &file, rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
// read hi2 20
sprintf(path, "hi2"); size = 20;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => size;
lfs_file_read(&lfs, &file, rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
// read hi3 20
sprintf(path, "hi3"); size = 20;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => size;
lfs_file_read(&lfs, &file, rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
'''
[[case]] # entry spill test
code = '''
uint8_t wbuffer[1024];
uint8_t rbuffer[1024];
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
// write hi0 200
sprintf(path, "hi0"); size = 200;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
memset(wbuffer, 'c', size);
lfs_file_write(&lfs, &file, wbuffer, size) => size;
lfs_file_size(&lfs, &file) => size;
lfs_file_close(&lfs, &file) => 0;
// write hi1 200
sprintf(path, "hi1"); size = 200;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
memset(wbuffer, 'c', size);
lfs_file_write(&lfs, &file, wbuffer, size) => size;
lfs_file_size(&lfs, &file) => size;
lfs_file_close(&lfs, &file) => 0;
// write hi2 200
sprintf(path, "hi2"); size = 200;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
memset(wbuffer, 'c', size);
lfs_file_write(&lfs, &file, wbuffer, size) => size;
lfs_file_size(&lfs, &file) => size;
lfs_file_close(&lfs, &file) => 0;
// write hi3 200
sprintf(path, "hi3"); size = 200;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
memset(wbuffer, 'c', size);
lfs_file_write(&lfs, &file, wbuffer, size) => size;
lfs_file_size(&lfs, &file) => size;
lfs_file_close(&lfs, &file) => 0;
// read hi0 200
sprintf(path, "hi0"); size = 200;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => size;
lfs_file_read(&lfs, &file, rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
// read hi1 200
sprintf(path, "hi1"); size = 200;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => size;
lfs_file_read(&lfs, &file, rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
// read hi2 200
sprintf(path, "hi2"); size = 200;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => size;
lfs_file_read(&lfs, &file, rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
// read hi3 200
sprintf(path, "hi3"); size = 200;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => size;
lfs_file_read(&lfs, &file, rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
'''
[[case]] # entry push spill test
code = '''
uint8_t wbuffer[1024];
uint8_t rbuffer[1024];
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
// write hi0 200
sprintf(path, "hi0"); size = 200;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
memset(wbuffer, 'c', size);
lfs_file_write(&lfs, &file, wbuffer, size) => size;
lfs_file_size(&lfs, &file) => size;
lfs_file_close(&lfs, &file) => 0;
// write hi1 20
sprintf(path, "hi1"); size = 20;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
memset(wbuffer, 'c', size);
lfs_file_write(&lfs, &file, wbuffer, size) => size;
lfs_file_size(&lfs, &file) => size;
lfs_file_close(&lfs, &file) => 0;
// write hi2 200
sprintf(path, "hi2"); size = 200;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
memset(wbuffer, 'c', size);
lfs_file_write(&lfs, &file, wbuffer, size) => size;
lfs_file_size(&lfs, &file) => size;
lfs_file_close(&lfs, &file) => 0;
// write hi3 200
sprintf(path, "hi3"); size = 200;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
memset(wbuffer, 'c', size);
lfs_file_write(&lfs, &file, wbuffer, size) => size;
lfs_file_size(&lfs, &file) => size;
lfs_file_close(&lfs, &file) => 0;
// read hi1 20
sprintf(path, "hi1"); size = 20;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => size;
lfs_file_read(&lfs, &file, rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
// write hi1 200
sprintf(path, "hi1"); size = 200;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
memset(wbuffer, 'c', size);
lfs_file_write(&lfs, &file, wbuffer, size) => size;
lfs_file_size(&lfs, &file) => size;
lfs_file_close(&lfs, &file) => 0;
// read hi0 200
sprintf(path, "hi0"); size = 200;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => size;
lfs_file_read(&lfs, &file, rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
// read hi1 200
sprintf(path, "hi1"); size = 200;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => size;
lfs_file_read(&lfs, &file, rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
// read hi2 200
sprintf(path, "hi2"); size = 200;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => size;
lfs_file_read(&lfs, &file, rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
// read hi3 200
sprintf(path, "hi3"); size = 200;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => size;
lfs_file_read(&lfs, &file, rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
'''
[[case]] # entry push spill two test
code = '''
uint8_t wbuffer[1024];
uint8_t rbuffer[1024];
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
// write hi0 200
sprintf(path, "hi0"); size = 200;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
memset(wbuffer, 'c', size);
lfs_file_write(&lfs, &file, wbuffer, size) => size;
lfs_file_size(&lfs, &file) => size;
lfs_file_close(&lfs, &file) => 0;
// write hi1 20
sprintf(path, "hi1"); size = 20;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
memset(wbuffer, 'c', size);
lfs_file_write(&lfs, &file, wbuffer, size) => size;
lfs_file_size(&lfs, &file) => size;
lfs_file_close(&lfs, &file) => 0;
// write hi2 200
sprintf(path, "hi2"); size = 200;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
memset(wbuffer, 'c', size);
lfs_file_write(&lfs, &file, wbuffer, size) => size;
lfs_file_size(&lfs, &file) => size;
lfs_file_close(&lfs, &file) => 0;
// write hi3 200
sprintf(path, "hi3"); size = 200;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
memset(wbuffer, 'c', size);
lfs_file_write(&lfs, &file, wbuffer, size) => size;
lfs_file_size(&lfs, &file) => size;
lfs_file_close(&lfs, &file) => 0;
// write hi4 200
sprintf(path, "hi4"); size = 200;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
memset(wbuffer, 'c', size);
lfs_file_write(&lfs, &file, wbuffer, size) => size;
lfs_file_size(&lfs, &file) => size;
lfs_file_close(&lfs, &file) => 0;
// read hi1 20
sprintf(path, "hi1"); size = 20;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => size;
lfs_file_read(&lfs, &file, rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
// write hi1 200
sprintf(path, "hi1"); size = 200;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
memset(wbuffer, 'c', size);
lfs_file_write(&lfs, &file, wbuffer, size) => size;
lfs_file_size(&lfs, &file) => size;
lfs_file_close(&lfs, &file) => 0;
// read hi0 200
sprintf(path, "hi0"); size = 200;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => size;
lfs_file_read(&lfs, &file, rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
// read hi1 200
sprintf(path, "hi1"); size = 200;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => size;
lfs_file_read(&lfs, &file, rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
// read hi2 200
sprintf(path, "hi2"); size = 200;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => size;
lfs_file_read(&lfs, &file, rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
// read hi3 200
sprintf(path, "hi3"); size = 200;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => size;
lfs_file_read(&lfs, &file, rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
// read hi4 200
sprintf(path, "hi4"); size = 200;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => size;
lfs_file_read(&lfs, &file, rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
'''
[[case]] # entry drop test
code = '''
uint8_t wbuffer[1024];
uint8_t rbuffer[1024];
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
// write hi0 200
sprintf(path, "hi0"); size = 200;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
memset(wbuffer, 'c', size);
lfs_file_write(&lfs, &file, wbuffer, size) => size;
lfs_file_size(&lfs, &file) => size;
lfs_file_close(&lfs, &file) => 0;
// write hi1 200
sprintf(path, "hi1"); size = 200;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
memset(wbuffer, 'c', size);
lfs_file_write(&lfs, &file, wbuffer, size) => size;
lfs_file_size(&lfs, &file) => size;
lfs_file_close(&lfs, &file) => 0;
// write hi2 200
sprintf(path, "hi2"); size = 200;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
memset(wbuffer, 'c', size);
lfs_file_write(&lfs, &file, wbuffer, size) => size;
lfs_file_size(&lfs, &file) => size;
lfs_file_close(&lfs, &file) => 0;
// write hi3 200
sprintf(path, "hi3"); size = 200;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
memset(wbuffer, 'c', size);
lfs_file_write(&lfs, &file, wbuffer, size) => size;
lfs_file_size(&lfs, &file) => size;
lfs_file_close(&lfs, &file) => 0;
lfs_remove(&lfs, "hi1") => 0;
lfs_stat(&lfs, "hi1", &info) => LFS_ERR_NOENT;
// read hi0 200
sprintf(path, "hi0"); size = 200;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => size;
lfs_file_read(&lfs, &file, rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
// read hi2 200
sprintf(path, "hi2"); size = 200;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => size;
lfs_file_read(&lfs, &file, rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
// read hi3 200
sprintf(path, "hi3"); size = 200;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => size;
lfs_file_read(&lfs, &file, rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_remove(&lfs, "hi2") => 0;
lfs_stat(&lfs, "hi2", &info) => LFS_ERR_NOENT;
// read hi0 200
sprintf(path, "hi0"); size = 200;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => size;
lfs_file_read(&lfs, &file, rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
// read hi3 200
sprintf(path, "hi3"); size = 200;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => size;
lfs_file_read(&lfs, &file, rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_remove(&lfs, "hi3") => 0;
lfs_stat(&lfs, "hi3", &info) => LFS_ERR_NOENT;
// read hi0 200
sprintf(path, "hi0"); size = 200;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => size;
lfs_file_read(&lfs, &file, rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_remove(&lfs, "hi0") => 0;
lfs_stat(&lfs, "hi0", &info) => LFS_ERR_NOENT;
lfs_unmount(&lfs) => 0;
'''
[[case]] # create too big
code = '''
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
memset(path, 'm', 200);
path[200] = '\0';
size = 400;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
uint8_t wbuffer[1024];
memset(wbuffer, 'c', size);
lfs_file_write(&lfs, &file, wbuffer, size) => size;
lfs_file_close(&lfs, &file) => 0;
size = 400;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
uint8_t rbuffer[1024];
lfs_file_read(&lfs, &file, rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
'''
[[case]] # resize too big
code = '''
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
memset(path, 'm', 200);
path[200] = '\0';
size = 40;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
uint8_t wbuffer[1024];
memset(wbuffer, 'c', size);
lfs_file_write(&lfs, &file, wbuffer, size) => size;
lfs_file_close(&lfs, &file) => 0;
size = 40;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
uint8_t rbuffer[1024];
lfs_file_read(&lfs, &file, rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
size = 400;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
memset(wbuffer, 'c', size);
lfs_file_write(&lfs, &file, wbuffer, size) => size;
lfs_file_close(&lfs, &file) => 0;
size = 400;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
lfs_file_read(&lfs, &file, rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
'''

tests/test_evil.toml Normal file

@@ -0,0 +1,285 @@
# Tests for recovering from conditions that shouldn't happen during
# normal operation of littlefs
# invalid pointer tests (outside of block_count)
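# (Editorial aside: the cases below commit pointers like 0xcccccccc that fall
# outside of block_count and then expect LFS_ERR_CORRUPT instead of an assert.
# A minimal sketch of the kind of bounds check being exercised, illustrative
# only and not littlefs's actual implementation; check_pair_bounds is a
# hypothetical helper.)
#include "lfs.h"

// sketch: reject a metadata pair whose blocks fall outside the block device
static int check_pair_bounds(lfs_t *lfs, const lfs_block_t pair[2]) {
    if (pair[0] >= lfs->cfg->block_count ||
            pair[1] >= lfs->cfg->block_count) {
        // an out-of-range pointer can never be valid, report corruption
        return LFS_ERR_CORRUPT;
    }
    return 0;
}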
[[case]] # invalid tail-pointer test
define.TAIL_TYPE = ['LFS_TYPE_HARDTAIL', 'LFS_TYPE_SOFTTAIL']
define.INVALSET = [0x3, 0x1, 0x2]
in = "lfs.c"
code = '''
// create littlefs
lfs_format(&lfs, &cfg) => 0;
// change tail-pointer to invalid pointers
lfs_init(&lfs, &cfg) => 0;
lfs_mdir_t mdir;
lfs_dir_fetch(&lfs, &mdir, (lfs_block_t[2]){0, 1}) => 0;
lfs_dir_commit(&lfs, &mdir, LFS_MKATTRS(
{LFS_MKTAG(LFS_TYPE_HARDTAIL, 0x3ff, 8),
(lfs_block_t[2]){
(INVALSET & 0x1) ? 0xcccccccc : 0,
(INVALSET & 0x2) ? 0xcccccccc : 0}})) => 0;
lfs_deinit(&lfs) => 0;
// test that mount fails gracefully
lfs_mount(&lfs, &cfg) => LFS_ERR_CORRUPT;
'''
[[case]] # invalid dir pointer test
define.INVALSET = [0x3, 0x1, 0x2]
in = "lfs.c"
code = '''
// create littlefs
lfs_format(&lfs, &cfg) => 0;
// make a dir
lfs_mount(&lfs, &cfg) => 0;
lfs_mkdir(&lfs, "dir_here") => 0;
lfs_unmount(&lfs) => 0;
// change the dir pointer to be invalid
lfs_init(&lfs, &cfg) => 0;
lfs_mdir_t mdir;
lfs_dir_fetch(&lfs, &mdir, (lfs_block_t[2]){0, 1}) => 0;
// make sure id 1 == our directory
lfs_dir_get(&lfs, &mdir,
LFS_MKTAG(0x700, 0x3ff, 0),
LFS_MKTAG(LFS_TYPE_NAME, 1, strlen("dir_here")), buffer)
=> LFS_MKTAG(LFS_TYPE_DIR, 1, strlen("dir_here"));
assert(memcmp((char*)buffer, "dir_here", strlen("dir_here")) == 0);
// change dir pointer
lfs_dir_commit(&lfs, &mdir, LFS_MKATTRS(
{LFS_MKTAG(LFS_TYPE_DIRSTRUCT, 1, 8),
(lfs_block_t[2]){
(INVALSET & 0x1) ? 0xcccccccc : 0,
(INVALSET & 0x2) ? 0xcccccccc : 0}})) => 0;
lfs_deinit(&lfs) => 0;
// test that accessing our bad dir fails; note there are a number
// of ways to access the dir, some of which fail and some of which don't
lfs_mount(&lfs, &cfg) => 0;
lfs_stat(&lfs, "dir_here", &info) => 0;
assert(strcmp(info.name, "dir_here") == 0);
assert(info.type == LFS_TYPE_DIR);
lfs_dir_open(&lfs, &dir, "dir_here") => LFS_ERR_CORRUPT;
lfs_stat(&lfs, "dir_here/file_here", &info) => LFS_ERR_CORRUPT;
lfs_dir_open(&lfs, &dir, "dir_here/dir_here") => LFS_ERR_CORRUPT;
lfs_file_open(&lfs, &file, "dir_here/file_here",
LFS_O_RDONLY) => LFS_ERR_CORRUPT;
lfs_file_open(&lfs, &file, "dir_here/file_here",
LFS_O_WRONLY | LFS_O_CREAT) => LFS_ERR_CORRUPT;
lfs_unmount(&lfs) => 0;
'''
[[case]] # invalid file pointer test
in = "lfs.c"
define.SIZE = [10, 1000, 100000] # faked file size
code = '''
// create littlefs
lfs_format(&lfs, &cfg) => 0;
// make a file
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file, "file_here",
LFS_O_WRONLY | LFS_O_CREAT) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
// change the file pointer to be invalid
lfs_init(&lfs, &cfg) => 0;
lfs_mdir_t mdir;
lfs_dir_fetch(&lfs, &mdir, (lfs_block_t[2]){0, 1}) => 0;
// make sure id 1 == our file
lfs_dir_get(&lfs, &mdir,
LFS_MKTAG(0x700, 0x3ff, 0),
LFS_MKTAG(LFS_TYPE_NAME, 1, strlen("file_here")), buffer)
=> LFS_MKTAG(LFS_TYPE_REG, 1, strlen("file_here"));
assert(memcmp((char*)buffer, "file_here", strlen("file_here")) == 0);
// change file pointer
lfs_dir_commit(&lfs, &mdir, LFS_MKATTRS(
{LFS_MKTAG(LFS_TYPE_CTZSTRUCT, 1, sizeof(struct lfs_ctz)),
&(struct lfs_ctz){0xcccccccc, lfs_tole32(SIZE)}})) => 0;
lfs_deinit(&lfs) => 0;
// test that accessing our bad file fails; note there are a number
// of ways to access the file, some of which fail and some of which don't
lfs_mount(&lfs, &cfg) => 0;
lfs_stat(&lfs, "file_here", &info) => 0;
assert(strcmp(info.name, "file_here") == 0);
assert(info.type == LFS_TYPE_REG);
assert(info.size == SIZE);
lfs_file_open(&lfs, &file, "file_here", LFS_O_RDONLY) => 0;
lfs_file_read(&lfs, &file, buffer, SIZE) => LFS_ERR_CORRUPT;
lfs_file_close(&lfs, &file) => 0;
// any allocs that traverse the CTZ skip-list must unfortunately fail
if (SIZE > 2*LFS_BLOCK_SIZE) {
lfs_mkdir(&lfs, "dir_here") => LFS_ERR_CORRUPT;
}
lfs_unmount(&lfs) => 0;
'''
[[case]] # invalid pointer in CTZ skip-list test
define.SIZE = ['2*LFS_BLOCK_SIZE', '3*LFS_BLOCK_SIZE', '4*LFS_BLOCK_SIZE']
in = "lfs.c"
code = '''
// create littlefs
lfs_format(&lfs, &cfg) => 0;
// make a file
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file, "file_here",
LFS_O_WRONLY | LFS_O_CREAT) => 0;
for (int i = 0; i < SIZE; i++) {
char c = 'c';
lfs_file_write(&lfs, &file, &c, 1) => 1;
}
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
// change pointer in CTZ skip-list to be invalid
lfs_init(&lfs, &cfg) => 0;
lfs_mdir_t mdir;
lfs_dir_fetch(&lfs, &mdir, (lfs_block_t[2]){0, 1}) => 0;
// make sure id 1 == our file and get our CTZ structure
lfs_dir_get(&lfs, &mdir,
LFS_MKTAG(0x700, 0x3ff, 0),
LFS_MKTAG(LFS_TYPE_NAME, 1, strlen("file_here")), buffer)
=> LFS_MKTAG(LFS_TYPE_REG, 1, strlen("file_here"));
assert(memcmp((char*)buffer, "file_here", strlen("file_here")) == 0);
struct lfs_ctz ctz;
lfs_dir_get(&lfs, &mdir,
LFS_MKTAG(0x700, 0x3ff, 0),
LFS_MKTAG(LFS_TYPE_STRUCT, 1, sizeof(struct lfs_ctz)), &ctz)
=> LFS_MKTAG(LFS_TYPE_CTZSTRUCT, 1, sizeof(struct lfs_ctz));
// rewrite block to contain bad pointer
uint8_t bbuffer[LFS_BLOCK_SIZE];
cfg.read(&cfg, ctz.head, 0, bbuffer, LFS_BLOCK_SIZE) => 0;
uint32_t bad = lfs_tole32(0xcccccccc);
memcpy(&bbuffer[0], &bad, sizeof(bad));
memcpy(&bbuffer[4], &bad, sizeof(bad));
cfg.erase(&cfg, ctz.head) => 0;
cfg.prog(&cfg, ctz.head, 0, bbuffer, LFS_BLOCK_SIZE) => 0;
lfs_deinit(&lfs) => 0;
// test that accessing our bad file fails; note there are a number
// of ways to access the file, some of which fail and some of which don't
lfs_mount(&lfs, &cfg) => 0;
lfs_stat(&lfs, "file_here", &info) => 0;
assert(strcmp(info.name, "file_here") == 0);
assert(info.type == LFS_TYPE_REG);
assert(info.size == SIZE);
lfs_file_open(&lfs, &file, "file_here", LFS_O_RDONLY) => 0;
lfs_file_read(&lfs, &file, buffer, SIZE) => LFS_ERR_CORRUPT;
lfs_file_close(&lfs, &file) => 0;
// any allocs that traverse the CTZ skip-list must unfortunately fail
if (SIZE > 2*LFS_BLOCK_SIZE) {
lfs_mkdir(&lfs, "dir_here") => LFS_ERR_CORRUPT;
}
lfs_unmount(&lfs) => 0;
'''
[[case]] # invalid gstate pointer
define.INVALSET = [0x3, 0x1, 0x2]
in = "lfs.c"
code = '''
// create littlefs
lfs_format(&lfs, &cfg) => 0;
// create an invalid gstate
lfs_init(&lfs, &cfg) => 0;
lfs_mdir_t mdir;
lfs_dir_fetch(&lfs, &mdir, (lfs_block_t[2]){0, 1}) => 0;
lfs_fs_prepmove(&lfs, 1, (lfs_block_t [2]){
(INVALSET & 0x1) ? 0xcccccccc : 0,
(INVALSET & 0x2) ? 0xcccccccc : 0});
lfs_dir_commit(&lfs, &mdir, NULL, 0) => 0;
lfs_deinit(&lfs) => 0;
// test that the invalid gstate is handled gracefully;
// mount may not fail, but our first alloc should fail when
// we try to fix the gstate
lfs_mount(&lfs, &cfg) => 0;
lfs_mkdir(&lfs, "should_fail") => LFS_ERR_CORRUPT;
lfs_unmount(&lfs) => 0;
'''
# cycle detection/recovery tests
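# (Editorial aside: the loop cases below all expect lfs_mount to return
# LFS_ERR_CORRUPT rather than spin forever on a looping threaded list. A
# hedged sketch of one generic way to detect such a loop, using Floyd's
# tortoise-and-hare over a tail-linked list of metadata pairs; this is not
# necessarily how littlefs implements it, and fetch_tail_t is a hypothetical
# callback.)
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t block_t;

// hypothetical callback: look up the tail pair of a metadata pair,
// returning false when the list ends
typedef bool (*fetch_tail_t)(void *ctx, const block_t pair[2], block_t tail[2]);

// advance pair to its tail, returns false at the end of the list
static bool step(void *ctx, fetch_tail_t fetch_tail, block_t pair[2]) {
    block_t next[2];
    if (!fetch_tail(ctx, pair, next)) {
        return false;
    }
    pair[0] = next[0];
    pair[1] = next[1];
    return true;
}

// Floyd's tortoise-and-hare: true if the threaded list loops back on itself
static bool has_cycle(void *ctx, fetch_tail_t fetch_tail, const block_t root[2]) {
    block_t slow[2] = {root[0], root[1]};
    block_t fast[2] = {root[0], root[1]};
    while (step(ctx, fetch_tail, fast) && step(ctx, fetch_tail, fast)) {
        // slow trails fast, so if fast advanced twice, slow can advance once
        (void)step(ctx, fetch_tail, slow);
        // pairs may be stored in either block order, so compare as a set
        if ((slow[0] == fast[0] && slow[1] == fast[1]) ||
                (slow[0] == fast[1] && slow[1] == fast[0])) {
            return true;
        }
    }
    return false;
}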
[[case]] # metadata-pair threaded-list loop test
in = "lfs.c"
code = '''
// create littlefs
lfs_format(&lfs, &cfg) => 0;
// change tail-pointer to point to ourself
lfs_init(&lfs, &cfg) => 0;
lfs_mdir_t mdir;
lfs_dir_fetch(&lfs, &mdir, (lfs_block_t[2]){0, 1}) => 0;
lfs_dir_commit(&lfs, &mdir, LFS_MKATTRS(
{LFS_MKTAG(LFS_TYPE_HARDTAIL, 0x3ff, 8),
(lfs_block_t[2]){0, 1}})) => 0;
lfs_deinit(&lfs) => 0;
// test that mount fails gracefully
lfs_mount(&lfs, &cfg) => LFS_ERR_CORRUPT;
'''
[[case]] # metadata-pair threaded-list 2-length loop test
in = "lfs.c"
code = '''
// create littlefs with child dir
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_mkdir(&lfs, "child") => 0;
lfs_unmount(&lfs) => 0;
// find child
lfs_init(&lfs, &cfg) => 0;
lfs_mdir_t mdir;
lfs_block_t pair[2];
lfs_dir_fetch(&lfs, &mdir, (lfs_block_t[2]){0, 1}) => 0;
lfs_dir_get(&lfs, &mdir,
LFS_MKTAG(0x7ff, 0x3ff, 0),
LFS_MKTAG(LFS_TYPE_DIRSTRUCT, 1, sizeof(pair)), pair)
=> LFS_MKTAG(LFS_TYPE_DIRSTRUCT, 1, sizeof(pair));
// change tail-pointer to point to root
lfs_dir_fetch(&lfs, &mdir, pair) => 0;
lfs_dir_commit(&lfs, &mdir, LFS_MKATTRS(
{LFS_MKTAG(LFS_TYPE_HARDTAIL, 0x3ff, 8),
(lfs_block_t[2]){0, 1}})) => 0;
lfs_deinit(&lfs) => 0;
// test that mount fails gracefully
lfs_mount(&lfs, &cfg) => LFS_ERR_CORRUPT;
'''
[[case]] # metadata-pair threaded-list 1-length child loop test
in = "lfs.c"
code = '''
// create littlefs with child dir
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_mkdir(&lfs, "child") => 0;
lfs_unmount(&lfs) => 0;
// find child
lfs_init(&lfs, &cfg) => 0;
lfs_mdir_t mdir;
lfs_block_t pair[2];
lfs_dir_fetch(&lfs, &mdir, (lfs_block_t[2]){0, 1}) => 0;
lfs_dir_get(&lfs, &mdir,
LFS_MKTAG(0x7ff, 0x3ff, 0),
LFS_MKTAG(LFS_TYPE_DIRSTRUCT, 1, sizeof(pair)), pair)
=> LFS_MKTAG(LFS_TYPE_DIRSTRUCT, 1, sizeof(pair));
// change tail-pointer to point to ourself
lfs_dir_fetch(&lfs, &mdir, pair) => 0;
lfs_dir_commit(&lfs, &mdir, LFS_MKATTRS(
{LFS_MKTAG(LFS_TYPE_HARDTAIL, 0x3ff, 8), pair})) => 0;
lfs_deinit(&lfs) => 0;
// test that mount fails gracefully
lfs_mount(&lfs, &cfg) => LFS_ERR_CORRUPT;
'''

tests/test_exhaustion.toml Normal file

@@ -0,0 +1,465 @@
[[case]] # test running a filesystem to exhaustion
define.LFS_ERASE_CYCLES = 10
define.LFS_BLOCK_COUNT = 256 # small bd so test runs faster
define.LFS_BLOCK_CYCLES = 'LFS_ERASE_CYCLES / 2'
define.LFS_BADBLOCK_BEHAVIOR = [
'LFS_TESTBD_BADBLOCK_PROGERROR',
'LFS_TESTBD_BADBLOCK_ERASEERROR',
'LFS_TESTBD_BADBLOCK_READERROR',
'LFS_TESTBD_BADBLOCK_PROGNOOP',
'LFS_TESTBD_BADBLOCK_ERASENOOP',
]
define.FILES = 10
code = '''
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_mkdir(&lfs, "roadrunner") => 0;
lfs_unmount(&lfs) => 0;
uint32_t cycle = 0;
while (true) {
lfs_mount(&lfs, &cfg) => 0;
for (uint32_t i = 0; i < FILES; i++) {
// choose name, roughly random seed, and random 2^n size
sprintf(path, "roadrunner/test%d", i);
srand(cycle * i);
size = 1 << ((rand() % 10)+2);
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
for (lfs_size_t j = 0; j < size; j++) {
char c = 'a' + (rand() % 26);
lfs_ssize_t res = lfs_file_write(&lfs, &file, &c, 1);
assert(res == 1 || res == LFS_ERR_NOSPC);
if (res == LFS_ERR_NOSPC) {
err = lfs_file_close(&lfs, &file);
assert(err == 0 || err == LFS_ERR_NOSPC);
lfs_unmount(&lfs) => 0;
goto exhausted;
}
}
err = lfs_file_close(&lfs, &file);
assert(err == 0 || err == LFS_ERR_NOSPC);
if (err == LFS_ERR_NOSPC) {
lfs_unmount(&lfs) => 0;
goto exhausted;
}
}
for (uint32_t i = 0; i < FILES; i++) {
// check for errors
sprintf(path, "roadrunner/test%d", i);
srand(cycle * i);
size = 1 << ((rand() % 10)+2);
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
for (lfs_size_t j = 0; j < size; j++) {
char c = 'a' + (rand() % 26);
char r;
lfs_file_read(&lfs, &file, &r, 1) => 1;
assert(r == c);
}
lfs_file_close(&lfs, &file) => 0;
}
lfs_unmount(&lfs) => 0;
cycle += 1;
}
exhausted:
// should still be readable
lfs_mount(&lfs, &cfg) => 0;
for (uint32_t i = 0; i < FILES; i++) {
// check for errors
sprintf(path, "roadrunner/test%d", i);
lfs_stat(&lfs, path, &info) => 0;
}
lfs_unmount(&lfs) => 0;
LFS_WARN("completed %d cycles", cycle);
'''
[[case]] # test running a filesystem to exhaustion
# which also requires expanding superblocks
define.LFS_ERASE_CYCLES = 10
define.LFS_BLOCK_COUNT = 256 # small bd so test runs faster
define.LFS_BLOCK_CYCLES = 'LFS_ERASE_CYCLES / 2'
define.LFS_BADBLOCK_BEHAVIOR = [
'LFS_TESTBD_BADBLOCK_PROGERROR',
'LFS_TESTBD_BADBLOCK_ERASEERROR',
'LFS_TESTBD_BADBLOCK_READERROR',
'LFS_TESTBD_BADBLOCK_PROGNOOP',
'LFS_TESTBD_BADBLOCK_ERASENOOP',
]
define.FILES = 10
code = '''
lfs_format(&lfs, &cfg) => 0;
uint32_t cycle = 0;
while (true) {
lfs_mount(&lfs, &cfg) => 0;
for (uint32_t i = 0; i < FILES; i++) {
// choose name, roughly random seed, and random 2^n size
sprintf(path, "test%d", i);
srand(cycle * i);
size = 1 << ((rand() % 10)+2);
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
for (lfs_size_t j = 0; j < size; j++) {
char c = 'a' + (rand() % 26);
lfs_ssize_t res = lfs_file_write(&lfs, &file, &c, 1);
assert(res == 1 || res == LFS_ERR_NOSPC);
if (res == LFS_ERR_NOSPC) {
err = lfs_file_close(&lfs, &file);
assert(err == 0 || err == LFS_ERR_NOSPC);
lfs_unmount(&lfs) => 0;
goto exhausted;
}
}
err = lfs_file_close(&lfs, &file);
assert(err == 0 || err == LFS_ERR_NOSPC);
if (err == LFS_ERR_NOSPC) {
lfs_unmount(&lfs) => 0;
goto exhausted;
}
}
for (uint32_t i = 0; i < FILES; i++) {
// check for errors
sprintf(path, "test%d", i);
srand(cycle * i);
size = 1 << ((rand() % 10)+2);
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
for (lfs_size_t j = 0; j < size; j++) {
char c = 'a' + (rand() % 26);
char r;
lfs_file_read(&lfs, &file, &r, 1) => 1;
assert(r == c);
}
lfs_file_close(&lfs, &file) => 0;
}
lfs_unmount(&lfs) => 0;
cycle += 1;
}
exhausted:
// should still be readable
lfs_mount(&lfs, &cfg) => 0;
for (uint32_t i = 0; i < FILES; i++) {
// check for errors
sprintf(path, "test%d", i);
lfs_stat(&lfs, path, &info) => 0;
}
lfs_unmount(&lfs) => 0;
LFS_WARN("completed %d cycles", cycle);
'''
# These are a sort of high-level litmus test for wear-leveling. One definition
# of wear-leveling is that increasing a block device's space translates directly
# into increasing the block device's lifetime. This is something we can actually
# check for, as sketched in the aside below.
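# (Editorial aside: a rough worked example of the lifetime check these
# wear-level cases end with, using made-up cycle counts; the real values come
# from running the filesystem to exhaustion at half and at full block count.)
#include <assert.h>
#include <stdint.h>

int main(void) {
    // hypothetical results: cycles survived at half and at full block count
    uint32_t run_cycles[2] = {100, 190};
    // the cases below assert roughly 2x lifetime with ~10% slack;
    // here 190*110/100 = 209 > 2*100 = 200, so a 1.9x improvement still passes
    assert(run_cycles[1]*110/100 > 2*run_cycles[0]);
    return 0;
}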
[[case]] # wear-level test running a filesystem to exhaustion
define.LFS_ERASE_CYCLES = 20
define.LFS_BLOCK_COUNT = 256 # small bd so test runs faster
define.LFS_BLOCK_CYCLES = 'LFS_ERASE_CYCLES / 2'
define.FILES = 10
code = '''
uint32_t run_cycles[2];
const uint32_t run_block_count[2] = {LFS_BLOCK_COUNT/2, LFS_BLOCK_COUNT};
for (int run = 0; run < 2; run++) {
for (lfs_block_t b = 0; b < LFS_BLOCK_COUNT; b++) {
lfs_testbd_setwear(&cfg, b,
(b < run_block_count[run]) ? 0 : LFS_ERASE_CYCLES) => 0;
}
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_mkdir(&lfs, "roadrunner") => 0;
lfs_unmount(&lfs) => 0;
uint32_t cycle = 0;
while (true) {
lfs_mount(&lfs, &cfg) => 0;
for (uint32_t i = 0; i < FILES; i++) {
// choose name, roughly random seed, and random 2^n size
sprintf(path, "roadrunner/test%d", i);
srand(cycle * i);
size = 1 << ((rand() % 10)+2);
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
for (lfs_size_t j = 0; j < size; j++) {
char c = 'a' + (rand() % 26);
lfs_ssize_t res = lfs_file_write(&lfs, &file, &c, 1);
assert(res == 1 || res == LFS_ERR_NOSPC);
if (res == LFS_ERR_NOSPC) {
err = lfs_file_close(&lfs, &file);
assert(err == 0 || err == LFS_ERR_NOSPC);
lfs_unmount(&lfs) => 0;
goto exhausted;
}
}
err = lfs_file_close(&lfs, &file);
assert(err == 0 || err == LFS_ERR_NOSPC);
if (err == LFS_ERR_NOSPC) {
lfs_unmount(&lfs) => 0;
goto exhausted;
}
}
for (uint32_t i = 0; i < FILES; i++) {
// check for errors
sprintf(path, "roadrunner/test%d", i);
srand(cycle * i);
size = 1 << ((rand() % 10)+2);
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
for (lfs_size_t j = 0; j < size; j++) {
char c = 'a' + (rand() % 26);
char r;
lfs_file_read(&lfs, &file, &r, 1) => 1;
assert(r == c);
}
lfs_file_close(&lfs, &file) => 0;
}
lfs_unmount(&lfs) => 0;
cycle += 1;
}
exhausted:
// should still be readable
lfs_mount(&lfs, &cfg) => 0;
for (uint32_t i = 0; i < FILES; i++) {
// check for errors
sprintf(path, "roadrunner/test%d", i);
lfs_stat(&lfs, path, &info) => 0;
}
lfs_unmount(&lfs) => 0;
run_cycles[run] = cycle;
LFS_WARN("completed %d blocks %d cycles",
run_block_count[run], run_cycles[run]);
}
// check we increased the lifetime by 2x with ~10% error
LFS_ASSERT(run_cycles[1]*110/100 > 2*run_cycles[0]);
'''
[[case]] # wear-level test + expanding superblock
define.LFS_ERASE_CYCLES = 20
define.LFS_BLOCK_COUNT = 256 # small bd so test runs faster
define.LFS_BLOCK_CYCLES = 'LFS_ERASE_CYCLES / 2'
define.FILES = 10
code = '''
uint32_t run_cycles[2];
const uint32_t run_block_count[2] = {LFS_BLOCK_COUNT/2, LFS_BLOCK_COUNT};
for (int run = 0; run < 2; run++) {
for (lfs_block_t b = 0; b < LFS_BLOCK_COUNT; b++) {
lfs_testbd_setwear(&cfg, b,
(b < run_block_count[run]) ? 0 : LFS_ERASE_CYCLES) => 0;
}
lfs_format(&lfs, &cfg) => 0;
uint32_t cycle = 0;
while (true) {
lfs_mount(&lfs, &cfg) => 0;
for (uint32_t i = 0; i < FILES; i++) {
// choose name, roughly random seed, and random 2^n size
sprintf(path, "test%d", i);
srand(cycle * i);
size = 1 << ((rand() % 10)+2);
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
for (lfs_size_t j = 0; j < size; j++) {
char c = 'a' + (rand() % 26);
lfs_ssize_t res = lfs_file_write(&lfs, &file, &c, 1);
assert(res == 1 || res == LFS_ERR_NOSPC);
if (res == LFS_ERR_NOSPC) {
err = lfs_file_close(&lfs, &file);
assert(err == 0 || err == LFS_ERR_NOSPC);
lfs_unmount(&lfs) => 0;
goto exhausted;
}
}
err = lfs_file_close(&lfs, &file);
assert(err == 0 || err == LFS_ERR_NOSPC);
if (err == LFS_ERR_NOSPC) {
lfs_unmount(&lfs) => 0;
goto exhausted;
}
}
for (uint32_t i = 0; i < FILES; i++) {
// check for errors
sprintf(path, "test%d", i);
srand(cycle * i);
size = 1 << ((rand() % 10)+2);
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
for (lfs_size_t j = 0; j < size; j++) {
char c = 'a' + (rand() % 26);
char r;
lfs_file_read(&lfs, &file, &r, 1) => 1;
assert(r == c);
}
lfs_file_close(&lfs, &file) => 0;
}
lfs_unmount(&lfs) => 0;
cycle += 1;
}
exhausted:
// should still be readable
lfs_mount(&lfs, &cfg) => 0;
for (uint32_t i = 0; i < FILES; i++) {
// check for errors
sprintf(path, "test%d", i);
lfs_stat(&lfs, path, &info) => 0;
}
lfs_unmount(&lfs) => 0;
run_cycles[run] = cycle;
LFS_WARN("completed %d blocks %d cycles",
run_block_count[run], run_cycles[run]);
}
// check we increased the lifetime by 2x with ~10% error
LFS_ASSERT(run_cycles[1]*110/100 > 2*run_cycles[0]);
'''
[[case]] # test that we wear blocks roughly evenly
define.LFS_ERASE_CYCLES = 0xffffffff
define.LFS_BLOCK_COUNT = 256 # small bd so test runs faster
define.LFS_BLOCK_CYCLES = [5, 4, 3, 2, 1]
define.CYCLES = 100
define.FILES = 10
if = 'LFS_BLOCK_CYCLES < CYCLES/10'
code = '''
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_mkdir(&lfs, "roadrunner") => 0;
lfs_unmount(&lfs) => 0;
uint32_t cycle = 0;
while (cycle < CYCLES) {
lfs_mount(&lfs, &cfg) => 0;
for (uint32_t i = 0; i < FILES; i++) {
// choose name, roughly random seed, and random 2^n size
sprintf(path, "roadrunner/test%d", i);
srand(cycle * i);
size = 1 << 4; //((rand() % 10)+2);
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
for (lfs_size_t j = 0; j < size; j++) {
char c = 'a' + (rand() % 26);
lfs_ssize_t res = lfs_file_write(&lfs, &file, &c, 1);
assert(res == 1 || res == LFS_ERR_NOSPC);
if (res == LFS_ERR_NOSPC) {
err = lfs_file_close(&lfs, &file);
assert(err == 0 || err == LFS_ERR_NOSPC);
lfs_unmount(&lfs) => 0;
goto exhausted;
}
}
err = lfs_file_close(&lfs, &file);
assert(err == 0 || err == LFS_ERR_NOSPC);
if (err == LFS_ERR_NOSPC) {
lfs_unmount(&lfs) => 0;
goto exhausted;
}
}
for (uint32_t i = 0; i < FILES; i++) {
// check for errors
sprintf(path, "roadrunner/test%d", i);
srand(cycle * i);
size = 1 << 4; //((rand() % 10)+2);
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
for (lfs_size_t j = 0; j < size; j++) {
char c = 'a' + (rand() % 26);
char r;
lfs_file_read(&lfs, &file, &r, 1) => 1;
assert(r == c);
}
lfs_file_close(&lfs, &file) => 0;
}
lfs_unmount(&lfs) => 0;
cycle += 1;
}
exhausted:
// should still be readable
lfs_mount(&lfs, &cfg) => 0;
for (uint32_t i = 0; i < FILES; i++) {
// check for errors
sprintf(path, "roadrunner/test%d", i);
lfs_stat(&lfs, path, &info) => 0;
}
lfs_unmount(&lfs) => 0;
LFS_WARN("completed %d cycles", cycle);
// check the wear on our block device
lfs_testbd_wear_t minwear = -1;
lfs_testbd_wear_t totalwear = 0;
lfs_testbd_wear_t maxwear = 0;
// skip 0 and 1 as superblock movement is intentionally avoided
for (lfs_block_t b = 2; b < LFS_BLOCK_COUNT; b++) {
lfs_testbd_wear_t wear = lfs_testbd_getwear(&cfg, b);
printf("%08x: wear %d\n", b, wear);
assert(wear >= 0);
if (wear < minwear) {
minwear = wear;
}
if (wear > maxwear) {
maxwear = wear;
}
totalwear += wear;
}
lfs_testbd_wear_t avgwear = totalwear / LFS_BLOCK_COUNT;
LFS_WARN("max wear: %d cycles", maxwear);
LFS_WARN("avg wear: %d cycles", totalwear / LFS_BLOCK_COUNT);
LFS_WARN("min wear: %d cycles", minwear);
// find standard deviation^2 (sum of squared deviations, normalized here by total wear)
lfs_testbd_wear_t dev2 = 0;
for (lfs_block_t b = 2; b < LFS_BLOCK_COUNT; b++) {
lfs_testbd_wear_t wear = lfs_testbd_getwear(&cfg, b);
assert(wear >= 0);
lfs_testbd_swear_t diff = wear - avgwear;
dev2 += diff*diff;
}
dev2 /= totalwear;
LFS_WARN("std dev^2: %d", dev2);
assert(dev2 < 8);
'''


@@ -1,158 +0,0 @@
#!/bin/bash
set -eu
SMALLSIZE=32
MEDIUMSIZE=8192
LARGESIZE=262144
echo "=== File tests ==="
rm -rf blocks
tests/test.py << TEST
lfs_format(&lfs, &cfg) => 0;
TEST
echo "--- Simple file test ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file[0], "hello", LFS_O_WRONLY | LFS_O_CREAT) => 0;
size = strlen("Hello World!\n");
memcpy(wbuffer, "Hello World!\n", size);
lfs_file_write(&lfs, &file[0], wbuffer, size) => size;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_file_open(&lfs, &file[0], "hello", LFS_O_RDONLY) => 0;
size = strlen("Hello World!\n");
lfs_file_read(&lfs, &file[0], rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
w_test() {
tests/test.py ${4:-} << TEST
size = $1;
lfs_size_t chunk = 31;
srand(0);
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file[0], "$2",
${3:-LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC}) => 0;
for (lfs_size_t i = 0; i < size; i += chunk) {
chunk = (chunk < size - i) ? chunk : size - i;
for (lfs_size_t b = 0; b < chunk; b++) {
buffer[b] = rand() & 0xff;
}
lfs_file_write(&lfs, &file[0], buffer, chunk) => chunk;
}
lfs_file_close(&lfs, &file[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
}
r_test() {
tests/test.py << TEST
size = $1;
lfs_size_t chunk = 29;
srand(0);
lfs_mount(&lfs, &cfg) => 0;
lfs_stat(&lfs, "$2", &info) => 0;
info.type => LFS_TYPE_REG;
info.size => size;
lfs_file_open(&lfs, &file[0], "$2", ${3:-LFS_O_RDONLY}) => 0;
for (lfs_size_t i = 0; i < size; i += chunk) {
chunk = (chunk < size - i) ? chunk : size - i;
lfs_file_read(&lfs, &file[0], buffer, chunk) => chunk;
for (lfs_size_t b = 0; b < chunk && i+b < size; b++) {
buffer[b] => rand() & 0xff;
}
}
lfs_file_close(&lfs, &file[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
}
echo "--- Small file test ---"
w_test $SMALLSIZE smallavacado
r_test $SMALLSIZE smallavacado
echo "--- Medium file test ---"
w_test $MEDIUMSIZE mediumavacado
r_test $MEDIUMSIZE mediumavacado
echo "--- Large file test ---"
w_test $LARGESIZE largeavacado
r_test $LARGESIZE largeavacado
echo "--- Zero file test ---"
w_test 0 noavacado
r_test 0 noavacado
echo "--- Truncate small test ---"
w_test $SMALLSIZE mediumavacado
r_test $SMALLSIZE mediumavacado
w_test $MEDIUMSIZE mediumavacado
r_test $MEDIUMSIZE mediumavacado
echo "--- Truncate zero test ---"
w_test $SMALLSIZE noavacado
r_test $SMALLSIZE noavacado
w_test 0 noavacado
r_test 0 noavacado
echo "--- Non-overlap check ---"
r_test $SMALLSIZE smallavacado
r_test $MEDIUMSIZE mediumavacado
r_test $LARGESIZE largeavacado
r_test 0 noavacado
echo "--- Dir check ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_dir_open(&lfs, &dir[0], "/") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "hello") => 0;
info.type => LFS_TYPE_REG;
info.size => strlen("Hello World!\n");
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "largeavacado") => 0;
info.type => LFS_TYPE_REG;
info.size => $LARGESIZE;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "mediumavacado") => 0;
info.type => LFS_TYPE_REG;
info.size => $MEDIUMSIZE;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "noavacado") => 0;
info.type => LFS_TYPE_REG;
info.size => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "smallavacado") => 0;
info.type => LFS_TYPE_REG;
info.size => $SMALLSIZE;
lfs_dir_read(&lfs, &dir[0], &info) => 0;
lfs_dir_close(&lfs, &dir[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Many file test ---"
tests/test.py << TEST
lfs_format(&lfs, &cfg) => 0;
TEST
tests/test.py << TEST
// Create 300 files of 6 bytes
lfs_mount(&lfs, &cfg) => 0;
lfs_mkdir(&lfs, "directory") => 0;
for (unsigned i = 0; i < 300; i++) {
snprintf((char*)buffer, sizeof(buffer), "file_%03d", i);
lfs_file_open(&lfs, &file[0], (char*)buffer, LFS_O_WRONLY | LFS_O_CREAT) => 0;
size = 6;
memcpy(wbuffer, "Hello", size);
lfs_file_write(&lfs, &file[0], wbuffer, size) => size;
lfs_file_close(&lfs, &file[0]) => 0;
}
lfs_unmount(&lfs) => 0;
TEST
echo "--- Results ---"
tests/stats.py

tests/test_files.toml Normal file

@@ -0,0 +1,486 @@
[[case]] # simple file test
code = '''
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file, "hello",
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_EXCL) => 0;
size = strlen("Hello World!")+1;
strcpy((char*)buffer, "Hello World!");
lfs_file_write(&lfs, &file, buffer, size) => size;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file, "hello", LFS_O_RDONLY) => 0;
lfs_file_read(&lfs, &file, buffer, size) => size;
assert(strcmp((char*)buffer, "Hello World!") == 0);
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
'''
[[case]] # larger files
define.SIZE = [32, 8192, 262144, 0, 7, 8193]
define.CHUNKSIZE = [31, 16, 33, 1, 1023]
code = '''
lfs_format(&lfs, &cfg) => 0;
// write
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file, "avacado",
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_EXCL) => 0;
srand(1);
for (lfs_size_t i = 0; i < SIZE; i += CHUNKSIZE) {
lfs_size_t chunk = lfs_min(CHUNKSIZE, SIZE-i);
for (lfs_size_t b = 0; b < chunk; b++) {
buffer[b] = rand() & 0xff;
}
lfs_file_write(&lfs, &file, buffer, chunk) => chunk;
}
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
// read
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file, "avacado", LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => SIZE;
srand(1);
for (lfs_size_t i = 0; i < SIZE; i += CHUNKSIZE) {
lfs_size_t chunk = lfs_min(CHUNKSIZE, SIZE-i);
lfs_file_read(&lfs, &file, buffer, chunk) => chunk;
for (lfs_size_t b = 0; b < chunk; b++) {
assert(buffer[b] == (rand() & 0xff));
}
}
lfs_file_read(&lfs, &file, buffer, CHUNKSIZE) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
'''
[[case]] # rewriting files
define.SIZE1 = [32, 8192, 131072, 0, 7, 8193]
define.SIZE2 = [32, 8192, 131072, 0, 7, 8193]
define.CHUNKSIZE = [31, 16, 1]
code = '''
lfs_format(&lfs, &cfg) => 0;
// write
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file, "avacado",
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_EXCL) => 0;
srand(1);
for (lfs_size_t i = 0; i < SIZE1; i += CHUNKSIZE) {
lfs_size_t chunk = lfs_min(CHUNKSIZE, SIZE1-i);
for (lfs_size_t b = 0; b < chunk; b++) {
buffer[b] = rand() & 0xff;
}
lfs_file_write(&lfs, &file, buffer, chunk) => chunk;
}
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
// read
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file, "avacado", LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => SIZE1;
srand(1);
for (lfs_size_t i = 0; i < SIZE1; i += CHUNKSIZE) {
lfs_size_t chunk = lfs_min(CHUNKSIZE, SIZE1-i);
lfs_file_read(&lfs, &file, buffer, chunk) => chunk;
for (lfs_size_t b = 0; b < chunk; b++) {
assert(buffer[b] == (rand() & 0xff));
}
}
lfs_file_read(&lfs, &file, buffer, CHUNKSIZE) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
// rewrite
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file, "avacado", LFS_O_WRONLY) => 0;
srand(2);
for (lfs_size_t i = 0; i < SIZE2; i += CHUNKSIZE) {
lfs_size_t chunk = lfs_min(CHUNKSIZE, SIZE2-i);
for (lfs_size_t b = 0; b < chunk; b++) {
buffer[b] = rand() & 0xff;
}
lfs_file_write(&lfs, &file, buffer, chunk) => chunk;
}
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
// read
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file, "avacado", LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => lfs_max(SIZE1, SIZE2);
srand(2);
for (lfs_size_t i = 0; i < SIZE2; i += CHUNKSIZE) {
lfs_size_t chunk = lfs_min(CHUNKSIZE, SIZE2-i);
lfs_file_read(&lfs, &file, buffer, chunk) => chunk;
for (lfs_size_t b = 0; b < chunk; b++) {
assert(buffer[b] == (rand() & 0xff));
}
}
if (SIZE1 > SIZE2) {
srand(1);
for (lfs_size_t b = 0; b < SIZE2; b++) {
rand();
}
for (lfs_size_t i = SIZE2; i < SIZE1; i += CHUNKSIZE) {
lfs_size_t chunk = lfs_min(CHUNKSIZE, SIZE1-i);
lfs_file_read(&lfs, &file, buffer, chunk) => chunk;
for (lfs_size_t b = 0; b < chunk; b++) {
assert(buffer[b] == (rand() & 0xff));
}
}
}
lfs_file_read(&lfs, &file, buffer, CHUNKSIZE) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
'''
[[case]] # appending files
define.SIZE1 = [32, 8192, 131072, 0, 7, 8193]
define.SIZE2 = [32, 8192, 131072, 0, 7, 8193]
define.CHUNKSIZE = [31, 16, 1]
code = '''
lfs_format(&lfs, &cfg) => 0;
// write
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file, "avacado",
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_EXCL) => 0;
srand(1);
for (lfs_size_t i = 0; i < SIZE1; i += CHUNKSIZE) {
lfs_size_t chunk = lfs_min(CHUNKSIZE, SIZE1-i);
for (lfs_size_t b = 0; b < chunk; b++) {
buffer[b] = rand() & 0xff;
}
lfs_file_write(&lfs, &file, buffer, chunk) => chunk;
}
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
// read
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file, "avacado", LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => SIZE1;
srand(1);
for (lfs_size_t i = 0; i < SIZE1; i += CHUNKSIZE) {
lfs_size_t chunk = lfs_min(CHUNKSIZE, SIZE1-i);
lfs_file_read(&lfs, &file, buffer, chunk) => chunk;
for (lfs_size_t b = 0; b < chunk; b++) {
assert(buffer[b] == (rand() & 0xff));
}
}
lfs_file_read(&lfs, &file, buffer, CHUNKSIZE) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
// append
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file, "avacado", LFS_O_WRONLY | LFS_O_APPEND) => 0;
srand(2);
for (lfs_size_t i = 0; i < SIZE2; i += CHUNKSIZE) {
lfs_size_t chunk = lfs_min(CHUNKSIZE, SIZE2-i);
for (lfs_size_t b = 0; b < chunk; b++) {
buffer[b] = rand() & 0xff;
}
lfs_file_write(&lfs, &file, buffer, chunk) => chunk;
}
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
// read
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file, "avacado", LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => SIZE1 + SIZE2;
srand(1);
for (lfs_size_t i = 0; i < SIZE1; i += CHUNKSIZE) {
lfs_size_t chunk = lfs_min(CHUNKSIZE, SIZE1-i);
lfs_file_read(&lfs, &file, buffer, chunk) => chunk;
for (lfs_size_t b = 0; b < chunk; b++) {
assert(buffer[b] == (rand() & 0xff));
}
}
srand(2);
for (lfs_size_t i = 0; i < SIZE2; i += CHUNKSIZE) {
lfs_size_t chunk = lfs_min(CHUNKSIZE, SIZE2-i);
lfs_file_read(&lfs, &file, buffer, chunk) => chunk;
for (lfs_size_t b = 0; b < chunk; b++) {
assert(buffer[b] == (rand() & 0xff));
}
}
lfs_file_read(&lfs, &file, buffer, CHUNKSIZE) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
'''
[[case]] # truncating files
define.SIZE1 = [32, 8192, 131072, 0, 7, 8193]
define.SIZE2 = [32, 8192, 131072, 0, 7, 8193]
define.CHUNKSIZE = [31, 16, 1]
code = '''
lfs_format(&lfs, &cfg) => 0;
// write
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file, "avacado",
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_EXCL) => 0;
srand(1);
for (lfs_size_t i = 0; i < SIZE1; i += CHUNKSIZE) {
lfs_size_t chunk = lfs_min(CHUNKSIZE, SIZE1-i);
for (lfs_size_t b = 0; b < chunk; b++) {
buffer[b] = rand() & 0xff;
}
lfs_file_write(&lfs, &file, buffer, chunk) => chunk;
}
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
// read
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file, "avacado", LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => SIZE1;
srand(1);
for (lfs_size_t i = 0; i < SIZE1; i += CHUNKSIZE) {
lfs_size_t chunk = lfs_min(CHUNKSIZE, SIZE1-i);
lfs_file_read(&lfs, &file, buffer, chunk) => chunk;
for (lfs_size_t b = 0; b < chunk; b++) {
assert(buffer[b] == (rand() & 0xff));
}
}
lfs_file_read(&lfs, &file, buffer, CHUNKSIZE) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
// truncate
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file, "avacado", LFS_O_WRONLY | LFS_O_TRUNC) => 0;
srand(2);
for (lfs_size_t i = 0; i < SIZE2; i += CHUNKSIZE) {
lfs_size_t chunk = lfs_min(CHUNKSIZE, SIZE2-i);
for (lfs_size_t b = 0; b < chunk; b++) {
buffer[b] = rand() & 0xff;
}
lfs_file_write(&lfs, &file, buffer, chunk) => chunk;
}
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
// read
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file, "avacado", LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => SIZE2;
srand(2);
for (lfs_size_t i = 0; i < SIZE2; i += CHUNKSIZE) {
lfs_size_t chunk = lfs_min(CHUNKSIZE, SIZE2-i);
lfs_file_read(&lfs, &file, buffer, chunk) => chunk;
for (lfs_size_t b = 0; b < chunk; b++) {
assert(buffer[b] == (rand() & 0xff));
}
}
lfs_file_read(&lfs, &file, buffer, CHUNKSIZE) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
'''
[[case]] # reentrant file writing
define.SIZE = [32, 0, 7, 2049]
define.CHUNKSIZE = [31, 16, 65]
reentrant = true
code = '''
err = lfs_mount(&lfs, &cfg);
if (err) {
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
}
err = lfs_file_open(&lfs, &file, "avacado", LFS_O_RDONLY);
assert(err == LFS_ERR_NOENT || err == 0);
if (err == 0) {
// can only be 0 (new file) or full size
size = lfs_file_size(&lfs, &file);
assert(size == 0 || size == SIZE);
lfs_file_close(&lfs, &file) => 0;
}
// write
lfs_file_open(&lfs, &file, "avacado", LFS_O_WRONLY | LFS_O_CREAT) => 0;
srand(1);
for (lfs_size_t i = 0; i < SIZE; i += CHUNKSIZE) {
lfs_size_t chunk = lfs_min(CHUNKSIZE, SIZE-i);
for (lfs_size_t b = 0; b < chunk; b++) {
buffer[b] = rand() & 0xff;
}
lfs_file_write(&lfs, &file, buffer, chunk) => chunk;
}
lfs_file_close(&lfs, &file) => 0;
// read
lfs_file_open(&lfs, &file, "avacado", LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => SIZE;
srand(1);
for (lfs_size_t i = 0; i < SIZE; i += CHUNKSIZE) {
lfs_size_t chunk = lfs_min(CHUNKSIZE, SIZE-i);
lfs_file_read(&lfs, &file, buffer, chunk) => chunk;
for (lfs_size_t b = 0; b < chunk; b++) {
assert(buffer[b] == (rand() & 0xff));
}
}
lfs_file_read(&lfs, &file, buffer, CHUNKSIZE) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
'''
[[case]] # reentrant file writing with syncs
define = [
# append (O(n))
{MODE='LFS_O_APPEND', SIZE=[32, 0, 7, 2049], CHUNKSIZE=[31, 16, 65]},
# truncate (O(n^2))
{MODE='LFS_O_TRUNC', SIZE=[32, 0, 7, 200], CHUNKSIZE=[31, 16, 65]},
# rewrite (O(n^2))
{MODE=0, SIZE=[32, 0, 7, 200], CHUNKSIZE=[31, 16, 65]},
]
reentrant = true
code = '''
err = lfs_mount(&lfs, &cfg);
if (err) {
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
}
err = lfs_file_open(&lfs, &file, "avacado", LFS_O_RDONLY);
assert(err == LFS_ERR_NOENT || err == 0);
if (err == 0) {
// with syncs we could be any size, but the data must at least be valid
size = lfs_file_size(&lfs, &file);
assert(size <= SIZE);
srand(1);
for (lfs_size_t i = 0; i < size; i += CHUNKSIZE) {
lfs_size_t chunk = lfs_min(CHUNKSIZE, size-i);
lfs_file_read(&lfs, &file, buffer, chunk) => chunk;
for (lfs_size_t b = 0; b < chunk; b++) {
assert(buffer[b] == (rand() & 0xff));
}
}
lfs_file_close(&lfs, &file) => 0;
}
// write
lfs_file_open(&lfs, &file, "avacado",
LFS_O_WRONLY | LFS_O_CREAT | MODE) => 0;
size = lfs_file_size(&lfs, &file);
assert(size <= SIZE);
srand(1);
lfs_size_t skip = (MODE == LFS_O_APPEND) ? size : 0;
for (lfs_size_t b = 0; b < skip; b++) {
rand();
}
for (lfs_size_t i = skip; i < SIZE; i += CHUNKSIZE) {
lfs_size_t chunk = lfs_min(CHUNKSIZE, SIZE-i);
for (lfs_size_t b = 0; b < chunk; b++) {
buffer[b] = rand() & 0xff;
}
lfs_file_write(&lfs, &file, buffer, chunk) => chunk;
lfs_file_sync(&lfs, &file) => 0;
}
lfs_file_close(&lfs, &file) => 0;
// read
lfs_file_open(&lfs, &file, "avacado", LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => SIZE;
srand(1);
for (lfs_size_t i = 0; i < SIZE; i += CHUNKSIZE) {
lfs_size_t chunk = lfs_min(CHUNKSIZE, SIZE-i);
lfs_file_read(&lfs, &file, buffer, chunk) => chunk;
for (lfs_size_t b = 0; b < chunk; b++) {
assert(buffer[b] == (rand() & 0xff));
}
}
lfs_file_read(&lfs, &file, buffer, CHUNKSIZE) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
'''
[[case]] # many files
define.N = 300
code = '''
lfs_format(&lfs, &cfg) => 0;
// create N files of 7 bytes
lfs_mount(&lfs, &cfg) => 0;
for (int i = 0; i < N; i++) {
sprintf(path, "file_%03d", i);
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_EXCL) => 0;
char wbuffer[1024];
size = 7;
snprintf(wbuffer, size, "Hi %03d", i);
lfs_file_write(&lfs, &file, wbuffer, size) => size;
lfs_file_close(&lfs, &file) => 0;
char rbuffer[1024];
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
lfs_file_read(&lfs, &file, rbuffer, size) => size;
assert(strcmp(rbuffer, wbuffer) == 0);
lfs_file_close(&lfs, &file) => 0;
}
lfs_unmount(&lfs) => 0;
'''
[[case]] # many files with power cycle
define.N = 300
code = '''
lfs_format(&lfs, &cfg) => 0;
// create N files of 7 bytes
lfs_mount(&lfs, &cfg) => 0;
for (int i = 0; i < N; i++) {
sprintf(path, "file_%03d", i);
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_EXCL) => 0;
char wbuffer[1024];
size = 7;
snprintf(wbuffer, size, "Hi %03d", i);
lfs_file_write(&lfs, &file, wbuffer, size) => size;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
char rbuffer[1024];
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
lfs_file_read(&lfs, &file, rbuffer, size) => size;
assert(strcmp(rbuffer, wbuffer) == 0);
lfs_file_close(&lfs, &file) => 0;
}
lfs_unmount(&lfs) => 0;
'''
[[case]] # many files with power loss
define.N = 300
reentrant = true
code = '''
err = lfs_mount(&lfs, &cfg);
if (err) {
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
}
// create N files of 7 bytes
for (int i = 0; i < N; i++) {
sprintf(path, "file_%03d", i);
err = lfs_file_open(&lfs, &file, path, LFS_O_WRONLY | LFS_O_CREAT);
char wbuffer[1024];
size = 7;
snprintf(wbuffer, size, "Hi %03d", i);
if ((lfs_size_t)lfs_file_size(&lfs, &file) != size) {
lfs_file_write(&lfs, &file, wbuffer, size) => size;
}
lfs_file_close(&lfs, &file) => 0;
char rbuffer[1024];
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
lfs_file_read(&lfs, &file, rbuffer, size) => size;
assert(strcmp(rbuffer, wbuffer) == 0);
lfs_file_close(&lfs, &file) => 0;
}
lfs_unmount(&lfs) => 0;
'''
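The file cases above all rely on the same verification trick: seed rand(), write SIZE pseudo-random bytes in CHUNKSIZE pieces, then re-seed with the same value on read-back so the expected byte sequence can be regenerated instead of kept in RAM. Below is a minimal standalone sketch of that pattern, with an in-memory array standing in for the littlefs file (SIZE, CHUNKSIZE, and storage are illustrative names):

#include <assert.h>
#include <stdlib.h>
#include <string.h>

enum { SIZE = 8193, CHUNKSIZE = 31 };
static unsigned char storage[SIZE];   /* stands in for the littlefs file */

int main(void) {
    unsigned char buffer[CHUNKSIZE];

    /* "write": generate SIZE pseudo-random bytes in CHUNKSIZE pieces */
    srand(1);
    for (size_t i = 0; i < SIZE; i += CHUNKSIZE) {
        size_t chunk = (CHUNKSIZE < SIZE-i) ? CHUNKSIZE : SIZE-i;
        for (size_t b = 0; b < chunk; b++) {
            buffer[b] = rand() & 0xff;
        }
        memcpy(&storage[i], buffer, chunk);
    }

    /* "read": re-seeding replays the exact same sequence, so the data can
     * be checked byte-by-byte without storing a reference copy */
    srand(1);
    for (size_t i = 0; i < SIZE; i += CHUNKSIZE) {
        size_t chunk = (CHUNKSIZE < SIZE-i) ? CHUNKSIZE : SIZE-i;
        for (size_t b = 0; b < chunk; b++) {
            assert(storage[i+b] == (unsigned char)(rand() & 0xff));
        }
    }
    return 0;
}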


@@ -1,50 +0,0 @@
#!/bin/bash
set -eu
echo "=== Formatting tests ==="
rm -rf blocks
echo "--- Basic formatting ---"
tests/test.py << TEST
lfs_format(&lfs, &cfg) => 0;
TEST
echo "--- Basic mounting ---"
tests/test.py << TEST
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Invalid superblocks ---"
ln -f -s /dev/zero blocks/0
ln -f -s /dev/zero blocks/1
tests/test.py << TEST
lfs_format(&lfs, &cfg) => LFS_ERR_NOSPC;
TEST
rm blocks/0 blocks/1
echo "--- Invalid mount ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => LFS_ERR_CORRUPT;
TEST
echo "--- Expanding superblock ---"
tests/test.py << TEST
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
for (int i = 0; i < 100; i++) {
lfs_mkdir(&lfs, "dummy") => 0;
lfs_remove(&lfs, "dummy") => 0;
}
lfs_unmount(&lfs) => 0;
TEST
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_mkdir(&lfs, "dummy") => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Results ---"
tests/stats.py


@@ -1,186 +0,0 @@
#!/bin/bash
set -eu
echo "=== Interspersed tests ==="
rm -rf blocks
tests/test.py << TEST
lfs_format(&lfs, &cfg) => 0;
TEST
echo "--- Interspersed file test ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file[0], "a", LFS_O_WRONLY | LFS_O_CREAT) => 0;
lfs_file_open(&lfs, &file[1], "b", LFS_O_WRONLY | LFS_O_CREAT) => 0;
lfs_file_open(&lfs, &file[2], "c", LFS_O_WRONLY | LFS_O_CREAT) => 0;
lfs_file_open(&lfs, &file[3], "d", LFS_O_WRONLY | LFS_O_CREAT) => 0;
for (int i = 0; i < 10; i++) {
lfs_file_write(&lfs, &file[0], (const void*)"a", 1) => 1;
lfs_file_write(&lfs, &file[1], (const void*)"b", 1) => 1;
lfs_file_write(&lfs, &file[2], (const void*)"c", 1) => 1;
lfs_file_write(&lfs, &file[3], (const void*)"d", 1) => 1;
}
lfs_file_close(&lfs, &file[0]);
lfs_file_close(&lfs, &file[1]);
lfs_file_close(&lfs, &file[2]);
lfs_file_close(&lfs, &file[3]);
lfs_dir_open(&lfs, &dir[0], "/") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "a") => 0;
info.type => LFS_TYPE_REG;
info.size => 10;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "b") => 0;
info.type => LFS_TYPE_REG;
info.size => 10;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "c") => 0;
info.type => LFS_TYPE_REG;
info.size => 10;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "d") => 0;
info.type => LFS_TYPE_REG;
info.size => 10;
lfs_dir_read(&lfs, &dir[0], &info) => 0;
lfs_dir_close(&lfs, &dir[0]) => 0;
lfs_file_open(&lfs, &file[0], "a", LFS_O_RDONLY) => 0;
lfs_file_open(&lfs, &file[1], "b", LFS_O_RDONLY) => 0;
lfs_file_open(&lfs, &file[2], "c", LFS_O_RDONLY) => 0;
lfs_file_open(&lfs, &file[3], "d", LFS_O_RDONLY) => 0;
for (int i = 0; i < 10; i++) {
lfs_file_read(&lfs, &file[0], buffer, 1) => 1;
buffer[0] => 'a';
lfs_file_read(&lfs, &file[1], buffer, 1) => 1;
buffer[0] => 'b';
lfs_file_read(&lfs, &file[2], buffer, 1) => 1;
buffer[0] => 'c';
lfs_file_read(&lfs, &file[3], buffer, 1) => 1;
buffer[0] => 'd';
}
lfs_file_close(&lfs, &file[0]);
lfs_file_close(&lfs, &file[1]);
lfs_file_close(&lfs, &file[2]);
lfs_file_close(&lfs, &file[3]);
lfs_unmount(&lfs) => 0;
TEST
echo "--- Interspersed remove file test ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file[0], "e", LFS_O_WRONLY | LFS_O_CREAT) => 0;
for (int i = 0; i < 5; i++) {
lfs_file_write(&lfs, &file[0], (const void*)"e", 1) => 1;
}
lfs_remove(&lfs, "a") => 0;
lfs_remove(&lfs, "b") => 0;
lfs_remove(&lfs, "c") => 0;
lfs_remove(&lfs, "d") => 0;
for (int i = 0; i < 5; i++) {
lfs_file_write(&lfs, &file[0], (const void*)"e", 1) => 1;
}
lfs_file_close(&lfs, &file[0]);
lfs_dir_open(&lfs, &dir[0], "/") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "e") => 0;
info.type => LFS_TYPE_REG;
info.size => 10;
lfs_dir_read(&lfs, &dir[0], &info) => 0;
lfs_dir_close(&lfs, &dir[0]) => 0;
lfs_file_open(&lfs, &file[0], "e", LFS_O_RDONLY) => 0;
for (int i = 0; i < 10; i++) {
lfs_file_read(&lfs, &file[0], buffer, 1) => 1;
buffer[0] => 'e';
}
lfs_file_close(&lfs, &file[0]);
lfs_unmount(&lfs) => 0;
TEST
echo "--- Remove inconveniently test ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file[0], "e", LFS_O_WRONLY | LFS_O_TRUNC) => 0;
lfs_file_open(&lfs, &file[1], "f", LFS_O_WRONLY | LFS_O_CREAT) => 0;
lfs_file_open(&lfs, &file[2], "g", LFS_O_WRONLY | LFS_O_CREAT) => 0;
for (int i = 0; i < 5; i++) {
lfs_file_write(&lfs, &file[0], (const void*)"e", 1) => 1;
lfs_file_write(&lfs, &file[1], (const void*)"f", 1) => 1;
lfs_file_write(&lfs, &file[2], (const void*)"g", 1) => 1;
}
lfs_remove(&lfs, "f") => 0;
for (int i = 0; i < 5; i++) {
lfs_file_write(&lfs, &file[0], (const void*)"e", 1) => 1;
lfs_file_write(&lfs, &file[1], (const void*)"f", 1) => 1;
lfs_file_write(&lfs, &file[2], (const void*)"g", 1) => 1;
}
lfs_file_close(&lfs, &file[0]);
lfs_file_close(&lfs, &file[1]);
lfs_file_close(&lfs, &file[2]);
lfs_dir_open(&lfs, &dir[0], "/") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "e") => 0;
info.type => LFS_TYPE_REG;
info.size => 10;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "g") => 0;
info.type => LFS_TYPE_REG;
info.size => 10;
lfs_dir_read(&lfs, &dir[0], &info) => 0;
lfs_dir_close(&lfs, &dir[0]) => 0;
lfs_file_open(&lfs, &file[0], "e", LFS_O_RDONLY) => 0;
lfs_file_open(&lfs, &file[1], "g", LFS_O_RDONLY) => 0;
for (int i = 0; i < 10; i++) {
lfs_file_read(&lfs, &file[0], buffer, 1) => 1;
buffer[0] => 'e';
lfs_file_read(&lfs, &file[1], buffer, 1) => 1;
buffer[0] => 'g';
}
lfs_file_close(&lfs, &file[0]);
lfs_file_close(&lfs, &file[1]);
lfs_unmount(&lfs) => 0;
TEST
echo "--- Results ---"
tests/stats.py


@@ -0,0 +1,244 @@
[[case]] # interspersed file test
define.SIZE = [10, 100]
define.FILES = [4, 10, 26]
code = '''
lfs_file_t files[FILES];
const char alphas[] = "abcdefghijklmnopqrstuvwxyz";
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
for (int j = 0; j < FILES; j++) {
sprintf(path, "%c", alphas[j]);
lfs_file_open(&lfs, &files[j], path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_EXCL) => 0;
}
for (int i = 0; i < SIZE; i++) {
for (int j = 0; j < FILES; j++) {
lfs_file_write(&lfs, &files[j], &alphas[j], 1) => 1;
}
}
for (int j = 0; j < FILES; j++) {
lfs_file_close(&lfs, &files[j]);
}
lfs_dir_open(&lfs, &dir, "/") => 0;
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(strcmp(info.name, ".") == 0);
assert(info.type == LFS_TYPE_DIR);
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(strcmp(info.name, "..") == 0);
assert(info.type == LFS_TYPE_DIR);
for (int j = 0; j < FILES; j++) {
sprintf(path, "%c", alphas[j]);
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(strcmp(info.name, path) == 0);
assert(info.type == LFS_TYPE_REG);
assert(info.size == SIZE);
}
lfs_dir_read(&lfs, &dir, &info) => 0;
lfs_dir_close(&lfs, &dir) => 0;
for (int j = 0; j < FILES; j++) {
sprintf(path, "%c", alphas[j]);
lfs_file_open(&lfs, &files[j], path, LFS_O_RDONLY) => 0;
}
for (int i = 0; i < 10; i++) {
for (int j = 0; j < FILES; j++) {
lfs_file_read(&lfs, &files[j], buffer, 1) => 1;
assert(buffer[0] == alphas[j]);
}
}
for (int j = 0; j < FILES; j++) {
lfs_file_close(&lfs, &files[j]);
}
lfs_unmount(&lfs) => 0;
'''
[[case]] # interspersed remove file test
define.SIZE = [10, 100]
define.FILES = [4, 10, 26]
code = '''
const char alphas[] = "abcdefghijklmnopqrstuvwxyz";
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
for (int j = 0; j < FILES; j++) {
sprintf(path, "%c", alphas[j]);
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_EXCL) => 0;
for (int i = 0; i < SIZE; i++) {
lfs_file_write(&lfs, &file, &alphas[j], 1) => 1;
}
lfs_file_close(&lfs, &file);
}
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file, "zzz", LFS_O_WRONLY | LFS_O_CREAT) => 0;
for (int j = 0; j < FILES; j++) {
lfs_file_write(&lfs, &file, (const void*)"~", 1) => 1;
lfs_file_sync(&lfs, &file) => 0;
sprintf(path, "%c", alphas[j]);
lfs_remove(&lfs, path) => 0;
}
lfs_file_close(&lfs, &file);
lfs_dir_open(&lfs, &dir, "/") => 0;
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(strcmp(info.name, ".") == 0);
assert(info.type == LFS_TYPE_DIR);
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(strcmp(info.name, "..") == 0);
assert(info.type == LFS_TYPE_DIR);
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(strcmp(info.name, "zzz") == 0);
assert(info.type == LFS_TYPE_REG);
assert(info.size == FILES);
lfs_dir_read(&lfs, &dir, &info) => 0;
lfs_dir_close(&lfs, &dir) => 0;
lfs_file_open(&lfs, &file, "zzz", LFS_O_RDONLY) => 0;
for (int i = 0; i < FILES; i++) {
lfs_file_read(&lfs, &file, buffer, 1) => 1;
assert(buffer[0] == '~');
}
lfs_file_close(&lfs, &file);
lfs_unmount(&lfs) => 0;
'''
[[case]] # remove inconveniently test
define.SIZE = [10, 100]
code = '''
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_file_t files[3];
lfs_file_open(&lfs, &files[0], "e", LFS_O_WRONLY | LFS_O_CREAT) => 0;
lfs_file_open(&lfs, &files[1], "f", LFS_O_WRONLY | LFS_O_CREAT) => 0;
lfs_file_open(&lfs, &files[2], "g", LFS_O_WRONLY | LFS_O_CREAT) => 0;
for (int i = 0; i < SIZE/2; i++) {
lfs_file_write(&lfs, &files[0], (const void*)"e", 1) => 1;
lfs_file_write(&lfs, &files[1], (const void*)"f", 1) => 1;
lfs_file_write(&lfs, &files[2], (const void*)"g", 1) => 1;
}
lfs_remove(&lfs, "f") => 0;
for (int i = 0; i < SIZE/2; i++) {
lfs_file_write(&lfs, &files[0], (const void*)"e", 1) => 1;
lfs_file_write(&lfs, &files[1], (const void*)"f", 1) => 1;
lfs_file_write(&lfs, &files[2], (const void*)"g", 1) => 1;
}
lfs_file_close(&lfs, &files[0]);
lfs_file_close(&lfs, &files[1]);
lfs_file_close(&lfs, &files[2]);
lfs_dir_open(&lfs, &dir, "/") => 0;
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(strcmp(info.name, ".") == 0);
assert(info.type == LFS_TYPE_DIR);
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(strcmp(info.name, "..") == 0);
assert(info.type == LFS_TYPE_DIR);
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(strcmp(info.name, "e") == 0);
assert(info.type == LFS_TYPE_REG);
assert(info.size == SIZE);
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(strcmp(info.name, "g") == 0);
assert(info.type == LFS_TYPE_REG);
assert(info.size == SIZE);
lfs_dir_read(&lfs, &dir, &info) => 0;
lfs_dir_close(&lfs, &dir) => 0;
lfs_file_open(&lfs, &files[0], "e", LFS_O_RDONLY) => 0;
lfs_file_open(&lfs, &files[1], "g", LFS_O_RDONLY) => 0;
for (int i = 0; i < SIZE; i++) {
lfs_file_read(&lfs, &files[0], buffer, 1) => 1;
assert(buffer[0] == 'e');
lfs_file_read(&lfs, &files[1], buffer, 1) => 1;
assert(buffer[0] == 'g');
}
lfs_file_close(&lfs, &files[0]);
lfs_file_close(&lfs, &files[1]);
lfs_unmount(&lfs) => 0;
'''
[[case]] # reentrant interspersed file test
define.SIZE = [10, 100]
define.FILES = [4, 10, 26]
reentrant = true
code = '''
lfs_file_t files[FILES];
const char alphas[] = "abcdefghijklmnopqrstuvwxyz";
err = lfs_mount(&lfs, &cfg);
if (err) {
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
}
for (int j = 0; j < FILES; j++) {
sprintf(path, "%c", alphas[j]);
lfs_file_open(&lfs, &files[j], path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_APPEND) => 0;
}
for (int i = 0; i < SIZE; i++) {
for (int j = 0; j < FILES; j++) {
size = lfs_file_size(&lfs, &files[j]);
assert((int)size >= 0);
if ((int)size <= i) {
lfs_file_write(&lfs, &files[j], &alphas[j], 1) => 1;
lfs_file_sync(&lfs, &files[j]) => 0;
}
}
}
for (int j = 0; j < FILES; j++) {
lfs_file_close(&lfs, &files[j]);
}
lfs_dir_open(&lfs, &dir, "/") => 0;
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(strcmp(info.name, ".") == 0);
assert(info.type == LFS_TYPE_DIR);
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(strcmp(info.name, "..") == 0);
assert(info.type == LFS_TYPE_DIR);
for (int j = 0; j < FILES; j++) {
sprintf(path, "%c", alphas[j]);
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(strcmp(info.name, path) == 0);
assert(info.type == LFS_TYPE_REG);
assert(info.size == SIZE);
}
lfs_dir_read(&lfs, &dir, &info) => 0;
lfs_dir_close(&lfs, &dir) => 0;
for (int j = 0; j < FILES; j++) {
sprintf(path, "%c", alphas[j]);
lfs_file_open(&lfs, &files[j], path, LFS_O_RDONLY) => 0;
}
for (int i = 0; i < 10; i++) {
for (int j = 0; j < FILES; j++) {
lfs_file_read(&lfs, &files[j], buffer, 1) => 1;
assert(buffer[0] == alphas[j]);
}
}
for (int j = 0; j < FILES; j++) {
lfs_file_close(&lfs, &files[j]);
}
lfs_unmount(&lfs) => 0;
'''
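Note the lfs_file_size() guard before each write in the reentrant case above: with reentrant = true the body can be interrupted by simulated power loss and re-executed from the top, so each step has to be idempotent. A rough standalone sketch of that idempotent-append idea follows, with an in-memory buffer standing in for the file (filebuf, filelen, and idempotent_fill are illustrative names, not part of littlefs):

#include <assert.h>
#include <stddef.h>
#include <string.h>

enum { TARGET = 10 };
static char filebuf[TARGET];   /* stands in for the file's contents */
static size_t filelen;         /* stands in for lfs_file_size() */

/* only write the bytes that are not already present, so re-running the
 * routine after an interruption never duplicates data */
static void idempotent_fill(char c) {
    for (size_t i = 0; i < TARGET; i++) {
        if (filelen <= i) {
            filebuf[i] = c;
            filelen = i + 1;   /* progress that would survive a power loss */
        }
    }
}

int main(void) {
    /* simulate being interrupted after 4 bytes, then a full re-run */
    memset(filebuf, 'a', 4);
    filelen = 4;
    idempotent_fill('a');
    assert(filelen == TARGET);
    assert(memcmp(filebuf, "aaaaaaaaaa", TARGET) == 0);
    return 0;
}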


@@ -1,332 +0,0 @@
#!/bin/bash
set -eu
echo "=== Move tests ==="
rm -rf blocks
tests/test.py << TEST
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_mkdir(&lfs, "a") => 0;
lfs_mkdir(&lfs, "b") => 0;
lfs_mkdir(&lfs, "c") => 0;
lfs_mkdir(&lfs, "d") => 0;
lfs_mkdir(&lfs, "a/hi") => 0;
lfs_mkdir(&lfs, "a/hi/hola") => 0;
lfs_mkdir(&lfs, "a/hi/bonjour") => 0;
lfs_mkdir(&lfs, "a/hi/ohayo") => 0;
lfs_file_open(&lfs, &file[0], "a/hello", LFS_O_CREAT | LFS_O_WRONLY) => 0;
lfs_file_write(&lfs, &file[0], "hola\n", 5) => 5;
lfs_file_write(&lfs, &file[0], "bonjour\n", 8) => 8;
lfs_file_write(&lfs, &file[0], "ohayo\n", 6) => 6;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Move file ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_rename(&lfs, "a/hello", "b/hello") => 0;
lfs_unmount(&lfs) => 0;
TEST
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_dir_open(&lfs, &dir[0], "a") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "hi") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 0;
lfs_dir_close(&lfs, &dir[0]) => 0;
lfs_dir_open(&lfs, &dir[0], "b") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "hello") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Move file corrupt source ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_rename(&lfs, "b/hello", "c/hello") => 0;
lfs_unmount(&lfs) => 0;
TEST
tests/corrupt.py -n 1
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_dir_open(&lfs, &dir[0], "b") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 0;
lfs_dir_close(&lfs, &dir[0]) => 0;
lfs_dir_open(&lfs, &dir[0], "c") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "hello") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Move file corrupt source and dest ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_rename(&lfs, "c/hello", "d/hello") => 0;
lfs_unmount(&lfs) => 0;
TEST
tests/corrupt.py -n 2
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_dir_open(&lfs, &dir[0], "c") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "hello") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 0;
lfs_dir_close(&lfs, &dir[0]) => 0;
lfs_dir_open(&lfs, &dir[0], "d") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Move file after corrupt ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_rename(&lfs, "c/hello", "d/hello") => 0;
lfs_unmount(&lfs) => 0;
TEST
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_dir_open(&lfs, &dir[0], "c") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 0;
lfs_dir_close(&lfs, &dir[0]) => 0;
lfs_dir_open(&lfs, &dir[0], "d") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "hello") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Move dir ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_rename(&lfs, "a/hi", "b/hi") => 0;
lfs_unmount(&lfs) => 0;
TEST
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_dir_open(&lfs, &dir[0], "a") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 0;
lfs_dir_close(&lfs, &dir[0]) => 0;
lfs_dir_open(&lfs, &dir[0], "b") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "hi") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Move dir corrupt source ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_rename(&lfs, "b/hi", "c/hi") => 0;
lfs_unmount(&lfs) => 0;
TEST
tests/corrupt.py -n 1
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_dir_open(&lfs, &dir[0], "b") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 0;
lfs_dir_close(&lfs, &dir[0]) => 0;
lfs_dir_open(&lfs, &dir[0], "c") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "hi") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Move dir corrupt source and dest ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_rename(&lfs, "c/hi", "d/hi") => 0;
lfs_unmount(&lfs) => 0;
TEST
tests/corrupt.py -n 2
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_dir_open(&lfs, &dir[0], "c") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "hi") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 0;
lfs_dir_close(&lfs, &dir[0]) => 0;
lfs_dir_open(&lfs, &dir[0], "d") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "hello") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Move dir after corrupt ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_rename(&lfs, "c/hi", "d/hi") => 0;
lfs_unmount(&lfs) => 0;
TEST
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_dir_open(&lfs, &dir[0], "c") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 0;
lfs_dir_close(&lfs, &dir[0]) => 0;
lfs_dir_open(&lfs, &dir[0], "d") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "hello") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "hi") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Move check ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_dir_open(&lfs, &dir[0], "a/hi") => LFS_ERR_NOENT;
lfs_dir_open(&lfs, &dir[0], "b/hi") => LFS_ERR_NOENT;
lfs_dir_open(&lfs, &dir[0], "c/hi") => LFS_ERR_NOENT;
lfs_dir_open(&lfs, &dir[0], "d/hi") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "bonjour") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "hola") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "ohayo") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 0;
lfs_dir_close(&lfs, &dir[0]) => 0;
lfs_dir_open(&lfs, &dir[0], "a/hello") => LFS_ERR_NOENT;
lfs_dir_open(&lfs, &dir[0], "b/hello") => LFS_ERR_NOENT;
lfs_dir_open(&lfs, &dir[0], "c/hello") => LFS_ERR_NOENT;
lfs_file_open(&lfs, &file[0], "d/hello", LFS_O_RDONLY) => 0;
lfs_file_read(&lfs, &file[0], buffer, 5) => 5;
memcmp(buffer, "hola\n", 5) => 0;
lfs_file_read(&lfs, &file[0], buffer, 8) => 8;
memcmp(buffer, "bonjour\n", 8) => 0;
lfs_file_read(&lfs, &file[0], buffer, 6) => 6;
memcmp(buffer, "ohayo\n", 6) => 0;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Move state stealing ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_remove(&lfs, "b") => 0;
lfs_remove(&lfs, "c") => 0;
lfs_unmount(&lfs) => 0;
TEST
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_dir_open(&lfs, &dir[0], "a/hi") => LFS_ERR_NOENT;
lfs_dir_open(&lfs, &dir[0], "b") => LFS_ERR_NOENT;
lfs_dir_open(&lfs, &dir[0], "c") => LFS_ERR_NOENT;
lfs_dir_open(&lfs, &dir[0], "d/hi") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "bonjour") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "hola") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "ohayo") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 0;
lfs_dir_close(&lfs, &dir[0]) => 0;
lfs_dir_open(&lfs, &dir[0], "a/hello") => LFS_ERR_NOENT;
lfs_dir_open(&lfs, &dir[0], "b") => LFS_ERR_NOENT;
lfs_dir_open(&lfs, &dir[0], "c") => LFS_ERR_NOENT;
lfs_file_open(&lfs, &file[0], "d/hello", LFS_O_RDONLY) => 0;
lfs_file_read(&lfs, &file[0], buffer, 5) => 5;
memcmp(buffer, "hola\n", 5) => 0;
lfs_file_read(&lfs, &file[0], buffer, 8) => 8;
memcmp(buffer, "bonjour\n", 8) => 0;
lfs_file_read(&lfs, &file[0], buffer, 6) => 6;
memcmp(buffer, "ohayo\n", 6) => 0;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Results ---"
tests/stats.py

tests/test_move.toml Normal file

File diff suppressed because it is too large

@@ -1,45 +0,0 @@
#!/bin/bash
set -eu
echo "=== Orphan tests ==="
rm -rf blocks
tests/test.py << TEST
lfs_format(&lfs, &cfg) => 0;
TEST
echo "--- Orphan test ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_mkdir(&lfs, "parent") => 0;
lfs_mkdir(&lfs, "parent/orphan") => 0;
lfs_mkdir(&lfs, "parent/child") => 0;
lfs_remove(&lfs, "parent/orphan") => 0;
TEST
# corrupt most recent commit, this should be the update to the previous
# linked-list entry and should orphan the child
tests/corrupt.py
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_stat(&lfs, "parent/orphan", &info) => LFS_ERR_NOENT;
lfs_ssize_t before = lfs_fs_size(&lfs);
before => 8;
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_stat(&lfs, "parent/orphan", &info) => LFS_ERR_NOENT;
lfs_ssize_t orphaned = lfs_fs_size(&lfs);
orphaned => 8;
lfs_mkdir(&lfs, "parent/otherchild") => 0;
lfs_stat(&lfs, "parent/orphan", &info) => LFS_ERR_NOENT;
lfs_ssize_t deorphaned = lfs_fs_size(&lfs);
deorphaned => 8;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Results ---"
tests/stats.py

tests/test_orphans.toml Normal file

@@ -0,0 +1,120 @@
[[case]] # orphan test
in = "lfs.c"
if = 'LFS_PROG_SIZE <= 0x3fe' # only works with one crc per commit
code = '''
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_mkdir(&lfs, "parent") => 0;
lfs_mkdir(&lfs, "parent/orphan") => 0;
lfs_mkdir(&lfs, "parent/child") => 0;
lfs_remove(&lfs, "parent/orphan") => 0;
lfs_unmount(&lfs) => 0;
// corrupt the child's most recent commit, this should be the update
// to the linked-list entry, which should orphan the orphan. Note this
// makes a lot of assumptions about the remove operation.
lfs_mount(&lfs, &cfg) => 0;
lfs_dir_open(&lfs, &dir, "parent/child") => 0;
lfs_block_t block = dir.m.pair[0];
lfs_dir_close(&lfs, &dir) => 0;
lfs_unmount(&lfs) => 0;
uint8_t bbuffer[LFS_BLOCK_SIZE];
cfg.read(&cfg, block, 0, bbuffer, LFS_BLOCK_SIZE) => 0;
int off = LFS_BLOCK_SIZE-1;
while (off >= 0 && bbuffer[off] == LFS_ERASE_VALUE) {
off -= 1;
}
memset(&bbuffer[off-3], LFS_BLOCK_SIZE, 3);
cfg.erase(&cfg, block) => 0;
cfg.prog(&cfg, block, 0, bbuffer, LFS_BLOCK_SIZE) => 0;
cfg.sync(&cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_stat(&lfs, "parent/orphan", &info) => LFS_ERR_NOENT;
lfs_stat(&lfs, "parent/child", &info) => 0;
lfs_fs_size(&lfs) => 8;
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_stat(&lfs, "parent/orphan", &info) => LFS_ERR_NOENT;
lfs_stat(&lfs, "parent/child", &info) => 0;
lfs_fs_size(&lfs) => 8;
// this mkdir should both create a dir and deorphan, so size
// should be unchanged
lfs_mkdir(&lfs, "parent/otherchild") => 0;
lfs_stat(&lfs, "parent/orphan", &info) => LFS_ERR_NOENT;
lfs_stat(&lfs, "parent/child", &info) => 0;
lfs_stat(&lfs, "parent/otherchild", &info) => 0;
lfs_fs_size(&lfs) => 8;
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_stat(&lfs, "parent/orphan", &info) => LFS_ERR_NOENT;
lfs_stat(&lfs, "parent/child", &info) => 0;
lfs_stat(&lfs, "parent/otherchild", &info) => 0;
lfs_fs_size(&lfs) => 8;
lfs_unmount(&lfs) => 0;
'''
[[case]] # reentrant testing for orphans, basically just spam mkdir/remove
reentrant = true
# TODO fix this case, caused by non-DAG trees
if = '!(DEPTH == 3 && LFS_CACHE_SIZE != 64)'
define = [
{FILES=6, DEPTH=1, CYCLES=20},
{FILES=26, DEPTH=1, CYCLES=20},
{FILES=3, DEPTH=3, CYCLES=20},
]
code = '''
err = lfs_mount(&lfs, &cfg);
if (err) {
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
}
srand(1);
const char alpha[] = "abcdefghijklmnopqrstuvwxyz";
for (int i = 0; i < CYCLES; i++) {
// create random path
char full_path[256];
for (int d = 0; d < DEPTH; d++) {
sprintf(&full_path[2*d], "/%c", alpha[rand() % FILES]);
}
// if it does not exist, we create it, otherwise we destroy it
int res = lfs_stat(&lfs, full_path, &info);
if (res == LFS_ERR_NOENT) {
// create each directory in turn, ignore if dir already exists
for (int d = 0; d < DEPTH; d++) {
strcpy(path, full_path);
path[2*d+2] = '\0';
err = lfs_mkdir(&lfs, path);
assert(!err || err == LFS_ERR_EXIST);
}
for (int d = 0; d < DEPTH; d++) {
strcpy(path, full_path);
path[2*d+2] = '\0';
lfs_stat(&lfs, path, &info) => 0;
assert(strcmp(info.name, &path[2*d+1]) == 0);
assert(info.type == LFS_TYPE_DIR);
}
} else {
// is valid dir?
assert(strcmp(info.name, &full_path[2*(DEPTH-1)+1]) == 0);
assert(info.type == LFS_TYPE_DIR);
// try to delete path in reverse order, ignore if dir is not empty
for (int d = DEPTH-1; d >= 0; d--) {
strcpy(path, full_path);
path[2*d+2] = '\0';
err = lfs_remove(&lfs, path);
assert(!err || err == LFS_ERR_NOTEMPTY);
}
lfs_stat(&lfs, full_path, &info) => LFS_ERR_NOENT;
}
}
lfs_unmount(&lfs) => 0;
'''
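As a reading aid for the orphan case above: littlefs appends commits to a metadata block and leaves unwritten space at the erase value, so the last non-erased byte marks the tail of the most recent commit, and stomping a few bytes just before it should make that commit fail verification on the next mount. A minimal standalone sketch of that trick, with an in-memory block standing in for real storage (BLOCK_SIZE, ERASE_VALUE, and corrupt_last_commit are illustrative stand-ins for the test's LFS_* defines and inline code):

#include <assert.h>
#include <stdint.h>
#include <string.h>

enum { BLOCK_SIZE = 512, ERASE_VALUE = 0xff };   /* stand-ins for LFS_BLOCK_SIZE
                                                    and LFS_ERASE_VALUE */

static void corrupt_last_commit(uint8_t *block) {
    /* scan back over the still-erased tail of the block to find the last
     * programmed byte, then clobber a few bytes just before it */
    int off = BLOCK_SIZE - 1;
    while (off >= 0 && block[off] == ERASE_VALUE) {
        off -= 1;
    }
    assert(off >= 3);
    memset(&block[off-3], 0x00, 3);
}

int main(void) {
    uint8_t block[BLOCK_SIZE];
    memset(block, ERASE_VALUE, BLOCK_SIZE);
    memset(block, 0x42, 64);      /* pretend the first 64 bytes were programmed */
    corrupt_last_commit(block);
    assert(block[60] == 0x00 && block[61] == 0x00 && block[62] == 0x00);
    assert(block[59] == 0x42 && block[63] == 0x42);
    return 0;
}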


@@ -1,201 +0,0 @@
#!/bin/bash
set -eu
echo "=== Path tests ==="
rm -rf blocks
tests/test.py << TEST
lfs_format(&lfs, &cfg) => 0;
TEST
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_mkdir(&lfs, "tea") => 0;
lfs_mkdir(&lfs, "coffee") => 0;
lfs_mkdir(&lfs, "soda") => 0;
lfs_mkdir(&lfs, "tea/hottea") => 0;
lfs_mkdir(&lfs, "tea/warmtea") => 0;
lfs_mkdir(&lfs, "tea/coldtea") => 0;
lfs_mkdir(&lfs, "coffee/hotcoffee") => 0;
lfs_mkdir(&lfs, "coffee/warmcoffee") => 0;
lfs_mkdir(&lfs, "coffee/coldcoffee") => 0;
lfs_mkdir(&lfs, "soda/hotsoda") => 0;
lfs_mkdir(&lfs, "soda/warmsoda") => 0;
lfs_mkdir(&lfs, "soda/coldsoda") => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Root path tests ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_stat(&lfs, "tea/hottea", &info) => 0;
strcmp(info.name, "hottea") => 0;
lfs_stat(&lfs, "/tea/hottea", &info) => 0;
strcmp(info.name, "hottea") => 0;
lfs_mkdir(&lfs, "/milk1") => 0;
lfs_stat(&lfs, "/milk1", &info) => 0;
strcmp(info.name, "milk1") => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Redundant slash path tests ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_stat(&lfs, "/tea/hottea", &info) => 0;
strcmp(info.name, "hottea") => 0;
lfs_stat(&lfs, "//tea//hottea", &info) => 0;
strcmp(info.name, "hottea") => 0;
lfs_stat(&lfs, "///tea///hottea", &info) => 0;
strcmp(info.name, "hottea") => 0;
lfs_mkdir(&lfs, "///milk2") => 0;
lfs_stat(&lfs, "///milk2", &info) => 0;
strcmp(info.name, "milk2") => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Dot path tests ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_stat(&lfs, "./tea/hottea", &info) => 0;
strcmp(info.name, "hottea") => 0;
lfs_stat(&lfs, "/./tea/hottea", &info) => 0;
strcmp(info.name, "hottea") => 0;
lfs_stat(&lfs, "/././tea/hottea", &info) => 0;
strcmp(info.name, "hottea") => 0;
lfs_stat(&lfs, "/./tea/./hottea", &info) => 0;
strcmp(info.name, "hottea") => 0;
lfs_mkdir(&lfs, "/./milk3") => 0;
lfs_stat(&lfs, "/./milk3", &info) => 0;
strcmp(info.name, "milk3") => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Dot dot path tests ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_stat(&lfs, "coffee/../tea/hottea", &info) => 0;
strcmp(info.name, "hottea") => 0;
lfs_stat(&lfs, "tea/coldtea/../hottea", &info) => 0;
strcmp(info.name, "hottea") => 0;
lfs_stat(&lfs, "coffee/coldcoffee/../../tea/hottea", &info) => 0;
strcmp(info.name, "hottea") => 0;
lfs_stat(&lfs, "coffee/../soda/../tea/hottea", &info) => 0;
strcmp(info.name, "hottea") => 0;
lfs_mkdir(&lfs, "coffee/../milk4") => 0;
lfs_stat(&lfs, "coffee/../milk4", &info) => 0;
strcmp(info.name, "milk4") => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Trailing dot path tests ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_stat(&lfs, "tea/hottea/", &info) => 0;
strcmp(info.name, "hottea") => 0;
lfs_stat(&lfs, "tea/hottea/.", &info) => 0;
strcmp(info.name, "hottea") => 0;
lfs_stat(&lfs, "tea/hottea/./.", &info) => 0;
strcmp(info.name, "hottea") => 0;
lfs_stat(&lfs, "tea/hottea/..", &info) => 0;
strcmp(info.name, "tea") => 0;
lfs_stat(&lfs, "tea/hottea/../.", &info) => 0;
strcmp(info.name, "tea") => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Root dot dot path tests ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_stat(&lfs, "coffee/../../../../../../tea/hottea", &info) => 0;
strcmp(info.name, "hottea") => 0;
lfs_mkdir(&lfs, "coffee/../../../../../../milk5") => 0;
lfs_stat(&lfs, "coffee/../../../../../../milk5", &info) => 0;
strcmp(info.name, "milk5") => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Root tests ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_stat(&lfs, "/", &info) => 0;
info.type => LFS_TYPE_DIR;
strcmp(info.name, "/") => 0;
lfs_mkdir(&lfs, "/") => LFS_ERR_EXIST;
lfs_file_open(&lfs, &file[0], "/", LFS_O_WRONLY | LFS_O_CREAT)
=> LFS_ERR_ISDIR;
// more corner cases
lfs_remove(&lfs, "") => LFS_ERR_INVAL;
lfs_remove(&lfs, ".") => LFS_ERR_INVAL;
lfs_remove(&lfs, "..") => LFS_ERR_INVAL;
lfs_remove(&lfs, "/") => LFS_ERR_INVAL;
lfs_remove(&lfs, "//") => LFS_ERR_INVAL;
lfs_remove(&lfs, "./") => LFS_ERR_INVAL;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Sketchy path tests ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_mkdir(&lfs, "dirt/ground") => LFS_ERR_NOENT;
lfs_mkdir(&lfs, "dirt/ground/earth") => LFS_ERR_NOENT;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Superblock conflict test ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_mkdir(&lfs, "littlefs") => 0;
lfs_remove(&lfs, "littlefs") => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Max path test ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
memset(buffer, 'w', LFS_NAME_MAX+1);
buffer[LFS_NAME_MAX+2] = '\0';
lfs_mkdir(&lfs, (char*)buffer) => LFS_ERR_NAMETOOLONG;
lfs_file_open(&lfs, &file[0], (char*)buffer,
LFS_O_WRONLY | LFS_O_CREAT) => LFS_ERR_NAMETOOLONG;
memcpy(buffer, "coffee/", strlen("coffee/"));
memset(buffer+strlen("coffee/"), 'w', LFS_NAME_MAX+1);
buffer[strlen("coffee/")+LFS_NAME_MAX+2] = '\0';
lfs_mkdir(&lfs, (char*)buffer) => LFS_ERR_NAMETOOLONG;
lfs_file_open(&lfs, &file[0], (char*)buffer,
LFS_O_WRONLY | LFS_O_CREAT) => LFS_ERR_NAMETOOLONG;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Really big path test ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
memset(buffer, 'w', LFS_NAME_MAX);
buffer[LFS_NAME_MAX+1] = '\0';
lfs_mkdir(&lfs, (char*)buffer) => 0;
lfs_remove(&lfs, (char*)buffer) => 0;
lfs_file_open(&lfs, &file[0], (char*)buffer,
LFS_O_WRONLY | LFS_O_CREAT) => 0;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_remove(&lfs, (char*)buffer) => 0;
memcpy(buffer, "coffee/", strlen("coffee/"));
memset(buffer+strlen("coffee/"), 'w', LFS_NAME_MAX);
buffer[strlen("coffee/")+LFS_NAME_MAX+1] = '\0';
lfs_mkdir(&lfs, (char*)buffer) => 0;
lfs_remove(&lfs, (char*)buffer) => 0;
lfs_file_open(&lfs, &file[0], (char*)buffer,
LFS_O_WRONLY | LFS_O_CREAT) => 0;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_remove(&lfs, (char*)buffer) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Results ---"
tests/stats.py

tests/test_paths.toml Normal file

@@ -0,0 +1,293 @@
[[case]] # simple path test
code = '''
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_mkdir(&lfs, "tea") => 0;
lfs_mkdir(&lfs, "tea/hottea") => 0;
lfs_mkdir(&lfs, "tea/warmtea") => 0;
lfs_mkdir(&lfs, "tea/coldtea") => 0;
lfs_stat(&lfs, "tea/hottea", &info) => 0;
assert(strcmp(info.name, "hottea") == 0);
lfs_stat(&lfs, "/tea/hottea", &info) => 0;
assert(strcmp(info.name, "hottea") == 0);
lfs_mkdir(&lfs, "/milk") => 0;
lfs_stat(&lfs, "/milk", &info) => 0;
assert(strcmp(info.name, "milk") == 0);
lfs_stat(&lfs, "milk", &info) => 0;
assert(strcmp(info.name, "milk") == 0);
lfs_unmount(&lfs) => 0;
'''
[[case]] # redundant slashes
code = '''
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_mkdir(&lfs, "tea") => 0;
lfs_mkdir(&lfs, "tea/hottea") => 0;
lfs_mkdir(&lfs, "tea/warmtea") => 0;
lfs_mkdir(&lfs, "tea/coldtea") => 0;
lfs_stat(&lfs, "/tea/hottea", &info) => 0;
assert(strcmp(info.name, "hottea") == 0);
lfs_stat(&lfs, "//tea//hottea", &info) => 0;
assert(strcmp(info.name, "hottea") == 0);
lfs_stat(&lfs, "///tea///hottea", &info) => 0;
assert(strcmp(info.name, "hottea") == 0);
lfs_mkdir(&lfs, "////milk") => 0;
lfs_stat(&lfs, "////milk", &info) => 0;
assert(strcmp(info.name, "milk") == 0);
lfs_stat(&lfs, "milk", &info) => 0;
assert(strcmp(info.name, "milk") == 0);
lfs_unmount(&lfs) => 0;
'''
[[case]] # dot path test
code = '''
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_mkdir(&lfs, "tea") => 0;
lfs_mkdir(&lfs, "tea/hottea") => 0;
lfs_mkdir(&lfs, "tea/warmtea") => 0;
lfs_mkdir(&lfs, "tea/coldtea") => 0;
lfs_stat(&lfs, "./tea/hottea", &info) => 0;
assert(strcmp(info.name, "hottea") == 0);
lfs_stat(&lfs, "/./tea/hottea", &info) => 0;
assert(strcmp(info.name, "hottea") == 0);
lfs_stat(&lfs, "/././tea/hottea", &info) => 0;
assert(strcmp(info.name, "hottea") == 0);
lfs_stat(&lfs, "/./tea/./hottea", &info) => 0;
assert(strcmp(info.name, "hottea") == 0);
lfs_mkdir(&lfs, "/./milk") => 0;
lfs_stat(&lfs, "/./milk", &info) => 0;
assert(strcmp(info.name, "milk") == 0);
lfs_stat(&lfs, "milk", &info) => 0;
assert(strcmp(info.name, "milk") == 0);
lfs_unmount(&lfs) => 0;
'''
[[case]] # dot dot path test
code = '''
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_mkdir(&lfs, "tea") => 0;
lfs_mkdir(&lfs, "tea/hottea") => 0;
lfs_mkdir(&lfs, "tea/warmtea") => 0;
lfs_mkdir(&lfs, "tea/coldtea") => 0;
lfs_mkdir(&lfs, "coffee") => 0;
lfs_mkdir(&lfs, "coffee/hotcoffee") => 0;
lfs_mkdir(&lfs, "coffee/warmcoffee") => 0;
lfs_mkdir(&lfs, "coffee/coldcoffee") => 0;
lfs_stat(&lfs, "coffee/../tea/hottea", &info) => 0;
assert(strcmp(info.name, "hottea") == 0);
lfs_stat(&lfs, "tea/coldtea/../hottea", &info) => 0;
assert(strcmp(info.name, "hottea") == 0);
lfs_stat(&lfs, "coffee/coldcoffee/../../tea/hottea", &info) => 0;
assert(strcmp(info.name, "hottea") == 0);
lfs_stat(&lfs, "coffee/../coffee/../tea/hottea", &info) => 0;
assert(strcmp(info.name, "hottea") == 0);
lfs_mkdir(&lfs, "coffee/../milk") => 0;
lfs_stat(&lfs, "coffee/../milk", &info) => 0;
strcmp(info.name, "milk") => 0;
lfs_stat(&lfs, "milk", &info) => 0;
strcmp(info.name, "milk") => 0;
lfs_unmount(&lfs) => 0;
'''
[[case]] # trailing dot path test
code = '''
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_mkdir(&lfs, "tea") => 0;
lfs_mkdir(&lfs, "tea/hottea") => 0;
lfs_mkdir(&lfs, "tea/warmtea") => 0;
lfs_mkdir(&lfs, "tea/coldtea") => 0;
lfs_stat(&lfs, "tea/hottea/", &info) => 0;
assert(strcmp(info.name, "hottea") == 0);
lfs_stat(&lfs, "tea/hottea/.", &info) => 0;
assert(strcmp(info.name, "hottea") == 0);
lfs_stat(&lfs, "tea/hottea/./.", &info) => 0;
assert(strcmp(info.name, "hottea") == 0);
lfs_stat(&lfs, "tea/hottea/..", &info) => 0;
assert(strcmp(info.name, "tea") == 0);
lfs_stat(&lfs, "tea/hottea/../.", &info) => 0;
assert(strcmp(info.name, "tea") == 0);
lfs_unmount(&lfs) => 0;
'''
[[case]] # leading dot path test
code = '''
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_mkdir(&lfs, ".milk") => 0;
lfs_stat(&lfs, ".milk", &info) => 0;
strcmp(info.name, ".milk") => 0;
lfs_stat(&lfs, "tea/.././.milk", &info) => 0;
strcmp(info.name, ".milk") => 0;
lfs_unmount(&lfs) => 0;
'''
[[case]] # root dot dot path test
code = '''
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_mkdir(&lfs, "tea") => 0;
lfs_mkdir(&lfs, "tea/hottea") => 0;
lfs_mkdir(&lfs, "tea/warmtea") => 0;
lfs_mkdir(&lfs, "tea/coldtea") => 0;
lfs_mkdir(&lfs, "coffee") => 0;
lfs_mkdir(&lfs, "coffee/hotcoffee") => 0;
lfs_mkdir(&lfs, "coffee/warmcoffee") => 0;
lfs_mkdir(&lfs, "coffee/coldcoffee") => 0;
lfs_stat(&lfs, "coffee/../../../../../../tea/hottea", &info) => 0;
strcmp(info.name, "hottea") => 0;
lfs_mkdir(&lfs, "coffee/../../../../../../milk") => 0;
lfs_stat(&lfs, "coffee/../../../../../../milk", &info) => 0;
strcmp(info.name, "milk") => 0;
lfs_stat(&lfs, "milk", &info) => 0;
strcmp(info.name, "milk") => 0;
lfs_unmount(&lfs) => 0;
'''
[[case]] # invalid path tests
code = '''
lfs_format(&lfs, &cfg);
lfs_mount(&lfs, &cfg) => 0;
lfs_stat(&lfs, "dirt", &info) => LFS_ERR_NOENT;
lfs_stat(&lfs, "dirt/ground", &info) => LFS_ERR_NOENT;
lfs_stat(&lfs, "dirt/ground/earth", &info) => LFS_ERR_NOENT;
lfs_remove(&lfs, "dirt") => LFS_ERR_NOENT;
lfs_remove(&lfs, "dirt/ground") => LFS_ERR_NOENT;
lfs_remove(&lfs, "dirt/ground/earth") => LFS_ERR_NOENT;
lfs_mkdir(&lfs, "dirt/ground") => LFS_ERR_NOENT;
lfs_file_open(&lfs, &file, "dirt/ground", LFS_O_WRONLY | LFS_O_CREAT)
=> LFS_ERR_NOENT;
lfs_mkdir(&lfs, "dirt/ground/earth") => LFS_ERR_NOENT;
lfs_file_open(&lfs, &file, "dirt/ground/earth", LFS_O_WRONLY | LFS_O_CREAT)
=> LFS_ERR_NOENT;
lfs_unmount(&lfs) => 0;
'''
[[case]] # root operations
code = '''
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_stat(&lfs, "/", &info) => 0;
assert(strcmp(info.name, "/") == 0);
assert(info.type == LFS_TYPE_DIR);
lfs_mkdir(&lfs, "/") => LFS_ERR_EXIST;
lfs_file_open(&lfs, &file, "/", LFS_O_WRONLY | LFS_O_CREAT)
=> LFS_ERR_ISDIR;
lfs_remove(&lfs, "/") => LFS_ERR_INVAL;
lfs_unmount(&lfs) => 0;
'''
[[case]] # root representations
code = '''
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_stat(&lfs, "/", &info) => 0;
assert(strcmp(info.name, "/") == 0);
assert(info.type == LFS_TYPE_DIR);
lfs_stat(&lfs, "", &info) => 0;
assert(strcmp(info.name, "/") == 0);
assert(info.type == LFS_TYPE_DIR);
lfs_stat(&lfs, ".", &info) => 0;
assert(strcmp(info.name, "/") == 0);
assert(info.type == LFS_TYPE_DIR);
lfs_stat(&lfs, "..", &info) => 0;
assert(strcmp(info.name, "/") == 0);
assert(info.type == LFS_TYPE_DIR);
lfs_stat(&lfs, "//", &info) => 0;
assert(strcmp(info.name, "/") == 0);
assert(info.type == LFS_TYPE_DIR);
lfs_stat(&lfs, "./", &info) => 0;
assert(strcmp(info.name, "/") == 0);
assert(info.type == LFS_TYPE_DIR);
lfs_unmount(&lfs) => 0;
'''
[[case]] # superblock conflict test
code = '''
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_stat(&lfs, "littlefs", &info) => LFS_ERR_NOENT;
lfs_remove(&lfs, "littlefs") => LFS_ERR_NOENT;
lfs_mkdir(&lfs, "littlefs") => 0;
lfs_stat(&lfs, "littlefs", &info) => 0;
assert(strcmp(info.name, "littlefs") == 0);
assert(info.type == LFS_TYPE_DIR);
lfs_remove(&lfs, "littlefs") => 0;
lfs_stat(&lfs, "littlefs", &info) => LFS_ERR_NOENT;
lfs_unmount(&lfs) => 0;
'''
[[case]] # max path test
code = '''
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_mkdir(&lfs, "coffee") => 0;
lfs_mkdir(&lfs, "coffee/hotcoffee") => 0;
lfs_mkdir(&lfs, "coffee/warmcoffee") => 0;
lfs_mkdir(&lfs, "coffee/coldcoffee") => 0;
memset(path, 'w', LFS_NAME_MAX+1);
path[LFS_NAME_MAX+1] = '\0';
lfs_mkdir(&lfs, path) => LFS_ERR_NAMETOOLONG;
lfs_file_open(&lfs, &file, path, LFS_O_WRONLY | LFS_O_CREAT)
=> LFS_ERR_NAMETOOLONG;
memcpy(path, "coffee/", strlen("coffee/"));
memset(path+strlen("coffee/"), 'w', LFS_NAME_MAX+1);
path[strlen("coffee/")+LFS_NAME_MAX+1] = '\0';
lfs_mkdir(&lfs, path) => LFS_ERR_NAMETOOLONG;
lfs_file_open(&lfs, &file, path, LFS_O_WRONLY | LFS_O_CREAT)
=> LFS_ERR_NAMETOOLONG;
lfs_unmount(&lfs) => 0;
'''
[[case]] # really big path test
code = '''
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_mkdir(&lfs, "coffee") => 0;
lfs_mkdir(&lfs, "coffee/hotcoffee") => 0;
lfs_mkdir(&lfs, "coffee/warmcoffee") => 0;
lfs_mkdir(&lfs, "coffee/coldcoffee") => 0;
memset(path, 'w', LFS_NAME_MAX);
path[LFS_NAME_MAX] = '\0';
lfs_mkdir(&lfs, path) => 0;
lfs_remove(&lfs, path) => 0;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_remove(&lfs, path) => 0;
memcpy(path, "coffee/", strlen("coffee/"));
memset(path+strlen("coffee/"), 'w', LFS_NAME_MAX);
path[strlen("coffee/")+LFS_NAME_MAX] = '\0';
lfs_mkdir(&lfs, path) => 0;
lfs_remove(&lfs, path) => 0;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_remove(&lfs, path) => 0;
lfs_unmount(&lfs) => 0;
'''

tests/test_relocations.toml Normal file

@@ -0,0 +1,305 @@
# specific corner cases worth explicitly testing for
[[case]] # dangling split dir test
define.ITERATIONS = 20
define.COUNT = 10
define.LFS_BLOCK_CYCLES = [8, 1]
code = '''
lfs_format(&lfs, &cfg) => 0;
// fill up filesystem so only ~16 blocks are left
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file, "padding", LFS_O_CREAT | LFS_O_WRONLY) => 0;
memset(buffer, 0, 512);
while (LFS_BLOCK_COUNT - lfs_fs_size(&lfs) > 16) {
lfs_file_write(&lfs, &file, buffer, 512) => 512;
}
lfs_file_close(&lfs, &file) => 0;
// make a child dir to use in bounded space
lfs_mkdir(&lfs, "child") => 0;
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, &cfg) => 0;
for (int j = 0; j < ITERATIONS; j++) {
for (int i = 0; i < COUNT; i++) {
sprintf(path, "child/test%03d_loooooooooooooooooong_name", i);
lfs_file_open(&lfs, &file, path, LFS_O_CREAT | LFS_O_WRONLY) => 0;
lfs_file_close(&lfs, &file) => 0;
}
lfs_dir_open(&lfs, &dir, "child") => 0;
lfs_dir_read(&lfs, &dir, &info) => 1;
lfs_dir_read(&lfs, &dir, &info) => 1;
for (int i = 0; i < COUNT; i++) {
sprintf(path, "test%03d_loooooooooooooooooong_name", i);
lfs_dir_read(&lfs, &dir, &info) => 1;
strcmp(info.name, path) => 0;
}
lfs_dir_read(&lfs, &dir, &info) => 0;
lfs_dir_close(&lfs, &dir) => 0;
if (j == ITERATIONS-1) {
break;
}
for (int i = 0; i < COUNT; i++) {
sprintf(path, "child/test%03d_loooooooooooooooooong_name", i);
lfs_remove(&lfs, path) => 0;
}
}
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_dir_open(&lfs, &dir, "child") => 0;
lfs_dir_read(&lfs, &dir, &info) => 1;
lfs_dir_read(&lfs, &dir, &info) => 1;
for (int i = 0; i < COUNT; i++) {
sprintf(path, "test%03d_loooooooooooooooooong_name", i);
lfs_dir_read(&lfs, &dir, &info) => 1;
strcmp(info.name, path) => 0;
}
lfs_dir_read(&lfs, &dir, &info) => 0;
lfs_dir_close(&lfs, &dir) => 0;
for (int i = 0; i < COUNT; i++) {
sprintf(path, "child/test%03d_loooooooooooooooooong_name", i);
lfs_remove(&lfs, path) => 0;
}
lfs_unmount(&lfs) => 0;
'''
[[case]] # outdated head test
define.ITERATIONS = 20
define.COUNT = 10
define.LFS_BLOCK_CYCLES = [8, 1]
code = '''
lfs_format(&lfs, &cfg) => 0;
// fill up filesystem so only ~16 blocks are left
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file, "padding", LFS_O_CREAT | LFS_O_WRONLY) => 0;
memset(buffer, 0, 512);
while (LFS_BLOCK_COUNT - lfs_fs_size(&lfs) > 16) {
lfs_file_write(&lfs, &file, buffer, 512) => 512;
}
lfs_file_close(&lfs, &file) => 0;
// make a child dir to use in bounded space
lfs_mkdir(&lfs, "child") => 0;
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, &cfg) => 0;
for (int j = 0; j < ITERATIONS; j++) {
for (int i = 0; i < COUNT; i++) {
sprintf(path, "child/test%03d_loooooooooooooooooong_name", i);
lfs_file_open(&lfs, &file, path, LFS_O_CREAT | LFS_O_WRONLY) => 0;
lfs_file_close(&lfs, &file) => 0;
}
lfs_dir_open(&lfs, &dir, "child") => 0;
lfs_dir_read(&lfs, &dir, &info) => 1;
lfs_dir_read(&lfs, &dir, &info) => 1;
for (int i = 0; i < COUNT; i++) {
sprintf(path, "test%03d_loooooooooooooooooong_name", i);
lfs_dir_read(&lfs, &dir, &info) => 1;
strcmp(info.name, path) => 0;
info.size => 0;
sprintf(path, "child/test%03d_loooooooooooooooooong_name", i);
lfs_file_open(&lfs, &file, path, LFS_O_WRONLY) => 0;
lfs_file_write(&lfs, &file, "hi", 2) => 2;
lfs_file_close(&lfs, &file) => 0;
}
lfs_dir_read(&lfs, &dir, &info) => 0;
lfs_dir_rewind(&lfs, &dir) => 0;
lfs_dir_read(&lfs, &dir, &info) => 1;
lfs_dir_read(&lfs, &dir, &info) => 1;
for (int i = 0; i < COUNT; i++) {
sprintf(path, "test%03d_loooooooooooooooooong_name", i);
lfs_dir_read(&lfs, &dir, &info) => 1;
strcmp(info.name, path) => 0;
info.size => 2;
sprintf(path, "child/test%03d_loooooooooooooooooong_name", i);
lfs_file_open(&lfs, &file, path, LFS_O_WRONLY) => 0;
lfs_file_write(&lfs, &file, "hi", 2) => 2;
lfs_file_close(&lfs, &file) => 0;
}
lfs_dir_read(&lfs, &dir, &info) => 0;
lfs_dir_rewind(&lfs, &dir) => 0;
lfs_dir_read(&lfs, &dir, &info) => 1;
lfs_dir_read(&lfs, &dir, &info) => 1;
for (int i = 0; i < COUNT; i++) {
sprintf(path, "test%03d_loooooooooooooooooong_name", i);
lfs_dir_read(&lfs, &dir, &info) => 1;
strcmp(info.name, path) => 0;
info.size => 2;
}
lfs_dir_read(&lfs, &dir, &info) => 0;
lfs_dir_close(&lfs, &dir) => 0;
for (int i = 0; i < COUNT; i++) {
sprintf(path, "child/test%03d_loooooooooooooooooong_name", i);
lfs_remove(&lfs, path) => 0;
}
}
lfs_unmount(&lfs) => 0;
'''
[[case]] # reentrant testing for relocations; this is the same as the
# orphan testing, except here we also set block_cycles so that
# almost every tree operation needs a relocation (a config sketch
# follows this case)
reentrant = true
# TODO fix this case, caused by non-DAG trees
#if = '!(DEPTH == 3 && LFS_CACHE_SIZE != 64)'
define = [
{FILES=6, DEPTH=1, CYCLES=20, LFS_BLOCK_CYCLES=1},
{FILES=26, DEPTH=1, CYCLES=20, LFS_BLOCK_CYCLES=1},
{FILES=3, DEPTH=3, CYCLES=20, LFS_BLOCK_CYCLES=1},
]
code = '''
err = lfs_mount(&lfs, &cfg);
if (err) {
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
}
srand(1);
const char alpha[] = "abcdefghijklmnopqrstuvwxyz";
for (int i = 0; i < CYCLES; i++) {
// create random path
char full_path[256];
for (int d = 0; d < DEPTH; d++) {
sprintf(&full_path[2*d], "/%c", alpha[rand() % FILES]);
}
// if it does not exist, create it, otherwise remove it
int res = lfs_stat(&lfs, full_path, &info);
if (res == LFS_ERR_NOENT) {
// create each directory in turn, ignore if dir already exists
for (int d = 0; d < DEPTH; d++) {
strcpy(path, full_path);
path[2*d+2] = '\0';
err = lfs_mkdir(&lfs, path);
assert(!err || err == LFS_ERR_EXIST);
}
for (int d = 0; d < DEPTH; d++) {
strcpy(path, full_path);
path[2*d+2] = '\0';
lfs_stat(&lfs, path, &info) => 0;
assert(strcmp(info.name, &path[2*d+1]) == 0);
assert(info.type == LFS_TYPE_DIR);
}
} else {
// is valid dir?
assert(strcmp(info.name, &full_path[2*(DEPTH-1)+1]) == 0);
assert(info.type == LFS_TYPE_DIR);
// try to delete path in reverse order, ignore if dir is not empty
for (int d = DEPTH-1; d >= 0; d--) {
strcpy(path, full_path);
path[2*d+2] = '\0';
err = lfs_remove(&lfs, path);
assert(!err || err == LFS_ERR_NOTEMPTY);
}
lfs_stat(&lfs, full_path, &info) => LFS_ERR_NOENT;
}
}
lfs_unmount(&lfs) => 0;
'''
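A minimal sketch (not part of this patch) of the lfs_config plumbing that the LFS_BLOCK_CYCLES define above presumably maps onto. The block-device callbacks (user_read, user_prog, user_erase, user_sync) and the geometry numbers are placeholders a real port would provide; the point is that block_cycles = 1 forces a metadata relocation on nearly every commit, which is exactly what these cases stress.

#include "lfs.h"

// user-provided block-device callbacks (placeholders, not defined here)
extern int user_read(const struct lfs_config *c, lfs_block_t block,
        lfs_off_t off, void *buffer, lfs_size_t size);
extern int user_prog(const struct lfs_config *c, lfs_block_t block,
        lfs_off_t off, const void *buffer, lfs_size_t size);
extern int user_erase(const struct lfs_config *c, lfs_block_t block);
extern int user_sync(const struct lfs_config *c);

static const struct lfs_config example_cfg = {
    .read  = user_read,
    .prog  = user_prog,
    .erase = user_erase,
    .sync  = user_sync,

    // illustrative geometry, not the values used by the test runner
    .read_size      = 16,
    .prog_size      = 16,
    .block_size     = 512,
    .block_count    = 1024,
    .cache_size     = 64,
    .lookahead_size = 16,

    // erase cycles before a metadata pair is relocated for wear leveling;
    // 1 makes almost every metadata commit relocate, as in the cases above
    .block_cycles   = 1,
};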
[[case]] # reentrant testing for relocations, but now with random renames!
reentrant = true
# TODO fix this case, caused by non-DAG trees
#if = '!(DEPTH == 3 && LFS_CACHE_SIZE != 64)'
define = [
{FILES=6, DEPTH=1, CYCLES=20, LFS_BLOCK_CYCLES=1},
{FILES=26, DEPTH=1, CYCLES=20, LFS_BLOCK_CYCLES=1},
{FILES=3, DEPTH=3, CYCLES=20, LFS_BLOCK_CYCLES=1},
]
code = '''
err = lfs_mount(&lfs, &cfg);
if (err) {
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
}
srand(1);
const char alpha[] = "abcdefghijklmnopqrstuvwxyz";
for (int i = 0; i < CYCLES; i++) {
// create random path
char full_path[256];
for (int d = 0; d < DEPTH; d++) {
sprintf(&full_path[2*d], "/%c", alpha[rand() % FILES]);
}
// if it does not exist, create it, otherwise rename or remove it
int res = lfs_stat(&lfs, full_path, &info);
assert(!res || res == LFS_ERR_NOENT);
if (res == LFS_ERR_NOENT) {
// create each directory in turn, ignore if dir already exists
for (int d = 0; d < DEPTH; d++) {
strcpy(path, full_path);
path[2*d+2] = '\0';
err = lfs_mkdir(&lfs, path);
assert(!err || err == LFS_ERR_EXIST);
}
for (int d = 0; d < DEPTH; d++) {
strcpy(path, full_path);
path[2*d+2] = '\0';
lfs_stat(&lfs, path, &info) => 0;
assert(strcmp(info.name, &path[2*d+1]) == 0);
assert(info.type == LFS_TYPE_DIR);
}
} else {
assert(strcmp(info.name, &full_path[2*(DEPTH-1)+1]) == 0);
assert(info.type == LFS_TYPE_DIR);
// create new random path
char new_path[256];
for (int d = 0; d < DEPTH; d++) {
sprintf(&new_path[2*d], "/%c", alpha[rand() % FILES]);
}
// if new path does not exist, rename, otherwise destroy
res = lfs_stat(&lfs, new_path, &info);
assert(!res || res == LFS_ERR_NOENT);
if (res == LFS_ERR_NOENT) {
// walk the path down, switching to the new prefix once a dir is renamed
for (int d = 0; d < DEPTH; d++) {
strcpy(&path[2*d], &full_path[2*d]);
path[2*d+2] = '\0';
strcpy(&path[128+2*d], &new_path[2*d]);
path[128+2*d+2] = '\0';
err = lfs_rename(&lfs, path, path+128);
assert(!err || err == LFS_ERR_NOTEMPTY);
if (!err) {
strcpy(path, path+128);
}
}
for (int d = 0; d < DEPTH; d++) {
strcpy(path, new_path);
path[2*d+2] = '\0';
lfs_stat(&lfs, path, &info) => 0;
assert(strcmp(info.name, &path[2*d+1]) == 0);
assert(info.type == LFS_TYPE_DIR);
}
lfs_stat(&lfs, full_path, &info) => LFS_ERR_NOENT;
} else {
// try to delete path in reverse order,
// ignore if dir is not empty
for (int d = DEPTH-1; d >= 0; d--) {
strcpy(path, full_path);
path[2*d+2] = '\0';
err = lfs_remove(&lfs, path);
assert(!err || err == LFS_ERR_NOTEMPTY);
}
lfs_stat(&lfs, full_path, &info) => LFS_ERR_NOENT;
}
}
}
lfs_unmount(&lfs) => 0;
'''


@@ -1,429 +0,0 @@
#!/bin/bash
set -eu
SMALLSIZE=4
MEDIUMSIZE=128
LARGESIZE=132
echo "=== Seek tests ==="
rm -rf blocks
tests/test.py << TEST
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_mkdir(&lfs, "hello") => 0;
for (int i = 0; i < $LARGESIZE; i++) {
sprintf((char*)buffer, "hello/kitty%03d", i);
lfs_file_open(&lfs, &file[0], (char*)buffer,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_APPEND) => 0;
size = strlen("kittycatcat");
memcpy(buffer, "kittycatcat", size);
for (int j = 0; j < $LARGESIZE; j++) {
lfs_file_write(&lfs, &file[0], buffer, size);
}
lfs_file_close(&lfs, &file[0]) => 0;
}
lfs_unmount(&lfs) => 0;
TEST
echo "--- Simple dir seek ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_dir_open(&lfs, &dir[0], "hello") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
lfs_soff_t pos;
int i;
for (i = 0; i < $SMALLSIZE; i++) {
sprintf((char*)buffer, "kitty%03d", i);
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, (char*)buffer) => 0;
pos = lfs_dir_tell(&lfs, &dir[0]);
}
pos >= 0 => 1;
lfs_dir_seek(&lfs, &dir[0], pos) => 0;
sprintf((char*)buffer, "kitty%03d", i);
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, (char*)buffer) => 0;
lfs_dir_rewind(&lfs, &dir[0]) => 0;
sprintf((char*)buffer, "kitty%03d", 0);
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, (char*)buffer) => 0;
lfs_dir_seek(&lfs, &dir[0], pos) => 0;
sprintf((char*)buffer, "kitty%03d", i);
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, (char*)buffer) => 0;
lfs_dir_close(&lfs, &dir[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Large dir seek ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_dir_open(&lfs, &dir[0], "hello") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
lfs_soff_t pos;
int i;
for (i = 0; i < $MEDIUMSIZE; i++) {
sprintf((char*)buffer, "kitty%03d", i);
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, (char*)buffer) => 0;
pos = lfs_dir_tell(&lfs, &dir[0]);
}
pos >= 0 => 1;
lfs_dir_seek(&lfs, &dir[0], pos) => 0;
sprintf((char*)buffer, "kitty%03d", i);
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, (char*)buffer) => 0;
lfs_dir_rewind(&lfs, &dir[0]) => 0;
sprintf((char*)buffer, "kitty%03d", 0);
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, (char*)buffer) => 0;
lfs_dir_seek(&lfs, &dir[0], pos) => 0;
sprintf((char*)buffer, "kitty%03d", i);
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, (char*)buffer) => 0;
lfs_dir_close(&lfs, &dir[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Simple file seek ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file[0], "hello/kitty042", LFS_O_RDONLY) => 0;
lfs_soff_t pos;
size = strlen("kittycatcat");
for (int i = 0; i < $SMALLSIZE; i++) {
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "kittycatcat", size) => 0;
pos = lfs_file_tell(&lfs, &file[0]);
}
pos >= 0 => 1;
lfs_file_seek(&lfs, &file[0], pos, LFS_SEEK_SET) => pos;
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "kittycatcat", size) => 0;
lfs_file_rewind(&lfs, &file[0]) => 0;
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "kittycatcat", size) => 0;
lfs_file_seek(&lfs, &file[0], 0, LFS_SEEK_CUR) => size;
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "kittycatcat", size) => 0;
lfs_file_seek(&lfs, &file[0], size, LFS_SEEK_CUR) => 3*size;
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "kittycatcat", size) => 0;
lfs_file_seek(&lfs, &file[0], pos, LFS_SEEK_SET) => pos;
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "kittycatcat", size) => 0;
lfs_file_seek(&lfs, &file[0], -size, LFS_SEEK_CUR) => pos;
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "kittycatcat", size) => 0;
lfs_file_seek(&lfs, &file[0], -size, LFS_SEEK_END) >= 0 => 1;
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "kittycatcat", size) => 0;
size = lfs_file_size(&lfs, &file[0]);
lfs_file_seek(&lfs, &file[0], 0, LFS_SEEK_CUR) => size;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Large file seek ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file[0], "hello/kitty042", LFS_O_RDONLY) => 0;
lfs_soff_t pos;
size = strlen("kittycatcat");
for (int i = 0; i < $MEDIUMSIZE; i++) {
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "kittycatcat", size) => 0;
pos = lfs_file_tell(&lfs, &file[0]);
}
pos >= 0 => 1;
lfs_file_seek(&lfs, &file[0], pos, LFS_SEEK_SET) => pos;
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "kittycatcat", size) => 0;
lfs_file_rewind(&lfs, &file[0]) => 0;
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "kittycatcat", size) => 0;
lfs_file_seek(&lfs, &file[0], 0, LFS_SEEK_CUR) => size;
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "kittycatcat", size) => 0;
lfs_file_seek(&lfs, &file[0], size, LFS_SEEK_CUR) => 3*size;
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "kittycatcat", size) => 0;
lfs_file_seek(&lfs, &file[0], pos, LFS_SEEK_SET) => pos;
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "kittycatcat", size) => 0;
lfs_file_seek(&lfs, &file[0], -size, LFS_SEEK_CUR) => pos;
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "kittycatcat", size) => 0;
lfs_file_seek(&lfs, &file[0], -size, LFS_SEEK_END) >= 0 => 1;
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "kittycatcat", size) => 0;
size = lfs_file_size(&lfs, &file[0]);
lfs_file_seek(&lfs, &file[0], 0, LFS_SEEK_CUR) => size;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Simple file seek and write ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file[0], "hello/kitty042", LFS_O_RDWR) => 0;
lfs_soff_t pos;
size = strlen("kittycatcat");
for (int i = 0; i < $SMALLSIZE; i++) {
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "kittycatcat", size) => 0;
pos = lfs_file_tell(&lfs, &file[0]);
}
pos >= 0 => 1;
memcpy(buffer, "doggodogdog", size);
lfs_file_seek(&lfs, &file[0], pos, LFS_SEEK_SET) => pos;
lfs_file_write(&lfs, &file[0], buffer, size) => size;
lfs_file_seek(&lfs, &file[0], pos, LFS_SEEK_SET) => pos;
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "doggodogdog", size) => 0;
lfs_file_rewind(&lfs, &file[0]) => 0;
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "kittycatcat", size) => 0;
lfs_file_seek(&lfs, &file[0], pos, LFS_SEEK_SET) => pos;
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "doggodogdog", size) => 0;
lfs_file_seek(&lfs, &file[0], -size, LFS_SEEK_END) >= 0 => 1;
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "kittycatcat", size) => 0;
size = lfs_file_size(&lfs, &file[0]);
lfs_file_seek(&lfs, &file[0], 0, LFS_SEEK_CUR) => size;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Large file seek and write ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file[0], "hello/kitty042", LFS_O_RDWR) => 0;
lfs_soff_t pos;
size = strlen("kittycatcat");
for (int i = 0; i < $MEDIUMSIZE; i++) {
lfs_file_read(&lfs, &file[0], buffer, size) => size;
if (i != $SMALLSIZE) {
memcmp(buffer, "kittycatcat", size) => 0;
}
pos = lfs_file_tell(&lfs, &file[0]);
}
pos >= 0 => 1;
memcpy(buffer, "doggodogdog", size);
lfs_file_seek(&lfs, &file[0], pos, LFS_SEEK_SET) => pos;
lfs_file_write(&lfs, &file[0], buffer, size) => size;
lfs_file_seek(&lfs, &file[0], pos, LFS_SEEK_SET) => pos;
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "doggodogdog", size) => 0;
lfs_file_rewind(&lfs, &file[0]) => 0;
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "kittycatcat", size) => 0;
lfs_file_seek(&lfs, &file[0], pos, LFS_SEEK_SET) => pos;
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "doggodogdog", size) => 0;
lfs_file_seek(&lfs, &file[0], -size, LFS_SEEK_END) >= 0 => 1;
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "kittycatcat", size) => 0;
size = lfs_file_size(&lfs, &file[0]);
lfs_file_seek(&lfs, &file[0], 0, LFS_SEEK_CUR) => size;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Boundary seek and write ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file[0], "hello/kitty042", LFS_O_RDWR) => 0;
size = strlen("hedgehoghog");
const lfs_soff_t offsets[] = {512, 1020, 513, 1021, 511, 1019};
for (unsigned i = 0; i < sizeof(offsets) / sizeof(offsets[0]); i++) {
lfs_soff_t off = offsets[i];
memcpy(buffer, "hedgehoghog", size);
lfs_file_seek(&lfs, &file[0], off, LFS_SEEK_SET) => off;
lfs_file_write(&lfs, &file[0], buffer, size) => size;
lfs_file_seek(&lfs, &file[0], off, LFS_SEEK_SET) => off;
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "hedgehoghog", size) => 0;
lfs_file_seek(&lfs, &file[0], 0, LFS_SEEK_SET) => 0;
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "kittycatcat", size) => 0;
lfs_file_sync(&lfs, &file[0]) => 0;
}
lfs_file_close(&lfs, &file[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Out-of-bounds seek ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file[0], "hello/kitty042", LFS_O_RDWR) => 0;
size = strlen("kittycatcat");
lfs_file_size(&lfs, &file[0]) => $LARGESIZE*size;
lfs_file_seek(&lfs, &file[0], ($LARGESIZE+$SMALLSIZE)*size,
LFS_SEEK_SET) => ($LARGESIZE+$SMALLSIZE)*size;
lfs_file_read(&lfs, &file[0], buffer, size) => 0;
memcpy(buffer, "porcupineee", size);
lfs_file_write(&lfs, &file[0], buffer, size) => size;
lfs_file_seek(&lfs, &file[0], ($LARGESIZE+$SMALLSIZE)*size,
LFS_SEEK_SET) => ($LARGESIZE+$SMALLSIZE)*size;
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "porcupineee", size) => 0;
lfs_file_seek(&lfs, &file[0], $LARGESIZE*size,
LFS_SEEK_SET) => $LARGESIZE*size;
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "\0\0\0\0\0\0\0\0\0\0\0", size) => 0;
lfs_file_seek(&lfs, &file[0], -(($LARGESIZE+$SMALLSIZE)*size),
LFS_SEEK_CUR) => LFS_ERR_INVAL;
lfs_file_tell(&lfs, &file[0]) => ($LARGESIZE+1)*size;
lfs_file_seek(&lfs, &file[0], -(($LARGESIZE+2*$SMALLSIZE)*size),
LFS_SEEK_END) => LFS_ERR_INVAL;
lfs_file_tell(&lfs, &file[0]) => ($LARGESIZE+1)*size;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Inline write and seek ---"
for SIZE in $SMALLSIZE $MEDIUMSIZE $LARGESIZE
do
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file[0], "hello/tinykitty$SIZE",
LFS_O_RDWR | LFS_O_CREAT) => 0;
int j = 0;
int k = 0;
memcpy(buffer, "abcdefghijklmnopqrstuvwxyz", 26);
for (unsigned i = 0; i < $SIZE; i++) {
lfs_file_write(&lfs, &file[0], &buffer[j++ % 26], 1) => 1;
lfs_file_tell(&lfs, &file[0]) => i+1;
lfs_file_size(&lfs, &file[0]) => i+1;
}
lfs_file_seek(&lfs, &file[0], 0, LFS_SEEK_SET) => 0;
lfs_file_tell(&lfs, &file[0]) => 0;
lfs_file_size(&lfs, &file[0]) => $SIZE;
for (unsigned i = 0; i < $SIZE; i++) {
uint8_t c;
lfs_file_read(&lfs, &file[0], &c, 1) => 1;
c => buffer[k++ % 26];
}
lfs_file_sync(&lfs, &file[0]) => 0;
lfs_file_tell(&lfs, &file[0]) => $SIZE;
lfs_file_size(&lfs, &file[0]) => $SIZE;
lfs_file_seek(&lfs, &file[0], 0, LFS_SEEK_SET) => 0;
for (unsigned i = 0; i < $SIZE; i++) {
lfs_file_write(&lfs, &file[0], &buffer[j++ % 26], 1) => 1;
lfs_file_tell(&lfs, &file[0]) => i+1;
lfs_file_size(&lfs, &file[0]) => $SIZE;
lfs_file_sync(&lfs, &file[0]) => 0;
lfs_file_tell(&lfs, &file[0]) => i+1;
lfs_file_size(&lfs, &file[0]) => $SIZE;
if (i < $SIZE-2) {
uint8_t c[3];
lfs_file_seek(&lfs, &file[0], -1, LFS_SEEK_CUR) => i;
lfs_file_read(&lfs, &file[0], &c, 3) => 3;
lfs_file_tell(&lfs, &file[0]) => i+3;
lfs_file_size(&lfs, &file[0]) => $SIZE;
lfs_file_seek(&lfs, &file[0], i+1, LFS_SEEK_SET) => i+1;
lfs_file_tell(&lfs, &file[0]) => i+1;
lfs_file_size(&lfs, &file[0]) => $SIZE;
}
}
lfs_file_seek(&lfs, &file[0], 0, LFS_SEEK_SET) => 0;
lfs_file_tell(&lfs, &file[0]) => 0;
lfs_file_size(&lfs, &file[0]) => $SIZE;
for (unsigned i = 0; i < $SIZE; i++) {
uint8_t c;
lfs_file_read(&lfs, &file[0], &c, 1) => 1;
c => buffer[k++ % 26];
}
lfs_file_sync(&lfs, &file[0]) => 0;
lfs_file_tell(&lfs, &file[0]) => $SIZE;
lfs_file_size(&lfs, &file[0]) => $SIZE;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
done
echo "--- Results ---"
tests/stats.py

380
tests/test_seek.toml Normal file

@@ -0,0 +1,380 @@
[[case]] # simple file seek
define = [
{COUNT=132, SKIP=4},
{COUNT=132, SKIP=128},
{COUNT=200, SKIP=10},
{COUNT=200, SKIP=100},
{COUNT=4, SKIP=1},
{COUNT=4, SKIP=2},
]
code = '''
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file, "kitty",
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_APPEND) => 0;
size = strlen("kittycatcat");
memcpy(buffer, "kittycatcat", size);
for (int j = 0; j < COUNT; j++) {
lfs_file_write(&lfs, &file, buffer, size);
}
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file, "kitty", LFS_O_RDONLY) => 0;
lfs_soff_t pos = -1;
size = strlen("kittycatcat");
for (int i = 0; i < SKIP; i++) {
lfs_file_read(&lfs, &file, buffer, size) => size;
memcmp(buffer, "kittycatcat", size) => 0;
pos = lfs_file_tell(&lfs, &file);
}
assert(pos >= 0);
lfs_file_seek(&lfs, &file, pos, LFS_SEEK_SET) => pos;
lfs_file_read(&lfs, &file, buffer, size) => size;
memcmp(buffer, "kittycatcat", size) => 0;
lfs_file_rewind(&lfs, &file) => 0;
lfs_file_read(&lfs, &file, buffer, size) => size;
memcmp(buffer, "kittycatcat", size) => 0;
lfs_file_seek(&lfs, &file, 0, LFS_SEEK_CUR) => size;
lfs_file_read(&lfs, &file, buffer, size) => size;
memcmp(buffer, "kittycatcat", size) => 0;
lfs_file_seek(&lfs, &file, size, LFS_SEEK_CUR) => 3*size;
lfs_file_read(&lfs, &file, buffer, size) => size;
memcmp(buffer, "kittycatcat", size) => 0;
lfs_file_seek(&lfs, &file, pos, LFS_SEEK_SET) => pos;
lfs_file_read(&lfs, &file, buffer, size) => size;
memcmp(buffer, "kittycatcat", size) => 0;
lfs_file_seek(&lfs, &file, -size, LFS_SEEK_CUR) => pos;
lfs_file_read(&lfs, &file, buffer, size) => size;
memcmp(buffer, "kittycatcat", size) => 0;
lfs_file_seek(&lfs, &file, -size, LFS_SEEK_END) >= 0 => 1;
lfs_file_read(&lfs, &file, buffer, size) => size;
memcmp(buffer, "kittycatcat", size) => 0;
size = lfs_file_size(&lfs, &file);
lfs_file_seek(&lfs, &file, 0, LFS_SEEK_CUR) => size;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
'''
[[case]] # simple file seek and write
define = [
{COUNT=132, SKIP=4},
{COUNT=132, SKIP=128},
{COUNT=200, SKIP=10},
{COUNT=200, SKIP=100},
{COUNT=4, SKIP=1},
{COUNT=4, SKIP=2},
]
code = '''
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file, "kitty",
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_APPEND) => 0;
size = strlen("kittycatcat");
memcpy(buffer, "kittycatcat", size);
for (int j = 0; j < COUNT; j++) {
lfs_file_write(&lfs, &file, buffer, size);
}
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file, "kitty", LFS_O_RDWR) => 0;
lfs_soff_t pos = -1;
size = strlen("kittycatcat");
for (int i = 0; i < SKIP; i++) {
lfs_file_read(&lfs, &file, buffer, size) => size;
memcmp(buffer, "kittycatcat", size) => 0;
pos = lfs_file_tell(&lfs, &file);
}
assert(pos >= 0);
memcpy(buffer, "doggodogdog", size);
lfs_file_seek(&lfs, &file, pos, LFS_SEEK_SET) => pos;
lfs_file_write(&lfs, &file, buffer, size) => size;
lfs_file_seek(&lfs, &file, pos, LFS_SEEK_SET) => pos;
lfs_file_read(&lfs, &file, buffer, size) => size;
memcmp(buffer, "doggodogdog", size) => 0;
lfs_file_rewind(&lfs, &file) => 0;
lfs_file_read(&lfs, &file, buffer, size) => size;
memcmp(buffer, "kittycatcat", size) => 0;
lfs_file_seek(&lfs, &file, pos, LFS_SEEK_SET) => pos;
lfs_file_read(&lfs, &file, buffer, size) => size;
memcmp(buffer, "doggodogdog", size) => 0;
lfs_file_seek(&lfs, &file, -size, LFS_SEEK_END) >= 0 => 1;
lfs_file_read(&lfs, &file, buffer, size) => size;
memcmp(buffer, "kittycatcat", size) => 0;
size = lfs_file_size(&lfs, &file);
lfs_file_seek(&lfs, &file, 0, LFS_SEEK_CUR) => size;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
'''
[[case]] # boundary seek and writes
define.COUNT = 132
define.OFFSETS = '"{512, 1020, 513, 1021, 511, 1019, 1441}"'
code = '''
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file, "kitty",
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_APPEND) => 0;
size = strlen("kittycatcat");
memcpy(buffer, "kittycatcat", size);
for (int j = 0; j < COUNT; j++) {
lfs_file_write(&lfs, &file, buffer, size);
}
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file, "kitty", LFS_O_RDWR) => 0;
size = strlen("hedgehoghog");
const lfs_soff_t offsets[] = OFFSETS;
for (unsigned i = 0; i < sizeof(offsets) / sizeof(offsets[0]); i++) {
lfs_soff_t off = offsets[i];
memcpy(buffer, "hedgehoghog", size);
lfs_file_seek(&lfs, &file, off, LFS_SEEK_SET) => off;
lfs_file_write(&lfs, &file, buffer, size) => size;
lfs_file_seek(&lfs, &file, off, LFS_SEEK_SET) => off;
lfs_file_read(&lfs, &file, buffer, size) => size;
memcmp(buffer, "hedgehoghog", size) => 0;
lfs_file_seek(&lfs, &file, 0, LFS_SEEK_SET) => 0;
lfs_file_read(&lfs, &file, buffer, size) => size;
memcmp(buffer, "kittycatcat", size) => 0;
lfs_file_seek(&lfs, &file, off, LFS_SEEK_SET) => off;
lfs_file_read(&lfs, &file, buffer, size) => size;
memcmp(buffer, "hedgehoghog", size) => 0;
lfs_file_sync(&lfs, &file) => 0;
lfs_file_seek(&lfs, &file, 0, LFS_SEEK_SET) => 0;
lfs_file_read(&lfs, &file, buffer, size) => size;
memcmp(buffer, "kittycatcat", size) => 0;
lfs_file_seek(&lfs, &file, off, LFS_SEEK_SET) => off;
lfs_file_read(&lfs, &file, buffer, size) => size;
memcmp(buffer, "hedgehoghog", size) => 0;
}
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
'''
[[case]] # out of bounds seek
define = [
{COUNT=132, SKIP=4},
{COUNT=132, SKIP=128},
{COUNT=200, SKIP=10},
{COUNT=200, SKIP=100},
{COUNT=4, SKIP=2},
{COUNT=4, SKIP=3},
]
code = '''
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file, "kitty",
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_APPEND) => 0;
size = strlen("kittycatcat");
memcpy(buffer, "kittycatcat", size);
for (int j = 0; j < COUNT; j++) {
lfs_file_write(&lfs, &file, buffer, size);
}
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file, "kitty", LFS_O_RDWR) => 0;
size = strlen("kittycatcat");
lfs_file_size(&lfs, &file) => COUNT*size;
lfs_file_seek(&lfs, &file, (COUNT+SKIP)*size,
LFS_SEEK_SET) => (COUNT+SKIP)*size;
lfs_file_read(&lfs, &file, buffer, size) => 0;
memcpy(buffer, "porcupineee", size);
lfs_file_write(&lfs, &file, buffer, size) => size;
lfs_file_seek(&lfs, &file, (COUNT+SKIP)*size,
LFS_SEEK_SET) => (COUNT+SKIP)*size;
lfs_file_read(&lfs, &file, buffer, size) => size;
memcmp(buffer, "porcupineee", size) => 0;
lfs_file_seek(&lfs, &file, COUNT*size,
LFS_SEEK_SET) => COUNT*size;
lfs_file_read(&lfs, &file, buffer, size) => size;
memcmp(buffer, "\0\0\0\0\0\0\0\0\0\0\0", size) => 0;
lfs_file_seek(&lfs, &file, -((COUNT+SKIP)*size),
LFS_SEEK_CUR) => LFS_ERR_INVAL;
lfs_file_tell(&lfs, &file) => (COUNT+1)*size;
lfs_file_seek(&lfs, &file, -((COUNT+2*SKIP)*size),
LFS_SEEK_END) => LFS_ERR_INVAL;
lfs_file_tell(&lfs, &file) => (COUNT+1)*size;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
'''
[[case]] # inline write and seek
define.SIZE = [2, 4, 128, 132]
code = '''
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file, "tinykitty",
LFS_O_RDWR | LFS_O_CREAT) => 0;
int j = 0;
int k = 0;
memcpy(buffer, "abcdefghijklmnopqrstuvwxyz", 26);
for (unsigned i = 0; i < SIZE; i++) {
lfs_file_write(&lfs, &file, &buffer[j++ % 26], 1) => 1;
lfs_file_tell(&lfs, &file) => i+1;
lfs_file_size(&lfs, &file) => i+1;
}
lfs_file_seek(&lfs, &file, 0, LFS_SEEK_SET) => 0;
lfs_file_tell(&lfs, &file) => 0;
lfs_file_size(&lfs, &file) => SIZE;
for (unsigned i = 0; i < SIZE; i++) {
uint8_t c;
lfs_file_read(&lfs, &file, &c, 1) => 1;
c => buffer[k++ % 26];
}
lfs_file_sync(&lfs, &file) => 0;
lfs_file_tell(&lfs, &file) => SIZE;
lfs_file_size(&lfs, &file) => SIZE;
lfs_file_seek(&lfs, &file, 0, LFS_SEEK_SET) => 0;
for (unsigned i = 0; i < SIZE; i++) {
lfs_file_write(&lfs, &file, &buffer[j++ % 26], 1) => 1;
lfs_file_tell(&lfs, &file) => i+1;
lfs_file_size(&lfs, &file) => SIZE;
lfs_file_sync(&lfs, &file) => 0;
lfs_file_tell(&lfs, &file) => i+1;
lfs_file_size(&lfs, &file) => SIZE;
if (i < SIZE-2) {
uint8_t c[3];
lfs_file_seek(&lfs, &file, -1, LFS_SEEK_CUR) => i;
lfs_file_read(&lfs, &file, &c, 3) => 3;
lfs_file_tell(&lfs, &file) => i+3;
lfs_file_size(&lfs, &file) => SIZE;
lfs_file_seek(&lfs, &file, i+1, LFS_SEEK_SET) => i+1;
lfs_file_tell(&lfs, &file) => i+1;
lfs_file_size(&lfs, &file) => SIZE;
}
}
lfs_file_seek(&lfs, &file, 0, LFS_SEEK_SET) => 0;
lfs_file_tell(&lfs, &file) => 0;
lfs_file_size(&lfs, &file) => SIZE;
for (unsigned i = 0; i < SIZE; i++) {
uint8_t c;
lfs_file_read(&lfs, &file, &c, 1) => 1;
c => buffer[k++ % 26];
}
lfs_file_sync(&lfs, &file) => 0;
lfs_file_tell(&lfs, &file) => SIZE;
lfs_file_size(&lfs, &file) => SIZE;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
'''
[[case]] # file seek and write with power-loss
# COUNT must be a power of 2 for the quadratic probing below to be
# exhaustive (a coverage check follows this case)
define.COUNT = [4, 64, 128]
reentrant = true
code = '''
err = lfs_mount(&lfs, &cfg);
if (err) {
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
}
err = lfs_file_open(&lfs, &file, "kitty", LFS_O_RDONLY);
assert(!err || err == LFS_ERR_NOENT);
if (!err) {
if (lfs_file_size(&lfs, &file) != 0) {
lfs_file_size(&lfs, &file) => 11*COUNT;
for (int j = 0; j < COUNT; j++) {
memset(buffer, 0, 11+1);
lfs_file_read(&lfs, &file, buffer, 11) => 11;
assert(memcmp(buffer, "kittycatcat", 11) == 0 ||
memcmp(buffer, "doggodogdog", 11) == 0);
}
}
lfs_file_close(&lfs, &file) => 0;
}
lfs_file_open(&lfs, &file, "kitty", LFS_O_WRONLY | LFS_O_CREAT) => 0;
if (lfs_file_size(&lfs, &file) == 0) {
for (int j = 0; j < COUNT; j++) {
strcpy((char*)buffer, "kittycatcat");
size = strlen((char*)buffer);
lfs_file_write(&lfs, &file, buffer, size) => size;
}
}
lfs_file_close(&lfs, &file) => 0;
strcpy((char*)buffer, "doggodogdog");
size = strlen((char*)buffer);
lfs_file_open(&lfs, &file, "kitty", LFS_O_RDWR) => 0;
lfs_file_size(&lfs, &file) => COUNT*size;
// seek and write using quadratic probing to touch all
// 11-byte words in the file
lfs_off_t off = 0;
for (int j = 0; j < COUNT; j++) {
off = (5*off + 1) % COUNT;
lfs_file_seek(&lfs, &file, off*size, LFS_SEEK_SET) => off*size;
lfs_file_read(&lfs, &file, buffer, size) => size;
assert(memcmp(buffer, "kittycatcat", size) == 0 ||
memcmp(buffer, "doggodogdog", size) == 0);
if (memcmp(buffer, "doggodogdog", size) != 0) {
lfs_file_seek(&lfs, &file, off*size, LFS_SEEK_SET) => off*size;
strcpy((char*)buffer, "doggodogdog");
lfs_file_write(&lfs, &file, buffer, size) => size;
lfs_file_seek(&lfs, &file, off*size, LFS_SEEK_SET) => off*size;
lfs_file_read(&lfs, &file, buffer, size) => size;
assert(memcmp(buffer, "doggodogdog", size) == 0);
lfs_file_sync(&lfs, &file) => 0;
lfs_file_seek(&lfs, &file, off*size, LFS_SEEK_SET) => off*size;
lfs_file_read(&lfs, &file, buffer, size) => size;
assert(memcmp(buffer, "doggodogdog", size) == 0);
}
}
lfs_file_close(&lfs, &file) => 0;
lfs_file_open(&lfs, &file, "kitty", LFS_O_RDWR) => 0;
lfs_file_size(&lfs, &file) => COUNT*size;
for (int j = 0; j < COUNT; j++) {
lfs_file_read(&lfs, &file, buffer, size) => size;
assert(memcmp(buffer, "doggodogdog", size) == 0);
}
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
'''
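The power-loss case above depends on the probe sequence off = (5*off + 1) % COUNT visiting every offset exactly once, which only holds when COUNT is a power of two (the increment 1 is coprime to 2^k, and 5-1 is divisible by 2 and by 4). A quick standalone check of that property; plain C, nothing littlefs-specific, and not part of the test suite:

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

int main(void) {
    // check every power-of-2 count used by the test (and then some)
    for (int count = 1; count <= 128; count *= 2) {
        bool seen[128] = {false};
        int off = 0;
        for (int j = 0; j < count; j++) {
            off = (5*off + 1) % count;
            assert(!seen[off]); // no slot revisited before wrap-around
            seen[off] = true;
        }
        for (int i = 0; i < count; i++) {
            assert(seen[i]);    // every slot was touched
        }
        printf("count=%3d: full coverage\n", count);
    }
    return 0;
}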

127
tests/test_superblocks.toml Normal file

@@ -0,0 +1,127 @@
[[case]] # simple formatting test
code = '''
lfs_format(&lfs, &cfg) => 0;
'''
[[case]] # mount/unmount
code = '''
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_unmount(&lfs) => 0;
'''
[[case]] # reentrant format
reentrant = true
code = '''
err = lfs_mount(&lfs, &cfg);
if (err) {
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
}
lfs_unmount(&lfs) => 0;
'''
[[case]] # invalid mount
code = '''
lfs_mount(&lfs, &cfg) => LFS_ERR_CORRUPT;
'''
[[case]] # expanding superblock
define.LFS_BLOCK_CYCLES = [32, 33, 1]
define.N = [10, 100, 1000]
code = '''
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
for (int i = 0; i < N; i++) {
lfs_file_open(&lfs, &file, "dummy",
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_EXCL) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_stat(&lfs, "dummy", &info) => 0;
assert(strcmp(info.name, "dummy") == 0);
assert(info.type == LFS_TYPE_REG);
lfs_remove(&lfs, "dummy") => 0;
}
lfs_unmount(&lfs) => 0;
// one last check after power-cycle
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file, "dummy",
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_EXCL) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_stat(&lfs, "dummy", &info) => 0;
assert(strcmp(info.name, "dummy") == 0);
assert(info.type == LFS_TYPE_REG);
lfs_unmount(&lfs) => 0;
'''
[[case]] # expanding superblock with power cycle
define.LFS_BLOCK_CYCLES = [32, 33, 1]
define.N = [10, 100, 1000]
code = '''
lfs_format(&lfs, &cfg) => 0;
for (int i = 0; i < N; i++) {
lfs_mount(&lfs, &cfg) => 0;
// remove lingering dummy?
err = lfs_stat(&lfs, "dummy", &info);
assert(err == 0 || (err == LFS_ERR_NOENT && i == 0));
if (!err) {
assert(strcmp(info.name, "dummy") == 0);
assert(info.type == LFS_TYPE_REG);
lfs_remove(&lfs, "dummy") => 0;
}
lfs_file_open(&lfs, &file, "dummy",
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_EXCL) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_stat(&lfs, "dummy", &info) => 0;
assert(strcmp(info.name, "dummy") == 0);
assert(info.type == LFS_TYPE_REG);
lfs_unmount(&lfs) => 0;
}
// one last check after power-cycle
lfs_mount(&lfs, &cfg) => 0;
lfs_stat(&lfs, "dummy", &info) => 0;
assert(strcmp(info.name, "dummy") == 0);
assert(info.type == LFS_TYPE_REG);
lfs_unmount(&lfs) => 0;
'''
[[case]] # reentrant expanding superblock
define.LFS_BLOCK_CYCLES = [2, 1]
define.N = 24
reentrant = true
code = '''
err = lfs_mount(&lfs, &cfg);
if (err) {
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
}
for (int i = 0; i < N; i++) {
// remove lingering dummy?
err = lfs_stat(&lfs, "dummy", &info);
assert(err == 0 || (err == LFS_ERR_NOENT && i == 0));
if (!err) {
assert(strcmp(info.name, "dummy") == 0);
assert(info.type == LFS_TYPE_REG);
lfs_remove(&lfs, "dummy") => 0;
}
lfs_file_open(&lfs, &file, "dummy",
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_EXCL) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_stat(&lfs, "dummy", &info) => 0;
assert(strcmp(info.name, "dummy") == 0);
assert(info.type == LFS_TYPE_REG);
}
lfs_unmount(&lfs) => 0;
// one last check after power-cycle
lfs_mount(&lfs, &cfg) => 0;
lfs_stat(&lfs, "dummy", &info) => 0;
assert(strcmp(info.name, "dummy") == 0);
assert(info.type == LFS_TYPE_REG);
lfs_unmount(&lfs) => 0;
'''


@@ -1,302 +0,0 @@
#!/bin/bash
set -eu
SMALLSIZE=32
MEDIUMSIZE=2048
LARGESIZE=8192
echo "=== Truncate tests ==="
rm -rf blocks
tests/test.py << TEST
lfs_format(&lfs, &cfg) => 0;
TEST
echo "--- Simple truncate ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file[0], "baldynoop",
LFS_O_WRONLY | LFS_O_CREAT) => 0;
strcpy((char*)buffer, "hair");
size = strlen((char*)buffer);
for (lfs_off_t j = 0; j < $LARGESIZE; j += size) {
lfs_file_write(&lfs, &file[0], buffer, size) => size;
}
lfs_file_size(&lfs, &file[0]) => $LARGESIZE;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file[0], "baldynoop", LFS_O_RDWR) => 0;
lfs_file_size(&lfs, &file[0]) => $LARGESIZE;
lfs_file_truncate(&lfs, &file[0], $MEDIUMSIZE) => 0;
lfs_file_size(&lfs, &file[0]) => $MEDIUMSIZE;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file[0], "baldynoop", LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file[0]) => $MEDIUMSIZE;
size = strlen("hair");
for (lfs_off_t j = 0; j < $MEDIUMSIZE; j += size) {
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "hair", size) => 0;
}
lfs_file_read(&lfs, &file[0], buffer, size) => 0;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Truncate and read ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file[0], "baldyread",
LFS_O_WRONLY | LFS_O_CREAT) => 0;
strcpy((char*)buffer, "hair");
size = strlen((char*)buffer);
for (lfs_off_t j = 0; j < $LARGESIZE; j += size) {
lfs_file_write(&lfs, &file[0], buffer, size) => size;
}
lfs_file_size(&lfs, &file[0]) => $LARGESIZE;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file[0], "baldyread", LFS_O_RDWR) => 0;
lfs_file_size(&lfs, &file[0]) => $LARGESIZE;
lfs_file_truncate(&lfs, &file[0], $MEDIUMSIZE) => 0;
lfs_file_size(&lfs, &file[0]) => $MEDIUMSIZE;
size = strlen("hair");
for (lfs_off_t j = 0; j < $MEDIUMSIZE; j += size) {
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "hair", size) => 0;
}
lfs_file_read(&lfs, &file[0], buffer, size) => 0;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file[0], "baldyread", LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file[0]) => $MEDIUMSIZE;
size = strlen("hair");
for (lfs_off_t j = 0; j < $MEDIUMSIZE; j += size) {
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "hair", size) => 0;
}
lfs_file_read(&lfs, &file[0], buffer, size) => 0;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Truncate and write ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file[0], "baldywrite",
LFS_O_WRONLY | LFS_O_CREAT) => 0;
strcpy((char*)buffer, "hair");
size = strlen((char*)buffer);
for (lfs_off_t j = 0; j < $LARGESIZE; j += size) {
lfs_file_write(&lfs, &file[0], buffer, size) => size;
}
lfs_file_size(&lfs, &file[0]) => $LARGESIZE;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file[0], "baldywrite", LFS_O_RDWR) => 0;
lfs_file_size(&lfs, &file[0]) => $LARGESIZE;
lfs_file_truncate(&lfs, &file[0], $MEDIUMSIZE) => 0;
lfs_file_size(&lfs, &file[0]) => $MEDIUMSIZE;
strcpy((char*)buffer, "bald");
size = strlen((char*)buffer);
for (lfs_off_t j = 0; j < $MEDIUMSIZE; j += size) {
lfs_file_write(&lfs, &file[0], buffer, size) => size;
}
lfs_file_size(&lfs, &file[0]) => $MEDIUMSIZE;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file[0], "baldywrite", LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file[0]) => $MEDIUMSIZE;
size = strlen("bald");
for (lfs_off_t j = 0; j < $MEDIUMSIZE; j += size) {
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "bald", size) => 0;
}
lfs_file_read(&lfs, &file[0], buffer, size) => 0;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
# More aggressive general truncation tests
truncate_test() {
STARTSIZES="$1"
STARTSEEKS="$2"
HOTSIZES="$3"
COLDSIZES="$4"
tests/test.py << TEST
static const lfs_off_t startsizes[] = {$STARTSIZES};
static const lfs_off_t startseeks[] = {$STARTSEEKS};
static const lfs_off_t hotsizes[] = {$HOTSIZES};
lfs_mount(&lfs, &cfg) => 0;
for (unsigned i = 0; i < sizeof(startsizes)/sizeof(startsizes[0]); i++) {
sprintf((char*)buffer, "hairyhead%d", i);
lfs_file_open(&lfs, &file[0], (const char*)buffer,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
strcpy((char*)buffer, "hair");
size = strlen((char*)buffer);
for (lfs_off_t j = 0; j < startsizes[i]; j += size) {
lfs_file_write(&lfs, &file[0], buffer, size) => size;
}
lfs_file_size(&lfs, &file[0]) => startsizes[i];
if (startseeks[i] != startsizes[i]) {
lfs_file_seek(&lfs, &file[0],
startseeks[i], LFS_SEEK_SET) => startseeks[i];
}
lfs_file_truncate(&lfs, &file[0], hotsizes[i]) => 0;
lfs_file_size(&lfs, &file[0]) => hotsizes[i];
lfs_file_close(&lfs, &file[0]) => 0;
}
lfs_unmount(&lfs) => 0;
TEST
tests/test.py << TEST
static const lfs_off_t startsizes[] = {$STARTSIZES};
static const lfs_off_t hotsizes[] = {$HOTSIZES};
static const lfs_off_t coldsizes[] = {$COLDSIZES};
lfs_mount(&lfs, &cfg) => 0;
for (unsigned i = 0; i < sizeof(startsizes)/sizeof(startsizes[0]); i++) {
sprintf((char*)buffer, "hairyhead%d", i);
lfs_file_open(&lfs, &file[0], (const char*)buffer, LFS_O_RDWR) => 0;
lfs_file_size(&lfs, &file[0]) => hotsizes[i];
size = strlen("hair");
lfs_off_t j = 0;
for (; j < startsizes[i] && j < hotsizes[i]; j += size) {
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "hair", size) => 0;
}
for (; j < hotsizes[i]; j += size) {
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "\0\0\0\0", size) => 0;
}
lfs_file_truncate(&lfs, &file[0], coldsizes[i]) => 0;
lfs_file_size(&lfs, &file[0]) => coldsizes[i];
lfs_file_close(&lfs, &file[0]) => 0;
}
lfs_unmount(&lfs) => 0;
TEST
tests/test.py << TEST
static const lfs_off_t startsizes[] = {$STARTSIZES};
static const lfs_off_t hotsizes[] = {$HOTSIZES};
static const lfs_off_t coldsizes[] = {$COLDSIZES};
lfs_mount(&lfs, &cfg) => 0;
for (unsigned i = 0; i < sizeof(startsizes)/sizeof(startsizes[0]); i++) {
sprintf((char*)buffer, "hairyhead%d", i);
lfs_file_open(&lfs, &file[0], (const char*)buffer, LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file[0]) => coldsizes[i];
size = strlen("hair");
lfs_off_t j = 0;
for (; j < startsizes[i] && j < hotsizes[i] && j < coldsizes[i];
j += size) {
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "hair", size) => 0;
}
for (; j < coldsizes[i]; j += size) {
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "\0\0\0\0", size) => 0;
}
lfs_file_close(&lfs, &file[0]) => 0;
}
lfs_unmount(&lfs) => 0;
TEST
}
echo "--- Cold shrinking truncate ---"
truncate_test \
"2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE" \
"2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE" \
"2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE" \
" 0, $SMALLSIZE, $MEDIUMSIZE, $LARGESIZE, 2*$LARGESIZE"
echo "--- Cold expanding truncate ---"
truncate_test \
" 0, $SMALLSIZE, $MEDIUMSIZE, $LARGESIZE, 2*$LARGESIZE" \
" 0, $SMALLSIZE, $MEDIUMSIZE, $LARGESIZE, 2*$LARGESIZE" \
" 0, $SMALLSIZE, $MEDIUMSIZE, $LARGESIZE, 2*$LARGESIZE" \
"2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE"
echo "--- Warm shrinking truncate ---"
truncate_test \
"2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE" \
"2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE" \
" 0, $SMALLSIZE, $MEDIUMSIZE, $LARGESIZE, 2*$LARGESIZE" \
" 0, 0, 0, 0, 0"
echo "--- Warm expanding truncate ---"
truncate_test \
" 0, $SMALLSIZE, $MEDIUMSIZE, $LARGESIZE, 2*$LARGESIZE" \
" 0, $SMALLSIZE, $MEDIUMSIZE, $LARGESIZE, 2*$LARGESIZE" \
"2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE" \
"2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE"
echo "--- Mid-file shrinking truncate ---"
truncate_test \
"2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE" \
" $LARGESIZE, $LARGESIZE, $LARGESIZE, $LARGESIZE, $LARGESIZE" \
" 0, $SMALLSIZE, $MEDIUMSIZE, $LARGESIZE, 2*$LARGESIZE" \
" 0, 0, 0, 0, 0"
echo "--- Mid-file expanding truncate ---"
truncate_test \
" 0, $SMALLSIZE, $MEDIUMSIZE, $LARGESIZE, 2*$LARGESIZE" \
" 0, 0, $SMALLSIZE, $MEDIUMSIZE, $LARGESIZE" \
"2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE" \
"2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE"
echo "--- Results ---"
tests/stats.py

394
tests/test_truncate.toml Normal file

@@ -0,0 +1,394 @@
[[case]] # simple truncate
define.MEDIUMSIZE = [32, 2048]
define.LARGESIZE = 8192
code = '''
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file, "baldynoop",
LFS_O_WRONLY | LFS_O_CREAT) => 0;
strcpy((char*)buffer, "hair");
size = strlen((char*)buffer);
for (lfs_off_t j = 0; j < LARGESIZE; j += size) {
lfs_file_write(&lfs, &file, buffer, size) => size;
}
lfs_file_size(&lfs, &file) => LARGESIZE;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file, "baldynoop", LFS_O_RDWR) => 0;
lfs_file_size(&lfs, &file) => LARGESIZE;
lfs_file_truncate(&lfs, &file, MEDIUMSIZE) => 0;
lfs_file_size(&lfs, &file) => MEDIUMSIZE;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file, "baldynoop", LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => MEDIUMSIZE;
size = strlen("hair");
for (lfs_off_t j = 0; j < MEDIUMSIZE; j += size) {
lfs_file_read(&lfs, &file, buffer, size) => size;
memcmp(buffer, "hair", size) => 0;
}
lfs_file_read(&lfs, &file, buffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
'''
[[case]] # truncate and read
define.MEDIUMSIZE = [32, 2048]
define.LARGESIZE = 8192
code = '''
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file, "baldyread",
LFS_O_WRONLY | LFS_O_CREAT) => 0;
strcpy((char*)buffer, "hair");
size = strlen((char*)buffer);
for (lfs_off_t j = 0; j < LARGESIZE; j += size) {
lfs_file_write(&lfs, &file, buffer, size) => size;
}
lfs_file_size(&lfs, &file) => LARGESIZE;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file, "baldyread", LFS_O_RDWR) => 0;
lfs_file_size(&lfs, &file) => LARGESIZE;
lfs_file_truncate(&lfs, &file, MEDIUMSIZE) => 0;
lfs_file_size(&lfs, &file) => MEDIUMSIZE;
size = strlen("hair");
for (lfs_off_t j = 0; j < MEDIUMSIZE; j += size) {
lfs_file_read(&lfs, &file, buffer, size) => size;
memcmp(buffer, "hair", size) => 0;
}
lfs_file_read(&lfs, &file, buffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file, "baldyread", LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => MEDIUMSIZE;
size = strlen("hair");
for (lfs_off_t j = 0; j < MEDIUMSIZE; j += size) {
lfs_file_read(&lfs, &file, buffer, size) => size;
memcmp(buffer, "hair", size) => 0;
}
lfs_file_read(&lfs, &file, buffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
'''
[[case]] # write, truncate, and read
code = '''
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file, "sequence",
LFS_O_RDWR | LFS_O_CREAT | LFS_O_TRUNC) => 0;
size = lfs_min(lfs.cfg->cache_size, sizeof(buffer)/2);
lfs_size_t qsize = size / 4;
uint8_t *wb = buffer;
uint8_t *rb = buffer + size;
for (lfs_off_t j = 0; j < size; ++j) {
wb[j] = j;
}
/* Spread sequence over size */
lfs_file_write(&lfs, &file, wb, size) => size;
lfs_file_size(&lfs, &file) => size;
lfs_file_tell(&lfs, &file) => size;
lfs_file_seek(&lfs, &file, 0, LFS_SEEK_SET) => 0;
lfs_file_tell(&lfs, &file) => 0;
/* Chop off the last quarter */
lfs_size_t trunc = size - qsize;
lfs_file_truncate(&lfs, &file, trunc) => 0;
lfs_file_tell(&lfs, &file) => 0;
lfs_file_size(&lfs, &file) => trunc;
/* Read should produce first 3/4 */
lfs_file_read(&lfs, &file, rb, size) => trunc;
memcmp(rb, wb, trunc) => 0;
/* Move to 1/4 */
lfs_file_size(&lfs, &file) => trunc;
lfs_file_seek(&lfs, &file, qsize, LFS_SEEK_SET) => qsize;
lfs_file_tell(&lfs, &file) => qsize;
/* Chop to 1/2 */
trunc -= qsize;
lfs_file_truncate(&lfs, &file, trunc) => 0;
lfs_file_tell(&lfs, &file) => qsize;
lfs_file_size(&lfs, &file) => trunc;
/* Read should produce second quarter */
lfs_file_read(&lfs, &file, rb, size) => trunc - qsize;
memcmp(rb, wb + qsize, trunc - qsize) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
'''
[[case]] # truncate and write
define.MEDIUMSIZE = [32, 2048]
define.LARGESIZE = 8192
code = '''
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file, "baldywrite",
LFS_O_WRONLY | LFS_O_CREAT) => 0;
strcpy((char*)buffer, "hair");
size = strlen((char*)buffer);
for (lfs_off_t j = 0; j < LARGESIZE; j += size) {
lfs_file_write(&lfs, &file, buffer, size) => size;
}
lfs_file_size(&lfs, &file) => LARGESIZE;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file, "baldywrite", LFS_O_RDWR) => 0;
lfs_file_size(&lfs, &file) => LARGESIZE;
lfs_file_truncate(&lfs, &file, MEDIUMSIZE) => 0;
lfs_file_size(&lfs, &file) => MEDIUMSIZE;
strcpy((char*)buffer, "bald");
size = strlen((char*)buffer);
for (lfs_off_t j = 0; j < MEDIUMSIZE; j += size) {
lfs_file_write(&lfs, &file, buffer, size) => size;
}
lfs_file_size(&lfs, &file) => MEDIUMSIZE;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file, "baldywrite", LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => MEDIUMSIZE;
size = strlen("bald");
for (lfs_off_t j = 0; j < MEDIUMSIZE; j += size) {
lfs_file_read(&lfs, &file, buffer, size) => size;
memcmp(buffer, "bald", size) => 0;
}
lfs_file_read(&lfs, &file, buffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
'''
[[case]] # truncate write under powerloss
define.SMALLSIZE = [4, 512]
define.MEDIUMSIZE = [32, 1024]
define.LARGESIZE = 2048
reentrant = true
code = '''
err = lfs_mount(&lfs, &cfg);
if (err) {
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
}
err = lfs_file_open(&lfs, &file, "baldy", LFS_O_RDONLY);
assert(!err || err == LFS_ERR_NOENT);
if (!err) {
size = lfs_file_size(&lfs, &file);
assert(size == 0 ||
size == LARGESIZE ||
size == MEDIUMSIZE ||
size == SMALLSIZE);
for (lfs_off_t j = 0; j < size; j += 4) {
lfs_file_read(&lfs, &file, buffer, 4) => 4;
assert(memcmp(buffer, "hair", 4) == 0 ||
memcmp(buffer, "bald", 4) == 0 ||
memcmp(buffer, "comb", 4) == 0);
}
lfs_file_close(&lfs, &file) => 0;
}
lfs_file_open(&lfs, &file, "baldy",
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
lfs_file_size(&lfs, &file) => 0;
strcpy((char*)buffer, "hair");
size = strlen((char*)buffer);
for (lfs_off_t j = 0; j < LARGESIZE; j += size) {
lfs_file_write(&lfs, &file, buffer, size) => size;
}
lfs_file_size(&lfs, &file) => LARGESIZE;
lfs_file_close(&lfs, &file) => 0;
lfs_file_open(&lfs, &file, "baldy", LFS_O_RDWR) => 0;
lfs_file_size(&lfs, &file) => LARGESIZE;
lfs_file_truncate(&lfs, &file, MEDIUMSIZE) => 0;
lfs_file_size(&lfs, &file) => MEDIUMSIZE;
strcpy((char*)buffer, "bald");
size = strlen((char*)buffer);
for (lfs_off_t j = 0; j < MEDIUMSIZE; j += size) {
lfs_file_write(&lfs, &file, buffer, size) => size;
}
lfs_file_size(&lfs, &file) => MEDIUMSIZE;
lfs_file_close(&lfs, &file) => 0;
lfs_file_open(&lfs, &file, "baldy", LFS_O_RDWR) => 0;
lfs_file_size(&lfs, &file) => MEDIUMSIZE;
lfs_file_truncate(&lfs, &file, SMALLSIZE) => 0;
lfs_file_size(&lfs, &file) => SMALLSIZE;
strcpy((char*)buffer, "comb");
size = strlen((char*)buffer);
for (lfs_off_t j = 0; j < SMALLSIZE; j += size) {
lfs_file_write(&lfs, &file, buffer, size) => size;
}
lfs_file_size(&lfs, &file) => SMALLSIZE;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
'''
[[case]] # more aggressive general truncation tests
define.CONFIG = 'range(6)'
define.SMALLSIZE = 32
define.MEDIUMSIZE = 2048
define.LARGESIZE = 8192
code = '''
#define COUNT 5
const struct {
lfs_off_t startsizes[COUNT];
lfs_off_t startseeks[COUNT];
lfs_off_t hotsizes[COUNT];
lfs_off_t coldsizes[COUNT];
} configs[] = {
// cold shrinking
{{2*LARGESIZE, 2*LARGESIZE, 2*LARGESIZE, 2*LARGESIZE, 2*LARGESIZE},
{2*LARGESIZE, 2*LARGESIZE, 2*LARGESIZE, 2*LARGESIZE, 2*LARGESIZE},
{2*LARGESIZE, 2*LARGESIZE, 2*LARGESIZE, 2*LARGESIZE, 2*LARGESIZE},
{ 0, SMALLSIZE, MEDIUMSIZE, LARGESIZE, 2*LARGESIZE}},
// cold expanding (mirrors the old "Cold expanding truncate" shell case:
// start small, then cold-truncate up to 2*LARGESIZE)
{{ 0, SMALLSIZE, MEDIUMSIZE, LARGESIZE, 2*LARGESIZE},
{ 0, SMALLSIZE, MEDIUMSIZE, LARGESIZE, 2*LARGESIZE},
{ 0, SMALLSIZE, MEDIUMSIZE, LARGESIZE, 2*LARGESIZE},
{2*LARGESIZE, 2*LARGESIZE, 2*LARGESIZE, 2*LARGESIZE, 2*LARGESIZE}},
// warm shrinking truncate
{{2*LARGESIZE, 2*LARGESIZE, 2*LARGESIZE, 2*LARGESIZE, 2*LARGESIZE},
{2*LARGESIZE, 2*LARGESIZE, 2*LARGESIZE, 2*LARGESIZE, 2*LARGESIZE},
{ 0, SMALLSIZE, MEDIUMSIZE, LARGESIZE, 2*LARGESIZE},
{ 0, 0, 0, 0, 0}},
// warm expanding truncate
{{ 0, SMALLSIZE, MEDIUMSIZE, LARGESIZE, 2*LARGESIZE},
{ 0, SMALLSIZE, MEDIUMSIZE, LARGESIZE, 2*LARGESIZE},
{2*LARGESIZE, 2*LARGESIZE, 2*LARGESIZE, 2*LARGESIZE, 2*LARGESIZE},
{2*LARGESIZE, 2*LARGESIZE, 2*LARGESIZE, 2*LARGESIZE, 2*LARGESIZE}},
// mid-file shrinking truncate
{{2*LARGESIZE, 2*LARGESIZE, 2*LARGESIZE, 2*LARGESIZE, 2*LARGESIZE},
{ LARGESIZE, LARGESIZE, LARGESIZE, LARGESIZE, LARGESIZE},
{ 0, SMALLSIZE, MEDIUMSIZE, LARGESIZE, 2*LARGESIZE},
{ 0, 0, 0, 0, 0}},
// mid-file expanding truncate
{{ 0, SMALLSIZE, MEDIUMSIZE, LARGESIZE, 2*LARGESIZE},
{ 0, 0, SMALLSIZE, MEDIUMSIZE, LARGESIZE},
{2*LARGESIZE, 2*LARGESIZE, 2*LARGESIZE, 2*LARGESIZE, 2*LARGESIZE},
{2*LARGESIZE, 2*LARGESIZE, 2*LARGESIZE, 2*LARGESIZE, 2*LARGESIZE}},
};
const lfs_off_t *startsizes = configs[CONFIG].startsizes;
const lfs_off_t *startseeks = configs[CONFIG].startseeks;
const lfs_off_t *hotsizes = configs[CONFIG].hotsizes;
const lfs_off_t *coldsizes = configs[CONFIG].coldsizes;
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
for (unsigned i = 0; i < COUNT; i++) {
sprintf(path, "hairyhead%d", i);
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
strcpy((char*)buffer, "hair");
size = strlen((char*)buffer);
for (lfs_off_t j = 0; j < startsizes[i]; j += size) {
lfs_file_write(&lfs, &file, buffer, size) => size;
}
lfs_file_size(&lfs, &file) => startsizes[i];
if (startseeks[i] != startsizes[i]) {
lfs_file_seek(&lfs, &file,
startseeks[i], LFS_SEEK_SET) => startseeks[i];
}
lfs_file_truncate(&lfs, &file, hotsizes[i]) => 0;
lfs_file_size(&lfs, &file) => hotsizes[i];
lfs_file_close(&lfs, &file) => 0;
}
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, &cfg) => 0;
for (unsigned i = 0; i < COUNT; i++) {
sprintf(path, "hairyhead%d", i);
lfs_file_open(&lfs, &file, path, LFS_O_RDWR) => 0;
lfs_file_size(&lfs, &file) => hotsizes[i];
size = strlen("hair");
lfs_off_t j = 0;
for (; j < startsizes[i] && j < hotsizes[i]; j += size) {
lfs_file_read(&lfs, &file, buffer, size) => size;
memcmp(buffer, "hair", size) => 0;
}
for (; j < hotsizes[i]; j += size) {
lfs_file_read(&lfs, &file, buffer, size) => size;
memcmp(buffer, "\0\0\0\0", size) => 0;
}
lfs_file_truncate(&lfs, &file, coldsizes[i]) => 0;
lfs_file_size(&lfs, &file) => coldsizes[i];
lfs_file_close(&lfs, &file) => 0;
}
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, &cfg) => 0;
for (unsigned i = 0; i < COUNT; i++) {
sprintf(path, "hairyhead%d", i);
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => coldsizes[i];
size = strlen("hair");
lfs_off_t j = 0;
for (; j < startsizes[i] && j < hotsizes[i] && j < coldsizes[i];
j += size) {
lfs_file_read(&lfs, &file, buffer, size) => size;
memcmp(buffer, "hair", size) => 0;
}
for (; j < coldsizes[i]; j += size) {
lfs_file_read(&lfs, &file, buffer, size) => size;
memcmp(buffer, "\0\0\0\0", size) => 0;
}
lfs_file_close(&lfs, &file) => 0;
}
lfs_unmount(&lfs) => 0;
'''