This is primarily to get better test coverage over devices with very
large erase/prog/read sizes. The unfortunate state of the tests is
that most of them rely on a specific block device size, so that
ENOSPC and ECORRUPT errors occur in specific situations.
This should be improved in the future, but at least for now we can
open up some of the simpler tests to run on these different
configurations.
Also added testing over both 0x00 and 0xff erase values in emubd.
Also added a number of small file tests that expose issues prevalent
on NAND devices.
The fact that the lookahead buffer uses bits instead of bytes is an
internal detail. Poking this through to the user API has caused a decent
amount of confusion. Most buffers are provided as bytes and the
inconsistency here can be surprising.
The use of bytes instead of bits also makes us forward compatible in
the case that we want to change the lookahead internal representation
(hint segment list).
Additionally, we change the configuration name to lookahead_size. This
matches other configurations, such as cache_size and read_size, while
also notifying the user that something important changed at compile time
(by breaking).
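For illustration, this is roughly what the renamed option looks like from
the user's side; the block device hooks and the geometry values here are
made up, only the field names come from the change above:

    #include "lfs.h"

    // user-provided block device hooks, implemented elsewhere
    // (hypothetical names)
    extern int user_bd_read(const struct lfs_config *c, lfs_block_t block,
            lfs_off_t off, void *buffer, lfs_size_t size);
    extern int user_bd_prog(const struct lfs_config *c, lfs_block_t block,
            lfs_off_t off, const void *buffer, lfs_size_t size);
    extern int user_bd_erase(const struct lfs_config *c, lfs_block_t block);
    extern int user_bd_sync(const struct lfs_config *c);

    const struct lfs_config cfg = {
        .read  = user_bd_read,
        .prog  = user_bd_prog,
        .erase = user_bd_erase,
        .sync  = user_bd_sync,

        // example geometry, not a recommendation
        .read_size      = 16,
        .prog_size      = 16,
        .block_size     = 4096,
        .block_count    = 128,
        .cache_size     = 64,
        // now in bytes, consistent with cache_size/read_size; internally
        // each byte still tracks 8 blocks, but that's no longer the
        // user's problem
        .lookahead_size = 16,
    };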
Initially, littlefs relied entirely on bad-block detection for
wear-leveling. Conceptually, at the end of a device's lifespan, all
blocks would be worn evenly, even if they weren't worn out at the same
time. However, this doesn't work for all devices; rather than causing
corruption during writes, wear reduces a device's "sticking power",
causing bits to flip over time. This means for many devices, true
wear-leveling (dynamic or static) is required.
Fortunately, way back at the beginning, littlefs was designed to do full
dynamic wear-leveling, only dropping it when making the retrospectively
short-sighted realization that bad-block detection is theoretically
sufficient. We can enable dynamic wear-leveling with only a few tweaks
to littlefs. These can be implemented without breaking backwards
compatibility.
1. Evict metadata-pairs after a certain number of writes. Eviction in
this case is identical to a relocation to recover from a bad block.
We move our data and stick the old block back into our pool of
blocks.
For knowing when to evict, we already have a revision count for each
metadata-pair which gives us enough information. We add the
configuration option block_cycles and evict when our revision count
is a multiple of this value (see the sketch after this list).
2. Now all blocks participate in COW behaviour. However, we don't store
the state of our allocator, so every boot cycle we reuse the first
blocks on storage. This is very bad on a microcontroller, where we
may reboot often. We need a way to spread our usage across the disk.
To pull this off, we can simply randomize which block we start our
allocator at. But we need a random number generator that is different
on each boot. Fortunately we have a great source of entropy, our
filesystem. So we seed our block allocator with a simple hash of the
CRCs on our metadata-pairs. This can be done for free since we
already need to scan the metadata-pairs during mount.
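Roughly, and with made-up names rather than the actual internal
functions, the two tweaks boil down to:

    #include <stdbool.h>
    #include <stdint.h>

    // 1. evict a metadata-pair once its revision count hits a multiple of
    //    the configured block_cycles; eviction reuses the bad-block
    //    relocation path
    static bool should_evict(uint32_t revision_count, uint32_t block_cycles) {
        // a block_cycles of 0 would disable wear-leveling eviction entirely
        return block_cycles > 0 && (revision_count % block_cycles) == 0;
    }

    // 2. seed the allocator from entropy already on disk: fold in the CRCs
    //    we compute while scanning metadata-pairs during mount...
    static uint32_t seed_fold(uint32_t seed, uint32_t metadata_crc) {
        return seed ^ metadata_crc;
    }

    // ...then start allocation at a pseudorandom offset so usage spreads
    // across the disk between reboots
    static uint32_t alloc_start(uint32_t seed, uint32_t block_count) {
        return seed % block_count;
    }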
What we end up with is a uniform distribution of wear on storage. The
wear is not perfect: if a block is used for metadata it gets more wear,
and the randomization may not be exact. But we can never actually get
perfect wear-leveling, since we're already resigned to dynamic
wear-leveling at the file level.
With the addition of metadata logging, we end up with a really
interesting two-stage wear-leveling algorithm. At the low-level,
metadata is statically wear-leveled. At the high-level, blocks are
dynamically wear-leveled.
---
This specific commit implements the first step, eviction of metadata
pairs. Intertwining this into the already complicated compact logic was
a bit annoying; however, we can combine the logic for superblock
expansion with the logic for metadata-pair eviction.
The introduction of an explicit cache_size configuration allows
customization of the cache buffers independently from the hardware
read/write sizes.
This has been one of littlefs's main handicaps. Without a distinction
between cache units and hardware limitations, littlefs isn't able to
read or program _less_ than the cache size. This leads to the
counter-intuitive case where larger cache sizes can actually be harmful,
since larger read/prog sizes require sending more data over the bus if
we're only accessing a small set of data (for example the CTZ skip-list
traversal).
This is compounded with metadata logging, since a large program size
limits the number of commits we can write out in a single metadata
block. It really doesn't make sense to link program size + cache
size here.
With a separate cache_size configuration, we can be much smarter about
what we actually read/write from disk.
This also simplifies cache handling a bit. Before there were two
possible cache sizes, but these were rarely used. Note that the
cache_size is NOT written to the superblock and can be freely changed
without breaking backwards compatibility.
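As a rough sketch with made-up numbers (the divisibility rules are
assumptions based on the relationships described above), the hardware
sizes can now stay at whatever the device requires while the cache is
sized separately:

    #include <assert.h>

    // example sizes only; read/prog reflect what the hardware requires,
    // cache_size is a separate RAM/batching decision
    enum {
        READ_SIZE  = 1,      // hardware can read single bytes
        PROG_SIZE  = 8,      // hardware programs 8 bytes at a time
        CACHE_SIZE = 256,    // chosen independently of the above
        BLOCK_SIZE = 4096,
    };

    // assumed relationships: the hardware sizes divide the cache, and the
    // cache divides the block size
    static_assert(CACHE_SIZE % READ_SIZE == 0, "cache multiple of read_size");
    static_assert(CACHE_SIZE % PROG_SIZE == 0, "cache multiple of prog_size");
    static_assert(BLOCK_SIZE % CACHE_SIZE == 0, "block multiple of cache_size");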
- no previous prototype for ‘test_assert’
- no previous prototype for ‘test_count’
- unused parameter ‘b’ in test_count
- function declaration isn’t a prototype for main
Some of the tests were creating a variable `res`, however the test
system itself relies on its own `res` variable. This worked out by
luck, but could lead to problems if the res variables were different
types.
Changed the generated variable in the test system to the less common
name `test`, which also works out to share the same prefix as other test
functions.
Directories still consume two full erase blocks, but now only program
the exact on-disk region to store the directory contents. This results
in a decent improvement in the amount of data written to and read from
the device when doing directory operations.
Calculating the checksum of dynamically sized data is surprisingly
tricky, since the size of the data could also contain errors. For the
littlefs, we can assume the data size must fit in an erase block.
If the data size is invalid, we can just treat the block as corrupted.
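A minimal sketch of that check, with an illustrative CRC and field
layout rather than the real on-disk format:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    // plain bitwise CRC32 as a stand-in for the filesystem's checksum
    static uint32_t crc32_update(uint32_t crc, const void *buffer, size_t size) {
        const uint8_t *data = buffer;
        for (size_t i = 0; i < size; i++) {
            crc ^= data[i];
            for (int j = 0; j < 8; j++) {
                crc = (crc & 1) ? (crc >> 1) ^ 0xedb88320 : (crc >> 1);
            }
        }
        return crc;
    }

    // returns false if the block should be treated as corrupted
    static bool block_checks_out(const uint8_t *block, uint32_t block_size,
            uint32_t stored_size, uint32_t stored_crc) {
        // the stored size itself may be garbage; anything larger than an
        // erase block is rejected before we bother checksumming
        if (stored_size > block_size) {
            return false;
        }
        return crc32_update(0xffffffff, block, stored_size) == stored_crc;
    }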
This provides a limited form of wear leveling. While wear is
not actually balanced across blocks, the filesystem can recover
from corrupted blocks and extend the lifetime of a device nearly
as much as dynamic wear leveling.
For use-cases where wear is important, it would be better to use
a full form of dynamic wear-leveling at the block level (or to
consider a logging filesystem).
Corrupted block handling was simply added on top of the existing
logic in place for the filesystem, so it's a bit more noodly than
it may have to be, but it gets the work done.
Removed scanning for stride
- Adds complexity with questionable benefit
- Can be added as an optimization later
Fixed handling around device boundaries and where lookahead may not be a
factor of the device size (consider small devices with only a few
blocks)
Added support for configuration with optional dynamic memory as found in
the caching configuration
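The boundary handling amounts to something like the following sketch
(names and the clipping policy are illustrative, not the actual
allocator code): the window is clipped at the end of the device and then
wraps back to block 0, so the lookahead never has to divide the device
size evenly:

    #include <stdint.h>

    // advance a lookahead window of `lookahead` blocks over a device of
    // `block_count` blocks, clipping at the device boundary before wrapping
    static uint32_t next_lookahead_off(uint32_t off, uint32_t lookahead,
            uint32_t block_count) {
        uint32_t step = lookahead;
        if (step > block_count - off) {
            step = block_count - off;   // partial window at the end of the device
        }
        return (off + step) % block_count;
    }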
This adds caching of the most recent read/program blocks, allowing
support of devices that don't have byte-level read+writes, along
with reduced device access on devices that do support byte-level
read+writes.
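In very rough strokes, the read side looks something like this; the
struct and helper are illustrative, not the actual cache layer:

    #include <stdint.h>
    #include <string.h>

    struct cache {
        uint32_t block;    // block currently held in the cache
        uint32_t off;      // offset of the cached region within that block
        uint32_t size;     // number of valid bytes in the cache buffer
        uint8_t *buffer;   // backing memory, at least as large as any request
    };

    // read through a single-block cache; bd_read is the raw device hook
    static int cached_read(struct cache *rcache, uint32_t block, uint32_t off,
            void *buffer, uint32_t size,
            int (*bd_read)(uint32_t block, uint32_t off,
                void *buffer, uint32_t size)) {
        // hit: the requested range is already buffered
        if (block == rcache->block &&
                off >= rcache->off &&
                off + size <= rcache->off + rcache->size) {
            memcpy(buffer, rcache->buffer + (off - rcache->off), size);
            return 0;
        }

        // miss: fetch from the device and remember what we read
        int err = bd_read(block, off, rcache->buffer, size);
        if (err) {
            return err;
        }
        rcache->block = block;
        rcache->off = off;
        rcache->size = size;
        memcpy(buffer, rcache->buffer, size);
        return 0;
    }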
Note: The current implementation is a bit eager to drop caches where
it simplifies the cache layer. This layer is already complex enough.
Note: It may be worthwhile to add a compile switch for caching to
reduce code size, not sure.
Note: This does add a dependency on malloc, which could have a porting
layer, but I'm just using the functions from stdlib for now. These can be
overridden with noops if the user controls the system, which keeps things
simple for now.
Before, the lfs had multiple paths to determine config options:
- lfs_config struct passed during initialization
- lfs_bd_info struct passed during block device initialization
- compile time options
This allowed different developers to provide their own requirements
to the filesystem, such as the block device capabilities and the
higher-level user's own tweaks.
However, this comes with additional complexity and action required
when the configurations are incompatible.
For now, this has been reduced to all information (including block
device function pointers) being passed through the lfs_config struct.
We just defer more complicated handling of configuration options to
the top level user.
This simplifies configuration handling and gives the top level user
the responsibility to handle configuration, which they probably would
have wanted to do anyways.
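In practice the top-level user just builds one lfs_config from whatever
sources they care about. A sketch, where bd_info and the merging policy
are hypothetical and not part of any API:

    #include "lfs.h"

    // hypothetical description of a block device's capabilities
    struct bd_info {
        lfs_size_t read_size;    // smallest readable unit
        lfs_size_t prog_size;    // smallest programmable unit
        lfs_size_t erase_size;   // erase block size
        lfs_size_t erase_count;  // number of erase blocks
    };

    // the user decides how capabilities and their own tweaks combine into
    // the single configuration handed to littlefs
    static void build_config(struct lfs_config *cfg, const struct bd_info *info) {
        cfg->read_size   = info->read_size;
        cfg->prog_size   = info->prog_size;
        cfg->block_size  = info->erase_size;
        cfg->block_count = info->erase_count;
        // ...block device function pointers and any user overrides go here too
    }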
Removing the dependency on the parent pointer solves
many issues with non-atomic updates of children's
parent pointers with respect to any move operations.
However, this comes with an embarrassingly terrible
runtime as the only other option is to exhaustively
check every dir entry to find a child's parent.
Fortunately, deorphaning should be a relatively rare
operation.
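The brute-force search is conceptually just the following; this is a
simplified in-memory sketch, the real version walks the on-disk
directory entries:

    #include <stdbool.h>
    #include <stdint.h>

    struct dir_entry {
        uint32_t child[2];   // pair of blocks this entry points at, if any
    };

    struct dir {
        const struct dir_entry *entries;
        uint32_t count;
    };

    // scan every entry of every directory until one references the child;
    // O(total entries), but only paid during deorphan/move recovery
    static bool find_parent(const struct dir *dirs, uint32_t dir_count,
            const uint32_t child[2], uint32_t *parent) {
        for (uint32_t i = 0; i < dir_count; i++) {
            for (uint32_t j = 0; j < dirs[i].count; j++) {
                const uint32_t *c = dirs[i].entries[j].child;
                if ((c[0] == child[0] && c[1] == child[1]) ||
                        (c[0] == child[1] && c[1] == child[0])) {
                    *parent = i;
                    return true;
                }
            }
        }
        return false;
    }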