This was a small hole in the logic that handles initializing the
lookahead buffer. To imitate exhaustion (so the block allocator
will trigger a scan), the lookahead buffer is rewound a full
lookahead and set up to look like it is exhausted. However,
unlike normal allocation, this rewind was not kept aligned to
a multiple of the scan size, which is limited by both the
lookahead buffer and the total storage size.
This bug went unnoticed for so long because it only causes
problems when the block device is both:
1. Not aligned to the lookahead buffer (not a power of 2)
2. Smaller than the lookahead buffer
While this seems like a strange corner case for a block device,
this turned out to be very common for internal flash, especially
when a handful of blocks are reserved for code.
As it was, if a user operated on a directory while at the same
time iterating over the directory, the directory objects could
fall out of sync. In the best case, files may be skipped while
removing everything in a directory; in the worst case, a very poorly
timed directory relocate could be missed.
Simple fix is to add the same directory tracking that is currently
in use for files, at a small code+complexity cost.
Mostly changed the wording around the goals/features of littlefs based
on feedback from other developers.
Also added in project links now that there are a few of those floating
around. And made the README a bit easier to navigate.
Short story, files are no longer committed to directories during
file sync/close if the last write did not complete successfully.
This avoids a set of interesting user-experience issues related
to the end-of-life behaviour of the filesystem.
As a filesystem approaches end-of-life, the chances of running into
LFS_ERR_NOSPC grows rather quickly. Since this condition occurs
at the end of a device's life, it's likely that operating in these
conditions hasn't been tested thoroughly.
In the specific case of file-writes, you can hit an LFS_ERR_NOSPC after
parts of the file have been written out. If the program simply continues
and closes the file, the file is written out half completed. Since
littlefs has a strong guarantee that prevents half-writes, it's unlikely
this state of the file would be expected.
To make things worse, since close is also responsible for memory
cleanup, it's actually _impossible_ to continue working as it was
without leaking memory.
By preventing the file commits, end-of-life behaviour should at least retain
a previous copy of the filesystem without any surprises.
Specifically around error handling. As is, incorrectly handled
errors could cause higher-level code to get uninitialized blocks,
potentially leading to writes to arbitrary blocks on storage.
This is only an issue in the weird case that a worn-down block is
left in the odd state of not being able to change the data that resides
on the block. That being said, this does pop up often when simulating
wear on block devices.
Currently, directory commits check if the write succeeded by crcing the
block to avoid the additional RAM cost of another buffer. However,
before this commit, directory commits just checked if the block crc was
valid, rather than comparing to the expected crc. This would usually
work, unless the block was stuck in a state with a valid crc.
The fix is to simply compare with the expected crc to find errors.
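Roughly, the corrected check looks something like this (illustrative
names; it assumes littlefs's lfs_crc helper, and block_data, commit_size,
and expected_crc are stand-ins rather than the real internals):

    #include "lfs.h"

    static int check_commit(const void *block_data, lfs_size_t commit_size,
            uint32_t expected_crc) {
        // crc of what actually ended up on disk
        uint32_t crc = lfs_crc(0xffffffff, block_data, commit_size);

        // before: any self-consistent crc passed; after: it must match
        // the crc of the data we meant to write
        return (crc == expected_crc) ? 0 : LFS_ERR_CORRUPT;
    }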
The previous math for determining if we scanned all of the disk wasn't set
up correctly in the lfs_mount function. If lookahead == block_count
the lfs_alloc function would think we had already searched the entire
disk.
This is only an issue if we manage to exhaust a block on the first
pass after mount, since lfs_alloc_ack resets the lookahead region
into a valid state after a successful block allocation.
The littlefs allows buffers to be passed statically in the case
that a system does not have a heap. Unfortunately, this means we
can't round up in the case of an unaligned lookahead buffer.
Double unfortunately, rounding down after clamping to the block device
size could result in a lookahead of zero for block devices < 32 blocks
large.
The assert in littlefs does catch this case, but rounding down prevents
support for < 32 block devices.
The solution is to simply require a 32-bit aligned buffer with an
assert. This avoids runtime problems while allowing a user to pass
in the correct buffer for < 32 block devices. Rounding up can be
handled at higher API levels.
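A minimal sketch of the new requirement (the function and parameter
names here are just illustrative):

    #include <assert.h>
    #include <stdint.h>

    // a statically provided lookahead buffer must be 32-bit aligned
    static void check_lookahead_buffer(const void *lookahead_buffer) {
        assert((uintptr_t)lookahead_buffer % sizeof(uint32_t) == 0);
    }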
Same runtime cost, however reduces the logic and avoids one of
the two big branches. See the DESIGN.md for more info.
Now uses these equations instead of the messy guess and correct method:
n = (N - (w/8)(popcount(N/(B-2w/8)) + 2)) / (B-2w/8)
off = N - (B-2w/8)n - (w/8)popcount(n)
This reduces the O(n^2 log n) runtime to read a file to only O(n log n).
The extra O(n) did not touch the disk, so it isn't a problem until the
files become very large, but this solution comes with very little cost.
Long story short, you can find the block index + offset pair for a
CTZ linked-list with this series of formulas:
n' = floor(N / (B - 2w/8))
N' = (B - 2w/8)n' + (w/8)popcount(n')
off' = N - N'
n, off =
n'-1, off'+B if off' < 0
n', off'+(w/8)(ctz(n')+1) if off' >= 0
For the long story, you will need to see the updated DESIGN.md
Initially, I was concerned that the number of pointers in the ctz
linked-list could exceed the storage in a block. Long story short
this isn't really possible outside of extremely small block sizes.
Since clamping impacts the layout of files on disk, removing the
block size clamping removed quite a bit of logic and corner cases. Replaced
with an assert on block size during initialization.
---
Long story long, the minimum block size needed to store all ctz
pointers in a filesystem can be found with this formula:
B = (w/8)*log2(2^w / (B - 2*(w/8)))
where:
B = block size in bytes
w = pointer width in bits
It's not a very pretty formula, but does give us some useful info
if we apply some math:
min block size:
32 bit ctz linked-list = 104 bytes
64 bit ctz linked-list = 448 bytes
For littlefs, 128 bytes is a perfectly reasonable minimum block size.
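As a rough sanity check of the 32-bit figure: with w = 32 and B = 104,
the right-hand side is (32/8)*log2(2^32 / (104 - 8)) = 4*log2(2^32/96),
which comes out to roughly 102 bytes, so a 104-byte block satisfies the
bound with a little room to spare.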
Deduplication and deorphan steps aren't required under identical
conditions, but they can be processed in the same iteration of the
filesystem. Since lfs_alloc (requires deorphan) occurs on most write
calls to the filesystem (requires deduplication), it was simpler to
just combine the steps into a single lfs_deorphan step.
Also traded out the places where lfs_rename/lfs_remove just defer
operations to the deorphan step. This adds a bit of code, but also
significantly speeds up directory operations.
The "move problem" has been present in littlefs for a while, but I haven't
come across a solution worth implementing for various reasons.
The problem is simple: how do we move directory entries across
directories atomically? Since multiple directory entries are involved,
we can't rely entirely on the atomic block updates. It ends up being
a bit of a puzzle.
To make the problem more complicated, any directory block update can
fail due to wear, and cause the directory block to need to be relocated.
This happens rarely, but brings a large number of corner cases.
---
The solution in this patch is simple:
1. Mark source as "moving"
2. Copy source to destination
3. Remove source
If littlefs ever runs into a "moving" entry, that means a power loss
occurred during a move. Either the destination entry exists or it
doesn't. In this case we just search the entire filesystem for the
destination entry.
This is expensive, however the chance of a power loss during a move
is relatively low.
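As a rough sketch of that recovery path (the helpers here are made up
for illustration, not the actual littlefs internals):

    // a "moving" entry found during traversal means power was lost mid-move
    static int fix_moving_entry(lfs_t *lfs, lfs_entry_t *entry) {
        // search the whole filesystem for the destination copy
        if (fs_has_duplicate(lfs, entry)) {
            return entry_remove(lfs, entry);       // step 3 never ran, finish it
        } else {
            return entry_clear_moving(lfs, entry); // copy never landed, keep source
        }
    }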
lfs_file_seek returned the _previous_ file offset on success, where
most standards return the _calculated_ offset on success.
This just falls into me not noticing a mistake, and shows why it's
always helpful to have a second set of eyes on code.
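A quick usage sketch of the corrected behaviour (assuming a mounted lfs
and an open file):

    lfs_soff_t off = lfs_file_seek(&lfs, &file, 42, LFS_SEEK_SET);
    // off is now 42, the calculated offset; previously this returned
    // whatever the offset was before the seek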
Simply limiting the lookahead region to the size of
the block device fixes the problem.
Also added logic to limit the allocated region and
floor to nearest word, since the additional memory
couldn't really be used effectively.
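Roughly, with one lookahead bit per block (illustrative, not the exact
littlefs code):

    // clamp the lookahead to the number of blocks on the device, then
    // floor to a multiple of 32 so the buffer is a whole number of words
    lfs_size_t lookahead = cfg->lookahead;
    if (lookahead > cfg->block_count) {
        lookahead = cfg->block_count;
    }
    lookahead -= lookahead % 32;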
Mostly just a hole in testing. Dir blocks were not being
correctly collected when removing entries from very large
directories due to forgetting about the tail-bit in the directory
block size. The test hole has now been filled.
Also added lfs_entry_size to avoid having to repeat that
expression since it is a bit ridiculous
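The helper presumably boils down to something like this (field names
follow the entry tag layout described further down; the exact definition
may differ):

    // total on-disk footprint of an entry: 4-byte tag plus the three
    // variable-length regions
    static lfs_size_t lfs_entry_size(const lfs_entry_t *entry) {
        return 4 + entry->d.elen + entry->d.alen + entry->d.nlen;
    }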
- out-of-bound read results in eof
- out-of-bound write will fill missing area with zeros
The write behaviour matches expected POSIX behaviour, but was
under consideration for being dropped, since littlefs does
not support holes, and support of out-of-bound seeks adds complexity.
However, it turned out filling with zeros was trivial, and only
cost an extra 74 bytes of flash (0.48%).
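As a usage sketch (assuming a mounted filesystem and a freshly created,
empty file):

    uint8_t buf[4] = {0xff, 0xff, 0xff, 0xff};

    // reading past the end just reports end-of-file
    lfs_file_seek(&lfs, &file, 16, LFS_SEEK_SET);
    lfs_ssize_t res = lfs_file_read(&lfs, &file, buf, sizeof(buf));
    // res == 0, nothing to read

    // writing past the end fills the skipped-over region with zeros
    lfs_file_write(&lfs, &file, "x", 1);
    lfs_file_seek(&lfs, &file, 0, LFS_SEEK_SET);
    lfs_file_read(&lfs, &file, buf, sizeof(buf));
    // buf now holds {0, 0, 0, 0}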
When the lookahead buffer wraps around in an unaligned filesystem, it's
possible for blocks at the beginning of the disk to have a negative distance
from the lookahead, but still reside in the lookahead buffer.
Switching to signed modulo doesn't quite work due to how negative modulo
is implemented in C, so the simple solution is to shift the region to be
positive.
This off-by-one error was caused by a slight difference between the
pos argument to lfs_index_find and lfs_index_extend. When pos is on
a block boundary, lfs_index_extend expects block to point before pos,
as it would when writing a file linearly. But when seeking to that
pos, the lfs_index_find to warm things up just supplies the block it
expects pos to be in.
Fixed the off-by-one error and added a test case for several of these
cold seek+writes.
Zero attributes are actually supported at the moment, but this change
will allow entry attributes to be added in a backwards compatible manner.
Each dir entry is now prefixed with a 32 bit tag:
4b - entry type
4b - data structure
8b - entry len
8b - attribute len
8b - name len
A full entry on disk looks a bit like this:
[-  8 -|- 8  -|- 8  -|- 8  -|-- elen --|-- alen --|-- nlen --]
[ type | elen | alen | nlen |  entry   |  attrs   |   name   ]
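In struct form, the tag portion maps roughly onto (an illustrative
mirror, not necessarily the exact definition in the source):

    struct entry_tag {
        uint8_t type;   // 4 bits entry type + 4 bits data structure
        uint8_t elen;   // length of the entry-specific data
        uint8_t alen;   // length of the attributes
        uint8_t nlen;   // length of the name
    };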
The actual contents of the attributes section are a bit handwavey
until the first attributes are implemented, but to put plans in place:
Each attribute will be prefixed with only a byte that indicates the type
of attribute. Attributes should be sorted based on portability, since
unknown attributes will force attribute parsing to stop.
This provides a path for adding inlined files in the future, which
requires multiple lengths to distinguish between the file data and name.
As an extra bonus, the directory can now be iterated over even if the
types are unknown, since the name's representation is consistent on all
entry types.
This does come at the cost of reducing types from 16-bits to 8-bits, but
I doubt this will become a problem.
This was an easy fix, but highlighted the fact that the current testing
framework doesn't detect when a block is written to without an
associated erase.
Added a quick solution that creates an empty file during an erase.
An interesting side-effect of adding internal checks to the littlefs
for block errors, is that the littlefs starts to cover up its own
flaws. Probably out of embarrassment.
In this case, the relocation logic for directories left the littlefs
rcache dirty with invalid data. The littlefs detected the error,
treated it as a corrupted write, and just moved the "corrupted" block
to a new block, which as a side-effect flushes the rcache.
Since committing a dir will end up flushing the rcache to check for
errors anyways, we can just drop the rcache in lfs_bd_sync.
This bug required larger cache sizes to notice, since block errors
usually get detected in the early stages of writing to files.
Since the fix here requires that both lfs_file_write and lfs_file_flush
relocate file blocks, the relocation logic was moved out into a
separate function.
Before, the littlefs relied on the underlying block device
to report corruption that occurs when writing data to disk.
This requirement is easy to miss or implement incorrectly, since
the error detection is only required when a block becomes corrupted,
which is very unlikely to happen until late in the block device's
lifetime.
The littlefs can detect corruption itself by reading back written data.
This requires a bit of care to reuse the available buffers, and may rely
on checksums to avoid additional RAM requirements.
This does have a runtime penalty with the extra read operations, but
should make the littlefs much more robust to different implementations.
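A sketch of the idea (assumed names and a hypothetical helper, not the
littlefs internals; it leans on the block device's read callback and
littlefs's lfs_crc, and glosses over read_size alignment):

    #include "lfs.h"

    // after programming, read the data back in small chunks and compare a
    // running crc against the crc of what we intended to write, so no
    // second block-sized buffer is needed
    static int verify_prog(const struct lfs_config *cfg, lfs_block_t block,
            lfs_off_t off, const void *buffer, lfs_size_t size) {
        uint32_t expected = lfs_crc(0xffffffff, buffer, size);

        uint8_t scratch[16];
        uint32_t crc = 0xffffffff;
        for (lfs_off_t i = 0; i < size; i += sizeof(scratch)) {
            lfs_size_t chunk = (size - i < sizeof(scratch))
                    ? size - i : sizeof(scratch);
            int err = cfg->read(cfg, block, off + i, scratch, chunk);
            if (err) {
                return err;
            }
            crc = lfs_crc(crc, scratch, chunk);
        }

        return (crc == expected) ? 0 : LFS_ERR_CORRUPT;
    }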
Directories still consume two full erase blocks, but now only program
the exact on-disk region to store the directory contents. This results
in a decent improvement in the amount of data written to and read from
the device when doing directory operations.
Calculating the checksum of dynamically sized data is surprisingly
tricky, since the size of the data could also contain errors. For the
littlefs, we can assume the data size must fit in an erase block.
If the data size is invalid, we can just treat the block as corrupted.
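In code, the check is about as simple as it sounds (illustrative names):

    // the stored size may itself be corrupted, but it can never
    // legitimately exceed an erase block, so an impossible size means
    // a corrupted block
    if (dir_size > lfs->cfg->block_size) {
        return LFS_ERR_CORRUPT;
    }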
More documentation may still be worthwhile (design documentation?),
but for now this provides a reasonable baseline.
- readme
- license
- header documentation
This provides a limited form of wear leveling. While wear is
not actually balanced across blocks, the filesystem can recover
from corrupted blocks and extend the lifetime of a device nearly
as much as dynamic wear leveling.
For use-cases where wear is important, it would be better to use
a full form of dynamic wear-leveling at the block level (or
consider a logging filesystem).
Corrupted block handling was simply added on top of the existing
logic in place for the filesystem, so it's a bit more noodly than
it needs to be, but it gets the work done.