Mostly just removed LFS_FROM_DROP and changed the DSL grammar a bit to
allow drops to occur naturally through oldsize -> newsize diff expressed
in the region struct. This prevents us from having to add a drop every
time we want to update an entry in-place.
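A rough sketch of the idea, with field names that are assumptions rather than the actual definitions:

    // Illustrative only; field names are guesses, not littlefs's actual
    // definitions. With oldsize/newsize in the region struct, a drop needs
    // no dedicated LFS_FROM_DROP opcode, it's just a resize down to zero.
    struct lfs_region {
        lfs_off_t off;       // where the entry lives in the directory block
        lfs_size_t oldsize;  // size currently on disk
        lfs_size_t newsize;  // size after the commit; 0 == drop the entry
        const void *data;    // new contents, if any
    };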
Although it's simple and probably what most users expect, the previous
custom attributes API suffered from one problem: the inability to update
attributes atomically.
If we consider our timestamp use case, updating a file would require:
1. Update the file
2. Update the timestamp
If a power loss occurs during this sequence of updates, we could end up
with a file with an incorrect timestamp.
Is this a big deal? Probably not, but it could be a surprise only found
after a power-loss. And littlefs was developed _specifically_ to avoid
surprises during power-loss.
The littlefs is perfectly capable of bundling multiple attribute updates
in a single directory commit. That's kind of what it was designed to do.
So all we need is a new committer opcode for a list of attributes, and a
way to poke that list of attributes through the API.
We could provide single-attribute functions, but we don't, because fewer
functions make for a smaller codebase, and these are already the more
advanced functions, so we can expect more from users. This also
changes semantics about what happens when we don't find an attribute,
since erroring would throw away all of the other attributes we're
processing.
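As a hedged sketch, committing a list of attributes might look something
like this (the struct layout and function name are illustrative, not
necessarily the final API):

    // Purely illustrative: several attributes committed in one directory
    // commit, so either all of them land on disk or none of them do.
    uint32_t timestamp = 1534615256;
    uint8_t  flags     = 0x3;

    struct lfs_attr attrs[] = {
        {.type = 't', .buffer = &timestamp, .size = sizeof(timestamp)},
        {.type = 'f', .buffer = &flags,     .size = sizeof(flags)},
    };

    int err = lfs_setattrs(&lfs, "hello.txt", attrs, 2);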
To atomically commit both custom attributes and file updates, we need a
new API, lfs_file_setattr. Unfortunately the semantics are a bit more
confusing than lfs_setattr, since the attributes aren't written out
immediately.
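Something along these lines, where the attributes only reach disk as part
of the file's next commit; the exact signature here is an assumption:

    // Attributes set on an open file aren't written immediately, they ride
    // along with the next file commit (sync or close).
    uint32_t timestamp = 1534615256;
    struct lfs_attr attrs[] = {
        {.type = 't', .buffer = &timestamp, .size = sizeof(timestamp)},
    };

    lfs_file_open(&lfs, &file, "hello.txt", LFS_O_WRONLY | LFS_O_CREAT);
    lfs_file_setattr(&lfs, &file, attrs, 1);     // assumed signature
    lfs_file_write(&lfs, &file, buffer, size);
    lfs_file_close(&lfs, &file);                 // data + timestamp commit together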
A much requested feature (mostly because of littlefs's notable lack of
timestamps), this commit adds support for user-specified custom
attributes.
Planned (though underestimated) since v1, custom attributes provide a
route for OSs and applications to provide their own metadata in
littlefs, without limiting portability.
However, unlike custom attributes that can be found on much more
powerful PC filesystems, these custom attributes are very limited,
intended for only a handful of bytes for very important metadata. Each
attribute has only a single byte to identify the attribute, and the
size of all attributes attached to a file is limited to 64 bytes.
Custom attributes can be accessed through the lfs_getattr, lfs_setattr,
and lfs_removeattr functions.
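For something like a timestamp, usage looks roughly like this (treat the
signatures as a sketch, they may not match this commit exactly):

    // A 't' attribute holding a 32-bit timestamp.
    uint32_t timestamp = 1534615256;
    lfs_setattr(&lfs, "hello.txt", 't', &timestamp, sizeof(timestamp));

    uint32_t readback = 0;
    lfs_getattr(&lfs, "hello.txt", 't', &readback, sizeof(readback));

    // and when the metadata is no longer wanted
    lfs_removeattr(&lfs, "hello.txt", 't');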
One of the big benefits of inline files is that small files no longer need to
take up a full block. This opens up an opportunity to provide much better
support for storage devices with only a handful of very large blocks, such
as the internal flash found on most microcontrollers.
After investigating some use cases for a filesystem on internal flash,
it has become apparent that the 255-byte limit is going to be too
restrictive to be useful in many cases. Most uses I found needed files
~4-64 bytes in size, but it wasn't uncommon to find files ~512 bytes in
length.
To try to remedy this, I've pushed the 255 byte limit up to 1023 bytes,
by stealing some bits from the previously-unused attributes' size field.
Unfortunately this limits attributes to 63 bytes in total and has a
minor code cost, but I'm not sure even 1023 bytes will be sufficient for
a lot of cases.
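One way the numbers work out, assuming the two old 8-bit size fields are
merged into a single 16-bit field split 10/6 (this split is inferred from
the 1023/63 limits, not necessarily the actual encoding):

    // Hypothetical packing consistent with the new limits: 10 bits of
    // inline/entry size (up to 1023) and 6 bits of attribute size (up to 63).
    #define LFS_ENTRY_SIZE(packed)  ((uint16_t)(packed) & 0x3ff)
    #define LFS_ATTRS_SIZE(packed)  (((uint16_t)(packed) >> 10) & 0x3f)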
The littlefs will probably never be as efficient with internal flash as
other filesystems such as SPIFFS; it just wasn't designed for this sort of
limited geometry. However, this feature has been heavily requested, even
with limitations, because of the opportunity for code reuse on
microcontrollers with both internal and external flash.
Being a portable, microcontroller-scale embedded filesystem, littlefs is
presented with a relatively unique challenge. The amount of RAM
available is on completely different scales from machine to machine, and
what is normally a reasonable RAM assumption may break completely on an
embedded system.
A great example of this is file names. On almost every PC these days, the limit
for a file name is 255 bytes. It's a very convenient limit for a number
of reasons. However, on microcontrollers, allocating 255 bytes of RAM to
do a file search can be unreasonable.
The simplest solution (and one that has existed in littlefs for a
while) is to let this limit be redefined to a smaller value on devices
that need to save RAM. However, this presents an interesting portability
issue. If these devices are plugged into a PC with relatively infinite
RAM, nothing stops the PC from writing files with full 255-byte file
names, which can't be read on the small device.
One solution here is to store this limit on the superblock during format
time. When mounting a disk, the filesystem implementation is responsible for
checking this limit in the superblock. If it's larger than what can be
read, raise an error. If it's smaller, respect the limit on the
superblock and raise an error if the user attempts to exceed it.
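In rough pseudocode, with assumed superblock field names, the mount-time
check is:

    // Sketch of the mount-time check; field names are assumptions.
    if (superblock.name_max > LFS_NAME_MAX) {
        // disk was formatted with larger limits than this build can read
        return LFS_ERR_INVAL;
    } else if (superblock.name_max < LFS_NAME_MAX) {
        // respect the smaller on-disk limit from here on
        lfs->name_max = superblock.name_max;
    }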
In this commit, this strategy is adopted for file names, inline files,
and the size of all attributes, since these could impact the memory
consumption of the filesystem. (Recording the attributes' limit is
iffy, but it is the only other arbitrary limit and could be used to
disable support of custom attributes.)
Note! This change makes it very important to configure littlefs
correctly at format time. If littlefs is formatted on a PC without
changing the limits appropriately, it will be rejected by a smaller
device.
This move was surprisingly complex, but offers the ultimate opportunity for
code reuse in terms of resizable entries. Instead of needing to provide
separate functions for adding and removing entries, both operations can
just be viewed as changing an entry's size to and from zero.
Unfortunately, it's not _quite_ that simple, since append and remove
hide some relatively complex operations for when directory blocks
overflow or need to be cleaned up.
However, with enough shoehorning, and a new committer type that allows
specifying recursive commit lists (is this now a push-down automaton?),
it does seem to be possible to shove all of the entry update logic into
a single function.
Sidenote, I switched back to an enum-based DSL, since the addition of a
recursive region opcode breaks the consistency of what needs to be
passed to the DSL callback functions. It's much simpler to handle each
opcode explicitly inside a recursive lfs_commit_region function.
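Roughly, the sketch of the enum-based DSL (names are illustrative, not the
actual code):

    enum lfs_region_type {
        LFS_FROM_MEM,     // new data copied from a RAM buffer
        LFS_FROM_DISK,    // data copied from an existing on-disk entry
        LFS_FROM_REGION,  // a nested list of regions, handled recursively
    };

    // lfs_commit_region can then switch on each opcode explicitly and call
    // itself when it hits LFS_FROM_REGION, instead of squeezing every
    // opcode's arguments through a single generic callback signature.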
Making the superblock look like "just another entry" allows us to treat
the superblock like "just another entry" and reuse a decent amount of
logic that would otherwise only be used at format and mount time. In this
case we can use append to write out the superblock like it was creating
a new entry on the filesystem.
Now when a file overflows the max inline file size, it will be correctly
written out to a proper block. Additionally, tweaked corner cases around
inline files; however, this still needs significant testing.
A real neat part that surprised me is that littlefs _already_ contains
the logic for writing out inline files: in lfs_file_relocate! With a bit
of tweaking, littlefs can pull off both the overflow from inline to
normal files _and_ the relocating of bad blocks in files with the same
piece of logic.
Proof-of-concept implementation of inline files that stores the file's
content directly in its parent's directory pair.
Inline files are indicated by a different type stored in an entry's
struct field, and take advantage of resizable entries. Where a normal
file's entry would normally hold the reference to the CTZ skip-list, an
inline file's entry contains the contents of the actual file.
Unfortunately, storing the inline file on disk is the easy part. We also
need to manage inline files in the internals of littlefs and provide the
same operations that we do on normal files, all while reusing as much
code as possible to avoid a significant increase in code cost.
There is a relatively simple, though maybe a bit hacky, solution here. If a
file fits entirely in a cache line, the file logic never actually has to go to
disk. This means we can just give the file a "pretend" block (hopefully
one that would assert if ever written to), and carry out file operations
as normal, as long as we catch the file before it exceeds the cache line
and write out the file to an actual disk.
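A rough sketch of the trick; the sentinel value and field names are
assumptions:

    // Give an inline file a "pretend" block that must never reach the block
    // device, so the normal file logic can run purely against the cache.
    #define LFS_BLOCK_INLINE ((lfs_block_t)-2)

    file->head = LFS_BLOCK_INLINE;  // file logic proceeds as if it had a block
    // ...and an assert in the low-level read/prog path catches any attempt
    // to actually touch the pretend block before the file is written out:
    // LFS_ASSERT(block != LFS_BLOCK_INLINE);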
The size field is redundant, since an entry's size can be determined
from the nlen+elen+alen+4. However, as you may have guessed from that
expression, calculating the size this way is a bit roundabout and
inefficient. Despite its redundancy, it's cheaper to store the size in the
entry, though with a minor RAM cost.
Note, extra care must now be taken to make sure these size and len fields
don't fall out of sync.
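The relation in question, written out (the exact struct access path is an
assumption):

    // An entry's size is fully determined by its length fields:
    //   4 header bytes + entry struct (elen) + attributes (alen) + name (nlen)
    // storing it in RAM just avoids recomputing this on every access.
    entry->size = 4 + entry->d.elen + entry->d.alen + entry->d.nlen;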
Tweaked the commit callback to pass the arguments for from-memory
commits explicitly, with non-from-memory commits still being able to
hijack the opaque data pointer for additional state.
The from-memory commits make up the vast majority of commits in
littlefs, so this small change has a noticeable impact.
Before, when appending new entries to a directory, we would try to find
empty space in the last block of a directory chain. This has a nice
side-effect that
the order of directory entries is maintained. However, this isn't strictly
necessary.
We're already scanning the directory chain in order, so other than changes to
directory order, there's no downside to taking advantage of any free
space we come across.
Now, instead of passing an enum for mem/disk commits, we pass a function
pointer that can specify any behaviour.
This has the benefit of opening up the possibility to pass any sort of
commit logic to the committers, and unused logic can be garbage-collected
by the compiler. The downside is that compilers unfortunately have a
harder time optimizing around function pointers than enums, and
fitting the state into structs for the callbacks may be costly.
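The shape of the change, roughly (names and the exact callback signature
are assumptions):

    // Each commit entry now carries a callback describing how to get its
    // data onto disk, instead of an enum selecting mem vs disk.
    struct lfs_commit_entry {
        int (*f)(lfs_t *lfs, void *p, lfs_block_t block, lfs_off_t off);
        void *p;          // opaque state for the callback
        lfs_size_t size;  // how many bytes the callback will write
    };

    // lfs_commit_mem and lfs_commit_disk become two interchangeable
    // callbacks, and any that are never referenced can be dropped by the
    // linker.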
Before, tags were implicitly updated by the dir update functions, which
have a strong understanding of the entry struct. However, most of the
time the tag was already a part of the entry struct being committed.
By making tag updates explicit, this does add cost to commits that
now have to pass tag updates explicitly, but it reduces cost where that
tag and entry update can be combined into one commit region.
It also simplifies the dir update functions.
Now, with the off, diff, and len parameters in each commit entry, we can build
up directory commits that resize entries. This adds complexity but opens
up the directory blocks to be much more flexible.
The main concern is that resizing entries can push around neighboring entries
in surprising ways, such as pushing them into new directory blocks when a
directory splits. This can break littlefs's internal logic in how it tracks
in-flight entries, the most problematic example being open files.
Fortunately, this is helped by a global linked-list of all files and
directories opened by the filesystem. As entries change size, the state
of open files/dirs may be updated as needed. Note this already needed to
exist for the ability to remove files/dirs, which has the same issue.
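Conceptually the fixup looks something like this; the handle fields are
based on how littlefs tracks an open file's position in its directory, but
the exact names are assumptions:

    // After resizing an entry by `diff` bytes at offset `off` in `dir`,
    // walk the open-file list and shift any handles sitting past the change.
    for (lfs_file_t *f = lfs->files; f; f = f->next) {
        if (lfs_paircmp(f->pair, dir->pair) == 0 && f->poff > off) {
            f->poff += diff;  // entry moved within (or out of) this block
        }
    }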
Experimental implementation. This opens up the opportunity to use the same
commit description for both commits and appends, which effectively do the same
thing.
This should lead to better code reuse.
Really all this means is that the internal commit function was changed
from taking an array of "commit structures" to a linked-list of "commit
structures". The benefit of a linked-list is that layers of commit
functions can pull off some minor modifications to the description of
the commit. Most notably, commit functions can add additional entries
that will be atomically written out and CRCed along with the initial
commit.
Also a minor benefit, this is one less parameter when committing a
directory with zero entries.
Previously, commits could only be sourced from RAM. This meant any
entries had to be buffered in their entirety before they could be moved
to a different directory pair. By adding parameters for specifying
commits from existing entries stored on disk, we allow any sized entries
to be moved between directory pairs with a fixed RAM cost.
The separation of data-structure vs entry type has been implicit for a
while now, and even taken advantage of to simplify the traverse logic.
Explicitly separating the data-struct and entry types allows us to
introduce new data structures (inlined files).
As pointed out by davidefer, the lookahead pointer modular arithmetic
does not survive integer overflow when the pointer's range is not a
multiple of the block count.
To avoid overflow problems, the easy solution is to stop trying to
work around integer overflows and keep the lookahead offset inside the
block device. To make this work, the ack was modified into a resettable
counter that is decremented every block allocation.
As a plus, quite a bit of the allocation logic ended up simplified.
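The core of the new loop, approximately; the field names are a guess at
the free/lookahead structure:

    // Keep the lookahead offset inside [0, block_count) so the modular
    // arithmetic never relies on uint32_t overflow behaving nicely.
    lfs->free.off = (lfs->free.off + 1) % lfs->cfg->block_count;

    // The ack is now a countdown, reset whenever the allocator is acked and
    // decremented on every block allocation; hitting zero means we've gone
    // a whole device's worth of allocations without an ack.
    lfs->free.ack -= 1;
    if (lfs->free.ack == 0) {
        return LFS_ERR_NOSPC;
    }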
One of the big simplifications in littlefs's implementation is the
complete lack of tracking free blocks, allowing operations to simply
drop blocks that are no longer in use.
However, this means the lookahead buffer can easily contain outdated
blocks that were previously deleted. This is usually fine, as littlefs
will rescan the storage if it can't find a free block in the lookahead
buffer, but after changes that caused littlefs to more conservatively
respect the alloc acks (e611cf5), any scanned blocks after an ack would
be incorrectly trusted.
The fix is to eagerly scan ahead in the lookahead when we allocate so
that alloc acks are better able to discredit old lookahead blocks. Since
usually alloc acks are tightly coupled to allocations of one or two blocks,
this allows littlefs to properly rescan every set of allocations.
This may still be a concern if there is a long series of worn out
blocks, but in the worst case littlefs will conservatively avoid using
blocks it's not sure about.
Found by davidefer
Like most of the lfs_dir_t functions, lfs_dir_append is responsible for
updating the lfs_dir_t struct if the underlying directory block is
moved. This property makes handling worn out blocks much easier by
reducing the amount of state that needs to be considered during a
directory update.
However, extending the dir chain is a bit of a corner case. It's not
changing the old block, but callers of lfs_dir_append do assume the
"entry" will reside in "dir" after lfs_dir_append completes.
This issue only occurs when creating files, since mkdir does not use
the entry after lfs_dir_append. Unfortunately, the tests against
extending the directory chain were all made using mkdir.
Found by schouleu
Before this patch, when calling lfs_mkdir or lfs_file_open with root
as the target, littlefs wouldn't find the path properly and would happily
run into undefined behaviour.
The fix is to populate a directory entry for root in the lfs_dir_find
function. As an added plus, this allowed several special cases around
root to be completely dropped.
Suggested by sn00pster, LFS_CONFIG is an opt-in user provided
configuration file that will override the util implementation in
lfs_util.h. This is useful for allowing system-specific overrides
without needing to rely on git merges or other forms of patching
for updates.
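For example, building with something like CFLAGS += -DLFS_CONFIG=my_lfs_config.h
(the file name is just an example); the override file is then responsible
for providing what lfs_util.h would normally define:

    // my_lfs_config.h, hypothetical example of a system-specific override
    #include <stdint.h>
    #include <stddef.h>

    // route asserts/logging into the host system (hypothetical hooks)
    #define LFS_ASSERT(test) my_system_assert(test)
    #define LFS_WARN(fmt, ...) my_system_log(fmt, ##__VA_ARGS__)

    // or back lfs_crc with CRC hardware
    uint32_t lfs_crc(uint32_t crc, const void *buffer, size_t size);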
Using gcc cross compilers and qemu:
- make test CC="arm-linux-gnueabi-gcc --static -mthumb" EXEC="qemu-arm"
- make test CC="powerpc-linux-gnu-gcc --static" EXEC="qemu-ppc"
- make test CC="mips-linux-gnu-gcc --static" EXEC="qemu-mips"
Also separated out Travis jobs and added some size reporting
Note: It's still expected to modify lfs_util.h when porting littlefs
to a new target/system. There's just too much room for system-specific
improvements, such as taking advantage of CRC hardware.
Rather, encouraging modification of lfs_util.h and making it easy to
modify and debug should result in better integration with the consuming
systems.
This just adds a bunch of quality-of-life improvements that should help
development and integration in littlefs.
- Macros that require no side-effects are all-caps
- System includes are only brought in when needed
- Malloc/free wrappers
- LFS_NO_* checks for quickly disabling things at the command line
- At least a little-bit more docs
Required to support big-endian processors, with the most notable being
the PowerPC architecture.
On little-endian architectures, these conversions can be optimized out
and have no code impact.
Initial patch provided by gmouchard
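A sketch of what one of these conversion helpers can look like; exact
names and guards in lfs_util.h may differ:

    // Convert a little-endian-encoded 32-bit value to the native byte
    // order; a no-op the compiler can drop entirely on little-endian targets.
    static inline uint32_t lfs_fromle32(uint32_t a) {
    #if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
        return a;
    #else
        uint8_t *b = (uint8_t*)&a;
        return ((uint32_t)b[0] <<  0) |
               ((uint32_t)b[1] <<  8) |
               ((uint32_t)b[2] << 16) |
               ((uint32_t)b[3] << 24);
    #endif
    }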
This helps significantly with supporting different compilers. Intrinsics for
different compilers can be added as they are found.
Note that for ARMCC, __builtin_ctz is not used. This was the result of a
strange issue where ARMCC only emits __builtin_ctz when passed the
--gnu flag, but __builtin_clz and __builtin_popcount are always emitted.
This isn't a big problem since the ARM instruction set doesn't have a
ctz instruction, and the npw2 based implementation is one of the most
efficient.
Also note that for littlefs's purposes, we consider ctz(0) to be
undefined. This lets us save a branch in the software lfs_ctz
implementation.
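Roughly what the software fallback looks like, assuming a GCC-style
__builtin_clz is available (treat this as a sketch of the approach, not
the exact code):

    // npw2: smallest n such that 2^n >= a, valid for a > 1
    static inline uint32_t lfs_npw2(uint32_t a) {
        return 32 - __builtin_clz(a-1);
    }

    // ctz built on npw2: (a & -a) isolates the lowest set bit; because
    // ctz(0) is left undefined, no extra branch is needed for the zero case
    static inline uint32_t lfs_ctz(uint32_t a) {
        return lfs_npw2((a & -a) + 1) - 1;
    }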
Rather than tracking all in-flight blocks during a lookahead,
littlefs uses an ack scheme to mark the first allocated block that
hasn't reached the disk yet. littlefs assumes all blocks since the
last ack are bad or in-flight, and uses this to know when it's out
of storage.
However, these unacked allocations were still being populated in the
lookahead buffer. If the whole block device fits in the lookahead
buffer, _and_ littlefs managed to scan around the whole storage while
an unacked block was still in-flight, it would assume the block was
free and misallocate it.
The fix is to only fill the lookahead buffer up to the last ack.
The internal free structure was restructured to simplify the runtime
calculation of lookahead size.
- Write on read-only file to return LFS_ERR_BADF
- Renaming directory onto file to return LFS_ERR_NOTEMPTY
- Changed LFS_ERR_INVAL in lfs_file_seek to assert