This is possible thanks to invoxiaamo's optimization of compacting
renames to avoid the O(n^3) nested filters. Not only does this
significantly reduce the runtime cost of that operation, but it
reduces the maximum possible depth of recursion to 3 frames.
Deepest lfs_dir_traverse before:

    traverse with commit
    '-> traverse with filter
        '-> traverse with move
            '-> traverse with filter
Deepest lfs_dir_traverse after:

    traverse with commit
    '-> traverse with move
        '-> traverse with filter
There are two issues when using the file-based block device emulation
on Windows platforms:
1. There is no fsync implementation available. This needs to be mapped
to a Windows-specific FlushFileBuffers system call.
2. The block device file needs to be opened as a binary file (O_BINARY).
The corresponding flag is not required on Linux.
Rename can be VERY time consuming. One of the reasons is the 4-level
recursion depth of lfs_dir_traverse() seen if a compaction happens during
the rename.
lfs_dir_compact()
  size computation
  [1] lfs_dir_traverse(cb=lfs_dir_commit_size)
      - do 'duplicates and tag update'
    [2] lfs_dir_traverse(cb=lfs_dir_traverse_filter, data=tag[1])
        - Reaching a LFS_FROM_MOVE tag (here)
      [3] lfs_dir_traverse(cb=lfs_dir_traverse_filter, data=tag[1]) <= on 'from' dir
          - do 'duplicates and tag update'
        [4] lfs_dir_traverse(cb=lfs_dir_traverse_filter, data=tag[3])
followed by the compaction itself:
  [1] lfs_dir_traverse(cb=lfs_dir_commit_commit)
      - do 'duplicates and tag update'
    [2] lfs_dir_traverse(cb=lfs_dir_traverse_filter, data=tag[1])
        - Reaching a LFS_FROM_MOVE tag (here)
      [3] lfs_dir_traverse(cb=lfs_dir_traverse_filter, data=tag[1]) <= on 'from' dir
          - do 'duplicates and tag update'
        [4] lfs_dir_traverse(cb=lfs_dir_traverse_filter, data=tag[3])
Yet analysis shows that levels [3] and [4] don't perform any useful work
when the callback is lfs_dir_traverse_filter...
A practical example:
- format and mount a 4KB-block FS
- create 100 files of 256 bytes named "/dummy_%d"
- create a 1024-byte file "/test"
- rename "/test" to "/test_rename"
- create a 1024-byte file "/test"
- rename "/test" to "/test_rename"
This triggers a compaction where lfs_dir_traverse was called 148,393 times,
generating 25e6+ lfs_bd_read calls (~100 MB+ of data).
With the optimization, lfs_dir_traverse is now called 3,248 times
(589e3 lfs_bd_read calls, ~2.3 MB of data)
=> a ~43x improvement...
- nit: Moving brace to end of if statement line for consistency
- mount: add more debug info per CR
- Fix compiler error from extra parentheses
- Fix superblock typo
The basic idea is simple: if we seek to a position in the currently
loaded cache, don't flush the cache. Notably this ensures that seek is
always as fast or faster than just reading the data.
This is a bit tricky since we need to check that our new block and
offset match the cache, fortunately we can skip the block check by
reevaluating the block index for both the current and new positions.
Note this only works when reading; for writing we need to always flush
the cache, or else we would lose the pending write data.
Interestingly this was introduced by two different PRs which were not tested
together until pre-release testing:
- Fix lfs_file_seek doesn't update cache properties correctly
- Fix compiler warnings when LFS_READONLY defined
BUG CASE: Assume there are 6 blocks in littlefs, with blocks 0,1,2,3
already allocated and block 0 having a tail pair of {2, 3}. Now we try to
write more into 0.
When writing to block 0, we will split (FIRST SPLIT), thus allocating
blocks 4 and 5. Up to now, everything is as expected.
Then we try to commit in block 4, during which a split (SECOND SPLIT) is
triggered again (in our case, some files are large and some are small, so
one split may not be enough). Still as expected.
The BUG happens when we try to allocate a new block pair for the second
split: as the lookahead buffer has reached its end, a new lookahead buffer
is generated from the flash content, and blocks 4 and 5 appear as unused
blocks in the new lookahead buffer because they have not been programmed
yet. However, blocks 4 and 5 are already occupied by the first split! The
result is that blocks 4 and 5 are allocated again (this is where things go
wrong).
Commit ce2c01f introduced this bug: it inserted an lfs_alloc_ack into
lfs_dir_split, which causes split to reset lfs->free.ack to the block
count.
In summary, this problem exists after 2.1.3.
Solution: don't call lfs_alloc_ack in lfs_dir_split.
lfs_dir_traverse is a bit unpleasant in that it is inherently a
recursive function, but with a strict bound of 4 calls (commit -> filter ->
move -> filter), and efforts to unroll the recursion come at a
significant code cost.
It turns out the best solution I've found so far is to simply create an
explicit stack with an explicit bound of 4 calls (or more accurately,
3 pushed frames).
---
This actually highlights one of the bigger flaws in littlefs right now,
which is that this function, lfs_dir_traverse, takes O(n^2) disk reads
to traverse.
Note that LFS_FROM_MOVE can only occur once per commit, which is why
this code is O(n^2) and not O(n^4).
This mostly just required separate functions for "lfs_file_rawwrite" and
"lfs_file_flushedwrite", since lfs_file_flush recursively invokes
lfs_file_rawread and lfs_file_rawwrite.
This comes at a code cost, but gives us bounded and measurable RAM usage
on this code path.
lfs_dir_commit originally relied heavily on tail-recursion, though at
least one path (through relocations) was not tail-recursive, and could
cause unbounded stack usage in extreme cases of bad blocks. (Keep in
mind even extreme cases of bad blocks should be in scope for littlefs).
In order to remove recursion from this code path, several changes were
required:
- The lfs_dir_compact logic had to be somewhat inverted. Instead of
first compacting and then resolving issues such as relocations and
orphans, the overarching lfs_dir_commit now contains a state-machine
which after committing or compacting handles the extra changes to the
filesystem in a single, non-recursive loop
- Instead of fixing all relocations recursively, >1 relocation requires
deferring to a full deorphan step. This step is unfortunately an
additional n^2 process. It also required some changes to lfs_deorphan
in order to ignore intentional orphans created as an intermediary in
lfs_mkdir. Maybe in the future we should remove these.
- Tail recursion normally found in lfs_fs_deorphan had to be rewritten
as a loop which restarts any time a new commit causes a relocation.
This does show that the algorithm may not terminate, but only if every
block is bad, which will eventually cause littlefs to run out of
blocks to write to.
As noted in Python's subprocess library:
> This will deadlock when using stdout=PIPE and/or stderr=PIPE and the
> child process generates enough output to a pipe such that it blocks
> waiting for the OS pipe buffer to accept more data.
Curiously, this only became a problem when updating to Ubuntu 20.04
in CI (python3.6 -> python3.8).
Avoids redundant counting of structs shared in multiple .c files, which
is very common. This is different from the other scripts,
code.py/data.py/stack.py, but this difference makes sense as struct
declarations have a very different lifetime.
Introduced when updating CI to Ubuntu 20.04: Ubuntu snaps consume
loop devices, which conflicts with our assumption that /dev/loop0
will always be unused. Changed to request a dynamic loop device from
losetup, though it would have been nice if Ubuntu snaps allocated
from the last device or something.
This requires parsing an additional section of the dwarfinfo (--dwarf=rawlines)
to get the declaration file info.
---
Interpreting the results reported by ./scripts/structs.py is a bit more
complicated than for other scripts: structs aren't used in a consistent
manner, so the cost of a large struct depends on the context in which it
is used.
But that being said, there really isn't much reason to report
internal-only structs. These structs really only exist for type-checking
in internal algorithms, and their cost will end up reflected in other RAM
measurements, either stack, heap, or other.
GCC is a bit frustrating here, it really wants to generate every file in
a single command, which _is_ more efficient if our build system could
leverage this. But -fcallgraph-info is a rather novel flag, so we can't
really rely on it for generally compiling and testing littlefs.
The multi-file output gets in the way when we want an explicitly
separate rule for callgraph-info generation. We can't generate the
callgraph-info without generating the object files.
This becomes a surprising issue when parallel building (make -j) is used!
Suddenly we might end up with both the .o and .ci rules writing to .o
files, which creates a really difficult to track down issue of corrupted
.o files.
The temporary solution is to use an order-only prerequisite. This still
ends up building the .o files twice, but it's an acceptable tradeoff for
not requiring the -fcallgraph-info for all builds.
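The order-only form looks something like this (a hypothetical rule, not littlefs's actual Makefile; the `|` separates normal prerequisites from order-only ones):

```make
# %.c is a normal prerequisite; %.o (after the |) is order-only: make
# finishes the %.o rule before running this recipe, avoiding concurrent
# writes to the .o under make -j, but a newer %.o alone won't retrigger
# this rule. The recipe still rewrites the .o, hence "built twice".
%.ci: %.c | %.o
	$(CC) $(CFLAGS) -fcallgraph-info=su -c $< -o $*.o
```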
Using errors=replace in python utf-8 decoding makes these scripts more
resilient to underlying errors, rather than just throwing an unhelpfully
generic decode error.
- Added to GitHub statuses (61 results)
- Reworked generated release table to include these (16 results, only thumb)
These also required a surprisingly large number of other changes:
- Bumped CI Ubuntu version 18.04 -> 20.04, 22.04 is already on the
horizon but not usable in GitHub yet
- Manually upgraded to GCC v10, this is required for the -fcallgraph-info
flag that scripts/stack.py uses.
- Increased paginated status queries to 100 per-page. If we have more
statuses than this the status diffs may get much more complicated...
- Forced whitespace in generated release table to always be nbsp. GitHub
tables get scrunched rather ugly without this, preferring margins to
readable tables.
- Added limited support for "∞" results, since this is returned by
./scripts/stack.py for recursive functions.
As a side-note, this increases the number of statuses reported
per-commit from 6 to 61, so hopefully that doesn't cause any problems...
- Changed `make summary` to show a one line summary
- Added `make lfs.csv` rule, which is useful for finding more info with
other scripts
- Fixed small issue in ./scripts/summary.py
- Added *.ci (callgraph) and *.csv (script output) to CI
A full summary of static measurements (code size, stack usage, etc) can now
be found with:
make summary
This is done through the combination of a new ./scripts/summary.py
script and the ability of existing scripts to merge into existing csv
files, allowing multiple results to be merged either in a pipeline, or
in parallel with a single ./scripts/summary.py call.
The ./scripts/summary.py script can also be used to quickly compare
different builds or configurations. This is a proper implementation
of a similar but hacky shell script that has already been very useful
for making optimization decisions:
$ ./scripts/summary.py new.csv -d old.csv --summary
name (2 added, 0 removed) code stack structs
TOTAL 28648 (-2.7%) 2448 1012
Also some other small tweaks to scripts:
- Removed state saving diff rules. This isn't the most useful way to
handle comparing changes.
- Added short flags for --summary (-Y) and --files (-F), since these
are quite often used.
- Added -L/--depth argument to show dependencies for scripts/stack.py,
this replaces calls.py
- Additional internal restructuring to avoid repeated code
- Removed incorrect diff percentage when there is no actual size
- Consistent percentage rendering in test.py
This required a patch to the --diff flag for the scripts to ignore
a missing file. This enables the useful one liner for making comparisons
with potentially missing previous versions:
./scripts/code.py lfs.o -d lfs.o.code.csv -o lfs.o.code.csv
function (0 added, 0 removed) old new diff
TOTAL 25476 25476 +0
One downside: these previous files are easy to delete as a part of make
clean, which limits their usefulness for comparing configuration
changes...