- Now test errors have correct line reporting! #line directives are
passed to the compiler so that errors reference the relevant line in
the test case shell script.
--- Multi-block directory ---
./tests/test_dirs.sh:109: assert failed with 0, expected 1
lfs_unmount(&lfs) => 1
- Cleaned up the number of implicit global variables provided to
tests. A lot of these were infrequently used and made it difficult
to remember what was provided. This isn't an MCU, so there's very
little cost to stack allocations when needed.
- Minimized the results.py script (previously stats.py) output to
match minimization of test output.
Kind of a two-fold issue. One, programming to the middle of inline
files was causing the cache to be updated to a half-programmed state.
While correct, since all programs occur in order within a block, this is
less efficient when writing to inline files, as it would cause the
inline file to be reread even if it fits in the cache.
Two, the rereading of the inline file was broken and passed the file's
tag all the way to where a user would expect an error. This was easy to
fix but adds to the reasons we should have test coverage information.
Found by ebinans
The cause was mistakenly setting file->ctz.size directly instead of
file->pos; file->ctz.size gets overwritten with file->pos later in
lfs_file_flush.
Also added better seek test cases specifically for inline files. This
should also catch most of the inline corner cases related to
lfs_file_size/lfs_file_tell.
Found by ebinans
The problem was not setting the file state correctly after the truncate.
To truncate < size, we end up using the cache to traverse the ctz
skip-list far away from where our file->pos is.
We can leave the last block in the cache in case we're going to append
to the file, but if we do this we need to set up file->block+file->off
to tell us where we are in the file, and set the LFS_F_READING flag to
indicate that our cache contains read data.
Note this is different from LFS_F_DIRTY, which we also need. The
purposes of the flags are as follows:
- LFS_F_DIRTY - file ctz skip-list branch is out of sync with
filesystem, need to update metadata
- LFS_F_READING - file cache is in use for reading, need to drop cache
- LFS_F_WRITING - file cache is in use for writing, need to write out
cache to disk
The difference between flags is subtle but important because read/prog
caches are handled differently. Prog caches have asserts in place to
catch programs without erases (the infamous pcache->block == 0xffffffff
assert).
Though maybe the names deserve an update...
Found by ebinans
- Fixed uninitialized values found by valgrind.
- Fixed uninitialized value in lfs_dir_fetchmatch when handling revision
counts.
- Fixed mess left by lfs_dir_find when attempting to find the root
directory in lfs_rename and lfs_remove.
- Fixed corner case with definitions of lfs->cfg->block_cycles.
- Added test cases around different forms of the root directory.
I think all of these were found by TheLoneWolfling, so props!
This was caused by any commit containing entries large enough to
_always_ force a compaction. This would cause littlefs to think that it
would need to split infinitely because there was no base case.
The fix here is pretty simple: treat any commit with only a single entry
as unsplittable. This forces littlefs to first try overcompacting
(fitting more in a block than what has optimal runtime), and then,
failing that, return LFS_ERR_NOSPC for higher layers to handle.
Found by TheLoneWolfling
While linked-lists do have some minor benefits, arrays are more
idiomatic in C and may provide a more intuitive API.
Initially the linked-list approach was more beneficial than it is now,
since it allowed custom attributes to be chained to internal linked
lists of attributes. However, this was dropped because exposing the
internal attribute list in this way created a rather messy user
interface that required strictly encoding the attributes with the
on-disk tag format.
Minor downside, users can no longer introduce custom attributes in
different layers (think OS vs app). Minor upside, the code size and
stack usage was reduced a bit.
Fortunately, this API can always be changed in the future without
breaking anything (except maybe API compatibility).
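For illustration, a rough sketch of the shape of this change (field
names here are illustrative, not an exact copy of lfs.h):

    #include <stdint.h>

    typedef uint32_t lfs_size_t; // stand-in for the type in lfs.h

    // before: attributes chained through an exposed linked-list
    struct lfs_attr_chained {
        uint8_t type;                  // user-specified attribute type
        void *buffer;                  // attribute value
        lfs_size_t size;               // size of the value
        struct lfs_attr_chained *next; // internal chaining, now dropped
    };

    // after: a flat array plus a count, more idiomatic in C
    struct lfs_attr {
        uint8_t type;
        void *buffer;
        lfs_size_t size;
    };
    // usage: struct lfs_attr attrs[2] = {...}; passed with a count of 2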
Before, the tag format's type field was limited to 9-bits. This sounds
like a lot, but this field needed to encode up to 256 user-specified
types. This limited the flexibility of the encoded types. As time went
on, more bits in the type field were repurposed for various things,
leaving a rather fragile type field.
Here we make the jump to full 11-bit type fields. This comes at the cost
of a smaller length field, however the use of the length field was
always going to come with a RAM limitation. Rather than putting pressure
on RAM for inline files, the new type field lets us encode a chunk
number, splitting up inline files into multiple updatable units. This
actually pushes the theoretical inline max from 8KiB to 256KiB! (Note
that we only allow a single 1KiB chunk for now; chunky inline files
are just a theoretical future improvement).
Here is the new 32-bit tag format, note that there are multiple levels
of types which break down into more info:
[----            32             ----]
[1|--  11   --|--  10  --|--  10  --]
 ^.     ^     .     ^          ^- entry length
 |.     |     .     \------------ file id
 |.     \-----.------------------ type info (type3)
 \.-----------.------------------ valid bit
  [-3-|-- 8 --]
    ^     ^- chunk info
    \-------- type info (type1)
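As a sketch, packing and unpacking tags under this layout might look
like the following (helper names are illustrative):

    #include <stdint.h>

    // tag layout: [1 valid | 11 type3 | 10 id | 10 length]
    static inline uint32_t mktag(uint32_t type, uint32_t id, uint32_t size) {
        return (type << 20) | (id << 10) | (size << 0);
    }

    // the 11-bit type3 field further splits into 3-bit type1 + 8-bit chunk
    static inline uint32_t tagtype3(uint32_t tag) { return (tag >> 20) & 0x7ff; }
    static inline uint32_t tagtype1(uint32_t tag) { return (tag >> 28) & 0x7;   }
    static inline uint32_t tagchunk(uint32_t tag) { return (tag >> 20) & 0xff;  }
    static inline uint32_t tagid(uint32_t tag)    { return (tag >> 10) & 0x3ff; }
    static inline uint32_t taglen(uint32_t tag)   { return (tag >>  0) & 0x3ff; }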
Additionally, I've split the CREATE tag into separate SPLICE and NAME
tags. This simplified the new compact logic a bit. For now, littlefs
still follows the rule that a NAME tag precedes any other tags related
to a file, but this can change in the future.
There was an interesting subtlety with the existing layout of tags that
could become a problem in the future. Basically, littlefs avoids writing to
any region of storage it is not absolutely sure has been erased
beforehand. This is a part of limiting the number of assumptions about
storage. It's possible a storage technology can't support writes without
erases in a way that is undetectable at write time (Maybe changing a bit
without an erase decreases the longevity of the information stored on
the bit).
But the existing layout had a very tiny corner case where this wasn't
true. Consider the location of the valid bit in the tag struct:
[1|--- 31 ---]
 ^--- valid bit
The responsibility of this bit is to indicate if an attempt has been
made to write the following commit. If it is not set (the specific value
is dependent on a previous read and identified by the preceding commit),
the assumption is that it is safe to write to the next region because it
has been erased previously. If it is set, we check if the next commit is
valid, if it isn't (because of CRC failure, likely due to power-loss), we
discard the commit. But because an attempt has been made to write to
that storage, we must then do a compaction to move to the other block in
the metadata-pair.
This plan looks good on paper, but what does it look like on storage?
The problem is that words in littlefs are in little-endian. So on
storage the tag actually looks like this:
[- 8 -|- 8 -|- 8 -|1|- 7 -]
                   ^-- valid bit
This means that we don't actually set the valid bit before writing the
tag! We write the lower bytes first. If we lose power, we may have
written 3 bytes without this fact being detectable.
We could restructure the tag structure to store the valid bit lower,
however because none of the fields are 7 bits, this would make the
extraction more costly, and we then lose the ability to check this
valid bit with a sign comparison.
The simple solution is to just store the tag in big-endian. A small
benefit is that this will actually have a negative code cost on
big-endian machines.
This mixture of endiannesses is frustrating, however it is a pragmatic
solution with only a 20-byte code size cost.
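As a sketch of what this looks like at the storage boundary (function
names are mine):

    #include <stdbool.h>
    #include <stdint.h>

    // serialize the tag most-significant byte first, so the valid bit
    // (bit 31) lands in the very first byte written to storage
    static void tag_write_be32(uint8_t *out, uint32_t tag) {
        out[0] = (uint8_t)(tag >> 24); // the byte with the valid bit
        out[1] = (uint8_t)(tag >> 16);
        out[2] = (uint8_t)(tag >>  8);
        out[3] = (uint8_t)(tag >>  0);
    }

    // in RAM the valid bit stays in the sign position, keeping the
    // cheap sign-comparison check mentioned above
    static bool tag_isset(uint32_t tag) {
        return (int32_t)tag < 0;
    }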
The issue happens when a rename causes a split in the destination pair.
If the destination pair is the same as the source pair, this triggers the
logic to keep both pairs in sync. Unfortunately, this logic didn't work,
because the source entry still resides in the old source pair, while the
destination now resides in the new pair created by the split.
The best fix for now is to refetch the source pair after the changes to the
destination pair. This isn't the most efficient solution, but fortunately
this bug has already been fixed in the revamped move logic in littlefs v2
(currently in progress).
Found by ohoc
This was an oversight on my part when adding strict ordering to
directories. Unfortunately now we can't take advantage of the atomic
creation of tail+dir entries. Now we need to first create the tail, then
create the actual directory entry. If we lose power, the orphan is
cleaned up like orphans created during remove.
Note that we still take advantage of the atomic tail+dir entries if we
are an end block. This is actually because this corner case is
complicated to _not_ do atomically, needing to update the directory we
just committed to.
A rather humorous issue, we accidentally ended up mixing our file
namespace with our superblocks. This meant if we created a file named
"littlefs" it would reference the superblock and all sorts of things
would break.
Fixing this also highlighted another issue, the fact that the superblock
always needs to come before any file entries in the directory. I didn't
account for this in the initial B-tree design, but we need a higher
ordering for superblocks + children + files than just name. To fix this
I added ordering information in the 2 bits currently unused in the tag
type. Though note that the sizes of these fields are flexible.
9-bit type field:
[---      9      ---]
[1|- 3 -|- 2 -|- 3 -]
 ^   ^     ^     ^- type-specific info
 |   |     \------ ordering info
 |   \------------ subtype
 \---------------- user bit
Instead of storing files in an arbitrary order, we now store files in
ascending lexicographical order by filename.
Although a big change, this actually has little impact on how littlefs
works internally. We need to support file insertion, and compare file
names to find our position. But since we already need to scan the entire
directory block, this adds relatively little overhead.
What this does allow is the potential to add B-tree support in the
future in a backwards compatible manner.
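As a sketch, finding the insertion point is just the linear scan we
were already paying for (dir_count/dir_name are hypothetical accessors
over a fetched directory):

    #include <string.h>

    int dir_count(const void *dir);
    const char *dir_name(const void *dir, int id);

    // returns the id where a new name belongs, keeping the directory
    // in ascending lexicographical order
    static int dir_insert_pos(const void *dir, const char *name) {
        int id = 0;
        while (id < dir_count(dir) &&
                strcmp(dir_name(dir, id), name) < 0) {
            id += 1;
        }
        return id;
    }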
How could you add B-trees to littlefs?
1. Add an optional "child" tag with a pointer that allows you to skip to
a position in the metadata-pair list that composes the directory
2. When splitting a metadata-pair (sound familiar?), we either insert a
second child tag in our parent, or we create a new root containing
the child tags.
3. Each layer needs a bit stored in the tail-pointer to indicate if
we're going to the next layer. This can be created trivially when we
create a new root.
4. During lookup we keep two pointers containing the bounds of our
search. We may need to iterate through multiple metadata-pairs in our
linked-list, but this gives us a O(log n) lookup cost in a balanced
tree.
5. During deletion we also delete any children pointers. Note that
children pointers must come before the actual file entry.
This gives us a B-tree implementation that is compatible with the
current directory layout (assuming the files are ordered). This means
that B-trees could be supported by a host PC and ignored on a small
device. And during power-loss, we never end up with a broken filesystem,
just a less-than-optimal tree.
Note that we don't handle removes, so it's possible for a tree to become
unbalanced. But worst case that's the same as the current linked-list
implementation.
All we need to do now is keep directories ordered. If we decide to drop
B-tree support in the future, or the B-tree implementation turns out to
be inherently flawed, we can just drop the ordered requirement without
breaking compatibility and recover the code cost.
The fact that the lookahead buffer uses bits instead of bytes is an
internal detail. Poking this through to the user API has caused a decent
amount of confusion. Most buffers are provided as bytes and the
inconsistency here can be surprising.
The use of bytes instead of bits also makes us forward compatible in
the case that we want to change the lookahead internal representation
(hint: segment list).
Additionally, we change the configuration name to lookahead_size. This
matches other configurations, such as cache_size and read_size, while
also notifying the user that something important changed at compile time
(by breaking).
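A hedged example of the renamed option (other configuration and the
block device functions are elided, values are illustrative):

    #include "lfs.h"

    static const struct lfs_config cfg = {
        // ... block device functions and geometry elided ...

        // lookahead is now configured in bytes, not bits; each byte
        // of the lookahead buffer tracks 8 blocks
        .lookahead_size = 16, // tracks 16*8 = 128 blocks per pass
    };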
This is a minor tweak that resulted from looking at some other use cases
for the littlefs data-structure on disk. Consider an implementation that
does not need to buffer inline-files in RAM. In this case we should have
as large a tag size field as possible. Unfortunately, we don't have much
space to work with in the 32-bit tag struct, so we have to make some
compromises. These limitations could be removed with a 64-bit tag
struct, at the cost of code size.
32-bit tag structure:
[---       32       ---]
[1|- 9 -|- 9 -|-- 13 --]
 ^   ^     ^      ^- entry length
 |   |     \------ file id
 |   \------------ tag type
 \---------------- valid bit
Conceptually these are two separate operations. However, they are both
only needed during mount, both require iteration over the linked-list of
metadata-pairs, and both are independent from each other.
Combining these into one gives us a nice code savings.
Additionally, this greatly simplifies the lookup of the root directory.
Initially we used a flag to indicate which superblock was root, since we
didn't want to fetch more pairs than we needed to. But since we're going
to fetch all metadata-pairs anyways, we can just use the last superblock
we find as the indicator of our root directory.
This follows from enabling tag deletion, however it does require some
consideration with the APIs.
Now we can remove custom attributes, as well as determine if an attribute
exists or not.
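A quick sketch of the new abilities (0x42 is an arbitrary attribute
type, lfs is assumed to be mounted, and I'm assuming the missing-attr
case reports LFS_ERR_NOATTR):

    #include "lfs.h"

    int attr_example(lfs_t *lfs) {
        // probing for existence: a read of a missing attribute fails
        uint8_t buffer[4];
        lfs_ssize_t res = lfs_getattr(lfs, "hello.txt", 0x42,
                buffer, sizeof(buffer));
        if (res == LFS_ERR_NOATTR) {
            return 0; // attribute does not exist, nothing to remove
        } else if (res < 0) {
            return res;
        }

        // the attribute exists; remove it
        return lfs_removeattr(lfs, "hello.txt", 0x42);
    }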
Currently unused, the insertion of new file entries in arbitrary
locations in a metadata-pair is very easy to add into the existing
metadata logging.
The only tricky things:
1. Name tags must strictly precede any tags related to a file. We can
pull this off during a compact, but must make two passes. One for the
name tag, one for the file. Though a benefit of this is that now our
scans during moves can exit early upon finding the name tag.
2. We need to handle name tags appearing out of order. This makes name
tags symmetric to deletes, although it doesn't seem like we can
leverage this fact very well. Note this also means we need to make
the superblock tag a type of name tag.
The valid bit present in tags is a requirement to properly detect the
end of commits in metadata logs. The way it works is that the CRC entry is
allowed to specify what is needed from the next tag's valid bit. If it's
incorrect, we've reached the end of the commit. We then set the valid bit to
indicate when we tried to program a new commit. If we lose power, this
commit will still be thrown out by a bad checksum.
However, the valid bit is unused outside of the CRC entry. Here we turn on the
valid bit for all tags, which means we have a decent chance of exiting early
if we hit a half-written commit. We still need to guarantee detection of
the valid bit on commits following the CRC entry, so we allow the CRC
entry to flip the expected valid bit.
The only tricky part is what valid bit we expect by default, since this
is used on the first commit on a metadata log. Here we default to a 1,
which gives us the fastest exit on blocks that erase to 0. This is
because blocks that erase to 1s will implicitly flip the valid bit of
the next tag, allowing us to exit on the next tag.
If we defaulted to 0, we could exit faster on disks that erase to 1, but
would need to scan the entire block on disks that erase to 0 before we
realize a CRC commit is never coming.
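Roughly, the early-exit check looks like this (a sketch; expected_valid
starts at 1 for a fresh block and each CRC entry may flip it):

    #include <stdbool.h>
    #include <stdint.h>

    // returns true when the scan should stop: the next tag's valid bit
    // doesn't match what the previous CRC entry told us to expect,
    // meaning no commit was ever programmed here
    static bool scan_done(uint32_t tag, bool expected_valid) {
        bool valid = tag >> 31; // the tag's program-attempt bit
        return valid != expected_valid;
    }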
While ECORRUPT is not a wrong error code, it doesn't match other
instances of hitting a corrupt block during write. During writes, if
blocks are detected as corrupt their data is evicted and moved to a new
clean block. This means that at the end of a disk's lifetime, exhaustion
errors will be reported as ENOSPC when littlefs can't find any new block
to store the data.
This has the benefit of matching behaviour when a new file is written
and no more blocks can be found, due to either a small disk or corrupted
blocks on disk. To littlefs it's like the disk shrinks in size over
time.
The initial implementation of inline files was thrown together fairly
quickly, however it has worked well so far and there hasn't been much
reason to change it.
One shortcut was to trick file writes into thinking they are writing to
imaginary blocks. This works well and reuses most of the file code
paths, as long as we don't flush the imaginary block out to disk.
Initially we did this by limiting inline_max to cache_max-1, ensuring
that the cache never fills up and gets flushed. This was a rather dirty
hack; the better solution, implemented here, is to handle the
representation of an "imaginary" block correctly all the way down into
the cache layer.
So now for files specifically, the value -1 represents a null pointer,
and the value -2 represents an "imaginary" block. This may become a
problem if the number of blocks approaches the max, however this -2
value is never written to disk and can be changed in the future without
breaking compatibility.
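As a sketch (names illustrative):

    #include <stdint.h>

    typedef uint32_t lfs_block_t;

    #define BLOCK_NULL   ((lfs_block_t)-1) // 0xffffffff, a null pointer
    #define BLOCK_INLINE ((lfs_block_t)-2) // 0xfffffffe, an "imaginary"
                                           // block, never written to disk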
This was a pretty simple oversight on my part. Conceptually, there's no
difference between lfs_fs_getattr and lfs_getattr("/"). Any operations
on directories can be applied "globally" by referring to the root
directory.
Implementation-wise, this actually fixes the "corner case" of storing
attributes on the root directory, which was broken since the root
directory doesn't have a related entry. Instead we need to use the root
superblock for this purpose.
Fewer functions means less code to document and maintain, so this is a
nice benefit. Now we just have a single lfs_getattr/setattr/removeattr set
of functions along with the ability to access attributes atomically in
lfs_file_opencfg.
This implements the second step of full dynamic wear-leveling, block
allocation randomization. This is the key part that uniformly distributes
wear across the filesystem, even through reboots.
The entropy actually comes from the filesystem itself, by xoring
together all of the CRCs in the metadata-pairs on the filesystem. While
this sounds like a ridiculous operation, it's easy to do when we already
scan the metadata-pairs at mount time.
This gives us a random number we can use for block allocation.
Unfortunately it's not a great general purpose random generator as the
output only changes every filesystem write. Fortunately that's exactly
when we need our allocator.
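The seeding itself is about as simple as it sounds; a sketch:

    #include <stdint.h>

    // fold each commit CRC seen during the mount scan into the seed;
    // xor is order-independent, so scan order doesn't matter
    static void seed_update(uint32_t *seed, uint32_t commit_crc) {
        *seed ^= commit_crc;
    }

    // the allocator then starts at a pseudorandom offset, something
    // like: free_off = seed % block_count;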
---
Additionally, the randomization created a mess for the testing
framework. Fortunately, this method of randomization is deterministic.
A very useful property for reproducing bugs.
Initially, littlefs relied entirely on bad-block detection for
wear-leveling. Conceptually, at the end of a device's lifespan, all
blocks would be worn evenly, even if they weren't worn out at the same
time. However, this doesn't work for all devices: rather than causing
corruption during writes, wear reduces a device's "sticking power",
causing bits to flip over time. This means for many devices, true
wear-leveling (dynamic or static) is required.
Fortunately, way back at the beginning, littlefs was designed to do full
dynamic wear-leveling, only dropping it when making the retrospectively
short-sighted realization that bad-block detection is theoretically
sufficient. We can enable dynamic wear-leveling with only a few tweaks
to littlefs. These can be implemented without breaking backwards
compatibility.
1. Evict metadata-pairs after a certain number of writes. Eviction in
this case is identical to a relocation to recover from a bad block.
We move our data and stick the old block back into our pool of
blocks.
For knowing when to evict, we already have a revision count for each
metadata-pair which gives us enough information. We add the
configuration option block_cycles and evict when our revision count
is a multiple of this value (see the sketch after this list).
2. Now all blocks participate in COW behaviour. However we don't store
the state of our allocator, so every boot cycle we reuse the first
blocks on storage. This is very bad on a microcontroller, where we
may reboot often. We need a way to spread our usage across the disk.
To pull this off, we can simply randomize which block we start our
allocator at. But we need a random number generator that is different
on each boot. Fortunately we have a great source of entropy, our
filesystem. So we seed our block allocator with a simple hash of the
CRCs on our metadata-pairs. This can be done for free since we
already need to scan the metadata-pairs during mount.
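Here's a sketch of the eviction check from step 1 (treating a
block_cycles of 0 as "disabled" for the sake of the sketch):

    #include <stdbool.h>
    #include <stdint.h>

    // evict (relocate) a metadata-pair once its revision count hits a
    // multiple of the configured block_cycles
    static bool should_evict(uint32_t rev, uint32_t block_cycles) {
        return block_cycles && rev % block_cycles == 0;
    }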
What we end up with is a uniform distribution of wear on storage. The
wear is not perfect: if a block is used for metadata it gets more wear,
and the randomization may not be exact. But we can never actually get
perfect wear-leveling, since we're already resigned to dynamic
wear-leveling at the file level.
With the addition of metadata logging, we end up with a really
interesting two-stage wear-leveling algorithm. At the low-level,
metadata is statically wear-leveled. At the high-level, blocks are
dynamically wear-leveled.
---
This specific commit implements the first step, eviction of metadata
pairs. Intertwining this into the already complicated compact logic was
a bit annoying, however we can combine the logic for superblock
expansion with the logic for metadata-pair eviction.
This was mostly tweaking test cases to accommodate variable-sized
superblock-lists. Though there were a few bugs that needed fixing:
- Changed compact to use source dir for move since the original dir
could have changed as a result of an expand.
- Created copy of current directory so we don't overwrite ourselves
during an internal commit update.
Also made sure all of the test suites provide reproducible results when
run independently (the entry tests were behaving differently based on
which tests were run before).
(Some were legitimate test failures)
Expanding superblocks has been on my wishlist for a while. The basic
idea is that instead of maintaining a fixed offset from blocks {0, 1} to
the root directory (1 pointer), we maintain a dynamically sized
linked-list of superblocks that point to the actual root. If the number
of writes to the root exceeds some value, we increase the size of the
superblock linked-list.
This can leverage existing metadata-pair operations. The revision count for
metadata-pairs provides some knowledge on how much wear we've put on the
superblock, and the threaded linked-list can also be reused for this
purpose. This means superblock expansion is both optional and cheap to
implement.
Expanding superblocks helps both extremely small and extremely large
filesystems (extreme being relative of course). On the small end, we can
collapse the superblock into the root directory and drop the hard
requirement of 4 blocks for the superblock. On the large end, our superblock will
now last longer than the rest of the filesystem. Each time we expand,
the number of cycles until the superblock dies is increased by a power.
Before we were stuck with this layout:

  level  cycles  limit    layout
  1      E^2     390 MiB  s0 -> root

Now we expand every time a fixed offset is exceeded:

  level  cycles  limit    layout
  0      E       4 KiB    s0+root
  1      E^2     390 MiB  s0 -> root
  2      E^3     37 TiB   s0 -> s1 -> root
  3      E^4     3.6 EiB  s0 -> s1 -> s2 -> root
  ...
Where the cycles are the number of cycles before death, and the limit is
the worst-case size of a filesystem where early superblock death becomes
a concern (all writes to root, using this formula: E^|s| = E*B, where
E = erase cycles = 100000 and B = block count, assuming 4096-byte
blocks).
Note we can also store copies of the superblock entry on the expanded
superblocks. This may help filesystem recovery tools in the future.
Because of limitations in how littlefs manages attributes on disk,
littlefs views zero-length attributes and missing attributes as the same
thing. The simplest implementation of attributes mirrors this behaviour
transparently for the user.
State stealing is a tricky part of managing the xored-globals. When
removing a metadata-pair from the metadata chain, whichever
metadata-pair does the removing is also responsible for stealing the
removed metadata-pair's global delta and incorporating it into its own
global delta. Otherwise the global state would become corrupted.
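Since deltas combine by xor, stealing is just folding one delta into
another; a sketch:

    #include <stddef.h>
    #include <stdint.h>

    // fold the removed metadata-pair's global delta into our own,
    // preserving the filesystem-wide xor-sum
    static void globals_steal(uint8_t *our_delta,
            const uint8_t *removed_delta, size_t size) {
        for (size_t i = 0; i < size; i++) {
            our_delta[i] ^= removed_delta[i];
        }
    }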
The introduction of an explicit cache_size configuration allows
customization of the cache buffers independently from the hardware
read/write sizes.
This has been one of littlefs's main handicaps. Without a distinction
between cache units and hardware limitations, littlefs isn't able to
read or program _less_ than the cache size. This leads to the
counter-intuitive case where larger cache sizes can actually be harmful,
since larger read/prog sizes require sending more data over the bus if
we're only accessing a small set of data (for example the CTZ skip-list
traversal).
This is compounded with metadata logging, since a large program size
limits the number of commits we can write out in a single metadata
block. It really doesn't make sense to link program size + cache
size here.
With a separate cache_size configuration, we can be much smarter about
what we actually read/write from disk.
This also simplifies cache handling a bit. Before there were two
possible cache sizes, but these were rarely used. Note that the
cache_size is NOT written to the superblock and can be freely changed
without breaking backwards compatibility.
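A hedged configuration example (block device functions elided, values
illustrative):

    #include "lfs.h"

    static const struct lfs_config cfg = {
        // ... block device functions elided ...
        .read_size   = 16,   // hardware minimum read
        .prog_size   = 16,   // hardware minimum program
        .block_size  = 4096,
        .block_count = 128,

        // caches are now sized independently of read/prog sizes, so a
        // large cache no longer forces large bus transfers
        .cache_size  = 512,
    };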
Result of testing on zero-granularity blocks, where the prog size and
read size equal the block size. This represents SD cards and other
traditional forms of block storage where we don't really get a benefit
from the metadata logging.
Unfortunately, since updates in both are tested by the same script,
we can't really use simple bash commands. Added a more complex
script to simulate corruption. Fortunately this should be more robust
than the previous solutions.
The main fixes were around corner cases where the commit logic fell
apart when it didn't have room to complete commits, but these were
fixable in the current design.
The main thing to consider was how lfs_dir_fetchwith reacts to the
corruption it finds, and making sure that falling back to old values
works correctly.
Some of the tricky bits involved making sure we could fall back to both old
commits and old metadata blocks while still handling things like
synthetic moves correctly.
The main issue here was that the old orphan test relied on deleting the
block that contained the most recent update. In the new design this
doesn't really work since updates get appended to metadata-pairs
incrementally.
This is fixed by instead using the truncate command on the appropriate
block. We're now passing orphan tests.
Now that littlefs has been rebuilt almost from the ground up with the
intention to support custom attributes, adding in custom attribute
support is relatively easy.
The highest bit in the 9-bit type structure indicates that an attribute
is a user-specified custom attribute. The user then has a full 8-bits to
specify the attribute type. Other than that, custom attributes are
treated the same as system-level attributes.
Also made some tweaks to custom attributes:
- Adopted the opencfg for file-level attributes provided by dpgeorge
(see the sketch after this list)
- Changed setattrs/getattrs to the simpler setattr/getattr functions
users will probably be more familiar with. Note that multiple
attributes can still be committed atomically with files, though not
with directories.
- Changed LFS_ATTRS_MAX -> LFS_ATTR_MAX since there's no longer a global
limit on the sum of attribute sizes, which was rather confusing.
Though they are still limited by what can fit in a metadata-pair.
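A sketch of committing attributes atomically with a file via opencfg,
assuming littlefs's lfs.h (values illustrative):

    #include "lfs.h"

    int write_with_attrs(lfs_t *lfs) {
        uint32_t timestamp = 1234;
        struct lfs_attr attrs[] = {
            {.type = 0x74, .buffer = &timestamp, .size = sizeof(timestamp)},
        };
        struct lfs_file_config fcfg = {
            .attrs = attrs,
            .attr_count = 1,
        };

        lfs_file_t file;
        int err = lfs_file_opencfg(lfs, &file, "hello.txt",
                LFS_O_WRONLY | LFS_O_CREAT, &fcfg);
        if (err) {
            return err;
        }

        // attrs are committed atomically with the file on sync/close
        return lfs_file_close(lfs, &file);
    }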
I've been trying to keep tag types organized with an encoding that hints
if a tag uses its id field for file ids. However this seems to have been
a mistake. Using a null id of 0x3ff greatly simplified quite a bit of
the logic around managing file-related tags.
The downside is one less id we can use, but if we look at the encoding
cost, donating one full bit costs us 2^9 id permutations vs 1 id
permutation. So even if we had a perfect encoding it's in our favor to
use a null id. The cost of null ids is code size, but with the
complexity around figuring out if a type uses its id or not, it just
works out better to use a null id.
The issue lies in the reuse of the id field for globals. Before globals,
the only tags with a non-null (0x3ff) id field were names, structs, and
other file-specific metadata. But globals are also using this field for
the indirect delete, since otherwise the globals structure would be very
unaligned (74-bits long).
To make matters worse, the id field for globals contains the delta used
to reconstruct the globals at mount time. Which means the id field could
take on very absurd values and break the dir fetch logic if we're not
careful.
Solution is to use the scope portion of the type field where necessary,
although unfortunately this does add some code cost.
Unfortunately, the behaviour needed of lfs_dir_fetchwith is as subtle as
it is important. When fetching from a block corrupted by power-loss,
lfs_dir_fetch must be able to rewind any state it picks up to before the
corruption. This is not limited to the directory state, but includes
find results and other side-effects.
This gets a bit complicated when trying to generalize littlefs's
fetchwith mechanics. Being able to scan a directory block during a fetch
greatly impacts the runtime of littlefs operations, but if the state is
generic, how do we know what to roll back to?
The fix here is to leave the management of rolling back state to the
fetchwith match functions, and transparently pass a CRC tag to indicate
the temporary state can be saved.
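A speculative sketch of the resulting contract (all names are
illustrative):

    #include <stdbool.h>
    #include <stdint.h>

    // hypothetical helpers over the tag format
    bool is_crc_tag(uint32_t tag);
    bool is_name_tag(uint32_t tag);
    bool name_equals(const void *buffer, const char *name);
    int tag_id_of(uint32_t tag);

    struct find_state {
        const char *name;
        int id;           // last known-good match
        int tentative_id; // match within the commit being parsed
    };

    // called for each tag during fetch; tentative results only become
    // durable when the commit's CRC tag is seen
    static int find_match(struct find_state *find, uint32_t tag,
            const void *buffer) {
        if (is_crc_tag(tag)) {
            find->id = find->tentative_id; // commit boundary, keep it
        } else if (is_name_tag(tag) && name_equals(buffer, find->name)) {
            find->tentative_id = tag_id_of(tag); // may be rolled back
        }
        return 0;
    }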
Originally I tried to reuse the indirect delete to accomplish truly
atomic directory removes, however this fell apart when it came to
implementing directory removes as a side-effect of renames.
A single indirect-delete simply can't handle renames with removes as
a side effect. When copying an entry to its destination, we need to
atomically delete both the old entry, and the source of our copy. We
can't delete both with only a single indirect-delete. It is possible to
accomplish this with two indirect-deletes, but this is such an uncommon
case that it's really not worth supporting efficiently due to how
expensive globals are.
I also dropped indirect-deletes for normal directory removes. I may add
it back later, but at the moment it's extra code cost for a path that's
not traveled very often.
As a result, restructured the indirect delete handling to be a bit more
generic, now with a multipurpose lfs_globals_t struct instead of the
delete specific lfs_entry_t struct.
Also worked on integrating xored-globals, now with several primitive
global operations to manage fetching/updating globals on disk.
lfs_dir_fetchwith did not recover from failed dir fetches correctly;
added a temporary dir variable to hold dir contents while being
populated, allowing us to fall back to a known good dir state if a
commit is corrupted.
There is a RAM cost, but the upside is that our lfs_dir_fetchwith
actually works.
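The shape of the fix, as a sketch (types and functions illustrative):

    #include <stdint.h>

    struct dirstate {
        uint32_t off; // parse position; the real state has more fields
    };

    int parse_next_commit(struct dirstate *dir); // 0 = CRC checked out

    static void fetch_commits(struct dirstate *dir) {
        struct dirstate temp = *dir; // populate a temporary copy
        while (parse_next_commit(&temp) == 0) {
            *dir = temp; // commit verified: adopt the new state
        }
        // on a corrupt commit, *dir still holds the last good state
    }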
Also added better handling of move ids during some get functions.
Integration with the new journaling metadata has now progressed to the
point where all of the basic functionality tests are passing. This
includes:
- test_format
- test_dirs
- test_files
- test_seek
- test_truncate
- test_interspersed
- test_paths
Some of the fixes:
- Modified move to correctly change entry ids
- Called lfs_commit_move directly from compact, avoiding commit parsing
logic during a compact
- Opened up commit filters to be passed down from compact for moves
- Added correct drop logic to lfs_dir_delete
- Updated lfs_dir_seek to use ids instead of offsets
- Caught id updates manually where possible (this needs to be fixed)
Now with some tweaks to commit/compact, and committers for entrylists and
moves specifically. No longer relying on a commitwith callback, the
types of commits are now inferred from their tags.
This means we can now commit things atomically with special commits,
such as moves. Now lfs_rename can move entries to new names correctly.
Passing more tests now with the journalling change, but still have more
work to do.
The most humorous bug was one where, during the three-step move
process, the entry move logic would dumbly copy over any tags associated
with the moving entry, including the tag used to temporarily mark the
entry as "moving".
Also combined dir and commit traversal using a "stop_at_commit" flag in
directory struct as a short-term hack to combine the code paths.
Making the superblock look like "just another entry" allows us to treat
the superblock like "just another entry" and reuse a decent amount of
logic that would otherwise only be used at format and mount time. In this
case we can use append to write out the superblock like it was creating
a new entry on the filesystem.