Compare commits

...

22 Commits

Author SHA1 Message Date
Christopher Haster
59ce49fa4b Merge pull request #26 from Sim4n6/master
Added a .git ignore file
2018-02-08 23:28:55 -06:00
iamatacos
2f8ae344d2 Added a git ignore file with .o .d blocks dir and lfs bin 2018-02-08 02:20:51 -06:00
Christopher Haster
e611cf5050 Fix incorrect lookahead population before ack
Rather than tracking all in-flight blocks during a lookahead,
littlefs uses an ack scheme to mark the first allocated block that
hasn't reached the disk yet. littlefs assumes all blocks since the
last ack are bad or in-flight, and uses this to know when it's out
of storage.

However, these unacked allocations were still being populated in the
lookahead buffer. If the whole block device fits in the lookahead
buffer, _and_ littlefs managed to scan around the whole storage while
an unacked block was still in-flight, it would assume the block was
free and misallocate it.

The fix is to only fill the lookahead buffer up to the last ack.
The internal free structure was restructured to simplify the runtime
calculation of lookahead size.
2018-02-08 01:52:39 -06:00
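A minimal sketch of the ack scheme this commit describes (illustrative C only; the names and the power-of-two wraparound trick are assumptions, not littlefs's actual internals). Blocks handed out since the last ack must be treated as busy even when no on-disk structure references them yet:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

// Toy model: blocks in [ack, next) are allocated but not yet acked to
// disk, so the allocator must assume they are in use even if a scan of
// on-disk structures finds no reference to them.
enum { TOY_BLOCK_COUNT = 8 }; // power of two so unsigned wraparound works

typedef struct {
    bool on_disk[TOY_BLOCK_COUNT]; // result of scanning on-disk structures
    uint32_t ack;                  // first allocation not yet acked
    uint32_t next;                 // next block the allocator will consider
} toy_alloc_t;

static bool toy_block_free(const toy_alloc_t *a, uint32_t block) {
    // size of the in-flight window [ack, next), modulo the block count
    uint32_t inflight = (a->next - a->ack) % TOY_BLOCK_COUNT;
    uint32_t rel = (block - a->ack) % TOY_BLOCK_COUNT;
    if (rel < inflight) {
        return false; // unacked allocation: assume bad or in-flight
    }
    return !a->on_disk[block];
}
```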
Christopher Haster
a25743a82a Fixed some minor error code differences
- Write on read-only file to return LFS_ERR_BADF
- Renaming directory onto file to return LFS_ERR_NOTEMPTY
- Changed LFS_ERR_INVAL in lfs_file_seek to assert
2018-02-04 14:36:36 -06:00
Christopher Haster
6716b5580a Fixed error check when truncating files to larger size 2018-02-04 14:09:55 -06:00
Christopher Haster
809ffde60f Merge pull request #24 from aldot/silence-shadow-warnings-1
Silence shadow warnings
2018-02-04 13:36:55 -06:00
Christopher Haster
dc513b172f Silenced more of aldot's warnings
Flags used:
-Wall -Wextra -Wshadow -Wwrite-strings -Wundef -Wstrict-prototypes
-Wunused -Wunused-parameter -Wunused-function -Wunused-value
-Wmissing-prototypes -Wmissing-declarations -Wold-style-definition
2018-02-04 13:15:30 -06:00
Bernhard Reutner-Fischer
aa50e03684 Commentary typo fix 2018-02-04 13:15:26 -06:00
Bernhard Reutner-Fischer
6d55755128 tests: Silence warnings in template
- no previous prototype for ‘test_assert’
- no previous prototype for ‘test_count’
- unused parameter ‘b’ in test_count
- function declaration isn’t a prototype for main
2018-02-04 13:15:17 -06:00
Bernhard Reutner-Fischer
029361ea16 Silence shadow warnings 2018-02-04 13:15:09 -06:00
Christopher Haster
fd04ed4f25 Added autogenerated release notes from commits 2018-02-02 02:35:07 -06:00
Bernhard Reutner-Fischer
3101bc92b3 Do not print command invocation if QUIET 2018-02-02 09:34:01 +01:00
Christopher Haster
d82e34c3ee Merge pull request #21 from aldot/doc-tweaks
documentation touch up, take 2
2018-02-01 15:06:24 -06:00
Bernhard Reutner-Fischer
436707c8d0 doc: Editorial tweaks 2018-02-01 14:56:43 -06:00
Bernhard Reutner-Fischer
3457252fe6 doc: Spelling fixes 2018-01-31 19:18:51 -06:00
Christopher Haster
6d8e0e21d0 Moved -Werror flag to CI only
The most useful part of -Werror is preventing code from being
merged that has warnings. However, it is annoying for users who may have
different compilers with different warnings. Limiting -Werror to CI only
covers the main concern about warnings without limiting users.
2018-01-29 18:37:48 -06:00
Christopher Haster
88f678f4c6 Fixed self-assign warning in tests
Some of the tests were creating a variable `res`, however the test
system itself relies on its own `res` variable. This worked out by
luck, but could lead to problems if the res variables were different
types.

Changed the generated variable in the test system to the less common
name `test`, which also works out to share the same prefix as other test
functions.
2018-01-29 18:37:48 -06:00
Christopher Haster
3ef4847434 Added remove step in tests to force rebuild
Found by user iamscottmoyers, this was an interesting bug with the test
system. If the new test.c file is generated fast enough, it may not have
a new timestamp and not get recompiled.

To fix, we can remove the specific files that need to be rebuilt (lfs and
test.o).
2018-01-29 18:37:41 -06:00
Christopher Haster
f694b14afb Merge pull request #16 from geky/versioning
Add version info for software library and on-disk structures
2018-01-29 01:20:23 -06:00
Christopher Haster
5a38d00dde Added deploy step in Travis to push new version as tags 2018-01-29 00:51:43 -06:00
Christopher Haster
035552a858 Add version info for software library and on-disk structures
An annoying part of filesystems is that the software library can change
independently of the on-disk structures. For this reason versioning is
very important, and must be handled separately for the software and
on-disk parts.

In this patch, littlefs provides two version numbers at compile time,
with major and minor parts, in the form of 6 macros.

LFS_VERSION        // Library version, uint32_t encoded
LFS_VERSION_MAJOR  // Major - Backwards incompatible changes
LFS_VERSION_MINOR  // Minor - Feature additions

LFS_DISK_VERSION        // On-disk version, uint32_t encoded
LFS_DISK_VERSION_MAJOR  // Major - Backwards incompatible changes
LFS_DISK_VERSION_MINOR  // Minor - Feature additions

Note that littlefs will error if it finds a major version number that
is different, or a minor version number that has regressed.
2018-01-26 14:26:25 -06:00
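The encoding described in this commit can be sketched as follows. The macro names come from the commit message; the example value `0x00010001` (v1.1) is assumed for illustration, and the shift/mask scheme matches the arithmetic used in the CI script below:

```c
#include <assert.h>
#include <stdint.h>

// Hypothetical example value: major 1, minor 1, packed into a uint32_t
#define LFS_VERSION        ((uint32_t)0x00010001)
// Major version in the high 16 bits, minor in the low 16 bits
#define LFS_VERSION_MAJOR  (0xffff & (LFS_VERSION >> 16))
#define LFS_VERSION_MINOR  (0xffff & (LFS_VERSION >>  0))
```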
Christopher Haster
997c2e594e Fixed incorrect reliance on errno in emubd
When running the tests, the emubd erase function relied on the value of
errno to not change over a possible call to unlink. Annoyingly, I've
only seen this cause problems on a couple of specific Travis instances
while self-hosting littlefs on top of littlefs-fuse.
2018-01-22 19:28:29 -06:00
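A small illustration of the bug class this commit fixes (a hypothetical helper, not the actual emubd code): errno is only meaningful immediately after a failing call, so it must be saved before any other call, such as unlink, can clobber it:

```c
#include <assert.h>
#include <errno.h>
#include <stdio.h>
#include <unistd.h>

// Try to open a file; on failure, report the original error even
// though we also attempt a cleanup unlink (which sets errno itself).
static int try_remove(const char *path) {
    FILE *f = fopen(path, "r");
    if (!f) {
        int saved = errno; // capture before unlink can change it
        unlink(path);      // may fail too, clobbering errno
        return -saved;
    }
    fclose(f);
    return 0;
}
```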
14 changed files with 303 additions and 147 deletions

.gitignore

@@ -0,0 +1,9 @@
+# Compilation output
+*.o
+*.d
+*.a
+
+# Testing things
+blocks/
+lfs
+test.c


@@ -1,20 +1,23 @@
+env:
+- CFLAGS=-Werror
+
 script:
   # make sure example can at least compile
   - sed -n '/``` c/,/```/{/```/d; p;}' README.md > test.c &&
-    CFLAGS='
+    make all size CFLAGS+="
         -Duser_provided_block_device_read=NULL
         -Duser_provided_block_device_prog=NULL
         -Duser_provided_block_device_erase=NULL
         -Duser_provided_block_device_sync=NULL
-        -include stdio.h -Werror' make all size
+        -include stdio.h"

   # run tests
   - make test QUIET=1
   # run tests with a few different configurations
-  - CFLAGS="-DLFS_READ_SIZE=1 -DLFS_PROG_SIZE=1" make test QUIET=1
-  - CFLAGS="-DLFS_READ_SIZE=512 -DLFS_PROG_SIZE=512" make test QUIET=1
-  - CFLAGS="-DLFS_BLOCK_COUNT=1023 -DLFS_LOOKAHEAD=2048" make test QUIET=1
+  - make test QUIET=1 CFLAGS+="-DLFS_READ_SIZE=1 -DLFS_PROG_SIZE=1"
+  - make test QUIET=1 CFLAGS+="-DLFS_READ_SIZE=512 -DLFS_PROG_SIZE=512"
+  - make test QUIET=1 CFLAGS+="-DLFS_BLOCK_COUNT=1023 -DLFS_LOOKAHEAD=2048"

   # self-host with littlefs-fuse for fuzz test
   - make -C littlefs-fuse
@@ -45,3 +48,60 @@ before_script:
   - sudo chmod a+rw /dev/loop0
   - dd if=/dev/zero bs=512 count=2048 of=disk
   - losetup /dev/loop0 disk
+
+deploy:
+  # Let before_deploy take over
+  provider: script
+  script: 'true'
+  on:
+    branch: master
+
+before_deploy:
+  - cd $TRAVIS_BUILD_DIR
+  # Update tag for version defined in lfs.h
+  - LFS_VERSION=$(grep -ox '#define LFS_VERSION .*' lfs.h | cut -d ' ' -f3)
+  - LFS_VERSION_MAJOR=$((0xffff & ($LFS_VERSION >> 16)))
+  - LFS_VERSION_MINOR=$((0xffff & ($LFS_VERSION >> 0)))
+  - LFS_VERSION="v$LFS_VERSION_MAJOR.$LFS_VERSION_MINOR"
+  - echo "littlefs version $LFS_VERSION"
+  - |
+    curl -u $GEKY_BOT -X POST \
+      https://api.github.com/repos/$TRAVIS_REPO_SLUG/git/refs \
+      -d "{
+        \"ref\": \"refs/tags/$LFS_VERSION\",
+        \"sha\": \"$TRAVIS_COMMIT\"
+      }"
+  - |
+    curl -f -u $GEKY_BOT -X PATCH \
+      https://api.github.com/repos/$TRAVIS_REPO_SLUG/git/refs/tags/$LFS_VERSION \
+      -d "{
+        \"sha\": \"$TRAVIS_COMMIT\"
+      }"
+  # Create release notes from commits
+  - LFS_PREV_VERSION="v$LFS_VERSION_MAJOR.$(($LFS_VERSION_MINOR-1))"
+  - |
+    if [ $(git tag -l "$LFS_PREV_VERSION") ]
+    then
+      curl -u $GEKY_BOT -X POST \
+        https://api.github.com/repos/$TRAVIS_REPO_SLUG/releases \
+        -d "{
+          \"tag_name\": \"$LFS_VERSION\",
+          \"name\": \"$LFS_VERSION\"
+        }"
+      RELEASE=$(
+        curl -f https://api.github.com/repos/$TRAVIS_REPO_SLUG/releases/tags/$LFS_VERSION
+      )
+      CHANGES=$(
+        git log --oneline $LFS_PREV_VERSION.. --grep='^Merge' --invert-grep
+      )
+      curl -f -u $GEKY_BOT -X PATCH \
+        https://api.github.com/repos/$TRAVIS_REPO_SLUG/releases/$(
+          jq -r '.id' <<< "$RELEASE"
+        ) \
+        -d "$(
+          jq -s '{
+            "body": ((.[0] // "" | sub("(?<=\n)#+ Changes.*"; ""; "mi"))
+              + "### Changes\n\n" + .[1])
+          }' <(jq '.body' <<< "$RELEASE") <(jq -sR '.' <<< "$CHANGES")
+        )"
+    fi


@@ -27,16 +27,17 @@ cheap, and can be very granular. For NOR flash specifically, byte-level
 programs are quite common. Erasing, however, requires an expensive operation
 that forces the state of large blocks of memory to reset in a destructive
 reaction that gives flash its name. The [Wikipedia entry](https://en.wikipedia.org/wiki/Flash_memory)
-has more information if you are interesting in how this works.
+has more information if you are interested in how this works.
 
 This leaves us with an interesting set of limitations that can be simplified
 to three strong requirements:
 
-1. **Power-loss resilient** - This is the main goal of the littlefs and the
-   focus of this project. Embedded systems are usually designed without a
-   shutdown routine and a notable lack of user interface for recovery, so
-   filesystems targeting embedded systems must be prepared to lose power an
-   any given time.
+1. **Power-loss resilient** - This is the main goal of the littlefs and the
+   focus of this project.
+
+   Embedded systems are usually designed without a shutdown routine and a
+   notable lack of user interface for recovery, so filesystems targeting
+   embedded systems must be prepared to lose power at any given time.
 
    Despite this state of things, there are very few embedded filesystems that
    handle power loss in a reasonable manner, and most can become corrupted if
@@ -52,7 +53,8 @@ to three strong requirements:
    which stores a file allocation table (FAT) at a specific offset from the
    beginning of disk. Every block allocation will update this table, and after
    100,000 updates, the block will likely go bad, rendering the filesystem
-   unusable even if there are many more erase cycles available on the storage.
+   unusable even if there are many more erase cycles available on the storage
+   as a whole.
 
 3. **Bounded RAM/ROM** - Even with the design difficulties presented by the
    previous two limitations, we have already seen several flash filesystems
@@ -72,7 +74,7 @@ to three strong requirements:
 ## Existing designs?
 
-There are of course, many different existing filesystem. Heres a very rough
+There are of course, many different existing filesystem. Here is a very rough
 summary of the general ideas behind some of them.
 
 Most of the existing filesystems fall into the one big category of filesystem
@@ -80,21 +82,21 @@ designed in the early days of spinny magnet disks. While there is a vast amount
 of interesting technology and ideas in this area, the nature of spinny magnet
 disks encourage properties, such as grouping writes near each other, that don't
 make as much sense on recent storage types. For instance, on flash, write
-locality is not important and can actually increase wear destructively.
+locality is not important and can actually increase wear.
 
 One of the most popular designs for flash filesystems is called the
 [logging filesystem](https://en.wikipedia.org/wiki/Log-structured_file_system).
 The flash filesystems [jffs](https://en.wikipedia.org/wiki/JFFS)
-and [yaffs](https://en.wikipedia.org/wiki/YAFFS) are good examples. In
-logging filesystem, data is not store in a data structure on disk, but instead
+and [yaffs](https://en.wikipedia.org/wiki/YAFFS) are good examples. In a
+logging filesystem, data is not stored in a data structure on disk, but instead
 the changes to the files are stored on disk. This has several neat advantages,
-such as the fact that the data is written in a cyclic log format naturally
+such as the fact that the data is written in a cyclic log format and naturally
 wear levels as a side effect. And, with a bit of error detection, the entire
 filesystem can easily be designed to be resilient to power loss. The
-journalling component of most modern day filesystems is actually a reduced
+journaling component of most modern day filesystems is actually a reduced
 form of a logging filesystem. However, logging filesystems have a difficulty
 scaling as the size of storage increases. And most filesystems compensate by
-caching large parts of the filesystem in RAM, a strategy that is unavailable
+caching large parts of the filesystem in RAM, a strategy that is inappropriate
 for embedded systems.
 
 Another interesting filesystem design technique is that of [copy-on-write (COW)](https://en.wikipedia.org/wiki/Copy-on-write).
@@ -107,14 +109,14 @@ where the COW data structures are synchronized.
 ## Metadata pairs
 
 The core piece of technology that provides the backbone for the littlefs is
-the concept of metadata pairs. The key idea here, is that any metadata that
+the concept of metadata pairs. The key idea here is that any metadata that
 needs to be updated atomically is stored on a pair of blocks tagged with
 a revision count and checksum. Every update alternates between these two
 pairs, so that at any time there is always a backup containing the previous
 state of the metadata.
 
 Consider a small example where each metadata pair has a revision count,
-a number as data, and the xor of the block as a quick checksum. If
+a number as data, and the XOR of the block as a quick checksum. If
 we update the data to a value of 9, and then to a value of 5, here is
 what the pair of blocks may look like after each update:
``` ```
@@ -130,7 +132,7 @@ what the pair of blocks may look like after each update:
 After each update, we can find the most up to date value of data by looking
 at the revision count.
 
-Now consider what the blocks may look like if we suddenly loss power while
+Now consider what the blocks may look like if we suddenly lose power while
 changing the value of data to 5:
 ```
          block 1 block 2        block 1 block 2        block 1 block 2
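The recovery rule described around this hunk can be sketched in miniature (illustrative C with hypothetical names; littlefs itself uses a 32-bit CRC rather than this plain XOR checksum, and sequence arithmetic rather than a plain revision comparison):

```c
#include <assert.h>
#include <stdint.h>

// Toy metadata block: a revision count, a data word, and an XOR checksum
typedef struct {
    uint32_t rev;
    uint32_t data;
    uint32_t chk; // here simply rev ^ data
} toy_mblock_t;

static int toy_valid(const toy_mblock_t *b) {
    return (b->rev ^ b->data) == b->chk;
}

// Pick the newest valid block of the pair, or -1 if both are corrupt.
// If the newer block failed its checksum (power lost mid-update), we
// fall back to the older block.
static int toy_newest(const toy_mblock_t p[2]) {
    int v0 = toy_valid(&p[0]);
    int v1 = toy_valid(&p[1]);
    if (v0 && v1) {
        return p[0].rev > p[1].rev ? 0 : 1;
    }
    return v0 ? 0 : (v1 ? 1 : -1);
}
```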
@@ -149,7 +151,7 @@ check our checksum we notice that block 1 was corrupted. So we fall back to
 block 2 and use the value 9.
 
 Using this concept, the littlefs is able to update metadata blocks atomically.
-There are a few other tweaks, such as using a 32 bit crc and using sequence
+There are a few other tweaks, such as using a 32 bit CRC and using sequence
 arithmetic to handle revision count overflow, but the basic concept
 is the same. These metadata pairs define the backbone of the littlefs, and the
 rest of the filesystem is built on top of these atomic updates.
@@ -161,7 +163,7 @@ requires two blocks for each block of data. I'm sure users would be very
 unhappy if their storage was suddenly cut in half! Instead of storing
 everything in these metadata blocks, the littlefs uses a COW data structure
 for files which is in turn pointed to by a metadata block. When
-we update a file, we create a copies of any blocks that are modified until
+we update a file, we create copies of any blocks that are modified until
 the metadata blocks are updated with the new copy. Once the metadata block
 points to the new copy, we deallocate the old blocks that are no longer in use.
@@ -184,7 +186,7 @@ Here is what updating a one-block file may look like:
     update data in file       update metadata pair
 ```
 
-It doesn't matter if we lose power while writing block 5 with the new data,
+It doesn't matter if we lose power while writing new data to block 5,
 since the old data remains unmodified in block 4. This example also
 highlights how the atomic updates of the metadata blocks provide a
 synchronization barrier for the rest of the littlefs.
@@ -206,7 +208,7 @@ files in filesystems. Of these, the littlefs uses a rather unique [COW](https://
 data structure that allows the filesystem to reuse unmodified parts of the
 file without additional metadata pairs.
 
-First lets consider storing files in a simple linked-list. What happens when
+First lets consider storing files in a simple linked-list. What happens when we
 append a block? We have to change the last block in the linked-list to point
 to this new block, which means we have to copy out the last block, and change
 the second-to-last block, and then the third-to-last, and so on until we've
@@ -240,8 +242,8 @@ Exhibit B: A backwards linked-list
 ```
 
 However, a backwards linked-list does come with a rather glaring problem.
-Iterating over a file _in order_ has a runtime of O(n^2). Gah! A quadratic
-runtime to just _read_ a file? That's awful. Keep in mind reading files are
+Iterating over a file _in order_ has a runtime cost of O(n^2). Gah! A quadratic
+runtime to just _read_ a file? That's awful. Keep in mind reading files is
 usually the most common filesystem operation.
 
 To avoid this problem, the littlefs uses a multilayered linked-list. For
@@ -266,7 +268,7 @@ Exhibit C: A backwards CTZ skip-list
 ```
 
 The additional pointers allow us to navigate the data-structure on disk
-much more efficiently than in a single linked-list.
+much more efficiently than in a singly linked-list.
 
 Taking exhibit C for example, here is the path from data block 5 to data
 block 1. You can see how data block 3 was completely skipped:
@@ -289,15 +291,15 @@ The path to data block 0 is even more quick, requiring only two jumps:
 
 We can find the runtime complexity by looking at the path to any block from
 the block containing the most pointers. Every step along the path divides
-the search space for the block in half. This gives us a runtime of O(logn).
+the search space for the block in half. This gives us a runtime of O(log n).
 To get to the block with the most pointers, we can perform the same steps
-backwards, which puts the runtime at O(2logn) = O(logn). The interesting
+backwards, which puts the runtime at O(2 log n) = O(log n). The interesting
 part about this data structure is that this optimal path occurs naturally
 if we greedily choose the pointer that covers the most distance without passing
 our target block.
 
 So now we have a representation of files that can be appended trivially with
-a runtime of O(1), and can be read with a worst case runtime of O(nlogn).
+a runtime of O(1), and can be read with a worst case runtime of O(n log n).
 Given that the the runtime is also divided by the amount of data we can store
 in a block, this is pretty reasonable.
@@ -362,7 +364,7 @@ N = file size in bytes
 
 And this works quite well, but is not trivial to calculate. This equation
 requires O(n) to compute, which brings the entire runtime of reading a file
-to O(n^2logn). Fortunately, the additional O(n) does not need to touch disk,
+to O(n^2 log n). Fortunately, the additional O(n) does not need to touch disk,
 so it is not completely unreasonable. But if we could solve this equation into
 a form that is easily computable, we can avoid a big slowdown.
@@ -379,11 +381,11 @@ unintuitive property:
 
 ![mindblown](https://latex.codecogs.com/svg.latex?%5Csum_i%5En%5Cleft%28%5Ctext%7Bctz%7D%28i%29&plus;1%5Cright%29%20%3D%202n-%5Ctext%7Bpopcount%7D%28n%29)
 
 where:
-ctz(i) = the number of trailing bits that are 0 in i
-popcount(i) = the number of bits that are 1 in i
+ctz(x) = the number of trailing bits that are 0 in x
+popcount(x) = the number of bits that are 1 in x
 
 It's a bit bewildering that these two seemingly unrelated bitwise instructions
-are related by this property. But if we start to disect this equation we can
+are related by this property. But if we start to dissect this equation we can
 see that it does hold. As n approaches infinity, we do end up with an average
 overhead of 2 pointers as we find earlier. And popcount seems to handle the
 error from this average as it accumulates in the CTZ skip-list.
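The identity in this hunk, sum over i = 1..n of (ctz(i) + 1) = 2n - popcount(n), is easy to check by brute force (a throwaway sketch; `__builtin_ctz` and `__builtin_popcount` are GCC/Clang builtins):

```c
#include <assert.h>
#include <stdint.h>

// Left-hand side: sum of (ctz(i) + 1) for i = 1..n
static uint32_t toy_sum(uint32_t n) {
    uint32_t sum = 0;
    for (uint32_t i = 1; i <= n; i++) {
        sum += (uint32_t)__builtin_ctz(i) + 1;
    }
    return sum;
}

// Right-hand side: the closed form 2n - popcount(n)
static uint32_t toy_closed_form(uint32_t n) {
    return 2*n - (uint32_t)__builtin_popcount(n);
}
```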
@@ -410,8 +412,7 @@ a bit to avoid integer overflow:
 
 ![formulaforoff](https://latex.codecogs.com/svg.latex?%5Cmathit%7Boff%7D%20%3D%20N%20-%20%5Cleft%28B-2%5Cfrac%7Bw%7D%7B8%7D%5Cright%29n%20-%20%5Cfrac%7Bw%7D%7B8%7D%5Ctext%7Bpopcount%7D%28n%29)
 
 The solution involves quite a bit of math, but computers are very good at math.
-We can now solve for the block index + offset while only needed to store the
-file size in O(1).
+Now we can solve for both the block index and offset from the file size in O(1).
 
 Here is what it might look like to update a file stored with a CTZ skip-list:
 ```
@@ -500,16 +501,17 @@ scanned to find the most recent free list, but once the list was found the
 state of all free blocks becomes known.
 
 However, this approach had several issues:
 
 - There was a lot of nuanced logic for adding blocks to the free list without
   modifying the blocks, since the blocks remain active until the metadata is
   updated.
-- The free list had to support both additions and removals in fifo order while
+- The free list had to support both additions and removals in FIFO order while
   minimizing block erases.
 - The free list had to handle the case where the file system completely ran
   out of blocks and may no longer be able to add blocks to the free list.
 - If we used a revision count to track the most recently updated free list,
   metadata blocks that were left unmodified were ticking time bombs that would
-  cause the system to go haywire if the revision count overflowed
+  cause the system to go haywire if the revision count overflowed.
 - Every single metadata block wasted space to store these free list references.
 
 Actually, to simplify, this approach had one massive glaring issue: complexity.
@@ -539,7 +541,7 @@ would have an abhorrent runtime.
 
 So the littlefs compromises. It doesn't store a bitmap the size of the storage,
 but it does store a little bit-vector that contains a fixed set lookahead
 for block allocations. During a block allocation, the lookahead vector is
-checked for any free blocks, if there are none, the lookahead region jumps
+checked for any free blocks. If there are none, the lookahead region jumps
 forward and the entire filesystem is scanned for free blocks.
 
 Here's what it might look like to allocate 4 blocks on a decently busy
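A toy version of the allocation strategy in this hunk (illustrative only, not littlefs's actual allocator): scan the lookahead bit-vector for a clear bit; when it is exhausted, the caller would advance the lookahead region and rescan the filesystem:

```c
#include <assert.h>
#include <stdint.h>

enum { TOY_LOOKAHEAD = 32 }; // hypothetical lookahead size in blocks

// Scan the lookahead bit-vector starting at *off for a free block.
// Returns the block index within the lookahead region, or -1 when the
// region is exhausted and must jump forward for a full rescan.
static int toy_alloc(uint32_t *lookahead, uint32_t *off) {
    while (*off < TOY_LOOKAHEAD) {
        uint32_t b = (*off)++;
        if (!(*lookahead & ((uint32_t)1 << b))) {
            *lookahead |= (uint32_t)1 << b; // mark the block allocated
            return (int)b;
        }
    }
    return -1; // lookahead exhausted: jump forward and rescan
}
```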
@@ -622,7 +624,7 @@ So, as a solution, the littlefs adopted a sort of threaded tree. Each
 directory not only contains pointers to all of its children, but also a
 pointer to the next directory. These pointers create a linked-list that
 is threaded through all of the directories in the filesystem. Since we
-only use this linked list to check for existance, the order doesn't actually
+only use this linked list to check for existence, the order doesn't actually
 matter. As an added plus, we can repurpose the pointer for the individual
 directory linked-lists and avoid using any additional space.
@@ -773,7 +775,7 @@ deorphan step that simply iterates through every directory in the linked-list
 and checks it against every directory entry in the filesystem to see if it
 has a parent. The deorphan step occurs on the first block allocation after
 boot, so orphans should never cause the littlefs to run out of storage
-prematurely. Note that the deorphan step never needs to run in a readonly
+prematurely. Note that the deorphan step never needs to run in a read-only
 filesystem.
 
 ## The move problem
@@ -883,7 +885,7 @@ a power loss will occur during filesystem activity. We still need to handle
 the condition, but runtime during a power loss takes a back seat to the runtime
 during normal operations.
 
-So what littlefs does is unelegantly simple. When littlefs moves a file, it
+So what littlefs does is inelegantly simple. When littlefs moves a file, it
 marks the file as "moving". This is stored as a single bit in the directory
 entry and doesn't take up much space. Then littlefs moves the directory,
 finishing with the complete remove of the "moving" directory entry.
@@ -979,7 +981,7 @@ if it exists elsewhere in the filesystem.
So now that we have all of the pieces of a filesystem, we can look at a more
subtle attribute of embedded storage: The wear down of flash blocks.

-The first concern for the littlefs, is that prefectly valid blocks can suddenly
+The first concern for the littlefs, is that perfectly valid blocks can suddenly
become unusable. As a nice side-effect of using a COW data-structure for files,
we can simply move on to a different block when a file write fails. All
modifications to files are performed in copies, so we will only replace the
@@ -1151,7 +1153,7 @@ develops errors and needs to be moved.
## Wear leveling

-The second concern for the littlefs, is that blocks in the filesystem may wear
+The second concern for the littlefs is that blocks in the filesystem may wear
unevenly. In this situation, a filesystem may meet an early demise where
there are no more non-corrupted blocks that aren't in use. It's common to
have files that were written once and left unmodified, wasting the potential
@@ -1171,7 +1173,7 @@ of wear leveling:
In littlefs's case, it's possible to use the revision count on metadata pairs
to approximate the wear of a metadata block. And combined with the COW nature
-of files, littlefs could provide your usually implementation of dynamic wear
+of files, littlefs could provide your usual implementation of dynamic wear
leveling.

However, the littlefs does not. This is for a few reasons. Most notably, even
@@ -1210,9 +1212,9 @@ So, to summarize:
   metadata block is active
4. Directory blocks contain either references to other directories or files
5. Files are represented by copy-on-write CTZ skip-lists which support O(1)
-   append and O(nlogn) reading
+   append and O(n log n) reading
6. Blocks are allocated by scanning the filesystem for used blocks in a
-   fixed-size lookahead region is that stored in a bit-vector
+   fixed-size lookahead region that is stored in a bit-vector
7. To facilitate scanning the filesystem, all directories are part of a
   linked-list that is threaded through the entire filesystem
8. If a block develops an error, the littlefs allocates a new block, and


@@ -14,15 +14,15 @@ TEST := $(patsubst tests/%.sh,%,$(wildcard tests/test_*))
SHELL = /bin/bash -o pipefail

ifdef DEBUG
-CFLAGS += -O0 -g3
+override CFLAGS += -O0 -g3
else
-CFLAGS += -Os
+override CFLAGS += -Os
endif
ifdef WORD
-CFLAGS += -m$(WORD)
+override CFLAGS += -m$(WORD)
endif
-CFLAGS += -I.
-CFLAGS += -std=c99 -Wall -pedantic
+override CFLAGS += -I.
+override CFLAGS += -std=c99 -Wall -pedantic

all: $(TARGET)

@@ -37,7 +37,7 @@ test: test_format test_dirs test_files test_seek test_truncate test_parallel \
	test_alloc test_paths test_orphan test_move test_corrupt

test_%: tests/test_%.sh
ifdef QUIET
-	./$< | sed -n '/^[-=]/p'
+	@./$< | sed -n '/^[-=]/p'
else
	./$<
endif


@@ -16,7 +16,7 @@ of memory. Recursion is avoided and dynamic memory is limited to configurable
buffers that can be provided statically.

**Power-loss resilient** - The littlefs is designed for systems that may have
-random power failures. The littlefs has strong copy-on-write guaruntees and
+random power failures. The littlefs has strong copy-on-write guarantees and
storage on disk is always kept in a valid state.

**Wear leveling** - Since the most common form of embedded storage is erodible

@@ -88,7 +88,7 @@ int main(void) {
## Usage

Detailed documentation (or at least as much detail as is currently available)
-can be cound in the comments in [lfs.h](lfs.h).
+can be found in the comments in [lfs.h](lfs.h).

As you may have noticed, littlefs takes in a configuration structure that
defines how the filesystem operates. The configuration struct provides the

@@ -101,12 +101,12 @@ to the user to allocate, allowing multiple filesystems to be in use
simultaneously. With the `lfs_t` and configuration struct, a user can
format a block device or mount the filesystem.

-Once mounted, the littlefs provides a full set of posix-like file and
+Once mounted, the littlefs provides a full set of POSIX-like file and
directory functions, with the deviation that the allocation of filesystem
structures must be provided by the user.

-All posix operations, such as remove and rename, are atomic, even in event
-of power-loss. Additionally, no file updates are actually commited to the
+All POSIX operations, such as remove and rename, are atomic, even in event
+of power-loss. Additionally, no file updates are actually committed to the
filesystem until sync or close is called on the file.

## Other notes

@@ -116,7 +116,7 @@ can be either one of those found in the `enum lfs_error` in [lfs.h](lfs.h),
or an error returned by the user's block device operations.

It should also be noted that the current implementation of littlefs doesn't
-really do anything to insure that the data written to disk is machine portable.
+really do anything to ensure that the data written to disk is machine portable.
This is fine as long as all of the involved machines share endianness
(little-endian) and don't have strange padding requirements.

@@ -131,9 +131,9 @@ with all the nitty-gritty details. Can be useful for developing tooling.
## Testing

-The littlefs comes with a test suite designed to run on a pc using the
+The littlefs comes with a test suite designed to run on a PC using the
[emulated block device](emubd/lfs_emubd.h) found in the emubd directory.
-The tests assume a linux environment and can be started with make:
+The tests assume a Linux environment and can be started with make:

``` bash
make test

@@ -148,7 +148,7 @@ littlefs is available in Mbed OS as the [LittleFileSystem](https://os.mbed.com/d
class.

[littlefs-fuse](https://github.com/geky/littlefs-fuse) - A [FUSE](https://github.com/libfuse/libfuse)
-wrapper for littlefs. The project allows you to mount littlefs directly in a
+wrapper for littlefs. The project allows you to mount littlefs directly on a
Linux machine. Can be useful for debugging littlefs if you have an SD card
handy.

SPEC.md

@@ -46,7 +46,7 @@ Here's the layout of metadata blocks on disk:
| 0x04 | 32 bits | dir size |
| 0x08 | 64 bits | tail pointer |
| 0x10 | size-16 bytes | dir entries |
-| 0x00+s | 32 bits | crc |
+| 0x00+s | 32 bits | CRC |

**Revision count** - Incremented every update, only the uncorrupted
metadata-block with the most recent revision count contains the valid metadata.

@@ -75,7 +75,7 @@ Here's an example of a simple directory stored on disk:
(32 bits) revision count = 10 (0x0000000a)
(32 bits) dir size = 154 bytes, end of dir (0x0000009a)
(64 bits) tail pointer = 37, 36 (0x00000025, 0x00000024)
-(32 bits) crc = 0xc86e3106
+(32 bits) CRC = 0xc86e3106

00000000: 0a 00 00 00 9a 00 00 00 25 00 00 00 24 00 00 00 ........%...$...
00000010: 22 08 00 03 05 00 00 00 04 00 00 00 74 65 61 22 "...........tea"

@@ -138,12 +138,12 @@ not include the entry type size, attributes, or name. The full size in bytes
of the entry is 4 + entry length + attribute length + name length.

**Attribute length** - Length of system-specific attributes in bytes. Since
-attributes are system specific, there is not much garuntee on the values in
+attributes are system specific, there is not much guarantee on the values in
this section, and systems are expected to work even when it is empty. See the
[attributes](#entry-attributes) section for more details.

-**Name length** - Length of the entry name. Entry names are stored as utf8,
-although most systems will probably only support ascii. Entry names can not
+**Name length** - Length of the entry name. Entry names are stored as UTF8,
+although most systems will probably only support ASCII. Entry names can not
contain '/' and can not be '.' or '..' as these are a part of the syntax of
filesystem paths.

@@ -222,7 +222,7 @@ Here's an example of a complete superblock:
(32 bits) block count = 1024 blocks (0x00000400)
(32 bits) version = 1.1 (0x00010001)
(8 bytes) magic string = littlefs
-(32 bits) crc = 0xc50b74fa
+(32 bits) CRC = 0xc50b74fa

00000000: 03 00 00 00 34 00 00 00 03 00 00 00 02 00 00 00 ....4...........
00000010: 2e 14 00 08 03 00 00 00 02 00 00 00 00 02 00 00 ................


@@ -190,13 +190,13 @@ int lfs_emubd_erase(const struct lfs_config *cfg, lfs_block_t block) {
    }

    if (!err && S_ISREG(st.st_mode) && (S_IWUSR & st.st_mode)) {
-        int err = unlink(emu->path);
+        err = unlink(emu->path);
        if (err) {
            return -errno;
        }
    }

-    if (errno == ENOENT || (S_ISREG(st.st_mode) && (S_IWUSR & st.st_mode))) {
+    if (err || (S_ISREG(st.st_mode) && (S_IWUSR & st.st_mode))) {
        FILE *f = fopen(emu->path, "w");
        if (!f) {
            return -errno;

lfs.c

@@ -278,7 +278,7 @@ static int lfs_alloc_lookahead(void *p, lfs_block_t block) {
            % (lfs_soff_t)(lfs->cfg->block_count))
            + lfs->cfg->block_count) % lfs->cfg->block_count;

-    if (off < lfs->cfg->lookahead) {
+    if (off < lfs->free.size) {
        lfs->free.buffer[off / 32] |= 1U << (off % 32);
    }

@@ -287,18 +287,7 @@ static int lfs_alloc_lookahead(void *p, lfs_block_t block) {
static int lfs_alloc(lfs_t *lfs, lfs_block_t *block) {
    while (true) {
-        while (true) {
-            // check if we have looked at all blocks since last ack
-            if (lfs->free.begin + lfs->free.off == lfs->free.end) {
-                LFS_WARN("No more free space %d", lfs->free.end);
-                return LFS_ERR_NOSPC;
-            }
-
-            if (lfs->free.off >= lfs_min(
-                    lfs->cfg->lookahead, lfs->cfg->block_count)) {
-                break;
-            }
-
+        while (lfs->free.off != lfs->free.size) {
            lfs_block_t off = lfs->free.off;
            lfs->free.off += 1;

@@ -309,7 +298,15 @@ static int lfs_alloc(lfs_t *lfs, lfs_block_t *block) {
            }
        }

-        lfs->free.begin += lfs_min(lfs->cfg->lookahead, lfs->cfg->block_count);
+        // check if we have looked at all blocks since last ack
+        if (lfs->free.off == lfs->free.ack - lfs->free.begin) {
+            LFS_WARN("No more free space %d", lfs->free.off + lfs->free.begin);
+            return LFS_ERR_NOSPC;
+        }
+
+        lfs->free.begin += lfs->free.size;
+        lfs->free.size = lfs_min(lfs->cfg->lookahead,
+                lfs->free.ack - lfs->free.begin);
        lfs->free.off = 0;

        // find mask of free blocks from tree

@@ -322,7 +319,7 @@ static int lfs_alloc(lfs_t *lfs, lfs_block_t *block) {
}

static void lfs_alloc_ack(lfs_t *lfs) {
-    lfs->free.end = lfs->free.begin + lfs->free.off + lfs->cfg->block_count;
+    lfs->free.ack = lfs->free.off-1 + lfs->free.begin + lfs->cfg->block_count;
}
@@ -481,7 +478,7 @@ static int lfs_dir_commit(lfs_t *lfs, lfs_dir_t *dir,
        while (newoff < (0x7fffffff & dir->d.size)-4) {
            if (i < count && regions[i].oldoff == oldoff) {
                lfs_crc(&crc, regions[i].newdata, regions[i].newlen);
-                int err = lfs_bd_prog(lfs, dir->pair[0],
+                err = lfs_bd_prog(lfs, dir->pair[0],
                        newoff, regions[i].newdata, regions[i].newlen);
                if (err) {
                    if (err == LFS_ERR_CORRUPT) {

@@ -495,7 +492,7 @@ static int lfs_dir_commit(lfs_t *lfs, lfs_dir_t *dir,
                i += 1;
            } else {
                uint8_t data;
-                int err = lfs_bd_read(lfs, oldpair[1], oldoff, &data, 1);
+                err = lfs_bd_read(lfs, oldpair[1], oldoff, &data, 1);
                if (err) {
                    return err;
                }

@@ -1005,7 +1002,7 @@ int lfs_dir_seek(lfs_t *lfs, lfs_dir_t *dir, lfs_off_t off) {
            return LFS_ERR_INVAL;
        }

-        int err = lfs_dir_fetch(lfs, dir, dir->d.tail);
+        err = lfs_dir_fetch(lfs, dir, dir->d.tail);
        if (err) {
            return err;
        }

@@ -1016,6 +1013,7 @@ int lfs_dir_seek(lfs_t *lfs, lfs_dir_t *dir, lfs_off_t off) {
}

lfs_soff_t lfs_dir_tell(lfs_t *lfs, lfs_dir_t *dir) {
+    (void)lfs;
    return dir->pos;
}
@@ -1116,7 +1114,7 @@ static int lfs_ctz_extend(lfs_t *lfs,
        if (size != lfs->cfg->block_size) {
            for (lfs_off_t i = 0; i < size; i++) {
                uint8_t data;
-                int err = lfs_cache_read(lfs, rcache, NULL,
+                err = lfs_cache_read(lfs, rcache, NULL,
                        head, i, &data, 1);
                if (err) {
                    return err;

@@ -1142,7 +1140,7 @@ static int lfs_ctz_extend(lfs_t *lfs,
        lfs_size_t skips = lfs_ctz(index) + 1;

        for (lfs_off_t i = 0; i < skips; i++) {
-            int err = lfs_cache_prog(lfs, pcache, rcache,
+            err = lfs_cache_prog(lfs, pcache, rcache,
                    nblock, 4*i, &head, 4);
            if (err) {
                if (err == LFS_ERR_CORRUPT) {
@@ -1450,7 +1448,7 @@ int lfs_file_sync(lfs_t *lfs, lfs_file_t *file) {
            !lfs_pairisnull(file->pair)) {
        // update dir entry
        lfs_dir_t cwd;
-        int err = lfs_dir_fetch(lfs, &cwd, file->pair);
+        err = lfs_dir_fetch(lfs, &cwd, file->pair);
        if (err) {
            return err;
        }

@@ -1462,11 +1460,7 @@ int lfs_file_sync(lfs_t *lfs, lfs_file_t *file) {
            return err;
        }

-        if (entry.d.type != LFS_TYPE_REG) {
-            // sanity check valid entry
-            return LFS_ERR_INVAL;
-        }
+        assert(entry.d.type == LFS_TYPE_REG);

        entry.d.u.file.head = file->head;
        entry.d.u.file.size = file->size;
@@ -1487,7 +1481,7 @@ lfs_ssize_t lfs_file_read(lfs_t *lfs, lfs_file_t *file,
    lfs_size_t nsize = size;

    if ((file->flags & 3) == LFS_O_WRONLY) {
-        return LFS_ERR_INVAL;
+        return LFS_ERR_BADF;
    }

    if (file->flags & LFS_F_WRITING) {

@@ -1543,7 +1537,7 @@ lfs_ssize_t lfs_file_write(lfs_t *lfs, lfs_file_t *file,
    lfs_size_t nsize = size;

    if ((file->flags & 3) == LFS_O_RDONLY) {
-        return LFS_ERR_INVAL;
+        return LFS_ERR_BADF;
    }

    if (file->flags & LFS_F_READING) {
@@ -1666,10 +1660,11 @@ lfs_soff_t lfs_file_seek(lfs_t *lfs, lfs_file_t *file,
int lfs_file_truncate(lfs_t *lfs, lfs_file_t *file, lfs_off_t size) {
    if ((file->flags & 3) == LFS_O_RDONLY) {
-        return LFS_ERR_INVAL;
+        return LFS_ERR_BADF;
    }

-    if (size < lfs_file_size(lfs, file)) {
+    lfs_off_t oldsize = lfs_file_size(lfs, file);
+    if (size < oldsize) {
        // need to flush since directly changing metadata
        int err = lfs_file_flush(lfs, file);
        if (err) {

@@ -1686,13 +1681,13 @@ int lfs_file_truncate(lfs_t *lfs, lfs_file_t *file, lfs_off_t size) {
        file->size = size;
        file->flags |= LFS_F_DIRTY;
-    } else if (size > lfs_file_size(lfs, file)) {
+    } else if (size > oldsize) {
        lfs_off_t pos = file->pos;

        // flush+seek if not already at end
-        if (file->pos != lfs_file_size(lfs, file)) {
+        if (file->pos != oldsize) {
            int err = lfs_file_seek(lfs, file, 0, SEEK_END);
-            if (err) {
+            if (err < 0) {
                return err;
            }
        }

@@ -1716,6 +1711,7 @@ int lfs_file_truncate(lfs_t *lfs, lfs_file_t *file, lfs_off_t size) {
}

lfs_soff_t lfs_file_tell(lfs_t *lfs, lfs_file_t *file) {
+    (void)lfs;
    return file->pos;
}
@@ -1729,6 +1725,7 @@ int lfs_file_rewind(lfs_t *lfs, lfs_file_t *file) {
}

lfs_soff_t lfs_file_size(lfs_t *lfs, lfs_file_t *file) {
+    (void)lfs;
    if (file->flags & LFS_F_WRITING) {
        return lfs_max(file->pos, file->size);
    } else {

@@ -1737,7 +1734,7 @@ lfs_soff_t lfs_file_size(lfs_t *lfs, lfs_file_t *file) {
}

-/// General fs oprations ///
+/// General fs operations ///
int lfs_stat(lfs_t *lfs, const char *path, struct lfs_info *info) {
    // check for root, can only be something like '/././../.'
    if (strspn(path, "/.") == strlen(path)) {
@@ -1801,7 +1798,7 @@ int lfs_remove(lfs_t *lfs, const char *path) {
        // must be empty before removal, checking size
        // without masking top bit checks for any case where
        // dir is not empty
-        int err = lfs_dir_fetch(lfs, &dir, entry.d.u.dir);
+        err = lfs_dir_fetch(lfs, &dir, entry.d.u.dir);
        if (err) {
            return err;
        } else if (dir.d.size != sizeof(dir.d)+4) {

@@ -1826,7 +1823,7 @@ int lfs_remove(lfs_t *lfs, const char *path) {
        cwd.d.tail[0] = dir.d.tail[0];
        cwd.d.tail[1] = dir.d.tail[1];

-        int err = lfs_dir_commit(lfs, &cwd, NULL, 0);
+        err = lfs_dir_commit(lfs, &cwd, NULL, 0);
        if (err) {
            return err;
        }
@@ -1875,7 +1872,7 @@ int lfs_rename(lfs_t *lfs, const char *oldpath, const char *newpath) {
    // must have same type
    if (prevexists && preventry.d.type != oldentry.d.type) {
-        return LFS_ERR_INVAL;
+        return LFS_ERR_ISDIR;
    }

    lfs_dir_t dir;

@@ -1883,11 +1880,11 @@ int lfs_rename(lfs_t *lfs, const char *oldpath, const char *newpath) {
        // must be empty before removal, checking size
        // without masking top bit checks for any case where
        // dir is not empty
-        int err = lfs_dir_fetch(lfs, &dir, preventry.d.u.dir);
+        err = lfs_dir_fetch(lfs, &dir, preventry.d.u.dir);
        if (err) {
            return err;
        } else if (dir.d.size != sizeof(dir.d)+4) {
-            return LFS_ERR_INVAL;
+            return LFS_ERR_NOTEMPTY;
        }
    }
@@ -1910,12 +1907,12 @@ int lfs_rename(lfs_t *lfs, const char *oldpath, const char *newpath) {
    newentry.d.nlen = strlen(newpath);

    if (prevexists) {
-        int err = lfs_dir_update(lfs, &newcwd, &newentry, newpath);
+        err = lfs_dir_update(lfs, &newcwd, &newentry, newpath);
        if (err) {
            return err;
        }
    } else {
-        int err = lfs_dir_append(lfs, &newcwd, &newentry, newpath);
+        err = lfs_dir_append(lfs, &newcwd, &newentry, newpath);
        if (err) {
            return err;
        }

@@ -1943,7 +1940,7 @@ int lfs_rename(lfs_t *lfs, const char *oldpath, const char *newpath) {
        newcwd.d.tail[0] = dir.d.tail[0];
        newcwd.d.tail[1] = dir.d.tail[1];

-        int err = lfs_dir_commit(lfs, &newcwd, NULL, 0);
+        err = lfs_dir_commit(lfs, &newcwd, NULL, 0);
        if (err) {
            return err;
        }
@@ -2035,11 +2032,11 @@ int lfs_format(lfs_t *lfs, const struct lfs_config *cfg) {
    // create free lookahead
    memset(lfs->free.buffer, 0, lfs->cfg->lookahead/8);
    lfs->free.begin = 0;
+    lfs->free.size = lfs_min(lfs->cfg->lookahead, lfs->cfg->block_count);
    lfs->free.off = 0;
-    lfs->free.end = lfs->free.begin + lfs->free.off + lfs->cfg->block_count;
+    lfs_alloc_ack(lfs);

    // create superblock dir
-    lfs_alloc_ack(lfs);
    lfs_dir_t superdir;
    err = lfs_dir_alloc(lfs, &superdir);
    if (err) {

@@ -2067,7 +2064,7 @@ int lfs_format(lfs_t *lfs, const struct lfs_config *cfg) {
        .d.type = LFS_TYPE_SUPERBLOCK,
        .d.elen = sizeof(superblock.d) - sizeof(superblock.d.magic) - 4,
        .d.nlen = sizeof(superblock.d.magic),
-        .d.version = 0x00010001,
+        .d.version = LFS_DISK_VERSION,
        .d.magic = {"littlefs"},
        .d.block_size = lfs->cfg->block_size,
        .d.block_count = lfs->cfg->block_count,
@@ -2080,7 +2077,7 @@ int lfs_format(lfs_t *lfs, const struct lfs_config *cfg) {
    // write both pairs to be safe
    bool valid = false;
    for (int i = 0; i < 2; i++) {
-        int err = lfs_dir_commit(lfs, &superdir, (struct lfs_region[]){
+        err = lfs_dir_commit(lfs, &superdir, (struct lfs_region[]){
            {sizeof(superdir.d), sizeof(superblock.d),
             &superblock.d, sizeof(superblock.d)}
        }, 1);

@@ -2112,9 +2109,10 @@ int lfs_mount(lfs_t *lfs, const struct lfs_config *cfg) {
    }

    // setup free lookahead
-    lfs->free.begin = -lfs_min(lfs->cfg->lookahead, lfs->cfg->block_count);
-    lfs->free.off = -lfs->free.begin;
-    lfs->free.end = lfs->free.begin + lfs->free.off + lfs->cfg->block_count;
+    lfs->free.begin = 0;
+    lfs->free.size = 0;
+    lfs->free.off = 0;
+    lfs_alloc_ack(lfs);

    // load superblock
    lfs_dir_t dir;

@@ -2125,7 +2123,7 @@ int lfs_mount(lfs_t *lfs, const struct lfs_config *cfg) {
    }

    if (!err) {
-        int err = lfs_bd_read(lfs, dir.pair[0], sizeof(dir.d),
+        err = lfs_bd_read(lfs, dir.pair[0], sizeof(dir.d),
                &superblock.d, sizeof(superblock.d));
        if (err) {
            return err;
@@ -2140,10 +2138,11 @@ int lfs_mount(lfs_t *lfs, const struct lfs_config *cfg) {
        return LFS_ERR_CORRUPT;
    }

-    if (superblock.d.version > (0x00010001 | 0x0000ffff)) {
-        LFS_ERROR("Invalid version %d.%d",
-                0xffff & (superblock.d.version >> 16),
-                0xffff & (superblock.d.version >> 0));
+    uint16_t major_version = (0xffff & (superblock.d.version >> 16));
+    uint16_t minor_version = (0xffff & (superblock.d.version >> 0));
+    if ((major_version != LFS_DISK_VERSION_MAJOR ||
+         minor_version > LFS_DISK_VERSION_MINOR)) {
+        LFS_ERROR("Invalid version %d.%d", major_version, minor_version);
        return LFS_ERR_INVAL;
    }
@@ -2181,7 +2180,7 @@ int lfs_traverse(lfs_t *lfs, int (*cb)(void*, lfs_block_t), void *data) {
        // iterate over contents
        while (dir.off + sizeof(entry.d) <= (0x7fffffff & dir.d.size)-4) {
-            int err = lfs_bd_read(lfs, dir.pair[0], dir.off,
+            err = lfs_bd_read(lfs, dir.pair[0], dir.off,
                    &entry.d, sizeof(entry.d));
            if (err) {
                return err;

@@ -2189,7 +2188,7 @@ int lfs_traverse(lfs_t *lfs, int (*cb)(void*, lfs_block_t), void *data) {
            dir.off += lfs_entry_size(&entry);
            if ((0x70 & entry.d.type) == (0x70 & LFS_TYPE_REG)) {
-                int err = lfs_ctz_traverse(lfs, &lfs->rcache, NULL,
+                err = lfs_ctz_traverse(lfs, &lfs->rcache, NULL,
                        entry.d.u.file.head, entry.d.u.file.size, cb, data);
                if (err) {
                    return err;

@@ -2243,7 +2242,7 @@ static int lfs_pred(lfs_t *lfs, const lfs_block_t dir[2], lfs_dir_t *pdir) {
            return true;
        }

-        int err = lfs_dir_fetch(lfs, pdir, pdir->d.tail);
+        err = lfs_dir_fetch(lfs, pdir, pdir->d.tail);
        if (err) {
            return err;
        }
@@ -2269,7 +2268,7 @@ static int lfs_parent(lfs_t *lfs, const lfs_block_t dir[2],
    }

    while (true) {
-        int err = lfs_dir_next(lfs, parent, entry);
+        err = lfs_dir_next(lfs, parent, entry);
        if (err && err != LFS_ERR_NOENT) {
            return err;
        }

@@ -2303,13 +2302,13 @@ static int lfs_moved(lfs_t *lfs, const void *e) {
    // iterate over all directory directory entries
    lfs_entry_t entry;
    while (!lfs_pairisnull(cwd.d.tail)) {
-        int err = lfs_dir_fetch(lfs, &cwd, cwd.d.tail);
+        err = lfs_dir_fetch(lfs, &cwd, cwd.d.tail);
        if (err) {
            return err;
        }

        while (true) {
-            int err = lfs_dir_next(lfs, &cwd, &entry);
+            err = lfs_dir_next(lfs, &cwd, &entry);
            if (err && err != LFS_ERR_NOENT) {
                return err;
            }
@@ -2440,7 +2439,7 @@ int lfs_deorphan(lfs_t *lfs) {
        // check entries for moves
        lfs_entry_t entry;
        while (true) {
-            int err = lfs_dir_next(lfs, &cwd, &entry);
+            err = lfs_dir_next(lfs, &cwd, &entry);
            if (err && err != LFS_ERR_NOENT) {
                return err;
            }

@@ -2459,7 +2458,7 @@ int lfs_deorphan(lfs_t *lfs) {
            if (moved) {
                LFS_DEBUG("Found move %d %d",
                        entry.d.u.dir[0], entry.d.u.dir[1]);
-                int err = lfs_dir_remove(lfs, &cwd, &entry);
+                err = lfs_dir_remove(lfs, &cwd, &entry);
                if (err) {
                    return err;
                }

@@ -2467,7 +2466,7 @@ int lfs_deorphan(lfs_t *lfs) {
                LFS_DEBUG("Found partial move %d %d",
                        entry.d.u.dir[0], entry.d.u.dir[1]);
                entry.d.type &= ~0x80;
-                int err = lfs_dir_update(lfs, &cwd, &entry, NULL);
+                err = lfs_dir_update(lfs, &cwd, &entry, NULL);
                if (err) {
                    return err;
                }

lfs.h

@@ -22,6 +22,23 @@
#include <stdbool.h>

+/// Version info ///
+
+// Software library version
+// Major (top-nibble), incremented on backwards incompatible changes
+// Minor (bottom-nibble), incremented on feature additions
+#define LFS_VERSION 0x00010002
+#define LFS_VERSION_MAJOR (0xffff & (LFS_VERSION >> 16))
+#define LFS_VERSION_MINOR (0xffff & (LFS_VERSION >> 0))
+
+// Version of On-disk data structures
+// Major (top-nibble), incremented on backwards incompatible changes
+// Minor (bottom-nibble), incremented on feature additions
+#define LFS_DISK_VERSION 0x00010001
+#define LFS_DISK_VERSION_MAJOR (0xffff & (LFS_DISK_VERSION >> 16))
+#define LFS_DISK_VERSION_MINOR (0xffff & (LFS_DISK_VERSION >> 0))
/// Definitions ///

// Type definitions

@@ -49,6 +66,7 @@ enum lfs_error {
    LFS_ERR_NOTDIR = -20, // Entry is not a dir
    LFS_ERR_ISDIR = -21, // Entry is a dir
    LFS_ERR_NOTEMPTY = -39, // Dir is not empty
+    LFS_ERR_BADF = -9, // Bad file number
    LFS_ERR_INVAL = -22, // Invalid parameter
    LFS_ERR_NOSPC = -28, // No space left on device
    LFS_ERR_NOMEM = -12, // No more memory available

@@ -242,8 +260,9 @@ typedef struct lfs_superblock {
typedef struct lfs_free {
    lfs_block_t begin;
-    lfs_block_t end;
+    lfs_block_t size;
    lfs_block_t off;
+    lfs_block_t ack;
    uint32_t *buffer;
} lfs_free_t;


@@ -7,11 +7,11 @@
 // test stuff
-void test_log(const char *s, uintmax_t v) {{
+static void test_log(const char *s, uintmax_t v) {{
     printf("%s: %jd\n", s, v);
 }}
 
-void test_assert(const char *file, unsigned line,
+static void test_assert(const char *file, unsigned line,
         const char *s, uintmax_t v, uintmax_t e) {{
     static const char *last[6] = {{0, 0}};
     if (v != e || !(last[0] == s || last[1] == s ||
@@ -37,7 +37,8 @@ void test_assert(const char *file, unsigned line,
 
 // utility functions for traversals
-int test_count(void *p, lfs_block_t b) {{
+static int __attribute__((used)) test_count(void *p, lfs_block_t b) {{
+    (void)b;
     unsigned *u = (unsigned*)p;
     *u += 1;
     return 0;
@@ -58,7 +59,7 @@ lfs_size_t size;
 lfs_size_t wsize;
 lfs_size_t rsize;
-uintmax_t res;
+uintmax_t test;
 
 #ifndef LFS_READ_SIZE
 #define LFS_READ_SIZE 16
@@ -96,7 +97,7 @@ const struct lfs_config cfg = {{
 
 // Entry point
-int main() {{
+int main(void) {{
     lfs_emubd_create(&cfg, "blocks");
 
     {tests}


@@ -14,19 +14,26 @@ def generate(test):
         match = re.match('(?: *\n)*( *)(.*)=>(.*);', line, re.DOTALL | re.MULTILINE)
         if match:
             tab, test, expect = match.groups()
-            lines.append(tab+'res = {test};'.format(test=test.strip()))
-            lines.append(tab+'test_assert("{name}", res, {expect});'.format(
+            lines.append(tab+'test = {test};'.format(test=test.strip()))
+            lines.append(tab+'test_assert("{name}", test, {expect});'.format(
                 name = re.match('\w*', test.strip()).group(),
                 expect = expect.strip()))
         else:
             lines.append(line)
 
+    # Create test file
     with open('test.c', 'w') as file:
         file.write(template.format(tests='\n'.join(lines)))
 
+    # Remove build artifacts to force rebuild
+    try:
+        os.remove('test.o')
+        os.remove('lfs')
+    except OSError:
+        pass
+
 def compile():
-    os.environ['CFLAGS'] = os.environ.get('CFLAGS', '') + ' -Werror'
-    subprocess.check_call(['make', '--no-print-directory', '-s'], env=os.environ)
+    subprocess.check_call(['make', '--no-print-directory', '-s'])
 
 def execute():
     subprocess.check_call(["./lfs"])


@@ -266,6 +266,40 @@ tests/test.py << TEST
     lfs_mkdir(&lfs, "exhaustiondir2") => LFS_ERR_NOSPC;
 TEST
 
+echo "--- Split dir test ---"
+rm -rf blocks
+tests/test.py << TEST
+    lfs_format(&lfs, &cfg) => 0;
+TEST
+tests/test.py << TEST
+    lfs_mount(&lfs, &cfg) => 0;
+
+    // create one block hole for half a directory
+    lfs_file_open(&lfs, &file[0], "bump", LFS_O_WRONLY | LFS_O_CREAT) => 0;
+    lfs_file_write(&lfs, &file[0], (void*)"hi", 2) => 2;
+    lfs_file_close(&lfs, &file[0]) => 0;
+
+    lfs_file_open(&lfs, &file[0], "exhaustion", LFS_O_WRONLY | LFS_O_CREAT);
+    size = strlen("blahblahblahblah");
+    memcpy(buffer, "blahblahblahblah", size);
+    for (lfs_size_t i = 0;
+            i < (cfg.block_count-6)*(cfg.block_size-8);
+            i += size) {
+        lfs_file_write(&lfs, &file[0], buffer, size) => size;
+    }
+    lfs_file_close(&lfs, &file[0]) => 0;
+
+    // open hole
+    lfs_remove(&lfs, "bump") => 0;
+
+    lfs_mkdir(&lfs, "splitdir") => 0;
+    lfs_file_open(&lfs, &file[0], "splitdir/bump",
+            LFS_O_WRONLY | LFS_O_CREAT) => 0;
+    lfs_file_write(&lfs, &file[0], buffer, size) => LFS_ERR_NOSPC;
+    lfs_file_close(&lfs, &file[0]) => 0;
+    lfs_unmount(&lfs) => 0;
+TEST
 
 echo "--- Results ---"
 tests/stats.py


@@ -220,7 +220,7 @@ tests/test.py << TEST
     lfs_mount(&lfs, &cfg) => 0;
     lfs_mkdir(&lfs, "warmpotato") => 0;
     lfs_mkdir(&lfs, "warmpotato/mushy") => 0;
-    lfs_rename(&lfs, "hotpotato", "warmpotato") => LFS_ERR_INVAL;
+    lfs_rename(&lfs, "hotpotato", "warmpotato") => LFS_ERR_NOTEMPTY;
 
     lfs_remove(&lfs, "warmpotato/mushy") => 0;
     lfs_rename(&lfs, "hotpotato", "warmpotato") => 0;


@@ -13,10 +13,12 @@ TEST
 truncate_test() {
     STARTSIZES="$1"
-    HOTSIZES="$2"
-    COLDSIZES="$3"
+    STARTSEEKS="$2"
+    HOTSIZES="$3"
+    COLDSIZES="$4"
 tests/test.py << TEST
     static const lfs_off_t startsizes[] = {$STARTSIZES};
+    static const lfs_off_t startseeks[] = {$STARTSEEKS};
     static const lfs_off_t hotsizes[] = {$HOTSIZES};
 
     lfs_mount(&lfs, &cfg) => 0;
@@ -33,6 +35,11 @@ tests/test.py << TEST
         }
         lfs_file_size(&lfs, &file[0]) => startsizes[i];
 
+        if (startseeks[i] != startsizes[i]) {
+            lfs_file_seek(&lfs, &file[0],
+                    startseeks[i], LFS_SEEK_SET) => startseeks[i];
+        }
+
         lfs_file_truncate(&lfs, &file[0], hotsizes[i]) => 0;
         lfs_file_size(&lfs, &file[0]) => hotsizes[i];
@@ -107,18 +114,21 @@ TEST
 echo "--- Cold shrinking truncate ---"
 truncate_test \
     "2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE" \
+    "2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE" \
     "2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE" \
     " 0, $SMALLSIZE, $MEDIUMSIZE, $LARGESIZE, 2*$LARGESIZE"
 
 echo "--- Cold expanding truncate ---"
 truncate_test \
     " 0, $SMALLSIZE, $MEDIUMSIZE, $LARGESIZE, 2*$LARGESIZE" \
+    " 0, $SMALLSIZE, $MEDIUMSIZE, $LARGESIZE, 2*$LARGESIZE" \
     "2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE" \
     "2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE"
 
 echo "--- Warm shrinking truncate ---"
 truncate_test \
     "2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE" \
+    "2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE" \
     " 0, $SMALLSIZE, $MEDIUMSIZE, $LARGESIZE, 2*$LARGESIZE" \
     " 0, 0, 0, 0, 0"
@@ -126,6 +136,21 @@ truncate_test \
 echo "--- Warm expanding truncate ---"
 truncate_test \
     " 0, $SMALLSIZE, $MEDIUMSIZE, $LARGESIZE, 2*$LARGESIZE" \
+    " 0, $SMALLSIZE, $MEDIUMSIZE, $LARGESIZE, 2*$LARGESIZE" \
+    "2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE" \
+    "2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE"
+
+echo "--- Mid-file shrinking truncate ---"
+truncate_test \
+    "2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE" \
+    " $LARGESIZE,  $LARGESIZE,  $LARGESIZE,  $LARGESIZE,  $LARGESIZE" \
+    " 0, $SMALLSIZE, $MEDIUMSIZE, $LARGESIZE, 2*$LARGESIZE" \
+    " 0, 0, 0, 0, 0"
+
+echo "--- Mid-file expanding truncate ---"
+truncate_test \
+    " 0, $SMALLSIZE, $MEDIUMSIZE, $LARGESIZE, 2*$LARGESIZE" \
+    " 0, 0, $SMALLSIZE, $MEDIUMSIZE, $LARGESIZE" \
     "2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE" \
     "2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE"