Compare commits

...

17 Commits

Author SHA1 Message Date
Christopher Haster
ec4d8b68ad Changed release script to generate drafts 2018-10-20 12:34:41 -05:00
Christopher Haster
c7894a61e1 Added a handful of links to related projects
Interesting open-source projects that I've run into around embedded
storage. They may be interesting to others in the embedded space.

Added mklfs, SPIFFS, and Dhara.

Also a thanks to jolivepetrus for posting the mklfs tool he put
together.
2018-10-20 12:34:41 -05:00
Christopher Haster
195075819e Added 2GiB file size limit and EFBIG reporting
On disk, littlefs uses 32-bit integers to track file size. This sets a
theoretical limit of 4GiB for files.

However, the API passes file sizes around as signed numbers, with
negative values representing error codes. This means that not all of the
APIs will work with file sizes > 2GiB.

Because of related complications over in FUSE land, I've added the LFS_FILE_MAX
constant and proper error reporting if file writes/seeks exceed the 2GiB limit.
In v2 this will join the other constants that get stored in the
superblock to help portability. Since littlefs is targeting
microcontrollers, it's likely this will be a sufficient solution.

Note that it's still possible to enable partial support for 4GiB files
by defining LFS_FILE_MAX during compilation. This will work for most of
the APIs, except lfs_file_seek, lfs_file_tell, and lfs_file_size.

We can also consider improving support for 4GiB files by making seek a
bit more complicated and adding an lfs_file_stat function. I'll leave
this for a future improvement if there's interest.

Found by cgrozemuller
2018-10-20 12:34:23 -05:00
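
A minimal stand-alone sketch of the checks described above (the helper names are hypothetical; the real changes are in the lfs.c and lfs.h diffs further down the page):

```c
#include <stdint.h>

#ifndef LFS_FILE_MAX
#define LFS_FILE_MAX 2147483647  // 2^31 - 1, keeps sizes representable as signed values
#endif

// roughly the check added to lfs_file_write (LFS_ERR_FBIG is -27 in lfs.h)
static int check_write_limit(uint32_t pos, uint32_t size) {
    if ((uint64_t)pos + size > LFS_FILE_MAX) {
        return -27;  // LFS_ERR_FBIG: file too large
    }
    return 0;
}

// roughly the range check added to lfs_file_seek (LFS_ERR_INVAL is -22)
static int check_seek_target(int64_t npos) {
    if (npos < 0 || npos > LFS_FILE_MAX) {
        return -22;  // LFS_ERR_INVAL: position out of range
    }
    return 0;
}
```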
Christopher Haster
97d8d5e96a Fixed issue where a rename causes a split and pushes dir out of sync
The issue happens when a rename causes a split in the destination pair.
If the destination pair is the same as the source pair, this triggers the
logic that keeps both pairs in sync. Unfortunately, this logic didn't work,
because the source entry still resides in the old source pair, while the
destination entry now lives in the new pair created by the split.

The best fix for now is to refetch the source pair after the changes to the
destination pair. This isn't the most efficient solution, but fortunately
this bug has already been fixed in the revamped move logic in littlefs v2
(currently in progress).

Found by ohoc
2018-10-20 12:34:11 -05:00
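
The shape of the fix, paraphrased from the lfs_rename change in the lfs.c diff below (a sketch, not the full function):

```c
// After the destination pair has been updated (and possibly split), refetch
// the source pair so the old entry is removed from wherever it actually lives.
lfs->moving = true;  // lets lfs_dir_find skip the "moved" check on this entry
err = lfs_dir_find(lfs, &oldcwd, &oldentry, &oldpath);
if (err) {
    return err;
}
lfs->moving = false;

// remove the old entry from the freshly fetched pair
err = lfs_dir_remove(lfs, &oldcwd, &oldentry);
```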
Christopher Haster
0bb1f7af17 Modified release script to create notes only on minor releases
Before, release notes with a list of changes were created on every
patch release. Unfortunately, it looks like this will create a lot of
noise on GitHub, with a notification for every patch release, which may
be as often as every time a PR is merged.

Rather than creating all of this noise for relatively uninteresting
changes, the script will now stick to simple tags, and create the
release notes only on minor releases.

I think this is what several of you were originally suggesting. Sorry
about the journey; at least I learned a lot.
2018-09-29 12:31:27 -05:00
Christopher Haster
447d89cbd8 Merge pull request #109 from OTAkeys/pr/fix-sign-compare
Fix -Wsign-compare error
2018-09-29 12:29:54 -05:00
Vincent Dupont
28d2d96a83 Fix -Wsign-compare error 2018-09-29 11:33:19 -05:00
Christopher Haster
cb62bf2188 Fixed release script issue with fetching recent tags
Fetching all tags was triggering the pagination system inside the GitHub
API. This prevented version tags from being found.

Modified to use the version tag prefix in the ref lookup; however, this
may still cause an issue if there are enough patch releases to trigger
pagination.

A simple-ish solution is to grab the Link header to jump to the last page,
since paginated results appear to be in sorted order.
2018-09-27 14:46:12 -05:00
Christopher Haster
646b1b5a6c Added -Wjump-misses-init and fixed uninitialized warnings 2018-09-26 18:58:54 -05:00
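For context, the kind of pattern -Wjump-misses-init catches (an illustrative example, not taken from the littlefs sources):

```c
static int demo(int fail) {
    if (fail) {
        goto done;   // gcc warns: jump skips variable initialization
    }
    int len = 42;
done:
    // len is in scope here but was never initialized on the goto path
    return fail ? -1 : len;
}
```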
Christopher Haster
1b7a15599e Merge pull request #106 from conkerkh/master
If the stats file doesn't exist, lfs_emubd_create will fail.
2018-09-26 18:58:34 -05:00
Christopher Haster
e5a6938faf Fixed possible infinite loop in deorphan step
Normally, the linked list of directory pairs should terminate at a null
pointer. However, if the filesystem is corrupted, it is possible that
this linked list forms a cycle.

This should never happen with littlefs's power resilience, but if it does
we should recover appropriately.

Modified lfs_deorphan to notice if we have a cycle and return
LFS_ERR_CORRUPT in that situation.

Found by kneko715
2018-09-26 18:58:11 -05:00
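
The shape of the fix, following the lfs_deorphan change in the lfs.c diff below: a valid filesystem can never contain more directory pairs than blocks, so bounding the traversal turns a cycle into a detectable error.

```c
for (lfs_size_t i = 0; i < lfs->cfg->block_count; i++) {
    if (lfs_pairisnull(cwd.d.tail)) {
        return 0;  // reached the proper end of the linked list
    }

    int err = lfs_dir_fetch(lfs, &cwd, cwd.d.tail);
    if (err) {
        return err;
    }

    // ... existing deorphan checks on cwd ...
}

// more directory pairs than blocks in the filesystem: there must be a cycle
return LFS_ERR_CORRUPT;
```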
Chris
6ad544f3f3 If the stats file doesn't exist, lfs_emubd_create will fail.
This creates a default stats file if it doesn't exist.
2018-09-26 18:24:58 -05:00
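
A simplified sketch of the new behavior (the actual change is in the lfs_emubd.c diff below): a missing stats file is no longer an error, the stats simply start from zero.

```c
FILE *f = fopen(emu->path, "r");
if (!f && errno != ENOENT) {
    return -errno;          // real I/O error
}

if (!f) {
    // no stats file yet: start with zeroed stats instead of failing
    memset(&emu->stats, 0, sizeof(emu->stats));
} else {
    size_t res = fread(&emu->stats, sizeof(emu->stats), 1, f);
    if (res < 1) {
        return -errno;
    }
    if (fclose(f)) {
        return -errno;
    }
}
```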
Christopher Haster
3419284689 Fixed issue with corruption due to different cache sizes
The lfs_cache_zero function that was recently added assumed a single cache
size, which is incorrect. This would cause a buffer overflow if
read_size != prog_size.

Since lfs_cache_zero is only used for scrubbing prog caches, the fix
here is to use lfs_cache_drop on read caches instead. Info in read
caches should never make its way to disk.

Found by nstcl
2018-09-04 13:57:22 -05:00
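
A plausible sketch of the two helpers mentioned above (the bodies here are illustrative, not copied from lfs.c): scrubbing needs the true size of the cache it touches, while dropping only invalidates it, which is all a read cache needs since its contents never reach disk.

```c
#include <stdint.h>
#include <string.h>

typedef struct {
    uint32_t block;     // block currently held, 0xffffffff when invalid
    uint8_t *buffer;    // read_size or prog_size bytes, depending on the cache
} cache_t;

// read caches never reach disk, so invalidating them is enough
static void cache_drop(cache_t *rcache) {
    rcache->block = 0xffffffff;
}

// prog caches do reach disk, so scrub exactly prog_size bytes before reuse
static void cache_zero(cache_t *pcache, size_t prog_size) {
    memset(pcache->buffer, 0xff, prog_size);
    pcache->block = 0xffffffff;
}
```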
Christopher Haster
510cd13df9 Bumped minor version to v1.6 2018-07-27 15:59:18 -05:00
Christopher Haster
f5e0539951 Fixed issue with release script non-standard version tags 2018-07-27 15:20:00 -05:00
Christopher Haster
066448055c Moved SPDX and license info into README
This makes it a bit easier to find the description of the SPDX tags,
and fixes the issue where GitHub doesn't detect the license text.
2018-07-27 14:02:38 -05:00
Christopher Haster
d66723ccfd Merge pull request #81 from ARMmbed/simple-versioning
Simplified release process based on feedback
2018-07-27 14:02:23 -05:00
11 changed files with 323 additions and 204 deletions

View File

@@ -138,12 +138,15 @@ jobs:
- LFS_VERSION=$(grep -ox '#define LFS_VERSION .*' lfs.h | cut -d ' ' -f3) - LFS_VERSION=$(grep -ox '#define LFS_VERSION .*' lfs.h | cut -d ' ' -f3)
- LFS_VERSION_MAJOR=$((0xffff & ($LFS_VERSION >> 16))) - LFS_VERSION_MAJOR=$((0xffff & ($LFS_VERSION >> 16)))
- LFS_VERSION_MINOR=$((0xffff & ($LFS_VERSION >> 0))) - LFS_VERSION_MINOR=$((0xffff & ($LFS_VERSION >> 0)))
# Grab latests patch from repo tags, default to 0 # Grab latests patch from repo tags, default to 0, needs finagling to get past github's pagination api
- LFS_VERSION_PATCH=$(curl -f -u "$GEKY_BOT_RELEASES" - PREV_URL=https://api.github.com/repos/$TRAVIS_REPO_SLUG/git/refs/tags/v$LFS_VERSION_MAJOR.$LFS_VERSION_MINOR.
https://api.github.com/repos/$TRAVIS_REPO_SLUG/git/refs - PREV_URL=$(curl -u "$GEKY_BOT_RELEASES" "$PREV_URL" -I
| jq 'map(.ref | match( | sed -n '/^Link/{s/.*<\(.*\)>; rel="last"/\1/;p;q0};$q1'
"refs/tags/v'"$LFS_VERSION_MAJOR"'\\.'"$LFS_VERSION_MINOR"'\\.(.*)$") || echo $PREV_URL)
.captures[].string | tonumber + 1) | max // 0') - LFS_VERSION_PATCH=$(curl -u "$GEKY_BOT_RELEASES" "$PREV_URL"
| jq 'map(.ref | match("\\bv.*\\..*\\.(.*)$";"g")
.captures[].string | tonumber) | max + 1'
|| echo 0)
# We have our new version # We have our new version
- LFS_VERSION="v$LFS_VERSION_MAJOR.$LFS_VERSION_MINOR.$LFS_VERSION_PATCH" - LFS_VERSION="v$LFS_VERSION_MAJOR.$LFS_VERSION_MINOR.$LFS_VERSION_PATCH"
- echo "VERSION $LFS_VERSION" - echo "VERSION $LFS_VERSION"
@@ -154,24 +157,35 @@ jobs:
| jq -re '.sha') | jq -re '.sha')
if [ "$TRAVIS_COMMIT" == "$CURRENT_COMMIT" ] if [ "$TRAVIS_COMMIT" == "$CURRENT_COMMIT" ]
then then
# Build release notes # Create a simple tag
PREV=$(git tag --sort=-v:refname -l "v*.*.*" | head -1)
if [ ! -z "$PREV" ]
then
echo "PREV $PREV"
CHANGES=$'### Changes\n\n'$( \
git log --oneline $PREV.. --grep='^Merge' --invert-grep)
printf "CHANGES\n%s\n\n" "$CHANGES"
fi
# Create the release
curl -f -u "$GEKY_BOT_RELEASES" -X POST \ curl -f -u "$GEKY_BOT_RELEASES" -X POST \
https://api.github.com/repos/$TRAVIS_REPO_SLUG/releases \ https://api.github.com/repos/$TRAVIS_REPO_SLUG/git/refs \
-d "{ -d "{
\"tag_name\": \"$LFS_VERSION\", \"ref\": \"refs/tags/$LFS_VERSION\",
\"target_commitish\": \"$TRAVIS_COMMIT\", \"sha\": \"$TRAVIS_COMMIT\"
\"name\": \"${LFS_VERSION%.0}\",
\"body\": $(jq -sR '.' <<< "$CHANGES")
}" }"
# Minor release?
if [[ "$LFS_VERSION" == *.0 ]]
then
# Build release notes
PREV=$(git tag --sort=-v:refname -l "v*.0" | head -1)
if [ ! -z "$PREV" ]
then
echo "PREV $PREV"
CHANGES=$'### Changes\n\n'$( \
git log --oneline $PREV.. --grep='^Merge' --invert-grep)
printf "CHANGES\n%s\n\n" "$CHANGES"
fi
# Create the release
curl -f -u "$GEKY_BOT_RELEASES" -X POST \
https://api.github.com/repos/$TRAVIS_REPO_SLUG/releases \
-d "{
\"tag_name\": \"$LFS_VERSION\",
\"name\": \"${LFS_VERSION%.0}\",
\"draft\": true,
\"body\": $(jq -sR '.' <<< "$CHANGES")
}"
fi
fi fi
# Manage statuses # Manage statuses

View File

@@ -22,15 +22,3 @@ LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
---
*Note*:
Individual files contain the following tag instead of the full license text.
SPDX-License-Identifier: BSD-3-Clause
This enables machine processing of license information based on the SPDX
License Identifiers that are here available: http://spdx.org/licenses/

View File

@@ -25,7 +25,8 @@ ifdef WORD
override CFLAGS += -m$(WORD) override CFLAGS += -m$(WORD)
endif endif
override CFLAGS += -I. override CFLAGS += -I.
override CFLAGS += -std=c99 -Wall -pedantic -Wshadow -Wunused-parameter override CFLAGS += -std=c99 -Wall -pedantic
override CFLAGS += -Wshadow -Wunused-parameter -Wjump-misses-init -Wsign-compare
all: $(TARGET) all: $(TARGET)

View File

@@ -146,6 +146,19 @@ The tests assume a Linux environment and can be started with make:
make test make test
``` ```
## License
The littlefs is provided under the [BSD-3-Clause](https://spdx.org/licenses/BSD-3-Clause.html)
license. See [LICENSE.md](LICENSE.md) for more information. Contributions to
this project are accepted under the same license.
Individual files contain the following tag instead of the full license text.
SPDX-License-Identifier: BSD-3-Clause
This enables machine processing of license information based on the SPDX
License Identifiers that are here available: http://spdx.org/licenses/
## Related projects ## Related projects
[Mbed OS](https://github.com/ARMmbed/mbed-os/tree/master/features/filesystem/littlefs) - [Mbed OS](https://github.com/ARMmbed/mbed-os/tree/master/features/filesystem/littlefs) -
@@ -162,3 +175,18 @@ handy.
[littlefs-js](https://github.com/geky/littlefs-js) - A javascript wrapper for [littlefs-js](https://github.com/geky/littlefs-js) - A javascript wrapper for
littlefs. I'm not sure why you would want this, but it is handy for demos. littlefs. I'm not sure why you would want this, but it is handy for demos.
You can see it in action [here](http://littlefs.geky.net/demo.html). You can see it in action [here](http://littlefs.geky.net/demo.html).
[mklfs](https://github.com/whitecatboard/Lua-RTOS-ESP32/tree/master/components/mklfs/src) -
A command line tool built by the [Lua RTOS](https://github.com/whitecatboard/Lua-RTOS-ESP32)
guys for making littlefs images from a host PC. Supports Windows, Mac OS,
and Linux.
[SPIFFS](https://github.com/pellepl/spiffs) - Another excellent embedded
filesystem for NOR flash. As a more traditional logging filesystem with full
static wear-leveling, SPIFFS will likely outperform littlefs on small
memories such as the internal flash on microcontrollers.
[Dhara](https://github.com/dlbeer/dhara) - An interesting NAND flash
translation layer designed for small MCUs. It offers static wear-leveling and
power-resilience with only a fixed O(|address|) pointer structure stored on
each block and in RAM.

View File

@@ -47,19 +47,24 @@ int lfs_emubd_create(const struct lfs_config *cfg, const char *path) {
// Load stats to continue incrementing // Load stats to continue incrementing
snprintf(emu->child, LFS_NAME_MAX, "stats"); snprintf(emu->child, LFS_NAME_MAX, "stats");
FILE *f = fopen(emu->path, "r"); FILE *f = fopen(emu->path, "r");
if (!f) { if (!f && errno != ENOENT) {
return -errno; return -errno;
} }
size_t res = fread(&emu->stats, sizeof(emu->stats), 1, f); if (errno == ENOENT) {
if (res < 1) { memset(&emu->stats, 0x0, sizeof(emu->stats));
return -errno; } else {
} size_t res = fread(&emu->stats, sizeof(emu->stats), 1, f);
if (res < 1) {
return -errno;
}
err = fclose(f); err = fclose(f);
if (err) { if (err) {
return -errno; return -errno;
}
} }
return 0; return 0;

315
lfs.c
View File

@@ -888,7 +888,7 @@ nextname:
} }
// check that entry has not been moved // check that entry has not been moved
if (entry->d.type & 0x80) { if (!lfs->moving && entry->d.type & 0x80) {
int moved = lfs_moved(lfs, &entry->d.u); int moved = lfs_moved(lfs, &entry->d.u);
if (moved < 0 || moved) { if (moved < 0 || moved) {
return (moved < 0) ? moved : LFS_ERR_NOENT; return (moved < 0) ? moved : LFS_ERR_NOENT;
@@ -1373,7 +1373,10 @@ int lfs_file_opencfg(lfs_t *lfs, lfs_file_t *file,
} }
// zero to avoid information leak // zero to avoid information leak
lfs_cache_zero(lfs, &file->cache); lfs_cache_drop(lfs, &file->cache);
if ((file->flags & 3) != LFS_O_RDONLY) {
lfs_cache_zero(lfs, &file->cache);
}
// add to list of files // add to list of files
file->next = lfs->files; file->next = lfs->files;
@@ -1641,6 +1644,11 @@ lfs_ssize_t lfs_file_write(lfs_t *lfs, lfs_file_t *file,
file->pos = file->size; file->pos = file->size;
} }
if (file->pos + size > LFS_FILE_MAX) {
// larger than file limit?
return LFS_ERR_FBIG;
}
if (!(file->flags & LFS_F_WRITING) && file->pos > file->size) { if (!(file->flags & LFS_F_WRITING) && file->pos > file->size) {
// fill with zeros // fill with zeros
lfs_off_t pos = file->pos; lfs_off_t pos = file->pos;
@@ -1727,24 +1735,24 @@ lfs_soff_t lfs_file_seek(lfs_t *lfs, lfs_file_t *file,
return err; return err;
} }
// update pos // find new pos
lfs_soff_t npos = file->pos;
if (whence == LFS_SEEK_SET) { if (whence == LFS_SEEK_SET) {
file->pos = off; npos = off;
} else if (whence == LFS_SEEK_CUR) { } else if (whence == LFS_SEEK_CUR) {
if (off < 0 && (lfs_off_t)-off > file->pos) { npos = file->pos + off;
return LFS_ERR_INVAL;
}
file->pos = file->pos + off;
} else if (whence == LFS_SEEK_END) { } else if (whence == LFS_SEEK_END) {
if (off < 0 && (lfs_off_t)-off > file->size) { npos = file->size + off;
return LFS_ERR_INVAL;
}
file->pos = file->size + off;
} }
return file->pos; if (npos < 0 || npos > LFS_FILE_MAX) {
// file position out of range
return LFS_ERR_INVAL;
}
// update pos
file->pos = npos;
return npos;
} }
int lfs_file_truncate(lfs_t *lfs, lfs_file_t *file, lfs_off_t size) { int lfs_file_truncate(lfs_t *lfs, lfs_file_t *file, lfs_off_t size) {
@@ -1919,7 +1927,14 @@ int lfs_rename(lfs_t *lfs, const char *oldpath, const char *newpath) {
// find old entry // find old entry
lfs_dir_t oldcwd; lfs_dir_t oldcwd;
lfs_entry_t oldentry; lfs_entry_t oldentry;
int err = lfs_dir_find(lfs, &oldcwd, &oldentry, &oldpath); int err = lfs_dir_find(lfs, &oldcwd, &oldentry, &(const char *){oldpath});
if (err) {
return err;
}
// mark as moving
oldentry.d.type |= 0x80;
err = lfs_dir_update(lfs, &oldcwd, &oldentry, NULL);
if (err) { if (err) {
return err; return err;
} }
@@ -1932,11 +1947,9 @@ int lfs_rename(lfs_t *lfs, const char *oldpath, const char *newpath) {
return err; return err;
} }
bool prevexists = (err != LFS_ERR_NOENT);
bool samepair = (lfs_paircmp(oldcwd.pair, newcwd.pair) == 0);
// must have same type // must have same type
if (prevexists && preventry.d.type != oldentry.d.type) { bool prevexists = (err != LFS_ERR_NOENT);
if (prevexists && preventry.d.type != (0x7f & oldentry.d.type)) {
return LFS_ERR_ISDIR; return LFS_ERR_ISDIR;
} }
@@ -1953,18 +1966,6 @@ int lfs_rename(lfs_t *lfs, const char *oldpath, const char *newpath) {
} }
} }
// mark as moving
oldentry.d.type |= 0x80;
err = lfs_dir_update(lfs, &oldcwd, &oldentry, NULL);
if (err) {
return err;
}
// update pair if newcwd == oldcwd
if (samepair) {
newcwd = oldcwd;
}
// move to new location // move to new location
lfs_entry_t newentry = preventry; lfs_entry_t newentry = preventry;
newentry.d = oldentry.d; newentry.d = oldentry.d;
@@ -1983,10 +1984,13 @@ int lfs_rename(lfs_t *lfs, const char *oldpath, const char *newpath) {
} }
} }
// update pair if newcwd == oldcwd // fetch old pair again in case dir block changed
if (samepair) { lfs->moving = true;
oldcwd = newcwd; err = lfs_dir_find(lfs, &oldcwd, &oldentry, &oldpath);
if (err) {
return err;
} }
lfs->moving = false;
// remove old entry // remove old entry
err = lfs_dir_remove(lfs, &oldcwd, &oldentry); err = lfs_dir_remove(lfs, &oldcwd, &oldentry);
@@ -2055,8 +2059,8 @@ static int lfs_init(lfs_t *lfs, const struct lfs_config *cfg) {
} }
// zero to avoid information leaks // zero to avoid information leaks
lfs_cache_zero(lfs, &lfs->rcache);
lfs_cache_zero(lfs, &lfs->pcache); lfs_cache_zero(lfs, &lfs->pcache);
lfs_cache_drop(lfs, &lfs->rcache);
// setup lookahead, round down to nearest 32-bits // setup lookahead, round down to nearest 32-bits
LFS_ASSERT(lfs->cfg->lookahead % 32 == 0); LFS_ASSERT(lfs->cfg->lookahead % 32 == 0);
@@ -2084,6 +2088,7 @@ static int lfs_init(lfs_t *lfs, const struct lfs_config *cfg) {
lfs->files = NULL; lfs->files = NULL;
lfs->dirs = NULL; lfs->dirs = NULL;
lfs->deorphaned = false; lfs->deorphaned = false;
lfs->moving = false;
return 0; return 0;
@@ -2093,83 +2098,86 @@ cleanup:
} }
int lfs_format(lfs_t *lfs, const struct lfs_config *cfg) { int lfs_format(lfs_t *lfs, const struct lfs_config *cfg) {
int err = lfs_init(lfs, cfg); int err = 0;
if (err) { if (true) {
return err; err = lfs_init(lfs, cfg);
} if (err) {
return err;
}
// create free lookahead // create free lookahead
memset(lfs->free.buffer, 0, lfs->cfg->lookahead/8); memset(lfs->free.buffer, 0, lfs->cfg->lookahead/8);
lfs->free.off = 0; lfs->free.off = 0;
lfs->free.size = lfs_min(lfs->cfg->lookahead, lfs->cfg->block_count); lfs->free.size = lfs_min(lfs->cfg->lookahead, lfs->cfg->block_count);
lfs->free.i = 0; lfs->free.i = 0;
lfs_alloc_ack(lfs); lfs_alloc_ack(lfs);
// create superblock dir // create superblock dir
lfs_dir_t superdir; lfs_dir_t superdir;
err = lfs_dir_alloc(lfs, &superdir); err = lfs_dir_alloc(lfs, &superdir);
if (err) { if (err) {
goto cleanup;
}
// write root directory
lfs_dir_t root;
err = lfs_dir_alloc(lfs, &root);
if (err) {
goto cleanup;
}
err = lfs_dir_commit(lfs, &root, NULL, 0);
if (err) {
goto cleanup;
}
lfs->root[0] = root.pair[0];
lfs->root[1] = root.pair[1];
// write superblocks
lfs_superblock_t superblock = {
.off = sizeof(superdir.d),
.d.type = LFS_TYPE_SUPERBLOCK,
.d.elen = sizeof(superblock.d) - sizeof(superblock.d.magic) - 4,
.d.nlen = sizeof(superblock.d.magic),
.d.version = LFS_DISK_VERSION,
.d.magic = {"littlefs"},
.d.block_size = lfs->cfg->block_size,
.d.block_count = lfs->cfg->block_count,
.d.root = {lfs->root[0], lfs->root[1]},
};
superdir.d.tail[0] = root.pair[0];
superdir.d.tail[1] = root.pair[1];
superdir.d.size = sizeof(superdir.d) + sizeof(superblock.d) + 4;
// write both pairs to be safe
lfs_superblock_tole32(&superblock.d);
bool valid = false;
for (int i = 0; i < 2; i++) {
err = lfs_dir_commit(lfs, &superdir, (struct lfs_region[]){
{sizeof(superdir.d), sizeof(superblock.d),
&superblock.d, sizeof(superblock.d)}
}, 1);
if (err && err != LFS_ERR_CORRUPT) {
goto cleanup; goto cleanup;
} }
valid = valid || !err; // write root directory
} lfs_dir_t root;
err = lfs_dir_alloc(lfs, &root);
if (err) {
goto cleanup;
}
if (!valid) { err = lfs_dir_commit(lfs, &root, NULL, 0);
err = LFS_ERR_CORRUPT; if (err) {
goto cleanup; goto cleanup;
} }
// sanity check that fetch works lfs->root[0] = root.pair[0];
err = lfs_dir_fetch(lfs, &superdir, (const lfs_block_t[2]){0, 1}); lfs->root[1] = root.pair[1];
if (err) {
goto cleanup;
}
lfs_alloc_ack(lfs); // write superblocks
lfs_superblock_t superblock = {
.off = sizeof(superdir.d),
.d.type = LFS_TYPE_SUPERBLOCK,
.d.elen = sizeof(superblock.d) - sizeof(superblock.d.magic) - 4,
.d.nlen = sizeof(superblock.d.magic),
.d.version = LFS_DISK_VERSION,
.d.magic = {"littlefs"},
.d.block_size = lfs->cfg->block_size,
.d.block_count = lfs->cfg->block_count,
.d.root = {lfs->root[0], lfs->root[1]},
};
superdir.d.tail[0] = root.pair[0];
superdir.d.tail[1] = root.pair[1];
superdir.d.size = sizeof(superdir.d) + sizeof(superblock.d) + 4;
// write both pairs to be safe
lfs_superblock_tole32(&superblock.d);
bool valid = false;
for (int i = 0; i < 2; i++) {
err = lfs_dir_commit(lfs, &superdir, (struct lfs_region[]){
{sizeof(superdir.d), sizeof(superblock.d),
&superblock.d, sizeof(superblock.d)}
}, 1);
if (err && err != LFS_ERR_CORRUPT) {
goto cleanup;
}
valid = valid || !err;
}
if (!valid) {
err = LFS_ERR_CORRUPT;
goto cleanup;
}
// sanity check that fetch works
err = lfs_dir_fetch(lfs, &superdir, (const lfs_block_t[2]){0, 1});
if (err) {
goto cleanup;
}
lfs_alloc_ack(lfs);
}
cleanup: cleanup:
lfs_deinit(lfs); lfs_deinit(lfs);
@@ -2177,53 +2185,56 @@ cleanup:
} }
int lfs_mount(lfs_t *lfs, const struct lfs_config *cfg) { int lfs_mount(lfs_t *lfs, const struct lfs_config *cfg) {
int err = lfs_init(lfs, cfg); int err = 0;
if (err) { if (true) {
return err; err = lfs_init(lfs, cfg);
}
// setup free lookahead
lfs->free.off = 0;
lfs->free.size = 0;
lfs->free.i = 0;
lfs_alloc_ack(lfs);
// load superblock
lfs_dir_t dir;
lfs_superblock_t superblock;
err = lfs_dir_fetch(lfs, &dir, (const lfs_block_t[2]){0, 1});
if (err && err != LFS_ERR_CORRUPT) {
goto cleanup;
}
if (!err) {
err = lfs_bd_read(lfs, dir.pair[0], sizeof(dir.d),
&superblock.d, sizeof(superblock.d));
lfs_superblock_fromle32(&superblock.d);
if (err) { if (err) {
return err;
}
// setup free lookahead
lfs->free.off = 0;
lfs->free.size = 0;
lfs->free.i = 0;
lfs_alloc_ack(lfs);
// load superblock
lfs_dir_t dir;
lfs_superblock_t superblock;
err = lfs_dir_fetch(lfs, &dir, (const lfs_block_t[2]){0, 1});
if (err && err != LFS_ERR_CORRUPT) {
goto cleanup; goto cleanup;
} }
lfs->root[0] = superblock.d.root[0]; if (!err) {
lfs->root[1] = superblock.d.root[1]; err = lfs_bd_read(lfs, dir.pair[0], sizeof(dir.d),
} &superblock.d, sizeof(superblock.d));
lfs_superblock_fromle32(&superblock.d);
if (err) {
goto cleanup;
}
if (err || memcmp(superblock.d.magic, "littlefs", 8) != 0) { lfs->root[0] = superblock.d.root[0];
LFS_ERROR("Invalid superblock at %d %d", 0, 1); lfs->root[1] = superblock.d.root[1];
err = LFS_ERR_CORRUPT; }
goto cleanup;
}
uint16_t major_version = (0xffff & (superblock.d.version >> 16)); if (err || memcmp(superblock.d.magic, "littlefs", 8) != 0) {
uint16_t minor_version = (0xffff & (superblock.d.version >> 0)); LFS_ERROR("Invalid superblock at %d %d", 0, 1);
if ((major_version != LFS_DISK_VERSION_MAJOR || err = LFS_ERR_CORRUPT;
minor_version > LFS_DISK_VERSION_MINOR)) { goto cleanup;
LFS_ERROR("Invalid version %d.%d", major_version, minor_version); }
err = LFS_ERR_INVAL;
goto cleanup;
}
return 0; uint16_t major_version = (0xffff & (superblock.d.version >> 16));
uint16_t minor_version = (0xffff & (superblock.d.version >> 0));
if ((major_version != LFS_DISK_VERSION_MAJOR ||
minor_version > LFS_DISK_VERSION_MINOR)) {
LFS_ERROR("Invalid version %d.%d", major_version, minor_version);
err = LFS_ERR_INVAL;
goto cleanup;
}
return 0;
}
cleanup: cleanup:
@@ -2472,7 +2483,11 @@ int lfs_deorphan(lfs_t *lfs) {
lfs_dir_t cwd = {.d.tail[0] = 0, .d.tail[1] = 1}; lfs_dir_t cwd = {.d.tail[0] = 0, .d.tail[1] = 1};
// iterate over all directory directory entries // iterate over all directory directory entries
while (!lfs_pairisnull(cwd.d.tail)) { for (lfs_size_t i = 0; i < lfs->cfg->block_count; i++) {
if (lfs_pairisnull(cwd.d.tail)) {
return 0;
}
int err = lfs_dir_fetch(lfs, &cwd, cwd.d.tail); int err = lfs_dir_fetch(lfs, &cwd, cwd.d.tail);
if (err) { if (err) {
return err; return err;
@@ -2501,7 +2516,7 @@ int lfs_deorphan(lfs_t *lfs) {
return err; return err;
} }
break; return 0;
} }
if (!lfs_pairsync(entry.d.u.dir, pdir.d.tail)) { if (!lfs_pairsync(entry.d.u.dir, pdir.d.tail)) {
@@ -2517,7 +2532,7 @@ int lfs_deorphan(lfs_t *lfs) {
return err; return err;
} }
break; return 0;
} }
} }
@@ -2562,5 +2577,7 @@ int lfs_deorphan(lfs_t *lfs) {
memcpy(&pdir, &cwd, sizeof(pdir)); memcpy(&pdir, &cwd, sizeof(pdir));
} }
return 0; // If we reached here, we have more directory pairs than blocks in the
// filesystem... So something must be horribly wrong
return LFS_ERR_CORRUPT;
} }

9
lfs.h
View File

@@ -21,7 +21,7 @@ extern "C"
// Software library version // Software library version
// Major (top-nibble), incremented on backwards incompatible changes // Major (top-nibble), incremented on backwards incompatible changes
// Minor (bottom-nibble), incremented on feature additions // Minor (bottom-nibble), incremented on feature additions
#define LFS_VERSION 0x00010005 #define LFS_VERSION 0x00010007
#define LFS_VERSION_MAJOR (0xffff & (LFS_VERSION >> 16)) #define LFS_VERSION_MAJOR (0xffff & (LFS_VERSION >> 16))
#define LFS_VERSION_MINOR (0xffff & (LFS_VERSION >> 0)) #define LFS_VERSION_MINOR (0xffff & (LFS_VERSION >> 0))
@@ -49,6 +49,11 @@ typedef uint32_t lfs_block_t;
#define LFS_NAME_MAX 255 #define LFS_NAME_MAX 255
#endif #endif
// Max file size in bytes
#ifndef LFS_FILE_MAX
#define LFS_FILE_MAX 2147483647
#endif
// Possible error codes, these are negative to allow // Possible error codes, these are negative to allow
// valid positive return values // valid positive return values
enum lfs_error { enum lfs_error {
@@ -61,6 +66,7 @@ enum lfs_error {
LFS_ERR_ISDIR = -21, // Entry is a dir LFS_ERR_ISDIR = -21, // Entry is a dir
LFS_ERR_NOTEMPTY = -39, // Dir is not empty LFS_ERR_NOTEMPTY = -39, // Dir is not empty
LFS_ERR_BADF = -9, // Bad file number LFS_ERR_BADF = -9, // Bad file number
LFS_ERR_FBIG = -27, // File too large
LFS_ERR_INVAL = -22, // Invalid parameter LFS_ERR_INVAL = -22, // Invalid parameter
LFS_ERR_NOSPC = -28, // No space left on device LFS_ERR_NOSPC = -28, // No space left on device
LFS_ERR_NOMEM = -12, // No more memory available LFS_ERR_NOMEM = -12, // No more memory available
@@ -280,6 +286,7 @@ typedef struct lfs {
lfs_free_t free; lfs_free_t free;
bool deorphaned; bool deorphaned;
bool moving;
} lfs_t; } lfs_t;

View File

@@ -32,18 +32,18 @@ lfs_alloc_singleproc() {
tests/test.py << TEST tests/test.py << TEST
const char *names[] = {"bacon", "eggs", "pancakes"}; const char *names[] = {"bacon", "eggs", "pancakes"};
lfs_mount(&lfs, &cfg) => 0; lfs_mount(&lfs, &cfg) => 0;
for (int n = 0; n < sizeof(names)/sizeof(names[0]); n++) { for (unsigned n = 0; n < sizeof(names)/sizeof(names[0]); n++) {
sprintf((char*)buffer, "$1/%s", names[n]); sprintf((char*)buffer, "$1/%s", names[n]);
lfs_file_open(&lfs, &file[n], (char*)buffer, lfs_file_open(&lfs, &file[n], (char*)buffer,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_APPEND) => 0; LFS_O_WRONLY | LFS_O_CREAT | LFS_O_APPEND) => 0;
} }
for (int n = 0; n < sizeof(names)/sizeof(names[0]); n++) { for (unsigned n = 0; n < sizeof(names)/sizeof(names[0]); n++) {
size = strlen(names[n]); size = strlen(names[n]);
for (int i = 0; i < $SIZE; i++) { for (int i = 0; i < $SIZE; i++) {
lfs_file_write(&lfs, &file[n], names[n], size) => size; lfs_file_write(&lfs, &file[n], names[n], size) => size;
} }
} }
for (int n = 0; n < sizeof(names)/sizeof(names[0]); n++) { for (unsigned n = 0; n < sizeof(names)/sizeof(names[0]); n++) {
lfs_file_close(&lfs, &file[n]) => 0; lfs_file_close(&lfs, &file[n]) => 0;
} }
lfs_unmount(&lfs) => 0; lfs_unmount(&lfs) => 0;

View File

@@ -326,13 +326,42 @@ tests/test.py << TEST
lfs_unmount(&lfs) => 0; lfs_unmount(&lfs) => 0;
TEST TEST
echo "--- Multi-block rename ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
for (int i = 0; i < $LARGESIZE; i++) {
sprintf((char*)buffer, "cactus/test%d", i);
sprintf((char*)wbuffer, "cactus/tedd%d", i);
lfs_rename(&lfs, (char*)buffer, (char*)wbuffer) => 0;
}
lfs_unmount(&lfs) => 0;
TEST
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_dir_open(&lfs, &dir[0], "cactus") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
info.type => LFS_TYPE_DIR;
for (int i = 0; i < $LARGESIZE; i++) {
sprintf((char*)buffer, "tedd%d", i);
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, (char*)buffer) => 0;
info.type => LFS_TYPE_DIR;
}
lfs_dir_read(&lfs, &dir[0], &info) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Multi-block remove ---" echo "--- Multi-block remove ---"
tests/test.py << TEST tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0; lfs_mount(&lfs, &cfg) => 0;
lfs_remove(&lfs, "cactus") => LFS_ERR_NOTEMPTY; lfs_remove(&lfs, "cactus") => LFS_ERR_NOTEMPTY;
for (int i = 0; i < $LARGESIZE; i++) { for (int i = 0; i < $LARGESIZE; i++) {
sprintf((char*)buffer, "cactus/test%d", i); sprintf((char*)buffer, "cactus/tedd%d", i);
lfs_remove(&lfs, (char*)buffer) => 0; lfs_remove(&lfs, (char*)buffer) => 0;
} }
@@ -391,13 +420,43 @@ tests/test.py << TEST
lfs_unmount(&lfs) => 0; lfs_unmount(&lfs) => 0;
TEST TEST
echo "--- Multi-block rename with files ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
for (int i = 0; i < $LARGESIZE; i++) {
sprintf((char*)buffer, "prickly-pear/test%d", i);
sprintf((char*)wbuffer, "prickly-pear/tedd%d", i);
lfs_rename(&lfs, (char*)buffer, (char*)wbuffer) => 0;
}
lfs_unmount(&lfs) => 0;
TEST
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_dir_open(&lfs, &dir[0], "prickly-pear") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
info.type => LFS_TYPE_DIR;
for (int i = 0; i < $LARGESIZE; i++) {
sprintf((char*)buffer, "tedd%d", i);
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, (char*)buffer) => 0;
info.type => LFS_TYPE_REG;
info.size => 6;
}
lfs_dir_read(&lfs, &dir[0], &info) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Multi-block remove with files ---" echo "--- Multi-block remove with files ---"
tests/test.py << TEST tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0; lfs_mount(&lfs, &cfg) => 0;
lfs_remove(&lfs, "prickly-pear") => LFS_ERR_NOTEMPTY; lfs_remove(&lfs, "prickly-pear") => LFS_ERR_NOTEMPTY;
for (int i = 0; i < $LARGESIZE; i++) { for (int i = 0; i < $LARGESIZE; i++) {
sprintf((char*)buffer, "prickly-pear/test%d", i); sprintf((char*)buffer, "prickly-pear/tedd%d", i);
lfs_remove(&lfs, (char*)buffer) => 0; lfs_remove(&lfs, (char*)buffer) => 0;
} }

View File

@@ -301,7 +301,7 @@ tests/test.py << TEST
size = strlen("hedgehoghog"); size = strlen("hedgehoghog");
const lfs_soff_t offsets[] = {512, 1020, 513, 1021, 511, 1019}; const lfs_soff_t offsets[] = {512, 1020, 513, 1021, 511, 1019};
for (int i = 0; i < sizeof(offsets) / sizeof(offsets[0]); i++) { for (unsigned i = 0; i < sizeof(offsets) / sizeof(offsets[0]); i++) {
lfs_soff_t off = offsets[i]; lfs_soff_t off = offsets[i];
memcpy(buffer, "hedgehoghog", size); memcpy(buffer, "hedgehoghog", size);
lfs_file_seek(&lfs, &file[0], off, LFS_SEEK_SET) => off; lfs_file_seek(&lfs, &file[0], off, LFS_SEEK_SET) => off;

View File

@@ -23,14 +23,14 @@ tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0; lfs_mount(&lfs, &cfg) => 0;
for (int i = 0; i < sizeof(startsizes)/sizeof(startsizes[0]); i++) { for (unsigned i = 0; i < sizeof(startsizes)/sizeof(startsizes[0]); i++) {
sprintf((char*)buffer, "hairyhead%d", i); sprintf((char*)buffer, "hairyhead%d", i);
lfs_file_open(&lfs, &file[0], (const char*)buffer, lfs_file_open(&lfs, &file[0], (const char*)buffer,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0; LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
strcpy((char*)buffer, "hair"); strcpy((char*)buffer, "hair");
size = strlen((char*)buffer); size = strlen((char*)buffer);
for (int j = 0; j < startsizes[i]; j += size) { for (lfs_off_t j = 0; j < startsizes[i]; j += size) {
lfs_file_write(&lfs, &file[0], buffer, size) => size; lfs_file_write(&lfs, &file[0], buffer, size) => size;
} }
lfs_file_size(&lfs, &file[0]) => startsizes[i]; lfs_file_size(&lfs, &file[0]) => startsizes[i];
@@ -55,13 +55,13 @@ tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0; lfs_mount(&lfs, &cfg) => 0;
for (int i = 0; i < sizeof(startsizes)/sizeof(startsizes[0]); i++) { for (unsigned i = 0; i < sizeof(startsizes)/sizeof(startsizes[0]); i++) {
sprintf((char*)buffer, "hairyhead%d", i); sprintf((char*)buffer, "hairyhead%d", i);
lfs_file_open(&lfs, &file[0], (const char*)buffer, LFS_O_RDWR) => 0; lfs_file_open(&lfs, &file[0], (const char*)buffer, LFS_O_RDWR) => 0;
lfs_file_size(&lfs, &file[0]) => hotsizes[i]; lfs_file_size(&lfs, &file[0]) => hotsizes[i];
size = strlen("hair"); size = strlen("hair");
int j = 0; lfs_off_t j = 0;
for (; j < startsizes[i] && j < hotsizes[i]; j += size) { for (; j < startsizes[i] && j < hotsizes[i]; j += size) {
lfs_file_read(&lfs, &file[0], buffer, size) => size; lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "hair", size) => 0; memcmp(buffer, "hair", size) => 0;
@@ -87,13 +87,13 @@ tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0; lfs_mount(&lfs, &cfg) => 0;
for (int i = 0; i < sizeof(startsizes)/sizeof(startsizes[0]); i++) { for (unsigned i = 0; i < sizeof(startsizes)/sizeof(startsizes[0]); i++) {
sprintf((char*)buffer, "hairyhead%d", i); sprintf((char*)buffer, "hairyhead%d", i);
lfs_file_open(&lfs, &file[0], (const char*)buffer, LFS_O_RDONLY) => 0; lfs_file_open(&lfs, &file[0], (const char*)buffer, LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file[0]) => coldsizes[i]; lfs_file_size(&lfs, &file[0]) => coldsizes[i];
size = strlen("hair"); size = strlen("hair");
int j = 0; lfs_off_t j = 0;
for (; j < startsizes[i] && j < hotsizes[i] && j < coldsizes[i]; for (; j < startsizes[i] && j < hotsizes[i] && j < coldsizes[i];
j += size) { j += size) {
lfs_file_read(&lfs, &file[0], buffer, size) => size; lfs_file_read(&lfs, &file[0], buffer, size) => size;