Compare commits


16 Commits

Author SHA1 Message Date
Christopher Haster
bad02b6003 Changed release script to generate drafts 2018-10-20 12:32:41 -05:00
Christopher Haster
0d7ae0fef4 Added a handful of links to related projects
Interesting open-source projects that I've run into around embedded
storage. They may be interesting to others in the embedded space.

Added mklfs, SPIFFS, and Dhara.

Also, thanks to jolivepetrus for posting the mklfs tool he put
together.
2018-10-14 11:45:38 -05:00
Christopher Haster
0bb1f7af17 Modified release script to create notes only on minor releases
Before, release notes with a list of changes were created on every
patch release. Unfortunately, this creates a lot of noise on GitHub,
with a notification for every patch release, which may be as often as
every time a PR is merged.

Rather than creating all of this noise for relatively uninteresting
changes, the script will now stick to simple tags, and create the
release notes only on minor releases.

I think this is what several of you were originally suggesting;
sorry about the journey, at least I learned a lot.
2018-09-29 12:31:27 -05:00
Christopher Haster
447d89cbd8 Merge pull request #109 from OTAkeys/pr/fix-sign-compare
Fix -Wsign-compare error
2018-09-29 12:29:54 -05:00
Vincent Dupont
28d2d96a83 Fix -Wsign-compare error 2018-09-29 11:33:19 -05:00
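For context, the warning comes from comparing a signed loop index against sizeof(), which yields an unsigned size_t. A minimal standalone example of the problem and of the fix used in the updated tests (the names array mirrors the test scripts in the diff below):

    #include <stdio.h>

    int main(void) {
        const char *names[] = {"bacon", "eggs", "pancakes"};

        // -Wsign-compare ("comparison of integer expressions of different
        // signedness") fires on the old form:
        //   for (int n = 0; n < sizeof(names)/sizeof(names[0]); n++)

        // fixed form, matching the change in the tests:
        for (unsigned n = 0; n < sizeof(names)/sizeof(names[0]); n++) {
            printf("%s\n", names[n]);
        }
        return 0;
    }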
Christopher Haster
cb62bf2188 Fixed release script issue with fetching recent tags
Fetching all tags was triggering the pagination system inside the GitHub
API. This prevented version tags from being found.

Modified to use the version tag prefix in the ref lookup; however, this
may still cause an issue if there are enough patch releases to trigger
pagination.

The simple-ish solution is to grab the Link header to jump to the last page,
since paginated results appear to be in sorted order.
2018-09-27 14:46:12 -05:00
Christopher Haster
646b1b5a6c Added -Wjump-misses-init and fixed uninitialized warnings 2018-09-26 18:58:54 -05:00
Christopher Haster
1b7a15599e Merge pull request #106 from conkerkh/master
If stats file doesn't exist lfs_emubd_create will fail.
2018-09-26 18:58:34 -05:00
Christopher Haster
e5a6938faf Fixed possible infinite loop in deorphan step
Normally, the linked list of directory pairs should terminate at a null
pointer. However, if the filesystem is corrupted, it is possible that
this linked list forms a cycle.

This should never happen with littlefs's power resilience, but if it does
we should recover appropriately.

Modified lfs_deorphan to notice if we have a cycle and return
LFS_ERR_CORRUPT in that situation.

Found by kneko715
2018-09-26 18:58:11 -05:00
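For context, a minimal standalone sketch of the idea behind the fix: bound the list walk by the number of blocks, since a valid tail list can never contain more directory pairs than the filesystem has blocks. The node type and function name here are illustrative stand-ins, not the lfs.c names; the real change is visible in the lfs_deorphan hunk below.

    #include <stddef.h>
    #include <stdio.h>

    // Hypothetical stand-in for a littlefs directory pair in the tail list.
    struct node { struct node *next; };

    // Walk a list that should end at NULL, but cap the number of steps at
    // max_nodes (for littlefs, the block count). A list longer than the
    // number of blocks that could hold it must contain a cycle.
    static int walk_bounded(const struct node *head, size_t max_nodes) {
        for (size_t i = 0; i < max_nodes; i++) {
            if (!head) {
                return 0;      // terminated normally
            }
            head = head->next; // visit the next pair
        }
        return -1;             // more nodes than blocks: corrupted cycle
    }

    int main(void) {
        struct node a, b;
        a.next = &b;
        b.next = &a;           // deliberately corrupt: two-node cycle
        printf("%d\n", walk_bounded(&a, 4)); // prints -1
        return 0;
    }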
Chris
6ad544f3f3 If stats file doesn't exist lfs_emubd_create will fail.
This will create a default stats file if it doesn't exist.
2018-09-26 18:24:58 -05:00
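For context, a sketch of the pattern this commit introduces in lfs_emubd_create: a missing stats file is not an error, it just means "start from zeroed stats". The stats_t type and load_stats name are illustrative, not the emubd types; the real change is in the lfs_emubd.c hunk below.

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    typedef struct { unsigned read_count, prog_count, erase_count; } stats_t;

    static int load_stats(const char *path, stats_t *stats) {
        FILE *f = fopen(path, "r");
        if (!f) {
            if (errno != ENOENT) {
                return -errno;                 // a real I/O error
            }
            memset(stats, 0, sizeof(*stats));  // no file yet: default stats
            return 0;
        }
        if (fread(stats, sizeof(*stats), 1, f) < 1) {
            fclose(f);
            return -errno;
        }
        return fclose(f) ? -errno : 0;
    }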
Christopher Haster
3419284689 Fixed issue with corruption due to different cache sizes
The lfs_cache_zero function that was recently added assumed a single cache
size, which is incorrect. This would cause a buffer overflow if
read_size != prog_size.

Since lfs_cache_zero is only used for scrubbing prog caches, the fix
here is to use lfs_cache_drop instead on read caches. Info in read
caches should never make its way to disk.

Found by nstcl
2018-09-04 13:57:22 -05:00
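For context, a minimal sketch of the distinction the fix relies on: only prog caches are scrubbed by zeroing, because their contents may be written to disk, while read caches are merely invalidated, since their contents never reach disk and zeroing them with the prog size would overflow when read_size != prog_size. The struct and field names here are illustrative, not the real lfs_cache_t layout.

    #include <stdint.h>
    #include <string.h>

    struct cache {
        uint8_t  *buffer;   // cache buffer (read_size or prog_size bytes)
        uint32_t  block;    // block the cache currently holds
    };

    // Scrub a prog cache: its contents may end up on disk, so erase them.
    static void cache_zero(struct cache *pcache, size_t prog_size) {
        memset(pcache->buffer, 0, prog_size);
        pcache->block = 0xffffffff;   // mark as unused
    }

    // Drop a read cache: just invalidate it; the buffer is never touched,
    // so its size doesn't matter.
    static void cache_drop(struct cache *rcache) {
        rcache->block = 0xffffffff;   // mark as unused
    }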
Christopher Haster
510cd13df9 Bumped minor version to v1.6 2018-07-27 15:59:18 -05:00
Christopher Haster
f5e0539951 Fixed issue with release script non-standard version tags 2018-07-27 15:20:00 -05:00
Christopher Haster
066448055c Moved SPDX and license info into README
This makes it a bit easier to find the description of the SPDX tags,
and fixes the issue where GitHub doesn't detect the license text.
2018-07-27 14:02:38 -05:00
Christopher Haster
d66723ccfd Merge pull request #81 from ARMmbed/simple-versioning
Simplified release process based on feedback
2018-07-27 14:02:23 -05:00
Christopher Haster
0234c77102 Simplified release process based on feedback
Previously, littlefs had mutable versions. That is, anytime a new commit
landed on master, the bot would update the most recent version to
contain the patch. The idea was that this would make sure users always
had the most recent bug fixes. Immutable snapshots could be accessed
through the git hashes.

However, at this point multiple developers have pointed out that this is
confusing, with mutable versions being non-standard and surprising.

This new release process adopts SemVer in its entirety, with
incrementing patch numbers and immutable versions.

When a new commit lands on master:
1. The major/minor version is taken from lfs.h
2. The most recent patch version is looked up on GitHub and incremented
3. A changelog is built out of the commits to the previous version
4. A new release is created on GitHub

Additionally, any commits that land while CI is still running are
coalesced together, which means multiple PRs can land in a single
release.
2018-07-25 14:21:58 -05:00
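For reference, the major/minor version the script reads from lfs.h is a single packed integer, with the major version in the top 16 bits and the minor version in the bottom 16. A small C sketch of the decode that the script performs with grep and shell arithmetic (the value matches the lfs.h hunk below):

    #include <stdio.h>

    #define LFS_VERSION 0x00010006  // as defined in lfs.h after this release

    int main(void) {
        unsigned major = 0xffff & (LFS_VERSION >> 16);
        unsigned minor = 0xffff & (LFS_VERSION >> 0);
        printf("v%u.%u\n", major, minor);  // prints v1.6; the patch number
                                           // is looked up from GitHub tags
        return 0;
    }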
9 changed files with 216 additions and 175 deletions


@@ -134,53 +134,58 @@ jobs:
- STAGE=deploy
- NAME=deploy
script:
# Update tag for version defined in lfs.h
# Find version defined in lfs.h
- LFS_VERSION=$(grep -ox '#define LFS_VERSION .*' lfs.h | cut -d ' ' -f3)
- LFS_VERSION_MAJOR=$((0xffff & ($LFS_VERSION >> 16)))
- LFS_VERSION_MINOR=$((0xffff & ($LFS_VERSION >> 0)))
- LFS_VERSION="v$LFS_VERSION_MAJOR.$LFS_VERSION_MINOR"
- echo "littlefs version $LFS_VERSION"
# Grab latests patch from repo tags, default to 0, needs finagling to get past github's pagination api
- PREV_URL=https://api.github.com/repos/$TRAVIS_REPO_SLUG/git/refs/tags/v$LFS_VERSION_MAJOR.$LFS_VERSION_MINOR.
- PREV_URL=$(curl -u "$GEKY_BOT_RELEASES" "$PREV_URL" -I
| sed -n '/^Link/{s/.*<\(.*\)>; rel="last"/\1/;p;q0};$q1'
|| echo $PREV_URL)
- LFS_VERSION_PATCH=$(curl -u "$GEKY_BOT_RELEASES" "$PREV_URL"
| jq 'map(.ref | match("\\bv.*\\..*\\.(.*)$";"g")
.captures[].string | tonumber) | max + 1'
|| echo 0)
# We have our new version
- LFS_VERSION="v$LFS_VERSION_MAJOR.$LFS_VERSION_MINOR.$LFS_VERSION_PATCH"
- echo "VERSION $LFS_VERSION"
- |
curl -u $GEKY_BOT_RELEASES -X POST \
https://api.github.com/repos/$TRAVIS_REPO_SLUG/git/refs \
-d "{
\"ref\": \"refs/tags/$LFS_VERSION\",
\"sha\": \"$TRAVIS_COMMIT\"
}"
- |
curl -f -u $GEKY_BOT_RELEASES -X PATCH \
https://api.github.com/repos/$TRAVIS_REPO_SLUG/git/refs/tags/$LFS_VERSION \
-d "{
\"sha\": \"$TRAVIS_COMMIT\"
}"
# Create release notes from commits
- LFS_PREV_VERSION="v$LFS_VERSION_MAJOR.$(($LFS_VERSION_MINOR-1))"
- |
if [ $(git tag -l "$LFS_PREV_VERSION") ]
# Check that we're the most recent commit
CURRENT_COMMIT=$(curl -f -u "$GEKY_BOT_RELEASES" \
https://api.github.com/repos/$TRAVIS_REPO_SLUG/commits/master \
| jq -re '.sha')
if [ "$TRAVIS_COMMIT" == "$CURRENT_COMMIT" ]
then
curl -u $GEKY_BOT_RELEASES -X POST \
https://api.github.com/repos/$TRAVIS_REPO_SLUG/releases \
# Create a simple tag
curl -f -u "$GEKY_BOT_RELEASES" -X POST \
https://api.github.com/repos/$TRAVIS_REPO_SLUG/git/refs \
-d "{
\"tag_name\": \"$LFS_VERSION\",
\"name\": \"$LFS_VERSION\"
\"ref\": \"refs/tags/$LFS_VERSION\",
\"sha\": \"$TRAVIS_COMMIT\"
}"
RELEASE=$(
curl -f -u $GEKY_BOT_RELEASES \
https://api.github.com/repos/$TRAVIS_REPO_SLUG/releases/tags/$LFS_VERSION
)
CHANGES=$(
git log --oneline $LFS_PREV_VERSION.. --grep='^Merge' --invert-grep
)
curl -f -u $GEKY_BOT_RELEASES -X PATCH \
https://api.github.com/repos/$TRAVIS_REPO_SLUG/releases/$(
jq -r '.id' <<< "$RELEASE"
) \
-d "$(
jq -s '{
"body": ((.[0] // "" | sub("(?<=\n)#+ Changes.*"; ""; "mi"))
+ "### Changes\n\n" + .[1])
}' <(jq '.body' <<< "$RELEASE") <(jq -sR '.' <<< "$CHANGES")
)"
# Minor release?
if [[ "$LFS_VERSION" == *.0 ]]
then
# Build release notes
PREV=$(git tag --sort=-v:refname -l "v*.0" | head -1)
if [ ! -z "$PREV" ]
then
echo "PREV $PREV"
CHANGES=$'### Changes\n\n'$( \
git log --oneline $PREV.. --grep='^Merge' --invert-grep)
printf "CHANGES\n%s\n\n" "$CHANGES"
fi
# Create the release
curl -f -u "$GEKY_BOT_RELEASES" -X POST \
https://api.github.com/repos/$TRAVIS_REPO_SLUG/releases \
-d "{
\"tag_name\": \"$LFS_VERSION\",
\"name\": \"${LFS_VERSION%.0}\",
\"draft\": true,
\"body\": $(jq -sR '.' <<< "$CHANGES")
}"
fi
fi
# Manage statuses


@@ -25,7 +25,8 @@ ifdef WORD
override CFLAGS += -m$(WORD)
endif
override CFLAGS += -I.
override CFLAGS += -std=c99 -Wall -pedantic -Wshadow -Wunused-parameter
override CFLAGS += -std=c99 -Wall -pedantic
override CFLAGS += -Wshadow -Wunused-parameter -Wjump-misses-init -Wsign-compare
all: $(TARGET)


@@ -175,3 +175,18 @@ handy.
[littlefs-js](https://github.com/geky/littlefs-js) - A javascript wrapper for
littlefs. I'm not sure why you would want this, but it is handy for demos.
You can see it in action [here](http://littlefs.geky.net/demo.html).
[mklfs](https://github.com/whitecatboard/Lua-RTOS-ESP32/tree/master/components/mklfs/src) -
A command line tool built by the [Lua RTOS](https://github.com/whitecatboard/Lua-RTOS-ESP32)
guys for making littlefs images from a host PC. Supports Windows, Mac OS,
and Linux.
[SPIFFS](https://github.com/pellepl/spiffs) - Another excellent embedded
filesystem for NOR flash. As a more traditional logging filesystem with full
static wear-leveling, SPIFFS will likely outperform littlefs on small
memories such as the internal flash on microcontrollers.
[Dhara](https://github.com/dlbeer/dhara) - An interesting NAND flash
translation layer designed for small MCUs. It offers static wear-leveling and
power-resilience with only a fixed O(|address|) pointer structure stored on
each block and in RAM.


@@ -47,19 +47,24 @@ int lfs_emubd_create(const struct lfs_config *cfg, const char *path) {
// Load stats to continue incrementing
snprintf(emu->child, LFS_NAME_MAX, "stats");
FILE *f = fopen(emu->path, "r");
if (!f) {
if (!f && errno != ENOENT) {
return -errno;
}
size_t res = fread(&emu->stats, sizeof(emu->stats), 1, f);
if (res < 1) {
return -errno;
}
if (errno == ENOENT) {
memset(&emu->stats, 0x0, sizeof(emu->stats));
} else {
size_t res = fread(&emu->stats, sizeof(emu->stats), 1, f);
if (res < 1) {
return -errno;
}
err = fclose(f);
if (err) {
return -errno;
err = fclose(f);
if (err) {
return -errno;
}
}
return 0;

lfs.c

@@ -1373,7 +1373,10 @@ int lfs_file_opencfg(lfs_t *lfs, lfs_file_t *file,
}
// zero to avoid information leak
lfs_cache_zero(lfs, &file->cache);
lfs_cache_drop(lfs, &file->cache);
if ((file->flags & 3) != LFS_O_RDONLY) {
lfs_cache_zero(lfs, &file->cache);
}
// add to list of files
file->next = lfs->files;
@@ -2055,8 +2058,8 @@ static int lfs_init(lfs_t *lfs, const struct lfs_config *cfg) {
}
// zero to avoid information leaks
lfs_cache_zero(lfs, &lfs->rcache);
lfs_cache_zero(lfs, &lfs->pcache);
lfs_cache_drop(lfs, &lfs->rcache);
// setup lookahead, round down to nearest 32-bits
LFS_ASSERT(lfs->cfg->lookahead % 32 == 0);
@@ -2093,83 +2096,86 @@ cleanup:
}
int lfs_format(lfs_t *lfs, const struct lfs_config *cfg) {
int err = lfs_init(lfs, cfg);
if (err) {
return err;
}
int err = 0;
if (true) {
err = lfs_init(lfs, cfg);
if (err) {
return err;
}
// create free lookahead
memset(lfs->free.buffer, 0, lfs->cfg->lookahead/8);
lfs->free.off = 0;
lfs->free.size = lfs_min(lfs->cfg->lookahead, lfs->cfg->block_count);
lfs->free.i = 0;
lfs_alloc_ack(lfs);
// create free lookahead
memset(lfs->free.buffer, 0, lfs->cfg->lookahead/8);
lfs->free.off = 0;
lfs->free.size = lfs_min(lfs->cfg->lookahead, lfs->cfg->block_count);
lfs->free.i = 0;
lfs_alloc_ack(lfs);
// create superblock dir
lfs_dir_t superdir;
err = lfs_dir_alloc(lfs, &superdir);
if (err) {
goto cleanup;
}
// write root directory
lfs_dir_t root;
err = lfs_dir_alloc(lfs, &root);
if (err) {
goto cleanup;
}
err = lfs_dir_commit(lfs, &root, NULL, 0);
if (err) {
goto cleanup;
}
lfs->root[0] = root.pair[0];
lfs->root[1] = root.pair[1];
// write superblocks
lfs_superblock_t superblock = {
.off = sizeof(superdir.d),
.d.type = LFS_TYPE_SUPERBLOCK,
.d.elen = sizeof(superblock.d) - sizeof(superblock.d.magic) - 4,
.d.nlen = sizeof(superblock.d.magic),
.d.version = LFS_DISK_VERSION,
.d.magic = {"littlefs"},
.d.block_size = lfs->cfg->block_size,
.d.block_count = lfs->cfg->block_count,
.d.root = {lfs->root[0], lfs->root[1]},
};
superdir.d.tail[0] = root.pair[0];
superdir.d.tail[1] = root.pair[1];
superdir.d.size = sizeof(superdir.d) + sizeof(superblock.d) + 4;
// write both pairs to be safe
lfs_superblock_tole32(&superblock.d);
bool valid = false;
for (int i = 0; i < 2; i++) {
err = lfs_dir_commit(lfs, &superdir, (struct lfs_region[]){
{sizeof(superdir.d), sizeof(superblock.d),
&superblock.d, sizeof(superblock.d)}
}, 1);
if (err && err != LFS_ERR_CORRUPT) {
// create superblock dir
lfs_dir_t superdir;
err = lfs_dir_alloc(lfs, &superdir);
if (err) {
goto cleanup;
}
valid = valid || !err;
}
// write root directory
lfs_dir_t root;
err = lfs_dir_alloc(lfs, &root);
if (err) {
goto cleanup;
}
if (!valid) {
err = LFS_ERR_CORRUPT;
goto cleanup;
}
err = lfs_dir_commit(lfs, &root, NULL, 0);
if (err) {
goto cleanup;
}
// sanity check that fetch works
err = lfs_dir_fetch(lfs, &superdir, (const lfs_block_t[2]){0, 1});
if (err) {
goto cleanup;
}
lfs->root[0] = root.pair[0];
lfs->root[1] = root.pair[1];
lfs_alloc_ack(lfs);
// write superblocks
lfs_superblock_t superblock = {
.off = sizeof(superdir.d),
.d.type = LFS_TYPE_SUPERBLOCK,
.d.elen = sizeof(superblock.d) - sizeof(superblock.d.magic) - 4,
.d.nlen = sizeof(superblock.d.magic),
.d.version = LFS_DISK_VERSION,
.d.magic = {"littlefs"},
.d.block_size = lfs->cfg->block_size,
.d.block_count = lfs->cfg->block_count,
.d.root = {lfs->root[0], lfs->root[1]},
};
superdir.d.tail[0] = root.pair[0];
superdir.d.tail[1] = root.pair[1];
superdir.d.size = sizeof(superdir.d) + sizeof(superblock.d) + 4;
// write both pairs to be safe
lfs_superblock_tole32(&superblock.d);
bool valid = false;
for (int i = 0; i < 2; i++) {
err = lfs_dir_commit(lfs, &superdir, (struct lfs_region[]){
{sizeof(superdir.d), sizeof(superblock.d),
&superblock.d, sizeof(superblock.d)}
}, 1);
if (err && err != LFS_ERR_CORRUPT) {
goto cleanup;
}
valid = valid || !err;
}
if (!valid) {
err = LFS_ERR_CORRUPT;
goto cleanup;
}
// sanity check that fetch works
err = lfs_dir_fetch(lfs, &superdir, (const lfs_block_t[2]){0, 1});
if (err) {
goto cleanup;
}
lfs_alloc_ack(lfs);
}
cleanup:
lfs_deinit(lfs);
@@ -2177,53 +2183,56 @@ cleanup:
}
int lfs_mount(lfs_t *lfs, const struct lfs_config *cfg) {
int err = lfs_init(lfs, cfg);
if (err) {
return err;
}
// setup free lookahead
lfs->free.off = 0;
lfs->free.size = 0;
lfs->free.i = 0;
lfs_alloc_ack(lfs);
// load superblock
lfs_dir_t dir;
lfs_superblock_t superblock;
err = lfs_dir_fetch(lfs, &dir, (const lfs_block_t[2]){0, 1});
if (err && err != LFS_ERR_CORRUPT) {
goto cleanup;
}
if (!err) {
err = lfs_bd_read(lfs, dir.pair[0], sizeof(dir.d),
&superblock.d, sizeof(superblock.d));
lfs_superblock_fromle32(&superblock.d);
int err = 0;
if (true) {
err = lfs_init(lfs, cfg);
if (err) {
return err;
}
// setup free lookahead
lfs->free.off = 0;
lfs->free.size = 0;
lfs->free.i = 0;
lfs_alloc_ack(lfs);
// load superblock
lfs_dir_t dir;
lfs_superblock_t superblock;
err = lfs_dir_fetch(lfs, &dir, (const lfs_block_t[2]){0, 1});
if (err && err != LFS_ERR_CORRUPT) {
goto cleanup;
}
lfs->root[0] = superblock.d.root[0];
lfs->root[1] = superblock.d.root[1];
}
if (!err) {
err = lfs_bd_read(lfs, dir.pair[0], sizeof(dir.d),
&superblock.d, sizeof(superblock.d));
lfs_superblock_fromle32(&superblock.d);
if (err) {
goto cleanup;
}
if (err || memcmp(superblock.d.magic, "littlefs", 8) != 0) {
LFS_ERROR("Invalid superblock at %d %d", 0, 1);
err = LFS_ERR_CORRUPT;
goto cleanup;
}
lfs->root[0] = superblock.d.root[0];
lfs->root[1] = superblock.d.root[1];
}
uint16_t major_version = (0xffff & (superblock.d.version >> 16));
uint16_t minor_version = (0xffff & (superblock.d.version >> 0));
if ((major_version != LFS_DISK_VERSION_MAJOR ||
minor_version > LFS_DISK_VERSION_MINOR)) {
LFS_ERROR("Invalid version %d.%d", major_version, minor_version);
err = LFS_ERR_INVAL;
goto cleanup;
}
if (err || memcmp(superblock.d.magic, "littlefs", 8) != 0) {
LFS_ERROR("Invalid superblock at %d %d", 0, 1);
err = LFS_ERR_CORRUPT;
goto cleanup;
}
return 0;
uint16_t major_version = (0xffff & (superblock.d.version >> 16));
uint16_t minor_version = (0xffff & (superblock.d.version >> 0));
if ((major_version != LFS_DISK_VERSION_MAJOR ||
minor_version > LFS_DISK_VERSION_MINOR)) {
LFS_ERROR("Invalid version %d.%d", major_version, minor_version);
err = LFS_ERR_INVAL;
goto cleanup;
}
return 0;
}
cleanup:
@@ -2472,7 +2481,11 @@ int lfs_deorphan(lfs_t *lfs) {
lfs_dir_t cwd = {.d.tail[0] = 0, .d.tail[1] = 1};
// iterate over all directory directory entries
while (!lfs_pairisnull(cwd.d.tail)) {
for (lfs_size_t i = 0; i < lfs->cfg->block_count; i++) {
if (lfs_pairisnull(cwd.d.tail)) {
return 0;
}
int err = lfs_dir_fetch(lfs, &cwd, cwd.d.tail);
if (err) {
return err;
@@ -2501,7 +2514,7 @@ int lfs_deorphan(lfs_t *lfs) {
return err;
}
break;
return 0;
}
if (!lfs_pairsync(entry.d.u.dir, pdir.d.tail)) {
@@ -2517,7 +2530,7 @@ int lfs_deorphan(lfs_t *lfs) {
return err;
}
break;
return 0;
}
}
@@ -2562,5 +2575,7 @@ int lfs_deorphan(lfs_t *lfs) {
memcpy(&pdir, &cwd, sizeof(pdir));
}
return 0;
// If we reached here, we have more directory pairs than blocks in the
// filesystem... So something must be horribly wrong
return LFS_ERR_CORRUPT;
}

lfs.h

@@ -21,7 +21,7 @@ extern "C"
// Software library version
// Major (top-nibble), incremented on backwards incompatible changes
// Minor (bottom-nibble), incremented on feature additions
#define LFS_VERSION 0x00010005
#define LFS_VERSION 0x00010006
#define LFS_VERSION_MAJOR (0xffff & (LFS_VERSION >> 16))
#define LFS_VERSION_MINOR (0xffff & (LFS_VERSION >> 0))


@@ -32,18 +32,18 @@ lfs_alloc_singleproc() {
tests/test.py << TEST
const char *names[] = {"bacon", "eggs", "pancakes"};
lfs_mount(&lfs, &cfg) => 0;
for (int n = 0; n < sizeof(names)/sizeof(names[0]); n++) {
for (unsigned n = 0; n < sizeof(names)/sizeof(names[0]); n++) {
sprintf((char*)buffer, "$1/%s", names[n]);
lfs_file_open(&lfs, &file[n], (char*)buffer,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_APPEND) => 0;
}
for (int n = 0; n < sizeof(names)/sizeof(names[0]); n++) {
for (unsigned n = 0; n < sizeof(names)/sizeof(names[0]); n++) {
size = strlen(names[n]);
for (int i = 0; i < $SIZE; i++) {
lfs_file_write(&lfs, &file[n], names[n], size) => size;
}
}
for (int n = 0; n < sizeof(names)/sizeof(names[0]); n++) {
for (unsigned n = 0; n < sizeof(names)/sizeof(names[0]); n++) {
lfs_file_close(&lfs, &file[n]) => 0;
}
lfs_unmount(&lfs) => 0;


@@ -301,7 +301,7 @@ tests/test.py << TEST
size = strlen("hedgehoghog");
const lfs_soff_t offsets[] = {512, 1020, 513, 1021, 511, 1019};
for (int i = 0; i < sizeof(offsets) / sizeof(offsets[0]); i++) {
for (unsigned i = 0; i < sizeof(offsets) / sizeof(offsets[0]); i++) {
lfs_soff_t off = offsets[i];
memcpy(buffer, "hedgehoghog", size);
lfs_file_seek(&lfs, &file[0], off, LFS_SEEK_SET) => off;


@@ -23,14 +23,14 @@ tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
for (int i = 0; i < sizeof(startsizes)/sizeof(startsizes[0]); i++) {
for (unsigned i = 0; i < sizeof(startsizes)/sizeof(startsizes[0]); i++) {
sprintf((char*)buffer, "hairyhead%d", i);
lfs_file_open(&lfs, &file[0], (const char*)buffer,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
strcpy((char*)buffer, "hair");
size = strlen((char*)buffer);
for (int j = 0; j < startsizes[i]; j += size) {
for (lfs_off_t j = 0; j < startsizes[i]; j += size) {
lfs_file_write(&lfs, &file[0], buffer, size) => size;
}
lfs_file_size(&lfs, &file[0]) => startsizes[i];
@@ -55,13 +55,13 @@ tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
for (int i = 0; i < sizeof(startsizes)/sizeof(startsizes[0]); i++) {
for (unsigned i = 0; i < sizeof(startsizes)/sizeof(startsizes[0]); i++) {
sprintf((char*)buffer, "hairyhead%d", i);
lfs_file_open(&lfs, &file[0], (const char*)buffer, LFS_O_RDWR) => 0;
lfs_file_size(&lfs, &file[0]) => hotsizes[i];
size = strlen("hair");
int j = 0;
lfs_off_t j = 0;
for (; j < startsizes[i] && j < hotsizes[i]; j += size) {
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "hair", size) => 0;
@@ -87,13 +87,13 @@ tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
for (int i = 0; i < sizeof(startsizes)/sizeof(startsizes[0]); i++) {
for (unsigned i = 0; i < sizeof(startsizes)/sizeof(startsizes[0]); i++) {
sprintf((char*)buffer, "hairyhead%d", i);
lfs_file_open(&lfs, &file[0], (const char*)buffer, LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file[0]) => coldsizes[i];
size = strlen("hair");
int j = 0;
lfs_off_t j = 0;
for (; j < startsizes[i] && j < hotsizes[i] && j < coldsizes[i];
j += size) {
lfs_file_read(&lfs, &file[0], buffer, size) => size;