Compare commits


16 Commits

Author SHA1 Message Date
Christopher Haster
8bf45c3a29 Fixed issue with writes following a truncate
The problem was not setting the file state correctly after the truncate.
When truncating to a smaller size, we end up using the cache to traverse the ctz
skip-list far away from where our file->pos is.

We can leave the last block in the cache in case we're going to append
to the file, but if we do this we need to set up file->block+file->off
to tell us where we are in the file, and set the LFS_F_READING flag to
indicate that our cache contains read data.

Note this is different from LFS_F_DIRTY, which we also need. The
purposes of the flags are as follows:
- LFS_F_DIRTY - file ctz skip-list branch is out of sync with
  filesystem, need to update metadata
- LFS_F_READING - file cache is in use for reading, need to drop cache
- LFS_F_WRITING - file cache is in use for writing, need to write out
  cache to disk

The difference between flags is subtle but important because read/prog
caches are handled differently. Prog caches have asserts in place to
catch programs without erases (the infamous pcache->block == 0xffffffff
assert).

Though maybe the names deserve an update...

Found by ebinans
2019-05-22 14:46:59 -05:00
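The three flag roles described in that commit can be sketched as a small C fragment. The `EX_F_*` values and helper names here are illustrative stand-ins, not littlefs's internal `LFS_F_*` definitions:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

// Hypothetical flag values for illustration only; littlefs defines its
// own LFS_F_* bit values internally.
enum {
    EX_F_DIRTY   = 0x1, // ctz skip-list branch out of sync, metadata update needed
    EX_F_READING = 0x2, // cache holds read data, must be dropped before reuse
    EX_F_WRITING = 0x4, // cache holds unwritten data, must be flushed to disk
};

// After a truncate that leaves the last block in the cache, the file is
// both out of sync with metadata and holding read data in its cache.
static uint32_t ex_flags_after_truncate(uint32_t flags) {
    return flags | EX_F_DIRTY | EX_F_READING;
}

// A later write must first drop a read cache, but only flush a write cache.
static bool ex_must_drop_cache(uint32_t flags) {
    return (flags & EX_F_READING) != 0;
}

static bool ex_must_flush_cache(uint32_t flags) {
    return (flags & EX_F_WRITING) != 0;
}
```

The distinction matters because, as the message notes, read and prog caches are handled differently on reuse.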
Christopher Haster
f35fb8c148 Fixed migration test condition for prefix branches
Both the littlefs-fuse and littlefs-migration test jobs depend on
the external littlefs-fuse repo. But unfortunately, the automatic
patching to update the external repo with the version under test
does not work with the prefix branches.

In this case we can just skip these tests, they've already been tested
multiple times to get to this point.
2019-04-16 18:29:44 -05:00
Christopher Haster
0a1f706ca2 Merge pull request #160 from FreddieChopin/no-cache-bypass
Don't bypass cache in `lfs_cache_prog()` and `lfs_cache_read()`
2019-04-16 17:59:28 -05:00
Freddie Chopin
fdd239fe21 Don't bypass cache in lfs_cache_prog() and lfs_cache_read()
In some cases specific alignment of buffer passed to underlying device
is required. For example SDMMC in STM32F7 (when used with DMA) requires
the buffers to be aligned to 16 bytes. If you enable data cache in
STM32F7, the alignment of buffer passed to any driver which uses DMA
should generally be at least 32 bytes.

While it is possible to provide sufficiently aligned "read", "prog" and
per-file caches to littlefs, the cases where caches are bypassed are
hard to control when littlefs is hidden under some additional layers.
For example if you couple littlefs with stdio and use it via `FILE*`,
then littlefs functions will operate on the internal `FILE*` buffer,
which is usually allocated dynamically and so, in these specific cases,
has insufficient alignment (8 bytes on ARM Cortex-M).

The easy path was taken - remove all cases of cache bypassing.

Fixes #158
2019-04-12 15:21:25 -05:00
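The alignment concern can be illustrated with statically allocated, over-aligned caches. This is a sketch: `EX_CACHE_SIZE` is an arbitrary example value, and the 32-byte figure matches the STM32F7 data-cache line size mentioned in the commit message:

```c
#include <assert.h>
#include <stdalign.h>
#include <stddef.h>
#include <stdint.h>

// Example: caches over-aligned for a DMA-based driver.
#define EX_CACHE_SIZE 512

static alignas(32) uint8_t ex_read_cache[EX_CACHE_SIZE];
static alignas(32) uint8_t ex_prog_cache[EX_CACHE_SIZE];

// Check a driver might perform to reject insufficiently aligned buffers.
static int ex_is_aligned(const void *p, size_t align) {
    return ((uintptr_t)p % align) == 0;
}
```

Providing such buffers only helps if littlefs never bypasses them, which is exactly what this commit guarantees.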
Christopher Haster
780ef2fce4 Fixed buffer overflow due to mistaking prog_size for cache_size
found by ajaybhargav
2019-04-12 08:44:00 -05:00
Christopher Haster
73ea008b74 Merge pull request #151 from Krakonos/master
Fixed documentation for the lfs_dir_read return value.
2019-04-12 17:07:25 -05:00
Christopher Haster
c849748453 Merge pull request #150 from ajaybhargav/truncate-fix
Fix: length more than LFS_FILE_MAX should return error
2019-04-12 17:06:58 -05:00
Christopher Haster
25a843aab7 Fixed .travis.yml to use explicit branch names for migration testing
This lets us actually update the littlefs-fuse repo instead of being
bound to master for v1.
2019-04-12 15:13:00 -05:00
Ajay Bhargav
905727b684 Fix: length more than LFS_FILE_MAX should return error
To make lfs_file_truncate consistent with the ftruncate function, passing
a negative size or a size greater than the maximum file size should
return an invalid parameter error; in LFS's case, LFS_ERR_INVAL.

Signed-off-by: Ajay Bhargav <contact@rickeyworld.info>
2019-04-12 15:09:44 -05:00
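The check described above amounts to the following sketch. The `EX_*` constants are illustrative stand-ins (littlefs's LFS_FILE_MAX is 2^31-1, and LFS_ERR_INVAL is its invalid-parameter error code):

```c
#include <assert.h>
#include <stdint.h>

// Illustrative stand-ins for LFS_FILE_MAX and LFS_ERR_INVAL.
#define EX_FILE_MAX  2147483647u
#define EX_ERR_INVAL (-22)

// lfs_off_t is unsigned, so a "negative" size wraps to a huge value and
// is rejected by the same comparison as an oversized one.
static int ex_check_truncate_size(uint32_t size) {
    if (size > EX_FILE_MAX) {
        return EX_ERR_INVAL;
    }
    return 0;
}
```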
Christopher Haster
0907ba7813 Merge pull request #85 from ARMmbed/v2-alpha
v2: Metadata logging, custom attributes, inline files, and a major version bump
2019-04-10 20:49:34 -05:00
Christopher Haster
48bd2bff82 Artificially limited number of file ids per metadata block
This is an experiment to determine which field in the tag structure is
the most critical: tag id or tag size.

This came from looking at NAND storage and discussions around behaviour of
large prog_sizes. Initial exploration indicates that prog_sizes around
2KiB are not _that_ uncommon, and the 1KiB limitation is surprising.

It's possible to increase the lfs_tag size field to 12 bits (4096), but at
the cost of only 8-bit ids (256).

  [----            32             ----]
a [1|-3-|-- 8 --|--  10  --|--  10  --]
b [1|-3-|-- 8 --|-- 8 --|--   12    --]

This requires more investigation, but in order to allow us to change
the tag sizes with minimal impact I've artificially limited the number
of file ids to 0xfe (254) per metadata pair. If
12-bit lengths turn out to be a bad idea, we can remove the artificial
limit without backwards incompatible changes.

To avoid breaking users already on v2-alpha, this change will refuse
_creating_ file ids > 255, but should read file ids > 255 without
issues.
2019-04-10 11:27:53 -05:00
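The two candidate layouts can be sketched as field extractors. These are hypothetical helpers for illustration, not littlefs's actual tag macros:

```c
#include <assert.h>
#include <stdint.h>

// Sketch of the two candidate 32-bit tag layouts:
//   a [1|-3-|-- 8 --|--  10  --|--  10  --]  valid|type1|chunk|id(10)|size(10)
//   b [1|-3-|-- 8 --|-- 8 --|--   12    --]  valid|type1|chunk|id(8) |size(12)
static uint32_t tag_id_a(uint32_t tag)   { return (tag >> 10) & 0x3ff; } // max id 1023
static uint32_t tag_size_a(uint32_t tag) { return tag & 0x3ff; }         // max size 1023
static uint32_t tag_id_b(uint32_t tag)   { return (tag >> 12) & 0xff; }  // max id 255
static uint32_t tag_size_b(uint32_t tag) { return tag & 0xfff; }         // max size 4095
```

Layout b trades away id range for a size field that can represent 2KiB-class prog sizes, which is exactly the trade-off the artificial id limit keeps open.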
Christopher Haster
651e14e796 Cleaned up a couple of warnings
- Shifting signed 32-bit value by 31 bits is undefined behaviour

  This was an interesting one as on initial inspection, `uint8_t & 1`
  looks like it will result in an unsigned variable. However, due to
  uint8_t being "smaller" than int, this actually results in a signed
  int, causing an undefined shift operation.

- Identical inner 'if' condition is always true (outer condition is
  'true' and inner condition is 'true').

  This was caused by the use of `if (true) {` to avoid "goto bypasses
  variable initialization" warnings. Using just `{` instead seems to
  avoid this problem.

found by keck-in-space and armandas
2019-04-10 11:27:53 -05:00
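The promotion trap in the first warning can be demonstrated directly. This is a sketch with illustrative function names:

```c
#include <assert.h>
#include <stdint.h>

// (chunk & 1) has type int due to integer promotion, so shifting it left
// by 31 would be undefined behaviour. Either an explicit cast or an
// unsigned operand keeps the whole expression unsigned.
static uint32_t ex_shift_with_cast(uint8_t chunk) {
    return (uint32_t)(chunk & 1) << 31;
}

static uint32_t ex_shift_with_unsigned_literal(uint8_t chunk) {
    // chunk is promoted to int, then converted to unsigned to match 1U,
    // so both the & and the shift happen in unsigned arithmetic.
    return (chunk & 1U) << 31;
}
```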
Christopher Haster
1ff6432298 Added clarification on buffer alignment.
In v2, the lookahead_buffer was changed from requiring 4-byte alignment
to requiring 8-byte alignment. This was not documented as well as it
could be, and as FabianInostroza noted, this also implies that
lfs_malloc must provide 8-byte alignment.

To protect against this, I've also added an assert on the alignment of
both the lookahead_size and lookahead_buffer.

found by FabianInostroza and amitv87
2019-04-10 11:27:48 -05:00
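The documented requirement can be expressed as a single predicate mirroring the added asserts. The helper name is illustrative:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

// Both the lookahead size and the lookahead buffer address must be
// multiples of 8 (64-bit aligned). A NULL buffer (littlefs then allocates
// one via lfs_malloc) trivially satisfies the address check.
static int ex_lookahead_config_ok(size_t size, const void *buffer) {
    return size % 8 == 0 && (uintptr_t)buffer % 8 == 0;
}
```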
Christopher Haster
c2c2ce6b97 Fixed issue with handling block device errors in lfs_file_sync
lfs_file_sync was not correctly setting the LFS_F_ERRED flag.
Fortunately this is a relatively easy fix. LFS_F_ERRED prevents
further issues from occurring when cleaning up resources with
lfs_file_close.

found by TheLoneWolfling
2019-04-09 17:41:26 -05:00
Christopher Haster
0b76635f10 Added better handling of large program sizes (> 1024)
The issue here is how commits handle padding to the nearest program
size. This is done by exploiting the size field of the LFS_TYPE_CRC
tag that completes the commit. Unfortunately, during development, the
size field shrank in size to make room for more type information,
limiting the size field to 1024.

Normally this isn't a problem, as very rarely do program sizes exceed
1024 bytes. However, using a simulated block device, user earlephilhower
found that exceeding 1024 caused littlefs to crash.

To make this corner case behave in a more user-friendly manner, I've
modified this situation to treat >1024 program sizes as small commits
that don't match the prog size. As a part of this, littlefs also needed
to understand that non-matching commits indicate an "unerased" dir
block, which would be needed for portability (something which notably
lacks testing).

This raises the question of whether the size of the tag's size field
needs to be reconsidered, but changing that at this point would require
a new major version.

found by earlephilhower
2019-04-09 16:06:43 -05:00
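The padding logic at issue can be sketched with an align-up helper in the style of littlefs's lfs_alignup. The helper names here are illustrative:

```c
#include <assert.h>
#include <stdint.h>

// Round a commit offset up to the next program-size boundary. The CRC
// tag's 10-bit size field must encode the commit plus this padding,
// which is only always possible while prog_size stays <= 1024.
static uint32_t ex_alignup(uint32_t a, uint32_t alignment) {
    return ((a + alignment - 1) / alignment) * alignment;
}

// With prog_size > 1024, commits are instead left unpadded; a commit
// that doesn't end on a prog_size boundary marks the block "unerased".
static int ex_commit_matches_prog(uint32_t commit_off, uint32_t prog_size) {
    return commit_off % prog_size == 0;
}
```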
Ladislav Láska
26d25608b6 Fixed documentation for the lfs_dir_read return value.
lfs_dir_read breaks the usual convention by returning non-zero on
success; this behaviour should at least be documented.
2019-03-01 10:01:02 +01:00
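The corrected convention (positive per entry, 0 at end of directory, negative on error) can be exercised with a self-contained mock reader; real code would call lfs_dir_read(lfs, dir, &info) in the same loop shape:

```c
#include <assert.h>

// Mock directory reader following the documented lfs_dir_read convention:
// returns positive while entries remain, 0 at end of directory.
struct ex_dir { int pos, count; };

static int ex_dir_read(struct ex_dir *dir) {
    if (dir->pos >= dir->count) {
        return 0; // end of directory
    }
    dir->pos++;
    return 1; // positive on success (an entry was filled out)
}

// Iterate in the shape callers use with the real API:
// while ((res = lfs_dir_read(lfs, dir, &info)) > 0) { ... }
static int ex_count_entries(int count) {
    struct ex_dir dir = {0, count};
    int n = 0, res;
    while ((res = ex_dir_read(&dir)) > 0) {
        n++;
    }
    return (res < 0) ? res : n;
}
```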
5 changed files with 180 additions and 35 deletions


@@ -136,10 +136,11 @@ jobs:
env:
- STAGE=test
- NAME=littlefs-migration
+if: branch !~ -prefix$
install:
- sudo apt-get install libfuse-dev
- git clone --depth 1 https://github.com/geky/littlefs-fuse -b v2-alpha v2
-- git clone --depth 1 https://github.com/geky/littlefs-fuse v1
+- git clone --depth 1 https://github.com/geky/littlefs-fuse -b v1 v1
- fusermount -V
- gcc --version
before_script:

lfs.c

@@ -80,21 +80,6 @@ static int lfs_bd_read(lfs_t *lfs,
diff = lfs_min(diff, rcache->off-off);
}
-if (size >= hint && off % lfs->cfg->read_size == 0 &&
-size >= lfs->cfg->read_size) {
-// bypass cache?
-diff = lfs_aligndown(diff, lfs->cfg->read_size);
-int err = lfs->cfg->read(lfs->cfg, block, off, data, diff);
-if (err) {
-return err;
-}
-data += diff;
-off += diff;
-size -= diff;
-continue;
-}
// load to cache, first condition can no longer fail
LFS_ASSERT(block < lfs->cfg->block_count);
rcache->block = block;
@@ -827,7 +812,8 @@ static lfs_stag_t lfs_dir_fetchmatch(lfs_t *lfs,
// next commit not yet programmed or we're not in valid range
if (!lfs_tag_isvalid(tag) ||
off + lfs_tag_dsize(tag) > lfs->cfg->block_size) {
-dir->erased = (lfs_tag_type1(ptag) == LFS_TYPE_CRC);
+dir->erased = (lfs_tag_type1(ptag) == LFS_TYPE_CRC &&
+dir->off % lfs->cfg->prog_size == 0);
break;
}
@@ -854,7 +840,7 @@ static lfs_stag_t lfs_dir_fetchmatch(lfs_t *lfs,
}
// reset the next bit if we need to
-ptag ^= (lfs_tag_chunk(tag) & 1) << 31;
+ptag ^= (lfs_tag_chunk(tag) & 1U) << 31;
// toss our crc into the filesystem seed for
// pseudorandom numbers
@@ -1436,8 +1422,10 @@ static int lfs_dir_compact(lfs_t *lfs,
// space is complicated, we need room for tail, crc, gstate,
// cleanup delete, and we cap at half a block to give room
// for metadata updates.
-if (size <= lfs_min(lfs->cfg->block_size - 36,
-lfs_alignup(lfs->cfg->block_size/2, lfs->cfg->prog_size))) {
+if (end - begin < 0xff &&
+size <= lfs_min(lfs->cfg->block_size - 36,
+lfs_alignup(lfs->cfg->block_size/2,
+lfs->cfg->prog_size))) {
break;
}
@@ -1497,7 +1485,7 @@ static int lfs_dir_compact(lfs_t *lfs,
// begin loop to commit compaction to blocks until a compact sticks
while (true) {
-if (true) {
+{
// There's nothing special about our global delta, so feed it into
// our local global delta
int err = lfs_dir_getgstate(lfs, dir, &lfs->gdelta);
@@ -1596,13 +1584,12 @@ static int lfs_dir_compact(lfs_t *lfs,
dir->count = end - begin;
dir->off = commit.off;
dir->etag = commit.ptag;
-dir->erased = true;
+dir->erased = (dir->off % lfs->cfg->prog_size == 0);
// note we able to have already handled move here
if (lfs_gstate_hasmovehere(&lfs->gpending, dir->pair)) {
lfs_gstate_xormove(&lfs->gpending,
&lfs->gpending, 0x3ff, NULL);
}
}
break;
@@ -1711,7 +1698,7 @@ static int lfs_dir_commit(lfs_t *lfs, lfs_mdir_t *dir,
}
}
-if (dir->erased) {
+if (dir->erased || dir->count >= 0xff) {
// try to commit
struct lfs_commit commit = {
.block = dir->pair[0],
@@ -2122,7 +2109,7 @@ static int lfs_ctz_extend(lfs_t *lfs,
}
LFS_ASSERT(nblock >= 2 && nblock <= lfs->cfg->block_count);
-if (true) {
+{
err = lfs_bd_erase(lfs, nblock);
if (err) {
if (err == LFS_ERR_CORRUPT) {
@@ -2381,7 +2368,8 @@ int lfs_file_opencfg(lfs_t *lfs, lfs_file_t *file,
if (file->ctz.size > 0) {
lfs_stag_t res = lfs_dir_get(lfs, &file->m,
LFS_MKTAG(0x700, 0x3ff, 0),
-LFS_MKTAG(LFS_TYPE_STRUCT, file->id, file->cache.size),
+LFS_MKTAG(LFS_TYPE_STRUCT, file->id,
+lfs_min(file->cache.size, 0x3fe)),
file->cache.buffer);
if (res < 0) {
err = res;
@@ -2576,6 +2564,7 @@ int lfs_file_sync(lfs_t *lfs, lfs_file_t *file) {
while (true) {
int err = lfs_file_flush(lfs, file);
if (err) {
+file->flags |= LFS_F_ERRED;
return err;
}
@@ -2611,6 +2600,7 @@ int lfs_file_sync(lfs_t *lfs, lfs_file_t *file) {
if (err == LFS_ERR_NOSPC && (file->flags & LFS_F_INLINE)) {
goto relocate;
}
+file->flags |= LFS_F_ERRED;
return err;
}
@@ -2624,6 +2614,7 @@ relocate:
file->off = file->pos;
err = lfs_file_relocate(lfs, file);
if (err) {
+file->flags |= LFS_F_ERRED;
return err;
}
}
@@ -2858,6 +2849,10 @@ int lfs_file_truncate(lfs_t *lfs, lfs_file_t *file, lfs_off_t size) {
return LFS_ERR_BADF;
}
+if (size > LFS_FILE_MAX) {
+return LFS_ERR_INVAL;
+}
lfs_off_t oldsize = lfs_file_size(lfs, file);
if (size < oldsize) {
// need to flush since directly changing metadata
@@ -2869,13 +2864,14 @@ int lfs_file_truncate(lfs_t *lfs, lfs_file_t *file, lfs_off_t size) {
// lookup new head in ctz skip list
err = lfs_ctz_find(lfs, NULL, &file->cache,
file->ctz.head, file->ctz.size,
-size, &file->ctz.head, &(lfs_off_t){0});
+size, &file->block, &file->off);
if (err) {
return err;
}
+file->ctz.head = file->block;
file->ctz.size = size;
-file->flags |= LFS_F_DIRTY;
+file->flags |= LFS_F_DIRTY | LFS_F_READING;
} else if (size > oldsize) {
lfs_off_t pos = file->pos;
@@ -3223,8 +3219,9 @@ static int lfs_init(lfs_t *lfs, const struct lfs_config *cfg) {
lfs_cache_zero(lfs, &lfs->pcache);
// setup lookahead, must be multiple of 64-bits
-LFS_ASSERT(lfs->cfg->lookahead_size % 8 == 0);
+LFS_ASSERT(lfs->cfg->lookahead_size > 0);
+LFS_ASSERT(lfs->cfg->lookahead_size % 8 == 0 &&
+(uintptr_t)lfs->cfg->lookahead_buffer % 8 == 0);
if (lfs->cfg->lookahead_buffer) {
lfs->free.buffer = lfs->cfg->lookahead_buffer;
} else {
@@ -3292,7 +3289,7 @@ static int lfs_deinit(lfs_t *lfs) {
int lfs_format(lfs_t *lfs, const struct lfs_config *cfg) {
int err = 0;
-if (true) {
+{
err = lfs_init(lfs, cfg);
if (err) {
return err;
@@ -4176,7 +4173,7 @@ static int lfs1_moved(lfs_t *lfs, const void *e) {
static int lfs1_mount(lfs_t *lfs, struct lfs1 *lfs1,
const struct lfs_config *cfg) {
int err = 0;
-if (true) {
+{
err = lfs_init(lfs, cfg);
if (err) {
return err;
@@ -4247,7 +4244,7 @@ int lfs_migrate(lfs_t *lfs, const struct lfs_config *cfg) {
return err;
}
-if (true) {
+{
// iterate through each directory, copying over entries
// into new directory
lfs1_dir_t dir1;

lfs.h

@@ -215,8 +215,9 @@ struct lfs_config {
// By default lfs_malloc is used to allocate this buffer.
void *prog_buffer;
-// Optional statically allocated program buffer. Must be lookahead_size.
-// By default lfs_malloc is used to allocate this buffer.
+// Optional statically allocated lookahead buffer. Must be lookahead_size
+// and aligned to a 64-bit boundary. By default lfs_malloc is used to
+// allocate this buffer.
void *lookahead_buffer;
// Optional upper limit on length of file names in bytes. No downside for
@@ -577,7 +578,8 @@ int lfs_dir_close(lfs_t *lfs, lfs_dir_t *dir);
// Read an entry in the directory
//
// Fills out the info structure, based on the specified file or directory.
-// Returns a negative error code on failure.
+// Returns a positive value on success, 0 at the end of directory,
+// or a negative error code on failure.
int lfs_dir_read(lfs_t *lfs, lfs_dir_t *dir, struct lfs_info *info);
// Change the position of the directory


@@ -192,6 +192,7 @@ static inline uint32_t lfs_tobe32(uint32_t a) {
uint32_t lfs_crc(uint32_t crc, const void *buffer, size_t size);
// Allocate memory, only used if buffers are not provided to littlefs
+// Note, memory must be 64-bit aligned
static inline void *lfs_malloc(size_t size) {
#ifndef LFS_NO_MALLOC
return malloc(size);


@@ -11,6 +11,150 @@ tests/test.py << TEST
lfs_format(&lfs, &cfg) => 0;
TEST
echo "--- Simple truncate ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file[0], "baldynoop",
LFS_O_WRONLY | LFS_O_CREAT) => 0;
strcpy((char*)buffer, "hair");
size = strlen((char*)buffer);
for (lfs_off_t j = 0; j < $LARGESIZE; j += size) {
lfs_file_write(&lfs, &file[0], buffer, size) => size;
}
lfs_file_size(&lfs, &file[0]) => $LARGESIZE;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file[0], "baldynoop", LFS_O_RDWR) => 0;
lfs_file_size(&lfs, &file[0]) => $LARGESIZE;
lfs_file_truncate(&lfs, &file[0], $MEDIUMSIZE) => 0;
lfs_file_size(&lfs, &file[0]) => $MEDIUMSIZE;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file[0], "baldynoop", LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file[0]) => $MEDIUMSIZE;
size = strlen("hair");
for (lfs_off_t j = 0; j < $MEDIUMSIZE; j += size) {
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "hair", size) => 0;
}
lfs_file_read(&lfs, &file[0], buffer, size) => 0;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Truncate and read ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file[0], "baldyread",
LFS_O_WRONLY | LFS_O_CREAT) => 0;
strcpy((char*)buffer, "hair");
size = strlen((char*)buffer);
for (lfs_off_t j = 0; j < $LARGESIZE; j += size) {
lfs_file_write(&lfs, &file[0], buffer, size) => size;
}
lfs_file_size(&lfs, &file[0]) => $LARGESIZE;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file[0], "baldyread", LFS_O_RDWR) => 0;
lfs_file_size(&lfs, &file[0]) => $LARGESIZE;
lfs_file_truncate(&lfs, &file[0], $MEDIUMSIZE) => 0;
lfs_file_size(&lfs, &file[0]) => $MEDIUMSIZE;
size = strlen("hair");
for (lfs_off_t j = 0; j < $MEDIUMSIZE; j += size) {
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "hair", size) => 0;
}
lfs_file_read(&lfs, &file[0], buffer, size) => 0;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file[0], "baldyread", LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file[0]) => $MEDIUMSIZE;
size = strlen("hair");
for (lfs_off_t j = 0; j < $MEDIUMSIZE; j += size) {
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "hair", size) => 0;
}
lfs_file_read(&lfs, &file[0], buffer, size) => 0;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Truncate and write ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file[0], "baldywrite",
LFS_O_WRONLY | LFS_O_CREAT) => 0;
strcpy((char*)buffer, "hair");
size = strlen((char*)buffer);
for (lfs_off_t j = 0; j < $LARGESIZE; j += size) {
lfs_file_write(&lfs, &file[0], buffer, size) => size;
}
lfs_file_size(&lfs, &file[0]) => $LARGESIZE;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file[0], "baldywrite", LFS_O_RDWR) => 0;
lfs_file_size(&lfs, &file[0]) => $LARGESIZE;
lfs_file_truncate(&lfs, &file[0], $MEDIUMSIZE) => 0;
lfs_file_size(&lfs, &file[0]) => $MEDIUMSIZE;
strcpy((char*)buffer, "bald");
size = strlen((char*)buffer);
for (lfs_off_t j = 0; j < $MEDIUMSIZE; j += size) {
lfs_file_write(&lfs, &file[0], buffer, size) => size;
}
lfs_file_size(&lfs, &file[0]) => $MEDIUMSIZE;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file[0], "baldywrite", LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file[0]) => $MEDIUMSIZE;
size = strlen("bald");
for (lfs_off_t j = 0; j < $MEDIUMSIZE; j += size) {
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "bald", size) => 0;
}
lfs_file_read(&lfs, &file[0], buffer, size) => 0;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
# More aggressive general truncation tests
truncate_test() {
STARTSIZES="$1"
STARTSEEKS="$2"