Compare commits

...

18 Commits

Author SHA1 Message Date
Christopher Haster
9c7e232086 Fixed missing definition of lfs_cache_drop in readonly mode
Interestingly, this was introduced by two different PRs that were not tested
together until pre-release testing:

- Fix lfs_file_seek doesn't update cache properties correctly
- Fix compiler warnings when LFS_READONLY defined
2022-03-21 20:29:04 -05:00
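For illustration, here is a minimal standalone sketch of the conditional-compilation hazard described in the commit above (hypothetical names, not the actual littlefs code): one change guards a helper out of read-only builds because only the write path seemed to need it, a second change makes always-compiled code call that helper, and the build only breaks once the read-only flag is defined.

```c
#include <stdio.h>

// Stand-in for LFS_READONLY; define RO_BUILD to reproduce the failure.
#ifndef RO_BUILD
// Helper assumed to be needed only by the write path, so it was guarded...
static void drop_cache(int *cache) {
    *cache = -1;  // mark the cache as empty
}
#endif

static void flush(int *cache) {
    // ...while a later change made the always-compiled flush path call it.
    // With RO_BUILD defined, drop_cache has no definition and the build fails.
    drop_cache(cache);
}

int main(void) {
    int cache = 42;
    flush(&cache);
    printf("cache = %d\n", cache);
    return 0;
}
```

Each change is fine on its own; the breakage only appears when both are built together with the guard defined, which is why it surfaced during pre-release testing.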
Christopher Haster
c676bcee4c Merge branch 'bf_lfs_file_seek_readonly' into HEAD 2022-03-20 23:16:15 -05:00
Christopher Haster
03f088b92c Tweaked lfs_file_flush to still flush caches when built under LFS_READONLY
A slight variation on the fix from ondrap
2022-03-20 23:14:34 -05:00
ondrap
e955b9f65d Fix lfs_file_seek doesn't update cache properties correctly in readonly mode. Invalidate cache to fix it. 2022-03-20 23:10:11 -05:00
Christopher Haster
99f58139cb Merge pull request #650 from Kongduino/patch-1
Typo
2022-03-20 23:09:41 -05:00
Christopher Haster
5801169348 Merge pull request #635 from mikee47/fix/spelling-errors
Fix spelling errors
2022-03-20 23:09:23 -05:00
Christopher Haster
2d6f4ead13 Merge pull request #620 from XinStellaris/master
fix bug:lfs_alloc will alloc one block repeatedly in multiple split
2022-03-20 23:09:04 -05:00
Christopher Haster
3d1b89b41a Merge pull request #612 from tniessen/patch-1
Always zero rambd buffer before first use
2022-03-20 23:08:31 -05:00
Christopher Haster
45cefb825d Merge pull request #606 from eclig/improve-config-doc
Specify unit of the size members of the lfs_config struct
2022-03-20 23:07:51 -05:00
Christopher Haster
bbb9e3873e Merge pull request #593 from tannewt/patch-1
Indent sub-portions of tag fields
2022-03-20 23:07:32 -05:00
Christopher Haster
c6d3c48939 Merge pull request #569 from tniessen/fix-compilation-with-lfs_readonly
Fix compiler warnings when LFS_READONLY defined
2022-03-20 23:06:50 -05:00
田昕
1363c9f9d4 fix bug:lfs_alloc will alloc one block repeatedly in multiple split
Bug case: assume littlefs has 6 blocks, with blocks 0, 1, 2, 3 already allocated and block 0 having a tail pair of {2, 3}. Now we try to write more into block 0.
When writing to block 0, a split is triggered (first split), which allocates blocks 4 and 5. Up to this point everything is as expected.
We then try to commit in block 4, during which a split is triggered again (second split); in our case some files are large and some are small, so one split may not be enough. Still as expected.
The bug happens when we try to allocate a new block pair for the second split: the lookahead buffer has reached its end, so a new lookahead buffer is generated from the flash contents, and blocks 4 and 5 appear as unused in it because they have not been programmed yet. However, blocks 4 and 5 are already occupied by the first split, so they get allocated a second time. This is where things go wrong.

Commit ce2c01f introduced this bug: it inserted an lfs_alloc_ack call into lfs_dir_split, which causes a split to reset lfs->free.ack to the block count.
In summary, this problem exists in versions after 2.1.3.

Solution: don't call lfs_alloc_ack in lfs_dir_split.
2022-03-20 20:53:48 -05:00
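To make the failure mode concrete, here is a small standalone toy model (hypothetical; this is not littlefs's real allocator, and the names are ours) of a lookahead-style allocator whose free map may only be rebuilt from committed metadata. Acknowledging allocations in the middle of a split lets the rebuild forget the in-flight blocks, so they are handed out twice:

```c
#include <stdbool.h>
#include <stdio.h>

#define BLOCKS 6

// blocks referenced by metadata already committed to "flash"
static bool committed[BLOCKS] = {true, true, true, true, false, false};
static bool freemap[BLOCKS];  // in-RAM lookahead-style free map
static int scanned;           // blocks consumed from the current map
static bool acked;            // "every block handed out so far is committed"

static void rebuild(void) {
    // rebuilt from committed metadata only -- allocations that have not
    // been committed yet are invisible here
    for (int i = 0; i < BLOCKS; i++) {
        freemap[i] = !committed[i];
    }
    scanned = 0;
}

static int alloc(void) {
    for (;;) {
        for (; scanned < BLOCKS; scanned++) {
            if (freemap[scanned]) {
                freemap[scanned] = false;
                return scanned;
            }
        }
        if (!acked) {
            return -1;  // out of space within the window (LFS_ERR_NOSPC-ish)
        }
        rebuild();      // forgets any in-flight, uncommitted allocations
        acked = false;
    }
}

int main(void) {
    rebuild();
    int a = alloc();    // first split: gets block 4
    int b = alloc();    // first split: gets block 5
    acked = true;       // the buggy ack inside lfs_dir_split
    int c = alloc();    // second split: gets block 4 again
    int d = alloc();    // second split: gets block 5 again
    printf("first split: %d %d, second split: %d %d\n", a, b, c, d);
    return 0;
}
```

With the fix, the ack happens only after the split has been committed, so by the time the free map may be rebuilt, blocks 4 and 5 are reachable from on-disk metadata and can no longer be treated as free.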
Kongduino
5bc682a0d4 Typo
s/propogated/propagated/
2022-03-20 20:49:45 -05:00
Scott Shawcroft
1877c40aac Indent sub-portions of tag fields
This makes the bit breakdown clearer.
2022-02-18 21:13:41 -06:00
Emilio Lopes
e29e7aeefa Specify unit of the size members of the lfs_config struct
Fixes littlefs-project/littlefs#568
2022-02-18 21:09:19 -06:00
mikee47
4977fa0c0e Fix spelling errors 2022-01-29 09:52:00 +00:00
Tobias Nießen
fdda3b4aa2 Always zero rambd buffer before first use
This fixes warnings produced by tools such as memcheck without
requiring the user to set an erase value.
2021-11-14 16:10:54 +01:00
Tobias Nießen
fb2c311bb4 Fix compiler warnings when LFS_READONLY defined 2021-06-14 12:12:38 +02:00
6 changed files with 46 additions and 36 deletions

14
SPEC.md
View File

@@ -233,19 +233,19 @@ Metadata tag fields:
into a 3-bit abstract type and an 8-bit chunk field. Note that the value
`0x000` is invalid and not assigned a type.
3. **Type1 (3-bits)** - Abstract type of the tag. Groups the tags into
8 categories that facilitate bitmasked lookups.
1. **Type1 (3-bits)** - Abstract type of the tag. Groups the tags into
8 categories that facilitate bitmasked lookups.
4. **Chunk (8-bits)** - Chunk field used for various purposes by the different
abstract types. type1+chunk+id form a unique identifier for each tag in the
metadata block.
2. **Chunk (8-bits)** - Chunk field used for various purposes by the different
abstract types. type1+chunk+id form a unique identifier for each tag in the
metadata block.
5. **Id (10-bits)** - File id associated with the tag. Each file in a metadata
3. **Id (10-bits)** - File id associated with the tag. Each file in a metadata
block gets a unique id which is used to associate tags with that file. The
special value `0x3ff` is used for any tags that are not associated with a
file, such as directory and global metadata.
6. **Length (10-bits)** - Length of the data in bytes. The special value
4. **Length (10-bits)** - Length of the data in bytes. The special value
`0x3ff` indicates that this tag has been deleted.
## Metadata types
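As a quick sanity check of the field widths in the list above, here is a hedged sketch of decode helpers for the 32-bit tag (our own names; lfs.c has its own lfs_tag_* accessors), assuming the layout from MSB to LSB is 1-bit valid, 3-bit type1, 8-bit chunk, 10-bit id, 10-bit length:

```c
#include <stdint.h>

typedef uint32_t tag_t;  // 32-bit metadata tag, as described above

// Field widths: 1 (valid) + 3 (type1) + 8 (chunk) + 10 (id) + 10 (length) = 32
static inline unsigned tag_valid(tag_t t)  { return (t >> 31) & 0x1;   }
static inline unsigned tag_type1(tag_t t)  { return (t >> 28) & 0x7;   }
static inline unsigned tag_chunk(tag_t t)  { return (t >> 20) & 0xff;  }
static inline unsigned tag_id(tag_t t)     { return (t >> 10) & 0x3ff; }
static inline unsigned tag_length(tag_t t) { return  t        & 0x3ff; }
```

Under this layout the special values mentioned above, id `0x3ff` (no associated file) and length `0x3ff` (deleted tag), are simply the all-ones values of their 10-bit fields.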

bd/lfs_filebd.c
View File

@@ -80,7 +80,7 @@ int lfs_filebd_read(const struct lfs_config *cfg, lfs_block_t block,
LFS_ASSERT(size % cfg->read_size == 0);
LFS_ASSERT(block < cfg->block_count);
// zero for reproducability (in case file is truncated)
// zero for reproducibility (in case file is truncated)
if (bd->cfg->erase_value != -1) {
memset(buffer, bd->cfg->erase_value, size);
}

bd/lfs_rambd.c
View File

@@ -32,10 +32,12 @@ int lfs_rambd_createcfg(const struct lfs_config *cfg,
}
}
// zero for reproducability?
// zero for reproducibility?
if (bd->cfg->erase_value != -1) {
memset(bd->buffer, bd->cfg->erase_value,
cfg->block_size * cfg->block_count);
} else {
memset(bd->buffer, 0, cfg->block_size * cfg->block_count);
}
LFS_RAMBD_TRACE("lfs_rambd_createcfg -> %d", 0);

22
lfs.c
View File

@@ -11,6 +11,7 @@
#define LFS_BLOCK_INLINE ((lfs_block_t)-2)
/// Caching block device operations ///
static inline void lfs_cache_drop(lfs_t *lfs, lfs_cache_t *rcache) {
// do not zero, cheaper if cache is readonly or only going to be
// written with identical data (during relocates)
@@ -268,22 +269,26 @@ static inline int lfs_pair_cmp(
paira[0] == pairb[1] || paira[1] == pairb[0]);
}
#ifndef LFS_READONLY
static inline bool lfs_pair_sync(
const lfs_block_t paira[2],
const lfs_block_t pairb[2]) {
return (paira[0] == pairb[0] && paira[1] == pairb[1]) ||
(paira[0] == pairb[1] && paira[1] == pairb[0]);
}
#endif
static inline void lfs_pair_fromle32(lfs_block_t pair[2]) {
pair[0] = lfs_fromle32(pair[0]);
pair[1] = lfs_fromle32(pair[1]);
}
#ifndef LFS_READONLY
static inline void lfs_pair_tole32(lfs_block_t pair[2]) {
pair[0] = lfs_tole32(pair[0]);
pair[1] = lfs_tole32(pair[1]);
}
#endif
// operations on 32-bit entry tags
typedef uint32_t lfs_tag_t;
@@ -365,6 +370,7 @@ static inline bool lfs_gstate_iszero(const lfs_gstate_t *a) {
return true;
}
#ifndef LFS_READONLY
static inline bool lfs_gstate_hasorphans(const lfs_gstate_t *a) {
return lfs_tag_size(a->tag);
}
@@ -376,6 +382,7 @@ static inline uint8_t lfs_gstate_getorphans(const lfs_gstate_t *a) {
static inline bool lfs_gstate_hasmove(const lfs_gstate_t *a) {
return lfs_tag_type1(a->tag);
}
#endif
static inline bool lfs_gstate_hasmovehere(const lfs_gstate_t *a,
const lfs_block_t *pair) {
@@ -388,11 +395,13 @@ static inline void lfs_gstate_fromle32(lfs_gstate_t *a) {
a->pair[1] = lfs_fromle32(a->pair[1]);
}
#ifndef LFS_READONLY
static inline void lfs_gstate_tole32(lfs_gstate_t *a) {
a->tag = lfs_tole32(a->tag);
a->pair[0] = lfs_tole32(a->pair[0]);
a->pair[1] = lfs_tole32(a->pair[1]);
}
#endif
// other endianness operations
static void lfs_ctz_fromle32(struct lfs_ctz *ctz) {
@@ -416,6 +425,7 @@ static inline void lfs_superblock_fromle32(lfs_superblock_t *superblock) {
superblock->attr_max = lfs_fromle32(superblock->attr_max);
}
#ifndef LFS_READONLY
static inline void lfs_superblock_tole32(lfs_superblock_t *superblock) {
superblock->version = lfs_tole32(superblock->version);
superblock->block_size = lfs_tole32(superblock->block_size);
@@ -424,6 +434,7 @@ static inline void lfs_superblock_tole32(lfs_superblock_t *superblock) {
superblock->file_max = lfs_tole32(superblock->file_max);
superblock->attr_max = lfs_tole32(superblock->attr_max);
}
#endif
#ifndef LFS_NO_ASSERT
static bool lfs_mlist_isopen(struct lfs_mlist *head,
@@ -1449,7 +1460,7 @@ static int lfs_dir_alloc(lfs_t *lfs, lfs_mdir_t *dir) {
}
}
// zero for reproducability in case initial block is unreadable
// zero for reproducibility in case initial block is unreadable
dir->rev = 0;
// rather than clobbering one of the blocks we just pretend
@@ -1509,7 +1520,6 @@ static int lfs_dir_split(lfs_t *lfs,
lfs_mdir_t *dir, const struct lfs_mattr *attrs, int attrcount,
lfs_mdir_t *source, uint16_t split, uint16_t end) {
// create tail directory
lfs_alloc_ack(lfs);
lfs_mdir_t tail;
int err = lfs_dir_alloc(lfs, &tail);
if (err) {
@@ -2730,7 +2740,6 @@ static int lfs_file_outline(lfs_t *lfs, lfs_file_t *file) {
}
#endif
#ifndef LFS_READONLY
static int lfs_file_flush(lfs_t *lfs, lfs_file_t *file) {
if (file->flags & LFS_F_READING) {
if (!(file->flags & LFS_F_INLINE)) {
@@ -2739,6 +2748,7 @@ static int lfs_file_flush(lfs_t *lfs, lfs_file_t *file) {
file->flags &= ~LFS_F_READING;
}
#ifndef LFS_READONLY
if (file->flags & LFS_F_WRITING) {
lfs_off_t pos = file->pos;
@@ -2805,10 +2815,10 @@ relocate:
file->pos = pos;
}
#endif
return 0;
}
#endif
#ifndef LFS_READONLY
static int lfs_file_rawsync(lfs_t *lfs, lfs_file_t *file) {
@@ -3081,13 +3091,11 @@ static lfs_soff_t lfs_file_rawseek(lfs_t *lfs, lfs_file_t *file,
return npos;
}
#ifndef LFS_READONLY
// write out everything beforehand, may be noop if rdonly
int err = lfs_file_flush(lfs, file);
if (err) {
return err;
}
#endif
// update pos
file->pos = npos;
@@ -4074,7 +4082,7 @@ static int lfs_fs_relocate(lfs_t *lfs,
lfs_fs_prepmove(lfs, 0x3ff, NULL);
}
// replace bad pair, either we clean up desync, or no desync occured
// replace bad pair, either we clean up desync, or no desync occurred
lfs_pair_tole32(newpair);
err = lfs_dir_commit(lfs, &parent, LFS_MKATTRS(
{LFS_MKTAG_IF(moveid != 0x3ff,

36
lfs.h
View File

@@ -159,49 +159,49 @@ struct lfs_config {
// information to the block device operations
void *context;
// Read a region in a block. Negative error codes are propogated
// Read a region in a block. Negative error codes are propagated
// to the user.
int (*read)(const struct lfs_config *c, lfs_block_t block,
lfs_off_t off, void *buffer, lfs_size_t size);
// Program a region in a block. The block must have previously
// been erased. Negative error codes are propogated to the user.
// been erased. Negative error codes are propagated to the user.
// May return LFS_ERR_CORRUPT if the block should be considered bad.
int (*prog)(const struct lfs_config *c, lfs_block_t block,
lfs_off_t off, const void *buffer, lfs_size_t size);
// Erase a block. A block must be erased before being programmed.
// The state of an erased block is undefined. Negative error codes
// are propogated to the user.
// are propagated to the user.
// May return LFS_ERR_CORRUPT if the block should be considered bad.
int (*erase)(const struct lfs_config *c, lfs_block_t block);
// Sync the state of the underlying block device. Negative error codes
// are propogated to the user.
// are propagated to the user.
int (*sync)(const struct lfs_config *c);
#ifdef LFS_THREADSAFE
// Lock the underlying block device. Negative error codes
// are propogated to the user.
// are propagated to the user.
int (*lock)(const struct lfs_config *c);
// Unlock the underlying block device. Negative error codes
// are propogated to the user.
// are propagated to the user.
int (*unlock)(const struct lfs_config *c);
#endif
// Minimum size of a block read. All read operations will be a
// Minimum size of a block read in bytes. All read operations will be a
// multiple of this value.
lfs_size_t read_size;
// Minimum size of a block program. All program operations will be a
// multiple of this value.
// Minimum size of a block program in bytes. All program operations will be
// a multiple of this value.
lfs_size_t prog_size;
// Size of an erasable block. This does not impact ram consumption and
// may be larger than the physical erase size. However, non-inlined files
// take up at minimum one block. Must be a multiple of the read
// and program sizes.
// Size of an erasable block in bytes. This does not impact ram consumption
// and may be larger than the physical erase size. However, non-inlined
// files take up at minimum one block. Must be a multiple of the read and
// program sizes.
lfs_size_t block_size;
// Number of erasable blocks on the device.
@@ -215,11 +215,11 @@ struct lfs_config {
// Set to -1 to disable block-level wear-leveling.
int32_t block_cycles;
// Size of block caches. Each cache buffers a portion of a block in RAM.
// The littlefs needs a read cache, a program cache, and one additional
// Size of block caches in bytes. Each cache buffers a portion of a block in
// RAM. The littlefs needs a read cache, a program cache, and one additional
// cache per file. Larger caches can improve performance by storing more
// data and reducing the number of disk accesses. Must be a multiple of
// the read and program sizes, and a factor of the block size.
// data and reducing the number of disk accesses. Must be a multiple of the
// read and program sizes, and a factor of the block size.
lfs_size_t cache_size;
// Size of the lookahead buffer in bytes. A larger lookahead buffer
@@ -485,7 +485,7 @@ int lfs_stat(lfs_t *lfs, const char *path, struct lfs_info *info);
// Returns the size of the attribute, or a negative error code on failure.
// Note, the returned size is the size of the attribute on disk, irrespective
// of the size of the buffer. This can be used to dynamically allocate a buffer
// or check for existance.
// or check for existence.
lfs_ssize_t lfs_getattr(lfs_t *lfs, const char *path,
uint8_t type, void *buffer, lfs_size_t size);
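To tie the unit clarifications together, here is a hedged example configuration using the byte-sized fields documented above (the geometry and the user_* callback names are made up for illustration; a real configuration must match the actual block device):

```c
#include "lfs.h"

// Hypothetical user-provided block device callbacks (declarations only).
int user_read(const struct lfs_config *c, lfs_block_t block,
        lfs_off_t off, void *buffer, lfs_size_t size);
int user_prog(const struct lfs_config *c, lfs_block_t block,
        lfs_off_t off, const void *buffer, lfs_size_t size);
int user_erase(const struct lfs_config *c, lfs_block_t block);
int user_sync(const struct lfs_config *c);

const struct lfs_config cfg = {
    .read  = user_read,
    .prog  = user_prog,
    .erase = user_erase,
    .sync  = user_sync,

    .read_size      = 16,    // minimum read, in bytes
    .prog_size      = 16,    // minimum program, in bytes
    .block_size     = 4096,  // erasable block size, in bytes
    .block_count    = 128,   // number of erasable blocks on the device
    .block_cycles   = 500,   // erase cycles before wear-leveling moves a block
    .cache_size     = 16,    // cache size in bytes: a multiple of the read and
                             // program sizes, and a factor of block_size
    .lookahead_size = 16,    // lookahead buffer size, in bytes
};
```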

scripts/test.py
View File

@@ -565,7 +565,7 @@ class TestSuite:
path=self.path))
mk.write('\n')
# add truely global defines globally
# add truly global defines globally
for k, v in sorted(self.defines.items()):
mk.write('%s.test: override CFLAGS += -D%s=%r\n'
% (self.path, k, v))
@@ -656,7 +656,7 @@ def main(**args):
for path in glob.glob(testpath):
suites.append(TestSuite(path, classes, defines, filter, **args))
# sort for reproducability
# sort for reproducibility
suites = sorted(suites)
# generate permutations