Compare commits

..

5 Commits

Author SHA1 Message Date
Christopher Haster
0aba71d0d6 Fixed single unchecked bit during commit verification
This bug was exposed by the bad-block tests due to changes to block
allocation, but could have been hit before these changes.

In flash, when blocks fail, they don't fail in a predictable manner. To
account for this, the bad-block tests check a number of failure
behaviors. The interesting one here is "LFS_TESTBD_BADBLOCK_ERASENOOP",
in which bad blocks cannot be erased or programmed, and are stuck with
the data written at the time the blocks go bad.
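To make the failure mode concrete, here is a minimal sketch of an erase hook
that behaves this way (this is not the actual lfs_testbd code; the bad-block
map, BLOCK_COUNT, and BLOCK_SIZE are hypothetical):

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>
    #include "lfs.h"

    #define BLOCK_COUNT 128   // hypothetical geometry for this sketch
    #define BLOCK_SIZE  4096

    static bool is_bad[BLOCK_COUNT];                  // hypothetical bad-block map
    static uint8_t storage[BLOCK_COUNT][BLOCK_SIZE];  // simulated flash contents

    // erase hook with an ERASENOOP-style failure: a bad block reports success
    // but keeps whatever data it held when it went bad
    static int test_erase(const struct lfs_config *c, lfs_block_t block) {
        (void)c;
        if (is_bad[block]) {
            return 0;  // no-op: the stale data (and its valid CRCs) survive
        }
        memset(storage[block], 0xff, BLOCK_SIZE);
        return 0;
    }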

This is actually a pretty realistic failure behavior, since flash needs a
large voltage to force electrons onto or off of the floating gates. Though
realistically, such a failure would likely corrupt the data a bit rather than
leave the underlying data perfectly intact.

LFS_TESTBD_BADBLOCK_ERASENOOP is rather interesting to test for because it
means bad blocks can end up with perfectly valid CRCs after a failed write,
confusing littlefs.

---

In this case, we had the perfect series of operations such that a test
was repeatedly writing the same sequence of metadata commits to the same
block, which eventually goes bad, leaving the block stuck with metadata
that occurs later in the sequence.

What this means is that after the first commit, the metadata block
contained both the first and second commits, even though the loop in the
test hadn't reached that point yet.

expected       actual
.----------.  .----------.
| commit 1 |  | commit 1 |
| crc 1    |  | crc 1    |
|          |  | commit 2 | <-- (from previous iteration)
|          |  | crc 2    |
'----------'  '----------'

To protect against this, littlefs normally compares the written CRC
against the expected CRC, but because this was the exact same data it
was about to write, the CRCs end up the same.

Ah! But doesn't littlefs also encode the state of the next page to keep
track of whether the next page has been erased or not? Wouldn't that change
between iterations?

It does! In a single bit in the CRC-tag. But thanks to some incorrect
logic attempting to avoid an extra condition in the loop for writing out
padding commits, the CRC that littlefs checked against was the CRC
immediately before we include the "is-next-page-erased" bit.

Changing the verification check to use the same CRC as the one used to
verify commits on fetch solves this problem.
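As a rough illustration, the check now recomputes the CRC a later fetch would
check, so the tag carrying the erase-state bit is included. This is a
conceptual sketch, not the littlefs implementation; the helper name and buffer
size are made up:

    #include "lfs.h"

    // conceptual sketch: re-read the commit region and recompute the CRC that
    // a later lfs_dir_fetch would check, so the tag carrying the erase-state
    // bit is part of the comparison
    static int verify_commit_sketch(const struct lfs_config *cfg,
            lfs_block_t block, lfs_off_t off, lfs_size_t size,
            uint32_t expected_crc) {
        uint8_t buf[64];  // assumes 64 is a multiple of read_size
        uint32_t crc = 0xffffffff;
        for (lfs_size_t i = 0; i < size; i += sizeof(buf)) {
            lfs_size_t d = lfs_min(sizeof(buf), size - i);
            int err = cfg->read(cfg, block, off + i, buf, d);
            if (err) {
                return err;
            }
            crc = lfs_crc(crc, buf, d);
        }
        // stale-but-identical data from a previous life of the block can only
        // be told apart if the erase-state bit participates in this CRC
        return (crc == expected_crc) ? 0 : LFS_ERR_CORRUPT;
    }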
2020-11-22 15:07:16 -06:00
Christopher Haster
0ea2871e24 Fixed typo in scripts/readtree.py
Not sure how this went unnoticed; I guess this is the first bug that
needed in-depth inspection after a last-minute argument cleanup in the
debug scripts.
2020-11-22 15:05:22 -06:00
Christopher Haster
f215027fd4 Switched to CRC as seed collection function instead of xor
As noted by gtaska, we are sitting on a better hash-combining function
than xor: CRC. Previous issues with xor were solvable, but relying on
xor for this isn't really worth the risk when we already have a CRC
function readily available.

To quote a study found by gtaska:

https://michiel.buddingh.eu/distribution-of-hash-values

> CRC32 seems to score really well, but its graph is skewed by the results
> of Dataset 5 (binary numbers), which may or may not be too synthetic to
> be considered a fair benchmark. But even if you subtract the results
> from that test, it does not fare significantly worse than other,
> cryptographic hash functions.
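The change itself is small. A hedged sketch of the idea (the helper name is
hypothetical; lfs_crc is the existing utility from lfs_util.h):

    #include "lfs_util.h"

    // fold a block's CRC into the allocator seed with CRC instead of xor,
    // so identical contributions no longer cancel out
    static uint32_t combine_seed(uint32_t seed, uint32_t block_crc) {
        // previously: return seed ^ block_crc;
        return lfs_crc(seed, &block_crc, sizeof(block_crc));
    }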
2020-11-20 00:38:41 -06:00
Christopher Haster
1ae4b36f2a Removed unnecessary randomization of offsets in lfs_alloc_reset
On first read, randomizing the allocator's offset may seem appropriate
for lfs_alloc_reset. However, it ends up using the filesystem-fed
pseudorandom seed in situations it wasn't designed for.

As noted by gtaska, the combination of using xor to feed the seed and
multiple traverses of the same CRCs can cause the seed to flip back to
zero with concerning frequency.
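A minimal illustration of the cancellation (the CRC value here is made up):

    #include <stdint.h>
    #include <assert.h>

    int main(void) {
        uint32_t seed = 0;
        uint32_t crc = 0x9b64c2b0;  // hypothetical CRC seen during a traversal
        seed ^= crc;                // first traverse of the metadata
        seed ^= crc;                // second traverse of the same metadata
        assert(seed == 0);          // the contribution cancels out entirely
        return 0;
    }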

Removed the randomization from lfs_alloc_reset, leaving it only in
lfs_mount.

Found by gtaska
2020-11-20 00:18:13 -06:00
Christopher Haster
480cdd9f81 Fixed incorrect modulus in lfs_alloc_reset
Taking the modulus of the offset by block_size was clearly a typo and should
have been block_count. Interesting to note that later modulo operations during
alloc calculations prevent this from breaking anything, but as gtaska notes it
could skew the wear-leveling distribution.
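A hedged sketch of the fix, not the verbatim littlefs source, just the shape
of it:

    #include "lfs.h"

    static void alloc_reset_sketch(lfs_t *lfs) {
        // the reset offset must wrap by the number of blocks on the device,
        // not by the size of a single block
        // before: lfs->free.off = lfs->seed % lfs->cfg->block_size;
        lfs->free.off = lfs->seed % lfs->cfg->block_count;
    }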

Found by guiserle and gtaska
2020-11-20 00:02:19 -06:00
4 changed files with 290 additions and 607 deletions


@@ -208,22 +208,6 @@ jobs:
script:
- make test TFLAGS+="-k --valgrind"
# test compilation in thread-safe mode
- stage: test
env:
- NAME=littlefs-threadsafe
- CC="arm-linux-gnueabi-gcc --static -mthumb"
- CFLAGS="-Werror -DLFS_THREADSAFE"
if: branch !~ -prefix$
install:
- *install-common
- sudo apt-get install
gcc-arm-linux-gnueabi
libc6-dev-armel-cross
- arm-linux-gnueabi-gcc --version
# report-size will compile littlefs and report the size
script: [*report-size]
# self-host with littlefs-fuse for fuzz test
- stage: test
env:

868 lfs.c

File diff suppressed because it is too large

11 lfs.h

@@ -9,7 +9,6 @@
#include <stdint.h>
#include <stdbool.h>
#include "lfs_util.h"
#ifdef __cplusplus
extern "C"
@@ -175,16 +174,6 @@ struct lfs_config {
// are propagated to the user.
int (*sync)(const struct lfs_config *c);
#ifdef LFS_THREADSAFE
// Lock the underlying block device. Negative error codes
// are propagated to the user.
int (*lock)(const struct lfs_config *c);
// Unlock the underlying block device. Negative error codes
// are propagated to the user.
int (*unlock)(const struct lfs_config *c);
#endif
// Minimum size of a block read. All read operations will be a
// multiple of this value.
lfs_size_t read_size;
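
For context, the lock/unlock hooks in the hunk above are only compiled under
LFS_THREADSAFE. A hedged sketch of how a user might wire them to a pthread
mutex (the callback names and mutex are hypothetical; LFS_ERR_IO is littlefs's
generic I/O error code):

    #include <pthread.h>
    #include "lfs.h"

    static pthread_mutex_t fs_mutex = PTHREAD_MUTEX_INITIALIZER;

    // serialize all block device access behind one mutex
    static int fs_lock(const struct lfs_config *c) {
        (void)c;
        return (pthread_mutex_lock(&fs_mutex) == 0) ? 0 : LFS_ERR_IO;
    }

    static int fs_unlock(const struct lfs_config *c) {
        (void)c;
        return (pthread_mutex_unlock(&fs_mutex) == 0) ? 0 : LFS_ERR_IO;
    }

    // in the user's struct lfs_config, when built with -DLFS_THREADSAFE:
    //     .lock   = fs_lock,
    //     .unlock = fs_unlock,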

scripts/readtree.py

@@ -106,7 +106,7 @@ def main(args):
struct.unpack('<HH', superblock[1].data[0:4].ljust(4, b'\xff'))))
print("%-47s%s" % ("littlefs v%s.%s" % version,
"data (truncated, if it fits)"
- if not any([args.no_truncate, args.tags, args.log, args.all]) else ""))
+ if not any([args.no_truncate, args.log, args.all]) else ""))
# print gstate
print("gstate 0x%s" % ''.join('%02x' % c for c in gstate))