As noted in Python's subprocess documentation:
> This will deadlock when using stdout=PIPE and/or stderr=PIPE and the
> child process generates enough output to a pipe such that it blocks
> waiting for the OS pipe buffer to accept more data.
Curiously, this only became a problem when updating to Ubuntu 20.04
in CI (python3.6 -> python3.8).
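The usual fix is to let communicate() drain the pipes instead of waiting
on the process directly; a minimal sketch of the pattern (cmd is just a
placeholder here):

    import subprocess
    cmd = ['make', 'size']  # placeholder command
    proc = subprocess.Popen(cmd,
        stdout=subprocess.PIPE, stderr=subprocess.PIPE,
        universal_newlines=True)
    # communicate() reads both pipes concurrently, so the child can't
    # block on a full pipe buffer the way it can under proc.wait()
    stdout, stderr = proc.communicate()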
Introduced when updating CI to Ubuntu 20.04: Ubuntu snaps consume
loop devices, which conflicts with our assumption that /dev/loop0
will always be unused. Changed to request a dynamic loop device from
losetup, though it would have been nice if Ubuntu snaps allocated
from the last device or something.
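For reference, losetup's -f flag asks for the first unused loop device,
and --show prints the device it attached; a rough sketch of the dynamic
request from Python (the image path is just an example):

    import subprocess
    # attach the image to the first unused loop device and capture
    # the device path losetup prints, e.g. /dev/loop8
    loop = subprocess.check_output(
        ['sudo', 'losetup', '-f', '--show', 'disk.img'],
        universal_newlines=True).strip()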
- Added the new results to GitHub statuses (61 results)
- Reworked generated release table to include these (16 results, only thumb)
These also required a surprisingly large number of other changes:
- Bumped CI Ubuntu version 18.04 -> 20.04; 22.04 is already on the
  horizon, but not usable in GitHub yet
- Manually upgraded to GCC v10, which is required for the -fcallgraph-info
  flag that scripts/stack.py uses.
- Increased paginated status queries to 100 per page. If we have more
  statuses than this, the status diffs may get much more complicated...
- Forced whitespace in the generated release table to always be nbsp. GitHub
  tables get scrunched rather ugly without this, preferring margins over
  readable tables.
- Added limited support for "∞" results, since this is returned by
  ./scripts/stack.py for recursive functions (see the sketch after this
  list).
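The "∞" handling itself is small; a sketch of the sort of parsing
involved, assuming results arrive as strings (parse_limit is a
hypothetical helper, not the script's actual name):

    import math
    def parse_limit(s):
        # stack.py reports unbounded (recursive) stack usage as "∞"
        return math.inf if s.strip() == '∞' else int(s)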
As a side-note, this increases the number of statuses reported
per-commit from 6 to 61, so hopefully that doesn't cause any problems...
Now with -s/--sort and -S/--reverse-sort for sorting the functions by
size.
You may wonder why add reverse-sort, since its utility doesn't seem worth
the cost to implement (these are just helper scripts, after all). The
reason is that reverse-sort is quite useful on the command-line, where
scrollback may be truncated and you only care about the larger entries.
Outside of the command-line, normal sort is preferred.
Fortunately the difference is just the sign in the sort key.
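A minimal sketch of what that looks like, with made-up function names
and sizes:

    # -s and -S differ only in the sign of the sort key
    entries = [('lfs_dir_fetch', 344), ('lfs_bd_read', 208)]
    reverse = False  # True for -S/--reverse-sort
    entries.sort(key=lambda e: -e[1] if reverse else +e[1])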
Note that -s conflicts with the short flag for --summary, so that short
flag has been removed.
This addresses an outstanding maintainer annoyance: updating dependencies
to bring in new versions on each littlefs release.
But instead of adding a bunch of scripts to the tail end of the release
workflow, the post-release script just triggers a single
"repository_dispatch" event in the newly created littlefs.post-release
repo. From there any number of post-release workflows can be run.
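Triggering the event is a single REST call; a hedged sketch (the owner,
event name, and payload here are assumptions, not the script's actual
values):

    import os, requests
    # the token and version would come from the release workflow
    token = os.environ['GITHUB_TOKEN']
    requests.post(
        'https://api.github.com/repos/OWNER/littlefs.post-release/dispatches',
        headers={
            'Authorization': 'token %s' % token,
            'Accept': 'application/vnd.github.v3+json'},
        json={
            'event_type': 'release',
            'client_payload': {'version': 'vX.Y.Z'}})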
This indirection should let the post-release scripts move much quicker
than littlefs itself, which helps offset how fragile these sorts of
scripts are.
---
Also finished cleaning up the workflows now that they are mostly
working.
Currently this is just lfs.c and lfs_util.c. Previously this included
the block devices, but this meant all of the scripts needed to
explicitly deselect the block devices to avoid reporting build
size/coverage info on them.
Note that test.py still explicitly adds the block devices for compiling
tests, which is their main purpose. Humorously this means the block
devices will probably be compiled into most builds in this repo anyways.
This is pretty much a cleaned up version of the release script that ran
on Travis.
The biggest change is that the release script now also collects the
build results into a table as part of the change notes, which is a nice
addition.
This was lost in the Travis -> GitHub transition. In serializing some of
the jobs, I missed that we need to clean between tests with different
geometry configurations. Otherwise we end up running outdated binaries,
which explains some of the weird test behavior we were seeing.
Also tweaked a few script things:
- Better subprocess error reporting (dump stderr on failure; see the
  sketch after this list)
- Fixed a BUILDDIR rule issue in test.py
- Changed test-not-run status to None instead of undefined
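The error-reporting tweak boils down to capturing stderr and replaying it
when a command fails; one possible shape (python3.5+ subprocess.run, the
run wrapper is a hypothetical name):

    import subprocess, sys
    def run(cmd):
        try:
            return subprocess.run(cmd, check=True,
                stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                universal_newlines=True)
        except subprocess.CalledProcessError as e:
            # dump the child's stderr so CI failures aren't silent
            sys.stderr.write(e.stderr)
            raise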
Now littlefs's Makefile can work with a custom build directory
for compilation output. Just set the BUILDDIR variable and the Makefile
will take care of the rest.
    make BUILDDIR=build size
This makes it very easy to compare builds with different compile-time
configurations or different cross-compilers.
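For example, comparing a native build against a cross-compiled one (the
compiler name here is just an example):

    make BUILDDIR=build-native size
    make BUILDDIR=build-thumb CC=arm-none-eabi-gcc size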
This meant most of code.py's build isolation is no longer needed, so I
revisited the scripts and cleaned/tweaked a number of things.
Also brought code.py in line with coverage.py, fixing some of the
inconsistencies that were created while developing these scripts.
One change to note was removing the inline measuring logic; I realized
this feature is unnecessary thanks to GCC's -fkeep-static-functions and
-fno-inline flags.
Now coverage information can be collected if you provide the --coverage
flag to test.py. Internally this uses GCC's gcov instrumentation along
with a new script, coverage.py, to parse *.gcov files.
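For context, *.gcov lines take the form "count:lineno:source", where the
count may be "-" for non-executable lines or "#####" for executable lines
that never ran; a rough sketch of the parsing (the file name and helper
are just examples):

    import re
    def parse_gcov(path):
        # map line numbers to hit counts, skipping non-executable lines
        hits = {}
        with open(path) as f:
            for line in f:
                m = re.match(r'\s*(\d+|#####|-):\s*(\d+):', line)
                if m and m.group(1) != '-':
                    count = 0 if m.group(1) == '#####' else int(m.group(1))
                    hits[int(m.group(2))] = count
        return hits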
The main use for this is finding coverage info during CI runs. There's a
risk that the instrumentation may make debugging more difficult, so I
decided not to enable coverage collection by default.
Mostly taken from .travis.yml; the biggest changes were around how to get
the status updates to work.
We can't use a token on PRs the same way we could in Travis, so instead
we use a second workflow that checks every pull request for "status"
artifacts and creates the actual statuses in the "workflow_run" event,
where we have full access to repo secrets.
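Once we're in the privileged workflow, creating a status is just another
REST call; a rough sketch (the context and description values are made
up, and the env vars are hypothetical):

    import os, requests
    token = os.environ['GITHUB_TOKEN']
    sha = os.environ['STATUS_SHA']  # hypothetical; from the workflow_run event
    requests.post(
        'https://api.github.com/repos/OWNER/littlefs/statuses/%s' % sha,
        headers={'Authorization': 'token %s' % token},
        json={
            'state': 'success',
            'context': 'sizes/code',
            'description': '17144 B'})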