Pretty bare bones but gets the job done. Unlike the GCC tooling, this
works for release builds, though the performance cost is on the high
side: the full test suite takes over an hour for me with code coverage
support enabled on a release build. I have not timed a debug build.
Given that developers can run just their own tests to get coverage data
for the code they are working on, I feel this is still useful tooling
to have.
This adds the three targets below for Clang and a single new target
for GCC:
- coverage-reset: removes the collected code coverage data and report.
- coverage-report: merges the collected data and generates the report
  (new for GCC).
- coverage-show: merges the collected data, generates the report and
  opens it in the browser.
This relies on llvm-cov and llvm-profdata being available; if they are
not found, code coverage is disabled.
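A minimal sketch of how the tool detection could be wired up; the
variable names here are illustrative, not taken from the patch:

```cmake
# Hypothetical sketch: locate the LLVM coverage tools and turn the
# feature off when either of them is missing.
find_program(LLVM_COV_EXECUTABLE llvm-cov)
find_program(LLVM_PROFDATA_EXECUTABLE llvm-profdata)

if(NOT LLVM_COV_EXECUTABLE OR NOT LLVM_PROFDATA_EXECUTABLE)
  message(STATUS "llvm-cov or llvm-profdata not found, disabling code coverage")
  set(WITH_COMPILER_CODE_COVERAGE OFF)
endif()
```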
Note: a full test run requires an obscene amount of disk space. A
complete test run produces about 125GB and takes 12 minutes to merge,
so provision the COMPILER_CODE_COVERAGE_DATA_DIR folder accordingly.
Example report in PR
This only works with GCC and has only been tested on Linux. The main goal is to
automatically generate the code coverage reports on the buildbot and to publish
them. With some luck, this motivates people to increase test coverage in their
respective areas. Nevertheless, it should be easy to generate the reports
locally too (at least on supported software stacks).
Usage:
1. Create a **debug** build using **GCC** with **WITH_COMPILER_CODE_COVERAGE**
enabled.
2. Run tests. This automatically generates `.gcda` files in the build directory.
3. Run `make/ninja coverage-report` in the build directory.
If everything is successful, this will open a browser with the final
report, which is stored in `build-dir/coverage/report/`. For a bit more
control one can also run the `coverage.py` script directly. This allows
passing in the `--no-browser` option, which may be beneficial when
running it on the buildbot.
Running `make/ninja coverage-reset` deletes all `.gcda` files which resets the
line execution counts.
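A rough sketch of how the targets could be hooked up to the script; the
script location, argument names and the GCC check are assumptions, not
the exact code from the patch:

```cmake
# Hypothetical sketch of the GCC coverage targets.
if(WITH_COMPILER_CODE_COVERAGE AND CMAKE_C_COMPILER_ID STREQUAL "GNU")
  add_custom_target(coverage-report
    COMMAND ${PYTHON_EXECUTABLE}
            ${CMAKE_SOURCE_DIR}/tools/coverage.py   # assumed script location
            --build-directory ${CMAKE_BINARY_DIR}   # assumed argument name
    COMMENT "Analyzing .gcda files and writing the HTML report"
    VERBATIM
  )
  add_custom_target(coverage-reset
    # Delete all .gcda files so line execution counts start at zero again
    # (plain `find` is acceptable since this is Linux-only for now).
    COMMAND find ${CMAKE_BINARY_DIR} -name "*.gcda" -delete
    COMMENT "Deleting collected code coverage data"
    VERBATIM
  )
endif()
```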
The final report has a main entry point (`index.html`) and a separate `.html`
file for every source code file that coverage data was available for. This also
contains some code that is not in Blender's git repository. We could filter
those out, but it also seems interesting (to me anyway), so I just kept it in.
Doing the analysis and writing the report takes ~1 min. The slow part is running
all tests in a debug build which takes ~12 min for me. Since the coverage data
is fairly large and the report also includes the entire source code, file
compression is used in two places:
* The intermediate analysis results for each file are stored in compressed zip
files. This data is still independent of the report html and could be
used to build other tools on top of. I could imagine, for example,
storing the analysis data for each day to gather greater insight into
how coverage changes over time in different parts of the code.
* The analysis data and source code are compressed and base64 encoded
before being embedded into the `.html` files. This makes them much
smaller (5-10x) than embedding the data without compression.
Pull Request: https://projects.blender.org/blender/blender/pulls/126181
Printing that a library is found every time CMake runs isn't helpful.
Restrict these messages to the first execution so messages are limited
to information developers may need to know, such as features being
disabled because of incompatible configurations.
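A minimal sketch of one way to limit such messages to the first run;
the `message_first_run` helper and cache flag names are hypothetical:

```cmake
# Remember in the cache whether this build tree was configured before.
if(NOT DEFINED _cmake_configured_once)
  set(FIRST_RUN TRUE)
  set(_cmake_configured_once TRUE CACHE INTERNAL "")
else()
  set(FIRST_RUN FALSE)
endif()

# Hypothetical wrapper that stays quiet on re-runs.
macro(message_first_run)
  if(FIRST_RUN)
    message(${ARGV})
  endif()
endmacro()

message_first_run(STATUS "Found OpenJPEG")  # printed only on the first run
```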
Listing the "Blender Foundation" as copyright holder implied the Blender
Foundation holds copyright to files which may include work from many
developers.
While keeping copyright on headers makes sense for isolated libraries,
Blender's own code may be refactored or moved between files in a way
that makes the per file copyright holders less meaningful.
Copyright references to the "Blender Foundation" have been replaced with
"Blender Authors", with the exception of `./extern/`: since this
contains libraries which are more isolated, any changes to license
headers there can be handled on a case-by-case basis.
Some directories in `./intern/` have also been excluded:
- `./intern/cycles/`: its own `AUTHORS` file is planned.
- `./intern/opensubdiv/`.
An "AUTHORS" file has been added, using the chromium projects authors
file as a template.
Design task: #110784
Ref !110783.
This makes it convenient to build blender without referencing
pre-compiled libraries which don't always work on newer Linux systems.
Previously I had to rename ../lib while creating the CMakeCache.txt
to ensure my system's libraries would be used.
This change ensures LIBDIR is undefined when WITH_LIBS_PRECOMPILED is
disabled, so any accidental use warns when CMake's `--warn-unused-vars`
argument is given.
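A minimal sketch of the idea; the library directory layout is an
assumption:

```cmake
if(WITH_LIBS_PRECOMPILED)
  # Assumed layout of the pre-compiled libraries checkout.
  set(LIBDIR ${CMAKE_SOURCE_DIR}/../lib/linux_x86_64)
else()
  # Leave LIBDIR undefined so `cmake --warn-unused-vars` flags any
  # accidental reference to it.
  unset(LIBDIR)
endif()
```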
These warnings can reveal errors in logic, so quiet them by checking
if the features are enabled before using variables or by assigning
empty strings in some cases (see the sketch after this list).
- Check CMAKE_THREAD_LIBS_INIT is set before use as CMake docs
note that this may be left unset if it's not needed.
- Remove BOOST/OPENVDB/VULKAN references when disabled.
- Define INC_SYS even when empty.
- Remove PNG_INC from freetype (not defined anywhere).
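A few hypothetical examples of the kind of guards this amounts to; the
list and variable names other than CMAKE_THREAD_LIBS_INIT and INC_SYS
are illustrative:

```cmake
# Only reference CMAKE_THREAD_LIBS_INIT when it is actually set; the
# CMake docs note it may be left empty when no extra flag is needed.
if(CMAKE_THREAD_LIBS_INIT)
  list(APPEND PLATFORM_LINKLIBS ${CMAKE_THREAD_LIBS_INIT})
endif()

# Keep INC_SYS defined even when a library contributes nothing to it.
set(INC_SYS "")

# Only touch feature-specific variables when the feature is enabled.
if(WITH_OPENVDB)
  list(APPEND INC_SYS ${OPENVDB_INCLUDE_DIRS})
endif()
```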
Add a macro that implements something similar to cmake_path's IS_PREFIX
which isn't supported in older versions of CMake.
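A minimal sketch of what such a macro could look like; note that this
simplified version only compares strings and ignores path component
boundaries, unlike the real `cmake_path(IS_PREFIX ...)`:

```cmake
# Simplified string-prefix check ("/opt/libx" matches prefix "/opt/lib").
macro(path_is_prefix prefix path result)
  string(FIND "${path}" "${prefix}" _prefix_pos)
  if(_prefix_pos EQUAL 0)
    set(${result} TRUE)
  else()
    set(${result} FALSE)
  endif()
  unset(_prefix_pos)
endmacro()

path_is_prefix("/opt/blender-libs" "/opt/blender-libs/python" _is_prefix)
# _is_prefix is now TRUE
```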
This caused the build-bot to fail.
Creating `__pycache__` directories in SVN's lib/ directory can cause
updating SVN to fail. Add the -B flag when TEST_PYTHON_EXE from LIBDIR
is used so Python doesn't generate this cache.
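A minimal sketch of how the flag could be threaded into test
invocations; the interpreter path, extra-args variable, test name and
script path are assumptions:

```cmake
# Use the pre-built interpreter from LIBDIR and pass -B so it never
# writes __pycache__ directories into the SVN-managed lib/ checkout.
set(TEST_PYTHON_EXE ${LIBDIR}/python/bin/python3.10)  # assumed location
set(TEST_PYTHON_EXE_EXTRA_ARGS "-B")                  # hypothetical variable

add_test(
  NAME example_python_test  # illustrative test name
  COMMAND ${TEST_PYTHON_EXE} ${TEST_PYTHON_EXE_EXTRA_ARGS}
          ${CMAKE_SOURCE_DIR}/tests/python/example_test.py  # assumed path
)
```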
* Use Python executable from lib folder since it's not installed.
* Make bpy module test work for portable install.
* Disable gtests which don't work with different Python link flags
and shared library locations.
Ref D15957
PLATFORM_BUNDLED_LIBRARIES gathers shared libraries that will be installed
to the lib/ folder. The Blender executable gets a relative rpath pointing to
this folder as part of the install step.
The build rpath is different and uses absolute paths, so that it works for
executables like tests that are in different locations, and to support the
case where the build and install folders are different.
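A minimal sketch of the two rpath flavors described above; the exact
paths are assumptions:

```cmake
set_target_properties(blender PROPERTIES
  # After install, resolve bundled libraries relative to the executable.
  INSTALL_RPATH "$ORIGIN/lib"
  # During the build, use an absolute path so tests in other directories
  # (or a separate install folder) still find the shared libraries.
  BUILD_RPATH "${CMAKE_BINARY_DIR}/bin/lib"
)
```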
The system is already used for the OpenMP library on macOS. But on Linux it
will only kick in once we start using shared libraries for dependencies.
This also removes Mesa libraries from the old location, as these would cause
Blender to start with software OpenGL.
Ref T99618
Makes it slightly easier to test whether the installed module works
or not. Depends on D10656
Ref T86579
Reviewed By: mont29, campbellbarton
Differential Revision: https://developer.blender.org/D10665
Avoid CTest errors and exit codes due to test failures which depend
on the Blender executable.
Ref T86579
Reviewed By: mont29, campbellbarton
Differential Revision: https://developer.blender.org/D10656
This adds a new `--debug-exit-on-error` flag. When it is set, Blender
will abort with a non-zero exit code when there are internal errors.
Currently, "internal errors" includes memory leaks detected by
guardedalloc and error/fatal log entries in clog.
The new flag is passed to Blender in various places where automated
tests are run. Furthermore, the `--debug-memory` flag is used in tests,
because that makes the verbose output more useful, when dealing
with memory leaks.
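A hypothetical example of a test invocation with the new flags; the
executable variable, test name and script path are illustrative:

```cmake
add_test(
  NAME example_memleak_guarded_test   # illustrative name
  COMMAND ${TEST_BLENDER_EXE}         # assumed variable for the executable
          --background
          --debug-memory
          --debug-exit-on-error
          --python ${CMAKE_SOURCE_DIR}/tests/python/example_test.py
)
```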
Reviewers: brecht, sergey
Differential Revision: https://developer.blender.org/D8665
Blender was not configured to exit with non-zero return code on Python errors.
A bunch of tests worked around this but not all. This removes the need for such
workarounds.
CentOS on the buildbot still runs Python 3.6, which is also used for the
unit tests. This means that the tests can't use language features that
are available to Blender itself. And testing with a different version of
Python than will be used by the actual code seems like a bad idea to me.
This commit adds `TEST_PYTHON_EXECUTABLE` as an advanced CMake option. This
will allow us to set a specific Python executable when we need it. When
not set, a platform-specific default will be used:
- On Windows, the `python….exe` from the installation directory. This is
just like before this patch, except that this patch adds the
overridability.
- On macOS/Linux, the `${PYTHON_EXECUTABLE}` as found by CMake.
Every platform should now have a value (configured by the user or
detected by CMake) for `TEST_PYTHON_EXE`, so there is no need to allow
running without. This also removes the need to have some Python files
marked as executable.
If `TEST_PYTHON_EXE` is not user-configured, and thus the above default
is used, a status message is logged by CMake. I've seen this a lot in
other projects, and I like that it shows which values are auto-detected.
However, it's not common in Blender, so if we want we can either remove
it now, or remove it after the buildbot has been set up correctly.
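A minimal sketch of the described defaults; the Windows install layout,
the `TEST_INSTALL_DIR` variable and the message wording are assumptions:

```cmake
set(TEST_PYTHON_EXE "" CACHE PATH "Python interpreter used to run the tests")
mark_as_advanced(TEST_PYTHON_EXE)

if(NOT TEST_PYTHON_EXE)
  if(WIN32)
    # Interpreter shipped in the installation directory (assumed layout).
    set(TEST_PYTHON_EXE "${TEST_INSTALL_DIR}/python/bin/python.exe")
  else()
    # macOS/Linux: interpreter already located by CMake.
    set(TEST_PYTHON_EXE "${PYTHON_EXECUTABLE}")
  endif()
  message(STATUS "Tests: Using auto-detected Python: ${TEST_PYTHON_EXE}")
endif()
```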
Differential Revision: https://developer.blender.org/D7395
Reviewed by: campbellbarton, mont29, sergey
Problem was twofold:
1) `GENERATOR_IS_MULTI_CONFIG` is a property, not a variable, so the
   test for it would always be false. Unless you set a custom
   CMAKE_INSTALL_PREFIX (like the buildbot does), the unit tests
   would have a wrong working directory and complain about missing
   dlls or the blender executable.
2) Tests added outside of `/test` (like libmv) would have no working
folder set since the variable would not be visible for them.
Consulted @sergey, who voiced the opinion that duplicating the code
into the test macro was slightly less evil than moving it to the main
CMakeLists.txt.
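A minimal sketch of reading the property correctly; the multi-config
install layout shown here is an assumption:

```cmake
# GENERATOR_IS_MULTI_CONFIG must be read as a global property; testing
# it like a variable silently yields FALSE.
get_property(GENERATOR_IS_MULTI_CONFIG GLOBAL PROPERTY GENERATOR_IS_MULTI_CONFIG)
if(GENERATOR_IS_MULTI_CONFIG)
  # Multi-config generators (e.g. Visual Studio) put binaries in a
  # per-config subdirectory, so the working directory must include it.
  set(TEST_INSTALL_DIR "${CMAKE_INSTALL_PREFIX}/$<CONFIG>")  # assumed layout
else()
  set(TEST_INSTALL_DIR "${CMAKE_INSTALL_PREFIX}")
endif()
```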
Simply disabled Python tests; they can't be run anyway (since the
blender target is not enabled) and we don't have any player-related
tests in that folder.