The typical order is vertex, edge, face (polygon), corner (loop), but in
these three functions polys and loops were reversed. Also use the more
typical "num" variable names rather than "len".
Create a "nomain" mesh when converting intermediate representations
to a Mesh, meaning those areas don't have to know about data-block
names or the main database, and also that the boilerplate of adding
attributes individually can be avoided. The attribute arrays aren't
copied here, so the performance should be unaffected.
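As a rough illustration, creating such a mesh might look like the sketch
below; the counts are placeholders and the exact `BKE_mesh_new_nomain`
signature is assumed from the surrounding commits, not verified:
```
#include "BKE_mesh.h"

/* Hypothetical sketch: counts are placeholders, and the argument order is
 * assumed to follow the (vertex, edge, face, corner) convention. */
const int verts_num = 4, edges_num = 4, faces_num = 1, corners_num = 4;
Mesh *mesh = BKE_mesh_new_nomain(verts_num, edges_num, faces_num, corners_num);
/* The result has no name and is not registered in the main database, so
 * conversion code can fill its attribute arrays directly. */
```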
Add the ability to retrieve implicit sharing info directly from the
C++ attribute API, which simplifies memory usage and performance
optimizations making use of it. This commit uses the additions to
the API to avoid copies in a few places:
- The "rest_position" attribute in the mesh modifier stack
- Instance on Points node
- Instances to points node
- Mesh to points node
- Points to vertices node
Many files are affected because in order to include the new information
in the API's returned data, I had to switch a bunch of types from
`VArray` to `AttributeReader`. This generally makes sense anyway, since
it allows retrieving the domain, which wasn't possible before in some
cases. I overloaded the `*` dereference operator for some syntactic sugar
to avoid the (very ugly) `.varray` that would be necessary otherwise.
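A simplified sketch of the pattern (not the exact Blender declaration):
dereferencing the reader just forwards to the wrapped virtual array.
```
/* Simplified sketch, assuming a reader that wraps a virtual array
 * together with its domain. */
template<typename T> struct AttributeReader {
  VArray<T> varray;
  eAttrDomain domain;

  /* `*reader` now reads better than `reader.varray`. */
  const VArray<T> &operator*() const
  {
    return this->varray;
  }
};
```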
Pull Request: https://projects.blender.org/blender/blender/pulls/107059
While replacing the extension with an empty string works, it required
a redundant string-size argument which took a dummy value in some cases.
Avoid that by adding a function that strips the extension directly.
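A minimal sketch of what such a helper could look like (the name and exact
behavior here are assumptions, not the actual function):
```
#include <cstring>

/* Hypothetical sketch: truncate `path` in place at the final extension dot,
 * returning whether anything was stripped. */
bool path_extension_strip(char *path)
{
  char *dot = std::strrchr(path, '.');
  /* No dot, or the dot belongs to a directory component. */
  if (dot == nullptr || std::strchr(dot, '/') != nullptr) {
    return false;
  }
  *dot = '\0';
  return true;
}
```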
For non-object parents (so bones & vertices), the parent is now also
explicitly checked for animation. In other words: having an animated
parent will cause the transform of the child to be written to Alembic/USD
on every frame (as if it were animated itself).
Implements #95966, as the final step of #95965.
This commit changes the storage of mesh edge vertex indices from the
`MEdge` type to the generic `int2` attribute type. This follows the
general design for geometry and the attribute system, where the data
storage type and the usage semantics are separated.
The main benefit of the change is reduced memory usage: the
memory required to store mesh edges is reduced by a third. For example,
this saves 8MB on a 1 million vertex grid. This also gives performance
benefits to any memory-bound mesh processing algorithm that uses edges.
Another benefit is that all of the edge's vertex indices are
contiguous. In a few cases, it's helpful to process all of them as
`Span<int>` rather than `Span<int2>` (see the sketch after the notes
below). Similarly, the type is more likely to match a generic format
used by a library, or code that shouldn't know about specific Blender
`Mesh` types.
Various Notes:
- The `.edge_verts` name is used to reflect a mapping between domains,
similar to `.corner_verts`, etc. The period means that the data
shouldn't be changed arbitrarily by the user or procedural operations.
- `edge[0]` is now used instead of `edge.v1`
- Signed integers are used instead of unsigned to reduce the mixing
of signed-ness, which can be error-prone.
- All of the previously used core mesh data types (`MVert`, `MEdge`,
`MLoop`, `MPoly`) are now deprecated; only generic types are used.
- The `vec2i` DNA type is used in the few C files where necessary.
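As a sketch of the new access pattern (the accessor names are assumed from
the post-change API):
```
#include "BLI_math_vector_types.hh"
#include "BLI_span.hh"

/* Sketch: edges are now a plain array of vertex index pairs. */
const blender::Span<blender::int2> edges = mesh.edges();
const int first_vert = edges[0][0]; /* Previously `edge.v1`. */
/* Because the indices are contiguous, they can also be viewed flat. */
const blender::Span<int> edge_verts = edges.cast<int>();
```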
Pull Request: https://projects.blender.org/blender/blender/pulls/106638
A few fixes included here:
- Use `reserve` properly to add space after the first mesh
- Add to the end of the UVs array instead of replacing it for every mesh
  (both fixes are sketched below)
Also, a cleanup/simplification:
- Split the face-size and face-vertex loops, since they are independent
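A minimal sketch of the first two fixes, with placeholder names, assuming
`blender::Vector` storage:
```
#include "BLI_math_vector_types.hh"
#include "BLI_span.hh"
#include "BLI_vector.hh"

/* Sketch: grow capacity relative to the current size and append, instead
 * of resetting the array for every mesh. */
void append_mesh_uvs(blender::Vector<blender::float2> &uvs,
                     const blender::Span<blender::float2> mesh_uvs)
{
  uvs.reserve(uvs.size() + mesh_uvs.size());
  uvs.extend(mesh_uvs); /* Adds to the end instead of replacing. */
}
```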
Pull Request: https://projects.blender.org/blender/blender/pulls/106967
This revision moves the vertex color reading and writing in the USD import and export functions over to the new Mesh Attributes API. I have removed anything else (new features or unnecessary changes) that was present in the prior patches to focus only on this task.
On the import side, I've introduced a class method named read_custom_data, which contains the call for reading mesh colors. As requested, this function is intended to be the starting point for future Attribute reads, with methods like the new read_color_data* methods being called when a USD primvar matches a specific heuristic. UVs will also need to be processed here in the future (not in this revision). In a later patch, any primvars that do not match a heuristic can be imported as generic Attributes. There is a matching function on the export side, write_custom_data.
Attached is a .blend file for testing. The plane has five Color Attributes. The colors should be visibly the same when exported and re-imported.
I have also enabled color attribute imports by default. I believe it would be counterintuitive for most users for this feature to be off; it means that at some point, a person round-tripping with default settings would lose data.
Pull Request: https://projects.blender.org/blender/blender/pulls/105347
This integrates the new implicit-sharing system (from fbcddfcd68)
with `CustomData`. Now the potentially long arrays referenced by custom
data layers can be shared between different systems but most importantly
between different geometries. This makes e.g. copying a mesh much cheaper
because none of the attributes has to be copied. Only when an attribute
is modified does it have to be copied.
Also see the original design task: #95845.
This reduces memory and improves performance by avoiding unnecessary
data copies. For example, the used memory after loading a highly
subdivided mesh is reduced from 2.4GB to 1.79GB. This is about 25%
less, which is the expected amount because in `main` there are 4 copies
of the data:
1. The original data which is allocated when the file is loaded.
2. The copy for the depsgraph allocated during depsgraph evaluation.
3. The copy for the undo system allocated when the first undo step is
created right after loading the file.
4. GPU buffers allocated for drawing.
This patch only gets rid of copy number 2, for the depsgraph. In theory
the other copies can be removed in follow-up PRs as well.
-----
The patch has three main components:
* Slightly modified `CustomData` API to make it work better with implicit
sharing:
* `CD_REFERENCE` and `CD_DUPLICATE` have been removed because they are
meaningless when implicit-sharing is used.
* `CD_ASSIGN` has been removed as well because it's not an allocation
type anyway. The functionality of using existing arrays as custom
data layers has not been removed though.
* This can still be done with `CustomData_add_layer_with_data`, which
also has a new argument that allows passing in information about
whether the array is shared (see the sketch after this list).
* `CD_FLAG_NOFREE` has been removed because it's no longer necessary. It
only existed because of `CD_REFERENCE`.
* `CustomData_copy` and `CustomData_merge` have been split up into
functions that copy the actual attribute values and those that do
not. The latter functions now have the `_layout` suffix
(e.g. `CustomData_copy_layout`).
* Changes in `customdata.cc` to make it actually use implicit-sharing.
* Changes in various other files to adapt to the changes in `BKE_customdata.h`.
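A sketch of handing an existing array to `CustomData_add_layer_with_data`,
with the signature assumed from the description above:
```
/* Sketch (signature assumed): pass ownership of an existing allocation
 * to CustomData, plus optional implicit-sharing info. */
blender::float3 *positions = static_cast<blender::float3 *>(
    MEM_malloc_arrayN(size_t(verts_num), sizeof(blender::float3), __func__));
/* ...fill `positions`... */
CustomData_add_layer_with_data(
    &mesh->vdata, CD_PROP_FLOAT3, positions, verts_num, /*sharing_info=*/nullptr);
```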
Pull Request: https://projects.blender.org/blender/blender/pulls/106228
fast_float is an external library, so move it to the
system includes, which have less strict compiler flags applied.
This matches how other IO modules use this library as well.
Pull Request: https://projects.blender.org/blender/blender/pulls/106892
This checkin uses OIIO to replace the image save/load code for BMP,
DDS, DPX, HDR, PNG, TGA, and TIFF.
This simplifies our build environment, reduces binary duplication,
removes large amounts of hard to maintain code, and fixes some bugs
along the way.
It should also help reduce rare differences between Blender and Cycles,
which already uses OIIO in most situations, or at least make them
easier to solve once discovered.
This is a continuation of the work for #101413
Pull Request: https://projects.blender.org/blender/blender/pulls/105785
Implements #95967.
Currently the `MPoly` struct is 12 bytes, and stores the index of a
face's first corner and the number of corners/verts/edges. Polygons
and corners are always created in order by Blender, meaning each
face's corners will be after the previous face's corners. We can take
advantage of this fact and eliminate the redundancy in mesh face
storage by only storing a single integer corner offset for each face.
The size of the face is then encoded by the offset of the next face.
A single integer is 4 bytes rather than the 12 bytes of `MPoly`,
so this reduces the memory usage for face data to a third of its
previous size.
The same method is used for `CurvesGeometry`, so Blender already has
an abstraction to simplify using these offsets called `OffsetIndices`.
This class is used to easily retrieve a range of corner indices for
each face. This also gives the opportunity for sharing some logic with
curves.
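As a sketch, iterating faces through `OffsetIndices` looks roughly like
this (names and data are placeholders):
```
#include "BLI_offset_indices.hh"
#include "BLI_vector.hh"

/* Sketch: one extra offset encodes every face's corner range.
 * Two faces here: corners [0, 4) and [4, 7). */
const blender::Vector<int> face_offsets = {0, 4, 7};
const blender::OffsetIndices<int> faces(face_offsets);
for (const int i : faces.index_range()) {
  const blender::IndexRange face = faces[i];
  /* `face.start()` is the first corner index, `face.size()` the count. */
}
```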
Another benefit of the change is that the offsets and sizes stored in
`MPoly` can no longer disagree with each other. Storing faces in the
order of their corners can simplify some code too.
Face/polygon variables now use the `IndexRange` type, which comes with
quite a few utilities that can simplify code.
Some notes:
- The offset integer array has to be one longer than the face count to
avoid a branch for every face, which means the data is no longer part
of the mesh's `CustomData`.
- We lose the ability to "reference" an original mesh's offset array
until more reusable CoW from #104478 is committed. That will be added
in a separate commit.
- Since they aren't part of `CustomData`, poly offsets often have to be
copied manually.
- To simplify using `OffsetIndices` in many places, some functions and
structs in headers were moved to only compile in C++.
- All meshes created by Blender use the same order for faces and face
corners, but just in case, meshes with mismatched order are fixed by
versioning code.
- `MeshPolygon.totloop` is no longer editable in RNA. This API break is
necessary here unfortunately. It should be worth it in 3.6, since
that's the best way to allow loading meshes from 4.0, which is
important for an LTS version.
Pull Request: https://projects.blender.org/blender/blender/pulls/105938
This change aimed to solve the following issues:
- A possible threading issue of two tests writing to the same
file, depending on how ctest is invoked.
- Tests using the release directory, and potentially leaving
temp files behind on test failure, breaking code signing on
macOS.
Pull Request: https://projects.blender.org/blender/blender/pulls/106311
The goal is to resolve the confusion of using "All rights reserved" when
licensing code under an open-source license.
The phrase "All rights reserved" comes from a historical convention that
required this phrase for the copyright protection to apply. This convention
is no longer relevant.
However, even though the phrase has no meaning in establishing the copyright,
it has not lost meaning in terms of licensing.
This change makes it so code under the Blender Foundation copyright does
not use "all rights reserved". This also matches how the GPL license itself
suggests applying it to source code:
  <one line to give the program's name and a brief idea of what it does.>
  Copyright (C) <year> <name of author>
  This program is free software ...
This change does not change the copyright notice in cases where the copyright
is dual (BF and an author), or just an author of the code. It also does
not change copyright which is inherited from NaN Holding BV, as that needs
some further investigation about the proper way to handle it.
Similar to the recent OBJ UV values merging commit (05a63e3705): build
the mapping of (vertex, UV) pairs by going over the face loops directly,
instead of using BKE_mesh_uv_vert_map_get_vert and then having an additional
map on top. Provide the mapping in ready-to-use flat arrays, instead
of building a map and then building arrays out of that in a separate
pass.
While at it, avoid the extra cost of building all this complicated
mapping when there are no UVs or UVs are not being exported.
Timing tests on exporting several models into binary PLY file
(Win10, Ryzen 5950X):
- Suzanne subdivided to level 6 (2.1M verts): 0.93s -> 0.68s
- Rungholt Minecraft level (9.7M verts): 3.3s -> 2.3s
- Stanford Lucy 3D scan (14.0M verts, no UVs): 5.2s -> 1.5s
For example
```
OIIOOutputDriver::~OIIOOutputDriver()
{
}
```
becomes
```
OIIOOutputDriver::~OIIOOutputDriver() {}
```
Saves quite some vertical space, which is especially handy for
constructors.
Pull Request: https://projects.blender.org/blender/blender/pulls/105594
Previous code was using BKE_mesh_uv_vert_map_get_vert to somewhat
detect identical UV values on mesh vertices. But this was not
handling the case where separate mesh vertices use the same UV
values (e.g. a cube with all six faces mapped to the whole image).
Replace usage of BKE_mesh_uv_vert_map_get_vert with a simple "unique UV
value" map, very similar to how OBJ normals are calculated for export.
Measurements on a somewhat extreme case: exporting the "Rungholt" Minecraft
level from https://casual-effects.com/data (Win10, Ryzen 5950X): export time
goes 2.9 sec -> 2.2 sec, and the resulting file size 486MB -> 231MB. This
particular model has a lot of mesh faces mapping to the same places
in the UV map.
On less extreme cases the timings are similar between old and new
code, with a tiny speedup in most tests I've tried.
The UV attribute refactor in 6c774feb changed the logic from "UV data does not exist" to "there's no active UV layer". The repro mesh has a UV layer, but no UV data, due to the mesh being only a point cloud.
Pull Request: https://projects.blender.org/blender/blender/pulls/106185
Fixed a bug where the active color wasn't being set
on imported meshes, resulting in no colors displaying
in the viewport.
This bug has been in the code for a long time. However,
the colors have been displaying correctly until recently,
so this issue wasn't previously apparent.
Also, changed custom color data name from "displayColors"
to "displayColor", to match the actual USD primvar name.
(This was a typo in the original code.)
Note that pull request
https://projects.blender.org/blender/blender/pulls/104542
addresses other issues in the color import code (e.g.,
converting all color primvars and not just "displayColor",
avoiding hard-coding of attribute names, handling all
interpolation types, etc.).
However, the current commit is meant as a short-term fix
to a regression, where the "displayColor" attribute does
not render in the viewport at all, until the above pull request
can be merged.
Implements #102359.
Split the `MLoop` struct into two separate integer arrays called
`corner_verts` and `corner_edges`, referring to the vertex each corner
is attached to and the next edge around the face at each corner. These
arrays can be sliced to give access to the edges or vertices in a face.
Then they are often referred to as "poly_verts" or "poly_edges".
The main benefits are halving the necessary memory bandwidth when only
one array is used and simplifications from using regular integer indices
instead of a special-purpose struct.
The commit also starts a renaming from "loop" to "corner" in mesh code.
Like the other mesh struct of array refactors, forward compatibility is
kept by writing files with the older format. This will be done until 4.0
to ease the transition process.
Looking at a small portion of the patch should give a good impression
of the rest of the changes. I tried to make the changes as small as
possible so it's easy to tell the correctness from the diff. Though I
found Blender developers have been very inventive over the last decade
when finding different ways to loop over the corners in a face.
For performance, nearly every piece of code that deals with `Mesh` is
slightly impacted. Any algorithm that is memory bottle-necked should
see an improvement. For example, here is a comparison of interpolating
a vertex float attribute to face corners (Ryzen 3700x):
**Before** (Average: 3.7 ms, Min: 3.4 ms)
```
threading::parallel_for(loops.index_range(), 4096, [&](IndexRange range) {
for (const int64_t i : range) {
dst[i] = src[loops[i].v];
}
});
```
**After** (Average: 2.9 ms, Min: 2.6 ms)
```
array_utils::gather(src, corner_verts, dst);
```
That makes the average timing about 28% faster, and it's also a
simplification, since an index-based routine can be used instead.
For more examples using the new arrays, see the design task.
Pull Request: https://projects.blender.org/blender/blender/pulls/104424
The input file contains a partial cube, with two regular faces, two
loose edges and one loose vertex. This checks, for example, whether the
loose vertex is included in the output when exporting with UVs
(loose verts do not have UVs in Blender, so they should get a (0,0) UV
in PLY).
Follow connections when reading the varname attribute of a primvar
reader, and support both string and TfToken types for the varname.
A unit test is also provided.
Authored by Apple: Matt McLin
Pull Request: https://projects.blender.org/blender/blender/pulls/105508
1) There was a logic error in FileBuffer: when trying to
add a new 64kb chunk to hold output, it was adding an empty chunk,
but reserving spare capacity for 64 thousand chunks instead.
So that was a bit of pointless juggling that gained nothing.
2) In UV_vertex_key, instead of trying to combine three members into
a hash value, badly, by doing some ad-hoc shifts and xors, use
get_default_hash_3 instead, which combines them properly (see the
sketch below). Also avoid copying the whole hash map object.
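A sketch of the second fix (the member names are assumptions):
```
#include <cstdint>

#include "BLI_hash.hh"
#include "BLI_math_vector_types.hh"

/* Sketch with assumed members: let BLI combine the fields instead of
 * ad-hoc shifts and xors. */
struct UV_vertex_key {
  blender::float2 uv;
  int vertex_index;

  uint64_t hash() const
  {
    return blender::get_default_hash_3(uv.x, uv.y, vertex_index);
  }
};
```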
On my windows box (Ryzen 5950X, VS2022), exporting Stanford Lucy 3D
scan:
- Binary: 13.4 -> 5.4 sec
- ASCII: 29.3 -> 14.6 sec
So basically 2x faster for two tiny changes.
Fixes:
* General:
- Was assuming that positions/normals/UVs are always `float`, and colors
are always `uchar`. But e.g. positions are sometimes doubles, or colors
are floats etc.
- Was assuming that the `face` element is always just a single
vertex-index list property. But some files have additional arbitrary
per-face properties.
* ASCII importer:
- Was assuming that file elements are always in this order: `vertex`,
`face`, `edge`. That is not necessarily the case.
- Was only handling the space character as a separator, but not other
whitespace, e.g. tabs.
- Was not checking for partially present properties (e.g. if just `nx` is
present but not `ny` and `nz`, it was still assuming the whole normal is there).
* Binary importer:
- Was assuming that faces are always 1-byte list size and 4-byte indices.
- Was assuming that edge vertex indices are always 4-byte.
The assumptions above often led to crashes, garbage data,
or a vertex component's presence not being detected. E.g. the binary importer
was always reading 4 bytes for vertex indices, but if a file has 2-byte
indices then that would read too much, and later on it would start reading
garbage. The ASCII importer was doing out-of-bounds reads into the properties
array if some assumptions were not correct, etc.
Improvements:
- Some ply files use different property type names, e.g. `float32`,
`uint16` or `int8` instead of `float`, `ushort`, `char`; now that is
supported too.
- Handles the `tristrips` element. Some files out there do not have `face`
but rather have a triangle strip, internally using -1 indices for strip
restarts (see the sketch below).
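A sketch of how tristrip expansion can work (not the importer's exact code):
emit one triangle per strip vertex after the first two, flipping the winding
each step, and treat -1 as a restart.
```
#include "BLI_math_vector_types.hh"
#include "BLI_span.hh"
#include "BLI_vector.hh"

/* Sketch: expand a triangle strip with -1 restart markers into triangles. */
static void tristrip_to_triangles(const blender::Span<int> strip,
                                  blender::Vector<blender::int3> &r_tris)
{
  int a = -1, b = -1;
  bool flip = false;
  for (const int c : strip) {
    if (c == -1) {
      /* Restart marker: forget the previous two vertices. */
      a = b = -1;
      flip = false;
      continue;
    }
    if (a != -1 && b != -1) {
      /* Alternate winding so all triangles face the same way. */
      r_tris.append(flip ? blender::int3(b, a, c) : blender::int3(a, b, c));
      flip = !flip;
    }
    a = b;
    b = c;
  }
}
```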
Performance:
While doing the above fixes/improvements, in order to fix some things it was
more convenient to also make them more performant :) Now it avoids splitting
each ASCII line into a vector of `std::string` objects, for example, or
reading a binary file in tiny chunks.
Generally this is now 2x-4x faster than the previous state of C++ importer.
The code has changed quite a bit to achieve both the fixes and performance:
- Instead of separate files for ASCII and Binary reading, they are now both
in `ply_import_data.cc/hh`. Reason is that all of the "find correct property
indices" logic is the same between both (and that bit was mostly missing
previously).
- Instead of using raw C++ `fstream` object and reading line by line (which
in turn reads one byte at a time) or reading field by field, now there's a
`ply_import_buffer.cc/hh` that reads in 64KB chunks and does any needed
logic for the ASCII/header part to properly split into lines. This avoids
all the mutex locking, locale lookups, C++ abstraction layers overhead
that C++ iostreams have when doing a ton of tiny reads.
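The core of that buffering approach, sketched with plain stdio (the real
`ply_import_buffer` is more involved):
```
#include <cstdio>
#include <vector>

/* Sketch: one big read per 64KB instead of per-byte iostream reads. */
class ChunkedReader {
  FILE *file_;
  std::vector<char> buffer_;
  size_t pos_ = 0, len_ = 0;

 public:
  explicit ChunkedReader(FILE *file) : file_(file), buffer_(64 * 1024) {}

  /* Returns the next byte, refilling the buffer when it runs out. */
  int next_byte()
  {
    if (pos_ == len_) {
      len_ = fread(buffer_.data(), 1, buffer_.size(), file_);
      pos_ = 0;
      if (len_ == 0) {
        return EOF;
      }
    }
    return static_cast<unsigned char>(buffer_[pos_++]);
  }
};
```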
Tests:
Extended test coverage with new files (committed in svn revision 63274).
11 new "situations", in both ascii and binary forms. Additionally, now also
has a Big Endian binary file to test; BE codepath was completely untested
before.
Overall reworked the tests so that they don't do the whole "load dummy
blend file, import ply file on top, go over meshes, check them", but rather
do a simpler "run ply importer logic that fills in PlyData structure, check
that". The followup conversion to blender meshes is fairly trivial and the
tests were not really covering that in a useful way.
Pull Request: https://projects.blender.org/blender/blender/pulls/105842
This simplifies the usage of the API and is preparation for #104478.
`CustomData_add_layer` and `CustomData_add_layer_named` now have corresponding
`*_with_data` functions that should be used when creating a layer from existing data.
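Sketched usage of the split, with argument lists abbreviated and assumed
from the description, not verified against the headers:
```
/* Sketch (arguments assumed): let CustomData allocate when there is
 * no data yet... */
void *data = CustomData_add_layer(
    &mesh->vdata, CD_PROP_FLOAT3, CD_SET_DEFAULT, verts_num);
/* ...and hand over an existing allocation otherwise. */
CustomData_add_layer_with_data(
    &mesh->vdata, CD_PROP_FLOAT3, existing_array, verts_num);
```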
Pull Request: https://projects.blender.org/blender/blender/pulls/105708