The previous version tried to use a quick and simple process for the case
where only the smooth/flat status of faces was considered.
The thing is, even then the algorithm did not actually work in all
possible situations: e.g. two smooth faces sharing a single vertex, but
no common edge, would not get that vertex split, leading to incorrect
shading etc.
So the split normals code is now tweaked slightly to be able to generate
lnor spaces even when autosmooth is disabled, and we always go that way
when splitting faces.
Using smooth fans from clnor spaces is not only the only way to get 100%
correct results, it also makes the face split code simpler.
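To make the failure case concrete, here is a small self-contained sketch
(hypothetical code, not Blender's actual implementation): grouping the faces
around a vertex into fans of edge-connected faces shows why a purely
per-edge smooth/flat test misses the needed split.

```cpp
#include <cstdio>
#include <utility>
#include <vector>

/* Faces incident to one vertex; faces sharing an edge at that vertex belong
 * to the same smooth fan. The number of distinct fans is the number of
 * vertex copies needed after splitting. All names here are hypothetical. */
static int count_smooth_fans(int num_faces,
                             const std::vector<std::pair<int, int>> &shared_edges)
{
  std::vector<int> parent(num_faces);
  for (int i = 0; i < num_faces; i++) {
    parent[i] = i;
  }
  /* Plain union-find with path halving; fine for a sketch. */
  auto find = [&parent](int a) {
    while (parent[a] != a) {
      a = parent[a] = parent[parent[a]];
    }
    return a;
  };
  for (const auto &e : shared_edges) {
    parent[find(e.first)] = find(e.second);
  }
  int fans = 0;
  for (int f = 0; f < num_faces; f++) {
    fans += (find(f) == f);
  }
  return fans;
}

int main()
{
  /* The "bow-tie" case from above: two smooth faces share the vertex but
   * no edge, so no pair gets unioned and two fans result. */
  printf("%d\n", count_smooth_fans(2, {}));       /* 2: vertex must be split */
  printf("%d\n", count_smooth_fans(2, {{0, 1}})); /* 1: single fan, no split */
  return 0;
}
```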
One problem was that the `_mm_blendv_ps` emulation was always used, even
when the instruction was supported. The other was that the emulation
function itself was wrong.
Thanks a lot to Ray Molenkamp for tracking this one down.
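For reference, a minimal sketch of both pitfalls (my reconstruction, not the
actual Cycles code): gate the fallback on SSE4.1 support, and keep the
and/andnot operands straight, since swapping them silently inverts the blend.

```cpp
#include <xmmintrin.h> /* SSE */
#ifdef __SSE4_1__
#  include <smmintrin.h> /* native _mm_blendv_ps */
#endif

/* Select lanes of `b` where `mask` is all-ones, lanes of `a` where it is
 * all-zeros. (The emulation assumes fully set/cleared mask lanes, unlike
 * the native instruction, which only reads the sign bit.) */
static inline __m128 blendv_ps(__m128 a, __m128 b, __m128 mask)
{
#ifdef __SSE4_1__
  return _mm_blendv_ps(a, b, mask); /* use the real instruction when available */
#else
  /* Swapping the two terms here is an easy way to get this wrong. */
  return _mm_or_ps(_mm_and_ps(mask, b), _mm_andnot_ps(mask, a));
#endif
}
```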
Apparently with Maya in a certain configuration, it's possible to have an
Alembic object without a schema in the Alembic file. This is now handled
properly, instead of crashing on a null pointer.
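A hedged sketch of the kind of guard this implies (hypothetical helper, not
the actual importer code): check the object's metadata before assuming any
schema is present.

```cpp
#include <Alembic/Abc/All.h>
#include <Alembic/AbcGeom/All.h>

/* Returns true only when the object's metadata matches a schema we can
 * read; an object written without a schema falls through to `false`
 * instead of being dereferenced later. */
static bool has_known_schema(const Alembic::Abc::IObject &object)
{
  const Alembic::Abc::MetaData &md = object.getMetaData();
  return Alembic::AbcGeom::IPolyMeshSchema::matches(md) ||
         Alembic::AbcGeom::ICurvesSchema::matches(md) ||
         Alembic::AbcGeom::IXformSchema::matches(md);
}
```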
This affects the curve display color setting, but is really intended
for future per-curve options.
The id_data reference in the created RNA pointers refers to the object
even if the curve is actually owned by its action, which is somewhat
inconsistent, but the same problem can be found in existing code.
Fixing it requires changes in the animdata filter API.
Previously it returned a short, which made it really easy to (a) compare
against a non-ID-type value, and (b) forget to handle some specific value in
a switch statement.
Both issues have happened in the recent past, so it's time to tighten some
nuts here.
Most of the change relates to silencing strict compiler warnings now, but
there is also one tricky aspect: ID_NLA is not in the IDType enum. So there
is still a cast to short to handle that switch. If someone has better ideas
on how to deal with this, please go ahead :)
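A minimal, self-contained illustration of the tradeoff (hypothetical names,
not the actual Blender enum): returning the enum type lets the compiler's
-Wswitch warning catch unhandled cases, while a legacy code living outside
the enum still forces a cast in the switch.

```cpp
#include <cstdio>

/* Stand-ins for the real ID codes; the ID_NLA-like value lives outside. */
enum IDType : short {
  TYPE_OBJECT = 1,
  TYPE_MESH = 2,
};
#define TYPE_NLA ((short)3) /* legacy code, not an enumerator */

static IDType query_type(short raw)
{
  return static_cast<IDType>(raw); /* was: plain `short` return */
}

static const char *type_name(IDType type)
{
  /* Cast to short so `case TYPE_NLA` compiles; the price is losing the
   * -Wswitch "unhandled enumerator" warning for this one switch. */
  switch ((short)type) {
    case TYPE_OBJECT: return "object";
    case TYPE_MESH:   return "mesh";
    case TYPE_NLA:    return "nla";
  }
  return "unknown";
}

int main()
{
  printf("%s\n", type_name(query_type(TYPE_NLA)));
  return 0;
}
```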
- Vertex-only meshes never restored their selection history.
- Select history was cleared on the source instead of the target.
Simple Optimizations:
- Avoid O(n^2) linked-list looping that checked the entire list before
  adding elements (NULL values in the source array prevent dupes; see the
  sketch after this list).
- Re-use vert & edge lookup tables instead of allocating new ones.
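The O(n) pattern looks roughly like this (hypothetical types, not the actual
mesh-join code): duplicate entries in the source array were already replaced
with NULL when it was built, so appending needs no membership scan of the
destination list.

```cpp
#include <forward_list>
#include <vector>

/* Each NULL slot marks a duplicate that was already handled, so a single
 * pass suffices; the old code re-walked `dst` before every append. */
template<typename T>
static void append_from_source(std::forward_list<T *> &dst,
                               const std::vector<T *> &src)
{
  for (T *elem : src) {
    if (elem != nullptr) {
      dst.push_front(elem);
    }
  }
}
```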
The issue was a nasty hidden one: the dual status (mix of local and linked)
of proxies striking again.
Here, the remapping process was considering the obdata pointer of proxies as
indirect usage, hence clearing 'LIB_TAG_EXTERN' from the obdata pointer.
That would make the save-to-blend code not store any 'lib placeholder' for
the obdata data-block, which was hence lost on the next file read.
Another (probably better) solution here would be to actually consider obdata
of proxies as fully indirect usage, and simply reassign proxies their linked
object's obdata on file read...
However, this change should be safer for now, and is probably good for 2.79
too.
Don't use quicksort for small arrays; bubble sort works way faster there due
to cache coherency. This is actually what qsort() from libc does.
We could also experiment with unrolling the sort for some extra-small
arrays, for example 3- and 4-element ones.
This reduces tangent space calculation for dragon from 3.1sec to 2.9sec.
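The shape of the change, as a hedged sketch (not the actual tangent-space
code; the cutoff of 16 is an illustrative guess, not a measured value):

```cpp
#include <algorithm>
#include <utility>

/* Bubble sort: for tiny arrays the quadratic cost is cheaper than the
 * call/partition overhead of a general quicksort, and the whole array
 * stays in cache. */
static inline void sort_small(float *arr, int n)
{
  for (int i = 0; i < n - 1; i++) {
    for (int j = 0; j < n - 1 - i; j++) {
      if (arr[j] > arr[j + 1]) {
        std::swap(arr[j], arr[j + 1]);
      }
    }
  }
}

static void sort_hybrid(float *arr, int n)
{
  if (n <= 16) { /* threshold to be tuned per platform */
    sort_small(arr, n);
  }
  else {
    std::sort(arr, arr + n); /* stand-in for the quicksort path */
  }
}
```

Unrolled 3- and 4-element special cases would slot in ahead of the generic
loop in sort_small().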
Brings tangent space calculation from 4.6sec to 3.1sec for the dragon model
in BI.
Cycles is also somewhat faster, but it has other bottlenecks.
Funny thing: using a simple `static inline` already gives a lot of speedup
here. That pretty much answers the question of whether it's OK to leave the
decision on what to inline up to the compiler...
Would be nice to be able to catch this with an assert as well; will see what
the best way to do that would be.
Need to verify with Mai that this solves the crash for her, and maybe
consider porting this to 2.79.
scripts being "Artwork" which is your sole property and free to license.
I've removed the reference to scripts in this text.
This was from 2002! With our Python scripts becoming part of how Blender
runs, such scripts are now officially required to be compliant with the GNU
GPL.
For more information, check the FAQ (https://www.blender.org/support/faq/)
or consult foundation@blender.org.
We have a hardcoded limit of 1000 images to be baked.
However, anything above 100 would lead to an overflow in the code.
Caught by a warning from the builder bot (my compiler doesn't even complain
about this, but it should).
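One way to keep such paired limits from drifting apart (a hypothetical
sketch, not the actual fix): derive the runtime limit from the array that
backs it, so a 1000-vs-100 mismatch cannot slip in silently.

```cpp
#include <cassert>

/* Hypothetical names throughout. */
struct BakeImage {
  int width, height;
};

static BakeImage bake_images[1000];
enum { BAKE_IMAGES_MAX = sizeof(bake_images) / sizeof(bake_images[0]) };

static BakeImage *bake_image_get(int index)
{
  /* Bounds are checked against the real array size, not a second,
   * independently maintained constant. */
  assert(index >= 0 && index < BAKE_IMAGES_MAX);
  return &bake_images[index];
}
```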
The Fishy Cat benchmark was rendering with wrong shadows. The cause is
unclear; adding a printf or rearranging code seems to avoid the issue,
possibly a compiler bug. This reverts the fix and solves the OSL bug
elsewhere.
This was needed when we accessed OSL closure memory after shader evaluation,
which could get overwritten by another shader evaluation. But all closures
are immediately converted to ShaderClosure now, so this is no longer needed.