Ceres: Update to the latest upstream version

Brings in all the fixes and improvements made upstream over the last 13 months.
Sergey Sharybin
2016-11-01 11:29:33 +01:00
parent cf8f6d1dbc
commit bf1e9bc613
45 changed files with 2804 additions and 1727 deletions


@@ -73,10 +73,12 @@ set(SRC
internal/ceres/file.cc
internal/ceres/generated/partitioned_matrix_view_d_d_d.cc
internal/ceres/generated/schur_eliminator_d_d_d.cc
internal/ceres/gradient_checker.cc
internal/ceres/gradient_checking_cost_function.cc
internal/ceres/gradient_problem.cc
internal/ceres/gradient_problem_solver.cc
internal/ceres/implicit_schur_complement.cc
internal/ceres/is_close.cc
internal/ceres/iterative_schur_complement_solver.cc
internal/ceres/lapack.cc
internal/ceres/levenberg_marquardt_strategy.cc
@@ -116,6 +118,7 @@ set(SRC
internal/ceres/triplet_sparse_matrix.cc
internal/ceres/trust_region_minimizer.cc
internal/ceres/trust_region_preprocessor.cc
internal/ceres/trust_region_step_evaluator.cc
internal/ceres/trust_region_strategy.cc
internal/ceres/types.cc
internal/ceres/wall_time.cc
@@ -204,6 +207,7 @@ set(SRC
internal/ceres/householder_vector.h
internal/ceres/implicit_schur_complement.h
internal/ceres/integral_types.h
internal/ceres/is_close.h
internal/ceres/iterative_schur_complement_solver.h
internal/ceres/lapack.h
internal/ceres/levenberg_marquardt_strategy.h
@@ -248,6 +252,7 @@ set(SRC
internal/ceres/triplet_sparse_matrix.h
internal/ceres/trust_region_minimizer.h
internal/ceres/trust_region_preprocessor.h
internal/ceres/trust_region_step_evaluator.h
internal/ceres/trust_region_strategy.h
internal/ceres/visibility_based_preconditioner.h
internal/ceres/wall_time.h

extern/ceres/ChangeLog (vendored)

@@ -1,659 +1,588 @@
commit aef9c9563b08d5f39eee1576af133a84749d1b48
Author: Alessandro Gentilini <agentilini@gmail.com>
Date: Tue Oct 6 20:43:45 2015 +0200
Add test for Bessel functions.
Change-Id: Ief5881e8027643d7ef627e60a88fdbad17f3d884
commit 49c86018e00f196c4aa9bd25daccb9919917efee
Author: Alessandro Gentilini <agentilini@gmail.com>
Date: Wed Sep 23 21:59:44 2015 +0200
Add Bessel functions in order to use them in residual code.
See "How can I use the Bessel function in the residual function?" at
https://groups.google.com/d/msg/ceres-solver/Vh1gpqac8v0/NIK1EiWJCAAJ
Change-Id: I3e80d9f9d1cadaf7177076e493ff46ace5233b76
commit dfb201220c034fde00a242d0533bef3f73b2907d
Author: Simon Rutishauser <simon.rutishauser@pix4d.com>
Date: Tue Oct 13 07:33:58 2015 +0200
Make miniglog threadsafe on non-windows system by using
localtime_r() instead of localtime() for time formatting
Change-Id: Ib8006c685cd8ed4f374893bef56c4061ca2c9747
commit 41455566ac633e55f222bce7c4d2cb4cc33d5c72
Author: Alex Stewart <alexs.mac@gmail.com>
Date: Mon Sep 28 22:43:42 2015 +0100
Remove link-time optimisation (LTO).
- On GCC 4.9+ although GCC supports LTO, it requires use of the
non-default gcc-ar & gcc-ranlib. Whilst we can ensure Ceres is
compiled with these, doing so with GCC 4.9 causes multiple definition
linker errors of static ints inside Eigen when compiling the tests
and examples when they are not also built with LTO.
- On OS X (Xcode 6 & 7) after the latest update to gtest, if LTO
is used when compiling the tests (& examples), two tests fail
due to typeinfo::operator== (things are fine if only Ceres itself is
compiled with LTO).
- This patch disables LTO for all compilers. It should be revisited when
the performance is more stable across our supported compilers.
Change-Id: I17b52957faefbdeff0aa40846dc9b342db1b02e3
commit 89c40005bfceadb4163bd16b7464b3c2ce740daf
Author: Alex Stewart <alexs.mac@gmail.com>
Date: Sun Sep 27 13:37:26 2015 +0100
Only use LTO when compiling Ceres itself, not tests or examples.
- If Ceres is built as a shared library, and LTO is enabled for Ceres
and the tests, then type_info::operator==() incorrectly returns false
in gtests' CheckedDowncastToActualType() in the following tests:
-- levenberg_marquardt_strategy_test.
-- gradient_checking_cost_function_test.
on at least Xcode 6 & 7 as reported here:
https://github.com/google/googletest/issues/595.
- This does not appear to be a gtest issue, but is perhaps an LLVM bug
or an RTTI shared library issue. Either way, disabling the use of
LTO when compiling the test application resolves the issue.
- Allow LTO to be enabled for GCC, if it is supported.
- Add CMake function to allow easy appending to target properties s/t
Ceres library-specific compile flags can be iteratively constructed.
Change-Id: I923e6aae4f7cefa098cf32b2f8fc19389e7918c9
commit 0794f41cca440f7f65d9a44e671f66f6e498ef7c
commit 8590e6e8e057adba4ec0083446d00268565bb444
Author: Sameer Agarwal <sameeragarwal@google.com>
Date: Sat Sep 26 14:10:15 2015 -0700
Date: Thu Oct 27 12:29:37 2016 -0700
Documentation updates.
Remove two checks from rotation.h
1. Fix a typo in the Trust Region algorithm.
2. Add ARL in the list of users.
3. Update the version history.
This allows rotation.h to remove its dependency on glog.
Change-Id: Ic286e8ef1a71af07f3890b7592dd3aed9c5f87ce
Change-Id: Ia6aede93ee51a4bd4039570dc8edd100a7045329
commit 90e32a8dc437dfb0e6747ce15a1f3193c13b7d5b
Author: Alex Stewart <alexs.mac@gmail.com>
Date: Mon Sep 21 21:08:25 2015 +0100
commit e892499e8d8977b9178a760348bdd201ec5f3489
Author: Je Hyeong Hong <jhh37@outlook.com>
Date: Tue Oct 18 22:49:11 2016 +0100
Use old minimum iOS version flags on Xcode < 7.0.
Relax the tolerance in QuaternionParameterizationTestHelper.
- The newer style, which are more specific and match the SDK names
are not available on Xcode < 7.0.
This commit relaxes the tolerance value for comparing between the actual
local matrix and the expected local matrix. Without this fix,
EigenQuaternionParameterization.ZeroTest could fail as the difference
exactly matches the value of std::numeric_limits<double>::epsilon().
Change-Id: I2f07a0365183d2781157cdb05fd49b30ae001ac5
Change-Id: Ic4d3f26c0acdf5f16fead80dfdc53df9e7dabbf9
commit 26cd5326a1fb99ae02c667eab9942e1308046984
Author: Alex Stewart <alexs.mac@gmail.com>
Date: Mon Sep 21 10:16:01 2015 +0100
Add gtest-specific flags when building/using as a shared library.
- Currently these flags are only used to define the relevant DLL export
prefix for Windows.
Change-Id: I0c05207b512cb4a985390aefc779b91febdabb38
commit c4c79472112a49bc1340da0074af2d15b1c89749
Author: Alex Stewart <alexs.mac@gmail.com>
Date: Sun Sep 20 18:26:59 2015 +0100
Clean up iOS.cmake to use xcrun/xcodebuild & libtool.
- Substantial cleanup of iOS.cmake to use xcrun & xcodebuild to
determine the SDK & tool paths.
- Use libtool -static to link libraries instead of ar + ranlib, which
is not compatible with Xcode 7+, this change should be backwards
compatible to at least Xcode 6.
- Force locations of unordered_map & shared_ptr on iOS to work around
check_cxx_source_compiles() running in a forked CMake instance without
access to the variables (IOS_PLATFORM) defined by the user.
- Minor CMake style updates.
Change-Id: I5f83a60607db34d461ebe85f9dce861f53d98277
commit 155765bbb358f1d19f072a4b54825faf1c059910
commit 7ed9e2fb7f1dff264c5e4fbaa89ee1c4c99df269
Author: Sameer Agarwal <sameeragarwal@google.com>
Date: Wed Sep 16 06:56:08 2015 -0700
Date: Wed Oct 19 04:45:23 2016 -0700
Import the latest version of gtest and gmock.
Occured -> Occurred.
Change-Id: I4b686c44bba823cab1dae40efa99e31340d2b52a
commit 0c4647b8f1496c97c6b9376d9c49ddc204aa08dd
Author: Alex Stewart <alexs.mac@gmail.com>
Date: Wed Sep 16 20:01:11 2015 +0100
Remove FAQ about increasing inlining threshold for Clang.
Thanks to Phillip Huebner for reporting this.
- Changing the inlining threshold for Clang as described has a minimal
effect on user performance.
- The problem that originally prompted the belief that it did was
due to an erroneous CXX flag configuration (in user code).
Change-Id: I03017241c0f87b8dcefb8c984ec3b192afd97fc2
Change-Id: I9cddfbb373aeb496961d08e434fe661bff4abd29
commit f4b768b69afcf282568f9ab3a3f0eb8078607468
commit b82f97279682962d8c8ae1b6d9e801ba072a0ab1
Author: Je Hyeong Hong <jhh37@outlook.com>
Date: Tue Oct 18 21:18:32 2016 +0100
Fix a test error in autodiff_test.cc.
Previously, the test for the projective camera model would fail as no
tolerance is set in line 144. To resolve this, this commit changes
assert_equal to assert_near.
Change-Id: I6cd3379083b1a10c7cd0a9cc83fd6962bb993cc9
commit 5690b447de5beed6bdda99b7f30f372283c2fb1a
Author: Sameer Agarwal <sameeragarwal@google.com>
Date: Mon Sep 14 13:53:24 2015 -0700
Date: Thu Oct 13 09:52:02 2016 -0700
Lint changes from William Rucklidge
Fix documentation source for templated functions in rotation.h
Change-Id: I0dac2549a8fa2bfd12f745a8d8a0db623b7ec1ac
Change-Id: Ic1b2e6f0e6eb9914f419fd0bb5af77b66252e57c
commit 5f2f05c726443e35767d677daba6d25dbc2d7ff8
commit 2f8f98f7e8940e465de126fb51282396f42bea20
Author: Sameer Agarwal <sameeragarwal@google.com>
Date: Fri Sep 11 22:19:38 2015 -0700
Date: Thu Oct 13 09:35:18 2016 -0700
Refactor system_test
Prepare for 1.12.0RC1
1. Move common test infrastructure into test_util.
2. system_test now only contains powells function.
3. Add bundle_adjustment_test.
Instead of a single function which computes everything,
there is now a test for each solver configuration which
uses the reference solution computed by the fixture.
Change-Id: I16a9a9a83a845a7aaf28762bcecf1a8ff5aee805
Change-Id: I23eaf0b46117a01440143001b74dacfa5e57cbf0
commit 1936d47e213142b8bf29d3f548905116092b093d
Author: Alex Stewart <alexs.mac@gmail.com>
Date: Tue Sep 8 23:27:42 2015 +0100
commit 55c12d2e9569fe4aeac3ba688ac36810935a37ba
Author: Damon Kohler <damonkohler@google.com>
Date: Wed Oct 5 16:30:31 2016 +0200
Revert increased inline threshold (iff Clang) to exported Ceres target.
Adds package.xml to support Catkin.
- Increasing the inline threshold results in very variable performance
improvements, and could potentially confuse users if they are trying
to set the inline threshold themselves.
- As such, we no longer export our inline threshold configuration for
Clang, but instead document how to change it in the FAQs.
Change-Id: I88e2e0001e4586ba2718535845ed1e4b1a5b72bc
Change-Id: I8ad4d36a8b036417604a54644e0bb70dd1615feb
commit a66d89dcda47cefda83758bfb9e7374bec4ce866
commit 0bcce6565202f5476e40f12afc0a99eb44bd9dfb
Author: Sameer Agarwal <sameeragarwal@google.com>
Date: Sat Sep 5 16:50:20 2015 -0700
Date: Mon Oct 10 23:30:42 2016 -0700
Get ready for 1.11.0RC1
Fix tabs in Android.mk
Update version numbers.
Drop CERES_VERSION_ABI macro.
Change-Id: Ib3eadabb318afe206bb196a5221b195d26cbeaa0
Change-Id: Ie5ab9a8ba2b727721565e1ded242609b6df5f8f5
commit 1ac3dd223c179fbadaed568ac532af4139c75d84
commit e6ffe2667170d2fc435443685c0163396fc52d7b
Author: Sameer Agarwal <sameeragarwal@google.com>
Date: Sat Sep 5 15:30:01 2015 -0700
Fix a bug in CompressedRowSparseMatrix::AppendRows
The test for CompressedRowSparseMatrix::AppendRows tries to add
a matrix of size zero, which results in an invalid pointer dereferencing
even though that pointer is never written to.
Change-Id: I97dba37082bd5dad242ae1af0447a9178cd92027
commit 67622b080c8d37b5e932120a53d4ce76b80543e5
Author: Sameer Agarwal <sameeragarwal@google.com>
Date: Sat Sep 5 13:18:38 2015 -0700
Fix a pointer access bug in Ridders' algorithm.
A pointer to an Eigen matrix was being used as an array.
Change-Id: Ifaea14fa3416eda5953de49afb78dc5a6ea816eb
commit 5742b7d0f14d2d170054623ccfee09ea214b8ed9
Author: Sameer Agarwal <sameeragarwal@google.com>
Date: Wed Aug 26 09:24:33 2015 -0700
Improve performance of SPARSE_NORMAL_CHOLESKY + dynamic_sparsity
The outer product computation logic in SparseNormalCholeskySolver
does not work well with dynamic sparsity. The overhead of computing
the sparsity pattern of the normal equations is only amortized if
the sparsity is constant. If the sparsity can change from call to call
SparseNormalCholeskySolver will actually be more expensive.
For Eigen and for CXSparse we now explicitly compute the normal
equations using their respective matrix-matrix product routines and solve.
Change-Id: Ifbd8ed78987cdf71640e66ed69500442526a23d4
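For reference, a minimal sketch of how the dynamic-sparsity path is selected through Solver::Options (assumes an already constructed ceres::Problem named problem):

    ceres::Solver::Options options;
    options.linear_solver_type = ceres::SPARSE_NORMAL_CHOLESKY;
    options.dynamic_sparsity = true;  // sparsity pattern may change between evaluations
    ceres::Solver::Summary summary;
    ceres::Solve(options, &problem, &summary);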
commit d0b6cf657d6ef0dd739e958af9a5768f2eecfd35
Author: Keir Mierle <mierle@gmail.com>
Date: Fri Sep 4 18:43:41 2015 -0700
Fix incorrect detect structure test
Change-Id: I7062f3639147c40b57947790d3b18331a39a366b
commit 0e8264cc47661651a11e2dd8570c210082963545
Author: Alex Stewart <alexs.mac@gmail.com>
Date: Sat Aug 22 16:23:05 2015 +0100
Add increased inline threshold (iff Clang) to exported Ceres target.
- When compiled with Clang, Ceres and all of the examples are compiled
with an increased inlining-threshold, as the default value can result
in poor Eigen performance.
- Previously, client code using Ceres would typically not use an
increased inlining-threshold (unless the user has specifically added
it themselves). However, increasing the inlining threshold can result
in significant performance improvements in auto-diffed CostFunctions.
- This patch adds the inlining-threshold flags to the interface flags
for the Ceres CMake target s/t any client code using Ceres (via
CMake), and compiled with Clang, will now be compiled with the same
increased inlining threshold as used by Ceres itself.
Change-Id: I31e8f1abfda140d22e85bb48aa57f028a68a415e
commit a1b3fce9e0a4141b973f6b4dd9b08c4c13052d52
Author: Alex Stewart <alexs.mac@gmail.com>
Date: Mon Aug 31 14:14:56 2015 +0100
Add optional export of Ceres build directory to new features list.
Change-Id: I6f1e42b41957ae9cc98fd9dcd1969ef64c4cd96f
commit e46777d8df068866ef80902401a03e29348d11ae
Author: Alex Stewart <alexs.mac@gmail.com>
Date: Mon Aug 31 12:41:54 2015 +0100
Credit reporters of buildsystem bugs in version history.
Change-Id: I16fe7973534cd556d97215e84268ae0b8ec4e11a
commit 01548282cb620e5e3ac79a63a391cd0afd5433e4
Author: Sameer Agarwal <sameeragarwal@google.com>
Date: Sun Aug 30 22:29:27 2015 -0700
Date: Mon Oct 10 22:47:08 2016 -0700
Update the version history.
Change-Id: I29873bed31675e0108f1a44f53f7bc68976b7f98
Change-Id: I9a57b0541d6cebcb695ecb364a1d4ca04ea4e06c
commit 2701429f770fce69ed0c77523fa43d7bc20ac6dc
commit 0a4ccb7ee939ab35b22e26758401e039b033b176
Author: David Gossow <dgossow@google.com>
Date: Wed Sep 7 21:38:12 2016 +0200
Relaxing Jacobian matching in Gradient Checker test.
Any result of an arithmetic operation on floating-point matrices
should never be checked for strict equality with some expected
value, due to limited floating point precision on different machines.
This fixes some occurrences of exact checks in the gradient checker
unit test that were causing problems on some platforms.
Change-Id: I48e804c9c705dc485ce74ddfe51037d4957c8fcb
commit ee44fc91b59584921c1d1c8db153fda6d633b092
Author: Je Hyeong Hong <jhh37@outlook.com>
Date: Mon Oct 3 12:19:30 2016 +0100
Fix an Intel compiler error in covariance_impl.cc.
Intel C compiler strictly asks for parallel loops with collapse to be
perfectly nested. Otherwise, compiling Ceres with ICC will throw an
error at line 348 of covariance_impl.cc.
Change-Id: I1ecb68e89b7faf79e4153dfe6675c390d1780db4
commit 9026d69d1ce1e0bcd21debd54a38246d85c7c6e4
Author: Sameer Agarwal <sameeragarwal@google.com>
Date: Sun Aug 30 21:33:57 2015 -0700
Date: Thu Sep 22 17:20:14 2016 -0700
Use Eigen::Dynamic instead of ceres::DYNAMIC in numeric_diff.h
Allow SubsetParameterization to hold all parameters constant
Change-Id: Iccb0284a8fb4c2160748dfae24bcd595f1d4cb5c
1. SubsetParameterization can now be constructed such that all
parameters are constant. This is required for it be used as part
of a ProductParameterization to hold a part of parameter block
constant. For example, a parameter block consisting of a rotation
as a quaternion and a translation vector can now have a local
parameterization where the translation part is constant and the
quaternion part has a QuaternionParameterization associated with it.
2. The check for the tangent space of a parameterization being
positive dimensional. We were not doing this check up till now
and the user could accidentally create parameterizations like this
and create a problem for themselves. This will ensure that even
though one can construct a SubsetParameterization where all
parameters are constant, you cannot actually use it as a local
parameterization for an entire parameter block. Which is how
it was before, but the check was inside the SubsetParameterization
constructor.
3. Added more tests and refactored existing tests to be more
granular.
Change-Id: Ic0184a1f30e3bd8a416b02341781a9d98e855ff7
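A minimal sketch of the use case described above: a 7-dimensional parameter block storing a quaternion followed by a translation, with the translation held constant (pose and problem are hypothetical names; assumes an already constructed ceres::Problem):

    double pose[7] = {1, 0, 0, 0, 0, 0, 0};  // quaternion (w, x, y, z) + translation (x, y, z)

    std::vector<int> constant_translation;   // all three translation components are constant
    constant_translation.push_back(0);
    constant_translation.push_back(1);
    constant_translation.push_back(2);

    ceres::LocalParameterization* pose_parameterization =
        new ceres::ProductParameterization(
            new ceres::QuaternionParameterization(),
            new ceres::SubsetParameterization(3, constant_translation));
    problem.AddParameterBlock(pose, 7, pose_parameterization);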
commit 4f049db7c2a3ee8cf9910c6eac96be6a28a5999c
Author: Tal Ben-Nun <tbennun@gmail.com>
Date: Wed May 13 15:43:51 2015 +0300
Adaptive numeric differentiation using Ridders' method.
This method numerically computes function derivatives in different
scales, extrapolating between intermediate results to conserve function
evaluations. Adaptive differentiation is essential to produce accurate
results for functions with noisy derivatives.
Full changelist:
-Created a new type of NumericDiffMethod (RIDDERS).
-Implemented EvaluateRiddersJacobianColumn in NumericDiff.
-Created unit tests with f(x) = x^2 + [random noise] and
f(x) = exp(x).
Change-Id: I2d6e924d7ff686650272f29a8c981351e6f72091
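For reference, a minimal sketch of selecting Ridders' method; MyFunctor is a hypothetical numeric-diff functor:

    struct MyFunctor {
      // Residual r = x^2 - 2, evaluated with plain doubles.
      bool operator()(const double* const x, double* residual) const {
        residual[0] = x[0] * x[0] - 2.0;
        return true;
      }
    };

    // 1 residual, one 1-dimensional parameter block, RIDDERS instead of CENTRAL/FORWARD.
    ceres::CostFunction* cost_function =
        new ceres::NumericDiffCostFunction<MyFunctor, ceres::RIDDERS, 1, 1>(new MyFunctor);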
commit 070bba4b43b4b7449628bf456a10452fd2b34d28
commit a36693f83da7a3fd19dce473d060231d4cc97499
Author: Sameer Agarwal <sameeragarwal@google.com>
Date: Tue Aug 25 13:37:33 2015 -0700
Date: Sat Sep 17 16:31:41 2016 -0700
Lint fixes from William Rucklidge
Update version history
Change-Id: I719e8852859c970091df842e59c44e02e2c65827
Change-Id: Ib2f0138ed7a1879ca3b2173e54092f7ae8dd5c9d
commit 887a20ca7f02a1504e35f7cabbdfb2e0842a0b0b
Author: Alex Stewart <alexs.mac@gmail.com>
Date: Wed Aug 12 21:41:43 2015 +0100
commit 01e23e3d33178fdd050973666505c1080cfe04c3
Author: David Gossow <dgossow@google.com>
Date: Thu Sep 8 12:22:28 2016 +0200
Build position independent code when compiling Ceres statically.
Removing duplicate include directive.
- Previously, when Ceres was built as a static library we did not
compile position independent code. This means that the resulting
static library could not be linked against shared libraries, but
could be used by executables.
- To enable the use of a static Ceres library by other shared libraries
as reported in [1], the static library must be generated from
position independent code (except on Windows, where PIC does not
apply).
[1] https://github.com/Itseez/opencv_contrib/pull/290#issuecomment-130389471
Change-Id: I99388f1784ece688f91b162d009578c5c97ddaf6
Change-Id: I729ae6501497746d1bb615cb893ad592e16ddf3f
commit 860bba588b981a5718f6b73e7e840e5b8757fe65
commit 99b8210cee92cb972267537fb44bebf56f812d52
Author: Sameer Agarwal <sameeragarwal@google.com>
Date: Tue Aug 25 09:43:21 2015 -0700
Date: Wed Sep 7 15:31:30 2016 -0700
Fix a bug in DetectStructure
Update Android.mk to include new files.
The logic for determining static/dynamic f-block size in
DetectStructure was broken in a corner case, where the very first
row block, which was used to initialize the f_block_size, contained
more than one f-block of varying sizes. The way the if block
was structured, no iteration was performed on the remaining
f-blocks and the loop failed to detect that the f-block size
was actually changing.
If in the remaining row blocks, there were no row blocks
with varying f-block sizes, the function will erroneously
return a static f-block size.
Thanks to Johannes Schonberger for providing a reproduction for this
rather tricky corner case.
Change-Id: Ib442a041d8b7efd29f9653be6a11a69d0eccd1ec
Change-Id: Id543ee7d2a65b65c868554a17f593c0a4958e873
commit b0cbc0f0b0a22f01724b7b647a4a94db959cc4e4
Author: Johannes Schönberger <hannesschoenberger@gmail.com>
Date: Thu Aug 20 14:21:30 2015 -0400
Reduce memory footprint of SubsetParameterization
Change-Id: If113cb4696d5aef3e50eed01fba7a3d4143b7ec8
commit ad2a99777786101411a971e59576ca533a297013
Author: Sergey Sharybin <sergey.vfx@gmail.com>
Date: Sat Aug 22 11:18:45 2015 +0200
Fix for reorder program unit test when built without SuiteSparse
This commit fixes failure of reorder_program_test when Ceres is built without
any SuiteSparse.
Change-Id: Ia23ae8dfd20c482cb9cd1301f17edf9a34df3235
commit 4bf3868beca9c17615f72ec03730cddb3676acaa
commit 195d8d13a6a3962ac39ef7fcdcc6add0216eb8bc
Author: Sameer Agarwal <sameeragarwal@google.com>
Date: Sun Aug 9 15:24:45 2015 -0700
Date: Tue Sep 6 07:12:23 2016 -0700
Fix a bug in the Schur eliminator
Remove two DCHECKs from CubicHermiteSpline.
The schur eliminator treats rows with e blocks and row with
no e blocks separately. The template specialization logic only
applies to the rows with e blocks.
They were present as debugging checks but were causing problems
with the build on 32bit i386 due to numerical cancellation issues,
where x ~ -epsilon.
So, in cases where the rows with e-blocks have a fixed size f-block
but the rows without e-blocks have f-blocks of varying sizes,
DetectStructure will return a static f-block size, but we need to be
careful that we do not blindly use that static f-block size everywhere.
Removing these checks only changes the behaviour in Debug mode.
We are already handling such small negative numbers in production
if they occur. All that this change does is to remove the crash.
This patch fixes a bug where such care was not being taken, where
it was assumed that the static f-block size could be assumed for all
f-block sizes.
https://github.com/ceres-solver/ceres-solver/issues/212
A new test is added, which triggers an exception in debug mode. In
release mode this error does not present itself, due to a peculiarity
of the way Eigen works.
Thanks to @NeroBurner and @debalance for reporting this.
Thanks to Werner Trobin for reporting this bug.
Change-Id: I8ae7aabf8eed8c3f9cf74b6c74d632ba44f82581
Change-Id: I66480e86d4fa0a4b621204f2ff44cc3ff8d01c04
commit 1635ce726078f00264b89d7fb6e76fd1c2796e59
commit 83041ac84f2d67c28559c67515e0e596a3f3aa20
Author: Sameer Agarwal <sameeragarwal@google.com>
Date: Wed Aug 19 00:26:02 2015 -0700
Date: Fri Sep 2 19:10:35 2016 -0700
Fix a bug in the reordering code.
Fix some compiler warnings.
When the user provides an ordering which starts at a non-zero group id,
or has gaps in the groups, then CAMD, the algorithm used to reorder
the program, can crash or return garbage results.
Reported by Richard Trieu.
The solution is to map the ordering into grouping constraints, and then
to re-number the groups to be contiguous using a call to
MapValuesToContiguousRange. This was already done for CAMD based
ordering for Schur type solvers, but was not done for SPARSE_NORMAL_CHOLESKY.
Thanks to Bernhard Zeisl for not only reporting the bug but also
providing a reproduction.
Change-Id: I5cfae222d701dfdb8e1bda7f0b4670a30417aa89
Change-Id: I202b7a7df09cc19c92582d276ccf171edf88a9fb
commit 4c3f8987e7f0c51fd367cf6d43d7eb879e79589f
Author: Simon Rutishauser <simon.rutishauser@pix4d.com>
Date: Thu Aug 13 11:10:44 2015 +0200
Add missing CERES_EXPORT to ComposedLoss
Change-Id: Id7db388d41bf53e6e5704039040c9d2c6bf4c29c
commit 1a740cc787b85b883a0703403a99fe49662acb79
commit 8c4623c63a2676e79e7917bb0561f903760f19b9
Author: Sameer Agarwal <sameeragarwal@google.com>
Date: Tue Aug 11 18:08:05 2015 -0700
Date: Thu Sep 1 00:05:09 2016 -0700
Add the option to use numeric differentiation to nist and more_garbow_hillstrom
Update ExpectArraysClose to use ExpectClose instead of EXPECT_NEAR
Change-Id: If0a5caef90b524dcf5e2567c5b681987f5459401
commit ea667ede5c038d6bf3d1c9ec3dbdc5072d1beec6
Author: Alex Stewart <alexs.mac@gmail.com>
Date: Sun Aug 9 16:56:13 2015 +0100
Fix EIGENSPARSE option help s/t it displays in CMake ncurses GUI.
The documentation for ExpectArraysClose and its implementation
did not match.
- Shorten description for EIGENSPARSE to a single line, as otherwise
it is not correctly displayed in the ncurses CMake GUI.
- Made explicit in description that this results in an LGPL licensed
version of Ceres (this is also made clear in the CMake log output if
EIGENSPARSE is enabled).
This change makes the polynomial_test not fail on 64bit AMD builds.
Change-Id: I11678a9cbc7a817133c22128da01055a3cb8a26d
commit a14ec27fb28ab2e8d7f1c9d88e41101dc6c0aab5
Author: Richard Stebbing <richie.stebbing@gmail.com>
Date: Fri Aug 7 08:42:03 2015 -0700
Fix SparseNormalCholeskySolver with dynamic sparsity.
Thanks to Phillip Huebner for reporting this.
The previous implementation incorrectly cached the outer product matrix
pattern even when `dynamic_sparsity = true`.
Change-Id: I1e58315a9b44f2f457d07c56b203ab2668bfb8a2
Change-Id: I503f2d3317a28d5885a34f8bdbccd49d20ae9ba2
commit 3dd7fced44ff00197fa9fcb1f2081d12be728062
Author: Alex Stewart <alexs.mac@gmail.com>
Date: Sun Aug 9 16:38:50 2015 +0100
Remove legacy dependency detection macros.
- Before the new CMake buildsystem in 1.8, Ceres used non-standard
HINTS variables for dependencies. For backwards compatibility CMake
macros were added to translate these legacy variables into the new
(standard) variables.
- As it has now been multiple releases since the legacy variables
were used and they no longer appear in any of the documentation
support for them has now expired.
Change-Id: I2cc72927ed711142ba7943df334ee008181f86a2
commit 8b32e258ccce1eed2a50bb002add16cad13aff1e
Author: Alex Stewart <alexs.mac@gmail.com>
Date: Sun Aug 9 15:42:39 2015 +0100
Fix failed if() condition expansion if gflags is not found.
- If a CMake-ified version of gflags is not detected, then
gflags_LIBRARIES is not set and the TARGET condition within a
multiconditional if() statement prevents configuration.
Change-Id: Ia92e97523d7a1478ab36539726b9540d7cfee5d0
commit cc8d47aabb9d63ba4588ba7295058a6191c2df83
Author: Alex Stewart <alexs.mac@gmail.com>
Date: Sun Aug 9 15:18:42 2015 +0100
Update all CMake to lowercase function name style.
- Updated to new CMake style where function names are all lowercase,
this will be backwards compatible as CMake function names are
case insensitive.
- Updated using Emacs' M-x unscreamify-cmake-buffer.
Change-Id: If7219816f560270e59212813aeb021353a64a0e2
commit 1f106904c1f47460c35ac03258d6506bb2d60838
Author: Alex Stewart <alexs.mac@gmail.com>
Date: Sun Aug 9 14:55:02 2015 +0100
Update minimum iOS version to 7.0 for shared_ptr/unordered_map.
- In order to correctly detect shared_ptr (& unordered_map)
the iOS version must be >= 7.0 (Xcode 5.0+). This only affects the
SIMULATOR(64) platform builds, as the OS (device) build uses the
latest SDK which is now likely 8.0+.
Change-Id: Iefec8f03408b8cdc7a495f442ebba081f800adb0
commit 16ecd40523a408e7705c9fdb0e159cef2007b8ab
Author: Alex Stewart <alexs.mac@gmail.com>
Date: Sat Aug 8 17:32:31 2015 +0100
Fix bug in gflags' <= 2.1.2 exported CMake configuration.
- gflags <= 2.1.2 has a bug in its exported gflags-config.cmake:
https://github.com/gflags/gflags/issues/110 whereby it sets
gflags_LIBRARIES to a non-existent 'gflags' target.
- This causes linker errors if gflags is installed in a non-standard
location (as otherwise CMake resolves gflags to -lgflags which
links if gflags is installed somewhere on the current path).
- We now check for this case, and search for the correct gflags imported
target and update gflags_LIBRARIES to reference it if found, otherwise
proceed on to the original manual search to try to find gflags.
Change-Id: Iceccc3ee53c7c2010e41cc45255f966e7b13d526
commit 56be8de007dfd65ed5a31c795eb4a08ad765f411
Author: Alex Stewart <alexs.mac@gmail.com>
Date: Thu Jun 25 21:31:00 2015 +0100
Add docs for new CXX11 option & mask option for Windows.
- The CXX11 option has no effect on Windows, as there, any new C++11
features are enabled by default, as such to avoid confusion we only
present the option for non-Windows.
Change-Id: I38925ae3bb8c16682d404468ba95c611a519b9b9
commit cf863b6415ac4dbf3626e70adeac1ac0f3d87ee5
commit 2fd39fcecb47eebce727081c9ffb8edf86c33669
Author: Sameer Agarwal <sameeragarwal@google.com>
Date: Thu Aug 6 14:52:18 2015 -0700
Date: Thu Sep 1 16:05:06 2016 -0700
Remove the spec file needed for generating RPMs.
FindWithDefault returns by value rather than reference.
Now that ceres is part of RawHide, there is no need to carry
this spec file with the ceres distribution.
Returning by reference leads to lifetime issues with the default
value which may go out of scope by the time it is used.
Change-Id: Icc400b9874ba05ba05b353e2658f1de94c72299e
Thanks to @Ardavel for reporting this, as this causes graph_test
to fail on VS2015x64.
https://github.com/ceres-solver/ceres-solver/issues/216
Change-Id: I596481219cfbf7622d49a6511ea29193b82c8ba3
commit 560940fa277a469c1ab34f1aa303ff1af9c3cacf
commit 716f049a7b91a8f3a4632c367d9534d1d9190a81
Author: Mike Vitus <vitus@google.com>
Date: Wed Aug 31 13:38:30 2016 -0700
Convert pose graph 2D example to glog and gflags.
Change-Id: I0ed75a60718ef95199bb36f33d9eb99157d11d40
commit 46c5ce89dda308088a5fdc238d0c126fdd2c2b58
Author: David Gossow <dgossow@google.com>
Date: Wed Aug 31 18:40:57 2016 +0200
Fix compiler errors on some systems
This fixes some signed-unsigned comparisons and a missing header
include.
Change-Id: Ieb2bf6e905faa74851bc4ac4658d2f1da24b6ecc
commit b102d53e1dd7dab132e58411183b6fffc2090590
Author: David Gossow <dgossow@google.com>
Date: Wed Aug 31 10:21:20 2016 +0200
Gradient checker multithreading bugfix.
This is a follow-up on c/7470. GradientCheckingCostFunction calls
callback_->SetGradientErrorDetected() in its Evaluate method,
which will run in multiple threads simultaneously when enabling
this option in the solver. Thus, the string append operation
inside that method has to be protected by a mutex.
Change-Id: I314ef1df2be52595370d9af05851bf6da39bb45e
commit 79a28d1e49af53f67af7f3387d07e7c9b7339433
Author: Sameer Agarwal <sameeragarwal@google.com>
Date: Sat Jul 11 22:21:31 2015 -0700
Date: Wed Aug 31 06:47:45 2016 -0700
A refactor of the cubic interpolation code
Rename a confusingly named member of Solver::Options
1. Push the boundary handling logic into the underlying array
object. This has two very significant impacts:
Solver::Options::numeric_derivative_relative_step_size to
Solver::Options::gradient_check_numeric_derivative_relative_step_size
a. The interpolation code becomes extremely simple to write
and to test.
b. The user has more flexibility in implementing how out of bounds
values are handled. We provide one default implementation.
Change-Id: Ic2f6cf9257ce7110c62e492688e5a6c8be1e7df2
Change-Id: Ib89ae3f87e588d4aba2a75361770d2cec26f07aa
commit dfdf19e111c2b0e6daeb6007728ec2f784106d49
commit 358ae741c8c4545b03d95c91fa546d9a36683677
Author: Sameer Agarwal <sameeragarwal@google.com>
Date: Wed Aug 5 15:20:57 2015 -0700
Date: Wed Aug 31 06:58:41 2016 -0700
Lint cleanup from Jim Roseborough
Note that Problem::Evaluate cannot be called from an IterationCallback
Change-Id: Id6845c85644d40e635ed196ca74fc51a387aade4
Change-Id: Ieabdc2d40715e6b547ab22156ba32e9c8444b7ed
commit 7444f23ae245476a7ac8421cc2f88d6947fd3e5f
commit 44044e25b14d7e623baae4505a17c913bdde59f8
Author: Sameer Agarwal <sameeragarwal@google.com>
Date: Mon Aug 3 12:22:44 2015 -0700
Date: Wed Aug 31 05:50:58 2016 -0700
Fix a typo in small_blas.h
Update the NumTraits for Jets
The reason this rather serious-looking typo has not
caused any problems until now is because NUM_ROW_B is
computed but never actually used.
1. Use AVX if EIGEN_VECTORIZE_AVX is defined.
2. Make the cost of division same as the cost of multiplication.
Thanks to Werner Trobin for pointing this out.
These are updates to the original numtraits update needed for eigen 3.3
that Shaheen Gandhi sent out.
Change-Id: Id2b4d9326ec21baec8a85423e3270aefbafb611e
Change-Id: Ic1e3ed7d05a659c7badc79a894679b2dd61c51b9
commit 5a48b92123b30a437f031eb24b0deaadc8f60d26
commit 4b6ad5d88e45ce8638c882d3e8f08161089b6dba
Author: Sameer Agarwal <sameeragarwal@google.com>
Date: Sat Aug 27 23:21:55 2016 -0700
Use ProductParameterization in bundle_adjuster.cc
Previously, when using a quaternion to parameterize the camera
orientation, the camera parameter block was split into two
parameter blocks. One for the rotation and another for the
translation and intrinsics. This was to enable the use of the
Quaternion parameterization.
Now that we have a ProductParameterization which allows us
to compose multiple parameterizations, this is no longer needed
and we use a size 10 parameter block instead.
This leads to a more than 2x improvements in the linear solver time.
Change-Id: I78b8f06696f81fee54cfe1a4ae193ee8a5f8e920
commit bfc916cf1cf753b85c1e2ac037e2019ee891f6f9
Author: Shaheen Gandhi <visigoth@gmail.com>
Date: Thu Aug 4 12:10:14 2016 -0700
Allow ceres to be used with the latest version of Eigen
Change-Id: Ief3b0f6b405484ec04ecd9ab6a1e1e5409a594c2
commit edbd48ab502aa418ad9700ee5c3ada5f9268b90a
Author: Alex Stewart <alexs.mac@gmail.com>
Date: Sat Jul 4 17:59:52 2015 +0100
Date: Sun Jul 10 14:13:51 2016 +0100
Export Ceres build directory into local CMake package registry.
Enable support for OpenMP in Clang if detected.
- Optionally use CMake's export() functionality to export the Ceres
build directory as a package into the local CMake package registry.
- This enables the detection & use of Ceres from CMake *without*
requiring that Ceres be installed.
- Previously we disabled OpenMP if Clang was detected, as it did not
support it. However as of Clang 3.8 (and potentially Xcode 8) OpenMP
is supported.
Change-Id: Ib5a7588446f490e1b405878475b6b1dd13accd1f
Change-Id: Ia39dac9fe746f1fc6310e08553f85f3c37349707
commit d9790e77894ea99d38137d359d6118315b2d1601
commit f6df6c05dd83b19fa90044106ebaca40957998ae
Author: Mike Vitus <vitus@google.com>
Date: Thu Aug 18 19:27:43 2016 -0700
Add an example for modeling and solving a 3D pose graph SLAM problem.
Change-Id: I750ca5f20c495edfee5f60ffedccc5bd8ba2bb37
commit ac3b8e82175122e38bafaaa9cd419ba3cee11087
Author: David Gossow <dgossow@google.com>
Date: Fri Apr 29 16:07:11 2016 +0200
Gradient checking cleanup and local parameterization bugfix
Change the Ceres gradient checking API to make is useful for
unit testing, clean up code duplication and fix interaction between
gradient checking and local parameterizations.
There were two gradient checking implementations, one being used
when using the check_gradients flag in the Solver, the other
being a standalone class. The standalone version was restricted
to cost functions with fixed parameter sizes at compile time, which
is being lifted here. This enables it to be used inside the
GradientCheckingCostFunction as well.
In addition, this installs new hooks in the Solver to ensure
that Solve will fail if any incorrect gradients are detected. This
way, you can set the check_gradient flags to true and detect
errors in an automated way, instead of just printing error information
to the log. The error log is now also returned in the Solver summary
instead of being printed directly. The user can then decide what to
do with it. The existing hooks for user callbacks are used for
this purpose to keep the internal API changes minimal and non-invasive.
The last and biggest change is the way the interaction between
local parameterizations and the gradient checker works. Before,
local parameterizations would be ignored by the checker. However,
if a cost function does not compute its Jacobian along the null
space of the local parameterization, this will not have any effect
on the solver, but would result in a gradient checker error.
With this change, the Jacobians are multiplied by the Jacobians
of the respective local parameterization and are thus compared
in the tangent space only.
The typical use case for this are quaternion parameters, where
a cost function will typically assume that the quaternion is
always normalized, skipping the correct computation of the Jacobian
along the normal to save computation cost.
Change-Id: I5e1bb97b8a899436cea25101efe5011b0bb13282
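A minimal sketch of the solver-driven gradient checking described above (assumes an already constructed ceres::Problem named problem):

    ceres::Solver::Options options;
    options.check_gradients = true;                    // wrap cost functions in checking versions
    options.gradient_check_relative_precision = 1e-6;  // tolerance used for the comparison
    ceres::Solver::Summary summary;
    ceres::Solve(options, &problem, &summary);
    // If incorrect gradients are detected, Solve() fails and the error log is
    // returned as part of the solver summary instead of only being printed.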
commit d4264ec10d9a270b53b5db86c0245ae8cbd2cf18
Author: Mike Vitus <vitus@google.com>
Date: Wed Aug 17 13:39:05 2016 -0700
Add a quaternion local parameterization for Eigen's quaternion element convention.
Change-Id: I7046e8b24805313c5fb6a767de581d0054fcdb83
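For reference, a minimal sketch of attaching the new parameterization to an Eigen quaternion, which stores its elements in (x, y, z, w) order (assumes <Eigen/Geometry> and an already constructed ceres::Problem named problem):

    Eigen::Quaterniond q = Eigen::Quaterniond::Identity();
    problem.AddParameterBlock(q.coeffs().data(), 4,
                              new ceres::EigenQuaternionParameterization());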
commit fd7cab65ef30fbc33612220abed52dd5073413c4
Author: Mike Vitus <vitus@google.com>
Date: Wed Aug 10 09:29:12 2016 -0700
Fix typos in the pose graph 2D example.
Change-Id: Ie024ff6b6cab9f2e8011d21121a91931bd987bd1
commit 375dc348745081f89693607142d8b6744a7fb6b4
Author: Mike Vitus <vitus@google.com>
Date: Wed Aug 3 18:51:16 2016 -0700
Remove duplicate entry for the NIST example in the docs.
Change-Id: Ic4e8f9b68b77b5235b5c96fe588cc56866dab759
commit f554681bf22d769abc12dd6d346ef65f9bb22431
Author: Mike Vitus <vitus@google.com>
Date: Mon Jul 25 18:30:48 2016 -0700
Add an example for modeling and solving a 2D pose graph SLAM problem.
Change-Id: Ia89b12af7afa33e7b1b9a68d69cf2a0b53416737
commit e1bcc6e0f51512f43aa7bfb7b0d62f7ac1d0cd4b
Author: Sameer Agarwal <sameeragarwal@google.com>
Date: Sun Jul 12 19:39:47 2015 -0700
Date: Wed May 18 07:52:48 2016 -0700
Add ProductParameterization
Add additional logging for analyzing orderings
Often a parameter block is the Cartesian product of a number of
manifolds. For example, a rigid transformation SE(3) = SO(3) x R^3
In such cases, where you have the local parameterization
of the individual manifolds available,
ProductParameterization can be used to construct a local
parameterization of the cartesian product.
Change-Id: I4b5bcbd2407a38739c7725b129789db5c3d65a20
Change-Id: Ic68d2959db35254e2895f11294fb25de4d4b8a81
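As an illustration of the SE(3) case mentioned above, a minimal sketch using a single 7-dimensional block, quaternion followed by translation (se3 and problem are hypothetical names):

    double se3[7] = {1, 0, 0, 0, 0, 0, 0};  // identity rotation, zero translation
    problem.AddParameterBlock(
        se3, 7,
        new ceres::ProductParameterization(
            new ceres::QuaternionParameterization(),   // SO(3) factor
            new ceres::IdentityParameterization(3)));  // R^3 factor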
commit 7b4fb69dad49eaefb5d2d47ef0d76f48ad7fef73
commit 16980b4fec846f86910c18772b8145bcb55f4728
Author: Mike Vitus <vitus@google.com>
Date: Fri Jul 15 13:37:49 2016 -0700
Delete the remove_definitions command from sampled_functions
CMakeLists.txt because it will be inherited from the top level examples
CMakeLists.txt.
Change-Id: I25593587df0ae84fd8ddddc589bc2a13f3777427
commit a04490be97800e78e59db5eb67fa46226738ffba
Author: Mike Vitus <vitus@google.com>
Date: Thu Jul 14 10:10:13 2016 -0700
Add readme for the sampled_function example.
Change-Id: I9468b6a7b9f2ffdd2bf9f0dd1f4e1d5f894e540c
commit ff11d0e63d4678188e8cabd40a532ba06912fe5a
Author: Alex Stewart <alexs.mac@gmail.com>
Date: Sun Jun 28 21:43:46 2015 +0100
Date: Wed Jun 29 09:31:45 2016 +0100
Cleanup FindGflags & use installed gflags CMake config if present.
Use _j[0,1,n]() Bessel functions on MSVC to avoid deprecation errors.
- Split out gflags namespace detection methods:
check_cxx_source_compiles() & regex, into separate functions.
- Use installed/exported gflags CMake configuration (present for
versions >= 2.1) if available, unless user expresses a preference not
to, or specifies search directories, in which case fall back to manual
search for components.
-- Prefer installed gflags CMake configurations over exported gflags
build directories on all OSs.
- Remove custom version of check_cxx_source_compiles() that attempted
to force the build type of the test project. This only worked for
NMake on Windows, not MSVC as msbuild ignored our attempts to force
the build type. Now we always use the regex method on Windows if
we cannot find an installed gflags CMake configuration which works
even on MSVC by bypassing msbuild.
- Add default search paths for gflags on Windows.
- Microsoft deprecated the POSIX Bessel functions: j[0,1,n]() in favour
of _j[0,1,n](), it appears since at least MSVC 2005:
https://msdn.microsoft.com/en-us/library/ms235384(v=vs.100).aspx.
- As this occurs in jet.h (templated public header), although Ceres
suppresses the warning when it itself is built (to suppress a warning
about the insecurity of using std::copy), it will crop up again in
client code (without this fix) unless it is explicitly suppressed
there also.
- Raised as Issue #190:
https://github.com/ceres-solver/ceres-solver/issues/190.
Change-Id: I083b267d97a7a5838a1314f3d41a61ae48d5a2d7
Change-Id: If7ac5dbb856748f9900be93ec0452a40c0b00524
commit b3063c047906d4a44503dc0187fdcbbfcdda5f38
commit 8ea86e1614cf77644ce782e43cde6565a54444f5
Author: Nicolai Wojke <nwojke@uni-koblenz.de>
Date: Mon Apr 25 14:24:41 2016 +0200
Fix: Copy minimizer option 'is_silent' to LinSearchDirection::Options
Change-Id: I23b4c3383cad30033c539ac93883d77c8dd4ba1a
commit 080ca4c5f2ac42620971a07f06d2d13deb7befa8
Author: Sameer Agarwal <sameeragarwal@google.com>
Date: Sun Apr 24 22:46:54 2016 -0700
Fix typos in users.rst
Change-Id: Ifdc67638a39403354bc9589f42a1b42cb9984dd2
commit 21ab397dc55335c147fdd795899b1f8981037b09
Author: Sameer Agarwal <sameeragarwal@google.com>
Date: Sun Apr 24 21:13:00 2016 -0700
Make some Jet comparisons exact.
Change-Id: Ia08c72f3b8779df96f5c0d5a954b2c0a1dd3a061
commit ee40f954cf464087eb8943abf4d9db8917a33fbe
Author: Sameer Agarwal <sameeragarwal@google.com>
Date: Sun Apr 24 07:49:55 2016 -0700
Add colmap to users.rst
Change-Id: I452a8c1dc6a3bc55734b2fc3a4002ff7939ba863
commit 9665e099022bd06e53b0779550e9aebded7f274d
Author: Sameer Agarwal <sameeragarwal@google.com>
Date: Mon Apr 18 06:00:58 2016 -0700
Fix step norm evaluation in LineSearchMinimizer
TrustRegionMinimizer evaluates the size of the step
taken in the ambient space, whereas the LineSearchMinimizer
was using the norm in the tangent space. This change fixes
this discrepancy.
Change-Id: I9fef64cbb5622c9769c0413003cfb1dc6e89cfa3
commit 620ca9d0668cd4a00402264fddca3cf6bd2e7265
Author: Alex Stewart <alexs.mac@gmail.com>
Date: Wed Jul 15 20:56:56 2015 +0100
Date: Mon Apr 18 15:14:11 2016 +0100
Add default glog install location on Windows to search paths.
Remove use of -Werror when compiling Ceres.
Change-Id: I083d368be48986e6780c11460f5a07b2f3b6c900
- As noted in Issue #193 (in that case for GCC 6), Ceres' use of -Werror
when compiling on *nix can prevent compilation on new compilers that
add new warnings and there is an inevitable delay between new compiler
versions and Ceres versions.
- Removing the explicit use of -Werror, and relying on indirect
verification by maintainers should fix build issues for Ceres releases
on newer compilers.
Change-Id: I38e9ade28d4a90e53dcd918a7d470f1a1debd7b4
commit 0c63bd3efbf1d41151c9fab41d4b77dc64c572c8
Author: Mike Vitus <vitus@google.com>
Date: Thu Apr 14 10:25:52 2016 -0700
Add floor and ceil functions to the Jet implementation.
Change-Id: I72ebfb0e9ade2964dbf3a014225ead345d5ae352
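A minimal sketch of the new functions; since floor and ceil are piecewise constant, the derivative part of the returned Jet is zero:

    ceres::Jet<double, 1> x(2.7, 0);             // value 2.7, unit derivative w.r.t. component 0
    ceres::Jet<double, 1> lo = ceres::floor(x);  // lo.a == 2.0, lo.v is zero
    ceres::Jet<double, 1> hi = ceres::ceil(x);   // hi.a == 3.0, hi.v is zero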
commit 9843f3280356c158d23c06a16085c6c5ba35e053
Author: Alex Stewart <alexs.mac@gmail.com>
Date: Mon Mar 7 21:24:32 2016 +0000
Report Ceres compile options as components in find_package().
- Users can now specify particular components from Ceres (such as
SuiteSparse support) that must be present in a detected version of
Ceres in order for it to be reported as found by find_package().
- This allows users to specify for example that they require a version
of Ceres with SuiteSparse support at configure time, rather than
finding out only at run time that Ceres was not compiled with the
options they require.
- The list of available components are built directly from the Ceres
compile options.
- The meta-module SparseLinearAlgebraLibrary is present if at least
one sparse linear algebra backend is available.
Change-Id: I65f1ddfd7697e6dd25bb4ac7e54f5097d3ca6266
commit e4d4d88bbe51b9cc0f7450171511abbea0779790
Author: Timer <linyicx@126.com>
Date: Fri Apr 8 15:42:18 2016 +0800
Fix a spelling error in nnls_modeling.rst
Change-Id: I341d901d3df993bc5397ed15e6cb330b0c38fd72
commit 5512f58536e1be0d92010d8325b606e7b4733a08
Author: Keir Mierle <mierle@gmail.com>
Date: Thu Apr 7 12:03:16 2016 -0700
Only use collapse() directive with OpenMP 3.0 or higher
Change-Id: Icba544c0494763c57eb6dc61e98379312ca15972
commit d61e94da5225217cab7b4f93b72f97055094681f
Author: Thomas Schneider <schneith@ethz.ch>
Date: Wed Apr 6 10:40:29 2016 +0200
Add IsParameterBlockConstant to the ceres::Problem class.
Change-Id: I7d0e828e81324443209c17fa54dd1d37605e5bfe
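A minimal sketch (x is a hypothetical parameter block already added to an existing ceres::Problem named problem):

    problem.SetParameterBlockConstant(x);
    CHECK(problem.IsParameterBlockConstant(x));    // now true
    problem.SetParameterBlockVariable(x);
    CHECK(!problem.IsParameterBlockConstant(x));   // constant flag cleared again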
commit 77d94b34741574e958a417561702d6093fba87fb
Author: Alex Stewart <alexs.mac@gmail.com>
Date: Sun Feb 14 16:54:03 2016 +0000
Fix install path for CeresConfig.cmake to be architecture-aware.
- Previously we were auto-detecting a "64" suffix for the install path
for the Ceres library on non-Debian/Arch Linux distributions, but
we were installing CeresConfig.cmake to an architecture independent
location.
- We now install CeresConfig.cmake to lib${LIB_SUFFIX}/cmake/Ceres.
- Also make LIB_SUFFIX visible to the user in the CMake GUI s/t they can
easily override the auto-detected value if desired.
- Reported by jpgr87@gmail.com as Issue #194.
Change-Id: If126260d7af685779487c01220ae178ac31f7aea


@@ -173,26 +173,5 @@ if(WITH_OPENMP)
)
endif()
TEST_UNORDERED_MAP_SUPPORT()
if(HAVE_STD_UNORDERED_MAP_HEADER)
if(HAVE_UNORDERED_MAP_IN_STD_NAMESPACE)
add_definitions(-DCERES_STD_UNORDERED_MAP)
else()
if(HAVE_UNORDERED_MAP_IN_TR1_NAMESPACE)
add_definitions(-DCERES_STD_UNORDERED_MAP_IN_TR1_NAMESPACE)
else()
add_definitions(-DCERES_NO_UNORDERED_MAP)
message(STATUS "Replacing unordered_map/set with map/set (warning: slower!)")
endif()
endif()
else()
if(HAVE_UNORDERED_MAP_IN_TR1_NAMESPACE)
add_definitions(-DCERES_TR1_UNORDERED_MAP)
else()
add_definitions(-DCERES_NO_UNORDERED_MAP)
message(STATUS "Replacing unordered_map/set with map/set (warning: slower!)")
endif()
endif()
blender_add_lib(extern_ceres "\${SRC}" "\${INC}" "\${INC_SYS}")
EOF


@@ -149,6 +149,7 @@ internal/ceres/generated/schur_eliminator_4_4_d.cc
internal/ceres/generated/schur_eliminator_d_d_d.cc
internal/ceres/generate_eliminator_specialization.py
internal/ceres/generate_partitioned_matrix_view_specializations.py
internal/ceres/gradient_checker.cc
internal/ceres/gradient_checking_cost_function.cc
internal/ceres/gradient_checking_cost_function.h
internal/ceres/gradient_problem.cc
@@ -160,6 +161,8 @@ internal/ceres/householder_vector.h
internal/ceres/implicit_schur_complement.cc
internal/ceres/implicit_schur_complement.h
internal/ceres/integral_types.h
internal/ceres/is_close.cc
internal/ceres/is_close.h
internal/ceres/iterative_schur_complement_solver.cc
internal/ceres/iterative_schur_complement_solver.h
internal/ceres/lapack.cc
@@ -243,6 +246,8 @@ internal/ceres/trust_region_minimizer.cc
internal/ceres/trust_region_minimizer.h
internal/ceres/trust_region_preprocessor.cc
internal/ceres/trust_region_preprocessor.h
internal/ceres/trust_region_step_evaluator.cc
internal/ceres/trust_region_step_evaluator.h
internal/ceres/trust_region_strategy.cc
internal/ceres/trust_region_strategy.h
internal/ceres/types.cc


@@ -130,7 +130,8 @@ class CostFunctionToFunctor {
const int num_parameter_blocks =
(N0 > 0) + (N1 > 0) + (N2 > 0) + (N3 > 0) + (N4 > 0) +
(N5 > 0) + (N6 > 0) + (N7 > 0) + (N8 > 0) + (N9 > 0);
CHECK_EQ(parameter_block_sizes.size(), num_parameter_blocks);
CHECK_EQ(static_cast<int>(parameter_block_sizes.size()),
num_parameter_blocks);
CHECK_EQ(N0, parameter_block_sizes[0]);
if (parameter_block_sizes.size() > 1) CHECK_EQ(N1, parameter_block_sizes[1]); // NOLINT


@@ -357,6 +357,28 @@ class CERES_EXPORT Covariance {
const double*> >& covariance_blocks,
Problem* problem);
// Compute a part of the covariance matrix.
//
// The vector parameter_blocks contains the parameter blocks that
// are used for computing the covariance matrix. From this vector
// all covariance pairs are generated. This allows the covariance
// estimation algorithm to only compute and store these blocks.
//
// parameter_blocks cannot contain duplicates. Bad things will
// happen if they do.
//
// Note that the list of covariance_blocks is only used to determine
// what parts of the covariance matrix are computed. The full
// Jacobian is used to do the computation, i.e. they do not have an
// impact on what part of the Jacobian is used for computation.
//
// The return value indicates the success or failure of the
// covariance computation. Please see the documentation for
// Covariance::Options for more on the conditions under which this
// function returns false.
bool Compute(const std::vector<const double*>& parameter_blocks,
Problem* problem);
// Return the block of the cross-covariance matrix corresponding to
// parameter_block1 and parameter_block2.
//
@@ -394,6 +416,40 @@ class CERES_EXPORT Covariance {
const double* parameter_block2,
double* covariance_block) const;
// Return the covariance matrix corresponding to all parameter_blocks.
//
// Compute must be called before calling GetCovarianceMatrix and all
// parameter_blocks must have been present in the vector
// parameter_blocks when Compute was called. Otherwise
// GetCovarianceMatrix returns false.
//
// covariance_matrix must point to a memory location that can store
// the size of the covariance matrix. The covariance matrix will be
// a square matrix whose row and column count is equal to the sum of
// the sizes of the individual parameter blocks. The covariance
// matrix will be a row-major matrix.
bool GetCovarianceMatrix(const std::vector<const double *> &parameter_blocks,
double *covariance_matrix);
// Return the covariance matrix corresponding to parameter_blocks
// in the tangent space if a local parameterization is associated
// with one of the parameter blocks else returns the covariance
// matrix in the ambient space.
//
// Compute must be called before calling GetCovarianceMatrix and all
// parameter_blocks must have been present in the vector
// parameters_blocks when Compute was called. Otherwise
// GetCovarianceMatrix returns false.
//
// covariance_matrix must point to a memory location that can store
// the size of the covariance matrix. The covariance matrix will be
// a square matrix whose row and column count is equal to the sum of
// the sizes of the tangent spaces of the individual parameter
// blocks. The covariance matrix will be a row-major matrix.
bool GetCovarianceMatrixInTangentSpace(
const std::vector<const double*>& parameter_blocks,
double* covariance_matrix);
private:
internal::scoped_ptr<internal::CovarianceImpl> impl_;
};
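For reference, a minimal usage sketch of the interface documented above; x (size 2) and q (size 4, carrying a quaternion-style local parameterization) are hypothetical parameter blocks of an existing ceres::Problem named problem:

    ceres::Covariance::Options options;
    ceres::Covariance covariance(options);

    std::vector<const double*> parameter_blocks;
    parameter_blocks.push_back(x);
    parameter_blocks.push_back(q);

    if (covariance.Compute(parameter_blocks, &problem)) {
      std::vector<double> ambient(6 * 6);   // 2 + 4 rows/columns in the ambient space
      covariance.GetCovarianceMatrix(parameter_blocks, ambient.data());

      std::vector<double> tangent(5 * 5);   // 2 + 3 rows/columns in the tangent space
      covariance.GetCovarianceMatrixInTangentSpace(parameter_blocks, tangent.data());
    }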


@@ -85,22 +85,6 @@ class DynamicNumericDiffCostFunction : public CostFunction {
options_(options) {
}
// Deprecated. New users should avoid using this constructor. Instead, use the
// constructor with NumericDiffOptions.
DynamicNumericDiffCostFunction(
const CostFunctor* functor,
Ownership ownership,
double relative_step_size)
: functor_(functor),
ownership_(ownership),
options_() {
LOG(WARNING) << "This constructor is deprecated and will be removed in "
"a future version. Please use the NumericDiffOptions "
"constructor instead.";
options_.relative_step_size = relative_step_size;
}
virtual ~DynamicNumericDiffCostFunction() {
if (ownership_ != TAKE_OWNERSHIP) {
functor_.release();
@@ -138,19 +122,19 @@ class DynamicNumericDiffCostFunction : public CostFunction {
std::vector<double> parameters_copy(parameters_size);
std::vector<double*> parameters_references_copy(block_sizes.size());
parameters_references_copy[0] = &parameters_copy[0];
for (int block = 1; block < block_sizes.size(); ++block) {
for (size_t block = 1; block < block_sizes.size(); ++block) {
parameters_references_copy[block] = parameters_references_copy[block - 1]
+ block_sizes[block - 1];
}
// Copy the parameters into the local temp space.
for (int block = 0; block < block_sizes.size(); ++block) {
for (size_t block = 0; block < block_sizes.size(); ++block) {
memcpy(parameters_references_copy[block],
parameters[block],
block_sizes[block] * sizeof(*parameters[block]));
}
for (int block = 0; block < block_sizes.size(); ++block) {
for (size_t block = 0; block < block_sizes.size(); ++block) {
if (jacobians[block] != NULL &&
!NumericDiff<CostFunctor, method, DYNAMIC,
DYNAMIC, DYNAMIC, DYNAMIC, DYNAMIC, DYNAMIC,

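With the deprecated constructor above removed, the relative step size is supplied through NumericDiffOptions. A minimal sketch; MyDynamicFunctor is a hypothetical functor exposing bool operator()(double const* const* parameters, double* residuals) const:

    ceres::NumericDiffOptions options;
    options.relative_step_size = 1e-6;

    ceres::DynamicNumericDiffCostFunction<MyDynamicFunctor>* cost_function =
        new ceres::DynamicNumericDiffCostFunction<MyDynamicFunctor>(
            new MyDynamicFunctor, ceres::TAKE_OWNERSHIP, options);
    cost_function->AddParameterBlock(3);  // one 3-dimensional parameter block
    cost_function->SetNumResiduals(2);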

@@ -27,194 +27,121 @@
// POSSIBILITY OF SUCH DAMAGE.
// Copyright 2007 Google Inc. All Rights Reserved.
//
// Author: wjr@google.com (William Rucklidge)
//
// This file contains a class that exercises a cost function, to make sure
// that it is computing reasonable derivatives. It compares the Jacobians
// computed by the cost function with those obtained by finite
// differences.
// Authors: wjr@google.com (William Rucklidge),
// keir@google.com (Keir Mierle),
// dgossow@google.com (David Gossow)
#ifndef CERES_PUBLIC_GRADIENT_CHECKER_H_
#define CERES_PUBLIC_GRADIENT_CHECKER_H_
#include <cstddef>
#include <algorithm>
#include <vector>
#include <string>
#include "ceres/cost_function.h"
#include "ceres/dynamic_numeric_diff_cost_function.h"
#include "ceres/internal/eigen.h"
#include "ceres/internal/fixed_array.h"
#include "ceres/internal/macros.h"
#include "ceres/internal/scoped_ptr.h"
#include "ceres/numeric_diff_cost_function.h"
#include "ceres/local_parameterization.h"
#include "glog/logging.h"
namespace ceres {
// An object that exercises a cost function, to compare the answers that it
// gives with derivatives estimated using finite differencing.
// GradientChecker compares the Jacobians returned by a cost function against
// derivatives estimated using finite differencing.
//
// The only likely usage of this is for testing.
// The condition enforced is that
//
// (J_actual(i, j) - J_numeric(i, j))
// ------------------------------------ < relative_precision
// max(J_actual(i, j), J_numeric(i, j))
//
// where J_actual(i, j) is the jacobian as computed by the supplied cost
// function (by the user) multiplied by the local parameterization Jacobian
// and J_numeric is the jacobian as computed by finite differences, multiplied
// by the local parameterization Jacobian as well.
//
// How to use: Fill in an array of pointers to parameter blocks for your
// CostFunction, and then call Probe(). Check that the return value is
// 'true'. See prober_test.cc for an example.
//
// This is templated similarly to NumericDiffCostFunction, as it internally
// uses that.
template <typename CostFunctionToProbe,
int M = 0, int N0 = 0, int N1 = 0, int N2 = 0, int N3 = 0, int N4 = 0>
// CostFunction, and then call Probe(). Check that the return value is 'true'.
class GradientChecker {
public:
// Here we stash some results from the probe, for later
// inspection.
struct GradientCheckResults {
// Computed cost.
Vector cost;
// This will not take ownership of the cost function or local
// parameterizations.
//
// function: The cost function to probe.
// local_parameterization: A vector of local parameterizations for each
// parameter. May be NULL or contain NULL pointers to indicate that the
// respective parameter does not have a local parameterization.
// options: Options to use for numerical differentiation.
GradientChecker(
const CostFunction* function,
const std::vector<const LocalParameterization*>* local_parameterizations,
const NumericDiffOptions& options);
// The sizes of these matrices are dictated by the cost function's
// parameter and residual block sizes. Each vector's length will
// be term->parameter_block_sizes().size(), and each matrix is the
// Jacobian of the residual with respect to the corresponding parameter
// block.
// Contains results from a call to Probe for later inspection.
struct ProbeResults {
// The return value of the cost function.
bool return_value;
// Computed residual vector.
Vector residuals;
// The sizes of the Jacobians below are dictated by the cost function's
// parameter block size and residual block sizes. If a parameter block
// has a local parameterization associated with it, the size of the "local"
// Jacobian will be determined by the local parameterization dimension and
// residual block size, otherwise it will be identical to the regular
// Jacobian.
// Derivatives as computed by the cost function.
std::vector<Matrix> term_jacobians;
std::vector<Matrix> jacobians;
// Derivatives as computed by finite differencing.
std::vector<Matrix> finite_difference_jacobians;
// Derivatives as computed by the cost function in local space.
std::vector<Matrix> local_jacobians;
// Infinity-norm of term_jacobians - finite_difference_jacobians.
double error_jacobians;
// Derivatives as computed by numerical differentiation in local space.
std::vector<Matrix> numeric_jacobians;
// Derivatives as computed by numerical differentiation in local space.
std::vector<Matrix> local_numeric_jacobians;
// Contains the maximum relative error found in the local Jacobians.
double maximum_relative_error;
// If an error was detected, this will contain a detailed description of
// that error.
std::string error_log;
};
// Checks the Jacobian computed by a cost function.
// Call the cost function, compute alternative Jacobians using finite
// differencing and compare results. If local parameterizations are given,
// the Jacobians will be multiplied by the local parameterization Jacobians
// before performing the check, which effectively means that all errors along
// the null space of the local parameterization will be ignored.
// Returns false if the Jacobians don't match, if the cost function returns false,
// or if the cost function returns a different residual when called with a
// Jacobian output argument vs. calling it without. Otherwise returns true.
//
// probe_point: The parameter values at which to probe.
// error_tolerance: A threshold for the infinity-norm difference
// between the Jacobians. If the Jacobians differ by more than
// this amount, then the probe fails.
//
// term: The cost function to test. Not retained after this call returns.
//
// results: On return, the two Jacobians (and other information)
// will be stored here. May be NULL.
// parameters: The parameter values at which to probe.
// relative_precision: A threshold for the relative difference between the
// Jacobians. If the Jacobians differ by more than this amount, then the
// probe fails.
// results: On return, the Jacobians (and other information) will be stored
// here. May be NULL.
//
// Returns true if no problems are detected and the difference between the
// Jacobians is less than error_tolerance.
static bool Probe(double const* const* probe_point,
double error_tolerance,
CostFunctionToProbe *term,
GradientCheckResults* results) {
CHECK_NOTNULL(probe_point);
CHECK_NOTNULL(term);
LOG(INFO) << "-------------------- Starting Probe() --------------------";
// We need a GradientCheckResults, whether or not they supplied one.
internal::scoped_ptr<GradientCheckResults> owned_results;
if (results == NULL) {
owned_results.reset(new GradientCheckResults);
results = owned_results.get();
}
// Do a consistency check between the term and the template parameters.
CHECK_EQ(M, term->num_residuals());
const int num_residuals = M;
const std::vector<int32>& block_sizes = term->parameter_block_sizes();
const int num_blocks = block_sizes.size();
CHECK_LE(num_blocks, 5) << "Unable to test functions that take more "
<< "than 5 parameter blocks";
if (N0) {
CHECK_EQ(N0, block_sizes[0]);
CHECK_GE(num_blocks, 1);
} else {
CHECK_LT(num_blocks, 1);
}
if (N1) {
CHECK_EQ(N1, block_sizes[1]);
CHECK_GE(num_blocks, 2);
} else {
CHECK_LT(num_blocks, 2);
}
if (N2) {
CHECK_EQ(N2, block_sizes[2]);
CHECK_GE(num_blocks, 3);
} else {
CHECK_LT(num_blocks, 3);
}
if (N3) {
CHECK_EQ(N3, block_sizes[3]);
CHECK_GE(num_blocks, 4);
} else {
CHECK_LT(num_blocks, 4);
}
if (N4) {
CHECK_EQ(N4, block_sizes[4]);
CHECK_GE(num_blocks, 5);
} else {
CHECK_LT(num_blocks, 5);
}
results->term_jacobians.clear();
results->term_jacobians.resize(num_blocks);
results->finite_difference_jacobians.clear();
results->finite_difference_jacobians.resize(num_blocks);
internal::FixedArray<double*> term_jacobian_pointers(num_blocks);
internal::FixedArray<double*>
finite_difference_jacobian_pointers(num_blocks);
for (int i = 0; i < num_blocks; i++) {
results->term_jacobians[i].resize(num_residuals, block_sizes[i]);
term_jacobian_pointers[i] = results->term_jacobians[i].data();
results->finite_difference_jacobians[i].resize(
num_residuals, block_sizes[i]);
finite_difference_jacobian_pointers[i] =
results->finite_difference_jacobians[i].data();
}
results->cost.resize(num_residuals, 1);
CHECK(term->Evaluate(probe_point, results->cost.data(),
term_jacobian_pointers.get()));
NumericDiffCostFunction<CostFunctionToProbe, CENTRAL, M, N0, N1, N2, N3, N4>
numeric_term(term, DO_NOT_TAKE_OWNERSHIP);
CHECK(numeric_term.Evaluate(probe_point, results->cost.data(),
finite_difference_jacobian_pointers.get()));
results->error_jacobians = 0;
for (int i = 0; i < num_blocks; i++) {
Matrix jacobian_difference = results->term_jacobians[i] -
results->finite_difference_jacobians[i];
results->error_jacobians =
std::max(results->error_jacobians,
jacobian_difference.lpNorm<Eigen::Infinity>());
}
LOG(INFO) << "========== term-computed derivatives ==========";
for (int i = 0; i < num_blocks; i++) {
LOG(INFO) << "term_computed block " << i;
LOG(INFO) << "\n" << results->term_jacobians[i];
}
LOG(INFO) << "========== finite-difference derivatives ==========";
for (int i = 0; i < num_blocks; i++) {
LOG(INFO) << "finite_difference block " << i;
LOG(INFO) << "\n" << results->finite_difference_jacobians[i];
}
LOG(INFO) << "========== difference ==========";
for (int i = 0; i < num_blocks; i++) {
LOG(INFO) << "difference block " << i;
LOG(INFO) << (results->term_jacobians[i] -
results->finite_difference_jacobians[i]);
}
LOG(INFO) << "||difference|| = " << results->error_jacobians;
return results->error_jacobians < error_tolerance;
}
bool Probe(double const* const* parameters,
double relative_precision,
ProbeResults* results) const;
private:
CERES_DISALLOW_IMPLICIT_CONSTRUCTORS(GradientChecker);
std::vector<const LocalParameterization*> local_parameterizations_;
const CostFunction* function_;
internal::scoped_ptr<CostFunction> finite_diff_cost_function_;
};
} // namespace ceres
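
A minimal usage sketch of the new GradientChecker API declared above. MyCostFunction, the parameter block x and the tolerance are hypothetical placeholders; only the constructor and Probe() signatures come from this patch.

  // Hypothetical cost function with one 3-dimensional parameter block.
  MyCostFunction my_cost_function;
  std::vector<const ceres::LocalParameterization*> parameterizations(1, NULL);
  ceres::NumericDiffOptions numeric_diff_options;
  ceres::GradientChecker checker(
      &my_cost_function, &parameterizations, numeric_diff_options);

  double x[3] = {1.0, 2.0, 3.0};
  const double* parameter_blocks[] = {x};
  ceres::GradientChecker::ProbeResults results;
  if (!checker.Probe(parameter_blocks, 1e-9, &results)) {
    LOG(ERROR) << "Gradient check failed:\n" << results.error_log;
  }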

View File

@@ -33,9 +33,8 @@
// This file needs to compile as c code.
#ifdef __cplusplus
#include <cstddef>
#include "ceres/internal/config.h"
#if defined(CERES_TR1_MEMORY_HEADER)
#include <tr1/memory>
#else
@@ -50,6 +49,25 @@ using std::tr1::shared_ptr;
using std::shared_ptr;
#endif
// We allocate some Eigen objects on the stack and in other places where
// they might not be aligned to 16-byte boundaries. If we have C++11, we
// can specify their alignment anyway, and thus can safely enable
// vectorization on those matrices; in C++98, we are out of luck. Figure out
// what case we're in and write macros that do the right thing.
#ifdef CERES_USE_CXX11
namespace port_constants {
static constexpr size_t kMaxAlignBytes =
// Work around a GCC 4.8 bug
// (https://gcc.gnu.org/bugzilla/show_bug.cgi?id=56019) where
// std::max_align_t is misplaced.
#if defined (__GNUC__) && __GNUC__ == 4 && __GNUC_MINOR__ == 8
alignof(::max_align_t);
#else
alignof(std::max_align_t);
#endif
} // namespace port_constants
#endif
} // namespace ceres
#endif // __cplusplus

View File

@@ -69,7 +69,7 @@ struct CERES_EXPORT IterationSummary {
// Step was numerically valid, i.e., all values are finite and the
// step reduces the value of the linearized model.
//
// Note: step_is_valid is false when iteration = 0.
// Note: step_is_valid is always true when iteration = 0.
bool step_is_valid;
// Step did not reduce the value of the objective function
@@ -77,7 +77,7 @@ struct CERES_EXPORT IterationSummary {
// acceptance criterion used by the non-monotonic trust region
// algorithm.
//
// Note: step_is_nonmonotonic is false when iteration = 0;
// Note: step_is_nonmonotonic is always false when iteration = 0;
bool step_is_nonmonotonic;
// Whether or not the minimizer accepted this step or not. If the
@@ -89,7 +89,7 @@ struct CERES_EXPORT IterationSummary {
// relative decrease is not sufficient, the algorithm may accept the
// step and the step is declared successful.
//
// Note: step_is_successful is false when iteration = 0.
// Note: step_is_successful is always true when iteration = 0.
bool step_is_successful;
// Value of the objective function.

View File

@@ -164,6 +164,7 @@
#include "Eigen/Core"
#include "ceres/fpclassify.h"
#include "ceres/internal/port.h"
namespace ceres {
@@ -227,21 +228,23 @@ struct Jet {
T a;
// The infinitesimal part.
//
// Note the Eigen::DontAlign bit is needed here because this object
// gets allocated on the stack and as part of other arrays and
// structs. Forcing the right alignment there is the source of much
// pain and suffering. Even if that works, passing Jets around to
// functions by value has problems because the C++ ABI does not
// guarantee alignment for function arguments.
//
// Setting the DontAlign bit prevents Eigen from using SSE for the
// various operations on Jets. This is a small performance penalty
// since the AutoDiff code will still expose much of the code as
// statically sized loops to the compiler. But given the subtle
// issues that arise due to alignment, especially when dealing with
// multiple platforms, it seems to be a trade off worth making.
// We allocate Jets on the stack and in other places where they
// might not be aligned to 16-byte boundaries. If we have C++11, we
// can specify their alignment anyway, and thus can safely enable
// vectorization on those matrices; in C++98, we are out of luck. Figure out
// what case we're in and do the right thing.
#ifndef CERES_USE_CXX11
// fall back to safe version:
Eigen::Matrix<T, N, 1, Eigen::DontAlign> v;
#else
static constexpr bool kShouldAlignMatrix =
16 <= ::ceres::port_constants::kMaxAlignBytes;
static constexpr int kAlignHint = kShouldAlignMatrix ?
Eigen::AutoAlign : Eigen::DontAlign;
static constexpr size_t kAlignment = kShouldAlignMatrix ? 16 : 1;
alignas(kAlignment) Eigen::Matrix<T, N, 1, kAlignHint> v;
#endif
};
// Unary +
@@ -388,6 +391,8 @@ inline double atan (double x) { return std::atan(x); }
inline double sinh (double x) { return std::sinh(x); }
inline double cosh (double x) { return std::cosh(x); }
inline double tanh (double x) { return std::tanh(x); }
inline double floor (double x) { return std::floor(x); }
inline double ceil (double x) { return std::ceil(x); }
inline double pow (double x, double y) { return std::pow(x, y); }
inline double atan2(double y, double x) { return std::atan2(y, x); }
@@ -482,10 +487,51 @@ Jet<T, N> tanh(const Jet<T, N>& f) {
return Jet<T, N>(tanh_a, tmp * f.v);
}
// The floor function should be used with extreme care as this operation will
// result in a zero derivative which provides no information to the solver.
//
// floor(a + h) ~= floor(a) + 0
template <typename T, int N> inline
Jet<T, N> floor(const Jet<T, N>& f) {
return Jet<T, N>(floor(f.a));
}
// The ceil function should be used with extreme care as this operation will
// result in a zero derivative which provides no information to the solver.
//
// ceil(a + h) ~= ceil(a) + 0
template <typename T, int N> inline
Jet<T, N> ceil(const Jet<T, N>& f) {
return Jet<T, N>(ceil(f.a));
}
// Bessel functions of the first kind with integer order equal to 0, 1, n.
inline double BesselJ0(double x) { return j0(x); }
inline double BesselJ1(double x) { return j1(x); }
inline double BesselJn(int n, double x) { return jn(n, x); }
//
// Microsoft has deprecated the j[0,1,n]() POSIX Bessel functions in favour of
// _j[0,1,n](). Where available on MSVC, use _j[0,1,n]() to avoid deprecated
// function errors in client code (the specific warning is suppressed when
// Ceres itself is built).
inline double BesselJ0(double x) {
#if defined(_MSC_VER) && defined(_j0)
return _j0(x);
#else
return j0(x);
#endif
}
inline double BesselJ1(double x) {
#if defined(_MSC_VER) && defined(_j1)
return _j1(x);
#else
return j1(x);
#endif
}
inline double BesselJn(int n, double x) {
#if defined(_MSC_VER) && defined(_jn)
return _jn(n, x);
#else
return jn(n, x);
#endif
}
// For the formulae of the derivatives of the Bessel functions see the book:
// Olver, Lozier, Boisvert, Clark, NIST Handbook of Mathematical Functions,
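
As a sketch of how the Jet overloads follow from the standard identities J0'(x) = -J1(x) and J1'(x) = (J0(x) - J2(x)) / 2; the exact upstream implementation is not reproduced in this hunk:

  // j0(a + h) ~= j0(a) - j1(a) h
  template <typename T, int N> inline
  Jet<T, N> BesselJ0(const Jet<T, N>& f) {
    return Jet<T, N>(BesselJ0(f.a), -BesselJ1(f.a) * f.v);
  }

  // j1(a + h) ~= j1(a) + 0.5 (j0(a) - jn(2, a)) h
  template <typename T, int N> inline
  Jet<T, N> BesselJ1(const Jet<T, N>& f) {
    return Jet<T, N>(BesselJ1(f.a),
                     T(0.5) * (BesselJ0(f.a) - BesselJn(2, f.a)) * f.v);
  }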
@@ -743,7 +789,15 @@ template<typename T, int N> inline Jet<T, N> ei_pow (const Jet<T, N>& x,
// strange compile errors.
template <typename T, int N>
inline std::ostream &operator<<(std::ostream &s, const Jet<T, N>& z) {
return s << "[" << z.a << " ; " << z.v.transpose() << "]";
s << "[" << z.a << " ; ";
for (int i = 0; i < N; ++i) {
s << z.v[i];
if (i != N - 1) {
s << ", ";
}
}
s << "]";
return s;
}
} // namespace ceres
@@ -757,6 +811,7 @@ struct NumTraits<ceres::Jet<T, N> > {
typedef ceres::Jet<T, N> Real;
typedef ceres::Jet<T, N> NonInteger;
typedef ceres::Jet<T, N> Nested;
typedef ceres::Jet<T, N> Literal;
static typename ceres::Jet<T, N> dummy_precision() {
return ceres::Jet<T, N>(1e-12);
@@ -777,6 +832,21 @@ struct NumTraits<ceres::Jet<T, N> > {
HasFloatingPoint = 1,
RequireInitialization = 1
};
template<bool Vectorized>
struct Div {
enum {
#if defined(EIGEN_VECTORIZE_AVX)
AVX = true,
#else
AVX = false,
#endif
// Assuming that for Jets, division is as expensive as
// multiplication.
Cost = 3
};
};
};
} // namespace Eigen

View File

@@ -211,6 +211,28 @@ class CERES_EXPORT QuaternionParameterization : public LocalParameterization {
virtual int LocalSize() const { return 3; }
};
// Implements the quaternion local parameterization for Eigen's representation
// of the quaternion. Eigen uses a different internal memory layout for the
// elements of the quaternion than what is commonly used. Specifically, Eigen
// stores the elements in memory as [x, y, z, w] where the real part is last
// whereas it is typically stored first. Note that when creating an Eigen
// quaternion through the constructor, the elements are accepted in w, x, y, z
// order. Since Ceres operates on parameter blocks which are raw double
// pointers, this difference is important and requires a different
// parameterization.
//
// Plus(x, delta) = [sin(|delta|) delta / |delta|, cos(|delta|)] * x
// with * being the quaternion multiplication operator.
class EigenQuaternionParameterization : public ceres::LocalParameterization {
public:
virtual ~EigenQuaternionParameterization() {}
virtual bool Plus(const double* x,
const double* delta,
double* x_plus_delta) const;
virtual bool ComputeJacobian(const double* x,
double* jacobian) const;
virtual int GlobalSize() const { return 4; }
virtual int LocalSize() const { return 3; }
};
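
A hedged sketch of how the new parameterization is meant to be attached to an Eigen quaternion parameter block; the surrounding problem setup is hypothetical and not part of this diff:

  Eigen::Quaterniond q = Eigen::Quaterniond::Identity();
  ceres::Problem problem;
  // q.coeffs() stores [x, y, z, w], matching the layout this
  // parameterization expects.
  problem.AddParameterBlock(q.coeffs().data(), 4,
                            new ceres::EigenQuaternionParameterization);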
// This provides a parameterization for homogeneous vectors which are commonly
// used in Structure for Motion problems. One example where they are used is

View File

@@ -206,29 +206,6 @@ class NumericDiffCostFunction
}
}
// Deprecated. New users should avoid using this constructor. Instead, use the
// constructor with NumericDiffOptions.
NumericDiffCostFunction(CostFunctor* functor,
Ownership ownership,
int num_residuals,
const double relative_step_size)
:functor_(functor),
ownership_(ownership),
options_() {
LOG(WARNING) << "This constructor is deprecated and will be removed in "
"a future version. Please use the NumericDiffOptions "
"constructor instead.";
if (kNumResiduals == DYNAMIC) {
SizedCostFunction<kNumResiduals,
N0, N1, N2, N3, N4,
N5, N6, N7, N8, N9>
::set_num_residuals(num_residuals);
}
options_.relative_step_size = relative_step_size;
}
~NumericDiffCostFunction() {
if (ownership_ != TAKE_OWNERSHIP) {
functor_.release();

View File

@@ -309,6 +309,9 @@ class CERES_EXPORT Problem {
// Allow the indicated parameter block to vary during optimization.
void SetParameterBlockVariable(double* values);
// Returns true if a parameter block is set constant, and false otherwise.
bool IsParameterBlockConstant(double* values) const;
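
A hedged sketch of the new query in use (x is a hypothetical parameter block already added to the problem):

  problem.SetParameterBlockConstant(x);
  CHECK(problem.IsParameterBlockConstant(x));
  problem.SetParameterBlockVariable(x);
  CHECK(!problem.IsParameterBlockConstant(x));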
// Set the local parameterization for one of the parameter blocks.
// The local_parameterization is owned by the Problem by default. It
// is acceptable to set the same parameterization for multiple
@@ -461,6 +464,10 @@ class CERES_EXPORT Problem {
// parameter block has a local parameterization, then it contributes
// "LocalSize" entries to the gradient vector (and the number of
// columns in the jacobian).
//
// Note 3: This function cannot be called while the problem is being
// solved, for example it cannot be called from an IterationCallback
// at the end of an iteration during a solve.
bool Evaluate(const EvaluateOptions& options,
double* cost,
std::vector<double>* residuals,

View File

@@ -48,7 +48,6 @@
#include <algorithm>
#include <cmath>
#include <limits>
#include "glog/logging.h"
namespace ceres {
@@ -418,7 +417,6 @@ template <typename T>
inline void EulerAnglesToRotationMatrix(const T* euler,
const int row_stride_parameter,
T* R) {
CHECK_EQ(row_stride_parameter, 3);
EulerAnglesToRotationMatrix(euler, RowMajorAdapter3x3(R));
}
@@ -496,7 +494,6 @@ void QuaternionToRotation(const T q[4],
QuaternionToScaledRotation(q, R);
T normalizer = q[0]*q[0] + q[1]*q[1] + q[2]*q[2] + q[3]*q[3];
CHECK_NE(normalizer, T(0));
normalizer = T(1) / normalizer;
for (int i = 0; i < 3; ++i) {

View File

@@ -134,7 +134,7 @@ class CERES_EXPORT Solver {
trust_region_problem_dump_format_type = TEXTFILE;
check_gradients = false;
gradient_check_relative_precision = 1e-8;
numeric_derivative_relative_step_size = 1e-6;
gradient_check_numeric_derivative_relative_step_size = 1e-6;
update_state_every_iteration = false;
}
@@ -701,12 +701,22 @@ class CERES_EXPORT Solver {
// this number, then the jacobian for that cost term is dumped.
double gradient_check_relative_precision;
// Relative shift used for taking numeric derivatives. For finite
// differencing, each dimension is evaluated at slightly shifted
// values; for the case of central difference, this is what gets
// evaluated:
// WARNING: This option only applies to the numeric
// differentiation used for checking the user-provided derivatives
// when Solver::Options::check_gradients is true. If you are
// using NumericDiffCostFunction and are interested in changing
// the step size for numeric differentiation in your cost
// function, please have a look at
// include/ceres/numeric_diff_options.h.
//
// delta = numeric_derivative_relative_step_size;
// Relative shift used for taking numeric derivatives when
// Solver::Options::check_gradients is true.
//
// For finite differencing, each dimension is evaluated at
// slightly shifted values; for the case of central difference,
// this is what gets evaluated:
//
// delta = gradient_check_numeric_derivative_relative_step_size;
// f_initial = f(x)
// f_forward = f((1 + delta) * x)
// f_backward = f((1 - delta) * x)
@@ -723,7 +733,7 @@ class CERES_EXPORT Solver {
// theory a good choice is sqrt(eps) * x, which for doubles means
// about 1e-8 * x. However, I have found this number too
// optimistic. This number should be exposed for users to change.
double numeric_derivative_relative_step_size;
double gradient_check_numeric_derivative_relative_step_size;
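
A hedged sketch of the renamed option in a typical gradient-checking setup; the values shown are illustrative, not defaults taken from this patch:

  ceres::Solver::Options options;
  options.check_gradients = true;
  options.gradient_check_relative_precision = 1e-6;
  options.gradient_check_numeric_derivative_relative_step_size = 1e-7;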
// If true, the user's parameter blocks are updated at the end of
// every Minimizer iteration, otherwise they are updated when the
@@ -801,6 +811,13 @@ class CERES_EXPORT Solver {
// Number of times inner iterations were performed.
int num_inner_iteration_steps;
// Total number of iterations inside the line search algorithm
// across all invocations. We call these iterations "steps" to
// distinguish them from the outer iterations of the line search
// and trust region minimizer algorithms which call the line
// search algorithm as a subroutine.
int num_line_search_steps;
// All times reported below are wall times.
// When the user calls Solve, before the actual optimization

View File

@@ -32,7 +32,7 @@
#define CERES_PUBLIC_VERSION_H_
#define CERES_VERSION_MAJOR 1
#define CERES_VERSION_MINOR 11
#define CERES_VERSION_MINOR 12
#define CERES_VERSION_REVISION 0
// Classic CPP stringification; the extra level of indirection allows the

View File

@@ -46,6 +46,7 @@ namespace internal {
using std::make_pair;
using std::pair;
using std::vector;
using std::adjacent_find;
void CompressedRowJacobianWriter::PopulateJacobianRowAndColumnBlockVectors(
const Program* program, CompressedRowSparseMatrix* jacobian) {
@@ -140,12 +141,21 @@ SparseMatrix* CompressedRowJacobianWriter::CreateJacobian() const {
// Sort the parameters by their position in the state vector.
sort(parameter_indices.begin(), parameter_indices.end());
CHECK(unique(parameter_indices.begin(), parameter_indices.end()) ==
parameter_indices.end())
<< "Ceres internal error: "
<< "Duplicate parameter blocks detected in a cost function. "
<< "This should never happen. Please report this to "
<< "the Ceres developers.";
if (adjacent_find(parameter_indices.begin(), parameter_indices.end()) !=
parameter_indices.end()) {
std::string parameter_block_description;
for (int j = 0; j < num_parameter_blocks; ++j) {
ParameterBlock* parameter_block = residual_block->parameter_blocks()[j];
parameter_block_description +=
parameter_block->ToString() + "\n";
}
LOG(FATAL) << "Ceres internal error: "
<< "Duplicate parameter blocks detected in a cost function. "
<< "This should never happen. Please report this to "
<< "the Ceres developers.\n"
<< "Residual Block: " << residual_block->ToString() << "\n"
<< "Parameter Blocks: " << parameter_block_description;
}
// Update the row indices.
const int num_residuals = residual_block->NumResiduals();

View File

@@ -38,6 +38,7 @@
namespace ceres {
using std::make_pair;
using std::pair;
using std::vector;
@@ -54,6 +55,12 @@ bool Covariance::Compute(
return impl_->Compute(covariance_blocks, problem->problem_impl_.get());
}
bool Covariance::Compute(
const vector<const double*>& parameter_blocks,
Problem* problem) {
return impl_->Compute(parameter_blocks, problem->problem_impl_.get());
}
bool Covariance::GetCovarianceBlock(const double* parameter_block1,
const double* parameter_block2,
double* covariance_block) const {
@@ -73,4 +80,20 @@ bool Covariance::GetCovarianceBlockInTangentSpace(
covariance_block);
}
bool Covariance::GetCovarianceMatrix(
const vector<const double*>& parameter_blocks,
double* covariance_matrix) {
return impl_->GetCovarianceMatrixInTangentOrAmbientSpace(parameter_blocks,
true, // ambient
covariance_matrix);
}
bool Covariance::GetCovarianceMatrixInTangentSpace(
const std::vector<const double *>& parameter_blocks,
double *covariance_matrix) {
return impl_->GetCovarianceMatrixInTangentOrAmbientSpace(parameter_blocks,
false, // tangent
covariance_matrix);
}
} // namespace ceres
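
A hedged sketch of the new whole-matrix covariance API added above; problem, x and y are hypothetical, with x and y assumed to be 3-dimensional parameter blocks of the solved problem:

  ceres::Covariance::Options options;
  ceres::Covariance covariance(options);
  std::vector<const double*> parameter_blocks;
  parameter_blocks.push_back(x);
  parameter_blocks.push_back(y);
  CHECK(covariance.Compute(parameter_blocks, &problem));
  // 6x6 joint covariance of [x; y] in the ambient space.
  std::vector<double> covariance_matrix(6 * 6);
  covariance.GetCovarianceMatrix(parameter_blocks, covariance_matrix.data());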

View File

@@ -36,6 +36,8 @@
#include <algorithm>
#include <cstdlib>
#include <numeric>
#include <sstream>
#include <utility>
#include <vector>
@@ -43,6 +45,7 @@
#include "Eigen/SparseQR"
#include "Eigen/SVD"
#include "ceres/collections_port.h"
#include "ceres/compressed_col_sparse_matrix_utils.h"
#include "ceres/compressed_row_sparse_matrix.h"
#include "ceres/covariance.h"
@@ -51,6 +54,7 @@
#include "ceres/map_util.h"
#include "ceres/parameter_block.h"
#include "ceres/problem_impl.h"
#include "ceres/residual_block.h"
#include "ceres/suitesparse.h"
#include "ceres/wall_time.h"
#include "glog/logging.h"
@@ -61,6 +65,7 @@ namespace internal {
using std::make_pair;
using std::map;
using std::pair;
using std::sort;
using std::swap;
using std::vector;
@@ -86,8 +91,38 @@ CovarianceImpl::CovarianceImpl(const Covariance::Options& options)
CovarianceImpl::~CovarianceImpl() {
}
template <typename T> void CheckForDuplicates(vector<T> blocks) {
sort(blocks.begin(), blocks.end());
typename vector<T>::iterator it =
std::adjacent_find(blocks.begin(), blocks.end());
if (it != blocks.end()) {
// In case there are duplicates, we search for their location.
map<T, vector<int> > blocks_map;
for (int i = 0; i < blocks.size(); ++i) {
blocks_map[blocks[i]].push_back(i);
}
std::ostringstream duplicates;
while (it != blocks.end()) {
duplicates << "(";
for (int i = 0; i < blocks_map[*it].size() - 1; ++i) {
duplicates << blocks_map[*it][i] << ", ";
}
duplicates << blocks_map[*it].back() << ")";
it = std::adjacent_find(it + 1, blocks.end());
if (it < blocks.end()) {
duplicates << " and ";
}
}
LOG(FATAL) << "Covariance::Compute called with duplicate blocks at "
<< "indices " << duplicates.str();
}
}
bool CovarianceImpl::Compute(const CovarianceBlocks& covariance_blocks,
ProblemImpl* problem) {
CheckForDuplicates<pair<const double*, const double*> >(covariance_blocks);
problem_ = problem;
parameter_block_to_row_index_.clear();
covariance_matrix_.reset(NULL);
@@ -97,6 +132,20 @@ bool CovarianceImpl::Compute(const CovarianceBlocks& covariance_blocks,
return is_valid_;
}
bool CovarianceImpl::Compute(const vector<const double*>& parameter_blocks,
ProblemImpl* problem) {
CheckForDuplicates<const double*>(parameter_blocks);
CovarianceBlocks covariance_blocks;
for (int i = 0; i < parameter_blocks.size(); ++i) {
for (int j = i; j < parameter_blocks.size(); ++j) {
covariance_blocks.push_back(make_pair(parameter_blocks[i],
parameter_blocks[j]));
}
}
return Compute(covariance_blocks, problem);
}
bool CovarianceImpl::GetCovarianceBlockInTangentOrAmbientSpace(
const double* original_parameter_block1,
const double* original_parameter_block2,
@@ -120,9 +169,17 @@ bool CovarianceImpl::GetCovarianceBlockInTangentOrAmbientSpace(
ParameterBlock* block2 =
FindOrDie(parameter_map,
const_cast<double*>(original_parameter_block2));
const int block1_size = block1->Size();
const int block2_size = block2->Size();
MatrixRef(covariance_block, block1_size, block2_size).setZero();
const int block1_local_size = block1->LocalSize();
const int block2_local_size = block2->LocalSize();
if (!lift_covariance_to_ambient_space) {
MatrixRef(covariance_block, block1_local_size, block2_local_size)
.setZero();
} else {
MatrixRef(covariance_block, block1_size, block2_size).setZero();
}
return true;
}
@@ -240,6 +297,94 @@ bool CovarianceImpl::GetCovarianceBlockInTangentOrAmbientSpace(
return true;
}
bool CovarianceImpl::GetCovarianceMatrixInTangentOrAmbientSpace(
const vector<const double*>& parameters,
bool lift_covariance_to_ambient_space,
double* covariance_matrix) const {
CHECK(is_computed_)
<< "Covariance::GetCovarianceMatrix called before Covariance::Compute";
CHECK(is_valid_)
<< "Covariance::GetCovarianceMatrix called when Covariance::Compute "
<< "returned false.";
const ProblemImpl::ParameterMap& parameter_map = problem_->parameter_map();
// For OpenMP compatibility we need to define these vectors in advance
const int num_parameters = parameters.size();
vector<int> parameter_sizes;
vector<int> cum_parameter_size;
parameter_sizes.reserve(num_parameters);
cum_parameter_size.resize(num_parameters + 1);
cum_parameter_size[0] = 0;
for (int i = 0; i < num_parameters; ++i) {
ParameterBlock* block =
FindOrDie(parameter_map, const_cast<double*>(parameters[i]));
if (lift_covariance_to_ambient_space) {
parameter_sizes.push_back(block->Size());
} else {
parameter_sizes.push_back(block->LocalSize());
}
}
std::partial_sum(parameter_sizes.begin(), parameter_sizes.end(),
cum_parameter_size.begin() + 1);
const int max_covariance_block_size =
*std::max_element(parameter_sizes.begin(), parameter_sizes.end());
const int covariance_size = cum_parameter_size.back();
// Assemble the blocks in the covariance matrix.
MatrixRef covariance(covariance_matrix, covariance_size, covariance_size);
const int num_threads = options_.num_threads;
scoped_array<double> workspace(
new double[num_threads * max_covariance_block_size *
max_covariance_block_size]);
bool success = true;
// The collapse() directive is only supported in OpenMP 3.0 and higher. OpenMP
// 3.0 was released in May 2008 (hence the version number).
#if _OPENMP >= 200805
# pragma omp parallel for num_threads(num_threads) schedule(dynamic) collapse(2)
#else
# pragma omp parallel for num_threads(num_threads) schedule(dynamic)
#endif
for (int i = 0; i < num_parameters; ++i) {
for (int j = 0; j < num_parameters; ++j) {
// The second loop can't start from j = i for compatibility with OpenMP
// collapse command. The conditional serves as a workaround.
if (j >= i) {
int covariance_row_idx = cum_parameter_size[i];
int covariance_col_idx = cum_parameter_size[j];
int size_i = parameter_sizes[i];
int size_j = parameter_sizes[j];
#ifdef CERES_USE_OPENMP
int thread_id = omp_get_thread_num();
#else
int thread_id = 0;
#endif
double* covariance_block =
workspace.get() +
thread_id * max_covariance_block_size * max_covariance_block_size;
if (!GetCovarianceBlockInTangentOrAmbientSpace(
parameters[i], parameters[j], lift_covariance_to_ambient_space,
covariance_block)) {
success = false;
}
covariance.block(covariance_row_idx, covariance_col_idx,
size_i, size_j) =
MatrixRef(covariance_block, size_i, size_j);
if (i != j) {
covariance.block(covariance_col_idx, covariance_row_idx,
size_j, size_i) =
MatrixRef(covariance_block, size_i, size_j).transpose();
}
}
}
}
return success;
}
// Determine the sparsity pattern of the covariance matrix based on
// the block pairs requested by the user.
bool CovarianceImpl::ComputeCovarianceSparsity(
@@ -252,18 +397,28 @@ bool CovarianceImpl::ComputeCovarianceSparsity(
vector<double*> all_parameter_blocks;
problem->GetParameterBlocks(&all_parameter_blocks);
const ProblemImpl::ParameterMap& parameter_map = problem->parameter_map();
HashSet<ParameterBlock*> parameter_blocks_in_use;
vector<ResidualBlock*> residual_blocks;
problem->GetResidualBlocks(&residual_blocks);
for (int i = 0; i < residual_blocks.size(); ++i) {
ResidualBlock* residual_block = residual_blocks[i];
parameter_blocks_in_use.insert(residual_block->parameter_blocks(),
residual_block->parameter_blocks() +
residual_block->NumParameterBlocks());
}
constant_parameter_blocks_.clear();
vector<double*>& active_parameter_blocks =
evaluate_options_.parameter_blocks;
active_parameter_blocks.clear();
for (int i = 0; i < all_parameter_blocks.size(); ++i) {
double* parameter_block = all_parameter_blocks[i];
ParameterBlock* block = FindOrDie(parameter_map, parameter_block);
if (block->IsConstant()) {
constant_parameter_blocks_.insert(parameter_block);
} else {
if (!block->IsConstant() && (parameter_blocks_in_use.count(block) > 0)) {
active_parameter_blocks.push_back(parameter_block);
} else {
constant_parameter_blocks_.insert(parameter_block);
}
}
@@ -386,8 +541,8 @@ bool CovarianceImpl::ComputeCovarianceValues() {
switch (options_.algorithm_type) {
case DENSE_SVD:
return ComputeCovarianceValuesUsingDenseSVD();
#ifndef CERES_NO_SUITESPARSE
case SUITE_SPARSE_QR:
#ifndef CERES_NO_SUITESPARSE
return ComputeCovarianceValuesUsingSuiteSparseQR();
#else
LOG(ERROR) << "SuiteSparse is required to use the "
@@ -624,7 +779,10 @@ bool CovarianceImpl::ComputeCovarianceValuesUsingDenseSVD() {
if (automatic_truncation) {
break;
} else {
LOG(ERROR) << "Cholesky factorization of J'J is not reliable. "
LOG(ERROR) << "Error: Covariance matrix is near rank deficient "
<< "and the user did not specify a non-zero"
<< "Covariance::Options::null_space_rank "
<< "to enable the computation of a Pseudo-Inverse. "
<< "Reciprocal condition number: "
<< singular_value_ratio * singular_value_ratio << " "
<< "min_reciprocal_condition_number: "

View File

@@ -55,12 +55,21 @@ class CovarianceImpl {
const double*> >& covariance_blocks,
ProblemImpl* problem);
bool Compute(
const std::vector<const double*>& parameter_blocks,
ProblemImpl* problem);
bool GetCovarianceBlockInTangentOrAmbientSpace(
const double* parameter_block1,
const double* parameter_block2,
bool lift_covariance_to_ambient_space,
double* covariance_block) const;
bool GetCovarianceMatrixInTangentOrAmbientSpace(
const std::vector<const double*>& parameters,
bool lift_covariance_to_ambient_space,
double *covariance_matrix) const;
bool ComputeCovarianceSparsity(
const std::vector<std::pair<const double*,
const double*> >& covariance_blocks,

View File

@@ -0,0 +1,276 @@
// Ceres Solver - A fast non-linear least squares minimizer
// Copyright 2016 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are met:
//
// * Redistributions of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
// * Neither the name of Google Inc. nor the names of its contributors may be
// used to endorse or promote products derived from this software without
// specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
// POSSIBILITY OF SUCH DAMAGE.
//
// Authors: wjr@google.com (William Rucklidge),
// keir@google.com (Keir Mierle),
// dgossow@google.com (David Gossow)
#include "ceres/gradient_checker.h"
#include <algorithm>
#include <cmath>
#include <numeric>
#include <string>
#include <vector>
#include "ceres/is_close.h"
#include "ceres/stringprintf.h"
#include "ceres/types.h"
namespace ceres {
using internal::IsClose;
using internal::StringAppendF;
using internal::StringPrintf;
using std::string;
using std::vector;
namespace {
// Evaluate the cost function and transform the returned Jacobians to
// the local space of the respective local parameterizations.
bool EvaluateCostFunction(
const ceres::CostFunction* function,
double const* const * parameters,
const std::vector<const ceres::LocalParameterization*>&
local_parameterizations,
Vector* residuals,
std::vector<Matrix>* jacobians,
std::vector<Matrix>* local_jacobians) {
CHECK_NOTNULL(residuals);
CHECK_NOTNULL(jacobians);
CHECK_NOTNULL(local_jacobians);
const vector<int32>& block_sizes = function->parameter_block_sizes();
const int num_parameter_blocks = block_sizes.size();
// Allocate Jacobian matrices in local space.
local_jacobians->resize(num_parameter_blocks);
vector<double*> local_jacobian_data(num_parameter_blocks);
for (int i = 0; i < num_parameter_blocks; ++i) {
int block_size = block_sizes.at(i);
if (local_parameterizations.at(i) != NULL) {
block_size = local_parameterizations.at(i)->LocalSize();
}
local_jacobians->at(i).resize(function->num_residuals(), block_size);
local_jacobians->at(i).setZero();
local_jacobian_data.at(i) = local_jacobians->at(i).data();
}
// Allocate Jacobian matrices in global space.
jacobians->resize(num_parameter_blocks);
vector<double*> jacobian_data(num_parameter_blocks);
for (int i = 0; i < num_parameter_blocks; ++i) {
jacobians->at(i).resize(function->num_residuals(), block_sizes.at(i));
jacobians->at(i).setZero();
jacobian_data.at(i) = jacobians->at(i).data();
}
// Compute residuals & jacobians.
CHECK_NE(0, function->num_residuals());
residuals->resize(function->num_residuals());
residuals->setZero();
if (!function->Evaluate(parameters, residuals->data(),
jacobian_data.data())) {
return false;
}
// Convert Jacobians from global to local space.
for (size_t i = 0; i < local_jacobians->size(); ++i) {
if (local_parameterizations.at(i) == NULL) {
local_jacobians->at(i) = jacobians->at(i);
} else {
int global_size = local_parameterizations.at(i)->GlobalSize();
int local_size = local_parameterizations.at(i)->LocalSize();
CHECK_EQ(jacobians->at(i).cols(), global_size);
Matrix global_J_local(global_size, local_size);
local_parameterizations.at(i)->ComputeJacobian(
parameters[i], global_J_local.data());
local_jacobians->at(i) = jacobians->at(i) * global_J_local;
}
}
return true;
}
} // namespace
GradientChecker::GradientChecker(
const CostFunction* function,
const vector<const LocalParameterization*>* local_parameterizations,
const NumericDiffOptions& options) :
function_(function) {
CHECK_NOTNULL(function);
if (local_parameterizations != NULL) {
local_parameterizations_ = *local_parameterizations;
} else {
local_parameterizations_.resize(function->parameter_block_sizes().size(),
NULL);
}
DynamicNumericDiffCostFunction<CostFunction, CENTRAL>*
finite_diff_cost_function =
new DynamicNumericDiffCostFunction<CostFunction, CENTRAL>(
function, DO_NOT_TAKE_OWNERSHIP, options);
finite_diff_cost_function_.reset(finite_diff_cost_function);
const vector<int32>& parameter_block_sizes =
function->parameter_block_sizes();
const int num_parameter_blocks = parameter_block_sizes.size();
for (int i = 0; i < num_parameter_blocks; ++i) {
finite_diff_cost_function->AddParameterBlock(parameter_block_sizes[i]);
}
finite_diff_cost_function->SetNumResiduals(function->num_residuals());
}
bool GradientChecker::Probe(double const* const * parameters,
double relative_precision,
ProbeResults* results_param) const {
int num_residuals = function_->num_residuals();
// Make sure that we have a place to store results, no matter if the user has
// provided an output argument.
ProbeResults* results;
ProbeResults results_local;
if (results_param != NULL) {
results = results_param;
results->residuals.resize(0);
results->jacobians.clear();
results->numeric_jacobians.clear();
results->local_jacobians.clear();
results->local_numeric_jacobians.clear();
results->error_log.clear();
} else {
results = &results_local;
}
results->maximum_relative_error = 0.0;
results->return_value = true;
// Evaluate the derivative using the user supplied code.
vector<Matrix>& jacobians = results->jacobians;
vector<Matrix>& local_jacobians = results->local_jacobians;
if (!EvaluateCostFunction(function_, parameters, local_parameterizations_,
&results->residuals, &jacobians, &local_jacobians)) {
results->error_log = "Function evaluation with Jacobians failed.";
results->return_value = false;
}
// Evaluate the derivative using numeric derivatives.
vector<Matrix>& numeric_jacobians = results->numeric_jacobians;
vector<Matrix>& local_numeric_jacobians = results->local_numeric_jacobians;
Vector finite_diff_residuals;
if (!EvaluateCostFunction(finite_diff_cost_function_.get(), parameters,
local_parameterizations_, &finite_diff_residuals,
&numeric_jacobians, &local_numeric_jacobians)) {
results->error_log += "\nFunction evaluation with numerical "
"differentiation failed.";
results->return_value = false;
}
if (!results->return_value) {
return false;
}
for (int i = 0; i < num_residuals; ++i) {
if (!IsClose(
results->residuals[i],
finite_diff_residuals[i],
relative_precision,
NULL,
NULL)) {
results->error_log = "Function evaluation with and without Jacobians "
"resulted in different residuals.";
LOG(INFO) << results->residuals.transpose();
LOG(INFO) << finite_diff_residuals.transpose();
return false;
}
}
// See if any elements have relative error larger than the threshold.
int num_bad_jacobian_components = 0;
double& worst_relative_error = results->maximum_relative_error;
worst_relative_error = 0;
// Accumulate the error message for all the jacobians, since it won't get
// output if there are no bad jacobian components.
string error_log;
for (int k = 0; k < function_->parameter_block_sizes().size(); k++) {
StringAppendF(&error_log,
"========== "
"Jacobian for " "block %d: (%ld by %ld)) "
"==========\n",
k,
static_cast<long>(local_jacobians[k].rows()),
static_cast<long>(local_jacobians[k].cols()));
// The funny spacing creates appropriately aligned column headers.
error_log +=
" block row col user dx/dy num diff dx/dy "
"abs error relative error parameter residual\n";
for (int i = 0; i < local_jacobians[k].rows(); i++) {
for (int j = 0; j < local_jacobians[k].cols(); j++) {
double term_jacobian = local_jacobians[k](i, j);
double finite_jacobian = local_numeric_jacobians[k](i, j);
double relative_error, absolute_error;
bool bad_jacobian_entry =
!IsClose(term_jacobian,
finite_jacobian,
relative_precision,
&relative_error,
&absolute_error);
worst_relative_error = std::max(worst_relative_error, relative_error);
StringAppendF(&error_log,
"%6d %4d %4d %17g %17g %17g %17g %17g %17g",
k, i, j,
term_jacobian, finite_jacobian,
absolute_error, relative_error,
parameters[k][j],
results->residuals[i]);
if (bad_jacobian_entry) {
num_bad_jacobian_components++;
StringAppendF(
&error_log,
" ------ (%d,%d,%d) Relative error worse than %g",
k, i, j, relative_precision);
}
error_log += "\n";
}
}
}
// Since there were some bad errors, dump comprehensive debug info.
if (num_bad_jacobian_components) {
string header = StringPrintf("\nDetected %d bad Jacobian component(s). "
"Worst relative error was %g.\n",
num_bad_jacobian_components,
worst_relative_error);
results->error_log = header + "\n" + error_log;
return false;
}
return true;
}
} // namespace ceres

View File

@@ -26,7 +26,8 @@
// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
// POSSIBILITY OF SUCH DAMAGE.
//
// Author: keir@google.com (Keir Mierle)
// Authors: keir@google.com (Keir Mierle),
// dgossow@google.com (David Gossow)
#include "ceres/gradient_checking_cost_function.h"
@@ -36,7 +37,7 @@
#include <string>
#include <vector>
#include "ceres/cost_function.h"
#include "ceres/gradient_checker.h"
#include "ceres/internal/eigen.h"
#include "ceres/internal/scoped_ptr.h"
#include "ceres/parameter_block.h"
@@ -59,55 +60,25 @@ using std::vector;
namespace {
// True if x and y have an absolute relative difference less than
// relative_precision and false otherwise. Stores the relative and absolute
// difference in relative/absolute_error if non-NULL.
bool IsClose(double x, double y, double relative_precision,
double *relative_error,
double *absolute_error) {
double local_absolute_error;
double local_relative_error;
if (!absolute_error) {
absolute_error = &local_absolute_error;
}
if (!relative_error) {
relative_error = &local_relative_error;
}
*absolute_error = abs(x - y);
*relative_error = *absolute_error / max(abs(x), abs(y));
if (x == 0 || y == 0) {
// If x or y is exactly zero, then relative difference doesn't have any
// meaning. Take the absolute difference instead.
*relative_error = *absolute_error;
}
return abs(*relative_error) < abs(relative_precision);
}
class GradientCheckingCostFunction : public CostFunction {
public:
GradientCheckingCostFunction(const CostFunction* function,
const NumericDiffOptions& options,
double relative_precision,
const string& extra_info)
GradientCheckingCostFunction(
const CostFunction* function,
const std::vector<const LocalParameterization*>* local_parameterizations,
const NumericDiffOptions& options,
double relative_precision,
const string& extra_info,
GradientCheckingIterationCallback* callback)
: function_(function),
gradient_checker_(function, local_parameterizations, options),
relative_precision_(relative_precision),
extra_info_(extra_info) {
DynamicNumericDiffCostFunction<CostFunction, CENTRAL>*
finite_diff_cost_function =
new DynamicNumericDiffCostFunction<CostFunction, CENTRAL>(
function,
DO_NOT_TAKE_OWNERSHIP,
options);
extra_info_(extra_info),
callback_(callback) {
CHECK_NOTNULL(callback_);
const vector<int32>& parameter_block_sizes =
function->parameter_block_sizes();
for (int i = 0; i < parameter_block_sizes.size(); ++i) {
finite_diff_cost_function->AddParameterBlock(parameter_block_sizes[i]);
}
*mutable_parameter_block_sizes() = parameter_block_sizes;
set_num_residuals(function->num_residuals());
finite_diff_cost_function->SetNumResiduals(num_residuals());
finite_diff_cost_function_.reset(finite_diff_cost_function);
}
virtual ~GradientCheckingCostFunction() { }
@@ -120,133 +91,92 @@ class GradientCheckingCostFunction : public CostFunction {
return function_->Evaluate(parameters, residuals, NULL);
}
int num_residuals = function_->num_residuals();
GradientChecker::ProbeResults results;
bool okay = gradient_checker_.Probe(parameters,
relative_precision_,
&results);
// Make space for the jacobians of the two methods.
const vector<int32>& block_sizes = function_->parameter_block_sizes();
vector<Matrix> term_jacobians(block_sizes.size());
vector<Matrix> finite_difference_jacobians(block_sizes.size());
vector<double*> term_jacobian_pointers(block_sizes.size());
vector<double*> finite_difference_jacobian_pointers(block_sizes.size());
for (int i = 0; i < block_sizes.size(); i++) {
term_jacobians[i].resize(num_residuals, block_sizes[i]);
term_jacobian_pointers[i] = term_jacobians[i].data();
finite_difference_jacobians[i].resize(num_residuals, block_sizes[i]);
finite_difference_jacobian_pointers[i] =
finite_difference_jacobians[i].data();
}
// Evaluate the derivative using the user supplied code.
if (!function_->Evaluate(parameters,
residuals,
&term_jacobian_pointers[0])) {
LOG(WARNING) << "Function evaluation failed.";
// If the cost function returned false, there's nothing we can say about
// the gradients.
if (results.return_value == false) {
return false;
}
// Evaluate the derivative using numeric derivatives.
finite_diff_cost_function_->Evaluate(
parameters,
residuals,
&finite_difference_jacobian_pointers[0]);
// Copy the residuals.
const int num_residuals = function_->num_residuals();
MatrixRef(residuals, num_residuals, 1) = results.residuals;
// See if any elements have relative error larger than the threshold.
int num_bad_jacobian_components = 0;
double worst_relative_error = 0;
// Accumulate the error message for all the jacobians, since it won't get
// output if there are no bad jacobian components.
string m;
// Copy the original jacobian blocks into the jacobians array.
const vector<int32>& block_sizes = function_->parameter_block_sizes();
for (int k = 0; k < block_sizes.size(); k++) {
// Copy the original jacobian blocks into the jacobians array.
if (jacobians[k] != NULL) {
MatrixRef(jacobians[k],
term_jacobians[k].rows(),
term_jacobians[k].cols()) = term_jacobians[k];
}
StringAppendF(&m,
"========== "
"Jacobian for " "block %d: (%ld by %ld)) "
"==========\n",
k,
static_cast<long>(term_jacobians[k].rows()),
static_cast<long>(term_jacobians[k].cols()));
// The funny spacing creates appropriately aligned column headers.
m += " block row col user dx/dy num diff dx/dy "
"abs error relative error parameter residual\n";
for (int i = 0; i < term_jacobians[k].rows(); i++) {
for (int j = 0; j < term_jacobians[k].cols(); j++) {
double term_jacobian = term_jacobians[k](i, j);
double finite_jacobian = finite_difference_jacobians[k](i, j);
double relative_error, absolute_error;
bool bad_jacobian_entry =
!IsClose(term_jacobian,
finite_jacobian,
relative_precision_,
&relative_error,
&absolute_error);
worst_relative_error = max(worst_relative_error, relative_error);
StringAppendF(&m, "%6d %4d %4d %17g %17g %17g %17g %17g %17g",
k, i, j,
term_jacobian, finite_jacobian,
absolute_error, relative_error,
parameters[k][j],
residuals[i]);
if (bad_jacobian_entry) {
num_bad_jacobian_components++;
StringAppendF(
&m, " ------ (%d,%d,%d) Relative error worse than %g",
k, i, j, relative_precision_);
}
m += "\n";
}
results.jacobians[k].rows(),
results.jacobians[k].cols()) = results.jacobians[k];
}
}
// Since there were some bad errors, dump comprehensive debug info.
if (num_bad_jacobian_components) {
string header = StringPrintf("Detected %d bad jacobian component(s). "
"Worst relative error was %g.\n",
num_bad_jacobian_components,
worst_relative_error);
if (!extra_info_.empty()) {
header += "Extra info for this residual: " + extra_info_ + "\n";
}
LOG(WARNING) << "\n" << header << m;
if (!okay) {
std::string error_log = "Gradient Error detected!\nExtra info for "
"this residual: " + extra_info_ + "\n" + results.error_log;
callback_->SetGradientErrorDetected(error_log);
}
return true;
}
private:
const CostFunction* function_;
internal::scoped_ptr<CostFunction> finite_diff_cost_function_;
GradientChecker gradient_checker_;
double relative_precision_;
string extra_info_;
GradientCheckingIterationCallback* callback_;
};
} // namespace
CostFunction *CreateGradientCheckingCostFunction(
const CostFunction *cost_function,
GradientCheckingIterationCallback::GradientCheckingIterationCallback()
: gradient_error_detected_(false) {
}
CallbackReturnType GradientCheckingIterationCallback::operator()(
const IterationSummary& summary) {
if (gradient_error_detected_) {
LOG(ERROR) << "Gradient error detected. Terminating solver.";
return SOLVER_ABORT;
}
return SOLVER_CONTINUE;
}
void GradientCheckingIterationCallback::SetGradientErrorDetected(
std::string& error_log) {
mutex_.Lock();
gradient_error_detected_ = true;
error_log_ += "\n" + error_log;
mutex_.Unlock();
}
CostFunction* CreateGradientCheckingCostFunction(
const CostFunction* cost_function,
const std::vector<const LocalParameterization*>* local_parameterizations,
double relative_step_size,
double relative_precision,
const string& extra_info) {
const std::string& extra_info,
GradientCheckingIterationCallback* callback) {
NumericDiffOptions numeric_diff_options;
numeric_diff_options.relative_step_size = relative_step_size;
return new GradientCheckingCostFunction(cost_function,
local_parameterizations,
numeric_diff_options,
relative_precision,
extra_info);
relative_precision, extra_info,
callback);
}
ProblemImpl* CreateGradientCheckingProblemImpl(ProblemImpl* problem_impl,
double relative_step_size,
double relative_precision) {
ProblemImpl* CreateGradientCheckingProblemImpl(
ProblemImpl* problem_impl,
double relative_step_size,
double relative_precision,
GradientCheckingIterationCallback* callback) {
CHECK_NOTNULL(callback);
// We create new CostFunctions by wrapping the original CostFunction
// in a gradient checking CostFunction. So it's okay for the
// ProblemImpl to take ownership of it and destroy it. The
@@ -260,6 +190,9 @@ ProblemImpl* CreateGradientCheckingProblemImpl(ProblemImpl* problem_impl,
gradient_checking_problem_options.local_parameterization_ownership =
DO_NOT_TAKE_OWNERSHIP;
NumericDiffOptions numeric_diff_options;
numeric_diff_options.relative_step_size = relative_step_size;
ProblemImpl* gradient_checking_problem_impl = new ProblemImpl(
gradient_checking_problem_options);
@@ -294,19 +227,26 @@ ProblemImpl* CreateGradientCheckingProblemImpl(ProblemImpl* problem_impl,
string extra_info = StringPrintf(
"Residual block id %d; depends on parameters [", i);
vector<double*> parameter_blocks;
vector<const LocalParameterization*> local_parameterizations;
parameter_blocks.reserve(residual_block->NumParameterBlocks());
local_parameterizations.reserve(residual_block->NumParameterBlocks());
for (int j = 0; j < residual_block->NumParameterBlocks(); ++j) {
ParameterBlock* parameter_block = residual_block->parameter_blocks()[j];
parameter_blocks.push_back(parameter_block->mutable_user_state());
StringAppendF(&extra_info, "%p", parameter_block->mutable_user_state());
extra_info += (j < residual_block->NumParameterBlocks() - 1) ? ", " : "]";
local_parameterizations.push_back(problem_impl->GetParameterization(
parameter_block->mutable_user_state()));
}
// Wrap the original CostFunction in a GradientCheckingCostFunction.
CostFunction* gradient_checking_cost_function =
CreateGradientCheckingCostFunction(residual_block->cost_function(),
relative_step_size,
relative_precision,
extra_info);
new GradientCheckingCostFunction(residual_block->cost_function(),
&local_parameterizations,
numeric_diff_options,
relative_precision,
extra_info,
callback);
// The const_cast is necessary because
// ProblemImpl::AddResidualBlock can potentially take ownership of

View File

@@ -26,7 +26,8 @@
// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
// POSSIBILITY OF SUCH DAMAGE.
//
// Author: keir@google.com (Keir Mierle)
// Authors: keir@google.com (Keir Mierle),
// dgossow@google.com (David Gossow)
#ifndef CERES_INTERNAL_GRADIENT_CHECKING_COST_FUNCTION_H_
#define CERES_INTERNAL_GRADIENT_CHECKING_COST_FUNCTION_H_
@@ -34,50 +35,76 @@
#include <string>
#include "ceres/cost_function.h"
#include "ceres/iteration_callback.h"
#include "ceres/local_parameterization.h"
#include "ceres/mutex.h"
namespace ceres {
namespace internal {
class ProblemImpl;
// Creates a CostFunction that checks the jacobians that cost_function computes
// with finite differences. Bad results are logged; required precision is
// controlled by relative_precision and the numeric differentiation step size is
// controlled with relative_step_size. See solver.h for a better explanation of
// relative_step_size. Caller owns result.
//
// The condition enforced is that
//
// (J_actual(i, j) - J_numeric(i, j))
// ------------------------------------ < relative_precision
// max(J_actual(i, j), J_numeric(i, j))
//
// where J_actual(i, j) is the jacobian as computed by the supplied cost
// function (by the user) and J_numeric is the jacobian as computed by finite
// differences.
//
// Note: This is quite inefficient and is intended only for debugging.
// Callback that collects information about gradient checking errors, and
// will abort the solve as soon as an error occurs.
class GradientCheckingIterationCallback : public IterationCallback {
public:
GradientCheckingIterationCallback();
// Will return SOLVER_CONTINUE until a gradient error has been detected,
// then return SOLVER_ABORT.
virtual CallbackReturnType operator()(const IterationSummary& summary);
// Notify this that a gradient error has occurred (thread safe).
void SetGradientErrorDetected(std::string& error_log);
// Retrieve error status (not thread safe).
bool gradient_error_detected() const { return gradient_error_detected_; }
const std::string& error_log() const { return error_log_; }
private:
bool gradient_error_detected_;
std::string error_log_;
// Mutex protecting member variables.
ceres::internal::Mutex mutex_;
};
// Creates a CostFunction that checks the Jacobians that cost_function computes
// with finite differences. This API is only intended for unit tests that
// check the functionality of the GradientCheckingCostFunction
// implementation directly.
CostFunction* CreateGradientCheckingCostFunction(
const CostFunction* cost_function,
const std::vector<const LocalParameterization*>* local_parameterizations,
double relative_step_size,
double relative_precision,
const std::string& extra_info);
const std::string& extra_info,
GradientCheckingIterationCallback* callback);
// Create a new ProblemImpl object from the input problem_impl, where
// each CostFunctions in problem_impl are wrapped inside a
// GradientCheckingCostFunctions. This gives us a ProblemImpl object
// which checks its derivatives against estimates from numeric
// differentiation every time a ResidualBlock is evaluated.
// Create a new ProblemImpl object from the input problem_impl, where all
// cost functions are wrapped so that each time their Evaluate method is called,
// an additional check is performed that compares the Jacobians computed by
// the original cost function with alternative Jacobians computed using
// numerical differentiation. If local parameterizations are given for any
// parameters, the Jacobians will be compared in the local space instead of the
// ambient space. For details on the gradient checking procedure, see the
// documentation of the GradientChecker class. If an error is detected in any
// iteration, the respective cost function will notify the
// GradientCheckingIterationCallback.
//
// The caller owns the returned ProblemImpl object.
//
// Note: This is quite inefficient and is intended only for debugging.
//
// relative_step_size and relative_precision are parameters to control
// the numeric differentiation and the relative tolerance between the
// jacobian computed by the CostFunctions in problem_impl and
// jacobians obtained by numerically differentiating them. For more
// details see the documentation for
// CreateGradientCheckingCostFunction above.
ProblemImpl* CreateGradientCheckingProblemImpl(ProblemImpl* problem_impl,
double relative_step_size,
double relative_precision);
// jacobians obtained by numerically differentiating them. See the
// documentation of 'numeric_derivative_relative_step_size' in solver.h for a
// better explanation.
ProblemImpl* CreateGradientCheckingProblemImpl(
ProblemImpl* problem_impl,
double relative_step_size,
double relative_precision,
GradientCheckingIterationCallback* callback);
} // namespace internal
} // namespace ceres
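
A hedged sketch of how the pieces declared in this header fit together; the solver-side wiring is not part of this diff, and the names below follow the declarations above:

  GradientCheckingIterationCallback callback;
  scoped_ptr<ProblemImpl> checked_problem(
      CreateGradientCheckingProblemImpl(problem_impl,
                                        1e-6,   // relative_step_size
                                        1e-8,   // relative_precision
                                        &callback));
  // ... minimize checked_problem with `callback` registered as an
  // IterationCallback; it aborts the solve on the first gradient error ...
  if (callback.gradient_error_detected()) {
    LOG(ERROR) << callback.error_log();
  }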

View File

@@ -84,6 +84,12 @@ Solver::Options GradientProblemSolverOptionsToSolverOptions(
} // namespace
bool GradientProblemSolver::Options::IsValid(std::string* error) const {
const Solver::Options solver_options =
GradientProblemSolverOptionsToSolverOptions(*this);
return solver_options.IsValid(error);
}
GradientProblemSolver::~GradientProblemSolver() {
}
@@ -99,8 +105,6 @@ void GradientProblemSolver::Solve(const GradientProblemSolver::Options& options,
using internal::SetSummaryFinalCost;
double start_time = WallTimeInSeconds();
Solver::Options solver_options =
GradientProblemSolverOptionsToSolverOptions(options);
*CHECK_NOTNULL(summary) = Summary();
summary->num_parameters = problem.NumParameters();
@@ -112,14 +116,16 @@ void GradientProblemSolver::Solve(const GradientProblemSolver::Options& options,
summary->nonlinear_conjugate_gradient_type = options.nonlinear_conjugate_gradient_type; // NOLINT
// Check validity
if (!solver_options.IsValid(&summary->message)) {
if (!options.IsValid(&summary->message)) {
LOG(ERROR) << "Terminating: " << summary->message;
return;
}
// Assuming that the parameter blocks in the program have been
Minimizer::Options minimizer_options;
minimizer_options = Minimizer::Options(solver_options);
// TODO(sameeragarwal): This is a bit convoluted, we should be able
// to convert to minimizer options directly, but this will do for
// now.
Minimizer::Options minimizer_options =
Minimizer::Options(GradientProblemSolverOptionsToSolverOptions(options));
minimizer_options.evaluator.reset(new GradientProblemEvaluator(problem));
scoped_ptr<IterationCallback> logging_callback;

59
extern/ceres/internal/ceres/is_close.cc vendored Normal file
View File

@@ -0,0 +1,59 @@
// Ceres Solver - A fast non-linear least squares minimizer
// Copyright 2016 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are met:
//
// * Redistributions of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
// * Neither the name of Google Inc. nor the names of its contributors may be
// used to endorse or promote products derived from this software without
// specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
// POSSIBILITY OF SUCH DAMAGE.
//
// Authors: keir@google.com (Keir Mierle), dgossow@google.com (David Gossow)
#include "ceres/is_close.h"
#include <algorithm>
#include <cmath>
namespace ceres {
namespace internal {
bool IsClose(double x, double y, double relative_precision,
double *relative_error,
double *absolute_error) {
double local_absolute_error;
double local_relative_error;
if (!absolute_error) {
absolute_error = &local_absolute_error;
}
if (!relative_error) {
relative_error = &local_relative_error;
}
*absolute_error = std::fabs(x - y);
*relative_error = *absolute_error / std::max(std::fabs(x), std::fabs(y));
if (x == 0 || y == 0) {
// If x or y is exactly zero, then relative difference doesn't have any
// meaning. Take the absolute difference instead.
*relative_error = *absolute_error;
}
return *relative_error < std::fabs(relative_precision);
}
} // namespace internal
} // namespace ceres

51
extern/ceres/internal/ceres/is_close.h vendored Normal file
View File

@@ -0,0 +1,51 @@
// Ceres Solver - A fast non-linear least squares minimizer
// Copyright 2016 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are met:
//
// * Redistributions of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
// * Neither the name of Google Inc. nor the names of its contributors may be
// used to endorse or promote products derived from this software without
// specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
// POSSIBILITY OF SUCH DAMAGE.
//
// Authors: keir@google.com (Keir Mierle), dgossow@google.com (David Gossow)
//
// Utility routine for comparing two values.
#ifndef CERES_INTERNAL_IS_CLOSE_H_
#define CERES_INTERNAL_IS_CLOSE_H_
namespace ceres {
namespace internal {
// Returns true if x and y have a relative (unsigned) difference less than
// relative_precision and false otherwise. Stores the relative and absolute
// difference in relative/absolute_error if non-NULL. If one of the two values
// is exactly zero, the absolute difference will be compared, and relative_error
// will be set to the absolute difference.
bool IsClose(double x,
double y,
double relative_precision,
double *relative_error,
double *absolute_error);
} // namespace internal
} // namespace ceres
#endif // CERES_INTERNAL_IS_CLOSE_H_
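A short usage sketch of the internal helper declared above; the numbers are made up and the calling code is hypothetical:

#include "ceres/is_close.h"
#include "glog/logging.h"

void CheckDerivativePair() {
  const double analytic = 2.0000001;
  const double numeric = 2.0;
  double relative_error = 0.0;
  double absolute_error = 0.0;
  // Returns true here: the relative difference (~5e-8) is below 1e-6.
  const bool close = ceres::internal::IsClose(
      analytic, numeric, 1e-6, &relative_error, &absolute_error);
  LOG(INFO) << "close: " << close
            << " relative error: " << relative_error
            << " absolute error: " << absolute_error;
}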

View File

@@ -191,6 +191,7 @@ void LineSearchMinimizer::Minimize(const Minimizer::Options& options,
options.line_search_sufficient_curvature_decrease;
line_search_options.max_step_expansion =
options.max_line_search_step_expansion;
line_search_options.is_silent = options.is_silent;
line_search_options.function = &line_search_function;
scoped_ptr<LineSearch>
@@ -341,10 +342,12 @@ void LineSearchMinimizer::Minimize(const Minimizer::Options& options,
"as the step was valid when it was selected by the line search.";
LOG_IF(WARNING, is_not_silent) << "Terminating: " << summary->message;
break;
} else if (!Evaluate(evaluator,
x_plus_delta,
&current_state,
&summary->message)) {
}
if (!Evaluate(evaluator,
x_plus_delta,
&current_state,
&summary->message)) {
summary->termination_type = FAILURE;
summary->message =
"Step failed to evaluate. This should not happen as the step was "
@@ -352,15 +355,17 @@ void LineSearchMinimizer::Minimize(const Minimizer::Options& options,
summary->message;
LOG_IF(WARNING, is_not_silent) << "Terminating: " << summary->message;
break;
} else {
x = x_plus_delta;
}
// Compute the norm of the step in the ambient space.
iteration_summary.step_norm = (x_plus_delta - x).norm();
x = x_plus_delta;
iteration_summary.gradient_max_norm = current_state.gradient_max_norm;
iteration_summary.gradient_norm = sqrt(current_state.gradient_squared_norm);
iteration_summary.cost_change = previous_state.cost - current_state.cost;
iteration_summary.cost = current_state.cost + summary->fixed_cost;
iteration_summary.step_norm = delta.norm();
iteration_summary.step_is_valid = true;
iteration_summary.step_is_successful = true;
iteration_summary.step_size = current_state.step_size;
@@ -376,6 +381,13 @@ void LineSearchMinimizer::Minimize(const Minimizer::Options& options,
WallTimeInSeconds() - start_time
+ summary->preprocessor_time_in_seconds;
// Iterations inside the line search algorithm are considered
// 'steps' in the broader context, to distinguish these inner
// iterations from the outer iterations of the line search
// minimizer. The number of line search steps is the total number
// of inner line search iterations (or steps) across the entire
// minimization.
summary->num_line_search_steps += line_search_summary.num_iterations;
summary->line_search_cost_evaluation_time_in_seconds +=
line_search_summary.cost_evaluation_time_in_seconds;
summary->line_search_gradient_evaluation_time_in_seconds +=

View File

@@ -30,6 +30,8 @@
#include "ceres/local_parameterization.h"
#include <algorithm>
#include "Eigen/Geometry"
#include "ceres/householder_vector.h"
#include "ceres/internal/eigen.h"
#include "ceres/internal/fixed_array.h"
@@ -87,28 +89,17 @@ bool IdentityParameterization::MultiplyByJacobian(const double* x,
}
SubsetParameterization::SubsetParameterization(
int size,
const vector<int>& constant_parameters)
: local_size_(size - constant_parameters.size()),
constancy_mask_(size, 0) {
CHECK_GT(constant_parameters.size(), 0)
<< "The set of constant parameters should contain at least "
<< "one element. If you do not wish to hold any parameters "
<< "constant, then do not use a SubsetParameterization";
int size, const vector<int>& constant_parameters)
: local_size_(size - constant_parameters.size()), constancy_mask_(size, 0) {
vector<int> constant = constant_parameters;
sort(constant.begin(), constant.end());
CHECK(unique(constant.begin(), constant.end()) == constant.end())
std::sort(constant.begin(), constant.end());
CHECK_GE(constant.front(), 0)
<< "Indices indicating constant parameter must be greater than zero.";
CHECK_LT(constant.back(), size)
<< "Indices indicating constant parameter must be less than the size "
<< "of the parameter block.";
CHECK(std::adjacent_find(constant.begin(), constant.end()) == constant.end())
<< "The set of constant parameters cannot contain duplicates";
CHECK_LT(constant_parameters.size(), size)
<< "Number of parameters held constant should be less "
<< "than the size of the parameter block. If you wish "
<< "to hold the entire parameter block constant, then a "
<< "efficient way is to directly mark it as constant "
<< "instead of using a LocalParameterization to do so.";
CHECK_GE(*min_element(constant.begin(), constant.end()), 0);
CHECK_LT(*max_element(constant.begin(), constant.end()), size);
for (int i = 0; i < constant_parameters.size(); ++i) {
constancy_mask_[constant_parameters[i]] = 1;
}
@@ -129,6 +120,10 @@ bool SubsetParameterization::Plus(const double* x,
bool SubsetParameterization::ComputeJacobian(const double* x,
double* jacobian) const {
if (local_size_ == 0) {
return true;
}
MatrixRef m(jacobian, constancy_mask_.size(), local_size_);
m.setZero();
for (int i = 0, j = 0; i < constancy_mask_.size(); ++i) {
@@ -143,6 +138,10 @@ bool SubsetParameterization::MultiplyByJacobian(const double* x,
const int num_rows,
const double* global_matrix,
double* local_matrix) const {
if (local_size_ == 0) {
return true;
}
for (int row = 0; row < num_rows; ++row) {
for (int col = 0, j = 0; col < constancy_mask_.size(); ++col) {
if (!constancy_mask_[col]) {
@@ -184,6 +183,39 @@ bool QuaternionParameterization::ComputeJacobian(const double* x,
return true;
}
bool EigenQuaternionParameterization::Plus(const double* x_ptr,
const double* delta,
double* x_plus_delta_ptr) const {
Eigen::Map<Eigen::Quaterniond> x_plus_delta(x_plus_delta_ptr);
Eigen::Map<const Eigen::Quaterniond> x(x_ptr);
const double norm_delta =
sqrt(delta[0] * delta[0] + delta[1] * delta[1] + delta[2] * delta[2]);
if (norm_delta > 0.0) {
const double sin_delta_by_delta = sin(norm_delta) / norm_delta;
// Note, in the constructor w is first.
Eigen::Quaterniond delta_q(cos(norm_delta),
sin_delta_by_delta * delta[0],
sin_delta_by_delta * delta[1],
sin_delta_by_delta * delta[2]);
x_plus_delta = delta_q * x;
} else {
x_plus_delta = x;
}
return true;
}
bool EigenQuaternionParameterization::ComputeJacobian(const double* x,
double* jacobian) const {
jacobian[0] = x[3]; jacobian[1] = x[2]; jacobian[2] = -x[1]; // NOLINT
jacobian[3] = -x[2]; jacobian[4] = x[3]; jacobian[5] = x[0]; // NOLINT
jacobian[6] = x[1]; jacobian[7] = -x[0]; jacobian[8] = x[3]; // NOLINT
jacobian[9] = -x[0]; jacobian[10] = -x[1]; jacobian[11] = -x[2]; // NOLINT
return true;
}
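The EigenQuaternionParameterization added above is the Eigen-layout counterpart of QuaternionParameterization: Eigen::Quaterniond stores its coefficients as (x, y, z, w), with the scalar part last in memory rather than first. A minimal attachment sketch, assuming an existing ceres::Problem:

#include "Eigen/Geometry"
#include "ceres/ceres.h"

void AddEigenQuaternionBlock(ceres::Problem* problem,
                             Eigen::Quaterniond* orientation) {
  // coeffs() exposes the (x, y, z, w) storage this parameterization expects;
  // the problem takes ownership of the parameterization object.
  problem->AddParameterBlock(orientation->coeffs().data(), 4,
                             new ceres::EigenQuaternionParameterization);
}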
HomogeneousVectorParameterization::HomogeneousVectorParameterization(int size)
: size_(size) {
CHECK_GT(size_, 1) << "The size of the homogeneous vector needs to be "
@@ -332,9 +364,9 @@ bool ProductParameterization::ComputeJacobian(const double* x,
if (!param->ComputeJacobian(x + x_cursor, buffer.get())) {
return false;
}
jacobian.block(x_cursor, delta_cursor, global_size, local_size)
= MatrixRef(buffer.get(), global_size, local_size);
delta_cursor += local_size;
x_cursor += global_size;
}

View File

@@ -67,7 +67,7 @@ FindOrDie(const Collection& collection,
// If the key is present in the map then the value associated with that
// key is returned, otherwise the value passed as a default is returned.
template <class Collection>
const typename Collection::value_type::second_type&
const typename Collection::value_type::second_type
FindWithDefault(const Collection& collection,
const typename Collection::value_type::first_type& key,
const typename Collection::value_type::second_type& value) {
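The signature change above makes FindWithDefault return by value instead of by const reference: with the reference-returning version, passing a temporary as the default could leave the caller with a dangling reference once that temporary was destroyed. A self-contained re-creation of the patched behaviour (hypothetical code, not the Ceres header itself):

#include <iostream>
#include <map>
#include <string>

// Returns by value, so a temporary passed as the default cannot dangle.
template <class Collection>
typename Collection::value_type::second_type FindWithDefault(
    const Collection& collection,
    const typename Collection::value_type::first_type& key,
    const typename Collection::value_type::second_type& value) {
  typename Collection::const_iterator it = collection.find(key);
  return (it == collection.end()) ? value : it->second;
}

int main() {
  std::map<int, std::string> m;
  m[1] = "one";
  // Both calls are safe even though the default is a temporary std::string.
  // With the old const-reference return type, binding the second result to a
  // reference would have dangled after the temporary was destroyed.
  std::cout << FindWithDefault(m, 1, std::string("missing")) << "\n";  // one
  std::cout << FindWithDefault(m, 2, std::string("missing")) << "\n";  // missing
  return 0;
}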

View File

@@ -161,25 +161,34 @@ class ParameterBlock {
// does not take ownership of the parameterization.
void SetParameterization(LocalParameterization* new_parameterization) {
CHECK(new_parameterization != NULL) << "NULL parameterization invalid.";
// Nothing to do if the new parameterization is the same as the
// old parameterization.
if (new_parameterization == local_parameterization_) {
return;
}
CHECK(local_parameterization_ == NULL)
<< "Can't re-set the local parameterization; it leads to "
<< "ambiguous ownership. Current local parameterization is: "
<< local_parameterization_;
CHECK(new_parameterization->GlobalSize() == size_)
<< "Invalid parameterization for parameter block. The parameter block "
<< "has size " << size_ << " while the parameterization has a global "
<< "size of " << new_parameterization->GlobalSize() << ". Did you "
<< "accidentally use the wrong parameter block or parameterization?";
if (new_parameterization != local_parameterization_) {
CHECK(local_parameterization_ == NULL)
<< "Can't re-set the local parameterization; it leads to "
<< "ambiguous ownership.";
local_parameterization_ = new_parameterization;
local_parameterization_jacobian_.reset(
new double[local_parameterization_->GlobalSize() *
local_parameterization_->LocalSize()]);
CHECK(UpdateLocalParameterizationJacobian())
<< "Local parameterization Jacobian computation failed for x: "
<< ConstVectorRef(state_, Size()).transpose();
} else {
// Ignore the case that the parameterizations match.
}
CHECK_GT(new_parameterization->LocalSize(), 0)
<< "Invalid parameterization. Parameterizations must have a positive "
<< "dimensional tangent space.";
local_parameterization_ = new_parameterization;
local_parameterization_jacobian_.reset(
new double[local_parameterization_->GlobalSize() *
local_parameterization_->LocalSize()]);
CHECK(UpdateLocalParameterizationJacobian())
<< "Local parameterization Jacobian computation failed for x: "
<< ConstVectorRef(state_, Size()).transpose();
}
void SetUpperBound(int index, double upper_bound) {

View File

@@ -174,6 +174,10 @@ void Problem::SetParameterBlockVariable(double* values) {
problem_impl_->SetParameterBlockVariable(values);
}
bool Problem::IsParameterBlockConstant(double* values) const {
return problem_impl_->IsParameterBlockConstant(values);
}
void Problem::SetParameterization(
double* values,
LocalParameterization* local_parameterization) {
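A short usage sketch of the IsParameterBlockConstant query added above; the parameter block is illustrative:

#include "ceres/ceres.h"
#include "glog/logging.h"

void ToggleAndQueryConstness() {
  ceres::Problem problem;
  double point[3] = {1.0, 2.0, 3.0};
  problem.AddParameterBlock(point, 3);

  problem.SetParameterBlockConstant(point);
  CHECK(problem.IsParameterBlockConstant(point));   // new query in this update

  problem.SetParameterBlockVariable(point);
  CHECK(!problem.IsParameterBlockConstant(point));
}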

View File

@@ -249,10 +249,11 @@ ResidualBlock* ProblemImpl::AddResidualBlock(
// Check for duplicate parameter blocks.
vector<double*> sorted_parameter_blocks(parameter_blocks);
sort(sorted_parameter_blocks.begin(), sorted_parameter_blocks.end());
vector<double*>::const_iterator duplicate_items =
unique(sorted_parameter_blocks.begin(),
sorted_parameter_blocks.end());
if (duplicate_items != sorted_parameter_blocks.end()) {
const bool has_duplicate_items =
(std::adjacent_find(sorted_parameter_blocks.begin(),
sorted_parameter_blocks.end())
!= sorted_parameter_blocks.end());
if (has_duplicate_items) {
string blocks;
for (int i = 0; i < parameter_blocks.size(); ++i) {
blocks += StringPrintf(" %p ", parameter_blocks[i]);
@@ -572,6 +573,16 @@ void ProblemImpl::SetParameterBlockConstant(double* values) {
parameter_block->SetConstant();
}
bool ProblemImpl::IsParameterBlockConstant(double* values) const {
const ParameterBlock* parameter_block =
FindWithDefault(parameter_block_map_, values, NULL);
CHECK(parameter_block != NULL)
<< "Parameter block not found: " << values << ". You must add the "
<< "parameter block to the problem before it can be queried.";
return parameter_block->IsConstant();
}
void ProblemImpl::SetParameterBlockVariable(double* values) {
ParameterBlock* parameter_block =
FindWithDefault(parameter_block_map_, values, NULL);

View File

@@ -128,6 +128,8 @@ class ProblemImpl {
void SetParameterBlockConstant(double* values);
void SetParameterBlockVariable(double* values);
bool IsParameterBlockConstant(double* values) const;
void SetParameterization(double* values,
LocalParameterization* local_parameterization);
const LocalParameterization* GetParameterization(double* values) const;

View File

@@ -142,6 +142,11 @@ void OrderingForSparseNormalCholeskyUsingSuiteSparse(
ordering);
}
VLOG(2) << "Block ordering stats: "
<< " flops: " << ss.mutable_cc()->fl
<< " lnz : " << ss.mutable_cc()->lnz
<< " anz : " << ss.mutable_cc()->anz;
ss.Free(block_jacobian_transpose);
#endif // CERES_NO_SUITESPARSE
}

View File

@@ -127,7 +127,7 @@ class ResidualBlock {
int index() const { return index_; }
void set_index(int index) { index_ = index; }
std::string ToString() {
std::string ToString() const {
return StringPrintf("{residual block; index=%d}", index_);
}

View File

@@ -33,6 +33,7 @@
#include <algorithm>
#include <ctime>
#include <set>
#include <sstream>
#include <vector>
#include "ceres/block_random_access_dense_matrix.h"
@@ -563,6 +564,12 @@ SparseSchurComplementSolver::SolveReducedLinearSystemUsingEigen(
// worse than the one computed using the block version of the
// algorithm.
simplicial_ldlt_->analyzePattern(eigen_lhs);
if (VLOG_IS_ON(2)) {
std::stringstream ss;
simplicial_ldlt_->dumpMemory(ss);
VLOG(2) << "Symbolic Analysis\n"
<< ss.str();
}
event_logger.AddEvent("Analysis");
if (simplicial_ldlt_->info() != Eigen::Success) {
summary.termination_type = LINEAR_SOLVER_FATAL_ERROR;

View File

@@ -94,7 +94,7 @@ bool CommonOptionsAreValid(const Solver::Options& options, string* error) {
OPTION_GT(num_linear_solver_threads, 0);
if (options.check_gradients) {
OPTION_GT(gradient_check_relative_precision, 0.0);
OPTION_GT(numeric_derivative_relative_step_size, 0.0);
OPTION_GT(gradient_check_numeric_derivative_relative_step_size, 0.0);
}
return true;
}
@@ -351,6 +351,7 @@ void PreSolveSummarize(const Solver::Options& options,
summary->dense_linear_algebra_library_type = options.dense_linear_algebra_library_type; // NOLINT
summary->dogleg_type = options.dogleg_type;
summary->inner_iteration_time_in_seconds = 0.0;
summary->num_line_search_steps = 0;
summary->line_search_cost_evaluation_time_in_seconds = 0.0;
summary->line_search_gradient_evaluation_time_in_seconds = 0.0;
summary->line_search_polynomial_minimization_time_in_seconds = 0.0;
@@ -495,21 +496,28 @@ void Solver::Solve(const Solver::Options& options,
// values provided by the user.
program->SetParameterBlockStatePtrsToUserStatePtrs();
// If gradient_checking is enabled, wrap all cost functions in a
// gradient checker and install a callback that terminates if any gradient
// error is detected.
scoped_ptr<internal::ProblemImpl> gradient_checking_problem;
internal::GradientCheckingIterationCallback gradient_checking_callback;
Solver::Options modified_options = options;
if (options.check_gradients) {
modified_options.callbacks.push_back(&gradient_checking_callback);
gradient_checking_problem.reset(
CreateGradientCheckingProblemImpl(
problem_impl,
options.numeric_derivative_relative_step_size,
options.gradient_check_relative_precision));
options.gradient_check_numeric_derivative_relative_step_size,
options.gradient_check_relative_precision,
&gradient_checking_callback));
problem_impl = gradient_checking_problem.get();
program = problem_impl->mutable_program();
}
scoped_ptr<Preprocessor> preprocessor(
Preprocessor::Create(options.minimizer_type));
Preprocessor::Create(modified_options.minimizer_type));
PreprocessedProblem pp;
const bool status = preprocessor->Preprocess(options, problem_impl, &pp);
const bool status = preprocessor->Preprocess(modified_options, problem_impl, &pp);
summary->fixed_cost = pp.fixed_cost;
summary->preprocessor_time_in_seconds = WallTimeInSeconds() - start_time;
@@ -534,6 +542,13 @@ void Solver::Solve(const Solver::Options& options,
summary->postprocessor_time_in_seconds =
WallTimeInSeconds() - postprocessor_start_time;
// If the gradient checker reported an error, we want to report FAILURE
// instead of USER_FAILURE and provide the error log.
if (gradient_checking_callback.gradient_error_detected()) {
summary->termination_type = FAILURE;
summary->message = gradient_checking_callback.error_log();
}
summary->total_time_in_seconds = WallTimeInSeconds() - start_time;
}
@@ -556,6 +571,7 @@ Solver::Summary::Summary()
num_successful_steps(-1),
num_unsuccessful_steps(-1),
num_inner_iteration_steps(-1),
num_line_search_steps(-1),
preprocessor_time_in_seconds(-1.0),
minimizer_time_in_seconds(-1.0),
postprocessor_time_in_seconds(-1.0),
@@ -696,16 +712,14 @@ string Solver::Summary::FullReport() const {
num_linear_solver_threads_given,
num_linear_solver_threads_used);
if (IsSchurType(linear_solver_type_used)) {
string given;
StringifyOrdering(linear_solver_ordering_given, &given);
string used;
StringifyOrdering(linear_solver_ordering_used, &used);
StringAppendF(&report,
"Linear solver ordering %22s %24s\n",
given.c_str(),
used.c_str());
}
string given;
StringifyOrdering(linear_solver_ordering_given, &given);
string used;
StringifyOrdering(linear_solver_ordering_used, &used);
StringAppendF(&report,
"Linear solver ordering %22s %24s\n",
given.c_str(),
used.c_str());
if (inner_iterations_given) {
StringAppendF(&report,
@@ -784,9 +798,14 @@ string Solver::Summary::FullReport() const {
num_inner_iteration_steps);
}
const bool print_line_search_timing_information =
minimizer_type == LINE_SEARCH ||
(minimizer_type == TRUST_REGION && is_constrained);
const bool line_search_used =
(minimizer_type == LINE_SEARCH ||
(minimizer_type == TRUST_REGION && is_constrained));
if (line_search_used) {
StringAppendF(&report, "Line search steps % 14d\n",
num_line_search_steps);
}
StringAppendF(&report, "\nTime (in seconds):\n");
StringAppendF(&report, "Preprocessor %25.4f\n",
@@ -794,13 +813,13 @@ string Solver::Summary::FullReport() const {
StringAppendF(&report, "\n Residual evaluation %23.4f\n",
residual_evaluation_time_in_seconds);
if (print_line_search_timing_information) {
if (line_search_used) {
StringAppendF(&report, " Line search cost evaluation %10.4f\n",
line_search_cost_evaluation_time_in_seconds);
}
StringAppendF(&report, " Jacobian evaluation %23.4f\n",
jacobian_evaluation_time_in_seconds);
if (print_line_search_timing_information) {
if (line_search_used) {
StringAppendF(&report, " Line search gradient evaluation %6.4f\n",
line_search_gradient_evaluation_time_in_seconds);
}
@@ -815,7 +834,7 @@ string Solver::Summary::FullReport() const {
inner_iteration_time_in_seconds);
}
if (print_line_search_timing_information) {
if (line_search_used) {
StringAppendF(&report, " Line search polynomial minimization %.4f\n",
line_search_polynomial_minimization_time_in_seconds);
}

View File

@@ -33,6 +33,7 @@
#include <algorithm>
#include <cstring>
#include <ctime>
#include <sstream>
#include "ceres/compressed_row_sparse_matrix.h"
#include "ceres/cxsparse.h"
@@ -71,6 +72,12 @@ LinearSolver::Summary SimplicialLDLTSolve(
if (do_symbolic_analysis) {
solver->analyzePattern(lhs);
if (VLOG_IS_ON(2)) {
std::stringstream ss;
solver->dumpMemory(ss);
VLOG(2) << "Symbolic Analysis\n"
<< ss.str();
}
event_logger->AddEvent("Analyze");
if (solver->info() != Eigen::Success) {
summary.termination_type = LINEAR_SOLVER_FATAL_ERROR;

View File

@@ -43,14 +43,27 @@ namespace internal {
using std::string;
#ifdef _MSC_VER
enum { IS_COMPILER_MSVC = 1 };
#if _MSC_VER < 1800
#define va_copy(d, s) ((d) = (s))
#endif
// va_copy() was defined in the C99 standard. However, it did not appear in the
// C++ standard until C++11. This means that if Ceres is being compiled with a
// strict pre-C++11 standard (e.g. -std=c++03), va_copy() will NOT be defined,
// as we are using the C++ compiler (it would however be defined if we were
// using the C compiler). Note however that both GCC & Clang will in fact
// define va_copy() when compiling for C++ if the C++ standard is not explicitly
// specified (i.e. no -std=c++<XX> arg), even though it should not strictly be
// defined unless -std=c++11 (or greater) was passed.
#if !defined(va_copy)
#if defined (__GNUC__)
// On GCC/Clang, if va_copy() is not defined (C++ standard < C++11 explicitly
// specified), use the internal __va_copy() version, which should be present
// in even very old GCC versions.
#define va_copy(d, s) __va_copy(d, s)
#else
enum { IS_COMPILER_MSVC = 0 };
#endif
// Some older versions of MSVC do not have va_copy(), in which case define it.
// Although this is required for older MSVC versions, it should also work for
// other non-GCC/Clang compilers which also do not define va_copy().
#define va_copy(d, s) ((d) = (s))
#endif // defined (__GNUC__)
#endif // !defined(va_copy)
void StringAppendV(string* dst, const char* format, va_list ap) {
// First try with a small fixed size buffer
@@ -71,13 +84,13 @@ void StringAppendV(string* dst, const char* format, va_list ap) {
return;
}
if (IS_COMPILER_MSVC) {
// Error or MSVC running out of space. MSVC 8.0 and higher
// can be asked about space needed with the special idiom below:
va_copy(backup_ap, ap);
result = vsnprintf(NULL, 0, format, backup_ap);
va_end(backup_ap);
}
#if defined (_MSC_VER)
// Error or MSVC running out of space. MSVC 8.0 and higher
// can be asked about space needed with the special idiom below:
va_copy(backup_ap, ap);
result = vsnprintf(NULL, 0, format, backup_ap);
va_end(backup_ap);
#endif
if (result < 0) {
// Just an error.
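For context, the MSVC branch above relies on the standard two-pass vsnprintf sizing idiom: calling vsnprintf with a NULL buffer and size zero returns the number of characters needed (C99 semantics), and a second call then does the actual formatting. A minimal sketch of that idiom, not the actual Ceres StringPrintf implementation:

#include <cstdarg>
#include <cstdio>
#include <string>
#include <vector>

std::string FormatString(const char* format, ...) {
  va_list ap;
  va_start(ap, format);
  va_list backup_ap;
  va_copy(backup_ap, ap);  // measure using a copy of the argument list
  const int needed = vsnprintf(NULL, 0, format, backup_ap);
  va_end(backup_ap);

  std::string result;
  if (needed >= 0) {
    std::vector<char> buffer(needed + 1);
    vsnprintf(&buffer[0], buffer.size(), format, ap);  // format for real
    result.assign(&buffer[0], needed);
  }
  va_end(ap);
  return result;
}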

View File

@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
// Copyright 2015 Google Inc. All rights reserved.
// Copyright 2016 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -43,674 +43,747 @@
#include "ceres/coordinate_descent_minimizer.h"
#include "ceres/evaluator.h"
#include "ceres/file.h"
#include "ceres/internal/eigen.h"
#include "ceres/internal/scoped_ptr.h"
#include "ceres/line_search.h"
#include "ceres/linear_least_squares_problems.h"
#include "ceres/sparse_matrix.h"
#include "ceres/stringprintf.h"
#include "ceres/trust_region_strategy.h"
#include "ceres/types.h"
#include "ceres/wall_time.h"
#include "glog/logging.h"
// Helper macro to simplify some of the control flow.
#define RETURN_IF_ERROR_AND_LOG(expr) \
do { \
if (!(expr)) { \
LOG(ERROR) << "Terminating: " << solver_summary_->message; \
return; \
} \
} while (0)
namespace ceres {
namespace internal {
namespace {
LineSearch::Summary DoLineSearch(const Minimizer::Options& options,
const Vector& x,
const Vector& gradient,
const double cost,
const Vector& delta,
Evaluator* evaluator) {
LineSearchFunction line_search_function(evaluator);
TrustRegionMinimizer::~TrustRegionMinimizer() {}
void TrustRegionMinimizer::Minimize(const Minimizer::Options& options,
double* parameters,
Solver::Summary* solver_summary) {
start_time_in_secs_ = WallTimeInSeconds();
iteration_start_time_in_secs_ = start_time_in_secs_;
Init(options, parameters, solver_summary);
RETURN_IF_ERROR_AND_LOG(IterationZero());
// Create the TrustRegionStepEvaluator. The construction needs to be
// delayed to this point because we need the cost for the starting
// point to initialize the step evaluator.
step_evaluator_.reset(new TrustRegionStepEvaluator(
x_cost_,
options_.use_nonmonotonic_steps
? options_.max_consecutive_nonmonotonic_steps
: 0));
while (FinalizeIterationAndCheckIfMinimizerCanContinue()) {
iteration_start_time_in_secs_ = WallTimeInSeconds();
iteration_summary_ = IterationSummary();
iteration_summary_.iteration =
solver_summary->iterations.back().iteration + 1;
RETURN_IF_ERROR_AND_LOG(ComputeTrustRegionStep());
if (!iteration_summary_.step_is_valid) {
RETURN_IF_ERROR_AND_LOG(HandleInvalidStep());
continue;
}
if (options_.is_constrained) {
// Use a projected line search to enforce the bounds constraints
// and improve the quality of the step.
DoLineSearch(x_, gradient_, x_cost_, &delta_);
}
ComputeCandidatePointAndEvaluateCost();
DoInnerIterationsIfNeeded();
if (ParameterToleranceReached()) {
return;
}
if (FunctionToleranceReached()) {
return;
}
if (IsStepSuccessful()) {
RETURN_IF_ERROR_AND_LOG(HandleSuccessfulStep());
continue;
}
HandleUnsuccessfulStep();
}
}
// Initialize the minimizer, allocate working space and set some of
// the fields in the solver_summary.
void TrustRegionMinimizer::Init(const Minimizer::Options& options,
double* parameters,
Solver::Summary* solver_summary) {
options_ = options;
sort(options_.trust_region_minimizer_iterations_to_dump.begin(),
options_.trust_region_minimizer_iterations_to_dump.end());
parameters_ = parameters;
solver_summary_ = solver_summary;
solver_summary_->termination_type = NO_CONVERGENCE;
solver_summary_->num_successful_steps = 0;
solver_summary_->num_unsuccessful_steps = 0;
solver_summary_->is_constrained = options.is_constrained;
evaluator_ = CHECK_NOTNULL(options_.evaluator.get());
jacobian_ = CHECK_NOTNULL(options_.jacobian.get());
strategy_ = CHECK_NOTNULL(options_.trust_region_strategy.get());
is_not_silent_ = !options.is_silent;
inner_iterations_are_enabled_ =
options.inner_iteration_minimizer.get() != NULL;
inner_iterations_were_useful_ = false;
num_parameters_ = evaluator_->NumParameters();
num_effective_parameters_ = evaluator_->NumEffectiveParameters();
num_residuals_ = evaluator_->NumResiduals();
num_consecutive_invalid_steps_ = 0;
x_ = ConstVectorRef(parameters_, num_parameters_);
x_norm_ = x_.norm();
residuals_.resize(num_residuals_);
trust_region_step_.resize(num_effective_parameters_);
delta_.resize(num_effective_parameters_);
candidate_x_.resize(num_parameters_);
gradient_.resize(num_effective_parameters_);
model_residuals_.resize(num_residuals_);
negative_gradient_.resize(num_effective_parameters_);
projected_gradient_step_.resize(num_parameters_);
// By default the scaling is one; if the user requests Jacobi scaling of
// the Jacobian, we will compute and overwrite this vector.
jacobian_scaling_ = Vector::Ones(num_effective_parameters_);
x_norm_ = -1; // Invalid value
x_cost_ = std::numeric_limits<double>::max();
minimum_cost_ = x_cost_;
model_cost_change_ = 0.0;
}
// 1. Project the initial solution onto the feasible set if needed.
// 2. Compute the initial cost, jacobian & gradient.
//
// Return true if all computations can be performed successfully.
bool TrustRegionMinimizer::IterationZero() {
iteration_summary_ = IterationSummary();
iteration_summary_.iteration = 0;
iteration_summary_.step_is_valid = false;
iteration_summary_.step_is_successful = false;
iteration_summary_.cost_change = 0.0;
iteration_summary_.gradient_max_norm = 0.0;
iteration_summary_.gradient_norm = 0.0;
iteration_summary_.step_norm = 0.0;
iteration_summary_.relative_decrease = 0.0;
iteration_summary_.eta = options_.eta;
iteration_summary_.linear_solver_iterations = 0;
iteration_summary_.step_solver_time_in_seconds = 0;
if (options_.is_constrained) {
delta_.setZero();
if (!evaluator_->Plus(x_.data(), delta_.data(), candidate_x_.data())) {
solver_summary_->message =
"Unable to project initial point onto the feasible set.";
solver_summary_->termination_type = FAILURE;
return false;
}
x_ = candidate_x_;
x_norm_ = x_.norm();
}
if (!EvaluateGradientAndJacobian()) {
return false;
}
solver_summary_->initial_cost = x_cost_ + solver_summary_->fixed_cost;
iteration_summary_.step_is_valid = true;
iteration_summary_.step_is_successful = true;
return true;
}
// For the current x_, compute
//
// 1. Cost
// 2. Jacobian
// 3. Gradient
// 4. Scale the Jacobian if needed (and compute the scaling if we are
// in iteration zero).
// 5. Compute the 2 and max norm of the gradient.
//
// Returns true if all computations could be performed
// successfully. Any failures are considered fatal and the
// Solver::Summary is updated to indicate this.
bool TrustRegionMinimizer::EvaluateGradientAndJacobian() {
if (!evaluator_->Evaluate(x_.data(),
&x_cost_,
residuals_.data(),
gradient_.data(),
jacobian_)) {
solver_summary_->message = "Residual and Jacobian evaluation failed.";
solver_summary_->termination_type = FAILURE;
return false;
}
iteration_summary_.cost = x_cost_ + solver_summary_->fixed_cost;
if (options_.jacobi_scaling) {
if (iteration_summary_.iteration == 0) {
// Compute a scaling vector that is used to improve the
// conditioning of the Jacobian.
//
// jacobian_scaling_ = diag(J'J)^{-1}
jacobian_->SquaredColumnNorm(jacobian_scaling_.data());
for (int i = 0; i < jacobian_->num_cols(); ++i) {
// Add one to the denominator to prevent division by zero.
jacobian_scaling_[i] = 1.0 / (1.0 + sqrt(jacobian_scaling_[i]));
}
}
// jacobian = jacobian * diag(J'J) ^{-1}
jacobian_->ScaleColumns(jacobian_scaling_.data());
}
// The gradient exists in the local tangent space. To account for
// the bounds constraints correctly, instead of just computing the
// norm of the gradient vector, we compute
//
// |Plus(x, -gradient) - x|
//
// Where the Plus operator lifts the negative gradient to the
// ambient space, adds it to x and projects it on the hypercube
// defined by the bounds.
negative_gradient_ = -gradient_;
if (!evaluator_->Plus(x_.data(),
negative_gradient_.data(),
projected_gradient_step_.data())) {
solver_summary_->message =
"projected_gradient_step = Plus(x, -gradient) failed.";
solver_summary_->termination_type = FAILURE;
return false;
}
iteration_summary_.gradient_max_norm =
(x_ - projected_gradient_step_).lpNorm<Eigen::Infinity>();
iteration_summary_.gradient_norm = (x_ - projected_gradient_step_).norm();
return true;
}
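The Jacobi scaling computed above can be shown in isolation. A small sketch using plain Eigen rather than the Ceres SparseMatrix interface: every column is divided by one plus its Euclidean norm, and the computed step is later multiplied by the same factors to undo the change of variables:

#include "Eigen/Core"

void JacobiScaleColumns(Eigen::MatrixXd* jacobian, Eigen::VectorXd* scaling) {
  scaling->resize(jacobian->cols());
  for (int i = 0; i < jacobian->cols(); ++i) {
    // Add one to the denominator to prevent division by zero for empty
    // columns, exactly as the minimizer does.
    (*scaling)[i] = 1.0 / (1.0 + jacobian->col(i).norm());
    jacobian->col(i) *= (*scaling)[i];
  }
}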
// 1. Add the final timing information to the iteration summary.
// 2. Run the callbacks
// 3. Check for termination based on
// a. Run time
// b. Iteration count
// c. Max norm of the gradient
// d. Size of the trust region radius.
//
// Returns true if the user did not terminate the solver and none of these
// termination criteria are met.
bool TrustRegionMinimizer::FinalizeIterationAndCheckIfMinimizerCanContinue() {
if (iteration_summary_.step_is_successful) {
++solver_summary_->num_successful_steps;
if (x_cost_ < minimum_cost_) {
minimum_cost_ = x_cost_;
VectorRef(parameters_, num_parameters_) = x_;
iteration_summary_.step_is_nonmonotonic = false;
} else {
iteration_summary_.step_is_nonmonotonic = true;
}
} else {
++solver_summary_->num_unsuccessful_steps;
}
iteration_summary_.trust_region_radius = strategy_->Radius();
iteration_summary_.iteration_time_in_seconds =
WallTimeInSeconds() - iteration_start_time_in_secs_;
iteration_summary_.cumulative_time_in_seconds =
WallTimeInSeconds() - start_time_in_secs_ +
solver_summary_->preprocessor_time_in_seconds;
solver_summary_->iterations.push_back(iteration_summary_);
if (!RunCallbacks(options_, iteration_summary_, solver_summary_)) {
return false;
}
if (MaxSolverTimeReached()) {
return false;
}
if (MaxSolverIterationsReached()) {
return false;
}
if (GradientToleranceReached()) {
return false;
}
if (MinTrustRegionRadiusReached()) {
return false;
}
return true;
}
// Compute the trust region step using the TrustRegionStrategy chosen
// by the user.
//
// If the strategy returns with LINEAR_SOLVER_FATAL_ERROR, which
// indicates an unrecoverable error, return false. This is the only
// condition that returns false.
//
// If the strategy returns with LINEAR_SOLVER_FAILURE, which indicates
// a numerical failure that could be recovered from by retrying
// (e.g. by increasing the strength of the regularization), we set
// iteration_summary_.step_is_valid to false and return true.
//
// In all other cases, we compute the decrease in the trust region
// model problem. In exact arithmetic, this should always be
// positive, but due to numerical problems in the TrustRegionStrategy
// or round off error when computing the decrease it may be
// negative, in which case we again set
// iteration_summary_.step_is_valid to false.
bool TrustRegionMinimizer::ComputeTrustRegionStep() {
const double strategy_start_time = WallTimeInSeconds();
iteration_summary_.step_is_valid = false;
TrustRegionStrategy::PerSolveOptions per_solve_options;
per_solve_options.eta = options_.eta;
if (find(options_.trust_region_minimizer_iterations_to_dump.begin(),
options_.trust_region_minimizer_iterations_to_dump.end(),
iteration_summary_.iteration) !=
options_.trust_region_minimizer_iterations_to_dump.end()) {
per_solve_options.dump_format_type =
options_.trust_region_problem_dump_format_type;
per_solve_options.dump_filename_base =
JoinPath(options_.trust_region_problem_dump_directory,
StringPrintf("ceres_solver_iteration_%03d",
iteration_summary_.iteration));
}
TrustRegionStrategy::Summary strategy_summary =
strategy_->ComputeStep(per_solve_options,
jacobian_,
residuals_.data(),
trust_region_step_.data());
if (strategy_summary.termination_type == LINEAR_SOLVER_FATAL_ERROR) {
solver_summary_->message =
"Linear solver failed due to unrecoverable "
"non-numeric causes. Please see the error log for clues. ";
solver_summary_->termination_type = FAILURE;
return false;
}
iteration_summary_.step_solver_time_in_seconds =
WallTimeInSeconds() - strategy_start_time;
iteration_summary_.linear_solver_iterations = strategy_summary.num_iterations;
if (strategy_summary.termination_type == LINEAR_SOLVER_FAILURE) {
return true;
}
// new_model_cost
// = 1/2 [f + J * step]^2
// = 1/2 [ f'f + 2f'J * step + step' * J' * J * step ]
// model_cost_change
// = cost - new_model_cost
// = f'f/2 - 1/2 [ f'f + 2f'J * step + step' * J' * J * step]
// = -f'J * step - step' * J' * J * step / 2
// = -(J * step)'(f + J * step / 2)
model_residuals_.setZero();
jacobian_->RightMultiply(trust_region_step_.data(), model_residuals_.data());
model_cost_change_ =
-model_residuals_.dot(residuals_ + model_residuals_ / 2.0);
// TODO(sameeragarwal)
//
// 1. What happens if model_cost_change_ = 0
// 2. What happens if -epsilon <= model_cost_change_ < 0 for some
// small epsilon due to round off error.
iteration_summary_.step_is_valid = (model_cost_change_ > 0.0);
if (iteration_summary_.step_is_valid) {
// Undo the Jacobian column scaling.
delta_ = (trust_region_step_.array() * jacobian_scaling_.array()).matrix();
num_consecutive_invalid_steps_ = 0;
}
VLOG_IF(1, is_not_silent_ && !iteration_summary_.step_is_valid)
<< "Invalid step: current_cost: " << x_cost_
<< " absolute model cost change: " << model_cost_change_
<< " relative model cost change: " << (model_cost_change_ / x_cost_);
return true;
}
// Invalid steps can happen due to a number of reasons, and we allow a
// limited number of consecutive failures, and return false if this
// limit is exceeded.
bool TrustRegionMinimizer::HandleInvalidStep() {
// TODO(sameeragarwal): Should we be returning FAILURE or
// NO_CONVERGENCE? The solution value is still usable in many cases,
// it is not clear if we should declare the solver a failure
// entirely. For example the case where model_cost_change ~ 0.0, but
// just slightly negative.
if (++num_consecutive_invalid_steps_ >=
options_.max_num_consecutive_invalid_steps) {
solver_summary_->message = StringPrintf(
"Number of consecutive invalid steps more "
"than Solver::Options::max_num_consecutive_invalid_steps: %d",
options_.max_num_consecutive_invalid_steps);
solver_summary_->termination_type = FAILURE;
return false;
}
strategy_->StepIsInvalid();
// We are going to try and reduce the trust region radius and
// solve again. To do this, we are going to treat this iteration
// as an unsuccessful iteration. Since the various callbacks are
// still executed, we are going to fill the iteration summary
// with data that assumes a step of length zero and no progress.
iteration_summary_.cost = x_cost_ + solver_summary_->fixed_cost;
iteration_summary_.cost_change = 0.0;
iteration_summary_.gradient_max_norm =
solver_summary_->iterations.back().gradient_max_norm;
iteration_summary_.gradient_norm =
solver_summary_->iterations.back().gradient_norm;
iteration_summary_.step_norm = 0.0;
iteration_summary_.relative_decrease = 0.0;
iteration_summary_.eta = options_.eta;
return true;
}
// Use the supplied coordinate descent minimizer to perform inner
// iterations and compute the improvement due to it. Returns the cost
// after performing the inner iterations.
//
// The optimization is performed with candidate_x_ as the starting
// point, and if the optimization is successful, candidate_x_ will be
// updated with the optimized parameters.
void TrustRegionMinimizer::DoInnerIterationsIfNeeded() {
inner_iterations_were_useful_ = false;
if (!inner_iterations_are_enabled_ ||
candidate_cost_ >= std::numeric_limits<double>::max()) {
return;
}
double inner_iteration_start_time = WallTimeInSeconds();
++solver_summary_->num_inner_iteration_steps;
inner_iteration_x_ = candidate_x_;
Solver::Summary inner_iteration_summary;
options_.inner_iteration_minimizer->Minimize(
options_, inner_iteration_x_.data(), &inner_iteration_summary);
double inner_iteration_cost;
if (!evaluator_->Evaluate(
inner_iteration_x_.data(), &inner_iteration_cost, NULL, NULL, NULL)) {
VLOG_IF(2, is_not_silent_) << "Inner iteration failed.";
return;
}
VLOG_IF(2, is_not_silent_)
<< "Inner iteration succeeded; Current cost: " << x_cost_
<< " Trust region step cost: " << candidate_cost_
<< " Inner iteration cost: " << inner_iteration_cost;
candidate_x_ = inner_iteration_x_;
// Normally, the quality of a trust region step is measured by
// the ratio
//
// cost_change
// r = -----------------
// model_cost_change
//
// All the change in the nonlinear objective is due to the trust
// region step so this ratio is a good measure of the quality of
// the trust region radius. However, when inner iterations are
// being used, cost_change includes the contribution of the
// inner iterations and it's not fair to credit it all to the
// trust region algorithm. So we change the ratio to be
//
// cost_change
// r = ------------------------------------------------
// (model_cost_change + inner_iteration_cost_change)
//
// Practically we do this by increasing model_cost_change by
// inner_iteration_cost_change.
const double inner_iteration_cost_change =
candidate_cost_ - inner_iteration_cost;
model_cost_change_ += inner_iteration_cost_change;
inner_iterations_were_useful_ = inner_iteration_cost < x_cost_;
const double inner_iteration_relative_progress =
1.0 - inner_iteration_cost / candidate_cost_;
// Disable inner iterations once the relative improvement
// drops below tolerance.
inner_iterations_are_enabled_ =
(inner_iteration_relative_progress > options_.inner_iteration_tolerance);
VLOG_IF(2, is_not_silent_ && !inner_iterations_are_enabled_)
<< "Disabling inner iterations. Progress : "
<< inner_iteration_relative_progress;
candidate_cost_ = inner_iteration_cost;
solver_summary_->inner_iteration_time_in_seconds +=
WallTimeInSeconds() - inner_iteration_start_time;
}
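The ratio described in the comments above is what the new TrustRegionStepEvaluator consumes. In its simplest, monotonic form the acceptance test reduces to the classic trust-region rule sketched below; the struct and names are hypothetical, not the Ceres class:

struct StepDecision {
  bool accepted;
  double relative_decrease;
};

// Accept the step when the actual decrease in cost is a sufficient fraction
// of the decrease predicted by the quadratic model.
inline StepDecision EvaluateStep(double current_cost,
                                 double candidate_cost,
                                 double model_cost_change,
                                 double min_relative_decrease) {
  StepDecision decision;
  decision.relative_decrease =
      (current_cost - candidate_cost) / model_cost_change;
  decision.accepted = decision.relative_decrease > min_relative_decrease;
  return decision;
}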
// Perform a projected line search to improve the objective function
// value along delta.
//
// TODO(sameeragarwal): The current implementation does not do
// anything illegal but is incorrect and not terribly effective.
//
// https://github.com/ceres-solver/ceres-solver/issues/187
void TrustRegionMinimizer::DoLineSearch(const Vector& x,
const Vector& gradient,
const double cost,
Vector* delta) {
LineSearchFunction line_search_function(evaluator_);
LineSearch::Options line_search_options;
line_search_options.is_silent = true;
line_search_options.interpolation_type =
options.line_search_interpolation_type;
line_search_options.min_step_size = options.min_line_search_step_size;
options_.line_search_interpolation_type;
line_search_options.min_step_size = options_.min_line_search_step_size;
line_search_options.sufficient_decrease =
options.line_search_sufficient_function_decrease;
options_.line_search_sufficient_function_decrease;
line_search_options.max_step_contraction =
options.max_line_search_step_contraction;
options_.max_line_search_step_contraction;
line_search_options.min_step_contraction =
options.min_line_search_step_contraction;
options_.min_line_search_step_contraction;
line_search_options.max_num_iterations =
options.max_num_line_search_step_size_iterations;
options_.max_num_line_search_step_size_iterations;
line_search_options.sufficient_curvature_decrease =
options.line_search_sufficient_curvature_decrease;
options_.line_search_sufficient_curvature_decrease;
line_search_options.max_step_expansion =
options.max_line_search_step_expansion;
options_.max_line_search_step_expansion;
line_search_options.function = &line_search_function;
std::string message;
scoped_ptr<LineSearch> line_search(
CHECK_NOTNULL(LineSearch::Create(ceres::ARMIJO,
line_search_options,
&message)));
LineSearch::Summary summary;
line_search_function.Init(x, delta);
line_search->Search(1.0, cost, gradient.dot(delta), &summary);
return summary;
}
scoped_ptr<LineSearch> line_search(CHECK_NOTNULL(
LineSearch::Create(ceres::ARMIJO, line_search_options, &message)));
LineSearch::Summary line_search_summary;
line_search_function.Init(x, *delta);
line_search->Search(1.0, cost, gradient.dot(*delta), &line_search_summary);
} // namespace
solver_summary_->num_line_search_steps += line_search_summary.num_iterations;
solver_summary_->line_search_cost_evaluation_time_in_seconds +=
line_search_summary.cost_evaluation_time_in_seconds;
solver_summary_->line_search_gradient_evaluation_time_in_seconds +=
line_search_summary.gradient_evaluation_time_in_seconds;
solver_summary_->line_search_polynomial_minimization_time_in_seconds +=
line_search_summary.polynomial_minimization_time_in_seconds;
solver_summary_->line_search_total_time_in_seconds +=
line_search_summary.total_time_in_seconds;
// Compute a scaling vector that is used to improve the conditioning
// of the Jacobian.
void TrustRegionMinimizer::EstimateScale(const SparseMatrix& jacobian,
double* scale) const {
jacobian.SquaredColumnNorm(scale);
for (int i = 0; i < jacobian.num_cols(); ++i) {
scale[i] = 1.0 / (1.0 + sqrt(scale[i]));
if (line_search_summary.success) {
*delta *= line_search_summary.optimal_step_size;
}
}
void TrustRegionMinimizer::Init(const Minimizer::Options& options) {
options_ = options;
sort(options_.trust_region_minimizer_iterations_to_dump.begin(),
options_.trust_region_minimizer_iterations_to_dump.end());
// Check if the maximum amount of time allowed by the user for the
// solver has been exceeded, and if so return true after updating
// Solver::Summary::message.
bool TrustRegionMinimizer::MaxSolverTimeReached() {
const double total_solver_time =
WallTimeInSeconds() - start_time_in_secs_ +
solver_summary_->preprocessor_time_in_seconds;
if (total_solver_time < options_.max_solver_time_in_seconds) {
return false;
}
solver_summary_->message = StringPrintf("Maximum solver time reached. "
"Total solver time: %e >= %e.",
total_solver_time,
options_.max_solver_time_in_seconds);
solver_summary_->termination_type = NO_CONVERGENCE;
VLOG_IF(1, is_not_silent_) << "Terminating: " << solver_summary_->message;
return true;
}
void TrustRegionMinimizer::Minimize(const Minimizer::Options& options,
double* parameters,
Solver::Summary* summary) {
double start_time = WallTimeInSeconds();
double iteration_start_time = start_time;
Init(options);
// Check if the maximum number of iterations allowed by the user for
// the solver has been exceeded, and if so return true after updating
// Solver::Summary::message.
bool TrustRegionMinimizer::MaxSolverIterationsReached() {
if (iteration_summary_.iteration < options_.max_num_iterations) {
return false;
}
Evaluator* evaluator = CHECK_NOTNULL(options_.evaluator.get());
SparseMatrix* jacobian = CHECK_NOTNULL(options_.jacobian.get());
TrustRegionStrategy* strategy =
CHECK_NOTNULL(options_.trust_region_strategy.get());
solver_summary_->message =
StringPrintf("Maximum number of iterations reached. "
"Number of iterations: %d.",
iteration_summary_.iteration);
const bool is_not_silent = !options.is_silent;
solver_summary_->termination_type = NO_CONVERGENCE;
VLOG_IF(1, is_not_silent_) << "Terminating: " << solver_summary_->message;
return true;
}
// If the problem is bounds constrained, then enable the use of a
// line search after the trust region step has been computed. This
// line search will automatically project the test point onto
// the feasible set, thereby guaranteeing the feasibility of the
// final output.
// Check convergence based on the max norm of the gradient (only for
// iterations where the step was declared successful).
bool TrustRegionMinimizer::GradientToleranceReached() {
if (!iteration_summary_.step_is_successful ||
iteration_summary_.gradient_max_norm > options_.gradient_tolerance) {
return false;
}
solver_summary_->message = StringPrintf(
"Gradient tolerance reached. "
"Gradient max norm: %e <= %e",
iteration_summary_.gradient_max_norm,
options_.gradient_tolerance);
solver_summary_->termination_type = CONVERGENCE;
VLOG_IF(1, is_not_silent_) << "Terminating: " << solver_summary_->message;
return true;
}
// Check convergence based on the size of the trust region radius.
bool TrustRegionMinimizer::MinTrustRegionRadiusReached() {
if (iteration_summary_.trust_region_radius >
options_.min_trust_region_radius) {
return false;
}
solver_summary_->message =
StringPrintf("Minimum trust region radius reached. "
"Trust region radius: %e <= %e",
iteration_summary_.trust_region_radius,
options_.min_trust_region_radius);
solver_summary_->termination_type = CONVERGENCE;
VLOG_IF(1, is_not_silent_) << "Terminating: " << solver_summary_->message;
return true;
}
// Solver::Options::parameter_tolerance based convergence check.
bool TrustRegionMinimizer::ParameterToleranceReached() {
// Compute the norm of the step in the ambient space.
iteration_summary_.step_norm = (x_ - candidate_x_).norm();
const double step_size_tolerance =
options_.parameter_tolerance * (x_norm_ + options_.parameter_tolerance);
if (iteration_summary_.step_norm > step_size_tolerance) {
return false;
}
solver_summary_->message = StringPrintf(
"Parameter tolerance reached. "
"Relative step_norm: %e <= %e.",
(iteration_summary_.step_norm / (x_norm_ + options_.parameter_tolerance)),
options_.parameter_tolerance);
solver_summary_->termination_type = CONVERGENCE;
VLOG_IF(1, is_not_silent_) << "Terminating: " << solver_summary_->message;
return true;
}
// Solver::Options::function_tolerance based convergence check.
bool TrustRegionMinimizer::FunctionToleranceReached() {
iteration_summary_.cost_change = x_cost_ - candidate_cost_;
const double absolute_function_tolerance =
options_.function_tolerance * x_cost_;
if (fabs(iteration_summary_.cost_change) > absolute_function_tolerance) {
return false;
}
solver_summary_->message = StringPrintf(
"Function tolerance reached. "
"|cost_change|/cost: %e <= %e",
fabs(iteration_summary_.cost_change) / x_cost_,
options_.function_tolerance);
solver_summary_->termination_type = CONVERGENCE;
VLOG_IF(1, is_not_silent_) << "Terminating: " << solver_summary_->message;
return true;
}
// Compute candidate_x_ = Plus(x_, delta_)
// Evaluate the cost of candidate_x_ as candidate_cost_.
//
// Failure to compute the step or the cost means that candidate_cost_
// is set to std::numeric_limits<double>::max(). Unlike
// EvaluateGradientAndJacobian, failure in this function is not fatal
// as we are only computing and evaluating a candidate point, and if
// for some reason we are unable to evaluate it, we consider it to be
// a point with very high cost. This allows the user to deal with edge
// cases/constraints as part of the LocalParameterization and
// CostFunction objects.
void TrustRegionMinimizer::ComputeCandidatePointAndEvaluateCost() {
if (!evaluator_->Plus(x_.data(), delta_.data(), candidate_x_.data())) {
LOG_IF(WARNING, is_not_silent_)
<< "x_plus_delta = Plus(x, delta) failed. "
<< "Treating it as a step with infinite cost";
candidate_cost_ = std::numeric_limits<double>::max();
return;
}
if (!evaluator_->Evaluate(
candidate_x_.data(), &candidate_cost_, NULL, NULL, NULL)) {
LOG_IF(WARNING, is_not_silent_)
<< "Step failed to evaluate. "
<< "Treating it as a step with infinite cost";
candidate_cost_ = std::numeric_limits<double>::max();
}
}
bool TrustRegionMinimizer::IsStepSuccessful() {
iteration_summary_.relative_decrease =
step_evaluator_->StepQuality(candidate_cost_, model_cost_change_);
// In most cases, boosting the model_cost_change by the
// improvement caused by the inner iterations is fine, but it can
// be the case that the original trust region step was so bad that
// the resulting improvement in the cost was negative, and the
// change caused by the inner iterations was large enough to
// improve the step, but also to make relative decrease quite
// small.
//
// TODO(sameeragarwal): Make line search available more generally.
const bool use_line_search = options.is_constrained;
summary->termination_type = NO_CONVERGENCE;
summary->num_successful_steps = 0;
summary->num_unsuccessful_steps = 0;
summary->is_constrained = options.is_constrained;
const int num_parameters = evaluator->NumParameters();
const int num_effective_parameters = evaluator->NumEffectiveParameters();
const int num_residuals = evaluator->NumResiduals();
Vector residuals(num_residuals);
Vector trust_region_step(num_effective_parameters);
Vector delta(num_effective_parameters);
Vector x_plus_delta(num_parameters);
Vector gradient(num_effective_parameters);
Vector model_residuals(num_residuals);
Vector scale(num_effective_parameters);
Vector negative_gradient(num_effective_parameters);
Vector projected_gradient_step(num_parameters);
IterationSummary iteration_summary;
iteration_summary.iteration = 0;
iteration_summary.step_is_valid = false;
iteration_summary.step_is_successful = false;
iteration_summary.cost_change = 0.0;
iteration_summary.gradient_max_norm = 0.0;
iteration_summary.gradient_norm = 0.0;
iteration_summary.step_norm = 0.0;
iteration_summary.relative_decrease = 0.0;
iteration_summary.trust_region_radius = strategy->Radius();
iteration_summary.eta = options_.eta;
iteration_summary.linear_solver_iterations = 0;
iteration_summary.step_solver_time_in_seconds = 0;
VectorRef x_min(parameters, num_parameters);
Vector x = x_min;
// Project onto the feasible set.
if (options.is_constrained) {
delta.setZero();
if (!evaluator->Plus(x.data(), delta.data(), x_plus_delta.data())) {
summary->message =
"Unable to project initial point onto the feasible set.";
summary->termination_type = FAILURE;
LOG_IF(WARNING, is_not_silent) << "Terminating: " << summary->message;
return;
}
x_min = x_plus_delta;
x = x_plus_delta;
}
double x_norm = x.norm();
// Do initial cost and Jacobian evaluation.
double cost = 0.0;
if (!evaluator->Evaluate(x.data(),
&cost,
residuals.data(),
gradient.data(),
jacobian)) {
summary->message = "Residual and Jacobian evaluation failed.";
summary->termination_type = FAILURE;
LOG_IF(WARNING, is_not_silent) << "Terminating: " << summary->message;
return;
}
negative_gradient = -gradient;
if (!evaluator->Plus(x.data(),
negative_gradient.data(),
projected_gradient_step.data())) {
summary->message = "Unable to compute gradient step.";
summary->termination_type = FAILURE;
LOG(ERROR) << "Terminating: " << summary->message;
return;
}
summary->initial_cost = cost + summary->fixed_cost;
iteration_summary.cost = cost + summary->fixed_cost;
iteration_summary.gradient_max_norm =
(x - projected_gradient_step).lpNorm<Eigen::Infinity>();
iteration_summary.gradient_norm = (x - projected_gradient_step).norm();
if (iteration_summary.gradient_max_norm <= options.gradient_tolerance) {
summary->message = StringPrintf("Gradient tolerance reached. "
"Gradient max norm: %e <= %e",
iteration_summary.gradient_max_norm,
options_.gradient_tolerance);
summary->termination_type = CONVERGENCE;
VLOG_IF(1, is_not_silent) << "Terminating: " << summary->message;
// Ensure that there is an iteration summary object for iteration
// 0 in Summary::iterations.
iteration_summary.iteration_time_in_seconds =
WallTimeInSeconds() - iteration_start_time;
iteration_summary.cumulative_time_in_seconds =
WallTimeInSeconds() - start_time +
summary->preprocessor_time_in_seconds;
summary->iterations.push_back(iteration_summary);
return;
}
if (options_.jacobi_scaling) {
EstimateScale(*jacobian, scale.data());
jacobian->ScaleColumns(scale.data());
} else {
scale.setOnes();
}
iteration_summary.iteration_time_in_seconds =
WallTimeInSeconds() - iteration_start_time;
iteration_summary.cumulative_time_in_seconds =
WallTimeInSeconds() - start_time
+ summary->preprocessor_time_in_seconds;
summary->iterations.push_back(iteration_summary);
int num_consecutive_nonmonotonic_steps = 0;
double minimum_cost = cost;
double reference_cost = cost;
double accumulated_reference_model_cost_change = 0.0;
double candidate_cost = cost;
double accumulated_candidate_model_cost_change = 0.0;
int num_consecutive_invalid_steps = 0;
bool inner_iterations_are_enabled =
options.inner_iteration_minimizer.get() != NULL;
while (true) {
bool inner_iterations_were_useful = false;
if (!RunCallbacks(options, iteration_summary, summary)) {
return;
}
iteration_start_time = WallTimeInSeconds();
if (iteration_summary.iteration >= options_.max_num_iterations) {
summary->message = "Maximum number of iterations reached.";
summary->termination_type = NO_CONVERGENCE;
VLOG_IF(1, is_not_silent) << "Terminating: " << summary->message;
return;
}
const double total_solver_time = iteration_start_time - start_time +
summary->preprocessor_time_in_seconds;
if (total_solver_time >= options_.max_solver_time_in_seconds) {
summary->message = "Maximum solver time reached.";
summary->termination_type = NO_CONVERGENCE;
VLOG_IF(1, is_not_silent) << "Terminating: " << summary->message;
return;
}
const double strategy_start_time = WallTimeInSeconds();
TrustRegionStrategy::PerSolveOptions per_solve_options;
per_solve_options.eta = options_.eta;
if (find(options_.trust_region_minimizer_iterations_to_dump.begin(),
options_.trust_region_minimizer_iterations_to_dump.end(),
iteration_summary.iteration) !=
options_.trust_region_minimizer_iterations_to_dump.end()) {
per_solve_options.dump_format_type =
options_.trust_region_problem_dump_format_type;
per_solve_options.dump_filename_base =
JoinPath(options_.trust_region_problem_dump_directory,
StringPrintf("ceres_solver_iteration_%03d",
iteration_summary.iteration));
} else {
per_solve_options.dump_format_type = TEXTFILE;
per_solve_options.dump_filename_base.clear();
}
TrustRegionStrategy::Summary strategy_summary =
strategy->ComputeStep(per_solve_options,
jacobian,
residuals.data(),
trust_region_step.data());
if (strategy_summary.termination_type == LINEAR_SOLVER_FATAL_ERROR) {
summary->message =
"Linear solver failed due to unrecoverable "
"non-numeric causes. Please see the error log for clues. ";
summary->termination_type = FAILURE;
LOG_IF(WARNING, is_not_silent) << "Terminating: " << summary->message;
return;
}
iteration_summary = IterationSummary();
iteration_summary.iteration = summary->iterations.back().iteration + 1;
iteration_summary.step_solver_time_in_seconds =
WallTimeInSeconds() - strategy_start_time;
iteration_summary.linear_solver_iterations =
strategy_summary.num_iterations;
iteration_summary.step_is_valid = false;
iteration_summary.step_is_successful = false;
double model_cost_change = 0.0;
if (strategy_summary.termination_type != LINEAR_SOLVER_FAILURE) {
// new_model_cost
// = 1/2 [f + J * step]^2
// = 1/2 [ f'f + 2f'J * step + step' * J' * J * step ]
// model_cost_change
// = cost - new_model_cost
// = f'f/2 - 1/2 [ f'f + 2f'J * step + step' * J' * J * step]
// = -f'J * step - step' * J' * J * step / 2
model_residuals.setZero();
jacobian->RightMultiply(trust_region_step.data(), model_residuals.data());
model_cost_change =
- model_residuals.dot(residuals + model_residuals / 2.0);
if (model_cost_change < 0.0) {
VLOG_IF(1, is_not_silent)
<< "Invalid step: current_cost: " << cost
<< " absolute difference " << model_cost_change
<< " relative difference " << (model_cost_change / cost);
} else {
iteration_summary.step_is_valid = true;
}
}
if (!iteration_summary.step_is_valid) {
      // Invalid steps can happen for a number of reasons; we allow a
      // limited number of successive failures and return with FAILURE
      // if this limit is exceeded.
if (++num_consecutive_invalid_steps >=
options_.max_num_consecutive_invalid_steps) {
summary->message = StringPrintf(
"Number of successive invalid steps more "
"than Solver::Options::max_num_consecutive_invalid_steps: %d",
options_.max_num_consecutive_invalid_steps);
summary->termination_type = FAILURE;
LOG_IF(WARNING, is_not_silent) << "Terminating: " << summary->message;
return;
}
// We are going to try and reduce the trust region radius and
// solve again. To do this, we are going to treat this iteration
// as an unsuccessful iteration. Since the various callbacks are
// still executed, we are going to fill the iteration summary
// with data that assumes a step of length zero and no progress.
iteration_summary.cost = cost + summary->fixed_cost;
iteration_summary.cost_change = 0.0;
iteration_summary.gradient_max_norm =
summary->iterations.back().gradient_max_norm;
iteration_summary.gradient_norm =
summary->iterations.back().gradient_norm;
iteration_summary.step_norm = 0.0;
iteration_summary.relative_decrease = 0.0;
iteration_summary.eta = options_.eta;
} else {
// The step is numerically valid, so now we can judge its quality.
num_consecutive_invalid_steps = 0;
// Undo the Jacobian column scaling.
delta = (trust_region_step.array() * scale.array()).matrix();
// Try improving the step further by using an ARMIJO line
// search.
//
// TODO(sameeragarwal): What happens to trust region sizing as
// it interacts with the line search ?
if (use_line_search) {
const LineSearch::Summary line_search_summary =
DoLineSearch(options, x, gradient, cost, delta, evaluator);
summary->line_search_cost_evaluation_time_in_seconds +=
line_search_summary.cost_evaluation_time_in_seconds;
summary->line_search_gradient_evaluation_time_in_seconds +=
line_search_summary.gradient_evaluation_time_in_seconds;
summary->line_search_polynomial_minimization_time_in_seconds +=
line_search_summary.polynomial_minimization_time_in_seconds;
summary->line_search_total_time_in_seconds +=
line_search_summary.total_time_in_seconds;
if (line_search_summary.success) {
delta *= line_search_summary.optimal_step_size;
}
}
double new_cost = std::numeric_limits<double>::max();
if (evaluator->Plus(x.data(), delta.data(), x_plus_delta.data())) {
if (!evaluator->Evaluate(x_plus_delta.data(),
&new_cost,
NULL,
NULL,
NULL)) {
LOG_IF(WARNING, is_not_silent)
<< "Step failed to evaluate. "
<< "Treating it as a step with infinite cost";
new_cost = std::numeric_limits<double>::max();
}
} else {
LOG_IF(WARNING, is_not_silent)
<< "x_plus_delta = Plus(x, delta) failed. "
<< "Treating it as a step with infinite cost";
}
if (new_cost < std::numeric_limits<double>::max()) {
// Check if performing an inner iteration will make it better.
if (inner_iterations_are_enabled) {
++summary->num_inner_iteration_steps;
double inner_iteration_start_time = WallTimeInSeconds();
const double x_plus_delta_cost = new_cost;
Vector inner_iteration_x = x_plus_delta;
Solver::Summary inner_iteration_summary;
options.inner_iteration_minimizer->Minimize(options,
inner_iteration_x.data(),
&inner_iteration_summary);
if (!evaluator->Evaluate(inner_iteration_x.data(),
&new_cost,
NULL, NULL, NULL)) {
VLOG_IF(2, is_not_silent) << "Inner iteration failed.";
new_cost = x_plus_delta_cost;
} else {
x_plus_delta = inner_iteration_x;
// Boost the model_cost_change, since the inner iteration
// improvements are not accounted for by the trust region.
model_cost_change += x_plus_delta_cost - new_cost;
VLOG_IF(2, is_not_silent)
<< "Inner iteration succeeded; Current cost: " << cost
<< " Trust region step cost: " << x_plus_delta_cost
<< " Inner iteration cost: " << new_cost;
inner_iterations_were_useful = new_cost < cost;
const double inner_iteration_relative_progress =
1.0 - new_cost / x_plus_delta_cost;
// Disable inner iterations once the relative improvement
// drops below tolerance.
inner_iterations_are_enabled =
(inner_iteration_relative_progress >
options.inner_iteration_tolerance);
VLOG_IF(2, is_not_silent && !inner_iterations_are_enabled)
<< "Disabling inner iterations. Progress : "
<< inner_iteration_relative_progress;
}
summary->inner_iteration_time_in_seconds +=
WallTimeInSeconds() - inner_iteration_start_time;
}
}
iteration_summary.step_norm = (x - x_plus_delta).norm();
// Convergence based on parameter_tolerance.
const double step_size_tolerance = options_.parameter_tolerance *
(x_norm + options_.parameter_tolerance);
if (iteration_summary.step_norm <= step_size_tolerance) {
summary->message =
StringPrintf("Parameter tolerance reached. "
"Relative step_norm: %e <= %e.",
(iteration_summary.step_norm /
(x_norm + options_.parameter_tolerance)),
options_.parameter_tolerance);
summary->termination_type = CONVERGENCE;
VLOG_IF(1, is_not_silent) << "Terminating: " << summary->message;
return;
}
iteration_summary.cost_change = cost - new_cost;
const double absolute_function_tolerance =
options_.function_tolerance * cost;
if (fabs(iteration_summary.cost_change) <= absolute_function_tolerance) {
summary->message =
StringPrintf("Function tolerance reached. "
"|cost_change|/cost: %e <= %e",
fabs(iteration_summary.cost_change) / cost,
options_.function_tolerance);
summary->termination_type = CONVERGENCE;
VLOG_IF(1, is_not_silent) << "Terminating: " << summary->message;
return;
}
const double relative_decrease =
iteration_summary.cost_change / model_cost_change;
const double historical_relative_decrease =
(reference_cost - new_cost) /
(accumulated_reference_model_cost_change + model_cost_change);
// If monotonic steps are being used, then the relative_decrease
// is the usual ratio of the change in objective function value
// divided by the change in model cost.
//
// If non-monotonic steps are allowed, then we take the maximum
// of the relative_decrease and the
// historical_relative_decrease, which measures the increase
// from a reference iteration. The model cost change is
// estimated by accumulating the model cost changes since the
// reference iteration. The historical relative_decrease offers
// a boost to a step which is not too bad compared to the
// reference iteration, allowing for non-monotonic steps.
iteration_summary.relative_decrease =
options.use_nonmonotonic_steps
? std::max(relative_decrease, historical_relative_decrease)
: relative_decrease;
// Normally, the quality of a trust region step is measured by
// the ratio
//
// cost_change
// r = -----------------
// model_cost_change
//
// All the change in the nonlinear objective is due to the trust
// region step so this ratio is a good measure of the quality of
// the trust region radius. However, when inner iterations are
// being used, cost_change includes the contribution of the
      // inner iterations and it's not fair to credit it all to the
// trust region algorithm. So we change the ratio to be
//
// cost_change
// r = ------------------------------------------------
// (model_cost_change + inner_iteration_cost_change)
//
// In most cases this is fine, but it can be the case that the
// change in solution quality due to inner iterations is so large
      // and the trust region step is so bad that this ratio can become
// quite small.
//
// This can cause the trust region loop to reject this step. To
      // get around this, we explicitly check if the inner iterations
// led to a net decrease in the objective function value. If
// they did, we accept the step even if the trust region ratio
// is small.
//
// Notice that we do not just check that cost_change is positive
// which is a weaker condition and would render the
// min_relative_decrease threshold useless. Instead, we keep
// track of inner_iterations_were_useful, which is true only
// when inner iterations lead to a net decrease in the cost.
iteration_summary.step_is_successful =
(inner_iterations_were_useful ||
iteration_summary.relative_decrease >
options_.min_relative_decrease);
if (iteration_summary.step_is_successful) {
accumulated_candidate_model_cost_change += model_cost_change;
accumulated_reference_model_cost_change += model_cost_change;
if (!inner_iterations_were_useful &&
relative_decrease <= options_.min_relative_decrease) {
iteration_summary.step_is_nonmonotonic = true;
VLOG_IF(2, is_not_silent)
<< "Non-monotonic step! "
<< " relative_decrease: "
<< relative_decrease
<< " historical_relative_decrease: "
<< historical_relative_decrease;
}
}
}
if (iteration_summary.step_is_successful) {
++summary->num_successful_steps;
strategy->StepAccepted(iteration_summary.relative_decrease);
x = x_plus_delta;
x_norm = x.norm();
// Step looks good, evaluate the residuals and Jacobian at this
// point.
if (!evaluator->Evaluate(x.data(),
&cost,
residuals.data(),
gradient.data(),
jacobian)) {
summary->message = "Residual and Jacobian evaluation failed.";
summary->termination_type = FAILURE;
LOG_IF(WARNING, is_not_silent) << "Terminating: " << summary->message;
return;
}
negative_gradient = -gradient;
if (!evaluator->Plus(x.data(),
negative_gradient.data(),
projected_gradient_step.data())) {
summary->message =
"projected_gradient_step = Plus(x, -gradient) failed.";
summary->termination_type = FAILURE;
LOG(ERROR) << "Terminating: " << summary->message;
return;
}
iteration_summary.gradient_max_norm =
(x - projected_gradient_step).lpNorm<Eigen::Infinity>();
iteration_summary.gradient_norm = (x - projected_gradient_step).norm();
if (options_.jacobi_scaling) {
jacobian->ScaleColumns(scale.data());
}
// Update the best, reference and candidate iterates.
//
// Based on algorithm 10.1.2 (page 357) of "Trust Region
// Methods" by Conn Gould & Toint, or equations 33-40 of
// "Non-monotone trust-region algorithms for nonlinear
// optimization subject to convex constraints" by Phil Toint,
// Mathematical Programming, 77, 1997.
if (cost < minimum_cost) {
// A step that improves solution quality was found.
x_min = x;
minimum_cost = cost;
// Set the candidate iterate to the current point.
candidate_cost = cost;
num_consecutive_nonmonotonic_steps = 0;
accumulated_candidate_model_cost_change = 0.0;
} else {
++num_consecutive_nonmonotonic_steps;
if (cost > candidate_cost) {
          // The current iterate has a higher cost than the
// candidate iterate. Set the candidate to this point.
VLOG_IF(2, is_not_silent)
<< "Updating the candidate iterate to the current point.";
candidate_cost = cost;
accumulated_candidate_model_cost_change = 0.0;
}
// At this point we have made too many non-monotonic steps and
// we are going to reset the value of the reference iterate so
// as to force the algorithm to descend.
//
// This is the case because the candidate iterate has a value
// greater than minimum_cost but smaller than the reference
// iterate.
if (num_consecutive_nonmonotonic_steps ==
options.max_consecutive_nonmonotonic_steps) {
VLOG_IF(2, is_not_silent)
<< "Resetting the reference point to the candidate point";
reference_cost = candidate_cost;
accumulated_reference_model_cost_change =
accumulated_candidate_model_cost_change;
}
}
} else {
++summary->num_unsuccessful_steps;
if (iteration_summary.step_is_valid) {
strategy->StepRejected(iteration_summary.relative_decrease);
} else {
strategy->StepIsInvalid();
}
}
iteration_summary.cost = cost + summary->fixed_cost;
iteration_summary.trust_region_radius = strategy->Radius();
iteration_summary.iteration_time_in_seconds =
WallTimeInSeconds() - iteration_start_time;
iteration_summary.cumulative_time_in_seconds =
WallTimeInSeconds() - start_time
+ summary->preprocessor_time_in_seconds;
summary->iterations.push_back(iteration_summary);
// If the step was successful, check for the gradient norm
// collapsing to zero, and if the step is unsuccessful then check
// if the trust region radius has collapsed to zero.
//
    // For correctness (number of IterationSummary objects, correct
// final cost, and state update) these convergence tests need to
// be performed at the end of the iteration.
if (iteration_summary.step_is_successful) {
// Gradient norm can only go down in successful steps.
if (iteration_summary.gradient_max_norm <= options.gradient_tolerance) {
summary->message = StringPrintf("Gradient tolerance reached. "
"Gradient max norm: %e <= %e",
iteration_summary.gradient_max_norm,
options_.gradient_tolerance);
summary->termination_type = CONVERGENCE;
VLOG_IF(1, is_not_silent) << "Terminating: " << summary->message;
return;
}
} else {
      // Trust region radius can only go down if the step is
// unsuccessful.
if (iteration_summary.trust_region_radius <
options_.min_trust_region_radius) {
summary->message = "Termination. Minimum trust region radius reached.";
summary->termination_type = CONVERGENCE;
VLOG_IF(1, is_not_silent) << summary->message;
return;
}
}
}
// This can cause the trust region loop to reject this step. To
  // get around this, we explicitly check if the inner iterations
// led to a net decrease in the objective function value. If
// they did, we accept the step even if the trust region ratio
// is small.
//
// Notice that we do not just check that cost_change is positive
// which is a weaker condition and would render the
// min_relative_decrease threshold useless. Instead, we keep
// track of inner_iterations_were_useful, which is true only
// when inner iterations lead to a net decrease in the cost.
return (inner_iterations_were_useful_ ||
iteration_summary_.relative_decrease >
options_.min_relative_decrease);
}
// Declare the step successful, move to candidate_x, update the
// derivatives and let the trust region strategy and the step
// evaluator know that the step has been accepted.
bool TrustRegionMinimizer::HandleSuccessfulStep() {
x_ = candidate_x_;
x_norm_ = x_.norm();
if (!EvaluateGradientAndJacobian()) {
return false;
}
iteration_summary_.step_is_successful = true;
strategy_->StepAccepted(iteration_summary_.relative_decrease);
step_evaluator_->StepAccepted(candidate_cost_, model_cost_change_);
return true;
}
// Declare the step unsuccessful and inform the trust region strategy.
void TrustRegionMinimizer::HandleUnsuccessfulStep() {
iteration_summary_.step_is_successful = false;
strategy_->StepRejected(iteration_summary_.relative_decrease);
iteration_summary_.cost = candidate_cost_ + solver_summary_->fixed_cost;
}
} // namespace internal
} // namespace ceres
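
The parameter, function, and gradient tolerance tests used in the loop above are easy to state in isolation. The following standalone sketch is not part of this commit; the Tolerances struct and free function names are illustrative only, with the thresholds standing in for the corresponding Solver::Options fields.

#include <cmath>

// Editorial sketch of the three convergence tests used by the trust
// region loop; names and the Tolerances struct are illustrative only.
struct Tolerances {
  double parameter_tolerance;
  double function_tolerance;
  double gradient_tolerance;
};

// step_norm = |x - Plus(x, delta)|, x_norm = |x| before the step.
bool ParameterToleranceReached(double step_norm, double x_norm,
                               const Tolerances& tol) {
  return step_norm <=
         tol.parameter_tolerance * (x_norm + tol.parameter_tolerance);
}

// cost_change = cost - new_cost for the current step.
bool FunctionToleranceReached(double cost, double cost_change,
                              const Tolerances& tol) {
  return std::abs(cost_change) <= tol.function_tolerance * cost;
}

// gradient_max_norm = max-norm of x - Plus(x, -gradient).
bool GradientToleranceReached(double gradient_max_norm,
                              const Tolerances& tol) {
  return gradient_max_norm <= tol.gradient_tolerance;
}

These are the same expressions that appear inline above; after the refactor they presumably sit behind the ParameterToleranceReached(), FunctionToleranceReached() and GradientToleranceReached() methods declared in the new header below.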

View File

@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
// Copyright 2015 Google Inc. All rights reserved.
// Copyright 2016 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -31,35 +31,136 @@
#ifndef CERES_INTERNAL_TRUST_REGION_MINIMIZER_H_
#define CERES_INTERNAL_TRUST_REGION_MINIMIZER_H_
#include "ceres/internal/eigen.h"
#include "ceres/internal/scoped_ptr.h"
#include "ceres/minimizer.h"
#include "ceres/solver.h"
#include "ceres/sparse_matrix.h"
#include "ceres/trust_region_step_evaluator.h"
#include "ceres/trust_region_strategy.h"
#include "ceres/types.h"
namespace ceres {
namespace internal {
// Generic trust region minimization algorithm. The heavy lifting is
// done by a TrustRegionStrategy object passed in as part of options.
// Generic trust region minimization algorithm.
//
// For example usage, see SolverImpl::Minimize.
class TrustRegionMinimizer : public Minimizer {
public:
~TrustRegionMinimizer() {}
~TrustRegionMinimizer();
// This method is not thread safe.
virtual void Minimize(const Minimizer::Options& options,
double* parameters,
Solver::Summary* summary);
Solver::Summary* solver_summary);
private:
void Init(const Minimizer::Options& options);
void EstimateScale(const SparseMatrix& jacobian, double* scale) const;
bool MaybeDumpLinearLeastSquaresProblem(const int iteration,
const SparseMatrix* jacobian,
const double* residuals,
const double* step) const;
void Init(const Minimizer::Options& options,
double* parameters,
Solver::Summary* solver_summary);
bool IterationZero();
bool FinalizeIterationAndCheckIfMinimizerCanContinue();
bool ComputeTrustRegionStep();
bool EvaluateGradientAndJacobian();
void ComputeCandidatePointAndEvaluateCost();
void DoLineSearch(const Vector& x,
const Vector& gradient,
const double cost,
Vector* delta);
void DoInnerIterationsIfNeeded();
bool ParameterToleranceReached();
bool FunctionToleranceReached();
bool GradientToleranceReached();
bool MaxSolverTimeReached();
bool MaxSolverIterationsReached();
bool MinTrustRegionRadiusReached();
bool IsStepSuccessful();
void HandleUnsuccessfulStep();
bool HandleSuccessfulStep();
bool HandleInvalidStep();
Minimizer::Options options_;
// These pointers are shortcuts to objects passed to the
// TrustRegionMinimizer. The TrustRegionMinimizer does not own them.
double* parameters_;
Solver::Summary* solver_summary_;
Evaluator* evaluator_;
SparseMatrix* jacobian_;
TrustRegionStrategy* strategy_;
scoped_ptr<TrustRegionStepEvaluator> step_evaluator_;
bool is_not_silent_;
bool inner_iterations_are_enabled_;
bool inner_iterations_were_useful_;
// Summary of the current iteration.
IterationSummary iteration_summary_;
// Dimensionality of the problem in the ambient space.
int num_parameters_;
// Dimensionality of the problem in the tangent space. This is the
// number of columns in the Jacobian.
int num_effective_parameters_;
// Length of the residual vector, also the number of rows in the Jacobian.
int num_residuals_;
// Current point.
Vector x_;
  // Residuals at x_.
Vector residuals_;
// Gradient at x_.
Vector gradient_;
// Solution computed by the inner iterations.
Vector inner_iteration_x_;
// model_residuals = J * trust_region_step
Vector model_residuals_;
Vector negative_gradient_;
// projected_gradient_step = Plus(x, -gradient), an intermediate
// quantity used to compute the projected gradient norm.
Vector projected_gradient_step_;
// The step computed by the trust region strategy. If Jacobi scaling
// is enabled, this is a vector in the scaled space.
Vector trust_region_step_;
// The current proposal for how far the trust region algorithm
// thinks we should move. In the most basic case, it is just the
// trust_region_step_ with the Jacobi scaling undone. If bounds
// constraints are present, then it is the result of the projected
// line search.
Vector delta_;
// candidate_x = Plus(x, delta)
Vector candidate_x_;
// Scaling vector to scale the columns of the Jacobian.
Vector jacobian_scaling_;
// Euclidean norm of x_.
double x_norm_;
// Cost at x_.
double x_cost_;
// Minimum cost encountered up till now.
double minimum_cost_;
// How much did the trust region strategy reduce the cost of the
// linearized Gauss-Newton model.
double model_cost_change_;
// Cost at candidate_x_.
double candidate_cost_;
// Time at which the minimizer was started.
double start_time_in_secs_;
// Time at which the current iteration was started.
double iteration_start_time_in_secs_;
// Number of consecutive steps where the minimizer loop computed a
// numerically invalid step.
int num_consecutive_invalid_steps_;
};
} // namespace internal
} // namespace ceres
#endif // CERES_INTERNAL_TRUST_REGION_MINIMIZER_H_

View File

@@ -0,0 +1,107 @@
// Ceres Solver - A fast non-linear least squares minimizer
// Copyright 2016 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are met:
//
// * Redistributions of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
// * Neither the name of Google Inc. nor the names of its contributors may be
// used to endorse or promote products derived from this software without
// specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
// POSSIBILITY OF SUCH DAMAGE.
//
// Author: sameeragarwal@google.com (Sameer Agarwal)
#include <algorithm>
#include "ceres/trust_region_step_evaluator.h"
#include "glog/logging.h"
namespace ceres {
namespace internal {
TrustRegionStepEvaluator::TrustRegionStepEvaluator(
const double initial_cost,
const int max_consecutive_nonmonotonic_steps)
: max_consecutive_nonmonotonic_steps_(max_consecutive_nonmonotonic_steps),
minimum_cost_(initial_cost),
current_cost_(initial_cost),
reference_cost_(initial_cost),
candidate_cost_(initial_cost),
accumulated_reference_model_cost_change_(0.0),
accumulated_candidate_model_cost_change_(0.0),
      num_consecutive_nonmonotonic_steps_(0) {
}
double TrustRegionStepEvaluator::StepQuality(
const double cost,
const double model_cost_change) const {
const double relative_decrease = (current_cost_ - cost) / model_cost_change;
const double historical_relative_decrease =
(reference_cost_ - cost) /
(accumulated_reference_model_cost_change_ + model_cost_change);
return std::max(relative_decrease, historical_relative_decrease);
}
void TrustRegionStepEvaluator::StepAccepted(
const double cost,
const double model_cost_change) {
// Algorithm 10.1.2 from Trust Region Methods by Conn, Gould &
// Toint.
//
// Step 3a
current_cost_ = cost;
accumulated_candidate_model_cost_change_ += model_cost_change;
accumulated_reference_model_cost_change_ += model_cost_change;
// Step 3b.
if (current_cost_ < minimum_cost_) {
minimum_cost_ = current_cost_;
num_consecutive_nonmonotonic_steps_ = 0;
candidate_cost_ = current_cost_;
accumulated_candidate_model_cost_change_ = 0.0;
} else {
// Step 3c.
++num_consecutive_nonmonotonic_steps_;
if (current_cost_ > candidate_cost_) {
candidate_cost_ = current_cost_;
accumulated_candidate_model_cost_change_ = 0.0;
}
}
// Step 3d.
//
// At this point we have made too many non-monotonic steps and
// we are going to reset the value of the reference iterate so
// as to force the algorithm to descend.
//
// Note: In the original algorithm by Toint, this step was only
// executed if the step was non-monotonic, but that would not handle
  // the case of max_consecutive_nonmonotonic_steps = 0. The small
  // modification of always executing this step handles that corner
  // case correctly.
if (num_consecutive_nonmonotonic_steps_ ==
max_consecutive_nonmonotonic_steps_) {
reference_cost_ = candidate_cost_;
accumulated_reference_model_cost_change_ =
accumulated_candidate_model_cost_change_;
}
}
} // namespace internal
} // namespace ceres
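
As a concrete check of the arithmetic in StepQuality(), assume an evaluator constructed with initial_cost = 100, so current_cost_, reference_cost_ and candidate_cost_ all start at 100 and both accumulated model cost changes are zero. A first step with cost = 90 and model_cost_change = 20 gives relative_decrease = (100 - 90) / 20 = 0.5 and historical_relative_decrease = (100 - 90) / (0 + 20) = 0.5, so StepQuality() returns 0.5 and, once the step is accepted, Step 3b resets candidate_cost_ to 90 while reference_cost_ stays at 100. A second step with cost = 95 and model_cost_change = 10 then gives relative_decrease = (90 - 95) / 10 = -0.5 but historical_relative_decrease = (100 - 95) / (20 + 10) ≈ 0.17, so the evaluator still reports a positive quality and the non-monotonic step can be accepted.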

View File

@@ -0,0 +1,122 @@
// Ceres Solver - A fast non-linear least squares minimizer
// Copyright 2016 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are met:
//
// * Redistributions of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
// * Neither the name of Google Inc. nor the names of its contributors may be
// used to endorse or promote products derived from this software without
// specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
// POSSIBILITY OF SUCH DAMAGE.
//
// Author: sameeragarwal@google.com (Sameer Agarwal)
#ifndef CERES_INTERNAL_TRUST_REGION_STEP_EVALUATOR_H_
#define CERES_INTERNAL_TRUST_REGION_STEP_EVALUATOR_H_
namespace ceres {
namespace internal {
// The job of the TrustRegionStepEvaluator is to evaluate the quality
// of a step, i.e., how the cost of a step compares with the reduction
// in the objective of the trust region problem.
//
// Classic trust region methods are descent methods, in that they only
// accept a point if it strictly reduces the value of the objective
// function. They do this by measuring the quality of a step as
//
// cost_change / model_cost_change.
//
// Relaxing the monotonic descent requirement allows the algorithm to
// be more efficient in the long term at the cost of some local
// increase in the value of the objective function.
//
// This is because allowing for non-decreasing objective function
// values in a principled manner allows the algorithm to "jump over
// boulders" as the method is not restricted to move into narrow
// valleys while preserving its convergence properties.
//
// The parameter max_consecutive_nonmonotonic_steps controls the
// window size used by the step selection algorithm to accept
// non-monotonic steps. Setting this parameter to zero recovers the
// classic monotonic descent algorithm.
//
// Based on algorithm 10.1.2 (page 357) of "Trust Region
// Methods" by Conn Gould & Toint, or equations 33-40 of
// "Non-monotone trust-region algorithms for nonlinear
// optimization subject to convex constraints" by Phil Toint,
// Mathematical Programming, 77, 1997.
//
// Example usage:
//
// TrustRegionStepEvaluator* step_evaluator = ...
//
// cost = ... // Compute the non-linear objective function value.
// model_cost_change = ... // Change in the value of the trust region objective.
// if (step_evaluator->StepQuality(cost, model_cost_change) > threshold) {
// x = x + delta;
// step_evaluator->StepAccepted(cost, model_cost_change);
// }
class TrustRegionStepEvaluator {
public:
  // initial_cost is, as the name implies, the cost of the starting
// state of the trust region minimizer.
//
// max_consecutive_nonmonotonic_steps controls the window size used
// by the step selection algorithm to accept non-monotonic
  // steps. Setting this parameter to zero recovers the classic
  // monotonic descent algorithm.
TrustRegionStepEvaluator(double initial_cost,
int max_consecutive_nonmonotonic_steps);
// Return the quality of the step given its cost and the decrease in
// the cost of the model. model_cost_change has to be positive.
double StepQuality(double cost, double model_cost_change) const;
// Inform the step evaluator that a step with the given cost and
// model_cost_change has been accepted by the trust region
// minimizer.
void StepAccepted(double cost, double model_cost_change);
private:
const int max_consecutive_nonmonotonic_steps_;
// The minimum cost encountered up till now.
double minimum_cost_;
// The current cost of the trust region minimizer as informed by the
// last call to StepAccepted.
double current_cost_;
double reference_cost_;
double candidate_cost_;
// Accumulated model cost since the last time the reference model
// cost was updated, i.e., when a step with cost less than the
// current known minimum cost is accepted.
double accumulated_reference_model_cost_change_;
// Accumulated model cost since the last time the candidate model
// cost was updated, i.e., a non-monotonic step was taken with a
// cost that was greater than the current candidate cost.
double accumulated_candidate_model_cost_change_;
// Number of steps taken since the last time minimum_cost was updated.
int num_consecutive_nonmonotonic_steps_;
};
} // namespace internal
} // namespace ceres
#endif // CERES_INTERNAL_TRUST_REGION_STEP_EVALUATOR_H_
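
Expanding on the example usage comment above, here is a small compilable sketch that drives the evaluator with a toy cost sequence. It assumes the internal header is available on the include path (TrustRegionStepEvaluator is not part of the public Ceres API), and kMinRelativeDecrease is a hypothetical stand-in for Solver::Options::min_relative_decrease.

#include <cstdio>

#include "ceres/trust_region_step_evaluator.h"

int main() {
  using ceres::internal::TrustRegionStepEvaluator;
  // Hypothetical acceptance threshold, playing the role of
  // Solver::Options::min_relative_decrease.
  const double kMinRelativeDecrease = 1e-3;
  // Start at cost 10 and allow a window of 5 non-monotonic steps.
  TrustRegionStepEvaluator evaluator(10.0, 5);
  // Toy (cost, model_cost_change) pairs. The second step increases the
  // cost, but the historical ratio keeps its quality positive, so it can
  // still be accepted as a non-monotonic step.
  const double costs[] = {8.0, 8.5, 6.0};
  const double model_cost_changes[] = {2.5, 1.0, 2.0};
  for (int i = 0; i < 3; ++i) {
    const double quality =
        evaluator.StepQuality(costs[i], model_cost_changes[i]);
    if (quality > kMinRelativeDecrease) {
      evaluator.StepAccepted(costs[i], model_cost_changes[i]);
      std::printf("step %d accepted (quality %.3f)\n", i, quality);
    } else {
      std::printf("step %d rejected (quality %.3f)\n", i, quality);
    }
  }
  return 0;
}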

View File

@@ -86,20 +86,20 @@ class TrustRegionStrategy {
struct PerSolveOptions {
PerSolveOptions()
: eta(0),
dump_filename_base(""),
dump_format_type(TEXTFILE) {
}
// Forcing sequence for inexact solves.
double eta;
DumpFormatType dump_format_type;
// If non-empty and dump_format_type is not CONSOLE, the trust
// regions strategy will write the linear system to file(s) with
// name starting with dump_filename_base. If dump_format_type is
// CONSOLE then dump_filename_base will be ignored and the linear
// system will be written to the standard error.
std::string dump_filename_base;
DumpFormatType dump_format_type;
};
struct Summary {