This is a fairly standard CAD feature; it conveys the same
information and has the same recovery path, without erroring out,
so it seems like an obvious win.
Before this commit, it was possible to add some redundant constraints
(e.g. vertical, horizontal or midpoint) without failing the sketch,
because SolveBySubstitution() removed the redundant equations.
However, this could result in the solve failing later because
the system didn't converge, without any pointers as to the true
cause of the failure.
Before this commit, pt-on-line constraints were buggy. To reproduce,
extrude a circle, then add a datum point and constrain it to the
axis of the circle, then move it. The cylinder will collapse.
To quote Jonathan:
> On investigation, I (a) confirm that the problem is
> the unconstrained extrusion depth going to zero, and (b) retract
> my earlier statement blaming extrude and other similar non-entity
> parameter treatment for this problem; you can easily reproduce it
> with a point in 3d constrained to lie on any line whose length
> is free.
>
> PT_ON_LINE is written using VectorsParallel, for no obvious reason.
> Rewriting that constraint to work on two projected distances (using
> any two basis vectors perpendicular to the line) should fix that
> problem, since replacing the "point on line in 3d" constraint with
> two "point on line in 2d" constraints works. That still has
> the hairy ball problem of choosing the basis vectors, which you
> can't do with a continuous function; you'd need Vector::Normal()
> or equivalent.
>
> You could write three equations and make the constraint itself
> introduce one new parameter for t. I don't know how well that
> would work numerically, but it would avoid the hairy ball problem,
> perhaps elegant at the cost of speed.
Indeed, this commit implements the latter solution: it introduces
an additional free parameter. The point being coincident with
the start of the line corresponds to the parameter being zero, and
the point being coincident with the end corresponds to it being one.
In effect, instead of constraining two of three degrees of freedom
(for which the equations do not exist because of the hairy ball
theorem), it constrains three and adds one more.
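A minimal sketch of the resulting equations, using illustrative names
rather than the actual SolveSpace expression machinery:

    // Illustrative only; the real constraint is written with SolveSpace's
    // symbolic Expr types, not raw doubles.
    struct Vec3 { double x, y, z; };

    // The constraint owns one extra unknown, t, and contributes three
    // residual equations expressing P = A + t*(B - A). At t = 0 the point
    // coincides with the start of the line, at t = 1 with its end.
    void PointOnLineResiduals(Vec3 p, Vec3 a, Vec3 b, double t, double r[3]) {
        r[0] = p.x - (a.x + t*(b.x - a.x));
        r[1] = p.y - (a.y + t*(b.y - a.y));
        r[2] = p.z - (a.z + t*(b.z - a.z));
    }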
gcc 6 displays these warnings when compiling in release mode; all of
them except the rankOk one were benign, because an incomplete switch
statement would have caused an error anyway.
The rankOk warning highlighted a real problem: bailing early to
didnt_converge would have branched on an uninitialized variable.
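A simplified illustration of the pattern (hypothetical code, not the
actual solver):

    #include <cstdio>

    // Bailing out early skips the assignment, so the branch at the label
    // reads rankOk uninitialized; -Wmaybe-uninitialized flags this.
    int Solve(bool converged) {
        bool rankOk;                  // no initializer
        if(!converged) goto didnt_converge;
        rankOk = true;                // only assigned on the successful path
    didnt_converge:
        if(!rankOk) {
            std::printf("redundant constraints\n");
            return 1;
        }
        return 0;
    }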
A system solved as REDUNDANT_OKAY is still solved correctly,
even if the UI would consider this an error in the case that
g->allowRedundant==false. So there's no reason to discard this
solution; we might find it useful if a system loses a degree of
freedom while dragging, or to avoid regeneration after redundant
constraints are allowed.
This commit also reverts commit 3ff236c, as that is not necessary
anymore.
Specifically, take the old code that looks like this:
    class Foo {
        enum { X = 1, Y = 2 };
        int kind;
    };
    ... foo.kind = Foo::X; ...
and convert it to this:
    class Foo {
        enum class Kind : uint32_t { X = 1, Y = 2 };
        Kind kind;
    };
    ... foo.kind = Foo::Kind::X; ...
(In some cases the enumeration would not be in the class namespace,
such as when it is generally useful.)
The benefits are as follows:
* The type of the field gives a clear indication of intent, both
to humans and tools (such as binding generators).
* The compiler is able to warn automatically when a switch statement is
not exhaustive (see the sketch after this list); however, this is
currently suppressed by the
    default: ssassert(false, ...)
idiom.
* Integers and plain enums are weakly type checked: they implicitly
convert into each other. This can hide bugs where type conversion
is performed but not intended. Enum classes are strongly type
checked.
* Plain enums pollute parent namespaces; enum classes do not.
Almost every defined enum we have already has a kind of ad-hoc
namespacing via `NAMESPACE_`, which is now explicit.
* Plain enums do not have a well-defined ABI size, which is
important for bindings. Enum classes can have it, if specified.
We specify the base type for all enums as uint32_t, which is
a safe choice and allows us to not change the numeric values
of any variants.
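For illustration, a hypothetical class in the new style; the names are
made up, but it shows the exhaustiveness warning and the strong type
checking mentioned above:

    #include <cstdint>

    class Request {
    public:
        enum class Kind : uint32_t { Line = 1, Arc = 2 };
        Kind kind;
    };

    int Describe(Request::Kind k) {
        switch(k) {
        case Request::Kind::Line: return 1;
        case Request::Kind::Arc:  return 2;
        }
        // With -Wswitch, the compiler warns if a variant of Kind is missing
        // above, as long as no default: label swallows the omission.
        return 0;
    }

    int main() {
        Request r = {};
        r.kind = Request::Kind::Line;  // OK
        // r.kind = 1;                 // error: no implicit int -> Kind conversion
        // int n = r.kind;             // error: no implicit Kind -> int conversion
        return Describe(r.kind);
    }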
This commit introduces absolutely no functional change to the code,
just renaming and changes of type. It handles almost all cases,
except GraphicsWindow::pending.operation, which needs a minor
functional change.
In my (whitequark's) experience this warning tends to expose
copy-paste errors with a high SNR, so making a few fragments
slightly less symmetric is worth it.
Also mollify -Wlogical-op-parentheses while we're at it.
This setting is generally useful, but it especially shines when
assembling, since the "same orientation" and "parallel" constraints
remove three and two rotational degrees of freedom respectively, which
makes them impossible to use together with the 3d "point on line"
constraint, which removes two spatial and two rotational degrees of
freedom.
The setting is not enabled for all imported groups by default
because it exhibits some edge case failures. For example:
* draw two line segments sharing a point,
* constrain lengths of line segments,
* constrain line segments perpendicular,
* constrain line segments to a 90° angle.
This is a truly degenerate case and so it is not considered very
important. However, we can fix this later by using Eigen::SparseQR.
Before this commit, overconstraining a system past a certain point
resulted in a wrong error message: instead of "redundant constraints",
"unsolvable constraints" was displayed.
To reproduce, place six or more length constraints with the same
value onto the same line segment.
When a solver error arises after a change to the sketch, it should
be easy to understand exactly why it happened. Before this change,
two functionally distinct modes of failure were lumped into one:
the same "redundant constraints" message was displayed when all
degrees of freedom were exhausted and the system had a solution, but
also when it had none.
To understand why this is problematic, let's examine several ways
in which we can end up with linearly dependent equations in our
system:
0) create a triangle, then constrain two different pairs of edges
to be perpendicular
1) add two distinct distance constraints on the same segment
2) add two identical distance constraints on the same segment
3) create a triangle, then constrain edges to lengths a, b, and c
so that a+b=c
The case (0) is our baseline case: the constraints in it make
the system unsolvable yet they do not remove more degrees of freedom
than the amount we started with. So the displayed error is
"unsolvable constraints".
The constraints in case (1) remove one too many degrees of freedom,
but otherwise are quite like the case (0): the cause of failure that
is useful to the user is that the constraints are mutually
incompatible.
The constraints in cases (2) and (3) however are not like the others:
there is a set of parameters that satisfies all of the constraints,
but the constraints still remove one degree of freedom too many.
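For example, in case (2), both constraints contribute the same equation
for a segment with endpoints (x1,y1) and (x2,y2) and required length d:

    sqrt((x2-x1)^2 + (y2-y1)^2) - d = 0

written twice. The Jacobian therefore contains two identical rows, so
its rank is one less than the number of equations, yet any parameters
that satisfy one copy also satisfy the other.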
It makes sense to display a different error message for cases (2)
and (3) because in practice, cases like this are likely to arise from
adjustment of constraint values on sketches corresponding to systems
that have a small number of degenerate solutions, and this is very
different from systems arising in cases like (0) where no adjustment
of constraint values will ever result in a successful solution.
So the error message displayed is "redundant constraints".
Lastly, this commit makes cases (0) and (1) display a message
with only a minor difference in wording. This is deliberate.
The reason is that the facts "the system is unsolvable" and
"the system is unsolvable and also has linearly dependent equations"
present no meaningful, actionable difference to the user, and placing
emphasis on it would only cause confusion.
However, they are still distinguished, because in case (0) we
list all relevant constraints (and thus we say they are "mutually
incompatible") but in case (1) we only list the ones that constrain
the sketch further than some valid solution (and we say they are
"unsatisfied").
Before this change, it was possible to adjust constraints in a way
that removes a degree of freedom and makes the sketch unsolvable,
but because the rank test was performed before solving the system,
an error was not displayed immediately. Instead, a solution would
seemingly be found, but it would be very unstable: unrelated changes
to the sketch would cause the rank test to fail.
To reproduce the bug, do this:
* Draw a triangle.
* Create a length constraint for all sides.
* Set side lengths to a, b, and c such that a + b = c.
* Add a line segment.
The current messages accurately reflect what happens to the system
of equations that represents the sketch, but they can be quite
confusing to users who think only in terms of constraints.
We use "unsolvable" and not "impossible" because while most of
the cases that result in this error message will indeed stem from
mutually exclusive sets of constraints, it is still possible that
there is some solution that our solver is unable to find using
numeric methods.
Make the union anonymous so that its elements can be addressed
directly. Then, move the Expr *b field into the union, as it is
never used at the same time as any of the union members.
This will allow us to use non-POD classes inside these objects
in the future, and is otherwise functionally equivalent, as well
as more concise.
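A rough sketch of the resulting shape (member names are illustrative,
not the exact Expr layout):

    // Illustrative only; the real Expr has more members and other names.
    class Expr {
    public:
        Expr *a;
        union {             // anonymous: members are addressed as e->v, e->b
            double v;
            int    parh;
            Expr  *b;       // moved in; never used at the same time as v or parh
        };
    };
    // Before, the union was named, so access went through e->x.v, and
    // Expr *b was a separate member outside of it.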
Note that there are some subtleties with handling of
brace-initialization. Specifically:
On aggregates (e.g. simple C-style structures) using an empty
brace-initializer zero-initializes the aggregate, i.e. it makes
all members zero.
On non-aggregates an empty brace-initializer calls the default
constructor. And if the constructor doesn't explicitly initialize
the members (which the auto-generated constructor doesn't) then
the members will be constructed but otherwise uninitialized.
So, what is an aggregate class? To quote the C++ standard
(C++03 8.5.1 §1):
An aggregate is an array or a class (clause 9) with no
user-declared constructors (12.1), no private or protected
non-static data members (clause 11), no base classes (clause 10),
and no virtual functions (10.3).
In SolveSpace, we only have to handle the case of base classes;
Constraint and Entity have those. Thus, they had to gain a default
constructor that does nothing but initialize the members to zero.
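A minimal standalone illustration of the difference, using made-up
types rather than SolveSpace code:

    #include <cstdio>

    struct Aggregate {                  // no user-declared ctors, bases, etc.
        int x;
        double y;
    };

    struct NonAggregate : Aggregate {   // a base class disqualifies it
        int z;
        // Like Constraint and Entity, it gains a default constructor that
        // does nothing except zero the members.
        NonAggregate() : z(0) { x = 0; y = 0.0; }
    };

    int main() {
        Aggregate    a = {};   // empty brace-initializer zero-initializes x, y
        NonAggregate n = {};   // calls the default constructor written above
        std::printf("%d %g %d %d\n", a.x, a.y, n.x, n.z);
        return 0;
    }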
The main benefit is that std::swap will ensure that the type
of its arguments is move-constructible and move-assignable.
It is more concise as well.
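A small illustration with a made-up type:

    #include <utility>

    struct Pinned {                      // neither copyable nor movable
        Pinned() = default;
        Pinned(const Pinned &) = delete;
        Pinned &operator=(const Pinned &) = delete;
    };

    int main() {
        int a = 1, b = 2;
        std::swap(a, b);      // fine: int satisfies the requirements
        // Pinned p, q;
        // std::swap(p, q);   // would fail to compile, catching the misuse
        return a - 2;         // a is now 2
    }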
When min and max are defined as macros, they conflict with the STL
header files included by other C++ libraries; in this case, the STL
will #undef any other definition.
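For illustration (made-up snippet, not project code):

    // A function-like `min` macro rewrites every later use of that token,
    // including calls to std::min in our code or in headers included
    // after the definition.
    #include <algorithm>
    #define min(a, b) ((a) < (b) ? (a) : (b))

    int main() {
        // return std::min(1, 2);  // would expand to
                                   // "std::((1) < (2) ? (1) : (2))" and
                                   // fail to compile
        return min(1, 2) - 1;      // only the macro form keeps working
    }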
The SolveSpace top-level directory was getting a bit cluttered, so
following the example of numerous other free-software projects, we move the
main application source into a subdirectory and adjust the build systems
accordingly.
We also got rid of the obj/ directory in favor of creating it on
the fly in Makefile.msvc.