In a logic where expressions can be evaluated, two expressions are equal when they have the same value. One can also argue that the natural values in a logic with equality are essentially the sets of elements that are equivalent under equality.
In MATHS, as in an algebra, we get different types by assuming different forms of equality. Items are equal, not when they are identical but when we wish to confuse them! We are permitted to create new types of object by ignoring differences. For example the strings "1+2" and "2+1" are different but 1+2 and 2+1 are expressions of a numeric type and so will be assumed equal.
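The distinction between expressions-as-quotations and their values shows up directly in any language with both strings and numbers. A minimal Python sketch of my own (not part of MATHS):

```python
# The strings are distinct objects of a string type, but the numeric
# expressions they quote evaluate to the same value of a numeric type.
string_equal = ("1+2" == "2+1")   # comparing quotations
value_equal  = (1 + 2 == 2 + 1)   # comparing evaluated values

print(string_equal, value_equal)  # the quotations differ, the values do not
```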
When an axiom of the form
is added to the other axioms and variables that define a type, a new type is generated. The new type is called a quotient type and can be thought of as being a collection of sets of elements of the original type. The elements in the new set are treated as single objects however.
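A quotient type can be mimicked over a finite set. The following Python sketch (an illustration of my own, assuming the equivalence a ~ b iff a % 3 == b % 3) builds the classes and the assumed equality:

```python
# Quotient of the integers 0..8 by the relation a ~ b iff a % 3 == b % 3.
# Each equivalence class becomes a single object of the new type.
items = range(9)
classes = {}
for n in items:
    classes.setdefault(n % 3, set()).add(n)

def equal_mod3(a, b):
    # the assumed equality axiom of the quotient type
    return a % 3 == b % 3

print(sorted(sorted(c) for c in classes.values()))
```

The nine original items collapse into three objects of the quotient type, and `equal_mod3` is the rule that confuses members of the same class.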
In MATHS we can write
For each type of object a rule needs to be given specifying when two objects are to be assumed equal. In many ways the conditions under which two expressions are considered to be equal define the mathematical properties of a type. Indeed, algebra is largely the study of the effect of various equalities - commutativity, associativity, cancellation, and so on. In practice there are three distinct ways an equality is asserted: (1) by a simple definition of a new term, (2) as an axiom making certain simple expressions equal, and (3) by a recursive definition. Take these in turn:
[1] By a simple definition of a new term:
t ::= e
or
t::T = e,
where e does not contain t and t is free prior to the definition.
These definitions declare the term t and also add the axiom
t=e.
Such a definition does not change the logic of the type very much, because such terms can always be removed from any expression by substituting their defining expressions. This does not constrain the type: if there were objects of the type without the definition then there will be similar objects existing afterward. These definitions are essentially notational conveniences.
Compare with [Jackson95c] where terms are attached to real world designations. The syntax of a definition is used in MATHS also to designate a term:
t::T = `string in some other language`
is semantically equivalent to
t::T.
This is therefore a declaration of a new term and can not constrain the type.
[2] An axiom making certain simple expressions equal:
For all ..... , e1=e2.
The result is a different type of object, called a quotient type, because the original objects are now divided up into classes of equivalent objects that are treated as equal. [ logic_41_HomogenRelations#Equivalence Relations ]
[3] By a recursive definition:
t ::= e
or
t::T = e,
where e does contain t, or t has been used in an earlier definition. These add a new axiom:
t=e.
These may or may not invalidate the type. When an axiom system contains a recursive definition it is wise to demonstrate that there is a set of objects that satisfies the axioms.
This equality is satisfied by exactly one value of x, namely x=0. So sometimes an axiom like this leads to defining a particular value for a term.
However consider this equation:
This is satisfied by x=0 and by x=-2, for example. This shows that a recursive definition need not define a unique value. We have an under-defined term.
Considering:
we are forced to realize that some definitions are satisfied by no value of x at all. Here x is over-defined.
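The three situations just described can be checked over a finite search space. A hedged Python sketch, using illustrative equations of my own (not the document's elided examples):

```python
# Count the solutions of three "recursive" equations over a small
# finite range of candidate values.
candidates = range(-10, 11)

defined = [x for x in candidates if x == 2 * x]          # exactly one solution
under   = [x for x in candidates if x * x + 2 * x == 0]  # several: under-defined
over    = [x for x in candidates if x == x + 1]          # none: over-defined

print(defined, under, over)
```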
All we can say, when we know that x should satisfy some property P, is that
one of these cases will hold:
Equality has attracted the attention of both computer scientists and software engineers. Here are some notes:
Substitution of equals does not always work inside character strings and quotations. It can also fail in statements involving beliefs, desires, and knowledge. The MATHS assumption (as in Cyc [Lenat et al 90]) is that such statements can be handled by placing the beliefs, knowledge, etc in a string. Just because two expressions have the same informal description we can not conclude that they are equal.
Parnas has pointed out that when the expressions in an equality contain partial functions then it is simplest to assume that the equality is false whenever either or both expressions are undefined [Parnas 93].
For any two expressions x and y of the same type x=y and x<>y are predicates. Thus x=y and x<>y have type @ if and only if x and y have the same type.
The "!=" notation comes from the C family of languages and has infected just about every programmer by now. The abbreviation representing inequality (<>), comes from BASIC and Pascal, but this should be shown (if possible) as the conventional mathematical symbol ≠ when a document is typeset or displayed.
Equality and inequality are both parallel operators so expressions of form x = y = z, w = x = y = z, and so on are valid when x, y, and z are expressions of the same type. For example x = y = z means x=y and y=z.
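Python happens to parse chained comparisons the same way, which makes the rule easy to demonstrate (a sketch of my own, not MATHS syntax):

```python
# Python parses x == y == z as (x == y) and (y == z),
# matching the MATHS reading of a chained equality.
x, y, z = 3, 3, 3
chained  = (x == y == z)
expanded = (x == y) and (y == z)
assert chained == expanded

w = 4
# one false link falsifies the whole chain
assert (w == x == y) == ((w == x) and (x == y))
```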
The exceptional non-logical contexts are: character strings, quotations, and statements involving beliefs, desires, and knowledge. These will not in general keep the same value when equals replace equals.
Note that if x or y are expressions with free variables then one needs to be careful not to substitute one for the other in a context where the free variables are bound; the bound variables need to be renamed first. Similarly, if x is a variable then only free occurrences of it can be replaced in an expression by y.
Equality is used via Euclid's Axiom - Equals can be substituted for equals:
and
Leibniz's Axiom is similar: two expressions are equal if and only if they
remain equal when any and all values are substituted for their variables:
So x+1 = 1+x in any traditional numeric type because 0+1=1=1+0, 1+1=1+1, 2+1=1+2, 3+1=1+3, ....
Similarly for predicates, two predicates are equal exactly when they have the same truth value when the same values are substituted for the same variables in each predicate. The iff is the same as = between predicates.
In [ maths/logic_25_Proofs.html ] the following rules are claimed to be derivable from the above properties of equality.
Notice -- for these rules to be applicable, the expressions on both sides of the equals sign must each define a single object, and both must be of the same type. The chief risk is using the definite description operator ("the") when there is more than one object described.
In MATHS use the quantifier one: (uniqueness): (for one x:X(W(x))) ::= for some x:X( W(x) and for all y:X(if W(y) then x=y)).
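Over a finite set the `one` quantifier can be computed directly. A Python sketch of my own that mirrors the definition above (some x satisfies W, and every satisfier equals it):

```python
def one(xs, W):
    """(for one x(W(x))): some x satisfies W and any satisfier equals it."""
    xs = list(xs)
    return any(W(x) and all(x == y for y in xs if W(y)) for x in xs)

print(one(range(5), lambda x: 2 * x == 4))   # only x = 2 qualifies
print(one(range(5), lambda x: x % 2 == 0))   # 0, 2, 4 all qualify
```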
You can easily demonstrate results like the following:
.Case for no x(W(x)).
...
Reduce to absurdity.
.Else x1,x2, W(x1), W(x2).
...
x1=x2
.Else for one x(W(x)).
...
.Close.Case
Another common form is when we can find an x that satisfies W(x) but need to establish that it is the only possible x. Here we might use this pattern:
.Let x2, W(x2).
...
x2=x.
.Close.Let
the (object:Type || Property).
the (x:Real || 2*x = 1 ).
A more writable alternative is:
the(object:Type . Property).
The standard binding form is:
the[object:Type] (Property),
which suggests rendering "the" with a subscript when the document is typeset.
Note. Later we can also define,
The Wikipedia [ Definite_description ] has an introduction to Russell's approach to this.
In normal MATHS, derivation rules that work on "the" all require uniqueness.
However, the rules for equality can be used, [ Proving Uniqueness ] above.
Kalish and Montague go further and effectively introduce the idea that all non-unique descriptions define the same value, the(x:X || false). This is reminiscent of the null pointer in many languages and the Smalltalk class of undefined objects. This lets them derive three useful inference rules:
However MATHS takes these inferences for granted. Now we can show that, if no object of the relevant set (X say) fits the definition, then the description formula is equal to the(x:X || x<>x):
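The behavior of improper descriptions can be sketched over a finite set. In this Python illustration of my own, a sentinel object plays the role of the(x:X || x<>x):

```python
IMPROPER = object()   # stands in for the(x:X || x<>x)

def the(xs, prop):
    """Definite description over a finite set: the unique satisfier,
    or the improper-description object when there are none or many."""
    sats = [x for x in xs if prop(x)]
    return sats[0] if len(sats) == 1 else IMPROPER

print(the(range(10), lambda x: 2 * x == 8))              # unique: 4
print(the(range(10), lambda x: x != x) is IMPROPER)      # no satisfier
print(the(range(10), lambda x: x % 2 == 0) is IMPROPER)  # too many satisfiers
```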
At this time ( Tue Oct 11 10:59:10 PDT 2011 ) MATHS still treats definite descriptions as syntactically correct expressions even if uniqueness has not been proven.
But before they can be used in arguments uniqueness must be established.
For example (inspired by [BergmanMoorNelson90] and presuming a net of definitions for the Roman Empire):
Let
I recall a paradox in Euclidean Geometry that depends on a particular construction claiming to define a point when in fact it can not do so. Thus MATHS does not allow algebra on undefined or over-defined objects.
Further, a recent "proof" discovered by a computer [ http://mally.stanford.edu/cm/ontological-argument/ ] (referred to by [Krakovsky11] ) clearly breaches these rules and, further, will also prove some clearly false results - for example, that the numbers have a unique largest number, or that this graph has a single node that has no other node coming from it.
Dana Scott's Domains are sets of objects that are more or less defined and contain both an undefined object (⊥, bottom) and an over-defined object (⊤, top) - one that can not exist because its specification is contradictory [ACM87]. So we do not assume this as an axiom but name the proposition encoding the inference "Improper descriptions".
If you want improper descriptions in your logic, you must link to the definition above and then assert it -- as an assumption.
If you need these formulas then you must link to DOMAIN and assert it in your document.
if( even n, n/2, 3*n+1)
but y'=sqrt(x) or y'=sqrt(-x) is, in normal logic, undefined because one part or the other is undefined.
This question has also been debated on Usenet: comp.specification.z
Mathematicians assumed that there was an "imaginary" sqrt(-1) and developed the complex numbers. LISP, Algol 60, C, Smalltalk, etc. all delay the evaluation of the arms of a conditional expression until after the condition is evaluated. Dijkstra introduced a special operator cand that is the logical equivalent of Ada's short-circuited "and then" or UNIX's "&&".
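Python's `and` short-circuits in exactly this way, so a guard can protect an undefined sub-expression. A small sketch of my own:

```python
def positive_reciprocal(n):
    # cand / "and then" / "&&": the sub-expression 1/n is never
    # evaluated when n == 0, so no division-by-zero error is raised.
    return n != 0 and 1 / n > 0

print(positive_reciprocal(0))   # False rather than an error
print(positive_reciprocal(4))   # True
```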
Parnas notes that in software engineering it is simpler to assume that certain primitive predicates (like equality) are false when their parts are undefined [Parnas93]. In MATHS the predicate e1=e2 is assumed false if either e1 or e2 is undefined. This is quite different to Kalish and Montague's assumption, but is, apparently, consistent [Parnas 93]. We can say that '=' is a Parnasian predicate. It follows that e1<>e2 must be true if either e1 or e2 is undefined. We might call this an anti-Parnasian predicate.
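A Parnasian equality and its anti-Parnasian negation can be modeled with an explicit sentinel for undefinedness. A hedged Python sketch of my own:

```python
UNDEF = object()   # sentinel standing for an undefined expression

def eq(a, b):
    """Parnasian '=': false whenever either side is undefined."""
    return a is not UNDEF and b is not UNDEF and a == b

def ne(a, b):
    """Anti-Parnasian '<>': true whenever either side is undefined."""
    return not eq(a, b)

print(eq(2, 2), eq(UNDEF, 2), ne(UNDEF, 2))
```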
Net
Parnas does not discuss the question of whether a recursive definition of a predicate creates a primitive or non-primitive predicate. This may be because recursion opens the question of predicates being undefined -- something that Parnas is trying to avoid.
Adding '=' to a logic opens another can of theoretical worms if we then ask which universes of discourse fit the logic - the different semantic models of the logic. The work on Universal Algebra and Category Theory shows that there are several ways to define these models [Wechler 92, Cramer et al 94]. In the strictest of these (initial semantics) all the models are similar except for relabeling the objects systematically - there, two objects in the universe are taken to be equal if and only if they can be proved equal in the logic. In the loosest form - final or terminal semantics - two objects can be the same (confused) unless they can be proved different. In practice these distinctions do not seem to be particularly important.
. . . . . . . . . ( end of section Uniqueness and Definite Descriptions) <<Contents | End>>
. . . . . . . . . ( end of section Equality, Uniqueness, etc) <<Contents | End>>
In general, For T:Types,... (for abf x:T(W(x)) ) ::= {x:T||not W(x)} in finite(T).
Do not confuse this with the idea of generic properties that apply to any type of object.
For example, in normal 3D space two 2D objects generically intersect in a 1D object; the chance of them just touching is mathematically 0. Or: generically, two lines are never parallel.
Here is an abbreviation that may be useful when working with databases, class hierarchies, and ontologies. Suppose that we have a predicate F on type X and a predicate G on objects of type Y; then define:
Proofs follow a natural deduction style: they start with assumptions (".Let"), continue to a consequence, and then close (".Close.Let"), discarding the assumptions and deducing a conclusion. Look here [ Block Structure in logic_25_Proofs ] for more on the structure and rules.
The notation also allows you to create a new network of variables and constraints. A "Net" has a number of variables (including none) and a number of properties (including none) that connect variables. You can give them a name and then reuse them. The schema, formal system, or an elementary piece of documentation starts with "Net" and finishes "End of Net". For more, see [ notn_13_Docn_Syntax.html ] for these ways of defining and reusing pieces of logic and algebra in your documents. A quick example: a circle might be described by Net{radius:Positive Real, center:Point, area:=π*radius^2, ...}.
For a complete listing of pages in this part of my site by topic see [ home.html ]
For a more rigorous description of the standard notations see