Thu Jun 11 13:36:05 PDT 2009



      Introduction - Mathematics is More than Logic

      Modern mathematics contains a large number of re-usable formal systems, complete with declarations, definitions of useful terms, axioms, and theorems. For example, each algebra is a pre-fabricated collection of abstract data types. More powerfully, mathematical systems talk about the possibilities for new classes and the relationships between different types of classes. Mathematics distils a quintessential language suitable for thinking about complex systems.

      This document describes the ways in which mathematics is more than traditional logic: [ logic_0_Intro.html ]

      If you want to get to the specific techniques that make mathematics be more than logic, see [ Algebra ] and the sections that follow it.


      "A Formal manipulator in mathematics often experiences the discomforting feeling that his pencil surpasses him in intelligence."

      Source: Howard W. Eves, "In Mathematical Circles", Boston: Prindle, Weber and Schmidt, 1969, p52. Quoted in Am Math Monthly, V102n5(May 95)p437.

      "The realisation began as an academic exercise without reference to personalities; Smiley manoeuvred the characters like pieces in a puzzle, twisting them this way and that to fit the complex framework of established facts -- and then, in a moment, the pattern had suddenly re-formed with such assurance that it was a game no more"

      Source: Le Carre 62, "Call for the Dead", page 149 of "The Incongruous Spy"

      Historical Examples

      Mathematicians have tended to use terminology, notation, and techniques ahead of the accepted logic of their time. Thus Euclid's geometry and Archimedes' calculations were not expressed in the dialectic of Plato or the logic and metaphysics of Aristotle. Descartes used algebra (an Arabic invention) and combined it with Pappus's geometric analysis as a way of solving problems [ [ Grabiner 95 ] ] and then published a "Discourse on Method". Pascal is famous for noting that "the heart has its reasons". Similarly, Newton was forced to publish results derived with the calculus in the language and logic of Euclid. Maclaurin's defence of Newton's calculus was to show that its results could be derived, in a cumbersome way, from Archimedean and Euclidean methods. Leibniz's concept of an infinitesimal was not shown to be consistent until 300 years later. The differential and integral calculi were used for a century or so while more rigorous minds attempted to explain why they worked. During the 19th century George Boole attempted to put the laws of logic in mathematical terms ("The Laws of Thought").

      At the end of the 19th century there was a program to put mathematics on a formal logical basis, culminating in Whitehead and Russell's monumental three-volume work "Principia Mathematica" [Whitehead and Russell 64]. I think that this attempt gives experimental evidence that mathematics is more complex than most philosophers can handle. The work of Gödel, Church, and Turing together shows that any form of mechanical or formal reasoning will be either uninteresting, inconsistent, or incomplete. And to this day even the most formal mathematical papers are (to a large extent) written in unformalised language.

      In what follows we look at some particular extra-logical features of modern mathematics.


      Algebra uses a very useful form of proof in which a series of transformations is made to an expression and a relation is then deduced between the start and the end of the series. This is not used by "real" logicians. Recently, David Gries has been championing such "calculational proofs".

      In text books the syntax looks like this

      	e1	R1 e2
      		R2 e3
      		R3 e4
      		...
      		R[n-1] e[n]
      At the end of such a chain of relationships we can conclude that the combination (relational product) of all the relations holds between the first and last expression:
       	e1 (R1; R2; R3; ...; R[n-1]) e[n].
      or in shorthand:
      	e1 ;R e[n]

      This inference is valid, since we can show:

    1. (theory_of_relations)|-For e:#expressions, R:#relations, if e1 R1 e2 R2 e3 R3 ... e[n] then e1 (R1; R2; R3; ...) e[n].

      Often all the relations and the conclusion are the same transitive relation. The identity relation ("=") and logical equivalence ("iff") are both very common and useful. The MATHS syntax is in [ algebraic_step in logic_2_Proofs ]
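      The inference in (1) can be sketched in code. The checker below is my own illustration (the names `chain_holds`, `le`, and `eq` are not part of MATHS): it walks a chain of steps, and when every step holds it has witnessed the relational product between the first and last expressions.

```python
def chain_holds(start, steps):
    """Check a calculational chain: steps are (relation, next_value) pairs.
    Returns True when every relation holds between consecutive values,
    i.e. when the relational product of all the steps holds between
    start and the final value."""
    current = start
    for relation, nxt in steps:
        if not relation(current, nxt):
            return False
        current = nxt
    return True

le = lambda a, b: a <= b    # a transitive relation
eq = lambda a, b: a == b    # the identity relation

# 2 = 2 <= 5 <= 9 : the product (=; <=; <=) holds between 2 and 9,
# so we may conclude 2 <= 9.
print(chain_holds(2, [(eq, 2), (le, 5), (le, 9)]))
```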

      There is no general algorithm for proving the equality of two formulae in a general logic with equality [Post...]. There are procedures for converting some sets of equalities into effective and efficient algorithms for testing for equality - e.g.

      Source: Knuth & Bendix "Simple word problems in Universal Algebras" pp263-297 in Cl Probs in Abs Alg Leech 1970.

      Source: Nelson G & Oppen 80, G Nelson & D C Oppen, Fast Decision procedures based on congruence closure, J ACM V27(Apr 80)pp356-364.

      Source: Smith A 90, Alf Smith, The Knuth-Bendix Completion Algorithm and its Specification in Z, pp195-220 in Nicholls 90.
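      As a rough illustration of the idea behind these procedures (and emphatically not the Knuth-Bendix algorithm itself), one can rewrite both terms to a normal form with a terminating rule set and then compare the normal forms. The toy rules below are my own:

```python
# Illustrative, terminating rewrite rules: drop additive and
# multiplicative units from a term written as a string.
RULES = [("x+0", "x"), ("0+x", "x"), ("x*1", "x"), ("1*x", "x")]

def normalize(term):
    """Apply the rules until none fires; the result is a normal form."""
    changed = True
    while changed:
        changed = False
        for lhs, rhs in RULES:
            if lhs in term:
                term = term.replace(lhs, rhs)
                changed = True
    return term

def equal_terms(s, t):
    """Two terms are equal (under these rules) iff their normal forms match."""
    return normalize(s) == normalize(t)

print(equal_terms("x+0", "1*x"))   # both normalize to "x"
```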

      Solving Equations

      "Solving an equation" is another mathematical process that is not part of traditional logic and is indeed not computable. In many ways Descartes's contribution to mathematics is the wide application of a problem-solving method that starts:
       		Let x be ....
      and finishes
       		and so x=...

      Source: Grabiner 95

      Often a formula like f(x)=a, where f(x) is an expression containing x as a free variable and a is an expression, is transformed into one like x=e, and the mathematician concludes "e is the solution to f(x)=a". But this is not always enough because, if there is no x such that f(x)=a, then assuming that such an x exists lets you deduce any conclusion, including x=e. So having derived x=e we should check that f(e)=a.

      In general, suppose that P(x) is assumed and x=e derived. Then you have shown that if an x exists that satisfies P then it must be e:
      (1): if for some x( P(x)) then x=e,

      In formal terms we have the following theorem - if whenever x satisfies P it is e, then either e satisfies P or no x can satisfy P:

    2. (logic)|-if (for all x(if P(x) then x=e)) then ( P(e) or for no x( P(x))),

      Or - either there is no way to satisfy P or we have found it.

    3. (logic)|-if (for all x(if P(x) then x=e)) then ( for no x( P(x)) or (for one x(P(x)) and the x(P(x))=e )).

      Hence it is valid to infer
      (2): e=the x(P(x)) or for no x(P(x)) from
      (3): for all x(if P(x) then x=e).

      Typically P(e) is assumed to follow; however, this step is not always valid. Only by assuming or showing for some x(P(x)) can we safely deduce P(e) from for all x(if P(x) then x=e). Thus 'solving an equation' does not guarantee that the solution exists. Of course, if a value is deduced and it satisfies the original properties then we have

    4. (logic)|-if for all x(if P(x) then x=e) then if P(e) then e=the(x||P(x) ).


    5. (above)|-if for all x(if P(x) then x=e) and P(e) then e=the x(P(x)).

    6. Hence a valid method is to find a possible solution and then verify it.
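      The method in (6) can be sketched as a check-before-accept step. The helper name below is hypothetical, chosen for illustration:

```python
def solve_and_verify(candidate, predicate):
    """Return the candidate only if it actually satisfies P; otherwise None.
    Deriving x=e from P(x) alone does not guarantee P(e)."""
    return candidate if predicate(candidate) else None

# P(x): x + 2 = 5.  Manipulation suggests x = 3; verification confirms it.
P = lambda x: x + 2 == 5
print(solve_and_verify(3, P))   # 3

# Q(x): x = x + 1 has no solution; any "derived" candidate fails the check.
Q = lambda x: x == x + 1
print(solve_and_verify(7, Q))   # None
```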

      Calculational Solutions

      Sometimes (ideally) the derivation of the solution is reversible - each step is an iff-step. Then solving the equation works because the following formula is true and indicates a valid argument:

    7. (logic)|-if for all x:T( f(x)=a iff x=e) then f(e)=a.

      The argument is a classical algebraic or calculational proof that looks like this

       Let{ x:T.
      	f(x)= a
      	iff e1=e2
      	iff e3=e4
      	iff x=e[n]
       } |- f(e[n])=a.

      In MATHS we would include labels or explanations to guarantee the above format.

       Let{ (e0): x:T.
       (e1):	f(x)= a
       (e2):	iff e1=e2
       (e3):	iff e3=e4
       (em):	iff x=e[n]
       } |- f(e[n])=a.
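      Theorem 7 can be exercised on a concrete equation (the equation and names below are my own, chosen for illustration): when every manipulation step is reversible, the derived value solves the original equation and no separate verification is needed.

```python
# Solving 2*x + 6 = 10 by reversible (iff) steps.
def f_equals_a(x):
    return 2 * x + 6 == 10

#   2*x + 6 = 10
#   iff 2*x = 4     (subtracting 6 from both sides is reversible)
#   iff x = 2       (dividing by the non-zero 2 is reversible)
solution = 2
print(f_equals_a(solution))   # True, as theorem 7 guarantees
```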

      Creative Mathematics

      In the past when an expression for x has not been found mathematicians have created new algebras by postulating that a set of equations do have solutions and working out the consequences - ultimately redefining the universe of discourse to include the new types of objects:

      for i:Nat0, some x:X(i+x=0) -- leads to signed integers(X:=Int)

      for i,j:Int, some x:X(i*x=j) -- leads to rational numbers(X:=Rat)

      for a:Rat, if 0<a then some x:X(x*x=a) -- leads to irrationals, and hence the Real numbers

      some x:X(x*x=-1) -- leads to imaginary and complex numbers
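      Once the universe of discourse is enlarged, the postulated solutions exist; Python's numeric tower can exhibit witnesses directly (floats only approximate the reals, so the square-root check uses an approximate comparison).

```python
import math
from fractions import Fraction

neg = -5                 # i + x = 0 with i = 5: needs signed integers
assert 5 + neg == 0

rat = Fraction(3, 2)     # i*x = j with i = 2, j = 3: needs rationals
assert 2 * rat == 3

root2 = math.sqrt(2)     # x*x = 2: needs (at least) the reals
assert math.isclose(root2 * root2, 2)

imag = complex(0, 1)     # x*x = -1: needs complex numbers
assert imag * imag == -1

print("all witnesses found")
```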

      Proposed Syntax for Solving Problems

        The examples and schema are deceptively simple - a single variable and a single predicate. However, by choosing a sufficiently complex domain of discourse the variable x can be a structured variable with hundreds of more or less complex components. Similarly, the predicate can be a conjunction of many terms. The MATHS syntax of a net is ideal for documenting such complex domains of discourse: [ notn_13_Docn_Syntax.html ] The following syntax for mathematical problem solving is therefore suggested.

        The ideal problem+solution would look like this

        1. PROBLEM::=following
          1. Definitions and declarations.
          2. Assumptions.
          3. Goals.

          (End of Net)

        2. SOLUTION::=following
          1. Analysis: Manipulation to give solutions.

          2. Synthesis: Proofs that solutions fit the assumptions.

          (End of Net)

        Examples of solving equations.

        A non-parametric problem:
        (ex1): solutions{ x:Real, x^2-7*x+12=0} = {(x=>3),(x=>4)}.
      1. ex1_PROBLEM::=following
        1. x:Real.
        2. |- (ex1a): x^2-7*x+12 = 0.

        (End of Net)

      2. ex1_SOLUTION::=following
      3. x:Real.

      4. solutions = {(x=>3),(x=>4)}.

        Proof that x=3 is a solution.

        1. x=3,
        2. (-1, arithmetic)|-x*x = 3 * 3 =9,
        3. (-2, arithmetic)|-7*x = 7*3=21,
        4. (-1, -2, arithmetic)|-x*x-7*x = -12,
        5. (-1, arithmetic)|-x*x-7*x +12 = 0
        6. (-1)|-ex1_PROBLEM.

        (Close Let )

        Proof that x=4 is a solution. Left as an exercise for the reader.

      (End of Net)
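      Both claimed solutions can also be checked mechanically; a quick numerical sketch:

```python
# Verify ex1: every member of {3, 4} satisfies x^2 - 7*x + 12 = 0.
def p(x):
    return x**2 - 7*x + 12

print([p(root) for root in (3, 4)])   # [0, 0]
assert all(p(root) == 0 for root in (3, 4))
```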

      Also see [ ../samples/fish.html ] for a simple example.

      Example of solving an equation given data/parameters.
      (ex2): solve{ x:Real, a*x^2-b*x+c=0} given{a,b,c:Real}
      (ex3): = map[a,b,c:Real]{ (b-sqrt(b^2-4*a*c))/(2*a), (b+sqrt(b^2-4*a*c))/(2*a)} .
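      A sketch of ex2 as executable code (the function name is mine), valid only when a is non-zero and the discriminant b^2-4*a*c is non-negative:

```python
import math

def solve_quadratic(a, b, c):
    """Solutions of a*x^2 - b*x + c = 0 (note the sign of b in ex2).
    Assumes a != 0 and a real solution exists."""
    d = math.sqrt(b*b - 4*a*c)
    return {(b - d) / (2*a), (b + d) / (2*a)}

# ex1 is the instance a=1, b=7, c=12.
print(solve_quadratic(1, 7, 12))   # {3.0, 4.0}
```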

      General form

    8. Solving_Equations::=
      1. For Documentation D, solutions{D}::=Net{D}.
      2. For Documentation D1,D2, solve D1 given D2::=map[p:D2](solutions(D1)).


    . . . . . . . . . ( end of section Proposed Syntax for Solving Problems) <<Contents | End>>

    Benign Confusion and Abuses of Notation

    Mathematical texts and papers over the years demonstrate that there is no harm in less-than-rigorous syntax. Bourbaki calls these "abuses of notation". For example, an unbound variable in a proposition is automatically assumed to be universally quantified over some local default type.

    There are two common and contradictory idioms used to talk about algebras in text books:

  1. The typical definition is
    A Monoid M is a triple of a set S, with an operation + and unit 0, ... but the typical way this is used is like this
    X is a Monoid with operation * and unit 1
  2. meaning
    The triple (X, *, 1) is a Monoid.

    In general the name of a set of tuples (tpls) is used as a name for the type of the first element - this seems to be a harmless piece of overloading, which we formalize as follows:

    If the first declaration in a set of documentation is for a variable of type Sets:

  3. xyz::=XYZ where
  4. XYZ::=Net{x:Sets, y:Y,z:Z...} Then
    xyz can be used as follows:
          (x=>X, y=>b, z=>c ...) : xyz -- declares variables(X,b,c,...)
          (X, b, c, ...) :xyz              -- declares same variables
           X : xyz(y=>b, z=>c, ...)       -- declares an X (Given b,c,...)
           X : xyz(b, c, ...),
           X : xyz, -- declares (x=>X, y=>y, z=>z,...) : $XYZ

    Further we allow sets in both these forms:

  5. |-xyz(y=>b, z=>c, ...) = {X:Type||(x=>X, y=>b, z=>c ... )in XYZ}
  6. |-xyz(b, c, ...) = {X:Type||(x=>X, y=>b, z=>c ... )in XYZ}

  7. Thus since Monoid::=Net{X:Sets, op:associative(X), unit:units(X,op)} we can write
  8. (X=>Nat0, op=>(+), unit=>0) in Monoid


  9. Monoid(X=>Nat0, op=>(+), unit=>0)


  10. Nat0 in Monoid(op=>(+), unit=>0)


  11. Nat0:Monoid((+), 0)
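    The claim that (Nat0, +, 0) is a Monoid can be spot-checked on a finite sample. Since the laws quantify over all of Nat0 this is evidence, not proof; the checker below is my own sketch:

```python
def is_monoid(sample, op, unit):
    """Check the monoid laws (associativity, left/right unit)
    over a finite sample of elements."""
    assoc = all(op(op(a, b), c) == op(a, op(b, c))
                for a in sample for b in sample for c in sample)
    unital = all(op(unit, a) == a == op(a, unit) for a in sample)
    return assoc and unital

print(is_monoid(range(5), lambda a, b: a + b, 0))   # True
```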

    A piece of documentation can define a name to be a tpl of declared terms, then all the terms can be substituted implicitly. Suppose we have

  12. XYZ::=
      x:X, y:Y,z:Z,
    1. a::=...
    2. P...
    3. N::=(x,y,z)

    then we can write
  13. Use XYZ and N=(u,v,w) as equivalent to
    	u::X, v::Y,w::Z
    	N::=(x,y,z), N=(u,v,w)
    is equivalent to
    	u::X, v::Y,w::Z
    	x=u, y=v, z=w,


    A common method for a tough problem is to map it into one of a set of simpler problems, solve that one, and map the solution back to the original problem domain. A well-chosen map transforms a tough problem into one that can be solved more easily and then mapped back into a solution to the original. An example is the logarithm map developed by Napier. Here is the simplest form, logarithms to base 2.
    	Original 	Relation   	Transformed
    	128 * 4 	= 2^7 * 2^2	7 + 2
    	512     	= 2^9      	= 9

    The general form of logarithm to base b is:

     Numbers Greater than Zero	<->	Numbers
     		1			<->	0
     		b			<->	1
     Multiplication			<->	Addition
     Division   			<->	Subtraction
     Exponentiation			<->	Multiplication
     Roots     			<->	Division
     Less_than			<->	Less_than
     Unequal    			<->	Unequal
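     The base-2 table above is easy to exercise in code: log2 turns multiplication into addition (the values here happen to be exact because they are powers of two).

```python
import math

# Napier's trick: compute 128 * 4 by adding logarithms and mapping back.
a, b = 128.0, 4.0
log_sum = math.log2(a) + math.log2(b)    # 7 + 2 = 9
assert log_sum == math.log2(a * b)       # multiplication <-> addition
print(2.0 ** log_sum)                    # 512.0
```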

    More advanced maps are the Laplace and Fourier transforms. A computer professional also understands intuitively that decimal numbers and binary numbers are transforms of each other that preserve the arithmetic of the underlying numbers.

    In other words, a complicated domain can sometimes be transformed into a simpler one without destroying the essence or structure of the original domain. For this process to be worthwhile there must be an easy way to transfer expressions and values across the map. Typically a set of tables is prepared (not an easy task) that can be searched quickly and easily. For this to be valid, the map must preserve the structure of the domain.

    Structure Preserving Mappings

    Structure preservation is at the heart of being able to solve problems by [ Analogies ]

    One of the ways of distinguishing a logician from a mathematician is that (by and large) a logician is most interested in a single set of axioms whereas the mathematician is concerned with the relationship between two or more axiom systems. One of the commonest techniques and ideas of the twentieth-century scene has been the study of sets of maps that transmit or preserve the structure of the objects that they connect.

    The very word 'map' indicates an object that reflects, in a grossly reduced and simplified form, some of the properties of the real world. The properties are those that are useful to the owner of the map - lengths are to scale, directions similar, and colors on paper reflect something about what is in the equivalent places in the real world - blue often means a place of wetness, for example. Thus a map guides its user in the real world without being the real world.

    In general a structure preserving mapping is called a 'morphism'. They come in a wide variety of flavors. Some standard flavors will be found in [ math_11_STANDARD.html ]
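    A minimal morphism (my own illustration): h(n) = 2^n maps the monoid (Nat0, +, 0) into (Nat0, *, 1) and preserves the structure, h(a+b) = h(a)*h(b) and h(0) = 1.

```python
# h carries the additive structure of Nat0 to multiplicative structure:
#   h(a + b) = h(a) * h(b)   and   h(0) = 1.
def h(n):
    return 2 ** n

sample = range(6)
assert h(0) == 1
assert all(h(a + b) == h(a) * h(b) for a in sample for b in sample)
print("h is a monoid morphism (checked on a sample)")
```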

    The Grand Unified Theory of all systems and morphisms is called Category Theory. Some have called this "general abstract nonsense" but it turns out that category theory provides a common language and approach to any logistic system. Category theory is the ultimate map - it guides its user in a world of abstract structures.

See Also

  • theory_of_relations::= See http://www.csci.csusb.edu/dick/maths/logic_40_Relations.html.
  • logic::= See http://www.csci.csusb.edu/dick/maths/logic_2_Proofs.html

    . . . . . . . . . ( end of section Mathematics) <<Contents | End>>

    Notes on MATHS Notation

    Special characters are defined in [ intro_characters.html ] that also outlines the syntax of expressions and a document.

    Proofs follow a natural deduction style that start with assumptions ("Let") and continue to a consequence ("Close Let") and then discard the assumptions and deduce a conclusion. Look here [ Block Structure in logic_25_Proofs ] for more on the structure and rules.

    The notation also allows you to create a new network of variables and constraints. A "Net" has a number of variables (including none) and a number of properties (including none) that connect variables. You can give them a name and then reuse them. The schema, formal system, or an elementary piece of documentation starts with "Net" and finishes "End of Net". For more, see [ notn_13_Docn_Syntax.html ] for these ways of defining and reusing pieces of logic and algebra in your documents. A quick example: a circle = Net{radius:Positive Real, center:Point}.

    For a complete listing of pages in this part of my site by topic see [ home.html ]

    Notes on the Underlying Logic of MATHS

    The notation used here is a formal language with syntax and a semantics described using traditional formal logic [ logic_0_Intro.html ] plus sets, functions, relations, and other mathematical extensions.

    For a more rigorous description of the standard notations see

  • STANDARD::= See http://www.csci.csusb.edu/dick/maths/math_11_STANDARD.html


  • above::reason="I'm too lazy to work out which of the above statements I need here", often the last 3 or 4 statements. The previous and previous-but-one statements are shown as (-1) and (-2).
  • given::reason="I've been told that...", used to describe a problem.
  • given::variable="I'll be given a value or object like this...", used to describe a problem.
  • goal::theorem="The result I'm trying to prove right now".
  • goal::variable="The value or object I'm trying to find or construct".
  • let::reason="For the sake of argument let...", introduces a temporary hypothesis that survives until the end of the surrounding "Let...Close.Let" block or Case.
  • hyp::reason="I assumed this in my last Let/Case/Po/...".
  • QED::conclusion="Quite Easily Done" or "Quod Erat Demonstrandum", indicates that you have proved what you wanted to prove.
  • QEF::conclusion="Quite Easily Faked", -- indicate that you have proved that the object you constructed fitted the goal you were given.
  • RAA::conclusion="Reductio Ad Absurdum". This allows you to discard the last assumption (let) that you introduced.