For Internet pointers to automated proof systems, see [ Automation ] below.
There has been a lot of research that makes use of formal logic of one sort or another. To generate a quick list of pointers to citations, see LOGIC
For a discussion of some differences between the way people think and formal logic, see [ Three Kinds of Inference ] below.
For more on how formal methods and creativity complement each other see [ Further Reading ] below.
In formal logic:
By medieval times we had a gigantic list of these [ ../samples/syllogisms.html ] and algebra added more:
But what to combine...?
Sometimes we are forced to go a different route -- assume that an object does not exist and derive a contradiction. Notice that a whole branch of mathematics, intuitionism, rejects this approach and demands that if you want to claim that something exists, then you must give a method for constructing an example.
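The mirror-image argument -- assume the object does exist and derive a contradiction -- is the one the Pythagoreans met with sqrt(2) (see Motivation and Theory below). A sketch of that standard argument, written in LaTeX rather than MATHS notation:

```latex
% Assume, for contradiction, that sqrt(2) is a ratio p/q in lowest terms.
\begin{align*}
  \sqrt{2} = p/q,\ \gcd(p,q) = 1
    &\implies 2q^2 = p^2 && \text{so $p$ is even, say $p = 2r$}\\
    &\implies 2q^2 = 4r^2,\ \text{hence } q^2 = 2r^2 && \text{so $q$ is even as well.}
\end{align*}
% Both p and q even contradicts gcd(p,q) = 1, so no such ratio exists.
```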
In MATHS we might express a Euclidean construction like this
Problem -- To draw an equilateral triangle on a given finite straight line
Let
For the real Euclid see [ propI1.html ]
. . . . . . . . . ( end of section Example 1) <<Contents | End>>
Algebra (using symbols for unknowns, and expressions to symbolize the construction of other things, ...) is a powerful tool for finding something that fits a property.
Algebra picks up Pappus's idea of starting with what you want -- expressed as an unknown -- and then expressing the constraints on the unknown... in the hope of transforming these into a known object.
I did a lot of exploration which, when pruned of errors and dead ends,
became:
Net
The proof goes like this... but you can fill in the blank:
Let
. . . . . . . . . ( end of section Example 2) <<Contents | End>>
Notice that I discovered a given property (dnz) in the process of
drafting the construction.
Let
In other cases you show that if an A exists then some other object
B can be
constructed from A,
and this object B we already know doesn't exist:
Let
This kind of proof is popular in computer theory texts -- it is called reduction.
If you are really stuck for ideas, then try relaxing and playing with various possibilities: what if..., suppose..., perhaps... [ logic_25_Proofs.html#Trying Ideas Out ] This is a traditional technique for mathematicians with a problem to solve. Often, when you've worked on a problem long enough, the key idea will come to you "out of the blue" while getting onto a bus or shopping.
Sometimes it helps to loosen up and do something silly. An example is DST [ The Dilbert Salary Theorem ] which was recently proved after years of diligent research.
Another strategy is to browse the ready-made [ logic_25_Proofs.html#Proof Templates ] out there waiting to be used. Every field has its own proof methods. It is, therefore, well worth studying these in preparation for publishing or presenting a proof.
I have found some lovely light-hearted examples on the Internet in the Futility Closet, like this example [ http://www.futilitycloset.com/2012/06/27/readership/ ] of a simple syllogism and a classic informal but MATHS-style proof.
. . . . . . . . . ( end of section Hints) <<Contents | End>>
Source: George Polya, How to Solve It: A New Aspect of Mathematical Method, Princeton University Press, Paperback 0-691-02356-5 (USA).
This is worth reading just for the entry under the heading "Mathematics
Professor, Traditional". The book lays out a program for solving problems and
proving results. It has been known to increase grades a whole point.
It balances rigor and creativity in an effective procedure. Polya
suggests using four phases to solve problems:
For more details on Polya's framework for problem solving see [ ../samples/polya.html ] which is my reduction/expansion/editing of his scheme.
In the MATHS language the Net construct can be used for setting up a collection of givens and a goal and then working out a plan:
.Net
name::type=given, meaning.
...
(given)|- property.
...
(goal): property.
...
.Close.Net
Source: James L. Adams, Conceptual Blockbusting, ??
How and why we get stuck, and how to get unstuck. Full of practical ways of thinking around problems.
Source: Edward de Bono. Just about everything Dr. de Bono has written is easy to understand. It de-mystifies how your brain works and how to use it. He shows how ideas can be created, modified, and put to use.
See [ http://www.debono.ws ] as a starting point.
According to
[SebeokEco83]
we can distinguish three kinds of inference that people,
detectives, doctors, scientists, philosophers, etc. tend to
make. Only the first of these is valid. But they are all commonly used.
We start by labeling three statements:
(Rule): All the beans from this bag are white.
(Case): These beans are from this bag.
(Result): These beans are white.
Here are three types of inference... written in the MATHS format
(premises)|- conclusion.
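The validity claims above can be checked mechanically in a small finite model. Here is a hedged sketch in Python (the two-bean universe and all function names are mine, not part of MATHS): an inference is valid iff no world makes the premises true and the conclusion false.

```python
from itertools import product

def valid(premises, conclusion, worlds):
    """An inference is valid iff it has no countermodel: no world
    where every premise is true but the conclusion is false."""
    return all(conclusion(w) for w in worlds
               if all(p(w) for p in premises))

# A tiny universe: two beans; each is in the bag or not, white or not.
# A world assigns (in_bag, white) to each bean; bean 0 is "these beans".
worlds = list(product(product([True, False], repeat=2), repeat=2))

rule   = lambda w: all(white for (in_bag, white) in w if in_bag)
case   = lambda w: w[0][0]   # bean 0 is from the bag
result = lambda w: w[0][1]   # bean 0 is white

print(valid([rule, case], result, worlds))   # Deduction: True (valid)
print(valid([case, result], rule, worlds))   # Induction: False (defeasible)
print(valid([rule, result], case, worlds))   # Abduction: False (defeasible)
```

The checker confirms that only deduction is valid; induction and abduction each have countermodels, which is why they can be defeated by new evidence.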
For a full discussion of Sherlock Holmes's logic please see [ http://philsci-archive.pitt.edu/5306/ ] Soshichi Uchii's paper "Sherlock Holmes on Reasoning" at [ http://philsci-archive.pitt.edu/ ] the University of Pittsburgh's PhilSci-Archive, or [ http://www1.kcn.ne.jp/~h-uchii/Holmes/ ] (1991) at the University of Kyoto.
Abductive reasoning is proposed as a Sign of Progress (page 181) or Heuristic Syllogism (pages 186-190) by Polya:
These arguments can be made rigorous by using the Bayesian model of probability. Here a value from 0..1 indicates the degree of belief we have in a proposition, and there are formulae for recalculating our beliefs as new data is collected. For more see [ math_81_Probabillity.html#Bayes ] my notes on how this works out using Bayes's Formula.
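A minimal sketch of such a recalculation in Python (the numbers are purely illustrative assumptions of mine, not from any source): we treat the abductive guess "these beans are from this bag" as a hypothesis H and update our belief in it after seeing white beans.

```python
def bayes_update(prior, likelihood, likelihood_alt):
    """Bayes's Formula: return P(H | E) given the prior P(H),
    P(E | H), and P(E | not H)."""
    evidence = likelihood * prior + likelihood_alt * (1.0 - prior)
    return likelihood * prior / evidence

# Assumed numbers (illustrative only): we start undecided, P(H) = 0.5;
# the bag holds only white beans, so P(white | H) = 1.0; elsewhere
# white beans are rarer, say P(white | not H) = 0.2.
belief = bayes_update(prior=0.5, likelihood=1.0, likelihood_alt=0.2)
print(round(belief, 3))  # 0.833 -- the white beans raise our belief
```

Seeing the evidence does not prove the hypothesis (abduction is still invalid as a deduction), but it lawfully increases its probability.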
This book is full of ideas. The author replays an advanced seminar in mathematics where the students duplicate the key points in the history of a particularly elegant theorem in the theory of polyhedra. He makes it clear that proofs are often refuted. Mathematics grows by a process of proof, refutation, and reproof. Indeed, the exception improves the rule more often than not.
The author wanted to decrease the dogmatism of mathematics, and in particular the dogma that mathematical knowledge grows by deducing truth from known truths. He shows that actual mathematics develops in an illogical fashion because it grows by discovering the underlying assumptions as well as by working from these to prove conclusions.
This tendency of number language to come back to the firm earth of figures and models, when doubts arise, has played a great part in building up the rules of what mathematicians call proof. In one sense proof has much the same relation to discovery as art criticism has to authorship. Mathematical truths are usually discovered by experimenting with individual cases first and "proved" afterwards. The proof is then an obituary notice on a praiseworthy statement. It exposes how it is connected with all the other mathematical rules we have learned to use. As we are always learning other rules, the methods first used to justify a rule in mathematics are nearly always held to be lacking in "rigour" by mathematicians who come later. This is how one of our most abstract mathematicians, Professor G. H. Hardy, speaks of one of the most abstract branches of mathematics: "The theory of numbers, more than any other branch of mathematics, began by being an experimental science. Its most famous theorems have all been conjectured, sometimes a hundred years or more before they are proved; and they have been suggested by the evidence of a mass of computations"
Somebody [click here Visual Euclid if you can fill this hole] published a book of proofs from Euclid expressed using colored pictures. Some of these have been animated as Java Applets.
. . . . . . . . . ( end of section Further Reading) <<Contents | End>>
Even more intriguing: if a valid argument leads to a known truth, this does not prove the truth of the assumptions. It can strengthen one's belief in the assumptions -- but this is a matter of probability, not proof. An argument never proves anything positive about its assumptions. It can, however, show something positive about a conclusion.
When a set of assumptions leads to a false conclusion then at least one of the assumptions is false. Thus arguments help us question our assumptions. Hence they lead us to revise them until we have a set that does not contradict reality. The revision process, by the way, is not a logical process but a dialectical one -- the steps are error-prone. Indeed we may need a way to record the counter-arguments or rebuttals that develop in a real argument.
So when we meet a real penguin and it does not fly,
we need to revise our assumptions.
Perhaps penguins fly under water? Perhaps some birds are flightless?
Perhaps we have mis-classified penguins as birds? Choosing
the right one to change is not a logical process but you can
probably do it intuitively... Or if we are scientific
we spend more time studying birds:
sparrows, ostriches, crows, kiwis, canaries, cassowaries, and so on.
We may end up with
birds divided into normal_birds and flightless_birds.
Let
In everyday reasoning we are used to arguments that can be defeated by other arguments. Many of these are like the above and can be resolved by refining our assumptions. There has been research into the structure of these defeasible, dialectic, or plausible arguments: see [ChesnevarMaguitmanLoui00] for a history. Handling contradictory assumptions is also a key part of requirements analysis in developing software, and there have been attempts to develop logics that help in this task. See [HunterNuseibeh97] [Nuseibehetal94] [Finkelsteinetal94] etc. for example. You can get a quick bibliography of recent work on consistency by using my search engine: [Cc]onsisten
In MATHS we can document rebuttals simply:
But
Rebuttals should be put after an argument that is not a full proof,
or after a statement. Several rebuttals may be put after a statement, and they all
attack that proof/statement.
Let
To rebut a rebuttal, place it inside the first rebuttal.
Let
In a sense the penguin problem arises because of the rarity of penguins and other flightless birds. If more birds did not fly we wouldn't have assumed that "all" fly. There have been attempts to develop reasoning based on statements like "All birds normally fly".
In mathematics, something analogous happens. You will see statements like:
Probability and statistics are the branches of mathematics that are designed to handle these problems. Statements like "90% of all birds can fly" allow us to reason about uncertain situations. But they are essentially more complicated than the propositions on this page. However, we can make a theory of probability using the simple logic discussed here. See [ math_81_Probabillity.html ]
. . . . . . . . . ( end of section Quick Example) <<Contents | End>>
By Murphy's Law, it will happen that MATHS users will construct documentation that is inconsistent. The symptom of this is when a proposition and its negation are derived within a piece of documentation.
There has been a lot of work done in the 1990s on inconsistent requirements for software. When talking to people about what they want software to do it is common to find large numbers of small and large contradictions. It is wise to do something about these when they are discovered.
In any case a contradiction that occurs in MATHS is anything but a disaster! It is an opportunity to form a conclusion. It proves something. The correct thing to do is to close off the conclusion and the assumptions and declarations leading to the contradiction into a "set of documentation" that proves a result. Thus by simple editing the whole is protected from inconsistency. The result is a new theorem!
A historical example of a project that failed through not doing this is the Pythagoreans, who failed as mathematicians when they killed the man who proved that the assumption that sqrt(2) was a ratio led to a contradiction. They could have proceeded by making this into a demonstration that there must exist numbers that are not ratios. Instead they preferred to support a mystical theory which they knew to be unsound. There are no Pythagoreans left today.
Imre Lakatos studied the detailed behavior of mathematicians in the 19th century when faced with a proved theorem and a number of exceptions to it. From this he worked out an interesting heuristic (a guide for the inexpert). A valid proof is (by definition) one that, given true 'givens', guarantees the truth of the conclusion. Now most proofs are somewhat incomplete, but it is possible (by hard work) to construct such iron chains leading from assumptions to conclusions. In any case, Lakatos pointed out that this means that if the conclusion contradicts some other fact you should examine what you have taken as given, and what you have assumed in the proof, to see which of these are also contradicted. The assumptions that conflict with the new data can now either be improved, or they may have to be incorporated into the statement of the theorem. In summary, a proof plus a refutation improves the theory. See [ Further Reading ]
Absurdity happens! You will find your cherished and documented assumptions leading to conclusions that do not fit with observation. Again this is an opportunity, not a problem! Science takes a leap forward whenever reality destroys a theory. First the data has to be checked, and then the theory may have to be modified.
For example, the first evidence for a hole in the ozone layer was rejected by NASA because the values couldn't happen - it had to be "experimental error". Astronomy is full of examples of absurd ideas being rejected that were accepted later. There is an early star map on which one star had been erased and redrawn nearby - because a later observation showed it in a new position. The "star" was one of the then unknown outer planets. Kepler wrote and crossed out "These points might as well fit an ellipse", and delayed his discovery of "Kepler's Laws" by several years. Because "action at a distance" was repugnant to Descartes' philosophy, his followers later rejected Newton's law of gravitation. For more on the history of astronomy see:
Source: Arthur Koestler, "The Sleepwalkers", 1959.
An engineer is in trouble when reality shows up a misunderstanding - because the system often becomes unsafe or ceases to work. The wise engineer takes time to scientifically examine and model the situation before searching out the problems and solving them. No work is started until a mathematical analysis proves that the system is safe (within a generous "safety factor"). The wise engineer also builds experimental prototype versions first. As an example, in the UK there were once two rival groups designing an airship. One design was based on an elegant mathematical model and known to be safe. The other was "hacked" together - despite evidence that it would fall to pieces. The properly engineered system flew; the other was the R101, which crashed in flames - breaking up as predicted.
Source: Nevil Shute, "Slide Rule".
However never totally discard a theory - document the context in which it did/does work and file it under - "It seemed like a good idea at the time...".
. . . . . . . . . ( end of section Motivation and Theory) <<Contents | End>>
For example, formulas in the propositional calculus can be
proved by several algorithms:
Table
Truth Tables (Emil Post) |
Proof_by_cases+Indirect (Jensen) |
Tree structured analysis [ logic_27_Tableaux.html ] |
Boolean Algebra (Boole) |
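The first of these methods, truth tables, is easy to mechanize. A minimal sketch in Python (the function names and the choice of Peirce's law as an example are mine): a formula is a theorem of the propositional calculus iff it is true in every row of its truth table.

```python
from itertools import product

def implies(a, b):
    """Material implication: a -> b is (not a) or b."""
    return (not a) or b

def is_tautology(formula, variables):
    """Exhaustively check every assignment of True/False to the
    variables; the formula is a theorem iff every row comes out True."""
    return all(formula(*row)
               for row in product([True, False], repeat=len(variables)))

# Peirce's law ((p -> q) -> p) -> p is a classical tautology...
peirce = lambda p, q: implies(implies(implies(p, q), p), p)
print(is_tautology(peirce, "pq"))                     # True
# ...but p -> q on its own is not (it fails when p=True, q=False).
print(is_tautology(lambda p, q: implies(p, q), "pq"))  # False
```

The cost doubles with each extra variable, which is why the other methods in the table above matter for larger formulas.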
Tools and notations can clear the logical underbrush and allow the necessary creative steps to be made.
There are tools on the WWW. For example there is an automatic theorem prover for many non-classical propositional logics: [ LWBinfo.html ]
I have developed a very simple Prolog program to evaluate, test and disprove boolean expressions: [ bool.plg ] and in some courses students are asked to write a program to generate semantic tableaux: [ project2.html ]
Full-blown theorem provers tend to need human intervention and guidance. The record for humans trying to prove a theorem is approximately 400 years.
One popular tool for computer people is the Prototype Verification System:
[click here Links to your tools can be put here. if you can fill this hole]
. . . . . . . . . ( end of section Automation) <<Contents | End>>
The following proof was supplied to Tom and Ray [ http://cartalk.cars.com/ ] by Kate Fowler, Denver, Colorado, as an informal proof [ 04.29.html ] in April of 2000.
. . . . . . . . . ( end of section The Dilbert Salary Theorem) <<Contents | End>>
. . . . . . . . . ( end of section Glossary) <<Contents | End>>
. . . . . . . . . ( end of section Comments and Reviews) <<Contents | End>>
. . . . . . . . . ( end of section Help with Proofs and Demonstrations) <<Contents | End>>
Proofs follow a natural deduction style: they start with assumptions ("Let"), continue to a consequence, and then ("Close Let") discard the assumptions and deduce a conclusion. Look here [ Block Structure in logic_25_Proofs ] for more on the structure and rules.
The notation also allows you to create a new network of variables and constraints. A "Net" has a number of variables (including none) and a number of properties (including none) that connect variables. You can give them a name and then reuse them. The schema, formal system, or an elementary piece of documentation starts with "Net" and finishes "End of Net". For more, see [ notn_13_Docn_Syntax.html ] for these ways of defining and reusing pieces of logic and algebra in your documents. A quick example: a circle might be described by Net{radius:Positive Real, center:Point, area:=π*radius^2, ...}.
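The same shape -- named variables plus properties connecting them -- can be sketched in an executable language. This is a rough Python analogue, purely illustrative (the class name and checks are mine, and a real Net is declarative rather than executable):

```python
import math
from dataclasses import dataclass

@dataclass
class CircleNet:
    """Rough analogue of Net{radius: Positive Real, center: Point,
    area := pi*radius^2}: declared variables plus a derived property."""
    radius: float
    center: tuple  # (x, y)

    def __post_init__(self):
        # The constraint "radius: Positive Real" becomes a runtime check.
        if self.radius <= 0:
            raise ValueError("radius must be positive")

    @property
    def area(self):
        # The defined property area := pi*radius^2.
        return math.pi * self.radius ** 2

c = CircleNet(radius=2.0, center=(0.0, 0.0))
print(round(c.area, 2))  # 12.57
```

Note the loss in translation: the Python class fixes one direction of computation (radius in, area out), while the Net merely states the relationship between the variables.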
For a complete listing of pages in this part of my site by topic see [ home.html ]
For a more rigorous description of the standard notations see