Thu Aug 30 12:37:15 PDT 2012

    Probability

      Wikipedia has some excellent pages on the traditional theories of probability. This page has some non-traditional takes on probability theory, plus some useful formulas.

      It has notes on a particularly simple approach that adds a kind of division operator to symbolic logic. The alternative is the modern mathematical theory of [ Measure Theory ] (below), which includes probability as a special case.

      Theories

        Standard Axiomatic Theory of Probability

        This is a MATHS approximation to the normal theory -- just some syntax and some axioms, without getting into the semantics: is probability a measure of belief or the limit of a frequency?

        This is under construction. Contact me with corrections.... Aug 28th 2012

      1. Good_Probability::=following
        Net
          This comes from page 34 of Good50...

        1. For p:wff, Pr(p)::Positive & Real=the probability of p being true.
        2. Pr(coin came up heads when tossed) = 0.5.
        3. Pr(coin came up tails when tossed) = 0.5.


        4. |- (B2): For p,q: wff, if Pr(p and q) = 0 then Pr(p or q) = Pr(p)+Pr(q).
        5. |- (B3): If (if p then q) then Pr(q) >= Pr(p).
        6. |- (B4): Pr(true) <>0
        7. |- (B5): for some p, Pr(p) = 0.


          (Conditional Probability):

        8. For p,h:wff, if Pr(h)<>0, Pr(p/h)::Real=Pr(p and h)/Pr(h), the probability of p, given h.
        9. Pr(coin came up heads/coin tossed) = 0.5.
        10. Pr(coin came up tails/coin tossed) = 0.5. [ Conditional_probability ]


        (End of Net Good_Probability)

        Probability as an extension of Logic

        Professor George published the following as part of his textbook on logic and cybernetics.

        Source: George 77, Frank George, Precision, Language, and Logic, Pergamon Press, NY NY, p. 91

        I've chased the approach back to John Maynard Keynes in the 1920's via [RamseyFP60]!

        It has the advantage of growing algebraically out of the propositional calculus - almost as if it added a division operator to the set of logical operators. The main disadvantage is that some theorems and axioms (examples: P4, P5, and P6) are more complex because they have to be expressed using a fraction.

        It is also a formal theory and so does not worry about what we mean by "Probability". It just has the rules and assumptions a rational person would be forced to adopt for giving values to propositions in a self-consistent way.

      2. Georgian_Probability::=
        Net{
        1. For p,h:wff, p/h::Real=the probability of p, given h.
        2. coin came up heads/coin tossed = 0.5.
        3. coin came up tails/coin tossed = 0.5.

          Note: p/h <> h/p! [ Conditional_probability ]


        4. |- (P1): For p,h:wff,0<=p/h<=1,
        5. |- (P2): For p,h, if (if h then p) then p/h=1,
        6. |- (P3): For p,h, if (if h then not p) then p/h=0.
        7. |- (P4a): For p,q,h:wff, (p and q)/h = (p/h)*(q/(p and h)) = (q/h)*(p/(q and h)),
        8. |- (P4b): For p,q,h:wff, (p or q)/h=p/h+q/h-(p and q)/h,
        9. (above)|- (P5): p/(q and h)=(p/h)*(q/(p and h))/(q/h),
        10. (above)|- (P6): if (if p then q) then p/(q and h) = (p/h)/(q/h).

          Notation [ Serial operations in math_11_STANDARD ]

        11. (STANDARD)|-For n:Nat, p:wff^n, or(p) = p(1) or p(2) or ... or p(n).
        12. (STANDARD)|-For n:Nat, x:[0..1]^n, +x = x(1) + x(2) + ... + x(n).

          Local notational convenience/abuse of notation -- the and can be omitted.

        13. For p,h, p h::= p and h.

        14. For n:Nat, partition(n):: @(wff^n), sets of n-tuples of well formed formulas:
        15. |-For n:Nat, partition(n)={ p:wff^n || or(p) and for all i,j:1..n(if p(i) and p(j) then i=j)}.

          Compare the above with [ logic_31_Families_of_Sets.html#partitions ] , the set-theoretic model of partitions.


        16. (above)|- (Bayes): for p:partition(n), q,h:@, P:=map[i:1..n]((q/(p(i) and h))*(p(i)/h)), (for all i:1..n, p(i)/(q and h)=P(i)/+P).

        17. Yudkowsky_explains_Bayes_theorem::= See http://yudkowsky.net/rational/bayes.

          A useful function for calculating probabilities, that I name norm, which normalizes a tuple:

        18. For X:Finite_set, P:X>->Real & Positive, norm(P)::= map[x:X](P(x)/(+P)).

          For example, see Columbus_and_the_Birds and Software_testing.
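
          As a minimal sketch, here is norm in Python; representing the tuple P as a dict, and the function name norm, are my own choices for illustration.

            # norm: divide each positive weight by the total so the results sum to 1.
            def norm(P):
                """P maps each element of a finite set X to a positive real weight."""
                total = sum(P.values())
                return {x: w / total for x, w in P.items()}

            # Example: unnormalized weights 0.14 and 0.08 normalize to about 0.636 and 0.364.
            print(norm({"near land": 0.14, "far from land": 0.08}))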

        19. Columbus_and_the_Birds::=following
          Net
            This example is quoted in Polya's excellent book "How to Solve It" (see [ logic_20_Proofs100.html#Heuristic Syllogism ] ).


            1. If we are approaching land, we often see birds.
            2. Now we see birds.
            3. Therefore, probably, we are approaching land.

            So, we have something like this

          1. If we are approaching land, we often see birds.
            Table
            q\p          Near Land    Far from Land
            See Birds    0.7          0.1
            No Birds     0.3          0.9

            (Close Table)

            Suppose that we think that there is a 20% chance of being near land...
            Table
            Near Land    Far from Land    Total
            0.2          0.8              1.0

            (Close Table)

            Then, when we see birds, Bayes suggests that we calculate
            Table
            -            Near Land            Far from Land        Total
            Prior        0.2                  0.8                  1.0
            P[i]         0.7*0.2 = 0.14       0.1*0.8 = 0.08       0.22
            Normalize    0.14/0.22 = 0.64..   0.08/0.22 = 0.36..   1.0
            Post         0.64..               0.36..               1.0

            (Close Table)
            So, having seen birds, our belief that we are near land should roughly treble (from 0.2 to about 0.64).


          (End of Net)
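
          The same calculation can be sketched in Python; the priors and likelihoods are the ones in the tables above, and the variable names are mine.

            # Prior belief about being near land, and likelihood of seeing birds in each case.
            prior = {"near land": 0.2, "far from land": 0.8}
            likelihood_birds = {"near land": 0.7, "far from land": 0.1}

            # Bayes: multiply prior by likelihood, then normalize.
            unnormalized = {h: prior[h] * likelihood_birds[h] for h in prior}
            total = sum(unnormalized.values())          # 0.14 + 0.08 = 0.22
            posterior = {h: v / total for h, v in unnormalized.items()}
            print(posterior)   # {'near land': 0.636..., 'far from land': 0.363...}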

        20. Software_testing::=following
          Net
            Suppose we have a piece of software (h) that may be correct (p) or may have bugs (not p). We test the software and it may pass the test (q) or it may fail. Now the probability of the test passing or failing depends on whether the software has bugs:
            Table
            -        p      not p
            q        1      0.9
            not q    0      0.1

            (Close Table)
            We are pretty good at writing software so we put p/h = 0.9 and not p/h = 0.1.

            This means that we have a 0.9*1 + 0.1*0.9 = 0.99 chance of the test succeeding.

            Now, if the test succeeds, it should change the weight p/h according to Bayes:

          1. p/q h = (p/h * q/p h)/S,
          2. not p/q h = (not p/h * q/not p h)/S,
          3. S= (p/h * q/p h) + (not p/h * q/not p h).

            So

          4. p/q h = .9/S,
          5. not p/q h = 0.09/S,
          6. S=.99.

            So

          7. p/q h = .90909...
          8. not p/q h = .090909...

            Which means a successful test should improve our confidence that the software is correct by a small amount -- from 90% to about 91%. Of course, we cannot repeat the same test and get a similar improvement because the duplicated test is not independent. We might make the case that a series of random tests are independent and so our confidence in the software slowly tends towards 1.

            If you do the math, repeated independent tests tend towards convincing us that the software is perfect, but there is always a small doubt left behind. Worse, how do we know that the tests are independent...


          (End of Net)
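
          The following Python sketch repeats the update above for a run of tests that are assumed to be independent; the loop structure and names are illustrative only.

            # p/h: prior belief that the software is correct; q/p h = 1, q/not p h = 0.9.
            belief_correct = 0.9
            p_pass_if_correct = 1.0
            p_pass_if_buggy = 0.9

            # Each passing test, assumed independent of the others, nudges the belief upward.
            for test in range(1, 11):
                s = belief_correct * p_pass_if_correct + (1 - belief_correct) * p_pass_if_buggy
                belief_correct = belief_correct * p_pass_if_correct / s
                print(test, round(belief_correct, 4))
            # After 10 passes the belief is roughly 0.96: it tends to 1 but never reaches it.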

          For more complex cases see BBN in my bibliography.

        21. For h, Independent(h)::@(wff, wff)= rel[p,q]((p and q)/h = p/h * q/h ).
        22. For h, disjoint(h)::@(wff, wff)= rel[p,q]((p and q)/h = 0 ).

          [click here [socket symbol] if you can fill this hole]


        }=::Georgian_Probability.

        Measure Theory

      3. MEASURE::=
        Net{
        1. Space:Sets=given, [ logic_31_Families_of_Sets.html ]
        2. Set::@@Space=measurable subsets of Space.
        3. Set::=given.
        4. Space and {} in Set.
        5. |-For A,B:Set, A|B and A&B in Set.

        6. measure::Set->Real [0..1]=given.

          Notice that not all subsets of the space are given a measure. Doing that leads to some paradoxes. Instead we have a Set of measurable subsets.


        7. |-For A,B:Set, measure( A | B )= measure(A) + measure(B) - measure(A & B).
        8. |-measure(Space)=1.0.
        9. |-measure({})=0.0.

        10. discrete::@=(Set=@Space).

        11. continuous::@=(for all a:Space(measure({a}) = 0.0) ).
        12. For A,B:Set, A independent B::@= ( measure(A & B) = measure(A) * measure(B) ).

          [click here [socket symbol] if you can fill this hole]


        }=::MEASURE.
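
        As an illustration only, a finite measure can be sketched in Python as a weight on each point, the measure of a set being the sum of its weights. This models the discrete case (Set = @Space) rather than the general axioms; the space and weights are assumptions of mine.

          # A discrete measure on a small space: every subset is measurable and
          # measure(A) is the sum of the point weights in A.
          space = {"a": 0.5, "b": 0.3, "c": 0.2}   # weights sum to 1, so measure(Space) = 1

          def measure(A):
              return sum(space[x] for x in A)

          A, B = {"a", "b"}, {"b", "c"}
          # Check the axiom: measure(A|B) = measure(A) + measure(B) - measure(A&B).
          assert abs(measure(A | B) - (measure(A) + measure(B) - measure(A & B))) < 1e-12
          print(measure(A | B))   # 1.0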

        Random Variables

        Notation - using the Power of MATHS to express functions without special variables...

        Teaching tends to present all random variables as real variables. All we need in MATHS, however, is a set and a MEASURE on it -- at least until I get to the PDF below.

      4. random_variable::=$
        Net{
        1. Values::Sets=given.
        2. Range::=Values.
        3. Set::=Values.
        4. measure::$ MEASURE(Set)=given.
        5. discrete::@.
        6. continuous::@= not discrete.


        7. |-if continuous then metric_space. [click here [socket symbol] if you can fill this hole]


        }=::random_variable.
      5. For X:random_variable, DF(X)::measure(X)=Probability Distribution Function.

      6. discrete_random_variable::=random_variable(discrete=true).


      7. |-For X:discrete_random_variable, p:@(X.Set), Pr( p(X) ) = measure({x:Set(X)|| p(x)}).
      8. |-For X:discrete_random_variable, op:{and, or, ...}, Pr( p(X) op q(X) ) = measure({x:Set(X)|| p(x) op q(x)} ).
      9. |-For X:discrete_random_variable, Pr( p(X) || h(X) ) = measure({x:Set(X)|| p(x) and h(x)})/measure({x:Set(X)|| h(x)}).
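
        A Python sketch of these equations for a finite discrete random variable, with the distribution stored as a dict of measures; the fair-die example and the function names are mine.

          # Distribution of a fair six-sided die: measure of each singleton {x}.
          die = {x: 1/6 for x in range(1, 7)}

          def Pr(p):
              """Probability that predicate p holds: measure of {x || p(x)}."""
              return sum(m for x, m in die.items() if p(x))

          def Pr_given(p, h):
              """Conditional probability: measure of the p-and-h set over the measure of the h set."""
              return Pr(lambda x: p(x) and h(x)) / Pr(h)

          print(Pr(lambda x: x % 2 == 0))                           # 0.5
          print(Pr_given(lambda x: x == 6, lambda x: x % 2 == 0))   # 1/3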

      10. continuous_random_variable::=random_variable(discrete=false).
      11. For X:continuous_random_variable,
      12. PDF(X)::measure(X)=Probability Density Function.

        ??{ Not easy to invent a generalization of the elementary case... and the library is closed...

      13. PDF(X) is a limit at X=x of the measure of a small ball surrounding x divided by the size of that ball.

        [click here [socket symbol] if you can fill this hole]
        }

    1. real_random_variable::=random_variable with Set=Real.
    2. For X:real_random_variable, PDF(X) = D DF(X).

      [click here [socket symbol] if you can fill this hole]

      Expected value

      Expected values are very useful. A typical example: if you have a 50% chance of winning $100 in a bet vs a 50% chance of losing $90, then your expected value will be
    3. 100*0.5 - 90 * 0.5 = 5.

      So the bet is worth making...

      I will use the notation expect(v) rather than the more common E[v].

      If you have a discrete random variable with distribution p and a function v that can be applied to the random variable and returns a real value then

    4. expect(v)::= +(v*p).

      If you have a continuous random variable with density p and a function v that can be applied to the random variable and returns a real value then

    5. expect(v)::= integrate(v*p).
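
      Both forms can be sketched in Python; the dict representation of the discrete distribution, and the crude midpoint integration for the continuous density, are illustrative assumptions of mine.

        import math

        # Discrete: expect(v) = +(v*p), summed over the values of the random variable.
        def expect_discrete(v, p):
            return sum(v(x) * p[x] for x in p)

        bet = {+100: 0.5, -90: 0.5}
        print(expect_discrete(lambda x: x, bet))   # 5.0, as in the betting example above

        # Continuous: expect(v) = integrate(v*p), approximated here by a midpoint sum.
        def expect_continuous(v, pdf, lo, hi, steps=100000):
            dx = (hi - lo) / steps
            return sum(v(lo + (i + 0.5) * dx) * pdf(lo + (i + 0.5) * dx) * dx
                       for i in range(steps))

        # The mean of the standard Gaussian is (approximately) 0.
        phi = lambda x: math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
        print(round(expect_continuous(lambda x: x, phi, -10, 10), 6))   # ~0.0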

      Expectations have simple algebraic properties and can (probably) be used as an alternative basis for a theory of probability.

    6. EXPECTATION::=following
      Net
      1. X:Sets.
      2. ...
      3. Values::@(X->Real).
      4. |-for a:Real, X+>a in Values.
      5. For v:Values, expect(v)::Real.
      6. For u,v,w: Values, a:Real.
      7. |-expect(u+v) = expect(u) + expect(v).
      8. |-expect(a * v) = a*expect(v).
      9. |-expect(a) = a.

      10. The probability of a set A:@X can be expressed as an expectation as long as the map if A then 1 else 0 fi is in Values:
      11. probability(A)::= expect(A+>1|(X~A)+>0).

        [click here [socket symbol] if you can fill this hole]


      (End of Net)
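
      As a concrete (and deliberately simple) model of this net, the following Python sketch takes expectations over a finite weighted sample space -- an assumption of mine, not part of the net -- and recovers probability as the expectation of an indicator map.

        # A finite sample space X with weights that supply the underlying expectation.
        X = {1: 1/6, 2: 1/6, 3: 1/6, 4: 1/6, 5: 1/6, 6: 1/6}

        def expect(v):
            """Expectation of a real-valued map v on X (linear, and expect(constant) = constant)."""
            return sum(v(x) * w for x, w in X.items())

        def probability(A):
            """Probability of A as the expectation of its indicator: A+>1 | (X~A)+>0."""
            return expect(lambda x: 1 if x in A else 0)

        print(probability({2, 4, 6}))   # 0.5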

      Mean value

      The mean value of a random variable is its expectation
    7. μ::= expect ( (_) ).

      Moments

      For r:1.., the r'th moment is the expected value of the r'th power of a random variable
      (where it exists):
    8. For r:Nat, μ[r] ::= expect( (_)^r ).

      Population Standard deviation and Variance

      These measure the spread of the distribution.
    9. variance::= μ[2] - μ[1]^2.
    10. sd::=√(variance).
    11. standard_deviation::=sd.

      Entropy

      A measure of the information conveyed by a typical event.
    12. H = expect (- lg(p) ), where p is the distribution or probability density function.
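
      A Python sketch computing the mean, moments, variance, standard deviation, and entropy of a small discrete distribution; the fair-die example is my own.

        import math

        die = {x: 1/6 for x in range(1, 7)}   # distribution p of a fair die

        def moment(r):
            """r'th moment: expected value of the r'th power."""
            return sum((x ** r) * p for x, p in die.items())

        mean = moment(1)                        # 3.5
        variance = moment(2) - moment(1) ** 2   # 35/12 ~ 2.9167
        sd = math.sqrt(variance)                # ~1.708
        entropy = sum(-p * math.log2(p) for p in die.values())   # log2(6) ~ 2.585 bits
        print(mean, variance, sd, entropy)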

      Bayesian

    13. P(p)::=Degree of belief associated with proposition p. [click here [socket symbol] if you can fill this hole]

      Frequentist

    14. F(p)::=Frequency with which an event turns up [click here [socket symbol] if you can fill this hole]

    . . . . . . . . . ( end of section Theories) <<Contents | End>>

    Classic Distributions

      [click here [socket symbol] if you can fill this hole]

      Discrete classics

        [click here [socket symbol] if you can fill this hole]

        Uniform

      1. For n:Nat, uniform::1..n->probability= 1/n. [click here [socket symbol] if you can fill this hole]

        Binomial

      2. For n:Nat, p:probability, q:=1-p, B::0..n->probability= fun[r](C(r,n)*p^r * q^(n-r)).

        Where

      3. C::=Number of combinations of (2nd) things taken (1st) at a time.
      4. C(r,n)::= n!/(r! * (n-r)!).

        Where

      5. n!::=factorial n.
      6. n!= n*(n-1)*(n-2)* ... * 2*1.
      7. 0!=1.
      8. For n>0, n!= (n-1)!*n. [click here [socket symbol] if you can fill this hole]
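
        As a check on the formulas above, a short Python sketch of C and the binomial distribution (Python's math.comb(n, r) computes the same value as C(r,n)); the example parameters n = 10, p = 0.3 are mine.

          import math

          def C(r, n):
              """Number of combinations of n things taken r at a time."""
              return math.factorial(n) // (math.factorial(r) * math.factorial(n - r))

          def binomial(r, n, p):
              """Probability of r successes in n independent trials with success probability p."""
              return C(r, n) * p ** r * (1 - p) ** (n - r)

          # The probabilities for all r sum to 1.
          print(sum(binomial(r, 10, 0.3) for r in range(11)))   # 1.0 (up to rounding)
          print(binomial(3, 10, 0.3))                           # ~0.2668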

        Negative Binomial

        [click here [socket symbol] if you can fill this hole]

        Poisson

      9. For m:Real, Poisson::Nat0->probability = map[r]( exp(-m)*m^r/r! ).
      10. P::=Poisson.
      11. Poisson(r)::=the probability of r events occurring when they are very unlikely to occur at a particular time or place, but there are a lot of times or places when they could occur.

        The classic and delightful example is the number of Prussian officers kicked to death by horses, per regiment per year. To some extent the number of goals in British professional soccer games is Poisson as well. Also the number of mistakes I made when typing (as measured in 1967) was Poisson. [click here [socket symbol] if you can fill this hole]
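
        A small Python sketch of the formula above; the mean of 0.6 events per year is an illustrative assumption, not the historical horse-kick figure.

          import math

          def poisson(r, m):
              """Probability of exactly r events when the expected number of events is m."""
              return math.exp(-m) * m ** r / math.factorial(r)

          # With an illustrative mean of 0.6 events per year, most years see no event at all.
          for r in range(4):
              print(r, round(poisson(r, 0.6), 4))   # 0.5488, 0.3293, 0.0988, 0.0198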

        Geometric

      12. For p:probability, q=1-p, G::Nat0->probability= p^(_)*q. [click here [socket symbol] if you can fill this hole]

        Hyper-Geometric

        [click here [socket symbol] if you can fill this hole]

      . . . . . . . . . ( end of section Discrete classics) <<Contents | End>>

      Continuous classics

        Pareto

        [click here [socket symbol] if you can fill this hole] The 80-20 law

        For x:[0..], Pareto(x)::= 1-(γ/x)**β.

        Weibull

        [click here [socket symbol] if you can fill this hole]
      1. Weibull(x)::=1 -exp(-(x/γ)**β).

      2. For Time t, defect_rate(t)::= N* a* _ * t**(a-1) * exp(_ * t**a).

        Exponential

      3. p(x) is proportional to exp(-x). [click here [socket symbol] if you can fill this hole]

        Gaussian or Normal

        When many small independent deviations are added up.

        The standard Gaussian has a mean of zero and a standard deviation of 1, and its PDF (symbolized by φ) is

      4. φ(x) = exp(-x^2/2) / √(2*π).

        In general, if the mean is m and the standard deviation is σ then the PDF is

      5. φ( (x-m)/σ )/σ.

        [click here [socket symbol] if you can fill this hole]
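
        A Python sketch of the standard and general Gaussian densities as defined above; the parameter names m and sigma follow the text.

          import math

          def phi(x):
              """Standard Gaussian PDF: mean 0, standard deviation 1."""
              return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

          def gaussian(x, m, sigma):
              """General Gaussian PDF with mean m and standard deviation sigma."""
              return phi((x - m) / sigma) / sigma

          print(round(phi(0), 5))              # 0.39894
          print(round(gaussian(5, 5, 2), 5))   # 0.19947, half the standard peak because sigma = 2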

        Log-Normal

        p(x) = if(x<0, 0, (1/(x*s*√(2*π)))*exp(-(ln(x)-m)**2/(2*s**2))), where m and s are the mean and standard deviation of ln(x). [click here [socket symbol] if you can fill this hole]

        Χ_squared

        The distribution of the sum of squares of independent standard normal variates. [click here [socket symbol] if you can fill this hole]

        Fisher's F

        The ratio of two Χ^2 variates, each divided by its degrees of freedom. [click here [socket symbol] if you can fill this hole]

        Student's t

        [click here [socket symbol] if you can fill this hole]

      . . . . . . . . . ( end of section Continuous classics) <<Contents | End>>

    . . . . . . . . . ( end of section Classic Distributions) <<Contents | End>>

    Applications

      [click here [socket symbol] if you can fill this hole]

    . . . . . . . . . ( end of section Applications) <<Contents | End>>

    Glossary

  1. wff::=expression(@), well formed formula.

. . . . . . . . . ( end of section Probabilities) <<Contents | End>>

Notes on MATHS Notation

Special characters are defined in [ intro_characters.html ] that also outlines the syntax of expressions and a document.

Proofs follow a natural deduction style that starts with assumptions ("Let"), continues to a consequence ("Close Let"), and then discards the assumptions and deduces a conclusion. Look here [ Block Structure in logic_25_Proofs ] for more on the structure and rules.

The notation also allows you to create a new network of variables and constraints. A "Net" has a number of variables (including none) and a number of properties (including none) that connect variables. You can give them a name and then reuse them. The schema, formal system, or an elementary piece of documentation starts with "Net" and finishes "End of Net". For more, see [ notn_13_Docn_Syntax.html ] for these ways of defining and reusing pieces of logic and algebra in your documents. A quick example: a circle might be described by Net{radius:Positive Real, center:Point, area:=π*radius^2, ...}.

For a complete listing of pages in this part of my site by topic see [ home.html ]

Notes on the Underlying Logic of MATHS

The notation used here is a formal language with syntax and a semantics described using traditional formal logic [ logic_0_Intro.html ] plus sets, functions, relations, and other mathematical extensions.

For a more rigorous description of the standard notations see

  • STANDARD::= See http://www.csci.csusb.edu/dick/maths/math_11_STANDARD.html

    Glossary

  • above::reason="I'm too lazy to work out which of the above statements I need here", often the last 3 or 4 statements. The previous and previous but one statements are shown as (-1) and (-2).
  • given::reason="I've been told that...", used to describe a problem.
  • given::variable="I'll be given a value or object like this...", used to describe a problem.
  • goal::theorem="The result I'm trying to prove right now".
  • goal::variable="The value or object I'm trying to find or construct".
  • let::reason="For the sake of argument let...", introduces a temporary hypothesis that survives until the end of the surrounding "Let...Close.Let" block or Case.
  • hyp::reason="I assumed this in my last Let/Case/Po/...".
  • QED::conclusion="Quite Easily Done" or "Quod Erat Demonstrandum", indicates that you have proved what you wanted to prove.
  • QEF::conclusion="Quite Easily Faked", -- indicates that you have proved that the object you constructed fitted the goal you were given.
  • RAA::conclusion="Reductio Ad Absurdum". This allows you to discard the last assumption (let) that you introduced.

    End