
Contents


    Process Algebras and Communicating Sequential Processes

      Introduction

        This set of notes is a work in progress. Slowly and steadily I'm placing more of it on the web. The reader will almost certainly benefit from studying my sources. Meanwhile I have scattered Hole links all over this page to invite you, the reader, to help me fill in the blanks.

        Process algebras are very high level models of complex systems or programs. Various actions can take place. A system can perform (or suffer?) a sequence of actions (a;b;c;...) and as a result it changes. It can choose between alternative actions (x|y|z...).

        Notation


        1. In the printed papers and books '+' and '·' (addition and multiplication) are used instead of '|' and ';', by the way. Systems can execute some actions in parallel and/or communicate. This I'll show as "<&>"; a single vertical bar is used in the literature. The overall behavior is modelled by the idea that the different actions are interleaved or merged ("<*>").
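
        To make these operators concrete, here is a minimal Haskell sketch (my own, not from any of the sources cited below) of process expressions as an abstract syntax; all the constructor names are invented for this illustration.

          -- Process terms over a type of atomic actions a.
          -- Seq is ';', Choice is '|', Comm is '<&>', Merge is '<*>'.
          data Proc a
            = Act a                     -- an atomic action
            | Seq (Proc a) (Proc a)     -- x;y : do x, then y
            | Choice (Proc a) (Proc a)  -- x|y : do x or y
            | Comm (Proc a) (Proc a)    -- x<&>y : x and y communicate (happen together)
            | Merge (Proc a) (Proc a)   -- x<*>y : interleave/merge x and y
            deriving Show

          -- Example: (a;b) | (c <*> d)
          example :: Proc String
          example = Choice (Seq (Act "a") (Act "b")) (Merge (Act "c") (Act "d"))

          main :: IO ()
          main = print example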

        We can distinguish detailed models [ math_76_Concurency.html ] like Petri Nets and Computation Graphs from the more abstract Process Algebras described below.

        Also see the following alternative models of complex dynamic systems: General Systems theory and automata: [ math_71_Auto...Systems.html ] , Systems algebras: [ math_72_Systems_Algebra.html ] , and theories of programs: [ math_75_Programs.html ]

        An informal introduction to Process Algebras can be found at [ Process_Algebra ] on the Wikipedia.

        I've recently found [Fokkink07] , a text on a newer version of ACP + data abstraction -- μ_CRL.

        Current developments

        By adding probabilities to Process Algebras Jane Hillston has opened up a rich vein of research [ ../samples/PEPA.html ] into using Process Algebras to help predict performance.

        Sources

        Process Algebras developed after Hoare's work on Communicating Sequential Processes (CSP) [Hoare81] and Milner's Calculus of Communicating Systems (CCS) [Milner80], both of which are still worth studying.

        Information on [ Basic Process Algebras ] and ACP (below) came from sources like:


        (BerKop90): J. A. Bergstra & J. W. Klop, pp 1-22 in J. C. M. Baeten (ed), Applications of Process Algebra, Cambridge University Press, 1990.


        (Vran97): J. L. M. Vrancken, The Algebra of Communicating Processes with Empty Process, Theoretical Computer Science, V177, pp 287-328, 1997.

        Probably the simplest of these schemes is LOTOS:

      1. LOTOS::=Language Of Temporal Ordering Specification. [ http://www.cs.stir.ac.uk/~kjt/research/well/ ] [ http://wwwtios.cs.utwente.nl/lotos/ ] [ Links ] etc. LOTOS is an international standard, ISO 8807:1989: [ CatalogueDetailPage.CatalogueDetail?CSNUMBER=16258 ] , so the following just gives a feeling for the system. Or else see [ LOTOS in MATHS ] below.

        LOTOS in MATHS

        1. LOTOS_MATHS::=following
          Net
          1. Process::Sets.
          2. Action::Sets.

            For P,Q,R,S:Process.

            For a,b,c,d:Action.

            For A,B:@Action.

          3. P-a->Q::@=P can be observed executing action a and then behaving like Q.
          4. Ready-signal->Process_signal.

          5. STOP::Process, takes part in no actions.
          6. |- (stop): for no a:Action, some P:Process( STOP-a->P).

          7. EXIT::Process, can only be seen to terminate.
          8. δ::Action=observable termination of a Process. The EXIT Process can only terminate (δ) and then STOP.
          9. |- (exit1): EXIT -δ->STOP.
          10. |- (exit2): for no a:Action~δ, some P:Process( EXIT-a->P).

            Process Descriptions

          11. a;P::Process= the action a prefixed to the process P.
          12. Ready = signal;Process_signal.

            Note. An expression like a;b;c;P has to be parsed as a;(b;(c;P)) because a;b is not an Action.

          13. P>>Q::Process=P disabled by Q. Note that the standard LOTOS notation is [> rather than >>.
          14. Working >> interrupt; Handle_interrupt.
          15. Normal >> exception; Catch_exception_and_continue.

            Note. I'm not sure of the associativity of >>: is P>>Q>>R the same as P>>(Q>>R) or (P>>Q)>>R? I don't know whether the meanings are the same either! [click here [socket symbol] if you can fill this hole]

          16. P|A|Q::Process=P and Q synchronizing on A.
          17. User|{give_data, get_data}|System.

            Note. Again I don't know the precedence of >> vs |...| or the associativity of |...|. [click here [socket symbol] if you can fill this hole]

            Structural Operational Semantics

            The semantics/meaning of LOTOS expressions can be given as rules describing the conditions for a transition P-a->Q to occur.

            Note. In MATHS we write X:-Y, Z to mean X if Y and Z, as in Prolog.


          18. |- (step): a;P-a->P.
          19. |- (nointerrupt): P>>Q-a->R>>Q :- a<>δ, P-a->R. Notice that Q can still interrupt P later...
          20. |- (termination): P>>Q-δ->R :- P-δ->R.
          21. |- (interrupt): P>>Q-a->R :- Q-a->R. Notice how Q overrides P.


          22. |- (alone1): P|A|Q-a->R|A|Q :- a not_in A|{δ}, P-a->R.
          23. |- (alone2): P|A|Q-a->P|A|R :- a not_in A|{δ}, Q-a->R.
          24. |- (shared): P|A|Q-a->R|A|S :- a in A|{δ}, P-a->R, Q-a->S.

            Note. I'm not sure whether expressions like P|{a,b}|(Q|{a,c}|R) allow P, Q, and R all to evolve with an action a at the same time. [click here [socket symbol] if you can fill this hole]
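
            Here is my own Haskell sketch of the transition rules above, read as a function from a term to its possible (action, successor) pairs. The constructor names (Prefix, Disable, Sync) and the treatment of δ in synchronization are my interpretation of the rules, not official LOTOS.

              type Action = String

              delta :: Action
              delta = "delta"    -- the termination action δ

              data Proc = Stop | Exit | Prefix Action Proc
                        | Disable Proc Proc        -- P >> Q
                        | Sync [Action] Proc Proc  -- P |A| Q
                        deriving Show

              step :: Proc -> [(Action, Proc)]
              step Stop          = []                                         -- (stop)
              step Exit          = [(delta, Stop)]                            -- (exit1), (exit2)
              step (Prefix a p)  = [(a, p)]                                   -- (step)
              step (Disable p q) =
                   [ (a, Disable p' q) | (a, p') <- step p, a /= delta ]      -- (nointerrupt)
                ++ [ (a, r)            | (a, r)  <- step p, a == delta ]      -- (termination)
                ++ step q                                                     -- (interrupt)
              step (Sync as p q) =
                   [ (a, Sync as p' q)  | (a, p') <- step p, a `notElem` (delta:as) ]  -- (alone1)
                ++ [ (a, Sync as p q')  | (a, q') <- step q, a `notElem` (delta:as) ]  -- (alone2)
                ++ [ (a, Sync as p' q') | (a, p') <- step p, (b, q') <- step q
                                        , a == b, a `elem` (delta:as) ]                -- (shared)

              -- The example below: Proc = Pdcc |{ctrlc}| Edcc
              pdcc, edcc, procExample :: Proc
              pdcc = Disable (Prefix "ping" Exit) (Prefix "ctrlc" Exit)
              edcc = Disable Exit (Prefix "ctrlc" Exit)
              procExample = Sync ["ctrlc"] pdcc edcc

              main :: IO ()
              main = mapM_ print (step procExample)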

            example


              Net
              1. Proc::= Pdcc |{ctrlc}| Edcc.
              2. Pdcc::= Ping>>CtrlC.
              3. Ping::=ping; EXIT.
              4. CtrlC::=ctrlc; EXIT.
              5. Edcc::=EXIT>>CtrlC.

                then we can show

              6. (above)|-Ping-ping->EXIT.


              7. (above)|-Pdcc-ping->EXIT>>CtrlC
              8. (above)|-Proc-ping->(EXIT>>CtrlC)|{ctrlc}|Edcc.

              (End of Net)

            . . . . . . . . . ( end of section example) <<Contents | End>>

            More on LOTOS: [click here [socket symbol] if you can fill this hole]

            ACTONE

            LOTOS combined with Abstract data types. [click here [socket symbol] if you can fill this hole]

          (End of Net LOTOS_MATHS)

        . . . . . . . . . ( end of section LOTOS in MATHS) <<Contents | End>>

        Communicating Sequential Processes

        1. CSP::=following,
          Net
          1. Process::Finite_sets.
          2. For P,Q,R,X,Y:Process.
          3. Event::Finite_sets.
          4. For x,y,z:Event.
          5. For A,B,C:@Event.

          6. Each Process has a set of events associated with it.
          7. α::Process->@Event.

          8. then::Event><Process -> Process.
          9. The above declaration does not capture one rule: we can only prefix a process with an event in that process's alphabet. a then P is only correct when a in α(P).
          10. |- (fixed_alpha): α(x then P) = α(P).

          11. (->)::=then. An arrow is the original notation: Hoare writes x->P for x then P.

            μ[X:A](P(X)) ::Process= the X(α(X)=A and X=P(X)).

          12. This is used to formulate (and clarify) recursive definitions.

          13. Example:
          14. CLOCK = tick then CLOCK
          15. iff
          16. CLOCK = μ[X:{tick}](tick then X).

            Mutually recursive definitions are allowed and have solutions as long as each LHS is defined once and only once. If there are choices (see Choice, next) then each alternative must have a different set of prefixes.
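
            A lazy functional language gives the flavour of μ directly: recursion in a definition plays the part of the fixed point. A tiny Haskell sketch (my own) of the CLOCK example:

              type Event = String

              data Proc = Prefix Event Proc    -- "e then P"

              -- CLOCK = tick then CLOCK, i.e. μ[X:{tick}](tick then X)
              clock :: Proc
              clock = Prefix "tick" clock

              -- the first n events of the (here unique) trace
              trace :: Int -> Proc -> [Event]
              trace 0 _            = []
              trace n (Prefix e p) = e : trace (n - 1) p

              main :: IO ()
              main = print (trace 5 clock)   -- ["tick","tick","tick","tick","tick"]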

            Choice

            The most general form is
          17. For B in @α(P), (x:B then P(x))::Process, choose an x in B and then follow a process P(x) that depends on the choice of x in B.
          18. For x,y:α(P), α(P)=α(Q), (x then P | y then Q)::Process = (z:{x,y} then if(z=x, P, Q)).
          19. |-α(x then P | y then Q) = α(P)=α(Q).
          20. Extended to (x then P | y then Q | z then R | ...),
          21. STOP::=(x:{} then P), no choice leads anywhere.
          22. |- (singleton): (x:{e} then P(x)) = e then P(e).

            The above is reminiscent of Gries's (later) notations: F(x:B. P. R) and Unity.

            Laws


          23. |- (id): ( (x:A then P(x)) = (y:B then Q(y)) ) iff (A=B and for all x:A( P(x)=Q(x) )).
          24. (STOP, id)|- (STOPunique): STOP <> (d then P).
          25. (id)|-if c<>d then (c then P)<>(d then Q).
          26. (above)|- (comm): (c then P | d then Q) = (d then Q|c then P).
          27. (id)|-(c then P)=(c then Q) iff (P=Q).
          28. If F(X) is a properly guarded expression then
          29. (Y=F(Y)) iff (Y=μ[X](F(X))).
          30. |-μ[X](F(X)) = F( μ[X](F(X)) ).

            We can show that every well-defined process is a function mapping its set of initially acceptable events into its future behaviors.
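
            Here is a small Haskell sketch (mine) of this reading of a process: a menu of initially acceptable events, each paired with the future behaviour it leads to. The vending machine and all the helper names are invented for illustration.

              type Event = String

              -- A process as its menu of acceptable first events and their futures.
              newtype Proc = Proc { accept :: [(Event, Proc)] }

              -- (x:B then P(x)): offer every event x in B, continuing with P(x).
              menu :: [Event] -> (Event -> Proc) -> Proc
              menu b p = Proc [ (x, p x) | x <- b ]

              -- STOP: no choice leads anywhere.
              stop :: Proc
              stop = Proc []

              -- A vending machine: coin then choc, over and over.
              vm :: Proc
              vm = menu ["coin"] (\_ -> menu ["choc"] (\_ -> vm))

              -- Does the process accept a proposed trace of events?
              accepts :: Proc -> [Event] -> Bool
              accepts _ []     = True
              accepts p (e:es) = case lookup e (accept p) of
                                   Just q  -> accepts q es
                                   Nothing -> False

              main :: IO ()
              main = print (accepts vm ["coin", "choc", "coin"], accepts vm ["choc"])
              -- (True, False)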

            Traces

            More "Real Soon Now"... [click here [socket symbol] CSP if you can fill this hole]


          (End of Net CSP)

        . . . . . . . . . ( end of section Communicating Sequential Processes) <<Contents | End>>

      . . . . . . . . . ( end of section Introduction) <<Contents | End>>

      Process Algebras

        Basic Process Algebras

      1. Basic_Process_Operators::=following,
        Net
        1. Original notation used multiplication and addition.
        2. Actions::Sets.
        3. A::=Actions, -- local abbreviation.
        4. For x,y,z:A, x,y,z are used for general actions.

          Choices:

        5. x | y::A= the process which chooses either to do x or to do y.
        6. |- (A1_2): abelian_semigroup(A,(|)).
        7. (-1)|-x|y = y|x.
        8. (-2)|-(x|y)|z = x|(y|z). Normally we omit parentheses.
        9. |- (A3): idempotent(|).
        10. (-1)|-x|x = x.

          Sequence:

        11. x;y::A= the process that first does x and then does y.
        12. |- (A5): semigroup(A,(;)).
        13. (-1)|-(x;y);z = x; (y;z). Normally we omit parentheses.

          Note: (;) has a higher priority than (|) in MATHS so we can omit parentheses

        14. (a;B) | C = a;B | C.

          However we do not assume that (;) distributes over (|)... until BPA, and even then only in the one-sided form of axiom A4 below.

          Note. I could declare (|) and (;) to be serial operators which would allow actions like

        15. |[i:1..10] a(i)

        16. We usually have a set of atomic steps which are used to describe more complex actions. For example in modelling a vending machine an atomic step might be insert money and another give change.
        17. Atomic_steps::@A.
        18. S::=Atomic_steps.
        19. For a,b,c:S, a,b,c are used for atomic steps.

          I'm not sure if the next property is very important. It seems intuitive but it is not usually stated in the literature.

        20. ATOMICITY::@=atoms can not be decomposed.
        21. |-ATOMICITY iff for no a:S, some x,y:A( a=x;y or a=x|y).


        (End of Net)

        BPA Operators used in BPA, SHARED_ACTIONS... below.

      2. BPA::=Basic_Process_Algebra.
      3. Basic_Process_Algebra::=Basic_Process_Operators with following,
        Net
          Making a choice and then acting is the same as choosing one of two sequences:
        1. |- (A4): for x,y,z:A( (x|y);z = (x;z)|(y;z) ).

          BPA does not specify full distributivity because assuming (z;x)|(z;y) = z;(x|y) would imply that (z;x)|(z;y) can start to execute z;x and then change its mind and execute z;y instead. This is called backtracking.

        2. CTM::@=closed term model,
        3. |-CTM iff A is_finitely_generated_by_applying {(|),(;)} to S. [click here [socket symbol] finitely_generated if you can fill this hole]

          One of the key reasons for studying the CTM is that we can prove results by induction. If a result is true of all atomic steps and is preserved however we assemble the processes, then the result is true for all possible processes.

          These results also suggest possible assumptions to make when we don't know that the terms are constructed by a finite process from known atomic steps/pieces. They can also be taken as axioms when the CTM is not assumed. An alternative model is based on labelled graphs.


        (End of Net)
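
          One payoff of the CTM is that a closed term can be put into a head normal form (a choice of action-prefixed terms) using A4 and A5, after which A1-A3 let us compare the alternatives as sets. Below is a Haskell sketch of this; it is my own construction with invented names. Terms with the same normal form are provably equal from A1-A5; I believe, but have not carefully checked, that the converse holds as well.

            import Data.List (nub, sort)

            -- Closed BPA terms over named atomic steps.
            data Term = Atom String | Term :|: Term | Term :.: Term

            -- Head normal form: a set of summands a;t (Nothing means the term is just a).
            newtype Norm = Norm [(String, Maybe Norm)]  deriving (Eq, Ord, Show)

            -- sequential composition of normal forms, using A4 and A5
            seqN :: Norm -> Norm -> Norm
            seqN (Norm xs) n =
              Norm (sort (nub [ (a, Just (maybe n (`seqN` n) mt)) | (a, mt) <- xs ]))

            norm :: Term -> Norm
            norm (Atom a)  = Norm [(a, Nothing)]
            norm (x :|: y) = let Norm xs = norm x; Norm ys = norm y
                             in Norm (sort (nub (xs ++ ys)))     -- A1, A2, A3
            norm (x :.: y) = seqN (norm x) (norm y)              -- A4, A5

            main :: IO ()
            main = do
              let a = Atom "a"; b = Atom "b"; c = Atom "c"
              print (norm ((a :|: b) :.: c) == norm ((a :.: c) :|: (b :.: c)))  -- True: A4
              print (norm (a :.: (b :|: c)) == norm ((a :.: b) :|: (a :.: c)))  -- False: no left distribution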

        BPA is used in DEADLOCK, BASIC_COMMUNICATION, COMMUNICATION, MERGE_AND_COMMUNICATION, ACP, SILENCE, ABSTRACTION, STANDARD_CONCURRENCY, BPA_EMPTY, below.

        Special Processes and Combinations

      4. DEADLOCK::=
        Net{

        1. |-BPA.
        2. deadlock::A.
        3. δ::=deadlock, the standard notation uses the Greek letter delta.
        4. ATOMIC_DELTA::@=δ in Atomic_steps.
        5. |- (A6): for all x:A( δ|x=x), -- processes avoid deadlock if they have the choice.
        6. |- (A7): for all x:A( δ;x=δ), -- however a process cannot proceed past a deadlock.

        }=::DEADLOCK.

      5. BASIC_COMMUNICATION::=
        Net{
          Original notation uses a vertical bar to show communication.
        1. |-BPA.
        2. x <&> y::A=the process in which x communicates with y. The idea is that the two actions must occur quasi-simultaneously. It is not necessary that information is actually transferred, just that two things (x and y) happen at one time.
        3. |- (C1_2): abelian_semigroup(A, <&>).

        4. HANDSHAKING::@=all communication occurs in pairs.
        5. |-HANDSHAKING = for all x,y,z ( x<&>y<&>z = δ ).


        }=::BASIC_COMMUNICATION.

      6. COMMUNICATION::=
        Net{

        1. |-BPA.
        2. |-DEADLOCK.
        3. |-BASIC_COMMUNICATION.

        4. γ::S><S->S|{δ}, defines which atomic steps can occur at the same time. The function γ gives the name of the action formed when a pair occur together; if that name is δ then the system cannot execute both actions at the same time without deadlock occurring.

        5. For all a,b:S, if γ(a,b)<>δ then a and b are two parts of a single action γ(a,b) else a and b will deadlock the system.
        6. |- (C3): For a:S, δ<&>a=δ.


        }=::COMMUNICATION.

      7. MERGE::=
        Net{
          Merging describes how a collection of parts evolve by interleaving different actions.


        1. |-BASIC_COMMUNICATION.

        2. For x,y, x<*>y::A=free merge of x with y,
        3. The original notation uses a double vertical bar (||) for <*>.

        4. For x,y, x>*>y::A=auxiliary notation: left merge, in which the first step comes from x.

          |- (CM1): For x,y, x<*>y = x>*>y | y>*>x | x<&>y.

        5. The original notation for left merge is a double vertical bar with an underscore on the right: "||_".
        6. |- (CM2): a>*>x=a;x.
        7. |- (CM3): a;x>*>y=a;(x<*>y).
        8. |- (CM4): (x|y)>*>z=x>*>z | y>*>z.

        }=::MERGE.
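
          To see what the merge operators do, here is a Haskell sketch (mine) that lists the traces of the free merge of two purely sequential processes, represented simply as lists of atomic steps. The communication summand x<&>y of CM1 is ignored here.

            -- x <*> y = x >*> y | y >*> x   (CM1, with the communication summand dropped)
            merge :: [a] -> [a] -> [[a]]
            merge [] ys = [ys]
            merge xs [] = [xs]
            merge xs ys = leftMerge xs ys ++ leftMerge ys xs

            -- a;x >*> y = a;(x <*> y)       (CM2, CM3): the first step comes from the left
            leftMerge :: [a] -> [a] -> [[a]]
            leftMerge []     _  = []
            leftMerge (x:xs) ys = map (x:) (merge xs ys)

            main :: IO ()
            main = mapM_ putStrLn (merge "ab" "c")
            -- abc
            -- acb
            -- cab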

      8. MERGE_AND_COMMUNICATION::=
        Net{

        1. |-BPA.
        2. |-COMMUNICATION.
        3. |-MERGE.
        4. |- (CM5): (a;x)<&>b=(a<&>b);x.
        5. |- (CM6): a<&>(b;x)=(a<&>b);x.
        6. |- (CM7): (a;x)<&>(b;y)=(a<&>b);(x<&>y).
        7. |- (CM8_9): distribute (<&>) over (|).

        }=::MERGE_AND_COMMUNICATION.

        Algebra of Communicating Processes

        The full algebra allows us to take a whole slew of actions and hide or encapsulate them.

      9. ACP::=Algebra_of_Communicating_Processes.
      10. Algebra_of_Communicating_Processes::=
        Net{
          Encapsulation removes (deadlocks) unwanted communication.
        1. |-BPA.
        2. |-DEADLOCK.
        3. |-MERGE_AND_COMMUNICATION.

        4. For H:@A, δ[H]::A->A~H.

          The operator δ[H] doesn't hide actions outside of H,

        5. |- (D1): For a:A~H, δ[H](a)=a.

          The operator δ[H] does hide actions in H,

        6. |- (D2): For a:H, δ[H](a)=δ.

          Encapsulation preserves sequence and choice while hiding some actions:

        7. |- (D3_4): δ[H] in ((;), (|)) A->(A~H).

        }=::Algebra_of_Communicating_Processes.

      11. SILENCE::=
        Net{

        1. |-BPA.

        2. τ::A=the silent step. ??{τ in A~S}?

          Axioms from Milner's CCS.

        3. |- (T1): x;τ=x.
        4. |- (T2): τ;x=τ;x|x
        5. |- (T3): a;(τ;x|y)=a;(τ;x|y)|a;x.
        6. |-COMMUNICATION.
        7. |- (TM1): τ>*>x=τ;x.
        8. |- (TM2): τ;x >*> y = τ;(x <*> y).
        9. |- (TC1_2): τ<&>x=x<&>τ=δ.
        10. |- (TC3): τ;x <&> y = x <&>y.
        11. |- (TC4): x<&>(τ;y)=x<&>y.

        }=::SILENCE.

        A system is typically an expression of the form <*>(x)=x[1]<*>x[2]<*>...<*>x[k] for some list x of subsystems (expressions).

      12. ABSTRACTION::=
        Net{
          Abstraction is a matter of silencing actions(?atomic steps?).
        1. |-BPA.
        2. |-SILENCE.

        3. For I:@S, τ[I]:: A->A~I, -- assuming I contains only atomic steps.
        4. |- (AT1): τ[I](τ)=τ.
        5. |- (AT2): For a:A~I, τ[I](a)=a.
        6. |- (AT3): For a:I, τ[I](a)=τ.


        7. |- (AT4_5): τ[I] in ((;), (|)) A->(A~I)

        }=::ABSTRACTION.
        ACP[τ]::= ACP with ABSTRACTION and following,
        Net
        1. (DT): δ[H](τ)=τ.

        (End of Net)
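
        Here is a Haskell sketch (my own, with invented constructor names) of the two hiding operators on finite terms: encapsulation δ[H] renames atomic actions in H to δ (D1-D4, DT) and abstraction τ[I] renames atomic actions in I to τ (AT1-AT5).

          -- Finite terms with the two special actions δ and τ.
          data Term = Atom String | Delta | Tau
                    | Term :|: Term | Term :.: Term
                    deriving Show

          -- δ[H]: deadlock the actions in H, leave everything else alone.
          encap :: [String] -> Term -> Term
          encap h (Atom a) | a `elem` h = Delta                  -- (D2)
                           | otherwise  = Atom a                 -- (D1)
          encap _ Delta     = Delta
          encap _ Tau       = Tau                                -- (DT)
          encap h (x :.: y) = encap h x :.: encap h y            -- (D3)
          encap h (x :|: y) = encap h x :|: encap h y            -- (D4)

          -- τ[I]: silence the actions in I.
          abstr :: [String] -> Term -> Term
          abstr _ Tau       = Tau                                -- (AT1)
          abstr i (Atom a) | a `elem` i = Tau                    -- (AT3)
                           | otherwise  = Atom a                 -- (AT2)
          abstr _ Delta     = Delta
          abstr i (x :.: y) = abstr i x :.: abstr i y            -- (AT4)
          abstr i (x :|: y) = abstr i x :|: abstr i y            -- (AT5)

          main :: IO ()
          main = do
            let t = (Atom "send" :.: Atom "work") :|: Atom "recv"
            print (encap ["send", "recv"] t)   -- unmatched halves of a handshake become δ
            print (abstr ["work"] t)           -- internal work becomes the silent step τ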

        Standard Concurrency Rules for ACP

      13. STANDARD_CONCURRENCY::=
        Net{
          ACP.
        1. |-<&> and <*> in associative(A) & commutative(A).
        2. |->*> in associative(A).
        3. |-(x<&>(a;y))>*>z=x<&>((a;y)>*>z).

        }=::STANDARD_CONCURRENCY.

        Expansion Theorem for ACP

      14. EXPANSION_THEOREM::=
        Net{

        1. |- (H1): ACP and HANDSHAKING.
        2. (H1)|-For k:3.., x:A^k, <*>(x) = |[i:1..k]( x[i] >*> <*>(((1..k)~{i});x) ) | |[i,j:1..k, i<j]( (x[i]<&>x[j]) >*> <*>(((1..k)~{i,j});x) ).

        }=::EXPANSION_THEOREM.

        Projection

      15. PROJECTION::=
        Net{

        1. |-ACP[τ].

          for n:1.., π[n]:: A->A.

        2. |- (PR1): π[1](a;x)=a.
        3. |- (PR2): π[n+1](a;x)=a;π[n](x).
        4. |- (PR3): π[n](a)=a.
        5. |- (PR4): π[n](x|y)=π[n](x) | π[n](y).
        6. |- (PR5): π[n](τ)=τ.
        6. |- (PR6): π[n](τ;x)=τ;π[n](x).

        }=::PROJECTION.
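
        A Haskell sketch (mine) of π[n] on finite closed terms. The last two clauses use A5 and A4 to rewrite a sequence whose first component is not atomic, so that the PR axioms can then apply; all the names are invented.

          -- Finite terms: atomic steps, τ, sequence, and choice.
          data Term = Atom String | Tau | Seq Term Term | Choice Term Term
            deriving (Show, Eq)

          -- π[n]: keep only the first n visible actions.
          proj :: Int -> Term -> Term
          proj _ t@(Atom _)    = t                                     -- (PR3)
          proj _ Tau           = Tau                                   -- (PR5)
          proj n (Choice x y)  = Choice (proj n x) (proj n y)          -- (PR4)
          proj n (Seq (Atom a) x)
            | n <= 1           = Atom a                                -- (PR1)
            | otherwise        = Seq (Atom a) (proj (n - 1) x)         -- (PR2)
          proj n (Seq Tau x)   = Seq Tau (proj n x)                    -- (PR6)
          proj n (Seq (Seq x y) z)    = proj n (Seq x (Seq y z))            -- A5
          proj n (Seq (Choice x y) z) = proj n (Choice (Seq x z) (Seq y z)) -- A4

          main :: IO ()
          main = print (proj 2 (Seq (Atom "a") (Seq (Atom "b") (Seq (Atom "c") (Atom "d")))))
          -- Seq (Atom "a") (Atom "b"), i.e. π[2](a;b;c;d) = a;b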

        A Topology

      16. TOPOLOGY_OF_PROCESSES::=
        Net{

        1. |-PROJECTION.

          for x,y, d(x,y) ::= 2^-n where n:=the least{n:Nat || π[n](x)<>π[n](y)}, and d(x,y)::=0 if there is no such n.

        2. |-METRIC_SPACES.
        3. |-d in ultra_metric(A).
        4. |-completion.
        5. Guarded recursion equations without abstraction:
        6. guarded means each recursive occurrence of a variable appears behind a guard, as in V=guard;(...V...) | ....

        7. τ is never a guard

        8. τ[I] is not used in the equations

        9. .Note Why_no_tau::=Net{ X=a;τ[{a}](X) } has infinitely many solutions {a;q || for some q, τ[{a}](q)=q}.

          Recursion definition principle


        10. |- (RDP): all guarded recursion equations without abstraction have a solution.

          Recursion specification principle


        11. |- (RSP): all guarded recursion equations without abstraction have no more than one solution.

          Approximation Induction Principle


        12. |- (AIP): A process is determined by its finite projections.
        13. For x,y, if for all n:1..( π[n](x)=π[n](y) ) then x=y.
          (Weak_AIP): all guarded recursion equations without abstraction have solutions determined by their finite projections.

        }=::TOPOLOGY_OF_PROCESSES.
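
        Reusing the projection sketch above (with τ omitted), here is how I would compute d on finite closed terms. It uses structural equality of projections as a stand-in for provable equality, so strictly it only gives an upper bound on the distance; everything here is my own illustration.

          import Data.List (find)

          data Term = Atom String | Seq Term Term | Choice Term Term
            deriving (Show, Eq)

          -- π[n] as in the Projection sketch above (τ omitted).
          proj :: Int -> Term -> Term
          proj _ t@(Atom _)           = t
          proj n (Choice x y)         = Choice (proj n x) (proj n y)
          proj n (Seq (Atom a) x)
            | n <= 1                  = Atom a
            | otherwise               = Seq (Atom a) (proj (n - 1) x)
          proj n (Seq (Seq x y) z)    = proj n (Seq x (Seq y z))
          proj n (Seq (Choice x y) z) = proj n (Choice (Seq x z) (Seq y z))

          -- projections of a finite term cannot change beyond its depth
          depth :: Term -> Int
          depth (Atom _)     = 1
          depth (Seq x y)    = depth x + depth y
          depth (Choice x y) = max (depth x) (depth y)

          -- d(x,y) = 2^-n for the least n whose projections differ; 0 if none does
          dist :: Term -> Term -> Double
          dist x y =
            case find (\n -> proj n x /= proj n y) [1 .. max (depth x) (depth y)] of
              Just n  -> 0.5 ^ n
              Nothing -> 0

          main :: IO ()
          main = do
            let [a, b, c] = map Atom ["a", "b", "c"]
            print (dist (Seq a (Seq b c)) (Seq a (Seq b b)))   -- differ at depth 3: 0.125
            print (dist (Seq a b) (Seq a b))                   -- 0.0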

        Alphabets

      17. ALPHABET_CALCULUS::=
        Net{

        1. |-ACP[τ].

        2. α::A->@S.
        3. |- (AC1): α(τ)=α(δ)={}.
        4. |- (AC2): α(a)={a}.
        5. |- (AC3): α(τ;a)=α(a).
        6. |- (AC4): α(a;x)={a}|α(x).
        7. |- (AC5): α(x|y)=α(x)|α(y).
        8. |- (AC6): α(x)=|[n:1..]α(π[n](x)).
        9. |- (AC7): α(τ[I](x))=α(x)~I.

          if α(x)<&>(α(y)&H) ==>H then δ[H](x<*>y)=δ[H](x <*> δ[H](y)).

          if α(x)<&>(α(y)&I) ={} then τ[I](x<*>y)=τ[I](x <*> τ[I](y)).

          if α(x)&H ={} then δ[H](x)=x.

          if α(x)&I ={} then τ[I](x)=x.

          δ[H1|H2] = δ[H1] o δ[H2].

          τ[I1|I2] = τ[I1] o τ[I2].

          if H&I={} then τ[I] o δ[H] = δ[H] o τ[I].


        }=::ALPHABET_CALCULUS.

        Koomen's Fair Abstraction Rule(KFAR)

        Gedanken experiment - someone flips a coin behind closed doors until a head occurs. Will they ever come out?
      18. .Example Vaandrager_s_room::=
        Net{
        1. coin_tosser::= flip;( tail; coin_tosser | head).
        2. Room::=τ[{flip,tail}](coin_tosser).
        3. if fair then Room = τ;head else Room=τ;head|τ.

        }=::Vaandrager_s_room.

      19. FAIR_1::=
        Net{

        1. |-ACP[τ].
        2. I:@S.
        3. i:I
        4. |- (KFAR_1): if x=(i;x | y) then τ[I](x)=τ;τ[I](y).

        }=::FAIR_1.

      20. LIVELOCK::=
        Net{

        1. |-ACP[τ] with FAIR_1.
        2. livelock::=(;)[i:Nat](τ).
        3. |-τ[{i}](Solution{X=i;X}) = livelock
        4. (above)|- (ld): if FAIR_1 then livelock=deadlock

          Proof of ld


          Let

          1. |-FAIR_1.
          2. (above)|-τ[{i}](Solution{X=i;X|δ}) = livelock,
          3. (KFAR_1)|-livelock=τ;τ[{i}](δ)=τ;δ.

          (Close Let )

        }=::LIVELOCK.

      21. FAIR_2::=
        Net{

        1. |-ACP[τ].
        2. I:@S.
        3. i1,i2:I
          (KFAR_2): if x1=(i1;x2 | y1) and x2=(i2;x1 | y2) then τ[I](x1)=τ;τ[I](y1|y2).

        }=::FAIR_2.

      22. FAIRNESS::=
        Net{
          ...
        1. Cluster Fair Abstraction Rule.
        2. V:@variables,
        3. I:@S.
        4. E:@equations:

        5. A cluster is a C:@V such that each eqn in E has the form
        6. X = |[c:C(X)]( i(c);X(c) ) | |[Y: X~~>Y] Y, for some C(X):@C,
        7. exits::=A~C

        8. conservative if all exits are reached from every C

          Then

        9. (above)|-τ[I](X)=τ; |[Y:exits(C)]τ[I](Y).


        }=::FAIRNESS.

        Empty Processes

        The idea that it helps to include an empty process that does nothing much comes from Karst Koymans and is worked out in detail by J L M Vrancken in the paper [Vran97] published in Theoretical Computer Science in 1997.
      23. BPA_EMPTY::=
        Net{

        1. |-BPA.

        2. empty::A.

        3. ε::=empty, -- standard notation.


        4. |- (E1): empty|empty = empty.
        5. |- (E2): empty; x = x.
        6. |- (E3): x; empty = x.

        }=::BPA_EMPTY.
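
        A tiny Haskell sketch (mine) of E1-E3 as a simplifier on finite terms: empty disappears from sequences and a choice of empty with itself collapses.

          data Term = Eps | Atom String | Seq Term Term | Choice Term Term
            deriving (Show, Eq)

          -- simplify using E1 (ε|ε = ε), E2 (ε;x = x) and E3 (x;ε = x)
          simp :: Term -> Term
          simp (Seq x y)    = case (simp x, simp y) of
                                (Eps, y') -> y'                 -- (E2)
                                (x', Eps) -> x'                 -- (E3)
                                (x', y')  -> Seq x' y'
          simp (Choice x y) = case (simp x, simp y) of
                                (Eps, Eps) -> Eps               -- (E1)
                                (x', y')   -> Choice x' y'
          simp t            = t

          main :: IO ()
          main = print (simp (Seq Eps (Seq (Atom "a") Eps)))    -- Atom "a"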

      24. BPA_DELTA_EMPTY::=
        Net{

        1. |-BPA_EMPTY.


        2. |-DEADLOCK.


        3. |- (DE1): δ|empty = empty, -- given the choice of doing nothing or deadlocking, a process avoids deadlock.
        4. (A7)|- (DE2): δ; empty = δ.


        }=::BPA_DELTA_EMPTY.

      25. PA_DELTA_EMPTY::=
        Net{
          Process algebra with deadlocks and empty processes.
        1. |-BPA_DELTA_EMPTY.
        2. <*>::serial(A)=merge.
        3. >*>::serial(A)=left merge.


        4. |- (PAD1): x<*>y = x>*>y | y>*>x.
        5. |-empty >*> empty = empty.
        6. |-empty >*> δ = δ.
        7. |-empty>*>(a;x)=δ.
        8. |-empty>*>(x | y) = (empty>*>x) | (empty>*>y).


        9. |-δ>*>x = δ.
        10. |-a;x >*> y = a;(x<*>y).
        11. |-(x|y)>*>z = x>*>z | y>*>z.

        }=::PA_DELTA_EMPTY.

      26. ACP_EMPTY::=
        Net{

        1. |-BPA_DELTA_EMPTY

        2. <&>::serial(A)=communication.
        3. <*>::serial(A)=merge.
        4. >*>::serial(A)=left merge.
        5. γ::S><S -> S|{δ}.


        6. |- (ACPDE1): x<*>y = x>*>y | y>*>x | x<&>y.
        7. |-empty >*> empty = empty.
        8. |-empty >*> δ = δ.
        9. |-empty>*>(a;x)=δ.
        10. |-empty>*>(x | y) = (empty>*>x) | (empty>*>y).


        11. |-δ>*>x = δ.
        12. |-a;x >*> y = a;(x<*>y).
        13. |-(x|y)>*>z = x>*>z | y>*>z.
        14. |-a<&>b = γ(a,b), much simpler than in ACP.


        15. |-x<&>y = y<&>x.
        16. |-empty <&>x = δ = δ<&>x.
        17. |-a;x<&>y = (a<&>y)>*>x.
        18. |-(x|y)<&>z = x<&>z | y<&>z.

          Compare with PAD1 above.


          Here we can replace an action by either empty or deadlock to hide it.

        26. For H:@A, r:{δ, ε}, r[H](x) ::A.
        27. |-r[H](empty)=empty.
        28. |-r[H](δ)=δ.
        29. |-for x:H~{empty,δ}, r[H](x) = r.
        30. |-for x:A~H, r[H](x) = x.
        31. |-r[H](x;y) = r[H](x);r[H](y).
        32. |-r[H](x|y) = r[H](x)|r[H](y).

          Vranken hints at proofs by induction of the following theorems:

        33. (above)|-if CTM then x>*>empty = x.
        34. (above)|-if CTM then x<*>empty = x.
        35. (above)|-if CTM then empty>*>x = empty or δ.
        36. (above)|-if CTM then (empty>*>x = empty iff x|empty=x).
        37. (above)|-if CTM then empty>*>(x>*>(y|z))= empty>*>(x>*>y) | empty>*>(x>*>z).
        38. (above)|-if CTM then empty>*>x;δ = δ.
        39. (above)|-if CTM then empty>*>(x|y)= δ.
        40. (above)|-if CTM then (empty>*>(x>*>y)= empty iff(empty>*>x=empty and empty>*>y=empty)).
        41. (above)|-if CTM then empty>*>(x>*>y)= empty >*>(x<*>y).
        42. (above)|-if CTM then ((empty>*>x)>*>y = empty iff ( empty>*>x=empty and empty>*>y=empty)).


        43. (above)|-if CTM then x;δ>*>y = (x>*>y);δ.
        44. (above)|-if CTM then y>*>x;δ = (y>*>x);δ.
        45. (above)|-if CTM then x;δ<&>y = (x<&>y);δ.
        46. (above)|-if CTM then x;δ<*>y = (x<*>y);δ.


        47. (above)|-if CTM then (x>*>y)>*>z = x>*>(y<*>z).
        48. (above)|-if CTM then (x<&>y)>*>z = x<&>(y>*>z).

          And so to the associativity of merge:

        49. (above)|-if CTM then associative(A, (<*>)).

        50. STANDARD_CONCURRENCY_WITH_EMPTY::=
          Net{
          1. Replace STANDARD_CONCURRENCY above in ACP.
          2. (above)|-if CTM then all following,
            • (x>*>y)>*>z = x>*>(y<*>z).
            • (y>*>x)>*>z = (x>*>y)>*>z.
            • (x<&>y)>*>z = x<&>(y>*>z).
            • z>*>(x<&>y)=(z>*>y)>*>x.
            • (x>*>y)<&>z = x<*>(z>*>y).
            • (y>*>x)<&>z = (y<&>z)>*>x.
            • (x<&>y)<&>z = x<&>(y<&>z).

          }=::STANDARD_CONCURRENCY_WITH_EMPTY.


        51. (above)|-EXPANSION_THEOREM.

          Vrancken also develops projection and priority operators.

          In place of Koomen's Fair Abstraction Rules KFAR_1 and KFAR_2 we have the Epsilon Abstraction Rule:

        52. EAR::@.
        53. EAR iff (if for all n( x[n]=a[n];x[n+1] | y[n] ) then empty[|a](x[0]) = |[n]empty[|a](y[n])).

          More [click here [socket symbol] if you can fill this hole]


        }=::ACP_EMPTY.

        Shared Actions

      27. SHARED_ACTIONS::=Basic_Process_Operators with following,
        Net
        1. For α:@Atomic_steps, x<α>y::Action, defines a system of processes x and y that share the actions in α.
        2. |- (S1): For all α, semigroup(Action, <α>).
        3. |- (S2): x<α>y<α>z = x<α>( y <α> z ).
        4. x <*> y::= x <{}> y, parallel composition.

          Again <α> could be declared to be serial.

          Semantics.

          To capture the semantics we need a relation expressing a single step of the evolution in a process:

        5. x -a-> y::= in one step, by action a, x can evolve into y.

          We can then provide the Structural Operational Semantics as a set of derivation rules like this:

        6. |- (S3): a;x -a-> x.
        7. |- (S4): If x -a-> y then (x|z) -a-> y.
        8. |- (S5): If x -a-> y then (z|x) -a-> y.

          In English the rule for shared actions is

        9. If x evolves thru a to x' and y thru a to y' then (
          1. x<α>y can evolve to x'<α>y iff a not in α and
          2. x<α>y can evolve to x<α>y' iff a not in α and
          3. x<α>y can evolve to x'<α>y' iff a ∈ α
          )
        10. If x -a-> x' and y-a-> y' then (
          1. x<α>y -a-> x'<α>y iff a not_in α and
          2. x<α>y -a-> x<α>y' iff a not_in α and
          3. x<α>y -a-> x'<α>y' iff a ∈ α
          )

          Definition of equivalence/bisimulation/ etc. leading to an algebra.


        (End of Net)
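
        Here is my own Haskell sketch of these rules as a step function on finite terms. The constructor names are invented, and the choice rules follow my reading above, in which the unchosen alternative is discarded.

          data Proc = Nil                      -- a process with no actions
                    | Act String Proc          -- a;x
                    | Choice Proc Proc         -- x|y
                    | Share [String] Proc Proc -- x<α>y
                    deriving Show

          step :: Proc -> [(String, Proc)]
          step Nil            = []
          step (Act a x)      = [(a, x)]                                       -- (S3)
          step (Choice x y)   = step x ++ step y                               -- (S4), (S5)
          step (Share as x y) =
               [ (a, Share as x' y)  | (a, x') <- step x, a `notElem` as ]     -- x moves alone
            ++ [ (a, Share as x y')  | (a, y') <- step y, a `notElem` as ]     -- y moves alone
            ++ [ (a, Share as x' y') | (a, x') <- step x, (b, y') <- step y
                                     , a == b, a `elem` as ]                   -- shared action

          -- Example: two parties that must synchronise on "handshake"
          main :: IO ()
          main = mapM_ print
            (step (Share ["handshake"]
                     (Act "prepare" (Act "handshake" Nil))
                     (Act "handshake" Nil)))
          -- only ("prepare",...) is possible first; the handshake waits for both sides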

        See Also

      28. abelian_semigroup::= See http://cse.csusb.edu/dick/maths/math_31.html#abelian_semigroup.
      29. commutative::= See http://cse.csusb.edu/dick/maths/math_31.html#commutative.
      30. completion::=METRIC_SPACES.
      31. idempotent::= See http://cse.csusb.edu/dick/maths/math_11_STANDARD.html#idempotent.
      32. METRIC_SPACES::= See http://cse.csusb.edu/dick/maths/math_92_Metric_Spaces.html.
      33. semigroup::= See http://cse.csusb.edu/dick/maths/math_31.html#semigroup.

      . . . . . . . . . ( end of section Process Algebras) <<Contents | End>>

    . . . . . . . . . ( end of section Process Algebras and Communicating Sequential Processes) <<Contents | End>>

    Notes on MATHS Notation

    Special characters are defined in [ intro_characters.html ] that also outlines the syntax of expressions and a document.

    Proofs follow a natural deduction style that start with assumptions ("Let") and continue to a consequence ("Close Let") and then discard the assumptions and deduce a conclusion. Look here [ Block Structure in logic_25_Proofs ] for more on the structure and rules.

    The notation also allows you to create a new network of variables and constraints. A "Net" has a number of variables (including none) and a number of properties (including none) that connect variables. You can give them a name and then reuse them. The schema, formal system, or an elementary piece of documentation starts with "Net" and finishes "End of Net". For more, see [ notn_13_Docn_Syntax.html ] for these ways of defining and reusing pieces of logic and algebra in your documents. A quick example: a circle might be described by Net{radius:Positive Real, center:Point, area:=π*radius^2, ...}.

    For a complete listing of pages in this part of my site by topic see [ home.html ]

    Notes on the Underlying Logic of MATHS

    The notation used here is a formal language with syntax and a semantics described using traditional formal logic [ logic_0_Intro.html ] plus sets, functions, relations, and other mathematical extensions.

    For a more rigorous description of the standard notations see

  1. STANDARD::= See http://www.csci.csusb.edu/dick/maths/math_11_STANDARD.html

    Glossary

  2. above::reason="I'm too lazy to work out which of the above statements I need here", often the last 3 or 4 statements. The previous and previous-but-one statements are shown as (-1) and (-2).
  3. given::reason="I've been told that...", used to describe a problem.
  4. given::variable="I'll be given a value or object like this...", used to describe a problem.
  5. goal::theorem="The result I'm trying to prove right now".
  6. goal::variable="The value or object I'm trying to find or construct".
  7. let::reason="For the sake of argument let...", introduces a temporary hypothesis that survives until the end of the surrounding "Let...Close.Let" block or Case.
  8. hyp::reason="I assumed this in my last Let/Case/Po/...".
  9. QED::conclusion="Quite Easily Done" or "Quod Erat Demonstrandum", indicates that you have proved what you wanted to prove.
  10. QEF::conclusion="Quite Easily Faked", -- indicates that you have proved that the object you constructed fits the goal you were given.
  11. RAA::conclusion="Reductio Ad Absurdum". This allows you to discard the last assumption (let) that you introduced.

End