
Contents


    Automata and System Theory

      Introduction

      Rather than the traditional computer science approach to automata theory (start with a tuple (Q, d, s0, F, ...)), I prefer to derive automata from simpler and more general models. The key idea is that of State. To this we add the rules by which states change. The most general one I've seen is the Labelled Transition System (LTS).

      A simple model is due to W Ross Ashby. I call it a Mechanism: it has states and transitions that may occur and change the states. Each state helps determine the next state. There is a set of valid transitions, each defining a map from the current state to the next one. A deterministic mechanism has a single transition function that determines a unique next state given the current state.

      The next complication (again following W Ross Ashby) is the State+Input+Transition+Output system or SITO. Here the system receives a string of inputs and produces a string of outputs. The next state is determined by the current input and state. The output depends solely on the current state and input.

      Automata are rather like SITO systems except that they have no output. Instead they classify the input sequence as acceptable or not: when the system ends up in one of a special set of states the input is accepted. These are useful in the theory of languages. Transducers (again Ross Ashby's term) have input, state, and output, but the output depends solely on the current state. When a Transducer has only a finite set of states, inputs, and outputs it is the equivalent of a Moore machine in computer science textbooks.

      [ General Systems Theory ] is another approach to dynamic modelling.

      All these models, so far, allow the number of states to be infinite and to have any structure. Classical models (FSA, TM, PDA, etc.) limit these.

      I plan some notes on a structured model of automata and a section will be added on Buchi Automata that model properties that hold on infinite input strings.

      In the 1990s, the use of message sequence charts to specify interactions between the parts of a system, or protocols, became popular. I'm including a section (Protocols) on a possible syntax with a couple of alternative forms.

      Basis

    1. Automata_and_System_Theory::=

      See Also

    2. Unary::= See http://www.csci.csusb.edu/dick/maths/math_15_Unary_Algebra.html#Unary.
    3. FFD::= See http://www.csci.csusb.edu/dick/maths/math_23_Flow_Diagrams.html#FFD.
    4. digraph::= See http://www.csci.csusb.edu/dick/maths/math_22_graphs.html#digraph.
    5. monoid_generated_by::= See http://www.csci.csusb.edu/dick/maths/math_33_Monoids.html#monoid_generated_by.

      State

      A system, machine, automaton, device, component, or object is typically in one of a number of states at any given time. The set of States determines the power of the machine, system, or automaton: more states give more power.

    6. States::=following,
      Net
      1. State::Sets=given. The traditional shorthand is to use Q for State.
      2. For q:State.
      3. finite::@= finite(State).

      (End of Net States)

      Notice that we can construct a set of states by combining other sets: for example we can imagine a finite program with 3 counters:

    7. State = {q0,q1,q2,q3,q4} >< (Nat0^3).

      We can add special states (an initial state for example) and special subsets of a set of States (final states for example).
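
      As an illustration outside the MATHS notation, here is a small Python sketch of the composite state set above; the class and field names are my own and purely illustrative.

        # A sketch of State = {q0,...,q4} >< (Nat0^3): a control component
        # plus three counters, with a "special subset" picked out by a predicate.
        from dataclasses import dataclass
        from typing import Tuple

        @dataclass(frozen=True)
        class CounterState:
            control: str                     # one of "q0".."q4"
            counters: Tuple[int, int, int]   # three values drawn from Nat0

        initial = CounterState("q0", (0, 0, 0))

        def is_final(s: CounterState) -> bool:
            return s.control == "q4"         # an example special subset of states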

      Labeled Transition Systems

      The most general model: a set of states and some labels. Each label may change the state. Some changes happen with no visible cause or effect; these are labelled τ.

    8. LTS::= following,
      Net

      1. |-States.
      2. Label::Sets=given
      3. invisible_action::Label, also known as a silent action.
      4. τ::=invisible_action.
      5. Action::=Label ~{τ}.

      6. finite::@= finite(State) and finite(Label).
      7. Transition_relations::Label->@(State, State)=given.
      8. initial::State.

      9. T:=Transition_relations, local short hand.

      10. For Label l, T(l) in @(State, State), is a relation,
      11. so expressions like T(l1);T(l2);T(l3) express sequences of changes.

      12. Possible_changes::@(State, State)= |Transition_relations.

      13. For s:State, Fail_in_state(s)::={l:Label. no s':State(s T(l) s')}.
      14. For s:State, Forbidden(s)::={l:Label. no s':State(s T(l) s')}.
      15. For s:State, Undefined(s)::={l:Label. all s':State(s T(l) s')}.

      16. Failures:: @(#Label >< @Label)= { l:#Label, f:@Label. for some s ( initial ;[l1:l]T(l1) s and f in Fail_in_state(s) ) }.

        [click here [socket symbol] if you can fill this hole]


      (End of Net LTS)
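
      As an illustration (not part of the MATHS model itself), the following Python sketch encodes a tiny LTS; the particular states and labels are assumptions made up for the example.

        # states, labels (including the silent label "tau"), and one transition
        # relation per label, each given as a set of (state, state) pairs.
        states = {"s0", "s1", "s2"}
        TAU = "tau"                      # the invisible/silent action
        T = {                            # Transition_relations: Label -> @(State, State)
            "a": {("s0", "s1")},
            "b": {("s1", "s2")},
            TAU: {("s2", "s0")},
        }
        initial = "s0"

        def fail_in_state(s):
            """Labels that cannot occur in state s (no successor under that label)."""
            return {l for l, rel in T.items() if not any(p == s for (p, _) in rel)}

        print(fail_in_state("s0"))       # {'b', 'tau'} in this example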

      Basic Mechanism

    9. Mechanism::=
      Net{

      1. |-States.
      2. Transitions::@(State^State)=given.

      3. Starting with Next:State->@State is another possibility.
        Table
        Example Mechanism 1
        State          0       1
        Transition[1]  0+>1    1+>0
        Transition[2]  0+>0    1+>1

        (Close Table)

        Table
        Example Mechanism 2
        State          Nat0
        Transition[1]  s+>2*s
        Transition[2]  s+>2*s+1

        (Close Table)
      4. deterministic::@= |Transitions|=1.


      5. (above)|-for all t:Transitions( (State, t) in Unary).
      6. can_change_to::@(State,State)=rel [a,b](for some t:Transitions(b=t(a))).
      7. |~::= can_change_to.
      8. Next::=map[s:State]{s'. s can_change_to s'}.
      9. δ:=Next.
      10. (above)|-FFD(Parts=>State, Flow=>(@,or,false), flows=>can_change_to).
      11. (above)|-digraph(Nodes=>State, Arcs=>can_change_to).
      12. Monoid::=monoid_generated_by(Transitions, (;)).


      }=::Mechanism.

    10. Transition_System::= Mechanism and Net{ initial : State ; }.
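
      To make the idea concrete, here is a hedged Python sketch of Example Mechanism 2 above: two transitions, each a total map on Nat0, and the derived Next map.

        # Transitions = { s +> 2*s, s +> 2*s+1 }; the mechanism is
        # nondeterministic because there is more than one transition.
        transitions = [lambda s: 2 * s, lambda s: 2 * s + 1]

        def next_states(s):
            """Next(s): the set of states that s can_change_to."""
            return {t(s) for t in transitions}

        deterministic = (len(transitions) == 1)   # False here

        print(next_states(3))    # {6, 7}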

    11. SITO::=
      Net{
      1. State and input determine next state and output.
      2. State::Sets=given,
      3. S:=State.
      4. Input::Sets=given,
      5. I:=Input.
      6. Transition::(S^S)^I=given,
      7. d:=Transition,
      8. Output::Sets=given,
      9. O:=Output,
      10. output_map::(O^S)^I=given,
      11. g:=output_map.

      12. There has been an extensive study of the properties of finite systems:
      13. finite::@.
      14. |-finite = finite(State) and finite(Input) and finite(Output).

      15. We can display a finite SITO in a table with one column per state, one row per input. The item in row i and column s shows d(i)(s)/g(i)(s).
        Table
        Example SITO 1
        Input\State   0      1
        1             1/0    0/1
        0             0/0    1/1

        (Close Table)


      16. (above)|-Mechanism(State=>State, Transitions=>img(Transition)).

        This implicitly defines can_change_to, Monoid, etc. in Mechanism.


      }=::SITO.
    12. State_determined_system::=SITO.
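
      As a concrete illustration, the finite SITO tabulated above (Example SITO 1) can be coded directly from its table; this is only a sketch, with d and g stored as dictionaries indexed by input and then by state.

        d = {1: {0: 1, 1: 0}, 0: {0: 0, 1: 1}}   # next state: d[i][s]
        g = {1: {0: 0, 1: 1}, 0: {0: 0, 1: 1}}   # output:     g[i][s]

        def run(s0, inputs):
            """Feed a string of inputs to the SITO, returning the string of outputs."""
            s, out = s0, []
            for i in inputs:
                out.append(g[i][s])
                s = d[i][s]
            return out

        print(run(0, [1, 1, 0, 1]))   # [0, 1, 0, 0]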

      Complex and Compound State Determined Systems

      Commonly, small SITOs are expressed as tables. But there are two alternatives: dynamic predicates and composition. A dynamic predicate that contains an input
    13. s' = Bit(s<>i), (for example) can express a transition relation or function and the output can be expressed as a simple equation:
    14. o = s.

      The above two equations describe the example above. This technique is very useful when the states, inputs, and outputs are numbers, vectors, or other values in known algebraic systems.

      Because the states, inputs, and outputs are defined to be sets we can use set expressions to describe complex systems. For example, if there are two inputs we would define the input set as the Cartesian product of two simpler sets. Similarly with multiple outputs. When the state is expressed as a Cartesian product as well we are expressing the system as a composition of simpler machines. We can then add constraints to couple the inputs and outputs... [click here [socket symbol] Composition of Systems if you can fill this hole]

      Transducers

      Ross Ashby defines a transducer as a SITO where the output function depends on the state only:
    15. TRANSDUCER::=SITO with following
      Net
      1. for all i0,i1:Input ( output_map(i0)=output_map(i1) ).

      (End of Net)

      Mealy Machines

    16. MEALY::=SITO and finite. [ Mealy_machine ]

      Moore Machines

    17. MOORE::=TRANSDUCER and finite. [ Moore_machine ]

      General Systems Theory

        A General dynamic system can be modelled using the calculus of relations: [ Paths and Trajectories in logic_41_HomogenRelations ] for example. Another approach is via a morphism from a group of Times into changes of state, or via a one-many map indicating the possible next states.

      1. General_dynamic_system::=following,
        Net

        1. |- (st): additive_group(Time).
        2. State::Sets.
        3. For t:Time, δ^(t) ::@(State, State).
        4. |- (s+): for all Time t1,t2, δ^(t1+t2) = δ^(t1); δ^(t2).
        5. |- (s0): δ^(0) = Id.
        6. |- (s-): for all Time t1, δ^(-t1) = /(δ^(t1)).

        (End of Net General_dynamic_system)

      2. additive_group::= See http://www.csci.csusb.edu/dick/maths/math_34_Groups.html#Special groups.

      3. Discrete_Dynamic_System::=following,
        Net
        1. State::Sets=given.
        2. δ::State->@State=given.
        3. finite::@,
        4. |-finite = finite(State).
        5. deterministic::@,
        6. |-deterministic = for all s:State, one δ(s).
        7. nondeterministic::@=not deterministic.
        8. infinite::=not finite.

        9. Finite discrete dynamic systems have been used to model complex software. There exist tools that will test whether or not a given model has desirable properties and so fits project requirements. These have been used with some success in industry. It is possible to express the key ideas with the theory presented here:

          Suppose we are interested in a property of some states we could model it as the subset P of states where it holds:

        10. For P:@State.
        11. For all P, AX(P)::@State={s. δ(s) ==> P}, -- In the next step P must be true.
        12. For all P, EX(P)::@State={s. δ(s) & P}, -- In the next step P can be true.
        13. (above)|-EX(P) = / δ(P).

          Notice that AX(P) and EX(P) are defined to be sets of States, and so are themselves properties. So we can write AX AX(P) -- P must be true in two steps -- and EX AX P -- the system can move to a state from which P must become true.

          From these we can define the 6 "modes" used in Computation Tree Logic (CTL):

        14. AG(P)::= P & AX( AG(P)), states where P is always true.
        15. EG(P)::= P & EX( EG(P)), states where there is a path where P is always true.
        16. AF(P)::= P | AX( AF(P)), states where P is entered on every path.
        17. EF(P)::= P | EX( EF(P)), states where some path enters P.

        18. AU(p,q)::= q | (p & AX(AU(p,q))), states where p is true until q becomes true on all paths.
        19. EU(p,q)::= q | (p & EX(EU(p,q))), states where, on some path, p is true until q is true.

        20. AU and EU both imply that q is ultimately true.


        21. (above)|-AG(P) = P & AX P & AX AX P &... = &[i:0..]AX^i(P)
        22. (above)|-EF(P) = P | EX P | EX EX P |... = |[i:0..]EX^i(P)

          If the system is finite then there are quite efficient algorithms and tools for doing the above calculations.


        (End of Net Discrete_Dynamic_System)
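
        As an illustration of the calculations above, here is a hedged Python sketch for a small finite Discrete_Dynamic_System; the example δ is made up, and EF and AG are computed as fixed points of the expansions given above.

          # delta maps each state to its set of possible next states;
          # P is the set of states where the property of interest holds.
          delta = {0: {1}, 1: {0, 2}, 2: {2}}
          states = set(delta)

          def EX(P):                      # states that CAN step into P
              return {s for s in states if delta[s] & P}

          def AX(P):                      # states that MUST step into P
              return {s for s in states if delta[s] <= P}

          def EF(P):                      # least fixed point of P | EX(_)
              old, new = set(), set(P)
              while new != old:
                  old, new = new, new | EX(new)
              return new

          def AG(P):                      # greatest fixed point of P & AX(_)
              old, new = None, set(P)
              while new != old:
                  old, new = new, new & AX(new)
              return new

          print(EF({2}))   # {0, 1, 2}: every state can eventually reach 2
          print(AG({2}))   # {2}: only state 2 is guaranteed to stay in {2}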

        [click here [socket symbol] if you can fill this hole]

      . . . . . . . . . ( end of section General Systems Theory) <<Contents | End>>

      Automata Theory

      1. BASIC_AUTOMATA::=following,
        Net
        1. State and input determine next state.
        2. State::Sets=given,
        3. S:=State.
        4. finite_state::@.
        5. |-finite_state iff finite(S).
        6. Input::Sets=given,
        7. I:=Input.
        8. finite_input::@.
        9. |-finite_input iff finite(I).

        10. finite::@=finite_state and finite_input.


        (End of Net BASIC_AUTOMATA)

      2. Deterministic_Automata::=BASIC_AUTOMATA with following,
        Net
        1. Transition::(S^S)^I=given,
        2. d:=Transition,


        3. (above)|-Mechanism(State=>State, Transitions=>img(Transition)).

          This implicitly defines can_change_to, Monoid, etc. in Mechanism.

        4. For i:#Input, n=length(i), s0, s[n]:State, s0 |=[i] s[n] ::@= for some s1,s2,...,s[n-1] (s0 d(i[1]) s1 d(i[2]) s2 ... d(i[n]) s[n]).

        (End of Net Deterministic_Automata)

      3. Automata::=BASIC_AUTOMATA with following,
        Net
        1. δ::@(S >< I >< S), transitions for each input.

        2. next::I->@(S,S).
        3. |-for all i, s, s', s next(i) s' iff (s, i, s') in δ.
        4. δ::@(S >< #I >< S).
        5. |-for all x,s,s', (s,x,s') in δ iff s (;next(x)) s'.
        6. (above)|-for s0,s1:S, (s0, (), s1) in δ iff s0=s1.
        7. (above)|-for s0,s':S, x,y:#I, (s0, x!y, s') in δ iff for some s1( (s0,x,s1) in δ and (s1, y, s') in δ).

        (End of Net Automata)

      4. Acceptor::=Automata with following,
        Net
        1. Initial::@State=given,
        2. Final::@State=given,
        3. Accepting::@State=Final.
        4. Language::@#Input= {i:#Input. for some s0:Initial, s1:Accepting(s0|=[i]s1) }

        (End of Net Acceptor)

      5. FSA::=Acceptor & finite.
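
        As an aside, here is a small Python sketch of an Acceptor with a finite set of states; the even-parity machine used here is an assumed example, with transitions stored as (state, input, state) triples.

          transitions = {("e", 0, "e"), ("e", 1, "o"), ("o", 0, "o"), ("o", 1, "e")}
          initial, accepting = {"e"}, {"e"}    # accepts words with an even number of 1s

          def accepts(word):
              """Is word in the Language? True iff some run from Initial ends in Accepting."""
              current = set(initial)
              for i in word:
                  current = {s2 for (s1, a, s2) in transitions if s1 in current and a == i}
              return bool(current & accepting)

          print(accepts([1, 0, 1]))   # True: two 1s
          print(accepts([1, 0, 0]))   # False: one 1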

        [click here [socket symbol] if you can fill this hole]

      6. PDA::=following, [click here [socket symbol] if you can fill this hole]
      7. TM::=Turing_Machine,
      8. Turing_Machine::=following, [click here [socket symbol] if you can fill this hole]
      9. Bounded::=following, [click here [socket symbol] if you can fill this hole]

      . . . . . . . . . ( end of section Automata Theory) <<Contents | End>>

      Eilenberg's ultimately abstract model

      [click here [socket symbol] if you can fill this hole]

      Automata on Infinite words

        Source

        After

        Thomas90

        • Automata on Infinite Objects
        • in Leeuwen90 pp133-187
          • Jan van Leeuwen(ed)
          • Handbook of Theoretical Computer Science -- Volume B. Formal models and Semantics
          • MIT Press
          • Cambridge MA/Elsevier Science Pub Amsterdam The Netherlands CR9209-0659
            1. Pfau Library(4th Floor) QA76.H279 1990

        Notation

      1. Alphabet::Finite_Sets,
      2. A::=Alphabet.
      3. #A::=finite words over A. [ math_62_Strings.html ]
      4. ω::=Nat, -- common notation.
      5. Aw::=Nat->A, -- infinite words over A.
      6. Aoo::=A* | Aw.
      7. empty::A*=().
      8. ε::=empty. For u,v,w:A*, U,V,W:@#A, α, β:Aw, L,L':@Aw.

        Subwords

      9. α(m..)::=map[i:Nat](α(i+m-1)).
      10. α(m..n)::=map[i:1..n-m+1](α(i-1+m)).

        Infinite quantifiers

        for infinite n(W(n))::= there exists an infinite number of n such that W(n).
        for infinite n(W(n))::= for all n, some n'>n (W(n')).
        for finite n(W(n))::= there exists only a finite number of n satisfying W(n).
        for finite n(W(n))::= for some n, no n'>n (W(n')).
      11. (above)|-for no n(W(n)) implies for finite n(W(n)).
        for some_finite n(W(n))::= for some n(W(n)) and for finite n(W(n)).

        Sets of words

      12. pref(W)::={u:#A. for some v:#A( u v in W )}.
      13. W^ω::={α:Aw. for some w:Nat0->W ( α = ![i:0..](w(i)) )}.
      14. arrow(W)::={α. for infinite n( α(0..n) in W)}.
      15. lim(W)::=arrow(W), -- alternative notation.
      16. W^δ::=arrow(W), -- alternative notation.
      17. For σ:Nat->S, In(σ)::={s:S. for infinite n(σ(n)=s)}, -- states that occur infinitely often in a sequence.
      18. infinity_set(σ)::=In(σ).

        Buchi Automata

        Buchi automata inherit all the structure of finite non-deterministic automata: inputs, states, initial_state, final_states, and transitions. However, the condition for accepting an infinite string of symbols is defined differently and effectively over-rides the definition for finite strings. A normal automaton accepts a finite string by entering a final state at the end of the string. An infinite string, by definition, has no end. So an infinite sequence of inputs is accepted when it allows the automaton to enter the set of final states infinitely often.
      19. BUCHI::=following
        Net

        1. |-AUTOMATA(nondeterministic, finite) hide(accepts, language).
        2. runs::@(Nat->states)={σ: Nat->states. σ(0)=initial and for all i(σ(i) can_become σ(i+1))}.

        3. successful::@runs={σ. In(σ) & final}.
        4. (successful, final in Finite)|-successful = { σ:runs. for some f:final, infinite n(σ(n)=f)}.
        5. For α:@(Nat->inputs), σ: Nat->states, α produces σ::= for all i:Nat( (σ(i),α(i), σ(i+1)) in transitions).
        6. language::@(Nat->inputs)={α. for some σ:successful( α produces σ)}.
           For w:A*, s |=[w] s' ::= for some σ( σ(0)=s and σ(len(w))=s' and for all i:0..len(w)-1( (σ(i), w(i), σ(i+1)) in transitions ) ).
           W[s,s']::={w:A*. s |=[w] s'}.
        7. |-For all s,s', W[s,s'] in regular_set.
        8. L:=language.
        9. |-L = |[s:final](W[initial,s].(W[s,s])^ω).

        (End of Net)

      20. Buchi_recognisable::@@(Nat->A)={L. for some BUCHI B(L=B.language)}.

        Examples

        A1:=the BUCHI(input:={a,b,c}, states:={q0,q1}, initial:=q0, final:={q0}, delta:={(q0,b,q0), (q0,c,q0), (q0,a,q1), (q1,a,q1), (q1,c,q1), (q1, b, q0)} ). A1.language = after any occurrence of a there is an occurrence of b.

        A1_bar:=the BUCHI(input:={a,b,c}, states:={q0,q1}, initial:=q0, final:={q1}, delta:={(q0,b,q0), [click here [socket symbol] if you can fill this hole] (q0,c,q0), (q0,a,q0), (q0,a,q0), (q1,a,q1), (q1,c,q1)} ).

      21. |-A1_bar.language= ~A1.language.

        A2:=the BUCHI(input:={a,b,c}, states:={q0,q1,q2}, initial:=q0, final:={q0,q1,q2}, delta:={(q0,b,q0), (q0,c,q0), (q0,a,q1), (q1,a,q1), (q1,c,q2), (q1, b, q2), (q2,c,q1), (q2,b,q1)} ).

      22. |-A2.language=between any two a's there are an even number of b's and c's.
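
        An infinite word cannot be fed to a program directly, but ultimately periodic words u.v^ω can be checked. The following Python sketch is an assumption-laden simplification: it works only for deterministic Buchi automata and tests whether a final state recurs in the repeating part of the run. A1 above happens to be deterministic, so it serves as the example.

          def accepts_lasso(delta, q0, final, u, v):
              """Does a deterministic Buchi automaton accept u.v^omega?"""
              q = q0
              for a in u:                       # read the finite prefix u
                  q = delta[(q, a)]
              seen, blocks = {}, []             # checkpoint state -> block index
              while q not in seen:
                  seen[q] = len(blocks)
                  visited = set()
                  for a in v:                   # read one copy of v
                      q = delta[(q, a)]
                      if q in final:
                          visited.add(q)
                  blocks.append(visited)
              # the run now cycles through blocks[seen[q]:] forever
              return any(blocks[i] for i in range(seen[q], len(blocks)))

          # A1 from the examples above:
          delta = {("q0", "b"): "q0", ("q0", "c"): "q0", ("q0", "a"): "q1",
                   ("q1", "a"): "q1", ("q1", "c"): "q1", ("q1", "b"): "q0"}
          print(accepts_lasso(delta, "q0", {"q0"}, "", "ab"))   # True: b follows every a
          print(accepts_lasso(delta, "q0", {"q0"}, "", "a"))    # False: a forever, no b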

        Results

        A language of infinite words is Buchi recognisable iff it is a finite union of sets U.V^ω where U and V are regular.

        The Buchi recognisable languages are closed under union, complement, intersection, and regular prefix.

      23. ω_regular::=Buchi_recognisable.

      . . . . . . . . . ( end of section Automata on Infinite Words) <<Contents | End>>

      Structured Automata

        Structured automata are something that I have developed. Their precursors (I believe) were the flowchart-like definitions of Turing Machines in Davis, a long while ago. Essentially the behavior of the machine is defined as a regular relational expression. They can be written like a program. Here are my notes (from the late 1980s) on how acceptors, generators, and filters can be modelled in MATHS. These notes also argue that the structure of the languages accepted and generated can determine the structure of the program.

        Data Flows

        Model a data flow as a variable with values that are strings of items. Let U=$ Net{..., x:#X, ..., y:#Y,...z:X,...,R}; then MATHS has the following relations defined for manipulating x and y as input and output streams:
      1. STREAMS::= See http://cse.csusb.edu/dick/math_62_Strings.html#STREAMS.


      2. |-STREAMS.


      3. (STREAMS)|-y:!e = (y'=y!e), -- Put e at end of y.
      4. (STREAMS)|-/(y:!x) = (y = y' ! x'), -- Take some input from the end of y (nondeterministic).
      5. (STREAMS)|-x?:y = (x ! y' = y and x in Y), -- Take first x from front of y.
      6. (STREAMS)|-x' ?:y ==> /(x!:y), -- Get the next item from y and put it in x.
      7. (STREAMS) |- S on x::= for some c:S, t:#X(x=(c!t)). -- Look ahead for string in S in front of x

        I will assume that each stream has a special item end that is used to indicate the end of the stream.

      8. end(x)::= (x = end), -- An explicit end marker, end not in X.

        Structured Acceptors

        Given a relation R:@(U,U) where U=${a:%A, i:#I,...} and A,I:Sets, then `x ->i(R a)` means that if R starts with arguments `a` then it will accept x as an input on i:
      9. ->i(R a)::@#I,
      10. ->i(R a)::={ x:#I || for some c ( (a=>a,i=>x! (end))R(i=>(), a=>c) )},
      11. ->(R a)::=->i(R a), c?::=c?:i,
      12. accepted_by(R)::=->i(R).
      13. (above)|-For a pair of relations R, S:@(U,U), all following
        1. accepted_by(R) & accepted_by(S) = accepted_by(R & S )
        2. accepted_by(R) | accepted_by(S) = accepted_by(R | S )
        3. accepted_by(R) accepted_by(S) = accepted_by(R ; S )
        4. # accepted_by(R) = accepted_by(do(R) ).

        Example

      14. even_parity::=#0 #( 1 #0 1 #0 ).
      15. |-even_parity = accepted_by (do 0?; do( 1?; do 0?; 1?; do 0?) ),

        Thus for any regular expression defining a language there is a similar relation specifying how to recognize it. This extends to context free grammars and MATHS dictionaries. By induction, if a set S in @#I is defined by a dictionary with definitions using the regular operations and & then there is an isomorphic dictionary that defines a relation that accepts S. Such dictionaries are fairly easy to program in most languages using Jackson's rules for handling look ahead or backtracking. Further, if we label the input statements and replace c? by `(state'=n; next=c)` we can derive a co-function that handles each character in turn, one input at a time [Jackson75].
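
        For example, the even_parity acceptor above translates almost line for line into code; the following Python sketch is my own, with each do-loop and each test on the input written as a loop or a test on an index into the word.

          def accepts_even_parity(word):
              """Acceptor for even_parity = #0 #( 1 #0 1 #0 ), structured like the relation."""
              i, n = 0, len(word)
              def skip_zeros():                     # do 0?
                  nonlocal i
                  while i < n and word[i] == "0":
                      i += 1
              skip_zeros()
              while i < n and word[i] == "1":       # do ( 1? ;
                  i += 1
                  skip_zeros()                      #      do 0? ;
                  if i >= n or word[i] != "1":      #      1? ;
                      return False
                  i += 1
                  skip_zeros()                      #      do 0? )
              return i == n

          print(accepts_even_parity("010010"))   # True: two 1s
          print(accepts_even_parity("0100"))     # False: one 1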

        Structured Generators

        Generators are input/output dual to acceptors. Given a relation R:@(U,U) with U=${a:%A, o:#O,...} and A,O:Sets, `x<-(R a)` means that if R starts with arguments `a` then it can generate x: (R a)o->::@#O, (R a)o->::={ x:#O || for some c' ( (a=>a,o=>())R(o=>x!(end), a=>c') ) },
      16. <-(R a)::=(R a)o->, (R a)->::=(R a)o->,
      17. c!=o:!c.

        Again there is a natural structural relationship between the set generated by a generator and the generator:

      18. generated_by(R)::=<-(R) -- the language generated by R.
      19. For a pair of relations R, S:
      20. <-(R) & <-(S) = <-(R & S )
      21. and <-(R) | <-(S) = <-(R | S )
      22. and <-(R) ; <-(S) = <-(R ; S )
      23. and # (<-(R)) = <-(do(R) )

        So, the structure of a generator can reflect the structure of a specification of the language. If the specification uses recursion to define the language then the equations indicate a simple nondeterministic generator for the language. By induction, if a set S in @#O is defined by a dictionary with definitions using the regular operations and & then there is an isomorphic dictionary that defines a relation that generates S.

        Further if we label the output statements and replace c! by (next=c; state'=n) we can derive a co-function that produces each character in turn, one at a time.

        Notice that if S is a generator and R an acceptor then

      24. |-(S)-> ; ->(R) = for some x ((S)-> x ->(R)),
      25. = for some x ((S)-> x and x ->(R)),
      26. = some( (S)-> & ->(R) ).

        Structured Filters

        A filter is a program with command line arguments A that accepts data on the standard input I and produces data on the standard output O. Given a relation R:@(U,U) with U:=${a:%A, i:#I, o:#O,...} and `A,I,O:Sets`, for x:#I, y:#O, a:#A, x ->i(R a)o->y::@(#I, #O)= for some c:#A ( (a=>a,o=>(),i=>x!(end))R(i=>(), o=>y!(end), a=>c) ). ->(R a)->::=->i(R a)o->.

        Informally x ->(R a)->y means that if R starts with arguments a and is supplied with input x then it can have produced y on its output when it has stopped. As always a relation does not connect every possible input with an output.

        The ideas of acceptor and generator defined above can also be defined for filters,

      27. accepted_by(R)::= | [a:A] pre( ->(R a)-> ),
      28. generated_by(R)::=| [a:A] post( ->(R a)-> ). Here is a familiar example:
      29. kitten::=(while(not end(i))( c'?:i; o:!c)).
      30. |-for all x:#I, x ->(kitten)-> x.
      31. |-->(kitten)-> = Id(#I)
      32. |- accepted_by(kitten)::= #I,
      33. |- generated_by(kitten)::=#I.
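
        For comparison, here is a minimal Python sketch of kitten as a real filter: it copies each item from the standard input to the standard output until the input ends, so its input/output relation is the identity.

          import sys

          def kitten(inp=sys.stdin, out=sys.stdout):
              for c in iter(lambda: inp.read(1), ""):   # "" marks the end of the stream
                  out.write(c)

          if __name__ == "__main__":
              kitten()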

        The following are approximations to commands in a well known operating system.

      34. a:%#char, fopen:#char->#char, I=O=char.

      35. echo::=with{w:#char} ( do (w'?:a; o:!w); a=(); o:!EOLN),

      36. cat::=with {f: #char; c: char; p: #char} (
        1. while( a<>() ) (
          1. f'?:a; (
            1. f="-"; kitten
            2. | f<>"-"; p'=fopen(f);
            3. while(not end(p)) (
              1. c'?:p; o:!c
              )
            )
          )
        )

        There is a structural result for filters. It relates the input-output relationship to the structure of the filter:

      37. For R,S:@(U,U),
      38. ( ->(R)-> ) | ( ->(S)-> ) = ( ->(R | S)-> )
      39. and ( ->(R)-> ) & ( ->(S)-> ) = ( ->(R & S)-> )
      40. and ( ->(R)-> ) before ( ->(S)-> ) = ( ->(R ; S)-> ).
      41. For X,Y:@(#I, #O), X before Y::= { ( i1!i2, o1!o2 ) | (i1,o1) in X and (i2,o2) in Y }.

        For example given

      42. input = #( odd_item even_item),
      43. output = #output_item, then we can combine them to output the even items or the odd items:
      44. input_output::=#( odd_item even_item_output_item),
      45. input_output::=#( odd_item_output_item even_item).

        For details see Jackson's work on JSP.

        Floyd and Jackson both give examples where the correspondences cannot form a grammar. However, for any pair of processes (R and S say) with i and o defined in the standard way,

      46. ->(R c)->;->(S d)-> specifies the behavior observed by two programs connected by McIlroy's pipe [Bell Labs 83, RitchieThompson74, Ritchie80, any good UNIX text]. Floyd, Jackson, Hoare and many others (see Chapter 2) have argued that this decomposition makes it possible to replace a complex sequential program with two or more simple programs.

        A Powerful MATHS Filter

        MATHS defines a powerful filter that applies a relation R to strings on the input and output: For R:@(#I, #O),
      47. filter_wrt(R)::= do( | [ (x,y):R ] (x'!:i & o:!y ) | (no x' in pre(R) and x'!:i; |[x:I] x'?:i )).

        This can be highly nondeterministic but simple cases do interesting things:

      48. filter_wrt( 0+>1 | 1+>0 ) -- complement bits
      49. filter_wrt( (_^2) ) -- square the input numbers
      50. filter_wrt( "a".."z" | "A".."Z" ) -- delete everything but letters
      51. filter_wrt( |[x:I] (#x x+>x) ) -- remove duplicates.
      52. filter_wrt( #char "CSP" #char EOLN ) -- select lines containing CSP.

        Thus filter_wrt seems to combine tr, grep, awk, and uniq from the UNIX system.

      . . . . . . . . . ( end of section Structured Automata) <<Contents | End>>

      Protocols

      A protocol is a useful tool for describing inter-process communications.

      A possible syntax

    18. protocol::=#message.
    19. message::=participant "->" participant ":" data semicolon.

      An alternative form of message syntax might be:

    20. participant "sends" message O(|"(" data ")") "to" participant.

      Note... it should be possible to generalize to structured protocols which use regular sets with messages as elements. Examples of defining the Language/action perspective protocol follow:

    21. lap(Alice, Bob)::=# following (
      1. Alice->Bob:request;
      2. Bob->Alice:promise;
      3. Bob->Alice:delivery;
      4. Alice->Bob:acceptance
      ).

    22. lap(Alice, Bob)::=#(Alice sends request to Bob; Bob sends promise to Alice; Bob sends delivery to Alice; Alice sends acceptance to Bob).
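
      As an illustration only, a trace of (sender, receiver, data) messages can be checked against lap(Alice, Bob) with a short Python sketch; representing messages as triples is my own assumption, not part of the syntax above.

        ROUND = [("Alice", "Bob", "request"),
                 ("Bob", "Alice", "promise"),
                 ("Bob", "Alice", "delivery"),
                 ("Alice", "Bob", "acceptance")]

        def follows_lap(trace):
            """True iff the trace is zero or more complete rounds of the protocol."""
            if len(trace) % len(ROUND) != 0:
                return False
            return all(msg == ROUND[k % len(ROUND)] for k, msg in enumerate(trace))

        print(follows_lap(ROUND))        # True: one complete round
        print(follows_lap(ROUND[:2]))    # False: the round is not completed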

      Any preferences or thoughts [click here [socket symbol] Protocols if you can fill this hole] ?

    . . . . . . . . . ( end of section Automata and System Theory) <<Contents | End>>

    Notes on MATHS Notation

    Special characters are defined in [ intro_characters.html ] that also outlines the syntax of expressions and a document.

    Proofs follow a natural deduction style that start with assumptions ("Let") and continue to a consequence ("Close Let") and then discard the assumptions and deduce a conclusion. Look here [ Block Structure in logic_25_Proofs ] for more on the structure and rules.

    The notation also allows you to create a new network of variables and constraints. A "Net" has a number of variables (including none) and a number of properties (including none) that connect variables. You can give them a name and then reuse them. The schema, formal system, or an elementary piece of documentation starts with "Net" and finishes "End of Net". For more, see [ notn_13_Docn_Syntax.html ] for these ways of defining and reusing pieces of logic and algebra in your documents. A quick example: a circle might be described by Net{radius:Positive Real, center:Point, area:=π*radius^2, ...}.

    For a complete listing of pages in this part of my site by topic see [ home.html ]

    Notes on the Underlying Logic of MATHS

    The notation used here is a formal language with syntax and a semantics described using traditional formal logic [ logic_0_Intro.html ] plus sets, functions, relations, and other mathematical extensions.

    For a more rigorous description of the standard notations see

  1. STANDARD::= See http://www.csci.csusb.edu/dick/maths/math_11_STANDARD.html

    Glossary

  2. above::reason="I'm too lazy to work out which of the above statements I need here", often the last 3 or 4 statements. The previous and previous but one statements are shown as (-1) and (-2).
  3. given::reason="I've been told that...", used to describe a problem.
  4. given::variable="I'll be given a value or object like this...", used to describe a problem.
  5. goal::theorem="The result I'm trying to prove right now".
  6. goal::variable="The value or object I'm trying to find or construct".
  7. let::reason="For the sake of argument let...", introduces a temporary hypothesis that survives until the end of the surrounding "Let...Close.Let" block or Case.
  8. hyp::reason="I assumed this in my last Let/Case/Po/...".
  9. QED::conclusion="Quite Easily Done" or "Quod Erat Demonstrandum", indicates that you have proved what you wanted to prove.
  10. QEF::conclusion="Quite Easily Faked", -- indicate that you have proved that the object you constructed fitted the goal you were given.
  11. RAA::conclusion="Reductio Ad Absurdum". This allows you to discard the last assumption (let) that you introduced.

End