Some of these are standard, well-known types and are described below. Then there are the types peculiar to the domain being studied; this set of types is called the basis of the domain. Finally there are the types that can be constructed from the basic and standard types to give structured types (sets, tuples, relations, ...). This collection of basic, standard, and constructed types is called the universe of discourse of the project.
The key discovery by Whitehead and Russell in their work on "Principia Mathematica" was that the idea of type could be used to avoid paradoxes, as long as it was a syntactic concept. This means that each type defines which expressions are valid or meaningful, as well as what meaning can be attached to them. The type system outlaws from the logic any discussion of expressions that involve contradictions in terms and paradoxes like the Russell set. Thus any theory of types is a metalinguistic activity and takes place in yet another universe of discourse.
Simply put: Each type is linked to a set of expressions and a set of meanings and connects them together. And each type does this in its own way. Types are the main way that MATHS is extended.
All types must define a non-empty set of objects -- a valid universe of discourse -- so that we can use the predicate calculus to reason about objects of that type.
(Alphabets):
(@):
(Char): A finite set of 256 characters used in MATHS.
(Unicode):
(Documents):
(Events):
(Numbers):
(Dimensions):
(Sets):
(Times):
(Types):
the Employee with SSN 123-45-6789
the[e:Employee](e.SSN="123-45-6789")
It seems wise to introduce -- as a partly-baked idea -- a standard way to designate objects of a type:
Employee("123-45-6789")
Here is a possible formalization:
So we might define, as an example,
By further defining functions, predicates, sets, relations, etc., we can easily model the types of objects in the real world. This conceptual model is an excellent guide to designing data bases, a natural ontology for the domain, and a source of ideas for classes in object-oriented programming.
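As a rough sketch of this idea, here is the Employee designation above modeled as a class plus a lookup keyed by SSN. All names here (the class, the registry, the sample data) are illustrative assumptions, not part of MATHS itself.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Employee:
    ssn: str
    name: str

# a small registry standing in for "the set of objects of type Employee"
_employees = {
    "123-45-6789": Employee("123-45-6789", "John Doe"),
}

def employee(ssn: str) -> Employee:
    """The unique Employee with the given SSN: the[e:Employee](e.SSN=ssn)."""
    return _employees[ssn]
```

The function `employee` plays the role of the standard designation `Employee("123-45-6789")`: a map from key values to the unique object so keyed.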
A Type is defined to have two key attributes:
Notice that a type T is not a member of a type.
An example
The elements of Char >< Number, for example, all have a single character and a single number in them.
The type @T has the set of all subsets of T as elements.
If D is a list of declarations then \/{D} is a discriminated union or co-product. There is also a longer form that uses the "Net" syntax.
A discriminated union like
Discriminated unions are useful for describing data bases and partial tuples. A possible semantics is found in the theory of co-products in Category Theory(CATEGORY).
Briefly a product defines a composite structure with maps from the composite to the components. A co-product has the reverse maps -- from each component into a unique element of the co-product.
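A minimal sketch of a discriminated union in Python (the component names `Cash` and `Card` are assumptions for illustration): each element of the co-product carries a tag, here the class itself, recording which injection produced it.

```python
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Cash:
    amount: float

@dataclass(frozen=True)
class Card:
    number: str

# the co-product of the two components
Payment = Union[Cash, Card]

def describe(p: Payment) -> str:
    # the tag (the class) discriminates which component p came from
    if isinstance(p, Cash):
        return f"cash {p.amount}"
    return f"card {p.number}"
```

The reverse maps of the co-product are the two constructors `Cash` and `Card`, each sending its component into a unique element of `Payment`.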
The meaning is to extract the declarations, ignore the constraints, and create a list D of declarations for the short form.
A type_quotient T/E where E:equivalence(T) is defined as any chosen isomorphism of the set of blocks of equivalent elements of T. Similarly for T/f, where f is a map from T into T2 (f:T2^T): the elements of T/f are the inverse images in T of elements of T2 -- for each t2:T2, { t:T. f(t)=t2}. See [ Quotient types ] below.
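A small sketch of T/f (function and variable names are illustrative assumptions): partition T into the inverse images of elements of T2 under f.

```python
def quotient_by_map(T, f):
    """Return the blocks { t in T : f(t) = t2 }, one for each t2 in f(T)."""
    blocks = {}
    for t in T:
        # t joins the block of all elements sharing its image under f
        blocks.setdefault(f(t), set()).add(t)
    return blocks

# Example: the integers 0..9 modulo 3 -- three blocks, one per remainder
blocks = quotient_by_map(range(10), lambda t: t % 3)
```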
Need to give formal semantics for coproducts, and quotients.... and tidy up. [click here types if you can fill this hole]
The idiom
Where T is some unused variable.
These can be defined by enumeration: Syntax: {a,b,c,...}, or by construction, since the union of two finite sets is finite and the intersection of two finite sets is finite.
Some theories of finiteness can be found at [ logic_6_Numbers..Strings.html ] discussing numbers.
Hence
As a generality, the type of an expression can always be determined (sometimes relative to certain assumed types) from the types of the parts and the operators that make it up. Even so, a modicum of overloading is allowed: a symbol can be defined to carry out different operations (and return different types) depending on the context. Such a definition is written: term::context=expression. For example, here are three different views of an assignment:
Standard examples of overloading:
(casts):
As a rule, an overloading can not take over an existing valid meaning in the given context. For example, (|) is already given a meaning for all sets and so can not be overloaded with a second, different meaning on sets.
Polymorphic and generic terms have definitions written like this:
where t(T) is some formula that contains T as a free variable. If the symbol T appears in the expression then it is interpreted as the universal set of a generic type T. In general T can be any list of symbolic variables. Completely generic terms ("macros") can be defined but are rare. If v1,v2,v3,... are previously undeclared variables then
For v1,v2,v3,..., M(v1,v2,v3,...) ::= e,
creates a function that applies to all types of operands and is short for
For T1, T2, T3,...:Types, v1:T1,v2:T2,v3:T3,..., M(v1,v2,v3,...) ::= e.
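The long form above can be sketched in Python's generics (names are assumptions for illustration): a `TypeVar` plays the role of the implicit "For T:Types" quantifier, and one definition is valid at every type of operand.

```python
from typing import List, TypeVar

T = TypeVar("T")

def pair(v: T) -> List[T]:
    # one polymorphic definition, applicable to all types of operands
    return [v, v]
```

Usage: `pair(3)` and `pair("a")` both use the single generic definition, instantiated at Number and String respectively.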
. . . . . . . . . ( end of section Introduction to Types) <<Contents | End>>
Note. If T is this type then x in T.objects iff x has type T.
Symbols for types have two meanings. One occurs in a normal part of an expression, where the symbol represents either the set of objects or an identity map from objects to objects, depending on the context.
The other meaning is when the description identifies the type in a declaration.
EXPRESSION uses infixes(T) to define expressions(T).
Used in EXPRESSION.
Used in Proposition Calculus.
See Also Expressions. [ Expressions ]
(syntax_symbolizes_semantics): For all T:Types, e:T.elements, symbol(e.syntax, T, e.semantics).
(descriptions_symbolise_types): For all T:Types, symbol(T.description, Types, T).
(different_types_do_not_overlap): For all T1, T2:Types, T1=T2 iff some (T1.objects&T2.objects).
(parallels): For T:Types, P:T.parallels, PARALLEL(expressions(@), T.expressions, P.syntax, P.semantics).
(serials): For T:Types, s:T.serial, SERIAL(T.expressions, T.expressions, s, T.infix_pairs(s)).
or even
. . . . . . . . . ( end of section Theory of Types) <<Contents | End>>
The definition of the type '@' in terms of type theory is an interesting test case of the system, and is rather like attempting to build the foundation of a building on its roof. The inability to do this suggests limits to the system. The successful construction of a model of logic, on a theory of types, expressed in terms of that logic, establishes the power but not the consistency of the system.
representation | Type | meaning |
---|---|---|
not | @->@ | true+>false|false+>true, |
and | @><@->@ | (true,true)+>true |+> false, |
or | @><@->@ | (false,false)+>false |+> true, |
iff | @><@->@ | {(true,true),(false,false)}+>true |+> false, |
if_then_ | @><@->@ | (true,false)+>false |+> true, |
in | T><@T->@ | map x,X(x.X), |
= | T><T->@ | map x,y((x,y).(equality(T))), |
all | @^T->@ | map F:@^T(img(F)=true), |
==> | (@^T)><(@^T)->@ | map [A,B] (all(map x(if x.A then x.B))). |
... |
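The table above reads each connective as a map between the types shown. A small sketch in Python, using dicts as finite maps (names are assumptions); the "|+> false" entries in the table give the default value for any argument pair not listed.

```python
# not : @ -> @
NOT = {True: False, False: True}

# and : @><@ -> @  -- only (true,true) maps to true, default false
AND = {(p, q): p and q for p in (True, False) for q in (True, False)}

# iff : @><@ -> @  -- (true,true) and (false,false) map to true, default false
IFF = {(p, q): p == q for p in (True, False) for q in (True, False)}
```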
Consistency is demonstrated by exhibiting a finite model.
Another technique is to use the simplest (non-recursive) constructions.
We can prove that the set of data types leads to a consistent theory by constructing a model in terms of naive set theory and n-tuples. The data types are those generated from @ and Nat by operations of finite products and coproducts. The proof of relative consistency is done by using Codd normalizations. Model each type as a set of n-tuples of elementary objects. Replace T2^T1 by (T1><T2). The result is the Third Normal Form Model.
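The step of replacing a function type T2^T1 by pairs drawn from T1><T2 amounts to modeling a function by its graph, a set of (argument, value) pairs. A minimal sketch (names are illustrative assumptions):

```python
def graph(f, domain):
    """The relation { (t1, f(t1)) : t1 in domain } representing f as a table."""
    return {(t1, f(t1)) for t1 in domain}

# the squaring map on 0..3, flattened into a set of pairs (a Codd-style table)
square_rel = graph(lambda n: n * n, range(4))
```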
When defining new types recursion must not be combined with power and subset operators because these lead to paradoxes. Cantor Russell... [click here if you can fill this hole]
Care also needs to be taken when using generic operators (defined with an implicit type for convenience) in a way that makes the actual types involved explicit. This is the problem as found in C++ templates. In MATHS one example was the original definition of the function that returns the cardinality or size of a set:
So, this is true
The following expression
. . . . . . . . . ( end of section Meta-theory of types) <<Contents | End>>
Given a R:@(T1><T1) which is reflexive, transitive and symmetric (an equivalence relation) then there is a subset of @T1, symbolised T1/R, defined by T1/R::={X:@T1||for all x,y:X( x R y)}. Then there is a map (T1/R)^T1 from x:T1 to x/R:T1/R
We can define a relation, equivalence modulo f, by x = y wrt f ::= f(x)=f(y), which leads to the same "quotient type".
[ Notation] .
. . . . . . . . . ( end of section Deriving New types) <<Contents | End>>
We start by defining the set of fully parenthesized expressions (p_expressions) of a given type, and then state circumstances under which parentheses can be removed without change of meaning.
The above defines fully parenthesized expressions.
Example: "1+(2*3*4)+5" parenthesis_removal "1+2*3*4+5" and not "(1+2)*3" parenthesis_removal "1+2*3".
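The example can be checked directly, using Python's own evaluator, whose precedence here matches the usual arithmetic conventions (a convenient stand-in, not the MATHS parser itself):

```python
# safe removal: the parentheses agree with precedence, so the value is unchanged
assert eval("1+(2*3*4)+5") == eval("1+2*3*4+5")

# unsafe removal: the parentheses overrode precedence, so the value changes
assert eval("(1+2)*3") != eval("1+2*3")
```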
The semantics of MATHS expressions are trivial when formalized in MATHS. The correct meaning of a MATHS expression looks (in MATHS) like itself. Spivey has remarked on a similar phenomenon in the attempt to define Z in terms of set theory and set theory using Z. The denotation (the map from syntax into semantics) is the identity mapping. Such a circular definition can be shown to be useless for ignorant readers because ignorance is a trivial fixed point [Spivey 88]. Luckily, a complete definition is not essential for a human being to learn a language.
Future research can establish the semantics in terms of other languages - such as Z, Larch, .... The syntax and a possibly feasible way to parse expressions is a notational problem. [ Notation.12 Expressions ]
}=::EXPRESSIONS
. . . . . . . . . ( end of section Typed Expressions) <<Contents | End>>
paths, prime_links, links::(names(D))^( names(D)><names(D)). For T1,T2, prime_links(names(T1), names(T2))=names(F(T1,T2))| { "("! l1 ! ![i:2..pre(l)](","!l[i])! ")"||for some C:#D, f:F(T1,T2), l:#char(l=names o f and T2= ><C)}.
For T1,T2, links(names(T1), names(T2))=prime_links(T1,T2)|prime_links(T2,T1)
For n1,n2:names(D), paths(n1,n2)=links(n1,n2) |{ l!".."!p || for some n3:names(D), l:links(n1,n3), p:paths(n3,n2)}.
??|-paths(T1,T2) in regular_expression(|(links)).?
For T2,
It is easy to set up a map from paths into relations.
if rationalized then conceptual_model in DAG.
The above is sometimes called a functional dependency. These are worth noticing since they indicate the logical structure that underlies the original design.
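A sketch of checking a functional dependency key->data in a relation represented as a list of rows (the representation and all names are assumptions for illustration):

```python
def satisfies_fd(rows, key, data):
    """True iff whenever two rows agree on `key` they agree on `data`."""
    seen = {}
    for row in rows:
        k = tuple(row[a] for a in key)
        d = tuple(row[a] for a in data)
        # setdefault records the first data values seen for this key;
        # any later disagreement violates the dependency
        if seen.setdefault(k, d) != d:
            return False
    return True

rows = [
    {"ssn": "123", "name": "John", "dept": "CS"},
    {"ssn": "456", "name": "Jane", "dept": "CS"},
]
```

Here `ssn -> name` holds but `dept -> name` does not, since two rows share a dept yet differ in name.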
The process starts with a set S:@N and finds a k and d such that
For two lists of declarations D1, D2, Data{D1. D2}::=Net{D1}->Net{D2}.
Since D1 and D2 have no definitions or wffs then
If we are discussing a fixed set of variables and types with unique identifiers then this is abbreviated further -- suppose we have a DICTIONARY; DICTIONARY=Net{ v1:T1, ...} and let type(v[i])=t[i], then Data{ k1,k2,...,k[n]. d1,d2,..., d[m]} ::=@DICTIONARY(k1,k2,...,k[n])->(d1,d2,...,d[m]).
Open Example Data Base -- University
{ student(123-45-6789, "John Doe", ...),
. . . . . . . . . ( end of section Example Data Base -- University) <<Contents | End>>
Normalization is an analytical tool more than a design tool. The goal is to model part of the problem, not to work out detailed solutions. Design has to be concerned with performance, physical limitations, and ways to encode the necessary data. Relational data is a starting place for design and a useful user interface. Relations are simple to understand and use but tend not to use machinery efficiently. Relational retrieval systems are also limited in power[]. However, they are a high-powered analytical tool, separating an area into many simple relations. These relations are a canonical basis for design work.
One starts with samples, displays, or existing data structures. Abstraction starts by naming component data and their domains. Then a plausible key is postulated and the result normalized until all functional dependencies are represented by key->data relationships. The three-step process goes through an easy to recall pattern: the key, the whole key, and nothing but the key.
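One such normalization step can be sketched as follows (table and column names are invented for illustration): city depends on customer alone, not on the whole key, so it is split into its own relation.

```python
# an unnormalized sample: city repeats for every order by the same customer
unnormalized = [
    {"order": 1, "product": "tea", "price": 2, "customer": "Ann", "city": "York"},
    {"order": 2, "product": "jam", "price": 3, "customer": "Ann", "city": "York"},
]

# orders keep only the columns that depend on the whole key (order)
orders = [{k: r[k] for k in ("order", "product", "price", "customer")}
          for r in unnormalized]

# customer -> city becomes its own relation, with the redundancy removed
customers = {(r["customer"], r["city"]) for r in unnormalized}
```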
. . . . . . . . . ( end of section Formal model) <<Contents | End>>
The data on the list are
Now select a key: product_name samples>-- \/{
}
Here is the analysis of a receipt from a cashier at a local supermarket:
Extract repeated group of line items at the store.
data_base{
.Reference Wu 92, Margaret S. Wu, The Practical Need for a Fourth Normal Form, 23rd SIGCSE Tech Symp on CSEd 92, SIGCSE Bull V24n1(Mar 92)pp19-23
Let R be a subset of type T with maps x:T->X, y:T->Y, and z:T->Z. A multivalued dependency MVD(x,y,z) occurs iff, whenever two different tuples of R agree on x but differ in their y and z values, R also contains the two further tuples obtained by swapping their y values.
For Types T, R:@T,x,y,z:#variables(T),
An MVD(x,y,z) is trivial if y is subset of x (|y==>|x) or x and y form a complete set of variables of T (|(x!y)=variables(T)).
A set of relations in BCNF is in 4NF iff all MVDs are trivial.
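The MVD condition x ->> y can be sketched as a check on a relation of rows (the representation and the course/teacher/book example are assumptions for illustration): for any two rows agreeing on x, the row taking its y-part from one and everything else from the other must also be present.

```python
def has_mvd(rows, x, y):
    """True iff x ->> y holds in rows (z is implicitly the remaining columns)."""
    rowset = {tuple(sorted(r.items())) for r in rows}
    key = lambda r: tuple(r[c] for c in x)
    for r1 in rows:
        for r2 in rows:
            if key(r1) == key(r2):
                swapped = dict(r1)
                for c in y:        # r1 with its y-part replaced by r2's
                    swapped[c] = r2[c]
                if tuple(sorted(swapped.items())) not in rowset:
                    return False
    return True

# teachers and books of a course vary independently: MVD holds
rows = [
    {"course": "db", "teacher": "A", "book": "X"},
    {"course": "db", "teacher": "B", "book": "Y"},
    {"course": "db", "teacher": "A", "book": "Y"},
    {"course": "db", "teacher": "B", "book": "X"},
]
```

Dropping one of the four rows breaks the independence, and the check fails.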
. . . . . . . . . ( end of section Normalisation of data) <<Contents | End>>
...
. . . . . . . . . ( end of section Types as a Data Model) <<Contents | End>>
. . . . . . . . . ( end of section Types as Object-oriented objects) <<Contents | End>>
Proofs follow a natural deduction style that start with assumptions ("Let") and continue to a consequence ("Close Let") and then discard the assumptions and deduce a conclusion. Look here [ Block Structure in logic_25_Proofs ] for more on the structure and rules.
The notation also allows you to create a new network of variables and constraints. A "Net" has a number of variables (including none) and a number of properties (including none) that connect variables. You can give them a name and then reuse them. The schema, formal system, or an elementary piece of documentation starts with "Net" and finishes "End of Net". For more, see [ notn_13_Docn_Syntax.html ] for these ways of defining and reusing pieces of logic and algebra in your documents. A quick example: a circle might be described by Net{radius:Positive Real, center:Point, area:=π*radius^2, ...}.
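A rough Python analogue of the circle Net (the class name and field layout are assumptions): declared variables become fields, and a defined component like area:=π*radius^2 becomes a derived property.

```python
import math
from dataclasses import dataclass
from typing import Tuple

@dataclass
class Circle:
    radius: float            # radius : Positive Real
    center: Tuple[float, float]  # center : Point

    @property
    def area(self) -> float:
        # the defined component: area := pi * radius^2
        return math.pi * self.radius ** 2
```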
For a complete listing of pages in this part of my site by topic see [ home.html ]
For a more rigorous description of the standard notations see