I wrote a paper about best model reasoning that ends with the sentence: "...the main limitation in our ability to program an artificial intelligence will be our ability to parameterize ideal objects in the world that intelligence is supposed to encounter."
So now, moving on to the obvious questions about how language and semantics relate to the whole business of "best models", one needs to pause and ask: what world is the automatic language processor supposed to encounter? And how, in that world, are idealized objects parametrized? Well, I don't know yet, but I think it is worth thinking about a world of language structure populated both by natural text and idealized text, where the idealized text needs to be parametrized somehow. I am looking for "pre-narrative" semantic structures: a kind of atomic story, a kind of semantic metalanguage. The goal is a natural and explicit meaning structure, not mathematical tidiness, something that can be used to analyze and translate other language. Here is what I've got so far:
Nouns
- person (me; or things I lend me-ness to)
- thing
- place
There are single-word sentences like "thing!" but let us ignore that for now.
Note that person can become thing, almost routinely. Thing can become "personified" but it is less common. This does not need to be explicit for now because the syntactic context is unambiguous.
Verbs
There are semantic structures of the form
person go place
person want thing
person assigns value to thing
person see thing
thing in place
place contains thing
thing causes thing
thing acts on thing
Caveat: I am not sure whether other verbs are needed for person. Do we need a "make" verb?
Caveat: it is a bit artificial to construe a "person acting on thing" as something that requires the "person" to first be turned into a "thing" for the purposes of the analysis. I am taking the view that it is only the things which express personhood (indicators of what my son calls "agency") that require the meaning of a person. Otherwise the person functions semantically like a thing.
Adjectives
There are semantic structures of the form
thing has attribute
thing has thing (pretty similar to: place has thing)
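To make the inventory above concrete, here is a minimal sketch in Python of one way the noun sorts and verb/adjective structures might be parametrized. The encoding (an Entity with a sort, a Statement with a relation label) is just my own illustration, not part of the pre-narrative itself, and the sample entities borrow from the Grunk examples further down.

from dataclasses import dataclass

@dataclass(frozen=True)
class Entity:
    name: str
    sort: str    # "person", "thing" or "place"

@dataclass(frozen=True)
class Statement:
    relation: str    # "go", "want", "assigns_value", "see", "in",
                     # "contains", "causes", "acts_on", "has", "has_attribute"
    args: tuple      # the entities (or attribute words) the relation holds between

grunk = Entity("Grunk", "person")
ball = Entity("ball", "thing")
shower = Entity("shower", "place")

statements = [
    Statement("go", (grunk, shower)),            # person go place
    Statement("want", (grunk, ball)),            # person want thing
    Statement("in", (ball, shower)),             # thing in place
    Statement("has_attribute", (ball, "hot")),   # thing has attribute
]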
Structural elements
( ) groups compound entities into single ones, treated as nouns
[ ] expresses implicit or optional parts of the structure (this helps us be explicit)
:: transformation
, sequence
+ binding (an abbreviation of multiple "has", "in", or "contains" statements, or more?)
Allowed syntax
X with attribute Y
[ ] with attribute Y (for example, the sentence "hot!")
X acts on Y, [Y::Z]
X wants Z, [X acts on Y, [Y::Z]]
(H) + X (can also be written (H+X))
(H), X (can also be written (H, X))
X :: [X]+Y
other?????
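As one hypothetical way to pin down the structural elements, here is a small Python encoding of the brackets as nested tuples. The helper names (group, implicit, transform, sequence, bind) are mine; the point is only that each element can be written as an explicit constructor rather than punctuation.

def group(*parts):      # ( )  groups compound entities into a single noun
    return ("group",) + parts

def implicit(x=None):   # [ ]  marks an implicit or optional part
    return ("implicit", x)

def transform(x, y):    # ::   transformation of X into Y
    return ("transform", x, y)

def sequence(*steps):   # ,    one structure after another
    return ("sequence",) + steps

def bind(*parts):       # +    shorthand for several "has"/"in"/"contains" statements
    return ("bind",) + parts

def acts_on(x, y):      # the verb structure "X acts on Y"
    return ("acts_on", x, y)

# "X acts on Y, [Y::Z]"
pattern = sequence(acts_on("X", "Y"), implicit(transform("Y", "Z")))

# "(H) + X" and "(H), X"
bound = bind(group("H"), "X")
seq = sequence(group("H"), "X")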
Examples
Original text: Grunk takes ball into shower
Structure: ("Grunk"+"ball") go "shower" (an example of personification)
Original text: Grunk has ball in shower
Structure ("Grunk"+"ball") in "shower"
Original text: Grunk throws ball into shower
Structure: "Grunk" acts on "ball", "shower":: ("shower"+"ball")
Reverse examples. It is not the intention of this pre-narrative to be taken and used as a language, or as a playground for mathematical reasoning. The important point remains: how does something like this work in the context of best models? What are the measurements, ideal objects, and classifications to be made? But, as a dry exercise, let's look at some examples of syntax that may or may not be meaningful in the pre-narrative language.
What is the difference between (X::Y) and [X::Y]? The first expresses a transformation that has become a noun. The second is an implicit transformation - not at all the same thing. Can transformations become nouns? It seems like there are examples in English: "I watched my son's growth with amazement"
"I" see ("son"::(["son"]+"growth")), "I" :: (["I"]+"amazed")
This is a start. It is easy to get distracted with the details of this proto-language, and to forget its actual purpose - which is to provide an example of parametrized ideal sentence structures.
Update: This is really a proto-syntax.
Update II: No, it is a proto-semantics.