The theory of best model classification is essentially geometric, and it seemed so successful in that domain that there was a strong desire to apply it to other things, along with an impression that it could be applied to reading text. Luckily I work at a place that needed language automation as well as geometry automation, and I got to try out an approach informed by those ideas in the narrow world of custom part design.
But at home, I am trying to really understand what is involved. This requires pursuing the analogy in more detail. What space of points, and what kind of data fitting, would let a best model approach apply to text? That question drove the formulation of a proto semantics and its definition of narrative fragments, the "geometric objects" of this linguistics. I have yet to carry out the complete program, including a goodness-of-fit metric (something to do with the number of slots filled in a fragment).
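To make the analogy concrete, here is a minimal sketch, in C++, of what a narrative fragment and its goodness-of-fit might look like. The names (NarrativeFragment, fitScore) and the metric itself (fraction of slots filled) are illustrative assumptions on my part, not a finished piece of the proto semantics.

```cpp
#include <iostream>
#include <map>
#include <optional>
#include <string>

// Hypothetical sketch: a narrative fragment is a small "geometric object"
// whose slots (roles) get filled by words from the input text.
struct NarrativeFragment {
    std::string name;  // e.g. "actor-action-object"
    // role name -> word that filled it (empty until fitting assigns one)
    std::map<std::string, std::optional<std::string>> slots;

    // One possible goodness-of-fit: the fraction of slots that were filled.
    double fitScore() const {
        if (slots.empty()) return 0.0;
        int filled = 0;
        for (const auto& slot : slots)
            if (slot.second.has_value()) ++filled;
        return static_cast<double>(filled) / slots.size();
    }
};

int main() {
    NarrativeFragment f{"actor-action-object",
                        {{"actor", "machinist"}, {"action", "drills"}, {"object", std::nullopt}}};
    std::cout << f.fitScore() << "\n";  // prints 0.666667: two of three slots filled
}
```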
Even in college, after reading Bertrand Russell, I became convinced that the way ideas merge together was often more intrinsic to the type of idea than to the grammar that combined the words for those ideas. So "or" and "and" took their meaning largely from whatever was being juxtaposed. I was fascinated by how I cannot think of a square and a circle as the same object, but can easily think of a square and a redness as the same object. If I try to make a single object both red and green, the color splits in two. I still don't understand how the color and shape channels came to be what they are.
But although I do not understand this built-in logic that comes with our use of words and thoughts about things, there is a reasonable, practical way to use data fitting ideas with text, provided you narrow the topic enough. Then the word definitions are what you make of them, and they reside in your dictionaries and C++ class designs. Language recognition built in this way is real because the class definition is concrete and shared.
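As one hedged illustration of word definitions residing in a dictionary, here is a toy lookup table that assigns each known word the role it is allowed to play in a narrow world. Every word and role name here is invented for the example.

```cpp
#include <iostream>
#include <map>
#include <optional>
#include <string>

// Hypothetical narrow-world dictionary: each word the system knows is
// mapped to the role it can play in a narrative fragment.
enum class Role { Actor, Action, Object, Material };

const std::map<std::string, Role> kDictionary = {
    {"machinist", Role::Actor},  {"customer", Role::Actor},
    {"drills",    Role::Action}, {"orders",   Role::Action},
    {"bracket",   Role::Object}, {"flange",   Role::Object},
    {"aluminum",  Role::Material},
};

// Unknown words simply have no definition in this narrow world.
std::optional<Role> lookUp(const std::string& word) {
    auto it = kDictionary.find(word);
    if (it == kDictionary.end()) return std::nullopt;
    return it->second;
}

int main() {
    std::cout << (lookUp("bracket").has_value() ? "known" : "unknown") << "\n";   // known
    std::cout << (lookUp("ontology").has_value() ? "known" : "unknown") << "\n";  // unknown
}
```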
Thursday, March 27, 2014
Friday, March 21, 2014
Linguistics in the 20th century
Reading first some Chomsky and then, with diminishing respect, looking for something better at the dear old Concord Library and discovering Whorf, I find it interesting that so much of the former's energy was spent defeating academic adversaries rather than addressing the topics at hand.
It is too bad they lived in an age before it was possible to imagine teaching a computer to understand some limited part of a natural language. In narrow world language processing, the question is how to capture meaning and extract it automatically from examples of natural language. The idea is that if you have a sufficiently well defined "world" object, its member variables can set themselves. So you have to put your money where your mouth is and write software that fills in data objects from language samples, then do something with the objects. You have to confront the notion of word meaning at a mathematical level to do it successfully. I am trying to do that with best models and the proto semantics.
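A minimal sketch of that idea, with every name invented for illustration: a narrow-world "order" object whose member variables try to set themselves from a sentence, using a tiny hard-coded vocabulary in place of real parsing.

```cpp
#include <cctype>
#include <iostream>
#include <set>
#include <sstream>
#include <string>

// Hypothetical narrow world: a custom-part order. The object knows which
// words are allowed to fill which of its member variables; the vocabulary
// below is a toy stand-in for a real dictionary and parser.
struct PartOrder {
    std::string part;      // e.g. "bracket"
    std::string material;  // e.g. "aluminum"
    int quantity = 0;

    void fillFrom(const std::string& sentence) {
        static const std::set<std::string> kParts = {"bracket", "flange"};
        static const std::set<std::string> kMaterials = {"aluminum", "steel"};
        std::istringstream in(sentence);
        std::string word;
        while (in >> word) {
            if (kParts.count(word)) part = word;
            else if (kMaterials.count(word)) material = word;
            else if (std::isdigit(static_cast<unsigned char>(word[0]))) quantity = std::stoi(word);
        }
    }
};

int main() {
    PartOrder order;
    order.fillFrom("please send 12 aluminum bracket parts");
    std::cout << order.quantity << " " << order.material << " " << order.part << "\n";
    // prints: 12 aluminum bracket
}
```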
But the controversies of the 20th century did not go away. I hope Whorf would approve of my scheme of proto semantic shapes, filled with words having a native context for each speaker of the language. I hope he would be sympathetic to my view that many of our primary abstract words develop from childhood games that are culture specific. I wonder if he would also approve of my idea that the cultural differences get built on top of simpler meaning entities that may be universal.
For example, do not all cultures include "person", "place", "thing", "want", "ask", and action verbs, along with things like transformation, sequence, and grouping? Note that, for the purpose of programming a single language, it does not really matter whether these are universal.
But there is something to what Chomsky is saying that I feel is very true. The words in my mind are encoded with musculature - what you could call the phonemes [or is it morphemes?]. As I dozed off last night, the word "only" split off an "-ly", which brought a sensation of deep meaning. It would not be surprising at all if the concepts that use "-ly" were physically manifested in my brain as a thinking mechanism that includes those muscles. But that is the implementation of meaning, not the form or content of it. So yeah, Chomsky, meaning is there in the physical implementation of language, and the use of such muscles is critically important, in the same way my computer program uses silicon dioxide and larger objects like transistors to perform the operations I want. But it is the computer program that is of most interest, along with the elusive "meaning" we want to understand.
Separately, the algebraic rules of grammar are necessary for extracting subtleties of meaning. They apply during parsing and the ordering of the input into narrative structures. But how important are they? Aren't single concepts usually described with adjacent words? It is a tough subject.
Update: My son David mentions that Saussure wrote that the forms and sounds of words cannot be connected to their meanings. I agree completely. At the same time, a system that stores meanings in an efficient way will use a strong degree of parallelism between the word forms/sounds and the word meanings.
Saturday, March 8, 2014
How do words get their meanings?
[Obviously not a complete answer (from a doc I am writing):] No discussion of semantics can escape the central mystery of how words acquire their meanings and contexts. The narrative structures [of proto semantics] are simple by comparison and define roles for words independent of their native meanings. So let us acknowledge two kinds of word meaning: native meaning context, and role within a narrative structure. For the sake of discussion, I assume words acquire their native meaning context through repeated use in a single role within multiple occurrences of a narrative structure in varying physical contexts. That single role is a word's native role. The "mystery" of word learning is why some parts of the context are perceived as constant and become associated with the word, while other parts are perceived as varying and become part of the expected narrative usages. For example, using the word "sun" creates, for me, a vague picture of an outdoors with a sky and a specific picture of a sun. The vague picture is waiting to be clarified.
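As a hedged sketch of these two kinds of word meaning in code, here is a toy word entry that carries a fixed native role and accumulates the varying contexts it is seen in; the field names and the string-based representation are assumptions made for the example.

```cpp
#include <iostream>
#include <string>
#include <vector>

// Hypothetical sketch: a word has a native role (its slot in a narrative
// structure) plus a native meaning context built up from repeated use of
// the word in that role across varying physical contexts.
struct WordEntry {
    std::string word;
    std::string nativeRole;                 // e.g. the "object in the sky" slot
    std::vector<std::string> seenContexts;  // the varying contexts, accumulated over time

    // Each occurrence of the word in its native role contributes a context;
    // whatever stays constant across them is, on this account, the native meaning.
    void observe(const std::string& context) { seenContexts.push_back(context); }
};

int main() {
    WordEntry sun{"sun", "object in the sky", {}};
    sun.observe("bright afternoon, outdoors, blue sky");
    sun.observe("sunset over the lake, outdoors, orange sky");
    std::cout << sun.word << " seen in " << sun.seenContexts.size() << " contexts\n";
}
```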
Saturday, March 1, 2014
The Turing Test Asks the Wrong Question
The Turing Test, as I understand it, asks whether a computer program could ever fool a person in a blind exchange of statements and replies. I believe the answer is clearly "yes", such a program could exist. Even if the computer had much thinner, context-less definitions for words, real communication might still be possible. But that is the wrong question.
Instead, let's ask whether a computer program, dressed as an android, could be here with me and fool me in every way about what it is. If this includes my being able to mate with it, then as far as I am concerned it is a person, not a robot, and I am not fooled. [I guess that is the point they are making in "Blade Runner".]